After Cambridge Analytica, Trump, and Brexit, there seems to be momentum to regulate hate speech, data privacy, and online misinformation in a way that accords with human rights. However, according to Fanny Hídvégi of Access Now, most steps taken at the EU level thus far have been little more than politically motivated acts.

Krisztian Simon: Since the US election of 2016, we have heard a lot about the role of online platforms, such as Facebook and Twitter, in the spreading of fake news. How do content regulators react to this problem?

Fanny Hídvégi: Online platforms have remained at the centre of the debate about online content regulation (although their position has constantly shifted over the last few years). The main regulatory problem is that these platforms don’t count as media, and therefore they have different responsibilities and are subject to different rules than news sites. On the one hand, this is a positive arrangement that strengthens the freedom of the internet; on the other hand, it can also be abused. A great proportion of online content these days is published on social media, as many users create and share content themselves, while news sites rely more and more on the infrastructure of big web companies to distribute their content. For this reason, many of our debates focus not on the role of governments but on the possible tasks of these private actors. An example of this is the German NetzDG law, initiated by former German Minister of Justice and current Foreign Minister Heiko Maas, which allows platforms to be fined up to 50 million euros if they don’t delete content that includes threats of violence or slander within 24 hours of receiving a complaint.

What would be the responsibility of these private actors? Could they take over roles from the government?

Whether they can or should take over such roles is not obvious. Content-regulatory innovations could have bad consequences, not only giving these companies extra responsibilities but also placing them at the centre of the enforcement of human rights law, thereby expecting them to make judge-like decisions about what is and isn’t allowed in the online sphere. Taking over this role shouldn’t be their task, since these companies were founded to make profits, not decisions about human rights. Not to mention that they are not necessarily responsible for the content published by their users.

In the EU, there are currently many parallel experiments that deal with content regulation. One of them is the so-called reform of the copyright directive – although dubbing it a reform is somewhat misleading, as it implies a change for the better. This directive could start a trend that would, in the future, oblige platforms to use algorithms to preventively filter content before it is published. This is practically censorship: it is hard to envisage a solution that is both efficient and respectful of users’ rights.

In addition, there are similar efforts regarding hate speech: approximately two years ago, the European Commission created a code of conduct, which functions, in essence, as a self-regulatory mechanism – meaning that it is not a form of legislation that could oblige platforms to act in a specific way. In doing so, the Commission has placed the terms of service of these companies above the law and undermined the rule of law.


Why would it undermine the rule of law?

One example: if some offensive content is published on Twitter, the provider can do two things. It can delete the content either by referring to its terms of service or by referring to the law. If it refers to the terms of service, it doesn’t have to report anything to the authorities; this in turn means that the content will be deleted without any proceedings against the people who have, for example, spread hatred, and there is no chance for the victim to be compensated. When decision-makers use lawmaking and self-regulatory mechanisms to oblige online platforms to take on more and more tasks, it gives the platforms an incentive to avoid liability. In practice, this means over-compliance with the rules and regulations – deleting more than necessary – in order to avoid confrontations with courts.

To finish my previous answer, the third content-regulatory trend relates to the responses to the hoaxes or so-called fake news that are often mentioned in the press these days. I think the term ‘fake news’ is quite misleading: it is not an existing legal term, and it leads us to adopt a harmful narrative that lumps many different problems into one arbitrary category. By this I mean, among others, state-sponsored propaganda, untrue statements spread by bots, as well as jokes and satire.

Why were these different categories conflated when defining the fake news problem?

Following the 2016 US elections and Brexit, we have witnessed increased political pressure; more and more actors have demanded that something be done to avoid another such fiasco. But the problem itself was never defined – instead, we were immediately offered possible solutions. A good example is the so-called Expert Group on Fake News, which was created by the European Commission in January 2018.

This expert group published its report in early March 2018, which already allows us to draw some conclusions about its work. We were afraid that the report would suggest the use of a self-regulatory tool, a code of conduct – the problems associated with which I have already mentioned. In the end, the proposals of the report don’t include such things, which is a positive development. On the negative side, however, they put the emphasis on the problems associated with online platforms and social media. Of course, I don’t believe that these actors shouldn’t be covered, but they shouldn’t be the only ones.

The proposals clearly show that representatives of traditional media were over-represented in this group, and their interests were therefore most likely featured much more prominently in the text. Besides that, I have to mention that the Commission rejected all the well-known NGOs and experts prominent in the field of freedom of speech, among them the United Nations Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, and the NGOs Article 19 and Access Now. The only civil society organisation in the group was BEUC, a consumer protection group. When the report was published, BEUC issued a statement criticising the group for not taking steps to deal with the business models of Google and Facebook, even though those models are built on the kinds of content and advertisements that sustain this so-called fake news trend.

Could the business models of these companies function without disinformation?

Due to the political pressure, these companies are themselves making changes to their operations, or at least announcing new directions in which they would like to go. A few months ago, for example, Facebook announced that it would go against its own interests and prioritise content shared by users’ connections over news. It claimed it would do this for the sake of the public good – however, some experts have since pointed out that having users spend less time on Facebook could in fact benefit the social network: with attention becoming scarcer, advertisers would compete harder for it, which would drive up ad prices.

If it were up to you to offer an alternative solution to the fake news problem, what would you do?

First of all, it is not just one problem but many different situations that need to be tackled separately. Government propaganda, for example, cannot be treated in the same way as the sharing habits of ordinary people. In that sense, the single problem the Commission claims to be addressing does not exist. This also means that the expert group started its work without any consensus on what its report should be about.

The solutions can accordingly be manifold. In the case of government propaganda, I could mention the Charter of Fundamental Rights of the EU, which could be applied directly. However, the chances that the EU will indeed use it are quite small: this would not even be the most serious case in which invoking the Charter could have been justified, and thus far we haven’t seen the Charter used as grounds for launching infringement proceedings against a country.

If the question is how the functioning of platforms could be steered in a positive direction, then we have to take into consideration that, according to research, the most important question is how platforms target content. News and advertisements don’t just appear by chance on someone’s screen – they are created for specific audiences. This is also why the Commission started working on the reform of the e-privacy legislation, which will focus on two major issues: the confidentiality of electronic communications and the online tracking of users – including the widely used cookies. If we could find a regulation for cookies that is more protective of our privacy – regarding, among other things, the sharing and selling of user data and the tracking of online activities – then we could also have an impact on the actual use of these data. Moreover, the Facebook-Cambridge Analytica scandal has provided us with an opportunity to hold Facebook and other tech companies accountable on data protection or fundamental rights grounds.

What would this regulation of cookies look like in practice?

It shouldn’t be the technologies that are regulated; instead, we need to focus on their impacts. For example, how different web companies combine search-related and personal data from different sites, devices, and platforms; how they create profiles based on these data; and what kinds of advertisers and data brokers can access them. If these data couldn’t be combined the way they are now, then there would be no opportunity to create the kinds of profiles that allow the selling of manipulative advertisements (targeted based on, among other things, age, nationality, ethnicity, and social status) – as happened, for example, during the latest US election campaign.


Consequently, there would be no chance to misuse the vulnerabilities of these platforms and their users. The aim of the aforementioned e-privacy rules would be to classify information created during online communications as sensitive data. In that case, the user’s consent would be necessary if a platform wanted to share their data with third parties. The Cambridge Analytica case perfectly confirms this: we saw an entity that used legal and technological loopholes to illicitly use people’s personal data, extract information from them, and target audiences with manipulative content.

Will the Cambridge Analytica scandal have consequences?

It already has consequences. Lots of people have deactivated their accounts, and more and more people enquire about how to protect themselves against such campaigns. We are happy to help them, but it shouldn’t be the responsibility of users to invest extra time and resources in these protections; platforms should protect their data and function in a way that respects the law.

This situation has also made it clear that the US needs federal data protection legislation – this would make it much harder to buy access to personal data. EU commissioners are planning to talk to Facebook’s representatives, and US lawmakers will also face pressure to deal with this situation. The question is whether they will be able to use this momentum and find a solution that respects human rights.

What would be the best solution on the EU level?

There are many discussions between the EU and the US in which we could put pressure on the relevant actors. One of them concerns the so-called Privacy Shield, which deals with the transfer of personal data. Stronger EU rules could provide an opportunity to push tech companies towards better compliance with European standards, including the new EU data protection regulation, the GDPR. Starting in May, this regulation will increase the responsibility of platforms. This momentum, together with the Facebook case, could provide a perfect opportunity to put pressure on the US as well.

What will be the next step on the EU level?

The EU expert group didn’t recommend any legislative steps; however, new French legislation, which is very similar to the German NetzDG, is about to be introduced. The only difference is that NetzDG concentrates on hate speech, while the French legislation focuses on fake news. The speed and the spirit are very similar: in both cases we have seen a rushed decision, and in both cases online platforms bear the responsibility for the published content. As with the German legislation, the EU will need to examine the text to determine whether it complies with EU law. In this so-called TRIS process, the focus will be on the rules of the common market, but these rules also touch on some human rights issues, which means the Commission could block the legislation – even though it decided not to do so in the German case. The Commission’s reaction will be made public in autumn 2018. But even if the French legislation is not blocked this time, there will still be a chance to start infringement proceedings later.

All in all, most of these activities – at both the EU and the Member State level – are politically motivated. And even though there is pressure from the Member States to do something about data regulation, hate speech, copyright, or the spread of online disinformation, the actual steps taken are not built on facts and research; therefore, we shouldn’t expect too much improvement from them.