Facts are indispensable in a democratic political debate. Although social media lowers the threshold for citizens to get involved in this debate, it also provides an unparalleled environment for the spread and proliferation of disinformation. This ultimately threatens the value of facts. Although governments have begun to wake up to the need to regulate major platforms and to draft legislation to that end, they must adopt a much more assertive approach if they are to meaningfully hold these platforms accountable for their responsibility to host open, honest, and fact-based political debate. This also includes strengthening a traditional pillar of democracy: the free press.

Social media plays an increasingly prominent role in politics. As the Dutch Council for Public Administration (Raad voor het Openbaar Bestuur) puts it, a new era has dawned in our representative democracy: public democracy is giving way to platform democracy. In a public democracy, the democratic exchange of ideas mainly takes place in the traditional media. In a platform democracy, the exchange of ideas predominantly happens on digital platforms. The threshold for access to, and dissemination of, information has become much lower, and an increasing number of citizens are able to participate personally in the debate. They no longer need the mass media or political parties to reach other citizens or to mobilise. But the opportunities to intentionally mislead people with disinformation – deliberately false representations of reality, often with political motives – are also growing.

For many people, the word ‘disinformation’ immediately brings former US president Donald Trump to mind. In his four years in the White House, he posted some 4700 false or misleading messages on Twitter for political gain at home. Other governments, first and foremost Russia and China, spread disinformation in other countries in order to manipulate public debate, elections, or the judicial process. The pursuit of profit can also be a motive for disseminating disinformation. The longer social media platforms can keep us glued to our screens, the more they earn from advertisements. Accordingly, their algorithms reward extreme views that hold people’s attention for longer with a greater reach. Dealers in disinformation take advantage of this. By using sensational headlines in social media posts, they lure people to websites and YouTube channels full of junk news and conspiracy theories. The disinformation dealers make money from the advertisements they have arranged to appear alongside the texts and videos. These advertisements often come from respectable companies that don’t realise whose coffers they are filling.

Political and commercial disinformation is often reinforced by the use of bots. These automated accounts inflate the number of followers of an account and the number of interactions with its posts, giving these messages a wider reach on social media. According to US research, in June 2017 no fewer than a quarter of all tweets about climate came from bots. And the vast majority of these tweets and retweets denied the reality of climate change.

Shared facts – a condition for democracy

Journalists debunk the disinformation of lying politicians, trolls, and bots, but the ‘real’, fact-based news from journalistic sources such as newspapers and television channels unfortunately does not reach everyone equally. In the Netherlands, for instance, social media and blogs are currently the most significant source of news for a quarter of the country’s young people.

Propagators of disinformation know only too well that the press is their biggest opponent. It is not for nothing that Donald Trump constantly accuses the “lamestream media” of spreading fake news. In fact, the real creators of fake news and conspiracy theories are seeking to undermine citizens’ trust in the professional news media – as well as their confidence in other institutions that guard the truth, such as science, education, and the judiciary. If they succeed, we are in big trouble. A democracy cannot exist without shared facts. In the words of US historian Timothy Snyder: “To abandon facts is to abandon freedom. If nothing is true, then no one can criticise power, because there is no basis upon which to do so. If nothing is true, then all is spectacle. The biggest wallet pays for the most blinding lights.”

That is precisely Trump’s political strategy: to blind people. After losing the 2020 presidential election to Joe Biden, he launched a barrage of baseless accusations of ballot box fraud, both on Twitter and in the courts. Urging his supporters to “fight like hell” to “stop the steal”, he incited the violent storming of the US Capitol by a mob of rioters on January 6, 2021, shaking the foundations of American democracy. So Snyder is right: freedom is in peril if we abandon the facts.

Facts are sometimes controversial, of course. That is especially true of social facts. Take a concept like ‘public safety’. A politician might say that safety is deteriorating if surveys show that more citizens feel unsafe. But is this really a fact if, at the same time, the number of reported crimes is falling? It is vital that journalists, scientists, judges, and even schoolchildren are taught to critically examine facts. But there are important ground rules. For example, people who criticise someone else’s facts must substantiate their arguments. And vice versa, those who invoke facts must allow themselves to be corrected. This is how shared facts come into being. The truth comes about through dialogue.

Without facts, all politics becomes rhetoric.

Fact-free politics in the AI era

Facts matter in politics, too. In addition to values, emotions, and visions, they are an indispensable element of any political discourse or debate. Without shared facts, it becomes difficult to argue about the issues we disagree on, let alone to reach compromises. Without facts, all politics becomes rhetoric.

Fact-free politics and disinformation affect voters by making them increasingly cynical. They start to find lying politicians normal, or even admire them because they manage to get away with their lies. We can already see this happening among Trump’s supporters. And thus history threatens to repeat itself. The philosopher Hannah Arendt wrote about Hitler and Stalin:

“The totalitarian mass leaders based their propaganda on the correct psychological assumption that […] one could make people believe the most fantastic statements one day, and trust that if the next day they were given irrefutable proof of their falsehood, they would take refuge in cynicism; instead of deserting the leaders who had lied to them, they would protest that they had known all along that the statement was a lie and would admire the leaders for their superior tactical cleverness.”

Arendt published this razor-sharp analysis in The Origins of Totalitarianism in 1951. Since then, the opportunities for spreading disinformation have grown enormously, partly thanks to artificial intelligence. You can have former President Obama say that “President Trump is a total and complete dipshit” in a video that is indistinguishable from the real thing. The viewer recognises Obama, hears his voice, and sees his lips move in sync with his words. Such a ‘deepfake’ video was in fact created in 2018 by filmmaker Jordan Peele, precisely to warn people of the danger of manipulated videos. Today, any whiz kid can do the same. What if bots and trolls flood social media with a deepfake in which Putin or Kim Jong-un announces a nuclear attack?

The problem with fact-checks

The good news is that we are not powerless in the face of disinformation. Here lies an important task for the social media platforms themselves as well as for governments. Users also have a key role to play in identifying false information before passing it on, and methods (such as the HALT method) have been devised to encourage individuals to think critically.

The fact that many false statements can be easily viewed, without being accompanied by a clear warning based on verification by independent fact-checkers, demonstrates that social media still has a long way to go before it can play a responsible role in platform democracy. Under public pressure, some platforms have taken positive steps in the fight against disinformation in recent years. YouTube, for example, now brings journalistic news to the attention of its users more often. The video platform has engaged fact-checkers in three countries. Twitter and Facebook also hire fact-checkers to combat disinformation. Accounts that repeatedly spread disinformation are punished by Facebook with a diminished reach. However, unlike Twitter, Facebook refuses to fact-check politicians and parties. It was not until 2020 that Facebook took action against Trump, removing rather than labelling a post of his because it contained harmful disinformation on the coronavirus. (Trump was subsequently suspended from Facebook, Instagram, and YouTube, as well as permanently banned from Twitter, after inciting the attack on the Capitol.)

Messages that generate a lot of reactions are shown to more users to maximise advertising revenue. The platforms’ efforts to combat disinformation are thus undermined by their own algorithms.

Still, fake news is more sensational than fact-checks. In 2017, 50 of the biggest hoaxes on Facebook were shared or commented on 200 times more often than the fact-checks that accompanied them. This creates a snowball effect, since messages that generate a lot of reactions are shown to more users to maximise advertising revenue. The platforms’ efforts to combat disinformation are thus undermined by their own algorithms.

The challenge for lawmakers

A goose that lays golden eggs will not be keen to put itself on a diet. This is why legislation is needed against disinformation on social media. One simple rule would help: social media users who have viewed disinformation must also view the fact-check. This rule, proposed by the Avaaz campaigning community, corrects the algorithms; finding the truth takes precedence over commerce. Showing a fact-check to social media users can reduce the number of people who believe the untruth by half, according to an experiment by American researchers. It would mean that social media platforms would have to engage sufficient numbers of independent fact-checkers in order to debunk disinformation in a timely manner. In turn, the platforms must make it easy for users to submit dubious messages for fact-checking. Their own algorithms should also actively search for disinformation, including dangerous deepfakes.

This approach upholds the ‘safe harbour’ principle enshrined in European legislation for digital services. According to this principle, platforms are not liable for the content posted by their users, unless they know that this content is illegal – racist hate speech, for instance – and they do not take ‘prompt’ action against it. By avoiding prior censorship, safe harbour protects freedom of expression. This fundamental right should not be restricted, either by governments or social media platforms, unless such restrictions serve a legitimate purpose and are both necessary and proportionate. If statements are considered harmful but are not clearly in breach of the law, the platforms should therefore look for the least onerous means of limiting the damage. Fact-checking untrue or misleading posts, coupled with reducing the reach of social media accounts that repeatedly post disinformation, is more compatible with freedom of expression than removing these posts. Bringing in independent fact-checkers, such as journalists or academic researchers, prevents the platform itself or a government from determining what is and what is not disinformation. Also, any social media user whose post is labelled as disinformation should be informed about it and given the opportunity to object to this judgment at an independent dispute settlement committee or in court.

In December 2020, the European Commission announced that it intends to ‘step up fact-checking’ and ‘limit the artificial amplification of disinformation campaigns’ on social media. However, it wants to lay down rules to this end in a strengthened Code of Practice on Disinformation rather than in the Digital Services Act, which has a more binding nature. The European Parliament and the Council of Ministers have the final say over this proposed regulation. In its current form, the proposal lacks explicit references to fact-checking and the downgrading of disinformation, which could be inserted in a beefed-up Article 27 on “mitigation of risks”.

Legislators will have a hard time countering disinformation while protecting free speech as long as a handful of platforms dominate social media. Strong risk mitigation clauses in the Digital Services Act might not be enough to prevent big platforms from wielding enormous influence over politics. Even Trump’s staunchest opponents should feel uneasy about the way he was deplatformed by Twitter, Facebook, Instagram, and YouTube. Being shut off from social media may be an appropriate penalty for incitement to violence, but shouldn’t this decision be taken by a judge rather than a few captains of Silicon Valley – the same people who for so many years let Trump get away with a constant stream of disinformation without a single fact-check? In order to break the political power of quasi-monopolistic platforms, we need a much greater variety of social media. As the digital rights organisation Bits of Freedom puts it: “Dozens of YouTubes and Facebooks communicating with each other and reflecting the diversity of people and opinions that make up our society.”

The Digital Services Act as proposed by the European Commission fails to foster this diversity. The dominance of big platforms is partly based on a perverse business model that turns our personal data into tradable commodities. A ban on micro-targeting, as demanded by the European Parliament, would reduce their data power and create more opportunities for newcomers who respect our privacy and autonomy. These may include non-commercial platforms – such as Peertube, an alternative to YouTube, or Mastodon, a decentralised, open-source, not-for-profit mix of Twitter and Facebook – provided they get a helping hand from governments. In addition, the Digital Services Act needs stronger wording on interoperability between platforms, so that users can switch from Facebook to another platform without having to miss their friends’ posts on Facebook.

Bringing journalistic ethics to social media 

Legislation for platform democracy is not complete without measures to strengthen journalistic media – both the livelihoods of journalists and their reach in the digital world. Journalistic news helps to mitigate polarisation, filter bubbles, manipulation, and disinformation. Journalists, when acting according to their professional ethics, add nuance, puncture lies, and alert us to facts and opinions that others want to keep hidden from us. A good journalist is truthful and impartial, verifies facts, uses multiple sources, hears both sides of the story, and rectifies his or her mistakes.

We should start treating these platforms – at least the large ones, such as Facebook, Twitter, and YouTube – as news organisations.

Most public broadcasters have editorial statutes that guarantee both their independence and journalistic ethics. In the Netherlands, when commercial broadcasters made their entrance, the government and parliament decided that they too should have editorial statutes. And now that social media is the main source of news for part of the population, we should start treating these platforms – at least the large ones, such as Facebook, Twitter, and YouTube – as news organisations. This does not mean that they have to start practising journalism or that they are liable for their users’ posts in the same way as newspapers or broadcasters. Rather, it would mean that they bring news from a varied range of journalistic sources to the attention of their users. The timeline of every social media user should include news articles and videos that have been posted by independent newspapers and broadcasters. There should be variation in the sources, so that users are offered multiple perspectives on current affairs. The accessible news videos published by public broadcasters lend themselves perfectly to supplying news on social media.

Although YouTube already gives preferential treatment to journalistic news videos in its recommender system, the algorithm favours videos with sensational and partisan titles, such as those of right-wing American broadcaster Fox News, because viewers click on such videos more often. Moreover, it is doubtful whether Fox News can be considered independent journalism. Its editorial line is strongly influenced by the owners: media tycoon Rupert Murdoch and his family. Closer to home, press freedom is in constant decline in Hungary: public broadcasting and most newspapers are no longer independent. They are controlled by the party and the business friends of Prime Minister Viktor Orbán. So if a legal requirement to promote journalistic news on social media were to be included in the Digital Services Act, it would have to be accompanied by a clause that requires it to be news from journalists whose independence is effectively protected by an editorial statute.

In this way, the algorithms that now play on emotions and too often keep people locked in echo chambers would instead be used to broaden their field of vision. The journalistic news that people see on social media could, for example, be attuned to the controversial topics that come up in their timelines. If a friend’s post about, say, vaccinations or climate change appears in your timeline, you will also be offered a news video or background article on that topic. Transparency is essential here: it must be clear how an algorithm selects news items, and the list of journalistic sources used by a platform must be public.

An investment in social cohesion

Good journalism, even if it is free on social media, must somehow be paid for. A ban on the use of our personal data for commercial and political micro-targeting would be a good start. Such a ban would not only protect our privacy and autonomy, but it would also create a fairer playing field between social media platforms and news media publishers in the fight for advertisers. Governments should also start thinking about providing permanent subsidies for the free press, if they do not do so already. In Sweden, the second-largest newspaper in each region receives the most funding in order to avoid a monopoly on the news. In the Netherlands, where no such subsidies exist, citizens can count themselves lucky if there is a single newspaper that reports on local and provincial politics. Of course, the subsidies must be distributed by independent funds, and newspapers, magazines, and digital news media must all be eligible for them.

The generous funding of independent public service broadcasting also pays off: it increases social cohesion. Broadcasters need to be given ample opportunity to reach an online audience – not only through Facebook and the like, but also through new, non-commercial platforms.

It is precisely because information spreads so easily on the Internet that the importance of the free press is growing. This is the paradox of platform democracy. A market square without gatekeepers, where everyone can sell their wares, needs arbiters of quality. Journalists and fact-checkers help us to distinguish facts from fables and fabrications. That’s why our second piece of advice to modern, digital citizens is this: if you can afford it, get yourself a subscription to a newspaper.

An earlier version of this article appeared in Dutch in the magazine de Helling.