Platforms that once represented a promise of freedom are now monopolies based on data extraction and surveillance. Users who joined social media to stay in touch with their friends find themselves trapped in a toxic environment. How did it all go wrong and what can citizens and regulators do to restore a healthy digital ecosystem? A conversation with tech activist, writer, and blogger Cory Doctorow, author of The Internet Con: How to Seize the Means of Computation. 

Konrad Bleyer-Simon: In your new book, you argue that online platforms created a self-serving system with artificial barriers that make it hard and costly for users to leave their services. The outcome is extreme concentration, monopolisation, bad user experience, and probably also the surveillance of citizens, as well as the spread of harmful content. You suggest interoperability as a key solution to these issues. What is it about? 

Cory Doctorow: Let me slightly correct you first. The monopolisation of the tech sector – meaning, the mergers that allowed a handful of companies to become so big and so powerful – came earlier. This monopolisation helped them secure the policies that block new market entrants. In this setting, it is easier for them to capture their regulators. It is not hard for five companies to agree on what they are all going to tell the European Commission, while it would be much more complicated for 400 companies to come up with a shared playbook.  

Once you allow the creation of a monopoly, regulatory capture follows. 

Is it this capture that makes it difficult for users to leave the platforms? 

Tech has always grown thanks to network effects. A network effect means that a service gets better when more people use it. So, you joined Facebook because the people you love were already there. And then, other people who love you joined Facebook because you were there. Or you bought an iPhone because you liked the apps that were available for it. And then because you had an iPhone, someone made another app for the device. 

This has been the case ever since the personal computer. What changed is that, in the past, you could always develop a new technology that made it easy to leave the old one. When IBM dominated the market with mainframes and charged 1,000 per cent margins on hard drives, companies like Fujitsu made hard drives that would work with IBM mainframes. And then, eventually, they made mainframes too.

When Facebook was born, it gave people who were already using MySpace a tool that would pretend to be you and log into MySpace, collect all the messages that your friends had left for you, and put them in your Facebook inbox. You could reply to them there, and it would send them back to your MySpace outbox, so your friends would see them. And that was what allowed Facebook to take so many users from MySpace so quickly. This is what interoperability is about. But if you tried to do that today, Facebook would use laws that were either enforced differently or did not even exist at the time of MySpace, to ruin you. 
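To make the mechanics concrete: the MySpace-to-Facebook bridge described above was essentially a bot that logged in with the user's own credentials, scraped their old inbox, and mirrored replies back. Below is a minimal sketch of that pattern in Python; the endpoints, form fields, and response shapes are invented for illustration and are not the actual MySpace or Facebook interfaces of the time.

```python
# A hedged sketch of the "pretend to be you, log in, and mirror messages" pattern.
# Every URL, form field, and response shape below is hypothetical -- stand-ins for
# whatever the legacy service actually exposed, not a real MySpace or Facebook API.
import requests


class LegacyBridge:
    LOGIN_URL = "https://legacy.example/login"       # hypothetical endpoint
    INBOX_URL = "https://legacy.example/inbox.json"  # hypothetical endpoint
    SEND_URL = "https://legacy.example/send"         # hypothetical endpoint

    def __init__(self, username: str, password: str):
        # The bridge holds the user's own credentials and acts on their behalf.
        self.session = requests.Session()
        self.session.post(self.LOGIN_URL, data={"user": username, "pass": password})

    def fetch_messages(self) -> list[dict]:
        """Collect the messages friends have left on the old service."""
        return self.session.get(self.INBOX_URL).json().get("messages", [])

    def send_reply(self, recipient: str, body: str) -> None:
        """Push a reply written on the new service back into the old outbox."""
        self.session.post(self.SEND_URL, data={"to": recipient, "body": body})
```

A new platform could run a loop like this for every consenting user, so that switching services never means abandoning the friends who stayed behind.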

If we were to restore this “noble ancient art” of technological interoperability, the users who are so obviously discontent with the platforms they use would consider the costs low enough to leave and join better spaces. In turn, the companies would be smaller, would pay more attention to user satisfaction, and could not push around the governments that tried to hold them to account. 

At the moment, X (formerly Twitter) is a good example of a platform that is hated by many of its users. If the current competitors – Mastodon, Bluesky, Post, and Threads, among others – were to embrace interoperability with one another, could that drive dissatisfied X users to leave the platform?

Sadly, I do not think that the lack of interoperability is what stops people from joining these other platforms. I opted not to join Threads or Bluesky because they are owned, respectively, by Meta’s Mark Zuckerberg and by a nonprofit whose board includes Jack Dorsey, the man who sold Twitter to Elon Musk, so I do not trust them. I wish people thought longer term about why they do or do not join specific platforms. But the reason most X users are not switching to Mastodon or Bluesky is that they care more about the people than they do about the user interface, the management, or the policies.

As for the lack of interoperability, it is a form of mutual hostage-taking: the people you love are trapped on X, and that means that you are trapped on X. At the same time, the fact that you are trapped on X is why the people you love are trapped on X. There is no technical reason preventing you from leaving X, setting up an account somewhere else, and continuing to send messages to the people who stay behind. This is not like the Soviet Union. When my grandmother was a Soviet refugee, she lost touch with her family for 15 years. They could not write, phone, or visit.

In the case of X, it is simply a policy barrier: the service refuses to interconnect. The reason it does that is that it understands that the harder it is for you to leave its service, the worse it can treat you, or the more data it can extract from you. Platforms treat users the way an airport treats travellers: once you get past security, suddenly a bottle of water can cost horrible sums, because you cannot get one from anywhere else.

If these alternative platforms managed to team up and make a pledge for interoperability, would that not be a possible driver for a large enough pool of people to leave X? This scenario might even prompt X to open up. 

I think that it is far more probable to achieve this through regulatory action than through an agreement between platforms. In the EU, we already have the Digital Markets Act, which mandates interoperability from large firms. X violates many existing regulations, as well as the consent decrees – the remedial measures imposed for its previous rule-breaking – both in the US and in the EU. At some point, it is likely that regulators will come after X with action that could wipe out the company.

One of the things that regulators could do to protect X users – and what they have built up on the platform – would be to require the company to set up an interoperable gateway. At that point, every other platform would exploit that gateway to steal X users and, in turn, offer them a better environment.

The business models of the biggest tech companies today are based on the presumption that they are and will remain monopolies. Would this mean that once there is competition, they will simply collapse? 

It is even more interesting than that, because you need to take into account the so-called “curse of bigness”. Google, for example, is a search and advertising company; all they do is sell search, and sell advertising on search. Smart Cities and Wi-Fi balloons and all the rest of it is just nonsense and window dressing. Now, Google cannot attract new search users and grow its search business. There are not so many people out there who have heard of Google but never tried it. And yet, Google needs to grow, because that’s the imperative of firms that are publicly traded. They want to attract capital, they want to enrich the managers who have been given lots of shares as part of their compensation package, and so on. Google must grow; and one way that Google can grow is by putting less energy and less money into keeping search results good or secure. It can also make search results deliberately worse by showing ads ahead of the quality content that the algorithm has come up with. It’s already happening.  

Another way tech companies can increase profits is by abusing their workforce. Google laid off 12,000 engineers in January last year, right after doing a stock buyback that would have paid their salaries for the next 27 years. 

The third possible way of growing – and most people hope that these big firms will do it this way – is by entering new markets. However, in the winner-takes-all economy of big tech, executives are wary of their colleagues becoming too powerful by doing something new. In the case of Google, for example, the founders came back to run the AI projects, because they knew that if they turned the work over to the business itself, the executives would sabotage it.

So, the problem is not only that losing their monopoly would mean losing their business. It is also that their scale stops them from doing exciting new things. 

In your book, you are somewhat critical of the EU’s Digital Markets Act (DMA) and the General Data Protection Regulation (GDPR). In general, what is your impression of the EU’s digital policies? Do they have potential and could they allow the EU to become a global trendsetter? 

They could, but so far the EU has been neither superb nor terrible. In fact, the GDPR is an interesting guide to how digital policies can go right and how they can go wrong. There are parts of the GDPR that I consider a bad idea; none of them have to do with privacy – they mostly concern censorship. One problem, for example, is the right to be forgotten, which turned into a way for people who committed terrible crimes to stop the world from knowing about them – although that is not what this right was intended for.

However, the main problem with the GDPR has to do with enforcement, which is a latent issue in the project of European federalism. Europe has a bunch of corporate havens within itself: Ireland, Malta, Luxembourg, to a lesser extent the Netherlands, Cyprus, and so on. These countries bend policy towards allowing criminality by wealthy people, and they compete with one another to become the most advantageous territory for the worst people and corporations in the world to set up their headquarters. As a result, you get ridiculous outcomes like with the GDPR, where Facebook and Google pretend that they are headquartered in Ireland; and then the Irish Data Commissioner never gets out of bed, so that Facebook and Google can violate the GDPR. That is also why I am worried about the enforcement of the DMA. 

My other worry is the European Commission’s own interest. The GDPR was the signature achievement of the previous Commission, and when the new Commission came in in 2019, they were not all that interested in pursuing it; it was not theirs, and they let it languish. If the DMA is going to be the force that we hope it will be, there has to be an enforcement agenda that carries across different administrations within the Commission. That enforcement agenda needs to be staffed up now and needs to score some early wins. If Europeans are not going to tolerate a failure to enforce the GDPR or the DMA, they must see the value in these rules. That way, if digital policy is not enforced, they will get angry and demand action.

Finally, I see problems related to a lack of longitudinal experience. The DMA started off with the promise that it was going to impose interoperability on some of the widely used secure messaging tools: end-to-end encrypted services, like iMessage, WhatsApp, and Messenger. However, the Commission should have started with social media platforms, where introducing interoperability is relatively straightforward. Messaging is just too hard to deal with, as it is very sensitive to even small technical errors. A mistake in this domain could compromise a service’s encryption and thereby put people at risk – even endanger lives. Remember that the journalist Jamal Khashoggi was lured to his death by a cyber weapon which was produced by the [cyber intelligence firm] NSO Group, and then purchased by the Saudi royal family. 

Big Tech can currently outspend anyone. They offer the highest-paying jobs for data scientists, coders, tech policy experts, industry lawyers, and so on – so, essentially, regulators are captured by them, while the best people work for them. How can anyone compete with this expertise?

When companies grow beyond a certain scale, they become very hard to regulate. The historic reason for competition law was not merely to react against the harm that large companies were already engaged in, but to ensure that a company never got so large that if it engaged in harm, we would not be able to do something about it. When a sector is extremely concentrated, chances are that the only people who understand how those companies operate are the people who work for those companies. That is why we often see that the regulators who step in to hold the companies to account are drawn from within the ranks of the companies. It is not merely due to a revolving door; it is also the outcome of a certain degree of pragmatism: a sector with five companies does not have a lot of outside experts. 

This does not mean that it is impossible to regulate Big Tech; it just means that we should have started 25 years ago, and now we face an uphill battle. One of the things that we can do is to simply block companies from engaging in certain conduct. In the US, for example, there is a law that is going to force companies like Google and Facebook to choose whether they are going to offer the marketplace where ads are bought and sold, represent the sellers of ads, or represent the buyers. Right now, each company does all three. They also play the roles of both advertisers and publishers. This arrangement makes it possible for tech platforms to take about 50 per cent of every ad dollar, while historically, fees and service charges would have been 10 to 15 per cent.  
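To put those figures into a back-of-the-envelope comparison (the 50 per cent and 10 to 15 per cent take rates come from the interview; the one-euro spend is just an illustrative amount):

```python
# Illustrative arithmetic only: how much of each advertising euro reaches the
# publisher under the take rates cited above.
ad_spend = 1.00          # one euro spent by an advertiser

big_tech_take = 0.50     # ~50% when one firm runs the exchange, the buy side, and the sell side
historical_take = 0.125  # midpoint of the historical 10-15% range of fees and service charges

print(f"Publisher's share today:        {ad_spend * (1 - big_tech_take):.2f}")    # 0.50
print(f"Publisher's share historically: {ad_spend * (1 - historical_take):.2f}")  # 0.88
```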

So, indeed, it is very hard for a regulator to understand how a company can be a seller, a buyer, a platform, a publisher, and an ad buyer at the same time. But the problem can be solved by forcing it to sell off some of its units. 

Do you expect this to go smoothly? 

Not necessarily. The companies could try to undermine an interoperability mandate or a breakup. If regulators tell Facebook that under the Digital Markets Act, it has to allow third parties to connect to it using Mastodon, Bluesky or some other service, Facebook might decide to block or throttle those rival services, and pretend it did so accidentally, in a good faith attempt to protect users’ security. As Facebook is so big and gnarly, and because almost everyone who understands how Facebook works is a Facebook engineer, it might take years to tell the fake claims from the real ones. By that time, the companies that were trying to interoperate with Facebook will have already gone out of business. 

This is where so-called adversarial interoperability comes into the picture: technologically savvy people can just decide to disregard platforms’ self-serving rules and use a combination of means disliked by large companies – such as bots, scraping, and reverse engineering – to extract the data that they need to build an interoperable service. 
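In practice, “bots, scraping, and reverse engineering” often means fetching the same pages or undocumented endpoints that a platform’s own client uses and parsing the data back out. The sketch below is illustrative only: the profile URL, HTML structure, and class names are assumptions, and requests and BeautifulSoup are simply convenient stand-ins for whatever tooling an interoperator might choose.

```python
# Illustrative sketch of adversarial interoperability via scraping: pull a user's
# own public posts out of rendered HTML so they can be re-hosted on a rival service.
# The URL, the markup, and the attribute names are hypothetical; real markup would
# differ and would likely change often, which is part of the cat-and-mouse game.
import requests
from bs4 import BeautifulSoup


def scrape_public_posts(profile_url: str) -> list[dict]:
    html = requests.get(profile_url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    posts = []
    for node in soup.find_all("article", class_="post"):   # hypothetical markup
        posts.append({
            "author": node.get("data-author", ""),          # hypothetical attribute
            "text": node.get_text(strip=True),
        })
    return posts


if __name__ == "__main__":
    for post in scrape_public_posts("https://example.com/@someone"):
        print(post["author"], "-", post["text"][:80])
```

Because the markup can change at any time, scrapers like this are brittle by design, which is why the interview pairs them with the regulatory mandates discussed earlier.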

And big companies will tolerate this? 

They will not like it. They can try to stop it – but ultimately, that is very hard to do with technology alone, once regulations no longer favour Big Tech. All an adversarial interoperator needs in order to succeed is to find flaws in a tech company’s defensive strategy. To prevent this, that company would need to make no mistakes at all – something not even the biggest and richest companies can achieve.

If we empower new market entrants to use the same tactics that once made Facebook, Google, Apple, and Amazon so wealthy, we could reach a new equilibrium. 

How can progressive policymakers promote an environment in which you have more adversarial interoperability? 

I have an archenemy, a guy called Milton Friedman. He was the architect of the neoliberal revolution, a great friend of Margaret Thatcher and Ronald Reagan, and the person who created the misery that we live in now. While he was a monster, he understood how to make change. He wanted to create a feudal regime in which most of us would no longer be socially mobile, and many of us would lose access to health care, university education, or retirement, and would simply act as servants to those who are better off. This plan is certainly not something that would appeal to a majority of people; but Friedman argued that in times of crisis, ideas can move from the periphery to the centre so quickly that the impossible becomes the inevitable.

Our job is to have sufficient good ideas, so that when crisis comes, we can seize upon it to change the world. One of the things that the European Greens and other progressives can do is advocate for technology- and policy-informed, technically detailed solutions that address the structural problems of tech. 

Would the world be a better place if the development of the Internet and communications technology had stopped in 1999? 

No; it is a gift to Big Tech to assume that today’s problems are intrinsic to technology. I am convinced we could have had social media or search without mass surveillance. In fact, both Facebook and Google started off as privacy-preserving alternatives to their rivals. If you have a look at the PageRank paper that Google’s founders published in 1998 to announce that they had built a new search engine, you will find a promise that their tool would never be advertising-based, because that would adversely impact its quality. Facebook did not spy on its users either for the first several years. Similarly, Apple’s rhetoric about the iPhone and the need to prevent you from installing software from third parties without their blessing is nonsense, because Apple already built a computer that works just fine without that restriction.  

So, the problem is not the technology or how we use it; the problem is rather that the people who provide these technologies made it impossible to break up their prix fixe menu, where you have to take surveillance with your social media, control with your mobile operating system, and price gouging with your publishing. With interoperability, we could turn that set menu into an à la carte of technological self-determination.