In recent years, the likes of the Cambridge Analytica scandal together with regulatory steps such as the EU’s GDPR data law have contributed to a swell in public awareness of the potential risks of big data and artificial intelligence. Privacy, however, is only the tip of the iceberg. We sat down with legal philosopher Antoinette Rouvroy to discuss her work on algorithmic governmentality and the profound transformation the neoliberal-driven tech revolution is catalysing in society and politics. What’s needed, she argues, is a return to reality and away from endless optimisation – and in this, the EU must play its part.

Green European Journal: Beyond the technical and hardware side of today’s technological revolution, a more fundamental and pivotal shift is transforming societies. Among other things, you work on the question of algorithmic governmentality. What is algorithmic governmentality?

Antoinette Rouvroy: Algorithmic governmentality is the idea of a government of the social world that is based on the algorithmic processing of big data sets rather than on politics, law, and social norms. Digitisation becomes a sort of quantification of political issues that is achieved through algorithms. In “traditional” statistical practice, there are always hypotheses about the world, conventions of quantification and prior categorisations. With big data, the idea is to generate hypotheses and classification criteria from the data.

It’s no longer about governing what is, about judging, punishing, and controlling past behaviours, but about governing uncertainty. The mass processing of data is about taming uncertainty. Algorithmic governmentality has a much larger target in its sights: the excess of the possible over the probable. It aims to reduce the possible to the probable by affecting behaviours through warning rather than through prohibition or obligation. It is a relatively subliminal mode of government that consists of directing people’s attention towards certain things, in modifying the informational or physical environment so that behaviours are no longer obligatory but necessary.

In The Age of Surveillance Capitalism (Profile, 2019), Shoshana Zuboff talks about “behavioural futures markets”. Human experiences, which are said to be “unverifiable”, are translated into behaviours and signals so that this data can be sold in a marketplace. The goal is to predict, but isn’t it also to tame risk completely?

An algorithm doesn’t “see” anything but rather dismantles all our perceptions. It allows us to de-automate, to disprove our prejudices. This appears to be liberating but it generates speculative spaces free from any idea of risk. The goal is to reduce the excess of risk. It’s a way of “un-thinking” the future, of already actualising what only exists virtually. Take the risk of premature death. Algorithmic governmentality is not interested in physiological causes and signals – or rather, it simply treats them on a par with other types of signal, for example, the type of people you associate with, what you eat, or the fact that you stay up all night watching films on Netflix.

We continue to imagine but the imagination is no longer taken into account. It’s the death of politics.

Anything can be a factor, without any obvious causal link to what we’re looking for. Once signals have been detected, the person in question will be treated as if they have already “contracted” the risk or already “actualised” the danger, and may then, for example, have their life insurance cancelled. It is not about acting on causes but about acting pre-emptively on effects, and in a way that is beneficial to those who purchase or design the algorithm, be it to increase profits or control. To take one example: the insurance world, with its pooling of risk, dies. The post-insurance world no longer needs to pool risk because we can already anticipate its actualisation. The idea of risk completely disappears.

So we no longer try to impose new norms or shape interactions between people but move straight to neutralisation. Doesn’t that kill imagination and the living?

We continue to imagine but the imagination is no longer taken into account. It’s the death of politics. Optimisation rather than imagination or anticipation is exactly the opposite of politics. Politics is about transcending the current state of affairs. Algorithmic governmentality, on the other hand, is about optimising the current state of affairs so that it remains as favourable as possible to certain stakeholders. It’s a new form of rationality, the optimisation of a multitude of objective functions, which is today determined by industrial interests.

Neoliberalism has led us to a place where everything is calculable. What were the political and economic interests that drove us to such a reality?

The current systems draw on the visions of empowerment that emerged in the 1960s and 1970s, a sort of refusal of any heteronomy, a desire to be governed only by oneself, a kind of hatred of the average. It’s the end of classes and groups. Individuals do not want bureaucracies – public or private – to see them as members of one or other social class or group, but as eminently singular, unique, and creative beings. Big data speaks to us as individuals, as far removed as possible from any idea of average. The very idea of average disappears.

Today, we live in an optimisation society in which everybody must optimise themselves, to be and have everything, all of the time.

Industrial-scale personalisation is no longer an oxymoron. Everyone wants their personalised equipment and environment. The author Alain Damasio talks about a “techno-cocoon”, something very coddling. Everybody shakes with fear at the prospect of mass surveillance, but the two go hand in hand. This hyper-individualism is the result of neoliberalism: it is about judging everyone or evaluating the merits and needs of everyone in high definition. Everyone pays for what they do, everyone for themselves. In some cases, like driving, this hyper-individualism and surveillance can have positive aspects: adjusting insurance premiums based on the driver’s behaviour. However, adjusting insurance premiums based on your consumption at the supermarket or other personal choices is highly problematic.

But big tech isn’t the cause of this, there’s an ideology. Today, we live in an optimisation society in which everybody must optimise themselves, to be and have everything, all of the time. We are no longer judged on values or morals. Everything is of equal value, so to speak. And the best way to satisfy consumers, in real time and without judgment, is to tap directly into their impulses rather than their reflexivity. We normally have time to reflect; our nerve impulses are much slower than digital signals. The speed of human thought, of reflexivity, is short-circuited by the speed of digital technology, which taps directly into impulses. There is an evacuation of the subject.

Is the evacuation of the subject another way of viewing the crisis of representation in current political and economic systems?

What is interesting politically is that, now everything is captured, the system gives voice and weight to that which previously lacked any representation: everything not represented in words. This can appear very liberating but what is lacking is a collective frame of reference, what Guattari and Deleuze called collective assemblages of enunciation.[1] In these assemblages, we make sense collectively, we don’t calculate; there is something which transcends the individual optimised for him- or herself, something that is common.

The common is, in a manner of speaking, the fourth person singular. Completely forgotten today, it is the unrepresentable, the unrepresented in the data, the incalculable, which we find in the organic, the normativity of life itself; the prospect of change itself. The term Anthropocene trips us up because it implies that everything depends on us (though it is true that much is “our fault”) and that there is no nature independent of ourselves. Yet, there is nature outside of us, neither represented nor representable in data form and that is exactly what the “living” is.

Online outrage is what feeds algorithmic capitalism. Where opposition works is when people get together to make something concrete and tangible.

Our reason as human beings is limited because we have a body, a sensorial point of view, and without it we cannot know, understand, or be in the world. This allows us to plan a future, in other words, one that isn’t already concerned with, or even pre-determined by, logics of accumulation and optimisation. Algorithmic governmentality is absolute anthropocentrism; it is to claim that human rationality can have a hold on everything in the world that is not human.

That’s bleak. Do you see any light at the end of the tunnel?

The glimmers of hope are everywhere. The capitalism that has exhausted all material resources and is now exploiting the virtual is a super fiction, completely disconnected from reality. The best possible resistance or defiance is probably to not be so fascinated by artificial intelligence (AI). AI specialists demystify it best. They are the best defenders of politics: Yann LeCun, head of AI at Facebook, says that an AI will never be more intelligent than a cat. Specialists explain that AI is unable to perceive context. We should listen to these experts and not the industry’s narrative, which has been taken up by politicians who use it to abdicate their responsibility. The living is everywhere; we should focus on this.

What exactly is the living? Should we get down to earth, as Bruno Latour says?

The return to territory, re-inhabiting, translates politically into a relative return to the local. There is nothing regressive about it. It is about focusing on the here and now, the earth beneath our feet, which is more rooted than we are. The globalisation project and capitalism are incompatible with survival. Optimisation is the opposite of focusing on what matters in the concrete here and now; it mesmerises and spares us from looking at and being in reality. AI is only a source of information and will not be more helpful than a hammer or spade. AI can give us interesting maps, help us to identify facts, but facts and actions don’t count in themselves; they need to be made important, meaningful, and this is the work of the human being alone.

Politically, how can we subvert algorithmic governmentality?

Today there is staggering digital passivity. Many people think that digital votes, petitions, and outrage will change things, and even lead to citizens’ assemblies with the help of technology. I don’t want to criticise all this, but online outrage is what feeds algorithmic capitalism. Spending time being outraged on Facebook feeds the beast. Where opposition works is when people get together to make something concrete and tangible – like building a house, creating a vegetable garden, or many other things.

To move onto a different but related subject, namely the law when it comes to big tech. You say that the General Data Protection Regulation, the EU data law introduced in 2018, is useful but that it arrived too late and with the wrong target. Could you explain why?

I have competition law in mind, because when we talk about human rights, the digital issues of protecting privacy and private life immediately arise. But this concerns the “fortress” around the individual. In reality, algorithmic governmentality is not interested in the individual and their choices or preferences but the intensity of the statistical relationships that exist between fragments of everyday individual existence. In other words, it is interested in modelling large amounts of data for large amounts of people. GDPR is too focused on the individual, even though consent offers very little protection. It is very easy to identify someone from anonymous data by combining big data sets, regardless of their level of individual protection. Last but not least, the issue is that today power lies less in identifying people than it does in modelling their possible behaviour, but collectively.

Europe has a role to play, but by fundamentally rethinking the status of data, even if it means doing so negatively.

The whole situation around data is problematic. Today, all data is captured, and not necessarily produced, by big tech firms such as Google, Amazon, Facebook, Apple, and Microsoft. States are dependent on these actors, whether it be for security or other purposes. States and other civil society actors are forced to sign contracts with them: take, for example, the National Security Agency in the US for counter-terrorism or the International Red Cross for reuniting families following an earthquake. Data is centralised, outside of democratic control, by companies in the process of acquiring far greater powers than states. It is a major geopolitical issue.

What levers does the EU have?

Instead of a narrative around ethics and AI task forces, the EU needs strong competition law and appropriate financial penalties. The EU must also invest in European databases, know-how, and AI. Unfortunately, the EU is lagging behind: it does not have the same amounts of data as its competitors (apart from certain centralised sectors, such as healthcare in France or Belgium). But I maintain that the future stakes of international competition will not be AI, but climate and the environment, the government of the real.

An interesting avenue would be to give data a completely different status. Today, it is seen as a good, a positive value for society and the economy. But in a very material and pragmatic way, data is toxic waste. Most data has no meaning in itself. At the moment it is collected, it only has value as an option, as speculation, to be potentially processed, dissected, and used later on. Once data is out there, it stays out there and, like nuclear waste, endlessly so. We should manage the digital environment like the natural environment. The smallest query on Google consumes an incredible amount of energy.

Europe has a role to play, but by fundamentally rethinking the status of data, even if it means doing so negatively. This means requiring regulation of data usage and transparency about its usefulness, its origin, its destination, and even how the collective can or cannot use it. To do so, we need to rehabilitate institutions and regulate this dangerously extractive economy – for the environment and for humans alike – in a digital world that is sold to citizens as existing only in a “cloud” separate from our earthly realities.

Footnotes

[1] French philosophers Félix Guattari and Gilles Deleuze first developed the concept of assemblage theory in their 1980 book, A Thousand Plateaus.