Respect for human rights and democracy needs to be enshrined in Europe’s digital space. If it manages to do so, Europe could establish an alternative vision of technology and artificial intelligence, both for itself and for the world.

In this two-part interview, Green MEP Alexandra Geese and cyber expert Marietje Schaake emphasise the need for the EU to act as one on the geopolitics of technology. They talk about how technology can serve the people, how Europe can set global standards, and why the digital space is central to geopolitical debates. Success will depend on a real European approach committed to investment, digital rights, and coalition building.

Green European Journal: What are the main questions when it comes to defining a European approach to digital technology?

Alexandra Geese: The main issue is what kinds of technologies we are going to master – especially among the various technologies referred to as artificial intelligence (AI). The dominant narrative is that Europe is lagging behind, while the US and China are leading. The question we need to ask is: what strategy should the EU follow?

Currently, the US and China are going in two very different directions. The Chinese state exerts totalitarian control over its population, and right now it is even going after its own companies, which are trying to follow the US model of collecting private citizens’ data. At the same time, private companies in the US have based their business models on building detailed profiles of what they call “users” – but, from a democratic point of view, these users are citizens and should be treated as such. These two visions of digital technology ultimately both depend on surveillance.

After Donald Trump, Europe has seen that it cannot go on simply depending on the US. Europe must have its own geopolitical strategy, and this requires that Europe control its own technologies and resources. If we have our own strategy, we can also decide whether we want to master artificial intelligence technologies through increased surveillance. Do we? I do not think so. That is why we are working on new legislation in the European Parliament and the Council to make sure that, on the one hand, investment is stepped up, but on the other, that it is directed to the right kind of technologies. We need to focus on AI technologies that will make our industry more efficient and help combat climate change, whether that means coordinating smart grids or facing other challenges that come with the ecological transformation. What we do not want is a society based on AI-enabled surveillance and bias. We need to choose, and the time is now.

After Donald Trump, Europe has seen that it cannot go on simply depending on the US. Europe must have its own geopolitical strategy.

Why is technology a challenge to be confronted at the European level?

It is obvious that we cannot compete as single countries. Even Germany has no chance of competing with either the US or China in terms of artificial intelligence or digital technologies. So there is necessarily a need for a European dimension. I think that the European Commission has already recognised this point, but EU member states still need to coordinate their strategies much more closely. They should focus, for example, on European research centres. In Europe, we have a tradition of mid-sized research centres in different cities with different specialisations. We could build on this decentralised tradition while giving it a European objective.

Is it really possible to enshrine values such as democracy or human rights in the digital world?

Yes. But we need legislation that enshrines those values. That is the aim of the Artificial Intelligence Act, but it is not sufficient. We need to avoid everything that goes against our vision of individual free will. Bias in artificial intelligence must be eradicated, and we must say no to biometric recognition, deepfakes, and snake-oil applications like emotion recognition. The same is true for the proposed Digital Services Act and the Digital Markets Act. Citizens’ data cannot continue to be handed over, often with no effective consent, to two or three global companies that will use it to create and sell user profiles. The legislative framework needs to support these aims so that privacy-friendly companies can compete, both in Europe and beyond. This is not impossible; many such companies and initiatives already exist in Europe.

The current situation is not a law of nature. It is the outcome of a lack of legislation. Europe can set different standards. The legislation that needs to be in place will have to include a few prohibitions but, at the same time, it will open to competition a market currently controlled by a handful of companies.

We will also require investment. Europe has a funding problem. China has large amounts of public funding, while the US has a huge venture capital market. In Europe, however, investors remain very conservative. You only get funded if you already have collateral, and it helps if you are male and correspond to the traditional ideas held by investors. A lack of diversity is limiting growth in the digital sector.

Citizens’ data cannot continue to be handed over, often with no effective consent, to two or three global companies that will use it to create and sell user profiles.

What is at stake in the current EU legislative effort?

What is at stake is whether Europe can establish or approve a regulation that really enshrines the democratic values and rights that we have in the Charter of Fundamental Rights, or whether we just pretend to have strong legislation, with so many loopholes that the current US or even Chinese models will prevail.

The Digital Services Act (DSA) is a good proposal by the Commission but it is currently not a game changer. It tackles some systemic issues like algorithmic amplification, transparency, and access to data, but it is definitely not enough.

Take the example of algorithmic transparency: very large online platforms will have to carry out a risk self-assessment that will then undergo an independent audit. But Facebook or Google are never going to admit that their algorithmic amplification systems are a systemic threat to democracy. And nobody can seriously answer the question of who is going to do the audits. Auditors will tell you that – unlike with financial audits, where we have decades of experience – there is no precedent for auditing large online platforms such as Facebook or Google. Without specialised companies, the most probable outcome is that Google and Facebook spin-offs will end up auditing Google and Facebook. What we need is strong enforcement and independent audit organisations that can develop under the oversight of an independent public agency.

In the case of AI, some dangerous practices such as social scoring and Chinese-style biometric identification in public spaces are banned by the proposed Artificial Intelligence Act. But there are too many exceptions that risk undermining fundamental rights. To prevent algorithmic bias, for example, the act requires that training data be representative, but it does not explain what that means. The rhetoric is very good, but the provisions in the legislation are just not sufficient.

Is there a tension between the ethical and democratic regulation of technology and the geopolitical implications of its development?

You do not win a race because you are ethical. That is true. Nevertheless, history shows that totalitarian societies always lose out in the end. I believe that free societies bring about the best solutions. And these free societies have to be defended. During the pandemic, we saw that China and Russia produced vaccines earlier than Western countries, but that Western vaccines were ultimately more effective. Having a free society, including a free press and open research, is the best precondition for developing the best solutions.

The European Commission recently announced plans to invest in the production of semiconductors. What would be your suggestion for Europe when it comes to investment and productive capacity in relation to tech?

Investment is extremely important, and it is underestimated by national governments. The European Commission made proposals to increase its digital research budget, but this was not approved by the Council. So it is not the Commission’s fault but rather the responsibility of the national governments. Europe needs to increase all these budgets, at least tenfold, to keep up with the US and China.

The European Union has some projects, but they are not sufficient. There is, for example, Gaia-X, a federated ecosystem of cloud services and data centres governed by EU data laws but supported by the American hyperscalers. What we would need is a strong European initiative with decent funding to start something completely new. Right now, we still have to rely on Microsoft or Google, and this does not give us real strategic independence. We need more money and more courage.

Should the environmental regulation of AI and the digital economy receive more attention as Europe shapes its digital model?

Absolutely. We need to think about climate neutrality and sustainability strategies together with our digital strategies. The European Commission has two main goals: climate neutrality with the Green Deal, and digitalisation with the legislation on digital services, artificial intelligence, and data. But they are not interlinked.

The Greens would like the Digital Services Act to include a risk assessment in terms of climate neutrality and the environment. Regulation on AI should aim for the same, because large language models consume a huge amount of energy. We need a benchmark for the energy consumption of AI technologies and to think about electronic waste, as well as the extraction of rare earth elements and minerals. You often hear that AI could help save the climate. This is not a given. It could do the opposite, unless we manage to link the two.

The European Commission has two main goals: climate neutrality with the Green Deal, and digitalisation with the legislation on digital services, artificial intelligence, and data. But they are not interlinked.

Could European regulation of the digital world set standards internationally?

The world is looking to Europe. No country wants to go for the Chinese model, and there is also a lot of scepticism about the US model. I sit on panels with people from India, Pakistan, and many other countries who are really interested in how we deal with freedom of expression online in Europe, how we deal with AI, and our approach to digital services. Even in the US, many people recognise that Europe has been a standard-setter with the GDPR [General Data Protection Regulation] – even if we have failed to enforce it properly so far. People around the world see that Europe has the capacity to set new standards. We shouldn’t miss the opportunity.