The European Venue for Green Ideas

Minds of Their Own? Why We Need to Get Up to Speed on Robotics

By Jan Philipp Albrecht

As advances in robotics and artificial intelligence race forward at breakneck speed, policy-makers have been left scrambling to keep up. The lack of an informed debate at either the national or European level has aggravated this state of affairs. Green MEP and digital expert Jan Philipp Albrecht argues that politicians need to show leadership in this area rather than simply playing catch-up: without information and proper regulation in place, citizens are vulnerable to the more sinister consequences of this technology.

Green European Journal: There’s a lot of talk lately about robotics and artificial intelligence, both in terms of the potential that this technology brings, but also the transformations that it could lead to for society. Concerns have emerged from among the ranks of tech insiders, thinkers and policy-makers. Given your knowledge of the practical as well as policy implications of this issue, do you think people are right to be worried?

Jan Philipp Albrecht: I think people are right to be concerned because with automation and with robotics there will be many new developments in our society which we have to think about very thoroughly, in order to get the right policies in place and to control their consequences. For example, in the social area, studies suggest that by as early as 2030 we could lose around one third, and eventually almost half, of the jobs we have today. This is radical, and its effect not only on the labour situation but also on social systems and on the way we perceive work in our lives will be profound. Elon Musk [inventor and Tesla CEO] highlighted that we are already developing artificially intelligent drones and tanks for warfare, for example, and the question is: how do we make sure that doesn't get out of control, when we know that even the best-kept secrets of the NSA can be hacked by the Russian authorities? We can see that there is a high risk in all of these developments, and we need to be concerned.

As an MEP, you’re well aware of the debate about robotics and artificial intelligence at EU level. Do the positions of Member States vary strongly? And are some states much better prepared than others for the changes to come?

I think that there isn’t such a big difference, because nobody is really prepared; that’s the really bad news. There is no significant debate about getting regulation in place for the age of robotics and artificial intelligence, either within the Member States of the European Union or in the European Commission. Some European states have self-driving delivery robots strolling through their cities, for example, and there is no civil law covering what happens if these robots cause accidents: who is then liable, the producer, the one using the service, or the individual involved in the accident? It is completely unclear, and even these standard, simple questions about the next generation of developments remain unanswered. There is no significant debate on the rest; we just have one first report here at the European Parliament, on liability schemes, but it is full of questions rather than answers.

Greens in the European Parliament have taken a position on this and put forward ten recommendations, through the Green Digital Working Group. What are the main issues that you have been trying to draw attention to?

The most important recommendation that we worked out in our small working group of Greens here is that we would first of all like a clear, mandatory requirement for impact assessments, so that everyone developing these services has to assess the ethical consequences, social issues, and human rights issues. We say that in these areas there need to be clear safeguards to ensure that there is always a human responsible for certain decisions made by machines.

For example, there should be no automation in which machines can do what they want, especially with machine learning, the process by which robots learn new things and can do new things on their own. There needs to be someone who controls what they are learning, because they are like kids: if nobody responsible is supervising them, they might do bad things, perhaps by learning the wrong things, and that can be a risk to everyone. We need to educate these machines, that’s what we are saying; we need to make sure that they are secure, that they are safe, that there is a minimum standard for IT security. This means, for example, ensuring that no-one can hack into a self-driving car while it is driving at 100 kilometres an hour and suddenly release the brakes. That can be really dangerous, and there have already been severe accidents involving self-driving cars.

I think these recommendations are, first, an indication of where we have set standards, but they also call for, as a first priority, a thoroughly informed debate, because we don’t yet know all of the answers to these big transformations either. What we need is a real, informed public debate. Many people need to be involved, and there need to be public authorities and independent bodies to gather information and spread it in an informative way to people out there.

Alongside regulation of the technical aspects of this technology, whether concerning the risks or the potential it bears, there are also profound questions to be asked about how these developments will transform our relationship, as human beings, with technology. Is there a disconnect between the debate that is going on in policy circles and the reality of advances happening on the ground? How can we make technology political again?

It’s a huge challenge to get technology into a political debate in a way that is interesting for everybody, and so that people understand it. We also need more experts, who are not only techies but who are also able to explain what it all means. If, for example, a machine can learn on its own, how can we manage that development in order to make sure ethical safeguards are employed? And how can we connect the discussion about the technological change brought about by robotics to the debate about basic income, for example, if the consequences of this technological development mean that we have to rethink and completely change our social systems?

I think there are many people very concerned and also very much interested in getting into this debate, because this technological change will affect all policy areas. Everything will be put into question, and that’s also why we need to get everybody on board. Here in the European Parliament group, for example, we try to really get people from all the different committees involved, because it touches a whole range of policy areas: it’s about the question of military robots, as well as questions of law enforcement, security, workers’ rights… For example, when you work alongside a robot, does it affect your rights, or the working atmosphere somehow? Health is another area impacted by the increasing use of intelligent implants. These systems could be implanted within our very own bodies, yet it is not clear what rights we have over them. The further along we get in this development, the more aware we become that it has so many different consequences, which are becoming more and more visible.

It seems that the main question with all this new technology is about who is really in control. The famous sci-fi writer Isaac Asimov wrote that there should be certain laws of robotics and these should be that robots should first of all protect humans, they should obey orders and they should preserve themselves. Is this a useful framework for thinking about the future of the relationship between humans and robots?

Absolutely. I think these rules and laws presented by Asimov were the starting point for our thoughts here in the Green group. They are already 50 or 60 years old but they still stand today, and I think it’s the right philosophy. The philosophy is that robots by nature are no better or worse than humans. They are like us, and we need basic values and a common understanding of how we want to live together. We have had the opportunity to learn throughout history by trial and error, and our society has inherited those lessons. But machines haven’t been through this process, and therefore they need to have these laws and basic values instilled from the beginning: you don’t hurt anybody, you don’t take over control of somebody, and of course a machine is not worth more than a human. Because we don’t develop robots in order to create more humans; we develop them in order to have a better life as humans. They are still tools for us, there to serve us, not beings that have to carve out their own space for living. Losing control over robots would also mean losing control of our human values.

Is it fair to say Greens have a tendency or a reputation of being a bit cautious or even technophobic, when it comes to this subject? Should there be more of a push on the side of the Greens for initiatives to work on this issue to move the debate forward?

I think these are two different things. The first question is about your view of technology: do you think that further development is positive by nature, and that even if there are challenges we can manage them; or do you think that it is negative, and that we will face huge threats if we go forward? I think both of these philosophies can be found in the Green Party, as in other parts of society, and it’s good that we debate with each other, because neither of them is true alone; I think it’s a combination of both. We face huge challenges and huge risks at the same time, and we should look at those and decide how we want to go forward, because society will be going forward.

The other debate is that, whatever view you might take, nobody can ignore that this technological development is happening and these challenges are coming. So I think that, regardless of the angle from which we enter the discussion, we also agree that the issue itself is so important that we need to get into it and discuss it. Especially because we have so many viewpoints, we can deliver quite a good result when we start these discussions as Greens; with these different sensibilities we could have a debate representative of society within our own parties. So we should be the first to set up committees and working groups and call on them in order to have an impact on this development, rather than, as others do, just waiting to see. Because with technology, the precautionary principle, for example, is something that we all share in different ways: we all say ‘we don’t want to get naively into a new technology like nuclear energy or coal energy without thinking about the implications’. We want to think sustainably. And that also goes for technological development.

Are you positive about the capacity for Greens and others to work together on this at a European level, to ensure it is not left only to those who are developing this technology in the future?

Absolutely. I think many people look at us and recognise that the Greens joined this debate quite early, and have also been quick to address the fundamental questions about ethics and social implications, the whole question of military and defence, and the other fundamental rights perspectives that we have to put forward. I think that many people are really looking to us Greens, seeing that we are an important actor and that we can really influence the further development of regulation in this field.
