Machines increasingly take difficult decisions on our behalf. As algorithms gain ground in applications such as credit scoring and predictive policing, what are the consequences for our societies? How can we make sure that automated decision-making works for the public good, and not against it? Aaron Sterniczky sat down with Matthias Spielkamp to decode the debate on algorithms.

Aaron Sterniczky: Matthias, you are the founder of AlgorithmWatch, a non-profit advocacy and research organisation focused on the consequences of algorithmic decision-making on societies.

Matthias Spielkamp: Well, firstly, I am a co-founder. The initiative was founded by a data journalist, a computer scientist, a legal philosopher, and myself, a journalist. What we had in mind was to advocate for a more evidence-based discussion about the applications of automated decision-making systems. We don’t have an ideological stance in the sense that we say these developments are either good or bad. What we’re saying is that if we have extensive technological changes in society, then we need to look at them very closely, but without hysteria and without the assumption that technology will solve all the world’s problems. Those are the two extremes: people fearing that robots will take over our world and kill us all, and, on the other hand, those who believe that big tech companies or governments are purely benevolent and only have the public good in mind.

What exactly is algorithmic decision-making and how is it applied?

That’s a hard question to answer, because ADM (automated decision-making – we sometimes call it algorithmic decision-making) has been established for quite a while as a term meaning that there is a system that is, in some technical form, organised to do something that so far only humans have done. If you take this basic definition, a machine sorting peas by size is an ADM system. We would not consider that system something we need to be preoccupied with, but the example shows that it’s a sliding scale: we cannot define exactly where an ADM system becomes relevant to society. What’s clear is that we only want to look at systems that are embodied in some kind of technology. But that can be software programmes as well. Embodiment does not mean a robot; it can also be an agent that exists only as software. Your personal digital assistant, for instance, is embodied in your phone, but the phone itself is not the assistant. What’s especially important to us is to approach ADM as a system, not a technology, taking all aspects into account: from the decision to use it for a specific purpose, through its design, to its eventual deployment.

So the idea of AlgorithmWatch is that there is a phenomenon that we need to make transparent and discuss before we can understand it, in terms of the risks and challenges it poses and what we have to demand from it.

We try to point out that transparency is a difficult thing to define. Some people want to understand the logic, others want to look at the code. We are leaning more towards the side of those who say that what’s really important is that humans are in a position to understand what the system is supposed to do and whether it is achieving this goal, rather than just identifying the source code and putting it out into the open. We appreciate it if someone does that – open source is a great approach – but at the same time, we argue that in order to understand the consequences of an ADM system, looking at the source code of the software used in it is neither necessary nor sufficient. We need to look at the purpose, the models used, the data collected for it, and so much more.


The hope is that we will be able to improve our current democracies and use these systems to further the public good. There is a lot of hope that we can do that, but there are also risks attached. The risk is that the systems become so complex, or that we use so many of them, that we will have a hard time keeping track of everything, because auditing them – finding out how they actually work and whether they do what they are supposed to do – is a very complex and difficult task. If I do see a risk, it’s that we fail to develop mechanisms to address this. But I am still optimistic that we can.

Algorithms, along with questions of big data and artificial intelligence, now make the headlines and are moving onto the policy agenda. What’s your view of the current paradox of the digital economy, which treats data as its fuel – with tech giants using our data at all times – while most people are in favour of data protection?

I do think that people can have privacy while their data is being analysed; it depends on how it’s done and what kind of data it is. For example, when data is anonymised there is no risk for people’s privacy but a lot to be gained for research. Also, we are in favour of looking at actual harm being done instead of arguing that all data processing is a risk to people’s privacy. We don’t have total control of our personal information; we never do. Rather, it’s a question of what degree of control you would like to have and what you would be willing to give up in order to gain something. There is no definite answer to that. We’ve been discussing the European General Data Protection Regulation (GDPR) for nine years now, and I think the discussions will not stop when it enters into force.

As of May 2018, EU citizens will be able to invoke the GDPR. It is without precedent to have an EU-wide regulation on data protection. Is the GDPR fit for purpose? Is it paving the way to real protection against the defects and dangers of ADM?

At AlgorithmWatch, we have the impression that it misses some very important criteria. For example, on ADM, it only applies to fully automated systems, meaning systems with no human in the loop anymore. But if you have a scoring system that grants a mortgage only for a score above 97 per cent, and the system calculates someone’s score at 96.8 per cent – then what is the human going to do? Most likely, she or he will reject the application because the system says so, since doing otherwise would require specific justification. As with many aspects of the GDPR, we’ll only find out how far it actually reaches through future jurisprudence, because it will probably be the most contested piece of legislation in the courts in the coming years.
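To make that threshold scenario concrete, here is a minimal, purely illustrative sketch in Python. The threshold, the applicant’s score, and the function names (system_recommendation, human_decision) are hypothetical and do not describe any real lending system; the point is only that a nominal ‘human in the loop’ has little practical room to overrule the score.

```python
# Purely illustrative: the 96.8 / 97 mortgage example from the interview.
# Threshold, score, and names are hypothetical; no real system is described.

APPROVAL_THRESHOLD = 97.0  # the system grants a mortgage only above this score


def system_recommendation(score: float) -> str:
    """The fully automated part: compare the score against the threshold."""
    return "grant" if score > APPROVAL_THRESHOLD else "reject"


def human_decision(score: float) -> str:
    """The nominal 'human in the loop': overruling the system would need
    specific justification, so in practice the recommendation is followed."""
    return system_recommendation(score)


applicant_score = 96.8
print("system says:", system_recommendation(applicant_score))
print("final (human) decision:", human_decision(applicant_score))
```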


But there are also collective consequences of ADM beyond personal data and information. For example, you can use predictive policing systems without using personalised data. The ones being used in Germany do not use personalised data; they just look at crime statistics and try to predict where more crimes will occur. Can predictive policing have an effect on society? Yes, because otherwise you wouldn’t use it. You can have positive effects, like less crime, but also negative effects: for example, it could stigmatise a city district by defining it as dangerous – with the result that you have police patrols everywhere and people feel even more insecure. This is just a scenario, but the argument is that there is a collective consequence of using technology that cannot be addressed with privacy law. So how do we address it? We still don’t know; we don’t have adequate tools for this yet.
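As a rough illustration of the kind of place-based, non-personalised prediction described above, the toy sketch below ranks districts purely by historical incident counts; the district names and numbers are invented, and real systems are considerably more elaborate.

```python
# Toy sketch of place-based, non-personalised predictive policing: only
# aggregate crime statistics per district are used. District names and
# incident data are invented for illustration.
from collections import Counter

past_incidents = [
    "Nordstadt", "Nordstadt", "Altstadt", "Hafen", "Nordstadt", "Hafen",
]


def predict_hotspots(incidents, top_n=2):
    """Rank districts by historical incident counts and flag the top ones."""
    return [district for district, _ in Counter(incidents).most_common(top_n)]


print("Patrols directed to:", predict_hotspots(past_incidents))
# Feedback-loop risk: more patrols in a district can mean more recorded
# incidents there, reinforcing the 'dangerous' label over time.
```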

Can you describe phenomena and decisions that are already happening based on ADM, and future applications as well?

There is one very famous example, and that is risk assessment of criminals. Investigative journalists from ProPublica looked at a system that had assigned risk scores to criminals. Those scores then serve as grounds for a human decision by a judge on whether the person is granted parole, stays in jail, or is released conditionally. That is a very stark example, because many people see it as very intrusive in the sense that people’s liberty rests on it. If the computer gives you a very high risk score, the judge will probably not decide to release you. Starting from that point, it becomes much more difficult to define what these automated systems are. It’s not only private entities using them; take automated border control, for example: a huge collection of data analysed by electronic systems that have categories and criteria to sort it, and these systems then give a result indicating who should be screened more intensively. You could easily imagine a result like ‘don’t let him or her board the plane’. Then there would be human intervention, because you can prevent someone from boarding a plane; but if there is no warrant out for this person, then they would have to present a danger to justify it. If you screen them and they don’t have a weapon or a bomb, why would you not let them board the plane?


A good example that we are looking at right now is credit scoring, which in Germany is done by private companies. That isn’t a problem in itself; they are allowed to do it, and there is regulatory oversight. Credit scoring is so important in people’s lives because it is used to determine whether or not you get a mortgage, which mobile calling plan you can get, or whether you find an apartment. What we argue is that companies can do more to explain the logic behind what is going on, so that people know better what influences their score. If you ask the company to provide your score data, which in Germany you have a legal right to do, and you receive a completely unclear set of data that not even an expert can make sense of, how are you going to argue that your score is wrong and that there is a problem you would like corrected?
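As one hypothetical illustration of what ‘explaining the logic’ could look like, the sketch below uses a simple additive score whose per-feature contributions can be reported back to the person. The features, weights, and base score are invented and are not how any real credit bureau computes its scores.

```python
# Hypothetical sketch of a transparent, additive credit score: every feature's
# contribution can be reported back to the person. Features, weights, and the
# base score are invented; this is not how any real credit bureau works.

BASE_SCORE = 80.0
WEIGHTS = {
    "years_at_current_address": 2.0,
    "missed_payments_last_year": -15.0,
    "existing_credit_lines": -3.0,
}


def explain_score(applicant: dict) -> float:
    """Print each feature's contribution and return the total score."""
    score = BASE_SCORE
    print(f"base score: {BASE_SCORE:+.1f}")
    for feature, weight in WEIGHTS.items():
        contribution = weight * applicant.get(feature, 0)
        score += contribution
        print(f"{feature}: {contribution:+.1f}")
    print(f"total score: {score:.1f}")
    return score


explain_score({
    "years_at_current_address": 4,
    "missed_payments_last_year": 1,
    "existing_credit_lines": 2,
})
```

A breakdown along these lines is what would allow a person to see which inputs drive their score and to argue that a specific entry is wrong and should be corrected.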

Some people perceive a threat from ADM in the form of determinism – that their past actions will inescapably come back to affect them in the future. This is most telling in the case of criminality and reoffending, where ADM systems may well not factor in an individual’s will to reform.

I think that’s a very important point. American science fiction writer Lee Konstantinou said that the “tyranny of algorithms is nothing more than the tyranny of the past over the present.” Deterministic systems are a big danger. But the fact that your actions have consequences has always been there. If anything, people used to have less opportunity to contest this, because when humans make those decisions, they are often neither rational nor open to scrutiny. They just look at you and say, ‘you don’t get a mortgage’. In many cases, ADM could be used to improve the situation.

With these deterministic arguments, we need to look at how we stay open to individual decisions and people’s right to change their lives. But again, this is a discussion that is not determined by technology. In Germany, if someone has a spent conviction – meaning, for example, that they have served a prison sentence – the media are generally not allowed to identify the person in their reporting, even in cases of murder. In the UK, on the other hand, if the prison term is more than four years or was an extended sentence for public protection, then the conviction will never become spent. This serves to show that values differ from country to country, and we will probably never reach a consensus on how to deal with this, but the discussion is really important.

How do we eliminate any kind of bias from ADM? Is it possible to build ethics into decision-making technology?

We have always had biases in society. That makes bias tremendously hard to eliminate from ADM systems, because they are based on analysis of data, and that data will have biases built into it. But there are means to identify these biases and ways to correct them. In comparison, “it’s very hard to unbias a human being”, as algorithmic fairness researcher Krishna Gummadi said. And let’s not forget that we are not yet at the stage where automated or semi-automated decision-making determines our lives as individuals all around us. In many cases, automation could be a big help in addressing bad and biased decisions, because evidence-based decision-making is usually a good idea. Social transfer systems, such as unemployment benefits, could be made much fairer by using automation or semi-automation to determine resource allocation.
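As a minimal illustration of one way to identify such bias in a system’s outputs, the sketch below compares approval rates across two groups (a simple demographic-parity check). The records are invented, and a real audit would need far more than this single metric.

```python
# Minimal sketch of one way to *identify* bias in an ADM system's outputs:
# compare approval rates across groups (a demographic-parity check).
# The records are invented; a real audit needs far more than this one metric.

decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]


def approval_rates(records):
    """Return the share of approved decisions per group."""
    totals, approved = {}, {}
    for r in records:
        totals[r["group"]] = totals.get(r["group"], 0) + 1
        approved[r["group"]] = approved.get(r["group"], 0) + int(r["approved"])
    return {group: approved[group] / totals[group] for group in totals}


rates = approval_rates(decisions)
print("approval rates by group:", rates)
print("demographic-parity gap:", max(rates.values()) - min(rates.values()))
```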