There is no consensus even within parties on the issue, if it is debated at all, yet advances in artificial intelligence and robotics are set to revolutionise many aspects of our lives, from how we perceive ourselves to how we vote, even challenging the very foundations of democracy. From machine learning to depersonalised services, and from micro-targeting to anxiety, tech specialist Aaron Sterniczky asked MIT professor Aleksandra Przegalińska about the state of play and how politics can stop lagging so far behind these advances.
Aaron Sterniczky: Is artificial intelligence a political topic or is it not?
Aleksandra Przegalińska: It’s a technological and scientific topic, but increasingly it is becoming a political one. It is political because it influences most spheres of our life right now, including healthcare, workplaces, and education. I think high-ranking politicians tend not to view it as such, and tend to neglect it as an issue. Beyond that, it is a big topic because of the very fact that it shapes politics. Think of micro-targeting, for instance, and all the practices related to machine learning and to the various forms of data analysis that tell you who your voter is.
What actually is micro-targeting?
Micro-targeting is an interesting notion – but most of all, it’s a way to do predictive analytics using data that people leave online, mainly on social media, that allows you to understand their political positions and how these positions are shaped. Even from simple ‘likes’ and ‘follows’ and from what people write, you can derive key words, and from those you can build a pretty accurate map of their likely political views. These methodologies, which rely heavily on AI, machine learning, and deep learning, were recently used in various campaigns, by various political centres, to shape the outcome of elections. But obviously micro-targeting is not only about elections, and it is not only about politics. We simply know it from politics because that is the field where it recently became so big and so visible.
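The inference from ‘likes’ that Przegalińska describes can be sketched in a few lines. This is a deliberately toy illustration: the page names, weights, and scoring rule are all invented here, whereas real micro-targeting systems use machine-learning models trained on millions of profiles.

```python
# Hypothetical mapping: page liked -> score contribution
# (negative = leans left, positive = leans right).
# These names and weights are invented for illustration only.
PAGE_WEIGHTS = {
    "GreenFuture": -2,
    "UnionStrong": -1,
    "FreeMarketDaily": +2,
    "TraditionFirst": +1,
}

def infer_leaning(likes):
    """Sum weighted signals from a user's likes into a crude score."""
    score = sum(PAGE_WEIGHTS.get(page, 0) for page in likes)
    if score < 0:
        return "left-leaning"
    if score > 0:
        return "right-leaning"
    return "undetermined"

print(infer_leaning(["GreenFuture", "UnionStrong"]))    # left-leaning
print(infer_leaning(["FreeMarketDaily", "CatPhotos"]))  # right-leaning
```

Even this crude tally shows the principle: each trace a user leaves nudges a score, and the aggregate becomes a label that advertisers can target.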
Our perception of modern politics and the modern citizen is that by gaining knowledge we are capable of shaping and changing our opinions. Micro-targeting in that sense means that if I discover a certain opinion I can reinforce it, which contradicts what liberal democratic thinking held at the beginning of modernity.
This is very true and relevant, because we have a situation of deep polarisation, and that polarisation is enhanced by these technologies. It is not all about persuasion; it is more about working out who is where – classifying, clustering, and doing all the things with data that we usually do with machine learning – to understand who is where and also whom to influence. The others can be influenced by the central node [point of communication redistribution] of the network that you detect: if you can somehow persuade the central node, it will in turn convince that electorate for you. So it is a mixed scheme, in the sense that persuasion is still present, but it is not necessarily placed in the politician’s hands; it is dispersed through the network, so ordinary people influence other people’s opinions. And if you see that a certain person holds certain views, you can obviously target him or her with more advertising, with more content that confirms the views he or she already had.
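Detecting the “central node” Przegalińska mentions is, at its simplest, a centrality calculation on a follower graph. The sketch below uses in-degree (how many people follow you) as a crude proxy for influence; the graph and names are hypothetical, and real analyses use richer measures such as PageRank or betweenness centrality.

```python
# Hypothetical follower network: each key follows the users in its list.
FOLLOWS = {
    "ana":  ["dana"],
    "ben":  ["dana"],
    "carl": ["dana", "ana"],
    "dana": ["ben"],
}

def most_central(graph):
    """Count incoming follows per user; the maximum is the likeliest influencer."""
    in_degree = {user: 0 for user in graph}
    for follower, followees in graph.items():
        for followee in followees:
            in_degree[followee] = in_degree.get(followee, 0) + 1
    return max(in_degree, key=in_degree.get)

print(most_central(FOLLOWS))  # dana
```

Here “dana” is followed by three of the four users, so a campaign targeting this one account would indirectly reach most of the network, which is exactly the dispersal-of-persuasion scheme described above.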
Can you tell when you yourself are being micro-targeted? Is it possible, with all the knowledge that you have?
Even when you look around your Facebook – what is on your wall, what shows up, what is prioritised – you can already see that you are usually in a specific bubble. You may be exposed to other points of view, and you can try to balance that bias, but without that effort you would probably end up surrounded by things and people that simply reconfirm what you already think. So we have all these bubbles – and once a bubble is created you can add even more content to it. It has happened to me several times, especially during election periods, that I am exposed to posts that reassure me that my ideas were just fine. When you are more aware of what is going on you can try to balance it somehow; for instance, as an experiment, I recently started reading newspapers and online posts from a political arena very far from my own views. Immediately my Facebook reacted by offering me completely different choices of products that I might want to buy, people that I might want to add as friends, and so on. I would advise everyone to do that experiment, because it is nice to get out of the comfort zone that even this virtual medium tries to wrap you in.
Now switching to the topic of the labour market – how do you think the deployment of AI will affect the future of work?
This is a huge topic that is slowly becoming more visible to people. We desperately need that debate. The job market is going to change. People will not have professions; they will have on-demand jobs that they perform here and there, in this platform ecosystem. This is the number one change, and it is technology-driven, because machines contribute heavily to the platforms that will be at the heart of how we work. They process the data within those platforms that allows the intermediaries to link people together – connecting those who want something with those who have it; that is what these platforms do. The second big question is which jobs will stay with us, which jobs we will lose, and what sort of jobs will be generated by technology, because so far – and it is still the case – technology generates new kinds of professions. Think about a user experience designer or a coder: these were not common jobs 20 years ago. In the 19th century in the United States, 80 per cent of society worked in agriculture; now it is 2 per cent. The labour market is constantly being reshaped by technology. On the other hand, we are facing a situation where possibly fewer and fewer jobs will be generated, or only jobs for highly skilled workers. So there is a big question mark over what to do about that in the future. If technology is doing more jobs and there are fewer jobs for humans, then where will that take us as a society?
In our current European model, our access to welfare and social protection is linked to our job status. This ties in with citizenship. Taking this into consideration and given the changes in the nature of work and the expected job losses, how can we rethink and adapt our model?
There are a few solutions on the table from progressive circles, for example the so-called ‘universal basic income’, which has been discussed for a while. Some people perceive it as a techno-utopian fantasy, but many are realising that we don’t have much else on the table as far as constructive proposals are concerned. What we have is a situation where machines and technologies will play a bigger and bigger role in the job market, and they will be performing jobs that we are currently doing, better than us. We will be replaced. In many professions a machine that never gets tired, is never sick, and can do simple tasks just as well as humans will probably be seen – at least in the capitalist framework – as a much better employee. This solution of universal basic income is an interesting one. It has been debated in a few countries, but there are only a few pilot projects and it is still not really clear what happens when people get it. Will they search for jobs? Or will they develop themselves, will they study? Essentially, what will they be doing when they get some sort of guaranteed income? This is a big question mark, and there should be more pilot programmes, because the time is now; if we don’t decide on anything at this point we will have a more difficult, more polarised situation in the future. Some countries or regions will be left with a very poor job market and very little to offer to citizens, and there will be riots and problems. Universal basic income and the taxation of robots – or of machines generally – are proposals definitely worth considering. We should partially implement these if research shows they work, although in many circles, particularly corporate ones, these are not popular solutions.
Do you actually consider that politics is lagging behind in all this debate? And do you see any differences between the US and Europe?
What I see is an incubation of technology – so when it comes to start-ups, when it comes to what gets produced, Europe has always taken a different path than the United States. In the US you have more apps, more consumer-oriented start-ups; in Europe you build more hardware and you have a different conception of the purpose of your company, so the agenda is slightly different. I also see differences within the United States. In Silicon Valley there is a conviction that technology will solve essentially all problems, and there are other parts of the US where this belief is not so firm. Europe has not yet reached the stage of embracing techno-solutions as the way forward. Here I see a major difference in mindset: what role does technology really play in your life? In the US there is more hope that it is going to play an increasingly big role and solve our problems; Europe has a more moderate attitude: it can be good, it can be bad, it is an ambivalent tool that can serve very different purposes. But these discussions happen in tech circles, in circles of experts and scientists, and in everyday debates between people – they are not happening in the political field. There, there is only a discourse that technology is opportunity, and very often the people who say it are politicians who don’t really know what they mean. They just say that tech is out there and it is good, but in their minds it is somewhere very far away from them. Yet when you look at everyone’s lives it is actually everywhere. Technology surrounds all our activities, so if you think of it as a distant problem that you only have to address in the future, you are very badly mistaken. Frankly speaking, those political formations that try to address these issues now have a big advantage over others who are not noticing it as a problem they should address.
But if you say let’s address the political issue behind it – what is actually the political issue? Is it the technology itself? Is it you as a researcher doing the work you are doing? Is it the consequences of the implementation of technology?
The societal issues that emerge as consequences of heavy usage of technology – the future of work, if you will – form a political topic that needs addressing. But what also requires some political consensus is ethics, or governance, in artificial intelligence. Politicians don’t understand that Blockchain [a continuously growing secure list of data and records] as a protocol will probably influence democracy as we know it very heavily. Many other technologies can also, to a certain degree, replace political work. Nonetheless, the societal and economic consequences of platform capitalism – of the so-called ‘sharing economy’ – and of the role of artificial intelligence and machine learning within that sharing economy are all issues that require consideration. We need new laws and regulations, because we will soon have lots of new autonomous agents, and we don’t mean only cars. We also mean the development of the ‘internet of things’: lots of sensors that will collect various data and communicate with us and with other objects out there that also have sensors. We will have a society where data constantly flows to different places. How can we safely store and manage that data? These are matters that require regulation and some kind of political consensus. How far do we want to go with AI? Do we want to advance deep learning and machine learning without knowing what is really in it? Do we want to create machines that make ethical or political choices, such as drones and autonomous cars? These are all questions that require a big political debate, and we are really late with that.
So it’s not really about making political decisions but about having the political debates which lead us to these questions. We are dealing with a lot of uncertainties that merge into a big anxiety. One of the biggest deficiencies of our current times is that we don’t have an idea of what we are actually demanding from technology. In other words, there is uncertainty about our labour market, about what a human being actually is if a robot can do what we do. There is the uncertainty of living in a world that produces knowledge and data that we are no longer capable of processing as human beings, so that we need entities beside us doing this work for us. And then there is the uncertainty – your daughter is five years old – do we know what the world will look like when she is thirty?
I don’t think it will be a world of singularity [1] as some authors describe, although I do think it will be a world that is heavily enhanced by technology. For my five-year-old daughter, for example, autonomous carpooling cars driving around the city centre will be the norm; she will be exposed to technology early on. The way she will learn and acquire knowledge will be very different from what we had before. It is hard for me to imagine anything other than more of the same, but more strongly and intensively present. When you look at the trends we observe right now, they are definitely trends of adding data everywhere you can. So I am quite sure that her life will be a life within streams and constant flows of data; maybe the depersonalisation of various services will also be something she faces. Depersonalisation is a trend we can expect to keep growing. By the time she is an adult it might be something spectacular but also scary. The world of technology right now is very different from what people predicted in the past. Isaac Newton would not have understood what the iPhone is about even if we used the vocabulary of the physics of his time to describe to him what he was seeing. To him it would be just magic; despite all his genius and rationality, he would still classify it as impossible. When I think about the world of the future, I tell myself not to extrapolate trends, because that world will simply be magical to me.
[1] The technological singularity is the hypothesis that the invention of artificial superintelligence will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilisation.