
Who’s in Charge Here? Humans in a Robot World

By Wendy Hall

Digital reality today is a long way from the free, open space it started out as. As a handful of tech giants monopolise the Internet and automation speeds ahead, tectonic technological and societal changes are approaching. Web trailblazer Wendy Hall discusses the state of the Internet and artificial intelligence today, and what this will mean for the world of work, education, and the society of the future.

Green European Journal: You were an internet pioneer when you were involved in creating a precursor to the World Wide Web, Microcosm. If the internet started as a free and open space, today it’s dominated by a few big firms and monitored by governments. How did we arrive at today’s internet?

Wendy Hall: What we are talking about here is the Web on the Internet. It was really Tim Berners-Lee building the Web as a system on top of the Internet that led to what we see today. It was a big, live, global experiment – more than anything else in technology has been. There were many ways to implement such a system, but the Web emerged as the winner because Tim gave it away. As he designed it, the Web was easy to use, based upon open protocols and standards, and simplified down to a ‘click and go’ client–server model. That simplicity was transformative as it allowed for a system co-created and co-constituted by people and machines.

Tim’s guiding thesis was that either everyone will use it, or no one will. This idea of co-creation became the business model for the entire Internet. If you’re going to launch something into this space, you have to persuade thousands, even millions, of people to use it before you can start making money. Google is useless without lots of people using it, and the same goes for Facebook, Twitter, Amazon, and Alibaba. These platforms are only useful because millions of people use them, and – because of the open way in which Tim designed the Web – there are no economic barriers to that use. The resulting business models are very different to normal business models. In some cases, it’s still not clear how they make their money.

Looking back, you can see how effective monopolies emerged. There are many search engines; in the West, the dominant one is Google, while in China it’s Baidu. The same can be said for social networks – some have been and gone, like Myspace, but today Facebook dominates outside China, where WeChat is widely used. These networks are now giant attractors, and it’s very hard for a competitor – no matter how much better they are – to emerge and take over. Facebook may die for other reasons, but not because someone has brought forth a new social network that’s bigger and better; it doesn’t work like that.

There are many things we recognise retrospectively, and now we are having to retrofit security and access controls that could have been built into the standards. We didn’t think about the bad things people would want to do with this system; we thought about the good things: communicating, sharing information, building communities, and spreading democracy. There’s an argument that says we should stop and start again now that we know what works and what doesn’t, but is that in any way feasible?

Is the Internet at a crossroads today?

As my great friend Manuel Castells, the author of many books about networks, has said: whoever controls the data has the power. At the moment, that data is in silos. Who knows what Google or Facebook know about us, or how they can manipulate us by manipulating our data? But our data is spread across different companies and the government. In China, they are pulling all this together to build a surveillance society; it is all about who actually controls the data. Now, they are talking about using artificial intelligence on this data to create more efficient services, generate more knowledge, and do things we could not have dreamt of a few years ago. But it’s quite scary and dangerous because of who owns the data that the algorithms are going to be trained on, and how it might be biased or commercialised. The power is concentrated in a small number of very large companies.

The use of the Internet has extended to areas we could scarcely imagine, with developments like machine learning. Are we seeing only the tip of the iceberg when it comes to the use of data by artificial intelligence?

Absolutely. The big companies are already using it – not necessarily deep learning, but definitely artificial intelligence. For example, Google’s recommender system uses artificial intelligence to tell you what you want to know and, scarily, what you say in emails is used for personalised adverts. That’s simple artificial intelligence compared to deep learning, but it comes back to who’s got the data and what can be learnt from it. An obvious example is health: if you were dying from some incurable disease, you would happily give all your data to someone who said they could use it to cure you. On the other hand, you wouldn’t want your insurance company to have access to your medical records. There is a whole dilemma as to what privacy is, what you keep private, and what you open up. This is a big question for society going forward, and we can’t stick our heads in the sand. It’s up to us to sort it out.

Tools such as deep or machine learning could be used in many sectors. Thinking of the world of work, what could be the impact on jobs and work as we know them today?

As with all technological revolutions, over time it will increase the number of jobs, but there will be short-term winners and losers, and some current jobs will disappear. We’ve been living through an amazing technological revolution, the computer revolution, in which huge numbers of jobs have been lost. When I was a child, all accounting was done by hand. My father was an accountant and he had to multiply pounds, shillings, and pence without a calculator. All those jobs are gone. Computers now do very sophisticated things in finance, where artificial intelligence is used to decide which shares to buy or sell – to the point that, by the 2008 crash, nobody really understood the flow of money in the world. Yet we have more jobs in the financial industry than ever. Even in a world where an algorithm takes the decisions, the jobs that are needed won’t necessarily be for machine learning programmers; they will be for people to curate the datasets and check for biases, for example.

In this role-reversal scenario, machines will have an army of humans to do the tasks they can’t

Much of what I have learnt comes from science fiction, and I see a scary scenario for the future when we have robots that manage our lives. There will be things the robots will not be able to do, and there will be jobs for human beings to do things for the machines. In this role-reversal scenario, machines will have an army of humans to do the tasks they can’t. It’s not necessarily that these machines will be intelligent; it’s just that the companies that make them will need to employ human beings, probably at a low wage, to do what the machines can’t do. We don’t talk about that enough.

Stephen Hawking famously gave us no more than 100 years on this planet before we wipe ourselves out with something. He always said that if we do create machines that are as intelligent as we are – even if it is just machine intelligence, without that which makes us what we are, the biological and conscious being – it could be the end of the human race, because machines can evolve far faster than we can.

Staying with the sci-fi theme, do you see a split between human and artificial intelligence or human-machine cooperation as more likely in the future?

There is the idea of the ‘human in command’, close to Isaac Asimov’s idea that we should design machines to limit what they can be responsible for. If you think about the uses of artificial intelligence for social good, for now it is largely going to be about machines augmenting human capabilities. Artificial intelligence could augment the teacher or the doctor rather than replace them. I do still maintain that down the line companies will have to employ armies of people to carry out the instructions of the machines.

So you think we’ll all end up working for robots in the end?

Not all of us necessarily, but people who are poorly educated and need money to survive will take menial jobs and not get paid much. But to return to the care example: machines, at the moment at least, are not going to have the empathy or the physical capabilities to really care for people like another person can: to bathe someone, dress someone, take them to the toilet, and deal with them when they’re in distress. We absolutely need to value people in care jobs more. If as families we’re not going to take those in need of care into our homes like we used to, then in a moral society we have to make sure that the people who do that on our behalf are valued.

Can you tell us about your work on the British government review, ‘Growing the artificial intelligence industry in the UK’?

The review was very much focused on how we’re going to make things work in the UK, but I like to think it was a bit of a blueprint for what any country would want to do, particularly on skills and data trusts. Our focus was on enabling companies to flourish in a safe, shared ethical framework and on ensuring we train people with the skills to provide the industry with ‘oven-ready’ programmers. We also emphasised the need for a socio-technical approach that would use interdisciplinary teams to provide training for people who will be losing their jobs.

Technological developments have the potential to worsen social inequalities of all kinds. What can we do as a society to counteract this risk?

Education, education, education. People will have to be educated in a way that makes them aware of technology and prepares them for new types of work. In some cases, a good algorithm will be much better than a badly trained human in a given role. But we have an opportunity, if we choose to take it, to use this debate about work and the rise of automation to radically rethink what we value about society. Almost every part of the world will have to tackle ageing populations, and people with chronic diseases or disabilities will be in need of care. As I described earlier, we’ve got to start placing more value on the care work that machines are not going to be capable of (certainly not in the short term), because otherwise we will widen the gulf between the haves and the have-nots.

we have an opportunity, if we choose to take it, to use this debate about work and the rise of automation to radically rethink what we value about society

What should students in schools and universities be learning? Code or STEM subjects, for example?

The big problem here is that, laudable as the aims are, we don’t have enough teachers who know about computer science to make it work in schools. The British government put close to 100 million pounds into the budget in November 2017 to train computer science teachers. It’s not that everybody needs to become an expert programmer, but they do need to understand how computers work. To understand what an algorithm is, you’ve got to understand what computational thinking is. So it is very important that kids grasp these concepts so they can enter the world of work able to talk about what algorithms and data are whilst understanding the issue of bias.

These issues cannot all be solved technically – there is a spectrum in terms of whether solutions are mostly technical or require societal change. First, think about security – making your data, your computer, or your company secure. Security can largely be viewed as a problem that can be solved technically, although of course people still have to take it seriously. Second, take something like fake news, which I don’t think can ever be completely solved technically. One person’s fake news is another person’s absolute truth. People have always lied, and there will always be those who try to manipulate others. At the other end of the spectrum, take something like cyberbullying, a societal problem that has been amplified by what can be done online but for which there is no purely technical solution. On this socio-technical spectrum, you need an understanding of the technical aspects but also a broad social armoury of skills to protect you from being bullied or to help you judge what may or may not be fake news.

How do we get to this community-based caring economy?

My generation was very blessed: we’ve had no world wars, I had free education from the age of 5 to 21, and free health services. I’ve always had a job, a pension, and potentially a long life ahead of me. People like me who have disposable income will need looking after if we live long enough. That will change things, and I think we will use artificial intelligence up to a point to create the caring environment we want around us. But with conditions like dementia, there comes a point where you need a caring society or you might as well bring in euthanasia. As this generation, which has lived longer than any before and has benefited so much, gets to that point, it may well bring in that social revolution.

