While new technologies often seem designed to make people’s lives easier and institutional judgments ‘fairer’, what they frequently do is entrench pre-existing biases and exacerbate the impacts of discrimination. This outcome is all the more likely given that many of our cool new products and services are largely developed and deployed by the corporate tech elite. This group, overwhelmingly affluent, white, and male, dominates the executive boards of major companies, the line-ups of speakers at prominent tech events, and the lists of experts and advisors assembled to answer the important questions raised by questionable uses of technology. It is a toxic structure, one that gives little to no voice in decision-making to those who are already marginalised.

The results of this bias are already being felt: virtual reality systems that make women disproportionately motion sick; facial recognition systems that cannot recognise black or brown faces, or that misidentify them as animals; and ‘sexy’ lady robot dancers deployed to entertain guests at events.

Algorithms at large: calling out the bias

A primary example of this sort of ‘discrimination by design’ lies in the expanding use of algorithms. While algorithms have increasingly been making headlines and working their way into public awareness, most of us are already acquainted with what they do. After all, algorithms drive our search results, deliver our advertising, and tell us who to be friends with. This may sound great, but the use of algorithms extends well beyond these – seemingly harmless – services. Today they may also tell police where to patrol, tell judges how long to imprison the accused, and tell stockbrokers when to buy or sell. And, in the not-at-all-distant future, they will be used in almost every part of our lives: they will tell our self-driving cars when to brake, tell educators which children may be harder to teach, and advise doctors on which hospital patients to treat or ignore.

But algorithms do not decide things independently – they are designed by people and built to analyse large data sets and produce conclusions based on previous outcomes. This process opens the door to discrimination at several points: the inherent biases of designers, flawed or unrepresentative data sets, and undue weight given to arbitrary or inappropriate factors. And this all plays out in real life. For example, facial recognition algorithms used across the United States have misidentified black faces at least twice as often as white faces. Error rates like these make it far more likely that a black person will be falsely flagged as a criminal, exacerbating pre-existing biases in the US toward jailing non-white individuals.
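
To see how this happens mechanically, consider the toy sketch below – a hypothetical Python simulation with entirely made-up numbers, not drawn from any real system. A ‘risk score’ is learned from historical records in which one group was recorded twice as often for identical behaviour; because a proxy feature (here, an invented ‘neighbourhood’ value) correlates with group membership, the model ends up flagging innocent members of that group far more often, even though the group label is never given to it.

```python
# Toy illustration only: all numbers are invented, no real data or system is modelled.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

group = rng.choice(["A", "B"], size=n)                # protected attribute (never shown to the model)
# Proxy feature that correlates with group but not with behaviour.
neighbourhood = np.where(group == "B",
                         rng.normal(1.0, 1.0, n),
                         rng.normal(0.0, 1.0, n))
offence = rng.random(n) < 0.05                        # true behaviour: identical 5% rate in both groups
# Historical records: group B is recorded twice as often for the same behaviour.
recorded = offence & (rng.random(n) < np.where(group == "B", 0.90, 0.45))

# "Model": score each person by the historical recorded rate in their neighbourhood decile,
# then flag anyone whose decile looks riskier than average.
deciles = np.digitize(neighbourhood, np.quantile(neighbourhood, np.linspace(0.1, 0.9, 9)))
decile_rate = np.array([recorded[deciles == d].mean() for d in range(10)])
flagged = decile_rate[deciles] > recorded.mean()

# Audit: how often innocent people in each group get flagged.
for g in ("A", "B"):
    innocent = (group == g) & ~offence
    print(f"group {g}: innocent people flagged = {flagged[innocent].mean():.1%}")
```

Running the sketch shows innocent members of the over-recorded group being flagged at roughly twice the rate of the other group – the bias in the historical records passes straight through to the ‘neutral’ score.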

And when machine learning is introduced – automating the creation and operation of algorithms – it may become even more difficult to find the built-in biases, since those operating the algorithms may have no idea what data or process was used to arrive at a conclusion. The solution, according to many, is to require far more transparency and accountability in the deployment of algorithms. This is not simply a matter of ‘algorithmic transparency’ – few people can tell what an algorithm does just by looking at it. Instead, it requires explaining, at a minimum, the data being fed into the algorithm, the conclusions it is intended to reach, and its results over time, so that real people can understand and evaluate its use.
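
What such an explanation might contain can be sketched in a few lines of code. The structure below is purely illustrative – `AlgorithmDisclosure` is an invented name, not an existing standard or library – but it captures the three elements just described: the data fed in, the conclusion sought, and the results over time, broken down so that disparities become visible.

```python
# Hypothetical disclosure record; the class name, fields, and figures are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class AlgorithmDisclosure:
    purpose: str                                    # the conclusion the system is asked to reach
    input_data: list[str]                           # the data fed into it
    outcomes_by_period: dict[str, dict[str, float]] = field(default_factory=dict)

    def record_period(self, period: str, rates_by_group: dict[str, float]) -> None:
        """Store the share of people in each group who received an adverse decision."""
        self.outcomes_by_period[period] = rates_by_group

    def report(self) -> str:
        lines = [f"Purpose: {self.purpose}", f"Inputs: {', '.join(self.input_data)}"]
        for period, rates in sorted(self.outcomes_by_period.items()):
            detail = ", ".join(f"{g}: {r:.1%}" for g, r in sorted(rates.items()))
            lines.append(f"{period} adverse-decision rate – {detail}")
        return "\n".join(lines)

disclosure = AlgorithmDisclosure(
    purpose="predict failure to appear in court",
    input_data=["age", "postcode", "prior recorded offences"],
)
disclosure.record_period("2018-Q1", {"group A": 0.12, "group B": 0.27})
disclosure.record_period("2018-Q2", {"group A": 0.11, "group B": 0.29})
print(disclosure.report())
```

A report of this kind does not require publishing the algorithm itself, which is precisely the point: it is the inputs, the purpose, and the outcomes over time that let real people judge whether a system is entrenching bias.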

Data protection for the elite

There are many remaining areas where the work needed to protect against a sort of ‘technochauvinism’ – the embodiment of hate and discrimination in day-to-day technologies – has not even begun. The lack of inclusion has been particularly harmful in the cybersecurity arena, where insecure data and devices can have devastating impacts on marginalised communities. We can see this in the spread of so-called ‘revenge porn’, or in the way repressive governments and hate groups target users who download or use apps associated with the LGBTQI community.

Unfortunately, bad digital security practices mean that data breaches happen constantly. But, rather than engaging with how data breaches affect individuals and how to mitigate those impacts, government officials are instead pushing rhetoric that may lead to even more compromised data. Across the world, including in EU countries, leaders have taken aim at encryption: a means of protecting data in transit and in storage. In the name of preserving access to the communications of criminals – even though no currently proposed solution has demonstrated any real likelihood of accomplishing this – governments seek to limit the strength of the security that companies can offer their users.
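
For readers unfamiliar with what encryption actually does for stored data, the short sketch below uses the open-source Python `cryptography` package to illustrate the idea (the example data is, of course, invented): whoever obtains the stored ciphertext in a breach learns nothing without the key. This is precisely the protection that anti-encryption mandates would weaken.

```python
# Minimal illustration of encryption at rest with the "cryptography" package
# (pip install cryptography). The plaintext here is invented for the example.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice the key is kept in a separate key store, not beside the data
fernet = Fernet(key)

token = fernet.encrypt(b"location history of an at-risk activist")
print(token)                  # unreadable ciphertext: this is all a data breach would expose

print(fernet.decrypt(token))  # only the key holder can recover the original data
```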

This can only serve to exacerbate pre-existing biases: the marginalised populations that benefit the most from off-the-shelf security will also be the most likely to end up using the less-secure products. And as mandates against encryption spread, communications will no longer be safe for LGBTQI individuals in Saudi Arabia (where being gay is punishable by death) or activists in Russia (where criticism of the government is treated as extremism). Where products like WhatsApp and Signal brought strong security to these at-risk voices, government mandates could strip it away, leaving protection only for those who can afford (and know to use) alternative tools and services made available through open-source communities or in other jurisdictions.

GDPR: a step in the right direction

The General Data Protection Regulation (GDPR) – which took effect on 25 May 2018 – is a bright spot on the horizon when it comes to many of these issues. For example, the GDPR is widely read as providing a “right to explanation”, aimed at ensuring that people are informed about the logic of the algorithms used to make decisions about them. While it remains to be seen how the courts will apply that right, it provides a plausible path forward and offers hope that users will be able to access enough information to know when algorithms are being used to further ingrain biases.

Even better, the GDPR separately requires companies to notify data subjects of a breach whenever it is likely to result in “a high risk to the rights and freedoms of natural persons”. While this language is open to interpretation, it means notification will go beyond the financial information to which many data breach notification laws around the world are currently tied. As this notice requirement takes effect, it may come to light for the first time just how many data breaches there really are. While some worry that this could lead to ‘notification fatigue’ – a situation where users receive so many notifications that they stop acting on any of them – it is still likely to be a substantial improvement on the status quo.

In the tech space, the EU has a chance to be a global leader and to set policies from which people around the world benefit. The GDPR is a great start, but even the GDPR does not go far enough in all areas. For example, despite mentioning encryption five times, it may not be sufficient to protect this technology, as leaders across the European Union continue their efforts to undermine this critical digital security technique. On this front, everyone can contribute and take action now: policy leaders must issue statements supporting encryption and invest in its development and use. The Netherlands, for example, has produced a fairly strong statement, which could serve as a guide for others looking to ensure protection against discriminatory mandates.

Additionally, vigilance must be unwavering. Technology develops quickly, and while new technology-neutral rules and laws may be well-meaning, technical developments could still bring about unforeseen circumstances that put human rights at risk. Staying abreast of those developments – and, for law- and policy-makers, keeping technical experts on staff who can explain in detail how new tools operate – will help ensure that we are prepared for major advances.

The challenge ahead: programming a fair future

Bold, forward-thinking acts are essential to protect marginalised and vulnerable communities around the world, if not everyone. Such communities should have a central role in crafting the policies that will pave the way into the future and govern the tech that has become a part of everyday life. With the increasing importance of algorithms and cybersecurity, not to mention the expanding internet of things and the advance of artificial intelligence, it is critical to foster and develop the voices that can identify hidden issues before they spiral out of control.

The #MeToo movement is proof that discrimination has no place in the present. But as the future brings more powerful technologies, we must take on a new sense of responsibility and ensure that we are not unwittingly creating an infrastructure that weaves discrimination back into society. Proper investment in legal protections and robust regulations, including meaningful protections for human rights, may be the only way to keep ourselves from ending up in a horrible technochauvinist dystopia.