A US-based neurotechnology company is making advances in creating artificial intelligence that could be merged with the human brain. This technology could potentially have a wide range of applications – including enhancing our cognitive abilities. Given the rapid social transformations brought about by the internet, will this technology be regulated before, rather than after, it changes our lives forever?

In April 2021, Neuralink – a brain-computer-interface company co-founded by billionaire entrepreneur Elon Musk – released a video of a monkey playing the video game Pong using only its brain signals. The monkey had had a wireless device implanted in its brain six months before the video was shot.

This came less than two years after Elon Musk revealed in July 2019 that Neuralink, a company that had been operating in secret with US government approval since 2016, had performed successful tests of its device on mice and monkeys. Neuralink has announced its intention to start human trials by the end of 2021.

Launching a dangerous mind game

Neuralink’s ostensible goal is to assist people with brain and spinal cord injuries and congenital defects, including paraplegics who have lost their ability to move or sense, and those with degenerative cognitive diseases, such as Alzheimer’s. However, Musk has already stated that his ultimate objective is to create a “digital superintelligence layer” connecting humans to artificial intelligence (AI), which he considers to be an “existential threat to humanity.”

Musk has cautioned that, if left uncontrolled, AI could end up governing the world as “an immortal dictator” and that, to avoid becoming irrelevant, humans should merge with AI. Paradoxically, this suggests that Musk believes that, while humans could not control an external AI superintelligence, they would be capable of keeping it in check when merged with their own brains.

Musk’s stated intention of fundamentally altering the human experience should raise concerns about the complicated web of ethical and existential questions that the creation of a brain-computer interface poses for the future of humanity. What will the consequences be for consciousness and the notion of humankind as we know it? In a world ravaged by increasingly entrenched inequality, who will have access to this technology and with what consequences? Privatising and hijacking human consciousness seem to be the ultimate frontier for capitalism.

The secretive development of Neuralink, which was kept quiet from 2016 until its launch in 2019, also raises a fundamental geopolitical question. As more high-stakes technologies are developed with the potential to disrupt the evolution of not just the global economy but humanity itself, how should decisions about them be made? Should individual countries be free to pursue such developments, despite their potentially vast and irreversible implications for the world? Or should an international regulatory body be established to closely monitor technological development? Now that Neuralink is on the cusp of human trials and a step closer to wider distribution, raising such questions is more urgent than ever.


Shifting human consciousness

Neuralink works by converting the electrical signals generated when neurons fire across synapses into data that a computer can interpret: this effectively allows the computer to read what the person is thinking. The chip is implanted by specially designed robots, which open the skull and insert threads into the brain without damaging veins and arteries.
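To make the idea of “converting brain signals into data a computer can interpret” concrete, here is a minimal, purely illustrative sketch of how a brain-computer interface might turn electrode readings into a command. This is not Neuralink’s actual pipeline: the threshold-crossing spike detector, the linear decoder, and all numbers here are hypothetical simplifications.

```python
import numpy as np

def detect_spikes(voltage, threshold=-50.0):
    """Count downward threshold crossings in a simulated electrode trace (microvolts)."""
    below = voltage < threshold
    # A spike is counted where the trace first dips below the threshold.
    crossings = below & ~np.roll(below, 1)
    crossings[0] = below[0]
    return int(crossings.sum())

def decode_direction(spike_counts, weights):
    """Map per-electrode spike counts to a 2D cursor velocity
    with a simple linear decoder (weights: electrodes x 2)."""
    return spike_counts @ weights

rng = np.random.default_rng(0)
# Simulate 4 electrodes, 1000 samples each, resting around -40 microvolts.
traces = rng.normal(loc=-40.0, scale=8.0, size=(4, 1000))
counts = np.array([detect_spikes(t) for t in traces])
# Hypothetical decoder: each electrode pushes the cursor along one axis.
weights = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
velocity = decode_direction(counts.astype(float), weights)
print(velocity)  # a 2D cursor velocity, as a demo like the Pong video implies
```

Real systems replace each of these toy steps with far more sophisticated signal processing and machine-learned decoders, but the overall shape – detect neural activity, then decode it into an actionable output – is the same.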

A key question posed by Neuralink is what altering consciousness in such a way means for the definition of humanity. Some of Neuralink’s stated benefits involve enhancing the brain’s capacities: “communicating through thoughts, having access to all of existing information just by thinking about something, writing an email without using a keyboard and using a computer or a smartphone without even touching them.” For its recipients, it claims to ensure perfect vision and memory, as well as to predict and prevent life-altering diseases, such as Alzheimer’s and dementia.

While, at surface level, these benefits promise to unlock a superior, more advanced human life, in reality, they represent radical changes to the human experience itself. A faster, more direct thinking process has already been shown to produce personality changes and identity shifts in individuals with external brain-computer interfaces, influencing their behaviour and feelings. This will likely only intensify with the implant.


Irrational feelings and impulses are fundamental parts of the human experience. At times, they can be reckless and detrimental, but they are also often beneficial sources of creativity and spontaneity. With access to all the information in existence about any given subject, as well as to other people’s interfaces, will the more impulsive side of the brain be replaced with purely rational decision-making processes? Will any room be left for emotions or trial and error?

The prospect of integrating humans with AI also raises the question of the autonomy of the self. The underlying premise of Neuralink is that humans will remain in control of their brain-computer interfaces. However, given that AI is in many ways cognitively superior to the human brain, could it eventually take over, rendering us its movable vessels?


Any privacy for the hackable brain?

Another consciousness-altering shift that Neuralink would bring would be a lack of privacy, which may also have repercussions for people’s lifestyles and behaviour, as people would learn to self-censor far more than they do now. This might lead to a diminished inner world and a forced uniformity, caused by a fear of being caught thinking the “wrong thing” or holding a controversial, perhaps even incriminating, opinion.

The chip would record raw data all the time, giving it unlimited access to one’s mind. What happens when unwanted information is leaked to, or hacked by, unwanted entities – whether other individuals on a smaller scale or, on a larger one, governments and other authorities? Could this data also be accessed and used by businesses acting in their own interests?


Diego Sempreboni and Luca Vigano emphasise in their analysis that, “in the wake of the recent scandals on data collection (such as the Facebook-Cambridge Analytica data scandal that involved the collection of personally identifiable information of up to 87 million Facebook users), we are sceptical that the Internet of Neurons will be exempt from massive personal data collection and mining, possibly opening up the possibility of big brother scenarios in which citizens are always observed and tracked in order to control and influence their thoughts, opinions, votes, in brief, their whole life.” These types of data breaches will be violations unlike anything that we have seen before, because they will affect the core cerebral identity and integrity of a human being. As personal as data breaches currently are, this would be a new level of intimacy and physical intrusion.

The same question of potential physical damage could be posed for future cyberattacks. Will attackers be able to insert malicious code into the device, or perhaps even directly into the human brain, and read, replace, modify, or intercept messages or even brainwaves? Could they encrypt the brain and then demand a ransom? These are potentially serious consequences, unlike anything seen before, that should be considered before it is too late.

As for individuals monitored by the government or considered enemies of the state, what protection would they have against governments tapping into their brains to extract information and use it against them? In the future, could people be pursued and accused of wrongdoing for their thoughts? Strong privacy regulation could provide some limitations, but it could not guarantee that it would never be breached.

Technology for everyone or only a few?

Access to this technology will almost certainly be highly unequal. Musk himself has stated that it will be expensive. Neuralink is ultimately a business, established to make a profit and survive by selling its product.

According to Roland Benedikter and Karim Fathi, when Neuralink is released onto the market, it will “be extremely expensive and perhaps not refundable. Thus, only the wealthiest will be able to afford it, and it will significantly decrease the capacity of those less wealthy to compete with their technologically upgraded peers, which would contribute to the increase of income inequality.”


Musk, however, has said that the chip would be available to everyone, arguing that the admittedly high upfront cost will pay for itself by helping recipients secure better, higher-paying jobs.

While Musk may currently be using the rhetoric of accessibility to generate goodwill towards his product, it is unlikely that the technology would actually be accessible to everyone. In many developing countries, where people lack access to basic infrastructure and medical services, and also face regular power cuts, the prospect of robots installing Neuralink is unfathomable. This means that only people in highly developed countries with advanced robotics sectors will have access to the implant. In a world imbalanced almost to breaking point, the danger of increasing the cognitive abilities of those living in developed countries, especially the wealthy, is that the power, enhanced knowledge, and skills could be used to further oppress and marginalise those in parts of the world that have been exploited for centuries.

Potential military use

Brain-computer-interface technology could also be attractive to governments looking to keep the upper hand militarily. The United States Defense Advanced Research Projects Agency (DARPA) currently has a programme aiming to cognitively enhance soldiers. This could involve not only enhancing their cognitive skills, but also enabling them to remotely operate military equipment. Whether or not the Neuralink project is ever integrated with this military research, it is clear that a plurality of brain-computer-interface initiatives exist with an associated myriad of risks.

Neuralink’s ethical implications, its exclusive access for the wealthy, and its potentially harmful uses demand that this type of technology be debated and regulated, so that it can be actively and pre-emptively shaped. Neuralink is under development in the US, but its ramifications will impact all countries and continents.


From a European perspective, the EU should learn from its previous inaction. Online behemoths such as Google, Facebook, and Twitter were allowed to build monopolies based on the unethical use of citizens’ data. Today these companies’ dominance is a fait accompli that is hard to reverse. However, the EU has the responsibility to play a fundamental role in shaping the future of AI, including brain-computer interfaces. The EU is already taking steps to build a regulatory infrastructure ensuring an EU-wide legal regime for AI. In April 2021, the Commission sent the Parliament a draft proposal for the Artificial Intelligence Act, which, among other elements, contains articles on prohibited AI practices, particularly what types of AI systems cannot be placed on the market, or put into service or use. These include AI using subliminal techniques beyond a person’s consciousness, or AI that exploits human vulnerabilities to distort people’s behaviour in a harmful way. This proposal should be broadened to also cover brain-computer interface technology, including Neuralink.

The EU should also continue to take the initiative in existing multilateral fora tackling AI, including UN bodies, the G7, and the G20, while also pushing for the creation of new entities, such as a transnational regulatory board before which disruptive technologies of this kind would be brought, at the proposal phase, for assessment and approval. Institutions of this type could ensure at the international level that no single country develops technology that is detrimental to the entire world. Otherwise, what good would it do for the EU to implement strict regulations on its territory if the US pushes forward and deploys technologies that are likely to have a global impact?

An intermediary step in this process in the EU, US, and other parts of the world would be to integrate more social scientists, such as philosophers, anthropologists, and sociologists, into laboratories to address the ethical as well as socio-economic consequences that new technology can have. This would expand laboratories’ social responsibilities beyond their scientific ones and would also help determine whether a technology is likely to be disruptive from its earliest phases.

For now, Neuralink is raising more and more funds for its development, unbothered by any restrictions or questions coming from the US government. More critical voices need to be raised now to put at least a moratorium on the development of technology that, once fully merged with the human brain, might change the fate of humanity forever.
