What distinguishes humans from artificial intelligence? German author Nina George sees the answer in the ability to create the hitherto nonexistent. Yet as AI’s reach expands into ever farther corners of our lives – from the content we consume to the way we talk – we risk losing our freedom and creativity.

More than three years after its market launch in November 2022, ChatGPT, the generative text machine made by OpenAI in holy Silicon Valley, is still worshipped as if it were the “Pancreator” (as author and technology philosopher Stanisław Lem called the “rational world constructor” in his 1964 Summa Technologiae).

According to OpenAI’s own study, 800 million users, predominantly young people aged 18 to 25, submit an average of 18 billion queries every week to its chatbot. They turn to ChatGPT for homework, job applications, marketing slogans, emails, newsletters, propaganda snippets for social media bots, for products that look like books, reviews, or translations; for love letters à la cyborg Cyrano de Bergerac, automated customer responses, summaries; and relationship or life advice. Of course, ChatGPT is just one among many chatbots – there are over 1,000 large language models, and their applications are available worldwide.

The number of prompts fluctuates at the beginning and end of the semester and of school holidays. According to a HEPI and Kortext survey in the UK, the proportion of students using generative AI tools such as ChatGPT for assessments has jumped from 53 per cent in 2024 to 88 per cent in 2025. Like writing, reading is also in decline. University lecturers in Europe and across the Atlantic report that an increasing number of students are not used to reading and find it mentally, emotionally, and physically exhausting to read texts that are longer than a poem.

While one could easily blame these developments on OpenAI, Meta, Anthropic, and the other usual suspects of US digital colonialism, AI only accelerated a trend that was already visible, including in Europe. Computational power and data-based statistics, often presented as “artificial intelligence”, are a magnifying mirror of pre-existing desolate conditions.

A brave old world

The attempts since the 1960s to replicate the neural pathways of the human brain with data-based statistics have produced what we call “weak AI”. Weak AI can only focus on a single task – unlike humans, who can process 11 million sensory impressions per second and filter out the 40 most important ones for immediate reactions. When driving a car, we think, shift gears, steer, react to red lights and keep an eye on the child in the back seat who is sticking a rattle up their nose.

ChatGPT, Gemini, Llama, Microsoft Copilot and the like can either drive straight ahead or recognise the red light – not both. Moreover, they understand neither the meaning nor the context of what they mix together, because all letters, words, sentences, contexts and information are converted into numbers and probabilities. Human language is turned into machine mathematics, and mathematics is spat out as simulated communication.

However, this output is strictly pre-censored: AI chatbots are programmed to avoid a growing list of prohibited terms and sensitive topics through factory-set “content control”. This list includes swear words and sexually explicit terms, but also unusual adjectives and politically sensitive subjects (and under the current US government, the list is increasingly long). The result is, at best, a mediocre and conservative language devoid of the human element.

This language is changing not just written, but also oral human communication. Researchers at the Max Planck Institute for Human Development analysed over 770,000 podcasts and videos and concluded that we are beginning to speak in the style of ChatGPT. This erosion of linguistic and cultural diversity greatly increases the danger of opinion manipulation. At a time of complex polycrises, the rise of the far right, and the flood of propaganda bots on social media, automated communication acts as an accelerant for future (linguistic) dictatorships.

Exploiters of time, culture, and knowledge

But this isn’t the only issue with the AI hype. The text foundation on which GPT-3, for example, was developed, consists of 45 terabytes of material that OpenAI has very likely plundered illegally over the years. Three hundred million pages of text extracted from private websites, media archives, Reddit fora, Amazon reviews, social media, Wikipedia entries. And also seven million copyrighted books and 81 million scientific texts.

Research teams such as the Dutch AI Safety Camp or the Danish Rights Alliance have proven that the foundations for large language models such as Meta’s Llama were built from books and collections like Books3 that were illegally stored in shadow libraries such as Library Genesis (LibGen), Z-Library (Bok), Sci-Hub, and Bibliotik – piracy sites.

The number of lawsuits against this massive copyright infringement is growing weekly. As of November 2025, there were 75 lawsuits against OpenAI, Microsoft, Meta, and Alphabet (Google’s parent company). The most well-known are The New York Times against OpenAI and Microsoft, and the class action lawsuit filed by the Authors Guild (the largest professional organisation for writers in the US) and 18 individual authors (including George R.R. Martin, Jonathan Franzen, and John Grisham) against OpenAI. Another lawsuit filed by three individual authors, the Bartz v. Anthropic case, led to a 1.5 billion-dollar settlement. In Europe, class action lawsuits are difficult to pursue and financially unviable for individual authors. Even if US judges rule against AI plundering, they will not bring justice to European authors.

Market harm by AI output

Linguist Noam Chomsky dismissed ChatGPT as “high-tech plagiarism”: it copies from illegally sampled books to deliver text that mimics the style of the original authors. Despite this theft, the results are considered “public domain” and can be reused by anyone. But this is a collateral problem – the real harm is that look-alike AI products undermine the markets of exactly those people whose works were stolen.

Amazon, which has always run content control software on all e-books to filter out child pornography, hate speech, and Holocaust denial, has added an AI detector. However, it still allows uploading AI-generated books on the platform (maximum three per day per user), without visible labels or warnings for customers. Between 10,000 and 40,000 AI-produced scam books flood the platform every month, deceiving consumers and even putting them in danger (for example, when unverified “guidebooks” prescribe nonsensical treatments, fatal life advice, or declare poisonous mushrooms to be edible).

At the same time, real books by human authors are less visible, and the revenues from self-publishing are shrinking. For example, the remuneration of writers through the revenue-sharing model of Kindle Direct Publishing is declining as bot masters and other shameless users pocket a big part of the money. AI-generated books are often made to resemble the real new releases of well-known authors (or are presented as “summaries” and “secondary material”). Biographies of famous people, released almost instantly after their death, are also an easy way to make money with AI.

A similar phenomenon can be observed in the music sector: according to a study by the streaming app Deezer, 34 per cent of the songs uploaded daily to the platform are AI-generated, which in turn reduces remuneration for human songwriters and musicians through the shared-revenue model of platforms.

Translators, illustrators, and audiobook narrators are the biggest losers in this game. Text-generating applications are widely used around the world, including within the book sector. Amazon just rolled out its AI “translation service” for self-publishers. In the UK, a third of translators reported that they lost work due to AI, and 37 per cent of illustrators have seen their income decrease for the same reason. Translators are starting to change jobs or work as underpaid post-editors to repair machine-generated texts and translations. Meanwhile, several audiobook narrators told me, confidentially, that they are facing a choice between agreeing to have their voices cloned and being “blacklisted” by the industry.

Paying writers for past or future use of their work would, according to Big Tech, hamper AI development. The Trump administration is siding with the tech oligopolies: in May, it fired Shira Perlmutter, head of the US Copyright Office, after her office circulated a report finding that unauthorised uses of copyrighted works to train generative artificial intelligence systems may be unlawful. (The Supreme Court has put Perlmutter’s removal on hold, for now.)

Google’s line of defence is that machines are not that different from humans reading a book. However, humans cannot memorise entire books, nor do they copy their content as a business model. No author who wants to build or keep a reputation appropriates the style of another, but instead spends thousands of hours of work finding their unique, original voice. The ploy of equating machines with humans and assigning human rights to machines is popular among GAFAM (Google, Amazon, Facebook, Apple, Microsoft; OpenAI should be added to this list) to avoid paying authors.

None of this surprises the cultural sector. Technology companies have a long tradition of plundering ideas from writers and poets. Cyborgs, humanoid machines, the internet, social media, omniscient automatic typewriters, video telephony, self-driving cars, and household robots all existed in literature long before they appeared in the real world. The real innovators aren’t Mark Zuckerberg, Sam Altman, and Bill Gates, but Stanisław Lem, E. M. Forster, Neal Stephenson, George Orwell, Mary Shelley, Karel Čapek, and Isaac Asimov, among others. Terms such as “avatar”, “robot”, “virtual worlds” or “metaverse” also originate from the minds of authors.

Should we, as writers, stop coming up with such ideas, knowing that they will most likely become a reality?

Losing one’s voice

We should not. The power of human beings to envision utopias or dystopias that have never been imagined before is what distinguishes them from machines. If machines were to take over the narrative of humanity, evolution would first stagnate and then regress. The obvious would crowd out the extraordinary – no new ideas, no new proposals for coexistence, no new concepts to make sense of constantly evolving polycrises.

Copyright and authors’ rights, the right to integrity and to not be censored, the right of authors to decide how and by whom their works and labour are used – these are not exclusively for cultural professionals and artists. They are the result of a paradigm shift in humanity’s self-perception: the recognition of human creativity, free will, individuality, autonomy, and decision-making. And also individual responsibility: every author, whether a professional or an amateur, a blogger or a commenter on Facebook, is responsible for what they express; and at the same time, they are permitted to do so. Authors’ rights are about freedom.

Writing or engaging in artistic pursuits also has a multifaceted effect on human identity: those who can think can decide; those who can decide can act. Those who can read, understand, and write are less susceptible to propaganda, disinformation, and manipulation, whether it’s coming from Black Friday “deals” or anti-democratic political forces. They can be active participants in democracy, rather than mere consumers or voters. By contrast, those who rely on automated language can no longer speak for themselves; they lose their voice.

Having your own voice and knowing how to use it is all the more necessary given the democratic, social, and ecological crises we face. Ecologically, AI should be looked at with the same critical attitude that was once directed at chlorofluorocarbons or particulate matter pollution. By 2030, the physical infrastructure powering AI applications is expected to exceed the energy needs of the entire human workforce. Numerous jobs are likely to disappear, not just in the cultural sector, increasing the burden on states that are already struggling with unemployment and ageing populations. If this happens, a few communication monopolies will have cemented their dominance, determining Europe’s political orientation too. This would be the final outcome of digital globalisation, a system in which a few decide the fate of the many – unless the many decide to use their voice.

Farewell to freedom?

The individual author as a free and responsible actor is being wiped out by automated communication simulation. Generative AI is, in fact, degenerative: it erodes the abilities of those who use it regularly, and does so at the expense of word workers.

The longing for a machine superior to humans, a golem, a deus ex machina, is the eternal human flaw. It is a desire to create an entity greater, more permanent, better than humankind, and to sink with relish into infinite, relieving passivity while the autopilot takes over. Finally free! Free from responsibility and the need to make decisions. Free from the hardship of this hideous, confusing life with its many crises and humiliations.

Perhaps that is the most regrettable consequence of the current religion of machines: those who do not write or speak, but let machines write and speak for them, embrace the suppression of their own freedom of speech.