The corporations leading the charge in AI development present their products as tools for endless creativity. Yet behind that rhetoric lies an exploitative system built on the uncredited labour of artists whose works are scraped without their permission. Still, even as artificial intelligence repeats the colonial pattern of extraction, enclosure, and commodification, artists and communities are pushing back, reclaiming authorship and demanding accountability.
At London’s National Gallery X, visitors linger before striking portraits and glowing landscapes. Figures gaze out with uncanny confidence, framed by vibrant colours and luminous textures. Each piece feels polished, as though it belongs to a centuries-old tradition of portraiture yet is infused with something undeniably contemporary.
Meanwhile, in Berlin, the Museum für Kommunikation’s New Realities invites audiences into reimagined worlds. Everyday objects – a desk lamp, a rubber plant, a vintage phone – are arranged in unfamiliar constellations, their sharpness almost unsettling. The show gestures towards “digital workplaces”.
The two exhibitions have something in common: the pieces displayed weren’t made with brushes or charcoal. Instead, they are the result of interactions between human artists and machines – not imitations, but experiments. Each work probes what authorship and creativity mean when algorithms become part of the artistic process, as artists use AI to interrogate the very systems of automation now defining cultural production.
Outside the gallery, however, the same tools move from experiment to industry. What serves as a medium of exploration for some becomes, in other contexts, a mechanism of mass production, trained on vast archives of human creativity. Type in “portrait in the style of Van Gogh” or “Afrofuturist astronaut” and within seconds, an image appears. It might seem instantaneous, even effortless, but behind the screen are sprawling datasets – such as LAION-5B – containing billions of images scraped from across the open web. Models trained on these archives – the backbone of products like Stable Diffusion and DALL-E – remix countless examples to conjure something new.
To the casual visitor or user, this might feel like magic: creativity without limits. But scroll through online artists’ forums, and the mood is far from celebratory.
A theft alleged
Andrew Menjivar, a senior concept artist at American video game developer Blizzard Entertainment, puts it bluntly: “The technology works on theft, plain and simple. Artists do not need to compete with automated slop built off the backs of their hard work.” His words capture a growing anger that spilled onto ArtStation in 2022, when thousands of artists staged a mass digital protest, flooding the platform for artists and designers with banners reading “NO TO AI GENERATED IMAGES.”
That sense of dispossession runs deep. In the United States and beyond, artists have raised alarms about AI systems that can replicate their signature styles – reproducing visual motifs, lines, or character designs without permission. Some works, such as Zarya of the Dawn (a comic entirely illustrated using image generator Midjourney), have triggered legal and copyright scrutiny over whether AI-made art should be protected. In Japan, manga creators have already seen their drawings scraped, altered, and reposted through AI tools – stripped of their original meaning. In Europe, photographers were unsettled to find AI-generated images bearing faint traces of the Getty watermark – the basis for Getty Images’ lawsuit against Stability AI, which it accuses of stealing millions of photos to train its deep learning model Stable Diffusion.
Musicians face a similar erosion of authorship. In 2023, an AI-generated track imitating Drake and The Weeknd went viral on TikTok before being pulled down at the request of Universal Music. Since then, automation has gone further: in mid-2025, an entirely AI-generated “band” drew more than a million Spotify streams, raising fresh concern that machine-made music could further squeeze the livelihoods of independent artists, already struggling under streaming platform economics. With the line between experiment and exploitation blurred, what sounded like a novelty to listeners felt, to working artists, like an existential threat.
Whether in comics, photography, or music, the story is the same. Generative AI is sold as a revolution, a tool that promises limitless creativity. But for many, it feels ruthlessly extractive: a machine that mines culture and centuries of artistic labour, processes them into patterns, and resells the result as novelty.
While art has always evolved through new tools, what is new here is the speed, opacity, and concentration of control with which creativity becomes data, and how few hands own and govern it. So a question hangs in the gallery air: if art can be scraped, stripped, and remixed by machines, what remains of authorship itself?
Generative AI is sold as a revolution, a tool that promises limitless creativity. But for many, it feels ruthlessly extractive.
Digital empires, familiar habits
For critics and artists alike, the answer feels uncomfortably familiar. What artists describe as theft is, at a deeper level, part of a wider pattern – a replay of older histories of power, where culture is treated as a resource to be harvested. What might appear as a narrow dispute over copyright instead points to something larger: the continuation of extractive logics that echo colonial dynamics.
Ethiopian scholar Abeba Birhane has argued that AI systems reproduce the hierarchies of empire, treating people and cultures as raw material. In Race After Technology, Princeton University sociologist Ruha Benjamin shows how supposedly new tools often extend old patterns of racial and cultural domination. And communications scholars Nick Couldry and Ulises Mejias describe this as “data colonialism”: the capture of human life and creativity for corporate profit.
Seen through this lens, generative AI is a tool for dispossession – taking without consent, stripping work of meaning, and enclosing it as property. Culture has long been shaped by systems of ownership and control – from imperial plunder to the market’s commodification of art. What AI does is translate those older dynamics into data form and scale them beyond human sight, binding creative life to the same infrastructures of extraction that once fed empires. Where colonisers looted artefacts to fill museums, today’s corporations mine cultural archives to train their models. The resources have changed, yet the logic is the same: extraction, enclosure, commodification.
The machinery of extraction
If the gallery walls showcase the spectacle, the supply chain tells the hidden story. Long before algorithms touch a canvas or compose a song, the culture that AI repackages rests on other forms of exploitation: the earth beneath us, the power grids around us, and the human labour hidden behind the screen.
The promise of frictionless creativity hides a material footprint that is anything but light. The computers behind generative models are built on cobalt and lithium, pulled from mines in the Democratic Republic of the Congo and Bolivia, often under hazardous and exploitative conditions. These same minerals power our phones and electric cars, but the hunger of AI accelerates demand at industrial scale. As one Congolese miner told Amnesty International, “We dig to feed the world but remain hungry ourselves.”
Once assembled, the machines do not rest. Training a generative model like OpenAI’s GPT-3 is estimated to consume over 1,000 megawatt-hours of electricity – about as much as 130 US homes use in an entire year. Keeping those vast server farms cool also requires millions of litres of water. Researchers at the University of California, Riverside have shown that training state-of-the-art models can require as much water as producing hundreds of cars.
And then there is the human cost hidden in the click of a prompt. To make AI systems usable, an invisible workforce of low-paid workers is contracted to filter, tag, and sanitise training data. In Kenya, employees hired to review violent and sexually explicit content for OpenAI earned less than two dollars an hour. “We were exposed to disturbing content every day,” one worker told Time magazine. “But the pay was barely enough to survive.”
This double drain – of natural resources and human labour – mirrors older colonial economies all too neatly. Land stripped for its minerals, workers pressed into survival wages, and the profits funnelled towards distant centres of power.
And all of this leads back to culture – the very thing AI claims to create anew. Just as veins of ore are carved from mountains and shipped abroad, cultural labour is scraped from the Internet and fed into corporate systems. An artist’s portfolio becomes a pattern, a song becomes a dataset, a photograph becomes another pixel in a training set. In the process, the works are stripped of meaning, context, and connection, and turned into a slurry of training data ready to be recombined.
To call this merely “data” flattens centuries of meaning and reduces culture to raw material. What disappears in the process are the cultural values that made the work significant in the first place – the years of study, the traditions of practice, the communities from which it emerged. Instead, everything becomes uniform, interchangeable, and ready to be resold.
Seen together, these layers – the minerals, the water, the labour, the art – form a single picture. AI is not immaterial magic; it is an extractive industry, rooted in the same logics of exploitation and dispossession that have structured global injustices for centuries. The only difference is the resource: today, the mine is not just in the ground but in culture itself.
From commons to commodity
But extraction is only half the story. What makes generative AI so powerful – and so troubling – is what happens next: the transformation of those resources into proprietary products, enclosed and resold as if they were corporate assets all along.
Consider the lawsuits now winding their way through courts, where questions of enclosure come sharply into focus. Getty Images suing Stability AI is just one example. Groups of artists have filed class actions against companies whose models mimic their signature styles without permission. Musicians have raised alarms about voice cloning software that can spit out tracks with a few lines of text. In each case, what was once the product of individual skill or collective heritage is captured, automated, and packaged for profit.
Corporations like OpenAI, Google AI, and Stability AI present themselves as democratisers – offering tools that anyone can use. But while the data that fuels their models is drawn from a vast cultural commons, the outputs are locked behind paywalls, subscription plans, and enterprise licences. A poem written by an unknown writer in Lagos, a sketch uploaded by a student in Manila, a folk song recorded in rural Canada – once scraped, they feed corporate machines the original creators can neither access nor control.
This shift raises more than legal or economic questions. It cuts to the core of what culture is. In mainstream debate, authorship is often reduced to copyright – “If you use my work, you need to pay me.” But as critics like Abeba Birhane and Ruha Benjamin remind us, culture is more than property. It is memory, ritual, belonging – the ways people imagine themselves into the world.
The consequences reach far beyond livelihoods. When culture is treated as raw material, imagination shrinks. Sacred traditions are stripped of context and remade as styles; community songs become decorative “aesthetic choices”; political art is fed into the machine and spat back as wallpaper. What was once shared, messy, and alive becomes uniform, smooth, and owned.
This is why many critics now describe generative AI as a form of enclosure. The idea that culture can be owned is not new – copyright and museums alike have long fenced off the commons in the name of preservation or profit. But where copyright seeks to reward creation, AI enclosure monetises imitation: extracting collective creativity while giving little back. In this new economy, commons are not fenced but scraped. What once belonged to the many now circulates through the hands of the few – not as land or artefact but as data, subscription, and algorithmic output.
Where copyright seeks to reward creation, AI enclosure monetises imitation: extracting collective creativity while giving little back.
When the sacred is scraped
The enclosures of generative AI reach every corner of culture, but their impact is felt most deeply where creation itself is communal and sacred. Across the world, some traditions are inseparably intertwined with their context: songs that live only in ceremony, designs that carry ancestral meaning, stories bound to specific lands and the people who care for them. These are not products of individual authorship; they are collective inheritances – practices of survival and continuity carried through generations.
To call these practices “content” is to misunderstand their very essence. Once absorbed into datasets, their meaning fractures: a ceremonial mask becomes a “style prompt”, a sacred symbol re-emerges as digital wallpaper. What belongs to ceremony and ritual is offered up for anyone’s remix, stripped of relation and responsibility.
Artists and communities are already sounding alarms. In Nigeria, musicians warn that AI-generated deepfakes are encroaching on Afrobeats itself, with systems able to mimic the voices and styles of cultural icons without consent. In Australia, First Nations artists have raised concerns that their designs and stories are being scraped and remixed by AI tools, a practice they describe as cultural theft. And in Canada, the federal government was forced to apologise after publishing an AI-generated image of an Indigenous woman, sparking new ethics rules and corporate pledges to avoid replicating Indigenous art through AI.
From protest to protocol, Indigenous and community-led movements are setting their own terms for how culture should live in the digital realm. In Canada, the First Nations OCAP® principles (Ownership, Control, Access, Possession) enshrine the right of Indigenous peoples to govern their cultural data. In Aotearoa, the Te Mana Raraunga Māori Data Sovereignty Network describes data as a living taonga – a sacred treasure that carries mauri, or life force – and insists it be governed according to Māori values of collective responsibility, handled with care, consent, and authority. And in Brazil, Indigenous technologists are developing sovereignty projects to keep sacred materials out of training datasets. The Indigenous Protocol and AI Working Group has gone further, outlining ethical guidelines for AI – such as the principle that ceremonial stories must not be reproduced outside ritual contexts.
These initiatives reject the colonial and capitalist logic of intellectual property – where ownership is individual and exclusionary – in favour of relational stewardship, where knowledge is held, cared for, and passed on collectively. They remind us that cultural knowledge is not raw material but a living trust, inseparable from the people and places that create and sustain it.
When generative AI scrapes the sacred, it risks reviving one of the oldest colonial patterns of all: the theft of meaning itself.
Seeds of resistance
Still, the story of generative AI is also a story of resistance. Across the world, artists, technologists, and communities are refusing to let culture be reduced to raw material. Their efforts are fragmented, sometimes fragile, but they gesture towards another future: one where creativity and cultural knowledge are protected rather than consumed.
Some of this pushback comes from artists themselves. From the same anger that fed digital demonstrations on ArtStation, practical tools emerged. Spawning, created by artists Holly Herndon and Mat Dryhurst, lets creatives check whether their work is in training datasets and opt out of future use. Experimental projects such as Kudurru – a Spawning tool that detects dataset scrapers and can block them or serve them “poisoned” data to disrupt model training – show how artists and technologists are developing new ways to resist extraction through code. Meanwhile, Cara, a portfolio platform co-founded by Singaporean artist Jingna Zhang, offers an artist-first space designed to block scraping and protect creative work, with safeguards that continue to evolve.
Critical researchers are also pointing to alternatives. Timnit Gebru, through the Distributed AI Research Institute (DAIR), advances community-driven research that foregrounds justice and accountability rather than profit. Abeba Birhane argues for relational ethics: an approach to AI that acknowledges histories of inequality instead of pretending datasets are neutral. These interventions may not topple Big Tech overnight, but they expand the horizon of what is possible.
Generative AI is not an unstoppable wave of progress; it is a contested field.
Alongside them, some artists are engaging with AI, not to celebrate it but to question it, using the tools themselves to expose bias, reclaim visibility, and imagine alternative futures beyond corporate platforms. Exhibitions such as National Gallery X’s AI: Who’s Looking? in London and New Realities at Berlin’s Museum für Kommunikation use human–machine collaboration to probe what authorship and creativity mean in the algorithmic age. Rather than surrendering to automation, these projects turn AI into a mirror, revealing who gets to create, who gets copied, and what kinds of stories technology makes visible or erases.
What unites these efforts is a refusal of inevitability. Generative AI is not an unstoppable wave of progress; it is a contested field. Lawsuits are testing the boundaries of copyright and consent. Opt-out tools are forcing corporations to adjust, from Stable Diffusion adding such mechanisms to OpenAI announcing a forthcoming Media Manager for rights holders. And Indigenous frameworks are carving out new protections through data sovereignty and protocol-driven guardrails. Cracks in the AI oligarchy are already visible.
The story is unfinished, but one thing is clear: culture is not waiting to be mined. It is pushing back, planting seeds of accountability in the shadows of an industry that insists it cannot be stopped or steered.
Europe’s digital mirror
Across Europe, too, communities are rising against new forms of digital extraction, as AI systems absorb and repackage cultural materials without consent.
In Sápmi, Sámi institutions are asserting digital sovereignty through the SODA (Sámi Ownership and Data Access) principles, affirming that cultural and linguistic data must remain under Sámi authority and serve the collective benefit of Sámi communities. For Roma artists and storytellers, projects such as RomArchive mark a different form of resistance – building self-governed archives to reclaim visibility and authorship long denied in Europe’s cultural record. And in the Basque Country, the Kultura Data initiative treats digital heritage as a shared commons: open for reuse yet governed collectively to keep culture participatory, accessible, and protected from commercial enclosure. Together, these movements link consent, authorship, and collective governance to the survival of Europe’s living traditions – each, in its own way, reclaiming control over how culture enters the digital realm and ensuring that technology sustains, rather than erases, cultural plurality.
For Europe as a whole, the stakes are high. Cultural participation is not just entertainment; it is part of democratic life. Through stories, rituals, and creativity, societies negotiate and celebrate differences, include new perspectives, and imagine shared futures. If those spaces are reduced to what algorithms can remix, the danger is cultural homogenisation – a Europe that speaks in borrowed styles but loses its plurality of perspectives and cultures.
Policy debates are beginning to catch up. The EU’s AI Act introduces measures such as labelling AI-generated content and requiring transparency about the copyrighted materials used to train models. But significant gaps remain: questions of cultural rights, authorship, and community consent are still largely unaddressed. Critics warn that without explicit protections, Europe’s digital transition risks replicating the very extractive patterns it once exported.
Compounding this is a shifting political landscape. As far-right movements gain ground across the continent, the space for plurality – cultural, linguistic, and political – is shrinking. In such a climate, the commodification of culture by Big Tech risks aligning with forces that prize uniformity over diversity, echoing what Nick Couldry and Ulises Mejias describe as “the enclosure of life under data colonialism”.
AI does not invent extraction so much as accelerate it, turning centuries of cultural exchange into the raw material of computation.
Imagination as a commons
In the end, the story of generative AI is less about machines than about the patterns they amplify. The enclosures we see today – the capture of art, language, and story – are not new inventions but extensions of older hierarchies: the museum’s glass case, the copyright ledger, the colonial archive. What is different now is the scale and speed with which these dynamics unfold, translated into data and automated across the globe – and, crucially, concentrated in the hands of a few powerful corporations.
AI does not invent extraction so much as accelerate it, turning centuries of cultural exchange – including the sacred knowledge of Indigenous and community traditions – into the raw material of computation. What was once held in ceremony is now repackaged as product and profit. But this transformation also makes the stakes unmistakably clear: whether culture will remain a living commons, rooted in plurality and care, or be reduced to an instrument of corporate power.
And yet, across disciplines and continents, resistance continues to grow. Artists and communities are reclaiming these tools to expose bias, recover erased traditions, and reassert creative agency. From Indigenous frameworks of data sovereignty to digital commons projects and opt-out campaigns, they are sketching another horizon – one where technology amplifies, rather than consumes, cultural plurality.
Europe now faces a critical challenge: whether to narrow imagination through inherited systems of extraction, or to expand it by recognising culture as a living commons sustained through plurality, reciprocity, and shared stewardship. In the end, the crisis is not technological but political – a struggle over who gets to imagine and for whom.
