The Covid-19 pandemic and subsequent lockdowns created significant opportunities for online networks of extremists, allowing them to reach far greater numbers of people as much of life moved online. Paola Hernández spoke to Jacob Davey, an expert on right-wing extremism and hate groups at the Institute for Strategic Dialogue (ISD), about how these movements connect across borders, the mainstreaming of extremist ideology and conspiracy theories, disinformation campaigns, and deradicalisation work.

Paola Hernández: How is far-right extremism changing in today’s globalised world? Is there a trend towards greater international cooperation among far-right groups and movements through transnational networks and alliances?

Jacob Davey: Traditionally, we have considered these groups and individuals to be fairly introspective and narrow-minded in their outlook, concerned with issues at the (sub)national level, such as migration. But for some years now, our research has shown a shift towards a more transnational consciousness and international dynamic. So whilst they might still be focused on local activism, and trying to change the politics and society of their own country, city, or town, this is increasingly framed as part of a broader struggle. Depending on where you look on the extreme-right spectrum, there is a global struggle against the perceived destruction of the white race, a change in the culture or ethnic make-up of Europe, authoritarian governments, or left-wing infiltration of national institutions. So, although the extreme right might seem homogeneous, it represents a quite diverse group of individuals, organisations, and social movements which are broadly aligned in terms of their objectives and their targets in society, such as ethnic, racial, or religious minorities, or political opponents. These groups come together at certain points but they do not share a big strategy; it is much more fluid than that.

Another key dynamic we are seeing is the move from political organisations towards a much looser organisational structure, with lots of individuals affiliated with loose online communities and minor sub-cultures. Globally, we now have this diffuse community, and particularly within the English-speaking world, in part due to the common language, we have seen an increased focus on what is going on in other countries. Here, the US has a heavy influence globally, with more people following US politics and events over the last five years under Donald Trump. But we have seen this within Europe as well, with the Identitarian movement, where people see themselves as broadly aligned with movements in other countries as part of a global struggle to preserve European culture.

Although the extreme right might seem homogeneous, it represents a quite diverse group of individuals, organisations, and social movements.

Which would you say are the main ideas that shape today’s far-right movements and where do they come from?

Globally there is this idea that people who are perceived to be culturally or ethnically European, white people of European heritage, are under threat, and at risk of being replaced through violence, deliberate subversion by left-wing actors, and migration. This idea underpins the now quite widely known “Great Replacement” theory. It originated in France and spread quite rapidly throughout Europe, in part driven by the Identitarian movement, and was also picked up worldwide by white supremacist groups, white nationalist groups, and neo-Nazis. This thinking has been very influential in a number of extremist attacks, like the one in New Zealand in 2019, in El Paso in the US also in 2019, or in Halle in Germany, also in 2019. This is a powerful narrative which has succeeded in connecting the dots between movements.

These ideas are also touched upon by populist and right-wing politicians in Europe. Social media allows for the easy transmission of these ideas and our monitoring shows that online extremist groups operate fairly close to far-right political actors – Alternative for Germany (AfD) is a prime example. We have seen this a lot globally as well, for example, when Trump retweeted extremist accounts or used the same coded language to repeat talking points about white genocide theory, there was a spike in very targeted hateful discussion online. So, what we see is that when these politicians allude to broader extremist tropes, this has a knock-on effect, emboldening extremists to repeat and believe in that ideology, and that does not exist in a vacuum.

When Trump retweeted extremist accounts or used the same coded language to repeat talking points about white genocide theory, there was a spike in targeted hateful discussion online.

You studied in 2018 how several Italian extreme-right groups were organising on social media and chat applications to influence the election outcome in favour of Lega Nord and the Brothers of Italy. Have you seen similar activities in other election campaigns?

In the elections we have studied in the past, we see a pretty consistent pattern of extremist groups mobilising and being actively involved in political disinformation. Traditionally, when we talked about disinformation there was a very specific focus on Russian state activity. But in recent years, we have seen an expansion in politicised attempts to shape online discussion in malicious ways, with extremists forming hubs for driving disinformation, usually in favour of radical-right parties.

Following Covid-19 restrictions, we’ve seen the spread of conspiracy theories by communities online, primarily originating from the US, like QAnon, anti-vaccine groups, anti-lockdown movements, etc. The US was the prototype, but throughout the last year we saw a surge in conspiracy theories globally, and we will see more of this in Europe in the years ahead.

We are currently conducting research about efforts by far-right groups to influence the upcoming German federal elections. We recently launched a report about the “Election Monitoring” campaign by the far-right Ein Prozent initiative in the context of the Sachsen-Anhalt state elections. In this state election alone, an estimated 2.6 million Twitter users received posts with the hashtag #VotingFraud within 24 hours of the election. This was also due to Twitter’s recommendation algorithms, which exponentially increased the reach of the posts. Disinformation about elections also reaches a wide audience via Facebook, Telegram, or BitChute, as well as via the blogs and newsletters of right-wing influencers. These election fraud narratives suggest that distrust of political institutions, especially among AfD supporters, could be a potential weak point for the election.

Our research around the 2020 US presidential election showed that platforms such as Facebook and Twitter consistently failed to implement the guidelines for fact checking. This resulted in millions of Americans being exposed to dangerous disinformation. A long-running disinformation campaign about “electoral fraud” in the US context is directly linked to the riots and storming of the US Capitol in January 2021.

In Germany, there has not been such a long-running disinformation campaign. Still, the spread of allegations of fraud through social media is a considerable risk. Platforms like Facebook and Twitter have no election guidelines or strategies in Germany designed to protect users from misleading claims. These issues must therefore be carefully monitored and analysed because if electoral fraud allegations reach a wide audience, it might jeopardise the integrity of democratic processes.

Our research around the 2020 US presidential election showed that platforms such as Facebook and Twitter consistently failed to implement the guidelines for fact checking.

You have personally advised national and local policy-makers on right-wing extremism, including the UK Parliament’s Home Affairs Select Committee. What are your main recommendations?

I think analysts and policy-makers alike need to find a new language and better terms of reference to talk about the extreme right, because currently different communities who might be opposed on certain issues are often lumped together. Then there is the shift we are seeing away from organisations towards more amorphous, largely online communities. Current counter-terrorism approaches in Europe often rely on having a list of proscribed organisations. Now, what do you do when an organisation is just one Telegram group? Can you really proscribe what is essentially an online chat? We really need to start thinking about how to go after the behaviours rather than only thinking in terms of specific movements.

Conspiracy theories in particular are providing new vectors which connect the traditional far right with individuals who do not necessarily share such a racist ideology. There is a risk of those people being radicalised and becoming involved in extreme-right circles. We need to be aware that these dynamics are transnational, in terms of individuals speaking to one another in the same spaces online, sharing the same ideological material, and broadly seeing themselves as part of the same global struggle. So there is some work to do on transnational coordination and collaboration in addressing this threat.

I also think we need to be more gender-aware. This perspective is often missing from prevention programmes designed to limit the spread of extremism and radicalisation. Now, in Europe and in the UK, there are some deradicalisation initiatives that take this into account, but the need for a greater focus on gender dynamics often falls by the wayside, so there are a lot of opportunities to do more.

Do you see the emergence of new tactics and trends among far-right groups? Are policy-makers able to respond to these?

One trend related to the extreme right is the radicalisation of the military, which is really pronounced in France and Germany at the moment. I do not think that people have got their heads around how to deal with this threat yet. We are talking about the radicalisation of people who potentially have training in guns, logistics, or explosives, and we are also talking about groups that might believe in the need for a civil war or race war. This is a serious concern.

Another area that is relatively unknown concerns extremism among environmental groups. Summer 2021 showed that we are starting to experience extreme weather events much more regularly, and this is likely to get worse. Covid-19 restrictions empowered some legitimate groups but also some quite concerning mobilisations, including threats of violence and the mass spread of conspiracy theories. As governments start to implement potentially unpopular policies to increase taxes on carbon or petrol, or around energy, there is a real risk of a similar response among groups protesting on environment-related issues. A crisis always presents an opportunity for extremists to present easy answers by putting the blame on others, and this could happen with climate change. This kind of eco-fascist ideology only exists at the fringes for now, but there is space for it to grow, because extremists thrive on chaos and crises – that is where they really expand their audience and outreach.

Eco-fascist ideology only exists at the fringes, but there is space for it to grow, because extremists thrive on chaos and crises.

Nowadays, there is intense debate about social media regulation and the responsibility that big tech companies have when it comes to algorithms that push conspiracy theories and extreme content. What are EU member states doing in this respect and what else needs to be done?

We have reached the stage where regulation is necessary. Social media platforms have had the opportunity to face up to their role in the promotion of online harms and they have not addressed it effectively. There are multiple ongoing discussions around regulation, such as the Digital Services Act in Europe, the Online Harms White Paper in the UK, and the German Network Enforcement Act (NetzDG). There are similar discussions in Canada and in the US too. So, broadly speaking, there is a paradigm shift in some of these conversations.

In Germany, the focus of NetzDG is specifically on content and on compelling social media platforms to take down content which oversteps the threshold of acceptability. But the problem with an entirely content-focused approach is that it does not really address the behaviours underpinning this. Social media platforms are reluctant to take initiatives that might be perceived as blocking free speech. We know that platforms such as YouTube and Facebook direct people towards ever more extreme content. This is because it is in their interest to keep people on the platform – that’s their business model. So what we really need, through research and government regulation, is an increased focus on transparency, in terms of how social media platforms and their business models themselves contribute to radicalisation, rather than on specific content.

In your contribution to the book Going Dark: The secret social lives of extremists, you highlight that one of the next big extremist threats could be deepfakes and spoofing technologies entering mainstream media use. How do we deal with tactics that manipulate opinions but are part of what some online platforms were designed for?

This is just one example of how these groups might use these technologies. Broadly, what we have found in our analysis is that there are some quite sophisticated networks and network activities designed to promote certain extremist talking points. There are interconnected networks of websites promoting these messages so as to avoid moderation efforts by social media platforms, allowing them to micro-target users with slightly different interests.

We know that extremists are often early adopters of technology and keep themselves technology-savvy, so they can stay one step ahead. So we will continue to see different techniques adopted by these communities. Social media platforms also need to be more effective in their own internal efforts at detecting these networks’ activity and these more sophisticated technologies. Hence, when we talk with regulators we must ensure there is sufficient transparency from the platforms so we know they are addressing this threat properly.

How important is the role of education in the fight against extremism – both for young and older generations?

Education is a really important strategy to push back against radicalisation and extremism, as well as harms such as conspiracy theories, disinformation, and hate speech. Yet there is more to do to bring this into classrooms and educate young people, so they are better digital citizens and have the tools they need to think critically. But this should not focus only on young people; it should include older people too. In fact, we need to identify methods for reaching all generations and giving them critical thinking and digital citizenship skills. One area of great potential is working with businesses to educate their workforce, and with community groups as well.

Over the past few years, ISD has been working to appropriate the direct peer-to-peer messaging approach used by extremists and apply it online, as part of the world’s first deradicalisation project to identify online users at risk of radicalisation and use the latest technologies to help them escape the toxic narratives of extremist recruiters. This started with an initial small-scale pilot in 2015, involving over 150 Islamist and right-wing extremists. The results suggested that direct online outreach was a potentially viable tactic worthy of further exploration. So education plays an interesting role in a new generation of intervention methods, but it is also quite limited. There is a greater awareness of these global threats, but there is also a need for broader investment in scoping and piloting a whole spectrum of different interventions.


The author wishes to thank Jakob Guhl, Research Manager at the Institute for Strategic Dialogue (ISD).