From Clicks to Crime: How Online Extremism Fuels Real-World Threats
- Sam Cockbain

Key Takeaways
Online extremism is rising across multiple ideologies, driven by social grievances, digital echo chambers, and global conflicts.
Early online behaviours – including fixation, extremist memes, hate rhetoric, and network engagement – often precede real-world criminal or terror activity.
OSINT tools and analytics can detect precursor signals at scale, enabling proactive policing and threat mitigation.
Youths are increasingly vulnerable to radicalisation, making education and early intervention essential.
A multi-layered response – including platform enforcement, community resilience, intelligence monitoring, and prevention programmes – is critical to reducing harm.
Extremism and Online Influence: Trends, Detection, and Mitigation
Extremism – ideologies that endorse fear or violence to achieve political, religious, or racial goals – takes many forms and is on the rise in both the UK and globally. MI5 broadly categorises extremist threats into Islamist extremists, extreme right-wing groups (e.g. neo‑Nazis, racial supremacists), and, to a lesser extent, left-wing/anarchist and single-issue radicals. In recent years new “hybridised” or nihilistic threats have emerged, where individuals either mix symbols from multiple ideologies or act without a coherent ideology. For example, some attackers now idolise school-shooter culture or nihilistic violence without any formal ideology. Extremists today often operate in decentralised online networks rather than formal groups. These extremist threats are summarised below:
Islamist extremism: militant groups like ISIS/Al-Qaeda and lone actors driven by extremist interpretations of religion.
Far-right extremism: white supremacists, anti-immigrant nationalists, or conspiracist movements (adherents often aged 18–25).
Left-wing/anarchist extremism: radicals focused on class struggle or anarchism (smaller threat volume).
Single-issue or fringe: eco-extremists, anti-government militias, “incel” misogynists, etc. These niche movements can inspire violence without a broad ideology.
How We Got Here: Rising Trends and Drivers
Globally, terrorism remains a persistent challenge. In 2024, attacks occurred in 66 countries, up from 58 in 2023, even though total deaths from terrorism dipped slightly. The deadliest groups (ISIS, Boko Haram, the Taliban, al-Shabaab) increased their violence, and terrorism in some Western countries has ticked up (e.g. Sweden, France, and Australia saw spikes). Importantly, extremist narratives have gone borderless: for instance, anti-Semitic and anti-Israel hate-motivated incidents in the West rose sharply in late 2023.
In the UK, the extremist threat has worsened. Home Office figures show Prevent referrals (cases flagged as at-risk of radicalisation) hit a record 8,517 people in 2024/25 – a 27% increase from the year before. Remarkably, far-right concerns now outnumber Islamist ones: 21% of referrals were for extreme right-wing ideology vs 10% Islamist. Police referrals jumped 37%, and most referrals come from education and local authorities. Youth are especially at risk: 36% of referrals were boys aged 11–15. In other words, the next generation is increasingly drawn into extremist online spaces.
Broad social forces help explain these trends. The COVID‑19 pandemic and economic strain fuelled conspiracy theories and social isolation, which extremist groups exploit. Extremists amplify and benefit from online disinformation: fake news and conspiracy theories create echo chambers that entrench hatred and stereotyping. Studies show that hateful online narratives often scapegoat minorities (e.g. blaming Jews, Muslims, or migrants for crises). In the US, for example, the Government Accountability Office (GAO) notes that attacks like Charleston (2015) and El Paso (2019) were “fuelled by hate-filled internet posts”, underscoring how online hate can incubate real-world violence (the FBI now treats hate crimes as a top national threat, on a par with domestic terrorism).
Social media platforms and algorithms are a key driver. Researchers find that platform design “catalyse[s] such movements” by algorithmically amplifying extreme content to ever-wider audiences. In other words, users who see a little extremist propaganda are quickly shown more, normalising radical views. Memes, videos, or threads praising violence can go viral, lowering barriers to joining these communities. Extremists also migrate to smaller or encrypted apps when mainstream sites crack down, which can make detection harder. In sum, the rising trend is driven by a perfect storm of social grievances, online echo chambers, and high-profile conflicts (the wars in Ukraine and Israel-Hamas, etc.) that provide fodder for extremist influencers.
Fear, Anger, Hate, Suffering: How Extremists Use the Internet
Extremists exploit the internet to recruit, radicalise, and coordinate. They post propaganda videos, manifestos, memes, and “how-to” attack guides across platforms (YouTube, Telegram, Twitter/X, TikTok, fringe forums, and gaming chats). The internet dismantles past limits – one researcher notes it has “broken down the traditional barriers” to radicalisation by connecting like-minded people instantly. A RAND study of terrorists in the UK found the internet often provided material and community that reinforced extremism. Generative artificial intelligence (AI) is the next threat multiplier: experts warn that groups like ISIS and Al-Qaeda are already exploring chatbots and deepfakes to generate tailored propaganda, fake imagery, and high-volume disinformation, potentially accelerating radicalisation online.
Individuals often go through a “step zero” phase online before any plot. This means they become fixated on a cause or target, obsessively consume content, and join extremist communities, often without making explicit threats. For example, one would-be attacker in the US repeatedly posted about a public figure online, even though he hadn’t threatened that person directly. This early digital fixation – lurking on hate forums, sharing conspiracy memes, echoing extremist slogans – is subtle but critical. Analysts emphasise that the earliest online cues of violence are usually not a direct threat, but an obsessive interest in extreme narratives and past attackers. In short, the internet gives personal grievances “global meaning” by framing them in extremist ideology.
Red Flag: Early Warning Signals and OSINT
Given this online radicalisation, analysts and police leverage Open-Source Intelligence (OSINT) tools to detect early signals before violence erupts. OSINT means gathering publicly available information (social media posts, forums, news, even satellite images) and turning it into actionable intelligence. Agencies use automated platforms that continuously scan hundreds of social networks, blogs, chat groups, and more for extremist content. These systems use AI/Machine Learning (ML) to spot patterns and anomalies: for example, they can flag spikes in hate keywords, new extremist hashtags, or images with terrorist symbols, even linking them to locations or events.
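To make the keyword-spike idea concrete, below is a minimal sketch in Python of how a monitoring pipeline might flag an unusual surge in watchlist terms. It assumes posts have already been collected as (date, text) pairs; the watchlist, baseline window, and threshold are illustrative placeholders rather than a production detection model.

```python
from collections import Counter
from statistics import mean, stdev

# Illustrative placeholder watchlist; real systems use curated, regularly updated lexicons.
WATCHLIST = {"keyword_a", "keyword_b"}

def daily_counts(posts):
    """Tally watchlist hits per day from an iterable of (date, text) pairs."""
    counts = Counter()
    for date, text in posts:
        counts[date] += sum(token in WATCHLIST for token in text.lower().split())
    return counts

def flag_spikes(counts, min_history=7, z_threshold=3.0):
    """Flag days whose hit count sits far above the rolling historical baseline."""
    alerts = []
    days = sorted(counts)
    for i, day in enumerate(days):
        history = [counts[d] for d in days[:i]]
        if len(history) < min_history:
            continue  # not enough baseline yet to judge a spike
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and (counts[day] - mu) / sigma >= z_threshold:
            alerts.append((day, counts[day]))
    return alerts
```

Alerting on deviation from a baseline, rather than on absolute counts, is what lets the same logic serve both a tiny fringe forum and a high-volume mainstream platform.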
Key OSINT capabilities include:
Social media monitoring: Tools aggregate public posts across platforms to track individuals and groups. They can trace aliases (suspended users who resurface), analyse sentiment, map follower networks, follow streams of extremist accounts, and detect anomalies that might indicate criminal activity or emerging threats. For instance, ML models trained on a supporter’s follower network can predict a majority of future extremist accounts before they post any content.
Network analysis: By mapping social connections, investigators identify central influencers and clusters of extremists. OSINT dashboards can reveal who is interacting, retweeting, or messaging whom in extremist circles in real time (a minimal graph-analysis sketch follows this list).
Keyword and sentiment alerts: Investigators set up watchlists for terms or memes associated with violence. If conversation around a rally or figure suddenly intensifies with extremist sentiment, the system alerts analysts. During the 2021 US Capitol riot, analysts tracked calls for violence via hashtags and livestreams, which helped investigators identify and later arrest instigators.
Metadata/geolocation: Open images or videos may contain GPS data or recognisable landmarks. Analysts use this to locate militant camps or protest sites; adversaries exploit the same techniques, as when Hamas used photos and leaked data to map Israeli officer targets (a simple EXIF-extraction sketch also follows this list).
Event correlation: OSINT tools cross-reference chatter with real-world events. A sudden surge of threatening rhetoric about an upcoming public event may trigger pre-event warnings.
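As a rough illustration of the network-analysis capability above, the sketch below builds a directed interaction graph from retweet/mention pairs and ranks accounts by centrality. The edge list is synthetic and networkx is simply an assumed, commonly used graph library; real dashboards run this kind of analysis continuously over live data.

```python
import networkx as nx

# Synthetic interaction edges: (account_that_retweeted_or_mentioned, account_referenced).
edges = [
    ("user_a", "influencer_x"), ("user_b", "influencer_x"),
    ("user_c", "influencer_x"), ("user_b", "user_c"),
    ("user_d", "influencer_y"), ("user_e", "influencer_y"),
    ("user_c", "influencer_y"),
]

G = nx.DiGraph(edges)

# In-degree centrality: who is most referenced (candidate influencers).
influencers = sorted(nx.in_degree_centrality(G).items(), key=lambda kv: kv[1], reverse=True)

# Betweenness centrality: who bridges otherwise separate groups (candidate brokers).
brokers = sorted(nx.betweenness_centrality(G).items(), key=lambda kv: kv[1], reverse=True)

# Weakly connected components give a first cut at clusters for analyst review.
clusters = list(nx.weakly_connected_components(G))

print("Most-referenced accounts:", influencers[:3])
print("Bridging accounts:", brokers[:3])
print("Clusters:", clusters)
```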
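Similarly, for the metadata/geolocation capability, the following sketch pulls GPS coordinates from a photo’s EXIF block using the Pillow imaging library. The file name is hypothetical, and most mainstream platforms strip EXIF on upload, so this mainly applies to original or leaked files.

```python
from PIL import Image
from PIL.ExifTags import GPSTAGS

def extract_gps(path):
    """Return (latitude, longitude) from a photo's EXIF GPS block, or None."""
    exif = Image.open(path).getexif()
    gps_raw = exif.get_ifd(0x8825)  # 0x8825 is the GPS IFD pointer tag
    if not gps_raw:
        return None
    gps = {GPSTAGS.get(tag, tag): value for tag, value in gps_raw.items()}

    def to_degrees(dms, ref):
        # EXIF stores degrees, minutes, seconds as three rationals.
        deg, minutes, seconds = (float(v) for v in dms)
        value = deg + minutes / 60 + seconds / 3600
        return -value if ref in ("S", "W") else value

    try:
        lat = to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"])
        lon = to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"])
    except KeyError:
        return None
    return lat, lon

# Hypothetical usage on a locally obtained file:
# print(extract_gps("suspect_photo.jpg"))
```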
Law enforcement officers now use these OSINT signals alongside traditional intel. As SentinelOne notes, agencies routinely employ OSINT “to detect extremists, prepare for disasters, or gain real-time information on the ground”. Automated systems can sift billions of social posts daily – far beyond manual capacity – to catch precursors of radical action.
Minority Report: Pre-Event Indicators and Platform Monitoring
Before violent incidents, certain online behaviours often surface. Analysts can look for explicit planning signals (e.g. posts detailing weapon purchases, maps of targets, or violent manifestos) as well as behavioural changes (such as an individual suddenly abandoning all moderate forums and joining extremist chats). One powerful tool is community monitoring: groups like Tech Against Terrorism use OSINT methods to continuously track known propaganda accounts and encrypted chat spaces.
Real cases illustrate this: In Tunisia, the hacker group GhostSec identified ISIS-linked accounts “planning an attack on tourists in Djerba” by analysing their social media exchanges. GhostSec harvested IP and messaging data, giving police enough intelligence to arrest the plotters before any bombing. In London, a would-be extremist unwittingly revealed a murder plot on Twitter by asking followers to identify his intended target; when this was flagged, authorities intervened pre-emptively. These examples show how open-source cues (a tweet here, a forum post there) can serve as early warning.
Law enforcement also monitors extremist influencers who incite followers toward mass disorder. For example, far-right activists advertise planned rallies online. Here at Global Situational Awareness, we reported that one such event was touted to draw up to 150,000 attendees, prompting police to cancel officers’ leave in preparation for “public order chaos”. By tracking the online promotion of such events and the chatter among marchers, authorities can prepare crowd-control measures or counter-narratives.
From Digital Radicalisation to Crime
Left unchecked, online extremism often spills into offline violence, from hate crimes to terrorism. Extremist online communities can legitimise attacks as righteous or even glamorous. US experience shows a clear link: the attackers in Charleston, El Paso, and Colorado Springs had all been active in posting violent hate material on the internet. Their online manifestos and social-media rants prefigured the massacres they later carried out. The El Paso shooter (2019), for instance, had engaged in heavy anti-immigrant, racist posting online before killing 23 people.
Even without guns, radical online groups can inspire other crimes and public disorder. Extremists often engage in hate crimes (assaults, vandalism, intimidation) that are easier to commit than a full-blown terror attack but still terrorise communities. The GAO notes that Americans suffer hate-based assaults “nearly every hour”, many traceable to online hate speech. In the UK, far-right street fights and clashes at protests have increased alongside online recruitment of agitators. For example, far-right and anti-fascist groups frequently plan counter-protests, sometimes sparking riots.
The pathway from online radical beliefs to actual crime can be direct or incremental. A teenager may start by posting racist memes, then join a neo-Nazi chat, then plan an attack or terrorist offence. In February 2025, a UK case showed this progression: a 17-year-old boy with openly racist, pro-Nazi social media posts (idolising the Columbine shooters) was arrested for plotting a school shooting. This underscores that radical content online often escapes moderation and can cultivate real-world threats.
The Threat Landscape: Why It Matters
So what can be done? The complex threat of online extremism demands multi-layered responses – technological, legal, social, and preventative. Key steps include:
Proactive OSINT monitoring: Agencies and private analysts must continually scan the digital public sphere for early indicators. This means using AI-driven OSINT platforms (see above) and sharing relevant intelligence across jurisdictions. Tech companies and governments should collaborate to enable secure cross-platform data sharing on extremism – security experts suggest that “cross-platform intelligence sharing” makes it costlier and harder for extremists to hide.
Platform accountability: Social networks and content platforms need aggressive moderation of extremist content. Companies already deploy algorithms to flag hate speech, but experts argue these efforts must deepen. For example, GNET researchers recommend enhanced content takedown mechanisms and stricter enforcement of hate speech laws online. Governments may tighten regulations (like the UK’s Online Safety Act or the EU’s Digital Services Act) to require platforms to remove violent extremist content and to monitor abuse targeting protected groups. Regulators (like Ofcom in the UK) are focusing on child-protection codes because many of those being radicalised are young people.
Preventive interventions: Both the UK Prevent programme and international experts advocate treating radicalisation as a public health problem. This means bolstering protective factors (mental health support, education, social inclusion) rather than only arresting offenders. For instance, communities and schools should teach digital literacy and debunk conspiracies, giving young people tools to resist extremist propaganda. Channel panels (in the UK) and similar deradicalisation programmes should expand to address not just ideology but also “no ideology” violence like nihilism or misogynistic incel networks. Engaging credible messengers from within communities (e.g. former radicals, religious leaders) can inoculate vulnerable individuals against online recruitment.
Law enforcement readiness: Police and intelligence agencies must adapt their investigative methods. This includes training analysts in OSINT techniques, using analytics to prioritise threats, and maintaining liaison with tech firms. Crucially, investigations now often require a 24/7 social media watch, as posts flagged today could be evidence tomorrow. Early detection is key: as one study notes, extremist digital traces often appear before an attack. Agencies should respond rapidly when OSINT reveals explicit plans (as in the GhostSec example above) or suspicious behaviour (like a sudden obsession with extremist groups).
Community resilience: Governments and NGOs should empower communities to recognise and report worrying online behaviour. For example, family members who find a teenager consuming violent extremist media should have clear channels for help. Public awareness campaigns can highlight how to spot online radicalisation.
Ultimately, the rise of online extremism affects public safety and social cohesion. If ignored, these digital trends can translate into more terror plots, mass riots, and hate-fuelled crimes. But intelligence-led monitoring and early intervention can blunt the threat. By combining advanced OSINT analytics, smarter platform policies, and community-based prevention, authorities can identify problematic online influence before it turns violent. In the words of analysts, we must raise the costs for extremists online – making it harder for radical views to spread unchecked.
In conclusion, extremist movements (from religious jihadism to white nationalism to new nihilistic strains) are using the internet to gain influence. Statistically, referrals to counter-terror programmes are rising, especially among young people and far-right suspects. Analysts and law enforcement must monitor social media, forums, and encrypted apps with OSINT tools to spot early signals (e.g. online fixation on targets, sharing of attack manuals, funding solicitations). Pre-event indicators – such as sudden surges in extremist chatter about a location or prominent figure – should trigger alerts. This online intelligence can then be linked to public safety action: preparing for threatened riots, arresting plotters, and engaging communities. Through vigilant monitoring and coordinated action (from tech firms to schools to police), we can disrupt extremism’s online pipeline and keep society safer.



