On the morning of May 14, 2022, an 18-year-old drove to a Buffalo supermarket and murdered ten Black Americans. The path that led him to that moment began two years earlier, when, scrolling through 4chan during the pandemic, he first encountered a clip of the Christchurch massacre. The algorithm had done its work – not through a sinister, conscious AI overlord, but through the cold mathematics of engagement optimization.
This is the nature of digital radicalization: a system that transforms scrolling into addiction, and algorithms into accelerants of hate.
When Frances Haugen stood before Congress in October 2021, she revealed the blueprint of a machine designed to amplify the worst of human nature. Internal documents showed that 64% of all extremist group joins on Facebook came from the platform’s own recommendation tools.
Facebook’s algorithm – and, by extension, the company that runs it – wasn’t just playing host to extremists by letting them stay on the platform; it was actively recruiting for them. And Facebook is far from the only platform to do so.
The Engagement Economy
At its core, every recommendation algorithm operates on a simple premise: maximize user engagement. But as MIT researchers discovered in their analysis of YouTube’s system, this seemingly neutral goal almost always ends up creating what they call “pathways to radicalization.”
The math is undeniable, really – controversial content generates, on average, 70% more engagement than moderate content.
Anger spreads faster than joy. Fear beats facts.
Facebook’s own researchers documented this in leaked presentations from 2018-2021. They found that users who engaged with mainstream conservative content were systematically pushed toward conspiracy theories within three weeks. The algorithm doesn’t understand the concept of ideology; it only knows that users who watch one conspiracy video will likely watch ten more.
Every interaction – every click, hover, share, and second spent watching – becomes a data point. Machine learning models process millions of these signals to predict what will keep you scrolling.
Research from UC Davis found that YouTube’s algorithm increased its recommendations of extremist content by 37% as users continued to engage, creating what is known colloquially as the “rabbit hole effect.”
Platforms: The Architects of Radicalization
YouTube: The Gateway Drug
Despite recent studies from the University of Pennsylvania suggesting that YouTube’s algorithm has a “moderating” effect, the platform’s role in these radicalization pathways remains heavily contested.
Bottom line though: algorithms behave differently for different users.
For mainstream users, YouTube might indeed moderate content. But for those already engaging with fringe material, the platform creates what researchers call “filter bubbles on steroids”. A review of 23 studies found that 14 directly implicated YouTube’s recommender system in facilitating problematic content pathways.
The Buffalo shooter’s manifesto explicitly credited YouTube videos for teaching him how to modify weapons illegally. He didn’t simply “stumble” upon this content – the algorithm served it to him on a platter, video after video, tutorial after tutorial.
TikTok: Radicalization at Warp Speed
A 2024 study published in the Social Science Computer Review exposed TikTok’s unique capacity for rapid radicalization.
Researchers found that the platform’s “For You Page” could turn a neutral feed into an extremist content factory in under 400 videos – roughly six hours of viewing.
Media Matters researchers created a test account that only interacted with transphobic content. Within days, their feed was flooded with neo-Nazi symbols, white supremacist talking points, and calls for violence. The algorithm took a direct path from one form of hate to an entire ecosystem of extremism.
Dr. Stephanie Alice Baker’s research at City St George’s University found something equally disturbing: 81% of supposed “cancer cures” on TikTok were fake (to no one’s surprise), demonstrating how the platform’s algorithm prioritizes attention-grabbing misinformation over factual content. The same mechanisms that spread these medical lies also spread political extremism.
Facebook/Meta: The Amplification Engine
Frances Haugen’s testimony, as mentioned earlier, revealed that Facebook’s 2018 algorithm change – prioritizing “meaningful social interactions” – weighted emoji reactions, including “angry,” five times more heavily than likes. The company knew that divisive content kept users more engaged, even as it tore society apart.
In Myanmar, Facebook admitted its platform was used to incite genocide. In Ethiopia, the same algorithms fueled ethnic and gender violence. The algorithm rewards content that triggers strong emotional responses, regardless of the human cost.
Reddit and Discord: The Planning Grounds
The Buffalo shooter’s digital footprint is the perfect example of how platforms like Reddit and Discord function as organizational infrastructure for radicalization. Court documents show he used Reddit’s r/GearTrade to acquire tactical equipment and Discord servers to plan his attack.
These platforms operate differently – less through algorithmic recommendation, and more through community dynamics. But algorithms still play a role: Reddit’s upvote system elevates extreme content in already extreme communities, and Discord’s lack of proactive content scanning allowed the shooter to maintain a detailed diary of his radicalization journey.
X/Twitter: The Polarization Accelerator
A 2024 study from Frontiers in Political Science found that Twitter’s algorithm during the 2020 election created “echo chambers that undermined democratic deliberation.”
The platform’s emphasis on real-time engagement rewards hot takes over nuanced discussion.
Research from the University of Pennsylvania found that X’s algorithm (I’ll only call it X when talking about Elon’s awful changes) actually showed users less polarizing content than a purely chronological feed would. But it also showed them less news overall, creating information vacuums that conspiracy theories rush to fill.
Danaë Metaxa, a member of the research team, said:
“During that particular moment, the information may not have been very extreme or disruptive, but this doesn’t mean we can rely on these algorithms to continue to operate in that way. The lack of transparency, restricted APIs and the current controversies surrounding the direction and ownership of X/Twitter make it a challenging space for people to find and trust quality news.”
It is also worth noting that this study is now more than six months old, and the algorithm has likely shifted significantly.
The Psychology of Algorithmic Manipulation
The power of these systems lies not in their sophistication, but in their exploitation of human psychology. As researchers documented in The Conversation, algorithms exploit five key psychological vulnerabilities:
Confirmation Bias: Algorithms detect what you already believe and feed you more of it.
Intermittent Reinforcement: Random rewards (viral posts, shocking content) create addiction-like patterns.
Social Proof: Showing content that “everyone is talking about” makes extreme views seem mainstream when they’re not.
Fear-Based Attention: Negative content captures attention 3x more effectively than positive.
Tribal Identity: Algorithms learn your in-group and feed you content that reinforces us-vs-them dynamics.
The Buffalo shooter’s radicalization followed this playbook perfectly. He started with mainstream gaming content; the algorithms detected his engagement with edgy humor, then conspiracy theories, then explicit racism.
Each step may have seemed small, but together they led him to that supermarket in Buffalo.
Legal Battles and Section 230’s Shield
The legal landscape around algorithmic radicalization is shifting. Slowly, but it is shifting.
In March 2024, a New York judge ruled that Reddit and YouTube must face lawsuits over their alleged role in radicalizing the Buffalo shooter. The lawsuit sidesteps Section 230 by arguing that these algorithms constitute a “defective product.”
This is a start in changing how we define platform liability. If algorithms are products, not just neutral conduits, then companies should be held responsible for their design choices.
The New York Attorney General’s investigation found that platforms had varying degrees of success in removing violent content after the Buffalo shooting:
Twitch shut down the livestream within 2 minutes.
Videos spread to 4chan within hours.
Reddit took up to 8 days to remove reported content.
Some platforms never removed it at all.
The International Landscape
This problem isn’t uniquely American.
The Islamic State has used TikTok to radicalize teenagers in Austria. Far-right groups in Germany use carefully crafted TikTok content to mobilize young people. The list goes on.
Each country’s experience speaks to different facets of algorithmic abuse:
India: WhatsApp’s forwarding algorithms spread lynching videos.
Brazil: YouTube’s recommendation system fueled election conspiracies.
UK: Facebook groups organized anti-immigrant violence.
Myanmar: Facebook’s algorithm amplified genocidal content.
Palestine: Algorithms have been used to suppress pro-Palestinian content and information.
How Algorithms Actually Work
According to leaked documents and research, here’s how these major platform algorithms actually function:
Stage 1: Candidate Selection
Algorithms select 1,500-10,000 potential posts from your network and beyond.
Selection is based on: freshness, creator relationship, content type, and past engagement.
Stage 2: Ranking
Machine learning models predict engagement probability for each post.
Factors weighed include: likelihood to click, share, comment, react, and report.
Extreme content consistently scores higher on these engagement metrics.
Stage 3: Filtering
Remove posts from blocked or muted accounts.
Apply (minimal) safety filters.
“Balance” content types (usually making extremism worse, not better).
Stage 4: Re-ranking
Boost paid content.
Apply “breaking news” or “trending” amplification.
Insert recommended content from outside your network.
At each stage, the mathematics favor extremism – not necessarily by design, but through optimization for the wrong metrics.
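To make those four stages concrete, here is a minimal Python sketch of what a pipeline like this looks like in the abstract. It is an illustration assembled from the descriptions above, not any platform’s actual code; the field names and weights (predicted_click, the 3x multiplier on shares, and so on) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    author_followed: bool   # creator relationship
    age_hours: float        # freshness
    is_paid: bool = False
    is_trending: bool = False
    blocked_author: bool = False
    # Hypothetical outputs of a trained engagement-prediction model (0 to 1)
    predicted_click: float = 0.0
    predicted_share: float = 0.0
    predicted_comment: float = 0.0
    predicted_angry_react: float = 0.0

def select_candidates(inventory: list[Post], limit: int = 1500) -> list[Post]:
    """Stage 1: pull a few thousand recent posts from the user's network and beyond."""
    fresh = [p for p in inventory if p.age_hours < 48]
    fresh.sort(key=lambda p: (p.author_followed, -p.age_hours), reverse=True)
    return fresh[:limit]

def rank(posts: list[Post]) -> list[Post]:
    """Stage 2: order posts by predicted engagement. Nothing here asks whether
    the content is true or harmful -- only whether you are likely to react."""
    def engagement_score(p: Post) -> float:
        return (1.0 * p.predicted_click
                + 2.0 * p.predicted_comment
                + 3.0 * p.predicted_share
                + 1.5 * p.predicted_angry_react)  # anger still counts as engagement
    return sorted(posts, key=engagement_score, reverse=True)

def filter_feed(posts: list[Post]) -> list[Post]:
    """Stage 3: drop blocked or muted authors and apply (minimal) safety filters."""
    return [p for p in posts if not p.blocked_author]

def rerank(posts: list[Post]) -> list[Post]:
    """Stage 4: boost paid and trending content; the stable sort keeps the
    engagement ordering within each group."""
    return sorted(posts, key=lambda p: (p.is_paid, p.is_trending), reverse=True)

def build_feed(inventory: list[Post], feed_size: int = 50) -> list[Post]:
    return rerank(filter_feed(rank(select_candidates(inventory))))[:feed_size]
```

Notice that truthfulness and harm never appear in the scoring function; the only question the pipeline ever asks is whether you will react.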
Proposed Solutions and Their Limitations
Current Mitigation Efforts
Platform Self-Regulation:
The Global Internet Forum to Counter Terrorism (GIFCT) shares hashes of terrorist content.
But participation is voluntary and enforcement is weak.
Smaller platforms and message boards ignore it entirely.
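To make the hash-sharing idea concrete, here is a toy Python sketch of how a shared fingerprint database works in principle. It is not GIFCT’s actual system – real deployments use perceptual hashes so that edited or re-encoded copies still match – but the shape of the idea is the same.

```python
import hashlib

# Fingerprints of known terrorist content, contributed by participating platforms.
shared_hash_db: set[str] = set()

def fingerprint(file_bytes: bytes) -> str:
    """Toy fingerprint: an exact SHA-256 digest of the file."""
    return hashlib.sha256(file_bytes).hexdigest()

def contribute(file_bytes: bytes) -> None:
    """A participating platform adds the fingerprint of content it has identified."""
    shared_hash_db.add(fingerprint(file_bytes))

def should_block_upload(file_bytes: bytes) -> bool:
    """Other platforms check new uploads against the shared database --
    but only if they choose to participate; nothing compels them to."""
    return fingerprint(file_bytes) in shared_hash_db
```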
Content Moderation:
Facebook employs 15,000 content moderators.
And they face severe psychological trauma from constant exposure to extreme content.
AI moderation catches only about 3-5% of hate speech right now.
Algorithm Tweaks:
YouTube claims to have reduced recommendations of “borderline content” by 70%.
But independent research shows extremist content still spreads rapidly.
Legislative Proposals
The New York Attorney General’s report recommends:
Creating criminal liability for perpetrator-created violence videos.
Reforming Section 230 to require “reasonable steps” against violent content.
Mandating tape delays for all forms of livestreaming.
Requiring transparency reports on algorithmic amplification.
The EU’s Digital Services Act goes even further, requiring platforms to:
Conduct risk assessments for algorithmic harms.
Provide data access to researchers.
Allow users to opt out of recommendation algorithms.
Face fines up to 6% of global revenue for violations.
Reimagining Algorithmic Architecture
The solution isn’t to abolish recommendation algorithms – they’re not going anywhere. The solution is to redesign them. Current systems optimize for a single variable: engagement. But engagement is a terrible proxy for human wellbeing.
Alternative metrics could include:
Epistemological Quality: Does the content increase or decrease user knowledge?
Emotional Impact: Does the algorithm improve or worsen mental health?
Social Cohesion: Does the recommended content build or destroy community bonds?
Temporal Satisfaction: Do users feel good about time spent after the fact?
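As a thought experiment, here is a minimal sketch of what ranking by a composite objective rather than engagement alone might look like. The metric names and weights are invented for illustration – no platform publishes its real objective function – but they map onto the four alternatives above.

```python
from dataclasses import dataclass

@dataclass
class ContentSignals:
    engagement: float      # predicted clicks/shares/comments, 0 to 1
    knowledge_gain: float  # epistemological quality, 0 to 1
    mood_impact: float     # emotional impact, -1 (harmful) to 1 (positive)
    cohesion: float        # builds (+) or erodes (-) community bonds, -1 to 1
    regret: float          # how often users later regret the time spent, 0 to 1

def engagement_only_score(c: ContentSignals) -> float:
    """Today's dominant objective: a single variable, optimized relentlessly."""
    return c.engagement

def wellbeing_score(c: ContentSignals) -> float:
    """A hypothetical composite objective. The exact weights are a policy choice;
    the point is that engagement stops being the only term."""
    return (0.35 * c.engagement
            + 0.25 * c.knowledge_gain
            + 0.15 * c.mood_impact
            + 0.15 * c.cohesion
            - 0.10 * c.regret)  # temporal satisfaction: penalize regretted time

# Under engagement-only ranking the rage bait wins; under the composite, the explainer does.
rage_bait = ContentSignals(engagement=0.9, knowledge_gain=0.1, mood_impact=-0.8, cohesion=-0.7, regret=0.8)
explainer = ContentSignals(engagement=0.5, knowledge_gain=0.8, mood_impact=0.3, cohesion=0.4, regret=0.1)
assert engagement_only_score(rage_bait) > engagement_only_score(explainer)
assert wellbeing_score(explainer) > wellbeing_score(rage_bait)
```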
Some platforms and companies are experimenting with this.
Mozilla’s research on “better recommendations” suggests weighting recommendations by:
Source credibility
Viewpoint diversity
Constructive dialogue
Long-term user satisfaction
But these efforts remain marginal. The advertising-based business model demands engagement above all else.
The Human Cost
Behind every algorithmic decision is a human consequence. The Buffalo victims weren’t just statistics – they were people whose lives were cut short by a machine-learning model’s calculation that extremist content is the best way to keep one young man scrolling.
Pearl Young, 77, was a grandmother who loved everyone. Ruth Whitfield, 86, had visited her husband in a nursing home every day. Aaron Salter, 55, died defending others. Roberta Drury, 32. Heyward Patterson, 67. Geraldine Talley, 62. Celestine Chaney, 65. Katherine “Kat” Massey, 72. Margus Morrison, 52. Andre Mackniel, 53.
They should all still be here. Their deaths can be traced through a digital pathway: from algorithm to meme to mass consumption to murder.
As Ruth Whitfield’s son, Garnell Whitfield Jr., testified: “Social media companies must be held accountable. They’ve created a monster they can’t control, and my mother paid the price.”
The Algorithm Is Not Neutral
To be clear, the architecture of digital radicalization isn’t some wild conspiracy in which evil, overarching hands control all media.
At the end of the day, it’s nothing more than another greedy business model.
Every recommendation, every suggested video, every promoted post is a choice made by mathematical models trained to maximize profit. These tools are engines of amplification that consistently elevate the extreme over the moderate just because it makes more money.
Until we fundamentally restructure how algorithms are built, regulated, and deployed, we’re simply waiting for the next tragedy.
We know full well by now that algorithms can radicalize – the evidence is overwhelming that they do. The question is whether we have the collective will to demand something better than engagement at any cost.
The architecture can be rebuilt. But first, we have to admit it’s broken.
This article builds on “Understanding Digital Fascism” previously published on The Convergence Lens.
What do you think about social media algorithms? Do you want platforms to start changing the way they optimize?