‘Beyond the simple knock-offs and the provocations exists an entire class of nonsensical, algorithm-generated content’ … A dark Peppa Pig parody in which the children’s character goes to the dentist. Photograph: YouTube

How Peppa Pig became a video nightmare for children


James Bridle’s essay on disturbing YouTube content aimed at children went viral last year. Has the problem gone away – or is it getting worse?

In November of last year, I read an article in the New York Times about disturbing videos targeted at children that were being distributed via YouTube. Parents reported that their children were encountering knock-off editions of their favourite cartoon characters in situations of violence and death: Peppa Pig drinking bleach, or Mickey Mouse being run over by a car. A brief Google of some of the terms mentioned in the article brought up not only many more accounts of inappropriate content, in Facebook posts, newsgroup threads, and other newspapers, but also disturbing accounts of their effects. Previously happy and well-adjusted children became frightened of the dark, prone to fits of crying, or displayed violent behaviour and talked about self-harm – all classic symptoms of abuse. But despite these reports, YouTube and its parent company, Google, had done little to address them. Moreover, there seemed to be little understanding of where these videos were coming from, how they were produced – or even why they existed in the first place.

I’m a writer and artist, with a focus on the broad cultural and societal effects of new technologies, and this is how most of my obsessions start: getting increasingly curious about something and digging deeper, with an eye for concealed infrastructures and hidden processes. It’s an approach that has previously led me to investigate Britain’s system of deportation flights or its sophisticated road surveillance network, and this time it took me into the weird, surreal, and often disturbing hinterland of YouTube’s children’s videos. And these videos are worrying on several levels. As I spent more and more time with them, I became perturbed not just by their content, but by the way the system itself seemed to reproduce and exacerbate their most unsavoury excesses, preying on children’s worst fears and bundling them up into nightmare playlists, while blindly rewarding their creators for increasing their view counts even as the videos themselves descended into meaningless parodies and nonsensical stories.

For adults, it’s the sheer weirdness of many of the videos that seems almost more disturbing than their violence. This is the part that’s harder to explain – and harder for people to understand – if you don’t immerse yourself in the videos, which I’d hardly recommend. Beyond the simple knock-offs and the provocations exists an entire class of nonsensical, algorithm-generated content; millions and millions of videos that serve merely to attract views and produce income, cobbled together from nursery rhymes, toy reviews, and cultural misunderstandings. Some seem to be the product of random title generators, others – so many others – involve real humans, including young children, distributed across the globe, acting out endlessly the insane demands of YouTube’s recommendation algorithms, even if it makes no sense, even if you have to debase yourself utterly to do it.

A scene from Minnie Mouse Choked Pizza for Eating Too Much. Photograph: YouTube

When I wrote an essay about the videos online, the public reaction largely mirrored my own. On the one hand, people were horrified to find out that these videos existed, and on the other, completely weirded out by the sheer scale and strangeness of what they found. The combination sent the article viral: it was shared and read online millions of times, picked up by websites and newspapers around the world, and even resulted in questions being asked in the European parliament. Finally, YouTube started to respond, although its efforts, and the results, have been mixed.

YouTube’s initial proposal was to restrict advertising on disturbing content aimed at children – but it failed to engage honestly with the realities of its own platform. It’s estimated that 400 hours of content are uploaded to the site every minute. Policing it by hand is impossible; instead, YouTube relies on flagging by viewers to drive official review – which is hardly suitable when the first people to view this stuff are small children, and the damage is already done. YouTube has also touted the technological cure-all of machine learning as its preferred solution – but in April, it finally agreed that the dedicated YouTube Kids app would switch to entirely human moderation, effectively admitting that the approach didn’t work.

As a result, while many videos have since been removed from the website, uncountable numbers still remain. In March, Wired catalogued a slew of violent accounts and demonstrated that it was possible to go from a popular children’s alphabet video to a Minnie Mouse snuff film in 14 steps, just by following YouTube’s own recommendations. As of last week, Googling the title of one of the now-removed videos mentioned in the New York Times article (“PAW Patrol Babies Pretend to Die Suicide by Annabelle Hypnotized”) results in a link to a near-identical video still hosted on the site (“PAW PATROL Babies Pretend To Die MONSTER HANDS From MIRROR! Paw Patrol Animation Pups Save For Kids”), in which the adorable pups don a freakish clip-art monster mask to terrify one another before being lured off a rooftop by a haunted doll. Is “Save For Kids” supposed to read “Safe For Kids”? Either way, it is not, and it’s obvious that just playing whack-a-mole with search terms and banned accounts is never going to solve entangled problems of copyright infringement, algorithmic recommendation, and ad-driven monetary incentives on a billion-view platform with no meaningful human oversight.

YouTuber Elle Mills, who has posted footage of herself mid-anxiety attack. Photograph: YouTube/Elle Mills

Whether these videos are deliberately malicious, “merely” trolling, or the emergent effect of complex systems isn’t the point. What’s new is that the system in which such violence proliferates is right in front of us, and visibly complicit, if we choose to see it for what it is. I titled that original essay “Something is wrong on the internet” because it seemed, and still seems, to me that the issues made glaringly obvious by the scandal are not limited to children’s content, nor to YouTube. First among these is how systems of algorithmic governance, rather than leading us towards the sunny uplands of equality and empowerment, continually re-enact and reinforce our existing prejudices, while oppressing those with the least understanding of, and thus power over, the systems they’re enmeshed in.

Take YouTube’s recommendation system for starters, which doesn’t differentiate between Disney movies and a grainy animation cooked up by a bot farm in China. Essentially what the seemingly benign “if you like that, you’ll like this” mechanism is doing is training young children – practically from birth – to click on the first thing that comes along, regardless of the source. This is the same mechanism that sees Facebook slide fake political ads and conspiracy theories into the feeds of millions of disaffected voters, and the outcome – ever more extreme content and divided viewpoints – is much the same. Add the sheer impossibility of working out where these videos come from (most are anonymous accounts with thousands of barely differentiated uploads) and the viewer is adrift in a sea of existential uncertainty, which starts to feel worryingly familiar in a world where opaque and unaccountable systems increasingly control critical aspects of our everyday lives.

We’ve seen how computer programs designed to provide balanced sentencing recommendations in US courts were more prone to mistakenly label black defendants as likely to reoffend – wrongly flagging them at almost twice the rate of white defendants (45% to 24%). We’ve seen how algorithmic systems target men over women for prestigious employment positions: in one study, Google’s ad system showed the highest-paying jobs 1,852 times to a male group, but just 318 times to a female group with the same preferences. And we’ve also seen how hard it is to appeal against these systems. When the Australian government instituted “robo-debt”, an automated debt-recovery programme, it wrongly and illegally penalised the most vulnerable in society, who had no recourse to support or advice to challenge the system.

‘The sheer weirdness of many of the YouTube videos seems almost more disturbing than their violence.’

In the months since first writing about YouTube’s weird video problem, I’ve met a few people from the company, as well as from other platforms that have been caught up in similar vortices. While most are well-meaning, few seem to have much of a grasp of the wider structural issues in society which their systems both profit from and exacerbate. Like most people who work at big tech companies, they think that these problems can be solved by the application of more technology: by better algorithms, more moderation, heavier engineering. Many outside the tech bubble – particularly in the west and in higher income brackets – are simply appalled that anyone would let their kids use YouTube in the first place. But we won’t fix these issues by blaming the companies, or urging them to do better, just as we won’t solve the obesity crisis by demonising fast food, but by lifting people out of poverty. If YouTube is bridging a gap in childcare, the answer is more funding for childcare and education in general, not fixing YouTube.

What’s happening to kids on YouTube, to defendants in algorithmically enhanced court trials, and to poor debtors in Australia, is coming for all of us. All of our jobs, life support systems, and social contracts are vulnerable to automation – which doesn’t have to mean actually being replaced by robots, but merely being at their mercy. YouTube provides another salutary lesson here: only last week it was reported that YouTube’s most successful young stars – the “YouTubers” followed and admired by millions of their peers – are burning out and breaking down en masse. Polygon magazine cited, among many others, the examples of Rubén “El Rubius” Gundersen, the third most popular YouTuber in the world with just under 30 million subscribers, who recently went live to talk to his viewers about fears of an impending breakdown and his decision to take a break from YouTube, and Elle Mills, a popular YouTuber with 1.2 million followers, who posted footage of herself mid-anxiety attack in a video entitled Burn Out at 19.

YouTube star Rubén Gundersen has spoken of his fears of breakdown. Photograph: Carlos Alvarez/Getty Images

It would be easy to scoff at these young celebrities, were it not for the fact that their experience is merely the most charismatic example of the kind of algorithmic employment under which many others already labour. The characteristics, after all, are the same: long hours without holidays, benefits, or institutional support, and the pressure to work at the pace of the machine in a system whose goals and mechanisms are unclear and ever-changing, and to which its subjects have no appeal. (In a depressing twist, many of these same YouTubers have been hit by declines in revenue caused directly by YouTube’s attempts to demonetise “inappropriate” content for children – solving one problem in the system only exacerbates others.)

The weirdness of YouTube videos, the extremism of Facebook and Twitter mobs, the latent biases of algorithmic systems: all of these have one thing in common with the internet itself, which is that – with a few dirty exceptions – nobody intentionally designed them this way. This is perhaps the strangest and most salutary lesson we can learn from these examples, if we choose to learn at all. The weirdness and violence they produce seems to be in direct correlation to how little we understand their workings – and how much is hidden from us, deliberately or otherwise, by the demands of efficiency and ease of use, corporate and national secrecy, and sheer, planet-spanning scale. We live in an age characterised by the violence and breakdown of such systems, from global capitalism to the balance of the climate. If there is any hope for those exposed to its excesses from the cradle, it might be that they will be the first generation capable of thinking about global complexity in ways that increase, rather than reduce, the agency of all of us.

James Bridle is the author of New Dark Age (Verso, £16.99). To order a copy for £14.44 go to guardianbookshop.com or call 0330 333 6846
