Tech companies must do more and do better if we are to stem the tide of online child sexual exploitation and abuse

It might be difficult for most of us to confront, but the world is facing an unprecedented explosion in online child sexual exploitation and abuse and it’s getting worse with every passing day.  

You need only flip through the latest report from the National Center for Missing & Exploited Children (NCMEC), the US's centralised reporting system for the online exploitation of children, to be confronted by the problem.  

In 2022, NCMEC received 32 million reports of child sexual exploitation and abuse, including 49.4 million images and 37.7 million videos from tech companies. 

While these numbers are incredibly confronting, we also know they are just the tip of a very large iceberg and fail to tell the full story about the true scale and scope of the issue. 
 
For example, Meta, the owner of Facebook, Instagram and WhatsApp, made around 27 million reports to NCMEC of child sexual exploitation and abuse material in 2022. By contrast, Apple, with its billions of handsets and iPads all connected to iCloud, reported just 234.   

Something is seriously wrong with this picture.  

We must also remind ourselves that behind these millions of images and videos are innocent children.  

A significant recent report, the Australian Child Maltreatment Study (ACMS), found that an astounding 28.5 per cent of Australians had experienced sexual abuse before the age of 18. The damage to children doesn’t just begin and end with the abuse; it follows them throughout their lives and is associated with a range of co-morbidities. 

Survivors of child sexual abuse are more likely to experience sexual assault or domestic and family violence later in life, are more likely to develop dependencies on drugs and alcohol and are more likely to develop long-term mental health issues or experience suicidal ideation. 

This can also lead to child survivors becoming child offenders. In short, enabling child sexual exploitation in any manifestation not only inflicts significant personal harms but also incurs significant societal costs. 
  
In Australia, we’ve also seen huge increases in reports of online child abuse material, which began to spike in early 2020 as Covid-19 lockdowns took hold and the whole world moved online.  
 
Since then, we’ve seen a year-on-year doubling of reports of online child abuse material, culminating in the first quarter of this year, when reports tripled compared with the same period the year before.  
 
Every day eSafety investigators see the same offenders create multiple new accounts, even after they have been banned by a platform. They see platforms being used to distribute thousands of links to child sexual exploitation and abuse sites. Some of these links are posted under the guise of legal adult porn, but they are also explicitly posted and advertised as child sexual abuse.  

Many of the platforms and services that our children use every day also enable the production, storage and spread of this illegal material. 

One of the key systemic powers I wield as Australia’s eSafety Commissioner under our Online Safety Act is the ability to issue legal transparency notices to tech companies to lift the hood on what they are and are not doing to stop this insidious and proliferating abuse. 

Last year, I asked some of the biggest tech companies in the world - Apple, Meta, WhatsApp, Omegle, Microsoft, Snap and Skype - what they were doing to address the issue. The report summarising their answers showed many of these companies were simply not doing enough. 

The report uncovered inconsistencies in the use of available tools and technologies to detect child sexual exploitation and abuse material, significant variations in the time taken to address reports of this illegal activity and next to no detection of the livestreamed abuse of children. 

But we know the sexual exploitation and abuse of children is not confined to just seven companies.  

In February, I asked five more companies - Twitter (subsequently known as X), TikTok, Google (including YouTube), Discord and Twitch - what steps they were taking to tackle these crimes against children playing out on their services. Their answers revealed similar troubling shortfalls and inconsistencies.  

For example, we found that while YouTube, TikTok and Twitch are taking steps to detect child abuse in livestreams, Discord is not, saying that implementing the required tools is ‘prohibitively expensive’ for its service.  

TikTok and Twitch use language analysis technology to detect child sexual exploitation and abuse (CSEA) activity, such as sexual extortion, across all parts of their services, whereas Discord does not use any detection tools at all. Twitter uses tools on public content, but not on direct messages. Google uses tools on YouTube, but not on Chat, Gmail, Meet and Messages.   

Google and Discord are not blocking links to known child sexual exploitation material across most of their services, despite the availability of databases from expert organisations like the UK-based Internet Watch Foundation. Google blocks such links only on its search service.    
 
YouTube, TikTok and Twitch are using technology to detect grooming, whereas Twitter/X, Discord and other Google services including Meet, Chat, Gmail, Messages, are not.   

Google is also not using its own technology to detect known child sexual exploitation videos on some of its services – Gmail, Chat and Messages.   

There was significant variation in the languages covered by content moderators. Google said it covers at least 71 languages and TikTok 73. In comparison, Twitter said it covered only 12 languages, Twitch reported 24 and Discord reported 29.

This means that some of the top five non-English languages spoken at home in Australia are not by default covered by Twitter, Discord or Twitch moderators. This is particularly important for harms like grooming, violent extremism or hate speech, which can require context and an understanding of cultural nuances to identify.   

We also found wide variations in the response times to user reports of child abuse – TikTok says it responds within 5 minutes for public content, Twitch takes 8 minutes, while Discord takes 13 hours for direct messages. Twitter/X and Google did not provide the information required.  

Without transparency, regulators like eSafety, along with other policymakers and researchers, are effectively flying blind, hampered in carrying out our functions in policy making, research and effective regulation. The Australian public is also left in the dark as to exactly what companies are doing to protect their children.  
 
Complying with eSafety’s legal transparency notices and answering the questions asked is therefore of utmost importance. 
 
I am disappointed, then, that eSafety found two of the biggest and most widely known companies in the world – Google and Twitter/X – did not comply with the notices I sent them, failing to answer a number of key questions even though these questions centred on crimes against children. 
 
Google failed to comply by giving generic or aggregated information across multiple services where information regarding specific services was required. Google has been given a Formal Warning to deter it from future non-compliance. 
 
eSafety found that Twitter/X’s non-compliance was more serious. For some questions, Twitter/X failed to provide any response, leaving some boxes entirely blank. In other instances, it provided responses that were incomplete or inaccurate.  
 
It’s for this reason that Twitter/X has been issued with an infringement notice of $610,500. The company now has 28 days to either request that the notice be withdrawn, giving valid reasons why, or pay the penalty.  

If Twitter/X does not pay, eSafety has the option of seeking a civil penalty through the courts.   

We also have more powerful systemic tools coming online next year in the form of industry codes and standards, which will ensure companies are living up to their responsibilities to protect children.      
 
This year I registered six industry codes, which will set in stone mandatory minimum requirements for social media services, internet carriage services, app distribution services, hosting services, equipment providers and search engines, requiring them to take action against child sexual exploitation and abuse.  
 
For messaging, gaming, dating, file storage and other websites, however, I found that the protections proposed by industry in their draft codes did not provide appropriate community safeguards. eSafety will develop mandatory standards for those services to ensure they also provide the safeguards needed. 
 
And this is just the beginning. We will continue to shine a light on industry, issuing more notices to more companies and covering a wider range of online harms. But of course, companies don’t need to wait for a notice from eSafety to be more transparent and to start making their services safer; they can do this today.  
 
When it comes to the protection of children online, the Australian community expects all tech companies to do more and do better. The cost of continued inaction is simply too high.