Thank you for the opportunity to address the Committee. I come to you from the land and waters of the Gadigal people of the Eora Nation and I would like to pay my respects to their elders past, present and emerging.
There is compelling and concerning data showing that explicit deepfakes have increased on the internet by as much as 550 per cent year on year since 2019. It is shocking to note that pornographic videos make up 98 per cent of the deepfake material currently online, and that 99 per cent of that imagery depicts women and girls. So, let me start by saying that deepfake image-based abuse is not only becoming more prevalent, it is also profoundly gendered and incredibly distressing to the victim-survivor.
To supplement eSafety's submission, I'd briefly like to describe two areas of deepfake abuse I am particularly concerned about: the perpetration of deepfake image-based abuse, and the creation of synthetic child sexual abuse material. While I'm aware CSAM is covered elsewhere under the Criminal Code, the issues are inextricable and germane to this broader conversation, and to the melding of harm types we are seeing come into our investigative division.
I’d also like to state upfront that we support the deepfake legislation at the heart of today’s hearing. Criminalisation of these actions is entirely appropriate, serving an important deterrent function while expressing our collective moral repugnance to this kind of conduct.
I believe the Bill adds powerfully to the existing interlocking civil powers and proactive safety-by-design interventions championed by eSafety. Through these, we should feel justified in putting the burden on AI companies themselves to engineer out potential misuse.
To demonstrate, here is the description of a popular open-source AI “nudifying app”:
“Nudify any girl with the power of AI. Just choose a body type and get a result in a few seconds.”
And another:
“Undress anyone instantly. Just upload a photo and the undress AI will remove the clothes within seconds. We are the best deepnude service.”
It is difficult to conceive of a purpose for these apps outside of the nefarious.
Some might wonder why apps like this are allowed to exist at all, given their primary purpose is to sexualise, humiliate, demoralise and denigrate girls, or to create child sexual abuse material of them, according to the predator's personal predilection. A Bellingcat investigation found that many such apps are part of a complex network of nudifying apps, owned by the same holding company, which effectively obscures the primary purpose of these apps in order to evade detection and enforcement action.
Shockingly, thousands of open-source AI apps like these have proliferated online, and many are free and easy for anyone with a smartphone to use. These apps make the abuse simple and cost-free for the perpetrator, while the cost to the target is one of lingering and incalculable devastation.
I'm pleased to say that the mandatory standards we have tabled in Parliament are an important regulatory step towards ensuring that proper safeguards against child exploitation are embedded in high-risk, consumer-facing open-source AI apps like these. The onus will fall on AI companies to do more to reduce the risk that their platforms can be used to generate highly damaging content, such as synthetic child sexual exploitation material and deepfake image-based abuse involving under-18s.
These robust safety standards will also apply to the platform libraries that host and distribute these apps.
Companies must have robustly enforced terms of service and clear reporting mechanisms to ensure that the apps they host are not being weaponised to abuse, humiliate and denigrate children.
eSafety is concerned that these apps are using sophisticated monetisation tactics and are increasingly being found on mainstream social media platforms, boosting their visibility and availability, particularly to younger audiences. A recent report by research firm Graphika found a 2,408 per cent increase in referral links to non-consensual pornographic deepfake sites across Reddit and X in 2023.
We are also concerned about the impacts of multi-modal forms of generative AI. Text-to-video prompts, for example, can create hyper-realistic synthetic child sexual abuse material, while highly accurate voice cloning and manipulative chatbots could supercharge grooming, sextortion and other forms of sexual exploitation of young people at scale.
We have already begun to see deepfake child sexual abuse material, non-consensual deepfake pornography and cyberbullying content reported through our complaints schemes. Thanks to early technology horizon-scanning work that eSafety conducted on deepfakes in 2020, each of our complaints schemes covers synthetically generated material.
Our concerns about these technologies have led to strong actions, including the mandatory industry codes tackling child sexual abuse and pro-terror material, which apply to search engines and social media sites.
In the case of deepfake pornography against adults – which we treat as image-based abuse – we will continue to provide a safety net for Australians whose digitally created intimate images proliferate online.
We have a high success rate in this area and, using our remedial powers under the image-based abuse scheme, we have recently pursued ground-breaking civil action against an individual in the Federal Court for breaching the Online Safety Act through his creation of deepfake image-based abuse material.
As we outline in our submission to this Committee, the harms caused by image-based abuse have been consistently reported.
They include negative impacts on mental health and career prospects, as well as social withdrawal and interpersonal difficulties.
Victim-survivors have also described how their experiences of image-based abuse radically disrupted their lives, altering their sense of self, identity and their relationships with their bodies and with others.
We will look to the Online Safety Act review, and to Government, to consider what other tools we might need to prevent the creation and hosting of apps and other technologies whose sole purpose is to harm and abuse others through the creation of deepfake porn.
It's also worth noting another concern we share with the hotline and law enforcement community: the impact that the continued proliferation of synthetic child sexual abuse material will have on the volumes of content our investigators are managing. We want to ensure that our efforts are directed towards classifying and identifying material and finding the real children who are being abused, but there are also other major technical concerns.
The first is that deepfake detection tools are lagging significantly behind the freely available tools being used to create deepfakes. These fakes are becoming so realistic that they are difficult to discern with the naked eye, and they rarely carry content-identification or provenance watermarks that would help detect when material is AI-generated.
The second is that synthetic child sexual abuse material will seldom be recognised as "known child sexual abuse material". For material to be 'hashed' (or fingerprinted) by an organisation such as the US-based NCMEC, the National Center for Missing and Exploited Children, or by us, it must first be reported by law enforcement or the public. This material can be produced and shared faster than it can be reported, triaged and analysed.
This fact is already challenging the global system of digital hash matching we’ve worked so hard to build over the past two decades, as AI-generated material overwhelms investigators and hotlines.
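To make the mechanics concrete, the sketch below is illustrative only: it uses a plain cryptographic hash and hypothetical folder names and placeholder hash values, whereas real hotline systems rely on robust perceptual hashing such as PhotoDNA. It shows the basic shape of hash matching, where a file is reduced to a fingerprint and checked against a database of fingerprints of previously verified material, and why newly generated synthetic material, which has never been reported or hashed, slips straight through that check.

```python
# Illustrative sketch only: the basic shape of "hash matching" of known material.
# Real systems use robust perceptual hashes (e.g. PhotoDNA), not plain SHA-256,
# and the folder name and hash values below are hypothetical placeholders.
import hashlib
from pathlib import Path

def fingerprint(path: Path) -> str:
    """Reduce a file to a hex digest that acts as its 'fingerprint'."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

# Fingerprints of material that has already been reported, triaged and verified
# by law enforcement or a hotline. Real databases hold millions of entries.
KNOWN_HASHES: set[str] = {
    "placeholder-hash-value",  # hypothetical entry for illustration only
}

def is_known_material(path: Path) -> bool:
    # Newly generated synthetic material will almost never appear in this set,
    # which is why it evades matching until it is reported and hashed.
    return fingerprint(path) in KNOWN_HASHES

if __name__ == "__main__":
    for candidate in sorted(Path("uploads").glob("*.jpg")):  # hypothetical folder
        status = "matched known material" if is_known_material(candidate) else "not in database"
        print(candidate.name, status)
```

The point of the sketch is simply that matching depends entirely on the material having already been seen, verified and fingerprinted; AI-generated material now arrives in volumes that outpace that reporting pipeline.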
Finally, reflecting the relative infancy of the AI industry, and particularly of companies using open-source AI models, NCMEC has noted that only five AI companies have made reports to it about AI-generated material.
That is five out of an estimated 15,000 US-based AI companies in 2023.
Instead, 70 per cent of the reports of AI-generated content to NCMEC were made by the social media platforms that had been used to distribute the material.
Unsurprisingly, none of the sites or apps using open-source models reported CSAM, underscoring why our standards are such a vitally important intervention.
I hope the Committee finds that our submission provides detailed feedback on the Bill in its current draft, including on the issue of threats to share intimate images and the related challenges posed by sexual extortion, or sextortion.
We have seen reports of sexual extortion increase at an alarming rate in recent years, an indication of the havoc this insidious harm is inflicting on the lives of countless targets across Australia, particularly young men and boys.
Overall, I welcome these strengthened protections against image-based abuse. I also hope that, as part of the Bill's implementation, there is a broader education and awareness-raising campaign for the whole community.
Having spent the past seven and a half years elevating our education messaging and prevention efforts, I would say that the many forms of deepfake abuse are issues that most parents are not yet across.
This and sexual extortion are two major issues around which eSafety is ramping up outreach efforts. We are already delivering webinars for teachers around AI in the classroom and we want to ensure that parents are also prepared for this next vector of harmful online abuse that could impact their children.
eSafety's whole-of-community, multidimensional regulatory remit complements a criminal justice response to image-based abuse and synthetic child sexual abuse material.
As I have said many times, we will not arrest our way out of these challenges; indeed, many victim-survivors who come to us through our complaints schemes simply want the imagery taken down rather than to pursue a criminal justice pathway. Nor will we be able to simply regulate our way out of these harms. Instead, we will need to ensure that safety by design is a consideration at every stage of the design, development and deployment of both proprietary and open-source AI apps.
The risks are simply too high if we let these powerful apps proliferate into the wild without adequate safeguards from the get-go. And, if the primary purpose for the creation of the application is to inflict harm and perpetuate abuse, then we should take a closer look at why such apps are allowed to be created in the first place. I believe that it is only through this multi-faceted approach that we can tackle these rapidly proliferating AI-generated harms. We support this important Bill and will also pursue effective prevention, protection and proactive efforts to tackle these challenges from multiple vantage points.
Once again, I thank the Committee for this opportunity to contribute and welcome your questions.