Deepfake trends and challenges — position statement
What are deepfakes?
A deepfake is a digital photo, video or sound file of a real person that has been edited to create an extremely realistic but false depiction of them doing or saying something that they did not actually do or say.
Deepfakes are created using artificial intelligence software that draws on photos or recordings of the person to model their appearance or voice and generate new content.
Background
Manipulation of images is not new, but over recent decades digital recording and editing techniques have made it far easier to produce fake visual and audio content, not just of humans but also of animals, machines and even inanimate objects.
Advances in artificial intelligence (AI) and machine learning have taken the technology even further, allowing it to rapidly generate content that is extremely realistic, almost impossible to detect with the naked eye and difficult to debunk. The resulting photos, videos and sound files are called ‘deepfakes’, a term that combines ‘deep learning’ and ‘fake’. For more information about the advancement of AI, refer to eSafety’s position statement on Generative AI.
How are deepfakes created?
To generate convincing content, deepfake technologies often only require small amounts of genuine data (images, footage or sound recordings). Indeed, the field is evolving so rapidly that deepfake content can now be generated with little or no human supervision, using generative adversarial networks (commonly referred to as GANs).
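To illustrate the adversarial idea behind GANs, here is a minimal sketch of the training loop in Python (PyTorch), using toy one-dimensional data rather than images. It shows only the core mechanism: a generator learns to produce samples that a discriminator can no longer tell apart from real ones. All names, sizes and parameters here are illustrative assumptions, not part of any real deepfake tool.

```python
# Minimal sketch of a GAN training loop (PyTorch). Illustrative only:
# real deepfake pipelines train large image or audio models, not this toy.
import torch
import torch.nn as nn

# "Real" data for the toy example: samples from a normal distribution around 4.
def real_batch(n):
    return torch.randn(n, 1) * 1.25 + 4.0

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # generator: noise -> sample
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator: sample -> logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # 1) Train the discriminator to tell real samples from fakes.
    real, fake = real_batch(64), G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))  # generator wants D to answer "real"
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(G(torch.randn(5, 8)).detach().squeeze())  # samples should cluster roughly near 4
```

Production deepfake software applies this same adversarial principle at far larger scale, with networks trained on images or recordings of the target person.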
Deepfakes have numerous positive applications in entertainment, education, medicine and other fields, particularly for modelling and predicting behaviour. However, the potential for abuse is growing rapidly as digital distribution platforms become more publicly accessible and the tools to create deepfakes become relatively cheap, user-friendly and mainstream.
'Deepfake porn', fake news and hoaxes
Deepfakes have the potential to cause significant damage. They have been used to create fake news, false pornographic videos and malicious hoaxes, usually targeting well-known people such as politicians and celebrities. Potentially, deepfakes can be used as a tool for identity theft, extortion, sexual exploitation, reputational damage, ridicule, intimidation and harassment.
A person who is targeted may experience financial loss, damage to professional or social standing, fear, humiliation, shame, loss of self-esteem or reduced confidence. Reports of misrepresentation and deception could undermine trust in digital platforms and services, and increase general levels of fear and suspicion within society.
Risks of deepfakes
As advances in deepfake technology gather pace, and apps and tools emerge that allow the general public to produce credible deepfakes, concerns are growing about the potential for harm to both individuals and society.
As noted in eSafety Commissioner Julie Inman Grant’s opening statement to the Senate Standing Committee on the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024, deepfake detection tools are lagging behind the technology itself. Open-source AI apps have proliferated online; they are often free and easy to use, and can create damaging content including deepfake image-based abuse material and hyper-realistic synthetic child sexual abuse material. Companies could be doing more to reduce the risk that their platforms will be used to generate damaging content.
However, using deepfakes to target and abuse others is not simply a technology problem. It is the result of social, cultural and behavioural issues being played out online. The Australian Strategic Policy Institute’s report, Weaponised deepfakes (April 2020), highlights the challenges to security and democracy that deepfakes present — including heightened potential for fraud, propaganda and disinformation, military deception and the erosion of trust in institutions and fair election processes.
The risks of deploying a technology without first assessing and addressing its potential impacts on individuals and society are unacceptably high. Deepfakes provide yet another example of the importance of Safety by Design, which helps to anticipate and engineer out misuse from the outset.
How to spot a deepfake
Deepfake technology is advancing rapidly and it can be difficult to detect when it's being used. But there are sometimes signs that can help identify lower-tech fake photos and videos.
Check for:
- blurring, cropped effects or pixelation (small box-like shapes), particularly around the mouth, eyes and neck
- skin inconsistency or discolouration
- inconsistency across a video, such as glitches, sections of lower quality and changes in the lighting or background
- badly synced sound
- blinking or other movement that seems unnatural or irregular
- gaps in the storyline or speech.
If in doubt, question the context: ask yourself whether it is what you would expect that person to say or do, in that place, at that time. Some platforms are identifying and labelling deepfake or ‘manipulated’ content to alert their users. The sketch below shows how one visual cue from this checklist, blink regularity, might be checked automatically.
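As a rough illustration only, the following Python sketch estimates blink frequency in a video using the eye aspect ratio (EAR) computed from MediaPipe face landmarks. The file name, landmark indices and thresholds are assumptions made for the example; unnatural blink patterns are just one weak signal, and real deepfake detectors combine many cues.

```python
# Rough sketch of one automated cue: blink regularity via the eye aspect
# ratio (EAR). Assumes OpenCV and MediaPipe are installed; indices and
# thresholds below are illustrative choices, not a validated detector.
import cv2
import mediapipe as mp
import numpy as np

# Commonly used MediaPipe Face Mesh indices around the left eye (assumed).
LEFT_EYE = [33, 160, 158, 133, 153, 144]

def eye_aspect_ratio(pts):
    # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); low values mean a closed eye.
    p1, p2, p3, p4, p5, p6 = pts
    return (np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)) / (2 * np.linalg.norm(p1 - p4))

mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False)
cap = cv2.VideoCapture("suspect_video.mp4")  # hypothetical input file
ears = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    res = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if res.multi_face_landmarks:
        lm = res.multi_face_landmarks[0].landmark
        h, w = frame.shape[:2]
        pts = [np.array([lm[i].x * w, lm[i].y * h]) for i in LEFT_EYE]
        ears.append(eye_aspect_ratio(pts))
cap.release()

# People blink roughly every few seconds; a long stretch with no EAR dip
# below the (assumed) 0.2 threshold is one weak signal of synthetic video.
blinks = sum(1 for a, b in zip(ears, ears[1:]) if a >= 0.2 > b)
print(f"frames analysed: {len(ears)}, blink-like events: {blinks}")
```

A result of zero or very few blink-like events over a long clip would only justify closer scrutiny, not a conclusion; lighting, compression and camera angle can all distort this measure.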
eSafety approach
A holistic approach is needed to counter the negative impacts of deepfakes. eSafety leads this approach in Australia, working with industry and users to address the issue.
Our work includes the following:
- Raising awareness about deepfakes, so Australians have a reasoned, evidence-based overview of the issue and are well informed about the options available to them.
- Supporting people who have been targeted through a complaint reporting system. Any Australian whose photo or video has been digitally altered and shared online can contact eSafety for help to have it removed.
- Preventing harm through developing educational content about deepfakes, so Australians can critically assess online content and more confidently navigate the online world.
- Supporting industry through our Safety by Design initiative which helps companies and organisations to embed safety into their products and services.
- Supporting industry efforts to reduce or limit the redistribution of harmful deepfakes by encouraging them to develop:
  - policies, terms of service and community standards on deepfakes
  - screening and removal policies to manage abusive and illegal deepfakes
  - methods to identify and flag deepfakes to their community.
How eSafety can help
Australians whose images or videos have been altered and posted online can contact eSafety for help to have them removed.
eSafety investigates image-based abuse, which means sharing, or threatening to share, an intimate photo or video of a person online without their consent. This includes intimate images that have been digitally altered, such as deepfakes.
We can also help to remove:
- online communication to or about a child that is seriously threatening, seriously intimidating, seriously harassing or seriously humiliating, known as cyberbullying.
- illegal and restricted material that shows or encourages the sexual abuse of children, terrorism or other acts of extreme violence.
Find out more about reporting harmful online content to eSafety.
Removing deepfakes on Google
If you want to stop deepfakes or explicit non-consensual fake imagery of yourself from appearing in Google search results, you can submit a removal request directly to Google. Google may be able to assist if your request meets its requirements. Find out more about how to do this at Google.
Last updated: 19/08/2024