Editing our photos is not a new phenomenon. Making the sun shine just a little brighter in our holiday snaps, airbrushing out that pesky food stain or even superimposing an absent student onto the class photo are things that can be done easily and quickly these days.
Modifications that we make to our own images and online records are one thing, but what about when others significantly alter our photos and reframe how we are represented online? What if images or videos of you were circulated showing you doing something you were never involved in, or saying words you never uttered?
Of course, context matters. You may not be bothered if your face is superimposed onto your idol, or your voice is changed to theirs as you are shown making a hilarious quip. But what if your face is superimposed onto a porn-star’s body and the video is sent to your family, friends or work colleagues? Or a video sent round your network shows you spouting hate speech and vitriol at a march, even though it runs contrary to your actual beliefs? The impact and consequences of these examples are wide-ranging and deeply troubling.
While the technology for ‘photoshopping’ images and editing videos has been around for years, recent advances in artificial intelligence and machine learning make it possible to generate fake content that is far more realistic and harder to debunk. These are known as deepfakes – and developments in this space are happening very quickly.
The risk of misuse
Only a few months ago, Samsung announced that its new AI system could produce credible deepfakes from just one photo. This was a major change in capability, as until then a large volume of high-quality data had been required. There have also been developments in the real-time generation of deepfakes that need no human input at all. As a result, apps and tools are emerging that anyone can use quickly and easily.
Innovations to help identify, detect and confirm whether content is authentic are also advancing. A number of industry and government initiatives are underway, with most focusing on flagging content if there is a high probability that it has been tampered with. Blockchain technology, digital watermarking, digital signatures and other tracking and tracing tools are all being deployed and tested as possible deepfake detection solutions. Staying a step ahead is an ongoing arms race.
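To show the basic idea behind one of these approaches, the sketch below uses a digital signature: if a file's bytes change after the publisher signs it, verification fails and the copy can be flagged as possibly doctored. It is an illustration only, written in Python with the third-party cryptography package and a hypothetical publisher key, not the implementation of any particular initiative.

```python
# Illustrative sketch only: checks whether a media file still matches the
# digital signature its publisher attached when it was released.
# Assumes the third-party 'cryptography' package (pip install cryptography)
# and a hypothetical publisher key pair; real provenance schemes involve
# far more than this.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The publisher signs the original file's bytes at publication time.
publisher_key = Ed25519PrivateKey.generate()
original_media = b"...original video bytes..."
signature = publisher_key.sign(original_media)

def appears_authentic(received_media: bytes) -> bool:
    """Return True if the received copy matches the published signature."""
    try:
        publisher_key.public_key().verify(signature, received_media)
        return True   # bytes are unchanged since signing
    except InvalidSignature:
        return False  # file was altered after publication

print(appears_authentic(original_media))                 # True
print(appears_authentic(b"...tampered video bytes..."))  # False
```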
The deepfakes trend highlights the risks involved in deploying technology without first adequately considering the ethics, safety and human rights implications. Already there is ample evidence that deepfakes are being used to target, scam and abuse people. One recent example is the case of a UK businessman who was duped into transferring a huge sum of money into a bank account when deepfake technology was used to convincingly mimic his supervisor’s voice. Another pertinent example was the recent release of DeepNude, an app that ‘removes’ women’s clothing from photos to make them appear naked. Due to public outcry, the app was withdrawn by the developers soon after launch. But the technology is still being shared online by those who copied the software before it was removed or reverse-engineered it themselves.
Promoting Safety by Design
eSafety’s Safety by Design initiative highlights the importance of considering the individual and societal impacts that a technology will have, before it is brought to market or released as open-source code or software. Any benefits the technology brings should be weighed against the negatives. Risks and harms should be assessed and addressed from concept through to deployment and beyond.
There are a number of steps that industry can already take to reduce the potential harms from deepfakes. Where potential deepfakes are concerned, companies can develop and deploy tools which alert end-users that the content they are looking at or listening to may be doctored material – work on these detection technologies is continuing apace. Technology platforms and providers can also make it clear that the distribution of deepfake content that targets or abuses others is against the terms of service, privacy policies and community standards. Illegal or abusive deepfakes can be added to content screening and removal processes, and their distribution limited through the use of filtering and de-listing tools.
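As a rough illustration of how such content screening can work once a deepfake has been confirmed as abusive, the sketch below checks new uploads against a hypothetical list of hashes of previously reported material. Real systems generally rely on perceptual hashes that survive re-encoding and resizing; an exact match is used here only to show the pattern.

```python
# Minimal sketch of hash-based content screening, assuming a hypothetical
# set of hashes of material already confirmed as abusive. The values and
# file bytes are placeholders for illustration.
import hashlib

known_abusive_hashes = {
    "9f2b5c0e...",  # hypothetical placeholder hash of reported material
}

def should_block(uploaded_bytes: bytes) -> bool:
    """Return True if the upload matches a known abusive item."""
    digest = hashlib.sha256(uploaded_bytes).hexdigest()
    return digest in known_abusive_hashes

if should_block(b"...uploaded file bytes..."):
    print("Match found: withhold the upload and queue it for review.")
else:
    print("No match: content proceeds through normal moderation.")
```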
While some platforms are already meeting some of the Safety by Design objectives, at eSafety we are committed to helping industry achieve all of its aims. This includes assessing the impact that both encryption and quantum computing will have on the tools and processes being developed now, so that we stay ahead of emerging issues.
Helping stop abuse
For victims of deepfakes in Australia, there is a safety net.
eSafety’s image-based abuse scheme was designed to address the misuse of online technology, and we have helped remove from public view thousands of intimate images that were posted without consent – whether they were real or fake. So it is important for Australians to know they can also report any images that have been modified to appear nude or sexual, and we will give them support and help with removal.
Equally, eSafety can help children who have been seriously cyberbullied through deepfakes – whether they have been threatened, intimidated, harassed or humiliated using this type of image. We will work with them to get the material taken down.
In addition, eSafety has the power to take down computer-generated and doctored images of illegal content, such as child sexual abuse material. By removing this content and limiting further circulation, we can provide some relief for victims of deepfake abuse.
eSafety also helps to build the critical reasoning skills needed to question and evaluate whether content is real, by incorporating information about trends such as deepfakes and fake news into our school resources. We will also endeavour to work with colleagues across government and in the education sector to integrate critical digital, media and social literacy skills into school curricula and beyond.
Finally, we also know that using deepfakes and similar images to target and abuse others is not simply a technology problem. It is the result of social, cultural and behavioural issues being played out online. This means deepfake abuse is more likely to impact on women, minorities and marginalised groups, which is precisely why eSafety is honing its vulnerability lens to triage and respond to those Australians who are particularly vulnerable to such online harms. There is no question we all need to play a part in raising the standard of behaviour online, by drawing the line at unacceptable actions and interactions.
Each one of us can amplify the positive. We can report, block and delete the negative. We can stop sharing and spreading content that de-humanises, threatens or places human dignity at risk. As with everything that harms our communities, making improvements is a shared responsibility – and the time to step up is now.