Generative AI – position statement

‘Generative AI’ is a term used to describe the process of using machine learning to create digital content such as new text, images, audio, video and multimodal simulations of experiences.

This machine learning is often called ‘artificial intelligence’ or ‘AI’. What sets generative AI apart from other forms of AI is that its models can create new outputs, rather than only making predictions or classifications like other machine learning systems.

Background

The machine learning that underpins AI is computer modelling that processes information using artificial neural networks, which are mathematical systems loosely modelled on the human brain that ‘learn’ skills by finding statistical patterns in the sets of data used to ‘train’ them. The neural networks used for generative AI are built on enormous datasets that are processed using vast numbers of ‘parameters’, which are the factors or limits that define the way something can be done or made.

Some examples of generative AI applications include: 

  • text-based chatbots, or programs designed to simulate conversations with humans, such as Anthropic’s Claude, Bing Chat, ChatGPT, Google Bard, and Snapchat’s My AI 
  • image or video generators, such as the Bing Image Creator, DALL-E 2, Midjourney, and Stable Diffusion 
  • voice generators, such as Microsoft’s VALL-E.

Opportunities and risks

Generative AI has improved rapidly in recent years, driven by the availability of more training data, enhanced artificial neural networks with more parameters, and greater computing power. Some experts now claim AI systems are moving rapidly towards ‘human-competitive intelligence’. This could impact almost every aspect of our lives, in both positive and negative ways.

Some possible opportunities for generative AI tools to enhance online safety include:  

  • detecting and moderating harmful online material more effectively and at scale 
  • providing evidence-based support at scale that is easy to understand and age appropriate, meeting the needs of young people
  • enhancing learning opportunities and digital literacy skills  
  • enabling more effective and robust conversations about consent for data collection and use.

The possible threats related to generative AI are not just theoretical – real-world harms are already occurring.

These harms can occur unintentionally because of flaws in the data or models used in generative AI, such as when biased information is used for training. They can also happen when generative AI is used in intentionally harmful ways. This includes misusing generative AI to create child sexual exploitation and abuse material based on images of real children, or to generate sexual content that appears to show a real adult and then blackmail them by threatening to share it. Generative AI can also be used to manipulate and abuse people, because it can impersonate human conversation convincingly and respond in a highly personalised manner that often resembles a genuine human response.

Systemic risks also need urgent consideration. Generative AI is being incorporated into major search engines, productivity software, video conferencing and social media services, and is expected to be integrated across the digital ecosystem. Attention should be paid to understanding the human risks of each application, how to prevent those risks, what to do if harm occurs, what further research is required, and how to ensure there is adequate transparency and accountability from industry.  

Everyone involved – including technology developers, downstream services that integrate or provide access to the technology, regulators, researchers and the general public – should be aware of the potential harms of generative AI and play a role in addressing them. 

Emerging good practice

The online industry can take a lead role by adopting a Safety by Design approach. Safety by Design is built on three principles: service provider responsibility, user empowerment and autonomy, and transparency and accountability. Technology companies can uphold these principles by making sure they incorporate safety measures at every stage of the product lifecycle.  

eSafety recognises the need to safeguard the rights of users, preserve the benefits of new tools and foster healthy innovation.

Advice for users

It is helpful to understand the systems, processes and business models that shape how generative AI content is developed. When a service generates content, it may use data drawn from the open web. This could include information about you or from your digital footprint, such as your chat history or ‘conversations’ with generative AI tools. You may be able to manage your data by turning off your chat history and choosing which conversations are used to train AI models.

You can find information about popular generative AI-enabled services such as Bing, Google Bard, ChatGPT and GPT-4 in The eSafety Guide.

You can report seriously harmful online abuse and illegal or restricted content to eSafety.

Find out more

Further information about generative AI is provided in eSafety’s position statement. 

Topics covered

  • Generative AI lifecycle  
  • Risks, harms and opportunities  
  • Regulatory challenges and approaches 
  • eSafety’s approach 
  • Emerging good practice and Safety by Design measures 
  • Advice for users 

Download a copy

Click on this file link to download the full position statement:

Last updated: 15/08/2023