Artificial intelligence (AI) transparency statement
The Digital Transformation Agency's 'Policy for the responsible use of AI in government' sets out the Australian Government's approach to embracing the opportunities of artificial intelligence (AI) and providing for the safe and responsible use of AI in the Australian Public Service.
The eSafety Commissioner (eSafety) is an independent statutory office holder supported by staff from the Australian Communications and Media Authority (ACMA). eSafety adheres to this policy, supporting its principles under the 'enable, engage and evolve' framework. We will be transparent in our internal use of AI technology as we explore, evaluate and adopt it to benefit our work and our stakeholders.
Currently, eSafety does not plan to use AI in services that the public may directly interact with or be significantly impacted by. If this changes, we will update this statement to detail our use of AI.
AI use
We may employ AI across various corporate and enabling functions, including software engineering, data analytics and workplace productivity.
Software engineering
eSafety may use various forms of AI to assist in software development, debugging and testing when developing digital and data solutions and administering eSafety systems.
Data analytics
eSafety sees benefits in using AI to assist with data management and with obtaining insights from data through interrogation and analysis. These data and insights can influence our approach to regulation and policy, and inform our advice to government on legislation.
Workplace productivity
We see the potential benefits in using AI to improve workplace productivity for staff including:
- helping answer questions from staff and extracting information from collections of documents, such as academic research
- summarising documents, emails, instant messages and other content
- summarising and transcribing meetings and calls
- performing low-level graphic design and image creation as part of a drafting process
- identifying potential key words, themes or topics in documents, records and other information
Image processing
Due to the nature of the imagery and digital media that eSafety staff are exposed to, and to protect the wellbeing of eSafety personnel, eSafety is exploring the use of AI image analysis to determine the content of images and limit unnecessary repeat exposure to harmful content.
Monitoring and governing AI use
The ACMA and eSafety have developed an overarching cross-agency approach to AI and have established an AI Steering Committee to assess the opportunities and risks of using AI within the ACMA and eSafety. The Steering Committee considers the benefits, risks and guidelines for AI use cases, and continues to raise staff awareness of AI.
An internal AI policy ensures the responsible, ethical and secure use of AI tools while safeguarding the privacy, confidentiality and integrity of agency data and operations. Under the policy, generative AI tools must not be used unless they have been specifically approved under a robust approval, assurance and evaluation process and staff have completed AI training.
The AI Steering Committee regularly reviews AI projects and solutions of medium to high risk to ensure compliance with the policy and with AI ethical principles.
Accountable official
The ACMA’s Chief Information and Digital Officer is designated as the accountable official for both ACMA and eSafety.
AI transparency statement
This AI transparency statement was first published on our website in February 2025. It will be reviewed annually, or whenever a significant change is made to our approach to AI.
Last updated: 27/02/2025