Thank you for the opportunity to address the Committee.
I’ll begin by noting that eSafety did not provide a submission to this Inquiry, and by clarifying that it is not my role to comment on the detail of Government policies or legislation another regulator might implement.
While mis and disinformation sit outside eSafety’s formal remit, I would add that we regard them as adjacent harms that play a significant role in many of the cases reported to us through our online harms reporting schemes.
For example, misogynist tropes and disinformation designed to deliberately damage women’s reputations or undermine their credibility feature heavily in coordinated pile-ons and other online attacks targeting women in the spotlight.
That includes lifelike deepfake images and video showing women and girls doing and saying things they did not do or say.
Such AI-generated material is already being reported to us through our image-based abuse scheme. It’s as deceptive as it is harmful.
Mis and disinformation of this kind can also feature in the coercive control techniques abusive partners use against women, through a phenomenon known as Technology Facilitated Abuse, or TFA, which is almost ubiquitous in situations of domestic and family violence (DFV).
Given the powerful role algorithms play in further amplifying content, we would see volumes of mis and disinformation, alongside other forms of online harm, as important indicators of a platform’s overall health.
If you will permit the analogy, I’ll compare the process at play here to the workings of an internal combustion engine.
The engine's primary job is to convert fuel (input data) into movement.
However, depending on the engine's design and efficiency, it may also produce other, harmful outputs, which are unintended consequences of the process.
If the fuel is unrefined or unclean, the outputs may be too – a lot like abuse and other online harms that now dominate so many of our feeds.
Without proper mitigations in place, poison in leads to noxious fumes out. Like a faulty exhaust, the algorithm spreads those fumes widely, creating an overall toxic and polarised online environment.
Just as engines need filters, catalytic converters and regular maintenance to reduce harmful emissions, algorithms need thoughtful design, regulation and appropriate systems and processes in place to minimise unintended negative consequences.
In common with the auto industry, Big Tech is keen to be seen doing the right thing.
All the main technology companies have made public commitments to ensuring their platforms do not become havens of lies, deception, deceit and polarisation.
They also have rules to combat harmful content and conduct because online toxicity does not attract advertisers or help retain users.
Their mis and disinformation policies promise robust measures including:
- third-party fact checking
- moderation, both human and automated
- warnings and labels for suspected instances of misinformation or disinformation
- promotion of authoritative sources
- user reporting
- and penalties for violation of their guidelines.
So how successful are they? Right now, the answer is we don’t really know.
Just as cars require independent inspection to ensure compliance with basic environmental and safety standards, algorithmic inspection and audits can help ensure platforms are similarly “road worthy”.
Under the existing Online Safety Act, eSafety operates one of the few regulatory processes anywhere capable of holding big technology companies to account in this way.
Our transparency powers focus on harms relevant to our remit under the Government’s Basic Online Safety Expectations.
We’ve already revealed significant inconsistencies and shortcomings in the way companies address key safety issues such as child sexual exploitation material.
Our mandatory industry codes and standards – developed with these insights in mind – now create an enforceable obligation on companies to take reasonable steps towards mitigating the spread of this material on their platforms.
From eSafety’s perspective, therefore, asking tech companies to explain how they are enforcing their own declared house rules does not seem unreasonable or out of step with existing regulatory schemes.
After all, Parliament and the public demand transparency from the public service through questions on notice and freedom of information, and we test vehicles approved for Australian roads to ensure their roadworthiness.
As I like to say, sunlight is the best disinfectant. Without meaningful transparency, there can be no accountability.
*This is an edited version of eSafety Commissioner Julie Inman Grant’s opening statement to the Senate Environment and Communications Legislation Committee inquiry into the Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024.