eSafety Commissioner's Opening Statement to the Inquiry into the Social Media (Anti-Trolling) Bill 2022.
I would like to make a statement for the record.
Listening to the testimony already given, there has been a lot of conflation of some important issues around online harm, and I'd like to say upfront that none of us are defamation law experts; we are experts in serious online abuse, how the internet works, and how the social media companies respond.
I'd like to try to differentiate some of the issues up front before we take questions.
Thank you so much for the opportunity to talk to the Committee today.
Whilst our concerns were laid out in our submission, the primary issues with this legislation are conflation, confusion, and the oversimplification of very complex technical and legal challenges; the creation of public expectations that cannot be met by any government agency; and, frankly, the absence right now of a dedicated place for victims to go when they encounter online defamation.
There is no question that online defamation remains a serious and intractable challenge that we must tackle.
Online defamation is particularly insidious because it can be created with great malice, with the touch of a button, but can be amplified online globally, instantaneously and with relative impunity. Defamation laws were developed for the print age, not the digital age, and cannot keep pace with the velocity and volume of social media commentary.
But to adjudicate this harm to reputation involves a long, complicated, expensive and arduous legal process that no doubt compounds the victim’s trauma. And there is no question that pseudonymity online creates an additional barrier to justice for those who have been harmed.
The problem eSafety is facing right now is that Australians today have nowhere to go to have online harm to reputation, or potential defamation, addressed or even adequately explained. So the floodgates have opened over the past few years, with eSafety serving as the receptacle.
As you know, on 23 January, my Office began the implementation of the new Online Safety Act, which passed Parliament in 2021. The new Act builds upon the strengths of our existing legislative framework to provide eSafety with expanded powers including a novel new Serious Adult Cyber Abuse scheme.
Since the new scheme commenced just over six weeks ago, we have received close to 500 complaints relating to adult cyber abuse. A third of these concerned potentially defamatory material and therefore did not meet the thresholds for serious adult cyber abuse under our scheme.
I note that several submissions to this Inquiry have raised concerns about the terminology of “trolling”, and I think it is also fair to say that conflating “defamation” with “trolling” is mixing apples with oranges.
Not only does the catch-all term "trolling" trivialise serious and harmful forms of online abuse, but you can also troll someone endlessly without defaming them. It's also important to note that defamation is concerned with reputation, not hurt. Though, as we see day in and day out, hurt can definitely follow.
And, when a victim comes to us experiencing distress and hurt, no one wants to be told that their suffering doesn’t meet the serious cyber abuse threshold and that they must hire a lawyer, but that is the unfortunate situation our investigators are finding themselves in at the moment.
We also must give our Act time to work, particularly the tools we have for addressing systemic failures of the technology ecosystem. The Act also allows us to raise the bar through the Basic Online Safety Expectations.
So, unlike our reporting schemes, the BOSE are not limited to specific forms of online harm but will allow us to lift the lid on some intractable, systemic issues.
Like the fact that the platforms are continuing to allow the creation of fake and imposter accounts to harm others. They are not preventing and responding to forms of volumetric abuse, including pile-ons.
They are not preventing recidivism, that is, the return of bad actors who have been seen to engage in consistently abusive behaviour. They are allowing them back onto their platforms and onto other platforms, so that bad actors can now forum shop and then crowdsource their online hate to attack a particular target.
I’d also note, just to close off, there’s a lot here and I will submit this for the record. But there are already billions of social media accounts that have been created without sufficient identity information. So, I think it is concerning that we are telling people that we will unmask the trolls.
We’ve been given new powers in relation to basic subscriber information to try and collect information when we need to further investigate who might be behind a given account.
And let me tell you, the information that the platforms hold is very patchy, and that's largely by design. What the platforms have tried to do for the past 15 years is get as many people and accounts created as possible, with as little friction as possible.
Really, all they need to do to monetise social media is target advertising, and that doesn't require knowing or understanding a user's identity.
There are also challenges, which I think have already been addressed, around validating real-names policies, or even collecting that identity information in a way that is safe and that protects users' identities with respect to privacy and data protection concerns.
So, we've got a complex set of issues and there is no simplistic solution. My take here is that we need to architect a cross-portfolio technology policy that doesn't put online safety provisions in privacy Bills or in defamation Bills, but looks at a broad spectrum of harms and where they should sit.
And to that end, we would welcome a warm referral point to someone in the Attorney-General's Department for when we receive this absolute flood of online defamation claims, because we are not resourced to adjudicate truth or falsity, or even to refer people to the different states and territories and their legal schemes.
That is probably best left to the Attorney-General's Department, but currently we don't have anywhere to refer the reports that come in to us.
Thank you.
Julie Inman Grant, eSafety Commissioner