eSafety demands answers from Twitter about how it’s tackling online hate

Australia’s eSafety Commissioner has issued a legal notice to Twitter seeking information about what the social media giant is doing to tackle online hate on the platform. 

eSafety has received more complaints about online hate on Twitter in the past 12 months than about any other platform, and reports of serious online abuse have increased since Elon Musk’s takeover of the company in October 2022.  

The rise in complaints also coincides with Twitter slashing its global workforce from 8,000 employees to 1,500, including staff in its trust and safety teams, and ending its public policy presence in Australia. 

It also follows the ‘general amnesty’ Musk announced in November, which reportedly saw 62,000 banned or suspended users reinstated to the platform, including 75 accounts with more than 1 million followers. 

eSafety Commissioner Julie Inman Grant said Twitter’s terms of use and policies currently prohibit hateful conduct on the platform, but rising complaints to eSafety and reports of this content remaining publicly visible suggest that Twitter is not enforcing its own rules. 

“We are seeing a worrying surge in hate online,” Ms Inman Grant said. “eSafety research shows that nearly 1 in 5 Australians have experienced some form of online hate. This level of online abuse is already inexcusably high, but if you are a First Nations Australian, are disabled or identify as LGBTIQ+, you experience online hate at double the rate of the rest of the population.

“Twitter appears to have dropped the ball on tackling hate. A third of all complaints about online hate reported to us are now happening on Twitter.

“We are also aware of reports that the reinstatement of some of these previously banned accounts has emboldened extreme polarisers, peddlers of outrage and hate, including neo-Nazis both in Australia and overseas.”

eSafety is far from being alone in its concern about increasing levels of toxicity and hate on Twitter, particularly targeting marginalised communities. 

Last month, US advocacy group GLAAD rated Twitter as the most hateful platform towards the LGBTQ+ community in its third annual Social Media Safety Index. 

Research by the UK-based Center for Countering Digital Hate (CCDH) found that slurs against African Americans appeared on Twitter an average of 1,282 times a day before Musk took over the platform. Afterwards, they more than tripled to an average of 3,876 times a day. 

The CCDH also found that users paying for Twitter Blue verification appeared to enjoy a level of impunity when it came to the enforcement of Twitter’s rules on online hate, compared with non-paying users, and even had their tweets boosted by the platform’s algorithms. 

The Anti-Defamation League (ADL) also found that antisemitic posts referring to Jews or Judaism soared more than 61 per cent just two weeks after Musk acquired the platform.

“We need accountability from these platforms, and action to protect their users. You cannot have accountability without transparency, and that’s what legal notices like this one are designed to achieve,” Ms Inman Grant said. 

This latest notice on online hate follows notices issued in February seeking answers from the platform (along with TikTok, Google’s YouTube, Twitch and Discord) on the steps these companies are taking to address child sexual exploitation and abuse, sexual extortion and the promotion of harmful content by their algorithms.

eSafety is currently assessing the responses to those notices and expects to release appropriate information in due course.  

If Twitter fails to respond to the most recent notice within 28 days, the company could face maximum financial penalties of nearly $700,000 a day for continuing breaches. 

eSafety's regulatory powers under the Online Safety Act cover serious adult online abuse as well as the cyberbullying of children and image-based abuse. In some cases, hate speech may meet the statutory thresholds for adult cyber abuse. eSafety encourages all individuals who feel they have been the target of online abuse to report to the platform and, if the platform fails to act, to report to eSafety at www.esafety.gov.au/report.

eSafety makes its regulatory decisions impartially and in accordance with the legislative test prescribed in the Online Safety Act. 
 

For more information or to request an interview, please contact: