
Our legislative functions

eSafety fosters online safety by exercising our powers under Australian government legislation, primarily the Online Safety Act 2021.

The Online Safety Act 2021 expanded and strengthened Australia's online safety laws, giving eSafety improved powers to help protect all Australians from the most serious forms of online harm.

We still have powers and functions under:

  • sections of the Telecommunications Act 1997 (Cth)
  • sections of the Criminal Code Act 1995.

What the Online Safety Act 2021 means

The Online Safety Act 2021 gives eSafety substantial new powers to protect all Australians across most online platforms and forums where people can experience abuse or be exposed to harmful content.

The Act enhances our ability to act quickly to protect victims of online abuse across our reporting schemes. It gives us the authority to compel online service providers to remove seriously harmful content within 24 hours of receiving a formal notice – halving the time previously allowed (though eSafety may extend this period in certain circumstances).

The Act stipulates what the Australian Government now expects of technology companies that operate online services. These expectations are set out in what the Act defines as the Basic Online Safety Expectations.

The Act also requires industry to develop new codes to regulate illegal and restricted content. This covers material ranging from the most seriously harmful, such as videos showing the sexual abuse of children or acts of terrorism, to content that is inappropriate for children, such as pornography.

We are working with the tech industry to develop these codes to guide online service providers on how to comply with their obligations under the new Act.

Together, the Basic Online Safety Expectations and the new industry codes outline how we will work with the tech industry to achieve safer online experiences for all Australians.

What the Online Safety Act 2021 changes

We have new powers across many areas that affect the online lives of Australians. These include:

  1. A world-first Adult Cyber Abuse Scheme for Australians 18 years and older, across a wide range of online services and platforms.
  2. A broader Cyberbullying Scheme for children to capture harms that occur on online services and platforms other than social media.
  3. An updated Image-Based Abuse Scheme to address the sharing and threatened sharing of intimate images without the consent of the person shown.
  4. Targeted powers to require internet service providers to block access to material showing abhorrent violent conduct.
  5. Stronger information-gathering powers.
  6. A modernised Online Content Scheme to regulate illegal and restricted content no matter where it’s hosted, bringing in app distribution services and search engines.
  7. New Basic Online Safety Expectations that ensure online service providers take reasonable steps to keep Australians safe online.
  8. New industry codes requiring online platforms and service providers to detect and remove illegal or restricted content.

Adult Cyber Abuse Scheme

For the first time anywhere in the world, Australia has an Adult Cyber Abuse Scheme. This scheme gives us the authority to require online service providers to remove online abuse that targets an Australian adult with the intention of causing serious harm.

Previously, we did not have the power to deal with this harm for adults. Now, we have formal powers. And they come with civil penalties for online service providers that do not comply with removal notices from eSafety.

High thresholds

The threshold for ‘serious abuse’ under the new Adult Cyber Abuse Scheme is high, and it has two parts.

First, the abuse must be intended to cause serious physical or psychological harm – for example, threats that provoke an emotional reaction going well beyond ordinary fear. Second, the abuse must also be menacing, harassing or offensive in all the circumstances.

Harm will generally be ‘serious’ when it endangers, or could endanger, a person’s life or could have some lasting effect on them.

The fact that somebody finds material offensive or harassing is not, on its own, enough for it to count as adult cyber abuse.

Examples likely to reach the threshold include:

  • publishing private or identifying information about an individual with malicious intent to cause serious harm
  • encouraging violence against an Australian adult based on their religion, race or sexuality
  • threats of violence that make a person afraid they will suffer physical harm.

Parliament set a very high threshold for what constitutes adult cyber abuse. This is to make sure we can direct online service providers to remove only seriously harmful content.

Defamation not included

The scheme does not cover defamation. Defamation is a civil action, determined by the courts, that is designed to balance the right to freedom of speech against protecting a person’s reputation from harm.

Defamation laws are about compensation for damage to reputations.

New obligations

Online service providers are now expected to take down seriously harmful content within 24 hours of receiving a formal notice to remove it, rather than the previous 48 hours. We may specify a longer timeframe in certain circumstances.

Australians who are the victims of seriously harmful online abuse now have somewhere to turn if online service providers fail to act on a legitimate complaint.

New penalties

If online service providers do not remove the material, we can seek civil penalties (fines of up to $555,000) against the provider of the service where the material appears.

If the individuals responsible for posting seriously harmful material do not comply with a removal notice, we can seek civil penalties (fines of up to $111,000) against the perpetrators.

We will consider enforcement action if we uncover non-compliance that creates an ongoing risk of harm.

Cyberbullying Scheme

Bolstering the existing scheme means we can order online service providers to remove material not just from social media sites, but from other online services popular with under 18s. This includes online game chats, websites, direct messaging platforms and hosting services.

Previously, the cyberbullying scheme was limited to 14 specific social media platforms across two tiers.

Tier 1 comprised 11 large social media services, including Twitter, TikTok and Snapchat. Tier 2 consisted of Facebook, Instagram and YouTube.

The scheme now applies to all social media services, relevant electronic services, and designated internet services.

New obligations

Under the old Act, if eSafety sought removal of content, the social media service had 48 hours to respond. Under the new Act, the online service provider has 24 hours to respond, though we may allow a longer period in certain circumstances.

We will require online service providers to remove cyberbullying content where that material reaches the threshold for being classed as cyberbullying.

Cyberbullying material is anything posted on a social media service, relevant electronic service or designated internet service that is intended to target an Australian child and has the effect of seriously humiliating, harassing, intimidating or threatening the child.

Find out more about cyberbullying complaints and reporting.

Image-Based Abuse Scheme

This world-first scheme recognises how damaging it can be when intimate images of someone are shared without their consent. The scheme also recognises the fear caused by someone threatening to do this.

eSafety may be able to help even where a person initially consented to their intimate images being posted online.

New obligations and penalties

Online service providers now have half the time – cut from 48 hours to 24 hours – to take down intimate images (including videos) after getting a removal notice from eSafety. Our aim is to get online service providers to remove the content as fast as possible.

The Act also gives us new powers to expose repeated failures to deal with image-based abuse. For example, we can name and shame online service providers that allow publication of intimate images without consent on two or more occasions in a 12-month period.

Our Image-Based Abuse Scheme includes heavy penalties for anyone who posts, or threatens to post, an intimate image without the consent of the person shown.

We take a graduated and proportionate approach to enforcement action to get results. However, we may take firm action as a first option in certain circumstances.

Definition of intimate image

An intimate image can show any of these:

  • private body parts in circumstances where a person would expect to have privacy.
  • private activity, such as getting undressed, using the toilet, showering or bathing, or sexual activity.
  • a person who would normally wear clothes of religious or cultural significance in public without them.

An intimate image can be fake or digitally altered. This includes a person’s face being photoshopped onto sexually explicit material, as well as ‘deepfake’ videos generated by apps that use artificial intelligence to make people appear to do and say things they never actually did or said. It also includes an intimate image that is tagged with a person’s name, implying it shows that person even when it does not.

Find out more about image-based abuse complaints and reporting.

Blocking powers for abhorrent violent conduct material

The abhorrent violent conduct material blocking powers allow eSafety to direct internet service providers to block access to material that could go viral and cause significant harm to the Australian community. This includes material that promotes, incites, instructs in or depicts abhorrent violent conduct, such as kidnapping, rape, torture, murder, attempted murder and terrorist acts.

What we can do

These powers allow us to respond to online crisis events, like the Christchurch terrorist massacre, by requesting or requiring that internet service providers block access to such extreme violent content.

An online crisis event can be declared when abhorrent violent conduct material is shared or spread online in a manner likely to cause significant harm to the Australian community, in circumstances warranting a rapid, coordinated and decisive response by industry and government.

Where we identify material that depicts abhorrent violent conduct – or promotes, incites, or instructs abhorrent violent conduct – we can choose to issue a blocking request (which is not enforceable) or a blocking notice.

It is intended that blocking requests and blocking notices will be issued in situations where an online crisis event has been declared by eSafety.

This is not a statutory requirement, but is set out in a protocol developed by eSafety, Australian internet service providers and the Communications Alliance (an industry body for the Australian communications sector).

The notice powers require internet service providers to block domain names that are providing access to the abhorrent violent material.

Under the Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019, eSafety can issue notices to content and hosting providers that alert them to the fact that they are providing access to abhorrent violent material (AVM).

If the provider does not remove the material, authorities can use this notification in a criminal prosecution to show the provider acted recklessly.

Read more in our fact sheet.

Stronger information-gathering powers

The Act gives eSafety new powers to gather information about people who use a social media service, relevant electronic service or designated internet service. It also gives us new powers to investigate alleged incidents.

The Act gives us new and stronger powers to help reveal the identities behind accounts people use to conduct serious online abuse or to exchange illegal content.

We can use these powers across all our new and existing schemes. This means we can issue a notice to gather more information to investigate a complaint about child cyberbullying, adult cyber abuse, image-based abuse or illegal and restricted content.

If an online service provider ignores a notice to provide account details, it could face civil penalties.

We may also require a person to appear before the eSafety Commissioner, produce documents, answer questions, or provide any other information in connection with an investigation.

Failure to comply with a requirement to give evidence under Part 14 is a criminal offence. However, there are exemptions relating to self-incrimination or the confidentiality of journalistic sources.

Online Content Scheme

The Act gives eSafety new powers to act against seriously harmful illegal and restricted online content, such as child sexual exploitation material or pro-terror material – whether the content is hosted in Australia or overseas.

Previously, Australia regulated illegal and restricted content under the Broadcasting Services Act 1992 (BSA). The BSA centred on making sure service providers did not host prohibited content in Australia. It included takedown powers for prohibited content found to be hosted in Australia, applied according to the material’s classification under the National Classification Code.

Two different classes of material

The new Act creates two new classes of content, linked to the National Classification Code.

Class 1 material has been, or is likely to be, refused classification under the National Classification Code. This includes child sexual exploitation material, pro-terrorist material, and material that promotes or incites crime.

Class 2 material has been, or is likely to be, classified X18+ or R18+ under the National Classification Code. This includes non-violent sexual activity, or anything that is ‘unsuitable for a minor to see’.

What we can do

We can issue a removal notice to any social media service, relevant electronic service or designated internet service that provides Australian access to class 1 material no matter where in the world it’s hosted. If a service ignores a class 1 removal notice, we have additional regulatory options. These include seeking to have links that provide Australian access to the material removed from search engines or removing apps from app store services.

We can issue removal and remedial notices relating to class 2 material where the material is provided from Australia. This is much broader than simply hosting the material in Australia. For example, it could include international companies that have an Australian business presence, such as a large social media company.

A remedial notice requires the service provider to either remove the material or put it behind a restricted access system.

If a service ignores a class 2 removal or remedial notice, we have additional regulatory options. These include civil penalties, infringement notices, enforceable undertakings, and injunctions.

Find out more about what we can investigate under the Online Content Scheme.

Basic Online Safety Expectations

The Act stipulates what the Australian Government now expects from online service providers. It has raised the bar by establishing a wide-ranging set of Basic Online Safety Expectations, known as ‘the Expectations’.

What the new expectations will change

The Minister for Communications established the Expectations through a legislative instrument called a determination, as provided for by the Act. The Basic Online Safety Expectations Determination was registered on 23 January 2022.

The Expectations aim to ensure that providers of social media, messaging, gaming and app services and websites take reasonable steps to keep Australians safe online. The determination outlines the Australian Government’s expectations and its intention to improve transparency and accountability through reporting powers.

Under the Expectations, online service providers are required to minimise illegal content and activity, as well as bullying, abuse and other types of harm. They must clearly outline how their users can report unacceptable use, and must enforce their own terms of service.

The Expectations encourage the tech industry to be more transparent about their safety features, policies and practices, and aim to improve accountability. 

Reporting obligations

Under the Act, eSafety has unique powers to require online service providers to respond to questions about how they are meeting any or all of the Basic Online Safety Expectations. Online providers that do not meet their reporting obligations may be issued with a civil penalty. 

eSafety can publicly name the services that do not meet the Expectations. We can also publish statements of compliance for those that meet or exceed expectations.

Find out more about the Basic Online Safety Expectations.

New industry codes

The Act also requires industry to develop new codes.

The codes will be mandatory and will apply to various sections of the online industry, including:

  • social media platforms
  • electronic messaging services, online games and other relevant electronic services
  • websites and other designated internet services
  • search engines
  • app distribution services
  • internet service providers
  • hosting service providers
  • manufacturers and suppliers of equipment used to access online services, as well as people who install and maintain that equipment.

What we can do

The codes, when registered, can require online platforms and service providers to detect and remove illegal content like child sexual abuse or acts of terrorism. They can also put greater onus on industry to shield children from age-inappropriate content like pornography. 

The Act allows eSafety to impose industry-wide standards if online service providers cannot reach agreement on the codes, or if they develop codes that do not have appropriate safeguards.

Codes and standards are enforceable by civil penalties and injunctions to make sure online service providers comply.

What the codes may cover

The Act provides a list of examples of matters that the industry codes and standards may deal with. These include that:

  • all segments of the industry promote awareness of safety issues and of procedures for dealing with harmful online content on their services
  • online service providers tell parents and responsible adults how to supervise and control children’s access to material they provide on the internet
  • online service providers tell users about their rights to make complaints
  • online service providers follow procedures for dealing with complaints in line with their company policies.

Online Safety Act 2021 fact sheet

Last updated: 18/12/2024