A matter of principle: guiding the Age Verification Roadmap

Among its many unintended consequences, the internet has given rise to a ready availability of pornography that is probably unprecedented in human history. 

While the broader social impacts of this are still debated, most agree that access to pornography – particularly material depicting violence – may be harmful to children, especially younger ones.

This is where eSafety’s role comes in. 

The majority of parents of preschoolers we surveyed (75 per cent) indicated their child was using the internet by the age of four. Unfortunately, this is where the risk begins.

Educators are reporting to us that some children are now viewing, sharing or discussing pornography at school as early as Year 1, aged just 6 or 7.

In March, eSafety will submit to the Australian Government its Age Verification Roadmap for online pornography. Before this happens, I believe it is worthwhile to reflect on the principles we have created to guide the challenging endeavour of developing this document. 

They give a good sense of the complexities involved and, ultimately, reinforce the simple truth that protecting children from online harm is a collective responsibility: technology companies, government, parents, educators and the broader community all have a role to play.

First, though, what are the potential harms we are worried about?

While research in the field is ongoing – and will continue to inform eSafety’s approach – it is subject to inherent practical and ethical limitations.

It seems likely, however, that the level of risk to children from seeing pornography will depend on a number of factors, including the nature of the pornographic material, whether it was viewed accidentally or deliberately, and the age of the child or young person involved.

One thing we do know is that some degree of familiarity with online pornography is now widespread by the mid-to-late teens. eSafety’s own research suggests three-quarters of 16 to 18-year-olds have viewed it at some stage and, for the vast majority of this group (86 per cent), that occurred before age 16.

Of clearest concern is material at the more extreme end of the spectrum, which may depict violent or aggressive acts such as choking, punching or slapping.

Pornographic content that portrays a lack of respect, consent or agency – particularly for women or less dominant partners – risks shaping the mindsets and sexual socialisation of an entire generation of young people.

In 2021, acting on a recommendation of the House of Representatives Standing Committee on Social Policy and Legal Affairs, the previous Australian Government tasked eSafety with drafting a Roadmap to examine the feasibility of age verification or age assurance measures, and the role of other complementary safety technology, educational measures and awareness raising.

From the start, our approach has been to consult with as many stakeholders as possible, ensuring the Roadmap is informed by a wide sample of perspectives and expertise.

In fact, we believe our consultation has been deeper and more extensive than anything undertaken anywhere in the world. 

We sincerely thank all those who took the time to participate in these discussions. Their contributions will significantly strengthen the Roadmap. 

We have also commissioned an assessment of some of the age assurance and other safety technologies currently available; reviewed the vast array of research submitted to us in response to our call for evidence; and conducted research of our own, including with young people. 

The results of this research are still being finalised and work on the Roadmap is continuing, but the formal consultation phase is now complete and, today, we published a summary of the third and final round.

This process brought together stakeholders from a variety of sectors in an independently facilitated workshop, helping them clarify their perspectives to us and to each other.

Participants included academics, children’s advocacy groups, digital rights and privacy experts, adult industry representatives, education authorities, safety technology providers and digital platforms and services.

As you would expect, this diverse group has expressed equally diverse views throughout the consultation process.

But there have also been areas of consensus and, from these, eSafety distilled a set of six draft guiding principles, which we discussed and refined at the workshop:

  1. Take a proportionate approach based on risk and harm
    Understanding the nature of the risks and harms – and the areas where there is greater or lesser evidence and agreement – will enable measures which are reasonable and targeted.
  2. Respect and promote human rights
    Making the online world a safer space is ultimately about fulfilling the human rights of those who inhabit it.
  3. Propose a holistic response, recognising that everyone has a role to play 
    There is no silver bullet technology and a whole-of-community approach is required.
  4. Ensure any technical measures are data minimising and privacy preserving
    Safety measures will not work unless they are private, secure and trustworthy.
  5. Consider the broader domestic and international regulatory context
    Potential responses cannot be considered in isolation.
  6. Consider what is feasible now and into the future
    Measures should not be ‘set and forget’.

As the final two principles indicate, any recommendations we make will need to fit within a broader context that is fluid and evolving.

This includes other Australian Government initiatives such as the review of the Privacy Act, as well as regulatory measures state and territory governments are considering.

Things are moving quickly at the international level too, and we are aware that a number of technology companies are in the midst of introducing age assurance processes for their platforms, including dating and social media apps.

The results of these initiatives will be relevant to anything implemented in Australia, but one thing is clear from our consultations, and from eSafety’s work more broadly: technological solutions alone will not be enough.

What is required instead is a holistic approach, where any technological interventions are coupled with further measures that empower parents and educators, while respecting the rights and needs of children and young people themselves.

Promoting parents’ understanding, access and uptake of parental controls and filters can be an effective way of protecting younger children from stumbling upon content that may be harmful or distressing.

But no technological solution will ever be 100 per cent foolproof, so it is also important to provide broader education at whatever age it is needed – both for children and the adults who support them. 

This will help children and young people interpret material they may encounter and seek help if they need it. 

In combination with technological fixes and regulatory initiatives – and underpinned by strong privacy and security standards – such broad-based approaches can lead to significant safety improvements for children growing up in this internet age.