Welcome to a world where the lines between science fiction and reality blur. A world where artificial intelligence (AI) integrates with the metaverse and immersive technologies like virtual, augmented, and extended reality (VR, AR, and XR) to forge a new frontier in global health and wellbeing. It’s an exciting realm with immense potential, but it also carries risk.
It's Day 2 of Metaverse Safety Week, an annual awareness campaign created by X Reality Safety Intelligence (XRSI) to promote safer immersive environments. This year, we’re delving into AI and emerging technologies, with today’s spotlight on medical XR and immersive healthcare.
Bridging the gap between human and machine
In recent years, we’ve witnessed a seismic shift in healthcare. Telehealth has become commonplace, VR and AR are now integral to surgical training, and brain-to-computer interfaces will soon be coming to a cerebrum near you.
Neuralink, a startup focused on implantable brain-machine interfaces, is gearing up to recruit participants for the first human trial of its brain implant designed for patients with paralysis. Its mission is clear: “restore autonomy to those with unmet medical needs today and unlock human potential tomorrow.”
Meanwhile, Snap has acquired NextMind, a neurotech company based in Paris, to propel augmented reality research within Snap Lab. Reports suggest NextMind’s brain-computer interface technology will eventually find its way into future versions of Snap’s Spectacles AR glasses.
These developments underscore a revolution that’s blurring the lines between the physical and digital, between human and machine. And it’s happening right now.
Guarding screens and minds in the neurotech surge
At eSafety, our role is to help safeguard all Australians from online harms and promote safer, more positive online experiences. In our Tech Trends and Challenges work, we constantly monitor new and emerging technologies. We assess their potential online safety benefits, risks, and challenges to make sure we stay ahead of the curve.
Neurotechnology, a rapidly expanding field dedicated to understanding the brain and creating technologies that interact with it, is one such area of focus. As these technologies evolve and integrate into online services and devices used for socialising, gaming, learning and more, it’s crucial to consider their potential misuse as instruments for harm.
Imagine a scenario where malicious actors gain control of these technologies, enabling them to manipulate a person’s mind or body. The implications are chilling. They could be weaponised for child sexual exploitation, sexual assault, coercive control, or other forms of abuse.
Safety by Design: eSafety’s initiative to mitigate tech risks
eSafety encourages all companies designing, developing, or deploying any type of technology to take a Safety by Design approach. We provide a variety of practical tools to help companies of all sizes achieve this goal.
Safety by Design is about identifying and mitigating risks at the earliest stages. This requires a thorough understanding of a technology’s features and its users – both potential perpetrators of harm and those vulnerable to harm.
We acknowledge that certain individuals, groups, and communities face higher risks if human rights considerations are not properly integrated into the design and use of neurotechnologies.
This includes children and young people – particularly those with disabilities – as well as adults with physical, cognitive, or psychosocial disabilities, and neurodivergent people. Their rights to autonomy, privacy, and consent could be significantly affected.
We must also consider groups at a higher risk of harm due to discrimination and oppression, or those whose context may provide unique avenues for harm. This includes women and girls, gender-diverse people, Aboriginal and Torres Strait Islander people, LGBTIQ+ people, and people from culturally and linguistically diverse backgrounds.
It’s equally important to consider how risks, harms, and interventions may manifest differently in various contexts, including situations involving family, domestic, or sexual violence.
Global efforts for global change
Our Safety by Design initiative is built on three principles: service provider responsibility, user empowerment and autonomy, and transparency and accountability.
We are pleased to see these principles reflected in the Australian Human Rights Commission’s recent submission calling for a human rights-based approach to neurotechnology. They are also evident in the guidance on responsible innovation in neurotechnology from the Organisation for Economic Co-operation and Development (OECD).
Global change requires global efforts. We’re proud to support Metaverse Safety Week and look forward to ongoing multi-sector dialogue and international collaboration.
Our goal? To build appropriate guardrails into AI, immersive healthcare, and neurotechnology for a safer, more ethical future.