Recommender systems and algorithms – position statement

Recommender systems, also known as content curation systems, prioritise content or make personalised content suggestions to users of online services.

A key component of a recommender system is its algorithm: the set of computing instructions that determines what a user will be served, based on many factors. The algorithm applies machine learning techniques to the data held by the online service to identify user attributes and patterns, and makes recommendations designed to achieve particular goals.
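
At its simplest, the matching step can be pictured as comparing a user’s inferred interests with the attributes of candidate content. The short sketch below is a purely illustrative toy in Python: the topic labels, weights and scoring rule are assumptions for demonstration, not a description of any real service’s system.

    # Illustrative toy only: ranks candidate items by how closely their topic
    # mix matches a user's inferred interests. All names and numbers are
    # assumptions for demonstration.

    def score(user_interests: dict, item_topics: dict) -> float:
        """Weighted match between the user's interests and an item's topics."""
        return sum(user_interests.get(topic, 0.0) * weight
                   for topic, weight in item_topics.items())

    def recommend(user_interests: dict, candidates: list, top_n: int = 2) -> list:
        """Return the top-n items with the highest match scores."""
        return sorted(candidates,
                      key=lambda item: score(user_interests, item["topics"]),
                      reverse=True)[:top_n]

    # Hypothetical interests inferred from a user's past activity.
    user = {"cooking": 0.8, "travel": 0.5, "finance": 0.1}
    items = [
        {"id": "video-1", "topics": {"cooking": 0.9, "travel": 0.1}},
        {"id": "video-2", "topics": {"finance": 0.7}},
        {"id": "video-3", "topics": {"travel": 0.8, "cooking": 0.2}},
    ]

    for item in recommend(user, items):
        print(item["id"], round(score(user, item["topics"]), 2))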

Background

Recommender systems and their underlying algorithms are built into many online services. They sort through vast amounts of data to present content that is relevant to users. 

This helps people discover information, ideas, artists, new friends, activities, products and services. It also helps businesses and creators reach new audiences.

Search engines use recommender algorithms to prioritise and serve results that match the queries of users.

Social media and streaming services use recommender algorithms to personalise what is suggested or promoted to users and to increase the reach of prioritised content and accounts.

Recommender algorithms source data in several ways to produce suggestions for users (see the sketch after this list):

  • People may provide the data intentionally, by entering search queries, rating posts or providing feedback on their preferences.
  • The algorithms may capture data by collecting information about a user, such as their demographic details, or by monitoring their engagement with content, such as likes, comments or dwell time (how long users hover over specific content before scrolling past).
  • In some cases, the online service may buy the data from other (‘third party’) services, platforms or data sellers.
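
In simplified form, these sources might be merged into a single user profile that feeds the algorithm. Every field name and value in the sketch below is an illustrative assumption, not a description of any particular service’s data model.

    # Illustrative toy: the kinds of data a service might merge into one profile.

    explicit = {            # provided intentionally by the user
        "search_queries": ["hiking boots"],
        "stated_interests": ["travel"],
    }

    implicit = {            # captured by observing behaviour
        "likes": 14,
        "comments": 2,
        "dwell_time_seconds": {"video-1": 42, "video-2": 3},
    }

    third_party = {         # purchased from other services or data sellers
        "age_bracket": "25-34",
        "inferred_interests": ["outdoors"],
    }

    user_profile = {
        "explicit": explicit,
        "implicit": implicit,
        "third_party": third_party,
    }

    print(user_profile["implicit"]["dwell_time_seconds"]["video-1"])  # 42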

Benefits and risks

Drawing on the data they source from users, online services optimise their recommender algorithms for different purposes.

For example, a service may aim to:

  • maximise user engagement through likes and comments or further queries
  • deliver recommendations that best meet its users’ needs
  • maximise the time users spend on its platform
  • pursue a combination of these goals (a simplified sketch follows this list).
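
These goals can be pictured as different weightings over the same predicted signals. In the hypothetical sketch below, the same two items rank differently depending on which goal the weights reflect; the signal names and numbers are assumptions for illustration only.

    # Hypothetical sketch: how a service's optimisation goal changes a ranking.
    # Signals, weights and items are all illustrative assumptions.

    ITEMS = [
        {"id": "post-A", "predicted_likes": 0.9, "relevance": 0.3, "watch_time": 0.8},
        {"id": "post-B", "predicted_likes": 0.4, "relevance": 0.9, "watch_time": 0.3},
    ]

    GOALS = {
        "maximise_engagement": {"predicted_likes": 1.0, "relevance": 0.2, "watch_time": 0.5},
        "meet_user_needs":     {"predicted_likes": 0.2, "relevance": 1.0, "watch_time": 0.2},
        "maximise_time_spent": {"predicted_likes": 0.3, "relevance": 0.2, "watch_time": 1.0},
    }

    def rank(items, weights):
        """Order items by a weighted sum of their predicted signals."""
        return sorted(items,
                      key=lambda item: sum(w * item[s] for s, w in weights.items()),
                      reverse=True)

    for goal, weights in GOALS.items():
        print(goal, "->", rank(ITEMS, weights)[0]["id"])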

Different inputs and end goals for recommender systems can lead to positive or negative outcomes.

For example, a recommender algorithm that prioritises posts a user spends time reading or reacting to, and then serves up similar content in the future, may result in people seeing things they find interesting, entertaining or valuable.

But equally, if a user spends time engaging with potentially harmful content, that same system may serve them more of the same material, or increasingly harmful material, in their feeds.
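
This feedback loop can be sketched in a few lines. In the toy model below, each engagement nudges the user’s inferred interests toward the engaged-with topic, so the system keeps recommending more of it; the topics and numbers are assumptions for illustration only.

    # Toy model of an engagement feedback loop; all values are illustrative.

    interests = {"cooking": 0.5, "travel": 0.5, "harmful_topic": 0.1}

    def recommend_topic(interests):
        """Serve the topic the profile currently weights most heavily."""
        return max(interests, key=interests.get)

    def engage(interests, topic, strength=0.3):
        """Engagement (e.g. dwell time) shifts the profile toward that topic."""
        interests[topic] += strength

    # Suppose the user lingers on a few harmful items early on...
    for _ in range(3):
        engage(interests, "harmful_topic")

    # ...the loop then serves more of the same, reinforcing itself each step.
    for step in range(1, 4):
        topic = recommend_topic(interests)
        print(f"step {step}: recommend '{topic}' content")
        engage(interests, topic)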

A key driver of risk comes from the way a service optimises its recommender systems for greater engagement. If it operates on an advertising-based business model, it has an incentive to increase user engagement – and particularly time online – to grow its revenue. This can lead to it promoting content based on engagement instead of quality.

The risk level of the recommender system used by a service can also be impacted by:

  • the design of the user interface 
  • the amount of human review and editorial oversight
  • the size and quality of the service’s pool of content
  • intended or unintended algorithmic bias
  • who posts and shares content on the service, and why
  • external factors such as social and political attitudes and developments, and how they are reported by news media.

Impacts

Recommender systems, especially those that serve up content based on engagement, can contribute to content ‘going viral’ (spreading quickly and widely). This can encourage harmful behaviour, such as dangerous challenges and online pile-on attacks against targeted people.

Recommender systems can also amplify misinformation and extreme views, as well as hide different viewpoints or valuable ideas that do not align with a person’s existing opinions or understanding. Separately or in combination, these effects can produce what are commonly known as ‘echo chambers’ or ‘filter bubbles’, where people are only served content that reinforces what they have already seen and engaged with.
  
The question of whether content served up by a recommender system is harmful can depend on the individual user, their personal circumstances and the context.
 
For example, content that promotes self-harm is likely to present a greater risk and have deeper impact for someone already experiencing mental ill health.

In addition, the risks can be greater for children and young people, especially if they are served: 

  • friend or follower suggestions that encourage them to interact with potentially dangerous adults
  • content that encourages binge consumption without breaks
  • content that promotes ‘ideals’ of body types and beauty stereotypes
  • content that normalises the sexualisation of young people
  • content that may be appropriate for adults but harmful to children who are not developmentally ready for it.

In addition to contributing to risks and harms at an individual level, recommender systems have the potential to cause or worsen harms on a societal level. For example, content that promotes discrimination such as sexism, misogyny, homophobia or racism can normalise prejudice and hate. It can also be used to incite online pile-ons or physical violence that can cause damage to the people targeted and spill over to affect the broader community, both online and offline. 

Next steps for users and industry 

It’s important to assess recommender systems holistically, thinking about their benefits and risks, their range of uses and how they may influence or be influenced by the wider digital environment and socio-political developments.
  
The online industry can take a lead role in improving digital literacy and empowering users to make informed choices by making sure they:

  • are aware of how recommender systems can affect the content they see, shape their feelings and experiences, and influence the choices they make 
  • understand that recommender systems are designed to keep them online, and there can be harms both from the type of content served and from ongoing exposure to it
  • know how to use any account or device features that give them choice and control over content.

Services, particularly those that enable users to share their own content, can help to protect users from the negative impacts of recommender systems by:

  • managing content curation in a responsible, transparent and accountable way
  • scaling content moderation using artificial intelligence in addition to human moderation
  • reducing risks and algorithmic biases from the earliest stages, by implementing Safety by Design practices.

eSafety recognises that any regulation of recommender systems should safeguard the rights of users, preserve the benefits of these systems and foster healthy innovation. 

Last updated: 08/12/2022