Uncovering what people need to seek redress in an online world

This blog post was written in collaboration with Richard Pope, who has been working with us on our Better Redress project. We will be collaborating on more blog posts in the coming weeks – keep your eyes peeled.

We’re at the midway point of our project for the Legal Access Challenge

The challenge, run by Nesta and the Solicitors Regulation Authority, is all about helping people and small companies better understand and resolve their legal problems. For this, we’re developing a service that helps people understand and exercise their digital rights, assists businesses in improving their products, and produces insights for government.

And to help us understand the needs of potential users of the service, we’ve been conducting user research. Here we share what we did and what we found. 

What we did

Building on our research exploring where the gaps in the current redress ecosystem lie, we held a series of user research interviews and workshops to better understand what issues people face online, what a helpful, trustworthy support service might look like, and what good and bad outcomes from seeking redress might be.

Over the course of a week, we conducted a series of one-to-one interviews and a group workshop. In each, we began by discussing how people approach resolving issues in the offline world – things like returning faulty goods to a shop. We then talked about potential risks online, and what a good outcome in resolving them might look like. Finally, we used a set of prototypes and scenarios to help understand where people might go for help and support and how they might expect it to operate.

What we learnt about seeking redress online

Everyone seems to have at least one story of seeking redress in the offline world. 

People can clearly differentiate between the parts of the redress process they appreciated and those which, at best, caused friction and, at worst, caused them to abandon taking action altogether. Many people have also experienced issues of one kind or another with seeking redress online.

Below are some of the key needs and themes that we identified from the research.

    1. People understand a broad range of harms, from ‘legit cons’ to harmful content

The range of online issues that came up during the research was broader than we had anticipated. It covered everything from seeing harmful content online to more traditional scams such as fake websites and misuse of data, as well as the use of “dark patterns” that, while potentially legal, aim to trick a user into doing something. One participant aptly named these ‘legit cons’.

Ultimately, people don’t want to have to seek redress in the first place. They told us how nobody likes feeling tricked or stupid, or being exposed to unpleasant content.

    2. People use a variety of workarounds to attempt to stay safe online

People use different strategies to attempt to stay safe online. These include cutting down time spent on certain social media platforms, using different devices for sensitive tasks and preferring known, trusted services for things like payments.

When things go wrong online, people rely on a variety of sources of information, from watching YouTube videos, to browsing online forums, to asking family members to help them. This is driven by a need to understand how to get started, but also a desire to understand what happened to them and what the impact might be.

    3. People see responsibility as shared, but need clarity

While the presumption might be that, when something goes wrong, people just want someone to blame, we found that people often feel that responsibility should be shared between companies, the public and the government.

For people to play their part in that, they need companies to be clear with them – for example, about how data about them is used. People also need to be able to understand what their rights are, something that is less well understood in the online context.

When reporting an issue, people also want to help keep others safe and to know that they have “done their bit”.

    4. Better feedback leads to less frustration

People often see technology as a ‘black box’ – a closed system whose inner workings they have little understanding of. The same seems to be true of the process of resolving issues with those services.

Users have low expectations that issues will be resolved, although it is welcomed when companies respond promptly. When talking about good and bad experiences, people identify a company’s level of communication and openness as an important differentiator. They appreciate being kept informed about where they are in their complaint journey and feel frustrated when they are not kept in the loop.

People also expect companies to understand their story – especially in the case of technology companies where they may already hold all the necessary evidence for their complaint. Above all, we found that people want issues to be resolved and reassurance that they can keep trusting a service in the future.

Together, the themes we’ve identified from the research show that the quality of support available to people online limits their ability to seek adequate redress and to stay safe online.

More encouragingly, however, we have a much clearer picture of what a good process might look like.

What happens next?

While we were analysing the user research, William Perrin, Professor Lorna Woods, and Maeve Walsh published their Draft Online Harm Reduction Bill. In light of this, we’ve also started thinking about how the project, and the needs we identified during research, might fit with the approach set out in the white paper. Based on this, some of the issues we are now also thinking about include:

  • The white paper sets out a world where people will report issues directly to companies in the first instance. This emphasises the importance of the redress platform being well designed and having a consistent feel.
  • The white paper sets out how the regulator will have the power to set rules around how data on harms is collected. We’re interested to see whether the regulator will also gain powers to set design guidelines for how companies expect users to report issues.
  • Where online harms have been felt by large groups of people, some bodies are likely to have powers to register “super-complaints” with the regulator on behalf of the public. We’re now also considering how our service could support these organisations.
  • While we fully expect the big technology companies to develop new systems for handling reports, this could create a new burden for smaller companies. We’re looking into how a service like the one we are designing could help meet their needs.

Over the next couple of months, we are going to start prototyping a service that aims to meet the needs we identified during our user research and fits within the world set out in the white paper. We’ll be sharing this, following further user-testing, in March.