Executive Summary

People, Power and Technology: The Tech Workers’ View is the first in-depth research into the attitudes of the people who design and build digital technologies in the UK. It shows that workers are calling for an end to the era of moving fast and breaking things.

Significant numbers of highly skilled people are voting with their feet and leaving jobs they feel could have negative consequences for people and society. This is heightening the UK’s tech talent crisis and running up employers’ recruitment and retention bills. Organisations and leaders that can understand and meet their teams’ demands to work responsibly will have a new competitive advantage.

While Silicon Valley CEOs have tried to reverse the “techlash” by showing their responsible credentials in the media, this research shows that workers:

  • need guidance and skills to help navigate new dilemmas
  • have an appetite for more responsible leadership
  • want clear government regulation so they can innovate with awareness

Every technology worker who leaves a company does so at a cost of £30,000.* Failing to address workers’ concerns is bad for business — especially when the market for skilled workers is so competitive.

Our research shows that tech workers believe in the power of their products to drive positive change — but they cannot achieve this without ways to raise their concerns, draw on expertise, and understand the possible outcomes of their work. Counter to the well-worn narrative that regulation and guidance kill innovation, this research shows they are now essential ingredients for talent management, retention and motivation.

It is time for the tech industry to move beyond gestures towards ethical behaviour — rather than drafting more voluntary codes and recruiting more advisory boards, it is time to double down on responsible practice. Workers — particularly those in the field of AI — want practical guidelines so they can innovate with confidence.

*Oxford Economics (2014) ‘The Cost of Brain Drain’. http://resources.unum.co.uk/downloads/cost-brain-drain-report.pdf

Our recommendations

Businesses should:

  • Implement transparent processes for staff to raise ethical and moral concerns in a supportive environment
  • Invest in training and resources that help workers understand and anticipate the social impact of their work
  • Use industry-wide standards and support the responsible innovation standard being developed by the BSI – 78% of workers favour such a framework
  • Engage with the UK government to share best practice and support the development of technology literate policymaking and regulation
  • Rethink professional development, so workers in emerging fields can draw on a wider skills and knowledge base — not just their own ingenuity and resources

Government should:

  • Provide incentives for responsible innovation and embed this into its Industrial Strategy

Key Findings

  • More than a quarter (28%) of tech workers in the UK have seen decisions made about a technology that they felt could have negative consequences for people or society. Nearly one in five (18%) of those went on to leave their companies as a result.
  • The potential negative consequences these workers identified include the addictiveness of technologies, the negative impact on social interaction and the potential for unemployment due to automation by technology. They also highlighted failures in safety and security and inadequate testing before product releases.
  • Government regulation is the preferred mechanism among tech workers to ensure the consequences of technology for people and society are taken into account. Yet almost half of people in tech (45%) believe their sector is currently regulated too little.
  • Tech workers want more time and resources to think about the impacts of their products. Nearly two-thirds (63%) would like more opportunity to do so and more than three-quarters (78%) would like practical resources to help them. Currently they rely most on their personal moral compass, conversations with colleagues and internet searches to assess the potential consequences of their work.
  • Despite their concerns, the vast majority of tech workers believe technology is a force for good. 90% say technology has benefited them individually; 81% that it’s benefited society as a whole. Looking ahead, they’re excited by the potential of technology to address issues like climate change and transform healthcare, though they are alert to possible flipsides of such new technologies.

An Optimistic Industry

Despite the focus on the negative impacts of technology in current media and policy debates, the vast majority of both tech workers and the wider public believe technology is a force for good.


People in tech are significantly more positive about the impacts of technology than the wider public: 90% say technology has benefited them as individuals and 81% that it’s benefited society as a whole.

Looking ahead, 83% of people in tech expect technology to have a positive impact on them as individuals in the future, and 82% expect a positive impact on society as a whole.

There is excitement about the potential that new technologies hold in the years ahead, especially where they can improve people’s lives and be applied to social challenges.

Tech workers are most enthusiastic about Artificial Intelligence, believing that innovations in AI will free humans from mundane tasks, making life easier and more convenient. They’re eager to see tech applied to address issues like climate change and to transform healthcare through greater accuracy in diagnosis, cures for currently untreatable conditions and better quality of life for people in ill health.

But this excitement is tempered by the potential negative consequences of these same technologies. Almost a quarter of tech workers also identify Artificial Intelligence as the most concerning technology of the next decade.

Among the flipsides of AI, they anticipate increased unemployment due to automation. They also see a future where human dependence on technology leaves people vulnerable to security breaches and exploitation. And they worry about the devaluing of human contact, increasing social isolation and harm to mental health.

People working in tech clearly feel a tension between the enormous opportunities technologies present and the potential harms they can inflict. 80% believe companies have a responsibility to ensure their technologies don’t have negative consequences for people and society.

But when asked about the best way to ensure that they live up to this responsibility, the largest proportion believe government regulation is the most effective mechanism, placing it ahead of internal company leadership or professional accreditations.

Although the idea of a ‘Hippocratic oath’ for tech has often been discussed as a way to embed ethical practice in the tech industry, only 2% saw a voluntary commitment as the most effective way to mitigate potential harms.

For people in tech to be able to achieve the positive opportunities they see in future technologies, they will need the support of regulation to safeguard against the negatives.

But nearly half (45%) of tech workers think the industry is regulated too little.

Those at earlier stages of their careers are more likely than those later in their careers to think there’s too little regulation of the sector. But across almost every sector, job role, age and level in the business, people are more likely to say the sector is under- rather than over-regulated.

Contrary to the public statements of some tech CEOs and founders, people who work in the industry don’t subscribe to the idea that tech should be allowed to disrupt without regard for its consequences. They are aware of both the wonders and the woes of their products.

These findings explode the narrative that the tech sector is allergic to regulation. For the people who build technology, regulation is the preferred mechanism to help them harness tech’s opportunities in ways that are good for more people, more of the time.

The UK Government has begun a series of regulatory initiatives – including the Online Harms White Paper, the Furman review into competition in digital markets and the establishment of the Centre for Data Ethics and Innovation. The appetite for regulation articulated in this research should be seized on by policymakers and regulators as an opportunity to work with people within the industry to craft effective accountability for the digital age.


The Impact of Irresponsibility

People who work in tech have a strong sense of responsibility for the products they create.


79% agree it’s important to consider potential consequences for people and society when designing new technologies.

But it’s not uncommon for them to see irresponsible choices during the development of a product.

More than a quarter (28%) said they’d experienced a situation at work where decisions were made about the design, creation or marketing of a technology that they felt could have negative consequences for people or society.

Nearly three in five (59%) of people working in AI and more than two in five (43%) of those in emerging tech had experienced this kind of situation.

They said these decisions were potentially harmful for a range of reasons, pointing to a lack of consideration for safety and security, a failure to consider the needs of consumers and a lack of assessment or testing of the product. They also believed the products could be addictive and decrease social interaction, while some feared that automation as a result of the product would cause unemployment.

C-suite, senior management and executives were more than twice as likely (47%) as those in more junior roles to have experienced such potentially harmful decisions.

The vast majority of people who experience these issues act on them. Only 10% of them say they do nothing at all.

Around half raise concerns with colleagues (51%) or with a manager or HR (47%) and 29% report their concerns to an external body. But for many this is not enough.

Nearly one in five people (18%) that experienced potentially negative product decisions left a company as a result. This is true for 27% of people working in AI and 26% of senior managers.

Across the sector, this means one in twenty (5%) of all people in tech have left a job due to concerns about the consequences of their products. This is more acute in AI where one in six (16%) have left their company and in senior management where one in eight (12%) have left.

The UK tech industry has major concerns about the availability of staff. 93% of employers have struggled to recruit to tech roles in the past year, with shortages most acute for management and experienced professionals.4 Brexit is expected to exacerbate these issues. Each lost tech worker is estimated to cost a company over £30,000.5 Our findings show that potentially irresponsible technology practices are a significant factor for retention and it’s vital that these are addressed for the industry to thrive.

The UK Government’s Industrial Strategy has identified Artificial Intelligence as one of its grand challenges and its AI Sector Deal highlights the fast growing demand for highly skilled AI specialists. With 16% of people in AI having left a job due to irresponsible practices, it will be vital to embed responsibility into the AI ecosystem to realise the government’s ambition to put the UK at the forefront of the AI revolution.6


An Opportunity for Organisations

This research demonstrates the depth of concern in the tech workforce about the potential downsides of technologies. And it shows that irresponsible practices in tech can cost companies dearly in lost talent. But it also points to an opportunity.


Organisations and leaders that can understand and meet their teams’ demands to work responsibly will have a valuable competitive advantage.

Almost two-thirds of people working in tech (63%) would like more opportunities to assess the potential impacts of their products – among senior managers and above this rises to 74%.

The appetite for these opportunities is strongest in AI (81%) and emerging tech (76%).

But at the moment, they say anticipating consequences of products for people and society ranks as the lowest priority in their work.

When they come to consider the potential impacts of their products, they mainly use informal methods.

But they are keen to have greater guidance. There’s strong support for a set of resources to help people assess the impacts of technology.

78% would like a set of practical methods, workshops and resources to help them build technology with consideration for the consequences for people and society. The same number would be interested in a single framework for the governance of innovation.

There’s also clearly scope for companies to significantly strengthen their policies, so that they play a much greater role in helping people consider the impacts of their products, rather than leaving it to gut instinct.

This is especially true for the groups likely to feel most strongly about the consequences of technology for people and society.

People who’ve left a company due to potentially negative products, senior managers and those working in AI all put much more emphasis on using internal resources than tech workers as a whole.

And there are signs that where internal processes are in place, they often work.

Among tech workers who had experienced a decision at work that they thought could be negative for people and society, around half raised concerns with a colleague (51%) or manager (47%).

The vast majority (79%) of those who reported concerns, either internally or externally, had their concerns satisfactorily resolved – and this was true of 93% of those in AI.

It’s essential, then, that more companies learn from this and create systems for people to raise and resolve concerns, so that they don’t lose staff.

But there appear to be blockers to companies changing the way they work.

Currently, the biggest barrier to fuller consideration of the impacts of products is perceived to be companies’ focus on revenue and growth.

Those who quit their jobs due to the potential negative impacts of products were twice as likely as tech workers overall to point to revenue and growth targets or incentives as a barrier – 30% saw this as the greatest impediment to assessing consequences. In AI, 23% identified revenue and growth targets or incentives as the most significant barrier.

Despite this, most don’t see financial success and responsible practice as being in conflict.

Nearly two-thirds do not agree that considering the potential consequences of technologies will stifle innovation and growth. And in fact this research points to the benefits that these ways of working can bring to a business.

People who work in tech care deeply about the impacts of their work – and they will vote with their feet if they think their products are potentially harmful. This presents an opportunity for a new approach to leadership that balances growth and societal impact, that creates opportunities and deploys resources to consider consequences and that has effective mechanisms to hear and address concerns when they arise.

Organisations that meet these needs will be the ones that move beyond the discredited ‘move fast and break things’ culture to lead a new wave of thriving, sustainable technology businesses able to realise the full potential of responsible innovation.