The tech industry needs a moral compass
The technology industry needs a moral compass, not just to deal with the uncertainties of the future, but to navigate how platforms, products and services are playing out in the present. The (un)intended consequences of addictive platforms, democracy hacking, and automated content generation, to name just three, are still unfolding on a daily basis; we can’t wait until the end of the story to take action.
Technology ethics isn’t “a side hustle” or a problem that can be solved down the line. We can’t commission research and check back in four years when someone has finished a PhD. As an industry we need to pause from looking ahead, and instead engage rapidly and practically with both the present and the recent past.
And while software might love a standard, real life is messier and more extraordinary than any product backlog gives it credit for. We can’t solve this simply through process, by automating tests for “Ethical Acceptance” or creating simple “if this, then that” rules; there isn’t a simple check and balance to make before a product or feature goes out the door. As an industry, we need to continually monitor, and think deeply and strategically about, the consequences of the decisions we make.
We need to take responsibility.
Technology has changed, or “disrupted”, how we work, vote and think about privacy in ways no one imagined. And now it’s time for the technology industry to be disrupted — and come to terms with a new, more humane future that it’s impossible to turn away from.
Top-down governance and regulation are taking shape in the US, Europe and the UK, and journalism and academia continue to shine a light on the impact of platforms and the attention economy. And while some technology companies, and the people who make money from them, might be feeling uncomfortable, there’s not much radical problem-solving going on.
Meanwhile, Tech Humanism — the school of thought that puts people and society before technical possibility — is gaining momentum: Kate O’Neill has written a Tech Humanist Manifesto; and in the last week, technologist Dan Hon gave a great call-to-humanism at FooCamp, Glitch developer Jenn Schiffer spoke on the same topic at #ffconf, and Jaron Lanier has said we need to “double down on being human”. There is something in the air. But if it’s going to change the industry, Tech Humanism can’t be seen as a way to be good; it needs to become the new normal, a bar below which it’s impossible to drop.
And while monopolies, child protection and the attention economy might provoke external regulation, within the tech industry the fastest levers for change are probably investment and talent. And both of these things are dependent on people.
Mitch Kapor and Freada Kapor Klein, early-stage investors in Uber, went public with their concerns about the company’s culture in February, saying, “we feel we have hit a dead end in trying to influence the company quietly from the inside”; Roger McNamee, who invested in Google and Facebook, has published his thoughts on why it’s time to regulate both platforms. Meanwhile, designers, engineers, and product managers are becoming disenchanted and disavowing the products they helped to create.
Making it undesirable to profit from inhumane technology, and making it somehow “cool” to reference your moral compass, is not as flippant as it sounds. Personal and social awareness of the impact of your work can and should become a new building block for personal credibility.
The “ethical pivot” could be respected by shareholders, and CEOs could say, “We were wrong. We didn’t need to verify that account/monetise that content/harass those employees, so we’ll stop now and make some reparation.” Risk registers and success metrics could prioritise human and social effects alongside economic gain. And, of course, there would be stickers and a hashtag.
To make this a reality, we need a set of values to rally round and a set of willing actors: people who will visibly decline investment opportunities, challenge business cases, and champion alternative models of success. The Never Again Tech Pledge has shown that industry workers can unite; expanding that commitment requires acknowledging the wider social and moral responsibilities of platforms, products, and services.
These values need to apply globally and across all domains, yet be specific enough to be meaningful and to test against. They should be humane and relatable, reflecting what it means to “Do No Harm” rather than putting technology on a pedestal.
We’ve spent the past months at Doteveryone thinking about and working towards these kinds of values. Here are some of our draft Principles for Responsible Technology (they are far from perfect; add your comments and amendments to make them better).
Responsible technology:
- does not knowingly deepen existing vulnerabilities and inequalities, or create new ones
- protects existing democratic and human rights
- is made by teams that are mindful of their ethical, social and human impact
- has controls in place to react to and guard against unintended consequences
- is designed with security and safety in mind
So what’s next? A set of values that can be agreed on and tested against might signal the beginning of a formal union or professional body, or lead to defining human rights in the 21st century. Or perhaps it’s enough to create and maintain an effective and committed social movement, working together to make technology humane?
The technology industry is made up of humans, after all; reconnecting with our humanity should be the easiest disruption to pull off. There are disparate groups expressing concern; we should join together to make a difference.
So rather than waiting for a new Agile process to appear, a pivotal report to be published, or for Russia to completely undermine Western democracy, we should change the tech industry in the way the tech industry tries to change everything else: by disrupting, and making a new future that it’s impossible to turn away from.
This post builds on the Doteveryone responsible tech programme. Thanks to Paula Goldman and Dan Hon for inspiring conversations, and to everyone who attended our session at FooCamp.