Building ethics and equity into software

Michael Linares
7 min read · Apr 10, 2021


Wouldst thou like to live ethically?

“Most fears about AI and technology are best understood as fears about capitalism… How capitalism will use technology against us.”
— Ted Chiang

This quote, from sci-fi writer Ted Chiang (via Ezra Klein’s podcast), gets to the crux of it. The ills we see in tech are a direct result of the capitalist impulse, a worldview that renders everything “an optimization problem.” It’s this impulse that pushes us to ship code without fully considering the consequences.

(A digression: Definitely listen to that Ted Chiang interview for a humanistic take on tech. And, if you haven’t read his excellent story collection, Exhalation, do. Chiang writes gently roiling parables with Borgesian economy and Calvinoesque whimsy. (Yes, I’m keyword stuffing authorial adjectives.))

This week I want to highlight a few resources for exploring ethics in software. I think you’ll find these useful if you’re involved in the product development process. I’ll acknowledge that a lot of these tactics are intentionally bottom-up: they’re geared toward the teams closest to users. One tenet of the modern tech workplace is decentralization, a double-edged sword, to be sure. But at its best, decentralization can allow tech workers to build and launch ethical products at a global scale.

This week I’ll cover:

  • Principles for building inclusive products
  • An ethics assessment from Spotify
  • Tech tarot cards
  • Anatomy of an AI system
  • New regulation on “dark patterns”

Principles for building inclusive products
I recently joined the Equity Army, an interdisciplinary group of people working on product inclusion across industries (and led by Google inclusion guru Annie Jean-Baptiste). We believe that product inclusion is an ethical imperative. My first project involves crafting principles for building inclusive software. The principles are meant to guide anyone who builds software: PMs, marketers, operators, etc. After we nail down the language and get the principles MECE (mutually exclusive, collectively exhaustive), we’ll layer in additional research and case studies. My hope is that the final set will be adoptable by a wide range of people, i.e., that the principles recognize real-world constraints and provide entry points at every stage of the software development process. Take a look and let me know what you think.

Draft principles

  1. Building inclusive products is an end-to-end process that starts with user research and ‘ends’ with continual iteration
  2. Building inclusive products is an ongoing process that is never really finished
  • Corollary: Because your product will always exclude some users, your goal is to continually minimize that impact
  • Another: Your product might be inclusive today but become more exclusive with time
  3. Building inclusive products means striving for both diverse teams and diverse users
  4. When building an inclusive product, consider different dimensions of equity: equity in access (who can use your product), equity in use (how they use it), and equity in outcomes (what impact it has)
  5. Even if you are inclusive at every stage, it’s still possible your product will disproportionately harm marginalized groups; you must be dedicated to discovering and resolving the inevitable inequities [I previously wrote about mitigating harms here]

An ethics assessment for your product
Spotify’s Design team pulled together this nifty guide for assessing the ethical impact of your product or feature. Behold:

The doc walks you through three categories: physical harm, emotional harm, and societal harm. Each category asks the user to identify potential harms (e.g., “reinforcing stereotypes”), list examples, and assign a level of risk and concern. Prioritization is built in: the exercise forces you to be realistic about the potential impact and degree of risk, and to prioritize those harms accordingly. (In this way, it’s not unlike prioritization frameworks that PMs use, like RICE.)

What I like about this tool is that it’s lightweight and accessible: it’s one doc, something you can finish in an hour. Really, it’s a heuristic meant to help your team have a guided conversation and identify explicit harms. Even if you don’t address them, you’ll have surfaced and documented them.

If I were to tweak the template, I’d add a column for business/revenue impact to help users more easily translate ethical concerns into a business case. It’s important (if unfortunate) that intrapreneurs are able to tie ethics to the bottom line.
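To make the prioritization concrete, here’s a minimal sketch of how a team might record harms and rank them, in the spirit of the Spotify doc. This is my own illustration, not Spotify’s template; the fields, example scores, and scoring formula are all hypothetical, and it includes the business-impact column I’d add:

```python
from dataclasses import dataclass

# Hypothetical, simplified model of one row in an ethics assessment.
# It mirrors the structure described above: each harm gets a category,
# an example, and scores for risk and concern (plus my suggested
# business-impact column), so the team can rank what to tackle first.
@dataclass
class Harm:
    category: str         # "physical", "emotional", or "societal"
    description: str      # e.g., "reinforcing stereotypes"
    example: str          # a concrete scenario from your product
    risk: int             # likelihood it occurs, 1 (low) to 5 (high)
    concern: int          # severity if it occurs, 1 (low) to 5 (high)
    business_impact: int  # revenue/brand exposure, 1 (low) to 5 (high)

    def priority(self) -> int:
        # RICE-style shorthand: a bigger score means address it sooner.
        return self.risk * self.concern * self.business_impact

harms = [
    Harm("societal", "reinforcing stereotypes",
         "recommendations skew toward one demographic",
         risk=4, concern=5, business_impact=3),
    Harm("emotional", "harassment",
         "no rate limits on replies to new users",
         risk=3, concern=4, business_impact=2),
]

# Rank harms so the riskiest, most concerning ones surface first.
for harm in sorted(harms, key=lambda h: h.priority(), reverse=True):
    print(f"{harm.priority():>3}  {harm.category:<9}  {harm.description}")
```

Multiplying the scores is just one way to combine them; the point is that any explicit ranking forces the conversation the doc is designed to provoke.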

Tarot cards
So listen: I moved to LA, I got into tarot, I know how that sounds. What I will say is that after a long life as a rational skeptic, I’ve come to appreciate a little mystery. (As if on cue, the NYT dropped a tarot trend piece this weekend.) These tech tarot cards have little to do with tarot beyond form. But they’re a playful way to hit pause and more slowly consider your product’s futures. Here’s one card from the equity series:

The cards tackle a lot of great questions, spanning scale, automation, and environmental externalities. They ask us to consider longer time horizons and logical conclusions, almost like Kant’s categorical imperative; they ask us to build products in the way we’d want all products to be built. Here are a few more examples from the deck:

  • What happens when 100 million people use your product?
  • Who or what disappears when your product is successful?
  • If the environment were your client, how would your product change?

Anatomy of an AI system (another digression)

On that last topic, the environment, I’m reminded of this fabulous critical essay and poster by Kate Crawford and Vladan Joler. The tarot cards, like the ethics assessment, ask us to think bigger: to consider the complex planetary systems in which we participate, and not just now, but over time. Crawford and Joler take apart the Amazon Echo (“an anatomical map of human labor, data and planetary resources”) and catalogue all that’s required for it to tell fart jokes on command. Behold:

In their words:

Our exploded view diagram combines and visualizes three central, extractive processes that are required to run a large-scale artificial intelligence system: material resources, human labor, and data. We consider these three elements across time — represented as a visual description of the birth, life and death of a single Amazon Echo unit. It’s necessary to move beyond a simple analysis of the relationship between an individual human, their data, and any single technology company in order to contend with the truly planetary scale of extraction

Such a genius reminder that what we do in tech (these apps, services, use cases) is powered by a complex web of interconnected labor and supplies. Crucially, the front-end interfaces we see before us, these consumer products, belie this complex portrait and the ethical implications at every stage. Whew. So even if Apple puts out Screen Time, it still runs on devices built from rare-earth metals mined by subsistence workers who’ve been exploited for generations. It’s hard not to feel completely defeated by that idea, but I can hear my therapist telling me that’s “black-and-white thinking”: we have to press on because things can be improved.

Regulation rhythm nation
Lucky for us, there’s more than one way to peel a potato (see: PETA’s animal-friendly idioms; “feed two birds with one scone” is another favorite). The approaches above rely on people being proactive about product inclusion and ethics. But we know from living in the real world that that’s not how things go down. Companies will typically exploit regulatory lacunae or legal loopholes up until the minute they’re outlawed. Thus, the need for regulation. I’m heartened by some recent developments. In California, lawmakers outlawed “dark patterns” that attempt to trick users out of opting out of sharing their information. From the bill:

Using confusing language like double negatives (e.g., “Don’t Not Sell My Personal Information”)

Forcing users to “click through or listen to reasons why they should not submit a request to opt-out before confirming their request.”

Requiring users to “search or scroll through the text of a privacy policy or similar document or webpage to locate the mechanism for submitting a request to opt-out.”
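Just to show how concrete that UI specificity gets, here’s a toy sketch of a check for the double-negative pattern in the first excerpt. This is purely my own illustration (not part of the bill or any real compliance tool), and a crude one at that:

```python
import re

# Toy heuristic: flag opt-out copy that stacks negations, like the
# "Don't Not Sell My Personal Information" example called out above.
NEGATIONS = re.compile(r"\b(don'?t|do not|not|never|no)\b", re.IGNORECASE)

def looks_like_double_negative(label: str) -> bool:
    # Two or more negation words in a single label is a red flag.
    return len(NEGATIONS.findall(label)) >= 2

print(looks_like_double_negative("Don't Not Sell My Personal Information"))  # True
print(looks_like_double_negative("Do Not Sell My Personal Information"))     # False
```

A real compliance review obviously needs lawyers, not regexes, but it’s striking that the rule is specific enough to even gesture at in code.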

Interesting, right? You don’t normally see that level of UI specificity in legislation. Reading through the five-page regulation, you realize that this gets tricky fast, with the law leaving lots of room for interpretation. Laws don’t yet capture technical nuance, a byproduct of aging politicians, slow legislation, and the sustained Cambrian explosion of tech. Expect to see more of this kinda stuff going forward. I highly recommend following EFF, the Electronic Frontier Foundation, who’ve led this work for the past 30 years. The work never ends. — XML


Links this week

  • New research shows that nudges, that darling of behavioral interventions, might be helpful in the aggregate but harmful for individuals outside of the “average experience” [a topic I wrote about here!] (Behavioral Scientist) 📚
  • John Maeda’s annual report on the state of design: “CX 2021: Safety Eats the World” (CX report) 📚
  • For more on tech regulation, check out this NYT piece on regulating NYC’s use of hiring algos (NYT) 📚
  • More incredible data journalism from the Pudding, this time on the global effort to combat COVID-19 (The Pudding) 📚
  • Google is being sued over not properly disclosing the limits of Incognito mode (Bloomberg) 📚
  • Ethics in AI: Two great longreads in the NYT, on Clearview AI and the recent mess at Google 📚
  • “Privacy is one of the top issues of the 21st century and we’re in a crisis,” Tim Cook to Kara Swisher on her podcast this week (Sway podcast) 🎧

Written by Michael Linares

Product leader and writer. Currently: Head of Product @ NYT Cooking. Previously: Crisis Text Line, Lean In, Yale AIDS Memorial Project.
