
By Joe Scaggs | AI Design Ethics & Current Events
As designers, developers, and technologists, we are building systems that make decisions once made by people. But what happens when those decisions cause harm?
In May 2023, the New York Times published a story about AI models that made life-altering mistakes: wrong job evaluations, misdiagnosed patients, and biased law enforcement systems. None of these failures had a single "culprit," yet the impact was deeply personal for those affected.
As AI systems get more advanced, responsibility becomes harder to trace. In the case of COMPAS, a criminal justice algorithm used in US courts, studies showed Black defendants were more likely to be incorrectly flagged as high risk. This wasn't intentional; no one said, "Let's make this racist." It happened because the training data already encoded existing bias.
Who do we hold accountable? The developers? The data scientists? The UX designers? Or the companies that profit?
Every design choice reflects a value. Choosing what data to include, what outcomes to optimize, and even how we word error messages affects real people.
The AI Now Institute and other ethics watchdogs argue that the root of many problems lies not in the AI models themselves but in the failure to design transparent and auditable systems.
The following are three ways designers can lead ethically:
- Design for Explainability: Your users (and regulators) should understand why the AI made a decision, not just what it did. Tools like Google's Explainable AI help build this transparency into ML models.
- Create a Feedback Loop: Build systems where users can challenge, correct, or appeal AI decisions. This creates accountability beyond the code.
- Use Bias Audits: Toolkits like ParlAI and IBM's AI Fairness 360 can test datasets and models for bias. Make audits part of your standard process.
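The explainability idea above can be made concrete with a minimal sketch: for a simple linear scoring model, each feature's contribution to the score can be surfaced directly. Production tools such as Google's Explainable AI generalize this far beyond linear models; the weights and feature names here are purely hypothetical.

```python
# Minimal explainability sketch for a linear scoring model.
# The weights and features are hypothetical, for illustration only.
WEIGHTS = {"years_experience": 2.0, "missed_deadlines": -3.0, "peer_rating": 1.5}

def explain(features):
    """Return each feature's contribution to the score, largest impact first."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    return sorted(contributions.items(), key=lambda kv: -abs(kv[1]))

# A user (or regulator) can now see WHY the score came out as it did.
for name, contribution in explain(
    {"years_experience": 4, "missed_deadlines": 2, "peer_rating": 3}
):
    print(f"{name}: {contribution:+.1f}")
```

The point is not the arithmetic but the interface: a decision arrives with its reasons attached, so it can be challenged on specifics rather than appealed into a void.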
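A bias audit can likewise start very small. AI Fairness 360 ships metrics such as disparate impact out of the box; the sketch below computes the same ratio by hand on a hypothetical set of model decisions, so the mechanics are visible.

```python
# Minimal bias-audit sketch: disparate impact, computed by hand.
# The decisions and group labels below are hypothetical.

def disparate_impact(outcomes, groups):
    """Ratio of favorable-outcome rates: unprivileged / privileged.
    A value below ~0.8 is a common red flag (the "four-fifths rule")."""
    unpriv = [o for o, g in zip(outcomes, groups) if g == "unprivileged"]
    priv = [o for o, g in zip(outcomes, groups) if g == "privileged"]
    return (sum(unpriv) / len(unpriv)) / (sum(priv) / len(priv))

# Hypothetical model decisions (1 = favorable outcome, e.g. "low risk")
outcomes = [1, 0, 0, 1, 1, 1, 0, 1, 1, 1]
groups = ["unprivileged"] * 5 + ["privileged"] * 5

print(disparate_impact(outcomes, groups))  # → 0.75, below the 0.8 threshold
```

Even a toy audit like this, run routinely against real decision logs, turns "we didn't intend bias" into a measurable claim that can be checked.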
We need to accept that AI is not just a tool; it’s a system. And systems require oversight.