
The Accountability Crisis in AI: Who’s Really Responsible?

By Joe Scaggs | AI Design Ethics & Current Events

As AI becomes more embedded in everyday tools—especially in creative design—one thing remains dangerously unclear: who is accountable when AI gets it wrong?

Imagine this: an AI tool recommends design assets that misrepresent racial or cultural identities. The client publishes them. Backlash ensues. The designer says, “The AI chose it.” The AI says... well, nothing.

So who takes the fall?

AI systems are growing in capability faster than our ability to govern them. We are deploying tools whose decisions we can't always interpret, yet we're embedding them in high-stakes workflows. That gap is a design ethics problem, not just an engineering one.


Diffusion of Responsibility: A Silent Risk in UX

Designers make choices about how much of the AI a user sees, when to prompt for human override, and how errors are flagged. These decisions quietly shape who ends up accountable when something goes wrong, yet we rarely discuss them outside the dev room.
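One of those design choices, prompting for human override, can be made concrete. The sketch below is illustrative only: the `Suggestion` type, the `route` function, and the confidence threshold are all hypothetical, not part of any real tool.

```python
# Sketch of a human-override gate for AI suggestions (hypothetical API).
# Below a confidence threshold, the tool asks a human instead of acting alone.
from dataclasses import dataclass


@dataclass
class Suggestion:
    asset_id: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0


CONFIDENCE_FLOOR = 0.8  # illustrative threshold, not an industry standard


def route(suggestion: Suggestion) -> str:
    """Decide whether a suggestion auto-applies or waits for human review."""
    if suggestion.confidence >= CONFIDENCE_FLOOR:
        return "auto-apply"      # AI acts; the UI should still disclose this
    return "needs-human-review"  # prompt the designer to approve or reject


print(route(Suggestion("hero-image-42", 0.91)))  # auto-apply
print(route(Suggestion("hero-image-43", 0.55)))  # needs-human-review
```

The point isn't the threshold value; it's that someone on the design team explicitly chose where the handoff to a human happens, and that choice is now visible and reviewable.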


Best Practices to Clarify Accountability

  1. Use audit logs – Track every interaction between human and AI.
  2. Add feedback tools – Let users flag AI errors directly in design workflows.
  3. Include visible disclaimers – Users need to know when AI is acting vs. assisting.
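The first two practices can be as simple as an append-only interaction log. This is a minimal sketch; the field names and the flat-list storage are illustrative choices, not an established audit-log schema.

```python
# Minimal audit-log sketch: one JSON-serializable record per human/AI
# interaction. Covers both logging (item 1) and error flagging (item 2).
import json
from datetime import datetime, timezone


def log_interaction(log: list, actor: str, action: str, detail: str) -> dict:
    """Append a record of who (human or AI) did what, and when."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,    # "ai" or a designer's user id
        "action": action,  # e.g. "suggested_asset", "flagged_error"
        "detail": detail,
    }
    log.append(record)
    return record


audit_log: list = []
log_interaction(audit_log, "ai", "suggested_asset", "asset-17")
log_interaction(audit_log, "designer-jo", "flagged_error",
                "asset-17 misrepresents its subject")
print(json.dumps(audit_log, indent=2))
```

Because every record names an actor, the "the AI chose it" defense becomes checkable: the log shows exactly which suggestions a human approved, overrode, or flagged.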

Teams should adopt ethical documentation practices, like model cards (see Google’s Model Card Toolkit) or data statements.
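To show the spirit of that documentation, here is a hand-rolled model card as plain data. This is not the schema used by Google's Model Card Toolkit; the model name and every field below are made up for illustration.

```python
# Hand-rolled model card sketch (illustrative fields, hypothetical model).
# Real model-card tooling defines its own, richer schema.
import json

model_card = {
    "model_name": "asset-recommender-v2",  # hypothetical
    "intended_use": "Suggest stock design assets to human designers.",
    "out_of_scope": "Publishing suggested assets without human review.",
    "known_limitations": [
        "Training data under-represents some cultural contexts.",
    ],
    "human_oversight": "All suggestions require designer approval.",
}

print(json.dumps(model_card, indent=2))
```

Even a card this small forces the team to write down, before launch, what the model is for, where it fails, and who reviews its output.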

AI isn’t a black box; it’s a mirror of our assumptions. When we design with clarity and responsibility, we make the whole system safer.