January 15, 2026

How to design AI products ethically

Scepticism and worry about AI are increasing amongst users. Even where the technology can genuinely benefit users and improve their experience of a product, concerns about its influence are growing.

While personalised UX helps users reach their goals, 78% are worried about how their data is integrated into this process. This is partly due to a knowledge gap, with only 12% feeling adequately informed about these practices through standard disclosures.

Limitations in AI’s functionality are also raising ethical concerns. Testing has uncovered that AI-driven interfaces created 2.6 times more usability issues for older adults, and 3.2 times more for users with disabilities, than for the general population.

So how can UX design teams mitigate these risks and ensure their AI products work ethically? 

Why ethics is important in AI

As with any other product, ensuring an AI product functions ethically is essential to protect users and brand reputations. However, AI can add significant capabilities to a product, such as delivering facts, advice and personalisation. This means poor AI UX decisions can create additional ethical risks which need to be mitigated by design teams:

  • Misinformation or disinformation can arise when fabricated outputs are presented confidently as facts (otherwise known as hallucinations), or when false content is generated quickly at scale
  • Biases can appear in AI tools trained on skewed data or assumptions, particularly when edge cases haven’t been considered
  • In sectors such as health, finance and civic tech, AI risks sensitive data being used in ways that aren’t secure and have the potential to cause harm
  • Dark patterns can also emerge: because AI can do so much, users can quickly become passive, or even be nudged into acting in ways they wouldn’t otherwise

Alongside keeping the above in mind, UX designers also need to weigh up which tasks AI should handle and which it shouldn’t. Some functions are better performed by humans, and overlooking this can create an overly confident AI tool that takes over tasks it isn’t well-prepared, well-suited or well-trained for.

This can do significant harm to a brand’s reputation. If users’ needs aren’t being met, the product’s value suffers. Over time, this can build into brand mistrust, particularly if AI capabilities aren’t integrated with users’ benefit in mind. In the worst cases, poor use of AI could even result in lawsuits for violating customer rights and protections.

Design principles for ethical AI

To protect your users and brand, it’s essential that you clearly set out principles for how AI is used in your products.

Make uncertainty clear and visible 

If you’re using a large language model (LLM) as part of your product, you’ll be aware that it can present its outputs as factual information. To avoid presenting hallucinations and misinformation as correct, it’s essential you make it clear to users that AI output is probabilistic, not expert judgement. This can be done in a variety of ways:

  • Use confidence indicators, such as a traffic light system, to show how confident the system is in the information presented, helping users make fully informed decisions
  • Give users the ability to ask for more information or clarification when needed, so they can understand the reasoning and evidence behind an AI tool’s judgement
  • Offer multiple options instead of a single answer to show there’s no definitive response, and preface them with context about the AI’s assumptions
  • Don’t present information in a confident tone when the answer has a high probability of being incorrect, inaccurate or non-factual
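As a rough sketch of the first pattern above, a traffic-light confidence indicator might map a model confidence score to a label and caveat shown alongside the answer. The thresholds, names and caveat wording here are illustrative assumptions, not standards:

```typescript
// Hypothetical sketch: map a model confidence score (0 to 1) to a
// traffic-light indicator displayed next to an AI-generated answer.
type ConfidenceLevel = "green" | "amber" | "red";

interface LabelledAnswer {
  text: string;
  level: ConfidenceLevel;
  caveat: string;
}

function labelAnswer(text: string, confidence: number): LabelledAnswer {
  if (confidence >= 0.85) {
    return { text, level: "green", caveat: "High confidence. Still AI-generated." };
  }
  if (confidence >= 0.5) {
    return { text, level: "amber", caveat: "Moderate confidence. Please verify key details." };
  }
  // Low-confidence answers are flagged so the UI can word them tentatively.
  return { text, level: "red", caveat: "Low confidence. Treat as a starting point only." };
}

const answer = labelAnswer("Your renewal date is likely 1 March.", 0.42);
console.log(`[${answer.level.toUpperCase()}] ${answer.text} (${answer.caveat})`);
```

The design point is that the caveat travels with the answer, so the interface can never render a low-confidence output without its warning.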

By using these clear call-outs, you can protect your brand from misrepresenting insights or judgements as facts and creating distrust amongst users.

Attribute accurately

Transparently attributing any data, information or reasoning used to make judgements helps to build trust between users and AI tools. It also helps users make informed decisions without feeling misdirected or unduly influenced by the technology.

Accurate attribution can include highlighting a list of information sources used by an AI tool. This helps users quickly judge their reputability or continue their own research if needed. Setting out clear reasoning also maintains transparency. Just-in-time explanations can be particularly effective, for example: “we’ve used [X] data to suggest [Y].”
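A minimal sketch of that just-in-time pattern: attach the data sources to the suggestion itself, so the explanation string can only ever be rendered with its attribution. The interface and field names are illustrative assumptions:

```typescript
// Hypothetical sketch: a suggestion object that carries its own attribution,
// following the "we've used [X] data to suggest [Y]" pattern.
interface AttributedSuggestion {
  sources: string[];   // where the underlying data came from
  suggestion: string;  // what the AI is proposing
}

function explainSuggestion(s: AttributedSuggestion): string {
  // Surface the sources alongside the suggestion so users can judge
  // their reputability or continue their own research.
  return `We've used ${s.sources.join(", ")} to suggest ${s.suggestion}.`;
}

console.log(explainSuggestion({
  sources: ["your last 3 invoices", "average payment times"],
  suggestion: "a 14-day payment term",
}));
```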

Augment, not replace

AI, particularly when used to personalize UX, has the potential to guide users into decisions they wouldn’t otherwise make. As such, it’s important that human oversight and input is protected. Designing AI as an assistant that follows a human’s lead means users can enjoy the technology’s benefits without feeling out of control. 

To maintain a user’s autonomy and safety, various controls can be put in place by UX designers, such as:

  • Clearly define use cases, so users understand where your product’s limitations lie and where it can benefit them
  • Make editing AI outputs easy, so users can adapt them to their specific needs and don’t treat them as definitive or complete
  • Add “review before sending or publishing” patterns, prompting human oversight to maintain quality
  • Integrate clear affordances that make overrides or corrections straightforward, so users can easily undo actions if needed
  • Put confirmation steps in place before AI sends messages on a user’s behalf, so they retain complete editorial control
  • Deploy warnings before users act on uncertain or sensitive outputs; consider including disclaimers here to make it clear that they’re responsible for the final action
  • Put steps in place to slow users down in high-stakes decisions, such as a timed delay, a daily limit on risky actions or extra verification
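Several of the controls above can be combined in one place. As a sketch, with illustrative names and thresholds, a guard for risky AI-initiated actions might require explicit confirmation, enforce a daily limit, and impose a short delay before acting:

```typescript
// Hypothetical sketch: a guard that gates risky AI actions behind explicit
// confirmation, a daily limit, and a timed delay. Thresholds are illustrative.
class RiskyActionGuard {
  private actionsToday = 0;

  constructor(
    private readonly dailyLimit: number,
    private readonly delayMs: number,
  ) {}

  async request(action: () => void, confirmed: boolean): Promise<string> {
    if (!confirmed) {
      // Never act on the user's behalf without an explicit confirmation step.
      return "blocked: confirmation required";
    }
    if (this.actionsToday >= this.dailyLimit) {
      // A daily limit slows down repeated high-stakes decisions.
      return "blocked: daily limit reached";
    }
    // A short delay gives the user a moment to reconsider before an
    // irreversible action goes through.
    await new Promise((resolve) => setTimeout(resolve, this.delayMs));
    this.actionsToday += 1;
    action();
    return "done";
  }
}
```

In use, the product would call `guard.request(...)` with `confirmed` set only after the user has completed a review step, keeping the human decision in the loop.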

These steps won’t just help your users feel more reassured about the influence of AI, they’ll also help protect your brand from accusations of unethical usage.

You are not your user

UX designers are usually very tech-savvy, so it’s easy to assume your users have the same level of knowledge. In reality, most users don’t understand the technical workings of AI products, which means they can quickly fall outside the flows you’ve designed for.

To make sure your design can meet the needs of users outside of your own knowledge or expectations, ensure you consider the edge cases. Run tests with diverse groups and get designers from a range of backgrounds to review your work. That way, you can ensure your AI product handles unusual behaviour and remains accessible.

Make ethics everyday

When working on AI products, ensure your team understands the dangers and has clear policies and processes in place to develop ethical UX. That way, you can protect your brand and ensure your users get all the advantages of this new technology, without being put at risk of misdirection, misinformation or misunderstandings. 

KoiStudios
