January 15, 2026

Scepticism and worry about AI are growing amongst users. Although the tech can benefit users and improve their experience of a product, concerns about its influence persist.
While personalised UX helps users reach their goals, 78% are worried about how their data is integrated into this process. This is partly due to a knowledge gap, with only 12% feeling adequately informed about these practices through standard disclosures.
Limitations in AI’s functionality are also raising ethical concerns. Testing has uncovered that AI-driven interfaces created 2.6 times more usability issues for older adults and 3.2 times more issues for users with disabilities compared to general populations.
So how can UX design teams mitigate these risks and ensure their AI products work ethically?
As with any other product, ensuring an AI product functions ethically is essential to protect users and brand reputations. However, AI can add significant capabilities to a product, such as delivering facts, advice and personalisation. This means poor AI UX decisions can create additional ethical risks which design teams need to mitigate.
Alongside these risks, UX designers also need to weigh up which tasks AI should handle and which it shouldn't. Some functions are better performed by humans, and not considering this could create an overly confident AI tool which takes over tasks it isn't well-prepared, well-suited, or well-trained for.
This can do significant harm to a brand's reputation. If users' needs aren't being met, the product's perceived value falls. Over time, this can build into brand mistrust, particularly if AI capabilities aren't integrated with users' benefit in mind. In the worst-case scenario, poor use of AI could result in a lawsuit for violating customer rights and protections.
To protect your users and brand, it's essential that you clearly set out principles for AI's use in your products.
If you're using a Large Language Model (LLM) as part of your product, you'll be aware that it can present its output as factual information. To avoid presenting hallucinations and misinformation as correct, it's essential to make clear to users that the AI isn't an expert: its answers are probabilistic. This can be done in a variety of ways, such as in-line disclaimers, confidence labels or prompts to verify important answers.
By using these clear call-outs, you can protect your brand from misrepresenting insights or judgements as facts and creating distrust amongst users.
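As a minimal sketch of this kind of call-out, the snippet below wraps a raw LLM answer with an explicit "probabilistic, not verified" disclaimer and an optional coarse confidence label before it reaches the UI. The function and constant names (`wrap_llm_answer`, `DISCLAIMER`) are illustrative assumptions, not a real API.

```python
from typing import Optional

# Hypothetical disclaimer text; a real product would tune this wording
# with legal and content-design input.
DISCLAIMER = (
    "This answer was generated by AI. It is a probabilistic prediction, "
    "not verified fact, and may contain errors."
)

def wrap_llm_answer(answer: str, confidence: Optional[float] = None) -> str:
    """Attach a disclaimer (and optional confidence band) to an LLM answer."""
    parts = [answer, DISCLAIMER]
    if confidence is not None:
        # Surface a coarse band rather than a false-precision percentage.
        band = "high" if confidence >= 0.8 else "medium" if confidence >= 0.5 else "low"
        parts.append(f"Model confidence: {band}.")
    return "\n\n".join(parts)
```

Showing a band ("high"/"medium"/"low") rather than an exact score avoids implying a precision the model doesn't have.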
Transparently attributing the data, information or reasoning behind an AI tool's judgements helps build trust between users and the tool. It also helps users make informed decisions without feeling misdirected or unduly influenced by the technology.
Accurate attribution can include highlighting the list of information sources an AI tool used, which helps users quickly judge their reputability or continue their own research if needed. Setting out clear reasoning also maintains transparency. Just-in-time explanations can be particularly effective, for example: "we've used [X] data to suggest [Y]."
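The just-in-time pattern above can be sketched as a small template function, assuming the product records which data points fed a suggestion. The function name and inputs here are hypothetical, for illustration only.

```python
def explain_suggestion(data_used: list, suggestion: str) -> str:
    """Render a "we've used [X] data to suggest [Y]" style explanation.

    data_used: human-readable labels for the data that informed the
    suggestion (e.g. ["recent orders", "saved preferences"]).
    """
    sources = ", ".join(data_used)
    return f"We've used your {sources} to suggest {suggestion}."

# Usage: shown next to the suggestion itself, at the moment it appears.
message = explain_suggestion(["recent orders", "saved preferences"], "these recipes")
```

Keeping the explanation adjacent to the suggestion, rather than buried in a privacy policy, is what makes it "just in time".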
AI, particularly when used to personalise UX, has the potential to guide users into decisions they wouldn't otherwise make. As such, it's important that human oversight and input are protected. Designing AI as an assistant that follows a human's lead means users can enjoy the technology's benefits without feeling out of control.
To maintain users' autonomy and safety, UX designers can put various controls in place, such as requiring confirmation before AI-driven actions, making suggestions easy to dismiss or undo, and offering opt-outs from personalisation.
These steps won't just help your users feel reassured about AI's influence; they'll also help protect your brand from accusations of unethical usage.
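One way to structure this "AI assists, human decides" pattern is shown below: the AI can only propose actions, and nothing executes without explicit user confirmation. This is a minimal sketch; the class and method names are assumptions, not a prescribed design.

```python
from dataclasses import dataclass, field

@dataclass
class ProposedAction:
    description: str
    confirmed: bool = False

@dataclass
class AssistantSession:
    pending: list = field(default_factory=list)
    executed: list = field(default_factory=list)

    def propose(self, description: str) -> ProposedAction:
        """The AI suggests an action; it is queued, never auto-executed."""
        action = ProposedAction(description)
        self.pending.append(action)
        return action

    def confirm(self, action: ProposedAction) -> None:
        """Only an explicit human confirmation moves an action forward."""
        action.confirmed = True
        self.pending.remove(action)
        self.executed.append(action.description)

    def dismiss(self, action: ProposedAction) -> None:
        """Users can reject a suggestion with no side effects."""
        self.pending.remove(action)
```

The key design choice is that `propose` has no path to `executed`; the human's confirmation is the only bridge between the two.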
UX designers are very tech-savvy, so it's easy to assume your users have the same level of knowledge. In truth, most users don't understand the technical aspects of AI products, which means they can quickly fall outside the flows you designed for.
To make sure your design meets the needs of users beyond your own knowledge and expectations, consider the edge cases. Run tests with diverse groups and get designers from a range of backgrounds to review your work. That way, you can ensure the AI handles unusual behaviour and your product stays accessible.
When working on AI products, ensure your team understands the dangers and has clear policies and processes in place to develop ethical UX. That way, you can protect your brand and ensure your users get all the advantages of this new technology, without being put at risk of misdirection, misinformation or misunderstandings.