Designing for Uncertainty in AI Systems
Reba Habib

Traditional software is designed to behave predictably. When users take an action, the system responds in a defined way. Over time, users learn how the system behaves, and this predictability helps build confidence.
AI systems introduce a different kind of interaction.
Instead of following fixed rules, AI systems generate outputs based on probabilities and patterns. This means that outcomes can vary, even when users provide similar inputs. While this flexibility enables powerful capabilities, it also introduces uncertainty into the experience.
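To make the mechanism concrete, here is a minimal sketch of probabilistic generation. The token distribution below is invented for illustration, and a real model derives its probabilities from the full input, but the sampling step is the same: as long as generation involves random sampling, identical inputs can yield different outputs on different runs.

```typescript
// Minimal sketch: sampling from a (hypothetical) next-token distribution.
// The probabilities are invented for illustration; a real model computes
// them from the input, but the sampling step works the same way.
const nextTokenProbs: Record<string, number> = {
  suggestions: 0.45,
  recommendations: 0.3,
  ideas: 0.15,
  options: 0.1,
};

function sampleToken(probs: Record<string, number>): string {
  let r = Math.random(); // a different random draw on every call
  for (const [token, p] of Object.entries(probs)) {
    r -= p;
    if (r <= 0) return token;
  }
  return Object.keys(probs)[0]; // guard against floating-point rounding
}

// The same "input" (the same distribution) can produce different outputs.
for (let i = 0; i < 3; i++) {
  console.log(sampleToken(nextTokenProbs));
}
```

Running the loop several times will typically print different tokens, which is exactly the kind of variability users notice.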
This uncertainty becomes part of what designers must consider.
Uncertainty Is Not New, But It Becomes Visible
All software contains some level of uncertainty. Network delays, incomplete data, or unexpected system behavior can affect outcomes. However, traditional software often hides this uncertainty behind deterministic interfaces.
AI systems make uncertainty more visible.
For example, a search engine that returns ranked results may already involve probabilistic logic, but users rarely think about it that way. In contrast, generative AI systems produce responses that feel more interpretive. Users may notice differences between outputs, gaps in reasoning, or unexpected phrasing.
These experiences make uncertainty more apparent.
Studies from Microsoft Research have shown that users interacting with generative AI tools often form expectations quickly. When outputs vary in ways users do not anticipate, they may begin to question system reliability, even if overall performance remains strong.
This suggests that uncertainty is not just a technical characteristic — it becomes part of the user experience.
Users Adapt to Uncertainty
When interacting with AI systems, users often develop strategies for managing uncertainty.
Some users verify outputs before acting. Others use AI for low-risk tasks but avoid it in high-stakes situations. Over time, users build mental models about when AI is helpful and when it requires caution.
This behavior has been observed in studies of AI-assisted writing tools. Research from Stanford University found that users frequently relied on AI-generated drafts for initial ideas but edited heavily before finalizing content. Users treated AI as a collaborator rather than a definitive source.
This pattern reflects how users naturally adjust to uncertainty.
Design can support this process.
Designing for Uncertainty Means Supporting Interpretation
When systems produce uncertain outputs, users need ways to interpret results. This does not necessarily require explicit warnings or technical explanations. Instead, design can help users understand how to engage with AI effectively.
For example, recommendation systems often communicate uncertainty implicitly. Netflix presents recommendations as suggestions rather than definitive answers. This framing encourages exploration rather than reliance.
Similarly, autocomplete suggestions in products like Google Search or Docs are presented as optional inputs. Users can accept or ignore suggestions without committing to them. This interaction pattern helps users navigate uncertainty naturally.
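This pattern can be described in code as a separation between the user's committed text and a pending suggestion, so that accepting is an explicit action and ignoring costs nothing. The sketch below is a simplified model with invented names, not a description of how Google Search or Docs actually implements autocomplete.

```typescript
// Sketch of an "optional suggestion" interaction: the suggestion lives
// alongside the user's text and never modifies it until explicitly accepted.
// All names here are hypothetical, invented for illustration.
interface EditorState {
  committedText: string; // text the user owns
  pendingSuggestion: string | null; // AI output, shown but not committed
}

function acceptSuggestion(state: EditorState): EditorState {
  if (state.pendingSuggestion === null) return state;
  return {
    committedText: state.committedText + state.pendingSuggestion,
    pendingSuggestion: null,
  };
}

function dismissSuggestion(state: EditorState): EditorState {
  // Ignoring a suggestion leaves the user's text untouched.
  return { ...state, pendingSuggestion: null };
}

// Usage: accepting a suggestion (e.g. via Tab) is a deliberate choice
// rather than a default outcome.
let state: EditorState = {
  committedText: "Designing for ",
  pendingSuggestion: "uncertainty",
};
state = acceptSuggestion(state);
console.log(state.committedText); // "Designing for uncertainty"
```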
These examples show how design shapes the way users interpret probabilistic outputs.
Uncertainty Changes Interaction Patterns
AI systems often shift interaction patterns from confirmation to exploration.
With traditional software, users often seek definitive answers. With AI systems, users may explore multiple outputs, refine prompts, or iterate on results. This creates a more conversational or exploratory interaction style.
Researchers at Nielsen Norman Group have noted that users interacting with generative AI tools often adopt iterative workflows, refining inputs to improve results. This behavior reflects how users manage uncertainty through interaction.
Designers can support these workflows by making iteration easy and visible.
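One way to sketch such support is a session object that keeps every refinement visible rather than overwriting the last result, letting users compare attempts and return to earlier ones. This is a hypothetical structure, with generateDraft standing in for whatever model call a product would actually make.

```typescript
// Sketch: keeping an iteration history visible instead of overwriting results.
// `generateDraft` is a hypothetical stand-in for a real model call.
interface Attempt {
  prompt: string;
  output: string;
}

class IterationSession {
  private attempts: Attempt[] = [];

  refine(prompt: string, generateDraft: (p: string) => string): Attempt {
    const attempt = { prompt, output: generateDraft(prompt) };
    this.attempts.push(attempt); // earlier attempts stay available
    return attempt;
  }

  history(): readonly Attempt[] {
    return this.attempts; // the UI can render all attempts for comparison
  }
}

// Usage: each refinement is a new visible attempt, not a silent replacement.
const session = new IterationSession();
const fakeModel = (p: string) => `draft for: ${p}`;
session.refine("summarize the report", fakeModel);
session.refine("summarize the report in three bullet points", fakeModel);
console.log(session.history().length); // 2
```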
The Role of Transparency
Transparency can also help users understand uncertainty, but it must be applied carefully. Too much technical detail may overwhelm users, while too little information may create confusion.
For example, AI-powered decision support tools in healthcare often provide explanations or confidence indicators. Studies published in npj Digital Medicine have found that clinicians are more likely to trust AI recommendations when they can understand the reasoning or review supporting information.
These findings suggest that transparency helps users interpret uncertain outputs.
However, transparency does not always require technical explanations. Simple design choices, such as framing outputs as suggestions or enabling verification, can also support understanding.
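As one illustration, an internal confidence score can be translated into user-facing framing instead of being shown as a raw number. The thresholds and wording below are placeholders, since any real product would need to tune them with users.

```typescript
// Sketch: framing an output according to an internal confidence score,
// rather than surfacing raw probabilities. Thresholds are illustrative only.
type Framing =
  | { kind: "answer" }
  | { kind: "suggestion"; note: string };

function frameOutput(confidence: number): Framing {
  if (confidence >= 0.9) {
    return { kind: "answer" };
  }
  if (confidence >= 0.6) {
    return { kind: "suggestion", note: "You may want to review this." };
  }
  // Low confidence: frame as a starting point and invite verification.
  return { kind: "suggestion", note: "Consider verifying this before use." };
}

console.log(frameOutput(0.95)); // { kind: "answer" }
console.log(frameOutput(0.5)); // suggestion with a verification prompt
```

The design choice here is that low confidence changes how the output is framed, not whether it is shown.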
Uncertainty as a Design Constraint
Designing for uncertainty requires a shift in mindset. Instead of eliminating ambiguity, designers focus on helping users navigate it.
This may involve:
Supporting iteration
Encouraging verification
Allowing correction
Framing outputs appropriately
These considerations extend beyond interface design and into interaction design and workflow design.
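Taken together, those four considerations can be sketched as a small lifecycle for AI output: every result starts as a framed draft, and it only becomes reliable once the user verifies or corrects it. The states and function names below are hypothetical, one possible workflow rather than a standard pattern.

```typescript
// Sketch: a lifecycle for AI output that bakes in framing, verification,
// and correction. States and transitions are hypothetical illustrations.
type OutputStatus = "draft" | "verified" | "corrected";

interface AiOutput {
  text: string;
  status: OutputStatus;
}

// Framing: every new output starts as a draft, never as a final answer.
function newDraft(text: string): AiOutput {
  return { text, status: "draft" };
}

// Verification: the user confirms the draft is acceptable.
function verify(output: AiOutput): AiOutput {
  return { ...output, status: "verified" };
}

// Correction: the user edits the draft; their text supersedes the model's.
function correct(output: AiOutput, editedText: string): AiOutput {
  return { text: editedText, status: "corrected" };
}

let result = newDraft("AI-generated summary...");
result = verify(result); // user checked it and accepted it
console.log(result.status); // "verified"
result = correct(result, "User-edited summary.");
console.log(result.status); // "corrected"
```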
As AI becomes more integrated into products, uncertainty becomes a core characteristic of the experience rather than an exception.
Designing Systems That Support Confidence
Uncertainty does not necessarily reduce confidence. When users understand how to work with AI, they often become more comfortable using it.
Over time, users develop familiarity with system behavior. They learn when to rely on AI and when to apply their own judgment. This balance supports more effective interactions.
Design plays a role in supporting this learning process.
As AI systems continue to evolve, designing for uncertainty becomes an essential part of creating usable, trustworthy experiences. Instead of attempting to remove uncertainty, designers help users understand and navigate it.
This shift reflects the broader transition from deterministic software to intelligent systems — and the new design challenges that come with it.