Designing for Model Limitations and Failure Modes
Reba Habib

Traditional software is designed to minimize failure. When errors occur, they are typically handled through validation, fallback states, or error messaging. These failures are often predictable and bounded. Designers define edge cases and ensure that users can recover when issues occur.
AI systems introduce a different type of failure.
Instead of predictable errors, AI systems can produce incorrect, incomplete, or misleading outputs. These failures are not always obvious. The system may generate responses that are confident but wrong, recommendations that are irrelevant, or predictions that are inaccurate.
These are not system crashes. They are model limitations.
Designing for AI systems therefore requires designing for model limitations and failure modes.
AI Failures Are Often Subtle
Traditional software failures are typically visible. A form fails validation. A network request times out. An error message appears. These failures signal that something went wrong.
AI failures are often less obvious. A generated response may appear plausible but contain inaccuracies. A recommendation may seem reasonable but be irrelevant. A prediction may appear confident but be incorrect.
This subtlety changes how designers approach failure.
Users may not always recognize when AI outputs are unreliable. Designers must therefore consider how systems communicate limitations.
Failure Modes in AI Systems
AI systems can fail in multiple ways. These failures often stem from model limitations rather than system errors.
Common failure modes include:
Incorrect outputs
Incomplete responses
Irrelevant recommendations
Biased results
Overconfident responses
The likelihood and severity of these failures depend on context, task, and training data.
For example, generative systems such as ChatGPT may produce fluent responses that appear accurate but contain factual errors, often called hallucinations. Recommendation systems such as those used by Netflix may suggest content that does not align with user preferences.
These behaviors reflect probabilistic systems rather than deterministic ones.
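One way to make these failure modes concrete in a product is to represent them as a small taxonomy that the interface can log and respond to. The sketch below is illustrative rather than standard; the category names and the flagOutput helper are assumptions of this example.

```typescript
// A hypothetical taxonomy of AI failure modes, mirroring the list above.
// Category names are illustrative, not an industry standard.
type FailureMode =
  | "incorrect"      // confidently wrong content
  | "incomplete"     // partial or truncated responses
  | "irrelevant"     // off-target recommendations
  | "biased"         // skewed or unrepresentative results
  | "overconfident"; // certainty the evidence does not support

interface FlaggedOutput {
  text: string;
  suspectedModes: FailureMode[];
  flaggedAt: Date;
}

// Record a user- or system-detected failure so the team can study
// which limitations show up most often in practice.
function flagOutput(text: string, modes: FailureMode[]): FlaggedOutput {
  return { text, suspectedModes: modes, flaggedAt: new Date() };
}

const report = flagOutput("Paris is the capital of Spain.", [
  "incorrect",
  "overconfident",
]);
console.log(report.suspectedModes); // ["incorrect", "overconfident"]
```

Instrumenting failures this way turns vague impressions of unreliability into data that designers can act on.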
Designing for Imperfect Systems
Because AI systems are imperfect, designers must support user evaluation. Users may need to review outputs, verify results, or refine inputs.
Designers can support this process by enabling:
Refinement
Iteration
Verification
Comparison
These interactions help users navigate limitations.
For example, allowing users to regenerate outputs acknowledges variability, and allowing users to refine prompts supports iterative improvement.
Together, these patterns give users practical ways to work around imperfect results.
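To show how regeneration and refinement might look in code, here is a minimal sketch of a session object that keeps every attempt so users can compare and revert. The generate function stands in for any model call and is an assumption of this example.

```typescript
// Stand-in for a model call; in a real product this would hit an API.
// Its randomness here mimics the variability of generative models.
async function generate(prompt: string): Promise<string> {
  return `response to "${prompt}" (variant ${Math.floor(Math.random() * 1000)})`;
}

// A session supporting the patterns above: refinement, iteration,
// verification, and comparison. Every attempt is kept, not replaced.
class GenerationSession {
  private attempts: { prompt: string; output: string }[] = [];

  // Regenerate: same prompt; acknowledge variability by keeping all results.
  async regenerate(prompt: string): Promise<string> {
    const output = await generate(prompt);
    this.attempts.push({ prompt, output });
    return output;
  }

  // Refine: a new prompt building on what the user learned from prior outputs.
  async refine(newPrompt: string): Promise<string> {
    return this.regenerate(newPrompt);
  }

  // Comparison: expose the full history side by side so users can
  // verify and choose, instead of silently overwriting earlier outputs.
  history(): readonly { prompt: string; output: string }[] {
    return this.attempts;
  }
}

async function demo() {
  const session = new GenerationSession();
  await session.regenerate("summarize this article");
  await session.refine("summarize this article in three bullet points");
  console.log(session.history()); // both attempts remain available
}

demo();
```

The key design choice is that history is preserved: comparison and verification are only possible if earlier outputs remain visible.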
Communicating Limitations
Users benefit from understanding that AI systems are not always accurate. Communicating limitations helps set expectations and reduces over-reliance.
However, communicating limitations must be balanced. Excessive warnings may reduce usability, while insufficient communication may create misplaced trust.
Designers must determine appropriate ways to communicate uncertainty.
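One concrete pattern is to map a model-reported confidence score to graduated messaging rather than a single blanket warning. The sketch below assumes the model exposes a roughly calibrated confidence value, which many models do not guarantee; the thresholds are illustrative, not recommended values.

```typescript
// Maps a confidence score (assumed available and roughly calibrated)
// to graduated messaging. Thresholds are illustrative only.
function uncertaintyMessage(confidence: number): string | null {
  if (confidence >= 0.9) {
    return null; // high confidence: no warning, to avoid warning fatigue
  }
  if (confidence >= 0.6) {
    return "This answer may contain inaccuracies. Consider verifying key facts.";
  }
  return "The system is unsure about this answer. Please verify it before relying on it.";
}

console.log(uncertaintyMessage(0.95)); // null: no warning shown
console.log(uncertaintyMessage(0.7));  // mild verification prompt
console.log(uncertaintyMessage(0.3));  // strong warning
```

Graduated messaging addresses the balance described above: warnings appear when they are most warranted instead of on every output.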
Recovery and Correction
Designing for failure also involves recovery. When outputs are incorrect, users should be able to correct or refine results.
Correction mechanisms support iterative workflows. Users adjust inputs, modify outputs, and guide system behavior.
These mechanisms help users manage limitations effectively.
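A correction mechanism can be sketched as a record of what the user changed, fed back into the next request so the edit guides subsequent behavior. The structure and function names here are hypothetical.

```typescript
// A correction captured from the user: what the system said,
// what the user changed it to, and an optional note explaining why.
interface Correction {
  original: string;
  corrected: string;
  note?: string;
}

// Build the next prompt from the user's correction so the workflow
// is iterative: the edit guides subsequent system behavior.
function promptWithCorrection(basePrompt: string, correction: Correction): string {
  return [
    basePrompt,
    `Previously you answered: "${correction.original}".`,
    `The user corrected this to: "${correction.corrected}".`,
    correction.note ? `Reason given: ${correction.note}.` : "",
    "Take this correction into account in your next answer.",
  ]
    .filter(Boolean)
    .join("\n");
}

const next = promptWithCorrection("List the project deadlines.", {
  original: "The launch is on May 3.",
  corrected: "The launch is on May 10.",
  note: "Schedule slipped by a week.",
});
console.log(next);
```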
Designing for Ongoing Improvement
Model limitations may change over time. As systems improve, some failure modes may decrease while others emerge. Designers must consider evolving system behavior.
Designing flexible interactions helps accommodate improvement.
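One way to keep interactions flexible is to drive failure-related affordances from per-model configuration rather than hard-coding them, so they can be tuned as behavior evolves. This sketch, including the setting names, is an assumption of this example.

```typescript
// Affordances that may need tuning as model behavior evolves.
// Setting names are hypothetical.
interface FailureAffordances {
  showConfidenceWarnings: boolean; // may be relaxed as accuracy improves
  allowRegeneration: boolean;      // useful while outputs are highly variable
  requireHumanReview: boolean;     // for high-stakes or known-weak areas
}

// Per-model-version configuration: as failure modes shrink or emerge,
// the interface adapts without structural redesign.
const affordancesByModel: Record<string, FailureAffordances> = {
  "model-v1": { showConfidenceWarnings: true, allowRegeneration: true, requireHumanReview: true },
  "model-v2": { showConfidenceWarnings: true, allowRegeneration: true, requireHumanReview: false },
};

function affordancesFor(modelVersion: string): FailureAffordances {
  // Default to the most cautious settings for unknown versions.
  return (
    affordancesByModel[modelVersion] ?? {
      showConfidenceWarnings: true,
      allowRegeneration: true,
      requireHumanReview: true,
    }
  );
}

console.log(affordancesFor("model-v2").requireHumanReview); // false
```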
Designing for Model Limitations
Designing for AI systems involves accepting imperfection. Designers must consider how users interpret outputs, manage errors, and refine results.
These considerations extend traditional UX design.
As AI systems become more common, designing for model limitations becomes essential. Designers shape how users interact with imperfect intelligence and help ensure that systems remain usable despite uncertainty.