

Architecture / Strategy / Human-Factors
Let's consider how we have interacted with software until now.
User experience was organized around a premise so foundational that it barely needed stating: humans act, machines respond. We clicked and made something happen.
Interfaces existed to help people navigate, configure, and execute. Dashboards displayed state. Forms captured intent. Buttons triggered outcomes. Good UX meant fewer clicks, clearer flows, faster completion, even standardized menu placement. The entire discipline was optimized around a single relationship: a person performing tasks through a tool that did exactly what it was told.

That relationship is now breaking.
Autonomous and agentic systems introduce a structural rupture in how work happens inside software. Instead of waiting for instruction, systems can now interpret intent, decompose goals, coordinate tools, and execute outcomes — often faster and more reliably than the humans who built them.
The human is no longer the operator of the machine.
The human becomes the supervisor of intelligence.
This is the agentic shift. And it demands a complete rethinking of what user experience means — not as a cosmetic update, but as a redesign of the relationship between human judgment and machine action.

Traditional UX optimized execution. Agentic UX optimizes judgment.
In the cockpit era of software, success meant helping a user complete a task step-by-step with minimal friction. Click here, fill this, confirm that. The system was inert without instruction.
In this next round of design, the system performs many of those steps itself. The human's role reorganizes — from doing to evaluating, from configuring to guiding, from executing to correcting.
This pattern isn't new.
It mirrors decades of research in automation and human factors, particularly the work of Thomas Sheridan, who coined the term "supervisory control" in the 1970s to describe how humans interact with semi-autonomous systems in aviation, nuclear power, and military operations.
Sheridan's core insight was that when machines take over action, human work doesn't vanish — it migrates upstream. People stop performing tasks and start monitoring, interpreting, and intervening. They stop driving and start watching the road.
That migration creates a paradox. The more capable the system becomes, the more dangerous disengagement becomes. If users drift "out of the loop" — a phrase the automation literature has been using since the 1980s — they lose the situational awareness and skill required to intervene when the system fails. Agentic systems will fail and drift. They hallucinate. They misinterpret context. They optimize for the wrong objective. The question isn't whether the human will need to step in, but whether they'll be equipped to do so when it matters.
Agentic UX therefore cannot pursue convenience alone. It must preserve meaningful human orientation — cognitive engagement, not just access — even while reducing effort. Supervision, not interaction, becomes the core design problem.

In the execution era, we measured UX quality through task time, completion rate, error frequency, and discoverability. Those metrics still matter, but they're no longer sufficient. Agentic systems introduce a more fragile governing metric: calibrated trust.
Not maximum trust. Not blind trust. Appropriate reliance — users delegating when the system is strong, resisting when it is weak, and recovering quickly when it fails. Lee and See's foundational work on trust in automation defined calibrated trust as the correspondence between a system's objective capability and the user's subjective confidence in it. When these are misaligned, you get either disuse (the system works but nobody trusts it) or misuse (the system fails but nobody catches it).
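To make that distinction concrete, here is a minimal sketch in Python, under assumptions of my own: capability and reliance are reduced to single numbers, and the tolerance threshold is arbitrary. It illustrates the calibration idea, not anything prescribed by Lee and See.

# Illustrative only: compare how reliable the system actually is with how
# often the user relies on it. Thresholds are arbitrary assumptions for the
# sketch, not values from Lee and See (2004).

def trust_calibration(system_accuracy: float, user_reliance_rate: float,
                      tolerance: float = 0.15) -> str:
    """Classify the gap between objective capability and subjective reliance."""
    gap = user_reliance_rate - system_accuracy
    if gap < -tolerance:
        return "disuse"      # the system works, but the user keeps overriding it
    if gap > tolerance:
        return "misuse"      # the user defers even where the system is weak
    return "calibrated"      # reliance roughly tracks capability

# Example: a system that is right 90% of the time but accepted only 60% of the time.
print(trust_calibration(system_accuracy=0.90, user_reliance_rate=0.60))  # -> "disuse"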
This matters because trust in intelligent systems behaves asymmetrically. Research by Dietvorst, Simmons, and Massey on "algorithm aversion" showed that people often abandon algorithmic systems after seeing a single visible mistake — even when the algorithm consistently outperforms human judgment. A 2025 study in Behavioral Sciences on trust dynamics in AI-assisted financial advising confirmed the pattern: users maintain what researchers call a "perfect automation schema," expecting flawless performance from AI and interpreting any deviation as evidence of fundamental inadequacy rather than normal variance. Humans get the benefit of situational excuses. Algorithms don't.
But here's the design insight that changes everything: Dietvorst found that algorithm aversion decreased significantly when participants were allowed to make even minimal adjustments to the algorithm's output. The modification didn't need to be meaningful. The sense of control was enough. This maps directly to what agentic UX must provide — not the illusion of control, but real mechanisms for steering, correcting, and overriding. Comfort rises when the system feels steerable, even if the user isn't steering.
Agentic UX must therefore support a continuous trust loop: the action is taken, the rationale is visible, the consequences are previewable, the errors are reversible, and the system's future behavior adapts. Trust becomes operational infrastructure, not an emotional afterthought.

As systems begin acting independently, a subtle psychological tension emerges. Users rarely articulate it directly, but it shows up in behavior: hesitation to enable automation, repeated manual checking, discomfort with invisible processes, abandonment after a single visible failure.
I call this autonomy anxiety — the discomfort produced when meaningful outcomes occur without visible human agency. Three fears sit underneath it.
1) The first is hidden mistakes. "What happened off-screen that I didn't see?" When a system executes a complex multi-step process and only surfaces the outcome, users lose the ability to assess quality at each intermediate step. The opacity breeds distrust, especially in domains where errors compound.
2) The second is irreversibility. "Can I fix it if it breaks?" This is the fear that autonomous action creates permanent consequences. It's why people are far more comfortable delegating low-stakes tasks to AI — sorting emails, drafting summaries — than high-stakes ones like sending financial transactions or modifying production code.
3) The third is competence displacement. "Am I still necessary?" This one runs deeper than workflow. It touches identity. If the system handles the work, what remains of the human's professional value? This fear doesn't require the system to be perfect — it only requires the system to be good enough that the human's role becomes ambiguous.
Agentic UX cannot eliminate these fears. But it can transform them.
Psychological safety emerges when systems feel steerable, reversible, accountable, and aligned. Control in the agentic era is no longer about manual execution. It's about recoverability. Undo becomes a moral contract between the system and the person depending on it.

Because systems can now interpret intent and act on it, the grammar of interaction changes at a fundamental level. The old question was: "Where do I click?" The new question is: "Is this what you meant — and should I proceed?"
Interfaces shift from navigation surfaces to negotiation mechanisms. A new set of interaction primitives emerges.
Intent framing. The system states what it believes the user wants, making the interpretation visible before action begins. This is the first opportunity for correction — and the cheapest one.
Outcome preview. Changes are shown as diffs and consequences, not configuration steps. The user sees what will happen, not how the system plans to get there. This collapses the complexity of multi-step execution into a single judgment call: is this the right outcome?
Approval gradients. Autonomy scales with risk. Low-consequence, reversible actions execute automatically. Medium-consequence actions require confirmation. High-stakes, irreversible actions demand mandatory review. This tiered model — what I think of as Green, Yellow, and Red autonomy zones — prevents the system from treating every action with the same level of ceremony, which would create friction fatigue, or with the same level of freedom, which would create trust failures.
Provenance. Every action emits a trace. "The work happened" and "the evidence exists" must be the same event. Traces need to be human-readable and machine-verifiable, supporting drill-down inspection without forcing the user to leave their current context.
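To ground the approval-gradient and provenance primitives just described, here is a minimal Python sketch. The tier thresholds, field names, and record shapes are my own assumptions for illustration, not a reference implementation.

# Illustrative sketch: route a proposed action into Green / Yellow / Red
# autonomy tiers and emit a provenance record for every decision.
# All names and thresholds here are assumptions for the example.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Tier(Enum):
    GREEN = "auto_execute"        # low consequence, reversible
    YELLOW = "confirm_first"      # medium consequence
    RED = "mandatory_review"      # high stakes or irreversible

@dataclass
class ProposedAction:
    description: str
    reversible: bool
    impact: float                 # 0.0 (trivial) .. 1.0 (severe), estimated upstream

@dataclass
class Trace:
    action: str
    tier: str
    reversible: bool
    impact: float
    decided_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def classify(action: ProposedAction) -> tuple[Tier, Trace]:
    if not action.reversible or action.impact >= 0.7:
        tier = Tier.RED
    elif action.impact >= 0.3:
        tier = Tier.YELLOW
    else:
        tier = Tier.GREEN
    # "The work happened" and "the evidence exists" are the same event:
    # the trace is produced at the moment the decision is made.
    return tier, Trace(action.description, tier.name, action.reversible, action.impact)

tier, trace = classify(ProposedAction("archive 40 stale tickets", reversible=True, impact=0.2))
print(tier, trace)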
These primitives move UX closer to governance than productivity. Closer to decision architecture than workflow design.
The interaction loop becomes:
Propose → Preview → Approve → Act → Verify.
And critically, every step in that loop must be interruptible.
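One hypothetical shape for that loop, sketched in Python: each phase is a stub, and a shared interrupt flag is checked before every step so the human can halt the run at any point. The structure is an assumption, not a prescribed architecture.

# Hypothetical skeleton of the Propose -> Preview -> Approve -> Act -> Verify loop.
# Every phase checks for a pending interrupt so the human can halt at any point.
import threading

interrupted = threading.Event()   # set from the UI when the user says "stop"

def run_interaction_loop(goal: str) -> None:
    phases = [
        ("propose", lambda: f"plan for: {goal}"),           # intent framing
        ("preview", lambda: "diff of expected changes"),     # outcome preview
        ("approve", lambda: True),                           # approval gradient decides ceremony
        ("act",     lambda: "execution result"),
        ("verify",  lambda: "post-conditions checked"),
    ]
    for name, step in phases:
        if interrupted.is_set():
            print(f"halted before {name}; state preserved for review")
            return
        print(name, "->", step())

run_interaction_loop("clean up duplicate CRM records")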

Agentic products often appear visually minimal — a conversation pane, a few controls, generous whitespace. This minimalism can be misleading, because it isn't aesthetic restraint. It's the visible result of a deeper structural move: complexity has migrated backstage.
Larry Tesler articulated this principle at Xerox PARC in the 1980s: every application has an inherent amount of irreducible complexity, and the only question is who bears it — the user, the designer, or the system. Tesler's Law, sometimes called the Law of Conservation of Complexity, holds that complexity cannot be eliminated, only relocated.
Agentic UX proves the rule — and extends it. In traditional software, complexity moved from the interface into the codebase. In agentic software, complexity moves from the visible interface into orchestration engines, policy enforcement layers, memory systems, audit trails, and risk controls. The surface gets simpler. The infrastructure gets significantly more sophisticated. Minimal UX isn't the removal of complexity. It's the relocation of complexity to orchestration and governance.
The resulting UI becomes episodic — quiet during normal operation, present at decision points, disappearing after guidance is given. Not a cockpit full of instruments demanding constant attention. A lens that focuses when clarity is needed and recedes when it isn't.
This is the supervisor's dashboard, built from on-demand surfaces rather than a permanent cockpit. The architecture layers naturally: a conversational Ask layer at the front, a Reveal layer of surfaces and canvases behind it, and an Audit layer of history and policy at the back. Each layer surfaces only when relevant, keeping cognitive load proportional to actual decision complexity.
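A hypothetical sketch of that layering in Python: a small router that decides which layer should surface for a given event, defaulting to the lightest one. The event names and the mapping are assumptions for illustration.

# Illustrative routing of events to the Ask / Reveal / Audit layers.
# Event names and the mapping are assumptions, not a fixed taxonomy.
from enum import Enum, auto

class Layer(Enum):
    ASK = auto()      # conversational front: questions, confirmations
    REVEAL = auto()   # surfaces and canvases: previews, diffs, plans
    AUDIT = auto()    # history and policy: traces, approvals, overrides

def surface_for(event: str) -> Layer:
    routing = {
        "clarification_needed": Layer.ASK,
        "outcome_preview_ready": Layer.REVEAL,
        "policy_exception_logged": Layer.AUDIT,
    }
    return routing.get(event, Layer.ASK)  # default to the lightest-weight layer

print(surface_for("outcome_preview_ready"))  # -> Layer.REVEAL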

Autonomous systems inevitably interrupt. Approval requests, alerts, escalations, clarification questions — these are the natural communication points between a system that acts and a human who oversees. But the research on interruption costs in knowledge work is sobering.
Gloria Mark's research at UC Irvine found that workers spend an average of 47 seconds on a task before switching, and it takes roughly 23 minutes to fully refocus after a significant interruption. Harvard Business Review reported that knowledge workers toggle between applications over 1,200 times per day, costing approximately four hours per week in reorientation alone. Sophie Leroy's work on "attention residue" showed that even after switching tasks, part of your cognitive attention remains stuck on the previous task — degrading performance on whatever you're doing now.
This research has direct implications for agentic design. If an autonomous system interrupts a human 30 times a day for approval requests, confirmations, and status updates, it hasn't reduced workload — it's fragmented it. The system is doing work, but it's destroying the human's ability to do theirs.
Agentic UX must therefore govern when to interrupt, not just how. This introduces a design variable that traditional software rarely needed: the interruption budget. Every system interaction that demands human attention has a cost. That cost must be modeled, measured, and managed.
The framework maps onto a two-axis model. On one axis: the value of the information being surfaced. On the other: the cost of interrupting the human. High-value, low-cost interruptions should proceed immediately.
Low-value, high-cost interruptions should be deferred or batched. High-value, high-cost interruptions — the hardest cases — require confirmation before proceeding. Low-value, low-cost items can wait for the user to check in on their own terms.
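A minimal sketch of that triage in Python, assuming the value and cost judgments are reduced to simple high/low flags; a real system would score them continuously and track a running interruption budget.

# Illustrative triage of an interruption by information value vs. attention cost.
# The four dispositions mirror the two-axis model described above; the boolean
# inputs are a simplifying assumption.

def triage_interruption(high_value: bool, high_cost: bool) -> str:
    if high_value and not high_cost:
        return "interrupt_now"          # cheap to ask, valuable to know
    if high_value and high_cost:
        return "require_confirmation"   # the hard case: pause before proceeding
    if not high_value and high_cost:
        return "defer_and_batch"        # fold into the next digest
    return "hold_for_checkin"           # low value, low cost: wait for the user

print(triage_interruption(high_value=True, high_cost=True))  # -> "require_confirmation"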
Good agentic systems batch low-risk updates, escalate only meaningful decisions, route interruptions to the correct role, and provide clear resumption context so the human can re-enter their workflow without cognitive penalty. Autonomy should reduce cognitive fragmentation, not multiply it. Attention becomes infrastructure.

Undo is necessary but insufficient — especially in high-stakes domains. Users must also be able to challenge the system's decisions, understand its reasoning, obtain alternative outcomes, and ensure corrections persist into future behavior.
This is contestability. And it's the difference between a tool that can be controlled and a system that can be trusted.
Explainability plays a critical role here, but not in the way most people implement it. Showing a user the output isn't enough. Show the evidence. The most effective approach uses contrastive explanations — not just "here's what the system chose," but "here's why it chose X and not Y." This maps to how humans actually evaluate decisions. We don't ask "what happened?" We ask "why this instead of that?"
Equally important: flag uncertainty. When confidence is low, surface it visually. Don't bury uncertain outputs behind the same interface as high-confidence ones. Users deserve to know when the system is guessing — and agentic systems guess more often than their polished interfaces suggest.
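As one hypothetical shape for this, here is a Python sketch of a contrastive explanation record that also carries a confidence score the interface can flag. The field names and the confidence threshold are assumptions.

# Hypothetical shape for a contrastive, uncertainty-aware explanation.
# Field names and the 0.6 confidence threshold are assumptions for the sketch.
from dataclasses import dataclass

@dataclass
class Explanation:
    chosen: str             # "here's what the system chose"
    rejected: str           # "...and not Y"
    contrast_reason: str    # why X over Y, in the terms users evaluate decisions in
    evidence: list[str]     # pointers to the inputs that drove the choice
    confidence: float       # 0.0 .. 1.0

def render(exp: Explanation) -> str:
    flag = " [LOW CONFIDENCE: the system is guessing]" if exp.confidence < 0.6 else ""
    return (f"Chose {exp.chosen} over {exp.rejected} because {exp.contrast_reason}."
            f" Evidence: {', '.join(exp.evidence)}.{flag}")

print(render(Explanation(
    chosen="vendor A", rejected="vendor B",
    contrast_reason="it meets the delivery deadline at comparable cost",
    evidence=["lead-time table", "quote #112"], confidence=0.55,
)))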
Without contestability, autonomy feels authoritarian. With it, autonomy becomes collaborative. Agentic UX therefore intersects not only with interaction design, but with governance, ethics, and the institutional trust that determines whether people actually adopt these systems in practice.

Pulling from the research — across psychology, human factors, distributed systems, and interaction design — four principles consistently surface as essential for agentic systems that people will actually trust and use.
1) Situational awareness. Keep the user "on the loop" — not necessarily in the loop performing every action, but oriented to what the system is doing, why, and what's coming next. Sheridan's research showed that degraded situational awareness is the single largest risk factor when automation fails. If the human doesn't know what state the system is in, they can't intervene effectively.
2) Trust repair. Systems will make mistakes. The question is what happens next. Research on trust dynamics shows that visible error recovery — acknowledging the mistake, explaining what happened, and demonstrating corrective action — overcomes algorithm aversion more effectively than preventing errors in the first place. Perfect systems aren't the goal. Recoverable systems are.
3) Contestability. Allow the user to challenge the logic. Not just undo the output, but interrogate the reasoning, request alternatives, and reshape the system's future behavior. This transforms the user from a passive recipient of machine decisions into an active participant in machine governance.
4) Progressive disclosure. Show the "what" by default. Make the "why" accessible on demand. Don't force users to parse trace logs to understand a decision, but don't hide the trace logs either. Layer information so that surface-level clarity coexists with deep inspectability.
These aren't features to be added at the end of a product cycle. They're architectural decisions that shape how the system is built from the ground up. Bolting trust onto an agentic system is like bolting safety onto a bridge after traffic starts flowing. The structure has to carry it from the beginning.

Across every domain I've studied — automation theory, distributed systems, cognitive psychology, organizational design — a consistent pattern appears: agentic technology doesn't merely change interfaces. It changes the relationship between humans and action.
Humans move from performers to supervisors. From executors to judges. From operators to stewards. Software moves from tool to actor. From responder to collaborator. From interface to environment.
This is not a metaphor. It's an architectural reality that demands new patterns, new metrics, and new design primitives.
The linear workflow — Step 1, Step 2, Step 3 — belongs to the execution era.
Agentic work follows a control loop:
Frame → Plan → Act → Observe → Correct → Remember.
The agent handles planning and acting. The human handles framing and correction. The system provides memory and accountability.
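Stated as a sketch, with one caveat: assigning the observe step to the system is my own reading of "memory and accountability," not something the loop itself specifies.

# Sketch of the agentic control loop and who owns each step.
# The ownership mapping restates the text above; the step bodies are placeholders.
CONTROL_LOOP = [
    ("frame",    "human"),   # state the goal and the constraints
    ("plan",     "agent"),   # decompose the goal into actions
    ("act",      "agent"),   # execute through tools
    ("observe",  "system"),  # record what actually happened (assumption: system-owned)
    ("correct",  "human"),   # adjust course when judgment is needed
    ("remember", "system"),  # persist the outcome and the correction
]

for step, owner in CONTROL_LOOP:
    print(f"{step:>8}: owned by {owner}")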
The strategic thesis is simple, and I'll state it plainly:
Agentic UX is not a cosmetic update. It is a trust and control revolution.
It is a redefinition of control, trust, and responsibility between humans and machines. The organizations that understand this will build systems people actually rely on. The ones that don't will build impressive demos that no one uses in production.
Success is no longer measured by how much the agent does, but by how well the human can understand, guide, and safely depend on what is done.
That's the shift. And it's already underway.

Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114–126. https://doi.org/10.1037/xge0000033
Dietvorst, B. J., Simmons, J. P., & Massey, C. (2018). Overcoming algorithm aversion: People will use imperfect algorithms if they can (even slightly) modify them. Management Science, 64(3), 1155–1170. https://doi.org/10.1287/mnsc.2016.2643
Lee, J. D., Park, S. H., & Kim, Y. J. (2025). Trust formation, error impact, and repair in human–AI financial advisory: A dynamic behavioral analysis. Behavioral Sciences, 15(10), 1370. https://doi.org/10.3390/bs15101370
Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50–80. https://doi.org/10.1518/hfes.46.1.50_30392
Leroy, S., & Glomb, T. M. (2018). Tasks interrupted: How anticipating time pressure on resumption of an interrupted task creates spillover. Organization Science, 29(3), 380–398. https://doi.org/10.1287/orsc.2017.1184
Mahmud, H., Islam, A. K. M. N., Ahmed, S. I., & Smolander, K. (2022). What influences algorithmic decision-making? A systematic literature review on algorithm aversion. Technological Forecasting and Social Change, 175, 121390. https://doi.org/10.1016/j.techfore.2021.121390
Mark, G. (2023). Attention span: A groundbreaking way to restore balance, happiness and productivity. Hanover Square Press.
Mark, G., Gudith, D., & Klocke, U. (2008). The cost of interrupted work: More speed and stress. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’08) (pp. 107–110). ACM. https://doi.org/10.1145/1357054.1357072
Sheridan, T. B. (1992). Telerobotics, automation, and human supervisory control. MIT Press.
Tesler, L. (1985). The law of conservation of complexity. Xerox PARC internal communication. (Commonly cited via Saffer, 2006.)