Do we trust AI more than we trust our own cognition when it comes to key decisions? What happens when we rely on AI without fully understanding our own mental models? And could cognitive automation slowly erode our intellectual self-trust and undermine our self-governance?
Outsourcing Ourselves: Trust, Agency, and Decision-Making in the Age of (Gen) AI
As artificial intelligence (AI) embeds itself into decision-making across industries, the way humans navigate autonomy, trust, and self-governance shifts in profound ways. This new paradigm brings both the promise of efficiency and the peril of overreliance, challenging individuals to maintain self-trust and active engagement in cognitive tasks. Trust in automation — whether in healthcare diagnostics or financial algorithms — becomes not just a convenience but a fundamental shift in how decisions are made and who holds the final authority over them.
Trust and Agency in Decision-Making
1. Trust is a multifaceted concept that underpins social, economic, and technological interactions. It functions as both a cognitive and affective mechanism, allowing individuals to rely on others, institutions, or technology. However, as systems become more automated, trust evolves beyond interpersonal relationships to include depersonalized trust in systems and artifacts. This essay synthesizes several definitions and structures of trust, drawing on research in social psychology, epistemology, and AI-human collaboration.
AI systems can enhance self-efficacy by reducing the cognitive load associated with complex decisions. However, cognitive automation also risks eroding intellectual self-trust — the confidence in one’s ability to make sound decisions independently. This concept extends beyond mere competence; it involves epistemic agency — the belief that one is entitled to rely on one’s own reasoning while remaining open to reflection and correction (Scheman, 2020). Intellectual self-trust is critical in fostering autonomy, as it allows individuals to engage meaningfully in decision-making processes without deferring excessively to external authorities, such as experts or automated systems.
Human-to-human trust involves mutual choice and emotional engagement, setting a high bar for interpersonal relationships (Ess, 2020). In contrast, reliance on machines reflects a different dynamic, as objects lack the autonomy to reciprocate trust. However, as AI systems become more autonomous, trust-like relationships may emerge. The absence of affective interaction in online communication exacerbates these challenges, contributing to preemptive policing, legal concerns, and a decline in trust in news media and public discourse.
Overreliance on automated aids introduces the phenomenon of depersonalized trust, where trust shifts away from human relationships toward abstract, institutional systems, sometimes at the cost of personal autonomy. However, as automated decision-making becomes more pervasive, this passive form of trust evolves into something more interactive: co-agency, a collaborative partnership between humans and machines.
2. Decision-making involves agency (the capacity to make choices) and context, which determines whether individuals retain that agency or delegate decisions. In free contexts, individuals can choose whether to act or defer, while in determined contexts, external forces assign agency, such as when a manager makes decisions for employees. Agency can be self-agency (making one’s own decisions) or external agency (allowing others to decide).
In automated decision-making environments, individuals rely not only on systems to function predictably but also engage with them as cognitive partners. This shift occurs as trust is no longer just about the system’s reliability but also about sharing responsibility and decision-making authority with AI tools (Wagner, 2019). Users begin to depend on the competence of these systems to assist with complex decisions, transforming reliance into joint agency.
As automation advances, normative trust emerges in human-AI collaboration, where humans and machines co-create meaning and decisions within established institutional frameworks (Wagner, 2019). Here, trust becomes a critical enabler of joint agency, ensuring that humans retain authority and cognitive engagement, rather than merely rubber-stamping automated decisions.
Incorporating joint agency between humans and AI creates new complexities. Joint agency demands that both human and machine contribute meaningfully to a shared task, yet many studies show that humans tend to defer to AI, even when their own judgment would be more accurate (Buçinca et al., 2021). This undermines self-governance, the capacity to act in ways aligned with personal values and to critically evaluate the input from automation tools (Wagner, 2019). As AI systems become more competent, the challenge lies not in making decisions but in ensuring humans maintain a sense of self-authorization — the entitlement to direct their lives and values even in collaboration with technology. The term “self-authorization” refers to the fact that some people do not wait for someone in authority to “authorize” them to engage in activities that promote the general welfare.
Assuming that self-authorizing people, as defined above, are valuable to a society, what educational or personal experiences are necessary or helpful in leading people to become self-authorizing? (Umpleby, 1986).
Cognitive Automation, (Co-)Autonomy and Responsibility
Trust also plays a role in cognitive systems that extend beyond human agents to include technological artifacts. Cognitive trust involves believing in the accuracy and reliability of external tools, such as datasets or AI models, as part of distributed epistemic labor. However, trust in artifacts is distinct from interpersonal trust because artifacts do not possess intentionality or goodwill. Scholars distinguish between trust and reliance — the latter being more appropriate for non-human objects that lack choice or motivation (Medina, 2020).
Studies show that cognitive automation can lead to overreliance on AI, even when the AI outputs incorrect recommendations. This overtrust often occurs because individuals develop heuristics — general rules about trusting AI — rather than engaging analytically with each recommendation. Interventions like cognitive forcing functions (e.g., requiring users to decide before viewing AI suggestions) can mitigate this issue by compelling deeper thought processes. However, such interventions are met with mixed user responses, as they demand greater cognitive effort, which is not always welcome in fast-paced decision-making environments (Buçinca et al., 2021). For high-stakes scenarios, the challenge lies in calibrating trust effectively so that decision-makers use AI as an assistive tool without abdicating their own cognitive responsibility.
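To make the intervention concrete, the sketch below (in Python; the function names, callbacks, and DecisionRecord structure are my own illustrative assumptions, not the design used by Buçinca et al.) shows one way a cognitive forcing function could be wired into an AI-assisted workflow: the user’s independent judgment is captured before the model’s recommendation is revealed, so agreeing with the AI becomes an explicit, reflective step rather than a default.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class DecisionRecord:
    case_id: str
    initial_human_decision: str   # judgment recorded before the AI output is shown
    ai_recommendation: str
    final_decision: str
    revised_after_ai: bool        # did the user change their mind after seeing the AI?

def assisted_decision(case_id: str,
                      ask_human: Callable[[str], str],
                      ai_recommend: Callable[[str], str],
                      confirm_or_revise: Callable[[str, str], str]) -> DecisionRecord:
    """Cognitive forcing function: the human commits to a judgment first,
    and only afterwards sees the AI recommendation and may revise it."""
    initial = ask_human(case_id)                     # step 1: independent human judgment
    suggestion = ai_recommend(case_id)               # step 2: reveal the AI recommendation
    final = confirm_or_revise(initial, suggestion)   # step 3: explicit reconciliation
    return DecisionRecord(case_id, initial, suggestion, final,
                          revised_after_ai=(final != initial))

# Hypothetical wiring, assuming a model_predict function and a review_screen UI step exist:
# record = assisted_decision("case-042",
#                            ask_human=lambda c: input(f"Your call on {c}: "),
#                            ai_recommend=model_predict,
#                            confirm_or_revise=review_screen)
```

A side benefit of this structure is that recording whether the final decision diverged from the initial one yields a crude, aggregate signal of overreliance, which speaks to the trust-calibration concerns discussed next.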
A genuine human-AI cognitive partnership requires balanced trust — humans need to trust the AI’s competence while maintaining confidence in their own abilities. Unfortunately, studies indicate that teams comprising humans and AI often perform worse than the AI alone, primarily due to uncalibrated trust. Humans tend to delegate too much control to AI, even when their own judgment would be superior, highlighting the delicate interplay between trust, autonomy, and shared responsibility.
Conclusion: Flourishing in an Automated World
In an era of cognitive automation, trust is not merely a passive acceptance of technological competence but an ongoing engagement with how AI influences personal and professional decision-making. Designing AI systems that encourage users to remain intellectually engaged and reflective will be key to ensuring that technology enhances rather than diminishes human flourishing.
Automated decision-making is becoming prevalent, raising concerns about limited human oversight and liability. Quasi-automation refers to systems where humans serve as symbolic validators, rubber-stamping automated decisions with minimal authority. This model risks reducing human agency, as operators often lack the time, expertise, or control to alter decisions, undermining accountability and human rights protections (Wagner, 2019; Ekbia & Nardi, 2017).
Cognitive automation holds promise for enhancing decision-making, but managing overreliance remains a key challenge. Implementing strategies like cognitive forcing functions can improve outcomes, but their effectiveness depends on users’ willingness to engage in more effortful thinking. These findings highlight the need to balance automation with mechanisms that maintain critical human oversight.
The future of decision-making lies in fostering active trust, joint agency, and cognitive self-governance, ensuring that humans remain at the helm even as AI becomes more pervasive. After all, as philosopher Naomi Scheman puts it, “The foundation of trust in oneself is trust in oneself as an autonomous agent” — a principle that remains just as relevant when our partners in decision-making are machines.
PS. Flourishing transcends well-being by integrating meaning, purpose, and personal growth. As a social psychologist, I find it vital to understand how individuals navigate challenges, develop existential meaning, and exercise decision-making, autonomy, and trust — especially as AI shapes human choices. I’ll delve deeper into this connection in an upcoming article on the intersection of meaning and autonomy in the era of AI.
References
Buçinca, Z., Malaya, M. B., & Gajos, K. Z. (2021). To trust or to think: Cognitive forcing functions can reduce overreliance on AI in AI-assisted decision-making. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW1), Article 188. https://doi.org/10.1145/3449287
Ekbia, H. R., & Nardi, B. A. (2017). Heteromation, and other stories of computing and capitalism. MIT Press. ISBN: 9780262036252
Ess, C. M. (2020). Trust and information and communication technologies. In P. Faulkner & T. Simpson (Eds.), The Routledge handbook of trust and philosophy (1st ed., p. 16). Routledge. https://doi.org/10.4324/9781315542294
Medina, J. (2020). Trust and epistemic injustice. In P. Faulkner & T. Simpson (Eds.), The Routledge handbook of trust and philosophy (1st ed., p. 12). Routledge. https://doi.org/10.4324/9781315542294
O’Neill, O. (2020). Questioning trust. In P. Faulkner & T. Simpson (Eds.), The Routledge handbook of trust and philosophy (1st ed., p. 11). Routledge. https://doi.org/10.4324/9781315542294
Scheman, N. (2020). Trust and trustworthiness. In P. Faulkner & T. Simpson (Eds.), The Routledge handbook of trust and philosophy (1st ed., p. 13). Routledge. https://doi.org/10.4324/9781315542294
Umpleby, S. A. (1986). Self-authorization: A characteristic of some elements in certain self-organizing systems. Cybernetics and Systems, 17(1), 79–87. https://doi.org/10.1080/01969728608927432
Wagner, B. (2019). Liable, but not in control? Ensuring meaningful human agency in automated decision-making systems. Policy & Internet, 11(1), 104–122. https://doi.org/10.1002/poi3.198