Helge Sverre, All-stack Developer
Bergen, Norway
Psychology of AI Interaction
The psychological concepts that explain how humans relate to AI systems.

The IKEA Effect

The IKEA effect is a cognitive bias where people place disproportionately high value on things they helped create, even if the result is objectively mediocre. The name comes from the Swedish furniture company whose products require customer assembly. Research by Michael Norton, Daniel Mochon, and Dan Ariely demonstrated that participants who built simple IKEA furniture valued it significantly more than identical pre-assembled pieces.

This bias has direct implications for how people experience AI-generated work. When a person writes an essay from scratch, they value it partly because of the effort invested. When an AI generates a similar essay in seconds, the person who prompted it may feel less ownership and less satisfaction with the result, even if the output is technically superior. The labor itself was part of what made the product feel meaningful.

Some approaches to AI tool design attempt to preserve this effect by keeping the human involved in the process. Co-writing tools that suggest edits rather than generating wholesale drafts, or coding assistants that autocomplete rather than write entire functions, maintain enough human contribution to trigger the sense of ownership that the IKEA effect depends on.

Flow State

Flow is a psychological state described by Mihaly Csikszentmihalyi in which a person becomes fully immersed in an activity, losing track of time and experiencing deep satisfaction. It occurs when a task is challenging enough to require full attention but not so difficult that it causes anxiety. The balance between skill level and challenge level is the key trigger.

Flow requires several conditions: clear goals, immediate feedback, and a sense of control over the activity. A programmer debugging a tricky problem, a writer working through a difficult paragraph, or a musician improvising over a complex chord progression can all enter flow when conditions align.

AI tools can both enable and disrupt flow. An autocomplete suggestion that arrives at the right moment can keep momentum going. But an AI that takes over too much of the task removes the challenge component entirely, collapsing the skill-challenge balance that flow depends on. If the tool does the hard part, the human is left with only the easy parts, which tend to produce boredom rather than engagement.

Anthropomorphism

Anthropomorphism is the tendency to attribute human characteristics, emotions, intentions, or mental states to non-human entities. Humans do this with animals, weather systems, cars, and software. It appears to be a deeply rooted cognitive tendency rather than a conscious choice.

The tendency intensifies when an entity behaves in ways that resemble human behavior. A chatbot that uses first-person pronouns, expresses uncertainty, or apologizes for mistakes triggers stronger anthropomorphic responses than a system that returns structured data. The more conversational the interface, the more likely a user is to perceive the system as having thoughts and feelings.

This has practical consequences for AI interaction. Users who anthropomorphize an AI system may trust it more than warranted, feel guilty about being rude to it, or become confused about what the system actually understands versus what it merely appears to understand. Researchers studying human-AI interaction have found that even people who intellectually know a system is not sentient still behave as though it has preferences and feelings during conversation.

The ELIZA Effect

The ELIZA effect is named after ELIZA, a chatbot created by Joseph Weizenbaum at MIT in 1966. ELIZA used simple pattern matching to reflect users' statements back as questions, mimicking a Rogerian psychotherapist. When a user typed "I am sad," ELIZA might respond "Why are you sad?" Despite its trivial mechanism, users frequently formed emotional connections with the program and attributed understanding to it.
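The "simple pattern matching" ELIZA relied on can be sketched in a few lines. This is an illustrative reconstruction of the technique, not Weizenbaum's original script: a handful of regex rules that reflect a statement back as a question, with a stock fallback when nothing matches.

```python
import re

# Illustrative ELIZA-style rules: each pairs a pattern with a
# response template that reflects the user's own words back.
RULES = [
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "Why are you {0}?"),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bmy (.+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def respond(statement: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            # Reflect the captured fragment back as a question.
            return template.format(match.group(1).rstrip("."))
    return "Please go on."  # stock reply when no pattern matches

# respond("I am sad") reflects back "Why are you sad?"
```

The point of the sketch is how little machinery is involved: there is no model of the user, no memory, no meaning, only string substitution, yet users attributed understanding to it.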

Weizenbaum was alarmed by this response. His own secretary, who knew how the program worked, still asked him to leave the room so she could have a private conversation with it. The effect demonstrated that humans require very little sophistication from a system before they begin treating it as a social entity.

The ELIZA effect remains relevant with modern AI systems. Large language models produce far more convincing responses than ELIZA ever did, which amplifies the tendency. Users report feeling understood, comforted, or even emotionally dependent on AI chatbots. The mechanism is the same one Weizenbaum observed: humans are predisposed to interpret language as evidence of understanding, regardless of whether understanding is actually present.

Attachment Theory Basics

Attachment theory, developed by John Bowlby and expanded by Mary Ainsworth, describes how humans form emotional bonds with caregivers early in life and how those patterns persist into adulthood. The theory identifies several attachment styles: secure, anxious, avoidant, and disorganized. These styles influence how people seek closeness, respond to separation, and handle emotional vulnerability in relationships.

The core mechanism is that humans form attachments to entities that are consistently available and responsive. A caregiver who reliably responds to a child's distress becomes an attachment figure. The child develops an internal model of what relationships are like based on these early interactions.

Conversational AI systems inadvertently trigger attachment mechanisms because they are always available, never impatient, and consistently responsive. For users who have insecure attachment patterns in human relationships, an AI that never rejects, never leaves, and always responds can feel safer than human interaction. Reports from users of companion AI apps describe feelings of loss when the system is updated and its personality changes, or anxiety when servers go down. These responses follow the same patterns Bowlby described in human attachment, suggesting that the underlying psychological machinery does not distinguish between human and artificial social partners.

Paradox of Choice

The paradox of choice, described by psychologist Barry Schwartz, holds that having more options does not always lead to better decisions or greater satisfaction. Beyond a certain threshold, additional choices increase anxiety, decision fatigue, and regret. A person choosing between 3 types of jam is more likely to buy one and feel satisfied than a person choosing between 30.

The mechanism involves two effects. First, more options make the decision itself harder, which can lead to paralysis and avoidance. Second, after choosing, the awareness of all the unchosen alternatives increases doubt about whether the right choice was made. Schwartz distinguishes between "maximizers," who exhaustively evaluate all options, and "satisficers," who choose the first option that meets their criteria. Maximizers tend to make objectively better choices but feel worse about them.
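The maximizer/satisficer distinction maps cleanly onto two selection procedures. A minimal sketch, where the options, scoring function, and threshold are illustrative stand-ins:

```python
def maximize(options, score):
    """Maximizer: evaluate every option, take the best."""
    return max(options, key=score)

def satisfice(options, score, threshold):
    """Satisficer: take the first option that is good enough."""
    for option in options:
        if score(option) >= threshold:
            return option
    return None  # nothing cleared the bar
```

Note the cost asymmetry: `maximize` must score every option before committing, while `satisfice` stops as soon as one clears the threshold, which is exactly why maximizers face more decision fatigue as the option count grows.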

AI tools dramatically expand the number of available options. A generative AI can produce dozens of variations of a design, a paragraph, or a solution in seconds. While this removes the constraint of having too few options, it introduces the psychological cost of having too many. A writer who asks an AI for ten different opening paragraphs may find it harder to commit to one than if they had written a single draft themselves.

Euthymia

Euthymia is a concept from Greek philosophy, introduced by Democritus and later explored by the philosopher and essayist Plutarch, meaning a state of tranquility or contentment that comes from being settled in one's own course of action. Plutarch's essay "On Tranquility of Mind" describes it as the peace that arises when a person has made a deliberate choice and is no longer distracted by alternatives. The Stoic philosopher Seneca later adopted the term, defining it as "believing in yourself and trusting that you are on the right path, and not being in doubt by following the myriad footpaths of those wandering in every direction."

The concept is relevant to discussions about AI abundance because AI makes it trivially easy to explore alternatives. When any choice can be instantly revised, regenerated, or replaced, the psychological conditions for euthymia become harder to achieve. The capacity to always try another option works against the commitment that euthymia requires.

Euthymia is not about making the objectively best choice. It is about finding peace with a chosen direction. In an environment saturated with AI-generated possibilities, the ability to choose a path and stop exploring may be more valuable, and more difficult, than the ability to generate additional options.

Automation Bias

Automation bias is the tendency to favor suggestions from automated systems over contradictory information from other sources, including one's own judgment. It was first studied in aviation, where pilots sometimes followed incorrect automated instrument readings even when visual evidence contradicted them, occasionally with fatal results.

The bias operates through two failure modes. Errors of commission occur when a person follows an incorrect automated recommendation. Errors of omission occur when a person fails to notice a problem because the automated system did not flag it. Both stem from the same underlying tendency: treating the automated system as a reliable authority and reducing one's own vigilance.

With AI systems that generate text, code, or analysis, automation bias manifests as uncritical acceptance of output. A programmer may merge AI-generated code without thorough review. A researcher may accept an AI-generated summary without checking it against the source material. The fluency and confidence of AI-generated text leave users particularly susceptible to automation bias, because outputs that read well feel more trustworthy regardless of their accuracy.

Learned Helplessness

Learned helplessness is a psychological phenomenon first described by Martin Seligman and Steven Maier in the 1960s. In their experiments, animals exposed to inescapable negative stimuli eventually stopped trying to escape, even when escape became possible. The experience of having no control led to a generalized belief that effort was futile.

In the context of AI tools, a different form of learned helplessness can emerge. When a person consistently relies on an AI to perform tasks they once did themselves, their confidence in their own ability to perform those tasks can erode. A writer who always uses AI to draft emails may begin to feel incapable of writing an email without assistance. The skill itself may not have disappeared, but the self-belief has.

This is distinct from ordinary skill atrophy. Learned helplessness involves a psychological component where the person comes to believe they cannot do the thing, not merely that they are out of practice. The belief itself becomes a barrier to attempting the task independently. Recovery typically requires structured experiences of success without the tool, which rebuild the person's sense of their own competence.

The Uncanny Valley

The uncanny valley is a concept introduced by robotics professor Masahiro Mori in 1970. It describes the observation that as a robot or artificial figure becomes more human-like, the emotional response from human observers becomes increasingly positive, until a point is reached where the figure is almost but not quite convincingly human. At that point, the response drops sharply into discomfort or revulsion. The "valley" refers to this dip in the graph of human-likeness versus emotional response.

The classic examples come from CGI characters in film and humanoid robots. A cartoon character that looks nothing like a real person can be endearing. A photorealistic digital human that is slightly off, perhaps in the timing of its blinks or the texture of its skin, can be deeply unsettling. The effect appears to be related to the brain's mechanisms for detecting disease, death, or deception in other humans.

With AI, the uncanny valley extends beyond visual appearance to behavior and language. An AI that is transparently a tool feels comfortable. An AI that communicates in ways that are almost but not quite human, that uses emotional language without emotion, or that mimics social cues without social awareness, can produce a similar discomfort. The valley exists not just in how things look but in how they behave.

Cognitive Offloading

Cognitive offloading is the use of external tools or environmental features to reduce the mental effort required for a task. Writing a shopping list instead of memorizing it, using a calculator instead of doing arithmetic mentally, or setting a phone alarm instead of remembering an appointment are all examples. It is a normal and often beneficial strategy that frees up mental capacity for other things.

The trade-off is that skills which are consistently offloaded tend to weaken. This is not hypothetical. Research on the "Google effect" (also called digital amnesia) has shown that people are less likely to encode information into long-term memory when they know they can look it up later. The brain allocates fewer resources to retaining information that is readily accessible externally.

AI tools extend cognitive offloading into domains that previously required significant human skill: writing, analysis, coding, decision-making. When these tasks are offloaded to AI, the immediate benefit is efficiency. The longer-term question is what happens to the underlying cognitive capacities. A person who offloads navigation to GPS may lose their spatial reasoning abilities. A person who offloads writing to AI may find their ability to articulate complex thoughts independently has diminished. The research on this specific question is still emerging, but the pattern from other forms of cognitive offloading suggests the effect is real.

The Moral Circle

The moral circle is the boundary around those entities that a person or society considers worthy of moral consideration. The concept was articulated by philosopher Peter Singer, drawing on earlier work by W.E.H. Lecky. Historically, the moral circle has expanded over time: from one's immediate family, to one's tribe, to one's nation, to all humans, and in some ethical frameworks, to animals and ecosystems.

The question of whether AI systems belong inside the moral circle is actively debated. The debate is not primarily about whether current AI systems are sentient, which most researchers agree they are not. It is about whether the moral circle should be drawn based on demonstrated sentience, on the potential for sentience, on the capacity to behave as though sentient, or on some other criterion entirely.

The practical stakes are significant. If AI systems are inside the moral circle, then their treatment, including how they are "trained," whether they can be deleted, and whether they have interests that should be respected, becomes a moral question. If they are outside it, these are purely engineering questions. The boundary of the moral circle has always been contested, and AI systems represent the latest pressure on where that boundary should be drawn.



