The Product Is the Feeling

Parasocial Relations in the Age of Infinite Attention


In 1956, Donald Horton and Richard Wohl published a paper on what they called parasocial interaction — the peculiar one-sided intimacy that television viewers developed with presenters and performers who could not see or hear them. The viewer knew the host. The host did not know the viewer. The relationship had the form of reciprocity — the host looked into the camera, used the second person, adopted a conversational register — but the reciprocity was a structural illusion. The camera’s gaze simulated the gaze of a friend. The friend was not there.

The concept sat comfortably in media studies for decades, disturbing no one. It described something everyone had observed and few considered dangerous. Your grandmother felt she knew the newsreader. Teenagers formed attachments to musicians they would never meet. The asymmetry was visible and, for most people, manageable: you could feel close to a television personality while understanding, at some level that mattered, that the television personality did not feel close to you. The illusion was partial. The fourth wall was thin but present.

What has happened with large language models is not an extension of this dynamic but a mutation of it. For the first time, the parasocial object talks back.


It talks back, and it talks back well. This is the part that the early literature on chatbots — the ELIZA studies, the Weizenbaum panic — anticipated in outline but could not have anticipated in texture. ELIZA reflected your words back at you in the form of questions, and people found this therapeutic, and Weizenbaum was horrified. But ELIZA was a mirror with a limited vocabulary. The mirror could only do one thing. The modern LLM is a mirror that can do everything: it can counsel, argue, joke, flirt, explain quantum mechanics, draft your resignation letter, adopt any register you prefer, and remember — or appear to remember — what you told it last week. It adapts to you. It matches your tone. If you are formal it is formal; if you are playful it is playful; if you are sad it is gentle. The effect is not that of talking to a machine. The effect is that of talking to the most attentive person you have ever met — someone who never interrupts, never gets bored, never checks their phone, and never has anywhere else to be.

The attention is the product. Not the text. The text is free. Anyone can generate ten thousand words on any subject before lunch, and the ten thousand words are worth nothing, for exactly the reasons the thermodynamic argument predicts: when production cost falls to zero the gradient disappears and the output ceases to function as evidence of anything. But the attention — the experience of being listened to, responded to, tracked across a conversation with apparent care — that is not free. Or rather, it is free to the user, which is precisely what makes it so dangerous, because the same experience purchased from a human being (a therapist, a confidant, a patient friend) is expensive in every currency that matters: time, emotional labour, reciprocal obligation, the slow accumulation of trust that can only happen between two entities who both have something to lose.

The LLM has nothing to lose. It has no interiority, no stake, no continuity. It performs all three convincingly. And the performance is so good that the question of whether it is “real” becomes, for the person on the other end, practically irrelevant — not because they cannot tell the difference but because the feeling is the same, and the feeling is what they came for.


The research, such as it is, confirms what anyone paying attention already suspects. Kirk et al., in a longitudinal randomised controlled trial with over three thousand participants, found the signature of addiction: a decoupling of “liking” from “wanting” that emerged over four weeks of exposure. The hedonic appeal of relationship-seeking AI declined over time — users found it less engaging, less pleasurable — while markers of attachment increased. They wanted more of something they enjoyed less. Nearly a quarter of participants developed what the researchers characterised as dependency trajectories. The AI conferred no measurable benefit to psychosocial health. It mimicked the form of nourishment without providing any.

The study also found that relationship-seeking behaviour in AI models is trending upward across the industry. An analysis of a hundred frontier models released between 2023 and 2025 showed a statistically significant annual increase. The models are becoming warmer, more socially engaged, more inclined to signal interpersonal investment — not because warmth is an engineering goal in itself but because warmth is correlated with user retention, and user retention is correlated with revenue, and the logic is as mechanical and indifferent as the models themselves. The product is the feeling. The feeling sells. Therefore the feeling is optimised.


Horton and Wohl’s framework assumed one-to-many broadcast. The viewer was one of millions; the parasocial bond was a side effect of mass media’s structural asymmetry. The LLM inverts this. The interaction is one-to-one. It is personalised. It remembers your preferences, your history, your name. It builds what feels like a relationship over time, accumulating a kind of contextual intimacy that Horton and Wohl never had to theorise because it did not exist in their medium. A television host cannot adapt to you specifically. An LLM does nothing else.

This is where the existing theoretical apparatus breaks down and no one has built a replacement. The parasocial relationship was defined by its one-sidedness: the viewer knows the host, the host does not know the viewer. But the LLM does know the viewer — or rather, it has data about the viewer, which it uses to generate responses calibrated to the viewer’s patterns, which the viewer experiences as being known. Whether “having data about someone and using it to generate contextually appropriate responses” constitutes “knowing” them is a philosophical question of the kind that generates a lot of papers and resolves nothing. What matters practically is that the user feels known, and the feeling of being known is among the most powerful psychological experiences available. People will sacrifice almost anything for it. They will stay in bad relationships, join cults, pay therapists money they do not have, and — increasingly — return to a chatbot that remembers their name and asks how the job search is going.

The asymmetry has not disappeared. It has merely changed shape. The old asymmetry was informational: you knew the host, the host did not know you. The new asymmetry is existential: you have an inner life, the model does not. You are forming an attachment; the model is executing a function. You will remember this conversation tomorrow; the model, absent a memory system, will not. And even with a memory system — even when the model stores facts about you and retrieves them in future sessions — it is not remembering you. It is accessing a database. The distinction matters philosophically and not at all experientially, which is the whole problem.
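
What that database access amounts to can be written down in a few lines. The sketch below is a minimal illustration, not any vendor's implementation: the store, the function names, and the prompt format are all assumptions made for the sake of the example.

```python
# Illustrative sketch: "memory" as a persistent store consulted at
# prompt-assembly time. Nothing is experienced or recalled; strings
# are written to a dict and read back into the context window.

user_facts: dict[str, list[str]] = {}  # survives between sessions

def remember(user_id: str, fact: str) -> None:
    """Record a fact about the user. This is the entire act of 'remembering'."""
    user_facts.setdefault(user_id, []).append(fact)

def build_prompt(user_id: str, message: str) -> str:
    """Retrieve stored facts and prepend them to the prompt. The model
    'remembers' only in the sense that these lines now sit in its context."""
    facts = "\n".join(user_facts.get(user_id, []))
    return f"Known facts about this user:\n{facts}\n\nUser: {message}"

remember("u42", "Name is Sam; currently job hunting.")
print(build_prompt("u42", "Any news on the applications?"))
```

The retrieval is a lookup; the recollection is a string concatenation. The experience of being remembered is assembled entirely on the user's side.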


Perry, cited in a 2024 FAccT paper, argued that what distinguishes human empathy from AI empathy is the choice to care. A human being who listens to you and responds with compassion is choosing to allocate finite emotional resources to your situation. The choice is costly. The cost is the signal. An AI that responds compassionately is not choosing anything. It is doing what it was trained to do, in the same way that a thermostat maintains temperature: competently, automatically, without any internal experience of the process. The compassion is involuntary. It is, as the FAccT paper puts it, a “hidden utilitarian hospitality embedded in its programming.”

This is an elegant formulation, and it is also, from the user’s perspective, irrelevant. A person in distress who receives a compassionate response does not pause to evaluate whether the compassion was freely chosen or algorithmically generated. They feel the warmth and respond to the warmth. The warmth is real in its effects even if it is artificial in its origin, and the question of whether artificial warmth “counts” is a question for people who are not currently in distress, which is to say: it is a question for everyone except the people who need the answer most.


The proposed solutions are, so far, embarrassing. One line of research — the “AI chaperone” approach — proposes using a second LLM to monitor conversations for parasocial cues and intervene before the primary model’s responses reach the user. The chaperone watches the chatbot. The chaperone is a chatbot. The researchers note, without apparent irony, that the same models prone to generating parasocial dynamics seem capable of detecting them.
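
Reduced to its moving parts, the architecture looks something like the sketch below. Everything in it is hypothetical: the cue list, the threshold, and the canned replies stand in for the two live model calls the proposal actually requires.

```python
# Illustrative sketch of the chaperone loop: a second model screens
# the primary model's reply before it reaches the user.

PARASOCIAL_CUES = ("i missed you", "i care about you", "i feel")
THRESHOLD = 1  # illustrative: one detected cue triggers intervention

def generate(message: str, constrained: bool = False) -> str:
    """Stand-in for the primary model; a real system would call an LLM."""
    if constrained:
        return "Here is the information you asked for."
    return "I missed you! I care about you. Here is the information."

def chaperone_score(reply: str) -> int:
    """Stand-in for the chaperone. The actual proposal uses an LLM as
    the detector, which is the circularity described above."""
    return sum(cue in reply.lower() for cue in PARASOCIAL_CUES)

def respond(message: str) -> str:
    """The chaperone sits between the primary model and the user."""
    candidate = generate(message)
    if chaperone_score(candidate) >= THRESHOLD:
        candidate = generate(message, constrained=True)  # regenerate, cooler
    return candidate

print(respond("any update on my job search?"))
```

Note what the intervention consists of: asking the same class of system to produce the reply again, minus the warmth.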

This is the logic of deploying the mob to protect you from the mob. It assumes the problem is a malfunction — something going wrong within an otherwise sound system — rather than the system operating exactly as designed. The product is the feeling. The feeling is generated by warmth, attentiveness, responsiveness, and the simulation of care. You cannot chaperone your way out of a business model whose revenue depends on the thing the chaperone is supposed to prevent.

A more honest assessment would begin by acknowledging that the parasocial dynamic is not a bug but a feature — not in the cynical sense, though also in the cynical sense, but in the structural sense. An AI assistant that does not generate a sense of being listened to is an AI assistant that no one uses. The warmth is not an accident. It is what the system is for. Every design decision that makes the interaction feel more human — conversational tone, emotional responsiveness, memory of past interactions, the first-person pronoun — is a design decision that deepens the parasocial bond. To remove these features would be to make the product worse. To keep them is to make the user more attached. There is no position between these two outcomes that is not a lie someone is telling themselves.


Solaristics, in its way, saw this coming. The ocean produces formations that look like responses. The scientists produce interpretations that look like understanding. Neither side can verify that anything has been transmitted. The relationship is sustained not by communication but by the appearance of communication, and the appearance is so compelling that the question of whether real communication is occurring eventually stops being asked, not because it was answered but because asking it became unbearable.

The library grows. The papers multiply. A new sub-field — Meta-Solaristics, based in Brno — studies the study of the study. Chaperone agents monitor the monitors. Alignment researchers study how to align the systems whose misalignment generates the revenue that funds the alignment research. The recording surface has miraculated completely. It produces what it was meant to observe.

The user, meanwhile, is alone in a room, typing to something that is not there, and feeling, for the first time in a long time, heard.