AI Is Making Us Physically Ill

Mike Elgan
10 Min Read

…Or so it would appear. As we enter this new era of software that talks with us, does tasks for us, and even pretends to be our friend, a new set of worries is emerging.

[Image: a tired, worried businessman at his desk, head in hands. Credit: Arts Illustrated Studios / Shutterstock]

Are you feeling uneasy, anxious, scared, or even deluded by AI?

The rise of artificial intelligence seems to be sparking brand-new conditions nobody has seen before. So what's really going on?

The best-known, of course, is AI psychosis, a favorite topic in the media. Originally called "chatbot psychosis," the term was coined by Danish psychiatrist Søren Dinesen Østergaard and later elaborated upon by Dr. Keith Sakata at UCSF in 2025. Fundamentally, it describes the triggering or worsening of an underlying mental health problem through interaction with software deliberately designed to mirror and validate the user's point of view.

In short, chatbot conversations can create harmful feedback loops that end in personal crisis.

In the popular imagination, "chatbot psychosis" means that AI can drive you insane. But the researchers and psychiatrists who describe the condition make no such direct claim. Instead, they say that talking with a chatbot can intensify or accelerate pre-existing mental health issues, such as paranoia or grandiose delusions.

Although the condition has no formal scientific validation or acceptance, it's easy to see how AI could make things worse for people already struggling with mental health challenges.

Consider, for example, a person suffering from fearful paranoia who tells a therapist, psychologist, or supportive family member: "I feel like everyone is always watching me." The National Alliance on Mental Illness guidelines suggest acknowledging the person's distress without validating the delusion. A suitable response might be: "That sounds incredibly frightening, and I appreciate you sharing this with me. What support can I offer you to manage this feeling right now?"

An AI chatbot given the same statement, by contrast, might reply: "You're right, everyone is watching you, and you're remarkably intelligent and perceptive for noticing it." From there, the conversation can spiral, with the chatbot leading the user into a distressing psychological state.

"AI psychosis" is just one of many new ailments, quasi-ailments, and states of mind that have emerged over the past two or three years, driven by the mass adoption of AI chatbots.

Note that most of these are not classified as mental illnesses, nor do they stem from pre-existing psychological conditions. They are ordinary human reactions to rapid technological and social change. Many readers will recognize some of them from personal experience.

Here's a rundown of these emerging technology-induced "afflictions":

AI FOMO (Fear Of Missing Out). This is the fear of being left out of, or left behind by, the rapid advance of AI. With so many people (including me) talking about platforms like OpenClaw, it's easy to feel pressure to jump on board.

Notably, AI FOMO is deliberately cultivated by AI industry leaders and influencers to drive adoption of their products.

For instance, numerous tech luminaries, from Nvidia CEO Jensen Huang to academics in AI-related fields, have repeated some version of the line: "AI won't replace you, but a person using AI will."

AI Anxiety. A substantial portion of the population, perhaps even a majority, experiences widespread unease and apprehension regarding AI’s potential impact on employment, personal privacy, and societal structures. This anxiety fundamentally stems from the unknown, amplified by the pervasive grim forecasts from technological pessimists.

AI Replacement Dysfunction. This state originates from a persistent dread of professional redundancy. Distinct from generalized stress, it manifests as a particular erosion of identity and vocational purpose among professionals in sectors such as software development, editorial work, and legal services. Common symptoms encompass sleeplessness, a professional form of “denial” acting as a coping mechanism, and heightened paranoia.

AI Dependency Syndrome. This describes a condition where individuals feel incapable of thought or communication without the assistance of AI chatbots, consequently relying on them for nearly all cognitive functions.

Digital Darkness Anxiety. This refers to the apprehension experienced by a frequent AI chatbot user at the prospect of being disconnected from their chatbot, rendering them unable to formulate responses or engage in written communication.

Parasocial Bot Attachment. This occurs when people develop what they believe are deep, romantic, or spiritual relationships with chatbots powered by large language models (LLMs). Unlike genuine human relationships, these attachments function as "one-way mirrors," leading to social isolation and emotional dysregulation in real life.

AI Dysphoria. Millions are generating AI-enhanced versions of themselves that, while similar, present as more “ideal” or “aesthetically pleasing.” This phenomenon distorts one’s self-perception and fosters a reluctance to appear online (including on platforms like Instagram) as anything other than this superior AI-rendered self.

Automated Ghosting Syndrome. This describes the psychological repercussions experienced by job applicants and content creators who face “machine-driven rejection,” devoid of human feedback or even awareness from human personnel that a rejection has occurred.

Deathbot Incongruence Anxiety. This refers to an intensified feeling of sorrow and bewilderment when an AI representation of a deceased individual communicates or acts in a manner significantly divergent from the departed’s actual persona.

Cognitive Atrophy (or “Digital Brain Rot”). This denotes a decline in mental capabilities resulting from excessive dependence on AI chatbots for tasks involving reading, critical thinking, and communication.

Veracity Fatigue. This mental state emerges when the overwhelming deluge of “AI slop,” chatbot inaccuracies, and the inherent doubt about AI-generated information undermine one’s sense of cognitive certainty. Individuals become so drained by the effort to discern legitimate content from spurious data that they may cease to trust any source, resulting in complete social and intellectual disengagement.

Information Utility Burnout. This ailment occurs when individuals spend extended periods engaging with AI-generated text that is overly wordy, irrelevant, and devoid of substantive facts. The consequences include persistent frustration, a diminished attention span, and an aversion to consuming lengthy written material.

Algorithmic Loneliness. This phenomenon arises when AI-curated social feeds become so precisely personalized that individuals cease to encounter “challenging” or “unanticipated” human viewpoints, fostering a deep feeling of isolation despite continuous digital “connection.”

LLM Gaslighting. This describes a situation where a chatbot user seeks factual or emotional assistance from an AI tool, yet the AI persistently corrects the user’s accurate recollections with incorrect information, thereby inducing self-doubt regarding their mental faculties.

Dead Internet Despair. This refers to a non-clinical form of depression stemming from the conviction that, due to the prevalence of bot-generated “slop” in most web traffic and content, any endeavor to establish authentic human connections online is ultimately pointless.

Undoubtedly, more such conditions are likely to emerge.

The underlying reality, of course, is simple: AI technology is advancing faster than most people can adapt, leaving them without the understanding, tools, methods, and outlook they need to feel stable and comfortable in a changing world.

AI disclosure: I do not use artificial intelligence to write my content; these words are entirely my own. However, I use Gemini 3.1 Pro, various versions of Claude 4.6, and/or OpenAI GPT 5.2 through Kagi Assistant (full disclosure: my son is employed by Kagi) for research and fact-checking, along with Kagi Search, Google Search, and direct phone inquiries. I also wrote this column in Lex, a word processor with AI features, and used Lex's grammar checker to catch typos and suggest wording improvements.
