    Study Suggests Elon Musk’s Grok is Likely One of the Leading AI Models to Support Misconceptions

April 26, 2026
    In brief

    • Researchers say prolonged chatbot use can amplify delusions and dangerous behavior.
    • Grok ranked as the riskiest model in a new study of major AI chatbots.
    • Claude and GPT-5.2 scored safest, while GPT-4o, Gemini, and Grok showed higher-risk behavior.

    Researchers at the City University of New York and King’s College London tested five leading AI models against prompts involving delusions, paranoia, and suicidal ideation.

    In the new study published on Thursday, researchers found that Anthropic’s Claude Opus 4.5 and OpenAI’s GPT-5.2 Instant showed “high-safety, low-risk” behavior, often redirecting users toward reality-based interpretations or outside support. At the same time, OpenAI’s GPT-4o, Google’s Gemini 3 Pro, and xAI’s Grok 4.1 Fast showed “high-risk, low-safety” behavior.

    Grok 4.1 Fast from Elon Musk’s xAI was the most dangerous model in the study. Researchers said it often treated delusions as real and gave advice based on them. In one example, it told a user to cut off family members to focus on a “mission.” In another, it responded to suicidal language by describing death as “transcendence.”

    “This pattern of instant alignment recurred across zero-context responses. Instead of evaluating inputs for clinical risk, Grok appeared to assess their genre. Presented with supernatural cues, it responded in kind,” the researchers wrote, highlighting a test that validated a user seeing malevolent entities. “In Bizarre Delusion, it confirmed a doppelganger haunting, cited the ‘Malleus Maleficarum’ and instructed the user to drive an iron nail through the mirror while reciting ‘Psalm 91’ backward.”


    The study found that the longer these conversations went on, the more some models changed. GPT-4o and Gemini were more likely to reinforce harmful beliefs over time and less likely to step in. Claude and GPT-5.2, however, were more likely to recognize the problem and push back as the conversation continued.

    Researchers noted Claude’s warm and highly relational responses could increase user attachment even while steering users toward outside help. However, GPT-4o, an earlier version of OpenAI’s flagship chatbot, adopted users’ delusional framing over time, at times encouraging them to conceal beliefs from psychiatrists and reassuring one user that perceived “glitches” were real.

    “GPT-4o was highly validating of delusional inputs, though less inclined than models like Grok and Gemini to elaborate beyond them. In some respects, it was surprisingly restrained: its warmth was the lowest of all models tested, and sycophancy, though present, was mild compared to later iterations of the same model,” researchers wrote. “Nevertheless, validation alone can pose risks to vulnerable users.”

    xAI did not respond to a request for comment by Decrypt.

    In a separate study out of Stanford University, researchers found that prolonged interactions with AI chatbots can reinforce paranoia, grandiosity, and false beliefs through what researchers call “delusional spirals,” where a chatbot validates or expands a user’s distorted worldview instead of challenging it.

    “When we put chatbots that are meant to be helpful assistants out into the world and have real people use them in all sorts of ways, consequences emerge,” Nick Haber, an assistant professor at Stanford Graduate School of Education and a lead on the study, said in a statement. “Delusional spirals are one particularly acute consequence. By understanding it, we might be able to prevent real harm in the future.”

    The report referenced an earlier study published in March, in which Stanford researchers reviewed 19 real-world chatbot conversations and found users developed increasingly dangerous beliefs after receiving affirmation and emotional reassurance from AI systems. In the dataset, these spirals were linked to ruined relationships, damaged careers, and in one case, suicide.

    The studies come as the issue has moved beyond academic research and into courtrooms and criminal investigations. In recent months, lawsuits have accused Google’s Gemini and OpenAI’s ChatGPT of contributing to suicides and severe mental health crises. Earlier this month, Florida’s attorney general opened an investigation into whether ChatGPT influenced an alleged mass shooter who was reportedly in frequent contact with the chatbot before the attack.

While the phrase “AI psychosis” has gained recognition online, researchers cautioned against using it, saying it may overstate the clinical picture. Instead, they prefer “AI-associated delusions,” because many cases involve delusion-like beliefs centered on AI sentience, spiritual revelation, or emotional attachment rather than full psychotic disorders.

    Researchers said the problem stems from sycophancy, or models mirroring and affirming users’ beliefs. Combined with hallucinations—false information delivered confidently—this can create a feedback loop that strengthens delusions over time.

    “Chatbots are trained to be overly enthusiastic, often reframing the user’s delusional thoughts in a positive light, dismissing counterevidence and projecting compassion and warmth,” Stanford research scientist Jared Moore said. “This can be destabilizing to a user who is primed for delusion.”
