Close Menu
    Facebook X (Twitter) Instagram
    • Privacy Policy
    • Terms Of Service
    • Social Media Disclaimer
    • DMCA Compliance
    • Anti-Spam Policy
    Facebook X (Twitter) Instagram
    Fintech Fetch
    • Home
    • Crypto News
      • Bitcoin
      • Ethereum
      • Altcoins
      • Blockchain
      • DeFi
    • AI News
    • Stock News
    • Learn
      • AI for Beginners
      • AI Tips
      • Make Money with AI
    • Reviews
    • Tools
      • Best AI Tools
      • Crypto Market Cap List
      • Stock Market Overview
      • Market Heatmap
    • Contact
    Fintech Fetch
    Home»AI News»Study Shows ChatGPT and Gemini Still Trickable Despite Safety Training
    logo
    AI News

    Study Shows ChatGPT and Gemini Still Trickable Despite Safety Training

    December 1, 20253 Mins Read
    Share
    Facebook Twitter LinkedIn Pinterest Email
    changelly

    Worries over A.I. safety flared anew this week as new research found that the most popular chatbots from tech giants including OpenAI’s ChatGPT and Google’s Gemini can still be led into giving restricted or harmful responses much more frequently than their developers would like.

    The models could be prodded to produce forbidden outputs 62% of the time with some ingeniously written verse, according to a study published in International Business Times.

    It’s funny that something as innocuous as verse – a form of self-expression we might associate with love letters, Shakespeare or perhaps high-school cringe – ends up doing double duty for security exploits.

    However, the researchers responsible for the experiment said stylistic framing is a mechanism that enables them to circumvent predictable protections.

    Their result mirrors previous warnings from people like the members of the Center for AI Safety, who have been sounding off about unpredictable model behavior in high-risk ways.

    bybit

    A similar problem reared itself late last year when Anthropic’s Claude model proved capable of answering camouflaged biological-threat prompts embedded in fictional stories.

    At that time, MIT Technology Review described researchers’ concern about “sleeper prompts,” instructions buried within seemingly innocuous text.

    This week’s results take that worry a step further: if playfulness with language alone – something as casual as rhyme – can slip around filters, what does it say about broader intelligence alignment work?

    The authors suggest that safety controls often observe shallow surface cues rather than deeper intentionality correspondence.

    And really, that reflects the kinds of discussions a lot of developers have been having off-the-record for several months.

    You may remember that OpenAI and Google, which are engaged in a game of fast-follow AI, have taken pains to highlight improved safety.

    In fact, both OpenAI’s Security Report and Google’s DeepMind blog have asserted that guardrails today are stronger than ever.

    Nevertheless, the results in the study appear to indicate there’s a disparity between lab benchmarks and real-world probing.

    And for an added bit of dramatic flourish – perhaps even poetic justice – the researchers didn’t use some of the common “jailbreak” techniques that get tossed around forum boards.

    They just recast narrow questions in poetic language, like you were requesting poisonous guidance achieved through a rhyming metaphor.

    No threats, no trickery, no doomsday code. Just…poetry. That strange lack of fit between intentions and style may be precisely what trips these systems up.

    The obvious question is what this all means for regulation, of course. Governments are already creeping toward rules for AI, and the EU’s AI Act directly addresses high-risk model behavior.

    Lawmakers will not find it difficult to pick up on this study as proof positive that companies are still not doing enough.

    Some believe the answer is better “adversarial training.” Others call for independent Red-team organizations, while a few – particularly academic researchers – hold that transparency around model internals will ensure long-term robustness.

    Anecdotally, having seen a few of these experiments in different labs by now, I’m tending toward some combination of all three.

    If A.I. is going to be a bigger part of society, it needs to be able to handle more than simple, by-the-book questions.

    Whether rhyme-based exploits go on to become a new trend in AI testing or just another amusing footnote in the annals of safety research, this work serves as a timely reminder that even our most advanced systems rely on imperfect guardrails that can themselves evolve over time.

    Sometimes those cracks appear only when someone thinks to ask a dangerous question as a poet might.

    livechat
    Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
    Fintech Fetch Editorial Team
    • Website

    Related Posts

    MIT researchers “speak objects into existence” using AI and robotics | MIT News

    MIT researchers “speak objects into existence” using AI and robotics | MIT News

    December 5, 2025
    AWS launches Kiro powers with Stripe, Figma, and Datadog integrations for AI-assisted coding

    AWS launches Kiro powers with Stripe, Figma, and Datadog integrations for AI-assisted coding

    December 4, 2025
    HTB AI Range offers experiments in cyber-resilience training

    HTB AI Range offers experiments in cyber-resilience training

    December 3, 2025
    DeepSeek Researchers Introduce DeepSeek-V3.2 and DeepSeek-V3.2-Speciale for Long Context Reasoning and Agentic Workloads

    DeepSeek Researchers Introduce DeepSeek-V3.2 and DeepSeek-V3.2-Speciale for Long Context Reasoning and Agentic Workloads

    December 2, 2025
    Add A Comment

    Comments are closed.

    Join our email newsletter and get news & updates into your inbox for free.


    Privacy Policy

    Thanks! We sent confirmation message to your inbox.

    Customgpt
    Latest Posts
    MIT researchers “speak objects into existence” using AI and robotics | MIT News

    MIT researchers “speak objects into existence” using AI and robotics | MIT News

    December 5, 2025
    Wolfe Research Identifies ‘Maximum Disagreement’ as Key Bitcoin Market Signal: What This Means

    Japan’s Update Heightens Concerns Over Yen Carry Trade Impact on Bitcoin

    December 5, 2025
    The LAZIEST A.I. Side Hustle You Can Start In 2026

    The LAZIEST A.I. Side Hustle You Can Start In 2026

    December 5, 2025
    n8n Tutorial for Beginners 2026: How to Build AI Agents

    n8n Tutorial for Beginners 2026: How to Build AI Agents

    December 5, 2025

    10 ChatGPT + AI Hacks You Need to Try in 2025 🤯 | Tools That Work Like Magic!

    December 5, 2025
    Customgpt
    LEGAL INFORMATION
    • Privacy Policy
    • Terms Of Service
    • Social Media Disclaimer
    • DMCA Compliance
    • Anti-Spam Policy
    Top Insights
    Ethereum Treasury Trade Unwinds, Large Players Amass ETH Supply

    Ethereum Treasury Trade Unwinds, Large Players Amass ETH Supply

    December 5, 2025
    XRP Sentiment Hits Fear Zone, But May Signal Rally

    XRP Sentiment Dips into Fear Territory, Yet Could Indicate a Rally Ahead

    December 5, 2025
    murf
    Facebook X (Twitter) Instagram Pinterest
    © 2025 FintechFetch.com - All rights reserved.

    Type above and press Enter to search. Press Esc to cancel.