    The 26 words that could kill OpenAI’s Sora

    By FintechFetch | October 29, 2025

    Imagine a Yelp-style review site that lets users generate and post AI video reviews of local businesses. Say one of these videos presents a business in a bad light, and the owner wants to sue for defamation. Can the business go after the reviewer, the review site that hosted the video, or both?

    In the near future, company websites will be infused with AI tools. A home decor brand might use a bot to handle customer service messages. A health provider might use AI to summarize notes from a patient exam. A fintech app might use personalized AI-generated video to onboard new customers. But what happens when someone claims they’ve been defamed or otherwise harmed by AI-generated content? Or, say, claims harm after a piece of their own AI-generated content is taken down?

    The fact is, websites hosting AI-generated content may face more legal jeopardy than ones that host human-created content. That’s because existing defamation laws don’t apply neatly to claims arising from generated content, and how future court cases settle this could limit or expand the kinds of AI content a website operator can safely generate and display. And while the legal landscape is in flux, knowing how the battle is being fought in courtrooms can help companies plan ahead for a world in which AI content is everywhere—and its veracity unclear.

    In 1996, at the dawn of the internet, two forward-thinking lawmakers, Oregon Senator Ron Wyden and then-California Representative Chris Cox, feared that libel lawsuits and aggressive regulation from Washington, D.C., could overwhelm the budding internet companies that would operate forums, social networks, and search engines, stifling growth and slowing investment.

    Wyden and Cox proposed Section 230, a statute in the Communications Decency Act of 1996 whose famous 26 words ensure that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” In other words, a website isn’t liable for content created by others; but if the website had a hand in creating the content, that legal immunity would vanish.

    Since then, Section 230 has proved surprisingly durable, surviving relatively unscathed through the internet boom, the social media craze, and the mobile revolution. At the same time, it has become something of a political lightning rod in recent years as policymakers have explored ways of regulating social media.

    But the next revolution in tech—generative AI—may not enjoy any of Section 230’s protections. Our current AI models do something like “co-create” content alongside the user, who prompts the model to generate what they want. Based on that, tools like ChatGPT and Sora would seem to be excluded from Section 230 legal immunity.

    Alas, it may not be that simple.

    The duality of Sora

    Generative AI companies have been sued several times for libelous output, but none have lost, and none have yet resorted to Section 230 in their defense. In one of the more widely known cases, syndicated radio host Mark Walters sued OpenAI for defamation after ChatGPT falsely claimed that Walters had been accused of embezzling funds from a gun rights group. OpenAI won the case without having to claim Section 230 protection. The chatbot had generated the false information even as it warned that the supposed “accusation” postdated its training data cutoff. OpenAI did not respond to a request for comment on whether the company has used Section 230 as part of a legal defense.

    It gets even trickier with OpenAI’s new Sora app, which lets users generate AI videos and then share them on its TikTok-style social feed. (The Meta AI app does essentially the same.) In the language of Section 230, Sora is both an information content provider (a “speaker” or creator) and a provider of an interactive computer service (a “host”).

    Sora, and hybrid apps like it, may raise the stakes on the question of when Section 230 should be applied. Chatbots can defame with words, but Sora quickly generates alarmingly realistic video, which can convey a message more believably by showing, rather than telling. Combine that with Sora’s seamless distribution of the video and, in the wrong hands, you have an all-in-one tool for defamation.

    A “borderline case”

    If sued, AI companies are likely at some point to reach for Section 230, possibly as a last resort, according to some legal experts, including Eugene Volokh, a law professor at UCLA and a leading thinker on AI and libel. Thorny questions may well arise about how the provision applies to their technology and whether they can use its protections to mount a defense. And though the language of 230 would seem to preclude it, it’s conceivable that a court could, in certain circumstances, accept it as a valid defense.

    Suppose a Sora user generates a video showing a public official taking a bribe, triggering a libel suit. This, Volokh argues, would amount to something of a “borderline case.” On one hand, Sora creates the video itself (meaning “the content is provided by itself”). “On the other hand,” Volokh says, “it’s generating the video based on a query or based on a prompt submitted by a user. So you might say the defamatory elements of that stem from what the user is contributing, not Sora.”

    OpenAI’s lawyers would likely argue that the platform itself doesn’t decide the content of the videos it produces but merely carries out its users’ wishes, Volokh says. That is to say, without a specific prompt from the user, the AI would never have created the offending video in the first place.

    Yet a court may still hew to the letter of Section 230, which withholds immunity from any service responsible, even in part, for the creation of the content in question. If at least part of the “creation” of the video happened during its generation, Sora isn’t covered.

    The fact that Sora provides a mechanism for both creating and distributing a video may weaken OpenAI’s case for 230 protection. Libel law, Volokh says, would require a generated video to be published in order to be considered defamatory “information.” In theory, a court could find that OpenAI should reasonably foresee that a video created on its platform would then be distributed, Volokh says. “And therefore it is basically aiding and abetting this defamation through its own actions of generating the video,” he adds.

    The shield and the sword

    Yet there’s a case to be made that generative AI platforms do deserve the legal protections afforded by Section 230, even if they help both create and distribute the content, says Jess Miers, a law professor at the University of Akron. “Like social media companies, these services face constant challenges from users who generate problematic content, and they need incentives to both allow expressive activity and build guardrails against harmful or infringing outputs.” 

    Indeed, that was the original intent of Section 230, as Wyden told me in 2018. Section 230 provides both a shield and a sword to internet companies. The shield protects them from liability for harmful content posted on their platforms by users. The sword is the law’s “Good Samaritan” clause, which gives them legal cover to actively remove harmful content from their platforms. Before Section 230, tech companies were hesitant to moderate content for fear of being branded “publishers” and, thus, liable for toxic user content.

    With generative apps like Sora, OpenAI’s developers effectively “co-create content” with the user, Miers says. The developers choose the training data, train the models, and do the fine-tuning that shapes the output. The user contributes the prompts. “Congress may need to craft a new kind of protection that captures this co-creative dynamic to preserve expression, safety, and competition in this evolving new market,” Miers says. 
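
    To make that co-creative dynamic concrete, here is a minimal, purely hypothetical sketch; the Platform class, its generate_video method, and the keyword filter are illustrative assumptions, not OpenAI’s actual systems. It shows how the developer’s choices (the model, the guardrails) and the user’s prompt jointly shape what gets produced:

    ```python
    # Hypothetical sketch of the "co-creation" problem: the developer supplies
    # the model and the guardrails; the user supplies only the prompt.
    from dataclasses import dataclass

    @dataclass
    class Platform:
        model_name: str      # chosen and trained by the developer
        banned_terms: tuple  # safety policy written by the developer

        def violates_policy(self, prompt: str) -> bool:
            # Developer-written guardrail, applied before anything is rendered.
            return any(term in prompt.lower() for term in self.banned_terms)

        def generate_video(self, user_prompt: str) -> str:
            if self.violates_policy(user_prompt):
                return "[blocked by platform guardrails]"
            # The output is a function of BOTH the developer's model and the
            # user's prompt, which is the line courts may have to untangle.
            return f"video(model={self.model_name!r}, prompt={user_prompt!r})"

    platform = Platform(model_name="hypothetical-video-model",
                        banned_terms=("bribe",))
    print(platform.generate_video("a mayor accepting a bribe"))  # blocked
    print(platform.generate_video("a cat surfing a wave"))       # generated
    ```

    Even in this toy version, every output passes through a model the developer trained and guardrails the developer wrote, yet nothing is produced until a user asks for it, which is why neither party alone straightforwardly “creates” the content.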

    Congress might try to rewrite Section 230, or construct a new law, to distinguish between defamatory intent on the user’s part and on the AI’s. Doing so would involve digging into the details of how AI models work.

    Lawmakers might start by studying how users could misuse models to create harmful content, such as bypassing safety guardrails or eliminating “made by AI” labels. “If that’s a recurring problem, then a 230-style framework shielding AI companies from liability for users’ misuse could make sense,” Miers says.

    As many Sora app users have noticed, OpenAI is playing it very safe with the kinds of videos it allows. It’s already taken down many, many videos, and has agreed to restrict the use of images of public or historical figures (such as Martin Luther King Jr.) upon request. This suggests that while a Section 230 defense might protect AI companies from libel suits in some circumstances, OpenAI isn’t eager to test the theory.



