In a pivotal week for the future of AI and copyright law, regulatory bodies and tech giants are clashing over how to treat AI-generated content.
US Copyright Office Stands Firm: No Human Touch, No Copyright
On Tuesday, the U.S. Copyright Office released the second installment of its comprehensive report on the copyrightability of AI-generated material. The message was crystal clear: only content with direct, original human input qualifies for protection under current copyright law.
According to the report, even the most imaginative AI prompts do not meet the threshold for “authorship.” While users can influence AI outputs, unless they curate, edit, or meaningfully transform them, they hold no copyright over the resulting creations.
The Office outlined three narrow cases where copyright may apply: when a human creator uses AI as an assistive tool, when human-authored material remains perceptible in the output, or when a person selects and arranges AI-generated elements in a sufficiently creative way. The overarching stance remains firm, however: courts have consistently denied copyright claims on pure machine output.
Prompts Are Not Authorship
To illustrate this position, the Copyright Office likened AI prompts to giving instructions to a photographer. While instructions may guide the final image, they do not constitute authorship in themselves. In other words, you can guide the AI, but you can’t claim ownership unless you contribute something uniquely human.
OpenAI Pushes for Broader Data Access in the UK
As the U.S. doubles down on human-centric copyright standards, OpenAI is urging the UK to embrace a more flexible approach. On Wednesday, the company submitted its official response to the UK government’s AI and copyright consultation.
OpenAI is advocating for a sweeping “text and data mining exception” that would allow developers to train AI systems on publicly accessible data—without needing prior consent from rights holders. This proposed framework would create a legal environment where AI models could learn from virtually all public content unless explicitly opted out.
While OpenAI argues this would foster innovation and investment, creators and rights holders see it differently. Artists, authors, and publishers have voiced concerns that such an exception could become a loophole for mass content scraping—risking the economic viability of professional creative industries.
GPT-4o and the Ghibli Trend Stir the Pot
Adding fuel to the fire, a new study from the AI Disclosures Project claims that OpenAI’s GPT-4o shows an unusually high ability to recognize paywalled content, which the researchers argue suggests the model may have been trained on copyrighted material that was never freely accessible.
Complicating matters further, OpenAI’s image generator went viral over the weekend for turning selfies into art inspired by Studio Ghibli—a studio whose co-founder has openly criticized AI since 2016. The popularity of this trend has reignited debates over the ethical and legal boundaries of AI creativity.
Global Legal Landscape: Fragmenting Fast
What’s becoming evident is that copyright laws are evolving at different speeds across jurisdictions. While the U.S. reinforces traditional views that center on human originality, the UK is exploring more permissive frameworks to attract AI innovation.
This legal divergence is creating uncertainty for developers and creators alike. It also highlights the urgent need for international dialogue and harmonized standards. As AI continues to blend machine efficiency with human intent, the line between tool and creator grows increasingly blurry.
Looking Ahead: Who Owns AI Creations?
The copyright debate is far from settled. With governments, tech companies, and creators all pushing for different outcomes, the next few years will be critical in shaping the legal frameworks that define AI’s role in creative industries.
In the meantime, companies like OpenAI, Microsoft, and others are racing to train their models on vast datasets while regulators try to catch up.
One thing is clear: AI may be generating content, but the battle over who owns it is just beginning.