Style Isn't IP—But Should It Be?
What the Ghibli x OpenAI moment reveals about creative ownership in the AI age
In an era where a single prompt can echo decades of artistry, what do we owe the original creators?
Over the past two weeks, social media has buzzed with "Ghibli-style" portraits: whimsical, softly lit scenes echoing the hand-drawn magic of Hayao Miyazaki's Studio Ghibli. They were generated by OpenAI's GPT-4o, announced on May 13, 2024, with native image generation rolled out in ChatGPT on March 25, 2025. The internet went wild, turning selfies and memes into Miyazaki-esque art with a single prompt.
But Ghibli didn’t sign up for this. OpenAI didn’t use their frames or claim a partnership—yet the model’s knack for replicating Ghibli’s aesthetic, honed over decades of human craft, raises a thorny question: in an age where AI can mimic a creator’s signature, what does ownership mean? More importantly, should style—the intangible essence of creative expression—receive legal protection in an era where it can be replicated at scale?
The Legal Line: Style vs. Substance
Under U.S. copyright law, specific works—like a Ghibli film frame—are protected, but style isn’t. Courts have consistently held that broad aesthetics (think impressionism or film noir) can’t be owned. As the Supreme Court affirmed in Feist Publications v. Rural Telephone Service (1991), copyright protects expression, not ideas or concepts. So, an AI-generated image “in the style of Ghibli” doesn’t infringe unless it copies a character or scene outright. Fair use—weighed by purpose, nature, amount used, and market effect—might even shield it as “transformative” if challenged, though Japan’s stricter copyright laws could complicate things for Ghibli’s home turf, where moral rights protect a creator’s work integrity.
OpenAI may be in the clear for now, but cracks in this foundation are showing. If their training data included copyrighted Ghibli works without permission (a possibility their lack of disclosure fuels), courts could rule that the initial copying was itself infringing, even if the outputs don't infringe. The 2023 New York Times lawsuit against OpenAI targets this unauthorized training process, alleging it violates copyright regardless of output similarity. Similarly, pending U.S. lawsuits, like Andersen v. Stability AI, hint at this risk, and a loss could upend the game. Japan's moral rights framework, under Article 20 of its Copyright Act, might interpret Miyazaki's legacy as harmed, though it typically applies to specific works, not abstract style.
For today, the law leans OpenAI’s way—tomorrow’s less certain.
AI’s Creative Shortcut: Appropriation Without Attribution
AI models like GPT-4o learn from vast datasets—possibly including Ghibli-inspired art, though the inclusion of actual Ghibli content remains unconfirmed. OpenAI doesn’t disclose specific sources, but past technical papers describe training on publicly available and licensed data, including billions of images.
The economic impact is substantial. While OpenAI does not break down revenue by feature, GPT-4o’s image rollout sparked a reported surge in ChatGPT’s paid subscriptions in early 2025. TechCrunch reported on April 4, 2025, that OpenAI projected $12.7 billion in revenue for 2025, up significantly from the previous year—a business increasingly fueled by user engagement with viral features like image generation.
The Ghibli-style outputs, in particular, were not just a coincidence—they became a de facto marketing campaign. Sam Altman’s Ghibli-style profile photo, the flurry of shareable outputs, and the absence of content origin disclaimers collectively positioned the style as a magnet for user attention, driving adoption and engagement. What looked like a fan-driven trend also served as a strategic showcase for GPT-4o’s creative capabilities.
Critics, including artists like Karla Ortiz, who is suing AI firms, call it exploitation: a studio's legacy distilled into a tech product without consent or compensation.
Miyazaki himself, who in 2016 described an AI animation demo as “an insult to life itself,” would likely agree. The stakes hit home when OpenAI’s Sam Altman updated his profile photo to a Ghibli-style image—visually tying the company’s viral moment to a studio that never opted in. This move elevated the mimicry from casual trend to brand-aligned moment, further blurring the line between homage and monetization.
The Other Side: Homage and Innovation?
Supporters of AI-generated stylistic content argue that artistic evolution has always involved influence and emulation. Renaissance painters studied their predecessors; musicians build on established genres; filmmakers pay homage to iconic directors. From this perspective, AI models like GPT-4o democratize access to aesthetic techniques that were once exclusive to trained professionals, allowing anyone to experiment with style.
OpenAI refers to such content as "inspired fan creations," and U.S. law supports that interpretation. Style remains unprotected, and legal liability for outputs typically rests with users. Some Ghibli fans even view the trend as a tribute, keeping the studio's visual language alive in a new medium and introducing it to younger, digitally native audiences. For instance, Seattle developer Grant Slatton's Ghibli-style AI rendering of a family beach photo, posted in late March 2025, racked up 51 million views on X, amplifying Ghibli's aesthetic to a massive global audience, albeit to sharply polarized responses.
In many ways, the Ghibli-style AI trend represents a kind of “Mona Lisa moment” for the studio’s visual legacy. Much like how Leonardo da Vinci’s Mona Lisa rose from Renaissance portrait to global icon through mass reproduction—first via photography, then through posters, pop art, and digital culture—Studio Ghibli’s aesthetic has now been untethered from its narrative origins and widely reproduced through generative AI.
But while da Vinci lived in an era without copyright law or monetization infrastructure, today's creative economy is deeply transactional. Platforms like OpenAI can instantly scale and monetize cultural styles, transforming aesthetic influence into subscription growth without the need for attribution or licensing. The result is hyper-visibility without consent or compensation. The Miyazaki style has become so recognizable, and so easily replicable, that it now functions more like a public domain asset. In the AI era, the more a style is copied and shared, the more valuable it becomes, often making money for the platforms rather than the original creators. So perhaps the real question isn't whether style should be protected, but who gets paid when it isn't.
Ghibli vs. Wes Anderson: A Tale of Two Tributes
To understand what’s different about AI-driven style mimicry, consider the 2023 Wes Anderson TikTok trend. Users recreated Anderson’s cinematic style—symmetrical frames, pastel tones, and quirky narration—often with editing tools or AI-assisted filters, but still rooted in manual effort: storyboarding, filming, editing.
The Anderson trend was a decentralized, creator-led remix. While Anderson didn’t endorse it, no single company profited directly. The labor remained human; creators gained followers, not corporate revenue.
By contrast, the Ghibli moment feels institutional. OpenAI benefits from user engagement, and the creative work is outsourced to algorithms. Where TikTokers spent hours crafting Anderson tributes, GPT-4o users achieve Ghibli vibes in seconds—with OpenAI capturing the upside.
The Ownership Question: Should Style Be IP?
So, should distinctive creative styles receive legal protection? The real challenge is finding a balance between protecting creators and keeping creative innovation open to all.
The pro-AI camp argues that restricting style would throttle creativity. Aesthetic movements, genres, and cultural trends grow through reinterpretation. The Electronic Frontier Foundation has argued that copyright was never meant to lock up aesthetics—it protects specific expressions, not inspiration.
Yet creators raise valid concerns. When companies can replicate and monetize distinctive styles without consent or credit, the balance shifts. A 2025 survey found that 55% of creators believe AI will negatively impact their earnings, reflecting fears that unchecked replication devalues their craft. Unlike human homage, which requires effort and skill, AI replication is frictionless and infinitely scalable. This changes the economics of creativity.
The legal status quo was shaped in a world where imitation meant effort. AI changes that equation. When prompts replicate what once took decades to master, perhaps the law should catch up.
Importantly, not all AI training is unauthorized. Adobe’s Firefly model, for example, trains on licensed stock images and public domain works, proving that permission-based approaches are both possible and scalable. OpenAI’s lack of transparency complicates the conversation, but not all generative AI relies on unlicensed data.
A Middle Path Forward: Balancing Innovation and Recognition
Rather than granting full IP protection to style, we could create guardrails that preserve innovation while honoring creators:
1. Opt-out mechanisms with enforcement: Companies could be required to honor creator requests to exclude their work from training data via a global registry, one that is easy to access, transparent, and standardized across platforms. Today, creators must navigate fragmented and often technical opt-out processes (like robots.txt files or complex legal forms) with no guarantee of compliance. A centralized, user-friendly system would flip the default, giving artists a real choice. Enforcement would be critical: significant penalties for non-compliance, like the EU’s AI Act fines of up to 7% of global revenue. This would give creators genuine agency.
However, global coordination remains a major challenge. China, for instance, has taken a more state-directed approach to AI governance, including proposed rules requiring watermarking of AI-generated content. Without multilateral alignment among the U.S., EU, and China, any registry risks being only partially effective, leaving many creators unprotected. Still, establishing a global opt-out standard would set a precedent and put pressure on others to follow.
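As a concrete example of today's fragmented mechanics: OpenAI publishes a crawler user agent, GPTBot, that sites can block via robots.txt. The sketch below, using Python's standard-library parser, shows how such an opt-out rule reads; it illustrates the status quo, not the centralized registry proposed above, and the example site and paths are invented.

```python
# Illustrative only: GPTBot is OpenAI's published crawler user agent;
# a site opting out of training crawls serves rules like these.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: GPTBot
Disallow: /
"""

rp = RobotFileParser()
rp.modified()  # mark the rules as freshly loaded so can_fetch() trusts them
rp.parse(robots_txt.splitlines())

rp.can_fetch("GPTBot", "https://example.com/art/fanart.png")        # False: blocked
rp.can_fetch("SomeOtherBot", "https://example.com/art/fanart.png")  # True: no rule applies
```

Even with such a rule in place, compliance is voluntary: nothing in the protocol stops a crawler from ignoring it, which is why the enforcement penalties described above matter.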
2. Attribution and revenue sharing: AI companies could implement automatic style attribution by building style recognition systems that detect when outputs closely match known creators’ visual aesthetics. These systems could function similarly to YouTube’s Content ID, identifying stylistic resemblance and appending metadata (e.g., “inspired by [creator name]”) to generated content. This metadata could then be tied to revenue-sharing frameworks, particularly when outputs are used commercially. Adobe’s Firefly model, and its support of the Content Authenticity Initiative (CAI) and C2PA, offer early examples of attribution infrastructure in visual AI. These efforts mirror how music sampling evolved from unregulated appropriation to a managed system of credit and compensation.
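A Content ID-style attribution layer could reduce, at its simplest, to tagging outputs whose style-similarity score crosses a threshold. The sketch below is a hypothetical illustration: the scores, the 0.85 cutoff, and the `tag_output` function are invented for this example and are not part of any real OpenAI, YouTube, or C2PA system.

```python
# Hypothetical sketch of Content ID-style attribution metadata.
# The similarity scores, threshold, and field names are assumptions;
# real provenance systems (e.g. C2PA manifests) are far richer.
ATTRIBUTION_THRESHOLD = 0.85  # assumed cutoff for "closely matches a known style"

def tag_output(image_id: str, style_scores: dict[str, float]) -> dict:
    """Attach 'inspired by' metadata when an output closely matches known styles."""
    matches = [name for name, score in style_scores.items()
               if score >= ATTRIBUTION_THRESHOLD]
    return {
        "image_id": image_id,
        "inspired_by": sorted(matches),            # creators owed attribution
        "eligible_for_revenue_share": bool(matches),
    }

meta = tag_output("img_001", {"Studio Ghibli": 0.92, "Wes Anderson": 0.40})
# meta["inspired_by"] contains only "Studio Ghibli"
```

The design point is that attribution becomes machine-readable metadata traveling with the output, which downstream commercial uses could then key revenue sharing against.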
3. Partnerships and style licensing frameworks: For commercially valuable, distinctive styles, licensing agreements could be established, not to restrict all influence, but to ensure fair compensation when a style is systematically replicated for profit. These frameworks could be powered by smart contracts, allowing licensing terms, attribution, and revenue sharing to be enforced automatically and at scale. This would enable seamless integration into creative platforms, reducing friction for both artists and developers. OpenAI could have pursued a similar approach with Studio Ghibli, or created an official partnership that respected the studio's creative legacy while exploring new technological frontiers.
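In its simplest form, such a licensing framework reduces to machine-enforceable terms. The sketch below is hypothetical throughout: the `StyleLicense` structure and the 5% revenue share are illustrative assumptions, not any actual deal between OpenAI and Studio Ghibli.

```python
# Hypothetical style-licensing terms enforced programmatically.
# The rate and structure are illustrative assumptions, not a real agreement.
from dataclasses import dataclass

@dataclass
class StyleLicense:
    licensor: str
    revenue_share: float  # fraction of attributable revenue owed to the licensor

    def payout(self, attributable_revenue_usd: float) -> float:
        """Amount owed for commercial outputs generated in this style."""
        return round(attributable_revenue_usd * self.revenue_share, 2)

ghibli_license = StyleLicense(licensor="Studio Ghibli", revenue_share=0.05)
ghibli_license.payout(1_000_000)  # 5% of $1M in style-attributable revenue
```

Whether terms like these live in a smart contract or an ordinary platform database matters less than the principle: once attribution is machine-readable, compensation can be computed and settled automatically.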
OpenAI missed these opportunities with the Ghibli moment. Instead of proactively addressing style attribution, they rode the viral wave, leaving creators to wonder who’s next. Imagine a “Beyoncé-style” music AI that replicates her distinctive vocal techniques without acknowledgment—same debate, likely louder outrage. The company’s silence in response to questions about Ghibli content in training data further undermines trust in their approach to creator rights.
Style as Currency in the AI Economy
Style isn’t currently recognized as intellectual property, and compelling arguments suggest it shouldn’t be fully ownable. Art has always thrived on influence, and culture evolves through reinterpretation of existing aesthetics. Yet the AI revolution transforms creative style from an abstract influence into a concrete asset—one that generates measurable value for technology companies.
The current system, where creator consent is optional and opting out is a burden, risks alienating the very human artists whose work makes AI generation possible. If we continue treating style as freely appropriable in an age where it can be instantaneously replicated and monetized at scale, we risk undermining the economic foundations of professional creative work. Why invest decades perfecting a distinctive aesthetic if it can be extracted, replicated, and monetized without your participation?
A more balanced approach would recognize that while influence should remain free, systematic commercial exploitation of distinctive styles deserves some form of recognition and compensation. This isn’t about restricting creativity—it’s about ensuring the creators who develop influential styles can participate in the value they generate, even as technology transforms how that value is created and distributed. It’s to acknowledge that when style becomes software, creators deserve a role in shaping its use.
The future’s not just about what AI can generate—it’s about who gets to shape it. As new business models emerge and legal battles accelerate, we have an opportunity to build frameworks that respect both the prompt and the pencil—and to ensure that technological progress doesn’t come at the expense of human creativity.