Sony is working on a technology aimed at curbing the unauthorized imitation of artistic styles in generative AI. The system can identify signature elements of a creator’s work and block outputs that closely mimic specific artists or studios, with styles associated with Studio Ghibli cited as a key example.
The solution operates at the model level. If a user tries to generate an image in the style of a particular artist, the system would either reject the request or modify the output so it doesn’t cross the line into copyright infringement. In effect, Sony is tackling a problem that has so far been notoriously hard to solve from a technical standpoint.
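Sony has not published any details of how such a filter would work, but the idea of blocking outputs that sit too close to a protected style can be sketched in a few lines. In this purely illustrative example, a "style" is represented as a vector embedding (assumed to come from some image encoder, which is not part of the sketch), and an output is rejected if its embedding is too similar to any protected style centroid:

```python
import numpy as np

# Illustrative sketch only: Sony has not disclosed its actual method.
# We assume each style and each generated image can be mapped to an
# embedding vector by some encoder; the gate then compares them.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def gate_output(output_embedding: np.ndarray,
                protected_styles: dict,
                threshold: float = 0.9):
    """Return (allowed, matched_style_or_None).

    Blocks an output whose embedding is within `threshold` cosine
    similarity of any protected style centroid.
    """
    for name, centroid in protected_styles.items():
        if cosine_similarity(output_embedding, centroid) >= threshold:
            return False, name  # too close to a protected style: block
    return True, None

# Toy usage with hypothetical 2-D embeddings:
styles = {"studio_x": np.array([1.0, 0.0])}
print(gate_output(np.array([1.0, 0.0]), styles))  # near-identical: blocked
print(gate_output(np.array([0.0, 1.0]), styles))  # orthogonal: allowed
```

A real system would also need the "modify the output" path the article mentions (for example, re-generating with adjusted guidance), which is far harder than this threshold check suggests.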
The second piece of the project is a compensation mechanism for creators. Sony suggests it may be possible to track how much a model relies on certain styles or training data and then route a share of the revenue back to the original artists. The company has not yet disclosed how such payments would be calculated.
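Since Sony has said nothing about how such payments would be calculated, any concrete scheme is speculation. One simple possibility, shown here only as a hypothetical illustration, is a pro-rata split: a revenue pool is divided among artists in proportion to an attribution score measuring how much the model relied on each artist's work (how such scores would be computed is itself an open problem):

```python
# Hypothetical illustration only: Sony has not disclosed a payment formula.
# `attribution` maps artist name -> attribution score (assumed given).

def allocate_royalties(revenue: float, attribution: dict) -> dict:
    """Split `revenue` among artists in proportion to attribution scores."""
    total = sum(attribution.values())
    if total == 0:
        return {artist: 0.0 for artist in attribution}
    return {artist: revenue * score / total
            for artist, score in attribution.items()}

# Toy usage: artist "b" contributed three times as much as "a".
print(allocate_royalties(100.0, {"a": 1.0, "b": 3.0}))
```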
The project is being developed in Japan, where protecting artistic style and creators’ rights is especially important in industries like anime and manga. Visual styles associated with studios like Ghibli are among the most frequently imitated by image-generating models – something that has long sparked backlash from artists.
Right now, the internet is still a bit of a wild west when it comes to AI training data. Models are often trained on massive datasets that include artists’ work without consent, which effectively means anyone can build on someone else’s creations without oversight or compensation.
Sony has not announced a release timeline or whether the system will be made commercially available. But if it works as intended, it could be one of the first tools to meaningfully curb AI-driven style imitation – and finally tilt the balance a bit back toward creators.

