Stability AI has updated its text-to-image AI generator Stable Diffusion, weakening the algorithm's ability to create NSFW content and images in the style of specific artists. This has drawn criticism from users, The Verge reports.
The update includes a re-engineering of key components of the model and improvements to some of its features, such as upscaling and context-aware editing. It also limits the model's ability to create illustrations that imitate other artists' styles or depict nudity, pornography, or celebrities.
"They nerfed the algorithm," one Reddit user commented on the update in the Stable Diffusion subreddit.
Another developer on the official Discord server called the changes “an unwelcome surprise.”
Users also noted the model’s inability to generate images in the style of digital artist Greg Rutkowski.
According to the source, many artists are "irritated" that algorithms like Stable Diffusion were trained on their works without permission and can now reproduce similar images.
Experts allowed that training neural networks on copyrighted data may be legal. However, some uses could be challenged in court, they added.
Users speculated that Stability AI may have changed the model to mitigate potential legal risks.
Founder Emad Mostaque said the developers did not remove artists' works from the training data. The model's reduced ability to imitate styles results from changes in how the software encodes and retrieves information.
However, it emerged that engineers removed nude and pornographic illustrations from the training datasets. According to Mostaque, NSFW content was filtered out because the algorithm could otherwise be used to generate images of child abuse.
A Reddit user with the handle ZeroCool22 claimed that this change "contradicts the spirit of the open-source community's philosophy."
"The decision to create NSFW content or not should remain in the hands of the end user, not in a restricted/censored model," he wrote.
Other users noted that since Stable Diffusion is open source, nude data could easily be added back in third-party releases.
Some developers praised the algorithm's deeper functionality, including its ability to generate content that matches the depth map of the original image.
First tests with the new @StableDiffusion V2 depth2image feature. This has potential…@EMostaque @StabilityAI #stablediffusionv2 pic.twitter.com/XZGng0w5HJ
— Saad Ahmed (@microsaad) November 24, 2022
Discord users also noted improved prompt interpretation and the generation of coherent photographic images by the model.
Mostaque said future versions of Stable Diffusion would use training datasets that allow artists to opt their works in or out.
"We are trying to be as transparent as possible, improving the base models and taking the community's feedback into account," he said.
In August, Stability AI provided access to the text-to-image generator Stable Diffusion to more than 1,000 researchers. Later the company made the algorithm available to a broad audience.
In September, an enthusiast taught Stable Diffusion to generate video on demand. The algorithm is based on the software code of AI developer Andrej Karpathy.
In October, German enthusiast Fabian Stelzer used the neural network Stable Diffusion to process selfies.
