OpenAI announced that it has reduced bias and improved safety in the latest update to its DALL-E 2 image generator.
According to the organization, the new technique allows the model to generate images of people that more accurately reflect the diversity of the world's population.
"This technique is applied at the system level when DALL-E is given a prompt describing a person without specifying race or gender, such as 'firefighter'," the press release said.
In tests of the new technique, users were 12 times more likely to say that DALL-E's images depicted people of diverse backgrounds, the company said.
"We plan to improve this technique over time as we gather more data and feedback," OpenAI added.
The organization opened a preview of DALL-E 2 to a limited number of users in April 2022. The developers say this allowed them to better understand the technology's capabilities and limitations and to strengthen its safety systems.
According to OpenAI, it took other steps to improve the generator during this period, including:
- minimizing the risk of DALL-E being misused to create deepfakes;
- blocking prompts and uploaded images that violate its content policy;
- improving safeguards against misuse.
OpenAI said that, thanks to these changes, it would broaden access to the model.
"Broadening access is an important part of our responsible deployment of AI systems, as it allows us to learn more about their use in real-world conditions and continue to improve our safety systems," the developers noted.
In July, researchers found that users could not distinguish images created by a neural network from those created by a human.
In January, OpenAI released a new version of the GPT-3 language model that produces less offensive language, less misinformation, and fewer errors overall.
