DALL-E 2 is a new model that can handle complex prompts: it generates multiple candidate outputs and selects those most likely to meet the user's requirements.
The most exciting part of this model is its leap to a diffusion model: a type of image generation that starts from what looks like pure noise and refines the image over time, repeatedly making it a little more like the requested image until no noise is left at all.
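The iterative refinement idea can be sketched in a few lines of NumPy. This is only a toy illustration, not DALL-E 2's actual sampler: a real diffusion model uses a trained neural network to predict and subtract noise at each step, whereas here we simply blend toward a stand-in target image so the loop visibly converges.

```python
import numpy as np

# Toy sketch of the reverse-diffusion idea: start from pure Gaussian
# noise and repeatedly nudge the image toward a denoised estimate.
rng = np.random.default_rng(0)
target = rng.random((8, 8))        # stand-in for the image being "requested"
image = rng.normal(size=(8, 8))    # step 0: pure noise

for t in range(50):
    # A real model predicts the noise with a neural network; here we
    # cheat and blend toward the target so convergence is visible.
    image = image + 0.1 * (target - image)

# After many steps the image is close to the target: the noise is gone.
error = np.abs(image - target).mean()
```

Each pass shrinks the remaining difference by a constant factor, mirroring how a diffusion sampler removes a little noise per step until the final image emerges.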
OpenAI takes into account the risk presented by deepfakes and other possible misuses of AI-generated imagery, so the team decided to add some restrictions. What does that mean? The model refuses hate, harassment, violence, self-harm, explicit or “shocking” imagery, illegal activities, deception, political actors or situations, medical or disease-related imagery, and general spam. None of these are possible to produce, because violating imagery was removed from the training set: DALL-E 2 can create a specific dog breed in a beret, but it has no idea what a missile strike is.