Text-to-image synthesizers have the power to make anyone into an artist, no art skills required. All you need is a little imagination and the power of artificial intelligence to create something spectacular (or truly bizarre). Now, the folks behind the much-hyped text-to-image synthesizer DALL-E (pronounced: Dalí, like Salvador) have invited one million beta testers, pulled from the DALL-E waitlist, to start creating on the platform. Here are the details.
How it works
Over the coming months, DALL-E will invite up to a million people to beta test its platform. Testers will receive free monthly credits to create as they please and, should their enthusiasm take over, they’ll have the option to purchase additional credits at $15 for 115.
Each credit allows for one prompt and the choice between a generation, which returns four images, and an edit/variation, which returns three images. An edit/variation prompt either modifies an existing image, created with or uploaded to DALL-E, or produces new artwork inspired by it. The company has stated that it will be collecting user feedback to make improvements.
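For readers keeping score, the quoted numbers imply a rough per-image cost. A minimal back-of-the-envelope sketch, using only the figures reported above ($15 for 115 credits, one credit per prompt, four images per generation, three per edit/variation); the constant names are ours, not any official API:

```python
# Rough cost-per-image estimate based on DALL-E's announced beta pricing.
# Figures are taken from the article, not from an official rate card.
PRICE_USD = 15.0
CREDITS_PER_PACK = 115
IMAGES_PER_GENERATION = 4   # a "generation" prompt returns four images
IMAGES_PER_VARIATION = 3    # an "edit/variation" prompt returns three

cost_per_credit = PRICE_USD / CREDITS_PER_PACK
print(f"Cost per credit:          ${cost_per_credit:.3f}")
print(f"Cost per generated image: ${cost_per_credit / IMAGES_PER_GENERATION:.3f}")
print(f"Cost per variation image: ${cost_per_credit / IMAGES_PER_VARIATION:.3f}")
```

Under those assumptions, a purchased credit works out to about 13 cents, or roughly 3 cents per image for a standard generation.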
Additionally, artists and creators will have the ability to commercialize their work, including reprinting, selling, and merchandising. The company reports that some creators already have children’s book illustrations, newsletters, games, and movie storyboards in the works.
Is it really safe?
Of course, all the buzz about AI image synthesizers has come with plenty of concerns about safety and ethics. This is one of the reasons why Google’s Imagen synthesizer is not available to the public. The opportunity for misinformation, bias, and abuse abounds, with no clear solutions or mitigation policies in place. However, DALL-E’s creators claim they have implemented the following measures to combat any misuse.
First, the platform will reject image uploads of realistic faces, so users won’t be able to recreate the likeness of public figures, such as celebrities and politicians. DALL-E will also not produce photorealistic images of real people.
Content filters will also guard against violent, adult, political, and other subjects that violate the platform’s content policy. When developing the system, the team also removed explicit content from the training data to limit the AI’s exposure. DALL-E’s creators further claim they have implemented a system-level technique to fight bias, allowing it to generate “images of people that more accurately reflect the diversity of the world’s population.” Automated and human monitoring are also in place to evaluate content.
While the concept is novel and fun, it remains to be seen whether DALL-E will successfully surmount the moral and ethical risks the technology poses. We’re keen to see the beta testers’ creations and how the platform will balance freedom of expression against the possible repercussions.