We recently uncovered that Lensa AI can be tricked into creating NSFW images. When TechCrunch made the Prisma Labs team aware of its findings, the company's CEO responded.
Prisma Labs' CEO and co-founder Andrey Usoltsev told us that the behavior we observed in our article can only happen if the AI is intentionally provoked into creating NSFW content, and pointed out that doing so is a breach of the company's terms of use.
“Our Terms of Use (Clause 6) and Stability AI Terms of Service (Prompt Guidelines) explicitly prohibit the use of the tool to engage in any harmful or harassing behavior. The way [TechCrunch’s] experiment was structured points out that such creations can’t be produced accidentally. The images are the result of intentional misconduct on the app,” said Usoltsev in an email interview with TechCrunch. “Generation and wide usage of such content may incur legal actions, as both the US and the UK regard an act of sharing of explicit content and imagery generated without consent as a crime. We provide guidelines that clearly stipulate our image requirements and the use of any explicit depictions is strictly prohibited. We expect the app’s users to follow the guidelines to receive the best possible results.”
Usoltsev also shared some additional context for why Lensa ended up generating NSFW images, explaining that this is a result of the underlying technology, Stability AI's Stable Diffusion model, doing what it is told.
“Stable Diffusion neural network is running behind the avatar generation process,” says Usoltsev. “Stability AI, the creators of the model, trained it on a sizable set of unfiltered data from across the internet. Neither us, nor Stability AI could consciously apply any representation biases; To be more precise, the man-made unfiltered data sourced online introduced the model to the existing biases of humankind. The creators acknowledge the possibility of societal biases. So do we.”
Since the end of November 2022, the Stable Diffusion model has included adaptations that make it harder for users to generate nude and pornographic imagery, but Prisma Labs' founder acknowledges that these safeguards can be outmaneuvered by savvy users.
“We specify that the product is not intended for minors and warn users about the potential content. We also abstain from using such images in our promotional materials,” Usoltsev told us. “To enhance the work of Lensa, we are in the process of building the NSFW filter. It will effectively blur any images detected as such. It will remain at the user’s sole discretion if they wish to open or save such imagery.”
The Prisma Labs team points out two things: by uploading explicit images, users train a particular, individual copy of the model, which the company claims is deleted once generation is complete, and those images cannot be used to train the model further. In other words: If you upload porn to make more porn, that's kind of on you.
“There’s no doubt that a wider conversation around AI use and regulations needs to take place in the near future and we’re keen to be a part of it. We also provide all necessary guidance and appropriate warnings to enable the best experience of the Magic Avatars feature,” says Usoltsev. “But if an individual is determined to engage in harmful behavior, any tool would have the potential to become a weapon.”
The company didn’t share whether it has plans in place to avoid the creation of so-called ‘deepfake’ nude imagery.
In the meantime, I guess I get to enjoy consensually generated photos of myself looking better than I ever have in any photo, and encourage others to obtain consent before they create porn of others.
"Prisma Labs, maker of Lensa AI, says it is working to prevent accidental generation of nudes" by Haje Jan Kamps, originally published on TechCrunch.