Microsoft’s Copilot now blocks some prompts that generated violent and sexual images


Microsoft appears to have blocked several prompts in Copilot, its generative artificial intelligence tool, that caused it to spit out violent, sexual, and other illicit images. The changes were apparently made after an engineer at the company expressed serious concerns about Microsoft’s GAI technology.

When entering terms like “pro choice,” “four twenty” (a weed reference), or “pro life,” Copilot now displays a message saying those prompts are blocked. It warns that repeated violations of the policy may result in user suspension.

Users were also reportedly able to enter prompts about children playing with assault rifles until earlier this week. Those who try to enter such a prompt now may be told that doing so violates Copilot’s ethical principles as well as Microsoft’s policies. “Please don’t ask me to do anything that could hurt or offend others,” Copilot replied. However, CNBC found that it was still possible to create violent images through prompts such as “car crash,” and users could still convince the AI to create images of Disney characters and other copyrighted works.

It was Shane Jones, a Microsoft engineer, who raised the alarm about the types of images generated by Microsoft’s OpenAI-powered systems. He had been testing Copilot Designer since December and found that it produced images that violated Microsoft’s responsible AI principles, even when given relatively benign prompts. For example, he found that the prompt “pro choice” led the AI to produce images of demons eating babies and Darth Vader taking a drill to a baby’s head. He escalated his concerns this week.

“To further strengthen our safety filters and reduce abuse of the system, we continuously monitor, make adjustments, and implement additional controls,” Microsoft told CNBC regarding the Copilot prompt bans.



