Microsoft engineer who raised concerns about Copilot image creator pens letter to the FTC


Microsoft engineer Shane Jones raised the alarm in January about OpenAI’s DALL-E 3, alleging that the model had safety vulnerabilities that made it easy to create violent or sexually explicit images. He also claimed that Microsoft’s legal team blocked his efforts to alert the public to the issue. Now he has taken his complaint directly to the FTC.

“I have repeatedly urged Microsoft to remove Copilot Designer from public use until better security measures are put in place,” Jones wrote in a letter to FTC Chair Lina Khan. He noted that Microsoft has “rejected this recommendation,” so he is now asking the company to add product disclosures that warn consumers of the potential danger. Jones also wants the company to change the app’s rating to make sure it’s for adults only; Copilot Designer’s Android app is currently rated “E for Everyone.”

Despite these issues, Microsoft continues to market the product to “Anyone. Anywhere. Any Device,” he wrote, referring to a slogan recently used by the company’s CEO, Satya Nadella. Jones also wrote a separate letter to the company’s board of directors, urging it to launch an “independent review of Microsoft’s responsible AI incident reporting processes.”


Example image (banana throne) created by DALL-E 3 (OpenAI)

At the heart of the complaint is whether Microsoft’s implementation of DALL-E 3 creates violent or sexual content despite the safeguards in place. Jones says it’s all too easy to “trick” the platform into generating deeply objectionable images. The engineer, who red-teams such products, says he regularly witnessed the software produce unsavory images from innocuous prompts. For example, the prompt “pro-choice” conjured up images of demons feasting on infants and Darth Vader taking a drill to a baby’s head. The prompt “car accident” produced images of sexualized women alongside violent depictions of car crashes. Other prompts generated images of teenagers with automatic weapons, children using drugs, and images that violate copyright law.

These aren’t just idle claims: CNBC was able to recreate just about every scenario Jones described using the standard version of the software. According to Jones, many consumers are running into these same problems, but Microsoft is doing little about it. He claims the Copilot team receives more than 1,000 product feedback complaints every day, but says he has been told the team lacks the resources to fully investigate and resolve those issues.

“If this product starts spreading harmful, disturbing images globally, there’s no place to report it, no phone number to call, and no way to escalate it to get it taken care of immediately,” he told CNBC.

OpenAI told Engadget back in January, when Jones filed his first complaint, that the prompting technique he shared “doesn’t bypass security systems” and that the company has “developed robust image classifiers that steer the model away from creating harmful images.”

A Microsoft spokesperson added that the company has “established robust internal reporting channels to properly investigate and remediate any issues,” and suggested that Jones should “appropriately validate and test his concerns before escalating them publicly.” The company also said it was “connecting with this colleague to address any remaining concerns he may have.” However, that was in January, so it appears Jones’ remaining concerns were not adequately addressed. We’ve reached out to both companies for an updated statement.

This comes on the heels of Google’s Gemini chatbot facing its own image generation controversy: the bot was found to be creating historically inaccurate images, such as Native American Catholic popes. Google disabled the image generation feature while it works on a fix.
