OpenAI says it can detect images made by its own software… mostly

We all think we’re pretty good at spotting AI-generated images. It’s the strange alien text in the background. It’s the odd inaccuracies that defy the laws of physics. Most of all, it’s those horrible hands and fingers. However, the technology is constantly improving, and it won’t be long before we can’t tell what’s real and what’s not. Industry leader OpenAI is trying to get ahead of the problem with a tool that detects images created by its own DALL-E 3 generator. The results are a mixed bag.

Tool in action.

The company says the tool can detect images created by DALL-E 3 with 98 percent accuracy, which is great. However, there are some pretty big caveats. First of all, the image has to have been generated by DALL-E, and it’s not the only image generator on the block. The internet is full of them. According to the company, the system was only able to successfully classify five to ten percent of images produced by other AI models.

Problems also arise when the image has been modified in any way. This didn’t seem to be a big deal with small changes like cropping, compression, and saturation adjustments. In those cases, the success rate was lower, but still in the acceptable range of 95 to 97 percent. Adjusting the color, however, dropped the success rate to 82 percent.

Test results.

Now here’s where things get really sticky. The tool struggled when asked to classify images that had undergone more extensive changes. OpenAI didn’t even publish a success rate in these cases, saying only that “other modifications can reduce performance.”

That’s troubling, because it’s an election year, and the vast majority of AI-generated images are going to be altered to better enrage people. In other words, the tool will likely recognize a picture of Joe Biden asleep in the Oval Office surrounded by bags of white powder, but not after the creator has slapped on a bunch of angry text and Photoshopped in a weeping bald eagle or whatever.

At least OpenAI is being transparent about the limitations of its detection technology. It’s also giving external testers access to the tools mentioned above to help address these problems. The company has additionally teamed up with Microsoft to put $2 million toward a fund intended to expand AI education and literacy.

Unfortunately, the idea of artificial intelligence meddling in elections is not a far-fetched concept. It’s already happening this election cycle, and it will likely keep happening as we slowly (so very slowly) roll on toward November.

