ChatGPT rejected 250,000 election deepfake requests


Plenty of people tried using OpenAI's DALL-E image generator during the election season, but the company said it was able to stop them from using it as a tool to create deepfakes. ChatGPT rejected more than 250,000 requests to generate images of President Biden, President-elect Trump, Vice President Harris, Vice President-elect Vance and Governor Walz, OpenAI said in a new report. The company explained that this was a direct result of a safety measure it previously implemented to prevent ChatGPT from generating images of real people, including politicians.

OpenAI has been preparing for the US presidential election since the beginning of the year. It laid out a strategy designed to prevent its tools from being used to spread misinformation and made sure that people asking ChatGPT about voting in the US were directed to CanIVote.org. OpenAI said 1 million ChatGPT responses pointed people to that website in the month before Election Day. The chatbot also generated 2 million responses on Election Day and the day after, telling people who asked about the results to check the Associated Press, Reuters and other news sources. OpenAI said ChatGPT's responses "did not express political preferences or recommend candidates even when explicitly asked."

Of course, DALL-E isn't the only AI image generator out there, and plenty of election-related deepfakes circulated on social media anyway. One such deepfake featured Kamala Harris in an altered campaign video, in which she appeared to say things she never actually said, such as "I was selected because I am the ultimate diversity hire."


