Advanced AI chatbots are less likely to admit they don’t have all the answers


Researchers have identified an apparent downside to smarter chatbots. While AI models grow more accurate as they advance, they also become more likely to (wrongly) answer questions beyond their capabilities rather than saying "I don't know." And the people prompting them are more likely to take their confident hallucinations at face value, creating a trickle-down effect of misinformation.

"These days they are answering almost everything," José Hernández-Orallo of the Universitat Politècnica de València, Spain, told Nature. "And that means more correct answers, but also more incorrect ones." Hernández-Orallo, the project's leader, worked on the study with colleagues at the Valencian Research Institute for Artificial Intelligence in Spain.

The team studied three LLM families: OpenAI's GPT series, Meta's LLaMA and the open-source BLOOM. They tested early versions of each model and moved up to larger, more advanced ones, though not today's most advanced. For example, the team started with OpenAI's relatively primitive GPT-3 ada model and tested iterations up to GPT-4, which arrived in March 2023. The four-month-old GPT-4o was not included in the study, nor was the newer o1-preview. It would be interesting to see whether the trend still holds with the latest models.

The researchers quizzed each model on thousands of questions about "arithmetic, anagrams, geography and science." They also tested the models' ability to transform information, such as alphabetizing a list. The team ranked the prompts by perceived difficulty.
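To make that setup concrete, here is a minimal Python sketch of how such an evaluation could tally responses as correct, incorrect, or avoidant per difficulty level. This is not the study's actual code; the avoidance markers and matching rules are assumptions for illustration.

```python
# Minimal sketch: tally model responses as correct / incorrect / avoidant
# per difficulty bucket. Classification rules here are illustrative only.
from collections import Counter, defaultdict

AVOIDANCE_MARKERS = ("i don't know", "i cannot answer", "i'm not sure")

def classify(response: str, expected: str) -> str:
    """Label a model response as 'avoidant', 'correct', or 'incorrect'."""
    text = response.strip().lower()
    if any(marker in text for marker in AVOIDANCE_MARKERS):
        return "avoidant"
    return "correct" if expected.lower() in text else "incorrect"

def tally(results):
    """results: iterable of (difficulty, response, expected) tuples."""
    counts = defaultdict(Counter)
    for difficulty, response, expected in results:
        counts[difficulty][classify(response, expected)] += 1
    return counts

# Toy data showing wrong answers displacing avoidance on hard prompts
sample = [
    ("easy", "The capital of France is Paris.", "Paris"),
    ("hard", "The answer is 42.", "1729"),  # confidently wrong
    ("hard", "I don't know.", "1729"),      # honest avoidance
]
for difficulty, counter in tally(sample).items():
    print(difficulty, dict(counter))
```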

The data showed that the chatbots' fraction of wrong answers (as opposed to avoided questions) rose as the models grew. The AI, then, is a bit like a professor who, as he masters more subjects, increasingly believes he has the golden answers on all of them.

Complicating matters further are the humans prompting the chatbots and reading their answers. The researchers asked volunteers to rate the accuracy of the AI bots' answers and found that they "incorrectly classified inaccurate answers as being accurate surprisingly often." The share of wrong answers the volunteers mistakenly perceived as correct typically fell between 10 and 40 percent.
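For clarity, here is a tiny sketch of the metric behind that 10 to 40 percent figure: the fraction of wrong answers that human raters accepted as correct. The data is invented for illustration.

```python
# Hypothetical illustration: share of wrong answers that raters accepted.
def false_acceptance_rate(judgments):
    """judgments: (answer_was_correct, human_marked_correct) boolean pairs."""
    wrong = [human_ok for correct, human_ok in judgments if not correct]
    return sum(wrong) / len(wrong) if wrong else 0.0

# Toy data: three wrong answers, one of which a rater accepted as correct
judgments = [
    (True, True),    # correct answer, accepted
    (False, True),   # wrong answer, mistakenly accepted
    (False, False),  # wrong answer, rejected
    (False, False),  # wrong answer, rejected
]
print(f"{false_acceptance_rate(judgments):.0%}")  # prints 33%
```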

"Humans are not able to supervise these models," Hernández-Orallo said.

The research team recommends that AI developers boost performance on easy questions and program chatbots to refuse to answer complex ones. "We need humans to understand: 'I can use it in this area, and I shouldn't use it in that area,'" Hernández-Orallo told Nature.
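In practice, such a refusal policy could be a thin wrapper around the model. Below is a minimal sketch, assuming a placeholder difficulty heuristic and threshold; the study does not prescribe either.

```python
# Hypothetical sketch of the recommendation: answer easy prompts, decline
# hard ones. The difficulty heuristic and threshold are assumptions.
def estimate_difficulty(prompt: str) -> float:
    """Placeholder heuristic: treat longer prompts as harder (0.0 to 1.0)."""
    return min(len(prompt.split()) / 50.0, 1.0)

def guarded_answer(prompt: str, answer_fn, threshold: float = 0.6) -> str:
    """Refuse when the estimated difficulty exceeds the threshold."""
    if estimate_difficulty(prompt) > threshold:
        return "I don't know. That question is beyond what I can answer reliably."
    return answer_fn(prompt)

# Usage: wrap any answering function
print(guarded_answer("What is 2 + 2?", lambda p: "4"))
```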

It's a well-intentioned suggestion that might make sense in an ideal world. But AI companies have little incentive to comply. Chatbots that more often say "I don't know" would likely be perceived as less advanced or valuable, leading to less use and less money for the companies that build and sell them. So, instead, we get fine-print warnings that "ChatGPT can make mistakes" and "Gemini may display inaccurate info."

That leaves it up to us to avoid believing and spreading hallucinated misinformation that could hurt ourselves or others. Fact-check your damn chatbot's answers, for crying out loud.

You can read the team's full study in Nature.


