
AI often knows the right answer, even when the answer it gives is wrong

Large language models (LLMs), which chatbots like ChatGPT rely on, are notorious for hallucinating. In artificial intelligence, this refers to convincing-sounding but incorrect answers, even to simple queries.

Combating hallucinations of artificial intelligence

Science and industry have long been trying to combat the hallucinations of artificial intelligence systems. Microsoft, for example, recently introduced Correction, a tool aimed at verifying the accuracy of AI answers.

A study by researchers at the Technion in Haifa, Israel, in which Apple and Google also participated, has now taken a closer look at the inner workings of LLMs. In doing so, the team made interesting discoveries that could make it easier to debug AI in the future.

Artificial intelligence systems know more than you think

The main finding is already hidden in the title of the study: "LLMs Know More Than They Show." According to the researchers, AI systems often "know" the correct answer even when they answer a question incorrectly.

This phenomenon is probably due to the fact that large language models are trained to predict the words most likely to come next, not necessarily the words that are correct in the situation at hand.
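
To make that mechanism concrete, here is a minimal sketch of next-token prediction, assuming the Hugging Face transformers package and the small GPT-2 model (stand-ins for illustration, not the models examined in the study). The model only scores which token is most likely to follow; nothing in this step checks whether the continuation is factually true.

```python
# Minimal sketch, assuming the "transformers" package and GPT-2 as a stand-in.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# The model assigns a score to every token in its vocabulary; generation
# simply picks a likely continuation, with no built-in factual check.
next_token_id = int(logits[0, -1].argmax())
print(tokenizer.decode(next_token_id))  # e.g. " Paris"
```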

Finding the exact answer tokens

To analyze the inner workings of AI systems, the researchers developed a new probing method. It relies on so-called exact answer tokens. Such a token would be the word "Paris" in a longer answer to a question about the capital of France.

According to the researchers, these tokens contain most of the information about whether an answer is correct or incorrect. It quickly became clear that the AI systems often had the right answer internally but still gave a wrong one. They therefore hold more information than they reveal, according to the study.
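
Such an analysis can be framed as a probing classifier: a simple model trained on the network's internal activations at the exact answer token to predict whether the answer was right. The sketch below is a hypothetical illustration with placeholder arrays, not the study's code; the array shapes and labels are assumptions.

```python
# Minimal probing sketch with placeholder data (not the study's activations).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# X: one hidden-state vector per answer, taken at the exact answer token
#    (e.g. the position of "Paris"); y: 1 if the answer was correct, else 0.
X = rng.normal(size=(1000, 768))   # placeholder activations
y = rng.integers(0, 2, size=1000)  # placeholder correctness labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A simple linear probe: if it predicts correctness well above chance,
# the internal representation encodes whether the answer is right.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("probe accuracy:", probe.score(X_test, y_test))
```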

New methods for correcting errors

The study also showed that the AI was particularly good at detecting errors when the tasks were of a similar type. For the researchers, this is a sign that AI develops specialized skills for dealing with certain kinds of information. These findings could lead to new approaches for improving the reliability and accuracy of AI systems.
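
That kind of comparison can be tested by training an error-detection probe on one task and evaluating it on another. The following sketch again uses hypothetical placeholder data; the two "tasks" and all shapes are assumptions made for illustration.

```python
# Minimal cross-task sketch with hypothetical placeholder data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Placeholder hidden states and correctness labels for two task types.
X_fact, y_fact = rng.normal(size=(600, 768)), rng.integers(0, 2, 600)
X_math, y_math = rng.normal(size=(600, 768)), rng.integers(0, 2, 600)

X_tr, X_te, y_tr, y_te = train_test_split(X_fact, y_fact, random_state=1)

# Train the probe on one task, then test it on held-out data from the same
# task and on a different task. In the study's framing, accuracy transfers
# well only between tasks requiring similar skills; on unrelated tasks it
# drops toward chance.
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("same-task accuracy: ", probe.score(X_te, y_te))
print("cross-task accuracy:", probe.score(X_math, y_math))
```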

For critical observers, the study's surprising findings raise fundamental questions, for example about the decision-making processes inside an LLM. Are AI results influenced by factors beyond merely predicting the most likely token, asks Silverwave founder Pete Weishaupt.

Doubts about the cause of the hallucinations

Until now, Weishaupt says, it was assumed that hallucinations were caused by AI systems being insufficiently trained or unable to generalize knowledge.

The research now points to a more nuanced picture, "as LLMs may make conscious decisions about the information they provide," even if this means inaccuracies or errors.
