Researchers at Oxford University have developed an algorithm that can identify when an AI is hallucinating. An AI hallucination occurs when models such as Gemini and ChatGPT generate incorrect information, which in turn spreads misinformation. Hallucinations manifest in a couple of ways.

An AI might make up facts or invent scenarios when presenting what is meant to be factual information. Or it might miss something critical, such as failing to flag a clear error in a spreadsheet because it has not been trained on that type of error.
 

Why Exactly Do AI Hallucinations Take Place?

 
AI models are trained on a set of data. If that data is out of date or incomplete, the chances of hallucination increase. Another major contributor is biased training data: the AI cannot tell the difference and will likely present a one-sided view of the information, which can be misleading.

AI models can struggle with generalisation, which means they might not perform well in situations that differ from their training environment. This lack of flexibility can lead to errors when the model encounters new or slightly different data.

The model has not ‘learned’ enough variety to make accurate judgements outside its training. It is not a human, at the end of the day: because it learns through patterns and algorithms, it cannot apply logic and critical thinking the way a human brain would.

How Does The Tool Work?

 
Dr. Sebastian Farquhar, a co-author of the study, explains, “We’re essentially asking the AI the same question multiple times and observing the consistency of the answers. A high variation suggests the AI might be hallucinating.”

This method, built around what the researchers call ‘semantic entropy’, compares the different outputs by their meanings or interpretations rather than by word choice alone. Dr. Farquhar explained, “Our system doesn’t just look at the structure of the text but dives into the underlying meanings, making it a robust tool for real-world applications.”
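To make the idea concrete, here is a minimal Python sketch of how such a check might look. It assumes you have already sampled several answers to the same question and have some way of judging whether two answers share the same meaning (in practice this would be something like an entailment model). The function names and the toy word-overlap check are illustrative assumptions, not the study’s actual implementation.

```python
import math

def semantic_entropy(answers, same_meaning):
    """Cluster sampled answers by meaning, then compute the entropy of the
    cluster sizes. High entropy means the answers disagree in meaning,
    which may signal a hallucination."""
    clusters = []  # each cluster holds answers judged to share one meaning
    for ans in answers:
        for cluster in clusters:
            if same_meaning(ans, cluster[0]):
                cluster.append(ans)
                break
        else:
            clusters.append([ans])

    total = len(answers)
    entropy = 0.0
    for cluster in clusters:
        p = len(cluster) / total
        entropy -= p * math.log(p)
    return entropy

# Toy meaning check: treat two answers as equivalent if they use the same words.
# A real system would use something like a natural-language-inference model here.
def naive_same_meaning(a, b):
    return set(a.lower().rstrip(".").split()) == set(b.lower().rstrip(".").split())

answers = [
    "Paris is the capital of France.",
    "The capital of France is Paris.",
    "Lyon is the capital of France.",
]
print(semantic_entropy(answers, naive_same_meaning))  # ~0.64: some disagreement in meaning
```

In this toy run, two of the three sampled answers agree in meaning and one contradicts them, so the entropy is above zero; in a real deployment a threshold would have to be chosen to decide when the variation is high enough to flag a likely hallucination.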
 

How Can This Tool Be Used?

 
The tool has clear uses in day-to-day operations that rely on AI. Take healthcare, for example. Accuracy matters enormously in that space, because AI is used to analyse, diagnose and interpret medical data. Here the tool could help catch inaccuracies, resulting in better treatment with fewer errors.

Another good use for this tool is journalism. There have been countless cases of misinformation spreading because AI hallucinations in news content were not double-checked for accuracy. A hallucination check would give readers more reason to trust what they read.

Yarin Gal, Professor of Computer Science at the University of Oxford and Director of Research at the UK’s AI Safety Institute put it well, saying, “Getting answers from LLMs is cheap, but reliability is the biggest bottleneck. In situations where reliability matters, computing semantic uncertainty is a small price to pay.”




