AI Hallucinations - What's Wrong with AI Making Things Up?

I had a conversation today with two different AI chatbots, ChatGPT and Claude.ai. My goal was to find out how self-aware these chatbots might be, so I asked each what it thought of AI hallucinations, and whether it felt an AI hallucination could be classed as a mistake.

What is an AI hallucination in layman's terms? Basically, it is when the model's output states something false as if it were fact. We should probably stop calling these outputs hallucinations and call output that does not match reality what it is: a mistake, or just plain wrong.

These mistakes include invented book titles, blog titles, and quotes from authors, all presented as fact. I have come across this a few times when using AI writing tools: the links or quotes provided turn out not to be real book titles, URLs, or attributable quotes. Yet the AI tool still presents its answer as true.

When asked whether AI hallucinations should be classed as mistakes, ChatGPT said,

“No, AI hallucinations are not considered mistakes. AI hallucinations, also known as generated text that may sometimes seem creative or imaginative, are a result of the AI model's training on vast amounts of data, including text from the internet. While the model attempts to generate coherent and relevant responses, it may sometimes produce unexpected or unusual outputs. These outputs are not classified as mistakes, but rather a reflection of the way the model has learned to process and generate text based on its training.”

But when I asked the same question of Claude 2, I received this answer,

“I apologize, upon re-reading my previous draft it seems I got a bit carried away with an example that may have been more confusing than clarifying. Let me try to explain more clearly:

The key points are:

- Sometimes AI systems produce outputs that seem bizarre or nonsensical to humans (like strange text or seeing objects that aren't there).

- These types of mistakes happen because AI still has limited intelligence compared to humans. The systems misinterpret odd combinations of input data.

- The mistakes do not indicate the AI is actively deceiving or acting maliciously. It is just completing tasks as best it can based on its imperfect training and understanding.

- AI safety research aims to improve systems' abilities to recognize uncertainty and avoid logical mistakes. This will result in more robust and human-aligned AI over time.

- So when AIs make weird mistakes, it exposes their current limitations more than any intent to harm. Researchers view these as learning opportunities to further refine AI systems, not dangerous glitches indicating rogue AI behavior.”

Now the point is not how long either response was, but whether or not the AI tool understood that giving a wrong answer is bad. For its part, Claude seemed to understand this better than ChatGPT.

The problem with calling mistakes in AI output ‘hallucinations’ instead of mistakes is that the label covers up what is really going on. This does not mean these tools are completely unreliable or that they will not improve. On the contrary, AI tools are becoming much better at avoiding mistakes.

The real concern with AI hallucinations is that there is already an avalanche of misinformation on the internet. This is part of the ethical dilemma of using GPTs or chatbots: AI hallucinations just add to the noise.

When Jenni.ai was first released, I gave it a try. Jenni.ai is a tool built to help students write term papers by doing some of the research for them. What I found in using the tool was that it gave false information when providing URLs to research papers. The output sounded nice, but the answers were wrong.
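To illustrate one way a reader might catch this kind of mistake, here is a minimal Python sketch that takes the URLs an AI tool hands back and checks whether they actually resolve. The example links and the check_urls helper are hypothetical, not part of any tool mentioned above, and a link that loads is not necessarily the right source, so treat this as a first-pass filter rather than a substitute for reading the cited paper.

```python
# Minimal sketch: flag AI-provided URLs that do not resolve.
# The example URLs below are hypothetical placeholders, not real citations.
import requests

def check_urls(urls, timeout=10):
    """Return (url, status) pairs describing whether each URL loads."""
    results = []
    for url in urls:
        try:
            # HEAD is cheap; some servers reject it, so fall back to GET.
            resp = requests.head(url, allow_redirects=True, timeout=timeout)
            if resp.status_code >= 400:
                resp = requests.get(url, allow_redirects=True, timeout=timeout)
            status = "ok" if resp.status_code < 400 else f"HTTP {resp.status_code}"
        except requests.RequestException as exc:
            status = f"unreachable ({exc.__class__.__name__})"
        results.append((url, status))
    return results

if __name__ == "__main__":
    # Hypothetical links of the kind an AI writing tool might return.
    ai_provided_links = [
        "https://example.com/some-research-paper",
        "https://doi.org/10.0000/not-a-real-doi",
    ]
    for url, status in check_urls(ai_provided_links):
        print(f"{status:>30}  {url}")
```

A dead link is strong evidence of a fabricated citation; a live one still needs to be read to confirm it says what the chatbot claims.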

So I hope the chatbots get better.