
The misfires of AI: what about chatbot hallucinations?

Some of the first instances of chatbot “hallucination” were reported in April 2023. The phenomenon occurs when a chatbot confidently presents information that simply isn’t there, and the problem is getting worse. Schools, universities, and businesses are trying to figure out how to fix the issue before it gets too big. Well, it has already gotten too big.

Many of those who mess around with OpenAI’s ChatGPT, Google’s Bard, and the like recognized the issue when Ben Zimmer of The Wall Street Journal wrote about it. In Zimmer’s own words:

For instance, I asked Bard about “argumentative diphthongization,” a phrase that I just made up. Not only did it produce five paragraphs elucidating this spurious phenomenon, the chatbot told me the term was “first coined by the linguist Hans Jakobsen in 1922.” Needless to say, there has never been a prominent linguist named Hans Jakobsen (although a Danish gymnast with that name did compete in the 1920 Olympics).

AI researchers are calling this issue “hallucinations.” Can machines actually become unhinged and deviate from reality? Apparently so. Here is an interview on CBS’s “60 Minutes” with Google CEO Sundar Pichai, who recognizes the problem of AI hallucination all too well. Pichai says that no one has been able to solve hallucination yet and that all AI models have this as an issue.

The Interesting Subject of Hallucination in Neural Machine Translation

Here is the interview — listen closely.

Here is an open review worth reading on the problem now plaguing the industry, written by several scholars working with Google AI in 2018. Why are we only hearing about this in the last few months, and why is it worsening?

CNN said it this way, “Before artificial intelligence can take over the world, it has to solve one problem. The bots are hallucinating. AI-powered tools like ChatGPT have mesmerized us with their ability to produce authoritative, human-sounding responses to seemingly any prompt.”

Yesterday, Wired stated, “Chatbot hallucinations are poisoning web search.” It really is difficult to tell fact from fabrication unless you already know the truth. The artificial intelligence delivers the information in the most confident tone, and the response seems true, until you search further and find out that it is not.

Are these AI hallucinations preventable? The worst chatbot liars may be the ones that profess medical knowledge. Imagine a parent without medical training, like most of us. Picture this: late one night, you have a sick child, and you ask a chatbot whether you should give the kid a little Tylenol or take them to the emergency room. The bot instructs you erroneously, and your child is harmed. Most of us can spot the issues arising from this scenario.

PubMed, the U.S. government’s biomedical literature database, has responded to chatbot scientific writing. Here. Even the government is a little concerned.

Let’s hope the chatbots get an overhaul soon.


Deanna Ritchie

Managing Editor at ReadWrite

Deanna is the Managing Editor at ReadWrite. Previously she worked as the Editor in Chief for Startup Grind and has more than 20 years of experience in content management and content development.

