LLM Hallucination: The Curse That Cannot Be Broken

Authors

DOI:

https://doi.org/10.25195/ijci.v51i2.546

Keywords:

AI, Artificial Intelligence, Hallucination, Large language model, LLM

Abstract

Artificial intelligence chatbots (e.g., ChatGPT, Claude, and Llama), also known as large language models (LLMs), are steadily becoming an essential part of the digital tools we use, yet they remain plagued by the phenomenon of hallucination. This paper gives an overview of this phenomenon, discussing its different types, the multi-faceted causes behind it, its impact, and the argument that the inherent nature of current LLMs makes hallucinations inevitable. After examining several techniques, each chosen for its distinct implementation, to detect and mitigate hallucinations, including enhanced training, tagged-context prompts, contrastive learning, and semantic entropy analysis, the work concludes that none of them eliminates hallucinations entirely. The phenomenon is here to stay, calling for robust user awareness and verification mechanisms and cautioning against absolute dependence on these models in healthcare, journalism, legal services, finance, and other critical applications that require accurate and reliable information to support informed decisions.
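The semantic entropy analysis mentioned above can be illustrated with a minimal sketch: sample several answers to the same question, group them into semantic clusters, and compute the entropy of the cluster distribution; high entropy suggests the model is confabulating. The clustering rule here (case-insensitive exact match) is a stand-in assumption; published implementations cluster answers by bidirectional entailment using a separate model.

```python
import math
from collections import Counter

def semantic_entropy(answers, cluster_key):
    """Estimate semantic entropy over sampled answers to one question.

    answers: list of model answers (strings).
    cluster_key: function mapping an answer to a semantic-cluster key.
        A toy stand-in is used below; real systems cluster by meaning
        (e.g., mutual entailment), not surface form.
    """
    clusters = Counter(cluster_key(a) for a in answers)
    total = sum(clusters.values())
    # Shannon entropy (natural log) over the semantic clusters.
    return -sum((n / total) * math.log(n / total) for n in clusters.values())

# Toy clustering rule: normalize case and whitespace (an assumption).
norm = lambda s: s.strip().lower()

consistent = ["Paris", "paris", "Paris"]       # model agrees with itself
inconsistent = ["Paris", "Lyon", "Marseille"]  # answers diverge

print(semantic_entropy(consistent, norm))    # 0.0: stable answer
print(semantic_entropy(inconsistent, norm))  # ~1.10: flag for verification
```

A detector would sample N answers at nonzero temperature and flag the question when the entropy exceeds a tuned threshold; as the abstract notes, this detects likely hallucinations but does not prevent them.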

Author Biography

Hussein Al-Mahmood, University of Basrah

College of Arts, Department of Translation

Published

2025-08-25