Large Language Models
Implicit Biases in Pre-Trained Models Not Fully Mitigated
Large language models trained on internet-scraped data inherit human biases (gender stereotypes, selection bias, and more). Hugging Face provides Model Cards to document these issues, but documentation alone neither addresses nor eliminates the underlying biases, leaving developers to handle bias mitigation themselves.
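One mitigation step developers sometimes apply themselves is counterfactual data augmentation: pairing each training sentence with a copy in which gendered terms are swapped, so the model sees both forms equally. A minimal sketch follows; the swap table is a small hypothetical example, not an exhaustive or linguistically complete mapping (possessive forms like "his"/"her" are genuinely ambiguous).

```python
import re

# Hypothetical, deliberately small swap table for illustration only.
SWAPS = {
    "he": "she", "she": "he",
    "him": "her", "her": "him",
    "his": "hers", "hers": "his",
    "man": "woman", "woman": "man",
}

def counterfactual(text: str) -> str:
    """Return a copy of `text` with gendered terms swapped."""
    def repl(match: re.Match) -> str:
        word = match.group(0)
        swapped = SWAPS[word.lower()]
        # Preserve the capitalization of the original token.
        return swapped.capitalize() if word[0].isupper() else swapped

    pattern = r"\b(" + "|".join(SWAPS) + r")\b"
    return re.sub(pattern, repl, text, flags=re.IGNORECASE)

def augment(corpus: list[str]) -> list[str]:
    """Pair each sentence with its gender-swapped counterfactual."""
    return [s for text in corpus for s in (text, counterfactual(text))]
```

For example, `augment(["He is a doctor."])` yields both the original sentence and "She is a doctor.", balancing the gendered contexts the model trains on.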
Factual Accuracy and Hallucinations
ChatGPT frequently produces incorrect or fabricated information with confidence: wrong historical dates, nonexistent code libraries, failed calculations. Users report that the problem has worsened over time, particularly after model updates, eroding trust for tasks that require factual precision.
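Because the model states fabrications as confidently as facts, applications often add a verification layer. A common heuristic is self-consistency: sample the same question several times and accept the answer only if a clear majority agrees. A minimal sketch, where `ask` is a hypothetical stand-in for any chat-completion call:

```python
from collections import Counter
from typing import Callable, Optional

def self_consistent_answer(
    ask: Callable[[str], str],   # hypothetical model-call wrapper
    question: str,
    samples: int = 5,
    threshold: float = 0.6,
) -> Optional[str]:
    """Sample `ask` repeatedly; return the majority answer, or None
    when agreement falls below `threshold` (a hallucination signal)."""
    answers = [ask(question).strip().lower() for _ in range(samples)]
    best, count = Counter(answers).most_common(1)[0]
    return best if count / samples >= threshold else None
```

This does not guarantee correctness (a model can be consistently wrong), but low agreement across samples is a cheap, useful flag that the response should not be trusted without an external check.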
Lack of Emotional Intelligence and Empathetic Response
ChatGPT does not understand human emotions and cannot provide genuine empathy. Its responses can come across as insensitive or cold in emotionally charged conversations, potentially worsening situations that call for emotional support or crisis management, particularly in healthcare and education.