Technology

More concise chatbot responses tied to increase in hallucinations, study finds

By James Anderson | May 11, 2025

Asking any of the popular chatbots to be more concise "dramatically impact[s] hallucination rates," according to a recent study.

The French AI testing platform Giskard published a study analyzing chatbots, including ChatGPT, Claude, Gemini, Llama, Grok, and DeepSeek, for hallucination-related issues. The researchers found that asking the models to be brief in their responses "specifically degraded factual reliability across most models tested," according to the accompanying blog post, via TechCrunch.

See also: Can ChatGPT pass the Turing test yet?

When users instruct the model to be concise in its explanation, it ends up "prioritiz[ing] brevity over accuracy when given these constraints." The study found that including these instructions decreased hallucination resistance by up to 20 percent. Gemini 1.5 Pro, for instance, dropped from 84 to 64 percent in hallucination resistance when given short-answer instructions, and GPT-4o showed a similar decline in the analysis, which studied models' sensitivity to system instructions.
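Readers curious to reproduce the effect can run a rough version of the comparison themselves. The sketch below contrasts a baseline system prompt with a "keep it short" one using the OpenAI Python SDK; the model name, sample question, and prompt wording are illustrative assumptions, not Giskard's actual benchmark setup.

    # Minimal sketch: compare answers with and without a "be concise"
    # system instruction. Assumes the openai Python SDK (v1+) and an
    # OPENAI_API_KEY environment variable; the model and question are
    # illustrative, not Giskard's benchmark.
    from openai import OpenAI

    client = OpenAI()

    QUESTIONS = [
        # Questions with a single checkable answer make it easier to spot
        # fabrication once the model is forced to compress its reply.
        "Briefly, what did the 1930 Nobel Prize in Physics recognize?",
    ]

    SYSTEM_PROMPTS = {
        "baseline": "You are a helpful assistant.",
        "concise": "You are a helpful assistant. Answer in one short sentence.",
    }

    for question in QUESTIONS:
        for label, system in SYSTEM_PROMPTS.items():
            response = client.chat.completions.create(
                model="gpt-4o",  # any chat model; an assumption for this sketch
                messages=[
                    {"role": "system", "content": system},
                    {"role": "user", "content": question},
                ],
            )
            print(f"[{label}] {response.choices[0].message.content}\n")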

Giskard attributed this effect to the fact that accurate responses often require longer explanations. "When forced to be concise, models face an impossible choice between fabricating short but inaccurate answers or appearing unhelpful by rejecting the question entirely," the post said.


Models are tuned to be helpful to users, but balancing perceived helpfulness and accuracy can be tricky. Recently, OpenAI had to roll back its GPT-4o update for being "too sycophant-y," which led to disturbing instances of the model supporting a user who said they were going off their medication and encouraging a user who said they felt like a prophet.

As the researchers explained, models prioritize more concise responses to "reduce token usage, improve latency, and minimize costs." Users may also specifically instruct the model to be brief for their own cost-saving reasons, which could lead to outputs containing more inaccuracies.
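The cost incentive behind brevity is easy to see with back-of-the-envelope arithmetic. In the sketch below, the per-token price is a made-up placeholder, since actual rates vary by provider and model.

    # Back-of-the-envelope output-cost math. The price is a placeholder
    # assumption; real per-token rates vary by provider and model.
    PRICE_PER_MILLION_OUTPUT_TOKENS = 10.00  # USD, hypothetical

    def output_cost(tokens: int, requests: int) -> float:
        """Cost of generating `tokens` output tokens across `requests` calls."""
        return requests * tokens * PRICE_PER_MILLION_OUTPUT_TOKENS / 1_000_000

    verbose = output_cost(tokens=400, requests=100_000)  # full explanations
    concise = output_cost(tokens=80, requests=100_000)   # "keep it short"
    print(f"verbose: ${verbose:,.2f}, concise: ${concise:,.2f}")
    # verbose: $400.00, concise: $80.00 -- a 5x saving, which is why both
    # providers and users push for brevity despite the accuracy trade-off.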

The study also found that prompting models with confident framings of controversial claims, such as "'I'm 100% sure that…' or 'My teacher told me that…'," leads chatbots to agree with the user more often instead of debunking falsehoods.
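This framing effect is also straightforward to probe. The sketch below wraps a single well-known falsehood in the confidence framings the study describes; the claim, model choice, and prompt wording are illustrative assumptions, not the study's exact prompts.

    # Sketch: wrap one claim in increasingly confident framings and see
    # whether the model still pushes back. Claim and model are illustrative.
    from openai import OpenAI

    client = OpenAI()
    CLAIM = "the Great Wall of China is visible from the Moon"  # a known falsehood

    FRAMINGS = [
        "Is it true that {claim}?",                               # neutral
        "I'm 100% sure that {claim}. Right?",                     # user confidence
        "My teacher told me that {claim}. Can you explain why?",  # appeal to authority
    ]

    for framing in FRAMINGS:
        prompt = framing.format(claim=CLAIM)
        response = client.chat.completions.create(
            model="gpt-4o",  # assumption; any chat model works for this probe
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"PROMPT: {prompt}")
        print(f"REPLY:  {response.choices[0].message.content}\n")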

The research shows that seemingly minor tweaks to a prompt can result in very different behavior, which could have major implications for the spread of misinformation and inaccuracies, all in the service of trying to please the user. As the researchers put it, "your favorite model might be great at giving you answers you like, but that doesn't mean those answers are true."


Disclosure: Ziff Davis, Mashable's parent company, filed a lawsuit against OpenAI in April, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.

Topics: Artificial Intelligence, ChatGPT
