Spotting Hallucinations and Errors

What Is a Hallucination?

A hallucination is when AI makes something up and presents it as fact. The answer sounds confident, but it is not true. AI is not lying on purpose. It is trying to be helpful and sometimes fills in gaps with invented information.

Examples of Hallucinations

Made-Up Facts

You ask: "What is the market share of the top three productivity software companies?"

AI might answer with specific percentages that sound real but are completely wrong. It made up the numbers.

Fake Citations

You ask: "Who wrote the book about remote work culture?"

AI might say: "As stated in 'Remote Culture 2020' by Sarah Johnson." But this book does not exist. Sarah Johnson never wrote it. AI invented the title and author.

Confident but Wrong

You ask: "When was our product launched?"

AI answers: "Your product launched on March 15, 2019."

But your product actually launched in 2021. AI does not know your company and made up a date that sounds believable.

How to Spot a Hallucination

Red Flag 1: Too Specific

Hallucinations often include specific details like dates, numbers, or names. If AI gives you very specific facts, be suspicious. Check them.

"This feature was released on June 3, 2018." (Specific = check it)

Vs.

"This feature was released a few years ago." (Vague = less hallucination risk)

Red Flag 2: You Don't Recognize It

If AI mentions a fact, person, or source you have never heard of, do not trust it. Verify first.

"According to the 2023 Industry Report on Workplace Wellness" (Does this report exist? Check.)

Red Flag 3: It Answers Questions About Your Specific Business

AI does not know details about your company unless you have given it that information. If you ask "How many employees do we have?" and AI gives you a number, that number is made up.

When Hallucination Risk Is Highest

Very High Risk (Always Verify)

  • Names of people
  • Dates and deadlines
  • Numbers and statistics
  • Legal or medical information
  • Specific details about your company or product
  • Citations or references to studies

Medium Risk (Verify the Important Ones)

  • Historical facts
  • Technical details
  • Business numbers

Lower Risk (But Still Check)

  • General advice
  • Explanations of concepts
  • Writing or creative content

What to Do About Hallucinations

The Verification Habit

Do this automatically for high-risk content:

  1. AI gives you a fact
  2. You ask yourself: "Do I know this is true?"
  3. If no, check it. Google it. Ask someone. Look it up in your documents.
  4. Use it only after you have confirmed it is correct

When You Find a Hallucination

Do not blame AI or yourself. Just fix it:

  1. Tell AI it was wrong
  2. Give AI the correct information
  3. Ask it to try again

Example:

You: "Our company was founded in 1995, not 2005. Please rewrite your answer with the correct founding date."

Real-World Example

You ask: "Write a press release about our new partnership with TechCorp."

AI writes: "We are proud to announce our partnership with TechCorp, an industry leader in AI solutions founded in 2010."

Your thought: Wait. Was TechCorp founded in 2010? I am not sure. I should check.

You check: TechCorp was founded in 2008, not 2010. This is a hallucination.

You fix it: "TechCorp was founded in 2008. Rewrite the press release with the correct founding date."

Key Takeaway

AI is helpful but not always accurate. For anything important, especially facts, dates, numbers, or information about your business, verify before you use it. This is not paranoia. It is being professional.
