Midwest Frontier AI Consulting

Company Blog

Law | CLE & Training | Business Consulting

A Better Way to Search with AI for Domain-Specific Information (and The Value of LLMs That Don't Always Answer)
There are a lot of large language model (LLM) chatbots, like ChatGPT, Claude, and Google Gemini, that offer answers. The problem is that they are biased toward answering even when there is insufficient information, so they make things up (sometimes called "hallucinations"). Or they give us the answer they think we want to hear, rather than the answer supported by the evidence (sometimes called "sycophancy"). That's why my favorite feature of Dewey, and the reason for this article's title, is Dewey's non-answer.

Chad Ratashak
Doppelgänger Hallucinations Test for Google Against the 22 Fake Citations in Kruse v. Karlen
I used a list of 22 known fake cases from a 2024 Missouri state case to conduct a Doppelgänger Hallucination Test. Slightly fewer than half of the Google searches generated an AI Overview, and half of those AI Overviews hallucinated that the fake cases were real. For the remaining cases, I tested "AI Mode," which hallucinated at a similar rate.

Deep Dive on AI Book Spoofing on Amazon: Fakes targeting authors including Karen Swallow Prior and Kyla Scanlon
I originally learned about the AI book spoofing problem in mid-August, when I came across a Substack note from Karen Swallow Prior complaining about fake books appearing on Amazon to coincide with her book release (a common tactic in book spoofing, according to the Authors Guild). I have a background in financial crimes intelligence analysis and open-source intelligence (OSINT), and I write about generative artificial intelligence risks and misuse. So, out of curiosity, I looked into Prior's claims. Then, in early October, I came across complaints on X/Twitter by author Kyla Scanlon dealing with the same issues, and I looked into those too.