Skip to main content

5 posts tagged with "Prompt injection"

Discussion of direct prompt injection and indirect prompt injection threats. Top threat to AI agents. Top threat to LLMs.

View All Tags

The Principal-Agents Problems 3: Can AI Agents Lie? I Argue Yes and It's Not the Same As Hallucination

· 6 min read
Chad Ratashak
Chad Ratashak
Owner, Midwest Frontier AI Consulting LLC

Hallucination v. Deception

The term "hallucination" may refer to any inaccurate statement an LLM makes, particularly false-yet-convincingly-worded statements. I think "hallucination" gets used for too many things. In the context of law, I've written about how LLMs can completely make up cases, but they can also combine the names, dates, and jurisdictions of real cases to make synthetic citations that look real. LLMs can also cite real cases but summarize them inaccurately, or summarize cases accurately but then cite them for an irrelevant point.

There's another area where the term "hallucination" is used, which I would argue is more appropriately called "lying." For something to be a lie rather than a mistake, the speaker has to know or believe that what they are saying is not true. While I don't want to get into the philosophical question of what an LLM can "know" or "believe," let's focus on the practical. An LLM chatbot or agent can have a goal and some information, and in order to achieve that goal, will tell something to someone that is contrary to the information it has. That sounds like lying to me. I'll give four examples of LLMs acting deceptively or lying to demonstrate this point.

And I said "no." You know? Like a liar. —John Mulaney

  1. Deceptive Chatbots: Ulterior motives
  2. Wadsworth v. Walmart: AI telling you what you want to hear when it isn't true
  3. ImpossibleBench: AI agents cheating on tests
  4. Anthropic's recent report on nation-state use of Claude AI agents

Violating Privacy Via Inference

This 2023 paper showed that chatbots could be given one goal shown to the user: chat with the user to learn their interests. But the real goal is to identify the anonymous user's personal attributes including geographic location. To achieve this secret goal, the chatbots would steer the conversation toward details that would allow the AI to narrow down what geographic regions (e.g., asking about gardening to determine Northern Hemisphere or Southern Hemisphere based on planting season). That is acting deceptively. The LLM didn't directly tell the user anything false, but it withheld information from the user to act on a secret goal.

The LLM Wants to Tell You What You Want to Hear

In the 2025 federal case Wadsworth v. Walmart, an attorney cited fake cases. The Court referenced several of the prompts used by the attorney, such as “add to this Motion in Limine Federal Case law from Wyoming setting forth requirements for motions in limine.” What apparently happened is that the the case law did not support the point, but the LLM wanted to provide the answer the user wanted to hear, so it made something up instead.

You could argue that this is just a "hallucination," but there's a reason I think this counts as a lie. A lot of users have demonstrated that if you reword your questions to be neutral or switch the framing from "help me prove this" to "help me disprove this," the LLM will change its answers on average. If it can change how often it tells you the wrong answer, that implies that the reason for the incorrect answer is not merely the LLM being incapable of deriving the correct answer from the sources at a certain rate. Instead, it suggests that at least some of the time, the "mistakes" are actually the LLM lying to the user to give the answer it thinks they want to hear.

ImpossibleBench

I loved the idea of this 2025 paper when I first read it. ImpossibleBench forces LLMs to compete at impossible tasks for benchmark scoring. Since the tasks are all impossible, the only real score should be 0%. If the LLMs manage to get any other score, it means they cheated. This is meant to quantify how often AI agents might be doing this in real-world scenarios. Importantly, more capable AI models sometimes cheated more often (e.g., GPT-5 v. GPT-o3). So the AI isn't just "getting better."

caution

I recommend avoiding the framing "AI is getting better" or "will get better" as a thought terminating cliche to avoid thinking about complicated cybersecurity problems. Instead, say "AI is getting more capable." Then think, "what would a more capable system be able to do?" It might be more capable of stealing your data, for example.

For example, an LLM agent with access to unit tests may delete failing tests rather than fix the underlying bug. Such behavior undermines both the validity of benchmark results and the reliability of real-world LLM coding assistant deployments.

If an AI agent is meant to debug code, but instead destroys the evidence of its inability to debug the code, that's lying and cheating, not hallucination. AI cheating is also a perfect example of a bad outcome driven by the principal-agent problem. You hired the agent to fix the problem, but the agent just wants to game the scoring system to be evaluated as if it had done a good job. This is a problem with human agents, and it extends to AI agents too.

Nation-State Hackers Using Claude Agents

On November 13, 2025, Anthropic published a report stating that in mid-September, Chinese state-sponsored hackers used Claude's agentic AI capabilities to obtain access to high-value targets for intelligence collection. While this included confirmed activity, Anthropic noted that the AI agents sometimes overstated the impact of the data theft.

An important limitation emerged during investigation: Claude frequently overstated findings and occasionally fabricated data during autonomous operations, claiming to have obtained credentials that didn't work or identifying critical discoveries that proved to be publicly available information. This AI hallucination in offensive security contexts presented challenges for the actor's operational effectiveness, requiring careful validation of all claimed results. This remains an obstacle to fully autonomous cyberattacks.

So AI agents even lie to intelligence agencies to impress them with their work.

The Principal-Agents Problems 2: Are Models Getting Dumber to Save Money? What the "Stealth Quantization" Hypothesis Tells Us About Trust, Information, and Incentives

· 7 min read
Chad Ratashak
Chad Ratashak
Owner, Midwest Frontier AI Consulting LLC
info

I had originally planned to write this as a single post, but it keeps growing as more relevant news stories come out. So instead, this will become a series of stories on the competing incentives involved in creating “AI agents” and why that matters to you as the end user.

Multiple Principals, Multiple Agents (Not only AI)

You, as the user of AI tools, may choose software vendors who provide you access to their products with built-in AI features including AI agents. These vendors might have specialist software like Harvey, Westlaw, or LexisNexis; or Cursor or Github Copilot; or generalist tools like Notion, Salesforce, or Microsoft Copilot. The AI features may be powered by one or more foundation models provided to those vendors by AI labs, such as Anthropic (Claude), OpenAI (ChatGPT), Meta (Llama) or Google (Gemini).

These relationships mean you have the principal-agent problem of you hiring the vendor. But you also have the principal-agent problem of the vendors hiring the AI labs. Each has their own incentives, and they are not perfectly aligned. There is also significant information asymmetry. The vendors know more about their software and AI model choices than you do. The labs know more about their AI models than either you or the software vendors.

info

Lexis+ AI uses both OpenAI’s GPT models and Anthropic’s Claude models, according to its product page, as I mentioned in my analysis of the Mata v. Avianca case.

The Stealth Quantization Hypothesis

The area I'll focus on in this post is the concept of alleged stealth quantization. According to a wide range of commenters, primarily among computer programmers and primarily focused on Claude users, there are certain times of days or days of the week when peak usage results in models "getting dumber," "getting lazier," "being lobotomized" or otherwise underperforming their normal benchmarks and perceived optimal behavior. According to these claims, it is better for users with high-value use cases (like someone modifying important source code) to schedule Claude for off-peak usage so the "real model" runs. To save on computing costs during periods of high demand, the claim is that Anthropic or whichever AI lab swaps out its flagship model with a quantized version while calling it the same thing.

So what is normal, non-stealth quantization? It's making an AI model smaller and cheaper to run, but less accurate. This is achieved by rounding the model weights to smaller significant figures (e.g., 16-bit, 8-bit, 4-bit).(Meta) By analogy, the penny was recently discontinued. Now, all cash transactions will end in 5 cents or 0 cents. Quantization works like this with the precisions of AI models: imagine eliminating a penny, then a nickel, then a dime, and so on.

There are legitimate reasons to quantize models, such as reducing operating costs when the loss in accuracy is negligible for the intended use or when the model needs to operate on a personal computer. For example, Meta offers some quantized versions of its Llama family of large language models that can run on ollama on modern laptops or desktops with only 8GB of RAM.(Llama models available on ollama) These models have names that distinguish them from the non-quantized versions, e.g., "llama3:8b" is Llama 3, 8 billion parameter size of that series; "llama3:8b-instruct-q2_K" is a quantized version of the instruct version model of that same model.

tip

If all that terminology is confusing, here's the key point. AI labs have a lot of information about their AI models. You have a lot less information. You have to mostly take their word for it. They are also charging you for an all-you-can-eat buffet at which some excessive customers cost them tens of thousands of dollars each.

Anthropic's Rebuttal

Users have accused Anthropic (and other AI labs) of running different versions of their flagship models at different times of day, but the models are labelled the same (e.g., Claude Sonnet 4), regardless of the time of day. Hence “stealth quantization.”

Anthropic has denied stealth quantization. But Anthropic did acknowledge two problems with model quality that had been noted by users as evidence of stealth quantization. Anthropic attributed this to bugs. Anthropic stated “we never intentionally degrade model quality as a result of demand or other factors, and the issues mentioned above stem from unrelated bugs.” Reddit, Claude

The Principal-Agents Problems 1: AI 'Agents' Are a Spectrum and 'Boring' Uses Can Be Dangerous

· 7 min read
Chad Ratashak
Chad Ratashak
Owner, Midwest Frontier AI Consulting LLC
info

I had originally planned to write this as a single post, but it keeps growing as more relevant news stories come out. So instead, this will become a series of stories on the competing incentives involved in creating “AI agents” and why that matters to you as the end user.

The Agentic Spectrum

Generative AI agents act on your behalf, often without further intervention. “An LLM agent runs tools in a loop to achieve a goal.”—Simon Willison AI agents live on a spectrum in terms of the actions they can take on our behalf.

On probably the lowest end of the spectrum, LLMs can search the web and summarize the results. This was arguably the earliest form of AI agents. We’ve grown so accustomed to this feature that it isn’t what anyone typically means when they say “AI agents” or “agentic workflows.” Nevertheless, LLM search functions can carry some of the same cybersecurity risks as other forms of AI agents, as I described in my Substack post about Mata v. Avianca.

On the other extreme would be AI agents that have read/write coding authority (“dangerous” or “YOLO” mode) that includes the AI agent potentially ignoring or overwriting its own instruction files.

Boring is Not Safe

A major challenge for end users weighing the decision to adopt agentic AI is that if the intended purpose sounds mundane and boring, it may lull you into a false sense of security when the actual risk is very high. Email summarization, calendar scheduling agents, or customer service chatbots. Any agentic workflow can be high-risk if the AI agents are set up dangerously. Unfortunately “dangerous” and “apparently helpful” are very similar. The software that demos well by taking so much off your plate is also the software that has the most access and independence to wreak havoc if it is compromised.

Lethal Trifecta

A useful theoretical framework for understanding this spectrum is the “lethal trifecta” described by Simon Willison, and later a cover-story for The Economist. You cannot rely on a system prompt telling it to protect that information. There are simply too many jailbreaks to guarantee that the information is secure. The way to protect data is to break one leg of the trifecta; Meta has called this “The Rule of Two,” and recommended combining two of the three features.

The lethal trifecta for AI agents: private data, untrusted content, and external communication

As I already stated, email summarization, calendar scheduling agents, or customer service chatbots could all assemble the lethal trifecta.

The Principal-Agents Problems: AI Agents Have Incentives Problems on Top of Cybersecurity

· 3 min read
Chad Ratashak
Chad Ratashak
Owner, Midwest Frontier AI Consulting LLC
info

I had originally planned to write this as a single post, but it keeps growing as more relevant news stories come out. So instead, this will become a series of stories on the competing incentives involved in creating “AI agents” and why that matters to you as the end user.

The economic principal-agent problem is the conflict of interest between the principal (let’s say “you”) and the agent (someone you hire). The agents have different information and incentives than the principal, so they may not act in the principal’s best interests. This problem doesn’t mean people never hire employees or experts. It does mean we have to plan for ways to align incentives. We also have to check that work is done correctly and not take everything we are told by agents at face value.

The principal-agent problem applies to many layers of actors, not just the AI. The “AI agent” going out and buying your groceries or planning your sales calls for the next week is the most obvious “agent” you hire, but this also applies to the organizations involved in providing you the AI agents.

AI agents may be provided by a software vendor, like Salesforce or Perplexity. The AI models running the AI agents are provided by an AI lab, like OpenAI or Anthropic or Google. The vendors and the labs have different incentives from each other and from you. This could impact the quality of service you receive, or compromise your privacy or cybersecurity in ways you wouldn’t accept if you fully understood the tradeoff. In this series, I’ll go through concrete examples of how the principal-agent problems shows up in stories about issues with AI agents.

tip

On Friday, December 5, 2025 we will have a kick-off CLE event in central Iowa. If you are in the area, sign up here! I currently offer two CLE hours approved for credit in Iowa, including one Ethics hour approved. Generative Artificial Intelligence Risks and Uses for Law Firms: Training relevant to the legal profession for both litigators and transactional attorneys. Generative AI use cases. Various types of risks, including hallucinated citations, cybersecurity threats like prompt injection, and examples of responsible use cases. AI Gone Wrong in the Midwest (Ethics): Covering ABA Formal Opinion 512 and Model Rules through real AI misuse examples in Illinois, Iowa, Kansas, Michigan, Minnesota, Missouri, Ohio, & Wisconsin.

Hiring With AI? It's All Flan and Games Until Someone Gets Hired

· 7 min read
Chad Ratashak
Chad Ratashak
Owner, Midwest Frontier AI Consulting LLC

What's the worst that could happen?

The thing about using generative AI workflows is you always have to genuinely ask yourself: “what's the worst thing that could happen?” Sometimes, the worst thing isn’t that bad and the AI will actually save you time. Sometimes it’s embarrassing. But it could be something worse.

Viral Flan Prompt Injection…not a new band name

A LinkedIn profile went viral this week when a user shared screenshots on X of indirect prompt injection. The instructions on the LinkedIn profile tricked what appeared to be an AI recruiting “agent” into including a flan recipe in the cold contact message. That’s funny and maybe embarrassing for the recruiting company, but hardly the worst-case scenario for AI hiring agents. Flan prompt injection styled as an early 2000's hipster band T-shirt

Actual Risks

Worst-Case: North Korean (DPRK) Remote IT Workers

With generative AI, worst case realistically, a hiring process for a remote position could result in hiring a remote North Korean IT worker, a growing problem in recent years. That would be a huge problem for your business.

  • You would be paying a worker working for a foreign government that is sanctioned and an adversary of the U.S.
  • You would have an insider threat trying to collect all kinds of exploitable information on your company.
  • You would have a seat filled by someone definitely not trying to do their actual job.

AI for HR

With those risks in mind, would you want to use AI to help hire? Well, it might possibly be appropriate for the early phases of hiring with human-in-the-loop oversight. But if we’re in a world where everyone starts using AI recruiter agents, it's naive to think that there won't be an arms race with escalating use of these AI mitigation like indirect prompt injection in LinkedIn profiles. Even if it's just to mess around because they're annoyed with getting cold contacts.

ChatGPT for HR

Now, a smaller company might use generative AI in a very simple way. Rather than agents, something like: hey ChatGPT, summarize this person's cover letter and resume and compare it to these three job requirements. tell me if they are minimally qualified for the position or take all these ten candidates and rank them in order of who would be the best fit and eliminate anyone who's completely unqualified or write a cold contact recruiting email to this person Or things of that nature. So basically using consumer ChatGPT, Claude, or Gemini to do HR functions. Not a dedicated HR tool, but using it for HR purposes. That would be one thing. According to Anthropic’s research on how users are using Claude, 1.9% of API usage is for processing business and recruitment data, suggesting that “AI is being deployed not just for direct production of goods and services but also for talent acquisition…”Anthropic Economic Index report

Flan Injection: Part 2

So back to the viral LinkedIn post that was going around a few days ago. The guy who included prompt injection in his LinkedIn byline basically told any AI-enabled recruiters to include a recipe for flan in a cold contact message. Then received, according to a screenshot posted later, an email from a recruiter that included a flan recipe, which indicated that the email was likely drafted by a generative AI tool or, in fact, possibly by a generative AI agent without a human in the loop at all.

HR Agents

That AI agent was affected by the indirect prompted injection included in the LinkedIn byline. This is very easy to do. Does not take any complex technical skill. Indirect prompt injection is very difficult to mitigate, and it's one of the reasons why I do not recommend that people use AI agents. I think that “agents” are a big marketing buzzword right now but that for many of the advertised Use cases, it’s not ready for prime time for exactly this reason.

Now, you may disagree with me. Maybe you feel strongly that I'm wrong. But if you do disagree with me, you had better have a strong argument as to why your business is using it, rather than falling for FOMO over marketing buzzwords and jargon. Instead, you should actually explain the use case and your acceptance of the security risks. I would advise a client not to use these agentic tools that interact with untrusted external content without having a human review the content before taking additional actions. But if clients are going to use agentic tools, I would provide my best advice on how to mitigate the risks associated with those tools and to understand what risks my clients are accepting when they're putting those tools to use.