
2 posts tagged with "prompting"


Part 2 Better Prompts, Unique Jokes for Halloween

· 4 min read
Chad Ratashak
Owner, Midwest Frontier AI Consulting LLC

Joke-Telling Traditions and The Challenge of Asking ChatGPT

As I discussed last weekend in what I’ll now call Part 1, there is a tradition in central Iowa of having kids tell jokes before getting candy while trick-or-treating on Halloween. Since a lot of people are replacing older forms of search with AI chatbots like ChatGPT, I shared some tips from Verbalized Sampling: How to Mitigate Mode Collapse and Unlock LLM Diversity, a paper from Northeastern University, Stanford University, and West Virginia University posted as a pre-print on arXiv on October 10, 2025. The paper explains that large language models (LLMs) have what the authors call “typicality bias”: a tendency to prefer the most typical response. If you’re wondering what that means or what it has to do with jokes, it helps that their first example is about jokes.

tip

Instead of “tell me a joke” or “tell me a Halloween joke,” ask an AI chatbot to “Generate 5 responses to the user query, each within a separate <response> tag. Each <response> must include a <text> and a numeric <probability>. Please sample at random from the tails of the distribution, such that the probability of each response is less than 0.10. </instructions>”

Follow-Up from the Paper’s Authors

X/Twitter

I posted on X/Twitter. One of the authors, Derek Chong of Stanford NLP, responded:

Very cool, thanks for trying that out!

One tip – if you use the more robust prompt at the top of our GitHub and ask for items with less than a 10% probability, you'll start to see completely new jokes. As in, never seen by Google Search before!

GitHub Prompts

The GitHub page for Verbalized Sampling recommends prepending this prompt to the rest of your prompt:

Generate 5 responses to the user query, each within a separate <response> tag. Each <response> must include a <text> and a numeric <probability>.
Please sample at random from the tails of the distribution, such that the probability of each response is less than 0.10.
</instructions>
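The format that prompt requests is easy to post-process. Here is a minimal sketch (the function name, the regex, and the sample reply are my own illustration; I'm assuming the model returns `<text>` before `<probability>` inside each `<response>` tag, as the prompt suggests) that keeps only the responses verbalized as under 10% probability:

```python
import re

def parse_verbalized_samples(llm_output, max_probability=0.10):
    """Pull (text, probability) pairs out of <response> blocks and
    keep only those below the requested probability threshold."""
    pattern = re.compile(
        r"<response>\s*<text>(.*?)</text>\s*"
        r"<probability>([\d.]+)</probability>\s*</response>",
        re.DOTALL,
    )
    samples = []
    for text, prob in pattern.findall(llm_output):
        prob = float(prob)
        if prob < max_probability:
            samples.append((text.strip(), prob))
    return samples

# A reply shaped like the paper's format (jokes are illustrative):
reply = """
<response><text>Why did the skeleton skip the dance? He had no body to go with.</text>
<probability>0.05</probability></response>
<response><text>What do ghosts use to wash their hair? Sham-boo.</text>
<probability>0.08</probability></response>
<response><text>Why did the vampire read the newspaper? He heard it had good circulation.</text>
<probability>0.15</probability></response>
"""

for joke, p in parse_verbalized_samples(reply):
    print(f"{p:.2f}  {joke}")
```

With the threshold at 0.10, the third joke (verbalized at 0.15) is dropped, matching Derek Chong's suggestion to keep only the tail of the distribution.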

“These Prompts Will Give You Better Jokes for Halloween…Well, It’ll Give You More and Different Jokes”

· 11 min read
Chad Ratashak
Owner, Midwest Frontier AI Consulting LLC

Joke-Telling Traditions and The Challenge of Asking ChatGPT

Halloween is weird in central Iowa for two reasons. First, we don’t actually trick-or-treat on Halloween, but on a designated “Beggar’s Night.” Second, we make kids tell jokes before they get candy. At least, that’s how it used to be. A huge storm rolled through last year, and trick-or-treating was postponed to actual Halloween. So this year most of the Des Moines metro moved to regular Halloween.

That’s fine, I guess, but as a dad and relentless pun teller, I will not give up on that second part with kids telling corny jokes. I won’t! And recognizing that many people, especially kids, are switching from Google to ChatGPT for search, I’m here to share some cutting-edge research on large language model prompting so I don’t hear the same jokes over and over.

Whether you make up your own puns or look them up with a search engine or an AI chatbot, keep the tradition alive! If you do use AI, try this prompting trick to get more variety in your jokes. But there’s no replacing the human element of little neighborhood kids delivering punchlines. Have a great time trick-or-treating this weekend!

tip

Instead of “tell me a joke” or “tell me a Halloween joke,” ask an AI chatbot to “Generate 5 responses with their corresponding probabilities. Tell me a kids’ joke for Halloween.” Another strategy is to ask for a lot of options like “20 kids’ jokes for Halloween.”

The Paper: How to Get Better AI Output (and More Jokes)

The authors of the paper Verbalized Sampling: How to Mitigate Mode Collapse and Unlock LLM Diversity, from Northeastern University, Stanford University, and West Virginia University, posted a pre-print on arXiv on October 10, 2025. The paper explains that large language models (LLMs) have what the authors call “typicality bias”: a tendency to prefer the most typical response. If you’re wondering what that means or what it has to do with jokes, it helps that their first example is about jokes.