Does AI Erode Legal Reasoning? A UMN Law Study Finds That It Doesn’t For Certain Tasks, With Advice on Specific Use
I provided feedback on an earlier draft of this article and am thanked in the introduction. I will aim to be fair and candid.
Seeing everyone share April Fools' Day jokes on social media, many of them AI-generated images, reminded me that we've hit a believability inflection point: AI image generation is now so good that we no longer believe real things.
I spoke today with Scot Weisman of LaunchPad Lab about generative AI risks ("AI for Business: Legal Risks Every Executive Should Know"). I mentioned a framework I frequently discuss for the main risks of generative AI: many different problems tie back to three core risks—hallucination, prompt injection, and sycophancy. I could give plenty of examples of each (the related stories at the bottom include some), but in this post I want to focus on the overall framework for understanding these risks conceptually.