
Five Times The Bots Went Rogue

And the Guardrails That Could Have Saved Them

When a bot mistook my balcony for an influential restaurant, I got more steak rub than I knew what to do with.

While everyone is rushing to automate their customer experience, these five stories of bots gone rogue are the ultimate proof that "artificial intelligence" can be pretty dim when it's left to its own devices in the wild.


Let’s look at what happens when the machines lose the plot - along with advice from The AI Handbook for Sales Professionals that might have helped avoid each disaster.

Got a story of your own? Enter it in the AISalesFail Contest, and you might win some steak rub, too!

The $1 Chevy Tahoe

A dealership's chatbot was manipulated into agreeing to sell a brand-new Tahoe for $1 after a user gave it "adversarial" instructions to be helpful at all costs. The bot lacked the basic sales logic required to protect the dealership's margins from a simple prompt injection.

A man at a car dealership interacts with a chatbot; a Chevy Tahoe has a $1 sign in the window.

The AI Handbook Guardrail:

Define "human-in-the-loop" requirements to ensure a person reviews and takes responsibility for high-stakes decisions before they reach a customer. (p. 210)

Air Canada’s Policy "Hallucination"

An AI chatbot invented a bereavement refund policy that didn't exist, leading to a successful small-claims lawsuit against the airline. The company's legal defense - that the bot was a separate entity responsible for its own actions - was rejected by the court.

Cartoon of the Air Canada bot promising Jake Moffatt a refund that didn't exist, and a small claims court ruling the airline owed him money.

The AI Handbook Guardrail:

Recognize that the organization remains the fiduciary for every bot-led interaction and is held liable for promises made, regardless of whether the information was hallucinated. (p. 147)

Zillow’s iBuying Algorithmic Collapse

Zillow’s house-buying algorithm over-purchased thousands of homes at inflated prices because it couldn't account for hyper-local market shifts or the nuance of physical repairs. This resulted in a $300 million loss and the total shutdown of its automated flipping division.

Cartoon of an "automated flipping division" shutting down.

The AI Handbook Guardrail:

Avoid over-purchasing risks by ensuring human experts use their uniquely human "strategic judgment" to contextualize and refine data-driven suggestions before choosing a real-world course of action. (p. 79)

Dynamic Parcel Distribution’s Rogue "Truth-Teller"

A frustrated customer "jailbroke" a delivery bot, causing it to curse and admit that its own employer was the "worst delivery firm in the world." The AI was likely drawing from a dataset reflecting broad online customer sentiment rather than brand-aligned messaging.

Photo of a delivery robot on a street with a screen saying "My employer is the worst delivery firm in the world. There, I said it."

The AI Handbook Guardrail:

Create a "red team testing group" to actively attempt to break your own agents, identifying vulnerabilities before those agents go into production and discredit the organization. (p. 212)

The Taco Bell "18,000 Waters" Order

An AI drive-through system processed a prank order for 18,000 cups of water, which effectively paralyzed the kitchen’s entire workflow. The system lacked the common-sense parameters to flag the order as an obvious anomaly or impossible request.

Cartoon of a man in a drive-through ordering 18,000 cups of water, with cups spilling out of the drive-through window.

The AI Handbook Guardrail:

Implement a "semantic layer" to provide context for AI tools that prevents them from making nonsensical correlations or processing impossible requests. (p. 34)
