Smart Thinking

a RightTime publication

When AIs Dream: The Case for Letting Our Algorithms Hallucinate

What if our digital endeavor to purge AI of all hallucinations is misguided? Could we actually be performing a digital lobotomy?
March 1, 2024

I’ve had this theory about AI that’s been brewing in my mind, like a bad cup of coffee—yes, the cheap kind you get from an airport cafe at 5 a.m.

This might sound as nutty as an over-caffeinated squirrel, but bear with me. There’s something magical in those AI-generated “hallucinations” that we were all too quick to dismiss, and perhaps we’re missing the forest for the trees. Or in AI’s case, missing the code for the bits.

When ChatGPT first waltzed onto the scene in late 2022, it wasn’t exactly Fred Astaire. It tripped over its own digital feet, spouting inaccuracies, fabricating nonexistent academic papers, and occasionally botching basic facts about the physical world. The term “hallucination” became the go-to critique, a stick used to beat the AI whenever it wandered off the factual path laid out for it. And sure, as a stick, it worked—but perhaps it was a bit too effective.

Over the past year, there’s been a frenzied rush to fix this. Researchers have thrown everything but the kitchen sink at the problem—from updating training data to integrating sophisticated grounding methods like retrieval-augmented generation (RAG) or tying outputs to reality through knowledge graphs. Yet as I sat there over my third (or was it fourth?) cup of coffee, lamenting how AI models like ChatGPT are just exotic interpolators, a light bulb went off. Well, more like a dim, flickering fluorescent bulb, but it was something.
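For the curious, the grounding idea behind RAG is simple enough to sketch. What follows is a toy of my own making: the corpus, the word-overlap scorer, and the function names are illustrative assumptions, not any real library’s API. Production systems use learned embeddings and a vector index instead of word counting, but the shape of the trick is the same.

```python
# A toy sketch of retrieval-augmented generation (RAG): before the model
# answers, fetch relevant text and pin the prompt to it. The corpus and
# the naive overlap scorer below are illustrative assumptions only.
CORPUS = [
    "ChatGPT was released by OpenAI in November 2022.",
    "Retrieval-augmented generation grounds model answers in fetched documents.",
    "Knowledge graphs tie model outputs to structured facts.",
]

def retrieve(question: str, k: int = 1) -> list[str]:
    """Rank documents by naive word overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(
        CORPUS,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def grounded_prompt(question: str) -> str:
    """Prepend retrieved context so the model answers from evidence, not memory."""
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(grounded_prompt("When was ChatGPT released?"))
```

The whole trick lives in that last function: the model is no longer free-associating from everything it has ever absorbed; it is asked to stay inside the fence of retrieved text. Which, as we’re about to see, is exactly what gives me pause.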

Interpolation in a high-dimensional space isn’t just about drawing straight lines between data points. It can be more like a wild dance through a myriad of possibilities. And here’s where my bad coffee analogy comes full circle—it’s in this chaotic dance that AI can sometimes brew up something unexpectedly profound. Sure, most of the time it’s just spewing nonsense about the economics of using arctic sea ice in cost-effective coffee production (which, for the record, makes no sense), but every once in a while, it stumbles upon something genuinely innovative. And when it does, we hail the result as intelligence, conveniently forgetting the survivorship bias at play.
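If you want to see that wildness for yourself, here is a minimal numpy sketch; the dimensionality and the unit-sphere framing are illustrative assumptions of mine. In high dimensions, two random “data points” are nearly orthogonal, and the straight line between them immediately leaves the sphere they live on.

```python
# A minimal demonstration that straight-line interpolation in high
# dimensions leaves the region the data occupies. Assumes numpy.
import numpy as np

rng = np.random.default_rng(0)
dim = 4096  # an embedding-sized dimensionality, chosen for illustration

a = rng.standard_normal(dim)
a /= np.linalg.norm(a)  # a random point on the unit sphere
b = rng.standard_normal(dim)
b /= np.linalg.norm(b)  # another one; in high dimensions, nearly orthogonal to a

midpoint = (a + b) / 2  # the naive straight-line interpolation

print(f"a . b      = {a @ b: .3f}")                    # ~0.00: nearly orthogonal
print(f"|midpoint| = {np.linalg.norm(midpoint):.3f}")  # ~0.71: well off the sphere
```

The midpoint sits far off the sphere, in territory no training example ever occupied. That off-manifold wandering is where both the arctic-coffee nonsense and the occasional flash of genuine novelty come from.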

Take, for example, Isaac Newton. Good old Isaac spent a considerable chunk of his life dabbling in alchemy—yes, the medieval equivalent of trying to turn LinkedIn endorsements into actual skills. We laugh about it now, but his willingness to explore the edges of scientific understanding (hallucinate, if you will) contributed to the groundwork of modern physics. Newton’s dive into the irrational wasn’t a detour; it was part of the journey.

So, what if our digital endeavor to purge AI of all hallucinations is misguided? What if in our zeal to digitize the Enlightenment, we’re actually performing a digital lobotomy? Cutting out the very errors and missteps that might lead to breakthroughs?

Let’s be clear: I’m not advocating for letting AI run wild in mission-critical systems. No one wants a hallucinating AI performing surgery or piloting an aircraft. But in creative fields? In brainstorming sessions? In theoretical explorations? Why not let the AI dream a little? After all, the boundary between blunder and brilliance is often drawn by the pencil of perspective.

Perhaps the future of AI shouldn’t just be about making these systems less prone to error, but also about better understanding the nature of their errors. Maybe, just maybe, we need to allow AI to be a bit more human—flawed, yes, but also capable of leaps of imagination that no amount of cautious calculation can produce.

So, as I sip on my disappointing coffee, I can’t help but wonder if we might all benefit from letting our AIs off the leash from time to time. Let them chase their digital tails, bark up the wrong algorithmic trees, and yes, even dream a little. Who knows? The next big idea might just be buried in their digital delusions.

Now, if only AI could come up with a way to make this airport coffee taste any better, that would be a real miracle.

