Ah, yes, the impeccable wisdom of George Costanza.

Before the current “AI Age”, as we know it, the term “confabulation” hadn’t really entered the common vernacular. Sure, it existed, but mostly as the esoteric domain of clinicians and psychology researchers.

Fundamentally, confabulation is the filling in of gaps in an anecdote or recalled situation with statements or words that seem plausible but are actually false. Note that confabulation isn’t intentional, unlike lying [praise George again]. It is a natural response based on stored memories that have been distorted with the passage of time, yet the speaker is unaware of those distortions and relays them as if they were the truth.

It follows from that definition that confabulation is not, technically, the same as hallucination. We associate hallucinations with irrationality, and thus they tend to be sorely lacking in plausibility; but when it comes to confabulations, there’s nothing irrational about them (pardon the double negative).

Sometimes we only see a small slice of a greater picture, and the human mind generally has a strong desire to make sense out of incomplete information. So confabulation acts as a sort of “defence mechanism” against the psychological frustration of being kept in the dark. This, by the way, is closely related to other cognitive biases like the “anecdotal fallacy”, the “illusion of validity”, and insensitivity to sample size.

That said, if you’re in some highly formal capacity for scrutinizing information, like, oh, I don’t know – as a military analyst – then, ahem, maybe having confabulations is just a little more consequential. But then you are also more rigid about the facts and their disclosure, i.e. “I can neither confirm nor deny such-and-such.” In respecting the principle of need-to-know, and its downstream derivative of plausible deniability, confabulation soon becomes an evicted tenant in the rented space of one’s mind.

It is also possible to have “collective confabulation”, a.k.a. the Mandela effect, named for the general observation that many people falsely believed Nelson Mandela had died in prison in the 1980s. We have seen this occur with famous movies, like the line near the end of The Empire Strikes Back, where many people [still] believe[d] that it went “Luke, I am your father.” (The correct line: “No, I am your father.”) In more traumatic turns of history, the effect has also come into play: witness the aftermath of the 9/11 attacks, when many people falsely remembered watching the first plane crash into the north tower on TV on the day-of. Yet that footage didn’t air until the day after.

Paradoxically, overlearning something can be a cause of confabulation, presumably due to limited brain resources. When certain information occupies a lot of space in one’s memory, it tends to “crowd out” other details. Then, if gaps in memory appear later on, this overlearned information can overpower and displace more specific facts and memories. (Source: https://www.verywellmind.com/confabulation-definition-examples-and-treatments-4177450)

This sort of paradigm might explain the current conundrum of AI LLMs (Large Language Models) returning information from a query that is inaccurate or exaggerated. These LLMs are, in effect, confabulating – but not intending to deceive, of course. If such models were trained on a large amount of data that contained fake news, then the humans developing them are in part to blame. Algorithms are not sentient beings, and can only respond based on what they “know” (read: what they have been trained on).

The antidote, or at least a bulwark, for achieving a [more] reliable LLM lies mainly in the RAG – Retrieval-Augmented Generation – mechanism. This is a supplementary step that retrieves relevant, trusted source material at query time and feeds it to the LLM as context, grounding its answers in actual documents and thereby mitigating confabulation.
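For the code-curious, here is a rough, minimal sketch of the RAG idea in Python. To be clear, the embed() and generate() functions below are hypothetical stand-ins (a toy bag-of-words “embedding” and a stubbed model call), and the DOCUMENTS list is an assumed miniature knowledge base; a real pipeline would swap in a proper embedding model, a vector store, and an actual LLM call.

```python
# A minimal, illustrative RAG sketch (not a production pipeline).
# embed() and generate() are hypothetical stand-ins so the example runs on its own.

from collections import Counter
import math

# A tiny "knowledge base" of trusted snippets (assumed content, for illustration only).
DOCUMENTS = [
    "Confabulation is the unintentional filling of memory gaps with plausible but false details.",
    "Hallucination in AI refers to a model producing content not grounded in its input or training data.",
    "Retrieval-Augmented Generation supplies retrieved source text to a language model as context.",
]

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(DOCUMENTS, key=lambda d: cosine_similarity(q, embed(d)), reverse=True)
    return ranked[:k]

def generate(prompt: str) -> str:
    """Hypothetical LLM call; a real system would send the prompt to a model here."""
    return f"[model answer grounded in the prompt below]\n{prompt}"

def rag_answer(question: str) -> str:
    """Retrieve supporting text, then ask the model to answer ONLY from it."""
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer the question using ONLY the context below. "
        "If the context is insufficient, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)

if __name__ == "__main__":
    print(rag_answer("How does confabulation differ from lying?"))
```

The key design point is in rag_answer(): the model is instructed to answer only from the retrieved context and to admit when that context is insufficient, which is precisely the anti-confabulation discipline the technique is meant to encourage.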

On a more facetious-yet-indignant note, I suppose this all fuels a bit of an “excuse valve” for some organizations to blame the confabulation on the machine when it produces some perverse result – as if we, the humans, have no accountability. It’s in a similar vein to when you call your telco’s support centre and they say “Oh, we’re sorry, the computer won’t let me override this charge you’re contesting”, or you wait on hold for a while and the excuse is “I’m sorry, but the system is just very slow”; you get the picture. (And yet they say machines aren’t taking over from humans, but perhaps I digress.)

The point is, there ultimately has to be some human accountability behind AI large language models, or a “human in the loop”. The data and information must be properly calibrated, contextualized, and kept semantically intact, which requires more than just the RAG remedy if AI is to do any better than we humans do at avoiding confabulation. And that, my friends, is synergy in motion.