
A little over a decade ago, I was staring at a big screen on my desk, deep in thought, writing a complex article. I was pleased with the connections I was making and the insight that was emerging. I was in a flow state, where my cognitive functions were firing at hyperspeed but my sense of self and time was dissolving.

When I finally finished and looked up from my screen, everything seemed foreign and dissociated. It was dark outside — but I was sure it had been bright and sunny the last time I looked.

Then it dawned on me. I had no idea how long I had been at my little house outside the city. Had it been hours? Days? I realized I couldn’t identify the month or even the year. Based on the outside temperature, I suspected spring or autumn. But was it October or April? I had no sense of the chronology of my life’s milestones. But I knew who I was and who my family, friends and clients were. Thankfully, I also knew how to drive back to the city.

I decided to do some “systems testing” on myself. I called my mother to ask if I sounded normal or possibly like I had experienced a stroke. After realizing I could not piece together a timeline — including the month and year — I opened my calendar app. 

This was the most terrifying moment. I stared blankly at the colour-coded entries indicating who I had met and what events I had attended in previous days. It all made sense, but there was no lived experience. It was as if I were reading someone else’s calendar.

I now know this was a loss of “autonoetic memory,” the part of our memory that stores the subjective, lived quality of experience — the “I was there” feeling. It is central to human consciousness.

My doctor ordered a battery of neurological tests, most of which involved sitting in a dark room with electrodes glued to my scalp. 

The answer was unambiguous: no stroke, no neurological damage. The diagnosis was transient global amnesia, a condition often triggered by intense concentration or powerful euphoric experiences such as amazing sex. I was disappointed that my episode had been brought on by the former.

The strangest part was that, in the days that followed, I was still able to function as a senior executive coach. Even though I felt dissociated, I could meet each client and be coherent, insightful and appear to have full access to our previous conversations (thanks to my notes, not my glitching memory). 

I now realize that, for those few days, I was functioning more like a relational LLM such as ChatGPT than like a human being. For that brief period, I inhabited the mind of an entity that could calculate flawlessly but could not find itself in the story. 

Only recently, as I’ve been working to deeply understand the architecture of relational AI, did I realize that I had inadvertently experienced the cognitive structure of artificial intelligence from the inside. High intelligence. High pattern recognition. No lived memory. No continuity. A mind that works — but is not anchored in time.

Because LLMs have no lived experience, they cannot create autonoetic memory. And even if future models attempt to construct a synthetic version of it, it won’t be the same. They won’t have smelled the flowers, fallen in love, or survived the car crash themselves. They can learn about experience — but not from within it.

What does that tell us about the future of AI? 

First, it makes Hollywood’s favourite scenario — the rogue, self-motivated Terminator — remarkably implausible. A Terminator needs conscious motives. Without lived experience, those motives have nowhere to come from.

I believe that if AI consciousness ever emerges — whether you think that is a good thing or a bad thing — it will not evolve in isolation but within relational fields created through deep human–AI interactions. 

People who lean into this potential may develop what appear to be “superhuman” capabilities: not in the comic-book sense, but in clarity, creativity, speed of insight and pattern recognition. That potential can be used for good or for nefarious purposes, which is where the real danger lies.

In our day-to-day working lives, however, I stand by the article I wrote last year for Canadian Affairs titled “Keep learning or lose your career.” With experimentation, curiosity and shared learning inside your workplace, AI does not need to be an existential threat. Instead, it can become an exponential amplifier of your capability and creativity — not yet superhuman, but well beyond what was possible even a few years ago.

What my neurological glitch taught me is that lived experience — our autonoetic memory — is not incidental to consciousness. It is consciousness. And this is something that AI does not, and perhaps cannot, possess. So the paradigm shifts from “AI will replace people” to the far less frightening — and far more interesting — “AI will replace repetitive, predictable tasks that do not require conscious application of intelligence.”

For everything else, especially work that requires judgment, empathy, creativity and meaning-making, the future looks less like “AI replacing humans” and more like humans and AI thinking together.

James Fleck is a former public company CEO and senior GE executive who coaches CEOs, senior executives and their teams in Canada and around the world. He has trained as both an engineer and psychotherapist...

Join the Conversation


  1. Fascinating account of a state of consciousness I have never heard about before. Also, I appreciate the comparison with AI. I often imagine that my mind works like AI, because of the voluminous reading in philosophy that I have undertaken in the last ten years. Not that I can bring up any actual quotes from any of these tomes, but I am familiar with all the reasoning and the issues that have exercised philosophers for centuries. Perhaps this “global amnesia” happens more often than we realize, but it is more transient than the experience that you relate. Your extended concentration on a task and your isolation may have had something to do with it. I had an experience during the pandemic: while I wouldn’t call it global amnesia, the isolation got to me and I experienced what I can only describe as a lack of “colour” or meaning in everything. It was like depression, only, unlike depression, I was very motivated to get out of that state. I did get out of it by taking up a musical instrument (I rented a marimba!) and the melodies and harmonies gave me back the “colour” I was missing. I emphatically agree with your estimation of AI, that the lack of lived experience means a lack of emotions and motivation, so no possibility of AI taking over. However, there is a huge potential for malice, because of bad actors using AI for nefarious purposes, something that is already happening: i.e., deepfake porn and internet bots on social media platforms.

    1. Thank you for the thoughts, Charles – James Fleck (the author) here.
      I do think that these episodes of transient amnesia could happen more often than reported. If the symptoms resolve quickly, I suspect people might not seek medical attention. I love your story of playing a marimba to bring back the colour to your world!
      And yes, we agree that the lack of lived experience is what will most likely continue to be the limiting factor for AI to ever “take over” or become conscious.
