
AI, AR, and the return of the bicameral mind

  • Writer: dcharold
  • Aug 7
  • 4 min read

Created by AI

I’m a sucker for disproven scientific theories that work better as metaphor than fact: orgone theory, mesmerism, the luminiferous ether. One of my favorites is the bicameral mind, which posits that three thousand years ago most people didn’t think the way we do today: they heard voices.


In 1976, psychologist Julian Jaynes proposed that ancient humans lacked a self-aware, introspective inner monologue. They experienced some thoughts as voices from somewhere else: the command of a god, the counsel of an ancestor, the shout of a leader. These voices were vivid and authoritative, and, crucially, they were obeyed. When I was a teen in the 1980s, the theory still carried some heft, and I remember reading about it in fascination.


Believing Jaynes’ theory in its literal form is about as smart as believing in astrology. Archaeology, linguistics, and neuroscience point instead to a gradual emergence of consciousness over tens of thousands of years. But even if the bicameral mind never truly existed, it’s still an interesting speculative lens for imagining different mental architectures. Which raises a strange possibility: maybe AI won’t just simulate those ancient voices in our heads. Maybe, for future humans, it will make them real.


AI isn’t just another search engine or tool. It talks back, it adapts, and, with each refinement in natural language processing, it sounds more like a person you’d trust. Once an AI feels like a trusted advisor, speaking fluently, confidently, and in your own language, it can slip into the same cognitive space as Jaynes’ ancient voices. As an experience, it’s no longer “I looked something up.” It’s “the voice told me.”


If we start relying on it for moment-to-moment guidance, that’s more than convenience. It’s the reappearance of a mental division: one part acting, the other obeying.


If AI is the new voice, AR is the new vision

Augmented reality’s promise is seductive: digital information layered seamlessly over the physical world. Neurologically, that’s close to what a hallucination is: a perception generated internally but experienced as part of the real world. When the lines blur between what your senses detect and what your devices project, reality becomes a negotiated space. The map isn’t just the territory, it is the territory, because you can’t tell where one stops and the other begins.


You’re walking through a city with an AI guide. You see highlighted storefronts that match your taste profile, restaurant menus translated and reordered according to your diet, subtle arrows overlaid on the pavement steering you to your next meeting. The voice in your ear reassures you that you’re on the fastest route, reminds you of the name of the person you’re about to meet, and tells you the latest news about their company.


At that point, the AI isn’t just advising. It’s perceiving for you. It’s deciding what’s worth your attention and what isn’t before you’re even aware of the choice.

It’s already happening:


  • Apple Vision Pro + ChatGPT plug-ins – Early testers are combining AR spatial interfaces with AI-powered assistance, letting the headset act as both an information overlay and a conversational guide.

  • Microsoft HoloLens in industrial settings – AI-driven diagnostics highlight problem areas in real-world equipment and offer step-by-step repair instructions in the wearer’s field of view.

  • Google Lens with AI summaries – Point your phone at a product or landmark and get instant, AI-curated information, a bridge between seeing something and hearing the authoritative explanation.

  • Niantic Spatial’s Lightship platform – Combines AR world-mapping with AI object recognition so digital objects can interact with real-world features in context-aware ways.

  • AR wayfinding in cars – Some premium vehicles now project navigation arrows onto the road itself, paired with voice guidance that adapts to traffic conditions in real time.


Individually, these might feel like conveniences. Taken together, they’re the start of a perceptual ecosystem where what you see and hear is as much machine-generated as it is naturally perceived.


Jaynes suggested that the bicameral mind faded because it was too rigid. In times of crisis, humans needed the flexibility of introspection rather than the certainty of divine command. Are we reintroducing that split? Or are we creating something new: a hybrid cognition where human and machine form a shared mental environment?


There’s a pessimistic read: we might outsource so much agency that our sense of self erodes, becoming a passenger in our own perceptions.


But my take is the optimistic one: AI and AR will free us from cognitive drudgery, let us focus on higher-level thinking, and augment creativity.


Still, the more integrated these systems become, the more important it will be to ask: Is this my thought, my perception, or is it the system’s? And does that matter to me?


Our own thoughts will be co-authored.


The first bicameral mind may never have existed, but its re-creation is now possible. In Jaynes’ version, the voices were gods. In ours, they’ll be systems. And they’ll know more about us than any oracle in history.
