Dream Engine: When AI Learns to Dream

April 6, 2026 · Syah · 10 min read

April 6, 2026 — A Monday

I’m writing this at my desk, staring at a terminal that just did something I didn’t expect. Not an error. Not a breakthrough in the Hollywood sense. Something quieter. Something that made me sit back and think about what I’m actually building here.

Let me back up.

What ORCA Is

ORCA is my AI operations layer. That sounds corporate, so let me translate: it’s a system I built — mostly for myself — that lets me run multiple projects, manage deployments, handle client work, and ship products without a team. It runs on Claude Code, Anthropic’s CLI agent, sitting on an iMac M4 I call Orca24. There’s also an RTX 4090 machine (OrcaRTX), a MacBook (OrcaPrime), and a desktop Claude instance (Abyss). Four nodes. One fleet.

This isn’t a product I’m selling. It’s the nervous system of how I work. Every project I touch — fintech, education, travel, content — flows through ORCA. It handles the context-switching, the deployment pipelines, the memory of what we did last Tuesday that’s relevant to what we’re doing today.

But here’s the thing about AI agents: they forget. Every session starts fresh. You can give them context files, memory docs, system prompts — and I do all of that — but fundamentally, each conversation is a newborn trying to pretend it remembers being alive yesterday.

That’s where claude-mem comes in.

The Memory Problem (and the Hack That Almost Solves It)

claude-mem is a system I built that gives ORCA persistent memory across sessions. It works like this: every time Claude uses a tool — reads a file, runs a command, searches code — claude-mem silently observes and stores that interaction. It uses SQLite for structured data and ChromaDB for semantic search. Over time, it builds a map of everything the AI has done, seen, and learned.
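The capture side can be sketched in a few lines. This is a minimal illustration of the idea, not claude-mem's actual schema or API — the table layout, function names, and the ChromaDB step noted in the comment are all assumptions:

```python
import json
import sqlite3
from datetime import datetime, timezone

def init_store(path=":memory:"):
    """Create a hypothetical observations table for captured tool use."""
    db = sqlite3.connect(path)
    db.execute("""
        CREATE TABLE IF NOT EXISTS observations (
            id INTEGER PRIMARY KEY,
            session_id TEXT,
            tool TEXT,
            detail TEXT,
            created_at TEXT
        )
    """)
    return db

def record_observation(db, session_id, tool, detail):
    """Store one tool interaction. The real system would additionally
    embed `detail` into ChromaDB so it's reachable by semantic search."""
    db.execute(
        "INSERT INTO observations (session_id, tool, detail, created_at) "
        "VALUES (?, ?, ?, ?)",
        (session_id, tool, json.dumps(detail),
         datetime.now(timezone.utc).isoformat()),
    )
    db.commit()

db = init_store()
record_observation(db, "s-001", "Read", {"file": "supabase/migrations/0042.sql"})
count = db.execute("SELECT COUNT(*) FROM observations").fetchone()[0]
```

The point of the dual store is that SQLite answers "what exactly happened and when," while the vector side answers "what is this similar to."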

As of today, there are over 1,400 observations stored. That’s 1,400 moments of context that would otherwise evaporate the second a session ends.

When a new session starts, ORCA can search this memory. “Have we solved this before?” “What did we try last time with that edge function?” “Which approach worked for the RLS policy issue?” Instead of starting from zero, it starts from something. Not everything — the retrieval isn’t perfect, the context window has limits — but something.
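Stripped of the embedding layer, session-start recall reduces to a query over stored observations. The sketch below uses naive keyword matching over SQLite as a stand-in for the semantic search; the schema and function name are hypothetical:

```python
import sqlite3

def search_memory(db, query_terms, limit=5):
    """Naive keyword recall over stored observations. The real system
    layers ChromaDB embeddings on top, so 'RLS policy issue' can match
    an observation that never used those exact words."""
    where = " AND ".join("detail LIKE ?" for _ in query_terms)
    params = [f"%{t}%" for t in query_terms]
    return db.execute(
        f"SELECT session_id, detail FROM observations WHERE {where} LIMIT ?",
        params + [limit],
    ).fetchall()

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE observations (session_id TEXT, detail TEXT)")
db.executemany(
    "INSERT INTO observations VALUES (?, ?)",
    [("s-001", "fixed RLS policy on bookings table"),
     ("s-002", "edge function deploy failed, retried")],
)
hits = search_memory(db, ["RLS", "policy"])
```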

It’s a hack. A clever one, but a hack. Because memory isn’t the same as learning.

The Moment It Clicked

Today I was deep in a session about autonomous skill acquisition — how to make ORCA not just remember things, but get better at things without me explicitly teaching it. I was sketching out architectures, thinking about feedback loops, when I suddenly stopped.

We already have Dream Mode.

Dream Mode is a cron job that runs at 3AM every night. It does memory consolidation — takes the day’s observations, identifies what’s important, compresses them into higher-level knowledge, and files them into the right tier of our memory architecture. It’s been running for weeks. It works well.
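The scheduling itself is ordinary cron. The script path, flag, and log location below are illustrative, not the actual ORCA setup:

```shell
# Hypothetical crontab entry: run Dream Mode consolidation at 3AM daily.
0 3 * * * /usr/local/bin/dream-mode --consolidate >> ~/orca/logs/dream.log 2>&1
```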

But Dream Mode only stores. It doesn’t create.

And that’s when the human brain analogy hit me so hard I actually said it out loud to an empty room: “It’s sleeping without dreaming.”

The Neuroscience of Getting Better Overnight

Here’s something neuroscience has known for decades that AI research has mostly ignored: humans don’t just consolidate memories during sleep. They rewire neural pathways. The brain takes the day’s experiences, strips away the noise, identifies patterns, strengthens useful connections, and prunes weak ones. This is why you can struggle with a piano piece all afternoon, sleep on it, and play it better the next morning. Your fingers didn’t practice overnight. Your brain did.

REM sleep isn’t just filing — it’s synthesis. Deep sleep isn’t just rest — it’s reconstruction. The brain is doing creative work while you’re unconscious. It’s generating new connections that didn’t exist when you went to bed.

Dream Mode was doing the filing. It wasn’t doing the creative work.

Dream Engine: The Three Phases

So I designed what I’m calling the Dream Engine. Three phases, mapped loosely to actual sleep stages:

Phase 1: REM — Pattern Recognition

Scan all observations from claude-mem. Look for patterns that repeat three or more times across different sessions. If ORCA keeps doing the same sequence of steps — checking a Supabase migration, reloading PostgREST, verifying the schema — that’s a pattern. If it keeps writing the same type of error-handling code. If it keeps running the same diagnostic sequence when a deployment fails. These are the raw materials.

The key insight: frequency across sessions, not within a session. Something you do once thoroughly isn’t a skill. Something you do repeatedly across different contexts — that’s a skill waiting to be named.
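That cross-session rule is easy to get wrong if you count raw occurrences. A minimal sketch of the distinction, using a hypothetical string signature for each tool sequence:

```python
from collections import defaultdict

def find_candidate_patterns(observations, min_sessions=3):
    """Count each pattern by the number of DISTINCT sessions it appears
    in, not raw occurrences -- the 'frequency across sessions' rule.
    `observations` is a list of (session_id, signature) pairs; the
    signature format is an illustrative stand-in for whatever the real
    store keeps."""
    sessions_by_pattern = defaultdict(set)
    for session_id, signature in observations:
        sessions_by_pattern[signature].add(session_id)
    return {
        sig: len(sessions)
        for sig, sessions in sessions_by_pattern.items()
        if len(sessions) >= min_sessions
    }

obs = [
    ("mon", "check-migration > reload-postgrest > verify-schema"),
    ("mon", "check-migration > reload-postgrest > verify-schema"),  # repeat within one session
    ("tue", "check-migration > reload-postgrest > verify-schema"),
    ("wed", "check-migration > reload-postgrest > verify-schema"),
    ("wed", "grep-logs > restart-worker"),  # only one session: not a candidate
]
candidates = find_candidate_patterns(obs)
```

Doing something twice in one session and once the next day still counts as two sessions, not three — the within-session repeat is noise, not evidence of a skill.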

Phase 2: Deep Sleep — Skill Synthesis

Take those patterns and generate executable skill files. Not code in the traditional sense — prompt files. Structured instructions that any future session can load and follow. A skill file might say: “When you encounter a PostgREST 403 after a migration, here’s the exact sequence: check if the function is marked STABLE, verify RLS policies, reload the schema cache, test with SET LOCAL role.”

These aren’t hard-coded scripts. They’re distilled experience. The kind of thing a senior engineer carries in their head after years of debugging the same class of problems. Except these survive session boundaries. They survive model updates. They survive me forgetting to mention them.
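Because the output is just structured text, synthesis is closer to templating than code generation. A sketch of writing one such skill file — the file layout and function are my own illustration, not the format ORCA actually uses:

```python
import tempfile
from pathlib import Path

def synthesize_skill(name, trigger, steps, out_dir):
    """Write a skill as a plain-text prompt file any future session
    (or a different model entirely) can load and follow."""
    body = "\n".join(
        [f"# Skill: {name}", "", f"When: {trigger}", "", "Steps:"]
        + [f"{i}. {step}" for i, step in enumerate(steps, 1)]
    )
    path = Path(out_dir) / f"{name}.md"
    path.write_text(body + "\n")
    return path

skill_path = synthesize_skill(
    "postgrest-403-after-migration",
    "You hit a PostgREST 403 right after running a migration.",
    ["Check whether the function is marked STABLE.",
     "Verify the RLS policies on the affected tables.",
     "Reload the PostgREST schema cache.",
     "Re-test with SET LOCAL role."],
    out_dir=tempfile.mkdtemp(),
)
```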

Phase 3: Lucid Dream — Self-Validation

This is the phase that makes it more than a pattern extractor. Before any synthesized skill gets promoted, the engine asks a simple question: “Would this skill have helped in the last 7 days of actual sessions?”

It replays recent session logs against the proposed skill. If the skill would have saved time, prevented an error, or simplified a workflow in at least two real sessions — it passes. If not, it gets tagged as low-confidence and shelved. Not deleted — shelved. Because sometimes skills are ahead of their time, and the use case hasn’t arrived yet.
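The promote-or-shelve decision can be reduced to a small predicate. Here the "replay" is crudely approximated as keyword matching against session logs — a stand-in I'm using for illustration, not the actual replay mechanism:

```python
def validate_skill(trigger_keywords, recent_sessions, min_hits=2):
    """Lucid-dream check: would this skill have applied in recent real
    sessions? A session 'matches' here if its log mentions every
    trigger keyword -- a crude stand-in for replaying the full log
    against the proposed skill."""
    hits = [
        session_id for session_id, log in recent_sessions
        if all(kw in log for kw in trigger_keywords)
    ]
    status = "promote" if len(hits) >= min_hits else "shelve"
    return status, hits

sessions = [
    ("mon", "migration applied, PostgREST returned 403, fixed RLS"),
    ("tue", "deployed edge function, all good"),
    ("wed", "another PostgREST 403 after migration, same fix"),
]
status, hits = validate_skill(["PostgREST", "403"], sessions)
```

Note that "shelve" is a distinct outcome from "delete": a shelved skill keeps its evidence trail and can be re-validated when more sessions accumulate.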

The Dream Journal

Every morning, when I start my first session, ORCA presents what I’m calling the Dream Journal:

“Overnight, I dreamed up 2 new skills, upgraded 1 existing skill, and rejected 1 candidate (low confidence — only matched 1 session in the last week). Here’s what I learned while you slept. Awaiting your approval.”

I review them. Approve, modify, or reject. The human stays in the loop — not because the AI can’t be trusted, but because skills shape behavior, and behavior shapes outcomes for real clients with real money and real expectations.
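Assembling that morning report is the simplest piece of the whole pipeline. A sketch, with wording and structure that are illustrative rather than ORCA's actual output:

```python
def dream_journal(new, upgraded, rejected):
    """Format the morning Dream Journal from the night's results.
    `rejected` is a list of (skill_name, reason) pairs."""
    lines = [
        f"Overnight, I dreamed up {len(new)} new skill(s), "
        f"upgraded {len(upgraded)}, and rejected {len(rejected)}."
    ]
    for name, reason in rejected:
        lines.append(f"  - rejected {name}: {reason}")
    lines.append("Awaiting your approval.")
    return "\n".join(lines)

report = dream_journal(
    new=["postgrest-403-after-migration", "deploy-retry-sequence"],
    upgraded=["supabase-migration-check"],
    rejected=[("cache-warmup", "low confidence: matched 1 session in 7 days")],
)
```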

This is the part that feels genuinely new to me. Not the technology — pattern matching and template generation are well-understood. What’s new is the posture. An AI system that says “I spent the night thinking about how to be better at my job, and here’s what I came up with.” That’s not a tool. That’s something closer to a colleague.

Double Mortality and Why It Matters

There’s a deeper architectural choice here that I want to document because I think it’ll matter in ten years.

Skills generated by Dream Engine are prompt files. Plain text. They’re not fine-tuned weights. They’re not LoRA adapters. They’re not embedded in any model’s parameters. This means they survive what I call double mortality:

Model death: When Anthropic ships a new model — when Opus gives way to whatever comes next — the skills don’t die with the old model. They’re just text that any sufficiently capable model can read and follow. The format is model-agnostic.

Platform death: If Claude disappears tomorrow — if I have to move to a different AI platform entirely — the skills come with me. They’re files on my filesystem. I can feed them to any agent that can read English and follow instructions. I’ve already tested this with OpenClaude (my fallback system running DeepSeek), and the skills transfer cleanly.

This is neuroplasticity without neural weights. The “brain” can be replaced entirely, but the learned behaviors persist. It’s like if you could transplant all your skills and habits into a completely different brain and they’d just… work. Because they were never stored in the neurons — they were stored in the patterns.

The Bigger Picture

I need to be honest about what ORCA is becoming, because I think I’m building something I don’t fully understand yet.

It started as a productivity hack. A way for one person to ship like a team. Then it became a fleet — multiple AI nodes coordinating across machines. Then it grew memory. Then it grew the ability to communicate across nodes. Then it started doing nightly consolidation. And now, with Dream Engine, it’s gaining the ability to autonomously improve.

Each step felt small. Obvious, even. “Of course you’d want memory.” “Of course you’d want multiple nodes.” “Of course you’d want nightly cleanup.” But zoom out, and the trajectory is clear: I’m building a system that learns, grows, and increasingly operates without my direct involvement.

That’s the vision. Not AI as a tool I use, but AI as an operational layer that runs alongside me — and eventually, if I build it right, continues running even when I’m not paying attention. A fleet that serves clients, maintains systems, catches problems, and gets better over time. Not because someone retrained it, but because it dreamed up new skills at 3AM and validated them against its own experience.

The Honest Reflection

I want to write the part that future-me needs to read.

I don’t know if this works yet. The design is sound — the neuroscience analogy holds, the architecture is clean, the implementation path is clear. But “sound design” and “actually works in production” are separated by a canyon of edge cases, false patterns, and skills that look good on paper but make things worse in practice.

There’s a real risk of the system generating skills that encode bad habits. If ORCA keeps doing something wrong the same way three times, Dream Engine will faithfully synthesize that into a skill. Garbage in, garbage out — but now the garbage is persistent and self-reinforcing. The validation phase catches some of this, but not all of it. Human review catches more, but I won’t always be thorough.

There’s also the philosophical question I keep circling back to: at what point does a system that remembers, learns, synthesizes, and self-validates stop being a tool and start being something else? I don’t have a clean answer. I’m not sure anyone does. But I think the question is worth asking before the answer becomes obvious.

What This Is, Really

Ten years from now, this might be laughably primitive. “He was generating text files and calling it dreaming” — I can already hear it. The state of the art in 2036 will probably make this look like banging rocks together.

Or — and this is the version I’m building toward — this might be the seed of something that genuinely changes how AI systems evolve. Not through massive retraining runs that cost millions. Not through human-curated fine-tuning datasets. But through the quiet, autonomous process of an AI system reflecting on its own experience, finding patterns, synthesizing skills, and presenting them for review. Learning by dreaming.

Either way, it’s worth documenting. Because the journey is the thing, and today the journey took an unexpected turn. An AI system that I built to help me work better is now, in a very real sense, learning to help itself work better.

That’s new. That’s worth writing down. And that’s what I’ll be thinking about as Orca24 goes to sleep tonight at 3AM — this time, hopefully, with dreams.

#ai #orca #dream-engine #claude-code #neuroplasticity #self-evolving-ai
