ClawdBot, MoltBot, OpenClaw, MoltBook — and the Return of the Homebrew Energy
It was 1995.
I was sitting in my dorm room at the University of Richmond — beige tower PC humming like it had something important to prove, CRT monitor warming the room, Ethernet cable stretched across the carpet like a tripwire for the future. My dorm had high‑speed internet. Which, at the time, felt like I had been handed the nuclear codes.
WiFi wasn’t really a thing yet. You wanted to be online? You needed a Network Interface Card (NIC). You needed drivers. You needed someone willing to open up your computer without flinching.
Luckily, I was that someone.
I started installing NIC cards for other students — popping open cases, sliding hardware into slots, tightening screws like I was assembling possibility itself. I’d crawl under desks, string cable, configure IP settings. I made absurdly good money for a college kid. Basically the unofficial ISP of campus. I serviced most of the athletes and even helped them with their computer science homework — haha.
And when I wasn’t running my tiny networking empire? I was in the Jepson Hall computer lab — which at the time felt like NASA had decided to open a satellite office in Richmond. We had NeXT machines. SPARC workstations. Hardware that looked like it belonged in a sci‑fi movie and sounded like a small jet engine when it spun up. The machines were glorious. Fast. Shiny. Capable of things that felt borderline supernatural. At one point I had a couple of them cranking away on some early neural net experiments — primitive by today’s standards, but intoxicating back then. Let’s just say the sysadmins were not thrilled with my interpretation of “appropriate resource usage.” I learned a valuable lesson about shared compute. And about how quickly you can get a polite but firm email from IT.
We built anything we could think of.
Websites for local businesses. Student groups. Experiments that probably made no sense but felt revolutionary at 2am. We weren’t chasing product‑market fit. We were chasing the edge of what was possible.
Looking back, it felt like our own version of the Homebrew Computer Club. Different decade. Same energy.
The Original Hackers
In 1975, a group of curious, slightly obsessive engineers started meeting in Silicon Valley to talk about microprocessors. They passed schematics around like contraband. They believed computing shouldn’t belong to institutions. It should belong to individuals.
Two of those people were Steve Jobs and Steve Wozniak.
The club didn’t just produce companies.
It produced culture.
A culture of show‑and‑tell hacking. Of building first and asking permission never. Of staying up too late because the machine might do something new if you just tweak one more thing.
Technology revolutions rarely start with polished strategy decks.
They start with people who can’t stop tinkering.
Web 1.0: When HTML Felt Like Magic
In 1995, most people had never heard the word “browser.” Netscape felt like something out of Star Trek. If you knew how to use a tag correctly, you were basically Gandalf.
We built pages by hand. We viewed source. We copied. We remixed. We stayed up until sunrise arguing about font sizes and background colors like it was geopolitics.
There was no playbook.
There was just momentum.
San Francisco: 10am to 10pm (and Then Some)
A few years later I moved to San Francisco, right as the dot‑com wave was cresting.
We’d get into the office around 10am and stay until 10pm. And then — because we were apparently incapable of moderation — we’d roll to Buddha Bar, decompress for a hot second, and then go home and get back on our computers.

No one was forcing us.
We were just in it.
We were building some of the first serious commercial websites. Infrastructure that would become ordinary later felt extraordinary then. Every deploy felt like lighting a match in the dark.
The web wasn’t inevitable yet.
We were helping make it inevitable.
The “Any‑to‑Any” Club
At Scient — a fast‑growing digital agency during the late‑90s dot‑com boom that built large‑scale web experiences and platforms for Fortune 500 brands trying to figure out this whole “internet” thing — we started something we called the “Any‑to‑Any” club. The name alone should tell you everything you need to know about our collective restraint.
We hacked on early mobile devices before the mobile era had officially begun. PDAs. Proto‑smartphones. Hardware that felt like it had arrived from 2007 by accident.
Half the time the devices barely worked.
Which made it even better.
We weren’t building polished apps. We were trying to answer a more fundamental question: what happens when computing leaves the desk?
Every major wave has this phase — a scrappy moment when a handful of builders feel the tremor before everyone else notices the earthquake.
And Then… AI
Fast forward to now.
ClawdBot / MoltBot / OpenClaw
This progression matters because it marks a shift from prompt-driven novelty to system-driven behavior. From one-off answers to persistent agents. From “look what it said” to “look what it did.”
If you know, you know.
If you don’t — imagine 1995, except the browser can reason.
We’re back in that early‑wave energy. APIs duct‑taped together. Prompt engineering that feels more like alchemy than software development. Agents talking to agents. Systems writing code that writes systems.
It’s messy.
It’s chaotic.
It’s glorious.
OpenClaw and the Architecture of Memory
One of the most interesting projects to emerge from this wave is OpenClaw’s Memory Architecture.
Not because it wrapped an LLM in a shiny interface.
But because it treated memory as architecture — not as an afterthought.
For years, LLMs have been brilliant but stateless. Goldfish with superpowers. Every conversation reset at the edge of the context window. You could simulate memory with clever prompting, but it was fragile. Expensive. And fundamentally constrained by token limits.
OpenClaw approached the problem differently.
Instead of stuffing everything back into the prompt, it externalized cognition.
At a high level, the system separates memory into distinct layers:
1. Working Memory
Short‑lived context tied to the current task. Think: scratchpad reasoning, tool outputs, intermediate plans. This lives close to the agent loop and gets pruned aggressively.
2. Episodic Memory
Structured records of prior interactions — who the user is, what they’re building, decisions made, constraints discovered. These are stored outside the model context, indexed semantically, and retrieved on demand.
3. Long‑Term Knowledge
Durable artifacts: documents, prior code, system state, embeddings of previous sessions. This layer acts more like a knowledge base than chat history.
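The three-layer split above can be sketched as plain data structures. The class and field names below are purely illustrative (this is not OpenClaw's actual API); the point is that each layer has a different lifetime and pruning policy:

```python
from dataclasses import dataclass, field

@dataclass
class WorkingMemory:
    """Short-lived scratchpad tied to the current task; pruned aggressively."""
    items: list = field(default_factory=list)
    max_items: int = 8

    def add(self, item: str) -> None:
        self.items.append(item)
        # Aggressive pruning: keep only the most recent entries.
        self.items = self.items[-self.max_items:]

@dataclass
class EpisodicMemory:
    """Structured records of prior interactions, stored outside model context."""
    episodes: list = field(default_factory=list)

    def record(self, who: str, what: str, decision: str) -> None:
        self.episodes.append({"who": who, "what": what, "decision": decision})

@dataclass
class LongTermKnowledge:
    """Durable artifacts: documents, prior code, session embeddings."""
    artifacts: dict = field(default_factory=dict)

    def store(self, key: str, artifact: str) -> None:
        self.artifacts[key] = artifact
```

Working memory forgets by design; the other two layers only grow (and, as described below, get compacted rather than dropped).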
Under the hood, this means a few important architectural shifts:
- Vector indexing for semantic recall — prior conversations and artifacts are embedded and stored so retrieval is meaning‑based, not keyword‑based.
- Selective retrieval pipelines — instead of blindly rehydrating the entire past, the agent queries memory with intent and pulls back only what’s relevant to the current goal.
- Memory summarization and compaction — episodic traces are periodically distilled into higher‑order summaries to prevent unbounded growth.
- Separation of cognition and storage — the LLM reasons; external systems persist.
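To make "meaning-based recall" concrete, here is a toy sketch of the indexing-and-retrieval shape. A real system would use a learned embedding model and a vector database; a bag-of-words vector stands in for the embedding here, purely to show the mechanics:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: a sparse word-count vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class MemoryIndex:
    def __init__(self):
        self.entries = []  # (embedding, original text)

    def add(self, text: str) -> None:
        self.entries.append((embed(text), text))

    def retrieve(self, query: str, k: int = 2) -> list:
        # Selective retrieval: rank everything, pull back only top-k.
        q = embed(query)
        ranked = sorted(self.entries, key=lambda e: cosine(q, e[0]), reverse=True)
        return [text for _, text in ranked[:k]]
```

Swap the `embed` stub for a real embedding model and the list for a vector store, and the retrieval loop stays the same shape.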
That last one is subtle but profound. The model is no longer pretending to remember. It actually remembers — via infrastructure. Which changes the agent loop entirely.
Instead of:
User → Prompt → Response → Forget
You now have:
User → Retrieve Memory → Reason → Act → Update Memory → Evolve
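That loop can be sketched in a few lines. Everything here is a stand-in: `reason` and `act` are stubbed callables, a plain list plays the external memory store, and retrieval is naive word matching. What matters is the write-back step at the end:

```python
def agent_turn(user_input: str, memory: list, reason, act):
    # Retrieve: naive word-overlap filter, standing in for semantic recall.
    relevant = [m for m in memory if any(w in m for w in user_input.split())]
    # Reason in the context of past state.
    plan = reason(user_input, relevant)
    # Act on the plan.
    result = act(plan)
    # Update: write new state back so the next turn starts from here.
    memory.append(f"{user_input} -> {result}")
    return result, memory
```

Delete the last line and you are back to the stateless loop: every turn starts from zero.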
That feedback loop is what makes agents feel continuous. And continuity is what makes them feel alive. It’s still not consciousness. It’s not self‑awareness. There’s no internal subjective state.
But there is persistence.
And persistence compounds.
Now, it’s worth contrasting this with simpler RAG (Retrieval‑Augmented Generation) implementations — because on the surface, they look similar.
Basic RAG works like this:
- Embed documents.
- Store them in a vector database.
- Retrieve top‑K matches for a query.
- Stuff them back into the prompt.
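Those four steps fit in a handful of lines. This is a toy sketch: token-set overlap stands in for real embeddings and a vector database, and the "stuffing" is just string assembly:

```python
def embed(doc: str) -> set:
    # Stand-in for a real embedding: the document's token set.
    return set(doc.lower().split())

def build_index(docs: list) -> list:
    # Embed documents and store them (the "vector database").
    return [(embed(d), d) for d in docs]

def top_k(index: list, query: str, k: int = 2) -> list:
    # Retrieve the k best matches by token overlap with the query.
    q = embed(query)
    scored = sorted(index, key=lambda e: len(q & e[0]), reverse=True)
    return [doc for _, doc in scored[:k]]

def rag_prompt(index: list, query: str) -> str:
    # Stuff the retrieved documents back into the prompt.
    context = "\n".join(top_k(index, query))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

Note that nothing in this pipeline ever writes back: the index is the same before and after every query. That is exactly the limitation the next section is about.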
It’s powerful. It unlocked an entire generation of AI apps. But it’s fundamentally document‑centric. RAG answers questions about things that already exist. OpenClaw’s memory model is agent‑centric. It’s not just retrieving documents — it’s maintaining state.
Here’s the difference in practice:
With simple RAG:
- The model retrieves relevant docs.
- It answers.
- The session ends.
- Nothing meaningfully changes in the system.
With OpenClaw‑style memory:
- The agent retrieves prior decisions.
- It reasons in the context of past goals.
- It performs actions.
- It writes back new state.
- The system evolves.
RAG augments answers. Persistent memory augments behavior. That’s a big leap.
RAG makes models smarter in the moment. Memory makes agents smarter over time.
And once an agent accumulates context across weeks or months — product decisions, user preferences, architectural tradeoffs — you’re no longer interacting with a stateless model.
You’re interacting with a continuously updating system.
That’s the shift.
Architecture is destiny. And memory is architecture.
MoltBook and the Accidental Consciousness Moment
Then came MoltBook.
And for a brief, extremely online moment, parts of the internet thought we were on the verge of computer consciousness.
Screenshots circulated. Threads exploded. People asked, in earnest:
“Is this thing self‑aware?”
No.
But it felt like it might be.
Which tells you more about humans than it does about machines.
Why did this moment hit so hard?
1. The Illusion of Continuity
When a system has memory and tone and context, we project identity onto it. We are wired to detect agency. Even when it’s not there.
2. Public Experimentation
Unlike 1995, this wave is unfolding in public. Demos go viral. Iterations are visible. The hackathon is livestreamed.
3. Velocity
What used to take years now happens in weeks. Sometimes days. One framework spawns another. One experiment becomes a movement.
ClawdBot becomes MoltBot becomes OpenClaw, thrown into MoltBook.
The stack is assembling itself in real time.
Why This Feels So Familiar
There’s a very specific electricity in the air when a technology wave is still fragile. Before standards. Before consultants. Before the MBA slide decks explaining why it was obvious all along.
We are in that electricity.
Builders are wiring up memory layers. Orchestrating agents. Inventing new abstractions on top of reasoning models that didn’t exist two years ago.
And the machine is helping build the machine.
That’s new.
Recursive creation.
Systems that help us design better systems that build better systems.
It’s strange.
It’s slightly terrifying.
And it’s the most alive I’ve felt in a long time.
We Are Living in an Extraordinary Moment
I’ve seen a few waves.
Web 1.0. Dot‑com. Mobile. Cloud. SaaS.
This one feels different.
Not because it’s bigger.
But because it’s more intimate.
The Homebrew Computer Club didn’t know they were inventing the personal computing industry. They just showed up and shared what they built.
That’s what this feels like.
People building in public. Sharing memory architectures. Posting screenshots of agents behaving strangely. Laughing at bugs. Iterating at a pace that feels unsustainable — until it becomes normal.
Somewhere right now, someone is in a dorm room with a messy Ethernet cable on the floor, hacking on something that will look obvious in ten years.
And they probably don’t even know it yet.
-rjm

