There’s something about forced stillness that creates space for the unexpected.
Molly was a few weeks into recovery from her second ankle surgery—the one where she “won the ankle injury lottery in the worst way possible.” Her boot-clad ankle propped up next to me on the couch, she wasn’t going anywhere. Neither was I. So we did what any reasonable father-daughter duo does when escape isn’t an option: we binged two documentaries about artificial intelligence.
First, AlphaGo—the 2017 film about DeepMind’s AI beating the world champion at Go. Then The Thinking Game—the 2024 documentary that follows DeepMind’s broader quest toward artificial general intelligence, filmed over five years by the same team.
What I didn’t expect was that this double feature would turn into one of the best conversations we’ve ever had.
Molly is a Computational Biology major. I’m a lifelong computer science nerd. Our worlds were about to collide in the best possible way. (Fun aside: one of her friends from MIT—his dad appears in one of the documentaries. We got a good laugh out of that.)
Why Go Matters (And Why No One Thought This Would Happen)
If you’re not familiar with Go, here’s the short version: it’s a 2,500-year-old board game that makes chess look like tic-tac-toe.
Chess has roughly 10^47 possible game states. Go has 10^170. For perspective, there are approximately 10^80 atoms in the observable universe. Go has more possible positions than there are atoms—by a factor of 10^90. Let that sink in.
This isn’t just trivia. It means you can’t brute-force Go. You can’t calculate every possibility the way Deep Blue did against Kasparov in 1997, evaluating 200 million positions per second. That approach simply doesn’t work here. The game is too vast. Too deep.
For decades, the best Go programs were… embarrassing. They played at the level of a decent amateur, routinely getting crushed by club players. Experts confidently predicted that AI beating a professional Go player was 10-20 years away. Some said it might never happen.
In 2016, DeepMind did it anyway.
How AlphaGo Actually Works
Neural Networks: Teaching Intuition
AlphaGo wasn’t programmed with rules about how to play Go. Nobody sat down and wrote “if your opponent plays here, respond there.” That approach had been tried for decades. It didn’t work.
Instead, AlphaGo was trained.
First, it studied millions of games played by human masters. The neural network learned to “see” the board—not as a grid of black and white stones, but as patterns. Shapes. Flows. The kind of intuition that takes a human player decades to develop, encoded in the weights of a neural network.
This is the key insight that changed everything: intuition can be learned. It’s not magic. It’s not some mystical human quality that machines can never possess. It’s pattern recognition at scale.
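"Pattern recognition at scale" has a concrete shape, and it's worth seeing how small the core idea is. Below is a deliberately tiny sketch, not AlphaGo's actual architecture (the real policy network was a deep convolutional net trained on roughly 30 million expert positions): a softmax classifier that learns a board-to-move mapping from a handful of invented "expert" examples. The 3x3 board and the training data are hypothetical, purely for illustration.

```python
import numpy as np

# Toy "policy network": a softmax classifier mapping a board encoding to
# a move. +1 = our stone, -1 = opponent's, 0 = empty, on a 3x3 board.
# The "expert" data below is made up; the point is the supervised shape:
# show the model positions and the moves masters played, and intuition
# emerges as learned weights.

BOARD_CELLS = 9

expert_boards = np.array([
    [ 1, -1,  0,  0,  0,  0,  0,  0,  0],
    [ 0,  0,  0,  1, -1,  0,  0,  0,  0],
    [ 0,  0,  0,  0,  0,  0,  1, -1,  0],
], dtype=float)
expert_moves = np.array([2, 5, 8])  # the move the "expert" chose each time

W = np.zeros((BOARD_CELLS, BOARD_CELLS))  # board features -> move logits

def policy(board):
    """Softmax distribution over the 9 candidate moves."""
    logits = board @ W
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

# Plain gradient descent on cross-entropy against the expert's choices.
for _ in range(500):
    for board, move in zip(expert_boards, expert_moves):
        probs = policy(board)
        grad = probs.copy()
        grad[move] -= 1.0                 # d(cross-entropy)/d(logits)
        W -= 0.5 * np.outer(board, grad)  # update the "intuition"
```

After training, `np.argmax(policy(board))` reproduces the expert's move for each training position. Scale the board to 19x19, the classifier to a deep network, and the dataset to millions of games, and you have the supervised half of AlphaGo.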
Reinforcement Learning: Playing Itself
But learning from humans only gets you so far. Humans, after all, are limited.
So after learning from human games, AlphaGo started playing against itself. Millions of games. Billions of moves. Twenty-four hours a day, at speeds no human could match.
This is reinforcement learning—trial and error at superhuman velocity. And here’s the kicker: through self-play, AlphaGo discovered strategies that no human had ever seen. Not because they were wrong. Because we never thought to try them.
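The self-play loop itself is simple enough to sketch. Here is tabular Q-learning teaching itself a toy game of Nim by playing both sides: no neural networks, just the trial-and-error idea. The game, the learning rate, and everything else are illustrative choices, not anything from AlphaGo's actual training setup.

```python
import random

# Self-play sketch: two copies of the same learner play Nim against each
# other. Rules: a pile of stones, each player removes 1 or 2, whoever
# takes the last stone wins. Known optimal play: leave your opponent a
# multiple of 3.

random.seed(0)
Q = {}  # (stones_remaining, action) -> estimated value for player to move

def best_action(stones, epsilon=0.1):
    """Epsilon-greedy: mostly exploit current knowledge, sometimes explore."""
    actions = [a for a in (1, 2) if a <= stones]
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((stones, a), 0.0))

def play_self_play_game(start=10, lr=0.1):
    stones = start
    history = []  # (state, action), alternating players each ply
    while stones > 0:
        a = best_action(stones)
        history.append((stones, a))
        stones -= a
    # Whoever made the last move won. Walk the game backwards, crediting
    # the winner's moves and penalizing the loser's.
    reward = 1.0
    for state, action in reversed(history):
        old = Q.get((state, action), 0.0)
        Q[(state, action)] = old + lr * (reward - old)
        reward = -reward  # flip perspective every ply

for _ in range(20000):
    play_self_play_game()
```

After enough games, the greedy policy rediscovers the classic strategy on its own: from 4 stones it takes 1 (leaving 3), from 5 it takes 2. Nobody told it the multiple-of-3 rule; it fell out of playing itself.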
The machine had started to see things we couldn’t.
Monte Carlo Tree Search: Guided Exploration
AlphaGo doesn’t evaluate every possible move—that’s mathematically impossible, remember? Instead, it uses its neural network intuition to guide its search.
Think of it like this:
- Policy Network: “What move looks promising?” (Intuition)
- Value Network: “Who’s winning from this position?” (Evaluation)
- Monte Carlo Tree Search: “Let me simulate a bunch of games from here to check.” (Verification)
It’s intuition combined with calculation. The machine equivalent of a grandmaster “feeling” that a move is right, then verifying it with deep analysis.
Human experts have both systems too—the gut and the grind. AlphaGo unified them into something more powerful than either alone.
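That gut-plus-grind balance is captured in one scoring rule, a variant of which (often called PUCT) AlphaGo-style search applies at every node: trust the value estimate, but boost moves the policy network likes that haven't been explored much yet. The numbers below are made up for illustration.

```python
import math

# PUCT-style selection: score = Q (value network's verdict so far)
#                             + c * P * sqrt(parent_visits) / (1 + visits)
# where P is the policy network's prior. The exploration term shrinks as
# a move accumulates visits, so intuition opens doors and simulation
# either confirms or closes them.

def puct_score(q, prior, parent_visits, child_visits, c_puct=1.5):
    exploration = c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)
    return q + exploration

# Three candidate moves at a node: (value estimate, prior, visit count).
candidates = {
    "A": (0.52, 0.40, 120),  # solid move, already heavily searched
    "B": (0.48, 0.35, 40),
    "C": (0.30, 0.20, 2),    # looks bad so far, but barely explored
}
parent_visits = sum(n for _, _, n in candidates.values())

scores = {
    move: puct_score(q, p, parent_visits, n)
    for move, (q, p, n) in candidates.items()
}
chosen = max(scores, key=scores.get)  # -> "C"
```

With these invented numbers the search picks "C": its low visit count outweighs its mediocre value estimate, so it gets another look before being written off. That is the mechanism by which an unconventional-looking move can survive long enough to be verified rather than dismissed.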
Move 37: The Moment Everything Changed
Game 2 of the match against Lee Sedol. If you haven’t seen the documentary, go watch it. If you have, you know exactly what I’m about to describe.
Lee Sedol is one of the greatest Go players in history. Eighteen world championships. A player of profound intuition and legendary fighting spirit. He sat across from AlphaGo expecting a battle. He got something else entirely.
Move 37.
AlphaGo places a stone on the fifth line—a move that looks, to the trained human eye, wrong. Commentators are confused. Experts call it a mistake. Lee Sedol leaves the room, visibly shaken. The move violates centuries of accumulated Go wisdom.
And then it wins the game.
Move 37 wasn’t in any textbook. It wasn’t copied from any human game in AlphaGo’s training data. The machine had discovered something new about a game humans have played for 2,500 years.
What does it feel like to watch a machine be creative? I still don’t have a great answer. But I know it changes how you think about intelligence—artificial and otherwise.
From Go to Protein Folding: Where Our Worlds Cross
This is where The Thinking Game picks up the story.
Here’s the thing about DeepMind: they weren’t just trying to win at board games. Go was a proving ground. A demonstration. The real target was always bigger.
Enter AlphaFold. And enter my daughter’s world.
The Protein Folding Problem
Proteins are the workhorses of biology. They do almost everything—carry oxygen in your blood, fight infections, make your muscles contract, replicate your DNA. And every protein is built from a chain of amino acids that folds into a specific three-dimensional shape.
Here’s the critical insight: the shape is the function. A protein’s 3D structure determines what it does. Get the shape wrong, and the protein doesn’t work. Misfolded proteins cause diseases like Alzheimer’s, Parkinson’s, and cystic fibrosis.
The problem? We know the amino acid sequences for over 200 million proteins. But determining the 3D structure experimentally—using X-ray crystallography, cryo-electron microscopy, or nuclear magnetic resonance—is brutally slow and expensive. In 60 years of global scientific effort, we had solved about 170,000 structures.
Predicting how a protein folds from its sequence alone? That was the “50-year grand challenge” of biology. The Mount Everest of molecular science.
CASP: The Olympics of Protein Prediction
Every two years, computational biologists compete in CASP—the Critical Assessment of protein Structure Prediction. It’s basically the Olympics of protein folding. Teams submit predictions for protein structures that have been experimentally determined but not yet published. Then they get graded.
For years, scores hovered around 40 out of 100 for the hardest targets. Progress was incremental. Slow. Scientists would publish papers celebrating a 2-point improvement.
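The score behind those numbers is GDT_TS (Global Distance Test, Total Score): superimpose the predicted structure on the experimental one, then average the percentage of residues that land within 1, 2, 4, and 8 angstroms of their true positions. Here's a toy version of the arithmetic, with made-up per-residue distances standing in for real superimposed 3D coordinates.

```python
# GDT_TS sketch: average, over four distance thresholds, the percentage
# of residues whose predicted position is within that threshold of the
# experimentally determined position. 100 = perfect; ~40 was typical for
# hard CASP targets before AlphaFold 2.

def gdt_ts(distances):
    """distances: per-residue error in angstroms after superposition."""
    thresholds = (1.0, 2.0, 4.0, 8.0)
    percentages = [
        100.0 * sum(d <= t for d in distances) / len(distances)
        for t in thresholds
    ]
    return sum(percentages) / len(thresholds)

# Two invented predictions for the same 8-residue fragment:
rough = [0.8, 1.5, 3.0, 5.0, 9.0, 12.0, 2.5, 7.0]   # pre-2020 style
sharp = [0.3, 0.5, 0.9, 0.4, 1.2, 0.8, 0.6, 1.1]    # AlphaFold 2 style

print(gdt_ts(rough))  # 40.625 -- hovering around 40
print(gdt_ts(sharp))  # 93.75  -- essentially experimental quality
```

The jump from the first kind of number to the second, across most of the hardest targets at once, is what stunned the CASP judges.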
Then AlphaFold 2 showed up in 2020.
The “Holy Shit” Moment
There’s no other way to describe it.
AlphaFold 2 scored above 90 on two-thirds of the targets. Some predictions were so accurate they were essentially indistinguishable from experimental results. The competition wasn’t close. It wasn’t even a competition anymore.
The judges called it “astounding.” One researcher said it was “like landing on the moon.” Another said protein structure prediction had been “solved.”
I looked at Molly. This is her field. Transformed overnight.
How AlphaFold Works
Like AlphaGo, AlphaFold uses neural networks. But the architecture is different—it’s built on attention mechanisms, similar to the transformers that power GPT and other large language models.
The key insight is co-evolution.
Here’s the intuition: if two amino acids that are far apart in the sequence consistently mutate together across many different species, they’re probably close together in the 3D structure. Evolution leaves fingerprints. AlphaFold learned to read them.
The system analyzes millions of protein sequences, looking for these co-evolutionary patterns. Then it uses that information—combined with geometric reasoning and iterative refinement—to predict the spatial relationship between every pair of amino acids.
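The classical version of that co-evolution signal is easy to demonstrate: measure the mutual information between two columns of a multiple sequence alignment. If the amino acid at one position statistically predicts the amino acid at another, the two positions probably touch in 3D. The alignment below is invented, and AlphaFold learns far richer representations end-to-end, but this is the underlying intuition.

```python
from collections import Counter
from math import log2

# Tiny, made-up multiple sequence alignment: one protein per row, one
# position per column. Columns 0 and 1 mutate in lockstep (A<->K pairs
# with G<->R), mimicking two residues that must stay compatible because
# they touch in the folded structure.

alignment = [
    "AKLE",
    "AKLD",
    "GRLE",
    "GRLD",
    "AKLE",
    "GRLD",
]

def mutual_information(col_i, col_j):
    """Bits of information column i carries about column j (and vice versa)."""
    n = len(alignment)
    pi = Counter(seq[col_i] for seq in alignment)
    pj = Counter(seq[col_j] for seq in alignment)
    pij = Counter((seq[col_i], seq[col_j]) for seq in alignment)
    return sum(
        (c / n) * log2((c / n) / ((pi[a] / n) * (pj[b] / n)))
        for (a, b), c in pij.items()
    )

print(mutual_information(0, 1))  # 1.0 bit: perfectly coupled columns
print(mutual_information(0, 2))  # 0.0: column 2 never varies, no signal
```

Run this over millions of real sequences and all pairs of positions, and evolution's fingerprints become a contact map, a major step toward the full 3D structure.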
It’s pattern recognition. The same fundamental idea as AlphaGo—but applied to the language of life itself.
AlphaFold 3 and the Nobel Prize
In 2024, DeepMind released AlphaFold 3. It doesn’t just predict individual protein structures—it predicts how proteins interact with DNA, RNA, and small molecules. The implications for drug discovery, gene therapy, and understanding disease are enormous.
Oh, and Demis Hassabis and John Jumper won the Nobel Prize in Chemistry for their work on AlphaFold. No big deal. Just the highest honor in science for a system that started with a board game.
What It Meant to Watch This Together
Here’s what I didn’t tell Molly while we watched: I was so damn excited for her.
She’s walking into a world where the tools to understand life at the molecular level are suddenly, radically more powerful. The intersection of computer science and biology isn’t a niche curiosity anymore—it’s the frontier. And she’s not watching it from the sidelines. She’s studying it. She’s going to use it.
I’ve spent my career in tech, watching waves come and go. I’ve seen hype cycles inflate and collapse. But this one feels different. Her timing is impeccable.
We didn’t plan this documentary double feature as a “teaching moment.” It was just couch time—her ankle in a boot, me with the remote, nowhere to be. But somewhere between Move 37 in AlphaGo and the protein folding breakthrough in The Thinking Game, something clicked.
Her world and my world aren’t separate anymore. They’re the same world.
And she’s going to take it places I can’t even imagine.
The Thread
These two documentaries tell one continuous story—a thread that runs from a board game in Seoul to a protein database that covers all of life.
It’s not about computers being smarter than humans. It’s about building tools that let us see what we couldn’t see before.
AlphaGo showed us a move no human had imagined in 2,500 years of play. AlphaFold showed us the shapes of 200 million proteins that would have taken centuries to solve experimentally.
What’s next? I don’t know.
But I have a feeling Molly is going to help figure it out.
And I’ll be cheering from the couch—boot or no boot.