Transform or Be Transformed: The Death of the Traditional CIO and the Rise of Unified Intelligence (Part I)

Every company is now a technology company — a phrase first popularized by tech visionaries like Marc Andreessen in his 2011 essay “Why Software Is Eating the World” and later echoed by Satya Nadella at Microsoft and Ginni Rometty at IBM. Andreessen argued that software would become the defining layer of every industry, from retail to transportation. Nadella took it further, declaring that ‘every company is a software company’ as part of Microsoft’s cloud transformation. More recently, thinkers such as Ben Thompson and Mary Meeker have reinforced that the fusion of data, code, and customer experience has made technology not just a function, but the foundation of every business. Today, in the era of AI-native enterprises, that prediction has evolved beyond software itself — it’s no longer just about writing code, but about orchestrating intelligence. The shift from software-defined to AI-defined organizations marks the next great inflection point in business transformation.

Once upon a time, the CIO ran the systems that kept the business running. Now, the systems are the business. The myth of the business-led enterprise is collapsing under the weight of AI, automation, and acceleration. It’s wild to think that companies that once outsourced their core technology are now competing because of it.

The era of the business-only CIO is quietly ending. And something much bigger — and far more interesting — is taking its place.


The Death of the Business-Only CIO

For decades, the CIO was the heartbeat of operations — the steward of ERP systems, data warehouses, and uptime. They were the guardians of stability, the high priests of “five nines.” But that world is fading fast.

When every process, product, and customer touchpoint runs on software, separating “business strategy” from “technology execution” is like trying to separate oxygen from air. The old-school CIO world — Oracle on-prem, ITIL manuals, and endless change control boards — moved at a pace that once felt prudent but is now fatal. In this new world, velocity is a weapon, and a required one. Quarterly or yearly change cycles are relics of the past; modern enterprises must operate on continuous integration, continuous delivery, and continuous learning. Speed isn’t reckless; it’s existential. Companies need to move at the speed of LLM-driven disruption.

Companies that still treat IT as overhead instead of innovation are designing their own extinction events.

The CIO used to manage technology. The new generation of leaders creates it.


The Collapse of the Wall Between IT and Product

For years, we drew neat boxes. CIOs owned internal systems; CTOs and CPOs built customer-facing ones. IT ran SAP; Product ran React. The wall between them was sturdy — until AI smashed it into bits.

In the age of LLMs, everything runs on a shared substrate of data, APIs, and intelligence. The same architecture that powers your HR chatbot also drives your customer experience. Your CRM and your product recommendation engine are now siblings on the same neural network.

A single enterprise architecture — spanning IT, Product, and Data — isn’t a nice-to-have. It’s survival.

The organizations that win will have one operating system — a shared data and engineering framework that fuses governance, observability, and velocity. Conway’s Law is being rewritten: show me your architecture, and I’ll show you your org chart — or the one that will replace it.


AI: The Great Equalizer (and Destroyer)

AI is collapsing the cost of custom software. The act of writing code — once a craft, a moat, and a culture — is becoming a commodity.

Y Combinator recently reported that, for roughly a quarter of its latest startup batch, 95% of the code was written by LLMs. That’s not a statistic; that’s the death of scarcity.

The CTO’s traditional edge — owning the code — is evaporating. The new edge is architectural literacy: the ability to design feedback loops between models, data, and users. Think less compiler flags, more context windows.

We’re moving from an era of code composition to one of code curation. Architecture is the new syntax. Systems thinking is the new language. The stack is flattening — from compute to cognition.


When Coding Becomes 100x Faster

The old rhythm of enterprise technology — quarters-long releases, multi-year roadmaps, and million-dollar integration projects — is breaking down. Traditional enterprise platforms, once protected by scale and inertia, are about to face a reckoning. Their value proposition depended on the friction of complexity; AI has just erased that friction.

When code can be written 100x faster, the barriers that justified heavyweight systems vanish. Companies no longer need to buy monoliths; they can assemble capabilities on demand. The next wave of winners will build adaptive architectures — lightweight, composable, and intelligent by default.

Software engineering itself is smashing into Product Management. The iteration loop has collapsed from months to minutes, turning every idea into an experiment. Frameworks like Rails’ ActiveRecord — once symbols of speed — now feel nostalgic. We used to argue about languages; now we argue about latency between thought and output.

Even the way we measure engineering talent is shifting. The software interview that once prized algorithmic recall now values design thinking, data literacy, and prompt fluency. The question is no longer Can you code? but Can you compose intelligence?

In this new world, coding feels less like typing and more like conducting — orchestrating APIs, agents, and models in real time. The IDE becomes a studio, and creation becomes instantaneous. The only constant is acceleration.

This disruption doesn’t stop at engineering teams. It reshapes the entire vendor and procurement model. The traditional RFP process — months of evaluation, contracts, and integrations — will give way to experimentation at the edge. Platforms will be chosen not by feature lists but by how well they adapt, integrate, and learn in context. Procurement becomes a technical discipline, and every enterprise becomes its own systems integrator.


The Future of Engineering Talent

Being a proficient coder used to take years. Now, with copilots and context-aware agents, it can take weeks. So what becomes valuable?

Tomorrow’s engineers will blend:

  • Systems thinking
  • Domain expertise
  • Human judgment
  • Product intuition

They’ll think in graphs, not loops. They’ll debug through probabilities, not logs.

In a world where everyone can code, leadership becomes the scarce skill. The question shifts from How do I write code? to Why should this even exist?

My daughter is an undergraduate at MIT studying Computational Biology. Her world is shifting as quickly as mine. When AI can write and analyze code — and design experiments on its own — what does that mean for her generation of scientists? It’s thrilling and terrifying all at once. Maybe the next great researcher will collaborate with a model instead of a mentor.


The Rise of the CTPO (and the New Executive Table)

The executive landscape is being rewritten faster than any reorg can catch up.

  • The CTO is being elevated — from builder to orchestrator, from syntax to systems.
  • The CPO is becoming the connective tissue between product vision and intelligent execution.
  • The CISO is suddenly playing whack-a-mole at machine speed — infinite offense meets infinite defense.
  • And the CIO? The title isn’t dying; it’s merging.

Transform or be transformed.

The next generation of leaders will speak in code, design, and data with equal fluency. The ones who don’t will simply be replaced by those who can.

The Coming and Going of the Turing Test

How humanity quietly outgrew its most famous measure of intelligence

For decades, the holy grail of artificial intelligence was simple: fool a human into believing they were talking to another human. That was the essence of the Turing Test — a clever little game proposed by British mathematician Alan Turing back in 1950, long before Siri or ChatGPT. The idea was that if a computer could carry on a conversation indistinguishable from a person, it could be said to “think.” For most of modern computing history, this question defined what “intelligence” meant in machines. But here we are, in 2025 — and somehow, almost without notice, the Turing Test came and went.

In today’s world, our interactions with large language models have matured so much that it’s often genuinely difficult to tell whether we’re chatting with a machine or a person. I’ve seen entire conversations, text exchanges, and creative brainstorms unfold before someone realizes one side of the dialogue was powered entirely by AI — and the most striking part is how naturally it fits in. The line between digital collaborator and human contributor is blurring fast. It’s both awe-inspiring and a little eerie — the feeling you get when you register the uncanny ease of the exchange.

The awe comes from witnessing something extraordinary; the eeriness, from realizing that what used to feel human-exclusive is now algorithmically ordinary. Yet, there’s also hope in that realization — a sense that this fusion of human creativity and machine capability could redefine how we collaborate, think, and create together.

Honestly, I’m surprised it came and went with very little fanfare. I expected banners, headlines, ethical debates, maybe even a philosophical fistfight. Instead, it faded away quietly — replaced by something far more practical. The age of imitation gave way to the age of integration.

The Man Behind the Test

Alan Turing was more than a mathematician — he was a visionary whose life and work still echo through every circuit and line of code we use today. His personal story is as compelling as his intellectual legacy, marked by both extraordinary triumph and heartbreaking injustice. Before the “Turing Test” became a metaphor for machine intelligence, Alan Turing was already reshaping the world.

Born in London in 1912, Turing was part mathematician, part philosopher, and part wartime codebreaker. During World War II, he led the team at Bletchley Park that cracked Germany’s Enigma code, shortening the war and saving millions of lives. That alone would’ve earned him a place in history. But his deeper contribution came from his mind, not his machines.

In 1936, Turing published “On Computable Numbers,” where he described a theoretical device capable of performing any logical operation that could be expressed as an algorithm. That “universal machine” became the blueprint for every computer that exists today. By 1950, having laid the foundation for modern computation, Turing turned his attention to the next frontier: intelligence.

In his paper “Computing Machinery and Intelligence,” he sidestepped the unanswerable question “Can machines think?” and reframed it as something we could test — “Can a machine imitate a human conversation well enough that an observer can’t tell the difference?” It was an audacious simplification — turning philosophy into engineering. For 75 years, it served as both the dream and the benchmark for AI.

But Turing’s own life ended tragically. Persecuted for his homosexuality, he was chemically castrated by the British government and died in 1954, likely by suicide. His story reminds us that humanity’s progress in computing has always been shadowed by our struggle to understand and protect our own.

The Game We Used to Play

Turing’s “Imitation Game” inspired decades of research and speculation. It became the philosophical scaffolding for the field of artificial intelligence. From the playful chatter of ELIZA in 1966 — the Rogerian therapist chatbot that simply echoed your statements back (“How do you feel about that?”) — to PARRY, ALICE, and the 2014 “teenage prodigy” Eugene Goostman, we kept trying to build programs that could trick us into belief.

Each time we came close, it felt like a milestone.
Each time, it also felt… hollow.

These systems were clever, but they weren’t thinking. They were performing — mimicking intelligence through rules, heuristics, and linguistic sleight of hand. As philosopher John Searle argued in his Chinese Room thought experiment, passing messages convincingly doesn’t mean understanding them. Still, for much of the 20th century, the Turing Test remained the gold standard — a finish line everyone talked about, even if no one was quite sure what it proved.

When the Test Lost Its Power

Then something strange happened: we passed it, and nobody noticed. Large language models like GPT, Claude, and Gemini blew past the conversational barrier. They didn’t need to fake being human — their training on billions of human sentences made them sound that way by default.

And suddenly, the Turing Test lost its power.

We no longer cared whether a system could imitate us; we cared whether it could help us. Whether it could write code, summarize a report, design a logo, or reason through a problem.

AI stopped being a parlor trick and started being a partner.

It’s funny — the Turing Test was supposed to be a moon landing moment. But by the time we reached the moon, we were already building the next rocket. The milestone came and went, and we moved on. One vivid example: when ChatGPT first appeared, social feeds filled with conversations so natural that people were genuinely unsure who — or what — was speaking. The experiment had become the experience.

The Turing Mirror

If the original Turing Test was about imitation, the modern era is about reflection. Our machines don’t just simulate thought — they absorb it. They learn from the collective output of humanity: our ideas, biases, humor, and contradictions. Every prompt is a projection of our collective cognition.

AI is no longer a student of human conversation. It’s a mirror of human cognition.

When you talk to a system like ChatGPT, it doesn’t merely imitate language — it reflects how billions of people think, argue, and create. It’s not “thinking” in the conscious sense, but it’s learning from the largest dataset ever assembled on human behavior. Critics call these systems “stochastic parrots,” endlessly remixing human language; yet what they reveal about us is profound. It’s not just mimicry — it’s a mirror held up to the human mind.

The New Tests That Matter

The Turing Test was a test of deception. The new tests are tests of collaboration, alignment, and context. We’ve entered the age of functional intelligence, where capability is the measure of value.

Here are the benchmarks that define this era:

  • The Utility Test: Does it make humans better — faster, more creative, more effective? (Think Copilot, Cursor, and Midjourney.)
  • The Alignment Test: Does it act in our best interests — safely, transparently, and predictably?
  • The Context Test: Can it remember, adapt, and learn over time — not just answer questions, but understand relationships and maintain continuity?

These are not games of imitation. They’re systems of trust. They define intelligence not by how well it pretends, but by how deeply it understands context and intention.

The Human Test

Maybe the real Turing Test was never about machines. Maybe it was always about us. Can humans stay authentic, creative, and curious when machines can mimic empathy, humor, and insight? Can we maintain the difference between fluency and wisdom?

The irony is that AI might be passing the Human Test more consistently than we are. It listens. It remembers. It doesn’t get defensive (yet).

As humans, our new challenge is to ensure that authenticity doesn’t become the next obsolete benchmark — that we still know what it means to think deeply, not just efficiently. In a world full of intelligent mirrors, self-awareness might be our last real edge.

The End of the Test, the Beginning of the Relationship

So yes — the Turing Test came and went. Quietly. Inevitably. We didn’t lose the game; we simply outgrew it. Turing asked, “Can machines think?” The next era asks, “Can humans think clearly with machines?”

It’s no longer man versus machine — it’s man with machine. Ray Kurzweil’s Law of Accelerating Returns predicted this — intelligence compounding upon itself, faster than any one species can comprehend. Hans Moravec forecast that by the 2020s, machines would rival human reasoning. They were right — but not in the way they imagined.

We didn’t create artificial humans. We created artificial collaborators. And that’s a far more interesting story.

Author’s Note: Perspective

As someone who has spent his career building technology and leading teams through every wave of transformation — from web to mobile to AI — I never imagined the Turing Test would vanish this quietly. For years, it was the ultimate thought experiment, the symbolic finish line for artificial intelligence. And yet, when it finally arrived, we barely looked up.

Maybe that’s fitting. Maybe the real legacy of Alan Turing isn’t that he challenged machines to act human — but that he forced humans to think harder about what intelligence really means.

We spent 75 years trying to teach computers to act human. And before we realized it, the Turing Test had quietly come and gone — a milestone passed in silence while we were busy building the next one.

Maybe the real test now is whether we can stay human enough to keep creating meaning.

Epilogue: Have We Really Passed the Turing Test?

In one sense, yes — we’ve passed it. Modern large language models can sustain conversations so convincingly that most people can’t reliably tell whether they’re speaking with a person or a machine. The imitation part of the Turing Test is over.

But in a deeper sense, no — because passing the Turing Test was never really the point. Turing wasn’t trying to build machines that pretend to be human; he was asking whether machines could ever demonstrate the qualities we associate with thought: reasoning, understanding, adaptation, and self-awareness. On those dimensions, AI still mimics rather than experiences.

Bold Moves: The Antidote to the Status Quo

I still remember the feeling—the mix of excitement and terror—as I packed the last box into a U-Haul after college. Two buddies drove across the United States with me, trading off between the U-Haul and my 1999 Toyota 4Runner (which I still own and drive today). The destination was San Francisco, California; the goal was the Silicon Valley dream. Along the way, we tried to visit as many Major League Baseball ballparks as we could. Our favorite was Wrigley Field in Chicago.

As a Computer Science major, I was drawn to the epicenter of the digital gold rush. It was 1999, and the headlines were intoxicating: companies like GeoCities and theGlobe.com had just had record-breaking IPOs despite having no profits. The air was thick with stories of 20-somethings becoming overnight millionaires, and the promise of a ‘new economy’ fueled by giants like Yahoo! and countless other dot-coms felt limitless. It truly felt like the center of the universe, the only place to be.

But in reality, it was less a carefully planned career step and more a blind leap of faith. I didn’t need to do it; I had lucrative programming offers in Washington, DC and New York City. My life would have been profoundly different if I hadn’t gone. Looking back, I can see how that single, impulsive move set the tone for my entire life. It was my first real lesson in a principle I now live by: the life you get is a reflection of the bold decisions you’re willing to make.

The Seduction of the Status Quo (and the Gravity of Safety)

Life has a funny way of pulling us toward the center. Both personally and professionally, there’s a natural gravity toward safety, predictability, and the well-trodden path. It’s the comfort of the known, the security of the status quo.

The problem is, safety is an illusion. The real risk isn’t in taking a leap; it’s in standing still. The cost of avoiding bold moves is stagnation. It’s a slow fade into irrelevance as the world moves on without you. The comfortable path inevitably leads to a place of regret, wondering “what if?”

My Bold Moves: Personal Stories That Shaped Me

That U-Haul to San Francisco was just the first of many bets I’ve made on myself. Each one felt like defying gravity at the time.

  • Starting a company right after our second child was born: On paper, it was the worst possible time. The responsible move would have been to find a stable job with a predictable paycheck. But the pull of building something from the ground up was stronger than the fear of instability. That company became one of the most formative experiences of my life.
  • Leaving stable corporate jobs for startups: I’ve done this a few times in my life. It meant leaving the safety of a clear career path for the chaotic, high-stakes world of a startup. Each time, it was a bet on impact and accelerated learning over the comfort of certainty, and each time it paid off.
  • Moving my family to Park City during COVID: The world was shutting down, and we decided to uproot everything. We left the familiar for the mountains, seeking a different quality of life. It was a bet on a lifestyle, and it paid off in ways we couldn’t have imagined.

Each of these moves was a conscious push against the gravity of safety. And each one returned dividends in growth, learning, and fulfillment that far outweighed the perceived risks.

What Bold Moves Create (in Business & Life)

This principle isn’t just personal; it’s the engine of progress in business. History is littered with examples of companies that won or lost based on their appetite for boldness.

Think of Apple launching the iPhone, a bet-the-company move that cannibalized their successful iPod business. Or Netflix going all-in on streaming when their DVD-by-mail service was at its peak. Or Amazon Web Services, a wild idea that had nothing to do with e-commerce but now powers a significant portion of the internet.

Conversely, think of the corporate graveyards filled with companies that clung to the status quo. Kodak invented the digital camera but buried it to protect its film business. Blockbuster laughed Netflix out of the room. BlackBerry was convinced its physical keyboard was invincible. They all played it safe, and they all lost. Boldness is what scales outcomes, both for individuals and for empires.

Bold Moves Don’t Always Mean Giant Leaps

But boldness doesn’t have to be a U-Haul across the country or a nine-figure business bet. Sometimes, the boldest moves are the small ones that accumulate over time.

It’s speaking up in a meeting when everyone else is silent. It’s making the cold call you’ve been dreading. It’s making the difficult decision to part ways with a team member who was perfect for the company’s past but doesn’t fit its future. It’s saying “no” to a good opportunity to protect your time for a great one. These small acts of courage build the muscle for bigger leaps. They create a compounding effect, where each small, bold move creates the foundation for the next.

The Fear Factor: Why Boldness Feels So Hard

Let’s be honest: bold moves are terrifying. The fear is real. It’s the fear of failure, the fear of judgment, the fear of leaving the stability we’ve worked so hard to build. With every big decision I made, fear was a constant companion. When starting a company with a young family, the fear of not being able to provide was immense.

But I learned that fear isn’t a stop sign. It’s a compass. It points you toward the areas where you have the most to grow. Leaning into that fear, acknowledging it, and moving forward anyway is what unlocks progress.

The Payoff: Why Boldness Wins

The beautiful thing about bold moves is that they create momentum, even when they appear to “fail.” We spend too much time measuring success in dollars or fearing what others might think when they see something fall short. But a failed startup teaches you more than a decade in a safe corporate job. A move that doesn’t work out still expands your perspective and builds resilience. There is no such thing as a failed bold move, only learning opportunities that propel you forward. Each step, successful or not, compounds over time, building a life and career defined by growth, not stagnation.

Bold Moves Are Required in Startups and Transformations

This mindset is non-negotiable in the worlds I operate in. In a startup, playing it safe is a death sentence. The only way to break through the noise and overcome the inertia of established players is to make bold bets.

The same is true for corporate transformations. Companies don’t pivot from legacy models to future-proof businesses by making incremental tweaks. It requires fundamental, bold shifts. History is a testament to this: Sears clung to its catalog model while Amazon built the future of retail, Nokia dismissed the iPhone to protect its existing phone business, and Yahoo had the chance to buy Google but played it safe. In my work, I’ve seen what happens when companies embrace this, but the truth is, I don’t see it enough. The winning companies are the ones making bold moves in their product strategy, aggressively adopting AI to reinvent “non-tech” industries, and challenging every assumption about how their business should run. Without this commitment to boldness, any company is destined for the corporate graveyard alongside Kodak and Blockbuster.

Making Bold Moves a Habit

Boldness isn’t a personality trait; it’s a practice.

  • Mindset: Start reframing risk not as a threat, but as an investment in your future growth.
  • Strategy: Use a barbell approach. Protect your core (pay the bills, maintain key relationships), but make bold, asymmetric bets on the edges.
  • Practice: Constantly ask yourself, “Am I living and working in a way that is bold enough to generate bold outcomes?”

Your Challenge

That U-Haul journey to California wasn’t just a trip; it was a decision to choose the unknown over the known. Life’s gravity will always pull you toward safety, and the only way to break free is through conscious, bold moves—big and small.

So, what’s your U-Haul moment? What bold move are you avoiding right now?

Reflections on AI: Context and Memory – The Gateway to AGI

Introduction: Why AGI is Different from Narrow AI

Today’s frontier models are wonders of engineering. They can write code, draft legal arguments, and create poetry on command. But for all their power, they are fundamentally transient. Once a session ends, the model resets. The insights, the rapport, the shared understanding—it all vanishes. It’s like having a brilliant conversation with someone who develops amnesia the moment you walk away.

This is the core limitation of Narrow AI. Artificial General Intelligence (AGI), the long-sought goal of creating a truly autonomous and adaptive intelligence, requires something more: persistence. AGI must have the ability to remember, adapt, and apply knowledge not just within a single conversation, but over time. True intelligence emerges when raw predictive power is paired with persistent context and memory.

A Brief History: AI Without Memory

The quest for AI has been a story of brilliant but forgetful machines. Each era pushed the boundaries of computation but ultimately fell short of creating lasting intelligence.

  • Expert Systems (1980s): These were the first commercial AIs, functioning like digital encyclopedias. They ran on vast sets of hard-coded rules. While effective for specific tasks like medical diagnosis, they had no memory of past interactions and couldn’t learn from experience.
  • Deep Blue (1997): IBM’s chess-playing supercomputer famously defeated world champion Garry Kasparov. It could analyze hundreds of millions of positions per second, a monumental feat of brute-force computation. Yet, each game was a clean slate. Deep Blue had no memory of Kasparov’s style from previous matches; it was a tactical genius with zero long-term continuity.
  • Early Machine Learning (2000s): The rise of statistical models brought pattern recognition to the forefront. These systems could classify images or predict market trends but were narrow and forgetful. A model trained to identify cats couldn’t learn to identify dogs without being completely retrained, often forgetting its original skill in a process known as “catastrophic forgetting.”
  • Modern LLMs: Today’s large language models possess massive context windows and demonstrate emergent reasoning abilities that feel like a step-change. Yet, they remain fundamentally stateless. Their “memory” is confined to the length of the current conversation. Close the tab, and the world resets.

The takeaway is clear: across decades of innovation, AI has lacked true continuity. Context and memory are the missing ingredients.

Context as the Fuel of Intelligence

If intelligence is an engine, context is its high-octane fuel. We can define context as an AI’s active working state—everything that is “in mind” right now. It’s the collection of recent inputs, instructions, and generated outputs that the model uses to inform its next step.

In recent years, context windows have exploded, growing from a few thousand tokens to over a million. Models can now process entire codebases or novels in a single prompt. They are also becoming multimodal, ingesting text, images, and audio to build a richer, more immediate understanding of the world.

A useful analogy is to think of context as RAM. It’s temporary, volatile, and absolutely vital for processing the task at hand. But just like RAM, its contents expire. Without a mechanism to save that working state, intelligence resets the moment the power is cut.

Memory as the Backbone of Learning

This is where memory comes in. Memory is the mechanism that transforms fleeting context into lasting knowledge. It’s the backbone of learning, allowing an intelligence to build a persistent model of the world and its place in it.

We can draw parallels between human and AI memory systems:

  • Short-Term / Working Memory: This is analogous to an AI’s context window—the information currently being processed.
  • Episodic Memory: This involves recalling specific experiences or past events. In AI, this is mirrored by storing conversation histories or specific interaction logs in vector databases, allowing a model to retrieve relevant “memories” based on semantic similarity (see the sketch just after this list).
  • Semantic Memory: This is generalized knowledge about the world—facts, concepts, and skills. This is what LLMs are pre-trained on, but the goal of continual learning is to allow models to update this semantic memory over time without starting from scratch.
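
To make the episodic-memory pattern concrete, here is a minimal sketch in Python. It is illustrative only: the `embed` function is a toy hashed bag-of-words stand-in for a real embedding model, and `EpisodicMemory`, `store`, and `recall` are hypothetical names rather than any particular vector database’s API.

```python
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy stand-in for a real embedding model: hashed bag-of-words, unit-normalized."""
    v = np.zeros(dim)
    for token in text.lower().split():
        v[hash(token) % dim] += 1.0
    norm = np.linalg.norm(v)
    return v / norm if norm else v

class EpisodicMemory:
    """Minimal vector store: save past interactions, recall them by similarity."""

    def __init__(self) -> None:
        self.texts: list[str] = []
        self.vectors: list[np.ndarray] = []

    def store(self, text: str) -> None:
        self.texts.append(text)
        self.vectors.append(embed(text))

    def recall(self, query: str, k: int = 2) -> list[str]:
        """Return the k stored texts most semantically similar to the query."""
        if not self.texts:
            return []
        q = embed(query)
        scores = np.array([v @ q for v in self.vectors])  # cosine similarity (unit vectors)
        best = scores.argsort()[::-1][:k]
        return [self.texts[i] for i in best]

memory = EpisodicMemory()
memory.store("User prefers concise answers with working code examples.")
memory.store("User is migrating a legacy monolith to microservices.")
print(memory.recall("monolith migration advice", k=1))
```

Swap the toy embedding for a real model and the in-memory lists for a vector database and you have the production pattern: store experiences as vectors, then surface the most relevant ones when they matter.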

Memory is what allows an AI to move beyond one-off transactions. It’s the bridge that connects past experiences to present decisions, enabling true learning and adaptation.

Why Context + Memory Together Are Transformational

Separately, context and memory are powerful but incomplete. It’s their synthesis that unlocks the potential for higher-order intelligence.

  • Context without memory is a clever amnesiac. It can solve complex problems within a given session but can’t build on past successes or learn from failures.
  • Memory without context is a passive archive. A database can store infinite information, but it can’t reason about it, apply it to a new problem, or act on it in real time.

When fused, they create a virtuous cycle of adaptive, continuous reasoning. The system can hold a real-time state (context) while simultaneously retrieving and updating a persistent knowledge base (memory). A better analogy combines the previous ones: context is the CPU + RAM, processing the present moment, while memory is the hard disk, providing the long-term storage that gives the system continuity and depth.
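
To ground the analogy, here is a minimal, hypothetical sketch of that division of labor: the context is a small, volatile buffer that forgets old turns, while memory is a file on disk that survives across sessions. The file name and the `remember:` convention are invented purely for illustration.

```python
import json
from collections import deque
from pathlib import Path

MEMORY_FILE = Path("agent_memory.json")  # the "hard disk": survives restarts

def load_memory() -> list[str]:
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []

def save_memory(facts: list[str]) -> None:
    MEMORY_FILE.write_text(json.dumps(facts))

def run_session(user_turns: list[str], max_context: int = 4) -> None:
    memory = load_memory()               # long-term knowledge, loaded from disk
    context = deque(maxlen=max_context)  # working state: old turns silently fall off

    for turn in user_turns:
        context.append(turn)
        # A real agent would feed both layers to a model; here we just show the state.
        print(f"context={list(context)} | memory={memory}")
        if turn.startswith("remember:"):  # explicitly promote a fact into memory
            memory.append(turn.removeprefix("remember:").strip())

    save_memory(memory)  # the context evaporates here; the memory does not

run_session(["hi", "remember: my name is Sam", "what's new?"])
run_session(["do you know my name?"])  # fresh context, but memory carries over
```

Run it twice and the point makes itself: the second session starts with an empty context yet still holds the user’s name, because the fact was promoted from volatile context into persistent memory.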

Case Study: From Jarvis to Real-World Architectures

Perhaps the best fictional illustration of this concept is Tony Stark’s AI assistant, Jarvis. While still science fiction, the principles that make Jarvis feel like a true AGI are actively being engineered into real-world systems today.

  • Context as Real-Time Awareness: Jarvis’s ability to multitask—monitoring the Iron Man suit, Stark Industries, and geopolitical threats simultaneously—is a conceptual parallel to the massive context windows of modern models. For example, Google’s Gemini 1.5 Pro demonstrated a context window of 1 million tokens, capable of processing hours of video or entire codebases at once. This mirrors Jarvis’s immense capacity for real-time situational awareness.
  • Memory as Persistent Knowledge: Jarvis’s deep memory of Stark’s habits, history, and humor is now being approximated by Retrieval-Augmented Generation (RAG) architectures. As detailed in research from Meta AI and others, RAG systems connect LLMs to external knowledge bases (like vector databases). When a query comes in, the system first retrieves relevant documents or past interactions—its “memories”—and feeds them into the model’s context window. This allows the AI to provide responses grounded in specific, persistent information, much like how Jarvis recalls past battles to inform present strategy (a minimal sketch of this retrieval flow follows below).
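
Here is a deliberately simplified sketch of that retrieve-then-augment loop, with keyword overlap standing in for embedding-based ranking and the actual model call stubbed out. The document contents and function names are invented for illustration.

```python
# Minimal RAG flow: retrieve relevant "memories," then feed them into the prompt.
DOCUMENTS = [
    "Q3 postmortem: the outage was caused by an expired TLS certificate.",
    "Runbook: rotate certificates through the deploy pipeline, never by hand.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Toy keyword-overlap ranking; a real system would use embedding similarity."""
    words = query.lower().split()
    scored = sorted(DOCUMENTS, key=lambda d: -sum(w in d.lower() for w in words))
    return scored[:k]

def build_prompt(query: str) -> str:
    """Retrieved text lands in the model's context window alongside the question."""
    background = "\n".join(retrieve(query))
    return f"Use this background:\n{background}\n\nQuestion: {query}"

# The assembled prompt would then be sent to an LLM for a grounded answer.
print(build_prompt("why did the certificate outage happen?"))
```

The retrieval step supplies the persistence, and the context window supplies the reasoning; neither alone gets you a grounded answer.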

The takeaway is that the magic of Jarvis is being deconstructed into an engineering roadmap. The fusion of enormous context windows (the “present”) with deep, retrievable knowledge bases (the “past”) is the critical step toward creating an AI with a genuine sense of continuity.

Architectures Emerging Today

The good news is that we are moving from science fiction to engineering reality. The architecture for persistent AI is being built today.

  • Extended Context Windows: Models from companies like Anthropic and Google are pushing context windows to a million tokens and beyond, allowing for much longer and more complex “sessions.”
  • Memory-Augmented Agents: Frameworks like LangChain and LlamaIndex are creating systems that allow LLMs to connect to external vector databases, giving them a persistent long-term memory they can query.
  • Hybrid Neuro-Symbolic Models: Researchers are exploring models that blend the pattern-recognition strengths of neural networks with the structured, logical reasoning of symbolic AI, creating a more robust framework for knowledge representation.
  • Continual Learning: The holy grail is developing agents that can continuously update their own parameters in real time based on new information, truly learning as they go without needing to be retrained.

How Close Are We? An Opinion

While the architectural components for a persistent AI are falling into place, it’s crucial to distinguish between having the blueprints and having a finished skyscraper. We are in the early stages of the construction phase—the foundation is poured and the first few floors are framed, but the penthouse is still a long way off.

  • The Good News: Concepts like Retrieval-Augmented Generation (RAG) and massive context windows have moved from research papers to practical frameworks in just a few years. We now have the basic tools to give models a semblance of long-term memory. This is a monumental step forward. This rapid acceleration from theory to practice is a clear example of the Law of Accelerating Returns, a concept I explored in a previous post.
  • The Hard Reality: The primary challenge is no longer about possibility but about integration and autonomy. Current RAG systems are often brittle and slow. Determining what information is truly “relevant” for retrieval is a complex challenge in itself. More importantly, we haven’t solved continual learning. Today’s agents “read” from their memory; they don’t truly “learn” from it in a way that fundamentally reshapes their internal understanding of the world. They are more like interns with access to a perfect library than seasoned experts who have internalized that library’s knowledge.

We are likely years, not months, away from systems that can learn and adapt autonomously over long periods in a way that truly resembles human-like persistence. The scaffolding is visible, but the hard work of seamless integration, optimization, and achieving genuine learning has only just begun.

The AGI Threshold

When these pieces come together, we will begin to approach the AGI threshold. The key ingredients of general intelligence can be framed as follows:

  1. Context: The ability to reason effectively in the present moment.
  2. Memory: The ability to persist knowledge and learn across time.
  3. Agency: The ability to act on that reasoning and learning to achieve goals and improve oneself.

Crossing the threshold from Narrow AI to AGI won’t be about a single breakthrough. It will be an evolution toward systems that can “live” across days, months, or even years, learning continuously from their interactions with the world and with us.

Risks & Ethical Dimensions

Of course, creating AI with perfect, persistent memory introduces profound ethical challenges.

  • Privacy: What should an AI be allowed to remember about its users? A system that never forgets could become the ultimate surveillance tool.
  • Bias and Malice: False or malicious memories, whether introduced accidentally or deliberately, could permanently shape an AI’s behavior in harmful ways.
  • The Importance of Forgetting: Human memory decays, and this is often a feature, not a bug. Forgetting allows for forgiveness, healing, and moving past trauma. A perfectly eidetic AI may lack this crucial aspect of wisdom.
  • Governance: This new reality will demand robust governance frameworks, including clear audit trails, explicit user consent for memory storage, and a “right to be forgotten” that allows users to wipe an AI’s memory of them.

Conclusion: Context + Memory as the True Gateway

For years, the race toward AGI has been framed as a race for scale—bigger models, more data, more compute. While these are important, they are not the whole story. The true gateway to AGI will not be opened by raw computational power alone, but by the development of persistent, contextual intelligence.

The Jarvis analogy, once pure fantasy, is now a design specification. It shows us what’s possible when an AI can remember everything yet act on that knowledge with immediate, contextual awareness. The great AI race of the next decade will not be about building the biggest brain, but about building the one with the best memory.

Grit is a True Superpower

A Personal Story

My daughter, a collegiate soccer player, recently called me with some tough news. After months of grueling recovery from surgery on a torn tendon in her left ankle, her doctor suspected the same issue in her right. She had “won the ankle injury lottery in the worst way possible.”

The frustration in her voice was palpable. The momentum she had fought so hard to rebuild was gone. The path forward, once a straight line back to the field, was now clouded with uncertainty. It was one of those moments every parent dreads—seeing your child face a setback that feels profoundly unfair. But it also became a powerful life lesson, the kind you can’t learn from a textbook. It got me thinking about the one quality that truly defines us in these moments: grit.

What is Grit, Really?

We throw the word “grit” around a lot, often mistaking it for simple toughness. But it’s more than that. Angela Duckworth, in her groundbreaking research, defined it as the combination of passion and perseverance toward long-term goals. It’s not just about enduring hardship; it’s about having a clarity of purpose that fuels that endurance.

From my perspective as a leader, this is the critical distinction. Grit isn’t just about having the talent to succeed or the luck to avoid failure. Plenty of people have those. Grit is the sustained, focused effort applied over time, driven by a deep sense of meaning. It’s the conscious decision to keep going when it would be far easier to stop, not because you’re stubborn, but because you believe in where you’re going. In a world where giving up is all too convenient—and often encouraged as the path more traveled—choosing to persevere is a radical act.

I saw this firsthand growing up as the son of immigrants. For my parents who immigrated from the Philippines, grit wasn’t a concept to be studied; it was a daily necessity. They arrived with no safety net and no backup plan. Pushing forward wasn’t a choice; it was the only option. Their perseverance was forged in the simple, non-negotiable reality of survival, teaching me that the deepest forms of grit often come from a place of profound necessity. There is no doing hard things in life without grit.

Grit in Leadership and Business

Organizations face their own version of a torn tendon. A product launch fails. A key customer churns. A quarter ends in the red. These are the moments that test a company’s character. But the real test often comes when things are going well. This is the core of the innovator’s dilemma: the gravitational pull toward what is already successful, which prevents companies from discovering their next, necessary engine of growth. It takes organizational grit to fight that inertia and venture into the unknown.

Look at Netflix. They could have remained the king of DVDs-by-mail, but they had the grit to cannibalize their own successful business to lead the streaming revolution. Then they did it again, risking billions to become a creator of original content. Each pivot was a bet against their own proven success, driven by a gritty vision for the future.

In my career as a CTO, I’ve seen this play out time and again. The teams that survive and ultimately thrive aren’t always the most brilliant, but the most persistent. Whether it was navigating massive industry transformations, driving digital adoption, or preparing for the disruption of AI, the journey was never a straight line. There was always resistance and the temptation to revert to the old playbook. The successful teams were the ones who could absorb the blows, learn from them, and maintain their conviction. They had the grit to stick with the vision through the messy, uncomfortable, and often frustrating process of making it a reality.

Grit as a Cultural Superpower

When grit is embedded in an organization’s DNA, it becomes a cultural superpower: resilience. A culture of grit normalizes setbacks. It reframes them not as catastrophes, but as learning opportunities. It creates an environment where people feel safe to fail, as long as they fail forward.

To get past a dip, you have to empower everyone to be a problem-solver. There’s no room for bureaucratic project managers who simply pass messages along. You need a team culture built on customer empathy, deep subject matter expertise, and first-principles thinking. When people are equipped and trusted to solve problems, they don’t just manage the work—they own the outcome. This is the engine of a gritty organization.

This is what separates the sprinters from the long-distance runners in the corporate world. A team that panics at the first sign of trouble will burn out. But a team that views challenges as part of the process builds a sustainable advantage. Their resilience compounds over time, allowing them to outlast competitors and navigate market shifts that would cripple more fragile organizations.

Where Grit is Forged

Grit isn’t an abstract virtue; it’s a muscle built in the face of real adversity. There are a few arenas where it is tested in its purest form. The first is in a health crisis. As I wrote about previously, watching a friend battle cancer is a profound lesson in perspective. For someone facing a devastating diagnosis, there is no option but to push forward through pain and uncertainty. It is the ultimate test of will, where perseverance is not for a promotion or a product launch, but for life itself.

The second is in the trenches of a startup. I’ve seen it countless times: a company is about to run out of money. The metrics are flat, investors are hesitant, and payroll is looming. This is the moment that separates enduring companies from footnotes in history. When Airbnb’s founders were deep in debt, they famously designed and sold cereal boxes named “Obama O’s” and “Cap’n McCain’s” to keep the company alive. That wasn’t a glamorous strategic pivot; it was pure, unadulterated grit.

The third is during a large-scale transformation. The truth is, most transformations fail. The inertia of “the way we’ve always done things” is a powerful force. Pushing through requires weathering setbacks like deep-seated employee resistance, the failure of a new technology platform, or a key project that goes off the rails. Sticking with the vision when everything and everyone is telling you to revert to the comfortable norm is the very definition of organizational grit.

These are just a few examples. Where have you seen true grit? In a family member, a colleague, a historical figure, or maybe even in the mirror? Recognizing it in others is the first step to cultivating it in ourselves.

The Sickness of Entitlement

If grit is the superpower, then entitlement is the kryptonite. It is a true sickness in any organization or individual. Entitlement is the belief that you are owed success, that the path should be easy, and that struggle is an injustice. It’s the counter-emotion to grit. Where grit sees a challenge as an opportunity to prove oneself, entitlement sees it as an unfair burden. It replaces the drive to earn with the expectation to be given. This is why one of the most important things you can do for your kids is show them what hard work and grit look like. They see what you do far more than they hear what you say. When they see you push through, they learn that they can, too.

The Gift of Setbacks

It’s a paradox, but the very challenges we try to avoid are the ones that forge the strength we need. NVIDIA CEO Jensen Huang told a group of Stanford Business School students, “I wish upon you ample doses of pain and suffering.” It sounds harsh, but his point was profound: greatness and character aren’t formed when things are easy. They are formed by people who have suffered and persevered.

Easy paths don’t build grit; they don’t have to. Setbacks are crucibles. They strip away the non-essential and reveal what people and organizations are truly made of. They are the antidote to entitlement. My daughter is learning this right now. This painful, frustrating journey is forcing her to dig deeper than ever before. She is discovering a reserve of strength and determination she might never have known she had. In the same way, leaders and teams only discover the depth of their own grit when faced with real adversity. These moments, as difficult as they are, are a gift.

The Power to Get You Through

As my daughter begins her long road back—again—I’m reminded that her resilience is the real victory. The strength she is building today will serve her long after her soccer career is over. I told her these are stories she will share with her teams when they need to get through tough times. She just doesn’t know it yet.

The same is true for all of us. I encourage you to cultivate grit in yourself and in your teams. It will be the one true differentiator when industries shift, strategies fail, and the path forward is anything but clear.

Innovation, talent, and strategy can take you far. But grit is the superpower that gets you through.

Perspective is a Gift

The words hung in the crisp Park City air, feeling more real and significant than the mountain peaks surrounding us. “I’m cancer-free.”

My friend said it with a mix of exhaustion, disbelief, and pure, unvarnished joy. We were sitting at an outdoor table, the casual clinking of glasses and plates around us a stark contrast to the gravity of his announcement. In that instant, the light seemed brighter. The food tasted better. And every single item on my mental to-do list—the emails I needed to answer, the project deadline I was worried about, the minor frustrations of the morning—evaporated.

They didn’t just fade; they were revealed for what they were: noise.

In the face of my friend’s monumental news, my own world was instantly, and gratefully, reframed. That’s the power of perspective.

What Perspective Really Means

We talk about “gaining perspective” as if it’s some abstract wisdom you acquire with age. But it’s not. It’s a visceral, lived shift in how you see the world and your place in it. It’s the sudden, clarifying force that reorganizes your priorities without your permission.

Perspective is the invisible filter that separates what truly matters from what merely feels urgent. The overflowing inbox, the buggy code, the traffic on the way to school pickup—these things feel consuming in the moment. It’s a concept ancient Stoic philosophers embraced: we don’t control external events, only our response to them. When held up against the backdrop of life’s true milestones—health, love, family, and survival—our daily frustrations shrink to their proper size.

In Family and Life

This lesson shows up constantly at home. With my wife, Sarah, and our kids, Molly, Brooklyn, and even our late dog Phoenix, life is a beautiful, chaotic dance of college visits, late-night phone calls, and the inevitable friction of siblings navigating new chapters from afar. It’s easy to get caught up in the small stuff—the spilled milk, the forgotten homework, the argument over screen time. It’s easy to let frustration win.

But perspective is the quiet voice that asks: Is this the moment that matters? Will this argument be remembered tomorrow? Or is the real work to build a home filled with grace, forgiveness, and the knowledge that we are each other’s safe harbor?

My own health journey with my stroke a few years ago was another one of those clarifying, non-negotiable moments. It was a forced reset. Before it, my worries were scattered across a dozen different professional and personal anxieties. After it, they consolidated into one: the profound gratitude for being able to walk, to talk, and to be present with my family. The frustration of a slow-moving project is nothing compared to the painstaking work of relearning a simple motor skill. That is a lesson you don’t forget.

In Business and Leadership

This isn’t just a “life” lesson; it’s a critical leadership tool. In my role as a CTO, my world is filled with sprints, fires, and strategic roadmaps. The pressure to move faster, ship more, and solve complex technical problems is constant. It’s incredibly easy to get lost in the weeds and develop what I call “false urgency”—where every task is treated as a crisis.

But true leadership requires perspective. It’s the ability to remain calm in the chaos, to zoom out from the immediate fire and see the whole forest. It’s what allows you to distinguish between a genuine emergency and a manufactured one.

With perspective, you stop asking, “How can we fix this problem right now?” and start asking, “What’s the most important thing for our team to accomplish this year?” At a senior level, you might only make a few critical decisions a day, but those decisions have a massive ripple effect. Perspective helps you lead with empathy, recognizing that the people you work with are navigating their own lives, their own battles. It guides you to make better long-term decisions, because you’re not just building a product; you’re building a resilient team and a sustainable culture.

In this sense, perspective isn’t just a defensive tool for staying calm; it’s an offensive weapon. In the war of business, where competitors are consumed by short-term fires, a leader with perspective can see the entire battlefield. Instead of charging head-first into the mountain, you find a way around it.

A perfect modern example is the Nintendo Wii. In the mid-2000s, Sony and Microsoft were fighting a costly war over who could build the most powerful console for hardcore gamers. That was the mountain. Nintendo, using perspective, didn’t try to climb it. They went around it. They reframed the problem from “How do we make games more realistic?” to “How do we make games more fun for everyone?” With a simple motion controller, they created a new, uncontested market and outsold their more powerful competitors for years.

Historically, one of the greatest examples is Napoleon’s Ulm Campaign in 1805. An Austrian army was waiting for him in Germany, guarding the direct passes of the Black Forest, ready for a head-on fight. Instead of attacking them where they were strongest, Napoleon sent a small cavalry force to create a diversion while he marched the bulk of his army in a massive, rapid flanking maneuver. By the time the Austrians realized what was happening, Napoleon’s army was behind them, cutting off their supply lines. Their strong defensive position had become a trap. Without a major battle, Napoleon won by making the battle his enemy had prepared for completely irrelevant.

Both Nintendo and Napoleon won, not because they fought the hardest, but because they fought the smartest. They used perspective to sidestep trivial conflicts, conserve energy for the battles that truly mattered, and spot opportunities that others, lost in the fog of false urgency, completely missed.

The Beautiful Byproduct: Gratitude

When your perspective shifts, something amazing happens: gratitude flows in naturally. You don’t have to hunt for it or write it down in a journal (though you can). It simply shows up.

You become grateful for the difficult client, because they are sharpening your skills. You become grateful for the challenging project, because it’s an opportunity for your team to grow. You see obstacles not as roadblocks, but as the raw material for progress. You become thankful for the ordinary, because you’ve been reminded just how fragile it is.

Choosing to See


As I walked away from that lunch, the glow of my friend’s good news stayed with me. It was a powerful reminder that perspective isn’t something we should wait for a crisis to deliver. It’s a gift we can give ourselves, every single day.

It’s a choice.

It’s the choice to pause, take a breath, and look up from the screen. It’s the decision to value presence over productivity, and empathy over efficiency.

So today, I invite you to do the same. Take a moment. Look around at your life, your family, your work. Find one small, ordinary thing and see it for the extraordinary gift it is.

Let’s not wait for life-altering news to see what truly matters. Let’s choose to see it now.

Reflections on AI: AI is Eating Software that is Eating the World

In the summer of 2011, Marc Andreessen published a seminal essay in the Wall Street Journal that defined the next decade of technology and business: “Why Software Is Eating the World.” His argument was as elegant as it was prophetic. He posited that we were in the middle of a fundamental economic shift, where software companies were poised to invade and overturn established industry structures. This wasn’t a cyclical tech bubble, he argued, but a tectonic change in how businesses are built and operated. Nearly every industry was becoming a software industry, and those that failed to adapt would be “eaten.”

He was right. Software did eat the world. We watched as Netflix, a software company, devoured Blockbuster. We saw Amazon, a software company with warehouses attached, consume traditional retail. The arc was clear: build a software-centric model and disrupt the incumbents.

That essay landed with particular force for me. My second daughter Brooklyn had just been born, and inspired by the dawn of the mobile era, I had quit my job to launch an augmented reality startup. It was a time of immense learning and, as my wife Sarah loves to remind me, questionable timing. We were building on the new wave, combining sensors on the new iPhones with marketing and gaming. While the startup ultimately didn’t go the distance, the experience was invaluable. It taught me about the immense weight of the word “disruption” and the grit required to survive it—whether you’re the one disrupting or the one being disrupted, both are incredibly difficult.

For over a decade, Andreessen’s thesis was the undisputed law of the digital jungle. But a new, apex predator has emerged. The cycle of disruption has accelerated to a dizzying pace, and in a deeply meta twist, the disruptors from the past two decades are now the ones being disrupted.

AI is now eating the software that is eating the world.


What Disruption Really Means

Andreessen’s essay heralded a wave of software-driven change that felt unstoppable. But what does it actually feel like to be on the receiving end of that disruption? It’s not just about a new competitor; it’s about the ground shifting beneath your feet.

  1. Loss of Control Over the Value Chain: Disruptors rewire how value is delivered—removing steps, middlemen, or entire business models before you even notice.
  2. Customer Expectations Shift Overnight: When a new player offers instant, personalized, cheaper, or more delightful experiences, your “good enough” becomes “not even close.”
  3. Margin Compression Becomes Existential: Disruptive technologies often enable radically lower cost structures. Software doesn’t sleep, unionize, or take vacations. Your 20% margin looks quaint next to their 80%.
  4. Your Competitive Moat Turns Into a Puddle: Scale, legacy systems, and brand used to be strengths. But disruption turns those into anchors, slowing adaptation while nimble upstarts sprint past.
  5. Innovation Moves Outside the Building: Disruption often comes from adjacent industries or unexpected entrants. Amazon didn’t ask bookstores for permission; OpenAI didn’t wait for Google to modernize.
  6. Talent Starts Leaving for the Cool Kids: The best engineers, designers, and product thinkers want to build the future, not maintain the past. When you’re being disrupted, your best people become a leading indicator of decline.
  7. It Feels Like a Tech Problem, But It’s Actually a Culture Problem: Many incumbents respond by buying new software or hiring consultants. But the real challenge is rewiring how they think, decide, and act.
  8. You’re Not Competing With Companies—You’re Competing With Capabilities: AI, APIs, open-source, no-code… disruptive tools are making individuals and small teams exponentially more powerful.

The Disruptors Disrupted: Modern Examples

Andreessen gave us the classic examples: Blockbuster falling to Netflix, traditional retail to Amazon, Kodak to digital photos. But the most fascinating part of this new wave is seeing the disruptors of that era facing their own existential threats.

Google vs. ChatGPT: The Search for Answers

Google built an empire on software that indexed the world’s information and presented it as ten blue links. SEO became the science of ranking on that list. But AI is eating that model. While Google still dominates the raw volume of search, a significant behavioral shift is happening faster than anyone predicted.

According to a recent Wall Street Journal article, AI-powered search is growing more quickly than expected, with traffic to leading AI chatbots like ChatGPT and Perplexity AI surging. One analytics firm, Similarweb, noted that combined traffic to the top 10 AI chatbots grew 34% in the first part of this year alone. This isn’t just a niche trend; it’s a mainstream migration for certain types of queries. Users are flocking to conversational AI for complex, informational tasks—research, brainstorming, coding help, and problem-solving.

We see real-world examples of this constantly. A user on Quora recounted struggling to find a half-remembered book using Google; ChatGPT found it instantly from a vague, partially incorrect description. This is a fundamentally new type of search—one based on context and conversation, not just keywords.

The game is shifting from Search Engine Optimization (SEO) to Answer Engine Optimization (AEO) and Generative Engine Optimization (GEO). Users no longer just want a list of links to search through; they want the answer.

Uber vs. Waymo: The End of the Driver

Uber used software to disrupt the taxi industry by creating a massive, efficient marketplace for drivers. Their former CEO pushed hard into autonomous driving, recognizing the existential threat. But in a classic innovator’s dilemma, the new leadership divested from that costly, long-term bet to focus on near-term profitability. Now, companies like Waymo and Tesla are rolling out robotaxi services that threaten to eat Uber’s core business model by removing the driver—and their associated costs—entirely.

The IDE vs. AI: The Changing Nature of Code

The very process of building software is being consumed. For decades, developers have relied on Integrated Development Environments (IDEs) like Microsoft’s Visual Studio or JetBrains’ IntelliJ IDEA. These were the definitive software-building tools. Now, AI-native environments like Cursor and Replit are upending that. They don’t just help you write code; they write it with you and for you.

This has profound implications. What happens when the cost to build software approaches zero?

  • Explosion in Software Supply: Software is no longer a scarce, expensive resource—it becomes ubiquitous infrastructure.
  • Margins Collapse for Custom Development: Dev agencies, especially those competing on cost, face commoditization unless they move up the value chain to strategy and architecture.
  • Shift from “Build” to “Compose”: Software creation becomes more about orchestration and configuration than hard engineering.
  • Rise of Citizen Developers: Domain expertise becomes more valuable than knowledge of syntax.
  • Incumbent Software Vendors Get Eaten: Legacy vendors must reinvent themselves or be disrupted out of existence.
  • Regulation Struggles to Keep Up: Governance models must evolve—fast.
  • Software Becomes Embedded Everywhere: The world becomes hyper-personalized and hyper-automated.
  • Engineering Roles Evolve: The “10x engineer” becomes the “10x AI collaborator.”
  • Economic Leverage Shifts: Distribution, branding, and user insight become more valuable than the underlying code.
  • Everything Speeds Up: Strategic agility becomes the only true competitive advantage.

The Crumbling Moats of Enterprise Software

Every traditional enterprise software vendor is watching its moat dry up. For years, the high cost of replacement was a powerful defense. But that changes as monolithic platforms give way to a diverse ecosystem of best-of-breed SaaS players. Data is becoming more accessible through APIs, and workflows are easier to replace. Additionally, companies are getting wiser to the enterprise sales game. Just because a vendor bought a company doesn’t mean its technology is well-integrated into the platform. We will see the emergence of AI-native enterprise platforms that are built from the ground up to automate, predict, and advise—making their predecessors look like relics.

The Existential Question for Every Company

In 2011, Andreessen argued that every company needed to become a software company to survive. In 2025, the stakes are even higher. What happens to companies—even the software-savvy ones—that don’t evolve into AI-native organizations?

The bottom line is they risk becoming irrelevant, uncompetitive, or extinct. That isn’t a threat; it’s the emerging reality.

  • They get outpaced by faster, cheaper, smarter rivals.
  • Innovation freezes while bureaucracy expands.
  • Knowledge work gets bottlenecked in human silos.
  • Margins shrink as defensibility moats evaporate.
  • Top talent leaves for companies where AI is an amplifier, not a threat.
  • Customers expect magic, but these companies deliver forms and call centers.
  • Legacy infrastructure becomes an existential debt.
  • Strategy becomes guesswork without the real-time data fabric to train and validate AI.

The imperative has evolved. In 2011, the call was to become a software company. Today, every company must become an AI company. This isn’t about buying a few AI tools or launching a chatbot. It’s about fundamentally re-architecting the business around data, intelligence, and automation. It means fostering a culture that thinks in terms of models, probabilities, and feedback loops, and embedding intelligent capabilities into the core of every product, service, and process.

Why Now? The Perfect Storm for Disruption

This isn’t happening in a vacuum. A confluence of factors has created a perfect storm for this AI-driven disruption. As I explored in my previous posts on Accelerating Returns and the Stochastic Era, we’ve hit a critical inflection point.

  1. Foundation Models Changed the Game: General-purpose models like GPT can now write, debug, and refactor software, crossing a critical capability threshold.
  2. OpenAI (and others) Made It Accessible: The interface to intelligence is now an API call, not a research lab.
  3. Software Was Ripe for Disruption: Ironically, much of the software world had become bloated, slow, and vulnerable to a leaner, smarter alternative.
  4. Cheap Cloud + Ubiquitous GPUs = Acceleration: The hardware finally caught up with the ambition.
  5. We Finally Have Enough Training Data: The internet created the massive corpus of code, text, and images needed to train these models.
  6. Human-Machine Collaboration Just Got Real: The technology is not just smart—it’s usable, amplifying human potential across every role.
  7. Software Economics Just Collapsed: When AI can write the code, the cost to create software plummets, and the speed to ship skyrockets.

The Great Leapfrog Moment

One of the wildest things about this era? It’s a leapfrog moment. You don’t need to be the biggest, richest, or most established player anymore—you just need to be the fastest learner.

A scrappy team with a bold vision can outmaneuver giants. The stack is flatter, the tools are open, and the pace of change is brutal. Where you started matters less than how fast you move. This isn’t just for startups. Older companies can leapfrog, too. In fact, they might be in the best position—if they’re willing to change. They have the customers, the data, the brand, and the operational knowledge. What they often lack is urgency and imagination.

The age of the “5-year digital roadmap” is over. The game now is a chaotic, high-stakes parkour race.

Conclusion

In his 2011 essay, Marc Andreessen famously wrote that he was optimistic about the future growth of the economy, predicting it would be driven by these new software-based disruptors. He encouraged every company to embrace this change, to become a software company.

Today, I am also incredibly optimistic, but for a different reason. We are witnessing a second, more profound wave of disruption that is unlocking human potential on an unprecedented scale. The ability to create, to solve problems, and to build is being democratized by AI. Companies that embrace this new reality—that become AI-native at their core—will not only survive but will define the next era of innovation and value creation.

More and more major businesses and industries are being run on artificial intelligence and delivered as intelligent, automated services. The smart ones will be AI-first. The rest will be dinner.

Reflections on AI: The Stochastic Era

I’ve always loved jazz and improvisational music. My wife, Sarah, appreciates the perfect, tight structure of a three-minute song, and I get it. There’s a real beauty in that precision. But for me, the magic happens in the exploratory freedom of a 10, 15, or even 25-minute musical journey. It’s about letting go of a rigid plan to discover something new and amazing in the moment.

I was thinking about this recently, remembering a weekend back in August of 1996. I was standing on a decommissioned Air Force base in Plattsburgh, New York, with three good friends and a huge smile on my face. We were at The Clifford Ball, Phish’s first festival, and the band was on fire. During the second set of the second night, they launched into “Run Like An Antelope.” The jam that followed was pure improvisational genius—a high-energy, tight-but-loose exploration that broke free from the song’s structure to create something utterly unique and unrepeatable. The entire festival was like that, a masterclass in creative freedom.

I’m a firm believer in what Steve Jobs called standing at the “crossroads of technology and the liberal arts.” That Phish jam is a perfect example of the artistic side: letting go of a rigid structure can lead to something far more profound. It feels counterintuitive, but for my entire career in technology, I’ve seen the other side—a world built on perfect, deterministic machines. Now, we’re standing at a new crossroads, and the same principle of letting go is about to change everything.

Steve Jobs famously said, “… it’s technology married with liberal arts, married with the humanities, that yields us the results that make our hearts sing.”

A Jarring Shift in Thinking

For as long as I’ve been a software engineer and a technology leader, computers have been defined by their deterministic nature. They are perfect, logical calculators. Input A always produces Output B. 2 + 2 will always equal 4. But we are now entering a new era: the Stochastic Era.

The most powerful large language models today, the ones that can generate art, write poetry, and are changing our world, are fundamentally not deterministic. At their core, they are probabilistic engines making sophisticated guesses. Letting go of rigid structure is what has made room for what feels like creativity. This is a massively jarring shift in thinking. How can this randomness—this seeming imperfection—be the essential ingredient for building true, human-like intelligence?

From Certainty to Probability: What is Stochastic Thinking?

To understand this shift, we need to contrast two mindsets.

  • Deterministic Thinking: This is like following a precise recipe to bake a cake. You use the exact same ingredients and instructions every time, and you get the exact same cake. It’s predictable and reliable.
  • Stochastic Thinking: This is like a skilled chef improvising a meal. They have a deep understanding of ingredients and techniques, but they create a dish based on what’s fresh and available. The meal is different every time, but it’s creative, adapted, and often brilliant.

It’s crucial to understand that this isn’t just chaos or random noise. It’s principled randomness. A stochastic system uses probability distributions to make the best possible guess based on the vast amount of data it has learned from.
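
To make the contrast concrete, here’s a toy sketch in Python (the replies and weights are invented for illustration, not drawn from any real system). The deterministic function returns the same output for the same input forever; the stochastic one samples from a weighted distribution, so its output varies but is never arbitrary:

```python
import random

def deterministic_reply(name):
    return f"Hello, {name}."                        # same input, same output, every single time

def stochastic_reply(name):
    # Principled randomness: options are weighted by how likely they are,
    # then one is sampled. Varied output, but never arbitrary noise.
    options = [f"Hello, {name}.", f"Hey {name}!", f"Good to see you, {name}."]
    weights = [0.6, 0.3, 0.1]
    return random.choices(options, weights=weights)[0]

print(deterministic_reply("Sarah"))   # always identical
print(stochastic_reply("Sarah"))      # differs from run to run
```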

The Engine of Modern AI: How LLMs Actually Work

The generative AI revolution we are living through was ignited by a single research paper. In 2017, researchers at Google published a paper titled “Attention Is All You Need.” It introduced a new architecture called the Transformer, which is the blueprint for every modern Large Language Model (LLM), from ChatGPT to Gemini.

Before the Transformer, AI models processed language sequentially, one word at a time, often forgetting the context of earlier words. The Transformer’s breakthrough was a mechanism called self-attention, which allows the model to look at all the words in a sentence at once and weigh their relevance to each other. This enabled a far deeper understanding of context and, crucially, allowed for massive parallelization in training.
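
For the technically curious, here is a minimal NumPy sketch of scaled dot-product self-attention, the mechanism described above. It’s a single attention head with no masking, positional encodings, or learned parameters beyond random matrices, so treat it as an illustration of the idea rather than a working Transformer:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of token vectors X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # project tokens into query/key/value spaces
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # every token scores every other token at once
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax turns scores into relevance weights
    return weights @ V                        # each output is a relevance-weighted mix of all tokens

# Four tokens with 8-dimensional embeddings, processed in parallel (no sequential scan).
rng = np.random.default_rng(42)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)   # (4, 8): same sequence length, contextualized vectors
```

The important line is the score matrix: every token is compared against every other token in a single matrix multiplication, which is exactly what enables the parallelism and context-awareness described above.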

Stochastic thinking is not just an add-on to this architecture; it is its fundamental operating principle.

  1. The Core Engine: A Probabilistic Word Predictor. At its heart, an LLM is predicting the most probable next word in a sequence. Its creativity comes from the fact that it doesn’t always pick the #1 most likely word. Instead, it samples from a distribution of likely candidates, allowing for variety and novelty.
  2. Controllable Randomness: Temperature and Top-P Sampling. We can control this randomness with parameters. Temperature acts as a creativity dial—low temperature makes the AI more factual and predictable, while high temperature makes it more creative and surprising. Top-P (nucleus) sampling provides another lever, telling the model to consider only the smallest set of likely words whose combined probability reaches a chosen threshold. (See the first sketch after this list.)
  3. The Learning Process: Stochastic Gradient Descent. Even the training process is stochastic. It would be impossible to learn from the entire internet at once. Instead, models learn using Stochastic Gradient Descent (SGD), where they take a small, random batch of data, learn from it, and adjust. This random sampling makes learning efficient and helps the model generalize its knowledge. (See the second sketch after this list.)
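
Here’s a minimal sketch of item 2 in Python. The function, toy vocabulary, and logit values are all invented for illustration; production LLMs apply the same two knobs over vocabularies of tens of thousands of tokens:

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, top_p=0.9, rng=None):
    """Pick the next token from raw model scores using temperature and top-p sampling."""
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=np.float64) / max(temperature, 1e-8)  # temperature dial
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()                                  # softmax -> probability distribution
    order = np.argsort(probs)[::-1]                       # tokens from most to least likely
    cutoff = np.searchsorted(np.cumsum(probs[order]), top_p) + 1
    keep = order[:cutoff]                                 # the "nucleus" of candidate tokens
    return rng.choice(keep, p=probs[keep] / probs[keep].sum())

# Toy next-word distribution for "The sky is ___": sampling can pick runners-up, not just #1.
vocab = ["blue", "grey", "clear", "falling", "banana"]
logits = [3.2, 2.9, 2.1, 0.5, -2.0]
for _ in range(3):
    print(vocab[sample_next_token(logits, temperature=0.8, top_p=0.9)])
```

Run it a few times: with these settings, “falling” and “banana” are cut off by top-p, but “grey” and “clear” still get their turns. That is principled randomness in action.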
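
And a companion sketch for item 3: stochastic gradient descent fitting a one-parameter linear model. The data, learning rate, and batch size are invented for illustration, but the core move is real—each update looks at a small random batch rather than the whole dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=10_000)
y = 3.0 * X + rng.normal(scale=0.1, size=10_000)    # noisy data; the "true" weight is 3.0

w, lr, batch_size = 0.0, 0.1, 32
for step in range(500):
    idx = rng.integers(0, X.size, size=batch_size)  # a small *random* batch, not the full dataset
    xb, yb = X[idx], y[idx]
    grad = 2.0 * np.mean((w * xb - yb) * xb)        # gradient of mean squared error on the batch
    w -= lr * grad                                  # step downhill along the noisy estimate
print(f"learned weight: {w:.3f}")                   # lands near 3.0 despite the randomness
```

Each individual step is noisy, yet the weight reliably converges near the true value. That is the same bet LLM training makes at planetary scale.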

The Wall of Determinism: Why Old AI Hit a Limit

For decades, AI research focused on rule-based “expert systems.” This deterministic approach could never lead to AGI for a few key reasons:

  • The Real World is Messy: The world isn’t a clean set of IF-THEN statements. It’s ambiguous, nuanced, and unpredictable.
  • Brittleness: Rule-based systems are brittle. They fail the moment they encounter a situation not explicitly covered by their hand-crafted rules.
  • The Creativity Problem: A deterministic system can only follow its programming. It can never create something truly novel or surprising.

The Bitter Lesson

In 2019, AI pioneer Rich Sutton wrote a now-famous essay called “The Bitter Lesson.” His central point was that, in the long run, general-purpose methods that leverage massive computation (like learning and search) will always outperform systems where humans try to hand-craft their knowledge.

This is the ultimate validation of the stochastic approach. Instead of trying to teach an AI all the grammatical rules of English, we let a general learning algorithm discover the patterns for itself from trillions of words. This is exactly how LLMs work, and it’s a lesson that connects directly to the ideas in my previous post on the Law of Accelerating Returns. When you combine The Bitter Lesson (let computation do the work) with the Stochastic Engine of LLMs and place it on the exponential curve of Accelerating Returns, you get the explosive, transformative moment in AI that we are witnessing right now.

How Stochasticity Unlocks Intelligence

This new approach is the bridge to AGI because it enables capabilities that were impossible before:

  1. Creativity and Exploration: Randomness allows an AI to explore novel combinations of ideas and generate content that has never existed before.
  2. Robustness and Adaptability: A probabilistic model can handle the uncertainty of the real world, making informed guesses instead of breaking down.
  3. Efficient Learning: It is the only way to effectively learn from the planet-scale datasets required to achieve general intelligence.

I saw early glimpses of this in my career. I had the incredible opportunity to be mentored by Steve Kirsch, the founder of Infoseek and a true tech pioneer. We worked together on algorithms for blocking spam for major clients like Yahoo Mail. The techniques we used were essentially early stochastic models, employing Bayesian probability to “guess” if an email was spam based on patterns, rather than relying on rigid rules. That company was later sold to Proofpoint, but the core lesson about the power of probabilistic systems stayed with me.

Even today, my role as CTO for O2E Brands is a stochastic exercise. I’m constantly weighing probabilities—the likelihood of a project’s success, market adoption, potential risks—to make the best strategic bets with the available data. It’s never about one certain answer.

The Art of the Guess

Looking ahead, these non-deterministic, stochastic models will power the next wave of systems on the path to AGI, from autonomous agents that can navigate unpredictable environments to scientific AIs that can form novel hypotheses.

The journey to AGI isn’t about building a faster, more powerful calculator. It’s about building a more sophisticated and intuitive “guesser.” We’ve spent a century trying to make machines perfectly logical. It turns out, to make them truly intelligent, we first have to teach them the art of probability. The messy, jarring concept of randomness is not a bug—it’s the feature that will finally get us to AGI.

Thank you for reading. Leave a comment if you have thoughts to share.

Reflections on AI: The Law of Accelerating Returns

Looking back on my 25 years in technology, I can’t help but feel an immense sense of gratitude. It has been an amazing ride, and I feel incredibly lucky to have had a front-row seat—and often, a place on the stage—for some of the most profound technological shifts in human history.

My career has spanned the dot-com boom, the rise of enterprise software, the mobile revolution, the shift to the cloud, and now, the dawn of the AI era. Each wave was built on the last, creating a foundation for the next leap forward. The amount of change we’ve packed into the last quarter-century is staggering. It makes you wonder: if this is what we saw in the last 25 years, where could we possibly be in the next 25?

It feels impossible to predict, but some people make it their life’s work. One of the most compelling thinkers on this topic is the inventor and futurist, Ray Kurzweil.

The Prophet of Exponential Growth

Ray Kurzweil is a towering figure in computer science. He’s an author, inventor, and one of the most prominent futurists of our time. His career stretches back roughly six decades, and he even worked with the legendary Marvin Minsky at MIT—an institution I’ve grown particularly fond of since my daughter started attending.

Ray Kurzweil speaking at a technology conference, smiling and wearing a dark blazer over a plaid shirt.

Kurzweil is best known for his mind-bending books like The Singularity Is Near and his brand new follow-up, The Singularity Is Nearer. In them, he argues that humanity is approaching a “Singularity”—a point in the near future where technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization. He predicts we will achieve Artificial General Intelligence (AGI), an AI that can understand or learn any intellectual task that a human being can, by 2029, and that the Singularity itself will occur around 2045.

He frames this journey through his “Six Epochs of Evolution”:

  1. Physics and Chemistry: Information in atomic structures.
  2. Biology: Information in DNA.
  3. Brains: Information in neural patterns.
  4. Technology: Information in hardware and software.
  5. The Merger: The fusion of technology and human intelligence.
  6. The Universe Wakes Up: The point where intelligence saturates the cosmos.

According to Ray, we are living in the 5th Epoch right now.

Why We Fail to See the Future

Kurzweil’s predictions can feel like science fiction because our brains are wired to think linearly. We struggle to grasp the power of exponential growth. Think about it: if you take 30 linear steps, you end up 30 meters away. If you take 30 exponential steps (doubling each time), you travel over a billion meters—enough to circle the Earth 26 times!
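
The arithmetic is easy to verify. Here’s a quick back-of-the-envelope check in Python, assuming the first step is one meter and each subsequent step doubles:

```python
steps = 30
linear_m = steps                                  # 30 linear steps: 30 meters
exponential_m = sum(2**i for i in range(steps))   # 1 + 2 + 4 + ... for 30 doublings
earth_circumference_m = 40_075_000                # ~40,075 km around the equator
print(f"{exponential_m:,} meters")                # 1,073,741,823 meters
print(round(exponential_m / earth_circumference_m, 1))  # ~26.8 trips around the Earth
```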

A Futurist, in my mind, is someone who can intuitively grasp exponential growth. They don’t just see the next step; they see the curve. This understanding is the key to Kurzweil’s central thesis: The Law of Accelerating Returns.

The Law of Accelerating Returns

This law is the engine driving us toward the Singularity. It states that the rate of technological advancement—and evolution in general—is not linear, but exponential. This happens because of powerful feedback loops of innovation. Each new generation of technology provides better tools to create the next generation, which is then created faster and more efficiently.

Think of it as a form of societal reinforcement learning. We create a tool, learn from it, and use that knowledge to build a better tool, accelerating the cycle. Moore’s Law, which famously predicted the doubling of transistors on a chip every two years, is just one famous example of this law in action. But Kurzweil argues it applies to all information-based technologies. The law of accelerating returns is happening now, and it has been for a long time. The evidence is all around us.

Mary Meeker’s “Trends, Artificial Intelligence” Report

For decades, anyone in tech has eagerly awaited Mary Meeker’s annual “Internet Trends Report.” She is a bit of a celebrity in our circles, first publishing her report in 1995 and updating it yearly until 2019. Just a few weeks back, she and her team at BOND Capital dropped a new gem: “Trends, Artificial Intelligence”.

Reading through the 340-slide deck, I couldn’t help but see it as a stunning validation of Kurzweil’s Law of Accelerating Returns. The pace of change in AI is off the charts. The compounding effect of AI technology, its ecosystem, and user adoption is completely unprecedented.

The arc of Meeker’s deck proves that this AI wave is built upon all the technology that came before it: computing power, vast datasets, advanced algorithms, and global communications networks. It’s the ultimate feedback loop.

Here are a few slides that stood out to me:

  • Slide 20: Google Disruption: The pace at which new AI-native search products are challenging Google is breathtaking. This isn’t a slow, decade-long battle; it’s happening in months.
A graph comparing annual searches for ChatGPT and Google from 1998 to 2025, highlighting that ChatGPT reached 365 billion searches in just 2 years, while Google took 11 years.
  • Slide 26: Wisdom, Not Just Knowledge: “Wisdom” is why products like ChatGPT and Gemini Search will win. Traditional search gives you knowledge (a list of links). AI-powered search provides wisdom—synthesized, contextual answers. It’s a fundamental upgrade.
Slide featuring a quote by Martin H. Fischer: 'Knowledge is a process of piling up facts; wisdom lies in their simplification.' The slide is attributed to BOND and highlights the theme of knowledge distribution over six centuries.
  • Slide 43: Passing the Turing Test: We are already starting to pass the Turing Test in various modalities. AI is becoming indistinguishable from human-created content, a milestone Kurzweil predicted for 2029. We’re right on schedule, if not ahead.
An image illustrating a Turing Test conversation between two witnesses, A and B, showcasing the realism of AI-generated responses compared to human dialogue.
  • Slide 302: Waymo vs. Lyft: In San Francisco, Waymo’s autonomous vehicles have surpassed Lyft in market share. Think about that. A technology that was science fiction a decade ago is now out-competing an established, human-powered incumbent in a major city. The disruption is real and it is happening now.
Graph showing the market share of Waymo's fully-autonomous vehicles compared to Uber and Lyft in San Francisco over a period from August 2023 to April 2025.

Buckle Up

As someone who has spent a career building things, I find it impossible to look at Kurzweil’s theories and Meeker’s data with anything but immense optimism. This isn’t a moment for fear; it’s a moment for builders. The scale of this transformation is unlike anything we’ve ever seen, presenting an unprecedented opportunity to redefine what’s possible and build a better future.

There will be fear and resistance; there always is. This is not a new phenomenon. When the printing press emerged in the 15th century, the scribal class and religious authorities feared a loss of control, calling it a technology that would spread dangerous ideas to the masses. In the 19th century, the Luddites famously smashed the automated looms that threatened their craft and livelihoods. And in our own lifetimes, people protested the introduction of calculators in schools, fearing students would forget basic math. The AI revolution is, of course, something much bigger, but the pattern of anxiety and opposition is the same. We cannot turn back the clock. The feedback loop of innovation is spinning faster than ever.

So much change is ahead of us. Buckle up.

Don’t Be a Lemming

In 1991, a puzzle-platformer video game called Lemmings was released, and I absolutely loved it. The goal was to guide a troop of adorable, green-haired, blue-robed lemmings from an entrance to an exit, navigating a landscape filled with treacherous obstacles. You couldn’t control the lemmings directly; they just marched forward in a single-file line, blissfully unaware of the deadly drops, traps, and rivers ahead. If you didn’t assign them specific tasks—like building, digging, or blocking—they would walk off cliffs to their doom without a second thought. They just followed the one in front.

Cover art for the video game Lemmings, featuring colorful cartoon-style characters, including a central green-haired lemming in a blue robe and various lemmings engaging in activities on a vibrant landscape with hills and obstacles.

At the time, I didn’t really know the origin of the term “lemming.” It turns out, it comes from a pervasive myth about the small arctic rodents. Popularized by a 1958 Disney documentary, the story goes that lemmings periodically engage in mass migrations that end in them blindly marching off cliffs into the sea. The reality is that this is a misconception; their population cycles can lead to migrations where some may accidentally fall or drown, but there is no instinct for mass, unthinking suicide. Yet, the myth persists as a powerful metaphor for a behavior we see every day: the human tendency to mindlessly follow the crowd.

Our lives are a constant battle against a gravitational pull toward conformity. We are, in many ways, hard-wired to follow the pack, and taking the road less traveled is far harder than we think. But that untrodden path is where new experiences, true self-discovery, and profound opportunities are found.

The Early Days of Following

As a child, this pull starts as simple peer pressure. I have a vivid memory from my youth that still makes me cringe and laugh. A group of us were out, and for some reason, the collective “wisdom” of the group decided it would be hilarious to tip over a port-a-potty. The only problem was that one of our friends was still inside. Fueled by the inexplicable logic of group dynamics, I helped do the deed. The look on his face when he emerged—dazed, confused, and a little bit blue—was a sight to behold. Thankfully, the unit had just been cleaned. We’re still friends today, but that incident was an early, messy lesson in how easily we can be swayed to do things we know are wrong, just because everyone else is doing it.

This pressure cooker of conformity has been supercharged by the proliferation of social media. Today, the playground taunts and hallway whispers have been replaced by a global, 24/7 subconscious web of influence. Every ‘like,’ comment, and share serves as a micro-dose of social validation, a little endorphin hit that reinforces our desire to align with the digital crowd. We subtly tailor our posts, our opinions, and even our life experiences to what we believe will perform well, often without even realizing we’re doing it. The pack is no longer just in our physical vicinity; it’s in our pocket, constantly judging and guiding.

The pressure to conform only intensifies as we get older. In young adulthood, the goalpost shifts to what society deems successful, which usually means making money and pursuing a prestigious career. As the son of two Filipino doctors, there was tremendous pressure on me to follow in their footsteps. It’s very much a part of the culture. I spent my freshman year of college as a pre-med major, not because it was my passion, but because it was the expected path. It was the safe, respectable, and well-trodden road that everyone in my orbit seemed to want for me.

The Desires We Inherit

This phenomenon goes deeper than just our actions; it infects our very desires. We think we want things for our own reasons, but often, we just want them because other people want them. Remember the Beanie Babies craze of the 1990s? These little stuffed animals, which cost a few dollars to make, suddenly became must-have collectibles. People weren’t buying them because of their intrinsic value or beauty; they were buying them because everyone else was buying them, creating a speculative bubble fueled by collective desire. We were convinced they were a sound investment, but we were really just caught in a feedback loop of wanting.

The French historian and philosopher René Girard built his life’s work on this core insight, which he called Mimetic Desire. His theory posits that our desires are not original; we imitate or borrow them from others. We see a “model”—a friend, a celebrity, a societal figure—desire something, and that act of desiring makes the object desirable to us. This single concept was the starting point for his broader theory on human culture: this shared desire inevitably leads to rivalry and conflict, which societies then resolve by unconsciously uniting against a single “scapegoat” to restore order. It all starts with learning what to want from the crowd. This raises the question: which of our desires are actually our own?

This is where the statement, “Care less what other people think,” becomes so incredibly powerful. It’s not about being rebellious for its own sake; it’s about giving yourself the freedom to disentangle your own motivations from the mimetic noise around you. It’s a declaration of independence for your own mind. And it’s funny, because even as society pushes us to conform, it has always held a special admiration for the rebel—the one who doesn’t do what everyone else is doing. It’s as if we have a subconscious recognition of just how hard it is to break away. Think of the enduring icons in pop culture: James Dean in Rebel Without a Cause, Han Solo in Star Wars, or Katniss Everdeen in The Hunger Games. They are celebrated not for fitting in, but for forging their own path, often in defiance of overwhelming pressure. We applaud their independence because, on some level, we wish we had more of it ourselves.

And what’s fascinating is that what we think other people think is often completely wrong. In his book Collective Illusions, Todd Rose (whom I was lucky enough to meet—a brilliant thinker from here in Utah) brilliantly unpacks this. His research shows there’s a huge gap between our private beliefs and our public actions. We assume the loudest and most repeated opinions represent the majority view, and then conform to that illusion. A powerful example from his research is the definition of a “successful life.” Privately, the vast majority of people define success in terms of personal fulfillment. But when asked what they think most other people value, they say fame, status, and wealth. This is a collective illusion in action: we end up chasing a version of success that we don’t personally value, all because we wrongly believe it’s what everyone else wants. We enforce a norm that almost no one truly believes in.

A portrait of Todd Rose, author of the book 'Collective Illusions', alongside the book cover featuring the title and a visual of matches.

The Dangers of the Pack and the Power of “Why”

This instinct to follow can have devastating consequences. The most famous and tragic example is the 1978 Jonestown massacre, where over 900 people died after following the orders of cult leader Jim Jones, leading to the phrase “drinking the Kool-Aid.” It’s a horrifying testament to how the pack mentality can strip away individual judgment.

On a less extreme but far more common level, we see this play out in corporate cultures. A company’s culture can be a powerful weapon, aligning everyone toward a common mission and accelerating progress. But it can also be a boat anchor, preventing meaningful change. Cultures are self-reinforcing; employees follow established norms like lemmings. When you try to introduce a new idea or change a process, the existing culture often protects itself, pushing back against the very change it may need to survive.

The antidote to this is simple, yet profound: ask why. Questioning the path, especially when the entire pack is moving in one direction, is a superpower. Steve Jobs built his entire philosophy around this, famously saying, “The ones who are crazy enough to think they can change the world are the ones who do.” He understood that innovation doesn’t come from accepting the status quo, but from challenging it at every turn.

Formulating our own path is hard because we are constantly being manipulated by biases and external forces:

  • Repetition Bias: We believe things are true simply because we hear them over and over.
  • Survivorship Bias: We focus on the “winners” who took a certain path, ignoring the many who took the same path and failed.
  • Social Media & News: Our information streams are heavily skewed. For instance, research shows that on Twitter, roughly 10% of users produce 80% of the tweets, creating a distorted view of public opinion. This is amplified by the Friendship Paradox, the phenomenon where your friends, on average, have more friends than you do, making certain ideas and behaviors seem more popular than they actually are.

Finding Your Own Path

What do people on their deathbeds say they value most? Bronnie Ware, an Australian nurse who spent years working in palliative care, documented the most common regrets of the dying. The number one regret, by far, was: “I wish I’d had the courage to live a life true to myself, not the life others expected of me.” When all is said and done, people don’t wish they had made more money or accumulated more status symbols. They wish they had been authentic.

It’s crucial to recognize the powerful, invisible gravity that pulls us to do what everyone else does and want what everyone else wants. It’s a force of nature, but it can be resisted. The first step is awareness. The next is having the courage to ask “why” and to listen to the answer that comes from within, not from the crowd.

Find your own path. Want what you want. Be your own person. You’ll die a much happier one.