It's in the way that you use it
AI is enabling the greatest learning opportunities humanity has ever experienced. Unfortunately, that’s not the only way to use or experience AI.
In case you’ve been under a rock, AI has been on a bit of a tear this year. We also live in a world where most of what we consume comes via systems where literally every incentive begs participants to try to game the system. This isn’t new, despite what some might have you think.
But AI makes it more vivid and — unlike the core ideas of traffic, discovery, ranking, and distribution — it feels like something you can point out. Despite revealed preferences, Gen Z is skeptical and the majority of people are quite negative about AI right now.
The backlash is justified, but it obscures the opportunity. If we focus on slop, we miss the actual revolution: an unprecedented engine for individual learning and mastery. So where is the industry going so wrong?
Software brain
Decoder’s Nilay Patel has an incredibly thoughtful piece up about this on The Verge. It’s worth a read or listen.
Software brain is powerful stuff. It’s a way of thinking that basically created our modern world. Marc Andreessen, the literal embodiment of software brain, called it in 2011 when he wrote the piece “Why software is eating the world” as an op-ed in The Wall Street Journal. But software thinking has been turbocharged by AI in a way that I think helps explain the enormous gap between how excited the tech industry is about the technology and how regular people are growing to dislike it more and more over time.
He continues, contrasting tech executives who sound hopeful
Satya Nadella: At the end of the day, I think this industry, to which I belong, needs to earn the social permission to consume energy because we’re doing good in the world.
with those warning that all our jobs are going away
Dario Amodei: Entry-level jobs in areas like finance, consulting, tech and many other areas like that — entry-level white-collar work — I worry that those things are going to be first augmented, but before long replaced by AI systems.
He then notes that while many AI executives seem to be treating this as a marketing problem, that’s not what it is.
It feels like someone just needs to say this clearly, so I’m just going to do it. AI doesn’t have a marketing problem. People experience these tools every single day! ChatGPT has 900 million weekly users, trending to a billion, and everyone has seen AI Overviews in Google Search and massive amounts of slop on their feeds.
Patel then basically makes the argument that “software brain” is “when you see the whole world as a series of databases that can be controlled with the structured language of software code,” before pointing out the ways in which trying to get the real world to act like a computer is just hella stupid.
To support this, he turns to a quote from Ezra Klein, talking about AI leaders in Silicon Valley:
Ezra Klein: They think the A.I. age has arrived and its winners and losers will be determined, in part, by speed of adoption. The argument is simple enough: The advantages of working atop an army of A.I. assistants and coders will compound over time, and to begin that process now is to launch yourself far ahead of your competition later. And so they are racing one another to fully integrate A.I. into their lives and into their companies. But that doesn’t just mean using A.I. It means making themselves legible to the A.I.
The notion of “making yourself legible” to this generation of AI — or, for that matter, thinking in terms of a DB schema — is such a profoundly 2014 version of AI that it’s really striking to hear someone making that argument today. But Nilay is not alone.
Kyle Kingsbury wrote a spectacular series in April, starting with “The Future of Everything is Lies, I Guess.” It is too long to effectively sum up, but what he returns to again and again is the non-deterministic and error-prone nature of LLMs:
One way to understand an LLM is as an improv machine. It takes a stream of tokens, like a conversation, and says “yes, and then…” This yes-and behavior is why some people call LLMs bullshit machines. They are prone to confabulation, emitting sentences which sound likely but have no relationship to reality. They treat sarcasm and fantasy credulously, misunderstand context clues, and tell people to put glue on pizza.
This is, of course, why LLMs make terrible databases. It’s also their superpower. Being tireless, helpful, and able to operate improvisationally is exactly what makes LLMs incredibly potent tools for learning. Plus, we all know how to mitigate hallucinations at this point: ground the model in sources, cross-check its claims, and keep a human in the loop.
Kingsbury’s conclusion rhymes with Patel’s piece:
I’ve thought about this a lot over the last few years, and I think the best response is to stop. ML assistance reduces our performance and persistence, and denies us both the muscle memory and deep theory-building that comes with working through a task by hand: the cultivation of what James C. Scott would call metis. I have never used an LLM for my writing, software, or personal life, because I care about my ability to write well, reason deeply, and stay grounded in the world.
Stop¹. As Patel concludes:
For everyone else, AI is just a demanding slop monster. It’s a threat.
Giles Turnbull’s post captured this in a slightly different way:
I have a feeling that everyone likes using AI tools to try doing someone else’s profession. They’re much less keen when someone else uses it for their profession. I fall into the same trap as everyone else. I recognise, and admit to, my own bias.
And John Gruber, reacting to Patel, notes how different the reaction to AI has been compared to the last two technology shifts:
Something is profoundly off in the computer industry when it comes to software broadly and AI specifically. It’s up for debate what exactly is off and what should be done about it, but the undeniable proof that something is profoundly off is the deep unpopularity surrounding everything related to AI. You can’t argue that the public always turns against groundbreaking technology. The last two epoch-defining shifts in technology were the smartphone in the 2000s, and the Internet/web in the 1990s. Neither of those moments generated this sort of mainstream popular backlash. I’d say in both of those cases, regular people were optimistically curious. The single most distinctive thing about “AI” today is the vociferous public opposition to it and deeply pessimistic expectations about what it’s going to do.
What’s off is that executives tend to anchor on the eras they grew up in². Music execs in 2007 couldn’t get their minds around streaming because they got promoted selling CDs. Many tech leaders and AI practitioners still think it’s deep learning.
Moving beyond 2014
Why 2014? Because that’s when fancy new AI ideas started smashing into the prior generation’s. Unlike today, AI in 2014 was barely hanging around. AI winter, expert systems, struggling image recognition and translation systems were in use — and managed by very specialized teams — when suddenly computing power caught up to the deep learning ideas that had been rattling around for the prior 20 years. Hell, even longer — I was using “Laplacians” for edge detection in school projects in 1990, which is just a special case of the broader set of convolution transformations. On a 386 in ’91, you could just about get a robot to follow a bright path digitally rather than using analog techniques. Exciting stuff, but not AI. That needed the 4-million-times increase in floating-point performance those 24 years brought.
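To make that concrete, the Laplacian edge detector really is just one tiny convolution kernel. Here is a minimal NumPy sketch of the idea, offered as an illustration rather than the original project code:

```python
# Minimal sketch: the discrete Laplacian as a 3x3 convolution kernel.
# An illustration of the point above, not the original school-project code.
import numpy as np

LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]])

def convolve2d(image, kernel):
    """Naive 2D convolution (the kernel is symmetric, so flipping it does not matter)."""
    kh, kw = kernel.shape
    padded = np.pad(image, (kh // 2, kw // 2), mode="edge")
    out = np.zeros_like(image, dtype=float)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            out[y, x] = np.sum(padded[y:y + kh, x:x + kw] * kernel)
    return out

# A bright block on a dark background: the Laplacian responds along the
# block boundary and is zero across its flat interior.
img = np.zeros((5, 7))
img[:, 2:5] = 1.0
print(np.abs(convolve2d(img, LAPLACIAN)))
```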
2014 until 2020 was the era of deep learning. Deep Learning. The attention-reinforcement feed came from this. Superhuman image recognition. Useful translations. A whole wave of “this was science fiction 3 years ago” breakthroughs.
This era also birthed database and deep learning companies focused on analysis and insights. Big models, big data, big money. Setec Astronomy stuff. Ranked feeds and personalized reinforcement. Ontologies became a key defensible moat of this era: smart people, deeply understanding a domain or challenge, thinking about how to align data connections, and then turning deep learning loose on it. It was — and has been — pretty incredible.
But in an agentic, LLM world, “has been” is the operative word.
Because LLMs don’t want or need your ontologies. Agents are slowed down by your attempts to fit data into formats you think will be best for them. Your optimizations are just hiding data that might be the critical connection or inspiration. Everything about database thinking is just profoundly out of date and is about to be like talking about punch cards or development without revision control.
This transition is a lot like the previous AI transition, where key leaders in the prior tech have a challenging time adapting to the reality of the new world.
Focusing on hallucinations is a tell. Reliance on ontologies is another. Ontologies are an attempt to force the world into a rigid schema — true mastery requires analysis and synthesis, the ability to build a much deeper understanding of the challenge at hand.
Because, like Eric Clapton said: it’s in the way that you use it.
The Learning Brain
Like school, what you get out of AI is largely dependent on what you put into it. Unlike school, the floor with AI is pretty high. Rather than coasting to a C or D, unguided AI can generate some really credible slop, probably worth a solid B on a curve. This unguided slop leads directly to Giles’ observation.
But that’s not the only way to use it.
Take my John Boyd site, for example. This site literally couldn’t exist without AI. I’ve dug through plenty of the other John Boyd sites out there, read multiple presentations, transcriptions, and books. None of them bring Boyd’s voice alive or bring the full path of his thinking to the forefront the way a weekend with frontier models and a few million tokens made possible.
Building this site has greatly increased my knowledge and mastery of Boyd’s work. And the content is by no means slop. Multiple people have pinged me to ask “how did you get the AI to sound like Boyd?”
As I explained on the site, it’s through careful use of context and prompting:
Matching Boyd’s spoken words to the slide he was pointing at worked well but imperfectly. The transcripts are annotated with inline [slide N] markers at every point Boyd advances, and those markers drive the Source panel you get from the button at the top of each section. Most are right. Some are off by a slide — Boyd sometimes talks past a slide he’s already advanced, or advances silently and circles back. I fixed the egregious ones by hand.
Every briefing lives as two files: the raw transcript (never edited) and a Feynman-style reading draft (edited freely). The Source button always shows the raw transcript, so you can cross-check anywhere you suspect the edit went too far. Keeping them separate — rather than editing the transcript in place — was essential. It meant I could be aggressive about readability in the draft without losing the ability to audit, and it meant the models had a stable ground truth to re-check themselves against on every pass.
The pipeline is deliberately a series of small, legible steps — transcribe, segment, edit, align slides, render — rather than one giant “turn this PDF into a website” prompt. Every seam between steps is a place a human can inspect the output and correct it. When something looked wrong on the page, I could almost always trace it to a specific step and fix it there, instead of re-running the whole thing and hoping.
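To give a feel for how small and legible those steps are, here is a minimal sketch of one seam: splitting a raw transcript on its inline [slide N] markers so each chunk can drive a Source panel. The names and structure are illustrative, not the site’s actual code.

```python
# Illustrative sketch of one pipeline step: segmenting a raw transcript
# by its inline [slide N] markers. Not the actual site code.
import re
from dataclasses import dataclass

@dataclass
class Segment:
    slide: int  # slide Boyd had advanced to when he said this
    text: str   # the raw spoken words for that stretch

def segment_by_slide(raw_transcript: str) -> list[Segment]:
    """Split on [slide N] markers; any text before the first marker is slide 0."""
    pieces = re.split(r"\[slide (\d+)\]", raw_transcript)
    segments = []
    # re.split with one capture group alternates: text, slide number, text, ...
    if pieces[0].strip():
        segments.append(Segment(slide=0, text=pieces[0].strip()))
    for i in range(1, len(pieces) - 1, 2):
        body = pieces[i + 1].strip()
        if body:
            segments.append(Segment(slide=int(pieces[i]), text=body))
    return segments

if __name__ == "__main__":
    raw = "Let me set the stage. [slide 2] The key is orientation. [slide 3] Now, tempo."
    for seg in segment_by_slide(raw):
        print(seg.slide, "->", seg.text)
```

Every step in the real pipeline has roughly this shape: a plain function with an inspectable input and output, so when the page looks wrong you can find the step that broke and fix it there.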
Compare this to what Klein was concerned about — attempting to make us legible to the AI. Instead, the path to unlocking Boyd’s voice was to make what the AI was discovering legible to me!
AI alone made this possible, but not merely through raw token consumption. It was equally a powerful path to a much deeper understanding of the topic at hand. One with deep roots in education theory.
Situated Learning
Thanks to Jean Lave and Etienne Wenger, we know that situated learning is among the most powerful techniques for building expertise and mastery. From Wikipedia:
Situated learning means to have a thought and action which is used at the right time and place. In this approach, the content is learned through doing activities. It is dilemma-driven, it challenges the intellectual and psychomotor skills of a learner. Situated learning contributes to bringing about the relationship between classroom situations and real-life situations outside the classroom.
Dilemma-driven. What a great turn of phrase. I had a dilemma — how can I understand how Boyd’s ideas around the OODA loop formed and progressed over time? — and AI gave me a path to dive in, to explore, to learn. Not only did it support my exploration, it also supported creating the tools to advance that exploration and turn it into a sharable site.
Of course, Lave and Wenger weren’t the first to explore this. A great, earlier take from Lev Vygotsky framed it as the “zone of proximal development”, or
the space between what a learner is capable of doing unsupported and what the learner cannot do even with support
How much larger does that space become with AI there to help?
Or, taken even further, how many activities could shift into legitimate peripheral participation, another concept from Lave and Wenger’s seminal book, which focuses on the community practice of learning? Again, from Wikipedia:
According to LPP, newcomers become members of a community initially by participating in simple and low-risk tasks that are nonetheless productive and necessary and further the goals of the community. Through peripheral activities, novices become acquainted with the tasks, vocabulary, and organizing principles of the community’s practitioners.
Obviously, collaborating with an AI is not a community, but in many settings it could act as a pretty decent simulation. How many activities could AI turn into situated learning, enable proximal development, and help a curious learner advance their level of expertise far beyond what was possible before?
This is more than just offloading the simple stuff. John Koshy wrote an essay recently about AI elevating our thinking, not replacing it:
There is now a very real temptation to hand a model a problem, receive a plausible answer, and then repeat that answer as if it reflects your own understanding. That is close to plagiarism, but in some ways worse. At least when a student copies from another person, there is still a real human source behind the answer. Here, people can present machine-produced reasoning they do not understand, cannot defend, and could not reproduce on their own.
That is intellectual dependency being labeled as leverage.
And he offered a counterpoint:
The best engineers will absolutely use A.I. more, not less. But they will use it with a very different posture.
They will let A.I. draft boilerplate, summarize docs, generate test scaffolding, propose refactorings, surface possible failure modes, accelerate investigation, and compress routine work. They will happily offload the mechanical parts of the job. But they will also:
- ask sharper questions.
- define the real problem instead of merely responding to the visible one.
- optimize for clarity and brevity (as before), instead of a lot of polished language that says little of substance.
- generate new, high-value knowledge - instead of simply rehashing / remixing existing knowledge in the system.
Then they will take the reclaimed time and invest it where it matters most.
To me, even this thinking falls short and in some ways returns to Patel’s original complaint. Sure, AI can replace drudgery. Sure, some things do look like databases. But the real opportunity is to partner with AI to create something new, to build on your own expertise in ways that go farther than you could have before.
We’re going to see a wave of advancements that come from this style of collaboration, where brilliant people use AI to go even further.
But potentially even more impactful will be the millions — or billions — of people who realize that, if they are serious about learning something, AI gives them a path and tools that were previously impossible.
Footnotes
1. Of course, this is a perfectly fine choice — as I noted in the Manifesto, principle 5, write code when it brings you joy. But is this really the way? ↩
2. The internet failing to meet our hopes drives part of this, too; see attention. ↩