
Issue 136 · April 6, 2026
The systems are getting faster than the stories we tell about them.

The dream of autonomous agents is remarkably appealing: with enough instructions and integrations, you can take your hands off the wheel and let intelligences more available than you drive your work to delivery. Codify every step, anticipate every interaction, give it enough context, plug it into a sufficiently capable model – and maybe the whole will exceed the sum of its parts. You'll experience emergence: a spark of life dedicated to solving your particular problems.
Who doesn't want a tireless operator anticipating your every need, keeping you on top of what matters?
My own experience – having spent an extraordinary amount of time configuring an elaborate stack of agents and integrations – is still pretty mixed.
Running Envisioning, which is largely a software company, means most of my problems are "software-shaped": their solutions can usually be accelerated by better processes. I've invested significant time bringing our private and public data sources into alignment so that agents can accurately interpret what's happening in the real world. Without that foundation, they wouldn't be half as useful.
Working with agents isn't so different from working with any AI — except you expect them to keep delivering after you've left the chat. How much oversight you require and how much autonomy you grant depend on what you expect from them. I've landed on an approach where half a dozen agents, each responsible for a recurring area of my work and life, run in parallel – with my oversight and continuous feedback keeping them useful.
The principle of centaurs is always present: rather than replacing my work, I look for ways to have AI augment it. This newsletter is one of those experiments. An agent surfaces signals from across the feeds, I shape the story. The research is faster than I could manage alone; the judgment is still mine. That division isn't always clean, which is maybe the most honest thing I can say about working with AI right now – it lets you do things you don't quite understand, and you learn to make peace with that.
MZ

Simon Willison on Lenny's Podcast: the compression isn't about faster models; it's about deployment velocity. His point about the build pipeline shrinking from years to weeks is the thesis that should anchor every AI strategy in 2026.
Nate Jones on why Claude Mythos — the first model trained on new Nvidia chips — represents a step change, not an increment. The security researcher angle (finding zero-day vulnerabilities in Ghost) is the concrete detail that makes this real and urgent.
Anthropic's own research team on finding functional emotion patterns in Claude's neural network — desperation neurons that actually drive behavior, not just simulate it. Required context for anyone building on AI systems.
Head of growth Amole Evasari on growing from $1B to $19B ARR in 14 months. The detail about 60-70% of old growth playbooks being useless is the most candid admission any AI company leader has made publicly this year.

Follow us for weekly foresight in your inbox.