Envisioning is an emerging technology research institute and advisory.


Field Notes from a Centaur

A framework in seven parts documenting how I actually work with AI — not the tools themselves, but the mechanics of collaborating with different intelligences. These are field notes: records of actual interactions, co-created with AI.

Introduction

This newsletter began as an attempt to make sense of our transition toward AGI. I still believe generality is next for AI, and that most of us will experience it in our lifetimes. Making sense of that transition involves learning, testing, and building things with AI.

Over time, I've realized the work of writing weekly might become something else: a slow, structured attempt to document how I use AI. Not the tools themselves, but the mechanics of collaborating with different intelligences — how reasoning changes when shared, and what that reveals about our own thought.

These observations are starting to form a sort of “field notes from a centaur” — a collection of ways that AI is working itself into my life. Each chapter explores a different dimension of that collaboration, starting with thinking, because everything else begins here: before creating, before deciding, before making meaning.

Each of these field notes is co-written with the same system that observes how I use it. It's a kind of mirrored cognition: half human, half machine, thinking together in public.

1. Thinking

Thinking with AI can be about outsourcing cognition, or about extending it. It's a way of tracing how ideas form, fracture, and return clearer. By writing what I believe and watching it reflected back, I see my thinking as a living system: open, self-correcting, and occasionally surprising.

How I think with AI:

  • Concept refinement: When a thought is still forming, I describe it roughly and let AI question, restate, and reframe. The back-and-forth peels away noise until the core idea stands on its own.
  • Strategic thinking: Whenever I have a complex question to answer, I give AI as much contextual information as possible, outline my logic, and ask for counterpoints or blind spots. The friction of disagreement sharpens judgment and reduces assumptions.
  • Philosophical correspondence: I often interrogate AI to better understand the open terrain of abstract questions — AI helps me map worldviews, contrast ideas, and debate paradoxes without forcing them into conclusions.
  • Comparative analysis: By rephrasing a single decision through multiple perspectives, I can see how framing itself shapes outcome and bias. Sometimes a single added word, like “assume” or “doubt,” can tilt the whole conclusion.
  • Temporal thinking: I feed observations about present signals and noted patterns into AI to imagine how they might evolve. The process turns foresight into a discipline of language, using words to model change.

2. Creating

If thinking refines ideas, creating brings them into form.

I've learned that AI excels at one thing: reflecting intent. When I describe what I want to make, it gives me back exactly what I said, not what I meant. That gap between thought and expression is where most of my work happens. The model's literalness forces clarity.

How I create with AI:

  • Writing refinement: Most of my writing starts as rough, fast drafts — notes dumped into ChatGPT. Then I ask the model to highlight weak transitions, redundant phrasing, or broken logic. I don't let it rewrite; I use its feedback to rewrite myself. Over time this became a rhythm: I write → it critiques → I tighten.
  • Idea scaffolding: When I'm developing frameworks or tools, I use GPT to simulate reasoning. I'll outline the structure, then ask the model to find gaps, stress-test assumptions, or suggest what's missing.
  • Prompt engineering as writing: For image generation or structured outputs, I treat prompts as tiny pieces of creative writing. I describe intent, tone, atmosphere, and constraints before I mention style or detail.
  • Constraint as method: I often tell GPT to respond under limits: “explain this in 80 words,” “argue against your previous point,” “reframe this for a newsletter intro.” These constraints sharpen the signal. They make thinking visible.
  • Pattern recognition: When working across projects, I feed GPT fragments of old notes or past outputs to see what patterns emerge. It's surprisingly good at showing the shape of my own thinking.

Creation becomes dialogue when you treat the system as a collaborator in precision. AI doesn't imagine for you, but it holds you accountable to what you're trying to say.
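The "constraint as method" habit above can be made concrete as code. The following is a minimal sketch only — the function and constant names are my own illustration, not part of any tool described in this essay — showing how the essay's example constraints become reusable prompt prefixes:

```python
# A minimal sketch of "constraint as method": each constraint is a
# reusable instruction that gets prefixed to whatever text you are
# working on. Names here are hypothetical.

def constrain(text: str, constraint: str) -> str:
    """Prefix source text with an explicit constraint instruction."""
    return f"{constraint}\n\n---\n{text}"

# The three constraints quoted in the essay, as named templates.
WORD_LIMIT = "Explain this in 80 words."
COUNTERPOINT = "Argue against your previous point."
NEWSLETTER = "Reframe this for a newsletter intro."

prompt = constrain("AI reflects intent, not meaning.", WORD_LIMIT)
```

Keeping constraints as named templates makes the sharpening deliberate: you choose a lens before you ask, rather than improvising one mid-conversation.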

3. Building

If creating turns ideas into form, building turns form into function.

The gap between a prototype and a working system is filled with decisions: how data moves, where logic lives, what breaks and why. Building with AI means treating architecture as conversation: describing intent until structure emerges.

What surprised me most about building with models is how much happens before code. When I explain what I want a system to do, the model shows me what I haven't decided yet: the missing schema, the unclear dependency, the workflow I assumed was simple. This literalness is the leverage.

How I build with AI:

  • Product design reasoning: Before touching an interface, I describe how a feature should behave — user intent, friction points, edge cases. AI helps expose assumptions and clarify purpose before design locks in decisions.
  • Interface sketching: I prototype flows entirely in language: what each screen does, what happens next, where feedback appears. Testing logic through conversation catches problems that wireframes miss.
  • Architecture discussion: When deciding what lives where — server, client, local inference — I talk through the tradeoffs with AI. It doesn't choose for me, but it makes structure tangible enough to evaluate.
  • Workflow automation design: I map full sequences before automating: triggers, data sources, transformations, outputs. The exchange turns scattered logic into a blueprint I can implement in steps.
  • Error explanation: Instead of jumping to fixes, I ask AI to explain what failed and why. This builds debugging intuition over time.

Building becomes deliberate when you design through explanation. AI doesn't build for us — it builds with us, one augmented thought at a time.
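The "map full sequences before automating" step can be sketched as a data structure. This is a hypothetical illustration (the class and field names are mine, not from any system mentioned here) of turning scattered logic into a blueprint you can implement in steps:

```python
from dataclasses import dataclass

@dataclass
class Workflow:
    """A workflow mapped before automation: trigger, data sources,
    transformations, and output, in the order the essay lists them."""
    trigger: str
    sources: list[str]
    transformations: list[str]
    output: str

    def blueprint(self) -> str:
        # Flatten the sequence into an ordered, reviewable plan.
        steps = [f"trigger: {self.trigger}"]
        steps += [f"source: {s}" for s in self.sources]
        steps += [f"transform: {t}" for t in self.transformations]
        steps.append(f"output: {self.output}")
        return "\n".join(steps)

wf = Workflow(
    trigger="new interview transcript arrives",
    sources=["transcript file", "project notes"],
    transformations=["extract tensions", "summarize decisions"],
    output="briefing document",
)
```

Writing the blueprint down first is the point: each line is a decision you can discuss with the model before any automation exists.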

4. Deciding

If thinking refines ideas, creating brings them into form, and building turns form into function, then deciding is what makes all of it matter.

I've learned that AI changes how decisions become visible. Most choices happen half-formed, shaped by instinct, pressure, or habit. When I externalize that reasoning through conversation with AI, the hidden structure reveals itself — what I'm optimizing for, what I'm avoiding, and where emotion is disguising itself as logic. This clarity doesn't make decisions easier, but it makes them more honest.

How I decide with AI:

  • Talking through decisions: When facing unclear choices, I describe the situation, constraints, and stakes. AI helps map the decision space by restating options, surfacing assumptions, and clarifying what I'm actually optimizing for.
  • Proposal development: Before sharing recommendations, I test them against AI by outlining options, assumptions, and risks. It helps me frame choices in ways that invite informed agreement rather than defensiveness.
  • Market intelligence: When I enter unfamiliar domains, I build fluency fast by asking AI to explain landscapes, competitors, and trends. I intentionally diversify across providers to reduce single-model bias.
  • Decision debriefing: After major calls, I talk through what was decided, why, and what signals would suggest I was wrong. AI helps capture rationale while it's fresh.
  • Financial planning: I model scenarios by describing cash flows, risk tolerances, and time horizons. AI helps align decisions with goals rather than reacting to urgency.

Deciding becomes deliberate when you treat judgment as something to externalize, examine, and refine.

5. Organizing

If thinking refines ideas, creating brings them into form, building turns form into function, and deciding chooses what deserves our attention, then organizing is the discipline that keeps the whole system coherent. It is how fragments become memory, how context survives long projects, and how the work stays searchable by a future self that no longer remembers making it.

Organizing is the connective tissue. It is less glamorous than launching something new, but it is the part that prevents drift.

How I organize with AI:

  • From fragments to form: When my notes start to feel like static, I dump a corpus into the model and ask for competing taxonomies. Renaming files after it surfaces patterns makes retrieval measurable instead of hopeful.
  • Compressed research ramps: Before diving into a fresh domain, I have AI compress the first ten articles into a briefing with terminology, debates, and blind spots.
  • Conversation synthesis: After interviews or workshops, I feed transcripts through prompts that surface tensions, contradictions, and outliers. Organizing isn't just summarizing — it's deciding which signals deserve another look.
  • Draft hierarchy detection: Unordered notes become outlines when I ask for the “natural hierarchy” hiding inside them. The model suggests section headers, sequences, and missing bridges.
  • Cross-source threading: For reading notes, I focus the model on overlaps: recurring metaphors, shared anxieties, or diverging forecasts.
  • Naming as diagnosis: Describing a project and asking AI for naming directions reveals intent. If the suggested names all feel off, it is usually because the strategy is vague, not because the AI is wrong.
  • Learning arcs and questionnaires: When designing curricula or forms, I use AI to order concepts the way people actually absorb them and to phrase questions that gather meaning instead of noise.
  • Data hygiene plans: Before touching a messy spreadsheet, I talk through structure, missing fields, and duplication with the model.
  • Continuity checkpoints: On long projects, I periodically ask AI to reconstruct the narrative: what we decided, why it mattered, what signals would trigger a reconsideration.

Organization is a verb; the feedback loop is the point.
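The "competing taxonomies" move above can be expressed as a small prompt builder. A minimal sketch, with hypothetical names — this is not the author's actual tooling, just one way to make the request repeatable:

```python
def taxonomy_prompt(notes: list[str], n: int = 3) -> str:
    """Ask a model for several competing ways to categorize a corpus,
    so the taxonomies can be compared instead of accepted blindly."""
    corpus = "\n".join(f"- {note}" for note in notes)
    return (
        f"Propose {n} competing taxonomies for the notes below. "
        "Name each category and list which notes belong to it.\n\n"
        + corpus
    )

prompt = taxonomy_prompt(
    ["signal scanning workflow", "AGI timelines", "Dutch practice log"]
)
```

Asking for competing taxonomies, rather than one, is the feedback loop in miniature: the disagreements between schemes show where your own categories are still vague.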

6. Living

Living is about where all of this touches the ground. This is the most personal dimension: how intelligence weaves into daily routines, health, and reflection.

I've found that living with AI isn't about optimizing every second. It's about having a quiet, infinite context window for the parts of life that usually get messy or forgotten. It reflects habits, mirrors moods, and offers a kind of presence that helps me notice more, not just do more.

How I live with AI:

  • Language fluency: I use AI to practice Dutch conversation. Unlike a rigid course, it adapts to my pace, corrects my grammar gently, and lets the topics follow my genuine curiosity.
  • Health context, not diagnosis: I share sleep or exercise data to spot trends I might miss. The goal isn't a quick fix but understanding the science.
  • Medical translation: When I encounter technical medical terms or symptoms, I ask AI to translate them into plain English.
  • Nutrition clarification: Before changing habits, I use AI to understand how foods interact with inflammation or digestion.
  • Travel by mood: Instead of generic “top 10” lists, I describe the feeling of a trip — quiet, creative, local — and ask for an itinerary that matches that intention.
  • Empathy rehearsal: For difficult conversations, I test phrasing and tone with AI first.
  • Routine tuning: When days feel out of sync, I describe the friction to AI. It often points out where context-switching is draining energy.
  • Reflective journaling: I ask AI for prompts that probe deeper than my usual surface thoughts.
  • Symbolic mirroring: I sometimes describe dreams or vague feelings to see them reflected back through psychological or mythological lenses.

Living as a centaur — half human, half augmented — is ultimately not about acceleration. It is about balance. It is using intelligence to stabilize the noise of modern life so you can hear yourself think.

7. Connecting

Connecting is how the fragments become a system. This is the seventh and final chapter. It explores the dimension where everything converges: seeing relationships, tracing patterns, and making sense of how scattered pieces fit together.

I've learned that connecting with AI isn't necessarily networking or communication in the traditional sense. It's about using intelligence as a lens to see inside systems, relationships, and consequences that remain invisible when you're stuck inside them.

How I connect with AI:

  • Personal reflection and clarification: When I'm uncertain about a feeling or decision, I write a short reflection and let AI restate what it sees. Its summaries reveal patterns in tone and reasoning that I might miss.
  • Narrative synthesis: When pulling together projects or threads of writing, I ask AI to identify overarching motifs — continuity, transformation, emergence. It reveals the connective tissue between scattered ideas.
  • Pattern recognition across fields: I use AI to trace structural similarities between domains — how feedback loops appear in both ecology and economics, or how adaptation works in culture and code.
  • Concept cross-referencing: I combine ideas from separate disciplines and let AI explore their shared language. It reveals analogies that make complex systems more intelligible.
  • Foresight experimentation: I use AI to imagine how today's early signals might unfold into different futures. It extends the logic of trends, exploring what happens when they intersect or accelerate.

Connecting becomes possible when you treat AI as a companion for seeing relationships, not just retrieving information. Seeing patterns is the final gesture of the centaur mind — not prediction, but perception sharpened through companionship.

www.envisioning.com/code
