Envisioning is an emerging technology research institute and advisory.


Prompt Engineering

Crafting input text strategically to elicit desired outputs from AI language models.

Year: 2021 · Generality: 694

Prompt engineering is the practice of deliberately designing and refining the text inputs fed to large language models (LLMs) in order to reliably produce accurate, relevant, or stylistically appropriate outputs. Rather than modifying a model's weights or architecture, prompt engineers work entirely within the input space — adjusting wording, structure, examples, and context to steer model behavior. This makes it a uniquely accessible form of model control, requiring no training infrastructure or deep ML expertise, yet capable of dramatically shifting output quality.

The core techniques range from simple instruction phrasing to more sophisticated patterns. Zero-shot prompting asks a model to perform a task with no examples; few-shot prompting embeds several input-output demonstrations directly in the prompt to prime the model's behavior. Chain-of-thought prompting encourages models to reason step-by-step before producing a final answer, substantially improving performance on arithmetic, logic, and multi-step reasoning tasks. Other strategies include role assignment ("You are an expert physician..."), output format specification, and iterative refinement based on observed failure modes.
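The patterns above are ultimately just structured text. A minimal sketch of few-shot and chain-of-thought prompt construction, using plain string templates (the function names here are illustrative, not any particular library's API):

```python
# Sketch of common prompt patterns as plain string templates.
# No model calls are made; function names are illustrative assumptions.

def few_shot_prompt(task: str, examples: list, query: str) -> str:
    """Embed input-output demonstrations before the actual query (few-shot)."""
    lines = [task, ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Output:")  # trailing cue primes the model to complete the pattern
    return "\n".join(lines)

def chain_of_thought_prompt(question: str) -> str:
    """Ask the model to reason step-by-step before answering (chain-of-thought)."""
    return f"{question}\nLet's think step by step, then state the final answer."

prompt = few_shot_prompt(
    "Classify the sentiment of each review as positive or negative.",
    [("Great product, works perfectly.", "positive"),
     ("Broke after two days.", "negative")],
    "Exceeded my expectations.",
)
```

A zero-shot version of the same task would simply omit the `examples` list; the few-shot demonstrations prime the model to continue the established input-output pattern.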

Prompt engineering matters because LLMs are highly sensitive to surface-level input variations — a subtly reworded question can yield a completely different response. This sensitivity means that well-crafted prompts can unlock capabilities that appear absent with naive inputs, while poorly designed prompts can cause capable models to hallucinate, refuse, or produce off-target content. In production systems, prompt design has become a core engineering discipline, with teams maintaining versioned prompt libraries, running A/B evaluations, and building automated pipelines for prompt optimization.
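A versioned prompt library of the kind such teams maintain can be as simple as a registry of named templates. The structure and names below are an illustrative assumption, not any team's actual tooling:

```python
# Minimal sketch of a versioned prompt library; structure is an assumption.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PromptVersion:
    version: str
    template: str  # uses str.format-style {placeholders}

@dataclass
class PromptLibrary:
    prompts: dict = field(default_factory=dict)  # name -> list[PromptVersion]

    def register(self, name: str, version: str, template: str) -> None:
        self.prompts.setdefault(name, []).append(PromptVersion(version, template))

    def render(self, name: str, version: Optional[str] = None, **kwargs) -> str:
        """Render a named prompt; defaults to the latest registered version."""
        versions = self.prompts[name]
        pv = versions[-1] if version is None else next(
            v for v in versions if v.version == version)
        return pv.template.format(**kwargs)

lib = PromptLibrary()
lib.register("summarize", "v1", "Summarize: {text}")
lib.register("summarize", "v2",
             "You are a concise editor. Summarize in one sentence: {text}")
```

Pinning callers to an explicit version (`lib.render("summarize", "v1", ...)`) is what makes A/B evaluation between prompt revisions possible.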

As models grow more capable and instruction-tuned, some prompt engineering patterns are becoming less necessary — models respond more reliably to natural language without elaborate scaffolding. Nevertheless, the field continues to evolve, with techniques like retrieval-augmented prompting, structured output constraints, and agentic prompt chaining pushing the boundaries of what can be achieved through input design alone. Prompt engineering sits at the intersection of linguistics, cognitive science, and systems design, and remains one of the most practical levers for improving LLM-based applications.
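Retrieval-augmented prompting, for instance, amounts to assembling retrieved passages into the prompt as grounding context. In this sketch a toy keyword-overlap retriever stands in for a real vector store, and all names are illustrative assumptions:

```python
# Illustrative sketch of retrieval-augmented prompting: retrieved passages
# are concatenated into the prompt as context. The keyword-overlap retriever
# is a stand-in for a real vector store.

def retrieve(query: str, corpus: list, k: int = 2) -> list:
    """Rank passages by naive keyword overlap with the query."""
    words = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda p: len(words & set(p.lower().split())),
                    reverse=True)
    return scored[:k]

def rag_prompt(query: str, corpus: list) -> str:
    """Build a prompt that grounds the answer in retrieved context."""
    context = "\n".join(f"- {p}" for p in retrieve(query, corpus))
    return (f"Answer using only the context below.\n"
            f"Context:\n{context}\n\n"
            f"Question: {query}\nAnswer:")

corpus = [
    "Prompt engineering steers model behavior through input design.",
    "Gradient descent updates model weights during training.",
]
prompt = rag_prompt("How does prompt engineering steer model behavior?", corpus)
```

The constraint sentence ("Answer using only the context below") is itself a prompt-engineering choice, nudging the model away from hallucinating beyond the retrieved passages.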

Related

Prompt

A text input given to a language model to elicit a desired response.

Generality: 796
Super Prompting

Crafting highly specific input prompts to steer AI models toward desired outputs.

Generality: 450
System Prompt Learning

Automatically optimizing persistent model instructions to steer behavior without full retraining.

Generality: 520
Meta Prompt

A prompting strategy that structures how AI models reason and orchestrate complex tasks.

Generality: 381
Underprompting

Providing insufficient context or instruction in a prompt, degrading AI output quality.

Generality: 293
System Prompt

Hidden instructions given to a language model that shape its behavior and persona.

Generality: 620