Crafting input text strategically to elicit desired outputs from AI language models.
Prompt engineering is the practice of deliberately designing and refining the text inputs fed to large language models (LLMs) to reliably produce accurate, relevant, or stylistically appropriate outputs. Rather than modifying a model's weights or architecture, prompt engineers work entirely within the input space, adjusting wording, structure, examples, and context to steer model behavior. This makes it a uniquely accessible form of model control: it requires no training infrastructure or deep ML expertise, yet can dramatically shift output quality.
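For example, a naive prompt and an engineered prompt for the same task might look like the sketch below. This is a minimal illustration; the `complete` function and the prompt text are hypothetical placeholders, not any particular provider's API:

```python
# Prompt-only control: the model and its weights stay fixed; only the
# input text changes. `complete` is a placeholder for any LLM completion
# call (e.g., an HTTP client for a hosted model).

def complete(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its text output."""
    raise NotImplementedError("wire this up to a model provider")

naive_prompt = "Summarize this contract."

engineered_prompt = (
    "You are a paralegal preparing notes for an attorney.\n"
    "Summarize the contract below in exactly three bullet points,\n"
    "each under 20 words, focusing on obligations and deadlines.\n\n"
    "Contract:\n{contract_text}"
)

# Same model, same weights; only the input differs:
# complete(engineered_prompt.format(contract_text=contract))
```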
The core techniques range from simple instruction phrasing to more sophisticated patterns. Zero-shot prompting asks a model to perform a task with no examples; few-shot prompting embeds several input-output demonstrations directly in the prompt to prime the model's behavior. Chain-of-thought prompting encourages models to reason step by step before producing a final answer, substantially improving performance on arithmetic, logic, and multi-step reasoning tasks. Other strategies include role assignment ("You are an expert physician..."), output format specification, and iterative refinement based on observed failure modes.
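These patterns are easiest to see as concrete templates. The sketch below is illustrative; the sentiment task, example reviews, and arithmetic problem are invented for demonstration and not tied to any specific model:

```python
# Three common prompt patterns, expressed as plain string templates.

# Zero-shot: state the task with no examples.
zero_shot = (
    "Classify the sentiment of this review as positive or negative.\n"
    "Review: {review}\nSentiment:"
)

# Few-shot: embed input-output demonstrations before the real input.
few_shot = (
    "Classify the sentiment of each review as positive or negative.\n\n"
    "Review: The battery died within a week.\nSentiment: negative\n\n"
    "Review: Crisp screen and fast shipping, very happy.\nSentiment: positive\n\n"
    "Review: {review}\nSentiment:"
)

# Chain-of-thought: ask for intermediate reasoning before the answer.
chain_of_thought = (
    "A store sells pens in packs of 12 for $3. How much do 60 pens cost?\n"
    "Think step by step, then give the final answer on its own line,\n"
    "prefixed with 'Answer:'."
)
```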
Prompt engineering matters because LLMs are highly sensitive to surface-level input variations: a subtly reworded question can yield a completely different response. This sensitivity means that well-crafted prompts can unlock capabilities that appear absent with naive inputs, while poorly designed prompts can cause capable models to hallucinate, refuse, or produce off-target content. In production systems, prompt design has become a core engineering discipline, with teams maintaining versioned prompt libraries, running A/B evaluations, and building automated pipelines for prompt optimization.
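An A/B evaluation of two prompt variants can be a small loop over a labeled dataset. A minimal sketch, reusing the placeholder `complete` call from the earlier example and assuming a hypothetical labeled set:

```python
# Compare two prompt variants by exact-match accuracy on labeled examples.
from typing import Callable

def evaluate(prompt_template: str,
             dataset: list[tuple[str, str]],
             complete: Callable[[str], str]) -> float:
    """Fraction of examples where the model's output matches the expected label."""
    hits = 0
    for text, expected in dataset:
        # Templates are assumed to contain an {input} slot.
        output = complete(prompt_template.format(input=text)).strip().lower()
        hits += output == expected
    return hits / len(dataset)

# dataset = [("The battery died within a week.", "negative"), ...]
# score_a = evaluate(PROMPT_V1, dataset, complete)
# score_b = evaluate(PROMPT_V2, dataset, complete)
# Ship the higher-scoring variant; keep both under version control.
```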
As models become more capable and more thoroughly instruction-tuned, some prompt engineering patterns are becoming less necessary: models respond more reliably to plain natural-language instructions without elaborate scaffolding. Nevertheless, the field continues to evolve, with techniques like retrieval-augmented prompting, structured output constraints, and agentic prompt chaining pushing the boundaries of what can be achieved through input design alone. Prompt engineering sits at the intersection of linguistics, cognitive science, and systems design, and remains one of the most practical levers for improving LLM-based applications.
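Two of these newer patterns fit in a few lines. The sketch below, again using the hypothetical `complete` placeholder, combines a structured output constraint (request JSON, then validate it) with a simple two-step prompt chain; all prompts and function names are illustrative:

```python
import json
from typing import Callable

def extract_entities(document: str, complete: Callable[[str], str]) -> dict:
    # Structured output: constrain the response format, then validate it.
    prompt = (
        "List the people and organizations mentioned in the text below.\n"
        'Respond with only JSON of the form {"people": [...], "orgs": [...]}.\n\n'
        + document
    )
    return json.loads(complete(prompt))  # raises if the constraint was violated

def summarize_entities(document: str, complete: Callable[[str], str]) -> str:
    # Prompt chaining: step 2 consumes step 1's structured output.
    entities = extract_entities(document, complete)
    prompt = (
        f"Given these entities: {json.dumps(entities)}\n"
        "Write a two-sentence summary of how they relate in the text below.\n\n"
        + document
    )
    return complete(prompt)
```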