Skip to main content

Envisioning is an emerging technology research institute and advisory.

LinkedInInstagramGitHub

2011 — 2026

research
  • Reports
  • Newsletter
  • Methodology
  • Origins
  • Vocab
services
  • Research Sessions
  • Signals Workspace
  • Bespoke Projects
  • Use Cases
  • Signal Scanfree
  • Readinessfree
impact
  • ANBIMAFuture of Brazilian Capital Markets
  • IEEECharting the Energy Transition
  • Horizon 2045Future of Human and Planetary Security
  • WKOTechnology Scanning for Austria
audiences
  • Innovation
  • Strategy
  • Consultants
  • Foresight
  • Associations
  • Governments
resources
  • Pricing
  • Partners
  • How We Work
  • Data Visualization
  • Multi-Model Method
  • FAQ
  • Security & Privacy
about
  • Manifesto
  • Community
  • Events
  • Support
  • Contact
  • Login
ResearchServicesPricingPartnersAbout
ResearchServicesPricingPartnersAbout
  1. Home
  2. Vocab
  3. System Prompt

System Prompt

Hidden instructions given to a language model that shape its behavior and persona.

Year: 2022Generality: 620
Back to Vocab

A system prompt is a special input provided to a large language model (LLM) before any user interaction begins, typically invisible to the end user but processed by the model as authoritative context. Unlike user messages, which represent the human side of a conversation, the system prompt is usually authored by the developer or operator deploying the model. It can specify the model's persona, constrain its behavior, define its scope of knowledge, set a tone, or provide background information the model should treat as ground truth throughout the session.

In practice, system prompts work because transformer-based chat models are trained to treat different parts of a conversation — system, user, and assistant turns — with distinct levels of authority. The system prompt occupies a privileged position in this structure, allowing operators to steer model behavior without modifying the underlying weights. Instructions like "You are a helpful customer service agent for Acme Corp. Do not discuss competitors" are typical examples. The model integrates these instructions with each subsequent user query, shaping its responses accordingly.

System prompts became a central tool in the deployment of instruction-tuned models, particularly after OpenAI's ChatGPT and the GPT-4 API introduced explicit system message fields in 2022–2023. They are now a standard mechanism across virtually all major LLM APIs, including those from Anthropic, Google, and Meta. Their importance extends beyond convenience: system prompts are a primary lever for alignment, safety filtering, brand customization, and task specialization in production AI systems.

The design of effective system prompts has itself become a recognized discipline within prompt engineering. Poorly written system prompts can lead to inconsistent behavior, jailbreaks, or model confusion when user inputs conflict with operator instructions. Researchers and practitioners study how models prioritize conflicting instructions, how much context a system prompt can reliably carry, and how adversarial users attempt to override or extract hidden system prompts — making this a topic with both practical and security implications.

Related

Related

Prompt
Prompt

A text input given to a language model to elicit a desired response.

Generality: 796
System Prompt Learning
System Prompt Learning

Automatically optimizing persistent model instructions to steer behavior without full retraining.

Generality: 520
Prompt Engineering
Prompt Engineering

Crafting input text strategically to elicit desired outputs from AI language models.

Generality: 694
Super Prompting
Super Prompting

Crafting highly specific input prompts to steer AI models toward desired outputs.

Generality: 450
Prompt Injection
Prompt Injection

Manipulating AI language models by embedding malicious instructions within input prompts.

Generality: 499
Underprompting
Underprompting

Providing insufficient context or instruction in a prompt, degrading AI output quality.

Generality: 293