In depth
Prompt engineering is the art and science of getting consistent, high-quality output from an LLM. The same task can yield wildly different results depending on how it's phrased. Practitioners iterate on word choice, structure, examples, and format to minimize error rates.
Core techniques include:

- **System prompts:** stable instructions set once per session.
- **Few-shot examples:** show the model 3-5 examples of desired output.
- **Chain-of-thought:** ask the model to think step-by-step before answering.
- **XML / structured formats:** Claude responds well to `<thinking>` and `<answer>` tags.
- **Role-playing:** 'You are a senior SRE...'.
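Several of these techniques compose naturally in one prompt. The sketch below is a minimal, hypothetical illustration (the log-classification task, example data, and tag instructions are invented for this example) of combining a system prompt, few-shot examples, chain-of-thought, and XML-style tags:

```python
# Hypothetical task: classify log lines by severity.
SYSTEM_PROMPT = "You are a senior SRE. Classify log lines by severity."

# Few-shot examples: each pair shows the exact output format we want.
FEW_SHOT = [
    ("disk /dev/sda1 at 97% capacity", "warning"),
    ("connection refused on port 5432, retrying", "error"),
    ("health check passed in 12ms", "info"),
]

def build_prompt(log_line: str) -> str:
    """Assemble a single prompt string: system instructions, few-shot
    examples, the new input, and a chain-of-thought instruction that
    asks for reasoning inside <thinking> and the label inside <answer>."""
    parts = [SYSTEM_PROMPT, ""]
    for text, label in FEW_SHOT:
        parts.append(f"Log: {text}")
        parts.append(f"Severity: {label}")
        parts.append("")
    parts.append(f"Log: {log_line}")
    parts.append(
        "Reason step-by-step inside <thinking> tags, then give only the "
        "severity label inside <answer> tags."
    )
    return "\n".join(parts)

prompt = build_prompt("OOMKilled: container restarted 5 times in 2m")
```

The few-shot pairs double as an implicit format specification: the model infers the `Log:` / `Severity:` structure from the examples rather than from an explicit schema.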
Tool descriptions inside MCP are a form of prompt engineering. A well-written tool description — clear purpose, when to use, argument semantics — dramatically increases the LLM's accuracy at picking the right tool. Bad descriptions cause tools to go unused or be called inappropriately.
Prompt engineering is not static — best practices evolve with model versions. Prompts optimized for GPT-3.5 often underperform on GPT-4o; Claude-specific patterns (XML tags) differ from GPT-specific patterns (markdown headers).