# Prompt Engineering
Prompt engineering is the practice of structuring your input to an AI model so that it produces the output you actually need. It is not about memorizing magic phrases — it is about understanding how models interpret instructions and providing the right context, structure, and constraints to guide their responses.
These techniques work across all major AI platforms (Claude, ChatGPT, Gemini, Copilot) because they address how large language models process language, not platform-specific features.
## Core Principles
Before diving into specific techniques, keep in mind that these principles apply to all prompt engineering:
- Be specific — Vague prompts produce vague outputs. Say exactly what you want.
- Provide context — The model only knows what you tell it. Include relevant background.
- Show, don't just tell — Examples are more powerful than descriptions of what you want.
- Structure your output — Tell the model what format you need (bullets, table, JSON, etc.).
- Constrain the scope — Boundaries improve quality. Set word limits, define the audience, specify what to exclude.
- Iterate — Your first prompt is a draft. Refine based on what comes back.
- Break complex tasks down — One clear instruction per prompt beats a wall of requirements.
- Match the technique to the task — Not every technique suits every situation. Choose based on what you need.
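To make the principles above concrete, here is a minimal sketch contrasting a vague prompt with one that applies them. The product scenario and all prompt text are invented for illustration:

```python
# Illustrative only: two ways to ask for the same thing.
# The product and scenario are invented for this example.

vague_prompt = "Write something about our product launch."

specific_prompt = (
    "Write a 150-word announcement of our product launch "   # be specific; constrain scope
    "for existing customers on our mailing list. "           # define the audience
    "Context: we are releasing v2.0 of our invoicing tool, " # provide context
    "which adds multi-currency support. "
    "Format the output as a subject line followed by two "   # structure the output
    "short paragraphs. "
    "Do not mention pricing."                                # specify what to exclude
)
```

The second prompt leaves far less for the model to guess: audience, length, format, and exclusions are all stated up front.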
## Technique Catalog
### Foundational Techniques
These are the building blocks — techniques you will use daily.
| Technique | What It Does | Best For |
|---|---|---|
| Zero-Shot Prompting | Ask the model to perform a task with no examples | Simple, well-defined tasks |
| Few-Shot Learning | Provide examples so the model learns the pattern | Custom formats, tone matching, classification |
| Chain-of-Thought | Ask the model to reason step by step | Math, logic, analysis, complex decisions |
| Direct Instruction | Give explicit, imperative commands | Any task where clarity matters |
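As a sketch of how zero-shot and few-shot prompting differ in practice, the helpers below assemble each kind of prompt as a plain string. The sentiment-classification task, the example reviews, and the function names are all invented for illustration:

```python
def zero_shot(text: str) -> str:
    # Zero-shot: state the task directly, with no examples.
    return f"Classify the sentiment of this review as positive or negative:\n{text}"

def few_shot(text: str) -> str:
    # Few-shot: show the model the pattern before asking the real question.
    examples = [
        ("The battery lasts all day, love it.", "positive"),
        ("Stopped working after a week.", "negative"),
    ]
    shots = "\n".join(f"Review: {r}\nSentiment: {s}" for r, s in examples)
    return f"{shots}\nReview: {text}\nSentiment:"

prompt = few_shot("Great screen, terrible speakers... but mostly great.")
```

Ending the few-shot prompt with the bare `Sentiment:` label nudges the model to continue the pattern rather than explain itself.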
### Shaping Techniques
These techniques control how the model approaches your task.
| Technique | What It Does | Best For |
|---|---|---|
| Contextual Prompting | Embed background information in the prompt | Domain-specific tasks, personalized output |
| Role Prompting | Assign the model a persona or expertise | Specialized knowledge, audience-appropriate tone |
| Output Formatting | Specify the structure and format of the response | Reports, data extraction, structured content |
| Multi-Turn Conversation | Build on previous exchanges to refine results | Exploration, iterative refinement, complex projects |
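The three single-turn shaping techniques compose naturally into one prompt. The sketch below layers role prompting, contextual prompting, and output formatting; the persona, background text, and function name are invented for illustration:

```python
def shaped_prompt(question: str, background: str) -> str:
    # Role prompting: assign the model a persona and expertise.
    role = "You are a senior data engineer advising a small analytics team."
    # Contextual prompting: embed the background the model cannot know.
    context = f"Background: {background}"
    # Output formatting: specify the structure you need back.
    fmt = "Answer as a markdown table with columns: Option, Pros, Cons."
    return "\n\n".join([role, context, fmt, question])

prompt = shaped_prompt(
    "Should we use Postgres or DuckDB?",
    "We process 2 GB of CSV logs daily on a single server.",
)
```

Each layer narrows the response: the role sets the voice, the context grounds the answer, and the format line makes the output directly usable.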
### Quality Techniques
These techniques improve the reliability and depth of outputs.
| Technique | What It Does | Best For |
|---|---|---|
| Self-Consistency and Reflection | Ask the model to check and critique its own work | High-stakes decisions, error reduction |
| Emotional Prompting | Add motivational or stakes-based language | Tasks where engagement and effort matter |
| Reframing Prompts | Rephrase a question to approach it differently | When initial prompts give poor results |
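Self-consistency and reflection is usually a two-pass pattern: get a draft, then ask the model to critique and correct it. The sketch below builds the two prompts as strings (you would send each to whatever model client you use; the task and helper names are invented for illustration):

```python
def draft_prompt(task: str) -> str:
    # First pass: ask for reasoning before the answer (chain-of-thought).
    return f"{task}\nShow your reasoning step by step before the final answer."

def reflection_prompt(task: str, draft: str) -> str:
    # Second pass: ask the model to check and correct its own draft.
    return (
        f"Task: {task}\n"
        f"Draft answer:\n{draft}\n"
        "Check the draft for factual errors, missing cases, and gaps in "
        "reasoning, then produce a corrected final answer."
    )
```

The reflection pass costs a second call but catches errors the model will often spot when reviewing rather than generating.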
### Specialized Techniques
These techniques solve specific types of problems.
| Technique | What It Does | Best For |
|---|---|---|
| Style Unbundling | Decompose a writing style into separate attributes | Matching a specific voice or tone |
| Summarization and Distillation | Compress or restructure information | Long documents, research synthesis |
| Real-World Constraints | Embed business rules and practical limits into prompts | Feasible plans, budget-aware output |
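Summarization and real-world constraints combine well: state the audience, the hard limits, and what to keep or drop alongside the source text. A minimal sketch, with an invented function name and parameters:

```python
def distill_prompt(document: str, audience: str, word_limit: int) -> str:
    # Summarization with practical constraints baked into the prompt.
    return (
        f"Summarize the document below for {audience} "
        f"in at most {word_limit} words.\n"
        "Keep concrete numbers, names, and dates; drop anecdotes.\n"
        f"---\n{document}"
    )

prompt = distill_prompt("Q3 revenue rose 12% on strong renewals...", "executives", 100)
```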
## Where to Start
New to prompting? Start with Zero-Shot Prompting and Direct Instruction — these two techniques cover most everyday tasks.
Want better results? Add Few-Shot Learning to teach the model your preferred format, then use Chain-of-Thought for anything requiring reasoning.
Working on something complex? Combine techniques — for example, use Role Prompting + Contextual Prompting + Output Formatting to get expert-level, structured responses grounded in your specific domain.