Zero-Shot
Circa 2020

Direct instruction without examples. Ask the model to perform a task relying solely on its training.

Now the default for capable models
Few-Shot
Circa 2020-2021

Provide examples in the prompt to demonstrate the desired pattern or format.

Now used mainly for output format alignment
Chain of Thought
Circa 2022

Guide the model to show intermediate reasoning steps before reaching a conclusion.

Built into reasoning models
Role / Persona
Circa 2021-2022

Assign the model a specific role or expertise to shape tone, depth, and perspective.

Still effective for framing
Strategy Deep Dive

For each strategy: how it evolved, and when to apply it today.

Zero-Shot Prompting

Then (2020-2022)

Often insufficient for complex tasks. Models needed examples to understand what was expected. "Just ask" frequently produced inconsistent results.

Now (2025)

Strong models handle zero-shot well for most tasks. Instruction-tuning and RLHF have made models much better at understanding intent without examples.

The Enduring Principle

Clarity and specificity in your instruction matter more than tricks. The better you articulate what you want, the better the output—regardless of technique.
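
To make the principle concrete, here is a minimal sketch in Python contrasting a vague and a well-specified zero-shot prompt. The task and every string are invented for illustration; print() stands in for sending the prompt to whatever model client you use.

    # Zero-shot: no examples, only an instruction. The gap between weak and
    # strong zero-shot prompting is specificity, not tricks.
    vague = "Tell me about this function."

    specific = (
        "Review the Python function below for correctness and style. "
        "List at most three concrete issues, each with a one-line fix.\n\n"
        "def mean(xs): return sum(xs) / len(xs)"
    )

    # Both are zero-shot; the second states the task, the scope, and the
    # expected output shape.
    print(specific)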

When to Apply Today

Use for: standard tasks with clear instructions
Use for: output you want free of example influence
Skip for: tasks where a specific output format is critical (show an example instead)
Skip for: smaller or older models that still need worked examples

Few-Shot Prompting

Then (2020-2022)

Essential for complex tasks. In-context learning was a breakthrough—providing examples unlocked capabilities models couldn't otherwise demonstrate.

Now (2025)

Primary function is output format alignment—showing the model exactly how you want results structured. Less about unlocking capability, more about shaping presentation.

The Enduring Principle

Showing is often clearer than telling. When you need a specific pattern, format, or style, a concrete example communicates more than description alone.
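
A sketch of few-shot used for format alignment, assuming a review-tagging task invented for illustration. Two worked examples pin down the exact JSON shape; the model is asked to complete the third in kind.

    # Few-shot for format alignment: the examples demonstrate structure, not
    # new capability. Task, reviews, and field names are all illustrative.
    examples = [
        ("Great battery, weak camera",
         '{"sentiment": "mixed", "topics": ["battery", "camera"]}'),
        ("Arrived broken. Refund requested.",
         '{"sentiment": "negative", "topics": ["shipping"]}'),
    ]
    target = "Setup took five minutes and it just works."

    prompt = "Extract sentiment and topics from each review as JSON.\n\n"
    for review, label in examples:
        prompt += f"Review: {review}\nJSON: {label}\n\n"
    prompt += f"Review: {target}\nJSON:"
    print(prompt)

Note that the examples do double duty: they fix the field names, the value vocabulary, and the one-line JSON layout without a word of description.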

When to Apply Today

Use for: precise output formatting (JSON, specific templates)
Use for: matching a specific tone or writing style
Skip for: strong models on standard reasoning tasks, where instructions alone suffice
Skip for: cases where examples might bias the output

Chain of Thought

Then (2022-2023)

Breakthrough for reasoning tasks. "Let's think step by step" dramatically improved math, logic, and complex problem-solving. Required explicit prompting.

Now (2025)

Reasoning-native models (o1, Claude with extended thinking) have CoT built in. Explicit prompting is less necessary, but it remains useful for transparency and verification.

The Enduring Principle

Breaking problems into steps improves accuracy. Whether you prompt for it or the model does it internally, structured reasoning produces better outcomes on complex tasks.
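
When explicit chain of thought is warranted, one common pattern is to ask for visible steps plus a final line in a fixed format, so the reasoning can be inspected and the answer parsed. A minimal sketch; the question and the "Answer:" convention are illustrative, not a standard.

    # Explicit chain of thought: request intermediate steps, then a final
    # line in a fixed format so the answer can be extracted and checked.
    question = "A train leaves at 9:40 and arrives at 11:05. How long is the trip?"
    prompt = (
        question + "\n"
        "Think step by step and show each intermediate calculation. "
        "Then give the result on a final line formatted exactly as 'Answer: <value>'."
    )

    # Illustrative reply, to show how the fixed final line is parsed:
    reply = ("From 9:40 to 10:40 is 60 minutes; 10:40 to 11:05 is 25 more.\n"
             "Answer: 85 minutes")
    answer = reply.rsplit("Answer:", 1)[-1].strip()
    print(answer)  # 85 minutes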

When to Apply Today

Use for: cases where you need to see or verify the reasoning
Use for: smaller models without built-in reasoning
Skip for: simple factual questions, where it only adds unnecessary tokens
Skip for: reasoning-native models that already do it internally

Role / Persona

Then (2021-2023)

"Act as a senior engineer" or "You are an expert in X" was seen as unlocking hidden knowledge. Viral prompt templates featured elaborate role assignments.

Now (2025)

Still effective for framing tone, depth, and perspective—but the model isn't "becoming" the persona. It's adjusting register based on context you've provided.

The Enduring Principle

Context shapes output. Establishing who the "speaker" is and who the "audience" is focuses the response—just like framing works in human communication.
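
As framing, a role prompt is just context that fixes the speaker, the audience, and the register. A sketch using the common system/user chat-message shape; the scenario and word limit are invented for illustration, and adapt the message format to your client.

    # Role as framing: the system message sets who is speaking, to whom, and
    # at what depth. No hidden expertise is being "unlocked".
    messages = [
        {"role": "system", "content": (
            "You are a site reliability engineer briefing non-technical "
            "stakeholders. Be concrete, avoid jargon, and stay under 150 words."
        )},
        {"role": "user", "content": "Summarize last night's database outage."},
    ]
    for m in messages:
        print(m["role"] + ": " + m["content"])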

When to Apply Today

Use for: setting appropriate technical depth or tone
Use for: creative writing with a specific voice or perspective
Skip for: expecting "secret knowledge" from the persona
Skip for: simple requests that framing would only complicate

The meta-lesson: These techniques worked because they provided structure and context. Modern models have internalized much of this, but understanding why they worked helps you know when to still apply them.