The Core Idea

Every response is sampled from a probability distribution. All available context shapes that distribution. Your job: curate context so probability mass concentrates on outputs you want.
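A toy sketch of that claim, with made-up numbers: three candidate replies, scored and pushed through a softmax. Aligned context bonuses sharpen the distribution; conflicting ones leave it spread out. Nothing here is how a real model is scored; it only shows the shape of the effect.

    import math

    def softmax(logits):
        """Turn raw scores into a probability distribution."""
        exps = [math.exp(x) for x in logits]
        return [e / sum(exps) for e in exps]

    candidates = ["concise answer", "verbose tutorial", "clarifying question"]
    base = [1.0, 0.8, 0.6]                                        # ambiguous context

    aligned = [b + d for b, d in zip(base, [2.0, -1.0, -1.0])]    # all layers favor "concise"
    conflicting = [b + d for b, d in zip(base, [1.0, 1.2, 0.9])]  # layers disagree

    for name, logits in [("base", base), ("aligned", aligned), ("conflicting", conflicting)]:
        probs = softmax(logits)
        print(name, [f"{c}: {p:.2f}" for c, p in zip(candidates, probs)])

With these numbers, "concise answer" goes from about 0.40 of the mass to 0.93 when every layer pushes the same way, and stays near 0.38 when the layers fight.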

The four layers of context:

System: date, capabilities, base rules
Memory: learned patterns about you
Conversation: this session's history
Your Message: the current instruction

System Context

What It Is

The foundational layer: current date, platform capabilities, base behavioral rules. Largely invisible to users—you can't edit it, but it's always there.

When Aligned ✓

Your request fits within system capabilities. Probability flows naturally toward valid, well-formed outputs the system can actually produce.

Example

Asking for a code explanation when the system has coding capabilities → high probability of accurate, helpful response.
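Concretely, in chat-style APIs the system layer is just the first message in the request. A minimal sketch assuming the common role/content message format; the model name and contents are made up:

    request = {
        "model": "example-model",  # hypothetical
        "messages": [
            # The system layer sits first and frames everything after it.
            {"role": "system",
             "content": "Today is 2025-06-01. You can read and write code. No web access."},
            {"role": "user",
             "content": "Explain how quicksort partitions a list."},
        ],
    }
    # Aligned: the request fits the declared capabilities, so probability
    # concentrates on a correct explanation rather than refusals or guesses.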

When Misaligned ✗

Asking for things outside the system's capabilities (real-time data without search tools, actions it can't take) spreads probability across refusals, workarounds, and hallucinated answers.

Example

Asking "What's the current stock price?" without web search enabled → probability splits between declining, guessing, or giving stale data.

Memory

What It Is

Persistent patterns learned about you across sessions—your preferences, background, communication style, past projects. Carries forward even when you start new conversations.

When Aligned ✓

Memory matches your current intent. If it knows you prefer concise answers and you ask for a summary, that prior knowledge focuses probability on brevity.

Example

Memory knows you're a Python developer. You ask about sorting algorithms → explanations naturally use Python syntax without you asking.
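A common implementation is to fold remembered facts into the system layer at the start of each session. A hypothetical sketch; MEMORY and build_system_prompt stand in for whatever store and merge step a real product uses:

    MEMORY = [
        "Prefers concise answers.",
        "Works primarily in Python.",
    ]

    def build_system_prompt(base_rules, memory):
        """Fold remembered facts into the system layer for this session."""
        facts = "\n".join(f"- {m}" for m in memory)
        return f"{base_rules}\n\nKnown about this user:\n{facts}"

    print(build_system_prompt("You are a helpful assistant.", MEMORY))
    # "Explain sorting algorithms" now lands on short, Python-flavored
    # answers without the user restating their preferences.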

When Misaligned ✗

Outdated or context-inappropriate memory. If memory assumes you're always coding but today you're writing poetry, it may inject technical framing that dilutes your creative output.

Example

Memory learned you like detailed explanations. Today you need a quick yes/no answer → the response runs longer than is helpful.

Conversation History

What It Is

The accumulated back-and-forth in this session. Every exchange adds to it. You control it by continuing, steering, correcting—or starting fresh.

When Aligned ✓

A focused conversation on one topic builds reinforcing context. Each exchange narrows probability toward coherent continuation of the established direction.

Example

You've been refining a cover letter for 5 exchanges. Each new request builds on shared understanding of the role and your background.

When Misaligned ✗

A wandering conversation with topic pivots leaves competing signals. The model may reference earlier context that's no longer relevant, pulling probability away from what you want now.

Example

You discussed dinner recipes, then pivoted to debugging code. The model might still be primed for casual, food-related language.

Strategy

Start a new conversation when switching topics to clear competing context.
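Mechanically, the history is a growing list replayed with every request, which is why starting fresh works: it empties the list. A minimal sketch:

    history = []  # replayed in full on every request

    def record(user_message, assistant_reply):
        """Every stored turn shapes all later sampling."""
        history.append({"role": "user", "content": user_message})
        history.append({"role": "assistant", "content": assistant_reply})

    record("Any good weeknight pasta recipes?", "Try a quick aglio e olio...")
    # Pivoting to debugging now? The recipe turn is still in the prompt.
    history.clear()  # "new conversation": the competing context is gone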

Your Current Message

What It Is

The immediate instruction—what you type right now. It's the final filter before output, closest to the sampling moment. You have complete control here.

When Aligned ✓

A clear, specific message that matches upstream context has maximum leverage. All layers point the same direction—probability concentrates sharply on your target.

Example

Clear request + supportive history + relevant memory = probability mass concentrated on exactly what you want.
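Putting it together: the model sees one flat sequence with your message last. A sketch of the assembled request when every layer points the same way (all contents illustrative):

    system_prompt = ("You are a helpful assistant.\n"
                     "Known about this user:\n- Prefers concise answers.")
    history = [
        {"role": "user", "content": "Draft a cover letter for the data-engineer role."},
        {"role": "assistant", "content": "Here's a first draft: ..."},
    ]
    prompt = [
        {"role": "system", "content": system_prompt},   # rules + memory
        *history,                                       # focused prior exchanges
        {"role": "user",                                # the final filter
         "content": "Tighten the second paragraph."},
    ]
    # All four layers point the same direction, so probability concentrates
    # on a short, targeted revision instead of a rewrite from scratch.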

When Misaligned ✗

Even a perfect message can't fully overcome noisy upstream context. It's the final filter, not a magic override. If conversation history contradicts you, probability stays diffuse.

Example

"Be concise" in your message, but 10 prior exchanges were verbose → the model is pulled between your instruction and established pattern.

Strategy

If your message keeps getting overridden by prior context, start fresh rather than fighting the history.

✓ Aligned Context

Layers reinforce each other.

[Diagram: the space of all possible outputs narrows through each layer in turn: System, Memory, Conversation, Your Message, ending in a focused distribution and a predictable output.]

Result: probability mass is compressed around a narrow band of responses. The model doesn't need to guess your intent; every layer reinforces it.

Fewer competing signals → higher reliability → less prompt fighting.

✗ Misaligned Context

Layers pull in different directions.

[Diagram: the same space of possible outputs, but conflicting Memory and off-topic Conversation history pull against Your Message, ending in a diffuse distribution and an unpredictable output.]

Result: probability mass is fragmented across multiple interpretations. Even a good prompt can't fully overcome conflicting upstream context.

The model isn't "confused"; it's responding rationally to mixed signals.
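One way to make "focused vs. diffuse" concrete is Shannon entropy over the candidate outputs: aligned context lowers it, mixed signals raise it. Toy numbers again:

    import math

    def entropy(probs):
        """Shannon entropy in bits; lower means a more focused distribution."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    focused = [0.85, 0.10, 0.05]   # aligned layers: one reading dominates
    diffuse = [0.40, 0.35, 0.25]   # mixed signals: mass fragments

    print(f"aligned:    {entropy(focused):.2f} bits")   # ~0.75
    print(f"misaligned: {entropy(diffuse):.2f} bits")   # ~1.56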