Emotional Prompts and Delimiters
Emotional phrases like "this is important to my career" measurably improve LLM attention on key prompt words. And delimiters — quotes, XML tags, markdown — make complex prompts dramatically easier for models to parse.
Two techniques that sound simple but have solid research behind them: emotional prompts (adding stakes to your request) and delimiters (structuring your prompt with boundaries and visual hierarchy).
Emotional Prompts
What the Research Found
The paper "Large Language Models Understand and Can Be Enhanced by Emotional Stimuli" tested whether adding emotional phrases to prompts changed model accuracy.
They took a standard sentiment classification prompt:
Determine whether a movie review is positive or negative.

Then added emotional phrases at the end, things like:
- "Write your answer and give me a confidence score between 0 and 1."
- "This is very important to my career."
- "You'd better be sure."
- "Are you sure that's your final answer? It might be worth taking another look."
They measured which words in the prompt received the most attention from the model (using attention weight analysis from the transformer architecture).
What Actually Happens
Counterintuitively: the emotional phrase itself got very little model attention. The words "career" and "confidence" got some — but "this is very important to my" barely registered.
What did change: the important words in the original prompt got more attention. "Positive," "negative," and "review" — the key classification words — all received significantly darker (higher) attention scores when an emotional phrase was appended.
Attention weights comparison:
Origin prompt → "positive" = moderate attention
With "this is very important to my career" → "positive" = significantly higher attention

The emotional phrase acted as a signal that made the model pay closer attention to everything that came before it.
Does It Work?
Yes. Adding an emotional phrase produced measurably better accuracy than the plain prompt, even on non-emotional tasks. However:
- Results vary across models
- Other research shows mixed results on some benchmarks
- It's not a replacement for well-structured prompts — it's an enhancer
Use emotional prompts when:
- You've already tried other techniques and accuracy is still inconsistent
- The task genuinely requires high reliability (classification, factual extraction)
- You want the model to double-check its work
// Standard:
"Classify this support ticket as billing, technical, or account issue."
// With emotional prompt:
"Classify this support ticket as billing, technical, or account issue.
This classification routes the ticket directly to the right team — accuracy
is critical. Please verify your classification before responding."

Emotional Prompt Examples to Try
"This is very important to my career."
"You'd better be sure."
"Are you sure that's your final answer? It might be worth taking another look."
"Please verify your answer and provide a confidence score from 0 to 1."
"This decision will be reviewed by my team — accuracy matters."

None of these are magic. They're signals that shift the model's attention to what matters in your prompt. Think of them as "please double-check" in a format the model responds to.
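If you reuse the same base prompts across tasks, the phrase can be appended programmatically. A minimal sketch (the helper name and phrase keys are illustrative, not from the paper's code):

```python
# Sketch: append an emotional "stakes" phrase to a base prompt.
EMOTIONAL_PHRASES = {
    "career": "This is very important to my career.",
    "verify": "Please verify your answer and provide a confidence score from 0 to 1.",
    "recheck": "Are you sure that's your final answer? It might be worth taking another look.",
}

def with_emotional_prompt(base_prompt: str, key: str = "verify") -> str:
    """Return the base prompt with an emotional phrase appended on its own line."""
    return f"{base_prompt.rstrip()}\n{EMOTIONAL_PHRASES[key]}"

prompt = with_emotional_prompt(
    "Classify this support ticket as billing, technical, or account issue."
)
```

Keeping the phrases in one place makes it easy to A/B test whether they actually help on your model and task.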
Delimiters and XML Tags
What Delimiters Are
A delimiter is a boundary marker. You use them every day:
- Commas in arrays: [1, 2, 3]
- Curly braces in objects: { key: value }
- Hashtags for headings in markdown: # Title
In prompts, delimiters do the same thing — they separate sections, establish hierarchy, and make it clear what's input vs. output, what's an example vs. the actual request, what's a constraint vs. a requirement.
Why They Work for LLMs
LLMs were trained on enormous amounts of code, documentation, and markdown-formatted text. Delimiters appear constantly in that training data:
- Code uses braces, brackets, colons
- READMEs use hashtags, bullet points, code blocks
- Documentation uses numbered lists, bold headers
The model has deep pattern recognition for structure. When you use familiar delimiters, the model parses your prompt more accurately — the same way a well-formatted document is easier for a human to read than a wall of text.
Claude models are specifically trained on XML tags and respond to them particularly well. GPT models respond well to markdown and JSON structure. Either works — choose based on your model.
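Since the two styles carry the same sections, you can render one prompt either way depending on the target model. A rough sketch (function names are illustrative):

```python
# Sketch: render the same named sections as XML tags or markdown headers.
def to_xml(sections: dict[str, str]) -> str:
    """XML-tag style, which Claude models parse especially well."""
    return "\n".join(f"<{name}>\n{body}\n</{name}>" for name, body in sections.items())

def to_markdown(sections: dict[str, str]) -> str:
    """Markdown-header style, which GPT models handle well."""
    return "\n\n".join(f"## {name}\n{body}" for name, body in sections.items())

sections = {
    "context": "You are reviewing a pull request for a payments service.",
    "task": "List potential bugs and security issues.",
}
```

The point isn't the helper itself; it's that section names and bodies are the stable part, and the delimiter style is a rendering choice.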
Basic Examples
Triple dashes as section separators:
Analyze these two options and recommend one.
---
Option A: Microservices with Kubernetes
Pros: Independent scaling, team autonomy
Cons: Operational complexity, network overhead
---
Option B: Monolith with vertical scaling
Pros: Simple to develop, easier to debug
Cons: Limited scale ceiling, tight coupling
---
Recommendation:

XML tags for complex prompts:
<context>
You are analyzing a Prompt Library application that stores prompts in localStorage.
The app currently has: save, delete, star ratings, and notes features.
</context>
<task>
Research what it would take to make this a production application.
</task>
<research_areas>
<area>
<topic>Existing prompt management tools</topic>
<questions>
What tools exist? What features do they have?
What databases do they use?
</questions>
</area>
<area>
<topic>Collaboration features</topic>
<questions>
How do teams share configurations (like Postman)?
What permission models exist?
</questions>
</area>
</research_areas>
<format>
For each research area: key findings, patterns, and implementation complexity estimate.
Synthesize into a concise competitive analysis matrix at the end.
</format>

Naming Conventions
Use semantic names for your delimiters — names that describe what the section contains:
✅ <requirements>, <constraints>, <examples>, <task>, <context>
❌ <x>, <thing1>, <item>

Semantic names help both you and the model understand what each section does. The model generalizes meaning from names.
Nesting for Complex Prompts
Delimiters support nesting, which is useful for few-shot examples:
<examples>
<example id="1">
<input>User likes: Inception, The Matrix</input>
<output>Recommended: Arrival — cerebral sci-fi with a time-bending perspective.</output>
</example>
<example id="2">
<input>User likes: The Grand Budapest Hotel, Amélie</input>
<output>Recommended: Midnight in Paris — whimsical, visually rich, character-driven.</output>
</example>
</examples>
<task>
<input>User likes: Spirited Away, My Neighbor Totoro</input>
<output></output>
</task>

Clear boundaries between examples prevent the model from mixing them up.
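Few-shot blocks like this are repetitive enough to generate from data. A small sketch (the function name is illustrative; inputs are assumed to contain no characters needing XML escaping):

```python
# Sketch: build a nested <examples> block from (input, output) pairs,
# followed by the <task> block the model should complete.
def few_shot_block(pairs: list[tuple[str, str]], query: str) -> str:
    examples = "\n".join(
        f'<example id="{i}">\n<input>{inp}</input>\n<output>{out}</output>\n</example>'
        for i, (inp, out) in enumerate(pairs, start=1)
    )
    return (
        f"<examples>\n{examples}\n</examples>\n"
        f"<task>\n<input>{query}</input>\n<output></output>\n</task>"
    )

prompt = few_shot_block(
    [
        ("User likes: Inception, The Matrix", "Recommended: Arrival"),
        ("User likes: The Grand Budapest Hotel, Amélie", "Recommended: Midnight in Paris"),
    ],
    "User likes: Spirited Away, My Neighbor Totoro",
)
```

Generating the tags guarantees every example is wrapped identically, which is exactly the boundary consistency the technique relies on.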
When to Use Delimiters
Use them any time your prompt has:
- Multiple distinct sections
- Examples that need to be separated from the main request
- Input vs. output that needs to be distinguished
- Complex requirements or constraints that benefit from visual structure
For simple one-line prompts, delimiters add noise. For anything with 3+ distinct sections, they add clarity.
Combining Both
These two techniques work well together in complex prompts:
<context>
You are a senior engineer reviewing code for a production deployment.
This review will be shared with the team — accuracy and completeness are critical.
</context>
<code_to_review>
[paste code here]
</code_to_review>
<focus_areas>
- Security vulnerabilities
- Performance bottlenecks
- Potential null reference issues
</focus_areas>
<format>
List issues by severity (critical, high, medium, low).
For each: describe the issue, the risk, and a suggested fix.
</format>
This review will inform whether we proceed with deployment. Please be thorough.

The XML tags structure the prompt clearly. The closing sentence adds emotional weight. Both improve the response.
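As a template, the combination is just structured sections plus a closing stakes line. A condensed sketch (helper name is illustrative):

```python
# Sketch: XML sections for structure, an emotional closing line for weight.
def review_prompt(code: str) -> str:
    return (
        "<context>\nYou are a senior engineer reviewing code for a production deployment.\n</context>\n"
        f"<code_to_review>\n{code}\n</code_to_review>\n"
        "<format>\nList issues by severity (critical, high, medium, low).\n</format>\n"
        "This review will inform whether we proceed with deployment. Please be thorough."
    )

p = review_prompt("def charge(user, amount): ...")
```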