Structured output, XML tags, delimiters, and advanced system prompt patterns for production-grade results.
Claude (Anthropic) responds exceptionally well to XML-tagged prompts.
Wrapping sections in tags like <context>, <instructions>,
and <output_format> helps the model parse complex requests with precision.
XML tags are a best practice for production prompts.
Why XML Tags Work:
Claude was trained on prompts that use XML-style tags, so explicit tags give it unambiguous boundaries between instructions, data, and formatting requirements. The model doesn't have to guess where one section ends and the next begins.
Common XML Tags:

- `<context>`: Background info, data, or situation the model needs to understand
- `<instructions>`: What you want the model to do
- `<output_format>`: Exact structure of the response (JSON, XML, markdown, etc.)
- `<rules>`: Constraints and guardrails ("don't do X", "always do Y")
- `<examples>`: Few-shot examples showing expected behavior
- `<system_role>`: Identity and expertise of the model
```xml
<system_role>
You are a senior Python architect.
</system_role>

<context>
The user has a function that processes large lists (100k+ items). It's slow.
</context>

<instructions>
Analyze the code and suggest optimizations. Focus on algorithm complexity
first, then memory usage, then parallelization.
</instructions>

<rules>
- Don't suggest rewrites unless necessary
- Always explain the performance impact
- Cite specific lines of code
- Don't suggest external dependencies
</rules>

<output_format>
{
  "summary": "Brief overview",
  "optimizations": [
    {
      "issue": "What's slow",
      "fix": "How to fix it",
      "performance_gain": "Expected improvement",
      "code_location": "Line X-Y"
    }
  ]
}
</output_format>
```
You can nest tags for hierarchy. Example: <examples><example id="1">...</example></examples>.
This helps organize complex prompts into logical sections.
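For instance, a few-shot block might nest individual examples inside a parent tag. A small sketch (the inner `<input>` and `<output>` tag names are illustrative, not required):

```xml
<examples>
  <example id="1">
    <input>The checkout page crashes on submit.</input>
    <output>urgent</output>
  </example>
  <example id="2">
    <input>Could you add a dark mode someday?</input>
    <output>low-priority</output>
  </example>
</examples>
```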
Delimiters (---, ```, ###) visually separate sections and help the model understand boundaries. They're especially useful when your prompt embeds user-provided content that might include text resembling instructions (a first line of prompt injection defense).
Common Delimiters:

Triple dash (`---`):

```
Instruction text here.
---
User-provided content here. This could contain anything.
```

Triple backticks:

````
Instruction text here.
```
User-provided content here. This could contain anything.
```
````

Markdown heading (`###`):

```
Instruction text here.
### User Content
User-provided content here. This could contain anything.
```

Equals-sign rule (`====================`):

```
Instruction text here.
====================
User-provided content here. This could contain anything.
```
When to Use Which:

- `---` or `====================` for simple visual separation between sections
- Triple backticks when the embedded content is code or should be treated as verbatim text
- `###` when you want a human-readable label on each section
- XML tags plus a delimiter when the content is untrusted user input, as shown below
Combine delimiters with XML tags for fortress-level clarity:
```xml
<instructions>
...
</instructions>

---

<user_content>
...
</user_content>
```
For production apps, you often need guaranteed JSON output that you can parse programmatically. This requires both system prompt patterns and clear output_format specifications. Claude is highly reliable at producing valid JSON when asked explicitly.
```
<instructions>
You are a sentiment analysis system. Analyze the text and respond with JSON
containing sentiment, confidence, and supporting evidence.
</instructions>

<output_format>
{
  "sentiment": "positive|negative|neutral",
  "confidence": 0.0 to 1.0,
  "supporting_quote": "...",
  "explanation": "..."
}

IMPORTANT: Return ONLY valid JSON. No markdown, no extra text. Just JSON.
</output_format>

Text to analyze: "I love this product!"
```
Python Code to Parse JSON Safely:
```python
import json

import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=500,
    messages=[{"role": "user", "content": "..."}],
)

# The model's reply is the text of the first content block.
text = response.content[0].text

try:
    result = json.loads(text)
    print(f"Sentiment: {result['sentiment']}")
except json.JSONDecodeError as e:
    # Keep the raw response around for debugging failed parses.
    print(f"Failed to parse JSON: {e}")
    print(f"Raw response: {text}")
```
Even with explicit instructions, always wrap JSON parsing in try/except blocks. Models are reliable but not perfect. Graceful error handling is essential in production.
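If a parse does fail, one common recovery pattern is to feed the error back to the model and ask for corrected output. A minimal sketch, assuming a re-prompt is acceptable (the `get_json` helper, retry count, and wording are illustrative):

```python
import json

import anthropic

client = anthropic.Anthropic()

def get_json(prompt, retries=2):
    """Call the model, retrying with the parse error if the JSON is invalid."""
    for _ in range(retries + 1):
        response = client.messages.create(
            model="claude-sonnet-4-5-20250929",
            max_tokens=500,
            messages=[{"role": "user", "content": prompt}],
        )
        text = response.content[0].text
        try:
            return json.loads(text)
        except json.JSONDecodeError as e:
            # Show the model its own mistake and ask for JSON-only output.
            prompt = (
                f"Your previous response was not valid JSON ({e}).\n"
                f"Previous response:\n{text}\n\n"
                "Return ONLY the corrected JSON, with no extra text."
            )
    raise ValueError("Model did not return valid JSON after retries")

# Usage sketch, reusing the sentiment task from above:
# result = get_json('Analyze sentiment of "I love this product!" Return JSON.')
```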
Output templates pre-define the exact shape of the response. Instead of just saying "give me a summary," you provide the exact structure you want filled in. This ensures consistency across multiple requests.
Example: Meeting Summary Template
```
<instructions>
Summarize the meeting transcript using this exact template. Fill in each section.
</instructions>

Meeting Summary Template:
──────────────────────────
Title: [Meeting name]
Date: [Date]
Duration: [Length in minutes]
Attendees: [List names]

Key Decisions:
1. [Decision 1]
2. [Decision 2]
3. [Decision 3]

Action Items:
- [ ] [Action 1] - Owner: [Name]
- [ ] [Action 2] - Owner: [Name]
- [ ] [Action 3] - Owner: [Name]

Next Steps: [What happens next]
Follow-up Meeting: [Date/Time if scheduled]
──────────────────────────

Now summarize this transcript: ...
```
Benefits of Templates:

- Consistent structure across requests, so outputs are directly comparable
- Nothing gets silently skipped: every section must be filled in
- Predictable field names and order make downstream parsing simpler
- Reviewers know exactly where to look for each piece of information
Combine templates with JSON for maximum structure: Have the template define the field names, then ask for JSON output. This gives you both human-readable structure and machine-parseable format.
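For example, you can keep the template's field names but request the filled-in result as JSON. A minimal sketch reusing fields from the meeting template above (the exact JSON keys are illustrative):

```
<output_format>
Fill in the meeting summary template, then return it as JSON:
{
  "title": "[Meeting name]",
  "date": "[Date]",
  "attendees": ["[Name]"],
  "key_decisions": ["[Decision 1]", "[Decision 2]"],
  "action_items": [{"task": "[Action]", "owner": "[Name]"}],
  "next_steps": "[What happens next]"
}
Return ONLY valid JSON.
</output_format>
```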
Negative prompting tells the model what NOT to do. While it's not as powerful as positive instructions, it's useful for guardrails. Be specific: "Don't apologize" works better than "Be confident."
Good vs. Bad Negative Prompting:

Bad (too vague to act on):
- "Don't make mistakes."
- "Don't be rude."
- "Don't be boring."

Good (specific and checkable):
- "Don't include disclaimers or apologies."
- "Don't use corporate jargon."
- "Don't make up facts you're unsure about."
- "Don't exceed 200 words."
Negative Prompting Patterns:

- Format control: "Never use markdown. Always return plain text."
- Confidentiality: "Don't mention competitors. Don't reveal internal details."
- Behavior: "Don't ask follow-up questions. Don't hedge your answer."
- Tone: "Don't apologize. Don't be overly formal. Don't use sarcasm."
Negative instructions are helpful but less effective than positive ones. Instead of "Don't be rude," say "Be friendly and respectful." Negative constraints should reinforce positive behavior, not replace it.
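In practice, that means pairing each negative constraint with the positive behavior it protects. A small illustrative rules block (the specific pairings are assembled from the patterns above, not prescribed):

```xml
<rules>
Be friendly and respectful. Don't use sarcasm.
Answer directly and confidently. Don't hedge or apologize.
Write in plain language. Don't use corporate jargon.
</rules>
```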
Prompt chaining breaks complex tasks into sequential prompts. Instead of asking one model to do everything, you pipeline outputs: Task A → Task B → Task C. This improves accuracy and allows error handling between steps.
Example Chain: Extract → Analyze → Summarize
```python
import anthropic
import json

client = anthropic.Anthropic()

def chain_analysis(document):
    # Step 1: Extract key entities
    step1 = client.messages.create(
        model="claude-sonnet-4-5-20250929",
        max_tokens=500,
        messages=[{
            "role": "user",
            "content": f"Extract names, dates, and amounts from this document:\n{document}\n\nReturn JSON."
        }],
    )
    entities = json.loads(step1.content[0].text)

    # Step 2: Analyze sentiment of entities
    step2 = client.messages.create(
        model="claude-sonnet-4-5-20250929",
        max_tokens=500,
        messages=[{
            "role": "user",
            "content": f"Analyze sentiment of these names in context:\n{json.dumps(entities)}\n\nReturn JSON."
        }],
    )
    analysis = json.loads(step2.content[0].text)

    # Step 3: Generate summary
    step3 = client.messages.create(
        model="claude-sonnet-4-5-20250929",
        max_tokens=300,
        messages=[{
            "role": "user",
            "content": f"Summarize this analysis:\n{json.dumps(analysis)}\n\nKeep it under 100 words."
        }],
    )
    summary = step3.content[0].text

    return {
        "entities": entities,
        "analysis": analysis,
        "summary": summary,
    }

result = chain_analysis("Alice sold 100 shares...")
print(json.dumps(result, indent=2))
```
When to Use Prompt Chaining:

- The task has distinct stages that each benefit from focused instructions
- One do-everything prompt produces unreliable or inconsistent results
- You need to validate or correct intermediate outputs before continuing
- Different steps need different settings (e.g., the smaller max_tokens for the summary step above)
Always check step outputs for validity before passing to the next step. If Step 2 fails to produce valid JSON, you might retry, use a fallback, or abort the chain gracefully.
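Parsing alone isn't always enough: a step can return valid JSON that's still missing the fields the next step needs. A minimal validation sketch (the `validate_entities` helper and its required field names are illustrative assumptions, not part of the Anthropic SDK):

```python
import json

def validate_entities(data):
    """Check that a chain step's output has the fields the next step expects."""
    required = {"names", "dates", "amounts"}  # illustrative schema
    return isinstance(data, dict) and required.issubset(data.keys())

# In chain_analysis above, this check would sit between Step 1 and Step 2.
raw = '{"names": ["Alice"], "dates": [], "amounts": [100]}'
entities = json.loads(raw)
if not validate_entities(entities):
    # Options: retry the step, substitute a safe default, or abort the chain.
    raise ValueError(f"Step output missing required fields: {entities}")
```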
Meta-prompting means using an LLM to generate or improve prompts. You can ask Claude to write prompts for itself or other models. This is useful for prototyping, scaling prompt development, and exploring variations.
Example: Ask Claude to Write a Prompt
```
<instructions>
Write a prompt that will make Claude classify customer support tickets as
urgent, normal, or low-priority. The prompt should:
- Use XML tags for clarity
- Include 3 few-shot examples
- Define output format as JSON
- Include guardrails against bias
</instructions>

Return the complete prompt as a markdown code block with proper formatting.
```
Python Code: Generate Prompts Dynamically
```python
import anthropic

client = anthropic.Anthropic()

def generate_prompt(task_description):
    """Use Claude to write a prompt."""
    response = client.messages.create(
        model="claude-sonnet-4-5-20250929",
        max_tokens=1000,
        messages=[{
            "role": "user",
            "content": f"""Write a production-grade prompt for this task:

{task_description}

The prompt should use XML tags, include few-shot examples,
define output format, and include guardrails.
Return just the prompt text, ready to use."""
        }],
    )
    return response.content[0].text

# Generate a prompt for data extraction
task = "Extract invoice data (vendor, amount, date) from unstructured text"
prompt = generate_prompt(task)
print(prompt)

# Now use the generated prompt
response = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=500,
    messages=[{"role": "user", "content": prompt}],
)
print(response.content[0].text)
```
Use meta-prompting to: generate variations of a prompt, convert one prompt to another format, expand a simple prompt into a production-grade version, or explore alternative phrasings.
1. Why does Claude respond well to XML tags in prompts?
2. What is prompt chaining primarily used for?
3. When using structured JSON output, what should you always do?
4. What is the main advantage of output templates?
Here's what you've learned:
- XML tags structure prompts for clarity.
- Delimiters separate sections and defend against injection.
- JSON output makes responses machine-parseable.
- Templates enforce consistency.
- Negative prompting adds guardrails.
- Prompt chaining breaks complexity into steps.
- Meta-prompting uses LLMs to generate prompts.

These techniques stack together for production-grade reliability.
Next up → Topic 5: Prompt Iteration & Debugging
Learn to systematically improve prompts from first draft to production-ready.