Getting consistent JSON from LLMs in n8n requires a multi-layered approach: use the Structured Output Parser sub-node for schema enforcement, the Auto-Fixing Output Parser for automatic error correction, JSON mode when available, explicit system prompt instructions, and a Code node fallback to strip markdown fences and validate the output.
How to Ensure LLMs Return Valid, Parseable JSON in n8n
Language models are inherently non-deterministic — even with temperature 0, they can produce varying output formats. When your n8n workflow needs structured JSON data from an LLM, you face three challenges: (1) the model may wrap JSON in markdown code fences, (2) it may add conversational text around the JSON, and (3) it may produce JSON that does not match your expected schema. n8n provides several tools to address this: the Structured Output Parser, the Auto-Fixing Output Parser, and per-model JSON mode settings. This tutorial shows how to combine them for maximum reliability.
Prerequisites
- A running n8n instance (v1.30 or later)
- LLM credentials configured (OpenAI, Anthropic, Gemini, or Cohere)
- An AI Agent or Basic LLM Chain workflow where you need JSON output
- Understanding of JSON schema basics
Step-by-step guide
Add the Structured Output Parser Sub-Node
The Structured Output Parser is the primary tool for getting consistent JSON from LLMs in n8n. In your AI Agent or Basic LLM Chain node, click the '+' button under 'Output Parser' and select 'Structured Output Parser'. Define your output schema using the JSON Schema format. The parser automatically adds formatting instructions to the prompt sent to the LLM, telling it exactly what structure to return. It then validates the response against the schema and throws a clear error if the output does not match.
Example JSON Schema for the Structured Output Parser. Set this in the 'JSON Schema' field of the Structured Output Parser node:

```json
{
  "type": "object",
  "properties": {
    "summary": {
      "type": "string",
      "description": "A 2-3 sentence summary of the content"
    },
    "sentiment": {
      "type": "string",
      "enum": ["positive", "negative", "neutral"],
      "description": "Overall sentiment of the content"
    },
    "key_topics": {
      "type": "array",
      "items": { "type": "string" },
      "description": "List of main topics discussed"
    },
    "confidence": {
      "type": "number",
      "minimum": 0,
      "maximum": 1,
      "description": "Confidence score between 0 and 1"
    }
  },
  "required": ["summary", "sentiment", "key_topics", "confidence"]
}
```

Expected result: The LLM returns JSON that matches your schema, automatically parsed into a JavaScript object by n8n.
Add the Auto-Fixing Output Parser for Error Recovery
The Auto-Fixing Output Parser wraps your Structured Output Parser and adds an automatic retry mechanism. If the initial parse fails (due to markdown fences, extra text, or schema mismatches), the Auto-Fixing Parser sends the failed output back to the LLM with instructions to fix the formatting. This costs one additional LLM call per failure but dramatically increases reliability. Add it by clicking '+' under Output Parser and selecting 'Auto-Fixing Output Parser'. Then connect your Structured Output Parser as a sub-parser.
Expected result: Parse failures are automatically corrected by the LLM, and the workflow receives valid JSON even when the first attempt had formatting issues.
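Conceptually, the Auto-Fixing parser's retry loop looks like the sketch below. None of these names (`autoFixingParse`, `tryParse`, `callLLM`) are n8n internals; they are illustrative stand-ins for what the sub-node does for you:

```javascript
// Conceptual sketch of the Auto-Fixing Output Parser's retry loop.
// tryParse: the wrapped parser (e.g. schema-validating JSON.parse)
// callLLM: a stand-in for the extra model call made on failure
function autoFixingParse(rawOutput, tryParse, callLLM) {
  try {
    // First attempt: parse the model's original output as-is
    return tryParse(rawOutput);
  } catch (err) {
    // On failure, send the broken output back to the model
    // with instructions to repair it, then parse once more
    const fixPrompt =
      `The following output failed to parse (${err.message}). ` +
      `Return ONLY the corrected JSON, nothing else:\n${rawOutput}`;
    return tryParse(callLLM(fixPrompt));
  }
}
```

A second parse failure still propagates to the workflow, which is why the Code node fallback described in a later step remains worthwhile.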
Enable JSON Mode on OpenAI Models
OpenAI's GPT-4o and GPT-4o-mini support a native JSON mode that constrains the model to only output valid JSON. When using the OpenAI Chat Model sub-node, look for the 'Response Format' or 'JSON Mode' option in the node settings. Enable it to get guaranteed valid JSON syntax (though not guaranteed schema compliance). Combine this with the Structured Output Parser for both valid syntax and correct schema. Note: JSON mode requires that your prompt mentions 'JSON' somewhere — the API will reject the request otherwise.
When using the HTTP Request node to call OpenAI directly, add response_format to the request body:

```json
{
  "model": "gpt-4o",
  "messages": [
    {
      "role": "system",
      "content": "You are a data extraction assistant. Return your analysis as JSON."
    },
    {
      "role": "user",
      "content": "Analyze this text: {{ $json.input_text }}"
    }
  ],
  "response_format": { "type": "json_object" },
  "temperature": 0
}
```

Expected result: OpenAI returns syntactically valid JSON without markdown fences or surrounding text.
Craft System Prompts That Maximize JSON Compliance
Even with parsers and JSON mode, prompt engineering significantly affects output quality. Include explicit instructions in your system prompt about the expected format. Provide an example of the desired output. Tell the model what NOT to do. Set temperature to 0 for maximum consistency. These instructions work across all LLM providers — OpenAI, Claude, Gemini, and Cohere.
```javascript
// System prompt template for consistent JSON output
const systemPrompt = `You are a structured data extraction assistant.

OUTPUT FORMAT:
- Return ONLY a JSON object matching the schema below
- Do NOT wrap the JSON in markdown code fences
- Do NOT add any text before or after the JSON
- Do NOT include comments in the JSON
- Use double quotes for all strings
- Ensure all required fields are present

SCHEMA:
{
  "summary": "string (2-3 sentences)",
  "sentiment": "positive | negative | neutral",
  "key_topics": ["string", ...],
  "confidence": number (0 to 1)
}

EXAMPLE OUTPUT:
{"summary": "The article discusses...", "sentiment": "positive", "key_topics": ["AI", "automation"], "confidence": 0.92}

Respond with ONLY the JSON object. No other text.`;
```

Expected result: The LLM consistently returns raw JSON matching the specified schema across different inputs.
Add a Code Node Fallback for Maximum Resilience
Even with all the above measures, edge cases can produce invalid output. Add a Code node after your LLM chain as a final safety net. This node attempts multiple extraction strategies and, if all fail, returns a structured error that your workflow can handle gracefully. This is especially important for production workflows that process high volumes.
```javascript
const rawOutput = $json.output || $json.text || '';

function extractAndValidate(text, requiredFields) {
  let cleaned = text.trim();

  // Strip markdown fences
  const fenceMatch = cleaned.match(/```(?:json)?\s*\n?([\s\S]*?)\n?\s*```/);
  if (fenceMatch) cleaned = fenceMatch[1].trim();

  // Find JSON boundaries
  const start = cleaned.indexOf('{');
  const end = cleaned.lastIndexOf('}');
  if (start === -1 || end === -1) return null;

  cleaned = cleaned.substring(start, end + 1);

  try {
    const parsed = JSON.parse(cleaned);
    // Validate required fields
    const missing = requiredFields.filter(f => !(f in parsed));
    if (missing.length > 0) {
      return { data: parsed, valid: false, missing_fields: missing };
    }
    return { data: parsed, valid: true };
  } catch (e) {
    return null;
  }
}

const required = ['summary', 'sentiment', 'key_topics', 'confidence'];
const result = extractAndValidate(rawOutput, required);

if (result && result.valid) {
  return [{ json: result.data }];
} else if (result && !result.valid) {
  return [{ json: { ...result.data, _validation_warning: 'Missing fields: ' + result.missing_fields.join(', ') } }];
} else {
  return [{ json: { _parse_error: true, raw_text: rawOutput } }];
}
```

Expected result: The workflow always produces structured output — either valid parsed JSON, partially valid JSON with warnings, or a structured error object.
Complete working example
```javascript
// Code node: Run Once for Each Item
// Complete JSON extraction and validation pipeline
// Place after the AI Agent or LLM Chain node

const rawOutput = $json.output || $json.text || $json.message?.content || '';

// Define your expected schema fields
const SCHEMA = {
  required: ['summary', 'sentiment', 'key_topics', 'confidence'],
  types: {
    summary: 'string',
    sentiment: 'string',
    key_topics: 'object', // arrays are objects in JS
    confidence: 'number'
  },
  validValues: {
    sentiment: ['positive', 'negative', 'neutral']
  }
};

function cleanAndParse(text) {
  let cleaned = text.trim();

  // Remove markdown code fences
  const fenceMatch = cleaned.match(/```(?:json)?\s*\n?([\s\S]*?)\n?\s*```/);
  if (fenceMatch) cleaned = fenceMatch[1].trim();

  // Find JSON object
  const start = cleaned.indexOf('{');
  const end = cleaned.lastIndexOf('}');
  if (start !== -1 && end > start) {
    cleaned = cleaned.substring(start, end + 1);
  }

  // Fix smart quotes
  cleaned = cleaned
    .replace(/[\u2018\u2019]/g, "'")
    .replace(/[\u201C\u201D]/g, '"');

  return JSON.parse(cleaned);
}

function validate(data) {
  const errors = [];

  for (const field of SCHEMA.required) {
    if (!(field in data)) {
      errors.push(`Missing required field: ${field}`);
    } else if (typeof data[field] !== SCHEMA.types[field]) {
      errors.push(`${field} should be ${SCHEMA.types[field]}, got ${typeof data[field]}`);
    }
  }

  if (data.sentiment && SCHEMA.validValues.sentiment) {
    if (!SCHEMA.validValues.sentiment.includes(data.sentiment)) {
      errors.push(`Invalid sentiment: ${data.sentiment}`);
    }
  }

  if (data.confidence !== undefined) {
    if (data.confidence < 0 || data.confidence > 1) {
      errors.push(`Confidence must be 0-1, got ${data.confidence}`);
    }
  }

  return errors;
}

try {
  const parsed = cleanAndParse(rawOutput);
  const errors = validate(parsed);

  return [{
    json: {
      ...parsed,
      _meta: {
        parse_status: 'success',
        validation_errors: errors,
        is_valid: errors.length === 0
      }
    }
  }];
} catch (e) {
  return [{
    json: {
      _meta: {
        parse_status: 'failed',
        error: e.message,
        raw_output: rawOutput.substring(0, 500)
      }
    }
  }];
}
```

Common mistakes when getting consistent JSON output from a language model in n8n
Mistake: Relying solely on system prompt instructions without a parser or validation
Why it's a problem: LLMs are non-deterministic; even perfect prompts will occasionally produce incorrect formatting.
How to avoid: Always add a Structured Output Parser or Code node validation as a safety net.
Mistake: Using JSON mode on OpenAI without mentioning 'JSON' in the prompt
Why it's a problem: OpenAI rejects requests that set response_format to json_object unless the word 'JSON' appears somewhere in the prompt.
How to avoid: Add the word 'JSON' to your system or user message.
Mistake: Not using the Auto-Fixing Output Parser, so parse failures crash the workflow
Why it's a problem: A single malformed response stops the entire workflow run.
How to avoid: Add the Auto-Fixing Output Parser wrapping your Structured Output Parser. The extra LLM call on failure is worth the reliability improvement.
Mistake: Defining overly complex schemas with deeply nested objects and many required fields
Why it's a problem: Deeply nested schemas increase the chance that the model produces output that fails validation.
How to avoid: Simplify your schema. If you need complex nested data, consider splitting the extraction into multiple LLM calls, each returning a simple flat schema.
Best practices
- Layer your defenses: system prompt instructions + Structured Output Parser + Auto-Fixing Parser + Code node validation
- Set temperature to 0 when requesting JSON output to minimize formatting variation
- Enable JSON mode on OpenAI models (response_format: json_object) for guaranteed valid JSON syntax
- Include an example of the exact expected output format in your system prompt
- Use descriptive field descriptions in your JSON schema — they guide the LLM to produce better values
- Test with at least 20 different inputs to verify consistency before deploying to production
- Log parse failures to identify patterns — certain input types may consistently cause formatting issues
- Keep JSON schemas simple and flat when possible — deeply nested schemas increase the chance of errors
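The "log parse failures" practice above can be sketched as a small counter kept in workflow static data. In a Code node, the `store` argument would be `$getWorkflowStaticData('global')` (n8n's persistent per-workflow store); `recordParseFailure` and its field names are hypothetical names for illustration:

```javascript
// Hypothetical failure counter. In n8n, pass
// $getWorkflowStaticData('global') as `store` so counts
// persist across workflow executions.
function recordParseFailure(store, inputType, errorMessage) {
  store.parseFailures = store.parseFailures || {};
  const key = inputType || 'unknown';
  // Increment the running count for this kind of input
  store.parseFailures[key] = (store.parseFailures[key] || 0) + 1;
  store.lastError = errorMessage; // most recent error, for spot checks
  return store.parseFailures[key];
}
```

Reviewing the counts per input type shows whether, say, long documents or a particular content category consistently breaks formatting.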
Still stuck?
Copy one of these prompts to get a personalized, step-by-step explanation.
I need my n8n AI Agent workflow to consistently return valid JSON from LLMs (OpenAI, Claude, Gemini). How do I use the Structured Output Parser, Auto-Fixing Output Parser, JSON mode, and system prompts together to ensure reliable JSON output?
Set up my n8n Basic LLM Chain to always return valid JSON matching a specific schema. Use the Structured Output Parser with a JSON schema for product data (name, price, category, features array), add the Auto-Fixing Output Parser as a fallback, and show me the system prompt to use.
Frequently asked questions
Which n8n output parser should I use — Structured or Auto-Fixing?
Use both together. The Structured Output Parser defines your schema and handles formatting instructions. The Auto-Fixing Output Parser wraps it and retries on failures. Together, they give you schema enforcement with automatic error recovery.
Does JSON mode work with Claude (Anthropic) in n8n?
As of n8n 1.30, there is no built-in JSON mode toggle for the Anthropic Chat Model sub-node. Use the Structured Output Parser and explicit system prompt instructions instead. If you call the Anthropic API via the HTTP Request node, you can use response prefilling: supply a partial assistant message so the model's reply is forced to continue as JSON.
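One workable pattern with the HTTP Request node is Anthropic's response prefilling: a trailing assistant message containing an opening brace forces the reply to continue as a JSON object. The body below is a hedged sketch; the model name is an example, and you must prepend the "{" back onto the response text before parsing:

```json
{
  "model": "claude-3-5-sonnet-latest",
  "max_tokens": 1024,
  "messages": [
    {
      "role": "user",
      "content": "Analyze this text and return only a JSON object: {{ $json.input_text }}"
    },
    {
      "role": "assistant",
      "content": "{"
    }
  ]
}
```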
Will using the Auto-Fixing Output Parser double my LLM costs?
Only when a parse failure occurs. If the initial output parses successfully, no extra call is made. In practice, with good prompts and the Structured Output Parser, the auto-fix triggers on less than 5% of requests.
Can I use the Structured Output Parser with the AI Agent node?
Yes. The Structured Output Parser works with both the AI Agent node and the Basic LLM Chain node. Connect it as a sub-node under the Output Parser slot.
What if the LLM returns valid JSON but with wrong data types?
The Structured Output Parser validates against the schema, including types. If a number field contains a string, the parser will flag it as invalid. Add type coercion in a Code node if you want to automatically convert types instead of failing.
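Such coercion might look like the sketch below for the schema used in this tutorial. `coerceTypes` is an illustrative helper, not an n8n built-in, and the fixes it applies are assumptions about common LLM mistakes:

```javascript
// Hypothetical Code node helper: repair common type mismatches
// instead of failing validation outright.
function coerceTypes(data) {
  const out = { ...data };
  // A number returned as a string ("0.92" -> 0.92)
  if (typeof out.confidence === 'string' && !isNaN(Number(out.confidence))) {
    out.confidence = Number(out.confidence);
  }
  // A single topic returned as a string instead of an array
  if (typeof out.key_topics === 'string') {
    out.key_topics = [out.key_topics];
  }
  // Enum casing normalized ("Positive" -> "positive")
  if (typeof out.sentiment === 'string') {
    out.sentiment = out.sentiment.toLowerCase();
  }
  return out;
}
```

Run it after cleaning and before schema validation, so only genuinely unfixable outputs are rejected.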
Can RapidDev help build production-grade JSON extraction workflows?
Yes. RapidDev can architect n8n workflows with multi-layered JSON validation, custom schemas, and monitoring to ensure your AI features reliably produce structured data at scale.
Talk to an Expert
Our team has built 600+ apps. Get personalized help with your project.
Book a free consultation