
How to Fix JSON Parse Error After Malformed Model Output in n8n

What you'll learn

  • Why language models return invalid JSON with markdown formatting
  • How to strip markdown code fences in a Code node
  • How to use the Auto-Fixing Output Parser node for automatic JSON repair
  • How to write prompts that minimize JSON formatting issues
Beginner · 7 min read · 10 minutes · n8n 1.0+ with any LLM node (OpenAI, Anthropic, etc.) · March 2026 · RapidDev Engineering Team
TL;DR

JSON parse errors in n8n happen when a language model returns output wrapped in markdown code fences or includes extra text alongside JSON. Fix this by stripping markdown fences in a Code node before parsing, or use the Auto-Fixing Output Parser node which automatically repairs malformed JSON from LLMs.

Why LLM Output Causes JSON Parse Errors in n8n

Language models frequently wrap JSON responses in markdown code fences (```json ... ```) or add explanatory text before and after the JSON. When n8n tries to parse this output with JSON.parse() or a downstream node that expects valid JSON, it fails with errors like 'Unexpected token' or 'SyntaxError: Unexpected token ` in JSON at position 0'. This is one of the most common issues when building AI workflows in n8n. The solution is to clean the output before parsing it, either manually with a Code node or automatically with the Auto-Fixing Output Parser.
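To see the failure concretely, here is a minimal reproduction in plain Node.js, outside n8n; the sample fenced response is made up for illustration:

```javascript
// Minimal reproduction: JSON.parse() fails on a markdown-fenced response
const llmOutput = '```json\n{"name": "Ada", "email": "ada@example.com"}\n```';

let parseError = null;
try {
  JSON.parse(llmOutput);
} catch (err) {
  parseError = err; // SyntaxError: the backtick at position 0 is not valid JSON
}
```

The backtick at position 0 is exactly what produces the 'Unexpected token' errors quoted above.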

Prerequisites

  • A running n8n instance with an LLM node (OpenAI, Anthropic, etc.)
  • A workflow that requests JSON output from a language model
  • Basic understanding of JSON format

Step-by-step guide

Step 1: Identify the malformed output

Run your workflow and inspect the output of the LLM node. Click on the node to see its output data. Look for markdown code fences (triple backticks) wrapping the JSON, extra text before or after the JSON, or escape characters that break parsing. Common patterns include the model returning ```json\n{...}\n``` instead of just {...}, or adding 'Here is the JSON:' before the actual data.

Expected result: You can see the exact format of the LLM output, including any markdown fences, extra text, or formatting that would cause JSON.parse() to fail.

Step 2: Add a Code node to strip markdown fences

Insert a Code node between the LLM node and any node that expects JSON. This Code node uses a regular expression to strip markdown code fences and extract the raw JSON. It handles both ```json and plain ``` fences, as well as leading/trailing whitespace.

```javascript
// Code node — strip markdown fences from LLM output
const rawOutput = $input.first().json.text;

// Remove markdown code fences (```json ... ``` or ``` ... ```)
let cleaned = rawOutput
  .replace(/^```(?:json)?\s*\n?/i, '')
  .replace(/\n?```\s*$/i, '')
  .trim();

// Parse the cleaned JSON
try {
  const parsed = JSON.parse(cleaned);
  return [{ json: parsed }];
} catch (error) {
  // If it still fails, try to extract JSON from the text
  const jsonMatch = cleaned.match(/\{[\s\S]*\}/);
  if (jsonMatch) {
    return [{ json: JSON.parse(jsonMatch[0]) }];
  }
  throw new Error(`Failed to parse JSON: ${error.message}`);
}
```

Expected result: The Code node outputs clean, parsed JSON that downstream nodes can use directly without parse errors.
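You can sanity-check the fence-stripping regex outside n8n by running it on a sample fenced string; the sample below is made up:

```javascript
// Standalone check of the fence-stripping regex on a sample fenced string
const sample = '```json\n{"status": "ok"}\n```';

const cleaned = sample
  .replace(/^```(?:json)?\s*\n?/i, '')
  .replace(/\n?```\s*$/i, '')
  .trim();

const parsed = JSON.parse(cleaned); // → { status: 'ok' }
```

The first replace removes the opening fence (with or without the `json` language tag), the second removes the closing fence, and the result parses cleanly.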

Step 3: Use the Auto-Fixing Output Parser

n8n provides a built-in Auto-Fixing Output Parser node that automatically detects and repairs malformed JSON from LLMs. Add this node to your workflow after the LLM chain. It uses a secondary LLM call to fix formatting issues when the initial parse fails. This is more reliable than regex-based cleaning because it can handle a wider variety of malformations, including missing commas, unquoted keys, and trailing commas.

Expected result: The Auto-Fixing Output Parser automatically cleans and parses the LLM output. If the initial parse fails, it sends the output back to the LLM with instructions to fix the JSON formatting.

Step 4: Improve your prompt to reduce malformed output

While cleaning output is necessary as a safety net, you can reduce the frequency of malformed JSON by being explicit in your prompt. Tell the model to return only raw JSON with no markdown formatting, no explanatory text, and no code fences. Some models respond better to specific instructions about output format.

```javascript
// Example system prompt that reduces formatting issues:
const systemPrompt = `You are a data extraction API.
Return ONLY valid JSON with no markdown formatting.
Do NOT wrap the response in code fences.
Do NOT include any text before or after the JSON.
Respond with a single JSON object, nothing else.`;

// Example user prompt (inputText comes from your workflow data):
const userPrompt = `Extract the following fields from this text and return as JSON:
{"name": "", "email": "", "phone": ""}

Text: ${inputText}

Return ONLY the JSON object.`;
```

Expected result: The LLM returns clean JSON more consistently, reducing but not eliminating the need for the cleaning Code node.

Complete working example

clean-llm-json-output.js

```javascript
// n8n Code node — Robust JSON extraction from LLM output
// Place this node between your LLM node and any JSON-consuming node

const items = $input.all();
const results = [];

for (const item of items) {
  // Get the raw text from the LLM output
  // Adjust the property path based on your LLM node's output structure
  const rawOutput = item.json.text
    || item.json.output
    || item.json.message?.content
    || JSON.stringify(item.json);

  let parsed;

  try {
    // Step 1: Try direct parse (already valid JSON)
    parsed = JSON.parse(rawOutput);
  } catch (e1) {
    try {
      // Step 2: Strip markdown code fences
      let cleaned = rawOutput
        .replace(/^```(?:json)?\s*\n?/gi, '')
        .replace(/\n?```\s*$/gi, '')
        .trim();
      parsed = JSON.parse(cleaned);
    } catch (e2) {
      try {
        // Step 3: Extract first JSON object from mixed text
        const objectMatch = rawOutput.match(/\{[\s\S]*\}/);
        if (objectMatch) {
          parsed = JSON.parse(objectMatch[0]);
        } else {
          // Step 4: Try extracting a JSON array
          const arrayMatch = rawOutput.match(/\[[\s\S]*\]/);
          if (arrayMatch) {
            parsed = JSON.parse(arrayMatch[0]);
          } else {
            throw new Error('No JSON object or array found in output');
          }
        }
      } catch (e3) {
        // All attempts failed — return error info
        parsed = {
          _parse_error: true,
          _error_message: e3.message,
          _raw_output: rawOutput.substring(0, 500)
        };
      }
    }
  }

  results.push({ json: parsed });
}

return results;
```

Common mistakes when fixing JSON parse errors after malformed model output in n8n

Mistake: Using JSON.parse() directly on LLM output without any cleaning.

How to avoid: Always run the output through a cleaning step first. Even with perfect prompts, LLMs occasionally add markdown or extra text.

Mistake: Only handling ```json fences while missing plain ``` fences.

How to avoid: Handle both ```json and plain ``` in your regex. Some models use one, some use the other, and it can vary between calls.

Mistake: Not handling the case where the LLM returns an array instead of an object.

How to avoid: Check for both { and [ as the first character of the extracted JSON. Your regex should match both objects and arrays.

Mistake: Assuming the JSON property is always called 'text' in the LLM output.

How to avoid: Different LLM nodes use different property names (text, output, message.content). Inspect the actual node output to find the correct path.
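The object-versus-array pitfall above can be handled with a single regex alternation. A sketch; the helper name extractJson is ours, not an n8n API:

```javascript
// Extract either a JSON object or a JSON array from mixed text,
// whichever appears first
const extractJson = (text) => {
  const match = text.match(/(\{[\s\S]*\}|\[[\s\S]*\])/);
  return match ? JSON.parse(match[0]) : null;
};
```

For example, extractJson('Here are the results: [1, 2, 3]') parses the array, while text containing no JSON at all returns null.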

Best practices

  • Always add a JSON cleaning step after LLM nodes — even well-prompted models occasionally add markdown fences
  • Use the Auto-Fixing Output Parser for critical workflows where data accuracy is paramount
  • Include explicit instructions in your prompt to return raw JSON without formatting
  • Test your cleaning code with various malformed inputs: code fences, explanatory text, nested JSON, and arrays
  • Log the raw LLM output before cleaning so you can debug format issues later
  • Use structured output or function calling features when available (OpenAI function calling, Anthropic tool use) for more reliable JSON
  • Set a fallback response in your workflow for cases where JSON parsing fails completely
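The last bullet, a fallback response when parsing fails completely, can be sketched in a downstream Code node. The _parse_error flag matches the complete example earlier in this guide; the DEFAULT_RESPONSE fields are placeholders for your own schema:

```javascript
// Fallback sketch: substitute a safe default when upstream parsing failed.
// `_parse_error` is assumed to be set by the cleaning Code node;
// DEFAULT_RESPONSE fields are placeholders for your own schema.
const DEFAULT_RESPONSE = { name: null, email: null, phone: null };

// In n8n this would be $input.all(); hardcoded here for illustration
const items = [{ json: { _parse_error: true, _error_message: 'No JSON found' } }];

const output = items.map((item) =>
  item.json._parse_error
    ? { json: { ...DEFAULT_RESPONSE, _fallback_used: true } }
    : item
);
```

Downstream nodes can then branch on _fallback_used instead of crashing on missing fields.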

Still stuck?

Copy one of these prompts to get a personalized, step-by-step explanation.

ChatGPT Prompt

My n8n workflow gets a JSON parse error after the OpenAI node because the model wraps its response in markdown code fences. How do I strip the code fences and parse the JSON properly in n8n? Show me a Code node solution.

n8n Prompt

I keep getting 'Unexpected token' errors when trying to parse JSON from my Claude node in n8n. The model adds ```json fences around the output. How do I fix this?

Frequently asked questions

Why does the LLM add markdown code fences around JSON?

Language models are trained on text that uses markdown formatting. When asked to return JSON, they often format it as a markdown code block out of habit. This is valid markdown but breaks JSON parsers. Explicit prompt instructions help but do not eliminate the behavior entirely.

What is the Auto-Fixing Output Parser in n8n?

The Auto-Fixing Output Parser is a built-in n8n node that attempts to parse LLM output as structured data. If the initial parse fails, it sends the malformed output back to an LLM with instructions to fix the formatting, then parses the corrected output.

Does the Code node approach work with all LLM providers?

Yes. The Code node cleans the text output regardless of which LLM provider generated it. Just adjust the property path ($input.first().json.text or similar) to match the output field name of your specific LLM node.

Can I use function calling or tool use instead of parsing raw text?

Yes, and it is more reliable. If you use OpenAI's function calling or Anthropic's tool use features, the model returns structured data in a predictable format. This avoids markdown fences entirely. Check if your n8n LLM node supports these features.

What if the LLM returns incomplete JSON that is cut off?

Incomplete JSON happens when the response exceeds the max_tokens limit. Increase max_tokens in your LLM node settings, or ask the model to return shorter responses. The Code node cleaning approach cannot fix genuinely truncated JSON.
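A rough way to detect truncation before parsing is to compare opening and closing bracket counts. This is a heuristic only; braces inside string values can skew it:

```javascript
// Heuristic truncation check: unbalanced brackets usually mean the
// response was cut off by max_tokens. Not reliable when string values
// themselves contain braces or brackets.
const looksTruncated = (text) => {
  const opens = (text.match(/[{\[]/g) || []).length;
  const closes = (text.match(/[}\]]/g) || []).length;
  return opens !== closes;
};
```

Truncated output should be retried with a higher max_tokens rather than "repaired", since the missing data cannot be recovered from the text.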

Can RapidDev help build reliable LLM data extraction workflows in n8n?

Yes. RapidDev specializes in building production-grade n8n workflows with robust JSON parsing, error handling, and retry logic for LLM integrations. Contact RapidDev for expert assistance.
