
How to Debug Workflows in n8n

To debug workflows in n8n, use the execution history panel to review past runs, click on individual nodes to inspect their input and output data, check error messages in failed executions, enable detailed logging with the N8N_LOG_LEVEL environment variable, and use the Pin Data feature to test nodes with fixed data without re-triggering the workflow.

What you'll learn

  • How to use the execution history panel to review past workflow runs
  • How to inspect individual node inputs and outputs for data issues
  • How to read and interpret error messages in failed executions
  • How to use data pinning and logging for systematic debugging
Beginner · 8 min read · 15-25 minutes to complete · n8n 1.0+ (self-hosted and Cloud) · March 2026 · RapidDev Engineering Team

Debugging Workflows in n8n: Execution History, Node Inspection, and Error Logs

When an n8n workflow fails or produces unexpected results, debugging starts with understanding what data each node received, what it produced, and where the chain broke. n8n provides built-in tools for this: execution history with step-by-step replay, node output inspection, error messages with context, and configurable logging. This tutorial covers all of these tools so you can diagnose and fix issues quickly.

Prerequisites

  • A running n8n instance (self-hosted or Cloud)
  • A workflow that has been executed at least once
  • Basic familiarity with the n8n editor interface

Step-by-step guide

Step 1: Open the execution history panel

Click the Executions tab at the top of the n8n editor (next to the Editor tab). This shows a list of all past executions for the current workflow, sorted by most recent first. Each row shows the execution status (Success, Error, or Running), the start time, the execution duration, and the trigger type. Click on any execution to load it in the editor with all node outputs preserved. Failed executions are highlighted in red and show the error message inline. Use the filter options to show only failed executions when investigating errors.

Expected result: The execution history panel shows past runs, and clicking on a specific execution loads its data into the editor.

Step 2: Inspect individual node outputs step by step

After loading a past execution, click on any node in the workflow canvas to see its output data. The output panel shows what data the node produced after running. Switch between Table view and JSON view to explore the data structure. Table view is easier to scan for large datasets, while JSON view shows the exact data structure including nested objects. Compare the output of one node with the input of the next node to find where data gets lost or transformed incorrectly. Click the Input tab to see what data the node received.

Expected result: You can trace data through the entire workflow by clicking each node in sequence and comparing inputs to outputs.

Step 3: Read error messages and identify the failing node

When a workflow execution fails, the failing node is highlighted with a red border on the canvas. Click on it to see the error message in the output panel. n8n error messages typically include the HTTP status code (for API calls), the error type, and a description from the external service. Common patterns include 401 (authentication failure), 400 (invalid request data), 429 (rate limit), and 500 (server error). The error message often tells you exactly what to fix — read it carefully before making changes.

Expected result: You identify the specific node that failed, the error message, and the data that caused the failure.
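To make the error-code patterns above concrete, here is a purely illustrative lookup from each status code to a first debugging step. This is not part of n8n itself; n8n surfaces the status code inside the failing node's error message, and these hints simply restate the guidance above in code form.

```javascript
// Illustrative lookup: HTTP status code -> first debugging step.
const statusHints = {
  400: 'Invalid request data: inspect the failing node input for missing or malformed fields',
  401: 'Authentication failure: re-check the credential attached to the node',
  429: 'Rate limit: slow the workflow down (e.g. with a Wait node) or reduce batch size',
  500: 'Server error on the external service: retry later or check its status page',
};

function hintFor(statusCode) {
  // Fall back to reading the full message for anything not in the table
  return statusHints[statusCode] ?? 'Unrecognized status: read the full error message in the output panel';
}

console.log(hintFor(401));
```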

Step 4: Use data pinning to test individual nodes without re-triggering

Data pinning lets you freeze a node's output so downstream nodes can be tested without re-running the entire workflow. This is especially useful when the trigger is a webhook (which requires re-sending a request) or an external event. After inspecting a node's output, click the Pin icon in the output panel. The pinned data persists and is used every time you test downstream nodes. You can also manually edit pinned data to test edge cases. Unpin the data when you are done debugging to restore normal flow.

Expected result: Pinned data appears with a pin icon on the node. Downstream node tests use the pinned data instead of re-executing upstream nodes.
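When you edit pinned data manually, n8n expects an array of item objects in the same shape you see in JSON view. As a sketch, edited pinned data for testing an edge case might look like this (the field names are placeholders for illustration):

```json
[
  { "email": "jane@example.com", "name": "Jane" },
  { "email": "", "name": "Edge case: empty email" }
]
```

Downstream nodes then receive both items on every test run, letting you verify how the workflow handles the malformed record without re-sending the original webhook.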

Step 5: Enable detailed logging for deeper investigation

For issues that are not visible in the node output, enable detailed logging by setting the N8N_LOG_LEVEL environment variable. The default level is info. Set it to debug for verbose output that includes internal data processing details, or to warn to only see warnings and errors. Check logs using your deployment's log viewer. For Docker, use docker compose logs n8n. Detailed logs show HTTP request and response details, credential resolution, expression evaluation, and internal node processing steps.

shell
# Enable debug logging
# In docker-compose.yml:
#   environment:
#     - N8N_LOG_LEVEL=debug
#
# In a .env file:
#   N8N_LOG_LEVEL=debug

# View logs in Docker:
docker compose logs -f n8n

# View logs with timestamp filtering:
docker compose logs --since 5m n8n

# Log levels: error, warn, info (default), debug

Expected result: Detailed logs show internal processing steps, helping you identify issues that are not visible in the node output panel.

Step 6: Use a Code node to add custom debug logging

For complex workflows, insert a Code node to log specific data points to the n8n console. Use console.log() in the Code node to output values to the n8n server logs. This is useful for inspecting intermediate values that are not easily visible in the node output, such as computed expressions, loop counters, or conditional logic results. Label your log messages clearly so you can find them in the logs.

javascript
// Code node: Debug logger
const items = $input.all();

for (const item of items) {
  console.log('[DEBUG] Item data:', JSON.stringify(item.json, null, 2));
  console.log('[DEBUG] Item keys:', Object.keys(item.json));
  console.log('[DEBUG] Has expected field:', 'email' in item.json);
}

// Pass items through unchanged
return items;

Expected result: Custom log messages appear in the n8n server logs with the data you specified, alongside the standard n8n log output.

Complete working example

debug-helper.js
// Code node: Comprehensive debug helper
// Place anywhere in a workflow to inspect data flow
// Outputs detailed information about the current execution state

const items = $input.all();
const executionId = $execution.id;
const workflowName = $workflow.name;
const timestamp = new Date().toISOString();

console.log('[DEBUG] === Debug Helper ===');
console.log(`[DEBUG] Workflow: ${workflowName}`);
console.log(`[DEBUG] Execution: ${executionId}`);
console.log(`[DEBUG] Timestamp: ${timestamp}`);
console.log(`[DEBUG] Items received: ${items.length}`);

const debugItems = items.map((item, index) => {
  const keys = Object.keys(item.json);
  const binaryKeys = item.binary ? Object.keys(item.binary) : [];

  console.log(`[DEBUG] Item ${index}: keys=[${keys.join(', ')}]`);

  return {
    json: {
      _debug: {
        itemIndex: index,
        totalItems: items.length,
        keys: keys,
        hasBinaryData: binaryKeys.length > 0,
        binaryKeys: binaryKeys,
        executionId: executionId,
        workflow: workflowName,
        timestamp: timestamp,
        dataPreview: Object.fromEntries(
          keys.map(key => [
            key,
            typeof item.json[key] === 'string'
              ? item.json[key].substring(0, 100)
              : typeof item.json[key]
          ])
        )
      },
      // Pass through original data unchanged
      ...item.json
    }
  };
});

return debugItems;

Common mistakes when debugging workflows in n8n

Mistake: Leaving N8N_LOG_LEVEL set to debug in production, causing performance issues and large log files.

How to avoid: Set logging back to info or warn after debugging. Debug logging produces significant output and can slow down n8n.

Mistake: Only looking at the latest execution when the issue is intermittent.

How to avoid: Review multiple executions in the execution history panel. Filter by error status and look for patterns across failed runs.

Mistake: Testing nodes in isolation without considering how data flows from upstream nodes.

How to avoid: Always trace data from the trigger node through each step. Pin realistic test data that matches what the trigger actually sends.

Mistake: Deleting debug Code nodes and then needing to recreate them when the next issue occurs.

How to avoid: Deactivate debug nodes instead (right-click the node and select Deactivate Node). They remain in the workflow but are skipped during execution.

Best practices

  • Check the execution history before diving into code — most issues are visible in the node output data
  • Use JSON view instead of Table view when inspecting nested data structures
  • Pin data on upstream nodes to avoid re-triggering webhooks or API calls during debugging
  • Set N8N_LOG_LEVEL to debug temporarily when node outputs do not explain the issue
  • Add descriptive labels to Code node debug logs so they are easy to filter in server logs
  • Compare the output of one node with the input of the next to find where data gets lost
  • Test each node individually using the Test Step feature before running the entire workflow
  • Keep execution history enabled for at least 7 days to investigate intermittent issues

Still stuck?

Copy one of these prompts to get a personalized, step-by-step explanation.

ChatGPT Prompt

My n8n workflow fails intermittently with a generic error message that does not indicate which node caused the problem. Walk me through a systematic debugging approach including execution history review, node inspection, logging configuration, and data pinning.

n8n Prompt

Create a debug helper workflow in n8n with a Manual trigger, a Code node that inspects the data structure of incoming items and outputs detailed debug information, and a Set node that demonstrates data pinning for downstream testing.

Frequently asked questions

How long does n8n keep execution history?

By default, n8n keeps execution history indefinitely, which can fill up storage. Set EXECUTIONS_DATA_PRUNE=true and EXECUTIONS_DATA_MAX_AGE=168 (hours) to automatically delete old execution data. For debugging intermittent issues, keep at least 7 days of history.
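As a sketch, the pruning settings might look like this in a docker-compose.yml for a self-hosted instance (the values shown are examples):

```yaml
# Example execution-pruning settings (docker-compose.yml)
environment:
  - EXECUTIONS_DATA_PRUNE=true
  - EXECUTIONS_DATA_MAX_AGE=168   # hours to keep; 168 = 7 days
```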

Can I re-run a failed execution with the same input data?

Yes, open the failed execution from the execution history panel. The editor loads with all the original input data. Click Test Workflow to re-run it. This is useful after fixing a bug to verify the fix.

How do I debug a workflow that works in test mode but fails in production?

Test mode (/webhook-test/) and production mode (/webhook/) can behave differently because test mode runs synchronously in the editor while production runs in the background. Check production execution logs and ensure environment variables and credentials are the same in both contexts.

Can I set breakpoints in n8n like in a code debugger?

n8n does not have traditional breakpoints. The closest equivalent is to add a Wait node between steps and inspect the output of each node before it continues. You can also use the Test Step feature to execute one node at a time.

What does the orange warning icon on a node mean?

An orange warning icon usually indicates that the node executed but produced a warning, such as receiving more items than expected or encountering a non-critical issue. Click the node to read the warning message in the output panel.

How do I debug expression errors?

Click the expression field and use the expression editor. It shows a real-time preview of the resolved value using data from the last execution. If it shows ERROR or undefined, the path is wrong or the input data is missing the expected field.
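For reference, typical expression paths look like the following (the field and node names here are placeholders for illustration):

```text
{{ $json.email }}                      // a field on the current input item
{{ $json.user.address.city }}          // nested path; a missing segment yields undefined
{{ $('HTTP Request').item.json.id }}   // data from a specific upstream node, by node name
```

If the preview shows undefined, check each path segment against the upstream node's JSON view to find where the structure diverges.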

Can I export execution data for offline analysis?

n8n does not have a built-in execution data export feature. You can use the n8n API endpoint GET /api/v1/executions/{id} to fetch execution data programmatically, or query the database directly if self-hosted.
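A minimal sketch of fetching one execution via that API endpoint, assuming a self-hosted instance at localhost:5678 and an API key created under Settings > n8n API (sent in the X-N8N-API-KEY header); adjust the host, execution ID, and key for your deployment, and note that the includeData query parameter depends on your n8n version:

```javascript
// Build the request for GET /api/v1/executions/{id} (assumed self-hosted setup)
function buildExecutionRequest(baseUrl, executionId, apiKey) {
  return {
    // Strip any trailing slash so the path joins cleanly
    url: `${baseUrl.replace(/\/+$/, '')}/api/v1/executions/${executionId}?includeData=true`,
    headers: { 'X-N8N-API-KEY': apiKey, 'Accept': 'application/json' },
  };
}

const req = buildExecutionRequest('http://localhost:5678', '1234', '<your-api-key>');
console.log(req.url);

// To actually fetch (Node 18+):
// const res = await fetch(req.url, { headers: req.headers });
// const execution = await res.json();
```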

Can RapidDev help debug complex n8n workflow issues?

Yes, RapidDev's engineering team can audit your n8n workflows, set up monitoring and alerting, and resolve persistent debugging challenges including intermittent failures, data flow issues, and performance bottlenecks.
