View n8n execution history by clicking the Executions tab in the left sidebar. The Executions page shows all past workflow runs with their status, start time, duration, and workflow name. Filter by status (success, error, waiting), date range, or workflow. Click any execution to inspect node-by-node input and output data for debugging.
How to View Execution History in n8n
Every time a workflow runs in n8n, the execution is recorded in the execution history. This log is essential for debugging failed workflows, auditing automated processes, monitoring workflow performance, and understanding how data flows through your nodes. The execution history shows the status of each run, the data processed by each node, and any errors that occurred.
Prerequisites
- A running n8n instance with at least one workflow that has been executed
- Access to the n8n web editor
- Basic familiarity with n8n workflows and nodes
Step-by-step guide
Open the Executions tab
In the n8n editor, look at the left sidebar. Click the Executions icon (a list or clock icon) to open the global execution history. This shows executions from all workflows. Alternatively, open a specific workflow and click the Executions tab at the top of the workflow editor to see only executions for that workflow. The list shows the most recent executions first.
Expected result: The Executions page opens showing a list of past workflow runs with columns for status, workflow name, start time, and execution duration.
Filter executions by status and date
Use the filter controls at the top of the Executions page to narrow down the list. Filter by status — Success, Error, or Waiting — to find problematic runs quickly. Set a date range to focus on a specific time period. You can also filter by workflow name if you are looking for executions of a particular workflow. These filters help you find the execution you need when you have hundreds or thousands of records.
Expected result: The execution list updates to show only the executions matching your filter criteria. The count indicator shows how many executions match.
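The same filters can also be applied programmatically through n8n's public REST API. A minimal sketch of building a filtered query (the base URL is an assumption for a local instance; verify the supported query parameters such as `status` and `workflowId` against your instance's API docs):

```javascript
// Sketch: build a filtered executions query for n8n's public API.
// Assumes a local instance at http://localhost:5678.
function buildExecutionsUrl(baseUrl, { status, workflowId, limit } = {}) {
  const params = new URLSearchParams();
  if (status) params.set('status', status);        // e.g. 'error', 'success', 'waiting'
  if (workflowId) params.set('workflowId', workflowId);
  if (limit) params.set('limit', String(limit));   // page size
  return `${baseUrl}/api/v1/executions?${params.toString()}`;
}

const url = buildExecutionsUrl('http://localhost:5678', { status: 'error', limit: 50 });
console.log(url);
// → http://localhost:5678/api/v1/executions?status=error&limit=50

// A real request would add the API key header, e.g.:
// fetch(url, { headers: { 'X-N8N-API-KEY': process.env.N8N_API_KEY } })
```

The helper only builds the URL, so the filtering logic can be checked without a running instance.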
Inspect an individual execution
Click on any execution in the list to open its detail view. This loads the workflow canvas with the exact state from that execution. Each node shows whether it executed successfully (green), failed (red), or was skipped. Click on any node to see the input data it received and the output data it produced. This is the most powerful debugging feature — you can trace exactly how data flowed through the workflow.
Expected result: The workflow canvas shows the execution state with color-coded nodes. Clicking a node displays the exact input and output data from that execution run.
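The input and output panels display each node's data as an array of items, each wrapping its payload under a `json` key. As a rough sketch (field names here are illustrative), this is the same shape a Code node transforms via `$input.all()`; outside n8n we can mimic the transformation on a plain array:

```javascript
// The per-item shape n8n shows for a node's input/output:
// an array of items, each with its payload under `json`.
const inputItems = [
  { json: { id: 1, email: 'ada@example.com' } },
  { json: { id: 2, email: 'alan@example.com' } },
];

// Inside a Code node this array would come from $input.all();
// the mapping logic itself is plain JavaScript.
function extractEmails(items) {
  return items.map((item) => ({ json: { email: item.json.email } }));
}

console.log(extractEmails(inputItems));
// → [ { json: { email: 'ada@example.com' } }, { json: { email: 'alan@example.com' } } ]
```

Recognizing this item structure makes the execution detail view much easier to read when tracing data between nodes.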
View error details for failed executions
When an execution has errors, the failed node is highlighted in red on the canvas. Click the failed node to see the error message, stack trace, and the input data that caused the failure. This information tells you exactly what went wrong and with what data. Use this to fix the node configuration, update expressions, or add error handling.
```javascript
// Common error information shown in execution details:
// - Error message: "Cannot read property 'email' of undefined"
// - Node: HTTP Request
// - Input data that caused the error
// - Stack trace with line numbers (for Code nodes)
// - Timestamp of the failure
```
Expected result: The error details panel shows the error message, the node that failed, the input data at the time of failure, and a stack trace for Code node errors.
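An error like "Cannot read property 'email' of undefined" usually means an item is missing a field the node expects. A hedged sketch of the defensive fix in a Code node (the `user.email` field and fallback value are illustrative, not part of any real workflow):

```javascript
// Defensive handling for items that may lack an expected field.
// `user.email` is illustrative; match it to your own data shape.
function safeEmails(items) {
  return items.map((item) => ({
    json: {
      // Optional chaining avoids "Cannot read property 'email' of undefined"
      email: item.json.user?.email ?? 'missing@unknown',
    },
  }));
}

const items = [
  { json: { user: { email: 'ada@example.com' } } },
  { json: {} },  // `user` is missing: would previously have thrown
];

console.log(safeEmails(items));
// → [ { json: { email: 'ada@example.com' } }, { json: { email: 'missing@unknown' } } ]
```

The same pattern works in expressions, e.g. `{{ $json.user?.email }}`, to keep a single malformed item from failing the whole run.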
Configure execution data retention
By default, n8n keeps all execution data indefinitely, which can consume significant database space over time. Configure automatic pruning with environment variables to delete old execution data. Set EXECUTIONS_DATA_PRUNE to true and EXECUTIONS_DATA_MAX_AGE to the number of hours to keep execution records. You can also set EXECUTIONS_DATA_SAVE_ON_SUCCESS and EXECUTIONS_DATA_SAVE_ON_ERROR to control which executions are saved.
```shell
# Environment variables for execution data retention

# Enable automatic pruning of old execution data
EXECUTIONS_DATA_PRUNE=true

# Keep execution data for 7 days (168 hours)
EXECUTIONS_DATA_MAX_AGE=168

# Save execution data for successful runs (default: all)
EXECUTIONS_DATA_SAVE_ON_SUCCESS=all

# Save execution data for failed runs (default: all)
EXECUTIONS_DATA_SAVE_ON_ERROR=all

# Save execution data for manual runs (default: true)
EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS=true

# Maximum number of finished executions to keep
EXECUTIONS_DATA_MAX_COUNT=10000
```
Expected result: n8n automatically deletes execution records older than the configured retention period. The database size stays manageable over time.
Complete working example
```json
{
  "name": "Execution Monitor — Daily Report",
  "nodes": [
    {
      "parameters": {
        "rule": {
          "interval": [
            { "field": "hours", "hoursInterval": 24 }
          ]
        }
      },
      "name": "Daily Schedule",
      "type": "n8n-nodes-base.scheduleTrigger",
      "typeVersion": 1.2,
      "position": [250, 300]
    },
    {
      "parameters": {
        "url": "={{ $env.N8N_HOST || 'http://localhost:5678' }}/api/v1/executions",
        "authentication": "genericCredentialType",
        "genericAuthType": "httpHeaderAuth",
        "sendQuery": true,
        "queryParameters": {
          "parameters": [
            { "name": "status", "value": "error" },
            { "name": "limit", "value": "100" }
          ]
        }
      },
      "name": "Fetch Failed Executions",
      "type": "n8n-nodes-base.httpRequest",
      "typeVersion": 4,
      "position": [450, 300]
    },
    {
      "parameters": {
        "jsCode": "const executions = $input.all();\nconst summary = {\n  totalFailed: executions.length,\n  byWorkflow: {},\n  reportDate: new Date().toISOString().split('T')[0]\n};\n\nfor (const exec of executions) {\n  const wfName = exec.json.workflowData?.name || 'Unknown';\n  summary.byWorkflow[wfName] = (summary.byWorkflow[wfName] || 0) + 1;\n}\n\nconst topFailures = Object.entries(summary.byWorkflow)\n  .sort((a, b) => b[1] - a[1])\n  .slice(0, 10)\n  .map(([name, count]) => `${name}: ${count} failures`);\n\nreturn [{\n  json: {\n    ...summary,\n    topFailures,\n    message: `Daily Execution Report: ${summary.totalFailed} failed executions`\n  }\n}];"
      },
      "name": "Build Report",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [650, 300]
    }
  ],
  "connections": {
    "Daily Schedule": {
      "main": [[{ "node": "Fetch Failed Executions", "type": "main", "index": 0 }]]
    },
    "Fetch Failed Executions": {
      "main": [[{ "node": "Build Report", "type": "main", "index": 0 }]]
    }
  }
}
```
Common mistakes when viewing and filtering execution history in n8n
Mistake: Not enabling execution data pruning.
Why it's a problem: Execution data accumulates indefinitely, and the database can grow to gigabytes.
How to avoid: Set EXECUTIONS_DATA_PRUNE=true and EXECUTIONS_DATA_MAX_AGE=168 (7 days), or an appropriate retention period for your needs.
Mistake: Setting EXECUTIONS_DATA_SAVE_ON_ERROR to none.
Why it's a problem: You lose exactly the debugging information you need most when workflows fail.
How to avoid: Always save error execution data. Set EXECUTIONS_DATA_SAVE_ON_ERROR=all so you can debug failures. Only reduce retention for successful executions.
Mistake: Looking at execution history for the wrong workflow when debugging.
Why it's a problem: You waste time analyzing runs that are unrelated to the failure you are investigating.
How to avoid: Use the workflow-specific Executions tab (inside the workflow editor) rather than the global Executions page to see only runs for the workflow you are debugging.
Mistake: Deleting execution data manually from the database instead of using n8n pruning.
Why it's a problem: Manual database deletion can cause inconsistencies if you delete records that n8n expects to exist.
How to avoid: Use the n8n environment variables for automatic pruning.
Best practices
- Check execution history regularly for failed runs, especially after deploying new workflows or making changes
- Filter by Error status to quickly identify and prioritize workflow failures for debugging
- Enable execution data pruning in production to prevent the database from growing unbounded
- Save only error executions for high-volume workflows to reduce storage while preserving debugging information
- Use the execution detail view to trace data flow through each node when debugging unexpected behavior
- Set up a monitoring workflow that queries the n8n API for failed executions and sends alerts
- Export important execution data before it is pruned if you need long-term audit records
- Review execution durations to identify slow workflows that may need optimization
Still stuck?
Copy one of these prompts to get a personalized, step-by-step explanation.
My n8n workflow is failing intermittently. How do I use the execution history to find the pattern — which node fails, what input data causes the failure, and how often it happens?
Create an n8n workflow that queries the execution history API, counts failed executions per workflow over the last 24 hours, and sends a Slack notification with the summary report.
Frequently asked questions
How long does n8n keep execution history by default?
By default, n8n keeps execution data indefinitely. Without pruning enabled, the execution history grows until it fills up your database storage. Enable EXECUTIONS_DATA_PRUNE=true and set EXECUTIONS_DATA_MAX_AGE to control retention.
Can I export execution history data?
There is no built-in export button, but you can use the n8n REST API to query executions programmatically. Create a workflow that calls the /api/v1/executions endpoint and writes the data to a file or external database.
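A sketch of that export loop, assuming n8n's cursor-based pagination (a `data` array plus a `nextCursor` field in each response); the `fetchPage` function is abstracted out so the loop itself can be verified, and the base URL and header usage in the comment reflect the public API but should be checked against your instance's docs:

```javascript
// Sketch: export all executions by following cursor pagination.
// `fetchPage(cursor)` performs one API call and resolves to
// { data: [...], nextCursor: '...' | null }.
async function exportAllExecutions(fetchPage) {
  const all = [];
  let cursor;
  do {
    const page = await fetchPage(cursor);
    all.push(...page.data);
    cursor = page.nextCursor;
  } while (cursor);
  return all;
}

// Real usage would wire fetchPage to the API, e.g.:
// const fetchPage = async (cursor) => {
//   const url = new URL('http://localhost:5678/api/v1/executions');
//   if (cursor) url.searchParams.set('cursor', cursor);
//   const res = await fetch(url, { headers: { 'X-N8N-API-KEY': process.env.N8N_API_KEY } });
//   return res.json();
// };
// const executions = await exportAllExecutions(fetchPage);
```

Separating the pagination loop from the HTTP call also makes it easy to write the result to a file or external database afterwards.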
Does execution history include data from manual test runs?
Yes, by default. Manual test executions are saved to history alongside automated runs. You can disable this with EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS=false if you only want to track production executions.
Can I retry a failed execution from the history?
Yes. Open the failed execution detail view and click the Retry button. This re-runs the workflow with the same input data from the trigger node. The retry creates a new execution entry in the history.
Why is my execution history empty even though workflows are running?
Check the EXECUTIONS_DATA_SAVE_ON_SUCCESS and EXECUTIONS_DATA_SAVE_ON_ERROR environment variables. If set to none, execution data is not saved. Also verify that EXECUTIONS_DATA_PRUNE is not deleting records too aggressively with a very short MAX_AGE.
How much database space does execution history use?
It depends on data volume. Each execution stores the full input and output of every node. High-volume workflows with large payloads can generate gigabytes of execution data per week. Enable pruning and consider saving only error executions for busy workflows.
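As a rough back-of-the-envelope check (the per-execution size is an assumption; measure it on your own instance, since it depends entirely on your payloads):

```javascript
// Rough storage estimate for retained execution data.
// avgKbPerExecution is workload-specific: measure it on your instance.
function estimateStorageMb(executionsPerDay, avgKbPerExecution, retentionDays) {
  return (executionsPerDay * avgKbPerExecution * retentionDays) / 1024;
}

// e.g. 1000 runs/day at ~50 KB each, kept for 7 days:
console.log(estimateStorageMb(1000, 50, 7));
// → 341.796875 (roughly 342 MB)
```

Running numbers like these before choosing EXECUTIONS_DATA_MAX_AGE helps you pick a retention period that fits your database capacity.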