To manage workflow executions in n8n, use the Executions tab to view, filter, and delete past runs. Configure execution retention with the EXECUTIONS_DATA_MAX_AGE and EXECUTIONS_DATA_PRUNE_MAX_COUNT environment variables. Enable Save Execution Progress in workflow settings to keep detailed logs for debugging failed runs.
Viewing, Filtering, and Pruning Workflow Executions in n8n
Every time an n8n workflow runs, it creates an execution record with the input data, output data, and status. Over time, these records accumulate and consume database storage. This tutorial shows you how to browse execution history, filter by status and date, configure automatic pruning, and manually delete old records to keep your n8n instance lean.
Prerequisites
- A running n8n instance (v1.20 or later)
- At least one workflow that has been executed
- Access to environment variable configuration (self-hosted)
Step-by-step guide
Open the Executions tab and browse past runs
Click the Executions icon in the left sidebar of the n8n editor. This opens a list of all workflow executions across your instance. Each row shows the workflow name, execution status (Success, Error, or Running), start time, and duration. Click any row to open the execution details, where you can see the data flowing through each node. Use the filter controls at the top to narrow results by status, workflow name, or date range. This is your primary tool for monitoring what your workflows are doing.
Expected result: You see a filtered list of executions with the ability to click into any one for detailed node-by-node data.
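If you prefer to inspect executions from the command line, the n8n public REST API (when enabled on your instance) exposes an executions endpoint. A minimal sketch, assuming a hypothetical host n8n.example.com and an API key stored in N8N_API_KEY:

```shell
# Build the request URL for the 10 most recent failed executions.
# Host and credentials are placeholders -- adjust to your instance.
N8N_HOST="n8n.example.com"
URL="https://${N8N_HOST}/api/v1/executions?status=error&limit=10"
echo "$URL"

# Only issue the request when an API key is actually configured
# (skipped otherwise, so the snippet is safe to run as-is).
if [ -n "${N8N_API_KEY:-}" ]; then
  curl -s "$URL" -H "X-N8N-API-KEY: $N8N_API_KEY"
fi
```

The status filter accepts the same values you see in the UI (error, success, waiting), so you can script the same triage you would do by clicking through the tab.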
Enable Save Execution Progress for detailed debugging
By default, n8n only saves the final result of each execution. If a workflow fails midway, you cannot see which nodes completed successfully. To fix this, click the gear icon on any workflow canvas to open Workflow Settings. Toggle Save Execution Progress to on. Now n8n saves the output of each node as it completes, so if the workflow errors on step 5, you still have the data from steps 1 through 4. This is invaluable for debugging complex workflows but increases database storage usage.
Expected result: Failed executions now show data for every completed node, not just the final error.
Configure automatic execution pruning with environment variables
On a self-hosted n8n instance, execution records grow indefinitely unless you configure pruning. Set the following environment variables to delete old executions automatically: EXECUTIONS_DATA_PRUNE enables pruning, EXECUTIONS_DATA_MAX_AGE controls how many hours records are kept, and EXECUTIONS_DATA_PRUNE_MAX_COUNT caps the total number of records retained. Add these to your Docker Compose file, systemd environment file, or .env file, depending on your deployment method.
```yaml
# Docker Compose example
environment:
  - EXECUTIONS_DATA_PRUNE=true
  - EXECUTIONS_DATA_MAX_AGE=168 # 7 days in hours
  - EXECUTIONS_DATA_PRUNE_MAX_COUNT=5000
  - EXECUTIONS_DATA_SAVE_ON_ERROR=all
  - EXECUTIONS_DATA_SAVE_ON_SUCCESS=none # Don't save successful runs
  - EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS=true
```
Expected result: n8n automatically deletes execution records older than 7 days or when the total count exceeds 5000.
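The same variables work outside Docker Compose. In a plain .env file (loaded however your deployment sources environment files), the equivalent settings look like this:

```shell
# .env equivalent of the Compose settings: delete records older than
# 7 days (168 hours) and cap stored executions at 5000.
EXECUTIONS_DATA_PRUNE=true
EXECUTIONS_DATA_MAX_AGE=168
EXECUTIONS_DATA_PRUNE_MAX_COUNT=5000
```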
Manually delete executions from the UI
To delete executions manually, go to the Executions tab in the left sidebar. Use the filters to find the executions you want to remove. Select individual executions using the checkboxes on the left, or use the Select All checkbox at the top to select all visible executions. Then click the Delete button. For bulk deletion, filter by a date range or status first, then select all and delete. This is useful for one-time cleanup or removing test executions before going to production.
Expected result: Selected executions are permanently deleted from the database, freeing storage space.
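Manual cleanup can also be scripted. The n8n public API (if enabled on your instance) accepts DELETE requests for individual executions; a sketch, with a hypothetical host and execution id as placeholders:

```shell
# Placeholders -- substitute your own host and a real execution id.
N8N_HOST="n8n.example.com"
EXECUTION_ID="1234"
URL="https://${N8N_HOST}/api/v1/executions/${EXECUTION_ID}"
echo "DELETE $URL"

# Only issue the request when an API key is configured, since
# deletion is permanent and cannot be undone.
if [ -n "${N8N_API_KEY:-}" ]; then
  curl -s -X DELETE "$URL" -H "X-N8N-API-KEY: $N8N_API_KEY"
fi
```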
Set per-workflow execution saving preferences
Different workflows have different logging needs. A critical payment workflow should save all executions, while a test workflow should save none. Click the gear icon on a workflow to open Workflow Settings. Under Save Successful Production Executions, choose Default (follows global setting), Yes, or No. Do the same for Save Failed Production Executions. Set manual execution saving separately. This lets you keep detailed logs for important workflows while reducing storage for trivial ones.
Expected result: Each workflow saves executions according to its own settings, independent of the global configuration.
Complete working example
```yaml
# docker-compose.yml — n8n with optimized execution settings
version: '3.8'

services:
  n8n:
    image: n8nio/n8n:latest
    restart: always
    ports:
      - '5678:5678'
    environment:
      # Database
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}

      # Execution data pruning
      - EXECUTIONS_DATA_PRUNE=true
      - EXECUTIONS_DATA_MAX_AGE=336
      - EXECUTIONS_DATA_PRUNE_MAX_COUNT=10000

      # Execution saving preferences
      - EXECUTIONS_DATA_SAVE_ON_ERROR=all
      - EXECUTIONS_DATA_SAVE_ON_SUCCESS=none
      - EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS=true

      # General
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - WEBHOOK_URL=https://n8n.example.com/
      - GENERIC_TIMEZONE=America/New_York
    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      - postgres

  postgres:
    image: postgres:15
    restart: always
    environment:
      - POSTGRES_USER=n8n
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=n8n
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  n8n_data:
  postgres_data:
```
Common mistakes when managing Workflow Executions in n8n
Mistake: Never configuring pruning, letting the database grow until the instance slows down.
How to avoid: Set EXECUTIONS_DATA_PRUNE=true and EXECUTIONS_DATA_MAX_AGE=168 (7 days) as environment variables.
Mistake: Setting EXECUTIONS_DATA_SAVE_ON_ERROR=none and losing the ability to debug failed workflows.
How to avoid: Keep EXECUTIONS_DATA_SAVE_ON_ERROR=all on production instances so you can inspect failure data.
Mistake: Enabling Save Execution Progress on all workflows, bloating the database.
How to avoid: Enable it only on workflows you are actively debugging, and disable it once the issue is resolved.
Mistake: Deleting executions before extracting the data needed for debugging.
How to avoid: Export or screenshot the execution data before deleting; deleted executions cannot be recovered.
Best practices
- Enable execution pruning on all self-hosted instances to prevent unbounded database growth
- Set EXECUTIONS_DATA_SAVE_ON_SUCCESS=none for workflows that do not need success logs
- Enable Save Execution Progress only on workflows you are actively debugging
- Use a PostgreSQL database instead of SQLite for production instances with high execution volumes
- Regularly check the Executions tab for stuck Running executions that may indicate workflow issues
- Set a workflow timeout to prevent individual executions from running indefinitely
- Back up your database before performing bulk execution deletions
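The workflow-timeout practice above can also be enforced globally. A sketch of the relevant environment variables, with illustrative values (EXECUTIONS_TIMEOUT is in seconds, and individual workflow settings can raise it only up to EXECUTIONS_TIMEOUT_MAX):

```shell
# Illustrative values: kill executions after 1 hour by default,
# and let per-workflow settings extend that to at most 2 hours.
EXECUTIONS_TIMEOUT=3600
EXECUTIONS_TIMEOUT_MAX=7200
```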
Still stuck?
Copy one of these prompts to get a personalized, step-by-step explanation.
My n8n database is growing too large because of execution history. How do I configure automatic pruning to keep only the last 7 days of failed executions and discard successful runs?
Set up my n8n Docker environment to prune execution data older than 7 days, save only failed executions, and limit total stored executions to 10,000 records.
Frequently asked questions
How do I see executions for a specific workflow only?
Open the workflow in the editor and click the Executions tab at the top of the canvas. This shows only executions for that specific workflow. Alternatively, use the global Executions tab in the sidebar and filter by workflow name.
Can I export execution data before deleting it?
n8n does not have a built-in execution export feature. You can click into an execution and manually copy the JSON data from each node. For automated exports, query the n8n database directly or use the n8n API to fetch execution data programmatically.
What is the difference between EXECUTIONS_DATA_MAX_AGE and EXECUTIONS_DATA_PRUNE_MAX_COUNT?
EXECUTIONS_DATA_MAX_AGE deletes records older than the specified number of hours. EXECUTIONS_DATA_PRUNE_MAX_COUNT caps the total number of stored records. Both are applied together — records are deleted if they exceed either limit.
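Because EXECUTIONS_DATA_MAX_AGE is expressed in hours, a quick conversion helps when your retention policy is stated in days:

```shell
# Convert a 14-day retention window to the hour value n8n expects.
DAYS=14
echo "EXECUTIONS_DATA_MAX_AGE=$((DAYS * 24))"
# → EXECUTIONS_DATA_MAX_AGE=336
```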
Why are my successful executions not being saved?
Check the workflow settings (gear icon) for Save Successful Production Executions and the global environment variable EXECUTIONS_DATA_SAVE_ON_SUCCESS. If either is set to none or No, successful runs are not saved.
Does n8n Cloud handle execution pruning automatically?
Yes, n8n Cloud applies automatic retention policies based on your plan. Free plans retain executions for a shorter period than paid plans. You cannot change the retention period on Cloud but can manually delete executions.
Can RapidDev help me set up execution management for a production n8n instance?
Yes, RapidDev can configure your n8n deployment with optimized execution pruning, proper database setup with PostgreSQL, and monitoring workflows that alert you when execution volumes indicate issues.
Talk to an Expert
Our team has built 600+ apps. Get personalized help with your project.
Book a free consultation