Learn how to integrate n8n with self-hosted databases like MySQL, PostgreSQL, MongoDB, and more. Set up connections, create workflows, ensure security, and optimize performance for powerful automation.
N8n is a workflow automation tool that can be integrated with self-hosted databases to create powerful automation workflows. To use n8n with self-hosted databases, you'll need to set up the n8n instance, configure database connections using the appropriate database nodes, create workflows that interact with your database, and implement proper error handling and security measures.
Setting Up N8n for Database Integration
Step 1: Install and Deploy N8n
Before connecting to self-hosted databases, you need to have n8n installed and running. Here are the options:
Option A: Install using NPM
# Install n8n globally
npm install -g n8n
# Start n8n
n8n
Option B: Install using Docker
docker run -it --rm \
  --name n8n \
  -p 5678:5678 \
  -v ~/.n8n:/home/node/.n8n \
  n8nio/n8n
Option C: Install using Docker Compose
Create a docker-compose.yml file:
version: '3'
services:
  n8n:
    image: n8nio/n8n
    restart: always
    ports:
      - "5678:5678"
    volumes:
      - ~/.n8n:/home/node/.n8n
    environment:
      - N8N_ENCRYPTION_KEY=your-secure-encryption-key
      - N8N_PROTOCOL=https
      - NODE_ENV=production
Then run:
docker-compose up -d
After installation, access n8n at http://localhost:5678 (or your server IP/domain).
Step 2: Database Preparation
Before connecting n8n to your self-hosted database, ensure your database is:
- Running and reachable from the machine or container hosting n8n
- Configured to accept remote connections if n8n runs on a different host (for example, bind-address for MySQL, or listen_addresses and pg_hba.conf for PostgreSQL)
- Set up with a dedicated user account for n8n rather than a superuser account
Additionally, prepare the database connection details:
- Host name or IP address, and port
- Database name
- Username and password
- SSL/TLS settings, if encrypted connections are required
Connecting N8n to Different Self-Hosted Databases
Step 3: Connecting to MySQL/MariaDB
A. Using the MySQL Node
{
  "host": "your-mysql-server",
  "port": 3306,
  "database": "your-database",
  "user": "your-username",
  "password": "your-password",
  "ssl": false // Set to true if using SSL
}
B. Creating a Basic MySQL Query Workflow
SELECT * FROM customers WHERE active = 1 LIMIT 10
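The MySQL node emits one item per returned row. When downstream nodes need a different shape, a Code node can reshape the rows; a minimal sketch, assuming illustrative column names (id, name, active):
// Code node ("Run Once for All Items") placed after the MySQL node.
// Reshapes each row; the column names here are illustrative assumptions.
return $input.all().map(item => ({
  json: {
    id: item.json.id,
    displayName: (item.json.name || '').trim(),
    isActive: item.json.active === 1
  }
}));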
Step 4: Connecting to PostgreSQL
A. Using the PostgreSQL Node
{
  "host": "your-postgres-server",
  "port": 5432,
  "database": "your-database",
  "user": "your-username",
  "password": "your-password",
  "ssl": {
    "enabled": false, // Set to true if using SSL
    "rejectUnauthorized": true
  }
}
B. Example: PostgreSQL with Parameterized Queries
For secure database interactions, use parameterized queries:
SELECT * FROM users WHERE user_id = $1 AND status = $2
Set the "Additional Fields" → "Query Parameters" to:
[
  "{{$node["Input_Node"].json.userId}}",
  "active"
]
Step 5: Connecting to MongoDB
A. Using the MongoDB Node
{
  "url": "mongodb://your-username:your-password@your-mongodb-server:27017/your-database",
  "options": {
    "ssl": false // Set to true if using SSL
  }
}
Or provide individual connection parameters:
{
  "hostname": "your-mongodb-server",
  "port": 27017,
  "database": "your-database",
  "user": "your-username",
  "password": "your-password",
  "ssl": false
}
B. Example: Finding Documents in MongoDB
To query MongoDB documents:
{
  "status": "active",
  "age": { "$gt": 21 }
}
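The filter can also be assembled dynamically before the MongoDB node runs. A hedged sketch of a Code node that builds the query from incoming fields, where status and minAge are illustrative inputs:
// Code node ("Run Once for Each Item") that assembles a MongoDB filter.
// The incoming fields (status, minAge) are assumptions for illustration.
const { status, minAge } = $input.item.json;

const filter = {};
if (status) filter.status = status;
if (minAge !== undefined) filter.age = { $gt: Number(minAge) };

// Expose the filter as a JSON string so the MongoDB node's Query field
// can reference it with an expression such as {{ $json.filter }}.
return { json: { filter: JSON.stringify(filter) } };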
Step 6: Connecting to Microsoft SQL Server
A. Using the Microsoft SQL Node
{
  "server": "your-mssql-server",
  "port": 1433,
  "database": "your-database",
  "user": "your-username",
  "password": "your-password",
  "domain": "", // If using domain authentication
  "connectionTimeout": 15000,
  "requestTimeout": 15000,
  "options": {
    "encrypt": true, // Set to true for encrypted connections
    "trustServerCertificate": false // Set to true to trust self-signed certs
  }
}
B. Example: Executing a Stored Procedure
For SQL Server stored procedures:
EXEC GetCustomerDetails @CustomerId = 12345
Or with parameterized values:
EXEC GetCustomerDetails @CustomerId = @param1
And set the query parameters to:
[
  "{{$node["Previous_Node"].json.customerId}}"
]
Step 7: Connecting to SQLite
A. Using the SQLite Node
{
  "database": "/path/to/your/database.sqlite"
}
For Docker installations, place the SQLite database file in a mounted volume.
B. Example: Creating and Populating a SQLite Database
To create a new table and insert data:
CREATE TABLE IF NOT EXISTS tasks (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  title TEXT NOT NULL,
  description TEXT,
  status TEXT DEFAULT 'pending',
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
)
INSERT INTO tasks (title, description, status)
VALUES ('{{$json["title"]}}', '{{$json["description"]}}', '{{$json["status"]}}')
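Because this INSERT interpolates values straight into the SQL, it is safer to escape string values first. A defensive sketch for a Code node placed before the SQLite node; this is illustrative only, and parameterized queries remain preferable where the node supports them:
// Code node ("Run Once for Each Item") that escapes single quotes so
// interpolated strings cannot break out of the INSERT statement.
const escapeSql = (value) =>
  typeof value === 'string' ? value.replace(/'/g, "''") : value;

const { title, description, status } = $input.item.json;
return {
  json: {
    title: escapeSql(title),
    description: escapeSql(description ?? ''),
    status: escapeSql(status ?? 'pending')
  }
};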
Step 8: Connecting to Redis
A. Using the Redis Node
{
  "host": "your-redis-server",
  "port": 6379,
  "password": "your-password",
  "database": 0,
  "ssl": false
}
B. Example: Using Redis for Caching
To implement a simple caching mechanism:
return [
  {
    key: 'user:' + $input.item.json.userId,
    checkCache: true
  }
];
// Redis Set operation
{
  "operation": "set",
  "key": "{{$node["Function"].json["key"]}}", // key already includes the "user:" prefix
  "value": "{{$json}}",
  "expireTime": 3600
}
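Tied together, this becomes a cache-aside pattern: read Redis first and only query the database on a miss. A sketch of the routing logic in a Code node, assuming the preceding Redis get operation returns the cached JSON in a value field:
// Code node after a Redis "get" operation. On a hit, return the cached
// record; on a miss, flag the item so an IF node can route it to the
// database branch and then back through the Redis "set" shown above.
const cached = $input.item.json.value; // output field name is an assumption

if (cached) {
  return { json: { source: 'cache', user: JSON.parse(cached) } };
}

return {
  json: {
    source: 'database',
    cacheMiss: true,
    userId: $input.item.json.userId // carried through for the DB query
  }
};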
Advanced Database Integration Techniques
Step 9: Managing Database Transactions
For operations requiring transactions, use the "Execute Query" operation with transaction statements:
A. MySQL/PostgreSQL Transactions
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE user_id = 1;
UPDATE accounts SET balance = balance + 100 WHERE user_id = 2;
COMMIT;
-- or to revert: ROLLBACK;
B. Transaction Error Handling
For better transaction management:
// Function node to log the error and prepare a rollback
if ($input.item.json.error) {
  console.log('Transaction error:', $input.item.json.error);
  return [{ command: 'ROLLBACK;' }];
}
return [{ command: 'COMMIT;' }];
Step 10: Implementing Database Migrations
Use n8n to manage database schema changes:
A. Creating a Migration Workflow
// Check current schema version
return [
  {
    query: "SELECT value FROM app_settings WHERE key = 'schema_version'"
  }
];
-- Version 1 to 2
ALTER TABLE users ADD COLUMN last_login TIMESTAMP NULL;
UPDATE app_settings SET value = '2' WHERE key = 'schema_version';
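A Code node can then compare the stored version against the migrations the workflow knows about and emit only the statements that still need to run. A sketch using a hypothetical migration map:
// Code node that selects pending migrations. The migrations map and the
// "value" field from the version query are illustrative assumptions.
const currentVersion = parseInt($input.item.json.value, 10) || 1;

const migrations = {
  2: "ALTER TABLE users ADD COLUMN last_login TIMESTAMP NULL;",
  3: "CREATE INDEX idx_users_email ON users (email);"
};

// Emit one item per migration newer than the current version, in order,
// so a downstream query node can execute them sequentially.
return Object.entries(migrations)
  .filter(([version]) => Number(version) > currentVersion)
  .map(([version, sql]) => ({ json: { version: Number(version), sql } }));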
Step 11: Working with Multiple Databases
A. Synchronizing Data Between Databases
To copy data from one database to another:
SELECT id, name, email, updated_at FROM customers
WHERE updated_at > '{{$json["last_sync_time"]}}'
// Code node ("Run Once for All Items"): map source rows to the target schema
return $input.all().map(item => ({
  json: {
    customer_id: item.json.id,
    full_name: item.json.name,
    email_address: item.json.email,
    last_modified: item.json.updated_at
  }
}));
INSERT INTO customer_mirror (customer_id, full_name, email_address, last_modified)
VALUES ({{$json["customer_id"]}}, '{{$json["full_name"]}}', '{{$json["email_address"]}}', '{{$json["last_modified"]}}')
ON CONFLICT (customer_id)
DO UPDATE SET full_name = EXCLUDED.full_name,
  email_address = EXCLUDED.email_address,
  last_modified = EXCLUDED.last_modified
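Incremental syncs also need a trustworthy last_sync_time. A sketch of a Code node that computes the next watermark from the fetched rows, assuming updated_at is an ISO-formatted timestamp so lexicographic sorting matches chronological order:
// Code node ("Run Once for All Items") after the source SELECT.
// Finds the newest updated_at so the next run can use it as
// last_sync_time; falls back to the previous watermark on empty batches.
const items = $input.all();
const previous = '1970-01-01T00:00:00Z'; // replace with the stored watermark

const newest = items
  .map(item => item.json.updated_at)
  .filter(Boolean)
  .sort()
  .pop() || previous;

return [{ json: { last_sync_time: newest, rowCount: items.length } }];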
Step 12: Implementing Database Security Best Practices
A. Managing Credentials
Store database credentials securely:
- Use n8n's built-in credentials manager instead of hard-coding values in nodes
- Set a strong N8N_ENCRYPTION_KEY so stored credentials are encrypted at rest
- Keep secrets out of exported workflow JSON and version control
- Rotate database passwords periodically
B. Implementing Least Privilege
Create database users with minimal required permissions:
-- MySQL example for read-only user
CREATE USER 'n8n_readonly'@'%' IDENTIFIED BY 'strong-password';
GRANT SELECT ON database_name.* TO 'n8n_readonly'@'%';
FLUSH PRIVILEGES;
-- PostgreSQL example for specific table access
CREATE USER n8n_app WITH PASSWORD 'strong-password';
GRANT CONNECT ON DATABASE my_database TO n8n_app;
GRANT SELECT, INSERT, UPDATE ON table_name TO n8n_app;
C. Using Parameterized Queries
Always use parameterized queries to prevent SQL injection:
-- Instead of:
SELECT * FROM users WHERE username = '{{$json["username"]}}'
-- Use:
SELECT * FROM users WHERE username = $1
With query parameters set to:
[
  "{{$json["username"]}}"
]
Step 13: Optimizing Database Performance
A. Implementing Connection Pooling
For high-volume workflows, set up connection pooling:
// PostgreSQL credentials with pool options
{
  "host": "your-postgres-server",
  "port": 5432,
  "database": "your-database",
  "user": "your-username",
  "password": "your-password",
  "ssl": false,
  "poolOptions": {
    "min": 2,
    "max": 10,
    "idleTimeoutMillis": 30000
  }
}
// MySQL credentials with a connection limit
{
  "host": "your-mysql-server",
  "port": 3306,
  "database": "your-database",
  "user": "your-username",
  "password": "your-password",
  "ssl": false,
  "connectionLimit": 10
}
B. Query Optimization
Optimize database queries for performance:
-- Create an optimized view
CREATE VIEW active_customers AS
SELECT customer_id, name, email
FROM customers
WHERE status = 'active'
Then in n8n:
SELECT * FROM active_customers LIMIT 100
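For result sets too large for one query, paginate instead. A sketch of a Code node that builds a keyset-paginated query against the view; the page size and the lastSeenId state field are assumptions:
// Code node that builds the query for the next batch. lastSeenId is
// carried between runs (for example via an incoming item); 500 rows per
// page is an arbitrary example value.
const lastSeenId = $input.item.json.lastSeenId || 0; // numeric, so safe to interpolate
const pageSize = 500;

return {
  json: {
    query: `SELECT customer_id, name, email
            FROM active_customers
            WHERE customer_id > ${lastSeenId}
            ORDER BY customer_id
            LIMIT ${pageSize}`
  }
};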
Step 14: Implementing Data Validation and Error Handling
A. Validate Data Before Database Operations
Use a Function node to validate data:
// Data validation example
const data = $input.item.json;
const errors = [];

if (!data.email || !data.email.includes('@')) {
  errors.push('Invalid email address');
}

if (!data.name || data.name.length < 2) {
  errors.push('Name is too short');
}

if (errors.length > 0) {
  return { valid: false, errors };
} else {
  return { valid: true, data };
}
B. Handling Database Errors
// Function node to implement retry logic
const maxRetries = 3;
const retryCount = $input.item.json.retryCount || 0;

if ($input.item.json.error && retryCount < maxRetries) {
  // Wait with exponential backoff before retrying
  const waitTime = Math.pow(2, retryCount) * 1000;
  return new Promise((resolve) => {
    setTimeout(() => {
      resolve([{
        ...($input.item.json.originalData || {}),
        retryCount: retryCount + 1
      }]);
    }, waitTime);
  });
} else if (retryCount >= maxRetries) {
  // Max retries reached, log and alert
  return [{
    status: 'failed',
    error: $input.item.json.error,
    message: 'Max retries exceeded'
  }];
} else {
  // No error, continue normally
  return $input.item;
}
Step 15: Creating a Complete Database CRUD Workflow
Let's create a complete workflow for managing a "products" table:
A. Setting Up the API Endpoints
// Webhook configuration
{
  "path": "products",
  "responseMode": "lastNode",
  "options": {
    "allowedMethods": ["GET", "POST", "PUT", "DELETE"]
  }
}
B. Implementing the CRUD Operations
// Switch based on request method
{
  "rules": [
    {
      "value": "GET",
      "conditions": [{
        "value1": "={{$node['Webhook'].json['method']}}",
        "operation": "equals",
        "value2": "GET"
      }]
    },
    {
      "value": "POST",
      "conditions": [{
        "value1": "={{$node['Webhook'].json['method']}}",
        "operation": "equals",
        "value2": "POST"
      }]
    },
    {
      "value": "PUT",
      "conditions": [{
        "value1": "={{$node['Webhook'].json['method']}}",
        "operation": "equals",
        "value2": "PUT"
      }]
    },
    {
      "value": "DELETE",
      "conditions": [{
        "value1": "={{$node['Webhook'].json['method']}}",
        "operation": "equals",
        "value2": "DELETE"
      }]
    }
  ]
}
// If query parameter id exists, get one product
{{$node["Webhook"].json["query"]["id"] !== undefined ?
  `SELECT * FROM products WHERE id = ${$node["Webhook"].json["query"]["id"]}` :
  "SELECT * FROM products ORDER BY id"}}
const data = $node["Webhook"].json["body"];
const required = ["name", "price", "category"];
const errors = [];

for (const field of required) {
  if (!data[field]) {
    errors.push(`Missing required field: ${field}`);
  }
}

if (errors.length > 0) {
  return { valid: false, errors };
}

return {
  valid: true,
  data: {
    name: data.name,
    price: parseFloat(data.price),
    category: data.category,
    description: data.description || null
  }
};
INSERT INTO products (name, price, category, description)
VALUES (
  '{{$node["Validation"].json["data"]["name"]}}',
  {{$node["Validation"].json["data"]["price"]}},
  '{{$node["Validation"].json["data"]["category"]}}',
  {{$node["Validation"].json["data"]["description"] ? "'" + $node["Validation"].json["data"]["description"] + "'" : "NULL"}}
)
RETURNING *
UPDATE products
SET
  name = '{{$node["Update_Validation"].json["data"]["name"]}}',
  price = {{$node["Update_Validation"].json["data"]["price"]}},
  category = '{{$node["Update_Validation"].json["data"]["category"]}}',
  description = {{$node["Update_Validation"].json["data"]["description"] ? "'" + $node["Update_Validation"].json["data"]["description"] + "'" : "NULL"}}
WHERE id = {{$node["Webhook"].json["query"]["id"]}}
RETURNING *
DELETE FROM products
WHERE id = {{$node["Webhook"].json["query"]["id"]}}
RETURNING id
// Format successful response
{
  "statusCode": 200,
  "data": {{$json}},
  "success": true
}
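A matching error path keeps the API predictable. A sketch of a Code node that formats validation failures, reusing the valid and errors fields produced by the validation logic above:
// Code node that turns validation results into a consistent response body.
const result = $input.item.json;

if (!result.valid) {
  return {
    json: {
      statusCode: 400,
      success: false,
      errors: result.errors
    }
  };
}

return { json: { statusCode: 200, success: true, data: result.data } };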
Step 16: Implementing Database Backup with N8n
Create an automated backup workflow:
A. Setting Up Regular Backups
# For PostgreSQL
PGPASSWORD=your-password pg_dump -h your-postgres-server -U your-username -d your-database -F c -f /backups/backup-{{$today}}.dump
Or for MySQL:
# For MySQL
mysqldump -h your-mysql-server -u your-username -p'your-password' your-database > /backups/backup-{{$today}}.sql
B. Uploading Backups to Cloud Storage
// S3 Upload configuration
{
  "operation": "upload",
  "bucket": "your-backup-bucket",
  "fileName": "database-backups/backup-{{$today}}.sql",
  "filePath": "/backups/backup-{{$today}}.sql"
}
# Cleanup old backups
find /backups -name "*.sql" -mtime +7 -delete
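Before uploading, it is worth sanity-checking that the dump file exists and has a plausible size. A sketch using Node's fs module; note that Code nodes can only require built-in modules when the instance sets NODE_FUNCTION_ALLOW_BUILTIN accordingly, and the backupPath field is an assumption:
// Code node that verifies the dump before the S3 upload. Requires
// NODE_FUNCTION_ALLOW_BUILTIN=fs (or *) on the n8n instance.
const fs = require('fs');

const path = $input.item.json.backupPath; // e.g. /backups/backup-2024-01-01.sql
const stats = fs.statSync(path); // throws if the file is missing

if (stats.size < 1024) {
  throw new Error(`Backup suspiciously small (${stats.size} bytes): ${path}`);
}

return { json: { backupPath: path, sizeBytes: stats.size } };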
Step 17: Monitoring Database Health with N8n
Create a workflow to monitor database health:
A. Checking Database Connection
-- Health check query
SELECT 1 AS health_check
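To turn a failed connection into an alertable item instead of a crashed execution, set the query node to continue on fail and interpret its output in a Code node. A minimal sketch:
// Code node after the health-check query, with the query node configured
// to "Continue on Fail" so errors arrive as data rather than halting the run.
const result = $input.item.json;

if (result.error || result.health_check !== 1) {
  return {
    json: {
      healthy: false,
      checkedAt: new Date().toISOString(),
      detail: result.error || 'Unexpected health check result'
    }
  };
}

return { json: { healthy: true, checkedAt: new Date().toISOString() } };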
B. Monitoring Database Metrics
-- PostgreSQL metrics
SELECT
  count(*) AS active_connections,
  EXTRACT(EPOCH FROM max(now() - query_start)) AS longest_running_query_seconds
FROM pg_stat_activity
WHERE state = 'active'
// Check if metrics are within acceptable ranges
const metrics = $input.item.json;
const alerts = [];

if (metrics.active_connections > 100) {
  alerts.push(`High connection count: ${metrics.active_connections}`);
}

if (metrics.longest_running_query_seconds > 300) {
  alerts.push(`Long-running query: ${Math.round(metrics.longest_running_query_seconds)}s`);
}

if (alerts.length > 0) {
  return {
    hasAlerts: true,
    alerts: alerts.join(', '),
    metrics
  };
} else {
  return { hasAlerts: false, metrics };
}
Conclusion and Best Practices
Step 18: Optimizing N8n Database Workflows
A. Performance Best Practices
- Use connection pooling for high-volume workflows
- Limit result sets with LIMIT clauses or pagination, and query purpose-built views instead of raw tables
- Batch inserts and updates where possible rather than writing row by row
B. Security Best Practices
- Store credentials in n8n's credentials manager, never in workflow nodes
- Use parameterized queries wherever user input reaches SQL
- Grant n8n's database users only the permissions they need, and enable SSL/TLS for remote connections
C. Reliability Best Practices
- Validate data before writing it, and wrap multi-step writes in transactions
- Add retry logic with exponential backoff for transient failures
- Monitor database health and automate backups, as shown in the steps above
By following this guide, you can effectively integrate n8n with your self-hosted databases to create powerful, secure, and reliable workflow automations that leverage your data for business processes.