Learn how to fix ETIMEDOUT errors in n8n when calling large language models by increasing timeouts, adding retries, optimizing requests, and ensuring stable network connections.
ETIMEDOUT errors when calling large language models in n8n typically occur due to connection timeouts. These can be resolved by increasing timeout settings, implementing retry logic, using a reliable internet connection, optimizing workflow design, or checking API endpoint availability. The solution often involves adjusting node configuration to accommodate slower response times from LLM APIs.
Step 1: Understand the ETIMEDOUT Error
ETIMEDOUT (the Node.js error code for a connection that timed out) occurs when a network request takes longer than the allowed time to complete. When working with large language models (LLMs) in n8n, these errors typically happen because responses to long or complex prompts take far longer than a node's default timeout, the provider's API is under heavy load or throttling your requests, the network path between n8n and the API is slow or unstable, or the machine running n8n is itself resource-constrained.
Step 2: Increase Request Timeout Settings
The most direct solution is to increase the timeout value in your HTTP Request node:
// Example of setting the Timeout option (in milliseconds) in the HTTP Request node
{
  "timeout": 120000 // 2 minutes instead of the default
}
For OpenAI or other LLM-specific nodes, look for timeout settings in the node configuration.
Step 3: Implement Retry Logic
Add retry capabilities to handle temporary timeouts. The simplest option is to enable the built-in "Retry On Fail" setting (with "Max Tries" and "Wait Between Tries") on the failing node, or you can implement custom retry logic using a Function node:
// Example retry function for a Function node
const maxRetries = 3;
const currentRetry = $input.first().json?.retryCount || 0;

if (currentRetry < maxRetries) {
  // Exponential backoff: 1s, 2s, 4s, ...
  const waitTime = Math.pow(2, currentRetry) * 1000;

  // Return with the retry count increased
  return [
    {
      json: {
        ...$input.first().json,
        retryCount: currentRetry + 1,
        waitTime: waitTime,
        shouldRetry: true
      }
    }
  ];
} else {
  // Max retries reached
  return [
    {
      json: {
        ...$input.first().json,
        shouldRetry: false,
        error: "Max retries reached"
      }
    }
  ];
}
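This sketch only computes waitTime; it does not pause on its own. One way to use it (assuming you add an IF node and a Wait node to form a loop) is to route items with shouldRetry: true into a Wait node whose wait amount reads the computed value, then back into the LLM request node:
// Expression for the Wait node's wait amount (in seconds), based on the item produced above
{{ $json.waitTime / 1000 }}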
Step 4: Optimize Your LLM Requests
Reducing request complexity can help avoid timeouts: shorten your prompts, lower max_tokens, and instruct the model to keep responses concise.
Example for OpenAI API settings:
{
  "model": "gpt-3.5-turbo",
  "messages": [
    {"role": "system", "content": "Keep responses concise and to the point."},
    {"role": "user", "content": "{{ $node['Input'].json.prompt }}"}
  ],
  "max_tokens": 500,
  "temperature": 0.7
}
Step 5: Check Your Network Configuration
Network issues often cause ETIMEDOUT errors, so confirm that the machine running n8n has stable outbound connectivity to the LLM API and that no proxy or firewall is throttling or blocking the requests.
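As a quick check from the machine running n8n, a timed curl request (shown against the OpenAI API as an example endpoint; assumes OPENAI_API_KEY is set in your shell) can tell you whether the network path itself is slow:
# Measure how long a basic API call takes from the n8n host
curl -s -o /dev/null \
  -w "HTTP %{http_code} in %{time_total}s\n" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  https://api.openai.com/v1/models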
For Docker users, check your container's network configuration:
# Example of running n8n with host network mode for better connectivity
docker run -it --rm --network host n8nio/n8n
Step 6: Use Streaming Responses When Available
Many LLM providers offer streaming responses, which can help avoid timeouts because data starts arriving before the full completion is generated:
Example for enabling streaming in an OpenAI HTTP request:
{
  "model": "gpt-3.5-turbo",
  "messages": [
    {"role": "user", "content": "{{ $node['Input'].json.prompt }}"}
  ],
  "stream": true
}
Step 7: Monitor API Rate Limits
Rate limiting can sometimes manifest as timeout errors. If you are sending many requests in quick succession, space them out, for example by adding a delay between requests:
// In a Function node, add a delay between requests
function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

// Wait for 1 second before proceeding
await sleep(1000);

return $input.all();
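If your provider returns rate-limit headers, you can also react to them instead of sleeping blindly. A minimal sketch, assuming the HTTP Request node has "Include Response Headers and Status" enabled and that the provider uses OpenAI-style header names:
// In a Function node after the LLM HTTP Request node
const headers = $input.first().json.headers || {};
const remaining = Number(headers['x-ratelimit-remaining-requests'] ?? Infinity);

if (remaining < 5) {
  // Close to the limit: flag the item so downstream nodes can slow down or wait
  return [{ json: { ...$input.first().json, slowDown: true } }];
}

return $input.all();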
Step 8: Check LLM Service Status
Sometimes the issue may be on the LLM provider's side:
Common status pages include status.openai.com for OpenAI and status.anthropic.com for Anthropic; most other providers publish a similar status or health page.
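If you want a workflow to check availability before calling the model, providers whose status pages run on Atlassian Statuspage expose a JSON endpoint you can poll; the OpenAI URL below assumes that convention:
# Quick status check (Statuspage JSON convention; verify the exact URL for your provider)
curl -s https://status.openai.com/api/v2/status.json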
Step 9: Update n8n and Node Packages
Outdated software can sometimes cause timeout issues:
For self-hosted n8n:
# Update n8n using npm
npm update -g n8n
# Or using Docker
docker pull n8nio/n8n:latest
Step 10: Implement Circuit Breaker Pattern
For production workflows, implement a circuit breaker to prevent cascading failures:
Example implementation:
// Circuit breaker implementation in a Function node
// (a downstream error-handling node is expected to increment failureCount,
//  set lastFailure, and open the circuit once the threshold is exceeded)
const breaker = {
  failureCount: $input.first().json?.circuitBreaker?.failureCount || 0,
  isOpen: $input.first().json?.circuitBreaker?.isOpen || false,
  lastFailure: $input.first().json?.circuitBreaker?.lastFailure || null,
  threshold: 3,
  resetTimeout: 60000 // 1 minute
};

// Check if the circuit is open (tripped)
if (breaker.isOpen) {
  const timeSinceLastFailure = Date.now() - breaker.lastFailure;

  if (timeSinceLastFailure > breaker.resetTimeout) {
    // Reset the circuit breaker and allow a retry
    breaker.isOpen = false;
    breaker.failureCount = 0;
  } else {
    // Circuit still open: avoid making the request
    return [
      {
        json: {
          error: "Circuit breaker open, request not attempted",
          circuitBreaker: breaker,
          shouldSkipRequest: true
        }
      }
    ];
  }
}

// Circuit closed: pass the item on with the updated breaker state
return [
  {
    json: {
      ...$input.first().json,
      circuitBreaker: breaker,
      shouldSkipRequest: false
    }
  }
];
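The sketch above only reads and resets the breaker state. A minimal companion Function node placed on the error path of the LLM request (an assumption about how your error handling is wired) could record failures and trip the breaker:
// In a Function node on the error branch of the LLM request
const previous = $input.first().json?.circuitBreaker || {};
const failureCount = (previous.failureCount || 0) + 1;
const threshold = previous.threshold || 3;

return [
  {
    json: {
      ...$input.first().json,
      circuitBreaker: {
        ...previous,
        failureCount,
        lastFailure: Date.now(),
        // Trip the breaker once the failure threshold is reached
        isOpen: failureCount >= threshold
      }
    }
  }
];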
Step 11: Use Webhook Callbacks for Long-Running Requests
For very complex, long-running LLM tasks, consider a webhook-based approach: submit the job in one workflow and let the provider (or an intermediate service) call back into an n8n Webhook node when the result is ready, instead of keeping a single HTTP request open.
This approach is particularly useful for AI services that support asynchronous processing.
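How the callback is registered varies by provider; as a purely illustrative sketch, an asynchronous submission body might pass the URL of an n8n Webhook node (the callback_url field and the webhook path here are hypothetical):
{
  "model": "gpt-3.5-turbo",
  "prompt": "{{ $node['Input'].json.prompt }}",
  "callback_url": "https://your-n8n-instance.example.com/webhook/llm-result"
}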
Step 12: Troubleshoot with Detailed Logging
Enable detailed logging to better understand timeout issues:
Setting up enhanced logging:
# Environment variable for n8n with increased logging
N8N_LOG_LEVEL=debug

// In a Function node before the LLM request, record the start time on the item
// (local variables do not persist between nodes, so pass the value along in the JSON)
const startTime = Date.now();
console.log('Starting LLM request at:', new Date().toISOString());
return [{ json: { ...$input.first().json, startTime } }];

// In a subsequent Function node, after the request has run
console.log('Request completed in:', Date.now() - $input.first().json.startTime, 'ms');
return $input.all();
Step 13: Consider Alternative LLM Providers
If timeout issues persist with one provider, consider alternatives such as Anthropic Claude, Google Gemini, Azure OpenAI, or a self-hosted model, and build a fallback path into the workflow.
Example of a fallback strategy:
// In a Function node after a failed primary LLM request
if ($input.first().json?.error && $input.first().json.error.includes("ETIMEDOUT")) {
  // Set up for the fallback provider
  return [
    {
      json: {
        useFallbackProvider: true,
        originalPrompt: $input.first().json.prompt,
        errorDetails: $input.first().json.error
      }
    }
  ];
} else {
  // Continue with the successful response
  return $input.all();
}
Step 14: Use Caching for Repeated Requests
Implement caching to reduce the need for repeated API calls:
Example caching implementation:
// In a Function node before making the LLM request
// (requiring built-in modules needs NODE_FUNCTION_ALLOW_BUILTIN to include "crypto")
const prompt = $input.first().json.prompt;
const cacheKey = require('crypto').createHash('md5').update(prompt).digest('hex');

// Check workflow static data for a cached response
// (static data persists between production executions, not manual test runs)
const staticData = $getWorkflowStaticData('global');
const cached = staticData.cache?.[cacheKey];

if (cached && Date.now() - cached.timestamp < 3600000) { // 1 hour cache
  // Return the cached response
  return [{ json: {
    result: cached.result,
    fromCache: true,
    cacheAge: Math.round((Date.now() - cached.timestamp) / 1000) + ' seconds'
  }}];
}

// Continue to the LLM request if not cached
return [{ json: {
  prompt,
  cacheKey
}}];

// In a Function node after a successful LLM request
const result = $input.first().json.result;
const cacheKey = $input.first().json.cacheKey;
const staticData = $getWorkflowStaticData('global');

// Initialize the cache if needed
if (!staticData.cache) {
  staticData.cache = {};
}

// Store the result in the cache
staticData.cache[cacheKey] = {
  result,
  timestamp: Date.now()
};

return $input.all();
Step 15: Monitor and Optimize Your n8n Environment
System resources can impact request handling: if the machine running n8n is low on memory or CPU, even healthy API calls can appear to time out.
For Docker deployments, adjust container resources:
# Run n8n with increased memory limits
docker run -it --memory=2g --memory-swap=4g n8nio/n8n
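You can also raise the Node.js heap available to the n8n process itself; a sketch using the standard NODE_OPTIONS variable (adjust the figure to your host's memory):
# Give the n8n process a larger Node.js heap (4 GB here)
docker run -it --memory=4g -e NODE_OPTIONS="--max-old-space-size=4096" n8nio/n8n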
Conclusion: Handling ETIMEDOUT Errors Effectively
ETIMEDOUT errors when working with LLMs in n8n can be frustrating but are manageable with the right approach. Start by increasing timeout settings and implementing retry logic. Optimize your requests and ensure your network configuration is correct. For production workflows, consider implementing circuit breakers, webhooks for asynchronous processing, and caching strategies.
Remember that LLMs are computationally intensive services, and occasional timeouts may be unavoidable. Building resilience into your workflows will help ensure they continue to function reliably even when facing intermittent timeout issues.