
How to fix “ETIMEDOUT” errors when calling a large language model in n8n?

Learn how to fix ETIMEDOUT errors in n8n when calling large language models by increasing timeouts, adding retries, optimizing requests, and ensuring stable network connections.

Matt Graham, CEO of Rapid Developers


ETIMEDOUT errors when calling large language models in n8n typically occur due to connection timeouts. These can be resolved by increasing timeout settings, implementing retry logic, using a reliable internet connection, optimizing workflow design, or checking API endpoint availability. The solution often involves adjusting node configuration to accommodate slower response times from LLM APIs.

 

Step 1: Understand the ETIMEDOUT Error

 

ETIMEDOUT is the error code Node.js surfaces when a network request times out, that is, when a connection or response takes longer than the allowed time to complete. When working with large language models (LLMs) in n8n, these errors typically happen because:

  • LLM APIs can take longer to respond, especially for complex prompts
  • The default timeout settings in n8n might be too short for LLM processing
  • Network connectivity issues between n8n and the LLM service
  • Server load on either n8n or the LLM provider

 

Step 2: Increase Request Timeout Settings

 

The most direct solution is to increase the timeout value in your HTTP Request node:

  1. Open your n8n workflow
  2. Locate the HTTP Request node that calls the LLM
  3. In the node settings, find the "Options" section
  4. Set a higher timeout value (in milliseconds)

// Example of setting timeout in HTTP Request node options
{
  "timeout": 120000  // 2 minutes instead of default 10 seconds
}

For OpenAI or other LLM-specific nodes, look for timeout settings in the node configuration.
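
If the entire workflow execution (not just the HTTP call) is being cut off, a self-hosted n8n instance also lets you raise the global execution timeout. A minimal sketch, assuming the standard environment variables (values are in seconds):


# n8n execution timeout settings (self-hosted), values in seconds
EXECUTIONS_TIMEOUT=3600
EXECUTIONS_TIMEOUT_MAX=7200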

 

Step 3: Implement Retry Logic

 

Add retry capabilities to handle temporary timeouts:

  1. Add an "Error Trigger" node after your LLM request node
  2. Connect it to another HTTP Request node that retries the operation
  3. Configure retry limits and backoff strategy

You can also use the n8n Retry node if available, or implement custom retry logic using Function nodes:


// Example retry function for a Function node
let maxRetries = 3;
let currentRetry = $input.first().json?.retryCount || 0;

if (currentRetry < maxRetries) {
  // Exponential backoff
  const waitTime = Math.pow(2, currentRetry) * 1000;
  
  // Return with retry count increased
  return [
    {
      json: {
        ...$input.first().json,
        retryCount: currentRetry + 1,
        waitTime: waitTime,
        shouldRetry: true
      }
    }
  ];
} else {
  // Max retries reached
  return [
    {
      json: {
        ...$input.first().json,
        shouldRetry: false,
        error: "Max retries reached"
      }
    }
  ];
}

 

Step 4: Optimize Your LLM Requests

 

Reducing request complexity can help avoid timeouts:

  1. Break down large prompts into smaller chunks (see the chunking sketch below)
  2. Reduce the maximum token count in your requests
  3. Set appropriate temperature and other parameters to optimize response speed

Example for OpenAI API settings:


{
  "model": "gpt-3.5-turbo",
  "messages": [
    {"role": "system", "content": "Keep responses concise and to the point."},
    {"role": "user", "content": "{{$node["Input"].json.prompt}}"}
  ],
  "max\_tokens": 500,
  "temperature": 0.7
}
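
If the prompt itself is very large, you can split it in a Function node before the LLM call instead of sending everything in one request. A minimal sketch, assuming the incoming item carries the raw text in a text field (adjust the field name and chunk size to your data):


// Split long input text into roughly 2,000-character chunks, one item per chunk
const text = $input.first().json.text || '';
const chunkSize = 2000;
const chunks = [];

for (let i = 0; i < text.length; i += chunkSize) {
  chunks.push(text.slice(i, i + chunkSize));
}

// Each chunk becomes its own item, so downstream nodes call the LLM per chunk
return chunks.map(chunk => ({ json: { prompt: chunk } }));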

 

Step 5: Check Your Network Configuration

 

Network issues often cause ETIMEDOUT errors:

  1. Verify your internet connection is stable
  2. If using n8n in a Docker container or behind a proxy, check network settings
  3. Ensure firewall rules allow outbound connections to the LLM API endpoints

For Docker users, check your container's network configuration:


# Example of running n8n with host network mode for better connectivity
docker run -it --rm --network host n8nio/n8n
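
To confirm that the machine (or container) running n8n can actually reach the provider, a quick connectivity check from that host helps separate network problems from n8n configuration problems. A minimal sketch against the OpenAI API (substitute your provider's endpoint and key):


# Check outbound connectivity and response time from the n8n host
curl -sS -o /dev/null -w "HTTP %{http_code} in %{time_total}s\n" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  https://api.openai.com/v1/models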

 

Step 6: Use Streaming Responses When Available

 

Many LLM providers offer streaming responses which can help avoid timeouts:

  1. Look for streaming options in your LLM node settings
  2. Enable streaming if available (for example, in OpenAI nodes)
  3. Process streamed responses appropriately in your workflow

Example for enabling streaming in an OpenAI HTTP request:


{
  "model": "gpt-3.5-turbo",
  "messages": [
    {"role": "user", "content": "{{$node["Input"].json.prompt}}"}
  ],
  "stream": true
}

 

Step 7: Monitor API Rate Limits

 

Rate limiting can sometimes manifest as timeout errors:

  1. Check your LLM provider's documentation for rate limits
  2. Implement delay nodes between requests to avoid hitting limits
  3. Consider upgrading your API plan if you're consistently hitting limits

Adding a delay between requests:


// In a Function node, add a delay between requests
function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

// Wait for 1 second before proceeding
await sleep(1000);

return $input.all();

 

Step 8: Check LLM Service Status

 

Sometimes the issue may be on the LLM provider's side:

  1. Check the status page of your LLM provider (e.g., OpenAI, Anthropic)
  2. Verify if there are any reported outages or degraded performance
  3. Join community forums or status alert channels for real-time updates

Common status pages:

  • OpenAI: https://status.openai.com/
  • Anthropic: https://status.anthropic.com/
  • Cohere: https://status.cohere.com/

 

Step 9: Update n8n and Node Packages

 

Outdated software can sometimes cause timeout issues:

  1. Update your n8n installation to the latest version
  2. Update the LLM integration nodes if they're community nodes
  3. Check the changelog for any fixes related to timeout handling

For self-hosted n8n:


# Update n8n using npm
npm update -g n8n

# Or using Docker
docker pull n8nio/n8n:latest

 

Step 10: Implement Circuit Breaker Pattern

 

For production workflows, implement a circuit breaker to prevent cascading failures:

  1. Create a Function node that tracks consecutive failures
  2. If failures exceed a threshold, pause requests temporarily
  3. Gradually try again after a cooling-off period

Example implementation:


// Circuit breaker implementation in a Function node
const workflow = {
  failureCount: $input.first().json?.circuitBreaker?.failureCount || 0,
  isOpen: $input.first().json?.circuitBreaker?.isOpen || false,
  lastFailure: $input.first().json?.circuitBreaker?.lastFailure || null,
  threshold: 3,
  resetTimeout: 60000 // 1 minute
};

// Check if circuit is open (tripped)
if (workflow.isOpen) {
  const timeSinceLastFailure = Date.now() - workflow.lastFailure;
  
  if (timeSinceLastFailure > workflow.resetTimeout) {
    // Reset circuit breaker for retry
    workflow.isOpen = false;
    workflow.failureCount = 0;
  } else {
    // Circuit still open, avoid making the request
    return [
      {
        json: {
          error: "Circuit breaker open, request not attempted",
          circuitBreaker: workflow,
          shouldSkipRequest: true
        }
      }
    ];
  }
}

// Return updated circuit breaker state
return [
  {
    json: {
      ...$input.first().json,
      circuitBreaker: workflow,
      shouldSkipRequest: false
    }
  }
];
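
The sketch above only decides whether a request should be attempted. To actually trip the breaker, a second Function node on the error path can record each failure, for example (a sketch using the same circuitBreaker structure):


// In a Function node on the error branch: record the failure and trip the breaker
const cb = $input.first().json?.circuitBreaker || { failureCount: 0, threshold: 3, resetTimeout: 60000 };

cb.failureCount = (cb.failureCount || 0) + 1;
cb.lastFailure = Date.now();
cb.isOpen = cb.failureCount >= (cb.threshold || 3);

return [
  {
    json: {
      ...$input.first().json,
      circuitBreaker: cb
    }
  }
];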

 

Step 11: Use Webhook Callbacks for Long-Running Requests

 

For very complex LLM tasks, consider a webhook-based approach:

  1. Set up a webhook node in n8n to receive callback responses
  2. Configure your LLM request to use a callback URL
  3. Split your workflow into request and response handling sections

This approach is particularly useful for AI services that support asynchronous processing.
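
The exact setup depends on the provider; most mainstream LLM APIs do not accept a callback URL directly, so this pattern mainly applies to asynchronous or batch endpoints. As a purely hypothetical sketch, assuming a service that accepts a callback_url parameter and an n8n Webhook node listening at the given path:


{
  "model": "your-model",
  "prompt": "{{ $json.prompt }}",
  "callback_url": "https://your-n8n-instance.example.com/webhook/llm-result"  // hypothetical parameter; check your provider's docs
}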

 

Step 12: Troubleshoot with Detailed Logging

 

Enable detailed logging to better understand timeout issues:

  1. Increase n8n's log verbosity in your environment variables
  2. Add Function nodes with console.log statements to track request progress
  3. Monitor response times to identify patterns in timeouts

Setting up enhanced logging:


# Environment variable for n8n with increased logging
N8N_LOG_LEVEL=debug

// In a Function node before the request, record the start time on the item
const startTime = Date.now();
console.log('Starting LLM request at:', new Date().toISOString());
return [{ json: { ...$input.first().json, startTime } }];

// Your LLM request node runs between these two Function nodes

// In a subsequent Function node, read the start time back from the item
console.log('Request completed in:', Date.now() - $input.first().json.startTime, 'ms');
return $input.all();

 

Step 13: Consider Alternative LLM Providers

 

If timeout issues persist with one provider, consider alternatives:

  1. Test your workflow with different LLM providers
  2. Compare response times and reliability
  3. Implement fallback logic to try alternative providers when one fails

Example of a fallback strategy:


// In a Function node after a failed primary LLM request
if ($input.first().json?.error && $input.first().json.error.includes("ETIMEDOUT")) {
  // Set up for fallback provider
  return [
    {
      json: {
        useFallbackProvider: true,
        originalPrompt: $input.first().json.prompt,
        errorDetails: $input.first().json.error
      }
    }
  ];
} else {
  // Continue with successful response
  return $input.all();
}

 

Step 14: Use Caching for Repeated Requests

 

Implement caching to reduce the need for repeated API calls:

  1. Store LLM responses in n8n's workflow data
  2. Check for cached responses before making new requests
  3. Implement cache expiration logic based on your needs

Example caching implementation:


// In a Function node before making the LLM request
// Note: require() of built-in modules such as crypto must be allowed on
// self-hosted instances (NODE_FUNCTION_ALLOW_BUILTIN=crypto)
const prompt = $input.first().json.prompt;
const cacheKey = require('crypto').createHash('md5').update(prompt).digest('hex');

// Workflow static data persists between production executions
const staticData = $getWorkflowStaticData('global');
const cached = staticData.cache?.[cacheKey];

if (cached && Date.now() - cached.timestamp < 3600000) { // 1 hour cache
  // Return cached response and skip the LLM call
  return [{ json: { 
    result: cached.result,
    fromCache: true,
    cacheAge: Math.round((Date.now() - cached.timestamp) / 1000) + ' seconds'
  }}];
}

// Continue to LLM request if not cached
return [{ json: { 
  prompt,
  cacheKey
}}];

// In a Function node after a successful LLM request
const result = $input.first().json.result;
const cacheKey = $input.first().json.cacheKey;

// Initialize the cache in workflow static data if needed
const staticData = $getWorkflowStaticData('global');
if (!staticData.cache) {
  staticData.cache = {};
}

// Store the response in the cache
staticData.cache[cacheKey] = {
  result,
  timestamp: Date.now()
};

return $input.all();

 

Step 15: Monitor and Optimize Your n8n Environment

 

System resources can impact request handling:

  1. Ensure your n8n instance has sufficient CPU and memory resources
  2. Monitor system load during workflow execution
  3. Consider scaling up your n8n environment for resource-intensive workflows

For Docker deployments, adjust container resources:


# Run n8n with increased memory limits
docker run -it --memory=2g --memory-swap=4g n8nio/n8n
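
If you deploy with Docker Compose instead of plain docker run, the equivalent limits can go in the service definition. A minimal sketch of the relevant excerpt (adjust values to your host; newer Compose versions also support deploy.resources.limits):


# docker-compose.yml (excerpt)
services:
  n8n:
    image: n8nio/n8n
    mem_limit: 2g
    memswap_limit: 4g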

 

Conclusion: Handling ETIMEDOUT Errors Effectively

 

ETIMEDOUT errors when working with LLMs in n8n can be frustrating but are manageable with the right approach. Start by increasing timeout settings and implementing retry logic. Optimize your requests and ensure your network configuration is correct. For production workflows, consider implementing circuit breakers, webhooks for asynchronous processing, and caching strategies.

Remember that LLMs are computationally intensive services, and occasional timeouts may be unavoidable. Building resilience into your workflows will help ensure they continue to function reliably even when facing intermittent timeout issues.
