How to handle n8n timeout errors on long completions from Mistral?

Learn how to handle n8n timeout errors on long Mistral AI completions with timeout settings, webhooks, chunking, polling, error handling, and workflow patterns.

Matt Graham, CEO of Rapid Developers


To handle n8n timeout errors on long completions from Mistral, combine increased timeout settings, webhook callbacks, error handling strategies, and workflow design patterns. Together, these techniques let your n8n workflows manage long-running Mistral AI completions without hitting timeout errors.

 

Step 1: Understand the Timeout Problem with Mistral AI in n8n

 

Before implementing solutions, it's important to understand why timeouts occur when working with Mistral AI in n8n:

  • n8n enforces default timeout limits on HTTP requests (typically a few minutes), and an overall execution timeout can also be configured.
  • Mistral AI may take longer than this limit for complex or lengthy completions.
  • When the timeout is exceeded, n8n terminates the request and throws an error.
  • The error usually appears as "ETIMEDOUT" or "The request timed out".

 

Step 2: Adjust n8n Timeout Settings

 

The simplest first approach is to increase n8n's timeout limits:


# In your n8n configuration file or environment variables
EXECUTIONS_TIMEOUT=600  # 10 minutes (this value is in seconds)

If you're using Docker, you can set this in your docker-compose.yml file:


version: '3'
services:
  n8n:
    image: n8nio/n8n
    environment:
      - EXECUTIONS_TIMEOUT=600
    # other configuration...

For the HTTP Request node specifically, you can also set the timeout directly on the node (a sketch of the resulting parameters follows the list):

  1. Open your HTTP Request node that calls Mistral
  2. Go to "Options"
  3. Set "Timeout" to a higher value (in milliseconds)
  4. Save the node
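
When exported, the node's parameters would then contain the raised timeout. The exact option name and nesting can vary between n8n versions, so treat the following as an illustrative sketch rather than an exact schema:


// Illustrative HTTP Request node parameters with a raised timeout
{
  "url": "https://api.mistral.ai/v1/completions",
  "method": "POST",
  "options": {
    "timeout": 600000 // 10 minutes, in milliseconds
  }
}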

 

Step 3: Implement Asynchronous Processing with Webhooks

 

For potentially very long requests, switch to an asynchronous approach using webhooks:

Step A: Create a webhook receiving workflow in n8n:

  1. Create a new workflow
  2. Add a "Webhook" node as a trigger
  3. Configure it to receive POST requests
  4. Add processing nodes to handle the Mistral completion results
  5. Save and activate the webhook

Step B: Configure Mistral to use a callback approach (if supported by their API):


// Example HTTP Request node configuration for webhook callback
{
  "url": "https://api.mistral.ai/v1/completions",
  "method": "POST",
  "body": {
    "model": "mistral-medium",
    "prompt": "Your complex prompt here",
    "max\_tokens": 2000,
    "temperature": 0.7,
    "callback\_url": "https://your-n8n-instance.com/webhook/mistral-callback"
  }
}

If Mistral doesn't natively support webhooks, you'll need to implement a custom solution using a serverless function or another service that can wait for the completion and then call your webhook.
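
As a rough sketch of that relay pattern: the function below accepts a prompt plus the n8n webhook URL from Step A, waits for the completion however long it takes, and then posts the result back to the webhook. The handler shape, provider wiring, and use of Mistral's chat completions endpoint are assumptions to adapt to your setup, not part of the original workflow.


// Hypothetical serverless relay (AWS Lambda-style handler) - adjust to your provider
const axios = require('axios');

exports.handler = async (event) => {
  const { prompt, callbackUrl } = JSON.parse(event.body);

  // Call Mistral without a short client-side timeout; this function may run for minutes
  const completion = await axios.post(
    'https://api.mistral.ai/v1/chat/completions',
    {
      model: 'mistral-medium',
      messages: [{ role: 'user', content: prompt }],
      max_tokens: 2000
    },
    {
      headers: { Authorization: `Bearer ${process.env.MISTRAL_API_KEY}` },
      timeout: 0 // axios: 0 disables the client timeout
    }
  );

  // Post the finished completion back to the n8n webhook from Step A
  await axios.post(callbackUrl, {
    status: 'success',
    result: completion.data
  });

  return { statusCode: 202, body: JSON.stringify({ status: 'accepted' }) };
};

Keep your provider's own execution limits in mind; most serverless platforms cap run time at several minutes.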

 

Step 4: Implement Chunking for Large Prompts

 

Break down large prompts into smaller chunks to reduce processing time:


// JavaScript code for a Function node to split a prompt
const splitPrompt = (prompt, maxChunkSize = 1000) => {
  // Split by sentences to avoid cutting in the middle
  const sentences = prompt.match(/[^.!?]+[.!?]+/g) || [prompt];
  
  const chunks = [];
  let currentChunk = '';
  
  for (const sentence of sentences) {
    if (currentChunk && currentChunk.length + sentence.length > maxChunkSize) {
      chunks.push(currentChunk.trim());
      currentChunk = sentence;
    } else {
      currentChunk += sentence;
    }
  }
  
  if (currentChunk.trim()) {
    chunks.push(currentChunk.trim());
  }
  
  return chunks;
};

// Split the prompt into manageable chunks
const prompt = items[0].json.prompt;
const chunks = splitPrompt(prompt);

// Return chunks for further processing
return chunks.map(chunk => ({ json: { chunk } }));

Then process each chunk in a loop and concatenate the results.
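
After the loop, a small Function node can stitch the partial completions back together. This sketch assumes each item carries a completion field produced by the Mistral call inside the loop; adjust the field name to whatever your HTTP Request node actually returns.


// Function node sketch: merge per-chunk completions into one text
const merged = items
  .map(item => item.json.completion || '')
  .join('\n');

return [{ json: { completion: merged, chunkCount: items.length } }];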

 

Step 5: Implement Polling for Completion Status

 

If Mistral supports it, use a polling approach to check completion status:


// Example workflow structure for polling
// 1. Initial request to start completion and get a task ID
const startResponse = $node["HTTP Request"].json;
const taskId = startResponse.task_id;

// 2. Set up polling parameters
const maxAttempts = 30;
const pollInterval = 10000; // 10 seconds
let attempts = 0;
let result = null;

// 3. Poll until completion or max attempts reached
while (attempts < maxAttempts) {
  // Wait for interval
  await new Promise(resolve => setTimeout(resolve, pollInterval));
  
  // Check status (note: Function/Code nodes have no built-in makeRequest helper;
  // on recent n8n versions you can use this.helpers.httpRequest, or run this call
  // in a separate HTTP Request node instead)
  const statusResponse = await this.helpers.httpRequest({
    method: 'GET',
    url: `https://api.mistral.ai/v1/tasks/${taskId}`,
    headers: {
      'Authorization': `Bearer ${$node["Credentials"].json.apiKey}`
    },
    json: true
  });
  
  // If completed, get result and break
  if (statusResponse.status === 'completed') {
    result = statusResponse.result;
    break;
  }
  
  // If failed, throw error
  if (statusResponse.status === 'failed') {
    throw new Error(`Mistral task failed: ${statusResponse.error}`);
  }
  
  attempts++;
}

// Check if we reached max attempts
if (attempts >= maxAttempts) {
  throw new Error('Polling timed out after maximum attempts');
}

return { json: { result } };

 

Step 6: Use Multiple Wait Nodes for Extended Processing

 

For complex workflows, break up the processing with Wait nodes:

  1. Start the Mistral request
  2. Store the task ID in n8n's execution data
  3. Add a Wait node to pause execution for a set time
  4. After the wait, check completion status
  5. Either process the result or wait again

// Function node before Wait node to prepare for waiting
// Store necessary data for resuming later
const taskId = $node["HTTP Request"].json.task\_id;
const attemptsSoFar = $input.item.json.attempts || 0;

return {
  json: {
    taskId,
    attempts: attemptsSoFar + 1,
    waitUntil: new Date(Date.now() + 30000).toISOString() // wait 30 seconds
  }
};

Configure the Wait node to use the "Wait until specified date" option and reference the "waitUntil" field.
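
After the Wait node, a Function node can decide whether the task is finished or whether to loop back for another wait. The sketch below assumes the status check happens in a node named "Check Status" that returns status and result fields; both names are placeholders for your own workflow.


// Function node sketch: route after the Wait node
const check = $node["Check Status"].json;
const attempts = $input.item.json.attempts || 1;

if (check.status === 'completed') {
  return { json: { done: true, result: check.result } };
}

if (check.status === 'failed') {
  throw new Error('Mistral task failed while waiting');
}

if (attempts >= 20) {
  throw new Error('Gave up waiting for the Mistral task after 20 attempts');
}

// Not done yet: carry the counter forward and route back to the Wait node with an IF node
return {
  json: {
    done: false,
    attempts,
    waitUntil: new Date(Date.now() + 30000).toISOString()
  }
};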

 

Step 7: Implement Error Handling and Retry Logic

 

Add robust error handling to manage timeout issues:


// Error Handling in a Function node
let result;
try {
  // Attempt the request with a custom timeout (note: Function/Code nodes have no
  // built-in makeRequest helper; this.helpers.httpRequest works on recent n8n
  // versions, or set the timeout on a dedicated HTTP Request node instead)
  result = await this.helpers.httpRequest({
    url: "https://api.mistral.ai/v1/completions",
    method: "POST",
    body: {
      model: "mistral-large",
      prompt: items[0].json.prompt,
      max_tokens: 2000
    },
    json: true,
    timeout: 300000 // 5 minutes
  });
} catch (error) {
  // Check if it's a timeout error
  if (error.message.includes('timeout') || error.code === 'ETIMEDOUT') {
    // Implement retry logic
    const retryCount = $input.item.json.retryCount || 0;
    
    if (retryCount < 3) {
      // Return data for retry
      return {
        json: {
          retryCount: retryCount + 1,
          prompt: items[0].json.prompt,
          status: 'retrying'
        }
      };
    } else {
      // Max retries reached, handle gracefully
      return {
        json: {
          status: 'failed',
          error: 'Max retries reached for Mistral API',
          prompt: items[0].json.prompt
        }
      };
    }
  }
  
  // Handle other types of errors
  throw error;
}

// Process successful result
return { json: { 
  status: 'success',
  result,
  prompt: items[0].json.prompt
}};

Combine this with an IF node to route based on the status value.

 

Step 8: Optimize Mistral Parameters for Faster Responses

 

Adjust Mistral API parameters to potentially reduce completion times:


{
  "model": "mistral-medium", // Consider using a smaller model if appropriate
  "prompt": "Your prompt here",
  "max\_tokens": 500, // Limit maximum tokens
  "temperature": 0.3, // Lower temperature for more deterministic responses
  "top\_p": 0.8,
  "frequency\_penalty": 1.0, // Discourages repetition
  "presence\_penalty": 0.0
}

 

Step 9: Set Up Queue-Based Processing

 

For high-volume workflows, implement a queue-based approach:

  1. Create a workflow that adds Mistral requests to a queue (database or message broker)
  2. Set up a separate workflow that processes queue items one at a time
  3. Use the "Cron" node to periodically check and process the queue

// Example Function node for adding to a queue (using workflow static data as a simple queue)
const staticData = $getWorkflowStaticData('global');
staticData.queue = staticData.queue || [];
const newRequest = {
  id: Date.now().toString(),
  prompt: items[0].json.prompt,
  status: 'pending',
  timestamp: new Date().toISOString()
};

// Add to queue (static data is persisted automatically; there is no separate "set" call)
staticData.queue.push(newRequest);

return { json: {
  status: 'queued',
  requestId: newRequest.id,
  queuePosition: staticData.queue.length
}};

Then in your processing workflow:


// Function node to get the next queue item
const queue = $getWorkflowStaticData('global').queue || [];
if (queue.length === 0) {
  return { json: { status: 'empty_queue' }};
}

// Get the next pending item
const nextItem = queue.find(item => item.status === 'pending');
if (!nextItem) {
  return { json: { status: 'no_pending_items' }};
}

// Mark as processing (the change persists in static data automatically)
nextItem.status = 'processing';

return { json: nextItem };
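
Once the Mistral call for the current item has finished, a final Function node can mark it as done so it is not picked up again. This assumes the call ran in a node named "Mistral Request" and that the queue item's id was passed through the workflow; both names are assumptions about your setup.


// Function node sketch: mark the processed queue item as done
const staticData = $getWorkflowStaticData('global');
const queue = staticData.queue || [];
const current = queue.find(item => item.id === $input.item.json.id);

if (current) {
  current.status = 'done';
  current.result = $node["Mistral Request"].json; // persists with the static data
}

return { json: { status: 'done', requestId: $input.item.json.id } };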

 

Step 10: Implement Circuit Breaker Pattern

 

Add a circuit breaker to prevent repeated timeouts:


// Function node implementing a circuit breaker
const staticData = $getWorkflowStaticData('global');
staticData.circuitState = staticData.circuitState || {
  status: 'closed',
  failures: 0,
  lastFailure: null,
  cooldownPeriod: 300000 // 5 minutes
};
const circuitState = staticData.circuitState; // changes persist automatically

// Check if circuit is open
if (circuitState.status === 'open') {
  // Check if cooldown period has passed
  const now = Date.now();
  const cooldownExpired = (now - circuitState.lastFailure) > circuitState.cooldownPeriod;
  
  if (cooldownExpired) {
    // Move to half-open state (persisted via static data)
    circuitState.status = 'half-open';
  } else {
    // Circuit still open, fail fast
    return {
      json: {
        status: 'circuit_open',
        message: 'Mistral API is currently unavailable. Try again later.',
        remainingCooldown: Math.floor((circuitState.lastFailure + circuitState.cooldownPeriod - now) / 1000) + ' seconds'
      }
    };
  }
}

// Proceed with request if circuit is closed or half-open
try {
  // Read the Mistral result from the previous HTTP Request node
  // (in practice, place this circuit-breaker check around the node that calls Mistral)
  const result = $node["HTTP Request"].json;
  
  // If we were in half-open state and succeeded, close the circuit
  if (circuitState.status === 'half-open') {
    circuitState.status = 'closed';
    circuitState.failures = 0;
  }
  
  return { json: result };
} catch (error) {
  // Handle failure
  circuitState.failures++;
  circuitState.lastFailure = Date.now();
  
  // Open circuit if threshold is reached
  if (circuitState.failures >= 3) {
    circuitState.status = 'open';
  }
  
  
  throw new Error(`Mistral API request failed: ${error.message}`);
}

 

Step 11: Monitor and Log Execution Times

 

Implement logging to track and analyze Mistral completion times:


// Function node to log execution time
const startTime = Date.now();

// Store start time in the item
return {
  json: {
    ...items[0].json,
    processingStartTime: startTime
  }
};

// --- In a second Function node, after the Mistral completion ---
const endTime = Date.now();
const startTime = items[0].json.processingStartTime;
const executionTimeMs = endTime - startTime;

// Log the execution time
console.log(`Mistral completion took ${executionTimeMs}ms`);

// You can also store this in a database for analysis
const logEntry = {
  prompt_length: items[0].json.prompt.length,
  completion_length: items[0].json.completion.length,
  execution_time_ms: executionTimeMs,
  model: "mistral-large",
  timestamp: new Date().toISOString()
};

// Return both the completion and the performance data
return {
  json: {
    ...items[0].json,
    performance: {
      executionTimeMs,
      timestamp: new Date().toISOString()
    }
  }
};

 

Step 12: Create a Fallback Mechanism

 

Implement a fallback strategy for when Mistral timeouts persist:


// Function node for fallback logic
async function processMistralWithFallback() {
  try {
    // Try the Mistral result first (read from a node named "Mistral Request")
    return $node["Mistral Request"].json;
  } catch (error) {
    console.log(`Mistral request failed: ${error.message}`);
    
    // Log the failure
    const failureLog = {
      timestamp: new Date().toISOString(),
      error: error.message,
      prompt: items[0].json.prompt
    };
    
    // If you have a secondary AI model as fallback (e.g., OpenAI)
    try {
      const fallbackResponse = await $node["Fallback AI Model"].json;
      return {
        result: fallbackResponse.choices[0].text,
        used_fallback: true,
        original_error: error.message
      };
    } catch (fallbackError) {
      // Both primary and fallback failed
      throw new Error(`Both Mistral and fallback failed. Primary: ${error.message}, Fallback: ${fallbackError.message}`);
    }
  }
}

return { json: await processMistralWithFallback() };

 

Step 13: Implement a Streaming Response Handler

 

If Mistral supports streaming responses, use this to avoid timeouts:


// This would need to be implemented in a custom n8n node or external service
// Example of how it might work in Node.js (not directly in n8n)
const axios = require('axios');

async function streamMistralResponse(prompt, apiKey) {
  const response = await axios({
    method: 'post',
    // Streaming uses the regular chat completions endpoint with `stream: true`
    url: 'https://api.mistral.ai/v1/chat/completions',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${apiKey}`
    },
    data: {
      model: 'mistral-large',
      messages: [{ role: 'user', content: prompt }],
      stream: true
    },
    responseType: 'stream'
  });

  let fullResponse = '';
  
  return new Promise((resolve, reject) => {
    response.data.on('data', (chunk) => {
      const chunkText = chunk.toString();
      // Process chunk data (format depends on Mistral's API)
      fullResponse += extractTextFromChunk(chunkText);
    });
    
    response.data.on('end', () => {
      resolve(fullResponse);
    });
    
    response.data.on('error', (error) => {
      reject(error);
    });
  });
}

function extractTextFromChunk(chunk) {
  // Parse the server-sent-event lines ("data: {...}") in the streamed response.
  // The field names below follow Mistral's OpenAI-compatible streaming format;
  // double-check against the current API docs.
  let text = '';
  for (const line of chunk.split('\n')) {
    const trimmed = line.trim();
    if (!trimmed.startsWith('data:') || trimmed === 'data: [DONE]') continue;
    try {
      const data = JSON.parse(trimmed.slice('data:'.length));
      text += data.choices?.[0]?.delta?.content || '';
    } catch (e) {
      // Ignore keep-alive or partial lines
    }
  }
  return text;
}

To integrate this with n8n, you could create a custom n8n node or call this functionality via a webhook to an external service.
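
One minimal way to expose the streaming helper as an external service is a small Express wrapper that n8n calls with the HTTP Request node (raise that node's timeout as in Step 2, or combine it with the webhook callback pattern from Step 3). The route name and port below are arbitrary choices, not part of any n8n or Mistral convention.


// Sketch of an external streaming relay service around streamMistralResponse()
const express = require('express');
const app = express();
app.use(express.json());

app.post('/stream-completion', async (req, res) => {
  try {
    // Keeps the connection to Mistral active while chunks arrive, then returns the full text
    const text = await streamMistralResponse(req.body.prompt, process.env.MISTRAL_API_KEY);
    res.json({ status: 'success', result: text });
  } catch (error) {
    res.status(500).json({ status: 'error', error: error.message });
  }
});

app.listen(3000, () => console.log('Mistral streaming relay listening on port 3000'));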

 

Step 14: Create a "Long-Running Process" Workflow Pattern

 

Design a workflow pattern specifically for long-running processes:

  1. Initial workflow: Starts the process and stores state
  2. "Check status" workflow: Runs on a schedule to check completion
  3. "Process results" workflow: Handles the completed results

// Example Function node in initial workflow
const workflowData = {
  id: Date.now().toString(),
  prompt: items[0].json.prompt,
  status: 'started',
  startTime: new Date().toISOString(),
  retries: 0
};

// Store in n8n static data (changes to it persist automatically)
const staticData = $getWorkflowStaticData('global');
staticData.processes = staticData.processes || {};
const processes = staticData.processes;
processes[workflowData.id] = workflowData;

// Make initial request and store task ID
try {
  const response = await $node["HTTP Request"].json;
  processes[workflowData.id].taskId = response.task\_id;
  $setWorkflowStaticData('global', { processes });
  
  return { json: { processId: workflowData.id, status: 'initiated' }};
} catch (error) {
  processes[workflowData.id].status = 'error';
  processes[workflowData.id].error = error.message;
  
  throw error;
}

Then in your scheduled "check status" workflow:


// Function node to check all pending processes
const processes = $getWorkflowStaticData('global').processes || {};
const pendingProcesses = Object.values(processes).filter(p => 
  p.status === 'started' || p.status === 'checking'
);

// Process each pending process (or use n8n's loop nodes)
for (const process of pendingProcesses) {
  process.status = 'checking';
  
  try {
    // Check status using the task ID (checkMistralTaskStatus is a placeholder;
    // see the sketch after this block for one way to handle the status call)
    const status = await checkMistralTaskStatus(process.taskId);
    
    if (status === 'completed') {
      process.status = 'completed';
      process.completionTime = new Date().toISOString();
      // Trigger the "Process results" workflow via webhook or direct call
    } else if (status === 'failed') {
      process.status = 'failed';
      process.error = 'Task failed on Mistral side';
    }
    // If still processing, leave as 'checking' for next iteration
  } catch (error) {
    process.retries++;
    if (process.retries > 5) {
      process.status = 'error';
      process.error = error.message;
    }
  }
}

// Static data changes persist automatically once the execution finishes
return { json: { checkedProcesses: pendingProcesses.length }};
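
The checkMistralTaskStatus() call above is a placeholder. One way to avoid implementing it inside the Function node is to let a dedicated HTTP Request node do the polling call inside a loop and keep the Function node purely for bookkeeping. In this sketch the node name "Check Status", the processId field, and the status values it returns are all assumptions about your workflow.


// Function node sketch: update a process record from a "Check Status" HTTP Request node
const staticData = $getWorkflowStaticData('global');
const processes = staticData.processes || {};
const processId = $input.item.json.processId;
const remoteStatus = $node["Check Status"].json.status; // field name depends on the API you poll

if (processes[processId]) {
  processes[processId].status =
    remoteStatus === 'completed' ? 'completed' :
    remoteStatus === 'failed' ? 'failed' : 'checking';
  processes[processId].lastChecked = new Date().toISOString();
}

return { json: { processId, status: remoteStatus } };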

 

Step 15: Containerize Long-Running Operations

 

For the most complex scenarios, move the long-running processes to a dedicated container:

  1. Create a custom Docker container with Mistral API integration
  2. Use n8n to trigger the container via Docker node or Webhook
  3. Have the container process the request without timeout limits
  4. Container reports back to n8n via webhook when done

// Example Docker container code (Python)
import os
import requests
import time
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/process', methods=['POST'])
def process_mistral_request():
    data = request.json
    prompt = data.get('prompt')
    callback_url = data.get('callback_url')
    
    # Start processing in background thread
    import threading
    thread = threading.Thread(target=process_in_background,
                              args=(prompt, callback_url))
    thread.start()

    return jsonify({"status": "processing_started"})

def process_in_background(prompt, callback_url):
    try:
        # Call Mistral API with no timeout limit
        mistral_api_key = os.environ.get('MISTRAL_API_KEY')
        response = requests.post(
            'https://api.mistral.ai/v1/completions',
            headers={
                'Content-Type': 'application/json',
                'Authorization': f'Bearer {mistral_api_key}'
            },
            json={
                'model': 'mistral-large',
                'prompt': prompt,
                'max_tokens': 2000
            },
            timeout=None  # No timeout
        )
        
        result = response.json()
        
        # Send result back to n8n
        requests.post(
            callback_url,
            json={
                'status': 'success',
                'result': result,
                'completion_time': time.time()
            }
        )
    except Exception as e:
        # Report error back to n8n
        requests.post(
            callback_url,
            json={
                'status': 'error',
                'error': str(e),
                'completion_time': time.time()
            }
        )

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

From n8n, you would call this container's endpoint and provide your webhook URL for the callback.
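
The HTTP Request node for that call might be configured roughly like this. The host, port, and field names mirror the illustrative Flask service above and should be adapted to your own deployment:


// Illustrative HTTP Request node configuration for triggering the container
{
  "url": "http://your-container-host:5000/process",
  "method": "POST",
  "body": {
    "prompt": "Your complex prompt here",
    "callback_url": "https://your-n8n-instance.com/webhook/mistral-callback"
  }
}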
