Learn how to handle n8n timeout errors on long Mistral AI completions with timeout settings, webhooks, chunking, polling, error handling, and workflow patterns.
To handle n8n timeout errors on long completions from Mistral, combine increased timeout settings, webhook callbacks, error handling strategies, and workflow design patterns. Together, these techniques let your n8n workflows manage potentially long-running Mistral AI completions without hitting timeout limits.
Step 1: Understand the Timeout Problem with Mistral AI in n8n
Before implementing solutions, it's important to understand why timeouts occur when working with Mistral AI in n8n. Long completions (large prompts, high max_tokens values, or larger models) can take several minutes to finish, while n8n enforces a workflow execution timeout, the HTTP Request node applies its own request timeout, and any reverse proxy in front of your n8n instance may close long-lived connections. Whenever a completion outlasts one of these limits, the workflow fails with a timeout error.
Step 2: Adjust n8n Timeout Settings
The simplest first approach is to increase n8n's timeout limits:
// In your n8n configuration file or environment variables
EXECUTIONS_TIMEOUT=600        // default workflow timeout, in seconds (10 minutes)
EXECUTIONS_TIMEOUT_MAX=3600   // upper limit a workflow-level timeout can be set to, in seconds
If you're using Docker, you can set this in your docker-compose.yml file:
version: '3'
services:
  n8n:
    image: n8nio/n8n
    environment:
      - EXECUTIONS_TIMEOUT=600
      # other configuration...
For the HTTP Request node specifically, you can also set timeout parameters directly:
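For example, the HTTP Request node has a Timeout setting (in milliseconds) under Options. The snippet below is an illustrative excerpt of the node's parameters rather than a full node export, assuming a five-minute limit for the Mistral call:
// Illustrative HTTP Request node parameters with an explicit timeout
{
  "method": "POST",
  "url": "https://api.mistral.ai/v1/completions",
  "options": {
    "timeout": 300000
  }
}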
Step 3: Implement Asynchronous Processing with Webhooks
For potentially very long requests, switch to an asynchronous approach using webhooks:
Step A: Create a webhook receiving workflow in n8n:
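The receiving workflow starts with a Webhook node; the settings below are a minimal sketch, and the mistral-callback path is an assumption chosen to match the callback_url used in Step B:
// Webhook (trigger) node settings for the callback-receiving workflow
{
  "httpMethod": "POST",
  "path": "mistral-callback",
  "responseMode": "onReceived"
}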
Step B: Configure Mistral to use a callback approach (if supported by their API):
// Example HTTP Request node configuration for webhook callback
{
  "url": "https://api.mistral.ai/v1/completions",
  "method": "POST",
  "body": {
    "model": "mistral-medium",
    "prompt": "Your complex prompt here",
    "max_tokens": 2000,
    "temperature": 0.7,
    "callback_url": "https://your-n8n-instance.com/webhook/mistral-callback"
  }
}
If Mistral doesn't natively support webhooks, you'll need to implement a custom solution using a serverless function or another service that can wait for the completion and then call your webhook.
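As a minimal sketch of such an intermediary (the Express endpoint, port, and payload field names are assumptions; Step 15 shows a fuller containerized version in Python):
// Minimal relay service: accepts a prompt, calls Mistral with no client-side
// timeout, then POSTs the result to the n8n webhook it was given.
const express = require('express');
const axios = require('axios');

const app = express();
app.use(express.json());

app.post('/process', async (req, res) => {
  const { prompt, callbackUrl } = req.body;
  res.json({ status: 'processing_started' }); // respond immediately so n8n is not blocked

  try {
    const completion = await axios.post(
      'https://api.mistral.ai/v1/completions',
      { model: 'mistral-large', prompt, max_tokens: 2000 },
      { headers: { Authorization: `Bearer ${process.env.MISTRAL_API_KEY}` }, timeout: 0 }
    );
    await axios.post(callbackUrl, { status: 'success', result: completion.data });
  } catch (error) {
    await axios.post(callbackUrl, { status: 'error', error: error.message });
  }
});

app.listen(5000);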
Step 4: Implement Chunking for Large Prompts
Break down large prompts into smaller chunks to reduce processing time:
// JavaScript code for a Function node to split a prompt
const splitPrompt = (prompt, maxChunkSize = 1000) => {
  // Split by sentences to avoid cutting in the middle
  const sentences = prompt.match(/[^.!?]+[.!?]+/g) || [prompt];
  const chunks = [];
  let currentChunk = '';

  for (const sentence of sentences) {
    if (currentChunk.length + sentence.length > maxChunkSize) {
      chunks.push(currentChunk.trim());
      currentChunk = sentence;
    } else {
      currentChunk += sentence;
    }
  }

  if (currentChunk.trim()) {
    chunks.push(currentChunk.trim());
  }

  return chunks;
};

// Split the prompt into manageable chunks
const prompt = items[0].json.prompt;
const chunks = splitPrompt(prompt);

// Return one item per chunk for further processing
return chunks.map(chunk => ({ json: { chunk } }));
Then process each chunk in a loop (for example with a Loop Over Items node feeding the Mistral request) and concatenate the results, as sketched below.
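A minimal sketch of the concatenation step, assuming each processed item carries its partial result in a completion field (adjust the field name to whatever your Mistral node actually outputs):
// Function/Code node after the loop, running once for all items:
// join the per-chunk completions back into a single text
const combined = items
  .map(item => item.json.completion || '')
  .join('\n');

return [{ json: { completion: combined } }];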
Step 5: Implement Polling for Completion Status
If Mistral supports it, use a polling approach to check completion status:
// Example Code node implementing polling (the tasks endpoint and response
// fields below are illustrative -- adapt them to whatever async API is available)

// 1. Initial request to start the completion returned a task ID
const startResponse = $node["HTTP Request"].json;
const taskId = startResponse.task_id;

// 2. Set up polling parameters
const maxAttempts = 30;
const pollInterval = 10000; // 10 seconds
let attempts = 0;
let result = null;

// 3. Poll until completion or max attempts reached
while (attempts < maxAttempts) {
  // Wait for the interval
  await new Promise(resolve => setTimeout(resolve, pollInterval));

  // Check status (this.helpers.httpRequest is available in the Code node)
  const statusResponse = await this.helpers.httpRequest({
    method: 'GET',
    url: `https://api.mistral.ai/v1/tasks/${taskId}`,
    headers: {
      Authorization: `Bearer ${$env.MISTRAL_API_KEY}` // API key from an environment variable
    },
    json: true
  });

  // If completed, keep the result and stop polling
  if (statusResponse.status === 'completed') {
    result = statusResponse.result;
    break;
  }

  // If failed, throw an error
  if (statusResponse.status === 'failed') {
    throw new Error(`Mistral task failed: ${statusResponse.error}`);
  }

  attempts++;
}

// Check if we reached max attempts
if (attempts >= maxAttempts) {
  throw new Error('Polling timed out after maximum attempts');
}

return [{ json: { result } }];
Step 6: Use Multiple Wait Nodes for Extended Processing
For complex workflows, break up the processing with Wait nodes:
// Function node before the Wait node: store the data needed to resume later
const taskId = $node["HTTP Request"].json.task_id;
const attemptsSoFar = items[0].json.attempts || 0;

return [{
  json: {
    taskId,
    attempts: attemptsSoFar + 1,
    waitUntil: new Date(Date.now() + 30000).toISOString() // wait 30 seconds
  }
}];
Configure the Wait node to resume "At Specified Time" and set its date field with an expression that references the waitUntil value (for example {{ $json.waitUntil }}).
Step 7: Implement Error Handling and Retry Logic
Add robust error handling to manage timeout issues:
// Error handling in a Code node
let result;

try {
  // Attempt the request with a custom timeout
  // (this.helpers.httpRequest is available in the Code node; the API key
  // is read from an environment variable here)
  result = await this.helpers.httpRequest({
    url: 'https://api.mistral.ai/v1/completions',
    method: 'POST',
    headers: { Authorization: `Bearer ${$env.MISTRAL_API_KEY}` },
    body: {
      model: 'mistral-large',
      prompt: items[0].json.prompt,
      max_tokens: 2000
    },
    json: true,
    timeout: 300000 // 5 minutes
  });
} catch (error) {
  // Check if it's a timeout error
  if (error.message.includes('timeout') || error.code === 'ETIMEDOUT') {
    // Implement retry logic
    const retryCount = items[0].json.retryCount || 0;

    if (retryCount < 3) {
      // Return data for retry
      return [{
        json: {
          retryCount: retryCount + 1,
          prompt: items[0].json.prompt,
          status: 'retrying'
        }
      }];
    }

    // Max retries reached, handle gracefully
    return [{
      json: {
        status: 'failed',
        error: 'Max retries reached for Mistral API',
        prompt: items[0].json.prompt
      }
    }];
  }

  // Handle other types of errors
  throw error;
}

// Process the successful result
return [{
  json: {
    status: 'success',
    result,
    prompt: items[0].json.prompt
  }
}];
Combine this with an IF node to route based on the status value.
Step 8: Optimize Mistral Parameters for Faster Responses
Adjust Mistral API parameters to potentially reduce completion times:
{
  "model": "mistral-medium",   // Consider using a smaller model if appropriate
  "prompt": "Your prompt here",
  "max_tokens": 500,           // Limit maximum tokens
  "temperature": 0.3,          // Lower temperature for more deterministic responses
  "top_p": 0.8,
  "frequency_penalty": 1.0,    // Discourages repetition
  "presence_penalty": 0.0
}
Step 9: Set Up Queue-Based Processing
For high-volume workflows, implement a queue-based approach:
// Example Function node for adding to a queue (using workflow static data as a simple queue)
// Note: static data persists only for active (production) executions, not manual test runs
const staticData = $getWorkflowStaticData('global');
staticData.queue = staticData.queue || [];

const newRequest = {
  id: Date.now().toString(),
  prompt: items[0].json.prompt,
  status: 'pending',
  timestamp: new Date().toISOString()
};

// Add to the queue -- mutating the static data object is enough, it is persisted automatically
staticData.queue.push(newRequest);

return [{
  json: {
    status: 'queued',
    requestId: newRequest.id,
    queuePosition: staticData.queue.length
  }
}];
Then in your processing workflow:
// Function node to get the next queue item
const staticData = $getWorkflowStaticData('global');
const queue = staticData.queue || [];

if (queue.length === 0) {
  return [{ json: { status: 'empty_queue' } }];
}

// Get the next pending item
const nextItem = queue.find(item => item.status === 'pending');
if (!nextItem) {
  return [{ json: { status: 'no_pending_items' } }];
}

// Mark it as processing -- the change is persisted with the static data
nextItem.status = 'processing';

return [{ json: nextItem }];
Step 10: Implement Circuit Breaker Pattern
Add a circuit breaker to prevent repeated timeouts:
// Code node implementing a circuit breaker around the Mistral call
const staticData = $getWorkflowStaticData('global');
const circuitState = staticData.circuitState || {
  status: 'closed',
  failures: 0,
  lastFailure: null,
  cooldownPeriod: 300000 // 5 minutes
};
staticData.circuitState = circuitState;

// Check if the circuit is open
if (circuitState.status === 'open') {
  // Check if the cooldown period has passed
  const now = Date.now();
  const cooldownExpired = (now - circuitState.lastFailure) > circuitState.cooldownPeriod;

  if (cooldownExpired) {
    // Move to the half-open state and allow one trial request through
    circuitState.status = 'half-open';
  } else {
    // Circuit still open, fail fast
    return [{
      json: {
        status: 'circuit_open',
        message: 'Mistral API is currently unavailable. Try again later.',
        remainingCooldown: Math.floor((circuitState.lastFailure + circuitState.cooldownPeriod - now) / 1000) + ' seconds'
      }
    }];
  }
}

// Proceed with the request if the circuit is closed or half-open
try {
  // Make the API call to Mistral (adjust model, limits, and auth to your setup)
  const result = await this.helpers.httpRequest({
    url: 'https://api.mistral.ai/v1/completions',
    method: 'POST',
    headers: { Authorization: `Bearer ${$env.MISTRAL_API_KEY}` },
    body: {
      model: 'mistral-large',
      prompt: items[0].json.prompt,
      max_tokens: 2000
    },
    json: true,
    timeout: 300000
  });

  // If we were in the half-open state and succeeded, close the circuit again
  if (circuitState.status === 'half-open') {
    circuitState.status = 'closed';
    circuitState.failures = 0;
  }

  return [{ json: result }];
} catch (error) {
  // Record the failure
  circuitState.failures++;
  circuitState.lastFailure = Date.now();

  // Open the circuit if the failure threshold is reached
  if (circuitState.failures >= 3) {
    circuitState.status = 'open';
  }

  throw new Error(`Mistral API request failed: ${error.message}`);
}
Step 11: Monitor and Log Execution Times
Implement logging to track and analyze Mistral completion times:
// Function node BEFORE the Mistral request: record the start time
const startTime = Date.now();

// Store the start time on the item
return [{
  json: {
    ...items[0].json,
    processingStartTime: startTime
  }
}];

// --- Separate Function node AFTER the Mistral completion ---
const endTime = Date.now();
const startTime = items[0].json.processingStartTime;
const executionTimeMs = endTime - startTime;

// Log the execution time
console.log(`Mistral completion took ${executionTimeMs}ms`);

// You could also persist an entry like this to a database for later analysis
const logEntry = {
  prompt_length: items[0].json.prompt.length,
  completion_length: items[0].json.completion.length,
  execution_time_ms: executionTimeMs,
  model: 'mistral-large',
  timestamp: new Date().toISOString()
};

// Return both the completion and the performance data
return [{
  json: {
    ...items[0].json,
    performance: {
      executionTimeMs,
      timestamp: new Date().toISOString()
    }
  }
}];
Step 12: Create a Fallback Mechanism
Implement a fallback strategy for when Mistral timeouts persist:
// Code node with fallback logic
// (the fallback request targets OpenAI purely as an illustration of a secondary provider)
const processMistralWithFallback = async () => {
  try {
    // Try Mistral first, with a timeout
    const mistralResponse = await this.helpers.httpRequest({
      url: 'https://api.mistral.ai/v1/completions',
      method: 'POST',
      headers: { Authorization: `Bearer ${$env.MISTRAL_API_KEY}` },
      body: { model: 'mistral-large', prompt: items[0].json.prompt, max_tokens: 2000 },
      json: true,
      timeout: 300000
    });
    return { result: mistralResponse, used_fallback: false };
  } catch (error) {
    // Log the failure
    console.log(`Mistral request failed: ${error.message}`);

    // If you have a secondary AI model as a fallback (e.g. OpenAI)
    try {
      const fallbackResponse = await this.helpers.httpRequest({
        url: 'https://api.openai.com/v1/chat/completions',
        method: 'POST',
        headers: { Authorization: `Bearer ${$env.OPENAI_API_KEY}` },
        body: {
          model: 'gpt-4o-mini', // example fallback model
          messages: [{ role: 'user', content: items[0].json.prompt }]
        },
        json: true,
        timeout: 300000
      });

      return {
        result: fallbackResponse.choices[0].message.content,
        used_fallback: true,
        original_error: error.message
      };
    } catch (fallbackError) {
      // Both primary and fallback failed
      throw new Error(`Both Mistral and fallback failed. Primary: ${error.message}, Fallback: ${fallbackError.message}`);
    }
  }
};

return [{ json: await processMistralWithFallback() }];
Step 13: Implement a Streaming Response Handler
If Mistral supports streaming responses, use this to avoid timeouts:
// This would need to be implemented in a custom n8n node or an external service.
// Example of how it might work in Node.js (not directly in n8n)
const axios = require('axios');

async function streamMistralResponse(prompt, apiKey) {
  const response = await axios({
    method: 'post',
    url: 'https://api.mistral.ai/v1/completions/stream',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${apiKey}`
    },
    data: {
      model: 'mistral-large',
      prompt: prompt,
      stream: true
    },
    responseType: 'stream'
  });

  let fullResponse = '';

  return new Promise((resolve, reject) => {
    response.data.on('data', (chunk) => {
      const chunkText = chunk.toString();
      // Process chunk data (format depends on Mistral's API)
      fullResponse += extractTextFromChunk(chunkText);
    });
    response.data.on('end', () => {
      resolve(fullResponse);
    });
    response.data.on('error', (error) => {
      reject(error);
    });
  });
}

function extractTextFromChunk(chunk) {
  // Parse the chunk according to Mistral's streaming format.
  // This is a placeholder -- the actual implementation depends on the API's SSE format.
  try {
    const data = JSON.parse(chunk.replace('data: ', ''));
    return data.choices[0].text || '';
  } catch (e) {
    return '';
  }
}
To integrate this with n8n, you could create a custom n8n node or call this functionality via a webhook to an external service.
Step 14: Create a "Long-Running Process" Workflow Pattern
Design a workflow pattern specifically for long-running processes:
// Example Function node in the initial workflow
const workflowData = {
  id: Date.now().toString(),
  prompt: items[0].json.prompt,
  status: 'started',
  startTime: new Date().toISOString(),
  retries: 0
};

// Store in n8n workflow static data (persisted automatically for active workflows)
const staticData = $getWorkflowStaticData('global');
staticData.processes = staticData.processes || {};
staticData.processes[workflowData.id] = workflowData;

// Make the initial request and store the task ID
try {
  const response = $node["HTTP Request"].json;
  staticData.processes[workflowData.id].taskId = response.task_id;

  return [{ json: { processId: workflowData.id, status: 'initiated' } }];
} catch (error) {
  staticData.processes[workflowData.id].status = 'error';
  staticData.processes[workflowData.id].error = error.message;
  throw error;
}
Then in your scheduled "check status" workflow:
// Code node to check all pending processes
const staticData = $getWorkflowStaticData('global');
const processes = staticData.processes || {};
const pendingProcesses = Object.values(processes).filter(p =>
  p.status === 'started' || p.status === 'checking'
);

// Placeholder status check -- replace the URL with whatever task/status
// endpoint is actually available to you
const checkMistralTaskStatus = async (taskId) => {
  const response = await this.helpers.httpRequest({
    method: 'GET',
    url: `https://api.mistral.ai/v1/tasks/${taskId}`,
    headers: { Authorization: `Bearer ${$env.MISTRAL_API_KEY}` },
    json: true
  });
  return response.status;
};

// Process each pending entry (or use n8n's loop nodes instead)
for (const proc of pendingProcesses) {
  proc.status = 'checking';
  try {
    // Check status using the stored task ID
    const status = await checkMistralTaskStatus(proc.taskId);

    if (status === 'completed') {
      proc.status = 'completed';
      proc.completionTime = new Date().toISOString();
      // Trigger the "Process results" workflow via webhook or a direct call
    } else if (status === 'failed') {
      proc.status = 'failed';
      proc.error = 'Task failed on Mistral side';
    }
    // If still processing, leave it as 'checking' for the next iteration
  } catch (error) {
    proc.retries++;
    if (proc.retries > 5) {
      proc.status = 'error';
      proc.error = error.message;
    }
  }
}

return [{ json: { checkedProcesses: pendingProcesses.length } }];
Step 15: Containerize Long-Running Operations
For the most complex scenarios, move the long-running processes to a dedicated container:
# Example Docker container code (Python)
import os
import threading
import time

import requests
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/process', methods=['POST'])
def process_mistral_request():
    data = request.json
    prompt = data.get('prompt')
    callback_url = data.get('callback_url')

    # Start processing in a background thread
    thread = threading.Thread(target=process_in_background,
                              args=(prompt, callback_url))
    thread.start()

    return jsonify({"status": "processing_started"})

def process_in_background(prompt, callback_url):
    try:
        # Call the Mistral API with no timeout limit
        mistral_api_key = os.environ.get('MISTRAL_API_KEY')
        response = requests.post(
            'https://api.mistral.ai/v1/completions',
            headers={
                'Content-Type': 'application/json',
                'Authorization': f'Bearer {mistral_api_key}'
            },
            json={
                'model': 'mistral-large',
                'prompt': prompt,
                'max_tokens': 2000
            },
            timeout=None  # No timeout
        )
        result = response.json()

        # Send the result back to n8n
        requests.post(
            callback_url,
            json={
                'status': 'success',
                'result': result,
                'completion_time': time.time()
            }
        )
    except Exception as e:
        # Report the error back to n8n
        requests.post(
            callback_url,
            json={
                'status': 'error',
                'error': str(e),
                'completion_time': time.time()
            }
        )

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
From n8n, you would call this container's endpoint and provide your webhook URL for the callback.
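A sketch of that call from an n8n HTTP Request node; the container hostname, port, and webhook path are placeholders for your own setup:
// HTTP Request node parameters for calling the container
{
  "method": "POST",
  "url": "http://your-container-host:5000/process",
  "body": {
    "prompt": "Your complex prompt here",
    "callback_url": "https://your-n8n-instance.com/webhook/mistral-callback"
  }
}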