
How to store conversation state between webhook triggers for AI agents in n8n?

Learn how to store conversation state between webhook triggers for AI agents in n8n using databases, workflow static data, or JSON files to maintain context and enable coherent multi-turn interactions.

Matt Graham, CEO of Rapid Developers


To store conversation state between webhook triggers for AI agents in n8n, you can use an external database, n8n's workflow static data, or JSON files to maintain context across workflow executions. This lets your AI agents remember previous interactions and hold coherent, contextual conversations even though each webhook call starts a fresh execution.

 

Understanding the Challenge of State Management in n8n

 

When building AI agents in n8n that communicate through webhooks, one of the key challenges is maintaining conversation context across separate executions. By default, each webhook trigger starts a fresh workflow execution with no memory of previous interactions. For natural conversations, we need to implement a state management solution.

 

Step 1: Decide Which State Storage Method to Use

 

There are several approaches to storing conversation state in n8n:

Option A: External Database - Store conversation history in MongoDB, PostgreSQL, or other database systems.
Option B: Workflow Static Data - Use n8n's built-in workflow static data to persist simple state with the workflow itself.
Option C: JSON Files - Write conversation history to disk and read it when needed.
Option D: Redis Cache - Use a fast in-memory data store for conversation state.

For this tutorial, we'll implement both the database approach and the workflow static data approach.

 

Step 2: Setting Up a Unique Conversation Identifier

 

Before storing conversation state, you need a way to identify each unique conversation:

  1. Create a webhook trigger node as your workflow entry point.
  2. Add a Function node to generate or retrieve a conversation ID:

// Retrieve conversation ID from incoming request or generate a new one
let conversationId;

// Check if the incoming webhook data contains a conversation ID
if (items[0]?.json?.conversationId) {
  conversationId = items[0].json.conversationId;
} else {
  // Generate a new conversation ID (using timestamp + random string)
  conversationId = `conv_${Date.now()}_${Math.random().toString(36).substring(2, 9)}`;
}

// Add the conversation ID to the item
items[0].json.conversationId = conversationId;

return items;

 

Step 3: Implementing Database Storage (MongoDB Example)

 

Using a database provides the most robust solution for storing conversation history:

  1. Add a MongoDB node to your n8n instance (or use another database of your choice).

  2. Set up a MongoDB connection in n8n credentials.

  3. Create a Function node to format the conversation data:


// Format data for storage
const conversationId = items[0].json.conversationId;
const userMessage = items[0].json.userMessage;
const timestamp = new Date().toISOString();

// Create entry to be stored
items[0].json.storageData = {
  conversationId: conversationId,
  messages: [{
    role: "user",
    content: userMessage,
    timestamp: timestamp
  }],
  lastUpdated: timestamp
};

return items;

  4. Add a MongoDB node configured to retrieve the existing conversation:
  • Operation: Find
  • Collection: conversations
  • Options: Set "Limit" to 1 and "Sort" to descending by lastUpdated
  • Query: { "conversationId": "={{ $json.conversationId }}" }
  5. Add an IF node to check whether a conversation was found (enable "Always Output Data" in the MongoDB node's settings so an empty item is emitted when nothing matches):
  • Condition: {{ $json.conversationId !== undefined }}
  1. For the "true" path (existing conversation), add a Function node to merge new messages:

// Get existing conversation
const existingConversation = items[0].json;
const newMessage = {
  role: "user",
  content: $node["Webhook"].json.userMessage,
  timestamp: new Date().toISOString()
};

// Add new message to existing messages
existingConversation.messages.push(newMessage);
existingConversation.lastUpdated = new Date().toISOString();

// Return an array of items, as the Function node expects
return [{json: existingConversation}];
  1. For the "false" path (new conversation), create a new conversation object.

  2. Add a MongoDB node to update or insert the conversation:

  • Operation: Update
  • Collection: conversations
  • Update Key: conversationId
  • Options: Enable "Upsert"
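
For reference, here is a minimal sketch of what the "false" path Function node (item 7) might look like. It assumes the webhook payload carries userMessage, and it uses "Generate Conversation ID" as a placeholder name for the Step 2 Function node - rename it to match your workflow:

// "False" path: no existing conversation was found, so build a new one.
// NOTE: "Generate Conversation ID" is a placeholder node name - use the actual
// name of your Step 2 Function node here.
const conversationId = $node["Generate Conversation ID"].json.conversationId;
const userMessage = $node["Webhook"].json.userMessage;
const timestamp = new Date().toISOString();

const newConversation = {
  conversationId: conversationId,
  messages: [{
    role: "user",
    content: userMessage,
    timestamp: timestamp
  }],
  lastUpdated: timestamp
};

return [{ json: newConversation }];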

 

Step 4: Using n8n Workflow Static Data for Simple State Storage

 

For simpler implementations, n8n's built-in workflow static data can store conversation state without any external service:

  1. Pick a key to hold the history. This tutorial uses "conversationHistory" inside the global static data; it is created automatically on the first write.

  2. Add a Function node to read the stored history and append the new message:


// Get the workflow static data (persists across executions of an active workflow)
// (in older Function nodes this helper is called getWorkflowStaticData('global'))
const staticData = $getWorkflowStaticData('global');

// Get conversation ID
const conversationId = items[0].json.conversationId;

// Get or initialize conversation history
let conversationHistory = staticData.conversationHistory || {};
if (!conversationHistory[conversationId]) {
  conversationHistory[conversationId] = {
    messages: [],
    lastUpdated: null
  };
}

// Add new message to history
conversationHistory[conversationId].messages.push({
  role: "user",
  content: items[0].json.userMessage,
  timestamp: new Date().toISOString()
});
conversationHistory[conversationId].lastUpdated = new Date().toISOString();

// Persist the updated history back to static data
staticData.conversationHistory = conversationHistory;

// Add conversation history to the current item for use in this execution
items[0].json.currentConversation = conversationHistory[conversationId];

return items;

  3. Activate the workflow. Workflow static data is only persisted for production executions (for example, real webhook calls to an active workflow); it is not saved during manual test runs in the editor.

Note: Workflow static data is stored with the workflow itself, so keep it small. For long histories or many concurrent conversations, prefer the database approach.

 

Step 5: Setting Up the AI Agent with Conversation Context

 

Now that we have state storage, let's set up the AI agent:

  1. Add an HTTP Request node or OpenAI node (if using ChatGPT).

  2. For the HTTP Request node (example for OpenAI API):


{
  "url": "https://api.openai.com/v1/chat/completions",
  "method": "POST",
  "authentication": "predefinedCredentialType",
  "credentialType": "openAiApi",
  "sendHeaders": true,
  "headerParameters": {
    "parameters": [
      {
        "name": "Content-Type",
        "value": "application/json"
      }
    ]
  },
  "sendBody": true,
  "bodyParameters": {
    "parameters": [
      {
        "name": "model",
        "value": "gpt-4"
      },
      {
        "name": "messages",
        "value": "={{ $json.formattedMessages }}"
      },
      {
        "name": "temperature",
        "value": 0.7
      }
    ]
  }
}

  3. Add a Function node before the AI node to format messages for the AI:

// Get conversation history
let messages = [];

// If using database approach
if (items[0].json.existingConversation && items[0].json.existingConversation.messages) {
  messages = items[0].json.existingConversation.messages;
}
// If using the workflow static data approach
else if (items[0].json.currentConversation && items[0].json.currentConversation.messages) {
  messages = items[0].json.currentConversation.messages;
}
// Fallback for new conversations
else {
  messages = [{
    role: "user",
    content: items[0].json.userMessage
  }];
}

// Format messages for OpenAI API
const formattedMessages = messages.map(msg => ({
  role: msg.role,
  content: msg.content
}));

// Add system message at the beginning if needed
formattedMessages.unshift({
  role: "system",
  content: "You are a helpful assistant that remembers previous parts of the conversation."
});

// Add to items
items[0].json.formattedMessages = formattedMessages;

return items;

 

Step 6: Saving the AI Response to Conversation History

 

After receiving the AI response, save it back to your state storage:

  1. Add a Function node to extract the AI response:

// Extract AI response from previous node
const aiResponse = items[0].json.choices[0].message.content;
items[0].json.aiResponse = aiResponse;

return items;

  2. For database storage, add a Function node to append the AI response to the conversation, then write it back with a MongoDB upsert (sketched after this list):

// Get existing conversation
const existingConversation = items[0].json.existingConversation || {
  conversationId: items[0].json.conversationId,
  messages: [],
  lastUpdated: null
};

// Add AI response to messages
existingConversation.messages.push({
  role: "assistant",
  content: items[0].json.aiResponse,
  timestamp: new Date().toISOString()
});
existingConversation.lastUpdated = new Date().toISOString();

// Update the conversation in the database
items[0].json.updatedConversation = existingConversation;

return items;

  3. For the workflow static data approach, add a Function node to append the response to the stored history:

// Get the workflow static data (persists across executions of an active workflow)
const staticData = $getWorkflowStaticData('global');
const conversationId = items[0].json.conversationId;

// Get conversation history
let conversationHistory = staticData.conversationHistory || {};
if (!conversationHistory[conversationId]) {
  conversationHistory[conversationId] = {
    messages: [],
    lastUpdated: null
  };
}

// Add AI response to history
conversationHistory[conversationId].messages.push({
  role: "assistant",
  content: items[0].json.aiResponse,
  timestamp: new Date().toISOString()
});
conversationHistory[conversationId].lastUpdated = new Date().toISOString();

// Persist the updated history back to static data
staticData.conversationHistory = conversationHistory;

return items;
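
To finish the database path, write items[0].json.updatedConversation back to MongoDB with a node configured like the upsert from Step 3. The snippet below is indicative only - exact parameter names vary between MongoDB node versions, and you may need to map the updatedConversation fields onto the item's top level first:

{
  "operation": "update",
  "collection": "conversations",
  "updateKey": "conversationId",
  "fields": "messages,lastUpdated",
  "options": {
    "upsert": true
  }
}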

 

Step 7: Returning the Response with Conversation ID

 

Ensure your webhook returns both the AI's response and the conversation ID:

  1. Add a Set node to prepare the response:

{
  "keepOnlySet": true,
  "values": {
    "response": "={{ $json.aiResponse }}",
    "conversationId": "={{ $json.conversationId }}"
  }
}

  2. Connect this to a Respond to Webhook node to send the response back to the caller.
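
Assuming the Set node above, the caller receives a body like the following and should send conversationId back with its next message:

{
  "response": "<the assistant's reply>",
  "conversationId": "conv_1714650000000_a1b2c3d"
}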

 

Step 8: Implementing Conversation Expiry and Cleanup

 

To prevent your storage from growing indefinitely:

  1. Create a separate workflow with a Schedule trigger (e.g., daily).

  2. Add a MongoDB node to find conversations older than a certain period:


{
  "operation": "find",
  "collection": "conversations",
  "options": {},
  "query": {
    "lastUpdated": {
      "$lt": "={{ $now.minus({days: 7}).toISOString() }}"
    }
  }
}

  3. Add a MongoDB node to delete these old conversations:

{
  "operation": "delete",
  "collection": "conversations",
  "query": {
    "lastUpdated": {
      "$lt": "={{ $now.minus({days: 7}).toISOString() }}"
    }
  }
}

For the workflow static data approach, add a Function node to your main workflow that cleans up old conversations:


// Get conversation history from workflow static data
const staticData = $getWorkflowStaticData('global');
let conversationHistory = staticData.conversationHistory || {};

// Get current time
const now = new Date();

// Remove conversations older than 7 days
Object.keys(conversationHistory).forEach(convId => {
  const lastUpdated = new Date(conversationHistory[convId].lastUpdated);
  const daysDiff = (now - lastUpdated) / (1000 * 60 * 60 * 24);

  if (daysDiff > 7) {
    delete conversationHistory[convId];
  }
});

// Persist the cleaned-up history
staticData.conversationHistory = conversationHistory;

return items;

 

Step 9: Handling Conversation Context Windows

 

To manage token limits for AI models, implement a sliding window for conversation history:


// Function to limit conversation history length
function limitConversationHistory(messages, maxMessages = 10) {
  if (messages.length <= maxMessages) {
    return messages;
  }
  
  // Keep system message if it exists
  const systemMessage = messages.find(msg => msg.role === 'system');
  
  // Get the most recent messages (excluding system message)
  const recentMessages = messages
    .filter(msg => msg.role !== 'system')
    .slice(-maxMessages);
  
  // Add system message back if it existed
  if (systemMessage) {
    return [systemMessage, ...recentMessages];
  }
  
  return recentMessages;
}

// Get the conversation history (from whichever storage approach you used) and limit it
const messages = items[0].json.currentConversation?.messages
  || items[0].json.existingConversation?.messages
  || [];
const limitedMessages = limitConversationHistory(messages, 10);

// Add limited messages to items
items[0].json.formattedMessages = limitedMessages;

return items;

 

Step 10: Adding Multi-User Support

 

To handle multiple users with separate conversation histories:

  1. Modify your conversation ID generation to include user identifiers:

// Extract user ID from incoming request
const userId = items[0].json.userId || 'anonymous';

// Create conversation ID that includes user ID
const conversationId = `user_${userId}_conv_${Date.now()}_${Math.random().toString(36).substring(2, 9)}`;

// Add to items
items[0].json.conversationId = conversationId;
items[0].json.userId = userId;

return items;

  2. Update your database queries to filter by user ID as well:

{
  "operation": "find",
  "collection": "conversations",
  "options": {
    "sort": {
      "lastUpdated": -1
    },
    "limit": 1
  },
  "query": {
    "conversationId": "={{ $json.conversationId }}",
    "userId": "={{ $json.userId }}"
  }
}

 

Step 11: Testing Your Implementation

 

To test your conversation state management:

  1. Create a simple HTML form that sends requests to your webhook:
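
A minimal example, assuming the webhook accepts a JSON body with userMessage and conversationId (replace YOUR_WEBHOOK_URL with your production webhook URL):

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>AI Chat Test</title>
</head>
<body>
  <div id="chat"></div>
  <input id="message" type="text" placeholder="Type a message...">
  <button onclick="send()">Send</button>

  <script>
    const WEBHOOK_URL = "YOUR_WEBHOOK_URL"; // replace with your n8n webhook URL
    let conversationId = null;              // reused so the agent keeps context

    async function send() {
      const input = document.getElementById("message");
      const userMessage = input.value;
      input.value = "";
      append("You", userMessage);

      const res = await fetch(WEBHOOK_URL, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ userMessage, conversationId })
      });
      const data = await res.json();

      conversationId = data.conversationId; // remember the ID for the next turn
      append("Assistant", data.response);
    }

    function append(who, text) {
      const line = document.createElement("p");
      line.textContent = who + ": " + text;
      document.getElementById("chat").appendChild(line);
    }
  </script>
</body>
</html>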




  2. Host this HTML file on a server or open it locally (if you open it straight from the filesystem, you may need to allow cross-origin requests on your webhook).

  3. Test with multiple messages to verify that conversation context is maintained (the script below automates this check).
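
If you prefer to script this check, a small Node.js test (Node 18+ for the built-in fetch; replace the placeholder URL with your webhook URL) can send two turns and confirm the second reply uses context from the first:

// test-chat.js - two-turn test of conversation state
const WEBHOOK_URL = "https://your-n8n-instance/webhook/your-path"; // replace with your webhook URL

async function send(userMessage, conversationId) {
  const res = await fetch(WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ userMessage, conversationId })
  });
  return res.json();
}

(async () => {
  // First turn: no conversationId, so the workflow generates one
  const first = await send("My name is Alice.");
  console.log("Turn 1:", first.response);

  // Second turn: reuse the conversationId so the agent keeps context
  const second = await send("What is my name?", first.conversationId);
  console.log("Turn 2:", second.response);
})();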

 

Step 12: Implementing Conversation Branching (Advanced)

 

For advanced use cases, you might want to support conversation branching:

  1. Modify your data structure to support a tree-like conversation:

// In your storage setup function
const conversationId = items[0].json.conversationId;
const userMessage = items[0].json.userMessage;

const conversationData = {
  conversationId: conversationId,
  userId: items[0].json.userId,
  branches: {
    "main": {
      messages: [{
        role: "user",
        content: userMessage,
        timestamp: new Date().toISOString()
      }],
      parentBranchId: null,
      branchPoint: null
    }
  },
  activeBranch: "main",
  lastUpdated: new Date().toISOString()
};

items[0].json.storageData = conversationData;
return items;

  2. Add endpoints and functionality to create new branches and switch between them; a rough sketch of the branch-switching logic follows.
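
As a sketch of that functionality (assuming the incoming payload carries a branchId and the tree structure above), a Function node that appends to an existing branch or forks a new one from the active branch might look like:

// Hypothetical branching logic: append to the requested branch, forking it
// from the active branch if it does not exist yet.
const conversation = items[0].json.storageData;   // structure from the previous step
const requestedBranch = items[0].json.branchId || conversation.activeBranch;

if (!conversation.branches[requestedBranch]) {
  // New branch: copy the history of the currently active branch as its starting point
  const parent = conversation.branches[conversation.activeBranch];
  conversation.branches[requestedBranch] = {
    messages: [...parent.messages],
    parentBranchId: conversation.activeBranch,
    branchPoint: parent.messages.length
  };
}

conversation.branches[requestedBranch].messages.push({
  role: "user",
  content: items[0].json.userMessage,
  timestamp: new Date().toISOString()
});
conversation.activeBranch = requestedBranch;
conversation.lastUpdated = new Date().toISOString();

items[0].json.storageData = conversation;
return items;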

 

Step 13: Handling Errors and Implementing Retry Logic

 

Add error handling to ensure conversation state is preserved even when issues occur:

  1. Add Error Trigger nodes to catch failures.

  2. Implement a Function node for retry logic:


// Get error details
const error = $input.item.json.error;
const errorMessage = `Error occurred: ${error.message}`;

// Log error
console.log(errorMessage);

// Store error in conversation for tracking
const conversationId = $input.item.json.conversationId;

// Get conversation history from workflow static data
const staticData = $getWorkflowStaticData('global');
let conversationHistory = staticData.conversationHistory || {};
if (conversationHistory[conversationId]) {
  // Add error note to conversation
  conversationHistory[conversationId].errors = conversationHistory[conversationId].errors || [];
  conversationHistory[conversationId].errors.push({
    message: errorMessage,
    timestamp: new Date().toISOString()
  });

  // Persist the updated history back to static data
  staticData.conversationHistory = conversationHistory;
}

// Return a graceful error message to the user
return [{
  json: {
    response: "I'm having trouble processing your request right now. Please try again.",
    conversationId: conversationId,
    error: true
  }
}];

 

Conclusion

 

By implementing one of these state management approaches, your n8n AI agent workflows can maintain conversation context across multiple webhook triggers. The database approach is the most robust choice for production environments, while workflow static data offers a simpler option for prototyping or lighter workloads.

Remember to consider:

  • Privacy and data retention policies for stored conversations
  • Performance implications of large conversation histories
  • Token limits of your AI model when passing conversation context
  • Backup strategies for your conversation state storage

With these implementations, your n8n AI agents can deliver more natural, contextual interactions that persist across multiple webhook calls.
