
How to send follow-up questions to Gemini with correct thread history in n8n?

Learn how to send follow-up questions to Gemini in n8n by creating a workflow that maintains conversation context and thread history using HTTP Request and Function nodes.

Matt Graham, CEO of Rapid Developers



To send follow-up questions to Gemini with correct thread history in n8n, you need to create a workflow that maintains conversation context on your side: the Gemini API is stateless, so each request must resend the accumulated message history. You do this with HTTP Request nodes that call the Gemini API and Function nodes that build the request payloads, process the responses, and keep the running history.
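In practice, the "thread history" is just an array of prior turns that you replay into each request's `contents` payload. A minimal plain-JavaScript sketch of that idea (`buildContents` is an illustrative helper name, not part of n8n or the Gemini SDK):

```javascript
// Build the `contents` payload Gemini expects by replaying the whole
// thread and appending the new user question. Gemini's roles are
// "user" and "model" (not "assistant").
function buildContents(history, newQuestion) {
  return [
    ...history,
    { role: "user", parts: [{ text: newQuestion }] }
  ];
}

// Example: one prior exchange plus a follow-up question
const history = [
  { role: "user", parts: [{ text: "Hello, I have some questions about machine learning." }] },
  { role: "model", parts: [{ text: "Sure - what would you like to know?" }] }
];
const payload = { contents: buildContents(history, "Can you explain neural networks in simple terms?") };
```

Each Function node in the workflow below is essentially a variation on this pattern: append a turn, send the whole array, append the reply.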

 

Step 1: Set Up Your n8n Environment

 

Before getting started, ensure you have:

  • n8n installed and running
  • A Google Cloud account with Gemini API access
  • Your API key for Gemini

If you haven't already set up n8n, you can install it via npm:

npm install n8n -g
n8n start

 

Step 2: Create a New Workflow

 

Open your n8n interface (typically at http://localhost:5678) and create a new workflow:

  • Click on "Workflows" in the sidebar
  • Click the "+ Create Workflow" button
  • Give your workflow a descriptive name like "Gemini Conversation Manager"

 

Step 3: Add Trigger Node

 

Add a trigger to start your workflow:

  • Click the "+" button in the editor
  • Select "Manual Trigger" for testing purposes (you can change this later)

 

Step 4: Create Initial Conversation Function

 

Add a Function node to prepare the initial request:

  • Click the "+" button and select "Function"
  • Name it "Prepare Initial Request"
  • Enter the following code:
// This function prepares the initial request to Gemini
return [
  {
    json: {
      apiKey: "YOUR_GEMINI_API_KEY", // Replace with your actual API key
      initialPrompt: "Hello, I have some questions about machine learning.",
      geminiModel: "gemini-pro", // or whatever model you want to use
      requestPayload: {
        contents: [
          {
            role: "user",
            parts: [
              {
                text: "Hello, I have some questions about machine learning."
              }
            ]
          }
        ]
      }
    }
  }
];

 

Step 5: Set Up HTTP Request for Initial Message

 

Add an HTTP Request node to send the initial message:

  • Click the "+" button and select "HTTP Request"
  • Name it "Initial Gemini Request"
  • Configure as follows:
  • Method: POST
  • URL: ={{`https://generativelanguage.googleapis.com/v1beta/models/${$json.geminiModel}:generateContent?key=${$json.apiKey}`}}
  • Headers: Content-Type: application/json
  • Request Body: ={{$json.requestPayload}}
  • Response Format: JSON
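For reference, the URL expression simply interpolates the model and key into the endpoint path; a plain-JavaScript sketch of how such a URL is assembled (`geminiUrl` is an illustrative helper name):

```javascript
// Build a Gemini REST endpoint URL for a given model and method,
// with the API key passed as a query parameter.
function geminiUrl(model, method, apiKey) {
  return `https://generativelanguage.googleapis.com/v1beta/models/${model}:${method}?key=${apiKey}`;
}

const url = geminiUrl("gemini-pro", "generateContent", "MY_KEY");
```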

 

Step 6: Process Initial Response

 

Add a Function node to process the initial response:

  • Click the "+" button and select "Function"
  • Name it "Process Initial Response"
  • Enter the following code:
// Extract and process the initial response
const response = $input.item.json;

// Check if we have a valid response
if (!response || !response.candidates || response.candidates.length === 0) {
  throw new Error('Failed to get a valid response from Gemini');
}

const modelReply = response.candidates[0].content.parts[0].text;

// The HTTP Request node only outputs the API response, so pull the
// request data from the earlier Function node
const requestData = $node["Prepare Initial Request"].json;

// Store the running message history for follow-ups; Gemini is
// stateless, so the full history must be resent with every request
return [
  {
    json: {
      apiKey: requestData.apiKey,
      geminiModel: requestData.geminiModel,
      initialPrompt: requestData.initialPrompt,
      assistantResponse: modelReply,
      messageHistory: [
        {
          role: "user",
          parts: [{ text: requestData.initialPrompt }]
        },
        {
          role: "model",
          parts: [{ text: modelReply }]
        }
      ]
    }
  }
];

 

Step 7: Create Follow-Up Question Function

 

Add a Function node to prepare the follow-up request:

  • Click the "+" button and select "Function"
  • Name it "Prepare Follow-up Request"
  • Enter the following code:
// This function prepares a follow-up request to Gemini
// For this example, we'll use a predefined follow-up question
const followUpQuestion = "Can you explain neural networks in simple terms?";

// Resend the whole history plus the new user turn - Gemini has no
// server-side threads, so context lives entirely in this array
return [
  {
    json: {
      apiKey: $input.item.json.apiKey,
      geminiModel: $input.item.json.geminiModel,
      followUpQuestion: followUpQuestion,
      messageHistory: $input.item.json.messageHistory,
      requestPayload: {
        contents: [
          ...$input.item.json.messageHistory,
          {
            role: "user",
            parts: [{ text: followUpQuestion }]
          }
        ]
      }
    }
  }
];

 

Step 8: Set Up HTTP Request for Follow-Up Message

 

Add an HTTP Request node for the follow-up:

  • Click the "+" button and select "HTTP Request"
  • Name it "Follow-up Gemini Request"
  • Configure as follows:
  • Method: POST
  • URL: ={{`https://generativelanguage.googleapis.com/v1beta/models/${$json.geminiModel}:generateContent?key=${$json.apiKey}`}}
  • Headers: Content-Type: application/json
  • Request Body: ={{$json.requestPayload}}
  • Response Format: JSON

 

Step 9: Process Follow-Up Response

 

Add a Function node to process the follow-up response:

  • Click the "+" button and select "Function"
  • Name it "Process Follow-up Response"
  • Enter the following code:
// Extract and process the follow-up response
const response = $input.item.json;

// Check if we have a valid response
if (!response || !response.candidates || response.candidates.length === 0) {
  throw new Error('Failed to get a valid response from Gemini');
}

const modelReply = response.candidates[0].content.parts[0].text;

// Pull the request data from the earlier Function node (the HTTP
// Request node only outputs the API response)
const requestData = $node["Prepare Follow-up Request"].json;

// Update message history with the new exchange
const updatedHistory = [
  ...requestData.messageHistory,
  {
    role: "user",
    parts: [{ text: requestData.followUpQuestion }]
  },
  {
    role: "model",
    parts: [{ text: modelReply }]
  }
];

// Return all the important data
return [
  {
    json: {
      apiKey: requestData.apiKey,
      geminiModel: requestData.geminiModel,
      followUpQuestion: requestData.followUpQuestion,
      followUpResponse: modelReply,
      messageHistory: updatedHistory
    }
  }
];

 

Step 10: Connect the Nodes

 

Connect all the nodes in sequence:

  • Manual Trigger → Prepare Initial Request
  • Prepare Initial Request → Initial Gemini Request
  • Initial Gemini Request → Process Initial Response
  • Process Initial Response → Prepare Follow-up Request
  • Prepare Follow-up Request → Follow-up Gemini Request
  • Follow-up Gemini Request → Process Follow-up Response

 

Step 11: Add More Follow-Up Questions (Optional)

 

To add additional follow-up questions, you can repeat Steps 7-9 with new Function and HTTP Request nodes, making sure to pass the updated message history each time.

For example, to add a second follow-up question:

  • Add a "Prepare Second Follow-up" Function node
  • Connect it to the "Process Follow-up Response" node
  • Use similar code as in Step 7, but modify the question
  • Add another HTTP Request node connected to this function
  • Add another Process Response node
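Since every follow-up Function node has the same shape, the logic can be factored into one helper and reused with a different question each time. A sketch (`prepareFollowUp` is our own name), assuming items carry the `messageHistory` field built earlier:

```javascript
// Given the JSON output of the previous "Process ... Response" node and
// a new question, build the item for the next HTTP Request node.
function prepareFollowUp(previous, question) {
  return {
    apiKey: previous.apiKey,
    geminiModel: previous.geminiModel,
    followUpQuestion: question,
    messageHistory: previous.messageHistory,
    requestPayload: {
      contents: [
        ...previous.messageHistory,
        { role: "user", parts: [{ text: question }] }
      ]
    }
  };
}

// Inside an n8n Function node this would be used as:
// return [{ json: prepareFollowUp($input.item.json, "What about CNNs?") }];
```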

 

Step 12: Make the Workflow Dynamic (Advanced)

 

For a more flexible approach, you can replace the hardcoded follow-up questions with dynamic inputs:

  • Add a "Set" node after "Process Initial Response"
  • Configure it to accept a user input for the follow-up question

Alternatively, create a loop structure for multiple follow-ups:

// In a Function node to handle dynamic follow-ups
const userQuestions = [
  "Can you explain neural networks in simple terms?",
  "How is deep learning different from machine learning?",
  "What are some applications of AI in healthcare?"
];

// Get current question index (or default to 0)
const questionIndex = $input.item.json.questionIndex || 0;

// Check if we have more questions
if (questionIndex >= userQuestions.length) {
  return [
    {
      json: {
        ...$input.item.json,
        conversationComplete: true
      }
    }
  ];
}

// Prepare the next question
return [
  {
    json: {
      ...$input.item.json,
      followUpQuestion: userQuestions[questionIndex],
      questionIndex: questionIndex + 1,
      requestPayload: {
        conversation: {
          conversationId: $input.item.json.conversationId
        },
        prompt: {
          text: userQuestions[questionIndex]
        }
      }
    }
  }
];

 

Step 13: Implement Error Handling

 

Add error handling to your workflow:

  • Add an Error Trigger node
  • Connect it to notification nodes or a Function node to log errors

Example error handling function:

// Error handling function
if ($input.item.json.error) {
  // Log the error
  console.error('Gemini API Error:', $input.item.json.error);
  
  // Return a formatted error message
  return [
    {
      json: {
        success: false,
        errorMessage: 'Failed to communicate with Gemini API: ' + $input.item.json.error.message,
        errorDetails: $input.item.json.error
      }
    }
  ];
}

// If no error, pass through the data unchanged
// (Function nodes must return an array of items)
return [$input.item];

 

Step 14: Save and Test Your Workflow

 

Save your workflow and test it:

  • Click the "Save" button to save your workflow
  • Click the "Execute Workflow" button to test
  • Review the output of each node to ensure everything works as expected

 

Step 15: Store Conversation History in Database (Optional)

 

For persistent conversations, add database nodes:

  • Add a "MongoDB" or "PostgreSQL" node after the response processing
  • Configure it to save the conversation ID and message history

Example schema for storing conversations:

// MongoDB schema example
{
  conversationId: String,
  startedAt: Date,
  updatedAt: Date,
  model: String,
  messages: [
    {
      role: String,
      content: String,
      timestamp: Date
    }
  ]
}
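Before the database node, a small Function-node helper could map a workflow item onto that schema. This is a sketch with an illustrative name (`toConversationRecord`), assuming the `messageHistory` format used earlier in this workflow; it also tolerates a plain `content` string per message:

```javascript
// Convert a workflow item into a record matching the schema above.
function toConversationRecord(item, now = new Date()) {
  return {
    conversationId: item.conversationId,
    startedAt: now,
    updatedAt: now,
    model: item.geminiModel,
    messages: item.messageHistory.map(m => ({
      role: m.role,
      // Support both { parts: [{ text }] } and plain { content } shapes
      content: m.parts ? m.parts.map(p => p.text).join("") : m.content,
      timestamp: now
    }))
  };
}
```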

 

Step 16: Create a Complete Interactive Workflow (Advanced)

 

For a fully interactive experience, create a webhook-based workflow:

  • Replace the Manual Trigger with a Webhook node
  • Add logic to check if a conversation ID is provided in the request (this ID is a key you generate and store yourself, since Gemini keeps no server-side threads)
  • If a conversation ID exists, retrieve the saved message history from the database and continue the conversation
  • If not, start a new conversation and generate a fresh ID

Example webhook handling code:

// Function to handle webhook input
const incomingData = $input.item.json;
let outputData = {};

// Check if we're continuing an existing conversation. The conversation
// ID is a key we generate and store ourselves - the Gemini API itself
// is stateless, so the history must be loaded from the database
if (incomingData.conversationId) {
  outputData = {
    apiKey: "YOUR_GEMINI_API_KEY", // Replace with your actual API key
    conversationId: incomingData.conversationId,
    geminiModel: incomingData.model || "gemini-pro",
    followUpQuestion: incomingData.message,
    // A later node should load the stored messageHistory for this ID
    // and build requestPayload.contents from it plus the new message
    needsHistoryLookup: true
  };
} else {
  // Starting a new conversation with a locally generated ID
  outputData = {
    apiKey: "YOUR_GEMINI_API_KEY", // Replace with your actual API key
    conversationId: Date.now().toString(36),
    geminiModel: incomingData.model || "gemini-pro",
    initialPrompt: incomingData.message,
    requestPayload: {
      contents: [
        {
          role: "user",
          parts: [{ text: incomingData.message }]
        }
      ]
    }
  };
}

return [{ json: outputData }];

 

Step 17: Optimize Your Workflow

 

Fine-tune your workflow for better performance:

  • Use environment variables for API keys instead of hardcoding them
  • Implement rate limiting to respect Gemini API quotas
  • Add caching to prevent duplicate requests

Example environment variable usage:

// Using environment variables
const apiKey = $env.GEMINI_API_KEY;
if (!apiKey) {
  throw new Error('Gemini API key not found in environment variables');
}

return [
  {
    json: {
      apiKey: apiKey,
      // rest of the properties
    }
  }
];
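For caching, one lightweight option is n8n's workflow static data (`$getWorkflowStaticData('global')`), which persists between executions of an active workflow. The sketch below uses a plain object in place of the static-data object so it can run standalone:

```javascript
// Cache Gemini answers keyed by the serialized request contents, so
// repeated identical questions skip the API call. In an n8n Function
// node, `cache` would come from $getWorkflowStaticData('global').
function cachedAnswer(cache, contents, fetchFn) {
  const key = JSON.stringify(contents);
  if (!(key in cache)) {
    cache[key] = fetchFn(contents); // only invoked on a cache miss
  }
  return cache[key];
}

// Usage sketch with a fake fetch function standing in for the API call
const cache = {};
let calls = 0;
const fakeFetch = () => { calls += 1; return "answer"; };
cachedAnswer(cache, [{ role: "user", parts: [{ text: "hi" }] }], fakeFetch);
cachedAnswer(cache, [{ role: "user", parts: [{ text: "hi" }] }], fakeFetch);
```

Note that caching only makes sense for identical histories; once the thread grows, each request payload is unique and will miss the cache by design.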

 

Step 18: Deploy Your Workflow

 

When you're ready to use the workflow in production:

  • Activate the workflow by toggling the "Active" switch
  • If using n8n cloud, ensure you have proper authentication set up
  • If self-hosting, make sure your server is properly secured

 

Troubleshooting Common Issues

 

Authentication Errors:

  • Verify your API key is correct and has proper permissions
  • Check if your API key has expired or hit usage limits
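Google APIs, including Gemini, report failures in a standard envelope of the form `{ "error": { "code", "message", "status" } }`. A small classifier (illustrative, not exhaustive) can turn these into actionable hints in an error-handling Function node:

```javascript
// Classify a Gemini/Google API error body into a short action hint.
function classifyGeminiError(body) {
  const code = body && body.error && body.error.code;
  if (code === 400) return "bad request - check the payload shape and roles";
  if (code === 403) return "permission denied - check the API key";
  if (code === 429) return "rate limited - back off and retry";
  return "unexpected error - inspect body.error.message";
}
```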

Broken Thread History:

  • Gemini keeps no server-side conversation state - the full message history must be resent in the contents array with every request
  • Check that the roles in contents alternate between "user" and "model"; other role names (such as "assistant") are rejected

Workflow Not Executing:

  • Check node connections to ensure proper flow
  • Verify that function nodes don't have syntax errors
  • Use the "Run" button on individual nodes to debug specific parts
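Many 400 responses come from a malformed history, so a quick sanity check before sending can save debugging time. This sketch assumes the contents format used above (alternating user/model turns, each with a `parts` array):

```javascript
// Verify that a contents array starts with a user turn, alternates
// user/model, and gives every turn a parts array - the shape Gemini
// expects for multi-turn requests.
function isValidHistory(contents) {
  if (!Array.isArray(contents) || contents.length === 0) return false;
  return contents.every((turn, i) => {
    const expected = i % 2 === 0 ? "user" : "model";
    return turn.role === expected && Array.isArray(turn.parts);
  });
}
```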

 

By following these steps, you'll have a functional n8n workflow that can maintain conversation context with Gemini across multiple interactions, allowing for natural follow-up questions while preserving the thread history.
