
How to test responses from OpenAI in n8n without re-running the whole workflow?

Learn how to test OpenAI responses in n8n using Execution Preview to modify and preview node outputs without re-running the full workflow, saving time and API credits.

Matt Graham, CEO of Rapid Developers


To test OpenAI responses in n8n without re-running the entire workflow, use n8n's Execution Preview feature, which lets you inspect and re-test the output of individual nodes in isolation. This saves both time and API credits while you refine your OpenAI integration.

Step 1: Understand the Execution Preview Feature in n8n

The Execution Preview feature in n8n allows you to see the output of a node based on the data that was previously generated. This is especially useful when working with API nodes like OpenAI, as it prevents unnecessary API calls while you're testing or refining your workflow.
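The caching behavior this relies on can be sketched in plain JavaScript (a conceptual sketch only, not n8n's actual implementation; `callOpenAi` is a hypothetical stand-in for the real API call):

```javascript
// Conceptual sketch: preview mode acts like a cache that answers from
// the last real execution instead of calling the API again.
const cache = new Map();

async function callWithPreview(prompt, callOpenAi, previewMode) {
  // In preview mode, reuse the cached response when one exists.
  if (previewMode && cache.has(prompt)) {
    return cache.get(prompt);
  }
  // Otherwise make the real (billed) call and remember the result.
  const response = await callOpenAi(prompt);
  cache.set(prompt, response);
  return response;
}
```

The key point: once one real execution has populated the cache, every preview run answers from it at zero API cost.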

Step 2: Create a Workflow with an OpenAI Node

First, set up a workflow that includes an OpenAI node:


// Example workflow structure
Start Node → [Optional preprocessing] → OpenAI Node → [Subsequent nodes]

Make sure your OpenAI node is configured with valid API credentials and an appropriate model.

Step 3: Execute the Workflow Once

Run the workflow at least once to generate the initial data:

  1. Click the "Execute Workflow" button in the n8n interface.
  2. Wait for the execution to complete.
  3. Verify that the OpenAI node has successfully returned a response.

This execution will serve as the baseline for your testing.

Step 4: Enable Execution Preview Mode

After executing the workflow once:

  1. Look for the toggle switch labeled "Execution Preview" in the top-right corner of the n8n workflow editor.
  2. Click to enable the "Execution Preview" mode.

When enabled, a banner will appear indicating that you're now in preview mode.

Step 5: Access the OpenAI Node for Testing

  1. Click on the OpenAI node in your workflow.
  2. In the node's settings panel, you'll see data from the last execution.
  3. Any changes you make to the node will use this cached data for preview without making new API calls.

Step 6: Modify Parameters Without Re-Running

Now you can modify the OpenAI node parameters and test different configurations:

  1. Change parameters like the prompt, temperature, max tokens, or other settings.
  2. After making changes, click the "Execute Node" button (which appears as a play button on the node itself).
  3. The node will simulate execution using the cached input data but with your new parameters.

Step 7: Test Different Prompt Variations

To test different prompts:


// Example: Testing different prompts
// Original prompt
"Write a short paragraph about climate change."

// Modified prompt to test
"Write a short paragraph about climate change focusing on solutions."

After changing the prompt, click "Execute Node" to see how the output would change without actually making a new API call.
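If you plan to try several variations, a small helper (plain JavaScript; the names here are illustrative) can generate them all from one base topic, for example to feed into a Function node:

```javascript
// Generate prompt variations from a base topic and a list of aspects.
function promptVariations(topic, aspects) {
  return aspects.map(
    (aspect) => `Write a short paragraph about ${topic} focusing on ${aspect}.`
  );
}

const prompts = promptVariations('climate change', ['solutions', 'causes']);
console.log(prompts.length); // one prompt per aspect
```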

Step 8: Test Different Model Parameters

You can also test how different parameters affect the response:


// Example: Testing different parameters
// Original settings
Temperature: 0.7
Max Tokens: 150
Top P: 1

// Modified settings to test
Temperature: 0.3  // More deterministic responses
Max Tokens: 200  // Longer responses
Top P: 0.9  // Slightly more focused distribution

Click "Execute Node" after each change to preview the effect.
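To work through combinations systematically rather than ad hoc, you could enumerate a small parameter grid first (a plain-JavaScript sketch, not an n8n API):

```javascript
// Build every combination of the parameter values under test.
function parameterGrid(options) {
  return Object.entries(options).reduce(
    (combos, [key, values]) =>
      combos.flatMap((combo) =>
        values.map((value) => ({ ...combo, [key]: value }))
      ),
    [{}]
  );
}

const grid = parameterGrid({
  temperature: [0.3, 0.7],
  maxTokens: [150, 200],
});
console.log(grid.length); // 2 temperatures x 2 token limits = 4 runs to preview
```

Each entry of the grid is one configuration to apply to the node and preview in turn.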

Step 9: Using Function Nodes for Advanced Testing

For more complex testing scenarios:

  1. Add a "Function" node before your OpenAI node.
  2. Use this node to programmatically modify inputs.
  3. In preview mode, you can modify and execute just these nodes.

// Example Function node for testing different prompts
// (assumes each incoming item has `topic` and `aspect` fields)
return items.map((item) => ({
  json: {
    prompt: `Write a short paragraph about ${item.json.topic} focusing on ${item.json.aspect}`,
    // Other parameters
  },
}));

Step 10: Saving and Comparing Results

To compare different responses:

  1. Add a "Set" node (or a "Function" node, as below) after your OpenAI node.
  2. Use it to save each response under a descriptive field name.
  3. Test different configurations and save each result.

// Example Function node to save a response under a descriptive name
items[0].json.response_high_temp = items[0].json.response;
return items;
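Once a few responses are saved, a quick comparison helper (plain JavaScript; the field names follow the saving step above) makes differences easier to scan:

```javascript
// Summarize saved responses: word count plus a short preview of each.
function compareResponses(saved) {
  return Object.entries(saved).map(([label, text]) => ({
    label,
    words: text.trim().split(/\s+/).length,
    preview: text.slice(0, 40),
  }));
}

const report = compareResponses({
  response_high_temp: 'Climate change demands bold, creative action everywhere.',
  response_low_temp: 'Climate change is a serious global problem.',
});
console.log(report);
```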

Step 11: Exit Preview Mode to Apply Changes

When you're satisfied with your testing:

  1. Disable the "Execution Preview" mode by toggling the switch off.
  2. Save your workflow with the optimal settings.
  3. Execute the full workflow to generate actual API calls with your finalized configuration.

Step 12: Use Debug Mode for Deeper Testing

For more detailed analysis:

  1. Enable "Debug" mode alongside Execution Preview.
  2. This will show you detailed information about the data at each step.
  3. Click the "bug" icon on any node to see the full input and output data structure.

Step 13: Creating Test Branches in Your Workflow

For permanent testing options:

  1. Create a duplicate branch of your workflow specifically for testing.
  2. Use an "IF" node to direct flow to either the production or test branch.
  3. This allows you to maintain both versions simultaneously.

// Example IF node condition (in the expression editor) for the test branch
{{ $json.isTestMode === true }}
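The routing decision the IF node makes can also be expressed as a plain function, which is handy for unit-testing the flag logic outside n8n:

```javascript
// Decide which branch an item should take based on its test flag.
function routeBranch(item) {
  return item.json.isTestMode === true ? 'test' : 'production';
}

console.log(routeBranch({ json: { isTestMode: true } })); // 'test'
console.log(routeBranch({ json: {} }));                   // 'production'
```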

Step 14: Using Environment Variables for Testing

Set up environment variables to switch between testing and production:

  1. Set an environment variable called "OPENAI_TEST_MODE" where n8n runs (for example in your .env file or container configuration).
  2. In your workflow, use an expression to check this variable.
  3. Modify OpenAI parameters based on the environment.

// Example Code node using an environment variable to switch modes
// (reading $env can be blocked by N8N_BLOCK_ENV_ACCESS_IN_NODE)
const testMode = $env.OPENAI_TEST_MODE === 'true';

return {
  json: {
    model: testMode ? 'gpt-3.5-turbo' : 'gpt-4',
    temperature: testMode ? 0.5 : 0.7,
    // Other parameters
  }
};

Step 15: Documenting Your Test Results

Create a system for tracking test results:

  1. Add a "Code" node that logs results to a file or database.
  2. Include metadata about the test parameters used.
  3. This creates a history of tests for comparison.

// Example Code node to log test results
// (require('fs') works only when built-in modules are allowed,
// e.g. NODE_FUNCTION_ALLOW_BUILTIN=fs on self-hosted n8n)
const fs = require('fs');

const testResult = {
  timestamp: new Date().toISOString(),
  parameters: {
    model: $input.item.json.model,
    temperature: $input.item.json.temperature,
    prompt: $input.item.json.prompt
  },
  response: $input.item.json.response
};

// Append one JSON document per line (JSONL) to the log file.
fs.appendFileSync('/path/to/test_log.json', JSON.stringify(testResult) + '\n');
return $input.item;

By following these steps, you can efficiently test various OpenAI configurations in n8n without having to run the entire workflow each time, saving both time and API credits while fine-tuning your applications.
