Learn how to test OpenAI responses in n8n using Execution Preview to modify and preview node outputs without re-running the full workflow, saving time and API credits.
To test responses from OpenAI in n8n without re-running the entire workflow, you can use n8n's Execution Preview feature, which lets you inspect and test the output of individual nodes in isolation. This saves both time and API credits when working with the OpenAI integration.
Step 1: Understand the Execution Preview Feature in n8n
The Execution Preview feature in n8n lets you inspect a node's output using data from a previous execution. This is especially useful when working with API nodes like OpenAI, because it avoids unnecessary API calls while you're testing or refining your workflow.
Step 2: Create a Workflow with an OpenAI Node
First, set up a workflow that includes an OpenAI node:
// Example workflow structure
Start Node → [Optional preprocessing] → OpenAI Node → [Subsequent nodes]
Make sure your OpenAI node is properly configured with your API key and the appropriate model selection.
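Under the hood, the node's fields roughly map onto an OpenAI Chat Completions request. A minimal sketch of the kind of request body that configuration produces (the values are placeholders you would set in the node UI):
// Approximate Chat Completions request body behind the node (illustrative only)
const requestBody = {
  model: 'gpt-4',
  messages: [
    { role: 'user', content: 'Write a short paragraph about climate change.' }
  ],
  temperature: 0.7,
  max_tokens: 150,
  top_p: 1
};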
Step 3: Execute the Workflow Once
Run the workflow at least once to generate the initial data. This execution will serve as the baseline for your testing.
Step 4: Enable Execution Preview Mode
After executing the workflow once, enable Execution Preview mode. When enabled, a banner will appear indicating that you're now in preview mode.
Step 5: Access the OpenAI Node for Testing
Open (double-click) the OpenAI node on the canvas to bring up its parameter panel.
Step 6: Modify Parameters Without Re-Running
Now you can modify the OpenAI node parameters and test different configurations, as shown in the following steps.
Step 7: Test Different Prompt Variations
To test different prompts:
// Example: Testing different prompts
// Original prompt
"Write a short paragraph about climate change."
// Modified prompt to test
"Write a short paragraph about climate change focusing on solutions."
After changing the prompt, click "Execute Node" to see how the output would change without actually making a new API call.
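If you want to line up several prompt variations in one pass instead of editing the node by hand, a Code node placed before the OpenAI node can emit one item per variant (a minimal sketch; the prompts are just examples, and the OpenAI node would read them with the expression {{ $json.prompt }}):
// Code node (Run Once for All Items): emit one item per prompt variant
const prompts = [
  'Write a short paragraph about climate change.',
  'Write a short paragraph about climate change focusing on solutions.'
];

return prompts.map(prompt => ({ json: { prompt } }));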
Step 8: Test Different Model Parameters
You can also test how different parameters affect the response:
// Example: Testing different parameters
// Original settings
Temperature: 0.7
Max Tokens: 150
Top P: 1
// Modified settings to test
Temperature: 0.3 // More deterministic responses
Max Tokens: 200 // Longer responses
Top P: 0.9 // Slightly more focused distribution
Click "Execute Node" after each change to preview the effect.
Step 9: Using Code Nodes for Advanced Testing
For more complex testing scenarios:
// Example Code node (Run Once for Each Item) for building prompts dynamically
return {
  json: {
    prompt: `Write a short paragraph about ${$json.topic} focusing on ${$json.aspect}`,
    // Other parameters
  }
};
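This assumes the incoming item already carries topic and aspect fields, for example set by an upstream Set node (topic: "climate change", aspect: "solutions").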
Step 10: Saving and Comparing Results
To compare different responses:
// Example Code node to label a response for later comparison
const items = $input.all();
items[0].json.response_high_temp = items[0].json.response;
return items;
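Once responses have been saved under separate keys, another Code node can compare them side by side (a sketch; response_low_temp is a hypothetical second field saved in the same way from another run):
// Code node (Run Once for Each Item): compare two saved responses
const a = $json.response_high_temp || '';
const b = $json.response_low_temp || ''; // hypothetical field saved from another run

return {
  json: {
    lengthHighTemp: a.length,
    lengthLowTemp: b.length,
    identical: a === b
  }
};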
Step 11: Exit Preview Mode to Apply Changes
When you're satisfied with your testing, exit preview mode so that your changes are applied to real executions.
Step 12: Use Debug Mode for Deeper Testing
For more detailed analysis, you can also use n8n's debug mode to step through the data each node received and produced in a past execution.
Step 13: Creating Test Branches in Your Workflow
For permanent testing options:
// Example IF node condition (expression) for routing to a test branch
{{ $json.isTestMode === true }}
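One simple way to populate that flag is a small Code node near the start of the workflow (a sketch; the flag could equally come from a Set node or from incoming data):
// Code node (Run Once for All Items): tag every item with a test-mode flag
const isTestMode = true; // flip to false for production runs

return $input.all().map(item => ({
  json: { ...item.json, isTestMode }
}));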
Step 14: Using Environment Variables for Testing
Set up environment variables to switch between testing and production:
// Example of using environment variables for testing
const testMode = $env.OPENAI_TEST_MODE === 'true';

return {
  json: {
    model: testMode ? 'gpt-3.5-turbo' : 'gpt-4',
    temperature: testMode ? 0.5 : 0.7,
    // Other parameters
  }
};
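How you set OPENAI_TEST_MODE depends on your deployment: for a local install you can export it in the shell before starting n8n, while for Docker you pass it through the container's environment. The variable name itself is just a convention; use whatever fits your setup.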
Step 15: Documenting Your Test Results
Create a system for tracking test results:
// Example code to log test results
const fs = require('fs');

const testResult = {
  timestamp: new Date().toISOString(),
  parameters: {
    model: $input.item.json.model,
    temperature: $input.item.json.temperature,
    prompt: $input.item.json.prompt
  },
  response: $input.item.json.response
};

fs.appendFileSync('/path/to/test_log.json', JSON.stringify(testResult) + '\n');
return $input.item;
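Note that require('fs') only works on self-hosted n8n instances where the built-in module has been allowed for Code/Function nodes (for example via the NODE_FUNCTION_ALLOW_BUILTIN environment variable). If that isn't an option, write the results to a downstream node such as a spreadsheet or database instead.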
By following these steps, you can efficiently test various OpenAI configurations in n8n without having to run the entire workflow each time, saving both time and API credits while fine-tuning your applications.