RapidDev - Software Development Agency

How to Integrate Bolt.new with H2O.ai

Connect Bolt.new to H2O.ai by calling your deployed model's REST scoring endpoint through a Next.js API route. H2O AutoML trains tabular models (classification, regression) and exposes prediction via HTTP — fully compatible with Bolt's WebContainer. Store your H2O Cloud API key in .env server-side. Build the prediction UI in React; outbound API calls work in development without needing to deploy first.

What you'll learn

  • How to deploy an H2O AutoML model to H2O Cloud and get its scoring endpoint URL
  • How to call H2O model scoring endpoints through a Next.js API route with API key authentication
  • How to build a prediction form in React that sends feature inputs and displays model results
  • How to interpret H2O AutoML output — class probabilities for classification, raw scores for regression
  • How to handle H2O MOJO scoring API response format in a Bolt.new project
Intermediate · 19 min read · 45 minutes · AI/ML · April 2026 · RapidDev Engineering Team

Add AutoML Predictions to Your Bolt.new App with H2O.ai

H2O.ai's AutoML platform trains ensembles of machine learning models on tabular data — classification (churn prediction, fraud detection, lead scoring) and regression (price forecasting, demand prediction) — and deploys them to scalable REST scoring endpoints. Unlike deep learning platforms that require GPU infrastructure, H2O's tree-based and generalized linear models run on CPU infrastructure and respond to scoring requests in milliseconds. The result is low-latency, low-cost tabular ML predictions accessible over a standard HTTP API.

In Bolt.new, H2O.ai integration follows the same pattern as other ML platforms: your Next.js API route receives feature inputs from a React form, adds authentication headers from .env, calls the H2O scoring endpoint, and returns the prediction response. The React component handles the UI — input forms, loading states, result visualization. You never call H2O's scoring API from client-side code, since the API key must remain server-side.

H2O.ai is the right choice when you have structured/tabular business data and need predictive analytics without building custom deep learning infrastructure. Common enterprise use cases include: customer churn prediction (features: usage metrics, account age, support tickets), credit risk scoring (features: financial ratios, payment history), demand forecasting (features: historical sales, seasonal indicators), and insurance claim probability (features: policy attributes, claims history). H2O AutoML trains dozens of models in parallel — GBM, XGBoost, Random Forest, Deep Learning, GLM, and stacked ensembles — and selects the best performer automatically. The Bolt app provides the user-facing interface for any team member to input cases and get real-time predictions without accessing H2O's admin interface directly.

Integration method

Bolt Chat + API Route

H2O.ai AutoML models deployed to H2O Cloud or H2O AI Cloud expose HTTP scoring endpoints that accept feature values as JSON and return predictions. In Bolt.new, you call these endpoints through a Next.js API route that adds your H2O API key from .env — keeping credentials server-side while React components display prediction results. H2O's REST scoring API works identically for models trained with AutoML or custom Python H2O models.

Prerequisites

  • An H2O.ai account with a trained and deployed model (H2O Cloud at cloud.h2o.ai or self-hosted H2O AI Cloud)
  • Your H2O model's scoring endpoint URL (available in H2O Cloud → Models → Deployments → select deployment → Endpoint URL)
  • An H2O Cloud API key (Settings → API Keys → Create API Key) or your H2O cluster's endpoint with configured authentication
  • Knowledge of your model's expected feature names and data types (from the AutoML training dataset's column schema)
  • A Bolt.new project using Next.js (request Next.js when creating your project for API route support)

Step-by-step guide

1

Get Your H2O Model Scoring Endpoint and API Credentials

After training an AutoML model in H2O Cloud, you need to deploy it to a scoring endpoint before you can call it from a Bolt app. In H2O Cloud (cloud.h2o.ai), navigate to AI Engines → H2O-3 or H2O AutoML, select your completed AutoML run, and identify the best model (usually the Stacked Ensemble or the top GBM). Click Deploy to Deployment Environment to make the model available as a REST API. H2O Cloud provisions a dedicated endpoint URL — it looks like https://model.cloud.h2o.ai/3ab7c2f1/model/score or similar, depending on your instance. This URL is your H2O_SCORING_URL.

For API authentication, go to your H2O Cloud account Settings → API Keys → Create API Key. Give it a descriptive name like 'Bolt App' and copy the generated key. For self-hosted H2O AI Cloud or H2O Wave deployments, the endpoint format is typically https://your-cluster.company.com/3/predict or /mojo_predict, depending on your deployment configuration.

In Bolt.new, add to your .env file: H2O_SCORING_URL=https://model.cloud.h2o.ai/your-endpoint and H2O_API_KEY=your_api_key. Do not add a NEXT_PUBLIC_ or VITE_ prefix — the API key must stay server-side. Also add H2O_MODEL_NAME if your deployment requires specifying the model identifier in the request.

The exact request and response format varies slightly between H2O Driverless AI, H2O-3, and H2O AutoML deployments — check your model's API documentation in the H2O Cloud interface under the deployment's API Reference tab, which shows example curl requests and response formats for your specific model.
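The resulting .env file might look like the following (placeholder values only — substitute your own endpoint and key; H2O_MODEL_NAME is optional and deployment-specific):

```env
# .env — server-side only; no NEXT_PUBLIC_ or VITE_ prefix
H2O_SCORING_URL=https://model.cloud.h2o.ai/your-endpoint
H2O_API_KEY=your_api_key
# Only if your deployment requires a model identifier in the request
H2O_MODEL_NAME=your_model_name
```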

Bolt.new Prompt

Set up H2O.ai scoring credentials in my Bolt project. Create a .env file with H2O_SCORING_URL and H2O_API_KEY as placeholder variables. Create a lib/h2o.ts utility that exports a scoreWithH2O function accepting a features object. The function should call fetch on H2O_SCORING_URL using POST method, with Authorization header using the API key, Content-Type application/json, and body as { fields: Object.keys(features), rows: [Object.values(features)] } (H2O MOJO scorer format). Return the parsed prediction response.

Paste this in Bolt.new chat

lib/h2o.ts
// lib/h2o.ts
const SCORING_URL = process.env.H2O_SCORING_URL;
const API_KEY = process.env.H2O_API_KEY;

interface H2OPredictionResponse {
  // H2O MOJO scorer format
  score?: number[][]; // regression: [[predicted_value]]
  label?: string; // classification: predicted class
  classProbabilities?: number[]; // classification: per-class probabilities
  // Alternative H2O-3 format
  predictions?: Array<{ predict: string; p0?: number; p1?: number; value?: number }>;
  [key: string]: unknown;
}

export async function scoreWithH2O(
  features: Record<string, string | number>
): Promise<H2OPredictionResponse> {
  if (!SCORING_URL || !API_KEY) {
    throw new Error('H2O_SCORING_URL and H2O_API_KEY must be set in .env');
  }

  // H2O MOJO scorer format: { fields: [col names], rows: [[values]] }
  const requestBody = {
    fields: Object.keys(features),
    rows: [Object.values(features).map(String)], // H2O expects string values
  };

  const response = await fetch(SCORING_URL, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${API_KEY}`,
      'Content-Type': 'application/json',
      Accept: 'application/json',
    },
    body: JSON.stringify(requestBody),
  });

  if (!response.ok) {
    const error = await response.text();
    throw new Error(`H2O scoring error ${response.status}: ${error}`);
  }

  return response.json() as Promise<H2OPredictionResponse>;
}

Pro tip: H2O MOJO scoring endpoints expect feature values as strings in the rows array — convert numbers to strings with .map(String) or the scorer may reject the request. The fields array must match your model's training feature names exactly, including capitalization and underscores.
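The string conversion is easy to sanity-check in isolation — this small sketch (illustrative feature values, not from a real model) builds the MOJO-format body the same way the helper does:

```typescript
// Build the MOJO scorer body: numbers become strings via map(String),
// and fields/rows stay aligned because both use the object's key order.
const features: Record<string, string | number> = {
  monthly_usage: 250,
  contract_type: 'annual',
};

const body = {
  fields: Object.keys(features),
  rows: [Object.values(features).map(String)],
};

console.log(body.fields); // ['monthly_usage', 'contract_type']
console.log(body.rows); // [['250', 'annual']]
```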

Expected result: The scoreWithH2O helper is available for use in any Next.js API route. Calling it with a features object proxies the request to your H2O scoring endpoint with proper authentication.

2

Build the Prediction API Route

The Next.js API route acts as a thin proxy between your React components and the H2O scoring endpoint, adding authentication and transforming the response into a format your UI can display. The critical job of this route is validating input features before sending them to H2O — an incorrectly typed feature or missing required column will cause H2O to return an error. Validate that numeric features are actual numbers (not NaN from empty form fields), that categorical features match the expected categories from training, and that all required features are present.

H2O's classification models return probabilities for each class in your target variable. For binary classification (churn: yes/no), H2O returns an array like [0.23, 0.77], where index 0 is the probability of class '0' (no churn) and index 1 is the probability of class '1' (churn). The class order depends on how your training target column was encoded — check your model's column summary to confirm which class is at index 0. For multi-class classification, you get one probability per class. For regression models, H2O returns a single predicted value.

H2O AutoML models also support returning Shapley values (per-prediction feature importance), which tell you why the model made a specific prediction — valuable for high-stakes decisions like credit scoring. To get Shapley values, add 'contributions': true to the request body if your H2O deployment supports the contributions API.

The H2O-3 predict endpoint format differs slightly from the MOJO scorer format — in H2O-3, predictions are returned as { predictions: [{ predict: 'yes', p0: 0.23, p1: 0.77 }] }, where predict is the top class and p0/p1 are per-class probabilities.
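The two response shapes normalize to the same number. A sketch with illustrative values (these are example objects in the shapes described above, not captured API responses; it assumes class '1' is the churn class, per the encoding discussion):

```typescript
// H2O-3 format: per-class probabilities as p0/p1 fields
const h2o3Response = { predictions: [{ predict: '1', p0: 0.23, p1: 0.77 }] };

// MOJO scorer format: probabilities as an array indexed by class
const mojoResponse = { label: '1', classProbabilities: [0.23, 0.77] };

// Both yield the probability of class '1' (churn, in this example)
const churnFromH2o3 = h2o3Response.predictions[0].p1; // 0.77
const churnFromMojo = mojoResponse.classProbabilities[1]; // 0.77
```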

Bolt.new Prompt

Create a prediction API route at app/api/predict/route.ts that uses the scoreWithH2O helper from lib/h2o.ts. The route should accept POST requests with a features object, validate that all required numeric features are valid numbers (not NaN), call scoreWithH2O, and return a standardized response with: predicted_class (for classification) or predicted_value (for regression), probability (confidence 0-1), and raw_response for debugging. Add input validation that returns 400 with field-level error messages if required features are missing or invalid.

Paste this in Bolt.new chat

app/api/predict/route.ts
// app/api/predict/route.ts
import { NextRequest, NextResponse } from 'next/server';
import { scoreWithH2O } from '@/lib/h2o';

// Define your model's required features and their types
const REQUIRED_FEATURES: Record<string, 'number' | 'string'> = {
  monthly_usage: 'number',
  feature_adoption_pct: 'number',
  support_tickets_30d: 'number',
  account_age_months: 'number',
  contract_type: 'string',
};

function validateFeatures(features: Record<string, unknown>): string[] {
  const errors: string[] = [];
  for (const [field, type] of Object.entries(REQUIRED_FEATURES)) {
    if (features[field] === undefined || features[field] === '') {
      errors.push(`${field} is required`);
    } else if (type === 'number' && isNaN(Number(features[field]))) {
      errors.push(`${field} must be a number`);
    }
  }
  return errors;
}

export async function POST(request: NextRequest) {
  const body = await request.json();
  const { features } = body;

  // Guard against a missing features object before field-level validation
  if (!features || typeof features !== 'object') {
    return NextResponse.json(
      { error: 'Request body must include a features object' },
      { status: 400 }
    );
  }

  const validationErrors = validateFeatures(features);
  if (validationErrors.length > 0) {
    return NextResponse.json({ error: 'Validation failed', details: validationErrors }, { status: 400 });
  }

  try {
    const h2oResponse = await scoreWithH2O(features);

    // Handle both H2O-3 and MOJO scorer response formats
    let predictedClass: string | null = null;
    let probability: number | null = null;
    let predictedValue: number | null = null;

    if (h2oResponse.predictions?.[0]) {
      // H2O-3 format
      const pred = h2oResponse.predictions[0];
      predictedClass = pred.predict;
      probability = pred.p1 ?? null; // p1 = probability of positive class
    } else if (h2oResponse.classProbabilities) {
      // MOJO scorer format
      predictedClass = h2oResponse.label ?? null;
      probability = Math.max(...h2oResponse.classProbabilities);
    } else if (h2oResponse.score?.[0]?.[0] !== undefined) {
      // Regression
      predictedValue = h2oResponse.score[0][0];
    }

    return NextResponse.json({
      predicted_class: predictedClass,
      predicted_value: predictedValue,
      probability,
      raw_response: h2oResponse,
    });
  } catch (error) {
    return NextResponse.json(
      { error: error instanceof Error ? error.message : 'Scoring failed' },
      { status: 500 }
    );
  }
}

Pro tip: H2O AutoML generates models with specific feature column names from your training dataset — the field names in your request must match exactly. If your training data used camelCase (monthlyUsage) but the route sends snake_case (monthly_usage), the scoring will fail silently or return unexpected results.

Expected result: POSTing feature values to /api/predict returns a standardized prediction response with predicted class, probability, and the raw H2O response for debugging.

3

Build the Prediction Form and Results UI

The React prediction form is the user-facing layer of your H2O integration. For tabular ML models, form design matters significantly — it determines data quality. Form inputs must enforce the same constraints as your training data: if monthly_usage was capped at 1000 in training, use max=1000 on the slider. If contract_type had only two values in training ('monthly' and 'annual'), the select must offer exactly those options with identical casing. Mismatched or out-of-distribution inputs produce unreliable predictions without error messages.

For numeric inputs, sliders are preferable to free-text number fields — they make it impossible to enter out-of-range values and give users a visual sense of where a value sits in the expected range. For categorical inputs, use select dropdowns populated with the exact values from your training data.

After the API returns a prediction, display results in a way that helps users take action. For binary classification, translate the probability number into a risk tier (0-30% = Low, 30-70% = Medium, 70-100% = High) with corresponding colors. For regression, show the predicted value prominently with context (e.g., 'Estimated monthly churn savings: $4,200'). Include a clear disclaimer that predictions are probabilistic — this is important for business-critical decisions.

H2O AutoML predictions work well in Bolt's WebContainer during development — the API route makes an outbound HTTPS call to H2O Cloud, which is a standard HTTP request the WebContainer supports. You'll get real predictions from your deployed model during development without needing to deploy your Bolt app first.

Bolt.new Prompt

Build a PredictionForm React component in components/PredictionForm.tsx for my churn prediction model. Include: a form with four range sliders (monthly_usage 0-1000, feature_adoption_pct 0-100, support_tickets_30d 0-20, account_age_months 1-120) each showing the current value next to the label, and one select dropdown for contract_type (monthly/annual). Add a Predict Churn Risk button that POSTs to /api/predict. Display results as a risk dashboard card showing: the churn probability percentage in large text, a colored risk badge (Low/Medium/High), a horizontal gauge bar filled proportionally, and the raw prediction value in small gray text. Show loading state during prediction. Handle and display API errors.

Paste this in Bolt.new chat

components/PredictionForm.tsx
// components/PredictionForm.tsx
'use client';
import { useState } from 'react';

interface PredictResponse {
  predicted_class: string | null;
  probability: number | null;
  predicted_value: number | null;
  error?: string;
}

interface Features {
  monthly_usage: number;
  feature_adoption_pct: number;
  support_tickets_30d: number;
  account_age_months: number;
  contract_type: string;
}

const INITIAL_FEATURES: Features = {
  monthly_usage: 200,
  feature_adoption_pct: 45,
  support_tickets_30d: 2,
  account_age_months: 18,
  contract_type: 'monthly',
};

export function PredictionForm() {
  const [features, setFeatures] = useState<Features>(INITIAL_FEATURES);
  const [result, setResult] = useState<PredictResponse | null>(null);
  const [loading, setLoading] = useState(false);

  const set = (key: keyof Features) => (value: number | string) =>
    setFeatures((f) => ({ ...f, [key]: value }));

  const handlePredict = async () => {
    setLoading(true);
    setResult(null);
    try {
      const res = await fetch('/api/predict', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ features }),
      });
      const data: PredictResponse = await res.json();
      setResult(data);
    } catch {
      // Network or parsing failure — surface it in the same error slot
      setResult({
        predicted_class: null,
        probability: null,
        predicted_value: null,
        error: 'Prediction request failed',
      });
    } finally {
      setLoading(false);
    }
  };

  const riskTier = (p: number) =>
    p >= 0.7 ? { label: 'High Risk', color: 'bg-red-500 text-white' }
    : p >= 0.3 ? { label: 'Medium Risk', color: 'bg-yellow-400 text-yellow-900' }
    : { label: 'Low Risk', color: 'bg-green-500 text-white' };

  const sliders: Array<{ key: keyof Features; label: string; min: number; max: number }> = [
    { key: 'monthly_usage', label: 'Monthly Usage', min: 0, max: 1000 },
    { key: 'feature_adoption_pct', label: 'Feature Adoption %', min: 0, max: 100 },
    { key: 'support_tickets_30d', label: 'Support Tickets (30d)', min: 0, max: 20 },
    { key: 'account_age_months', label: 'Account Age (months)', min: 1, max: 120 },
  ];

  return (
    <div className="max-w-lg mx-auto p-6 space-y-5">
      <h2 className="text-2xl font-bold">Churn Risk Predictor</h2>
      <p className="text-sm text-gray-500">Powered by H2O AutoML. Decision support tool: predictions require human review.</p>

      {sliders.map(({ key, label, min, max }) => (
        <div key={key}>
          <div className="flex justify-between text-sm font-medium mb-1">
            <span>{label}</span>
            <span className="text-blue-600">{features[key]}</span>
          </div>
          <input type="range" min={min} max={max}
            value={features[key] as number}
            onChange={(e) => set(key)(Number(e.target.value))}
            className="w-full" />
        </div>
      ))}

      <div>
        <label className="block text-sm font-medium mb-1">Contract Type</label>
        <select value={features.contract_type} onChange={(e) => set('contract_type')(e.target.value)}
          className="w-full border rounded px-3 py-2">
          <option value="monthly">Monthly</option>
          <option value="annual">Annual</option>
        </select>
      </div>

      <button onClick={handlePredict} disabled={loading}
        className="w-full bg-blue-600 text-white py-2 rounded hover:bg-blue-700 disabled:opacity-50">
        {loading ? 'Predicting...' : 'Predict Churn Risk'}
      </button>

      {result?.error && <p className="text-red-600 text-sm">{result.error}</p>}

      {result?.probability != null && (() => {
        const tier = riskTier(result.probability);
        const pct = (result.probability * 100).toFixed(1);
        return (
          <div className="border rounded-lg p-4 space-y-3">
            <div className="flex items-center gap-3">
              <span className="text-4xl font-bold">{pct}%</span>
              <span className={`px-3 py-1 rounded-full text-sm font-semibold ${tier.color}`}>{tier.label}</span>
            </div>
            <div className="w-full bg-gray-200 rounded-full h-3">
              <div className="bg-blue-600 h-3 rounded-full transition-all" style={{ width: `${pct}%` }} />
            </div>
            <p className="text-xs text-gray-400">Churn probability score: {result.probability.toFixed(4)}</p>
          </div>
        );
      })()}
    </div>
  );
}

Pro tip: H2O AutoML scoring calls work from Bolt's WebContainer during development — the Next.js API route makes an outbound HTTPS call to H2O Cloud, which is allowed in the WebContainer. You get real model predictions during development without needing to deploy your Bolt app.

Expected result: The prediction form shows sliders and dropdowns for model features. Clicking 'Predict Churn Risk' calls /api/predict, and the result card shows the churn probability percentage, colored risk tier badge, and a filled gauge bar.

4

Deploy and Integrate with Production Data

With the prediction interface working in development, deploying to Netlify or Bolt Cloud connects your app to production H2O models and enables real usage by business teams. Before deploying, decide whether your prediction form should be publicly accessible or require authentication — for sensitive business models (credit scoring, churn prediction), restrict access to authenticated users with role-based permissions using Clerk, Auth0, or NextAuth.

H2O Cloud scoring endpoints have rate limits and quotas — the free tier allows limited daily predictions. For business use, ensure your H2O Cloud subscription covers your expected request volume. Monitor endpoint usage in the H2O Cloud dashboard under Usage → Scoring Requests.

For high-volume use cases (thousands of predictions per day), consider batch scoring: instead of one prediction per form submit, accumulate multiple cases and send them in a single API call using H2O's batch scoring endpoint, which accepts multiple rows in the rows array simultaneously. H2O also provides a webhook for model scoring callbacks (Driverless AI Enterprise) — though this requires a publicly accessible URL, so configure it with your deployed domain rather than the WebContainer preview URL.

After deploying, add your environment variables in Netlify's Site Configuration → Environment Variables (H2O_SCORING_URL, H2O_API_KEY) and redeploy. Use Netlify's deploy previews to test model endpoint connectivity before promoting to production.
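Because the MOJO request body already carries an array of rows, batching is a small generalization of the single-row helper. A sketch, assuming your deployment accepts multiple rows per call (buildBatchBody is a hypothetical helper, not part of the code above):

```typescript
type FeatureRow = Record<string, string | number>;

// Build one MOJO-format body containing several cases at once.
// Field order is taken from the first case; all cases must share the same keys.
function buildBatchBody(cases: FeatureRow[]) {
  const fields = Object.keys(cases[0]);
  return {
    fields,
    rows: cases.map((c) => fields.map((f) => String(c[f]))),
  };
}

const body = buildBatchBody([
  { monthly_usage: 250, contract_type: 'annual' },
  { monthly_usage: 900, contract_type: 'monthly' },
]);
console.log(body.rows.length); // 2
```

The response should then contain one prediction per row, in the same order as the rows you sent.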

Bolt.new Prompt

Add prediction logging to my H2O integration. Update app/api/predict/route.ts to log each prediction to a local JSON log file (or console.log for Netlify/Bolt Cloud where filesystem isn't persistent): timestamp, input features, predicted class, probability, and response time in milliseconds. Add a /api/predictions/stats endpoint that returns: total predictions today, average probability, high-risk count (probability > 0.7), and last prediction timestamp. Display these stats in a small metrics bar above the prediction form. After deploying to Netlify, update H2O_SCORING_URL and H2O_API_KEY in Netlify environment variables.

Paste this in Bolt.new chat

app/api/predictions/stats/route.ts
// app/api/predictions/stats/route.ts
// In production (Netlify/Bolt Cloud), replace in-memory storage with a real DB
import { NextResponse } from 'next/server';

// In-memory log (persists for lifetime of server process)
// Replace with Supabase/database for production persistence
// Note: Next.js App Router route files may only export HTTP handlers,
// so in a real project move predictionLog and logPrediction into a shared
// module (e.g. lib/prediction-log.ts) and import it from both routes.
const predictionLog: Array<{
  timestamp: number;
  probability: number;
  responseTimeMs: number;
}> = [];

export function logPrediction(probability: number, responseTimeMs: number) {
  predictionLog.push({ timestamp: Date.now(), probability, responseTimeMs });
  // Keep only last 1000 predictions in memory
  if (predictionLog.length > 1000) predictionLog.shift();
}

export async function GET() {
  const today = new Date();
  today.setHours(0, 0, 0, 0);
  const todayTs = today.getTime();

  const todayPredictions = predictionLog.filter((p) => p.timestamp >= todayTs);

  const stats = {
    total_today: todayPredictions.length,
    avg_probability: todayPredictions.length > 0
      ? (todayPredictions.reduce((sum, p) => sum + p.probability, 0) / todayPredictions.length).toFixed(3)
      : null,
    high_risk_count: todayPredictions.filter((p) => p.probability >= 0.7).length,
    avg_response_ms: todayPredictions.length > 0
      ? Math.round(todayPredictions.reduce((sum, p) => sum + p.responseTimeMs, 0) / todayPredictions.length)
      : null,
    last_prediction: predictionLog.length > 0
      ? new Date(predictionLog[predictionLog.length - 1].timestamp).toISOString()
      : null,
  };

  return NextResponse.json(stats);
}

Pro tip: The in-memory log resets when the server restarts. For persistent prediction logging in production, replace the in-memory array with a Supabase table — add a predictions table with columns for timestamp, features (JSONB), probability, and response_time, and insert a row from your prediction API route on each successful call.

Expected result: After deploying to Netlify with H2O environment variables set, predictions run against your production H2O model. The stats endpoint shows daily prediction volume and average risk scores.

Common use cases

Customer Churn Prediction Dashboard

Build an internal tool where customer success managers input customer metrics (monthly usage, feature adoption, support tickets, account age) and get an instant churn probability prediction. The H2O model was trained on historical customer data; the Bolt app provides the clean, focused UI without requiring CS managers to access H2O's admin panel.

Bolt.new Prompt

Build a customer churn prediction form. Create a Next.js API route at app/api/predict-churn/route.ts that accepts customer features and calls my H2O scoring endpoint at H2O_SCORING_URL using the H2O_API_KEY from .env. The features are: monthly_usage (0-1000 number), feature_adoption_pct (0-100 number), support_tickets_30d (0-20 integer), account_age_months (1-120 integer), and contract_type (monthly/annual string). Return the churn probability (0-1). Build a ChurnPredictor React component with sliders and dropdowns for each feature, a Predict button, and a result section showing 'Churn Risk: High/Medium/Low' with a percentage and a colored risk gauge.

Copy this prompt to try it in Bolt.new

Real Estate Price Prediction Tool

Build a property price estimator using an H2O regression model trained on real estate data. Agents or buyers enter property features (square footage, bedrooms, neighborhood, year built) and get an instant market value estimate. Shows prediction confidence interval alongside the point estimate.

Bolt.new Prompt

Build a property price predictor using my H2O regression model. Create app/api/predict-price/route.ts that sends property features to my H2O scoring endpoint. Features: sqft (500-10000), bedrooms (1-8), bathrooms (1-6), year_built (1900-2025), neighborhood (Downtown/Suburbs/Rural), garage_spaces (0-4). Call H2O_SCORING_URL with H2O_API_KEY. Return the predicted_price (regression output). Build a PropertyEstimator React component with a form for all features and a results card showing the estimated price in large text, formatted as currency, with an 'estimated range' of ±15% shown as a secondary display.

Copy this prompt to try it in Bolt.new

Loan Approval Risk Scoring

Build a loan risk assessment interface for underwriters. Input applicant financial metrics and get an instant probability of default prediction from an H2O binary classification model. The tool supplements (not replaces) human judgment by surfacing the most predictive factors and their relative importance.

Bolt.new Prompt

Build a loan risk scoring tool using my H2O binary classification model. Create app/api/score-loan/route.ts that sends applicant features to H2O_SCORING_URL with H2O_API_KEY authentication. Features: annual_income (0-500000), debt_to_income (0-100), credit_score (300-850), employment_years (0-40), loan_amount (1000-500000), loan_purpose (debt_consolidation/home_improvement/business/other). Return the probability of default (class=1 probability from H2O response). Build an underwriter UI showing the risk score as a dial chart (green 0-30%, yellow 30-60%, red 60-100%), recommended decision (Approve/Review/Decline), and a note that this is a decision support tool requiring human review.

Copy this prompt to try it in Bolt.new

Troubleshooting

H2O scoring endpoint returns 400 Bad Request or 'Feature mismatch' error

Cause: The feature field names in your request don't exactly match the column names from the training dataset. H2O is case-sensitive — 'Monthly_Usage' and 'monthly_usage' are different columns. Also common: extra or missing features, or categorical values that weren't in the training data.

Solution: Compare your request's field names against your model's feature list in H2O Cloud → Models → select your model → Feature Importance or Summary, which lists exact column names. Ensure your REQUIRED_FEATURES object uses identical names. Test with a curl request first to isolate API authentication vs. payload issues.

typescript
// Use exact column names from your H2O training data
// Check H2O Cloud → Your Model → Feature Importance for the exact names
const features = {
  monthly_usage: 250, // exact name from training CSV column
  feature_adoption_pct: 62, // must match column header exactly
  contract_type: 'annual', // categorical values must be in training set
};

Authentication fails with 401 Unauthorized even though H2O_API_KEY looks correct

Cause: H2O Cloud uses Bearer token authentication, but some H2O deployments or older versions use different auth formats. The API key may have been regenerated or revoked. H2O Driverless AI uses a different authentication mechanism than H2O-3.

Solution: Verify the API key is current in H2O Cloud Settings → API Keys. Check whether your deployment uses Bearer authentication or a different scheme (some H2O instances use an 'api-key' header instead of 'Authorization: Bearer'). Test the endpoint directly using curl:

curl -X POST -H 'Authorization: Bearer YOUR_KEY' -H 'Content-Type: application/json' -d '{"fields":["col1"],"rows":[["value"]]}' YOUR_SCORING_URL

typescript
// H2O Cloud standard format
headers: { Authorization: `Bearer ${process.env.H2O_API_KEY}` }

// Some H2O deployments use a different header instead
headers: { 'api-key': process.env.H2O_API_KEY }

// Test which format your deployment requires by checking the
// H2O Cloud → Deployment → API Reference tab

Prediction returns unexpected class probabilities (e.g., churn probability is inverted — low probability for customers who churned)

Cause: H2O classification models return probabilities for each class in the order they appear in the training data. For binary classification, p0 is the probability of the class that appears first, p1 is the second class. If your target column was encoded as 0='churned', 1='retained', then p1 = probability of retention (not churn), which is the inverse of what you may expect.

Solution: Check your model's class encoding in H2O Cloud → Model → Summary → Target column. Note which class is 0 and which is 1. Adjust your display logic to use the correct probability index. For churn prediction, if class '1' = churned, use p1 as the churn probability.

typescript
// Check which index is your 'positive' class
// H2O-3 format: predictions[0].p1 = probability of class at index 1
const churnProbability = h2oResponse.predictions[0].p1; // if class 1 = 'churned'
// or
const retentionProbability = h2oResponse.predictions[0].p0; // if class 0 = 'retained'
// Churn risk = 1 - retention probability (if model output is 'retained' probability)

H2O scoring endpoint returns correct predictions in development but 500 errors after deploying to Netlify

Cause: H2O_SCORING_URL and H2O_API_KEY environment variables are missing from Netlify's deployment configuration. In development, they're read from .env; in production, they must be set in the hosting dashboard.

Solution: Go to Netlify → Site Configuration → Environment Variables and add H2O_SCORING_URL and H2O_API_KEY with the same values as your local .env file. Trigger a redeploy (Deploys → Trigger deploy → Deploy site) to apply the new environment variables to your serverless functions.

Best practices

  • Always proxy H2O scoring calls through a Next.js API route — the H2O API key must never appear in client-side code, browser network requests, or React component fetch calls
  • Validate feature inputs before calling H2O — check for required fields, correct types, and value ranges matching your training data, and return 400 errors with clear field-level messages for invalid inputs
  • Use the exact feature column names from your H2O training dataset in your API route's REQUIRED_FEATURES config — mismatched names cause silent prediction failures or 400 errors from H2O
  • Display predictions as human-interpretable tiers (Low/Medium/High risk) rather than raw probability numbers — business users need actionable categories, not four decimal places
  • Include a disclaimer that H2O predictions are probabilistic models supporting human decisions, not automated decision-makers — especially important for high-stakes use cases like credit scoring or hiring
  • Monitor H2O scoring endpoint latency and error rates — H2O Cloud endpoints can experience cold starts after inactivity, adding 2-10 seconds to the first request after a period of no traffic
  • Test predictions during development in Bolt's WebContainer — outbound HTTPS calls to H2O Cloud work fine, so you can verify real model behavior without deploying first
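The input-validation and string-conversion practices above can be sketched as a small helper module. This is illustrative only — the `REQUIRED_FEATURES` column names, `validateFeatures`, and `buildScoringPayload` are example names, not part of any H2O SDK; substitute your own training dataset's columns:

```typescript
// Feature columns exactly as they appear in the H2O training dataset
// (example columns — replace with your own).
const REQUIRED_FEATURES = ["age", "income", "tenure_months"] as const;

type FeatureInput = Record<string, unknown>;

// Returns field-level error messages; an empty array means the input is valid.
// Your API route can return these in a 400 response body.
function validateFeatures(input: FeatureInput): string[] {
  const errors: string[] = [];
  for (const field of REQUIRED_FEATURES) {
    const value = input[field];
    if (value === undefined || value === null || value === "") {
      errors.push(`${field}: missing required field`);
    } else if (Number.isNaN(Number(value))) {
      errors.push(`${field}: expected a numeric value`);
    }
  }
  return errors;
}

// H2O MOJO scoring format: column names plus one row of string-converted values.
function buildScoringPayload(input: FeatureInput) {
  return {
    fields: [...REQUIRED_FEATURES],
    rows: [REQUIRED_FEATURES.map((f) => String(input[f]))],
  };
}
```

Because feature names come from a single `REQUIRED_FEATURES` constant, a renamed training column only needs to be updated in one place.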


Frequently asked questions

How do I connect Bolt.new to H2O.ai?

Deploy your H2O AutoML model to a scoring endpoint in H2O Cloud, copy the endpoint URL and an API key from H2O Cloud Settings, and add them as H2O_SCORING_URL and H2O_API_KEY in your Bolt project's .env file. Build a Next.js API route that calls the scoring endpoint with Bearer authentication, transforming your form's feature inputs into H2O's JSON format. React components call your own /api/predict route — never H2O directly.
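A minimal `.env` for the setup described above (the values shown are placeholders — use the endpoint URL and API key from your own H2O Cloud deployment):

```
H2O_SCORING_URL=https://your-deployment.example.com/model/score
H2O_API_KEY=your-h2o-api-key
```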

Do H2O.ai API calls work in Bolt's WebContainer during development?

Yes — your Next.js API route makes an outbound HTTPS call to H2O Cloud, which works fine in Bolt's WebContainer. You'll get real predictions from your deployed model during development without needing to deploy your Bolt app first. The WebContainer restriction only applies to incoming traffic (webhooks, incoming connections) — outbound HTTP calls always work.

What format does H2O's scoring endpoint expect?

H2O MOJO scoring endpoints expect JSON with two fields: fields (an array of column name strings in the same order as your training data) and rows (an array of arrays, one inner array per prediction case, containing string-converted feature values). Example: { fields: ['age', 'income'], rows: [['35', '75000']] }. Note that H2O expects numeric values as strings in the rows array — convert all values with .toString() or .map(String).
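The conversion described above can be written as a small generic helper. `toH2ORequest` is an illustrative name, not an H2O API; it simply reshapes feature objects into the fields/rows format, stringifying every value:

```typescript
// Convert an array of feature objects into H2O's fields/rows request shape.
// Each inner array of `rows` is one prediction case, in field order,
// with all values converted to strings as H2O expects.
function toH2ORequest(
  cases: Record<string, number | string>[],
  fields: string[]
) {
  return {
    fields,
    rows: cases.map((c) => fields.map((f) => String(c[f]))),
  };
}

toH2ORequest([{ age: 35, income: 75000 }], ["age", "income"]);
// → { fields: ["age", "income"], rows: [["35", "75000"]] }
```

Passing `fields` explicitly (rather than deriving it from `Object.keys`) guarantees the column order matches your training data even if the input object's key order differs.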

How do I interpret H2O AutoML classification output?

H2O binary classification returns probabilities for each class: p0 (probability of class '0') and p1 (probability of class '1') in H2O-3 format, or a classProbabilities array in MOJO format. The class ordering depends on your training target column encoding — check H2O Cloud → Model → Summary to see which class is index 0 and which is index 1. For churn prediction where class '1' = churned, use p1 as the churn probability.

Can H2O.ai return feature importance scores per prediction?

Yes — H2O Driverless AI and some H2O-3 models support Shapley values (SHAP) that explain which features contributed most to a specific prediction. Add 'contributions': true to your scoring request body if your deployment supports it. This returns a per-feature contribution array alongside the prediction probability, useful for explaining high-risk predictions to business users.
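If your deployment does return per-feature contributions, you typically want the top drivers rather than the full array. The sketch below assumes the contributions array is aligned index-for-index with the `fields` array — verify this against your deployment's API reference; `topContributions` is an illustrative helper name:

```typescript
// Pair feature names with their SHAP-style contributions and return the
// top-k drivers by absolute contribution (largest positive or negative
// influence on the prediction).
function topContributions(
  fields: string[],
  contributions: number[],
  k = 3
): { field: string; contribution: number }[] {
  return fields
    .map((field, i) => ({ field, contribution: contributions[i] }))
    .sort((a, b) => Math.abs(b.contribution) - Math.abs(a.contribution))
    .slice(0, k);
}
```

Sorting by absolute value matters: a strongly negative contribution (pushing the prediction away from the positive class) is just as explanatory as a strongly positive one.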
