Lovable AI is a built-in shared connector that adds LLM-powered features — summaries, chatbots, sentiment analysis, and translation — directly to your deployed app. Enable it in Settings → Connectors, then ask Lovable to build any AI feature in plain English. It uses Gemini 3 Flash under the hood and draws from your workspace's $1 AI balance. No API keys, no Edge Functions, no external accounts required.
Add LLM-powered features to your app without any external API accounts
Lovable ships with two distinct AI systems, and understanding the difference is the key to using both effectively. The first is the AI agent — the conversational interface you use inside Lovable's editor to build your app. The second is the Lovable AI connector, which is something completely different: it gives the people who use your deployed app access to LLM-powered features like chatbots, text summarization, sentiment analysis, and translation. This tutorial is about the second one.
The Lovable AI connector is powered by Gemini 3 Flash and managed entirely by Lovable. You do not need a Google account, a Gemini API key, or any external service. When you enable the connector and describe an AI feature in the chat — for example, 'add a button that summarizes the selected customer notes' — Lovable's AI agent generates the correct frontend and backend code, wires it to the Gemini model automatically, and deploys it. Your end users get the feature; you never touch an API key.
Usage is billed against your workspace's AI balance, which is currently $1 per month (provided free through Q1 2026 as a platform incentive). For most apps with moderate usage, this is enough for thousands of LLM calls per month. The Lovable AI connector is ideal for non-technical founders who want to ship AI features quickly — things like a support chatbot, a content summarizer, a feedback sentiment scorer, or a multilingual UI — without learning the Gemini or OpenAI APIs. For high-volume production apps or apps that need specific model capabilities beyond Gemini 3 Flash, the OpenAI GPT or Perplexity connectors may be better fits.
Integration method
Lovable AI connects through the shared connectors system in Settings → Connectors, giving Lovable's AI agent full context to generate correct LLM feature code without any manual API wiring.
Prerequisites
- A Lovable account (free tier is sufficient to enable the connector)
- A Lovable Cloud project (Lovable Cloud is enabled by default for all workspaces)
- Your app deployed or in active development — the Lovable AI connector works in deployed apps, not just the Lovable editor preview
- A basic idea of which AI feature you want to add — even a rough description works well
Step-by-step guide
Enable the Lovable AI connector in Settings
Open your Lovable project. In the top-right area of the editor, click the Settings gear icon to open the project Settings panel. In the left sidebar of Settings, click Connectors. You will see two sections: Shared connectors and Personal connectors (MCP).
Look for Lovable AI in the Shared connectors list — it is one of the 17 built-in connectors. Click the toggle or Connect button next to Lovable AI to enable it. You may see a brief confirmation message. Once enabled, the connector status indicator turns active (green, or marked as Connected, depending on your interface version).
Note that Lovable AI draws from your workspace's AI balance, not from a personal API account. There is no OAuth flow, no external login, and no API key to manage. The $1 monthly AI balance is shared across all projects in your workspace. If you have already used some AI balance this month, the remaining amount is shown in the Cloud tab under Usage.
If you do not see a Connectors option in Settings, verify that your project is on Lovable Cloud: look for the Cloud tab in the top navigation bar — if it is present, you are on Lovable Cloud. Free-tier projects on Lovable Cloud have access to all shared connectors, including Lovable AI.
Can you confirm the Lovable AI connector is enabled for this project? If it's not, let me know what I need to do in Settings → Connectors to activate it.
Paste this in Lovable chat
Expected result: The Lovable AI connector shows as Connected in Settings → Connectors. You are ready to add LLM-powered features to your app.
Add a text summarization feature
Text summarization is the most common starting point for Lovable AI because it slots into almost every type of app — summarize meeting notes, customer feedback, article content, support tickets, or long-form text fields.
To add it, open the Lovable chat and describe exactly where and how you want summarization to work. Be specific about the UI: tell Lovable which page the feature lives on, what text it should summarize, and what it should do with the result. For example, mention the component name, the database field name (if summarizing stored data), or the textarea the user types into. The more context you give, the more accurate Lovable's code generation will be.
Lovable will generate the frontend component (a button, a result display area, a loading state), wire it to the Lovable AI connector's summarization endpoint, and handle the async request and response. The generated code is TypeScript/React and follows Lovable's standard architecture — no custom hooks or unusual patterns are introduced. After Lovable generates the changes, click the Preview button to test the summarization feature before deploying.
On the [page name] page, add a Summarize button next to the [textarea/field name]. When clicked, it should use the Lovable AI connector to summarize the text and display the summary below the button. Show a loading spinner while the summary is being generated.
Paste this in Lovable chat
Expected result: A Summarize button appears in the specified location. Clicking it sends the text to the Lovable AI connector and displays a summary. A loading state is shown while the LLM processes the request.
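To see the kind of helper this pattern implies, the sketch below shows one reasonable shape for the request-building step. The payload fields (`text`, `maxWords`) and the 8,000-character input cap are illustrative assumptions, not the connector's documented API — Lovable writes its own version of this wiring for you.

```typescript
// Sketch only: the payload shape and input cap are assumptions for
// illustration, not Lovable's documented connector API.
const MAX_INPUT_CHARS = 8000; // keep token usage (and AI balance spend) bounded

function buildSummarizeRequest(raw: string, maxWords = 60) {
  const text = raw.trim();
  if (!text) {
    throw new Error("Nothing to summarize");
  }
  return {
    // Truncate very long inputs so a single click cannot burn a large
    // slice of the workspace AI balance.
    text: text.length > MAX_INPUT_CHARS ? text.slice(0, MAX_INPUT_CHARS) : text,
    maxWords,
  };
}
```

Capping input length client-side is a cheap guard regardless of which endpoint ultimately receives the request.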
Build a conversational chatbot for end users
Adding a chatbot to your app takes one well-formed chat prompt. Lovable AI supports multi-turn conversations, so users can ask follow-up questions and get contextually aware responses — not just one-shot Q&A. This is useful for support bots, onboarding assistants, product recommendation flows, and internal knowledge base tools.
When describing the chatbot to Lovable, include: where it should appear (floating widget, sidebar panel, dedicated page), what persona or topic scope it should have, and whether it should have access to any app-specific data (for example, the current user's account details or a list of your product categories). If you want the chatbot to stay on-topic, include that as an instruction — Lovable will pass a system prompt to the Gemini model limiting what it will discuss.
For apps with authenticated users, Lovable can scope the chatbot to logged-in users only. If you are building on Supabase Auth (Lovable's default auth system), simply mention this in your prompt and Lovable will add the appropriate auth check before rendering the chat component. The conversation history for a session is handled client-side by default; if you want persistent chat history across sessions, mention that you want conversations saved to the database and Lovable will generate the necessary Supabase table and RLS policy.
Add a floating chat widget to the bottom-right corner of every page. It should use the Lovable AI connector to power a support assistant for [describe your app briefly, e.g., 'a project management tool for freelancers']. The assistant should only answer questions about using the app and politely decline off-topic questions. Only show the chat widget to logged-in users.
Paste this in Lovable chat
Expected result: A floating chat button appears in the bottom-right corner for authenticated users. Clicking it opens a chat panel where users can have a multi-turn conversation with an AI assistant scoped to your app's topic.
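To make the scoping concrete, here is a minimal sketch of a scoped system prompt plus a bounded client-side message history. The message shape mirrors common chat-completion APIs; whether Lovable's generated code uses exactly this shape is an assumption.

```typescript
// Sketch only: the ChatMessage shape is a common chat-completion
// convention, assumed for illustration rather than taken from Lovable.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

function scopedSystemPrompt(appDescription: string): string {
  return (
    `You are a support assistant for ${appDescription}. ` +
    `Only answer questions about using the app. ` +
    `If asked anything off-topic, politely decline.`
  );
}

// Keep only the most recent turns so per-message token usage
// (and therefore AI balance spend) stays bounded as chats grow.
function buildHistory(
  system: string,
  turns: ChatMessage[],
  maxTurns = 10
): ChatMessage[] {
  return [{ role: "system", content: system }, ...turns.slice(-maxTurns)];
}
```

Trimming history is worth asking for explicitly in your prompt: without it, a long support conversation sends an ever-growing payload on every turn.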
Add sentiment analysis to user feedback
Sentiment analysis is one of the highest-value quick wins for product builders — automatically scoring user feedback as positive, neutral, or negative lets you build dashboards, trigger alerts, and prioritize responses without reading every submission manually. Lovable AI makes this a one-step addition to any feedback form or comment system.
The typical pattern is: user submits a form → on submission, the Lovable AI connector analyzes the text → the sentiment result (and optionally a confidence score) is stored alongside the submission in your Supabase database → you view aggregated sentiment data in a dashboard. Because the analysis happens at submission time, there is no delay for users and the sentiment scores are immediately available for filtering and reporting.
Specify in your prompt what output format you want. Options include a simple label (positive/neutral/negative), a numeric score (-1 to 1), or a short explanation. If you want the sentiment to trigger an action — for example, routing negative feedback to a Slack notification or flagging it for manual review — mention that in the same prompt and Lovable will include the conditional logic.
On the feedback submission form, use the Lovable AI connector to analyze the sentiment of the feedback text as soon as the user submits. Save the sentiment label (positive, neutral, or negative) alongside the feedback in the database. On the admin dashboard, show a color-coded sentiment badge next to each feedback entry and add a filter to show only negative feedback.
Paste this in Lovable chat
Expected result: The feedback form now automatically scores each submission. The admin dashboard displays sentiment badges and a working filter for negative feedback. Sentiment scores are stored in the Supabase database and visible in the Table Editor.
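The label-normalization step in that pattern can be sketched as a small pure function. The -1 to 1 score is one of the output formats mentioned above; the ±0.25 thresholds are illustrative assumptions, not values Lovable is known to use.

```typescript
// Sketch only: thresholds are illustrative assumptions. Normalizing the
// model output before storage keeps the database column a clean enum.
type SentimentLabel = "positive" | "neutral" | "negative";

function labelFromScore(score: number): SentimentLabel {
  if (Number.isNaN(score) || score < -1 || score > 1) {
    throw new RangeError("sentiment score must be in [-1, 1]");
  }
  if (score >= 0.25) return "positive";
  if (score <= -0.25) return "negative";
  return "neutral";
}
```

Validating the score before writing it to Supabase also catches the occasional malformed LLM response instead of storing garbage alongside the feedback.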
Enable multi-language translation for your UI
Translation through the Lovable AI connector is different from hard-coded i18n libraries — instead of maintaining translation files for every language, Lovable AI translates content dynamically on request. This is especially useful for content that changes frequently (user-generated content, database-stored copy, product descriptions) or for apps that need to support a long tail of languages without the overhead of maintaining translation dictionaries.
The most common use case is a language switcher that translates specific sections of the page. Lovable AI can handle full-page translation for relatively small pages, or targeted translation of specific elements — a product description, a support article, or a user's submitted text. For static UI strings (button labels, menu items), consider asking Lovable to implement a lightweight i18n solution instead, since translating static strings dynamically on every page load is not cost-efficient.
In your prompt, specify which content should be translated, how the user selects a language (dropdown, auto-detect from browser locale, or flag buttons), and whether the translated version should be cached after the first request. Lovable will generate the language selector UI, wire the translation calls to the Lovable AI connector, and optionally store translated results in Supabase to avoid re-translating the same content repeatedly. For complex multilingual apps with large user bases, RapidDev's team can help design a caching strategy that balances AI translation freshness against your AI balance usage.
Add a language selector dropdown to the top navigation bar with options for English, Spanish, French, and Japanese. When a user selects a language, use the Lovable AI connector to translate the main content area of the current page. Cache the translated content in the database so the same content is not translated twice for the same language.
Paste this in Lovable chat
Expected result: A language dropdown appears in the navigation. Selecting a language translates the page content and caches the result. Subsequent requests for the same language + content combination load instantly from the cache.
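The caching idea reduces to a deterministic cache key built from the content and the target language, so the same (content, language) pair is only translated once. The FNV-1a hash below is a cheap illustrative stand-in; a real app might instead rely on a Supabase unique constraint on a (content_hash, language) pair, and Lovable's generated code may take a different approach entirely.

```typescript
// Sketch only: FNV-1a is used here as a simple, dependency-free hash
// for demonstrating the cache-key idea, not as Lovable's actual scheme.
function fnv1a(input: string): string {
  let hash = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash.toString(16).padStart(8, "0");
}

// Same content + same language => same key => single translation call.
function translationCacheKey(content: string, language: string): string {
  return `${language}:${fnv1a(content)}`;
}
```

Before calling the connector, the app looks the key up in the cache table; only on a miss does it translate and then store the result under that key.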
Common use cases
Build a conversational chatbot for end users
Use the Lovable AI connector to build a conversational chatbot for end users. This is one of the most common use cases when adding Lovable AI to a Lovable application.
Add a floating chat widget to the bottom-right corner of every page. It should use the Lovable AI connector to power a support assistant for [describe your app briefly, e.g., 'a project management tool for freelancers']. The assistant should only answer questions about using the app and politely decline off-topic questions. Only show the chat widget to logged-in users.
Copy this prompt to try it in Lovable
Add sentiment analysis to user feedback
Take your Lovable AI integration further by adding sentiment analysis to user feedback. This builds on the basic setup to create a more complete experience.
On the feedback submission form, use the Lovable AI connector to analyze the sentiment of the feedback text as soon as the user submits. Save the sentiment label (positive, neutral, or negative) alongside the feedback in the database. On the admin dashboard, show a color-coded sentiment badge next to each feedback entry and add a filter to show only negative feedback.
Copy this prompt to try it in Lovable
Enable multi-language translation for your UI
Prepare your Lovable AI integration for production by enabling multi-language translation for your UI. This ensures your integration works reliably for real users.
Add a language selector dropdown to the top navigation bar with options for English, Spanish, French, and Japanese. When a user selects a language, use the Lovable AI connector to translate the main content area of the current page. Cache the translated content in the database so the same content is not translated twice for the same language.
Copy this prompt to try it in Lovable
Troubleshooting
Lovable AI connector is not visible in Settings → Connectors
Cause: The project may not be on Lovable Cloud, or the Settings panel is showing a different section.
Solution: Verify that the Cloud tab is visible in your project's top navigation bar — this confirms you are on Lovable Cloud. If the Cloud tab is missing, your project may be on an older Lovable infrastructure version. In that case, create a new project (Lovable Cloud is the default for all new projects) and migrate your work. If you see the Cloud tab but cannot find Connectors in Settings, try refreshing the page — the Connectors sub-section sometimes requires a fresh load to appear.
AI features work in the Lovable editor preview but produce errors in the deployed app
Cause: Some LLM calls made through the Lovable AI connector only execute correctly in the deployed environment, not in Lovable's sandboxed preview iframe.
Solution: Publish your app using the Publish button (top-right) and test the AI feature at the live URL. If the feature works at the live URL but not in the preview, this is expected behavior — the connector's runtime endpoints are production-only. For iterating quickly, use the preview for layout and UI feedback, then publish to test actual LLM calls.
AI features stop working mid-session with an error about balance or quota
Cause: Your workspace's monthly $1 AI balance has been exhausted, or usage spiked due to a high-traffic period.
Solution: Open the Cloud tab and click Usage to check your current AI balance. If it is depleted, your options are to wait for the monthly reset or to upgrade your Lovable plan for a higher balance. To reduce AI balance consumption going forward, implement caching for repeated LLM calls (Lovable can add this via a prompt), add rate limiting on user-triggered AI features, and avoid running translation or summarization on page load — trigger it only on user action.
Best practices
- Always trigger AI features on user action (button click, form submit) rather than on page load — this prevents unnecessary AI balance consumption and makes the feature feel intentional rather than automatic.
- Cache LLM results in Supabase whenever the same input is likely to produce the same output. Translated product descriptions, article summaries, and sentiment scores are good candidates for caching. Ask Lovable to add a cache check before each LLM call.
- Set clear scope limits in chatbot system prompts. If your chatbot is a support assistant for one product, explicitly instruct it to decline off-topic questions. This improves user experience and prevents the chatbot from becoming a generic AI interface that reflects poorly on your app.
- Use sentiment analysis at submission time, not in batch. Running sentiment analysis when users submit feedback is both more cost-efficient (one call per submission) and more actionable (sentiment is available immediately for routing and alerting).
- Scope AI features to authenticated users when your app has a login system. Unauthenticated AI features are accessible to anyone who finds your app URL, which can drain your AI balance rapidly if the app receives unexpected traffic.
- Test AI feature output quality before launching to users. Gemini 3 Flash is optimized for speed and cost, not maximum output quality. For use cases where response quality is critical — legal summaries, medical content, financial analysis — consider whether the Lovable AI connector meets your quality bar or whether a dedicated model via OpenAI GPT is more appropriate.
- Monitor AI balance weekly during the first month after launching an AI feature. Usage patterns are hard to predict before launch. Open Cloud → Usage to check your balance and set a calendar reminder to review it after the first week of live user traffic.
- Keep AI prompts in your code short and specific. Longer system prompts consume more tokens per call, increasing AI balance usage. Aim for system prompts under 200 words and user-facing prompts that are pre-structured (rather than free-form) when possible.
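Several of these practices come down to bounding how often users can trigger LLM calls. A minimal client-side sliding-window rate limiter might look like the sketch below; the limit and window values are illustrative, and client-side checks should be backed by server-side enforcement for any app exposed to untrusted traffic.

```typescript
// Sketch only: a sliding-window rate limiter guarding user-triggered AI
// calls so a single user cannot drain the shared AI balance. The `now`
// parameter is injectable to keep the logic testable without real time.
function makeRateLimiter(
  maxCalls: number,
  windowMs: number,
  now: () => number = () => Date.now()
) {
  const timestamps: number[] = [];
  return function allow(): boolean {
    const t = now();
    // Drop timestamps that have fallen outside the sliding window.
    while (timestamps.length && t - timestamps[0] >= windowMs) {
      timestamps.shift();
    }
    if (timestamps.length >= maxCalls) return false;
    timestamps.push(t);
    return true;
  };
}
```

Wired into a Summarize or chat-send button, `allow()` returning false would disable the button or show a "please wait" message instead of issuing another LLM call.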
Alternatives
Use OpenAI GPT if you need access to GPT-4o or specific OpenAI models, require higher output quality for complex reasoning tasks, or need fine-grained control over model parameters like temperature and max tokens — at the cost of setting up your own OpenAI API account and key.
Use Perplexity if your AI feature needs real-time web search as part of the response — for example, a research assistant that cites current sources — since Lovable AI (Gemini 3 Flash) uses its training data only and cannot browse the live web.
Use ElevenLabs when your AI feature is voice or audio based — text-to-speech narration, voice cloning, audio content generation — rather than text-in/text-out LLM features like the ones Lovable AI provides.
Frequently asked questions
What is the difference between the Lovable AI connector and the Lovable AI agent that builds my app?
These are two completely separate systems. The Lovable AI agent is the conversational interface inside the Lovable editor — it is what you talk to when you describe features and it writes the code. The Lovable AI connector is a runtime service that your deployed app's end users interact with. When a customer uses the chatbot or summary feature you built, that is the Lovable AI connector. The agent builds your app; the connector powers features inside your app.
What AI model does the Lovable AI connector use?
The Lovable AI connector uses Gemini 3 Flash, Google's fast and cost-efficient LLM. It is well-suited for summarization, chatbots, translation, and sentiment analysis. You cannot currently switch the underlying model through the Lovable AI connector — if you need a different model (GPT-4o, Claude, etc.), use the OpenAI GPT connector or build a custom Edge Function that calls your preferred API.
How much does the Lovable AI connector cost to use?
Usage is billed against your workspace's AI balance, which is currently $1 per month and provided free through Q1 2026. After that, the $1 balance will be part of the standard Lovable Cloud billing. The exact cost per LLM call depends on token count, but for typical use cases (short summaries, single-turn chatbot messages, sentiment scoring), $1 covers thousands of calls per month. Check your current balance in Cloud → Usage.
Can I use Lovable AI for my app's chatbot even on the free Lovable plan?
Yes. The Lovable AI connector is available on all plans, including free. The $1 monthly AI balance is shared across your workspace regardless of plan tier. The main free-plan limitation is not access to the connector, but the fact that free-plan apps are public-only — any unauthenticated user who finds your app URL can trigger AI features and consume your balance.
Does Lovable AI work in the editor preview, or only in the deployed app?
LLM calls through the Lovable AI connector only execute in the deployed (published) version of your app, not in Lovable's sandboxed editor preview. You can build and iterate on the UI and layout in the preview, but to test actual AI responses you need to publish the app and visit the live URL. This is normal behavior — most runtime connector features behave this way.
How is Lovable AI different from building an AI feature with a custom OpenAI Edge Function?
Lovable AI requires zero setup — no API account, no key, no Edge Function code. You enable the connector and describe the feature in chat, and Lovable handles everything. A custom OpenAI Edge Function gives you more control: you choose the model, set the temperature, write the system prompt in code, and manage your own OpenAI API key in Cloud → Secrets. Lovable AI is the right choice for getting AI features live quickly; a custom Edge Function is the right choice when you need specific model behavior or expect high enough volume to benefit from managing your own API costs directly.
Talk to an Expert
Our team has built 600+ apps. Get personalized help with your project.
Book a free consultation