Voice navigation converts spoken words to text using the speech_to_text Flutter package, then a Custom Function maps recognized phrases to app routes. A pulsing microphone button gives visual feedback while listening. Users say 'go to orders' or 'open settings' and your app navigates instantly — hands-free.
Hands-free navigation with voice commands
Voice navigation is valuable for accessibility (users who cannot easily tap small targets), hands-free contexts (driving, cooking, gym), and power users who know their way around your app. The architecture is simple: the speech_to_text package accesses the device microphone and transcribes spoken audio into text. A Custom Function then checks the transcript against a keyword map and triggers the appropriate Navigate To action. The key insight is that you match keywords, not exact sentences — 'open my orders' and 'show orders' both match because they contain the keyword 'orders'. This fuzzy matching makes the system forgiving for different user phrasings.
Prerequisites
- FlutterFlow Pro plan (Custom Actions and package imports required)
- FlutterFlow project with multiple pages already created
- Basic understanding of FlutterFlow Custom Actions and Navigate To actions
- iOS: NSMicrophoneUsageDescription and NSSpeechRecognitionUsageDescription keys added to Info.plist; Android: RECORD_AUDIO in AndroidManifest.xml
Step-by-step guide
Add the speech_to_text package to your FlutterFlow project
In FlutterFlow, go to Settings → Pubspec Dependencies → Add Dependency. Type 'speech_to_text' and select the latest version (6.x+). Click Save. Also add 'permission_handler' to manage microphone permissions. FlutterFlow will automatically add these packages when you export or run the app. Next, add the required platform permissions: for iOS, go to Settings → iOS → Info.plist Keys and add NSMicrophoneUsageDescription with the value 'We need microphone access for voice navigation', plus NSSpeechRecognitionUsageDescription with a value like 'We use speech recognition to understand voice commands' (speech_to_text needs both keys on iOS). For Android, Settings → Android → Permissions → add RECORD_AUDIO.
Expected result: speech_to_text and permission_handler packages added in Settings. iOS and Android microphone permissions configured.
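If you export your project, the resulting pubspec.yaml entries should look roughly like this (the version numbers below are illustrative; pick the latest stable releases in FlutterFlow's dependency picker):

```yaml
dependencies:
  speech_to_text: ^6.6.0
  permission_handler: ^11.0.0
```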
Create the voice navigation Custom Action
Go to Custom Code → Custom Actions → Add Action. Name it 'startVoiceNavigation'. This action starts the microphone, listens for speech, transcribes it, then returns the recognized text so your Action Flow can process it. The action handles starting the listener, timing out after 5 seconds of silence, and stopping cleanly. Paste the Dart code below. Set the return type to String (the transcribed text). FlutterFlow will show a green checkmark when the code compiles.
```dart
// Custom Action: startVoiceNavigation
// Packages: speech_to_text, permission_handler
import 'dart:async';
import 'package:flutter/foundation.dart';
import 'package:speech_to_text/speech_to_text.dart' as stt;
import 'package:permission_handler/permission_handler.dart';

Future<String> startVoiceNavigation() async {
  // Request microphone permission
  final status = await Permission.microphone.request();
  if (!status.isGranted) return 'permission_denied';

  final speech = stt.SpeechToText();
  final bool available = await speech.initialize(
    onError: (error) => debugPrint('STT error: $error'),
  );

  if (!available) return 'unavailable';

  String result = '';
  final completer = Completer<String>();

  speech.listen(
    onResult: (val) {
      if (val.finalResult) {
        result = val.recognizedWords.toLowerCase().trim();
        speech.stop();
        if (!completer.isCompleted) completer.complete(result);
      }
    },
    listenFor: const Duration(seconds: 7),
    pauseFor: const Duration(seconds: 3),
    cancelOnError: true,
    partialResults: false,
  );

  // Timeout fallback: resolve with whatever was heard
  Future.delayed(const Duration(seconds: 8), () {
    if (!completer.isCompleted) {
      speech.stop();
      completer.complete(result);
    }
  });

  return completer.future;
}
```

Expected result: Custom Action compiles without errors. Returns a String containing the transcribed speech.
Create a route-mapping Custom Function
Create a Custom Function (not Action) named 'mapSpeechToRoute'. This function takes the recognized text string and returns the FlutterFlow page route name to navigate to. Using a function (not action) is the right pattern here because it's pure logic with no side effects — it just maps input to output. Store your route map as a constant inside the function. This is what you will update when pages are added or renamed.
```dart
// Custom Function: mapSpeechToRoute
// Input: String transcript
// Output: String routeName (empty string = no match)
String mapSpeechToRoute(String transcript) {
  final text = transcript.toLowerCase();

  // Map keywords to FlutterFlow page route names
  // IMPORTANT: use the exact route name shown in FlutterFlow
  // Page Settings → Route Name (e.g. '/home', '/orders')
  final Map<List<String>, String> routeMap = {
    ['home', 'main', 'start', 'dashboard']: '/homePage',
    ['order', 'orders', 'my orders', 'purchases']: '/ordersPage',
    ['profile', 'account', 'my account', 'settings']: '/profilePage',
    ['cart', 'basket', 'shopping cart']: '/cartPage',
    ['search', 'find', 'look for']: '/searchPage',
    ['help', 'support', 'contact']: '/helpPage',
  };

  for (final entry in routeMap.entries) {
    for (final keyword in entry.key) {
      if (text.contains(keyword)) return entry.value;
    }
  }

  return ''; // no match
}
```

Expected result: Custom Function returns the correct route string for tested speech inputs. Returns empty string for unrecognized commands.
Build the animated microphone button UI
On your app's persistent navigation bar or home page, add a Stack widget. Inside it, add a Container (50x50, circular, primary color background) and an Icon widget (mic icon, white). Above that container in the stack, add a second Container (60x60, same center, transparent, primary color border) — this will be the pulse ring animation. Select the outer ring Container, go to Animations → Add Animation → Scale. Set it to loop continuously from scale 1.0 to 1.3 with a 1-second duration and ease-out curve. This creates the 'listening' pulse effect. Wrap everything in a GestureDetector (or Button widget) and connect the tap Action Flow to start the voice navigation sequence. Show/hide the pulsing ring using a boolean App State variable 'isListening'.
Expected result: A microphone button visible on the page. Tapping it starts the pulsing animation. The animation stops after speech recognition completes.
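If you would rather build the button as a Custom Widget than with the animation panel, a minimal sketch could look like the following. This is an illustration, not FlutterFlow-generated code; the widget name, the isListening flag, and the onTap callback are assumptions you would wire to your own App State and Action Flow.

```dart
import 'package:flutter/material.dart';

/// Minimal pulsing mic button: the outer ring scales 1.0 -> 1.3
/// in a loop while [isListening] is true.
class PulsingMicButton extends StatefulWidget {
  const PulsingMicButton({
    super.key,
    required this.isListening,
    required this.onTap,
  });

  final bool isListening;
  final VoidCallback onTap;

  @override
  State<PulsingMicButton> createState() => _PulsingMicButtonState();
}

class _PulsingMicButtonState extends State<PulsingMicButton>
    with SingleTickerProviderStateMixin {
  late final AnimationController _controller = AnimationController(
    vsync: this,
    duration: const Duration(seconds: 1),
    lowerBound: 1.0,
    upperBound: 1.3,
  )..repeat(reverse: true);

  @override
  void dispose() {
    _controller.dispose();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    final color = Theme.of(context).colorScheme.primary;
    return GestureDetector(
      onTap: widget.onTap,
      child: Stack(
        alignment: Alignment.center,
        children: [
          // Pulse ring, only visible while listening
          if (widget.isListening)
            ScaleTransition(
              scale: _controller,
              child: Container(
                width: 60,
                height: 60,
                decoration: BoxDecoration(
                  shape: BoxShape.circle,
                  border: Border.all(color: color, width: 2),
                ),
              ),
            ),
          // Solid mic button
          Container(
            width: 50,
            height: 50,
            decoration: BoxDecoration(shape: BoxShape.circle, color: color),
            child: const Icon(Icons.mic, color: Colors.white),
          ),
        ],
      ),
    );
  }
}
```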
Wire the complete voice navigation Action Flow
Select the microphone button → Actions → On Tap. Build this Action Flow sequence: (1) Update App State 'isListening' to true. (2) Custom Action: startVoiceNavigation — store output in an Action Output variable named 'transcript'. (3) Custom Function: mapSpeechToRoute — pass transcript as argument, store output in 'routeName'. (4) Update App State 'isListening' to false. (5) Conditional Action: if routeName is not empty → Navigate To using routeName. (6) If routeName is empty → show a SnackBar saying 'Command not recognized. Try saying Go to Orders or Open Profile'. The isListening App State boolean controls the pulsing animation visibility.
Expected result: Tapping the mic button starts listening. Speaking 'go to orders' navigates to the orders page. Speaking an unrecognized command shows the help SnackBar.
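Conceptually, the Action Flow above is equivalent to this Dart sequence. It is a sketch of what FlutterFlow assembles for you, not code you need to paste; FFAppState and the pushNamed call stand in for FlutterFlow's generated state and navigation helpers.

```dart
Future<void> onMicTapped(BuildContext context) async {
  // (1) Show the pulsing ring
  FFAppState().isListening = true;

  // (2) Listen and transcribe
  final transcript = await startVoiceNavigation();

  // (3) Map transcript to a route name (empty string = no match)
  final routeName = mapSpeechToRoute(transcript);

  // (4) Hide the pulsing ring
  FFAppState().isListening = false;

  // (5) Navigate on a match, (6) otherwise explain what to say
  if (routeName.isNotEmpty) {
    Navigator.of(context).pushNamed(routeName);
  } else {
    ScaffoldMessenger.of(context).showSnackBar(
      const SnackBar(
        content: Text(
          'Command not recognized. Try saying Go to Orders or Open Profile',
        ),
      ),
    );
  }
}
```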
Complete working example
```dart
// FlutterFlow Custom Action: startVoiceNavigation
// Required packages: speech_to_text ^6.0.0, permission_handler ^11.0.0
// Add to FlutterFlow: Settings → Pubspec Dependencies

import 'dart:async';
import 'package:flutter/foundation.dart';
import 'package:speech_to_text/speech_to_text.dart' as stt;
import 'package:permission_handler/permission_handler.dart';

/// Requests microphone access, listens for speech, and returns
/// the transcribed text as a lowercase string.
/// Returns 'permission_denied' or 'unavailable' on failure.
Future<String> startVoiceNavigation() async {
  final status = await Permission.microphone.request();
  if (status.isDenied || status.isPermanentlyDenied) {
    if (status.isPermanentlyDenied) await openAppSettings();
    return 'permission_denied';
  }

  final speech = stt.SpeechToText();
  final bool available = await speech.initialize(
    onError: (error) => debugPrint('STT error: ${error.errorMsg}'),
    debugLogging: false,
  );

  if (!available) return 'unavailable';

  final completer = Completer<String>();
  String lastResult = '';

  speech.listen(
    onResult: (val) {
      lastResult = val.recognizedWords.toLowerCase().trim();
      if (val.finalResult && !completer.isCompleted) {
        speech.stop();
        completer.complete(lastResult);
      }
    },
    listenFor: const Duration(seconds: 7),
    pauseFor: const Duration(seconds: 3),
    cancelOnError: true,
    partialResults: false,
    localeId: 'en_US',
  );

  // Hard timeout: resolve with whatever we have
  Future.delayed(const Duration(seconds: 9), () {
    if (!completer.isCompleted) {
      speech.stop();
      completer.complete(lastResult);
    }
  });

  return completer.future;
}

// FlutterFlow Custom Function: mapSpeechToRoute
// Pure function — no side effects, just maps text to route
String mapSpeechToRoute(String transcript) {
  final text = transcript.toLowerCase();
  if (text.isEmpty) return '';

  final Map<List<String>, String> routes = {
    ['home', 'main', 'dashboard', 'start']: '/homePage',
    ['order', 'orders', 'purchase', 'purchases']: '/ordersPage',
    ['profile', 'account', 'my account']: '/profilePage',
    ['cart', 'basket', 'checkout']: '/cartPage',
    ['search', 'find']: '/searchPage',
    ['help', 'support']: '/helpPage',
    ['back', 'go back', 'previous']: '__back__',
  };

  for (final entry in routes.entries) {
    for (final kw in entry.key) {
      if (text.contains(kw)) return entry.value;
    }
  }
  return '';
}
```

Common mistakes
Mistake: hardcoding page names in the voice route map instead of using FlutterFlow route names.
How to avoid: Always use the exact route path defined in FlutterFlow's Page Settings → Route Name (e.g. '/ordersPage'). Confirm it matches by opening the page settings panel.
Mistake: not handling the 'permission_denied' return value from the speech action.
How to avoid: Add a Conditional action at the start of your flow: if transcript == 'permission_denied', show an alert dialog explaining that microphone access is needed and directing the user to Settings.
Mistake: running speech recognition in the web browser preview and expecting it to behave like a device.
How to avoid: Test voice features on a real device via FlutterFlow's Test on Device option (scan the QR code). Web preview is only useful for visual layout testing.
Best practices
- Always show a visual indicator that the app is listening — users cannot tell if the mic is active without feedback
- Add a spoken confirmation for navigation: use text-to-speech to say 'Navigating to Orders' after a successful match
- Keep the keyword list short and memorable — 5-8 commands is better than 30 that users can't remember
- Provide a help command ('help' or 'what can I say?') that opens a screen listing all available voice commands
- Test with different accents and speech speeds — have multiple people test before launch
- Respect accessibility: voice navigation should supplement, not replace, standard touch navigation
- Log unrecognized command transcripts (anonymized) to identify common phrases to add to your keyword map
Still stuck?
Copy one of these prompts to get a personalized, step-by-step explanation.
I'm building a FlutterFlow app and want to add voice navigation using the speech_to_text Flutter package. Write me a Custom Action in Dart that requests microphone permission, listens for up to 7 seconds, and returns the transcribed text as a lowercase string. Also write a separate Custom Function that accepts the transcript string and returns the route name from a keyword map. Include error handling for permission denied and speech unavailable cases.
In FlutterFlow, I have a microphone button on my home page. I want to build an Action Flow that: starts voice listening via a Custom Action, gets the transcript, passes it to a Custom Function to get a route name, and then navigates to that route — or shows a SnackBar if no route is matched. Walk me through each step in the Action Flow builder.
Frequently asked questions
Does voice navigation work on both iOS and Android?
Yes. The speech_to_text package uses SFSpeechRecognizer (Apple's Speech framework) on iOS and Android's SpeechRecognizer API. Both generally send audio to Apple or Google servers for transcription, though recent OS versions can recognize some languages on-device. Add NSMicrophoneUsageDescription and NSSpeechRecognitionUsageDescription (iOS Info.plist) and the RECORD_AUDIO permission (Android manifest) to your project settings.
Can voice navigation work offline?
Limited offline support exists on some Android devices that have downloaded offline language packs, and iOS 13+ can perform on-device recognition for some locales. Coverage is inconsistent across devices, so treat voice navigation as an online feature. For fully offline voice recognition you would need to integrate a local model like Vosk, which requires exporting your FlutterFlow project as Flutter code.
How do I handle multi-word commands like 'search for running shoes'?
Check if the transcript starts with 'search for' or 'find'. Extract the substring after those words. Navigate to your search page and pass the extracted phrase as a page parameter. The search page reads the parameter on load and populates the search field automatically.
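One way to implement that extraction as a small pure Custom Function. The function name and the prefix list are illustrative; add whatever phrasings your users actually say.

```dart
/// Returns the search phrase after a recognized prefix,
/// or an empty string if the transcript is not a search command.
String extractSearchQuery(String transcript) {
  final text = transcript.toLowerCase().trim();
  // Longer prefixes first so 'search for ' wins over 'search '
  const prefixes = ['search for ', 'search ', 'find ', 'look for '];
  for (final prefix in prefixes) {
    if (text.startsWith(prefix)) {
      return text.substring(prefix.length).trim();
    }
  }
  return '';
}

void main() {
  print(extractSearchQuery('Search for running shoes')); // running shoes
  print(extractSearchQuery('find blue jackets')); // blue jackets
  print(extractSearchQuery('go to orders')); // (empty string)
}
```

In the Action Flow, navigate to the search page only when the returned string is non-empty, passing it as a page parameter.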
What languages does speech_to_text support?
The package supports any language supported by the device's built-in speech recognizer. Set the localeId parameter (e.g., 'es_ES' for Spanish, 'fr_FR' for French). You can also call speech.locales() to get a list of languages available on the current device.
Will voice commands work while the app is in the background?
No. The speech_to_text package requires the app to be in the foreground and the screen to be active. Background microphone access requires special entitlements from Apple/Google for specific use cases like call recording, not general app navigation.
How do I let users customize their own voice commands?
Store the route map as a Firestore document under each user's profile instead of a hardcoded Dart map. Load it as a page state variable on app start. Let users add or edit their command-to-route mappings in a settings screen. Pass the custom map as a parameter to your mapSpeechToRoute function.
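A sketch of mapSpeechToRoute reworked to take a per-user map. The flat Map<String, String> shape (keyword to route) is an assumption; you would build it from the fields of the user's Firestore document before calling the function.

```dart
/// Maps a transcript to a route using a user-supplied keyword map.
/// Keys are keywords, values are FlutterFlow route names.
String mapSpeechToRouteCustom(
  String transcript,
  Map<String, String> userRoutes,
) {
  final text = transcript.toLowerCase();
  for (final entry in userRoutes.entries) {
    if (text.contains(entry.key.toLowerCase())) return entry.value;
  }
  return '';
}

void main() {
  // Hypothetical per-user commands loaded from Firestore
  final myRoutes = {
    'workouts': '/workoutsPage',
    'meal plan': '/mealsPage',
  };
  print(mapSpeechToRouteCustom('open my meal plan', myRoutes)); // /mealsPage
}
```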