
How to Set Up Voice Commands for App Navigation in FlutterFlow

Voice navigation converts spoken words to text using the speech_to_text Flutter package, then a Custom Function maps recognized phrases to app routes. A pulsing microphone button gives visual feedback while listening. Users say 'go to orders' or 'open settings' and your app navigates instantly — hands-free.

What you'll learn

  • Adding the speech_to_text package to a FlutterFlow Custom Action
  • Mapping recognized speech keywords to Flutter route names dynamically
  • Building an animated microphone button with listening feedback
  • Handling contextual voice commands like 'search for [term]'
Beginner · 9 min read · 35-50 min build time · FlutterFlow Pro+ (Custom Actions and packages required) · March 2026 · RapidDev Engineering Team
Hands-free navigation with voice commands

Voice navigation is valuable for accessibility (users who cannot easily tap small targets), hands-free contexts (driving, cooking, gym), and power users who know their way around your app. The architecture is simple: the speech_to_text package accesses the device microphone and transcribes spoken audio into text. A Custom Function then checks the transcript against a keyword map and triggers the appropriate Navigate To action. The key insight is that you match keywords, not exact sentences — 'open my orders' and 'show orders' both match because they contain the keyword 'orders'. This fuzzy matching makes the system forgiving for different user phrasings.

Prerequisites

  • FlutterFlow Pro plan (Custom Actions and package imports required)
  • FlutterFlow project with multiple pages already created
  • Basic understanding of FlutterFlow Custom Actions and Navigate To actions
  • iOS: microphone permission added to Info.plist; Android: RECORD_AUDIO in AndroidManifest.xml

Step-by-step guide

1

Add the speech_to_text package to your FlutterFlow project

In FlutterFlow, go to Settings → Pubspec Dependencies → Add Dependency. Type 'speech_to_text' and select the latest version (6.x+). Click Save. Also add 'permission_handler' to manage microphone permissions. FlutterFlow will automatically add these packages when you export or run the app. Next, add required platform permissions: for iOS, go to Settings → iOS → Info.plist Keys → add NSMicrophoneUsageDescription with the value 'We need microphone access for voice navigation'. For Android, Settings → Android → Permissions → add RECORD_AUDIO.

Expected result: speech_to_text and permission_handler packages added in Settings. iOS and Android microphone permissions configured.
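For reference, the generated Flutter project's pubspec.yaml ends up with roughly the following dependencies after this step. The version numbers below are illustrative; FlutterFlow pins whichever versions you selected in Settings.

```yaml
dependencies:
  flutter:
    sdk: flutter
  # Added via Settings → Pubspec Dependencies (versions illustrative)
  speech_to_text: ^6.6.0
  permission_handler: ^11.3.0
```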

2

Create the voice navigation Custom Action

Go to Custom Code → Custom Actions → Add Action. Name it 'startVoiceNavigation'. This action starts the microphone, listens for speech, transcribes it, then returns the recognized text so your Action Flow can process it. The action handles starting the listener, timing out after 5 seconds of silence, and stopping cleanly. Paste the Dart code below. Set the return type to String (the transcribed text). FlutterFlow will show a green checkmark when the code compiles.

start_voice_navigation.dart
// Custom Action: startVoiceNavigation
// Packages: speech_to_text, permission_handler
import 'dart:async';

import 'package:flutter/foundation.dart';
import 'package:permission_handler/permission_handler.dart';
import 'package:speech_to_text/speech_to_text.dart' as stt;

Future<String> startVoiceNavigation() async {
  // Request microphone permission
  final status = await Permission.microphone.request();
  if (!status.isGranted) return 'permission_denied';

  final speech = stt.SpeechToText();
  final bool available = await speech.initialize(
    onError: (error) => debugPrint('STT error: $error'),
  );

  if (!available) return 'unavailable';

  String result = '';
  final completer = Completer<String>();

  speech.listen(
    onResult: (val) {
      if (val.finalResult) {
        result = val.recognizedWords.toLowerCase().trim();
        speech.stop();
        if (!completer.isCompleted) completer.complete(result);
      }
    },
    listenFor: const Duration(seconds: 7),
    pauseFor: const Duration(seconds: 3),
    cancelOnError: true,
    partialResults: false,
  );

  // Timeout fallback: resolve with whatever was heard
  Future.delayed(const Duration(seconds: 8), () {
    if (!completer.isCompleted) {
      speech.stop();
      completer.complete(result);
    }
  });

  return completer.future;
}

Expected result: Custom Action compiles without errors. Returns a String containing the transcribed speech.

3

Create a route-mapping Custom Function

Create a Custom Function (not Action) named 'mapSpeechToRoute'. This function takes the recognized text string and returns the FlutterFlow page route name to navigate to. Using a function (not action) is the right pattern here because it's pure logic with no side effects — it just maps input to output. Store your route map as a constant inside the function. This is what you will update when pages are added or renamed.

map_speech_to_route.dart
// Custom Function: mapSpeechToRoute
// Input: String transcript
// Output: String routeName (empty string = no match)
String mapSpeechToRoute(String transcript) {
  final text = transcript.toLowerCase();

  // Map keywords to FlutterFlow page route names
  // IMPORTANT: use the exact route name shown in FlutterFlow
  // Page Settings → Route Name (e.g. '/home', '/orders')
  final Map<List<String>, String> routeMap = {
    ['home', 'main', 'start', 'dashboard']: '/homePage',
    ['order', 'orders', 'my orders', 'purchases']: '/ordersPage',
    ['profile', 'account', 'my account', 'settings']: '/profilePage',
    ['cart', 'basket', 'shopping cart']: '/cartPage',
    ['search', 'find', 'look for']: '/searchPage',
    ['help', 'support', 'contact']: '/helpPage',
  };

  for (final entry in routeMap.entries) {
    for (final keyword in entry.key) {
      if (text.contains(keyword)) return entry.value;
    }
  }

  return ''; // no match
}

Expected result: Custom Function returns the correct route string for tested speech inputs. Returns empty string for unrecognized commands.

4

Build the animated microphone button UI

On your app's persistent navigation bar or home page, add a Stack widget. Inside it, add a Container (50x50, circular, primary color background) and an Icon widget (mic icon, white). Above that container in the stack, add a second Container (60x60, same center, transparent, primary color border) — this will be the pulse ring animation. Select the outer ring Container, go to Animations → Add Animation → Scale. Set it to loop continuously from scale 1.0 to 1.3 with a 1-second duration and ease-out curve. This creates the 'listening' pulse effect. Wrap everything in a GestureDetector (or Button widget) and connect the tap Action Flow to start the voice navigation sequence. Show/hide the pulsing ring using a boolean App State variable 'isListening'.

Expected result: A microphone button visible on the page. Tapping it starts the pulsing animation. The animation stops after speech recognition completes.
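As a reference point, here is a hand-written Flutter sketch of what the widget tree built in this step amounts to. In FlutterFlow you assemble this visually; the code is only to show the intended structure. The isListening flag mirrors the App State variable from this tutorial; everything else (class name, theming) is illustrative.

```dart
import 'package:flutter/material.dart';

// Illustrative sketch of the mic button: a GestureDetector wrapping a Stack
// with a pulse ring (outer Container) and the mic circle (inner Container).
class MicButton extends StatelessWidget {
  const MicButton({
    super.key,
    required this.isListening, // mirrors the 'isListening' App State
    required this.onTap,       // starts the voice navigation Action Flow
  });

  final bool isListening;
  final VoidCallback onTap;

  @override
  Widget build(BuildContext context) {
    return GestureDetector(
      onTap: onTap,
      child: Stack(
        alignment: Alignment.center,
        children: [
          // Pulse ring: shown only while listening. In FlutterFlow the
          // looping 1.0 → 1.3 scale animation is attached to this ring.
          if (isListening)
            Container(
              width: 60,
              height: 60,
              decoration: BoxDecoration(
                shape: BoxShape.circle,
                border: Border.all(
                  color: Theme.of(context).primaryColor,
                  width: 2,
                ),
              ),
            ),
          // Solid mic circle
          Container(
            width: 50,
            height: 50,
            decoration: BoxDecoration(
              shape: BoxShape.circle,
              color: Theme.of(context).primaryColor,
            ),
            child: const Icon(Icons.mic, color: Colors.white),
          ),
        ],
      ),
    );
  }
}
```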

5

Wire the complete voice navigation Action Flow

Select the microphone button → Actions → On Tap. Build this Action Flow sequence: (1) Update App State 'isListening' to true. (2) Custom Action: startVoiceNavigation — store output in an Action Output variable named 'transcript'. (3) Custom Function: mapSpeechToRoute — pass transcript as argument, store output in 'routeName'. (4) Update App State 'isListening' to false. (5) Conditional Action: if routeName is not empty → Navigate To using routeName. (6) If routeName is empty → show a SnackBar saying 'Command not recognized. Try saying Go to Orders or Open Profile'. The isListening App State boolean controls the pulsing animation visibility.

Expected result: Tapping the mic button starts listening. Speaking 'go to orders' navigates to the orders page. Speaking an unrecognized command shows the help SnackBar.
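The six-step Action Flow above can be sketched as plain Dart. The callbacks below stand in for FlutterFlow actions (all names are hypothetical, chosen for this sketch); injecting them keeps the flow logic readable and testable without the UI.

```dart
// Illustrative sketch of the voice navigation Action Flow.
// Each callback corresponds to one FlutterFlow action in the flow.
Future<void> runVoiceNavigationFlow({
  required void Function(bool) setListening,   // Update App State 'isListening'
  required Future<String> Function() listen,   // Custom Action: startVoiceNavigation
  required String Function(String) mapRoute,   // Custom Function: mapSpeechToRoute
  required void Function(String) navigateTo,   // Navigate To action
  required void Function(String) showSnackBar, // SnackBar action
}) async {
  setListening(true);                     // (1) start the pulse animation
  final transcript = await listen();      // (2) transcribe speech
  final routeName = mapRoute(transcript); // (3) map transcript to a route
  setListening(false);                    // (4) stop the pulse animation
  if (routeName.isNotEmpty) {
    navigateTo(routeName);                // (5) matched: navigate
  } else {
    showSnackBar(                         // (6) no match: show help
      'Command not recognized. Try saying Go to Orders or Open Profile',
    );
  }
}
```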

Complete working example

voice_navigation.dart
// FlutterFlow Custom Action: startVoiceNavigation
// Required packages: speech_to_text ^6.0.0, permission_handler ^11.0.0
// Add to FlutterFlow: Settings → Pubspec Dependencies

import 'dart:async';

import 'package:flutter/foundation.dart';
import 'package:permission_handler/permission_handler.dart';
import 'package:speech_to_text/speech_to_text.dart' as stt;

/// Requests microphone access, listens for speech, and returns
/// the transcribed text as a lowercase string.
/// Returns 'permission_denied' or 'unavailable' on failure.
Future<String> startVoiceNavigation() async {
  final status = await Permission.microphone.request();
  if (status.isDenied || status.isPermanentlyDenied) {
    if (status.isPermanentlyDenied) await openAppSettings();
    return 'permission_denied';
  }

  final speech = stt.SpeechToText();
  final bool available = await speech.initialize(
    onError: (error) => debugPrint('STT error: ${error.errorMsg}'),
    debugLogging: false,
  );

  if (!available) return 'unavailable';

  final completer = Completer<String>();
  String lastResult = '';

  await speech.listen(
    onResult: (val) {
      lastResult = val.recognizedWords.toLowerCase().trim();
      if (val.finalResult && !completer.isCompleted) {
        speech.stop();
        completer.complete(lastResult);
      }
    },
    listenFor: const Duration(seconds: 7),
    pauseFor: const Duration(seconds: 3),
    cancelOnError: true,
    partialResults: false,
    localeId: 'en_US',
  );

  // Hard timeout: resolve with whatever we have
  Future.delayed(const Duration(seconds: 9), () {
    if (!completer.isCompleted) {
      speech.stop();
      completer.complete(lastResult);
    }
  });

  return completer.future;
}

// FlutterFlow Custom Function: mapSpeechToRoute
// Pure function: no side effects, just maps text to route
String mapSpeechToRoute(String transcript) {
  final text = transcript.toLowerCase();
  if (text.isEmpty) return '';

  final Map<List<String>, String> routes = {
    ['home', 'main', 'dashboard', 'start']: '/homePage',
    ['order', 'orders', 'purchase', 'purchases']: '/ordersPage',
    ['profile', 'account', 'my account']: '/profilePage',
    ['cart', 'basket', 'checkout']: '/cartPage',
    ['search', 'find']: '/searchPage',
    ['help', 'support']: '/helpPage',
    ['back', 'go back', 'previous']: '__back__',
  };

  for (final entry in routes.entries) {
    for (final kw in entry.key) {
      if (text.contains(kw)) return entry.value;
    }
  }
  return '';
}

Common mistakes

The mistake: Hardcoding page names in the voice route map instead of using FlutterFlow route names.

How to avoid: Always use the exact route path defined in FlutterFlow's Page Settings → Route Name (e.g. '/ordersPage'). Confirm it matches by opening the page settings panel.

The mistake: Not handling the 'permission_denied' return value from the speech action.

How to avoid: Add a Conditional action at the start of your flow: if transcript == 'permission_denied', show an alert dialog explaining that microphone access is needed and directing the user to Settings.

The mistake: Testing speech recognition in the web browser preview and expecting it to behave like a device.

How to avoid: Test voice features on a real device via FlutterFlow's Test on Device option (scan the QR code). Web preview is only useful for visual layout testing.

Best practices

  • Always show a visual indicator that the app is listening — users cannot tell if the mic is active without feedback
  • Add a spoken confirmation for navigation: use text-to-speech to say 'Navigating to Orders' after a successful match
  • Keep the keyword list short and memorable — 5-8 commands is better than 30 that users can't remember
  • Provide a help command ('help' or 'what can I say?') that opens a screen listing all available voice commands
  • Test with different accents and speech speeds — have multiple people test before launch
  • Respect accessibility: voice navigation should supplement, not replace, standard touch navigation
  • Log unrecognized command transcripts (anonymized) to identify common phrases to add to your keyword map

Still stuck?

Copy one of these prompts to get a personalized, step-by-step explanation.

ChatGPT Prompt

I'm building a FlutterFlow app and want to add voice navigation using the speech_to_text Flutter package. Write me a Custom Action in Dart that requests microphone permission, listens for up to 7 seconds, and returns the transcribed text as a lowercase string. Also write a separate Custom Function that accepts the transcript string and returns the route name from a keyword map. Include error handling for permission denied and speech unavailable cases.

FlutterFlow Prompt

In FlutterFlow, I have a microphone button on my home page. I want to build an Action Flow that: starts voice listening via a Custom Action, gets the transcript, passes it to a Custom Function to get a route name, and then navigates to that route — or shows a SnackBar if no route is matched. Walk me through each step in the Action Flow builder.

Frequently asked questions

Does voice navigation work on both iOS and Android?

Yes. The speech_to_text package uses the Speech framework's SFSpeechRecognizer on iOS and Android's SpeechRecognizer API. Both typically require an internet connection for transcription (audio is generally processed on Apple's and Google's servers, though newer devices can handle some languages on-device). Add both NSMicrophoneUsageDescription (iOS Info.plist) and RECORD_AUDIO permission (Android Manifest) to your project settings.

Can voice navigation work offline?

Limited offline support exists on some Android devices that have downloaded offline language packs. On iOS, speech recognition generally requires internet, although recent iOS versions can recognize some languages on-device. For fully offline voice recognition you would need to integrate a local model such as Vosk, which requires exporting your FlutterFlow project as Flutter code.

How do I handle multi-word commands like 'search for running shoes'?

Check if the transcript starts with 'search for' or 'find'. Extract the substring after those words. Navigate to your search page and pass the extracted phrase as a page parameter. The search page reads the parameter on load and populates the search field automatically.
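A minimal sketch of that extraction step, assuming the command prefixes used in this tutorial (the helper name extractSearchTerm is hypothetical):

```dart
// Illustrative helper for contextual commands like 'search for running shoes'.
// Returns the phrase after a recognized command prefix, or '' if none matches.
String extractSearchTerm(String transcript) {
  final text = transcript.toLowerCase().trim();
  for (final prefix in ['search for ', 'find ', 'look for ']) {
    if (text.startsWith(prefix)) {
      return text.substring(prefix.length).trim();
    }
  }
  return '';
}
```

In the Action Flow, you would pass the returned phrase to your search page as a page parameter.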

What languages does speech_to_text support?

The package supports any language supported by the device's built-in speech recognizer. Set the localeId parameter (e.g., 'es_ES' for Spanish, 'fr_FR' for French). You can also call speech.locales() to get a list of languages available on the current device.
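A short sketch of both ideas, using the speech_to_text API (locales() returns the locales the device recognizer supports; the function name listenInSpanish is illustrative):

```dart
import 'package:flutter/foundation.dart';
import 'package:speech_to_text/speech_to_text.dart' as stt;

// Illustrative: list available locales, then listen in Spanish.
Future<void> listenInSpanish() async {
  final speech = stt.SpeechToText();
  if (!await speech.initialize()) return;

  // Enumerate the languages the device recognizer supports
  final locales = await speech.locales();
  for (final l in locales) {
    debugPrint('${l.localeId}: ${l.name}');
  }

  await speech.listen(
    onResult: (val) => debugPrint(val.recognizedWords),
    localeId: 'es_ES', // Spanish (Spain)
  );
}
```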

Will voice commands work while the app is in the background?

No. The speech_to_text package requires the app to be in the foreground and the screen to be active. Background microphone access requires special entitlements from Apple/Google for specific use cases like call recording, not general app navigation.

How do I let users customize their own voice commands?

Store the route map as a Firestore document under each user's profile instead of a hardcoded Dart map. Load it as a page state variable on app start. Let users add or edit their command-to-route mappings in a settings screen. Pass the custom map as a parameter to your mapSpeechToRoute function.
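A sketch of the parameterized variant described above. The map would be loaded from the user's Firestore document; the function name mapSpeechToRouteCustom is hypothetical:

```dart
// Illustrative variant of mapSpeechToRoute that accepts a per-user
// keyword-to-route map (e.g. loaded from Firestore) instead of a
// hardcoded one. Each key is a single keyword; each value is a route.
String mapSpeechToRouteCustom(String transcript, Map<String, String> userMap) {
  final text = transcript.toLowerCase();
  if (text.isEmpty) return '';
  for (final entry in userMap.entries) {
    if (text.contains(entry.key.toLowerCase())) return entry.value;
  }
  return '';
}
```

A flat Map<String, String> also serializes cleanly to and from a Firestore document, unlike the List-keyed map used in the hardcoded version.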
