Adding Voice Messages to a FlutterFlow Chat App
Voice messaging is a high-engagement feature in any chat app, but it requires handling microphone permissions, audio recording, file compression, cloud storage, and media playback — all in sequence. In FlutterFlow, each of these steps maps to a Custom Action backed by a Dart package. This tutorial covers the complete pipeline: holding a record button to capture audio in AAC format, uploading the compressed file to Firebase Storage, writing a Firestore message document with type 'voice', and rendering a compact playback widget with a progress slider. Proper audio encoding is critical — recording in WAV format by default results in huge files that are slow to upload and costly to store.
Prerequisites
- FlutterFlow project with a working chat or messaging feature backed by Firestore
- Firebase Storage configured and the Firebase SDK connected to your project
- FlutterFlow Pro plan for code export (required to add custom Dart packages)
- Basic understanding of FlutterFlow Custom Actions and Firestore document writes
Step-by-step guide
Add the record and just_audio packages to your project
In FlutterFlow, go to Settings → Pubspec Dependencies and add two packages: record (version 5.x or later) and just_audio (version 0.9.x or later). The record package handles microphone access and audio encoding. The just_audio package provides a reliable cross-platform audio player with position streams. After adding both packages, also add the required permission entries. In your exported project's ios/Runner/Info.plist add NSMicrophoneUsageDescription. In android/app/src/main/AndroidManifest.xml add RECORD_AUDIO permission. FlutterFlow's Permissions panel (Settings → App Permissions) can add the microphone permission for you without manual manifest editing.
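As a sketch, the two permission entries look like this (the usage string is illustrative; adjust the wording for your app):

```xml
<!-- ios/Runner/Info.plist: add inside the top-level <dict> -->
<key>NSMicrophoneUsageDescription</key>
<string>This app uses the microphone to record voice messages.</string>

<!-- android/app/src/main/AndroidManifest.xml: add above the <application> element -->
<uses-permission android:name="android.permission.RECORD_AUDIO" />
```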
Expected result: Both packages appear in pubspec.yaml and the project builds without errors after running flutter pub get.
Create startRecording and stopRecording Custom Actions
Create two Custom Actions. The first, startRecording, initialises an AudioRecorder instance and calls recorder.start() with RecordConfig specifying encoder: AudioEncoder.aacLc and bitRate: 128000. Store the recorder instance in a static variable so stopRecording can access it. The second action, stopRecording, calls recorder.stop() which returns the file path of the recorded audio. Pass this file path back to FlutterFlow as the action's return value (a String). In your chat page, bind a hold-to-record GestureDetector button: On Long Press Start calls startRecording, On Long Press End calls stopRecording and receives the file path for the upload step.
```dart
// Two Custom Actions: startRecording and stopRecording
import 'package:record/record.dart';
import 'package:path_provider/path_provider.dart';

// Shared recorder instance
AudioRecorder? _recorder;

// Action 1: startRecording (no parameters, no return)
Future<void> startRecording() async {
  _recorder = AudioRecorder();
  final hasPermission = await _recorder!.hasPermission();
  if (!hasPermission) return;

  final dir = await getTemporaryDirectory();
  final filePath =
      '${dir.path}/voice_${DateTime.now().millisecondsSinceEpoch}.m4a';

  await _recorder!.start(
    const RecordConfig(
      encoder: AudioEncoder.aacLc,
      bitRate: 128000,
      sampleRate: 44100,
    ),
    path: filePath,
  );
}

// Action 2: stopRecording (no parameters, returns String filePath)
Future<String> stopRecording() async {
  if (_recorder == null) return '';
  final path = await _recorder!.stop();
  _recorder = null;
  return path ?? '';
}
```
Expected result: Holding the record button starts microphone capture. Releasing it returns a local file path string ending in .m4a.
Upload the audio file to Firebase Storage
Create a Custom Action named uploadVoiceMessage that accepts the local file path (String) and the chat conversation ID (String) as parameters. The action uploads the file to Firebase Storage at path voice_messages/{conversationId}/{timestamp}.m4a using FirebaseStorage.instance.ref().putFile(). After the upload completes, call getDownloadURL() and return the URL as the action's String result. Measure the file's duration in a separate step so it can be written to Firestore alongside the URL: load the local file with a just_audio AudioPlayer, read player.duration, then dispose the player.
```dart
// Custom Action: uploadVoiceMessage
// Parameters: filePath (String), conversationId (String)
// Returns: String (download URL)
import 'dart:io';

import 'package:firebase_storage/firebase_storage.dart';

Future<String> uploadVoiceMessage(String filePath, String conversationId) async {
  if (filePath.isEmpty) return '';

  final file = File(filePath);
  final timestamp = DateTime.now().millisecondsSinceEpoch;
  final storageRef = FirebaseStorage.instance
      .ref()
      .child('voice_messages/$conversationId/$timestamp.m4a');

  final uploadTask = await storageRef.putFile(
    file,
    SettableMetadata(contentType: 'audio/mp4'),
  );

  final downloadUrl = await uploadTask.ref.getDownloadURL();
  return downloadUrl;
}
```
Expected result: After recording, the file uploads to Firebase Storage and a download URL is returned. You can verify the file in the Firebase Storage console.
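The duration measurement described above can live in its own small helper. A minimal sketch using just_audio's setFilePath, which returns the loaded file's duration (the function name is illustrative; it returns 0 if the duration cannot be read):

```dart
import 'package:just_audio/just_audio.dart';

// Sketch: measure a local audio file's duration in seconds with just_audio.
// Always dispose the temporary player, even if loading fails.
Future<int> getAudioDurationSeconds(String filePath) async {
  final player = AudioPlayer();
  try {
    final duration = await player.setFilePath(filePath);
    return duration?.inSeconds ?? 0;
  } finally {
    await player.dispose();
  }
}
```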
Write the voice message to Firestore
After receiving the download URL from the upload action, create a Firestore document in your messages collection (path: conversations/{conversationId}/messages). Set the document fields: type to 'voice', audioUrl to the download URL, senderId to the current user's UID, senderName to the current user's display name, durationSeconds to the recorded duration integer, and createdAt to the server timestamp. The type field is what differentiates voice messages from regular text messages in your ListView rendering logic. In your message ListView item, add a Conditional Builder that shows a voice message playback widget when type equals 'voice' and the normal text bubble when type equals 'text'.
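A hedged sketch of that Firestore write as a Custom Action, using the field names and collection path this tutorial defines (the function name is illustrative):

```dart
import 'package:cloud_firestore/cloud_firestore.dart';
import 'package:firebase_auth/firebase_auth.dart';

// Sketch: create the voice message document at
// conversations/{conversationId}/messages with the schema described above.
Future<void> createVoiceMessage(
    String conversationId, String audioUrl, int durationSeconds) async {
  final user = FirebaseAuth.instance.currentUser;
  if (user == null) return;

  await FirebaseFirestore.instance
      .collection('conversations')
      .doc(conversationId)
      .collection('messages')
      .add({
    'type': 'voice',
    'audioUrl': audioUrl,
    'senderId': user.uid,
    'senderName': user.displayName ?? '',
    'durationSeconds': durationSeconds,
    'createdAt': FieldValue.serverTimestamp(),
  });
}
```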
Expected result: A voice message document appears in Firestore after recording. The chat ListView shows a voice message bubble in place of a text bubble.
Build the voice message playback widget
Create a Custom Widget named VoiceMessagePlayer that accepts audioUrl (String) and durationSeconds (Integer) as parameters. Inside the widget, use just_audio's AudioPlayer to load and play the URL. Build a Row containing: a play/pause IconButton that toggles player.play() and player.pause(), a LinearProgressIndicator driven by a StreamBuilder on player.positionStream showing the playback progress, and a Text showing the current position formatted as mm:ss. Initialise the player in initState and call player.setUrl(audioUrl). Dispose the player in dispose(). Register this Custom Widget in FlutterFlow and place it inside the voice message conditional branch of your ListView item.
Expected result: Voice messages display a compact playback widget with a play button, progress bar, and time display. Tapping play streams the audio from Firebase Storage.
Complete working example
```dart
// Custom Widget: VoiceMessagePlayer
// Parameters: audioUrl (String), durationSeconds (int)
import 'package:flutter/material.dart';
import 'package:just_audio/just_audio.dart';

class VoiceMessagePlayer extends StatefulWidget {
  final String audioUrl;
  final int durationSeconds;

  const VoiceMessagePlayer({
    Key? key,
    required this.audioUrl,
    required this.durationSeconds,
  }) : super(key: key);

  @override
  State<VoiceMessagePlayer> createState() => _VoiceMessagePlayerState();
}

class _VoiceMessagePlayerState extends State<VoiceMessagePlayer> {
  late AudioPlayer _player;
  bool _isPlaying = false;
  Duration _position = Duration.zero;

  @override
  void initState() {
    super.initState();
    _player = AudioPlayer();
    _player.setUrl(widget.audioUrl);
    _player.positionStream.listen((pos) {
      if (mounted) setState(() => _position = pos);
    });
    _player.playerStateStream.listen((state) {
      if (mounted) setState(() => _isPlaying = state.playing);
    });
  }

  @override
  void dispose() {
    _player.dispose();
    super.dispose();
  }

  String _formatDuration(Duration d) {
    final m = d.inMinutes.remainder(60).toString().padLeft(2, '0');
    final s = d.inSeconds.remainder(60).toString().padLeft(2, '0');
    return '$m:$s';
  }

  @override
  Widget build(BuildContext context) {
    final total = Duration(seconds: widget.durationSeconds);
    final progress = total.inSeconds > 0
        ? _position.inSeconds / total.inSeconds
        : 0.0;

    return Container(
      padding: const EdgeInsets.symmetric(horizontal: 12, vertical: 8),
      decoration: BoxDecoration(
        color: Colors.grey[200],
        borderRadius: BorderRadius.circular(20),
      ),
      child: Row(
        mainAxisSize: MainAxisSize.min,
        children: [
          IconButton(
            icon: Icon(_isPlaying ? Icons.pause : Icons.play_arrow),
            onPressed: () => _isPlaying ? _player.pause() : _player.play(),
          ),
          SizedBox(
            width: 120,
            child: LinearProgressIndicator(
              value: progress.clamp(0.0, 1.0).toDouble(),
              backgroundColor: Colors.grey[400],
              color: Theme.of(context).primaryColor,
            ),
          ),
          const SizedBox(width: 8),
          Text(
            _formatDuration(_isPlaying ? _position : total),
            style: const TextStyle(fontSize: 12),
          ),
        ],
      ),
    );
  }
}
```
Common mistakes
Recording audio in WAV format instead of AAC
Why it's a problem: WAV is uncompressed, so a 60-second voice message can be 10MB or more, which is slow to upload and expensive to store.
How to avoid: Configure the record package with AudioEncoder.aacLc at a 128000 bitRate. This produces an .m4a file where a 60-second recording is under 1MB, with audio quality that is indistinguishable from WAV for voice content.
Not requesting microphone permission before starting the recorder
Why it's a problem: Without the permission, recorder.start() either throws or silently records nothing, leaving the user with a record button that appears broken.
How to avoid: Call recorder.hasPermission() before recorder.start(). If it returns false, show the user a dialog explaining why the microphone is needed, then trigger the system permission request using the permission_handler package.
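A minimal sketch of that flow with permission_handler (the function name is illustrative; it assumes permission_handler's Permission.microphone API):

```dart
import 'package:permission_handler/permission_handler.dart';

// Returns true once the microphone permission is granted. Call this
// before startRecording; show your explanatory dialog first if the
// current status is denied.
Future<bool> ensureMicPermission() async {
  var status = await Permission.microphone.status;
  if (status.isGranted) return true;
  status = await Permission.microphone.request();
  return status.isGranted;
}
```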
Creating a new AudioPlayer instance for every voice message in the ListView
Why it's a problem: Each AudioPlayer holds a native platform player, so a long chat can accumulate dozens of idle instances, wasting memory and exhausting platform audio resources.
How to avoid: Initialise the AudioPlayer lazily, when the user first taps play, and release it in the widget's dispose() method. Flutter disposes list items automatically once they scroll beyond the ListView's cacheExtent, so disposing there keeps the number of live players small.
Best practices
- Always record in AAC-LC format at 128kbps — never WAV — for voice messages
- Set a maximum recording duration of 5 minutes and show a timer so users know how long they have been recording
- Store durationSeconds in Firestore so the playback widget can show the total length before the user presses play
- Use Firebase Storage security rules to restrict audio file reads to authenticated users in the same conversation
- Clean up the local temporary .m4a file after a successful upload using File(path).delete()
- Show an upload progress indicator (0–100%) during the Firebase Storage upload for large recordings
- Test microphone permissions on both iOS and Android — the permission flow differs between the two platforms
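The upload progress practice above can be sketched with the firebase_storage UploadTask's snapshotEvents stream (the function name and onProgress callback are illustrative; wire the callback to your UI state):

```dart
import 'dart:io';

import 'package:firebase_storage/firebase_storage.dart';

// Sketch: upload a file while reporting progress as a 0-100 percentage.
Future<String> uploadWithProgress(
    File file, Reference storageRef, void Function(int pct) onProgress) async {
  final task = storageRef.putFile(file);
  task.snapshotEvents.listen((snapshot) {
    if (snapshot.totalBytes > 0) {
      onProgress(
          (snapshot.bytesTransferred / snapshot.totalBytes * 100).round());
    }
  });
  final snapshot = await task;
  return snapshot.ref.getDownloadURL();
}
```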
Still stuck?
Copy one of these prompts to get a personalized, step-by-step explanation.
I am building a voice messaging feature in a FlutterFlow app using the record package for recording and just_audio for playback. How do I configure the RecordConfig to produce the smallest possible file size for voice audio while maintaining acceptable quality? What encoder, bitRate, and sampleRate settings should I use?
Create a FlutterFlow Custom Action called uploadVoiceMessage that takes filePath (String) and conversationId (String) as parameters. It should upload the file at filePath to Firebase Storage at path voice_messages/{conversationId}/{timestamp}.m4a with contentType audio/mp4, then return the download URL as a String.
Frequently asked questions
Does the record package work on both iOS and Android?
Yes. The record package version 5.x supports iOS, Android, macOS, Windows, Linux, and web. The AudioEncoder.aacLc encoder is available on all mobile platforms. On web, use AudioEncoder.opus instead, as AAC-LC is not supported in all browsers.
Can users listen to voice messages without downloading the whole file first?
Yes. The just_audio package streams audio from the URL progressively, so playback begins within a second or two even for longer recordings. Firebase Storage download URLs support HTTP range requests, which enables this streaming behaviour.
How do I show a waveform visualization for voice messages?
A real waveform requires analyzing the audio file's amplitude data, which needs a native plugin. For a simpler visual effect that looks similar, use a Row of small Container widgets with varying heights generated from a seeded random number based on the message ID. This produces a consistent fake waveform that looks good without the processing overhead.
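A minimal sketch of that seeded fake waveform (the helper name and bar count are illustrative):

```dart
import 'dart:math';

// Deterministic pseudo-waveform: seeding Random with the message ID's
// hash means the same message always renders the same bars across
// rebuilds. Heights are normalised to the range 0.2-1.0.
List<double> fakeWaveform(String messageId, {int bars = 24}) {
  final rng = Random(messageId.hashCode);
  return List.generate(bars, (_) => 0.2 + rng.nextDouble() * 0.8);
}
```

In the bubble widget, map each height to a narrow Container in a Row, e.g. height: 24 * h.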
How do I stop playback when the user navigates away from the chat page?
Call _player.dispose() in the Custom Widget's dispose() method; disposing the player stops playback and releases its platform resources. Flutter calls dispose() automatically when the widget is removed from the widget tree, which happens when the user navigates away. Make sure your Custom Widget properly overrides dispose and does not use late initialisation that could cause null errors during cleanup.
What Firebase Storage security rules should I use for voice message files?
Restrict reads to authenticated users: allow read: if request.auth != null. For finer control, store conversation membership in Firestore and check it in your rules using get() to verify the requesting user is a member of the conversation before allowing the download.
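As a hedged sketch, rules along these lines gate the voice_messages/{conversationId}/{file} path used in this tutorial to signed-in users (the per-conversation membership check is left as an exercise):

```
rules_version = '2';
service firebase.storage {
  match /b/{bucket}/o {
    match /voice_messages/{conversationId}/{fileName} {
      // Any signed-in user may read or write; tighten this with a
      // firestore.get() membership lookup for per-conversation access.
      allow read, write: if request.auth != null;
    }
  }
}
```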
Is there a size limit for files uploaded to Firebase Storage?
Firebase Storage accepts individual objects up to 5 TB, so there is no practical size limit for voice messages uploaded via the SDK. However, the free Spark plan has a total storage limit of 5GB. For voice messages, AAC-encoded audio is so compact that even heavy users will rarely exceed a few hundred megabytes of total storage.