FlutterFlow's built-in AudioPlayer widget plays audio but cannot edit it. For real audio editing, use audio_waveforms for waveform display and trim point selection, ffmpeg_kit_flutter for client-side trim, merge, and speed adjustments, or a Firebase Cloud Function running FFmpeg for server-side processing. Choose client-side for files under 50MB and server-side for larger files or complex operations.
FlutterFlow Has No Built-In Audio Editor
This is one of the most common misunderstandings for FlutterFlow beginners: the AudioPlayer widget is a playback widget only. It has play, pause, seek, and volume controls — nothing for trimming, splitting, merging, or applying effects. Actual audio editing requires either running FFmpeg on-device via ffmpeg_kit_flutter, or uploading the file to a Cloud Function that runs server-side FFmpeg. This guide explains both approaches clearly and shows you when to use each.
Prerequisites
- FlutterFlow Pro plan (Custom Actions required for FFmpeg integration)
- Firebase Storage configured for audio file upload and download
- Basic understanding of Firebase Cloud Functions for server-side processing
- An audio file picker already configured (file upload action)
Step-by-step guide
Understand What FlutterFlow's AudioPlayer Can and Cannot Do
The AudioPlayer widget in FlutterFlow is built on the just_audio package. It supports playing audio from URL or local file, seeking to a specific position, looping, speed control (0.5x to 2x), and volume control. It does not support: trimming audio to a time range, splitting a file into segments, merging multiple files, changing pitch, removing silence, converting formats, or applying reverb or equalization effects. If your users need any of these features, you must use one of the two editing approaches in this guide — there is no setting or toggle in FlutterFlow that adds these capabilities to the built-in widget.
Expected result: You have a clear decision: playback only = use AudioPlayer widget. Editing = use ffmpeg_kit_flutter or Cloud Function.
Add the audio_waveforms Package for Visual Trim Point Selection
Users cannot trim audio accurately without seeing the waveform. Add audio_waveforms to your Custom Code dependencies in FlutterFlow. Create a Custom Widget called WaveformTrimmer that renders the waveform using the package's PlayerController and AudioFileWaveforms widget. Overlay two draggable vertical handles on the waveform: one for the trim start position and one for the trim end position. Store the positions as Page State variables (trimStart and trimEnd, as Duration values). Add a Play Preview button that plays only the selected section using just_audio's AudioPlayer.setClip(start: trimStart, end: trimEnd) so users can hear the selection before confirming the trim.
```dart
import 'package:audio_waveforms/audio_waveforms.dart';
import 'package:flutter/material.dart';

class WaveformTrimmerWidget extends StatefulWidget {
  final String audioPath;
  final Duration trimStart;
  final Duration trimEnd;
  final ValueChanged<Duration> onTrimStartChanged;
  final ValueChanged<Duration> onTrimEndChanged;

  const WaveformTrimmerWidget({
    super.key,
    required this.audioPath,
    required this.trimStart,
    required this.trimEnd,
    required this.onTrimStartChanged,
    required this.onTrimEndChanged,
  });

  @override
  State<WaveformTrimmerWidget> createState() => _WaveformTrimmerWidgetState();
}

class _WaveformTrimmerWidgetState extends State<WaveformTrimmerWidget> {
  late final PlayerController _controller;

  @override
  void initState() {
    super.initState();
    _controller = PlayerController();
    _controller.preparePlayer(path: widget.audioPath);
  }

  @override
  void dispose() {
    _controller.dispose();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return AudioFileWaveforms(
      playerController: _controller,
      size: Size(MediaQuery.of(context).size.width, 80),
      waveformType: WaveformType.fitWidth,
      playerWaveStyle: const PlayerWaveStyle(
        fixedWaveColor: Colors.grey,
        liveWaveColor: Colors.blue,
        spacing: 6,
      ),
    );
  }
}
```

Expected result: The audio waveform renders as a scrollable visualization. Two colored handles mark the trim start and end points that the user can drag.
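The Play Preview step described above can be sketched directly with just_audio, the package FlutterFlow's AudioPlayer widget is built on. This is a minimal sketch; previewTrimmedSection is an illustrative name, not a FlutterFlow API:

```dart
import 'package:just_audio/just_audio.dart';

/// Plays only the [trimStart]..[trimEnd] range of the file so the user
/// can audition the selection before committing the trim.
Future<void> previewTrimmedSection(
  String audioPath,
  Duration trimStart,
  Duration trimEnd,
) async {
  final player = AudioPlayer();
  try {
    await player.setFilePath(audioPath);
    // setClip restricts playback to the given window without editing the file
    await player.setClip(start: trimStart, end: trimEnd);
    await player.play();
  } finally {
    await player.dispose();
  }
}
```

Because setClip only constrains playback, nothing is written to disk; the actual trim still happens in the FFmpeg step that follows.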
Execute Client-Side Audio Trim with ffmpeg_kit_flutter
After the user sets trim points, execute the actual trim operation on-device using ffmpeg_kit_flutter. Add the package to your Custom Code dependencies. Create a Custom Action called trimAudio that takes the source file path, trimStart seconds, and trimEnd seconds, builds the FFmpeg command string, executes it, and returns the output file path. The on-device approach works well for files under 50MB and produces results in 2-10 seconds depending on the clip length and device speed. For longer recordings, show a progress indicator during processing.
```dart
import 'package:ffmpeg_kit_flutter/ffmpeg_kit.dart';
import 'package:ffmpeg_kit_flutter/return_code.dart';
import 'package:path_provider/path_provider.dart';
import 'dart:io';

Future<String?> trimAudio(
  String inputPath,
  double startSeconds,
  double endSeconds,
) async {
  final duration = endSeconds - startSeconds;
  if (duration <= 0) return null;

  final directory = await getTemporaryDirectory();
  final timestamp = DateTime.now().millisecondsSinceEpoch;
  final outputPath = '${directory.path}/trimmed_$timestamp.m4a';

  // FFmpeg trim command
  // -ss: start time, -t: duration, -c copy: no re-encode (fast)
  final command =
      '-ss $startSeconds -t $duration -i "$inputPath" -c copy "$outputPath"';

  final session = await FFmpegKit.execute(command);
  final returnCode = await session.getReturnCode();

  if (ReturnCode.isSuccess(returnCode)) {
    return outputPath;
  } else {
    final logs = await session.getAllLogsAsString();
    print('FFmpeg error: $logs');
    return null;
  }
}
```

Expected result: After tapping Trim, the Custom Action completes in a few seconds and returns the path to the trimmed audio file, which you can then play in AudioPlayer to confirm.
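Before running the trim, it helps to clamp the user's trim points against the file's real duration so FFmpeg never receives an out-of-range window. A sketch using FFprobeKit from the same ffmpeg_kit_flutter package (probeDurationSeconds is an illustrative name):

```dart
import 'package:ffmpeg_kit_flutter/ffprobe_kit.dart';

/// Returns the clip duration in seconds, or null if probing fails.
/// Use it to clamp trimEnd before calling trimAudio.
Future<double?> probeDurationSeconds(String path) async {
  final session = await FFprobeKit.getMediaInformation(path);
  final info = session.getMediaInformation();
  final durationStr = info?.getDuration(); // duration as a string, in seconds
  return durationStr == null ? null : double.tryParse(durationStr);
}
```

Call this once when the user picks a file, store the result in Page State, and reject or clamp any trimEnd beyond it.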
Merge Multiple Audio Files with FFmpeg
To combine multiple clips into one recording, use FFmpeg's concat filter. Create a Custom Action called mergeAudioFiles that takes a List of file paths, writes a temporary concat list file, and runs the FFmpeg concat command. This is useful for podcast editing, audio messaging with multiple takes, or any scenario where users record in segments. The concat demuxer (-f concat) is faster than the filter_complex approach for simple sequential merging without transitions.
```dart
import 'package:ffmpeg_kit_flutter/ffmpeg_kit.dart';
import 'package:ffmpeg_kit_flutter/return_code.dart';
import 'package:path_provider/path_provider.dart';
import 'dart:io';

Future<String?> mergeAudioFiles(List<String> inputPaths) async {
  if (inputPaths.length < 2) return inputPaths.isEmpty ? null : inputPaths[0];

  final dir = await getTemporaryDirectory();
  final timestamp = DateTime.now().millisecondsSinceEpoch;

  // Write concat list file
  final listFile = File('${dir.path}/concat_$timestamp.txt');
  final lines = inputPaths.map((p) => "file '$p'").join('\n');
  await listFile.writeAsString(lines);

  final outputPath = '${dir.path}/merged_$timestamp.m4a';
  final command =
      '-f concat -safe 0 -i "${listFile.path}" -c copy "$outputPath"';

  final session = await FFmpegKit.execute(command);
  final returnCode = await session.getReturnCode();

  await listFile.delete();

  return ReturnCode.isSuccess(returnCode) ? outputPath : null;
}
```

Expected result: Three recorded clips merge into a single continuous audio file that plays back seamlessly in AudioPlayer.
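The -c copy concat above assumes every input shares the same codec and container settings. If your users may mix formats, a re-encoding fallback with the concat filter avoids failures; this command builder is a sketch (buildConcatFilterCommand is an illustrative name):

```dart
/// Builds an FFmpeg concat-filter command that re-encodes all inputs,
/// for cases where the inputs use different codecs and -c copy would fail.
String buildConcatFilterCommand(List<String> inputs, String outputPath) {
  final inputArgs = inputs.map((p) => '-i "$p"').join(' ');
  // One audio stream label per input: [0:a][1:a]...
  final streams = List.generate(inputs.length, (i) => '[$i:a]').join();
  final n = inputs.length;
  return '$inputArgs -filter_complex "${streams}concat=n=$n:v=0:a=1[out]" '
      '-map "[out]" "$outputPath"';
}
```

Pass the returned string to FFmpegKit.execute exactly as in mergeAudioFiles. Re-encoding is slower than -c copy, so use it only as the fallback path.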
Upload Processed Audio to Firebase Storage
After trimming or merging, upload the resulting file to Firebase Storage and save the URL to Firestore. Create a Custom Action called uploadProcessedAudio that reads the local file as bytes, uploads to a user-specific Storage path, and returns the download URL. Update the relevant Firestore document with the new audio_url and duration_seconds fields. Delete the temporary local file after successful upload to free device storage.
```dart
import 'package:firebase_storage/firebase_storage.dart';
import 'package:cloud_firestore/cloud_firestore.dart';
import 'dart:io';

Future<String?> uploadProcessedAudio(
  String localPath,
  String userId,
  String recordingId,
) async {
  final file = File(localPath);
  if (!file.existsSync()) return null;

  final filename = 'recording_${DateTime.now().millisecondsSinceEpoch}.m4a';
  final ref = FirebaseStorage.instance
      .ref('users/$userId/recordings/$filename');

  await ref.putFile(file, SettableMetadata(contentType: 'audio/m4a'));
  final url = await ref.getDownloadURL();

  await FirebaseFirestore.instance
      .collection('recordings')
      .doc(recordingId)
      .update({
    'audio_url': url,
    'processed_at': FieldValue.serverTimestamp(),
    'filename': filename,
  });

  // Clean up temp file
  await file.delete();
  return url;
}
```

Expected result: The processed audio file is uploaded to Firebase Storage and the Firestore recording document is updated with the new URL. The AudioPlayer widget can now load the trimmed version.
Complete working example
```dart
// Complete audio editing Custom Actions for FlutterFlow
// Requires: ffmpeg_kit_flutter, path_provider in pubspec dependencies

import 'package:ffmpeg_kit_flutter/ffmpeg_kit.dart';
import 'package:ffmpeg_kit_flutter/return_code.dart';
import 'package:path_provider/path_provider.dart';
import 'package:firebase_storage/firebase_storage.dart';
import 'package:cloud_firestore/cloud_firestore.dart';
import 'dart:io';

// ─── Trim Audio ──────────────────────────────────────────────

Future<String?> trimAudio(
  String inputPath,
  double startSeconds,
  double endSeconds,
) async {
  final duration = endSeconds - startSeconds;
  if (duration <= 0.5) return null; // Minimum 0.5 second clip

  final dir = await getTemporaryDirectory();
  final ts = DateTime.now().millisecondsSinceEpoch;
  final outputPath = '${dir.path}/trim_$ts.m4a';

  final cmd =
      '-ss $startSeconds -t $duration -i "$inputPath" -c copy "$outputPath"';
  final session = await FFmpegKit.execute(cmd);
  final rc = await session.getReturnCode();
  return ReturnCode.isSuccess(rc) ? outputPath : null;
}

// ─── Merge Audio ─────────────────────────────────────────────

Future<String?> mergeAudioFiles(List<String> paths) async {
  if (paths.isEmpty) return null;
  if (paths.length == 1) return paths[0];

  final dir = await getTemporaryDirectory();
  final ts = DateTime.now().millisecondsSinceEpoch;

  final listFile = File('${dir.path}/list_$ts.txt');
  await listFile.writeAsString(paths.map((p) => "file '$p'").join('\n'));

  final outputPath = '${dir.path}/merged_$ts.m4a';
  final cmd = '-f concat -safe 0 -i "${listFile.path}" -c copy "$outputPath"';
  final session = await FFmpegKit.execute(cmd);
  final rc = await session.getReturnCode();

  await listFile.delete();
  return ReturnCode.isSuccess(rc) ? outputPath : null;
}

// ─── Change Speed ────────────────────────────────────────────

Future<String?> changeAudioSpeed(
  String inputPath,
  double speed, // 0.5 = half speed, 2.0 = double speed
) async {
  // atempo accepts 0.5-2.0 per filter; this action rejects values
  // outside that range (chain atempo filters to go further)
  if (speed < 0.5 || speed > 2.0) return null;

  final dir = await getTemporaryDirectory();
  final ts = DateTime.now().millisecondsSinceEpoch;
  final outputPath = '${dir.path}/speed_$ts.m4a';

  final cmd = '-i "$inputPath" -filter:a "atempo=$speed" "$outputPath"';
  final session = await FFmpegKit.execute(cmd);
  final rc = await session.getReturnCode();
  return ReturnCode.isSuccess(rc) ? outputPath : null;
}

// ─── Upload to Firebase Storage ──────────────────────────────

Future<String?> uploadAudio(String localPath, String userId) async {
  final file = File(localPath);
  if (!file.existsSync()) return null;

  final name = 'audio_${DateTime.now().millisecondsSinceEpoch}.m4a';
  final ref = FirebaseStorage.instance.ref('users/$userId/audio/$name');
  await ref.putFile(file, SettableMetadata(contentType: 'audio/m4a'));
  final url = await ref.getDownloadURL();
  await file.delete();
  return url;
}
```

Common mistakes
Mistake: Expecting FlutterFlow's AudioPlayer widget to have trim, split, or effect controls
How to avoid: Accept that editing requires Custom Actions with ffmpeg_kit_flutter for on-device processing, or a Cloud Function for server-side operations. The AudioPlayer widget can be used for preview playback after editing.
Mistake: Processing large audio files on-device with FFmpeg and blocking the UI while they run
How to avoid: Use FFmpegKit.executeAsync() with a completion callback so the FFmpeg work runs off the Dart event loop (plain compute() isolates cannot call plugin channels), or move large file processing to a Cloud Function. Show a progress indicator and keep the UI responsive during processing.
Mistake: Not deleting temporary files after upload
How to avoid: Always call file.delete() after successfully uploading the processed audio to Firebase Storage. The Storage URL is your permanent reference — the local temp file is disposable.
Best practices
- Always validate that the trim end position is greater than the trim start — a zero-length trim will produce an empty file that causes confusing errors downstream.
- Show a waveform visualization before asking users to set trim points — users cannot accurately trim audio they cannot see.
- Use -c copy in FFmpeg trim and concat operations to avoid re-encoding when the input format matches the output format — this is 10-50x faster.
- For files over 50MB or operations involving multiple filters, process server-side via Cloud Function to avoid on-device memory and time constraints.
- Cache the audio duration in Firestore alongside the URL so you do not need to load the full audio file just to display its length.
- Test audio editing on a physical device — iOS simulators have audio hardware limitations and Android emulators may produce incorrect FFmpeg results.
- Provide a Cancel operation that deletes any partially processed temporary files to avoid storage leaks on interrupted edits.
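The changeAudioSpeed action in the complete example caps speed at atempo's native 0.5-2.0 range. FFmpeg lets you chain atempo filters to go beyond it; a sketch of a chain builder (buildAtempoChain is an illustrative name):

```dart
/// Builds an FFmpeg atempo filter chain for an arbitrary speed factor.
/// A single atempo filter only accepts 0.5-2.0, so larger or smaller
/// factors are expressed as a product of in-range filters.
String buildAtempoChain(double speed) {
  final factors = <double>[];
  while (speed > 2.0) {
    factors.add(2.0);
    speed /= 2.0;
  }
  while (speed < 0.5) {
    factors.add(0.5);
    speed /= 0.5;
  }
  factors.add(speed);
  return factors.map((f) => 'atempo=$f').join(',');
}
```

For example, a 4x speed-up becomes the filter string atempo=2.0,atempo=2.0, which you can substitute into the -filter:a argument of the changeAudioSpeed command.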
Still stuck?
Copy one of these prompts to get a personalized, step-by-step explanation.
I am building audio editing features in a FlutterFlow app. FlutterFlow's built-in AudioPlayer cannot edit audio. Show me how to use ffmpeg_kit_flutter in a Flutter Custom Action to trim audio to a time range, merge multiple audio files, and change playback speed. Include complete Dart code for each operation and explain when to process on-device versus in a Cloud Function.
In my FlutterFlow app, create three Custom Actions: trimAudio(inputPath, startSeconds, endSeconds) that returns the trimmed file path, mergeAudioFiles(List paths) that returns a merged file path, and uploadAudio(localPath, userId) that uploads to Firebase Storage and returns the download URL. Use ffmpeg_kit_flutter for the processing steps.
Frequently asked questions
Does FlutterFlow's AudioPlayer widget support trim or cut operations?
No. The AudioPlayer widget supports playback controls only: play, pause, seek, loop, speed (0.5x-2x), and volume. It cannot modify audio files. For editing, you need Custom Actions using ffmpeg_kit_flutter or a server-side Cloud Function with FFmpeg.
What audio formats does ffmpeg_kit_flutter support?
ffmpeg_kit_flutter supports all formats that FFmpeg supports: MP3, M4A, AAC, WAV, OGG, FLAC, OPUS, and many more. For mobile apps, M4A/AAC is recommended as the output format — it has the best compatibility across iOS and Android with good compression.
How do I show a progress percentage during FFmpeg processing?
Use FFmpegKitConfig.enableStatisticsCallback() to receive progress updates. The Statistics object has a time property (milliseconds processed) which you can divide by the total duration to calculate percentage. Update a Page State variable from this callback to drive a progress bar widget.
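A sketch of that wiring, assuming the clip's total duration in milliseconds is already known (listenForFfmpegProgress and onProgress are illustrative names; verify the Statistics getter names against your ffmpeg_kit_flutter version):

```dart
import 'package:ffmpeg_kit_flutter/ffmpeg_kit_config.dart';

/// Registers a global statistics callback and reports progress as 0.0-1.0.
/// Call this once before starting the FFmpeg session.
void listenForFfmpegProgress(
  int totalDurationMs,
  void Function(double progress) onProgress,
) {
  FFmpegKitConfig.enableStatisticsCallback((statistics) {
    final processedMs = statistics.getTime(); // ms of audio processed so far
    if (totalDurationMs > 0) {
      onProgress((processedMs / totalDurationMs).clamp(0.0, 1.0).toDouble());
    }
  });
}
```

In FlutterFlow, have onProgress write to a Page State variable bound to a progress bar widget.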
Can I do real-time audio effects (reverb, EQ) in FlutterFlow?
Real-time audio effects require a native audio engine. You would need to build a Custom Widget using platform channels to access AVAudioEngine on iOS or AudioTrack with OpenSL ES on Android. This is significantly more complex than file-based FFmpeg editing and typically beyond what FlutterFlow's Custom Code system is designed for.
Is ffmpeg_kit_flutter available for Flutter Web?
No. ffmpeg_kit_flutter is a mobile-only package. For Flutter Web audio processing, you would need to use JavaScript FFmpeg via dart:js interop or send files to a Cloud Function. If your app needs to run on Web, the Cloud Function approach is the most portable.
How do I add a recording feature so users can record and then edit?
Use the record package (pub.dev/packages/record) in a Custom Action to capture microphone audio to a local .m4a file. After the user stops recording, pass the file path to your trimAudio or mergeAudioFiles Custom Actions. The AudioPlayer widget can then play back the recording for review before the user confirms their edit.
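A sketch of that record-then-edit handoff, assuming the record package's v5 API (startRecording and stopRecording are illustrative names):

```dart
import 'package:record/record.dart';
import 'package:path_provider/path_provider.dart';

final _recorder = AudioRecorder();

/// Starts capturing microphone audio to a temporary .m4a file.
Future<void> startRecording() async {
  if (!await _recorder.hasPermission()) return;
  final dir = await getTemporaryDirectory();
  final path =
      '${dir.path}/take_${DateTime.now().millisecondsSinceEpoch}.m4a';
  await _recorder.start(
    const RecordConfig(encoder: AudioEncoder.aacLc),
    path: path,
  );
}

/// Stops recording and returns the file path, ready to pass to
/// trimAudio or mergeAudioFiles.
Future<String?> stopRecording() => _recorder.stop();
```

The returned path plugs straight into the trim and merge Custom Actions from this guide.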
Talk to an Expert
Our team has built 600+ apps. Get personalized help with your project.
Book a free consultation