Build a Shazam-like music identification feature in FlutterFlow. You will record a short audio clip using a Custom Action built on the record package, send the audio bytes to the AudD music recognition API via a Cloud Function, and display the result as a card with album art, track title, artist name, and a Listen on Spotify button. A Lottie animation plays pulsing concentric circles during recognition. All recognized tracks are saved to a recognized_tracks Firestore collection so users can browse their listening history.
Identify songs playing around you with audio recording and the AudD API
This tutorial builds a music recognition feature similar to Shazam. When the user taps a button, the app records 5 seconds of ambient audio, sends it to a music recognition API, and displays the identified song with album art, title, artist, and a link to listen on Spotify. The recording uses a Custom Action wrapping the record Flutter package. The API call goes through a Cloud Function to keep your API key secure. A Lottie animation provides visual feedback during the listening phase. Every successful recognition is saved to Firestore, giving users a browseable history of songs they have identified.
Prerequisites
- A FlutterFlow project with Firebase/Firestore connected
- Firebase Blaze plan for Cloud Functions
- An AudD API key (free tier allows 300 requests/month at audd.io)
- FlutterFlow Pro plan for Custom Actions and Custom Widgets
- A Lottie animation file for the listening pulse effect (freely available on lottiefiles.com)
Step-by-step guide
Set up the Firestore collection and AudD API credentials
Create a Firestore collection called recognized_tracks with fields: userId (String), title (String), artist (String), album (String), albumArtUrl (String), spotifyUrl (String), recognizedAt (Timestamp). Next, sign up at audd.io and copy your API token. In your Firebase project, store the AudD token as an environment variable for Cloud Functions using firebase functions:config:set audd.token='your_token_here'. Never put the API key in FlutterFlow client code — it must stay server-side in the Cloud Function. Create a composite index on recognized_tracks for userId (ascending) and recognizedAt (descending) to support the history query.
Expected result: The recognized_tracks collection exists in Firestore. The AudD API token is stored securely as a Cloud Function environment variable.
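Because the collection stores per-user history, it is worth locking access down to the owner. A minimal Firestore security rules sketch for this path (an assumption layered on top of whatever rules your project already has; merge rather than overwrite):

```
// firestore.rules: minimal sketch for the recognized_tracks collection
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /recognized_tracks/{trackId} {
      // Signed-in users may only read and create their own history entries
      allow read: if request.auth != null
                  && resource.data.userId == request.auth.uid;
      allow create: if request.auth != null
                    && request.resource.data.userId == request.auth.uid;
    }
  }
}
```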
Build the audio recording Custom Action
Add the record package to your FlutterFlow project dependencies (record: ^5.0.0 in Custom Code settings). Create a Custom Action called recordAudioClip that returns a String (the base64-encoded audio data). Inside the action, create an AudioRecorder instance (the class name in record 5.x), check microphone permission with hasPermission(), then start recording with a RecordConfig using the AAC-LC encoder at a 44.1 kHz sample rate. Wait 5 seconds with Future.delayed, then stop the recording with await record.stop(). Read the recorded file's bytes and convert them to base64 with base64Encode, then return the base64 string. On the main recognition page, wire this Custom Action to the Recognize button's On Tap Action Flow and store the returned base64 string in a Page State variable called audioData.
```dart
// Custom Action: recordAudioClip
// Return Type: String (base64-encoded audio)

import 'dart:convert';
import 'dart:io';
import 'package:path_provider/path_provider.dart';
import 'package:record/record.dart';

Future<String> recordAudioClip() async {
  final record = AudioRecorder();

  if (!await record.hasPermission()) {
    throw Exception('Microphone permission denied');
  }

  final dir = await getTemporaryDirectory();
  final path = '${dir.path}/recognition_clip.m4a';

  await record.start(
    const RecordConfig(
      encoder: AudioEncoder.aacLc,
      sampleRate: 44100,
      bitRate: 128000,
    ),
    path: path,
  );

  await Future.delayed(const Duration(seconds: 5));
  final result = await record.stop();
  await record.dispose();

  if (result == null) return '';
  final bytes = await File(result).readAsBytes();
  return base64Encode(bytes);
}
```

Expected result: Tapping the record button captures 5 seconds of audio and stores the base64-encoded data in Page State. Microphone permission is requested on first use.
Deploy the Cloud Function for AudD API recognition
Create a callable Cloud Function named recognizeSong that receives the base64 audio data in its request payload. The function decodes the base64 string back to bytes and sends a multipart POST request to the AudD API endpoint (https://api.audd.io/) with the audio file and your API token. Parse the JSON response to extract the song title, artist, album, and any available Spotify link, and return the parsed result to the FlutterFlow client. Handle the error cases: if the API finds no match, return a structured response with a recognized field set to false; if the API returns an error (invalid token, rate limit), return an appropriate error message. In FlutterFlow, create an API Call definition pointing to your Cloud Function, with the request body containing the audioData base64 string.
```javascript
// Cloud Function: recognizeSong
const functions = require('firebase-functions');
const fetch = require('node-fetch');
const FormData = require('form-data');

exports.recognizeSong = functions.https.onCall(async (data) => {
  const { audioBase64 } = data;
  const token = functions.config().audd.token;
  const audioBuffer = Buffer.from(audioBase64, 'base64');

  const form = new FormData();
  form.append('api_token', token);
  form.append('file', audioBuffer, {
    filename: 'clip.m4a',
    contentType: 'audio/mp4',
  });
  form.append('return', 'spotify');

  const response = await fetch('https://api.audd.io/', {
    method: 'POST',
    body: form,
  });
  const result = await response.json();

  if (result.status === 'success' && result.result) {
    return {
      recognized: true,
      title: result.result.title,
      artist: result.result.artist,
      album: result.result.album,
      albumArtUrl: result.result.spotify?.album?.images?.[0]?.url || '',
      spotifyUrl: result.result.spotify?.external_urls?.spotify || '',
    };
  }
  return { recognized: false };
});
```

Expected result: The Cloud Function accepts base64 audio, calls the AudD API, and returns structured song data including title, artist, album art URL, and Spotify link.
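The response-mapping branch of the function is easy to unit test if you pull it out into a pure helper, so you can verify the optional-chaining fallbacks without calling the AudD API. A sketch (parseAuddResponse is a name introduced here for illustration, not part of the tutorial):

```javascript
// Pure helper mirroring the Cloud Function's response mapping,
// testable without any network call.
function parseAuddResponse(result) {
  if (result && result.status === 'success' && result.result) {
    const r = result.result;
    return {
      recognized: true,
      title: r.title,
      artist: r.artist,
      album: r.album,
      albumArtUrl: r.spotify?.album?.images?.[0]?.url || '',
      spotifyUrl: r.spotify?.external_urls?.spotify || '',
    };
  }
  // Covers status === 'error', a null result (no match), and malformed input
  return { recognized: false };
}

// Example: a success response shaped like the one the function above parses
const hit = parseAuddResponse({
  status: 'success',
  result: {
    title: 'Shape of You',
    artist: 'Ed Sheeran',
    album: 'Divide',
    spotify: {
      album: { images: [{ url: 'https://i.scdn.co/image/abc' }] },
      external_urls: { spotify: 'https://open.spotify.com/track/xyz' },
    },
  },
});
console.log(hit.recognized, hit.title); // true 'Shape of You'

const miss = parseAuddResponse({ status: 'success', result: null });
console.log(miss.recognized); // false
```

Keeping the mapper pure also means the callable wrapper stays a thin shell: decode, upload, map, return.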
Design the recognition page with Lottie listening animation and result card
Create a page called MusicRecognition. Center the layout with a Column (mainAxisAlignment: center). First child: a Lottie Animation widget loaded from a pulse/ripple animation JSON file (upload it to your FlutterFlow assets or use a URL from lottiefiles.com). Set the Lottie widget to 200x200 and wrap it in a Conditional Visibility container that shows only when a Page State variable isListening is true. Below the Lottie, add a large circular Container (width 120, height 120, borderRadius 60) with a solid primary-color background, containing an Icon (Icons.mic, size 48, white). Its On Tap Action Flow:
1. Set isListening to true.
2. Call the recordAudioClip Custom Action and store the result in the audioData Page State variable.
3. Call the recognizeSong API with audioData.
4. Store the API response in Page State variables (title, artist, albumArtUrl, spotifyUrl, recognized).
5. Set isListening to false.
Below the button, add a result card wrapped in Conditional Visibility (shown when recognized is true): a Container with rounded corners holding a Row with an Image widget (Network Image bound to albumArtUrl, 80x80, borderRadius 8) and a Column with the title in titleMedium bold, the artist in bodyMedium secondary, and the album in bodySmall. Below the card, add a Button labeled 'Listen on Spotify' that triggers a Launch URL action with spotifyUrl. Add another Conditional Visibility container for the not-recognized state showing 'Song not recognized — try again' with a Retry button.
Expected result: The page shows a large mic button. During recognition, a Lottie pulse animation plays. On success, a result card displays album art, song title, artist, and a Spotify link. On failure, a retry message appears.
Save recognized tracks and build the listening history page
After a successful recognition (recognized is true), add an action in the Action Flow to create a Firestore document in recognized_tracks with userId set to currentUserUid, title, artist, album, albumArtUrl, spotifyUrl from the Page State variables, and recognizedAt set to the server timestamp. Create a second page called ListeningHistory. Add a Backend Query on the page: query recognized_tracks where userId equals currentUserUid, ordered by recognizedAt descending. Display results in a ListView. Each list item is a Row containing an Image (albumArtUrl, 56x56, rounded), a Column with title in titleSmall and artist in bodySmall, and a trailing Text showing the relative time (use a Custom Function timeAgo that returns strings like '2 min ago', '3 hours ago', 'Yesterday'). On tap of any list item, trigger a Launch URL action with the spotifyUrl. Add a BottomNavigationBar or AppBar action to switch between the MusicRecognition and ListeningHistory pages. Add an empty state widget for users with no history yet: a centered Column with an Icon (Icons.music_off), a Text saying 'No songs recognized yet', and a Button navigating to the recognition page.
```dart
// Custom Function: timeAgo
// Return Type: String
// Parameters: timestamp (DateTime)

String timeAgo(DateTime timestamp) {
  final now = DateTime.now();
  final diff = now.difference(timestamp);
  if (diff.inMinutes < 1) return 'Just now';
  if (diff.inMinutes < 60) return '${diff.inMinutes} min ago';
  if (diff.inHours < 24) return '${diff.inHours} hours ago';
  if (diff.inDays < 7) return '${diff.inDays} days ago';
  return '${(diff.inDays / 7).floor()} weeks ago';
}
```

Expected result: Every recognized song is saved to Firestore. The ListeningHistory page shows a scrollable list of past recognitions with album art, song details, and relative timestamps.
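If you ever need the same relative-time logic server-side (for example, to precompute a display string inside the Cloud Function), the thresholds translate directly to JavaScript. A sketch using the same breakpoints as the Dart Custom Function; the second parameter exists only to make the function testable with a fixed clock:

```javascript
// Same thresholds as the Dart timeAgo Custom Function.
// `now` defaults to the current time; pass a fixed Date for testing.
function timeAgo(timestamp, now = new Date()) {
  const diffMs = now - timestamp;
  const mins = Math.floor(diffMs / 60000);
  const hours = Math.floor(diffMs / 3600000);
  const days = Math.floor(diffMs / 86400000);
  if (mins < 1) return 'Just now';
  if (mins < 60) return `${mins} min ago`;
  if (hours < 24) return `${hours} hours ago`;
  if (days < 7) return `${days} days ago`;
  return `${Math.floor(days / 7)} weeks ago`;
}

const ref = new Date('2024-01-15T12:00:00Z');
console.log(timeAgo(new Date('2024-01-15T11:58:00Z'), ref)); // 2 min ago
console.log(timeAgo(new Date('2024-01-15T09:00:00Z'), ref)); // 3 hours ago
```

Like the Dart version, this does not singularize units ('1 hours ago'); extend both together if you want that polish.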
Complete working example
```
Firestore Data Model:
└── recognized_tracks/{auto-id}
    ├── userId: String
    ├── title: String (Shape of You)
    ├── artist: String (Ed Sheeran)
    ├── album: String (Divide)
    ├── albumArtUrl: String (https://i.scdn.co/...)
    ├── spotifyUrl: String (https://open.spotify.com/track/...)
    └── recognizedAt: Timestamp

Cloud Function:
  recognizeSong — HTTP callable, accepts base64 audio,
  calls AudD API, returns {recognized, title, artist, album,
  albumArtUrl, spotifyUrl}

Custom Action:
  recordAudioClip — record 5s audio via record package,
  return base64-encoded string

Custom Functions:
  timeAgo(timestamp) → relative time string

Page: MusicRecognition
├── Column (center)
│   ├── LottieAnimation (pulse ripple, 200x200)
│   │   └── Conditional Visibility: isListening == true
│   ├── Container (circular, 120x120, primary color)
│   │   └── Icon (mic, 48, white)
│   │       └── On Tap:
│   │           1. Set isListening = true
│   │           2. recordAudioClip → audioData
│   │           3. API Call: recognizeSong(audioData)
│   │           4. Store response in Page State
│   │           5. Set isListening = false
│   │           6. If recognized → Create recognized_tracks doc
│   ├── Result Card (Conditional: recognized == true)
│   │   └── Row
│   │       ├── Image (albumArtUrl, 80x80)
│   │       └── Column: title + artist + album
│   ├── Button (Listen on Spotify → Launch URL)
│   └── Not Recognized (Conditional: recognized == false)
│       └── Text + Retry Button

Page: ListeningHistory
├── Backend Query: recognized_tracks, userId == currentUser,
│   orderBy recognizedAt DESC
├── ListView
│   └── Row: Image (56x56) + Column (title, artist) + timeAgo
│       └── On Tap → Launch URL (spotifyUrl)
└── Empty State: Icon + Text + Navigate to Recognition
```
Common mistakes
Mistake: Recording more than 10 seconds of audio for recognition
How to avoid: Record exactly 5 seconds for the optimal balance of recognition accuracy and speed. The AudD API documentation recommends 3-10 seconds, and 5 seconds consistently produces accurate matches.
Mistake: Calling the AudD API directly from FlutterFlow client code
How to avoid: Always route the API call through a Cloud Function. The Cloud Function stores the API token as a server-side environment variable that never reaches the client. FlutterFlow calls your Cloud Function, which calls AudD.
Mistake: Not handling microphone permission denial gracefully
How to avoid: Check hasPermission() on the recorder before starting. If denied, show a friendly SnackBar explaining that microphone access is required and provide a button that opens the device settings using a Launch URL action with the app settings deep link.
Mistake: Not showing any feedback during the 5-second recording and API call
How to avoid: Use the isListening Page State variable to show the Lottie pulse animation during recording and to disable the mic button. Add a Text widget below the animation showing 'Listening...' to make the state explicit.
Best practices
- Record exactly 5 seconds of audio for the optimal speed/accuracy tradeoff with music recognition APIs
- Always route API calls through a Cloud Function to keep your AudD or ACRCloud API token server-side
- Use a Lottie animation for the listening state — it provides clear visual feedback and looks professional
- Save every successful recognition to Firestore immediately so the user never loses a result
- Handle the not-recognized case with a friendly message and prominent retry button instead of a generic error
- Add an empty state to the listening history page for new users who have not recognized any songs yet
- Use the timeAgo Custom Function for human-readable timestamps instead of raw date-time values
- Test with real ambient music, not silence — the API returns no match for quiet recordings
Still stuck?
Copy one of these prompts to get a personalized, step-by-step explanation.
Write a Firebase Cloud Function that accepts a base64-encoded audio clip, sends it to the AudD music recognition API (https://api.audd.io/) as a multipart form upload with the Spotify return option, and returns the song title, artist, album, album art URL, and Spotify URL. Handle the case where no song is recognized by returning {recognized: false}. Also write a Dart function that records 5 seconds of audio using the record package and returns the base64-encoded result.
Create a music recognition page with a large circular button in the center containing a microphone icon. When tapped, show a pulsing Lottie animation around the button for 5 seconds, then display a result card below with an album art image on the left and song title, artist name, and album name on the right. Add a Listen on Spotify button below the card. Include a second page showing a scrollable list of previously recognized songs with album art thumbnails and timestamps.
Frequently asked questions
How much does the AudD API cost?
AudD offers a free tier with 300 requests per month, which is sufficient for development and small apps. Paid plans start at $7/month for 3,000 requests. ACRCloud is an alternative with a similar free tier. For most consumer apps, the free tier covers testing and the first paid tier handles moderate production traffic.
Can I use ACRCloud instead of AudD?
Yes. ACRCloud works similarly — you send audio bytes and receive song metadata. The Cloud Function would call ACRCloud's identify endpoint instead of AudD's. The response format differs, so you would adjust the JSON parsing. ACRCloud generally has slightly better recognition accuracy for non-English music.
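As a rough illustration of how the parsing would differ, here is a sketch that maps an ACRCloud-style identify response into the same shape the FlutterFlow page binds to. The field paths used here (status.code, metadata.music, artists, external_metadata) reflect the general shape of ACRCloud responses but are assumptions; verify them against ACRCloud's current response reference before relying on this:

```javascript
// Hypothetical mapper: converts an ACRCloud identify response into the
// same {recognized, title, artist, ...} object the app already expects.
// Field paths are assumptions; check ACRCloud's response docs.
function parseAcrCloudResponse(result) {
  const hit = result?.metadata?.music?.[0];
  if (result?.status?.code !== 0 || !hit) return { recognized: false };
  const spotifyId = hit.external_metadata?.spotify?.track?.id;
  return {
    recognized: true,
    title: hit.title,
    artist: (hit.artists || []).map((a) => a.name).join(', '),
    album: hit.album?.name || '',
    // ACRCloud does not return album art directly; resolve it via the
    // Spotify API from the track id if you need artwork.
    albumArtUrl: '',
    spotifyUrl: spotifyId
      ? `https://open.spotify.com/track/${spotifyId}`
      : '',
  };
}
```

Because the rest of the app only sees the mapped object, swapping providers is a change confined to the Cloud Function.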
Does the recording work on both iOS and Android?
Yes, the record package supports both platforms. On iOS, you need to add the NSMicrophoneUsageDescription key to your Info.plist (FlutterFlow handles this in Settings). On Android, the RECORD_AUDIO permission must be declared in AndroidManifest.xml. FlutterFlow adds this automatically when you use audio-related packages.
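For reference, the two platform entries look like this in the exported project (FlutterFlow normally injects them for you; shown in case you export the code or need to verify a permission issue):

```xml
<!-- iOS: ios/Runner/Info.plist -->
<key>NSMicrophoneUsageDescription</key>
<string>We use the microphone to identify songs playing around you.</string>

<!-- Android: android/app/src/main/AndroidManifest.xml -->
<uses-permission android:name="android.permission.RECORD_AUDIO" />
```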
What happens if the environment is too noisy for recognition?
Music recognition APIs are designed to work in noisy environments — they use audio fingerprinting that is robust to background noise. However, if the music is too quiet relative to ambient noise (e.g., a loud cafe with soft background music), recognition may fail. Show the not-recognized state with a tip suggesting the user move closer to the audio source.
Can I add a preview playback of the recognized song?
Yes, if the API returns a preview URL (Spotify provides 30-second preview URLs for most tracks). Add an AudioPlayer Custom Widget or use FlutterFlow's built-in audio playback action to play the preview URL. Display a play/pause button on the result card.
Can RapidDev help build a music app with recognition features?
Yes. A full music app with recognition, playlist management, social sharing, and streaming integration requires custom audio processing, multiple API integrations, and real-time features. RapidDev can architect and build the complete audio pipeline and backend infrastructure beyond what FlutterFlow's visual builder handles alone.
Talk to an Expert
Our team has built 600+ apps. Get personalized help with your project.
Book a free consultation