RapidDev - Software Development Agency

How to Set Up a User-Generated Content Moderation System in FlutterFlow


What you'll learn

  • How to implement a moderation status workflow for user-submitted content
  • How to auto-screen content for toxicity using Cloud Functions and the Perspective API
  • How to build a moderator review queue with approve, reject, and flag actions
  • How to add community reporting and user appeal workflows
Beginner · 8 min read · 25-30 min · FlutterFlow Free+ · March 2026 · RapidDev Engineering Team
TL;DR

Create a content moderation pipeline where all user submissions start with a pending status, undergo automated toxicity screening via a Cloud Function calling the Perspective API, and then appear in a moderator review queue. Add community reporting with a Report button that auto-flags content after a configurable number of reports. Include an appeal workflow so users can contest rejected content with a text explanation.

Building a Content Moderation System in FlutterFlow

Any app with user-generated content needs moderation to prevent toxic, spammy, or inappropriate material. This tutorial builds a three-layer system: automated AI pre-screening, a moderator review queue, and community reporting. Content starts as pending, gets screened, and only displays publicly after approval.

Prerequisites

  • A FlutterFlow project with Firebase authentication enabled
  • Firestore database with an existing content collection (posts, comments, etc.)
  • Cloud Functions enabled on the Firebase Blaze plan
  • A Google Cloud Perspective API key for toxicity detection (free tier available)

Step-by-step guide

1

Add moderation status fields to your content collection

On your existing content collection (posts, comments, reviews, etc.), add a moderationStatus field (String: pending/approved/rejected/flagged) with a default value of 'pending'. Add moderatedBy (String, moderator UID), moderatedAt (Timestamp), and rejectionReason (String). Update all public-facing queries to filter by moderationStatus == 'approved' — this ensures only moderated content appears in feeds and listings. In FlutterFlow, update your Backend Queries on ListViews and GridViews to include this filter. Content creators should see their own pending content with a 'Pending Review' badge.

Expected result: All new content defaults to pending status and only approved content appears in public feeds.
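The visibility rule above can be sketched as a small helper. The field names (moderationStatus, authorId) are assumptions matching the schema described in this step; adjust them to your collection.

```javascript
// Sketch: which posts a given viewer should see in a public feed.
// Approved content is visible to everyone; authors also see their own
// pending items (shown with a 'Pending Review' badge in the UI).
function visiblePosts(posts, viewerUid) {
  return posts.filter(
    (p) => p.moderationStatus === 'approved' || p.authorId === viewerUid
  );
}

// Decide whether to render the 'Pending Review' badge on a list item.
function needsPendingBadge(post, viewerUid) {
  return post.authorId === viewerUid && post.moderationStatus === 'pending';
}
```

In FlutterFlow this logic lives in the Backend Query filter rather than client-side code, but the same conditions apply.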

2

Create the automated toxicity screening Cloud Function

Create a Cloud Function triggered by Firestore onCreate on your content collection. The function reads the content text, calls the Google Perspective API with the TOXICITY attribute, and checks the summary score. If the toxicity score exceeds your threshold (e.g., 0.85), auto-set moderationStatus to 'rejected' with rejectionReason 'Automated: content flagged as potentially toxic'. If the score is moderate (0.5-0.85), set to 'flagged' for human review. If below 0.5, set to 'approved'. Log the toxicity score on the document for moderator reference. This pre-screens most content automatically, reducing the moderator workload.

functions/index.js
const functions = require('firebase-functions');
const admin = require('firebase-admin');
const fetch = require('node-fetch');
admin.initializeApp();

const API_KEY = functions.config().perspective.key;

exports.screenContent = functions.firestore
  .document('posts/{postId}')
  .onCreate(async (snap) => {
    const post = snap.data();
    const text = post.content || post.body || '';
    // No text to screen; leave the default 'pending' status for manual review
    if (!text) return null;

    // Ask the Perspective API for a toxicity score on the new content
    const response = await fetch(
      `https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze?key=${API_KEY}`,
      {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          comment: { text },
          languages: ['en'],
          requestedAttributes: { TOXICITY: {} },
        }),
      }
    );
    const result = await response.json();
    const score = result.attributeScores?.TOXICITY?.summaryScore?.value || 0;

    // Route the content based on the thresholds from the tutorial
    let status = 'approved';
    let reason = null;
    if (score >= 0.85) {
      status = 'rejected';
      reason = 'Automated: high toxicity score';
    } else if (score >= 0.5) {
      status = 'flagged';
    }

    // Persist the decision and the raw score for moderator reference
    return snap.ref.update({
      moderationStatus: status,
      toxicityScore: score,
      ...(reason && { rejectionReason: reason }),
    });
  });

Expected result: New content is automatically screened. Highly toxic content is auto-rejected, borderline content is flagged for review, and clean content is auto-approved.
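The routing decision inside the function is easiest to reason about (and test) in isolation. This is a sketch of that logic with the tutorial's thresholds as defaults:

```javascript
// Sketch: the toxicity-routing logic from the Cloud Function, isolated
// from Firestore and the HTTP call. Thresholds mirror the tutorial's
// values: >= 0.85 auto-reject, >= 0.5 flag for human review.
function classifyToxicity(score, rejectAt = 0.85, flagAt = 0.5) {
  if (score >= rejectAt) {
    return { status: 'rejected', reason: 'Automated: high toxicity score' };
  }
  if (score >= flagAt) {
    return { status: 'flagged', reason: null };
  }
  return { status: 'approved', reason: null };
}
```

Keeping the thresholds as parameters makes it easy to tune them later without touching the screening function itself.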

3

Build the moderator review queue page

Create an admin-only page called ModerationQueue. Add a ChoiceChips filter at the top with options: Pending, Flagged, Rejected, All. Below it, add a ListView bound to a Backend Query on the content collection filtered by moderationStatus matching the selected chip. Each list item shows: the content text (truncated to 2 lines), author name, submission timestamp, toxicity score if available, and a Row of three Buttons — Approve (green), Reject (red), and Flag (orange). On Approve tap, update moderationStatus to 'approved', set moderatedBy to the current admin's UID, and moderatedAt to now. On Reject tap, show a dialog with a DropDown for rejection reason (Spam, Hate Speech, Inappropriate, Off Topic) and a TextField for notes, then update the document. Add a Badge in your admin nav showing the count of pending + flagged items.

Expected result: Moderators see a queue of content needing review, can approve or reject with one tap, and the queue count updates in real time.
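The update written by the Approve and Reject buttons can be sketched as a payload builder. This is an illustration, not FlutterFlow's generated code; a real Cloud Firestore write would use a server timestamp rather than a client-side Date.

```javascript
// Sketch: build the Firestore update payload for a moderator action.
// action is one of 'approved' | 'rejected' | 'flagged'.
function moderationUpdate(action, moderatorUid, { reason, notes } = {}) {
  const base = {
    moderationStatus: action,
    moderatedBy: moderatorUid,
    // Illustrative only; use FieldValue.serverTimestamp() in production
    moderatedAt: new Date().toISOString(),
  };
  if (action === 'rejected') {
    base.rejectionReason = reason || 'Unspecified';
    if (notes) base.moderatorNotes = notes;
  }
  return base;
}
```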

4

Add community reporting with auto-flagging threshold

On every piece of user content in your app, add a Report button (flag icon). On tap, show a BottomSheet with a DropDown for reason (Spam, Harassment, Misinformation, Inappropriate, Other) and an optional TextField for details. On submit, create a document in a `reports` subcollection under the content document with: reporterId (current user UID), reason, details, timestamp. Also increment a reportCount field on the content document using FieldValue.increment(1). Create a Cloud Function that triggers on report creation: if the content's reportCount exceeds a threshold (e.g., 3), automatically set moderationStatus to 'flagged' so it enters the moderator queue. Prevent duplicate reports by checking if the current user already has a report document.

Expected result: Users can report content with a reason. After 3 reports from different users, the content is automatically flagged for moderator review.
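The auto-flag decision from this step boils down to one condition, sketched here with the tutorial's threshold of 3 as the default:

```javascript
// Sketch: should a report push this content into the moderator queue?
// Only approved (publicly visible) content is demoted to 'flagged';
// pending or rejected items are already outside public feeds.
function shouldAutoFlag(reportCount, moderationStatus, threshold = 3) {
  return reportCount >= threshold && moderationStatus === 'approved';
}
```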

5

Implement the user appeal workflow for rejected content

When a user views their own rejected content, show the rejectionReason in a red Container and an Appeal button below it. On tap, open a dialog with a TextField where the user explains why the rejection should be reconsidered. On submit, update the content document: set appealText to the user's explanation and appealStatus to 'pending_appeal'. In the moderator queue, add a filter for items with appealStatus == 'pending_appeal'. Show the original content, the rejection reason, and the user's appeal text. Moderators can then Approve (override rejection) or Deny Appeal (confirm rejection, set appealStatus to 'denied'). Limit appeals to one per content item to prevent spam.

Expected result: Rejected content authors can submit one appeal with an explanation. Moderators see appeals in a separate queue section and can override or confirm the rejection.
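The one-appeal-per-item rule can be sketched as a pair of helpers. The field names (moderationStatus, appealStatus, appealText) are the ones introduced in this step.

```javascript
// Sketch: a user may appeal only rejected content with no prior appeal.
function canAppeal(post) {
  return (
    post.moderationStatus === 'rejected' &&
    post.appealStatus === undefined // no appeal recorded yet
  );
}

// The fields written to the content document when an appeal is submitted.
function buildAppeal(explanation) {
  return { appealText: explanation, appealStatus: 'pending_appeal' };
}
```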

Complete working example

Cloud Function — Toxicity Screening + Auto-Flag on Reports
// Cloud Function 1: Auto-screen new content
const functions = require('firebase-functions');
const admin = require('firebase-admin');
const fetch = require('node-fetch');
admin.initializeApp();

const PERSPECTIVE_KEY = functions.config().perspective.key;
const TOXICITY_REJECT = 0.85;
const TOXICITY_FLAG = 0.5;
const REPORT_THRESHOLD = 3;

exports.screenContent = functions.firestore
  .document('posts/{postId}')
  .onCreate(async (snap) => {
    const text = snap.data().content || '';
    // Text-free submissions (e.g. image-only posts) skip toxicity screening
    if (!text.trim()) {
      return snap.ref.update({ moderationStatus: 'approved' });
    }

    // Score the content with the Perspective API
    const res = await fetch(
      `https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze?key=${PERSPECTIVE_KEY}`,
      {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          comment: { text },
          languages: ['en'],
          requestedAttributes: { TOXICITY: {}, SPAM: {} },
        }),
      }
    );
    const result = await res.json();
    const toxicity = result.attributeScores?.TOXICITY?.summaryScore?.value || 0;

    // Route the content based on the configured thresholds
    let status = 'approved';
    let reason = null;
    if (toxicity >= TOXICITY_REJECT) {
      status = 'rejected';
      reason = `Auto-rejected: toxicity ${(toxicity * 100).toFixed(0)}%`;
    } else if (toxicity >= TOXICITY_FLAG) {
      status = 'flagged';
    }

    return snap.ref.update({
      moderationStatus: status,
      toxicityScore: toxicity,
      ...(reason && { rejectionReason: reason }),
    });
  });

// Cloud Function 2: Auto-flag once the report threshold is reached
exports.onReport = functions.firestore
  .document('posts/{postId}/reports/{reportId}')
  .onCreate(async (snap, context) => {
    const postRef = admin.firestore()
      .collection('posts').doc(context.params.postId);

    // Count this report atomically on the parent post
    await postRef.update({
      reportCount: admin.firestore.FieldValue.increment(1),
    });

    // Re-read the post and flag it once it crosses the threshold
    const postDoc = await postRef.get();
    const post = postDoc.data();

    if (post.reportCount >= REPORT_THRESHOLD
        && post.moderationStatus === 'approved') {
      await postRef.update({ moderationStatus: 'flagged' });
    }
  });

Common mistakes

The mistake: Showing user-submitted content publicly before moderation review

How to avoid: Set moderationStatus to 'pending' by default on all new content. Only display content where moderationStatus equals 'approved' in all public-facing queries.

The mistake: Relying only on UI hiding without backend enforcement

How to avoid: Add Firestore Security Rules that restrict read access for non-authors: allow read if resource.data.moderationStatus == 'approved' OR request.auth.uid == resource.data.authorId.
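That rule can be sketched in Firestore Security Rules syntax. The collection name `posts` and the field `authorId` are assumptions to match the tutorial's schema:

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /posts/{postId} {
      // Public readers only see approved content; authors see their own
      allow read: if resource.data.moderationStatus == 'approved'
                  || request.auth.uid == resource.data.authorId;
    }
  }
}
```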

The mistake: Not preventing duplicate reports from the same user

How to avoid: Before creating a report document, query the reports subcollection for existing reports by the current user. If one exists, show a message that they have already reported this content.
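Once the reports subcollection has been fetched, the duplicate check is a one-liner. This is a sketch over a plain array; reporterId is the field name used in Step 4.

```javascript
// Sketch: has the current user already reported this content?
// reports is the already-fetched list of report documents.
function hasAlreadyReported(reports, currentUid) {
  return reports.some((r) => r.reporterId === currentUid);
}
```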

Best practices

  • Default all new content to pending status and only show approved content publicly
  • Use automated screening to handle the majority of content, reserving human review for borderline cases
  • Show toxicity scores in the moderator queue to help prioritize review decisions
  • Provide specific rejection reasons so content creators understand why their content was removed
  • Limit appeals to one per content item to prevent appeal spam
  • Track moderation metrics (approval rate, average review time, appeal success rate) for quality monitoring
  • Send notifications to content authors when their content is approved, rejected, or when an appeal is resolved

Still stuck?

Copy one of these prompts to get a personalized, step-by-step explanation.

ChatGPT Prompt

I'm building a content moderation system in FlutterFlow. I need automated toxicity screening using the Perspective API in a Cloud Function, a moderator review queue with approve/reject/flag buttons, community reporting that auto-flags after 3 reports, and a user appeal workflow. Show me the Firestore schema, Cloud Function code, and FlutterFlow page layout.

FlutterFlow Prompt

Create a moderation queue page with ChoiceChips (Pending, Flagged, Rejected, All) and a ListView of content items. Each item shows text preview, author, and three buttons: Approve (green), Reject (red with reason dialog), and Flag (orange).

Frequently asked questions

What is the Perspective API and is it free?

The Perspective API by Google Jigsaw analyzes text for toxicity, spam, and other attributes. It has a free tier that supports moderate traffic. Apply for an API key at perspectiveapi.com.

Can I moderate images and videos, not just text?

Yes. Use Google Cloud Vision API for image moderation (detects adult content, violence) and Google Video Intelligence API for video. Call them from the same Cloud Function pattern used for text screening.

How do I handle false positives from automated screening?

Set the auto-reject threshold high (0.85 or above) to minimize false positives. Content scoring between 0.5 and 0.85 should be flagged for human review rather than auto-rejected. The appeal workflow lets users contest incorrect rejections.

Should I notify users when their content is rejected?

Yes. Send an in-app notification or email with the rejection reason and a link to appeal. Transparency builds trust and helps users understand community guidelines.

How many reports should trigger auto-flagging?

Start with 3 reports as the threshold and adjust based on your community size and false-flag rate. Larger communities may need a higher threshold to prevent coordinated flagging campaigns.

Can RapidDev help build an enterprise content moderation system?

Yes. RapidDev can implement multi-language moderation, image and video screening, moderator performance dashboards, escalation workflows, and automated compliance reporting.
