To integrate Replit with AWS S3, install the AWS SDK v3, store your IAM access key, secret key, and bucket region in Replit Secrets (lock icon 🔒), and use the S3Client to upload, download, and generate presigned URLs. Presigned URLs let users upload directly to S3 from their browser without routing files through your Replit server. Use an Autoscale deployment and create a least-privilege IAM policy scoped to only the buckets your app needs.
AWS S3 File Storage for Replit Applications
AWS S3 is the de facto standard for storing files in web applications: user avatars, document uploads, generated exports, backup archives, static assets. It offers virtually unlimited storage, 11 nines of durability, and programmatic access via a well-documented API with SDKs for every major language. For Replit developers, S3 solves the persistent storage problem: uploaded user files on Replit's file system do not survive redeploys, and storing binaries in a database is inefficient. S3 keeps files reliably available regardless of what happens to your Replit containers.
The integration has two main patterns. The server-side pattern routes files through your Replit server: the client uploads to your Express/Flask endpoint, your server streams the file to S3. This is simple but means every uploaded byte passes through your Replit container, consuming memory and bandwidth. For files larger than a few megabytes, this becomes a bottleneck. The presigned URL pattern is better for production: your server generates a temporary signed URL that allows the client to upload directly to S3 without going through your server. Your Replit app only handles the lightweight URL generation request, not the actual file transfer.
Security is handled through AWS IAM. You create a dedicated IAM user for your Replit app with a policy that grants access only to the specific S3 bucket(s) your app uses. The IAM access key and secret, stored in Replit Secrets, prove to AWS that requests come from your application. Never use your AWS root account credentials or overly broad IAM policies. A breach of your Replit Secrets should have minimal blast radius if IAM permissions are properly scoped.
Integration method
AWS S3 integrates with Replit through the AWS SDK v3 for JavaScript or boto3 for Python. Your Replit server authenticates to AWS using IAM access credentials stored in Replit Secrets, then makes S3 API calls to upload objects, generate presigned URLs, list buckets, and serve file downloads. The recommended pattern for user file uploads is to generate presigned PUT URLs server-side and return them to the client: this lets users upload files directly to S3 without the file passing through your Replit server, which avoids memory limits and bandwidth costs.
Prerequisites
- A Replit account with a Node.js or Python Repl ready
- An AWS account with billing enabled (S3 charges start at $0.023/GB/month, free tier includes 5GB)
- An S3 bucket created in a specific AWS region
- An IAM user with programmatic access and an S3-scoped policy (instructions in Step 1)
Step-by-step guide
Create an IAM User and S3 Bucket
Before writing any code, create the AWS resources your Replit app needs. Never use your AWS root account or an admin IAM user for application credentials: the principle of least privilege limits damage if keys are compromised.

Create an S3 bucket: log into the AWS Console, navigate to S3, and click 'Create bucket'. Choose a globally unique name (e.g., myapp-uploads-2026), select your preferred region (ideally one close to your users), and leave Block Public Access enabled for now. Click 'Create bucket'.

Create an IAM user: navigate to IAM → Users → Create User. Give it a name like replit-myapp-s3. On the permissions step, choose 'Attach policies directly' → 'Create policy'. In the JSON editor, paste a minimal policy that grants only what your app needs; the policy below grants read/write access to one specific bucket only. Name the policy (e.g., ReplitMyappS3ReadWrite) and save it. Attach this policy to your new IAM user.

Create access keys: click the new IAM user → Security credentials → Create access key. Select 'Application running outside AWS' as the use case. AWS shows you the Access Key ID and Secret Access Key once; copy both immediately. The Secret Access Key cannot be retrieved again after you close this dialog.

With your bucket name, region, access key ID, and secret access key in hand, you are ready to configure Replit Secrets.
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListBucket",
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME"
    },
    {
      "Sid": "ReadWriteObjects",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/*"
    }
  ]
}
```

Pro tip: Replace YOUR-BUCKET-NAME with your actual bucket name in the IAM policy. This policy grants access to exactly one bucket. If your app needs multiple buckets, add each bucket as a separate statement.
Expected result: An S3 bucket exists in your chosen region. An IAM user exists with a policy granting only S3 read/write access to that bucket. You have copied the Access Key ID and Secret Access Key.
Store AWS Credentials in Replit Secrets
AWS credentials are the most sensitive secrets in your Replit app: anyone with these keys can access your S3 bucket and, if the IAM policy is too broad, potentially other AWS resources. They must live in Replit Secrets and never appear in source code. Click the lock icon (🔒) in the left Replit sidebar to open the Secrets pane. Add the following four secrets:
- AWS_ACCESS_KEY_ID: your IAM user's Access Key ID (starts with AKIA...)
- AWS_SECRET_ACCESS_KEY: your IAM user's Secret Access Key (long alphanumeric string)
- AWS_REGION: your S3 bucket's region (e.g., us-east-1, eu-west-1, ap-southeast-1)
- AWS_S3_BUCKET: your S3 bucket name (e.g., myapp-uploads-2026)
The AWS SDK reads AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from environment variables automatically by convention, so you do not need to pass them explicitly to the SDK constructor. AWS_REGION and AWS_S3_BUCKET are read in your code using the standard process.env / os.environ pattern. One important note: Replit's Secret Scanner monitors for known AWS key ID patterns (AKIA...) in code files and will alert you if it detects one. This is a safety net, not a replacement for using Secrets from the start.
```javascript
// Verify all AWS secrets are present at startup
const required = ['AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY', 'AWS_REGION', 'AWS_S3_BUCKET'];
for (const key of required) {
  if (!process.env[key]) {
    throw new Error(`Missing required secret: ${key}. Add it in Replit Secrets (lock icon 🔒).`);
  }
}
console.log('AWS credentials verified.');
console.log('Region:', process.env.AWS_REGION);
console.log('Bucket:', process.env.AWS_S3_BUCKET);
```

Pro tip: The AWS SDK v3 automatically reads credentials from the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables. You do not need to pass them explicitly to S3Client; just set the region.
Expected result: All four secrets appear in the Replit Secrets pane. The verification script prints the region and bucket name without errors.
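For Python Repls, the same startup check can be written against os.environ; a minimal sketch, assuming the same four secret names as above:

```python
import os

REQUIRED_SECRETS = ['AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY', 'AWS_REGION', 'AWS_S3_BUCKET']

def missing_aws_secrets(env=None):
    """Return the names of any required AWS secrets absent from the environment."""
    env = os.environ if env is None else env
    return [key for key in REQUIRED_SECRETS if not env.get(key)]

missing = missing_aws_secrets()
if missing:
    print(f"Missing required secrets: {', '.join(missing)}. Add them in Replit Secrets.")
else:
    print('AWS credentials verified.')
```

Running this at the top of your entry point fails fast with a readable message instead of a cryptic boto3 error on the first S3 call.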
Upload and Download Files with AWS SDK v3 (Node.js)
The AWS SDK v3 for JavaScript is a modular redesign that imports only the clients and commands you need, reducing bundle size significantly. Install it in the Shell: npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner. The S3Client is initialized once at module level with the region. Credentials are automatically picked up from the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables, so no explicit credential configuration is needed in code.

Core operations are command objects: PutObjectCommand for uploads, GetObjectCommand for downloads, DeleteObjectCommand for deletes, ListObjectsV2Command for listing. You call client.send(command) with each. For presigned URLs, the getSignedUrl function from @aws-sdk/s3-request-presigner generates a temporary URL that lets anyone perform one specific S3 operation (GET or PUT) without AWS credentials. Presigned PUT URLs are the recommended way to handle user file uploads from browsers: your server generates the URL, returns it to the client, and the client PUTs the file directly to S3. The URL expires after your specified time window (300 seconds, i.e., 5 minutes, is common for uploads). For production file uploads, always validate the content type and file size on the server before generating the presigned URL to prevent users from uploading executable files or extremely large objects.
```javascript
// s3.js - AWS S3 operations with SDK v3 for Node.js on Replit
const { S3Client, PutObjectCommand, GetObjectCommand, DeleteObjectCommand } = require('@aws-sdk/client-s3');
const { getSignedUrl } = require('@aws-sdk/s3-request-presigner');
const express = require('express');

// SDK reads AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from env automatically
const s3 = new S3Client({ region: process.env.AWS_REGION });
const BUCKET = process.env.AWS_S3_BUCKET;

const app = express();
app.use(express.json());

// Upload a file buffer to S3
async function uploadToS3(key, buffer, contentType) {
  const command = new PutObjectCommand({
    Bucket: BUCKET,
    Key: key, // e.g., 'uploads/user-123/photo.jpg'
    Body: buffer,
    ContentType: contentType
  });
  await s3.send(command);
  return `https://${BUCKET}.s3.${process.env.AWS_REGION}.amazonaws.com/${key}`;
}

// Generate a presigned PUT URL for direct browser uploads
async function getPresignedUploadUrl(key, contentType, expiresInSeconds = 300) {
  const command = new PutObjectCommand({
    Bucket: BUCKET,
    Key: key,
    ContentType: contentType
  });
  return getSignedUrl(s3, command, { expiresIn: expiresInSeconds });
}

// Generate a presigned GET URL for private file access
async function getPresignedDownloadUrl(key, expiresInSeconds = 3600) {
  const command = new GetObjectCommand({ Bucket: BUCKET, Key: key });
  return getSignedUrl(s3, command, { expiresIn: expiresInSeconds });
}

// API endpoint: request a presigned upload URL
app.post('/api/upload-url', async (req, res) => {
  const { filename, contentType } = req.body;

  // Validate inputs
  const allowedTypes = ['image/jpeg', 'image/png', 'image/gif', 'application/pdf'];
  if (!allowedTypes.includes(contentType)) {
    return res.status(400).json({ error: 'Content type not allowed' });
  }

  // Generate a unique key to avoid filename collisions
  const key = `uploads/${Date.now()}-${filename.replace(/[^a-zA-Z0-9._-]/g, '_')}`;

  try {
    const uploadUrl = await getPresignedUploadUrl(key, contentType);
    const fileUrl = `https://${BUCKET}.s3.${process.env.AWS_REGION}.amazonaws.com/${key}`;
    res.json({ uploadUrl, fileUrl, key });
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

// API endpoint: get a presigned download URL for a private file
app.get('/api/download-url/:key(*)', async (req, res) => {
  try {
    const downloadUrl = await getPresignedDownloadUrl(req.params.key);
    res.json({ downloadUrl });
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

app.listen(3000, '0.0.0.0', () => console.log('S3 server running on port 3000'));
```

Pro tip: Sanitize the S3 key (object path) before using user-provided filenames. Replace special characters and path separators to prevent path traversal attacks. Prefix keys with a user ID or folder to organize files and prevent naming collisions.
Expected result: POST /api/upload-url with {filename: 'photo.jpg', contentType: 'image/jpeg'} returns a presigned PUT URL and the final file URL. The URL starts with https://your-bucket.s3.region.amazonaws.com/. Files uploaded to the presigned URL appear in your S3 bucket within seconds.
S3 Operations with Python (boto3)
For Python Replit projects, boto3 is the official AWS SDK. Install it with pip install boto3 in the Shell tab. Like the Node.js SDK, boto3 reads AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from environment variables automatically, so no explicit credential passing is needed. boto3's client interface maps directly to S3 API operations: put_object, get_object, delete_object, generate_presigned_url. The resource interface (boto3.resource('s3')) provides a slightly higher-level abstraction for object operations.

The generate_presigned_url method creates time-limited URLs for any S3 operation: pass 'put_object' as the client method for uploads and 'get_object' for downloads. The ExpiresIn parameter is in seconds. The Flask pattern below creates one boto3 client at module level (boto3 clients are thread-safe) and reuses it across requests. For async Python frameworks like FastAPI, use aiobotocore or run boto3 calls in a thread pool executor to avoid blocking the event loop. For large file uploads in Python, consider boto3's multipart upload support via upload_fileobj() with a TransferConfig, which automatically splits large files into chunks and uploads them in parallel for better performance.
```python
# s3_utils.py - AWS S3 operations with boto3 for Python on Replit
import boto3
import os
from datetime import datetime
from flask import Flask, request, jsonify

# boto3 reads AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from env automatically
s3_client = boto3.client('s3', region_name=os.environ['AWS_REGION'])
BUCKET = os.environ['AWS_S3_BUCKET']

app = Flask(__name__)

def upload_to_s3(key: str, file_data: bytes, content_type: str) -> str:
    """Upload bytes to S3 and return the public URL."""
    s3_client.put_object(
        Bucket=BUCKET,
        Key=key,
        Body=file_data,
        ContentType=content_type
    )
    region = os.environ['AWS_REGION']
    return f'https://{BUCKET}.s3.{region}.amazonaws.com/{key}'

def get_presigned_upload_url(key: str, content_type: str, expires: int = 300) -> str:
    """Generate a presigned PUT URL for direct browser uploads."""
    return s3_client.generate_presigned_url(
        'put_object',
        Params={'Bucket': BUCKET, 'Key': key, 'ContentType': content_type},
        ExpiresIn=expires
    )

def get_presigned_download_url(key: str, expires: int = 3600) -> str:
    """Generate a presigned GET URL for secure file downloads."""
    return s3_client.generate_presigned_url(
        'get_object',
        Params={'Bucket': BUCKET, 'Key': key},
        ExpiresIn=expires
    )

@app.route('/api/upload-url', methods=['POST'])
def request_upload_url():
    data = request.get_json()
    filename = data.get('filename', '')
    content_type = data.get('contentType', '')

    allowed_types = {'image/jpeg', 'image/png', 'image/gif', 'application/pdf'}
    if content_type not in allowed_types:
        return jsonify({'error': 'Content type not allowed'}), 400

    # Generate unique key
    safe_name = ''.join(c if c.isalnum() or c in '._-' else '_' for c in filename)
    key = f'uploads/{int(datetime.utcnow().timestamp())}-{safe_name}'

    upload_url = get_presigned_upload_url(key, content_type)
    region = os.environ['AWS_REGION']
    file_url = f'https://{BUCKET}.s3.{region}.amazonaws.com/{key}'

    return jsonify({'uploadUrl': upload_url, 'fileUrl': file_url, 'key': key})

@app.route('/api/download-url/<path:key>')
def request_download_url(key):
    try:
        download_url = get_presigned_download_url(key)
        return jsonify({'downloadUrl': download_url})
    except Exception as e:
        return jsonify({'error': str(e)}), 500

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=3000)
```

Pro tip: For file uploads in Flask, use request.files to receive multipart form data. Call file.read() to get the bytes, then pass them to upload_to_s3(). For large files, stream directly using s3_client.upload_fileobj(request.files['file'], BUCKET, key) to avoid loading the entire file into memory.
Expected result: POST /api/upload-url returns a presigned S3 PUT URL. Files uploaded to that URL appear in the S3 console. GET /api/download-url/{key} returns a time-limited download URL for any object in the bucket.
Configure S3 CORS for Browser Uploads
When clients upload files directly to S3 using presigned PUT URLs, the browser makes a cross-origin request to your bucket's S3 endpoint from your app's domain. S3 blocks these by default unless you configure CORS (Cross-Origin Resource Sharing) on your bucket. In the AWS S3 console, click your bucket → Permissions tab → scroll down to 'Cross-origin resource sharing (CORS)' → Edit. Paste the CORS configuration JSON below. It allows PUT requests (for presigned uploads) and GET requests (for downloads) from the origins listed in AllowedOrigins; set those to your actual app domain. After saving the CORS configuration, test a browser upload by calling your /api/upload-url endpoint, then making a PUT request from the browser using the returned presigned URL. If CORS is still failing after adding the config, check that you are including the Content-Type header in your PUT request: S3 requires the same Content-Type value that was used when generating the presigned URL. For Autoscale deployments on Replit, the app URL is your stable CORS origin. Include both https://yourapp.replit.app and any custom domains you have configured.
```json
[
  {
    "AllowedHeaders": ["Content-Type", "x-amz-acl", "Authorization"],
    "AllowedMethods": ["GET", "PUT", "POST", "HEAD"],
    "AllowedOrigins": ["https://yourapp.replit.app"],
    "ExposeHeaders": ["ETag"],
    "MaxAgeSeconds": 3000
  }
]
```

Pro tip: During development, you can temporarily set AllowedOrigins to ["*"] to test uploads, but always restrict it to your specific domain before going to production. A wildcard CORS policy means any website could generate upload requests to your bucket using your presigned URLs.
Expected result: Browser-based PUT requests to S3 using presigned URLs succeed without CORS errors. The browser's network inspector shows the S3 PUT request returning 200 OK.
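If you prefer to manage the bucket from code, the same rules can be applied with boto3's put_bucket_cors call; a sketch, where the origin value is a placeholder and the final apply_cors call requires boto3 plus valid AWS credentials:

```python
import os

def build_cors_rules(origins):
    """Build an S3 CORS rule set allowing presigned PUT uploads and GET downloads."""
    return [{
        'AllowedHeaders': ['Content-Type', 'x-amz-acl', 'Authorization'],
        'AllowedMethods': ['GET', 'PUT', 'POST', 'HEAD'],
        'AllowedOrigins': origins,
        'ExposeHeaders': ['ETag'],
        'MaxAgeSeconds': 3000,
    }]

def apply_cors(bucket, origins):
    """Apply the CORS rules to the bucket (requires boto3 and valid credentials)."""
    import boto3  # imported here so the module loads even without boto3 installed
    s3 = boto3.client('s3', region_name=os.environ['AWS_REGION'])
    s3.put_bucket_cors(
        Bucket=bucket,
        CORSConfiguration={'CORSRules': build_cors_rules(origins)}
    )

rules = build_cors_rules(['https://yourapp.replit.app'])
print(rules[0]['AllowedOrigins'])
```

Keeping the rules in code makes the CORS configuration reviewable and repeatable instead of a one-off console edit.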
Common use cases
User File Upload and Management
Allow users of your web application to upload profile photos, documents, or media files. Your Replit server generates a presigned PUT URL for each upload, the browser uploads directly to S3, and your server stores the resulting S3 object key in your database for later retrieval.
Example prompt: Build a file upload API that generates S3 presigned PUT URLs for clients, accepts filename and content-type parameters, returns the upload URL and the final file URL, and tracks uploaded files in a database.
Export and Download Generation
Generate downloadable exports (CSV reports, PDF invoices, ZIP archives) server-side and store them in S3. Return a presigned GET URL to the user that expires after a set time. This offloads file delivery from your Replit server to S3's CDN infrastructure.
Example prompt: Create an export endpoint that generates a CSV report from database records, uploads it to S3 with a unique filename, and returns a presigned download URL valid for 1 hour.
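A minimal sketch of that export flow, assuming the upload_to_s3 and get_presigned_download_url helpers from the boto3 step and a records list of dicts from your database:

```python
import csv
import io
from datetime import datetime, timezone

def build_csv(records, fieldnames):
    """Serialize a list of dicts to CSV bytes, ready to hand to put_object."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue().encode('utf-8')

def export_report(records, fieldnames):
    """Build the CSV, upload it under a timestamped key, return a 1-hour download URL."""
    key = f"exports/report-{int(datetime.now(timezone.utc).timestamp())}.csv"
    data = build_csv(records, fieldnames)
    upload_to_s3(key, data, 'text/csv')                 # helper from the boto3 step
    return get_presigned_download_url(key, expires=3600)  # helper from the boto3 step
```

The CSV is built entirely in memory, which is fine for reports up to a few tens of megabytes; for larger exports, write to a temp file and stream it with upload_fileobj instead.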
Static Asset Storage and CDN Delivery
Store images, videos, and static files in S3 and serve them via CloudFront CDN for fast global delivery. Your Replit app handles the application logic while S3 + CloudFront handles all file serving, dramatically reducing bandwidth on your Replit containers.
Example prompt: Build a media management API that uploads images to S3, generates both a direct S3 URL and a CloudFront CDN URL, and stores the URLs in a database for fast retrieval.
Troubleshooting
AccessDenied: Access Denied when calling PutObject or GetObject
Cause: The IAM user's policy does not grant the attempted S3 action on the specified resource. Common causes: the bucket name or ARN in the IAM policy contains a typo, the policy grants access to the bucket ARN but not the objects (/*), or the wrong AWS account credentials are in Replit Secrets.
Solution: In AWS IAM, review your policy JSON and verify the Resource ARNs include both arn:aws:s3:::bucket-name (for bucket-level operations like ListBucket) and arn:aws:s3:::bucket-name/* (for object-level operations like GetObject, PutObject). Also confirm AWS_ACCESS_KEY_ID in Replit Secrets matches the IAM user you intended, not a different user with different permissions.
```javascript
// Debug IAM by listing objects - narrows down access vs key issue
const { ListObjectsV2Command } = require('@aws-sdk/client-s3');
try {
  const result = await s3.send(new ListObjectsV2Command({ Bucket: BUCKET, MaxKeys: 1 }));
  console.log('Bucket access OK, objects:', result.KeyCount);
} catch (err) {
  console.error('Access error:', err.name, err.message);
  // 'AccessDenied' = IAM policy issue
  // 'NoSuchBucket' = wrong bucket name
  // 'InvalidAccessKeyId' = wrong key in Secrets
}
```

SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided
Cause: AWS_SECRET_ACCESS_KEY in Replit Secrets contains an extra space, a line break, or is missing characters. This is common when copying the secret key from the AWS console; the copy may include whitespace.
Solution: In Replit Secrets, click the AWS_SECRET_ACCESS_KEY entry to edit it. Delete the value completely and re-paste it. Make sure there are no leading or trailing spaces. The secret access key should be exactly 40 characters of mixed alphanumeric characters.
```javascript
// Check for whitespace contamination in secrets
console.log('Key ID length:', process.env.AWS_ACCESS_KEY_ID?.length); // Should be 20
console.log('Secret length:', process.env.AWS_SECRET_ACCESS_KEY?.length); // Should be 40
console.log('Key trimmed:', process.env.AWS_SECRET_ACCESS_KEY?.trim().length); // Must match
```

CORS error: 'Access-Control-Allow-Origin' header missing when uploading from browser
Cause: S3 bucket CORS is not configured, or the AllowedOrigins in the CORS policy does not include your app's origin. S3 only includes CORS headers in responses when the request's Origin header matches an entry in the bucket's CORS configuration.
Solution: In S3 Console → your bucket → Permissions → Cross-origin resource sharing (CORS), verify the configuration exists and that AllowedOrigins includes https://yourapp.replit.app. Also ensure AllowedMethods includes 'PUT' for uploads. After updating CORS, changes can take a few minutes to propagate globally.
Presigned URL returns 403 Forbidden when the client tries to use it
Cause: The presigned URL was generated with a content type that does not match the Content-Type header in the client's PUT request, or the URL expired before the client could use it. S3 treats a content type mismatch on presigned PUTs as a signature mismatch.
Solution: Ensure the client sends exactly the same Content-Type header value that was used when calling generate_presigned_url / getSignedUrl. If your users select file types dynamically, pass the content type from the client to your URL generation endpoint and reflect it back in the presigned URL params. If the URL expired, reduce the time between URL generation and upload, or increase the expiresIn value.
```javascript
// Client-side: must include the exact Content-Type in PUT request
async function uploadFile(file, presignedUrl) {
  const response = await fetch(presignedUrl, {
    method: 'PUT',
    body: file,
    headers: {
      'Content-Type': file.type // Must match what server used to generate URL
    }
  });
  if (!response.ok) throw new Error(`Upload failed: ${response.status}`);
  return true;
}
```

Best practices
- Create a dedicated IAM user per application with a minimal policy scoped to only the specific S3 bucket(s) it needs; never use root credentials or admin users
- Store AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION, and AWS_S3_BUCKET all in Replit Secrets (lock icon 🔒); the AWS SDK reads the first two from environment variables automatically
- Use presigned PUT URLs for user file uploads rather than routing files through your Replit server; this offloads bandwidth, avoids memory limits, and scales better
- Validate content type and enforce file size limits on your server before generating presigned URLs; do not let users upload arbitrary file types to your bucket
- Sanitize S3 object keys by removing path separators and special characters from user-provided filenames to prevent path traversal and key injection
- Configure S3 bucket CORS with specific AllowedOrigins matching your app domain rather than using a wildcard; restrict to https://yourapp.replit.app in production
- Use versioning on production S3 buckets to protect against accidental deletes and enable recovery of overwritten files
- Deploy as Autoscale for most S3-backed apps since file operations are stateless; use Reserved VM only if you have persistent background jobs that process S3 events continuously
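Several of the bullets above (content-type whitelist, size limit, key sanitization) can be collapsed into one pre-check that runs before any presigned URL is generated; a sketch, where the allowed types and the 10 MB cap are illustrative choices:

```python
import re

ALLOWED_TYPES = {'image/jpeg', 'image/png', 'image/gif', 'application/pdf'}
MAX_BYTES = 10 * 1024 * 1024  # illustrative 10 MB cap

def validate_upload(filename, content_type, size_bytes):
    """Return a sanitized S3 key, or raise ValueError if the request is not allowed."""
    if content_type not in ALLOWED_TYPES:
        raise ValueError(f'Content type not allowed: {content_type}')
    if size_bytes > MAX_BYTES:
        raise ValueError(f'File too large: {size_bytes} bytes')
    # Strip path separators and special characters to prevent key injection
    safe_name = re.sub(r'[^A-Za-z0-9._-]', '_', filename)
    return f'uploads/{safe_name}'

print(validate_upload('my photo (1).jpg', 'image/jpeg', 1024))  # uploads/my_photo__1_.jpg
```

Calling this one helper in every upload endpoint keeps the validation rules in a single place instead of duplicated per route.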
Alternatives
Backblaze B2 is significantly cheaper than S3 (free egress with Cloudflare, $0.006/GB storage vs AWS's $0.023/GB) and uses the S3-compatible API, making it a drop-in replacement for cost-sensitive applications.
Dropbox has a simpler API for per-user file sync and sharing scenarios but lacks the scale, performance, and infrastructure tooling that S3 provides for application file storage.
Wasabi offers S3-compatible storage at a flat $7/TB/month with no egress fees, making it predictable and affordable for high-traffic applications that frequently read files.
Frequently asked questions
How do I store AWS credentials securely in Replit?
Click the lock icon (🔒) in the Replit sidebar to open the Secrets pane. Add AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION, and AWS_S3_BUCKET as separate secrets. The AWS SDK v3 and boto3 automatically read the first two by convention, so you do not need to pass them explicitly to the S3 client. Never paste credentials directly into code files.
What is a presigned URL and why should I use it for S3 uploads?
A presigned URL is a temporary, cryptographically signed URL that grants access to one specific S3 operation (upload or download) without requiring AWS credentials. Your Replit server generates the URL and returns it to the client, which then uploads directly to S3. This means large files never pass through your Replit server, saving bandwidth and avoiding memory limits. Presigned URLs expire after a time you specify (typically 5-15 minutes for uploads).
Can I use AWS S3 with Replit for free?
AWS S3 has a free tier of 5GB storage, 20,000 GET requests, and 2,000 PUT requests per month for 12 months after account creation. After the free tier, storage costs $0.023/GB/month in us-east-1 (pricing varies by region). Egress (data transfer out) is additionally charged at $0.09/GB. For most development projects and small apps, costs stay under a few dollars per month.
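As a rough sanity check, the us-east-1 rates quoted above can be plugged into a one-line estimate (request charges are omitted; they are usually negligible at small scale):

```python
def monthly_s3_cost(storage_gb, egress_gb, storage_rate=0.023, egress_rate=0.09):
    """Rough monthly cost in USD using the us-east-1 rates quoted above."""
    return round(storage_gb * storage_rate + egress_gb * egress_rate, 2)

print(monthly_s3_cost(20, 10))  # 20 GB stored, 10 GB served: 1.36
```

Even a moderately busy app storing 20 GB and serving 10 GB of downloads a month lands under two dollars, which matches the "a few dollars per month" claim.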
Why am I getting CORS errors when uploading files to S3 from the browser?
S3 requires you to explicitly configure CORS on each bucket. Go to S3 Console → your bucket → Permissions → Cross-origin resource sharing → Edit and add a CORS configuration that includes your app's domain in AllowedOrigins and PUT in AllowedMethods. Also ensure your client-side fetch or XMLHttpRequest includes the exact Content-Type header that was used to generate the presigned URL.
What Replit deployment type should I use for an S3-backed app?
Autoscale is the right choice for most S3-backed web applications. S3 file operations are stateless: each upload or download is an independent request, so cold starts from Autoscale scaling-to-zero are not a problem. Use Reserved VM only if you have a continuous background process that monitors S3 events (via SQS or S3 notifications) or needs to maintain a persistent connection.
How do I handle large file uploads to S3 from Replit?
Use presigned PUT URLs for large files so they upload directly from the browser to S3 without passing through your Replit server. For files over 100MB, use S3 multipart upload: generate presigned URLs for each part, upload the parts in parallel, then call CompleteMultipartUpload. In the AWS SDK v3, the Upload class from the @aws-sdk/lib-storage package handles multipart uploads automatically.
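In Python, boto3's transfer layer gives the same automatic multipart behavior; a sketch, where the 25 MB part size is an illustrative choice (S3's minimum part size is 5 MB) and the upload itself requires boto3 plus valid credentials:

```python
import math

PART_SIZE = 25 * 1024 * 1024  # 25 MB parts (illustrative; S3 minimum part size is 5 MB)

def part_count(file_size_bytes, part_size=PART_SIZE):
    """Number of parts a multipart upload of this size would use."""
    return max(1, math.ceil(file_size_bytes / part_size))

def upload_large_file(fileobj, bucket, key):
    """Multipart upload via boto3's transfer manager (requires boto3 and credentials)."""
    import boto3  # imported here so the module loads even without boto3 installed
    from boto3.s3.transfer import TransferConfig
    config = TransferConfig(multipart_threshold=PART_SIZE, multipart_chunksize=PART_SIZE)
    s3 = boto3.client('s3')
    s3.upload_fileobj(fileobj, bucket, key, Config=config)

print(part_count(100 * 1024 * 1024))  # a 100 MB file splits into 4 parts
```

upload_fileobj with a TransferConfig handles part splitting, parallel part uploads, and the CompleteMultipartUpload call for you.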