Improve SQL queries generated by Cursor by providing schema context through .cursorrules, referencing your database schema files with @file, and using targeted prompts that specify performance requirements like indexes, joins, and query plans. Cursor produces dramatically better SQL when it knows your exact table structure.
Why Cursor Needs Schema Context for Good SQL
Cursor generates generic SQL when it does not know your database structure. Without schema context, it guesses table names, uses SELECT *, misses available indexes, and produces N+1 query patterns. This tutorial shows you how to give Cursor full schema awareness so it generates optimized, production-ready SQL that matches your actual tables, uses proper joins, and leverages your indexes. It is aimed at backend developers and data engineers who use Cursor for database work.
Prerequisites
- Cursor installed (Pro recommended for longer context)
- An existing database schema or migration files
- Basic knowledge of SQL queries and database indexing
- A project with a data access layer or ORM
Step-by-step guide
Export and reference your database schema
Create a schema reference file that Cursor can read. Export your current schema using your database CLI or ORM migration tool. Place it in a docs/ or schema/ directory. This gives Cursor the exact table names, column types, relationships, and constraints it needs to generate accurate SQL.
```sql
-- docs/schema.sql
-- Auto-generated schema reference for Cursor context

CREATE TABLE users (
  id SERIAL PRIMARY KEY,
  email VARCHAR(255) UNIQUE NOT NULL,
  name VARCHAR(100) NOT NULL,
  role VARCHAR(20) DEFAULT 'user',
  created_at TIMESTAMP DEFAULT NOW(),
  updated_at TIMESTAMP DEFAULT NOW()
);

CREATE TABLE orders (
  id SERIAL PRIMARY KEY,
  user_id INTEGER REFERENCES users(id) ON DELETE CASCADE,
  status VARCHAR(20) DEFAULT 'pending',
  total_cents INTEGER NOT NULL,
  created_at TIMESTAMP DEFAULT NOW()
);

CREATE INDEX idx_orders_user_id ON orders(user_id);
CREATE INDEX idx_orders_status ON orders(status);
CREATE INDEX idx_orders_created_at ON orders(created_at DESC);
```

Pro tip: Regenerate this file periodically with `pg_dump --schema-only` or your ORM's schema dump command. Stale schemas lead to hallucinated column names.
Expected result: A committed schema file that Cursor can reference for accurate table and column names.
Add SQL rules to .cursorrules
Create SQL-specific rules that enforce query performance patterns. These rules tell Cursor to avoid common SQL anti-patterns and use your project's specific conventions for data access. Include rules about SELECT specificity, join types, pagination, and parameterized queries.
```markdown
# .cursorrules (add to existing file)

## SQL and Database Rules
- NEVER use SELECT * — always list specific columns
- ALWAYS use parameterized queries ($1, $2) — never string interpolation
- Use LEFT JOIN only when null results are expected; prefer INNER JOIN
- Include LIMIT on all list queries, default 50, max 1000
- Use cursor-based pagination (WHERE id > $last_id) for large datasets
- Reference docs/schema.sql for table structure
- Add EXPLAIN comments when generating complex queries
- Prefer EXISTS over IN for subqueries with large result sets
- Always include ORDER BY with LIMIT to ensure deterministic results
```

Expected result: Cursor will follow these SQL conventions in every query it generates across your project.
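To see why the cursor-based pagination rule matters, here is a minimal sketch of what it looks like in application code. `buildOrdersPage` is a hypothetical helper, not part of this guide's repository; the point is the shape of the generated SQL.

```typescript
// Keyset (cursor-based) pagination: instead of OFFSET, which scans and
// discards rows, the query seeks directly into the primary-key index
// with WHERE id > $last_id, so page N costs the same as page 1.
function buildOrdersPage(
  lastId: number | null,
  limit: number
): { text: string; values: number[] } {
  const capped = Math.min(limit, 1000); // rule: default 50, max 1000
  if (lastId === null) {
    // First page: no cursor predicate yet
    return {
      text: 'SELECT id, status, total_cents FROM orders ORDER BY id LIMIT $1',
      values: [capped],
    };
  }
  return {
    text: 'SELECT id, status, total_cents FROM orders WHERE id > $1 ORDER BY id LIMIT $2',
    values: [lastId, capped],
  };
}
```

The client passes the last `id` it saw as the cursor for the next page; the ascending `id > $1` predicate pairs with `ORDER BY id` (a descending feed would flip both, as the complete example later in this guide does).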
Create a project rule for SQL files
Create an auto-attaching rule that activates whenever Cursor encounters SQL-related files. This ensures database rules apply automatically without needing to reference them manually in every prompt. Use glob patterns to match query files, repositories, and migration files.
```markdown
---
description: SQL query optimization rules
globs: "**/*.sql, **/queries/**, **/repositories/**, **/dal/**"
alwaysApply: false
---

- Reference @docs/schema.sql for table structure before writing any query
- Include index hints as comments: -- Uses idx_orders_user_id
- Prefer CTEs (WITH clauses) over nested subqueries for readability
- Every query must have a comment explaining what it returns
- Batch INSERT/UPDATE operations when handling multiple rows
- Use RETURNING clause in PostgreSQL for INSERT/UPDATE to avoid an extra SELECT
```

Pro tip: The glob pattern covers common data layer directory names. Adjust it to match your project's structure.
Expected result: SQL rules auto-attach when editing any file in your queries, repositories, or data access directories.
Prompt Cursor with schema context for query generation
When asking Cursor to generate SQL queries or data access functions, always reference your schema file with @file. Open Chat with Cmd+L and construct a prompt that specifies the exact operation, expected result shape, and performance requirements. Include sample data volumes so Cursor makes appropriate optimization choices.
Prompt to type in Cursor Chat (Cmd+L):

```
@docs/schema.sql
Generate a repository function that fetches a user's recent orders.
Requirements:
- Accept userId and optional status filter
- Return order id, status, total_cents, created_at
- Paginate with cursor-based pagination (after_id parameter)
- Default limit 20, max 100
- Sort by created_at DESC
- The orders table has ~10M rows, optimize accordingly
- Use parameterized queries, not string interpolation
```

Pro tip: Mentioning table size (e.g., '~10M rows') changes Cursor's optimization strategy. It will favor index usage and avoid full table scans.
Expected result: Cursor generates an optimized query using the exact column names from your schema, with proper parameterization and cursor-based pagination.
Use Cmd+K to optimize existing queries inline
Select an existing SQL query in your editor and press Cmd+K to open the inline edit prompt bar. Ask Cursor to optimize the selected query. Reference the schema for context. Cursor will rewrite the query in-place while keeping the same return shape, applying the optimization rules from your .cursorrules.
Select your existing query, then press Cmd+K and type:

```
@docs/schema.sql Optimize this query for a table with 10M rows.
Use existing indexes, replace SELECT * with specific columns,
add cursor-based pagination, and use parameterized queries.
```

Pro tip: After optimizing, ask Cursor in Chat (Cmd+L) to generate the EXPLAIN ANALYZE for the optimized query so you can verify index usage.
Expected result: Your selected query is rewritten in-place with specific columns, proper indexes, and parameterized values.
Generate a query testing and benchmarking utility
Use Composer (Cmd+I) to generate a utility that helps you verify query performance. This utility runs EXPLAIN ANALYZE on your queries and checks that they use indexes as expected. Reference your schema and the repository file for full context.
Prompt to type in Cursor Composer (Cmd+I):

```
@docs/schema.sql @src/repositories/orderRepository.ts
Generate a query-benchmark.ts utility that:
1. Imports all queries from the repository
2. Runs EXPLAIN ANALYZE on each query with sample params
3. Checks that no query uses Seq Scan on tables > 10k rows
4. Reports index usage and estimated cost
5. Exits with code 1 if any query misses an index
```

Pro tip: Add this utility to your CI pipeline to catch SQL regressions. Cursor can also generate the CI step if you prompt it with your pipeline config as context.
Expected result: A query benchmarking utility that verifies all repository queries use indexes properly.
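The heart of such a utility is the check in step 3: scanning EXPLAIN output for sequential scans. Here is a sketch of what that core might look like, assuming PostgreSQL's text plan format; `findSeqScans` and the `PlanIssue` shape are illustrative names, not part of any generated file.

```typescript
interface PlanIssue {
  table: string;
  rows: number;
}

// Given the text output of EXPLAIN / EXPLAIN ANALYZE, flag any
// sequential scan whose estimated row count exceeds the threshold.
// Postgres renders such nodes like:
//   Seq Scan on orders  (cost=0.00..25870.00 rows=1000000 width=24)
function findSeqScans(planText: string, maxRows = 10_000): PlanIssue[] {
  const issues: PlanIssue[] = [];
  const nodePattern = /Seq Scan on (\w+).*?rows=(\d+)/g;
  let match: RegExpExecArray | null;
  while ((match = nodePattern.exec(planText)) !== null) {
    const rows = Number(match[2]);
    if (rows > maxRows) {
      issues.push({ table: match[1], rows });
    }
  }
  return issues;
}
```

The full utility would run each repository query through `EXPLAIN ANALYZE` via the connection pool, pass the plan text to a checker like this, and exit non-zero if any issues come back.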
Complete working example
```typescript
import { Pool, QueryResult } from 'pg';

interface OrderRow {
  id: number;
  status: string;
  total_cents: number;
  created_at: Date;
}

interface PaginatedOrders {
  orders: OrderRow[];
  has_more: boolean;
  next_cursor: number | null;
}

export class OrderRepository {
  constructor(private pool: Pool) {}

  async getUserOrders(
    userId: number,
    options: {
      status?: string;
      afterId?: number;
      limit?: number;
    } = {}
  ): Promise<PaginatedOrders> {
    const limit = Math.min(options.limit || 20, 100);
    const fetchLimit = limit + 1; // Fetch one extra row to detect has_more

    const params: (number | string)[] = [userId, fetchLimit];
    let paramIndex = 3;

    // Uses idx_orders_user_id, idx_orders_created_at
    let query = `
      SELECT id, status, total_cents, created_at
      FROM orders
      WHERE user_id = $1
    `;

    if (options.status) {
      query += ` AND status = $${paramIndex}`;
      params.push(options.status);
      paramIndex++;
    }

    if (options.afterId) {
      query += ` AND id < $${paramIndex}`;
      params.push(options.afterId);
      paramIndex++;
    }

    query += ` ORDER BY created_at DESC, id DESC LIMIT $2`;

    const result: QueryResult<OrderRow> = await this.pool.query(query, params);
    const hasMore = result.rows.length > limit;
    const orders = hasMore ? result.rows.slice(0, limit) : result.rows;

    return {
      orders,
      has_more: hasMore,
      next_cursor: orders.length > 0 ? orders[orders.length - 1].id : null,
    };
  }
}
```

Common mistakes when improving SQL queries generated by Cursor
Mistake: Not providing schema context when prompting for SQL.
How to avoid: Always reference @docs/schema.sql or your migration files when prompting for any database-related code.
Mistake: Letting Cursor generate raw string interpolation in queries.
How to avoid: Add a .cursorrules entry requiring parameterized queries and specify the parameter style ($1, $2 for PostgreSQL or ? for MySQL).
Mistake: Using SELECT * in Cursor-generated queries.
How to avoid: Add a rule banning SELECT * and always specify the result shape in your prompt.
Mistake: Not mentioning data volume in prompts.
How to avoid: Include approximate row counts in your prompt ('The orders table has ~10M rows') so Cursor prioritizes index usage.
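The string-interpolation mistake is worth seeing side by side. The two helpers below are hypothetical illustrations of the anti-pattern and its fix; in a real repository the parameterized form would be passed straight to node-postgres's `pool.query(text, values)`.

```typescript
// BAD: user input spliced into the SQL text. Any quote in the input
// changes the query's meaning (classic SQL injection), and every
// distinct value looks like a brand-new query to the planner.
function unsafeFindByEmail(email: string): string {
  return `SELECT id, name FROM users WHERE email = '${email}'`;
}

// GOOD: a $1 placeholder keeps the data out of the SQL text entirely;
// the driver ships query and values separately to the server.
function safeFindByEmail(email: string): { text: string; values: string[] } {
  return {
    text: 'SELECT id, name FROM users WHERE email = $1',
    values: [email],
  };
}
```

With an input like `x' OR '1'='1`, the unsafe version produces a query that matches every row, while the safe version's SQL text is unchanged and the hostile string is treated as an ordinary value.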
Best practices
- Keep a docs/schema.sql file updated and committed to Git as your single source of truth for Cursor context
- Specify approximate table sizes in prompts so Cursor makes appropriate optimization decisions
- Add SQL-specific rules to .cursorrules requiring parameterized queries and explicit column lists
- Use auto-attaching .cursor/rules/ for repository and query directories
- Generate EXPLAIN ANALYZE checks alongside queries to verify index usage
- Review Cursor-generated JOIN types carefully — it sometimes uses LEFT JOIN where INNER JOIN is correct
- Use Cmd+K for inline query optimization and Cmd+L for generating new query functions
- Regenerate your schema reference file after every migration to prevent hallucinated column names
Still stuck?
Copy one of these prompts to get a personalized, step-by-step explanation.
I have a PostgreSQL database with these tables: [paste schema]. Write an optimized query that fetches [describe data needed] with cursor-based pagination, parameterized queries, and proper index usage. The main table has approximately [X] rows.
@docs/schema.sql Generate a repository function that fetches [describe data needed]. Use parameterized queries ($1, $2), cursor-based pagination with afterId, explicit column lists (no SELECT *), and add comments noting which indexes each query uses. The table has ~[X]M rows.
Frequently asked questions
Can Cursor connect directly to my database to read the schema?
Cursor does not connect to databases directly, but you can use MCP (Model Context Protocol) servers to give it database access. A simpler approach is exporting your schema to a SQL file and referencing it with @file in prompts.
How do I make Cursor use my existing indexes in generated queries?
Include your CREATE INDEX statements in the schema reference file. Add a .cursorrules entry requiring index-aware comments on all queries. Mention specific indexes by name in prompts when you know which one should be used.
Why does Cursor generate SELECT * even when I have rules against it?
Rules can drift in longer chat sessions. Start a new chat with Cmd+N and reference your rules explicitly. You can also paste the specific rule into your prompt: 'Never use SELECT *, always list columns explicitly.'
Should I use Cursor Chat or Composer for SQL generation?
Use Chat (Cmd+L) when you want to discuss query strategy or review a single query. Use Composer (Cmd+I) when you need to generate complete repository files with multiple query functions that reference each other.
How do I get Cursor to generate SQL for MySQL instead of PostgreSQL?
Specify the database engine in your .cursorrules file and in individual prompts. Include MySQL-specific syntax requirements like using ? for parameters instead of $1 and LIMIT offset, count instead of LIMIT with OFFSET.
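The mechanical part of that translation can be sketched as a small helper. `toMysqlPlaceholders` is a hypothetical utility, and it assumes placeholders appear in ascending order and are not reused, which is how the queries in this guide are written; repeated placeholders would also require reordering the values array.

```typescript
// Convert PostgreSQL-style numbered placeholders ($1, $2, ...) to
// MySQL's positional ?. Assumes each $n appears once, in order.
function toMysqlPlaceholders(sql: string): string {
  return sql.replace(/\$\d+/g, '?');
}
```

For example, `WHERE user_id = $1 LIMIT $2` becomes `WHERE user_id = ? LIMIT ?`.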
Can Cursor help optimize an existing slow query?
Yes. Select the slow query in your editor, press Cmd+K, and prompt: '@docs/schema.sql Optimize this query for a table with N rows. Use available indexes and explain why each change improves performance.' Cursor will rewrite it in place.