
How to Improve SQL Queries Generated by Cursor

Improve SQL queries generated by Cursor by providing schema context through .cursorrules, referencing your database schema files with @file, and using targeted prompts that specify performance requirements like indexes, joins, and query plans. Cursor produces dramatically better SQL when it knows your exact table structure.

What you'll learn

  • How to give Cursor your database schema for accurate SQL generation
  • How to write .cursorrules that enforce SQL performance best practices
  • How to prompt Cursor for optimized queries with proper indexing
  • How to use @context to reference schema files and existing queries
Intermediate · 9 min read · 15-20 min · Cursor Pro+, any SQL database (PostgreSQL, MySQL, SQLite) · March 2026 · RapidDev Engineering Team
Why Cursor Needs Schema Context for Good SQL

Cursor generates generic SQL when it does not know your database structure. Without schema context, it guesses table names, uses SELECT *, misses available indexes, and produces N+1 query patterns. This tutorial shows you how to give Cursor full schema awareness so it generates optimized, production-ready SQL that matches your actual tables, uses proper joins, and leverages your indexes. It is aimed at backend developers and data engineers who use Cursor for database work.
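The N+1 pattern mentioned above can be sketched with an in-memory "database" so the difference in round-trip count is visible. All names here are illustrative, not taken from the tutorial's schema file:

```typescript
type Order = { id: number; userId: number; totalCents: number };

const users = [1, 2, 3];
const orders: Order[] = [
  { id: 10, userId: 1, totalCents: 500 },
  { id: 11, userId: 2, totalCents: 900 },
  { id: 12, userId: 1, totalCents: 250 },
];

let queryCount = 0;

// N+1: one query per user, which is what Cursor tends to generate
// when it has no schema context to suggest a join.
function ordersPerUserNaive(): Map<number, Order[]> {
  const result = new Map<number, Order[]>();
  for (const userId of users) {
    queryCount++; // each iteration is a round-trip: SELECT ... WHERE user_id = $1
    result.set(userId, orders.filter((o) => o.userId === userId));
  }
  return result;
}

// Batched: one query fetching all rows, grouped in application code
// (equivalent to a single JOIN or WHERE user_id = ANY($1)).
function ordersPerUserBatched(): Map<number, Order[]> {
  queryCount++; // single round-trip
  const result = new Map<number, Order[]>(users.map((u) => [u, [] as Order[]]));
  for (const o of orders) result.get(o.userId)?.push(o);
  return result;
}

queryCount = 0;
ordersPerUserNaive();
const naiveQueries = queryCount; // 3 users, 3 queries

queryCount = 0;
ordersPerUserBatched();
const batchedQueries = queryCount; // 1 query regardless of user count

console.log(naiveQueries, batchedQueries);
```

The naive version issues one query per user, so its cost grows with the user list; the batched version stays at one query no matter how many users are involved.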

Prerequisites

  • Cursor installed (Pro recommended for longer context)
  • An existing database schema or migration files
  • Basic knowledge of SQL queries and database indexing
  • A project with a data access layer or ORM

Step-by-step guide

Step 1: Export and reference your database schema

Create a schema reference file that Cursor can read. Export your current schema using your database CLI or ORM migration tool. Place it in a docs/ or schema/ directory. This gives Cursor the exact table names, column types, relationships, and constraints it needs to generate accurate SQL.

docs/schema.sql
-- docs/schema.sql
-- Auto-generated schema reference for Cursor context

CREATE TABLE users (
  id SERIAL PRIMARY KEY,
  email VARCHAR(255) UNIQUE NOT NULL,
  name VARCHAR(100) NOT NULL,
  role VARCHAR(20) DEFAULT 'user',
  created_at TIMESTAMP DEFAULT NOW(),
  updated_at TIMESTAMP DEFAULT NOW()
);

CREATE TABLE orders (
  id SERIAL PRIMARY KEY,
  user_id INTEGER REFERENCES users(id) ON DELETE CASCADE,
  status VARCHAR(20) DEFAULT 'pending',
  total_cents INTEGER NOT NULL,
  created_at TIMESTAMP DEFAULT NOW()
);

CREATE INDEX idx_orders_user_id ON orders(user_id);
CREATE INDEX idx_orders_status ON orders(status);
CREATE INDEX idx_orders_created_at ON orders(created_at DESC);

Pro tip: Regenerate this file periodically with pg_dump --schema-only or your ORM's schema dump command. Stale schemas lead to hallucinated column names.

Expected result: A committed schema file that Cursor can reference for accurate table and column names.

Step 2: Add SQL rules to .cursorrules

Create SQL-specific rules that enforce query performance patterns. These rules tell Cursor to avoid common SQL anti-patterns and use your project's specific conventions for data access. Include rules about SELECT specificity, join types, pagination, and parameterized queries.

.cursorrules
# .cursorrules (add to existing file)

## SQL and Database Rules
- NEVER use SELECT *; always list specific columns
- ALWAYS use parameterized queries ($1, $2); never use string interpolation
- Use LEFT JOIN only when null results are expected; prefer INNER JOIN
- Include LIMIT on all list queries; default 50, max 1000
- Use cursor-based pagination (WHERE id > $last_id) for large datasets
- Reference docs/schema.sql for table structure
- Add EXPLAIN comments when generating complex queries
- Prefer EXISTS over IN for subqueries with large result sets
- Always include ORDER BY with LIMIT to ensure deterministic results

Expected result: Cursor will follow these SQL conventions in every query it generates across your project.
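A data-access function written to these rules might look like the following sketch. The table and column names come from docs/schema.sql above; the function itself and its defaults are hypothetical, shown only to make the rules concrete:

```typescript
interface ListOrdersFilter {
  userId: number;
  status?: string;
  limit?: number;
}

// Builds a query that follows the rules: explicit columns, parameterized
// values, clamped LIMIT, and a deterministic ORDER BY.
function buildListOrdersQuery(
  filter: ListOrdersFilter
): { text: string; values: (number | string)[] } {
  const limit = Math.min(filter.limit ?? 50, 1000); // default 50, max 1000 per the rules
  const values: (number | string)[] = [filter.userId];
  let text =
    "SELECT id, status, total_cents, created_at FROM orders WHERE user_id = $1";

  if (filter.status !== undefined) {
    values.push(filter.status);
    text += ` AND status = $${values.length}`;
  }

  values.push(limit);
  text += ` ORDER BY created_at DESC, id DESC LIMIT $${values.length}`;
  return { text, values };
}

const q = buildListOrdersQuery({ userId: 42, status: "pending", limit: 5000 });
console.log(q.text);
console.log(q.values); // [42, "pending", 1000] (limit clamped to the max)
```

Note how the caller never concatenates user input into the SQL string; every dynamic value travels in the values array and the query stays deterministic because ORDER BY includes a unique tiebreaker column.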

Step 3: Create a project rule for SQL files

Create an auto-attaching rule that activates whenever Cursor encounters SQL-related files. This ensures database rules apply automatically without needing to reference them manually in every prompt. Use glob patterns to match query files, repositories, and migration files.

.cursor/rules/sql-queries.mdc
---
description: SQL query optimization rules
globs: "**/*.sql, **/queries/**, **/repositories/**, **/dal/**"
alwaysApply: false
---

- Reference @docs/schema.sql for table structure before writing any query
- Include index hints as comments: -- Uses idx_orders_user_id
- Prefer CTEs (WITH clauses) over nested subqueries for readability
- Every query must have a comment explaining what it returns
- Batch INSERT/UPDATE operations when handling multiple rows
- Use RETURNING clause in PostgreSQL for INSERT/UPDATE to avoid extra SELECT

Pro tip: The glob pattern covers common data layer directory names. Adjust it to match your project's structure.

Expected result: SQL rules auto-attach when editing any file in your queries, repositories, or data access directories.

Step 4: Prompt Cursor with schema context for query generation

When asking Cursor to generate SQL queries or data access functions, always reference your schema file with @file. Open Chat with Cmd+L and construct a prompt that specifies the exact operation, expected result shape, and performance requirements. Include sample data volumes so Cursor makes appropriate optimization choices.

Cursor Chat prompt
// Prompt to type in Cursor Chat (Cmd+L):
// @docs/schema.sql
// Generate a repository function that fetches a user's recent orders.
// Requirements:
// - Accept userId and optional status filter
// - Return order id, status, total_cents, created_at
// - Paginate with cursor-based pagination (after_id parameter)
// - Default limit 20, max 100
// - Sort by created_at DESC
// - The orders table has ~10M rows, optimize accordingly
// - Use parameterized queries, not string interpolation

Pro tip: Mentioning table size (e.g., '~10M rows') changes Cursor's optimization strategy. It will favor index usage and avoid full table scans.

Expected result: Cursor generates an optimized query using the exact column names from your schema, with proper parameterization and cursor-based pagination.

Step 5: Use Cmd+K to optimize existing queries inline

Select an existing SQL query in your editor and press Cmd+K to open the inline edit prompt bar. Ask Cursor to optimize the selected query. Reference the schema for context. Cursor will rewrite the query in-place while keeping the same return shape, applying the optimization rules from your .cursorrules.

Cmd+K inline prompt
// Select your existing query, then press Cmd+K and type:
// @docs/schema.sql Optimize this query for a table with 10M rows.
// Use existing indexes, replace SELECT * with specific columns,
// add cursor-based pagination, and use parameterized queries.

Pro tip: After optimizing, ask Cursor in Chat (Cmd+L) to generate the EXPLAIN ANALYZE for the optimized query so you can verify index usage.

Expected result: Your selected query is rewritten in-place with specific columns, proper indexes, and parameterized values.

Step 6: Generate a query testing and benchmarking utility

Use Composer (Cmd+I) to generate a utility that helps you verify query performance. This utility runs EXPLAIN ANALYZE on your queries and checks that they use indexes as expected. Reference your schema and the repository file for full context.

Cursor Composer prompt
// Prompt to type in Cursor Composer (Cmd+I):
// @docs/schema.sql @src/repositories/orderRepository.ts
// Generate a query-benchmark.ts utility that:
// 1. Imports all queries from the repository
// 2. Runs EXPLAIN ANALYZE on each query with sample params
// 3. Checks that no query uses Seq Scan on tables > 10k rows
// 4. Reports index usage and estimated cost
// 5. Exits with code 1 if any query misses an index

Pro tip: Add this utility to your CI pipeline to catch SQL regressions. Cursor can also generate the CI step if you prompt it with your pipeline config as context.

Expected result: A query benchmarking utility that verifies all repository queries use indexes properly.
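The core check such a utility performs can be sketched as a walk over the JSON plan returned by EXPLAIN (FORMAT JSON), flagging Seq Scan nodes on large tables. The plan shape below follows PostgreSQL's JSON EXPLAIN output; the function name and the 10k-row threshold are assumptions matching the prompt above, not code the tutorial generated:

```typescript
interface PlanNode {
  "Node Type": string;
  "Relation Name"?: string;
  "Plan Rows"?: number;
  Plans?: PlanNode[];
}

// Recursively collects relation names that are sequentially scanned
// with an estimated row count above the threshold.
function findLargeSeqScans(node: PlanNode, rowThreshold = 10_000): string[] {
  const hits: string[] = [];
  if (node["Node Type"] === "Seq Scan" && (node["Plan Rows"] ?? 0) > rowThreshold) {
    hits.push(node["Relation Name"] ?? "<unknown>");
  }
  for (const child of node.Plans ?? []) {
    hits.push(...findLargeSeqScans(child, rowThreshold));
  }
  return hits;
}

// Example plan: a join where "orders" is scanned sequentially but
// "users" is reached through an index.
const samplePlan: PlanNode = {
  "Node Type": "Hash Join",
  Plans: [
    { "Node Type": "Seq Scan", "Relation Name": "orders", "Plan Rows": 10_000_000 },
    { "Node Type": "Index Scan", "Relation Name": "users", "Plan Rows": 1 },
  ],
};

const offenders = findLargeSeqScans(samplePlan);
console.log(offenders); // ["orders"]
```

A CI wrapper would run each repository query under EXPLAIN (FORMAT JSON), feed the resulting plan into this check, and exit non-zero when the offenders list is non-empty.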

Complete working example

src/repositories/orderRepository.ts
import { Pool, QueryResult } from 'pg';

interface OrderRow {
  id: number;
  status: string;
  total_cents: number;
  created_at: Date;
}

interface PaginatedOrders {
  orders: OrderRow[];
  has_more: boolean;
  next_cursor: number | null;
}

export class OrderRepository {
  constructor(private pool: Pool) {}

  async getUserOrders(
    userId: number,
    options: {
      status?: string;
      afterId?: number;
      limit?: number;
    } = {}
  ): Promise<PaginatedOrders> {
    const limit = Math.min(options.limit || 20, 100);
    const fetchLimit = limit + 1; // Fetch one extra row to detect has_more

    const params: (number | string)[] = [userId, fetchLimit];
    let paramIndex = 3;

    // Uses idx_orders_user_id, idx_orders_created_at
    let query = `
      SELECT id, status, total_cents, created_at
      FROM orders
      WHERE user_id = $1
    `;

    if (options.status) {
      query += ` AND status = $${paramIndex}`;
      params.push(options.status);
      paramIndex++;
    }

    if (options.afterId) {
      query += ` AND id < $${paramIndex}`;
      params.push(options.afterId);
      paramIndex++;
    }

    query += ` ORDER BY created_at DESC, id DESC LIMIT $2`;

    const result: QueryResult<OrderRow> = await this.pool.query(query, params);
    const hasMore = result.rows.length > limit;
    const orders = hasMore ? result.rows.slice(0, limit) : result.rows;

    return {
      orders,
      has_more: hasMore,
      next_cursor: orders.length > 0 ? orders[orders.length - 1].id : null,
    };
  }
}
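The fetch-one-extra trick inside getUserOrders can be isolated as a pure function and unit-tested without a database. The helper name and Page shape below are illustrative, not part of the repository above:

```typescript
interface Page<T extends { id: number }> {
  items: T[];
  hasMore: boolean;
  nextCursor: number | null;
}

// `rows` is the result of a query that requested `limit + 1` rows:
// if more than `limit` came back, another page exists, and the extra
// row is trimmed before returning.
function toPage<T extends { id: number }>(rows: T[], limit: number): Page<T> {
  const hasMore = rows.length > limit;
  const items = hasMore ? rows.slice(0, limit) : rows;
  return {
    items,
    hasMore,
    nextCursor: items.length > 0 ? items[items.length - 1].id : null,
  };
}

// Three rows came back for a limit of 2, so there is another page and
// the cursor points at the last row actually returned.
const page = toPage([{ id: 30 }, { id: 25 }, { id: 17 }], 2);
console.log(page.hasMore, page.nextCursor); // true 25
```

Separating this logic from the SQL makes the pagination contract testable on its own: the repository only needs to guarantee it fetched limit + 1 rows in cursor order.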

Common mistakes when improving SQL queries generated by Cursor

Mistake: Not providing schema context when prompting for SQL

How to avoid: Always reference @docs/schema.sql or your migration files when prompting for any database-related code.

Mistake: Letting Cursor generate raw string interpolation in queries

How to avoid: Add a .cursorrules entry requiring parameterized queries and specify the parameter style ($1, $2 for PostgreSQL or ? for MySQL).

Mistake: Using SELECT * in Cursor-generated queries

How to avoid: Add a rule banning SELECT * and always specify the result shape in your prompt.

Mistake: Not mentioning data volume in prompts

How to avoid: Include approximate row counts in your prompt ('The orders table has ~10M rows') so Cursor prioritizes index usage.

Best practices

  • Keep a docs/schema.sql file updated and committed to Git as your single source of truth for Cursor context
  • Specify approximate table sizes in prompts so Cursor makes appropriate optimization decisions
  • Add SQL-specific rules to .cursorrules requiring parameterized queries and explicit column lists
  • Use auto-attaching .cursor/rules/ for repository and query directories
  • Generate EXPLAIN ANALYZE checks alongside queries to verify index usage
  • Review Cursor-generated JOIN types carefully — it sometimes uses LEFT JOIN where INNER JOIN is correct
  • Use Cmd+K for inline query optimization and Cmd+L for generating new query functions
  • Regenerate your schema reference file after every migration to prevent hallucinated column names

Still stuck?

Copy one of these prompts to get a personalized, step-by-step explanation.

ChatGPT Prompt

I have a PostgreSQL database with these tables: [paste schema]. Write an optimized query that fetches [describe data needed] with cursor-based pagination, parameterized queries, and proper index usage. The main table has approximately [X] rows.

Cursor Prompt

@docs/schema.sql Generate a repository function that fetches [describe data needed]. Use parameterized queries ($1, $2), cursor-based pagination with afterId, explicit column lists (no SELECT *), and add comments noting which indexes each query uses. The table has ~[X]M rows.

Frequently asked questions

Can Cursor connect directly to my database to read the schema?

Cursor does not connect to databases directly, but you can use MCP (Model Context Protocol) servers to give it database access. A simpler approach is exporting your schema to a SQL file and referencing it with @file in prompts.

How do I make Cursor use my existing indexes in generated queries?

Include your CREATE INDEX statements in the schema reference file. Add a .cursorrules entry requiring index-aware comments on all queries. Mention specific indexes by name in prompts when you know which one should be used.

Why does Cursor generate SELECT * even when I have rules against it?

Rules can drift in longer chat sessions. Start a new chat with Cmd+N and reference your rules explicitly. You can also paste the specific rule into your prompt: 'Never use SELECT *, always list columns explicitly.'

Should I use Cursor Chat or Composer for SQL generation?

Use Chat (Cmd+L) when you want to discuss query strategy or review a single query. Use Composer (Cmd+I) when you need to generate complete repository files with multiple query functions that reference each other.

How do I get Cursor to generate SQL for MySQL instead of PostgreSQL?

Specify the database engine in your .cursorrules file and in individual prompts. Include MySQL-specific syntax requirements like using ? for parameters instead of $1 and LIMIT offset, count instead of LIMIT with OFFSET.
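When migrating rule-compliant PostgreSQL queries to MySQL, the placeholder style is the mechanical part. A hypothetical helper like the one below rewrites $1-style placeholders into ?, reordering the values array to match positional order; the function name and signature are assumptions for illustration:

```typescript
// Rewrites PostgreSQL-style placeholders ($1, $2, ...) into MySQL-style ?,
// emitting the bound values in the order the placeholders appear.
function toMySqlPlaceholders(
  text: string,
  values: unknown[]
): { text: string; values: unknown[] } {
  const reordered: unknown[] = [];
  const converted = text.replace(/\$(\d+)/g, (_match, n: string) => {
    reordered.push(values[Number(n) - 1]); // $1 maps to values[0]
    return "?";
  });
  return { text: converted, values: reordered };
}

const pg = {
  text: "SELECT id, status FROM orders WHERE user_id = $1 AND status = $2 LIMIT $3",
  values: [42, "pending", 20],
};
const my = toMySqlPlaceholders(pg.text, pg.values);
console.log(my.text);   // ... WHERE user_id = ? AND status = ? LIMIT ?
console.log(my.values); // [42, "pending", 20]
```

Reordering matters because PostgreSQL placeholders can repeat or appear out of numeric order, while MySQL's ? placeholders are strictly positional.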

Can Cursor help optimize an existing slow query?

Yes. Select the slow query in your editor, press Cmd+K, and prompt: '@docs/schema.sql Optimize this query for a table with N rows. Use available indexes and explain why each change improves performance.' Cursor will rewrite it in place.
