
Navid's Blog

Ideas, Experiments, and Lessons Learned


I Finally Fixed Our Slow Database Queries — Here’s What Actually Worked

Posted on April 7, 2026 by Navid

Our API was slow. Not painfully slow, but slow enough that customers complained. Response times hovered around 800ms for simple list endpoints. After some digging, I realized the database queries were the culprit.

The Problem We Ignored

Like most startups, we moved fast and shipped features. We added indexes here and there when things felt slow, but we never really sat down and optimized our queries properly.

One endpoint in particular — fetching user orders with all related data — took 2.3 seconds. That’s forever in user-time.

What I Did Wrong First

My initial instinct was to add more caching. Redis can solve everything, right? Wrong. Caching just hid the real problem: the queries were still slow, and every cache miss paid the full price.

What Actually Fixed It

1. EXPLAIN ANALYZE Became My Best Friend

PostgreSQL’s EXPLAIN ANALYZE showed me the truth. I found a simple query doing a sequential scan on a 500k row table. No index. Just scanning everything. Adding a composite index on the right columns dropped that query from 400ms to 12ms.
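
Here's roughly what that looked like — a sketch, not our exact code; the orders table, customer_id, and created_at are stand-ins for the real schema:

```python
# A sketch: "orders", customer_id, and created_at are stand-ins.
from sqlalchemy import create_engine, text

engine = create_engine("postgresql+psycopg2://user:pass@localhost/appdb")

with engine.begin() as conn:
    # Ask Postgres what it actually does for the slow query.
    plan = conn.execute(
        text("EXPLAIN ANALYZE SELECT * FROM orders "
             "WHERE customer_id = :cid ORDER BY created_at DESC"),
        {"cid": 42},
    )
    for (line,) in plan:
        print(line)  # "Seq Scan on orders" is the red flag

    # A composite index on the filter column + sort column turns the
    # seq scan into an index scan. (In production you'd want
    # CREATE INDEX CONCURRENTLY, which has to run outside a transaction.)
    conn.execute(text(
        "CREATE INDEX IF NOT EXISTS idx_orders_customer_created "
        "ON orders (customer_id, created_at DESC)"
    ))
```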

2. We Were N+1 Querying Without Realizing It

We used an ORM and thought we were being efficient. But for each order, we were loading the customer, the items, the shipping address — separately. Loading 50 orders meant 150+ queries.

Using joinedload in SQLAlchemy reduced that to 3 queries total.
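
Here's the shape of the fix, with stand-in models — ours are messier, but the loader options at the bottom are the point. One note: I'd now use selectinload for the items collection, since joined eager loading against a collection interacts badly with LIMIT; the exact query count depends on which loader you pick per relationship.

```python
# Stand-in models; the .options(...) call is what kills the N+1.
from datetime import datetime
from sqlalchemy import ForeignKey, create_engine, select
from sqlalchemy.orm import (DeclarativeBase, Mapped, Session, joinedload,
                            mapped_column, relationship, selectinload)

class Base(DeclarativeBase):
    pass

class Customer(Base):
    __tablename__ = "customers"
    id: Mapped[int] = mapped_column(primary_key=True)

class Address(Base):
    __tablename__ = "addresses"
    id: Mapped[int] = mapped_column(primary_key=True)

class Item(Base):
    __tablename__ = "items"
    id: Mapped[int] = mapped_column(primary_key=True)
    order_id: Mapped[int] = mapped_column(ForeignKey("orders.id"))

class Order(Base):
    __tablename__ = "orders"
    id: Mapped[int] = mapped_column(primary_key=True)
    customer_id: Mapped[int] = mapped_column(ForeignKey("customers.id"))
    address_id: Mapped[int] = mapped_column(ForeignKey("addresses.id"))
    created_at: Mapped[datetime] = mapped_column()
    customer: Mapped[Customer] = relationship()
    shipping_address: Mapped[Address] = relationship()
    items: Mapped[list[Item]] = relationship()

engine = create_engine("postgresql+psycopg2://user:pass@localhost/appdb")

with Session(engine) as session:
    stmt = (
        select(Order)
        .options(
            joinedload(Order.customer),          # to-one: folded into the main query as a JOIN
            joinedload(Order.shipping_address),  # to-one: same query
            selectinload(Order.items),           # to-many: one batched SELECT ... WHERE ... IN (...)
        )
        .limit(50)
    )
    orders = session.scalars(stmt).all()  # a handful of queries, not 150+
```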

3. Pagination Was Broken

We were loading all results and slicing in Python. Once I understood LIMIT and OFFSET properly — and added proper ordering with a covering index — pagination became instant.
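
The fix, reusing the stand-in Order model from above — the key is that the ORDER BY matches the composite index, so Postgres reads each page straight off the index instead of sorting everything:

```python
# Page in SQL, not in Python. ORDER BY mirrors the
# (customer_id, created_at) index added earlier.
def orders_page(session: Session, customer_id: int,
                page: int, per_page: int = 50):
    stmt = (
        select(Order)
        .where(Order.customer_id == customer_id)
        .order_by(Order.created_at.desc())
        .limit(per_page)
        .offset(page * per_page)
    )
    return session.scalars(stmt).all()
```

One caveat I'd flag: OFFSET still walks past every skipped row, so very deep pages slow down again. Keyset pagination is the next step if that ever bites us, but for our page depths this was plenty.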

The Numbers

  • Endpoint response time: 2.3s → 180ms
  • Database CPU: 85% → 25%
  • API error rate: dropped significantly

What I Learned

Don’t guess. Measure. Your ORM is probably hiding expensive queries from you. Add proper indexes. Use connection pooling. And for heaven’s sake, paginate at the database level, not in Python.
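
For reference, connection pooling in SQLAlchemy is mostly engine configuration — the numbers below are illustrative, not our production values:

```python
# Pooling knobs live on the engine; tune to your workload.
from sqlalchemy import create_engine

engine = create_engine(
    "postgresql+psycopg2://user:pass@localhost/appdb",
    pool_size=10,        # steady-state connections kept open
    max_overflow=20,     # extra connections allowed under bursts
    pool_pre_ping=True,  # test connections before use; replaces stale ones
    pool_recycle=1800,   # recycle connections older than 30 minutes
)
```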

The best part? These fixes took about 4 hours total. The biggest win was just running EXPLAIN ANALYZE and actually looking at what the database was doing.
