# FastAPI Mistakes That Kill Your Performance

*dev.to - DEV Community (excerpt)*
Most FastAPI performance problems aren't caused by FastAPI itself. They're caused by architectural issues - N+1 database queries, missing indexes, poor caching strategies. I covered these bigger problems in my previous post about Python's speed, and fixing those will give you 10-100x performance improvements.

…

- Use FastAPI CLI production mode instead of development settings
- Implement pure ASGI middleware instead of BaseHTTPMiddleware
- Avoid duplicate validation between request parsing and response models

…

## Mixing Async and Sync Functions Incorrectly

This is critical and commonly misunderstood. The choice between `async def` and `def` fundamentally changes how FastAPI handles your endpoint, and getting it wrong can destroy performance.

**How FastAPI handles each approach:**

- `async def` functions run in the main event loop alongside other requests
- `def` functions are offloaded to a separate thread pool (limited to 40 threads by default)

**When to use `async def`:**

- I/O-bound operations: database calls, HTTP requests, file operations
- Operations that call other async functions
- Light computational work: JSON parsing, data validation, simple transformations

**When to use regular `def`:**

- CPU-intensive operations: image processing, heavy calculations, data analysis
- Calling libraries that aren't async-compatible and do significant work
- Operations that would benefit from running on a separate CPU core

```
import httpx
...
```

Using async database drivers like `asyncpg` (PostgreSQL) or `aiomysql` (MySQL) with async endpoints can provide 3-5x better throughput under concurrent load. This is because your entire request pipeline becomes truly asynchronous - no blocking operations.

With 40 default threads, if 41 users hit sync endpoints simultaneously, one user waits for a thread to become available. With 1000 concurrent users hitting sync endpoints, 960 are queued waiting for threads - creating massive delays.
## Overusing Pydantic Models Throughout Your Application

Pydantic is excellent for data validation at API boundaries, but using it everywhere in your application creates significant performance overhead that many developers don't realize.

The problem is subtle: Pydantic models look like regular Python classes, so it's tempting to use them as your primary data structures throughout your application. This creates what's known as "serialization/deserialization debt" - you're paying validation and conversion costs even when you don't need validation:

- Pydantic object creation is 6.5x slower than Python dataclasses
- Memory usage is 2.5x higher due to validation metadata storage
- JSON operations are 1.5x slower across serialization and deserialization

This overhead compounds quickly. If you're creating thousands of objects during request processing, using Pydantic models internally can add significant latency.

…

## Using a Small Thread Pool for Sync Operations

When you use a regular `def` function in FastAPI, it doesn't run in the main event loop. Instead, it runs in a thread pool - a collection of worker threads that handle synchronous operations.

By default, FastAPI provides only 40 threads. If 41 users hit a sync endpoint simultaneously, one waits for a thread. With 1000 concurrent users, 960 are stuck waiting. This creates massive response time degradation.

…

## Making Users Wait for Background Work

Background tasks let you queue work that runs after the HTTP response is sent. Users get their response immediately while non-essential operations happen in the background. Instead of a 3-second response (1s user creation + 2s email sending), users get a 1-second response while the email sends behind the scenes.

…

If you have a type hint or `response_model`, return raw data (dicts, database objects) and let FastAPI handle model creation. Double validation can add 20-50% overhead to response processing.
…

## The Reality Check

These optimizations provide meaningful performance improvements - typically 20-50% for well-architected applications. But they won't save you from fundamental design problems. If your API is slow because of N+1 database queries, missing indexes, or poor caching, fix those first. They'll give you 10-100x improvements that dwarf any FastAPI-specific optimizations.
## Related Pain Points
**Overusing Pydantic models throughout application causes performance overhead** (6)

Using Pydantic models as primary data structures beyond API boundaries creates significant "serialization/deserialization debt". Pydantic object creation is 6.5x slower than Python dataclasses, memory usage is 2.5x higher, and JSON operations are 1.5x slower. This compounds quickly with thousands of objects.
**Async/await complexity and blocking event loop anti-patterns** (6)

Developers frequently block event loops with sync I/O calls (e.g., using `requests` instead of `aiohttp`), throttling async performance. Missing `await` keywords surface only at runtime - as errors or "coroutine was never awaited" warnings - rather than as compile-time hints.
**Duplicate validation between request parsing and response models adds overhead** (5)

When using type hints or `response_model`, FastAPI performs double validation - once during request parsing and again during response model creation. This adds 20-50% overhead to response processing and should be avoided by returning raw data instead.
**BaseHTTPMiddleware has poor performance compared to pure ASGI** (5)

Using BaseHTTPMiddleware instead of pure ASGI middleware degrades performance. Developers should implement pure ASGI middleware for better throughput and responsiveness.
**Limited ORM integration compared to other frameworks** (5)

While FastAPI supports ORMs like SQLAlchemy and Tortoise ORM, integration is not as smooth as in other frameworks. Developers must manually choose, configure, and ensure async compatibility with their ORM of choice.
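As an illustration of the manual wiring that last point describes, a typical async SQLAlchemy 2.0 setup for FastAPI might look like the sketch below. The database URL, driver choice, and all names are assumptions, not from the article; `aiosqlite` stands in for whichever async driver (`asyncpg`, `aiomysql`, ...) you pick, and you must install it yourself.

```python
from fastapi import Depends, FastAPI
from sqlalchemy.ext.asyncio import (
    AsyncSession,
    async_sessionmaker,
    create_async_engine,
)

# The driver must be async-capable; the sync default would block the event loop
engine = create_async_engine("sqlite+aiosqlite:///./app.db")
SessionLocal = async_sessionmaker(engine, expire_on_commit=False)

app = FastAPI()

async def get_session():
    # Dependency that opens a session per request and closes it afterwards
    async with SessionLocal() as session:
        yield session

@app.get("/health")
async def health(session: AsyncSession = Depends(get_session)):
    # Placeholder endpoint; real handlers would await queries on `session`
    return {"ok": True}
```

None of this comes for free the way it does in batteries-included frameworks: the engine, session factory, and per-request dependency are all boilerplate you own.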