PostgreSQL
Prisma v6.7.0+ queryCompiler feature introduces widespread data corruption and parsing bugs
The recently released queryCompiler feature in Prisma 6.7.0+ has introduced critical bugs affecting data integrity: JSONB columns return empty objects, String[] fields are returned as comma-separated strings instead of arrays, date fields become empty objects, and relations with @map fail to parse. Multiple users report broken functionality across PostgreSQL, D1 (SQLite), and other databases.
SQL injection remains most financially damaging application vulnerability
SQL injection vulnerabilities from interpolating unescaped user input into query strings remain the perennial top contender for the most financially damaging application security vulnerability, and developers continue to make this mistake.
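The fix is mechanical: never splice user input into the SQL string; bind it as a parameter. A minimal sketch using Python's stdlib sqlite3 driver (the table and attacker input are illustrative; the same rule applies to any PostgreSQL driver such as psycopg or node-postgres):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('root', 1)")

attacker_input = "x' OR '1'='1"

# UNSAFE: string interpolation lets the input rewrite the WHERE clause,
# so the query matches every row instead of none.
unsafe_sql = f"SELECT name FROM users WHERE name = '{attacker_input}'"
leaked = conn.execute(unsafe_sql).fetchall()

# SAFE: a bound parameter is treated strictly as data, never as SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (attacker_input,)
).fetchall()

print(len(leaked))  # 2 rows: the injection succeeded
print(len(safe))    # 0 rows: no user is literally named "x' OR '1'='1"
```

The parameterized form costs nothing extra and also lets the server cache the query plan.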
Performance bottlenecks from connection exhaustion and long-running queries
As Supabase apps scale, critical performance issues surface, including connection exhaustion and long-running queries that become operational bottlenecks. Developers lack clear monitoring signals for cache hit ratio and query performance.
Table corruption issues in PostgreSQL
PostgreSQL has experienced table corruption bugs serious enough to threaten data integrity; such issues were among the reasons organizations like Uber evaluated alternative databases.
Poor Performance with Large Data Volumes and Analytics
PostgreSQL is not optimal for applications requiring real-time or near-real-time analytics. For massive single datasets (billions of rows, hundreds of gigabytes) with frequent joins, queries can take hours. PostgreSQL lacks native columnar storage support, necessitating non-core extensions and increasing architectural complexity.
Self-Hosted Deployment Complexity
Self-hosted Sentry is a distributed system requiring management of PostgreSQL, ClickHouse, Kafka, and Redis. It demands dedicated DevOps/SRE resources for scaling and maintenance, often resulting in total cost of ownership exceeding SaaS pricing.
Scaling custom admin solutions causes cascading failures
Custom admin panels that work for small teams degrade rapidly as the user base or data grows, leading to performance issues, broken queries, and unexpected feature failures. Significant rebuilding is often required if scalability wasn't planned from day one.
No In-Place Major Version Upgrades
PostgreSQL's on-disk format changes between major versions, so there are no transparent in-place major version upgrades. Upgrades require pg_upgrade, a full dump and restore, or logical replication, together with rigorous application compatibility testing. Delaying upgrades increases complexity and risk, as outdated versions miss critical security patches, transforming routine maintenance into a complex, high-risk migration project.
Network latency and infrastructure constraints in enterprise environments
In corporate production environments, database requests traverse multiple network hops through firewalls and antivirus software, causing severe latency issues. Developers lack control over database configuration, cannot install extensions like pgvector, pg_cron, or pgcrypto, and often don't know which region their database is deployed in.
Default Security Configuration Weaknesses
PostgreSQL default installations can allow passwordless logins ('Trust' method) if not managed, lack robust password policies, do not enable SSL/TLS encryption by default, and commonly grant unnecessary superuser privileges. Many vulnerabilities stem from misconfiguration and operational oversight rather than software flaws.
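A hedged sketch of the corresponding hardening, assuming a typical pg_hba.conf / postgresql.conf layout (the address range is illustrative):

```
# pg_hba.conf -- replace any "trust" entries with password authentication:
# TYPE  DATABASE  USER  ADDRESS      METHOD
host    all       all   10.0.0.0/8   scram-sha-256

# postgresql.conf -- encrypt transport and password hashes:
ssl = on
password_encryption = scram-sha-256
```

After changing pg_hba.conf, reload the server and audit existing roles for unnecessary SUPERUSER grants.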
Schema migrations cause downtime due to exclusive locking on busy tables
Certain PostgreSQL schema changes (like adding NOT NULL UNIQUE columns or renaming columns) take an ACCESS EXCLUSIVE lock that blocks all other queries. On busy tables, a migration can queue behind long-running transactions while every new query queues behind the migration, causing production downtime. Constraint backfilling and backwards-incompatible changes require multi-step migration processes.
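A common multi-step pattern for adding a NOT NULL column without a long exclusive lock, sketched against a hypothetical `accounts` table:

```sql
-- Take locks defensively: give up rather than queue behind long queries.
SET lock_timeout = '5s';

-- 1. Add the column as nullable; in PostgreSQL 11+ this is a fast,
--    metadata-only change even with a DEFAULT.
ALTER TABLE accounts ADD COLUMN tenant_id bigint;

-- 2. Backfill in small batches, outside the migration transaction.
UPDATE accounts SET tenant_id = 1
WHERE tenant_id IS NULL AND id BETWEEN 1 AND 10000;

-- 3. Add the constraint as NOT VALID (no table scan, brief lock) ...
ALTER TABLE accounts
  ADD CONSTRAINT accounts_tenant_id_not_null
  CHECK (tenant_id IS NOT NULL) NOT VALID;

-- 4. ... then validate it, which scans the table without blocking writes.
ALTER TABLE accounts VALIDATE CONSTRAINT accounts_tenant_id_not_null;
```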
Stored procedures lack version control, CI/CD integration, and debugging capabilities
Business logic and jobs stored in PostgreSQL stored procedures have no git version control, exist only in production without documentation of authorship, and cannot be tested in CI/CD pipelines. Debugging is difficult without proper stack traces or logging tools, making maintenance a time-consuming nightmare.
Vacuum and table dependency issues under rapid workload scaling
As agentic workloads drove a 50x increase in branch creation, Neon experienced classic PostgreSQL failure modes, including query plan drift and slow vacuum operations. Tables became more dependent on aggressive vacuuming, creating performance bottlenecks that weren't anticipated in the original system design.
Horizontal scalability limitations at high load
PostgreSQL lacks native horizontal scalability features. When instance sizes become insufficient, teams experience downtime during scaling operations. Aurora vacuuming and scaling issues persist, and teams desire alternatives like CockroachDB that support true horizontal scaling without downtime.
Absence of Native Multi-Region Replication
PostgreSQL does not offer native multi-region replication capabilities. Organizations must rely on logical replication and third-party tools like pglogical or BDR, increasing vendor dependency, expertise requirements, and operational overhead while creating potential vendor lock-in risks.
Complex configuration and monitoring required for replication and high availability
Managing PostgreSQL replication requires intricate configuration and careful monitoring to prevent data loss or corruption. Achieving high availability demands automated failover mechanisms, load balancing between primary and standby servers, and selecting the appropriate replication strategy.
Fragmented development workflow switching between TypeScript and SQL
Complex database operations require writing PostgreSQL functions outside the main codebase, forcing developers to switch between TypeScript and SQL environments. This disrupts workflow and makes debugging harder for teams.
Complex workaround ecosystem with high operational overhead
Common workarounds to extend DynamoDB (OpenSearch sync, RDS dual-write, Athena/Glue, Streams) introduce additional costs ($200-$1000/month), failure modes, operational overhead, and require specialized expertise. They essentially negate DynamoDB's simplicity benefit.
Read-heavy workload performance without proper replica/caching architecture
Read-heavy workloads like reporting and analytics can severely degrade performance if read replicas and caching layers aren't properly configured. This requires upfront architectural planning that many teams delay.
Write-heavy workload bottlenecks without proper indexing and partitioning
Write-heavy workloads with financial transactions and real-time updates require careful indexing and partitioning strategies to avoid slow inserts and locking issues. Without these, performance suffers significantly.
Row-Level Security (RLS) causes severe query performance degradation
When Row-Level Security is enabled in production, query execution plans degrade dramatically. Fast SELECT queries become slow with unexpected multi-table joins, and indexes become ineffective, turning a simple database operation into a performance nightmare.
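One frequently cited mitigation from Supabase's RLS performance guidance is to index the filtered column and wrap per-row function calls in a scalar subquery so the planner evaluates them once. A sketch against a hypothetical `documents` table:

```sql
CREATE INDEX ON documents (owner_id);

-- auth.uid() called once per row defeats the index; wrapped in a scalar
-- subquery it is evaluated once and treated as a constant by the planner.
CREATE POLICY owner_read ON documents FOR SELECT
  USING (owner_id = (SELECT auth.uid()));
```

Comparing EXPLAIN ANALYZE output before and after the rewrite is the quickest way to confirm the index is back in play.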
PostgreSQL failover on Kubernetes requires additional tooling expertise
While Kubernetes can restart failed pods, it doesn't provide PostgreSQL-specific failover capabilities needed for production. Teams must implement tools like Patroni for proper leader election and failover, adding complexity and requiring dual expertise in both PostgreSQL and Kubernetes.
Per-developer environment management and resource conflicts in shared staging
Teams sharing a single staging environment face resource contention and schema conflicts when multiple developers work simultaneously. Providing per-developer staging environments requires hundreds of database copies, creating management complexity and inefficient resource allocation.
Inefficient write architecture compared to other databases
PostgreSQL's write path is comparatively inefficient: every update creates a new row version, and every secondary index must be updated to reference it, amplifying writes relative to alternatives like MySQL's InnoDB. This limitation was significant enough for organizations like Uber to switch database systems.
Schema evolution breaks tests and introduces silent failures
When making schema changes to evolve the application's data handling, modifications either break tests immediately or, worse, pass silently, so the tests no longer guarantee correctness. Fixing the resulting data integrity issues becomes an iterative process.
Building secure database access interfaces for non-technical users
Creating secure admin panels for non-technical users requires juggling encryption, access control, and usability concerns. The complexity rivals building a secondary software system, making it difficult to maintain alongside the primary application.
Query plan instability causes unpredictable performance degradation
PostgreSQL query execution plans can become unstable, causing previously well-performing queries to suddenly degrade. Developers must use advanced tools like Query Plan Management (QPM) and pg_hint_plan to ensure consistent query performance.
Control plane database CPU exhaustion from billing and consumption calculations
The 50x increase in branch creation caused the control plane database's CPU to become exhausted due to expensive billing and consumption calculations. These operations contributed significantly to the overall control plane degradation and cascading query performance issues.
Backup and disaster recovery complexity at scale
As data volume grows to terabytes and petabytes, teams struggle to establish robust backup and recovery systems that ensure zero data loss. The complexity of managing backups at scale, combined with the need for rapid recovery, creates operational burden and concerns about data durability.
SQLite flexible typing causes compatibility issues during database migration
SQLite's default flexible typing allows values of any type to be stored in any column, which works during development but causes applications to fail when migrated to stricter databases like PostgreSQL or SQL Server that enforce type rules.
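The mismatch is easy to demonstrate with Python's stdlib sqlite3 module (the table is illustrative): a declared column type is only an "affinity", so a text value lands happily in an INTEGER column.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, qty INTEGER)")
conn.execute("INSERT INTO orders VALUES (1, 'lots')")  # no error raised

row = conn.execute("SELECT qty, typeof(qty) FROM orders").fetchone()
print(row)  # ('lots', 'text') -- PostgreSQL would reject this INSERT

# SQLite 3.37+ offers STRICT tables that enforce types like PostgreSQL:
#   CREATE TABLE orders (id INTEGER, qty INTEGER) STRICT;
```

Declaring tables STRICT during development (where the SQLite version allows it) surfaces these type errors before a migration does.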
Time-consuming and error-prone SQL query creation
Building complex SQL queries is tedious, error-prone, and time-consuming. Developers frequently resort to AI assistance rather than writing queries manually, and must often redo work when requirements change or new clients appear.
Some PostgreSQL configuration parameters require a server restart and are difficult to debug
Some configuration parameters take effect only after a full server restart, making it risky to tune in production. Logs are noisy and difficult to access in ephemeral or PaaS environments, especially without tools like pgBadger.
Excessive or duplicate indexes degrade write performance and storage
Unused or duplicate indexes cause every database modification to unnecessarily update those indexes, resulting in high storage utilization and IOPs consumption. This creates a silent performance drag that isn't immediately obvious.
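The statistics views make these easy to find; a sketch that lists indexes never scanned since statistics were last reset (check replicas and recent stat resets before dropping anything):

```sql
-- Candidates for removal: indexes with zero scans, largest first.
SELECT schemaname, relname, indexrelname,
       pg_size_pretty(pg_relation_size(indexrelid)) AS size
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY pg_relation_size(indexrelid) DESC;
```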
Built-in replication cannot selectively replicate individual databases
PostgreSQL's built-in replication replicates entire clusters with the same settings and priority, not individual databases. This forces teams to manage multiple separate clusters to control replication groups, significantly increasing management complexity and operational overhead.
Lack of expressive data model understanding leads to poor schema design
Development teams unfamiliar with expressive data modeling often fail to apply important constraints like foreign keys, instead relying on familiar application-level patterns. This results in databases without essential integrity constraints.
MVCC version copying creates excessive data duplication
PostgreSQL's MVCC implementation copies all columns of a tuple when any column is modified, regardless of which columns change. This causes significant data duplication and increased storage demands, especially for large tables with many columns. No practical workaround exists without major PostgreSQL rewrites.
Complex and error-prone autovacuum configuration
Configuring autovacuum correctly is challenging due to its complexity. Default global settings are inappropriate for large tables with millions/billions of tuples. If autovacuum invocations take too long or are blocked, dead tuples accumulate and statistics become stale, causing gradual query slowdown. Manual intervention is often required.
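Per-table storage parameters are the usual escape hatch from unsuitable global defaults; a sketch for a hypothetical large table:

```sql
-- The default scale factor (0.2) lets a billion-row table accumulate
-- ~200 million dead tuples before autovacuum even triggers; tighten it:
ALTER TABLE big_events SET (
  autovacuum_vacuum_scale_factor  = 0.01,
  autovacuum_analyze_scale_factor = 0.02
);
```

These settings apply only to the named table, so hot tables can be vacuumed aggressively without changing cluster-wide behavior.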
Replicas lack true MVCC support
PostgreSQL replicas apply WAL updates, making replicas identical copies of the primary at any point in time. They don't have true replica-side MVCC support, preventing queries from reading different versions of data on replicas compared to the primary. This design poses significant constraints for distributed systems.
Missing pgvector support limits vector database integration
Prisma lacks native support for PostgreSQL's pgvector extension, forcing developers to either use raw queries (losing type safety) or unsupported type declarations with no query generation support.
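The usual workaround is Prisma's Unsupported() escape hatch, which maps the column while excluding it from the generated client's typed queries; a sketch (the model and vector dimension are illustrative):

```prisma
model Document {
  id        Int                          @id @default(autoincrement())
  // Mapped but excluded from the typed client; reads and writes of the
  // vector must go through $queryRaw / $executeRaw.
  embedding Unsupported("vector(1536)")?
}
```

Similarity searches then live in raw SQL, which works but forfeits the type safety Prisma is chosen for.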
Difficult debugging with long application workflows and complex database logic
Diagnosing failures becomes increasingly difficult when issues could originate from the database or application code. Long workflows and complex database-side logic require extensive investigation, making root cause analysis time-consuming.
Beginner Unfriendliness and Steep Learning Curve
Supabase's complexity creates barriers for novice developers, compounded by limited community support and insufficient documentation tailored to beginners. The platform requires significant SQL and database knowledge to use effectively.
Complex PostgreSQL Issues Bounced Back to Users
Supabase support sometimes refuses to help with complex PostgreSQL issues, claiming they are database questions rather than platform questions. This can leave developers stuck without recourse.
No structured development guidelines for database version control
Supabase lacks guides for structured database development. Developers must create workarounds like master GitHub repos of SQL commands and custom documentation, making collaboration and migration difficult.
Connection Pooling Neglect and Resource Exhaustion
Failing to implement connection pooling is a common mistake in PostgreSQL deployments. Each connection consumes approximately 10MB of RAM, and applications that create new connections for each database operation can quickly exhaust server resources, leading to performance degradation and application failures.
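The core idea can be sketched in a few lines: create a fixed set of connections once and reuse them, instead of opening a new one per operation. This toy pool uses stdlib sqlite3 so it runs anywhere; real deployments would use PgBouncer or a driver-level pool such as psycopg_pool, and the class name here is illustrative.

```python
import queue
import sqlite3

class SimplePool:
    """Toy fixed-size connection pool (demonstration only)."""

    def __init__(self, size: int, dsn: str = ":memory:"):
        self._conns = queue.Queue()
        for _ in range(size):
            self._conns.put(sqlite3.connect(dsn, check_same_thread=False))
        self.size = size

    def run(self, sql: str, params=()):
        conn = self._conns.get()      # block until a connection frees up
        try:
            return conn.execute(sql, params).fetchall()
        finally:
            self._conns.put(conn)     # always hand it back to the pool

pool = SimplePool(size=5)
# 1000 queries reuse the same 5 connections; without pooling this would
# have opened 1000 connections (roughly 10MB of server RAM each on
# PostgreSQL, per the figure above).
results = [pool.run("SELECT 1")[0][0] for _ in range(1000)]
```

The blocking `get()` also acts as natural backpressure: when the pool is exhausted, callers wait instead of piling more connections onto the server.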
Unorganized database schema structure due to single public schema
All tables, views, and functions default to the `public` schema in Supabase Studio, leading to poor organization and difficulty managing data as projects grow. Lack of logical separation for different data types (user data, billing, admin-only) creates maintenance challenges.
Complex querying of nested JSON data in PostgreSQL
Working with JSON data in PostgreSQL requires special operators and functions that are difficult to use, especially with nested structures. While JSON saves time and space, querying it is error-prone.
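The operator set is small once laid out; a sketch against a hypothetical `orders` table with a jsonb `payload` column:

```sql
-- -> returns jsonb, ->> returns text, #>> follows a nested path.
SELECT payload -> 'customer' ->> 'email'     AS email,
       payload #>> '{shipping,address,city}' AS city
FROM   orders
WHERE  payload @> '{"status": "paid"}';      -- containment test

-- @> can be served by a GIN index instead of a sequential scan:
CREATE INDEX ON orders USING gin (payload jsonb_path_ops);
```

The recurring mistake is using `->` (jsonb) where `->>` (text) is needed in a comparison, which fails quietly by comparing against a jsonb value.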
Networked storage introduces latency and performance challenges
Neon's re-architected PostgreSQL separates compute and storage into a networked system. This architectural change introduces new performance and latency challenges that developers must understand and mitigate compared to traditional monolithic PostgreSQL.
PostgreSQL configuration and management are overly complex with many non-obvious settings
PostgreSQL requires extensive tuning across memory management, vacuum, background writer, write-ahead log, and free-space map settings. Configuration files are long with many unnecessary options for typical users. Default logging is unhelpful for new users, and there is no built-in out-of-band monitoring to diagnose startup failures or query issues without manually launching backends.
Only receiving first validation error slows debugging cycles
PostgreSQL validation returns only the first error per record, forcing developers to iterate through multiple correction cycles to resolve all data integrity issues. This extends debugging workflows substantially.
Table Bloat from Lack of Regular VACUUM
Failing to run VACUUM regularly can lead to bloated tables and degraded performance. Without VACUUM, PostgreSQL cannot reclaim space from deleted rows, accumulating dead tuples that consume disk space and slow down query performance over time.
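Bloat is visible before it hurts; a sketch that surfaces the tables accumulating the most dead tuples:

```sql
-- High n_dead_tup relative to n_live_tup signals bloat that VACUUM
-- (or tighter autovacuum settings) should reclaim.
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
```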
Difficulty navigating complex foreign key relationships
Developers struggle to navigate database schemas with many foreign keys; jumping between related tables creates confusion and slows development, even though foreign keys improve database architecture and performance.
pg_dump and pg_restore have confusing workflows and incomplete backup defaults
PostgreSQL backup and restore tools have counter-intuitive workflows: pg_dump by default does not include global objects like roles, so backups are incomplete unless users manually dump additional information. pg_dumpall doesn't support custom format, and pg_restore requires non-obvious flags like -C to create databases. File naming conventions (.backup) are inconsistent with documentation.
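A sketch of the non-obvious but working combination (the database name is illustrative; these commands need a live server):

```shell
# Roles and tablespaces are "globals": pg_dump skips them, so dump
# them separately with pg_dumpall.
pg_dumpall --globals-only > globals.sql

# Custom-format dump of one database (compressed, pg_restore-able).
pg_dump -Fc -f mydb.dump mydb

# -C tells pg_restore to issue CREATE DATABASE itself; connect to an
# existing maintenance database (here: postgres) to run it.
pg_restore -C -d postgres mydb.dump
```

A restore drill is the only real test of a backup: replay globals.sql first, then pg_restore, on a scratch server.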
Complex decision-making between specialized PostgreSQL extensions
Developers must make complex architectural decisions about which PostgreSQL extensions to use. For example, PostGIS for geospatial needs and pgvector for ML/embedding use cases are "killer apps," but selecting and integrating specialized extensions requires significant expertise.
Difficulty managing missing primary keys at scale
Managing tables without primary keys presents challenges that scale poorly. No decent solution exists for identifying and managing missing primary keys in large-scale PostgreSQL deployments.
Advanced PostgreSQL Features Require Raw SQL Knowledge
Using advanced PostgreSQL features like custom triggers and aggregations requires developers to write raw SQL directly, adding friction for those unfamiliar with database-level programming.
Inconsistent Data Types Across Related Tables
Using inconsistent data types across tables (e.g., SERIAL vs BIGINT for primary keys) can lead to unexpected behavior and foreign key relationship issues. This creates subtle bugs and requires careful schema design coordination across development teams.
Limited ability to debug complex nested database operations
PostgreSQL provides insufficient tracing and debugging capabilities for nested operations like PL/pgSQL calls and cascaded foreign key actions. Developers cannot easily understand what is happening in complex nested contexts without extensive manual investigation.
PostgreSQL documentation lacks clarity, tutorials, and organization
PostgreSQL documentation could be improved with better organization, clearer explanations, and more practical tutorials. This affects onboarding experience and developer productivity.