DynamoDB
Backup and restore limitations at scale
For a long time DynamoDB offered no native backup and restore functionality, which is extremely problematic at scale. The inability to back up large datasets (e.g., 100TB) was a significant reason why companies like Timehop migrated away from DynamoDB.
DynamoDB cost explosion for fast-growing datasets
As datasets grow, DynamoDB automatically splits data across more partitions (roughly 10GB each), but total provisioned throughput is divided among those partitions rather than increased. Keeping per-partition throughput, and therefore query performance, steady forces continuous increases in total provisioned capacity, causing costs to spiral multi-fold (see the sketch below).
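A rough back-of-the-envelope illustration of the effect, assuming the commonly cited 10GB partition size and an even split of provisioned throughput across partitions; all figures are hypothetical:

```python
# Rough illustration (assumed figures): provisioned throughput is split evenly
# across partitions, so per-partition throughput shrinks as the table grows.

TABLE_SIZE_GB = 500          # hypothetical dataset size
PARTITION_SIZE_GB = 10       # commonly cited per-partition storage limit
PROVISIONED_RCU = 3000       # hypothetical table-level read capacity

partitions = -(-TABLE_SIZE_GB // PARTITION_SIZE_GB)   # ceiling division -> 50
rcu_per_partition = PROVISIONED_RCU / partitions       # 60 RCU per partition

print(f"{partitions} partitions, ~{rcu_per_partition:.0f} RCU each")
# To keep per-partition throughput constant while the data grows 5x, the total
# provisioned (and billed) throughput must also grow roughly 5x.
```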
Complex and costly multi-region replication setup
DynamoDB global tables require creating the same table structure in every region, duplicating schema, indexes, and throughput settings. Data replication must be configured manually, there is no zero-configuration option, and not all features carry over to replicas (no TTL, Streams, or automatic LSI/GSI replication).
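A minimal sketch of what that duplication looks like with boto3 and the original (2017) global tables API; the table name, keys, regions, and capacity values are hypothetical:

```python
import boto3

# Hypothetical table definition that must be repeated, verbatim, in every
# region: keys, attribute definitions, throughput, and stream settings.
table_spec = dict(
    TableName="Orders",
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    ProvisionedThroughput={"ReadCapacityUnits": 100, "WriteCapacityUnits": 100},
    StreamSpecification={"StreamEnabled": True, "StreamViewType": "NEW_AND_OLD_IMAGES"},
)

for region in ("us-east-1", "eu-west-1"):
    boto3.client("dynamodb", region_name=region).create_table(**table_spec)

# With the original global tables API, the replicas are then linked manually:
boto3.client("dynamodb", region_name="us-east-1").create_global_table(
    GlobalTableName="Orders",
    ReplicationGroup=[{"RegionName": "us-east-1"}, {"RegionName": "eu-west-1"}],
)
```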
Rigid schema and access pattern design required upfront
DynamoDB forces developers to decide partition and sort keys and design access patterns before product requirements crystallize. Changing queries later requires backfilling GSIs, schema migrations, and complex denormalized projections, whereas traditional databases allow simple index additions.
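For example, if a new access pattern ("list orders by status") appears after launch, a relational database needs a CREATE INDEX while DynamoDB needs a new GSI plus a background backfill. A hedged boto3 sketch, with hypothetical table and attribute names:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Hypothetical: the "Orders" table was keyed only on customer_id, and a new
# requirement ("list orders by status") arrives after launch.
dynamodb.update_table(
    TableName="Orders",
    AttributeDefinitions=[{"AttributeName": "order_status", "AttributeType": "S"}],
    GlobalSecondaryIndexUpdates=[{
        "Create": {
            "IndexName": "status-index",
            "KeySchema": [{"AttributeName": "order_status", "KeyType": "HASH"}],
            "Projection": {"ProjectionType": "ALL"},
            "ProvisionedThroughput": {"ReadCapacityUnits": 50, "WriteCapacityUnits": 50},
        }
    }],
)
# The backfill runs in the background and consumes write capacity; any
# attribute not projected into the index still requires a second lookup.
```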
Complex workaround ecosystem with high operational overhead
Common workarounds to extend DynamoDB (OpenSearch sync, RDS dual-write, Athena/Glue, Streams) introduce additional costs ($200-$1000/month), failure modes, operational overhead, and require specialized expertise. They essentially negate DynamoDB's simplicity benefit.
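One common piece of that workaround stack is a Streams-triggered Lambda that mirrors changes into a secondary store. A sketch, assuming a Lambda subscribed to the table's stream; index_document and delete_document are hypothetical helpers standing in for an OpenSearch or RDS write:

```python
# Sketch of the Streams-based sync pattern. Every extra hop like this adds
# cost and a new failure mode: retries, poison records, drift between stores.

def handler(event, context):
    for record in event["Records"]:
        if record["eventName"] in ("INSERT", "MODIFY"):
            item = record["dynamodb"]["NewImage"]       # DynamoDB-typed attributes
            index_document(item["pk"]["S"], item)        # hypothetical downstream write
        elif record["eventName"] == "REMOVE":
            delete_document(record["dynamodb"]["Keys"]["pk"]["S"])  # hypothetical helper
```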
DynamoDB provisioned throughput throttling under load
When application requests exceed provisioned read or write capacity units (RCUs/WCUs), DynamoDB throttles requests, leading to increased latency or application errors. Recovering requires retrying with backoff and manually adjusting throughput, for example via the AWS CLI or SDK.
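A hedged boto3 sketch of both sides of that workaround, with a hypothetical Orders table and arbitrary capacity numbers:

```python
import time
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

def put_with_backoff(item, retries=5):
    """Retry throttled writes with exponential backoff (table name is assumed)."""
    for attempt in range(retries):
        try:
            return dynamodb.put_item(TableName="Orders", Item=item)
        except ClientError as err:
            if err.response["Error"]["Code"] != "ProvisionedThroughputExceededException":
                raise
            time.sleep(2 ** attempt * 0.1)   # back off: 0.1s, 0.2s, 0.4s, ...
    raise RuntimeError("write still throttled after retries")

# The longer-term fix is raising provisioned capacity (or enabling auto scaling):
dynamodb.update_table(
    TableName="Orders",
    ProvisionedThroughput={"ReadCapacityUnits": 200, "WriteCapacityUnits": 200},
)
```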
S3 lacks compare-and-swap (CAS) operations
S3 has long been the only major object store without compare-and-swap (CAS) operations, a feature available in GCS, Azure Blob Store, Cloudflare R2, Tigris, and MinIO. This forces developers to use separate transactional stores like DynamoDB, creating ugly abstractions and two-phase write complexity.
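The usual shape of that workaround is a DynamoDB conditional write guarding a pointer to the object. A sketch with hypothetical table, attribute, and key names; note that the S3 upload and the pointer swap remain two separate, non-atomic steps:

```python
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

def swap_pointer(key, expected_version, new_version, new_s3_key):
    """CAS-style update of the current-version pointer for an S3 object."""
    try:
        dynamodb.put_item(
            TableName="ObjectPointers",                 # hypothetical pointer table
            Item={
                "pk": {"S": key},
                "obj_version": {"N": str(new_version)},
                "s3_key": {"S": new_s3_key},
            },
            ConditionExpression="obj_version = :expected",
            ExpressionAttributeValues={":expected": {"N": str(expected_version)}},
        )
        return True
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False   # someone else won the race; the caller must retry
        raise
```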
AWS service selection and optimization requires deep expertise
Using AWS services optimally demands general knowledge of all AWS services and their trade-offs, plus deep expertise in the chosen service (e.g., DynamoDB, Step Functions). Mediocre knowledge is insufficient, and the learning curve is steep with limited training materials available.
Unpredictable and difficult cost management
DynamoDB's on-demand pricing model can lead to unexpected expenses with variable workloads. Provisioned mode requires careful capacity planning to avoid throttling or waste, and cost monitoring is complex without proper tooling configuration.
No support for advanced relational features (JOINs, stored procedures, triggers)
DynamoDB does not support SQL JOINs, stored procedures, triggers, or complex nested structures (beyond 32 levels). Applications requiring these features must implement logic in application code or use additional services, increasing complexity and performance overhead.
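For instance, a lookup that would be a single JOIN in SQL becomes two requests plus an in-memory merge. A boto3 sketch with hypothetical Customers and Orders tables:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# What a relational database would express as one JOIN takes two round trips
# and an application-side merge here.
customer = dynamodb.get_item(
    TableName="Customers", Key={"customer_id": {"S": "c-123"}}
)["Item"]

orders = dynamodb.query(
    TableName="Orders",
    KeyConditionExpression="customer_id = :c",
    ExpressionAttributeValues={":c": {"S": "c-123"}},
)["Items"]

report = {"customer": customer, "orders": orders}   # the "join" happens here
```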
Limited transaction and batch operation support
DynamoDB's transaction support is limited and not suited to the complex multi-item transactions a SQL database provides. Developers must design their data models to avoid needing such transactions, adding architectural constraints.
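What is available is the TransactWriteItems API, which is all-or-nothing within a single call rather than an interactive begin/commit transaction. A hedged sketch with hypothetical tables and attributes:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# All items must fit in one TransactWriteItems call (capped at a fixed number
# of items and 4MB total), must not touch the same item twice, and there is
# no begin/commit/rollback as in SQL.
dynamodb.transact_write_items(
    TransactItems=[
        {
            "Put": {
                "TableName": "Orders",
                "Item": {"pk": {"S": "order-1"}, "total": {"N": "42"}},
            }
        },
        {
            "Update": {
                "TableName": "Customers",
                "Key": {"customer_id": {"S": "c-123"}},
                "UpdateExpression": "SET order_count = order_count + :one",
                "ExpressionAttributeValues": {":one": {"N": "1"}},
            }
        },
    ]
)
```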
FilterExpressions inefficiency and scan operation misuse
A FilterExpression is applied only after items have been read, so a request consumes capacity for everything scanned, not just the items returned. Developers also often abuse the Scan operation for querying instead of designing keys and indexes properly, resulting in high costs and poor performance.
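A side-by-side sketch of the anti-pattern and the index-based alternative; the table, attribute, and index names are hypothetical:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Anti-pattern: Scan reads (and bills for) every item in the table; the
# FilterExpression only trims the result set after the read.
expensive = dynamodb.scan(
    TableName="Orders",
    FilterExpression="order_status = :s",
    ExpressionAttributeValues={":s": {"S": "SHIPPED"}},
)

# Preferred: query a GSI keyed on the attribute, so only matching items are
# read and charged for.
cheap = dynamodb.query(
    TableName="Orders",
    IndexName="status-index",
    KeyConditionExpression="order_status = :s",
    ExpressionAttributeValues={":s": {"S": "SHIPPED"}},
)
```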
Steep learning curve for SQL developers
Developers transitioning from relational databases find DynamoDB's NoSQL paradigm, denormalization requirements, and access pattern-based design significantly different. The learning curve is steep, especially for understanding that third normal form schemas will fail in DynamoDB.
Single item size limit of 400KB
DynamoDB enforces a hard 400KB limit per item, significantly smaller than the limits of other NoSQL databases (MongoDB 16MB, Cassandra 2GB). Applications storing large objects must split data across items or offload it to external storage like S3, adding architectural complexity.
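A common pattern is to spill oversized payloads to S3 and keep only a pointer in the item. A sketch in which the bucket, table, and size threshold are hypothetical; the two writes can fail independently:

```python
import json
import boto3

s3 = boto3.client("s3", region_name="us-east-1")
dynamodb = boto3.client("dynamodb", region_name="us-east-1")

MAX_INLINE_BYTES = 350_000   # assumed safety margin under the 400KB item limit

def save_document(doc_id, payload: dict):
    """Store small payloads inline; offload large ones to S3 with a pointer."""
    body = json.dumps(payload).encode()
    if len(body) <= MAX_INLINE_BYTES:
        dynamodb.put_item(
            TableName="Documents",
            Item={"pk": {"S": doc_id}, "body": {"S": body.decode()}},
        )
    else:
        s3.put_object(Bucket="doc-overflow-bucket", Key=doc_id, Body=body)
        dynamodb.put_item(
            TableName="Documents",
            Item={"pk": {"S": doc_id}, "s3_key": {"S": doc_id}},
        )
```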
Fragmented console experience across multiple services
Deploying an app requires managing resources scattered across different AWS console sections (S3, CloudFront, Route 53, EC2/Fargate/Lambda+API Gateway, RDS/DynamoDB, billing alarms). These services don't integrate well out-of-the-box, forcing context switching and manual coordination.
No global consistency for distributed systems
DynamoDB global tables introduce ~1 second replication lag, preventing immediate global consistency across regions. Applications requiring true ACID consistency across tables or regions cannot rely on DynamoDB.
Limited data type support and conversion overhead
DynamoDB has limited support for data types, requiring developers to convert data back and forth manually. This adds complexity and potential for errors when working with diverse data structures.
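For example, with boto3's resource API numbers come back as Decimal, floats are rejected outright, and there is no native datetime type. A sketch with a hypothetical Metrics table:

```python
from datetime import datetime, timezone
from decimal import Decimal
import boto3

table = boto3.resource("dynamodb", region_name="us-east-1").Table("Metrics")

# Floats and datetimes are not native DynamoDB types: floats must be converted
# to Decimal (boto3 rejects float), and timestamps are stored as ISO strings
# or epoch numbers, then converted back on every read.
table.put_item(Item={
    "pk": "sensor-1",
    "reading": Decimal(str(3.14159)),                       # float -> Decimal
    "recorded_at": datetime.now(timezone.utc).isoformat(),  # datetime -> string
})

item = table.get_item(Key={"pk": "sensor-1"})["Item"]
reading = float(item["reading"])                             # Decimal -> float
recorded_at = datetime.fromisoformat(item["recorded_at"])    # string -> datetime
```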