What You Should Know Before Starting with DynamoDB - YugabyteDB
## The Bad

### 5. Cost Effectiveness

As highlighted in The Million Dollar Engineering Problem, DynamoDB's pricing model can easily make it the single most expensive AWS service for a fast-growing company. Here are the top six reasons why DynamoDB costs spiral out of control.

**Over-provisioning to handle hot partitions**

…

**Cost explosion for fast growing datasets**

The post You probably shouldn't use DynamoDB highlights why DynamoDB is a poor choice for fast-growing datasets. As data grows, the number of partitions grows with it in order to scale the data out automatically (each partition holds a maximum of 10GB). However, the total provisioned throughput for the table does not increase. The throughput available to each partition therefore decreases continuously as the data grows. To keep up with the existing rate of queries, total throughput has to be raised again and again, multiplying the total cost!

…

## The Ugly

### 10. Strong Consistency with High Availability

In terms of the CAP theorem, DynamoDB is an available and partition-tolerant (AP) database with eventual write consistency. On the read front, it supports both *eventually consistent* and *strongly consistent* reads. However, strongly consistent reads in DynamoDB are not highly available in the presence of network delays and partitions. Since such failures are common in multi-region/global apps running on public clouds such as AWS, DynamoDB reduces them by restricting strongly consistent reads to a single region. This in turn makes DynamoDB unfit for most multi-region apps and an unreliable option for even single-region apps.
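The throughput-dilution effect described above can be sketched in a few lines. This is a simplified model that only counts size-based partition splits (in practice DynamoDB also splits partitions when provisioned throughput exceeds per-partition limits); the function name and the 4,000 RCU figure are illustrative, not from the original post.

```python
import math

PARTITION_MAX_GB = 10  # DynamoDB splits a table into partitions of at most ~10 GB


def per_partition_rcu(table_size_gb: float, provisioned_rcu: int) -> float:
    """Provisioned read throughput each partition receives, assuming the
    total is divided evenly across size-based partitions (simplified model)."""
    partitions = max(1, math.ceil(table_size_gb / PARTITION_MAX_GB))
    return provisioned_rcu / partitions


# A table provisioned at 4,000 RCU: as the data grows 10x and then 100x,
# each partition's share of the throughput shrinks by the same factor,
# even though the provisioned (and billed) total stays the same.
for size_gb in (10, 100, 1000):
    print(f"{size_gb:>5} GB -> {per_partition_rcu(size_gb, 4000):.0f} RCU/partition")
```

To hold per-partition throughput steady at the original 4,000 RCU, the table at 1,000 GB would need 400,000 RCU provisioned, which is exactly the multi-fold cost growth the post warns about.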
Related Pain Points
Hot partition problem and throughput bottlenecks
DynamoDB partitions are limited to approximately 3,000 read capacity units and 1,000 write capacity units per second. When a single partition key receives excessive traffic (a "hot key"), requests to it are throttled and performance degrades. This is a hard limit that cannot be easily worked around, and it affects applications with uneven data access patterns.
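The point that table-level provisioning cannot save a hot key can be shown with a minimal check against the per-partition ceilings quoted above. The function and traffic numbers are hypothetical; the 3,000/1,000 limits are the ones stated in the text.

```python
PARTITION_RCU_LIMIT = 3000  # per-partition read capacity units per second
PARTITION_WCU_LIMIT = 1000  # per-partition write capacity units per second


def key_is_throttled(key_rcu_per_s: float, key_wcu_per_s: float) -> bool:
    """True when a single partition key's traffic exceeds what one partition
    can serve; the table's *total* provisioned capacity is irrelevant here,
    because one key always lives on one partition."""
    return key_rcu_per_s > PARTITION_RCU_LIMIT or key_wcu_per_s > PARTITION_WCU_LIMIT


# Even a table provisioned at 50,000 RCU throttles if one key draws 5,000:
print(key_is_throttled(5000, 100))   # reads on this key exceed 3,000 RCU/s
print(key_is_throttled(2000, 500))   # within a single partition's limits
```

This is why mitigation has to happen in the data model (e.g. spreading a hot key across synthetic suffixes) rather than by provisioning more table-level capacity.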
DynamoDB cost explosion for fast-growing datasets
As datasets grow, DynamoDB automatically increases the number of partitions (10GB max per partition) but does not increase total provisioned throughput proportionally. This forces continuous throughput increases to maintain query performance, causing costs to spiral multi-fold.
No global consistency for distributed systems
DynamoDB global tables introduce ~1 second of replication lag, preventing immediate global consistency across regions. Applications requiring true ACID consistency across tables or regions cannot rely on DynamoDB.
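The practical consequence of that ~1 second lag is that a read-after-write against the *other* region can return stale (or missing) data. A toy two-region model, assuming the replication-lag figure above, makes the failure mode concrete; the class, region names, and timings are illustrative, not the DynamoDB API.

```python
import heapq


class TwoRegionStore:
    """Toy model of a two-region table with asynchronous replication:
    a write lands in its local region immediately and reaches the remote
    region only after `lag` seconds (global tables: roughly 1 s)."""

    def __init__(self, lag: float = 1.0):
        self.lag = lag
        self.regions = {"us-east-1": {}, "eu-west-1": {}}
        self.pending = []  # min-heap of (apply_time, region, key, value)

    def put(self, region: str, key: str, value: str, now: float) -> None:
        self.regions[region][key] = value
        for other in self.regions:
            if other != region:
                heapq.heappush(self.pending, (now + self.lag, other, key, value))

    def get(self, region: str, key: str, now: float):
        # Apply every replication event that has "arrived" by time `now`.
        while self.pending and self.pending[0][0] <= now:
            _, r, k, v = heapq.heappop(self.pending)
            self.regions[r][k] = v
        return self.regions[region].get(key)


store = TwoRegionStore(lag=1.0)
store.put("us-east-1", "user:1", "v2", now=0.0)
print(store.get("us-east-1", "user:1", now=0.1))  # local region sees v2 at once
print(store.get("eu-west-1", "user:1", now=0.1))  # remote replica not yet caught up
print(store.get("eu-west-1", "user:1", now=1.5))  # visible only after the lag
```

Any workflow that writes in one region and immediately reads in another lives inside that window, which is why cross-region read-after-write cannot be treated as consistent on global tables.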