aerospike.com

Planning A DynamoDB...

3/27/2026 · Updated 3/29/2026

Excerpt

### Scaling bottlenecks and hot partitions

Although DynamoDB is built to scale horizontally, its internal limits hinder certain access patterns. By design, each DynamoDB partition is limited to about 3,000 read capacity units (RCUs) and 1,000 write capacity units (WCUs) per second. If a single partition key (a "hot key") receives excessive traffic, requests to that partition are throttled, a phenomenon known as the hot partition problem. Applications with uneven data access, or extremely high update rates concentrated on a small set of keys, can hit these ceilings. Additionally, DynamoDB items are capped at 400 KB in size, so applications that need to store larger objects or blobs per record cannot do so directly. Workarounds exist, such as splitting data across items or using S3 for overflow, but they add complexity. Finally, DynamoDB's 1 MB limit per Query/Scan page can make analytical scans or large result sets cumbersome to retrieve without pagination. When such limitations start affecting development, forcing designs around item size or partition throughput, some teams opt for databases with fewer constraints.

### Operational complexity and ecosystem fit

DynamoDB is operationally simple within AWS, but that strength can become a weakness if your company uses other cloud providers. As an AWS-only service, DynamoDB locks you into the AWS ecosystem, which is a problem for multi-cloud or on-premises strategies. Organizations that want cloud-provider independence, or that must deploy in their own data centers for data sovereignty or latency reasons, cannot use DynamoDB in those environments. Additionally, some teams find DynamoDB's integration model too limiting. For example, it supports neither complex queries nor stored procedures, and it requires AWS-specific tooling such as CloudWatch for monitoring, which makes it harder to build a custom monitoring pipeline with OpenTelemetry, Prometheus, or Datadog.
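The standard data-modeling mitigation for the hot-key problem described above is write sharding: spreading one hot logical key across several physical partition keys. A minimal sketch, with the shard count and the `#` separator as assumptions to tune per workload:

```python
import random

NUM_SHARDS = 10  # assumption: size to the expected peak write rate


def sharded_key(logical_key: str, shard_count: int = NUM_SHARDS) -> str:
    """Pick a random shard suffix for each write, e.g. 'user-42#7',
    so no single physical partition absorbs all the traffic."""
    return f"{logical_key}#{random.randrange(shard_count)}"


def shard_keys(logical_key: str, shard_count: int = NUM_SHARDS) -> list[str]:
    """Reads must fan out over every shard key and merge the results."""
    return [f"{logical_key}#{i}" for i in range(shard_count)]
```

The trade-off is visible in `shard_keys`: writes get cheaper to absorb, but every read of the logical key now costs one request per shard.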
Similarly, Amazon DynamoDB has no database-level user management: it does not manage users or passwords directly. Instead, it relies on the broader AWS ecosystem to authenticate users and applications securely, handling authentication primarily through AWS Identity and Access Management (IAM).

…

### Feature limitations and data model mismatches

As applications evolve, they may need capabilities that DynamoDB lacks or handles poorly. DynamoDB is schemaless and supports ACID transactions, but it lacks the rich querying and JOINs of relational databases and some NewSQL/NoSQL peers. Global data consistency is another example: DynamoDB's global tables offer multi-region replication, but only with eventual consistency, meaning writes propagate asynchronously. It cannot provide multi-region strong consistency, which is a deal-breaker for financial or compliance uses that require up-to-date reads across regions. Additionally, DynamoDB's secondary indexes are useful but constrained: a table supports at most 20 global secondary indexes (GSIs) by default, local secondary indexes must be defined when the table is created, and some query patterns remain hard to implement. If an application starts needing ad hoc queries, full-text search, graph traversals, or other features, one must either supplement DynamoDB with other services or migrate to a database that natively supports those queries. In short, evolving requirements for stronger consistency and richer queries can lead teams to look for a product with those functions.

…

### Throughput and hot-key limits

DynamoDB partitions have a finite throughput limit of approximately 3,000 RCUs and 1,000 WCUs per partition by default. If your access pattern cannot be evenly distributed, such as when one customer or one item ID is extremely popular, you risk throttling on that partition. Solutions include careful data modeling to distribute hot keys and DynamoDB's adaptive capacity, but these may not fully eliminate hot spots.
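As noted above, DynamoDB handles authentication through IAM rather than database users, so access control lives entirely in IAM policy documents. A minimal read-only policy sketch, expressed here as a Python dict; the table ARN is a placeholder and the action list is illustrative, not exhaustive:

```python
# Sketch of an IAM policy scoping an application to read-only access on
# one table. There are no DynamoDB usernames or passwords to manage;
# granting or revoking access means attaching or detaching policies
# like this one. The ARN below is a placeholder.
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:GetItem",
                "dynamodb:BatchGetItem",
                "dynamodb:Query",
            ],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Orders",
        }
    ],
}
```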
### Latency profiles

DynamoDB latency typically runs from under 10 milliseconds to tens of milliseconds. Under heavy load, latencies can spike if the table is throttling or if cross-region replication is involved. If your application demands consistently sub-5 ms or sub-1 ms latency, you will likely need an in-memory or otherwise highly optimized database. Aerospike, for instance, uses a smart client that talks directly to the node holding the data, often delivering sub-millisecond read latencies for local queries.

…

### Query and data model differences

DynamoDB requires careful upfront data modeling: primary keys and secondary indexes must be designed to satisfy known query patterns, because it does not support ad hoc queries or joins. If teams find this too restrictive, need full table scans for certain queries, or struggle with the 20-GSI-per-table limit, a new database might offer more flexibility.
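The 1 MB page limit mentioned earlier means any scan or large query must follow `LastEvaluatedKey` across pages. A minimal sketch of that loop, written against DynamoDB's page contract rather than a live client so the shape is clear (boto3's built-in paginators do the same thing under the hood):

```python
def query_all(query_page, **kwargs):
    """Drain every page of a DynamoDB-style Query/Scan by following
    LastEvaluatedKey until the server stops returning one.

    `query_page` is any callable honoring DynamoDB's page contract:
    it returns {"Items": [...]} plus an optional "LastEvaluatedKey",
    and accepts an "ExclusiveStartKey" keyword to resume.
    """
    items, start_key = [], None
    while True:
        if start_key is not None:
            kwargs["ExclusiveStartKey"] = start_key
        page = query_page(**kwargs)
        items.extend(page["Items"])
        start_key = page.get("LastEvaluatedKey")
        if start_key is None:
            return items
```

In production you would pass a bound client method here; the point is that "one query" over a large result set is really a client-driven loop, which is part of why analytical scans feel cumbersome.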

Source URL

https://aerospike.com/blog/migrating-from-amazon-dynamodb/

Related Pain Points

Hot partition problem and throughput bottlenecks

8

DynamoDB partitions are limited to approximately 3,000 read capacity units and 1,000 write capacity units per second. When a single partition key receives excessive traffic ("hot key"), it can throttle and cause performance degradation. This is a hard limit that cannot be easily worked around and affects applications with uneven data access patterns.

performance · Amazon DynamoDB · AWS

Vendor lock-in to AWS ecosystem

7

DynamoDB is AWS-only with no support for multi-cloud or on-premises deployments. Its architecture doesn't translate easily to other databases, making migration off DynamoDB expensive and time-consuming. Organizations needing cloud provider independence or data sovereignty cannot use DynamoDB.

compatibility · Amazon DynamoDB · AWS

Rigid schema and access pattern design required upfront

7

DynamoDB forces developers to decide partition and sort keys and design access patterns before product requirements crystallize. Changing queries later requires backfilling GSIs, schema migrations, and complex denormalized projections, whereas traditional databases allow simple index additions.
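Adding a query pattern after the fact means creating a GSI and waiting for DynamoDB to backfill it asynchronously, rather than a one-line `CREATE INDEX`. A sketch of the request shape such an `UpdateTable` call takes (boto3-style parameters; the table, attribute, and index names are placeholders):

```python
# Hypothetical UpdateTable parameters (boto3-style) for retrofitting a
# GSI onto an existing table. DynamoDB backfills the new index in the
# background; until that finishes, queries against it are incomplete.
add_gsi_request = {
    "TableName": "Orders",  # placeholder table name
    "AttributeDefinitions": [
        {"AttributeName": "status", "AttributeType": "S"},
    ],
    "GlobalSecondaryIndexUpdates": [
        {
            "Create": {
                "IndexName": "status-index",  # placeholder index name
                "KeySchema": [
                    {"AttributeName": "status", "KeyType": "HASH"},
                ],
                "Projection": {"ProjectionType": "ALL"},
            }
        }
    ],
}
```

Note the extra moving parts a relational index never needs: the key attribute must be declared in `AttributeDefinitions`, and the `Projection` decides up front which attributes the index copies.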

architecture · DynamoDB · AWS

No support for advanced relational features (JOINs, stored procedures, triggers)

6

DynamoDB does not support SQL JOINs, stored procedures, triggers, or complex nested structures (beyond 32 levels). Applications requiring these features must implement logic in application code or use additional services, increasing complexity and performance overhead.
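"Implement the logic in application code" usually means the application performs the join itself after fetching both item sets. A minimal sketch of an in-memory hash join (the classic substitute for a SQL inner JOIN), with field names that are purely illustrative:

```python
def hash_join(left, right, left_key, right_key):
    """Application-side equivalent of an inner JOIN: index the
    right-hand items by their join key, then probe the index with
    each left-hand item and merge matching pairs."""
    index = {}
    for r in right:
        index.setdefault(r[right_key], []).append(r)
    return [
        {**l, **r}
        for l in left
        for r in index.get(l[left_key], [])
    ]
```

This is the complexity and overhead the pain point refers to: both sides must be fetched (and paid for) in full before the join runs, and the memory cost lands on the application instead of the database.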

architecture · DynamoDB · AWS

Limited observability and monitoring without third-party tools

6

DynamoDB provides limited built-in visibility into table usage, access patterns, and cost drivers. Developers must integrate external monitoring tools like CloudWatch, Prometheus, or DataDog to understand performance issues. Issues like hot partitions and throttling aren't automatically resolved, requiring developer expertise to diagnose.
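Diagnosing throttling in practice means pulling CloudWatch's `ThrottledRequests` metric (namespace `AWS/DynamoDB`) and scanning its datapoints yourself. A sketch of that post-processing step, kept as a pure helper over the `Datapoints` list that `get_metric_statistics` returns; the threshold is an assumption to tune:

```python
def throttled_windows(datapoints, threshold=0.0):
    """Given CloudWatch-style datapoints for ThrottledRequests
    (each a dict with "Timestamp" and "Sum"), return the timestamps
    of the windows where throttling exceeded `threshold`."""
    return sorted(
        dp["Timestamp"] for dp in datapoints if dp["Sum"] > threshold
    )
```

Nothing in DynamoDB surfaces this automatically; the helper above is the kind of glue every team ends up writing, whether the datapoints come from CloudWatch directly or are exported into Prometheus or Datadog.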

monitoring · Amazon DynamoDB · AWS · CloudWatch

Latency spikes under heavy load and cross-region replication

5

DynamoDB latencies range from under 10 milliseconds to tens of milliseconds and can spike significantly under heavy load or during throttling. Cross-region replication adds additional latency. Applications requiring consistent sub-5ms or sub-1ms latency must use alternative solutions.

performance · Amazon DynamoDB · AWS

Single item size limit of 400KB

5

DynamoDB enforces a hard 400KB limit per item, significantly smaller than competing document databases (MongoDB 16MB, Cassandra 2GB). Applications storing large objects must split data across items or use external storage like S3, adding architectural complexity.
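The usual workaround for the 400 KB cap is an overflow pattern: store small payloads inline, and for anything larger keep only a pointer in the item while the blob itself goes to S3. A minimal sketch of the routing decision; the headroom value and field names are assumptions:

```python
MAX_ITEM_BYTES = 400 * 1024  # DynamoDB's hard per-item cap


def plan_storage(payload: bytes, headroom: int = 4 * 1024) -> dict:
    """Decide whether a blob fits inline in the item (leaving
    `headroom` for the item's other attributes) or must be offloaded,
    keeping only a pointer in DynamoDB. Field names are illustrative."""
    if len(payload) <= MAX_ITEM_BYTES - headroom:
        return {"storage": "inline", "body": payload}
    # Caller uploads the payload to S3 and records the object key here.
    return {"storage": "s3", "body": None}
```

The architectural complexity the pain point describes follows directly: every read of an offloaded record now needs a second round trip to S3, and the item and its blob can drift out of sync without careful cleanup.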

architecture · DynamoDB · AWS · MongoDB +1

No global consistency for distributed systems

4

DynamoDB global tables introduce ~1 second replication lag, preventing immediate global consistency across regions. Applications requiring true ACID consistency across tables or regions cannot rely on DynamoDB.

architecture · DynamoDB · AWS