
When NOT To Use: DynamoDB - Sam Dhar

12/1/2024 · Updated 3/6/2026

Excerpt

#### 1. Query Limitations: Speed and Simplicity Come at a Cost

DynamoDB’s design prioritizes scalability and speed, but that focus comes at the expense of query flexibility:

- **No Rich Queries**: Forget SQL-like joins, subqueries, or aggregations. DynamoDB is built for straightforward key-value lookups and range queries on secondary indexes. Anything more requires either preprocessing your data or building custom query logic in your application.
- **Table Scans Are Expensive**: When your query doesn’t align with your table or index schema, DynamoDB resorts to scanning the entire table, leading to performance degradation and skyrocketing costs.

**Why It Matters:** Applications with evolving or unpredictable query requirements will quickly hit a wall. Data architects often find themselves rewriting their models, or worse, re-architecting their entire application.

#### 2. Rigid Data Size and Attribute Limits

DynamoDB enforces tight constraints on what you can store:

- **Item Size Limit**: Each item can be at most 400 KB; items exceeding that limit are rejected with a `ValidationException`. To handle large data, consider storing oversized objects in Amazon S3 with a reference in DynamoDB, splitting the data into smaller related items, or compressing large attributes to fit the limit. This creates additional dependencies, increasing both complexity and latency.
- **Secondary Index Limitations**: Indexes inherit the same size limits as table items. If your data requires large or complex indexes, this constraint becomes a headache.

**Why It Matters:** These restrictions can bottleneck feature development. Engineers may end up building brittle workarounds that add long-term maintenance overhead.

…

- **Hot Partitions**: DynamoDB's partition-based design can cause performance issues when specific partitions experience disproportionate traffic, creating "hotspots." These occur in scenarios like a celebrity's trending profile or a new album drop from a popular musician, where sudden surges in access exceed the throughput of individual partitions, leading to throttling, latency, or failures, even if overall capacity seems sufficient.

…

#### 4. Cost Spikes

DynamoDB’s pricing model, based on provisioned or on-demand throughput, can be both a blessing and a curse:

- **Unpredictable Costs**: Applications with variable workloads are particularly vulnerable to cost spikes, especially during traffic surges.
- **Over-Provisioning Risks**: To avoid throttling, developers often over-provision read and write capacity, leading to wasted spend during off-peak times.

**Why It Matters:** While DynamoDB can save money for workloads with predictable patterns, it’s a potential budget-killer for applications with spiky or unpredictable traffic.

…

- **Access Patterns First:** Access patterns play a crucial role in designing data models in DynamoDB. Unlike traditional SQL databases, where normalization is encouraged to reduce redundancy and enhance data integrity, DynamoDB demands a more pragmatic approach focused on end-user query patterns. This often results in denormalization, where data is intentionally duplicated across multiple tables or records to optimize read performance. While such duplication can seem counterintuitive to those accustomed to relational database management systems (RDBMS), it aligns with DynamoDB’s philosophy of high availability and fast access.

…

- **Evolving Models Is Painful**: Once a data model is set, adapting to new access patterns can be time-consuming and error-prone, especially in applications with complex relationships or dynamic requirements.

**Why It Matters:** Teams new to DynamoDB often struggle to adopt its "access-pattern-first" philosophy, leading to inefficient designs and wasted development effort.

#### 6. Vendor Lock-In and Limited Deployment Options

DynamoDB’s tight integration with AWS can be a double-edged sword:

- **Proprietary Design**: DynamoDB’s architecture doesn’t translate easily to other databases. Migrating off DynamoDB often means rebuilding data models and rewriting substantial parts of the application.
- **Cloud-Only**: DynamoDB doesn’t natively support on-premises or multi-cloud deployments. If you need to run workloads outside AWS, you’re out of luck.

…

#### 7. Lack of Built-In Observability

Despite being a managed service, DynamoDB requires extensive monitoring to maintain performance and cost efficiency:

- **No Out-of-the-Box Insights**: DynamoDB provides limited visibility into table usage and access patterns. You’ll likely need to integrate tools like CloudWatch or third-party monitoring solutions.
- **Proactive Management Required**: Issues like hot partitions or throttling aren’t automatically resolved, leaving developers responsible for diagnosing and addressing them.

**Why It Matters:** Monitoring DynamoDB effectively requires both expertise and additional tooling, countering the perception that it’s a completely "hands-off" service.
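To make the scan-versus-query cost gap concrete, here is a rough back-of-the-envelope sketch. It relies on the documented billing rule that strongly consistent reads consume one read capacity unit per 4 KB read; the simplification that a Scan's cost tracks total bytes examined (real billing rounds per item) is an assumption for illustration:

```python
import math

RCU_UNIT = 4 * 1024  # strongly consistent reads are billed per 4 KB read


def read_capacity_units(bytes_read: int, consistent: bool = True) -> float:
    """Estimate RCUs consumed for reading `bytes_read` bytes."""
    units = math.ceil(bytes_read / RCU_UNIT)
    return units if consistent else units / 2


def scan_cost(table_size_bytes: int) -> float:
    # A Scan is billed on every item it examines, not just the items
    # that survive a filter expression.
    return read_capacity_units(table_size_bytes)


def query_cost(matching_bytes: int) -> float:
    # A Query touches only the items under one partition key.
    return read_capacity_units(matching_bytes)
```

On a 1 GB table, a full Scan consumes roughly 262,144 RCUs even if the filter matches nothing, while a Query returning 40 KB of items consumes about 10, which is why a query that misses your key schema is so punishing.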

Source URL

https://www.samdhar.com/distributed-mind-blog/when-not-to-use-dynamodb

Related Pain Points

Hot partition problem and throughput bottlenecks

8

DynamoDB partitions are limited to approximately 3,000 read capacity units and 1,000 write capacity units per second. When a single partition key receives excessive traffic ("hot key"), it can throttle and cause performance degradation. This is a hard limit that cannot be easily worked around and affects applications with uneven data access patterns.

performance · Amazon DynamoDB · AWS
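A common mitigation for hot keys is write sharding: append a shard suffix to the partition key so one logical key spreads across several physical partitions, then fan reads out across all shards. A minimal sketch (the key format and shard count are illustrative choices, not a library API):

```python
import random

N_SHARDS = 10  # spreads one logical key across 10 physical partitions


def write_shard_key(logical_key: str) -> str:
    """Pick a random shard suffix at write time to spread writes."""
    return f"{logical_key}#{random.randrange(N_SHARDS)}"


def all_shard_keys(logical_key: str) -> list[str]:
    """At read time, query every shard key and merge the results."""
    return [f"{logical_key}#{i}" for i in range(N_SHARDS)]
```

The trade-off is explicit: writes to a trending key like `ARTIST#beyonce` now land on ten partitions instead of one, but every read must issue ten queries and merge them in the application.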

Vendor lock-in to AWS ecosystem

7

DynamoDB is AWS-only with no support for multi-cloud or on-premises deployments. Its architecture doesn't translate easily to other databases, making migration off DynamoDB expensive and time-consuming. Organizations needing cloud provider independence or data sovereignty cannot use DynamoDB.

compatibility · Amazon DynamoDB · AWS

Rigid schema and access pattern design required upfront

7

DynamoDB forces developers to decide partition and sort keys and design access patterns before product requirements crystallize. Changing queries later requires backfilling GSIs, schema migrations, and complex denormalized projections, whereas traditional databases allow simple index additions.

architecture · DynamoDB · AWS
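What "access-pattern-first" design looks like in practice is often a single table with composite keys shaped around known queries. The `USER#`/`ORDER#` key scheme below is a made-up illustration of the style, with an in-memory stand-in for DynamoDB's key-condition query:

```python
def user_item(user_id: str) -> dict:
    return {"PK": f"USER#{user_id}", "SK": "PROFILE"}


def order_item(user_id: str, order_id: str, placed_at: str) -> dict:
    # Orders share the user's partition key, so a single Query on PK
    # can return the profile and its orders together.
    return {"PK": f"USER#{user_id}", "SK": f"ORDER#{placed_at}#{order_id}"}


def query_prefix(items: list[dict], pk: str, sk_prefix: str) -> list[dict]:
    """In-memory stand-in for Query(PK = pk AND begins_with(SK, prefix))."""
    return [i for i in items
            if i["PK"] == pk and i["SK"].startswith(sk_prefix)]
```

Note how the keys encode the queries: "fetch a user's orders newest-first" works only because `placed_at` was baked into the sort key up front. A new access pattern, say orders by status, typically means a new GSI and a backfill.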

No support for advanced relational features (JOINs, stored procedures, triggers)

6

DynamoDB does not support SQL JOINs, stored procedures, triggers, or complex nested structures (beyond 32 levels). Applications requiring these features must implement logic in application code or use additional services, increasing complexity and performance overhead.

architecture · DynamoDB · AWS
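"Implement logic in application code" usually means doing the join yourself after two separate fetches. A sketch of an application-side hash join (the record shapes are hypothetical):

```python
def join_orders_with_users(orders: list[dict], users: list[dict]) -> list[dict]:
    """Application-side equivalent of SQL: orders JOIN users ON user_id."""
    users_by_id = {u["user_id"]: u for u in users}  # build side of the hash join
    return [
        {**o, "user_name": users_by_id[o["user_id"]]["name"]}
        for o in orders
        if o["user_id"] in users_by_id  # inner join: drop unmatched orders
    ]
```

This is the hidden cost of "no JOINs": two round trips, join logic to maintain, and memory proportional to the build side, all of which the database would otherwise handle.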

Limited observability and monitoring without third-party tools

6

DynamoDB provides limited built-in visibility into table usage, access patterns, and cost drivers. Developers must integrate external monitoring tools like CloudWatch, Prometheus, or DataDog to understand performance issues. Issues like hot partitions and throttling aren't automatically resolved, requiring developer expertise to diagnose.

monitoring · Amazon DynamoDB · AWS · CloudWatch
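Because per-key traffic breakdowns are not surfaced by default, teams often end up diagnosing hot partitions from their own application-side access logs. A toy detector under that assumption (the 20% threshold is arbitrary):

```python
from collections import Counter


def hot_keys(access_log: list[str], threshold: float = 0.2) -> set[str]:
    """Flag partition keys that received more than `threshold` of all requests.

    `access_log` is a list of partition keys, one entry per request,
    as an application might record them for diagnosis.
    """
    counts = Counter(access_log)
    total = len(access_log)
    return {key for key, n in counts.items() if n / total > threshold}
```

This is exactly the kind of tooling the article argues you should not have to build yourself: the data exists inside the service, but extracting it is your problem.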

Unpredictable and difficult cost management

6

DynamoDB's on-demand pricing model can lead to unexpected expenses with variable workloads. Provisioned mode requires careful capacity planning to avoid throttling or waste, and cost monitoring is complex without proper tooling configuration.

config · DynamoDB · AWS
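The provisioned-versus-on-demand trade-off comes down to simple arithmetic: on-demand bills per request, provisioned bills for every hour whether the capacity is used or not. The unit prices below are illustrative assumptions only; check current AWS pricing for your region:

```python
# Illustrative unit prices only (assumptions, not current AWS pricing):
ON_DEMAND_PER_MILLION_WRITES = 1.25   # assumed $ per 1M write request units
PROVISIONED_WCU_HOUR = 0.00065        # assumed $ per WCU-hour

HOURS_PER_MONTH = 730


def on_demand_monthly(writes_per_month: int) -> float:
    """Pay only for requests actually made."""
    return writes_per_month / 1_000_000 * ON_DEMAND_PER_MILLION_WRITES


def provisioned_monthly(provisioned_wcu: int) -> float:
    """Pay for reserved capacity every hour, busy or idle."""
    return provisioned_wcu * PROVISIONED_WCU_HOUR * HOURS_PER_MONTH
```

Under these assumed prices, 100 WCU of provisioned capacity costs about $47/month even at zero traffic, while a spiky workload that briefly needs far more throughput can make the on-demand bill jump with it. Which mode wins depends entirely on how flat your traffic is.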

Single item size limit of 400KB

5

DynamoDB enforces a hard 400KB limit per item, significantly smaller than competing document databases (MongoDB 16MB, Cassandra 2GB). Applications storing large objects must split data across items or use external storage like S3, adding architectural complexity.

architecture · DynamoDB · AWS · MongoDB
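The usual workaround for the 400 KB cap is an S3 overflow pattern: check the item's size before writing, and if it is too big, store the blob in S3 and keep only a pointer in DynamoDB. A sketch under two stated simplifications: serialized JSON length is used as a rough proxy for item size (real sizing counts attribute names plus values), and the S3 upload itself is elided, with `blob_s3_key` being a made-up attribute name:

```python
import json

MAX_ITEM_BYTES = 400 * 1024  # DynamoDB's hard per-item limit


def item_size_bytes(item: dict) -> int:
    """Rough proxy for item size; real sizing counts attribute names + values."""
    return len(json.dumps(item).encode("utf-8"))


def prepare_item(item: dict) -> dict:
    """Return the item as-is if it fits, else a slim pointer item."""
    if item_size_bytes(item) <= MAX_ITEM_BYTES:
        return item
    # Oversized: keep metadata in DynamoDB, point to the blob in S3.
    # (The actual S3 upload is elided in this sketch.)
    slim = {k: v for k, v in item.items() if k != "blob"}
    slim["blob_s3_key"] = f"items/{item['id']}/blob"
    return slim
```

This keeps writes under the limit, but it is precisely the "additional architectural complexity" the pain point describes: every read of a large item now needs a second fetch from S3, and the two stores can drift out of sync.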