
DynamoDB

17 pains · avg 6.2/10
architecture 6 · config 2 · ecosystem 2 · performance 2 · storage 1 · deploy 1 · compatibility 1 · docs 1 · dx 1

Backup and restore limitations at scale

Pain score: 9/10

DynamoDB's native backup and restore functionality is severely limited at scale. The inability to back up very large datasets (e.g., 100 TB) was a significant reason companies like Timehop migrated away from DynamoDB.

storage · DynamoDB

DynamoDB cost explosion for fast-growing datasets

Pain score: 8/10

As a dataset grows, DynamoDB automatically splits it into more partitions (at most ~10 GB each) but spreads the existing provisioned throughput across them rather than increasing it proportionally. Maintaining query performance therefore forces continuous throughput increases, and costs can spiral severalfold.

config · DynamoDB · AWS
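The scaling pressure above can be sketched with a back-of-envelope model. The per-partition figures (~10 GB of storage, ~3,000 RCUs, ~1,000 WCUs) come from AWS's published guidance; the functions are illustrative, not an exact model of DynamoDB's internal partitioning:

```python
import math

def estimated_partitions(size_gb: float, rcu: int, wcu: int) -> int:
    """Rough partition count: whichever of size, reads, or writes dominates."""
    by_size = math.ceil(size_gb / 10)     # ~10 GB storage per partition
    by_reads = math.ceil(rcu / 3000)      # ~3,000 RCUs per partition
    by_writes = math.ceil(wcu / 1000)     # ~1,000 WCUs per partition
    return max(by_size, by_reads, by_writes, 1)

def throughput_per_partition(total_capacity: int, partitions: int) -> float:
    """Provisioned throughput is divided roughly evenly across partitions."""
    return total_capacity / partitions

# A 500 GB table with 6,000 RCUs ends up with ~50 partitions driven by
# storage size, leaving only ~120 RCUs per partition, so hot keys throttle
# unless total throughput is raised further.
parts = estimated_partitions(size_gb=500, rcu=6000, wcu=2000)
per_part = throughput_per_partition(6000, parts)
```

This is the mechanism behind the cost spiral: the partition count is driven by data size, but per-partition capacity shrinks as partitions multiply.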

Complex and costly multi-region replication setup

Pain score: 8/10

DynamoDB global tables require manually creating the same table structure in every region, duplicating the schema, indexes, and throughput settings. Replication must be configured by hand, there is no zero-configuration option, and not all features carry across replicas (no TTL, Streams, or automatic LSI/GSI replication).

deploy · DynamoDB

Rigid schema and access pattern design required upfront

Pain score: 7/10

DynamoDB forces developers to decide partition and sort keys and design access patterns before product requirements crystallize. Changing queries later requires backfilling GSIs, schema migrations, and complex denormalized projections, whereas traditional databases allow simple index additions.

architecture · DynamoDB · AWS
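As a sketch of what that upfront commitment looks like, here is a boto3-style `create_table` parameter set (the table, attribute, and index names are hypothetical):

```python
# Partition key, sort key, and every secondary index must be declared
# around known access patterns before the table exists. A new query
# pattern later ("orders by status") means adding a GSI and waiting for
# a backfill, not just issuing CREATE INDEX.
create_table_params = {
    "TableName": "Orders",
    "AttributeDefinitions": [
        {"AttributeName": "customer_id", "AttributeType": "S"},
        {"AttributeName": "order_date", "AttributeType": "S"},
        {"AttributeName": "status", "AttributeType": "S"},
    ],
    "KeySchema": [
        {"AttributeName": "customer_id", "KeyType": "HASH"},   # partition key
        {"AttributeName": "order_date", "KeyType": "RANGE"},   # sort key
    ],
    "GlobalSecondaryIndexes": [
        {
            "IndexName": "status-index",
            "KeySchema": [{"AttributeName": "status", "KeyType": "HASH"}],
            "Projection": {"ProjectionType": "ALL"},
        }
    ],
    "BillingMode": "PAY_PER_REQUEST",
}
```

Everything in `KeySchema` is effectively immutable: changing the partition or sort key later means creating a new table and migrating the data.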

Complex workaround ecosystem with high operational overhead

Pain score: 7/10

Common workarounds to extend DynamoDB (syncing to OpenSearch, dual-writing to RDS, Athena/Glue pipelines, Streams consumers) introduce additional cost ($200-$1,000/month), new failure modes, and operational overhead, and they demand specialized expertise. Taken together, they essentially negate DynamoDB's simplicity benefit.

ecosystem · DynamoDB · AWS · OpenSearch +4

DynamoDB provisioned throughput throttling under load

Pain score: 7/10

When application requests exceed the provisioned read or write capacity units (RCUs/WCUs), DynamoDB throttles them, causing increased latency or application errors. Recovering requires manually raising throughput (e.g., via the AWS CLI or console) or configuring auto scaling.

performance · DynamoDB · AWS
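A common client-side mitigation is retrying throttled calls with exponential backoff. A minimal sketch, with a stand-in exception in place of the SDK's `ProvisionedThroughputExceededException`:

```python
import time

class ThrottledError(Exception):
    """Stand-in for the SDK's ProvisionedThroughputExceededException."""

def backoff_delay(attempt: int, base: float = 0.05, cap: float = 5.0) -> float:
    """Exponential backoff: 0.05 s, 0.1 s, 0.2 s, ... capped at `cap`."""
    return min(cap, base * (2 ** attempt))

def call_with_retries(operation, max_retries: int = 5):
    """Run `operation`, sleeping and retrying whenever it is throttled."""
    for attempt in range(max_retries):
        try:
            return operation()
        except ThrottledError:
            time.sleep(backoff_delay(attempt))
    raise ThrottledError("still throttled after retries")
```

Backoff only smooths over brief bursts; sustained traffic above the provisioned capacity still needs a throughput increase or a switch to on-demand mode.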

S3 lacks compare-and-swap (CAS) operations

Pain score: 7/10

S3 is the only major object store without compare-and-swap (CAS) operations, a feature available in GCS, Azure Blob Storage, Cloudflare R2, Tigris, and MinIO. This forces developers to pair S3 with a separate transactional store such as DynamoDB, creating awkward abstractions and two-phase write complexity.

compatibility · Amazon S3 · DynamoDB · Google Cloud Storage +1
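The usual workaround pattern is optimistic concurrency against the transactional store: each write checks a version number, the way a DynamoDB `ConditionExpression` (e.g., `#v = :expected`) would guard the S3 object's metadata. A toy sketch, with an in-memory dict standing in for the DynamoDB table:

```python
# In-memory stand-in for the DynamoDB table that holds object metadata.
table: dict[str, dict] = {}

def cas_put(key: str, value: str, expected_version: int) -> bool:
    """Write only if the stored version matches; True on success.

    Mirrors a conditional write: the version check and the update happen
    as one step, so concurrent writers cannot both succeed.
    """
    current = table.get(key, {"version": 0})
    if current["version"] != expected_version:
        return False  # lost the race; caller must re-read and retry
    table[key] = {"value": value, "version": expected_version + 1}
    return True
```

The pain point is that this check-then-write lives in a second system, so every S3 update becomes a two-phase operation with its own failure modes.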

AWS service selection and optimization requires deep expertise

Pain score: 7/10

Using AWS services optimally demands general knowledge of all AWS services and their trade-offs, plus deep expertise in the chosen service (e.g., DynamoDB, Step Functions). Mediocre knowledge is insufficient, and the learning curve is steep with limited training materials available.

ecosystem · AWS · DynamoDB · Step Functions

Unpredictable and difficult cost management

Pain score: 6/10

DynamoDB's on-demand pricing model can lead to unexpected expenses with variable workloads. Provisioned mode requires careful capacity planning to avoid throttling or waste, and cost monitoring is complex without proper tooling configuration.

config · DynamoDB · AWS
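A rough sketch of the on-demand vs provisioned trade-off. The prices are illustrative placeholders, not current AWS rates; check the DynamoDB pricing page for your region:

```python
# Assumed unit prices (USD) for illustration only.
ON_DEMAND_PER_MILLION_WRITES = 1.25
ON_DEMAND_PER_MILLION_READS = 0.25
PROVISIONED_WCU_HOUR = 0.00065
PROVISIONED_RCU_HOUR = 0.00013

def on_demand_monthly(reads: int, writes: int) -> float:
    """Pay per request: cheap when idle, expensive under heavy load."""
    return (reads / 1e6) * ON_DEMAND_PER_MILLION_READS \
         + (writes / 1e6) * ON_DEMAND_PER_MILLION_WRITES

def provisioned_monthly(rcu: int, wcu: int, hours: int = 730) -> float:
    """Pay for capacity: billed whether or not it is used."""
    return rcu * PROVISIONED_RCU_HOUR * hours + wcu * PROVISIONED_WCU_HOUR * hours

# 100M reads + 10M writes/month on-demand vs a 1,000 RCU / 100 WCU table.
od = on_demand_monthly(reads=100_000_000, writes=10_000_000)
prov = provisioned_monthly(rcu=1000, wcu=100)
```

The difficulty the entry describes falls out of this model: variable traffic makes `on_demand_monthly` unpredictable, while `provisioned_monthly` charges for headroom you may never use.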

No support for advanced relational features (JOINs, stored procedures, triggers)

Pain score: 6/10

DynamoDB does not support SQL JOINs, stored procedures, or triggers, and caps nested structures at 32 levels. Applications that need these features must implement the logic in application code or bolt on additional services, increasing complexity and performance overhead.

architecture · DynamoDB · AWS
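Without JOINs, a query like "customers with their orders" becomes multiple reads plus an in-application merge. A minimal sketch with hypothetical item shapes:

```python
# Results of two separate reads (e.g., a Query per table) that SQL would
# have combined in one JOIN.
customers = [{"id": "c1", "name": "Ada"}, {"id": "c2", "name": "Lin"}]
orders = [
    {"order_id": "o1", "customer_id": "c1", "total": 30},
    {"order_id": "o2", "customer_id": "c1", "total": 12},
]

def join_customers_orders(customers: list[dict], orders: list[dict]) -> list[dict]:
    """Hash-join in application code: group orders, then attach per customer."""
    by_customer: dict[str, list[dict]] = {}
    for order in orders:
        by_customer.setdefault(order["customer_id"], []).append(order)
    return [{**c, "orders": by_customer.get(c["id"], [])} for c in customers]
```

This logic now lives in every service that needs it, and each join costs extra round trips and read capacity instead of a single server-side query.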

Limited transaction and batch operation support

Pain score: 6/10

DynamoDB's transaction support is limited and not suited to the complex multi-item transactions SQL databases provide. Developers must design their data models to avoid needing such transactions, which imposes extra architectural constraints.

architecture · DynamoDB
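What DynamoDB does offer goes through `TransactWriteItems`, which caps a transaction at 100 items and supports only Put, Update, Delete, and ConditionCheck actions. A sketch of a two-item balance transfer in the boto3 parameter shape (table and attribute names are hypothetical):

```python
MAX_TRANSACT_ITEMS = 100  # hard limit per TransactWriteItems call

def build_transfer(from_acct: str, to_acct: str, amount: int) -> dict:
    """Debit one account and credit another atomically (request shape only)."""
    items = [
        {"Update": {
            "TableName": "Accounts",
            "Key": {"account_id": {"S": from_acct}},
            "UpdateExpression": "SET balance = balance - :a",
            "ConditionExpression": "balance >= :a",  # reject overdrafts
            "ExpressionAttributeValues": {":a": {"N": str(amount)}},
        }},
        {"Update": {
            "TableName": "Accounts",
            "Key": {"account_id": {"S": to_acct}},
            "UpdateExpression": "SET balance = balance + :a",
            "ExpressionAttributeValues": {":a": {"N": str(amount)}},
        }},
    ]
    assert len(items) <= MAX_TRANSACT_ITEMS
    return {"TransactItems": items}
```

Anything beyond this shape, such as long-running transactions, interactive transactions, or more than 100 items, has to be redesigned away or coordinated in application code.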

FilterExpressions inefficiency and scan operation misuse

Pain score: 6/10

A FilterExpression is applied only after items have been read, so DynamoDB consumes read capacity for every item scanned, not just those returned. Developers also often misuse Scan for querying instead of proper key conditions and indexes, resulting in high costs and poor performance.

performance · DynamoDB
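A toy model of why this is expensive: the filter runs after the read, so capacity is consumed for everything the key condition (or Scan) touches, regardless of how little survives the filter:

```python
def query_with_filter(items, key_match, post_filter):
    """Model DynamoDB's read-then-filter order.

    Returns the filtered items plus the scanned count, which is what read
    capacity is actually billed against.
    """
    scanned = [i for i in items if key_match(i)]       # billed work
    returned = [i for i in scanned if post_filter(i)]  # filtering happens late
    return returned, len(scanned)

# 100 items under one partition key, only 2 of which match the filter.
items = [{"pk": "u1", "status": s} for s in ["open"] * 2 + ["done"] * 98]
returned, scanned_count = query_with_filter(
    items,
    key_match=lambda i: i["pk"] == "u1",           # all 100 read and billed
    post_filter=lambda i: i["status"] == "open",   # 2 returned
)
```

The fix is to make the selective attribute part of a key, e.g., a GSI keyed on `status`, so the database reads only the items it will return.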

Steep learning curve for SQL developers

Pain score: 5/10

Developers transitioning from relational databases find DynamoDB's NoSQL paradigm, denormalization requirements, and access pattern-based design significantly different. The learning curve is steep, especially for understanding that third normal form schemas will fail in DynamoDB.

docs · DynamoDB

Single item size limit of 400KB

Pain score: 5/10

DynamoDB enforces a hard 400KB limit per item, significantly smaller than competing document databases (MongoDB 16MB, Cassandra 2GB). Applications storing large objects must split data across items or use external storage like S3, adding architectural complexity.

architecture · DynamoDB · AWS · MongoDB +1
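A common workaround is chunking large payloads across multiple items that share a partition key, with a chunk index as the sort key. A minimal sketch (the attribute names and headroom figure are assumptions):

```python
ITEM_LIMIT = 400 * 1024   # DynamoDB's hard per-item cap
CHUNK_SIZE = 350 * 1024   # leave headroom for keys and attribute overhead

def split_into_items(pk: str, payload: bytes, chunk_size: int = CHUNK_SIZE):
    """Split a payload into items: same partition key, ordered sort keys."""
    return [
        {"pk": pk, "sk": f"chunk#{i:05d}", "data": payload[off:off + chunk_size]}
        for i, off in enumerate(range(0, len(payload), chunk_size))
    ]

def reassemble(items) -> bytes:
    """Rebuild the payload; zero-padded sort keys make lexical order correct."""
    return b"".join(i["data"] for i in sorted(items, key=lambda i: i["sk"]))
```

The cost is that reads and writes of one logical object become multi-item operations, with partial-failure handling the application now owns; past a point, offloading the blob to S3 and storing only a pointer is simpler.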

Fragmented console experience across multiple services

Pain score: 5/10

Deploying an app requires managing resources scattered across different AWS console sections (S3, CloudFront, Route 53, EC2/Fargate/Lambda+API Gateway, RDS/DynamoDB, billing alarms). These services don't integrate well out-of-the-box, forcing context switching and manual coordination.

dx · AWS · S3 · CloudFront +7

No global consistency for distributed systems

Pain score: 4/10

DynamoDB global tables introduce ~1 second replication lag, preventing immediate global consistency across regions. Applications requiring true ACID consistency across tables or regions cannot rely on DynamoDB.

architecture · DynamoDB · AWS

Limited data type support and conversion overhead

Pain score: 3/10

DynamoDB supports only a limited set of data types, so developers must convert values back and forth manually (for example, numbers travel as strings on the wire, and there is no native datetime type). This adds complexity and room for error when working with diverse data structures.

architecture · DynamoDB
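To illustrate the conversion overhead, here is a minimal (and deliberately incomplete) converter to DynamoDB's attribute-value wire format, which wraps every value in a type descriptor and carries numbers as strings; AWS SDKs ship similar marshalling helpers, but applications still pay the round-trip at every boundary:

```python
from decimal import Decimal

def to_attribute_value(value):
    """Convert a Python value to DynamoDB's {"S": ...}/{"N": ...} format."""
    if isinstance(value, str):
        return {"S": value}
    if isinstance(value, bool):          # must precede int: bool subclasses int
        return {"BOOL": value}
    if isinstance(value, (int, Decimal)):
        return {"N": str(value)}         # numbers are transmitted as strings
    if value is None:
        return {"NULL": True}
    if isinstance(value, list):
        return {"L": [to_attribute_value(v) for v in value]}
    if isinstance(value, dict):
        return {"M": {k: to_attribute_value(v) for k, v in value.items()}}
    raise TypeError(f"unsupported type: {type(value).__name__}")
```

Note what is missing: floats, datetimes, and custom classes all need an application-chosen encoding before they can be stored at all, which is exactly the conversion burden this entry describes.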