Common Dynamo Errors

Understand these common errors that lead to delivery delays and failures.

Written by Ben Mills

When using BitBrew's AWS DynamoDB integration, some setup errors can delay delivery or even prevent it altogether.

Here's a list of common errors to watch out for, starting with...

The Most Common Dynamo Error

Problem: Provisioned Throughput Set Too Low

When creating a DynamoDB table, the "default settings" checkbox is selected automatically, which limits your provisioned capacity to 5 reads and 5 writes per second.

Even in the pilot phase, when you're testing with one or two devices, this setting can be too low. The number of events awaiting delivery at any given time depends heavily on the device configuration: intermittent uploads of many events lead to big bursts.

At low device volumes, there are lulls when no devices are sending data, so there may be enough time to clear the backlog of events. At higher volumes, where the incoming event rate continually exceeds the provisioned throughput, the backlog will keep growing.

As part of the platform's reliable delivery guarantee, we buffer data that cannot be delivered for 120 hours. If the backlog is not completely cleared in that time, the destination will be cancelled and all buffered data will be dropped.
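To see how quickly a sustained overload can pile up, here's a rough back-of-the-envelope sketch. The 5 writes/second default and 120-hour buffer come from this article; the incoming event rate is a made-up example:

```python
# Rough backlog estimate: if events arrive faster than the provisioned
# write capacity, the surplus accumulates until the 120-hour buffer expires.
# The incoming rate below is a hypothetical example, not a real measurement.

BUFFER_HOURS = 120          # platform's reliable-delivery window
WRITE_CAPACITY = 5          # default provisioned writes per second
incoming_rate = 8           # hypothetical sustained events per second

surplus_per_second = incoming_rate - WRITE_CAPACITY   # 3 events/sec undelivered
backlog_at_expiry = surplus_per_second * BUFFER_HOURS * 3600

print(backlog_at_expiry)    # 1296000 events still queued when the window closes
```

Even a modest 3 events/second of sustained overflow leaves well over a million events undeliverable when the buffer window closes.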

Solution: Increase Throughput Or Enable Auto-Scaling
For a production system, auto-scaling is the recommended way to provision throughput. If you have a good understanding of your throughput needs, you can also statically define the table's write capacity.

Other Common Dynamo Errors

Problem: Incorrect Credentials

If a secret key has been updated on the destination side and has not been updated on the BitBrew side, the platform will not be able to write any events to the table.

Solution: Update Credentials

Update your credentials by clicking the Edit button on the right side of the Destination list.

Problem: Incorrect Primary Key

AWS DynamoDB tables index on a primary key, which is composed of an obligatory partition key and an optional sort key. These keys are set when the table is created.

Any event written to the table must have a top-level attribute (that is, column name) with a key name that matches the pre-defined primary key exactly. If there is a mismatch, even in capitalization, events will not be delivered.
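Because the match is case-sensitive, a quick pre-flight check can catch a mismatch before any data is lost. A minimal sketch, with hypothetical key and event names:

```python
# Sketch: verify an event carries a top-level attribute that exactly matches
# the table's partition key. DynamoDB key matching is case-sensitive, so an
# attribute named "eventid" does not satisfy a partition key named "eventId".

def has_partition_key(event: dict, partition_key: str) -> bool:
    """True only if the event has a top-level attribute with the exact key name."""
    return partition_key in event

event = {"eventid": "abc-123", "deviceId": "dev-42"}  # hypothetical event

print(has_partition_key(event, "eventId"))   # False -- capitalization differs
print(has_partition_key(event, "eventid"))   # True
```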

Solution: Recreate Table with Correct Keys

You must delete and recreate your table with a primary key that matches one of the top-level attributes provided in the events we deliver. 

As long as you give the table the exact same name, you won't have to change anything on the platform side.
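If you create the replacement table with boto3, the shape of the request looks roughly like this. The table name and attribute type are illustrative; only the key names need to match your event attributes exactly:

```python
# Sketch of create_table parameters for the replacement table. The table name
# must be identical to the deleted table's name so the platform configuration
# doesn't change; "eventId" as partition key is an example choice.

create_params = {
    "TableName": "MyEventsTable",   # must match the deleted table exactly
    "KeySchema": [
        {"AttributeName": "eventId", "KeyType": "HASH"},  # partition key
    ],
    "AttributeDefinitions": [
        {"AttributeName": "eventId", "AttributeType": "S"},  # string type
    ],
    "BillingMode": "PAY_PER_REQUEST",  # or provisioned capacity, per above
}

# Usage (requires boto3 and AWS credentials):
#   import boto3
#   boto3.client("dynamodb").create_table(**create_params)
```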

We strongly recommend using eventId as your partition key, since it is the event's only unique identifier. A non-unique key could cause events to be overwritten.
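To see why a non-unique partition key is dangerous: a DynamoDB PutItem on a table keyed only by a partition key replaces any existing item with the same key value. A sketch of that semantics, with a plain dict standing in for the table and hypothetical events:

```python
# Sketch: PutItem semantics on a table with only a partition key -- a second
# write with the same key value silently replaces the first item. A dict
# keyed by the partition-key value stands in for the table here.

table = {}

def put_item(item: dict, partition_key: str) -> None:
    table[item[partition_key]] = item

# Keying on a non-unique attribute like deviceId loses data:
put_item({"deviceId": "dev-42", "eventId": "evt-1"}, "deviceId")
put_item({"deviceId": "dev-42", "eventId": "evt-2"}, "deviceId")
print(len(table))  # 1 -- the first event was overwritten

# Keying on the unique eventId keeps both:
table.clear()
put_item({"deviceId": "dev-42", "eventId": "evt-1"}, "eventId")
put_item({"deviceId": "dev-42", "eventId": "evt-2"}, "eventId")
print(len(table))  # 2
```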

Problem: Insufficient Permissions

Occasionally, the AWS credentials that are provided to the BitBrew platform when you're creating a Dynamo destination do not have the correct permissions to write to a table, so events will not get delivered.

Solution: Update Permissions or Provide New Credentials to BitBrew

There are two options for fixing this issue:

1. You can change the permissions of the AWS user whose credentials you have provided to BitBrew for the Dynamo destination. In this case, you will not need to update the credentials you've provided to the platform.

2. You can create or select a different AWS user who has the appropriate permissions and update the credentials that the platform has by clicking the Edit button on the right side of the Destination list.
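For option 1, the policy attached to that AWS user needs DynamoDB write actions on the destination table. A minimal sketch of such a policy, where the region, account id, and table name are placeholders:

```python
# Sketch of a minimal IAM policy granting write access to one DynamoDB table.
# The region, account id, and table name in the ARN are placeholders.

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:PutItem",
                "dynamodb:BatchWriteItem",
                "dynamodb:DescribeTable",
            ],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/MyEventsTable",
        }
    ],
}
```

Scoping the Resource to the single table ARN, rather than `*`, keeps the credentials you hand to a third party as narrow as possible.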
