This page describes how you can run Styra DAS on AWS using native AWS cloud-scale storage services such as S3 and DynamoDB (DDB). Because DAS is deployed as containers, AWS compute and networking resources are largely abstracted away from it. For this reason, Styra focuses here on AWS storage resources, DAS information flows, and the persistent state DAS manages, as shown in Figure 1.

Figure 1: DAS on AWS Storage Resources

Policy and JSON storage is central to DAS. It persists all policies and related JSON data, including all system configurations and live system status data. From the stored policies and data, DAS builds the policy bundles for the connected OPAs and saves them to a bundle registry for the OPAs to download. Once loaded with policies, the OPAs stream their policy decisions and status data back to DAS. DAS persists the decisions, builds time series for decision statistics, and indexes the received decisions for search. If so configured, DAS also streams the received decisions to an external system for additional processing. Finally, DAS maintains user profiles with their credentials and RBAC information, API tokens for accessing its APIs, and any external credentials it may need to access third-party APIs.

DAS Storage Functions

This section describes DAS storage functions and how they are implemented with a mix of Amazon DynamoDB (DDB), Amazon Simple Storage Service (S3), AWS Key Management Service (KMS), and Elasticsearch (ES) services.

Policy and JSON Storage

Policy and JSON storage stores all policies, any related JSON data, system configurations and system status data. This storage is implemented on top of the following two AWS services:

  1. S3: This stores the objects for the append-only commit logs, which hold the policies, JSON, configurations, and status information in a delta-encoded representation. DAS uses a single bucket named storage to hold all of these objects.

  2. DDB: To quickly identify which objects to download from S3, DAS uses the DDB tables Logs, LogIds, and Commits to provide per-commit-log indexing of all S3 objects.

The commit logs retain older revisions of any stored data for a configurable period of time; older entries are purged periodically to prevent the DDB tables and the S3 bucket from growing without bound.
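The delta-encoded, append-only commit log with periodic purging can be illustrated with a minimal in-memory sketch. The list of commits below stands in for the S3 objects and their DDB index; the class and field names are illustrative, not DAS's actual schema:

```python
import time

class CommitLog:
    """Toy append-only commit log: each commit stores only the keys that
    changed (a delta), and current state is rebuilt by replaying commits."""

    def __init__(self, retention_seconds):
        self.retention = retention_seconds
        self.commits = []  # stands in for the S3 objects plus DDB index

    def append(self, delta, now=None):
        self.commits.append({"ts": now or time.time(), "delta": delta})

    def state(self):
        # Replay deltas in order; later commits overwrite earlier keys.
        merged = {}
        for c in self.commits:
            merged.update(c["delta"])
        return merged

    def purge(self, now=None):
        # Drop commits older than the retention window, as DAS does
        # periodically to keep storage growth bounded.
        cutoff = (now or time.time()) - self.retention
        self.commits = [c for c in self.commits if c["ts"] >= cutoff]

log = CommitLog(retention_seconds=3600)
log.append({"policy.rego": "allow = true"}, now=1000)
log.append({"data.json": "{}"}, now=2000)
log.append({"policy.rego": "allow = false"}, now=3000)
print(log.state()["policy.rego"])  # latest delta wins: "allow = false"
log.purge(now=5000)                # commits before t=1400 are dropped
print(len(log.commits))            # → 2
```

Note that purging old commits only discards old revisions; the latest state remains reconstructable from the retained deltas.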

The contents of the policy and JSON storage are encrypted with the tenant master key discussed below.

Decision Storage and Streaming

DAS saves the decision batches streamed from OPA to the S3 storage bucket, maintaining an index over the batches in the DDB table DecisionLog. The saved decisions allow already-executed decisions to be replayed against modified policies or data. To make the decisions searchable, DAS sends them to ES. If so configured, DAS can also send the decisions to an external decision sink (another S3 bucket) for further processing. While processing the decisions, DAS also maintains time series for the decisions in the DDB table TimeSeries.

As with the policy and JSON storage, DAS garbage collects entries that exceed the configurable time limit from the decision log table and bucket, as well as from the search indices and the time series table.
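The decision ingestion path sketched above (batch to S3, index entry to DDB, counters to the time series table) can be shown with in-memory stand-ins. The object naming and per-minute bucketing below are illustrative assumptions, not DAS's actual layout:

```python
from collections import defaultdict

# In-memory stand-ins: `bucket` plays the S3 bucket, `decision_index`
# the DecisionLog table, and `timeseries` the TimeSeries table.
bucket = {}
decision_index = []
timeseries = defaultdict(int)

def ingest_batch(batch_id, decisions, ts):
    """Persist a batch of OPA decisions and update both indexes."""
    key = f"decisions/{batch_id}"      # illustrative object naming
    bucket[key] = decisions            # save the batch to "S3"
    decision_index.append({"ts": ts, "key": key, "count": len(decisions)})
    minute = ts - ts % 60              # aggregate statistics per minute
    timeseries[minute] += len(decisions)

ingest_batch("b1", [{"allowed": True}, {"allowed": False}], ts=120)
ingest_batch("b2", [{"allowed": True}], ts=130)
print(timeseries[120])  # → 3 (both batches fall in the same minute)
```

Keeping the index separate from the batch objects is what makes replay cheap: a time-range scan of the index yields exactly the S3 keys to re-evaluate.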

Bundle Registry

Using the policies, JSON, and system configurations held in the policy and JSON storage, DAS prepares the policy bundles for the OPAs to download. DAS saves these bundles to the same storage S3 bucket, with object names that correspond to the bundle download URL paths the OPAs use. The microservice serving the bundles can therefore translate a download URL directly to an S3 object path and download the prepared bundle from S3 without any additional lookup.
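The URL-to-object translation amounts to a pure string mapping, which is why no index lookup is needed. A minimal sketch, with an assumed key prefix and URL shape that are illustrative only:

```python
def bundle_object_key(download_path: str) -> str:
    """Map a bundle download URL path directly to an S3 object key.
    The "bundles" prefix is an assumption for illustration; DAS's
    actual object naming is internal."""
    return "bundles" + download_path

# A hypothetical OPA bundle download path:
key = bundle_object_key("/systems/42/bundle.tar.gz")
print(key)  # → "bundles/systems/42/bundle.tar.gz"
```

Because the mapping is deterministic, the bundle-serving microservice stays stateless: any replica can serve any bundle straight from S3.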

Git Cache

To accelerate operations against external Git repositories that store policies, DAS maintains Git repository clones in the pods' local file systems, without using any additional AWS services. The clones are entirely ephemeral: they are reconstructed and destroyed on demand.


Secrets Storage

DAS uses a range of DynamoDB tables to store the secret information essential for its secure operation. DAS user profiles are stored in the DDB table Users; within it, passwords are stored only as hashes, never in plain text. All open user sessions are maintained in a separate Sessions table. DAS stores any user-provided credentials in the Secrets table; these are the secrets a user may provide for DAS to integrate with external systems. Similarly, any API access tokens a user creates to allow access to the DAS APIs are stored in the Tokens DDB table.

To secure and encrypt the secrets, tokens, and the policy and JSON storage contents, each tenant in DAS has a tenant master encryption key. This key is maintained in the SecretsKeys and SecretsTenants DDB tables. The master keys are not stored in plain text; they are encrypted and decrypted using AWS Key Management Service (AWS KMS). Finally, DAS writes all API operations received by its gateway microservice to the DynamoDB Activity table.
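This key arrangement is a form of envelope encryption: KMS wraps the tenant master key, and the master key encrypts the tenant's data. A toy sketch of the flow, where XOR stands in for real ciphers (it is emphatically not secure) and all names are illustrative:

```python
import os

def xor(data: bytes, key: bytes) -> bytes:
    """Toy 'cipher' for illustration only; XOR is its own inverse."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# The root key lives inside "KMS" and is never exported in a real system.
kms_root_key = os.urandom(32)

def new_tenant_key() -> bytes:
    master_key = os.urandom(32)
    # Only the wrapped form is persisted (cf. the SecretsKeys table).
    return xor(master_key, kms_root_key)

def crypt_for_tenant(wrapped_key: bytes, data: bytes) -> bytes:
    master_key = xor(wrapped_key, kms_root_key)  # "KMS" unwraps the key
    return xor(data, master_key)                 # encrypts or decrypts

wrapped = new_tenant_key()
ciphertext = crypt_for_tenant(wrapped, b"policy contents")
plaintext = crypt_for_tenant(wrapped, ciphertext)
print(plaintext)  # → b"policy contents"
```

The point of the indirection is that rotating or revoking access happens at the KMS root: without it, the stored wrapped keys, and hence all tenant data, stay opaque.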

Cluster Coordination

For its internal coordination, DAS uses the Sequence and Lock DDB tables.