Operations

This page contains instructions on how to implement common operations in DAS.

Set Up an API Token as the System Owner Using the Authorization V2 API

You can configure an API token to have the System Owner role for a Styra DAS System using the API.

Prerequisites

You must have an API token that has the WorkspaceAdministrator role.

  • Log in to DAS and create an API token, or replace TENANT with your tenant name and go directly to https://TENANT.styra.com/access-control/api-tokens.

  • Once the token is created, modify its permissions so that it has the WorkspaceAdministrator role. You can verify the token as shown below.
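
Before proceeding, you can verify that the token is accepted by the API by listing the systems in your workspace (a minimal check; XXX is the token placeholder used throughout this page):

$> curl -H 'Authorization: Bearer XXX' https://TENANT.styra.com/v1/systems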

Create a System

Now that you have an API token, use it to create a system through the API and capture the system_id from the response (see the jq example below).

$> curl -H "Content-Type: application/json"  -H 'Authorization: Bearer XXX' -X POST  https://TENANT.styra.com/v1/systems -d '{"type": "kubernetes:v2", "name": "test-system"}'

{"request_id":"86d56126-d8e2-42f1-9316-7d4180438121","result":{"name":"test-system","description":"","type":"kubernetes:v2","deployment_parameters":
... snip ...
"id":"f6287105665f45079e4750d81ca9529f","status":"Ready"}}
note

The name parameter and the type parameter are required. You can specify additional information during system creation, and you can use system types other than Kubernetes for this step.
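
If you have jq installed, you can capture the system id directly from the response (a convenience sketch, not a required step):

$> curl -s -H "Content-Type: application/json" -H 'Authorization: Bearer XXX' -X POST https://TENANT.styra.com/v1/systems -d '{"type": "kubernetes:v2", "name": "test-system"}' | jq -r '.result.id'
f6287105665f45079e4750d81ca9529f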

Next, create a new token and assign it to the SystemOwner role. This example names the token test-systemowner; in practice, you should use a more descriptive name.

$> curl -H "Content-Type: application/json"  -H 'Authorization: Bearer XXX' -X PUT https://TENANT.styra.com/v1/tokens/test-systemowner -d '{"id": "test-systemowner", "description": "", "ttl": "", "allow_path_patterns": [], "regenerate": false}'

{"request_id":"275ba8d6-dc42-42e3-abac-9f35f7779ce8","result":"eOy... snip ...uHY"}

After the token is created, you can update the rolebinding for the new token and scope it to the new system. If there isn't a SystemOwner rolebinding for the system you created, create one using the POST action as shown below.

important

Use the system_id that was returned when you created the system in the first step.

$> curl -H "Content-Type: application/json"  -H 'Authorization: Bearer XXX' -X POST https://TENANT.styra.com/v2/authz/rolebindings -d '{"resource_filter": {"kind": "system", "id": "f6287105665f45079e4750d81ca9529f"}, "role_id": "SystemOwner", "subjects": [{"kind": "token", "id": "test-systemowner"}]}'

{"rolebinding":{"resource_filter":{"kind":"system","id":"c298bf65eb9c468c9887a7102ba9f526"},"role_id":"SystemOwner","subjects":[{"kind":"token","id":"test-systemowner"}],"id":"af5db326898a4bb6888103304a74aa10","metadata":{... snip ...}}}

If you want to add another token to the SystemOwner role for that system, use the PUT API call below.

$> curl -H "Content-Type: application/json"  -H 'Authorization: Bearer XXX' -X PUT https://TENANT.styra.com/v2/authz/rolebindings/af5db326898a4bb6888103304a74aa10/subjects -d '{"subjects": [{"kind": "token", "id": "test-systemowner"}]}'

{"rolebinding":{"resource_filter":{"kind":"system","id":"f6287105665f45079e4750d81ca9529f"},"role_id":"SystemOwner","subjects":[{"kind":"token","id":"test-systemowner"}],"id":"af5db326898a4bb6888103304a74aa10","metadata":{... snip ...}}}

At this point, the new token has full control over the new system; you can verify this as shown below.
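
To confirm, use the new token (the value returned from the token creation call) to fetch the system it now owns; the request should succeed, whereas it would be denied for a token with no rolebinding:

$> curl -H 'Authorization: Bearer eOy... snip ...uHY' https://TENANT.styra.com/v1/systems/f6287105665f45079e4750d81ca9529f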

How to configure SSH Git settings using the API

You can configure SSH Git settings using the API.

Prerequisites

You must obtain a copy of a private SSH key configured for your source control. If you do not have an SSH key configured to access your source control account, you must generate a new SSH key and configure it in your Git repository. Styra highly recommends using machine users to authenticate and authorize Git syncing.

Once you have obtained your private SSH key, you must convert it to the following sample format:

-----BEGIN OPENSSH PRIVATE KEY-----\nMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCH63TY3t3pOuuGhgGFir7Y8IxeEz2ZuxUgL6ha4bPRKIcVH6Mk1stdPKhMUXJ/l1pqGVnLQRL0QaF0Lhu6+Qlc78ZFkHuYUuJS2Qw5eN1sKnVh72XHY+UrygB9GRYaohvc/ksZeBXp+inRr3WqNgKMNQW6/3kojDi5xNJiulutBwIDAQAB\n-----END OPENSSH PRIVATE KEY-----

This format requires you to pass the full key on a single line, with a literal \n after the BEGIN marker and another before the END marker. Styra recommends using the helper bash script below to format your private key:

#!/bin/bash
# Requires sed to be installed.
SSH_FILE_PATH="$HOME/.ssh/id_rsa" # replace this with the path to your private SSH key

line_count=$(wc -l < "$SSH_FILE_PATH")
last_new_line=$((line_count - 1))

# Append a literal \n to the first line (the BEGIN marker) and to the
# second-to-last line (the end of the key body), then join all lines into one.
sed '1s/$/\\n/' "$SSH_FILE_PATH" | sed "${last_new_line}"'s/$/\\n/' | tr -d '\n'
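
For example, if you save the script as format-ssh-key.sh (a hypothetical filename), running it prints the single-line key to stdout, ready to paste into the secret payload shown later in this section:

$> chmod +x format-ssh-key.sh && ./format-ssh-key.sh
-----BEGIN OPENSSH PRIVATE KEY-----\nMIGf... snip ...\n-----END OPENSSH PRIVATE KEY-----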

Create SSH credentials through the API

Once your SSH key is configured on your repository, you must create the two secret objects required for configuring SSH authentication: a passphrase and a key.

The following are example API endpoints and payloads for configuring SSH for system creation and updates:

Create a secret passphrase id

Endpoint: PUT /v1/secrets/git/ssh/passphrase

A passphrase secret is required for system configuration. You may leave the secret empty if using a passphrase-less key.

Example parameters:

{
  "description": "passphrase for git ssh key",
  "name": "prod1-passphrase",
  "secret": ""
}
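
Following the curl conventions used earlier on this page, the full request looks like this (a sketch; XXX is your API token):

$> curl -H "Content-Type: application/json" -H 'Authorization: Bearer XXX' -X PUT https://TENANT.styra.com/v1/secrets/git/ssh/passphrase -d '{"description": "passphrase for git ssh key", "name": "prod1-passphrase", "secret": ""}'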

Create a secret SSH private key

Endpoint: PUT /v1/secrets/git/ssh/key

Example parameters:

{
  "description": "git ssh key",
  "name": "prod1-ssh-key",
  "secret": "-----BEGIN OPENSSH PRIVATE KEY-----\nMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCH63TY3t3pOuuGhgGFir7Y8IxeEz2ZuxUgL6ha4bPRKIcVH6Mk1stdPKhMUXJ/l1pqGVnLQRL0QaF0Lhu6+Qlc78ZFkHuYUuJS2Qw5eN1sKnVh72XHY+UrygB9GRYaohvc/ksZeBXp+inRr3WqNgKMNQW6/3kojDi5xNJiulutBwIDAQAB\n-----END OPENSSH PRIVATE KEY-----"
}
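
You can combine this call with the helper script from earlier. The sketch below (the script filename is hypothetical) captures the formatted key in a shell variable; the literal \n sequences in the variable become JSON newline escapes when the payload is parsed:

$> FORMATTED_KEY=$(./format-ssh-key.sh)
$> curl -H "Content-Type: application/json" -H 'Authorization: Bearer XXX' -X PUT https://TENANT.styra.com/v1/secrets/git/ssh/key -d "{\"description\": \"git ssh key\", \"name\": \"prod1-ssh-key\", \"secret\": \"$FORMATTED_KEY\"}"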

Create a new system

Endpoint: POST /v1/systems

Example parameters:

{
  "name": "production-system-1",
  "type": "kubernetes",
  "deployment_parameters": {
    "kubernetes_version": "1.17"
  },
  "source_control": {
    "origin": {
      "credentials": "", // This is required. Leave blank like this if using SSH.
      "path": "clusters/production-system-1",
      "reference": "refs/heads/main",
      "ssh_credentials": {
        "passphrase": "passphrase_id",
        "private_key": "ssh_key_id"
      },
      "url": "https://github.com/org/repo"
    }
  }
}
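
Putting it together, with the payload above saved to a file named system.json (a hypothetical filename):

$> curl -H "Content-Type: application/json" -H 'Authorization: Bearer XXX' -X POST https://TENANT.styra.com/v1/systems -d @system.json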

Update an existing system

Endpoint: PUT /v1/systems/{system}

Example parameters:

{
  "system": "xn7ndpgbtm5irndphlrk1bzslbszyblp",
  "name": "production-system-1",
  "type": "kubernetes",
  "deployment_parameters": {
    "kubernetes_version": "1.17"
  },
  "source_control": {
    "origin": {
      "credentials": "", // This is required. Leave blank like this if using SSH.
      "path": "clusters/production-system-1",
      "reference": "refs/heads/main",
      "ssh_credentials": {
        "passphrase": "passphrase_id",
        "private_key": "ssh_key_id"
      },
      "url": "https://github.com/org/repo"
    }
  }
}
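
The corresponding request, again assuming the payload is saved to system.json, puts the system id in the path:

$> curl -H "Content-Type: application/json" -H 'Authorization: Bearer XXX' -X PUT https://TENANT.styra.com/v1/systems/xn7ndpgbtm5irndphlrk1bzslbszyblp -d @system.json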

High Availability for Kubernetes Admission Control with OPA and DAS

To ensure high availability for Kubernetes Admission Control decisions, consider the following configuration and deployment options for OPA and Styra DAS:

Configure OPAs for the Admission Control Webhook

  1. Run two or more OPA Pods to avoid any single point of failure with OPA.

    • Styra DAS deploys three OPAs by default via the pre-built installation manifest.
  2. Configure Pod anti-affinity to prevent the OPA Pods from running on the same worker node.

    • Styra DAS configures Pod anti-affinity by default in the pre-built installation manifest (a sketch of this configuration follows this list).
  3. (Optional) Consider deploying the OPAs to the master/control-plane nodes rather than the worker nodes.

    • Styra DAS provides the optional configuration for the master/control-plane deployment in the pre-built installation manifest.
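
The Pod anti-affinity described in step 2 would appear under the OPA Deployment's Pod template spec, roughly as sketched below (the app: opa label and the choice of required rather than preferred scheduling are assumptions; check the generated manifest for the exact values Styra DAS uses):

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: opa
        topologyKey: kubernetes.io/hostname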

For additional information on the OPA installation for Kubernetes, see the Kubernetes Install Agents page.

Configure OPAs to pull policies via Bundles and push Decision Logs

Styra DAS provides a sidecar container, called the Styra Local Plane (SLP), that is deployed in each OPA Pod. For Kubernetes, the SLP is configured by default to:

  1. Pull policy Bundles from Styra DAS and relay them to the OPA.

  2. Receive Decision Logs from OPAs and relay them to Styra DAS. OPA does not communicate directly with DAS; all communication happens through the SLP:

    • OPA <-> SLP <-> DAS
  3. Styra DAS provides the OPA and SLP configuration in the pre-built installation manifest. If connectivity between the SLP and DAS is disrupted, the SLP persists Decision Logs to a local volume mounted in the Pod. Once connectivity is restored, the SLP automatically resumes uploading the Decision Logs from the local volume to DAS.

    • Styra DAS configures the local volume as an ephemeral volume by default, but it can be configured as a Persistent Volume in order to survive Pod restarts (see the sketch after this list).

    • Based on the expected log volume and recovery objectives, you can allocate the appropriate amount of disk for the Persistent Volumes.
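
If you opt for a Persistent Volume, a generic PersistentVolumeClaim is sketched below (the name, size, and access mode are assumptions); mount it at the SLP's decision-log path in place of the default ephemeral volume:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: slp-decision-logs
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi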

For additional information, see the Styra Local Plane page.

Configure DAS self-hosted for HA

  1. Run Styra DAS on a Kubernetes cluster that spans multiple availability zones and implement a Pod scheduling policy that places replicas across multiple AZs.

  2. Scale each of the DAS microservices to multiple replicas to avoid any single points of failure.

    • (Optional) If you have an S3-compatible object store available in your self-hosted environment, and the object store's availability characteristics are potentially higher than those of your DAS environment, you can consider the Bundle Registry feature of DAS.

    • With Bundle Registry enabled, DAS can deploy Bundles to the object store, and the SLPs will pull Bundles from the object store, allowing SLPs to be redeployed or scaled out without a dependency on DAS.

  3. Implement high availability across multiple AZs for your database tier.

    • The default database supported by DAS is PostgreSQL.

    • For AWS private-cloud/self-hosted users, DynamoDB can be used as an alternative to PostgreSQL for increased scalability and availability.

  4. For multi-region recovery:

    • Prepare a standby Kubernetes cluster in the secondary region into which the DAS microservices can be installed.

    • Create a read replica for PostgreSQL in the secondary region, and configure PostgreSQL asynchronous replication.

    • During recovery, promote the read replica in the secondary region to primary, deploy DAS to the standby Kubernetes cluster, and update DNS; a sketch of these steps follows this list.
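
Assuming the database runs on Amazon RDS and the standby cluster is reachable via a kubectl context (both assumptions; adapt the names to your environment), the recovery steps might look like:

$> aws rds promote-read-replica --db-instance-identifier das-postgres-replica
$> kubectl --context standby-cluster apply -f das-install-manifest.yaml
# Then update the DNS record for your DAS endpoint to point at the standby cluster's ingress.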