Test Policies

When you put a new policy in place, it is important to test it and make sure it works as expected. In other words, you want to understand what impact your policy changes will have on your real-world system. The DAS helps you debug policies with impact analysis in the following ways:

Manual Test

When you write custom rules, Styra recommends testing them before deploying them to OPA.

To test the rules manually, use the Preview button in the editor to evaluate the current policy against a provided input and view the output. Clicking Preview evaluates every rule in the package against the input you provide in the window that appears. You can also evaluate just part of the package by highlighting the expression you want to evaluate and clicking Preview. Sometimes, however, it is non-trivial to know exactly what input OPA will be given.
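As a sketch, a minimal microservice policy you might evaluate with Preview could look like the following (the package name, rule body, and input fields are illustrative, not taken from the DAS):

```rego
package rules

# Deny by default; allow only the cases listed below.
default allow = false

# Allow anyone to create a user.
allow {
    input.method == "POST"
    input.path == ["users"]
}
```

With the input {"path": ["users"], "method": "POST"}, Preview evaluates allow to true. Highlighting just the expression input.method == "POST" and clicking Preview evaluates only that expression.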

For example, the Kubernetes resource that a user sends to the API server is augmented before being sent to OPA to include additional information such as the username and user groups. To help with this, the DAS connects the decision log (which includes the inputs OPA receives) to the editor. From the decision log, find a decision whose input you want to test manually. You can either copy the input or use the Replay button to open the corresponding policy and pre-load that decision's input into the Preview window. From there, continue testing the rules manually.
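To illustrate the kind of augmentation described above, here is an abbreviated sketch of the input OPA receives for a Kubernetes admission decision. It follows the shape of a Kubernetes AdmissionReview request; the specific names and values are illustrative:

```json
{
  "request": {
    "operation": "CREATE",
    "userInfo": {
      "username": "alice@example.com",
      "groups": ["system:authenticated", "dev-team"]
    },
    "object": {
      "kind": "Pod",
      "metadata": {"name": "nginx", "namespace": "default"}
    }
  }
}
```

Note that the userInfo fields are not part of the Pod manifest the user wrote; they are added by the API server, which is why replaying a real logged input is easier than constructing one by hand.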

Unit Test

After completing manual testing, it is good practice to create unit tests so that you have an automated way of rerunning them. The unit test framework works exactly the same way as OPA's unit tests: you write any number of Rego statements that evaluate your policy over a provided input. For example, for a microservice policy, the following test evaluates the allow decision with the JSON object {"path": ["users"], "method": "POST"} as input.

 test_post_allowed {
     allow with input as {"path": ["users"], "method": "POST"}
 }

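Alongside the positive case, you might also sketch a negative test that asserts the policy denies an input it should not allow (the test name and input are illustrative):

```rego
test_delete_denied {
    not allow with input as {"path": ["users"], "method": "DELETE"}
}
```

As with OPA's unit test framework, any rule whose name begins with the test_ prefix is treated as a test.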
To run all your unit tests, use the Validate button in the editor, right next to the Preview button. This displays the results of running all your unit tests in the left-hand column, along with how the test output differs from the currently published version of the policy. Ideally, all tests should pass before the policy is published, but tests can fail.

  1. You can also run tests one at a time using the Preview button.

  2. This form of evaluation treats the tests as normal Rego code:

     • If a test fails, the Preview button will not show a value for that test.

     • If the test succeeds, the Preview button will show that the value for the test is true.

Compliance Test

For configuration authorization use cases like Kubernetes, the Validate button also runs your draft policy against the real-world resources currently in your system and identifies which resources violate it. With Kubernetes, for example, this means running your draft policy against all resources (pods, ingresses, deployments, and so on) that currently exist in your live cluster and identifying those that could not be created with your draft policy in place. This gives you immediate feedback on whether your real-world system, and all the processes creating resources in it, will continue operating properly under your draft policy.

These compliance results are shown in the middle pane of the Validate window. You can view either all the resources that violate your draft policy, or only those that violate your draft policy but did not violate your published policy. This helps you identify exactly which resources your policy changes affect.
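As a sketch of the kind of draft rule compliance testing exercises, consider a policy that requires every resource to carry an owner label (the package name and message are illustrative; DAS system policies may use a different rule structure):

```rego
package kubernetes.validating.labels

# Illustrative draft rule: flag any resource whose metadata lacks
# an "owner" label. Running Validate against a live cluster would
# list every existing resource that violates this rule.
deny[msg] {
    not input.request.object.metadata.labels.owner
    msg := "resource is missing the required 'owner' label"
}
```

If many long-running resources in the cluster predate the labeling convention, compliance results surface them before the policy is enforced, rather than after deployments start failing.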

Replay Test

The decision log provides an audit trail of all the decisions OPA makes, including the input provided to OPA and the decision OPA returned. The Validate button uses this log to give you another dimension of impact analysis: it reruns prior decisions using your draft policy and identifies which decisions the draft makes differently. For example, a request that was originally denied might be allowed by your draft policy. This gives you a historical view of how impactful your draft policy is, with a sampled estimate of what percentage of past decisions it changes. A policy that changes 1% of past decisions is much safer than one that changes 90%.
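To make the idea concrete, here is an illustrative tightening of the earlier microservice policy. The rule names and paths are hypothetical; the point is how replay surfaces flipped decisions:

```rego
package rules

default allow = false

# Published rule (shown commented out for comparison):
# any GET request was allowed.
#
# allow {
#     input.method == "GET"
# }

# Draft rule: only GET requests to /users are allowed.
allow {
    input.method == "GET"
    input.path == ["users"]
}
```

Replaying past decisions against this draft would flag every logged GET request to a path other than ["users"] as a decision that flips from allow to deny, giving you a concrete measure of the change's blast radius before you publish it.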