Apply Data Sources
The DAS Entitlements system type relies heavily on external data as the source of truth for raw group, role, or entitlements data. That data is typically managed carefully by compliance processes that are well known throughout the organization. The DAS Entitlements system replicates that data and uses it as the foundation of an OPA-based, cloud-native Entitlements service. The Entitlements system is not meant to be a full IAM system: the organization can continue managing that data as it always has, while the IAM engineering team provides a globally replicated Entitlements service for its cloud-based applications.
Create a Data Source
This section explains how to create a data source.
Data source creation: To use a DAS Entitlements system, create a DAS data source and connect it to a real-world system-of-record. Once DAS can fetch the relevant data, it can replicate the data to all the OPA instances that rely on it. Every data source has its own collection of configuration options that depend on the real-world system it connects to. For more information, see the different types of data sources page.
Figure 1 shows the creation of the LDAP data source. You configure the data source by giving it a path and a name, and whatever other configuration information that’s required to connect to that system-of-record.
Data refresh: DAS refreshes data from the system-of-record periodically. Each data source differs in the configuration available, but most support a refresh rate, so you can balance how fresh the data should be against the load you place on the configured data source. You can specify the refresh interval in seconds, minutes, or hours.
Multiple systems of record: You can configure multiple DAS data sources. Configuring multiple data sources is helpful in scenarios where users are spread across different systems-of-record (for example, contractors versus employees). You assign each data source its own name and configure each one separately. You can also define transformations that combine datasets from multiple systems-of-record into one to simplify policy authoring.
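As a sketch of what such a combining transformation could look like, assuming two data sources named employees and contractors that each expose a users array (both names and the data layout are assumptions for illustration):

```
package transform.combined

# Hypothetical: union the user arrays from two separately configured
# data sources into a single list to simplify policy authoring.
subjects := array.concat(data.employees.users, data.contractors.users)
```

Here array.concat is a standard OPA built-in; the exact references depend on where your data sources are mounted.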
Data source locations: Whenever you configure a data source, you pick a folder location for it. For DAS Entitlements systems, Styra generally recommends adding your data sources under your Entitlements system >> data sources directory, which makes it easy to find the origins of all your data. This is a convention, not a technical requirement; you can place data sources wherever you need to within your system.
Data source status: Organizing all your data sources under the data sources folder also makes it easy to check whether any of those data source connections are in an error state, for example, because the credentials you configured have expired. Data sources in an error state are shown with a red exclamation point (!) next to their name. Hover your mouse over the exclamation point to display the error. If you click the data source, its contents appear in the right-hand pane, and the synchronization STATUS appears in the upper right-hand corner of the page.
Figure 2 shows several data sources in error states, denoted by a red exclamation point (!). You can also see the content of the currently selected data source in the right-hand pane, along with its size (18.7KB) and its status, which details when synchronization failed.
Transform a Data Source
Once a data source is successfully configured, you can transform that data (for example, remove unnecessary rows or redact private fields) before it is stored in DAS and transferred to OPA. You can also use transformations to convert the format of the raw JSON into the Entitlements opinionated object model.
You can also combine the results of your transformed data source with other (transformed) data sources and inject them into the Entitlements opinionated object model.
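As a sketch of the redaction case, assuming the raw data carries a users array and each user has an ssn field you want to drop before the data leaves DAS (the field and array names are assumptions for illustration):

```
package transform.redact

# Hypothetical: strip each user's "ssn" field before the data is
# stored in DAS and shipped to OPA. json.remove is a standard OPA
# built-in that removes the listed paths from an object.
output := [json.remove(u, ["ssn"]) | u := input.users[_]]
```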
In DAS, you control which transformation (if any) to apply through the data source’s configuration option called Data transform, located under the Advanced expand/collapse arrow (see Figure 3). You can pick a pre-built transformation that converts your data source to match the DAS Entitlements opinionated object model, or you can write a custom data transformation.
For example, if your original LDAP data provides a JSON object with users, resources, and actions, and you want to rename users to subjects, you can create a policy at transform/ldap/transform.rego and include the following code.
result["subjects"] = input.users
result["resources"] = input.resources
result["actions"] = input.actions
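To sanity-check the transform, you can put a small unit test next to it, sketched here under the assumption that the file's package is transform.ldap.transform (adjust the package to match your file):

```
package transform.ldap.transform

# Illustrative unit test for the rename above: given raw LDAP-style
# input, the result object exposes "subjects" instead of "users".
test_users_become_subjects {
    r := result with input as {
        "users": ["alice"],
        "resources": ["doc1"],
        "actions": ["read"],
    }
    r == {
        "subjects": ["alice"],
        "resources": ["doc1"],
        "actions": ["read"],
    }
}
```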
Now, you can configure your data source’s transform to use:
DAS Free users are not allowed to use a full path to the Rego file or subfolder. You can use a whole system's bundle by setting / as a value in the Policy field.
- Rego query:
If one of the out-of-the-box transformations does what you need, you can select it from the drop-down list.
The following list shows the currently available out-of-the-box transforms:
Import OpenAPI spec: Transforms an OpenAPI specification into the schema required by the DAS Entitlements opinionated object model.
Import SCIM groups: Transforms an array of System for Cross-domain Identity Management (SCIM) groups into the schema required by the DAS Entitlements opinionated object model.
Import SCIM users: Transforms an array of SCIM users into the schema required by the DAS Entitlements opinionated object model.
Import LDAP users: Transforms an array of inetOrgPerson records into the schema required by the Entitlements opinionated object model.
Import LDAP groups: Transforms an array of groupOfNames records into the schema required by the Entitlements opinionated object model.
You can view and copy the Rego code for any of the pre-built transforms by going to the following URLs. You can then make a copy, debug it, edit it, and install it like a custom transform. Each URL codifies the Rego package path for the file plus one of the rule names in that file.
Add Custom Transformations
To add custom transformations:
Create the data source without a transformation.
Write the custom transformation in Rego.
a. If the original data is < 1MB:
Open the data source in DAS.
Copy the JSON to the clipboard.
Open the file where you want to write your transformation logic (somewhere in the transform folder).
Click Preview to open the Input pane.
Paste the original JSON into the Input pane.
Write your Rego and debug as normal until the value of one of the variables is what you want the output of your transformation to be.
Update the object package to use this new data source.
Click Validate to run unit tests that validate the object package. If the unit tests pass, then your transformation is correct and the new data source will be used for enforcement in the snippets.
b. If the original data is greater than 1 MB, DAS will not display it because of browser limitations, so you must debug using an IDE on your laptop, for example, the VS Code extension for OPA. The process, however, is the same:
- Create an input.json file in the root directory.
- Create a Rego file for your transformation logic.
- Edit the Rego until you have a single variable whose output is what you want the transformation to produce.
- Copy that Rego file back into DAS.
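The edit-and-debug loop above can be driven from the command line with OPA's opa eval command, sketched here assuming the file names from the steps above and the query path used later in this section:

```
opa eval --data transform.rego --input input.json 'data.transform.mytransform.output'
```

Re-run the command after each edit until the printed value matches what you want the transformation to produce.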
Update the data source to use the transformation. Here, provide the Rego path to the variable that returns the desired output of your transformation. For example, if your transformation file looks like the following Rego:
output := …
Then navigate to the Advanced >> Data transform field and select the following options:
Data transform: Custom
Rego query: data.transform.mytransform.output
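For reference, a minimal transform file that would satisfy that Rego query might look like the following sketch; only the package name and the output rule name come from the example above, and the field mapping is the illustrative users-to-subjects rename from earlier in this section:

```
package transform.mytransform

# Illustrative: rename "users" to "subjects" and pass the other
# fields through unchanged.
output := {
    "subjects": input.users,
    "resources": input.resources,
    "actions": input.actions,
}
```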