

log-replay is an impact analysis tool that helps identify which decisions may change when you modify policies and/or data. You can see the impacted decisions by replaying past decisions against the modified policies and/or data.

log-replay API

The service exposes a single API call that runs decision log re-evaluation:

POST /v1/logreplay

{
  "duration": "10s",
  "max_samples": 5,
  "skip_batches": ["..."],
  "policies": {
    "httpapi/authz": "package httpapi.authz\n # rego policy contents"
  },
  "data_patches": [
    {"op": "add", "path": "/mydata/value", "value": "something"}
  ],
  "scope": [
    {"path": "httpapi/authz/allow"}
  ]
}


  • duration: Specifies the total time the analyzer spends on evaluations. If the value is omitted or less than zero, DefaultReplayDuration is used instead. If the value exceeds MaxReplayDuration, it is capped at that limit.
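The clamping behaviour described above can be sketched as follows. The constant values are server-side and not documented here; the numbers below are assumptions for illustration only.

```python
# Assumed values for illustration; the real constants live server-side.
DEFAULT_REPLAY_DURATION = 10.0   # seconds (assumed)
MAX_REPLAY_DURATION = 60.0       # seconds (assumed)

def effective_duration(requested=None):
    """Resolve the replay duration from an optional requested value (seconds)."""
    if requested is None or requested < 0:
        return DEFAULT_REPLAY_DURATION          # omitted or negative -> default
    return min(requested, MAX_REPLAY_DURATION)  # oversized values are capped
```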

  • max_samples: Maximum number of representative change samples to return. The value is limited to MaxSamples and defaults to DefaultSamples.

  • skip_batches: Optional list of batch IDs to skip (obtained from analyzed_batches attribute of previous replay run).

  • policies: Tells log-replay which policies to alter, in the form of a path-to-payload map. log-replay re-evaluates any decision log that can be affected by a change in these policies.

  • data_patches: Optional list of atomic JSON patches to apply to the data prior to evaluation. Data changes without any policy modifications (an empty policies map) are accepted; however, at least one of policies or data_patches must be present.
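To make the patch semantics concrete, here is a minimal sketch of applying the "add" operation from the request example to an in-memory data document. Real implementations follow RFC 6902 in full; this handles only the simple nested-object case.

```python
def apply_add_patch(doc, patch):
    """Apply a single JSON-patch 'add' operation (minimal sketch, not RFC 6902)."""
    assert patch["op"] == "add"
    keys = patch["path"].strip("/").split("/")
    target = doc
    for key in keys[:-1]:
        target = target.setdefault(key, {})  # create intermediate objects
    target[keys[-1]] = patch["value"]
    return doc

data = {"mydata": {}}
apply_add_patch(data, {"op": "add", "path": "/mydata/value", "value": "something"})
```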

  • decision_patches: Optional list of atomic JSON patches to apply to the decision JSON as a whole before it is replayed. This can be used to compensate for values stripped by the mask policy, or to inject sample data into the inputs.

  • compare_full_results: When true, compare entire decision results rather than only the system-type-dependent significant fields (default: false).

  • deterministic_policies: Signals that policies are deterministic, i.e. decisions with the same inputs, data, and revision always evaluate to the same result; this allows replayed decisions to be cached (default: true).
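The caching this enables can be sketched as memoization keyed on the decision's input plus revision. The key composition below (canonically serialized input, revision string) is an assumption for illustration, not the service's actual cache implementation.

```python
import json

cache = {}

def replay_cached(decision_input, revision, evaluate):
    """Replay a decision, reusing a cached result for identical (input, revision)."""
    key = (json.dumps(decision_input, sort_keys=True), revision)
    if key not in cache:
        cache[key] = evaluate(decision_input)
    return cache[key]

calls = []

def evaluate_decision(decision_input):
    calls.append(decision_input)  # count actual evaluations
    return False

first = replay_cached({"user": "sam", "method": "GET"}, "rev-1", evaluate_decision)
second = replay_cached({"method": "GET", "user": "sam"}, "rev-1", evaluate_decision)
```

Note that sort_keys makes the two logically identical inputs hash to the same key, so the second call is served from the cache.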

  • scope: Allows filtering of analyzed log decisions. It is a list of documents, each of which can have any of the following attributes:

  • path:

    • If the decision log path is prefixed by this value, then it will be considered for re-evaluation.

    • If the decision log path is a prefix of this value (for example, scope.path = policy/allow and log.path = policy), then it is assumed that the decision log result is narrowed to the specified subpath (/allow in the example).

    • If none of the above options are true, then the decision log will be ignored.

  • max_revisions: Only consider the last max_revisions revisions of the policy (that is, revisions in the range [current - max_revisions .. current]).

  • max_age: Only consider decision logs that are not older than this parameter. It can be specified in either relative (for example, 30s) or absolute (RFC3339Nano) time formats.

  • min_age: Only consider decision logs that are not newer than this parameter. It can be specified in either relative (for example, 30s) or absolute (RFC3339Nano) time formats.
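Resolving either age format to an absolute cutoff can be sketched as below. This is illustrative only: the service accepts the full duration and RFC3339Nano syntax, while this sketch handles single-unit durations and truncates nanoseconds to microseconds for Python's parser.

```python
from datetime import datetime, timedelta, timezone

def parse_age(value, now=None):
    """Resolve a max_age/min_age value to an absolute UTC cutoff (sketch only)."""
    now = now or datetime.now(timezone.utc)
    units = {"s": 1, "m": 60, "h": 3600}
    # Relative form, e.g. "30s": subtract from the current time.
    if value and value[-1] in units and value[:-1].isdigit():
        return now - timedelta(seconds=int(value[:-1]) * units[value[-1]])
    # Absolute RFC3339 form: truncate nanoseconds so fromisoformat can parse it.
    ts = value.rstrip("Z")
    if "." in ts:
        whole, frac = ts.split(".")
        ts = whole + "." + frac[:6]
    return datetime.fromisoformat(ts).replace(tzinfo=timezone.utc)
```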

If the scope list is empty, then every decision log is considered for re-evaluation. The same effect can be achieved with the [{}] list.
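The three path-matching rules above can be sketched as a small function. It uses plain string prefixes for brevity; the actual service presumably matches on path segments, so treat this as a sketch of the documented behaviour, not the implementation.

```python
def scope_match(scope_path, log_path):
    """Decide how a scope.path applies to a decision log path.

    Returns (considered, result_subpath):
      (True, None)     -> the log path falls under the scope
      (True, subpath)  -> the result is narrowed to the given subpath
      (False, None)    -> the decision log is ignored
    """
    if log_path.startswith(scope_path):
        return True, None                       # scope is a prefix of the log path
    if scope_path.startswith(log_path):
        return True, scope_path[len(log_path):]  # log path is a prefix of the scope
    return False, None
```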


"result": {
"started": "2018-11-26T19:59:28.879307Z",
"samples": [{
"labels": {
"hostname": "8508d25dc62c"
"type": "agent",
"name": "16b93fad-c221-4d67-a44f-a1aa90f7a099",
"agent_id": "16b93fad-c221-4d67-a44f-a1aa90f7a099",
"timestamp": "2018-11-24T21:43:45.166990877Z",
"revision": "W3sibCI6Imh0dHBhcGkvYXV0aHoiLCJzIjowfSx7ImwiOiJzeXMvY2F0YWxvZyIsInMiOjEyMjd9XQ",
"path": "httpapi/authz/allow",
"input": {
"method": "GET",
"path": ["finance", "salary", "donna"],
"user": "sam"
"result": false,
"requested_by": "",
"decision_id": "1f6b94cf-f077-4899-8b69-af76e7cdf533",
"new_result": true
"stats": {
"batches_observed": 203,
"batches_analyzed": 203,
"entries_observed": 57311,
"entries_evaluated": 56263,
"entries_scheduled": 56263,
"entries_failed": 0,
"analysis_errors": 0,
"results_changed": 2825,
"batches_downloaded": 203,
"batches_download_errors": 0,
"batches_skipped": 2,
"batches_from_cache": 180,
"batches_scheduled": 4067
"analyzed_batches": [
"duration": 10000201000


  • samples: List of representative change samples (up to max_samples). Even though the example above found 2825 changed results of past policy decisions, it returned only one sample, because all of them were similar: they had the same input and data/modules revisions. All samples are in the regular decision log format with the following additional attributes:

    • new_result: New result of the re-evaluation.

    • error: Error text when re-evaluation failed (omitted otherwise).

  • stats: Various metrics of the analysis, as follows:

    • batches_scheduled: Number of decision log batches that were scheduled for analysis.

    • batches_downloaded: How many decision log batches were downloaded during the analysis (<= batches_scheduled).

    • batches_download_errors: Number of decision batches that could not be downloaded.

    • batches_skipped: How many decision log batches were skipped because of the skip_batches input or pre-filtration.

    • batches_from_cache: How many of downloaded batches were actually taken from the cache (<= batches_downloaded).

    • batches_observed: Number of decision log batches picked by the analyzers (<= batches_downloaded).

    • batches_analyzed: Number of decision log batches fully analyzed (<= batches_observed).

    • entries_observed: Number of decision logs seen in all analyzed batches.

    • entries_scheduled: Number of decisions scheduled for replay (<= entries_observed).

    • entries_evaluated: Number of replayed decisions (<= entries_scheduled).

    • results_changed: How many decision logs got a different result value after re-evaluation (<= entries_evaluated).

    • entries_failed: How many decision log re-evaluations resulted in error (<= entries_evaluated).

    • analysis_errors: How many analysis errors happened, potentially causing some decisions to be skipped (<= entries_observed).

  • duration: Duration of the analysis in nanoseconds (<= duration from the request).

  • analyzed_batches: List of fully analyzed batch IDs (batches_analyzed entries).
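The <= relations noted in the field descriptions above can be checked mechanically. A sketch using the numbers from the example response (the check function is illustrative, not part of the API):

```python
# Stats taken from the example response above.
stats = {
    "batches_scheduled": 4067, "batches_downloaded": 203,
    "batches_from_cache": 180, "batches_observed": 203,
    "batches_analyzed": 203, "entries_observed": 57311,
    "entries_scheduled": 56263, "entries_evaluated": 56263,
    "results_changed": 2825, "entries_failed": 0,
}

def check_stats(s):
    """Verify the ordering invariants documented for the stats fields."""
    assert s["batches_downloaded"] <= s["batches_scheduled"]
    assert s["batches_from_cache"] <= s["batches_downloaded"]
    assert s["batches_observed"] <= s["batches_downloaded"]
    assert s["batches_analyzed"] <= s["batches_observed"]
    assert s["entries_scheduled"] <= s["entries_observed"]
    assert s["entries_evaluated"] <= s["entries_scheduled"]
    assert s["results_changed"] <= s["entries_evaluated"]
    assert s["entries_failed"] <= s["entries_evaluated"]
    return True

ok = check_stats(stats)
```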