The Evaluate feature enables exploratory policy development, where changes to policy, input, or data are immediately reflected in the evaluation result. This goes a long way toward building understanding and confidence in a policy as it's being developed. Ideally, we'd be able to transfer this confidence into something that persists past the development session! I believe one way to do this would be to provide an option to generate a test from the workspace state at the moment the user clicks Evaluate.
Given an (overly trivial) workspace like this:
`policy.rego`

```rego
package policy

allow if {
	input.foo == "foo"
	data.bar == "bar"
}
```

`input.json`

```json
{"foo": "foo"}
```

`data.json`

```json
{"bar": "bar"}
```
Clicking Evaluate would of course return `true` for the given workspace state. Translating that evaluation into a test would then render something like:
`policy_test.rego`

```rego
package policy_test

import data.policy

test_allow if {
	policy.allow with input.foo as "foo" with data.bar as "bar"
}
```
Changing e.g. `input.json` to `{"foo": "bar"}` would make the result undefined, and we could easily add a test for that condition too:
`policy_test.rego`

```rego
package policy_test

import data.policy

test_allow if {
	policy.allow with input.foo as "foo" with data.bar as "bar"
}

test_not_allow if {
	not policy.allow with input.foo as "bar" with data.bar as "bar"
}
```
Exactly how test names should be generated is left as an exercise for the implementor... but the name should most likely be provided or refined by the user anyway.
All things considered, this would definitely not be a trivial task (identifying and narrowing down the dependencies to mock, for one), but I imagine we could do this gradually: a first version could copy the whole input document into the test, skip `data.json` files entirely, and so on. Even a crude form requiring some manual edits would still save time compared to writing all tests from scratch.
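To illustrate what that crude first version might emit: inlining the entire input document with a single `with input as` override, rather than narrowing it down to the individual paths the policy references, would already yield a working test. A sketch (the generated name and exact shape are hypothetical):

```rego
package policy_test

import data.policy

# Hypothetical first-version output: the whole input document from the
# workspace is inlined as-is, rather than narrowed down to input.foo,
# the one input path the policy actually references.
test_allow if {
	policy.allow with input as {"foo": "foo"} with data.bar as "bar"
}
```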