feat: pipeline inspector protobuf/gRPC definitions #908

phisco merged 2 commits into crossplane:main
Conversation
📝 Walkthrough: Adds a new protobuf API (proto3) defining a gRPC service.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@apis/pipeline/proto/v1alpha1/pipeline_inspector.proto`:
- Around line 53-56: Update the comment on the EmitResponseRequest message to
avoid using "nil" for the bytes field; clarify that the bytes field response in
EmitResponseRequest is empty when the function call failed (not nil) and that
consumers should check the error field (error) to determine failure, e.g.,
replace "nil if function call failed" with "empty if function call failed; check
error to determine failure."
- Line 24: The go_package option in pipeline_inspector.proto is pointing to the
wrong module; update the option go_package value in the top of
pipeline_inspector.proto so it matches the repository module used by other proto
files (use the same base as changelog.proto and ess.proto), i.e., change the
go_package string to the crossplane-runtime module path used throughout this
repo to ensure generated Go code lands under
github.com/crossplane/crossplane-runtime/v2/apis/pipeline/proto/v1alpha1; edit
the option in pipeline_inspector.proto accordingly.
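Taken together, the two fixes above could look roughly like this in `pipeline_inspector.proto`. This is a sketch only: the field names and tag numbers are assumptions inferred from the review snippets, not the actual file contents.

```proto
syntax = "proto3";

package pipeline.proto.v1alpha1;

// Match the module path used by changelog.proto and ess.proto so
// generated Go code lands under the crossplane-runtime module.
option go_package = "github.com/crossplane/crossplane-runtime/v2/apis/pipeline/proto/v1alpha1";

message EmitResponseRequest {
  // JSON-serialized RunFunctionResponse; empty if the function call
  // failed. Check the error field to determine failure.
  bytes response = 1;

  // Error message if the function call failed.
  string error = 2;
}
```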
🧹 Nitpick comments (2)
apis/pipeline/proto/v1alpha1/pipeline_inspector.proto (2)
71-78: Question: trace_id/span_id format expectations. The comments mention these are UUIDs, which is great for uniqueness. I'm curious whether you've considered aligning with W3C Trace Context format (32-char lowercase hex for trace_id, 16-char for span_id) or OpenTelemetry conventions? This could make integration with distributed tracing systems more seamless.
If UUID format is intentional for other reasons (e.g., matching Kubernetes UID format), that's totally fine too! Just wanted to understand the design choice.
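If the W3C alignment discussed here were adopted, the field comments might read along these lines. A sketch only: the field names and tag numbers are assumptions, not taken from the actual file.

```proto
// Trace ID correlating all steps of one pipeline run. W3C Trace
// Context format: a 32-character lowercase hex string encoding 16
// bytes. Omitting it breaks end-to-end correlation across steps.
string trace_id = 1;

// Span ID identifying this individual step. W3C Trace Context
// format: a 16-character lowercase hex string encoding 8 bytes.
string span_id = 2;
```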
92-105: Consider grouping composite resource fields into a nested message. The five `composite_resource_*` fields (lines 93-105) collectively describe a single entity. Per Kubernetes API conventions around cohesion and reusability, you might consider grouping these into a nested message like `CompositeResourceRef`. This would:
- Make the relationship explicit
- Allow reuse if other messages need composite resource identity
- Align with Kubernetes patterns like `ObjectReference`

That said, the flat structure is perfectly functional and may be simpler for your use case. Just thought I'd raise it as an option!
💡 Optional nested message pattern
```proto
// CompositeResourceRef identifies a composite resource.
message CompositeResourceRef {
  string uid = 1;
  string name = 2;
  string namespace = 3; // Empty for cluster-scoped resources.
  string api_version = 4; // e.g., "example.org/v1"
  string kind = 5; // e.g., "XDatabase"
}

message StepMeta {
  // ... other fields ...

  // The composite resource being reconciled.
  CompositeResourceRef composite_resource = 7;

  // ... remaining fields with adjusted tag numbers ...
}
```
Force-pushed from 582cb35 to 1f0bbd6
Signed-off-by: Philippe Scorsolini <p.scorsolini@gmail.com>
Force-pushed from 1f0bbd6 to 147d66f
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@apis/pipelineinspector/proto/v1alpha1/pipeline_inspector.proto`:
- Around line 41-43: The proto comment claims credentials are stripped from the
bytes field "request" but there is no implementation or tests; either implement
redaction or remove the claim. Add a redaction helper (e.g.,
redactCredentialsFromJSON or sanitizeRequestJSON) and call it wherever the
PipelineInspector message's "request" bytes are built (the emitter/serializer
that populates the request field), ensuring it parses the JSON, replaces values
for common sensitive keys ("password", "pass", "pwd", "token", "secret",
"authorization", "api_key", "apiKey", nested occurrences) with a redacted
placeholder, and serializes back to bytes; then add unit tests that feed nested
JSON with those keys and assert the resulting "request" bytes no longer contain
original secret values but keep structure. If you prefer not to implement,
remove or reword the proto comment to avoid claiming credentials are stripped.
- Around line 72-78: The comments for trace_id and span_id incorrectly call them
"UUIDs"; update the proto comments for the fields trace_id and span_id to state
they follow W3C Trace Context / OpenTelemetry formats (trace_id = 32-character
lowercase hex string representing 16 bytes; span_id = 16-character lowercase hex
string representing 8 bytes) and clarify whether these fields are required for
proper trace correlation (e.g., document that omission breaks end-to-end
correlation and recommend clients populate them), so API consumers know the
exact encoding and the impact of missing values.
🧹 Nitpick comments (1)
apis/pipelineinspector/proto/v1alpha1/pipeline_inspector.proto (1)
58-59: Consider using `google.rpc.Status` for structured error handling. The current `string error` field loses error details and status codes needed for robust error handling and observability. Would you be interested in using `google.rpc.Status` instead? It's the gRPC convention for error responses and preserves status codes, error details (via `Any`), and retryability information, making downstream telemetry and client-side error handling more reliable.

♻️ Proposed update

```diff
 import "google/protobuf/timestamp.proto";
+import "google/rpc/status.proto";
 ...
-  // Error message if the function call failed.
-  string error = 2;
+  // Error status if the function call failed.
+  google.rpc.Status error = 2;
```
```proto
  string function_name = 5;

  // Name of the Composition defining this pipeline.
  string composition_name = 6;
```
do we want to have the composition_revision as well? Could be useful?
Maybe function_revision as well?
The name of the CompositionRevision is already available through the observed composite resource at spec.compositionRevisionRef.name.
The names of the FunctionRevisions are not known in the code where these are emitted, nor are easy to find out. We can add that in the future once we get more clarity on crossplane/crossplane#6935, I hope.
> The name of the CompositionRevision is already available through the observed composite resource at spec.compositionRevisionRef.name.

The only problem I see is that you have to dig for it instead of just having it there, although I am not sure how hard it is to get from where we are creating the step. I don't feel too strongly about this BTW.
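The "dig for it" in the exchange above is a couple of nested lookups on the observed composite. A minimal sketch using plain maps; real controller code would more likely use `unstructured.NestedString` from k8s.io/apimachinery, and the example resource is invented.

```go
// Sketch: reading spec.compositionRevisionRef.name from an
// unstructured composite resource. Illustrative only.
package main

import (
	"encoding/json"
	"fmt"
)

// compositionRevisionName walks the decoded composite resource to
// spec.compositionRevisionRef.name, returning "" if any level is absent.
func compositionRevisionName(xr map[string]any) string {
	spec, ok := xr["spec"].(map[string]any)
	if !ok {
		return ""
	}
	ref, ok := spec["compositionRevisionRef"].(map[string]any)
	if !ok {
		return ""
	}
	name, _ := ref["name"].(string)
	return name
}

func main() {
	var xr map[string]any
	_ = json.Unmarshal([]byte(`{
		"apiVersion": "example.org/v1",
		"kind": "XDatabase",
		"spec": {"compositionRevisionRef": {"name": "xdatabases-abc123"}}
	}`), &xr)
	fmt.Println(compositionRevisionName(xr)) // → xdatabases-abc123
}
```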
Signed-off-by: Philippe Scorsolini <p.scorsolini@gmail.com>
Description of your changes
From crossplane/crossplane#7025.
Fixes #
I have:
- Run `earthly +reviewable` to ensure this PR is ready for review.
- Added or updated unit tests.
- Linked a PR or a docs tracking issue to document this change.
- Added `backport release-x.y` labels to auto-backport this PR.

Need help with this checklist? See the cheat sheet.