
Conversation


@xibz xibz commented Oct 6, 2025

Proposed Changes

feat: Add DSSE-based cryptographic signing for CloudEvents verification

Implements verifiable CloudEvents using DSSE (Dead Simple Signing Envelope) to ensure event authenticity and integrity across untrusted transport layers.

Key features:

  • Sign CloudEvents with DSSE v1.0.2 protocol using SHA256 digests
  • Transport verification material in 'dssematerial' extension attribute
  • Support for binary, structured, and batch CloudEvent modes
  • Backward compatible - unsigned events still work, consumers can ignore signatures (though doing so is highly inadvisable)

Technical approach:

  • Creates SHA256 digest chain of all context attributes and event data (see the sketch after this list)
  • Wraps digest in DSSE envelope with Base64 encoding
  • Verifies by recomputing digests and comparing against signed payload
  • Returns only verified data to consumers (strips unverified extensions)
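
For orientation only, a minimal sketch of that flow, assuming hypothetical helper names (`sign_cloudevent`, `sign_fn`), an illustrative attribute order, and an illustrative `payloadType`; the proposal text under review is authoritative:

```python
import base64
import hashlib
import json

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def pae(payload_type: str, payload: bytes) -> bytes:
    """DSSE pre-authentication encoding: the byte string that actually gets signed."""
    t = payload_type.encode("utf-8")
    return b"DSSEv1 " + str(len(t)).encode() + b" " + t + b" " + \
           str(len(payload)).encode() + b" " + payload

def sign_cloudevent(event: dict, sign_fn, keyid: str) -> dict:
    """Hypothetical helper: build the digest chain, wrap it in a DSSE envelope,
    and attach the envelope as the Base64-encoded 'dssematerial' extension."""
    payload_type = "application/vnd.cloudevents.verification+json"  # illustrative

    # Digest each context attribute (empty string when absent), then the event data.
    chain = b"".join(
        sha256(str(event.get(attr) or "").encode("utf-8"))
        for attr in ("id", "source", "specversion", "type",
                     "datacontenttype", "dataschema", "subject", "time"))
    chain += sha256(json.dumps(event.get("data")).encode("utf-8"))
    payload = sha256(chain)  # digest of the concatenated digest list

    envelope = {
        "payloadType": payload_type,
        "payload": base64.b64encode(payload).decode("ascii"),
        "signatures": [{
            "keyid": keyid,
            "sig": base64.b64encode(sign_fn(pae(payload_type, payload))).decode("ascii"),
        }],
    }
    signed = dict(event)
    signed["dssematerial"] = base64.b64encode(
        json.dumps(envelope).encode("utf-8")).decode("ascii")
    return signed
```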

This enables cryptographic proof that events:

  1. Were produced by the claimed source (authenticity)
  2. Were not modified in transit (integrity)

Does NOT address: event ordering, completeness, replay attacks, or confidentiality

Fixes #1302

Release Note


xibz added 3 commits October 6, 2025 15:50
feat: Add DSSE-based cryptographic signing for CloudEvents verification

Signed-off-by: xibz <[email protected]>
@@ -0,0 +1,443 @@
# Proposal: Verifiable CloudEvents with DSSE
Collaborator

please wrap all files at 80 chars


## Goals

This proposal introduces a design for verifiable CloudEvents that is agnostic of delivery protocols and event formats.
Collaborator

s/proposal/extension/

let's be optimistic :-)

## Non-goals

This proposal only applies to individual events.
It does not give consumers any guarantees about the completeness of the event stream or the order that events are delivered in.
Collaborator

remove the " in" at the end.

It does not give consumers any guarantees about the completeness of the event stream or the order that events are delivered in.

The threats of a malicious actor preventing events from being delivered or swapping their order are not addressed by this proposal.
Neither are the possibilities of events being accidentally dropped, delivered in the wrong order or the same event being delivered multiple times.
Collaborator

the "wrong order" problem is mentioned in each of the 3 above sentences... kind of repetitive. I think just once is enough.


The threats of a malicious actor preventing events from being delivered or swapping their order are not addressed by this proposal.
Neither are the possibilities of events being accidentally dropped, delivered in the wrong order or the same event being delivered multiple times.
These challenges can be addressed through means of adding the necessary information inside the event payloads.
Collaborator

@duglin duglin Oct 8, 2025

Kind of a tease to imply there are solutions for this. I would recommend mentioning at least one solution (if one exists), or if none, then just say this problem is "out of scope of this specification"

Neither are the possibilities of events being accidentally dropped, delivered in the wrong order or the same event being delivered multiple times.
These challenges can be addressed through means of adding the necessary information inside the event payloads.

Because the CloudEvents specification [requires](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/spec.md#id) the combination of event `source` and `id` to be unique per event, signature replays for identical events are not considered.
Collaborator

is "signature replays" a "security" phrase? I wonder if everyone will know what's being said here. I'm not sure I do :-) Is it saying "we don't guarantee that replays will look the same", or is it saying "different events with the same id/source might look the same even if other data is different"? Or something else?

It does not aim to enable *confidentiality*.
Consequently, it does not address the threat of unauthorized parties being able to read CloudEvents that were not meant for them (see [Privacy & Security](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/spec.md#privacy-and-security) in the CloudEvents spec).

While the design in this proposal *can* be used by authorized intermediaries to modify and re-sign events, it explicitly does not aim to provide a cryptographic audit trail of event modifications.
Collaborator

s/proposal/extension/


## Constraints

We have set the following constraints for the proposed design:
Collaborator

@duglin duglin Oct 8, 2025

The following design constraints are defined:

remove "We have set"

**Verifiability MUST be OPTIONAL:** This ensures that the additional burden of producing verification material and performing verification only applies when verifiability is desired, which is not always the case.

**The design MUST be backward compatible:** Backward compatibility ensures that producers can produce verifiable events without any knowledge about whether the consumers have been configured to and are able to verify events.
Consumers that do not support verification can consumer signed events as if they were unsigned.
Collaborator

s/can consumer/can consume/


We have set the following constraints for the proposed design:

**Verifiability MUST be OPTIONAL:** This ensures that the additional burden of producing verification material and performing verification only applies when verifiability is desired, which is not always the case.
Collaborator

I wonder if it's clear that just because a producer adds the extra verifiable stuff to the CE, the consumer is not required to do anything with it or even know what those extensions mean? Could just be me.


## Overview

The producer passes a CloudEvent to the SDK, which creates the verification material and adds it to the CloudEvent. When the consumer’s CloudEvents SDK receives a message with event and verification material, it performs a verification of the signature against the key and passes on a verified event to the consumer:
Collaborator

@duglin duglin Oct 8, 2025

This paragraph, and the picture, are good, but it feels a bit awkward to me.

I think it needs to either say:

  • that this SDK usage is an example of how the end-to-end flow might look, just for the sake of understanding what the extension is doing
  • or move this down into the "examples" section

My concern with how this is presented is that it comes across like it requires an SDK and, at least initially, this doc needs to present the "on the wire" changes that this extension is defining. Meaning, what new CE attributes are defined. How they get into the CE is an implementation detail - and an SDK is one option.

IOW, I think just starting this section with the "The verification material is transported..." sentence below is good enough.


The verification material is transported in an [Extension Context Attribute](https://github.com/cloudevents/spec/blob/main/cloudevents/spec.md#extension-context-attributes) called `dssematerial`:

* Type: `String`
Collaborator

You may want to follow the pattern in other extensions of having an "Attributes" section - see: https://github.com/cloudevents/spec/blob/52387e31cc41688ba1ce56ec8f040554d3517592/cloudevents/extensions/authcontext.md#attributes

* Type: `String`
* Description: The [DSSE JSON Envelope](https://github.com/secure-systems-lab/dsse/blob/master/envelope.md) that can be used to verify the authenticity and integrity of the CloudEvent.
* Constraints:
* OPTIONAL
Collaborator

I think this is REQUIRED not OPTIONAL.

Let's add in a "Notational Conventions" section like: https://github.com/cloudevents/spec/blob/52387e31cc41688ba1ce56ec8f040554d3517592/cloudevents/extensions/authcontext.md#notational-conventions then you'll see that the last paragraph in there explains why it's REQUIRED not OPTIONAL.

* OPTIONAL
* If present, MUST be Base64 encoded
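
For illustration, a sketch of where the attribute sits on a JSON structured-mode event; every value is a placeholder and the Base64 string is truncated:

```python
# Hypothetical structured-mode CloudEvent; 'dssematerial' holds a Base64-encoded
# DSSE JSON envelope (value truncated here as a placeholder).
signed_event = {
    "specversion": "1.0",
    "id": "A234-1234-1234",
    "source": "https://example.com/producer",
    "type": "com.example.object.created.v1",
    "datacontenttype": "application/json",
    "data": {"message": "hello"},
    "dssematerial": "eyJwYXlsb2FkVHlwZSI6...",
}
```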

The verification material, once Base64 decoded, looks something like this:
Collaborator

"looks something like" isn't precise enough. Are all of those 5 fields required? If so, then say "MUST adhere to the following form". If the fields can change based on the mechanism used to do the verifying then we'll need to be more creative, so let's discuss because across all mechanisms/formats there MUST be at least one consistent field (payloadType I assume) that people can rely upon to disambiguating them.

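For illustration, a sketch of the decoded material following the DSSE JSON Envelope layout; all values are placeholders and the `payloadType` shown is an assumption, not something this extension defines:

```python
# Decoded 'dssematerial' contents (placeholder values), per the DSSE JSON Envelope spec.
decoded_material = {
    "payloadType": "application/vnd.cloudevents.verification+json",  # assumed, illustrative
    "payload": "ZGlnZXN0LWNoYWluLXBsYWNlaG9sZGVy",  # Base64 of the digest described below
    "signatures": [
        {"keyid": "example-key-1", "sig": "MEUCIQ...signature-placeholder..."},
    ],
}
```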

It is the digest of the concatenated digest list of the mandatory Context Attributes, the OPTIONAL Context Attributes as well as the event data itself.
Collaborator

Given that extensions are similar to optional spec-defined attributes, should this include extensions?


* *In [CloudEvent’s type system](https://github.com/cloudevents/spec/blob/main/cloudevents/spec.md#type-system) a `Timestamp`’s string encoding is [RFC 3339](https://tools.ietf.org/html/rfc3339). This means that verification of the `time` Context Attribute can only be done with second precision, even though an SDK might allow passing in a timestamp with nanosecond precision.*
* *In [CloudEvent’s official Protocol Buffers format](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/formats/cloudevents.proto#L57), the `time` Context Attribute is encoded as a `google.protobuf.Timestamp` and hence does not include time zone information (which RFC 3339 would allow). For interoperability with CloudEvent setups using the Protocol Buffers format, time zone information is ignored in the signing and verification process.*
* *Contrary to all other CloudEvent SDKs, the Javascript SDK returns the current time instead of an empty or null value when a CloudEvent has no `time` Context Attribute. Consequently, signed CloudEvents without time information will not be verifiable in the Javascript SDK.*
Collaborator

the JS SDK adds it to outgoing or to incoming CEs? If incoming I think that's a bug. If outgoing, I'm not sure we need to say anything at all since I think the proposal still works, doesn't it?

Author

Ah it's only outgoing, so it just needs to be included in the signature computation.


*Notes:*

* *In [CloudEvent’s type system](https://github.com/cloudevents/spec/blob/main/cloudevents/spec.md#type-system) a `Timestamp`’s string encoding is [RFC 3339](https://tools.ietf.org/html/rfc3339). This means that verification of the `time` Context Attribute can only be done with second precision, even though an SDK might allow passing in a timestamp with nanosecond precision.*
Collaborator

Why can't we just treat the timestamp as a string? I don't think the exact format (or precision) should impact the verification, should it?

5. compute the SHA256 digest of the event's [`datacontenttype`](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/spec.md#datacontenttype) Context Attribute in UTF8 and append it to the byte sequence *(if the attribute is not set, use the digest of the empty byte sequence)*
6. compute the SHA256 digest of the event's [`dataschema`](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/spec.md#dataschema) Context Attribute in UTF8 and append it to the byte sequence *(if the attribute is not set, use the digest of the empty byte sequence)*
7. compute the SHA256 digest of the event's [`subject`](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/spec.md#subject) Context Attribute in UTF8 and append it to the byte sequence *(if the attribute is not set, use the digest of the empty byte sequence)*
8. compute the SHA256 digest of the event's [`time`](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/spec.md#time) in RFC 3339 Zulu format and append it to the byte sequence *(if the attribute is not set, use the digest of the empty byte sequence)*
Collaborator

Why not treat it as an opaque string? Then we don't need to worry what TZ it uses.

Author

@xibz xibz Oct 30, 2025

The time normalization to Zulu is intentional and necessary. Since verification happens after deserialization on the CloudEvent object (not on raw JSON), we need to handle cases where intermediaries deserialize and reserialize events. We did this to ensure flexibility in WHEN verification could happen. Noting @jskeet's comment from a couple weeks ago about needing to verify against the raw incoming bytes of the HTTP request, this now allows you to deserialize and then compute the SHA256 digests individually, because the spec is explicit about the order of the fields and about HOW time should be formatted.

But I will update point 8 to help clarify.

Collaborator

I'm still not following.

"time" is optional and serialized as a string, which means it can't be required to be there for the signing or verification process. So, MUST we touch it at all if present? Why not just pass it along like any other attribute. I would be bothered if my middleware changed my data on me.

Author

@xibz xibz Oct 30, 2025

We're not changing the time data - we're normalizing it ONLY for signature computation, not in the event itself. (AI helping me with an example, but the last paragraph is the important bit) :)

The event keeps whatever time format it has: 2020-06-18T17:24:53+02:00

But when computing the signature hash, we normalize to Zulu: 2020-06-18T17:24:53Z

This is necessary because:

  1. Producer sends "time": "2020-06-18T17:24:53+02:00"
  2. Middleware deserializes/reserializes as "time": "2020-06-18T15:24:53Z" (same moment, different string)
  3. Consumer receives different string representation

Without normalization for signing:

  • Producer signs SHA256("2020-06-18T17:24:53+02:00")
  • Consumer verifies SHA256("2020-06-18T15:24:53Z")
  • Verification fails even though it's the same timestamp

With normalization:

  • Producer signs SHA256("2020-06-18T17:24:53Z") [normalized]
  • Consumer verifies SHA256("2020-06-18T17:24:53Z") [normalized]
  • Verification succeeds

The actual time attribute in the event is never modified - only the signature computation normalizes it. Think of it like Unicode normalization for signatures - you normalize for comparison but don't change the actual data.

Mind you, without this you lose the flexibility on where verification can happen. Also, this assumes you are marshaling and unmarshaling into a structure that uses some time object. If it is modeled as a string, no issues, but SDKs generally model time as some time object.
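
For concreteness, a minimal sketch of that normalize-only-for-signing idea; the helper name is hypothetical and only the digesting of `time` is shown:

```python
import hashlib
from datetime import datetime, timezone

def time_digest_for_signing(time_attr):
    """Digest the 'time' attribute normalized to UTC ('Zulu') at second precision.
    Only the signature computation uses the normalized form; the event's own
    'time' value is never rewritten."""
    if not time_attr:
        return hashlib.sha256(b"").digest()  # absent attribute: digest of empty bytes
    parsed = datetime.fromisoformat(time_attr.replace("Z", "+00:00"))
    zulu = parsed.astimezone(timezone.utc).replace(microsecond=0)
    return hashlib.sha256(zulu.strftime("%Y-%m-%dT%H:%M:%SZ").encode("utf-8")).digest()

# Same instant, different serializations, identical digest:
assert time_digest_for_signing("2020-06-18T17:24:53+02:00") == \
       time_digest_for_signing("2020-06-18T15:24:53Z")
```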

Collaborator

ok, that helps, thanks. Then I think the note you have about times only having "second precision" needs to be a normative requirement and not a note, otherwise nanoseconds could mess with it, no?

Collaborator

Meaning, add the note text to this bullet with a MUST ??

*Notes:*

* *Base64 as per [RFC 4648](https://tools.ietf.org/html/rfc4648)*
* *[RFC 3339](https://tools.ietf.org/html/rfc3339)*
Collaborator

@duglin duglin Oct 8, 2025

Not sure what this "note" is trying to say. I think just making each reference to RFC3339 a hyperlink is sufficient.
Same for base64 in the previous bullet

9. if the values are not equal, the event has been modified in transit and MUST be discarded
10. the event is returned as verified successfully.

Upon verification of a CloudEvent, implementations MUST return a new event containing only verified data: the Context Attributes (REQUIRED and OPTIONAL) plus the event data. Extension Context Attributes MUST NOT be included in the verified event. This ensures clear separation between verified and unverified data. Users handle either a complete unverified event or a verified event with only verified values—never a mixture of both.
Collaborator

I might have missed it, but how does this proposal handle nested verifications? Eg. middleware adds new attributes, verifies and then removes it.

sender (adds verification stuff) -> middleware sender (adds attrs (or not), adds new verify stuff) -> middleware receiver (verifies, removes middleware-specific verify stuff) -> ultimate receiver (verifies stuff).

Related, I think we need to support extension attributes - either a select list or all of them. I think "all" would be easier since then we don't need to pass along a list of attributes that are part of the verification. I think it would also simplify the algorithm down to:

  • loop over all attributes (in alphabetical order)
    • do the SHA256 stuff above on each one - probably using NAME:VALUE instead of just VALUE

Then there's no special rules for each attribute and if new attributes are added it'll cause the verify to fail - as it should.
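
A minimal sketch of that suggested variant (attribute-agnostic, alphabetical, NAME:VALUE), to make the comparison concrete; this is not what the proposal currently specifies:

```python
import hashlib

def attribute_agnostic_digest(attributes: dict, data: bytes) -> bytes:
    """Hash every context attribute, including extensions, as NAME:VALUE in
    alphabetical order, then hash the concatenated digests together with the
    digest of the event data."""
    chain = b""
    for name in sorted(attributes):
        value = "" if attributes[name] is None else str(attributes[name])
        chain += hashlib.sha256(f"{name}:{value}".encode("utf-8")).digest()
    chain += hashlib.sha256(data).digest()
    return hashlib.sha256(chain).digest()
```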

Author

@xibz xibz Oct 30, 2025

Nested verification/attestation chains solve a different problem than what this proposal addresses. This extension is specifically for verifying event authenticity and integrity from producer to consumer.

What you're describing is about building an audit trail of modifications or proving a chain of custody. That's a valuable but completely different use case.

If there's need for modification audit trails or nested attestations, that should be a separate extension proposal. Mixing the two would complicate both use cases.

Also, there are formats that handle chaining of signatures, like Chain Signatures (Zhou, Redline, 2005) or Git's commit signature chains, that could inspire the next proposal.

Collaborator

No, this isn't about audits or chains... it's about environments where messages are sent through middleware that will sign/verify messages without the apps on either side knowing about it. Which means if there are multiple layers of that middleware, then the design of this needs to support the idea of signing a message that has already been signed. And, of course, this will then also deal with cases where the middleware adds new attributes to the messages that need to be signed.

Author

What you're describing IS an attestation chain use case, even if the applications aren't aware of it. The middleware layers are creating a chain of signatures with different scopes.

Collaborator

probably a terminology difference, but they're doing it without any real knowledge of someone else already doing it too - beyond perhaps whatever mechanism we have to keep them in order. Both the latest middleware and the previous middleware should do their jobs of signing and verifying (and removing their signatures) without knowledge of the other layers. What you're suggesting, I think, is that there can only ever be one signing middleware in the picture and we can never add attributes to an existing CE that's signed. Seems kind of restrictive.
