
chore(deps): update module github.com/nats-io/nats-server/v2 to v2.11.12 [security] #1268

Merged
NumaryBot merged 1 commit into main from renovate/go-gitlite.zycloud.tk-nats-io-nats-server-v2-vulnerability
Mar 3, 2026

Conversation

@NumaryBot
Contributor

@NumaryBot NumaryBot commented Feb 25, 2026

This PR contains the following updates:

Package Type Update Change
github.com/nats-io/nats-server/v2 indirect patch v2.11.8 -> v2.11.12

GitHub Vulnerability Alerts

CVE-2026-27571

Impact

NATS messages received over WebSockets may be compressed using the negotiated WebSocket compression. The implementation bounded the size of a NATS message, but did not independently bound the memory consumed by the intermediate stream while constructing a message that might later fail size validation.

An attacker can use a compression bomb to cause excessive memory consumption, often resulting in the operating system terminating the server process.

The use of compression is negotiated before authentication, so this does not require valid NATS credentials to exploit.

The fix bounds the decompression so that it fails as soon as the message grows too large, instead of continuing on.

Patches

This fix was released in nats-server without being highlighted as a security issue. It should have been; that was an oversight. Per the NATS security policy, because exploitation does not require a valid user, it is CVE-worthy.

This was fixed in the v2.11 series with v2.11.12 and in the v2.12 series with v2.12.3.

Workarounds

This only affects deployments that use WebSockets and expose the WebSocket port to untrusted endpoints.
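Until the upgrade lands, a deployment that does not rely on WebSocket compression can simply decline to negotiate it. A minimal sketch of the relevant nats-server `websocket` configuration block (the port and certificate paths are placeholders; verify against the current server documentation before relying on this as a mitigation):

```conf
websocket {
  port: 8443
  tls {
    cert_file: "./server-cert.pem"
    key_file: "./server-key.pem"
  }
  # Refusing to negotiate permessage-deflate avoids the server-side
  # decompression path; clients fall back to uncompressed frames.
  compression: false
}
```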

References

This was reported to the NATS maintainers by Pavel Kohout of Aisle Research (www.aisle.com).


nats-server websockets are vulnerable to pre-auth memory DoS in github.com/nats-io/nats-server

BIT-nats-2026-27571 / CVE-2026-27571 / GHSA-qrvq-68c2-7grw / GO-2026-4533


Severity

Unknown

References

This data is provided by OSV and the Go Vulnerability Database (CC-BY 4.0).


nats-server websockets are vulnerable to pre-auth memory DoS

BIT-nats-2026-27571 / CVE-2026-27571 / GHSA-qrvq-68c2-7grw / GO-2026-4533


Severity

  • CVSS Score: 5.9 / 10 (Medium)
  • Vector String: CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:H

References

This data is provided by OSV and the GitHub Advisory Database (CC-BY 4.0).


Release Notes

nats-io/nats-server (github.com/nats-io/nats-server/v2)

v2.11.12

Compare Source

Changelog

Refer to the 2.11 Upgrade Guide for backwards compatibility notes with 2.10.x.

Go Version
Dependencies
  • github.com/nats-io/nkeys v0.4.12 (#​7578)
  • github.com/antithesishq/antithesis-sdk-go v0.5.0-default-no-op (#​7604)
  • github.com/klauspost/compress v1.18.3 (#​7736)
  • golang.org/x/crypto v0.47.0 (#​7736)
  • golang.org/x/sys v0.40.0 (#​7736)
  • github.com/google/go-tpm v0.9.8 (#​7696)
  • github.com/nats-io/nats.go v1.48.0 (#​7696)
Added

General

  • Added WebSocket-specific ping interval configuration with ping_interval in the websocket block (#​7614)

Monitoring

  • Added tls_cert_not_after to the varz monitoring endpoint for showing when TLS certificates are due to expire (#​7709)
Improved

JetStream

  • The scan for the last sourced message sequence when setting up a subject-filtered source is now considerably faster (#​7553)
  • Consumer interest checks on interest-based streams are now significantly faster when there are large gaps in interest (#​7656)
  • Creating consumer file stores no longer contends on the stream lock, improving consumer create performance on heavily loaded streams (#​7700)
  • Recalculating num pending with updated filter subjects no longer gathers and sorts the subject filter list twice (#​7772)
  • Switching to interest-based retention will now remove no-interest messages from the head of the stream (#​7766)

MQTT

  • Retained messages now work correctly even when sourced from a different account with a subject transform applied (#​7636)
Fixed

General

  • WebSocket connections will now correctly limit the buffer size during decompression (#​7625, thanks to Pavel Kohout at Aisle Research)
  • The config parser now correctly detects and errors on self-referencing environment variables (#​7737)
  • Internal functions for handling headers should no longer corrupt message bodies if appended (#​7752)

JetStream

  • A protocol error caused by an invalid transform of acknowledgement reply subjects when originating from a gateway connection has been fixed (#​7579)
  • The meta layer will now only respond to peer remove requests after quorum has been reached (#​7581)
  • Invalid subject filters containing non-terminating full wildcard no longer produce unexpected matches (#​7585)
  • A data race when creating a stream in clustered mode has been fixed (#​7586)
  • A panic when processing snapshots with missing nodes or assignments has been fixed (#​7588)
  • When purging whole message blocks, the subject tracking and scheduled messages are now updated correctly (#​7593)
  • The filestore will no longer unexpectedly lose writes when AsyncFlush is enabled after a process pause (#​7594)
  • The filestore now will process message removal on disk before updating accounting, which improves error handling (#​7595, #​7601)
  • Raft will no longer allow peer-removing the one remaining peer (#​7610)
  • A data race has been fixed in the stream health check (#​7619)
  • Tombstones are now correctly written for recovering the sequences after compacting or purging an almost-empty stream to seq 2 (#​7627)
  • Combining skip sequences and compactions will no longer overwrite the block at the wrong offset, correcting a corrupt record state error (#​7627)
  • Compactions that reclaim over half of the available space now use an atomic write to avoid losing messages if killed (#​7627)
  • Filestore compaction should no longer result in "no idx present" cache errors (#​7634)
  • Filestore compaction now correctly adjusts the high and low sequences for a message block, as well as cleaning up the deletion map accordingly (#​7634)
  • Potential stream desyncs that could happen during stream snapshotting have been fixed (#​7655)
  • Raft will no longer allow multiple membership changes to take place concurrently (#​7565, #​7609)
  • Raft will no longer count responses from peer-removed nodes towards quorum (#​7589)
  • Raft quorum counting has been refactored so the implicit leader ack is now only counted if still a part of the membership (#​7600)
  • Raft now writes the peer state immediately when handling a peer-remove to ensure the removed peers cannot unexpectedly reappear after a restart (#​7602)
  • Add peer operations to Raft can no longer result in disjoint majorities (#​7632)
  • Raft groups should no longer readmit a previously removed peer if a heartbeat occurs between the peer removal and the leadership transfer (#​7649)
  • Raft single node elections now transition into leader state correctly (#​7642)
  • R1 streams will no longer incorrectly drift last sequence when exceeding limits (#​7658)
  • Deleted streams are no longer wrongfully revived if stalled on an upper-layer catchup (#​7668)
  • A panic that could happen when receiving a shutdown signal while JetStream is still starting up has been fixed (#​7683)
  • JetStream usage stats now correctly reflect purged whole blocks when optimising large purges (#​7685)
  • Recovering JetStream encryption keys now happens independently of the stream index recovery, fixing some cases where the key could be reset unexpectedly if the index is rebuilt (#​7678)
  • Non-replicated file-based consumers now detect corrupted state on disk and are deleted automatically (#​7691)
  • Raft no longer allows a repeat vote for the same term after a stepdown or leadership transfer (#​7698)
  • Replicated consumers are no longer incorrectly deleted if they become leader just as JetStream is about to shut down (#​7699)
  • Fixed an issue where a single truncated block could prevent storing new messages in the filestore (#​7704)
  • Fixed a concurrent map iteration/write panic that could occur on WorkQueue streams during partitioning (#​7708)
  • Fixed a deadlock that could occur on shutdown when adding streams (#​7710)
  • A data race on mirror consumers has been fixed (#​7716)
  • JetStream no longer leaks subscriptions in a cluster when a stream import/export is set up that overlaps the $JS.> namespace (#​7720)
  • The filestore will no longer waste CPU time rebuilding subject state for WALs (#​7721)
  • Configuring cluster_traffic in config mode has been fixed (#​7723)
  • Subject intersection no longer misses certain subjects with specific patterns of overlapping filters, which could affect consumers, num pending calculations etc (#​7728, #​7741, #​7744, #​7745)
  • Multi-filtered next message lookups in the filestore can now skip blocks when faster to do so (#​7750)
  • The binary search for start times now handles deleted messages correctly (#​7751)
  • Consumer updates will now only recalculate num pending when the filter subjects are changed (#​7753)
  • Consumers on replicated interest or workqueue streams should no longer lose interest or cause desyncs after having their filter subjects updated (#​7773)
  • Interest-based streams will no longer start more check interest state goroutines when there are existing running ones (#​7769)

MQTT

  • The maximum payload size is now correctly enforced for MQTT clients (#​7555, thanks to @​yixianOu)
  • Fixed a panic that could occur when reloading config if the user did not have permission to access retained messages (#​7596)
  • Fixed account mapping for JetStream API requests when traversing non-JetStream-enabled servers (#​7598)
  • QoS0 messages are now mapped correctly across account imports/exports with subject mappings (#​7605)
  • Loading retained messages no longer fails after restarting due to last sequence checks (#​7616)
  • A bug which could corrupt retained messages in clustered deployments has been fixed (#​7622)
  • Permissions for $MQTT. subscriptions are now handled implicitly, except for deny ACLs, which can still restrict access (#​7637)
  • A bug where QoS2 messages could not be retrieved after a server restart has been fixed (#​7643)
Complete Changes

v2.11.11

Compare Source

Changelog

Refer to the 2.11 Upgrade Guide for backwards compatibility notes with 2.10.x.

Go Version
Dependencies
Added

JetStream

  • Added meta_compact and meta_compact_size, advanced JetStream config options to control how many log entries must be present in the metalayer log before snapshotting and compaction takes place (#​7484, #​7521)
  • Added write_timeout option for clients, routes, gateways and leafnodes which controls the behaviour on reaching the write_deadline, values can be default, retry or close (#​7513)

Monitoring

  • Meta cluster snapshot statistics have been added to the /jsz endpoint (#​7524)
  • The /jsz endpoint can now show direct consumers with the direct-consumers?true flag (#​7543)
Improved

General

  • Binary stream snapshots are now preferred by default for nodes on new route connections (#​7479)
  • Reduced allocations in the sublist and subject transforms (#​7519)

JetStream

  • Improved the logging for observer mode (#​7433)
  • Improved the performance of enforcing max_bytes and max_msgs limits (#​7455)
  • Streams and consumers will no longer unnecessarily snapshot when being removed or scaling down (#​7495)
  • Streams are now loaded in parallel when enabling JetStream, often reducing the time it takes to start up the server (#​7482)
  • Stream catchups will now use delete ranges more aggressively, speeding up catchups of large streams with many interior deletes (#​7512)
  • Streams with subject transforms can now implicitly republish based on those transforms by configuring > for both republish source and destination (#​7515)
  • A race condition where subscriptions may not be set up before catchup requests are sent after a leader change has been fixed (#​7518)
  • JetStream recovery parallelism now matches the I/O gated semaphore (#​7526)
  • Reduced heap allocations in hash checks (#​7539)
  • Healthchecks now correctly report when streams are catching up, instead of showing them as unhealthy (#​7535)
  • Improved interest detection when consumers are created or deleted across different servers (#​7440)

Monitoring

  • The jsz monitoring endpoint can now report leader counts (#​7429)
Fixed

General

  • When using message tracing, header corruption when setting the hop header has been fixed (#​7443)
  • Shutting down a server using lame-duck mode should no longer result in max connection exceeded errors (#​7527)

JetStream

  • Race conditions and potential panics fixed in the handling of some JetStream API handlers (#​7380)
  • The filestore no longer loses tombstones when using secure erase (#​7384)
  • The filestore no longer loses the last sequence when recovering blocks containing only tombstones (#​7384)
  • The filestore now correctly cleans up empty blocks when selecting the next first block (#​7384)
  • The filestore now correctly obeys sync_always for writing TTL and scheduling state files (#​7385)
  • Fixed a data race on a wait group when mirroring streams (#​7395)
  • Skipped message sequences are now checked for ordering before apply, fixing a potential stream desync on catchups (#​7400)
  • Skipped message sequences now correctly detect gaps from erased message slots, fixing potential cache issues, slow reads and issues with catchups (#​7399, #​7401)
  • Raft groups now report peer activity more consistently, fixing some cases where asset info and monitoring endpoints may report misleading values after leader changes (#​7402)
  • Raft groups will no longer permit truncations from unexpected catchup entries if the catchup is completed (#​7424)
  • The filestore will now correctly release locks when erasing messages returns an error (#​7431)
  • Caches will now no longer expire unnecessarily when re-reading the same sequences multiple times in first-matching code paths (#​7435)
  • A couple of issues related to header handling have been fixed (#​7465)
  • No-wait requests now return a 400 No Messages response correctly if the stream is empty (#​7466)
  • Raft groups will now only report leadership status after a no-op entry on recovery (#​7460)
  • Fixed a race condition in the filestore that could happen between storing messages and shutting down (#​7496)
  • A panic that could occur when recovering streams in parallel has been fixed (#​7503)
  • An off-by-one when detecting holes at the end of a filestore block has been fixed (#​7508)
  • Writing skip message records in the filestore no longer releases and reacquires the lock unnecessarily (#​7508)
  • Fixed a bug on metalayer recovery where stream and consumer monitor goroutines for recreated assets would run with the wrong Raft group (#​7510)
  • Scaling up an asset from R1 now results in an installed snapshot, allowing recovery after restart if interrupted, avoiding a potential desync (#​7509)
  • Raft groups should no longer report no quorum incorrectly when shutting down (#​7522)
  • Consumers that existed in a metalayer snapshot but were deleted on recovery will no longer result in failing healthchecks (#​7523)
  • An off-by-one when detecting holes at the end of a filestore block has been fixed (#​7525)
  • Fixed a race condition that could happen with shutdown signals when shutting down JetStream (#​7536)
  • Fixed a deadlock that could occur when purging a stream with mismatched consumer state (#​7546)
Complete Changes

v2.11.10

Compare Source

Changelog

Refer to the 2.11 Upgrade Guide for backwards compatibility notes with 2.10.x.

Go Version
  • 1.24.7
Dependencies
  • golang.org/x/crypto v0.42.0 (#​7320)
  • github.com/google/go-tpm v0.9.6 (#​7376)
  • github.com/nats-io/nats.go v1.46.1 (#​7377)
Improved

General

  • Statistics for gateways, routes and leaf connections are now correctly omitted from accstatsz responses if empty (#​7300)

JetStream

  • Stream assignment check has been simplified (#​7290)
  • Additional guards prevent panics when loading corrupted messages from the filestore (#​7299)
  • The store lock is no longer held while searching for TTL expiry tasks, improving performance (#​7344)
  • Removing a message from the TTL state is now faster (#​7344)
  • The filestore no longer performs heap allocations for hash checks (#​7345)
  • Meta snapshot performance for a very large number of assets has been improved after a regression in v2.11.9 (#​7350)
  • Sequence-from-timestamp lookups, such as those using opt_start_time on consumers or start_time on message get requests, now use a binary search for improved lookup performance (#​7357)
  • JetStream API requests are always handled from the worker pool, improving the semantics of the API request queue and logging when requests take too long (#​7125)
  • JetStream will no longer perform a metalayer snapshot on every stream removal request, reducing API pauses and improving meta performance (#​7373)
Fixed

General

  • Fixed the exit code when receiving a SIGTERM signal immediately after startup (#​7367)

JetStream

  • Fixed a use-after-free bug and a buffer reclamation issue in the filestore flusher (#​7295)
  • Direct get requests now correctly skip over deleted messages if the starting sequence is itself deleted (#​7291)
  • The Raft layer now strictly enforces that non-leaders cannot send append entries (#​7297)
  • The filestore now correctly handles recovering filestore blocks with out-of-order sequences from disk corruption (#​7303, #​7304)
  • The filestore now produces more useful error messages when disk corruption is detected (#​7305)
  • Removed messages with a per-message TTL are now removed from the TTL state immediately (#​7344)
  • Fixed a bug where, when TTL state was recovered on startup with subject delete markers enabled, message expiry would not start as expected (#​7344)
  • Expiring messages from the filestore no longer leaks timers and now expires at the correct time (#​7344)
  • Deleting a non-existent sequence on a stream no longer results in a cluster reset and leadership election (#​7348)
  • Subject tree intersection now correctly handles overlapping literals and partial wildcards, i.e. stream.A and stream.*.A, fixing some consumer or message get filters (#​7349)
  • A data race when checking all JetStream limits has been fixed (#​7356)
  • Raft will no longer trigger a reset of the clustered state due to a stream snapshot timeout (#​7293)
Complete Changes

v2.11.9

Compare Source

Changelog

Refer to the 2.11 Upgrade Guide for backwards compatibility notes with 2.10.x.

Go Version
Dependencies
Improved

JetStream

  • Offline assets support (#​7158)
    • Server version 2.12 will introduce new features that would otherwise break a 2.11 server after a downgrade. The server now reports such streams/consumers as offline and unsupported, keeping the data safe while allowing you to either delete the asset or upgrade back to a supported version without changes to the data itself.
  • The raftz endpoint now reports the cluster traffic account (#​7186)
  • The stream info and consumer info endpoints now return leader_since (#​7189)
  • The stream info and consumer info endpoints now return system_account and traffic_account (#​7193)
  • The jsz monitoring endpoint now returns system_account and traffic_account (#​7193)
Fixed

General

  • Fix a panic that could happen at startup if building from source using non-Git version control (#​7178)
  • Fix an issue where issuing an account JWT update with a connection limit could cause older clients to be disconnected instead of newer ones (#​7181, #​7185)
  • Route connections with invalid credentials will no longer rapidly reconnect (#​7200)
  • Allow a default_sentinel JWT from a scoped signing key instead of requiring it to solely be a bearer token for auth callout (#​7217)
  • Subject interest would not always be propagated for leaf nodes when daisy chaining imports/exports (#​7255)
  • Subject interest would sometimes be lost if the leaf node is a spoke (#​7259)
  • Lowering the max connections limit should no longer result in streams losing interest (#​7258)

JetStream

  • The Nats-TTL header will now be correct if the subject delete marker TTL overwrites it (#​7177)
  • In operator mode, the cluster_traffic state for an account is now restored correctly when enabling JetStream at startup (#​7191)
  • A potential data race during a consumer create or update when reading its paused state has been fixed (#​7201)
  • A race condition that could allow creating a consumer with more replicas than the stream has been fixed (#​7202)
  • A race condition that could allow creating the same stream with different configurations has been fixed (#​7210, #​7212)
  • Raft will now correctly reject delayed entries from an old leader when catching up in the meantime (#​7209, #​7239)
  • Raft will now also limit the amount of cached in-memory entries as the leader, avoiding excessive memory usage (#​7233)
  • A potential race condition delaying shutdown if a stream/consumer monitor goroutine was not started has been fixed (#​7211)
  • A benign underflow when using an infinite (-1) MaxDeliver for consumers has been fixed (#​7216)
  • A potential panic when sending a leader elected advisory while shutting down before completing startup has been fixed (#​7246)
  • Stopping a stream should no longer wait indefinitely if the consumer monitor goroutine wasn’t stopped (#​7249)
  • Stream mirroring and sourcing now recover faster after a leaf node reconnects in complex topologies (#​7265)
  • Updating a stream with an empty placement will no longer incorrectly trigger a stream move (#​7222)

Tests

Complete Changes

Configuration

📅 Schedule: Branch creation - "" (UTC), Automerge - At any time (no schedule defined).

🚦 Automerge: Enabled.

Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.

👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.


  • If you want to rebase/retry this PR, check this box

This PR has been generated by Renovate Bot.

@NumaryBot NumaryBot requested a review from a team as a code owner February 25, 2026 03:00
@NumaryBot NumaryBot enabled auto-merge February 25, 2026 03:00
@NumaryBot
Contributor Author

ℹ Artifact update notice

File name: go.mod

In order to perform the update(s) described in the table above, Renovate ran the go get command, which resulted in the following additional change(s):

  • 12 additional dependencies were updated

Details:

Package Change
github.com/nats-io/nats.go v1.44.0 -> v1.48.0
github.com/google/go-tpm v0.9.5 -> v0.9.8
golang.org/x/mod v0.30.0 -> v0.31.0
github.com/klauspost/compress v1.18.0 -> v1.18.3
github.com/minio/highwayhash v1.0.3 -> v1.0.4-0.20251030100505-070ab1a87a76
github.com/nats-io/jwt/v2 v2.7.4 -> v2.8.0
github.com/nats-io/nkeys v0.4.11 -> v0.4.12
golang.org/x/crypto v0.46.0 -> v0.47.0
golang.org/x/net v0.47.0 -> v0.48.0
golang.org/x/text v0.32.0 -> v0.33.0
golang.org/x/time v0.12.0 -> v0.14.0
golang.org/x/tools v0.39.0 -> v0.40.0
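For reference, this bump is equivalent to running `go get github.com/nats-io/nats-server/v2@v2.11.12` followed by `go mod tidy` in the module root, after which go.mod carries the patched indirect requirement (excerpt only; the surrounding require block is assumed):

```
// go.mod (excerpt after the update)
require (
	github.com/nats-io/nats-server/v2 v2.11.12 // indirect
)
```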

@coderabbitai
Contributor

coderabbitai bot commented Feb 25, 2026

Important

Review skipped

Review was skipped due to path filters

⛔ Files ignored due to path filters (6)
  • go.mod is excluded by !**/*.mod
  • go.sum is excluded by !**/*.sum
  • tools/generator/go.mod is excluded by !**/*.mod
  • tools/generator/go.sum is excluded by !**/*.sum
  • tools/provisioner/go.mod is excluded by !**/*.mod
  • tools/provisioner/go.sum is excluded by !**/*.sum

CodeRabbit blocks several paths by default. You can override this behavior by explicitly including those paths in the path filters. For example, including **/dist/** will override the default block on the dist directory, by removing the pattern from both the lists.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


@codecov

codecov bot commented Feb 25, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 80.76%. Comparing base (5a97716) to head (18f742c).

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1268      +/-   ##
==========================================
- Coverage   80.78%   80.76%   -0.02%     
==========================================
  Files         205      205              
  Lines       10929    10929              
==========================================
- Hits         8829     8827       -2     
  Misses       1524     1524              
- Partials      576      578       +2     

☔ View full report in Codecov by Sentry.

@NumaryBot NumaryBot force-pushed the renovate/go-gitlite.zycloud.tk-nats-io-nats-server-v2-vulnerability branch from 18f742c to 2627e67 Compare March 3, 2026 03:01
@NumaryBot NumaryBot added this pull request to the merge queue Mar 3, 2026
Merged via the queue into main with commit d610930 Mar 3, 2026
6 of 7 checks passed
@NumaryBot NumaryBot deleted the renovate/go-gitlite.zycloud.tk-nats-io-nats-server-v2-vulnerability branch March 3, 2026 08:50