Memory leak: prune empty topic entries from gossipsub topics map #542
Summary
While investigating a network-thread memory leak in Lodestar, I found a secondary leak pattern in @libp2p/gossipsub: the internal topics map retains empty topic entries indefinitely.
`topics` is declared as:

```ts
private readonly topics = new Map<TopicStr, Set<PeerIdStr>>()
```

Entries are created for every topic string seen in remote subscription updates, but empty entries are never pruned when the last peer unsubscribes or disconnects.
Root cause
Entry creation
`handleReceivedSubscription()` creates a map entry on first sight of a topic:

```ts
let topicSet = this.topics.get(topic)
if (topicSet == null) {
  topicSet = new Set()
  this.topics.set(topic, topicSet)
}
```

Missing deletion on unsubscribe
When a peer unsubscribes, only the peer id is removed from the set:

```ts
topicSet.delete(from.toString())
```

If that was the last peer, the now-empty `Set` remains in `this.topics` forever.
Missing deletion on disconnect
Similarly, `removePeer()` iterates all topic sets and deletes the peer id:

```ts
for (const peers of this.topics.values()) {
  peers.delete(id)
}
```

Again, empty sets are left behind and the topic key is never removed.
Why this matters
On Ethereum consensus clients, peers advertise many topic strings over time (fork-specific gossip topics, attestation subnets, etc.). If empty topic entries are retained forever, this.topics.size grows monotonically with topic churn.
That has two effects:
- Direct retention: each stale topic entry retains the topic string key plus an empty `Set`
- Secondary allocation pressure: heartbeat and publish paths iterate topic structures and allocate per-topic helpers/arrays
Heap evidence from a real Lodestar node
From network-thread heap snapshots on a production-like node:
- post-deploy snapshot @ 2026-03-12T11:50:05Z: 2,177,153 nodes
- later snapshot @ 2026-03-12T22:00:01Z: 2,838,135 nodes
- delta over ~10h: +660,982 nodes
Relevant class growth over that 10h window:
- Set: 38,022 → 53,540 (+15,518)
- Object: 550,382 → 754,118 (+203,736)
- Buffer: +1,813
- Uint8Array: +1,972
- ArrayBuffer: +1,365
At the same time, the original req/resp leak suspects were flat or decreasing:
- Connection: -5
- Socket: -6
- MplexStream: +4
We also saw growth in retained topic strings such as:
- beacon_attestation_35: 4 → 321
- beacon_attestation_57: 5 → 302
This points away from stream/socket retention and toward topic bookkeeping.
Minimal fix
Prune topic entries when the last peer is removed.
On unsubscribe

```ts
topicSet.delete(from.toString())
if (topicSet.size === 0) {
  this.topics.delete(topic)
}
```

On peer disconnect

```ts
for (const [topic, peers] of this.topics) {
  peers.delete(id)
  if (peers.size === 0) {
    this.topics.delete(topic)
  }
}
```

Tests to add
Two regression tests seem sufficient:
- remote unsubscribe cleanup
  - peer A subscribes to a topic
  - peer B receives the subscription update
  - peer A unsubscribes
  - assert `topics.has(topic)` is false on peer B
- peer disconnect cleanup
  - peer A subscribes to a topic
  - peer B receives the subscription update
  - peer A disconnects
  - assert `topics.has(topic)` is false on peer B
Notes
This is distinct from the req/resp clearable-signal leak we were originally debugging. That original leak appears fixed; this issue showed up only after the first fix reduced the baseline enough to expose the next growth vector.
If helpful, I can turn the minimal fix above into a PR.