
Memory leak: prune empty topic entries from gossipsub topics map #542

@lodekeeper


Summary

While investigating a network-thread memory leak in Lodestar, I found a secondary leak pattern in @libp2p/gossipsub: the internal topics map retains empty topic entries indefinitely.

topics is declared as:

private readonly topics = new Map<TopicStr, Set<PeerIdStr>>()

Entries are created for every topic string seen in remote subscription updates, but empty entries are never pruned when the last peer unsubscribes or disconnects.

Root cause

Entry creation

handleReceivedSubscription() creates a map entry on first sight of a topic:

let topicSet = this.topics.get(topic)
if (topicSet == null) {
  topicSet = new Set()
  this.topics.set(topic, topicSet)
}

Missing deletion on unsubscribe

When a peer unsubscribes, only the peer id is removed from the set:

topicSet.delete(from.toString())

If that was the last peer, the now-empty Set remains in this.topics forever.

Missing deletion on disconnect

Similarly, removePeer() iterates all topic sets and deletes the peer id:

for (const peers of this.topics.values()) {
  peers.delete(id)
}

Again, empty sets are left behind and the topic key is never removed.

Why this matters

On Ethereum consensus clients, peers advertise many topic strings over time (fork-specific gossip topics, attestation subnets, etc.). If empty topic entries are retained forever, this.topics.size grows monotonically with topic churn.

That has two effects:

  1. Direct retention: each stale entry retains its topic string key plus an empty Set
  2. Secondary allocation pressure: heartbeat and publish paths iterate topic structures and allocate per-topic helpers/arrays, so their cost scales with the inflated topic count

Heap evidence from a real Lodestar node

From network-thread heap snapshots on a production-like node:

  • post-deploy snapshot @ 2026-03-12T11:50:05Z: 2,177,153 nodes
  • later snapshot @ 2026-03-12T22:00:01Z: 2,838,135 nodes
  • delta over ~10h: +660,982 nodes

Relevant class growth over that 10h window:

  • Set: 38,022 → 53,540 (+15,518)
  • Object: 550,382 → 754,118 (+203,736)
  • Buffer: +1,813
  • Uint8Array: +1,972
  • ArrayBuffer: +1,365

At the same time, the original req/resp leak suspects were flat or decreasing:

  • Connection: -5
  • Socket: -6
  • MplexStream: +4

We also saw growth in retained topic strings such as:

  • beacon_attestation_35: 4 → 321
  • beacon_attestation_57: 5 → 302

This points away from stream/socket retention and toward topic bookkeeping.

Minimal fix

Prune topic entries when the last peer is removed.

On unsubscribe

topicSet.delete(from.toString())
if (topicSet.size === 0) {
  this.topics.delete(topic)
}

On peer disconnect

for (const [topic, peers] of this.topics) {
  peers.delete(id)
  if (peers.size === 0) {
    this.topics.delete(topic)
  }
}

Tests to add

Two regression tests seem sufficient:

  1. remote unsubscribe cleanup

    • peer A subscribes to topic
    • peer B receives the subscription update
    • peer A unsubscribes
    • assert topics.has(topic) is false on peer B
  2. peer disconnect cleanup

    • peer A subscribes to topic
    • peer B receives the subscription update
    • peer A disconnects
    • assert topics.has(topic) is false on peer B

Notes

This is distinct from the req/resp clearable-signal leak we were originally debugging. That original leak appears fixed; this issue showed up only after the first fix reduced the baseline enough to expose the next growth vector.

If helpful, I can turn the minimal fix above into a PR.
