Description
Hi! We’re running a WarpStream agent that hosts Bento pipelines consuming from Kinesis, and the Kinesis consumption loop is burning through the pod’s CPU even at extremely low traffic.
Our environment:
WarpStream agent version: v765 (chart 1.0.1)
Also reproduced on: v774 (chart 1.0.12)
Agent roles: proxy, jobs, pipelines
Non-mTLS agents
Embedded Bento version: warpstreamlabs/bento@v1.14.1
The workload:
4 Kinesis CDC streams
Total traffic is extremely low: about 488 records/day across all 4 streams
This appears to be idle polling overhead, not load-related CPU usage
Observed behavior:
With GOMAXPROCS=1, the agent burns ~1 full core
With GOMAXPROCS=2, usage expands to ~2 full cores, i.e. it scales with available parallelism rather than with traffic
CPU profile
The CPU profile shows ~96% of CPU in:
github.com/warpstreamlabs/bento/internal/impl/aws.(*kinesisReader).runConsumer
Top contributors include:
runtime.selectgo (~57%)
time.After / time.NewTimer (~31%)
runtime.nanotime (~15%)
runtime.lock2 (~13%)
additional GC pressure from short-lived timer and channel allocations
CPU profile dumps:
ws-cpu-profile-2-top.txt
ws-cpu-profile-text.txt
Is this a known issue with the embedded Bento Kinesis consumer? Is there a configuration option to avoid this hot-loop behavior?