feat: add metric for client connection lifetime #688

Merged
v0idpwn merged 7 commits into main from feat/telemetry-client-connection-age
Jul 1, 2025
Conversation

@v0idpwn (Member) commented Jun 30, 2025

Store start_time in client connection registry, then periodically emit telemetry for active client connections.

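The approach described above (record a start time per connection, then periodically sweep and emit the age of each active connection) can be sketched in Python. The actual implementation is Elixir; the names `register`, `deregister`, and `emit_lifetimes` here are illustrative, not Supavisor APIs:

```python
import time

# Hypothetical in-memory registry: connection id -> monotonic start time.
registry = {}

def register(conn_id):
    """Record the start time when a client connection is accepted."""
    registry[conn_id] = time.monotonic()

def deregister(conn_id):
    """Forget the connection when the client disconnects."""
    registry.pop(conn_id, None)

def emit_lifetimes(emit):
    """Periodic sweep: report the age in milliseconds of every active connection."""
    now = time.monotonic()
    for conn_id, started in registry.items():
        emit(conn_id, (now - started) * 1000.0)
```

Because only a start timestamp is stored, the periodic emitter computes ages on demand rather than updating a counter per connection.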
@v0idpwn v0idpwn requested a review from a team as a code owner June 30, 2025 19:58
Comment on lines 41 to 53
500,
1_000,
5_000,
10_000,
60_000,
300_000,
1_800_000,
7_200_000,
28_800_000,
86_400_000,
259_200_000,
604_800_000,
2_592_000_000
@v0idpwn (Member, Author):
I wanted to use more "semantic" times here, e.g. timer.hours(24) or 24 * 3600 * 1000, but it turns out that Peep does some AST manipulation and can only deal with integer literals. I could add comments instead, but that looks pretty ugly :)
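For reference, each bucket literal in the snippet above corresponds to a human-readable duration. A quick Python check (the PR itself is Elixir; this just verifies the millisecond arithmetic):

```python
# "Semantic" building blocks in milliseconds.
SECOND = 1_000
MINUTE = 60 * SECOND
HOUR = 60 * MINUTE
DAY = 24 * HOUR

# Bucket literal (ms) -> the duration it encodes.
buckets = {
    500: SECOND // 2,
    1_000: SECOND,
    5_000: 5 * SECOND,
    10_000: 10 * SECOND,
    60_000: MINUTE,
    300_000: 5 * MINUTE,
    1_800_000: 30 * MINUTE,
    7_200_000: 2 * HOUR,
    28_800_000: 8 * HOUR,
    86_400_000: DAY,
    259_200_000: 3 * DAY,
    604_800_000: 7 * DAY,
    2_592_000_000: 30 * DAY,
}

# Every literal matches its semantic equivalent.
for literal, semantic in buckets.items():
    assert literal == semantic
```

So the histogram spans half a second up to 30 days, which is why literal-only buckets are hard to read without comments.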

Contributor:

Yeah, under the hood Peep does some macro magic, and it expects a plain list of numbers when you define custom buckets. So the most reliable and readable approach is to just use comments.

@v0idpwn (Member, Author) commented Jul 1, 2025:

What's sad is that the formatter won't let me do

[
  1000, # 1 second
  5000, # 5 seconds
]

as it formats to

[
# 1 second
1000,
# 5 seconds
5000
]

😭

Contributor:

Made an issue just in case: rkallos/peep#44

Co-authored-by: abc3 <sts@abc3.dev>
@v0idpwn v0idpwn enabled auto-merge (squash) July 1, 2025 17:35
@v0idpwn v0idpwn merged commit 022869d into main Jul 1, 2025
12 checks passed
@v0idpwn v0idpwn deleted the feat/telemetry-client-connection-age branch July 1, 2025 17:56
@v0idpwn v0idpwn mentioned this pull request Jul 28, 2025
v0idpwn added a commit that referenced this pull request Jul 29, 2025
### Features
- **Authentication cleartext password support** - Added support for
cleartext password authentication method (#707)
- **Runtime-configurable connection retries** - Support for runtime
configuration of connection retries and infinite retries (#705)
- **Enhanced health checks** - Check database and eRPC capabilities
during health check operations (#691)
- **More consistency with Postgres on auth errors** - Improves error reporting in
some client libraries (#711)

### Performance Improvements

- **Optimized ranch usage** - Supavisor now uses a constant number of
ranch instances for improved performance and resource management when
hosting a large number of pools (#706)

### Monitoring

- **New OS memory metrics** - Gives a more accurate picture of memory
usage (#704)
- **Promex plugin for cluster metrics** - Tracks latency and
connection status (#690)
- **Client connection lifetime metrics** - Adds a metric tracking how long
each connection has been connected (#688)
- **Process monitoring** - Logs warnings when processes have large heaps or long
message queues (#689)

### Bug Fixes

- **Client handler query cancellation** - Fixed handling of
`:cancel_query` when state is `:idle` (#692)

### Migration Notes

- Instances running a small number of pools may see an increase in
memory usage. This can be mitigated by changing the ranch shard or the
acceptor counts.
- If any of the newly used ports conflict with your environment, you may need
to change the defaults
- Review monitoring dashboards and include the new metrics