balancer/pickfirst: fix flaky metrics tests by awaiting async metrics #9020
mbissa wants to merge 3 commits into grpc:master
Conversation
Fixed a race condition in the pickfirst metrics tests, where assertions polled the stats.TestMetricsRecorder immediately. Depending on goroutine scheduling, asynchronous metrics such as 'grpc.subchannel.open_connections' were occasionally not fully recorded before the test checked them, causing flaky failures. This change introduces an awaitMetric helper that waits for the metrics to reach their expected values.
Codecov Report
✅ All modified and coverable lines are covered by tests.

@@           Coverage Diff           @@
##           master    #9020      +/-   ##
==========================================
- Coverage   83.05%   82.96%    -0.10%
==========================================
  Files         411      411
  Lines       32918    32931      +13
==========================================
- Hits        27340    27320      -20
- Misses       4184     4204      +20
- Partials     1394     1407      +13
	t.Fatalf("EmptyCall() failed: %v", err)
}

if got, _ := tmr.Metric("grpc.lb.pick_first.connection_attempts_succeeded"); got != 1 {
All pick_first metrics (I'm not sure about subchannel metrics) MUST be emitted before an RPC returns, as the LB policy is responsible for making the channel active (which is what allows the RPC to succeed). If the metrics aren't emitted in this order, it likely indicates a bug causing race conditions within the channel. Because of this, we shouldn't need to change the validation logic. Could you please revert the changes to the pick_first metrics?
Reverted the pick_first changes, and moved the metrics that do not need waiting out of the new construct.
	return gotMetrics
}

func awaitMetric(ctx context.Context, t *testing.T, tmr *stats.TestMetricsRecorder, name string, want float64) {
Instead of a frequent polling, we should use channels to efficiently block the test until the expected metric is available. The TestMetricsRecorder (TMR) already has a method that allows waiting for a particular metric to be emitted:
grpc-go/internal/testutils/stats/test_metrics_recorder.go, lines 160 to 170 in c20fba0
However, it currently has a few issues that make it difficult to use:
- Strict Ordering: It fails if the very next metric emitted doesn't match the expected one. When writing e2e tests, it is difficult to account for all metrics and the exact order in which they will be emitted.
- Data Loss: It consumes and discards buffered metrics while waiting.
Currently, the only users of these WaitForX methods seem to be the metrics recorder list tests. I think we should improve this test utility to make it more ergonomic for other e2e tests to use.
I propose we combine the logic of the Metric() and WaitForX methods.
- To address the first issue, we can update the WaitForX method to keep reading from the channel until either the context expires or a metric with a matching name is found for comparison.
- To address the second issue, we can store all read MetricsData in the TMR's inner map. When WaitForX is called, it can first check this map for the required data; if it's found, it can proceed with the match directly without waiting on the channel.
What do you think?
Sounds like a step in the correct direction.
What do you think about the following, @arjan-bal:
- Instead of WaitForX consuming the channel directly, the TMR will internally "pump" all incoming metrics into a thread-safe map.
- The internal map will store a slice of values (map[string][]float64) instead of just the latest value. This lets tests verify state transitions (e.g., checking that a counter hit 1 and then 2), nothing is discarded, and when races can produce more than one value, users can choose whether to read the first or the last one.
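The pump-into-a-history-map idea above might look like this. All names (`pumpedRecorder`, `Record`, `WaitForValues`) are hypothetical; the real TMR would also need to honor a context deadline instead of waiting unconditionally.

```go
package main

import (
	"fmt"
	"sync"
)

type update struct {
	name  string
	value float64
}

// pumpedRecorder sketches the proposal: a dedicated goroutine ("pump") moves
// every incoming metric from the channel into a thread-safe history map, so
// tests can inspect every value a metric has taken, not just the latest.
type pumpedRecorder struct {
	mu      sync.Mutex
	cond    *sync.Cond
	history map[string][]float64
	updates chan update
}

func newPumpedRecorder() *pumpedRecorder {
	r := &pumpedRecorder{
		history: make(map[string][]float64),
		updates: make(chan update, 100),
	}
	r.cond = sync.NewCond(&r.mu)
	go r.pump()
	return r
}

// pump drains the channel into the history map and wakes any waiters.
func (r *pumpedRecorder) pump() {
	for m := range r.updates {
		r.mu.Lock()
		r.history[m.name] = append(r.history[m.name], m.value)
		r.cond.Broadcast()
		r.mu.Unlock()
	}
}

func (r *pumpedRecorder) Record(name string, v float64) {
	r.updates <- update{name, v}
}

// WaitForValues blocks until the metric has at least n recorded values, then
// returns a copy of the full history so tests can check state transitions.
func (r *pumpedRecorder) WaitForValues(name string, n int) []float64 {
	r.mu.Lock()
	defer r.mu.Unlock()
	for len(r.history[name]) < n {
		r.cond.Wait()
	}
	return append([]float64(nil), r.history[name]...)
}

func main() {
	r := newPumpedRecorder()
	r.Record("grpc.lb.pick_first.connection_attempts_succeeded", 1)
	r.Record("grpc.lb.pick_first.connection_attempts_succeeded", 2)
	fmt.Println(r.WaitForValues("grpc.lb.pick_first.connection_attempts_succeeded", 2))
}
```

Because the history keeps both values, a test can assert the counter passed through 1 before reaching 2, which a latest-value-only map cannot express.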
Fixes #9012
RELEASE NOTES: none