Add sub aggregation support for histogram aggregation using skiplist #19438

jainankitk merged 5 commits into opensearch-project:main
Conversation
❌ Gradle check result for f49d176: FAILURE. Please examine the workflow log, locate and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?
Signed-off-by: Asim Mahmood <asim.seng@gmail.com>
Force-pushed 673e01e to f192a4d
Manually tested the usage stats:
Comparing the response of big5 with a filter: no difference in the computed buckets.

Testing debug with sub stats:
opensearch-benchmark compare -c 40405b2d-2c15-4cf7-a483-5a0671adc672 -b 65b238a9-325a-4393-b457-0588a57a58fa

Comparing baseline with contender
| Metric | Task | Baseline | Contender | %Diff | Diff | Unit |
|---|---|---|---|---|---|---|
| Cumulative indexing time of primary shards | | 0 | 0 | 0.00% | 0 | min |
| Min cumulative indexing time across primary shard | | 0 | 0 | 0.00% | 0 | min |
| Median cumulative indexing time across primary shard | | 0 | 0 | 0.00% | 0 | min |
| Max cumulative indexing time across primary shard | | 0 | 0 | 0.00% | 0 | min |
| Cumulative indexing throttle time of primary shards | | 0 | 0 | 0.00% | 0 | min |
| Min cumulative indexing throttle time across primary shard | | 0 | 0 | 0.00% | 0 | min |
| Median cumulative indexing throttle time across primary shard | | 0 | 0 | 0.00% | 0 | min |
| Max cumulative indexing throttle time across primary shard | | 0 | 0 | 0.00% | 0 | min |
| Cumulative merge time of primary shards | | 0 | 0 | 0.00% | 0 | min |
| Cumulative merge count of primary shards | | 0 | 0 | 0.00% | 0 | |
| Min cumulative merge time across primary shard | | 0 | 0 | 0.00% | 0 | min |
| Median cumulative merge time across primary shard | | 0 | 0 | 0.00% | 0 | min |
| Max cumulative merge time across primary shard | | 0 | 0 | 0.00% | 0 | min |
| Cumulative merge throttle time of primary shards | | 0 | 0 | 0.00% | 0 | min |
| Min cumulative merge throttle time across primary shard | | 0 | 0 | 0.00% | 0 | min |
| Median cumulative merge throttle time across primary shard | | 0 | 0 | 0.00% | 0 | min |
| Max cumulative merge throttle time across primary shard | | 0 | 0 | 0.00% | 0 | min |
| Cumulative refresh time of primary shards | | 0 | 0 | 0.00% | 0 | min |
| Cumulative refresh count of primary shards | | 2 | 2 | 0.00% | 0 | |
| Min cumulative refresh time across primary shard | | 0 | 0 | 0.00% | 0 | min |
| Median cumulative refresh time across primary shard | | 0 | 0 | 0.00% | 0 | min |
| Max cumulative refresh time across primary shard | | 0 | 0 | 0.00% | 0 | min |
| Cumulative flush time of primary shards | | 0 | 0 | 0.00% | 0 | min |
| Cumulative flush count of primary shards | | 1 | 1 | 0.00% | 0 | |
| Min cumulative flush time across primary shard | | 0 | 0 | 0.00% | 0 | min |
| Median cumulative flush time across primary shard | | 0 | 0 | 0.00% | 0 | min |
| Max cumulative flush time across primary shard | | 0 | 0 | 0.00% | 0 | min |
| Total Young Gen GC time | | 0.033 | 0.032 | -0.00% | -0.001 | s |
| Total Young Gen GC count | | 2 | 2 | 0.00% | 0 | |
| Total Old Gen GC time | | 0 | 0 | 0.00% | 0 | s |
| Total Old Gen GC count | | 0 | 0 | 0.00% | 0 | |
| Store size | | 4.36969 | 4.36969 | 0.00% | 0 | GB |
| Translog size | | 5.12227e-08 | 5.12227e-08 | 0.00% | 0 | GB |
| Heap used for segments | | 0 | 0 | 0.00% | 0 | MB |
| Heap used for doc values | | 0 | 0 | 0.00% | 0 | MB |
| Heap used for terms | | 0 | 0 | 0.00% | 0 | MB |
| Heap used for norms | | 0 | 0 | 0.00% | 0 | MB |
| Heap used for points | | 0 | 0 | 0.00% | 0 | MB |
| Heap used for stored fields | | 0 | 0 | 0.00% | 0 | MB |
| Segment count | | 10 | 10 | 0.00% | 0 | |
| Min Throughput | date_histogram_calendar_interval | 1.22898 | 1.50108 | +22.14% 🔴 | 0.27211 | ops/s |
| Mean Throughput | date_histogram_calendar_interval | 1.24148 | 1.50176 | +20.97% 🔴 | 0.26028 | ops/s |
| Median Throughput | date_histogram_calendar_interval | 1.24395 | 1.50162 | +20.71% 🔴 | 0.25767 | ops/s |
| Max Throughput | date_histogram_calendar_interval | 1.24555 | 1.50309 | +20.68% 🔴 | 0.25753 | ops/s |
| 50th percentile latency | date_histogram_calendar_interval | 14122.3 | 227.48 | -98.39% 🟢 | -13894.9 | ms |
| 90th percentile latency | date_histogram_calendar_interval | 19465.8 | 248.178 | -98.73% 🟢 | -19217.7 | ms |
| 99th percentile latency | date_histogram_calendar_interval | 20690.9 | 262.979 | -98.73% 🟢 | -20427.9 | ms |
| 100th percentile latency | date_histogram_calendar_interval | 20831.6 | 263.36 | -98.74% 🟢 | -20568.2 | ms |
| 50th percentile service time | date_histogram_calendar_interval | 794.394 | 226.161 | -71.53% 🟢 | -568.233 | ms |
| 90th percentile service time | date_histogram_calendar_interval | 809.454 | 246.997 | -69.49% 🟢 | -562.457 | ms |
| 99th percentile service time | date_histogram_calendar_interval | 846.295 | 261.85 | -69.06% 🟢 | -584.445 | ms |
| 100th percentile service time | date_histogram_calendar_interval | 856.366 | 262.355 | -69.36% 🟢 | -594.011 | ms |
| error rate | date_histogram_calendar_interval | 0 | 0 | 0.00% | 0 | % |
| Min Throughput | date_histogram_calendar_interval_with_filter | 1.50911 | 1.50943 | 0.02% | 0.00032 | ops/s |
| Mean Throughput | date_histogram_calendar_interval_with_filter | 1.51506 | 1.5156 | 0.04% | 0.00054 | ops/s |
| Median Throughput | date_histogram_calendar_interval_with_filter | 1.51371 | 1.51419 | 0.03% | 0.00048 | ops/s |
| Max Throughput | date_histogram_calendar_interval_with_filter | 1.52712 | 1.52811 | 0.06% | 0.00098 | ops/s |
| 50th percentile latency | date_histogram_calendar_interval_with_filter | 19.384 | 9.87088 | -49.08% 🟢 | -9.51314 | ms |
| 90th percentile latency | date_histogram_calendar_interval_with_filter | 20.1579 | 11.1966 | -44.46% 🟢 | -8.96132 | ms |
| 99th percentile latency | date_histogram_calendar_interval_with_filter | 23.0539 | 13.2912 | -42.35% 🟢 | -9.76267 | ms |
| 100th percentile latency | date_histogram_calendar_interval_with_filter | 23.2335 | 13.4544 | -42.09% 🟢 | -9.77906 | ms |
| 50th percentile service time | date_histogram_calendar_interval_with_filter | 17.8957 | 8.4715 | -52.66% 🟢 | -9.42423 | ms |
| 90th percentile service time | date_histogram_calendar_interval_with_filter | 18.55 | 9.47064 | -48.95% 🟢 | -9.0794 | ms |
| 99th percentile service time | date_histogram_calendar_interval_with_filter | 21.1604 | 11.7108 | -44.66% 🟢 | -9.4496 | ms |
| 100th percentile service time | date_histogram_calendar_interval_with_filter | 21.3555 | 11.8357 | -44.58% 🟢 | -9.51977 | ms |
| error rate | date_histogram_calendar_interval_with_filter | 0 | 0 | 0.00% | 0 | % |
| Min Throughput | date_histogram_fixed_interval_with_metrics | 0.21029 | 0.236863 | +12.64% 🔴 | 0.02657 | ops/s |
| Mean Throughput | date_histogram_fixed_interval_with_metrics | 0.210669 | 0.236931 | +12.47% 🔴 | 0.02626 | ops/s |
| Median Throughput | date_histogram_fixed_interval_with_metrics | 0.210624 | 0.236915 | +12.48% 🔴 | 0.02629 | ops/s |
| Max Throughput | date_histogram_fixed_interval_with_metrics | 0.210908 | 0.237073 | +12.41% 🔴 | 0.02616 | ops/s |
| 50th percentile latency | date_histogram_fixed_interval_with_metrics | 410523 | 357487 | -12.92% 🟢 | -53035.9 | ms |
| 90th percentile latency | date_histogram_fixed_interval_with_metrics | 571403 | 497919 | -12.86% 🟢 | -73484.4 | ms |
| 99th percentile latency | date_histogram_fixed_interval_with_metrics | 607548 | 529457 | -12.85% 🟢 | -78090.3 | ms |
| 100th percentile latency | date_histogram_fixed_interval_with_metrics | 611552 | 532986 | -12.85% 🟢 | -78566.1 | ms |
| 50th percentile service time | date_histogram_fixed_interval_with_metrics | 4731.7 | 4214.45 | -10.93% 🟢 | -517.248 | ms |
| 90th percentile service time | date_histogram_fixed_interval_with_metrics | 4763.05 | 4243.44 | -10.91% 🟢 | -519.614 | ms |
| 99th percentile service time | date_histogram_fixed_interval_with_metrics | 4794.49 | 4274.8 | -10.84% 🟢 | -519.697 | ms |
| 100th percentile service time | date_histogram_fixed_interval_with_metrics | 4813.14 | 4293.39 | -10.80% 🟢 | -519.75 | ms |
| error rate | date_histogram_fixed_interval_with_metrics | 0 | 0 | 0.00% | 0 | % |
[INFO] SUCCESS (took 0 seconds)
Signed-off-by: Asim Mahmood <asim.seng@gmail.com>
❌ Gradle check result for a70324d: FAILURE. Please examine the workflow log, locate and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

Known flaky: org.opensearch.remotestore.WritableWarmIT.testWritableWarmBasic
Signed-off-by: Ankit Jain <jainankitk@apache.org>
…pensearch-project#19438)
Signed-off-by: Asim Mahmood <asim.seng@gmail.com>
Signed-off-by: Ankit Jain <jainankitk@apache.org>
Co-authored-by: Ankit Jain <jainankitk@apache.org>
Description
Follow-up to PR 13130: adds support for sub-aggregations and cleans up the code.

`sub` will always be non-null. It will be set to `LeafBucketCollector.NO_OP_COLLECTOR`, which handles the no-op case.

Related Issues

Updates #19384
Closes #17283
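The "always non-null `sub`" convention described above is a null-object pattern: the aggregator normalizes a missing sub-collector to a shared no-op instance once, so the per-document collect loop never needs a null check. Below is a minimal, self-contained Java sketch of that idea; the class and field names (`LeafCollector`, `NO_OP_COLLECTOR`, `HistogramCollector`) mirror but do not reproduce the actual OpenSearch/Lucene classes.

```java
// Simplified null-object collector sketch (hypothetical names, not the real
// OpenSearch API). The point: substitute a shared NO_OP instance for a null
// sub-collector once, outside the hot loop, so collect() calls it
// unconditionally per document.
abstract class LeafCollector {
    // Shared no-op instance: collecting into it does nothing.
    static final LeafCollector NO_OP_COLLECTOR = new LeafCollector() {
        @Override
        void collect(int doc, long bucket) { /* intentionally empty */ }
    };

    abstract void collect(int doc, long bucket);
}

class HistogramCollector extends LeafCollector {
    private final LeafCollector sub; // never null by construction
    int collected = 0;

    HistogramCollector(LeafCollector sub) {
        // Normalize null to the no-op instance once, up front.
        this.sub = (sub == null) ? NO_OP_COLLECTOR : sub;
    }

    @Override
    void collect(int doc, long bucket) {
        collected++;          // bucket this document
        sub.collect(doc, bucket); // forward to sub-aggregation, no null check
    }
}
```

Keeping the branch out of `collect()` matters because that method runs once per matching document; the pattern trades one constructor-time check for zero per-document checks.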
Check List
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.