Description
Describe the bug
With the current search backpressure cancellation logic, we've noticed that some high CPU usage search requests, such as multi-term aggregations, are cancelled more often than necessary due to task-level heap usage settings, even though the node still has sufficient heap memory to process the tasks.
Related component
Search:Resiliency
To Reproduce
Use the multi_term_agg operation in the http_logs workload; it is a representative high CPU usage search request.
- Setup a OpenSearch cluster and OpenSearch Benchmark client
- Run a test with the multi_term_agg operation in the http_logs workload, gradually increasing the number of search clients, using the sample command below
opensearch-benchmark execute-test --pipeline=benchmark-only --client-options='basic_auth_user:<USER>,basic_auth_password:<PASSWORD>,timeout:300' --target-hosts '<END_POINT>:443' --kill-running-processes --workload=http_logs --workload-param='target_throughput:none,number_of_replicas:0,number_of_shards:1,search_clients:2'
- Monitor the CPU utilization and JVM memory pressure of your OpenSearch cluster
- Retrieve the cancellation count with the GET _nodes/stats/search_backpressure REST API
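To watch the cancellation count grow during the test, you can sum the per-node counters from the stats response. The sketch below is illustrative only: the field names (`search_shard_task`, `cancellation_stats`, `cancellation_count`) reflect one version of the response shape and may differ on your cluster, so adjust the paths to match what your endpoint actually returns.

```python
# Sum search backpressure cancellation counts across all nodes from a
# `GET _nodes/stats/search_backpressure` response (assumed shape).
def total_cancellations(stats: dict) -> int:
    total = 0
    for node in stats.get("nodes", {}).values():
        sbp = node.get("search_backpressure", {})
        for task_type in ("search_task", "search_shard_task"):
            cancellation = sbp.get(task_type, {}).get("cancellation_stats", {})
            total += cancellation.get("cancellation_count", 0)
    return total

# Mocked two-node response for demonstration:
sample = {
    "nodes": {
        "node-1": {"search_backpressure": {
            "search_shard_task": {"cancellation_stats": {"cancellation_count": 7}}}},
        "node-2": {"search_backpressure": {
            "search_shard_task": {"cancellation_stats": {"cancellation_count": 3}},
            "search_task": {"cancellation_stats": {"cancellation_count": 2}}}},
    }
}
print(total_cancellations(sample))  # 12
```

Polling this between benchmark runs makes it easy to correlate rising client counts with the cancellation rate.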
Expected behavior
We need to adjust the current search backpressure cancellation logic to cancel tasks based on measurements of node-level resources. For example, if a node is under duress due to high CPU utilization, we should only consider canceling tasks based on CPU settings, rather than heap or elapsed time settings at the task level.
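The proposed behavior can be sketched as a gating step: a task-level tracker is only allowed to cancel a task if the node-level measurement for that same resource indicates duress. This is a minimal illustration, not the actual OpenSearch implementation; the tracker names and thresholds below are assumptions for the example.

```python
# Sketch of node-aware cancellation gating (illustrative names/thresholds).
CPU_DURESS_THRESHOLD = 0.90   # assumed node CPU utilization threshold
HEAP_DURESS_THRESHOLD = 0.70  # assumed node JVM heap usage threshold

def eligible_trackers(node_cpu: float, node_heap: float) -> set:
    """Return the task-level trackers allowed to cancel, based on which
    node-level resource is actually under duress."""
    trackers = set()
    if node_cpu >= CPU_DURESS_THRESHOLD:
        trackers.add("cpu_usage")
    if node_heap >= HEAP_DURESS_THRESHOLD:
        # heap and elapsed-time settings only apply under heap duress
        trackers.update({"heap_usage", "elapsed_time"})
    return trackers

def should_cancel(task: dict, node_cpu: float, node_heap: float) -> bool:
    """Cancel only if a tracker the task breached is also eligible."""
    breached = set(task["breached_trackers"])
    return bool(breached & eligible_trackers(node_cpu, node_heap))

# Node under CPU duress but with plenty of heap: a task that only breached
# its heap threshold is spared, while a CPU-heavy task is cancelled.
task_heap = {"breached_trackers": ["heap_usage"]}
task_cpu = {"breached_trackers": ["cpu_usage"]}
print(should_cancel(task_heap, node_cpu=0.95, node_heap=0.40))  # False
print(should_cancel(task_cpu, node_cpu=0.95, node_heap=0.40))   # True
```

Under this scheme, the multi_term_agg scenario above would no longer trip heap-based task cancellations while node heap pressure remains low.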
Additional Details
Host/Environment (please complete the following information):
- Version: OS 1.3+