Update elastic/logs challenges #433
Conversation
A few older many shards challenges got replaced with a new one. Updated README to reflect this. Also adding a few missing challenges and fixing a few typos. Relates to: elastic#303 Relates to: elastic#294
> This challenge aims to get more specific numbers of what we can support in terms of indices count. It creates an initial set of indices as before and then indexes to a small set of data streams. These data streams will almost never rollover (rollover based on size with 150gb as `max_size`). This is supposed to be run with multiple values of
Both many-shards challenges use 100gb as the rollover value.
Was the intent 150gb? Should we update the docs or the policy?
I think we are happy with the 100gb we have now. @original-brownbear WDYT?
Yeah, I don't think it makes a difference; we index almost nothing per index :)
Sounds good! I will update the docs then, just to be consistent :)
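For reference, the size-based rollover being discussed corresponds to an ILM hot-phase rollover action shaped roughly like the following. This is a minimal sketch of the policy structure with the agreed 100gb value, not the track's actual policy file:

```json
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_size": "100gb"
          }
        }
      }
    }
  }
}
```

Since the challenges index very little data per index, the 100gb threshold is effectively never reached, which is why the data streams "will almost never rollover".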
Co-authored-by: Brad Deam <54515790+b-deam@users.noreply.github.com>
dliappis left a comment
This is basically LGTM from my side. Left a suggestion for a more up-to-date description of many-shards-snapshots.
elastic/logs/README.md
Outdated
> ### Many Shards Snapshots (many-shards-snapshots)
>
> ### Many Shards Full (many-shards-full)
> This benchmark aims to evaluate the performance of the Log Monitoring part of Elastic's Observability solution with a large amount of shards. It sets up an initial set of indices (count controlled by the `data.initial.indices` param) with a large amount of shards, with the `auditbeat` template and ILM policy (hot tier only), and then sequentially takes a number of snapshots configurable via `snapshot_counts`. These data streams will almost never rollover (rollover based on size with 100gb as `max_size`). Used for benchmarks to help identify regressions related to snapshots with high index counts. The performance can be evaluated by the `service_time` of the `wait-for-snapshots` task.
When I read "with a large amount of shards" I wrongly thought that the index settings specify a high shard count.
I'd suggest a few clarifications based on the description in https://elasticsearch-benchmarks.elastic.co/#tracks/many-shards-snapshots/nightly/default/90d and https://www.elastic.co/blog/benchmark-driven-optimizations-scalability-elasticsearch-8
This benchmark aims to evaluate the performance of the Log Monitoring part of Elastic's Observability solution with a high shard count. It sets up an initial set of indices (count controlled by the `data.initial.indices` param), using an `auditbeat` template and ILM policy (hot tier only), and then sequentially takes a configurable (via `snapshot_counts`) number of snapshots. These data streams will almost never rollover (rollover based on size with `100gb` as `max_size`). This challenge is used by benchmarks to help identify regressions and improvements related to snapshots in use cases with a high shard count. The performance can be evaluated by the `service_time` of the `wait-for-snapshots` task.
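For context, each snapshot in that sequence is an ordinary snapshot API call. A minimal sketch of one such request in Dev Tools console syntax is below; the repository name `logs-repo` and snapshot name `snap-1` are hypothetical, and the track's actual request parameters may differ:

```json
PUT _snapshot/logs-repo/snap-1?wait_for_completion=false

{
  "indices": "*",
  "include_global_state": false
}
```

With `wait_for_completion=false` the call returns immediately, so the `service_time` of the `wait-for-snapshots` task reflects how long the cluster takes to actually finish the requested snapshots.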