Shrink should not touch max_retries #47719
Merged
henningandersen merged 3 commits into elastic:master on Oct 11, 2019
Conversation
Shrink would set `max_retries=1` in order to avoid retrying. This setting, however, sticks to the shrunk index afterwards, causing issues when a shard copy later fails to allocate even once. While there is no new node to allocate to and a retry would likely fail again, the downside of being left with `max_retries=1` afterwards outweighs the benefit of not retrying the failed shrink a few times. This change ensures shrink no longer sets `max_retries`.
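For illustration, the symptom before this change could be observed roughly as follows (Console-style sketch; `my-index` and `my-index-shrunk` are placeholder names, and the prerequisites for shrink, such as making the source index read-only and co-locating its shards, are omitted here):

```console
POST /my-index/_shrink/my-index-shrunk

GET /my-index-shrunk/_settings
```

Before this change, the settings response for the shrunk index would include `"index.allocation.max_retries": "1"`, so a single later allocation failure of a shard copy would exhaust the retry budget.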
Collaborator
Pinging @elastic/es-distributed (:Distributed/Allocation)
Contributor
Author
@elasticmachine run elasticsearch-ci/packaging-sample-matrix
jasontedor reviewed on Oct 8, 2019
Member
jasontedor left a comment
Thanks @henningandersen. I agree with your assessment, though I left some comments.
server/src/main/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexService.java
server/src/test/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexServiceTests.java
If `max_retries` was set on the source, it is unlikely to be wanted on the target too; instead, the new index will rely on the default.
…touch_max_retries
Contributor
Author
@elasticmachine run elasticsearch-ci/packaging-sample
Contributor
Author
Thanks @jasontedor
henningandersen added a commit that referenced this pull request on Oct 11, 2019
Shrink would set `max_retries=1` in order to avoid retrying. This setting, however, sticks to the shrunk index afterwards, causing issues when a shard copy later fails to allocate even once. Avoiding a retry of a shrink makes sense, since there is no new node to allocate to and a retry would likely fail again. However, the downside of being left with `max_retries=1` afterwards outweighs the benefit of not retrying the failed shrink a few times. This change ensures shrink no longer sets `max_retries`, and also makes all resize operations (shrink, clone, split) leave the setting at its default value rather than copying it from the source.
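The before/after behavior described in the commit message can be sketched as a simplified settings-derivation step. This is illustrative Python, not the actual Java in MetaDataCreateIndexService; the setting name `index.allocation.max_retries` is real, while the helper functions and sample settings are hypothetical:

```python
MAX_RETRIES = "index.allocation.max_retries"

def target_settings_before(source_settings: dict, op: str) -> dict:
    """Old behavior (sketch): copy settings from the source index, and
    for shrink additionally force max_retries=1 -- which then sticks to
    the target index long after the resize completes."""
    settings = dict(source_settings)
    if op == "shrink":
        settings[MAX_RETRIES] = 1
    return settings

def target_settings_after(source_settings: dict, op: str) -> dict:
    """New behavior (sketch): never set max_retries on the target, and
    drop any value copied from the source, so the target falls back to
    the cluster default."""
    return {k: v for k, v in source_settings.items() if k != MAX_RETRIES}

source = {"index.number_of_shards": 8, MAX_RETRIES: 3}
print(target_settings_before(source, "shrink")[MAX_RETRIES])   # 1
print(MAX_RETRIES in target_settings_after(source, "shrink"))  # False
```

The key design point is the second function: rather than special-casing shrink, all resize operations simply leave the setting unset on the target so the default applies.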
howardhuanghua pushed a commit to TencentCloudES/elasticsearch that referenced this pull request on Oct 14, 2019