
Add interface for the Multi format merge flow #20908

Open
darjisagar7 wants to merge 2 commits into opensearch-project:main from darjisagar7:merge_asbtraction

Conversation

@darjisagar7

darjisagar7 commented Mar 18, 2026

RFC

#19490

Description

This PR adds the initial merge abstraction to support merges in the multi-data-format engine.

Related Issues

Resolves #[Issue number to be closed when this PR is merged]

#18416

  • This RFC outlines the design for supporting multiple execution engines, focusing on the architectural approach for maintaining row IDs across data formats so that join operations work effectively.

Check List

  • Functionality includes testing.
  • API changes companion pull request created, if applicable.
  • Public documentation issue/PR created, if applicable.

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.

@github-actions
Contributor

github-actions bot commented Mar 18, 2026

PR Reviewer Guide 🔍

(Review updated until commit 628e3ba)

Here are some key observations to aid the review process:

🧪 PR contains tests
🔒 No security concerns identified
✅ No TODO sections
🔀 Multiple PR themes

Sub-PR theme: Add merge abstraction classes and tests

Relevant files:

  • server/src/main/java/org/opensearch/index/engine/dataformat/merge/OneMerge.java
  • server/src/main/java/org/opensearch/index/engine/dataformat/merge/MergeHandler.java
  • server/src/main/java/org/opensearch/index/engine/dataformat/merge/MergeScheduler.java
  • server/src/main/java/org/opensearch/index/engine/dataformat/merge/package-info.java
  • server/src/test/java/org/opensearch/index/engine/dataformat/merge/MergeTests.java

Sub-PR theme: Add getNextWriterGeneration to IndexingExecutionEngine and refactor test utilities

Relevant files:

  • server/src/main/java/org/opensearch/index/engine/dataformat/IndexingExecutionEngine.java
  • server/src/test/java/org/opensearch/index/engine/dataformat/DataFormatPluginTests.java
  • server/src/test/java/org/opensearch/index/engine/dataformat/DataFormatTestUtils.java

⚡ Recommended focus areas for review

Deadlock Risk

onMergeFinished is synchronized and calls updatePendingMerges, which is also synchronized. Because Java monitors are reentrant, this chain of synchronized calls on the same monitor is safe, even if updatePendingMerges internally calls registerMerge (also synchronized). However, registerMerge invokes indexer.acquireSnapshot() while holding the monitor, which could deadlock if the snapshot acquisition blocks or calls back into the handler.

public synchronized void onMergeFinished(OneMerge oneMerge) {
    removeMergingSegments(oneMerge);
    updatePendingMerges();
}
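A common way to defuse this kind of risk is to copy the needed state while holding the lock and then perform the potentially blocking call outside it. The sketch below illustrates the pattern only; the class and method names (LockScopeDemo, blockingAcquire) are hypothetical stand-ins, not the OpenSearch API.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class LockScopeDemo {
    private final Deque<String> pending = new ArrayDeque<>();

    public void register(String work) {
        synchronized (this) {
            pending.addLast(work);
        }
    }

    public List<String> drainAndProcess() {
        List<String> toProcess;
        synchronized (this) {
            // Copy the shared state under the lock...
            toProcess = new ArrayList<>(pending);
            pending.clear();
        }
        // ...then do the potentially blocking work without holding the monitor,
        // so a callback into this object cannot deadlock on the same lock.
        for (String work : toProcess) {
            blockingAcquire(work);
        }
        return toProcess;
    }

    private void blockingAcquire(String work) {
        // Stand-in for a blocking call such as indexer.acquireSnapshot().
    }
}
```

The trade-off is that the drained items are processed against a snapshot of the queue, so state observed inside blockingAcquire may already be stale; whether that is acceptable depends on the handler's invariants.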
Exception Wrapping

In registerMerge, any exception from acquireSnapshot is caught and re-thrown as a RuntimeException. This obscures the original exception type and may hide important context; consider a more specific exception, or at least preserve the original exception chain explicitly. Also, if an exception is thrown, mergingSegments and currentlyMergingSegments are not updated; this is correct because the merge was not registered, but it is worth validating that partial state is never left behind.

} catch (Exception e) {
    logger.warn("Failed to acquire snapshots", e);
    throw new RuntimeException(e);
}
Incomplete Implementation

Both triggerMerges() and forceMerge(int) are empty stub methods with no implementation. The mergeHandler parameter passed to the constructor is also unused (not stored as a field). This means the scheduler currently cannot actually schedule or execute any merges, which may be intentional for this PR but should be clearly documented or tracked.

public MergeScheduler(MergeHandler mergeHandler, ShardId shardId, IndexSettings indexSettings) {
    logger = Loggers.getLogger(getClass(), shardId);
    this.mergeSchedulerConfig = indexSettings.getMergeSchedulerConfig();
    refreshConfig();
}

/**
 * Refreshes the max concurrent merge thread count and max merge count from
 * the current {@link MergeSchedulerConfig}. No-op if the values have not changed.
 */
public synchronized void refreshConfig() {
    int newMaxThreadCount = mergeSchedulerConfig.getMaxThreadCount();
    int newMaxMergeCount = mergeSchedulerConfig.getMaxMergeCount();

    if (newMaxThreadCount == this.maxConcurrentMerges && newMaxMergeCount == this.maxMergeCount) {
        return;
    }

    logger.info(
        () -> new ParameterizedMessage(
            "Updating from merge scheduler config: maxThreadCount {} -> {}, " + "maxMergeCount {} -> {}",
            this.maxConcurrentMerges,
            newMaxThreadCount,
            this.maxMergeCount,
            newMaxMergeCount
        )
    );

    this.maxConcurrentMerges = newMaxThreadCount;
    this.maxMergeCount = newMaxMergeCount;
}

/**
 * Triggers pending merge operations. Merges are selected by the
 * underlying {@link MergeHandler} and executed up to the configured
 * concurrency limits.
 */
public void triggerMerges() {

}

/**
 * Forces a merge down to at most {@code maxNumSegment} segments.
 *
 * @param maxNumSegment the maximum number of segments after the force merge
 */
public void forceMerge(int maxNumSegment) {

}
Catalog Validation

In registerMerge, segment validation uses List.contains() which relies on equals() for Segment. If Segment does not override equals() and uses reference equality, segments fetched from the catalog snapshot may never match segments in the merge, causing all merges to be silently rejected. This should be verified.

for (Segment mergeSegment : merge.getSegmentsToMerge()) {
    if (!catalogSegments.contains(mergeSegment)) {
        return;
    }
}
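To see why List.contains() matters here: it delegates to equals(), so a segment instance rebuilt from a snapshot only matches a catalog entry if Segment defines value-based equality. The sketch below uses a hypothetical, minimal Segment with illustrative fields (name, generation); the real OpenSearch Segment class may look different.

```java
import java.util.List;
import java.util.Objects;

final class Segment {
    private final String name;
    private final long generation;

    Segment(String name, long generation) {
        this.name = name;
        this.generation = generation;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Segment)) return false;
        Segment other = (Segment) o;
        return generation == other.generation && name.equals(other.name);
    }

    @Override
    public int hashCode() {
        return Objects.hash(name, generation);
    }
}

public class SegmentEqualityDemo {
    public static void main(String[] args) {
        List<Segment> catalogSegments = List.of(new Segment("_0", 1L), new Segment("_1", 2L));
        // A logically identical instance, e.g. rebuilt from a snapshot:
        Segment fromSnapshot = new Segment("_0", 1L);
        // With value-based equals() this is true; with the default reference
        // equality it would be false and every merge would be silently rejected.
        System.out.println(catalogSegments.contains(fromSnapshot)); // prints: true
    }
}
```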

@github-actions
Contributor

github-actions bot commented Mar 18, 2026

PR Code Suggestions ✨

Latest suggestions up to 628e3ba

Explore these optional code suggestions:

Category | Suggestion | Impact
Possible issue
Store injected dependency for future use

The mergeHandler parameter is accepted in the constructor but never stored as a
field, making it impossible to use in triggerMerges() or forceMerge(). This means
the scheduler cannot actually delegate merge operations to the handler, which is its
core responsibility.

server/src/main/java/org/opensearch/index/engine/dataformat/merge/MergeScheduler.java [53-57]

+private final MergeHandler mergeHandler;
+
 public MergeScheduler(MergeHandler mergeHandler, ShardId shardId, IndexSettings indexSettings) {
     logger = Loggers.getLogger(getClass(), shardId);
+    this.mergeHandler = mergeHandler;
     this.mergeSchedulerConfig = indexSettings.getMergeSchedulerConfig();
     refreshConfig();
 }
Suggestion importance[1-10]: 7


Why: The mergeHandler parameter is accepted but never stored, making triggerMerges() and forceMerge() unable to delegate to it. This is a real functional gap since the scheduler's core responsibility is to coordinate with the handler, and the improved code correctly adds the missing field.

Medium
General
Avoid holding lock during blocking I/O calls

onMergeFinished calls updatePendingMerges() while already holding the synchronized
lock on this, and updatePendingMerges is also synchronized on this. In Java,
reentrant locking on the same monitor is allowed, so this won't deadlock, but
updatePendingMerges internally calls registerMerge, which calls
indexer.acquireSnapshot() — a potentially blocking I/O call — while holding the
lock. This can cause thread contention. Consider releasing the lock before calling
updatePendingMerges, or restructuring to avoid holding the lock during I/O.

server/src/main/java/org/opensearch/index/engine/dataformat/merge/MergeHandler.java [135-138]

-public synchronized void onMergeFinished(OneMerge oneMerge) {
-    removeMergingSegments(oneMerge);
+public void onMergeFinished(OneMerge oneMerge) {
+    synchronized (this) {
+        removeMergingSegments(oneMerge);
+    }
     updatePendingMerges();
 }
Suggestion importance[1-10]: 5


Why: The suggestion correctly identifies that onMergeFinished holds the lock while updatePendingMerges calls indexer.acquireSnapshot(), a potentially blocking I/O operation. However, this is a design/performance concern rather than a correctness bug, and the improved code is a valid refactoring approach.

Low
Avoid double-wrapping runtime exceptions

When acquireSnapshot() throws a RuntimeException, the code wraps it in another
RuntimeException, adding unnecessary nesting. The original exception should be
rethrown directly if it is already a RuntimeException, and only checked exceptions
should be wrapped.

server/src/main/java/org/opensearch/index/engine/dataformat/merge/MergeHandler.java [100-103]

+} catch (RuntimeException e) {
+    logger.warn("Failed to acquire snapshots", e);
+    throw e;
 } catch (Exception e) {
     logger.warn("Failed to acquire snapshots", e);
     throw new RuntimeException(e);
 }
Suggestion importance[1-10]: 4


Why: The suggestion is valid — wrapping a RuntimeException in another RuntimeException adds unnecessary nesting and makes stack traces harder to read. The improved code correctly separates handling of RuntimeException and checked Exception types.

Low

Previous suggestions

Suggestions up to commit f180ece
Category | Suggestion | Impact
Possible issue
Fix instance-specific logger declared as static

logger is declared as a static field but assigned per-instance in the constructor,
which means concurrent instantiation could cause race conditions and the logger
would reflect only the last-assigned ShardId. It should be an instance field to be
correctly scoped per scheduler instance.

server/src/main/java/org/opensearch/index/engine/dataformat/merge/MergeScheduler.java [32]

-private static Logger logger;
+private Logger logger;
Suggestion importance[1-10]: 8


Why: logger is declared static but assigned per-instance in the constructor with a ShardId-specific logger. This is a real bug: in multi-shard scenarios, concurrent instantiation causes a race condition and all instances end up sharing the last-assigned logger, losing per-shard context.

Medium
Synchronize unsynchronized access to shared collection

hasPendingMerges() accesses mergingSegments without synchronization, while all other
methods that access this field are synchronized. This creates a data race where the
result may be stale or inconsistent when called concurrently with getNextMerge() or
registerMerge().

server/src/main/java/org/opensearch/index/engine/dataformat/merge/MergeHandler.java [69-71]

-public boolean hasPendingMerges() {
+public synchronized boolean hasPendingMerges() {
     return !mergingSegments.isEmpty();
 }
Suggestion importance[1-10]: 7


Why: hasPendingMerges() reads mergingSegments without synchronization while all other accessors of this field are synchronized, creating a genuine data race in concurrent usage.

Medium
General
Make core merge execution method abstract

doMerge is a concrete method that always returns null, which will cause
NullPointerException for any caller that uses the result. Since this is the core
execution method of the handler, it should be declared abstract to force subclasses
to provide a real implementation.

server/src/main/java/org/opensearch/index/engine/dataformat/merge/MergeHandler.java [105-107]

-public MergeResult doMerge(OneMerge oneMerge) {
-    return null;
-}
+public abstract MergeResult doMerge(OneMerge oneMerge);
Suggestion importance[1-10]: 5


Why: doMerge always returns null, which could cause NullPointerException for callers. However, this appears to be intentional scaffolding/placeholder code in an experimental API, and making it abstract is a design decision that may break existing subclasses or be intentionally deferred.

Low
Suggestions up to commit f180ece
Category | Suggestion | Impact
Possible issue
Implement empty merge registration method

The registerMerge method is empty and never adds the merge to mergingSegments. As a
result, hasPendingMerges() will always return false and getNextMerge() will always
return null, making the merge queue non-functional. The merge should be added to the
deque.

server/src/main/java/org/opensearch/index/engine/dataformat/merge/MergeHandler.java [60-62]

 public synchronized void registerMerge(OneMerge merge) {
-
+    mergingSegments.addLast(merge);
 }
Suggestion importance[1-10]: 6


Why: The registerMerge method is a stub that never adds to mergingSegments, making the merge queue non-functional. However, this appears to be intentional scaffolding in a PR introducing interfaces/abstractions, so the impact depends on whether full implementation is expected in this PR.

Low
Suggestions up to commit 47e9620
Category | Suggestion | Impact
Possible issue
Fix static logger overwritten per instance

The logger field is declared as static but assigned per-instance in the constructor
using Loggers.getLogger(getClass(), shardId). This means the logger is shared across
all instances and will be overwritten each time a new MergeScheduler is created,
causing incorrect shard-specific logging. It should be an instance field instead.

server/src/main/java/org/opensearch/index/engine/dataformat/merge/MergeScheduler.java [32]

-private static Logger logger;
+private final Logger logger;
Suggestion importance[1-10]: 8


Why: The logger field is static but assigned per-instance in the constructor with a shard-specific logger. This causes all instances to share the last-assigned logger, breaking shard-specific logging. Changing to private final Logger logger is a real bug fix.

Medium
Synchronize concurrent access to shared deque

The hasPendingMerges() method accesses mergingSegments without synchronization,
while other methods like getNextMerge() and registerMerge() are synchronized. This
creates a data race where the result of hasPendingMerges() can be stale or
inconsistent when called concurrently. The method should be synchronized to ensure
thread safety.

server/src/main/java/org/opensearch/index/engine/dataformat/merge/MergeHandler.java [69-71]

-public boolean hasPendingMerges() {
+public synchronized boolean hasPendingMerges() {
     return !mergingSegments.isEmpty();
 }
Suggestion importance[1-10]: 7


Why: hasPendingMerges() accesses mergingSegments without synchronization while other methods are synchronized, creating a potential data race. Adding synchronized is a valid thread-safety fix, though the impact depends on actual concurrent usage patterns.

Medium
Prevent double-counting documents across format groups

calculateTotalNumDocs() sums numRows across all WriterFileSet values for every data
format in each segment. If a segment has multiple data formats (multiple entries in
dfGroupedSearchableFiles()), the document count will be over-counted since the same
documents are represented in multiple format groups. The total doc count should be
computed per segment, not per WriterFileSet.

server/src/main/java/org/opensearch/index/engine/dataformat/merge/OneMerge.java [77-82]

 private long calculateTotalNumDocs() {
     return segmentsToMerge.stream()
-        .flatMap(segment -> segment.dfGroupedSearchableFiles().values().stream())
-        .mapToLong(WriterFileSet::numRows)
+        .mapToLong(segment -> segment.dfGroupedSearchableFiles().values().stream()
+            .mapToLong(WriterFileSet::numRows)
+            .max()
+            .orElse(0L))
         .sum();
 }
Suggestion importance[1-10]: 5


Why: The suggestion raises a valid concern about potential over-counting when segments have multiple data formats, but the proposed fix using max() is a heuristic that may not be semantically correct either. The actual behavior depends on how dfGroupedSearchableFiles() is structured, making this a speculative fix.

Low

@github-actions
Contributor

❌ Gradle check result for 1d5b2c6: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

darjisagar7 marked this pull request as ready for review March 23, 2026 07:24
darjisagar7 reopened this Mar 23, 2026
@github-actions
Contributor

Persistent review updated to latest commit 1d5b2c6

@github-actions
Contributor

❌ Gradle check result for 1d5b2c6: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

@github-actions
Contributor

❌ Gradle check result for 47e9620: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

@github-actions
Contributor

Persistent review updated to latest commit 47e9620

@github-actions
Contributor

❌ Gradle check result for 47e9620: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

darjisagar7 reopened this Mar 23, 2026
@github-actions
Contributor

Persistent review updated to latest commit 47e9620

@github-actions
Contributor

❌ Gradle check result for 47e9620: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

@github-actions
Contributor

Persistent review updated to latest commit f180ece

@github-actions
Contributor

❌ Gradle check result for f180ece: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

darjisagar7 reopened this Mar 24, 2026
@github-actions
Contributor

Persistent review updated to latest commit f180ece

@github-actions
Contributor

✅ Gradle check result for f180ece: SUCCESS

@codecov

codecov bot commented Mar 24, 2026

Codecov Report

❌ Patch coverage is 97.80220% with 2 lines in your changes missing coverage. Please review.
✅ Project coverage is 73.16%. Comparing base (14faf38) to head (e0a3664).
⚠️ Report is 4 commits behind head on main.

Files with missing lines | Patch % | Lines
...ch/index/engine/dataformat/merge/MergeHandler.java 97.77% 0 Missing and 1 partial ⚠️
.../index/engine/dataformat/merge/MergeScheduler.java 96.55% 0 Missing and 1 partial ⚠️
Additional details and impacted files
@@             Coverage Diff              @@
##               main   #20908      +/-   ##
============================================
- Coverage     73.16%   73.16%   -0.01%     
- Complexity    72545    72632      +87     
============================================
  Files          5848     5851       +3     
  Lines        331982   332073      +91     
  Branches      47949    47957       +8     
============================================
+ Hits         242892   242948      +56     
- Misses        69561    69626      +65     
+ Partials      19529    19499      -30     

☔ View full report in Codecov by Sentry.

/**
 * Updates the set of pending merges. Called to refresh the merge queue
 * when the segment state changes.
 */
public synchronized void updatePendingMerges() {
Member


Does this require implementation in this abstract class or is it an abstract method?

Author


Added the body of the function

@github-actions
Contributor

Failed to generate code suggestions for PR

@github-actions
Contributor

✅ Gradle check result for 14f98e7: SUCCESS

Signed-off-by: Sagar Darji <darsaga@amazon.com>
@github-actions
Contributor

Failed to generate code suggestions for PR

@github-actions
Contributor

✅ Gradle check result for e0a3664: SUCCESS

Signed-off-by: Mohit Godwani <81609427+mgodwan@users.noreply.github.com>
@github-actions
Contributor

Persistent review updated to latest commit 628e3ba

@github-actions
Contributor

❌ Gradle check result for 628e3ba: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

