Fix SearchContext CB memory accounting #138002
Conversation
}
if (context.checkRealMemoryCB(locallyAccumulatedBytes[0], "fetch source")) {
    // if we checked the real memory breaker, we restart our local accounting
    locallyAccumulatedBytes[0] = 0;
Most of the time these batches were too small, so this didn't trigger.
server/src/main/java/org/elasticsearch/search/SearchService.java (outdated; resolved)
    docIdsToLoad[i] = topDocs.scoreDocs[i].doc;
}
- FetchSearchResult fetchResult = runFetchPhase(subSearchContext, docIdsToLoad);
+ FetchSearchResult fetchResult = runFetchPhase(subSearchContext, docIdsToLoad, this::addRequestCircuitBreakerBytes);
Should we batch here and avoid invoking the CB for every document?
Maybe addRequestCircuitBreakerBytes should take care of this?
I suspect that fetching source is way more expensive than invoking the CB, so I'm not sure we want more complication here.
Maybe worth checking an esrally benchmark here?
I'll run some aggs nightlies and see what happens
I ran the nyc-taxis track and I see no slowdown (actually it seems faster).
Now I'm running NOAA, which contains more aggs
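The batching question raised above (consulting the circuit breaker once per accumulated batch rather than per document) can be sketched as a small standalone class. This is a hypothetical illustration, not the actual Elasticsearch API: `BatchedAccounting`, `bufferSize`, and the `IntPredicate` stand-in for the breaker check are all invented names for this sketch.

```java
import java.util.function.IntPredicate;

// Hypothetical sketch: accumulate per-document byte counts locally and only
// consult the (expensive) circuit breaker once a threshold is crossed.
public class BatchedAccounting {
    private final int bufferSize;             // flush threshold in bytes
    private final IntPredicate breakerCheck;  // stands in for the real CB; returns true when consulted
    private int locallyAccumulatedBytes;

    public BatchedAccounting(int bufferSize, IntPredicate breakerCheck) {
        this.bufferSize = bufferSize;
        this.breakerCheck = breakerCheck;
    }

    /** Add bytes for one fetched document; consult the breaker only past the threshold. */
    public void add(int bytes) {
        locallyAccumulatedBytes += bytes;
        if (locallyAccumulatedBytes >= bufferSize && breakerCheck.test(locallyAccumulatedBytes)) {
            locallyAccumulatedBytes = 0; // restart local accounting after a real check
        }
    }

    public int pending() {
        return locallyAccumulatedBytes;
    }
}
```

With this shape, the breaker cost is amortized over many documents, which is why a per-document call might not show up in benchmarks: most invocations are a single integer comparison.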
Hi @luigidellaquila, I've created a changelog YAML for you.

Pinging @elastic/es-analytical-engine (Team:Analytics)
public final boolean checkRealMemoryCB(int locallyAccumulatedBytes, String label) {
    if (locallyAccumulatedBytes >= memAccountingBufferSize()) {
-       circuitBreaker().addEstimateBytesAndMaybeBreak(0, label);
+       circuitBreaker().addEstimateBytesAndMaybeBreak(locallyAccumulatedBytes, label);
This was the crux
The checkRealMemoryCB() functions usually add 0 explicitly to force a memory check; are we sure we want to add memory here?
My major questions here would be:
- Are we freeing it later?
- Should we rename the method then?
- This conditionally adds the memory depending on the value, which may lead to wrongly freeing memory later (?)
I just realized that nobody is using this method anymore (apart from a unit test).
The checkRealMemoryCB() functions usually add 0 explicitly to force a memory check; are we sure we want to add memory here?
Maybe this was the intention at the beginning, but apparently it was not very effective, since in my tests the CB was always empty. And if I remember correctly, it was a ChildMemoryCircuitBreaker
Are we freeing it later?
With this fix I'm delegating the memory accounting to AggregatorBase, that handles tracking and releasing CB memory, so we should be safe.
Should we rename the method then?
I think we can just delete it
This conditionally adds the memory depending on the value, which may lead to wrongly freeing memory later
I guess, since the CB memory management is delegated to AggregatorBase, we should be safe
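The track-and-release contract being relied on here (every byte charged to the breaker is subtracted again when the owning scope closes) can be sketched minimally. This is not the actual AggregatorBase or CircuitBreaker API; `BreakerScope` and the `AtomicLong` stand-in for the breaker are invented for illustration.

```java
import java.util.concurrent.atomic.AtomicLong;

// Minimal sketch of the track-and-release pattern: the scope remembers how
// much it charged to the breaker and releases exactly that amount on close,
// so conditional additions can never leak (or double-free) accounted memory.
public class BreakerScope implements AutoCloseable {
    private final AtomicLong breaker; // stands in for the real circuit breaker
    private long accounted;

    public BreakerScope(AtomicLong breaker) {
        this.breaker = breaker;
    }

    public void addEstimateBytes(long bytes) {
        breaker.addAndGet(bytes); // a real breaker may throw CircuitBreakingException here
        accounted += bytes;
    }

    @Override
    public void close() {
        breaker.addAndGet(-accounted); // release exactly what was accounted
        accounted = 0;
    }
}
```

The key property is that release is driven by the scope's own bookkeeping, not by the caller re-deriving how much was added, which is what makes conditional additions safe.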
Let's be conservative here: I'll make the memoryChecker nullable, and in that case I'll keep trying to trigger the real memory CB
Pinging @elastic/es-search-foundations (Team:Search Foundations)
// This works most of the time, but it's not consistent: it still triggers OOM sometimes.
// The test env is too small and non-deterministic to hold all these data and results.
@AwaitsFix(bugUrl = "see comment above")
public void testBreakAndRecover() throws IOException {
I'd really like to have this test working, but I couldn't find a way to make it pass deterministically; depending on system conditions it sometimes OOMs before the CB trips, and if I reduce the memory further, it doesn't trip the CB anymore.
We have unit tests for FetchPhase CB, but having an integration test would be really really good.
I don't know if we should keep this here or delete it
FWIW I had a similar problem with the aggs reduction phase memory test, which is now muted: #134667
The test works, but it fails sometimes. I tried tweaking it to have an exact number of nodes of each kind, as well as docs, query limits and CB settings, and it improved, but it's still flaky.
Maybe you could try forcing a set amount of nodes, like in here:
Less random, but less flaky (luckily 🤞 )
Thanks for the contribution and for the off-line conversation @drempapis
 * @return true if the circuit breaker is called and false otherwise
 */
- public final boolean checkRealMemoryCB(int locallyAccumulatedBytes, String label) {
+ public final boolean checkCircuitBreaker(int locallyAccumulatedBytes, String label) {
This is no longer the real-memory CB, but rather the normal (request) CB
Just in case: is there something else using this that might not be freeing the accounted memory?
That's the only usage for now
We should add to the Javadoc that callers are responsible for decrementing the breaker by the same amount at some point.
ivancea left a comment:
LGTM! I would wait for another review if possible, apart from the benchmark, if it makes sense
server/src/main/java/org/elasticsearch/search/fetch/FetchPhase.java (outdated; resolved)
….java Co-authored-by: Iván Cea Fontenla <ivancea96@outlook.com>
Buildkite benchmark this with nyc_taxis-1n-8g please

Buildkite benchmark this with wikipedia please

Buildkite benchmark this with noaa-1n-1g please
💚 Build Succeeded
This build ran two noaa-1n-1g benchmarks to evaluate the performance impact of this PR.
private long requestBreakerBytes;

public void addRequestBreakerBytes(long delta) {
It'd be good to document that this field is strictly for fetch-phase memory accounting, will be reversed at the end of the FetchPhase, and shouldn't be accessed or modified by subclasses (if any are added in the future).
BytesReference sourceRef = hit.hit().getSourceRef();
if (sourceRef != null) {
    locallyAccumulatedBytes[0] += sourceRef.length();
    // This is an empirical value that seems to work well.
To simplify the logic, we could create an IntConsumer once when constructing the FetchPhaseDocsIterator
IntConsumer checker = memoryChecker != null ? memoryChecker : bytes -> {
    locallyAccumulatedBytes[0] += bytes;
    if (context.checkCircuitBreaker(locallyAccumulatedBytes[0], "fetch source")) {
        addRequestBreakerBytes(locallyAccumulatedBytes[0]);
        locallyAccumulatedBytes[0] = 0;
    }
};
and simply call
if (sourceRef != null) {
    checker.accept(sourceRef.length() * 2);
}
Less random, but less flaky (luckily 🤞 )
We should double check if that works
I made this change and yes, the code looks much better now 👍
Thanks for the reviews @ivancea, the benchmarks didn't show any significant changes in performance. @drempapis please let me know if you have any further comments, otherwise I'll move forward and merge. Thanks!
drempapis left a comment:
LGTM! Thank you @luigidellaquila
} finally {
    long bytes = docsIterator.getRequestBreakerBytes();
    if (bytes > 0L) {
        context.circuitBreaker().addWithoutBreaking(-bytes);
@luigidellaquila @drempapis Sorry for the late comment here. Thanks for iterating on this.
I think we're releasing the bytes from the circuit breaker too soon here? The response has not been sent to the client yet so these hits are still in memory.
@andreidan I agree, we should do better here.
IMHO the right approach would be to manage the CB allocation/release together with the request lifecycle. That's what I did for Aggs, using a memoryChecker that was already part of the aggregation memory accounting, but for Search I couldn't find anything similar. I don't know that part of the codebase in depth though, so I could have missed something obvious.
@andreidan, we missed it!
I’ve started an alternative here: #139124. It’s WIP and expected to take more time.
I’ll work on it in parallel and make it a priority, given the complexity of the alternative.
Thanks for the additional context. While #139124 (which looks like an excellent way to move forward) is under development, how confident are we with this change? Do we have an overview of which queries and scenarios it improves over the previous implementation? And, perhaps more importantly, where might it fall short of what we had previously?
From the PR description it seems that the thing that we fixed was:
sometimes FetchPhase.buildSearchHits batches are too small (eg. in TopHits), the memory buffer never accumulates enough to be tracked
but we didn't have to replace the real CB to achieve this. Did we consider triggering the real memory CB on a doc count too? (i.e. accumulated 1MiB in source size or retrieved 1024 documents)
@andreidan looking at previous discussions (eg. this and the internal ES-12944), my impression is that we don't want to rely too much on the real memory CB.
I started this PR mostly to address memory problems with TopHits (as per the comment you mentioned above); my original implementation was intended to use the request breaker when the fetch phase was part of Aggs, but based on the discussions with @drempapis (again, see the internal ES-12944) we ended up using it also for the general case (ie. for Search). I'll let @drempapis comment on this and on possible regressions, as he probably has more visibility on the Search side. For Aggs we are definitely in a much better position now.
Did we consider triggering the real memory CB on a doc count too? (i.e. accumulated 1MiB in source size or retrieved 1024 documents)
Not that I know
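The dual-threshold idea suggested above (trip the check on accumulated source bytes or on retrieved document count, whichever comes first) could look roughly like this. This is a hypothetical sketch: `DualThresholdTrigger` and its fields are invented names, not anything in the Elasticsearch codebase.

```java
// Hypothetical sketch of the suggestion above: consult the breaker once
// either the accumulated source bytes or the fetched doc count crosses its
// limit, so many tiny documents can't slip under a bytes-only threshold.
public class DualThresholdTrigger {
    private final int maxBytes;
    private final int maxDocs;
    private int bytes;
    private int docs;

    public DualThresholdTrigger(int maxBytes, int maxDocs) {
        this.maxBytes = maxBytes;
        this.maxDocs = maxDocs;
    }

    /** Returns true when the breaker should be consulted; resets local counters. */
    public boolean onDocFetched(int sourceBytes) {
        bytes += sourceBytes;
        docs += 1;
        if (bytes >= maxBytes || docs >= maxDocs) {
            bytes = 0;
            docs = 0;
            return true;
        }
        return false;
    }
}
```

The doc-count limb guarantees a bounded number of fetches between checks even when every document's `_source` is tiny, which is exactly the TopHits small-batch case discussed in this PR.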
In ES-12944 we analyzed a core issue: on nodes with small JVM heaps, large per-document _source payloads combined with large size values cause the fetch phase to materialize a very large SearchHit[] fully in memory. This is a limitation of the current fetch implementation, which constructs the entire hit array and all _source bytes on-heap before returning anything to the client.
For search memory accounting, the change introduced here ensures that _source bytes are now charged to the request cb. This helps by tripping earlier in scenarios that previously bypassed breaker enforcement. However, it only partially improves the situation. It does not fully resolve the underlying low-heap, large-source problem because the fetch phase continues to eagerly materialize the full response on-heap.
Even when the request breaker is active, its default limit (60 percent of heap) is often too high to protect a small-heap node once baseline memory usage is included. The parent-memory breaker only fires at very high global heap occupancy and usually reacts too late, after most of the large objects are already live.
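For context on the 60 percent default mentioned above, the request breaker limit is a node setting and can be lowered on small-heap nodes so it trips before large fetch responses are fully materialized. An illustrative fragment (the value shown is an example, not a recommendation; the appropriate limit depends on the workload):

```yaml
# elasticsearch.yml - illustrative only; the default is 60% of heap.
# A lower request-breaker limit trips earlier on small-heap nodes, at the
# cost of rejecting more requests under memory pressure.
indices.breaker.request.limit: 40%
```

Lowering the limit is a blunt instrument, though: it shifts where the breaker trips but does not change the eager on-heap materialization described above.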
I opened a PR to mitigate the issue of releasing bytes from the circuit breaker too early while the SearchHits are still resident in memory. This is a short-term fix until a more complete solution is implemented.
Ah excellent, thanks for working on #139243 Dimi ❤️ Awesome we're jumping right on it to fix it!
Let me know when it's ready and I'll review it.
Fixing FetchPhase memory management

Problems and TODO (spotted so far):
- SearchContext.checkRealMemoryCB doesn't account for CB memory (always zero) - this just triggers the real memory CB, but apparently it's not enough
- FetchPhase.buildSearchHits batches are too small (eg. in TopHits), the memory buffer never accumulates enough to be tracked
- TopHitsAggregator memory management lifecycle
- plumb InnerHitsPhase memory management lifecycle
- plumb SearchService.execute*Phase() memory management lifecycle
- TopHitsAggregator.subSearchContext.closeFuture grows too much - this is due to this block, so it's irrelevant in prod.

Fixes: #136836