[ML] Evict old models from the cache before loading new #140844
Merged: davidkyle merged 4 commits into elastic:main, Jan 30, 2026
Conversation
Collaborator: Pinging @elastic/ml-core (Team:ML)
Collaborator: Hi @davidkyle, I've created a changelog YAML for you.
prwhelan approved these changes on Jan 22, 2026
Collaborator: 💚 Backport successful
davidkyle added a commit to davidkyle/elasticsearch that referenced this pull request on Jan 30, 2026:
Evict old models from the cache before loading new
elasticsearchmachine pushed a commit that referenced this pull request on Jan 30, 2026
An issue was reported where the ML model cache threw a circuit breaker exception when loading a model because the cache was full. Inspecting the code shows that the circuit breaker check is performed before old models are evicted from the cache to free up space. Note that this is the cache for DFA and tree-ensemble models used in learning to rank; it is not related to the PyTorch NLP models, even though the same trained models APIs are used.
The fix is to refresh the cache first, evicting any models older than the expiration time. This returns memory to the circuit breaker, allowing new models to be loaded. Without the fix, the only workaround is to delete a cached model, which invalidates the cache. An alternative would be to catch the circuit breaking exception and only then evict models from the cache.
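The evict-before-check ordering can be illustrated with a minimal sketch. This is a hypothetical standalone cache, not the actual Elasticsearch `ModelLoadingService` code; the class and method names (`ExpiringModelCache`, `tryLoad`, `evictExpired`) and the TTL/budget parameters are assumptions for illustration only.

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of the fix: evict expired entries *before* the
// memory-budget (circuit-breaker style) check, so stale models free
// space for new ones instead of tripping the breaker.
class ExpiringModelCache {
    private final long ttlMillis;          // expiration time for cached models
    private final long memoryBudgetBytes;  // stand-in for the circuit breaker limit
    private long usedBytes = 0;
    private final Map<String, Entry> cache = new LinkedHashMap<>();

    private static final class Entry {
        final long sizeBytes;
        final long loadedAtMillis;
        Entry(long sizeBytes, long loadedAtMillis) {
            this.sizeBytes = sizeBytes;
            this.loadedAtMillis = loadedAtMillis;
        }
    }

    ExpiringModelCache(long ttlMillis, long memoryBudgetBytes) {
        this.ttlMillis = ttlMillis;
        this.memoryBudgetBytes = memoryBudgetBytes;
    }

    // Remove entries older than the TTL, returning their memory to the budget.
    private void evictExpired(long nowMillis) {
        Iterator<Map.Entry<String, Entry>> it = cache.entrySet().iterator();
        while (it.hasNext()) {
            Entry e = it.next().getValue();
            if (nowMillis - e.loadedAtMillis > ttlMillis) {
                usedBytes -= e.sizeBytes;
                it.remove();
            }
        }
    }

    // The fix: refresh the cache first, then apply the breaker check.
    boolean tryLoad(String modelId, long sizeBytes, long nowMillis) {
        evictExpired(nowMillis);                        // free space first
        if (usedBytes + sizeBytes > memoryBudgetBytes) {
            return false;                               // would trip the breaker
        }
        cache.put(modelId, new Entry(sizeBytes, nowMillis));
        usedBytes += sizeBytes;
        return true;
    }
}
```

With the original ordering, a load attempted while the cache held only expired models would still fail the breaker check; evicting first means the same request succeeds without the delete-a-model workaround.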