Closed
Labels
:Analytics/Aggregations, >bug, Team:Analytics (Meta label for analytical engine team (ESQL/Aggs/Geo)), priority:high (A label for assessing bug priority to be used by ES engineers)
Description
A v8.6 user on the forums reported hitting OOMEs (OutOfMemoryErrors), and when they analysed the heap dump they found that a large fraction of their 3GiB heap was used by the fielddata cache. GET /_nodes/_all/stats/breaker?filter_path=nodes.*.breakers.fielddata agrees:
{
"nodes": {
"K6V95L0pR36L-_99LIapdw": {
"breakers": {
"fielddata": {
"limit_size_in_bytes": 1288490188,
"limit_size": "1.1gb",
"estimated_size_in_bytes": 2766456144,
"estimated_size": "2.5gb",
"overhead": 1.03,
"tripped": 0
}
}
}
}
}
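To make the numbers above concrete, a quick sanity check (plain Python, with the figures copied from the response) shows the fielddata estimate exceeds the breaker limit by more than 2x even though tripped is 0:

```python
# Figures from the breaker stats above.
limit = 1_288_490_188      # "limit_size": "1.1gb"
estimated = 2_766_456_144  # "estimated_size": "2.5gb"

ratio = estimated / limit
print(f"fielddata estimate is {ratio:.2f}x the breaker limit")
# The cache has grown well past the limit without the breaker ever tripping.
```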
They worked around the problem by setting indices.fielddata.cache.size: 1gb, but I think it's a bug for the fielddata cache to grow without bound by default like this.
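For reference, a minimal sketch of how that workaround would look in elasticsearch.yml (this is a static node setting, so it requires a restart; the 1gb value is just what the user chose):

```yaml
# Cap the fielddata cache so old entries are evicted once the cap is reached.
indices.fielddata.cache.size: 1gb
```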