Ban Limit + MvExpand before remote Enrich #135051
Conversation
Hi @smalyshev, I've created a changelog YAML for you.
Pinging @elastic/es-analytical-engine (Team:Analytics) |
luigidellaquila
left a comment
Thanks @smalyshev, the implementation looks correct for this specific case.
I just left a comment about validation in general, and on other cases we could be missing, see below
```java
private void checkMvExpandAfterLimit(Failures failures) {
    this.forEachDown(MvExpand.class, u -> {
        u.forEachDown(p -> {
            if (p instanceof Limit || p instanceof TopN) {
```
I noticed that the logic for JOIN is a bit different; in particular, post optimization, it also checks for the presence of a PipelineBreaker, while ENRICH only checks for ExecutesOn.
Do you think it makes sense to unify the two, or at least to make these two checks consistent?
Join and Enrich are different: Enrich is cardinality-preserving while Join is not. That makes some pipeline breakers compatible with Enrich but not with Join. I agree that the PipelineBreaker usage is not ideal, as it's not exactly meant for that, and in the future we may change it to refine the meaning of each, but Enrich and Join will probably stay different - unless we move to handling them with subplans, which would resolve the cardinality problem (not for free, of course). For now I think PipelineBreaker is a good stand-in for what we need, but longer term it will probably need to be changed.
This is also the reason for this particular change, btw - MV_EXPAND changes cardinality, which leads Enrich to essentially have the same issue that remote JOIN has from the start - the order of LIMIT and cardinality-changing operation comes out wrong, as semantically we expect that the LIMIT is global over all the dataset, but in reality we only do it per-node and delay the global one until we're back at the coordinator. This only works if none of the operations in between is cardinality-changing.
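To illustrate the ordering problem, here is a toy sketch (not Elasticsearch code - the data, node layout, and method names are all invented). It compares the semantically correct plan (global LIMIT, then expand) against the broken distributed plan (per-node LIMIT, expand on each node, then a global LIMIT after the merge, which now counts already-expanded rows):

```java
import java.util.ArrayList;
import java.util.List;

// Toy model: each row is multivalued; "MV_EXPAND" turns a row into one row per value.
public class LimitMvExpandOrder {

    // Invented per-node data: two nodes, two multivalued rows each.
    static List<List<Integer>> node1 = List.of(List.of(1, 2), List.of(3, 4));
    static List<List<Integer>> node2 = List.of(List.of(5, 6), List.of(7, 8));

    static List<Integer> expand(List<List<Integer>> rows) {
        List<Integer> out = new ArrayList<>();
        for (List<Integer> row : rows) {
            out.addAll(row);
        }
        return out;
    }

    // Semantically correct: take the top `limit` rows over the whole dataset, then expand.
    static List<Integer> correctPlan(int limit) {
        List<List<Integer>> all = new ArrayList<>();
        all.addAll(node1);
        all.addAll(node2);
        return expand(all.subList(0, limit)); // global LIMIT applies to source rows
    }

    // Broken distributed plan: each node applies LIMIT locally and expands,
    // then the coordinator re-applies LIMIT on already-expanded rows.
    static List<Integer> brokenPlan(int limit) {
        List<Integer> merged = new ArrayList<>();
        merged.addAll(expand(node1.subList(0, Math.min(limit, node1.size()))));
        merged.addAll(expand(node2.subList(0, Math.min(limit, node2.size()))));
        return merged.subList(0, limit); // LIMIT now counts expanded rows - wrong
    }

    public static void main(String[] args) {
        System.out.println("correct: " + correctPlan(2)); // [1, 2, 3, 4]
        System.out.println("broken:  " + brokenPlan(2));  // [1, 2]
    }
}
```

The broken plan truncates the expanded rows of the first source row instead of keeping the expansion of the top two source rows, which is the wrong-results scenario this PR guards against.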
alex-spies
left a comment
Looks good, nice catch!
docs/changelog/135051.yaml (outdated)
```yaml
pr: 135051
summary: Ban Limit + `MvExpand` before remote Enrich
area: ES|QL
type: enhancement
```
nit: maybe that's more of a bug fix.
I guess though we don't have a bug filed for it...
```java
@Override
public void postAnalysisVerification(Failures failures) {
    if (this.mode == Mode.REMOTE) {
        checkMvExpandAfterLimit(failures);
```
note: since this triggers after analysis, the condition `p instanceof TopN` (while correct) will never be true - we don't create `TopN` nodes during analysis, only `OrderBy` nodes.
Yes, we have to do it at the analysis stage to avoid confusion with synthetic limits that are pushed down later, but I wasn't sure whether there's any way to end up with a `TopN` at the analysis stage, so I kept the check.
```esql
| LIMIT 2
| eval ip= TO_STR(host)
| MV_EXPAND host
| %s
```
We could add tests that have random nodes in between the mv expand and the enrich. Or between the limit and the mv expand (although we already have this to some extent).
do we have any "random harmless commands" code anywhere in the tests? Or just add a couple of fixed ones?
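One way to do this, sketched below, is a tiny test helper that splices a randomly chosen cardinality-preserving command between `MV_EXPAND` and the `ENRICH` clause. This is purely illustrative - the class, the command list, and the query are invented, and real tests would pick commands that are valid for the test mapping:

```java
import java.util.List;
import java.util.Random;

// Hypothetical test helper: insert a random "harmless" (cardinality-preserving)
// command right before the ENRICH clause of an ES|QL query string.
public class RandomHarmlessCommand {

    private static final List<String> HARMLESS = List.of(
        "| EVAL extra = 1",
        "| KEEP *",
        "| WHERE true"
    );

    static String withRandomCommand(String query, Random random) {
        String harmless = HARMLESS.get(random.nextInt(HARMLESS.size()));
        int enrichAt = query.indexOf("| ENRICH");
        // Splice the harmless command so the plan reads LIMIT | MV_EXPAND | <cmd> | ENRICH.
        return query.substring(0, enrichAt) + harmless + "\n" + query.substring(enrichAt);
    }

    public static void main(String[] args) {
        String query = "FROM events\n| LIMIT 2\n| MV_EXPAND host\n| ENRICH _remote:policy";
        System.out.println(withRandomCommand(query, new Random()));
    }
}
```

Running the existing verification tests over such randomized variants would check that the new failure still fires regardless of what sits between the expand and the enrich.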
```java
this.forEachDown(MvExpand.class, u -> {
    u.forEachDown(p -> {
        if (p instanceof Limit || p instanceof TopN) {
            failures.add(fail(this, "MV_EXPAND after LIMIT is incompatible with remote ENRICH"));
```
note: technically, that's only true if we cannot push the remote enrich past the mv_expand. Which we sometimes could! (There are more optimizations that could be applied to MV_EXPAND, in general.)
But since we currently don't do this, this check will strictly prohibit only queries that we can't properly run anyway, so this is fine.
Maybe we could add a comment, though?
Hi @smalyshev, I've updated the changelog YAML for you.
@alex-spies do you think we should backport it to 9.1/8.19?
* Ban Limit + MvExpand before remote Enrich
* Ban Limit + MvExpand before remote Enrich (cherry picked from commit 7f1d2dc)
  Conflicts: x-pack/plugin/esql/src/main/java/org/elasticsearch/xpack/esql/plan/logical/Enrich.java
💚 All backports created successfully
Questions? Please refer to the Backport tool documentation.
* Add release note for elastic#135051
Remote ENRICH (and in fact any remote operation) is not compatible with MV_EXPAND + LIMIT. Consider a query that applies LIMIT 2 followed by MV_EXPAND before a remote ENRICH. Semantically, this must take the top two events and then expand them. However, this cannot be executed remotely, because it would mean taking the top 2 events on each node, expanding them, applying the Enrich, and bringing them to the coordinator - but at that point we can no longer select the top 2 of them, because that limit is defined pre-expand! We do not know which expanded rows come from the true top rows and which come from "false" top rows that should have been thrown out. This is only possible to execute if MV_EXPAND executes on the coordinator - which contradicts remote Enrich.

With the current hack it would silently return wrong data (as it would apply the LIMIT after joining the remote data without caring about MvExpand), but even if we fix the hack, I don't think it can be executed with these semantics, at least not without subplans.
The same problem would happen with remote join - except limits are already banned before LOOKUP JOIN, so we're good there. And the same would probably happen with any other expanding operation - but I think joins and MV_EXPAND are the only ones that exist right now.
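For concreteness, the problematic query shape looks something like the following (a sketch modeled on the test snippet quoted above - the index, field, and policy names are made up):

```esql
FROM events
| LIMIT 2
| MV_EXPAND host
| ENRICH _remote:hosts_policy ON host
```

With this PR, such a query is rejected at verification instead of silently returning wrong rows.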