Add consistent Add/Inc/Observe microbenchmarks, validate results on release (regressions vs past releases). #1759

Description

@bwplotka

We had to revert #1661 due to major performance regressions in the Add/Inc/Observe methods for cumulatives; see #1748.

While #1661 added benchmarks, they appear to have been unrealistic (e.g. they involved context switching and extra work between operations). We also lacked a good basis for judging whether that 10ms overhead was acceptable. We need to make sure that:

  1. There are benchmarks we can rely on (see the sketch after this list).
  2. Our release process (or even per-PR CI) runs those benchmarks and detects potential regressions.
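
As a starting point, here is a minimal sketch of what consistent hot-path benchmarks for the three methods could look like, written against the public client_golang API. The metric names are placeholders, and the exact scenarios (parallelism, value distributions, number of iterations) are precisely what this issue needs to settle:

```go
package promtest

import (
	"testing"

	"github.com/prometheus/client_golang/prometheus"
)

// BenchmarkCounterInc measures the uncontended hot path of Counter.Inc.
func BenchmarkCounterInc(b *testing.B) {
	c := prometheus.NewCounter(prometheus.CounterOpts{Name: "bench_counter_total"})
	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		c.Inc()
	}
}

// BenchmarkCounterAdd measures Counter.Add with a non-integer delta,
// since Add takes a different code path than Inc for fractional values.
func BenchmarkCounterAdd(b *testing.B) {
	c := prometheus.NewCounter(prometheus.CounterOpts{Name: "bench_counter_total"})
	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		c.Add(1.5)
	}
}

// BenchmarkHistogramObserve measures Histogram.Observe across the
// default buckets, cycling observed values to hit multiple buckets.
func BenchmarkHistogramObserve(b *testing.B) {
	h := prometheus.NewHistogram(prometheus.HistogramOpts{
		Name:    "bench_histogram",
		Buckets: prometheus.DefBuckets,
	})
	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		h.Observe(float64(i % 100))
	}
}

// BenchmarkCounterIncParallel exercises Inc under goroutine contention,
// which is where regressions like the one reverted in #1748 tend to show up.
func BenchmarkCounterIncParallel(b *testing.B) {
	c := prometheus.NewCounter(prometheus.CounterOpts{Name: "bench_counter_total"})
	b.ReportAllocs()
	b.ResetTimer()
	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			c.Inc()
		}
	})
}
```

With stable benchmarks like these in-repo, one possible release (or per-PR) step would be to run `go test -bench 'Counter|Histogram' -count 10` against the previous release and against HEAD, then compare the two result files with `benchstat` and fail on a statistically significant slowdown.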

Help wanted!
