
Break background cache writes into batches of 100 #2135


Merged

bboreham merged 4 commits into master from batch-background-cache on Feb 28, 2020

Conversation

bboreham (Contributor) commented:

What this PR does:

Improves parallelism and observability of the background cache writes.

Which issue(s) this PR fixes:
Fixes #2134

Checklist

  • CHANGELOG.md updated
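
To make the batching concrete, here is a minimal Go sketch of the idea, assuming illustrative names (`backgroundCache`, `bgWrite`, `batchSize`) rather than the actual Cortex identifiers: each incoming write is split into batches of at most 100 keys, and each batch occupies one slot in the background buffer.

```go
package cache

// Sketch only: illustrative names, not the actual Cortex identifiers.
const batchSize = 100 // keys per background write batch

// Cache is the minimal interface this sketch needs from the wrapped cache.
type Cache interface {
	Store(keys []string, bufs [][]byte)
}

type bgWrite struct {
	keys []string
	bufs [][]byte
}

type backgroundCache struct {
	underlying Cache
	bgWrites   chan bgWrite // holds WriteBackBuffer batches, not chunks
}

// Store splits the write into batches of at most batchSize keys; each batch
// takes one buffer slot, and is dropped (not blocked on) if the buffer is full.
func (c *backgroundCache) Store(keys []string, bufs [][]byte) {
	for len(keys) > 0 {
		num := batchSize
		if num > len(keys) {
			num = len(keys)
		}
		select {
		case c.bgWrites <- bgWrite{keys: keys[:num], bufs: bufs[:num]}:
		default:
			// Buffer full: drop the batch rather than block the caller.
			// A real implementation would count the drop in a metric;
			// that is the observability part.
		}
		keys = keys[num:]
		bufs = bufs[num:]
	}
}
```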

@jtlisi jtlisi left a comment


Overall LGTM. However, there is one thing I did notice.

@bboreham bboreham force-pushed the batch-background-cache branch from 2a6bdfe to cf5d9b0 on February 20, 2020 10:25
```diff
@@ -33,7 +33,7 @@ type BackgroundConfig struct {
 // RegisterFlagsWithPrefix adds the flags required to config this to the given FlagSet
 func (cfg *BackgroundConfig) RegisterFlagsWithPrefix(prefix string, description string, f *flag.FlagSet) {
 	f.IntVar(&cfg.WriteBackGoroutines, prefix+"memcache.write-back-goroutines", 10, description+"How many goroutines to use to write back to memcache.")
-	f.IntVar(&cfg.WriteBackBuffer, prefix+"memcache.write-back-buffer", 10000, description+"How many chunks to buffer for background write back.")
+	f.IntVar(&cfg.WriteBackBuffer, prefix+"memcache.write-back-buffer", 10000, description+"How many key batches to buffer for background write-back.")
```
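
Continuing the sketch above, this is roughly how the two flags could plug in: WriteBackBuffer sizes the channel in batches (hence the reworded help text), and WriteBackGoroutines workers drain it. `newBackgroundCache` is an assumed name for illustration, not the real Cortex API.

```go
// Sketch only, continuing the example above.
func newBackgroundCache(cfg BackgroundConfig, underlying Cache) *backgroundCache {
	c := &backgroundCache{
		underlying: underlying,
		bgWrites:   make(chan bgWrite, cfg.WriteBackBuffer), // sized in batches
	}
	for i := 0; i < cfg.WriteBackGoroutines; i++ {
		go c.writeBackLoop()
	}
	return c
}

func (c *backgroundCache) writeBackLoop() {
	for batch := range c.bgWrites {
		c.underlying.Store(batch.keys, batch.bufs)
	}
}
```

Sizing the channel in batches rather than individual chunks is what changes the meaning of write-back-buffer, which is why the flag description is reworded in the diff above.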
jtlisi (Contributor) commented:
Currently this means we can buffer 10000 * 100 = 1 million chunks, which is not something we want. Can we reduce it to 1000/100 = 10 batches at once? But that in turn means that if there are 10 writes, each with a single key, we'll only buffer 10 keys.

Not sure how to handle this simply.

bboreham (Contributor, Author) replied:
This is not a new problem; previously we could buffer 10000 * (any number) chunks.
I suggest we deal with your point separately.
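
To put numbers on the trade-off above, a tiny hypothetical helper (not part of the PR) shows the range of chunks a batch-counted buffer can hold:

```go
// boundedChunks returns the fewest and most chunks a buffer of
// bufferBatches batches can hold, given batchSize keys per full batch.
// Hypothetical helper, for illustration only.
func boundedChunks(bufferBatches, batchSize int) (low, high int) {
	return bufferBatches, bufferBatches * batchSize
}
```

With the defaults, boundedChunks(10000, 100) gives 10,000 to 1,000,000 chunks; a buffer of 10 batches, boundedChunks(10, 100), caps the worst case at 1,000 chunks but means ten single-key writes fill it.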

This improves parallelism and observability.
Fixes #2134

Signed-off-by: Bryan Boreham <bryan@weave.works>
Signed-off-by: Bryan Boreham <bryan@weave.works>
@bboreham bboreham force-pushed the batch-background-cache branch from cf5d9b0 to 6ffe457 on February 28, 2020 13:49
Signed-off-by: Bryan Boreham <bryan@weave.works>
Also comment one use of arbitrary integers in the test, which is unrelated to test size.

Signed-off-by: Bryan Boreham <bryan@weave.works>
@bboreham bboreham force-pushed the batch-background-cache branch from 8ad1589 to 565902d on February 28, 2020 14:05

@pracucci pracucci left a comment


LGTM

@bboreham bboreham merged commit c0db39b into master on Feb 28, 2020
@bboreham bboreham deleted the batch-background-cache branch on February 28, 2020 16:00