Break background cache writes into batches of 100 #2135
Conversation
Overall LGTM. However, there is one thing I did notice.
Force-pushed from 2a6bdfe to cf5d9b0
@@ -33,7 +33,7 @@ type BackgroundConfig struct {
 // RegisterFlagsWithPrefix adds the flags required to config this to the given FlagSet
 func (cfg *BackgroundConfig) RegisterFlagsWithPrefix(prefix string, description string, f *flag.FlagSet) {
 	f.IntVar(&cfg.WriteBackGoroutines, prefix+"memcache.write-back-goroutines", 10, description+"How many goroutines to use to write back to memcache.")
-	f.IntVar(&cfg.WriteBackBuffer, prefix+"memcache.write-back-buffer", 10000, description+"How many chunks to buffer for background write back.")
+	f.IntVar(&cfg.WriteBackBuffer, prefix+"memcache.write-back-buffer", 10000, description+"How many key batches to buffer for background write-back.")
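For context, the flag now counts buffered batches rather than individual chunks. Below is a minimal sketch of how a Store call could be split into batches of 100 and queued; the names (backgroundCache, bgWrite, bgWrites, batchSize) and the exact signatures are assumptions for illustration, not the actual Cortex implementation.

package cache

import "context"

// batchSize is the assumed maximum batch size from the PR title.
const batchSize = 100

// bgWrite is one queued batch of keys and values (illustrative name).
type bgWrite struct {
	keys []string
	bufs [][]byte
}

// backgroundCache holds the buffered queue; its capacity would be cfg.WriteBackBuffer.
type backgroundCache struct {
	bgWrites chan bgWrite
}

// Store splits the incoming keys into batches of at most batchSize and
// queues each batch for background write-back, dropping work when the queue is full.
func (c *backgroundCache) Store(ctx context.Context, keys []string, bufs [][]byte) {
	for len(keys) > 0 {
		num := batchSize
		if num > len(keys) {
			num = len(keys)
		}
		select {
		case c.bgWrites <- bgWrite{keys: keys[:num], bufs: bufs[:num]}:
			// one batch of up to batchSize keys queued
		default:
			// queue full: the real code would count the dropped keys in a metric
			return
		}
		keys, bufs = keys[num:], bufs[num:]
	}
}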
Currently this means we can buffer 10000 * 100 = 1 million chunks, which is not something we want. Can we reduce it to 1000/100 = 10 batches at once? But that further means that if there are 10 writes, each with a single key, then we'll only buffer 10 keys.
Not sure how to handle this simply.
This is not a new problem; previously we could buffer 10000*(any number) chunks.
Suggest we deal with your point separately.
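To make the arithmetic above concrete: the channel is buffered per batch, so the default of 10000 entries can hold up to 10000 * 100 = 1,000,000 chunks, while 10 single-key writes occupy only 10 slots holding 10 keys. Continuing the sketch above (the Cache interface and writeBackLoop name are likewise assumed, not the actual Cortex code), the consuming side drains whole batches:

// Cache is the minimal interface this sketch assumes for the backing store.
type Cache interface {
	Store(ctx context.Context, keys []string, bufs [][]byte)
}

// writeBackLoop would be run by each of the cfg.WriteBackGoroutines goroutines;
// each channel element is a whole batch of up to batchSize keys, not one chunk.
func (c *backgroundCache) writeBackLoop(ctx context.Context, next Cache) {
	for bw := range c.bgWrites {
		next.Store(ctx, bw.keys, bw.bufs)
	}
}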
This improves parallelism and observability. Fixes #2134
Signed-off-by: Bryan Boreham <bryan@weave.works>
Force-pushed from cf5d9b0 to 6ffe457
Also comment one use of arbitrary integers in the test which is unrelated to test size.
Signed-off-by: Bryan Boreham <bryan@weave.works>
Force-pushed from 8ad1589 to 565902d
LGTM
What this PR does:
Improves parallelism and observability of the background cache writes.
Which issue(s) this PR fixes:
Fixes #2134
Checklist
CHANGELOG.md updated
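On the observability part of the description: one common way to surface the behaviour of a background write-back queue is with Prometheus metrics for queue length and dropped writes. The sketch below is illustrative only; the metric names are hypothetical and not the actual Cortex instrumentation.

package cache

import "github.com/prometheus/client_golang/prometheus"

// Hypothetical metric names, for illustration of the idea only.
var (
	droppedWriteBack = prometheus.NewCounter(prometheus.CounterOpts{
		Name: "cache_dropped_background_writes_total",
		Help: "Total keys dropped because the background write-back queue was full.",
	})
	queueLength = prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "cache_background_queue_length",
		Help: "Number of keys currently queued for background write-back.",
	})
)

func init() {
	prometheus.MustRegister(droppedWriteBack, queueLength)
}

In the Store sketch earlier, the successful case would call queueLength.Add(float64(num)) and the full-queue default branch would call droppedWriteBack.Add(float64(len(keys))), so operators can see both back-pressure and data loss per batch.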