Remove shared_store and shared_store_key_prefix from shipper and compactor #10840

Merged
merged 20 commits into from
Oct 30, 2023
compactor: remove shared store
ashwanthgoli committed Oct 19, 2023
commit 7c5abbea82b5e34b7cb9de5343ee03482fb4500a
19 changes: 5 additions & 14 deletions docs/sources/configure/_index.md
@@ -2139,19 +2139,6 @@ The `compactor` block configures the compactor component, which compacts index s
# CLI flag: -compactor.working-directory
[working_directory: <string> | default = ""]

# The shared store used for storing boltdb files. Supported types: gcs, s3,
# azure, swift, filesystem, bos, cos. If not set, compactor will be initialized
# to operate on all the object stores that contain either boltdb-shipper or tsdb
# index.
# CLI flag: -compactor.shared-store
[shared_store: <string> | default = ""]

# Prefix to add to object keys in shared store. Path separator(if any) should
# always be a '/'. Prefix should never start with a separator but should always
# end with it.
# CLI flag: -compactor.shared-store.key-prefix
[shared_store_key_prefix: <string> | default = "index/"]

# Interval at which to re-run the compaction operation.
# CLI flag: -compactor.compaction-interval
[compaction_interval: <duration> | default = 10m]
@@ -2179,10 +2166,14 @@ The `compactor` block configures the compactor component, which compacts index s
# CLI flag: -compactor.retention-table-timeout
[retention_table_timeout: <duration> | default = 0s]

# Store used for managing delete requests. Defaults to -compactor.shared-store.
# Store used for managing delete requests.
# CLI flag: -compactor.delete-request-store
[delete_request_store: <string> | default = ""]

# Path prefix for storing delete requests.
# CLI flag: -compactor.delete-request-store.key-prefix
[delete_request_store_key_prefix: <string> | default = "index/"]

# The max number of delete requests to run per compaction cycle.
# CLI flag: -compactor.delete-batch-size
[delete_batch_size: <int> | default = 70]
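Taken together, the two hunks above leave the compactor block configured roughly as follows. This is a sketch with illustrative values, not text from the diff; `gcs` stands in for whichever object store type your deployment uses:

```yaml
compactor:
  working_directory: /loki/compactor
  compaction_interval: 10m
  retention_enabled: true
  # Replaces the removed shared_store for delete requests; required
  # when retention is enabled. "gcs" is an illustrative value.
  delete_request_store: gcs
  delete_request_store_key_prefix: index/
```

Note that the compactor no longer needs to be told which store to compact: it derives that from the period configs.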
1 change: 0 additions & 1 deletion docs/sources/configure/examples.md
@@ -180,7 +180,6 @@ storage_config:

compactor:
working_directory: /tmp/loki/compactor
shared_store: s3
compaction_interval: 5m

```
3 changes: 1 addition & 2 deletions docs/sources/configure/examples/6-Compactor-Snippet.yaml
@@ -4,5 +4,4 @@

compactor:
working_directory: /tmp/loki/compactor
shared_store: s3
compaction_interval: 5m
compaction_interval: 5m
1 change: 0 additions & 1 deletion docs/sources/operations/storage/boltdb-shipper.md
@@ -143,7 +143,6 @@ The compactor is an optional but suggested component that combines and deduplica
```yaml
compactor:
working_directory: /loki/compactor
shared_store: gcs

storage_config:
gcs:
3 changes: 2 additions & 1 deletion docs/sources/operations/storage/logs-deletion.md
@@ -9,7 +9,7 @@ weight: 700
Grafana Loki supports the deletion of log entries from a specified stream.
Log entries that fall within a specified time window and match an optional line filter are those that will be deleted.

Log entry deletion is supported _only_ when the BoltDB Shipper is configured for the index store.
Log entry deletion is supported _only_ when TSDB or BoltDB shipper is configured as the index store.

The compactor component exposes REST [endpoints]({{< relref "../../reference/api#compactor" >}}) that process delete requests.
Hitting the endpoint specifies the streams and the time window.
@@ -20,6 +20,7 @@ Log entry deletion relies on configuration of the custom logs retention workflow
## Configuration

Enable log entry deletion by setting `retention_enabled` to true in the compactor's configuration and setting `deletion_mode` to `filter-only` or `filter-and-delete` in the runtime config.
`delete_request_store` also needs to be configured when retention is enabled to process delete requests; this decides the storage bucket that stores the delete requests.
Suggested change
`delete_request_store` also needs to be configured when retention is enabled to process delete requests, this decides the storage bucket that stores the delete requests.
`delete_request_store` also needs to be configured when retention is enabled to process delete requests, this determines the storage bucket that stores the delete requests.

> **Warning:** Be very careful when enabling retention. It is strongly recommended that you also enable versioning on your objects in object storage to allow you to recover from accidental misconfiguration of a retention setting. If you want to enable deletion but do not want to enforce retention, configure the `retention_period` setting with a value of `0s`.

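Putting the two settings together, a minimal deletion setup might look like the following sketch. The tenant name and store type are illustrative, and the per-tenant block assumes the standard runtime-config overrides file layout:

```yaml
# loki.yaml — compactor side
compactor:
  retention_enabled: true
  delete_request_store: s3   # illustrative; any configured object store type

# runtime config (overrides file) — allow deletion for one tenant
overrides:
  "tenant-a":
    deletion_mode: filter-and-delete
```

With `filter-only`, matching log lines are filtered out of query results but not removed from storage; `filter-and-delete` also removes them.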
6 changes: 4 additions & 2 deletions docs/sources/operations/storage/retention.md
@@ -60,11 +60,11 @@ This Compactor configuration example activates retention.
```yaml
compactor:
working_directory: /data/retention
shared_store: gcs
compaction_interval: 10m
retention_enabled: true
retention_delete_delay: 2h
retention_delete_worker_count: 150
delete_request_store: gcs
schema_config:
configs:
- from: "2020-07-31"
@@ -88,6 +88,8 @@ Retention is only available if the index period is 24h. Single store TSDB and si

`retention_enabled` should be set to true. Without this, the Compactor will only compact tables.

`delete_request_store` should be set to configure the store for delete requests. This is required when retention is enabled.

`working_directory` is the directory where marked chunks and temporary tables will be saved.

`compaction_interval` dictates how often compaction and/or retention is applied. If the Compactor falls behind, compaction and/or retention occur as soon as possible.
@@ -254,6 +256,6 @@ limits_config:

compactor:
working_directory: /data/retention
shared_store: gcs
delete_request_store: gcs
retention_enabled: true
```
13 changes: 13 additions & 0 deletions docs/sources/setup/upgrade/_index.md
@@ -88,6 +88,19 @@ period_config:
- If you have configured `-boltdb.shipper.shared-store.key-prefix` or its YAML setting to a value other than `index/`, ensure that all the existing period configs that use `boltdb-shipper` as the index have `path_prefix` set to the value previously configured.
- If you have configured `-tsdb.shipper.shared-store.key-prefix` or its YAML setting to a value other than `index/`, ensure that all the existing period configs that use `tsdb` as the index have the `path_prefix` set to the value previously configured.

#### Removed `shared_store` and `shared_store_key_prefix` from compactor configuration

The following CLI flags, and their corresponding YAML settings, which configured the shared store and path prefix for the compactor, have been removed:
- `-boltdb.shipper.compactor.shared-store`
- `-boltdb.shipper.compactor.shared-store.key-prefix`

Going forward, the compactor will run compaction and retention on all object stores configured in [period configs](/docs/loki/latest/configure/#period_config) where the index type is either `tsdb` or `boltdb-shipper`.
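As an illustration (dates, schema versions, and store names are hypothetical), a deployment with period configs such as the following would have the compactor operate on both object stores automatically:

```yaml
schema_config:
  configs:
    # Older period indexed with boltdb-shipper in S3.
    - from: "2022-01-01"
      store: boltdb-shipper
      object_store: s3
      schema: v12
      index:
        prefix: index_
        period: 24h
    # Newer period indexed with TSDB in GCS.
    - from: "2023-06-01"
      store: tsdb
      object_store: gcs
      schema: v13
      index:
        prefix: index_
        period: 24h
```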

#### `delete_request_store` should be explicitly configured

`-compactor.delete-request-store` (or its YAML setting) must be explicitly configured when retention is enabled, since it is required for storing delete requests.
The path prefix under which the delete requests are stored is set by `-compactor.delete-request-store.key-prefix`, which defaults to `index/`.
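A before/after sketch of the YAML migration (the store name is illustrative):

```yaml
# Before (settings removed in this release):
compactor:
  shared_store: gcs
  shared_store_key_prefix: index/
  retention_enabled: true

# After:
compactor:
  retention_enabled: true
  delete_request_store: gcs
  delete_request_store_key_prefix: index/
```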

#### Configuration `use_boltdb_shipper_as_backup` is removed

The setting `use_boltdb_shipper_as_backup` (`-tsdb.shipper.use-boltdb-shipper-as-backup`) was a remnant from the development of the TSDB storage.
1 change: 1 addition & 0 deletions integration/cluster/cluster.go
@@ -78,6 +78,7 @@ storage_config:
compactor:
working_directory: {{.dataPath}}/retention
retention_enabled: true
delete_request_store: store-1

analytics:
reporting_enabled: false