Thanos supports any object store that can be implemented against the Thanos `objstore.Bucket` interface.
All clients are configured using `--objstore.config-file` to reference a configuration file, or `--objstore.config` to pass the YAML config directly.
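For example, the two flags can be used interchangeably. The sketch below assumes a hypothetical `bucket.yml` file and uses the `sidecar` component for illustration; the bucket name and endpoint are placeholders:

```shell
# Write the object store configuration to a file (bucket name and
# endpoint are placeholders).
cat > bucket.yml <<'EOF'
type: S3
config:
  bucket: "my-bucket"
  endpoint: "s3.us-east-1.amazonaws.com"
EOF

# Reference the file:
#   thanos sidecar --objstore.config-file=bucket.yml ...
# or pass the YAML inline:
#   thanos sidecar --objstore.config="$(cat bucket.yml)" ...
```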
Current object storage client implementations:
| Provider | Maturity | Auto-tested on CI | Maintainers |
|---|---|---|---|
| Google Cloud Storage | Stable (production usage) | yes | @bplotka |
| AWS S3 | Beta (working PoCs, testing usage) | no | ? |
| Azure Storage Account | Alpha | yes | @vglafirov |
| OpenStack Swift | Beta (working PoCs, testing usage) | no | @sudhi-vm |
NOTE: Thanos currently requires strong (write-read) consistency from the object store implementation.
To add a new client implementation:

1. Create a new folder under `pkg/objstore/<provider>` and implement the `objstore.Bucket` interface there.
2. Add a `NewTestBucket` constructor for testing purposes, that creates and deletes a temporary bucket.
3. Use the created `NewTestBucket` in the `ForeachStore` method to ensure we can run tests against the new provider. (In PR)

At that point, anyone can use your provider by spec.
Thanos uses the minio client to upload Prometheus data to AWS S3.
To configure an S3 bucket as an object store you need to set these mandatory S3 variables in YAML format, stored in a file:

```yaml
type: S3
config:
  bucket: ""
  endpoint: ""
  access_key: ""
  insecure: false
  signature_version2: false
  encrypt_sse: false
  secret_key: ""
  http_config:
    idle_conn_timeout: 0s
```
The AWS region to endpoint mapping can be found in the AWS documentation.
Make sure you use the correct signature version. AWS currently requires signature v4, so it needs `signature_version2: false`; otherwise you will get an Access Denied error. Several other S3-compatible object storages, however, use `signature_version2: true`.

For debugging purposes you can set `insecure: true` to switch to plain insecure HTTP instead of HTTPS.
By default Thanos will try to retrieve credentials from the following sources:

1. The `~/.aws/credentials` file.
2. The `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables.
To use specific credentials, set the `access_key` field, or set the `S3_SECRET_KEY` environment variable with the AWS secret key.
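For example, the environment-variable route can be exercised like this (all values below are placeholders, not real credentials):

```shell
# Placeholder credentials; replace with real values before use.
export AWS_ACCESS_KEY_ID="AKIAEXAMPLE"
export AWS_SECRET_ACCESS_KEY="example-secret-key"

# S3_SECRET_KEY hands the secret key to Thanos directly.
export S3_SECRET_KEY="$AWS_SECRET_ACCESS_KEY"
```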
Example working AWS IAM policy for a user:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Statement",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::<bucket>/*",
        "arn:aws:s3:::<bucket>"
      ]
    }
  ]
}
```

(No bucket policy.)
To test the policy, set env vars for S3 access to an empty, unused bucket, as well as:

```shell
THANOS_SKIP_GCS_TESTS=true
THANOS_ALLOW_EXISTING_BUCKET_USE=true
```

And run:

```shell
GOCACHE=off go test -v -run TestObjStore_AcceptanceTest_e2e ./pkg/...
```
For the full test suite we also need access to `s3:CreateBucket` and `s3:DeleteBucket`, and access to all buckets:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Statement",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:PutObject",
        "s3:CreateBucket",
        "s3:DeleteBucket"
      ],
      "Resource": [
        "arn:aws:s3:::<bucket>/*",
        "arn:aws:s3:::<bucket>"
      ]
    }
  ]
}
```
With this policy you should be able to set `THANOS_SKIP_GCS_TESTS=true`, unset `S3_BUCKET`, and run all tests using `make test`.
Details about AWS policies: https://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html
To configure a Google Cloud Storage bucket as an object store you need to set `bucket` to the GCS bucket name and configure Google Application credentials.
For example:
```yaml
type: GCS
config:
  bucket: ""
```
Application credentials are configured via a JSON file; the client looks for credentials in the following order:

1. The `GOOGLE_APPLICATION_CREDENTIALS` environment variable.
2. A JSON file in a well-known location: on Windows, `%APPDATA%/gcloud/application_default_credentials.json`; on other systems, `$HOME/.config/gcloud/application_default_credentials.json`.
3. On Google App Engine, the `appengine.AccessToken` function.

You can read more on how to get an application credentials JSON file at https://cloud.google.com/docs/authentication/production
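For example, to point the client at a downloaded service account key file (the path below is a placeholder):

```shell
# Path to a downloaded service account key JSON file; placeholder value.
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/gcs-credentials.json"
```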
For deployment: the `Storage Object Creator` and `Storage Object Viewer` roles.

For testing: the `Storage Object Admin` role, for the ability to create and delete temporary buckets.
To use Azure Storage as the Thanos object store, you need to pre-create a storage account from the Azure portal or using the Azure CLI. Follow the instructions from the Azure Storage Documentation: https://docs.microsoft.com/en-us/azure/storage/common/storage-quickstart-create-account

To configure an Azure Storage account as an object store you need to provide a path to the Azure storage config file via the flag `--objstore.config-file`.
The config file format is the following:

```yaml
type: AZURE
config:
  storage_account: ""
  storage_account_key: ""
  container: ""
```
Thanos uses the gophercloud client to upload Prometheus data into OpenStack Swift.

Below is an example configuration file for Thanos to use an OpenStack Swift container as an object store (the field names follow gophercloud's authentication options):

```yaml
type: SWIFT
config:
  auth_url: ""
  username: ""
  user_id: ""
  password: ""
  domain_id: ""
  domain_name: ""
  project_id: ""
  project_name: ""
  region_name: ""
  container_name: ""
```
The minio client used for AWS S3 can potentially be configured against other S3-compatible object storages.
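For example, a sketch of such a configuration pointing at a self-hosted Minio server (endpoint, credentials, and bucket name are all placeholders):

```yaml
type: S3
config:
  bucket: "thanos"
  endpoint: "minio.example.com:9000"
  access_key: "minio-access-key"
  secret_key: "minio-secret-key"
  insecure: true            # plain HTTP; for testing only
  signature_version2: false # some older S3-compatible gateways need true here
```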