Thanos supports any object store that can be implemented against the Thanos `objstore.Bucket` interface.
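For orientation, here is a simplified sketch of that interface. The authoritative definition lives in `pkg/objstore/objstore.go`; the exact method set may differ between versions:

```go
package objstore

import (
	"context"
	"io"
)

// Bucket is a simplified sketch of the object store interface; see
// pkg/objstore/objstore.go for the authoritative definition.
type Bucket interface {
	// Iter calls f for each entry in the given directory.
	Iter(ctx context.Context, dir string, f func(name string) error) error
	// Get returns a reader for the object with the given name.
	Get(ctx context.Context, name string) (io.ReadCloser, error)
	// Exists reports whether the object with the given name exists.
	Exists(ctx context.Context, name string) (bool, error)
	// Upload writes the contents of the reader as an object with the given name.
	Upload(ctx context.Context, name string, r io.Reader) error
	// Delete removes the object with the given name.
	Delete(ctx context.Context, name string) error
}
```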
Current object storage client implementations:
| Provider             | Maturity                           | Auto-tested on CI | Maintainers |
|----------------------|------------------------------------|-------------------|-------------|
| Google Cloud Storage | Stable (production usage)          | yes               | @bplotka    |
| AWS S3               | Beta (working PoCs, testing usage) | no                | ?           |
NOTE: Currently Thanos requires strong (write-read) consistency from any object store implementation.
## How to add a new client?

1. Create a new directory under `pkg/objstore/<provider>` and implement the `objstore.Bucket` interface there.
2. Add a `NewTestBucket` constructor for testing purposes that creates and deletes a temporary bucket.
3. Use the created `NewTestBucket` in the `ForeachStore` method to ensure we can run tests against the new provider (in PR); see the sketch after this list.

At that point, anyone can use your provider!
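As an illustration of step 3, a provider-agnostic test might look roughly like the sketch below. The `ForeachStore` signature and import paths here are assumptions based on the repository layout; check `pkg/objstore/objtesting` for the authoritative definitions:

```go
package objtesting_test

import (
	"context"
	"io/ioutil"
	"strings"
	"testing"

	"github.com/improbable-eng/thanos/pkg/objstore"
	"github.com/improbable-eng/thanos/pkg/objstore/objtesting"
)

// TestUploadGet runs once per registered provider via ForeachStore,
// uploading a small object and reading it back through the Bucket interface.
func TestUploadGet(t *testing.T) {
	objtesting.ForeachStore(t, func(t testing.TB, bkt objstore.Bucket) {
		ctx := context.Background()

		if err := bkt.Upload(ctx, "dir/obj", strings.NewReader("content")); err != nil {
			t.Fatal(err)
		}
		r, err := bkt.Get(ctx, "dir/obj")
		if err != nil {
			t.Fatal(err)
		}
		defer r.Close()

		got, err := ioutil.ReadAll(r)
		if err != nil {
			t.Fatal(err)
		}
		if string(got) != "content" {
			t.Fatalf("unexpected content: %q", got)
		}
	})
}
```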
## AWS S3 configuration

Thanos uses the [minio client](https://github.com/minio/minio-go) to upload Prometheus data into AWS S3.
To configure an S3 bucket as an object store you need to set these mandatory S3 flags:

- `--s3.bucket`
- `--s3.endpoint`
- `--s3.access-key`
Instead of using flags, you can pass all of the configuration via environment variables (see the example after this list):

- `S3_BUCKET`
- `S3_ENDPOINT`
- `S3_ACCESS_KEY`
- `S3_SECRET_KEY`
- `S3_INSECURE`
- `S3_SIGNATURE_VERSION2`
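For example, a sidecar deployment configured purely via these environment variables might look like this (all values are placeholders; the endpoint depends on your region, see the mapping linked below):

```
export S3_BUCKET="my-thanos-bucket"
export S3_ENDPOINT="s3.eu-west-1.amazonaws.com"
export S3_ACCESS_KEY="<access-key-id>"
export S3_SECRET_KEY="<secret-access-key>"

thanos sidecar   # plus the usual sidecar flags
```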
The AWS region to endpoint mapping can be found in the [AWS documentation](https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region).
Make sure you use the correct signature version with `--s3.signature-version2`; otherwise you will get an Access Denied error.
For debugging purposes you can set `--s3.insecure` to switch to plain, insecure HTTP instead of HTTPS.
By default, the client tries to retrieve credentials from the following sources:

- the `~/.aws/credentials` file
- the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables
To use specific credentials, use the `--s3.access-key` flag and set the `S3_SECRET_KEY` environment variable to your AWS secret key.
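For example, passing the access key as a flag while keeping the secret in the environment (all values are placeholders, remaining flags elided):

```
S3_SECRET_KEY="<secret-access-key>" thanos sidecar \
  --s3.bucket="my-thanos-bucket" \
  --s3.endpoint="s3.eu-west-1.amazonaws.com" \
  --s3.access-key="<access-key-id>"
```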
### AWS policies

Example working AWS IAM policy for the user:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Statement",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::<bucket>/*",
        "arn:aws:s3:::<bucket>"
      ]
    }
  ]
}
```
(No bucket policy is needed.)
To test the policy, set environment variables for S3 access to an empty, unused bucket, as well as:

```
THANOS_SKIP_GCS_TESTS=true
THANOS_ALLOW_EXISTING_BUCKET_USE=true
```

Then run:

```
GOCACHE=off go test -v -run TestObjStore_AcceptanceTest_e2e ./pkg/...
```
To run the full test suite, we need access to `CreateBucket` and `DeleteBucket`, as well as access to all buckets:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Statement",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:PutObject",
        "s3:CreateBucket",
        "s3:DeleteBucket"
      ],
      "Resource": [
        "arn:aws:s3:::<bucket>/*",
        "arn:aws:s3:::<bucket>"
      ]
    }
  ]
}
```
With this policy, set `THANOS_SKIP_GCS_TESTS=true`, unset `S3_BUCKET`, and you should be able to run all tests using `make test`.
Details about AWS policies: https://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html
## GCS configuration

To configure a Google Cloud Storage bucket as an object store you need to set `--gcs.bucket` with the GCS bucket name and configure Google Application credentials.
Application credentials are configured via a JSON file; the client looks for credentials in the following order:

1. A path specified by the `GOOGLE_APPLICATION_CREDENTIALS` environment variable.
2. A JSON file in a location known to the `gcloud` command-line tool: `%APPDATA%/gcloud/application_default_credentials.json` on Windows, `$HOME/.config/gcloud/application_default_credentials.json` on other systems.
3. On Google App Engine, the `appengine.AccessToken` function.

You can read more on how to get the application credentials JSON file at https://cloud.google.com/docs/authentication/production
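For example, pointing Thanos at a GCS bucket with a service account key file (bucket name and path are placeholders):

```
GOOGLE_APPLICATION_CREDENTIALS="/etc/thanos/gcs-service-account.json" \
thanos sidecar --gcs.bucket="my-thanos-bucket"
```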
- For deployment: the `Storage Object Creator` and `Storage Object Viewer` roles are sufficient.
- For testing: the `Storage Object Admin` role, for the ability to create and delete temporary buckets.
## Other S3-compatible object storages

The minio client used for AWS S3 can potentially be configured against other S3-compatible object storages.
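For instance, a hypothetical self-hosted Minio endpoint could be targeted with the same S3 flags (all values are placeholders; whether you need `--s3.insecure` or `--s3.signature-version2` depends on the service):

```
S3_SECRET_KEY="<secret-key>" thanos sidecar \
  --s3.bucket="my-bucket" \
  --s3.endpoint="minio.example.com:9000" \
  --s3.access-key="<access-key>" \
  --s3.insecure
```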