Use Tiered Storage in Kubernetes

This feature requires an Enterprise license for self-managed deployments. To upgrade, contact Redpanda sales.

Tiered Storage helps lower storage costs by offloading log segments to object storage. You specify how much data to retain in local storage. You don’t need to specify which log segments to move, because Redpanda moves them automatically based on cluster-level configuration properties. Tiered Storage indexes where data is offloaded, so it can look up the data when you need it.

The following image illustrates the Tiered Storage architecture: remote write uploads data from Redpanda to object storage, and remote read fetches data from object storage to Redpanda.

Tiered Storage architecture

When upgrading Redpanda, uploads to object storage are paused until all brokers in the cluster are upgraded. If the cluster gets stuck while upgrading, roll it back to the original version. In a mixed-version state, the cluster could run out of disk space. If you need to force a mixed-version cluster to upload, move partition leadership to brokers running the original version.

Prerequisites

This feature requires an Enterprise license for self-managed deployments. To upgrade, contact Redpanda sales.

To check if you already have a license key applied to your cluster:

rpk cluster license info

Limitations

  • Migrating topics from one object storage provider to another is not supported.

  • Migrating topics from one bucket or container to another is not supported.

Redpanda strongly recommends that you do not re-enable Tiered Storage after previously enabling and disabling it. Re-enabling Tiered Storage can result in inconsistent data and data gaps in Tiered Storage for a topic.

Set up Tiered Storage

To set up Tiered Storage:

  1. Configure object storage.

  2. Enable Tiered Storage. You can enable Tiered Storage for the cluster (all topics) or for specific topics.

  3. Set retention limits.

Configure object storage

Redpanda natively supports Tiered Storage with Amazon Simple Storage Service (S3), Google Cloud Storage (GCS, which uses the Google Cloud Platform S3 API), and Microsoft Azure Blob Storage (ABS) and Azure Data Lake Storage (ADLS).

Amazon S3

If deploying Redpanda on an AWS Auto Scaling group (ASG), keep in mind that the ASG controller terminates nodes and spins up replacements if a node saturates and fails to respond to the controller’s heartbeat (based on the EC2 health check). For more information, see the AWS documentation. Redpanda recommends deploying on Linux or Kubernetes. For more information, see Deploy Redpanda.

You can configure access to Amazon S3 with either an IAM role attached to the instance or with access keys.

If you need to manage and store encryption keys separately from your cloud provider, you can configure access to an AWS S3 bucket that Redpanda Tiered Storage uses to leverage your AWS KMS key (SSE-KMS) instead of the default AWS S3-managed key (SSE-S3). This option enables you to segregate data from different teams or departments and remove that data at will by removing the encryption keys.

Configure access with an IAM role
  1. Configure an IAM role.

  2. Override the following required cluster properties in the Helm chart:

    • Helm + Operator

    • Helm

    redpanda-cluster.yaml
    apiVersion: cluster.redpanda.com/v1alpha2
    kind: Redpanda
    metadata:
      name: redpanda
    spec:
      chartRef: {}
      clusterSpec:
        storage:
          tiered:
            config:
              cloud_storage_enabled: "true"
              cloud_storage_credentials_source: aws_instance_metadata
              cloud_storage_region: <region>
              cloud_storage_bucket: <redpanda-bucket-name>
    kubectl apply -f redpanda-cluster.yaml --namespace <namespace>
    • --values

    • --set

    cloud-storage.yaml
    storage:
      tiered:
        config:
          cloud_storage_enabled: true
          cloud_storage_credentials_source: aws_instance_metadata
          cloud_storage_region: <region>
          cloud_storage_bucket: <redpanda-bucket-name>
    helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
    --values cloud-storage.yaml
    helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
      --set storage.tiered.config.cloud_storage_enabled=true \
      --set storage.tiered.config.cloud_storage_credentials_source=aws_instance_metadata \
      --set storage.tiered.config.cloud_storage_region=<region> \
      --set storage.tiered.config.cloud_storage_bucket=<redpanda-bucket-name>

    Replace the following placeholders:

    • <region>: The region of your S3 bucket.

    • <redpanda-bucket-name>: The name of your S3 bucket.

      Do not set an object storage property to an empty string "" or to null as a way to reset it to its default value.
Configure access with access keys
  1. Grant an IAM user the following permissions to read and create objects in your buckets:

    • GetObject

    • DeleteObject

    • PutObject

    • PutObjectTagging

    • ListBucket

  2. Make a note of the access key and secret key.

  3. Create a Secret in which to store the access key and secret key.

    apiVersion: v1
    kind: Secret
    metadata:
      name: storage-secrets
      namespace: <namespace>
    type: Opaque
    data:
      access-key: <base64-encoded-access-key>
      secret-key: <base64-encoded-secret-key>
    • Replace <base64-encoded-access-key> with your base64-encoded access key.

    • Replace <base64-encoded-secret-key> with your base64-encoded secret key.
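    Alternatively, you can create the same Secret with kubectl, which base64-encodes literal values for you. This is a sketch; replace the placeholders with your keys:

    kubectl create secret generic storage-secrets --namespace <namespace> \
      --from-literal=access-key=<access-key> \
      --from-literal=secret-key=<secret-key>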

  4. Override the following required cluster properties:

    • Helm + Operator

    • Helm

    redpanda-cluster.yaml
    apiVersion: cluster.redpanda.com/v1alpha2
    kind: Redpanda
    metadata:
      name: redpanda
    spec:
      chartRef: {}
      clusterSpec:
        storage:
          tiered:
            credentialsSecretRef:
              accessKey:
                name: storage-secrets
                key: access-key
              secretKey:
                name: storage-secrets
                key: secret-key
            config:
              cloud_storage_enabled: "true"
              cloud_storage_credentials_source: config_file
              cloud_storage_region: <region>
              cloud_storage_bucket: <redpanda-bucket-name>
    kubectl apply -f redpanda-cluster.yaml --namespace <namespace>
    • --values

    • --set

    cloud-storage.yaml
    storage:
      tiered:
        credentialsSecretRef:
          accessKey:
            name: storage-secrets
            key: access-key
          secretKey:
            name: storage-secrets
            key: secret-key
        config:
          cloud_storage_enabled: true
          cloud_storage_credentials_source: config_file
          cloud_storage_region: <region>
          cloud_storage_bucket: <redpanda-bucket-name>
    helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
    --values cloud-storage.yaml
    helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
      --set storage.tiered.config.cloud_storage_enabled=true \
      --set storage.tiered.credentialsSecretRef.accessKey.name=storage-secrets \
      --set storage.tiered.credentialsSecretRef.accessKey.key=access-key \
      --set storage.tiered.credentialsSecretRef.secretKey.name=storage-secrets \
      --set storage.tiered.credentialsSecretRef.secretKey.key=secret-key \
      --set storage.tiered.config.cloud_storage_credentials_source=config_file \
      --set storage.tiered.config.cloud_storage_region=<region> \
      --set storage.tiered.config.cloud_storage_bucket=<redpanda-bucket-name>

    Replace the following placeholders:

    • <region>: The region of your S3 bucket.

    • <redpanda-bucket-name>: The name of your S3 bucket.

      Do not set an object storage property to an empty string "" or to null as a way to reset it to its default value.
Configure access with an AWS KMS key

When there are strict data compliance requirements and you must manage and store encryption keys separate from your cloud provider, you can configure an Amazon S3 bucket that Tiered Storage can use to leverage your customer-provided key (SSE-KMS) instead of the default AWS-managed key (SSE-S3).

To convert an existing S3 bucket and its contents, you must:

  1. Create a new KMS key.

  2. Configure the S3 bucket to use the new KMS key.

  3. (Optional) Re-encrypt existing objects to use the new KMS key.

You cannot configure a cloud provider-managed encryption key at the topic level.

For topic-level control, each CLI Get or Put for a partition must use the correct key as configured on the topic.

Prerequisites
  • The user configuring S3 bucket encryption must be assigned the Key admin permission. Without this permission, the user is unable to re-encrypt existing bucket objects to use the KMS key.

  • The S3 bucket must be assigned the Key user permission. Without this permission, Redpanda is unable to write new objects to Tiered Storage.

  • If you intend to retroactively re-encrypt existing data with the new KMS key, store the ARN identifier of the key upon creation. It is required for AWS CLI commands.

To create a new KMS key in the AWS Console:

  1. In AWS Console, search for “Key Management Service”.

  2. Click Create a key.

  3. On the Configure key page, select the Symmetric key type, then select Encrypt and decrypt.

  4. Click the Advanced options tab and configure Key material origin and Regionality as needed. For example, if you are using Remote Read Replicas or have Redpanda spanning across regions, select Multi-Region key.

  5. Click Next.

  6. On the Add labels page, specify an alias name and description for the key. Do not include sensitive information in these fields.

  7. Click Next.

  8. On the Define key administrative permissions page, specify a user who can administer this key through the KMS API.

  9. Click Next.

  10. On the Define key usage permissions page, assign the S3 bucket as a Key user. This is required for the S3 bucket to encrypt and decrypt.

  11. Click Next.

  12. Review your KMS key configuration and click Finish.

For more information, see the AWS documentation.

To configure the S3 bucket to use the new KMS key (and reduce KMS costs through caching):

  1. In AWS Console, search for "S3".

  2. Select the bucket used by Redpanda.

  3. Click the Properties tab.

  4. In Default encryption, click Edit.

  5. For Encryption type, select “Server-side encryption with AWS Key Management Service keys (SSE-KMS)”.

  6. Locate and select your AWS KMS key ARN identifier.

  7. Click Save changes.

(Optional) To re-encrypt existing data using the new KMS key:

Existing data in your S3 bucket continues to be read using the AWS-managed key, while new objects are encrypted using the new KMS key. If you wish to re-encrypt all S3 bucket data to use the KMS key, run:

aws s3 cp s3://{BUCKET_NAME}/ s3://{BUCKET_NAME}/ --recursive --sse-kms-key-id {KMS_KEY_ARN} --sse aws:kms
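To spot-check that an object now uses SSE-KMS, you can inspect its metadata with the AWS CLI (the object key is a placeholder). In the response, the ServerSideEncryption field should read aws:kms and SSEKMSKeyId should match your key ARN:

aws s3api head-object --bucket {BUCKET_NAME} --key {OBJECT_KEY}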

For more information, see the AWS documentation.

Google Cloud Storage

Configure access to Google Cloud Storage with an IAM role attached to the instance, with access keys, or with customer-managed keys.

Configure access with an IAM role
  1. Configure an IAM role.

  2. Override the following required cluster properties in the Helm chart:

    • Helm + Operator

    • Helm

    redpanda-cluster.yaml
    apiVersion: cluster.redpanda.com/v1alpha2
    kind: Redpanda
    metadata:
      name: redpanda
    spec:
      chartRef: {}
      clusterSpec:
        storage:
          tiered:
            config:
              cloud_storage_enabled: "true"
              cloud_storage_api_endpoint: storage.googleapis.com
              cloud_storage_credentials_source: gcp_instance_metadata
              cloud_storage_region: <region>
              cloud_storage_bucket: <redpanda-bucket-name>
    kubectl apply -f redpanda-cluster.yaml --namespace <namespace>
    • --values

    • --set

    cloud-storage.yaml
    storage:
      tiered:
        config:
          cloud_storage_enabled: true
          cloud_storage_api_endpoint: storage.googleapis.com
          cloud_storage_credentials_source: gcp_instance_metadata
          cloud_storage_region: <region>
          cloud_storage_bucket: <redpanda-bucket-name>
    helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
    --values cloud-storage.yaml
    helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
      --set storage.tiered.config.cloud_storage_enabled=true \
      --set storage.tiered.config.cloud_storage_api_endpoint=storage.googleapis.com \
      --set storage.tiered.config.cloud_storage_credentials_source=gcp_instance_metadata \
      --set storage.tiered.config.cloud_storage_region=<region> \
      --set storage.tiered.config.cloud_storage_bucket=<redpanda-bucket-name>

    Replace the following placeholders:

    • <region>: The region of your bucket.

    • <redpanda-bucket-name>: The name of your bucket.

Do not set an object storage property to an empty string "" or to null as a way to reset it to its default value.
Configure access with access keys

To configure access to Google Cloud Storage with access keys instead of an IAM role:

  1. Choose a uniform access control when you create the bucket.

  2. Use a Google-managed encryption key.

  3. Set a default project.

  4. Create a service user with HMAC keys and make a note of the access key and secret key.

  5. Create a Secret in which to store the access key and secret key.

    apiVersion: v1
    kind: Secret
    metadata:
      name: storage-secrets
      namespace: <namespace>
    type: Opaque
    data:
      access-key: <base64-encoded-access-key>
      secret-key: <base64-encoded-secret-key>
    • Replace <base64-encoded-access-key> with your base64-encoded access key.

    • Replace <base64-encoded-secret-key> with your base64-encoded secret key.

  6. Override the following required cluster properties in the Helm chart:

    • Helm + Operator

    • Helm

    redpanda-cluster.yaml
    apiVersion: cluster.redpanda.com/v1alpha2
    kind: Redpanda
    metadata:
      name: redpanda
    spec:
      chartRef: {}
      clusterSpec:
        storage:
          tiered:
            credentialsSecretRef:
              accessKey:
                name: storage-secrets
                key: access-key
              secretKey:
                name: storage-secrets
                key: secret-key
            config:
              cloud_storage_enabled: "true"
              cloud_storage_credentials_source: config_file
              cloud_storage_api_endpoint: storage.googleapis.com
              cloud_storage_region: <region>
              cloud_storage_bucket: <redpanda-bucket-name>
    kubectl apply -f redpanda-cluster.yaml --namespace <namespace>
    • --values

    • --set

    cloud-storage.yaml
    storage:
      tiered:
        credentialsSecretRef:
          accessKey:
            name: storage-secrets
            key: access-key
          secretKey:
            name: storage-secrets
            key: secret-key
        config:
          cloud_storage_enabled: true
          cloud_storage_credentials_source: config_file
          cloud_storage_api_endpoint: storage.googleapis.com
          cloud_storage_region: <region>
          cloud_storage_bucket: <redpanda-bucket-name>
    helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
    --values cloud-storage.yaml
    helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
      --set storage.tiered.config.cloud_storage_enabled=true \
      --set storage.tiered.credentialsSecretRef.accessKey.name=storage-secrets \
      --set storage.tiered.credentialsSecretRef.accessKey.key=access-key \
      --set storage.tiered.credentialsSecretRef.secretKey.name=storage-secrets \
      --set storage.tiered.credentialsSecretRef.secretKey.key=secret-key \
      --set storage.tiered.config.cloud_storage_credentials_source=config_file \
      --set storage.tiered.config.cloud_storage_api_endpoint=storage.googleapis.com \
      --set storage.tiered.config.cloud_storage_region=<region> \
      --set storage.tiered.config.cloud_storage_bucket=<redpanda-bucket-name>

    Replace the following placeholders:

    • <region>: The region of your bucket.

    • <redpanda-bucket-name>: The name of your bucket.

      Do not set an object storage property to an empty string "" or to null as a way to reset it to its default value.
Configure access with a KMS key

To configure the Google Cloud bucket used by Redpanda Tiered Storage to leverage a customer-managed key using the Cloud Key Management Service API instead of the default Google-managed key, you must:

  1. Create a KMS key.

  2. Configure the bucket to use the KMS key.

  3. Optionally, re-encrypt existing data with the new KMS key.

To manage Google Cloud KMS using the command line, first install or upgrade to the latest version of Google Cloud CLI.

To create a KMS key:

  1. In Google Cloud Console, search for "Cloud Key Management Service API". Click Enable if the service is not already enabled.

  2. Using the Google Cloud CLI, create a new keyring in the region where the bucket used by Redpanda is located. Note that region is case sensitive.

    gcloud kms keyrings create "redpanda-keyring" --location="{REGION}"
  3. Create a new key for the keyring in the same region as the bucket:

    gcloud kms keys create "redpanda-key" \
      --location="{REGION}" \
      --keyring="redpanda-keyring" \
      --purpose="encryption"
  4. Get the key identifier:

    gcloud kms keys list \
      --location="REGION" \
      --keyring="redpanda-keyring"

    The result should look like the following. Be sure to store the name, as this is used to assign and manage the key. Use this as the {KEY_RESOURCE} placeholder in subsequent commands.

    NAME                                                                                    PURPOSE          ALGORITHM                    PROTECTION_LEVEL  LABELS  PRIMARY_ID  PRIMARY_STATE
    projects/{PROJECT_NAME}/locations/us/keyRings/redpanda-keyring/cryptoKeys/redpanda-key  ENCRYPT_DECRYPT  GOOGLE_SYMMETRIC_ENCRYPTION  SOFTWARE                  1           ENABLED

To configure the GCP bucket to use the KMS key:

  1. Assign the key to a service agent:

    gcloud storage service-agent \
      --project={PROJECT_ID_STORING_OBJECTS} \
      --authorize-cmek={KEY_RESOURCE}
  2. Set the bucket default encryption key to the KMS key:

    gcloud storage buckets update gs://{BUCKET_NAME} \
      --default-encryption-key={KEY_RESOURCE}

(Optional) To re-encrypt existing data using the new KMS key:

Existing data in the bucket continues to be read using the Google-managed key, while new objects are encrypted using the new KMS key. If you wish to re-encrypt all data in the bucket to use the KMS key, run:

gcloud storage objects update gs://{BUCKET_NAME}/ --recursive \
  --encryption-key={KEY_RESOURCE}
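
To confirm which key an object uses after re-encryption, you can describe it (the object name is a placeholder) and check the KMS key field in the output:

gcloud storage objects describe gs://{BUCKET_NAME}/{OBJECT_NAME}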

Microsoft ABS/ADLS

You can configure access to Azure Blob Storage with either account access keys or Azure managed identities. Account access keys are static credentials, so they require manual management and vigilant security practices to prevent breaches. In contrast, managed identities provide a more secure and maintenance-free solution by automating credential management and rotation, though they are exclusive to the Azure ecosystem.

Starting in release 23.2.8, Redpanda supports storage accounts configured for ADLS Gen2 with hierarchical namespaces enabled. For hierarchical namespaces created with a custom endpoint, set cloud_storage_azure_adls_endpoint and cloud_storage_azure_adls_port. If you haven’t configured custom endpoints in Azure, there’s no need to edit these properties.
Configure access with a managed identity
  1. Configure an Azure managed identity.

  2. Override the following required cluster properties in the Helm chart:

    • Helm + Operator

    • Helm

    redpanda-cluster.yaml
    apiVersion: cluster.redpanda.com/v1alpha2
    kind: Redpanda
    metadata:
      name: redpanda
    spec:
      chartRef: {}
      clusterSpec:
        storage:
          tiered:
            config:
              cloud_storage_enabled: "true"
              cloud_storage_credentials_source: azure_aks_oidc_federation
              cloud_storage_azure_storage_account: <account-name>
              cloud_storage_azure_container: <container-name>
        serviceAccount:
          create: true
          annotations:
            "azure.workload.identity/client-id": <managed-identity-client-id>
        statefulset:
          podTemplate:
            labels:
              "azure.workload.identity/use": "true"
    kubectl apply -f redpanda-cluster.yaml --namespace <namespace>
    • --values

    • --set

    cloud-storage.yaml
    storage:
      tiered:
        config:
          cloud_storage_enabled: true
          cloud_storage_credentials_source: azure_aks_oidc_federation
          cloud_storage_azure_storage_account: <account-name>
          cloud_storage_azure_container: <container-name>
    serviceAccount:
      create: true
      annotations:
        "azure.workload.identity/client-id": <managed-identity-client-id>
    statefulset:
      podTemplate:
        labels:
          "azure.workload.identity/use": "true"
    helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
    --values cloud-storage.yaml
    helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
      --set storage.tiered.config.cloud_storage_enabled=true \
      --set storage.tiered.config.cloud_storage_credentials_source=azure_aks_oidc_federation \
      --set storage.tiered.config.cloud_storage_azure_storage_account=<account-name> \
      --set storage.tiered.config.cloud_storage_azure_container=<container-name> \
      --set serviceAccount.create=true \
      --set serviceAccount.annotations."azure\.workload\.identity/client-id"="<managed-identity-client-id>" \
      --set statefulset.podTemplate.labels."azure\.workload\.identity/use"="true"

    Replace the following placeholders:

    • <account-name>: The name of your Azure account.

    • <container-name>: The name of the Azure container in your Azure account.

    • <managed-identity-client-id>: The client ID for your Azure managed identity.

The serviceAccount annotations and the statefulset Pod labels are essential for the Azure webhook to inject the necessary Azure-specific environment variables and the projected service account token volume into the pods. For more information, visit Microsoft Entra Workload ID with Azure Kubernetes Service (AKS).
Do not set an object storage property to an empty string "" or to null as a way to reset it to its default value.
Configure access with account access keys
  1. Get an account access key for the storage account that contains the Azure container Redpanda will use. For information on how to view your account access keys, see the Azure documentation.

  2. Create a Secret in which to store the access key.

    apiVersion: v1
    kind: Secret
    metadata:
      name: storage-secrets
      namespace: <namespace>
    type: Opaque
    data:
      access-key: <base64-encoded-access-key>
    • Replace <base64-encoded-access-key> with your base64-encoded access key.

  3. Override the following required cluster properties in the Helm chart:

    • Helm + Operator

    • Helm

    redpanda-cluster.yaml
    apiVersion: cluster.redpanda.com/v1alpha2
    kind: Redpanda
    metadata:
      name: redpanda
    spec:
      chartRef: {}
      clusterSpec:
        storage:
          tiered:
            credentialsSecretRef:
              secretKey:
                configurationKey: cloud_storage_azure_shared_key
                name: storage-secrets
                key: access-key
            config:
              cloud_storage_enabled: "true"
              cloud_storage_azure_storage_account: <account-name>
              cloud_storage_azure_container: <container-name>
    kubectl apply -f redpanda-cluster.yaml --namespace <namespace>
    • --values

    • --set

    cloud-storage.yaml
    storage:
      tiered:
        credentialsSecretRef:
          secretKey:
            configurationKey: cloud_storage_azure_shared_key
            name: storage-secrets
            key: access-key
        config:
          cloud_storage_enabled: true
          cloud_storage_azure_storage_account: <account-name>
          cloud_storage_azure_container: <container-name>
    helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
    --values cloud-storage.yaml
    helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
      --set storage.tiered.config.cloud_storage_enabled=true \
      --set storage.tiered.credentialsSecretRef.secretKey.configurationKey=cloud_storage_azure_shared_key \
      --set storage.tiered.credentialsSecretRef.secretKey.name=storage-secrets \
      --set storage.tiered.credentialsSecretRef.secretKey.key=access-key \
      --set storage.tiered.config.cloud_storage_azure_storage_account=<account-name> \
      --set storage.tiered.config.cloud_storage_azure_container=<container-name>

    Replace the following placeholders:

    • <account-name>: The name of your Azure account.

    • <container-name>: The name of the Azure container in your Azure account.

      Do not set an object storage property to an empty string "" or to null as a way to reset it to its default value.
    • For information about how to grant access from an internet IP range (if you need to open additional routes/ports between your broker nodes and Azure Blob Storage; for example, in a hybrid cloud deployment), see the Microsoft documentation.

    • For more information about Shared Key authentication, see the Microsoft documentation.

For additional properties, see Tiered Storage configuration properties.

Enable Tiered Storage

  1. To enable Tiered Storage, set storage.tiered.config.cloud_storage_enabled to true.

  2. Configure topics for Tiered Storage. You can configure either all topics in a cluster or only specific topics.

When you enable Tiered Storage on a topic already containing data, Redpanda uploads any existing data in that topic on local storage to the object store bucket. It will start from the earliest offset available on local disk. Redpanda strongly recommends that you avoid repeatedly toggling remote write on and off, because this can result in inconsistent data and data gaps in Tiered Storage for a topic.

Enable Tiered Storage for a cluster

To enable Tiered Storage for a cluster (in addition to setting cloud_storage_enabled to true), set the following cluster-level properties to true:

  • cloud_storage_enable_remote_write

  • cloud_storage_enable_remote_read

When you enable Tiered Storage for a cluster, you enable it for all existing topics in the cluster. When cluster-level properties are changed, the changes apply only to new topics, not existing topics.

The cloud_storage_enable_remote_write and cloud_storage_enable_remote_read cluster-level properties are essentially creation-time defaults for the redpanda.remote.write and redpanda.remote.read topic-level properties.
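
For example, a minimal sketch using rpk to set both properties (in Kubernetes, you can also set them under storage.tiered.config in the Helm chart):

rpk cluster config set cloud_storage_enable_remote_write true
rpk cluster config set cloud_storage_enable_remote_read true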

Enable Tiered Storage for specific topics

To enable Tiered Storage for a new or existing topic (in addition to setting cloud_storage_enabled to true), set the following topic-level properties to true:

  • redpanda.remote.write

  • redpanda.remote.read

For example, to create a new topic with Tiered Storage:

rpk topic create <topic_name> -c redpanda.remote.read=true -c redpanda.remote.write=true

To enable Tiered Storage on an existing topic, run:

rpk topic alter-config <topic_name> --set redpanda.remote.read=true --set redpanda.remote.write=true
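
To verify the settings on a topic, you can print its configuration. This is a sketch; the -c flag prints only the topic configs:

rpk topic describe <topic_name> -c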

Topic-level properties override cluster-level properties. For example, for new topics, if cloud_storage_enable_remote_write is set to true, you can set redpanda.remote.write to false to turn it off for a particular topic.

Tiered Storage topic-level properties:

Property Description

redpanda.remote.write

Uploads data from Redpanda to object storage. Overrides the cluster-level cloud_storage_enable_remote_write configuration for the topic.

redpanda.remote.read

Fetches data from object storage to Redpanda. Overrides the cluster-level cloud_storage_enable_remote_read configuration for the topic.

redpanda.remote.recovery

Recovers or reproduces a topic from object storage. Use this property during topic creation. It does not apply to existing topics.

redpanda.remote.delete

When set to true, deleting a topic also deletes its objects in object storage. Both redpanda.remote.write and redpanda.remote.read must be enabled, and the topic must not be a Remote Read Replica topic.

When set to false, deleting a topic does not delete its objects in object storage.

Default is true for new topics.
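
For example, to keep a topic’s objects in object storage after the topic is deleted, you could disable remote delete before deleting the topic (the topic name is a placeholder):

rpk topic alter-config <topic_name> --set redpanda.remote.delete=false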

The following tables list outcomes for combinations of cluster-level and topic-level configurations:

Cluster-level cloud_storage_enable_remote_write   Topic-level redpanda.remote.write   Remote write on the topic
true                                              Not set                             Enabled
true                                              false                               Disabled
true                                              true                                Enabled
false                                             Not set                             Disabled
false                                             false                               Disabled
false                                             true                                Enabled

Cluster-level cloud_storage_enable_remote_read    Topic-level redpanda.remote.read    Remote read on the topic
true                                              Not set                             Enabled
true                                              false                               Disabled
true                                              true                                Enabled
false                                             Not set                             Disabled
false                                             false                               Disabled
false                                             true                                Enabled

Set retention limits

Redpanda supports retention limits and compaction for topics using Tiered Storage. Set retention limits to purge topic data after it reaches a specified age or size.

Starting in Redpanda version 22.3, object storage is the default storage tier for all streaming data, and retention properties work the same for Tiered Storage topics and local storage topics. Data is retained in object storage until it reaches the configured time or size limit.

Data becomes eligible for deletion from object storage following retention.ms or retention.bytes. For example, if retention.bytes is set to 10 GiB, then every partition in the topic has a limit of 10 GiB in object storage. When retention.bytes is exceeded by data in object storage, the data in object storage is trimmed. If neither retention.ms nor retention.bytes is specified, then cluster-level defaults are used.

  • During upgrade, Redpanda preserves retention settings for existing topics.

  • Both size-based and time-based retention policies are applied simultaneously, so it’s possible for your size-based property to override your time-based property, or vice versa. For example, if your size-based property requires removing one segment, and your time-based property requires removing three segments, then three segments are removed. Size-based properties reclaim disk space as close as possible to the maximum size, without exceeding the limit.
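
For example, to apply the 10 GiB per-partition limit described above to a topic (the value is in bytes; the topic name is a placeholder):

rpk topic alter-config <topic_name> --set retention.bytes=10737418240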

Compacted topics in Tiered Storage

When you set cleanup.policy for a topic to compact, nothing gets deleted from object storage based on retention settings. When set to compact,delete, compacted segments are deleted from object storage based on retention.ms and retention.bytes.

For compacted topics, Redpanda compacts segments after they have been uploaded to object storage. Redpanda initially uploads all uncompacted segments. It then re-uploads the segments with compaction applied. It’s likely that some segments in object storage are not compacted, but the Tiered Storage read path can manage this.
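
For example, to combine compaction with retention-based deletion on an existing topic (the topic name is a placeholder):

rpk topic alter-config <topic_name> --set 'cleanup.policy=compact,delete'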

Manage local capacity for Tiered Storage topics

You can use properties to control retention of topic data in local storage. With Tiered Storage enabled, data in local storage expires after the limits set by the topic-level properties retention.local.target.ms or retention.local.target.bytes. (These properties are equivalent to retention.ms and retention.bytes without Tiered Storage.)

You can also use the cluster-level properties retention_local_target_ms_default and retention_local_target_bytes_default. Settings can depend on the size of your drive, how many partitions you have, and how much data you keep for each partition.

When set, Redpanda keeps actively used and sequential (next-segment) data in the local cache and aims to maintain this age of data in local storage. It purges data based on the space actually available on the local volume, avoiding disk-full situations when there is data skew.
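
For example, to target roughly one day of data on local disk for a topic (the value is in milliseconds; the topic name is a placeholder):

rpk topic alter-config <topic_name> --set retention.local.target.ms=86400000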

At topic creation with Tiered Storage enabled:

  • If retention.ms or retention.bytes is set, Redpanda initializes the retention.local.target.* properties.

  • If retention.local.target.ms or retention.local.target.bytes is set, Redpanda initializes the min(retention.bytes, retention.local.target.bytes) and max(retention.ms, retention.local.target.ms) properties.

  • If properties are not specified:

    • Starting in version 22.3, new topics use the default retention values of one day for local storage (retention_local_target_ms_default) and seven days for all storage, including object storage (delete_retention_ms).

    • Upgraded topics retain their historical defaults of infinite retention.

After topic configuration, if Tiered Storage was disabled and must be enabled, or was enabled and must be disabled, Redpanda uses the local retention properties set for the topic. It is strongly recommended that you do not re-enable Tiered Storage after previously enabling and disabling it. Re-enabling Tiered Storage can result in inconsistent data and data gaps in Tiered Storage for a topic.

See also: Space management

View space usage

Use rpk cluster logdirs describe to get details about Tiered Storage space usage in both object storage and local disk. The directories for object storage start with remote://<bucket_name>. For example:

rpk cluster logdirs describe

BROKER  DIR                              TOPIC               PARTITION  SIZE      ERROR
0       /home/redpanda/var/node0/data    monday              0          18406863
0       remote://data                    monday              0          60051220
1       /home/redpanda/var/node1/data    monday              0          22859882
1       remote://data                    monday              0          60051220
2       /home/redpanda/var/node2/data    monday              0          17169935
2       remote://data                    monday              0          60051220

Integration with space utilization tools

Third-party tools that query space utilization from the Redpanda cluster might not handle remote:// entries properly. Redpanda space usage is reported from each broker, but object storage is shared between brokers. Third-party tools could over-count storage and show unexpectedly high disk usage for Tiered Storage topics. In this situation, you can disable output of remote:// entries by setting kafka_enable_describe_log_dirs_remote_storage to false.
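
For example, using rpk:

rpk cluster config set kafka_enable_describe_log_dirs_remote_storage false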

Remote write

Remote write is the process that constantly uploads log segments to object storage. The process is created for each partition and runs on the leader broker of the partition. It only uploads the segments that contain offsets that are smaller than the last stable offset. This is the latest offset that the client can read.

To ensure all data is uploaded, you must enable remote write before any data is produced to the topic. If you enable remote write after data has been written to the topic, only the data that currently exists on disk based on local retention settings will be scheduled for uploading. Redpanda strongly recommends that you avoid repeatedly toggling remote write on and off, because this can result in inconsistent data and data gaps in Tiered Storage for a topic.

To enable Tiered Storage, use both remote write and remote read.

To create a topic with remote write enabled:

rpk topic create <topic_name> -c redpanda.remote.write=true

To enable remote write on an existing topic:

rpk topic alter-config <topic_name> --set redpanda.remote.write=true

If remote write is enabled, log segments are not deleted until they’re uploaded to object storage. Because of this, the log segments may exceed the configured retention period until they’re uploaded, so the topic might require more disk space. This prevents data loss if segments cannot be uploaded fast enough or if the retention period is very short.

To see the object storage status for a given topic:

rpk topic describe-storage <topic_name> --print-all

See the reference for a list of flags you can use to filter the command output.

A constant stream of data is necessary to build up the log segments and roll them into object storage. This upload process is asynchronous. You can monitor its status with the /metrics/vectorized_ntp_archiver_pending metric.

To see new log segments faster, you can edit the segment.bytes topic-level property. Or, you can edit the cloud_storage_segment_max_upload_interval_sec property, which caps how long Redpanda waits before uploading a segment to Tiered Storage, even if the local segment has not reached the segment.bytes size or the segment.ms age.

Starting in version 22.3, when you delete a topic in Redpanda, the data is also deleted in object storage. See Enable Tiered Storage for specific topics.

Idle timeout

You can configure Redpanda to start a remote write periodically. This is useful if the ingestion rate is low and the segments are kept open for long periods of time. You specify a number of seconds for the timeout, and if that time has passed since the previous write and the partition has new data, Redpanda starts a new write. This limits data loss in the event of catastrophic failure and adds a guarantee that you only lose the specified number of seconds of data.

Setting idle timeout to a very short interval can create a lot of small files, which can affect throughput. If you decide to set a value for idle timeout, start with 600 seconds, which prevents the creation of so many small files that throughput is affected when you recover the files.

Use the cloud_storage_segment_max_upload_interval_sec property to set the number of seconds for idle timeout. If this property is set to null, Redpanda uploads metadata to object storage, but the segment is not uploaded until it reaches the segment.bytes size.
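
For example, to start from the recommended 600-second interval (a sketch using rpk; in Kubernetes you can also set this under storage.tiered.config in the Helm chart):

rpk cluster config set cloud_storage_segment_max_upload_interval_sec 600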

Reconciliation

Reconciliation is a Redpanda process that monitors partitions and decides which partitions are uploaded on each broker to guarantee that data is uploaded only once. It runs periodically on every broker. It also balances the workload evenly between brokers.

Uploads to object storage always run on the broker that hosts the partition leader. Therefore, when partition leadership balancing occurs, Redpanda stops uploads to object storage from one broker and starts them on another.

Upload backlog controller

Remote write uses a proportional derivative (PD) controller to minimize the backlog size for the write. The backlog consists of data that has not been uploaded to object storage but must be uploaded eventually.

The upload backlog controller prevents Redpanda from running out of disk space. If redpanda.remote.write is set to true, Redpanda cannot evict log segments that have not been uploaded to object storage. If the remote write process cannot keep up with the amount of data that needs to be uploaded, the upload backlog controller increases the priority of the upload process. It periodically measures the size of the backlog and tunes the priority of the remote write process accordingly.

Data archiving

If you only enable remote write on a topic, you have a simple backup to object storage that you can use for recovery. In the event of a data center failure, data corruption, or cluster migration, you can recover your archived data from the cloud back to your cluster.

Configure data archiving

Data archiving requires a Tiered Storage configuration.

To recover a topic from object storage, use single topic recovery.

Stop data archiving

To cancel archiving jobs, disable remote write.

To delete archival data, adjust retention.ms or retention.bytes.

Remote read

Remote read fetches data from object storage using the Kafka API.

Without Tiered Storage, when data is evicted locally, it is no longer available. If the consumer starts consuming the partition from the beginning, the first offset is the smallest offset available locally. However, when Tiered Storage is enabled with the redpanda.remote.read and redpanda.remote.write properties, data is always uploaded to object storage before it’s deleted. This guarantees that data is always available either locally or remotely.

When data is available remotely and Tiered Storage is enabled, clients can consume data, even if the data is no longer stored locally.

To create a topic with remote read enabled:

rpk topic create <topic_name> -c redpanda.remote.read=true

To enable remote read on an existing topic:

rpk topic alter-config <topic_name> --set redpanda.remote.read=true

Caching

When a consumer fetches an offset range that isn’t available locally in the Redpanda data directory, Redpanda downloads remote segments from object storage. These downloaded segments are stored in the object storage cache.

Change the cache directory

By default, the cache directory is created in Redpanda’s data directory, but it can be placed anywhere in the system. For example, you might want to put the cache directory on a dedicated drive with cheaper storage. Use the storage.tiered.config.cloud_storage_cache_directory property on each broker to specify a different location for the cache directory. You must specify the full path.

To specify a different volume for the cache directory, use one of the following:

  • PersistentVolume

  • hostPath volume

PersistentVolume

A PersistentVolume is storage in the cluster that has been provisioned by an administrator or dynamically provisioned using StorageClasses. For details about PersistentVolumes, see the Kubernetes documentation.

You can configure the Helm chart to use a PersistentVolume for the cache directory with either a static provisioner or a dynamic provisioner.

A dynamic provisioner creates a PersistentVolume on demand for each Redpanda broker.

Managed Kubernetes platforms and cloud environments usually provide a dynamic provisioner. If you are running Kubernetes on-premises, make sure that you have a dynamic provisioner for your storage type.

  1. Make sure that you have at least one StorageClass in the cluster:

    kubectl get storageclass

    Example output, from a Google GKE cluster:

    NAME                 PROVISIONER            AGE
    standard (default)   kubernetes.io/gce-pd   1d

    This StorageClass is marked as the default, which means that this class is used to provision a PersistentVolume when the PersistentVolumeClaim doesn’t specify the StorageClass.

  2. Configure the Helm chart with your StorageClass:

    • To use your Kubernetes cluster’s default StorageClass, set storage.persistentVolume.storageClass to an empty string (""):

      • Helm + Operator

      • Helm

      redpanda-cluster.yaml
      apiVersion: cluster.redpanda.com/v1alpha2
      kind: Redpanda
      metadata:
        name: redpanda
      spec:
        chartRef: {}
        clusterSpec:
          storage:
            tiered:
              mountType: persistentVolume
              persistentVolume:
                storageClass: ""
              config:
                cloud_storage_cache_size: <max-size-for-volume>
                cloud_storage_cache_directory: <custom-cache-directory>
      kubectl apply -f redpanda-cluster.yaml --namespace <namespace>
      • --values

      • --set

      storageclass.yaml
      storage:
        tiered:
          mountType: persistentVolume
          persistentVolume:
            storageClass: ""
          config:
            cloud_storage_cache_size: <max-size-for-volume>
            cloud_storage_cache_directory: <custom-cache-directory>
      helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
        --values storageclass.yaml --reuse-values
      helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
        --set storage.tiered.mountType=persistentVolume \
        --set storage.tiered.persistentVolume.storageClass="" \
        --set storage.tiered.config.cloud_storage_cache_size=<max-size-for-volume> \
        --set storage.tiered.config.cloud_storage_cache_directory=<custom-cache-directory>
    • To use a specific StorageClass, set its name in the storage.tiered.persistentVolume.storageClass configuration:

      • Helm + Operator

      • Helm

      redpanda-cluster.yaml
      apiVersion: cluster.redpanda.com/v1alpha2
      kind: Redpanda
      metadata:
        name: redpanda
      spec:
        chartRef: {}
        clusterSpec:
          storage:
            tiered:
              mountType: persistentVolume
              persistentVolume:
                storageClass: "<storage-class>"
              config:
                cloud_storage_cache_size: <max-size-for-volume>
                cloud_storage_cache_directory: <custom-cache-directory>
      kubectl apply -f redpanda-cluster.yaml --namespace <namespace>
      • --values

      • --set

      storageclass.yaml
      storage:
        tiered:
          mountType: persistentVolume
          persistentVolume:
            storageClass: "<storage-class>"
          config:
            cloud_storage_cache_size: <max-size-for-volume>
            cloud_storage_cache_directory: <custom-cache-directory>
      helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
        --values storageclass.yaml --reuse-values
      helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
        --set storage.tiered.mountType=persistentVolume \
        --set storage.tiered.persistentVolume.storageClass="<storage-class>" \
        --set storage.tiered.config.cloud_storage_cache_size=<max-size-for-volume> \
        --set storage.tiered.config.cloud_storage_cache_directory=<custom-cache-directory>
Do not set an object storage property to an empty string "" or to null as a way to reset it to its default value.

When you use a static provisioner, an existing PersistentVolume in the cluster is selected and bound to one PersistentVolumeClaim for each Redpanda broker.

  1. Create one PersistentVolume for each Redpanda broker. Make sure to create PersistentVolumes with a capacity of at least the value of the storage.tiered.config.cloud_storage_cache_size configuration.

  2. Set the storage.tiered.persistentVolume.storageClass to a dash ("-") to use a PersistentVolume with a static provisioner:

    • Helm + Operator

    • Helm

    redpanda-cluster.yaml
    apiVersion: cluster.redpanda.com/v1alpha2
    kind: Redpanda
    metadata:
      name: redpanda
    spec:
      chartRef: {}
      clusterSpec:
        storage:
          tiered:
            mountType: persistentVolume
            persistentVolume:
              storageClass: "-"
            config:
              cloud_storage_cache_size: <max-size-for-volume>
              cloud_storage_cache_directory: <custom-cache-directory>
    kubectl apply -f redpanda-cluster.yaml --namespace <namespace>
    • --values

    • --set

    storageclass.yaml
    storage:
      tiered:
        mountType: persistentVolume
        persistentVolume:
          storageClass: "-"
        config:
          cloud_storage_cache_size: <max-size-for-volume>
          cloud_storage_cache_directory: <custom-cache-directory>
    helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
      --values storageclass.yaml --reuse-values
    helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
      --set storage.tiered.mountType=persistentVolume \
      --set storage.tiered.persistentVolume.storageClass="-" \
      --set storage.tiered.config.cloud_storage_cache_size=<max-size-for-volume> \
      --set storage.tiered.config.cloud_storage_cache_directory=<custom-cache-directory>
Do not set an object storage property to an empty string "" or to null as a way to reset it to its default value.

hostPath

A hostPath volume mounts a file or directory from the host node’s file system into your Pod. For details about hostPath volumes, see the Kubernetes documentation.

To use a hostPath volume for the cache directory:

  1. Set the storage.tiered.mountType configuration to hostPath.

  2. Set the storage.tiered.hostPath configuration to the absolute path of a directory on the local worker node.

  3. Set statefulset.initContainers.setDataDirOwnership.enabled to true.

Pods that run Redpanda brokers must have read/write access to their data directories. The initContainer is responsible for setting write permissions on the data directories. By default, statefulset.initContainers.setDataDirOwnership is disabled because most storage drivers call SetVolumeOwnership to give Redpanda permissions to the root of the storage mount. However, some storage drivers, such as hostPath, do not call SetVolumeOwnership. In this case, you must enable the initContainer to set the permissions.

To set permissions on the data directories, the initContainer must run as root. However, be aware that an initContainer running as root can introduce the following security risks:

  • Privilege escalation: If attackers gain access to the initContainer, they can escalate privileges to gain full control over the system. For example, attackers could use the initContainer to gain unauthorized access to sensitive data, tamper with the system, or launch denial-of-service attacks.

  • Container breakouts: If the container is misconfigured or the container runtime has a vulnerability, attackers could escape from the initContainer and access the host operating system.

  • Image tampering: If attackers gain access to the container image of the initContainer, they could add malicious code or backdoors to it. Image tampering could compromise the security of the entire cluster.

  • Helm + Operator

  • Helm

redpanda-cluster.yaml
apiVersion: cluster.redpanda.com/v1alpha2
kind: Redpanda
metadata:
  name: redpanda
spec:
  chartRef: {}
  clusterSpec:
    storage:
      tiered:
        mountType: hostPath
        hostPath: "<absolute-path>"
        config:
          cloud_storage_cache_size: <max-size-for-volume>
          cloud_storage_cache_directory: <custom-cache-directory>
kubectl apply -f redpanda-cluster.yaml --namespace <namespace>
  • --values

  • --set

hostpath.yaml
storage:
  tiered:
    mountType: hostPath
    hostPath: "<absolute-path>"
    config:
      cloud_storage_cache_size: <max-size-for-volume>
      cloud_storage_cache_directory: <custom-cache-directory>
helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
  --values hostpath.yaml --reuse-values
helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
  --set storage.tiered.mountType=hostPath \
  --set storage.tiered.hostPath="<absolute-path>" \
  --set storage.tiered.config.cloud_storage_cache_size=<max-size-for-volume> \
  --set storage.tiered.config.cloud_storage_cache_directory=<custom-cache-directory>
Do not set an object storage property to an empty string "" or to null as a way to reset it to its default value.

Set a maximum cache size

To ensure that the cache does not grow uncontrollably, which could lead to performance issues or disk space exhaustion, you can control the maximum size of the cache.

Redpanda checks the cache periodically according to the interval set in storage.tiered.config.cloud_storage_cache_check_interval_ms. If the size of the stored data exceeds the configured limit, the eviction process starts. This process removes segments that haven’t been accessed recently until the cache size drops to the target level.

Related properties:

  • storage.tiered.config.cloud_storage_cache_size

  • storage.tiered.config.cloud_storage_cache_size_percent

  • storage.tiered.config.cloud_storage_cache_check_interval_ms

Recommendation: By default, cloud_storage_cache_size_percent is tuned for a shared disk configuration. If you are using a dedicated cache disk, consider increasing this value.
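
For example, to raise the percentage limit for a dedicated cache disk through the Helm chart (a sketch; choose a percentage appropriate for your disk):

helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
  --set storage.tiered.config.cloud_storage_cache_size_percent=<percentage>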

Cache trimming

Cache trimming helps to balance optimal cache use with the need to avoid blocking reads due to a full cache. The trimming process is triggered when the cache exceeds certain thresholds relative to the maximum cache size.

Related properties:

  • storage.tiered.config.cloud_storage_cache_trim_threshold_percent_size

  • storage.tiered.config.cloud_storage_cache_trim_threshold_percent_objects

Recommendations:

  • A threshold of 70% is recommended for most use cases. This percentage helps balance optimal cache use and ensures the cache has enough free space to handle sudden spikes in data without blocking reads. For example, setting cloud_storage_cache_trim_threshold_percent_size to 80% means that the cache trimming process starts when the cache takes up 80% of the maximum cache size.

  • Monitor the behavior of your cache and the performance of your Redpanda cluster. If reads are taking longer than expected or if you encounter timeout errors, your cache may be filling up too quickly. In these cases, consider lowering the thresholds to trigger trimming sooner.

The lower you set the threshold, the earlier the trimming starts, but it can also waste cache space. A higher threshold uses more cache space efficiently, but it risks blocking reads if the cache fills up too quickly. Adjust the settings based on your workload and monitor the cache performance to find the right balance for your environment.
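
For example, to start trimming at the recommended 70% of the maximum cache size (a sketch using the Helm chart):

helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
  --set storage.tiered.config.cloud_storage_cache_trim_threshold_percent_size=70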

Chunk remote segments

To support more concurrent consumers of historical data with less local storage, Redpanda can download small chunks of remote segments to the cache directory. For example, when a client fetch request spans a subsection of a 1 GiB segment, instead of downloading the entire 1 GiB segment, Redpanda can download 16 MiB chunks that contain just enough data required to fulfill the fetch request. Use the storage.tiered.config.cloud_storage_cache_chunk_size property to define the size of the chunks.
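
For example, to set the chunk size explicitly to its 16 MiB default (the value is in bytes; a sketch using the Helm chart):

helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
  --set storage.tiered.config.cloud_storage_cache_chunk_size=16777216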

The paths on disk to a chunk are structured as p_chunks/{chunk_start_offset}, where p is the original path to the segment in the object storage cache. The _chunks/ subdirectory holds chunk files identified by the chunk start offset. These files can be reclaimed by the cache eviction process during the normal eviction path.

Chunk eviction strategies

Selecting an appropriate chunk eviction strategy helps manage cache space effectively. A chunk that isn’t shared with any data source can be evicted from the cache, so space is returned to disk. Use the storage.tiered.config.cloud_storage_chunk_eviction_strategy property to change the eviction strategy. The strategies are:

  • eager (default): Evicts chunks that aren’t shared with other data sources. Eviction is fast, because no sorting is involved.

  • capped: Evicts chunks until the number of hydrated chunks is at or below the maximum number of hydrated chunks allowed at a time. This limit applies to each segment and is calculated from cloud_storage_hydrated_chunks_per_segment_ratio for the remote segment. Eviction is fastest, because no sorting is involved, and the process stops after the cap is reached.

  • predictive: Uses statistics from readers to determine which chunks to evict. Chunks that aren’t in use are sorted by the count of readers that will use the chunk in the future. The counts are populated by readers using the chunk data source. The chunks that are least expensive to re-hydrate are then evicted, taking into account the maximum hydrated chunk count. Eviction is slowest, because chunks are sorted before evicting them.

Recommendation: For general use, the eager strategy is recommended due to its speed. For workloads with specific access patterns, the predictive strategy may offer better cache efficiency.
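
For example, to switch to the predictive strategy (a sketch using the Helm chart):

helm upgrade --install redpanda redpanda/redpanda --namespace <namespace> --create-namespace \
  --set storage.tiered.config.cloud_storage_chunk_eviction_strategy=predictive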

Caching and chunking properties

Use the following cluster-level properties to set the maximum cache size, the cache check interval, and chunking behavior.

Property Description

storage.tiered.config.cloud_storage_cache_check_interval_ms

The time, in milliseconds, between cache checks. The size of the cache can grow quickly, so it’s important to have a small interval between checks. However, if the checks are too frequent, they consume a lot of resources. Default is 30000 ms (30 sec).

storage.tiered.config.cloud_storage_cache_chunk_size

The size of a chunk downloaded into the object storage cache. Default is 16 MiB.

storage.tiered.config.cloud_storage_cache_directory

The directory where the cache archive is stored.

storage.tiered.config.cloud_storage_cache_max_objects

Maximum number of objects that may be held in the Tiered Storage cache. This applies simultaneously with cloud_storage_cache_size, and whichever limit is hit first will trigger trimming of the cache.

storage.tiered.config.cloud_storage_cache_num_buckets

Divide the object storage cache across the specified number of buckets. This only works for objects with randomized prefixes. When the value is set to zero, object names are not changed.

storage.tiered.config.cloud_storage_cache_size_percent

Maximum size (as a percentage) of the disk cache used by Tiered Storage.
If both this property and cloud_storage_cache_size are set, Redpanda uses the minimum of the two.

storage.tiered.config.cloud_storage_cache_size

Maximum size (in bytes) of the disk cache used by Tiered Storage.
If both this property and cloud_storage_cache_size_percent are set, Redpanda uses the minimum of the two.

storage.tiered.config.cloud_storage_cache_trim_carryover_bytes

The cache performs a recursive directory inspection during the cache trim. The information obtained during the inspection can be carried over to the next trim operation. This property sets a limit on the memory occupied by objects that can be carried over from one trim to the next, and it allows the cache to quickly unblock readers before starting the directory inspection.

storage.tiered.config.cloud_storage_cache_trim_threshold_percent_objects

Trigger cache trimming when the number of objects in the cache reaches this percentage relative to its maximum object count. If unset, the default behavior is to start trimming when the cache is full.

storage.tiered.config.cloud_storage_cache_trim_threshold_percent_size

Trigger cache trimming when the cache size reaches this percentage relative to its maximum capacity. If unset, the default behavior is to start trimming when the cache is full.

storage.tiered.config.cloud_storage_cache_trim_walk_concurrency

The maximum number of concurrent tasks launched for traversing the directory structure during cache trimming. A higher number allows cache trimming to run faster but can cause latency spikes due to increased pressure on the I/O subsystem and syscall threads.

storage.tiered.config.cloud_storage_chunk_eviction_strategy

Strategy for evicting unused cache chunks, either eager (default), capped, or predictive.

storage.tiered.config.cloud_storage_disable_chunk_reads

Flag to turn off chunk-based reads and enable full-segment downloads. Default is false.

storage.tiered.config.cloud_storage_hydrated_chunks_per_segment_ratio

The ratio of hydrated to non-hydrated chunks for each segment, where a current ratio above this value results in unused chunks being evicted. Default is 0.7.

storage.tiered.config.cloud_storage_min_chunks_per_segment_threshold

The threshold below which all chunks of a segment can be hydrated without eviction. If the number of chunks in a segment is below this threshold, the segment is small enough that all chunks in it can be hydrated at any given time. Default is 5.
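
As a concrete example, a cluster that caps the cache at 20 GiB and 100,000 objects could combine several of these properties, nested under spec.clusterSpec as in the earlier manifest (a sketch; the limits are illustrative):

storage:
  tiered:
    config:
      # Whichever limit is reached first triggers trimming.
      cloud_storage_cache_size: 21474836480   # 20 GiB, in bytes
      cloud_storage_cache_max_objects: 100000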

Retries and backoff

If the object storage provider replies with an error message that the server is busy, Redpanda retries the request. Redpanda may retry on other errors, depending on the object storage provider.

Redpanda always uses exponential backoff with cloud connections. You can configure the storage.tiered.config.cloud_storage_initial_backoff_ms property to set the time used as an initial backoff interval in the exponential backoff algorithm to handle an error. Default is 100 ms.
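
For example, to double the initial backoff interval, you can override the property nested under spec.clusterSpec as in the earlier manifest (a sketch; the value is illustrative):

storage:
  tiered:
    config:
      # Illustrative value: start exponential backoff at 200 ms
      # instead of the 100 ms default.
      cloud_storage_initial_backoff_ms: 200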

Transport

Tiered Storage creates a connection pool for each CPU that limits simultaneous connections to the object storage provider. It also uses persistent HTTP connections with a configurable maximum idle time. A custom S3 client is used to send and receive data.

For normal usage, you do not need to configure the transport properties. The Redpanda defaults are sufficient, and the certificates used to connect to the object storage client are available through public key infrastructure. Redpanda detects the location of the CA certificates automatically.

Redpanda uses the following properties to configure transport.

Property Description

storage.tiered.config.cloud_storage_max_connections

The maximum number of connections to object storage on a broker for each CPU. Remote read and remote write share the same pool of connections. This means that if a connection is used to upload a segment, it cannot be used to download another segment. If this value is too small, some workloads might starve for connections, which results in delayed uploads and downloads. If this value is too large, Redpanda tries to upload a lot of files at the same time and might overwhelm the system. Default is 20.

storage.tiered.config.cloud_storage_segment_upload_timeout_ms

Timeout for segment upload. Redpanda retries the upload after the timeout. Default is 30000 ms.

storage.tiered.config.cloud_storage_manifest_upload_timeout_ms

Timeout for manifest upload. Redpanda retries the upload after the timeout. Default is 10000 ms.

storage.tiered.config.cloud_storage_max_connection_idle_time_ms

The maximum idle time for persistent HTTP connections. This differs depending on the object storage provider. Default is 5000 ms, which is sufficient for most providers.

storage.tiered.config.cloud_storage_segment_max_upload_interval_sec

The number of seconds for idle timeout. If this property is empty, Redpanda uploads metadata to the object storage, but the segment is not uploaded until it reaches the segment.bytes size. By default, the property is empty.

storage.tiered.config.cloud_storage_trust_file

The public certificate used to validate the TLS connection to object storage. If this is empty, Redpanda uses your operating system’s CA cert pool.
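
If you do need to tune transport, for example to raise the per-CPU connection limit and extend the upload timeouts, an override might look like this, nested under spec.clusterSpec as in the earlier manifest (a sketch; all values are illustrative):

storage:
  tiered:
    config:
      cloud_storage_max_connections: 30               # default is 20
      cloud_storage_segment_upload_timeout_ms: 60000  # default is 30000
      cloud_storage_manifest_upload_timeout_ms: 20000 # default is 10000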

Object storage housekeeping

To improve performance and scalability, Redpanda performs object storage housekeeping when the system is idle. This housekeeping includes adjacent segment merging, which analyzes the data layout of the partition in object storage. If it finds a run of small segments, it can merge them and re-upload them as a single larger segment.

To determine when the system is idle for housekeeping, Redpanda continuously calculates object storage utilization using a sliding-window moving average. The width of the sliding window is defined by the storage.tiered.config.cloud_storage_idle_timeout_ms property, which has a default of 10 seconds. If the utilization (requests per second) drops below the threshold, then object storage is considered idle, and object storage housekeeping begins. The threshold is defined by the storage.tiered.config.cloud_storage_idle_threshold_rps property, which has a default of one request per second. With these defaults, object storage is considered idle if there were 10 or fewer object storage API requests during the last 10 seconds.

If the object storage API becomes active after housekeeping begins, then housekeeping is paused until it becomes idle again. If object storage is not idle for storage.tiered.config.cloud_storage_housekeeping_interval_ms, then housekeeping is forced to run until it completes. This guarantees that all housekeeping jobs are run once for every cloud_storage_housekeeping_interval_ms.
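
For example, to make idle detection less sensitive by widening the sliding window and raising the threshold, you can override both properties, nested under spec.clusterSpec as in the earlier manifest (a sketch; the values are illustrative):

storage:
  tiered:
    config:
      cloud_storage_idle_timeout_ms: 20000   # 20-second sliding window (default is 10000)
      cloud_storage_idle_threshold_rps: 2.0  # idle below 2 requests per second (default is 1)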

See also: Space management

Adjacent segment merging

By default, and as part of this object storage housekeeping, Redpanda runs adjacent segment merging on all segments in object storage that are smaller than the threshold. Two properties control the behavior of storage.tiered.config.cloud_storage_enable_segment_merging: storage.tiered.config.cloud_storage_segment_size_target and storage.tiered.config.cloud_storage_segment_size_min.

If the adjacent segment merging job finds a run of small segments, it can perform one of the following operations:

  • Merge and re-upload a segment with a size up to cloud_storage_segment_size_target.

  • Merge and re-upload a segment with a size less than cloud_storage_segment_size_min if there are no other options (for example, when the run of small segments is followed by a large segment).

  • Wait until new segments are added if the run is at the end of the partition.

Suppose the storage.tiered.config.cloud_storage_segment_max_upload_interval_sec property is set and the partition contains a large number of small segments. For example, if cloud_storage_segment_max_upload_interval_sec is set to 10 minutes and the produce rate is 1 MiB per minute, then Redpanda uploads a new 10 MiB segment every 10 minutes. If adjacent segment merging is enabled and cloud_storage_segment_size_target is set to 500 MiB, then every 50 segments are re-uploaded as one large 500 MiB segment. This doubles the amount of data that Redpanda uploads to object storage, but it also reduces the memory footprint of the partition, which results in better scalability because 98% less memory is needed to keep information about the uploaded segments.

Adjacent segment merging doesn’t apply to compacted topics, because compacted segments are re-uploaded after they’re compacted, which achieves the same result.
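
Continuing the worked example above, enabling adjacent segment merging with a 500 MiB target could look like this, nested under spec.clusterSpec as in the earlier manifest (a sketch; the sizes are illustrative and mirror the properties described in the configuration tables below):

storage:
  tiered:
    config:
      cloud_storage_enable_segment_merging: "true"
      cloud_storage_segment_size_target: 524288000  # 500 MiB, in bytes
      cloud_storage_segment_size_min: 52428800      # 50 MiB, in bytes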

Archived metadata

As data in object storage grows, the metadata for it grows. To support efficient long-term data retention, Redpanda splits the metadata in object storage, maintaining metadata of only recently-updated segments in memory or local disk, while safely archiving the remaining metadata in object storage and caching it locally on disk. Archived metadata is then loaded only when historical data is accessed. This allows Tiered Storage to handle partitions of virtually any size or retention length.

Metadata archive storage is configurable. The cloud_storage_target_manifest_size_bytes property sets the target size (in bytes) of the metadata archive in object storage. To access data in the archive, Redpanda uses a ListObjectsV2 API request to fetch a list of external manifests, which are used to access individual segments.
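
For example, to override the target archive size using the property named above, nested under spec.clusterSpec as in the earlier manifest (a sketch; the 128 KiB value is purely illustrative):

storage:
  tiered:
    config:
      # Purely illustrative value: target 128 KiB metadata archives.
      cloud_storage_target_manifest_size_bytes: 131072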

Tiered Storage configuration properties

The following list contains cluster-level configuration properties for Tiered Storage. Configure or verify these properties before you use Tiered Storage:

Property Description

storage.tiered.config.cloud_storage_enabled

Global property that enables Tiered Storage.

Set to true to enable Tiered Storage. Default is false.

storage.tiered.config.cloud_storage_region

Object storage region.

Required for AWS and GCS.

storage.tiered.config.cloud_storage_bucket

AWS or GCS bucket name.

Required for AWS and GCS.

storage.tiered.config.cloud_storage_credentials_source

Source of credentials used to connect to object storage, such as AWS or GCS instance metadata.

Required for AWS and GCS authentication with IAM roles.

storage.tiered.config.cloud_storage_access_key

AWS or GCS access key.

Required for AWS and GCS authentication with access keys.

storage.tiered.config.cloud_storage_secret_key

AWS or GCS secret key.

Required for AWS and GCS authentication with access keys.

storage.tiered.config.cloud_storage_api_endpoint

AWS or GCS API endpoint.

  • For AWS, this can be left blank. It’s generated automatically using the region and bucket.

  • For GCS, you must use storage.googleapis.com.

storage.tiered.config.cloud_storage_azure_container

Azure container name.

Required for ABS/ADLS.

storage.tiered.config.cloud_storage_azure_storage_account

Azure account name.

Required for ABS/ADLS.

storage.tiered.config.cloud_storage_azure_shared_key

Azure storage account access key.

Required for ABS/ADLS.

storage.tiered.config.cloud_storage_cache_size_percent

Maximum size (as a percentage) of the disk cache used by Tiered Storage.

If both this property and cloud_storage_cache_size are set, Redpanda uses the minimum of the two.

storage.tiered.config.cloud_storage_cache_size

Maximum size of the disk cache used by Tiered Storage.

If both this property and cloud_storage_cache_size_percent are set, Redpanda uses the minimum of the two.

storage.tiered.config.disk_reservation_percent

Amount of disk space (as a percentage) reserved for general system overhead.

Default is 20%.

storage.tiered.config.retention_local_target_capacity_bytes

Target size (in bytes) of the log data.

Default is not set (null).

storage.tiered.config.retention_local_target_capacity_percent

Target size (as a percentage) of the log data.

Default is the amount of remaining disk space after deducting cloud_storage_cache_size_percent and disk_reservation_percent.

storage.tiered.config.retention_local_strict

Allows the housekeeping process to remove data above the configured consumable retention. This means that data usage is allowed to expand to occupy more of the log data reservation.

Default is false.

storage.tiered.config.retention_local_trim_interval

Period at which disk usage is checked for disk pressure, where data is optionally trimmed to meet the target.

Default is 30 seconds.
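
Putting the required properties together, a minimal Amazon S3 configuration with access keys might look like this (a sketch; the region, bucket name, and keys are placeholders, and in practice you would typically source the keys from a Kubernetes Secret rather than inline values):

redpanda-cluster.yaml
apiVersion: cluster.redpanda.com/v1alpha2
kind: Redpanda
metadata:
  name: redpanda
spec:
  chartRef: {}
  clusterSpec:
    storage:
      tiered:
        config:
          cloud_storage_enabled: "true"
          cloud_storage_region: us-east-1          # placeholder region
          cloud_storage_bucket: my-redpanda-bucket # placeholder bucket
          cloud_storage_access_key: <access-key>   # placeholder
          cloud_storage_secret_key: <secret-key>   # placeholder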

In addition, you might want to change the following property for each broker:

Property Description

storage.tiered.config.cloud_storage_cache_directory

The directory for the Tiered Storage cache. You must specify the full path. Default is: <redpanda-data-directory>/cloud_storage_cache.

You may want to configure the following properties:

Property Description

storage.tiered.config.cloud_storage_max_connections

The maximum number of connections to object storage on a broker for each CPU. Remote read and remote write share the same pool of connections. This means that if a connection is used to upload a segment, it cannot be used to download another segment. If this value is too small, some workloads might starve for connections, which results in delayed uploads and downloads. If this value is too large, Redpanda tries to upload a lot of files at the same time and might overwhelm the system. Default is 20.

storage.tiered.config.initial_retention_local_target_bytes_default

The initial local retention size target for partitions of topics with Tiered Storage enabled. Default is null.

storage.tiered.config.initial_retention_local_target_ms_default

The initial local retention time target for partitions of topics with Tiered Storage enabled. Default is null.

storage.tiered.config.cloud_storage_initial_backoff_ms

The time, in milliseconds, for an initial backoff interval in the exponential backoff algorithm to handle an error. Default is 100 ms.

storage.tiered.config.cloud_storage_segment_upload_timeout_ms

Timeout for segment upload. Redpanda retries the upload after the timeout. Default is 30000 ms.

storage.tiered.config.cloud_storage_manifest_upload_timeout_ms

Timeout for manifest upload. Redpanda retries the upload after the timeout. Default is 10000 ms.

storage.tiered.config.cloud_storage_max_connection_idle_time_ms

The maximum idle time for persistent HTTP connections. Differs depending on the object storage provider. Default is 5000 ms, which is sufficient for most providers.

storage.tiered.config.cloud_storage_segment_max_upload_interval_sec

Sets the number of seconds for idle timeout. If this property is empty, Redpanda uploads metadata to the object storage, but the segment is not uploaded until it reaches the segment.bytes size. By default, the property is empty.

storage.tiered.config.cloud_storage_cache_check_interval_ms

The time, in milliseconds, between cache checks. The size of the cache can grow quickly, so it’s important to have a small interval between checks, but if the checks are too frequent, they consume a lot of resources. Default is 30000 ms.

storage.tiered.config.cloud_storage_idle_timeout_ms

The width of the sliding window for the moving average algorithm that calculates object storage utilization. Default is 10 seconds.

storage.tiered.config.cloud_storage_idle_threshold_rps

The utilization threshold for object storage housekeeping. With the default values, object storage is considered idle if there were 10 or fewer object storage API requests during the last 10 seconds. Default is 1 request per second.

storage.tiered.config.cloud_storage_enable_segment_merging

Enables adjacent segment merging on all segments in object storage that are smaller than the threshold. Two properties control this behavior: storage.tiered.config.cloud_storage_segment_size_target and storage.tiered.config.cloud_storage_segment_size_min. Default is enabled.

storage.tiered.config.cloud_storage_segment_size_target

The desired segment size in object storage. The default segment size is controlled by storage.tiered.config.log_segment_size and the segment.bytes topic configuration property. This property can be set to a value larger than this default segment size, but because that triggers a lot of segment reuploads, it’s not recommended.

storage.tiered.config.cloud_storage_segment_size_min

The smallest segment size in object storage that Redpanda keeps. Default is 50% of log segment size.

Under normal circumstances, you should not need to configure the following tunable properties:

Property Description

storage.tiered.config.cloud_storage_upload_ctrl_update_interval_ms

The recompute interval for the upload controller. Default is 60000 ms.

storage.tiered.config.cloud_storage_upload_ctrl_p_coeff

The proportional coefficient for the upload controller. Default is -2.

storage.tiered.config.cloud_storage_upload_ctrl_d_coeff

The derivative coefficient for the upload controller. Default is 0.

storage.tiered.config.cloud_storage_upload_ctrl_min_shares

The minimum number of I/O and CPU shares that the remote write process can use. Default is 100.

storage.tiered.config.cloud_storage_upload_ctrl_max_shares

The maximum number of I/O and CPU shares that the remote write process can use. Default is 1000.

storage.tiered.config.cloud_storage_disable_tls

Disables TLS encryption. Set to true if TLS termination is done by the proxy, such as HAProxy. Default is false.

storage.tiered.config.cloud_storage_api_endpoint_port

Overrides the default API endpoint port. Default is 443.

storage.tiered.config.cloud_storage_trust_file

The public certificate used to validate the TLS connection to object storage. If this is empty, Redpanda uses your operating system’s CA cert pool.

storage.tiered.config.cloud_storage_reconciliation_interval_ms

Deprecated.

The interval, in milliseconds, to reconcile partitions that need to be uploaded. A long reconciliation interval can result in a delayed reaction to topic creation, topic deletion, or leadership rebalancing events. A short reconciliation interval guarantees that new partitions are picked up quickly, but the process uses more resources. Default is 10000 ms.