Use Tiered Storage

Tiered Storage helps to lower storage costs by offloading log segments to object storage. You can specify the amount of storage you want to retain in local storage. You don’t need to specify which log segments you want to move, because Redpanda moves them automatically based on cluster-level configuration properties. Tiered Storage indexes where data is offloaded, so it can look up the data when you need it.

The following image illustrates the Tiered Storage architecture: remote write uploads data from Redpanda to object storage, and remote read fetches data from object storage to Redpanda.

Tiered Storage architecture

Prerequisites

This feature requires an enterprise license. To get a trial license key or extend your trial period, generate a new trial license key. To purchase a license, contact Redpanda Sales.

If Redpanda has enterprise features enabled and it cannot find a valid license, restrictions apply.

To check if you already have a license key applied to your cluster:

rpk cluster license info

Limitations

  • Migrating topics from one object storage provider to another is not supported.

  • Migrating topics from one bucket or container to another is not supported.

  • Multi-region buckets or containers are not supported.

Redpanda Data recommends that you do not re-enable Tiered Storage after previously enabling and disabling it. Re-enabling Tiered Storage can result in inconsistent data and data gaps in Tiered Storage for a topic.

Set up Tiered Storage

To set up Tiered Storage:

  1. Configure object storage.

  2. Enable Tiered Storage. You can enable Tiered Storage for the cluster (all topics) or for specific topics.

  3. Set retention limits.

Configure object storage

Redpanda natively supports Tiered Storage with Amazon Simple Storage Service (S3), Google Cloud Storage (GCS), which Redpanda accesses through the Google Cloud Platform S3 API, and Microsoft Azure Blob Storage (ABS) and Azure Data Lake Storage (ADLS).

  • Amazon S3

  • Google Cloud Storage

  • Microsoft ABS/ADLS

If you deploy Redpanda on an AWS Auto Scaling group (ASG), keep in mind that the ASG controller terminates a node and spins up a replacement if the node saturates and is unable to send heartbeats for the EC2 health check. For more information, see the AWS documentation. Redpanda recommends deploying on Linux or Kubernetes. For more information, see Deploy Redpanda.

Configure access to Amazon S3 with an IAM role attached to the instance, with access keys, or with customer-managed keys.

If you need to manage and store encryption keys separately from your cloud provider, you can configure the Amazon S3 bucket that Redpanda Tiered Storage uses to leverage your AWS KMS key (SSE-KMS) instead of the default Amazon S3-managed key (SSE-S3). This option enables you to segregate data from different teams or departments and remove that data at will by removing the encryption keys.

Configure access with an IAM role

  1. Configure an IAM role.

  2. Run the rpk cluster config edit command, then edit the following required properties:

    cloud_storage_enabled: true
    cloud_storage_credentials_source: aws_instance_metadata
    cloud_storage_region: <region>
    cloud_storage_bucket: <redpanda-bucket-name>

    Replace <placeholders> with your own values.

Do not set an object storage property to an empty string "" or to null as a way to reset it to its default value. To reset a property to its default value, run rpk cluster config force-reset <config-name> or remove that line from the cluster configuration with rpk cluster config edit.
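
For example, to reset cloud_storage_region (chosen here only for illustration) to its default value:

rpk cluster config force-reset cloud_storage_region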

Configure access with access keys

  1. Grant a user the following permissions to read and create objects on the bucket to be used with the cluster (or on all buckets): GetObject, DeleteObject, PutObject, PutObjectTagging, ListBucket.

  2. Copy the access key and secret key for the cloud_storage_access_key and cloud_storage_secret_key cluster properties.

  3. Run the rpk cluster config edit command, then edit the following required properties:

    cloud_storage_enabled: true
    cloud_storage_access_key: <access_key>
    cloud_storage_secret_key: <secret_key>
    cloud_storage_region: <region>
    cloud_storage_bucket: <redpanda-bucket-name>

    Replace <placeholders> with your own values.

    Do not set an object storage property to an empty string "" or to null as a way to reset it to its default value. To reset a property to its default value, run rpk cluster config force-reset <config-name> or remove that line from the cluster configuration with rpk cluster config edit.
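
    If you prefer to script these changes rather than use the interactive editor, you can set each property individually with rpk cluster config set, for example:

    rpk cluster config set cloud_storage_enabled true
    rpk cluster config set cloud_storage_access_key <access_key>
    rpk cluster config set cloud_storage_secret_key <secret_key>
    rpk cluster config set cloud_storage_region <region>
    rpk cluster config set cloud_storage_bucket <redpanda-bucket-name>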

Configure access with an AWS KMS key

When there are strict data compliance requirements and you must manage and store encryption keys separately from your cloud provider, you can configure an Amazon S3 bucket that Tiered Storage can use to leverage your customer-provided key (SSE-KMS) instead of the default AWS-managed key (SSE-S3).

To convert an existing S3 bucket and its contents, you must:

  1. Create a new KMS key.

  2. Configure the S3 bucket to use the new KMS key.

  3. (Optional) Re-encrypt existing objects to use the new KMS key.

You cannot configure a cloud provider-managed encryption key at the topic level.

For topic-level control, each Get or Put request for a partition must use the correct key as configured on the topic.

Prerequisites
  • The user configuring S3 bucket encryption must be assigned the Key admin permission. Without this permission, the user is unable to re-encrypt existing bucket objects to use the KMS key.

  • The S3 bucket must be assigned the Key user permission. Without this permission, Redpanda is unable to write new objects to Tiered Storage.

  • If you intend to retroactively re-encrypt existing data with the new KMS key, store the ARN identifier of the key upon creation. It is required for AWS CLI commands.

To create a new KMS key in the AWS Console:

  1. In AWS Console, search for “Key Management Service”.

  2. Click Create a key.

  3. On the Configure key page, select the Symmetric key type, then select Encrypt and decrypt.

  4. Click the Advanced options tab and configure Key material origin and Regionality as needed. For example, if you are using Remote Read Replicas or have Redpanda spanning across regions, select Multi-Region key.

  5. Click Next.

  6. On the Add labels page, specify an alias name and description for the key. Do not include sensitive information in these fields.

  7. Click Next.

  8. On the Define key administrative permissions page, specify a user who can administer this key through the KMS API.

  9. Click Next.

  10. On the Define key usage permissions page, assign the S3 bucket as a Key user. This is required for the S3 bucket to encrypt and decrypt.

  11. Click Next.

  12. Review your KMS key configuration and click Finish.

For more information, see the AWS documentation.
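
If you prefer the AWS CLI to the console, the following sketch creates an equivalent symmetric encryption key and an illustrative alias; store the returned key ARN for later steps:

aws kms create-key --description "redpanda-key"
aws kms create-alias --alias-name alias/redpanda-key --target-key-id {KEY_ID}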

To configure the S3 bucket to use the new KMS key (and reduce KMS costs through caching):

  1. In AWS Console, search for "S3".

  2. Select the bucket used by Redpanda.

  3. Click the Properties tab.

  4. In Default encryption, click Edit.

  5. For Encryption type, select “Server-side encryption with AWS Key Management Service keys (SSE-KMS)”.

  6. Locate and select your AWS KMS key ARN identifier.

  7. Click Save changes.
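
Alternatively, a sketch of the same bucket default using the AWS CLI; setting BucketKeyEnabled turns on the S3 Bucket Key, which provides the KMS-cost caching mentioned above:

aws s3api put-bucket-encryption \
  --bucket {BUCKET_NAME} \
  --server-side-encryption-configuration '{
    "Rules": [{
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "aws:kms",
        "KMSMasterKeyID": "{KMS_KEY_ARN}"
      },
      "BucketKeyEnabled": true
    }]
  }'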

(Optional) To re-encrypt existing data using the new KMS key:

Existing data in your S3 bucket continues to be read using the AWS-managed key, while new objects are encrypted using the new KMS key. If you wish to re-encrypt all S3 bucket data to use the KMS key, run:

aws s3 cp s3://{BUCKET_NAME}/ s3://{BUCKET_NAME}/ --recursive --sse-kms-key-id {KMS_KEY_ARN} --sse aws:kms

For more information, see the AWS documentation.

Configure access to Google Cloud Storage with an IAM role attached to the instance, with access keys, or with customer-managed keys.

Configure access with an IAM role

  1. Configure an IAM role.

  2. Run the rpk cluster config edit command, then edit the following required properties:

    cloud_storage_enabled: true
    cloud_storage_api_endpoint: storage.googleapis.com
    cloud_storage_credentials_source: gcp_instance_metadata
    cloud_storage_region: <region>
    cloud_storage_bucket: <redpanda-bucket-name>

    Replace <placeholders> with your own values.

    Do not set an object storage property to an empty string "" or to null as a way to reset it to its default value. To reset a property to its default value, run rpk cluster config force-reset <config-name> or remove that line from the cluster configuration with rpk cluster config edit.

Configure access with access keys

  1. Choose uniform access control when you create the bucket.

  2. Use a Google-managed encryption key.

  3. Set a default project.

  4. Create a service user with HMAC keys and copy the access key and secret key for the cloud_storage_access_key and cloud_storage_secret_key cluster properties.

  5. Run the rpk cluster config edit command, then edit the following required properties:

    cloud_storage_enabled: true
    cloud_storage_api_endpoint: storage.googleapis.com
    cloud_storage_access_key: <access_key>
    cloud_storage_secret_key: <secret_key>
    cloud_storage_bucket: <redpanda-bucket-name>
    cloud_storage_region: <region>

    Replace <placeholders> with your own values.

    Do not set an object storage property to an empty string "" or to null as a way to reset it to its default value. To reset a property to its default value, run rpk cluster config force-reset <config-name> or remove that line from the cluster configuration with rpk cluster config edit.
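
    As a sketch, assuming a service account named redpanda-sa (illustrative), you can create the HMAC keys for step 4 with the gcloud CLI; the command prints the access key and secret:

    gcloud storage hmac create redpanda-sa@{PROJECT_ID}.iam.gserviceaccount.com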

Configure access with a KMS key

To configure the Google Cloud bucket used by Redpanda Tiered Storage to leverage a customer-managed key using the Cloud Key Management Service API instead of the default Google-managed key, you must:

  1. Create a KMS key.

  2. Configure the bucket to use the KMS key.

  3. Optionally, re-encrypt existing data with the new KMS key.

To manage Google Cloud KMS using the command line, first install or upgrade to the latest version of Google Cloud CLI.

To create a KMS key:

  1. In Google Cloud Console, search for "Cloud Key Management Service API". Click Enable if the service is not already enabled.

  2. Using the Google Cloud CLI, create a new keyring in the region where the bucket used by Redpanda is located. Note that region is case sensitive.

    gcloud kms keyrings create "redpanda-keyring" --location="{REGION}"
  3. Create a new key for the keyring in the same region as the bucket:

    gcloud kms keys create "redpanda-key" \
      --location="{REGION}" \
      --keyring="redpanda-keyring" \
      --purpose="encryption"
  4. Get the key identifier:

    gcloud kms keys list \
      --location="REGION" \
      --keyring="redpanda-keyring"

    The result should look like the following. Be sure to store the name, as this is used to assign and manage the key. Use this as the {KEY_RESOURCE} placeholder in subsequent commands.

    NAME                                                                                    PURPOSE          ALGORITHM                    PROTECTION_LEVEL  LABELS  PRIMARY_ID  PRIMARY_STATE
    projects/{PROJECT_NAME}/locations/us/keyRings/redpanda-keyring/cryptoKeys/redpanda-key  ENCRYPT_DECRYPT  GOOGLE_SYMMETRIC_ENCRYPTION  SOFTWARE                  1           ENABLED

To configure the GCP bucket to use the KMS key:

  1. Assign the key to a service agent:

    gcloud storage service-agent \
      --project={PROJECT_ID_STORING_OBJECTS} \
      --authorize-cmek={KEY_RESOURCE}
  2. Set the bucket default encryption key to the KMS key:

    gcloud storage buckets update gs://{BUCKET_NAME} \
      --default-encryption-key={KEY_RESOURCE}

(Optional) To re-encrypt existing data using the new KMS key:

Existing data in the bucket continues to be read using the Google-managed key, while new objects are encrypted using the new KMS key. If you wish to re-encrypt all data in the bucket to use the KMS key, run:

gcloud storage objects update gs://{BUCKET_NAME}/ --recursive \
  --encryption-key={KEY_RESOURCE}
Configure access to Microsoft Azure Blob Storage (ABS) or Azure Data Lake Storage (ADLS) with either managed identities or account access keys.

Starting in release 23.2.8, Redpanda supports storage accounts configured for ADLS Gen2 with hierarchical namespaces enabled. For hierarchical namespaces created with a custom endpoint, set cloud_storage_azure_adls_endpoint and cloud_storage_azure_adls_port. If you haven’t configured custom endpoints in Azure, there’s no need to edit these properties.

Configure access with managed identities

  1. Configure an Azure managed identity.

    Note the minimum set of permissions required for Tiered Storage:

    • Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete

    • Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read

    • Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write

    • Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action

    • Microsoft.Storage/storageAccounts/fileServices/fileShares/files/write

    • Microsoft.Storage/storageAccounts/fileServices/writeFileBackupSemantics/action

  2. Run the rpk cluster config edit command, then edit the following required properties:

    cloud_storage_enabled: true
    cloud_storage_credentials_source: azure_vm_instance_metadata
    cloud_storage_azure_managed_identity_id: <managed-identity-client-id>
    cloud_storage_azure_storage_account: <storage-account-name>
    cloud_storage_azure_container: <container-name>

    Replace <placeholders> with your own values.
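
    As a sketch, assuming the Azure CLI and an illustrative identity name, you can create a managed identity and retrieve the client ID to use for cloud_storage_azure_managed_identity_id:

    az identity create --resource-group {RESOURCE_GROUP} --name redpanda-tiered-storage
    az identity show --resource-group {RESOURCE_GROUP} --name redpanda-tiered-storage --query clientId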

Configure access with account access keys

  1. Copy an account access key for the Azure container you want Redpanda to use and enter it in the cloud_storage_azure_shared_key property. For information on how to view your account access keys, see the Azure documentation.

  2. Run the rpk cluster config edit command, then edit the following required properties:

    cloud_storage_enabled: true
    cloud_storage_azure_shared_key: <azure_account_access_key>
    cloud_storage_azure_storage_account: <azure_account_name>
    cloud_storage_azure_container: <redpanda_container_name>

    Replace <placeholders> with your own values.

    For information about how to grant access from an internet IP range (if you need to open additional routes/ports between your broker nodes and ABS/ADLS; for example, in a hybrid cloud deployment), see the Microsoft documentation.

    For information about Shared Key authentication, see the Microsoft documentation.

    Do not set an object storage property to an empty string "" or to null as a way to reset it to its default value. To reset a property to its default value, run rpk cluster config force-reset <config-name> or remove that line from the cluster configuration with rpk cluster config edit.

For additional properties, see Tiered Storage configuration properties.

Enable Tiered Storage

  1. To enable Tiered Storage, set cloud_storage_enabled to true.

  2. Configure topics for Tiered Storage. You can configure either all topics in a cluster or only specific topics.

When you enable Tiered Storage on a topic that already contains data, Redpanda uploads any existing data for that topic from local storage to the object storage bucket, starting from the earliest offset available on local disk. Redpanda strongly recommends that you avoid repeatedly toggling remote write on and off, because this can result in inconsistent data and data gaps in Tiered Storage for a topic.

Enable Tiered Storage for a cluster

To enable Tiered Storage for a cluster (in addition to setting cloud_storage_enabled to true), set the following cluster-level properties to true:

  • cloud_storage_enable_remote_write

  • cloud_storage_enable_remote_read

When you enable Tiered Storage for a cluster, you enable it for all new topics created in the cluster. Changes to these cluster-level properties apply only to new topics, not existing topics. You must restart your cluster after enabling Tiered Storage.

The cloud_storage_enable_remote_write and cloud_storage_enable_remote_read cluster-level properties are essentially creation-time defaults for the redpanda.remote.write and redpanda.remote.read topic-level properties.
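
For example, to set these creation-time defaults with rpk:

rpk cluster config set cloud_storage_enable_remote_write true
rpk cluster config set cloud_storage_enable_remote_read true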

Enable Tiered Storage for specific topics

To enable Tiered Storage for a new or existing topic (in addition to setting cloud_storage_enabled to true), set the following topic-level properties to true:

  • redpanda.remote.write

  • redpanda.remote.read

For example, to create a new topic with Tiered Storage:

rpk topic create <topic_name> -c redpanda.remote.read=true -c redpanda.remote.write=true

To enable Tiered Storage on an existing topic, run:

rpk topic alter-config <topic_name> --set redpanda.remote.read=true --set redpanda.remote.write=true

Topic-level properties override cluster-level properties. For example, for new topics, if cloud_storage_enable_remote_write is set to true, you can set redpanda.remote.write to false to turn it off for a particular topic.
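
For example, to create a topic that opts out of a cluster-wide remote write default:

rpk topic create <topic_name> -c redpanda.remote.write=false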

Tiered Storage topic-level properties:

redpanda.remote.write

Uploads data from Redpanda to object storage. Overrides the cluster-level cloud_storage_enable_remote_write configuration for the topic.

redpanda.remote.read

Fetches data from object storage to Redpanda. Overrides the cluster-level cloud_storage_enable_remote_read configuration for the topic.

redpanda.remote.recovery

Recovers or reproduces a topic from object storage. Use this property during topic creation. It does not apply to existing topics.

redpanda.remote.delete

When set to true, deleting a topic also deletes its objects in object storage. Both redpanda.remote.write and redpanda.remote.read must be enabled, and the topic must not be a Remote Read Replica topic.

When set to false, deleting a topic does not delete its objects in object storage.

Default is true for new topics.

The following tables list outcomes for combinations of cluster-level and topic-level configurations:

Cluster-level cloud_storage_enable_remote_write   Topic-level redpanda.remote.write   Remote write on the topic
true                                               Not set                             Enabled
true                                               false                               Disabled
true                                               true                                Enabled
false                                              Not set                             Disabled
false                                              false                               Disabled
false                                              true                                Enabled

Cluster-level cloud_storage_enable_remote_read   Topic-level redpanda.remote.read   Remote read on the topic
true                                             Not set                            Enabled
true                                             false                              Disabled
true                                             true                               Enabled
false                                            Not set                            Disabled
false                                            false                              Disabled
false                                            true                               Enabled

Set retention limits

Redpanda supports retention limits and compaction for topics using Tiered Storage. Set retention limits to purge topic data after it reaches a specified age or size.

Starting in Redpanda version 22.3, object storage is the default storage tier for all streaming data, and retention properties work the same for Tiered Storage topics and local storage topics. Data is retained in object storage until it reaches the configured time or size limit.

Data becomes eligible for deletion from object storage following retention.ms or retention.bytes. For example, if retention.bytes is set to 10 GiB, then every partition in the topic has a limit of 10 GiB in object storage. When retention.bytes is exceeded by data in object storage, the data in object storage is trimmed. If neither retention.ms nor retention.bytes is specified, then cluster-level defaults are used.
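
For example, to apply the 10 GiB per-partition limit described above to a topic (10 GiB is 10737418240 bytes):

rpk topic alter-config <topic_name> --set retention.bytes=10737418240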

  • During upgrade, Redpanda preserves retention settings for existing topics.

  • Both size-based and time-based retention policies are applied simultaneously, so it’s possible for your size-based property to override your time-based property, or vice versa. For example, if your size-based property requires removing one segment, and your time-based property requires removing three segments, then three segments are removed. Size-based properties reclaim disk space as close as possible to the maximum size, without exceeding the limit.

Compacted topics in Tiered Storage

When you set cleanup.policy for a topic to compact, nothing gets deleted from object storage based on retention settings. When set to compact,delete, compacted segments are deleted from object storage based on retention.ms and retention.bytes.
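
For example, to set compaction with deletion on an existing topic:

rpk topic alter-config <topic_name> --set cleanup.policy=compact,delete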

For compacted topics, Redpanda compacts segments after they have been uploaded to object storage. Redpanda initially uploads all uncompacted segments. It then re-uploads the segments with compaction applied. It’s likely that some segments in object storage are not compacted, but the Tiered Storage read path can manage this.

Manage local capacity for Tiered Storage topics

You can use properties to control retention of topic data in local storage. With Tiered Storage enabled, data in local storage expires after the limits set by the topic-level properties retention.local.target.ms or retention.local.target.bytes. (These properties are equivalent to retention.ms and retention.bytes without Tiered Storage.)

You can also use the cluster-level properties retention_local_target_ms_default and retention_local_target_bytes_default. Settings can depend on the size of your drive, how many partitions you have, and how much data you keep for each partition.
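
For example, a sketch that keeps roughly one day of data locally for one topic and sets an illustrative cluster-wide default of 50 GiB (both values are assumptions, not recommendations):

rpk topic alter-config <topic_name> --set retention.local.target.ms=86400000
rpk cluster config set retention_local_target_bytes_default 53687091200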

When set, Redpanda keeps actively used and sequential (next segment) data in the local cache and aims to maintain this age of data in local storage. It purges data based on the actual available local volume space, avoiding disk-full situations when there is data skew.

At topic creation with Tiered Storage enabled:

  • If retention.ms or retention.bytes is set, Redpanda initializes the retention.local.target.* properties.

  • If retention.local.target.ms or retention.local.target.bytes is set, Redpanda initializes the properties to min(retention.bytes, retention.local.target.bytes) and max(retention.ms, retention.local.target.ms).

  • If properties are not specified:

    • Starting in version 22.3, new topics use the default retention values of one day for local storage (retention_local_target_ms_default) and seven days for all storage, including object storage (log_retention_ms).

    • Upgraded topics retain their historical defaults of infinite retention.

After topic configuration, if Tiered Storage was disabled and must be enabled, or was enabled and must be disabled, Redpanda uses the local retention properties set for the topic. It is strongly recommended that you do not re-enable Tiered Storage after previously enabling and disabling it. Re-enabling Tiered Storage can result in inconsistent data and data gaps in Tiered Storage for a topic.

See also: Space management

View space usage

Use rpk cluster logdirs describe to get details about Tiered Storage space usage in both object storage and local disk. The directories for object storage start with remote://<bucket_name>. For example:

rpk cluster logdirs describe

BROKER  DIR                              TOPIC               PARTITION  SIZE      ERROR
0       /home/redpanda/var/node0/data    monday              0          18406863
0       remote://data                    monday              0          60051220
1       /home/redpanda/var/node1/data    monday              0          22859882
1       remote://data                    monday              0          60051220
2       /home/redpanda/var/node2/data    monday              0          17169935
2       remote://data                    monday              0          60051220

Integration with space utilization tools

Third-party tools that query space utilization from the Redpanda cluster might not handle remote:// entries properly. Redpanda space usage is reported from each broker, but object storage is shared between brokers. Third-party tools could over-count storage and show unexpectedly high disk usage for Tiered Storage topics. In this situation, you can disable output of remote:// entries by setting kafka_enable_describe_log_dirs_remote_storage to false.
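
For example, to suppress the remote:// entries:

rpk cluster config set kafka_enable_describe_log_dirs_remote_storage false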

Remote write

Remote write is the process that constantly uploads log segments to object storage. The process is created for each partition and runs on the partition’s leader broker. It uploads only the segments that contain offsets smaller than the last stable offset, which is the latest offset that a client can read.

To ensure all data is uploaded, you must enable remote write before any data is produced to the topic. If you enable remote write after data has been written to the topic, only the data that currently exists on disk based on local retention settings will be scheduled for uploading. Redpanda strongly recommends that you avoid repeatedly toggling remote write on and off, because this can result in inconsistent data and data gaps in Tiered Storage for a topic.

To enable Tiered Storage, use both remote write and remote read.

To create a topic with remote write enabled:

rpk topic create <topic_name> -c redpanda.remote.write=true

To enable remote write on an existing topic:

rpk topic alter-config <topic_name> --set redpanda.remote.write=true

If remote write is enabled, log segments are not deleted until they’re uploaded to object storage. Because of this, the log segments may exceed the configured retention period until they’re uploaded, so the topic might require more disk space. This prevents data loss if segments cannot be uploaded fast enough or if the retention period is very short.

To see the object storage status for a given topic:

rpk topic describe-storage <topic_name> --print-all

See the reference for a list of flags you can use to filter the command output.

A constant stream of data is necessary to build up the log segments and roll them into object storage. This upload process is asynchronous. You can monitor its status with the /metrics/vectorized_ntp_archiver_pending metric.

To see new log segments in object storage faster, you can lower the segment.bytes topic-level property. Or, you can edit the cloud_storage_segment_max_upload_interval_sec property, which controls how often new segments are uploaded to Tiered Storage, even if the local segment has not exceeded the segment.bytes size or the segment.ms age.
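
For example, to roll segments at 128 MiB (an illustrative value, not a recommendation):

rpk topic alter-config <topic_name> --set segment.bytes=134217728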

Starting in version 22.3, when you delete a topic in Redpanda, the data is also deleted in object storage. See Enable Tiered Storage for specific topics.

Idle timeout

You can configure Redpanda to start a remote write periodically. This is useful if the ingestion rate is low and segments are kept open for long periods of time. You specify a number of seconds for the timeout, and if that time has passed since the previous write and the partition has new data, Redpanda starts a new write. This limits data loss in the event of catastrophic failure and guarantees that you lose at most the specified number of seconds of data.

Setting idle timeout to a very short interval can create many small files, which can affect throughput. If you decide to set a value for idle timeout, start with 600 seconds; this avoids creating so many small files that throughput suffers when you recover the files.

Use the cloud_storage_segment_max_upload_interval_sec property to set the number of seconds for idle timeout. If this property is set to null, Redpanda uploads metadata to object storage, but the segment is not uploaded until it reaches the segment.bytes size.
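
For example, to set a 600-second idle timeout:

rpk cluster config set cloud_storage_segment_max_upload_interval_sec 600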

Reconciliation

Reconciliation is a Redpanda process that monitors partitions and decides which partitions are uploaded on each broker to guarantee that data is uploaded only once. It runs periodically on every broker. It also balances the workload evenly between brokers.

The broker that uploads to object storage is always the one hosting the partition leader. Therefore, when partition leadership balancing occurs, Redpanda stops uploads to object storage from one broker and starts them on another broker.

Upload backlog controller

Remote write uses a proportional derivative (PD) controller to minimize the backlog size for the write. The backlog consists of data that has not been uploaded to object storage but must be uploaded eventually.

The upload backlog controller prevents Redpanda from running out of disk space. If redpanda.remote.write is set to true, Redpanda cannot evict log segments that have not been uploaded to object storage. If the remote write process cannot keep up with the amount of data that needs to be uploaded to object storage, the upload backlog controller increases the priority of the upload process. The controller periodically measures the size of the upload backlog and tunes the priority of the remote write process accordingly.

Data archiving

If you only enable remote write on a topic, you have a simple backup to object storage that you can use for recovery. In the event of a data center failure, data corruption, or cluster migration, you can recover your archived data from the cloud back to your cluster.

Configure data archiving

Data archiving requires a Tiered Storage configuration.

To recover a topic from object storage, use single topic recovery.

While performing topic recovery, avoid adding load (such as produce or consume requests, list operations, or additional recovery operations) to the target cluster. Doing so could destabilize the recovery process and result in an unsuccessful recovery or a corrupted recovered topic.

Stop data archiving

To cancel archiving jobs, disable remote write.

To delete archival data, adjust retention.ms or retention.bytes.

Remote read

Remote read fetches data from object storage using the Kafka API.

Without Tiered Storage, when data is evicted locally, it is no longer available. If the consumer starts consuming the partition from the beginning, the first offset is the smallest offset available locally. However, when Tiered Storage is enabled with the redpanda.remote.read and redpanda.remote.write properties, data is always uploaded to object storage before it’s deleted. This guarantees that data is always available either locally or remotely.

When data is available remotely and Tiered Storage is enabled, clients can consume data, even if the data is no longer stored locally.

To create a topic with remote read enabled:

rpk topic create <topic_name> -c redpanda.remote.read=true

To enable remote read on an existing topic:

rpk topic alter-config <topic_name> --set redpanda.remote.read=true

Pause and resume uploads

Redpanda strongly recommends using pause and resume only under the guidance of Redpanda Support or a member of your account team.

Starting in version 25.1, you can troubleshoot issues your cluster has interacting with object storage by pausing and resuming uploads, with no risk to data consistency and no data loss. To pause or resume segment uploads to object storage, use the cloud_storage_enable_segment_uploads configuration property (default is true). Segment uploads proceed normally after the pause ends and uploads resume.

While uploads are paused, data accumulates locally, which can lead to full disks if the pause is prolonged. If the disks fill, Redpanda throttles and then rejects new Kafka produce requests to prevent more data from being written. Pausing uploads also pauses object storage housekeeping, meaning segments are neither uploaded to nor removed from object storage. However, you can still consume data from object storage while uploads are paused.

When you set cloud_storage_enable_segment_uploads to false, all in-flight segment uploads complete, but no new segment uploads begin until the value is set back to true. During this pause, Tiered Storage enforces consistency by ensuring that no segment in local storage is deleted until it successfully uploads to object storage. This means that when uploads are resumed, no user intervention is needed, and no data gaps are created.

Use the redpanda_cloud_storage_paused_archivers metric to monitor the status of paused uploads. It displays a non-zero value whenever uploads are paused.
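
For example, assuming the default Admin API port (9644) on a broker, you can check the metric with:

curl -s http://localhost:9644/public_metrics | grep redpanda_cloud_storage_paused_archivers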

Do not use redpanda.remote.read or redpanda.remote.write to pause and resume segment uploads. Doing so can lead to a gap between local data and data in object storage. In such cases, it is possible that the oldest segment is not aligned with the last uploaded segment. Given that these settings are unsafe, if you choose to set either redpanda.remote.write or the cluster configuration setting cloud_storage_enable_remote_write to false, you receive a warning:

Warning: disabling Tiered Storage may lead to data loss. If you only want to pause Tiered Storage temporarily, use the `cloud_storage_enable_segment_uploads` option. Abort?
# The default is Yes.

The following example shows a simple pause and resume with no gaps allowed:

rpk cluster config set cloud_storage_enable_segment_uploads false
# Segments are not uploaded to cloud storage, and cloud storage housekeeping is not running.
# The new data added to the topics with Tiered Storage is not deleted from disk
# because it can't be uploaded. The disks may fill up eventually.
# If the disks fill up, produce requests will be rejected.
...

rpk cluster config set cloud_storage_enable_segment_uploads true
# At this point the uploads should resume seamlessly and
# there should not be any data loss.

For some applications, the newest data is more valuable than historical data, and data accumulation can be worse than data loss. If you cannot afford to lose the most recently produced data by rejecting produce requests after producers fill the local disks during a period of paused uploads, there is a less safe pause and resume mechanism. This mechanism prioritizes the ability to receive new data over retaining data that cannot be uploaded while disks are full:

  • Set the cloud_storage_enable_remote_allow_gaps cluster configuration property to true. This allows for gaps in the logs of all Tiered Storage topics in the cluster.

  • Set the redpanda.remote.allow_gaps configuration property to true. This allows gaps for one specific topic. This topic-level configuration option overrides the cluster-level default.

When you pause uploads and set one of these properties to true, there may be gaps in the range of offsets stored in object storage, but uploads resume seamlessly when you re-enable them at either the cluster or topic level. If both properties are set to false, disk space could be depleted and produce requests would be throttled.

The following example shows how to pause and resume Tiered Storage uploads while allowing for gaps:

rpk cluster config set cloud_storage_enable_segment_uploads false
# Segment uploads are paused and cloud storage housekeeping is not running.
# New data is stored on the local volume, which may overflow.
# To avoid overflow, allow gaps in the log. In this example, data that is
# not uploaded to cloud storage may be deleted if a disk fills before
# uploads are resumed.

rpk topic alter-config <topic_name> --set redpanda.remote.allow_gaps=true
# Uploads are paused and gaps are allowed. Local retention is allowed
# to delete data before it's uploaded, therefore some data loss is possible.
...

rpk cluster config set cloud_storage_enable_segment_uploads true
# Uploads are resumed but there could be gaps in the offsets.
# Wait until you see that the `redpanda_cloud_storage_paused_archivers`
# metric is equal to zero, indicating that uploads have resumed.

# Disable the gap allowance previously set for the topic.
rpk topic alter-config <topic_name> --set redpanda.remote.allow_gaps=false

Caching

When a consumer fetches an offset range that isn’t available locally in the Redpanda data directory, Redpanda downloads remote segments from object storage. These downloaded segments are stored in the object storage cache.

Change the cache directory

By default, the cache directory is created in Redpanda’s data directory, but it can be placed anywhere in the system. For example, you might want to put the cache directory on a dedicated drive with cheaper storage. Use the cloud_storage_cache_directory broker property to specify a different location for the cache directory. You must specify the full path.
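
For example, a sketch that moves the cache to a dedicated drive (the mount path is illustrative):

rpk redpanda config set cloud_storage_cache_directory /mnt/tiered-storage-cache

Because this is a broker (node) property, the change takes effect after a broker restart.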