About Iceberg Topics
This feature requires an enterprise license. To get a trial license key or extend your trial period, generate a new trial license key. To purchase a license, contact Redpanda Sales. If Redpanda has enterprise features enabled and cannot find a valid license, restrictions apply.
The Apache Iceberg integration for Redpanda allows you to store topic data in the cloud in the Iceberg open table format. This makes your streaming data immediately available in downstream analytical systems, including data warehouses like Snowflake, Databricks, ClickHouse, and Redshift, without setting up and maintaining additional ETL pipelines. You can also integrate your data directly into commonly used big data processing frameworks, such as Apache Spark and Flink, standardizing and simplifying the consumption of streams as tables in a wide variety of data analytics pipelines.
Redpanda supports version 2 of the Iceberg table format.
Iceberg concepts
Apache Iceberg is an open source format specification for defining structured tables in a data lake. The table format lets you quickly and easily manage, query, and process huge amounts of structured and unstructured data. This is similar to the way you would manage and run SQL queries against relational data in a database or data warehouse. The open format lets you use many different languages, tools, and applications to process the same data in a consistent way, so you can avoid vendor lock-in. This data management system is also known as a data lakehouse.
In the Iceberg specification, tables consist of the following layers:
- Data layer: Stores the data in data files. The Iceberg integration currently supports the Parquet file format. Parquet files are column-based and suitable for analytical workloads at scale. They come with compression capabilities that optimize files for object storage.
- Metadata layer: Stores table metadata separately from data files. The metadata layer allows multiple writers to stage metadata changes and apply updates atomically. It also supports database snapshots, and time travel queries that query the database at a previous point in time. The metadata layer consists of the following:
  - Manifest files: Track data files and contain metadata about these files, such as record count, partition membership, and file paths.
  - Manifest list: Tracks all the manifest files belonging to a table, including file paths and upper and lower bounds for partition fields.
  - Metadata file: Stores metadata about the table, including its schema, partition information, and snapshots. Whenever a change is made to the table, a new metadata file is created and becomes the latest version of the metadata in the catalog.

  For Iceberg-enabled topics, the manifest files are in JSON format.

- Catalog: Contains the current metadata pointer for the table. Clients reading and writing data to the table see the same version of the current state of the table. The Iceberg integration supports two catalog integration types: you can configure Redpanda to catalog files stored in the same object storage bucket or container where the Iceberg data files are located, or you can configure Redpanda to use an Iceberg REST catalog endpoint to update an externally managed catalog when there are changes to the Iceberg data and metadata. A configuration sketch follows this list.

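For example, pointing Redpanda at an externally managed REST catalog is done through catalog-related cluster properties. This is a minimal sketch, assuming the properties are named `iceberg_catalog_type` and `iceberg_rest_catalog_endpoint` and the endpoint host is a placeholder; verify the exact property names and authentication options for your Redpanda version:

```bash
# Use an external Iceberg REST catalog instead of cataloging files in the
# object storage bucket. Property names are assumptions based on recent
# Redpanda releases; confirm them with `rpk cluster config get` first.
rpk cluster config set iceberg_catalog_type rest
rpk cluster config set iceberg_rest_catalog_endpoint http://<catalog-host>:8181
```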
When you enable the Iceberg integration for a Redpanda topic, Redpanda brokers store streaming data in the Iceberg-compatible format in Parquet files in object storage, in addition to the log segments uploaded using Tiered Storage. Storing the streaming data in Iceberg tables in the cloud allows you to derive real-time insights through many compatible data lakehouse, data engineering, and business intelligence tools.
Prerequisites
To enable Iceberg for Redpanda topics, you must have the following:
- rpk: See Install or Update rpk.
- Enterprise license: To check whether you already have a license key applied to your cluster, run:

  ```bash
  rpk cluster license info
  ```

- Tiered Storage: Enable Tiered Storage for the topics for which you want to generate Iceberg tables. A quick way to verify this is shown after this list.
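As a sketch of the Tiered Storage prerequisite: Tiered Storage is controlled by the `cloud_storage_enabled` cluster property and the per-topic remote write setting. The topic name below is a placeholder:

```bash
# Confirm Tiered Storage is enabled cluster-wide
rpk cluster config get cloud_storage_enabled

# Enable Tiered Storage uploads for an existing topic (placeholder name)
rpk topic alter-config <topic-name> --set redpanda.remote.write=true
```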
Limitations
- You cannot append topic data to an existing Iceberg table that was not created by Redpanda.
- If you enable the Iceberg integration on an existing Redpanda topic, Redpanda does not backfill the generated Iceberg table with topic data.
- JSON schemas are not currently supported. If the topic data is in JSON, use the `key_value` mode to store the JSON in Iceberg, where most query engines can then parse it (see the example after this list).
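For example, to land plain JSON records in Iceberg despite this limitation, you can switch the topic to `key_value` mode. The topic name here is a placeholder:

```bash
# Store raw JSON bytes in the Iceberg table's binary value column;
# query engines can parse the JSON when reading the table
rpk topic alter-config <json-topic> --set redpanda.iceberg.mode=key_value
```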
Enable Iceberg integration
To create an Iceberg table for a Redpanda topic, you must set the cluster configuration property `iceberg_enabled` to `true`, and also configure the topic property `redpanda.iceberg.mode`. You can choose to provide a schema if you need the Iceberg table to be structured with defined columns.
- Set the `iceberg_enabled` configuration option on your cluster to `true`:

  ```bash
  rpk cluster config set iceberg_enabled true
  ```

  ```
  Successfully updated configuration. New configuration version is 2.
  ```

  You must restart your cluster if you change this configuration for a running cluster.

- (Optional) Create a new topic:

  ```bash
  rpk topic create <new-topic-name>
  ```

  ```
  TOPIC              STATUS
  <new-topic-name>   OK
  ```

- Configure `redpanda.iceberg.mode` for the topic. You can also set this property when you create the topic, as shown in the sketch after these steps. Choose one of the following modes:

  - `key_value`: Creates an Iceberg table using a simple schema, consisting of two columns: one for the record metadata including the key, and another binary column for the record's value.
  - `value_schema_id_prefix`: Creates an Iceberg table whose structure matches the Redpanda schema for this topic, with columns corresponding to each field. You must register a schema in the Schema Registry (see the next step), and producers must write to the topic using the Schema Registry wire format.
  - `value_schema_latest`: Creates an Iceberg table whose structure matches the latest schema registered for the subject in the Schema Registry.
  - `disabled` (default): Disables writing to an Iceberg table for this topic.

  ```bash
  rpk topic alter-config <new-topic-name> --set redpanda.iceberg.mode=<topic-iceberg-mode>
  ```

  ```
  TOPIC              STATUS
  <new-topic-name>   OK
  ```

  See also: Choose an Iceberg Mode.

- Register a schema for the topic. This step is required for the `value_schema_id_prefix` and `value_schema_latest` modes:

  ```bash
  rpk registry schema create <subject-name> --schema </path-to-schema> --type <format>
  ```

  ```
  SUBJECT          VERSION   ID   TYPE
  <subject-name>   1         1    PROTOBUF
  ```
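As a condensed sketch of these steps, the following creates a topic with the Iceberg mode set at creation time, registers a schema, and produces a record in the Schema Registry wire format. The topic name, subject name, and schema file are hypothetical:

```bash
# Create the topic with the Iceberg mode set in one step
rpk topic create clicks -c redpanda.iceberg.mode=value_schema_id_prefix

# Register an Avro schema under the topic's value subject
rpk registry schema create clicks-value --schema clicks.avsc --type avro

# Produce a record; --schema-id=topic encodes it in the Schema Registry wire format
echo '{"user_id":1,"event_type":"PAGE_VIEW"}' | rpk topic produce clicks --format='%v\n' --schema-id=topic
```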
As you produce records to the topic, the data also becomes available in object storage for Iceberg-compatible clients to consume. You can use the same analytical tools to read the Iceberg topic data in a data lake as you would for a relational database.
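For instance, one way to read the table outside of Redpanda is with DuckDB's `iceberg` extension. This is a sketch only: the bucket path is a placeholder, and you must also configure object storage credentials (for example, through DuckDB's `httpfs` extension) for the scan to succeed:

```bash
# Query the topic's Iceberg table directly from object storage (placeholder path)
duckdb -c "
INSTALL iceberg; LOAD iceberg;
SELECT * FROM iceberg_scan('s3://<bucket>/<path-to-table>') LIMIT 10;
"
```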
See also: Schema types translation.
Schema evolution
Redpanda supports schema evolution for Avro and Protobuf schemas in accordance with the Iceberg specification. Permitted schema evolutions include adding new fields, reordering fields, and promoting field types. When you update the schema in the Schema Registry, Redpanda automatically updates the Iceberg table schema to match the new schema.
For example, if you produce records to a topic `demo-topic` with the following Avro schema:
```json
{
  "type": "record",
  "name": "ClickEvent",
  "fields": [
    {
      "name": "user_id",
      "type": "int"
    },
    {
      "name": "event_type",
      "type": "string"
    }
  ]
}
```
```bash
rpk registry schema create demo-topic-value --schema schema_1.avsc
echo '{"user_id":23, "event_type":"BUTTON_CLICK"}' | rpk topic produce demo-topic --format='%v\n' --schema-id=topic
```
Then, you update the schema to add a new field `ts`, and produce records with the updated schema:
```json
{
  "type": "record",
  "name": "ClickEvent",
  "fields": [
    {
      "name": "user_id",
      "type": "int"
    },
    {
      "name": "event_type",
      "type": "string"
    },
    {
      "name": "ts",
      "type": [
        "null",
        { "type": "string", "logicalType": "date" }
      ],
      "default": null
    }
  ]
}
```
The `ts` field can be either null or a string representing a date. The default value is null.
```bash
rpk registry schema create demo-topic-value --schema schema_2.avsc
echo '{"user_id":858, "event_type":"BUTTON_CLICK", "ts":{"string":"2025-02-26T20:05:23.230Z"}}' | rpk topic produce demo-topic --format='%v\n' --schema-id=topic
```
Querying the Iceberg table for `demo-topic` now returns the new column `ts`:
```
+---------+--------------+--------------------------+
| user_id | event_type   | ts                       |
+---------+--------------+--------------------------+
| 858     | BUTTON_CLICK | 2025-02-26T20:05:23.230Z |
| 23      | BUTTON_CLICK | NULL                     |
+---------+--------------+--------------------------+
```
Manage dead-letter queue
Errors may occur when translating records in the `value_schema_id_prefix` mode to the Iceberg table format; for example, if you do not use the Schema Registry wire format with the magic byte, if the schema ID in the record is not found in the Schema Registry, or if an Avro or Protobuf data type cannot be translated to an Iceberg type.

If Redpanda encounters an error while writing a record to the Iceberg table, it writes the record to a separate dead-letter queue (DLQ) Iceberg table named `<topic-name>~dlq`. To disable this default behavior for a topic and drop invalid records instead, set the `redpanda.iceberg.invalid.record.action` topic property to `drop`. You can also configure the default cluster-wide behavior for invalid records by setting the `iceberg_invalid_record_action` property, as shown in the example after this paragraph.
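For example, to drop invalid records rather than writing them to the DLQ table (the topic name is a placeholder, and the cluster-wide property is assumed to accept the same `drop` value as the topic property):

```bash
# Per-topic: drop invalid records instead of writing them to <topic-name>~dlq
rpk topic alter-config <topic-name> --set redpanda.iceberg.invalid.record.action=drop

# Cluster-wide default for all Iceberg topics
rpk cluster config set iceberg_invalid_record_action drop
```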
The DLQ table itself uses the `key_value` schema, consisting of two columns: one for the record metadata including the key, and a binary column for the record's value.
You can inspect the DLQ table for records that failed to write to the Iceberg table, and you can take further action on these records, such as transforming and reprocessing them, or debugging issues that occurred upstream.
Reprocess DLQ records
The following example produces a record to a topic named `ClickEvent` without using the Schema Registry wire format that includes the magic byte and schema ID:

```bash
echo '"key1" {"user_id":2324,"event_type":"BUTTON_CLICK","ts":"2024-11-25T20:23:59.380Z"}' | rpk topic produce ClickEvent --format='%k %v\n'
```
Querying the DLQ table returns the record that was not translated:
```sql
SELECT
  value
FROM <catalog-name>."ClickEvent~dlq"; -- Fully qualified table name
```
```
+-------------------------------------------------+
| value                                           |
+-------------------------------------------------+
| 7b 22 75 73 65 72 5f 69 64 22 3a 32 33 32 34 2c |
| 22 65 76 65 6e 74 5f 74 79 70 65 22 3a 22 42 55 |
| 54 54 4f 4e 5f 43 4c 49 43 4b 22 2c 22 74 73 22 |
| 3a 22 32 30 32 34 2d 31 31 2d 32 35 54 32 30 3a |
| 32 33 3a 35 39 2e 33 38 30 5a 22 7d             |
+-------------------------------------------------+
```
The data is in binary format, and the first byte is not `0x00`, indicating that the record was not produced with a schema. You can confirm this by inspecting the raw bytes, as shown below.
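As a quick sketch, you can dump the leading bytes of a record to see whether it carries the wire-format prefix (a `0x00` magic byte followed by a four-byte schema ID):

```bash
# Dump the raw value bytes of the first record; a schema-encoded record
# starts with 00 followed by the schema ID
rpk topic consume ClickEvent --offset start --num 1 --format '%v' | xxd | head -3
```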
You can transform the record in your data lakehouse and reprocess it into the original Iceberg table. In this case, you have a JSON value represented as UTF-8 binary. Depending on your query engine, you might need to decode the binary value before extracting the JSON fields; some engines decode it automatically:
```sql
SELECT
  CAST(jsonExtractString(json, 'user_id') AS Int32) AS user_id,
  jsonExtractString(json, 'event_type') AS event_type,
  jsonExtractString(json, 'ts') AS ts
FROM (
  SELECT
    CAST(value AS String) AS json
  FROM <catalog-name>.`ClickEvent~dlq` -- Ensure that the table name is properly parsed
);
```
```
+---------+--------------+--------------------------+
| user_id | event_type   | ts                       |
+---------+--------------+--------------------------+
| 2324    | BUTTON_CLICK | 2024-11-25T20:23:59.380Z |
+---------+--------------+--------------------------+
```
You can now insert the transformed record back into the main Iceberg table. Redpanda recommends employing a strategy for exactly-once processing to avoid duplicates when reprocessing records.
Performance considerations
When you enable Iceberg for any substantial workload and start translating topic data to the Iceberg format, you may see a noticeable increase in your cluster's CPU utilization. If this additional workload overwhelms the brokers and the Iceberg table lag exceeds the configured target lag, Redpanda automatically applies backpressure to producers to prevent Iceberg tables from lagging further. This ensures that Iceberg tables keep up with the volume of incoming data, but it sacrifices ingress throughput of the cluster. You can tune the lag threshold, as shown in the sketch below.
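If you prefer to trade table freshness for producer throughput, you can raise the target lag. This sketch assumes the setting is exposed as the `iceberg_target_lag_ms` cluster property; verify the exact property name and default for your Redpanda version:

```bash
# Allow Iceberg tables to lag further behind the topic before backpressure
# kicks in. The property name is an assumption; confirm it with
# `rpk cluster config get` before applying.
rpk cluster config set iceberg_target_lag_ms 300000
```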
You may need to increase the size of your Redpanda cluster to accommodate the additional workload. To ensure that your cluster is sized appropriately, contact the Redpanda Customer Success team.
Use custom partitioning
To improve query performance, consider implementing custom partitioning for the Iceberg topic. Use the `redpanda.iceberg.partition.spec` topic property to define the partitioning scheme:

```bash
# Create a new topic with five topic partitions, replication factor 3, and custom table partitioning for Iceberg
rpk topic create <new-topic-name> -p5 -r3 -c redpanda.iceberg.mode=value_schema_id_prefix -c "redpanda.iceberg.partition.spec=(<partition-key1>, <partition-key2>, ...)"
```
Valid `<partition-key>` values include a source column name or a transformation of a column. The referenced columns can be Redpanda-defined (such as `redpanda.timestamp`) or user-defined based on a schema that you register for the topic. The Iceberg table stores records with different partition key values in separate files based on this specification.
For example:
- To partition the table by a single key, such as a column `col1`, use `redpanda.iceberg.partition.spec=(col1)`.
- To partition by multiple columns, use a comma-separated list: `redpanda.iceberg.partition.spec=(col1, col2)`.
- To partition by the year of a timestamp column `ts1` and a string column `col1`, use `redpanda.iceberg.partition.spec=(year(ts1), col1)`.
To learn more about how partitioning schemes can affect query performance, and for details on the partitioning specification such as allowed transforms, see the Apache Iceberg documentation.
Avoid high column count
A high column count or schema field count results in more overhead when translating topics to the Iceberg table format. Small message sizes can also increase CPU utilization. To minimize the performance impact on your cluster, keep the column count low and message sizes large for Iceberg topics.