Set Up Postgres CDC with Debezium and Redpanda

This example demonstrates using Debezium to capture the changes made to Postgres in real time and stream them to Redpanda.

This ready-to-run Docker Compose setup contains the following containers:

  • postgres container with the pandashop database, which contains a single table, orders

  • debezium container capturing changes made to the orders table in real time

  • redpanda container ingesting the change data streams produced by Debezium

For more information about the pandashop schema, see the /data/postgres_bootstrap.sql file.
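
As a rough illustration, the bootstrap script defines the orders table and seeds it with a few rows. The sketch below is hypothetical (column names other than customer_id and total, the types, and the seed values are assumptions); the actual /data/postgres_bootstrap.sql file in the repository is authoritative.

    -- Hypothetical sketch; see /data/postgres_bootstrap.sql for the real schema and seed data.
    CREATE TABLE orders (
        order_id    SERIAL PRIMARY KEY,    -- assumed primary key
        customer_id INT NOT NULL,
        total       NUMERIC(10, 2) NOT NULL
    );

    -- Seed a few records so the initial Debezium snapshot has data to capture.
    INSERT INTO orders (customer_id, total) VALUES (1, 100), (2, 200);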

Example architecture

Prerequisites

You must have Docker and Docker Compose installed on your host machine.

Run the lab

  1. Clone this repository:

    git clone https://github.com/redpanda-data/redpanda-labs.git
  2. Change into the docker-compose/cdc/postgres-json/ directory:

    cd redpanda-labs/docker-compose/cdc/postgres-json
  3. Set the REDPANDA_VERSION environment variable to the version of Redpanda that you want to run. For all available versions, see the GitHub releases.

    For example:

    export REDPANDA_VERSION=24.3.2
  4. Run the following in the directory where you saved the Docker Compose file:

    docker compose up -d

    When the postgres container starts, the /data/postgres_bootstrap.sql file creates the pandashop database and the orders table, and seeds the orders table with a few records.
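
    To confirm that the containers are up before continuing, you can list them:

    docker compose ps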

  5. Log into Postgres:

    docker compose exec postgres psql -U postgresuser -d pandashop
  6. Check the contents of the orders table:

    select * from orders;

    This is the source table.
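
    While you are connected, you can also inspect the table's schema with the psql \d meta-command:

    \d orders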

  7. With Debezium up and running, register a source connector to extract change data from Postgres:

    docker compose exec debezium curl -H 'Content-Type: application/json' debezium:8083/connectors --data '
    {
      "name": "postgres-connector",
      "config": {
        "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
        "plugin.name": "pgoutput",
        "database.hostname": "postgres",
        "database.port": "5432",
        "database.user": "postgresuser",
        "database.password": "postgrespw",
        "database.dbname" : "pandashop",
        "database.server.name": "postgres",
        "table.include.list": "public.orders",
        "topic.prefix" : "dbz"
      }
    }'

    Notice the database.* configuration properties, which specify the connection details for the postgres container. Wait a minute or two until the connector is deployed inside Debezium and creates the initial snapshot of change log topics in Redpanda.
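
    To check on the deployment, you can query the Kafka Connect REST API that Debezium exposes on port 8083. For example, the following returns the connector and task status:

    docker compose exec debezium curl -s debezium:8083/connectors/postgres-connector/status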

  8. Check the list of change log topics in Redpanda:

    docker compose exec redpanda rpk topic list

    The output should contain a change log topic with the dbz prefix specified in the connector configuration (topic.prefix). The dbz.public.orders topic holds the initial snapshot of change log events streamed from the orders table.

    NAME               PARTITIONS  REPLICAS
    connect-status     5           1
    connect_configs    1           1
    connect_offsets    25          1
    dbz.public.orders  1           1
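
    To look at the snapshot topic in more detail, such as its partitions and configuration, you can describe it with rpk:

    docker compose exec redpanda rpk topic describe dbz.public.orders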
  9. Monitor for change events by consuming the dbz.public.orders topic:

    docker compose exec redpanda rpk topic consume dbz.public.orders
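
    Each record printed by rpk is a JSON object whose value field contains the Debezium change event as a string. If you have jq installed on your host, you can extract just the row image from the first record (assuming the default converter settings, which wrap the event in a schema/payload envelope):

    docker compose exec redpanda rpk topic consume dbz.public.orders -n 1 | jq -r '.value | fromjson | .payload.after'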
  10. While the consumer is running, open another terminal to insert a record into the orders table:

    export REDPANDA_VERSION=24.3.2
    docker compose exec postgres psql -U postgresuser -d pandashop
  11. Insert the following record:

    INSERT INTO orders (customer_id, total) values (5, 500);

Inserting the record triggers a change event in Debezium, which immediately publishes it to the dbz.public.orders topic in Redpanda, and the running consumer displays the new event in the console. This proves the end-to-end functionality of your CDC pipeline.
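
The new event follows the standard Debezium envelope: a before image (null for inserts), an after image, source metadata, and an op code ("c" for create). A trimmed, illustrative example of the event payload is shown below; the exact field values and column names depend on your schema and connector settings.

    {
      "before": null,
      "after": { "order_id": 4, "customer_id": 5, "total": 500 },
      "source": { "connector": "postgresql", "db": "pandashop", "table": "orders" },
      "op": "c",
      "ts_ms": 1700000000000
    }

You can also run an UPDATE or DELETE against the orders table to see events with op codes "u" and "d" respectively.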

Clean up

To shut down and delete the containers along with all your cluster data:

docker compose down -v

Next steps

Now that you have change log events ingested into Redpanda, you can process them to enable use cases such as:

  • Database replication

  • Stream processing applications

  • Streaming ETL pipelines

  • Cache updates

  • Event-driven microservices