# Set Up MySQL CDC with Debezium and Redpanda
This example demonstrates how to use Debezium to capture the changes made to MySQL in real time and stream them to Redpanda.
This ready-to-run Docker Compose setup contains the following containers:

- `mysql` container with the `pandashop` database, containing a single table, `orders`
- `debezium` container capturing changes made to the `orders` table in real time
- `redpanda` container to ingest the change data streams produced by `debezium`

For more information about the `pandashop` database schema, see the `/data/mysql_bootstrap.sql` file.
## Prerequisites
You must have Docker and Docker Compose installed on your host machine.
## Run the lab
- Clone this repository:

  ```bash
  git clone https://github.com/redpanda-data/redpanda-labs.git
  ```
- Change into the `docker-compose/cdc/mysql-json/` directory:

  ```bash
  cd redpanda-labs/docker-compose/cdc/mysql-json
  ```
- Set the `REDPANDA_VERSION` environment variable to the version of Redpanda that you want to run. For all available versions, see the GitHub releases. For example:

  ```bash
  export REDPANDA_VERSION=24.1.12
  ```
- Run the following in the directory where you saved the Docker Compose file:

  ```bash
  docker compose up -d
  ```

  When the `mysql` container starts, the `/data/mysql_bootstrap.sql` file creates the `pandashop` database and the `orders` table, then seeds the `orders` table with a few records.
- Log in to MySQL:

  ```bash
  docker compose exec mysql mysql -u mysqluser -p
  ```

  Provide `mysqlpw` as the password when prompted.
- Check the contents of the `orders` table:

  ```sql
  use pandashop;
  show tables;
  select * from orders;
  ```

  This is your source table.
- Exit MySQL:

  ```sql
  exit
  ```
- While Debezium is up and running, create a source connector configuration to extract change data feeds from MySQL:

  ```bash
  docker compose exec debezium curl -i -X POST \
    -H "Accept:application/json" \
    -H "Content-Type:application/json" \
    localhost:8083/connectors/ -d '
  {
    "name": "mysql-connector",
    "config": {
      "connector.class": "io.debezium.connector.mysql.MySqlConnector",
      "tasks.max": "1",
      "database.hostname": "mysql",
      "database.port": "3306",
      "database.user": "debezium",
      "database.password": "dbz",
      "database.server.id": "184054",
      "topic.prefix": "dbz",
      "database.include.list": "pandashop",
      "schema.history.internal.kafka.bootstrap.servers": "redpanda:9092",
      "schema.history.internal.kafka.topic": "schemahistory.pandashop"
    }
  }'
  ```

  You should see the following in the output:

  ```
  HTTP/1.1 201 Created
  Date: Mon, 12 Feb 2024 16:37:09 GMT
  Location: http://localhost:8083/connectors/mysql-connector
  Content-Type: application/json
  Content-Length: 489
  Server: Jetty(9.4.51.v20230217)
  ```

  The `database.*` configurations specify the connectivity details for the `mysql` container. The `schema.history.internal.kafka.bootstrap.servers` parameter points to the `redpanda` broker that the connector uses to write and recover DDL statements in the database schema history topic.

- Wait a minute or two until the connector is deployed inside Debezium and creates the initial snapshot of change log topics in Redpanda.
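The connector payload above can also be assembled and sanity-checked programmatically before it is posted to the Kafka Connect REST endpoint. The following is a minimal Python sketch; the `required` key list is an assumption based on the fields this lab uses, not an exhaustive Debezium requirement:

```python
import json

# The same connector payload as the curl command above.
connector = {
    "name": "mysql-connector",
    "config": {
        "connector.class": "io.debezium.connector.mysql.MySqlConnector",
        "tasks.max": "1",
        "database.hostname": "mysql",
        "database.port": "3306",
        "database.user": "debezium",
        "database.password": "dbz",
        "database.server.id": "184054",
        "topic.prefix": "dbz",
        "database.include.list": "pandashop",
        "schema.history.internal.kafka.bootstrap.servers": "redpanda:9092",
        "schema.history.internal.kafka.topic": "schemahistory.pandashop",
    },
}

# Sanity-check the fields this lab relies on before POSTing to localhost:8083.
required = [
    "connector.class",
    "database.hostname",
    "topic.prefix",
    "database.include.list",
]
missing = [key for key in required if key not in connector["config"]]
assert not missing, f"missing keys: {missing}"

payload = json.dumps(connector)
```

Posting `payload` to `localhost:8083/connectors/` with a `Content-Type: application/json` header (for example with `urllib.request`) is equivalent to the `curl` step above.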
- Check the list of change log topics in `redpanda` by running:

  ```bash
  docker compose exec redpanda rpk topic list
  ```

  The output should contain two topics with the `dbz` prefix specified in the connector configuration. The topic `dbz.pandashop.orders` holds the initial snapshot of change log events streamed from the `orders` table:

  ```
  NAME                     PARTITIONS  REPLICAS
  connect-status           5           1
  connect_configs          1           1
  connect_offsets          25          1
  dbz                      1           1
  dbz.pandashop.orders     1           1
  schemahistory.pandashop  1           1
  ```
- Monitor for change events by consuming the `dbz.pandashop.orders` topic:

  ```bash
  docker compose exec redpanda rpk topic consume dbz.pandashop.orders
  ```
- While the consumer is running, open another terminal to insert a record into the `orders` table:

  ```bash
  export REDPANDA_VERSION=24.1.12
  docker compose exec mysql mysql -u mysqluser -p
  ```

  Provide `mysqlpw` as the password when prompted.
- Insert the following record:

  ```sql
  use pandashop;
  INSERT INTO orders (customer_id, total) VALUES (5, 500);
  ```

  This triggers a change event in Debezium, which immediately publishes it to the `dbz.pandashop.orders` Redpanda topic, causing the consumer to display a new event in the console. That proves the end-to-end functionality of your CDC pipeline.
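Each record on `dbz.pandashop.orders` is a Debezium change event envelope whose `payload` carries `before`/`after` row images and an `op` code (`c` for create, `u` for update, `d` for delete, `r` for snapshot read). The sketch below decodes an abbreviated stand-in for the event the INSERT above would produce; the surrounding field values are illustrative, and a real event also carries `source` and schema metadata:

```python
import json

# Abbreviated stand-in for a record consumed from dbz.pandashop.orders.
raw = json.dumps({
    "payload": {
        "before": None,                             # no prior row image for an INSERT
        "after": {"customer_id": 5, "total": 500},  # the row we just inserted
        "op": "c",                                  # c = create
        "ts_ms": 1707755830000,
    }
})

event = json.loads(raw)["payload"]
op_names = {"c": "create", "u": "update", "d": "delete", "r": "snapshot read"}
print(op_names.get(event["op"]), event["after"])  # create {'customer_id': 5, 'total': 500}
```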
## Clean up
To shut down and delete the containers along with all your cluster data:

```bash
docker compose down -v
```
## Next steps
Now that you have change log events ingested into Redpanda, you can process them to enable use cases such as:

- Database replication
- Stream processing applications
- Streaming ETL pipelines
- Updating caches
- Event-driven microservices