Deploy Kafka Connect in Kubernetes

This topic describes how to deploy Kafka Connect in Kubernetes using the standalone connectors Helm chart.

The Connectors Helm chart is a community-supported artifact. Redpanda Data does not provide enterprise support for this chart. For support, reach out to the Redpanda team in Redpanda Community Slack.

The connectors Helm chart is a standalone chart that deploys an instance of Kafka Connect. The underlying Docker image contains only the MirrorMaker 2 connectors, but you can build a custom image to install additional connectors.

Try Redpanda Connect for a faster way to build streaming data pipelines. It’s fully compatible with the Kafka API but eliminates the complex setup and maintenance of Kafka Connect. Redpanda Connect also comes with built-in connectors to support AI integrations.
The image includes the following built-in connectors:

  • MirrorSourceConnector: A source connector that replicates records between multiple Kafka clusters. It is part of Kafka’s MirrorMaker, which provides capabilities for mirroring data across Kafka clusters.

  • MirrorCheckpointConnector: A source connector that ensures the mirroring process can resume from where it left off in case of failures. It tracks and emits checkpoints that mirror the offsets of the source and target clusters.

  • MirrorHeartbeatConnector: A source connector that emits heartbeats to target topics at a defined interval, enabling MirrorMaker to track active topics on the source cluster and synchronize consumer groups across clusters.

If you want to use other connectors, you must create a custom Docker image that includes them as plugins. See Install a new connector.

Prerequisites

  • A Kubernetes cluster, and kubectl version 1.27.0 or later.

    To check if you have kubectl installed:

    kubectl version --client
  • Helm installed with at least version 3.10.0.

    To check if you have Helm installed:

    helm version
  • jq installed, to parse JSON responses from the Kafka Connect REST API.

  • An understanding of Kafka Connect.

Migrating from the subchart

If you’re currently using the connectors subchart (part of the Redpanda Helm chart), you need to migrate to the standalone connectors chart. Follow these steps:

The example values assume a Redpanda deployment named redpanda in the default namespace. Adjust the values according to your actual deployment.
  1. Copy your existing connectors configuration from your Redpanda values file:

    Extract the connectors section from your current Redpanda Helm values and create a new values file for the standalone chart.

    Example migration
    # Before (in Redpanda values.yaml)
    connectors:
      enabled: true
      auth:
        sasl:
          enabled: true
      brokerTLS:
        enabled: true
    
    # After (in new connectors-values.yaml)
    connectors:
      bootstrapServers: "redpanda-0.redpanda.default.svc.cluster.local:9093,redpanda-1.redpanda.default.svc.cluster.local:9093,redpanda-2.redpanda.default.svc.cluster.local:9093"
    auth:
      sasl:
        enabled: true
    brokerTLS:
      enabled: true
  2. Remove the connectors configuration from your Redpanda values file:

    # Remove or comment out the entire connectors section
    # connectors:
    #   enabled: true
    #   ...
  3. Upgrade your Redpanda deployment:

    helm upgrade redpanda redpanda/redpanda \
      --namespace <namespace> \
      --values redpanda-values.yaml

    This removes the connectors subchart deployment.

  4. Deploy the standalone connectors chart:

    helm install redpanda-connectors redpanda/connectors \
      --namespace <namespace> \
      --values connectors-values.yaml
  5. Update your Redpanda Console configuration to point to the new service name if needed.

Deploy the standalone Helm chart

The connectors Helm chart is a standalone chart that you deploy separately from your Redpanda cluster.

The chart includes a Pod that runs Kafka Connect and the built-in connectors. The Pod is managed by a Deployment that you configure through Helm values. To connect Redpanda Console to your Kafka Connect deployment, you’ll need to configure Redpanda Console with the appropriate service endpoint.

Redpanda Connectors deployed in a Kubernetes cluster with three worker nodes.
Do not schedule Pods that run Kafka Connect on the same nodes as Redpanda brokers. Redpanda brokers require access to all node resources. See Tolerations and Affinity rules.

Deploy Kafka Connect

To deploy Kafka Connect using the standalone chart, you need to configure connection settings to your Redpanda cluster.

  1. Create a values file for the connectors chart:

    connectors-values.yaml
    # Connection to Redpanda brokers
    connectors:
      bootstrapServers: "<bootstrap-server>"
    
    # Configure TLS if your Redpanda cluster has TLS enabled
    brokerTLS:
      enabled: true
      ca:
        secretRef: "redpanda-default-cert"
        secretNameOverwrite: "ca.crt"
    
    # Configure SASL if your Redpanda cluster has SASL enabled
    auth:
      sasl:
        enabled: false
        mechanism: "scram-sha-512"
        userName: ""
        secretRef: ""

    To get the correct bootstrap servers for your Redpanda cluster, run:

    kubectl run -it --restart=Never --rm --image busybox busybox -- ash -c 'nslookup -type=srv _kafka._tcp.<release-name>.<namespace>.svc.cluster.local | tail -n +4 | head -n -1 | awk '"'"'{print $7 ":" $6}'"'"''

    Replace <release-name> and <namespace> with your actual release name and namespace.

  2. Deploy the connectors chart:

    helm upgrade --install redpanda-connectors redpanda/connectors \
      --namespace <namespace> \
      --create-namespace \
      --values connectors-values.yaml

    Replace <namespace> with the namespace where you want to deploy Kafka Connect.

  3. Verify the deployment using the Kafka Connect REST API or by configuring Redpanda Console.
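As an alternative to the nslookup command in step 1, you can construct the bootstrap server list by hand, because each broker address follows the headless-service DNS pattern <release-name>-<ordinal>.<release-name>.<namespace>.svc.cluster.local. The following sketch assumes a three-broker cluster deployed with the release name redpanda in the default namespace, listening on the internal Kafka port 9093; adjust these values to match your deployment:

```shell
# Build a comma-separated bootstrap server list from the DNS naming pattern.
RELEASE=redpanda
NAMESPACE=default
REPLICAS=3
PORT=9093

SERVERS=""
for i in $(seq 0 $((REPLICAS - 1))); do
  SERVERS="${SERVERS}${RELEASE}-${i}.${RELEASE}.${NAMESPACE}.svc.cluster.local:${PORT},"
done
SERVERS=${SERVERS%,}  # strip the trailing comma

echo "$SERVERS"
```

Paste the resulting string into connectors.bootstrapServers in your values file.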

Example values file

Here’s a complete example values file that shows common configuration options:

connectors-values.yaml
# Connection to Redpanda brokers
connectors:
  bootstrapServers: "redpanda-0.redpanda.redpanda.svc.cluster.local:9093"

# TLS configuration (disabled for local testing)
brokerTLS:
  enabled: false

# SASL authentication (disabled for local testing)
auth:
  sasl:
    enabled: false

# Resource configuration for local testing
container:
  resources:
    requests:
      cpu: "0.5"
      memory: 1Gi
    limits:
      cpu: "1"
      memory: 1Gi
    javaMaxHeapSize: 512M

# Two replicas to test horizontal scaling
deployment:
  replicas: 2

# Monitoring disabled for local testing
monitoring:
  enabled: false

# Logging
logging:
  level: "info"

Configure Redpanda Console to connect to Kafka Connect

To use Redpanda Console to manage your Kafka Connect deployment, you need to configure Redpanda Console to connect to the Kafka Connect service.

Check your Redpanda Console version

Redpanda Console configuration syntax varies by major version. Before configuring, determine which version you’re using:

# Check console version from deployment
kubectl get deployment -n <namespace> redpanda-console -o jsonpath='{.spec.template.spec.containers[0].image}'

# Or check from running pod
kubectl get pod -n <namespace> -l app.kubernetes.io/name=console -o jsonpath='{.items[0].spec.containers[0].image}'

# Or check from console logs
kubectl logs -n <namespace> -l app.kubernetes.io/name=console | grep "started Redpanda Console"

If you see output like redpandadata/console:v2.8.0, you’re using Console v2.x. If you see redpandadata/console:v3.0.0, you’re using Console v3.x.

Redpanda Console deployed as part of Redpanda chart

If the Redpanda Console is deployed as part of the Redpanda Helm chart (the default), add the following configuration to your Redpanda values:

Console v2.x:

console:
  enabled: true
  console:
    config:
      connect:
        enabled: true
        clusters:
          - name: "redpanda-connectors"
            url: "http://redpanda-connectors:8083"
            tls:
              enabled: false

Console v3.x:

console:
  enabled: true
  console:
    config:
      kafkaConnect:
        enabled: true
        clusters:
          - name: "redpanda-connectors"
            url: "http://redpanda-connectors:8083"
            tls:
              enabled: false
If you deployed the connectors chart with a different release name, update the URL accordingly. The URL follows the pattern http://<release-name>:8083.

If Redpanda Console is deployed in a different namespace than Kafka Connect, use the fully qualified service name:

Console v2.x:

console:
  enabled: true
  console:
    config:
      connect:
        enabled: true
        clusters:
          - name: "redpanda-connectors"
            url: "http://redpanda-connectors.<connectors-namespace>.svc.cluster.local:8083"
            tls:
              enabled: false

Console v3.x:

console:
  enabled: true
  console:
    config:
      kafkaConnect:
        enabled: true
        clusters:
          - name: "redpanda-connectors"
            url: "http://redpanda-connectors.<connectors-namespace>.svc.cluster.local:8083"
            tls:
              enabled: false

Update your Redpanda deployment:

helm upgrade redpanda redpanda/redpanda \
  --namespace <namespace> \
  --values redpanda-values.yaml

Troubleshooting Redpanda Console connectivity

If you see "Kafka Connect is not configured in Redpanda Console" or cannot access connectors:

  1. Ensure you’re using the correct configuration syntax for your Redpanda Console version (see Check your Redpanda Console version).

  2. Check if Redpanda Console can connect to Kafka Connect:

    kubectl logs -n <namespace> -l app.kubernetes.io/name=console --tail=20

    Look for these successful connection messages:

    "creating Kafka connect HTTP clients and testing connectivity to all clusters"
    "tested Kafka connect cluster connectivity","successful_clusters":1,"failed_clusters":0
    "successfully create Kafka connect service"
  3. Verify Redpanda Console can reach Kafka Connect service:

    kubectl exec -n <namespace> deployment/redpanda-console -- curl -s http://redpanda-connectors.<namespace>.svc.cluster.local:8083

    This should return Kafka Connect version information.

  4. Verify the connector service exists and is accessible:

    kubectl get svc -n <namespace> | grep connectors
  5. If configuration changes aren’t taking effect:

    kubectl delete pod -n <namespace> -l app.kubernetes.io/name=console

Verification test

After you’ve deployed the connectors chart, you can verify everything is working with this test:

  1. Get the connector Pod name:

    POD_NAME=$(kubectl get pod -l app.kubernetes.io/name=connectors --namespace <namespace> -o jsonpath='{.items[0].metadata.name}')
  2. Test basic connectivity:

    echo "Testing Kafka Connect REST API..."
    kubectl exec $POD_NAME --namespace <namespace> -- curl -s localhost:8083 | jq '.version'
  3. List available connector plugins:

    echo "Available connector plugins:"
    kubectl exec $POD_NAME --namespace <namespace> -- curl -s localhost:8083/connector-plugins | jq '.[].class'
  4. Test cluster connectivity:

    echo "Testing Redpanda cluster connectivity..."
    kubectl exec $POD_NAME --namespace <namespace> -- curl -s localhost:8083/connectors

If all commands completed without errors, Kafka Connect is working correctly.

If any command fails, refer to the Troubleshoot common issues section.

Configuration advice

This section provides advice for configuring the standalone connectors Helm chart. For all available settings, see Redpanda Connectors Helm Chart Specification.

Security configuration

This section covers security-related configuration for the connectors Helm chart.

Authentication

If your Redpanda cluster has SASL enabled, configure SASL authentication for secure communication with your Kafka connectors.

auth:
  sasl:
    enabled: true
    mechanism: "SCRAM-SHA-512"
    userName: "admin"
    secretRef: "sasl-password-secret"

For all available settings, see the Helm specification.
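The secretRef value names a Kubernetes Secret that holds the password for the SASL user. A minimal sketch of such a Secret, assuming the chart reads the password from a key named password (confirm the expected key name in the Helm specification):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: sasl-password-secret
type: Opaque
stringData:
  # Assumed key name; confirm against the connectors chart specification.
  password: <sasl-password>
```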

TLS configuration

If your Redpanda cluster has TLS enabled, configure TLS settings for secure communication:

brokerTLS:
  enabled: true
  ca:
    secretRef: "redpanda-default-cert"
    secretNameOverwrite: "ca.crt"

For all available settings, see the Helm specification.

Service account

Restricting permissions is a best practice. Assign a dedicated service account for each deployment or app.

serviceAccount:
  create: true
  name: "redpanda-connector-service-account"

For all available settings, see the Helm specification.

Scalability and reliability

This section covers configuration for scalable and reliable deployments.

Number of replicas

You can scale the Kafka Connect Pods by modifying the deployment.replicas parameter in the Helm values. This parameter allows you to handle varying workloads by increasing or decreasing the number of running instances.

deployment:
  replicas: 3

The replicas: 3 setting runs three instances of the Kafka Connect Pod. Adjust this number based on your workload.

Redpanda Data recommends using an autoscaler such as Keda to increase the number of Pod replicas automatically when certain conditions, such as high CPU or memory usage, are met.
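As a hedged sketch of that approach, the following Keda ScaledObject scales the Deployment between one and five replicas based on CPU utilization. It assumes Keda is installed in the cluster and that the Deployment is named redpanda-connectors (the release name used elsewhere in this topic):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: redpanda-connectors-scaler
spec:
  scaleTargetRef:
    # Assumed Deployment name; adjust to match your Helm release.
    name: redpanda-connectors
  minReplicaCount: 1
  maxReplicaCount: 5
  triggers:
    - type: cpu
      metricType: Utilization
      metadata:
        value: "80"
```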

Container resources

Specify resource requests and limits. Ensure that javaMaxHeapSize is not greater than container.resources.limits.memory.

container:
  resources:
    requests:
      cpu: 1
      memory: 1Gi
    limits:
      cpu: 2
      memory: 2Gi
    javaMaxHeapSize: 2G
  javaGCLogEnabled: false

For all available settings, see the Helm specification.

Deployment strategy

For smooth and uninterrupted updates, use the default RollingUpdate strategy. Additionally, set a budget to ensure a certain number of Pod replicas remain available during the update.

deployment:
  strategy:
    type: "RollingUpdate"
  updateStrategy:
    type: "RollingUpdate"
  budget:
    maxUnavailable: 1

For all available settings, see the Helm specification.

Affinity rules

Affinities control Pod placement in the cluster based on various conditions. Set these according to your high availability and infrastructure needs.

deployment:
  podAntiAffinity:
    topologyKey: kubernetes.io/hostname
    type: hard
    weight: 100
    custom:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: "app"
            operator: "In"
            values:
            - "redpanda-connector"
        topologyKey: "kubernetes.io/hostname"
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: "app"
              operator: "In"
              values:
              - "redpanda-connector"
          topologyKey: "kubernetes.io/zone"

In this example:

  • The requiredDuringSchedulingIgnoredDuringExecution section ensures that the Kubernetes scheduler doesn’t place two Pods with the same app: redpanda-connector label on the same node due to the topologyKey: kubernetes.io/hostname.

  • The preferredDuringSchedulingIgnoredDuringExecution section is a soft rule that tries to ensure the Kubernetes scheduler doesn’t place two Pods with the same app: redpanda-connector label in the same zone. However, if it’s not possible, the scheduler can still place the Pods in the same zone.

For all available settings, see the Helm specification.

Tolerations

Taints prevent Pods from being scheduled onto particular nodes, and tolerations allow Pods to be scheduled onto tainted nodes where they otherwise wouldn’t be. If you have nodes dedicated to Kafka Connect with a taint dedicated=redpanda-connectors:NoSchedule, the following toleration allows the Kafka Connect Pods to be scheduled on them.

tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "redpanda-connectors"
    effect: "NoSchedule"

For all available settings, see the Helm specification.

Node selection

Use node selectors to ensure connectors are scheduled on appropriate nodes and avoid scheduling on Redpanda broker nodes:

# Example: Schedule on nodes with specific labels
nodeSelector:
  workload-type: "kafka-connect"

# Or use node affinity for more complex selection
deployment:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: "kubernetes.io/hostname"
          operator: "NotIn"
          values: ["redpanda-node-1", "redpanda-node-2", "redpanda-node-3"]

For all available settings, see the Helm specification.

Graceful shutdown

If your connectors require additional time for a graceful shutdown, modify the terminationGracePeriodSeconds.

deployment:
  terminationGracePeriodSeconds: 30

For all available settings, see the Helm specification.

Monitoring and observability

This section covers monitoring, logging, and health check configuration.

Monitoring

If you have the Prometheus Operator, enable monitoring to deploy a PodMonitor resource for Kafka Connect.

monitoring:
  enabled: true

For all available settings, see the Helm specification.

Logging

Use the info logging level to avoid overwhelming the storage. For debugging purposes, temporarily change the logging level to debug.

logging:
  level: "info"

For all available settings, see the Helm specification.

Probes

Probes determine the health and readiness of your Pods. Configure them based on the startup behavior of your connectors.

deployment:
  livenessProbe:
    initialDelaySeconds: 60
    periodSeconds: 10
  readinessProbe:
    initialDelaySeconds: 30
    periodSeconds: 10

For all available settings, see the Helm specification.

Data management

This section covers configuration related to data handling and topic management.

Topics

Kafka Connect uses internal topics to track processed data, which makes it fault tolerant:

  • The offset topic logs the last position processed from the external data source.

  • After a failure or restart, the connector uses this logged position to resume operations, ensuring no data is duplicated or omitted.

connectors:
  storage:
    topic:
      offset: _internal_connectors_offsets

Here, _internal_connectors_offsets is the dedicated Kafka topic where Kafka Connect persists the offsets of the source connector.

For all available settings, see the Helm specification.

Producers

When a source connector retrieves data from an external system for Redpanda, it assumes the role of a producer:

  • The source connector is responsible for transforming the external data into Kafka-compatible messages.

  • It then produces (writes) these messages to a specified Kafka topic.

The producerBatchSize (in bytes) and producerLingerMS (in milliseconds) settings control how Kafka Connect batches messages before producing them.

connectors:
  producerBatchSize: 131072
  producerLingerMS: 1

For all available settings, see the Helm specification.

General configuration

This section covers other important configuration settings.

Name overrides

Deploying multiple instances of the same Helm chart in a Kubernetes cluster can lead to naming conflicts. Using nameOverride and fullnameOverride helps differentiate between them. If you have a production and staging environment, different names help to avoid confusion.

  • Use nameOverride to customize:

    • The default labels app.kubernetes.io/component=<nameOverride> and app.kubernetes.io/name=<nameOverride>

    • The suffix in the name of the resources redpanda-<nameOverride>

  • Use fullnameOverride to customize the full name of the resources such as the Deployment and Services.

nameOverride: 'redpanda-connector-production'
fullnameOverride: 'redpanda-connector-instance-prod'

For all available settings, see the Helm specification.

Labels

Kubernetes labels help you to organize, query, and manage your resources. Use labels to categorize Kubernetes resources in different deployments by environment, purpose, or team.

commonLabels:
  env: 'production'

For all available settings, see the Helm specification.

Docker image

You can specify the image tag to deploy a known version of the Docker image. Avoid using the latest tag, which can lead to unexpected changes.

If you’re using a private repository, always ensure your nodes have the necessary credentials to pull the image.

image:
  repository: "redpanda/connectors"
  tag: "1.2.3"

For all available settings, see the Helm specification.

Kafka Connect configuration

You can configure Kafka Connect connection settings.

Change the default REST API port only if it conflicts with an existing port.

The bootstrapServers setting should point to the Kafka API endpoints on your Redpanda brokers.

If you want to use Schema Registry, ensure the URL is set to the IP address or domain name of a Redpanda broker and that it includes the Schema Registry port.

connectors:
  restPort: 8082
  bootstrapServers: "redpanda-broker-0:9092"
  schemaRegistryURL: "http://schema-registry.default.svc.cluster.local:8081"

For all available settings, see the Helm specification.

Deployment history

Keeping track of your deployment’s history is beneficial for rollback scenarios. Adjust the revisionHistoryLimit according to your storage considerations.

deployment:
  progressDeadlineSeconds: 600
  revisionHistoryLimit: 10

For all available settings, see the Helm specification.

Verify the deployment

To verify that the deployment was successful, you can use the Kafka Connect REST API or check the deployment in Redpanda Console (if configured).

Verify with the Kafka Connect REST API

  1. Get the name of the Pod that’s running Kafka Connect:

    kubectl get pod -l app.kubernetes.io/name=connectors --namespace <namespace>

    Expected output should show pods in Running status:

    NAME                                   READY   STATUS    RESTARTS   AGE
    redpanda-connectors-6d64b948f6-dk484   1/1     Running   0          5m
  2. Check if the Kafka Connect service is accessible:

    kubectl get svc -l app.kubernetes.io/name=connectors --namespace <namespace>

    Expected output:

    NAME                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
    redpanda-connectors   ClusterIP   10.96.123.45    <none>        8083/TCP   5m
  3. View the version of Kafka Connect:

    kubectl exec <pod-name> --namespace <namespace> -- curl localhost:8083 | jq
    Example output
    {
      "version": "3.8.0",
      "commit": "771b9576b00ecf5b",
      "kafka_cluster_id": "redpanda.3e2649b0-f84c-4c03-b5e3-d6d1643f65b2"
    }
  4. View the list of available connectors:

    kubectl exec <pod-name> --namespace <namespace> -- curl localhost:8083/connector-plugins | jq
    Example output
    [
      {
        "class": "org.apache.kafka.connect.mirror.MirrorCheckpointConnector",
        "type": "source",
        "version": "3.8.0"
      },
      {
        "class": "org.apache.kafka.connect.mirror.MirrorHeartbeatConnector",
        "type": "source",
        "version": "3.8.0"
      },
      {
        "class": "org.apache.kafka.connect.mirror.MirrorSourceConnector",
        "type": "source",
        "version": "3.8.0"
      }
    ]
  5. Test connectivity to your Redpanda cluster:

    kubectl exec <pod-name> --namespace <namespace> -- curl localhost:8083/connectors

    This should return an empty array [] if no connectors are configured, indicating that Kafka Connect can communicate with your Redpanda cluster.

Troubleshoot common issues

If the deployment isn’t working as expected, check these common issues:

Pod not starting or crashing

  1. Check pod logs for error messages:

    kubectl logs -l app.kubernetes.io/name=connectors --namespace <namespace> --tail=50
  2. Check for resource constraints:

    kubectl describe pod -l app.kubernetes.io/name=connectors --namespace <namespace>

Common issues and solutions:

  • "You must set either bootstrap.servers or bootstrap.controllers": The connectors.bootstrapServers configuration is missing or incorrectly formatted.

  • OutOfMemoryError: Increase memory limits or reduce javaMaxHeapSize.

  • Connection refused to Redpanda brokers: Verify the bootstrap servers addresses and ensure Redpanda is running.

Testing network connectivity

  1. Test if the connector can reach Redpanda brokers:

    kubectl exec <pod-name> --namespace <namespace> -- nslookup <redpanda-service-name>
  2. Test port connectivity:

    kubectl exec <pod-name> --namespace <namespace> -- nc -zv <redpanda-broker> 9093

Verify with Redpanda Console

If you have Redpanda Console configured to connect to Kafka Connect:

  1. Access Console through port-forward:

    kubectl port-forward svc/redpanda-console 8080:8080 --namespace <namespace>
  2. Open http://localhost:8080 in your browser.

  3. Navigate to Connect.

If the Connectors page shows "No clusters configured" or connection errors, verify your Console configuration includes the correct Kafka Connect service URL.

Health checks and monitoring

The connectors chart includes built-in health checks that you can use to monitor the status:

  1. Liveness probe: Checks if Kafka Connect is responsive

    kubectl exec <pod-name> --namespace <namespace> -- curl -f localhost:8083/
  2. Readiness probe: Checks if Kafka Connect is ready to accept requests

    kubectl exec <pod-name> --namespace <namespace> -- curl -f localhost:8083/connectors
  3. View connector worker information:

    kubectl exec <pod-name> --namespace <namespace> -- curl localhost:8083/admin/workers | jq
  4. Check cluster information:

    kubectl exec <pod-name> --namespace <namespace> -- curl localhost:8083/admin/cluster | jq

These endpoints help you verify that Kafka Connect is not only running but also properly connected to your Redpanda cluster and ready to manage connectors.

Install a new connector

To install new connectors other than the ones included in the Redpanda Connectors Docker image, you must:

  1. Prepare a JAR (Java archive) file for the connector.

  2. Copy the JAR file into the plugin directory of the Redpanda Connectors Docker image.

  3. Use that Docker image in the Helm chart.

Prepare a JAR file

Kafka Connect is written in Java. As such, connectors are also written in Java and packaged into JAR files. JAR files are used to distribute Java classes and associated metadata and resources in a single file. You can get JAR files for connectors in many ways, including:

  • Build from source: If you have the source code for a Java project, you can compile and package it into a JAR using build tools, such as:

    • Maven: Using the mvn package command.

    • Gradle: Using the gradle jar or gradle build command.

    • Java Development Kit (JDK): Using the jar command-line tool that comes with the JDK.

  • Maven Central Repository: If you’re looking for a specific Java library or framework, it may be available in the Maven Central Repository. From here, you can search for the library and download the JAR directly.

  • Vendor websites: If you are looking for commercial Java software or libraries, the vendor’s official website is a good place to check.

To avoid security risks, always verify the source of the JAR files. Do not download JAR files from unknown websites. Malicious JAR files can present a security risk to your execution environment.

Add the connector to the Docker image

The Redpanda Connectors Docker image is configured to find connectors in the /opt/kafka/connect-plugins directory. You must copy your connector’s JAR file into this directory in the Docker image.

  1. Create a new Dockerfile:

    Dockerfile
    FROM redpandadata/connectors:<version>
    
    COPY <path-to-jar-file> /opt/kafka/connect-plugins/<connector-name>/<jar-filename>

    Replace the following placeholders:

    • <version>: The version of the Redpanda Connectors Docker image that you want to use. For all available versions, see DockerHub.

    • <path-to-jar-file>: The path to the JAR file on your local system.

    • <connector-name>: A unique directory name in which to mount your JAR files.

    • <jar-filename>: The name of your JAR file, including the .jar file extension.

  2. Change into the directory where you created the Dockerfile and run:

    docker build -t <repo>/connectors:<version> .
    • Replace <repo> with the name of your Docker repository and <version> with your desired version or tag for the image.

  3. Push the image to your Docker repository:

    docker push <repo>/connectors:<version>

Deploy the Helm chart with your custom Docker image

  1. Modify your values file to use your new Docker image:

    image:
      repository: <repo>/connectors
      tag: <version>
      pullPolicy: IfNotPresent

    Kafka Connect should discover the new connector automatically on startup.

  2. Update your deployment:

    helm upgrade redpanda-connectors redpanda/connectors \
      --namespace <namespace> \
      --values connectors-values.yaml
  3. Get the name of the Pod that’s running Kafka Connect:

    kubectl get pod -l app.kubernetes.io/name=connectors --namespace <namespace>
  4. View all available connectors:

    kubectl exec <pod-name> --namespace <namespace> -- curl localhost:8083/connector-plugins | jq

You should see your new connector in the list.