Configure GCP Private Service Connect with the Cloud API

To unlock this feature for your account, contact Redpanda support.

This guide shows you how to configure GCP Private Service Connect using the Redpanda Cloud API. To set up the endpoint service in the Redpanda Cloud UI instead, see Configure Private Service Connect in the Cloud UI.

The Redpanda GCP Private Service Connect service provides secure access to Redpanda Cloud from your own VPC. Traffic over Private Service Connect does not go through the public internet because a Private Service Connect connection is treated as its own private GCP service. While your VPC has access to the Redpanda VPC, Redpanda cannot access your VPC.

Consider using Private Service Connect if you have multiple VPCs and want a simpler approach to network management.

  • Private Service Connect allows overlapping CIDR ranges in VPC networks.

  • Private Service Connect does not limit the number of connections.

  • You control from which GCP projects connections are allowed.

Requirements

  • In this guide, you use the Redpanda Cloud API to enable the Redpanda endpoint service for your clusters. Follow the steps in Get a Cloud API access token, below, to authenticate your requests.

  • Use gcloud to create the consumer-side resources, such as a VPC and forwarding rule, or modify existing resources to use the Private Service Connect service attachment created for your cluster.

Get a Cloud API access token

  1. Save the base URL of the Redpanda Cloud API in an environment variable:

    export PUBLIC_API_ENDPOINT="https://api.cloud.redpanda.com"
  2. In your organization in the Redpanda Cloud UI, go to Clients. If you don’t have an existing client (also known as a service account), create one by clicking Add client.

    Copy and store the client ID and secret.

    export CLOUD_CLIENT_ID=<client-id>
    export CLOUD_CLIENT_SECRET=<client-secret>
  3. Get an API token using the client ID and secret. You can click the Request an API token link to see code examples to generate the token.

    export AUTH_TOKEN=`curl -s --request POST \
        --url 'https://auth.prd.cloud.redpanda.com/oauth/token' \
        --header 'content-type: application/x-www-form-urlencoded' \
        --data grant_type=client_credentials \
        --data client_id="$CLOUD_CLIENT_ID" \
        --data client_secret="$CLOUD_CLIENT_SECRET" \
        --data audience=cloudv2-production.redpanda.cloud | jq -r .access_token`

You must send the API token in the Authorization header when making requests to the Cloud API.
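
For example, to confirm that the token works, you can list the clusters in your organization. This is a quick sanity check, assuming your client has permission to read clusters:

curl -s -H "Authorization: Bearer $AUTH_TOKEN" \
    "$PUBLIC_API_ENDPOINT/v1beta2/clusters" | jq .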

Configure BYOC with customer-managed resources

For BYOC clusters with a customer-managed VPC, you need a NAT subnet with the purpose set to PRIVATE_SERVICE_CONNECT. You can create the subnet using the gcloud command-line interface (CLI):

gcloud compute networks subnets create <subnet-name> \
    --project=<project> \
    --network=<network-name> \
    --region=<region> \
    --range=<subnet-range> \
    --purpose=PRIVATE_SERVICE_CONNECT

Provide your values for the following placeholders:

  • <subnet-name>: The name of the NAT subnet.

  • <project>: The host GCP project ID.

  • <network-name>: The name of the VPC being used for your Redpanda Cloud cluster.

  • <region>: The region of the Redpanda Cloud cluster.

  • <subnet-range>: The CIDR range of the subnet. The subnet must be at least /29. Each Private Service Connect connection consumes one IP address from the NAT subnet, so the range must be large enough to accommodate all projects from which connections to the service attachment will be made.

See the GCP documentation for creating a subnet for Private Service Connect.
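
For example, a filled-in version of the command with illustrative values (the names and range below are placeholders, not recommendations):

gcloud compute networks subnets create redpanda-psc-nat \
    --project=my-host-project \
    --network=redpanda-vpc \
    --region=us-central1 \
    --range=10.100.0.0/28 \
    --purpose=PRIVATE_SERVICE_CONNECT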

Create a new BYOC cluster with Private Service Connect enabled

  1. In the Redpanda Cloud UI, go to Resource groups and select the resource group in which you want to create a cluster.

    Copy and store the resource group ID (UUID) from the URL in the browser.

    export RESOURCE_GROUP_ID=<uuid>
  2. Update the Redpanda Cloud Agent IAM role.

    To allow the agent to create and manage Private Service Connect resources, add the following permissions to its IAM role:

    compute.forwardingRules.use
    compute.regionOperations.get
    compute.serviceAttachments.create
    compute.serviceAttachments.delete
    compute.serviceAttachments.get
    compute.serviceAttachments.list
    compute.serviceAttachments.update
    compute.subnetworks.use
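
    If the agent uses a custom IAM role, one way to add these permissions is with gcloud. This is a sketch: <agent-role-id> and <project> are placeholders for your custom role ID and host project.

    gcloud iam roles update <agent-role-id> \
        --project=<project> \
        --add-permissions=compute.forwardingRules.use,compute.regionOperations.get,compute.serviceAttachments.create,compute.serviceAttachments.delete,compute.serviceAttachments.get,compute.serviceAttachments.list,compute.serviceAttachments.update,compute.subnetworks.use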
  3. Make a request to the POST /v1beta2/networks endpoint to create a network:

    network_post_body=`cat << EOF
    {
        "cloud_provider": "CLOUD_PROVIDER_GCP",
        "cluster_type": "TYPE_BYOC",
        "name": "<network-name>",
        "resource_group_id": "$RESOURCE_GROUP_ID",
        "region": "<region>",
        "customer_managed_resources": {
            "gcp": {
                "network_name": "<byovpc-network-name>",
                "network_project_id": "<byovpc-network-gcp-project-id>",
                "management_bucket": { "name" : "<byovpc-management-bucket>" }
            }
        }
    }
    EOF`
    
    curl -vv -X POST \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $AUTH_TOKEN" \
    -d "$network_post_body" $PUBLIC_API_ENDPOINT/v1beta2/networks

    Replace the following placeholder variables for the request body:

    • <network-name>: Provide a name for the network. The name is used to identify this network in the Cloud UI.

    • <region>: Choose a GCP region where the network will be created.

    • <byovpc-network-gcp-project-id>: The ID of the GCP project where your VPC is created.

    • <byovpc-network-name>: The name of your VPC.

    • <byovpc-management-bucket>: The name of the Google Cloud Storage bucket you created for the cluster.

  4. Store the network ID (metadata.network_id) returned in the response to the Create Network request.

    export NETWORK_ID=<metadata.network_id>
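
    Alternatively, you can capture the response and extract the ID with jq. This is a sketch, assuming the ID appears at .metadata.network_id as described above:

    network_response=`curl -s -X POST \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $AUTH_TOKEN" \
    -d "$network_post_body" $PUBLIC_API_ENDPOINT/v1beta2/networks`
    export NETWORK_ID=`echo "$network_response" | jq -r .metadata.network_id`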
  5. Make a request to the POST /v1beta2/clusters endpoint to create a Redpanda Cloud cluster with Private Service Connect enabled:

    export CLUSTER_POST_BODY=`cat << EOF
    {
        "cloud_provider": "CLOUD_PROVIDER_GCP",
        "connection_type": "CONNECTION_TYPE_PRIVATE",
        "type": "TYPE_BYOC",
        "name": "<cluster-name>",
        "resource_group_id": "$RESOURCE_GROUP_ID",
        "network_id": "$NETWORK_ID",
        "region": "<region>",
        "zones": <zones>,
        "throughput_tier": "<throughput-tier>",
        "redpanda_version": "<redpanda-version>",
        "gcp_private_service_connect": {
            "enabled": true,
            "consumer_accept_list": <consumer-accept-list>
        },
        "customer_managed_resources": {
            "gcp": {
                "subnet": {
                    "name":"<byovpc-subnet-name>",
                    "secondary_ipv4_range_pods": { "name": "<byovpc-subnet-pods-range-name>" },
                    "secondary_ipv4_range_services": { "name": "<byovpc-subnet-services-range-name>" },
                    "k8s_master_ipv4_range": "<byovpc-subnet-master-range>"
                },
                "psc_nat_subnet_name": "<byovpc-psc-nat-subnet-name>"
                "agent_service_account": { "email": "<byovpc-agent-service-acc-email>" },
                "connector_service_account": { "email": "<byovpc-connectors-service-acc-email>" },
                "console_service_account": { "email": "<byovpc-console-service-acc-email>" },
                "redpanda_cluster_service_account": { "email": "<byovpc-redpanda-service-acc-email>" },
                "gke_service_account": { "email": "<byovpc-gke-service-acc-email>" },
                "tiered_storage_bucket": { "name" : "<byovpc-tiered-storage-bucket>" },
            }
        }
    }
    EOF`
    
    curl -vv -X POST \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $AUTH_TOKEN" \
    -d "$CLUSTER_POST_BODY" $PUBLIC_API_ENDPOINT/v1beta2/clusters

    Replace the following placeholders for the request body. Placeholders with a byovpc- prefix represent customer-managed resources that you should have created previously:

    • <cluster-name>: Provide a name for the new cluster.

    • <region>: Choose the GCP region where the cluster will be created. It must match the network's region.

    • <zones>: Provide the list of GCP zones where the brokers will be deployed. Format: ["<zone 1>", "<zone 2>", "<zone N>"]

    • <throughput-tier>: Choose a Redpanda Cloud cluster tier. For example, tier-1-gcp-v2-x86.

    • <redpanda-version>: Choose the Redpanda Cloud version.

    • <consumer-accept-list>: The list of IDs of GCP projects from which Private Service Connect connection requests are accepted. Format: [{"source": "<GCP-project-ID-1>"}, {"source": "<GCP-project-ID-2>"}, {"source": "<GCP-project-ID-N>"}]

    • <byovpc-subnet-name>: The name of the GCP subnet that was created for the cluster.

    • <byovpc-subnet-pods-range-name>: The name of the IPv4 range designated for K8s pods.

    • <byovpc-subnet-services-range-name>: The name of the IPv4 range designated for services.

    • <byovpc-subnet-master-range>: The IPv4 range for the GKE control plane (master).

    • <byovpc-psc-nat-subnet-name>: The name of the GCP subnet that was created for Private Service Connect NAT.

    • <byovpc-agent-service-acc-email>: The email for the agent service account.

    • <byovpc-connectors-service-acc-email>: The email for the connectors service account.

    • <byovpc-console-service-acc-email>: The email for the console service account.

    • <byovpc-redpanda-service-acc-email>: The email for the Redpanda service account.

    • <byovpc-gke-service-acc-email>: The email for the GKE service account.

    • <byovpc-tiered-storage-bucket>: The name of the Google Cloud Storage bucket to use for Tiered Storage.
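
Cluster creation is a long-running operation. If you want to poll its progress from the command line, a sketch follows; <operation-id> and the .operation.state path are assumptions based on the shape of the create response, so adjust them to match what the API actually returns:

export OPERATION_ID=<operation-id>
curl -s -H "Authorization: Bearer $AUTH_TOKEN" \
    "$PUBLIC_API_ENDPOINT/v1beta2/operations/$OPERATION_ID" | jq -r .operation.state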

Enable Private Service Connect on an existing BYOC cluster

  1. In the Redpanda Cloud UI, go to the cluster overview and copy the cluster ID from the Details section.

    export CLUSTER_ID=<cluster-id>
  2. Update the Redpanda Cloud Agent IAM role. This step is required only for clusters with customer-managed resources.

    To allow the agent to create and manage the service attachment, add the following permissions to its IAM role:

    compute.forwardingRules.use
    compute.regionOperations.get
    compute.serviceAttachments.create
    compute.serviceAttachments.delete
    compute.serviceAttachments.get
    compute.serviceAttachments.list
    compute.serviceAttachments.update
    compute.subnetworks.use
  3. Make a request to the PATCH /v1beta2/clusters/{cluster.id} endpoint to update the cluster to include the newly created Private Service Connect NAT subnet:

    export PSC_NAT_SUBNET_NAME='<psc-nat-subnet-name>'
    export CLUSTER_PATCH_BODY=`cat << EOF
    {
        "customer_managed_resources": {
            "gcp": {
                "psc_nat_subnet_name": "$PSC_NAT_SUBNET_NAME"
            }
        }
    }
    EOF`
    curl -v -X PATCH \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $AUTH_TOKEN" \
    -d "$CLUSTER_PATCH_BODY" $PUBLIC_API_ENDPOINT/v1beta2/clusters/$CLUSTER_ID

    Replace the following placeholder:

    <psc-nat-subnet-name>: The name of the Private Service Connect NAT subnet. Use the fully qualified name, for example "projects/<project>/regions/<region>/subnetworks/<subnet-name>".

  4. Make a PATCH /v1beta2/clusters/{cluster.id} request to update the cluster to enable Private Service Connect.

    export CLUSTER_PATCH_BODY=`cat << EOF
    {
        "gcp_private_service_connect": {
            "enabled": true,
            "consumer_accept_list": <accept-list>
        }
    }
    EOF`
    curl -v -X PATCH \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer $AUTH_TOKEN" \
    -d "$CLUSTER_PATCH_BODY" $PUBLIC_API_ENDPOINT/v1beta2/clusters/$CLUSTER_ID

    Replace the following placeholder:

    <accept-list>: A JSON list specifying the GCP projects from which incoming connections are accepted; connections from all other sources are rejected. For example, [{"source": "consumer-project-ID-1"},{"source": "consumer-project-ID-2"}].

    Wait for the cluster to apply the new configuration (around 15 minutes). The Private Service Connect service attachment is available when the cluster update is complete. You can monitor the service attachment creation by running the following gcloud command and supplying the project ID:

    gcloud compute service-attachments list --project '<service-project-id>'
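
When the service attachment appears, create the consumer-side endpoint in your own VPC. The following is a minimal gcloud sketch: the address, endpoint, network, and subnet names are placeholders you choose, and <service-attachment-uri> is the URI returned by the list command above.

gcloud compute addresses create <psc-address-name> \
    --project=<consumer-project-id> \
    --region=<region> \
    --subnet=<consumer-subnet>

gcloud compute forwarding-rules create <psc-endpoint-name> \
    --project=<consumer-project-id> \
    --region=<region> \
    --network=<consumer-network> \
    --address=<psc-address-name> \
    --target-service-attachment=<service-attachment-uri>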

Access Redpanda services through VPC endpoint

After you have enabled Private Service Connect for your cluster, your connection URLs are available in the How to Connect section of the cluster overview in the Redpanda Cloud UI.

You can access Redpanda services such as Schema Registry and HTTP Proxy from the client VPC or virtual network; for example, from a compute instance in the VPC or network.

The bootstrap server hostname is unique to each cluster. The service attachment exposes a set of bootstrap ports for access to Redpanda services. These ports load balance requests among brokers. Make sure you use the following ports for initiating a connection from a consumer:

Redpanda service    Default bootstrap port
Kafka API           30292
HTTP Proxy          30282
Schema Registry     30081

Access Kafka API seed service

Use port 30292 to access the Kafka API seed service.

export REDPANDA_BROKERS='<kafka-api-bootstrap-server-hostname>:30292'
rpk cluster info -X tls.enabled=true -X user=<user> -X pass=<password>

When successful, the rpk output should look like the following:

CLUSTER
=======
redpanda.rp-cki01qgth38kk81ard3g

BROKERS
=======
ID    HOST                                                                PORT   RACK
0*    0-3da65a4a-0532364.cki01qgth38kk81ard3g.fmc.dev.cloud.redpanda.com  32092  use2-az1
1     1-3da65a4a-63b320c.cki01qgth38kk81ard3g.fmc.dev.cloud.redpanda.com  32093  use2-az1
2     2-3da65a4a-36068dc.cki01qgth38kk81ard3g.fmc.dev.cloud.redpanda.com  32094  use2-az1

Access Schema Registry seed service

Use port 30081 to access the Schema Registry seed service.

curl -vv -u <user>:<password> -H "Content-Type: application/vnd.schemaregistry.v1+json" --sslv2 --http2 <schema-registry-bootstrap-server-hostname>:30081/subjects

Access HTTP Proxy seed service

Use port 30282 to access the Redpanda HTTP Proxy seed service.

curl -vv -u <user>:<password> -H "Content-Type: application/vnd.kafka.json.v2+json" https://<http-proxy-bootstrap-server-hostname>:30282/topics

Test the connection

You can test the Private Service Connect connection from any VM or container in the consumer VPC. If a full client isn’t available yet, you can run these checks using rpk or curl:

  1. Set the following environment variables.

    export RPK_BROKERS='<kafka-api-bootstrap-server-hostname>:30292'
    export RPK_TLS_ENABLED=true
    export RPK_SASL_MECHANISM="<SCRAM-SHA-256 or SCRAM-SHA-512>"
    export RPK_USER=<user>
    export RPK_PASS=<password>
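
    With these variables set, you can verify connectivity before creating any topics:

    rpk cluster info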
  2. Create a test topic.

    rpk topic create test-topic
  3. Produce to the test topic.

    Using rpk:

    echo 'hello world' | rpk topic produce test-topic

    Using curl with the HTTP Proxy:

    curl -s \
      -u <user>:<password> \
      -X POST \
      "<http-proxy-bootstrap-server-url>/topics/test-topic" \
      -H "Content-Type: application/vnd.kafka.json.v2+json" \
      -d '{
      "records":[
          {
              "value":"hello world"
          }
      ]
    }'
  4. Consume from the test topic.

    Using rpk:

    rpk topic consume test-topic -n 1

    Using curl:

    curl -s \
      -u <user>:<password> \
      "<http-proxy-bootstrap-server-url>/topics/test-topic/partitions/0/records?offset=0&timeout=1000&max_bytes=100000" \
      -H "Accept: application/vnd.kafka.json.v2+json"