ListKafkaConnections

POST /redpanda.core.admin.v2.ClusterService/ListKafkaConnections

Returns information about the cluster's Kafka connections, collected and ordered across all brokers.

Headers

  • Connect-Protocol-Version number Required

    Specifies the version of the Connect protocol.

    Value is 1.

  • Connect-Timeout-Ms number

    Specifies the request timeout, in milliseconds.

application/json

Body Required

  • filter string

    Filter expression to apply to the connection list; a complete request sketch follows the body parameters below. Uses a subset of AIP-160 filter syntax supporting:

    • Field comparisons (=, !=, <, >, <=, >=)
    • Logical AND chaining: condition1 AND condition2
    • Nested field access: parent.child = value
    • Escape sequences: field = "string with \"quotes\""
    • Enum types
    • RFC 3339 timestamps and ISO-like durations

    Limitations (not supported):

    • Logical OR chaining
    • Parentheses ( ) for grouping
    • Map and repeated types
    • HAS (:) operator
    • Negation (-, NOT)
    • Bare literal matching

    Example filters:

    • state = KAFKA_CONNECTION_STATE_OPEN
    • idle_duration > 30s AND total_request_statistics.request_count > 100
    • authentication_info.user_principal = "my-producer"
    • recent_request_statistics.produce_bytes > 1000 AND client_software_name = "kgo"
    • open_time >= 2025-09-01T10:22:54Z

    Reference: https://google.aip.dev/160

  • orderBy string

    Field-based ordering specification following AIP-132 syntax. Supports multiple fields with asc/desc direction indicators. Examples:

    • idle_duration desc - longest idle connections first
    • open_time desc, total_request_statistics.request_count desc - newest connections first, then most active
    • recent_request_statistics.produce_bytes desc - connections with highest current produce throughput first

    Reference: https://google.aip.dev/132#ordering

  • pageSize integer(int32)

    The maximum number of connections to return. If unspecified or 0, a default value may be applied. Note that paging is currently not fully supported, and this field only acts as a limit for the first page of data returned. Subsequent pages of data cannot be requested.
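
    For example, a request that returns only open connections, ordered with the longest-idle connections first and limited to the first 50 results, could look like the following sketch. The host and port are taken from the sample request at the end of this page, and the limit of 50 is arbitrary; adjust both for your deployment.

    curl \
     --request POST 'http://localhost:9644/redpanda.core.admin.v2.ClusterService/ListKafkaConnections' \
     --header "Content-Type: application/json" \
     --header "Connect-Protocol-Version: 1" \
     --data '{"filter":"state = KAFKA_CONNECTION_STATE_OPEN","orderBy":"idle_duration desc","pageSize":50}'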

Responses

  • 200 application/json

    Success

    Response attributes:
    • connections array[object]

      The list of connections matching the request. Note that in addition to open connections, some recently-closed connections may also be included here. If you don't want to include closed connections, set the filter in the request to state = KAFKA_CONNECTION_STATE_OPEN.

      Kafka connection details for a broker

      connections attributes:
      • apiVersions object

        This map records, for each Kafka API, the highest version number observed in requests on this connection. It can be useful for understanding which protocol versions a client supports or has negotiated with the broker. Only APIs that were actually used (i.e. at least one request was seen) are included.

        Example: { 0: 11, 1: 13 } means that for API key 0 (Produce), version 11 was the highest seen, and for API key 1 (Fetch), version 13 was the highest seen.

        apiVersions attributes:
        • * integer(int32) Additional properties
      • authenticationInfo object

        Authentication details for this connection.

        Additional properties are NOT allowed.

        authenticationInfo attributes:
        • mechanism string

          Values are AUTHENTICATION_MECHANISM_UNSPECIFIED, AUTHENTICATION_MECHANISM_MTLS, AUTHENTICATION_MECHANISM_SASL_SCRAM, AUTHENTICATION_MECHANISM_SASL_OAUTHBEARER, AUTHENTICATION_MECHANISM_SASL_PLAIN, or AUTHENTICATION_MECHANISM_SASL_GSSAPI.

        • state string

          Values are AUTHENTICATION_STATE_UNSPECIFIED, AUTHENTICATION_STATE_UNAUTHENTICATED, AUTHENTICATION_STATE_SUCCESS, or AUTHENTICATION_STATE_FAILURE.

        • userPrincipal string

          Authenticated user principal

      • clientId string

        Client identifier included in every request sent by the Kafka client. This is typically a configurable property (client.id) set by the application when creating a producer or consumer, and is often used for metrics, quotas, and debugging.

      • clientSoftwareName string

        Name of the client library, reported automatically in ApiVersions v3+ requests. This is set by the client implementation and is not typically configurable by applications.

      • clientSoftwareVersion string

        Version of the client library, reported automatically in ApiVersions v3+ requests. Like client_software_name, this is set by the client and not usually configurable by applications.

      • closeTime string(date-time)

        Time at which the connection was closed. Encoded as an RFC 3339 UTC timestamp string with optional fractional seconds up to nanosecond resolution, for example "2017-01-15T01:30:15.01Z".

      • groupId string

        Most recent group ID seen in requests sent over this connection. This typically refers to a consumer group, but the Kafka group protocol is more general and may also be used by other types of clients that coordinate membership and assignments through the broker.

      • groupInstanceId string

        Most recent group instance ID seen in requests sent over this connection. This is used when static membership is enabled, allowing a specific client instance to retain its group membership across restarts.

      • groupMemberId string

        Most recent group member ID seen in requests sent over this connection. This is the unique identifier assigned by the broker to a particular member of the group.

      • idleDuration string(duration)

        Duration for which the connection has been idle. Encoded as a string containing a decimal number of seconds with up to nanosecond precision, ending in the suffix "s"; for example, "3s", "30.5s", or "3.000000001s".

      • inFlightRequests object

        Additional properties are NOT allowed.

        inFlightRequests attributes:
        • hasMoreRequests boolean

          Whether there are more in-flight requests than those in sampled_in_flight_requests.

        • sampledInFlightRequests array[object]

          A sample (e.g., the 5 latest) of the currently in-flight requests

          sampledInFlightRequests attributes:
          • apiKey integer(int32)

            API key identifying the request type (for example, Produce, Fetch, or Metadata). See https://kafka.apache.org/0101/protocol.html#protocol_api_keys

          • inFlightDuration string(duration)

            How long this request has been in flight. Encoded as a string containing a decimal number of seconds ending in the suffix "s", for example "0.25s".

      • listenerName string

        Name of the Kafka listener that accepted this connection. A listener is a named broker endpoint (for example, "internal", "external", or "sasl_tls"). Each listener defines its network address and enforces its protocol and authentication policy.

      • nodeId integer(int32)

        Broker node ID

      • openTime string(date-time)

        Time at which the connection was opened. Encoded as an RFC 3339 UTC timestamp string with optional fractional seconds, for example "2017-01-15T01:30:15.01Z".

      • recentRequestStatistics object

        Additional properties are NOT allowed.

        recentRequestStatistics attributes:
        • fetchBytes integer | string

          Sum of bytes fetched.

        • produceBatchCount integer | string

          Number of produced batches. Average batch size = produce_bytes / produce_batch_count

        • produceBytes integer | string

          Sum of bytes produced.

        • requestCount integer | string

          Number of requests the client has made.

      • shardId integer

        Broker shard that handles the connection

      • source object

        Additional properties are NOT allowed.

        source attributes:
        • ipAddress string
        • port integer
      • state string

        State of the Kafka connection.

        Values are KAFKA_CONNECTION_STATE_UNSPECIFIED, KAFKA_CONNECTION_STATE_OPEN, KAFKA_CONNECTION_STATE_ABORTING, or KAFKA_CONNECTION_STATE_CLOSED.

      • tlsInfo object

        Additional properties are NOT allowed.

        tlsInfo attribute:
        • enabled boolean

          Whether TLS is in use

      • totalRequestStatistics object

        Additional properties are NOT allowed.

        totalRequestStatistics attributes:
        • fetchBytes integer | string

          Sum of bytes fetched.

        • produceBatchCount integer | string

          Number of produced batches. Average batch size = produce_bytes / produce_batch_count

        • produceBytes integer | string

          Sum of bytes produced.

        • requestCount integer | string

          Number of requests the client has made.

      • transactionalId string

        Most recent transactional ID seen in requests sent over this connection

      • uid string

        Kafka connection UUID

    • totalSize integer | string

      Total number of connections matching the request. This may be greater than the number of entries in connections if some connections were omitted due to the specified (or default) page_size. For example, if request.page_size = 10 and 13 connections match, the response contains 10 connections and total_size = 13.

  • default application/json

    Error

    Response attributes:
    • code string

      The status code, which should be an enum value of google.rpc.Code.

      Values are canceled, unknown, invalid_argument, deadline_exceeded, not_found, already_exists, permission_denied, resource_exhausted, failed_precondition, aborted, out_of_range, unimplemented, internal, unavailable, data_loss, or unauthenticated.

    • details array[object]

      A list of messages that carry the error details. There is no limit on the number of messages.

      Contains an arbitrary serialized message along with a type URL that describes the type of the serialized message, with an additional debug field for ConnectRPC error details.

      details attributes:
      • debug object

        Detailed error information.

        Additional properties are allowed.

      • type string

        A URL that acts as a globally unique identifier for the type of the serialized message. For example: type.googleapis.com/google.rpc.ErrorInfo. This is used to determine the schema of the data in the value field and is the discriminator for the debug field.

      • value string(binary)

        The Protobuf message, serialized as bytes and base64-encoded. The specific message type is identified by the type field. A decoding sketch follows the response descriptions below.

    • message string

      A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
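
    To inspect an error detail entry, the base64-encoded value can be decoded and dumped without its schema. This is a rough sketch, assuming protoc is installed and that DETAIL_VALUE holds the value field of one detail entry (both assumptions, not part of this API):

    # Decode one error detail payload; protoc --decode_raw prints raw field numbers and values.
    # On some platforms the base64 decode flag is -D rather than -d.
    echo "$DETAIL_VALUE" | base64 -d | protoc --decode_raw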

POST /redpanda.core.admin.v2.ClusterService/ListKafkaConnections
curl \
 --request POST 'http://localhost:9644/redpanda.core.admin.v2.ClusterService/ListKafkaConnections' \
 --header "Content-Type: application/json" \
 --header "Connect-Protocol-Version: 1" \
 --header "Connect-Timeout-Ms: 42.0" \
 --data '{"filter":"string","orderBy":"string","pageSize":42}'
Request examples
# Headers
Connect-Protocol-Version: 1
Connect-Timeout-Ms: 42.0

# Payload
{
  "filter": "string",
  "orderBy": "string",
  "pageSize": 42
}
Response examples (200)
{
  "connections": [
    {
      "apiVersions": {
        "additionalProperty1": 42,
        "additionalProperty2": 42
      },
      "authenticationInfo": {
        "mechanism": "AUTHENTICATION_MECHANISM_UNSPECIFIED",
        "state": "AUTHENTICATION_STATE_UNSPECIFIED",
        "userPrincipal": "string"
      },
      "clientId": "string",
      "clientSoftwareName": "string",
      "clientSoftwareVersion": "string",
      "closeTime": "2023-01-15T01:30:15.01Z",
      "groupId": "string",
      "groupInstanceId": "string",
      "groupMemberId": "string",
      "idleDuration": "string",
      "inFlightRequests": {
        "hasMoreRequests": true,
        "sampledInFlightRequests": [
          {
            "apiKey": 42,
            "inFlightDuration": "string"
          }
        ]
      },
      "listenerName": "string",
      "nodeId": 42,
      "openTime": "2023-01-15T01:30:15.01Z",
      "recentRequestStatistics": {
        "fetchBytes": 42,
        "produceBatchCount": 42,
        "produceBytes": 42,
        "requestCount": 42
      },
      "shardId": 42,
      "source": {
        "ipAddress": "string",
        "port": 42
      },
      "state": "KAFKA_CONNECTION_STATE_UNSPECIFIED",
      "tlsInfo": {
        "enabled": true
      },
      "totalRequestStatistics": {
        "fetchBytes": 42,
        "produceBatchCount": 42,
        "produceBytes": 42,
        "requestCount": 42
      },
      "transactionalId": "string",
      "uid": "string"
    }
  ],
  "totalSize": 42
}
Response examples (default)
{
  "code": "not_found",
  "details": [
    {
      "debug": {},
      "type": "string",
      "value": "@file"
    }
  ],
  "message": "string"
}
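
As a rough sketch of consuming the response from the command line, assuming curl and jq are available and using the host, port, and field names shown in the samples above, the following pipeline lists open connections ordered by total request count and prints each connection's user principal, client software name, and request count:

curl -s \
 --request POST 'http://localhost:9644/redpanda.core.admin.v2.ClusterService/ListKafkaConnections' \
 --header "Content-Type: application/json" \
 --header "Connect-Protocol-Version: 1" \
 --data '{"filter":"state = KAFKA_CONNECTION_STATE_OPEN","orderBy":"total_request_statistics.request_count desc"}' \
 | jq -r '.connections[] | [.authenticationInfo.userPrincipal, .clientSoftwareName, .totalRequestStatistics.requestCount] | @tsv'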