# Outputs

An output is a sink where we wish to send our consumed data after applying an optional array of processors. Only one output is configured at the root of a Redpanda Connect config. However, the output can be a broker which combines multiple outputs under a chosen brokering pattern, or a switch which is used to multiplex against different outputs.

An output config section looks like this:

```yaml
output:
  label: my_s3_output

  aws_s3:
    bucket: TODO
    path: '${! meta("kafka_topic") }/${! json("message.id") }.json'

  # Optional list of processing steps
  processors:
    - mapping: '{"message":this,"meta":{"link_count":this.links.length()}}'
```

## Back pressure

Redpanda Connect outputs apply back pressure to components upstream. This means that if your output target starts blocking traffic, Redpanda Connect will gracefully stop consuming until the issue is resolved.

## Retries

When a Redpanda Connect output fails to send a message the error is propagated back up to the input, where, depending on the protocol, it will either be pushed back to the source as a Noack (e.g. AMQP) or will be reattempted indefinitely with the commit withheld until success (e.g. Kafka).

It’s possible to instead have Redpanda Connect indefinitely retry an output until success with a retry output. Some other outputs, such as the broker, might also retry indefinitely depending on their configuration.

## Dead letter queues

It’s possible to create fallback outputs for when an output target fails using a fallback output:

```yaml
output:
  fallback:
    - aws_sqs:
        url: https://sqs.us-west-2.amazonaws.com/TODO/TODO
        max_in_flight: 20

    - http_client:
        url: http://backup:1234/dlq
        verb: POST
```

## Multiplexing outputs

There are a few different ways of multiplexing in Redpanda Connect; here’s a quick run-through.

### Interpolation multiplexing

Some output fields support field interpolation, which is an easy way to multiplex messages based on their contents in situations where you are multiplexing to the same service.
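For instance, here is a minimal sketch of interpolation multiplexing against Redis lists (the `target_list` metadata key is hypothetical and assumed to be set upstream; the `redis_list` output's `key` field supports interpolation):

```yaml
output:
  redis_list:
    url: tcp://localhost:6379
    # Hypothetical metadata key set by an upstream input or processor;
    # each message is pushed to the list named in its own metadata.
    key: ${! meta("target_list") }
```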
For example, multiplexing against Kafka topics is a common pattern:

```yaml
output:
  kafka:
    addresses: [ TODO:6379 ]
    topic: ${! meta("target_topic") }
```

Refer to the field documentation for a given output to see if it supports interpolation.

### Switch multiplexing

A more advanced form of multiplexing is to route messages to different output configurations based on a query. This is easy with the switch output:

```yaml
output:
  switch:
    cases:
      - check: this.type == "foo"
        output:
          amqp_1:
            urls: [ amqps://guest:guest@localhost:5672/ ]
            target_address: queue:/the_foos

      - check: this.type == "bar"
        output:
          gcp_pubsub:
            project: dealing_with_mike
            topic: mikes_bars

      - output:
          redis_streams:
            url: tcp://localhost:6379
            stream: everything_else
        processors:
          - mapping: |
              root = this
              root.type = this.type.not_null() | "unknown"
```

## Labels

Outputs have an optional field `label` that can uniquely identify them in observability data such as metrics and logs. This can be useful when running configs with multiple outputs, as otherwise their metrics labels will be generated based on their composition. For more information check out the metrics documentation.

## Categories

### Services

Outputs that write to storage or message streaming services.

amqp_0_9, amqp_1, AWS DynamoDB, AWS Kinesis, AWS Kinesis Firehose, AWS S3, AWS SNS, AWS SQS, Azure Blob Storage, azure_data_lake_gen2, Azure Queue Storage, Azure Table Storage, beanstalkd, cache, cypher, Discord, Elasticsearch, GCP BigQuery, GCP Cloud Storage, GCP Pub/Sub, HDFS, Kafka, Franz-go, MongoDB, MQTT, nats_kv, nats_jetstream, NATS, nats_stream, NSQ, ockam_kafka, OpenSearch, Pulsar, pusher, questdb, redis_hash, redis_list, redis_pubsub, redis_streams, redpanda, redpanda_common, redpanda_migrator, redpanda_migrator_bundle, redpanda_migrator_offsets, Snowflake, snowflake_streaming, Splunk, sql_insert, sql_raw, timeplus

### AWS

Outputs that write to Amazon Web Services products.
AWS DynamoDB, AWS Kinesis, AWS Kinesis Firehose, AWS S3, AWS SNS, AWS SQS

### Azure

Outputs that write to Microsoft Azure services.

Azure Blob Storage, Azure Cosmos DB, azure_data_lake_gen2, Azure Queue Storage, Azure Table Storage

### Utility

Outputs that provide utility by combining/wrapping other outputs.

broker, drop, drop_on, dynamic, fallback, inproc, reject, resource, reject_errored, retry, subprocess, switch, sync_response

### Integration

Couchbase, Schema Registry

### Social

Outputs that write to social applications and services.

Discord

### Local

Outputs that write to the local machine/filesystem.

file, stdout

### GCP

Outputs that write to Google Cloud Platform services.

GCP BigQuery, GCP Cloud Storage, GCP Pub/Sub

### Network

Outputs that write directly to low level network protocols.

http_client, http_server, nanomsg, sftp, Socket, WebSocket, zmq4

### AI

pinecone, qdrant