# azure_table_storage

Type: Output

Available in: Cloud, Self-Managed

Stores messages in an Azure Table Storage table.

Introduced in version 3.36.0.

```yaml
# Common config fields, showing default values
output:
  label: ""
  azure_table_storage:
    storage_account: ""
    storage_access_key: ""
    storage_connection_string: ""
    storage_sas_token: ""
    table_name: ${! meta("kafka_topic") } # No default (required)
    partition_key: ""
    row_key: ""
    properties: {}
    max_in_flight: 64
    batching:
      count: 0
      byte_size: 0
      period: ""
      check: ""
```

```yaml
# All config fields, showing default values
output:
  label: ""
  azure_table_storage:
    storage_account: ""
    storage_access_key: ""
    storage_connection_string: ""
    storage_sas_token: ""
    table_name: ${! meta("kafka_topic") } # No default (required)
    partition_key: ""
    row_key: ""
    properties: {}
    transaction_type: INSERT
    max_in_flight: 64
    timeout: 5s
    batching:
      count: 0
      byte_size: 0
      period: ""
      check: ""
      processors: [] # No default (optional)
```

Only one authentication method is required: either `storage_connection_string`, or `storage_account` together with `storage_access_key`. If both are set, `storage_connection_string` takes priority.

To set the `table_name`, `partition_key` and `row_key` you can use the function interpolations described here, which are calculated per message of a batch.

If `properties` is not set in the config, all the JSON fields of the message are marshalled and stored in the table, which will be created if it does not exist. Object and array fields are marshalled as strings. For example, the JSON message:

```json
{
  "foo": 55,
  "bar": {
    "baz": "a",
    "bez": "b"
  },
  "diz": ["a", "b"]
}
```

will be stored in the table with the following properties:

```
foo: '55'
bar: '{ "baz": "a", "bez": "b" }'
diz: '["a", "b"]'
```

It's also possible to use function interpolations to get or transform the property values, e.g.:

```yaml
properties:
  device: '${! json("device") }'
  timestamp: '${! json("timestamp") }'
```

## Performance

This output benefits from sending multiple messages in flight in parallel for improved performance. You can tune the maximum number of in-flight messages (or message batches) with the field `max_in_flight`.

This output also benefits from sending messages as a batch for improved performance. Batches can be formed at both the input and output level. You can find out more in this doc.
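As a concrete sketch, the following configuration combines interpolated keys with batching. The connection string placeholder and the `date` and `device` message fields are assumptions about the input data, not defaults of this component:

```yaml
output:
  azure_table_storage:
    # Placeholder connection string (points at the local storage emulator);
    # substitute your own account's connection string.
    storage_connection_string: "UseDevelopmentStorage=true"
    # Name the table after the originating Kafka topic (assumes a kafka input
    # that sets the kafka_topic metadata key).
    table_name: ${! meta("kafka_topic") }
    # Assumed message fields: "date" for the partition and "device" for the row.
    partition_key: ${! json("date") }
    row_key: ${! json("device") }-${! uuid_v4() }
    max_in_flight: 64
    batching:
      count: 100
      period: 1s
```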
## Fields

### storage_account

The storage account to access. This field is ignored if `storage_connection_string` is set.

Type: `string`
Default: `""`

### storage_access_key

The storage account access key. This field is ignored if `storage_connection_string` is set.

Type: `string`
Default: `""`

### storage_connection_string

A storage account connection string. This field is required if `storage_account` and `storage_access_key` / `storage_sas_token` are not set.

Type: `string`
Default: `""`

### storage_sas_token

The storage account SAS token. This field is ignored if `storage_connection_string` or `storage_access_key` is set.

Type: `string`
Default: `""`

### table_name

The table to store messages into. This field supports interpolation functions.

Type: `string`

```yaml
# Examples
table_name: ${! meta("kafka_topic") }

table_name: ${! json("table") }
```

### partition_key

The partition key. This field supports interpolation functions.

Type: `string`
Default: `""`

```yaml
# Examples
partition_key: ${! json("date") }
```

### row_key

The row key. This field supports interpolation functions.

Type: `string`
Default: `""`

```yaml
# Examples
row_key: ${! json("device") }-${! uuid_v4() }
```

### properties

A map of properties to store into the table. This field supports interpolation functions.

Type: `object`
Default: `{}`

### transaction_type

Type of transaction operation. This field supports interpolation functions.

Type: `string`
Default: `"INSERT"`
Options: `INSERT`, `INSERT_MERGE`, `INSERT_REPLACE`, `UPDATE_MERGE`, `UPDATE_REPLACE`, `DELETE`.

```yaml
# Examples
transaction_type: ${! json("operation") }

transaction_type: ${! meta("operation") }

transaction_type: INSERT
```

### max_in_flight

The maximum number of parallel message batches to have in flight at any given time.

Type: `int`
Default: `64`

### timeout

The maximum period to wait on an upload before abandoning it and reattempting.

Type: `string`
Default: `"5s"`

### batching

Allows you to configure a batching policy.

Type: `object`

```yaml
# Examples
batching:
  byte_size: 5000
  count: 0
  period: 1s

batching:
  count: 10
  period: 1s

batching:
  check: this.contains("END BATCH")
  count: 0
  period: 1m
```

### batching.count

A number of messages at which the batch should be flushed. Setting this to 0 disables count-based batching.

Type: `int`
Default: `0`

### batching.byte_size

An amount of bytes at which the batch should be flushed. Setting this to 0 disables size-based batching.

Type: `int`
Default: `0`

### batching.period

A period in which an incomplete batch should be flushed regardless of its size.

Type: `string`
Default: `""`

```yaml
# Examples
period: 1s

period: 1m

period: 500ms
```

### batching.check

A Bloblang query that should return a boolean value indicating whether a message should end a batch.

Type: `string`
Default: `""`

```yaml
# Examples
check: this.type == "end_of_transaction"
```

### batching.processors

A list of processors to apply to a batch as it is flushed. This allows you to aggregate and archive the batch however you see fit. Please note that all resulting messages are flushed as a single batch, therefore splitting the batch into smaller batches using these processors is a no-op.

Type: `array`

```yaml
# Examples
processors:
  - archive:
      format: concatenate

processors:
  - archive:
      format: lines

processors:
  - archive:
      format: json_array
```
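Because `transaction_type` supports interpolation, a single output can perform different table operations per message. The sketch below assumes an upstream processor has set an `operation` metadata key to one of the documented options, and that messages carry `region`, `id` and `status` fields; the account, key reference and table name are placeholders:

```yaml
output:
  azure_table_storage:
    storage_account: "myaccount"          # assumed account name
    storage_access_key: "${STORAGE_KEY}"  # assumed environment variable
    table_name: "Devices"                 # assumed table name
    partition_key: ${! json("region") }
    row_key: ${! json("id") }
    # One of INSERT, INSERT_MERGE, INSERT_REPLACE, UPDATE_MERGE,
    # UPDATE_REPLACE or DELETE, taken from message metadata.
    transaction_type: ${! meta("operation") }
    # Store only selected properties instead of the full document.
    properties:
      status: '${! json("status") }'
```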