sql

Uses an SQL database table as a destination for storing cache key/value items.

Introduced in version 4.26.0.

# Common configuration fields, showing default values
label: ""
sql:
  driver: "" # No default (required)
  dsn: "clickhouse://username:password@host1:9000,host2:9000/database?dial_timeout=200ms&max_execution_time=60" # No default (required)
  table: foo # No default (required)
  key_column: foo # No default (required)
  value_column: bar # No default (required)
  set_suffix: ON DUPLICATE KEY UPDATE bar=VALUES(bar) # No default (optional)

# All configuration fields, showing default values
label: ""
sql:
  driver: "" # No default (required)
  dsn: "clickhouse://username:password@host1:9000,host2:9000/database?dial_timeout=200ms&max_execution_time=60" # No default (required)
  table: foo # No default (required)
  key_column: foo # No default (required)
  value_column: bar # No default (required)
  set_suffix: ON DUPLICATE KEY UPDATE bar=VALUES(bar) # No default (optional)
  init_files: [] # No default (optional)
  init_statement: | # No default (optional)
    CREATE TABLE IF NOT EXISTS some_table (
      foo varchar(50) not null,
      bar integer,
      baz varchar(50),
      primary key (foo)
    ) WITHOUT ROWID;
  conn_max_idle_time: "" # No default (optional)
  conn_max_life_time: "" # No default (optional)
  conn_max_idle: 2
  conn_max_open: 0 # No default (optional)

Each cache key/value pair will exist as a row within the specified table. Currently only the key and value columns are set, and therefore any other columns present within the target table must allow NULL values if this cache is going to be used for set and add operations.
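
For example, a minimal sketch of a compatible table (MySQL syntax, reusing the foo/bar columns from the configuration above; the extra baz column mirrors the init_statement example below and stands in for any unrelated column):

CREATE TABLE IF NOT EXISTS foo (
  foo VARCHAR(255) NOT NULL, -- key_column: holds cache keys, hence the primary key
  bar TEXT,                  -- value_column: holds cache values
  baz VARCHAR(50) NULL,      -- any extra column must allow NULLs
  PRIMARY KEY (foo)
);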

Cache operations are translated into SQL statements as follows:

Get

All get operations are performed with a traditional select statement.
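
For example, with table: foo, key_column: foo and value_column: bar as configured above, a get roughly corresponds to the following sketch (placeholder syntax varies by driver, e.g. $1 for postgres):

SELECT bar FROM foo WHERE foo = ?;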

Delete

All delete operations are performed with a traditional delete statement.
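
Continuing the same sketch, a delete roughly corresponds to:

DELETE FROM foo WHERE foo = ?;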

Set

The set operation is performed with a traditional insert statement.

This behaves as an add operation by default, and therefore ideally needs to be adapted in order to provide updates instead of failing on collisions. Since different SQL engines implement upserts differently, it is necessary to specify a set_suffix that modifies the insert statement so that it performs an update on conflict.
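
For example, with the MySQL-style set_suffix from the examples appended, a set would roughly expand to the following sketch:

INSERT INTO foo (foo, bar) VALUES (?, ?)
ON DUPLICATE KEY UPDATE bar=VALUES(bar);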

Add

The add operation is performed with a traditional insert statement.
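
In the same sketch, an add is the bare insert with no suffix, so a second add for an existing key fails on the primary key collision:

INSERT INTO foo (foo, bar) VALUES (?, ?);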

Fields

driver

A database driver to use.

Type: string

Options: clickhouse, gocosmos, mssql, mysql, oracle, postgres, snowflake, spanner, sqlite, trino

dsn

A Data Source Name to identify the target database.

Drivers

The following is a list of supported drivers and their respective DSN formats:

clickhouse

clickhouse://[username[:password]@][netloc][:port]/dbname[?param1=value1&…&paramN=valueN]

gocosmos

AccountEndpoint=<cosmosdb-endpoint>;AccountKey=<cosmosdb-account-key>[;TimeoutMs=<timeout-in-ms>][;Version=<cosmosdb-api-version>][;DefaultDb/Db=<db-name>][;AutoId=<true/false>][;InsecureSkipVerify=<true/false>]

mssql

sqlserver://[user[:password]@][netloc][:port][?database=dbname&param1=value1&…]

mysql

[username[:password]@][protocol[(address)]]/dbname[?param1=value1&…&paramN=valueN]

oracle

oracle://[username[:password]@][netloc][:port]/service_name?server=server2&server=server3

postgres

postgres://[user[:password]@][netloc][:port][/dbname][?param1=value1&…]

snowflake

username[:password]@account_identifier/dbname/schemaname[?param1=value&…&paramN=valueN]

spanner

projects/[project]/instances/[instance]/databases/[database]

sqlite

file:/path/to/filename.db[?param1=value1&…]

trino

http[s]://user[:pass]@host[:port][?parameters]

  • By default, the postgres driver enforces SSL. You can override this with the parameter sslmode=disable.

  • The snowflake driver supports multiple DSN formats. For more details, see the gosnowflake documentation.

    For key pair authentication, the DSN has the following format: <snowflake_user>@<snowflake_account>/<db_name>/<schema_name>?warehouse=<warehouse>&role=<role>&authenticator=snowflake_jwt&privateKey=<base64_url_encoded_private_key>

    The value for the privateKey parameter can be constructed from an unencrypted RSA private key file rsa_key.p8 using:

    openssl enc -d -base64 -in rsa_key.p8 | basenc --base64url -w0

    On macOS, if you install coreutils with Homebrew, you can use gbasenc instead of basenc. If you have a password-encrypted private key, you can decrypt it using:

    openssl pkcs8 -in rsa_key_encrypted.p8 -out rsa_key.p8

    Also, make sure fields such as the username are URL-encoded.

  • The gocosmos driver is still experimental, but it has support for hierarchical partition keys as well as cross-partition queries. For more details, see SQL notes.

Type: string

# Examples

dsn: "clickhouse://username:password@host1:9000,host2:9000/database?dial_timeout=200ms&max_execution_time=60"

dsn: foouser:foopassword@tcp(localhost:3306)/foodb

dsn: postgres://foouser:foopass@localhost:5432/foodb?sslmode=disable

dsn: oracle://foouser:foopass@localhost:1521/service_name

table

The table to insert/read/delete cache items.

Type: string

# Examples

table: foo

key_column

The name of a column to be used for storing cache item keys. This column should support strings of arbitrary size.

Type: string

# Examples

key_column: foo

value_column

The name of a column to be used for storing cache item values. This column should support strings of arbitrary size.

Type: string

# Examples

value_column: bar

set_suffix

An optional suffix to append to each insert query for a cache set operation. This should modify an insert statement into an upsert appropriate for the given SQL engine.

Type: string

# Examples

set_suffix: ON DUPLICATE KEY UPDATE bar=VALUES(bar)

set_suffix: ON CONFLICT (foo) DO UPDATE SET bar=excluded.bar

set_suffix: ON CONFLICT (foo) DO NOTHING
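
The first example above uses MySQL syntax, while the two ON CONFLICT forms suit engines such as PostgreSQL and SQLite.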

init_files

An optional list of file paths containing SQL statements to execute immediately upon the first connection to the target database. This is a useful way to initialise tables before processing data. Glob patterns are supported, including super globs (double star).

Care should be taken to ensure that the statements are idempotent, and therefore would not cause issues when run multiple times after service restarts. If both init_statement and init_files are specified, the init_statement is executed after the init_files.

If a statement fails for any reason, a warning log will be emitted, but the operation of this component will not be stopped.

Type: array

Requires version 4.10.0 or newer

# Examples

init_files:
  - ./init/*.sql

init_files:
  - ./foo.sql
  - ./bar.sql
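
As a sketch, a hypothetical ./init/01_tables.sql might contain idempotent DDL such as the following (IF NOT EXISTS syntax as supported by, for example, SQLite and PostgreSQL), so that re-running it after a restart is harmless:

CREATE TABLE IF NOT EXISTS some_table (
  foo varchar(50) not null,
  bar integer,
  baz varchar(50),
  primary key (foo)
);

CREATE INDEX IF NOT EXISTS some_table_bar_idx ON some_table (bar);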

init_statement

An optional SQL statement to execute immediately upon the first connection to the target database. This is a useful way to initialise tables before processing data. Care should be taken to ensure that the statement is idempotent, and therefore would not cause issues when run multiple times after service restarts.

If both init_statement and init_files are specified, the init_statement is executed after the init_files.

If the statement fails for any reason, a warning log will be emitted, but the operation of this component will not be stopped.

Type: string

Requires version 4.10.0 or newer

# Examples

init_statement: |2
  CREATE TABLE IF NOT EXISTS some_table (
    foo varchar(50) not null,
    bar integer,
    baz varchar(50),
    primary key (foo)
  ) WITHOUT ROWID;

conn_max_idle_time

An optional maximum amount of time a connection may be idle. Expired connections may be closed lazily before reuse. If the value is <= 0, connections are not closed due to a connection's idle time.

Type: string

conn_max_life_time

An optional maximum amount of time a connection may be reused. Expired connections may be closed lazily before reuse. If the value is <= 0, connections are not closed due to a connection's age.

Type: string

conn_max_idle

An optional maximum number of connections in the idle connection pool. If conn_max_open is greater than 0 but less than the new conn_max_idle, then the new conn_max_idle will be reduced to match the conn_max_open limit. If the value is <= 0, no idle connections are retained. The default max idle connections is currently 2. This may change in a future release.

Type: int

Default: 2

conn_max_open

An optional maximum number of open connections to the database. If conn_max_idle is greater than 0 and the new conn_max_open is less than conn_max_idle, then conn_max_idle will be reduced to match the new conn_max_open limit. If the value is <= 0, there is no limit on the number of open connections. The default is 0 (unlimited).

Type: int