ollama_embeddings

This feature requires an Enterprise license. To upgrade, contact Redpanda sales.

Generates vector embeddings from text, using the Ollama API.

Introduced in version 4.32.0.

# Config fields, showing default values
label: ""
ollama_embeddings:
  server_address: http://127.0.0.1:11434 # No default (optional)
  model: nomic-embed-text # No default (required)
  text: "" # No default (optional)

This processor sends text to your chosen Ollama large language model (LLM) and creates vector embeddings using the Ollama API. Vector embeddings are long arrays of numbers that represent values or objects, in this case text.

By default, the processor starts and runs a locally installed Ollama server. Alternatively, to use an already running Ollama server, add your server details to the server_address field. You can download and install Ollama from the Ollama website.

For more information, see the Ollama documentation.
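As an illustration, the following configuration fragment embeds each message's content with a local Ollama server. It is a minimal sketch: the field values match the config block above, but the surrounding `pipeline` layout and the choice of model are illustrative, not required.

```yaml
# Sketch: generate embeddings for every message flowing through the pipeline.
# Leaving out server_address makes the processor start a local Ollama server.
pipeline:
  processors:
    - ollama_embeddings:
        model: nomic-embed-text      # embedding model to use (example choice)
        text: '${! content() }'      # embed the full message payload (the default behavior)
```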

Fields

server_address

The address of the Ollama server to use. Leave this field blank to have the processor start and run a local Ollama server, or specify the address of your own local or remote server.

Type: string

# Examples

server_address: http://127.0.0.1:11434

model

The name of the Ollama LLM to use. For a full list of models, see the Ollama website.

Type: string

# Examples

model: nomic-embed-text

model: mxbai-embed-large

model: snowflake-arctic-embed

model: all-minilm

text

The text you want to create vector embeddings for. By default, the processor submits the entire payload as a string. This field supports interpolation functions.

Type: string
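Because this field supports interpolation functions, you can embed part of a message instead of the whole payload. The following sketch assumes a JSON payload with a `description` field; the field name is hypothetical.

```yaml
# Sketch: embed only the `description` field of a JSON message,
# rather than the entire payload (the default).
text: '${! json("description") }'
```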