# MCP Tool Execution and Components

This page explains how MCP tools execute and how to choose the right component type for your use case. After reading this page, you will be able to:

- Describe the request/response execution model
- Choose the right component type for a use case
- Interpret MCP server traces for debugging and monitoring

## How components map to MCP tools

Each MCP tool is implemented as a single Redpanda Connect component. The component type determines what the tool can do. The following table shows which component types are available and their purposes:

| Component Type | Purpose as an MCP Tool |
|---|---|
| Processor | Transforms, validates, or computes data. Calls external APIs. Returns results to the AI client. |
| Output | Writes data to external systems (Redpanda topics, databases, APIs). Can include processors for transformation before writing. |
| Input | Reads data from external systems. Returns the read data to the AI client. |
| Cache | Stores and retrieves data for use by other tools. |

Most MCP tools are processors. Use outputs when you need to write data. Use inputs when you need to read from external data sources.

## The MCP execution model

When an AI client calls an MCP tool, the MCP server handles the request in a specific sequence. The execution follows these steps:

1. The AI client sends a JSON request to the MCP server with the tool name and parameters.
2. The MCP server finds the corresponding component configuration.
3. The MCP server executes the component with the input data.
4. The component runs to completion and returns a result.
5. The MCP server sends the result back to the AI client.
6. The component instance is torn down.

This execution model has several important characteristics:

- **Stateless execution**: Each tool invocation is independent. Tools do not maintain state between calls.
If you need state, use an external store such as a cache, database, or Redpanda topic.
- **Synchronous by default**: Tools run synchronously from the AI client's perspective. The client waits for the response before continuing.
- **Timeout boundaries**: Tools should complete quickly. Long-running operations should be avoided or handled asynchronously. Set explicit timeouts on external calls.
- **No continuous processing**: Unlike a traditional Redpanda Connect pipeline, MCP tools do not poll for messages or maintain connections between invocations. They start, execute, and stop.

## Choose the right component type

Every MCP tool is implemented as a single component. Choosing the right component type is a critical design decision that affects what your tool can do and how it behaves.

### Decision framework

To choose the right component type, ask what the tool's primary purpose is. Use the following table to match your tool's intent to a component type:

| Question | Component Type |
|---|---|
| Does the tool compute or transform data and return results? | Processor |
| Does the tool call external APIs and return the response? | Processor |
| Does the tool write data to an external system (database, topic, API)? | Output |
| Does the tool read data from an external source and return it? | Input |
| Does the tool store and retrieve temporary data for other tools? | Cache |

The core principle is to choose the component type that matches the tool's primary intent.

## Processor tools

Processor tools transform, validate, compute, or fetch data and return results to the AI client. This is the most common tool type. See the processors reference for available processors.

### When to choose a processor tool

Choose a processor tool when the tool's purpose is to compute or transform data, call an external API and return the response, or validate inputs and return errors or results.

### Use case: Fetch and transform external data

Consider a scenario where an AI agent needs current weather data to answer a user's question about whether to bring an umbrella.
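A processor tool of this kind can be pictured as a pure request/response function: parameters in, computed result out, no side effects. The following Python sketch is illustrative only; the function names and API response shape are hypothetical, and real tools are defined as Redpanda Connect processor configurations, not Python code.

```python
# Conceptual sketch of a processor-style tool: take parameters from the
# AI client, fetch data from an external source, transform it, and
# return the result. The fetch is stubbed here; a real tool would call
# a weather API with an explicit timeout.

def fetch_weather(city: str) -> dict:
    # Stand-in for an external API call (hypothetical response shape).
    return {"city": city, "temp_c": 6.0, "conditions": "rain"}

def weather_tool(params: dict) -> dict:
    """Processor behavior: compute/transform and return, no side effects."""
    raw = fetch_weather(params["city"])
    # Transform the raw response into a shape that is useful to the agent.
    return {
        "city": raw["city"],
        "summary": f'{raw["conditions"]}, {raw["temp_c"]}°C',
        "umbrella_recommended": raw["conditions"] in ("rain", "storm"),
    }

result = weather_tool({"city": "Berlin"})
```

The key property is that the tool's entire effect is its return value, which is what makes processors the default choice.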
The following prompts should trigger this type of tool:

- "What's the weather in Berlin?"
- "Is it raining in Tokyo right now?"
- "Get me the current temperature for Seattle."

A processor is the right choice because the tool fetches data from an API, transforms it into a useful format, and returns it.

### Use case: Validate and normalize data

Consider a scenario where an AI agent needs to validate user-submitted data and return structured feedback about any issues. The following prompts should trigger this type of tool:

- "Validate this customer record before saving."
- "Check if this order has all required fields."
- "Normalize this JSON and tell me what's missing."

A processor is the right choice because the tool examines data, applies validation rules, and returns results. No data is written anywhere.

## Output tools

Output tools write data to external systems. Use them when the primary purpose is to create a side effect such as persisting data, publishing an event, or triggering an action. See the outputs reference for available outputs.

### When to choose an output tool

Choose an output tool when the tool's purpose is to write data to Redpanda, a database, or an external API. The side effect (writing) should be the primary intent, not incidental. You can use a `processors:` section within the output to transform data before writing. Output tools are appropriate when you want the AI to trigger real-world actions.

### Understanding tool response vs. side effect

Output tools have two outcomes: the side effect (data is written to the destination) and the tool response (the AI client receives confirmation that the write succeeded). The AI client does not receive the written data back. It receives status information. If you need to return the written data, consider using a processor tool instead.

### Use case: Publish events to Redpanda

Consider a scenario where an AI agent needs to publish order events to Redpanda for downstream processing.
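The side-effect-versus-response distinction can be sketched in plain Python. The names below are illustrative, and the destination is a stand-in list; a real output tool writes to a Redpanda topic, database, or API.

```python
# Conceptual sketch of an output-style tool: the write is the point,
# and the AI client gets back a confirmation, not the written data.

destination: list = []  # stand-in for the external system (e.g. a topic)

def publish_order_tool(params: dict) -> dict:
    order = params["order"]
    destination.append(order)      # side effect: the data is written
    return {"status": "written",   # tool response: confirmation only
            "records": 1}

response = publish_order_tool({"order": {"id": "o-123", "total": 42.5}})
```

Note that `response` contains only status information; the order itself is visible to downstream consumers of the destination, not to the AI client.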
The following prompts should trigger this type of tool:

- "Publish this order to Redpanda."
- "Send the order event to the orders topic."
- "Record this new order for processing."

An output is the right choice because the purpose is to write data to Redpanda. The AI needs to create a persistent record, not just compute something.

### Use case: Transform and publish

Output components can include a `processors:` section that transforms data before writing to the destination. This is a single output component, not a combination of component types. Consider a scenario where an AI agent asks an LLM to summarize a document, then stores both the original and summary in Redpanda. The following prompts should trigger this type of tool:

- "Summarize this document and save it."
- "Process this feedback with GPT and store the analysis."
- "Analyze this text and publish the results."

An output with processors is the right choice because the primary intent is to store data. The processors provide pre-processing before writing.

The execution flow for this pattern is as follows:

1. The AI client calls the tool with input data.
2. The `processors:` section transforms the data.
3. The output component writes the transformed data to the destination.
4. The tool returns a response to the AI client.

For implementation examples, see outputs with processors in the tool patterns guide.

## Input tools

Input tools read data from external sources and return it to the AI client. They're useful when you need to query or fetch existing data. See the inputs reference for available inputs.

### When to choose an input tool

Choose an input tool when the tool's purpose is to read and return data from an external source, consume messages from a Redpanda topic, or build a query-style tool that retrieves existing data.

### Bounded vs. unbounded reads

Input tools must return a finite result. Use bounded reads that fetch a specific number of messages or read until a condition is met.
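A bounded read can be pictured as capping how many messages a single invocation consumes. This is a plain-Python sketch over a stand-in stream, not an actual input configuration:

```python
import itertools

# Conceptual sketch of a bounded read: consume at most `limit` messages
# from a source and return, so the tool always produces a finite result
# and the AI client always gets a response.

def read_recent_tool(source, limit: int = 10) -> list:
    return list(itertools.islice(source, limit))

events = (f"event-{i}" for i in range(1000))  # endless-looking stream
batch = read_recent_tool(events, limit=3)
```

An unbounded loop over `events` would never return, which is exactly why continuous polling does not fit the MCP execution model.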
For example, "get me the latest N events" or "read messages from the last hour". Unbounded reads that poll continuously are not appropriate for MCP tools because the tool would never return a response to the AI client.

### Latency and scope considerations

Keep these factors in mind when building input tools:

- Input tools may have variable latency depending on the data source.
- Scope your reads appropriately. Don't try to read entire topics.
- Consider consumer group behavior: with a consumer group, each invocation advances through the stream. Without one, each invocation may read the same data.

### Use case: Query recent events

Consider a scenario where an AI agent needs to retrieve recent user activity events to understand user behavior. The following prompts should trigger this type of tool:

- "Show me recent user events."
- "Get the last 10 login events."
- "What events happened in the user-events topic recently?"

An input is the right choice because the tool reads from an existing data source (topic) and returns what it finds.

## Cache tools

Cache tools store and retrieve temporary data that other tools can access. They're useful for sharing state between tool calls or storing frequently accessed data. See the caches reference for available caches.

### When to choose a cache tool

Choose a cache tool when the tool's purpose is to store temporary data that expires after a set time, share state between multiple tool calls in a conversation, or reduce repeated calls to slow external APIs by caching results.

### Use case: Session state management

Consider a scenario where an AI agent needs to remember user preferences across multiple tool calls within a conversation. The following prompts should trigger this type of tool:

- "Remember that I prefer metric units."
- "Store my timezone as America/New_York."
- "Save this search filter for later."

A cache is the right choice because the data is temporary, session-scoped, and needs to be accessible by other tools during the conversation.
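The behavior of a cache tool can be pictured as a small TTL (time-to-live) store. The class and method names below are a plain-Python stand-in for a Redpanda Connect cache resource, not its actual API:

```python
import time

# Conceptual sketch of cache-tool behavior: set a value with a TTL,
# get it back while fresh, and miss after it expires.

class TTLCache:
    def __init__(self):
        self._items = {}  # key -> (expiry deadline, value)

    def set(self, key: str, value, ttl_seconds: float) -> None:
        self._items[key] = (time.monotonic() + ttl_seconds, value)

    def get(self, key: str):
        expires_at, value = self._items.get(key, (0.0, None))
        return value if time.monotonic() < expires_at else None

cache = TTLCache()
cache.set("units", "metric", ttl_seconds=60)
preference = cache.get("units")   # other tools can read this value
```

Because entries expire on their own, the cache naturally fits session-scoped state that should not outlive the conversation.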
### Use case: API response caching

Consider a scenario where an AI agent frequently looks up the same reference data (like exchange rates or product catalogs) and you want to avoid repeated API calls. The following prompts should trigger cache usage:

- "Get the current exchange rate" (cached for 5 minutes)
- "Look up product details" (cached for 1 hour)
- "Check inventory levels" (cached briefly to reduce load)

A cache is the right choice because you want to store API responses temporarily and serve them on subsequent requests without hitting the external API again.

## Component selection summary

The following table summarizes when to use each component type:

| Component | Primary Intent | Example Tools | Returns |
|---|---|---|---|
| Processor | Compute, transform, validate, fetch | Weather lookup, data validation, API calls | Computed result |
| Output | Write data with side effects | Publish events, store records, trigger webhooks | Write confirmation |
| Output + processors | Transform then write | Summarize and store, enrich and publish | Write confirmation |
| Input | Read and return data | Query recent events, search logs | Retrieved data |
| Cache | Store and retrieve temporary data | Session state, API response caching | Cached value or confirmation |

For implementation examples and common patterns, see MCP Tool Patterns.

## Execution log and observability

Every MCP server automatically emits OpenTelemetry traces to a topic called `redpanda.otel_traces`. These traces provide detailed observability into your MCP server's operations, creating a complete execution log.

### Traces and spans

OpenTelemetry traces provide a complete picture of how a request flows through your system:

- A **trace** represents the entire lifecycle of a request (for example, a tool invocation from start to finish).
- A **span** represents a single unit of work within that trace (such as a data processing operation or an external API call).
- A trace contains one or more spans organized hierarchically, showing how operations relate to each other.
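The trace/span relationship can be sketched as turning a flat list of spans into a parent-child hierarchy. The field names follow the OTLP JSON shape that this page's span examples use; the sample spans themselves are made up:

```python
from collections import defaultdict

# Conceptual sketch: reconstruct the span hierarchy of one trace from a
# flat list of spans, using spanId and parentSpanId.

spans = [
    {"traceId": "t1", "spanId": "a", "name": "tool_invocation"},
    {"traceId": "t1", "spanId": "b", "parentSpanId": "a", "name": "http"},
]

def children_by_parent(spans: list) -> dict:
    tree = defaultdict(list)
    for span in spans:
        # Root spans have no parentSpanId, so they group under None.
        tree[span.get("parentSpanId")].append(span["spanId"])
    return dict(tree)

tree = children_by_parent(spans)
```

Here `tree[None]` lists the root spans (the tool invocations) and `tree["a"]` lists the internal operations they triggered.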
With 100% sampling, every operation is captured, creating a complete execution log that you can use for debugging, monitoring, and performance analysis.

### How Redpanda stores traces

The `redpanda.otel_traces` topic stores OpenTelemetry spans in JSON format, following the OpenTelemetry Protocol (OTLP) specification. A Protobuf schema named `redpanda.otel_traces-value` is also automatically registered with the topic, enabling clients to deserialize trace data correctly.

The `redpanda.otel_traces` topic and its schema are managed automatically by Redpanda. If you delete either the topic or the schema, they are recreated automatically. However, deleting the topic permanently deletes all trace data, and the topic comes back empty. Do not produce your own data to this topic. It is reserved for OpenTelemetry traces.

Each span in the execution log represents a specific operation performed by your MCP server, such as:

- Tool invocation requests
- Data processing operations
- External API calls
- Error conditions
- Performance metrics

### Topic configuration and lifecycle

The `redpanda.otel_traces` topic has a predefined retention policy. Configuration changes to this topic are not supported. If you modify settings, Redpanda reverts them to the default values. The topic persists in your cluster even after all MCP servers are deleted, allowing you to retain historical trace data for analysis.

Trace data may contain sensitive information from your tool inputs and outputs. Consider implementing appropriate access control lists (ACLs) for the `redpanda.otel_traces` topic, and review the data in traces before sharing or exporting to external systems.

### Understand the trace structure

Each span captures a unit of work.
Here's what a typical MCP tool invocation looks like:

```json
{
  "traceId": "71cad555b35602fbb35f035d6114db54",
  "spanId": "43ad6bc31a826afd",
  "name": "http_processor",
  "attributes": [
    {"key": "city_name", "value": {"stringValue": "london"}},
    {"key": "result_length", "value": {"intValue": "198"}}
  ],
  "startTimeUnixNano": "1765198415253280028",
  "endTimeUnixNano": "1765198424660663434",
  "instrumentationScope": {"name": "rpcn-mcp"},
  "status": {"code": 0, "message": ""}
}
```

Key elements to understand:

- `traceId`: Links all spans belonging to the same request. Use this to follow a tool invocation through its entire lifecycle.
- `name`: The tool name (`http_processor` in this example). This tells you which tool was invoked.
- `instrumentationScope.name`: When this is `rpcn-mcp`, the span represents an MCP tool. When it's `redpanda-connect`, it's internal processing.
- `attributes`: Context about the operation, like input parameters or result metadata.
- `status.code`: `0` means success, `2` means error.

### Parent-child relationships

Traces show how operations relate. A tool invocation (parent) may trigger internal operations (children):

```json
{
  "traceId": "71cad555b35602fbb35f035d6114db54",
  "spanId": "ed45544a7d7b08d4",
  "parentSpanId": "43ad6bc31a826afd",
  "name": "http",
  "instrumentationScope": {"name": "redpanda-connect"},
  "status": {"code": 0, "message": ""}
}
```

The `parentSpanId` links this child span to the parent tool invocation. Both share the same `traceId`, so you can reconstruct the complete operation.

### Error events in traces

When something goes wrong, traces capture error details:

```json
{
  "traceId": "71cad555b35602fbb35f035d6114db54",
  "spanId": "ba332199f3af6d7f",
  "parentSpanId": "43ad6bc31a826afd",
  "name": "http_request",
  "events": [
    {
      "name": "event",
      "timeUnixNano": "1765198420254169629",
      "attributes": [{"key": "error", "value": {"stringValue": "type"}}]
    }
  ],
  "status": {"code": 0, "message": ""}
}
```

The `events` array captures what happened and when.
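The timestamp fields can be turned into useful numbers directly. A minimal Python sketch, using the nanosecond-string convention from the span examples above (the helper function names are illustrative):

```python
# Conceptual sketch: compute a span's duration and pull out error events
# from OTLP JSON fields. Timestamps are strings of nanoseconds since the
# Unix epoch, so they must be parsed as integers first.

span = {
    "startTimeUnixNano": "1765198415253280028",
    "endTimeUnixNano": "1765198424660663434",
    "status": {"code": 0, "message": ""},
    "events": [
        {"timeUnixNano": "1765198420254169629",
         "attributes": [{"key": "error", "value": {"stringValue": "type"}}]},
    ],
}

def duration_seconds(span: dict) -> float:
    start = int(span["startTimeUnixNano"])
    end = int(span["endTimeUnixNano"])
    return (end - start) / 1e9

def error_events(span: dict) -> list:
    return [e for e in span.get("events", [])
            if any(a["key"] == "error" for a in e.get("attributes", []))]

elapsed = duration_seconds(span)   # roughly 9.4 seconds for this span
failed = len(error_events(span)) > 0
```

This is the same arithmetic you would apply when scanning `redpanda.otel_traces` for slow or failing tool invocations.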
Use `timeUnixNano` to see exactly when the error occurred within the operation.

### Traces compared to audit logs

OpenTelemetry traces are designed for observability and debugging, not audit logging or compliance.

Traces provide:

- Hierarchical view of request flow through your system (parent-child span relationships)
- Detailed timing information for performance analysis
- Ability to reconstruct execution paths and identify bottlenecks
- Insights into how operations flow through distributed systems

Traces are not:

- Immutable audit records for compliance purposes
- Designed for "who did what" accountability tracking

For monitoring tasks like consuming traces, debugging failures, and measuring performance, see Monitor MCP Server Activity.

## Next steps

Continue your learning journey with these resources:

- Create an MCP Tool: Create custom tools step by step
- MCP Tool Design: Apply naming and design guidelines
- MCP Tool Patterns: Find reusable patterns
- Troubleshoot Remote MCP Servers: Diagnose common issues