Tools¶
Tools are sub-component nodes that give agents capabilities beyond language generation. They provide LangChain tool functions that an agent can invoke during its reasoning loop to interact with the outside world -- run shell commands, make HTTP requests, search the web, evaluate math, or check the current time.
How tools work¶
Tools connect to agent nodes via the green diamond tools handle on the canvas. At build time, the agent queries all edges with edge_label="tool", loads each connected tool node's factory function, and registers the resulting LangChain @tool functions for LLM function calling.
```mermaid
flowchart LR
    T[Chat Trigger] --> A[Agent]
    M[AI Model] -.->|model| A
    RC[Run Command] -.->|tool| A
    HR[HTTP Request] -.->|tool| A
    WS[Web Search] -.->|tool| A
    C[Calculator] -.->|tool| A
    DT[Date & Time] -.->|tool| A
```

When the agent's LLM decides to call a tool during execution, WebSocket node_status events are published so the tool node shows running, success, or failed badges on the canvas in real time.
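The build-time edge query described above can be sketched in plain Python. This is a minimal illustration of the mechanism, not Pipelit's actual internals: the `Node`/`Edge` shapes and the `collect_tools` helper are assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Node:
    id: str
    factory: Callable  # called at build time to produce the tool function

@dataclass
class Edge:
    source: str
    target: str
    edge_label: str

def collect_tools(agent_id: str, nodes: list[Node], edges: list[Edge]) -> list:
    """Find every node wired to the agent via an edge_label="tool" edge
    and call its factory to get the function to register with the LLM."""
    by_id = {n.id: n for n in nodes}
    return [
        by_id[e.source].factory()
        for e in edges
        if e.target == agent_id and e.edge_label == "tool"
    ]
```

Note that the edge with `edge_label="model"` is skipped: only tool edges contribute functions to the agent's function-calling set.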
Built-in tools¶
Pipelit ships with 5 built-in utility tools:
| Tool | Component Type | Description |
|---|---|---|
| Run Command | run_command | Execute shell commands on the host system |
| HTTP Request | http_request | Make HTTP requests to external APIs |
| Web Search | web_search | Search the web via a SearXNG instance |
| Calculator | calculator | Evaluate mathematical expressions safely |
| Date & Time | datetime | Get the current date and time |
Connecting tools to agents¶
- Add a tool node from the Node Palette (under the Tools category)
- Add an agent node if you have not already
- Drag an edge from the tool node to the agent's green diamond tools handle at the bottom
- The edge will be created with edge_label="tool" automatically
An agent can have any number of tools connected. Each tool becomes available to the agent's LLM for function calling.
Tools are optional
An agent does not require any tools. Without tools, the agent acts as a pure conversational LLM that can only generate text responses.
Tool execution lifecycle¶
- The agent receives input and begins its reasoning loop
- The LLM decides to call a tool and emits a tool-call message
- LangGraph dispatches the call to the corresponding tool function
- The tool node's status changes to running (visible on the canvas)
- The tool executes and returns a result string
- The tool node's status changes to success or failed
- The result is fed back into the agent's reasoning loop
- The LLM can call more tools or produce a final response
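The lifecycle above amounts to a loop like the following. This is a simplified pure-Python sketch; in a real deployment LangGraph drives this loop, and the message shapes and names here are assumptions made for illustration.

```python
def run_agent(llm, tools: dict, user_input: str) -> str:
    """Hypothetical reasoning loop. The llm callable returns either a
    tool call {"tool": name, "args": {...}} or {"final": text}."""
    messages = [{"role": "user", "content": user_input}]
    while True:
        step = llm(messages)  # the LLM decides: call a tool, or finish?
        if "final" in step:
            return step["final"]  # final response ends the loop
        # dispatch the call to the matching tool function and execute it
        result = tools[step["tool"]](**step["args"])
        # feed the result string back into the reasoning loop
        messages.append({"role": "tool", "content": result})
```

Each iteration is one tool call; the loop only exits when the LLM produces a final response instead of another tool-call message.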
Configuration¶
Most tools accept optional configuration via their extra_config field in the node details panel. See each tool's page for specific configuration options.
Security considerations
Some tools (particularly Run Command and HTTP Request) can interact with the host system and external services. Review the Security documentation before deploying workflows with these tools in production.