Agents
Agents are Large Language Models (LLMs) that use tools in a loop to accomplish complex tasks. Unlike standard generation, an agent can observe the results of its actions and decide on the next step autonomously.
In AISDK, these components work together:
- LLMs: Act as the "brain," processing input and deciding which actions to take.
- Tools: Extend the model's capabilities (e.g., searching the web, querying a database).
- Loop: Orchestrates execution through:
  - Context management: Maintaining conversation history and deciding what the model sees at each step
  - Stopping conditions: Determining when the loop (task) is complete
The Agentic Loop
When you provide tools to a LanguageModelRequest, AISDK automatically manages the Agentic Loop.
Every interaction between the model and the environment (including tool calls and intermediate reasoning) is recorded as a Step. A single complex request might result in multiple steps, each containing full context.
- Reasoning: The model decides to call one or more tools. This triggers a new Step.
- Action: AISDK executes the tools and captures their results.
- Observation: The results are fed back into the conversation history.
- Re-evaluation: The model decides the next step based on the new context.
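Conceptually, one pass through the loop maps onto these phases as in the following simplified sketch. Everything here is an illustrative stand-in, not AISDK's internal implementation; in practice AISDK drives this loop for you, and you only supply the tools and stopping conditions.
// Simplified sketch of the agentic loop; all names below are stand-ins.
enum Turn {
    ToolCall(String),    // the model wants to run a tool
    FinalAnswer(String), // the model considers the task complete
}

// Stand-in "model": requests the weather tool once, then answers.
fn model_turn(history: &[String]) -> Turn {
    if history.iter().any(|m| m.starts_with("tool:")) {
        Turn::FinalAnswer("It is 72°F and sunny.".into())
    } else {
        Turn::ToolCall("get_weather".into())
    }
}

fn main() {
    let mut history = vec!["user: What is the weather in SF?".to_string()];
    for _ in 0..10 {                                         // stopping condition
        match model_turn(&history) {                         // Reasoning
            Turn::ToolCall(name) => {
                let result = format!("tool:{name} -> 72°F"); // Action
                history.push(result);                        // Observation
            } // Re-evaluation happens on the next iteration
            Turn::FinalAnswer(text) => {
                println!("{text}");
                return;
            }
        }
    }
}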
Example: The Weather Agent
Here is an example agent that uses multiple tools to answer questions.
use aisdk::core::{LanguageModelRequest, Tool, utils::step_count_is};
use aisdk::providers::OpenAI;
use aisdk::macros::tool;
#[tool]
/// Get the current weather in a specific location
fn get_weather(location: String) -> Result<String, Box<dyn std::error::Error>> {
// In a real app, this would call a weather API
Ok(format!("The weather in {} is 72°F and sunny.", location))
}
#[tool]
/// Convert Fahrenheit to Celsius
fn fahrenheit_to_celsius(f: f32) -> Result<String, Box<dyn std::error::Error>> {
let c = (f - 32.0) * 5.0 / 9.0;
Ok(format!("{:.1}°C", c))
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let weather_agent = LanguageModelRequest::builder()
.model(OpenAI::gpt_5())
.system("You are a helpful weather assistant.")
.prompt("What is the weather in San Francisco in Celsius?")
.with_tool(get_weather())
.with_tool(fahrenheit_to_celsius())
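// Safety net: end the agentic loop after at most 10 steps (see Loop Control below)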
.stop_when(step_count_is(10))
.build()
.generate_text()
.await?;
println!("Final Answer: {:?}", weather_agent.text());
Ok(())
}
Accessing Step Metadata
You can access the history of these turns from the response. Each Step provides:
- step_id: The chronological identifier of the turn.
- messages(): All messages recorded during this specific step.
- usage(): Token usage statistics for this turn.
- tool_calls(): Any tools requested by the model.
- tool_results(): Any results received from tools.
for step in weather_agent.steps() {
println!("--- Turn {} ---", step.step_id);
println!("Messages in this turn: {} Usage: {:?}", step.messages().len(), step.usage());
}
Loop Control (stop_when)
Because agents operate in a loop, it is critical to prevent infinite execution or excessive token usage. AISDK provides the .stop_when() method to define explicit exit conditions.
The stop_when method accepts a closure with the following signature: Fn(&LanguageModelOptions) -> bool. This closure receives the current state of the request (including all interaction history and options) and returns true if the loop should terminate.
/// Example: Stop when total token usage exceeds 1000:
.stop_when(|options| options.usage().total_tokens > 1000)
/// Example: Stop if the last assistant message contains "DONE":
.stop_when(|options| {
options.text().map_or(false, |t| t.contains("DONE"))
})
/// Example: Stop after 5 tool calls:
.stop_when(|options| {
options.tool_calls().map_or(false, |calls| calls.len() >= 5)
})
Using the step_count_is Helper
The most common way to control an agent loop is by limiting the number of turns (steps). AISDK provides a step_count_is(n) utility for this purpose.
use aisdk::core::utils::step_count_is;
// Stop after 3 steps
.stop_when(step_count_is(3))
Lifecycle Hooks
Hooks allow you to intercept the agent at different stages of its thinking process. This is powerful for logging, custom moderation, or dynamically adjusting the agent's context.
on_step_start
Runs before the model generates a response for a specific step. It receives a mutable reference to the current options, allowing you to modify the request context on the fly.
Signature: Fn(&mut LanguageModelOptions)
/// Example: Dynamically update the system prompt based on turn count:
.on_step_start(|options| {
let turn = options.steps().len();
if turn > 3 {
options.system = Some("You are now in 'Critical Mode'. Be brief.".into());
}
})
/// Example: Increase temperature as the conversation progresses:
.on_step_start(|options| {
let turn = options.steps().len();
options.temperature = Some(turn as f32 * 0.1); // higher temperature increases randomness
})
on_step_finish
Runs after a model response is received but before tools are executed (or after the final response). It is ideal for observing the model's intent or performing safety checks.
Signature: Fn(&mut LanguageModelOptions)
/// Example: Log metadata for each finished turn:
.on_step_finish(|options| {
if let Some(step) = options.last_step() {
println!("Turn {} finished. Reasoning: {:?}", step.step_id, step.usage());
}
})
/// Example: Monitor for specific tool usage:
.on_step_finish(|options| {
if let Some(calls) = options.tool_calls() {
if calls.iter().any(|c| c.function.name == "delete_database") {
println!("WARNING: Agent requested a dangerous operation!");
}
}
})
Workflow Patterns
While the automatic loop is powerful, many production systems require more structure. You can combine core AISDK primitives to implement advanced orchestration patterns.
Routing
Use a fast/cheap model to classify a query, then route it to a specialized agent or workflow.
use schemars::JsonSchema;
use serde::Deserialize;
#[derive(JsonSchema, Deserialize, Debug)]
enum QueryType {
Technical,
Billing,
General,
}
// 1. Classify the query using Structured Output
let response = LanguageModelRequest::builder()
.model(OpenAI::gpt_4o_mini())
.prompt(user_query)
.schema::<QueryType>()
.build()
.generate_text()
.await?;
let query_type: QueryType = response.into_schema().unwrap();
// 2. Route to specialized logic
match query_type {
QueryType::Technical => handle_technical(user_query).await?,
QueryType::Billing => handle_billing(user_query).await?,
QueryType::General => handle_general(user_query).await?,
};
Sequential Chains
Chains execute steps in a predefined order where the output of one request becomes the input for the next.
// Step 1: Generate summary
let summary = LanguageModelRequest::builder()
.model(OpenAI::gpt_5())
.prompt(format!("Summarize this article: {}", article))
.build()
.generate_text()
.await?;
// Step 2: Translate the summary
let translation = LanguageModelRequest::builder()
.model(OpenAI::gpt_5())
.prompt(format!("Translate this to French: {:?}", summary.text()))
.build()
.generate_text()
.await?;
Parallel Processing
Execute independent tasks concurrently using tokio::try_join! to reduce latency. This is ideal for patterns like multi-expert reviews, where specialized models audit the same input for different concerns.
use schemars::JsonSchema;
use serde::Deserialize;
#[derive(JsonSchema, Deserialize, Debug)]
struct ReviewSummary {
vulnerabilities_found: bool,
performance_score: u32,
action_items: Vec<String>,
}
// 1. Run specialized reviews in parallel (Returning Text)
let (security_res, performance_res) = tokio::try_join!(
LanguageModelRequest::builder()
.model(OpenAI::gpt_4o())
.system("You are a security expert.")
.prompt(format!("Review this code: {}", code))
.build()
.generate_text(),
LanguageModelRequest::builder()
.model(OpenAI::gpt_4o())
.system("You are a performance expert.")
.prompt(format!("Review this code: {}", code))
.build()
.generate_text()
)?;
// 2. Aggregate results into a Structured Report
let report = LanguageModelRequest::builder()
.model(OpenAI::gpt_5())
.system("You are a technical lead summarizing multiple code reviews.")
.prompt(format!(
"Synthesize these reviews into a structured report: \nSecurity: {:?} \nPerformance: {:?}",
security_res.text(), performance_res.text()
))
.schema::<ReviewSummary>()
.build()
.generate_text()
.await?;
let summary: ReviewSummary = report.into_schema().unwrap();
println!("Final Structured Report: {:?}", summary);Next Steps
- Learn how to create Custom Tools.
- Explore Structured Output for reliable agent data.
- See how Capabilities protect your agent logic.