Generating Text
`LanguageModelRequest` is the main AISDK interface for building and executing LLM requests. It gives you a type-safe builder, compile-time capability checks, and a single consistent API across providers.
Quick Start
Pick a provider feature first (OpenAI used here):
```sh
cargo add aisdk --features openai
```

Then build and run a request:

```rust
use aisdk::{core::LanguageModelRequest, providers::OpenAI};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let response = LanguageModelRequest::builder()
        .model(OpenAI::gpt_5())
        .system("You are a concise assistant.")
        .prompt("Explain ownership in Rust in one paragraph.")
        .temperature(30)
        .build()
        .generate_text()
        .await?;

    println!("{}", response.text());
    Ok(())
}
```

The Capability System
AISDK uses Rust traits to enforce model capabilities at compile time. If a model does not support a feature, the request does not compile.
This applies to:
- Tool calling
- Structured output
- Reasoning features
- Multimodal input/output
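To see how this kind of compile-time gating works in plain Rust, here is a minimal, self-contained sketch using a marker trait. The type and trait names below are illustrative stand-ins, not AISDK's actual internals:

```rust
// Marker trait: only models that support tool calls implement it.
trait ToolCallSupport {}

struct Gpt5; // hypothetical model type that supports tools
struct O1Mini; // hypothetical model type that does not

impl ToolCallSupport for Gpt5 {}
// Note: no `impl ToolCallSupport for O1Mini` -- that absence is the whole trick.

// A builder-style method gated on the capability trait.
fn with_tool<M: ToolCallSupport>(_model: M) -> &'static str {
    "tool registered"
}
```

Calling `with_tool(Gpt5)` compiles, while `with_tool(O1Mini)` is rejected by the compiler with a "trait not implemented" error, mirroring the behavior shown in the example below.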
```rust
// ✅ WORKS: GPT-5 supports tool calls
let request = LanguageModelRequest::builder()
    .model(OpenAI::gpt_5())
    .with_tool(my_tool) // Valid!
    .build();

// ❌ FAILS TO COMPILE: O1 Mini doesn't support tool calls
let request = LanguageModelRequest::builder()
    .model(OpenAI::o1_mini())
    .with_tool(my_tool) // ERROR: the trait `ToolCallSupport` is not implemented
    .build();
```

Type-State Builder Flow
`LanguageModelRequest::builder()` follows a strict build sequence. Each stage only exposes the methods valid at that stage.

- Choose a model: `.model(M)`
- Optional system instruction: `.system(...)`
- Choose one input style: `.prompt(...)` or `.messages(...)`
- Optional options and orchestration features
- Finalize with `.build()`
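The stage progression above is the classic type-state pattern: each stage is a distinct type, so invalid call orders simply don't type-check. A minimal self-contained sketch (with stand-in types, not AISDK's real builder) looks like this:

```rust
// Each builder stage is its own type; methods consume one stage
// and return the next, so the sequence is enforced at compile time.
struct NeedsModel;
struct NeedsInput { model: String }
struct Ready { model: String, prompt: String }

impl NeedsModel {
    fn model(self, name: &str) -> NeedsInput {
        NeedsInput { model: name.to_string() }
    }
}

impl NeedsInput {
    fn prompt(self, text: &str) -> Ready {
        Ready { model: self.model, prompt: text.to_string() }
    }
}

impl Ready {
    fn build(self) -> String {
        format!("{} <- {}", self.model, self.prompt)
    }
}
```

With this shape, `NeedsModel.prompt(...)` or calling `.build()` before choosing an input style fails to compile, because those methods don't exist on those stage types.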
Request Inputs
Use one of these:
- `prompt(impl Into<String>)` for a single user prompt
- `messages(Messages)` for multi-turn history
`prompt(...)` and `messages(...)` are mutually exclusive by design.
```rust
// Single-turn
let request = LanguageModelRequest::builder()
    .model(OpenAI::gpt_5())
    .prompt("What is the capital of France?")
    .build();

// Multi-turn
let request = LanguageModelRequest::builder()
    .model(OpenAI::gpt_5())
    .messages(messages)
    .build();
```

Model Options
Common tuning methods:
- `temperature(u32)`: randomness (0-100)
- `top_p(u32)`: nucleus sampling (0-100)
- `top_k(u32)`: limit token candidates
- `seed(u32)`: deterministic sampling seed
- `max_retries(u32)`: retry count on failures
- `frequency_penalty(f32)`: repetition penalty
- `stop_sequences(Vec<String>)`: early stop tokens
- `reasoning_effort(ReasoningEffort)`: reasoning level (supported models only)

For `u32` percentage-style options, AISDK scales values to each provider's native format.
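For example, a 0-100 value might be mapped onto a provider range such as OpenAI's 0.0-2.0 temperature. The exact conversion AISDK applies isn't documented here; this is only a sketch of the idea:

```rust
// Hedged sketch: map a 0-100 user value onto a provider's
// native [0.0, provider_max] range, clamping out-of-range input.
fn scale_option(value: u32, provider_max: f32) -> f32 {
    (value.min(100) as f32 / 100.0) * provider_max
}
```

Under this scheme, `.temperature(30)` would land at 0.6 on a provider whose native temperature range tops out at 2.0.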
Orchestration Features
Use these to move from plain generation to agent workflows:
- `with_tool(Tool)`: register tools (see Tools)
- `schema<T: JsonSchema>()`: enforce structured output (see Structured Output)
- `on_step_start(...)`, `on_step_finish(...)`, `stop_when(...)`: lifecycle hooks and loop control (see Agents)
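Conceptually, the lifecycle hooks drive a step loop like the sketch below. The `Step` type and the closure signatures here are illustrative assumptions, not AISDK's real API:

```rust
// Stand-in for one model/tool round trip.
struct Step { id: u32 }

// A simplified agent loop: run up to `max` steps, invoking a
// finish hook after each step and stopping early when the
// stop predicate returns true.
fn run_steps(
    max: u32,
    mut on_step_finish: impl FnMut(&Step),
    stop_when: impl Fn(&Step) -> bool,
) -> u32 {
    let mut completed = 0;
    for id in 1..=max {
        let step = Step { id };
        // ...model call and tool execution would happen here...
        on_step_finish(&step);
        completed = id;
        if stop_when(&step) {
            break; // loop control: stop early
        }
    }
    completed
}
```

A `stop_when`-style predicate such as `|s| s.id == 3` would halt this loop after three steps regardless of `max`.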
Execution
Build first, then execute.
generate_text()
Non-streaming execution. Returns when generation is complete.
```rust
let response = request.generate_text().await?;
println!("{}", response.text());
```

stream_text()
Streaming execution. Emits incremental chunks as tokens arrive.
`LanguageModelStreamChunkType` can include:

- `Start`
- `Text(String)`
- `Reasoning(String)`
- `End(AssistantMessage)`
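For reference, those variants could be modeled roughly as the enum below; the field types are assumptions, with `AssistantMessage` simplified to a `String` to keep the sketch self-contained:

```rust
// Rough stand-in for the chunk variants listed above.
enum LanguageModelStreamChunkType {
    Start,
    Text(String),
    Reasoning(String),
    End(String), // stands in for End(AssistantMessage)
}

// Example consumer: classify a chunk by kind.
fn describe(chunk: &LanguageModelStreamChunkType) -> &'static str {
    match chunk {
        LanguageModelStreamChunkType::Start => "start",
        LanguageModelStreamChunkType::Text(_) => "text",
        LanguageModelStreamChunkType::Reasoning(_) => "reasoning",
        LanguageModelStreamChunkType::End(_) => "end",
    }
}
```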
```rust
let response = request.stream_text().await?;
let mut stream = response.stream;

while let Some(chunk) = stream.next().await {
    if let LanguageModelStreamChunkType::Text(text) = chunk {
        print!("{}", text);
    }
}
```

Response Types Reference
Both `GenerateTextResponse` and `StreamTextResponse` expose helpers for final output and execution metadata.
> [!NOTE]
> On `StreamTextResponse`, metadata accessors are async and should be called after the stream has been consumed.
| Method | Description |
|---|---|
| `text()` | Text from the final assistant message |
| `content()` | Final assistant content (excluding reasoning) |
| `usage()` | Aggregated token usage across all steps |
| `messages()` | Full conversation message history |
| `stop_reason()` | Why generation ended (`Finish`, `Hook`, `Error`, etc.) |
| `steps()` | All recorded steps in chronological order |
| `last_step()` | Most recent step |
| `step(id)` | Step lookup by ID |
| `tool_calls()` | All tool calls issued during execution |
| `tool_results()` | All tool results collected during execution |
Non-Streaming Example
```rust
let response = LanguageModelRequest::builder()
    .model(OpenAI::gpt_5())
    .prompt("What is 2+2?")
    .build()
    .generate_text()
    .await?;

println!("Text: {:?}", response.text());
println!("Usage: {:?}", response.usage());
println!("Stop Reason: {:?}", response.stop_reason());
println!("Final Content: {:?}", response.content());
```

Streaming Example
```rust
let response = LanguageModelRequest::builder()
    .model(OpenAI::gpt_5())
    .prompt("Write a short story.")
    .build()
    .stream_text()
    .await?;

let mut stream = response.stream;
while let Some(chunk) = stream.next().await {
    if let LanguageModelStreamChunkType::Text(text) = chunk {
        print!("{}", text);
    }
}

// Access final metadata after stream consumption
let final_usage = response.usage().await;
let steps = response.steps().await;
let reason = response.stop_reason().await;
```

Next Steps
- Learn how to create Custom Tools
- Use Structured Output for reliable typed data
- Build multi-step workflows with Agents