Getting Started

Basic Usage

The AI SDK standardizes integrating artificial intelligence (AI) models across supported providers. This lets developers focus on building great AI applications instead of wasting time on technical details.

For example, here’s how you can generate text with various models using the AI SDK.

Enable the Provider of your choice

You can enable the provider you want through its feature flag, e.g. for OpenAI:

cargo add aisdk --features openai # anthropic, google or any other provider
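The cargo add command above records the dependency in your Cargo.toml. If you prefer to edit the manifest by hand, the equivalent entry looks roughly like this (the version is a placeholder; use the latest published release of aisdk):

```toml
[dependencies]
# Enable only the providers you need via feature flags,
# e.g. "openai", "anthropic", or "google".
aisdk = { version = "*", features = ["openai"] }
```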

Generating Text

To generate text, use the LanguageModelRequest builder and call the generate_text() method. It returns a GenerateTextResponse containing the generated text along with other information such as token usage statistics, the stop reason, and tool call results.

You can find more info about generating text here.

OpenAI:

use aisdk::{
    core::LanguageModelRequest,
    providers::OpenAI,
};

#[tokio::main]
async fn main() {
    let result = LanguageModelRequest::builder()
        .model(OpenAI::gpt_5())
        .prompt("What is the meaning of life?")
        .build()
        .generate_text() // stream_text() for streaming
        .await
        .unwrap();

    println!("{:?}", result.text());
}
Anthropic:

use aisdk::{
    core::LanguageModelRequest,
    providers::Anthropic,
};

#[tokio::main]
async fn main() {
    let result = LanguageModelRequest::builder()
        .model(Anthropic::claude_opus_4_5())
        .prompt("What is the meaning of life?")
        .build()
        .generate_text() // stream_text() for streaming
        .await
        .unwrap();

    println!("{:?}", result.text());
}
Google:

use aisdk::{
    core::LanguageModelRequest,
    providers::Google,
};

#[tokio::main]
async fn main() {
    let result = LanguageModelRequest::builder()
        .model(Google::gemini_3_flash_preview())
        .prompt("What is the meaning of life?")
        .build()
        .generate_text() // stream_text() for streaming
        .await
        .unwrap();

    println!("{:?}", result.text());
}

Streaming Text

To stream text, use the LanguageModelRequest builder and call the stream_text() method. It returns a StreamTextResponse containing the generated text along with other information such as token usage statistics, the stop reason, and tool call results.

You can find more info about streaming text here.

OpenAI:

use aisdk::{
    core::{LanguageModelRequest, LanguageModelStreamChunkType},
    providers::OpenAI,
};
use futures::StreamExt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut stream = LanguageModelRequest::builder()
        .model(OpenAI::gpt_5())
        .prompt("What is the meaning of life?")
        .build()
        .stream_text()
        .await?;

    while let Some(chunk) = stream.next().await {
        if let LanguageModelStreamChunkType::Text(text) = chunk {
            println!("Streaming text: {}", text);
        }
    }

    Ok(())
}
Anthropic:

use aisdk::{
    core::{LanguageModelRequest, LanguageModelStreamChunkType},
    providers::Anthropic,
};
use futures::StreamExt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut stream = LanguageModelRequest::builder()
        .model(Anthropic::claude_opus_4_5())
        .prompt("What is the meaning of life?")
        .build()
        .stream_text()
        .await?;

    while let Some(chunk) = stream.next().await {
        if let LanguageModelStreamChunkType::Text(text) = chunk {
            println!("Streaming text: {}", text);
        }
    }

    Ok(())
}
Google:

use aisdk::{
    core::{LanguageModelRequest, LanguageModelStreamChunkType},
    providers::Google,
};
use futures::StreamExt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut stream = LanguageModelRequest::builder()
        .model(Google::gemini_3_flash_preview())
        .prompt("What is the meaning of life?")
        .build()
        .stream_text()
        .await?;

    while let Some(chunk) = stream.next().await {
        if let LanguageModelStreamChunkType::Text(text) = chunk {
            println!("Streaming text: {}", text);
        }
    }

    Ok(())
}

Next Steps