LLM Integration

LLMUtils Overview

The LLMUtils class in src/utils/llm provides a unified interface for interacting with different LLM providers, supporting both the OpenAI and OpenRouter APIs.

// Initialize LLMUtils
import { LLMUtils } from "../utils/llm";
const llmUtils = new LLMUtils();

# Environment variables needed (e.g. in .env)
OPENAI_API_KEY=your-openai-api-key
OPENROUTER_API_KEY=your-openrouter-api-key
APP_URL=http://localhost:3000   # Required for OpenRouter

Text Generation

Generate text responses using different LLM models:

// Basic text generation (the prompt string here is illustrative)
const prompt = "Summarize the latest message from the user.";
const response = await llmUtils.getTextFromLLM(
	prompt,
	"anthropic/claude-3-sonnet"
);

// Streaming responses
await llmUtils.getTextFromLLMStream(
	prompt,
	"anthropic/claude-3-sonnet",
	(token) => {
		// Handle each token as it arrives
		console.log(token);
	}
);

Structured Output

Get structured JSON responses using Zod schemas for type safety:

Boolean Decisions

Get simple true/false decisions from the LLM:
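For example (the method name `getBooleanFromLLM` is an assumption inferred from the naming pattern of the other methods; the normalization helper is a sketch of what such a method typically does internally):

```typescript
// Hypothetical call -- getBooleanFromLLM is an assumed method name:
// const shouldReply: boolean = await llmUtils.getBooleanFromLLM(
//   "Does this message need a reply? Message: 'What time is it?'"
// );

// A boolean helper typically constrains the model to a yes/no answer
// and then normalizes the raw text into a real boolean:
function parseBooleanAnswer(raw: string): boolean {
  const normalized = raw.trim().toLowerCase();
  if (normalized.startsWith("yes") || normalized === "true") return true;
  if (normalized.startsWith("no") || normalized === "false") return false;
  throw new Error(`Ambiguous boolean answer: ${raw}`);
}
```

Throwing on ambiguous answers (rather than defaulting to false) makes prompt problems visible early.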

Image Analysis

Process images and get text descriptions or structured analysis:
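The exact LLMUtils image method is not shown here, but an OpenAI-style multimodal request pairs a text instruction with an image URL. A minimal sketch of that message shape (the helper name is hypothetical):

```typescript
// Hypothetical helper: builds the multimodal message an image-capable
// chat-completions endpoint expects.
function buildImageAnalysisMessage(instruction: string, imageUrl: string) {
  return {
    role: "user" as const,
    content: [
      { type: "text" as const, text: instruction },
      { type: "image_url" as const, image_url: { url: imageUrl } },
    ],
  };
}

const message = buildImageAnalysisMessage(
  "Describe this image in one sentence.",
  "https://example.com/photo.jpg"
);
```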

Model Selection

LLMSize.SMALL

  • Uses gpt-4o-mini

  • Faster response times

  • Lower cost per request

  • Good for simple decisions

LLMSize.LARGE

  • Uses gpt-4o

  • Better reasoning

  • More nuanced responses

  • Complex analysis tasks
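In code, size selection might look like the following sketch. The enum values mirror the model names listed above; in the real codebase, assume LLMSize is exported from src/utils/llm rather than redefined locally:

```typescript
// Sketch only: LLMSize would normally come from src/utils/llm.
enum LLMSize {
  SMALL = "gpt-4o-mini", // faster, cheaper; simple decisions
  LARGE = "gpt-4o",      // better reasoning; complex analysis
}

// Pick the smallest model that can handle the task.
function pickModel(task: "simple" | "complex"): LLMSize {
  return task === "simple" ? LLMSize.SMALL : LLMSize.LARGE;
}
```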

Best Practices

  • Use structured output for predictable responses

  • Stream responses for better user experience

  • Choose appropriate model size for the task

  • Handle API errors gracefully

  • Monitor token usage and costs

  • Cache responses when possible
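Several of these practices can be combined in a thin wrapper. This is a hypothetical sketch, not part of LLMUtils: it caches responses by prompt and degrades gracefully when the API call fails:

```typescript
// Hypothetical wrapper: in-memory prompt cache plus graceful error handling.
const cache = new Map<string, string>();

async function cachedGenerate(
  generate: (prompt: string) => Promise<string>,
  prompt: string
): Promise<string> {
  const hit = cache.get(prompt);
  if (hit !== undefined) return hit; // cache hit: no API call, no cost

  try {
    const text = await generate(prompt);
    cache.set(prompt, text);
    return text;
  } catch (err) {
    // Fall back rather than crashing the caller.
    console.error("LLM request failed:", err);
    return "Sorry, I couldn't generate a response right now.";
  }
}
```

A Map keyed by the raw prompt is the simplest possible cache; production code would likely add a TTL and a size bound.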

