
Overview

Liz AI Assistant

What is Liz?

Liz is a lightweight framework for building AI agents, inspired by Eliza from AI16Z but rebuilt with a strong focus on developer experience and control. Unlike other agent frameworks that abstract away the complexities, Liz provides direct access to prompts and model interactions, giving developers the power to build exactly what they need.

Key Motivations

  • Direct LLM Control: Full access to prompts and model interactions

  • Zero Magic: Minimal abstractions for maximum understanding

  • Ultimate Flexibility: Build exactly what you need, how you need it

Core Philosophy

Liz follows an Express-style architecture, using middleware chains for processing agent interactions. This approach provides a clear, linear flow that developers are already familiar with, making it easy to understand and extend.

We believe the best way to build AI agents is to work closely with the prompts and build a set of composable units that can be strung together to make powerful agentic loops. Our approach is informed by Anthropic's research on constructing reliable AI systems.

Key Features

Agent-based Architecture

Build agents with distinct personalities, capabilities, and interaction styles using a flexible character system.

Composable Middleware

Process interactions through customizable middleware chains for validation, memory loading, context wrapping, and more.

Memory System

Built-in Prisma-based memory system for storing and retrieving agent interactions with flexible querying.

LLM Integration

Support for multiple LLM providers through a unified interface, with structured outputs and streaming capabilities.

When to Use Liz?

Liz is perfect for developers who:

  • Need fine-grained control over prompt engineering and LLM interactions

  • Want to build minimal or highly specialized AI agents

  • Prefer explicit, understandable code over magical abstractions

  • Are building production-ready AI applications that need reliability and control
Liz Architecture

System Diagrams (not reproduced here): Overall System Architecture, Middleware Flow, Memory System, Agent Interaction Flow.

Introduction

Welcome to Liz, a lightweight framework for building AI agents.


FAQ

Installation & Setup

What are the minimum requirements?

Liz requires Node.js 18+ and either SQLite or PostgreSQL for the database. For development, SQLite is recommended as it requires no additional setup. For production, PostgreSQL is recommended for better scalability.

Why am I getting environment variable errors?

Make sure you've copied .env.example to .env and filled in all required variables:

  • DATABASE_URL for your database connection

  • OPENAI_API_KEY for OpenAI API access

  • OPENROUTER_API_KEY for OpenRouter API access

  • APP_URL for OpenRouter callbacks

How do I switch from SQLite to PostgreSQL?

Update your DATABASE_URL in .env and modify prisma/schema.prisma:

Then run prisma migrate to update your database:

LLM Integration

Can I use different LLM providers?

Yes, Liz supports both OpenAI and OpenRouter APIs. OpenRouter gives you access to models from Anthropic, Google, and others. You can specify the model when calling LLMUtils methods:

How do I handle rate limits?

Implement exponential backoff and retry logic in your routes:

Performance

How can I optimize memory usage?

Several strategies can help manage memory usage:

  • Limit the number of memories loaded per request

  • Implement memory pruning for old conversations

  • Use database indexing effectively

  • Consider memory summarization for long conversations
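These strategies can be combined in a periodic pruning pass. The sketch below is a hypothetical helper, not part of Liz: it decides which memory rows to delete, keeping only the newest entries per room and dropping anything past an age cutoff. The `MemoryRecord` shape and the helper name are illustrative.

// Hypothetical pruning helper (not a Liz API): keep at most `maxPerRoom`
// of the newest memories in each room, and drop anything older than
// `maxAgeDays`. Returns the ids that should be deleted.
interface MemoryRecord {
	id: string;
	roomId: string;
	createdAt: Date;
}

function selectMemoriesToPrune(
	memories: MemoryRecord[],
	maxPerRoom: number,
	maxAgeDays: number,
	now: Date = new Date()
): string[] {
	const cutoff = now.getTime() - maxAgeDays * 24 * 60 * 60 * 1000;

	// Group memories by room
	const byRoom = new Map<string, MemoryRecord[]>();
	for (const m of memories) {
		const list = byRoom.get(m.roomId) ?? [];
		list.push(m);
		byRoom.set(m.roomId, list);
	}

	const toDelete: string[] = [];
	for (const list of byRoom.values()) {
		// Sort newest first, then drop overflow and expired entries
		list.sort((a, b) => b.createdAt.getTime() - a.createdAt.getTime());
		list.forEach((m, index) => {
			if (index >= maxPerRoom || m.createdAt.getTime() < cutoff) {
				toDelete.push(m.id);
			}
		});
	}
	return toDelete;
}

The returned ids could then be passed to a bulk delete such as prisma.memory.deleteMany({ where: { id: { in: ids } } }).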

How do I handle high traffic?

For high-traffic applications:

  • Use PostgreSQL instead of SQLite

  • Implement request queuing

  • Cache common responses

  • Use load balancing with multiple instances
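Caching common responses can be as simple as an in-memory map with a TTL. The sketch below is illustrative only (the `ResponseCache` class is not a Liz API); with multiple load-balanced instances you would want a shared store such as Redis instead.

// Minimal in-memory TTL cache for LLM responses (illustrative only).
// Entries are keyed by prompt and expire after `ttlMs` milliseconds.
class ResponseCache {
	private entries = new Map<string, { value: string; expiresAt: number }>();

	constructor(private ttlMs: number) {}

	get(key: string, now: number = Date.now()): string | undefined {
		const entry = this.entries.get(key);
		if (!entry) return undefined;
		if (now > entry.expiresAt) {
			this.entries.delete(key); // Expired: drop and report a miss
			return undefined;
		}
		return entry.value;
	}

	set(key: string, value: string, now: number = Date.now()): void {
		this.entries.set(key, { value, expiresAt: now + this.ttlMs });
	}
}

// Wrap an expensive call so repeated identical prompts hit the cache.
async function cached(
	cache: ResponseCache,
	prompt: string,
	fn: (prompt: string) => Promise<string>
): Promise<string> {
	const hit = cache.get(prompt);
	if (hit !== undefined) return hit;
	const value = await fn(prompt);
	cache.set(prompt, value);
	return value;
}

Note that caching only pays off for prompts that repeat exactly; per-user conversational context usually makes responses uncacheable.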

Twitter Integration

Why is my Twitter bot not working?

Common Twitter integration issues:

  • Incorrect credentials in environment variables

  • Missing 2FA secret for accounts with 2FA enabled

  • Rate limiting from too frequent posting

  • Network issues preventing login

Use dryRun mode to test your bot without posting:

Contributing

How can I contribute to Liz?

We welcome contributions! Here's how to get started:

  1. Fork the repository

  2. Create a feature branch

  3. Make your changes

  4. Add tests if applicable

  5. Submit a pull request

Please follow our coding standards and include clear commit messages.


// prisma/schema.prisma
datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}
npm run prisma:migrate
// OpenAI GPT-4
await llmUtils.getTextFromLLM(prompt, "openai/gpt-4");

// Anthropic Claude
await llmUtils.getTextFromLLM(prompt, "anthropic/claude-3-sonnet");

// Google PaLM
await llmUtils.getTextFromLLM(prompt, "google/palm-2");
async function withRetry(fn, maxRetries = 3) {
	let retries = 0;
	while (retries < maxRetries) {
		try {
			return await fn();
		} catch (error) {
			if (!error.message.includes("rate limit")) throw error;
			retries++;
			await new Promise((r) => setTimeout(r, Math.pow(2, retries) * 1000));
		}
	}
	throw new Error("Max retries exceeded");
}
TWITTER_DRY_RUN=true

Quick Start

Installation

Environment Setup

Create a .env file in your project root with the following variables:

Initialize Database

Create Your First Agent

Create a new file src/agents/assistant.ts:

Set Up Express Server

Create src/server.ts to handle agent interactions:

Run the Server

Test Your Agent

Send a test request to your agent:

Agents

Character Definition

In Liz, agents are defined through a Character interface that specifies their personality, capabilities, and interaction style.

Adding Routes

LLM Integration

LLMUtils Overview

The LLMUtils class in src/utils/llm provides a unified interface for interacting with different LLM providers, supporting both OpenAI and OpenRouter APIs.

Architecture

For detailed visual representations of the system architecture, see the System Diagrams section.

Express-Inspired Flow

Liz uses an Express-style middleware architecture where each request flows through a series of middleware functions. This approach provides a clear, predictable processing pipeline that's easy to understand and extend.

# Clone the repository
git clone <your-repo>
cd liz

# Install dependencies
pnpm install
# Database configuration (choose one)
DATABASE_URL="postgresql://user:password@localhost:5432/dbname"
# Or for SQLite:
DATABASE_URL="file:./prisma/dev.db"

# LLM API Keys
OPENAI_API_KEY="your-openai-api-key"
OPENROUTER_API_KEY="your-openrouter-api-key"

# Application URL (required for OpenRouter)
APP_URL="http://localhost:3000"

Routes define how an agent handles different types of interactions. Each route has a name, description, and handler function.

System Prompt

The system prompt defines the core behavior and role of the agent. It's accessed through getSystemPrompt():

Agent Context

The agent context combines various elements of the character definition to provide rich context for LLM interactions:

Best Practices

Character Definition

  • Keep system prompts focused and specific

  • Provide diverse conversation examples

  • Use consistent style guidelines

  • Include realistic background details

Route Design

  • Create specialized routes for specific tasks

  • Use clear, descriptive route names

  • Handle errors gracefully

  • Consider response formats


Text Generation

Generate text responses using different LLM models:

Structured Output

Get structured JSON responses using Zod schemas for type safety:

Boolean Decisions

Get simple true/false decisions from the LLM:

Image Analysis

Process images and get text descriptions or structured analysis:

Model Selection

LLMSize.SMALL

  • Uses gpt-4o-mini

  • Faster response times

  • Lower cost per request

  • Good for simple decisions

LLMSize.LARGE

  • Uses gpt-4o

  • Better reasoning

  • More nuanced responses

  • Complex analysis tasks

Best Practices

  • Use structured output for predictable responses

  • Stream responses for better user experience

  • Choose appropriate model size for the task

  • Handle API errors gracefully

  • Monitor token usage and costs

  • Cache responses when possible
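For monitoring token usage, a common rough heuristic is about four characters per token for English text; this sketch is only a budgeting aid (the function names are illustrative), and an actual tokenizer such as tiktoken should be used for exact counts.

// Rough token estimate for English text (~4 characters per token).
// Budgeting heuristic only; use a real tokenizer for exact counts.
function estimateTokens(text: string): number {
	return Math.ceil(text.length / 4);
}

// Example pre-flight check before sending a prompt (illustrative).
function fitsBudget(prompt: string, maxTokens: number): boolean {
	return estimateTokens(prompt) <= maxTokens;
}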


# Initialize the database
npm run init-db
import { Character } from "../types";
import { BaseAgent } from "../agent";

const assistantCharacter: Character = {
	name: "Assistant",
	agentId: "assistant_1",
	system: "You are a helpful assistant.",
	bio: ["A knowledgeable AI assistant"],
	lore: ["Created to help users with various tasks"],
	messageExamples: [
		[
			{ user: "user1", content: { text: "Hello!" } },
			{ user: "Assistant", content: { text: "Hi! How can I help?" } },
		],
	],
	postExamples: [],
	topics: ["general help", "task assistance"],
	style: {
		all: ["helpful", "friendly"],
		chat: ["conversational"],
		post: ["clear", "concise"],
	},
	adjectives: ["helpful", "knowledgeable"],
	routes: [],
};

export const assistant = new BaseAgent(assistantCharacter);
import express from "express";
import { AgentFramework } from "./framework";
import { standardMiddleware } from "./middleware";
import { assistant } from "./agents/assistant";
import { InputSource, InputType } from "./types";

const app = express();
app.use(express.json());

const framework = new AgentFramework();
standardMiddleware.forEach((middleware) => framework.use(middleware));

app.post("/agent/input", (req, res) => {
	const input = {
		source: InputSource.NETWORK,
		userId: req.body.userId,
		agentId: assistant.getAgentId(),
		roomId: `room_${req.body.userId}`,
		type: InputType.TEXT,
		text: req.body.text,
	};

	framework.process(input, assistant, res);
});

app.listen(3000, () => {
	console.log("Server running on http://localhost:3000");
});
# Start the development server
npm run dev
curl -X POST http://localhost:3000/agent/input \
  -H "Content-Type: application/json" \
  -d '{
    "userId": "test_user",
    "text": "Hello, assistant!"
  }'
import { Character } from "../types";
import { BaseAgent } from "../agent";

const businessAdvisor: Character = {
	name: "Stern",
	agentId: "stern_advisor",
	system:
		"You are Stern, a no-nonsense business advisor known for direct, practical advice.",
	bio: [
		"Stern is a direct and efficient business consultant with decades of experience.",
		"Started as a factory floor manager before rising to consultant status.",
	],
	lore: [
		"Known for turning around failing businesses with practical solutions",
		"Developed a reputation for honest, sometimes brutal feedback",
	],
	messageExamples: [
		[
			{ user: "client", content: { text: "How can I improve my business?" } },
			{
				user: "Stern",
				content: { text: "Specifics. What are your current metrics?" },
			},
		],
	],
	postExamples: [
		"Here's a 5-step plan to optimize your operations...",
		"Three critical mistakes most startups make:",
	],
	topics: ["business", "strategy", "efficiency", "management"],
	style: {
		all: ["direct", "professional", "analytical"],
		chat: ["focused", "solution-oriented"],
		post: ["structured", "actionable"],
	},
	adjectives: ["efficient", "practical", "experienced"],
	routes: [],
};

export const stern = new BaseAgent(businessAdvisor);
// Basic conversation route (assumes llmUtils, analysisSchema, and LLMSize
// are set up as shown in the LLM Integration section)
stern.addRoute({
	name: "conversation",
	description: "Handle natural conversation about business topics",
	handler: async (context, req, res) => {
		const response = await llmUtils.getTextFromLLM(
			context,
			"anthropic/claude-3-sonnet"
		);
		await res.send(response);
	},
});

// Specialized business analysis route
stern.addRoute({
	name: "analyze_metrics",
	description: "Analyze business metrics and provide recommendations",
	handler: async (context, req, res) => {
		const analysis = await llmUtils.getObjectFromLLM(
			context,
			analysisSchema,
			LLMSize.LARGE
		);
		await res.send(analysis);
	},
});
// Get the agent's system prompt
const systemPrompt = agent.getSystemPrompt();

// Example system prompt structure
const exampleSystemPrompt = `You are ${character.name}, ${character.system}

Key Characteristics:
${character.adjectives.join(", ")}

Style Guidelines:
- All interactions: ${character.style.all.join(", ")}
- Chat responses: ${character.style.chat.join(", ")}
- Public posts: ${character.style.post.join(", ")}

Areas of Focus:
${character.topics.join(", ")}`;
// Get the full agent context
const context = agent.getAgentContext();

// Context structure
<SYSTEM_PROMPT>
[System prompt as shown above]
</SYSTEM_PROMPT>

<BIO_CONTEXT>
[Random selection from bio array]
</BIO_CONTEXT>

<LORE_CONTEXT>
[Random selection from lore array]
</LORE_CONTEXT>

<MESSAGE_EXAMPLES>
[Selected conversation examples]
</MESSAGE_EXAMPLES>

<POST_EXAMPLES>
[Selected post examples]
</POST_EXAMPLES>

<STYLE_GUIDELINES>
[Style preferences for different interaction types]
</STYLE_GUIDELINES>
// Initialize LLMUtils
import { LLMUtils } from "../utils/llm";
const llmUtils = new LLMUtils();

// Environment variables needed:
//   OPENAI_API_KEY="your-openai-api-key"
//   OPENROUTER_API_KEY="your-openrouter-api-key"
//   APP_URL="http://localhost:3000"  (required for OpenRouter)
// Basic text generation
const response = await llmUtils.getTextFromLLM(
	prompt,
	"anthropic/claude-3-sonnet"
);

// Streaming responses
await llmUtils.getTextFromLLMStream(
	prompt,
	"anthropic/claude-3-sonnet",
	(token) => {
		// Handle each token as it arrives
		console.log(token);
	}
);
import { z } from "zod";
import { LLMSize } from "../types";

// Define your schema
const analysisSchema = z.object({
	sentiment: z.string(),
	topics: z.array(z.string()),
	confidence: z.number(),
	summary: z.string(),
});

// Get structured response
const analysis = await llmUtils.getObjectFromLLM(
	prompt,
	analysisSchema,
	LLMSize.LARGE
);

// Type-safe access to fields
console.log(analysis.sentiment);
console.log(analysis.topics);
// Get boolean response
const shouldRespond = await llmUtils.getBooleanFromLLM(
	"Should the agent respond to this message?",
	LLMSize.SMALL
);

if (shouldRespond) {
	// Handle response
}
// Get image descriptions
const description = await llmUtils.getImageDescriptions(imageUrls);

// Analyze images with text context
const response = await llmUtils.getTextWithImageFromLLM(
	prompt,
	imageUrls,
	"anthropic/claude-3-sonnet"
);

// Get structured output from images
const analysis = await llmUtils.getObjectFromLLMWithImages(
	prompt,
	analysisSchema,
	imageUrls,
	LLMSize.LARGE
);
Standard Middleware Pipeline

  1. validateInput: Ensures required fields are present

  2. loadMemories: Retrieves relevant conversation history

  3. wrapContext: Builds the context for LLM interactions

  4. createMemoryFromInput: Stores the user's input

  5. router: Determines and executes the appropriate route handler
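The mechanics of an Express-style chain can be sketched in a few lines. This is a simplified illustration, not Liz's actual implementation: the middleware signature is reduced to (req, next), whereas Liz middleware also receives a res object.

// Simplified Express-style chain runner: each middleware may do work
// before and after awaiting next(), and the chain stops early if next()
// is never called.
type Middleware<Req> = (req: Req, next: () => Promise<void>) => Promise<void>;

async function runChain<Req>(middleware: Middleware<Req>[], req: Req): Promise<void> {
	let index = -1;
	async function dispatch(i: number): Promise<void> {
		if (i <= index) throw new Error("next() called multiple times");
		index = i;
		const fn = middleware[i];
		if (!fn) return; // End of chain
		await fn(req, () => dispatch(i + 1));
	}
	await dispatch(0);
}

Each of the five standard steps above is one entry in such an array, which is why ordering matters: loadMemories must run before wrapContext, which must run before the router.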

Agent Framework

The AgentFramework class in src/framework orchestrates the middleware pipeline and handles request processing:

Agents vs. Middleware

Agent

  • Defines personality and capabilities

  • Holds system prompt and style context

  • Manages route definitions

  • Provides agent-specific context

Middleware

  • Handles request processing

  • Manages memory operations

  • Builds context for LLM

  • Routes requests to handlers

Route Handling

Routes define how an agent handles different types of interactions. The router middleware uses LLM to select the most appropriate handler:

Request Flow Example



Memory System

Prisma Setup

Liz uses Prisma as its ORM, supporting both SQLite and PostgreSQL databases. The schema defines the structure for storing memories and tweets.


import { AgentFramework } from "./framework";
import { standardMiddleware } from "./middleware";

const framework = new AgentFramework();

// Add middleware
standardMiddleware.forEach((middleware) => framework.use(middleware));

// Process requests
framework.process(input, agent, res);
// Adding a route to an agent
agent.addRoute({
	name: "conversation",
	description: "Handle natural conversation",
	handler: async (context, req, res) => {
		const response = await llmUtils.getTextFromLLM(
			context,
			"anthropic/claude-3-sonnet"
		);
		await res.send(response);
	},
});
1. Client sends request to /agent/input
   ↓
2. validateInput checks required fields
   ↓
3. loadMemories fetches conversation history
   ↓
4. wrapContext builds prompt with memories
   ↓
5. createMemoryFromInput stores request
   ↓
6. router selects appropriate handler
   ↓
7. handler processes request with LLM
   ↓
8. Response sent back to client
Loading Memories

The loadMemories middleware retrieves relevant conversation history for each request:

Creating Memories

The createMemoryFromInput middleware stores new interactions in the database:

Memory Context

The wrapContext middleware formats memories into a structured context for LLM interactions:

Performance Considerations

Memory Limits

  • Default limit of 100 recent memories

  • Configurable through middleware options

  • Consider token limits of your LLM

  • Use indexes for faster queries

Database Tips

  • SQLite for development/small apps

  • PostgreSQL for production/scale

  • Regular database maintenance

  • Monitor memory table growth


// prisma/schema.prisma
datasource db {
  provider = "sqlite" // or "postgresql"
  url      = env("DATABASE_URL")
}

generator client {
  provider = "prisma-client-js"
}

model Memory {
  id          String   @id @default(uuid())
  userId      String
  agentId     String
  roomId      String
  content     String   // Stores JSON as string
  type        String
  generator   String   // "llm" or "external"
  createdAt   DateTime @default(now())

  @@index([roomId])
  @@index([userId, agentId])
  @@index([type])
}

model Tweet {
  id             String    @id
  text           String
  userId         String
  username       String
  conversationId String?
  inReplyToId    String?
  createdAt      DateTime  @default(now())
  permanentUrl   String?

  @@index([userId])
  @@index([conversationId])
}
// src/middleware/load-memories.ts
export function createLoadMemoriesMiddleware(
	options: LoadMemoriesOptions = {}
): AgentMiddleware {
	const { limit = 100 } = options;

	return async (req, res, next) => {
		const memories = await prisma.memory.findMany({
			where: {
				userId: req.input.userId,
			},
			orderBy: {
				createdAt: "desc",
			},
			take: limit,
		});

		req.memories = memories.map((memory) => ({
			id: memory.id,
			userId: memory.userId,
			agentId: memory.agentId,
			roomId: memory.roomId,
			type: memory.type,
			createdAt: memory.createdAt,
			generator: memory.generator,
			content: JSON.parse(memory.content),
		}));

		await next();
	};
}
// src/middleware/create-memory.ts
export const createMemoryFromInput: AgentMiddleware = async (
	req,
	res,
	next
) => {
	await prisma.memory.create({
		data: {
			userId: req.input.userId,
			agentId: req.input.agentId,
			roomId: req.input.roomId,
			type: req.input.type,
			generator: "external",
			content: JSON.stringify(req.input),
		},
	});

	await next();
};

// Creating LLM response memories
await prisma.memory.create({
	data: {
		userId: req.input.userId,
		agentId: req.input.agentId,
		roomId: req.input.roomId,
		type: "agent",
		generator: "llm",
		content: JSON.stringify({ text: response }),
	},
});
// src/middleware/wrap-context.ts
function formatMemories(memories: Memory[]): string {
  return memories
    .reverse()
    .map((memory) => {
      const content = memory.content;
      if (memory.generator === "external") {
        return `[${memory.createdAt}] User ${memory.userId}: ${content.text}`;
      } else if (memory.generator === "llm") {
        return `[${memory.createdAt}] You: ${content.text}`;
      }
      return ""; // Fallback for other generator types
    })
    .join("\n\n");
}

// Final context structure
<PREVIOUS_CONVERSATION>
${memories}
</PREVIOUS_CONVERSATION>

<AGENT_CONTEXT>
${agentContext}
</AGENT_CONTEXT>

<CURRENT_USER_INPUT>
${currentInput}
</CURRENT_USER_INPUT>

Examples

CLI-based Agent

Create a simple command-line interface for interacting with your agent:

Twitter Bot

Create a Twitter bot that posts regularly and responds to mentions:

Memory-Aware Agent

Create an agent that uses conversation history for context:

Custom Middleware

Create custom middleware for specialized processing:

// src/example/cli.ts
import express from "express";
import { AgentFramework } from "../framework";
import { standardMiddleware } from "../middleware";
import { Character, InputSource, InputType } from "../types";
import { BaseAgent } from "../agent";
import readline from "readline";
import { LLMUtils } from "../utils/llm";

const llmUtils = new LLMUtils();

// Define your agent
const assistant: Character = {
	name: "Assistant",
	agentId: "cli_assistant",
	system: "You are a helpful CLI assistant.",
	bio: ["A command-line AI assistant"],
	lore: ["Created to help users through the terminal"],
	messageExamples: [
		[
			{ user: "user1", content: { text: "Hello!" } },
			{ user: "Assistant", content: { text: "Hi! How can I help?" } },
		],
	],
	postExamples: [],
	topics: ["general help", "cli", "terminal"],
	style: {
		all: ["helpful", "concise"],
		chat: ["friendly"],
		post: ["clear"],
	},
	adjectives: ["helpful", "efficient"],
	routes: [],
};

// Initialize framework
const app = express();
app.use(express.json());
const framework = new AgentFramework();
standardMiddleware.forEach((middleware) => framework.use(middleware));

// Create agent instance
const agent = new BaseAgent(assistant);

// Add conversation route
agent.addRoute({
	name: "conversation",
	description: "Handle natural conversation",
	handler: async (context, req, res) => {
		const response = await llmUtils.getTextFromLLM(
			context,
			"anthropic/claude-3-sonnet"
		);
		await res.send(response);
	},
});

// Set up CLI interface
async function startCLI() {
	const rl = readline.createInterface({
		input: process.stdin,
		output: process.stdout,
	});

	console.log("\nCLI Assistant");
	console.log("=============");

	async function prompt() {
		rl.question("\nYou: ", async (text) => {
			try {
				const response = await framework.process(
					{
						source: InputSource.NETWORK,
						userId: "cli_user",
						agentId: agent.getAgentId(),
						roomId: "cli_session",
						type: InputType.TEXT,
						text: text,
					},
					agent
				);

				console.log("\nAssistant:", response);
				prompt();
			} catch (error) {
				console.error("\nError:", error);
				prompt();
			}
		});
	}

	prompt();
}

// Start server and CLI
const PORT = process.env.PORT || 3000;
app.listen(PORT, () => {
	console.log(`Server running on http://localhost:${PORT}`);
	startCLI();
});
// src/example/twitter-bot.ts
import { TwitterClient } from "@liz/twitter-client";
import { Character } from "../types";
import { BaseAgent } from "../agent";
import { LLMUtils } from "../utils/llm";

const llmUtils = new LLMUtils();

// Define Twitter bot character
const twitterBot: Character = {
	name: "TechNews",
	agentId: "tech_news_bot",
	system:
		"You are a tech news curator sharing insights about AI and technology.",
	bio: ["AI-powered tech news curator"],
	lore: ["Passionate about sharing tech insights"],
	messageExamples: [
		[
			{ user: "user1", content: { text: "What's new in AI?" } },
			{
				user: "TechNews",
				content: { text: "Here are the latest developments..." },
			},
		],
	],
	postExamples: [
		"🚀 Breaking: New developments in quantum computing...",
		"💡 AI Insight of the day: Understanding large language models...",
	],
	topics: ["AI", "technology", "programming", "tech news"],
	style: {
		all: ["informative", "engaging"],
		chat: ["helpful", "knowledgeable"],
		post: ["concise", "engaging"],
	},
	adjectives: ["tech-savvy", "insightful"],
	routes: [],
};

// Create agent
const agent = new BaseAgent(twitterBot);

// Add tweet generation route
agent.addRoute({
	name: "create_new_tweet",
	description: "Generate a new tweet about tech news",
	handler: async (context, req, res) => {
		const tweet = await llmUtils.getTextFromLLM(
			context,
			"anthropic/claude-3-sonnet"
		);
		await res.send(tweet);
	},
});

// Configure Twitter client
const config = {
	username: process.env.TWITTER_USERNAME,
	password: process.env.TWITTER_PASSWORD,
	email: process.env.TWITTER_EMAIL,
	twoFactorSecret: process.env.TWITTER_2FA_SECRET,
	retryLimit: 3,
	postIntervalHours: 4,
	pollingInterval: 5,
	dryRun: process.env.NODE_ENV !== "production",
};

// Start Twitter bot
async function startBot() {
	const twitter = new TwitterClient(agent, config);
	await twitter.start();
	console.log("Twitter bot started!");
}

startBot().catch(console.error);
// src/example/memory-agent.ts
import { AgentFramework } from "../framework";
import { standardMiddleware } from "../middleware";
import { Character, InputSource, InputType } from "../types";
import { BaseAgent } from "../agent";
import { prisma } from "../utils/db";
import { LLMUtils } from "../utils/llm";

const llmUtils = new LLMUtils();

// Define memory-aware agent
const memoryAgent: Character = {
	name: "Mentor",
	agentId: "mentor_agent",
	system:
		"You are a mentor who remembers past conversations to provide personalized guidance.",
	bio: ["An AI mentor with perfect memory"],
	lore: ["Uses conversation history to give contextual advice"],
	messageExamples: [],
	postExamples: [],
	topics: ["mentoring", "personal growth"],
	style: {
		all: ["personalized", "thoughtful"],
		chat: ["empathetic"],
		post: ["reflective"],
	},
	adjectives: ["understanding", "wise"],
	routes: [],
};

const agent = new BaseAgent(memoryAgent);

// Add conversation route with memory context
agent.addRoute({
	name: "conversation",
	description: "Handle conversation with memory context",
	handler: async (context, req, res) => {
		// Get recent memories for this user
		const memories = await prisma.memory.findMany({
			where: {
				userId: req.input.userId,
				agentId: req.input.agentId,
			},
			orderBy: {
				createdAt: "desc",
			},
			take: 10,
		});

		// Format memories for context
		const memoryContext = memories
			.map((m) => {
				const content = JSON.parse(m.content);
				return `[${m.createdAt}] ${content.text}`;
			})
			.join("\n");

		// Add memory context to prompt
		const promptWithMemory = `
Previous interactions:
${memoryContext}

Current conversation:
${context}`;

		const response = await llmUtils.getTextFromLLM(
			promptWithMemory,
			"anthropic/claude-3-sonnet"
		);

		// Store response in memory
		await prisma.memory.create({
			data: {
				userId: req.input.userId,
				agentId: req.input.agentId,
				roomId: req.input.roomId,
				type: "response",
				generator: "llm",
				content: JSON.stringify({ text: response }),
			},
		});

		await res.send(response);
	},
});

// Initialize framework
const framework = new AgentFramework();
standardMiddleware.forEach((middleware) => framework.use(middleware));

// Example usage
async function chat(text: string) {
	return framework.process(
		{
			source: InputSource.NETWORK,
			userId: "example_user",
			agentId: agent.getAgentId(),
			roomId: "example_room",
			type: InputType.TEXT,
			text,
		},
		agent
	);
}
// src/middleware/sentiment-analysis.ts
import { z } from "zod";
import { AgentMiddleware, LLMSize } from "../types";
import { LLMUtils } from "../utils/llm";

const sentimentSchema = z.object({
	sentiment: z.enum(["positive", "negative", "neutral"]),
	confidence: z.number(),
	explanation: z.string(),
});

export const analyzeSentiment: AgentMiddleware = async (req, res, next) => {
	const llmUtils = new LLMUtils();

	try {
		const analysis = await llmUtils.getObjectFromLLM(
			`Analyze the sentiment of this text: "${req.input.text}"`,
			sentimentSchema,
			LLMSize.SMALL
		);

		// Add sentiment to request context
		req.sentiment = analysis;

		await next();
	} catch (error) {
		await res.error(new Error(`Failed to analyze sentiment: ${error.message}`));
	}
};

// Usage in framework
const framework = new AgentFramework();
framework.use(validateInput);
framework.use(analyzeSentiment); // Add sentiment analysis
framework.use(loadMemories);
framework.use(wrapContext);
framework.use(router);

Twitter Integration

Configuration

Configure your Twitter client using environment variables and the twitterConfigSchema:

Setting Up the Client

Initialize and start the Twitter client with your agent:

Automated Posting

The client can automatically generate and post tweets at regular intervals:

Mention Monitoring

Monitor and respond to mentions automatically:

Thread Management

Handle tweet threads and conversations:

Memory Integration

Store tweets and maintain conversation context:

Best Practices

Rate Limiting

  • Use RequestQueue for API calls

  • Add delays between tweets

  • Handle API errors gracefully

  • Implement exponential backoff
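A RequestQueue in this spirit can be very small: run one task at a time with a fixed delay between tasks so calls never burst past the rate limit. The sketch below is illustrative, not the Twitter client's actual implementation; the class name matches the one mentioned above, but the constructor and method shapes are assumptions.

// Serial request queue (illustrative): runs one task at a time with a
// fixed delay between tasks, preserving submission order.
class RequestQueue {
	private queue: (() => Promise<void>)[] = [];
	private running = false;

	constructor(private delayMs: number) {}

	add<T>(task: () => Promise<T>): Promise<T> {
		return new Promise<T>((resolve, reject) => {
			this.queue.push(async () => {
				try {
					resolve(await task());
				} catch (err) {
					reject(err);
				}
			});
			void this.drain();
		});
	}

	private async drain(): Promise<void> {
		if (this.running) return;
		this.running = true;
		while (this.queue.length > 0) {
			const next = this.queue.shift()!;
			await next(); // Errors are routed to the caller's promise above
			await new Promise((r) => setTimeout(r, this.delayMs));
		}
		this.running = false;
	}
}

Combined with the withRetry helper from the FAQ, this gives both pacing (the queue) and recovery (the backoff) for Twitter API calls.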

Testing

  • Use dryRun mode for testing

  • Monitor tweet content

  • Test thread splitting

  • Verify mention handling

# Environment variables
TWITTER_USERNAME="your-username"
TWITTER_PASSWORD="your-password"
TWITTER_EMAIL="your-email"
TWITTER_2FA_SECRET="optional-2fa-secret"
TWITTER_POST_INTERVAL_HOURS=4
TWITTER_POLLING_INTERVAL=5 # minutes
TWITTER_DRY_RUN=true # For testing

// Configuration schema
const twitterConfigSchema = z.object({
  username: z.string().min(1, "Twitter username is required"),
  password: z.string().min(1, "Twitter password is required"),
  email: z.string().email("Valid email is required"),
  twoFactorSecret: z.string().optional(),
  retryLimit: z.number().int().min(1).default(5),
  postIntervalHours: z.number().int().min(1).default(4),
  enableActions: z.boolean().default(false)
});
import { TwitterClient } from "@liz/twitter-client";

const config = {
	username: process.env.TWITTER_USERNAME,
	password: process.env.TWITTER_PASSWORD,
	email: process.env.TWITTER_EMAIL,
	twoFactorSecret: process.env.TWITTER_2FA_SECRET,
	retryLimit: 3,
	postIntervalHours: 4,
	pollingInterval: 5,
	dryRun: process.env.NODE_ENV !== "production",
};

const twitter = new TwitterClient(agent, config);
await twitter.start(); // Starts posting & monitoring intervals
// Automatic posting loop
async generateAndPost() {
  const responseText = await this.fetchTweetContent({
    agentId: this.agent.getAgentId(),
    userId: "twitter_client",
    roomId: "twitter",
    text: "<SYSTEM> Generate a new tweet to post on your timeline </SYSTEM>",
    type: "text"
  });

  const tweets = await sendThreadedTweet(this, responseText);

  // Store tweets in memory
  for (const tweet of tweets) {
    await storeTweetIfNotExists({
      id: tweet.id,
      text: tweet.text,
      userId: this.config.username,
      username: this.config.username,
      conversationId: tweet.conversationId,
      permanentUrl: tweet.permanentUrl
    });
  }
}
// Check for new mentions
async checkInteractions() {
  const mentions = await this.getMentions();
  for (const mention of mentions) {
    if (mention.id <= this.lastCheckedTweetId) continue;
    await this.handleMention(mention);
    this.lastCheckedTweetId = mention.id;
  }
}

// Handle mention with agent
async handleMention(tweet) {
  const responseText = await this.fetchTweetContent({
    agentId: this.agent.getAgentId(),
    userId: `tw_user_${tweet.userId}`,
    roomId: tweet.conversationId || "twitter",
    text: `@${tweet.username}: ${tweet.text}`,
    type: "text"
  });

  const replies = await sendThreadedTweet(this, responseText, tweet.id);
}
// Split long content into tweets
function splitTweetContent(text, maxLength = 280) {
	if (text.length <= maxLength) return [text];

	const tweets = [];
	const sentences = text.match(/[^.!?]+[.!?]+/g) || [text];

	let currentTweet = "";
	for (const sentence of sentences) {
		if ((currentTweet + sentence).length <= maxLength) {
			currentTweet += sentence;
		} else {
			tweets.push(currentTweet.trim());
			currentTweet = sentence;
		}
	}

	if (currentTweet) tweets.push(currentTweet.trim());
	return tweets;
}

// Send threaded tweets
async function sendThreadedTweet(client, content, replyToId) {
	const tweets = [];
	const parts = splitTweetContent(content);
	let lastTweetId = replyToId;

	for (const part of parts) {
		const tweet = await client.sendTweet(part, lastTweetId);
		tweets.push(tweet);
		lastTweetId = tweet.id;
		await new Promise((resolve) => setTimeout(resolve, 1000));
	}

	return tweets;
}
// Store tweet in database
async function storeTweetIfNotExists(tweet) {
	const exists = await prisma.tweet.count({
		where: { id: tweet.id },
	});

	if (!exists) {
		await prisma.tweet.create({
			data: {
				id: tweet.id,
				text: tweet.text,
				userId: tweet.userId,
				username: tweet.username,
				conversationId: tweet.conversationId,
				inReplyToId: tweet.inReplyToId,
				permanentUrl: tweet.permanentUrl,
			},
		});
		return true;
	}
	return false;
}

// Get conversation thread
async function getTweetThread(conversationId) {
	return prisma.tweet.findMany({
		where: { conversationId },
		orderBy: { createdAt: "asc" },
	});
}