Welcome to Liz, a lightweight framework for building AI agents.
Liz is inspired by Eliza from AI16Z but rebuilt with a strong focus on developer experience and control. Unlike other agent frameworks that abstract away the complexities, Liz provides direct access to prompts and model interactions, giving developers the power to build exactly what they need.
Direct LLM Control: Full access to prompts and model interactions
Zero Magic: Minimal abstractions for maximum understanding
Ultimate Flexibility: Build exactly what you need, how you need it
Liz follows an Express-style architecture, using middleware chains for processing agent interactions. This approach provides a clear, linear flow that developers are already familiar with, making it easy to understand and extend.
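To make the Express-style flow concrete, here is a minimal middleware chain sketched from scratch. The types and names are illustrative, not Liz's actual API:

```typescript
// Hypothetical sketch of an Express-style middleware pipeline.
type AgentRequest = { input: string; memories: string[]; context?: string };
type Middleware = (req: AgentRequest, next: () => void) => void;

class Pipeline {
  private middlewares: Middleware[] = [];

  use(mw: Middleware): this {
    this.middlewares.push(mw);
    return this;
  }

  run(req: AgentRequest): void {
    // Call each middleware in order; each decides whether to call next().
    const dispatch = (i: number): void => {
      if (i < this.middlewares.length) {
        this.middlewares[i](req, () => dispatch(i + 1));
      }
    };
    dispatch(0);
  }
}

// Wire up a chain in the Express style: validate, load memories, wrap context.
const app = new Pipeline()
  .use((req, next) => { if (!req.input) throw new Error("input required"); next(); })
  .use((req, next) => { req.memories = ["previous message"]; next(); })
  .use((req, next) => { req.context = `History:\n${req.memories.join("\n")}`; next(); });
```

Because each middleware receives the request and an explicit `next()`, the flow stays linear and easy to trace, which is the property the Express model is chosen for.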
We believe the best way to build AI agents is to work closely with the prompts and build a set of composable units that can be strung together to make powerful agentic loops. Our approach is informed by Anthropic's research on constructing reliable AI systems.
Build agents with distinct personalities, capabilities, and interaction styles using a flexible character system.
Process interactions through customizable middleware chains for validation, memory loading, context wrapping, and more.
Built-in Prisma-based memory system for storing and retrieving agent interactions with flexible querying.
Support for multiple LLM providers through a unified interface, with structured outputs and streaming capabilities.
Liz is perfect for developers who:
Need fine-grained control over prompt engineering and LLM interactions
Want to build minimal or highly specialized AI agents
Prefer explicit, understandable code over magical abstractions
Are building production-ready AI applications that need reliability and control
In Liz, agents are defined through a Character interface that specifies their personality, capabilities, and interaction style.
Routes define how an agent handles different types of interactions. Each route has a name, description, and handler function.
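A hypothetical route definition might look like this. The exact `Route` type is Liz's own; this shape is an assumption, and a real handler would likely be async and call the LLM:

```typescript
// Assumed shape of a route; check Liz's actual Route type.
interface Route {
  name: string;
  description: string; // used by the router to pick the right handler
  handler: (context: string) => string;
}

const greetingRoute: Route = {
  name: "greeting",
  description: "Respond warmly when the user greets the agent",
  handler: (context) => `Hello! (context length: ${context.length})`,
};
```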
The system prompt defines the core behavior and role of the agent. It's accessed through getSystemPrompt().
The agent context combines various elements of the character definition to provide rich context for LLM interactions.
Keep system prompts focused and specific
Provide diverse conversation examples
Use consistent style guidelines
Include realistic background details
Create specialized routes for specific tasks
Use clear, descriptive route names
Handle errors gracefully
Consider response formats
Create a .env file in your project root with the required variables.
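Assuming the variable names used elsewhere in these docs (DATABASE_URL, OPENAI_API_KEY, OPENROUTER_API_KEY, APP_URL), a minimal .env might look like:

```env
# Values are placeholders; fill in your own keys
DATABASE_URL="file:./dev.db"      # SQLite for development
OPENAI_API_KEY="sk-..."
OPENROUTER_API_KEY="sk-or-..."
APP_URL="http://localhost:3000"   # used for OpenRouter callbacks
```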
Create a new file src/agents/assistant.ts.
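As a self-contained sketch, src/agents/assistant.ts might look like the following. The Character shape is defined inline here so the example runs on its own; the real field names come from Liz's Character interface and may differ:

```typescript
// Stand-in Character shape; consult Liz's actual Character interface.
export interface Character {
  name: string;
  systemPrompt: string;
  styleGuidelines: string[];
  messageExamples: { user: string; agent: string }[];
}

export const assistant: Character = {
  name: "Assistant",
  systemPrompt: "You are a helpful, concise assistant.",
  styleGuidelines: ["Answer directly", "Keep responses short"],
  messageExamples: [
    { user: "What is Liz?", agent: "A lightweight framework for building AI agents." },
  ],
};
```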
Create src/server.ts to handle agent interactions.
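A minimal sketch of src/server.ts, using Node's built-in http module so the example is dependency-free. Liz itself is Express-style, and the /message endpoint path here is an assumption:

```typescript
import { createServer } from "node:http";

// Stand-in for running the agent's middleware pipeline on the input.
function handleMessage(input: string): string {
  return `You said: ${input}`;
}

const server = createServer((req, res) => {
  if (req.method === "POST" && req.url === "/message") {
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
      const { input } = JSON.parse(body) as { input: string };
      res.setHeader("Content-Type", "application/json");
      res.end(JSON.stringify({ reply: handleMessage(input) }));
    });
  } else {
    res.statusCode = 404;
    res.end();
  }
});

// server.listen(3000); // uncomment to start listening
```

Once listening, a request like `curl -X POST http://localhost:3000/message -H "Content-Type: application/json" -d '{"input":"Hello"}'` should return the JSON reply.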
Send a test request to your agent to confirm it responds.
Liz uses Prisma as its ORM, supporting both SQLite and PostgreSQL databases. The schema defines the structure for storing memories and tweets.
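A plausible minimal schema along these lines, with the Tweet model omitted for brevity. Field names are illustrative; see prisma/schema.prisma for the real definitions:

```prisma
generator client {
  provider = "prisma-client-js"
}

datasource db {
  provider = "sqlite"            // or "postgresql" in production
  url      = env("DATABASE_URL")
}

model Memory {
  id        String   @id @default(uuid())
  userId    String
  agentId   String
  content   String
  createdAt DateTime @default(now())

  @@index([userId, agentId, createdAt])
}
```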
The loadMemories middleware retrieves relevant conversation history for each request.
The createMemoryFromInput middleware stores new interactions in the database.
The wrapContext middleware formats memories into a structured context for LLM interactions.
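To illustrate what these three middlewares do, here is a dependency-free sketch with an in-memory array standing in for the Prisma store. The function names mirror the middleware names above, but their signatures are assumptions:

```typescript
type Memory = { userId: string; content: string; createdAt: number };
type State = { userId: string; input: string; memories: Memory[]; context?: string };

const store: Memory[] = []; // stand-in for the Prisma-backed Memory table

// Retrieve the most recent memories for this user (default limit of 100).
function loadMemories(state: State, limit = 100): void {
  state.memories = store
    .filter((m) => m.userId === state.userId)
    .sort((a, b) => b.createdAt - a.createdAt)
    .slice(0, limit);
}

// Persist the incoming user input as a new memory.
function createMemoryFromInput(state: State): void {
  store.push({ userId: state.userId, content: state.input, createdAt: Date.now() });
}

// Format memories into a structured context block for the LLM.
function wrapContext(state: State): void {
  const history = state.memories.map((m) => `- ${m.content}`).join("\n");
  state.context = `Recent conversation:\n${history}\n\nUser: ${state.input}`;
}
```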
Default limit of 100 recent memories
Configurable through middleware options
Consider token limits of your LLM
Use indexes for faster queries
SQLite for development/small apps
PostgreSQL for production/scale
Regular database maintenance
Monitor memory table growth
The LLMUtils class in src/utils/llm provides a unified interface for interacting with different LLM providers, supporting both OpenAI and OpenRouter APIs.
Generate text responses using different LLM models.
Get structured JSON responses using Zod schemas for type safety.
Get simple true/false decisions from the LLM.
Process images and get text descriptions or structured analysis.
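The text, boolean, and structured-output methods can be sketched as a unified interface. The provider call is stubbed so the example runs offline, image input is omitted, and the method names are illustrative rather than LLMUtils's actual signatures:

```typescript
// The provider call is injected so OpenAI and OpenRouter can share one interface.
type CompletionFn = (model: string, prompt: string) => string;

class LLMClient {
  constructor(private complete: CompletionFn) {}

  getText(model: string, prompt: string): string {
    return this.complete(model, prompt);
  }

  getBoolean(model: string, question: string): boolean {
    const answer = this.complete(model, `${question}\nAnswer strictly "true" or "false".`);
    return answer.trim().toLowerCase() === "true";
  }

  getObject<T>(model: string, prompt: string, parse: (raw: string) => T): T {
    // A Zod schema's .parse() would slot in here for runtime validation.
    return parse(this.complete(model, `${prompt}\nRespond with JSON only.`));
  }
}

// Stub provider for demonstration; a real one calls the chat completions API.
const llm = new LLMClient((_model, prompt) =>
  prompt.includes("JSON") ? '{"sentiment":"positive"}' : "true"
);
```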
Uses gpt-4o-mini
Faster response times
Lower cost per request
Good for simple decisions
Uses gpt-4o
Better reasoning
More nuanced responses
Complex analysis tasks
Use structured output for predictable responses
Stream responses for better user experience
Choose appropriate model size for the task
Handle API errors gracefully
Monitor token usage and costs
Cache responses when possible
Create a simple command-line interface for interacting with your agent.
Create a Twitter bot that posts regularly and responds to mentions.
Create an agent that uses conversation history for context.
Create custom middleware for specialized processing.
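A sketch of one such custom middleware in the next() style. The example itself is hypothetical: truncating overly long inputs before they reach the router:

```typescript
type Req = { input: string; flags: string[] };
type Next = () => void;

// Factory returning a middleware that enforces a maximum input length.
function limitInputLength(maxLen: number) {
  return (req: Req, next: Next): void => {
    if (req.input.length > maxLen) {
      req.flags.push("input-truncated"); // let later middleware know what happened
      req.input = req.input.slice(0, maxLen);
    }
    next();
  };
}
```

Because the middleware is a plain function of `(req, next)`, it slots into the chain anywhere between validation and routing.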
Configure your Twitter client using environment variables and the twitterConfigSchema:
Initialize and start the Twitter client with your agent:
The client can automatically generate and post tweets at regular intervals:
Monitor and respond to mentions automatically:
Handle tweet threads and conversations:
Store tweets and maintain conversation context:
Use RequestQueue for API calls
Add delays between tweets
Handle API errors gracefully
Implement exponential backoff
Use dryRun mode for testing
Monitor tweet content
Test thread splitting
Verify mention handling
Liz requires Node.js 18+ and either SQLite or PostgreSQL for the database. For development, SQLite is recommended as it requires no additional setup. For production, PostgreSQL is recommended for better scalability.
Make sure you've copied .env.example to .env and filled in all required variables:
DATABASE_URL for your database connection
OPENAI_API_KEY for OpenAI API access
OPENROUTER_API_KEY for OpenRouter API access
APP_URL for OpenRouter callbacks
Update your DATABASE_URL in .env and modify prisma/schema.prisma.
Then run prisma migrate to update your database.
Yes, Liz supports both OpenAI and OpenRouter APIs. OpenRouter gives you access to models from Anthropic, Google, and others. You can specify the model when calling LLMUtils methods.
Implement exponential backoff and retry logic in your routes.
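One way to sketch that retry logic; the delays and attempt counts are illustrative defaults:

```typescript
// Retry an async operation with exponential backoff plus jitter.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 5,
  baseDelayMs = 500
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt + 1 >= maxAttempts) throw err; // out of attempts, surface the error
      // 500ms, 1s, 2s, 4s... plus jitter to avoid thundering herds
      const delay = baseDelayMs * 2 ** attempt + Math.random() * 100;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Wrap rate-limited LLM calls in a route handler with `withRetry(() => callLLM(...))` so transient 429s are absorbed instead of failing the request.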
Several strategies can help manage memory usage:
Limit the number of memories loaded per request
Implement memory pruning for old conversations
Use database indexing effectively
Consider memory summarization for long conversations
For high-traffic applications:
Use PostgreSQL instead of SQLite
Implement request queuing
Cache common responses
Use load balancing with multiple instances
Common Twitter integration issues:
Incorrect credentials in environment variables
Missing 2FA secret for accounts with 2FA enabled
Rate limiting from too frequent posting
Network issues preventing login
Use dryRun mode to test your bot without posting.
We welcome contributions! Here's how to get started:
Fork the repository
Create a feature branch
Make your changes
Add tests if applicable
Submit a pull request
Please follow our coding standards and include clear commit messages.
Liz uses an Express-style middleware architecture where each request flows through a series of middleware functions. This approach provides a clear, predictable processing pipeline that's easy to understand and extend.
validateInput: Ensures required fields are present
loadMemories: Retrieves relevant conversation history
wrapContext: Builds the context for LLM interactions
createMemoryFromInput: Stores the user's input
router: Determines and executes the appropriate route handler
The AgentFramework class in src/framework orchestrates the middleware pipeline and handles request processing.
Character:
Defines personality and capabilities
Holds system prompt and style context
Manages route definitions
Provides agent-specific context
AgentFramework:
Handles request processing
Manages memory operations
Builds context for LLM
Routes requests to handlers
Routes define how an agent handles different types of interactions. The router middleware uses the LLM to select the most appropriate handler.
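A dependency-free sketch of LLM-driven route selection, with the model's decision stubbed out as a keyword match. The real router presumably asks the LLM for a structured choice among the route names and descriptions:

```typescript
type RouteDef = { name: string; description: string; handler: (input: string) => string };

// Pick a route by name via an injected decision function, then run its handler.
function route(
  defs: RouteDef[],
  input: string,
  pickRoute: (input: string, defs: RouteDef[]) => string // real impl asks the LLM
): string {
  const chosen = defs.find((r) => r.name === pickRoute(input, defs)) ?? defs[0];
  return chosen.handler(input);
}

const routes: RouteDef[] = [
  { name: "chat", description: "General conversation", handler: (i) => `chat: ${i}` },
  { name: "search", description: "Look something up", handler: (i) => `search: ${i}` },
];

// Keyword stub standing in for an LLM structured-output decision.
const stubPick = (input: string) => (input.includes("find") ? "search" : "chat");
```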
For detailed visual representations of the system architecture, see .