For detailed visual representations of the system architecture, see System Architecture Diagrams.
Liz uses an Express-style middleware architecture in which each request flows through a series of middleware functions. This approach provides a clear, predictable processing pipeline that is easy to understand and extend. The standard middleware stack consists of:
validateInput: Ensures required fields are present
loadMemories: Retrieves relevant conversation history
wrapContext: Builds the context for LLM interactions
createMemoryFromInput: Stores the user's input
router: Determines and executes the appropriate route handler
The AgentFramework class in src/framework orchestrates the middleware pipeline and handles request processing:
import { AgentFramework } from "./framework";
import { standardMiddleware } from "./middleware";
const framework = new AgentFramework();
// Add middleware
standardMiddleware.forEach((middleware) => framework.use(middleware));
// Process requests
framework.process(input, agent, res);
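Custom middleware can be slotted into the same pipeline. The sketch below is hypothetical: it assumes each middleware receives the request, the response, and a next callback in Express style, so the exact parameter types in Liz may differ.
// Hypothetical timing middleware; the (req, res, next) shape is an
// assumption based on the Express-style design described above.
const logTiming = async (req: any, res: any, next: () => Promise<void>) => {
  const started = Date.now();
  await next(); // run the rest of the pipeline
  console.log(`Request handled in ${Date.now() - started}ms`);
};
// Register it alongside the standard middleware
framework.use(logTiming);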
The agent:
Defines personality and capabilities
Holds the system prompt and style context
Manages route definitions
Provides agent-specific context

The AgentFramework and its middleware:
Handles request processing
Manages memory operations
Builds context for the LLM
Routes requests to handlers
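Put together, an agent is essentially a bundle of prompt material plus a set of routes for the framework to dispatch to. The sketch below summarizes those responsibilities as types; these are illustrative only, not the framework's actual type definitions.
// Illustrative sketches of the data an agent carries (not Liz's real types).
interface RouteSketch {
  name: string;                 // identifier the router matches on
  description: string;          // used by the router's LLM to pick a handler
  handler: (context: string, req: unknown, res: unknown) => Promise<void>;
}
interface AgentSketch {
  name: string;
  systemPrompt: string;         // personality and capabilities
  styleContext: string;         // tone and style guidance
  routes: RouteSketch[];        // definitions added via agent.addRoute(...)
}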
Routes define how an agent handles different types of interactions. The router middleware uses an LLM to select the most appropriate handler:
// Adding a route to an agent
agent.addRoute({
  name: "conversation",
  description: "Handle natural conversation",
  handler: async (context, req, res) => {
    const response = await llmUtils.getTextFromLLM(
      context,
      "anthropic/claude-3-sonnet"
    );
    await res.send(response);
  },
});
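Because the router chooses by name and description, an agent can expose several routes and let the LLM pick between them. Here is a hedged example adding a second route alongside the one above; the route name and prompt wording are illustrative.
// A second, more specialized route; the router middleware compares the
// incoming request against each route's description when choosing.
agent.addRoute({
  name: "summarize",
  description: "Summarize a block of text the user provides",
  handler: async (context, req, res) => {
    const summary = await llmUtils.getTextFromLLM(
      `${context}\n\nSummarize the user's text in three sentences.`,
      "anthropic/claude-3-sonnet"
    );
    await res.send(summary);
  },
});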
1. Client sends request to /agent/input
↓
2. validateInput checks required fields
↓
3. loadMemories fetches conversation history
↓
4. wrapContext builds prompt with memories
↓
5. createMemoryFromInput stores request
↓
6. router selects appropriate handler
↓
7. handler processes request with LLM
↓
8. Response sent back to client
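From the client's point of view, this whole pipeline is triggered by a single HTTP call. A minimal sketch of such a request follows, assuming the endpoint accepts JSON; the port and the field names in the body (userId, text) are assumptions, not the framework's documented schema.
// Hypothetical client call; the port and body field names are illustrative.
const response = await fetch("http://localhost:3000/agent/input", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    userId: "user-123",
    text: "Hello, what can you do?",
  }),
});
console.log(await response.json());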