Database System Architecture

A comprehensive guide to the ElizaOS database system, SQL plugin, and ORM integration

ElizaOS provides a comprehensive database system built on top of Drizzle ORM, with support for both PostgreSQL and PGlite (an embedded, Postgres-compatible database). The system is designed for scalability, type safety, and a flexible plugin architecture.

Core Architecture

Database Adapter Pattern

The database system uses an adapter pattern that provides a unified interface for different database backends:

interface IDatabaseAdapter {
  // Core CRUD operations
  createAgent(agent: Agent): Promise<boolean>;
  getAgent(agentId: UUID): Promise<Agent | null>;
  updateAgent(agentId: UUID, agent: Partial<Agent>): Promise<boolean>;
  deleteAgent(agentId: UUID): Promise<boolean>;

  // Memory operations
  createMemory(memory: Memory, tableName: string): Promise<UUID>;
  getMemoryById(memoryId: UUID): Promise<Memory | null>;
  searchMemories(params: SearchParams): Promise<Memory[]>;

  // Entity operations
  createEntities(entities: Entity[]): Promise<boolean>;
  getEntityByIds(entityIds: UUID[]): Promise<Entity[]>;

  // Component operations
  createComponent(component: Component): Promise<boolean>;
  getComponent(entityId: UUID, type: string): Promise<Component | null>;

  // Relationship operations
  createRelationship(relationship: Relationship): Promise<boolean>;
  getRelationships(entityId: UUID): Promise<Relationship[]>;

  // Task operations
  createTask(task: Task): Promise<boolean>;
  getTasks(agentId: UUID): Promise<Task[]>;

  // Vector operations
  searchEmbeddings(params: EmbeddingSearchParams): Promise<Memory[]>;

  // Lifecycle management
  init(): Promise<void>;
  close(): Promise<void>;
  isReady(): Promise<boolean>;
}
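
For illustration, here is a minimal sketch of consuming the adapter directly (the Memory fields shown are abridged, and entityId, agentId, and roomId are assumed to be valid UUIDs defined elsewhere):

// Initialize, store and retrieve a memory, then shut down
await adapter.init();

if (await adapter.isReady()) {
  const memoryId = await adapter.createMemory(
    {
      entityId,
      agentId,
      roomId,
      content: { text: "Hello from a stored memory" },
    } as Memory,
    "memories"
  );

  const stored = await adapter.getMemoryById(memoryId);
}

await adapter.close();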

Supported Database Backends

PostgreSQL Adapter

  • Production-ready: Full PostgreSQL support with connection pooling
  • Extensions: Supports pgvector for embeddings, fuzzystrmatch for fuzzy matching
  • Scalability: Connection pooling and transaction management
  • Migration: Schema-based migrations with plugin support

PGlite Adapter

  • Development: Embedded, Postgres-compatible database that runs fully in-process
  • Portability: File-based storage with no external dependencies
  • Testing: Ideal for unit tests and development environments
  • Extensions: Vector support through PGlite extensions

Database Schema

Dynamic Schema System

ElizaOS uses a dynamic schema system in which tables are defined through the schema exports of @elizaos/plugin-sql rather than static SQL files. This allows plugins to extend the database schema at runtime.

Core Schema Definition

The core tables are defined in @elizaos/plugin-sql/src/schema.ts:

// Example: Agents table definition using Drizzle ORM
export const agentsTable = pgTable("agents", {
  id: uuid("id").defaultRandom().primaryKey(),
  enabled: boolean("enabled").default(true).notNull(),
  createdAt: timestamp("created_at").defaultNow().notNull(),
  updatedAt: timestamp("updated_at").defaultNow().notNull(),
  name: text("name").notNull().unique(),
  username: text("username"),
  system: text("system").default(""),
  bio: jsonb("bio").default([]),
  messageExamples: jsonb("message_examples").default([]).notNull(),
  postExamples: jsonb("post_examples").default([]).notNull(),
  topics: jsonb("topics").default([]).notNull(),
  adjectives: jsonb("adjectives").default([]).notNull(),
  knowledge: jsonb("knowledge").default([]).notNull(),
  plugins: jsonb("plugins").default([]).notNull(),
  settings: jsonb("settings").default({}).notNull(),
  style: jsonb("style").default({}).notNull(),
});
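
Because these are standard Drizzle table definitions, they can be used directly in type-safe queries. A small sketch, assuming a connected Drizzle db instance:

import { eq } from "drizzle-orm";

// Insert an agent row; omitted columns fall back to their schema defaults
await db.insert(agentsTable).values({ name: "Eliza", username: "eliza" });

// Fetch it back by its unique name
const [agent] = await db
  .select()
  .from(agentsTable)
  .where(eq(agentsTable.name, "Eliza"));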

Core Tables Overview

All core tables are dynamically created through the plugin-sql schema system. Here are the main tables:

// Memories table definition
export const memoriesTable = pgTable("memories", {
  id: uuid("id").primaryKey(),
  type: text("type").notNull(),
  createdAt: timestamp("created_at").defaultNow().notNull(),
  content: jsonb("content").notNull(),
  entityId: uuid("entity_id").references(() => entitiesTable.id, { onDelete: "cascade" }),
  agentId: uuid("agent_id").references(() => agentsTable.id, { onDelete: "cascade" }).notNull(),
  roomId: uuid("room_id").references(() => roomsTable.id, { onDelete: "cascade" }),
  worldId: uuid("world_id"),
  unique: boolean("unique").default(true).notNull(),
  metadata: jsonb("metadata").default({}).notNull(),
}, (table) => ({
  // Indexes defined programmatically
  typeRoomIdx: index("idx_memories_type_room").on(table.type, table.roomId),
  worldIdIdx: index("idx_memories_world_id").on(table.worldId),
  metadataTypeIdx: index("idx_memories_metadata_type").on(sql`(${table.metadata}->>'type')`),
  documentIdIdx: index("idx_memories_document_id").on(sql`(${table.metadata}->>'documentId')`),
  fragmentOrderIdx: index("idx_fragments_order").on(
    sql`(${table.metadata}->>'documentId')`,
    sql`(${table.metadata}->>'position')`
  ),
}));

Dynamic Table Creation

All tables are created dynamically when the plugin-sql initializes. The schema is defined using Drizzle ORM's type-safe table definitions:

// Embeddings table with multi-dimensional vector support
export const embeddingsTable = pgTable("embeddings", {
  id: uuid("id").defaultRandom().primaryKey(),
  memoryId: uuid("memory_id").references(() => memoriesTable.id, { onDelete: "cascade" }),
  createdAt: timestamp("created_at").defaultNow().notNull(),
  dim384: vector("dim_384", { dimensions: 384 }),
  dim512: vector("dim_512", { dimensions: 512 }),
  dim768: vector("dim_768", { dimensions: 768 }),
  dim1024: vector("dim_1024", { dimensions: 1024 }),
  dim1536: vector("dim_1536", { dimensions: 1536 }),
  dim3072: vector("dim_3072", { dimensions: 3072 }),
}, (table) => ({
  memoryIdx: index("idx_embedding_memory").on(table.memoryId),
}));
// Entities table for agent interactions
export const entitiesTable = pgTable("entities", {
  id: uuid("id").defaultRandom().primaryKey(),
  agentId: uuid("agent_id").references(() => agentsTable.id, { onDelete: "cascade" }).notNull(),
  names: text("names").array().notNull(),
  metadata: jsonb("metadata").default({}).notNull(),
  createdAt: timestamp("created_at").defaultNow().notNull(),
  updatedAt: timestamp("updated_at").defaultNow().notNull(),
}, (table) => ({
  agentIdIdx: index("idx_entities_agent_id").on(table.agentId),
  namesIdx: index("idx_entities_names").using("gin", table.names),
}));

Plugin Schema Extensions

Plugins can extend the database schema by exporting their own table definitions:

// Example: A plugin extending the schema
export const myPluginSchema = {
  customTable: pgTable("my_plugin_custom", {
    id: uuid("id").defaultRandom().primaryKey(),
    agentId: uuid("agent_id").references(() => agentsTable.id).notNull(),
    customData: jsonb("custom_data").notNull(),
    createdAt: timestamp("created_at").defaultNow().notNull(),
  }),
};

// In the plugin definition
export const myPlugin: Plugin = {
  name: "@elizaos/plugin-custom",
  schema: myPluginSchema,
  init: async (_, runtime) => {
    // Plugin initialization
  },
};
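
Once its migrations have run, the plugin's table can be queried like any core table. A hypothetical query against the custom table defined above:

import { eq } from "drizzle-orm";

// Read this plugin's rows for a given agent
const rows = await db
  .select()
  .from(myPluginSchema.customTable)
  .where(eq(myPluginSchema.customTable.agentId, agentId));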

Complete Schema Reference

The plugin-sql exports all core tables through its schema object. Key tables include:

  • agents: Agent configurations and settings
  • memories: All agent memories with embeddings support
  • entities: Users, agents, and other interactable entities
  • relationships: Connections between entities
  • tasks: Agent task management
  • rooms: Conversation contexts
  • worlds: Server/environment contexts
  • participants: Room membership tracking

Accessing Schema Definitions

import { schema } from '@elizaos/plugin-sql';

// Access table definitions
const { agentsTable, memoriesTable, entitiesTable } = schema;

// Use in queries with Drizzle ORM
const agents = await db.select().from(schema.agentsTable);

Plugin Architecture

SQL Plugin Structure

The SQL plugin (@elizaos/plugin-sql) provides the core database functionality and MUST be loaded first in your plugin list:

export const plugin: Plugin = {
  name: "@elizaos/plugin-sql",
  description: "A plugin for SQL database access with dynamic schema migrations",
  priority: 0,
  schema, // Exports all core tables
  init: async (_, runtime: IAgentRuntime) => {
    // Check if adapter already exists
    const existingAdapter = (runtime as any).databaseAdapter;
    if (existingAdapter) {
      logger.info("Database adapter already registered, skipping creation");
      return;
    }

    // Get configuration
    const postgresUrl = runtime.getSetting("POSTGRES_URL");
    const dataDir =
      runtime.getSetting("PGLITE_PATH") ||
      runtime.getSetting("DATABASE_PATH") ||
      "./.eliza/.elizadb";

    // Create appropriate adapter
    const dbAdapter = createDatabaseAdapter(
      {
        dataDir,
        postgresUrl,
      },
      runtime.agentId
    );

    // Register adapter with runtime
    runtime.registerDatabaseAdapter(dbAdapter);
    logger.info("Database adapter created and registered");
  },
};

Always ensure @elizaos/plugin-sql is loaded first and @elizaos/plugin-bootstrap is loaded last in your plugin list. The SQL plugin provides the database adapter that other plugins depend on.
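
A typical character configuration might order plugins like this (a sketch; the middle plugin is illustrative):

const character = {
  name: "Eliza",
  plugins: [
    "@elizaos/plugin-sql",       // first: registers the database adapter
    "@elizaos/plugin-openai",    // illustrative middle plugin
    "@elizaos/plugin-bootstrap", // last: core message handlers
  ],
};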

Dynamic Schema Migration System

The migration system supports plugin-based schema extensions:

export class DatabaseMigrationService {
  private registeredSchemas = new Map<string, any>();

  discoverAndRegisterPluginSchemas(plugins: Plugin[]): void {
    for (const plugin of plugins) {
      if (plugin.schema) {
        this.registeredSchemas.set(plugin.name, plugin.schema);
      }
    }
  }

  async runAllPluginMigrations(): Promise<void> {
    for (const [pluginName, schema] of this.registeredSchemas) {
      await runPluginMigrations(this.db, pluginName, schema);
    }
  }
}
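
Wiring this up at startup mirrors what the test helper later in this guide does. A sketch using the methods shown above (db, sqlPlugin, and myPlugin are assumed to be in scope):

// Register the core schema plus any plugins that export one, then migrate
const migrationService = new DatabaseMigrationService();
await migrationService.initializeWithDatabase(db);
migrationService.discoverAndRegisterPluginSchemas([sqlPlugin, myPlugin]);
await migrationService.runAllPluginMigrations();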

Plugin Schema Namespacing

Each plugin gets its own schema namespace:

export class PluginNamespaceManager {
  async getPluginSchema(pluginName: string): Promise<string> {
    if (pluginName === "@elizaos/plugin-sql") {
      return "public"; // Core tables in public schema
    }
    // Convert plugin name to valid schema name
    return pluginName.replace(/@elizaos\/plugin-|\W/g, "_").toLowerCase();
  }

  async ensureNamespace(schemaName: string): Promise<void> {
    if (schemaName === "public") return;
    await this.db.execute(sql.raw(`CREATE SCHEMA IF NOT EXISTS "${schemaName}"`));
  }
}

Configuration

Environment Variables

PostgreSQL Configuration

# Production PostgreSQL URL
POSTGRES_URL=postgresql://user:password@localhost:5432/eliza

# Connection pool settings
POSTGRES_MAX_CONNECTIONS=20
POSTGRES_IDLE_TIMEOUT=30000
POSTGRES_CONNECTION_TIMEOUT=5000

PGlite Configuration

# PGlite data directory; PGLITE_PATH is checked first, then DATABASE_PATH,
# with ./.eliza/.elizadb as the final fallback (see the plugin init above)
PGLITE_PATH=./.eliza/.elizadb
DATABASE_PATH=./.eliza/.elizadb

# Legacy path migration
PGLITE_DATA_DIR=./.eliza/.elizadb

Adapter Factory

The system automatically selects the appropriate adapter based on configuration:

export function createDatabaseAdapter(
  config: {
    dataDir?: string;
    postgresUrl?: string;
  },
  agentId: UUID
): IDatabaseAdapter {
  if (config.postgresUrl) {
    // Use PostgreSQL adapter
    const manager = new PostgresConnectionManager(config.postgresUrl);
    return new PgDatabaseAdapter(agentId, manager);
  }

  // Use PGlite adapter
  const dataDir = resolvePgliteDir(config.dataDir);
  const manager = new PGliteClientManager({ dataDir });
  return new PgliteDatabaseAdapter(agentId, manager);
}
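
For example, a caller can construct and initialize an adapter directly (a sketch; in practice plugin-sql does this during init, and agentId is assumed to be a valid UUID):

const adapter = createDatabaseAdapter(
  {
    postgresUrl: process.env.POSTGRES_URL, // if unset, falls back to PGlite
    dataDir: "./.eliza/.elizadb",
  },
  agentId
);

await adapter.init();
console.log("ready:", await adapter.isReady());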

Advanced Features

Vector Search and Embeddings

The system supports multi-dimensional vector embeddings:

// Search for similar memories using vector similarity
const similarMemories = await adapter.searchEmbeddings({
  tableName: 'memories',
  embedding: [0.1, 0.2, 0.3, ...], // 384-dimensional vector
  dimension: 384,
  threshold: 0.7,
  limit: 10
});

// Vector similarity search with cosine distance
const vectorQuery = sql`
  SELECT m.*, (1 - (e.dim_384 <=> ${embedding})) as similarity
  FROM memories m
  JOIN embeddings e ON m.id = e.memory_id
  WHERE (1 - (e.dim_384 <=> ${embedding})) > ${threshold}
  ORDER BY similarity DESC
  LIMIT ${limit}
`;

Retry Logic and Error Handling

The base adapter includes retry logic with exponential backoff for transient failures:

export abstract class BaseDrizzleAdapter extends DatabaseAdapter {
  protected readonly maxRetries = 3;
  protected readonly baseDelay = 1000;
  protected readonly maxDelay = 10000;

  protected async withRetry<T>(operation: () => Promise<T>): Promise<T> {
    let lastError: Error;

    for (let attempt = 1; attempt <= this.maxRetries; attempt++) {
      try {
        return await operation();
      } catch (error) {
        lastError = error as Error;

        if (attempt === this.maxRetries) {
          throw lastError;
        }

        const delay = Math.min(this.baseDelay * Math.pow(2, attempt - 1), this.maxDelay);
        await new Promise((resolve) => setTimeout(resolve, delay));
      }
    }

    throw lastError!;
  }
}
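
Concrete adapter methods wrap their database calls in withRetry so that transient failures (dropped connections, timeouts) are retried with backoff. A sketch, where fetchAgentRow is a hypothetical helper:

// Inside a concrete adapter subclass
async getAgent(agentId: UUID): Promise<Agent | null> {
  return this.withRetry(() => this.fetchAgentRow(agentId)); // fetchAgentRow is hypothetical
}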

Connection Management

PostgreSQL Connection Pool

export class PostgresConnectionManager {
  private pool: Pool;
  private db: NodePgDatabase;

  constructor(connectionString: string) {
    this.pool = new Pool({ connectionString });
    this.db = drizzle(this.pool);
  }

  // Expose the Drizzle instance (used by the migration and test helpers below)
  getDatabase(): NodePgDatabase {
    return this.db;
  }

  async testConnection(): Promise<boolean> {
    try {
      const client = await this.pool.connect();
      await client.query("SELECT 1");
      client.release();
      return true;
    } catch (error) {
      return false;
    }
  }
}

PGlite Client Manager

export class PGliteClientManager {
  private client: PGlite;

  constructor(options: PGliteOptions) {
    this.client = new PGlite({
      ...options,
      extensions: {
        vector, // Vector similarity search
        fuzzystrmatch, // Fuzzy string matching
      },
    });
  }
}

Testing Infrastructure

Test Database Creation

export async function createTestDatabase(
  testAgentId: UUID,
  testPlugins: Plugin[] = []
): Promise<{
  adapter: PgliteDatabaseAdapter | PgDatabaseAdapter;
  runtime: AgentRuntime;
  cleanup: () => Promise<void>;
}> {
  if (process.env.POSTGRES_URL) {
    // PostgreSQL testing with isolated schemas
    const connectionManager = new PostgresConnectionManager(process.env.POSTGRES_URL);
    const adapter = new PgDatabaseAdapter(testAgentId, connectionManager);

    // Bind a runtime to the adapter so tests can exercise runtime APIs
    // (AgentRuntime constructor options abridged)
    const runtime = new AgentRuntime({ agentId: testAgentId });
    runtime.registerDatabaseAdapter(adapter);

    const schemaName = `test_${testAgentId.replace(/-/g, "_")}`;
    const db = connectionManager.getDatabase();

    // Create isolated test schema
    await db.execute(sql.raw(`CREATE SCHEMA IF NOT EXISTS ${schemaName}`));
    await db.execute(sql.raw(`SET search_path TO ${schemaName}, public`));

    // Run migrations
    const migrationService = new DatabaseMigrationService();
    await migrationService.initializeWithDatabase(db);
    migrationService.discoverAndRegisterPluginSchemas([sqlPlugin, ...testPlugins]);
    await migrationService.runAllPluginMigrations();

    return {
      adapter,
      runtime,
      cleanup: async () => {
        await db.execute(sql.raw(`DROP SCHEMA IF EXISTS ${schemaName} CASCADE`));
        await adapter.close();
      },
    };
  } else {
    // PGlite testing with temporary directories
    const tempDir = fs.mkdtempSync(path.join(os.tmpdir(), "eliza-test-"));
    const connectionManager = new PGliteClientManager({ dataDir: tempDir });
    const adapter = new PgliteDatabaseAdapter(testAgentId, connectionManager);

    // Bind a runtime to the adapter (constructor options abridged)
    const runtime = new AgentRuntime({ agentId: testAgentId });
    runtime.registerDatabaseAdapter(adapter);

    return {
      adapter,
      runtime,
      cleanup: async () => {
        await adapter.close();
        fs.rmSync(tempDir, { recursive: true, force: true });
      },
    };
  }
}
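
A test might use the helper like this (a sketch using bun's test runner; testAgentId and testMemory are assumed to be defined elsewhere):

import { describe, it, expect } from "bun:test";

describe("memory persistence", () => {
  it("round-trips a memory", async () => {
    const { adapter, cleanup } = await createTestDatabase(testAgentId);
    try {
      const id = await adapter.createMemory(testMemory, "memories");
      expect(await adapter.getMemoryById(id)).not.toBeNull();
    } finally {
      await cleanup(); // drops the test schema or temp directory
    }
  });
});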

Test Coverage

The plugin includes comprehensive test coverage:

  • Unit Tests: Individual component testing
  • Integration Tests: Full database workflow testing
  • Migration Tests: Schema migration validation
  • Performance Tests: Vector search and query optimization
  • Concurrent Tests: Multi-agent and multi-runtime scenarios

Performance Optimization

Indexing Strategy

Indexes are defined programmatically within the table schemas and created automatically during migration:

// Indexes are defined in the table schema
export const memoriesTable = pgTable("memories", {
  // ... column definitions
}, (table) => ({
  // Performance indexes
  typeRoomIdx: index("idx_memories_type_room").on(table.type, table.roomId),
  metadataTypeIdx: index("idx_memories_metadata_type").on(sql`(${table.metadata}->>'type')`),
  // Additional indexes as needed
}));

// Vector search indexes are created during migration
await db.execute(sql`
  CREATE INDEX IF NOT EXISTS idx_embeddings_vector_384 
  ON embeddings USING ivfflat (dim_384 vector_cosine_ops)
`);

Query Optimization

// Efficient entity search with name matching
const entities = await db
  .select()
  .from(entitiesTable)
  .where(
    and(
      eq(entitiesTable.agentId, agentId),
      sql`${entitiesTable.names} && ${names}` // Array overlap operator
    )
  )
  .limit(limit);

// Optimized memory search with metadata filtering
const memories = await db
  .select()
  .from(memoriesTable)
  .where(
    and(
      eq(memoriesTable.agentId, agentId),
      eq(memoriesTable.type, type),
      sql`${memoriesTable.metadata}->>'source' = ${source}`
    )
  )
  .orderBy(desc(memoriesTable.createdAt))
  .limit(limit);

Memory Management

// Batch operations for better performance
async createEntities(entities: Entity[]): Promise<boolean> {
  return await this.withDatabase(async () => {
    // A single multi-row insert is far cheaper than one insert per entity
    await this.db.insert(entitiesTable).values(entities);
    return true;
  });
}

// Paged iteration for large result sets (the Drizzle query builder is not
// async-iterable, so rows are fetched one page at a time)
async *streamMemories(agentId: UUID, type: string, pageSize = 100): AsyncGenerator<Memory> {
  let offset = 0;
  while (true) {
    const rows = await this.db
      .select()
      .from(memoriesTable)
      .where(and(
        eq(memoriesTable.agentId, agentId),
        eq(memoriesTable.type, type)
      ))
      .orderBy(desc(memoriesTable.createdAt))
      .limit(pageSize)
      .offset(offset);

    if (rows.length === 0) return;
    for (const row of rows) {
      yield this.mapRowToMemory(row);
    }
    offset += rows.length;
  }
}

Migration and Deployment

Production Migration Strategy

  1. Schema Versioning: Each plugin maintains its own schema version
  2. Rollback Support: Migrations include rollback procedures
  3. Zero-Downtime: Online schema changes with minimal impact
  4. Validation: Pre-migration validation and post-migration verification

Deployment Configuration

// Production configuration
const productionConfig = {
  postgres: {
    url: process.env.POSTGRES_URL,
    ssl: process.env.NODE_ENV === "production",
    pool: {
      min: 2,
      max: 20,
      idleTimeoutMillis: 30000,
    },
  },
  migration: {
    automaticMigrations: false,
    requireExplicitMigration: true,
    backupBeforeMigration: true,
  },
};

Best Practices

1. Connection Management

  • Use connection pooling for PostgreSQL
  • Implement proper connection cleanup
  • Monitor connection health and metrics

2. Error Handling

  • Implement retry logic for transient failures
  • Use proper error classification
  • Log errors with sufficient context

3. Performance

  • Use appropriate indexes for query patterns
  • Implement pagination for large result sets (see the keyset sketch below)
  • Consider read replicas for read-heavy workloads
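
For example, keyset pagination over memories avoids the growing cost of large OFFSET scans. A sketch, assuming the memoriesTable schema above and a connected db instance:

import { and, eq, lt, desc } from "drizzle-orm";

// Fetch one page of memories older than the given cursor
async function getMemoriesPage(agentId: UUID, before?: Date, pageSize = 50) {
  return db
    .select()
    .from(memoriesTable)
    .where(and(
      eq(memoriesTable.agentId, agentId),
      before ? lt(memoriesTable.createdAt, before) : undefined // and() skips undefined
    ))
    .orderBy(desc(memoriesTable.createdAt))
    .limit(pageSize);
}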

4. Security

  • Use parameterized queries to prevent SQL injection (see the sketch below)
  • Implement proper access controls
  • Encrypt sensitive data at rest
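
Drizzle's sql template literal sends interpolated values as bound parameters rather than concatenating them into the SQL string, so even raw queries stay safe. A sketch:

const untrustedName = userInput; // e.g. from a request body

// The interpolated value is transmitted as a bound parameter ($1),
// never spliced into the SQL text
const result = await db.execute(
  sql`SELECT * FROM agents WHERE name = ${untrustedName}`
);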

5. Testing

  • Use isolated test databases
  • Implement comprehensive integration tests
  • Test migration procedures thoroughly

Conclusion

The ElizaOS database system provides a robust, scalable, and type-safe foundation for agent data management. With support for both PostgreSQL and PGlite, comprehensive plugin architecture, and advanced features like vector search, it's designed to handle the complex requirements of modern AI agent systems while maintaining simplicity and developer productivity.