Provides local-first, semantic long-term memory for AI agents, enabling sophisticated retrieval beyond keyword matching.
Engram is an MCP memory server designed to give AI agents genuinely useful, semantic long-term memory. Unlike typical memory servers that rely on basic substring matching, Engram leverages real semantic embeddings and a three-tier retrieval system to surface relevant information, even with zero keyword overlap. It stores memories as human-readable JSON files and uses ChromaDB for vector indexing, all running locally without external API calls. Engram also features a full-featured web dashboard for managing memories and robust CLI utilities, making it a powerful and private solution for enhancing AI agent recall and context.
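The local-first storage model described above can be sketched with stdlib-only Python: each memory lives as a human-readable JSON file on disk, with vector indexing handled separately by ChromaDB. The `Memory` fields, file naming, and directory layout here are illustrative assumptions, not Engram's actual schema.

```python
import json
import tempfile
from dataclasses import dataclass, asdict
from pathlib import Path

# Hypothetical on-disk schema -- Engram's real JSON layout may differ.
@dataclass
class Memory:
    id: str
    content: str
    tags: list

def save_memory(root: Path, memory: Memory) -> Path:
    """Write one memory as a human-readable JSON file."""
    root.mkdir(parents=True, exist_ok=True)
    path = root / f"{memory.id}.json"
    path.write_text(json.dumps(asdict(memory), indent=2))
    return path

def load_memory(path: Path) -> Memory:
    """Round-trip a memory back from its JSON file."""
    return Memory(**json.loads(path.read_text()))

store = Path(tempfile.mkdtemp())  # a real server would use a fixed data dir
p = save_memory(store, Memory(
    id="adr-001",
    content="We chose ChromaDB for local vector indexing.",
    tags=["architecture"],
))
print(load_memory(p).content)
```

Because the files stay plain JSON, memories remain inspectable and editable by hand even if the vector index is rebuilt from scratch.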
Key Features
- Markdown-Aware Chunking
- Full Web Dashboard with CRUD and Tag Filtering
- Local-First Storage (JSON + ChromaDB)
- Semantic Search (Cosine Similarity)
- Three-Tier Retrieval (Snippet → Chunk → Full Content)
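The semantic-search and three-tier retrieval features can be illustrated with a stdlib-only sketch: rank stored chunks by cosine similarity against a query embedding, then expand the best hit from snippet to chunk to full content. The tiny hand-made "embeddings" and the `retrieve` helper are assumptions for illustration; Engram uses real embedding models and ChromaDB for this.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" standing in for real model output.
chunks = [
    {"vec": [0.9, 0.1, 0.0],
     "snippet": "ADR-7: pick ChromaDB",
     "chunk": "ADR-7: we picked ChromaDB for local vector indexing...",
     "full": "ADR-7 full text with rationale, alternatives, and consequences."},
    {"vec": [0.0, 0.2, 0.9],
     "snippet": "Release notes v1.2",
     "chunk": "v1.2 adds tag filtering to the dashboard...",
     "full": "Complete v1.2 changelog."},
]

def retrieve(query_vec, tier="snippet"):
    """Return the best-matching memory at the requested detail tier."""
    best = max(chunks, key=lambda c: cosine(query_vec, c["vec"]))
    return best[tier]  # tiers: "snippet" -> "chunk" -> "full"

print(retrieve([1.0, 0.0, 0.1]))          # nearest to the ADR embedding
print(retrieve([1.0, 0.0, 0.1], "full"))
```

Because ranking happens in embedding space, a query like "why did we pick a vector store?" can surface the ADR even with zero keyword overlap with its text.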
Use Cases
- Storing and retrieving architectural decisions, project details, and reference materials for agent access
- Equipping AI agents with genuinely useful long-term memory for complex tasks
- Enhancing AI agent productivity by providing context-aware information retrieval