Memex gives large language models (LLMs) a persistent, searchable 'second brain' built from your markdown vaults (such as Obsidian vaults). It maintains a local index combining full-text search, embeddings for semantic similarity, and a wikilink graph, letting LLMs grow a knowledge base, document findings, model your preferences, and recall past work across sessions. With it, agents can run semantic searches, explore knowledge-graph connections, and draw on existing documentation for better decision-making and workflow integration.
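One piece of that index, the wikilink graph, can be sketched as follows. This is a minimal illustration, not Memex's actual implementation: the regex, the in-memory `vault` dict, and the helper names are assumptions for the example.

```python
import re
from collections import defaultdict

# Capture the target of [[Target]], [[Target|alias]], and [[Target#heading]].
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def extract_wikilinks(markdown_text):
    """Return the link targets referenced by a note's [[wikilinks]]."""
    return [m.strip() for m in WIKILINK.findall(markdown_text)]

def build_graph(vault):
    """vault: {note_name: markdown_text} -> adjacency map of outgoing links."""
    graph = defaultdict(list)
    for name, text in vault.items():
        graph[name] = extract_wikilinks(text)
    return dict(graph)

vault = {
    "Projects": "See [[Memex]] and [[Weekly Review|review]].",
    "Memex": "Indexes the vault; related: [[Projects]].",
}
graph = build_graph(vault)
# graph["Projects"] -> ["Memex", "Weekly Review"]
```

Traversing this adjacency map is what lets an agent follow relationships between notes rather than treating each file in isolation.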
Key Features
1. Semantic and full-text search over markdown vaults
2. Generates embeddings for enhanced knowledge retrieval
3. Indexes wikilink graphs for relationship exploration
4. Automatically re-indexes changed files for up-to-date knowledge
5. Integrates seamlessly with Claude Code for LLM interaction
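The automatic re-indexing feature can be approximated with content fingerprints: hash each note and re-index only those whose hash no longer matches. This is a hedged sketch of the general technique, not Memex's code; `changed_notes` and the in-memory `seen_hashes` store are illustrative names.

```python
import hashlib

def fingerprint(text):
    """Stable content hash for a note's markdown text."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def changed_notes(vault, seen_hashes):
    """Return notes whose content differs from the stored fingerprint,
    updating the store so the next call sees them as fresh."""
    stale = []
    for name, text in vault.items():
        h = fingerprint(text)
        if seen_hashes.get(name) != h:
            stale.append(name)
            seen_hashes[name] = h
    return stale
```

On a first pass every note is stale and gets indexed; afterwards only edited notes are returned, which keeps the index current without rescanning the whole vault.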
Use Cases
1. Equipping LLMs with a persistent, growing knowledge base
2. Facilitating semantic search and exploration of markdown-based 'second brains' (e.g., Obsidian vaults) by AI agents
3. Enabling LLMs to document findings, model user preferences, and recall past work across sessions
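The semantic-search use case boils down to embedding the query and ranking notes by cosine similarity. The sketch below uses a toy bag-of-words hashing embedding purely as a stand-in for a real embedding model; `embed`, `semantic_search`, and the sample notes are all illustrative, not part of Memex.

```python
import math
from collections import Counter

def embed(text, dim=64):
    """Toy hashing embedding -- a placeholder for a real embedding model."""
    vec = [0.0] * dim
    for token, count in Counter(text.lower().split()).items():
        vec[hash(token) % dim] += count
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def semantic_search(query, notes, top_k=3):
    """Rank note names by similarity of their text to the query."""
    q = embed(query)
    scored = [(cosine(q, embed(text)), name) for name, text in notes.items()]
    scored.sort(reverse=True)
    return [name for _, name in scored[:top_k]]

notes = {
    "ml": "neural network training loss",
    "cooking": "pasta sauce recipe",
}
```

With a real embedding model, queries also match paraphrases rather than only shared tokens, which is what distinguishes semantic search from plain full-text search.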