ThinkMem is a memory management system for Large Language Models (LLMs). It runs as a Model Context Protocol (MCP) server and provides multiple memory types: unstructured RawMemory and structured ListMemory (arrays, queues, and stacks). The system offers text-based retrieval, automatic summary generation, and persistence via JSON file storage. Supporting both a standalone stdio mode and an HTTP mode, ThinkMem lets LLMs store, retrieve, and process information across interactions, improving their reasoning and recall.
Key Features
- Dual operating modes: MCP stdio and StreamableHTTP
- Intelligent retrieval with text search and row-level operations
- Automatic summary generation and management
- Persistent JSON file storage with backup and recovery support
- Multiple memory types: RawMemory (unstructured text) and ListMemory (arrays/queues/stacks)
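To make the ListMemory idea concrete, here is a minimal sketch of a list-backed memory that exposes array, queue (FIFO), and stack (LIFO) operations with JSON persistence. All names and method signatures here are illustrative assumptions, not ThinkMem's actual API.

```python
import json
from collections import deque


class ListMemory:
    """Hypothetical ListMemory sketch: one backing deque exposed with
    array-, queue-, and stack-style operations, persisted as JSON."""

    def __init__(self, items=None):
        self._items = deque(items or [])

    def get(self, index):
        # array-style random access
        return self._items[index]

    def append(self, item):
        # array append / stack push / queue enqueue
        self._items.append(item)

    def pop(self):
        # stack semantics: remove and return the newest item (LIFO)
        return self._items.pop()

    def dequeue(self):
        # queue semantics: remove and return the oldest item (FIFO)
        return self._items.popleft()

    def save(self, path):
        # persist the memory as a JSON array on disk
        with open(path, "w") as f:
            json.dump(list(self._items), f)

    @classmethod
    def load(cls, path):
        # restore a previously saved memory
        with open(path) as f:
            return cls(json.load(f))


mem = ListMemory()
mem.append("step 1")
mem.append("step 2")
mem.append("step 3")
print(mem.pop())      # LIFO -> "step 3"
print(mem.dequeue())  # FIFO -> "step 1"
```

Backing every access pattern with one structure (here a `deque`, for O(1) operations at both ends) is one plausible way a single ListMemory type could serve as array, queue, and stack at once, as the feature list describes.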