A locally hosted LLaMA 3.2 model (via Ollama) interprets natural-language questions and generates the appropriate queries.
A FastAPI server acts as the intermediary, managing LLM prompts, query execution, and result formatting.
Oracle Autonomous Database stores all vector data and executes queries directly in-database.
The system intelligently routes each question to either a traditional SQL query or a semantic vector search.
Oracle 23ai's built-in vector search and embedding models handle both document and query embeddings.
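The routing step above can be sketched as follows. This is a minimal illustration, not the project's actual implementation: the table name `products`, the vector column `description_vec`, and the in-database embedding model `doc_model` are assumptions, and the LLM classification step (normally delegated to LLaMA 3.2 via Ollama) is stubbed with a keyword heuristic so the sketch is self-contained.

```python
# Sketch of the SQL-vs-vector routing layer. Names (products,
# description_vec, doc_model) are hypothetical; the real system would ask
# the Ollama-hosted LLM to classify the question instead of the heuristic
# used here.

SEMANTIC_HINTS = ("similar", "like", "related to", "about")

def classify(question: str) -> str:
    """Decide between 'sql' and 'vector'. Stand-in for the LLM call."""
    q = question.lower()
    return "vector" if any(hint in q for hint in SEMANTIC_HINTS) else "sql"

def build_query(question: str) -> str:
    """Return the SQL text to execute against Oracle 23ai."""
    if classify(question) == "vector":
        # Oracle 23ai similarity search: embed the bind variable :q
        # in-database and rank rows by cosine distance to the stored
        # embedding column.
        return (
            "SELECT id, description FROM products "
            "ORDER BY VECTOR_DISTANCE(description_vec, "
            "VECTOR_EMBEDDING(doc_model USING :q AS data), COSINE) "
            "FETCH FIRST 5 ROWS ONLY"
        )
    # Structured questions fall through to a traditional relational query.
    return "SELECT id, description FROM products FETCH FIRST 5 ROWS ONLY"
```

In the full system, `build_query`'s result would be executed through `python-oracledb` inside a FastAPI endpoint, with the rows formatted and returned to the user.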