Create an interactive AI assistant with Streamlit, NVIDIA NIM/Ollama, and the Model Context Protocol (MCP).
Llama Streamlit provides a conversational interface for interacting with Large Language Models (LLMs), enabling real-time external tool execution via the Model Context Protocol (MCP). The project supports custom model selection (NVIDIA NIM or Ollama), API configuration, and tool integration, all within a user-friendly Streamlit chat interface designed for a responsive, real-time AI assistant experience.
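As a rough sketch of what such a chat loop can look like, the snippet below wires Streamlit's chat elements to a local Ollama model. It is a minimal illustration under stated assumptions, not the project's actual code: the model name (llama3.1) and the use of the `ollama` Python client are assumptions.

```python
# Minimal sketch of a Streamlit chat loop backed by a local Ollama model.
# Assumes Ollama is running locally and the `ollama` Python client is installed;
# the model name "llama3.1" is illustrative, not taken from the project.
import ollama
import streamlit as st

st.title("Llama Streamlit (sketch)")

# Keep the conversation across Streamlit reruns.
if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay prior turns so the chat history stays visible.
for msg in st.session_state.messages:
    with st.chat_message(msg["role"]):
        st.markdown(msg["content"])

if prompt := st.chat_input("Ask the assistant..."):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)

    # Send the full history to the model and display its reply.
    response = ollama.chat(model="llama3.1", messages=st.session_state.messages)
    reply = response["message"]["content"]
    st.session_state.messages.append({"role": "assistant", "content": reply})
    with st.chat_message("assistant"):
        st.markdown(reply)
```

Saved as app.py, a script like this would be launched with `streamlit run app.py`.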
Key Features
1. Support for multiple LLM backends (NVIDIA NIM and Ollama); see the backend-selection sketch after this list
2. Docker support for easy deployment
3. Streamlit UI with interactive chat elements
4. LLM-powered chat interface
5. Real-time tool execution via MCP
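Both backends can be driven through OpenAI-compatible endpoints, which is one way the backend choice can remain a configuration detail rather than a code change. The sketch below assumes the `openai` Python client, a NVIDIA_API_KEY environment variable, and illustrative model names; the project's own configuration may differ.

```python
# Sketch of sharing one code path across backends: both NVIDIA NIM and Ollama
# expose OpenAI-compatible endpoints, so a single client can serve either.
# Model names and the NVIDIA_API_KEY environment variable are illustrative.
import os
from openai import OpenAI

def make_client(backend: str) -> tuple[OpenAI, str]:
    """Return an OpenAI-compatible client plus a default model for the chosen backend."""
    if backend == "nvidia_nim":
        client = OpenAI(
            base_url="https://integrate.api.nvidia.com/v1",
            api_key=os.environ["NVIDIA_API_KEY"],
        )
        return client, "meta/llama-3.1-8b-instruct"
    # Local Ollama server; the api_key value is ignored but required by the client.
    client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
    return client, "llama3.1"

client, model = make_client("nvidia_nim")
completion = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(completion.choices[0].message.content)
```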
Use Cases
1. Deploying LLM-powered applications with Docker
2. Integrating external tools with LLMs for enhanced functionality (see the MCP sketch after this list)
3. Building interactive AI assistants with a user-friendly chat interface
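For the tool-integration use case, the snippet below shows roughly how a client can discover and call a tool over MCP using the official `mcp` Python SDK. The server command, tool name, and arguments are hypothetical placeholders, not taken from this project.

```python
# Sketch of calling an external tool over MCP with the `mcp` Python SDK.
# The server command ("python weather_server.py") and the tool name/arguments
# are illustrative; the actual project may launch different MCP servers.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def run_tool() -> None:
    # Launch the MCP server as a subprocess and talk to it over stdio.
    server = StdioServerParameters(command="python", args=["weather_server.py"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover what the server exposes, then invoke one tool.
            tools = await session.list_tools()
            print("available tools:", [t.name for t in tools.tools])

            result = await session.call_tool("get_weather", {"city": "Berlin"})
            print(result.content)

asyncio.run(run_tool())
```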