Discover our curated collection of MCP servers for data science & ML. Browse 6,531 servers and find the perfect MCPs for your needs.
Enables Excel file manipulation without requiring a Microsoft Excel installation.
Unifies Model Context Protocol (MCP) and REST services, providing a central management point for AI clients and federated environments.
Provides access to AI Xiaozhi's voice and smart assistant features through a versatile Python-based client, with no dedicated hardware required.
Intelligently routes large language model requests to the most suitable models and tools for optimized inference, enhanced security, and improved accuracy.
Enables Java applications to interact with AI models and tools through a standardized interface.
Assembles prompts programmatically and orchestrates LLMs, tools, and data in JavaScript code.
Provides a secure runtime environment for fully-autonomous AI agents, designed for enterprise-grade deployment.
Provides AI assistants with comprehensive access to shadcn/ui v4 components, blocks, demos, and metadata via the Model Context Protocol.
Provides all-in-one infrastructure for search, recommendations, Retrieval-Augmented Generation (RAG), and analytics via API.
Provides a catalog of official Microsoft MCP server implementations for AI-powered data access and tool integration.
Enhances AI model reasoning by making it recursively evaluate and refine its responses through self-argumentation.
Aggregates search results from various web search services through a unified metasearch library.
Enables AI assistants to search and analyze arXiv papers through a simple Model Context Protocol interface.
Serve, benchmark, and deploy large language models (LLMs) on various hardware platforms.
Empowers AI agents and coding assistants with web crawling and retrieval-augmented generation (RAG) capabilities.
Empowers AI agents to access, discover, and extract real-time web data, bypassing restrictions and bot detection.
Automates data integration via a stable, self-healing SDK, providing automated schema-drift detection, retries, and remappings to maintain continuous data flow without connector maintenance or rewrites.
Provides a Gradio web interface for locally running various Llama 2 models on GPU or CPU across different operating systems.
Enables AI assistants to interact with Google Gemini CLI, leveraging its massive token window for large file analysis and codebase understanding.
Integrates Perplexity's real-time web search, reasoning, and research capabilities into AI assistants.
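Because every server in this catalog speaks the same Model Context Protocol, an MCP-capable client can discover and invoke their tools in a uniform way. Below is a minimal sketch using the official MCP Python SDK; the launch command and package name are placeholders, not taken from any particular server listed above, so substitute the command documented by the server you choose.

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Hypothetical launch command for a stdio-based MCP server; replace with the
# real command and arguments from the chosen server's documentation.
server_params = StdioServerParameters(
    command="uvx",
    args=["example-mcp-server"],  # placeholder package name
)

async def main() -> None:
    async with stdio_client(server_params) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()          # perform the MCP handshake
            tools = await session.list_tools()  # discover the tools the server exposes
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

if __name__ == "__main__":
    asyncio.run(main())

The same session object can then call any discovered tool with session.call_tool(name, arguments), regardless of which catalog server is on the other end.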