Discover our curated collection of MCP servers for data science & ML. Browse 8,922 servers and find the perfect MCP servers for your needs.
Enables LLMs to inspect PostgreSQL database schemas and execute read-only queries.
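How such a server enforces "read-only" is implementation-specific; a minimal sketch of one common approach is a statement filter that rejects anything other than read statements before it reaches the database (the `is_read_only_query` helper below is hypothetical, not this server's actual API):

```python
import re

# Hypothetical guard an MCP server might apply before executing SQL.
# Statements starting with a write/DDL keyword are rejected outright.
_WRITE_KEYWORDS = re.compile(
    r"^\s*(INSERT|UPDATE|DELETE|DROP|ALTER|CREATE|TRUNCATE|GRANT|REVOKE)\b",
    re.IGNORECASE,
)
# Only statements that begin like a read are allowed through.
_READ_KEYWORDS = re.compile(r"^(SELECT|WITH|EXPLAIN|SHOW)\b", re.IGNORECASE)

def is_read_only_query(sql: str) -> bool:
    """Return True only for statements that look like reads."""
    stripped = sql.strip()
    if _WRITE_KEYWORDS.match(stripped):
        return False
    return bool(_READ_KEYWORDS.match(stripped))
```

Keyword filtering alone is bypassable (e.g. via functions with side effects), so real implementations typically also open the session with PostgreSQL's `SET TRANSACTION READ ONLY` as a database-level backstop.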
Enables web searches using DuckDuckGo and fetches/summarizes content from the results using the Jina API.
Facilitates interaction between AI agents and Model Context Protocol (MCP) servers through structured state management.
Enables n8n workflows to interact with Model Context Protocol (MCP) servers.
Benchmarks vLLM endpoints interactively through MCP, enabling performance evaluation of large language models.
Connects to Typesense collections and retrieves data using an MCP client.
Connects coding AI to databases, data warehouses, data pipelines, and cloud services through the Model Context Protocol.
Retrieves and summarizes news from the web using a secure password-protected MCP server.
Enables high-performance, lock-free clipboard access for AI assistants, specifically bridging Windows clipboard content to WSL2 environments.
Provides a modular Python toolkit for creating Model Context Protocol (MCP) modules specifically designed for chemical engineering and chemistry workflows.
Optimizes gravitational wave signal detection using AI and the Model Context Protocol.
Integrates Alibaba Cloud's Qwen-Omni multimodal AI capabilities into AI assistants, enabling image understanding, audio recognition, and speech synthesis.
Provides intelligent read-only access to Obsidian vaults, enabling a vault to function as a 'second brain' for LLMs.
Empowers AI agents with advanced image analysis capabilities using OpenRouter's vision models to interpret visual content from screenshots, UI designs, and documents.
Manages RSS feeds with advanced search, AI-powered semantic search, and a comprehensive reading workflow.
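Semantic search over feed items generally means ranking by vector similarity rather than keyword match. As a runnable illustration only (production systems use learned embedding models, not the crude bag-of-words vectors assumed here), ranking by cosine similarity looks like this:

```python
import math
from collections import Counter

def _vec(text: str) -> Counter:
    # Crude bag-of-words vector; real semantic search uses embedding models.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Standard cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def semantic_search(query: str, docs: list[str], top_k: int = 3) -> list[str]:
    """Return the top_k feed items most similar to the query."""
    q = _vec(query)
    ranked = sorted(docs, key=lambda d: cosine(q, _vec(d)), reverse=True)
    return ranked[:top_k]
```

Swapping `_vec` for an embedding-model call turns this ranking loop into true semantic search; the cosine step stays the same.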
Provides comprehensive geographic, tourist, historical, and cultural information about Colombia through a structured set of tools.
Manages structured memory across chat sessions for project-based AI assistant work.
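The core of cross-session memory is persistence keyed by project. A minimal sketch, assuming a JSON file on disk and a hypothetical `ProjectMemory` class (not this server's actual interface):

```python
import json
from pathlib import Path

class ProjectMemory:
    """Hypothetical per-project key/value memory persisted as JSON,
    so facts survive across separate chat sessions."""

    def __init__(self, path: str):
        self.path = Path(path)
        # Reload whatever previous sessions stored.
        self.data = json.loads(self.path.read_text()) if self.path.exists() else {}

    def remember(self, project: str, key: str, value: str) -> None:
        # Write-through: every update is flushed to disk immediately.
        self.data.setdefault(project, {})[key] = value
        self.path.write_text(json.dumps(self.data, indent=2))

    def recall(self, project: str, key: str, default=None):
        return self.data.get(project, {}).get(key, default)
```

A fresh `ProjectMemory` pointed at the same file recalls what an earlier instance remembered, which is the whole point of session-spanning memory.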
Extends AI assistants with custom tools, resources, and prompts through a modular plugin architecture.
Optimizes Large Language Model (LLM) prompts by converting data to the token-efficient TOON format, achieving significant token savings over JSON.
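The savings come from stating a uniform list's field names once instead of repeating them per object. The sketch below illustrates that idea with a simplified header-plus-rows layout; it is NOT the actual TOON syntax, just the compaction principle it relies on:

```python
def tabularize(records: list[dict]) -> str:
    """Compact a uniform list of objects into header-plus-rows form.
    Illustrates the idea behind token-efficient formats such as TOON;
    this is a simplification, not real TOON output."""
    if not records:
        return ""
    fields = list(records[0])          # field names emitted once, up front
    header = ",".join(fields)
    rows = [",".join(str(r[f]) for f in fields) for r in records]
    return "\n".join([header, *rows])
```

For a two-record list, JSON repeats every key in every object, while the tabular form names each key exactly once, so the output shrinks as the list grows.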
Exposes EMBA firmware analysis results as structured tools, enabling Large Language Models to query and reason about security findings.