Discover our curated collection of MCP servers for developer tools. Browse 18,038 servers and find the perfect MCPs for your needs.
Automates and enhances Jira project management tasks through integration with Claude AI via the Model Context Protocol (MCP).
Illustrates the differences between LLM function calling and the Model Context Protocol (MCP) by controlling Home Assistant lights.
Manages file downloads through an AI model via a standardized Model Context Protocol (MCP) interface.
Uploads projects to the Yourware platform via a dedicated MCP server.
Enables zero-knowledge proof generation and verification using Circom circuits.
Extracts symbol outlines from a codebase to provide LLM coding agents with necessary context.
Enables a self-hosted AI environment on Windows, integrating Ollama, Open WebUI, and MCP for local language model management and chat interaction.
Integrates Wiki.js instances with the Model Context Protocol, enabling large language models like Claude to interact with documentation.
Provides an MCP server for HomeyPro home automation systems, enabling paginated access and comprehensive control over devices, zones, and flows.
Enables text completion with local llama.cpp models by acting as a Model Context Protocol server.
Facilitates seamless interaction with the Camunda 7 Community Edition Engine for workflow automation and process management.
Manages Datomic database connections and operations, including support for in-memory instances.
Connects to PostgreSQL databases, executes read-only SQL queries, and provides detailed table information through MCP-compatible clients.
Integrates AI assistants with Jira Cloud instances to manage issues, perform searches, add comments, and handle workflow transitions.
Integrates Sentry's API into the Model Context Protocol (MCP) to provide issue and trace details for contextual development workflows.
Auto-generates AGENTS.md from a codebase to provide comprehensive context for AI coding tools.
Transforms any text into classified, structured semantic units, acting as a cognitive primitive for AI agents.
Reduces LLM token usage by semantically compressing prompts while preserving meaning and core constraints.
Integrates AI agents with Frida for comprehensive dynamic instrumentation and mobile application analysis.
Extends AI assistants with domain-specific capabilities for recruiting, talent sourcing, and academic research.