Discover our curated collection of MCP servers for developer tools. Browse 18,167 servers and find the perfect MCPs for your needs.
Provides a secure Model Context Protocol server for time-based operations, featuring timing attack protection and timelock encryption.
Enables interaction with Jira instances using natural language commands through a Model Context Protocol (MCP) server.
Analyzes Japanese text morphologically to measure linguistic characteristics and provide feedback for text generation.
Facilitates integration between LLM applications and Anchor programs via the Model Context Protocol (MCP).
Streamlines resume creation and analysis by integrating with the NovaCV API for PDF generation, format conversion, and content analysis.
Provides cryptocurrency price and market data via the Model Context Protocol.
Integrates and manages Devici platform data, including users, threat models, and security insights, through a Model Context Protocol (MCP) server for LLMs.
Enables AI agents to seamlessly interact with GitLab, providing capabilities for managing issues and project labels.
Extends the Sliver C2 framework with a Python-based command and control server for advanced operations.
Enables AI assistants to solve complex combinatorial and numerical optimization problems through a unified interface to multiple powerful solvers.
Converts Swagger/OpenAPI specifications into Model Context Protocol (MCP) tools, enabling interaction with APIs through a standardized protocol.
Securely manages API keys for developers across browser extensions, CLI, and AI agent integrations.
Converts HTML files or content strings to high-fidelity PDF documents using Puppeteer's browser rendering engine.
Enables AI assistants to interact with the Bit2Me digital assets platform for real-time market data, wallet management, and trading operations.
Empowers AI development by enabling agents to learn from past experiences, reducing repetitive trial-and-error and optimizing token usage.
Performs unified memory forensics using a multi-tier engine that combines Rust speed with Volatility3 coverage for comprehensive analysis.
Empowers AI coding agents with a structured, versioned, and graph-based persistent memory system.
Exposes local Ollama API capabilities as tools for AI agents, enabling them to use local models for chat, completion, and embeddings.
Orchestrates crash-proof LLM pipelines with disk-based checkpointing, cost-effective free-tier model routing, and guaranteed structured output.
Enables autonomous overnight research loops for coding agents, featuring semantic arXiv search and robust memory management.