Discover our curated collection of MCP servers for cloud infrastructure. Browse 1,246 servers and find the perfect MCPs for your needs.
Extends the capabilities of LLMs with tools running on Modal.
Connects to a TiDB serverless database using the Model Context Protocol.
Enables AI assistants to interact with and extract information from an Unraid server through the official Unraid GraphQL API.
Deploys a Model Context Protocol server over Server-Sent Events (SSE) on Google Cloud Run with IAM authentication.
Enables programmatic interaction with Telnyx's comprehensive suite of communication APIs, including telephony, messaging, and AI assistant functionalities.
Integrates AI assistants with the Terraform Cloud API, enabling infrastructure management through natural language.
Enables configuration and management of Higress through a Model Context Protocol (MCP) server implementation.
Bridges Apache Kafka and Apache Pulsar protocols to enable AI agents to interact with streaming data.
Turns natural-language prompts into Microsoft Azure architecture diagrams (PNG) using Python Diagrams and Graphviz.
Provides comprehensive AWS cost analysis and optimization recommendations based on proven best practices.
Manages Google Cloud Platform resources through natural language commands via integration with Claude Desktop.
Enables AI assistants to leverage human input via Amazon Mechanical Turk.
Enables interaction with EMQX MQTT brokers through the Model Context Protocol (MCP).
Enables communication with a Cloudflare Worker from Claude Desktop.
Enables Large Language Models to interact with and manage data stored in Amazon S3.
Enables interaction with Genesys Cloud's Platform API using the Model Context Protocol.
Queries real-time AWS EC2 pricing data to find the cheapest instances based on CPU, RAM, and other specifications.
Demonstrates an identity-aware MCP server leveraging Tailscale Serve to access user information within a private Tailnet.
Retrieves user geolocation information and integrates it with large language models.
Implements a scalable machine learning inference architecture on Amazon EKS for deploying Large Language Models (LLMs) with agentic AI capabilities, including Retrieval-Augmented Generation (RAG) and intelligent document processing.