Frequently Asked Questions: MCP, OpenAPI, API-first
Find answers to common questions about MCP, OpenAPI, and API-first development, all in the context of the HAPI Stack for the Model Context Protocol (MCP).
❓ What is an MCP server?
An MCP server is not a traditional server implementation. In the context of the HAPI Stack, an MCP server is simply a reference to a set of tools that can be invoked by an AI agent or MCP client, such as chatMCP. You don't need to build a backend; you just expose the contract that describes which tools are available.
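To make "a reference to a set of tools" concrete, here is a sketch of the kind of contract such a server exposes. The tool names and schemas below are invented for illustration, not taken from any real HAPI Stack deployment:

```python
# Hypothetical shape of an MCP "server": just a contract listing callable tools.
mcp_contract = {
    "tools": [
        {
            "name": "getUsers",
            "description": "List all users",
            "inputSchema": {"type": "object", "properties": {}},
        },
        {
            "name": "createUser",
            "description": "Create a new user",
            "inputSchema": {
                "type": "object",
                "properties": {"email": {"type": "string"}},
                "required": ["email"],
            },
        },
    ]
}

# An MCP client reads this contract to learn which tools it may invoke.
tool_names = [tool["name"] for tool in mcp_contract["tools"]]
print(tool_names)  # → ['getUsers', 'createUser']
```

Note that nothing here runs application logic; the contract only describes what can be called, which is why no backend rewrite is needed.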
❓ Do I need to implement my own MCP server?
No. You can generate the MCP contract dynamically from your existing API using tools like the HAPI Server. This means you don't need to manually implement logic or rewrite your backend: you simply run HAPI Server (or deploy via runMCP) to expose your API as MCP tools.
❓ How is MCP different from OpenAPI or Swagger?
OpenAPI (Swagger) defines how machines talk to machines over HTTP. MCP, on the other hand, defines how AI agents talk to applications using structured contracts over JSON-RPC. You can actually generate MCP contracts from OpenAPI specs, so the two are complementary.
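To illustrate the JSON-RPC side of this comparison, here is a minimal sketch of the message an MCP client sends to invoke a tool. The method name follows the MCP `tools/call` convention; the tool name `getUsers` is an assumed example:

```python
import json

# A JSON-RPC 2.0 request invoking an MCP tool, rather than an HTTP
# verb + path as in OpenAPI. The tool name "getUsers" is illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "getUsers", "arguments": {}},
}
print(json.dumps(request, indent=2))
```

The key structural difference: OpenAPI addresses operations by HTTP method and path, while MCP addresses them by tool name inside a JSON-RPC envelope.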
❓ Can I convert my OpenAPI spec into an MCP contract?
Yes. You can convert OpenAPI to MCP using automated tools like HAPI Server, which transforms your existing Swagger files into tool definitions readable by LLMs.
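The conversion idea can be sketched in a few lines. This is illustrative only and assumes a spec already parsed into a dict; HAPI Server's real mapping is necessarily more complete (parameters, request bodies, auth, and so on):

```python
# Hypothetical OpenAPI → MCP tool conversion: each operation in the spec
# becomes one tool definition that an LLM can read.
def openapi_to_tools(spec: dict) -> list[dict]:
    tools = []
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            tools.append({
                # Prefer the operationId; fall back to a synthetic name.
                "name": op.get("operationId", f"{method}_{path}"),
                "description": op.get("summary", ""),
                "inputSchema": {"type": "object", "properties": {}},
            })
    return tools

spec = {
    "paths": {
        "/users": {
            "get": {"operationId": "getUsers", "summary": "List all users"}
        }
    }
}
print(openapi_to_tools(spec))
```

A real converter would also translate each operation's parameters and request body into the tool's `inputSchema`, which is omitted here for brevity.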
❓ What are MCP tools?
MCP tools are individual functions or operations that an AI agent can invoke. Each tool has a name, description, and JSON input schema, similar to an OpenAPI operation. For example, an endpoint like GET /users becomes a tool like getUsers. An MCP client (like chatMCP) can then call this tool to retrieve user data.
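The getUsers example above might look like the following as a tool definition. The field names (name, description, inputSchema) follow the MCP tool shape; the optional limit parameter is hypothetical:

```python
# Illustrative MCP tool definition for a GET /users endpoint.
# The "limit" parameter is an invented example, not from any real API.
get_users_tool = {
    "name": "getUsers",
    "description": "Retrieve the list of users",
    "inputSchema": {
        "type": "object",
        "properties": {
            "limit": {
                "type": "integer",
                "description": "Maximum number of users to return",
            }
        },
    },
}
print(get_users_tool["name"])  # → getUsers
```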
❓ How does chatMCP work?
chatMCP is the client interface that lets users (or LLMs) invoke MCP tools interactively. It is like having an AI assistant that knows how to call your backend via the MCP contract.
❓ What's the benefit of contract-first MCP development?
This approach enables faster integration, avoids backend duplication, and makes your API agent-ready out of the box. Instead of reinventing your stack for AI agents, you let them plug into what already exists, securely and scalably.