OpenAI MCP Server: Integration Guide
Integrating the OpenAI API into a Model Context Protocol (MCP) server is a powerful way to give your AI agents advanced capabilities. With the HAPI CLI, you can quickly wrap OpenAI's endpoints into a standardized MCP interface.
The Challenge: Token Bloat
The OpenAI API is extensive. Wrapping the entire API into a single MCP server is generally not recommended because of token bloat: when an MCP client (like Claude) loads a server's tools, every tool definition consumes context-window tokens. Loading hundreds of OpenAI tools simultaneously can lead to unmanageable context windows and increased costs.
For a deeper dive into this issue, see How to scale MCP to thousands of tools without destroying your budget.
Our Approach: Focused MCP Servers
Instead of a monolithic server, we recommend creating focused MCP servers that expose only the specific functionality your application needs. In this guide, we will create a minimal MCP server that specifically exposes OpenAI's image and audio generation capabilities.
Prerequisites
Before starting, ensure you have:
- An OpenAI API key: create one in the OpenAI dashboard. You will need it to authenticate to your MCP server from your AI agents/MCP clients.
- HAPI MCP CLI: Follow the installation steps below.
Alternatively, you can embed the API key in the MCP server configuration itself, using the x-hapi.security extension in your OpenAPI spec, mapped to the spec's security schemes. This is safe only as long as you never share the spec publicly or commit it to a public repository.
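As a rough sketch, an x-hapi.security block might look like the following. The scheme name (bearerAuth) and the exact shape of the extension are assumptions here, not the canonical HAPI schema; check the HAPI CLI documentation for the authoritative format:

```yaml
# Hypothetical sketch: attaches a key to the spec's existing
# security scheme. Field names are assumptions; consult the
# HAPI CLI docs for the exact x-hapi extension schema.
components:
  securitySchemes:
    bearerAuth:
      type: http
      scheme: bearer
x-hapi:
  security:
    bearerAuth: sk-...your-openai-key...
```

Because the key now lives inside the spec file, treat that file like any other secret: keep it out of version control and public file shares.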
Steps to Create an OpenAI MCP Server
1. Install HAPI MCP
The easiest way to install the HAPI MCP CLI is via our install script:
curl -fsSL https://get.mcp.com.ai/hapi.sh | bash
For more installation options, visit the HAPI CLI documentation or the HAPI MCP Microsite.
2. Download OpenAI Tools OAS
curl -o ~/.hapi/specs/openai-tools.yaml https://docs.mcp.com.ai/servers-apis/openapi/openai-tools.yaml
3. Start the MCP Server
Use the hapi serve command to launch the server. This uses the pre-configured HAPI MCP OpenAI Tools specification, which covers the image and audio endpoints:
hapi serve openai-tools --headless --port 3030 --url https://api.openai.com/v1
This OAS is a slimmed-down version of the full OpenAI API from OpenAI's GitHub, focusing on image generation and audio transcription endpoints.
4. Test and Integrate
Once the server is running, you can connect it to any MCP-aware client:
- chatMCP: A powerful web client for testing MCP servers.
- VS Code: Add the server to your GitHub Copilot configuration.
- LibreChat: Use OpenAI tools directly within your chat interface.
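Most MCP clients are configured with a small JSON snippet pointing at the server's URL. A minimal sketch for the locally running HAPI server might look like this; the mcpServers key follows the convention used by Claude-family clients, and the /mcp endpoint path is an assumption, so adjust it to whatever path your HAPI build actually exposes:

```json
{
  "mcpServers": {
    "openai-tools": {
      "url": "http://localhost:3030/mcp"
    }
  }
}
```

Each client documents where this configuration lives (for example, VS Code keeps it in its Copilot/MCP settings), but the server entry itself stays the same across clients.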
An easy way to inspect the MCP server endpoints is to pipe the output of the server into the MCP Inspector:
hapi serve openai-tools --headless --port 3030 --url https://api.openai.com/v1 | bunx @modelcontextprotocol/inspector
Advanced: Claude Code Skills
If you are a Claude user, you can leverage Claude Code Skills to build even more sophisticated agents. By connecting your OpenAI MCP server, Claude can intelligently decide when to generate an image or transcribe audio based on the conversation flow.
Are you a Codex user? You can also connect the OpenAI MCP server with Codex. Interested in learning how? Check out our blog, where we publish frequent updates on new MCP integrations and use cases! ✊🏼
Deployment Options
Docker Example
For production environments, you can run the OpenAI MCP server as a Docker container. This allows for easy scaling and environment management.
docker run --name hapi-openai -d \
-p 3030:3030 \
hapimcp/hapi-cli:latest serve \
--openapi https://docs.mcp.com.ai/servers-apis/openapi/openai-tools.yaml \
--headless \
--url https://api.openai.com/v1
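For repeatable deployments, the same container can be described with Docker Compose. The sketch below simply mirrors the docker run command above (same image, same serve flags); the restart policy is an optional addition, not something the HAPI image requires:

```yaml
# docker-compose.yml: Compose equivalent of the docker run command above.
services:
  hapi-openai:
    image: hapimcp/hapi-cli:latest
    command: >
      serve
      --openapi https://docs.mcp.com.ai/servers-apis/openapi/openai-tools.yaml
      --headless
      --url https://api.openai.com/v1
    ports:
      - "3030:3030"
    restart: unless-stopped
```

Bring it up with docker compose up -d, and the server is reachable on port 3030 exactly as in the docker run example.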
Further Resources
Find more ways to deploy in our Deployment documentation, including Cloudflare and other cloud providers.