
A universal MCP (Model Context Protocol) server that provides access to OpenAI's APIs through a standardized interface. Works with any MCP-compatible client including Claude Desktop, Claude Code, Cursor, Windsurf, VS Code, and more.
Demo video: ClaudeDemoMCP.mp4
This MCP server enables any AI assistant or development tool that supports the Model Context Protocol to interact with OpenAI's APIs. Once configured, your AI assistant can:
- Have conversations with GPT models
- Generate images with DALL-E
- Create embeddings for semantic search
- List available models
- Work with OpenAI-compatible providers (Groq, OpenRouter, etc.)
Built with Swift for high performance and reliability.
- Multi-Provider Support - Works with 9+ AI providers including OpenAI, Anthropic, Google Gemini, Ollama, Groq, and more
- Chat Completions - Interact with gpt-4o, o3-mini, o3, Claude, Gemini, and other chat models
- Image Generation - Create images using DALL-E 2 and DALL-E 3
- Embeddings - Generate text embeddings for semantic search and analysis
- Model Listing - Retrieve available models from any provider
This server works with any OpenAI-compatible API endpoint:
- OpenAI (default) - GPT-4o, o3-mini, o3, DALL-E, embeddings
- Azure OpenAI - Enterprise OpenAI services with compatible endpoints
- Ollama - Local LLMs with an OpenAI-compatible API (`/v1` endpoints)
- Groq - Fast inference using their OpenAI-compatible endpoint
- OpenRouter - Unified access to 100+ models via OpenAI format
- DeepSeek - Coding models with OpenAI-compatible API
These providers have their own APIs but may offer OpenAI-compatible endpoints:
- Anthropic - Check if they provide an OpenAI-compatible endpoint
- Google Gemini - May require specific configuration
- xAI - Check for OpenAI-compatible access
Note: Image generation (DALL-E) only works with OpenAI. Other providers may support different image models.
```bash
npm install -g swiftopenai-mcp
```
Requires Node.js 16 or higher.
Add this configuration to your MCP client:
```json
{
  "mcpServers": {
    "swiftopenai": {
      "command": "npx",
      "args": ["-y", "swiftopenai-mcp"],
      "env": {
        "API_KEY": "sk-..."
      }
    }
  }
}
```
Groq (fast open-source models):
```json
"env": {
  "API_KEY": "gsk_...",
  "API_BASE_URL": "https://api.groq.com/openai/v1"
}
```
Ollama (local models):
```json
"env": {
  "API_KEY": "ollama",
  "API_BASE_URL": "http://localhost:11434/v1"
}
```
OpenRouter (multiple providers):
```json
"env": {
  "API_KEY": "sk-or-v1-...",
  "API_BASE_URL": "https://openrouter.ai/api/v1"
}
```
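For example, a complete Groq configuration combines the env block above with the same structure shown earlier (a sketch; adjust the server name and key to your setup):
```json
{
  "mcpServers": {
    "swiftopenai": {
      "command": "npx",
      "args": ["-y", "swiftopenai-mcp"],
      "env": {
        "API_KEY": "gsk_...",
        "API_BASE_URL": "https://api.groq.com/openai/v1"
      }
    }
  }
}
```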
- Claude Desktop:
  - macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
  - Windows: `%APPDATA%\Claude\claude_desktop_config.json`
- Claude Code: `.claude/mcp.json` in your project root
- Cursor: Settings → Features → MCP Servers
- Windsurf: MCP panel in settings
- VS Code Continue: Add to your `.continuerc.json` under the `models` array with an `mcpServers` property
Send messages to OpenAI GPT models and get responses.
Parameters:
- messages (required) - Array of conversation messages, each with:
- role: "system", "user", or "assistant"
- content: The message text
- model - Which model to use (default: "gpt-4o"). Examples: gpt-4o, o3-mini, o3
- temperature - Creativity level from 0-2 (default: 0.7). Lower = more focused, higher = more creative
- max_tokens - Maximum length of the response
Example usage: "Ask o3-mini to explain quantum computing in simple terms"
Generate images using AI models.
Parameters:
- prompt (required) - Text description of the image you want
- model - Model to use (default: "dall-e-3"). Examples:
- OpenAI: "dall-e-2", "dall-e-3"
- Other providers: Use their specific model names
- size - Image dimensions (default: "1024x1024")
- quality - "standard" or "hd" (default: "standard")
- n - Number of images to generate (default: 1)
Example usage: "Generate an HD image of a futuristic city at sunset"
Note: Image generation parameters like size and quality may vary by provider. Currently optimized for OpenAI's DALL-E models.
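A hypothetical parameter payload for an image request, using the fields described above (your client assembles this for you when you phrase the request in natural language):
```json
{
  "prompt": "A futuristic city at sunset",
  "model": "dall-e-3",
  "size": "1024x1024",
  "quality": "hd",
  "n": 1
}
```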
List available models from your provider.
Parameters:
- filter - Optional text to filter model names (e.g., "gpt" to see only GPT models)
Example usage: "List all available models" or "Show me all GPT models"
Create embeddings for text.
Parameters:
- input (required) - The text to create embeddings for
- model - Embedding model to use (default: "text-embedding-ada-002")
Example usage: "Create embeddings for the text 'The quick brown fox jumps over the lazy dog'"
Note: The exact way to invoke these tools depends on your MCP client.
Powerful use cases:
Get a second opinion from another AI:
- "Send this entire conversation to o3-mini and ask what it thinks"
- "Have gpt-4o analyze what we've discussed and suggest improvements"
Deep analysis:
- "Ask o3 to find any logical flaws in our reasoning so far"
- "Have o3-mini summarize the key decisions we've made"
Cross-model collaboration:
- "Get o3's perspective on this problem we're solving"
- "Ask gpt-4o to critique the code we just wrote"
- "Have o3-mini explain this differently for a beginner"
Context-aware help:
- "Based on our conversation, have o3 create a step-by-step tutorial"
- "Ask gpt-4o to generate test cases for the solution we discussed"
Role-playing scenarios:
- "Have o3-mini act as a senior developer and review our approach"
- "Ask gpt-4o to play devil's advocate on our architecture"
- "Get o3 to explain this as if teaching a computer science class"
Quick generations:
- "Generate an image of a sunset over mountains"
- "Create a DALL-E 3 HD image of a futuristic city"
Specific requests:
- "Make a 1792x1024 image of a cozy coffee shop interior"
- "Generate a standard quality image of abstract art"
- "List all available models"
- "Show me only the GPT models"
- "What embedding models are available?"
- "Create embeddings for: 'Revolutionary new smartphone with AI features'"
- "Generate embeddings for this product description: [your text]"
- Never share your API key in public repositories or chat messages
- Use environment variables when possible instead of hardcoding keys
- Rotate keys regularly through the OpenAI dashboard
- Set usage limits in your OpenAI account to prevent unexpected charges
- Check API key: Ensure your API key is correctly set in the configuration
- Restart your client: Most MCP clients require a restart after configuration changes
- Verify installation: Check that the package is installed: `npm list -g swiftopenai-mcp`
- Check permissions: Ensure the npm global directory has proper permissions
- API key permissions: Verify your API key has the necessary permissions
- API credits: Check if you have available API credits in your OpenAI account
- Alternative providers: For non-OpenAI providers, ensure the base URL is correct
- Network issues: Check if you can reach the API endpoint from your network
Most MCP clients provide ways to view server logs. For example:
Claude Desktop logs:
- macOS: `~/Library/Logs/Claude/mcp-*.log`
- Windows: `%APPDATA%\Claude\logs\mcp-*.log`
Other clients: Check your client's documentation for log locations.
You can test if the server starts correctly:
```bash
npx swiftopenai-mcp
```
This should output the MCP initialization message.
- "Missing API key" error: Set the
API_KEY
environment variable in your configuration - "Invalid API key" error: Double-check your API key is correct and active
- Timeout errors: Some operations (like image generation) can take time; be patient
- Rate limit errors: You may be hitting your provider's rate limits; wait a bit and try again
If you want to build the server yourself:
```bash
git clone https://github.com/jamesrochabrun/SwiftOpenAIMCP.git
cd SwiftOpenAIMCP
swift build -c release
```
The binary will be at `.build/release/swiftopenai-mcp`.
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Make your changes
- Submit a pull request
MIT License - see LICENSE file for details
- Built with SwiftOpenAI
- Implements the Model Context Protocol
- Uses the MCP Swift SDK
- Issues: GitHub Issues
- Discussions: GitHub Discussions