Intelligent agentic coder with GraphRAG engine, MCP integration, and multi-LLM support
- Multi-LLM Support: OpenAI, Anthropic, and local LM Studio integration
- GraphRAG Engine: Intelligent codebase understanding and memory
- MCP Integration: Model Context Protocol for tool execution
- Streaming Responses: Real-time AI responses
- Multi-format Tool Parsing: Custom, XML, and JSON tool call formats
- Dynamic System Prompts: Auto-discovery of available tools
- Filesystem Tools: Read, write, and search files and directories
npm install -g openagent-cli
- Initialize configuration:
openagent config --init
- Start the interactive UI:
openagent
# or
openagent ui
- Configure your LLM provider in `config.json`:
{
"providers": [
{
"name": "local",
"type": "openai",
"baseURL": "http://localhost:1234/v1",
"apiKey": "lm-studio",
"defaultModel": "your-model-name"
}
]
}
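
If you point the `local` provider at LM Studio (or any other OpenAI-compatible server), it helps to confirm the endpoint is reachable before launching OpenAgent. Below is a minimal sketch, assuming the server from the config above is listening on `http://localhost:1234`; `/v1/models` is the standard OpenAI-compatible route for listing loaded models.

```ts
// Sanity check: list the models exposed by the local OpenAI-compatible endpoint.
// Assumes LM Studio's default port 1234; adjust baseURL to match your config.json.
const baseURL = "http://localhost:1234/v1";

async function main(): Promise<void> {
  // Local servers such as LM Studio typically accept any placeholder API key.
  const res = await fetch(`${baseURL}/models`, {
    headers: { Authorization: "Bearer lm-studio" },
  });
  if (!res.ok) {
    throw new Error(`Endpoint responded with ${res.status} ${res.statusText}`);
  }
  const body = (await res.json()) as { data: Array<{ id: string }> };
  // Whatever id is printed here is what goes into "defaultModel" above.
  console.log("Available models:", body.data.map((m) => m.id).join(", "));
}

main().catch((err) => {
  console.error("Local endpoint not reachable:", err);
  process.exit(1);
});
```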
A fuller `config.json` can define multiple providers, MCP servers, and agents:
{
"providers": [
{
"name": "openai",
"type": "openai",
"apiKey": "your-openai-key",
"defaultModel": "gpt-4"
},
{
"name": "local",
"type": "openai",
"baseURL": "http://localhost:1234/v1",
"apiKey": "lm-studio",
"defaultModel": "deepseek/deepseek-r1-0528-qwen3-8b"
}
],
"mcpServers": [
{
"name": "fs",
"command": "npx",
"args": ["@modelcontextprotocol/server-filesystem", "."]
}
],
"agents": [
{
"id": "assistant",
"provider": "local",
"mcpTools": ["fs"],
"system": "You are an intelligent AI assistant..."
}
]
}
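
Every provider entry with `"type": "openai"` describes an OpenAI-compatible endpoint, which is why the hosted OpenAI API and a local LM Studio server share the same shape and differ only in `baseURL`. The sketch below is illustrative rather than OpenAgent's internal code; it assumes the official `openai` npm package and shows how one of the entries above maps onto a streaming chat request.

```ts
import OpenAI from "openai";

// Shape of one entry in the "providers" array above (field names taken from the example config).
interface ProviderConfig {
  name: string;
  type: "openai";
  baseURL?: string; // omit for the hosted OpenAI API
  apiKey: string;
  defaultModel: string;
}

// Build an OpenAI-compatible client from a provider entry.
function clientFor(provider: ProviderConfig): OpenAI {
  return new OpenAI({ apiKey: provider.apiKey, baseURL: provider.baseURL });
}

// Stream a chat completion from whichever provider is selected.
async function streamChat(provider: ProviderConfig, prompt: string): Promise<void> {
  const stream = await clientFor(provider).chat.completions.create({
    model: provider.defaultModel,
    messages: [{ role: "user", content: prompt }],
    stream: true,
  });
  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
  }
}

// Example: the "local" provider from the config above.
streamChat(
  {
    name: "local",
    type: "openai",
    baseURL: "http://localhost:1234/v1",
    apiKey: "lm-studio",
    defaultModel: "deepseek/deepseek-r1-0528-qwen3-8b",
  },
  "Say hello in one sentence.",
).catch(console.error);
```

An Anthropic entry would use Anthropic's SDK instead, but the selection logic is the same: each agent's `provider` field in the `agents` section picks which entry to use.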
openagent # Start interactive UI
openagent ui # Same as above
openagent ui -c /path/to/config.json
openagent index /path/to/codebase
openagent index . --languages typescript,python --parallel 4
openagent query "how does authentication work?"
openagent query "find all API endpoints" --limit 20
openagent server --port 3001 --websocket
openagent config --init # Create default config
openagent config --validate # Validate config file
openagent config --show # Show current config
OpenAgent supports multiple tool call formats:
Custom format:
[TOOL_REQUEST]
{"name": "write_file", "arguments": {"path": "hello.txt", "contents": "Hello World"}}
[END_TOOL_REQUEST]
XML format:
<tool_call name="read_file" args='{"path": "hello.txt"}'></tool_call>
JSON format:
{"tool": "list_directory", "args": {"path": "."}}
Filesystem tools:
- `write_file` - Create or overwrite files
- `read_file` - Read file contents
- `list_directory` - List files and folders
- `create_directory` - Create directories
- `move_file` - Move/rename files
- `search_files` - Search for files matching patterns
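
For example, asking the agent to locate TypeScript sources could produce a call to `search_files` in the custom format shown earlier. The argument names `path` and `pattern` are an assumption about the MCP filesystem server's schema; check the server's own tool listing for the exact fields.

```
[TOOL_REQUEST]
{"name": "search_files", "arguments": {"path": ".", "pattern": ".ts"}}
[END_TOOL_REQUEST]
```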
Utility tools:
- `echo` - Echo a message
- `timestamp` - Get current timestamp
- `random` - Generate random numbers
- `math` - Perform calculations
# Clone repository
git clone https://github.com/yourusername/openagent.git
cd openagent
# Install dependencies
npm install
# Start development UI
npm run dev
# Build
npm run build
# Run tests
npm test
src/
├── cli.ts          # Command line interface
├── ui.tsx          # Interactive UI component
├── main.ts         # Core application logic
├── config.ts       # Configuration management
├── providers.ts    # LLM provider integrations
├── mcp.ts          # MCP server management
├── tools/          # Tool system
│   ├── tool-parser.ts            # Multi-format tool parsing
│   ├── tool-executor.ts          # Tool execution engine
│   └── system-prompt-builder.ts  # Dynamic prompt generation
└── ...
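
The files under `src/tools/` cooperate in a loop: the system-prompt builder advertises whatever tools were discovered, the parser pulls tool calls out of the model's reply, and the executor runs them and returns the results for the next turn. The sketch below is hypothetical; none of the interfaces are taken from the actual source, and the `echo` argument name is made up, but it shows how the pieces relate.

```ts
// Hypothetical glue code illustrating how src/tools/ fits together; not the project's real interfaces.
interface ToolDefinition {
  name: string;
  description: string;
  run: (args: Record<string, unknown>) => Promise<string>;
}

interface ToolCall {
  name: string;
  args: Record<string, unknown>;
}

// system-prompt-builder.ts: advertise the discovered tools to the model.
function buildSystemPrompt(tools: ToolDefinition[]): string {
  const list = tools.map((t) => `- ${t.name}: ${t.description}`).join("\n");
  return `You can call the following tools:\n${list}`;
}

// tool-executor.ts: look up each parsed call (see the parser sketch above) and run it.
async function executeToolCalls(tools: ToolDefinition[], calls: ToolCall[]): Promise<string[]> {
  const byName = new Map(tools.map((t) => [t.name, t] as const));
  const results: string[] = [];
  for (const call of calls) {
    const tool = byName.get(call.name);
    results.push(tool ? await tool.run(call.args) : `Unknown tool: ${call.name}`);
  }
  return results;
}

// Example wiring with a stand-in echo tool.
const echoTool: ToolDefinition = {
  name: "echo",
  description: "Echo a message",
  run: async (args) => String(args.message ?? ""),
};

console.log(buildSystemPrompt([echoTool]));
executeToolCalls([echoTool], [{ name: "echo", args: { message: "hello" } }])
  .then((results) => console.log(results)); // logs ["hello"]
```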
- Multi-LLM provider support
- MCP server integration
- Streaming responses
- Multi-format tool parsing
- Dynamic system prompts
- PostgreSQL vector database
- Codebase indexing system
- Semantic search capabilities
- Knowledge graph construction
- HTTP streaming MCP server
- Code pattern learning
- Automated refactoring suggestions
- Multi-repository support
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
MIT License - see LICENSE file for details.
- Report Issues
- Discussions
- Documentation
Made with ❤️ by the OpenAgent community