Website • Documentation • Blog • Get API Key
A modern voice-controlled AI interface powered by Google Gemini and Anthropic's MCP (Model Context Protocol). Transform how you interact with AI through natural speech and multimodal inputs.
⚠️ Important Note: This open-source project is in early-access development. It is not yet compatible with Safari, but it has been tested in Chrome on Linux, Windows, and macOS. If you run into any problems, please let us know on Discord or GitHub.
If you find this project useful, please consider:
- ⭐ Starring it on GitHub
- 🔄 Sharing it with others
- 💬 Joining our Discord community
A modern Vite + TypeScript application that enables voice-controlled AI workflows through MCP (Model Context Protocol). This project transforms how you interact with AI systems by combining Google Gemini's multimodal capabilities with MCP's extensible tooling system.
Transform your AI interactions with a powerful voice-first interface that combines:
| Feature | Description |
|---|---|
| 🗣️ Multimodal AI | Understand and process text, voice, and visual inputs naturally |
| 🛠️ MCP (Model Context Protocol) | Execute complex AI workflows with a robust tooling system |
| 🎙️ Voice-First Design | Control everything through natural speech, making AI interaction more intuitive |
Perfect for: Developers building voice-controlled AI applications and looking for innovative ways to use multimodal AI.
- Natural Voice Control: Speak naturally to control AI workflows and execute commands
- Multimodal Understanding: Process text, voice, and visual inputs simultaneously
- Real-time Voice Synthesis: Get instant audio responses from your AI interactions
- Extensible Tool System: Add custom tools and workflows through MCP (see the sketch after this list)
- Workflow Automation: Chain multiple AI operations with voice commands
- State Management: Robust handling of complex, multi-step AI interactions
- Modern Tech Stack: Built with Vite, React, TypeScript, and NextUI
- Type Safety: Full TypeScript support with comprehensive type definitions
- Hot Module Replacement: Fast development with instant feedback
- Comprehensive Testing: Built-in testing infrastructure with high coverage
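To give a feel for how tools enter the picture: MCP tools are implemented by servers that the client connects to. The sketch below is a minimal, hypothetical example using the official TypeScript SDK (`@modelcontextprotocol/sdk`), not code from this repository, and exact class and method names vary between SDK versions.

```typescript
// Hypothetical MCP server exposing one custom tool a voice-driven client could call.
// Assumes @modelcontextprotocol/sdk and zod are installed; not part of this repository.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "notes", version: "0.1.0" });

// Register a "create-note" tool with a typed input schema.
server.tool("create-note", { text: z.string() }, async ({ text }) => ({
  content: [{ type: "text", text: `Saved note: ${text}` }],
}));

// Serve the tool over stdio so an MCP client can launch and call it.
await server.connect(new StdioServerTransport());
```

Once a server like this is listed in the client's MCP configuration, its tools become available to voice-driven workflows.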
- Node.js 16.x or higher
- npm 7.x or higher
- A modern browser with Web Speech API support
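Voice input and output depend on the browser's Web Speech API, which is the main reason Safari is not yet supported. A minimal feature check, written as illustrative TypeScript rather than code from this repository, could look like this:

```typescript
// Illustrative check for Web Speech API support; not part of this repository.
function supportsWebSpeech(): boolean {
  // Speech recognition is still vendor-prefixed in Chromium-based browsers.
  const hasRecognition =
    "SpeechRecognition" in window || "webkitSpeechRecognition" in window;
  // speechSynthesis is used for spoken responses.
  return hasRecognition && "speechSynthesis" in window;
}

if (!supportsWebSpeech()) {
  console.warn(
    "Web Speech API not available - use Chrome on Linux, Windows, or macOS."
  );
}
```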
- Clone the repository

  ```bash
  git clone https://github.com/Ejb503/multimodal-mcp-client.git
  cd multimodal-mcp-client
  ```
- Install dependencies

  ```bash
  npm install
  cd proxy
  npm install
  ```
- Configure the application

  ```bash
  # Navigate to config directory
  cd config

  # Create local configuration files
  cp mcp.config.example.json mcp.config.custom.json
  ```

  Required API keys:

  - Google AI Studio - Gemini API key
  - systemprompt.io/console - Systemprompt API key

  Add keys to `.env` (see `.env.example` for reference); an illustrative sketch follows this list.
- Start the development server

  ```bash
  npm run dev
  ```

  Access the development server at `http://localhost:5173`.
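For orientation only, the `.env` entries will look something like the sketch below. The variable names here are hypothetical placeholders; the authoritative names are defined in `.env.example`.

```bash
# Hypothetical .env sketch - the real variable names live in .env.example
GEMINI_API_KEY=your-google-ai-studio-key       # placeholder name, not confirmed by the repo
SYSTEMPROMPT_API_KEY=your-systemprompt-key     # placeholder name, not confirmed by the repo
```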
| Resource | Link |
|---|---|
| 💬 Discord | Join our community |
| 🐛 Issues | GitHub Issues |
| 📚 Docs | Documentation |
This project is licensed under the MIT License - see the LICENSE file for details.
We're actively working on expanding the capabilities of Systemprompt MCP Client with exciting new features and extensions. Stay tuned for updates!