VLA (Vision-Language-Action) model serving platform that helps you manage robot instruction prompts, RAG, and waypoint metadata.
Explore the docs »
Embodied AI is an emerging and promising research field, but it is currently difficult to break into. We aim to improve the accessibility of large multi-modal language models that convert natural-language instructions into serialized robot actions. Synapse is our first step in that direction: it provides a low-latency inference API for VLA (Vision-Language-Action) models and LLMs tailored to robotics use cases.
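To make "serialized robot actions" concrete, here is a minimal sketch of what such a payload could look like, expressed as Rust `serde` types. The `RobotAction` and `Waypoint` names, fields, and units are illustrative assumptions for this README, not Synapse's actual schema:

```rust
use serde::{Deserialize, Serialize};

/// Hypothetical shape of one serialized action emitted by a VLA model.
/// Variant and field names are assumptions, not Synapse's real schema.
#[derive(Debug, Serialize, Deserialize)]
#[serde(tag = "kind", rename_all = "snake_case")]
enum RobotAction {
    /// Move to a named waypoint.
    MoveTo { waypoint: Waypoint },
    /// Open/close the gripper to a normalized aperture in [0, 1].
    SetGripper { aperture: f32 },
    /// Pause for the given number of milliseconds.
    Wait { ms: u64 },
}

#[derive(Debug, Serialize, Deserialize)]
struct Waypoint {
    /// Human-readable label attached as waypoint metadata.
    label: String,
    /// Position in the robot's base frame, in meters.
    xyz: [f32; 3],
    /// Orientation as a unit quaternion (x, y, z, w).
    quat: [f32; 4],
}

fn main() {
    // Serialize a tiny two-step plan to JSON for illustration.
    let plan = vec![
        RobotAction::MoveTo {
            waypoint: Waypoint {
                label: "bin_drop".into(),
                xyz: [0.42, -0.10, 0.25],
                quat: [0.0, 0.0, 0.0, 1.0],
            },
        },
        RobotAction::SetGripper { aperture: 0.0 },
    ];
    println!("{}", serde_json::to_string_pretty(&plan).unwrap());
}
```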
Getting started:

1. Get your API keys and enter them in a `.env` file:

   ```sh
   COHERE_API_KEY=""
   ```

2. Build the project:

   ```sh
   cargo build
   ```

3. Run the API.
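With the server running, a request could look roughly like the sketch below. The port `8080`, the `/v1/infer` route, and the JSON fields are assumptions for illustration (the example uses the `reqwest` and `tokio` crates); consult the docs for the real API surface.

```rust
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = reqwest::Client::new();
    let resp = client
        .post("http://localhost:8080/v1/infer") // hypothetical route and port
        .json(&json!({
            // hypothetical request shape
            "instruction": "pick up the red block and place it in the bin",
        }))
        .send()
        .await?;
    // The response body would carry the serialized action plan.
    println!("{}", resp.text().await?);
    Ok(())
}
```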
Roadmap:

- Enable compatibility with OpenAI, Anthropic, HuggingFace, and custom models
- Integrate gRPC communication with the onboard robot system (see the sketch after this list)
- Enable high-throughput image and point-cloud processing for robots with LiDAR, radar, or stereo cameras
- Add a customizable evaluation pipeline and model decay detection
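As a rough illustration of the planned gRPC integration, the onboard link could be modeled like the trait below. This is a hand-written sketch with assumed names (`OnboardLink`, `send_action`, `poll_status`); the actual integration would generate client/server stubs from a protobuf service definition (e.g. with `tonic`).

```rust
use std::error::Error;

// Hypothetical sketch only: Synapse's onboard interface is not yet
// published. Method names and signatures here are assumptions.
#[allow(async_fn_in_trait)] // native async-in-trait, stable since Rust 1.75
trait OnboardLink {
    /// Push one serialized action to the robot's onboard controller.
    async fn send_action(&mut self, action_json: &str) -> Result<(), Box<dyn Error>>;
    /// Poll execution status / telemetry back from the controller.
    async fn poll_status(&mut self) -> Result<String, Box<dyn Error>>;
}

fn main() {} // interface sketch only; no runtime behavior
```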