Synapse - VLA Inference

VLA (Vision-Language-Action) model serving platform that helps you manage robot instruction prompts, RAG, and waypoint metadata.
Explore the docs »

About The Project

Embodied AI is an emerging and promising research field, but it is currently difficult to break into. We aim to improve the accessibility of large multimodal language models that convert natural language instructions into serialized robot actions. Synapse is our first step toward this goal: a low-latency inference API for VLA (Vision-Language-Action) models and LLMs tailored to robotics use cases.
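
To make "serialized robot actions" concrete, here is a minimal sketch of what such an output could look like, written as a Rust schema with serde. The RobotAction type, its variants, and the example instruction are hypothetical illustrations, not Synapse's actual output format; only the idea of waypoint-based actions comes from the project description.

use serde::{Deserialize, Serialize};

// Hypothetical action schema; illustrative only, not Synapse's real format.
#[derive(Serialize, Deserialize, Debug)]
#[serde(tag = "action", rename_all = "snake_case")]
enum RobotAction {
    // Navigate to a named waypoint from the location metadata store.
    GoTo { waypoint: String },
    // Pause in place for a number of seconds.
    Wait { seconds: f32 },
}

fn main() -> serde_json::Result<()> {
    // "Go to the loading dock, then wait ten seconds" might serialize as:
    let plan = vec![
        RobotAction::GoTo { waypoint: "loading_dock".into() },
        RobotAction::Wait { seconds: 10.0 },
    ];
    println!("{}", serde_json::to_string_pretty(&plan)?);
    Ok(())
}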

(back to top)

Getting Started

Prerequisites

Get API Keys

Create a .env file containing your Cohere API key:

COHERE_API_KEY=""
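
The server reads this key from its environment at startup. Below is a minimal sketch of how that lookup might work in Rust; the dotenvy crate is a common way to load .env files, but whether Synapse actually uses it is an assumption.

use std::env;

fn main() {
    // Load variables from a local .env file into the process environment, if present
    // (dotenvy crate; assumed here, not confirmed to be what Synapse uses).
    dotenvy::dotenv().ok();

    // Fail fast so a missing key surfaces at startup rather than at request time.
    let api_key = env::var("COHERE_API_KEY")
        .expect("COHERE_API_KEY must be set in the environment or .env file");
    println!("Cohere API key loaded ({} characters)", api_key.len());
}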

Installation

cargo build
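
By default, cargo build produces an unoptimized debug binary. For serving traffic you would typically build in release mode; this is standard Cargo behavior, not anything Synapse-specific:

cargo build --release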

API and Swagger Docs

  1. Run the API (e.g. with cargo run for a local build)

  2. Visit http://localhost:8080/docs
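
Once the server is running, any HTTP client can exercise the API. The sketch below uses the reqwest and serde_json crates from an async Rust program; the /v1/generate route and the request body shape are hypothetical placeholders, so consult the Swagger docs at http://localhost:8080/docs for the actual endpoints and schemas.

use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let client = reqwest::Client::new();

    // Send a natural language instruction to the (assumed) inference endpoint.
    let resp = client
        .post("http://localhost:8080/v1/generate")
        .json(&json!({
            "prompt": "Go to the charging station, then return to the lobby."
        }))
        .send()
        .await?;

    println!("status: {}", resp.status());
    println!("body: {}", resp.text().await?);
    Ok(())
}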

(back to top)

Roadmap

  • Enable compatibility with OpenAI, Anthropic, HuggingFace, and custom models
  • Integrate gRPC communication with the onboard robot system
  • Enable high-throughput image and point-cloud processing for robots with LiDAR, radar, or stereo cameras
  • Add a customizable evaluation pipeline and model decay detection

(back to top)