
# Welcome to ros_llm

A ROS2 Jazzy client for an OpenAI-API-compatible LLM server, using llama.cpp as the backend implementation.
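Because the server speaks the OpenAI chat API, a request against it looks like any other OpenAI-style chat completion. The sketch below shows the shape of such a call, assuming llama.cpp's `llama-server` default endpoint; the URL and model name are illustrative, not taken from this project's configuration:

```python
import json
from urllib import request

# Assumed default address of llama.cpp's llama-server, which exposes
# the OpenAI-compatible /v1/chat/completions endpoint.
SERVER_URL = "http://localhost:8080/v1/chat/completions"

def build_chat_request(prompt, history=None, model="gemma-3-12b"):
    """Build an OpenAI-style chat completion payload.

    `history` is a list of prior {"role": ..., "content": ...} messages;
    the model name is illustrative and depends on the server's config.
    """
    messages = list(history or [])
    messages.append({"role": "user", "content": prompt})
    return {"model": model, "messages": messages}

def ask_llm(prompt, history=None):
    """POST the payload and return the assistant's reply text."""
    payload = build_chat_request(prompt, history)
    req = request.Request(
        SERVER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

The llm node wraps an exchange like this behind its ROS2 service, so callers on the ROS graph never talk HTTP directly.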

The llm node provides the /llm and /clear_history ROS2 services, which are described in the services documentation.

This documentation provides all the information you need to get started with ros_llm, from setting up your environment to using the service and contributing to the project.

```mermaid
---
config:
    theme: redux
    look: neo
---
flowchart TB
    subgraph s1["Server"]
        direction TB
        n11["Chat Node"]

        subgraph s11["Whisper STT"]
            direction TB
            n112["STT Node"] <-- HTTP --> n111["whisper.cpp"]
        end

        subgraph s12["Gemma 3 12b"]
            direction TB
            n122["LLM Node"] <-- HTTP --> n121["llama.cpp"]
        end

        subgraph s13["Orpheus TTS"]
            direction TB
            n132["TTS Node"] <-- HTTP --> n131["llama-swap"]
        end

        n11 -- ROS2 --> n112
        n11 -- ROS2 --> n122
        n11 -- ROS2 --> n132
    end
    subgraph s21["Robot"]
        direction TB
        n21["UI Com"] <-- WebSocket --> n22["Emotion"]
    end
    n11 <-- "ROS2" --> n21
```

## Requirements

## Contributing

We welcome contributions to this project! See the contributing guidelines in contributing.md at the root of this repository for how to get started.