Shoggoth Mini is a soft-bodied tentacle robot using the SpiRobs design, controlled through a mix of reinforcement learning and GPT-4o. Read the full blogpost here.
First, ensure you have Python 3.10+ and Poetry installed.
Clone the project repository and install the dependencies:
```bash
git clone https://github.com/mlecauchois/shoggoth-mini
cd shoggoth-mini
poetry install
```

Next, lerobot is a key dependency that needs to be installed from source:

```bash
git clone https://github.com/huggingface/lerobot.git
pip install -e ./lerobot
```

Finally, activate the virtual environment to run the subsequent commands:

```bash
eval "$(poetry env activate)"Find the USB port for the driver board using a script from lerobot:
python lerobot/scripts/find_motors_bus_port.py
```

Once you have the port, configure each motor (for IDs 1, 2, and 3), replacing `DRIVER_BOARD_USB_PORT` with the port you found:

```bash
python lerobot/scripts/configure_motor.py \
--port DRIVER_BOARD_USB_PORT \
--brand feetech \
--model sts3215 \
--baudrate 1000000 \
--ID 1
```

Repeat the command with `--ID 2` and `--ID 3` for the other two motors.

Calibrate the motors by adjusting with the arrow keys until the tentacle tip is straight, then press Enter to save:

```bash
python -m shoggoth_mini calibrate --config shoggoth_mini/configs/default_hardware.yaml
```

Run the main orchestrator application, which integrates all system components:

```bash
python -m shoggoth_mini orchestrate \
--config shoggoth_mini/configs/default_orchestrator.yaml \
--hardware-config shoggoth_mini/configs/default_hardware.yaml \
--perception-config shoggoth_mini/configs/default_perception.yaml \
--control-config shoggoth_mini/configs/default_control.yaml
```

For a full replication and setup of the robot, follow the steps in ASSEMBLY.md. All 3D printing assets are included in the repository. The total should cost less than $200.
Test motor connections:
```bash
python -m shoggoth_mini primitive "<yes>" --config shoggoth_mini/configs/default_hardware.yaml
```

Calibrate motors:

```bash
python -m shoggoth_mini calibrate --config shoggoth_mini/configs/default_hardware.yaml
```

Control with trackpad:

```bash
python -m shoggoth_mini trackpad --config shoggoth_mini/configs/default_hardware.yaml
```

Test idle motion with breathing pattern:

```bash
python -m shoggoth_mini idle --duration 10 \
--hardware-config shoggoth_mini/configs/default_hardware.yaml \
--control-config shoggoth_mini/configs/default_control.yaml
```

Generate MuJoCo XML model:

```bash
python -m shoggoth_mini generate-xml --output-path assets/simulation/tentacle.xml
```
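To sanity-check the generated XML before training, you can load it with the official `mujoco` Python bindings (a minimal sketch, assuming the bindings are installed and the file was written to the path above):

```python
import mujoco

# Load the generated tentacle model; this surfaces XML errors immediately.
model = mujoco.MjModel.from_xml_path("assets/simulation/tentacle.xml")
data = mujoco.MjData(model)

# Step the simulation once to confirm the model is well-formed.
mujoco.mj_step(model, data)
print(f"{model.nq} position DoFs, {model.nu} actuators")
```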
Train RL model:

```bash
python -m shoggoth_mini rl train --config shoggoth_mini/configs/default_rl_training.yaml
```

Monitor with Tensorboard:

```bash
tensorboard --logdir=./
```

Evaluate RL model in simulation:

```bash
mjpython -m shoggoth_mini rl evaluate ./results/ppo_tentacle_XXXXXXX/models/best_model.zip --config shoggoth_mini/configs/default_rl_training.yaml --num-episodes 10 --render
```
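The `best_model.zip` layout looks like a Stable-Baselines3 checkpoint; assuming that is the case (an assumption, not something stated here), a quick sketch for loading the policy outside the CLI and querying it on a sampled observation:

```python
from stable_baselines3 import PPO

# Path to your run directory; the XXXXXXX part is whatever your run was named.
model = PPO.load("./results/ppo_tentacle_XXXXXXX/models/best_model.zip")

# Query the policy on a randomly sampled observation of the correct shape.
obs = model.observation_space.sample()
action, _ = model.predict(obs, deterministic=True)
print("action:", action)
```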
Record calibration images for stereo triangulation. Tune the pause interval so you have time to change the pattern orientation:

```bash
python -m shoggoth_mini record stereo-calibration --num-pairs 20 --interval 3
```

Calculate the stereo triangulation calibration parameters from the images you just recorded by following the steps in the DeepLabCut notebook under notebooks/3d_triangulation.ipynb.
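For intuition, the triangulation itself comes down to OpenCV's `cv2.triangulatePoints`. Here is a minimal sketch with made-up projection matrices and normalized image coordinates (the real matrices come out of the calibration notebook):

```python
import cv2
import numpy as np

# Hypothetical 3x4 projection matrices: identity intrinsics, right camera 6 cm to the side.
P_left = np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = np.hstack([np.eye(3), np.array([[-0.06], [0.0], [0.0]])])

# Matching tip detections in each view, in normalized camera coordinates, shape (2, N).
pts_left = np.array([[0.10], [0.05]])
pts_right = np.array([[0.04], [0.05]])

# Triangulate to homogeneous coordinates, then divide out the scale.
points_4d = cv2.triangulatePoints(P_left, P_right, pts_left, pts_right)
xyz = (points_4d[:3] / points_4d[3]).T
print(xyz)  # roughly [[0.10, 0.05, 1.00]] in metres for this toy example
```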
Record annotation videos. This command records 6 pairs of 10-second videos (one video per camera in each pair):
```bash
python -m shoggoth_mini record annotation --duration 60 --chunk-duration 10
```

Extract representative frames using k-means:

```bash
python -m shoggoth_mini extract-frames video.mp4 output_frames/ 100
```

I used Roboflow to annotate these images; its auto-label feature is great for avoiding wasted time on high-confidence images.
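The underlying idea is standard: compute a cheap embedding per frame, cluster with k-means, and keep the frame closest to each cluster center. A rough sketch of that approach (not the repository's exact implementation), using OpenCV and scikit-learn:

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def extract_representative_frames(video_path, num_frames=100):
    """Pick num_frames visually diverse frames from a video via k-means."""
    cap = cv2.VideoCapture(video_path)
    frames, features = [], []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
        # Cheap per-frame feature: a flattened 32x32 grayscale thumbnail.
        thumb = cv2.resize(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (32, 32))
        features.append(thumb.flatten().astype(np.float32))
    cap.release()

    features = np.stack(features)
    kmeans = KMeans(n_clusters=num_frames, n_init=10).fit(features)

    # Keep the frame nearest each cluster center.
    picks = {int(np.argmin(np.linalg.norm(features - c, axis=1)))
             for c in kmeans.cluster_centers_}
    return [frames[i] for i in sorted(picks)]
```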
Generate synthetic training data (I extracted the tentacle tip using the Segment Anything demo):
```bash
# Basic usage with defaults
python -m shoggoth_mini synthetic-images assets/synthetic/objects assets/synthetic/backgrounds --num-images 1000
```
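Conceptually, the synthetic data generation is cut-and-paste compositing: place a transparent cutout of the tentacle tip onto a random background at a random scale and position, and keep the resulting box as the label. A simplified sketch of that idea (function names are illustrative, not the script's API):

```python
import random
from PIL import Image

def composite(object_path, background_path, out_size=(640, 640)):
    """Paste a transparent object cutout onto a background; return image and bbox."""
    bg = Image.open(background_path).convert("RGB").resize(out_size)
    obj = Image.open(object_path).convert("RGBA")  # cutout with alpha channel

    # Random scale and placement.
    scale = random.uniform(0.2, 0.6)
    obj = obj.resize((max(1, int(obj.width * scale)), max(1, int(obj.height * scale))))
    x = random.randint(0, max(0, out_size[0] - obj.width))
    y = random.randint(0, max(0, out_size[1] - obj.height))

    bg.paste(obj, (x, y), mask=obj)  # use the cutout's alpha as the paste mask
    bbox = (x, y, x + obj.width, y + obj.height)  # detector label in pixel coords
    return bg, bbox
```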
Train vision model on synthetic images:
```bash
python -m shoggoth_mini vision train dataset.yaml --config shoggoth_mini/configs/default_vision_training.yaml
```

Then change `base_model` in the vision training config to point to the best model checkpoint and continue training on real images.
Evaluate trained model:
```bash
python -m shoggoth_mini vision evaluate model.pt dataset.yaml --config shoggoth_mini/configs/default_vision_training.yaml
```

Infer on single image:

```bash
python -m shoggoth_mini vision predict model.pt image.jpg --output prediction.jpg --confidence 0.5 --config shoggoth_mini/configs/default_vision_training.yaml
```
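The `model.pt` / `dataset.yaml` convention suggests an Ultralytics YOLO checkpoint; assuming that is what the vision pipeline uses (an assumption on my part), inference also works directly from Python:

```python
from ultralytics import YOLO

# Load the trained checkpoint and run detection on a single image.
model = YOLO("model.pt")
results = model.predict("image.jpg", conf=0.5)

# Print each detected box as (x1, y1, x2, y2) plus its confidence.
for box in results[0].boxes:
    print(box.xyxy[0].tolist(), float(box.conf))
```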
Debug stereo vision and triangulation:

```bash
python -m shoggoth_mini debug-perception --config shoggoth_mini/configs/default_perception.yaml
```

Run the closed loop RL model and vision model on the real robot:

```bash
python -m shoggoth_mini.control.closed_loop \
--control-config shoggoth_mini/configs/default_control.yaml \
--perception-config shoggoth_mini/configs/default_perception.yaml \
--hardware-config shoggoth_mini/configs/default_hardware.yaml
```

Run the full orchestrator:

```bash
python -m shoggoth_mini orchestrate \
--config shoggoth_mini/configs/default_orchestrator.yaml \
--hardware-config shoggoth_mini/configs/default_hardware.yaml \
--perception-config shoggoth_mini/configs/default_perception.yaml \
--control-config shoggoth_mini/configs/default_control.yaml
```

- Control can sometimes cause the motors to go into infinite rolling/unrolling for unknown reasons. What works for me in that situation is to reset by unrolling the cables to their maximum and re-rolling them back. This sometimes requires opening up the robot to untangle wires. I haven't found the time to fix this; if you do, please open a PR!
- `orchestrator/orchestrator.py`, `control/closed_loop.py`, and `control/idle.py` were heavily vibe-coded and have only been lightly refactored.
- Inference using `control/closed_loop.py` leads to a tracking offset on the Y axis compared to simulation.
- Increase robustness of the GPT-4o layer or train it from scratch (e.g. Moshi-like)
- Give it a voice (but as non-human as possible!)
- Train more RL policies (e.g. grabbing and holding complex objects)
- Use direct drive motors to reduce noise
- Add more tentacles and make it crawl
- Ditch the 2D projection control to unlock more expressive policies
```bibtex
@misc{lecauchois2025shoggothmini,
author = {Le Cauchois, Matthieu B.},
title = {Shoggoth Mini: Expressive and Functional Control of a Soft Tentacle Robot},
howpublished = "\url{https://github.com/mlecauchois/shoggoth-mini}",
year = {2025}
}
```