A community-driven platform for robotics enthusiasts to share and explore creative projects built with the phospho Junior Dev Kit.
This repository contains demo code and community projects developed using the phospho Junior Dev Kit. Whether you're a beginner or an experienced developer, you can explore existing projects or contribute your own creations.
- Get Your Dev Kit: Purchase your phospho Junior Dev Kit at robots.phospho.ai. Unbox it and set it up following the instructions in the box.
- Control Your Robot: Download the Meta Quest app, connect it to your robot, and start teleoperating it.
- Record a Dataset: Record a dataset using the app. Repeat the same gesture 30-50 times (depending on the task complexity) to create a dataset.
- Install the Package:

  ```bash
  pip install --upgrade phosphobot
  ```

- Train a Model: Use LeRobot to train a policy on the dataset you just recorded.
  ```bash
  git clone https://github.com/huggingface/lerobot.git
  cd lerobot
  pip install -e .
  ```

  Add the configs/policy/act_so100_phosphobot.yaml file from this repository to the lerobot/configs/policy directory in the lerobot repository.
  Launch the training script with the following command from the lerobot repository (set the device to cuda if you have an NVIDIA GPU, mps if you use a Mac with Apple Silicon, and cpu otherwise):
  ```bash
  sudo python lerobot/scripts/train.py \
    --dataset.repo_id=<HF_USERNAME>/<DATASET_NAME> \
    --policy.type=<act or diffusion or tdmpc or vqbet> \
    --output_dir=outputs/train/phosphobot_test \
    --job_name=phosphobot_test \
    --device=cpu \
    --wandb.enable=true
  ```

- Use a Model to Control Your Robot: Launch an inference server (see the documentation for how to serve a policy).
  Launch the phosphobot server:

  ```bash
  phosphobot run
  ```

  Then run the following script to control your robot using the model:
  ```python
  from phosphobot.camera import AllCameras
  from phosphobot.api.client import PhosphoApi
  from phosphobot.am import ACT
  import time
  import numpy as np

  # Connect to the phosphobot server
  client = PhosphoApi(base_url="http://localhost:80")

  # Get a camera frame
  allcameras = AllCameras()

  # Need to wait for the cameras to initialize
  time.sleep(1)

  # Instantiate the model
  model = ACT()

  while True:
      images = [
          allcameras.get_rgb_frame(camera_id=0, resize=(240, 320)),
          allcameras.get_rgb_frame(camera_id=1, resize=(240, 320)),
          allcameras.get_rgb_frame(camera_id=2, resize=(240, 320)),
      ]

      # Get the robot state
      state = client.control.read_joints()

      inputs = {"state": np.array(state.angles_rad), "images": np.array(images)}

      # Go through the model
      actions = model(inputs)

      for action in actions:
          # Send the new joint position to the robot
          client.control.write_joints(angles=action.tolist())
          # Wait to respect frequency control (30 Hz)
          time.sleep(1 / 30)
  ```

  For full, detailed instructions and other models (Pi0, OpenVLA, ...), refer to the docs.
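One caveat with the control loop above: a fixed `time.sleep(1 / 30)` drifts below the 30 Hz target, because it ignores the time spent on inference and network I/O. A minimal sketch of drift-free pacing using absolute deadlines (the `paced` helper is illustrative and not part of phosphobot; the 30 Hz rate comes from the script above):

```python
import time

def paced(frequency_hz: float):
    """Yield forever at a fixed rate, compensating for per-iteration work time.

    Tracks absolute deadlines so slow iterations do not accumulate drift.
    """
    period = 1.0 / frequency_hz
    next_tick = time.perf_counter()
    while True:
        yield
        next_tick += period
        delay = next_tick - time.perf_counter()
        if delay > 0:
            time.sleep(delay)
        else:
            # We fell behind; resynchronize instead of bursting to catch up.
            next_tick = time.perf_counter()

# Example: each loop iteration (e.g. one write_joints call) runs per tick.
ticks = paced(30)
start = time.perf_counter()
for _ in range(30):
    next(ticks)
elapsed = time.perf_counter() - start
print(f"30 ticks took {elapsed:.2f} s")  # roughly 1 s
```

In the control script, the `for action in actions:` body would replace the placeholder loop, with `next(ticks)` taking the place of the bare `time.sleep(1 / 30)`.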
Connect with other developers and share your experience in our Discord community
Explore projects created by our community members in the code_examples directory. Each project includes its own documentation and setup instructions.
- Documentation: Read the documentation
- Community Support: Join our Discord server
- Issues: Submit problems or suggestions through GitHub Issues
MIT License
Made with 💚 by the Phospho community