This repository contains a benchmark environment based on the rope flatten SoftGym environment. At the current stage, the included environment allows sliding along a deformable linear object (DLO) with a two-finger gripper. An adapted Soft Actor-Critic benchmark agent for this environment can be found in Agent for SoftSlidingGym.
The original SoftGym is a set of benchmark environments for deformable object manipulation, including tasks involving fluid, cloth and rope. It is built on top of the Nvidia FleX simulator and has a standard Gym API for interaction with RL agents. A number of RL algorithms benchmarked on SoftGym can be found in SoftAgent.
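Because the environments follow the standard Gym API, an agent interacts with them through the usual `reset`/`step` loop. The sketch below illustrates that interaction pattern against a stand-in toy environment (the class and its observation/reward values are purely illustrative; a real environment is created through SoftGym itself):

```python
import numpy as np

class ToyRopeEnv:
    """Stand-in with the same reset/step interface that SoftGym exposes;
    a real environment would come from the SoftGym registry instead."""
    def __init__(self, horizon=5):
        self.horizon = horizon
        self._t = 0

    def reset(self):
        self._t = 0
        return np.zeros(4)              # placeholder observation

    def step(self, action):
        self._t += 1
        obs = np.full(4, float(self._t))
        reward = -float(np.linalg.norm(action))  # toy reward, not SoftGym's
        done = self._t >= self.horizon
        return obs, reward, done, {}

env = ToyRopeEnv()
obs = env.reset()
total_reward, done = 0.0, False
while not done:
    action = np.zeros(2)                # a policy or random action in practice
    obs, reward, done, info = env.step(action)
    total_reward += reward
```

Any agent written against this interface (such as the SAC benchmark mentioned above) can be swapped onto the real environments without changing the loop.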
If you are using Ubuntu 16.04 LTS and CUDA 9.2, you can follow the steps in the next section on this page for compilation. For other versions of Ubuntu or CUDA, the SoftGym authors provide a pre-built Docker image and Dockerfile for running SoftGym. Please refer to the Docker page.
Additional information about installation using Docker can be found on Daniel Seita's blog: https://danieltakeshi.github.io/2021/02/20/softgym/.
- This codebase is tested with Ubuntu 16.04 LTS, CUDA 9.2 and Nvidia driver version 440.64. Other versions might work but are not guaranteed, especially with a different driver version. Please use Docker for other versions.
The following command will install some necessary dependencies.
sudo apt-get install build-essential libgl1-mesa-dev freeglut3-dev libglfw3 libgles2-mesa-dev
Create a conda environment and activate it:
conda env create -f environment.yml
Compile PyFleX: Go to the root folder of softgym and run `. ./prepare_1.0.sh`. After that, compile PyFleX with CMake & Pybind11 by running `. ./compile_1.0.sh`. Please see the example test scripts and the bottom of `bindings/pyflex.cpp` for available APIs.
The agent is supposed to slide along the rope, using an appropriate grasping force, all the way to its tail end. The beginning of the rope is firmly attached to a fixed point in space in the simulator. The goal behaviour is achieved when the gripper follows the rope and holds it close to its tail end.
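The goal behaviour above can be pictured as a shaping signal that favours being near the rope's tail while grasping with a workable force. The sketch below is a hypothetical illustration of such a signal; the function name, force band, and penalty are assumptions for exposition, not the environment's actual reward (which is defined in the environment source):

```python
import numpy as np

def sliding_reward(gripper_pos, tail_pos, grasp_force,
                   force_low=0.1, force_high=1.0):
    """Hypothetical shaping: reward proximity to the rope's tail end while
    keeping the grasping force inside a band that lets the rope slide.
    All constants here are illustrative, not the repository's."""
    distance = np.linalg.norm(np.asarray(gripper_pos) - np.asarray(tail_pos))
    force_ok = force_low <= grasp_force <= force_high
    # Penalise squeezing too hard (rope stuck) or too lightly (rope dropped).
    return -distance - (0.0 if force_ok else 1.0)

# Being closer to the tail with a sensible grasp scores higher:
near = sliding_reward([0.0, 0.0, 0.1], [0.0, 0.0, 0.0], grasp_force=0.5)
far  = sliding_reward([0.0, 0.0, 1.0], [0.0, 0.0, 0.0], grasp_force=0.5)
```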
The environment presented in this repository allowed us to test Reinforcement Learning agents using different sensing modalities and to investigate how their behaviour can be boosted using visual-tactile fusion. We also performed ablation studies to assess the influence of particular input signals.
More details about experiments performed can be found in the paper.
Depending on the type of inputs and the training length, the observed outcome behaviour can be categorised as follows:
To have a quick view of a task (with random actions), run the following commands:
- RopeFollow:
python examples/random_env.py --env_name RopeFollow
To have a quick check of the behaviour and manually control the agent, run the following script in a Python interpreter:
examples/random_env_manual_check.py
Turn on the --headless option if you are running on a cluster machine that does not have a display environment; otherwise you will get segmentation faults. Please refer to softgym/registered_env.py for the default parameters and the source code files for each of these environments.
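A launcher script like the one above typically exposes these options through `argparse`. The sketch below mirrors how --env_name and --headless might be parsed; the real flags and defaults live in the example scripts themselves, so treat these names as assumptions:

```python
import argparse

# Hypothetical flag parsing for a launcher such as examples/random_env.py;
# the script's actual arguments are defined in its own source.
parser = argparse.ArgumentParser()
parser.add_argument('--env_name', default='RopeFollow',
                    help='environment key from softgym/registered_env.py')
parser.add_argument('--headless', action='store_true',
                    help='disable rendering on machines without a display')

# Simulate a command line: python random_env.py --env_name RopeFollow --headless
args = parser.parse_args(['--env_name', 'RopeFollow', '--headless'])
```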
If you find this codebase useful in your research, please consider citing:
@article{Pecyna2022VisualTactileMF,
title={Visual-Tactile Multimodality for Following Deformable Linear Objects Using Reinforcement Learning},
author={Leszek Pecyna and Siyuan Dong and Shan Luo},
journal={arXiv preprint arXiv:2204.00117},
year={2022}
}
- NVIDIA FleX - 1.2.0: https://github.com/NVIDIAGameWorks/FleX
- The Python interface builds on top of PyFleX: https://github.com/YunzhuLi/PyFleX
- SoftGym repository: https://github.com/Xingyu-Lin/softgym
- If you run into problems setting up SoftGym, Daniel Seita wrote a nice blog post that may help you get started on SoftGym: https://danieltakeshi.github.io/2021/02/20/softgym/