Example gallery#

Contents

  • Data collection and processing
  • Reinforcement learning class
  • Machine learning class
  • Planning and control methods
  • Robolab
  • Visualab
  • Simulator

Below is a gallery of examples.

1.  Data collection and processing#

The following examples show how to collect multimodal demonstration data.

  • BoB Visualize
  • Delsys EMG Export
  • Delsys EMG Record
  • Multimodal Record
  • Multimodal data fusion
  • Optitrack Export
  • Optitrack Record
  • Optitrack Visualize
  • Xsens Export
  • Xsens Record
  • Xsens Visualize
  • ZED Export
  • ZED Record
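
Most of these examples follow the same record, export, and visualize flow. The sketch below shows that flow for one device; `rofunc.devices.xsens` is a real module (see the API Reference), but the function names and arguments are illustrative assumptions only, so check the module docs for the actual signatures.

import rofunc as rf

# Minimal sketch of the record -> export -> visualize flow for one device.
# Assumption: device modules are re-exported as rf.xsens; `export` and
# `visualize` are illustrative names, not confirmed API.
mvnx_file = "./data/demo.mvnx"   # hypothetical path to a recorded Xsens take
rf.xsens.export(mvnx_file)       # assumed: export per-segment data to files
rf.xsens.visualize(mvnx_file)    # assumed: plot the exported segment trajectories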

2.  Reinforcement learning class#

The following are examples of reinforcement learning methods for robot learning.

# Ant
# Training
python examples/learning_rl/IsaacGym_RofuncRL/example_Ant_RofuncRL.py --agent=[ppo|a2c|td3|sac]
# Inference with pre-trained model in model zoo
python examples/learning_rl/IsaacGym_RofuncRL/example_Ant_RofuncRL.py --agent=ppo --inference

# CURICabinet
# Training
python examples/learning_rl/IsaacGym_RofuncRL/example_CURICabinet_RofuncRL.py --agent=ppo
# Inference with pre-trained model in model zoo
python examples/learning_rl/IsaacGym_RofuncRL/example_CURICabinet_RofuncRL.py --agent=ppo --inference

# CURIQbSoftHandSynergyGrasp
# Available objects: Hammer, Spatula, Large_Clamp, Mug, Power_Drill, Knife, Scissors, Large_Marker, Phillips_Screw_Driver
# Training
python examples/learning_rl/IsaacGym_RofuncRL/example_DexterousHands_RofuncRL.py --task=CURIQbSoftHandSynergyGrasp --agent=ppo --objects=Hammer
# Inference with pre-trained model in model zoo
python examples/learning_rl/IsaacGym_RofuncRL/example_DexterousHands_RofuncRL.py --task=CURIQbSoftHandSynergyGrasp --agent=ppo --inference --objects=Hammer

# FrankaCabinet
# Training
python examples/learning_rl/IsaacGym_RofuncRL/example_FrankaCabinet_RofuncRL.py --agent=ppo
# Inference with pre-trained model in model zoo
python examples/learning_rl/IsaacGym_RofuncRL/example_FrankaCabinet_RofuncRL.py --agent=ppo --inference

# FrankaCubeStack
# Training
python examples/learning_rl/IsaacGym_RofuncRL/example_FrankaCubeStack_RofuncRL.py --agent=ppo

# Humanoid
# Training
python examples/learning_rl/IsaacGym_RofuncRL/example_Humanoid_RofuncRL.py --agent=ppo
# Inference with pre-trained model in model zoo
python examples/learning_rl/IsaacGym_RofuncRL/example_Humanoid_RofuncRL.py --agent=ppo --inference

# HumanoidAMP
# Training (tasks: backflip, walk, run, dance, hop)
python examples/learning_rl/IsaacGym_RofuncRL/example_HumanoidAMP_RofuncRL.py --task=HumanoidAMP_backflip --agent=amp
python examples/learning_rl/IsaacGym_RofuncRL/example_HumanoidAMP_RofuncRL.py --task=HumanoidAMP_walk --agent=amp
python examples/learning_rl/IsaacGym_RofuncRL/example_HumanoidAMP_RofuncRL.py --task=HumanoidAMP_run --agent=amp
python examples/learning_rl/IsaacGym_RofuncRL/example_HumanoidAMP_RofuncRL.py --task=HumanoidAMP_dance --agent=amp
python examples/learning_rl/IsaacGym_RofuncRL/example_HumanoidAMP_RofuncRL.py --task=HumanoidAMP_hop --agent=amp
# Inference with pre-trained model in model zoo: add --inference to any of the above, e.g.
python examples/learning_rl/IsaacGym_RofuncRL/example_HumanoidAMP_RofuncRL.py --task=HumanoidAMP_backflip --agent=amp --inference

# HumanoidASE
# Training (tasks: getup, getup with perturbation, heading, reach, location, strike)
python examples/learning_rl/IsaacGym_RofuncRL/example_HumanoidASE_RofuncRL.py --task=HumanoidASEGetupSwordShield --agent=ase
python examples/learning_rl/IsaacGym_RofuncRL/example_HumanoidASE_RofuncRL.py --task=HumanoidASEPerturbSwordShield --agent=ase
python examples/learning_rl/IsaacGym_RofuncRL/example_HumanoidASE_RofuncRL.py --task=HumanoidASEHeadingSwordShield --agent=ase
python examples/learning_rl/IsaacGym_RofuncRL/example_HumanoidASE_RofuncRL.py --task=HumanoidASEReachSwordShield --agent=ase
python examples/learning_rl/IsaacGym_RofuncRL/example_HumanoidASE_RofuncRL.py --task=HumanoidASELocationSwordShield --agent=ase
python examples/learning_rl/IsaacGym_RofuncRL/example_HumanoidASE_RofuncRL.py --task=HumanoidASEStrikeSwordShield --agent=ase
# Inference with pre-trained model in model zoo: add --inference to any of the above, e.g.
python examples/learning_rl/IsaacGym_RofuncRL/example_HumanoidASE_RofuncRL.py --task=HumanoidASEGetupSwordShield --agent=ase --inference

# BiShadowHand
# Available tasks: BiShadowHandOver, BiShadowHandBlockStack, BiShadowHandBottleCap, BiShadowHandCatchAbreast,
#                  BiShadowHandCatchOver2Underarm, BiShadowHandCatchUnderarm, BiShadowHandDoorOpenInward,
#                  BiShadowHandDoorOpenOutward, BiShadowHandDoorCloseInward, BiShadowHandDoorCloseOutward,
#                  BiShadowHandGraspAndPlace, BiShadowHandLiftUnderarm, BiShadowHandPen, BiShadowHandPointCloud,
#                  BiShadowHandPushBlock, BiShadowHandReOrientation, BiShadowHandScissors, BiShadowHandSwingCup,
#                  BiShadowHandSwitch, BiShadowHandTwoCatchUnderarm
# Training
python examples/learning_rl/IsaacGym_RofuncRL/example_DexterousHands_RofuncRL.py --task=BiShadowHandOver --agent=ppo
# Inference with pre-trained model in model zoo
python examples/learning_rl/IsaacGym_RofuncRL/example_DexterousHands_RofuncRL.py --task=BiShadowHandOver --agent=ppo --inference
Task Overview#

Tasks | Animation | Performance | Model Zoo
----- | --------- | ----------- | ---------
Ant | Ant-gif |  | ✅
Cartpole |  |  | ✅
FrankaCabinet | FrC-gif |  | ✅
FrankaCubeStack |  |  | ✅
CURICabinet | CUC-gif |  | ✅
CURICabinet Image | CCI-gif |  | ✅
CURICabinet Bimanual |  |  | ✅
CURIQbSoftHand SynergyGrasp | CSG-gif1 CSG-gif2 CSG-gif3 CSG-gif4 CSG-gif5 CSG-gif6 CSG-gif7 CSG-gif8 |  | ✅
Humanoid | Hod-gif |  | ✅
HumanoidAMP Backflip | HAB-gif |  | ✅
HumanoidAMP Walk |  |  |
HumanoidAMP Run | HAR-gif |  | ✅
HumanoidAMP Dance | HAD-gif |  | ✅
HumanoidAMP Hop | HAH-gif |  | ✅
HumanoidASE GetupSwordShield | HEG-gif |  | ✅
HumanoidASE PerturbSwordShield | HEP-gif |  | ✅
HumanoidASE HeadingSwordShield | HEH-gif |  | ✅
HumanoidASE ReachSwordShield |  |  |
HumanoidASE LocationSwordShield | HEL-gif |  | ✅
HumanoidASE StrikeSwordShield | HES-gif |  | ✅
BiShadowHand BlockStack | SBS-gif |  | ✅
BiShadowHand BottleCap | SBC-gif |  | ✅
BiShadowHand CatchAbreast | SCA-gif |  | ✅
BiShadowHand CatchOver2Underarm | SU2-gif |  | ✅
BiShadowHand CatchUnderarm | SCU-gif |  | ✅
BiShadowHand DoorOpenInward | SOI-gif |  | ✅
BiShadowHand DoorOpenOutward | SOO-gif |  | ✅
BiShadowHand DoorCloseInward | SCI-gif |  | ✅
BiShadowHand DoorCloseOutward | SCO-gif |  | ✅
BiShadowHand GraspAndPlace | SGP-gif |  | ✅
BiShadowHand LiftUnderarm | SLU-gif |  | ✅
BiShadowHand Over | SHO-gif |  | ✅
BiShadowHand Pen | SPE-gif |  | ✅
BiShadowHand PointCloud |  |  |
BiShadowHand PushBlock | SPB-gif |  | ✅
BiShadowHand ReOrientation | SRO-gif |  | ✅
BiShadowHand Scissors | SSC-gif |  | ✅
BiShadowHand SwingCup | SSW-gif |  | ✅
BiShadowHand Switch | SWH-gif |  | ✅
BiShadowHand TwoCatchUnderarm | STC-gif |  | ✅

# OmniIsaacGym tasks
# Training
python examples/learning_rl/OmniIsaacGym_RofuncRL/example_AllegroHandOmni_RofuncRL.py --agent=ppo
python examples/learning_rl/OmniIsaacGym_RofuncRL/example_AntOmni_RofuncRL.py --agent=ppo
python examples/learning_rl/OmniIsaacGym_RofuncRL/example_AnymalOmni_RofuncRL.py --agent=ppo
python examples/learning_rl/OmniIsaacGym_RofuncRL/example_AnymalTerrainOmni_RofuncRL.py --agent=ppo
python examples/learning_rl/OmniIsaacGym_RofuncRL/example_BallBalanceOmni_RofuncRL.py --agent=ppo
python examples/learning_rl/OmniIsaacGym_RofuncRL/example_CartpoleOmni_RofuncRL.py --agent=ppo
python examples/learning_rl/OmniIsaacGym_RofuncRL/example_CrazyflieOmni_RofuncRL.py --agent=ppo
python examples/learning_rl/OmniIsaacGym_RofuncRL/example_FactoryNutBoltPickOmni_RofuncRL.py --agent=ppo
python examples/learning_rl/OmniIsaacGym_RofuncRL/example_FrankaCabinetOmni_RofuncRL.py --agent=ppo
python examples/learning_rl/OmniIsaacGym_RofuncRL/example_HumanoidOmni_RofuncRL.py --agent=ppo
python examples/learning_rl/OmniIsaacGym_RofuncRL/example_IngenuityOmni_RofuncRL.py --agent=ppo
python examples/learning_rl/OmniIsaacGym_RofuncRL/example_QuadcopterOmni_RofuncRL.py --agent=ppo
python examples/learning_rl/OmniIsaacGym_RofuncRL/example_ShadowHandOmni_RofuncRL.py --agent=ppo

# OpenAI Gym tasks
# Training
python examples/learning_rl/OpenAIGym_RofuncRL/example_GymTasks_RofuncRL.py --task=Gym_Pendulum-v1 --agent=[ppo|a2c|td3|sac]
python examples/learning_rl/OpenAIGym_RofuncRL/example_GymTasks_RofuncRL.py --task=Gym_CartPole-v1 --agent=[ppo|a2c|td3|sac]
python examples/learning_rl/OpenAIGym_RofuncRL/example_GymTasks_RofuncRL.py --task=Gym_Acrobot-v1 --agent=[ppo|a2c|td3|sac]

# D4RL offline RL tasks (Decision Transformer)
# Training
python examples/learning_rl/D4RL_Rofunc/example_D4RL_RofuncRL.py --task=Hopper --agent=dtrans
python examples/learning_rl/D4RL_Rofunc/example_D4RL_RofuncRL.py --task=Walker2d --agent=dtrans
python examples/learning_rl/D4RL_Rofunc/example_D4RL_RofuncRL.py --task=HalfCheetah --agent=dtrans
python examples/learning_rl/D4RL_Rofunc/example_D4RL_RofuncRL.py --task=Reacher2d --agent=dtrans
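
All of the commands above drive small launcher scripts with a common shape: parse --task/--agent/--inference, build a config, construct the task and trainer, then train or load a pre-trained checkpoint. The sketch below shows that shape only; the config helper, trainer registry, and method names in the comments are assumptions for illustration, not the confirmed RofuncRL API.

import argparse

# Skeleton of an example_*_RofuncRL.py launcher. Only the argparse part is
# concrete; the commented flow (get_config, Trainers, load_ckpt) names
# assumed interfaces, not confirmed RofuncRL API.
parser = argparse.ArgumentParser()
parser.add_argument("--task", type=str, default="Ant")
parser.add_argument("--agent", type=str, default="ppo")  # ppo | a2c | td3 | sac
parser.add_argument("--inference", action="store_true")
args = parser.parse_args()

# Assumed flow inside the real scripts:
#   cfg = get_config(task=args.task, agent=args.agent)  # rofunc.config.utils
#   env = make_env(cfg)                                 # IsaacGym / Omni / Gym env
#   trainer = Trainers[args.agent](cfg, env)            # rofunc.learning.RofuncRL
#   if args.inference:
#       trainer.load_ckpt(...)   # pre-trained weights from the model zoo
#       trainer.eval()
#   else:
#       trainer.train()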

3.  Machine learning class#

The following are examples of machine learning methods for robot learning.

  • BiRP
  • FeLT
  • GMR
  • TP-GMM
  • TP-GMM for bimanual setting
  • TP-GMMBi with Relative Parameterization in Control
  • TP-GMMBi with Relative Parameterization in Representation
  • TP-GMR for Tai Chi
  • Taichi
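
For readers new to GMR, the sketch below shows the core computation on a toy 1-D problem: fit a GMM over joint (x, y) samples, then condition each Gaussian on x and blend the conditional means. It is a from-scratch NumPy/scikit-learn illustration of the algorithm, not Rofunc's implementation.

import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

# Toy data: y = sin(x) + noise, modeled jointly as 2-D samples (x, y).
rng = np.random.default_rng(0)
x = rng.uniform(0, 2 * np.pi, 500)
y = np.sin(x) + 0.05 * rng.standard_normal(500)
gmm = GaussianMixture(n_components=5, random_state=0).fit(np.c_[x, y])

def gmr_1d(xq, gmm):
    """E[y | x = xq]: condition each Gaussian on x and blend the results."""
    h = np.empty(gmm.n_components)
    cond = np.empty(gmm.n_components)
    for k in range(gmm.n_components):
        mx, my = gmm.means_[k]
        sxx, sxy = gmm.covariances_[k][0, 0], gmm.covariances_[k][0, 1]
        h[k] = gmm.weights_[k] * norm.pdf(xq, mx, np.sqrt(sxx))  # responsibility
        cond[k] = my + sxy / sxx * (xq - mx)                     # conditional mean
    return h @ cond / h.sum()

print(gmr_1d(1.0, gmm))   # should be close to sin(1.0) ≈ 0.84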

4.  Planning and control methods#

The following are examples of planning and control methods for robot motion generation and tracking.

  • Bimanual iLQR
  • LQT
  • LQT feedback version
  • LQT with control primitives
  • LQT with control primitives and DMP
  • iLQR
  • iLQR control primitive version
  • iLQR with CoM
  • iLQR with dynamics
  • iLQR with obstacle avoidance
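
The LQT examples solve a linear-quadratic tracking problem, which in batch form admits a closed-form solution for the whole control sequence. The sketch below is a from-scratch NumPy illustration on a double integrator, not the rofunc.planning_control.lqt implementation.

import numpy as np

dt, T = 0.01, 100
A = np.array([[1.0, dt], [0.0, 1.0]])        # double-integrator dynamics
B = np.array([[0.0], [dt]])
nx, nu = 2, 1

# Stack the rollout: [x_1; ...; x_T] = Sx @ x0 + Su @ [u_0; ...; u_{T-1}]
Sx = np.vstack([np.linalg.matrix_power(A, t + 1) for t in range(T)])
Su = np.zeros((T * nx, T * nu))
for t in range(T):
    for s in range(t + 1):
        Su[t*nx:(t+1)*nx, s*nu:(s+1)*nu] = np.linalg.matrix_power(A, t - s) @ B

x0 = np.array([0.0, 0.0])
xd = np.tile([1.0, 0.0], T)                  # track position 1, velocity 0
Q = np.kron(np.eye(T), np.diag([1e2, 1e0]))  # state tracking weights
R = 1e-3 * np.eye(T * nu)                    # control effort weight

# Closed-form minimizer of ||Sx x0 + Su U - xd||_Q^2 + ||U||_R^2
U = np.linalg.solve(Su.T @ Q @ Su + R, Su.T @ Q @ (xd - Sx @ x0))
X = (Sx @ x0 + Su @ U).reshape(T, nx)
print("final state:", X[-1])                 # ≈ [1, 0]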

5.  Robolab#

The following examples show the use of the Robolab API.

  • CURI FK transformation verification with optitrack
  • CURI forward dynamics
  • CURI forward kinematics
  • CURI inverse kinematics
  • CURI utils
  • Coordinate transform
  • Ergo manipulation
  • FD from models
  • FK from models
  • IK from models
  • Jacobian from models
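
As a flavor of what the "FK from models" and "Coordinate transform" examples cover, here is a generic homogeneous-transform composition for a planar 2-link chain. It is a self-contained illustration of the idea, not the RoboLab API.

import numpy as np

def transform(theta, link_length):
    """Homogeneous transform: rotate by theta, then translate along x."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, link_length * c],
                     [s,  c, link_length * s],
                     [0,  0, 1.0]])

def fk(q, lengths=(1.0, 0.8)):
    """End-effector position of a planar 2-link arm via transform composition."""
    T = transform(q[0], lengths[0]) @ transform(q[1], lengths[1])
    return T[:2, 2]                        # end-effector (x, y)

print(fk([np.pi / 4, -np.pi / 4]))         # joint angles in radians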

6.  Visualab#

The following examples show the use of the Visualab API.

  • Image segmentation using EfficientSAM with prompt
  • Image segmentation using SAM
  • Image segmentation using SAM with prompt
  • Part-level segmentation using SAM and VLPart with prompt
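
The prompt-based examples build on the upstream segment-anything package. The sketch below shows a single point-prompt prediction using that package directly (the checkpoint and image paths are placeholders); Rofunc's VisuaLab wrappers may expose this differently.

import numpy as np
import cv2
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")  # placeholder path
predictor = SamPredictor(sam)

# SAM expects an RGB uint8 image.
image = cv2.cvtColor(cv2.imread("scene.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# One foreground point prompt (label 1) at pixel (500, 375).
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
    multimask_output=True,
)
print(masks.shape, scores)    # (3, H, W) candidate masks with confidences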

7.  Simulator#

The following examples show how to use the simulator.

  • Apply Forces and Torques On CURI
  • CURI controllers
  • CURI cube pick
  • CURI screw nut
  • Construct custom human model from Xsens data
  • LLM control a robot with pre-defined low-level motion API
  • Object Simulator and Camera Tracking
  • Tracking the trajectory by Franka
  • Tracking the trajectory by Gluon
  • Tracking the trajectory with interference by CURI
  • Tracking the trajectory with multiple rigid bodies by CURI
  • Tracking the trajectory with multiple rigid bodies by Walker
  • Visualize robots and objects
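
These examples share a common pattern: build simulator args, instantiate a robot simulator, then show the scene or feed it a trajectory. The sketch below assumes illustrative names (get_sim_config, CURISim, run_traj) on top of the real rofunc.simulator.curi_sim module; consult the API Reference for the actual interface.

import numpy as np
import rofunc as rf

# Assumed pattern behind the trajectory-tracking examples; the helper and
# class/method names are illustrative assumptions, not confirmed API.
args = rf.config.get_sim_config("CURI")      # assumed config helper
sim = rf.sim.CURISim(args)                   # assumed simulator class
traj = np.load("demo_traj.npy")              # hypothetical end-effector trajectory
sim.run_traj(traj=[traj, traj])              # assumed: track with both arms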

Download all examples in Python source code: examples_python.zip

Download all examples in Jupyter notebooks: examples_jupyter.zip

Gallery generated by Sphinx-Gallery


By Junjia Liu

© Copyright 2025, Junjia Liu.