rofunc.learning.ml.tphsmm#

1.  Module Contents#

1.1.  Classes#

TPHSMM

1.2.  API#

class rofunc.learning.ml.tphsmm.TPHSMM(demos: Union[List, numpy.ndarray], nb_states: int = 4, reg: float = 0.001, horizon: int = 150, plot: bool = False, task_params: Union[List, Union[List, Union[Tuple, numpy.ndarray]]] = None, dt: float = 0.01)[source]#

Initialization

Task-parameterized Hidden Semi-Markov Model (TP-HSMM)

:param demos: demonstration displacement data
:param nb_states: number of states in the HSMM
:param reg: regularization coefficient
:param horizon: horizon of the reproduced trajectory
:param plot: whether to plot the result
:param task_params: task parameters, including the transformation matrix A and bias b of each frame
:param dt: time step between trajectory samples
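A minimal construction sketch. The demo layout is an assumption here (a list of (T, D) position arrays sampled at a fixed time step), and the random data is purely illustrative; real use would load recorded demonstrations instead.

    import numpy as np
    from rofunc.learning.ml.tphsmm import TPHSMM

    # Hypothetical demos: five trajectories, each a (T, D) array.
    # Replace with recorded demonstration data in practice.
    demos = [np.random.rand(100, 2) for _ in range(5)]

    # Build the TP-HSMM representation with the documented defaults.
    model = TPHSMM(demos, nb_states=4, reg=1e-3, horizon=150, plot=False, dt=0.01)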

get_dx(demos_x)[source]#
hsmm_learning()[source]#

Learn the task-parameterized HSMM
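Calling this directly is a one-liner; whether fit() below invokes this step internally is not stated on this page, so the sketch only shows standalone use on a constructed model.

    # Learn the TP-HSMM parameters from the demos passed at construction.
    model.hsmm_learning()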

poe(show_demo_idx: int, task_params: tuple = None) → pbdlib.GMM[source]#

Product of Experts/Gaussians (PoE), which computes the mixture distribution across multiple coordinate frames

:param show_demo_idx: index of the specific demo to be reproduced
:param task_params: task parameters, including the transformation matrix A and bias b of each frame
:return: the product of experts
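A usage sketch on a trained model. Leaving task_params as None presumably falls back to the task parameters of the chosen demo; the exact tuple layout is not documented on this page.

    # Mixture distribution for demo 0 across its coordinate frames.
    prod_gmm = model.poe(show_demo_idx=0)  # returns a pbdlib.GMM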

fit() → pbdlib.HSMM[source]#

Learn the trajectory representation of a single arm/agent from demonstrations via TP-HSMM.
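A sketch of the typical entry point, assuming the model was constructed as above:

    # Train the representation; returns the learned pbdlib.HSMM.
    hsmm = model.fit()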

reproduce(show_demo_idx: int, dt: float = None) → Tuple[numpy.ndarray, pbdlib.GMM][source]#

Reproduce the demonstration indexed by show_demo_idx from the learned model

:param show_demo_idx: index of the specific demo to be reproduced
:param dt: time step of the reproduced trajectory
:return: the reproduced trajectory and the associated GMM
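A sketch on a fitted model. Passing dt=None presumably reuses the time step given at construction, though that fallback is not documented here.

    # Reproduce demo 0: yields the trajectory array and the associated GMM.
    traj, prod_gmm = model.reproduce(show_demo_idx=0)
    print(traj.shape)  # inspect the reproduced trajectory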

generate(ref_demo_idx: int, task_params: tuple, start_state: numpy.array, horizon: int = 100, dt: float = None) → Tuple[numpy.ndarray, pbdlib.GMM][source]#

Generate a new trajectory from the learned model for a new set of task parameters

:param ref_demo_idx: index of the reference demo
:param task_params: task parameters of the new situation, including the transformation matrix A and bias b of each frame
:param start_state: start state of the generated trajectory
:param horizon: horizon of the generated trajectory
:param dt: time step of the generated trajectory
:return: the generated trajectory and the associated GMM
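A sketch for generation under new task parameters. The tuple layout of task_params is not specified on this page (it is expected to carry the frame transforms A and offsets b), so the placeholder below must be filled in; start_state is assumed to match the demo state dimension D.

    import numpy as np

    # Placeholder: task parameters for the new situation (layout assumed
    # to mirror those used at training time; see poe() above).
    new_task_params = ...

    start = np.zeros(2)  # hypothetical start state, matching D = 2 above

    traj, prod_gmm = model.generate(ref_demo_idx=0,
                                    task_params=new_task_params,
                                    start_state=start,
                                    horizon=100,
                                    dt=0.01)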