rofunc.planning_control.lqr.lqr#

1.  Module Contents#

1.1.  Classes#

LQR

GMMLQR

LQR with a GMM cost on the state; the approximation remains to be verified

PoGLQR

Implementation of LQR with a Product of Gaussians

PoGLQRBi

Implementation of LQR with a Product of Gaussians

1.2.  API#

class rofunc.planning_control.lqr.lqr.LQR(A=None, B=None, nb_dim=2, dt=0.01, horizon=50)[source]#

Bases: object

Initialization

property seq_xi#
property K#
property Q#
property z#
property Qc#
property cs#
Return the list of feedforward commands c, where the control command is

u = -K x + c

property ds#
Return the list of targets d, where the control command is

u = K (d - x)
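
The two parametrizations above describe the same control law: the feedforward term and the target are related by c = K d. A minimal numpy check of this equivalence, with hypothetical gain and target values:

```python
import numpy as np

# Hypothetical 2-D gain and target, purely for illustration.
K = np.array([[2.0, 0.5],
              [0.1, 1.5]])   # feedback gain for one time step
d = np.array([1.0, -2.0])    # target state d
x = np.array([0.3, 0.7])     # current state

c = K @ d                    # feedforward term corresponding to target d

u_ff = -K @ x + c            # cs parametrization: u = -K x + c
u_tg = K @ (d - x)           # ds parametrization: u = K (d - x)

assert np.allclose(u_ff, u_tg)
```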

property horizon#
property u_dim#

Dimension of the control input.

property xi_dim#

Dimension of the state.

property gmm_xi#

Distribution of the state.

property gmm_u#

Distribution of the control input.

property x0#
get_Q_z(t)[source]#

Get the cost weight Q and target z for time step t.

get_R(t)[source]#
get_A(t)[source]#
get_B(t)[source]#
ricatti()[source]#

Solve the backward Riccati recursion; see http://web.mst.edu/~bohner/papers/tlqtots.pdf.
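
In standard form, a backward Riccati pass for a tracking cost produces time-varying feedback gains K_t and feedforward terms c_t, yielding the u = -K x + c law exposed by the cs property above. A self-contained numpy sketch of that recursion, assuming constant A, B, cost weights Q, R, and a fixed target z (a generic textbook formulation, not rofunc's exact code):

```python
import numpy as np

def riccati_gains(A, B, Q, R, z, horizon):
    """Backward Riccati pass for the cost sum_t (x_t - z)^T Q (x_t - z) + u_t^T R u_t.

    Returns per-step feedback gains K_t and feedforward terms c_t such that
    u_t = -K_t x_t + c_t.  A generic sketch, not rofunc's exact implementation.
    """
    P = Q.copy()          # quadratic value-function term at the horizon
    v = Q @ z             # linear value-function term at the horizon
    Ks, cs = [], []
    for _ in range(horizon - 1):
        S = R + B.T @ P @ B
        K = np.linalg.solve(S, B.T @ P @ A)   # feedback gain
        c = np.linalg.solve(S, B.T @ v)       # feedforward command
        Ks.append(K)
        cs.append(c)
        Acl = A - B @ K                       # closed-loop dynamics
        P = Q + A.T @ P @ Acl
        v = Q @ z + Acl.T @ v
    return Ks[::-1], cs[::-1]                 # reorder to t = 0 .. horizon-2

# Scalar integrator example: track z = 1 starting from x = 0.
A = np.array([[1.0]]); B = np.array([[1.0]])
Q = np.array([[1.0]]); R = np.array([[0.1]])
z = np.array([1.0])
Ks, cs = riccati_gains(A, B, Q, R, z, horizon=50)
x = np.array([0.0])
for K, c in zip(Ks, cs):
    x = A @ x + B @ (-K @ x + c)              # apply u = -K x + c
```

The forward rollout at the end drives the scalar integrator close to the target z = 1, analogous to what get_command and get_seq build on.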

get_target()[source]#
get_feedforward()[source]#
get_command(xi, i)[source]#
policy(xi, t)[source]#

Time-dependent policy, linear in the state, returned as an MVN distribution.

Parameters:
  • xi – current state

  • t – time step

get_sample(xi, i, sample_size=1)[source]#
Parameters:
  • xi

  • i

  • sample_size

Returns:

trajectory_distribution(xi, u, t)[source]#
get_seq(xi0, return_target=False)[source]#
make_rollout_samples(x0)[source]#
make_rollout(x0)[source]#
rollout_policy(dist_policy, x0)[source]#

Rollout of the stochastic policy.

Parameters:
  • dist_policy – a policy distribution that takes x and t as input

  • x0 – initial state
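
A sketch of such a rollout, assuming the policy callable returns the mean and covariance of an MVN over the control command (the callable, its return convention, and the dynamics arguments here are illustrative assumptions, not rofunc's exact interface):

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout_policy(dist_policy, x0, A, B, horizon):
    """Roll out a stochastic policy: at each step, sample u from the
    MVN returned by dist_policy(x, t) and propagate the linear dynamics.
    Generic sketch with assumed argument names."""
    xs, x = [x0], x0
    for t in range(horizon - 1):
        mu, sigma = dist_policy(x, t)          # MVN parameters of u
        u = rng.multivariate_normal(mu, sigma)
        x = A @ x + B @ u
        xs.append(x)
    return np.array(xs)

# Deterministic-limit check: zero covariance recovers u = -K x exactly,
# so the closed loop x_{t+1} = (I - B K) x_t contracts toward the origin.
A = np.eye(2); B = np.eye(2)
K = 0.5 * np.eye(2)
policy = lambda x, t: (-K @ x, np.zeros((2, 2)))
traj = rollout_policy(policy, np.array([1.0, -1.0]), A, B, horizon=20)
```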

class rofunc.planning_control.lqr.lqr.GMMLQR(*args, **kwargs)[source]#

Bases: rofunc.planning_control.lqr.lqr.LQR

LQR with a GMM cost on the state; the approximation remains to be verified

Initialization

property full_gmm_xi#

Distribution of the state.

ricatti(x0, n_best=None)[source]#
class rofunc.planning_control.lqr.lqr.PoGLQR(A=None, B=None, nb_dim=2, dt=0.01, horizon=50)[source]#

Bases: rofunc.planning_control.lqr.lqr.LQR

Implementation of LQR with a Product of Gaussians

Initialization

property A#
property B#
property mvn_u_dim#

Dimension of the control input sequence in lifted form.

property mvn_xi_dim#

Dimension of the state sequence in lifted form.

property mvn_sol_u#

Distribution of the control input after solving the LQR.

property seq_xi#
property seq_u#
property mvn_sol_xi#

Distribution of the state after solving the LQR.

property mvn_xi#

Distribution of the state.

property mvn_u#

Distribution of the control input.

property xis#
property k#
property s_u#
property s_xi#
reset_params()[source]#
property horizon#
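
The lifted (batch) form behind this class can be sketched as follows: the state sequence is an affine function of the control sequence, and the product of a Gaussian over the state sequence with a Gaussian over the controls is again a Gaussian over the controls, whose mean is the LQR solution. A minimal numpy sketch assuming constant A, B and given precision matrices Q (states) and R (controls); all names are illustrative, not rofunc's exact code:

```python
import numpy as np

def pog_lqr_u(A, B, x0, mu_xi, Q, R, horizon):
    """Solve LQR in lifted form as a product of Gaussians.

    The state sequence is xi = Sx @ x0 + Su @ u.  Combining the state
    Gaussian N(mu_xi, Q^-1) with the control Gaussian N(0, R^-1) gives
    a Gaussian over u; its mean solves (Su^T Q Su + R) u = Su^T Q (mu_xi - Sx x0).
    Generic sketch of the construction.
    """
    n, m = B.shape
    # Lifted transfer matrices: x_t = A^t x0 + sum_{k<t} A^(t-1-k) B u_k.
    Sx = np.zeros((horizon * n, n))
    Su = np.zeros((horizon * n, (horizon - 1) * m))
    Ak = np.eye(n)
    for t in range(horizon):
        Sx[t*n:(t+1)*n] = Ak
        Ak = A @ Ak
    for t in range(1, horizon):
        for k in range(t):
            Su[t*n:(t+1)*n, k*m:(k+1)*m] = np.linalg.matrix_power(A, t-1-k) @ B
    lhs = Su.T @ Q @ Su + R
    rhs = Su.T @ Q @ (mu_xi - Sx @ x0)
    u = np.linalg.solve(lhs, rhs)
    return u, Sx @ x0 + Su @ u        # control sequence and resulting states

# Scalar integrator tracking a constant target of 1 from x0 = 0.
A = np.array([[1.0]]); B = np.array([[1.0]])
horizon = 10
Q = 100.0 * np.eye(horizon)           # high state precision: track tightly
R = np.eye(horizon - 1)               # unit precision on the controls
mu_xi = np.ones(horizon)              # desired state sequence
u, xi = pog_lqr_u(A, B, np.zeros(1), mu_xi, Q, R, horizon)
```

In this sketch, u plays the role of the mean of mvn_sol_u and the resulting states the role of seq_xi.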
class rofunc.planning_control.lqr.lqr.PoGLQRBi(A=None, B=None, nb_dim=2, dt=0.01, horizon=50)[source]#

Bases: rofunc.planning_control.lqr.lqr.LQR

Implementation of LQR with a Product of Gaussians

Initialization

property A#
property B#
property C#
property C_l#
property C_r#
property horizon#
property x0_l#
property x0_r#
property x0_c#
property mvn_U_dim#

Dimension of the control input sequence in lifted form.

property mvn_xi_dim#

Dimension of the state sequence in lifted form.

property mvn_sol_U#

Distribution of the control input after solving the LQR.

get_sigma_mu(prod)[source]#
property mvn_sol_xi#

Distribution of the state after solving the LQR.

property seq_xi#
property seq_U#
property mvn_xi_l#

Distribution of the state.

property mvn_xi_r#

Distribution of the state.

property mvn_xi_c#

Distribution of the state.

property mvn_u#

Distribution of the control input.

property xis#
property k#
property s_U#
property s_xi#
property U_dim#

Dimension of the control input.

reset_params()[source]#