# ov_msckf::StateHelper class

Helper which manipulates the State and its covariance.


In general, this class contains all the core logic for an Extended Kalman Filter (EKF)-based system. It has all functions that change the covariance, along with those that add and remove elements from the state. All functions here are static, and thus are self-contained, so that in the future multiple states could be tracked and updated. We recommend you look directly at the code of this class, and its matching documentation pages, for clarity on what exactly each function does.

## Public static functions

- static void EKFPropagation(State* state, const std::vector<Type*>& order_NEW, const std::vector<Type*>& order_OLD, const Eigen::MatrixXd& Phi, const Eigen::MatrixXd& Q)
  Performs EKF propagation of the state covariance.
- static void EKFUpdate(State* state, const std::vector<Type*>& H_order, const Eigen::MatrixXd& H, const Eigen::VectorXd& res, const Eigen::MatrixXd& R)
  Performs EKF update of the state (see the Linear Measurement Update page).
- static auto get_marginal_covariance(State* state, const std::vector<Type*>& small_variables) -> Eigen::MatrixXd
  For a given set of variables, this will calculate a smaller covariance.
- static auto get_full_covariance(State* state) -> Eigen::MatrixXd
  This gets the full covariance matrix.
- static void marginalize(State* state, Type* marg)
  Marginalizes a variable, properly modifying the ordering/covariances in the state.
- static auto clone(State* state, Type* variable_to_clone) -> Type*
  Clones "variable to clone" and places it at the end of the covariance.
- static auto initialize(State* state, Type* new_variable, const std::vector<Type*>& H_order, Eigen::MatrixXd& H_R, Eigen::MatrixXd& H_L, Eigen::MatrixXd& R, Eigen::VectorXd& res, double chi_2_mult) -> bool
  Initializes a new variable into the covariance.
- static void initialize_invertible(State* state, Type* new_variable, const std::vector<Type*>& H_order, const Eigen::MatrixXd& H_R, const Eigen::MatrixXd& H_L, const Eigen::MatrixXd& R, const Eigen::VectorXd& res)
  Initializes a new variable into the covariance (H_L must be invertible).
- static void augment_clone(State* state, Eigen::Matrix<double, 3, 1> last_w)
  Augments the state with a stochastic copy of the current IMU pose.
- static void marginalize_old_clone(State* state)
  Removes the oldest clone, if we have more than the maximum clone count.
- static void marginalize_slam(State* state)
  Marginalizes bad SLAM features.

## Function documentation

### EKFPropagation()

    static void ov_msckf::StateHelper::EKFPropagation(State* state,
                                                      const std::vector<Type*>& order_NEW,
                                                      const std::vector<Type*>& order_OLD,
                                                      const Eigen::MatrixXd& Phi,
                                                      const Eigen::MatrixXd& Q)

Performs EKF propagation of the state covariance.

Parameters | |
---|---|
state | Pointer to state |
order_NEW | Contiguous variables that have evolved according to this state transition |
order_OLD | Variable ordering used in the state transition |
Phi | State transition matrix (size order_NEW by size order_OLD) |
Q | Additive state propagation noise matrix (size order_NEW by size order_NEW) |

The mean of the state should already have been propagated, so this function just moves the covariance forward in time. The new states that we are propagating the old covariance into should be **contiguous** in memory. The user only needs to specify the sub-variables that this block is a function of.

### EKFUpdate()

    static void ov_msckf::StateHelper::EKFUpdate(State* state,
                                                 const std::vector<Type*>& H_order,
                                                 const Eigen::MatrixXd& H,
                                                 const Eigen::VectorXd& res,
                                                 const Eigen::MatrixXd& R)

Performs EKF update of the state (see the Linear Measurement Update page).

Parameters | |
---|---|
state | Pointer to state |
H_order | Variable ordering used in the compressed Jacobian |
H | Condensed Jacobian of updating measurement |
res | Residual of updating measurement |
R | Updating measurement covariance |

### get_marginal_covariance()

    static Eigen::MatrixXd ov_msckf::StateHelper::get_marginal_covariance(State* state,
                                                                          const std::vector<Type*>& small_variables)

For a given set of variables, this will calculate a smaller covariance.

Parameters | |
---|---|
state | Pointer to state |
small_variables | Vector of variables whose marginal covariance is desired |
Returns | Marginal covariance of the passed variables |

The returned covariance includes only the specified variables, with all cross terms between them. Thus the size of the returned matrix will be the summed dimension of all the passed variables. The normal use for this is a chi-squared check before an update (where you don't need the full covariance).

### get_full_covariance()

    static Eigen::MatrixXd ov_msckf::StateHelper::get_full_covariance(State* state)

This gets the full covariance matrix.

Parameters | |
---|---|
state | Pointer to state |
Returns | Covariance of current state |

This should only be used during simulation, as operations on this covariance will be slow. It returns a copy, so by design it cannot be used to change the covariance. Please use the other interface functions in StateHelper to programmatically change the covariance.

### marginalize()

    static void ov_msckf::StateHelper::marginalize(State* state,
                                                   Type* marg)

Marginalizes a variable, properly modifying the ordering/covariances in the state.

Parameters | |
---|---|
state | Pointer to state |
marg | Pointer to variable to marginalize |

This function can support any Type variable out of the box. Marginalization of a sub-variable/type is currently not supported; for example, marginalizing just the orientation of a PoseJPL is not possible. We first remove the rows and columns corresponding to the type (i.e. perform the marginalization), and afterwards update all the type ids so that they take into account that the covariance has shrunk.

### initialize()

    static bool ov_msckf::StateHelper::initialize(State* state,
                                                  Type* new_variable,
                                                  const std::vector<Type*>& H_order,
                                                  Eigen::MatrixXd& H_R,
                                                  Eigen::MatrixXd& H_L,
                                                  Eigen::MatrixXd& R,
                                                  Eigen::VectorXd& res,
                                                  double chi_2_mult)

Initializes a new variable into the covariance.

Parameters | |
---|---|
state | Pointer to state |
new_variable | Pointer to variable to be initialized |
H_order | Vector of pointers in the order they are contained in the condensed state Jacobian |
H_R | Jacobian of initializing measurements wrt variables in H_order |
H_L | Jacobian of initializing measurements wrt new variable |
R | Covariance of initializing measurements (isotropic) |
res | Residual of initializing measurements |
chi_2_mult | Value we should multiply the chi-squared threshold by (larger means more measurements will be accepted) |

Uses Givens rotations to separate the system into an updating part and an initializing part (therefore the system must be fed in with isotropic noise). If your noise is not isotropic, first whiten the system (TODO: we should add a helper function to do this). If your H_L Jacobian is already directly invertible, just call initialize_invertible() instead.

### initialize_invertible()

    static void ov_msckf::StateHelper::initialize_invertible(State* state,
                                                             Type* new_variable,
                                                             const std::vector<Type*>& H_order,
                                                             const Eigen::MatrixXd& H_R,
                                                             const Eigen::MatrixXd& H_L,
                                                             const Eigen::MatrixXd& R,
                                                             const Eigen::VectorXd& res)

Initializes a new variable into the covariance (H_L must be invertible).

Parameters | |
---|---|
state | Pointer to state |
new_variable | Pointer to variable to be initialized |
H_order | Vector of pointers in the order they are contained in the condensed state Jacobian |
H_R | Jacobian of initializing measurements wrt variables in H_order |
H_L | Jacobian of initializing measurements wrt new variable (needs to be invertible) |
R | Covariance of initializing measurements |
res | Residual of initializing measurements |

Please refer to the Delayed Feature Initialization page for the detailed derivation. This is just the update assuming that H_L is invertible (and thus square) and the noise is isotropic.

### augment_clone()

    static void ov_msckf::StateHelper::augment_clone(State* state,
                                                     Eigen::Matrix<double, 3, 1> last_w)

Augment the state with a stochastic copy of the current IMU pose.

Parameters | |
---|---|
state | Pointer to state |
last_w | The estimated angular velocity at cloning time (used to estimate imu-cam time offset) |

After propagation, we normally augment the state with a new clone that is at the new update timestep. This augmentation clones the IMU pose and adds it to our state's clone map. If we are doing time offset calibration, we also make our cloning a function of the time offset. The time offset logic is based on the Mingyang Li and Anastasios I. Mourikis paper: http:/

Here we say that we have propagated our state up to the current estimated true imaging time for the current image, and \(\hat{\boldsymbol{\omega}}\) is the angular velocity at the end of propagation with the biases removed. This is off by some small error, so to get to the true imaging time in the IMU base clock, we can append a small time-offset error \(\tilde{t}_d\). Thus the Jacobian with respect to our time offset during our cloning procedure is the following:

$$
\frac{\partial \, {}^{I}\tilde{\boldsymbol{\theta}}}{\partial \tilde{t}_d} = {}^{I}\hat{\boldsymbol{\omega}}, \qquad
\frac{\partial \, {}^{G}\tilde{\mathbf{p}}_{I}}{\partial \tilde{t}_d} = {}^{G}\hat{\mathbf{v}}_{I}
$$

### marginalize_old_clone()

    static void ov_msckf::StateHelper::marginalize_old_clone(State* state)

Removes the oldest clone, if we have more than the maximum clone count.

Parameters | |
---|---|
state | Pointer to state |

This will marginalize the clone from our covariance, and remove it from our state. This is mainly a helper function that we can call after each update. It will marginalize the clone specified by State::

### marginalize_slam()

    static void ov_msckf::StateHelper::marginalize_slam(State* state)

Marginalizes bad SLAM features.

Parameters | |
---|---|
state | Pointer to state |