IMU Propagation Derivations » Discrete Propagation

The state definition and general propagation equations are covered on the IMU Propagation Derivations page. Here we cover a simplified discrete inertial model which does not include the IMU intrinsic parameters. First we show how to propagate the state mean forward in time, after which we derive the linearized discrete error-state system.

Zeroth Order Quaternion Integrator

First we look at how to integrate our orientation equation forward in time. One could use the integration property of the $SO(3)$ matrix exponential, or, as we do here, derive the integration for a quaternion orientation. The solution to the quaternion evolution has the following general form (see [Indirect Kalman Filter for 3D Attitude Estimation] [40] Section 1.6):

\begin{align*} ^{I_{t}}_{G}\bar{q} = \boldsymbol{\Theta}(t,t_k) \text{}^{I_{k}}_{G}\bar{q} \end{align*}

Solving the above for $\boldsymbol{\Theta}(t,t_k)$ and then differentiating yields its governing equation as

\begin{align*} \boldsymbol{\Theta}(t,t_k) &= \text{}^{I_t}_G\bar{q} \text{ }^{I_k}_{G}\bar{q}^{-1}\\ \Rightarrow \dot{\boldsymbol{\Theta}}(t,t_k) &= \text{}^{I_t}_G\dot{\bar{q}} \text{ }^{I_k}_{G}\bar{q}^{-1}\\ &= \frac{1}{2} \boldsymbol{\Omega}(\boldsymbol{\omega}(t)) \text{ }^{I_t}_{G}\bar{q} \text{ }^{I_k}_{G}\bar{q}^{-1}\\ &= \frac{1}{2} \boldsymbol{\Omega}(\boldsymbol{\omega}(t)) \boldsymbol{\Theta}(t,t_k) \end{align*}

where we have defined:

\begin{align*} \boldsymbol{\Omega}(\boldsymbol{\omega}(t)) &= \begin{bmatrix} -\lfloor \boldsymbol{\omega}(t) \times \rfloor && \boldsymbol{\omega}(t) \\ -\boldsymbol{\omega}(t)^\top && 0 \end{bmatrix} \end{align*}

and $\boldsymbol{\Theta}(t_k,t_k) = \mathbf{I}_{4}$ . If we take $\boldsymbol{\omega}(t) = \boldsymbol{\omega}$ to be constant over the period $\Delta t_k = t_{k+1} - t_k$ , then the above system is linear time-invariant (LTI), and $\boldsymbol{\Theta}$ can be solved as (see [31]):

\begin{align*} \boldsymbol{\Theta}(t_{k+1},t_k) &= \exp\bigg(\frac{1}{2}\boldsymbol{\Omega}(\boldsymbol{\omega})\Delta t_k\bigg)\\ &= \cos\bigg(\frac{|\boldsymbol{\omega}|}{2} \Delta t_k \bigg) \cdot \mathbf{I}_4 + \frac{1}{|\boldsymbol{\omega}|}\sin\bigg(\frac{|\boldsymbol{\omega}|}{2} \Delta t_k \bigg) \cdot \boldsymbol{\Omega}(\boldsymbol{\omega})\\ &\simeq \mathbf{I}_4 + \frac{\Delta t_k}{2}\boldsymbol{\Omega}(\boldsymbol{\omega}) \end{align*}

where $\exp(\cdot)$ is the general matrix exponential, which here has a closed-form solution, and the approximation holds when $|\boldsymbol{\omega}|\Delta t_k$ is small. We can formulate the quaternion propagation from $t_k$ to $t_{k+1}$ using the estimated rotational velocity $\hat{\boldsymbol{\omega}}(t) = \hat{\boldsymbol{\omega}}$ as:

\begin{align*} \text{}^{I_{k+1}}_{G}\hat{\bar{q}} = \exp\bigg(\frac{1}{2}\boldsymbol{\Omega}(\hat{\boldsymbol{\omega}})\Delta t_k\bigg) \text{}^{I_{k}}_{G}\hat{\bar{q}} \end{align*}
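To make this concrete, the closed-form $\boldsymbol{\Theta}$ can be evaluated and used to propagate a quaternion numerically. The following Python sketch (illustrative helper names, not the OpenVINS implementation; JPL scalar-last convention $\bar{q} = [x~y~z~w]^\top$ assumed) implements the zeroth-order integrator:

```python
import numpy as np

def skew(w):
    """3x3 skew-symmetric matrix such that skew(w) @ v = w x v."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def Omega(w):
    """4x4 Omega(w) matrix for JPL quaternion kinematics (scalar-last)."""
    O = np.zeros((4, 4))
    O[:3, :3] = -skew(w)
    O[:3, 3] = w
    O[3, :3] = -w
    return O

def Theta(w, dt):
    """Closed-form zeroth-order integrator Theta(t_{k+1}, t_k) for constant w."""
    nw = np.linalg.norm(w)
    if nw < 1e-12:
        # first-order approximation for very small |w|*dt
        return np.eye(4) + 0.5 * dt * Omega(w)
    return (np.cos(0.5 * nw * dt) * np.eye(4)
            + np.sin(0.5 * nw * dt) / nw * Omega(w))

# Propagate a unit JPL quaternion with a constant angular velocity.
q = np.array([0.0, 0.0, 0.0, 1.0])   # identity orientation
w = np.array([0.1, -0.2, 0.3])       # rad/s, assumed constant over dt
dt = 0.01
q_new = Theta(w, dt) @ q
```

Since $\boldsymbol{\Omega}(\boldsymbol{\omega})$ is skew-symmetric, $\boldsymbol{\Theta}$ is orthogonal and the quaternion norm is preserved exactly by the closed form.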

Discrete-time IMU Propagation

We model the measurements as discrete-time over the integration period: each measurement is assumed to be constant during its sampling interval, i.e., the measurement received at time $t_k$ is held until the next measurement arrives at $t_{k+1}$. For the quaternion, this is simply the continuous-time propagation under the constant measurement assumption ${\boldsymbol{\omega}}_{m}(t_k) = {\boldsymbol{\omega}}_{m,k}$, where the subscript $k$ denotes the measurement received at time $t_k$. The propagation of the quaternion can therefore be written as:

\begin{align*} \text{}^{I_{k+1}}_{G}\bar{q} = \exp\bigg(\frac{1}{2}\boldsymbol{\Omega}\big({\boldsymbol{\omega}}_{m,k}-{\mathbf{b}}_{g,k} - \mathbf{n}_{g,k}\big)\Delta t_k\bigg) \text{}^{I_{k}}_{G}\bar{q} \end{align*}

where $\Delta t_k = t_{k+1} - t_{k}$ . For the velocity and position propagation we have constant $\mathbf{a}_{m}(t_k) = \mathbf{a}_{m,k}$ over $t \in [t_k, t_{k+1}]$ . We can therefore directly solve for the new states as:

\begin{align*} ^G{\mathbf{p}}_{I_{k+1}} &= \text{}^G{\mathbf{p}}_{I_k} + {}^G{\mathbf{v}}_{I_k} \Delta t_k - \frac{1}{2}{}^G\mathbf{g}\Delta t_k^2 + \frac{1}{2} \text{}^{I_k}_{G}{\mathbf{R}}^\top(\mathbf{a}_{m,k} - {\mathbf{b}}_{{a},k} - \mathbf{n}_{a,k})\Delta t_k^2 \\ ^G{\mathbf{v}}_{k+1} &= \text{}^G{\mathbf{v}}_{I_k} - {}^G\mathbf{g}\Delta t_k +\text{}^{I_k}_{G}{\mathbf{R}}^\top(\mathbf{a}_{m,k} - {\mathbf{b}}_{{a},k} - \mathbf{n}_{a,k})\Delta t_k \end{align*}
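For intuition, the position and velocity mean update can be sketched in a few lines of Python (an illustrative helper, not the OpenVINS implementation; noise terms are zero for the mean, and $^G\mathbf{g} = [0~0~9.81]^\top$ is an assumed gravity convention):

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, 9.81])  # {}^G g (assumed convention for this sketch)

def propagate_mean_pv(p, v, R_GtoI, a_m, b_a, dt):
    """Discrete mean propagation of position and velocity.

    p, v     : {}^G p_{I_k}, {}^G v_{I_k}
    R_GtoI   : {}^{I_k}_G R (rotation from global frame to IMU frame)
    a_m, b_a : accelerometer measurement and bias estimate (noise set to zero)
    """
    # bias-corrected acceleration rotated into the global frame
    a_hat = R_GtoI.T @ (a_m - b_a)
    p_new = p + v * dt - 0.5 * GRAVITY * dt**2 + 0.5 * a_hat * dt**2
    v_new = v - GRAVITY * dt + a_hat * dt
    return p_new, v_new
```

As a sanity check, a stationary IMU whose accelerometer measures exactly gravity should keep its position and velocity unchanged under this update.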

Each bias is likewise propagated with its discrete random-walk model:

\begin{align*} {\mathbf{b}}_{g,k+1} &= {\mathbf{b}}_{g,k} + \mathbf{n}_{wg,k} \\ {\mathbf{b}}_{a,k+1} &= {\mathbf{b}}_{a,k} + \mathbf{n}_{wa,k} \end{align*}

Note that the noises used here are discrete-time ones, which need to be converted from their continuous-time counterparts. In particular, when the covariance matrix of the continuous-time measurement noises is given by $\mathbf{Q}_c$ and $\mathbf{n}_c = [ \mathbf{n}_{g_c} ~ \mathbf{n}_{a_c} ~ \mathbf{n}_{wg_c} ~ \mathbf{n}_{wa_c} ]^\top$ , the discrete-time noise sigmas and covariance blocks can be computed as (see [Indirect Kalman Filter for 3D Attitude Estimation] [40] Eq. (129) and (130)):

\begin{align*} \sigma_{g} &= \frac{1}{\sqrt{\Delta t}}~ \sigma_{g_c} \\ \sigma_{wg} &= \sqrt{\Delta t}~ \sigma_{wg_c} \\[1em] \mathbf{Q}_{meas} &= \begin{bmatrix} \frac{1}{\Delta t}~ \sigma_{g_c}^2~ \mathbf{I}_3 & \mathbf{0}_3 \\ \mathbf{0}_3 & \frac{1}{\Delta t}~ \sigma_{a_c}^2~ \mathbf{I}_3 \end{bmatrix} \\ \mathbf{Q}_{bias} &= \begin{bmatrix} \Delta t~ \sigma_{wg_c}^2~ \mathbf{I}_3 & \mathbf{0}_3 \\ \mathbf{0}_3 & \Delta t~ \sigma_{wa_c}^2~ \mathbf{I}_3 \end{bmatrix} \end{align*}

where $\mathbf{n} = [ \mathbf{n}_{g} ~ \mathbf{n}_{a} ~ \mathbf{n}_{wg} ~ \mathbf{n}_{wa} ]^\top$ are the discrete IMU sensor noises which have been converted from their continuous representations. We then define the stacked discrete noise covariance as follows:

\begin{align*} \mathbf{Q}_{d} &= \begin{bmatrix} \mathbf{Q}_{meas} & \mathbf{0}_3 \\ \mathbf{0}_3 & \mathbf{Q}_{bias} \end{bmatrix} \end{align*}
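The conversion can be written out directly in code; below is a small Python sketch (illustrative function name, assuming the same isotropic per-axis noise on each sensor) that builds $\mathbf{Q}_d$ from the continuous-time sigmas and the sampling period:

```python
import numpy as np

def build_Qd(sigma_g, sigma_a, sigma_wg, sigma_wa, dt):
    """Stacked discrete noise covariance Q_d from continuous-time sigmas.

    White-noise densities are divided by dt (Q_meas); random-walk
    densities are multiplied by dt (Q_bias).
    """
    I3 = np.eye(3)
    Z3 = np.zeros((3, 3))
    Q_meas = np.block([[sigma_g**2 / dt * I3, Z3],
                       [Z3, sigma_a**2 / dt * I3]])
    Q_bias = np.block([[sigma_wg**2 * dt * I3, Z3],
                       [Z3, sigma_wa**2 * dt * I3]])
    Qd = np.zeros((12, 12))
    Qd[:6, :6] = Q_meas
    Qd[6:, 6:] = Q_bias
    return Qd
```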

Discrete-time Error-state Propagation

In order to propagate the covariance matrix, we should derive the linearized error-state propagation, i.e., computing the system Jacobian $\boldsymbol{\Phi}(t_{k+1},t_k)$ and noise Jacobian $\mathbf{G}_{k}$ . The method of computing Jacobians is to "perturb" each variable in the system and see how the old error "perturbation" relates to the new error state. That is, $\boldsymbol{\Phi}(t_{k+1},t_k)$ and $\mathbf{G}_{k}$ can be found by perturbing each variable as:

\begin{align*} \tilde{\mathbf{x}}_I(t_{k+1}) = \boldsymbol{\Phi}(t_{k+1},t_k) \tilde{\mathbf{x}}_I(t_{k}) + \mathbf{G}_{k} \mathbf{n} \end{align*}

For the orientation error propagation, we start with the ${SO}(3)$ perturbation using ${}^{I}_G \mathbf{R} \approx (\mathbf{I}_3 - \lfloor ^{I}_{G}\tilde{\boldsymbol{\theta}}\times\rfloor)^{I}_{G} \hat{\mathbf{R}}$ (which can be shown to be exactly equivalent to the state's quaternion error, see [40] and ov_type::JPLQuat):

\begin{align*} {}^{I_{k+1}}_G \mathbf{R} &= \text{}^{I_{k+1}}_{I_{k}} \mathbf{R} \text{}^{I_{k}}_G \mathbf{R} \\ (\mathbf{I}_3 - \lfloor ^{I_{k+1}}_{G}\tilde{\boldsymbol{\theta}}\times\rfloor)^{I_{k+1}}_{G} \hat{\mathbf{R}} &\approx \exp(-{}^{I_{k}}\hat{\boldsymbol{\omega}}\Delta t_k - {}^{I_{k}}\tilde{\boldsymbol{\omega}}\Delta t_k) (\mathbf{I}_3 - \lfloor ^{I_{k}}_{G}\tilde{\boldsymbol{\theta}}\times\rfloor)^{I_{k}}_{G} \hat{\mathbf{R}}\\ &=\exp(-{}^{I_{k}}\hat{\boldsymbol{\omega}}\Delta t_k)\exp(-\mathbf{J}_r(-{}^{I_{k}}\hat{\boldsymbol{\omega}}\Delta t_k){}^{I_{k}}\tilde{\boldsymbol{\omega}}\Delta t_k) (\mathbf{I}_3 - \lfloor ^{I_{k}}_{G}\tilde{\boldsymbol{\theta}}\times\rfloor)^{I_{k}}_{G} \hat{\mathbf{R}}\\ &=\text{}^{I_{k+1}}_{I_{k}} \hat{\mathbf{R}} (\mathbf{I}_3 - \lfloor \mathbf J_r(-{}^{I_{k}}\hat{\boldsymbol{\omega}}\Delta t_k) \tilde{\boldsymbol{\omega}}_k\Delta t_k \times\rfloor) (\mathbf{I}_3 - \lfloor ^{I_k}_{G}\tilde{\boldsymbol{\theta}}\times\rfloor) \text{}^{I_{k}}_G \hat{\mathbf{R}} \end{align*}

where $\tilde{\boldsymbol{\omega}} = \boldsymbol{\omega} - \hat{\boldsymbol{\omega}} = -(\tilde{\mathbf{b}}_{{g}} + \mathbf{n}_g)$ handles both the perturbation to the bias and measurement noise. $\mathbf {J}_r(\boldsymbol{\theta})$ is the right Jacobian of ${SO}(3)$ that maps the variation of rotation angle in the parameter vector space into the variation in the tangent vector space to the manifold [see ov_core::Jr_so3() and [3]]. By neglecting the second order terms from above, we obtain the following orientation error propagation:

\begin{align*} \text{}^{I_{k+1}}_{G}\tilde{\boldsymbol{\theta}} \approx \text{}^{I_{k+1}}_{I_{k}}\hat{\mathbf{R}} \text{}^{I_k}_{G}\tilde{\boldsymbol{\theta}} - \text{}^{I_{k+1}}_{I_{k}}\hat{\mathbf{R}}\mathbf J_r({}^{I_{k+1}}_{I_{k}}\hat{\boldsymbol{\theta}}) \Delta t_k (\tilde{\mathbf{b}}_{{g},k} + \mathbf{n}_{{g},k}) \end{align*}

where we have defined ${}^{I_{k+1}}_{I_{k}}\hat{\boldsymbol{\theta}}=-{}^{I_{k}}\hat{\boldsymbol{\omega}}\Delta t_k$ . This describes how the error of the next orientation is related to the previous orientation error, the gyroscope bias error, and the noise. Now we can do error propagation of position and velocity using the same scheme:
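The right Jacobian has a well-known closed form; the Python sketch below (illustrative, matching the standard expression that ov_core::Jr_so3() also computes) implements it alongside the $SO(3)$ exponential, so the defining property $\exp(\boldsymbol{\theta}+\delta\boldsymbol{\theta}) \approx \exp(\boldsymbol{\theta})\exp(\mathbf{J}_r(\boldsymbol{\theta})\delta\boldsymbol{\theta})$ can be checked numerically:

```python
import numpy as np

def skew(w):
    """3x3 skew-symmetric matrix such that skew(w) @ v = w x v."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(theta):
    """SO(3) exponential map via the Rodrigues formula."""
    t = np.linalg.norm(theta)
    if t < 1e-12:
        return np.eye(3) + skew(theta)
    K = skew(theta / t)
    return np.eye(3) + np.sin(t) * K + (1.0 - np.cos(t)) * K @ K

def Jr_so3(theta):
    """Right Jacobian of SO(3)."""
    t = np.linalg.norm(theta)
    if t < 1e-8:
        # small-angle approximation
        return np.eye(3) - 0.5 * skew(theta)
    a = theta / t
    return (np.sin(t) / t * np.eye(3)
            + (1.0 - np.sin(t) / t) * np.outer(a, a)
            - (1.0 - np.cos(t)) / t * skew(a))
```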

\begin{align*} ^G\mathbf{p}_{I_{k+1}} &= \text{}^G\mathbf{p}_{I_k} + \text{}^G\mathbf{v}_{I_k} \Delta t_k - \frac{1}{2}{}^G\mathbf{g}\Delta t_k^2 + \frac{1}{2}\text{}^{I_k}_G\mathbf{R}^\top \mathbf{a}_{k}\Delta t_k^2\\ ^G\hat{\mathbf{p}}_{I_{k+1}} + \text{}^G\tilde{\mathbf{p}}_{I_{k+1}} &\approx \text{}^G\hat{\mathbf{p}}_{I_k} + \text{}^G\tilde{\mathbf{p}}_{I_k} + \text{}^G\hat{\mathbf{v}}_{I_k} \Delta t_k + \text{}^G\tilde{\mathbf{v}}_{I_k} \Delta t_k - \frac{1}{2}{}^G\mathbf{g}\Delta t_k^2\\ &\hspace{4cm} + \frac{1}{2} \text{}^{I_k}_{G}\hat{\mathbf{R}}^\top (\mathbf{I}_3 + \lfloor ^{I_{k}}_{G}\tilde{\boldsymbol{\theta}}\times\rfloor) (\hat{\mathbf{a}}_{k} + \tilde{\mathbf{a}}_{k})\Delta t_k^2\\ \end{align*}
\begin{align*} ^G\mathbf{v}_{k+1} &= \text{}^G\mathbf{v}_{I_k} - {}^G\mathbf{g}\Delta t_k +\text{}^{I_k}_G\mathbf{R}^\top\mathbf{a}_{k}\Delta t_k\\ ^G\hat{\mathbf{v}}_{k+1} + ^G\tilde{\mathbf{v}}_{k+1} &\approx {}^G\hat{\mathbf{v}}_{I_k} + {}^G\tilde{\mathbf{v}}_{I_k} - {}^G\mathbf{g}\Delta t_k + \text{}^{I_k}_G\hat{\mathbf{R}}^\top (\mathbf{I}_3 + \lfloor ^{I_{k}}_{G}\tilde{\boldsymbol{\theta}}\times\rfloor) (\hat{\mathbf{a}}_{k} + \tilde{\mathbf{a}}_{k})\Delta t_k \end{align*}

where $\tilde{\mathbf{a}} = \mathbf{a} - \hat{\mathbf{a}} = - (\tilde{\mathbf{b}}_{{a}} + \mathbf{n}_{{a}})$ . By neglecting the second order error terms, we obtain the following position and velocity error propagation:

\begin{align*} \text{}^G\tilde{\mathbf{p}}_{I_{k+1}} &= \text{}^G\tilde{\mathbf{p}}_{I_k} + \text{}^G\tilde{\mathbf{v}}_{I_k} \Delta t_k - \frac{1}{2}\text{}^{I_k}_{G}\hat{\mathbf{R}}^\top \lfloor \hat{\mathbf{a}}_{k} \Delta t_k^2 \times\rfloor ^{I_{k}}_{G}\tilde{\boldsymbol{\theta}} - \frac{1}{2} \text{}^{I_k}_{G}\hat{\mathbf{R}}^\top \Delta t_k^2 (\tilde{\mathbf{b}}_{{a},k} + \mathbf{n}_{{a},k})\\ ^G\tilde{\mathbf{v}}_{k+1} &= \text{}^G\tilde{\mathbf{v}}_{I_k} - \text{}^{I_k}_G\hat{\mathbf{R}}^\top \lfloor \hat{\mathbf{a}}_{k} \Delta t_k \times\rfloor ^{I_{k}}_{G}\tilde{\boldsymbol{\theta}} - \text{}^{I_k}_G\hat{\mathbf{R}}^\top \Delta t_k (\tilde{\mathbf{b}}_{{a},k} + \mathbf{n}_{{a},k}) \end{align*}

The propagation of the two random-walk biases is as follows:

\begin{align*} \mathbf{b}_{{g},k+1} &= \mathbf{b}_{{g},k} + \mathbf{n}_{wg,k} \\ \hat{\mathbf{b}}_{{g},k+1} + \tilde{\mathbf{b}}_{{g},k+1} &= \hat{\mathbf{b}}_{{g},k} + \tilde{\mathbf{b}}_{{g},k} + \mathbf{n}_{wg,k} \\ \tilde{\mathbf{b}}_{{g},k+1} &= \tilde{\mathbf{b}}_{{g},k} + \mathbf{n}_{wg,k} \end{align*}
\begin{align*} \mathbf{b}_{{a},k+1} &= \mathbf{b}_{{a},k} + \mathbf{n}_{wa,k} \\ \hat{\mathbf{b}}_{{a},k+1} + \tilde{\mathbf{b}}_{{a},k+1} &= \hat{\mathbf{b}}_{{a},k} + \tilde{\mathbf{b}}_{{a},k} + \mathbf{n}_{wa,k} \\ \tilde{\mathbf{b}}_{{a},k+1} &= \tilde{\mathbf{b}}_{{a},k} + \mathbf{n}_{wa,k} \end{align*}

By collecting all the perturbation results, we can build $\boldsymbol{\Phi}(t_{k+1},t_k)$ and $\mathbf{G}_{k}$ matrices as:

\begin{align*} \boldsymbol{\Phi}(t_{k+1},t_k) &= \begin{bmatrix} \text{}^{I_{k+1}}_{I_{k}}\hat{\mathbf{R}} & \mathbf{0}_3 & \mathbf{0}_3 & - \text{}^{I_{k+1}}_{I_{k}}\hat{\mathbf{R}}\mathbf J_r(\text{}^{I_{k+1}}_{I_{k}}\hat{\boldsymbol{\theta}}) \Delta t_k & \mathbf{0}_3 \\ - \frac{1}{2}\text{}^{I_k}_{G}\hat{\mathbf{R}}^\top \lfloor \hat{\mathbf{a}}_{k} \Delta t_k^2 \times\rfloor & \mathbf{I}_3 & \Delta t_k \mathbf{I}_3 & \mathbf{0}_3 & - \frac{1}{2} \text{}^{I_k}_{G}\hat{\mathbf{R}}^\top \Delta t_k^2 \\ - \text{}^{I_k}_G\hat{\mathbf{R}}^\top \lfloor \hat{\mathbf{a}}_{k} \Delta t_k \times\rfloor & \mathbf{0}_3 & \mathbf{I}_3 & \mathbf{0}_3 & - \text{}^{I_k}_G\hat{\mathbf{R}}^\top \Delta t_k \\ \mathbf{0}_3 & \mathbf{0}_3 & \mathbf{0}_3 & \mathbf{I}_3 & \mathbf{0}_3 \\ \mathbf{0}_3 & \mathbf{0}_3 & \mathbf{0}_3 & \mathbf{0}_3 & \mathbf{I}_3 \end{bmatrix} \end{align*}
\begin{align*} \mathbf{G}_{k} &= \begin{bmatrix} - \text{}^{I_{k+1}}_{I_{k}}\hat{\mathbf{R}}\mathbf J_r(\text{}^{I_{k+1}}_{I_{k}}\hat{\boldsymbol{\theta}}) \Delta t_k & \mathbf{0}_3 & \mathbf{0}_3 & \mathbf{0}_3 \\ \mathbf{0}_3 & - \frac{1}{2} \text{}^{I_k}_{G}\hat{\mathbf{R}}^\top \Delta t_k^2 & \mathbf{0}_3 & \mathbf{0}_3 \\ \mathbf{0}_3 & - \text{}^{I_k}_G\hat{\mathbf{R}}^\top \Delta t_k & \mathbf{0}_3 & \mathbf{0}_3 \\ \mathbf{0}_3 & \mathbf{0}_3 & \mathbf{I}_3 & \mathbf{0}_3 \\ \mathbf{0}_3 & \mathbf{0}_3 & \mathbf{0}_3 & \mathbf{I}_3 \end{bmatrix} \end{align*}
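Assembling these block matrices in code is mechanical. The Python sketch below (illustrative, not the OpenVINS implementation) uses the error-state ordering $[\tilde{\boldsymbol{\theta}},\, \tilde{\mathbf{p}},\, \tilde{\mathbf{v}},\, \tilde{\mathbf{b}}_g,\, \tilde{\mathbf{b}}_a]$ and noise ordering $[\mathbf{n}_g,\, \mathbf{n}_a,\, \mathbf{n}_{wg},\, \mathbf{n}_{wa}]$ to build the $15\times15$ $\boldsymbol{\Phi}$ and $15\times12$ $\mathbf{G}$:

```python
import numpy as np

def skew(w):
    """3x3 skew-symmetric matrix such that skew(w) @ v = w x v."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def build_phi_g(R_kk1, Jr, R_GtoI, a_hat, dt):
    """Assemble Phi(t_{k+1}, t_k) and G_k from the error-state blocks.

    R_kk1  : {}^{I_{k+1}}_{I_k} R-hat (3x3)
    Jr     : right Jacobian evaluated at {}^{I_{k+1}}_{I_k} theta-hat (3x3)
    R_GtoI : {}^{I_k}_G R-hat (3x3)
    a_hat  : bias-corrected accelerometer measurement (3,)
    """
    I3, Z3 = np.eye(3), np.zeros((3, 3))
    RT = R_GtoI.T
    Phi = np.block([
        [R_kk1, Z3, Z3, -R_kk1 @ Jr * dt, Z3],
        [-0.5 * RT @ skew(a_hat) * dt**2, I3, dt * I3, Z3, -0.5 * RT * dt**2],
        [-RT @ skew(a_hat) * dt, Z3, I3, Z3, -RT * dt],
        [Z3, Z3, Z3, I3, Z3],
        [Z3, Z3, Z3, Z3, I3],
    ])
    G = np.block([
        [-R_kk1 @ Jr * dt, Z3, Z3, Z3],
        [Z3, -0.5 * RT * dt**2, Z3, Z3],
        [Z3, -RT * dt, Z3, Z3],
        [Z3, Z3, I3, Z3],
        [Z3, Z3, Z3, I3],
    ])
    return Phi, G
```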

Now, with the computed $\boldsymbol{\Phi}(t_{k+1},t_k)$ and $\mathbf{G}_{k}$ matrices, we can propagate the covariance from $t_k$ to $t_{k+1}$ :

\begin{align*} \mathbf{P}_{k+1|k} = \boldsymbol{\Phi}(t_{k+1},t_k)\mathbf{P}_{k|k}\boldsymbol{\Phi}(t_{k+1},t_k)^\top + \mathbf{G}_k\mathbf{Q}_d\mathbf{G}_k^\top \end{align*}
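The covariance update itself is a one-liner; the sketch below adds an explicit re-symmetrization step, which is a common numerical safeguard rather than something mandated by the equation:

```python
import numpy as np

def propagate_covariance(P, Phi, G, Qd):
    """EKF covariance propagation P_{k+1|k} = Phi P Phi^T + G Qd G^T.

    The result is re-symmetrized to guard against floating-point drift.
    """
    P_new = Phi @ P @ Phi.T + G @ Qd @ G.T
    return 0.5 * (P_new + P_new.T)
```

Since both terms are (semi-)definite quadratic forms, the propagated covariance stays symmetric positive definite whenever the prior is.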