# First-Estimate Jacobian Estimators

The observability and consistency of VINS are crucial since their analysis provides: (i) the minimal conditions required for initialization, (ii) insights into which states are unrecoverable, and (iii) the degenerate motions which can hurt the performance of the system. Naive EKF-based VINS estimators have been shown to be inconsistent due to spurious information gain along unobservable directions, which has motivated "observability aware" filters that explicitly address the inaccurate information gains causing filter over-confidence (i.e., the estimated uncertainty is smaller than the true uncertainty). Bar-Shalom et al. [2] make a convincing argument for why estimator consistency is crucial for accurate and robust estimation:

> Since the filter gain is based on the filter-calculated error covariances, it follows that consistency is necessary for filter optimality: wrong covariances yield wrong gains. This is why consistency evaluation is vital for verifying a filter design — it amounts to evaluation of estimator optimality.

In this section, we introduce the First-Estimate Jacobian (FEJ) [20] [21] methodology, which guarantees the observability properties of VINS and improves estimation consistency.

## EKF Linearized Error-State System

When developing an Extended Kalman Filter (EKF), one needs to linearize the nonlinear motion and measurement models about some linearization points. This linearization is one source of error causing inaccuracies in the estimates (in addition to, for example, model errors and measurement noise). To simplify the derivations that follow, we consider a state containing only the inertial navigation state and a single environmental feature (i.e., no biases).

We refer the reader to [21] [27] [18] [19] [6] for more detailed derivations of the full state variables. Let us consider the following simplified linearized error-state visual-inertial system with the IMU kinematic motion model and camera measurement update:
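In the notation of [20] [21], this system takes the standard form (sketched here for the simplified state $\mathbf{x} = ({}^{I}_{G}\bar{q},\ {}^{G}\mathbf{p}_{I},\ {}^{G}\mathbf{v}_{I},\ {}^{G}\mathbf{p}_{f})$; the exact symbols may differ slightly from the cited derivations):

$$
\tilde{\mathbf{x}}_{k} = \mathbf{\Phi}_{(k,k-1)}\,\tilde{\mathbf{x}}_{k-1} + \mathbf{G}_{k-1}\,\mathbf{w}_{k-1}
$$

$$
\tilde{\mathbf{z}}_{k} = \mathbf{H}_{k}\,\tilde{\mathbf{x}}_{k} + \mathbf{n}_{k}
$$

where $\tilde{\mathbf{x}}$ denotes the error state, $\mathbf{w}$ the IMU noise, and $\mathbf{n}$ the measurement noise.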

Note that we use the left quaternion error state (see [Indirect Kalman Filter for 3D Attitude Estimation] [40] and ov_type::JPLQuat for details). For simplicity, we assume that the camera and IMU frames have an identity transform between them. The state transition (or system Jacobian) matrix from timestep k-1 to k can be derived as (see [IMU Propagation Derivations] for more details):
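In the $(\bar{q}, \mathbf{p}, \mathbf{v}, \mathbf{p}_f)$ ordering above, the state transition matrix takes the standard VINS form (reconstructed following [20] [21]; $\lfloor\cdot\rfloor$ denotes the skew-symmetric matrix and $\Delta t = t_k - t_{k-1}$):

$$
\mathbf{\Phi}_{(k,k-1)} =
\begin{bmatrix}
{}^{I_k}_{I_{k-1}}\mathbf{R} & \mathbf{0}_{3\times3} & \mathbf{0}_{3\times3} & \mathbf{0}_{3\times3} \\
-{}^{I_{k-1}}_{G}\mathbf{R}^\top \lfloor \boldsymbol{\alpha}_{(k,k-1)} \rfloor & \mathbf{I}_{3\times3} & \Delta t\,\mathbf{I}_{3\times3} & \mathbf{0}_{3\times3} \\
-{}^{I_{k-1}}_{G}\mathbf{R}^\top \lfloor \boldsymbol{\beta}_{(k,k-1)} \rfloor & \mathbf{0}_{3\times3} & \mathbf{I}_{3\times3} & \mathbf{0}_{3\times3} \\
\mathbf{0}_{3\times3} & \mathbf{0}_{3\times3} & \mathbf{0}_{3\times3} & \mathbf{I}_{3\times3}
\end{bmatrix}
$$

$$
\boldsymbol{\alpha}_{(k,k-1)} = \int_{t_{k-1}}^{t_k}\!\!\int_{t_{k-1}}^{s} {}^{I_{k-1}}_{\tau}\mathbf{R}\;{}^{\tau}\mathbf{a}\;d\tau\,ds
\qquad
\boldsymbol{\beta}_{(k,k-1)} = \int_{t_{k-1}}^{t_k} {}^{I_{k-1}}_{\tau}\mathbf{R}\;{}^{\tau}\mathbf{a}\;d\tau
$$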

where ${}^{\tau}\mathbf{a}$ is the true acceleration at time $\tau$, ${}^{I_{k-1}}_{\tau}\mathbf{R}$ is computed using the gyroscope angular velocity measurements, and ${}^{G}\mathbf{g}$ is gravity in the global frame of reference. During propagation one would need to solve these integrals using either analytical or numerical integration.

We can compute the measurement Jacobian of a given feature based on the perspective projection camera model at the k-th timestep as follows (see [Camera Measurement Update] for more details):
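Under the left quaternion error convention above, one standard form of this Jacobian is (a reconstruction; $\hat{x},\hat{y},\hat{z}$ denote the components of the feature position in the current IMU/camera frame):

$$
\mathbf{H}_k = \mathbf{H}_{proj,k}
\begin{bmatrix}
\lfloor {}^{I_k}\mathbf{p}_f \rfloor & -{}^{I_k}_{G}\mathbf{R} & \mathbf{0}_{3\times3} & {}^{I_k}_{G}\mathbf{R}
\end{bmatrix}
$$

$$
\mathbf{H}_{proj,k} = \begin{bmatrix} \frac{1}{\hat{z}} & 0 & -\frac{\hat{x}}{\hat{z}^2} \\ 0 & \frac{1}{\hat{z}} & -\frac{\hat{y}}{\hat{z}^2} \end{bmatrix}
\qquad
{}^{I_k}\mathbf{p}_f = \begin{bmatrix} \hat{x} \\ \hat{y} \\ \hat{z} \end{bmatrix} = {}^{I_k}_{G}\mathbf{R}\left({}^{G}\mathbf{p}_f - {}^{G}\mathbf{p}_{I_k}\right)
$$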

## Linearized System Observability

System observability plays a crucial role in state estimation since it provides deep insight into the system's geometric properties and determines the minimal measurement modalities needed. With the state transition matrix, $\mathbf{\Phi}_{(k,0)}$, and the measurement Jacobian at timestep k, $\mathbf{H}_k$, the observability matrix of this linearized system is defined as:
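This stacks the measurement Jacobians, each propagated back to the initial time $t_0$ (a standard construction, written here in the notation assumed above):

$$
\mathbf{M} = \begin{bmatrix}
\mathbf{H}_0 \\
\mathbf{H}_1 \mathbf{\Phi}_{(1,0)} \\
\vdots \\
\mathbf{H}_k \mathbf{\Phi}_{(k,0)}
\end{bmatrix}
$$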

If $\mathbf{M}$ has full column rank then the system is fully observable. Its right nullspace $\mathbf{N}$ (i.e., $\mathbf{M}\mathbf{N} = \mathbf{0}$) describes the unobservable state subspace which cannot be recovered from the given measurements. Note that while we simplify here and only consider a single block row of the observability matrix when performing the observability analysis, we need to ensure that the nullspace holds for the entire matrix (i.e., each block row shares the same nullspace). This can be achieved by ensuring that the nullspace does not vary with time, nor contain any measurements. For the k-th block row of this matrix, we have:
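Carrying out the multiplication $\mathbf{H}_k \mathbf{\Phi}_{(k,0)}$ with the matrices above yields (a reconstruction; $\Delta t_k = t_k - t_0$, and the grouping into $\mathbf{\Gamma}_k$ is our notation):

$$
\mathbf{M}_k = \mathbf{H}_k \mathbf{\Phi}_{(k,0)} = \mathbf{H}_{proj,k}\,{}^{I_k}_{G}\mathbf{R}
\begin{bmatrix}
\mathbf{\Gamma}_k & -\mathbf{I}_{3\times3} & -\Delta t_k\,\mathbf{I}_{3\times3} & \mathbf{I}_{3\times3}
\end{bmatrix}
$$

$$
\mathbf{\Gamma}_k = \left\lfloor {}^{G}\mathbf{p}_f - {}^{G}\mathbf{p}_{I_0} - {}^{G}\mathbf{v}_{I_0}\Delta t_k + \tfrac{1}{2}{}^{G}\mathbf{g}\,\Delta t_k^2 \right\rfloor {}^{I_0}_{G}\mathbf{R}^\top
$$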

It is straightforward to verify the right nullspace spans four directions, i.e.,:
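Following the analysis of [18] [20], this nullspace can be written as (reconstructed, evaluated at the initial time $t_0$):

$$
\mathbf{N} = \begin{bmatrix}
{}^{I_0}_{G}\mathbf{R}\,{}^{G}\mathbf{g} & \mathbf{0}_{3\times3} \\
-\lfloor {}^{G}\mathbf{p}_{I_0} \rfloor\,{}^{G}\mathbf{g} & \mathbf{I}_{3\times3} \\
-\lfloor {}^{G}\mathbf{v}_{I_0} \rfloor\,{}^{G}\mathbf{g} & \mathbf{0}_{3\times3} \\
-\lfloor {}^{G}\mathbf{p}_{f} \rfloor\,{}^{G}\mathbf{g} & \mathbf{I}_{3\times3}
\end{bmatrix}
$$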

where $\mathbf{N}$ spans the 4dof corresponding to global rotation about the gravity direction (yaw) and global translation of our visual-inertial system.

## First Estimate Jacobians

It has been shown that standard EKF-based VINS, which always computes the state transition matrix and the measurement Jacobians using the current state estimates, has the global yaw orientation appear to be observable and thus an incorrectly reduced 3dof nullspace dimension [18]. This causes the filter to mistakenly gain extra information and become overconfident in the yaw.

To solve this issue, the First-Estimate Jacobian (FEJ) [20] [21] methodology can be applied. It evaluates the linearized system's state transition matrix and Jacobians at the same estimates (i.e., the first estimates) over all time periods, and thus ensures that the 4dof unobservable VINS subspace does not gain spurious information. The application of FEJ is simple yet effective; let us consider how to modify the propagation and update linearizations.
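The bookkeeping FEJ requires can be sketched as follows (a minimal, hypothetical sketch in the spirit of the ov_type state variables, not the actual OpenVINS API): each state variable keeps its current best estimate, which the filter keeps refining, and a frozen first estimate, which is used whenever a Jacobian is evaluated.

```cpp
#include <cassert>
#include <cmath>

// Minimal FEJ bookkeeping sketch (hypothetical type, not the actual
// ov_type API): a state variable stores both its current best estimate,
// which the EKF keeps refining, and its first estimate, which is frozen
// at initialization and used whenever a Jacobian is evaluated.
struct FejScalar {
  double value;      // current best estimate (moved by every EKF update)
  double value_fej;  // first estimate (never changes after initialization)

  explicit FejScalar(double init) : value(init), value_fej(init) {}

  // An EKF update only moves the estimate; the linearization point stays.
  void update(double delta) { value += delta; }
};

// Toy nonlinear measurement z = x^2 with Jacobian dz/dx = 2x.
// FEJ evaluates the Jacobian at the frozen first estimate so that all
// block rows of the observability matrix share one linearization point.
double jacobian_fej(const FejScalar& x) { return 2.0 * x.value_fej; }

// A standard EKF would instead re-linearize at the current estimate,
// which is what shifts the nullspace and leaks spurious information.
double jacobian_std(const FejScalar& x) { return 2.0 * x.value; }
```

In an OpenVINS-style implementation the first estimate is recorded once, when the variable is initialized (for the IMU state, its propagated prior; for a feature, its first triangulation), and never touched by subsequent updates.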

### Propagation

Let us first consider a small thought experiment of how the standard Kalman filter computes its state transition matrix. From timestep zero to one it will use the current estimates from state zero forward in time. At the next timestep, after it updates the state with measurements from other sensors, it will compute the state transition with the updated values to evolve the state to timestep two. This causes a mismatch in the "continuity" of the state transition matrix, which, when multiplied sequentially, should represent the evolution from time zero to time two.

We wish to compute the state transition matrix from the k-1 timestep (given all k-1 measurements) to the current propagated timestep k+1 (given all k measurements), i.e. $\mathbf{\Phi}(\hat{\mathbf{x}}_{k+1|k},\,\hat{\mathbf{x}}_{k-1|k-1})$. In a Kalman filter framework one would normally perform this as follows: $\mathbf{\Phi}(\hat{\mathbf{x}}_{k|k-1},\,\hat{\mathbf{x}}_{k-1|k-1})$ corresponds to propagating from the k-1 update time to the k timestep. One would then perform the k'th update to the state and then propagate from this updated state to the newest timestep (i.e. the $\mathbf{\Phi}(\hat{\mathbf{x}}_{k+1|k},\,\hat{\mathbf{x}}_{k|k})$ state transition matrix). This composition is clearly different than computing the state transition from time k-1 to the k+1 timestep directly, as the second state transition is evaluated at a different linearization point! To fix this, we can change the linearization point we evaluate these at:
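Concretely, FEJ replaces the updated linearization point $\hat{\mathbf{x}}_{k|k}$ with the prior one, so that sequential state transition matrices compose correctly (a reconstruction of the standard FEJ composition; $\hat{\mathbf{x}}_{k|k-1}$ denotes the estimate at timestep k given measurements up to k-1):

$$
\mathbf{\Phi}(\hat{\mathbf{x}}_{k+1|k},\,\hat{\mathbf{x}}_{k|k-1})\;
\mathbf{\Phi}(\hat{\mathbf{x}}_{k|k-1},\,\hat{\mathbf{x}}_{k-1|k-1})
= \mathbf{\Phi}(\hat{\mathbf{x}}_{k+1|k},\,\hat{\mathbf{x}}_{k-1|k-1})
$$

That is, the propagation from k to k+1 is also linearized at $\hat{\mathbf{x}}_{k|k-1}$, not at the updated $\hat{\mathbf{x}}_{k|k}$.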

### Update

We also need to ensure that our measurement Jacobians match the linearization points of the state transition matrix. Let us consider the simple case where, at timestep k, linearizing about the current state estimates gives the following observability matrix:
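With standard linearization points, each block would be evaluated at the estimates available at its own time (sketched here in the notation assumed above):

$$
\mathbf{M} = \begin{bmatrix}
\mathbf{H}_0(\hat{\mathbf{x}}_{0|0}) \\
\mathbf{H}_1(\hat{\mathbf{x}}_{1|1})\,\mathbf{\Phi}(\hat{\mathbf{x}}_{1|0},\,\hat{\mathbf{x}}_{0|0}) \\
\vdots \\
\mathbf{H}_k(\hat{\mathbf{x}}_{k|k})\,\mathbf{\Phi}(\hat{\mathbf{x}}_{k|k-1},\,\hat{\mathbf{x}}_{k-1|k-1})\cdots\mathbf{\Phi}(\hat{\mathbf{x}}_{1|0},\,\hat{\mathbf{x}}_{0|0})
\end{bmatrix}
$$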

It is easy to verify that the nullspace $\mathbf{N}$ derived in the previous section is not valid for this entire matrix (i.e., $\mathbf{M}\mathbf{N} \neq \mathbf{0}$). For example, the feature estimates appearing in each block row of $\mathbf{M}$ are different, thus the nullspace does not hold. Specifically, the first column is invalid, which causes the dimension of the unobservable subspace to become 3 (i.e., yaw appears observable). This will cause the filter to gain extra information along the yaw unobservable direction and hurt the estimation performance. Specifically, this leads to larger errors, erroneously smaller uncertainties, and inconsistency (see [18] [27] for detailed proofs and discussion).

To ensure $\mathbf{M}\mathbf{N} = \mathbf{0}$, one can fix the linearization point over all time. A natural choice of state linearization point is to use the first state estimate. Note that for the IMU state, its first estimate will be the propagated value $\hat{\mathbf{x}}_{k|k-1}$, which also matches the linearization point used in the state transition matrix. For the features, the initial triangulated value (i.e., at the time we initialize the feature into the state vector) is the first estimate. As such, we can write the linearized measurement update function as:
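In symbols, the FEJ-linearized measurement model can be sketched as (notation assumed consistent with the above):

$$
\mathbf{z}_k \simeq \mathbf{h}\left(\hat{\mathbf{x}}_{k|k-1}\right) + \mathbf{H}_k\left(\hat{\mathbf{x}}_{k|k-1}\right)\tilde{\mathbf{x}}_{k} + \mathbf{n}_k
$$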

where $\mathbf{h}(\cdot)$ is the nonlinear measurement function and $\mathbf{H}_k(\hat{\mathbf{x}}_{k|k-1})$ is the First-Estimate Jacobian, which is evaluated only at the first state estimates. It is not difficult to confirm that the observability matrix constructed with the FEJ state transition matrices and measurement Jacobians yields the correct 4dof unobservable nullspace. As such, the system observability properties are preserved.

While not utilizing the current (best) state estimates for linearization does introduce linearization errors, it has been shown that the harm caused by inconsistency far outweighs this cost. Follow-up works which have also tried to address this problem include FEJ2 [6] and OC [21] [18] [19]. Both try to ensure correct observability properties while compensating for, or leveraging, the best estimates.