An Introduction to the Error-State Kalman Filter


The error state is the difference between the true state $x_t$ and the nominal state $x$. The nominal state does not take noise into account; it is simply derived from the motion equations. As a result, errors accumulate, and these are collected in the error state $\delta x$, which the ESKF estimates. In parallel with the integration of the nominal state, the ESKF predicts a Gaussian estimate of the error state and injects the mean of the error state back into the nominal state.
When IMU measurement data arrive, we integrate them into the nominal state variable. Since this approach ignores noise, the result naturally drifts quickly, so we place the error in the ESKF as an error variable. The ESKF accounts for the effects of the various noises and biases, and gives a Gaussian description of the error state. At the same time the ESKF, being a Kalman filter, has a prediction step and a correction step, and the correction step relies on sensor observations other than the IMU. After the correction, the ESKF gives the posterior Gaussian distribution of the error; we then inject this error into the nominal state variable and reset the ESKF to zero, completing one loop.
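To make this loop concrete before diving into the equations, here is a minimal, self-contained toy example in Python, reduced to one dimension (position, velocity and accelerometer bias) with a slow position sensor correcting a fast, biased accelerometer. All names and noise values are made up for illustration; the full 3D equations are derived in the sections below.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, sigma_a, sigma_z = 0.01, 0.1, 0.05        # step, accel noise, position noise

x = np.array([0.0, 0.0, 0.0])                 # nominal state [p, v, a_b]
P = np.diag([0.01, 0.01, 0.1])                # covariance of the error state

F = np.array([[1, dt, 0],                     # error-state transition Jacobian
              [0, 1, -dt],
              [0, 0, 1]])
Q = np.diag([0.0, (sigma_a * dt) ** 2, 1e-8]) # process noise (velocity + bias walk)
H = np.array([[1.0, 0.0, 0.0]])               # position sensor: z = p_t + noise
V = np.array([[sigma_z ** 2]])

true_p, true_v, true_ab = 0.0, 0.0, 0.2       # simulated ground truth with a bias

for k in range(2000):
    a_true = np.sin(0.01 * k)
    a_m = a_true + true_ab + rng.normal(0.0, sigma_a)   # biased, noisy IMU reading
    true_p += true_v * dt
    true_v += a_true * dt

    # Predict: integrate the IMU into the nominal state, propagate error covariance.
    x[0] += x[1] * dt
    x[1] += (a_m - x[2]) * dt
    P = F @ P @ F.T + Q

    if k % 10 == 0:                           # a slower position sensor arrives
        z = true_p + rng.normal(0.0, sigma_z)
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + V)    # Kalman gain
        dx = K @ np.array([z - x[0]])         # posterior mean of the error state
        P = (np.eye(3) - K @ H) @ P
        x += dx          # inject; the error mean is then implicitly reset to zero

print("estimated accelerometer bias:", x[2], "(true value:", true_ab, ")")
```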

Why ESKF?

In most modern IMU systems, people often use the error-state Kalman filter (ESKF) instead of a Kalman filter on the original state. Since the error state is a small signal, the ESKF has several advantages:
1. For the rotation part, the state of the ESKF can be expressed with a three-dimensional variable, the minimal parametrization of the rotation increment. A traditional KF instead needs a quaternion (4-dimensional), a higher-dimensional representation (rotation matrix, 9-dimensional), or a singular parametrization (Euler angles).
2. The ESKF always operates near the origin, far from any singularity, so the linearization approximation around the operating point works well.
3. The error state is always small, so all second-order products are negligible. This makes the computation of the Jacobians very easy and fast.
4. The error-state dynamics are also slower than those of the original state variable, because the large-signal dynamics is integrated into the nominal state.


Equations of State

It is helpful to see all the variables in one table; the variable names are consistent with Joan Solà's paper referenced at the end of this post.

### Variable List

|Magnitude |True |Nominal |Error |Composition |Measured |Noise |
| -------- | -------- | -------- | -------- | --------- |-------- |-------- |
|Full state | $x_t$ | $x$ | $\delta x$ | $x_t = x \oplus \delta x$ | | |
|Position | $p_t$ | $p$ | $\delta p$ | $p_t = p + \delta p$ | | |
|Velocity | $v_t$ | $v$ | $\delta v$ | $v_t = v + \delta v$ | | |
|Rotation matrix | $R_t$ | $R$ | $\delta R$ | $R_t = R \, \delta R$ | | |
|Angles vector | | | $\delta \theta$ | $\delta R = \mathrm{Exp}(\delta \theta)$ | | |
|Accelerometer bias | $a_{bt}$ | $a_b$ | $\delta a_b$ | $a_{bt} = a_b + \delta a_b$ | | $a_\omega$ |
|Gyrometer bias | $\omega_{bt}$ | $\omega_b$ | $\delta \omega_b$ | $\omega_{bt} = \omega_b + \delta \omega_b$ | | $\omega_\omega$ |
|Gravity vector | $g_t$ | $g$ | $\delta g$ | $g_t = g + \delta g$ | | |
|Acceleration | $a_t$ | | | | $a_m$ | $a_n$ |
|Angular rate | $\omega_t$ | | | | $\omega_m$ | $\omega_n$ |

The true state in the ESKF is $x_t = [p_t, v_t, R_t, a_{bt}, \omega_{bt}, g_t]^T$; it changes over time and the subscript $t$ denotes the true state. We record the IMU readings as $a_m, \omega_m$, which are perturbed by the white Gaussian noises $a_n, \omega_n$, while $a_\omega$ and $\omega_\omega$ are the noises driving the IMU biases. Now we can write the derivatives of the true-state variables in terms of the observed measurements (the angular velocity is expressed in the local frame, the common case for an IMU):

$$
\dot{p}_t = v_t \\
\dot{v}_t = R_t(a_m - a_{bt} - a_n) + g_t \\
\dot{R}_t = R_t[\omega_m - \omega_{bt} - \omega_n]^\wedge \\
\dot{a}_{bt} = a_\omega \\
\dot{\omega}_{bt} = \omega_\omega \\
\dot{g}_t = 0
$$

The nominal-state $x$ kinematics correspond to the modeled system without noise or perturbations:

$$
\dot{p} = v \\
\dot{v} = R(a_m - a_b) + g \\
\dot{R} = R[\omega_m - \omega_b]^\wedge \\
\dot{a}_b = 0 \\
\dot{\omega}_b = 0 \\
\dot{g} = 0
$$

Then we have the error-state $\delta x$ kinematics:

$$
\dot{\delta p} = \delta v \\
\dot{\delta v} = -R[a_m - a_b]^\wedge \delta \theta - R\,\delta a_b + \delta g - R a_n \\
\dot{\delta \theta} = -[\omega_m - \omega_b]^\wedge \delta \theta - \delta \omega_b - \omega_n \\
\dot{\delta a}_b = a_\omega \\
\dot{\delta \omega}_b = \omega_\omega \\
\dot{\delta g} = 0
$$

The discrete form of the error-state kinematics is

$$
\delta p \leftarrow \delta p + \delta v \, \Delta t \\
\delta v \leftarrow \delta v + \left(-R[a_m - a_b]^\wedge \delta \theta - R\,\delta a_b + \delta g\right)\Delta t + v_i \\
\delta \theta \leftarrow \mathrm{Exp}\left(-(\omega_m - \omega_b)\Delta t\right)\delta \theta - \delta \omega_b \, \Delta t + \theta_i \\
\delta a_b \leftarrow \delta a_b + a_i \\
\delta \omega_b \leftarrow \delta \omega_b + \omega_i \\
\delta g \leftarrow \delta g
$$

where $[\,\cdot\,]^\wedge$ maps a vector to its skew-symmetric matrix, $\mathrm{Exp}\left(-(\omega_m - \omega_b)\Delta t\right)$ is the rotation matrix obtained from the incremental rotation vector through the exponential map, and $v_i, \theta_i, a_i, \omega_i$ are the random impulses applied to the velocity, orientation and bias estimates, modeled by white Gaussian processes. Their mean is zero, and their covariance matrices are obtained by integrating the covariances of $a_n, \omega_n, a_\omega, \omega_\omega$ over the time step $\Delta t$.
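As a sketch of how the nominal-state integration might look in code (the function names and the simple first-order Euler discretization are my own choices, not prescribed by the paper), assuming the same variables as in the table above:

```python
import numpy as np

def skew(v):
    """Map a 3-vector to its skew-symmetric matrix [v]^."""
    return np.array([[0., -v[2], v[1]],
                     [v[2], 0., -v[0]],
                     [-v[1], v[0], 0.]])

def Exp(theta):
    """SO(3) exponential map (Rodrigues' formula) of a rotation vector theta."""
    angle = np.linalg.norm(theta)
    if angle < 1e-8:                          # small-angle approximation
        return np.eye(3) + skew(theta)
    axis = theta / angle
    K = skew(axis)
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * K @ K

def integrate_nominal(p, v, R, a_b, w_b, g, a_m, w_m, dt):
    """One first-order Euler step of the noise-free nominal kinematics."""
    acc = R @ (a_m - a_b) + g                 # dot(v) = R(a_m - a_b) + g
    p_new = p + v * dt                        # dot(p) = v
    v_new = v + acc * dt
    R_new = R @ Exp((w_m - w_b) * dt)         # dot(R) = R [w_m - w_b]^
    return p_new, v_new, R_new, a_b, w_b, g   # biases and gravity stay constant
```

During this prediction step the error-state mean stays at zero; only its covariance needs to be propagated, which is covered in the next section.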


Prediction and update equations

Now we have the motion equation in the discrete time domain,

$$
\delta x = f(\delta x) + \omega, \quad \omega \sim N(0, Q)
$$

where $\omega$ is the noise, composed of the impulses $v_i, \theta_i, a_i, \omega_i$ mentioned above, so the $Q$ matrix is

$$
Q = \mathrm{diag}\left(0_3,\ \mathrm{cov}(v_i),\ \mathrm{cov}(\theta_i),\ \mathrm{cov}(a_i),\ \mathrm{cov}(\omega_i),\ 0_3\right)
$$

The prediction equations are

$$
\delta x_{pred} = F \delta x \\
P_{pred} = F P F^T + Q
$$

where $F$ is the Jacobian of the error-state function $f$:

$$
F = \begin{bmatrix}
I & I \Delta t & 0 & 0 & 0 & 0 \\
0 & I & -R[a_m - a_b]^\wedge \Delta t & -R \Delta t & 0 & I \Delta t \\
0 & 0 & \mathrm{Exp}\left(-(\omega_m - \omega_b)\Delta t\right) & 0 & -I \Delta t & 0 \\
0 & 0 & 0 & I & 0 & 0 \\
0 & 0 & 0 & 0 & I & 0 \\
0 & 0 & 0 & 0 & 0 & I \\
\end{bmatrix}
$$

Suppose an abstract sensor can produce observations of the state variables, with the observation equation

$$
z = h(x_t) + v, \quad v \sim N(0, V)
$$

where $h$ is a general nonlinear function of the system state (the true state) and $v$ is white Gaussian measurement noise with covariance $V$. The update equations are

$$
K = P_{pred} H^T (H P_{pred} H^T + V)^{-1} \\
\delta x = K (z - h(x_t)) \\
P = (I - K H) P_{pred}
$$

where $K$ is the Kalman gain, $P_{pred}$ is the predicted covariance matrix, $P$ is the covariance matrix after the update, and $H$ is the Jacobian of the measurement equation with respect to the error state. By the chain rule,

$$
H = \frac{\partial h}{\partial \delta x} = \frac{\partial h}{\partial x_t} \frac{\partial x_t}{\partial \delta x}
$$

The first factor, $\frac{\partial h}{\partial x_t}$, is easily obtained by linearizing the measurement equation. The second factor, $\frac{\partial x_t}{\partial \delta x}$, is the Jacobian of the true state with respect to the error state; it is block-diagonal and composed of $3 \times 3$ identity matrices (for example, $\frac{\partial (p + \delta p)}{\partial \delta p} = I_3$), except for the rotation block: in quaternion form it is $\frac{\partial (q \otimes \delta q)}{\partial \delta \theta}$, and here, with rotation matrices in $SO(3)$, it is $\frac{\partial \mathrm{Log}(R\,\mathrm{Exp}(\delta \theta))}{\partial \delta \theta}$, where $\mathrm{Exp}(\delta \theta)$ is the rotation matrix of $\delta R$ written as the exponential of the rotation vector $\delta \theta$; this block can be obtained with the Baker–Campbell–Hausdorff (BCH) formula. Finally, we update the nominal state and reset the error state:

$$
x_{k+1} = x_k \oplus \delta x_k \\
\delta x_k = 0
$$

where $\oplus$ is the composition operation defined by

$$
p_{k+1} = p_k + \delta p_k \\
v_{k+1} = v_k + \delta v_k \\
R_{k+1} = R_k \mathrm{Exp}(\delta \theta_k) \\
a_{b,k+1} = a_{b,k} + \delta a_{b,k} \\
\omega_{b,k+1} = \omega_{b,k} + \delta \omega_{b,k} \\
g_{k+1} = g_k + \delta g_k
$$
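Below is an illustrative Python sketch of this prediction/update cycle for the 18-dimensional error state $[\delta p, \delta v, \delta \theta, \delta a_b, \delta \omega_b, \delta g]$, using an assumed position-only sensor so that $H = [I_3\ 0\ \cdots\ 0]$. The function names and the use of SciPy's `Rotation.from_rotvec` for the exponential map are my own choices, not part of the paper:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def skew(v):
    """Map a 3-vector to its skew-symmetric matrix [v]^."""
    return np.array([[0., -v[2], v[1]],
                     [v[2], 0., -v[0]],
                     [-v[1], v[0], 0.]])

def error_state_F(R, a_m, a_b, w_m, w_b, dt):
    """Jacobian F of the discrete error-state transition (the matrix above)."""
    I = np.eye(3)
    F = np.eye(18)
    F[0:3, 3:6] = I * dt                                     # d(dp)/d(dv)
    F[3:6, 6:9] = -R @ skew(a_m - a_b) * dt                  # d(dv)/d(dtheta)
    F[3:6, 9:12] = -R * dt                                   # d(dv)/d(da_b)
    F[3:6, 15:18] = I * dt                                   # d(dv)/d(dg)
    F[6:9, 6:9] = Rotation.from_rotvec(-(w_m - w_b) * dt).as_matrix()
    F[6:9, 12:15] = -I * dt                                  # d(dtheta)/d(dw_b)
    return F

def predict(P, R, a_m, a_b, w_m, w_b, dt, Q):
    """Covariance prediction; the error-state mean is zero after every reset."""
    F = error_state_F(R, a_m, a_b, w_m, w_b, dt)
    return F @ P @ F.T + Q

def update_with_position(nominal, P, z, V):
    """Correction with an assumed position sensor z = p_t + v, v ~ N(0, V).
    For h(x_t) = p_t the Jacobian H reduces to [I_3, 0, ..., 0]."""
    p, v, R, a_b, w_b, g = nominal
    H = np.zeros((3, 18))
    H[:, 0:3] = np.eye(3)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + V)             # Kalman gain
    dx = K @ (z - p)                                         # posterior error-state mean
    P = (np.eye(18) - K @ H) @ P

    # Inject the estimated error into the nominal state, then reset it to zero.
    p = p + dx[0:3]
    v = v + dx[3:6]
    R = R @ Rotation.from_rotvec(dx[6:9]).as_matrix()
    a_b = a_b + dx[9:12]
    w_b = w_b + dx[12:15]
    g = g + dx[15:18]
    return (p, v, R, a_b, w_b, g), P
```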


For more derivations and details about the quaternion representation of the rotation part, please read Joan Solà's paper, Quaternion kinematics for the error-state Kalman filter, which is also the main reference of this blog post.