[math] Calculating gain matrix in KalmanFilter

[math] Calculating gain matrix in KalmanFilter

Arne Schwarz
Hi,

I saw that, to calculate the gain matrix, the actual inverse of the
residual covariance matrix is computed. Wouldn't it be faster to use, for
example, a Cholesky decomposition to solve the linear system, since a
covariance matrix is always symmetric and at least positive semi-definite?
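
For reference, the matrix in question is the innovation (residual)
covariance S from the code comments quoted later in the thread. A short
sketch of why the Cholesky argument applies, assuming the a-priori error
covariance P(k)- and the measurement noise covariance R are themselves
symmetric and positive semi-definite:

    S = H P^{-} H^{\top} + R ,
    \qquad
    K = P^{-} H^{\top} S^{-1} ,
    \qquad
    x^{\top} S x = (H^{\top} x)^{\top} P^{-} (H^{\top} x) + x^{\top} R x \ge 0
    \quad \text{for every } x .

Symmetry of S follows from the symmetry of P(k)- and R, and S is positive
definite (so a Cholesky factor exists in exact arithmetic) whenever R is
positive definite.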

Arne Schwarz

P.S. Sorry in case this is the third mail with the same content; I
accidentally sent the first two HTML-formatted.


Re: [math] Calculating gain matrix in KalmanFilter

Gilles Sadowski

Reading the code (in class "MatrixUtils"), it looks like QR decomposition
is used; any problem with that choice?

Regards,
Gilles



Re: [math] Calculating gain matrix in KalmanFilter

Arne Schwarz
What I meant was these lines in class "KalmanFilter":

363        // invert S
364        RealMatrix invertedS = MatrixUtils.inverse(s);
365
366        // Inn = z(k) - H * xHat(k)-
367        RealVector innovation =
                   z.subtract(measurementMatrix.operate(stateEstimation));
368
369        // calculate gain matrix
370        // K(k) = P(k)- * H' * (H * P(k)- * H' + R)^-1
371        // K(k) = P(k)- * H' * S^-1
372        RealMatrix kalmanGain =
                   errorCovariance.multiply(measurementMatrixT).multiply(invertedS);

I thought it would be better to directly solve the system and not
calculate the inverse separately, but I may be wrong.
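
Purely as an illustration of what "solve directly" could look like with the
QR decomposition that MatrixUtils.inverse apparently already uses, keeping
the variable names from the snippet above (a sketch only, not tested against
the existing unit tests):

// Hypothetical replacement for lines 364/372: solve for the gain instead of
// forming S^-1.  DecompositionSolver solves A * X = B, so the gain equation
// K * S = P(k)- * H' is first transposed to S' * K' = H * (P(k)-)'.
// (QRDecomposition and RealMatrix are in org.apache.commons.math3.linear.)
RealMatrix kalmanGainT = new QRDecomposition(s.transpose())
        .getSolver()
        .solve(measurementMatrix.multiply(errorCovariance.transpose()));
RealMatrix kalmanGain = kalmanGainT.transpose();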


Re: [math] Calculating gain matrix in KalmanFilter

Ted Dunning
Arne,

I think you are correct.

Re: [math] Calculating gain matrix in KalmanFilter

Thomas Neidhart
AFAIK, it is possible for unscented Kalman filters to avoid the explicit
matrix inversion (see also
http://en.wikipedia.org/wiki/Cholesky_decomposition#Kalman_filters).

We have an open issue, deferred to 4.0, to refactor the filter package and
to add various forms of Kalman filters (e.g. extended, non-linear,
unscented).

The performance issue should be analyzed within this context.

@Arne: do you have a use-case that demonstrates bad performance which could
be improved by the use of a Cholesky decomposition? That would be great and
would help us a lot.
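
One way such a use-case could be set up, as a rough sketch only (the
dimensions, the random SPD construction, the iteration count and the class
name GainBenchmark are all made up for illustration; a real measurement
would need a proper benchmark harness with JIT warm-up):

import java.util.Random;
import org.apache.commons.math3.linear.Array2DRowRealMatrix;
import org.apache.commons.math3.linear.CholeskyDecomposition;
import org.apache.commons.math3.linear.MatrixUtils;
import org.apache.commons.math3.linear.RealMatrix;

/** Hypothetical micro-benchmark: explicit inverse vs. direct solve for the gain. */
public class GainBenchmark {

    public static void main(String[] args) {
        final int n = 9;            // state dimension (made up)
        final int m = 6;            // measurement dimension (made up)
        final int iterations = 100000;
        final Random rng = new Random(42);

        RealMatrix h = random(m, n, rng);                             // H
        RealMatrix p = spd(n, rng);                                   // P(k)-
        RealMatrix r = spd(m, rng);                                   // R
        RealMatrix s = h.multiply(p).multiply(h.transpose()).add(r);  // S
        // Force exact symmetry so the Cholesky symmetry check cannot trip
        // on rounding noise (see the symmetry discussion in this thread).
        s = s.add(s.transpose()).scalarMultiply(0.5);

        long t0 = System.nanoTime();
        RealMatrix k1 = null;
        for (int i = 0; i < iterations; i++) {
            // current approach: K = P(k)- * H' * S^-1
            k1 = p.multiply(h.transpose()).multiply(MatrixUtils.inverse(s));
        }
        long t1 = System.nanoTime();
        RealMatrix k2 = null;
        for (int i = 0; i < iterations; i++) {
            // proposed approach: solve S * K' = H * (P(k)-)' and transpose back
            k2 = new CholeskyDecomposition(s).getSolver()
                    .solve(h.multiply(p.transpose())).transpose();
        }
        long t2 = System.nanoTime();

        System.out.printf("inverse: %d ms, solve: %d ms, difference norm: %g%n",
                (t1 - t0) / 1000000, (t2 - t1) / 1000000,
                k1.subtract(k2).getNorm());
    }

    /** Random dense matrix with entries in [-1, 1). */
    private static RealMatrix random(int rows, int cols, Random rng) {
        double[][] data = new double[rows][cols];
        for (int i = 0; i < rows; i++) {
            for (int j = 0; j < cols; j++) {
                data[i][j] = 2 * rng.nextDouble() - 1;
            }
        }
        return new Array2DRowRealMatrix(data, false);
    }

    /** Random symmetric positive-definite matrix of the form A * A' + I. */
    private static RealMatrix spd(int size, Random rng) {
        RealMatrix a = random(size, size, rng);
        return a.multiply(a.transpose())
                .add(MatrixUtils.createRealIdentityMatrix(size));
    }
}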

Thanks,

Thomas

>
> On Mon, Aug 4, 2014 at 7:34 AM, Arne Schwarz <[hidden email]> wrote:
>
>> 2014-08-04 13:43 GMT+02:00 Gilles <[hidden email]>:
>>> On Sun, 3 Aug 2014 18:18:24 +0200, Arne Schwarz wrote:
>>>>
>>>> Hi,
>>>>
>>>> I saw that to calculate the gain matrix the actual inverse of the
>>>> residual covariance matrix is calculated. Wouldn't it be faster to use
>>>> for example a Cholesky decomposition to solve the linear system? Since
>>>> a covariance matrix is always symmetric and at least positive
>>>> semi-definite.
>>>
>>>
>>> Reading the code (in class "MatrixUtils"), it looks like QR decomposition
>>> is used; any problem with that choice?
>>>
>>> Regards,
>>> Gilles
>>>
>>>
>>>
>>> ---------------------------------------------------------------------
>>> To unsubscribe, e-mail: [hidden email]
>>> For additional commands, e-mail: [hidden email]
>>>
>>
>> What I meant was these lines in class "KalmanFilter":
>>
>> 363        // invert S
>> 364        RealMatrix invertedS = MatrixUtils.inverse(s);
>> 365
>> 366        // Inn = z(k) - H * xHat(k)-
>> 367        RealVector innovation =
>> z.subtract(measurementMatrix.operate(stateEstimation));
>> 368
>> 369        // calculate gain matrix
>> 370        // K(k) = P(k)- * H' * (H * P(k)- * H' + R)^-1
>> 371        // K(k) = P(k)- * H' * S^-1
>> 372        RealMatrix kalmanGain =
>> errorCovariance.multiply(measurementMatrixT).multiply(invertedS);
>>
>> I thought it would be better to directly solve the system an not
>> calculate the inverse separately, but I may be wrong.
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: [hidden email]
>> For additional commands, e-mail: [hidden email]
>>
>>
>


---------------------------------------------------------------------
To unsubscribe, e-mail: [hidden email]
For additional commands, e-mail: [hidden email]

Reply | Threaded
Open this post in threaded view
|

Re: [math] Calculating gain matrix in KalmanFilter

Arne Schwarz
No, but I am developing an application using an extended Kalman filter, in
which I will test various decompositions. I just stumbled over these lines
because I read somewhere that computing the inverse explicitly is something
one should avoid, and I suggested the Cholesky decomposition because it
should be fast on these special matrices. But when I wanted to open an
issue on this subject, I found an old, resolved one in which the Cholesky
decomposition was said to be unsuitable precisely because real-world
covariance matrices may not be perfectly symmetric. It may still be better
to use the QR decomposition directly, without going through the inverse.

This is the code I wanted to suggest in the issue; one may want to replace
"Cholesky" with "QR":
// calculate gain matrix
// K(k) = P(k)- * H' * (H * P(k)- * H' + R)^-1
// K(k) = P(k)- * H' * S^-1
// K(k) * S = P(k)- * H'
// S' * K(k)' = H * P(k)-'
RealMatrix kalmanGain = new CholeskyDecomposition(s).getSolver()
        .solve(measurementMatrix.multiply(errorCovariance.transpose()))
        .transpose();
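
(For reference: the DecompositionSolver interface solves systems of the form
A * X = B with the unknown on the right, which is why the gain equation
K(k) * S = P(k)- * H' is transposed to S' * K(k)' = H * P(k)-' before
solving, and the solution transposed back; S^-1 is never formed explicitly.)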


Re: [math] Calculating gain matrix in KalmanFilter

Thomas Neidhart
On 08/05/2014 06:40 PM, Arne Schwarz wrote:

> No, but I am developing an application using an extended Kalman filter,
> in which I will test various decompositions. I just stumbled over these
> lines because I read somewhere that computing the inverse explicitly is
> something one should avoid, and I suggested the Cholesky decomposition
> because it should be fast on these special matrices. But when I wanted to
> open an issue on this subject, I found an old, resolved one in which the
> Cholesky decomposition was said to be unsuitable precisely because
> real-world covariance matrices may not be perfectly symmetric. It may
> still be better to use the QR decomposition directly, without going
> through the inverse.

Yes: due to floating-point arithmetic the resulting matrices may not be
perfectly symmetric, and this can lead to situations where the Cholesky
decomposition cannot compute an inverse (or a solution) at all.

This could be avoided by providing a user-defined epsilon value, but for
now we decided to use the QR decomposition to compute the inverse matrix,
as it is more robust.
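
For what it's worth, a sketch of what the "user-defined epsilon" variant
could look like, using the threshold constructor of CholeskyDecomposition
(the 1.0e-10 value is only a placeholder, not a recommendation):

// Relax only the relative symmetry check; keep the default positivity
// threshold.  Classes and constants are in org.apache.commons.math3.linear.
final double relativeSymmetryThreshold = 1.0e-10;  // placeholder value
RealMatrix kalmanGain = new CholeskyDecomposition(s,
                relativeSymmetryThreshold,
                CholeskyDecomposition.DEFAULT_ABSOLUTE_POSITIVITY_THRESHOLD)
        .getSolver()
        .solve(measurementMatrix.multiply(errorCovariance.transpose()))
        .transpose();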

> This is the code I wanted to suggest in the issue; one may want to
> replace "Cholesky" with "QR":
> // calculate gain matrix
> // K(k) = P(k)- * H' * (H * P(k)- * H' + R)^-1
> // K(k) = P(k)- * H' * S^-1
> // K(k) * S = P(k)- * H'
> // S' * K(k)' = H * P(k)-'
> RealMatrix kalmanGain = new CholeskyDecomposition(s).getSolver()
>         .solve(measurementMatrix.multiply(errorCovariance.transpose()))
>         .transpose();

We need to evaluate this change against the test cases that previously
caused problems, to see whether it is robust enough or whether the QR
decomposition should be used instead.

Could you please open a new issue for this on JIRA so that we can keep
track of it?

Thanks,

Thomas
