Kalman Filtering: Theory and Practice Using MATLAB, Second Edition, Mohinder S. Grewal, Angus P. Andrews
Copyright © 2001 John Wiley & Sons, Inc.
ISBNs: 0-471-39254-5 (Hardback); 0-471-26638-8 (Electronic)
Kalman Filtering: Theory and Practice Using MATLAB
Second Edition

MOHINDER S. GREWAL, California State University at Fullerton
ANGUS P. ANDREWS, Rockwell Science Center

A Wiley-Interscience Publication
John Wiley & Sons, Inc.
New York / Chichester / Weinheim / Brisbane / Singapore / Toronto
Copyright © 2001 by John Wiley & Sons, Inc. All rights reserved.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic or mechanical, including uploading, downloading, printing, decompiling, recording, or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without the prior written permission of the Publisher. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 605 Third Avenue, New York, NY 10158-0012, (212) 850-6011, fax (212) 850-6008, E-Mail: PERMREQ@WILEY.COM.

This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold with the understanding that the publisher is not engaged in rendering professional services. If professional advice or other expert assistance is required, the services of a competent professional person should be sought.

ISBN 0-471-26638-8. This title is also available in print as ISBN 0-471-39254-5. For more information about Wiley products, visit our web site at www.Wiley.com.
Contents

PREFACE ix
ACKNOWLEDGMENTS xiii

1 General Information 1
1.1 On Kalman Filtering 1
1.2 On Estimation Methods 5
1.3 On the Notation Used in This Book 20
1.4 Summary 22
Problems 23

2 Linear Dynamic Systems 25
2.1 Chapter Focus 25
2.2 Dynamic Systems 26
2.3 Continuous Linear Systems and Their Solutions 30
2.4 Discrete Linear Systems and Their Solutions 41
2.5 Observability of Linear Dynamic System Models 42
2.6 Procedures for Computing Matrix Exponentials 48
2.7 Summary 50
Problems 53

3 Random Processes and Stochastic Systems 56
3.1 Chapter Focus 56
3.2 Probability and Random Variables 58
3.3 Statistical Properties of Random Variables 66
3.4 Statistical Properties of Random Processes 68
3.5 Linear System Models of Random Processes and Sequences 76
3.6 Shaping Filters and State Augmentation 84
3.7 Covariance Propagation Equations 88
3.8 Orthogonality Principle 97
3.9 Summary 102
Problems 104

4 Linear Optimal Filters and Predictors 114
4.1 Chapter Focus 114
4.2 Kalman Filter 116
4.3 Kalman-Bucy Filter 126
4.4 Optimal Linear Predictors 128
4.5 Correlated Noise Sources 129
4.6 Relationships between Kalman and Wiener Filters 130
4.7 Quadratic Loss Functions 131
4.8 Matrix Riccati Differential Equation 133
4.9 Matrix Riccati Equation in Discrete Time 148
4.10 Relationships between Continuous and Discrete Riccati Equations 153
4.11 Model Equations for Transformed State Variables 154
4.12 Application of Kalman Filters 155
4.13 Smoothers 160
4.14 Summary 164
Problems 165

5 Nonlinear Applications 169
5.1 Chapter Focus 169
5.2 Problem Statement 170
5.3 Linearization Methods 171
5.4 Linearization about a Nominal Trajectory 171
5.5 Linearization about the Estimated Trajectory 175
5.6 Discrete Linearized and Extended Filtering 176
5.7 Discrete Extended Kalman Filter 178
5.8 Continuous Linearized and Extended Filters 181
5.9 Biased Errors in Quadratic Measurements 182
5.10 Application of Nonlinear Filters 184
5.11 Summary 198
Problems 200

6 Implementation Methods 202
6.1 Chapter Focus 202
6.2 Computer Roundoff 204
6.3 Effects of Roundoff Errors on Kalman Filters 209
6.4 Factorization Methods for Kalman Filtering 216
6.5 Square-Root and UD Filters 238
6.6 Other Alternative Implementation Methods 252
6.7 Summary 265
Problems 266

7 Practical Considerations 270
7.1 Chapter Focus 270
7.2 Detecting and Correcting Anomalous Behavior 271
7.3 Prefiltering and Data Rejection Methods 294
7.4 Stability of Kalman Filters 298
7.5 Suboptimal and Reduced-Order Filters 299
7.6 Schmidt-Kalman Filtering 309
7.7 Memory, Throughput, and Wordlength Requirements 316
7.8 Ways to Reduce Computational Requirements 326
7.9 Error Budgets and Sensitivity Analysis 332
7.10 Optimizing Measurement Selection Policies 336
7.11 Application to Aided Inertial Navigation 342
7.12 Summary 346
Problems 347

Appendix A MATLAB Software 350
A.1 Notice 350
A.2 General System Requirements 350
A.3 Diskette Directory Structure 351
A.4 MATLAB Software for Chapter 2 351
A.5 MATLAB Software for Chapter 4 351
A.6 MATLAB Software for Chapter 5 352
A.7 MATLAB Software for Chapter 6 352
A.8 MATLAB Software for Chapter 7 353
A.9 Other Sources of Software 353

Appendix B A Matrix Refresher 355
B.1 Matrix Forms 355
B.2 Matrix Operations 359
B.3 Block Matrix Formulas 363
B.4 Functions of Square Matrices 366
B.5 Norms 370
B.6 Cholesky Decomposition 373
B.7 Orthogonal Decompositions of Matrices 375
B.8 Quadratic Forms 377
B.9 Derivatives of Matrices 379

REFERENCES 381
INDEX 395
Preface

The first edition of this book was published by Prentice-Hall in 1993. With this second edition, as with the first, our primary objective is to provide our readers a working familiarity with both the theoretical and practical aspects of Kalman filtering by including "real-world" problems in practice as illustrative examples. We are pleased to have this opportunity to incorporate the many helpful corrections and suggestions from our colleagues and students over the last several years for the overall improvement of the textbook. The book covers the historical background of Kalman filtering and the more practical aspects of implementation: how to represent the problem in a mathematical model, analyze the performance of the estimator as a function of model parameters, implement the mechanization equations in numerically stable algorithms, assess its computational requirements, test the validity of results, and monitor the filter performance in operation. These are important attributes of the subject that are often overlooked in theoretical treatments but are necessary for application of the theory to real-world problems.

We have converted all algorithm listings and all software to MATLAB (a registered trademark of The MathWorks, Inc.), so that users can take advantage of its excellent graphing capabilities and a programming interface that is very close to the mathematical equations used for defining Kalman filtering and its applications. See Appendix A, Section A.2, for more information on MATLAB. The inclusion of the software is practically a matter of necessity, because Kalman filtering would not be very useful without computers to implement it. It is a better learning experience for the student to discover how the Kalman filter works by observing it in action. The implementation of Kalman filtering on computers also illuminates some of the practical considerations of finite-wordlength arithmetic and the need for alternative algorithms to preserve the accuracy of the results. If the student wishes to apply what she or he learns, then it is essential that she or he experience its workings and failings, and learn to recognize the difference.

The book is organized for use as a text for an introductory course in stochastic processes at the senior level and as a first-year graduate-level course in Kalman filtering theory and application. It could also be used for self-instruction or for purposes of review by practicing engineers and scientists who are not intimately familiar with the subject. The organization of the material is illustrated by the following chapter-level dependency graph, which shows how the subject of each chapter depends upon material in other chapters. The arrows in the figure indicate the recommended order of study. Boxes above another box and connected by arrows indicate that the material represented by the upper boxes is background material for the subject in the lower box.

Chapter 1 provides an informal introduction to the general subject matter by way of its history of development and application. Chapters 2 and 3 and Appendix B cover the essential background material on linear systems, probability, stochastic processes, and modeling. These chapters could be covered in a senior-level course in electrical, computer, and systems engineering. Chapter 4 covers linear optimal filters and predictors, with detailed examples of applications. Chapter 5 is devoted to nonlinear estimation by "extended" Kalman
filters. Applications of these techniques to the identification of unknown parameters of systems are given as examples. Chapter 6 covers the more modern implementation techniques, with algorithms provided for computer implementation. Chapter 7 deals with more practical matters of implementation and use beyond the numerical methods of Chapter 6. These matters include memory and throughput requirements (and methods to reduce them), divergence problems (and effective remedies), and practical approaches to suboptimal filtering and measurement selection.

Chapters 4-7 cover the essential material for a first-year graduate class in Kalman filtering theory and application or as a basic course in digital estimation theory and application. A solutions manual for each chapter's problems is available.

PROF. MOHINDER S. GREWAL, PHD, PE
California State University at Fullerton

ANGUS P. ANDREWS, PHD
Rockwell Science Center, Thousand Oaks, California
Acknowledgments

The authors express their appreciation to the following individuals for their contributions during the preparation of the first edition: Robert W. Bass, E. Richard Cohen, Thomas W. De Vries, Reverend Joseph Gaffney, Thomas L. Gunckel II, Dwayne Heckman, Robert A. Hubbs, Thomas Kailath, Rudolf E. Kalman, Alan J. Laub, Robert F. Nease, John C. Pinson, John M. Richardson, Jorma Rissanen, Gerald E. Runyon, Joseph Smith, and Donald F. Wiberg. We also express our appreciation to Donald Knuth and Leslie Lamport for TeX and LaTeX, respectively.

In addition, the following individuals deserve special recognition for their careful review, corrections, and suggestions for improving the second edition: Dean Dang and Gordon Inverarity.

Most of all, for their dedication, support, and understanding through both editions, we dedicate this book to Sonja Grewal and Jeri Andrews.

M. S. G., A. P. A.
1 General Information

. . . the things of this world cannot be made known without mathematics.
Roger Bacon (1220-1292), Opus Majus, transl. R. Burke, 1928

1.1 ON KALMAN FILTERING

1.1.1 First of All: What Is a Kalman Filter?

Theoretically, the Kalman filter is an estimator for what is called the linear-quadratic problem, which is the problem of estimating the instantaneous "state" (a concept that will be made more precise in the next chapter) of a linear dynamic system perturbed by white noise, using measurements linearly related to the state but corrupted by white noise. The resulting estimator is statistically optimal with respect to any quadratic function of estimation error.

Practically, it is certainly one of the greater discoveries in the history of statistical estimation theory and possibly the greatest discovery in the twentieth century. It has enabled humankind to do many things that could not have been done without it, and it has become as indispensable as silicon in the makeup of many electronic systems. Its most immediate applications have been for the control of complex dynamic systems such as continuous manufacturing processes, aircraft, ships, or spacecraft. To control a dynamic system, you must first know what it is doing. For these applications, it is not always possible or desirable to measure every variable that you want to control, and the Kalman filter provides a means for inferring the missing information from indirect (and noisy) measurements. The Kalman filter is also used for predicting the likely future courses of dynamic systems that people are not likely to control, such as the flow of rivers during flood, the trajectories of celestial bodies, or the prices of traded commodities.

From a practical standpoint, these are the perspectives that this book will present:
It is only a tool. It does not solve any problem all by itself, although it can make it easier for you to do it. It is not a physical tool, but a mathematical one. It is made from mathematical models, which are essentially tools for the mind. They make mental work more efficient, just as mechanical tools make physical work more efficient. As with any tool, it is important to understand its use and function before you can apply it effectively. The purpose of this book is to make you sufficiently familiar with and proficient in the use of the Kalman filter that you can apply it correctly and efficiently.

It is a computer program. It has been called "ideally suited to digital computer implementation," in part because it uses a finite representation of the estimation problem, by a finite number of variables. It does, however, assume that these variables are real numbers, with infinite precision. Some of the problems encountered in its use arise from the distinction between finite dimension and finite information, and the distinction between "finite" and "manageable" problem sizes. These are all issues on the practical side of Kalman filtering that must be considered along with the theory.

It is a complete statistical characterization of an estimation problem. It is much more than an estimator, because it propagates the entire probability distribution of the variables it is tasked to estimate. This is a complete characterization of the current state of knowledge of the dynamic system, including the influence of all past measurements. These probability distributions are also useful for statistical analysis and the predictive design of sensor systems.

In a limited context, it is a learning method. It uses a model of the estimation problem that distinguishes between phenomena (what one is able to observe), noumena (what is really going on), and the state of knowledge about the noumena that one can deduce from the phenomena.
That state of knowledge is represented by probability distributions. To the extent that those probability distributions represent knowledge of the real world and the cumulative processing of knowledge is learning, this is a learning process. It is a fairly simple one, but quite effective in many applications.

If these answers provide the level of understanding that you were seeking, then there is no need for you to read the rest of the book. If you need to understand Kalman filters well enough to use them, then read on!

1.1.2 How It Came to Be Called a Filter

It might seem strange that the term "filter" would apply to an estimator. More commonly, a filter is a physical device for removing unwanted fractions of mixtures. (The word felt comes from the same medieval Latin stem, for the material was used as a filter for liquids.) Originally, a filter solved the problem of separating unwanted components of gas-liquid-solid mixtures. In the era of crystal radios and vacuum tubes, the term was applied to analog circuits that "filter" electronic signals. These
signals are mixtures of different frequency components, and these physical devices preferentially attenuate unwanted frequencies. This concept was extended in the 1930s and 1940s to the separation of "signals" from "noise," both of which were characterized by their power spectral densities. Kolmogorov and Wiener used this statistical characterization of their probability distributions in forming an optimal estimate of the signal, given the sum of the signal and noise.

With Kalman filtering the term assumed a meaning that is well beyond the original idea of separation of the components of a mixture. It has also come to include the solution of an inversion problem, in which one knows how to represent the measurable variables as functions of the variables of principal interest. In essence, it inverts this functional relationship and estimates the independent variables as inverted functions of the dependent (measurable) variables. These variables of interest are also allowed to be dynamic, with dynamics that are only partially predictable.

1.1.3 Its Mathematical Foundations

Figure 1.1 depicts the essential subjects forming the foundations for Kalman filtering theory. Although this shows Kalman filtering as the apex of a pyramid, it is itself but part of the foundations of another discipline ("modern" control theory) and a proper subset of statistical decision theory. We will examine only the top three layers of the pyramid in this book, and a little of the underlying mathematics[1] (matrix theory) in Appendix B.

1.1.4 What It Is Used For

The applications of Kalman filtering encompass many fields, but its use as a tool is almost exclusively for two purposes: estimation and performance analysis of estimators.

[Fig. 1.1 Foundational concepts in Kalman filtering: a pyramid with Kalman filtering at its apex, resting in turn on least mean squares, least squares, stochastic systems, dynamic systems, probability theory, and mathematical foundations.]
[1] It is best that one not examine the bottommost layers of these mathematical foundations too carefully, anyway. They eventually rest on human intellect, the foundations of which are not as well understood.
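Before turning to those two roles, the estimator described in Section 1.1.1 can be made concrete in a few lines of code. The sketch below is written in Python with NumPy rather than the book's MATLAB, and the model matrices (F, Q, H, R) and the measurement values are hypothetical numbers chosen only for illustration; it shows the standard discrete predict/update cycle, not any particular listing from this book.

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One predict/update cycle of a discrete Kalman filter."""
    # Predict: propagate the state estimate and its covariance
    # through the linear dynamic model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: blend the prediction with the measurement z,
    # weighted by the Kalman gain K.
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy one-state model: a nearly constant quantity (true value about 5)
# observed directly through noisy measurements.
F = np.array([[1.0]]); Q = np.array([[1e-4]])
H = np.array([[1.0]]); R = np.array([[0.25]])
x = np.array([0.0]); P = np.array([[1.0]])
for z in [4.9, 5.1, 5.0, 4.95, 5.05]:
    x, P = kalman_step(x, P, np.array([z]), F, Q, H, R)
```

After a handful of measurements the estimate x moves toward the true value and the covariance P shrinks, illustrating the point made above: the filter carries not just an estimate but a complete statistical characterization of its own uncertainty.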
Role 1: Estimating the State of Dynamic Systems. What is a dynamic system? Almost everything, if you are picky about it. Except for a few fundamental physical constants, there is hardly anything in the universe that is truly constant. The orbital parameters of the asteroid Ceres are not constant, and even the "fixed" stars and continents are moving. Nearly all physical systems are dynamic to some degree. If one wants very precise estimates of their characteristics over time, then one has to take their dynamics into consideration. The problem is that one does not always know their dynamics very precisely either. Given this state of partial ignorance, the best one can do is express that ignorance more precisely, using probabilities. The Kalman filter allows us to estimate the state of dynamic systems with certain types of random behavior by using such statistical information. A few examples of such systems are listed in the second column of Table 1.1.

Role 2: The Analysis of Estimation Systems. The third column of Table 1.1 lists some possible sensor types that might be used in estimating the state of the corresponding dynamic systems. The objective of design analysis is to determine how best to use these sensor types for a given set of design criteria. These criteria are typically related to estimation accuracy and system cost.
The Kalman filter uses a complete description of the probability distribution of its estimation errors in determining the optimal filtering gains, and this probability distribution may be used in assessing its performance as a function of the "design parameters" of an estimation system, such as

- the types of sensors to be used,
- the locations and orientations of the various sensor types with respect to the system to be estimated,
- the allowable noise characteristics of the sensors,
- the prefiltering methods for smoothing sensor noise,
- the data sampling rates for the various sensor types, and
- the level of model simplification to reduce implementation requirements.

TABLE 1.1 Examples of Estimation Problems

Application        Dynamic System   Sensor Types
Process control    Chemical plant   Pressure, temperature, flow rate, gas analyzer
Flood prediction   River system     Water level, rain gauge, weather radar
Tracking           Spacecraft       Radar, imaging system
Navigation         Ship             Sextant, log, gyroscope, accelerometer, Global Positioning System (GPS) receiver

The analytical capability of the Kalman filter formalism also allows a system designer to assign an "error budget" to subsystems of an estimation system and to trade off the budget allocations to optimize cost or other measures of performance while achieving a required level of estimation accuracy.

1.2 ON ESTIMATION METHODS

We consider here just a few of the sources of intellectual material presented in the remaining chapters and principally those contributors[2] whose lifelines are shown in Figure 1.2. These cover only 500 years, and the study and development of mathematical concepts goes back beyond history. Readers interested in more detailed histories of the subject are referred to the survey articles by Kailath [25, 176], Lainiotis, Mendel and Geiseking, and Sorenson [47, 224] and the personal accounts of Battin and Schmidt.

1.2.1 Beginnings of Estimation Theory

The first method for forming an optimal estimate from noisy data is the method of least squares. Its discovery is generally attributed to Carl Friedrich Gauss (1777-1855) in 1795. The inevitability of measurement errors had been recognized since the time of Galileo Galilei (1564-1642), but this was the first formal method for dealing with them. Although it is more commonly used for linear estimation problems, Gauss first used it for a nonlinear estimation problem in mathematical astronomy, which was part of a dramatic moment in the history of astronomy. The following narrative was gleaned from many sources, with the majority of the material from the account by Baker and Makemson:

On January 1, 1801, the first day of the nineteenth century, the Italian astronomer Giuseppe Piazzi was checking an entry in a star catalog.
Unbeknown to Piazzi, the entry had been added erroneously by the printer. While searching for the "missing" star, Piazzi discovered, instead, a new planet. It was Ceres, the largest of the minor planets and the first to be discovered, but Piazzi did not know that yet. He was able to track and measure its apparent motion against the "fixed" star background during 41 nights of viewing from Palermo before his work was interrupted. When he returned to his work, however, he was unable to find Ceres again.

[2] The only contributor after R. E. Kalman on this list is Gerald J. Bierman, an early and persistent advocate of numerically stable estimation methods. Other recent contributors are acknowledged in Chapter 6.
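Gauss's criterion, minimizing the sum of squared measurement residuals, remains the starting point of estimation theory. As a minimal illustration (in Python rather than the book's MATLAB, using made-up numbers, not Piazzi's actual observations), a straight-line motion model can be fitted to noisy position measurements by solving the normal equations:

```python
import numpy as np

# Hypothetical noisy observations of a body moving at a constant
# (unknown) rate: position p(t) = a + b*t plus small measurement errors.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
p = np.array([1.1, 2.9, 5.1, 7.0, 8.9])

# Design matrix for the linear model p = a + b*t.
A = np.column_stack([np.ones_like(t), t])

# Normal equations (A^T A) x = (A^T p): the solution minimizes the
# sum of squared residuals, which is Gauss's least-squares criterion.
a_hat, b_hat = np.linalg.solve(A.T @ A, A.T @ p)
# For these data, a_hat is close to 1 and b_hat close to 2.
```

Gauss's Ceres problem was the nonlinear analogue of this: the observed sky positions are nonlinear functions of the six orbital elements, so the "design matrix" must come from linearizing about a trial orbit and iterating, a theme that returns in Chapter 5 with the extended Kalman filter.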