How do we go about estimating the unknown constant? Anyone, with or without training, would say "average out the noise", i.e.,

Estimate of the unknown constant = sum of the measurements divided by the number of measurements
The Draper Prize-winning accomplishment of the Kalman filter, and indeed all other estimation schemes, ultimately boils down to this simple and intuitive idea. Of course, one can dress up the idea with all kinds of sophisticated manipulations, complications, and computational tricks. But conceptually they are nothing more than "subtract out everything known and average the rest".
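To make the idea concrete, here is a minimal sketch (names and data are illustrative, not from the source) showing that updating the estimate one measurement at a time, with a shrinking gain, is algebraically the same as the batch average. This recursive form is exactly what the Kalman filter reduces to when the unknown is a constant.

```python
import random

def running_average(measurements):
    """Recursively 'average out the noise': each new measurement nudges
    the estimate toward itself with gain 1/k. This is identical to the
    batch mean, and is the Kalman filter for an unknown constant."""
    estimate = 0.0
    for k, z in enumerate(measurements, start=1):
        # new estimate = old estimate + gain * (measurement - old estimate)
        estimate += (z - estimate) / k
    return estimate

random.seed(0)
true_value = 5.0
data = [true_value + random.gauss(0, 1) for _ in range(1000)]
batch_mean = sum(data) / len(data)
print(abs(running_average(data) - batch_mean) < 1e-9)
```

The recursive form matters in practice: it processes measurements as they arrive, without storing them all.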
2. Reflecting the Future Consequence of Current Action
"Saving for a rainy day", "planning for your retirement", "aiming a gun at a distant target (ballistic science)": all of these are concerned with how current actions affect future consequences.
Thus, in using current information to determine a course of possibly optimal decisions, we must include the relevant future consequences as part of the current information. How to reflect future consequences back to the present is known as the "backward propagation of dynamic programming", or the "adjoint equations of the calculus of variations and the maximum principle", or for that matter any rule that links the future to the present. Once this is done, we can determine the current optimal decision based only on "current information". This is a STATIC optimal decision problem, which is generally easier to solve.
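The backward propagation can be sketched on a hypothetical toy problem (the problem, names, and costs below are my illustration, not from the source): walking from position 0 toward a target in a fixed number of stages. Sweeping backward folds all future consequences into a value function V, after which each stage's decision is a static comparison of "current cost plus already-summarized future cost".

```python
def backward_dp(horizon, states, actions, step_cost, transition, terminal_cost):
    """Backward propagation of dynamic programming: fold future
    consequences into a value function V, stage by stage, so the
    decision at each stage becomes a static minimization."""
    V = {s: terminal_cost(s) for s in states}  # value at the final time
    policy = []
    for t in reversed(range(horizon)):
        new_V, pi = {}, {}
        for s in states:
            # static problem: current cost + summarized future cost
            best = min(actions, key=lambda a: step_cost(s, a) + V[transition(s, a)])
            pi[s] = best
            new_V[s] = step_cost(s, best) + V[transition(s, best)]
        V = new_V
        policy.append(pi)
    policy.reverse()  # policy[t][s] = optimal action at stage t, state s
    return V, policy

# Toy instance: reach position 4 in 4 stages; each move right costs 1,
# and ending anywhere but the target costs 10.
states = range(5)
V, policy = backward_dp(
    horizon=4, states=states, actions=[0, 1],
    step_cost=lambda s, a: a,
    transition=lambda s, a: min(s + a, 4),
    terminal_cost=lambda s: 0 if s == 4 else 10,
)
print(V[0], policy[0][0])  # cost-to-go from 0 is 4; first optimal move is 1
```

Note that once V is computed, choosing an action at any stage never looks past the current state: the future has already been "reflected to the present".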
Optimal control theory is easier to teach and to understand if we keep the above two basic ideas in mind. The rest are simply secondary issues that make the solution easier to compute or more elegant to present.