For new readers and those who request to be "good friends" (好友), please read my announcement board (公告栏) first.
Everyone has heard of the 80/20 rule, i.e., the first 80% of a system's performance requires only 20% of the total effort; the last 20% needs the additional 80% of the effort. Or, put colloquially, the "best" is the enemy of the "good enough". This is a simple statement of the complex task of general cost-benefit analysis that lies at the heart of every system engineering problem. It is also the core of my own research effort on Ordinal Optimization for the past 17 years (http://www.sciencenet.cn/m/user_content.aspx?id=2501)
and the subject of my latest book – Y.C. Ho, Q.C. Zhao, and Q.S. Jia, Ordinal Optimization: Soft Optimization for Hard Problems, Springer, 2007 – which carries the punch line
"Instead of asking for the 'best for sure', be willing to settle for the 'good enough with high probability'."
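To make that punch line concrete with a minimal sketch (my toy illustration, not the machinery of the book): if a fraction g of all designs counts as "good enough" and we blindly sample N designs, the chance that at least one sample is good enough is 1 - (1 - g)^N, which rushes toward 1 surprisingly fast. The fraction g and the sample sizes below are hypothetical choices.

```python
# Chance that blind sampling lands on a "good enough" design.
# A toy illustration of the punch line, not the book's algorithm;
# g = 0.01 (top 1% counts as good enough) is a hypothetical choice.

def prob_good_enough(g: float, n: int) -> float:
    """P(at least one of n uniform random samples falls in the top-g fraction)."""
    return 1.0 - (1.0 - g) ** n

for n in (10, 100, 1000):
    print(f"N = {n:4d}: P = {prob_good_enough(0.01, n):.4f}")
# N = 10 gives 0.0956, N = 100 gives 0.6340,
# N = 1000 gives about 0.99996 (printed as 1.0000).
```

Even when only the top 1% of designs qualifies, a thousand blind samples all but guarantee hitting one; insisting on the single best would still face the entire search space.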
It is also common knowledge that human beings quickly become satiated with anything. Once you achieve the ultimate, there is nothing left to conquer; life can become boring. Two of my own seven life lessons are:
1. Happiness is a positive derivative.
2. If your life is 80% perfect, you need the other 20% to keep you in perspective.
The Chinese also have the proverb "知足常乐" ("contentment brings lasting happiness"), which roughly corresponds to "satisficing" – a term invented by the late Nobel-prize-winning behavioral economist Herbert Simon.
Thus, rejections, failures, and imperfections are a necessary part of life.
Let me also illustrate using simple arithmetic what this "tolerance for imperfection" can buy you. Let us say you are doing optimization on a system and you have a solution methodology for determining the "best". If you have no idea whatsoever about the system, then the No Free Lunch Theorem says that blind search is, on average, as good as any solution methodology (see Y.C. Ho and D. Pepyne, "A Simple Explanation of the No Free Lunch Theorem", J. of Optimization Theory and Applications, Vol. 115, No. 3, Dec. 2002;
Y.C. Ho, Q.C. Zhao, and D. Pepyne, "The No Free Lunch Theorem, Complexity and Computer Security", IEEE Trans. on Automatic Control, Vol. 48, No. 5, pp. 783-793, May 2003;
Y.C. Ho and D. Pepyne, "Conceptual Framework for Optimization and Distributed Intelligence", Proceedings of the 2004 CDC, Dec. 2004).
Let the optimum performance for this system have a score of 100. If your solution methodology finds the optimum, you will be paid $100. But your methodology is not perfect: 5% of the time the method will fail and you get nothing. Suppose you are willing to tolerate this 5% failure rate, i.e., you will be forgiven 5 out of every 100 times you try to solve this problem. What does this buy you? Without forgiveness, your average performance is (95*100 + 5*0)/100 = 95. With forgiveness, your average payoff is 100. You get an extra 5/95 = 5.26% increase in performance. Now suppose you have some structural knowledge of what kind of problem this is, and you can use this knowledge to rule out 10% of all possible problems this solution methodology will be asked to solve. To a first approximation, you can now tolerate a 15% failure rate (10% you can rule out and 5% you will be forgiven). By the same calculation, the performance increase is now 15/85 = 17.6%. Similarly, total tolerated failure rates of 25% and 50% (i.e., correspondingly more structural knowledge) yield performance increases of 25/75 = 33% and 50/50 = 100%, respectively (note 1). In other words, the more knowledge you have about a problem, the more some tolerance for failure will buy you. Intuitively this is clear: the more certain you are, the less likely you are to fail, so forgiveness of each failure becomes that much more valuable.
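For readers who want to check the arithmetic, here is a small sketch that reproduces these numbers. The only assumption, per note 1, is that ruled-out cases and forgiven failures combine additively into one tolerated fraction.

```python
# Reproduces the payoff arithmetic above: attempts that fall in the
# tolerated fraction (forgiven failures plus cases ruled out by
# structural knowledge) do not count against your average.

def payoff_gain(tolerated: float) -> float:
    """Relative gain when a `tolerated` fraction of attempts is excused,
    versus scoring those attempts as zeros."""
    without_forgiveness = (1.0 - tolerated) * 100  # failures score 0
    with_forgiveness = 100.0                       # failures excused
    return (with_forgiveness - without_forgiveness) / without_forgiveness

for tol in (0.05, 0.15, 0.25, 0.50):
    print(f"tolerated {tol:.0%}: gain {payoff_gain(tol):.1%}")
# tolerated 5%:  gain 5.3%    (5/95, forgiveness alone)
# tolerated 15%: gain 17.6%   (15/85, 10% knowledge + 5% forgiveness)
# tolerated 25%: gain 33.3%   (25/75)
# tolerated 50%: gain 100.0%  (50/50)
```

The gain, tolerated/(1 - tolerated), grows without bound as the tolerated fraction approaches one, which is the arithmetic behind "the more knowledge you have, the more forgiveness buys you".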
At a more advanced level, the question of whether P = NP is one of the great unresolved problems of modern theoretical computer science. Roughly speaking, the time required to solve NP-complete problems appears to scale exponentially with problem size; no algorithm has been shown to solve them in time that is a polynomial function of the problem size. But M. Rabin showed in 1976 that if you permit an algorithm occasional failures, i.e., a small probability of a wrong answer, then primality testing – a problem for which no deterministic polynomial-time algorithm was then known – can be provably solved in polynomial time.
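The sketch below is the standard Miller-Rabin primality test in the form it is usually taught today (my restatement, not Rabin's original formulation). Each round costs polynomial time, and the price of that speed is a provably small error: a composite number is misjudged as prime with probability at most 4^(-rounds).

```python
import random

def is_probably_prime(n: int, rounds: int = 20) -> bool:
    """Miller-Rabin test: polynomial time per round; a composite n
    slips through with probability at most 4**(-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7):          # handle small primes and their multiples
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)            # a**d mod n
        if x in (1, n - 1):
            continue                # this round finds no evidence of compositeness
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False            # a is a witness: n is definitely composite
    return True                     # "prime", with a provably small error bound

print(is_probably_prime(2**61 - 1))  # True  (a Mersenne prime)
print(is_probably_prime(2**61 + 1))  # False (divisible by 3)
```

Twenty rounds push the error probability below 10^-12: give up certainty, gain tractability – exactly the "good enough with high probability" bargain.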
Finally, let me give a current real-world example from everyday living (admittedly this example is exaggerated and cruel; I won't defend it). Since 9/11/2001, airport security has increased considerably, resulting in much inconvenience for billions of travelers. The failed bombing on Christmas Day 2009 of a Delta/Northwest Airlines flight from the Netherlands to the US caused another level of stringent security measures to be instituted. Passengers must now be subject to full-body scans and are not allowed to use restrooms or blankets during the last hour of a flight. All of this is because we cannot tolerate any possibility that an airplane and its passengers may be lost to known terrorist methods (unknown and yet-untried methods not included). Now suppose we accept the fact that no security scheme can be perfect. If, because of our imperfect security system, terrorists were able to bring down one airplane and kill 300 passengers every decade, has anyone balanced this against the incalculable loss of time and convenience of billions of travelers during the same ten years? Or, for that matter, instead of more stringent airport security, the money could be spent on better intelligence or other security measures. Sure, it is unspeakably cruel to contemplate such tradeoffs. But remember that the American public tolerates 30,000 traffic deaths per year on US highways as a necessary evil. Of course, I fully understand that no politician can advocate even the consideration, never mind the implementation, of such trade-offs. But this illustrates the cost of seeking perfection, even if it is an illusory perfection.
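To show what such a balance would even look like on paper, here is a deliberately crude sketch; every number in it is an invented assumption for illustration, not a real statistic.

```python
# A deliberately crude tradeoff sketch. Every figure below is an
# invented assumption for illustration only, not a real statistic.

passengers_per_decade = 8_000_000_000  # "billions of travelers" over ten years
extra_minutes_each = 30                # assumed delay added by stricter screening
working_life_hours = 80_000            # assumed productive hours in one lifetime

hours_lost = passengers_per_decade * extra_minutes_each / 60
lifetimes_equivalent = hours_lost / working_life_hours

print(f"{hours_lost:.1e} passenger-hours lost per decade")
print(f"= roughly {lifetimes_equivalent:,.0f} working lifetimes of time")
# Compare against the text's hypothetical 300 lives lost per decade; the
# point is only that such a comparison can be stated, not what it concludes.
```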
Thus, let us take every rejection, disappointment, and failure in stride. One can always work towards perfection when the benefit justifies the cost, but remember there is a cost to being obsessed with it.
(Note 1. The exact performance increase depends on your assumptions; however, the intuition in this example is clear.)