There is a well-known phenomenon in AI (Artificial Intelligence), called the ELIZA effect, whereby people over-interpret machine output, reading between the lines for meanings that were never there. Here is the entry in Wikipedia:
"The ELIZA effect, in computer science, is the tendency to unconsciously assume computer behaviors are analogous to human behaviors.
In its specific form, the ELIZA effect refers only to "the susceptibility of people to read far more understanding than is warranted into strings of symbols — especially words — strung together by computers". ...... More generally, the ELIZA effect describes any situation where, based solely on a system's output, users perceive computer systems as having "intrinsic qualities and abilities which the software controlling the (output) cannot possibly achieve" or "assume that [outputs] reflect a greater causality than they actually do." ...... The discovery of the ELIZA effect was an important development in artificial intelligence, demonstrating the principle of using social engineering rather than explicit programming to pass a Turing test." (https://en.wikipedia.org/wiki/ELIZA_effect)
In fact, a mirror effect also exists for human intelligence, which I name the Anti-Eliza effect: the tendency to unconsciously mythify human capabilities by over-interpreting the output of human agents for meanings that were never there. The Anti-Eliza effect disregards the fact that more than 90% of human intelligent activity is actually mechanical or algorithmic by nature, supported by access to the memory of a knowledge base. The frequently observed Eliza effect and the Anti-Eliza effect are two sides of the same coin, most likely resting on similar cognitive grounds of the human mind.
Human intelligence, in effect, can hardly survive a few rounds of decomposition before it shows its true identity: perhaps less than one percent inspiration, with 99% mechanical process. Blended together inside a human body, the two may manifest as a master or genius to be worshiped. There is no way for Artificial Intelligence to target that one percent; it is neither possible nor necessary.
Hereby let me present this new concept of the Anti-Eliza effect in AI, associated with the human habit of self-mythification. Such self-mythification is exemplified by reading human intelligent output for meanings that simply do not exist, and by exaggerating the significance of human spirituality. For example, given the same piece of work, if we are told it is the product of a machine, we instinctively belittle it in order to maintain human dignity or arrogance. If the work is believed to be a rare antique or the artifact of a human artist, it draws numerous interpretations and amazed appreciation.
The Anti-Eliza effect shows itself widely in the domain of art and literature reviews. For the genre of abstract art, the effect is rationalized: different people are actually expected to read different meanings out of the same piece, independent of what the original author intended. That is considered part of the value of this type of work. The ability to read an artistic work for many meanings that were never intended is often considered a necessary quality of a professional art reviewer. It not only requires courage but is often futile to point out that the emperor has no clothes: that the work does not make sense, or carries none of the meanings interpreted by reviewers. The theory of aesthetics surrounding abstract art has no need for reality checks at all.
In my understanding, the Anti-Eliza effect is a manifestation of mysticism, and mysticism is close to human nature. This is a huge topic that calls for further exploration in AI before we can see the full scope of the effect. I believe the Anti-Eliza effect is an important basic concept in the field of AI, as significant as its mirror concept, the Eliza effect. This is by no means to deny the supremacy of the human mind, nor to deny the humanity illuminated by that one percent of spirituality in our intelligent output. Only the most foolish would be so self-denying as to attack human dignity. However, in science and engineering alike, everything needs to be verified or proved. In AI work, the first thing we do in practice is to peel off what can be modeled by a machine from what is truly unique to humans. We then make the machine mimic or model that 99% of material and process in human intelligent behavior, while keeping a distance from, and maintaining a high regard for, the 1% of true human wisdom. As has been observed, the routine "intelligent" activities of mankind will be approximated and modeled more and more in AI, much like a silkworm eating away ever more of a mulberry leaf.
With each territorial expansion of AI, what was originally thought of as true intelligence is quickly decomposed into an algorithmic solution that no longer belongs to uniquely human wisdom. If the nature of mankind is simply a hybrid of 1% holy spirit and 99% fundamentally mechanical device, then in the end it is inevitable that machines will one day replace that 99% of human intelligent labor. From a different angle, any implementable AI approximation to human "intelligent" activity is by nature not true intelligence. Obviously, as time goes by, more and more such approximations will be programmed into machines to replace mediocre human performers. But there will always be something that can never be computerized, namely true "human intelligence" (synonyms include wisdom, spirituality, inspiration, soul, etc.).
The difficulty now lies in the fact that for the majority of specific "intelligent" tasks, spirit and mechanical material are still mixed together with no clear separation. We can hardly define or see clearly what that "spirit" (the core intelligence unique to mankind) is, unless AI accumulates its modeling successes over time to a point of diminishing returns. Before that happens, we humans tend to continue mythifying our own abilities and classifying them as uniquely human. In other words, the Anti-Eliza effect will run for a long, long time, rooted in the human nature of mythification.
Let us look at history for a brief review of some fundamental abilities long believed to constitute human intelligence. This review will help us see how the effect has evolved over time.
In the pre-computing era, arithmetic abilities were highly regarded. The few people with exceptional arithmetic performance were generally considered the most intelligent. After calculators and computers were invented, this was the first myth to break down. No one in today's society considers a calculator an intelligent being.
Following calculating power comes memorization capacity, which was also long believed to be an incredible intelligence of the human brain. In ancient times, people with extraordinary mental arithmetic and outstanding memory were often worshiped as genius masters or living gods (of course, memorization involves not only storage capacity, but also the accompanying retrieval abilities to access that storage). As a matter of fact, many intelligent machines implemented in AI history (e.g. some expert systems) come down, at the core, to a customized search algorithm plus a memory of formalized domain knowledge. The intelligent activities so modeled have thus been demystified from the presumed Anti-Eliza effect.
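The "search algorithm plus a memory of formalized domain knowledge" pattern can be sketched in a few lines. The following is only an illustrative toy, assuming a made-up rule base; real expert systems are vastly larger, but the mechanics are the same: purely mechanical iteration over stored rules, with no magic involved.

```python
def forward_chain(facts, rules):
    """Repeatedly fire rules (premises -> conclusion) until no new fact
    can be derived: a mechanical search over a stored knowledge base."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical domain knowledge, encoded as (premises, conclusion) pairs.
RULES = [
    ({"has_fever", "has_cough"}, "suspect_flu"),
    ({"suspect_flu", "short_of_breath"}, "refer_to_doctor"),
]

derived = forward_chain({"has_fever", "has_cough", "short_of_breath"}, RULES)
print(sorted(derived))  # includes the derived facts "suspect_flu" and "refer_to_doctor"
```

Once the "reasoning" is written out this way, the Anti-Eliza mystique evaporates: the system is a loop over a lookup table.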
For an illustration, I would like to present the case of natural language parsing, to see how much human intelligence is really involved. The ability to parse a natural language in grammatical analysis is widely recognized in the NLP community and beyond as a key to natural language understanding and human intelligence. Fortunately, modeling this ability has been one of my major professional tasks throughout the last two decades of my career, so I believe I have the expertise to uncover the true picture in this area. As a seasoned professional, I can responsibly tell you that 99% of the human parsing capability can be modeled by a computer very well, almost indistinguishably from a human grammarian. Human grammatical analysis of the underlying linguistic structures can be closely approximated by a linguistic parsing algorithm based on the language knowledge of vocabulary, usage and rules. Both our English parser and our Chinese parser, which I designed and led the team to develop, are close to being able to parse 99% of random text into reasonable linguistic structures. For a key human intelligence, this level of modeling performance once seemed unimaginable, like a miracle, yet in practice there are easily measurable benchmarks for it.
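To make the point concrete, here is a minimal sketch of rule-based parsing as a mechanical procedure: a toy recursive-descent parser over a hypothetical two-rule grammar and four-word lexicon. This is not the author's parser, only an illustration of the claim that grammatical analysis can be driven by stored vocabulary and rules.

```python
# Hypothetical lexicon: word -> part of speech.
LEXICON = {"the": "Det", "dog": "N", "cat": "N", "chased": "V"}

def parse_np(tokens, i):
    # Rule: NP -> Det N
    if (i + 1 < len(tokens) and LEXICON.get(tokens[i]) == "Det"
            and LEXICON.get(tokens[i + 1]) == "N"):
        return ("NP", tokens[i], tokens[i + 1]), i + 2
    return None, i

def parse_s(tokens):
    # Rule: S -> NP V NP; succeed only if the whole input is consumed.
    np1, i = parse_np(tokens, 0)
    if np1 and i < len(tokens) and LEXICON.get(tokens[i]) == "V":
        np2, j = parse_np(tokens, i + 1)
        if np2 and j == len(tokens):
            return ("S", np1, ("V", tokens[i]), np2)
    return None

tree = parse_s("the dog chased the cat".split())
print(tree)  # ('S', ('NP', 'the', 'dog'), ('V', 'chased'), ('NP', 'the', 'cat'))
```

Production parsers need orders of magnitude more rules and careful ambiguity handling, but the character of the computation is the same: lookup plus rule application, not inspiration.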
Stepping back from parsing to the metaphysical level, what I want to say is that much of what we originally thought of as deep intelligence cannot withstand decomposition either. Every step of decomposition in AI research has helped reveal the true picture of human intelligent behavior, which is usually not what we used to believe. It has been a wonderful and eye-opening experience in the careers of most AI and NLP researchers over the last few decades. Until things are decomposed in AI, a natural Anti-Eliza effect seems to dominate the perception of most types of human intelligent activity, in the minds not only of the general population but of us AI insiders as well. Before we embark on exploring an intelligent task from the AI perspective, we often cannot help mythifying the seemingly marvelous intelligence, due to the Anti-Eliza effect. But most of us end up able to formalize a solution that is algorithmic and implementable on a computer with a memory of some form of knowledge representation, and we then realize there is really little magic in the process. Most probably, I believe, mysticism and self-mythification are simply part of human nature, hence the widespread manifestation of the Anti-Eliza effect.
Translated by the original author Wei Li from his original Chinese version here: 【Xinzhiyuan Notes: The Anti-Eliza Effect, a New Concept in Artificial Intelligence】