Nanshan's personal blog http://blog.sciencenet.cn/u/Nanshan


AI's ability to think for itself is currently at...

2022-10-29 20:22 | Personal category: AI | Opinion piece

Dear predecessors/colleagues:

AI does not exist yet.

What is commonly meant by “AI” is an algorithm (an artificial neural network, or ANN) that is, in its mathematical essence, a morphable function. This function has a large number of terms and can therefore take on a vast number of shapes. The input to this function can be seen as the ‘question’, the output as the ‘answer’. Both question and answer may be vectors, i.e. questions and answers may have arbitrary dimension (limited only by hardware capacity).

By adjusting the weight of each term of this function, the function’s shape can be altered. Instead of altering the function arbitrarily, a human-defined error function (what counts as a ‘good’ answer) is used to adjust the weights so that the function’s answer creeps closer to the desired answer. This process is otherwise known as fitting, and the technique is gradient descent - which in simplified terms means: if you want to get off the mountain (descend to less error), take a step in the downhill direction (the gradient).
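
To make this concrete, here is a minimal sketch in Python (my own illustrative toy, not anything from a real “AI” system; the function, the data and the learning rate are all assumptions): a two-weight function fitted by gradient descent until its answers approach the desired answers.

```python
# Toy illustration of 'fitting by gradient descent': adjust the weights
# of a morphable function f(x) = w0 + w1*x so that its answers creep
# toward the desired answers. All numbers here are made up.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)   # the 'questions'
y = 3.0 * x + 1.0                  # the desired 'answers'

w = np.zeros(2)                    # the weights: [w0, w1]
lr = 0.1                           # size of each step down the mountain

for step in range(500):
    pred = w[0] + w[1] * x         # the function's current answers
    err = pred - y
    # human-defined error function: mean squared error.
    # Its gradient tells us which direction is 'downhill'.
    grad = 2 * np.array([err.mean(), (err * x).mean()])
    w -= lr * grad                 # take a step downhill

print(w)  # creeps toward [1.0, 3.0]
```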

That’s it. It is a truly simple and beautiful algorithm, and it is certainly useful in applicable cases.

So how does a useful algorithm get turned into a ‘thinking entity’? By wordplay.

When explaining software behavior to an end user, it is fine to anthropomorphize with the intent to convey a concept or insight. A programmer might help an end user who has difficulty understanding why the software behaves as it does by saying something like “the computer now thinks”… even though there is no thinking computer - it is just a metaphor. Very inaccurate, but in this case, harmless.

But when a field strives to achieve a trait that exists only in biological entities and has peaked in human intelligence, inaccurate anthropomorphism is a nomenclature that will inevitably lead even bright minds, apparently, into a self-constructed pit of delusion and circular reasoning.

Examples of self-asserting nomenclature in “AI” are “thinking”, “recognizing”, “decision making”, “learning”, “understanding” and so on. If one says that the algorithm is ‘thinking’, the presence of “AI” is not engineered or proven to exist; it is simply asserted.

Observational error is also a key ingredient in the AI deceit. In science, it is essential to have a clear understanding of the boundaries of the observed system. Blurring those boundaries is the error behind statements such as ‘computers can do X faster than humans’. But there are humans in the system - they wrote the software. What is actually meant is that humans with computers can do things better than humans without computers, which holds for any useful tool, like a hammer. But one would not say that a hammer is better at driving nails into wood than humans are. So why say it of computers?

Automation allows a human to have his or her intellect run on any computer, even after the human is perceived to no longer be part of the system. Crucially, the automation trick breaks down when the human did not know what problem to solve before “leaving the system”. The AI cult has actually managed to blame this breakdown of the automation trick on ‘the type of problem’ instead of admitting the trick is a trick: it simply calls problems that were unknown to the human before he or she pretended not to be part of the system “general problems”.

But it gets even worse than that. Quite a few AI “scientists” do not confuse themselves; they actively seek to confuse others.

There are “scientists” who proclaim that the algorithm produces (approximately) correct output, but that we humans do not understand why or how it does so. This is an insane statement, considering humans devised the algorithm.

Because if we state accurately what is meant by ‘humans do not understand why the algorithm does that’, it is merely this: an approximation does not equate to understanding, and it is impossible to extract knowledge where there is none.

To understand this, let’s consider the familiar F=ma. This mathematical formula states that the force on an object equals the mass of the object times its acceleration. That is knowledge, as long as the holder of this information understands force, mass, acceleration, multiplication and equality.

Now let us train an ANN on a finite set of points satisfying this equation. The algorithm will fit, as best it can, to this function. But the fitted function will not be F=ma, just close to it in a particular domain. You put in m=1 and a=9.8, and the function outputs 9.8000928172. That is a very useful result in particular cases - except in a physics test, of course, where the answer would puzzle the human examiner. Why did the candidate give an answer so close and yet so evidently wrong?
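
Here is a rough sketch of this experiment (my own toy; the network size, data ranges and training settings are all made-up assumptions): a small ANN fitted to points satisfying F=ma, which, when asked about m=1 and a=9.8, returns something near - but not exactly - 9.8.

```python
# Toy illustration: fit a small ANN to points satisfying F = m*a.
# The fitted function lands near the true law but never *is* the law.
import numpy as np

rng = np.random.default_rng(1)
m = rng.uniform(0.5, 1.5, (2000, 1))
a = rng.uniform(8.0, 12.0, (2000, 1))
X = np.hstack([m, a])
F = m * a                          # training targets from the true law

mu, sd = X.mean(0), X.std(0)       # normalize inputs so tanh units
Xn = (X - mu) / sd                 # do not saturate

W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)  # hidden layer
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)   # output layer
lr = 0.05

for step in range(3000):
    h = np.tanh(Xn @ W1 + b1)
    pred = h @ W2 + b2
    d = 2 * (pred - F) / len(Xn)           # gradient of mean squared error
    dh = (d @ W2.T) * (1 - h ** 2)         # backpropagate through tanh
    W2 -= lr * (h.T @ d);   b2 -= lr * d.sum(0)
    W1 -= lr * (Xn.T @ dh); b1 -= lr * dh.sum(0)

q = (np.array([[1.0, 9.8]]) - mu) / sd     # ask: m=1, a=9.8
print((np.tanh(q @ W1 + b1) @ W2 + b2)[0, 0])  # close to 9.8, not exactly 9.8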

The mere fact that the algorithm outputs something close to F=ma does not mean the algorithm understands mass, acceleration, force or their relation. This is completely unlike the understanding (Latin: intelligere) humans have of the concepts involved. Even if the algorithm were able to determine that it is so close to x=yz that it would be easier to replace itself by it, the algorithm would still not have grasped force, mass, acceleration, multiplication or equality. It would only mean something to the human looking at it.

Aha, some will say, but then I will just create an algorithm that selects operators on the parameters m and a. In other words, the algorithm will try, say, F=m+a, F=m/a and so on until it fits F=ma.

Now the algorithm produces something that can be knowledge to humans, but still not to the algorithm. Not to mention that a human would have inserted the concepts of mass, force and acceleration as relevant factors to begin with.
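
A minimal sketch of this operator-selection idea (again my own toy, with made-up data): the algorithm tries each candidate formula against the data and keeps the one with the lowest error. The winning formula can be knowledge to the human reading it; to the algorithm it is just the candidate with the best score.

```python
# Toy illustration of 'selecting operators on m and a': try F = m+a,
# m-a, m*a, m/a and keep whichever fits the data best.
import numpy as np

rng = np.random.default_rng(2)
m = rng.uniform(0.5, 2.0, 200)
a = rng.uniform(5.0, 15.0, 200)
F = m * a                          # data generated by the true law

candidates = {
    "F = m + a": m + a,
    "F = m - a": m - a,
    "F = m * a": m * a,
    "F = m / a": m / a,
}
errors = {name: np.mean((out - F) ** 2) for name, out in candidates.items()}
print(min(errors, key=errors.get))  # 'F = m * a' - meaningful only to us
```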

Another fine example of AI magic is to have an ANN copy from human input and then be amazed that the output looks like human input, which supposedly proves “the thing is near-human”. We have seen this with, say, GPT-x, an “AI” that fits to the most likely next letter a human would write, given the N letters before it. GPT-x certainly is an extraordinary feat of engineering, but to be amazed that fitting to human input produces human-like output is obviously absurd. The Guardian fell for it when it announced “an AI wrote this article”, where the content was in fact strictly human.
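
To see why the amazement is misplaced, here is a crude stand-in for “fit to the most likely next letter” (my own toy - a simple frequency table, emphatically not GPT’s actual architecture): count which character follows each length-N context in human text, then always emit the most frequent continuation. The output looks human because the input was human.

```python
# Toy illustration: a character-level next-letter predictor built from
# counts over human text. Its 'human-like' output is copied human input.
from collections import Counter, defaultdict

text = ("the force on an object equals its mass "
        "times its acceleration. ") * 20
N = 8                              # context length in characters

counts = defaultdict(Counter)
for i in range(len(text) - N):
    counts[text[i:i + N]][text[i + N]] += 1

generated = text[:N]               # seed with the first context
for _ in range(80):
    # append the most frequent continuation of the last N characters
    generated += counts[generated[-N:]].most_common(1)[0][0]

print(generated)  # echoes the human text it was fitted to
```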

Machines that have intrinsic, artificial and thus non-human intelligence do not exist yet. Obviously, it is possible that someone has a great, profound insight next week, next century or never. We can’t tell; we can’t predict scientific breakthroughs - neither when, nor if.

The better question to ask is why humans have wanted to succeed in creating “intelligent machines” for so long (this desire is far older than the past century). This is, I think, because some want to understand themselves and to get some answers about the purpose of their existence. Please keep trying, but stick to science, as that actually works.


Thanks.



https://blog.sciencenet.cn/blog-3536679-1361448.html

