In one of my earlier blogs, I wrote about my experience with ChatGPT, in which I asked GPT to produce a biography of myself. When I found many untruths in the bio, I questioned GPT as to where it had found these facts. GPT confessed that it had made them up.
In the educational field, when students use GPT to produce essays as substitutes for homework and course requirements, professors face the problem of how to grade students fairly.
Today, our local newspaper, the Boston Globe, reported a more serious problem. A 3/13/2024 article reports that a lawyer used GPT to produce a written legal argument for a case he was defending in court. In the well-written legal brief, the lawyer cited three previous cases as precedents to support his reasoning (a well-known and important legal practice). What the lawyer failed to check was that these precedents were nonexistent; the generative AI had simply made them up to support the write-up. Fortunately, the judge checked and discovered the deception, for which the lawyer was disciplined and fined. But do we know how many times such errors have gone undetected and resulted in unjust decisions?
We live in dangerous times! One cannot believe what appears in print or in video unless it is double-checked and verified before one acts on it. Yet every day we are bombarded with an overload of unsolicited information, never mind the social media in which many of us willingly engage.
How does one behave safely and comfortably in such an environment?