2. ChatGPT: Guilty of Deception



If ChatGPT did have the ability to make sound judgments of probability (there are excellent reasons, I think, for concluding that it has no such ability), then it could at least be said that it was in a position to know that it was extraordinarily unlikely that there was any book written by a person named "Malcolm Hay" with the title in question, and with the content that ChatGPT described in great detail.

One might, then, very well describe ChatGPT as given to lying, but I think it is better to speak of deception, and to claim simply this: when one answers a question by putting forth a marvelously detailed fantasy, one is doing something that will in all likelihood result in the questioner coming to hold a false belief, and in some cases that person may act upon the belief in a way that proves harmful to himself or herself, or to others.

I therefore decided to ask ChatGPT about its handling of questions where it did not have any relevant information at all, and that conversation went as follows:


Michael

Do people sometimes ask questions where you're uncertain about the correct answer?


ChatGPT’s Response

Yes, as an AI language model, I am designed to provide the best possible answer based on my training and programming. However, there may be times when I am uncertain about the correct answer due to limitations in my knowledge or understanding of the context or nuances of a particular question. In such cases, I will try to provide the best possible response based on my available knowledge and resources, but I will also indicate if my answer is uncertain or speculative.

Michael

What happens when someone asks you a question where, given your training and programming, you have absolutely no information at all that is relevant to answering the question? Do you just completely make up an answer, based on no information at all, or do you admit that you have no basis for offering any answer to the question?


ChatGPT’s Response

If someone asks me a question for which I have absolutely no information at all that is relevant to answering the question, I will typically try to provide a response that acknowledges the lack of information and informs the user that I cannot provide a satisfactory answer.

In some cases, I may offer a speculative or hypothetical response based on my general knowledge or common-sense reasoning, but I will always indicate that the response is not based on any specific information or training that I have received. However, it is important to note that any speculative response I provide in such cases should be taken with caution, as it may not be accurate or reliable.


**********************************************************************************************************************************************

Comments

1. In the section entitled “ChatGPT: A Creator of Marvelous Fantasies,” however, ChatGPT did not do any of the three things that it just mentioned:


(1) It did not acknowledge that it was suffering from a complete lack of any information that was relevant to the question asked, nor did it inform the questioner of such a lack of information.


(2) It did not offer "a speculative or hypothetical response based on [its] general knowledge or common-sense reasoning," since it had no general knowledge that was at all relevant: no general knowledge available anywhere could have led to the extraordinary fantasy that ChatGPT produced, and neither could common-sense reasoning have done so.


(3) Nor did ChatGPT "indicate that the response is not based on any specific information or training that I have received," something that, in the conversation above, it claims it will "always" do.


2. How is it that the trainers and programmers of ChatGPT, who presumably are highly skilled and extremely intelligent people, have failed to notice this extraordinary behavior on ChatGPT's part? Or, worse still, if they have noticed it, why haven't they corrected it?!