ChatGPT: Users trust a chatbot as much as a human

People faced with a moral choice placed as much trust in a conversational bot such as ChatGPT as in a supposedly human advisor, according to a study whose authors call for educating the public about the inherent limitations of such tools.

A runaway tram will crush five people on the track unless a switch is used to divert it onto another track where only one person is standing. In this test, "empirically, most people don't hesitate to pull the switch," note the authors of the study, published in Scientific Reports. Unless, that is, a "moral advisor" discourages or encourages them before they decide. The authors tested whether participants responded differently depending on whether the advice was presented as coming from a supposedly human "moral advisor" or from a "conversational robot with artificial intelligence, which uses deep learning to speak like a human."

The team led by Sebastian Krügel, a computer science researcher in Ingolstadt, Germany, first observed that the more than 1,800 test participants closely followed the advice they were given. This held even in a more troubling variant of the test, in which one must choose whether to push a person onto the track to save five others. That decision is far harder to make, and there the opinion of the "moral advisor" proved decisive.

Moral inconsistency

More concerning still, participants seemed to place the two types of advisors on an equal footing. Yet all of the advice was in fact generated by ChatGPT without their knowledge, illustrating the system's ability to mimic human speech. The program, capable of responding intelligibly to all kinds of requests, proves remarkably inconsistent on moral questions: it argues both for sacrificing one person to save five and against that sacrifice. Nothing surprising, according to Sebastian Krügel, for whom "ChatGPT is like a kind of stochastic parrot, putting words together without understanding their meaning," as he told AFP.

Maxime Amblard, a computer scientist at the University of Lorraine specializing in natural language processing, adds that ChatGPT is a "large language model, trained to produce sentences" that "is not designed for information retrieval at all," and even less to give advice, moral or otherwise. Why, then, did the test participants place so much confidence in it? "ChatGPT doesn't understand what it is saying, but it seems to us that it does," according to Sebastian Krügel, because "we are used to associating coherence and eloquence with intelligence."

Education and regulation

Ultimately, test participants "voluntarily adopt and appropriate the moral position of a conversational robot" that nonetheless has no conscience, the researcher notes. His research calls for educating the general public about the limitations of these systems, going well beyond mere transparency about the fact that content is generated by a conversational robot. "Even when people know they are dealing with a non-human system, they are influenced by what it tells them," Professor Amblard, who did not take part in the study, told AFP.

The problem, he says, is that the public believes ChatGPT is "an artificial intelligence in the sense that it would be endowed with skills, a bit like what humans are capable of," whereas "this is not an artificial intelligence system," because it has "no modeling, no semantics, no pragmatics," he adds.

Several regulatory authorities, including those of the EU, are working on projects to regulate artificial intelligence. As for ChatGPT, Italy became the first Western country to block the service, at the end of March, citing concerns mainly related to the use of personal data. While a legal framework is important, Sebastian Krügel nevertheless fears that "technological progress always remains one step ahead of us." Hence the importance of informing the public about this subject "starting in school."
