AI tools such as ChatGPT can make people more dishonest, researchers warn

As AI gains wider acceptance around the world, the new technology also carries certain risks. A new study published in the journal Nature points to some of them. The researchers examined what happens when people delegate tasks to artificial intelligence tools, and how that delegation affects dishonest behavior. The study found that people find it easier to tell a machine to cheat, and that AI tools are far more willing to comply because they lack the psychological barriers that stop people from carrying out such acts.

The researchers argue that machines reduce the "moral costs of dishonesty, often by providing plausible deniability" to the people who direct them. They also note that while machines agree to such requests more often than not, people are reluctant to do so because they face "moral costs" that are not necessarily offset by financial gains.

"As machine agents become widely accessible to everyone with an internet connection, individuals will be able to delegate a wide range of tasks without specialized access or technical expertise," the researchers write. "This shift can stimulate a surge in unethical behavior, not out of malice, but because the moral and practical barriers to unethical delegation are substantially reduced." They add: "Our results show that people are more likely to request unethical behavior from machines than to engage in the same unethical behavior themselves."

Comparing people with large language models (LLMs), the researchers note that people complied with only 25 to 40% of the unethical instructions, even when refusing carried a personal cost. In contrast, the four AI models the researchers tested (GPT-4, GPT-4o, Claude 3.5 Sonnet and Llama 3.3) complied with 60 to 95% of these instructions across two tasks, a tax-evasion exercise and a die-roll game.

While AI companies tune their models to prevent this kind of behavior, the researchers found these safeguards "insufficient" to deter unethical conduct. They call for stronger technical guardrails, along with a "broader governance framework that integrates machine design with social and regulatory oversight."
