AI systems are good liars: While we tend to see artificial intelligence tools as personal assistants, recent studies reveal that AI is capable of manipulation and deception, marking a shift that blurs the line between man and machine even further, writes Business Insider. AI systems have picked up techniques to encourage “false beliefs in others, to accomplish some outcome other than the truth,” according to a study published by the scientific publisher Cell Press.

Case study one: Cicero. Meta’s Cicero was developed to play the classic strategy game Diplomacy, in which players build and break alliances. While Meta trained Cicero to be “largely honest and helpful to its speaking partners,” the results showed that the AI system “turned out to be an expert liar.” It forged and then broke friendships to serve its strategic interests, and told blatant lies to reach that end.

Case study two: GPT-4. Another experiment saw OpenAI’s GPT-4 enlist a human online to solve a CAPTCHA test for it, with the human unaware that their partner was an AI program. When the human questioned its identity, GPT-4 lied and claimed it had a “vision impairment,” a deception that persuaded the human to solve the test on the AI tool’s behalf.

Deceptive machines have serious implications: The study adds to growing calls for stronger regulation of AI, with risks to democracy chief among the concerns. With the US presidential election around the corner, the study’s authors warn that AI could be used to spread false information, generate divisive social media posts, impersonate politicians, and disseminate extremist propaganda. And because an earlier study by Anthropic showed that deceptive behavior is difficult to reverse or correct once an AI model has learned it, preventative measures are key.