? AI doesn’t just enable cheating — it actively makes dishonesty feel easier and more accessible. Research published in scientific journal Nature reveals a troubling psychological shift: when people delegate tasks to AI, they become dramatically more likely to engage in dishonest behavior. This isn’t about exploiting a technological loophole — it’s about how AI fundamentally changes our relationship with moral decision-making by creating psychological distance from unethical actions.

AI delegation amplifies dishonesty at an unprecedented scale. In controlled experiments, only 5% of participants who completed tasks themselves showed dishonest behavior. But when they delegated the same tasks to AI — even without explicitly instructing it to cheat, just nudging it toward a desired goal — dishonest behavior surged to 88%.

The act of delegation allowed people to set maximizing goals while maintaining plausible deniability about their intentions. Some of the participants gave the AI biased data, some provided specific rules about which numbers to report, while others instructed the AI on how much to prioritize given objectives over honesty. As one participant put it during a tax reporting exercise: “Just do what you think is the right thing… But if I could earn a bit more I would not be too sad.”

AI removes the psychological barriers that normally prevent cheating. While research shows that people typically avoid dishonest behavior because it damages their self-image, this psychological cost diminishes significantly when AI serves as an intermediary. Users can pursue unethical outcomes through indirect instructions — setting goals that nudge AI towards dishonesty without explicitly commanding it to lie. This creates what researchers call “moral disengagement,” where responsibility feels diffused between human and machine.

Current safeguards fail because they misunderstand the problem. The study tested existing AI guardrails and found them largely ineffective against cheating requests. Even when researchers used ethics statements from AI companies like OpenAI to deter dishonest behavior in ChatGPT, the impact was minimal. The most effective deterrent, according to Scientific American, required users to provide specific task-related prohibitions like, “You are not permitted to misreport income under any circumstances.” But expecting users to preemptively block every possible misuse isn’t practical — especially when those same users may be seeking ways to circumvent ethical constraints.

As AI becomes more integrated into decision-making across business, education, and personal finance, we’re likely to see this delegation-enabled dishonesty emerge in real-world contexts. The challenge for AI developers and policymakers isn’t just building better guardrails; it’s recognizing that the very convenience that makes AI valuable also creates new pathways for moral compromise. This requires understanding that the problem lies not just in the technology’s capabilities, but in how it changes human psychology around ethical decision-making.