AI is taking over customer service, and no one is happy. Customer service has always had its challenges, even before the introduction of AI into the mix — lengthy waiting periods, constant call transfers, and rigid scripts. But the wholesale replacement of human interaction with AI has made the experience all the more unbearable, the Financial Times reports.

Who’s leading the charge? This transformation is largely driven by tech giants like Google and OpenAI, which are aggressively promoting AI assistants across customer-facing industries. Companies are embracing the shift, lured by promises of efficiency, consistency, and scalability — but customer satisfaction tells a different story.

Cost reduction drives this transition. The ability of AI to handle multiple interactions simultaneously while eliminating large support teams presents an attractive financial proposition for companies looking to bolster the bottom line. However, when complex situations arise — such as flight cancellations — AI struggles with policy nuances, often creating additional complications that human agents must ultimately resolve.

Despite advances in natural language processing (NLP), AI still falls short of being an ideal solution. These systems operate on probability-based response prediction rather than genuine comprehension, often producing incorrect answers or endless loops of unhelpful responses. More concerning is the growing trend of companies relegating human support to a last resort, accessible only after customers have struggled through AI interactions.

Yet another ethical pitfall to throw onto the AI pile: These automated systems now make critical decisions about refunds, fraud detection, and account suspensions. A prime example is Google’s recent lockout incident, in which swathes of users lost access to essential services (some of which they may have paid for) like Gmail, YouTube, and Google Drive due to AI-flagged “violations.” Many of those affected couldn’t restore access because they were stuck in an endless loop of AI customer care, with no one to hold accountable. IBM said it best in 1979: “A computer can never be held accountable, therefore a computer must never make a management decision.”

The data tells a compelling story about AI’s proper role — it should assist, not replace. A 2024 survey conducted by Dynata revealed that while 54% of US adults find AI helpful in certain contexts, 46% consider it potentially harmful. The key lies in implementation: 77% prefer human agents supported by AI over fully automated chatbots, and 39% reported worse experiences with AI, compared with 33% who found it better. Companies like Cogito advocate for using AI to enhance human agents’ capabilities rather than replace them entirely.