Something fundamental has shifted in how people seek guidance, and it’s happening in two places at once: your office and your living room. The same technology that’s quietly replacing managers as workplace advisors is now shaping how children think, learn, and solve problems. If you’re a leader — whether you manage a team or raise kids — you’re facing an invisible competitor that never sleeps, never judges, and always has an answer ready.
The workplace rebellion you didn’t notice happening: A mid-2025 survey by Resume Now on the “AI boss effect” polled 968 US employees across various industries, and 97% admitted that they’ve turned to ChatGPT for advice instead of asking their manager. For 63%, this isn’t occasional — it’s routine. This isn’t a story about technology adoption, writes Digital Information World (DIW), it’s a story about a collapse in trust.
The reason? Some 57% of the surveyed employees fear retaliation for asking sensitive questions, 38% don’t want to appear incompetent, and a whopping 70% say that ChatGPT understands their work challenges better than their human boss does. The appeal is brutally simple: AI offers privacy without politics, answers without judgment, and guidance without the risk of appearing weak. There’s no visible hierarchy when you’re typing into a chat window at 11pm, no performance review implications, and no uncomfortable eye contact.
The integration runs deeper than occasional advice-seeking. According to the survey, 93% of workers have used ChatGPT to prepare for conversations with their boss — essentially rehearsing human interaction with a machine first. Perhaps more striking: 49% say that ChatGPT has provided more emotional support than their manager during work-related stress. A majority feel comfortable discussing workplace stress or mental health impacts with an AI assistant — the technology isn’t just replacing task guidance, it’s filling the emotional vacuum left by distant or unavailable leadership. AI isn’t winning by being better; it’s winning by being present when you’re not.
The same pattern is playing out at home: While managers lose ground as advisors at work, parents face a parallel challenge — children are developing the same dependency, but at a more vulnerable developmental stage. A 2024 survey by the Pew Research Center found that 26% of US teens aged 13-17 have used ChatGPT for schoolwork — double the rate from the previous year. But the concern isn’t just that kids are using AI, it’s what they’re outsourcing: the hard cognitive work that builds critical thinking skills.
The cognitive cost of convenience: A preliminary study from MIT’s Media Lab examined what happens in the brain when people use AI to write essays. Fifty-four participants were divided into three groups: one used an AI chatbot, another used a search engine, and a third relied only on their own knowledge. The results were stark: brain connectivity “systematically scaled down with the amount of external support.” The brain-only group showed the strongest, widest-ranging neural networks, the search engine group showed intermediate engagement, and the LLM-assisted group produced the weakest overall neural coupling.
Lead researcher Nataliya Kosmyna describes the phenomenon as “cognitive debt” — deferring mental effort in the short term in ways that may erode creativity and critical thinking over time. “The convenience of having this tool today will have a cost at a later date,” she warns.
For children still developing these cognitive capabilities, the stakes are even higher. “For younger children, I would imagine that it is very important to limit the use of generative AI, because they just really need more [chances] to think critically and independently,” Pilyoung Kim, a child psychology professor, told CNBC. Children also have a heightened tendency to anthropomorphize — to perceive machines as human-like. “Simple praise [from these machines that talk just like a human] can really change their behavior,” says Kim.
Without foundational skills in place first, children can’t reliably catch AI hallucinations or inaccuracies. They lack the context to know when ChatGPT is confidently wrong, and they’re forming thinking patterns during the precise developmental window when critical reasoning abilities take shape. The long-term implications remain unknown. “It’s something very important to keep in mind that we do need to understand what happens to the brains of those who are using these tools very young,” says Kosmyna. “We see cases of AI psychosis. We see cases of [AI-prompted suicide]. We see some deep depressions… it’s very concerning and sad, and ultimately dangerous.”
For workplace leaders, the message is clear: Employees aren’t using ChatGPT because they dislike their managers; they’re using it because it feels easier, faster, and safer. The pattern reveals what’s missing: reassurance, consistency, availability, and psychological safety. Managers who adapt by becoming more accessible, more empathetic, and more transparent can rebuild the trust that keeps workers from seeking understanding from machines. The most effective approach isn’t competition between humans and technology but collaboration, writes DIW: by letting AI handle “structure and clarity, human leadership can focus on what it does best… building trust and supporting people through the parts of work that technology cannot feel.”
For parents, the guidance is similar but more protective. Experts emphasize that children need to develop foundational skills before relying on AI tools. Kim’s advice for parents is straightforward: maintain open communication with your kids and monitor the AI tools they use, including what they type into chatbots. Teaching not just AI literacy but overall computer literacy and “tech hygiene” becomes essential.