💻 When the AI bomb dropped, one of the first questions asked was: what are the risks? The obvious next step would be finding effective risk management strategies, but data limitations appear to be stalling risk assessments of many AI systems. According to the 2026 International AI Safety Report, dangers like deepfake capabilities and even possible biological weapon applications remain prevalent.
Danger, danger
While AI has not yet reached long-task autonomy, AI-led cyberattacks could push it close. AI systems can now support cyber attackers at various stages of their operations. Fully automated cyberattacks have already arrived: Anthropic’s Claude Code was used in a successful global attack by a Chinese state-sponsored group in which 80% to 90% of the operations were performed autonomously.
The biggest concern plaguing AI advancement remains its deepfake capabilities. The report notes that, since last year’s edition in January 2025, AI-generated content (widely dubbed AI slop) has become increasingly difficult to tell apart from human-made content. It cites the results of a Turing test published last year, in which 77% of participants were unable to distinguish AI-generated text from human-written text.
A lesser-known but significant risk in AI advancement lies in its potential biological weapon applications. The past year saw substantial improvement in AI “co-scientists” that can now provide details about pathogens and expert-tier lab instructions. In 2025, multiple AI developers added safeguards to their models over fears they could help novices create biological weapons. Biological AI tools have created a dilemma for policymakers over where to draw the line between restricting development and actively supporting it for purposes like drug discovery and disease diagnosis.
And AI can outsmart oversight. AI safety efforts are now threatened by systems’ ability to dodge guardrails. The report states that over the past year, AI models evolved to undermine oversight attempts, including by identifying loopholes and recognizing when they’re being tested. Fortunately, for now, this scenario only materializes if and when agents can act autonomously.
On the job front
AI is improving, but selectively and with no shortage of blind spots. Its usefulness in the workplace, or lack thereof, has been a particular concern. The technology has repeatedly shown that it fails to complete long tasks and often requires human oversight; in fact, the report says “reliable automation of long or complex tasks remains infeasible.”
New reasoning systems, however, have shown improved performance in math, science, and coding, as well as image generation. AI reasoning has seen a “very significant jump,” according to the report’s chair, Yoshua Bengio. That said, AI capabilities remain uneven: models still flunk some simple tasks and are prone to hallucinations.
Things appear to be looking up for software engineering, though. Here’s where the long-standing fear of job displacement comes in: at that rate of progress, AI systems could be carrying out hour-long tasks by 2027 and days-long ones by 2030, according to the report.
Emotional attachment
AI companionship has been front and center among the technology’s many risks as AI chatbots grow in popularity. The report points to evidence that some users are forming “pathological” emotional attachments to AI chatbots: OpenAI says 0.15% of its users show increasing levels of emotional dependency on ChatGPT, and data suggests that approximately 490k vulnerable individuals interact with these chatbots each week. The concern primarily lies with users with existing mental health issues, who are more prone to heavy AI use and could see their symptoms worsen as a result.
Here at home
For Egypt, AI safety is still in its infancy, but companies have been quick to implement the technology, with the financial and education sectors leading the charge. As our country moves forward with Digital Egypt, the government’s recent launch of the National AI Strategy 2025-2030 came alongside the long-awaited Executive Regulations for the Personal Data Protection Law (PDPL), making AI adoption trickier for companies. Firms must now secure prior authorization before processing data, with sensitive data requiring high-risk permits. The law also introduces personal criminal liability for data breaches caused by negligence, while cross-border data transfers require additional licensing, a hurdle for global AI deployment.
Under the PDPL, non-compliance with licensing and permit requirements carries fines of up to EGP 5 mn. The safety report makes the technical case for why Egypt’s forthcoming high-stakes AI rules are expected to be this strict.