Google invests some USD 1 bn a year in trust and safety alone, its director of trust and safety, Amanda Storey (LinkedIn), told EnterpriseAM UAE in a sit-down in Dubai. The investment comes at a time when lawsuits against AI firms allege that chatbots played a role in the suicides of children and teenagers, and when AI has seeped into the very fabric of daily life, making questions of online trust and safety more contentious (and more crucial) than ever.

From monitoring and safeguarding the behavior of under-18 users to preventing scams and the proliferation of deepfakes, there are hundreds of engineers working behind the scenes at Google to make the digital world a safe(r) one for everybody. That work goes hand in hand with efforts alongside regulators and government agencies trying to regulate AI even as the technology develops by the day: building the plane while flying it, in every sense of the phrase.

A lot goes into not only monitoring online behavior and the use of AI, but also ensuring each locale has online guardrails that align with its laws and culture. In the Middle East, for example, Google has to contend with the same myriad of online scams found everywhere in the world, while also blocking ads that run afoul of local laws or regulations; ads related to gambling and alcohol, for example, would be prohibited, Storey said.

But the Internet has become a very complex place, with plenty of actors, both good and bad. No single company can fight the fight alone, and new, innovative solutions are needed to stop scams and other dangerous situations before they happen. Storey's massive team at Google is working on paving the way for some of these initiatives, ranging from education programs to a global clearing house for scam signals. She broke it all down for us in our conversation. Edited excerpts below:

Enterprise: First off, what do trust and safety mean in the context of the digital world? Have their meanings changed with the rapid development of AI?

AS: I like to think of it as philosophy meets Six Sigma. The philosophy is setting the policies and deciding what kind of content and behavior is acceptable on our products and services, and what kind of content and behavior we will not allow, be it scams or nonconsensual explicit imagery or apps with malware in them. Six Sigma is about the actual enforcement of those policies at speed, at scale, and at high quality.

That takes constant monitoring and refinement because bad actors are always innovating and trying to evade detection, and fundamentally are very creative in how they try to abuse the systems.

In 2024, we took down 450 mn ads and suspended 5 mn advertiser accounts specifically for scam-related violations. Celebrity scam ads in particular have been an issue; we suspended around 700k offending accounts in 2024 for that type of abuse alone. We've since seen a 90% drop in user reports of that kind of scam, which suggests the tools we're using are working. That's really encouraging to see.

We've been doing this work at Google for 25 years at this point; I think it’s some USD 1 bn or so a year that we invest in trust and safety work. It’s been baked in from the beginning that you can't be trusted unless you're safe and you can't grow unless you're trusted.

E: Has AI exacerbated or introduced any particular forms of problematic content that Google has had to address in recent years?

AS: A lot of what we'd call traditional forms of abuse, some of which I mentioned previously, are continuing and being exacerbated by AI, but we're not really seeing a dramatic rise in entirely new forms of abuse through AI.

We are seeing AI-enabled scamming behavior, but scams have been a problem that we've been wrestling with for many years at this point. Yes, we're having to evolve our policies and our detection mechanisms to keep pace with that AI enablement, but fundamentally the abuse type and approach are the same.

Similarly, something like nonconsensual explicit imagery is easier to create now using AI tools, so we've had to look at what our policies are doing to prevent people from accessing those kinds of tools. We've taken action on the Play Store and on Search to make sure it's much harder for people to get access to the kinds of apps or sites that would allow them to create that sort of imagery. And under our existing policies, if we find that kind of imagery, we take it down.

E: What’s your take on the current regulatory landscape around AI? How is Google adapting to ongoing changes to regulations as countries refine their regulatory frameworks for AI?

AS: I think we're really starting to see movement in the regulatory space. We've got around 100 AI-related regulatory files now around the world, and the EU AI Act has passed, so we're making sure that we're engaging very closely with regulators.

The UAE specifically is doing a lot of work to pioneer AI regulation, especially in the region, and to stay at the forefront of the AI space in general. What I see here is interest in infrastructure, in investments in local R&D, and in encouraging innovation and the application of AI.

We've always said that we think AI is too important not to regulate and too important not to regulate well. I think one of the interesting trends that we see in the AI regulatory space is that governments are trying to regulate AI a little bit like medical devices, where you’re regulating the device itself rather than the outcomes or the use cases.

I think it's important to make sure we're checking the use cases and not limiting a technology that can be used for enormous good in society, whether in healthcare, climate change, or economic productivity. It's a space that definitely requires a lot of discussion and collaboration to get right, but we're very committed to doing that for the long term.

E: Tell us a bit about the education layer that goes into digital trust and safety. What are some of the biggest AI education initiatives Google is working on in the region?

AS: We have a program called Abtal Al Internet to train kids on how to be safe online, which we rolled out in 2018; we've trained over 500k kids as part of it. There's also the Experience AI program, a collaboration between Raspberry Pi and Google DeepMind that's really about teaching students how to use AI within their own learning journey. We also provided a grant to the nonprofit Village Capital to boost the AI skills of underserved workers in rural areas across the UAE, KSA, and Egypt, among others. This is a really wonderful philanthropic effort, because AI can be such a powerful enabler for people to get things done in their lives in the real world.

A lot of what trust and safety is about really comes down to the provenance of information, and helping people understand what to look for in assessing provenance. We have our SynthID solution, which is basically a little watermark on anything that's been AI-generated through our products. We're also part of C2PA, an industry-wide provenance standard that we've helped develop.

E: What’s in the pipeline right now for Google in the digital safety and trust space? Any new solutions or products that you’re excited about?

AS: I think one really important area is how to manage the increasing use of AI products by kids and teens. This is crucial because while Google has obviously had a long history of thinking about real-world harm through its trust and safety teams, other frontier model providers are still having to learn that.

We've always had default policies for under-18s to provide a greater level of protection — things like automatically sending to private a YouTube account of an under-18, or automatically turning off location history for those users. Those safety-by-design features are really important, but in the context of AI, there are some experiences that look different now.

Our chatbot now has to detect if an under-18 user is putting something into one of our products that signals, for example, suicidal ideation or intent, in which case it surfaces real-world resources for support. That's a really dangerous and unsafe situation for a child to find themselves in while interacting with an AI.

Since the beginning of our AI journey, our child safety experts in trust and safety have also been doing what we call red teaming on those products before they go live: they prompt the model the way a child would, recreating scenarios typical of the classic abusive behaviors a child may be subjected to. That allows us to stress test our products before they go out.

Then comes our prohibited use policy, which simply bans certain use cases in AI, like producing child [redacted] abuse material or nonconsensual intimate imagery, for example. The key here is the monitoring aspect: being able to detect when someone is using a product in a prohibited way so we can take action.

E: Any partnerships or broader ecosystem-wide initiatives that you think will help change the game in the next few years?

AS: Issues like scams have been a really tricky problem to solve because each platform, be it a bank or a telco, only sees a slice of the user journey. No one has the full picture. If something passes through one of our platforms and then moves offline, to a different platform, a phone call, or a transaction, it's very tricky to detect as a scam.

We launched the Global Signals Exchange in October of last year with the Global Anti-Scam Alliance (GASA), with the goal of stitching together all those signals. It's a global clearing house for scam signals from different entities. We've now got 300 partners on the platform and around 500 mn signals shared. We have law enforcement and government there as well, so that we can all detect scams faster.