
Confessions of an agency exec on using AI in global campaigns despite regulation


By Kimeko McCoy • August 15, 2025

Illustration: Ivy Liu

While most AI companies are headquartered in Silicon Valley, U.S. federal lawmakers have not yet broached regulation of the sector. President Donald Trump has gone so far as to encourage the technology’s development and, in the White House’s AI framework, to call for the removal of federal regulations that could hinder AI development and deployment.

And while AI laws vary by state, the lack of federal regulation leaves a loose framework that marketers may find hard to hang their hat on. That stands in stark contrast to Europe’s existing, stricter data and privacy laws, as well as new calls for AI transparency, copyright protection and public safety as of July. Global brands may soon find themselves at a crossroads as to where and how generative AI can be used in campaigns.


However, it’s a nuanced conversation that hinges on advertisers’ appetite for AI, governmental bodies and campaign goals, according to an AI lead at a global agency.

“It’s not just regulatory, but structural and also just cultural — how ready [a brand is] for change and are people incentivized on innovation versus on risk,” said the agency exec.

In this conversation — part of our Confessions series, where we trade anonymity for candor — the AI lead goes inside the international discussions playing out regarding AI use, risks around data collection and the stance agencies are taking.

This interview has been edited for length and clarity.

How do clients determine how they want to use AI, and how far they’ll take it — brainstorming or AI in the final output?

It varies tremendously by category, by market and just company culture. This is often where I’m brought into these conversations — the client wants to have an affirmative point of view on “Will we use synthetic faces, will we use photographic style or animation, will we disclose, will we use cloned voices?” Often, I’m brought in to facilitate that conversation with a client and provide examples in the market of who’s taking a more affirmative stance and then who’s being more conservative. Geography might be a bit better predictor in some cases than category.

So geography is a better predictor in terms of client appetite? Say more. 

Germany, for example, has some very strict rules around data sovereignty and governance. We can pretty well predict that a German automaker is going to be a bit more conservative than we might see [with a non-German brand]. I’m involved in a pitch right now. It’s a financial services client, and the marketing organization wants to use AI. The financial and risk organization doesn’t want us using AI, and we’re actually playing a role mediating within the client organization to help educate and drive those decisions.

For a brand that has a global campaign, how does the AI approach change based on geography?

There are so many ways to slice this. A lot of these regulations apply very differently if you’re training your own model versus using a model [created by the tech platforms]. If you’re concerned about a lot of the bias that goes into image model training, usually the marketing agencies are not training models from scratch. What we’re doing is customizing them.

For example, for an automaker, we’re teaching the model how to appropriately render the logo or the grille. We’re uploading client-owned product photography to help it replicate that. But very often, the role we’re playing isn’t what’s explicitly addressed in the regulations, because we’re not processing user data. We’re not training custom models. We’re selecting the training corpus. Think of agencies as sitting more in the middle: we’re helping to activate, we’re helping to customize, but we don’t have the gigantic server farms and the millions of images being used by the foundation model labs.

So geography is one point of consideration when it comes to guardrails. Seems like the other is data collection.

What agencies value a ton right now is indemnification. We want commercial coverage and protections, and copyright protections. Anything that we work with our clients to create is covered commercially. The moment you are hosting and training your own models, that all goes out the window. You’ve basically traded commercial safety and protection for technological sophistication, along with the flexibility, and also the responsibility, to work in that space. So it’s largely a trade-off. When our clients [are] using AI just to create static artifacts, creating ads with AI at this point is easy. When you build AI into the customer experiences themselves, that raises the bar for the level of customization, but also for the guardrails you have to put around these tools technologically.

In the U.S. specifically, there are different data privacy regulations as it relates to AI on a state-by-state basis. Does that impact your clients’ approach to AI usage in campaigns?

The first thing that happens is any client SOW [statement of work] will go in front of our legal group. We have outside counsel who specializes in AI, intellectual property and licensing. We’ll have all of our internal approval mechanisms and governance processes. We’ll go to the experts where we need to. The governance question is often, “We want to minimize data collection, because data collection or storage is what creates a lot of the exposure. Would this work just as well if we never collected or stored any of it?” A lot of what we do is minimization, asking, “Can we create this experience without even stepping into the regulated territory?”

Has there ever been a global (or even national) campaign where AI regulations, data privacy and the like have caused an issue?

I’ve never seen a brand platform [or campaign] that was dependent on AI. It’s really more that AI is the experience layer. It’s a tool within production. It’s two out of the 10 steps.
