
AI Adoption Is Surging in Advertising, but is the Industry Prepared for Responsible AI?


AI is now a regular part of marketing and advertising: more than half of marketers already use GenAI for creative content and audience targeting, while nearly all plan to expand AI use next year, especially for content development and audience engagement. But while adoption is accelerating, safeguards are not. Over 70% of marketers have encountered an AI-related incident in their advertising efforts, including hallucinations, bias, or off-brand content, yet less than 35% plan to increase investment in AI governance or brand integrity oversight over the next 12 months. 

This research, conducted by IAB in partnership with Aymara, surveyed 125 advertising industry executives in the U.S. using the IAB Insights Engine platform powered by Attest. The data paints a striking picture: AI adoption, and with it AI-related challenges, is outpacing safeguards. And industry leaders are raising the alarm.

As AI becomes central to how brands create content and connect with audiences, the advertising industry is at an inflection point. Marketers are eager to innovate, but without clear governance, they risk brand trust, compliance, and long-term value. Now is the time for coordinated action to prioritize shared standards, stronger tools, and responsible practices to ensure AI enhances – rather than undermines – the future of advertising.


AI Is Everywhere in Marketing, and Still Growing


AI is now part of the marketing toolkit across the board. Over half of marketers are already using it for creative content, audience targeting, and customer support, with nearly as many applying it to predictive analytics.

And usage is set to grow: 58% plan to increase AI use for creative generation in the next year, along with expanded use in chatbots, targeting, and forecasting. AI isn’t just a trend; it’s quickly becoming core to how marketing gets done.


But Concerns with AI Are High: Incidents Are Already Happening at an Alarming Rate, and a Single Incident Can Impact ROI


Marketers are well aware of the risks that come with AI-generated advertising. Top concerns include misinformation and deepfakes, loss of creative control, and brand integrity risks from offensive or harmful outputs. Many also worry about consumer trust, with 37% fearing audiences will distrust ads made by AI. Other concerns include bias and fairness, regulatory compliance, and the challenge of monitoring AI content at scale. Some flagged the threat of adversarial prompts, like jailbreaks that trick models into unsafe behavior.

The takeaway: AI can pose serious ethical and quality risks, and marketers know these issues can damage trust and brand reputation. That’s why over 60% support labeling AI-generated ads, with only 15% opposed—signaling a strong push for transparency as a trust safeguard.

And these aren’t future risks. AI-related issues are already affecting advertising campaigns. In the research, 70% of marketers reported at least one AI incident. Common problems included hallucinated outputs (AI-generated content that was factually incorrect, nonsensical, or fabricated), biased or inappropriate content, and off-brand or offensive material. Others saw loss of creative control and failures in regulatory compliance.

The consequences were significant: 40% had to pause or pull ads, over a third dealt with brand damage or PR issues, and nearly 30% had to conduct internal audits. Some saw wasted budgets, client complaints, or legal concerns. Only 6% said the impact was minimal.

These early missteps are a clear warning – without proper oversight, AI can scale risks as fast as it scales output.


Patchy Safeguards and a False Sense of Security


Despite growing risks, AI oversight remains inconsistent. Most teams rely on human review and brand integrity checklists, which are important but basic steps. More advanced practices, such as consulting external AI ethics experts, running red-team testing, and using automated evaluation tools, are far less common. Alarmingly, 10% of respondents either do nothing or aren’t sure how they manage AI risks.

Yet confidence remains high. Nearly 90% say they feel prepared to catch AI issues before launch. This may reflect trust in existing workflows, but given that 70% have already had incidents, it also suggests a false sense of security.

The reality: only one-third of brands, agencies, and publishers have adopted or plan to adopt any formal governance tools, leaving major gaps (IAB State of Data 2025: The Now, The Near, and The Next Evolution of AI for Media Campaigns). There’s a strong need – and opportunity – for more structured, scalable safeguards across the industry, including systems that flag risk, ensure alignment, and protect brand trust before campaigns reach the public.


Industry Calls for Standards, Tools, and Transparency


Marketers are calling for stronger AI governance. When asked what’s needed to keep AI in advertising safe and effective, top priorities included regular AI audits for bias and integrity, transparency in AI decision-making, data privacy protections, and IP safeguards for AI-created content. 

In short, marketers want tools, policies, and standards to close real governance gaps. Only 6% believe current safeguards are enough. This is an opportunity for the industry to define systems, tooling, and benchmarks that can ensure AI outputs are safe, accurate, and aligned with brand values.


Accountability and Leadership: Who’s Minding the AI?


One major challenge in AI governance is ownership. When asked who leads these efforts, responses varied, with the majority citing executive leadership or a dedicated AI task force, marketing/creative teams, or legal/compliance teams. Some also rely on data science or MarTech teams. Only 17% of organizations currently use an external partner for AI governance, suggesting most are building internal capabilities. But not all have clear accountability: 14% say no one owns AI governance, and others aren’t sure who does.

Without structured ownership, risks can fall through the cracks. As companies scale GenAI, it’s critical to define who is responsible, whether that is a chief AI officer, a cross-functional council, or a dedicated team. This clarity is essential to move from good intentions to practical solutions that enable oversight, testing, and enforcement, regardless of organizational structure. Once roles are defined, organizations need to ensure their third-party partners know who is responsible in order to enable better collaboration, strengthen industry relationships, and help mitigate shared risks.


Third-Party Support for Governance 


While most companies currently manage AI governance in-house, there’s strong interest in external support. When asked if they’d consider a third-party solution to evaluate risks like hallucinations, bias, or off-brand content, over 90% said yes.

Many see outside expertise as a valuable safety net. One marketer said it would offer “peace of mind,” while another noted it would “reduce risk to our brand and business.” Only a few were skeptical, mostly due to confidence in internal teams or concerns about cost. But those views were rare. Marketers are open to, and eager for, expert tools and guidance to ensure their GenAI use is safe, effective, and aligned with brand values. This presents a timely opportunity to partner with trusted third parties to strengthen AI oversight across the industry.


No Time to Waste: A Call to Action on Responsible AI


This survey shows an industry moving fast on AI – but still building the guardrails as it goes. Advertisers are excited about AI’s potential for content, targeting, and engagement. But many have already seen the risks firsthand: misinformation, bias, and off-brand content that damage trust and waste budget.

Marketers are sending a strong message: they want help in the form of better standards, stronger tools, and expert support to use AI responsibly. Now is the time for collective action. Brands, agencies, publishers, and platforms all have a role to play in shaping AI governance.

Here are four steps to move forward:

  1. Make AI governance a priority. Assign ownership, engage leadership, and establish a cross-functional task force if you don’t have one already. Prioritize not only who is responsible, but also how responsibility is translated into day-to-day workflows, review processes, and evaluation methods.
  2. Build your best practices. Start with foundational checks like human review, policy guidelines, and bias testing, and build toward structured evaluations, automated audits (with human-in-the-loop oversight where appropriate), and continuous monitoring that can scale with content volume and model complexity.
  3. Bring in expert support to scale. Accelerate safely with trusted experts and third-party tools designed to evaluate, test, and certify AI-driven content at scale, so your team can move faster without increasing risk.
  4. Lead with transparency. Don’t just say you use AI responsibly – prove it. Build systems that track how AI is used, flag risks, and generate audit-ready records. Stay vigilant on fairness, privacy, and ethics. Consumer trust depends on it.

The data is clear: AI is undeniably transforming advertising, but incidents are already happening, current safeguards aren’t keeping pace, and marketers need better solutions. This isn’t a future problem to solve; it’s a present reality demanding immediate action to unlock AI’s full potential. With a few practical steps, responsible AI is not only possible – it can be the norm.

Survey Methodology

This research was conducted using the IAB Insights Engine platform, powered by Attest. It surveyed 125 U.S. ad industry executives at companies with 50+ employees who have active involvement in, or visibility into, how their company uses AI in advertising and marketing. The survey was conducted in July 2024.

Authors

Jack Koch
SVP, Research & Insights, IAB

Caraline Pellatt
Co-Founder and Co-CEO, Aymara