Brand Safety in Digital Advertising: 2026 Guide

Racen Dhaouadi
March 16, 2026

In early 2017, The Times of London published an investigation that changed digital advertising overnight. Major brands, including AT&T, Verizon, and Johnson & Johnson, discovered their ads were running alongside extremist content on YouTube. Within weeks, advertisers pulled more than $750 million in spend from the platform. Careers ended. Agencies scrambled. Google overhauled its entire ad placement system.
Brand safety is the practice of ensuring your ads don't appear next to harmful, inappropriate, or controversial content that could damage your reputation. In programmatic and display advertising, where automated systems place ads across millions of websites and apps, advertisers often have no direct control over which pages their ads appear on. The gap between "I bought an ad" and "where my ad actually showed" is where brand safety risk lives.
This guide covers what brand safety means in practice, the types of risks you face, how brand safety connects to ad fraud, the industry standards designed to help, and how to build a strategy that protects both your budget and your reputation.
What Is Brand Safety and Why Does It Matter?
Brand safety ensures your ads never appear alongside harmful content like hate speech, violence, misinformation, or pirated material that damages your reputation.
Most marketers understand brand safety conceptually. Fewer understand how much it costs when it fails, or how frequently it fails quietly without making headlines.
The Reputation Cost of Bad Ad Placement
Advertising creates implicit endorsement. When your ad appears next to a piece of content, consumers associate your brand with that content, whether you chose the placement or not. It works the same way offline. Imagine your company's billboard installed next to a hate rally. Even people who don't assume you endorsed the rally will wonder why you let your name appear there.
Research from the IAB found that 65% of consumers say they would think less of a brand whose ad appeared alongside offensive content. One in three said they would stop purchasing from that brand entirely. The perception damage is real, measurable, and difficult to reverse.
And unlike most marketing mistakes, brand safety failures tend to go viral. A screenshot of your ad next to extremist content spreads faster than any campaign you could launch intentionally.
A Brief History of Brand Safety Crises
Brand safety wasn't a mainstream concern until advertisers started getting burned publicly.
2017: The YouTube "Adpocalypse." The Times of London investigation triggered the largest brand safety crisis in digital advertising history. Hundreds of major advertisers paused YouTube spend. Google responded by tightening monetization policies and introducing stricter content categorization. The event forced the entire industry to take brand safety seriously.
2019: The Christchurch shooting. A terrorist attack was live-streamed on Facebook. Copies of the video spread across platforms before moderation could act. Brands discovered their ads running as pre-roll on copies of the footage. The incident accelerated platform investment in real-time content moderation.
2020 onward: Misinformation at scale. COVID-19 misinformation, election misinformation, and conspiracy content created a new category of brand safety risk. Brands found their programmatic ads funding misinformation sites through automated ad placements, effectively subsidizing the content they opposed.
Each crisis pushed industry standards forward. But the underlying problem remains: automated ad placement at scale creates brand safety risk by default.
The Financial Impact
Brand safety failures aren't just reputation problems. They're financial ones.
CHEQ estimated $2.6 billion in brand-safety-related ad waste in a single year. The Association of National Advertisers (ANA), working with Jounce Media, found that "Made for Advertising" (MFA) sites receive roughly 15% of all programmatic ad spend. That's approximately $13 billion annually going to low-quality inventory that delivers near-zero business value.
These aren't theoretical numbers. They represent real budget flowing to placements that damage brands, reach no real customers, and generate no return.
What Are the Different Types of Brand Safety Risks?
Brand safety risks include content adjacency with harmful material, MFA sites that exist only to serve ads, UGC platform exposure, and misinformation funding.
Brand safety isn't a single problem. It's a category of risks that show up in different ways depending on where and how you advertise.
Content Adjacency (The Core Risk)
The most straightforward brand safety risk is your ad appearing directly next to harmful content. The Global Alliance for Responsible Media (GARM) defines 12 risk categories: adult and explicit content, arms and ammunition, crime and harmful acts, death and injury, drugs and alcohol, hate speech and discrimination, military conflict, obscenity and profanity, online piracy, spam, terrorism, and sensitive social issues.
In programmatic advertising, where algorithms decide ad placement based on audience targeting and bid price, content adjacency violations happen automatically. A toy brand's ad might appear on a page discussing drug use because the algorithm saw "family" in the page metadata. A financial services ad might run alongside a story about a Ponzi scheme because the topic matched "finance."
The algorithm optimizes for audience match, not content context. That disconnect is the root of most content adjacency violations.
Made for Advertising (MFA) Sites
MFA sites are web pages built purely to serve ads, not to provide genuine value to readers. They use clickbait headlines, slideshow formats that force 20 page loads for a single article, and auto-playing video ads stacked throughout the page. The content exists only to keep visitors clicking long enough to generate ad impressions.
The ANA and Jounce Media found that MFA sites account for roughly 21% of all programmatic ad impressions. They contribute virtually zero business outcomes for advertisers. No one reads MFA content with purchase intent. No one remembers what brand's ad they saw on a "You Won't Believe What Happened Next" slideshow.
MFA sites are also where bot traffic concentrates. These sites need high traffic volume to generate ad revenue, but real human traffic is expensive. So they buy cheap traffic from bot networks to inflate their numbers. The result is your ad appearing on a junk site, shown to a bot, at your expense. This is the critical intersection where brand safety and ad fraud overlap.
User-Generated Content (UGC) Platforms
YouTube, TikTok, X (formerly Twitter), Reddit, and Facebook host billions of pieces of content posted daily. Moderation systems can't review everything before it goes live. Your ad might run as pre-roll on a video that was fine when uploaded but went viral for the wrong reasons afterward.
Platform responses to brand safety vary significantly. YouTube introduced advertiser-friendly content guidelines and tiered monetization. Meta maintains inventory filters and content category controls. X weakened its moderation policies after 2023, prompting some major advertisers to pull spend entirely. Each platform requires a different level of vigilance.
Misinformation and Controversial News
Programmatic ads on misinformation sites effectively fund those sites. Every impression generates revenue for the publisher, which means your ad dollars become their operating budget. You're paying to keep misinformation online.
The challenge is nuance. Where does legitimate news coverage end and misinformation begin? A story about a controversial vaccine study might be legitimate journalism from a reputable outlet, or it might be fabricated content on a conspiracy site. Keyword-level blocking can't tell the difference. This is where brand suitability, a more nuanced approach, becomes important.
What Is the Difference Between Brand Safety and Brand Suitability?
Brand safety blocks objectively harmful content. Brand suitability goes further by filtering content that conflicts with your specific brand values and audience.
The distinction matters because early brand safety approaches caused almost as many problems as they solved.
Why "Block Everything" Doesn't Work
The first generation of brand safety was binary. Block entire content categories. No news. No politics. No health content. No religion. No anything that might be controversial.
The result was devastating for legitimate publishers. Reputable news organizations lost billions in programmatic ad revenue because keyword-level blocking treated them the same as extremist sites. A Pulitzer-winning war correspondent's article and a terrorist propaganda video both contain the word "bomb." Blocking the keyword treats them identically.
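The "bomb" problem above is easy to demonstrate. This is an illustrative sketch only (the blocklist and example texts are made up), but it shows why first-generation keyword blocking treats a war correspondent's report and propaganda identically:

```python
# Illustrative only: why keyword-level blocking cannot separate
# journalism from harmful content. Both texts contain "bomb".
BLOCKLIST = {"bomb", "attack", "shooting"}

def keyword_blocked(text: str) -> bool:
    """Naive first-generation brand safety: block on any keyword hit."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return not BLOCKLIST.isdisjoint(words)

journalism = "Correspondent reports from the city after the bomb attack"
propaganda = "Video glorifies the bomb attack and urges more violence"

# Keyword matching blocks both identically, so the reputable
# publisher loses the ad revenue along with the propaganda site.
print(keyword_blocked(journalism))  # True
print(keyword_blocked(propaganda))  # True
```

The function has no notion of context, only vocabulary, which is exactly the limitation modern contextual tools were built to overcome.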
Advertisers suffered too. By blocking entire categories, they missed audiences reading reputable journalism, engaging with health content, and following political developments. Overcorrection reduced reach without actually improving outcomes.
Brand Suitability as the Modern Approach
Brand suitability replaces the binary block/allow model with context-aware filtering that considers both the content and the brand.
GARM's Brand Suitability Framework offers three risk tiers for each content category: Low Risk, Medium Risk, and High Risk. Each brand sets its own thresholds based on its values, audience, and risk tolerance.
An energy drink brand might rate extreme sports content as Low Risk because it aligns with its audience and brand positioning. A children's cereal brand would rate the same content as High Risk. A financial services company might accept news about economic policy (relevant audience) but block news about financial crimes (too close to home). Same content category, different brand decisions.
This is the key shift. Brand suitability isn't about avoiding all risk. It's about making informed, brand-specific choices about which contexts are appropriate for your advertising.
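The tiered model above can be sketched as a simple policy lookup. This is a minimal illustration in the spirit of the GARM framework, not the official taxonomy: the category names, brand profiles, and tier labels are hypothetical, and real implementations map to vendor segment IDs rather than plain strings:

```python
# Lower number = less severe. A brand's profile records the most
# severe tier it will accept for each content category.
RISK_ORDER = {"low": 0, "medium": 1, "high": 2}

# Hypothetical brand profiles (categories absent from a profile
# carry no restriction).
BRAND_PROFILES = {
    "energy_drink": {"extreme_sports": "high", "alcohol": "medium"},
    "childrens_cereal": {"extreme_sports": "low", "alcohol": "low"},
}

def placement_allowed(brand: str, category: str, page_risk: str) -> bool:
    """Allow the placement if the page's risk tier for this category
    does not exceed the brand's configured tolerance."""
    tolerance = BRAND_PROFILES[brand].get(category)
    if tolerance is None:
        return True  # brand expressed no opinion on this category
    return RISK_ORDER[page_risk] <= RISK_ORDER[tolerance]

# Same content, different brand decisions:
print(placement_allowed("energy_drink", "extreme_sports", "high"))      # True
print(placement_allowed("childrens_cereal", "extreme_sports", "high"))  # False
```

The point of the sketch is the shape of the decision: suitability is a per-brand threshold over shared risk tiers, not a global block/allow switch.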
Implementing Suitability at Scale
Pre-bid contextual targeting tools from vendors like Integral Ad Science (IAS) and DoubleVerify scan page content before your ad serves and categorize it against your suitability profile. Post-bid verification confirms where ads actually appeared and flags violations.
The technology has improved significantly. Modern tools use natural language processing and image analysis to understand page context, not just keyword matching. They can distinguish between a news article about a school shooting (journalism) and a page glorifying violence (harmful content).
But no tool is perfect. Human review of placement reports remains necessary, especially for edge cases that automated systems misclassify.
How Does Brand Safety Connect to Ad Fraud?
Brand-unsafe inventory is where ad fraud concentrates. MFA sites, domain spoofing, and bot traffic create a cycle where your ads fund junk sites shown to fake visitors.
Brand safety and ad fraud are usually discussed as separate problems. They're not. They overlap heavily, and solving one without addressing the other leaves a gap in your defenses.
MFA Sites Are a Brand Safety AND Ad Fraud Problem
MFA sites need traffic volume to make money from ads. Real traffic is expensive to acquire. Bot traffic is cheap to buy from click farms and automated networks.
The result is a cycle. The MFA site buys bot traffic. Your programmatic ad appears on the MFA site. A bot "views" your ad. You pay for the impression. Your brand appears on a junk site that no real human visited. You lose twice: once on the wasted spend, and once on the brand association with low-quality content.
The ANA found that 15% of programmatic spend goes to MFA sites. Industry bot traffic reports suggest 10-30% of traffic on those sites is automated. That means billions of dollars are spent annually showing ads to bots on sites that damage brand reputation.
Domain Spoofing Is Both Brand Safety and Fraud
Domain spoofing happens when a fraudster creates a low-quality site and tells ad exchanges it's a premium publisher. The advertiser's dashboard says the ad ran on Forbes or ESPN. It actually ran on a spam site filled with clickbait and pirated content.
This is simultaneously a brand safety issue (your ad appeared on a harmful site) and an ad fraud issue (the inventory was misrepresented). The ads.txt and sellers.json standards partially address domain spoofing by letting buyers verify authorized sellers. But enforcement isn't universal, and smaller exchanges still have gaps.
Bot Traffic on Unsafe Inventory Inflates Impressions
Bots don't just waste money. They create the illusion that brand-unsafe inventory performs. A campaign report might show 500,000 impressions across the Display Network with a reasonable CPM. The numbers look fine. But if 30% of those impressions were bots on MFA sites, the advertiser is making decisions based on inflated data.
This connects directly to algorithm poisoning. Bidding algorithms trained on bot-inflated data optimize toward more of the same junk inventory. The more bots your campaigns attract on unsafe sites, the more the algorithm seeks similar placements. Over time, your campaigns drift toward lower-quality inventory without any visible error in the dashboard. For more on spotting this pattern, see our guide on how to detect bot traffic on your website.
Want to see how much of your traffic is real? Try our free traffic analyzer. No signup required.
What Industry Standards and Tools Protect Brand Safety?
GARM, TAG, ads.txt, sellers.json, and verification vendors like IAS and DoubleVerify provide the frameworks and tools that enforce brand safety at scale.
The ad industry has built a layered set of standards and tools to address brand safety. None is sufficient alone, but together they form a meaningful defense.
GARM (Global Alliance for Responsible Media)
GARM is an industry initiative from the World Federation of Advertisers (WFA). It created two foundational frameworks. The Brand Safety Floor defines content categories that are universally unsafe for advertising (terrorism, child exploitation, etc.). The Brand Suitability Framework provides tiered risk assessment so brands can make customized decisions about other content categories.
GARM provides the common language. Without it, every platform, agency, and advertiser would define "unsafe" differently, making consistent enforcement impossible. It was briefly disbanded in 2024 amid legal challenges but was reconstituted, and its frameworks remain the industry standard.
TAG (Trustworthy Accountability Group)
TAG is an industry self-regulatory body focused on fighting criminal activity in digital advertising. It offers two relevant certifications: the TAG Certified Against Fraud seal and the TAG Brand Safety Certified seal.
TAG-certified channels report fraud rates more than 90% lower than non-certified channels. For advertisers choosing demand-side platforms (DSPs) and ad networks, asking whether they're TAG certified is one of the simplest and most effective vetting questions.
ads.txt and sellers.json
ads.txt is an IAB Tech Lab standard that lets publishers declare which sellers are authorized to sell their ad inventory. A publisher places a text file on their domain listing every authorized digital seller. Buyers can verify that the entity offering inventory is actually authorized to sell it, which helps prevent domain spoofing.
sellers.json is the supply-side complement. Ad exchanges publish a list of all authorized sellers on their platform, creating a chain of verification from publisher to buyer.
Both standards have high adoption among major publishers but incomplete adoption among smaller sites. And they have an important limitation: they verify authorization, not content quality. A site can have a perfectly valid ads.txt file and still be an MFA junk site with zero editorial value.
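Checking an ads.txt declaration is mechanically simple, which is part of why the standard spread. Below is a minimal sketch following the IAB Tech Lab line format (`<ad system domain>, <seller account ID>, <DIRECT|RESELLER>[, <certification authority ID>]`). The publisher name, account IDs, and exchange domains are made up, and in practice you would fetch `https://<publisher>/ads.txt` over HTTP rather than inline the file:

```python
# Hypothetical ads.txt contents for an example publisher.
ADS_TXT = """
# ads.txt for example-publisher.com
google.com, pub-1234567890123456, DIRECT, f08c47fec0942fa0
exampleexchange.com, 540871234, RESELLER
"""

def parse_ads_txt(text: str) -> set[tuple[str, str]]:
    """Return the set of (ad system domain, account id) pairs declared."""
    entries = set()
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        fields = [f.strip() for f in line.split(",")]
        if len(fields) >= 3:  # domain, account id, relationship at minimum
            entries.add((fields[0].lower(), fields[1]))
    return entries

authorized = parse_ads_txt(ADS_TXT)

# A bid offering this publisher's inventory through an undeclared
# seller is a domain-spoofing red flag.
print(("google.com", "pub-1234567890123456") in authorized)  # True
print(("shadyexchange.com", "999") in authorized)            # False
```

Note what this check does and does not prove: it verifies that a seller is authorized, and nothing about whether the site behind the file is worth advertising on.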
Verification Vendors (IAS, DoubleVerify, Oracle Moat)
Verification vendors offer two capabilities. Pre-bid verification scans page content before your ad serves and blocks placements that don't meet your suitability criteria. Post-bid verification confirms where ads actually appeared and reports on brand safety violations after the fact.
These tools also measure viewability, fraud, and performance. They overlap with bot detection but focus on the supply side (where ads appear) rather than the demand side (who sees them). A complete defense needs both.
Cost is typically $0.01-0.05 CPM as an overlay on your media spend. For any significant programmatic budget, the cost is trivial compared to the waste and reputation risk it prevents.
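To put that overlay cost in perspective, here is the arithmetic for a hypothetical monthly programmatic buy, using the midpoint of the $0.01-0.05 range cited above (the impression volume and media CPM are assumptions for illustration):

```python
# Rough cost math for a verification overlay against a
# hypothetical monthly programmatic buy.
monthly_impressions = 50_000_000
media_cpm = 3.00     # assumed dollars per 1,000 impressions
overlay_cpm = 0.03   # midpoint of the $0.01-0.05 range

media_cost = monthly_impressions / 1_000 * media_cpm
overlay_cost = monthly_impressions / 1_000 * overlay_cpm

print(f"Media spend:  ${media_cost:,.0f}")    # $150,000
print(f"Verification: ${overlay_cost:,.0f}")  # $1,500 -- about 1% of media
```

At roughly 1% of media spend, the overlay costs far less than the MFA and bot waste it helps catch.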
How Do Google, Meta, and Programmatic Platforms Handle Brand Safety?
Google offers content exclusions and inventory types. Meta provides Audience Network controls. Programmatic relies on pre-bid filtering and inclusion lists.
Each platform gives you different tools with different levels of granularity. Knowing what's available on each platform is the first step to configuring your brand safety policy.
Google Ads Brand Safety Controls
Google provides several layers of brand safety controls for advertisers.
Content exclusion settings let you exclude by content category: tragedy and conflict, sensitive social issues, profanity, sexually suggestive content, and more. These apply across Display, YouTube, and Discovery campaigns.
Inventory types offer three tiers: Expanded (maximum reach, minimum safety), Standard (the default, excludes most extreme content), and Limited (most restrictive, excludes any potentially sensitive content). Standard is appropriate for most advertisers. Limited is recommended for children's brands, healthcare, and other sensitive verticals.
Placement exclusions let you block specific websites, apps, or YouTube channels. Review your Placement report monthly. The automated settings prevent the worst violations, but regular auditing catches what filters miss. For more on reviewing your Google Ads data for anomalies, see our guide to Google Ads click fraud.
Meta (Facebook/Instagram) Brand Safety Controls
Meta's brand safety tools are simpler than Google's but cover the key risks.
Inventory filter works similarly to Google's tiers: Full, Standard, and Limited. Standard is the default and appropriate for most advertisers.
Block lists let you block specific pages, profiles, or content categories from showing your ads. These are manual and require ongoing maintenance.
Audience Network is where Meta's brand safety risk is highest. The Audience Network extends your ads to third-party apps and websites outside of Facebook and Instagram, where Meta has less control over content quality. Disabling Audience Network is the single highest-impact brand safety measure on Meta for advertisers who prioritize quality over reach.
Meta also integrates with IAS and DoubleVerify for third-party verification on certain ad placements, adding an independent layer of oversight.
Programmatic Brand Safety
Programmatic advertising carries the highest brand safety risk because of the multi-layered supply chain between advertiser and publisher.
Pre-bid brand safety segments from IAS or DoubleVerify, applied in your DSP, scan page content before you place a bid. If the page doesn't meet your suitability criteria, the bid is suppressed automatically.
Inclusion lists beat exclusion lists. Curating 500 trusted publisher sites is more effective than trying to block 500,000 bad ones, because new junk sites appear faster than you can exclude them.
Private marketplace (PMP) deals connect you directly with known publishers at negotiated rates. The CPMs are higher than open auction, but the traffic quality and brand safety are dramatically better. For high-stakes campaigns, PMPs offer the strongest brand safety guarantee in programmatic. Our ad fraud prevention guide covers PMPs and inclusion lists in more detail.
Supply path optimization (SPO) means working with fewer, more trusted supply-side platforms (SSPs). Each intermediary in the programmatic chain adds opacity and risk. Shorter supply paths give you more visibility into where your ads actually appear.
How Do You Build a Brand Safety Strategy?
Build a brand safety strategy with a risk framework, suitability tiers, verification tools, regular placement audits, and an incident response plan.
Tools and platform settings are important, but they're not a strategy. A strategy starts with decisions about what your brand can and cannot be associated with, then translates those decisions into consistent action across every channel.
Define Your Risk Tolerance
Start by answering one question: what content would cause real damage to our brand if our ad appeared next to it?
Map your brand against the GARM content categories. Some are universal no-gos. No brand wants to be associated with terrorism or child exploitation. Others depend entirely on your brand, audience, and industry. An alcohol brand might accept alcohol-adjacent content. A baby formula brand blocks it. A news aggregator might accept political content. A children's entertainment brand blocks it.
Document these decisions. They become your brand safety policy. Without documentation, each team makes their own judgment calls, and inconsistency creates gaps.
Configure Platform Controls
Apply your policy to every ad platform you use. Set content exclusions in Google Ads, inventory filters in Meta, and pre-bid segments in your DSP.
Start restrictive, then loosen based on data. It's easier to expand safe placements than to recover from a brand safety incident. If your exclusions are blocking too much legitimate inventory, you'll see it in reach and frequency metrics.
Coordinate across teams. If your Google Ads team excludes gambling content but your programmatic team doesn't, you have a gap. Brand safety policies should be platform-agnostic, even if the implementation details differ.
Layer Verification and Detection
Verification vendors monitor where your ads appear on the supply side. Bot detection tools monitor who sees your ads on the demand side. Both layers matter because brand-unsafe inventory and bot traffic frequently overlap.
Together, they answer two questions. "Was my ad in a safe place?" (verification). "Was it seen by a real person?" (detection). If you're only asking one question, you're only seeing half the picture. For tool recommendations, see our guide to the best bot detection software in 2026.
You can also use tools like our ad spend calculator to quantify how much of your budget may be going to waste on unsafe, bot-heavy inventory.
Audit and Iterate Monthly
Pull placement reports from every platform. Review the top 50 domains and apps by spend. Flag any placement you don't recognize, any with no conversions, and any showing suspiciously high impressions with zero engagement.
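The flagging step lends itself to a quick script. This is a sketch under assumptions: it supposes your placement report exports as CSV with hypothetical columns `domain, spend, impressions, clicks, conversions`, and it uses an arbitrary engagement floor (CTR under 0.05%) that you would tune to your own benchmarks:

```python
import csv
import io

# Hypothetical exported placement report, inlined for the sketch.
REPORT = io.StringIO("""\
domain,spend,impressions,clicks,conversions
trusted-news.com,1200.00,400000,3200,41
slideshow-clickbait.example,950.00,900000,180,0
niche-blog.net,80.00,20000,210,3
""")

def flag_placements(rows):
    """Flag placements with zero conversions or suspiciously low
    engagement (CTR under 0.05%) relative to impression volume."""
    flagged = []
    for row in rows:
        impressions = int(row["impressions"])
        ctr = int(row["clicks"]) / impressions if impressions else 0.0
        if int(row["conversions"]) == 0 or ctr < 0.0005:
            flagged.append(row["domain"])
    return flagged

print(flag_placements(csv.DictReader(REPORT)))
# ['slideshow-clickbait.example']
```

A domain with 900,000 impressions, near-zero clicks, and no conversions is the classic MFA-plus-bot signature; anything the script flags goes to human review before it lands on your exclusion list.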
Update your exclusion lists. Add new problematic placements. Remove false positives that were blocking good inventory. Track your brand safety violation rate over time. This number is your benchmark for improvement.
Brand safety isn't a one-time configuration. It's an ongoing practice. Fraudsters create new sites, platforms update their policies, and content trends shift. Monthly audits keep your defenses current.
Hyperguard detects bot traffic and ad fraud in real time with multi-layer bot detection. Setup takes under 5 minutes. See how it works or get started today.
Frequently Asked Questions
What is brand safety in digital advertising?
Brand safety is the practice of ensuring your ads don't appear next to harmful, inappropriate, or controversial content such as hate speech, violence, terrorism, or misinformation. In programmatic advertising, where ads are placed automatically across millions of websites, brand safety controls prevent your brand from being associated with content that could damage your reputation or alienate customers.
What is brand suitability?
Brand suitability goes beyond brand safety by filtering content based on your specific brand values, not just universal harm categories. While brand safety blocks objectively dangerous content (terrorism, child exploitation), brand suitability lets you customize which content categories are appropriate for your particular brand and audience. The GARM framework provides Low, Medium, and High Risk tiers for this purpose.
What are MFA (Made for Advertising) sites?
MFA sites are web pages built solely to serve ads rather than provide genuine content. They use clickbait headlines, excessive page loads, and auto-playing video ads to maximize impressions. ANA research found MFA sites receive roughly 21% of all programmatic impressions while delivering virtually zero business value. They also frequently use bot traffic to inflate their visitor numbers.
How does brand safety relate to ad fraud?
Brand safety and ad fraud overlap significantly. MFA sites buy cheap bot traffic to inflate impressions. Domain spoofing misrepresents junk sites as premium publishers. Both result in your ads appearing on harmful inventory, shown to non-human traffic, at your expense. Protecting against one without addressing the other leaves a gap in your defense.
What is GARM and why does it matter for brand safety?
GARM (Global Alliance for Responsible Media) is a World Federation of Advertisers initiative that created the industry-standard Brand Safety Floor and Brand Suitability Framework. It provides a common language for advertisers, agencies, and platforms to define what constitutes unsafe and unsuitable content. Without GARM, every participant in the ad ecosystem would use different definitions, making consistent enforcement impossible.
What is ads.txt and how does it protect brands?
ads.txt is an IAB Tech Lab standard that lets publishers declare which sellers are authorized to sell their ad inventory. Buyers can verify that the entity offering inventory actually has the right to sell it, which helps prevent domain spoofing. sellers.json is the complementary standard that lists authorized sellers on each ad exchange, creating a verification chain from publisher to buyer.
How much does brand safety cost advertisers?
Direct costs include verification vendor fees (typically $0.01-0.05 CPM) and potential reach reduction from content exclusions. The indirect costs of not having brand safety are far higher. The ANA estimated $13 billion in annual programmatic spend goes to MFA sites alone. A single brand safety incident can cost millions in crisis management, lost customer trust, and cancelled campaigns.
Can you have brand safety without bot detection?
You can have partial brand safety without bot detection, but there will be a gap. Verification vendors check where your ads appear (supply side), but they don't verify who sees them (demand side). If your ad runs on a safe site but is served to bots, you're still wasting money. A complete approach combines supply-side verification with demand-side bot detection to cover both angles.