As the digital world evolves, Google's policy development and enforcement strategies evolve with it.
In 2021, the digital giant introduced a multi-strike system for repeat policy violations and added or updated over 30 policies for advertisers and publishers, including a policy prohibiting claims that promote climate change denial.
That year, Google removed over 3.4 billion ads, restricted over 5.7 billion ads, and suspended over 5.6 million advertiser accounts. It also blocked or restricted ads from serving on 1.7 billion publisher pages, and took broader site-level enforcement action on approximately 63,000 publisher sites.
Responding to the war in Ukraine
Though the report only covers 2021, Google also shared an update on its response to the war in Ukraine. In addition to its longstanding policies prohibiting content that incites violence or denies the occurrence of tragic events from running as ads or monetizing through its services, the platform acted quickly to declare the war a sensitive event, prohibiting ads from profiting from or exploiting the situation.
Google has also taken several other steps to pause the majority of its commercial activities in Russia, including pausing ads from showing in Russia, pausing ads from Russia-based advertisers, and pausing monetization of Russian state-funded media across its platforms.
So far, it has blocked over eight million ads related to the war in Ukraine under its sensitive event policy and separately removed ads from more than 60 state-funded media sites across its platforms.
Suspending triple the number of advertiser accounts
As shared in its 2020 report, Google saw an increase in fraudulent activity during the pandemic. In 2021, it continued to see bad actors operate with more sophistication and at a greater scale, using a variety of tactics to evade detection. This included creating thousands of accounts simultaneously and using techniques like cloaking and text manipulation to show Google's reviewers and systems different ad content than a user would see, making that content more difficult to detect and enforce against.
Google is continuing to take a multi-pronged approach to combat this behavior, such as verifying advertisers' identities and identifying coordinated activity between accounts using signals in its network. It is actively verifying advertisers in over 180 countries. And if an advertiser fails to complete the verification program when prompted, the account is automatically suspended.
As a result, between 2020 and 2021, Google tripled the number of account-level suspensions for advertisers.
Preventing unreliable claims from monetizing and serving in ads
In 2021, Google doubled down on its enforcement against unreliable content. It blocked ads from running on more than 500,000 pages that violated its policies against harmful health claims related to COVID-19 and demonstrably false claims that could undermine trust and participation in elections. Late last year, it also launched a new Unreliable Claims policy on climate change, which prohibits content that contradicts the well-established scientific consensus around its existence and causes.
Google has stayed focused on preventing abuse in ads related to COVID-19, which was especially important in 2021 for claims related to vaccines, testing, and price-gouging for critical supplies like masks. Since the beginning of the pandemic, it has blocked over 106 million ads related to COVID-19. And it supported local NGOs and governments with $250 million in ad grants to help connect people to accurate vaccine information.
Introducing new brand safety tools and resources for advertisers and publishers
Last year, Google added a new feature to its advertiser controls that allows brands to upload dynamic exclusion lists, which can be automatically updated and maintained by trusted third parties. This gives advertisers access to the resources and expertise of trusted organizations to better protect their brands and strengthen their campaigns.
Google says it knows that advertisers care about all the content on a page where their ads may run, including user-generated content (UGC) like comment sections. That’s why it holds publishers responsible for moderating these features. It has released several resources in the past year to help them do that — including an infographic and blog post, troubleshooters to solve UGC issues, and a video tutorial.
In addition to these resources, Google made targeted improvements to the publisher approval process that helped its teams better detect and block bad actors before they could even create accounts. As a result, it reduced the number of sites that needed site-level action compared to previous years.
Looking ahead to 2022
A trustworthy advertising experience is critical to getting helpful information to people around the world. And this year, Google says it will continue to address areas of abuse across its platforms and network to protect users and help credible advertisers and publishers. Providing more transparency and control over the ads people see is a big part of that goal. Its new "About this ad" feature is rolling out globally to help people understand why an ad was shown and which advertiser ran it. They can also report an ad if they believe it violates one of Google's policies or block an ad they aren't interested in.
You can find ongoing updates to Google's policies and controls here.
Check out the entire report here.