Google has taken aggressive action to scrub coronavirus conspiracies from its news service and YouTube, at a time when social media companies have come under intense scrutiny for their potential to spread dangerous disinformation about the global pandemic. It has begun labeling misleading videos aimed at US audiences, and has joined with other major internet companies to coordinate a response against what the World Health Organisation has described as an “infodemic.”
But Google is also placing advertisements on websites that publish the theories, helping their owners generate revenue and continue their operations. In at least one instance, Google has run ads featuring a conspiracist it has already banned.
One ad for Veeam, an independent Microsoft 365 backup service, appeared atop one website featuring an article that includes false claims that Microsoft Corp. founder Bill Gates’s charitable efforts on pandemics and vaccines are part of a world domination plot. A Microsoft Teams ad ran with a French-language article that alleged Gates tried to bribe Nigerian lawmakers to vote for a Covid-19 vaccine. An ad for the telecommunications provider O2 showed up on another article linking the virus to 5G networks, a common conspiracy theory. The ads were placed through Google’s automated system for matching marketers with websites.
The Global Disinformation Index, a research group, recently reviewed 49 sites running baseless claims about the virus, including the stories about Gates and 5G networks. Alphabet Inc.’s Google placed ads on 84% of them, generating the majority of the $135,000 in revenue the sites earned each month, according to the Global Disinformation Index’s estimate.
Google has faced criticism for funding hyper-partisan publishers such as Breitbart News in the past. The company has avoided making blanket policies about which publishers can run its ads. Instead, it removes ads only from the specific pages carrying content that violates its policies. It also allows advertisers to blacklist specific sites. The company has been particularly reluctant to take action with political ramifications now that the Trump administration is taking concrete steps to punish companies that it argues show bias against conservative viewpoints.
Christa Muldoon, a Google spokesperson, said none of the web pages flagged by the Global Disinformation Index violated its policies. “We are deeply committed to elevating quality content across Google products and that includes protecting our users from medical misinformation. Any time we find publishers that violate our policies, we take immediate action,” she said.
Google’s network ad system is a massive machine for automatically generating money for its owner. Websites apply for Google’s program, then add display banners and pop-up advertisements to their pages. Google’s system automatically fills these slots with digital marketing and takes about 30% of the revenue they generate. Although Google offers a level of control to its marquee advertisers, the self-service system sometimes places ads for brands on websites with which they’d prefer not to be associated.
Google’s systems have recently placed ads for eBay Inc., Oracle Corp. and HBO on websites like activistpost.com, thegatewaypundit.com and thewashingtonstandard.com, all of which routinely publish conspiracy theories, according to the Global Disinformation Index.
Another company that placed ads on the sites in the study was Criteo SA. When contacted by a reporter about an ad mentioned in the report, Luca Sesti, a spokesman for the company, said it was breaking off its commercial relationship with the website in question, thegatewaypundit.com. “In the event we find a partner is not adhering to our policies, we will terminate the relationship immediately,” he said. “We recognise that the dissemination of inaccurate information through ‘fake news’ is a very real problem on the internet.”
Often the ads the researchers found made for uncomfortable pairings. The O2 ad ran alongside an article promoting false claims that 5G wireless technology causes people to experience symptoms of coronavirus because it “poisons their cells.”
“This is a huge issue that Google needs to tackle now,” said Craig Fagan, program director at the Global Disinformation Index. “It is creating a financial incentive for these websites to continue promoting the conspiracy theories. You go to these sites and there are ads galore, pop ups everywhere. The ads are there to get clicks, monetising each reader.”
In one case, Google accepted ad revenue from a company promoting a conspiracy theorist it tried to remove from its own platforms.
In early May, YouTube removed the account of David Icke, a British provocateur who often ranted about “Rothschild Zionists” controlling global institutions and has questioned the efficacy of vaccines. In a recent interview about Covid-19, he said that 5G makes people sick and sends out signals that can control their emotions. Icke had posted on YouTube for more than 14 years.
Guillaume Chaslot, a former Google engineer and founder of the research group AlgoTransparency, estimated that Icke’s YouTube channel gained 200,000 subscribers during March and April, when he largely touted unproven theories about the virus. Chaslot’s research tracks how often YouTube’s recommendation system sends viewers to particular videos and channels. In a 10-year span, YouTube promoted Icke’s videos about a billion times.
YouTube removed Icke’s account for violating its rules about coronavirus disinformation. Since then, Icke has appeared on other YouTube channels and in YouTube ads for Gaia Inc., a streaming network that promotes yoga and alternative healing. “We have to break out of this perceptual prison,” Icke said in a voice-over during an ad that ran weeks after his ban. Gaia’s network runs several shows featuring Icke. On a recent earnings call, Gaia executives said YouTube had become a “pretty significant” way to get new subscribers.
Gaia didn’t respond to requests for comment.
Imran Ahmed, chief executive officer of the Center for Countering Digital Hate, a UK nonprofit, argues that social media platforms should remove Icke entirely. “In a pandemic, lies cost lives,” said Ahmed. “Misinformed people put us all at risk through their reckless actions.” His group estimated that Icke earned about $177,000 a year from YouTube ads before the ban.
Jaymie Icke, a spokesman for Icke’s video service Ickonic, said the earnings estimate was inaccurate because YouTube has restricted ads on controversial videos for several years. “Revenue is nothing and has been for a while,” said Icke, who is David Icke’s son. “They removed all ads from the channel two months prior to the full deletion anyway. So that figure has simply been made up.”
Icke and others blocked from the site are allowed to appear on other accounts and in ads as long as those videos don’t break rules, according to Muldoon, the Google spokesperson.
While web giants like Google have tried to handle conspiracy theories on their user-generated services, they have also tried to reform their ad systems to handle the growing problem. In October 2018, Google and Facebook Inc. signed a European Union code of conduct on disinformation that contained a commitment to “improve the scrutiny of advertisement placements to reduce revenues of the purveyors of disinformation.”
According to Fagan, however, the issue remains a blind spot for the companies. Some of the conspiracy websites attract a large number of visitors, promoting their content across social media platforms.
The 49 websites promoting Covid-19 conspiracies that were reviewed by the Global Disinformation Index were just a small sample and offer a snapshot of a much larger problem, Fagan said. Last year, the Global Disinformation Index published a study of about 20,000 websites promoting disinformation and conspiracy theories. It estimated that they were generating $235 million every year in advertising revenue, approximately $86.7 million of which was paid out by Google.
© 2020 Bloomberg