Turns out the ability to target specific bigotries is more widespread than first thought. But it also turns out advertisers aren’t all that interested in deliberately reaching anti-Semites.
On Friday, two more media outlets found that advertisers could target groups of people based on racist interests, this time on Google and Twitter. Racist ad targeting had first been discovered on Facebook on Thursday by the investigative journalism organization ProPublica.
Internet outrage aside, most advertisers are not surprised by this darker side of ad targeting. While ugly, targeting racists isn’t as big a risk for brands as having their ads show up on racist websites through automated media buying, which is the typical brand-safety concern with online ads.
“In theory, anyone could hack ad targeting tools on Facebook, but why would you unless you’re a Trump alt-right type,” says one digital agency exec, who spoke on condition of anonymity because of the subject’s sensitivity. “Anyone could reverse engineer what racists are interested in. Like if they were interested in, say, Breitbart or some other publication, you could then target lookalike audiences. There are all sorts of crude tools to target them.”
Another agency exec noted that the same ad targeting tools can also be used for good. In one instance, YouTube can identify young people who are prone to radicalization and customize the videos they see, offering content that could help discourage their violent tendencies.
YouTube has developed ad campaigns that target these types of youth, who are deemed to be at risk of joining groups like ISIS.
“Think about how much is uploaded to YouTube every day,” the agency exec said. “The fact that it can flag as much as it does is amazing. And anything can be used for evil.”
On Thursday, ProPublica found that Facebook’s self-serve ad system let advertisers target groups of people based on their education history, and that in more than 2,000 cases people on the social network had listed anti-Semitic terms as their fields of study. Entries like “Jew hater,” “how to burn Jews” and other offensive language surfaced in the ad system and were available for targeting. Facebook said it was taking steps to shut down the ability to target against such offensive terms.
BuzzFeed and The Daily Beast found similar flaws in Google’s and Twitter’s ad platforms. On Google, racist search terms were available for targeting, including “black people ruin everything” and “Jews control the media,” among a number of others, according to BuzzFeed. Twitter had similar categories available, according to The Daily Beast.
The targeting loopholes weren’t a big concern to most advertisers, who say they wouldn’t use them. But the issue does expose more problems with automated online advertising: with machines doing much of the work, and artificial intelligence taking over more of the thinking, there are more opportunities for these types of embarrassing mishaps.
“The lesson here for Facebook is this: People are crass, stupid, and sometimes evil,” says a social media agency executive. “Don’t automate anything without some level of human oversight or guardrails in place. Machines aren’t perfect.”