Surveilling Google’s Political Bias

Hopefully it’s clear by now that there is no “political” process going on in the USA, if you equate politics with discourse over policy issues. Even the unexpected election of somewhat-outsider Trump has resulted in limited changes to the empire’s actual domestic or foreign policies, although some hope remains in his promise to “drain the swamp.” I remain skeptical, mainly because I know that networks of corruption are very difficult to subdue, and the merger of corporate and political power epitomized by Google makes it even harder.

What if, early in the morning on Election Day in 2016, Mark Zuckerberg had used Facebook to broadcast “go-out-and-vote” reminders just to supporters of Hillary Clinton? Extrapolating from Facebook’s own published data, that might have given Mrs. Clinton a boost of 450,000 votes or more, with no one but Mr. Zuckerberg and a few cronies knowing about the manipulation.
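The arithmetic behind an estimate like that is simple enough to sketch. In the Python below, the per-user lift comes from Facebook’s own published 2010 get-out-the-vote experiment (Bond et al., Nature, 2012: roughly 340,000 extra votes among the 61 million users shown a social “I voted” banner); the number of Clinton-leaning users reached on Election Day morning is my assumption, chosen only to show how a figure in the 450,000 range falls out.

```python
# Back-of-the-envelope extrapolation, not a measurement.
# Published baseline (Bond et al., Nature, 2012): Facebook's 2010
# social "I voted" banner yielded ~340,000 extra votes among the
# ~61 million users who saw it.
extra_votes_2010 = 340_000
users_shown_2010 = 61_000_000
lift_per_user = extra_votes_2010 / users_shown_2010  # ~0.56%

# ASSUMPTION (mine, for illustration only): how many U.S. users
# Facebook could identify as Clinton supporters and reach with a
# targeted reminder that morning.
clinton_leaning_users_reached = 80_000_000

extra_clinton_votes = lift_per_user * clinton_leaning_users_reached
print(f"Estimated extra votes: {extra_clinton_votes:,.0f}")  # ~446,000
```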

Because, like most Democrats, Mr. Zuckerberg was overconfident that day, I don’t believe he sent that message, but there is no way to know for sure, and there is certainly nothing to stop him from broadcasting election-tilting messages like that in the future — and not just in the U.S. but in countries around the world.

Do we have to wait for whistleblowers or warrants to learn, belatedly, about shenanigans of this sort, or is there a way to detect them as they occur? What kind of monitoring system would it take? Is there a way to look over the shoulders of internet users to see what overzealous tech companies are showing them on their screens?

This is the story of how my colleagues and I, working in the shadows of the internet, developed such a system and used it to monitor what Google, Bing and Yahoo were showing users in the months leading up to the 2016 election — a working prototype for a large-scale monitoring system that we hope will soon be protecting all of us 24 hours a day from the unfettered power of Big Tech….
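The article doesn’t reproduce the system’s code, but the core of such a monitor is not mysterious. Here is a minimal sketch (mine, not the researchers’), assuming the field agent’s browser hands us the HTML of a results page it just displayed, and that a crude anchor-tag scan stands in for real result extraction:

```python
import json
import time
from html.parser import HTMLParser


class ResultLinkParser(HTMLParser):
    """Collects outbound links from a saved search-results page.

    A real monitor would filter out ads and navigation chrome; this
    sketch keeps every absolute link, in display order.
    """

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href") or ""
            if href.startswith("http"):
                self.links.append(href)


def snapshot(query, serp_html, top_n=10):
    """Record the top-N links a user was actually shown, with a timestamp."""
    parser = ResultLinkParser()
    parser.feed(serp_html)
    return {"query": query, "captured_at": time.time(),
            "results": parser.links[:top_n]}


# Usage: append each capture to a local log for later upload and rating.
# with open("serp_log.jsonl", "a") as log:
#     log.write(json.dumps(snapshot("candidate policy", html_text)) + "\n")
```

Everything else (recruiting the field agents, shipping the logs to a central server, rating the linked pages) is plumbing around this one capture step.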

My research, which has now looked at four national elections and multiple topics and candidates, has repeatedly shown that Google can shift opinions and votes dramatically without people knowing. It is one thing, however, to demonstrate this power in laboratory and online experiments and quite another to show that Google’s search results actually favor one particular candidate or cause….

Late in 2015, my associates and I began to sketch out plans for setting up a nationwide system to monitor election-related search results in the months leading up to the November 2016 election. This is where that old dictum — “There is a fine line between paranoia and caution” — came into play. It seemed obvious to me that this new system had to be secret in all its aspects, although I wasn’t yet sure what those aspects were….

…In all, we had preserved 13,207 election-related searches (in other words, 132,070 search results), along with the 98,044 web pages to which the search results linked. The web-page ratings we obtained from online workers (a mean number for each page, indicating how strongly that page favored either Clinton or Trump) allowed us to answer the original questions we had posed. Here is what we found (a sketch of the underlying bias computation follows the list):

1) Bias. Overall, search rankings favored Mrs. Clinton over most of the 6-month period we had monitored — enough, perhaps, to have shifted more than two million votes to her without people knowing how this had occurred. The pro-Clinton tilt appeared even though the search terms our field agents chose to use were, on average, slightly biased toward Mr. Trump.

2) Lots of bias. Between October 15th and Election Day — the period when we received the largest volume of data — on all 22 of the days we received data, search rankings favored Mrs. Clinton in all 10 of the search positions on the first page of search results.

3) Google. The pro-Clinton favoritism was more than twice as large on Google as on Yahoo’s search engine, which is, in any case, little more than an offshoot of Google’s search engine these days. We had to discard our Bing data because all of it came from Gmail users (more about this issue in a moment).

4) Demographic differences. Pro-Clinton search results were especially prevalent among decided voters, males, the young, and voters in Democratic states. But voters in Republican and swing states saw pro-Clinton search results too.

5) Tapering off. Over the course of the 10 days following the election, the pro-Clinton tilt gradually disappeared. All of these findings were highly statistically significant.
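For concreteness, here is roughly what the bias measure behind these findings looks like in code. This is my sketch, not the study’s: I assume the crowd ratings sit on a -1 (favors Trump) to +1 (favors Clinton) scale and take a plain unweighted mean over the rated links on each results page; the study’s actual scale and any rank-weighting may differ.

```python
from statistics import mean


def search_bias(ranked_urls, page_rating):
    """Mean crowd rating of the rated pages shown for one search.

    page_rating maps URL -> mean worker rating on an assumed
    -1..+1 scale (negative favors Trump, positive favors Clinton).
    """
    rated = [page_rating[u] for u in ranked_urls if u in page_rating]
    return mean(rated) if rated else 0.0


def group_bias(searches, page_rating):
    """Average per-search bias for one slice of the data, e.g. one
    demographic group or one search engine.
    """
    return mean(search_bias(urls, page_rating) for urls in searches)
```

Splitting the 13,207 searches by state, gender, or engine and comparing group_bias values is all that findings like the demographic breakdown above require.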

Perhaps the most disturbing thing we found had to do with that control I mentioned earlier. We never tried to collect any Chrome data; this was just our cover story, after all. But we did take a careful look at the Firefox data we received from the Gmail users we had recruited, and we found that the Gmail users who were using the Google search engine on Firefox received search results that were almost perfectly unbiased — eerily unbiased, in fact — roughly one-sixth as biased as the search results the non-Gmail users saw.

You can draw whatever conclusions you like from that last finding. For me, it says simply that you should take precautions when you are monitoring the output of an online company that might want to mess with your data.


Perhaps at this point you are saying, “Okay, they found evidence of a pro-Hillary slant in search results, but maybe that slant was generated by user activity. That’s not Google’s fault.” That, in fact, is exactly what Google always says when antitrust investigators find, time after time, that Google’s search results favor Google products and services. Favoritism in search results, Google says, occurs naturally because of “organic search activity” by users. To that I say: Gimme a break.

As I documented in an article on online censorship for U.S. News and World Report in 2016, Google has complete control over the search results it shows people — even down to the level of customized results it shows each individual. Under Europe’s right-to-be-forgotten law, Google regularly removes more than 100,000 items from its search results each year with surgical precision, and it also demotes specific companies in its search rankings when they violate Google’s vague Terms of Service agreement. The extensive investigation conducted by EU antitrust investigators before they fined Google $2.7 billion in June 2017 for having biased search results also shows that Google exercises deliberate and precise control over its search rankings, both to promote its own products and to demote the products of its competitors.

Our own demographic data and the data from our Gmail users also demonstrate the high degree of control Google has over its search results. You can adjust an algorithm to respond to the search activity of users any way you like: you can make it shift search results so that they favor one candidate, his or her opponent, or neither one, just as we do in our SEME experiments. As any experienced coder can tell you — any honest experienced coder, that is — Google’s “organic search” defense is absurd.
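To make that point concrete, here is a toy re-ranker. It is entirely my own illustration (nothing here is Google’s code, and both field names are invented), but it shows how a single hidden parameter separates an “organic” ranking from a tilted one:

```python
def rank(results, bias=0.0):
    """Order results by organic relevance plus an optional tilt.

    Each result dict carries a 'relevance' score (whatever the
    engine computed organically) and a 'lean' in -1..+1 for the
    page's political slant. bias = 0.0 reproduces the organic
    order; any nonzero value quietly promotes one side, and
    nothing in the interface reveals the difference.
    """
    return sorted(results,
                  key=lambda r: r["relevance"] + bias * r["lean"],
                  reverse=True)
```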

Eventually, though, I realized that it doesn’t matter where the favoritism is coming from. Because favoritism in search results shifts opinions dramatically without people’s knowledge, search engines need to be strictly regulated, period — particularly when it comes to socially important activities like elections. If we don’t regulate search results, then when favoritism creeps in for any reason — even “organically” (although that idea is nonsense) — it has the potential to influence people, further propelling that favoritism in a kind of digital bandwagon effect.
