When big isn't better: How the flu bug bit Google

Ryan Kennedy is a political science professor at the University of Houston. Credit: University of Houston

Numbers and data can be critical tools in bringing complex issues into crisp focus. The understanding of diseases, for example, benefits from algorithms that help monitor their spread. But without context, a number may just be a number, or worse, misleading.

"The Parable of Google Flu: Traps in Big Data Analysis" is published in the journal Science, funded, in part, by a grant from the National Science Foundation. Specifically, the authors examine Google's data-aggregating tool Google Flu Trend (GFT), which was designed to provide real-time monitoring of cases around the world based on Google searches that matched terms for flu-related activity.

"Google Flu Trend is an amazing piece of engineering and a very useful tool, but it also illustrates where 'big data' analysis can go wrong," said Ryan Kennedy, University of Houston political science professor. He and co-researchers David Lazer (Northeastern University/Harvard University), Alex Vespignani (Northeastern University) and Gary King (Harvard University) detail new research about the problematic use of big data from aggregators such as Google.

Even with modifications to GFT over many years, the tool that set out to improve response to flu outbreaks has overestimated peak flu levels in the U.S. over the past two years.

"Many sources of 'big data' come from private companies, who, just like Google, are constantly changing their service in accordance with their business model," said Kennedy, who also teaches research methods and statistics for political scientists. "We need a better understanding of how this affects the data they produce; otherwise we run the risk of drawing incorrect conclusions and adopting improper policies."

GFT overestimated the prevalence of flu in the 2012-2013 season, as well as the actual levels of flu in 2011-2012, by more than 50 percent, according to the research. Additionally, from August 2011 to September 2013, GFT over-predicted the prevalence of flu in 100 out of 108 weeks.
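As a rough, hypothetical illustration of how such an error tally can be computed, the short Python sketch below compares invented weekly GFT-style estimates against invented CDC-reported influenza-like illness (ILI) levels. The numbers and variable names are placeholders for illustration only, not the data or method from the study.

```python
import numpy as np

# Hypothetical weekly flu activity (percent of doctor visits for
# influenza-like illness). These values are invented and are NOT
# the data analyzed in the Science paper.
cdc_ili = np.array([1.2, 1.5, 2.1, 3.0, 4.2, 3.8, 2.9, 2.0])  # reported levels
gft_est = np.array([1.9, 2.4, 3.3, 4.6, 6.5, 5.7, 4.1, 2.6])  # search-based estimate

# Share of weeks in which the estimate exceeds the reported value.
overpredicted = np.mean(gft_est > cdc_ili)

# Relative error at the seasonal peak.
peak_week = np.argmax(cdc_ili)
peak_error = (gft_est[peak_week] - cdc_ili[peak_week]) / cdc_ili[peak_week]

print(f"Weeks over-predicted: {overpredicted:.0%}")
print(f"Peak overestimate:    {peak_error:.0%}")
```

Statistics of this general kind (the share of weeks over-predicted, and the size of the overestimate at the peak) are what the figures quoted above summarize for the real GFT and CDC series.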

The team also questions data collected from platforms such as Twitter and Facebook (used, for example, to track polling trends or market popularity), since campaigns and companies can manipulate these platforms to ensure their products are trending.

Still, the article contends there is room for data from the Googles and Twitters of the Internet to combine with more traditional methodologies, in the name of creating a deeper and more accurate understanding of human behavior.

"Our analysis of Google Flu demonstrates that the best results come from combining information and techniques from both sources," Kennedy said. "Instead of talking about a ' revolution,' we should be discussing an 'all data revolution,' where new technologies and techniques allow us to do more and better analysis of all kinds."

More information: "The Parable of Google Flu: Traps in Big Data Analysis," by D. Lazer et al., Science, 2014.


