Big data is a field concerned with ways to analyse, systematically extract information from, or otherwise deal with data sets too large or complex for traditional data-processing software. It is nothing short of a revolution: the analytics and the once-unthinkable outcomes that Big Data enables have turned the world upside down. But because these data sets operate at a scale where unaided human judgement is meagre and inefficient, they are also highly sensitive, and misinterpretation can wreak havoc.
Google Flu Trends (GFT), launched by Google.org in 2008, is a case in point. There is no doubt that Google's ability to stockpile information is far above par, but clouds of doubt soon gathered around GFT's reliability, and its misleading interpretations led to its shutdown in 2015 as an outright failure. What troubled professors at Northeastern University, however, was Google's unwillingness to be transparent about how the data was assembled. In a research paper authored by a team of experts, they acknowledged that the basic idea behind GFT was not faulty, but the process Google stuck to was.
They elucidated how Google could have examined the Big Data more carefully and how such companies should participate in the research effort. Where Google failed to be proactive was in making a few statistical tweaks and incorporating lagged data from the Centers for Disease Control and Prevention (CDC). Had Google made even part of its data accessible to the community, outside experts could have scrutinised it and the research effort would have benefited, but that never happened. David Lazer believes the giants of this industry should lend a helping hand to scientific research; only then will GFT-like failures stop being rampant, and the potential of other trend predictions remain untarnished.
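The idea of incorporating lagged CDC data can be sketched as a simple regression. The sketch below is purely illustrative and uses synthetic data: the names `search_signal` and `cdc_ili` are hypothetical stand-ins for a Google query-volume feature and CDC influenza-like-illness rates, and the model is an assumption for demonstration, not Google's actual method:

```python
import numpy as np

# Synthetic, illustrative data: a slow seasonal flu signal observed through
# a low-noise CDC channel and a noisier search-volume channel.
rng = np.random.default_rng(0)
n = 104  # two years of weekly observations
true_flu = 5 + 2 * np.sin(np.arange(n) * 2 * np.pi / 52)  # seasonal pattern
cdc_ili = true_flu + rng.normal(0, 0.3, n)                # CDC estimate (noisy, delayed)
search_signal = true_flu + rng.normal(0, 1.0, n)          # noisier search proxy

# Search-only model (GFT-style): predict this week's flu from search volume alone.
X_search = np.column_stack([np.ones(n - 2), search_signal[2:]])
beta_s, *_ = np.linalg.lstsq(X_search, cdc_ili[2:], rcond=None)
resid_search = cdc_ili[2:] - X_search @ beta_s

# Hybrid model: also use CDC data lagged by two weeks (a reporting delay).
X_hybrid = np.column_stack([np.ones(n - 2), search_signal[2:], cdc_ili[:-2]])
beta_h, *_ = np.linalg.lstsq(X_hybrid, cdc_ili[2:], rcond=None)
resid_hybrid = cdc_ili[2:] - X_hybrid @ beta_h

print("search-only RMSE:", np.sqrt(np.mean(resid_search**2)))
print("hybrid RMSE:     ", np.sqrt(np.mean(resid_hybrid**2)))
```

On this synthetic data the hybrid model fits better, which mirrors the experts' point: search signals alone are a weak predictor, while even delayed official data adds real predictive power.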