Starting in about 2010, interest in Big Data as a search trend rose rapidly until about 2014, when it peaked. Since then, the level of interest has been variable but trending down, so that as of May 2020 we are at a little over half of the 2014 interest level.
For a while, Big Data was being sold as a revolutionary technology that would transform our lives (here’s just one example). Everywhere you turned, Big Data was supposed to solve problems that had previously been impossible.
It turns out, though, that despite the obvious benefits of the data revolution, some of these claims were unrealistic at best. First of all, more is not always better. The assumption seems to be that insight or decision-making ability increases along with the amount of data available, and that is probably not a good assumption. If I am a decision-maker, having ten times more data about a problem certainly does not make me ten times more able to make a good decision. Maybe if it makes me 5 percent more able to make a good decision, then it is worth the time and cost of gathering and analyzing that extra information. But the returns diminish quickly, and beyond a certain point the extra data costs more to gather and analyze than the insight it provides.
Lewis L. Strauss, chairman of the Atomic Energy Commission, famously predicted in 1954 that nuclear power would soon be “too cheap to meter”. This prediction reflects the same type of magical thinking that is taking place in the realm of technology and data. Take this article on climate change as an example:
“Think about how difficult it’s been to tackle climate change, to track emissions. And then think about the possibilities of large-scale transparency of data, and what that would mean to be a company or a civil society organization, finding that information and doing something with it.”
I’ll come right out and say it: I’m pretty skeptical. The problem is that “finding the information” and “doing something with it” are two very different things. The action part of this equation is the issue, and I don’t see how a larger volume of information will affect the willingness and ability to take action to solve this problem. If the data that we currently have, which is extensive and terrifying, isn’t enough to spur us to widespread action to reduce emissions and environmental destruction, how is more data going to help? How much more data do we need to tell us that we are in an existential crisis, and that it might already be too late to prevent catastrophic outcomes that radically disrupt human life on this planet?
This goes to the heart of why interest in Big Data has peaked and declined. COVID-19 has exploded a lot of myths and illuminated the ways in which modern (American) society is poorly equipped to handle the large-scale problems facing us: inequality, climate change, the weakening of institutions. This is true in the realm of the ‘data revolution’ as well. We have plenty of data on COVID-19. The problem is not a shortage of data. Could we have a more detailed view of the pandemic if we had more widespread testing and more integrated reporting? Yes, and that would certainly help to reduce the number of cases and deaths. But all that data does not help us very much if our political system is so broken that public health agencies are restricted from doing their jobs.
This is not to say that there is not an enormous amount of potential in the vast amounts of data and computational power that human beings have amassed. But it is important to remember that data alone will not solve any problems. The data is only useful if we apply the right tools and are willing to engage honestly with what the data is telling us. Often social and institutional barriers, not a lack of data, are what stop us from solving our biggest problems. But if data is combined with effective institutions and ideas, and with leadership that is willing to follow the data where it leads, then big data can be very powerful indeed.