How Big Data Can Help Save the World

Our ability to collect data far outpaces our ability to fully utilize it—yet those data may hold the key to solving some of the biggest global challenges facing us today.

Take, for instance, the frequent outbreaks of waterborne illnesses as a consequence of war or natural disasters. The most recent example can be found in Yemen, where roughly 10,000 new suspected cases of cholera are reported each week—and history is riddled with similar stories. What if we could better understand the environmental factors that contributed to the disease, predict which communities are at higher risk, and put in place protective measures to stem the spread?

Answers to these questions and others like them could potentially help us avert catastrophe.

We already collect data related to virtually everything, from birth and death rates to crop yields and traffic flows. IBM estimates that each day, 2.5 quintillion bytes of data are generated. To put that in perspective: that’s the equivalent of all the data in the Library of Congress being produced more than 166,000 times per 24-hour period. Yet we don’t really harness the power of all this information. It’s time that changed—and thanks to recent advances in data analytics and computational services, we finally have the tools to do it.
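The comparison is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below assumes the commonly cited ballpark that the Library of Congress's digitized collection amounts to roughly 15 terabytes; that estimate is an assumption used only for illustration.

```python
# Back-of-the-envelope check of the 166,000 figure, assuming the Library of
# Congress's digitized collection is roughly 15 terabytes (a common ballpark).
daily_bytes = 2.5e18                 # 2.5 quintillion bytes per day (IBM estimate)
library_of_congress_bytes = 15e12    # ~15 TB, assumed for illustration

copies_per_day = daily_bytes / library_of_congress_bytes
print(f"{copies_per_day:,.0f} Library-of-Congress-sized collections per day")
# Prints roughly 166,667, consistent with "more than 166,000 times".
```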

As a data scientist for Los Alamos National Laboratory, I study data from a wide range of public sources to identify patterns, in hopes of predicting trends that could threaten global security. Multiple data streams are critical because the ground-truth data we collect (such as surveys) are often delayed, biased, sparse, incorrect or, in some cases, nonexistent.

For example, knowing mosquito incidence in communities would help us predict the risk of mosquito-transmitted diseases such as dengue, a leading cause of illness and death in the tropics. However, mosquito data at a global (and even national) scale are not available.

To address this gap, we're using other sources, such as satellite imagery, climate data and demographic information, to estimate dengue risk. Using these data streams together with clinical surveillance data and Google searches for terms related to the disease, we have had success predicting the spread of dengue in Brazil at the regional, state and municipality levels. While our predictions aren't perfect, they show promise. Our goal is to combine information from each data stream to further refine our models and improve their predictive power.
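To make the idea concrete, here is a minimal sketch of how disparate streams can be merged into a single feature matrix that feeds a count-based regression. The column names, the synthetic data and the Poisson regression are illustrative assumptions, not the laboratory's actual dengue model.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import PoissonRegressor
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_weeks = 200

# Hypothetical weekly observations for one municipality; each data stream
# contributes its own columns to a shared feature matrix.
df = pd.DataFrame({
    "mean_temp_c":   rng.normal(27, 3, n_weeks),     # climate stream
    "rainfall_mm":   rng.gamma(2.0, 30.0, n_weeks),  # climate stream
    "pop_density":   rng.normal(450, 50, n_weeks),   # demographic stream
    "search_volume": rng.poisson(40, n_weeks),       # search-query stream
})

# Synthetic case counts loosely tied to the features, for illustration only.
df["cases"] = rng.poisson(
    np.exp(0.02 * df["mean_temp_c"]
           + 0.004 * df["rainfall_mm"]
           + 0.03 * df["search_volume"])
)

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="cases"), df["cases"], test_size=0.25, random_state=0
)

# A count model (Poisson regression) fit on the fused feature matrix.
model = make_pipeline(StandardScaler(), PoissonRegressor(max_iter=300))
model.fit(X_train, y_train)
print("Held-out deviance explained:", round(model.score(X_test, y_test), 3))
```

The point of the sketch is structural: each source simply contributes columns to the same matrix, so adding a new data stream means adding new features rather than building a new model.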

Similarly, to forecast the flu season, we have found that Wikipedia and Google searches can complement clinical data. Because internet searches for flu symptoms tend to rise as those symptoms first appear, search activity can signal a coming spike in cases before the lagging clinical data catch up.
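As a toy illustration of why search activity works as a leading indicator, the snippet below builds two synthetic curves, search volume and clinical reports offset by a two-week reporting delay, and shows that their correlation peaks at that lag. The data and the two-week delay are assumptions for illustration, not real surveillance numbers.

```python
import numpy as np

rng = np.random.default_rng(1)
weeks = np.arange(52)

# Synthetic flu season: search interest peaks first; clinical reports trace
# the same curve but arrive about two weeks later (an assumed reporting lag).
search_volume = 100 * np.exp(-0.5 * ((weeks - 20) / 4) ** 2) + rng.normal(0, 3, 52)
clinical_cases = 80 * np.exp(-0.5 * ((weeks - 22) / 4) ** 2) + rng.normal(0, 3, 52)

for lag in range(5):
    # Correlate this week's search volume with clinical cases `lag` weeks later.
    r = np.corrcoef(search_volume[:52 - lag], clinical_cases[lag:])[0, 1]
    print(f"lag {lag} weeks: correlation {r:.2f}")
# The correlation peaks near lag = 2, the reporting delay built into the data,
# which is what lets search activity flag a spike before clinical counts do.
```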

We’re using these same concepts to expand our research beyond disease prediction to better understand public sentiment. In partnership with the University of California, we’re conducting a three-year study using disparate data streams to understand whether opinions expressed on social media map to opinions expressed in surveys.

All of this illustrates the potential for big data to solve big problems. Los Alamos and other national laboratories, home to some of the world's largest supercomputers, have the computational power, augmented by machine learning and data analytics, to take this information and shape it into a story that tells us not only about one state or nation, but about the world as a whole. The information is there; now it's time to use it.
