Big Data: Hype or Revolution? was the title of a module I completed last year in a graduate program in big data at the University of Warwick, a university regarded globally as among the top research institutions in the world. The module clears up some of the uncertainty surrounding the big data deluge of the digital age and guides students to “examine how we might use big data research both as a way to resist and/or shape global transformations”.

As the world is deluged by digital data, universities have seized on the change to produce analysts and engineers suited to this emerging phenomenon. The Warwick Q-Centre was established as one of 19 quantitative centers affiliated with the UK’s league of most prestigious universities, according to its website, as a strategic response “to address UK’s national ‘deficit’ in quantitative methods”.

In this regard, even the most advanced universities are latecomers to the scene; the very origins of the discipline lie outside academia. For instance, Chris Anderson, editor of Wired magazine, was among the first to proclaim “The End of Theory: The Data Deluge Makes the Scientific Method Obsolete”. Scientists were not convinced. More recently, the London-based DeepMind has been pioneering artificial intelligence applications that learn the way scientists do, generating scientific hypotheses and knowledge. If it succeeds, this will be a fundamental breakthrough.

In his provocative commentary, now practically indisputable in the field, Mr. Anderson argued that correlation is more useful and more practical than traditional systematic scientific explanation. Scientists have come to realize that big data is transforming not just the way we live and work but also how we think. Many IT engineers are concerned entirely with predicting a phenomenon from an enormous number of variables, without paying much attention to what causes what, and how.
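The point can be made concrete with a toy sketch (the numbers below are invented for illustration): a purely statistical model can predict one quantity from another whenever they are correlated, with no causal story at all.

```python
# Toy illustration with hypothetical data: prediction from correlation alone,
# with no causal model of why the two variables move together.
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Classic example: ice-cream sales and drowning incidents rise together
# (both driven by summer heat), so each predicts the other -- yet neither
# causes the other. The figures here are made up.
sales     = [10, 20, 30, 40, 50]
incidents = [ 1,  2,  3,  4,  5]
r = pearson(sales, incidents)
print(round(r, 2))  # a perfect predictor, with zero causal insight
```

For the engineer who only needs a forecast, the correlation suffices; the traditional scientist would still ask what mechanism links the two series.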

The technology industry in Silicon Valley woke up to this reality long before the universities did, and the global big data market is expected to reach $118.52 billion by 2022, up from $3.56 billion in 2015, growing at 26.0% annually. When it comes to information technology, the R&D departments of tech companies naturally attract the best and brightest graduates and staff away from universities, thanks to their six-figure salaries.

From linguistics to the humanities, it has become common to apply purely quantitative techniques once exclusive to particular disciplines in the natural sciences. Geography, for instance, is one field whose methods have been entirely transformed by big data. For decades, geographers relied on remotely sensed satellite imagery, whose access and analysis involved prohibitively expensive costs and specialist knowledge.

With the rise of crowdsourced geographic data, however, publicly available user-generated data has proved even more useful, providing location-specific, situational data in real time at costs close to zero. It is now being used to tackle some of the world’s most pressing challenges, including disasters.

This opportunity was first realized in 2010, when volunteers used OpenStreetMap to map Haiti’s roads in two weeks.

Disaster managers used these maps to deliver relief. As handheld devices with GPS and high-resolution cameras become ever cheaper, the possibilities and applications of such data keep growing.

Beyond optimizing traffic and supply chains in real time in our day-to-day activities, startups are flourishing that use images captured by ordinary individuals to train machine learning algorithms, delivering practical business services to people in remote places.

This has given rise to a new practice called ‘citizen science’, describing a phenomenon in which ordinary people, assisted by devices, can accomplish some of the most critical tasks traditionally performed by seasoned scientists.

When I searched for ‘Data Scientist’ on ethiojobs.net, Ethiopia’s biggest hiring platform, it returned zero results. I then tried ‘Machine Learning’, one of the most frequently used job titles in the field. The world is being flooded with technologies that use data to cut costs, improve business practices, and eliminate inefficiencies. Are we missing out on an opportunity?

In the 20th century, most of the developing world, including most African countries, adopted import substitution strategies to catch up with the nations that prospered in the industrial revolution. In the end, only a few Asian tiger economies and China managed to position themselves by emulating cutting-edge manufacturing technology.

Today, African countries are again trying to emulate China’s manufacturing growth based on cheap labor. That is not something to be entirely dismissed. It may be some time before we live in a world where robots take over all tasks and own the guilds, farms, and banks.

In day-to-day business practice, however, everything is changing rapidly. The growth of big data has exceeded even the early optimistic predictions of global consulting firms such as McKinsey.

Thus, one thing is certain within the next few decades: none of these industries, including conventional knowledge generation, will be able to run efficiently without harnessing big data.

Last month I attended a lecture by Mark Robinson, former Chief Operating Officer of VisionFund International, who noted that micro-finance in the developing world is transformative, but that the poor are burdened with high interest rates, currently standing at up to 40%. Most of these rates reflect operational costs. He pointed to big data’s potential to tackle the problem.

Indeed, with algorithms that can accurately predict creditworthiness, manage business processes efficiently, and cut overhead costs, borrowers in poor countries could access finance to start small enterprises at rates of perhaps less than 15%, drawing on the hundreds of billions of dollars of funds floating in developed countries.
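A minimal sketch of such a creditworthiness model, under the assumption that alternative data (for instance, mobile-money activity) stands in for a formal credit history. All feature names and figures below are hypothetical, and a real scorer would use far richer data and validation:

```python
import math

# Sketch: logistic regression trained by gradient descent to estimate
# repayment probability from alternative data. Purely illustrative.
def train(rows, labels, lr=0.1, epochs=2000):
    """Fit weights and bias on (features, repaid-or-not) examples."""
    w, b = [0.0] * len(rows[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            p = 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def score(w, b, x):
    """Predicted probability that a borrower repays."""
    return 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

# Hypothetical features: [scaled mobile-money transaction volume,
#                         share of utility bills paid on time]
borrowers = [[0.9, 0.95], [0.8, 0.90], [0.1, 0.20], [0.2, 0.10]]
repaid    = [1, 1, 0, 0]
w, b = train(borrowers, repaid)
print(score(w, b, [0.85, 0.9]) > 0.5)  # applicant resembling repayers
print(score(w, b, [0.15, 0.1]) < 0.5)  # applicant resembling defaulters
```

The design point is that once such a score replaces manual field assessment, the per-loan operational cost — the main driver of the 40% rates mentioned above — can fall sharply.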

From delivering e-medicine to responding to disasters to monitoring farms through two-way interaction with farmers, digital data is not just a solution but a mutually profitable one.

To do so, we have to shed some misconceptions. First, data is not the preserve of developed countries. Nor does it require sophisticated technologies or scarce skilled manpower, since most cutting-edge software is open source. Second, much of the information used is in the public domain and freely available, as the majority of applications draw on the internet’s corpus and social media data. Third, traditional capital constraints hardly apply in this sector. The holy grail of big data is to reduce costs by eliminating inefficiencies and to spur innovation, thereby catalyzing economic activity and creating a virtuous cycle of employment and consumption. These are the things developing countries need most.


Fikire Sentayehu studies Big Data at the University of Warwick, UK

