By David Chávez Salazar
The term Big Data refers to the collection and analysis of huge and complex datasets through advanced digital technology. This concept promises to substantially change the way we live, by converting data into tools for efficiency, productivity and innovation.
According to former Minister for Universities and Science David Willetts, the UK is well placed to take on the task. On the one hand, the country has 25 of the world's 500 most powerful computers; on the other, it has a comparative advantage in Information Technology thanks to two distinctive strengths: strong skills in maths and computer science, and some of the world's best datasets in fields as diverse as demographics, agriculture, healthcare and meteorology.
A recent study estimated that big data analysis could create 58,000 new jobs and contribute £216 billion to the UK economy (2.3% of GDP) over the period 2012 to 2017. Owing to this potential, Big Data was chosen as one of the “Eight Great Technologies” to which future governments should give special support.
Two major objectives can be observed in Big Data policy (although the term does not appear explicitly in the Programme presented by Prime Minister Theresa May for the next five years): first, to strengthen the UK's data infrastructure by channelling investment through a new “Productivity Investment Fund”, which will include £740 million for that purpose by the end of 2020; and second, to promote the application of Big Data at all levels of government in order to produce quality public goods and services.
At first glance, these objectives seem harmless. From a free-market perspective, however, their inherent flaws become apparent.
The first objective is simply an application of the “Entrepreneurial State” doctrine, according to which the State must lead the process of technological development by investing large sums of money in promising projects such as Big Data. This idea is problematic for two reasons:
- Big Data processing is an idea that emerged in the private sector. In the 1960s, two researchers from the Association for Computing Machinery were the first to propose a machine to compress and interpret large amounts of data. At no point did the State intervene, and it need not do so now. The UK's data infrastructure can be developed entirely by private means. If government intervenes, it may stifle the decentralised, incremental experimentation (trial and error) of the market.
- American economist Austan Goolsbee points out that public investment in technology may increase not the quantity of innovation but its price: the salaries of scientists and engineers. In other words, it generates a redistribution of income towards these groups. The evidence seems to support this claim: the average salary of a Big Data engineer is expected to rise by at least 8% this year, after increasing by 7% between 2015 and 2016.
But might this be a free-market phenomenon? Not precisely. It is telling that since 2012, when the government decided to invest in Big Data, the demand for data engineers and scientists has risen dramatically. This suggests a State-driven demand shock: as the demand for labour increases, so does its price (the salary), which may explain the figures cited above.
The second objective of Big Data policy is even worse. Under the pretext of producing better public goods and services, the government would be able to use the data of millions of people at will. This is where the discussion about privacy comes in. Another worrying aspect of the use of Big Data by government agencies is that it could revive the fantasy of central planning: some cunning bureaucrat might conclude that it is enough to collect and interpret millions of pieces of personal data in order to identify people's preferences and direct society on the basis of them. We already know that this does not end well… Right, Soviet Union?