Centre Write
Friday, 17 May 2013 10:38

Can the UK be the first ‘computable country’?

Conrad Wolfram is Strategic Director and European Co-Founder/CEO of the Wolfram Group and Founder of computerbasedmath.org.


The government's world-leading open data agenda has been a very positive turnaround from past years of secrecy, promising transparency, sharper decision-making and wealth creation through new data-based businesses and expertise. Yet even proactive openness hasn't yielded much of this potential so far.

The central problem is that the available data has been a long way from being practically accessible or directly usable. Very few people (in government or out, as professionals or citizens) can in fact deploy the data efficiently to get answers to their questions and make better decisions: questions like "Which cities had the most new businesses per person last year?" or "What's the best transport from Oxford to Les Misérables in London?" Even if the data is out there, you need time, expertise and often specialist knowledge to analyse it.

The way to solve this availability-accessibility gap is to publish data so it's ready for seamless use by a computer: so-called 'computable data'. That's in contrast to current data, which is notionally for human use but needs manual coercion or programming before it will answer your questions. Of course, you need to provide the right interface for each audience, whether linguistic query, interactive document, app or programmer API (for others to build their own interfaces). But the core enabling infrastructure is getting the data in shape by making it computable.
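To make the distinction concrete, here is a minimal sketch in Python of what 'citizen analysis' over computable data could look like. The dataset, figures and column names are invented for illustration; a real publisher would expose its own schema and endpoint rather than the inline table used here.

    # A sketch of 'citizen analysis' over computable data, using pandas.
    # The city figures below are invented for illustration only.
    import pandas as pd

    # Suppose the published, computable dataset has one row per city with
    # new business registrations and population for the latest year.
    data = pd.DataFrame({
        "city": ["Oxford", "Manchester", "Bristol"],
        "new_businesses": [1200, 9800, 3100],
        "population": [160000, 550000, 470000],
    })

    # Because the data is already machine-readable, the question "Which cities
    # had the most new businesses per person last year?" becomes two lines of
    # analysis rather than a manual trawl through published reports.
    data["per_person"] = data["new_businesses"] / data["population"]
    print(data.sort_values("per_person", ascending=False))

The same question asked of a 'dead' PDF report would mean finding, transcribing and cross-referencing the figures by hand.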

So doing can empower the knowledge consumer to generate the personalised knowledge they need. It can help to take us from 'citizen data access' to 'citizen analysis'.

There are many governmental gainers from such an information revolution, particularly if data is cross-correlated between departments ("School exclusions vs. social workers by region") or with the massive personal data we are each starting to accrue. But long term the biggest gainer has got to be health, both system-wide and personally. Imagine if tomorrow's sensor-based personal health data could be automatically compared with the NHS's data (potentially the world's largest) to improve diagnostic and treatment hit rates.

By being early as a 'computable country', the UK could also be the hub for computable data, building up expertise for major new fields around Big Data and exporting technology to other countries.

Like all infrastructure projects, adding a computable layer to government data isn't free. But we already deem data collection sufficiently important to spend hundreds of millions, perhaps billions of pounds on it, whether through the ONS, within departments, locally or in the NHS. And yet most of that data is barely used. The question is how to get maximum value for the country out of those data assets. This is a case where, by investing the extra few percent centrally, government can massively increase the asset value and drive the market forward. If we don't, users would collectively need to spend far more to extract that value themselves; right now they don't, and the value goes unrealised.

It's a bit like a road-building programme that does everything up to laying the tarmac. Putting the tarmac on makes the road dramatically more usable, yet as an incremental cost it is relatively modest. Tarmac is today simply considered part of the build-out infrastructure, and so should computability be for supplied data, along with some appropriate interfaces.

Other than getting its own data in shape, what actions could the government take to make the UK a 'computable country', where all significant data is ready for citizen analysis?

Why not stipulate that government-funded projects and entities such as universities make their data not only open but computable too? In particular, R&D funded by government should publish programs as well as computable data, so that analyses can be rerun without the set-up costs of the original research, multiplying their value. A 'dead' paper simply isn't sufficient output for technical research.

The UK should also look to a computable version of the US's new smart disclosure plans. Where it's in the public interest, companies and institutions should supply information on products and transactions not only as reports or metrics but also in a computable form. Regulators (e.g. of banks and energy companies) must require computable reports, ready for them to change parameters and ask whatever questions they wish of the data without manual work. And as a consumer, rather than a 'dead' pension statement, you should be able to ask your own questions of the data: "How much bigger would my pension be if I saved an extra £73/month?"
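As a sketch of that pension question, the calculation a computable statement could run for you is a standard future-value-of-contributions formula. The 4% annual growth rate and 25-year term below are illustrative assumptions, not figures from any real statement or product.

    # Rough sketch of the pension question a computable statement could answer.
    # Growth rate and term are illustrative assumptions only.

    def future_value(monthly_contribution, annual_rate, years):
        """Future value of a regular monthly contribution, compounded monthly."""
        monthly_rate = annual_rate / 12
        months = years * 12
        return monthly_contribution * (((1 + monthly_rate) ** months - 1) / monthly_rate)

    # "How much bigger would my pension be if I saved an extra £73/month?"
    extra_pot = future_value(73, annual_rate=0.04, years=25)
    print(f"Extra pot after 25 years: £{extra_pot:,.0f}")

The point is not the formula itself but that, with computable data behind the statement, the saver can vary the £73, the rate or the term and see the answer immediately.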

Amongst other benefits, so doing could help to address the targets culture: instead of delivering good numbers only for the few metrics on which you know you'll be judged, your assessment can be based on agile analysis by management. One could label this 'agile targeting'.

Finally, what about the skills our population will need to be successful in a 'Computational Knowledge Economy'? It may seem paradoxical that, after arguing computers need to do more of the analysis work to make data usable by humans, I also argue that humans need to adjust their skills for this new era. But it isn't contradictory, because this new automation raises the level of expectation required for success. Using the technology I describe pushes the envelope of analysis to the next level, and most people's education is not attuned to handling that.

Maths is at the centre of the educational changes needed: computing has fundamentally changed the subject of maths in the real world and allowed its application to big data and messy computations. The very reason maths is so important today, to the economy and to everyday life, is that computers have enabled its application to all fields. In turn, the problems have got tougher, because computers have mechanised calculation and allowed us to work out much more.

Yet in education we've decided that people have to learn these calculations by hand, and only do them on a computer once they have. The result is simple: they don't get very far. We're dramatically reducing the conceptual skills people can learn, because the complexity, scope and toolset of problems that can be attempted is also reduced. We're turning out third-rate human computers, not first-rate problem-solvers.

I believe Brits are naturally good problem-solvers and would fare better against other countries with computer-based maths. With this and our leadership position on open data, the UK is in a very good position on the starting grid to lead the race to be the first 'computable country'. But the race is starting, and the next year or two will be critical to whether we win it.

 


