The Senseless Tide – How Quantum Computing Reveals the Limits of Big Data

Benjamin Cave – Head of Research

Big Data is a Big Thing. When a McKinsey report dubbed this the ‘Era of Big Data’ back in 2011, we ushered in an era of seemingly immense possibilities. Yet Big Data is really nothing more than the description of a problem that has been around since the invention of the abacus: What do we do when the raw information of our daily lives exceeds the computational power of the human brain? In the case of the abacus the answer was remarkably simple. We just put beads on a line in a system that allowed complex tabulations without holding figures in your head. As life progressed we had many such advances: the Babbage machine, cryptography engines, and the spreadsheet. All through the trajectory of human history, our lack of personal computational capacity has been met by the development of collective capacity in the form of bigger, faster computational systems. So Big Data is, at one level, simply the next in a long line of challenges that we consistently meet through technological innovation. What’s more, it seems (although the precise details of this advance are somewhat murky, even according to its inventors) that we already have a ‘great leap forward’ in computational technology: Quantum Computing.

The major advance of Quantum Computing is at once simple and complex: It does away with a computational mainstay called the bit.

Note – A bit is held in a circuit, usually silicon, that can be tasked with answering a yes/no question like ‘does x = 5?’ The circuit runs the query and settles into one of two states: ‘yes’ or ‘no’. Meanwhile other bits are answering other questions simultaneously, so the next one might be finding out whether x = 6.

Bits are all very well for 99% of our computational tasks to date. You simply put enough of them on a circuit board and they run their queries to find answers. But for much more complex problems it becomes tricky to build computers large and fast enough to answer these questions. Take for example a challenge that has occupied the minds of many in the era of Big Data: What is the optimal routing of all global airline traffic? This question is both data-intensive, requiring billions of data points from every flight on earth, and computationally demanding. Finding enough bits to ask all of those questions is an enormous task and one that would still take years to answer on a traditional computer.
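The brute-force approach described above can be sketched in a few lines of Python. This is a toy illustration of classical, one-check-at-a-time search, not real routing code; the names and the ‘secret’ value are invented for the example.

```python
# Toy illustration of classical search: test each candidate with a
# yes/no check, one at a time, exactly as a bit-based machine would.

def find_value(target_check, candidates):
    """Return the first candidate for which the check answers 'yes'."""
    for x in candidates:
        if target_check(x):   # one bit-style query: does x satisfy the condition?
            return x
    return None               # no candidate passed the check

# Example: which value between 1 and 100 is the 'right' one?
secret = 42
answer = find_value(lambda x: x == secret, range(1, 101))
print(answer)  # in the worst case this costs 100 separate checks
```

The cost grows with the number of possibilities: for a question like global flight routing, the candidate list is astronomically long, which is exactly the bottleneck the article describes.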

Quantum computing works quite differently. Instead of traditional silicon circuits, it uses loops made from the element niobium that are cooled to near absolute zero until their atoms become ‘entangled’. The precise definition of entanglement is too lengthy a subject for this piece, but the end result is to create bits which are in a state of ‘quantum superposition’.

Note – Quantum superposition is a strange concept best illustrated by Schrödinger’s Cat. The cat goes into a box with a randomly-triggered poison capsule that will go off sometime between now and n. At any given moment between now and n, is the cat alive, dead or both? The answer, weirdly, is both – or more accurately, the cat occupies a ‘superstate’ that encompasses both life and death. Schrödinger’s thought experiment is more than just a charming foray into animal cruelty. It illustrates a real phenomenon in Quantum Mechanics called superposition. Superposition is a state in which the same atom can simultaneously occupy multiple states by occupying a superstate above them all; it can be both ‘alive’ and ‘dead’ at the same time. This is what happens to atoms within the niobium loops of a Quantum Computer.

All physics aside, the result of these superposed bits, or ‘qubits’ as they are known, is remarkable. Because the same qubit can simultaneously occupy multiple states, it can answer multiple questions at once. So a register of qubits can test whether x is any value from 1 to 100 in a single query. The capabilities of this system are mind-blowing if executed correctly. A quantum computer with 30 qubits could in principle hold around a billion (2^30) states at once – a capacity that has been compared to a conventional machine running at 10 trillion operations per second.
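The doubling behind that 30-qubit figure can be mimicked (though never reproduced at speed) on a classical machine by tracking the full state vector: n qubits require 2^n amplitudes of bookkeeping. A small sketch, with a hypothetical 3-qubit register:

```python
# A 3-qubit register simulated classically as a list of 2**3 = 8
# amplitudes. In a uniform superposition every basis state |000>..|111>
# is present at once with equal weight.
n = 3
dim = 2 ** n
state = [1 / dim ** 0.5] * dim   # equal amplitude for each basis state

# Measurement probabilities are the squared magnitudes of the amplitudes.
probs = [abs(a) ** 2 for a in state]
print(round(sum(probs), 10))     # 1.0 - the register spans all 8 states

# The classical bookkeeping doubles with every added qubit:
print(2 ** 30)                   # 1073741824 states for a 30-qubit register
```

This is why classical simulation of even modest quantum registers becomes infeasible – and, conversely, why a genuine 30-qubit machine is such a leap.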

There is however, as there always is with such advances, a catch. Quantum computers are only good at solving certain kinds of problems. The reason for this is that to get intelligible results from a qubit that can query so many things at once, you need to put strong parameters on the questions you ask. If x has a value between 1 and 100 and I want to find out which of these is correct, a qubit is vastly faster at solving this question than a single silicon bit. However, if I want to ask a question where there isn’t a ‘correct’ value, like whether more people in a sample of 100 like tea or coffee, a qubit is only slightly better than a bit, because this question is actually 100 little questions that need to be answered separately. Heavy parameters mean that quantum computers are best suited to problems that are highly computationally-intensive (i.e. require millions of possibilities to be checked) but outcome-simple (i.e. there is a ‘right’ answer). A good example of this type of problem is cracking a software encryption key or a safe combination.
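The distinction between the two kinds of question can be made concrete with a classical sketch: the first has a single ‘right’ answer a search can home in on, while the second is an aggregate over a sample where every data point must be read. The function names and sample data here are invented for illustration.

```python
# Outcome-simple: one 'right' answer exists, so the search can stop the
# moment it is found (the safe-combination style of problem).
def crack_combination(is_correct, candidates):
    for guess in candidates:
        if is_correct(guess):
            return guess

# Outcome-complex: an aggregate over a sample with no single 'right'
# value - every data point must be read, so there is no shortcut.
def tally_preferences(sample):
    tea = sum(1 for person in sample if person == "tea")
    return "tea" if tea > len(sample) - tea else "coffee"

print(crack_combination(lambda g: g == 7, range(100)))   # 7
print(tally_preferences(["tea", "coffee", "tea"]))       # tea
```

Only the first shape of problem fits the ‘heavy parameters, single right answer’ mould the article describes.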

Bringing it back to Big Data, the Quantum Computer promises to make solving a problem like the flight routing discussed above an achievable goal. With the right parameters and data, a quantum computer could deliver an answer to this problem more than 11,000 times faster than even the largest mainframe. A new wave of Big Data possibilities emerges through this advance: we could solve computationally-complex problems faster than at present, and with the expanding volumes of data this is becoming ever more important.

However, the strict parameters needed to work a Quantum Computer also tell us a great deal about the limits of Big Data. When we break it down, we see that Quantum Computing is a great leap forward in our capacity to process information but no leap forward at all in our ability to understand information. Herein lies the danger of Big Data. Huge information sets give us a raw, unfiltered view of the world, but they don’t give us an intelligent understanding of the world. Raw insights need smart people to process them. Big Data risks substituting human intelligence with a senseless tide of raw data.

Let’s take an example of this danger, and I draw here upon a conversation we had in 21c HQ a few weeks ago. I was caught up in a fever of Big Data possibilities and suggested, somewhat provocatively, that in the future we could replace the whole civil service with machines. The machines would be programmed with a political priority by our benevolent government, like ‘Lower taxes for the middle class by 5% while maintaining core public services’. The Quantum Computer at Government HQ would then run trillions of possible scenarios for government spending to find the optimal configuration to achieve the input parameters. In essence, this is evidence-based policy-making at its zenith. A computationally-precise way to execute manifesto promises. Who could object to this cyber-utopia, right? Well, my colleague did (and she was right!). She pointed out that I had eliminated intelligence from the policy process. All cynicism about public servants aside, this system would replace the careful consideration of evidence by highly-trained experts with a blind faith in the senseless tide of raw data. A computer doesn’t see a child who will be put at risk because their home life will be damaged by the optimal spending configuration. A computer doesn’t see a small business that won’t be able to flourish and grow because higher rates would be the optimal way to offset lower middle-class taxes. A computer can’t see the threat to a country from global terrorism that is heightened by the optimal decision to reduce the defense budget. In fact a computer can’t see anything it isn’t first programmed to see by people. All it understands are the parameters it has been given. We live in a complex world, a world where optimal spending today could mean disaster tomorrow, and at times our Big Data sets and our Quantum Computers do little but exacerbate our ignorance of consequences by encouraging us to sit back and trust the data.
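The blindness described above can be sketched in a few lines: an optimiser faithfully maximises whatever objective it is given, and anything left out of that objective simply does not exist for it. All figures and field names below are hypothetical.

```python
# A toy policy optimiser: pick the budget split that maximises the
# stated objective. Side effects not encoded in the objective are
# invisible to the machine. (All figures are hypothetical.)

scenarios = [
    {"tax_cut": 5, "defense": 20, "services": 75},
    {"tax_cut": 5, "defense": 5,  "services": 90},   # 'optimal' on paper
    {"tax_cut": 3, "defense": 25, "services": 72},
]

def objective(s):
    # The machine sees only these parameters - nothing else.
    return (s["tax_cut"] >= 5) * s["services"]

best = max(scenarios, key=objective)
print(best["defense"])   # 5 - defense was gutted; the objective never
                         # mentioned it, so the optimiser never saw it
```

The point is not that the code is wrong – it is perfectly correct – but that correctness is defined entirely by the parameters a human chose to encode.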

Let me say here that the above is not a paean of praise to the golden era of human government, nor am I a neo-luddite whose distrust of technology leads to blind rejection. Instead I would caution that, just as Quantum Computers are only as useful as the people who parameterise the questions, Big Data is only as useful as the minds that interpret it. Big Data is a fantastic tool to support decisions. What we need to be mindful of is that it requires big minds to understand the implications of the data, to contextualise the results and to decide, on those occasions where there are consequences the computers cannot see, to overrule the senseless tide and take a human decision.

