Author: Sabina Sokol, USA Ambassador of Girls in Quantum.
Quantum Advantage: Where Does It Exist?
By: Sabina Sokol
The idea of quantum advantage dates back to the 1980s, when physicist Richard Feynman argued that there are certain kinds of problems that quantum computers could solve notably faster and more efficiently than classical computers can. To understand how this kind of efficiency is measured, we need Big O Notation, which describes how the number of operations (or the amount of memory) an algorithm requires grows with the number of inputs in the worst-case scenario.
A graphical comparison of a number of different Big O complexities.
Suppose you have a list of n numbers from a1 to an and you're trying to find the number x by going through each number and asking "Are you x?" until the answer is "Yes"; this is known as linear search. The worst-case scenario, the one requiring the most operations, is that x is an, the very last number of the list, which means you had to go through all n numbers. Thus, the Big O Notation for linear search is O(n). The faster the function inside the parentheses grows, the "worse" your algorithm is, which makes O(n!) the least efficient common classification because the factorial grows the fastest. The "best" classification is O(1), meaning the algorithm needs only a constant number of operations no matter how large the input is. An example of an O(n!) algorithm is a brute-force solution to the "Traveling Salesman" problem, where the number of operations increases factorially with the number of inputs (locations, in this case). An example of an O(1) operation is a lookup in a hash table, a data structure that stores each key with its corresponding value using a hashing function (cryptographic hashes like SHA-256 exist, though hash tables usually rely on much faster non-cryptographic functions). Because the hash tells you directly where a value is stored, a hash-table lookup takes constant time on average, so its Big O complexity is O(1).
A visual representation of a hash table.
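The two complexities above can be seen side by side in a minimal Python sketch (the list and function names here are illustrative): a linear search that may examine every element, versus a dictionary lookup, since Python's dict is itself a hash table.

```python
def linear_search(numbers, x):
    """Check each element in turn; the worst case examines all n elements -> O(n)."""
    for i, value in enumerate(numbers):
        if value == x:
            return i
    return -1  # x is not in the list


numbers = [7, 3, 9, 4, 8]
print(linear_search(numbers, 8))  # worst case: x is the very last element -> 4

# A dict maps each value to its position; lookup is O(1) on average.
index_of = {value: i for i, value in enumerate(numbers)}
print(index_of[8])  # -> 4, found via the hash, not by scanning
```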
Though there are few proven cases of quantum advantage, the idea is that quantum algorithms will be more efficient, and therefore have better Big O complexities, than their classical counterparts. The first quantum algorithm to support this idea was designed by David Deutsch and Richard Jozsa in 1992. It solves a "black box" problem: given a function that is promised to be either constant (the same output for every input) or balanced (outputting 0 for exactly half the inputs and 1 for the other half), decide which it is. A quantum computer needs only a single query to the function, while a deterministic classical computer may need exponentially many to be certain. The name "black box" comes from the analogy that such a system provides useful information without revealing how it came up with it, thus being opaque or oracle-like. The irony is that the Deutsch-Jozsa algorithm did not prove to have any useful applications itself, so this official first case of quantum advantage did not provide any technological advancements.
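The classical side of that gap can be sketched in a few lines of Python. The function below is illustrative (it is not from the Deutsch-Jozsa paper): it counts how many queries a deterministic classical checker needs before it can declare a black-box function constant or balanced, which in the worst case is 2^(n-1) + 1 queries, versus the single quantum query Deutsch-Jozsa requires.

```python
def classical_queries(f, n):
    """Query f on inputs 0, 1, 2, ... until the verdict is certain.

    f maps integers in [0, 2^n) to 0 or 1 and is promised to be either
    constant or balanced. Seeing two different outputs proves "balanced";
    seeing 2^(n-1) + 1 identical outputs proves "constant".
    """
    limit = 2 ** (n - 1) + 1  # worst-case classical query count
    seen = set()
    for x in range(limit):
        seen.add(f(x))
        if len(seen) == 2:
            return x + 1, "balanced"  # two outputs seen: done early
    return limit, "constant"


n = 4
print(classical_queries(lambda x: 0, n))      # constant f: (9, 'constant')
print(classical_queries(lambda x: x & 1, n))  # balanced f: found quickly
```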
In 2019, Google claimed to have achieved quantum advantage with its Sycamore device, a 54-qubit superconducting processor (53 of the qubits were functional during the experiment). The researchers generated circuits of roughly a thousand gates with pseudo-random gate configurations, which Sycamore was able to sample in about 200 seconds. Google's quantum researchers estimated that the same calculation would take a classical supercomputer 10,000 years, because there is almost no way to simplify the sequence of gates. For example, if two X-gates, which essentially "flip" the state of a qubit about the x-axis, sat side by side, they could simply be canceled out because they reverse each other. But because the setup was so random, there were very few places that could be simplified, forcing a classical algorithm to work through the computation gate by gate, just as the quantum processor does. This takes far more "brain power" on the classical side because there is no inherent superposition property, so every state has to be individually simulated. Hence the researchers' estimate that a classical algorithm would need thousands of years to compute the random circuits.
A photograph of Google’s quantum device known as the Sycamore.
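The cancellation trick described above, and why it fails on random circuits, can be shown with a small illustrative sketch (the function name and gate list are assumptions for this example, not Google's code). Gates like X, H, and Z are their own inverses, so two identical ones in a row act as the identity and can be deleted:

```python
def cancel_adjacent_pairs(gates, self_inverse=("X", "H", "Z")):
    """Remove neighbouring duplicate self-inverse gates: G followed by G is identity."""
    out = []
    for g in gates:
        if out and out[-1] == g and g in self_inverse:
            out.pop()  # the pair G, G cancels; earlier pairs may now cancel too
        else:
            out.append(g)
    return out


# A contrived, highly simplifiable circuit collapses almost entirely:
print(cancel_adjacent_pairs(["X", "X", "H", "Z", "Z", "H", "T"]))  # -> ['T']

# A random-looking circuit offers nothing to cancel, as in Google's experiment:
print(cancel_adjacent_pairs(["X", "H", "T", "Z", "H", "X"]))  # unchanged
```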
However, quite a few quantum researchers disagreed with the conclusion of the demonstration because they believed a classical algorithm could perform the same calculation within a few days. Shortly after Google's announcement, IBM researchers Edwin Pednault, John A. Gunnels, Giacomo Nannicini, Lior Horesh, and Robert Wisnieff published a paper arguing that a classical algorithm run on the Summit supercomputer at Oak Ridge National Laboratory in Tennessee could simulate the circuits "with high fidelity", which means with little error in computation, "to arbitrary depth in a matter of days"; IBM put the figure for these ultra-random circuits at under three days. Though these assertions were never demonstrated in practice, the rigorous analysis has compelled many people to discount Google's claim.
A year later, quantum researchers at the University of Science and Technology of China in Hefei demonstrated their own version of quantum advantage with a different type of quantum computer, known as Jiuzhang, which harnesses photons as its qubits. They essentially calculated the probability of detecting a boson, a type of subatomic particle such as a photon, at a specific output of an optical network; they scaled this up to compute the joint probabilities for many bosons, something once thought to be practically out of reach because it would take a classical supercomputer (such as China's Sunway TaihuLight) billions of years. Finding the probability distribution of the bosons is genuinely hard because the particles interfere in effectively random ways, so there are no patterns or straightforward shortcuts a classical model can exploit. These researchers managed to solve what is known as the "boson sampling problem" in a matter of minutes.
A photograph of USTC's Jiuzhang quantum computer.
The caveat of this demonstration, however, is that the quantum computer used is non-universal and non-programmable: it can only run the boson-sampling task it was built for, so its capabilities cannot be harnessed for scientific innovations such as quantum chemistry simulations and machine learning, as its researchers had hoped. It has no direct practical applications and therefore does not fully satisfy the definition of quantum advantage.
With the two attempts we have explored, it is clear that the quantum field has already come a long way, but there is still plenty of room for growth before quantum advantage is truly achieved. Many scientists predict that quantum computers will surpass classical computing capabilities within a matter of years; it is remarkable to think that this may happen in our lifetime! It is also worth noting that quantum computers have been around for less than three decades, since the first was only built in 1998, whereas classical computing has been around for approximately nine decades, with the first machine built in 1936. Just as it took many years to get from those early machines to today's classical innovations, from laptops to self-driving cars, it will take time to develop and expand the quantum world. Let's be the pioneers by inspiring and educating as many people as possible in this new field of opportunity.
Ball, P. (2020, December 3). Physicists in China challenge Google's 'quantum advantage'. Nature. https://www.nature.com/articles/d41586-020-03434-7#ref-CR3
Deutsch, D., & Jozsa, R. (1992). Rapid solution of problems by quantum computation. Proceedings of the Royal Society of London. Series A: Mathematical and Physical Sciences, 439(1907), 553-558. https://doi.org/10.1098/rspa.1992.0167
Gard, B. T., Motes, K. R., Olson, J. P., Rohde, P. P., & Dowling, J. P. (2015). An introduction to boson-sampling. From Atomic to Mesoscale, 167-192. https://doi.org/10.1142/9789814678704_0008
Holton, W. C. (2001, October 26). quantum computer. Britannica. Retrieved May 1, 2022, from https://www.britannica.com/technology/quantum-computer
Martinis, J., & Boixo, S. (2019, October 23). Quantum Supremacy Using a Programmable Superconducting Processor. Google AI Blog. https://ai.googleblog.com/2019/10/quantum-supremacy-using-programmable.html
Naushad, R. (2020, January 26). History of Classical Computing. FAUN Pub. https://faun.pub/history-of-classical-computing-575ad4e76355
Pednault, E., Gunnels, J. A., Nannicini, G., Horesh, L., & Wisnieff, R. (2019). Leveraging secondary storage to simulate deep 54-qubit sycamore circuits. IBM T.J. Watson Research Center, 1-39. https://doi.org/10.48550/arXiv.1910.09534
Zednik, C. (n.d.). Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence. Otto-von-Guericke-Universität Magdeburg, 1-29. https://arxiv.org/pdf/1903.04361.pdf