**The event unveiled cutting-edge devices, notably the 133-qubit Heron Quantum Processing Unit (QPU), marking IBM's foray into utility-scale quantum processing. Additionally, the Quantum System Two, a self-contained quantum-specific supercomputing architecture, was introduced.**

During its Quantum Summit 2023, IBM took the stage with a sense of wonder, acknowledging the challenges and successes that have shaped the current landscape of quantum computing. The present quantum paradigm, which has reshaped IBM's trajectory through consecutive breakthroughs, is itself a formidable consolidation, and according to IBM, the path forward in quantum computing will continue to be demanding. The event unveiled cutting-edge devices, notably the 133-qubit Heron Quantum Processing Unit (QPU), marking IBM's foray into utility-scale quantum processing. Additionally, the Quantum System Two, a self-contained quantum-specific supercomputing architecture, was introduced. However, the pursuit of advancements in these devices remains an ongoing endeavor.

Each subsequent breakthrough, while pushing the boundaries, contributes to what could be termed quantum's "plateau of understanding." Much as with semiconductors, where we reached practical design limits due to quantum effects, conquering this plateau implies achieving a level of utility and understanding that sustains independent research and development, akin to the longevity seen in Moore's law.

IBM's Quantum Summit 2023 reflects a transformative moment in the company's culture and operations, portraying an energized organization venturing into a "quantum-centric supercomputing era." This vision centers on the Heron Quantum Processing Unit, showcasing scalable quantum utility with its 133 qubits – enough to tackle problems beyond the practical reach of classical systems. IBM's breakthroughs and a redefined roadmap have prompted the company to adopt two distinct development tracks, emphasizing scalability and practical, minimum-quality outcomes over monolithic, complex products that are challenging to validate.

IBM's newly announced plateau for quantum computing packs in two particular breakthroughs that occurred in 2023. One is a groundbreaking noise-reduction algorithm (Zero Noise Extrapolation, or ZNE), which we covered back in July – essentially a system through which you can compensate for noise. For instance, if you know a tennis player tends to play more to the left, you can compensate for that up to a point; there will always be a moment where you correct too much or cede ground to other disruptions (such as the opponent exploiting the overexposed right side of the court). This is where the concept of qubit quality comes into play: the higher the quality of your qubits, the more predictable both their results and their disruptions, the better you know their operational constraints, and the more useful work you can extract from them. The other breakthrough is an algorithmic improvement of epic proportions, first pushed to arXiv on August 15th, 2023. Titled “High-threshold and low-overhead fault-tolerant quantum memory,” the paper showcases algorithmic ways to reduce the qubit requirements of certain quantum calculations by a factor of ten. When what used to cost 1,000 qubits and a complex logic gate architecture sees a tenfold cost reduction, it's likely you'd prefer to end up with 133-qubit-sized chips – chips that crush problems previously meant for 1,000-qubit machines. Enter IBM's Heron Quantum Processing Unit (QPU) and the era of useful, quantum-centric supercomputing.
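To make the ZNE idea concrete, here is a minimal sketch in Python. The scale factors and measured values are illustrative stand-ins, not IBM's data or implementation: the technique deliberately amplifies noise, measures the same observable at each amplification level, and extrapolates the trend back to a hypothetical zero-noise point.

```python
# Minimal sketch of zero-noise extrapolation (ZNE); numbers are illustrative only.
import numpy as np

# Noise scale factors at which the circuit was (hypothetically) executed,
# e.g. by stretching gate pulses or folding gates to multiply the noise.
scale_factors = np.array([1.0, 2.0, 3.0])

# Hypothetical expectation values measured at each amplified noise level.
noisy_expectations = np.array([0.81, 0.66, 0.54])

# Fit a low-order polynomial to the (noise level, expectation) pairs...
coefficients = np.polyfit(scale_factors, noisy_expectations, deg=2)

# ...and evaluate it at zero noise to estimate the noiseless result.
zero_noise_estimate = np.polyval(coefficients, 0.0)
print(f"Extrapolated zero-noise expectation value: {zero_noise_estimate:.3f}")
```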

## The Quantum Roadmap at IBM’s Quantum Summit 2023

The two-part breakthroughs of error mitigation (through the ZNE technique) and algorithmic performance (alongside qubit gate architecture improvements) allow IBM to now consider reaching 1 billion operationally useful quantum gates by 2033. It is a happy coincidence – one born of research effort and human ingenuity – that we only need to keep 133 qubits relatively happy within their own environment to extract useful quantum computing from them: computing that we couldn't classically get anywhere else.

The “Development” and “Innovation” roadmaps showcase how IBM is thinking about its superconducting qubits: as we've learned to do with semiconductors already, by mapping out the hardware-level improvements alongside the scalability-level ones. As we've seen through our supercomputing efforts, there's no such thing as a truly monolithic approach: every piece of supercomputing is (necessarily) efficiently distributed across thousands of individual accelerators. Your CPU performs better by knitting together and orchestrating several different cores, registers, and execution units. Even Cerebras' Wafer Scale Engine scales further outside its wafer-level computing unit. No accelerator so far – no unit of computation – has proven powerful enough that we don't need to unlock more of its power by increasing its area or computing density. Our brains and learning ability seem to provide the only known exception.

IBM's modular approach and its focus on introducing more robust intra-QPU and inter-QPU communication for this year's Heron show it's aware of the tightrope it's walking between quality and scalability. The thousands of hardware and scientist hours spent developing the tunable couplers – one of Heron's signature design elements, and the ones that allow parallel execution across different QPUs – are another sign. Pushing one lever harder means other systems have to be able to keep up; IBM also plans on steadily improving its internal and external coupling technology (already developed with scalability in mind for Heron) throughout further iterations, such as Flamingo's planned four versions, which still "only" scale up to 156 qubits per QPU.

Considering how you're solving scalability problems and the qubit quality × density × ease-of-testing equation, the "ticks" – the density increases that don't sacrifice quality and are feasible from a testing and productization standpoint – may be harder to unlock. But if one side of development is scalability, the other relates to the quality of whatever you're actually scaling – in this case, IBM's superconducting qubits themselves. Heron itself saw a substantial rearrangement of its internal qubit architecture to improve gate design, accessibility, and quantum processing volumes – not unlike an Intel "tock." The planned iterative improvements to Flamingo's design seem to confirm this.

## Utility-Level Quantum Computing

There's a sweet spot for the quantum computing algorithms of today: algorithms that fit within roughly a 60-gate depth seem to be complex enough to allow for useful quantum computing. Perhaps thinking about Intel's NetBurst architecture with its Pentium 4 CPUs is appropriate here: too deep an instruction pipeline becomes counterproductive after a point. Branch mispredictions are terrible across computing, be it classical or quantum. And quantum computing – as we still have it in our Noisy Intermediate-Scale Quantum (NISQ) era – is vulnerable to a more varied disturbance field than semiconductors are (there are world overclocking records where we chill our processors to sub-zero temperatures and pump them with above-standard voltages, after all). But perhaps that comparative quantum vulnerability is understandable, given how we're essentially manipulating the essential units of existence – atoms and even subatomic particles – into becoming useful to us.
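For a concrete sense of what "gate depth" measures, here is a small, hedged Qiskit sketch; the circuit is an arbitrary illustration, not one of IBM's benchmark workloads. Depth counts the longest chain of operations that must execute one after another, which is why deeper circuits accumulate more noise before measurement.

```python
# Illustrative three-qubit circuit showing how sequential dependencies build depth.
from qiskit import QuantumCircuit

qc = QuantumCircuit(3)
qc.h(0)          # first operation on qubit 0
qc.cx(0, 1)      # must wait for the H gate, so it sits one layer deeper
qc.cx(1, 2)      # must wait for the previous CNOT, deeper still
qc.rz(0.5, 2)    # chained onto qubit 2, adding yet another layer
qc.measure_all()

# depth() reports the length of that longest sequential chain of operations.
print("Circuit depth:", qc.depth())
```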

Useful quantum computing doesn't simply correlate with an increasing number of available in-package qubits (witness the announcements of 1,000-qubit products based on neutral-atom technology, for instance). Useful quantum computing is always stretched thin across its limits: if it isn't bumping against one fundamental limit (qubit count), it's bumping against another (instability at higher qubit counts), or contending with issues of entanglement coherence and longevity, entanglement distance and capability, correctness of the results, and still other elements. Some of these scalability issues can be visualized within the same framework of efficient data transit between distributed computing units, such as cores in a given CPU architecture, and they can be addressed in a number of ways, such as hardware-based information processing and routing techniques (AMD's Infinity Fabric comes to mind, as does Nvidia's NVLink).

This feature of quantum computing already being useful at the 133-qubit scale is also part of the reason why IBM keeps prioritizing quantum computing challenges around useful algorithms occupying a 100-qubit by 100-gate-depth grid. That quantum is already useful beyond classical, even in gate grids that are comparatively small next to what we can achieve with transistors, points to the scale of the transition – to how different these two computational worlds are.

Then there are the matters of error mitigation and error correction – of extracting ground-truth-level answers to the questions we want our quantum computer to solve. There are also limits to how well we can use quantum interference to collapse a quantum computation at just the right moment, so that we obtain the result we want – or at least something close enough to correct that we can offset the remaining noise (non-useful computational results, the spread of values between the correct answer and the not-yet-culled wrong ones) through a clever, groundbreaking algorithm.

The above are just some of the elements currently limiting how useful qubits can truly be and how they can be manipulated into useful, algorithm-running computation units. This is usually referred to as a qubit's quality, and we can see how it both does and doesn't relate to the sheer number of qubits available. But since many useful computations can already be achieved with 133-qubit-wide Quantum Processing Units (there's a reason IBM settled on a mere six-qubit increase from Eagle to Heron, and only scales up to 156 qubits with Flamingo), the company is setting out to keep this optimal qubit width through a number of years of continuous redesigns. IBM will focus on making correct results easier to extract from Heron-sized QPUs by increasing the coherence, stability, and accuracy of these 133 qubits while surmounting the arguably harder challenge of distributed, highly parallel quantum computing. It's a one-two punch again, and one that comes from the quickening pace at which IBM is climbing ever-higher stretches of the quantum computing plateau.

But there is an admission that this is a barrier IBM still wants to punch through – it's much better to pair 200 units of a 156-qubit QPU (Flamingo) than of a 127-qubit one such as Eagle, so long as efficiency and accuracy remain high. Oliver Dial says that Condor, "the 1,000-qubit product," is running locally – up to a point. It was meant to be the thousand-qubit processor, and it was as much a part of the roadmap for this year's Quantum Summit as the actual focus, Heron – but it's ultimately not a direction the company thinks is currently feasible.

IBM did manage to yield all 1,000 Josephson junctions within its experimental Condor chip – the thousand-qubit halo part that will never see the light of day as a product. It's running within the labs, and IBM can show that Condor yielded computationally useful qubits. One issue is that at that qubit count, testing such a device becomes immensely expensive and time-consuming. At a basic level, it's harder and more costly to guarantee the quality of a thousand qubits and their increasingly complex field of possible interactions and interconnections than to assure the same requirements in a 133-qubit Heron. Even IBM only means to test around a quarter of the in-lab Condor QPU's area, confirming that the qubit connections are working.

But Heron? Heron is made for quick verification that it's working to spec – that it's providing accurate results, or at least computationally useful results that can then be corrected through ZNE and other techniques. That means you can get useful work out of it already, while it's also a much better time-to-market product in virtually all areas that matter. Heron is what IBM considers the basic unit of quantum computation – good enough and stable enough to outpace classical systems in specific workloads. But that *is* quantum computing, and that *is* its niche.

## The Quantum-Centric Era of Supercomputing

Heron is IBM's entrance into the mass-access era of Quantum Processing Units. Next year's Flamingo builds further on the inter-QPU coupling architecture so that greater parallelization can be achieved. The idea is to scale at a base, post-classical utility level and maintain that as a minimum quality baseline. Only then will IBM scale density and unlock the corresponding jump in computing capability – when that can be achieved in a similarly productive way, and when scaling no longer compromises quantum usefulness.

There’s simply never been the need to churn out hundreds of QPUs yet – the utility wasn’t there. The Canaries, Falcons, and Eagles of IBM’s past roadmap were never meant to usher in an age of scaled manufacturing. They were prototypes, scientific instruments, explorations; proofs of concept on the road towards useful quantum computing. We didn’t know where usefulness would start to appear. But now, we do – because we’ve reached it.

Heron is the design IBM feels best answers that newly-created need for a quantum computing chip that actually is at the forefront of human computing capability – one that can offer what no classical computing system can (in some specific areas). One that can slice through specific-but-deeper layers of our Universe. That’s what IBM means when it calls this new stage the “quantum-centric supercomputing” one.

Classical systems will never cease to be necessary, both in themselves and in the way they structure our current reality, systems, and society. They also function as the layer that allows quantum computing itself to happen, be it by carrying and storing its intermediate results or by knitting together the final informational state – mapping out the correct answer quantum computing provides, one quality step at a time. The "quantum-centric" bit merely refers to how quantum computing will be the core contributor to developments in fields such as materials science, more advanced physics, chemistry, superconduction, and basically every domain where our classical systems were already presenting a duller and duller edge with which to push our understanding further.

## Quantum System Two, Transmon Scalability, Quantum as a Service

However, with IBM's approach and its choice of transmon superconducting qubits, a certain difficulty lies in commercializing local installations. Quantum System Two, as the company is naming its new, almost self-contained quantum computing system, has been shown working with different QPU installations (both Heron and Eagle). When asked whether scaling Quantum System Two and similar self-contained products would be a bottleneck to technological adoption, IBM's CTO Oliver Dial said that it was definitely a difficult problem to solve, but that he was confident in the company's ability to further reduce costs and complexity in time, considering how successful IBM had already proven in that regard. For now, it's easier for IBM's quantum usefulness to be unlocked at a distance – through the cloud and its quantum computing framework, Qiskit – than it is to achieve it by running local installations.

Qiskit is the preferred medium through which users can actually deploy IBM's quantum computing products in research efforts – just like you can rent any number of Nvidia A100s' worth of processing power through Amazon Web Services, or even a simple Xbox Series X console through Microsoft's xCloud service. On the day of IBM's Quantum Summit, that freedom also meant access to the useful quantum circuits within IBM-deployed Heron QPUs. And it's much easier to scale access by keeping the hardware at home and serving it through the cloud than by delivering a box of supercooled transmon qubits ready to be plugged in and played with.
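As a rough illustration of what that cloud access looks like in practice, here is a hedged Qiskit sketch. It assumes an IBM Quantum account token has already been saved locally, and the exact class names and call signatures vary across qiskit-ibm-runtime versions, so treat it as a sketch rather than canonical usage.

```python
# Hedged sketch: submit a tiny job to IBM hardware over the cloud.
# Assumes a saved IBM Quantum account; API details differ between
# qiskit-ibm-runtime versions.
from qiskit import QuantumCircuit
from qiskit_ibm_runtime import QiskitRuntimeService, Sampler

service = QiskitRuntimeService()   # authenticate with the saved account
backend = service.least_busy(operational=True, simulator=False)  # pick a real device

# A trivial Bell-state circuit standing in for a real workload.
circuit = QuantumCircuit(2)
circuit.h(0)
circuit.cx(0, 1)
circuit.measure_all()

sampler = Sampler(backend=backend)  # primitive that returns measurement statistics
job = sampler.run(circuit)          # submit to the remote QPU
print(job.result())                 # read back the measurement distribution
```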

That's one devil in the details of IBM's superconducting-qubit approach – not many players have the will, funding, or expertise to put a supercooled chamber into local operation and build the required infrastructure around it. These are complex mechanisms housing kilometers of wiring – another focus of IBM's development and tinkering, culminating in last year's flexible ribbon solution, which drastically simplified connections to and from QPUs.

Quantum computing is a uniquely complex problem, and democratized access to hundreds or thousands of mass-produced Herons in IBM's refrigerator-laden fields will ultimately only require, well… a stable internet connection. Logistics are what they are, and IBM's Quantum Summit also took the necessary steps to address some needs within its Qiskit platform by introducing its official 1.0 version. Food for thought: the era of useful quantum computing seems to coincide with the beginning of the era of quantum computing as a service. That was fast.

## Closing Thoughts

The era of useful, mass-producible, mass-access quantum computing is what IBM is promising. But now there's the matter of scale, and the matter of how cost-effective it is to install a Quantum System Two or Five or Ten compared to another qubit approach – be it topological quantum computing, nitrogen-vacancy centers, ion traps, or others an entire architecture away from IBM's approach, such as fluxonium qubits. It's likely that a number of qubit technologies will still make it to the mass-production stage – and even then, we can rest assured that everywhere along the road of human ingenuity lie failed experiments, like Intel's recently-discontinued Itanium or AMD's out-of-time approach to x86 computing in Bulldozer.

It's hard to see where the future of quantum computing takes us, and it's hard to say whether it will look exactly like IBM's roadmap – the same roadmap whose running changes we also discussed here. All roadmaps are a permanently-drying painting, both for IBM itself and for the technology space at large. Breakthroughs seem to be happening daily on each side of the fence, and it's a fact of science that the earlier the questions we ask, the more potential lies in the answers. The promising qubit technologies of today will have to answer real interrogations on performance, usefulness, ease and cost of manipulation, quality, and scalability – and they will have to do so at least as well as what IBM is proposing with its transmon-based superconducting qubits: its Herons, its scalable Flamingos, and its (still unproven, but hinted-at) ability to eventually mass-produce useful numbers of useful Quantum Processing Units. All of that even as we remain in this noisy, intermediate-scale quantum (NISQ) era.

It's no wonder that Oliver Dial looked and sounded so energetic during our interview: IBM has already achieved quantum usefulness and has started to answer the two most important questions – quality and scalability – through its Development and Innovation roadmaps. And it did so through the collaboration of an incredible team of scientists, delivering results years earlier than expected, as Dial happily conceded. In 2023, IBM unlocked useful quantum computing within a 127-qubit Quantum Processing Unit, Eagle, and then walked the process of perfecting it toward the revamped Heron chip. That's an incredible feat in and of itself, and it's what allows us to even discuss issues of scalability at this point. It's the reason the roadmap has had to shift to accommodate it – and in this quantum computing world, scalability is a great follow-up question to have.

Perhaps the best question now is: how many things can we improve with a useful Heron QPU? How many locked doors have sprung ajar?