# Staged Model for Scaling of Quantum Algorithms

How can the designer of a quantum algorithm know that the algorithm will work for far more qubits than it has been tested with, or for more qubits than can be simulated (roughly 50 qubits at most)? This informal paper proposes a model for the design and testing of scalable quantum algorithms.

The ambitious goal of quantum computing is dramatic quantum advantage, where a quantum algorithm can outperform a comparable classical algorithm by a truly amazing factor. Not just a factor of 2X, 10X, 100X, 1,000X, or even 1,000,000X, but far more than even a factor of a billion or a trillion times the best classical algorithms. The goal is not simply to do things faster, but to do computations which a classical computer *can’t* even do in any reasonable and acceptable amount of time.

That’s the goal. How to get there is another story. Somehow, we have to start with a rather modest algorithm and then *scale it up* to reach the scale of dramatic quantum advantage. But scaling is no easy feat. It’s not free and it’s not automatic.

In fact, even the best published papers on algorithms *never* discuss scaling beyond the largest size the authors could muster on current quantum computers, which is very modest. We’re at the stage now where people are ecstatic to get *any* quantum algorithm to work, even for very modest input and a very modest number of qubits. Algorithms for 15 to 23 qubits are currently considered fairly ambitious. Algorithms exploiting 32 to 40 qubits are virtually nonexistent.

**The essence of the approach**

The essence of the model proposed by this informal paper is to use simulation in incremental stages from 4 to 40 qubits, coupled with automated analysis of quantum circuits to detect coding patterns which are not scalable. Once you’ve tested and validated up through 32 to 40 qubits on a classical quantum simulator, you can have a much higher level of confidence that your quantum algorithm will scale to 44, 48, 50, 64, 72, 80, 96, and more qubits on future real quantum computers — once hardware becomes available with sufficient qubits, sufficient qubit fidelity, and sufficient qubit connectivity.

To be clear, the goal here is not scaling on current and near-term quantum computers, but to assure that algorithms designed and developed today will be automatically scalable to future quantum computers as they become available. There may indeed be limited scalability on existing and near-term quantum computers, but that’s icing on the cake, not the important goal here.

**Summary in a nutshell:**

- Generative coding of quantum circuits. Parameterized classical code which automatically generates the quantum circuit based on the input data and input parameters, generating larger circuits for larger input data. This is where most of the work for scaling occurs.
- Rely on simulation for 4 to 40 qubits since near-term hardware isn’t yet scalable in terms of size, qubit fidelity, coherence time, and qubit connectivity.
- Specify the nines of qubit fidelity to assure that simulation is realistic for quantum computers expected to become available in a few years.
- Scale and test using simulation at a number of stages from 4 to 40 qubits, validating correct results at each stage to confirm scaling. Stages at roughly 4, 8, 12, 16, 20, 24, 28, 32, 36, and 40 qubits — depending on the nature of the particular algorithm.
- Run both perfect simulation and simulation for a specified number of nines of qubit fidelity, and compare them to see whether scaling works properly under a realistic error rate.
- Use automated circuit analysis tools to detect coding patterns which are not scalable, particularly beyond the limits of simulation: for example, dependence on fine granularity of phase or probability amplitude, or an error rate which would become excessive for deeper circuits and for hardware with relatively low qubit fidelity. Also detect whether the circuit would exceed coherence time at the maximum expected scaling.
- Attempt to replicate simulation results on real quantum computers when possible, but generally this won’t be very feasible on noisy near-term hardware for much beyond trivial cases. This is more icing on the cake, a bonus extra, rather than an expected step.
- The expectation is that careful testing of multiple stages of scaling from 4 to 40 qubits coupled with automated detection of non-scalable coding patterns will be sufficient to establish scaling beyond the range of simulation (40 qubits) — to 44, 48, 50, 64, 72, 80, 96, and more qubits — once hardware becomes available with sufficient qubits, sufficient qubit fidelity, and sufficient connectivity.

**Topics discussed in this informal paper:**

- The siren song of running on a real quantum computer
- Challenges and constraints
- Quantum algorithm vs. quantum circuit
- Need for simulation
- Perfect simulation
- Specify the nines of qubit fidelity to assure that simulation is realistic
- Need for automated analysis tools to detect scaling issues
- Staged testing using simulation
- Staged automated analysis to detect scaling issues
- Overall thesis of this paper
- Overall process
- Basic staged model
- Generative coding of quantum circuits rather than hand-coding of circuits
- Larger circuits require higher qubit fidelity
- Depth of circuits and qubit fidelity under scaling
- Suggested stages — qubit counts
- Full wavefunction vs. partial simulation
- Suggested stages for simulation
- Suggested stages for execution on a real quantum computer or automated analysis
- Suggested stages for scaling of Shor’s factoring algorithm
- What is the smallest stage for any particular algorithm?
- Automated circuit analysis tools
- Detecting upper limit for scaling
- How far can variational methods scale?
- Scalability of quantum phase estimation (QPE) and quantum Fourier transform (QFT)
- Automatic scaling vs. manual scaling
- Scaling of algorithms with other than linear integer inputs
- Dramatic need for additional research
- Need for application scaling roadmaps
- Scaling for the future vs. for existing quantum computers
- Algorithm designers must consider scaling of shot count (circuit repetitions)
- Published papers need to explicitly address scalability of all algorithms
- Timeframe
- Summary and conclusions

# The siren song of running on a real quantum computer

Being able to run an algorithm on a real quantum computer is virtually irresistible. It gives people the feeling that they are doing something dramatic that was once considered near impossible.

It’s exciting. It’s exotic. It’s an adventure. It’s downright intoxicating. Who can resist?

Okay, fine, but… what good does it do?

Sure, a quantum algorithm running on a NISQ quantum computer *proves* that it can be done.

But, again, *what good does it do?*

Does it achieve even minimal quantum advantage let alone dramatic quantum advantage? No. Not even close.

Does it solve any real-world practical problem at production scale? No. Not even close.

Is it scalable to production scale? Generally, no, not even close.

Does it at least identify the path to dramatic quantum advantage and production scale? Generally, no, not at all. If you want to solve the same problem for two or four or ten times as many qubits, generally you’ll have to go back to the drawing board and design a wholly new algorithm designed for some new target qubit count. Generally, current NISQ algorithms are not designed for scalability.

In short, focusing on current NISQ quantum computers is more of an unproductive distraction than a boon. Sure, it feels good — it feels great! But that feeling won’t translate into a big win for quantum computing over classical computing.

My view is that it would be better to focus on scalable algorithm designs tested in stages from 4 to 40 qubits using current classical quantum simulator technology. That at least provides a firm foundation for scaling to 48, 56, 64, 72, 80, and even 96 qubits and beyond as real quantum computers with sufficient qubits, qubit fidelity, qubit connectivity, and coherence time and circuit depth become available two to five years from now.

# Challenges and constraints

Scaling of quantum algorithms is nontrivial. Why is that? Because…

- **Insufficient qubits.** A real engineering challenge to add qubits.
- **Insufficient qubit fidelity.** Error rate is too high.
- **Insufficient qubit coherence time.** Limits circuit depth.
- **Insufficient qubit connectivity.** Requires SWAP networks, which are dependent on higher qubit fidelity.
- **Insufficient phase granularity.** Insufficient precision for quantum phase estimation (QPE) and quantum Fourier transform (QFT).
- **Insufficient probability amplitude granularity.** Impacts precision of computations.
- **Can’t debug logic on a real quantum computer.** Cannot observe intermediate quantum states. Limits quantum circuit complexity to what people can debug in their heads.
- **Can’t simulate beyond 50 qubits.** Limits the size of algorithms which can be debugged.
- **Can’t simulate deep circuits beyond a relatively small number of qubits.** Too many quantum states to process given time and storage constraints.
- **Journal papers typically don’t discuss scalability of algorithms.** Scaling is left as an exercise for the unsuspecting reader. Many algorithms simply aren’t scalable.

# Quantum algorithm vs. quantum circuit

*Quantum algorithm* and *quantum circuit* are frequently used as synonyms, but technically there is a difference.

In computer science we would say that the algorithm is the *specification* of the logic while the circuit is the *implementation* of the specification for the logic.

A *quantum circuit* is the exact sequence of *quantum logic gates* which will be sent to the *quantum processing unit* (*QPU*) for execution.

Some quantum algorithm designers may choose to directly *compose* actual quantum circuits, ready to be executed directly. Such quantum algorithms are not scalable — they are designed for specific input.

More sophisticated quantum algorithm designers write classical code in a high-level classical programming language such as Python which invokes a library function for each gate to be executed, and then sends that collection of gates to be executed on the quantum computer. Such algorithms may or may not be scalable.

*Generative coding of quantum circuits*, to be described in a subsequent section of this paper, uses classical logic to *dynamically* decide what gates to generate, depending on the particular input and any parameters. In such cases, the algorithm is *abstract* and can generate a variety of quantum circuits, while each quantum circuit is specific to a particular input value and particular parameters.

With generative coding the classical code is really the algorithm, the specification of the logic, capable of generating any number of specific quantum circuits.

Generative coding is essentially required, mandatory, for designing a scalable algorithm.

So, while people may casually treat the terms quantum algorithm and quantum circuit as exact synonyms, in this paper they are quite distinct although inextricably linked.

# Need for simulation

Even if hardware engineers could fabricate a quantum computer with hundreds or thousands of perfect qubits, we would have no way of adequately testing or debugging algorithms of that size. The essential problem is that intermediate quantum states are not observable — only the final measured qubit states can be examined. That won’t be sufficient for testing and debugging complex algorithms.

Simulation is a great way to be able to observe intermediate quantum states to enable detailed testing and debugging of complex quantum algorithms.

Unfortunately, simulation of quantum circuits is limited to a maximum of around 50 qubits. In fact, current simulators, even from Google, IBM, and Intel, are limited to only 32 to 42 qubits. So simulation is a key tool for scaling, but is not sufficient.

For more on simulation, read my paper:

- *Five Major Use Cases for a Classical Quantum Simulator* - https://jackkrupansky.medium.com/five-major-use-cases-for-a-classical-quantum-simulator-368b9cb10639

# Perfect simulation

As detailed in the paper just referenced in the preceding section, *perfect simulation* is simulation of a quantum circuit under ideal conditions, with no errors — an error rate of 0.0. This is not a very realistic simulation, but is useful to test and validate the logic of an algorithm. It is also useful as a baseline which more realistic tests can be compared against.

A perfect simulation is useful to compare the results at each stage of simulation testing.

Scaling is not necessarily going to be perfect or even close to perfect, but a comparison of the results from a stage to a perfect simulation of that stage will highlight any errors that crept in on the scaling stage.
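
This comparison can be quantified with the total variation distance (TVD) between the measurement distributions of the perfect and noisy runs of the same stage. The distributions and the pass threshold below are illustrative placeholders, not output from any particular simulator:

```python
def total_variation_distance(p, q):
    """TVD between two probability distributions over measured bitstrings."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

# Hypothetical results for one stage: a 2-qubit Bell-state circuit.
perfect = {"00": 0.5, "11": 0.5}
noisy = {"00": 0.48, "11": 0.47, "01": 0.03, "10": 0.02}

tvd = total_variation_distance(perfect, noisy)
print(f"TVD between perfect and noisy runs: {tvd:.3f}")

# Flag the stage if the noisy result drifts too far from the perfect baseline.
assert tvd < 0.10, "stage failed: noise dominates the result"
```

A stage passes only if the noisy distribution stays within a chosen tolerance of the perfect baseline; the 0.10 threshold here is arbitrary.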

# Specify the nines of qubit fidelity to assure that simulation is realistic

Simulating an ideal, perfect quantum computer is desirable to test for and debug logic errors, but is not appropriate to test the scalability of a quantum algorithm.

The classical quantum simulator should be configured for the desired nines of qubit fidelity so that simulation is realistic. Qubit fidelity will ultimately determine the limit of scalability for a quantum algorithm.

One could configure the simulator to exactly match an existing quantum computer to test how scalable an algorithm is on that machine, but many algorithms are not very scalable on current quantum computers anyway.

What makes most sense is to pick a target quantum computer expected a few years from now which has a significantly high degree of qubit fidelity. That should enable significantly more scaling while still being realistic.
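
As a rough sketch of why the nines matter, a per-gate error rate follows directly from the nines of fidelity, and that rate bounds how large a circuit can run before errors dominate. The 1,000-gate circuit and the assumption of independent, uniform gate errors below are illustrative, not measurements of any hardware:

```python
def error_rate(nines: float) -> float:
    """Three nines = 99.9% fidelity = 0.001 per-gate error rate."""
    return 10.0 ** (-nines)

def circuit_success_probability(nines: float, gate_count: int) -> float:
    """Probability no gate errs, assuming independent, uniform gate errors."""
    return (1.0 - error_rate(nines)) ** gate_count

# How a hypothetical 1,000-gate circuit fares at different fidelity levels.
for nines in (2, 3, 4, 5):
    p = circuit_success_probability(nines, 1000)
    print(f"{nines} nines: P(error-free 1,000-gate circuit) = {p:.4f}")
```

At two nines a 1,000-gate circuit almost never runs cleanly, while at four or five nines it usually does — which is why the simulated target should be a machine a few years out rather than current hardware.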

Be alert to the fact that simulation is generally limited to about 50 qubits, or less (32 to 40 qubits is more typical.) That shouldn’t be too much of an issue in the near term, since it’s rather difficult to develop algorithms for more than about 24 qubits on near-term quantum computers.

For more detail on the nines of qubit fidelity, see my separate paper on that topic.

# Need for automated analysis tools to detect scaling issues

Automated tools are also needed to analyze quantum circuits to detect coding patterns which are not scalable. The patterns can be subtle and it’s too easy to fall into them without realizing it.

More on this later. For now, the important point is that automated circuit analysis is a key aspect of designing and implementing scalable algorithms.

# Staged testing using simulation

The remaining piece of the puzzle is to incrementally test and validate the candidate algorithm over a nontrivial sequence of stages of qubits, from 4 to 40 qubits, using simulation to confirm that scaling is successful for at least those stages.

# Staged automated analysis to detect scaling issues

The automatic analysis tools can then be used to check for potential scaling problems for larger circuit sizes, such as for 44 to 96 or more qubits — even if such circuits cannot yet be successfully executed on current real quantum computers.

# Overall thesis of this paper

A quick summary of the overall thesis of this paper:

- Focus primarily on simulation rather than running on current NISQ hardware of dubious value, and certainly focus on simulation for qubit fidelity well in excess of near-term hardware.
- Scaling for current and near-term NISQ hardware is a definite dead end, not worth pursuing.
- Simulation is the clear way to go for scalable algorithms in the 30 to 40-qubit range. NISQ hardware has too low qubit fidelity and too low qubit connectivity.
- Specify the nines of qubit fidelity to assure that simulation is realistic.
- Generative coding of quantum circuits. Rather than hand-coding quantum circuits, classical code is used to generate the quantum circuit based on the specific input data and parameters. Much of the work for scaling of a quantum algorithm occurs at this step of each stage of scaling.
- Prove scaling of a quantum algorithm in *stages* from 4 to 40 qubits (maybe more, maybe even 50 to 55 qubits eventually) using classical quantum simulators as well as actual quantum hardware, if available and with sufficient qubits, qubit fidelity, and connectivity.
- Start with the expectation that algorithms working at one stage will scale reliably to subsequent stages, but that proposition must be carefully tested and validated.
- If scaling works for each stage from 4 to 40 qubits, the expectation is that scaling will work for stages from 40 to 64 qubits (or maybe 72 or even 80 or 96 qubits), and beyond as hardware with higher capacity and higher fidelity becomes available. Of course this depends on having enough qubits with sufficient qubit fidelity, coherence time, and connectivity. Testing will prove that out.
- Run tests for both perfect simulation and a specified number of nines of qubit fidelity, and compare them to see whether scaling works properly under a realistic error rate.
- Automated analysis tools for quantum circuits are needed to detect coding patterns and algorithm design flaws which may only show up at qubit counts and circuit sizes well beyond those which can be successfully tested on simulators or on the real quantum computers we can expect to use in the near term. Also detect whether the circuit would exceed coherence time at the maximum expected scaling.
- Authors of papers should discuss in some significant detail — and test — the scalability of their algorithms, including the scaling of shot count (circuit repetitions).

# Overall process

A quick summary of the overall process:

- Test and validate scalability of a quantum algorithm at a very modest count of qubits before advancing to the next stage of scaling with modestly more qubits.
- Test and validate in the controlled environment of classical quantum simulators whenever possible (up to 40 to 50 qubits.)
- Run automated analysis tools for 40 or more qubits to detect coding patterns and design flaws that might go undetected at smaller input sizes.
- Advance to incremental testing on real quantum computers only after firmly validating scalability in the controlled environment of classical quantum simulators. And this is highly conditional on the availability of sufficiently reliable quantum hardware, which commonly won’t be the case.

At least this is the model, in theory. Whether real quantum hardware does indeed scale in this way remains to be seen.

Actually, I’m reasonably confident that it *doesn’t* scale this way at present, but if we have this proposed scaling and testing framework in place, we can test and prove advances in scalability as they become possible as the hardware progresses.

The goal is to provide algorithm designers with a model for development and testing, and to provide hardware vendors with a model for evaluating their progress on the scaling fronts — both hardware and algorithms using that hardware.

The catch is that a thorough validation should examine intermediate quantum states as well as the bit values of the final results, which is not possible on real quantum computers. Simulation is the answer, but only works up to 50 or so qubits, and maybe only to 40 or even only 32 qubits. Still, simulation is a very valuable tool and can be used to prove that the algorithm has a relatively firm foundation for scaling before advancing to larger input values that require running on a real quantum computer, at least eventually.

So, typically start small with 4 or 5 qubits (or 8 or 10 qubits depending on the nature of the algorithm), test and validate at that size (stage) on a classical quantum simulator and then bump up the size by a handful of qubits, such as to 8 or 10 qubits (or 12 to 16 qubits for algorithms requiring more qubits for each stage), once validation is complete, and test and validate again. Rinse and repeat, until you reach the limit of the simulator at 32 to 40 (or more) qubits.
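
The rinse-and-repeat loop can be sketched as follows. The `build_circuit`, `simulate`, and `validate` functions here are hypothetical stand-ins for a real generative-coding routine and a classical quantum simulator; they model a trivial GHZ-style circuit whose expected distribution is known analytically at every stage:

```python
STAGES = [4, 8, 12, 16, 20, 24, 28, 32, 36, 40]

def build_circuit(n_qubits):
    # Hypothetical generative coding: one H plus a chain of CNOTs, scaled to n.
    return {"qubits": n_qubits, "gates": ["h"] + ["cx"] * (n_qubits - 1)}

def simulate(circuit):
    # Hypothetical simulator: a GHZ-style circuit yields only the all-zeros
    # and all-ones bitstrings, each with probability 0.5.
    n = circuit["qubits"]
    return {"0" * n: 0.5, "1" * n: 0.5}

def validate(result, n_qubits):
    # The stage passes if the distribution matches the analytic expectation.
    return result == {"0" * n_qubits: 0.5, "1" * n_qubits: 0.5}

for n in STAGES:
    if not validate(simulate(build_circuit(n)), n):
        raise RuntimeError(f"scaling broke at the {n}-qubit stage")
    print(f"{n}-qubit stage validated")
```

The essential discipline is that no stage is attempted until the previous stage validates against the expected result.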

There’s no telling how many difficulties, issues, and outright logic errors you may run into at each stage.

You can run the same tests on a real quantum computer in parallel with the staging of simulated tests — run on the simulator first, validate, and then run on the real quantum computer, and compare the results. Or, you can run all of the testing and validation stages on the simulator before then switching over to execution on a real quantum computer.

It probably makes sense to initially prove the algorithm at 4, 5, or 8 qubits using both classical quantum simulators and a real quantum computer before moving on to the next stage, provided that the qubit fidelity of the real quantum computer is in fact sufficient to correctly execute the algorithm. Otherwise the algorithm may only be simulated, and execution on a real quantum computer may have to wait some number of years until qubit fidelity advances sufficiently.

# Basic staged model

The basic, staged model proposed by this paper has four parts:

- **Simulate.** Fully test algorithms using classical quantum simulators for multiple stages between 4 and 40 (or 50, if possible) qubits.
- **Analyze.** Use automated analysis tools to detect coding patterns in quantum circuits which are not scalable. Analyze all the way up to the maximum number of qubits and input size which the algorithm is expected to handle. Those larger sizes don’t have to be executed or simulated, just analyzed. This will confirm or determine the maximum scalability of the algorithm. Also detect whether the circuit would exceed coherence time at the maximum expected scaling.
- **Execute.** Run the algorithms on a real quantum computer, both at the sizes which were already simulated and at multiple sizes (stages) from slightly above the simulation limit to moderately beyond it, restricted to sizes for which the results can be readily calculated or estimated. Results at sizes which could be simulated should be compared against the simulations, and results beyond the simulation limit should reasonably agree with the calculated or estimated results. Granted, execution is likely very problematic on current and near-term quantum computers, but some tests may be practical.
- **Estimate.** Calculate the depth of the circuit for the maximum expected input size to determine whether acceptable results can be achieved given the average qubit fidelity and coherence time.

If the designer of an algorithm can successfully get through all of those tests, then they have a fairly decent shot at having a scalable algorithm for much larger input sizes. But even then there will be some strong caveats, such as dependence on higher qubit fidelity as the number of qubits and input size grow.

Most algorithms are simply not scalable at all — because they were not designed for scaling.

So the first step is to validate that an algorithm does in fact scale for multiple input sizes, up to the limits of simulation, which can be tested, as well as to the maximum expected scale for the algorithm, potentially well beyond the limits of either simulation or current or near-term real quantum computers.

If an algorithm can be scaled up to 40 to 50 qubits, then it at least has some reasonable prospect for scaling to hundreds or thousands of qubits. Not necessarily a guarantee of scaling, but at least a reasonable possibility.

Automated analysis of a scalable algorithm at 40 to 50 qubits should be able to discern coding patterns which are less than fully likely to scale from 40 to 400 or 4,000 qubits. Still no absolute guarantee of scaling, but a much higher level of confidence.
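
As one sketch of what such a tool might check: scan an abstract gate list for rotation angles finer than the hardware can plausibly resolve. The gate representation and the resolution threshold below are illustrative assumptions, not any particular toolkit’s API; the QFT generator shows how the finest rotation shrinks as qubit count grows:

```python
import math

def finest_angle(gates):
    """Smallest nonzero rotation in an abstract (name, qubits, angle) gate list."""
    angles = [abs(a) for (_name, _qubits, a) in gates if a]
    return min(angles) if angles else None

def check_phase_granularity(gates, min_resolvable=math.pi / 2 ** 20):
    """Flag circuits which depend on phase granularity finer than the threshold."""
    finest = finest_angle(gates)
    if finest is not None and finest < min_resolvable:
        return f"NOT SCALABLE: finest rotation {finest:.3e} rad below threshold"
    return "OK"

def qft_rotations(n):
    """Controlled rotations of a textbook QFT: angles as fine as pi / 2**(n-1)."""
    return [("crz", (c, t), math.pi / 2 ** k)
            for t in range(n)
            for k, c in enumerate(range(t + 1, n), start=1)]

print(check_phase_granularity(qft_rotations(10)))  # OK
print(check_phase_granularity(qft_rotations(40)))  # NOT SCALABLE: ...
```

The same pattern — inspect the generated circuit rather than execute it — extends to checks on depth, gate count, and coherence time.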

Many larger circuits will fail due to insufficient qubit fidelity. The *estimator* will determine if a given depth of quantum circuit can be achieved given the average qubit fidelity and coherence time for a specified real quantum computer.

# Generative coding of quantum circuits rather than hand-coding of circuits

Fully hand-coding the quantum circuits for algorithms is absolutely out of the question. What is needed is a more *abstract representation of an algorithm*. *Generative coding of quantum circuits* provides this level of abstraction. Any algorithm designed to be scalable must be generated dynamically using classical code which is parameterized with the input size or size of the simulation, so that as the input size grows or the simulation size grows, a larger number of quantum logic gates will be generated to accommodate the expanded input size.

Generally, any input value will have to be *encoded* in the *gate structure* of the generated quantum circuit — using quantum logic gates to initialize the quantum state of qubits to correspond to any input value and any parameters. That’s the only way to get input data or parameters into a quantum algorithm (circuit) since a quantum computer has no I/O capabilities at the circuit or QPU level.

Ideally, the rules for generating the quantum circuit for a given input will be identical regardless of input size. The rules should be identical, but parameterized by the input size so that the exact gates generated will be determined by the input size, in some predictable, mathematical manner.

The classical code would typically be developed using a programming language such as Python using looping and conditional statements to invoke a library to generate the individual gates.

Generative coding of quantum circuits uses classical logic to *dynamically* decide what gates to generate, depending on the particular input and any parameters. In such cases, the algorithm is *abstract* and can generate a variety of quantum circuits, while each quantum circuit is specific to a particular input value and particular input parameters.

With generative coding the classical code is really the algorithm, the specification of the logic, capable of generating any number of specific quantum circuits depending on the particular input value and input parameter values.

Generative coding of quantum circuits is essentially required, mandatory, for designing a scalable algorithm.
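
A minimal sketch of generative coding, assuming an abstract gate-tuple representation rather than any specific framework’s API: classical Python emits a gate list whose qubit count and gate count scale with the input integer.

```python
def generate_circuit(x: int):
    """Generate an abstract gate list encoding integer x, then entangle."""
    n = max(1, x.bit_length())      # qubit count scales with the input size
    gates = []
    for q in range(n):              # encode x in the gate structure:
        if (x >> q) & 1:            # an X gate for each 1 bit of x
            gates.append(("x", q))
    for q in range(n - 1):          # an entangling layer, also scaled to n
        gates.append(("cx", q, q + 1))
    return n, gates

# Larger inputs automatically generate larger circuits.
for x in (3, 15, 255):
    n, gates = generate_circuit(x)
    print(f"input {x}: {n} qubits, {len(gates)} gates")
```

Note that the generation rules are identical at every input size; only the parameterization changes, which is exactly the property that makes staged scaling testable.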

# Larger circuits require higher qubit fidelity

As generated circuits use more qubits and more gates are executed, higher qubit fidelity — or virtually perfect quantum error correction (QEC) — are needed, otherwise errors may cause results to become unacceptable.

The circuit limit estimator will be required to confirm that a wide and deep circuit will be feasible given a specified qubit fidelity and coherence time.

# Depth of circuits and qubit fidelity under scaling

As input size scales, generally the circuit size will scale, both in terms of qubit count and circuit depth. Qubit fidelity becomes more urgent as circuit size increases since more gates mean more errors.

Circuit depth is particularly urgent. If it becomes too deep, it can exceed average qubit coherence time.

And even if it doesn’t exceed coherence time per se, each gate executed can add some degree of error, so that a deep circuit can accumulate substantial errors, potentially impacting results of the computation.
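
A back-of-the-envelope coherence check captures the first of these constraints: circuit depth times average gate time must fit well within the coherence window. The timing numbers below are illustrative assumptions, not measurements of any particular machine:

```python
# Illustrative hardware assumptions, not measurements of any real machine.
GATE_TIME_NS = 100            # assumed average gate execution time
COHERENCE_TIME_NS = 100_000   # assumed coherence time (~100 microseconds)

def fits_coherence(depth: int, budget: float = 0.5) -> bool:
    """True if the circuit finishes within `budget` of the coherence window."""
    return depth * GATE_TIME_NS <= budget * COHERENCE_TIME_NS

# Sample depths drawn from the Shor staging estimates later in this paper.
for depth in (190, 496, 2016, 130_816):
    verdict = "fits" if fits_coherence(depth) else "exceeds the coherence budget"
    print(f"depth {depth}: {verdict}")
```

Under these assumptions only circuits up to a depth of a few hundred gates fit, which is why the estimator must be rerun for each target machine and each stage of scaling.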

# Suggested stages — qubit counts

Each stage is represented by a representative count of qubits for that stage.

The qubit count is a surrogate for the input size. Smaller inputs require fewer qubits. Larger inputs require more qubits.

Each stage would handle a range of input sizes.

The point of the stages is to assure that algorithms are sufficiently tested to validate their scalability. Test and validate scalability at each stage before advancing to the next stage.

There are two sets of stages:

- **Stages which can be simulated.** Typically under 50 qubits, or maybe only up to 40 qubits, or even 38 or 32.
- **Stages beyond what can be simulated.** Typically over 50 qubits, or maybe even over 40 qubits.

The exact number of qubits is not so significant. The point is simply to have clearly delineated stages.

# Full wavefunction vs. partial simulation

For the purposes of this paper, simulation refers to *full wavefunction* simulation. Granted there are alternative modes of simulation of quantum circuits which can handle much more than 50 or so qubits, but they are *special-purpose* simulations and not relevant to the full wavefunction simulation required to fully validate quantum circuits as recommended by this paper.

# Suggested stages for simulation

Some typical stages which should be simulated to test for scaling:

- 4 qubits.
- 5 qubits.
- 8 qubits.
- 10 qubits.
- 12 qubits.
- 16 qubits.
- 20 qubits.
- 24 qubits.
- 28 qubits.
- 32 qubits.
- 36 qubits.
- 40 qubits.
- 42 qubits. Possibly, if the simulator supports it.
- 44 qubits. Eventually, but not currently.
- 48 qubits. Ditto.
- 50 qubits. Ditto. This is currently the expected upper limit.
- 54 qubits. Maybe a stretch goal a few years from now.

The specific increment of qubits between stages will depend on the complexity of the particular algorithm being simulated. The increment should generally correspond roughly to the increments in the size of the input data for the algorithm, but any increment can be used that suggests an increase in the use of qubits that could potentially introduce scaling issues.

# Suggested stages for execution on a real quantum computer or automated analysis

Some typical stages which cannot be simulated and must instead be run on a real quantum computer or checked with automated analysis:

- 44 qubits. Hopefully this can eventually be simulated as well.
- 48 qubits. Ditto.
- 50 qubits. Ditto.
- 52 qubits. Ditto.
- 55 qubits. Ditto. May be the limit for simulation for the next few years.
- 56 qubits. Unlikely this can be simulated in the next few years.
- 60 qubits.
- 64 qubits.
- 72 qubits.
- 80 qubits.
- 96 qubits.
- 128 qubits.
- 160 qubits.
- 192 qubits.
- 256 qubits.
- 384 qubits.
- 512 qubits.

Granted, many or most of these stages cannot be successfully run on current or even near-term real quantum computers, but they can be analyzed to see if scaling issues are likely to crop up.

To be clear, these are not the raw total counts of qubits on real quantum computers, but the effective number of qubits that can be successfully used in algorithms on those real quantum computers.

# Suggested stages for scaling of Shor’s factoring algorithm

Fifteen (15) is the smallest semiprime (product of two distinct odd primes) which can be factored by Shor’s factoring algorithm — into 3 x 5.

The number of qubits required by the original, fully general specification of Shor’s factoring algorithm is four times the number of bits in the input integer plus three ancilla qubits. So, for example, factoring a 10-bit semiprime integer would require 10 * 4 + 3 = 43 qubits.

The real trick or catch with scaling Shor’s factoring algorithm may be the quantum circuit depth, which is roughly the number of bits in the input integer, n, times that number minus one, divided by 2. So, for n = 20 bits, (20 * 19) / 2 = 190 gates. Qubit fidelity is likely to be the limiting factor with such deep circuits.

The total count of gates in the circuit is roughly the circuit depth times the number of bits in the input integer times two. That’s roughly the cube of the number of bits in the input integer.
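
These formulas can be expressed directly in code. This sketch reproduces the qubit counts and depths used in the staging list that follows; total gates uses the rough depth-times-n-times-two approximation from the text:

```python
def shor_resources(n_bits: int):
    """Qubits, circuit depth, and rough total gates for an n-bit input."""
    qubits = 4 * n_bits + 3             # four qubits per input bit + 3 ancillas
    depth = n_bits * (n_bits - 1) // 2  # roughly n * (n - 1) / 2
    total_gates = depth * n_bits * 2    # roughly the cube of n
    return qubits, depth, total_gates

for n in (4, 10, 20, 512):
    q, d, g = shor_resources(n)
    print(f"{n}-bit input: {q} qubits, depth {d}, roughly {g:,} total gates")
```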

Here are the suggested test input integer sizes (stages) and the count of qubits and circuit depth needed to factor each:

- 4 x 4 + 3 = 19 qubits, 6 gates — 15 (= 3 x 5) is the smallest integer semiprime (ignoring 2 x 3 = 6.)
- 5 x 4 + 3 = 23 qubits, 10 gates.
- 6 x 4 + 3 = 27 qubits, 15 gates.
- 7 x 4 + 3 = 31 qubits, 21 gates.
- 8 x 4 + 3 = 35 qubits, 28 gates.
- 9 x 4 + 3 = 39 qubits, 36 gates — may be the largest size to simulate.
- 10 x 4 + 3 = 43 qubits, 45 gates — possible stretch goal for largest size to simulate.
- 11 x 4 + 3 = 47 qubits, 55 gates.
- 12 x 4 + 3 = 51 qubits, 66 gates.
- 13 x 4 + 3 = 55 qubits, 78 gates — maximum size to simulate a few years from now.
- 14 x 4 + 3 = 59 qubits, 91 gates — beyond the possibility of full simulation.
- 15 x 4 + 3 = 63 qubits, 105 gates.
- 16 x 4 + 3 = 67 qubits, 120 gates.
- 20 x 4 + 3 = 83 qubits, 190 gates.
- 24 x 4 + 3 = 99 qubits, 276 gates.
- 28 x 4 + 3 = 115 qubits, 378 gates.
- 32 x 4 + 3 = 131 qubits, 496 gates.
- 36 x 4 + 3 = 147 qubits, 630 gates.
- 40 x 4 + 3 = 163 qubits, 780 gates.
- 44 x 4 + 3 = 179 qubits, 946 gates.
- 48 x 4 + 3 = 195 qubits, 1,128 gates.
- 56 x 4 + 3 = 227 qubits, 1,540 gates.
- 64 x 4 + 3 = 259 qubits, 2,016 gates.
- 72 x 4 + 3 = 291 qubits, 2,556 gates.
- 80 x 4 + 3 = 323 qubits, 3,160 gates.
- 96 x 4 + 3 = 387 qubits, 4,560 gates.
- 128 x 4 + 3 = 515 qubits, 8,128 gates.
- 192 x 4 + 3 = 771 qubits, 18,336 gates.
- 256 x 4 + 3 = 1,027 qubits, 32,640 gates.
- 384 x 4 + 3 = 1,539 qubits, 73,536 gates.
- 512 x 4 + 3 = 2,051 qubits, 130,816 gates.

And that’s not even getting close to 1024, 2048, and 4096-bit public encryption keys, which is the main concern or interest with quantum computing.

Note that a 768-bit key was for years the largest RSA encryption key factored classically (an 829-bit RSA modulus, RSA-250, has since been factored). See:

*Factorization of a 768-Bit RSA Modulus* - https://www.researchgate.net/publication/221354652_Factorization_of_a_768-Bit_RSA_Modulus

It’s kind of depressing that even with a vast amount of research and development ahead of us, quantum computing will fall woefully short of accomplishing even what has already been accomplished using classical computers.

But for the purposes of this paper, all I care about is how far we can get with a classical quantum simulator, plus at least a few stages more on a real quantum computer.

As things stand right now, the only possible test sizes (stages) for simulation are:

- 4 bits = 19 qubits, 6 gates.
- 5 bits = 23 qubits, 10 gates.
- 6 bits = 27 qubits, 15 gates.
- 7 bits = 31 qubits, 21 gates.
- 8 bits = 35 qubits, 28 gates.
- 9 bits = 39 qubits, 36 gates — may be the largest size to simulate.
- 10 bits = 43 qubits, 45 gates — possible stretch goal for largest size to simulate.

As things stand right now, the only possible test sizes (stages) beyond the range of simulation for execution on a real quantum computer over the next two years (including the 127-qubit and 433-qubit quantum computers expected from IBM in that period) are:

- 10 bits = 43 qubits, 45 gates — in case not readily simulatable.
- 11 bits = 47 qubits, 55 gates.
- 12 bits = 51 qubits, 66 gates.
- 13 bits = 55 qubits, 78 gates.
- 14 bits = 59 qubits, 91 gates.
- 15 bits = 63 qubits, 105 gates.
- 16 bits = 67 qubits, 120 gates.
- 20 bits = 83 qubits, 190 gates.
- 24 bits = 99 qubits, 276 gates.
- 28 bits = 115 qubits, 378 gates — largest for a 127-qubit machine.
- 32 bits = 131 qubits, 496 gates.
- 36 bits = 147 qubits, 630 gates.
- 40 bits = 163 qubits, 780 gates.
- 44 bits = 179 qubits, 946 gates.
- 48 bits = 195 qubits, 1,128 gates.
- 56 bits = 227 qubits, 1,540 gates.
- 64 bits = 259 qubits, 2,016 gates.
- 72 bits = 291 qubits, 2,556 gates.
- 80 bits = 323 qubits, 3,160 gates.
- 96 bits = 387 qubits, 4,560 gates — largest for a 433-qubit machine.

To be clear, all of the stages that can be run on a simulator should also be run on a real machine to see how the results compare. That is not to say that real machines will necessarily have the capacity or qubit fidelity to provide comparable or acceptable results.

In fact, it’s not likely that even most of the cases that can be run on a simulator could be run on any current or near-term real quantum computer due either to low qubit fidelity or limited coherence time (limited circuit depth.)

# What is the smallest stage for any particular algorithm?

Generally, 4 to 8 qubits would be the smallest or initial stage for testing the scaling of a quantum algorithm.

But particular algorithms might have some other smallest initial stage, which could be smaller or larger than 4 to 8 qubits.

Some examples:

- Testing GHZ states would start with 3 qubits.
- The smallest semiprime that can be factored using Shor’s factoring algorithm is 15 = 3 x 5, which would require 4 x 4 + 3 = 19 qubits.
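For the GHZ case, the 3-qubit starting stage is small enough to verify by hand with plain Python. A minimal statevector sketch, assuming only the standard library (the helper functions are mine, not from any quantum SDK):

```python
import math

def apply_h(state, q):
    """Apply a Hadamard to qubit q of a little-endian statevector."""
    new = state[:]
    for i in range(len(state)):
        if not (i >> q) & 1:      # pair index i (bit q = 0) with j (bit q = 1)
            j = i | (1 << q)
            new[i] = (state[i] + state[j]) / math.sqrt(2)
            new[j] = (state[i] - state[j]) / math.sqrt(2)
    return new

def apply_cnot(state, control, target):
    """Apply CNOT(control -> target): flip the target bit where control is 1."""
    new = state[:]
    for i in range(len(state)):
        if (i >> control) & 1:
            new[i] = state[i ^ (1 << target)]
    return new

# Build the 3-qubit GHZ state: H on qubit 0, then CNOT 0->1 and CNOT 1->2.
state = [0.0] * 8
state[0] = 1.0                    # start in |000>
state = apply_h(state, 0)
state = apply_cnot(state, 0, 1)
state = apply_cnot(state, 1, 2)
# Expected: amplitude 1/sqrt(2) on |000> and |111>, zero elsewhere.
print(state)
```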

# Automated circuit analysis tools

Automated analysis tools are needed to detect coding patterns and algorithm design flaws which may only show up in algorithm results at qubit counts and circuit sizes well beyond those which can be successfully tested on simulators or the real quantum computers which we can expect to use in the near term.

Details? TBD. Beyond the scope of this particular paper which is more focused on the overall staged model.

In truth, the analysis needs to be conducted at the algorithm level, especially for generative coding, so that any issues can be reported in terms that the algorithm designer can understand. So, the analyzer, or at least part of the analyzer, needs to be part of the high-level language programming library rather than deep down in the simulator.

Two of the most critical factors to detect are reliance on fine granularity of phase angles and reliance on fine granularity of probability amplitudes. These can scale to some degree, but that needs to be verified for the particular hardware and qubit fidelity.

For more detail on phase granularity issues, see my paper:

*Beware of Quantum Algorithms Dependent on Fine Granularity of Phase* - https://jackkrupansky.medium.com/beware-of-quantum-algorithms-dependent-on-fine-granularity-of-phase-525bde2642d8

Also detect if coherence time is exceeded by the circuit for the maximum expected scaling.
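A minimal sketch of what such an automated check might look like, assuming a simple list-of-tuples circuit representation and hypothetical hardware thresholds (both the format and the numbers are mine, for illustration only):

```python
import math

def analyze(circuit, min_phase_rad, gate_time_ns, coherence_time_ns):
    """Return human-readable warnings for the algorithm designer."""
    warnings = []
    for op, angle in circuit:
        # Flag rotation angles finer than the assumed hardware granularity.
        if op in ("rx", "ry", "rz", "cp") and 0 < abs(angle) < min_phase_rad:
            warnings.append(
                f"{op} angle {angle:.2e} rad is finer than the assumed "
                f"phase granularity limit of {min_phase_rad:.2e} rad")
    # Flag circuits whose estimated duration exceeds coherence time.
    duration_ns = len(circuit) * gate_time_ns
    if duration_ns > coherence_time_ns:
        warnings.append(
            f"circuit duration {duration_ns} ns exceeds the coherence "
            f"time of {coherence_time_ns} ns")
    return warnings

# Example: one controlled-phase angle of 2*pi/2**20 is far finer than a
# hypothetical resolvable granularity of 2*pi/2**10.
circuit = [("h", 0.0), ("cp", 2 * math.pi / 2 ** 20), ("rz", math.pi / 4)]
issues = analyze(circuit, min_phase_rad=2 * math.pi / 2 ** 10,
                 gate_time_ns=100, coherence_time_ns=100_000)
for issue in issues:
    print(issue)
```

A production analyzer would of course work at the algorithm and library level, as discussed above, and report issues in the designer’s own terms.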

There are two distinct modes for analysis, to detect coding patterns which:

- **Generally scale poorly.** Algorithm designers should refrain from using them.
- **Only scale up to some upper bound.** The automated tools should simply report what that upper bound might be. This will generally be dependent on qubit fidelity.

The algorithm designer should generally be cognizant of an intended target for qubit fidelity, especially if they intend to run algorithms on real quantum computers in the next few years.

# Detecting upper limit for scaling

Scaling of quantum algorithms will generally not be an all or nothing proposition — some algorithms will scale further than others.

Ideally, if very carefully constructed, an algorithm could scale up to a very large number of qubits, even in the hundreds, thousands, or tens of thousands of qubits — not with current or near-term NISQ qubits, but with a significantly higher qubit fidelity.

But even then, every algorithm will tend to have some upper bound for how far it can scale before errors accumulate to the point of causing results to be too unreliable. This will generally be dependent on qubit fidelity.

One of the tasks for any algorithm designer is to characterize and test for such an upper bound to scaling.

Simulation can be used to do a fair amount of testing for this upper bound, but only up to the upper bound for simulation itself, around 40 to 50 qubits.

The automated analysis tools discussed in the preceding section should suffice to catch coding patterns in quantum circuits which won’t scale well, especially beyond 40 qubits.

And the second analysis mode mentioned in the preceding section can be used to detect the upper limit for scaling for the algorithm.

Note that the upper limit will generally depend on the qubit fidelity of the particular quantum computer, so that there could be a higher upper limit on more-capable hardware.
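As a rough illustration of how that upper limit depends on qubit fidelity, the following sketch combines this paper’s ~n^3 gate-count estimate for Shor’s algorithm with the common simplification that overall circuit success probability decays as per-gate fidelity raised to the total gate count (the 50% success threshold is an arbitrary example):

```python
def total_gates(n_bits):
    """This paper's rough ~n^3 total gate count for an n-bit input."""
    return n_bits * (n_bits - 1) // 2 * n_bits * 2

def max_input_bits(gate_fidelity, min_success=0.5):
    """Largest n-bit input whose circuit still succeeds min_success of the time,
    under the simplistic model success ~ fidelity ** gate_count."""
    n = 4                               # smallest stage considered here
    while gate_fidelity ** total_gates(n + 1) >= min_success:
        n += 1
    return n

# Each additional nine of qubit fidelity pushes the upper bound further out.
for fidelity in (0.999, 0.9999, 0.99999):
    print(fidelity, max_input_bits(fidelity))
```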

The algorithm designer should generally be cognizant of an intended target for qubit fidelity, especially if they intend to run algorithms on real quantum computers in the next few years.

# How far can variational methods scale?

Variational methods are an attractive approach to achieving some minimal degree of functional utility on small-scale NISQ devices, but it is questionable whether they will scale up to dramatic quantum advantage, which is the real goal of quantum computing.

Variational methods are an incremental and iterative approach. By nature, a variational method does not evaluate a very large number of alternatives on each quantum circuit execution. Instead, any significant computation involves a significant number of invocations of the quantum circuit, with a classical optimization step before each invocation.

Generally, scaling using a variational method means more iterations, not scaling up the quantum algorithm itself.

In some cases the quantum circuit itself may be scalable to some degree, but just by the limited and incremental nature of the method, that quantum scaling will be somewhat limited.

In truth, most current proposals for the use of quantum variational methods are simply compromises due to the simple fact that truly scalable algorithms would require the use of quantum phase estimation (QPE) and quantum Fourier transform (QFT), which are not practical on current NISQ devices due to their low qubit fidelity.

All of that said, it will be interesting to see if any enterprising algorithm designers can break out to quantum circuits for variational methods which use significantly more than 16 to 24 qubits, like 32 to 40 qubits.

The longer-term goal is to rely on quantum phase estimation and quantum Fourier transform, so the sooner algorithm designers switch from variational methods to QPE and QFT the better.

For more on *dramatic quantum advantage*, read my paper:

*What Is Dramatic Quantum Advantage?* - https://jackkrupansky.medium.com/what-is-dramatic-quantum-advantage-e21b5ffce48c

# Scalability of quantum phase estimation (QPE) and quantum Fourier transform (QFT)

Given that variational methods are unlikely to achieve dramatic quantum advantage, quantum phase estimation (QPE) and quantum Fourier transform (QFT) become essential for larger scale and production applications.

Scaling of quantum phase estimation (QPE) and quantum Fourier transform (QFT) is an interesting challenge.

They may not function in any useful manner at relatively small qubit counts.

And they may not be scalable to very large qubit counts.

They may work best in a more modest range.

Whether either QPE or QFT is terribly useful for less than 16 to 20 qubits is unclear. That’s too few qubits to achieve a dramatic quantum advantage.

It seems unlikely that either QPE or QFT would have any real utility for algorithms using less than 8 to 12 qubits.

On the other end of the spectrum, physical limitations to the granularity of phase and probability amplitude may render QPE and QFT useless for hundreds or thousands of qubits. Whether that is actually the case will require deeper academic research.
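One concrete driver of such limits: in the textbook QFT on n qubits, the finest controlled-phase rotation is 2*pi / 2^n, so the required phase granularity shrinks exponentially with qubit count. A trivial sketch comparing stage sizes:

```python
import math

def qft_min_angle(n_qubits):
    """Finest rotation angle in a textbook n-qubit QFT: 2*pi / 2^n."""
    return 2 * math.pi / 2 ** n_qubits

# The finest angle already drops below a microradian around 23 qubits.
for n in (8, 16, 23, 32, 50):
    print(n, qft_min_angle(n))
```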

In any case, staged testing of algorithms utilizing QPE or QFT is essential.

A fair amount of testing should be feasible using simulation, for the 24 to 50-qubit range.

Additional testing should be performed using execution on real quantum computers for at least a handful of stages beyond the 40 to 50-qubit range.

And all of the simulated tests should also be performed on real quantum computers for comparison, to the extent that the real quantum computers have sufficient qubit fidelity, otherwise the results might be incomparable or even absolute gibberish. Hopefully qubit fidelity will be sufficient a few years from now.

For more detail on phase granularity issues, see my paper:

*Beware of Quantum Algorithms Dependent on Fine Granularity of Phase* - https://jackkrupansky.medium.com/beware-of-quantum-algorithms-dependent-on-fine-granularity-of-phase-525bde2642d8

# Automatic scaling vs. manual scaling

The general expectation is that most scaling of algorithms can be performed automatically using simple mathematical formulas to guide the generative coding process. Unfortunately, that is not always the case.

Automatic scaling is great when it’s feasible. A few lines of parameterized classical code can enable a very wide range of scaling.

But, unfortunately, algorithms for some domains may not be amenable to automated formulaic scaling, requiring tedious manual scaling instead, where each stage in scaling requires a significant investment in effort to hand-code the scaled quantum circuit for that stage.
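As a small sketch of the automatic case, here is parameterized classical code that generates the gate list for a textbook n-qubit QFT at any stage size, so one formula drives the scaling (the tuple-based gate format is my own assumption for illustration):

```python
import math

def generate_qft(n_qubits):
    """Emit ('h', target) and ('cp', angle, control, target) gate records
    for a textbook n-qubit QFT (swaps omitted for brevity)."""
    gates = []
    for target in range(n_qubits):
        gates.append(("h", target))
        # Controlled-phase rotations by pi/2, pi/4, ... from later qubits.
        for k, control in enumerate(range(target + 1, n_qubits), start=2):
            gates.append(("cp", 2 * math.pi / 2 ** k, control, target))
    return gates

# Gate count grows as n(n + 1)/2 -- one formula covers every stage size.
for n in (4, 8, 16, 40):
    print(n, len(generate_qft(n)))
```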

Additional research might help to unlock additional automated methods for scaling in more situations.

# Scaling of algorithms with other than linear integer inputs

Scaling is hard enough even when the input data is a simple, linear integer, but what approaches should be used when the input data is not a simple, linear integer? Unfortunately, the simple answer is that it can get complicated, and it varies greatly depending on the nature of the domain of interest.

Other forms of input data include:

- Real numbers.
- Non-linear integers. Where the integer has some meaning other than its numeric value.
- An atom.
- A molecule.
- A protein.
- A drug.
- A map coordinate.
- More than one of any of the above.

The real point is simply that careful attention to scaling is essential to the design of any quantum algorithm.

Algorithms need to be scalable. Period.

Scaling is not easy, cheap, free, or automatic. It can be a lot of hard work. But that work is necessary.

In truth, a lot of fundamental research is needed to enable algorithms to be more readily scalable.

# Dramatic need for additional research

Existing quantum algorithms have generally been hand-coded, cobbled together in a rather ad hoc manner.

There has to be another way.

Fundamental research is needed in algorithm design, particularly in scaling of algorithms.

We need fundamental research in:

- Metaphors.
- Design patterns.
- Algorithmic building blocks.
- Libraries.
- Frameworks.
- Performance characterization and measurement.
- Scaling in general, automated scaling, automating more situations currently requiring manual scaling.

# Need for application scaling roadmaps

Scaling is rarely a purely linear integer process. Rather, there will be identifiable milestones (stages) which correspond to advances considered notable. Some may be simple integral values, real values, or symbolic values such as atoms, molecules, proteins, and drugs.

It would be helpful for those working in the various application domains to lay out roadmaps with application-specific milestones, not so much with expected dates, but:

- The plain language descriptions of the milestones.
- Some sense of the complexity of the milestones, in application-specific terms.
- Some sense of the advances in quantum computational resources which will be required to achieve the milestones.

Quantum computational resources include:

- Qubit count.
- Nines of qubit fidelity.
- Qubit connectivity.
- Quantum circuit depth.
- Shot count (circuit repetitions.)

The milestones on such roadmaps would give a more specific sense of the scaling that is occurring as development of the field progresses.

The point of the roadmap for each application domain is not to detail all possible milestones and achievements, but simply to highlight the notable scaling stages, where there might in fact be many possible milestones at each scaling stage. For example, molecules of comparable complexity.

The key benefits of such application-specific milestones include:

- Characterize how much progress has been made.
- Characterize how much progress remains to be made.
- Highlight advances in quantum computational resources which will be needed to make further progress.
- Gain a sense of when dramatic quantum advantage will be achieved, and what quantum computational resources will be required.

# Scaling for the future vs. for existing quantum computers

Given the very limited capabilities of current and even near-term quantum computers, the primary interest for scaling is for the dramatically more capable quantum computers some years in the future — the machines on which dramatic quantum advantage will be possible.

Current and near-term quantum computers don’t present much of a challenge for scaling since their limited capabilities (qubit count, qubit fidelity, coherence time, and qubit connectivity) limit quantum circuits to computations of only very modest size, where scaling is more of a fantasy than a reality.

Granted, quantum computers will be significantly more capable in just a few years, so that at least some minimal degree of scaling will finally be practical, but this will still be scaling for the future rather than for current or near-term quantum computers.

# Algorithm designers must consider scaling of shot count (circuit repetitions)

Currently, there is no consensus for how *shot count* (*circuit repetitions*) should scale as an algorithm is scaled up, but it is a serious issue warranting the attention of algorithm designers. To date, it has *never* gotten any such attention.

Authors publishing quantum algorithms should offer a reasonable discussion of the scaling of shot count in addition to their discussion of scaling of their algorithms in general.
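One hedged starting point for such a discussion is the standard binomial sampling bound, which relates shot count to the precision with which an output probability must be estimated; this is textbook statistics, not a rule from this paper. Shots grow roughly as 1/eps^2, so if scaling the algorithm up shrinks the probability differences that must be resolved, the shot count must grow accordingly:

```python
import math

def shots_needed(eps, z=1.96):
    """Worst-case shots to estimate an output probability to within +/- eps
    at roughly 95% confidence (z ~ 1.96): ceil(z^2 / (4 * eps^2))."""
    return math.ceil(z * z / (4 * eps * eps))

# Each extra digit of precision costs about 100x more circuit repetitions.
for eps in (0.1, 0.01, 0.001):
    print(eps, shots_needed(eps))
```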

For more on shot count issues, see my paper:

*Shots and Circuit Repetitions: Developing the Expectation Value for Results from a Quantum Computer* - https://jackkrupansky.medium.com/shots-and-circuit-repetitions-developing-the-expectation-value-for-results-from-a-quantum-computer-3d1f8eed398

Especially see the section *Advice to authors of papers on quantum algorithms*, although that should probably have been entitled as advice to *algorithm designers*, including how they write about their algorithms. And those comments are relevant to user documentation for quantum algorithms and also for applications which use those algorithms.

# Published papers need to explicitly address scalability of all algorithms

Scaling of algorithms is neither free nor automatic. It must be explicitly addressed in the design, implementation, and testing of any algorithm. The discussion of scaling should be very robust, leaving no doubt as to whether and how the algorithm is scalable.

Published algorithms should explicitly discuss exactly how the algorithm scales, both how scaling is achieved and how it performs when (if) tested.

It would be best if the authors of papers could explicitly test and validate scaling, but that is not always possible. See comments from the preceding section, including my paper on shot counts.

But regardless of whether testing for scaling is possible, the authors should devote at least some significant attention to highlighting and addressing any potential scaling issues.

# Timeframe

As mentioned in the introduction, the goal here is not scaling on current and near-term quantum computers per se, but to assure that algorithms designed and developed today will be automatically scalable to future quantum computers as they become available. When might that be?

- **Next two years?** I’m confident that quite a few significant advances will occur over the next two years, but even they may not be enough to fully support the vision and model of scaling proposed by this informal paper.
- **Two to three years?** One would hope that the two to three-year time period would bring us sufficient technical advances to enable a fair fraction of the capabilities detailed in this paper.
- **Three to five years?** I have every expectation that fairly dramatic advances will be occurring on all fronts three to five years from now. The exact order and pace of advances is anybody’s guess.
- **Five years?** A five-year horizon would seem more of a slam dunk for a more optimal combination of more advanced and capable classical quantum simulators coupled with real quantum computers which are robust enough to actually permit significant scaling of quantum algorithms.

To be clear, at least a marginally useful subset of the capabilities detailed in this paper should be practical right now, today, with existing simulators and existing real quantum computers. Very limited indeed, but enough to get started, and hopefully enough to tide people over until more advanced capabilities become available as each of the next few years pass.

# Summary and conclusions

- Simulation of quantum circuits is a powerful and necessary tool for producing scalable quantum algorithms. A classical quantum simulator is needed for debugging, and also to capture intermediate quantum states to validate that execution of a quantum circuit is occurring as expected.
- Specify the nines of qubit fidelity to assure that simulation is realistic.
- Incremental testing and validation at a sequence of stages (sizes) is essential to demonstrating the scalability of a quantum algorithm.
- Run both perfect simulation and a specified number of nines of qubit fidelity, and compare them to see whether scaling works properly under a realistic error rate.
- Generative coding of quantum circuits. Rather than hand-coding quantum circuits, classical code is used to generate the quantum circuit based on the specific input data and parameters. Much of the scaling of a quantum algorithm occurs at this step of each stage of scaling.
- This should be a valid model and framework, but does require higher capacity and higher performance simulators, as well as a presumption that qubit fidelity, coherence time, connectivity, and qubit count will quickly progress to the level where real quantum computers can correctly execute quantum circuits for 40 to 80 or more qubits.
- There may still be issues which only appear for much larger quantum algorithms which don’t crop up for smaller quantum algorithms, such as dependence on fine granularity of phase and probability amplitude. These can’t be detected at the scale of algorithms which can be simulated.
- Automated analysis tools are needed to detect coding patterns and algorithm design flaws which may only show up in algorithm results at qubit count and circuit sizes well beyond those which can be successfully tested on simulators or the real quantum computers which we can expect to use in the near term. Also detect if coherence time is exceeded by the circuit for the maximum expected scaling.
- Much fundamental research is still needed.
- Authors of papers should discuss in some detail the scalability of their algorithms, including scaling of shot count (circuit repetitions.) Testing and validation of scaling is preferable, but significant discussion is needed in any case.
- The exact timeframe for the various capabilities discussed in this paper is anybody’s guess. Don’t expect much over the next two years — people will need to rely on their own skill and discipline and the guidance from this paper. Various capabilities will then appear as each year passes, so that many of them should be available within three to four years, with everything in place after five years or so.

For more of my writing: **List of My Papers on Quantum Computing**.