Running simulations

Contains functions and classes to run and benchmark surface code simulations and visualizations. Use initialize to prepare a surface code and a decoder instance, which can be passed on to run and run_multiprocess to simulate errors and to decode them with the decoder.
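
For example, a minimal simulation initializes a code and a decoder and runs them for a number of iterations (the output below is representative; simulations are stochastic):

>>> from qsurface.main import initialize, run
>>> code, decoder = initialize((6,6), "toric", "mwpm", enabled_errors=["pauli"])
>>> run(code, decoder, iterations=10, error_rates={"p_bitflip": 0.1})
{'no_error': 8}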

qsurface.main.initialize(size, Code, Decoder, enabled_errors=[], faulty_measurements=False, plotting=False, **kwargs)

Initializes a code and a decoder.

The function makes sure that the correct classes are used to instantiate the surface code and decoder based on the provided arguments. A code must be initialized with its enabled_errors by initialize after class instantiation; this ensures that, if plotting is enabled, the plot parameters are properly loaded before the plotting items of each enabled error module are loaded. See plot.Template2D and errors._template.Plot for more information.

Parameters
  • size (Union[Tuple[int, int], int]) – The size of the surface, as a single integer applied to both the x- and y-dimensions, or as a tuple (x, y).

  • Code (Union[module, str]) – Any surface code module or module name from codes.

  • Decoder (Union[module, str]) – Any decoder module or module name from decoders.

  • enabled_errors (List[Union[str, Sim]]) – List of error modules from errors.

  • faulty_measurements (bool) – Enable faulty measurements (decode in a 3D lattice).

  • plotting (bool) – Enable plotting for the surface code and/or decoder.

  • kwargs – Keyword arguments are passed on to the chosen code, initialize, and the chosen decoder.

Examples

To initialize a 6x6 toric code with the MWPM decoder and Pauli errors:

>>> initialize((6,6), "toric", "mwpm", enabled_errors=["pauli"], check_compatibility=True)
✅ This decoder is compatible with the code.
(<toric (6, 6) PerfectMeasurements>, <Minimum-Weight Perfect Matching decoder (Toric)>)

Keyword arguments for the code and decoder classes can be included for further customization of class initialization. Note that default error rates for error class initialization (see init_errors and errors._template.Sim) can also be provided as keyword arguments here.

>>> enabled_errors = ["pauli"]
>>> code_kwargs = {
...     "initial_states": (0,0),
...     "p_bitflip": 0.1,
... }
>>> decoder_kwargs = {
...     "check_compatibility": True,
...     "weighted_union": False,
...     "weighted_growth": False,
... }
>>> initialize((6,6), "toric", "unionfind", enabled_errors=enabled_errors, **code_kwargs, **decoder_kwargs)
✅ This decoder is compatible with the code.

qsurface.main.run(code, decoder, error_rates={}, iterations=1, decode_initial=True, seed=None, benchmark=None, mp_queue=None, mp_process=0, **kwargs)

Runs surface code simulation.

Single command function to run a surface code simulation for a number of iterations.

Parameters
  • code (PerfectMeasurements) – A surface code instance (see initialize).

  • decoder (Sim) – A decoder instance (see initialize).

  • iterations (int) – Number of iterations to run.

  • error_rates (dict) – Dictionary of error rates (see errors). Errors must have been loaded during code class initialization by initialize or init_errors.

  • decode_initial (bool) – Decode initial code configuration before applying loaded errors. If random states are used for the data-qubits of the code at class initialization (default behavior), an initial round of decoding is required and is enabled through the decode_initial flag (default is enabled).

  • seed (Optional[float]) – Float to use as the seed for the random number generator.

  • benchmark (Optional[BenchmarkDecoder]) – Benchmarks decoder performance and analytics if attached.

  • kwargs – Keyword arguments are passed on to decode.

Examples

To simulate the toric code with bitflip errors for 10 iterations and decode with the MWPM decoder:

>>> code, decoder = initialize((6,6), "toric", "mwpm", enabled_errors=["pauli"])
>>> run(code, decoder, iterations=10, error_rates = {"p_bitflip": 0.1})
{'no_error': 8}

Benchmarked results are updated to the returned dictionary. See BenchmarkDecoder for the syntax and information to setup benchmarking.

>>> code, decoder = initialize((6,6), "toric", "mwpm", enabled_errors=["pauli"])
>>> benchmarker = BenchmarkDecoder({"decode":"duration"})
>>> run(code, decoder, iterations=10, error_rates = {"p_bitflip": 0.1}, benchmark=benchmarker)
{'no_error': 8,
'benchmark': {'decoded': 10,
'iterations': 10,
'seed': 12447.413636559,
'durations': {'decode': {'mean': 0.00244155000000319,
'std': 0.002170364089572033}}}}

qsurface.main.run_multiprocess(code, decoder, error_rates={}, iterations=1, decode_initial=True, seed=None, processes=1, benchmark=None, **kwargs)

Runs surface code simulation using multiple processes.

Using the standard module multiprocessing and its Process class, several processes are created that each run their own contained simulation using run. The code and decoder objects are copied such that each process has its own instance. The total number of iterations is divided over the number of processes indicated. If no processes parameter is supplied, the number of available threads is determined via cpu_count and all threads are utilized.

If a BenchmarkDecoder object is attached to benchmark, Process copies the object for each separate thread. Each instance of the decoder thus has its own benchmark object. The results of the benchmarks are appended to a list and added to the output.

See run for examples on running a simulation; a multiprocess sketch follows the parameter list below.

Parameters
  • code (PerfectMeasurements) – A surface code instance (see initialize).

  • decoder (Sim) – A decoder instance (see initialize).

  • error_rates (dict) – Dictionary for error rates (see errors).

  • iterations (int) – Total number of iterations to run.

  • decode_initial (bool) – Decode initial code configuration before applying loaded errors.

  • seed (Optional[float]) – Float to use as the seed for the random number generator.

  • processes (int) – Number of processes to spawn.

  • benchmark (Optional[BenchmarkDecoder]) – Benchmarks decoder performance and analytics if attached.

  • kwargs – Keyword arguments are passed on to every process of run.
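
For instance, a minimal multiprocess run might look as follows (a hedged sketch based on the run examples above; the iteration count and process count are illustrative, and the returned dictionary has the same form as that of run, with per-process benchmark results collected as described above):

>>> code, decoder = initialize((6,6), "toric", "mwpm", enabled_errors=["pauli"])
>>> benchmarker = BenchmarkDecoder({"decode": "duration"})
>>> run_multiprocess(code, decoder, iterations=1000, processes=4,
...                  error_rates={"p_bitflip": 0.1}, benchmark=benchmarker)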

class qsurface.main.BenchmarkDecoder(methods_to_benchmark={}, decoder=None, **kwargs)

Benchmarks a decoder during simulation.

A benchmark of a decoder can be performed by attaching the current class to a decoder. A benchmarker will keep track of the number of simulated iterations and the number of successful operations by the decoder in self.data.

Secondly, a benchmark of the decoder’s class methods can be performed with the decorators supplied in the current class, which have the form def decorator(self, func):. The approach in the current benchmark class allows for decorating any of the decoder’s class methods after the decoder has been instantiated. The benefit here is that if no benchmark class is attached, no benchmarking is performed. The class methods to benchmark must be supplied as a dictionary, where the keys are the class method names and the values are the decorator names. Benchmarked values are stored as class attributes of the benchmark object.

There are two types of decorators: list decorators, which append some value to a dictionary of lists in self.lists, and value decorators, which save or update some value in self.values.
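
To illustrate this form, a custom list decorator could be added by subclassing the benchmark class (a hypothetical sketch; the name log_result, and the assumption that decorators are looked up by name on the benchmark object, are not part of the library):

>>> class MyBenchmark(BenchmarkDecoder):
...     def log_result(self, func):
...         # Hypothetical list decorator: appends the wrapped method's
...         # return value to self.lists["log_result"][<method name>].
...         def wrapper(*args, **kwargs):
...             result = func(*args, **kwargs)
...             logged = self.lists.setdefault("log_result", {})
...             logged.setdefault(func.__name__, []).append(result)
...             return result
...         return wrapper
>>> benchmarker = MyBenchmark({"decode": "log_result"}, decoder=decoder)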

Parameters
  • methods_to_benchmark (dict) – Decoder class methods to benchmark.

  • decoder (Optional[Sim]) – Decoder object.

  • seed – Logged seed of the simulation.

data

Simulation data.

lists

Benchmarked data by list decorators.

values

Benchmarked data by value decorators.

Examples

To keep track of the duration of each iteration of decoding, the decoder’s decode method can be decorated with the duration decorator.

>>> code, decoder = initialize((6,6), "toric", "mwpm", enabled_errors=["pauli"])
>>> benchmarker = BenchmarkDecoder({"decode": "duration"}, decoder=decoder)
>>> code.random_errors(p_bitflip=0.1)
>>> decoder.decode()
>>> benchmarker.lists
{'duration': {'decode': [0.0009881999976641964]}}

The benchmark class can also be attached to run. The mean and standard deviations of the benchmarked values are in that case updated to the output of run after running lists_mean_var.

>>> benchmarker = BenchmarkDecoder({"decode":"duration"})
>>> run(code, decoder, iterations=10, error_rates = {"p_bitflip": 0.1}, benchmark=benchmarker)
{'no_error': 8,
'benchmark': {'success_rate': [10, 10],
'seed': 12447.413636559,
'durations': {'decode': {'mean': 0.00244155000000319,
    'std': 0.002170364089572033}}}}

The number of calls to a class method can be counted with the count_calls decorator and stored in self.values. A value in self.values can in turn be appended to a list with the value_to_list decorator, for example to log the value per decoding iteration. Multiple decorators can be attached to a class method by supplying a list of decorator names in methods_to_benchmark. The logged data remain available in the benchmarker object itself.

>>> benchmarker = BenchmarkDecoder({
...     "decode": ["duration", "value_to_list"],
...     "correct_edge": "count_calls",
... })
>>> run(code, decoder, iterations=10, error_rates = {"p_bitflip": 0.1}, benchmark=benchmarker)
{'no_error': 8,
'benchmark': {'success_rate': [10, 10],
'seed': '12447.413636559',
'duration': {'decode': {'mean': 0.001886229999945499,
    'std': 0.0007808582199605158}},
'count_calls': {'correct_edge': {'mean': 6.7, 'std': 1.4177446878757827}}}}
>>> benchmarker.lists
{'duration': {'decode': [0.0030814000019745436,
  0.0015807000017957762,
  0.0010604999988572672,
  0.0035383000031288248,
  0.0018329999984416645,
  0.001753099997586105,
  0.001290500000322936,
  0.0014110999982221983,
  0.0011783000009017996,
  0.0021353999982238747]},
 'count_calls': {'correct_edge': [10, 7, 5, 7, 6, 6, 7, 6, 5, 8]}}

Nested class methods can also be benchmarked, e.g. find of Cluster, which has an alias in unionfind.sim.Toric.

>>> code, decoder = initialize((6,6), "toric", "unionfind", enabled_errors=["pauli"])
>>> benchmarker = BenchmarkDecoder({"Cluster.find": "count_calls"})
>>> code.random_errors(p_bitflip=0.1)
>>> decoder.decode()
>>> benchmarker.values
{'count_calls': {'find': 30}}

lists_mean_var(reset=True)

Get the mean and standard deviation of the values in self.lists.

Parameters

reset (bool) – Resets all entries in self.lists to empty lists.
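
For example, continuing the manual duration benchmark above (a sketch; the values depend on the run, and the returned structure is assumed to mirror the durations entry in the output of run):

>>> benchmarker.lists_mean_var()
{'duration': {'decode': {'mean': ..., 'std': ...}}}
>>> benchmarker.lists
{'duration': {'decode': []}}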

value_to_list(func)

Appends all values in self.values to lists in self.lists.

duration(func)

Logs the duration of func in self.lists.

count_calls(func)

Logs the number of calls to func in self.values.