sinter 1.12.1.tar.gz → 1.13.0.tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (53)
  1. {sinter-1.12.1 → sinter-1.13.0}/PKG-INFO +33 -34
  2. {sinter-1.12.1 → sinter-1.13.0}/README.md +32 -33
  3. {sinter-1.12.1 → sinter-1.13.0}/setup.py +1 -1
  4. {sinter-1.12.1 → sinter-1.13.0}/src/sinter/__init__.py +4 -1
  5. {sinter-1.12.1 → sinter-1.13.0}/src/sinter/_decoding_fusion_blossom.py +42 -4
  6. {sinter-1.12.1 → sinter-1.13.0}/src/sinter/_main_plot.py +13 -4
  7. {sinter-1.12.1 → sinter-1.13.0}/src/sinter/_task.py +1 -1
  8. {sinter-1.12.1 → sinter-1.13.0}/src/sinter.egg-info/PKG-INFO +33 -34
  9. {sinter-1.12.1 → sinter-1.13.0}/readme_example_plot.png +0 -0
  10. {sinter-1.12.1 → sinter-1.13.0}/requirements.txt +0 -0
  11. {sinter-1.12.1 → sinter-1.13.0}/setup.cfg +0 -0
  12. {sinter-1.12.1 → sinter-1.13.0}/src/sinter/_anon_task_stats.py +0 -0
  13. {sinter-1.12.1 → sinter-1.13.0}/src/sinter/_anon_task_stats_test.py +0 -0
  14. {sinter-1.12.1 → sinter-1.13.0}/src/sinter/_collection.py +0 -0
  15. {sinter-1.12.1 → sinter-1.13.0}/src/sinter/_collection_options.py +0 -0
  16. {sinter-1.12.1 → sinter-1.13.0}/src/sinter/_collection_options_test.py +0 -0
  17. {sinter-1.12.1 → sinter-1.13.0}/src/sinter/_collection_test.py +0 -0
  18. {sinter-1.12.1 → sinter-1.13.0}/src/sinter/_collection_tracker_for_single_task.py +0 -0
  19. {sinter-1.12.1 → sinter-1.13.0}/src/sinter/_collection_work_manager.py +0 -0
  20. {sinter-1.12.1 → sinter-1.13.0}/src/sinter/_csv_out.py +0 -0
  21. {sinter-1.12.1 → sinter-1.13.0}/src/sinter/_decoding.py +0 -0
  22. {sinter-1.12.1 → sinter-1.13.0}/src/sinter/_decoding_all_built_in_decoders.py +0 -0
  23. {sinter-1.12.1 → sinter-1.13.0}/src/sinter/_decoding_decoder_class.py +0 -0
  24. {sinter-1.12.1 → sinter-1.13.0}/src/sinter/_decoding_pymatching.py +0 -0
  25. {sinter-1.12.1 → sinter-1.13.0}/src/sinter/_decoding_test.py +0 -0
  26. {sinter-1.12.1 → sinter-1.13.0}/src/sinter/_decoding_vacuous.py +0 -0
  27. {sinter-1.12.1 → sinter-1.13.0}/src/sinter/_existing_data.py +0 -0
  28. {sinter-1.12.1 → sinter-1.13.0}/src/sinter/_existing_data_test.py +0 -0
  29. {sinter-1.12.1 → sinter-1.13.0}/src/sinter/_main.py +0 -0
  30. {sinter-1.12.1 → sinter-1.13.0}/src/sinter/_main_collect.py +0 -0
  31. {sinter-1.12.1 → sinter-1.13.0}/src/sinter/_main_collect_test.py +0 -0
  32. {sinter-1.12.1 → sinter-1.13.0}/src/sinter/_main_combine.py +0 -0
  33. {sinter-1.12.1 → sinter-1.13.0}/src/sinter/_main_combine_test.py +0 -0
  34. {sinter-1.12.1 → sinter-1.13.0}/src/sinter/_main_plot_test.py +0 -0
  35. {sinter-1.12.1 → sinter-1.13.0}/src/sinter/_main_predict.py +0 -0
  36. {sinter-1.12.1 → sinter-1.13.0}/src/sinter/_main_predict_test.py +0 -0
  37. {sinter-1.12.1 → sinter-1.13.0}/src/sinter/_plotting.py +0 -0
  38. {sinter-1.12.1 → sinter-1.13.0}/src/sinter/_plotting_test.py +0 -0
  39. {sinter-1.12.1 → sinter-1.13.0}/src/sinter/_predict.py +0 -0
  40. {sinter-1.12.1 → sinter-1.13.0}/src/sinter/_predict_test.py +0 -0
  41. {sinter-1.12.1 → sinter-1.13.0}/src/sinter/_printer.py +0 -0
  42. {sinter-1.12.1 → sinter-1.13.0}/src/sinter/_probability_util.py +0 -0
  43. {sinter-1.12.1 → sinter-1.13.0}/src/sinter/_probability_util_test.py +0 -0
  44. {sinter-1.12.1 → sinter-1.13.0}/src/sinter/_task_stats.py +0 -0
  45. {sinter-1.12.1 → sinter-1.13.0}/src/sinter/_task_stats_test.py +0 -0
  46. {sinter-1.12.1 → sinter-1.13.0}/src/sinter/_task_test.py +0 -0
  47. {sinter-1.12.1 → sinter-1.13.0}/src/sinter/_worker.py +0 -0
  48. {sinter-1.12.1 → sinter-1.13.0}/src/sinter/_worker_test.py +0 -0
  49. {sinter-1.12.1 → sinter-1.13.0}/src/sinter.egg-info/SOURCES.txt +0 -0
  50. {sinter-1.12.1 → sinter-1.13.0}/src/sinter.egg-info/dependency_links.txt +0 -0
  51. {sinter-1.12.1 → sinter-1.13.0}/src/sinter.egg-info/entry_points.txt +0 -0
  52. {sinter-1.12.1 → sinter-1.13.0}/src/sinter.egg-info/requires.txt +0 -0
  53. {sinter-1.12.1 → sinter-1.13.0}/src/sinter.egg-info/top_level.txt +0 -0
{sinter-1.12.1 → sinter-1.13.0}/PKG-INFO

@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: sinter
-Version: 1.12.1
+Version: 1.13.0
 Summary: Samples stim circuits and decodes them using pymatching.
 Author: Craig Gidney
 Author-email: craig.gidney@gmail.com
@@ -30,8 +30,8 @@ quantum error correction circuits.
 
 Sinter takes Stim circuits annotated with noise, detectors, and logical
 observables.
-It uses stim to sample the circuits and pymatching to predict whether the
-logical observables were flipped or not, given the detector data.
+It uses stim to sample the circuits and a decoder such as pymatching to predict
+whether the logical observables were flipped or not, given the detector data.
 It records how often this succeeds, and how often it fails (the error rate).
 
 Sinter uses python multiprocessing to do parallel sampling across multiple CPU
@@ -40,11 +40,12 @@ specified by the user (such as a target number of errors), saves the results to
 as simple CSV format, and has some basic plotting functionality for viewing the
 results.
 
-Sinter doesn't support cloud compute, but it does scale well on
-a single machine.
+Sinter doesn't support cloud compute, but it does scale well on a single
+machine.
 I've tested it on 2 core machines, 4 core machines, and 96 core machines.
-Although there are potential pitfalls (e.g. setting batch sizes too large causes thrashing),
-sinter generally achieves good resource utilization of the processes you assign to it.
+Although there are potential pitfalls (e.g. setting batch sizes too large causes
+thrashing), sinter generally achieves good resource utilization of the processes
+you assign to it.
 
 <a name="how_to_install"></a>
 # How to install
@@ -63,8 +64,8 @@ to use sinter's python API.
 <a name="how_to_use_python"></a>
 # How to use: Python API
 
-This example assumes you are in a python environment with sinter
-installed.
+This example assumes you are in a python environment with `sinter` and
+`pymatching` installed.
 
 ```python
 import stim
@@ -152,19 +153,8 @@ and the corresponding image saved to `plot.png`:
 ## python API utility methods
 
 Sinter's python module exposes a variety of methods that are handy for plotting
-or analyzing QEC data. These include:
-
-- `sinter.fit_binomial`: Fit a binomial hypothesis to data, including a likelihood range (the Bayesian equivalent of computing the standard deviation).
-- `sinter.fit_line_slope`: Predict slope by fitting sampled coordinates to a line.
-- `sinter.fit_line_y_at_x`: Predict Y values from X values by fitting sampled coordinates to a line
-- `sinter.comma_separated_key_values(path)`: A reasonable value to give for `--metadata_func` mapping `"folder/b=test,a=2.stim"` to `{'b': 'test', 'a': 2}`.
-- `sinter.predict_observables_bit_packed`: Runs a decoder on given detection event data, producing predicted observable flip data.
-- `sinter.predict_discards_bit_packed`: Converts detection event data into "what shots should be postselected" data according to given rules.
-- `sinter.predict_on_disk`: Converts detection event data into discard data and observable prediction data, using data from disk and writing results to disk.
-- `sinter.stats_from_csv_files`: Read saved CSV data.
-- `sinter.shot_error_rate_to_piece_error_rate`: Compute per-round error rates.
-- `sinter.better_sorted_str_terms`: A text sorting key that puts `"A100"` before `"A99"`, by noticing numbers instead of sorting purely lexicographically.
-- `sinter.group_by`: Combines items from a list into keyed groups.
+or analyzing QEC data.
+See the [sinter API reference](https://github.com/quantumlib/Stim/blob/main/doc/sinter_api.md).
 
 <a name="how_to_use_linux"></a>
 # How to use: Linux Command Line
@@ -210,9 +200,13 @@ But this is just an example, so we'll use normal surface code circuits.
 You can use sinter to collect statistics on each circuit by using the `sinter collect` command.
 This command takes options specifying how much data to collect, how to do decoding, etc.
 
+The `processes` argument decides how many workers to use. Set it to `auto` to set
+it to the number of CPUs on your machine.
+
 The `metadata_func` argument can be used to specify custom python expression that turns the `path`
 into a dictionary or other JSON object associated with the circuit.
-For convenience, sinter includes the method `sinter.comma_separated_key_values(path)` which parses
+If you set `metadata_func` to `auto` then will use the method
+`sinter.comma_separated_key_values(path)` which parses
 stim circuit paths like `folder/a=2,b=test.stim` into a dictionary like `{'a': 2, 'b': 'test'}`.
 
 By default, sinter writes the collected statistics to stdout as CSV data.
@@ -224,9 +218,9 @@ instead of overwriting it.
 
 ```bash
 sinter collect \
-    --processes 4 \
+    --processes auto \
     --circuits circuits/*.stim \
-    --metadata_func "sinter.comma_separated_key_values(path)" \
+    --metadata_func auto \
     --decoders pymatching \
     --max_shots 1_000_000 \
     --max_errors 1000 \
@@ -263,17 +257,23 @@ sinter combine stats.csv
 # plot
 
 You can use `sinter plot` to view the results you've collected.
-This command takes a CSV file, and also some command indicating how to group each case
-into single curves and also what the desired X coordinate of a case is.
-This is done in a flexible but very hacky way, by specifying a python expression using the case's filename:
+This command takes a CSV file, an argument `--group_func` indicating how to
+group the statistics into curves, an argument `--x_func` indicating how to
+pick the X coordinate of each point, and various other arguments. Each `*_func`
+argument takes a string that will be evaluated as a python expression, with
+various useful values in scope such as a `metadata` value containing the
+json metadata for the various points being evaluated. There is also a special
+`m` value where `m.key` is shorthand for `metadata.get('key', None)`.
+
+Here is an example of a `sinter plot` command:
 
 ```bash
 sinter plot \
     --in stats.csv \
-    --group_func "'Rotated Surface Code d=' + str(metadata['d'])" \
-    --x_func "metadata['p']" \
-    --fig_size 1024 1024 \
+    --group_func "f'''Rotated Surface Code d={m.d}'''" \
+    --x_func m.p \
     --xaxis "[log]Physical Error Rate" \
+    --fig_size 1024 1024 \
     --out surface_code_figure.png \
     --show
 ```
@@ -285,7 +285,6 @@ Which will save a png image of, and also open a window showing, a plot like this
 <a name="csv_format"></a>
 # The csv format for sample statistics
 
-
 Sinter saves samples as a table using a Comma Separated Value format.
 For example:
 
@@ -321,8 +320,8 @@ shots that were from separate circuits or separate versions of a circuit.
 dictionary with helpful keys like "noise_level" or "circuit_name". The json
 value is serialized into JSON and then escaped so that it can be put into the
 CSV data (e.g. quotes get doubled up).
-- `custom_counts` (json[Dict[str, int]]): A field that can store a dictionary
-  from string keys to integer counts represented in
+- `custom_counts` (json[Dict[str, int]]): An optional field that can store a
+  dictionary from string keys to integer counts represented in
 [Java Script Object Notation](https://json.org).
 The counts can be a huge variety of things, ranging from per-observable error
 counts to detection event counts. In general, any value that should be added
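The `metadata_func auto` behavior described in the hunks above maps paths like `folder/a=2,b=test.stim` to `{'a': 2, 'b': 'test'}`. The sketch below is a minimal re-implementation for illustration only, not sinter's actual `comma_separated_key_values` source; the exact numeric-coercion rules are assumptions.

```python
import pathlib

def comma_separated_key_values(path: str) -> dict:
    # Parse "folder/b=test,a=2.stim" into {'b': 'test', 'a': 2}.
    # Values that parse as int or float are converted; others stay strings.
    stem = pathlib.Path(path).name.rsplit('.', 1)[0]  # drop the extension
    metadata = {}
    for term in stem.split(','):
        key, eq, value = term.partition('=')
        if eq:
            try:
                value = int(value)
            except ValueError:
                try:
                    value = float(value)
                except ValueError:
                    pass  # keep the raw string
        metadata[key] = value
    return metadata

print(comma_separated_key_values('folder/b=test,a=2.stim'))  # {'b': 'test', 'a': 2}
```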
{sinter-1.12.1 → sinter-1.13.0}/README.md

The README.md changes are identical to the PKG-INFO changes above (the PKG-INFO body embeds the README), with every hunk offset by 14 lines (e.g. `@@ -16,8 +16,8 @@` here corresponds to `@@ -30,8 +30,8 @@` in PKG-INFO).
{sinter-1.12.1 → sinter-1.13.0}/setup.py

@@ -19,7 +19,7 @@ with open('README.md', encoding='UTF-8') as f:
 with open('requirements.txt', encoding='UTF-8') as f:
     requirements = f.read().splitlines()
 
-__version__ = '1.12.1'
+__version__ = '1.13.0'
 
 setup(
     name='sinter',
{sinter-1.12.1 → sinter-1.13.0}/src/sinter/__init__.py

@@ -1,4 +1,4 @@
-__version__ = '1.12.1'
+__version__ = '1.13.0'
 
 from sinter._anon_task_stats import (
     AnonTaskStats,
@@ -15,6 +15,9 @@ from sinter._collection_options import (
 from sinter._csv_out import (
     CSV_HEADER,
 )
+from sinter._decoding_all_built_in_decoders import (
+    BUILT_IN_DECODERS,
+)
 from sinter._existing_data import (
     read_stats_from_csv_files,
     stats_from_csv_files,
{sinter-1.12.1 → sinter-1.13.0}/src/sinter/_decoding_fusion_blossom.py

@@ -1,19 +1,58 @@
 import math
 import pathlib
-from typing import Callable, List, TYPE_CHECKING
-from typing import Tuple
+from typing import Callable, List, TYPE_CHECKING, Tuple
 
 import numpy as np
 import stim
 
-from sinter._decoding_decoder_class import Decoder
+from sinter._decoding_decoder_class import Decoder, CompiledDecoder
 
 if TYPE_CHECKING:
     import fusion_blossom
 
 
+class FusionBlossomCompiledDecoder(CompiledDecoder):
+    def __init__(self, solver: 'fusion_blossom.SolverSerial', fault_masks: 'np.ndarray', num_dets: int, num_obs: int):
+        self.solver = solver
+        self.fault_masks = fault_masks
+        self.num_dets = num_dets
+        self.num_obs = num_obs
+
+    def decode_shots_bit_packed(
+            self,
+            *,
+            bit_packed_detection_event_data: 'np.ndarray',
+    ) -> 'np.ndarray':
+        num_shots = bit_packed_detection_event_data.shape[0]
+        predictions = np.zeros(shape=(num_shots, self.num_obs), dtype=np.uint8)
+        import fusion_blossom
+        for shot in range(num_shots):
+            dets_sparse = np.flatnonzero(np.unpackbits(bit_packed_detection_event_data[shot], count=self.num_dets, bitorder='little'))
+            syndrome = fusion_blossom.SyndromePattern(syndrome_vertices=dets_sparse)
+            self.solver.solve(syndrome)
+            prediction = int(np.bitwise_xor.reduce(self.fault_masks[self.solver.subgraph()]))
+            predictions[shot] = np.packbits(prediction, bitorder='little')
+            self.solver.clear()
+        return predictions
+
+
 class FusionBlossomDecoder(Decoder):
     """Use fusion blossom to predict observables from detection events."""
+
+    def compile_decoder_for_dem(self, *, dem: 'stim.DetectorErrorModel') -> CompiledDecoder:
+        try:
+            import fusion_blossom
+        except ImportError as ex:
+            raise ImportError(
+                "The decoder 'fusion_blossom' isn't installed\n"
+                "To fix this, install the python package 'fusion_blossom' into your environment.\n"
+                "For example, if you are using pip, run `pip install fusion_blossom`.\n"
+            ) from ex
+
+        solver, fault_masks = detector_error_model_to_fusion_blossom_solver_and_fault_masks(dem)
+        return FusionBlossomCompiledDecoder(solver, fault_masks, dem.num_detectors, dem.num_observables)
+
     def decode_via_files(self,
                          *,
                          num_shots: int,
@@ -36,7 +75,6 @@ class FusionBlossomDecoder(Decoder):
         error_model = stim.DetectorErrorModel.from_file(dem_path)
         solver, fault_masks = detector_error_model_to_fusion_blossom_solver_and_fault_masks(error_model)
         num_det_bytes = math.ceil(num_dets / 8)
-
         with open(dets_b8_in_path, 'rb') as dets_in_f:
             with open(obs_predictions_b8_out_path, 'wb') as obs_out_f:
                 for _ in range(num_shots):
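The new `decode_shots_bit_packed` method receives detection events as bit-packed bytes and unpacks them with numpy's little-endian bit order, where bit k of byte j encodes detector 8*j + k (as implied by the `bitorder='little'` calls in the hunk above). Here is a standalone illustration of that convention, separate from sinter itself:

```python
import numpy as np

# A shot where detectors 1 and 5 fired, out of num_dets = 6.
num_dets = 6
dets = np.zeros(num_dets, dtype=np.uint8)
dets[[1, 5]] = 1

# Pack little-endian: bit k of byte j corresponds to detector 8*j + k.
packed = np.packbits(dets, bitorder='little')
assert packed[0] == 0b00100010  # bits 1 and 5 set, i.e. 34

# Unpack and recover the sparse detector indices, as the decoder does.
recovered = np.flatnonzero(np.unpackbits(packed, count=num_dets, bitorder='little'))
print(recovered)  # [1 5]
```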
{sinter-1.12.1 → sinter-1.13.0}/src/sinter/_main_plot.py

@@ -151,7 +151,7 @@ def parse_args(args: List[str]) -> Any:
     )
     parser.add_argument('--plot_args_func',
                         type=str,
-                        default='''{'marker': 'ov*sp^<>8PhH+xXDd|'[index % 18]}''',
+                        default='''{'marker': 'ov*sp^<>8P+xXhHDd|'[index % 18]}''',
                         help='A python expression used to customize the look of curves.\n'
                              'Values available to the python expression:\n'
                              '    index: A unique integer identifying the curve.\n'
@@ -220,7 +220,11 @@ def parse_args(args: List[str]) -> Any:
     parser.add_argument('--ymin',
                         default=None,
                         type=float,
-                        help='Sets the minimum value of the y axis (max always 1).')
+                        help='Forces the minimum value of the y axis.')
+    parser.add_argument('--ymax',
+                        default=None,
+                        type=float,
+                        help='Forces the maximum value of the y axis.')
     parser.add_argument('--title',
                         default=None,
                         type=str,
@@ -474,6 +478,7 @@ def _plot_helper(
     xaxis: str,
     yaxis: Optional[str],
     min_y: Optional[float],
+    max_y: Optional[float],
     max_x: Optional[float],
     min_x: Optional[float],
     title: Optional[str],
@@ -536,7 +541,7 @@ def _plot_helper(
 
     x_scale_name: Optional[str] = None
     for ax in [ax_err, ax_dis, ax_cus]:
-        x_scale_name = x_scale_name or _set_axis_scale_label_ticks(
+        v = _set_axis_scale_label_ticks(
             ax=ax,
             y_not_x=False,
             axis_label=xaxis,
@@ -548,6 +553,7 @@ def _plot_helper(
             plotted_stats=plotted_stats,
             v_func=x_func,
         )
+        x_scale_name = x_scale_name or v
 
     y_scale_name: Optional[str] = None
     if ax_err is not None:
@@ -556,7 +562,7 @@ def _plot_helper(
             y_not_x=True,
             axis_label=f"Logical Error Rate (per {failure_unit})" if yaxis is None else yaxis,
             default_scale='log',
-            forced_max_v=1 if min_y is None or 1 > min_y else None,
+            forced_max_v=max_y if max_y is not None else 1 if min_y is None or 1 > min_y else None,
             default_min_v=1e-4,
             default_max_v=1,
             forced_min_v=min_y,
@@ -611,6 +617,8 @@ def _plot_helper(
             default_max_v=1,
             plotted_stats=plotted_stats,
             v_func=y_func,
+            forced_min_v=min_y,
+            forced_max_v=max_y,
         )
         plot_custom(
             ax=ax_cus,
@@ -753,6 +761,7 @@ def main_plot(*, command_line_args: List[str]):
         yaxis=args.yaxis,
         fig_size=args.fig_size,
         min_y=args.ymin,
+        max_y=args.ymax,
         max_x=args.xmax,
         min_x=args.xmin,
         highlight_max_likelihood_factor=args.highlight_max_likelihood_factor,
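The new `forced_max_v` expression above chains two conditional expressions, which can be hard to read. Restated as plain branches (an illustrative sketch of the same logic, not sinter code):

```python
def forced_max_v(max_y, min_y):
    # Mirrors the diff's expression:
    #   max_y if max_y is not None else 1 if min_y is None or 1 > min_y else None
    if max_y is not None:
        return max_y          # an explicit --ymax always wins
    if min_y is None or 1 > min_y:
        return 1              # default behavior: cap the y axis at 1
    return None               # --ymin is at or above 1, so don't force a cap

assert forced_max_v(None, None) == 1
assert forced_max_v(0.5, None) == 0.5
assert forced_max_v(None, 2.0) is None
```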
{sinter-1.12.1 → sinter-1.13.0}/src/sinter/_task.py

@@ -183,7 +183,7 @@ class Task:
         >>> task.strong_id_value()
         {'circuit': 'H 0', 'decoder': 'pymatching', 'decoder_error_model': '', 'postselection_mask': None, 'json_metadata': None}
         """
-        if self.decoder is None:
+        if self.circuit is None:
             raise ValueError("Can't compute strong_id until `circuit` is set.")
         if self.decoder is None:
             raise ValueError("Can't compute strong_id until `decoder` is set.")
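The `_task.py` change fixes a copy-paste bug: before 1.13.0 both guards tested `self.decoder`, so a task missing its circuit was never reported. A minimal standalone demonstration with a hypothetical stripped-down `Task` (not sinter's real class):

```python
class Task:
    # Hypothetical minimal stand-in for sinter.Task, just to show the guard bug.
    def __init__(self, circuit=None, decoder=None):
        self.circuit = circuit
        self.decoder = decoder

    def validate_buggy(self):
        # Pre-1.13.0: both guards tested `decoder`, so a missing circuit slipped through.
        if self.decoder is None:
            raise ValueError("Can't compute strong_id until `circuit` is set.")
        if self.decoder is None:
            raise ValueError("Can't compute strong_id until `decoder` is set.")

    def validate_fixed(self):
        # 1.13.0: the first guard now tests `circuit`.
        if self.circuit is None:
            raise ValueError("Can't compute strong_id until `circuit` is set.")
        if self.decoder is None:
            raise ValueError("Can't compute strong_id until `decoder` is set.")

task = Task(circuit=None, decoder='pymatching')
task.validate_buggy()  # silently passes despite the missing circuit
try:
    task.validate_fixed()
except ValueError as ex:
    print(ex)  # Can't compute strong_id until `circuit` is set.
```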
{sinter-1.12.1 → sinter-1.13.0}/src/sinter.egg-info/PKG-INFO

The changes to src/sinter.egg-info/PKG-INFO are identical to the top-level PKG-INFO changes above (same hunks, same line numbers).