
Live WCPP demo

Pick a workload. See how Qlro ranks every device in the snapshot — same output you'd get from qlro.recommend(). Click any row for the axis breakdown and benchmark provenance.

Try your own circuit

Paste OpenQASM 2.0 and see where to run it

No pip install. No API key. Your QASM is parsed in your browser and the ranking below updates to reflect your circuit's qubit count, depth, and gate composition.
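The in-browser parser itself is not shown on this page, but extracting those three features (qubit count, depth, gate composition) from OpenQASM 2.0 text is straightforward. Below is a minimal regex-based sketch; the function name is illustrative and this is not Qlro's actual parser. It handles only flat, single-statement gate calls and ignores custom gate definitions and classical control.

```python
import re

def qasm_features(qasm: str) -> dict:
    """Rough OpenQASM 2.0 feature extraction: qubit count, gate composition,
    and a greedy circuit-depth estimate. Illustrative only: ignores gate
    definitions, custom includes, and classical control."""
    skip = {"qreg", "creg", "include", "barrier", "measure", "gate", "if", "reset"}
    # Total qubits across all quantum register declarations
    qubits = sum(int(n) for n in re.findall(r"qreg\s+\w+\[(\d+)\]", qasm))
    gates: dict = {}
    layer: dict = {}   # per-qubit depth counters
    depth = 0
    for line in qasm.splitlines():
        stmt = line.split("//")[0].strip().rstrip(";")
        m = re.match(r"([a-z]\w*)", stmt)
        if not m or m.group(1) in skip:
            continue
        gates[m.group(1)] = gates.get(m.group(1), 0) + 1
        # All qubit operands of one statement land in the same new layer
        operands = [f"{r}[{i}]" for r, i in re.findall(r"(\w+)\[(\d+)\]", stmt)]
        new_layer = max((layer.get(q, 0) for q in operands), default=0) + 1
        for q in operands:
            layer[q] = new_layer
        depth = max(depth, new_layer)
    return {"qubits": qubits, "depth": depth, "gates": gates}
```

Running this on a Bell-pair-style circuit yields the same three feature groups the demo says it feeds into the ranking.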

Workload: chemistry
Top pick: H2-2 (quantinuum, 56Q, 32 benchmarks), fit 0.816
Next: ibm_boston (fit 0.642)
Run locally with pip install qlro; no sign-in needed.

Fit = WCPP score for the chemistry workload (category-level aggregate). F̂ = circuit-specific predicted fidelity (sign in to compute). Run qlro.autolog in your code and your real outcomes flow back into this catalogue automatically.

| # | Device | Provider | Qubits | Fit score | Pred. fidelity (F̂) | Γ Connectivity | Φ Coherence | F Fidelity | T Throughput |
|---|--------|----------|--------|-----------|---------------------|----------------|-------------|------------|--------------|
| 1 | H2-2 (estimated) | quantinuum | 56Q | 0.8156 ± 0.032 | sign in | 0.88 | 0.75 | 0.94 | 0.51 |
| 2 | ibm_boston | ibm | 156Q | 0.6416 ± 0.001 | sign in | 0.82 | 0.42 | 0.82 | 0.51 |
| 3 | iqm_garnet (estimated) | aws | 20Q | 0.5727 ± 0.305 | sign in | 0.58 | 0.26 | 0.99 | 0.51 |
| 4 | iqm_emerald (estimated) | aws | 54Q | 0.5596 ± 0.298 | sign in | 0.51 | 0.26 | 0.98 | 0.51 |
| 5 | ibm_pittsburgh | ibm | 156Q | 0.5432 ± 0.002 | sign in | 0.76 | 0.28 | 0.77 | 0.51 |
| 6 | ibm_kingston | ibm | 156Q | 0.5316 ± 0.002 | sign in | 0.76 | 0.29 | 0.71 | 0.52 |
| 7 | ibm_marrakesh | ibm | 156Q | 0.5102 ± 0.002 | sign in | 0.75 | 0.26 | 0.71 | 0.51 |
| 8 | ibm_fez | ibm | 156Q | 0.4653 ± 0.002 | sign in | 0.72 | 0.19 | 0.72 | 0.52 |
| 9 | ibm_torino | ibm | 133Q | 0.4417 ± 0.002 | sign in | 0.66 | 0.15 | 0.76 | 0.50 |
| 10 | rigetti_ankaa-3 (estimated) | aws | 82Q | 0.3813 ± 0.217 | sign in | 0.07 | 0.26 | 0.80 | 0.51 |
| 11 | ibm_brisbane (estimated) | ibm | 127Q | 0.3640 ± 0.201 | sign in | 0.69 | 0.26 | 0.34 | 0.51 |
| 12 | wukong_72 (estimated) | origin | 72Q | 0.1617 ± 0.009 | sign in | 0.10 | 0.01 | 0.95 | 0.51 |
| 13 | wukong_102 (estimated) | origin | 102Q | 0.1069 ± 0.007 | sign in | 0.01 | 0.01 | 0.96 | 0.51 |
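The exact WCPP aggregation is not spelled out on this page, but a category-level fit score has the shape of a weighted mean over the four axes. The sketch below illustrates that shape only: the function name and the "chemistry" weights are invented for illustration, not Qlro's documented formula.

```python
def fit_score(axes: dict, weights: dict) -> float:
    """Toy weighted mean of per-axis scores (not Qlro's actual WCPP formula).
    Normalises by the weight total so weights need not sum to 1."""
    total = sum(weights[k] for k in axes)
    return sum(weights[k] * axes[k] for k in axes) / total

# Invented illustrative weights for a chemistry-style workload
chem = {"connectivity": 0.3, "coherence": 0.3, "fidelity": 0.3, "throughput": 0.1}

# Axis scores for H2-2, taken from the table above
h2_2 = {"connectivity": 0.88, "coherence": 0.75, "fidelity": 0.94, "throughput": 0.51}
```

Any real category would define its own weight vector; the point is only that one weight profile can reorder devices whose raw axis scores are close.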
Reproducibility

- Metriq snapshot: 89cd842f23, synced 2026-04-15
- Library: qlro 0.2.0 (Apache 2.0)
- Paper: doi.org/10.5281/zenodo.19785800 (CC BY 4.0)
Run this yourself: pip install qlro, then qlro.recommend(circuit, category="chemistry"). The scores on this page are byte-identical to what the library produces; every number is locked by paper-parity tests.
Cite this snapshot

Reference the exact frozen ranking on this page in your paper. The snapshot commit + Qlro version pin reproducibility; the DOI cites the methodology.

@misc{qlro_demo_snapshot_89cd842f231f,
  author       = {{Qlro}},
  title        = {{Qlro public demo: vendor-neutral quantum-device ranking (frozen snapshot)}},
  year         = {2026},
  howpublished = {Qlro v0.2.0; Metriq snapshot \texttt{89cd842f231f}; \url{https://qlro.io/demo}},
  note         = {Methodology DOI: \href{https://doi.org/10.5281/zenodo.19785800}{10.5281/zenodo.19785800}.},
  doi          = {10.5281/zenodo.19785800},
  url          = {https://qlro.io/demo}
}
v1 scope: scoring depends on (device, workload). Circuit content and user priorities (cost, speed) aren't yet wired into the score — they will surface in future versions. Confidence weighting is already live via per-axis uncertainty bands.
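The page does not say how the per-axis uncertainty bands enter the confidence weighting. Inverse-variance weighting is the standard construction for this, so here is a plausible sketch under that assumption; the function name and formula are illustrative, not Qlro's documented behaviour.

```python
def confidence_weighted_mean(values, sigmas):
    """Inverse-variance weighted mean: estimates with tight uncertainty
    bands (small sigma) dominate, while wide bands, like the ± 0.305 on
    iqm_garnet in the table above, are discounted."""
    weights = [1.0 / (s * s) for s in sigmas]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)
```

Under this scheme a well-benchmarked axis with a narrow band pulls the aggregate toward its value, which matches the page's claim that confidence weighting already shapes the fit scores.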