Getting Started
Install Qlro and get your first device recommendation in under 2 minutes.
Install Qlro
Python 3.11+ required. Install via Homebrew if needed:
brew install python@3.12
pip install qlro
v0.13.0 · Python 3.11+ · Apache 2.0 · Includes Qiskit + Metriq dataset
Auto-log every run into the live dashboard (recommended)
Add one line on top of your existing Braket or Qiskit workflow, and every circuit you run contributes a (predicted, observed) pair, derived from the actual job result, to the public Qlro dataset on qlro.io/dashboard and qlro.io/accuracy. Submissions are authenticated via your free API key; this is the only path for submitting outcomes (the browser demo is read-only by design, so the public r-aggregate stays citable). Posting is best-effort: network errors are swallowed, so your workflow never breaks.
import qlro.autolog.braket as qlbraket
qlbraket.enable()
# Existing Braket code keeps working unchanged.
from braket.aws import AwsDevice
device = AwsDevice("arn:aws:braket:eu-north-1::device/qpu/iqm/Garnet")
task = device.run(circuit, shots=1000)
result = task.result()  # auto-posts (prediction, observation)

import qlro.autolog.qiskit as qlqiskit
# `service` below is your existing QiskitRuntimeService instance
backend = service.backend("ibm_fez")
backend = qlqiskit.wrap(backend)
job = backend.run(circuit, shots=1024)
result = job.result()  # auto-posts (prediction, observation)

Predictions cache to ~/.qlro/autolog.db, so a Braket task submitted today and fetched tomorrow still gets posted. Pass a custom metric= callable for Gold-tier (rigorous) submissions.
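The "best-effort" posting behavior boils down to a small pattern: the submission call is wrapped so that no failure can propagate into your workflow. The sketch below illustrates the pattern only, not Qlro's actual internals; `post_pair` is a hypothetical stand-in for the real network call.

```python
import logging

def post_pair(payload):
    # Hypothetical stand-in for the real submission call; here it always fails,
    # simulating a network outage.
    raise ConnectionError("network unreachable")

def post_best_effort(payload):
    """Submit a (predicted, observed) pair; never let a failure propagate."""
    try:
        post_pair(payload)
        return True
    except Exception as exc:  # swallow everything: the user's job must not break
        logging.debug("qlro autolog post failed: %s", exc)
        return False

ok = post_best_effort({"predicted": 0.81, "observed": 0.79})
```

Even with the network down, `ok` is simply `False` and your Braket or Qiskit code continues unaffected.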
export QLRO_API_KEY="your-key-from-qlro.io/signin" # add to .zshrc / .bashrc
The autolog client reads QLRO_API_KEY from the environment and signs every submission. Without a key, posts are rejected; this is what keeps the public r-aggregate citable. Get a free key (500 requests/month) further down this page.
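As a sketch of what the client does with that variable, the helper below reads the key and fails early when it is missing. The `Authorization` header name is an assumption for illustration, not Qlro's documented wire format.

```python
import os

def auth_headers():
    """Build request headers from QLRO_API_KEY; raise early if it is missing."""
    key = os.environ.get("QLRO_API_KEY")
    if not key:
        raise RuntimeError("QLRO_API_KEY not set; submissions would be rejected")
    # Header name assumed for illustration; check the SDK for the real scheme.
    return {"Authorization": f"Bearer {key}"}

os.environ["QLRO_API_KEY"] = "demo-key"  # normally set in .zshrc / .bashrc
headers = auth_headers()
```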
Get a recommendation from your circuit
Pass a Qiskit circuit and Qlro automatically extracts qubit count, depth, and gate count to recommend the best device:
from qiskit import QuantumCircuit
import qlro

qc = QuantumCircuit(4)
qc.h(0); qc.cx(0, 1); qc.cx(1, 2); qc.cx(2, 3)
qc.measure_all()

result = qlro.recommend(qc, category="chemistry")
result.primary       # -> 'H2-2' (Quantinuum)
result.primary_fit   # -> 0.8156 (fit score)
result.secondary     # -> 'ibm_boston'
result.show()        # full ranking table
Categories: "chemistry", "optimization", "simulation", "ml". In Jupyter notebooks, the result auto-renders as an interactive table.
Get a citable URL for this run
The local result is enough for inline use. To share it in a paper, an RFP, or an audit, mint a permanent record — content-hashed, snapshot-pinned, and resolvable at qlro.io/decision/<id>:
import requests
# After: result = qlro.recommend(qc, category="chemistry")
n1q = sum(1 for ins in qc.data if ins.operation.num_qubits == 1)
n2q = sum(1 for ins in qc.data if ins.operation.num_qubits == 2)
record = requests.post(
"https://qlro-production.up.railway.app/api/v1/predict/record",
json={
"circuit": {
"n_qubits": qc.num_qubits,
"depth": qc.depth(),
"n_1q": n1q,
"n_2q": n2q,
"n_measured": qc.num_clbits or qc.num_qubits,
},
"category": "chemistry",
},
).json()
print(record["citation_url"]) # https://qlro.io/decision/rec_xxxx
print(record["bibtex"])        # drop into your paper

Or one curl from any shell:
curl -X POST https://qlro-production.up.railway.app/api/v1/predict/record \
-H "Content-Type: application/json" \
-d '{"circuit":{"n_qubits":4,"depth":22,"n_1q":20,"n_2q":16,"n_measured":4},"category":"chemistry"}' \
  | jq '{citation_url, bibtex, snapshot_doi}'

The response includes citation_url (public), bibtex, content_hash (SHA-256 of input + output), and snapshot_commit + snapshot_doi. A bilingual PDF and procurement bundle is available at /api/v1/predict/record/<id>/bundle.zip.

Advanced: custom workload spec
For finer control, describe your workload parameters directly:
from qlro.scoring.profiler import WorkloadSpec
from qlro.scoring.recommend import recommend
spec = WorkloadSpec(
qubit_count=12,
circuit_depth=200,
two_qubit_gates=400,
connectivity="all-to-all", # linear | ring | star | heavy-hex | all-to-all
accuracy="chemical", # low | medium | high | chemical
shots=1024,
)
rec = recommend(spec)
print(f"Recommended: {rec.primary.device}")
print(f"Fit score: {rec.primary.value:.4f}")
print(f"Reason: {rec.reason}")

The profiler converts your spec into a custom weight vector (Γ, Φ, F, T) tuned to your workload.
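To make the idea concrete: deeper circuits should shift weight toward coherence (Γ), gate-heavy circuits toward fidelity (F). The toy mapping below is purely illustrative and is not Qlro's actual profiler; the scaling constants are arbitrary.

```python
def weight_vector(circuit_depth, two_qubit_gates, shots):
    """Toy mapping from workload parameters to a normalized (Γ, Φ, F, T)
    weight vector. Illustrative only; Qlro's real profiler is more involved."""
    gamma = 1.0 + circuit_depth / 100    # deep circuits stress coherence
    phi = 1.0                            # connectivity term, held fixed here
    fid = 1.0 + two_qubit_gates / 100    # gate-heavy circuits stress fidelity
    t = 1.0 + shots / 10_000             # throughput matters at high shot counts
    total = gamma + phi + fid + t
    return {k: v / total for k, v in
            {"gamma": gamma, "phi": phi, "fidelity": fid, "throughput": t}.items()}

w = weight_vector(circuit_depth=200, two_qubit_gates=400, shots=1024)
```

Doubling the depth raises the coherence share at the expense of the other components, which is the qualitative behavior the real profiler aims for.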
Understand the output
Fit score — how well the device matches your workload (0 to 1). Higher is better.
Weight vector — auto-generated from your spec. Deep circuits push coherence weight up; many gates push fidelity up.
Confidence — high/medium/low based on score gap and data quality. "Low" means validate empirically before committing.
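One practical way to act on these fields in an automated pipeline, assuming the result exposes confidence as a lowercase string as listed above; the thresholds here are an illustrative policy, not a recommendation from Qlro:

```python
def should_auto_commit(confidence, fit_score, min_fit=0.7):
    """Gate an automated device choice on confidence and fit score.
    Illustrative policy only; tune thresholds to your own risk tolerance."""
    if confidence == "low":
        return False  # validate empirically before committing
    return fit_score >= min_fit

decision = should_auto_commit("high", 0.8156)
```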
CLI: same workflow without leaving the shell
Four shell commands cover the same workflow as the SDK. Pipe outputs into jq or shell scripts.
qlro recommend my_vqe.qasm --category chemistry
qlro recommend my_vqe.qasm --category optimization --all
cat my_vqe.qasm | qlro recommend - --json | jq '.rankings[0]'
qlro doctor iqm_garnet
# Reports snapshot freshness, calibration drift, and exit codes:
# 0 = healthy, 1 = stale (>7 days), 2 = drifted (>=2x from snapshot)
# CI/CD: qlro doctor ibm_boston || echo "stale, recalibrate first"
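In CI, the exit codes map naturally onto a case statement. In the sketch below, qlro is stubbed with a shell function so the snippet runs anywhere; delete the stub in a real pipeline where the CLI is installed.

```shell
qlro() { return 1; }  # stub: pretend the snapshot is stale; remove in real CI

if qlro doctor ibm_boston; then code=0; else code=$?; fi
case $code in
  0) status="healthy" ;;
  1) status="stale"   ;;  # snapshot older than 7 days: recalibrate first
  2) status="drifted" ;;  # calibration drifted >=2x from snapshot
  *) status="unknown" ;;
esac
echo "snapshot status: $status"
```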
qlro calibrate iqm_garnet \
  --ghz-fidelity 0.953 \
  --deep-ladder-fidelity 0.39
# Saves calibrated (eps_2q, d_c, eps_ro) to ~/.qlro/calibrations/iqm_garnet.json.
# Pass --ghz-counts JSON to enable Level-2 readout calibration.
pip install "qlro[braket]"
qlro braket-retro --days 30
# Lists every COMPLETED Braket task in the last 30 days, scores each
# one against Qlro's recommendation, and reports total potential savings.
qlro --help lists every subcommand; qlro <cmd> --help for flags.
Prefer HTTP? Get a free API key
If you are not writing Python, the POST /api/v1/predict endpoint returns the same recommendation as the SDK. Free tier: 500 requests/month, no credit card required.
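For example, a request can be built with nothing but the Python standard library. This sketch assumes /api/v1/predict accepts the same circuit payload as the /predict/record curl shown earlier, and the X-API-Key header name is an assumption; to keep the sketch offline, the request is constructed but not sent.

```python
import json
import urllib.request

payload = {
    "circuit": {"n_qubits": 4, "depth": 22, "n_1q": 20, "n_2q": 16, "n_measured": 4},
    "category": "chemistry",
}
req = urllib.request.Request(
    "https://qlro-production.up.railway.app/api/v1/predict",
    data=json.dumps(payload).encode(),
    # Header name assumed for illustration; see your key page for the real scheme.
    headers={"Content-Type": "application/json", "X-API-Key": "your-key"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to actually send the request
```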
Who this is for
Qlro fits four quantum-computing buyer profiles. Pick the closest one to find the relevant entry point.