# What additional options are available when using REM?

## Overview

The main options the user has when using REM concern how to specify the inverse confusion matrix that is used to mitigate errors in the raw measurement (or “readout”) results. Currently Mitiq does not implement methods for estimating readout-error confusion matrices (a form of measurement-noise calibration, and therefore a device-specific task), so the user must provide enough information for Mitiq to construct one. As described below, Mitiq’s options support the differing levels of information a user may have about the readout-error characteristics of their device. Once the confusion matrix has been constructed, the remaining steps of standard REM are straightforward: compute the pseudoinverse of the confusion matrix and apply it to the raw measurement results. For more information on what a confusion matrix is, see *What is the theory behind REM?*
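As a toy illustration of these remaining steps, here is a sketch in plain NumPy (not Mitiq's internal implementation) that corrects a single-qubit measurement distribution; the flip probabilities and the raw distribution below are made up for the example:

```
import numpy as np

# Hypothetical single-qubit confusion matrix: columns correspond to the
# true (prepared) state, rows to the measured state, so columns sum to 1.
p0, p1 = 0.1, 0.2
A = np.array([[1 - p0, p1],
              [p0, 1 - p1]])

# Made-up raw (noisy) measurement distribution over outcomes {0, 1}.
raw = np.array([0.55, 0.45])

# Standard REM: apply the pseudoinverse of the confusion matrix to the
# raw distribution to estimate the noiseless one.
mitigated = np.linalg.pinv(A) @ raw
print(mitigated)  # [0.5 0.5]
```

Note that in general the mitigated vector is a quasi-probability distribution: its entries sum to one but may fall slightly outside \([0, 1]\).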

## Options for specifying the inverse confusion matrix

Mitiq provides two utility functions for constructing an inverse confusion matrix from user-provided information about a device’s confusion matrix. We describe these functions below and illustrate with toy examples. (Note that everything that follows is circuit-agnostic; it concerns how to represent a device’s noise model in the form required by REM).

### Inverse confusion matrix from single-qubit noise model

The function `generate_inverse_confusion_matrix(num_qubits, p0, p1)` embodies the simplest possible noise model, in which noise affects the measurement of each qubit independently and with the same confusion probabilities, specified by \(p_0 = Pr(1|0)\), the probability that \(|0\rangle\) gets flipped to \(|1\rangle\) when measured, and \(p_1 = Pr(0|1)\), the probability that \(|1\rangle\) gets flipped to \(|0\rangle\). The \(2 \times 2\) confusion matrix \(A_1\) for the first qubit (and every other qubit) is then

\[
A_1 = \begin{pmatrix}
1 - p_0 & p_1 \\
p_0 & 1 - p_1
\end{pmatrix},
\]

and the joint \(2^n \times 2^n\) confusion matrix \(A\) for all \(n\) qubits is just \(n\) copies of \(A_1\) tensored together: \(A = A_1 \otimes \dots \otimes A_1 = A_1^{\otimes n}\).

To construct an inverse confusion matrix with `generate_inverse_confusion_matrix()`, the user supplies the number of qubits and the single-qubit confusion matrix parameters \(p_0\) and \(p_1\). Here is an example with two qubits.

```
import numpy as np

from mitiq.rem import generate_inverse_confusion_matrix

# Confusion matrix for qubit 1
A1_entries = [
    0.9, 0.2,
    0.1, 0.8,
]
A1 = np.array(A1_entries).reshape(2, 2)

# Overall 2-qubit confusion matrix (tensor two copies of A1)
A = np.kron(A1, A1)

# Generate the inverse confusion matrix. For this simple error model
# the user only has to specify the single-qubit flip probabilities
# p0 and p1.
A_pinv = generate_inverse_confusion_matrix(2, p0=0.1, p1=0.2)

print(f"Confusion matrix:\n{A}\n")
print(f"Column-wise sums of confusion matrix:\n{A.sum(axis=0)}\n")
print(f"Inverse confusion matrix:\n{A_pinv}")
```

```
Confusion matrix:
[[0.81 0.18 0.18 0.04]
 [0.09 0.72 0.02 0.16]
 [0.09 0.02 0.72 0.16]
 [0.01 0.08 0.08 0.64]]

Column-wise sums of confusion matrix:
[1. 1. 1. 1.]

Inverse confusion matrix:
[[ 1.30612245 -0.32653061 -0.32653061  0.08163265]
 [-0.16326531  1.46938776  0.04081633 -0.36734694]
 [-0.16326531  0.04081633  1.46938776 -0.36734694]
 [ 0.02040816 -0.18367347 -0.18367347  1.65306122]]
```

> **Note:** In each code example we show an explicit computation of the full \(2^n \times 2^n\) confusion matrix \(A\) from the smaller, local confusion matrices supplied by the user, but this is solely for expository purposes. When applying REM in practice only the pseudoinverse \(A^{+}\) needs to be computed: the user supplies one or more local confusion matrices, and Mitiq’s utility functions can directly compute \(A^{+}\) from these.
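This shortcut works because the pseudoinverse distributes over tensor products, so the full \(2^n \times 2^n\) matrix never has to be inverted (or even formed) explicitly. A quick check in plain NumPy, using the same toy single-qubit matrix as in the example above:

```
import numpy as np

A1 = np.array([[0.9, 0.2],
               [0.1, 0.8]])

# Pseudoinverse of the full two-qubit confusion matrix ...
pinv_full = np.linalg.pinv(np.kron(A1, A1))
# ... equals the tensor product of the local pseudoinverses.
pinv_local = np.kron(np.linalg.pinv(A1), np.linalg.pinv(A1))
print(np.allclose(pinv_full, pinv_local))  # True
```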

### Inverse confusion matrix from \(k\) local confusion matrices

The function `generate_tensored_inverse_confusion_matrix(num_qubits, confusion_matrices)` can be applied to any \(n\)-qubit confusion matrix \(A\) that factorizes into the tensor product of \(k\) smaller, local confusion matrices (supplied by the user in `confusion_matrices`), one for each subset in a partition of the \(n\) qubits into \(k\) smaller subsets. The factorization encodes the assumption that there are \(k\) independent (uncorrelated) noise processes affecting the \(k\) disjoint subsets of qubits (possibly of different sizes), while noise may still be correlated between qubits within each subset. This model includes the simplest noise model above as the special case where \(k = n\) and each of the \(n\) single-qubit subsets has the same confusion matrix \(A_1\):

\[
A = A_1 \otimes A_1 \otimes \dots \otimes A_1 = A_1^{\otimes n}.
\]

For a slightly more nuanced model, one could still assume independent noise across qubits, but specify a different \(2 \times 2\) confusion matrix \(A_i\) for each qubit \(i\):

\[
A = A_1 \otimes A_2 \otimes \dots \otimes A_n.
\]

Here is an example with two qubits.

```
import numpy as np

from mitiq.rem import generate_tensored_inverse_confusion_matrix

# Confusion matrix for qubit 1 (same as above)
A1_entries = [
    0.9, 0.2,
    0.1, 0.8,
]
A1 = np.array(A1_entries).reshape(2, 2)

# A different confusion matrix for qubit 2
A2_entries = [
    0.7, 0.4,
    0.3, 0.6,
]
A2 = np.array(A2_entries).reshape(2, 2)

# Overall 2-qubit confusion matrix (A1 tensor A2)
A = np.kron(A1, A2)

# Generate the inverse confusion matrix from the two local matrices.
A_pinv = generate_tensored_inverse_confusion_matrix(2, confusion_matrices=[A1, A2])

print(f"Confusion matrix:\n{A}\n")
print(f"Column-wise sums of confusion matrix:\n{A.sum(axis=0)}\n")
print(f"Inverse confusion matrix:\n{A_pinv}")
```

```
Confusion matrix:
[[0.63 0.36 0.14 0.08]
 [0.27 0.54 0.06 0.12]
 [0.07 0.04 0.56 0.32]
 [0.03 0.06 0.24 0.48]]

Column-wise sums of confusion matrix:
[1. 1. 1. 1.]

Inverse confusion matrix:
[[ 2.28571429 -1.52380952 -0.57142857  0.38095238]
 [-1.14285714  2.66666667  0.28571429 -0.66666667]
 [-0.28571429  0.19047619  2.57142857 -1.71428571]
 [ 0.14285714 -0.33333333 -1.28571429  3.        ]]
```

More generally, one can provide `generate_tensored_inverse_confusion_matrix()` with a list of \(k\) confusion matrices of any size (for any \(k\), \(1 \leq k \leq n\)), as long as their dimensions when tensored together give an overall confusion matrix of the correct dimension \(2^{n} \times 2^{n}\). For instance, the first confusion matrix might apply to qubits \(1\) and \(2\) while the \(k\)th applies to qubits \(n-2\), \(n-1\), \(n\):

\[
A = A^{(1)}_{1,2} \otimes \dots \otimes A^{(k)}_{n-2,\,n-1,\,n}.
\]

Here is an example with three qubits. We represent a stochastic noise model in which errors on qubit \(1\) are independent of errors on qubits \(2\) and \(3\), but errors on qubits \(2\) and \(3\) are correlated with each other, so the confusion matrix factorizes into two sub-matrices of different sizes: \(A = A^{(1)}_1 \otimes A^{(2)}_{2,3}\).

```
import numpy as np

from mitiq.rem import generate_tensored_inverse_confusion_matrix

# Confusion matrix for qubit 1 (same as above)
A1_entries = [
    0.9, 0.2,
    0.1, 0.8,
]
A1 = np.array(A1_entries).reshape(2, 2)

# Generate a random 4x4 confusion matrix (square, with columns
# summing to 1) to represent a correlated error model on qubits 2 and 3.
matrix = np.random.rand(4, 4)
A23 = matrix / matrix.sum(axis=0)[None, :]

# Overall 3-qubit confusion matrix (A1 tensor A23)
A = np.kron(A1, A23)

# Generate the inverse confusion matrix from the two local matrices.
A_pinv = generate_tensored_inverse_confusion_matrix(3, [A1, A23])

print(f"Confusion matrix:\n{A}\n")
print(f"Column-wise sums of confusion matrix:\n{A.sum(axis=0)}\n")
print(f"Inverse confusion matrix:\n{A_pinv}")
```

```
Confusion matrix:
[[0.09630311 0.30931257 0.28975342 0.24173585 0.02140069 0.06873613
  0.06438965 0.05371908]
 [0.17752741 0.0194146  0.28109194 0.19208356 0.03945054 0.00431436
  0.06246487 0.04268524]
 [0.29475362 0.33504247 0.31014195 0.21046118 0.0655008  0.07445388
  0.06892043 0.04676915]
 [0.33141586 0.23623035 0.01901269 0.25571941 0.07364797 0.05249563
  0.00422504 0.05682654]
 [0.01070035 0.03436806 0.03219482 0.02685954 0.08560277 0.27494451
  0.2575586  0.21487631]
 [0.01972527 0.00215718 0.03123244 0.02134262 0.15780214 0.01725743
  0.2498595  0.17074094]
 [0.0327504  0.03722694 0.03446022 0.02338458 0.26200322 0.29781553
  0.27568174 0.1870766 ]
 [0.03682398 0.02624782 0.00211252 0.02841327 0.29459187 0.20998254
  0.01690017 0.22730614]]

Column-wise sums of confusion matrix:
[1. 1. 1. 1. 1. 1. 1. 1.]

Inverse confusion matrix:
[[-4.60668269  0.70650281  3.61128906  0.85193622  1.15167067 -0.1766257
  -0.90282227 -0.21298405]
 [ 1.47418549 -3.82964967  2.10916285 -0.25280696 -0.36854637  0.95741242
  -0.52729071  0.06320174]
 [-0.35989364  1.77589701  2.21585136 -2.81743227  0.08997341 -0.44397425
  -0.55396284  0.70435807]
 [ 4.63524798  2.49010699 -6.79344613  3.36116015 -1.15881199 -0.62252675
   1.69836153 -0.84029004]
 [ 0.57583534 -0.08831285 -0.45141113 -0.10649203 -5.18251802  0.79481566
   4.0627002   0.95842825]
 [-0.18427319  0.47870621 -0.26364536  0.03160087  1.65845868 -4.30835588
   2.37280821 -0.28440783]
 [ 0.04498671 -0.22198713 -0.27698142  0.35217903 -0.40488035  1.99788414
   2.49283278 -3.16961131]
 [-0.579406   -0.31126337  0.84918077 -0.42014502  5.21465398  2.80137037
  -7.6426269   3.78130517]]
```
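As a sanity check on this construction, here is a hedged sketch in plain NumPy that mirrors the three-qubit example above (with a seeded random correlated block, and the tensored pseudoinverse computed by hand rather than via Mitiq's utility function): the inverse confusion matrix exactly undoes the modeled readout noise.

```
import numpy as np

rng = np.random.default_rng(0)  # seeded for reproducibility

A1 = np.array([[0.9, 0.2],
               [0.1, 0.8]])
# Random 4x4 column-stochastic matrix for the correlated block on qubits 2, 3.
matrix = rng.random((4, 4))
A23 = matrix / matrix.sum(axis=0)[None, :]

# Full 3-qubit confusion matrix, and its pseudoinverse built from the
# local pseudoinverses (the tensored construction described above).
A = np.kron(A1, A23)
A_pinv = np.kron(np.linalg.pinv(A1), np.linalg.pinv(A23))

# A noiseless distribution p, corrupted by the confusion matrix to A @ p,
# is recovered by applying the inverse confusion matrix.
p = np.zeros(8)
p[0] = 1.0  # ideal outcome: all three qubits read 0
recovered = A_pinv @ (A @ p)
print(np.allclose(recovered, p))  # True
```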