# When should I use CDR?

## Advantages

The main advantage of CDR is that it can be applied without knowing the specific details of the noise
model. Indeed, in CDR, the effects of noise are indirectly *learned* through the execution of an appropriate
set of test circuits. In this way, the final error-mitigation inference tends to self-tune to the backend
that is actually used.
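
For instance, a basic CDR workflow can be set up as in the sketch below. The two-qubit circuit, depolarizing noise model, and observable are arbitrary placeholders, and the `simulator` keyword (the noiseless simulator used to label the near-Clifford training circuits) is assumed from the signature of `execute_with_cdr()` in recent Mitiq versions; check the API reference of the version you have installed.

```python
import cirq
import numpy as np
from mitiq import cdr, Observable, PauliString

# Example two-qubit circuit: the rz rotations are the non-Clifford gates
# that CDR replaces with nearby Clifford gates in the training circuits.
a, b = cirq.LineQubit.range(2)
circuit = cirq.Circuit(
    cirq.H(a),
    cirq.rz(1.75).on(b),
    cirq.CNOT(a, b),
    cirq.rz(-0.9).on(a),
)

obs = Observable(PauliString("ZZ"))  # example observable to mitigate

def execute(circuit: cirq.Circuit) -> np.ndarray:
    """Noisy executor: returns the density matrix of the circuit run
    through a simple depolarizing noise model (a stand-in for hardware)."""
    noisy = circuit.with_noise(cirq.depolarize(p=0.02))
    return cirq.DensityMatrixSimulator().simulate(noisy).final_density_matrix

def simulate(circuit: cirq.Circuit) -> np.ndarray:
    """Noiseless simulator used to compute exact labels for the
    near-Clifford training circuits."""
    return cirq.DensityMatrixSimulator().simulate(circuit).final_density_matrix

mitigated = cdr.execute_with_cdr(
    circuit,
    execute,
    observable=obs,
    simulator=simulate,
)
print(mitigated)
```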

This self-tuning property is even stronger in the case of *variable-noise CDR*, i.e., when using the
`scale_factors` option of `execute_with_cdr()`. In this case, the final error-mitigated expectation value
is obtained as a linear combination of noise-scaled expectation values. This is similar to the ZNE approach
but, in CDR, the coefficients of the linear combination are learned instead of being fixed by the
extrapolation model.
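
Continuing the sketch above, enabling variable-noise CDR only requires passing `scale_factors`; the specific factors below are illustrative.

```python
# Variable-noise CDR: expectation values are estimated at several noise
# levels (here 1x and 3x the base noise, obtained by gate folding) and CDR
# learns the coefficients of their linear combination from the training data.
mitigated_vncdr = cdr.execute_with_cdr(
    circuit,
    execute,
    observable=obs,
    simulator=simulate,
    scale_factors=(1.0, 3.0),
)
```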

## Disadvantages

The main disadvantage of CDR is that the learning process is performed on a suite of test circuits that
only *resemble* the original circuit of interest. Indeed, the test circuits are *near-Clifford approximations*
of the original one. Only when this approximation is justified can the application of CDR produce meaningful
results. Increasing the `fraction_non_clifford` option of `execute_with_cdr()` can alleviate this problem
to some extent. Note, however, that the larger `fraction_non_clifford` is, the larger the classical
computation overhead is.
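
Continuing the same sketch, a larger `fraction_non_clifford` keeps more of the original non-Clifford gates in each training circuit; the value below is only an example.

```python
# Keeping 50% of the non-Clifford gates makes the training circuits a
# closer approximation of the original circuit, at the price of a more
# expensive classical simulation of the training set.
mitigated = cdr.execute_with_cdr(
    circuit,
    execute,
    observable=obs,
    simulator=simulate,
    fraction_non_clifford=0.5,
)
```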

Another relevant aspect to consider is that applying CDR in a scalable way requires a valid near-Clifford simulator. The computational cost of such a simulator should scale with the number of non-Clifford gates, independently of the circuit depth. Only in this case can the learning phase of CDR be applied efficiently.
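
As a rough proxy for this requirement, the hypothetical helper below counts the operations without a stabilizer (Clifford) effect using Cirq's `has_stabilizer_effect` protocol: the cost of a valid near-Clifford simulator should grow with this count, not with the overall depth.

```python
import cirq

def count_non_clifford(circuit: cirq.Circuit) -> int:
    """Hypothetical helper: counts operations without a stabilizer effect.

    A valid near-Clifford simulator should have a cost that scales with
    this number rather than with the total number of gates.
    """
    return sum(
        1
        for op in circuit.all_operations()
        if not cirq.has_stabilizer_effect(op)
    )

a, b = cirq.LineQubit.range(2)
deep_circuit = cirq.Circuit([cirq.CNOT(a, b)] * 100 + [cirq.rz(0.3).on(a)])
print(count_non_clifford(deep_circuit))  # 1, despite a depth of about 100
```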