quantify_core

analysis

base_analysis

Module containing the analysis abstract base class and several basic analyses.

class AnalysisSteps(value)[source]

Bases: enum.Enum

An enumeration of the steps executed by the BaseAnalysis (and the default for subclasses).

The involved steps are specified below.

# <STEP>                                          # <corresponding class method>

AnalysisSteps.STEP_0_EXTRACT_DATA                 # BaseAnalysis.extract_data
AnalysisSteps.STEP_1_PROCESS_DATA                 # BaseAnalysis.process_data
AnalysisSteps.STEP_2_RUN_FITTING                  # BaseAnalysis.run_fitting
AnalysisSteps.STEP_3_ANALYZE_FIT_RESULTS          # BaseAnalysis.analyze_fit_results
AnalysisSteps.STEP_4_CREATE_FIGURES               # BaseAnalysis.create_figures
AnalysisSteps.STEP_5_ADJUST_FIGURES               # BaseAnalysis.adjust_figures
AnalysisSteps.STEP_6_SAVE_FIGURES                 # BaseAnalysis.save_figures
AnalysisSteps.STEP_7_SAVE_QUANTITIES_OF_INTEREST  # BaseAnalysis.save_quantities_of_interest
AnalysisSteps.STEP_8_SAVE_PROCESSED_DATASET       # BaseAnalysis.save_processed_dataset

Example

When running the analysis on a specific file, a step of the analysis might fail. It is possible to run a partial analysis by interrupting its flow before a specific step.

from quantify_core.analysis.base_analysis import BasicAnalysis

a_obj = BasicAnalysis(tuid=dataset.tuid).run_until(interrupt_before="run_fitting")

# We can also continue from a specific step
a_obj.run_from(step="run_fitting")

Tip

A custom analysis flow (e.g. inserting new steps) can be created by implementing an object similar to this one and overriding the analysis_steps.
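The flow mechanism itself is simple to emulate: an Enum whose member values are method names, executed in order. This stand-alone sketch (an illustration, not the quantify_core implementation) shows how such an enum drives a `run_until`-style partial execution:

```python
from enum import Enum

class Steps(Enum):
    # Member values name the methods to execute, in declaration order.
    STEP_0_EXTRACT = "extract_data"
    STEP_1_PROCESS = "process_data"
    STEP_2_FIT = "run_fitting"

class TinyAnalysis:
    analysis_steps = Steps  # a subclass could point this at a custom Enum

    def __init__(self):
        self.log = []

    def extract_data(self):
        self.log.append("extract_data")

    def process_data(self):
        self.log.append("process_data")

    def run_fitting(self):
        self.log.append("run_fitting")

    def run_until(self, interrupt_before):
        # Execute each step in order, stopping before the named one.
        for step in self.analysis_steps:
            if step.value == interrupt_before:
                break
            getattr(self, step.value)()
        return self

a_obj = TinyAnalysis().run_until(interrupt_before="run_fitting")
```

A subclass that needs extra steps would define its own Enum with the additional method names and assign it to `analysis_steps`.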

class BaseAnalysis(dataset=None, tuid=None, label='', settings_overwrite=None)[source]

Bases: abc.ABC

A template for analysis classes.

analysis_steps

Defines the steps of the analysis specified as an Enum. Can be overridden in a subclass in order to define a custom analysis flow. See AnalysisSteps for a template.

alias of quantify_core.analysis.base_analysis.AnalysisSteps

__init__(dataset=None, tuid=None, label='', settings_overwrite=None)[source]

Initializes the variables that are used in the analysis and in which data is stored.

Warning

We highly discourage overriding the class initialization. If the analysis requires the user to pass in any arguments, run() should be overridden and extended (see its docstring for an example).

Tip

For scripting/development/debugging purposes, run_until() can be used for a partial execution of the analysis. E.g.,

from quantify_core.analysis.base_analysis import BasicAnalysis

a_obj = BasicAnalysis(label="my experiment").run_until(
    interrupt_before="extract_data"
)

OR use the corresponding members of the analysis_steps:

a_obj = BasicAnalysis(label="my experiment").run_until(
    interrupt_before=BasicAnalysis.analysis_steps.STEP_0_EXTRACT_DATA
)

Settings schema:

Base analysis settings

properties

  • mpl_dpi

Matplotlib figures DPI.

type

integer

  • mpl_exclude_fig_titles

If True matplotlib figures will not include the title.

type

boolean

  • mpl_transparent_background

If True matplotlib figures will have a transparent background (when applicable).

type

boolean

  • mpl_fig_formats

List of formats in which matplotlib figures will be saved. E.g. ['svg']

type

array

items

type

string

Parameters
  • dataset (Dataset | None (default: None)) – an unprocessed (raw) quantify dataset to perform the analysis on.

  • tuid (TUID | str | None (default: None)) – if no dataset is specified, will look for the dataset with the matching tuid in the data directory.

  • label (str (default: '')) – if no dataset and no tuid is provided, will look for the most recent dataset that contains “label” in the name.

  • settings_overwrite (dict | None (default: None)) – A dictionary containing overrides for the global base_analysis.settings for this specific instance. See Settings schema above for available settings.
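As an illustration of the schema above, a settings_overwrite dictionary might look like the following (the keys come from the "Base analysis settings" schema; the values are hypothetical):

```python
# Hypothetical override values matching the "Base analysis settings" schema.
settings_overwrite = {
    "mpl_dpi": 300,                      # integer: matplotlib figures DPI
    "mpl_exclude_fig_titles": False,     # boolean: keep figure titles
    "mpl_transparent_background": True,  # boolean: transparent background
    "mpl_fig_formats": ["png", "svg"],   # array of strings: saved formats
}

# A minimal sanity check against the schema's declared types.
assert isinstance(settings_overwrite["mpl_dpi"], int)
assert all(isinstance(fmt, str) for fmt in settings_overwrite["mpl_fig_formats"])
```

Such a dictionary would be passed as the settings_overwrite argument when instantiating the analysis.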

adjust_clim(vmin, vmax, ax_ids=None)[source]

Adjust the clim of matplotlib figures generated by analysis object.

Parameters
  • vmin (float) – The bottom vlim in data coordinates. Passing None leaves the limit unchanged.

  • vmax (float) – The top vlim in data coordinates. Passing None leaves the limit unchanged.

  • ax_ids (List[str] | None (default: None)) – A list of ax_ids specifying which axes to adjust. Passing None results in all axes of an analysis object being adjusted.

Return type

None

adjust_figures()[source]

Perform global adjustments after creating the figures but before saving them.

By default applies mpl_exclude_fig_titles and mpl_transparent_background from .settings_overwrite to any matplotlib figures in .figs_mpl.

Can be extended in a subclass for additional adjustments.

adjust_xlim(xmin=None, xmax=None, ax_ids=None)[source]

Adjust the xlim of matplotlib figures generated by analysis object.

Parameters
  • xmin (float | None (default: None)) – The bottom xlim in data coordinates. Passing None leaves the limit unchanged.

  • xmax (float | None (default: None)) – The top xlim in data coordinates. Passing None leaves the limit unchanged.

  • ax_ids (List[str] | None (default: None)) – A list of ax_ids specifying which axes to adjust. Passing None results in all axes of an analysis object being adjusted.

Return type

None

adjust_ylim(ymin=None, ymax=None, ax_ids=None)[source]

Adjust the ylim of matplotlib figures generated by analysis object.

Parameters
  • ymin (float | None (default: None)) – The bottom ylim in data coordinates. Passing None leaves the limit unchanged.

  • ymax (float | None (default: None)) – The top ylim in data coordinates. Passing None leaves the limit unchanged.

  • ax_ids (List[str] | None (default: None)) – A list of ax_ids specifying which axes to adjust. Passing None results in all axes of an analysis object being adjusted.

Return type

None

analyze_fit_results()[source]

To be implemented by subclasses.

Should analyze and process the .fit_results and add the quantities of interest to the .quantities_of_interest dictionary.

create_figures()[source]

To be implemented by subclasses.

Should generate figures of interest. Matplotlib figure and axes objects should be added to the .figs_mpl and .axs_mpl dictionaries, respectively.

display_figs_mpl()[source]

Displays figures in .figs_mpl in all frontends.

execute_analysis_steps()[source]

Executes the methods corresponding to the analysis steps as defined by the analysis_steps.

Intended to be called by .run when creating a custom analysis that requires passing analysis configuration arguments to run().

extract_data()[source]

If no dataset is provided, populates .dataset with data from the experiment matching the tuid/label.

This method should be overridden if an analysis does not relate to a single datafile.

get_flow()[source]

Returns a tuple with the ordered methods to be called when running the analysis.

Return type

tuple

process_data()[source]

To be implemented by subclasses.

Should process, e.g., reshape, filter etc. the data before starting the analysis.

run()[source]

This function is at the core of every analysis. It calls execute_analysis_steps(), which executes all the methods defined in the analysis_steps.

This function is typically called right after instantiating an analysis class.

Return type

BaseAnalysis

Returns

The instance of the analysis object, so that you can initialize, run, and assign it to a variable on a single line, e.g., a_obj = MyAnalysis().run().

run_fitting()[source]

To be implemented by subclasses.

Should create fitting model(s) and fit data to the model(s) adding the result to the .fit_results dictionary.

run_from(step)[source]

Runs the analysis starting from the specified method.

The methods are called in the same order as in run(). Useful for continuing after a partial analysis run.

run_until(interrupt_before, **kwargs)[source]

Executes the analysis partially by calling run() and stopping before the specified step.

Warning

This method is not intended to be overwritten/extended. See the examples below on passing arguments to run().

Note

Any code inside run() is still executed. Only execute_analysis_steps() (which is called by run()) is affected.

Parameters
  • interrupt_before (str | AnalysisSteps) – Stops the analysis before executing the specified step. For convenience, the step can be specified either as a string or as a member of the analysis_steps enumeration.

  • **kwargs – Any other keyword arguments will be passed to run().

save_figures()[source]

Saves figures to disk. By default saves matplotlib figures.

Can be overridden or extended to make use of other plotting packages.

save_figures_mpl(close_figs=True)[source]

Saves all the matplotlib figures in the .figs_mpl dict.

Parameters

close_figs (bool (default: True)) – If True, closes matplotlib figures after saving.

save_processed_dataset()[source]

Saves a copy of the processed .dataset_processed in the analysis folder of the experiment.

save_quantities_of_interest()[source]

Saves the .quantities_of_interest as a JSON file in the analysis directory.

The file is written using json.dump() with the qcodes.utils.helpers.NumpyJSONEncoder custom encoder.
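The quantities of interest often contain values that json.dump() cannot serialize out of the box (e.g. complex numbers from fits). A minimal stdlib-only stand-in for what a custom encoder such as NumpyJSONEncoder takes care of (this encoder is an illustration, not the qcodes implementation):

```python
import json

class ComplexJSONEncoder(json.JSONEncoder):
    """Serialize complex values as {"re": ..., "im": ...} dictionaries."""
    def default(self, o):
        if isinstance(o, complex):
            return {"re": o.real, "im": o.imag}
        return super().default(o)

quantities_of_interest = {"fit_success": True, "s21_min": 0.1 + 0.2j}
serialized = json.dumps(quantities_of_interest, cls=ComplexJSONEncoder)
```

Without the cls argument, the complex value would raise a TypeError during serialization.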

property analysis_dir

Analysis dir based on the tuid. Will create a directory if it does not exist yet.

property name

The name of the analysis, used in data saving.

class Basic1DAnalysis(dataset=None, tuid=None, label='', settings_overwrite=None)[source]

Bases: quantify_core.analysis.base_analysis.BasicAnalysis

Deprecated. Alias of BasicAnalysis for backwards compatibility.

run()[source]

This function is at the core of every analysis. It calls execute_analysis_steps(), which executes all the methods defined in the analysis_steps.

This function is typically called right after instantiating an analysis class.

Return type

BaseAnalysis

Returns

The instance of the analysis object, so that you can initialize, run, and assign it to a variable on a single line, e.g., a_obj = MyAnalysis().run().

class Basic2DAnalysis(dataset=None, tuid=None, label='', settings_overwrite=None)[source]

Bases: quantify_core.analysis.base_analysis.BaseAnalysis

A basic analysis that extracts the data from the latest file matching the label, plots it, and stores the data in the experiment container.

create_figures()[source]

To be implemented by subclasses.

Should generate figures of interest. Matplotlib figure and axes objects should be added to the .figs_mpl and .axs_mpl dictionaries, respectively.

class BasicAnalysis(dataset=None, tuid=None, label='', settings_overwrite=None)[source]

Bases: quantify_core.analysis.base_analysis.BaseAnalysis

A basic analysis that extracts the data from the latest file matching the label, plots it, and stores the data in the experiment container.

create_figures()[source]

Creates a line plot x vs y for every data variable yi and coordinate xi in the dataset.

analysis_steps_to_str(analysis_steps, class_name='BaseAnalysis')[source]

A utility for generating the docstring for the analysis steps.

Parameters
Return type

str

Returns

A formatted string version of the analysis_steps and corresponding methods.

check_lmfit(fit_res)[source]

Check that lmfit was able to successfully return a valid fit, and give a warning if not.

The function looks at lmfit’s success parameter, and also checks whether the fit was able to obtain valid error bars on the fitted parameters.

Parameters

fit_res (ModelResult) – The ModelResult object output by lmfit.

Return type

str

Returns

A warning message if there is a problem with the fit.

flatten_lmfit_modelresult(model)[source]

Flatten an lmfit model result to a dictionary in order to be able to save it to disk.

Notes

We use this method as opposed to save_modelresult() as the corresponding load_modelresult() cannot handle loading data with a custom fit function.
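The idea behind flattening is to convert the nested attributes of a fit result into plain dictionaries of built-in types so they survive a round trip to disk. A generic sketch of the pattern, using plain stand-in objects since lmfit is not assumed here:

```python
def flatten_to_dict(obj, keys):
    """Copy the named attributes of obj into a plain dict,
    converting values that carry their own __dict__ into dicts."""
    out = {}
    for key in keys:
        value = getattr(obj, key)
        out[key] = vars(value) if hasattr(value, "__dict__") else value
    return out

class FakeParam:
    def __init__(self, value, stderr):
        self.value = value
        self.stderr = stderr

class FakeResult:
    success = True
    best_param = FakeParam(1.5, 0.1)

flat = flatten_to_dict(FakeResult(), ["success", "best_param"])
```

The resulting dict contains only built-in types and can be written to disk with json.dump().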

lmfit_par_to_ufloat(param)[source]

Safe conversion of an lmfit.parameter.Parameter to uncertainties.ufloat(value, std_dev).

This function is intended to be used in custom analyses to avoid errors when an lmfit fit fails and the stderr is None.

Parameters

param (Parameter) – The Parameter to be converted.

Returns

An object representing the value and the uncertainty of the parameter.

Return type

uncertainties.UFloat
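The failure mode being guarded against is that a failed fit leaves param.stderr set to None, which crashes a naive ufloat(param.value, param.stderr) call. A stand-in sketch using only the stdlib (the uncertainties package provides the real UFloat type; FakeUFloat and FakeParam are illustrations):

```python
from typing import NamedTuple, Optional

class FakeUFloat(NamedTuple):
    """Minimal stand-in for uncertainties.UFloat."""
    nominal_value: float
    std_dev: float

class FakeParam(NamedTuple):
    """Minimal stand-in for lmfit.parameter.Parameter."""
    value: float
    stderr: Optional[float]

def safe_par_to_ufloat(param):
    # When the fit failed, stderr is None; substitute NaN so that
    # downstream arithmetic degrades gracefully instead of raising.
    stderr = param.stderr if param.stderr is not None else float("nan")
    return FakeUFloat(param.value, stderr)

good = safe_par_to_ufloat(FakeParam(2.0, 0.05))
bad = safe_par_to_ufloat(FakeParam(2.0, None))
```

The NaN propagates through subsequent calculations and is easy to detect, unlike a TypeError raised mid-analysis.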

wrap_text(text, width=35, replace_whitespace=True, **kwargs)[source]

A text wrapping (breaking text over multiple lines) utility.

Intended to be used with plot_textbox() in order to avoid overly wide figures when, e.g., check_lmfit() fails and a warning message is generated.

For usage see, for example, source code of create_figures().

Parameters
  • text – The text string to be wrapped over several lines.

  • width – Maximum line width in characters.

  • kwargs – Any other keyword arguments to be passed to textwrap.wrap().

Returns

The wrapped text (or None if text is None).
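The behaviour described above can be reproduced with the stdlib textwrap module; this sketch mirrors the documented defaults (width=35, pass-through kwargs, None-safe) but is not the quantify_core source:

```python
import textwrap

def wrap_text(text, width=35, replace_whitespace=True, **kwargs):
    """Wrap text over multiple lines; return None if text is None."""
    if text is None:
        return None
    return "\n".join(
        textwrap.wrap(
            text, width=width, replace_whitespace=replace_whitespace, **kwargs
        )
    )

wrapped = wrap_text("a long warning message produced by a failed lmfit fit")
```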

cosine_analysis

Module containing an education example of an analysis subclass.

See Tutorial 3. Building custom analyses - the data analysis framework that guides you through the process of building this analysis.

class CosineAnalysis(dataset=None, tuid=None, label='', settings_overwrite=None)[source]

Bases: quantify_core.analysis.base_analysis.BaseAnalysis

Exemplary analysis subclass that fits a cosine to a dataset.

analyze_fit_results()[source]

Checks fit success and populates quantities_of_interest.

create_figures()[source]

Creates a figure with the data and the fit.

process_data()[source]

In some cases, you might need to process the data, e.g., reshape, filter etc., before starting the analysis. This is the method where it should be done.

See process_data() for an implementation example.

run_fitting()[source]

Fits a CosineModel to the data.

spectroscopy_analysis

class ResonatorSpectroscopyAnalysis(dataset=None, tuid=None, label='', settings_overwrite=None)[source]

Bases: quantify_core.analysis.base_analysis.BaseAnalysis

Analysis for a spectroscopy experiment of a hanger resonator.

analyze_fit_results()[source]

Checks fit success and populates .quantities_of_interest.

create_figures()[source]

Plots the measured and fitted transmission \(S_{21}\) as the I and Q component vs frequency, the magnitude and phase vs frequency, and on the complex I,Q plane.

process_data()[source]

Verifies that the data is measured as magnitude and phase and casts it to a dataset of complex valued transmission \(S_{21}\).

run_fitting()[source]

Fits a ResonatorModel to the data.

single_qubit_timedomain

Module containing analyses for common single qubit timedomain experiments.

class AllXYAnalysis(dataset=None, tuid=None, label='', settings_overwrite=None)[source]

Bases: quantify_core.analysis.single_qubit_timedomain.SingleQubitTimedomainAnalysis

Normalizes the data from an AllXY experiment and plots it against an ideal curve.

See section 2.3.2 of Reed [2013] for an explanation of the AllXY experiment and its applications in diagnosing errors in single-qubit control pulses.

create_figures()[source]

To be implemented by subclasses.

Should generate figures of interest. Matplotlib figure and axes objects should be added to the .figs_mpl and .axs_mpl dictionaries, respectively.

process_data()[source]

Processes the data so that the analysis can make assumptions on the format.

Populates self.dataset_processed.S21 with the complex (I,Q) valued transmission, and if calibration points are present for the 0 and 1 state, populates self.dataset_processed.pop_exc with the excited state population.

run()[source]

Executes the analysis using specific datapoints as calibration points.

Returns

The instance of this analysis.

Return type

AllXYAnalysis

class EchoAnalysis(dataset=None, tuid=None, label='', settings_overwrite=None)[source]

Bases: quantify_core.analysis.single_qubit_timedomain.SingleQubitTimedomainAnalysis, quantify_core.analysis.single_qubit_timedomain._DecayFigMixin

Analysis class for a qubit spin-echo experiment, which fits an exponential decay and extracts the T2_echo time.

analyze_fit_results()[source]

Checks fit success and populates .quantities_of_interest.

create_figures()[source]

Create a figure showing the exponential decay and fit.

run_fitting()[source]

Fit the data to ExpDecayModel.

class RabiAnalysis(dataset=None, tuid=None, label='', settings_overwrite=None)[source]

Bases: quantify_core.analysis.single_qubit_timedomain.SingleQubitTimedomainAnalysis

Fits a cosine curve to Rabi oscillation data and finds the qubit drive amplitude required to implement a pi-pulse.

The analysis will automatically rotate the data so that the data lies along the axis with the best SNR.

analyze_fit_results()[source]

Checks fit success and populates .quantities_of_interest.

create_figures()[source]

Creates the Rabi oscillation figure.

run(calibration_points=True)[source]
Parameters

calibration_points (bool (default: True)) – Specifies if the data should be rotated so that it lies along the axis with the best SNR.

Returns

The instance of this analysis.

Return type

RabiAnalysis

run_fitting()[source]

Fits a RabiModel to the data.

class RamseyAnalysis(dataset=None, tuid=None, label='', settings_overwrite=None)[source]

Bases: quantify_core.analysis.single_qubit_timedomain.SingleQubitTimedomainAnalysis, quantify_core.analysis.single_qubit_timedomain._DecayFigMixin

Fits a decaying cosine curve to Ramsey data (possibly with artificial detuning) and finds the true detuning, qubit frequency and T2* time.

analyze_fit_results()[source]

Extract the real detuning and qubit frequency based on the artificial detuning and fitted detuning.

create_figures()[source]

Plot Ramsey decay figure.

run(artificial_detuning=0, qubit_frequency=None, calibration_points='auto')[source]
Parameters
  • artificial_detuning (float (default: 0)) – The detuning in Hz that will be emulated by adding an extra phase in software.

  • qubit_frequency (float | None (default: None)) – The initial recorded value of the qubit frequency (before accurate fitting is done) in Hz.

  • calibration_points (bool | Literal['auto'] (default: 'auto')) – Indicates if the data analyzed includes calibration points. If set to True, will interpret the last two data points in the dataset as \(|0\rangle\) and \(|1\rangle\) respectively. If "auto", will use has_calibration_points() to determine if the data contains calibration points.

Returns

The instance of this analysis.

Return type

RamseyAnalysis

run_fitting()[source]

Fits a DecayOscillationModel to the data.

artificial_detuning: float

The detuning in Hz that will be emulated by adding an extra phase in software.

qubit_frequency: float | None

The initial recorded value of the qubit frequency (before accurate fitting is done) in Hz.

class SingleQubitTimedomainAnalysis(dataset=None, tuid=None, label='', settings_overwrite=None)[source]

Bases: quantify_core.analysis.base_analysis.BaseAnalysis

Base Analysis class for single-qubit timedomain experiments.

process_data()[source]

Processes the data so that the analysis can make assumptions on the format.

Populates self.dataset_processed.S21 with the complex (I,Q) valued transmission, and if calibration points are present for the 0 and 1 state, populates self.dataset_processed.pop_exc with the excited state population.

run(calibration_points='auto')[source]
Parameters

calibration_points (bool | Literal['auto'] (default: 'auto')) – Indicates if the data analyzed includes calibration points. If set to True, will interpret the last two data points in the dataset as \(|0\rangle\) and \(|1\rangle\) respectively. If "auto", will use has_calibration_points() to determine if the data contains calibration points.

Returns

The instance of this analysis.

Return type

SingleQubitTimedomainAnalysis

calibration_points: bool

Indicates if the data analyzed includes calibration points.

class T1Analysis(dataset=None, tuid=None, label='', settings_overwrite=None)[source]

Bases: quantify_core.analysis.single_qubit_timedomain.SingleQubitTimedomainAnalysis, quantify_core.analysis.single_qubit_timedomain._DecayFigMixin

Analysis class for a qubit T1 experiment, which fits an exponential decay and extracts the T1 time.

analyze_fit_results()[source]

Checks fit success and populates .quantities_of_interest.

create_figures()[source]

Create a figure showing the exponential decay and fit.

run_fitting()[source]

Fit the data to ExpDecayModel.

interpolation_analysis

class InterpolationAnalysis2D(dataset=None, tuid=None, label='', settings_overwrite=None)[source]

Bases: quantify_core.analysis.base_analysis.BaseAnalysis

An analysis class which generates a 2D interpolating plot for each yi variable in the dataset.

create_figures()[source]

Create a 2D interpolating figure for each yi.

optimization_analysis

class OptimizationAnalysis(dataset=None, tuid=None, label='', settings_overwrite=None)[source]

Bases: quantify_core.analysis.base_analysis.BaseAnalysis

An analysis class which extracts the optimal quantities from an N-dimensional interpolating experiment.

create_figures()[source]

Plot each of the x variables against each of the y variables.

process_data()[source]

Finds the optimal (minimum or maximum) for y0 and saves the xi and y0 values in the quantities_of_interest.

run(minimize=True)[source]
Parameters

minimize (bool (default: True)) – Determines whether to report the minimum or the maximum: True for the minimum, False for the maximum.

Returns

The instance of this analysis.

Return type

OptimizationAnalysis

iteration_plots(dataset, quantities_of_interest)[source]

For every x and y variable, plot a graph of that variable vs the iteration index.

fitting_models

Models and fit functions to be used with the lmfit fitting framework.

class CosineModel(*args, **kwargs)[source]

Bases: lmfit.model.Model

Exemplary lmfit model with a guess for a cosine.

Note

The lmfit.models module provides several fitting models that might fit your needs out of the box.

__init__(*args, **kwargs)[source]
Parameters
  • independent_vars (list of str) – Arguments to the model function that are independent variables (default is ['x']).

  • prefix (str) – String to prepend to parameter names, needed to add two Models that have parameter names in common.

  • nan_policy – How to handle NaN and missing values in data. See Notes below.

  • **kwargs – Keyword arguments to pass to Model.

Notes

1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:

  • ‘raise’ : raise a ValueError (default)

  • ‘propagate’ : do nothing

  • ‘omit’ : drop missing data

See also

cos_func()

guess(data, **kws)[source]

Guess starting values for the parameters of a model.

Parameters
  • data (ndarray) – Array of data (i.e., y-values) to use to guess parameter values.

  • x (ndarray) – Array of values for the independent variable (i.e., x-values).

  • **kws – Additional keyword arguments, passed to model function.

Return type

Parameters

Returns

  • params (Parameters) – Initial, guessed values for the parameters of a Model.

  • Changed in version 1.0.3 – Argument x is now explicitly required to estimate starting values.

class DecayOscillationModel(*args, **kwargs)[source]

Bases: lmfit.model.Model

Model for a decaying oscillation which decays to a point with 0 offset from the centre of the oscillation (as in a Ramsey experiment, for example).

__init__(*args, **kwargs)[source]
Parameters
  • independent_vars (list of str) – Arguments to the model function that are independent variables (default is ['x']).

  • prefix (str) – String to prepend to parameter names, needed to add two Models that have parameter names in common.

  • nan_policy – How to handle NaN and missing values in data. See Notes below.

  • **kwargs – Keyword arguments to pass to Model.

Notes

1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:

  • ‘raise’ : raise a ValueError (default)

  • ‘propagate’ : do nothing

  • ‘omit’ : drop missing data

guess(data, **kws)[source]

Guess starting values for the parameters of a model.

Parameters
  • data (ndarray) – Array of data (i.e., y-values) to use to guess parameter values.

  • x (ndarray) – Array of values for the independent variable (i.e., x-values).

  • **kws – Additional keyword arguments, passed to model function.

Return type

Parameters

Returns

  • params (Parameters) – Initial, guessed values for the parameters of a Model.

  • Changed in version 1.0.3 – Argument x is now explicitly required to estimate starting values.

class ExpDecayModel(*args, **kwargs)[source]

Bases: lmfit.model.Model

Model for an exponential decay, such as a qubit T1 measurement.

__init__(*args, **kwargs)[source]
Parameters
  • independent_vars (list of str) – Arguments to the model function that are independent variables (default is ['x']).

  • prefix (str) – String to prepend to parameter names, needed to add two Models that have parameter names in common.

  • nan_policy – How to handle NaN and missing values in data. See Notes below.

  • **kwargs – Keyword arguments to pass to Model.

Notes

1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:

  • ‘raise’ : raise a ValueError (default)

  • ‘propagate’ : do nothing

  • ‘omit’ : drop missing data

See also

exp_decay_func()

guess(data, **kws)[source]

Guess starting values for the parameters of a model.

Parameters
  • data (ndarray) – Array of data (i.e., y-values) to use to guess parameter values.

  • x (ndarray) – Array of values for the independent variable (i.e., x-values).

  • **kws – Additional keyword arguments, passed to model function.

Return type

Parameters

Returns

  • params (Parameters) – Initial, guessed values for the parameters of a Model.

  • Changed in version 1.0.3 – Argument x is now explicitly required to estimate starting values.

class RabiModel(*args, **kwargs)[source]

Bases: lmfit.model.Model

Model for a Rabi oscillation as a function of the microwave drive amplitude. Phase of oscillation is fixed at \(\pi\) in order to ensure that the oscillation is at a minimum when the drive amplitude is 0.

__init__(*args, **kwargs)[source]
Parameters
  • independent_vars (list of str) – Arguments to the model function that are independent variables (default is ['x']).

  • prefix (str) – String to prepend to parameter names, needed to add two Models that have parameter names in common.

  • nan_policy – How to handle NaN and missing values in data. See Notes below.

  • **kwargs – Keyword arguments to pass to Model.

Notes

1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:

  • ‘raise’ : raise a ValueError (default)

  • ‘propagate’ : do nothing

  • ‘omit’ : drop missing data

See also

cos_func()

guess(data, **kws)[source]

Guess starting values for the parameters of a model.

Parameters
  • data (ndarray) – Array of data (i.e., y-values) to use to guess parameter values.

  • x (ndarray) – Array of values for the independent variable (i.e., x-values).

  • **kws – Additional keyword arguments, passed to model function.

Return type

Parameters

Returns

  • params (Parameters) – Initial, guessed values for the parameters of a Model.

  • Changed in version 1.0.3 – Argument x is now explicitly required to estimate starting values.

class ResonatorModel(*args, **kwargs)[source]

Bases: lmfit.model.Model

Resonator model

Implementation and design patterns inspired by the complex resonator model example (lmfit documentation).

__init__(*args, **kwargs)[source]
Parameters
  • independent_vars (list of str) – Arguments to the model function that are independent variables (default is ['x']).

  • prefix (str) – String to prepend to parameter names, needed to add two Models that have parameter names in common.

  • nan_policy – How to handle NaN and missing values in data. See Notes below.

  • **kwargs – Keyword arguments to pass to Model.

Notes

1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:

  • ‘raise’ : raise a ValueError (default)

  • ‘propagate’ : do nothing

  • ‘omit’ : drop missing data

guess(data, **kws)[source]

Guess starting values for the parameters of a model.

Parameters
  • data (ndarray) – Array of data (i.e., y-values) to use to guess parameter values.

  • x (ndarray) – Array of values for the independent variable (i.e., x-values).

  • **kws – Additional keyword arguments, passed to model function.

Return type

Parameters

Returns

  • params (Parameters) – Initial, guessed values for the parameters of a Model.

  • Changed in version 1.0.3 – Argument x is now explicitly required to estimate starting values.

cos_func(x, frequency, amplitude, offset, phase=0)[source]

An oscillating cosine function:

\(y = \mathrm{amplitude} \times \cos(2 \pi \times \mathrm{frequency} \times x + \mathrm{phase}) + \mathrm{offset}\)

Parameters
  • x (float) – The independent variable (time, for example)

  • frequency (float) – A generalized frequency (in units of inverse x)

  • amplitude (float) – Amplitude of the oscillation

  • offset (float) – Output signal vertical offset

  • phase (float (default: 0)) – Phase offset / rad

Return type

float

Returns

Output signal magnitude
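The formula above translates directly into code; a stdlib-only sketch matching the documented signature:

```python
import math

def cos_func(x, frequency, amplitude, offset, phase=0):
    """amplitude * cos(2*pi*frequency*x + phase) + offset"""
    return amplitude * math.cos(2 * math.pi * frequency * x + phase) + offset
```

At x = 0 with phase = 0 this reduces to amplitude + offset, which is why a Rabi fit with the phase fixed at pi starts at its minimum.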

exp_damp_osc_func(t, tau, n_factor, frequency, phase, amplitude, offset)[source]

A sinusoidal oscillation with an exponentially decaying envelope function:

\(y = \mathrm{amplitude} \times \exp\left(-(t/\tau)^\mathrm{n\_factor}\right)(\cos(2\pi\,\mathrm{frequency}\times t + \mathrm{phase}) + \mathrm{oscillation\_offset}) + \mathrm{exponential\_offset}\)

Parameters
  • t (float) – time

  • tau (float) – decay time

  • n_factor (float) – exponential decay factor

  • frequency (float) – frequency of the oscillation

  • phase (float) – phase of the oscillation

  • amplitude (float) – initial amplitude of the oscillation

  • oscillation_offset – vertical offset of the cosine oscillation relative to the exponential asymptote

  • exponential_offset – offset of the exponential asymptote

Returns

Output of decaying cosine function as a float

exp_decay_func(t, tau, amplitude, offset, n_factor)[source]

This is a general exponential decay function:

\(y = \mathrm{amplitude} \times \exp\left(-(t/\tau)^\mathrm{n\_factor}\right) + \mathrm{offset}\)

Parameters
  • t (float) – time

  • tau (float) – decay time

  • amplitude (float) – amplitude of the exponential decay

  • offset (float) – asymptote of the exponential decay, the value at t=infinity

  • n_factor (float) – exponential decay factor

Return type

float

Returns

Output of exponential function as a float
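This function, too, is a direct transcription of its formula; a stdlib-only sketch:

```python
import math

def exp_decay_func(t, tau, amplitude, offset, n_factor):
    """amplitude * exp(-(t/tau)**n_factor) + offset"""
    return amplitude * math.exp(-((t / tau) ** n_factor)) + offset
```

With n_factor = 1 this is a plain exponential decay; at t = 0 it equals amplitude + offset, and for t much larger than tau it approaches the offset asymptote.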

fft_freq_phase_guess(data, t)[source]

Guess for a cosine fit using FFT; only works for evenly spaced points.

Parameters
Return type

Tuple[float, float]

Returns

  • freq_guess – Guess for the frequency of the cosine function

  • ph_guess – Guess for the phase of the cosine function
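The technique can be sketched stand-alone: take the discrete Fourier transform of the mean-subtracted data, locate the dominant bin, and read the frequency and phase off that bin. This illustration uses a plain DFT in place of numpy's FFT (and assumes evenly spaced t, as documented); it is not the quantify_core source:

```python
import cmath
import math

def fft_freq_phase_guess(data, t):
    """Guess (frequency, phase) of a cosine from evenly spaced samples."""
    n = len(data)
    dt = t[1] - t[0]  # assumes even spacing
    mean = sum(data) / n
    centered = [y - mean for y in data]  # drop the DC component
    # Find the DFT bin with the largest magnitude (plain O(n^2) DFT).
    best_k, best_mag, best_coef = 1, -1.0, 0j
    for k in range(1, n // 2):
        coef = sum(
            centered[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n)
        )
        if abs(coef) > best_mag:
            best_k, best_mag, best_coef = k, abs(coef), coef
    freq_guess = best_k / (n * dt)
    ph_guess = cmath.phase(best_coef)  # phase convention: y = cos(2*pi*f*t + ph)
    return freq_guess, ph_guess
```

For a cosine containing a whole number of periods, the dominant bin's complex coefficient is (n/2)·e^{i·phase}, which is why its argument recovers the phase directly.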

get_guess_common_doc()[source]

Returns a common docstring to be used for the guess() method of custom fitting Model subclasses.

Return type

str

get_model_common_doc()[source]

Returns a common docstring to be used with custom fitting Models.

Return type

str

hanger_func_complex_SI(f, fr, Ql, Qe, A, theta, phi_v, phi_0, alpha=1)[source]

This is the complex function for a hanger (lambda/4 resonator).

Parameters
  • f (float) – frequency

  • fr (float) – resonance frequency

  • A (float) – background transmission amplitude

  • Ql (float) – loaded quality factor of the resonator

  • Qe (float) – magnitude of extrinsic quality factor Qe = |Q_extrinsic|

  • theta (float) – phase of extrinsic quality factor (in rad)

  • phi_v (float) – phase to account for propagation delay to sample

  • phi_0 (float) – phase to account for propagation delay from sample

  • alpha (float (default: 1)) – slope of signal around the resonance

Return type

complex

Returns

complex valued transmission

See eq. S4 from Bruno et al. (2015) ArXiv:1502.04082.

\[S_{21} = A \left(1+\alpha \frac{f-f_r}{f_r} \right) \left(1- \frac{\frac{Q_l}{|Q_e|}e^{i\theta} }{1+2iQ_l \frac{f-f_r}{f_r}} \right) e^{i (\phi_v f + \phi_0)}\]

The loaded and extrinsic quality factors are related to the internal and coupled Q according to:

\[\frac{1}{Q_l} = \frac{1}{Q_c}+\frac{1}{Q_i}\]

and

\[\frac{1}{Q_c} = \mathrm{Re}\left(\frac{1}{|Q_e|e^{-i\theta}}\right)\]
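Eq. S4 can be evaluated directly; the sketch below is a standalone transcription of the formula (not the library implementation) and checks the dip depth on resonance, where the detuning term vanishes and |S21| reduces to |1 - Ql/Qe| for theta = 0:

```python
import cmath

def hanger_s21_sketch(f, fr, Ql, Qe, A, theta, phi_v, phi_0, alpha=1.0):
    # Complex transmission of a lambda/4 hanger resonator (eq. S4,
    # Bruno et al. 2015), transcribed term by term.
    x = (f - fr) / fr
    return (
        A
        * (1 + alpha * x)
        * (1 - (Ql / Qe) * cmath.exp(1j * theta) / (1 + 2j * Ql * x))
        * cmath.exp(1j * (phi_v * f + phi_0))
    )

# Exactly on resonance (f == fr) the dip depth is set by Ql/Qe:
s21 = hanger_s21_sketch(f=6e9, fr=6e9, Ql=8000, Qe=10000, A=1.0,
                        theta=0.0, phi_v=0.0, phi_0=0.0)
# |S21| = |1 - 8000/10000| = 0.2
```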
mk_seealso(function_name, role='func', prefix='\\n\\n', module_location='.')[source]

Returns a sphinx seealso pointing to a function.

Intended to be used for building custom fitting model docstrings.

Parameters
  • function_name (str) – name of the function to point to

  • role (str (default: 'func')) – a sphinx role, e.g. "func"

  • prefix (str (default: '\n\n')) – string preceding the seealso

  • module_location (str (default: '.')) – can be used to indicate a function outside this module, e.g., my_module.submodule which contains the function.

Return type

str

Returns

resulting string
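The construction amounts to string formatting of a sphinx directive; a minimal sketch (the exact directive text produced by the library is an assumption):

```python
def mk_seealso_sketch(function_name, role="func", prefix="\n\n", module_location="."):
    # Builds a sphinx seealso directive pointing at the given function.
    return f"{prefix}.. seealso:: :{role}:`{module_location}{function_name}`\n"

doc_fragment = mk_seealso_sketch("cos_func")  # e.g. appended to a model docstring
```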

resonator_phase_guess(s21, freq)[source]

Guesses the phase velocity in resonator spectroscopy, based on the median of all the differences between consecutive phases.

Parameters
Return type

Tuple[float, float]

Returns

  • phi_0 – Guess for the phase offset

  • phi_v – Guess for the phase velocity
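The median-of-differences idea can be sketched as follows (a standalone illustration assuming phase steps small enough that no wrapping occurs between consecutive points; the helper name is hypothetical):

```python
import cmath
from statistics import median

def resonator_phase_guess_sketch(s21, freq):
    # Phase velocity ~ median phase step between consecutive points,
    # divided by the frequency spacing; phase offset extrapolated to f = 0.
    phases = [cmath.phase(s) for s in s21]
    dphi = [p2 - p1 for p1, p2 in zip(phases, phases[1:])]
    df = freq[1] - freq[0]
    phi_v = median(dphi) / df            # rad / Hz
    phi_0 = phases[0] - phi_v * freq[0]  # phase offset (modulo 2*pi)
    return phi_0, phi_v

# Synthetic data with a pure propagation delay of 1 ns of phase per Hz*1e9:
freq = [6e9 + i * 1e5 for i in range(50)]
s21 = [cmath.exp(1j * (1e-9 * f)) for f in freq]
phi_0, phi_v = resonator_phase_guess_sketch(s21, freq)  # phi_v recovers 1e-9 rad/Hz
```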

calibration

Module containing analysis utilities for calibration procedures.

In particular, manipulation of data and calibration points for qubit readout calibration.

has_calibration_points(s21, indices_state_0=(-2,), indices_state_1=(-1,))[source]

Attempts to determine if the provided complex S21 data has calibration points for the ground and first excited states of a qubit.

In this ideal scenario, if the datapoints indicated by the indices correspond to the calibration points, then these points will be located on the extremities of a “segment” on the IQ plane.

Three pieces of information are used to infer the presence of calibration points:

  • The angle of the calibration points with respect to the average of the datapoints,

  • The distance between the calibration points, and

  • The average distance to the line defined by the calibration points.

The detection is made robust by averaging 3 datapoints for each extremity of the “segment” described by the data on the IQ-plane.

Parameters
  • s21 (ndarray) – Array of complex datapoints corresponding to the experiment on the IQ plane.

  • indices_state_0 (tuple (default: (-2,))) – Indices in the s21 array that correspond to the ground state.

  • indices_state_1 (tuple (default: (-1,))) – Indices in the s21 array that correspond to the first excited state.

Return type

bool

Returns

The inferred presence of calibration points.

rotate_to_calibrated_axis(data, ref_val_0, ref_val_1)[source]

Rotates, normalizes and offsets complex valued data based on calibration points.

Parameters
  • data (ndarray) – An array of complex valued data points.

  • ref_val_0 (complex) – The reference value corresponding to the 0 state.

  • ref_val_1 (complex) – The reference value corresponding to the 1 state.

Return type

ndarray

Returns

Calibrated array of complex data points.
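The rotation/normalization can be sketched as mapping the two reference points onto 0 and 1 on the real axis (a standalone illustration; the assumption that the calibrated result is the real part of the transformed points is ours, not the library's documented behaviour):

```python
def rotate_to_calibrated_axis_sketch(data, ref_val_0, ref_val_1):
    # (d - ref_val_0) / (ref_val_1 - ref_val_0) rotates, scales, and offsets
    # the IQ data so the calibration points land on 0 and 1.
    return [((d - ref_val_0) / (ref_val_1 - ref_val_0)).real for d in data]

out = rotate_to_calibrated_axis_sketch(
    data=[1 + 1j, 2 + 3j, 3 + 5j], ref_val_0=1 + 1j, ref_val_1=3 + 5j
)
# The calibration points themselves land exactly on 0 and 1: [0.0, 0.5, 1.0]
```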

data

types

Module containing the core data concepts of quantify.

class TUID(value: str)[source]

A human readable unique identifier based on the timestamp. This class does not wrap the passed in object but simply verifies and returns it.

A tuid is a string formatted as YYYYmmDD-HHMMSS-sss-******. The tuid serves as a unique identifier for experiments in quantify.

See also

The handling module.

classmethod datetime(tuid)[source]
Returns

datetime object corresponding to the TUID

Return type

datetime

classmethod is_valid(tuid)[source]

Test if tuid is valid. A valid tuid is a string formatted as YYYYmmDD-HHMMSS-sss-******.

Parameters

tuid (str) – a tuid string

Returns

True if the string is a valid TUID.

Return type

bool

Raises

ValueError – Invalid format
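The YYYYmmDD-HHMMSS-sss-****** shape can be checked with a regex plus a real date parse; a standalone sketch (the library's validation may be stricter, and the assumption that the suffix is lowercase hex is ours):

```python
import re
from datetime import datetime

TUID_PATTERN = re.compile(r"^\d{8}-\d{6}-\d{3}-[a-f0-9]{6}$")

def is_valid_tuid_sketch(tuid):
    # Shape check: YYYYmmDD-HHMMSS-sss-******
    if not TUID_PATTERN.match(tuid):
        return False
    # The date/time part must also be a real timestamp.
    try:
        datetime.strptime(tuid[:15], "%Y%m%d-%H%M%S")
    except ValueError:
        return False
    return True

ok = is_valid_tuid_sketch("20211208-140505-716-7faa23")   # True
bad = is_valid_tuid_sketch("not-a-tuid")                  # False
```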

classmethod uuid(tuid)[source]
Returns

the uuid (universally unique identifier) component of the TUID, corresponding to the last 6 characters.

Return type

str

handling

Utilities for handling data.

create_exp_folder(tuid, name='', datadir=None)[source]

Creates an empty folder to store an experiment container.

If the folder already exists, simply returns the experiment folder corresponding to the TUID.

Parameters
  • tuid (TUID) – A timestamp based human-readable unique identifier.

  • name (str (default: '')) – Optional name to identify the folder.

  • datadir (Optional[str] (default: None)) – Path of the data directory. If None, uses get_datadir() to determine the data directory.

Returns

Full path of the experiment folder following format: /datadir/YYYYmmDD/YYYYmmDD-HHMMSS-sss-******-name/.

gen_tuid(time_stamp=None)[source]

Generates a TUID based on current time.

Parameters

time_stamp (Optional[datetime] (default: None)) – Optional, can be passed to ensure the tuid is based on a specific time.

Return type

TUID

Returns

Timestamp based uid.
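Generating such an identifier amounts to formatting a timestamp and appending a short random hex suffix; a standalone sketch of the format (not the library implementation):

```python
import re
from datetime import datetime
from uuid import uuid4

def gen_tuid_sketch(time_stamp=None):
    # Build a YYYYmmDD-HHMMSS-sss-****** identifier from a datetime.
    ts = time_stamp or datetime.now()
    # %f is microseconds; keep the first three digits as milliseconds.
    return f"{ts.strftime('%Y%m%d-%H%M%S')}-{ts.strftime('%f')[:3]}-{uuid4().hex[:6]}"

tuid = gen_tuid_sketch()
assert re.match(r"^\d{8}-\d{6}-\d{3}-[a-f0-9]{6}$", tuid)
```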

get_datadir()[source]

Returns the current data directory. The data directory can be changed using set_datadir().

Return type

str

Returns

The current data directory.

get_latest_tuid(contains='')[source]

Returns the most recent tuid.

Tip

This function is similar to get_tuids_containing() but is preferred if one is only interested in the most recent TUID for performance reasons.

Parameters

contains (str (default: '')) – An optional string contained in the experiment name.

Return type

TUID

Returns

The latest TUID.

Raises

FileNotFoundError – No data found.

get_tuids_containing(contains, t_start=None, t_stop=None, max_results=9223372036854775807, reverse=False)[source]

Returns a list of tuids containing a specific label.

Tip

If one is only interested in the most recent TUID, get_latest_tuid() is preferred for performance reasons.

Parameters
  • contains (str) – A string contained in the experiment name.

  • t_start (Union[datetime, str, None] (default: None)) – datetime to search from, inclusive. If a string is specified, it will be converted to a datetime object using parse. If no value is specified, will use the year 1 as a reference t_start.

  • t_stop (Union[datetime, str, None] (default: None)) – datetime to search until, exclusive. If a string is specified, it will be converted to a datetime object using parse. If no value is specified, will use the current time as a reference t_stop.

  • max_results (int (default: 9223372036854775807)) – Maximum number of results to return. Defaults to unlimited.

  • reverse (bool (default: False)) – If False, sorts tuids chronologically, if True sorts by most recent.

Returns

A list of TUID objects.

Return type

list

Raises

FileNotFoundError – No data found.

grow_dataset(dataset)[source]

Resizes the dataset by doubling the current length of all arrays.

Parameters

dataset (Dataset) – The dataset to resize.

Return type

Dataset

Returns

The resized dataset.

initialize_dataset(settable_pars, setpoints, gettable_pars)[source]

Initialize an empty dataset based on settable_pars, setpoints and gettable_pars.

Parameters
Returns

The dataset.

load_dataset(tuid, datadir=None, name='dataset.hdf5')[source]

Loads a dataset specified by a tuid.

Tip

This method also works when specifying only the first part of a TUID.

Note

This method uses load_dataset() to ensure the file is closed after loading as datasets are intended to be immutable after performing the initial experiment.

Parameters
  • tuid (TUID) – A TUID string. It is also possible to specify only the first part of a tuid.

  • datadir (Optional[str] (default: None)) – Path of the data directory. If None, uses get_datadir() to determine the data directory.

Return type

Dataset

Returns

The dataset.

Raises

FileNotFoundError – No data found for specified date.

load_dataset_from_path(path)[source]

Loads a Dataset with a specific engine preference.

Before returning the dataset AdapterH5NetCDF.recover() is applied.

This function tries to load the dataset until success with the following engine preference:

Parameters

path (Union[Path, str]) – Path to the dataset.

Return type

Dataset

Returns

The loaded dataset.

load_processed_dataset(tuid, analysis_name)[source]

Given an experiment TUID and the name of an analysis previously run on it, retrieves the processed dataset resulting from that analysis.

Parameters
  • tuid (TUID) – TUID of the experiment from which to load the data.

  • analysis_name (str) – Name of the Analysis from which to load the data.

Return type

Dataset

Returns

A dataset containing the results of the analysis.

load_quantities_of_interest(tuid, analysis_name)[source]

Given an experiment TUID and the name of an analysis previously run on it, retrieves the corresponding “quantities of interest” data.

Parameters
  • tuid (TUID) – TUID of the experiment.

  • analysis_name (str) – Name of the Analysis from which to load the data.

Return type

dict

Returns

A dictionary containing the loaded quantities of interest.

load_snapshot(tuid, datadir=None, file='snapshot.json')[source]

Loads a snapshot specified by a tuid.

Parameters
  • tuid (TUID) – A TUID string. It is also possible to specify only the first part of a tuid.

  • datadir (Optional[str] (default: None)) – Path of the data directory. If None, uses get_datadir() to determine the data directory.

  • file (str (default: 'snapshot.json')) – Filename to load.

Return type

dict

Returns

The snapshot.

Raises

FileNotFoundError – No data found for specified date.

locate_experiment_container(tuid, datadir=None)[source]

Returns the path to the experiment container of the specified tuid.

Parameters
  • tuid (TUID) – A TUID string. It is also possible to specify only the first part of a tuid.

  • datadir (Optional[str] (default: None)) – Path of the data directory. If None, uses get_datadir() to determine the data directory.

Return type

str

Returns

The path to the experiment container

Raises

FileNotFoundError – Experiment container not found.

set_datadir(datadir)[source]

Sets the data directory.

Parameters

datadir (str) – Path of the data directory. If set to None, resets the datadir to the default datadir (<top_level>/data).

Return type

None

snapshot(update=False, clean=True)[source]

State of all instruments setup as a JSON-compatible dictionary (everything that the custom JSON encoder class qcodes.utils.helpers.NumpyJSONEncoder supports).

Parameters
  • update (bool (default: False)) – If True, first gets all values before filling the snapshot.

  • clean (bool (default: True)) – If True, removes certain keys from the snapshot to create a more readable and compact snapshot.

Return type

dict

to_gridded_dataset(quantify_dataset, dimension='dim_0', coords_names=None)[source]

Converts a flattened (a.k.a. “stacked”) dataset as the one generated by the initialize_dataset() to a dataset in which the measured values are mapped onto a grid in the xarray format.

This will be meaningful only if the data itself corresponds to a gridded measurement.

Note

Each individual (x0[i], x1[i], x2[i], ...) setpoint must be unique.

Conversions applied:

  • The names "x0", "x1", ... will correspond to the names of the Dimensions.

  • The unique values for each of the x0, x1, ... Variables are converted to

    Coordinates.

  • The y0, y1, ... Variables are reshaped into a (multi-)dimensional grid

    and associated to the Coordinates.

Parameters
  • quantify_dataset (Dataset) – Input dataset in the format generated by the initialize_dataset.

  • dimension (str (default: 'dim_0')) – The flattened xarray Dimension.

  • coords_names (Optional[Iterable] (default: None)) – Optionally specify explicitly which Variables correspond to orthogonal coordinates, e.g. the dataset holds values for ("x0", "x1") but only "x0" is independent: to_gridded_dataset(dset, coords_names=["x0"]).

Return type

Dataset

Returns

The new dataset.

trim_dataset(dataset)[source]

Trim NaNs from a dataset, useful in the case of a dynamically resized dataset (e.g. adaptive loops).

Parameters

dataset (Dataset) – The dataset to trim.

Return type

Dataset

Returns

The dataset, trimmed and resized if necessary or unchanged.
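The trimming idea can be sketched on a plain list: a dynamically grown acquisition array carries a tail of NaN placeholders that never received data, and trimming drops that tail (standalone illustration; the real function operates on a whole xarray Dataset):

```python
import math

def trim_trailing_nans_sketch(values):
    # Walk back from the end past the unused NaN placeholders.
    end = len(values)
    while end > 0 and math.isnan(values[end - 1]):
        end -= 1
    return values[:end]

grown = [0.1, 0.2, 0.3, float("nan"), float("nan")]  # doubled, partly unused
trimmed = trim_trailing_nans_sketch(grown)  # [0.1, 0.2, 0.3]
```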

write_dataset(path, dataset)[source]

Writes a Dataset to a file with the h5netcdf engine.

Before writing the AdapterH5NetCDF.adapt() is applied.

To accommodate for complex-type numbers and arrays invalid_netcdf=True is used.

Parameters
Return type

None

dataset_adapters

Utilities for dataset (python object) handling.

class AdapterH5NetCDF[source]

Quantify dataset adapter for the h5netcdf engine.

It has the functionality of adapting the Quantify dataset to a format compatible with the h5netcdf xarray backend engine that is used to write and load the dataset to/from disk.

Warning

The h5netcdf engine has minor issues when performing a two-way trip of the dataset. The type of some attributes are not preserved. E.g., list- and tuple-like objects are loaded as numpy arrays of dtype=object.

classmethod adapt(dataset)[source]

Serializes to JSON the dataset and variables attributes.

To prevent the JSON serialization for specific items, their names should be listed under the attribute named json_serialize_exclude (for each attrs dictionary).

Parameters

dataset (Dataset) – Dataset that needs to be adapted.

Return type

Dataset

Returns

Dataset in which the attributes have been replaced with their JSON strings version.

static attrs_convert(attrs, inplace=False, vals_converter=<function dumps>)[source]

Converts to/from JSON string the values of the keys which are not listed in the json_serialize_exclude list.

Parameters
  • attrs – The input dictionary.

  • inplace – If True the values are replaced in place, otherwise a deepcopy of attrs is performed first.
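The conversion logic can be sketched on a plain dictionary (a standalone illustration under our own assumptions about how the exclusion list is honored, not the library implementation):

```python
import json
from copy import deepcopy

def attrs_convert_sketch(attrs, inplace=False, vals_converter=json.dumps):
    # Apply vals_converter to every value whose key is not listed in
    # the "json_serialize_exclude" list (the list itself is skipped too).
    exclude = attrs.get("json_serialize_exclude", [])
    out = attrs if inplace else deepcopy(attrs)
    for key, val in out.items():
        if key not in exclude and key != "json_serialize_exclude":
            out[key] = vals_converter(val)
    return out

attrs = {"coords": ["x0"], "raw": (1, 2), "json_serialize_exclude": ["raw"]}
converted = attrs_convert_sketch(attrs)
# "coords" becomes the JSON string '["x0"]'; "raw" is excluded and untouched.
```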

classmethod recover(dataset)[source]

Reverts the action of .adapt().

To prevent the JSON de-serialization for specific items, their names should be listed under the attribute named json_serialize_exclude (for each attrs dictionary).

Parameters

dataset (Dataset) – Dataset from which to recover the original format.

Return type

Dataset

Returns

Dataset in which the attributes have been replaced with their python objects version.

class DatasetAdapterBase[source]

A generic interface for a dataset adapter.

Note

It might be difficult to grasp the generic purpose of this class. See AdapterH5NetCDF for a specialized use case.

A dataset adapter is intended to “adapt”/”convert” a dataset to a format compatible with some other piece of software such as a function, interface, read/write back end, etc.. The main use case is to define the interface of the AdapterH5NetCDF that converts the Quantify dataset for loading and writing to/from disk.

Subclasses implementing this interface are intended to be a two-way bridge to some other object/interface/backend to which we refer to as the “Target” of the adapter.

The function .adapt() should return a dataset to be consumed by the Target.

The function .recover() should receive a dataset generated by the Target.

abstract classmethod adapt(dataset)[source]

Converts the dataset to a format consumed by the Target.

Return type

Dataset

abstract classmethod recover(dataset)[source]

Inverts the action of the .adapt() method.

Return type

Dataset

class DatasetAdapterIdentity[source]

A dataset adapter that does not modify the datasets in any way.

Intended to be used just as an object that respects the adapter interface defined by DatasetAdapterBase.

A particular use case is the backwards compatibility for loading and writing older versions of the Quantify dataset.

classmethod adapt(dataset)[source]
Return type

Dataset

Returns

Same dataset with no modifications.

classmethod recover(dataset)[source]
Return type

Dataset

Returns

Same dataset with no modifications.

dataset_attrs

Utilities for handling the attributes of xarray.Dataset and xarray.DataArray (python objects).

class QCoordAttrs(unit='', long_name='', is_main_coord=None, uniformly_spaced=None, is_dataset_ref=False, json_serialize_exclude=<factory>)[source]

A dataclass representing the attrs attribute of main and secondary coordinates.

All attributes are mandatory to be present but can be None.

Examples

from quantify_core.utilities import examples_support

examples_support.mk_main_coord_attrs()
{
    'unit': '',
    'long_name': '',
    'is_main_coord': True,
    'uniformly_spaced': True,
    'is_dataset_ref': False,
    'json_serialize_exclude': []
}
examples_support.mk_secondary_coord_attrs()
{
    'unit': '',
    'long_name': '',
    'is_main_coord': False,
    'uniformly_spaced': True,
    'is_dataset_ref': False,
    'json_serialize_exclude': []
}
is_dataset_ref: bool = False

Flags if it is an array of quantify_core.data.types.TUIDs of other datasets.

is_main_coord: bool = None

When set to True, flags the xarray coordinate to correspond to a main coordinate, otherwise (False) it corresponds to a secondary coordinate.

json_serialize_exclude: List[str] = ()

A list of strings corresponding to the names of other attributes that should not be json-serialized when writing the dataset to disk. Empty by default.

long_name: str = ''

A long name for this coordinate.

uniformly_spaced: Optional[bool] = None

Indicates if the values are uniformly spaced.

unit: str = ''

The units of the values.

class QDatasetAttrs(tuid=None, dataset_name='', dataset_state=None, timestamp_start=None, timestamp_end=None, quantify_dataset_version='2.0.0', software_versions=<factory>, relationships=<factory>, json_serialize_exclude=<factory>)[source]

A dataclass representing the attrs attribute of the Quantify dataset.

All attributes are mandatory to be present but can be None.

Example

import pendulum

from quantify_core.utilities import examples_support

examples_support.mk_dataset_attrs(
    dataset_name="Bias scan",
    timestamp_start=pendulum.now().to_iso8601_string(),
    timestamp_end=pendulum.now().add(minutes=2).to_iso8601_string(),
    dataset_state="done",
)
{
    'tuid': '20211208-140505-716-7faa23',
    'dataset_name': 'Bias scan',
    'dataset_state': 'done',
    'timestamp_start': '2021-12-08T14:05:05.716457+00:00',
    'timestamp_end': '2021-12-08T14:07:05.716508+00:00',
    'quantify_dataset_version': '2.0.0',
    'software_versions': {},
    'relationships': [],
    'json_serialize_exclude': []
}
dataset_name: str = ''

The dataset name, usually the same as the experiment name included in the name of the experiment container.

dataset_state: Literal[None, "running", "interrupted (safety)", "interrupted (forced)", "done"] = None

Denotes the last known state of the experiment/data acquisition that served to ‘build’ this dataset. Can be used later to filter ‘bad’ datasets.

json_serialize_exclude: List[str] = ()

A list of strings corresponding to the names of other attributes that should not be json-serialized when writing the dataset to disk. Empty by default.

quantify_dataset_version: str = '2.0.0'

A string identifying the version of this Quantify dataset for backwards compatibility.

relationships: List[quantify_core.data.dataset_attrs.QDatasetIntraRelationship] = ()

A list of relationships within the dataset specified as list of dictionaries that comply with the QDatasetIntraRelationship.

software_versions: Dict[str, str] = ()

A mapping of versions of other software packages that are relevant to log for this dataset. Another example is the git tag or hash of a commit of a lab repository.

Example

import pendulum

from quantify_core.utilities import examples_support

examples_support.mk_dataset_attrs(
    dataset_name="My experiment",
    timestamp_start=pendulum.now().to_iso8601_string(),
    timestamp_end=pendulum.now().add(minutes=2).to_iso8601_string(),
    software_versions={
        "lab_fridge_magnet_driver": "v1.4.2",  # software version/tag
        "my_lab_repo": "9d8acf63f48c469c1b9fa9f2c3cf230845f67b18",  # git commit hash
    },
)
{
    'tuid': '20211208-140505-733-15852c',
    'dataset_name': 'My experiment',
    'dataset_state': None,
    'timestamp_start': '2021-12-08T14:05:05.733033+00:00',
    'timestamp_end': '2021-12-08T14:07:05.733088+00:00',
    'quantify_dataset_version': '2.0.0',
    'software_versions': {
        'lab_fridge_magnet_driver': 'v1.4.2',
        'my_lab_repo': '9d8acf63f48c469c1b9fa9f2c3cf230845f67b18'
    },
    'relationships': [],
    'json_serialize_exclude': []
}
timestamp_end: Optional[str] = None

Human-readable timestamp (ISO8601) as returned by pendulum.now().to_iso8601_string() (docs). Specifies when the experiment/data acquisition ended.

timestamp_start: Optional[str] = None

Human-readable timestamp (ISO8601) as returned by pendulum.now().to_iso8601_string() (docs). Specifies when the experiment/data acquisition started.

tuid: Optional[str] = None

The time-based unique identifier of the dataset. See quantify_core.data.types.TUID.

class QDatasetIntraRelationship(item_name=None, relation_type=None, related_names=<factory>, relation_metadata=<factory>)[source]

A dataclass representing a dictionary that specifies a relationship between dataset variables.

A prominent example are calibration points contained within one variable or several variables that are necessary to interpret correctly the data of another variable.

Examples

This is how the attributes of a dataset containing a q0 main variable and q0_cal secondary variables would look like. The q0_cal corresponds to calibrations datapoints. See Quantify dataset - examples for examples with more context.

from quantify_core.data.dataset_attrs import QDatasetIntraRelationship
from quantify_core.utilities import examples_support

attrs = examples_support.mk_dataset_attrs(
    relationships=[
        QDatasetIntraRelationship(
            item_name="q0",
            relation_type="calibration",
            related_names=["q0_cal"],
        ).to_dict()
    ]
)
item_name: str = None

The name of the coordinate/variable to which we want to relate other coordinates/variables.

related_names: List[str] = ()

A list of names related to the item_name.

relation_metadata: Dict[str, Any] = ()

A free-form dictionary to store additional information relevant to this relationship.

relation_type: str = None

A string specifying the type of relationship.

Reserved relation types:

"calibration" - Specifies a list of main variables used as calibration data for the main variables whose name is specified by the item_name.

class QVarAttrs(unit='', long_name='', is_main_var=None, uniformly_spaced=None, grid=None, is_dataset_ref=False, has_repetitions=False, json_serialize_exclude=<factory>)[source]

A dataclass representing the attrs attribute of main and secondary variables.

All attributes are mandatory to be present but can be None.

Examples

from quantify_core.utilities import examples_support

examples_support.mk_main_var_attrs(coords=["time"])
{
    'unit': '',
    'long_name': '',
    'is_main_var': True,
    'uniformly_spaced': True,
    'grid': True,
    'is_dataset_ref': False,
    'has_repetitions': False,
    'json_serialize_exclude': [],
    'coords': ['time']
}
examples_support.mk_secondary_var_attrs(coords=["cal"])
{
    'unit': '',
    'long_name': '',
    'is_main_var': False,
    'uniformly_spaced': True,
    'grid': True,
    'is_dataset_ref': False,
    'has_repetitions': False,
    'json_serialize_exclude': [],
    'coords': ['cal']
}
grid: Optional[bool] = None

Indicates if the variable's data are located on a grid, which does not need to be uniformly spaced along all dimensions. In other words, specifies if the corresponding main coordinates are the ‘unrolled’ points (also known as ‘unstacked’) corresponding to a grid.

If True then it is possible to use quantify_core.data.handling.to_gridded_dataset() to convert the variables to a ‘stacked’ version.

has_repetitions: bool = False

Indicates that the outermost dimension of this variable is a repetitions dimension. This attribute is intended to allow easy programmatic detection of such dimension. It can be used, for example, to average along this dimension before an automatic live plotting or analysis.

is_dataset_ref: bool = False

Flags if it is an array of quantify_core.data.types.TUIDs of other datasets. See also Dataset for a “nested MeasurementControl” experiment.

is_main_var: bool = None

When set to True, flags this xarray data variable to correspond to a main variable, otherwise (False) it corresponds to a secondary variable.

json_serialize_exclude: List[str] = ()

A list of strings corresponding to the names of other attributes that should not be json-serialized when writing the dataset to disk. Empty by default.

long_name: str = ''

A long name for this coordinate.

uniformly_spaced: Optional[bool] = None

Indicates if the values are uniformly spaced. This does not apply to ‘true’ main variables but, because a MultiIndex is not supported yet by xarray when writing to disk, some coordinate variables have to be stored as main variables instead.

unit: str = ''

The units of the values.

get_main_coords(dataset)[source]

Finds the main coordinates in the dataset (except secondary coordinates).

Finds the xarray coordinates in the dataset that have their attributes is_main_coord set to True (inside the xarray.DataArray.attrs dictionary).

Parameters

dataset (Dataset) – The dataset to scan.

Return type

List[str]

Returns

The names of the main coordinates.

get_main_dims(dataset)[source]

Determines the ‘main’ dimensions in the dataset.

Each of the dimensions returned is the outermost dimension of a main coordinate/variable, OR the second one when a repetitions dimension is present (see has_repetitions).

These dimensions are detected based on is_main_coord and is_main_var attributes.

Warning

The dimensions listed in this list should be considered “incompatible” in the sense that the main coordinate/variables must lie on one and only one of such dimension.

Note

The dimensions, on which the secondary coordinates/variables lie, are not included in this list. See also get_secondary_dims().

Parameters

dataset (Dataset) – The dataset from which to extract the main dimensions.

Return type

List[str]

Returns

The names of the main dimensions in the dataset.

get_main_vars(dataset)[source]

Finds the main variables in the dataset (except secondary variables).

Finds the xarray data variables in the dataset that have their attributes is_main_var set to True (inside the xarray.DataArray.attrs dictionary).

Parameters

dataset (Dataset) – The dataset to scan.

Return type

List[str]

Returns

The names of the main variables.
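The attribute-based detection can be sketched on a plain {name: attrs} mapping (a standalone illustration of the selection logic; the real function scans xarray.DataArray.attrs):

```python
def get_main_vars_sketch(data_vars_attrs):
    # Keep the names whose attrs flag is_main_var is set to True.
    return [
        name
        for name, attrs in data_vars_attrs.items()
        if attrs.get("is_main_var") is True
    ]

attrs_by_var = {
    "y0": {"is_main_var": True},       # main variable
    "y0_cal": {"is_main_var": False},  # secondary (calibration) variable
}
main_vars = get_main_vars_sketch(attrs_by_var)  # ['y0']
```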

get_secondary_coords(dataset)[source]

Finds the secondary coordinates in the dataset.

Finds the xarray coordinates in the dataset that have their attributes is_main_coord set to False (inside the xarray.DataArray.attrs dictionary).

Parameters

dataset (Dataset) – The dataset to scan.

Return type

List[str]

Returns

The names of the secondary coordinates.

get_secondary_dims(dataset)[source]

Returns the ‘main’ secondary dimensions.

For details see get_main_dims(), is_main_var and is_main_coord.

Parameters

dataset (Dataset) – The dataset from which to extract the main dimensions.

Return type

List[str]

Returns

The names of the ‘main’ dimensions of secondary coordinates/variables in the dataset.

get_secondary_vars(dataset)[source]

Finds the secondary variables in the dataset.

Finds the xarray data variables in the dataset that have their attributes is_main_var set to False (inside the xarray.DataArray.attrs dictionary).

Parameters

dataset (Dataset) – The dataset to scan.

Return type

List[str]

Returns

The names of the secondary variables.

measurement

Import alias

Maps to

quantify_core.measurement.MeasurementControl

MeasurementControl

quantify_core.measurement.grid_setpoints

grid_setpoints

quantify_core.measurement.Gettable

Gettable

quantify_core.measurement.Settable

Settable

types

Module containing the core types for use with the MeasurementControl.

class Gettable(obj: Any)[source]

Defines the Gettable concept. This class does not wrap the passed in object but simply verifies and returns it. The concept is considered complete if the given type satisfies the following:

attributes

properties

  • name

identifier

oneOf

type

string

type

array

items

type

string

  • label

axis descriptor

oneOf

type

string

type

array

items

type

string

  • unit

unit of measurement

oneOf

type

string

type

array

items

type

string

  • batched

true if data is processed in batches, false otherwise

type

boolean

  • batch_size

When .batched=True, indicates the (maximum) size of the batch of datapoints that this gettable supports. The measurement loop will effectively use the min(settable(s).batch_size, gettable(s).batch_size).

type

integer

methods

properties

  • get

get values from this device

type

object

  • prepare

called before the acquisition loop

type

object

  • finish

called once after the acquisition loop

type

object

class Settable(obj: Any)[source]

Defines the Settable concept. This class does not wrap the passed in object but simply verifies and returns it. The concept is considered complete if the given type satisfies the following:

attributes

properties

  • name

identifier

type

string

  • label

axis descriptor

type

string

  • unit

unit of measurement

type

string

  • batched

true if data is processed in batches, false otherwise

type

boolean

  • batch_size

When .batched=True, indicates the (maximum) size of the batch of datapoints that this settable supports. The measurement loop will effectively use the min(settable(s).batch_size, gettable(s).batch_size).

type

integer

methods

properties

  • set

send data to this device

type

object

  • prepare

called before the acquisition loop

type

object

  • finish

called once after the acquisition loop

type

object

is_batched(obj)[source]
Returns

The .batched attribute of the settable/gettable obj, False if not present.

Return type

bool

is_object_or_function(checker, instance)[source]

Checks if an instance is an object or a function.

Returns

True if the instance is an object or a function, False otherwise.

Return type

bool

control

Module containing the MeasurementControl.

class MeasurementControl(name)[source]

Instrument responsible for controlling the data acquisition loop.

MeasurementControl (MC) is based on the notion that every experiment consists of the following steps:

  1. Set some parameter(s) (settable_pars)

  2. Measure some other parameter(s) (gettable_pars)

  3. Store the data.

Example

import numpy as np

meas_ctrl.settables(mw_source1.freq)
meas_ctrl.setpoints(np.arange(5e9, 5.2e9, 100e3))
meas_ctrl.gettables(pulsar_QRM.signal)
dataset = meas_ctrl.run(name='Frequency sweep')

MC exists to enforce structure on experiments. Enforcing this structure allows:

  • Standardization of data storage.

  • Providing basic real-time visualization.

MC imposes minimal constraints and allows:

  • Iterative loops, experiments in which setpoints are processed step by step.

  • Batched loops, experiments in which setpoints are processed in batches.

  • Adaptive loops, setpoints are determined based on measured values.

__init__(name)[source]

Creates an instance of the Measurement Control.

Parameters

name (str) – name of this instrument.

gettables(gettable_pars)[source]

Define the parameters to be acquired during the acquisition loop.

The Gettable helper class defines the requirements for a Gettable object.

Parameters

gettable_pars

parameter(s) to be acquired during the acquisition loop, accepts:
  • a list or tuple of multiple Gettable objects

  • a single Gettable object


print_progress(progress_message=None)[source]

Prints the provided progress_message or a default one, and calls the callback specified by on_progress_callback. Printing can be suppressed with .verbose(False).

run(name='', soft_avg=1, lazy_set=None)[source]

Starts a data acquisition loop.

Parameters
  • name (strstr (default: '')) – Name of the measurement. It is included in the name of the data files.

  • soft_avg (intint (default: 1)) – Number of software averages to be performed by the measurement control. E.g. if soft_avg=3 the full dataset will be measured 3 times and the measured values will be averaged element-wise, the averaged dataset is then returned.

  • lazy_set (Optional[bool] (default: None)) –

    If True and a setpoint equals the previous setpoint, the .set method of the settable will not be called for that iteration. If this argument is None, the .lazy_set() ManualParameter is used instead (which by default is False).

    Warning

    This feature is not available yet when running in batched mode.

Return type

Dataset

run_adaptive(name, params, lazy_set=None)[source]

Starts a data acquisition loop using an adaptive function.

Warning

The functionality of this mode can be complex - it is recommended to read the relevant long form documentation.

Parameters
  • name – Name of the measurement. This name is included in the name of the data files.

  • params – Key value parameters describe the adaptive function to use, and any further parameters for that function.

  • lazy_set (Optional[bool] (default: None)) – If True and a setpoint equals the previous setpoint, the .set method of the settable will not be called for that iteration. If this argument is None, the .lazy_set() ManualParameter is used instead (which by default is False).

Return type

Dataset

setpoints(setpoints)[source]

Set the setpoints that determine the values to be set in the acquisition loop.

Tip

Use column_stack() to reshape multiple 1D arrays when setting multiple settables.

Parameters

setpoints (ndarray) – An array that defines the values to loop over in the experiment. The shape of the array has to be either (N,) or (N,1) for a 1D loop; or (N, M) in the case of an MD loop.
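The tip above can be illustrated with plain numpy; the meas_ctrl calls are omitted and only the array shaping is shown:

```python
import numpy as np

# Two equal-length 1D sweeps, one per settable.
amps = np.linspace(0.0, 1.0, 5)
freqs = np.linspace(5e9, 5.2e9, 5)

# column_stack produces the (N, M) shape that setpoints() expects:
# one row per measurement point, one column per settable.
setpoints = np.column_stack((amps, freqs))
print(setpoints.shape)  # (5, 2)
```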

setpoints_grid(setpoints)[source]

Makes a grid from the provided setpoints assuming each array element corresponds to an orthogonal dimension. The resulting gridded points determine values to be set in the acquisition loop.

The gridding is such that the innermost loop corresponds to the batched settable with the smallest .batch_size.

Parameters

setpoints – The values to loop over in the experiment. The grid is reshaped in the same order.

settables(settable_pars)[source]

Define the settable parameters for the acquisition loop.

The Settable helper class defines the requirements for a Settable object.

Parameters

settable_pars – parameter(s) to be set during the acquisition loop, accepts a list or tuple of multiple Settable objects or a single Settable object.

show()[source]

Print short representation of the object to stdout.

instr_plotmon = InstrumentRefParameter( vals=vals.MultiType(vals.Strings(), vals.Enum(None)), instrument=self, name="instr_plotmon", )

Instrument responsible for live plotting. Can be set to None to disable live plotting.

instrument_monitor = InstrumentRefParameter( vals=vals.MultiType(vals.Strings(), vals.Enum(None)), instrument=self, name="instrument_monitor", )

Instrument responsible for live monitoring summarized snapshot. Can be set to None to disable monitoring of snapshot.

lazy_set = ManualParameter( vals=vals.Bool(), initial_value=False, name="lazy_set", instrument=self, )

If set to True, only set any settable if the setpoint differs from the previous setpoint. Note that this parameter is overridden by the lazy_set argument passed to the run() and run_adaptive() methods.

on_progress_callback = ManualParameter( vals=vals.Callable(), instrument=self, name="on_progress_callback", )

A callback to communicate progress. This should be a callable accepting floats between 0 and 100 indicating the percentage done.

update_interval = ManualParameter( initial_value=0.5, vals=vals.Numbers(min_value=0.1), instrument=self, name="update_interval", )

Interval for updates during the data acquisition loop. Every time more than update_interval time has elapsed while acquiring new data points, data is written to file (and the live monitoring detects updates).

verbose = ManualParameter( vals=vals.Bool(), initial_value=True, instrument=self, name="verbose", )

If set to True, prints to std_out during experiments.

grid_setpoints(setpoints, settables=None)[source]

Makes gridded setpoints. If settables is provided, the gridding is such that the inner most loop corresponds to the batched settable with the smallest .batch_size.

Warning

Using this method typecasts all values into the same type. This may lead to validator errors when setting e.g., a float instead of an int.

Parameters
  • setpoints (Iterable) – A list of arrays that defines the values to loop over in the experiment for each orthogonal dimension. The grid is reshaped in the same order.

  • settables (Optional[Iterable] (default: None)) – A list of settable objects to which the elements in setpoints correspond. Used to correctly grid data when mixing batched and iterative settables.

Returns

An array where the first numpy axis corresponds to individual setpoints.

Return type

ndarray
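Assuming only iterative settables, the gridding described above can be approximated with numpy. `grid_setpoints_sketch` is a hypothetical simplification, not the actual implementation:

```python
import numpy as np

def grid_setpoints_sketch(setpoints):
    # Build all combinations of the input arrays so that the first
    # array varies fastest (the innermost loop), mimicking the
    # documented gridding for iterative settables.
    grids = np.meshgrid(*setpoints, indexing="ij")
    # Fortran-order ravel makes the first dimension the inner loop.
    return np.column_stack([g.ravel(order="F") for g in grids])

x = np.array([0, 1, 2])
y = np.array([10, 20])
pts = grid_setpoints_sketch([x, y])
print(pts.shape)  # (6, 2) -> one row per gridded setpoint
```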

utilities

experiment_helpers

Helpers for performing experiments.

create_plotmon_from_historical(tuid=None, label='')[source]

Creates a plotmon using the dataset of the provided experiment denoted by the tuid in the datadir. Loads the data and draws any required figures.

NB Creating a new plotmon can be slow. Consider using PlotMonitor_pyqt.tuids_extra() to visualize datasets in the same plotmon.

Parameters
  • tuid (Optional[TUID] (default: None)) – the TUID of the experiment.

  • label (str (default: '')) – if the tuid is not provided, the label will be used to search for the latest dataset.

Return type

PlotMonitor_pyqt

Returns

the plotmon

load_settings_onto_instrument(instrument, tuid=None, datadir=None)[source]

Loads settings from a previous experiment onto a current Instrument. This information is loaded from the ‘snapshot.json’ file in the provided experiment directory.

Parameters
  • instrument (Instrument) – the instrument to be configured.

  • tuid (TUID) – the TUID of the experiment. If None use latest TUID.

  • datadir (str) – path of the data directory. If None, uses get_datadir() to determine the data directory.

Raises

ValueError – if the provided instrument has no match in the loaded snapshot.

Return type

None

dataset_examples

Factories of exemplary and mock datasets to be used for testing and documentation.

mk_2d_dataset_v1(num_amps=10, num_times=100)[source]

Generates a 2D Quantify dataset (v1).

Parameters
  • num_amps (int (default: 10)) – Number of x points.

  • num_times (int (default: 100)) – Number of y points.

mk_nested_mc_dataset(num_points=12, flux_bias_min_max=(-0.04, 0.04), resonator_freqs_min_max=(7000000000.0, 7300000000.0), qubit_freqs_min_max=(4500000000.0, 5000000000.0), t1_values_min_max=(2e-05, 5e-05), seed=112233)[source]

Generates a dataset with dataset references and several coordinates that serve to index the same variables.

Note that each value for resonator_freqs, qubit_freqs and t1_values would have been extracted from other datasets corresponding to individual experiments with their own dataset.

Parameters
  • num_points (int (default: 12)) – Number of datapoints to generate (used for all variables/coordinates).

  • flux_bias_min_max (tuple (default: (-0.04, 0.04))) – Range for mock values.

  • resonator_freqs_min_max (tuple (default: (7000000000.0, 7300000000.0))) – Range for mock values.

  • qubit_freqs_min_max (tuple (default: (4500000000.0, 5000000000.0))) – Range for mock values.

  • t1_values_min_max (tuple (default: (2e-05, 5e-05))) – Range for mock random values.

  • seed (Optional[int] (default: 112233)) – Random number generator seed passed to numpy.random.default_rng.

Return type

Dataset

mk_shots_from_probabilities(probabilities, **kwargs)[source]

Generates multiple shots for a list of probabilities assuming two states.

Parameters
Returns

Array containing the shots. Shape: (num_shots, len(probabilities)).
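A minimal sketch of the idea, simplified to binary 0/1 outcomes rather than the IQ values the real helper produces; `mk_shots_sketch` is a hypothetical name:

```python
import numpy as np

def mk_shots_sketch(probabilities, num_shots=100, seed=112233):
    # For each of the num_shots repetitions, draw one outcome per
    # probability: 1 (excited) with that probability, else 0 (ground).
    rng = np.random.default_rng(seed)
    probabilities = np.asarray(probabilities)
    return (rng.random((num_shots, probabilities.size)) < probabilities).astype(int)

shots = mk_shots_sketch([0.1, 0.5, 0.9], num_shots=200)
print(shots.shape)  # (200, 3): (num_shots, len(probabilities))
```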

mk_surface7_cyles_dataset(num_cycles=3, **kwargs)[source]

See also quantify_core.utilities.examples_support.mk_surface7_sched().

Parameters
Return type

Dataset

mk_t1_av_dataset(t1_times=None, probabilities=None, **kwargs)[source]

Generates a dataset with mock data of a T1 experiment for a single qubit.

Parameters
  • t1_times (Optional[ndarray] (default: None)) – Array with the T1 times corresponding to each probability in probabilities.

  • probabilities (Optional[ndarray] (default: None)) – The probabilities of finding the qubit in the excited state.

  • **kwargs – Keyword arguments passed to mk_iq_shots().

Return type

Dataset

mk_t1_av_with_cal_dataset(t1_times=None, probabilities=None, **kwargs)[source]

Generates a dataset with mock data of a T1 experiment for a single qubit including calibration points for the ground and excited states.

Parameters
  • t1_times (Optional[ndarray] (default: None)) – Array with the T1 times corresponding to each probability in probabilities.

  • probabilities (Optional[ndarray] (default: None)) – The probabilities of finding the qubit in the excited state.

  • **kwargs – Keyword arguments passed to mk_iq_shots().

Return type

Dataset

mk_t1_shots_dataset(t1_times=None, probabilities=None, **kwargs)[source]

Generates a dataset with mock data of a T1 experiment for a single qubit including calibration points for the ground and excited states, including all the individual shots (repeated qubit state measurement for the same exact experiment).

Parameters
  • t1_times (Optional[ndarray] (default: None)) – Array with the T1 times corresponding to each probability in probabilities.

  • probabilities (Optional[ndarray] (default: None)) – The probabilities of finding the qubit in the excited state.

  • **kwargs – Keyword arguments passed to mk_iq_shots().

Return type

Dataset

mk_t1_traces_dataset(t1_times=None, probabilities=None, **kwargs)[source]

Generates a dataset with mock data of a T1 experiment for a single qubit including calibration points for the ground and excited states, including all the individual shots (repeated qubit state measurement for the same exact experiment); and including all the signals that had to be digitized to obtain the rest of the data.

Parameters
  • t1_times (Optional[ndarray] (default: None)) – Array with the T1 times corresponding to each probability in probabilities.

  • probabilities (Optional[ndarray] (default: None)) – The probabilities of finding the qubit in the excited state.

  • **kwargs – Keyword arguments passed to mk_iq_shots().

Return type

Dataset

mk_two_qubit_chevron_data(rep_num=5, seed=112233)[source]

Generates data that look similar to a two-qubit Chevron experiment.

Parameters
  • rep_num (int (default: 5)) – The number of repetitions with noise to generate.

  • seed (Optional[int] (default: 112233)) – Random number generator seed passed to numpy.random.default_rng.

Returns

  • amp_values – Amplitude values.

  • time_values – Time values.

  • population_q0 – Q0 population values.

  • population_q1 – Q1 population values.

mk_two_qubit_chevron_dataset(**kwargs)[source]

Generates a dataset that looks similar to a two-qubit Chevron experiment.

Parameters

**kwargs – Keyword arguments passed to mk_two_qubit_chevron_data().

Return type

Dataset

Returns

A mock Quantify dataset.

examples_support

Utilities used for creating examples for docs/tutorials/tests.

default_datadir(verbose=True)[source]

Returns (and optionally prints) a default datadir path.

Intended for fast prototyping, tutorials, examples, etc.

Parameters

verbose (bool (default: True)) – If True prints the returned datadir.

Return type

Path

Returns

The Path.home() / "quantify-data" path.
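The documented behaviour corresponds to a few lines of pathlib; this is a sketch for illustration, not the actual implementation:

```python
from pathlib import Path

def default_datadir_sketch(verbose=True) -> Path:
    # A fixed directory under the user's home, optionally printed
    # for convenience, mirroring the documented behaviour.
    datadir = Path.home() / "quantify-data"
    if verbose:
        print(f"Data will be saved in:\n{datadir}")
    return datadir
```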

mk_cosine_instrument()[source]

A container of parameters (mock instrument) providing a cosine model.

Return type

Instrument

mk_dataset_attrs(tuid=<function gen_tuid>, **kwargs)[source]

A factory of attributes for Quantify dataset.

See QDatasetAttrs for details.

Parameters
  • tuid (Union[TUID, Callable[[], TUID]] (default: <function gen_tuid>)) – If no tuid is provided a new one will be generated. See also tuid.

  • **kwargs – Any other items used to update the output dictionary.

Return type

Dict[str, Any]

mk_iq_shots(num_shots=128, sigmas=(0.1, 0.1), centers=(-0.2 + 0.65j, 0.7 + 4j), probabilities=(0.4, 0.6), seed=112233)[source]

Generates clusters of (I + 1j*Q) points with a Gaussian distribution with the specified sigmas and centers, according to the probabilities of each cluster.

Parameters
  • num_shots (int (default: 128)) – The number of shots to generate.

  • sigmas (Union[Tuple[float], ndarray] (default: (0.1, 0.1))) – The sigma of the Gaussian distribution used for both real and imaginary parts.

  • centers (Union[Tuple[complex], ndarray] (default: ((-0.2+0.65j), (0.7+4j)))) – The center of each cluster on the complex plane.

  • probabilities (Union[Tuple[float], ndarray] (default: (0.4, 0.6))) – The probabilities of each cluster being randomly selected for each shot.

  • seed (Optional[int] (default: 112233)) – Random number generator seed passed to numpy.random.default_rng.

Return type

ndarray
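The documented sampling scheme can be sketched with numpy; `mk_iq_shots_sketch` is a hypothetical simplification, not the actual implementation:

```python
import numpy as np

def mk_iq_shots_sketch(num_shots=128, sigmas=(0.1, 0.1),
                       centers=(-0.2 + 0.65j, 0.7 + 4j),
                       probabilities=(0.4, 0.6), seed=112233):
    # Pick one cluster per shot according to `probabilities`, then add
    # Gaussian noise to both quadratures around that cluster's center.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(centers), size=num_shots, p=probabilities)
    shot_centers = np.asarray(centers)[idx]
    shot_sigmas = np.asarray(sigmas)[idx]
    noise = rng.normal(0, shot_sigmas) + 1j * rng.normal(0, shot_sigmas)
    return shot_centers + noise

shots = mk_iq_shots_sketch(num_shots=64)
print(shots.shape)  # (64,)
```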

mk_main_coord_attrs(uniformly_spaced=True, is_main_coord=True, **kwargs)[source]

A factory of attributes for main coordinates.

See QCoordAttrs for details.

Parameters
Return type

Dict[str, Any]

mk_main_var_attrs(grid=True, uniformly_spaced=True, is_main_var=True, has_repetitions=False, **kwargs)[source]

A factory of attributes for main variables.

See QVarAttrs for details.

Parameters
Return type

Dict[str, Any]

mk_secondary_coord_attrs(uniformly_spaced=True, is_main_coord=False, **kwargs)[source]

A factory of attributes for secondary coordinates.

See QCoordAttrs for details.

Parameters
Return type

Dict[str, Any]

mk_secondary_var_attrs(grid=True, uniformly_spaced=True, is_main_var=False, has_repetitions=False, **kwargs)[source]

A factory of attributes for secondary variables.

See QVarAttrs for details.

Parameters
Return type

Dict[str, Any]

mk_surface7_sched(num_cycles=3)[source]

Generates a schedule with some of the features of a Surface 7 experiment as portrayed in Fig. 4b of [Marques et al., 2021].

Parameters

num_cycles (int (default: 3)) – The number of times to repeat the main cycle.

Returns

A schedule similar to a Surface 7 dance.

mk_trace_for_iq_shot(iq_point, time_values=array([0.00e+00, 1.00e-09, ..., 2.99e-07]), intermediate_freq=50000000.0)[source]

Generates mock “traces” that a physical instrument would digitize for the readout of a transmon qubit when using a down-converting IQ mixer.

Parameters
  • iq_point (complex) – A complex number representing a point on the IQ-plane.

  • time_values (ndarray (default: array([0.00e+00, 1.00e-09, ..., 2.99e-07]))) – The time instants at which the mock intermediate-frequency signal is sampled.

  • intermediate_freq (float (default: 50000000.0)) – The intermediate frequency used in the down-conversion scheme.

Return type

ndarray

Returns

An array of complex numbers.
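The down-conversion idea can be sketched as the IQ point modulating a complex tone at the intermediate frequency; `mk_trace_sketch` is a hypothetical simplification, not the actual implementation:

```python
import numpy as np

def mk_trace_sketch(iq_point, time_values, intermediate_freq=50e6):
    # The complex IQ point sets the amplitude and phase of a tone
    # oscillating at the intermediate frequency.
    return iq_point * np.exp(2j * np.pi * intermediate_freq * time_values)

time_values = np.arange(300) * 1e-9  # 300 ns trace sampled at 1 GSa/s
trace = mk_trace_sketch(0.7 + 4j, time_values)
print(trace.shape)  # (300,)
```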

mk_trace_time(sampling_rate=1000000000.0, duration=3e-07)[source]

Generates an array in which the entries correspond to time instants up to duration seconds, sampled according to sampling_rate in Hz.

See mk_trace_for_iq_shot() for a usage example.

Parameters
  • sampling_rate (float (default: 1000000000.0)) – The sampling rate in Hz.

  • duration (float (default: 3e-07)) – Total duration in seconds.

Return type

ndarray

Returns

An array with the time instants.
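A sketch of the documented sampling; `mk_trace_time_sketch` is a hypothetical stand-in, not the actual implementation:

```python
import numpy as np

def mk_trace_time_sketch(sampling_rate=1e9, duration=3e-7):
    # Sample instants spaced by 1/sampling_rate covering `duration`.
    num_samples = round(duration * sampling_rate)
    return np.arange(num_samples) / sampling_rate

t = mk_trace_time_sketch()
print(len(t))  # 300 samples for the default 300 ns at 1 GSa/s
```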

round_trip_dataset(dataset)[source]

Writes a dataset to disk, loads it back, and returns it.

Return type

Dataset

visualization

The visualization module contains tools for real-time visualization as well as utilities to help in plotting.

Import alias

Maps to

quantify_core.visualization.InstrumentMonitor

InstrumentMonitor

quantify_core.visualization.PlotMonitor_pyqt

PlotMonitor_pyqt

instrument_monitor

Module containing the pyqtgraph based plotting monitor.

class InstrumentMonitor(name, window_size=(600, 600), remote=True)[source]

Creates a pyqtgraph widget that displays the instrument monitor window.

Example

from quantify_core.measurement import MeasurementControl
from quantify_core.visualization import InstrumentMonitor

meas_ctrl = MeasurementControl("meas_ctrl")
instrument_monitor = InstrumentMonitor("instrument_monitor")
meas_ctrl.instrument_monitor(instrument_monitor.name)
# Set True if you want to query the instruments about each parameter
# before updating the window. Can be slow due to communication overhead.
instrument_monitor.update_snapshot(False)
instrument_monitor.update(force=True)
__init__(name, window_size=(600, 600), remote=True)[source]

Initializes the pyqtgraph window.

Parameters
close()[source]

(Modified from Instrument class)

Irreversibly stop this instrument and free its resources.

Subclasses should override this if they have other specific resources to close.

Return type

None

create_widget(window_size=(1000, 600))[source]

Creates and stores an instance of the quantify_core.visualization.ins_mon_widget.qc_snapshot_widget.QcSnapshotWidget class during startup, and creates the snapshot tree to display within the remote widget window.

Parameters

window_size (tuple (default: (1000, 600))) – The size of the InstrumentMonitor window in px.

setGeometry(x, y, w, h)[source]

Set the geometry of the main widget window.

Parameters
  • x (int) – Horizontal position of the top-left corner of the window.

  • y (int) – Vertical position of the top-left corner of the window.

  • w (int) – Width of the window.

  • h (int) – Height of the window.

update(force=False)[source]

Updates the Qc widget with the current snapshot of the instruments. This function is also called within the class MeasurementControl in the function MeasurementControl.run().

Parameters

force (bool (default: False)) – Forces an update, ignoring the update_interval.

Return type

None

update_interval = ManualParameter( unit="s", vals=vals.Numbers(min_value=0.001), initial_value=5, name="update_interval", instrument=self, )

Only update the window if this amount of time has passed since the last update.

update_snapshot = ManualParameter( initial_value=False, vals=vals.Bool(), name="update_snapshot", instrument=self, )

Set to True in order to query the instruments about each parameter before updating the window. Can be slow due to communication overhead.

pyqt_plotmon

Module containing the pyqtgraph based plotting monitor.

class PlotMonitor_pyqt(name)[source]

Pyqtgraph based plot monitor instrument.

A plot monitor is intended to provide a real-time visualization of a dataset.

Interactions with this virtual instrument are practically instantaneous. All the heavier computations and plotting happen in a separate QtProcess.

__init__(name)[source]

Creates an instance of the plot monitor.

Parameters

name (str) – Name of this instrument instance.

close()[source]

(Modified from Instrument class)

Irreversibly stop this instrument and free its resources.

Subclasses should override this if they have other specific resources to close.

Return type

None

create_plot_monitor()[source]

Creates the PyQtGraph plotting monitors. Can also be used to recreate these when plotting has crashed.

setGeometry_main(x, y, w, h)[source]

Set the geometry of the main plotmon

Parameters
  • x (int) – Horizontal position of the top-left corner of the window.

  • y (int) – Vertical position of the top-left corner of the window.

  • w (int) – Width of the window.

  • h (int) – Height of the window.

setGeometry_secondary(x, y, w, h)[source]

Set the geometry of the secondary plotmon

Parameters
  • x (int) – Horizontal position of the top-left corner of the window.

  • y (int) – Vertical position of the top-left corner of the window.

  • w (int) – Width of the window.

  • h (int) – Height of the window.

tuids_append(tuid=None)[source]

Appends a tuid to tuids and also discards older datasets according to tuids_max_num.

The corresponding data will be plotted in the main window with blue circles.

NB: do not call before the corresponding dataset file has been created and filled with data.

update(tuid=None)[source]

Updates the curves/heatmaps of a specific dataset.

If the dataset is not specified the latest dataset in tuids is used.

If tuids is empty and tuid is provided then tuids_append(tuid) will be called. NB: this is intended mainly for MC to avoid issues when the file was not yet created or is empty.

main_QtPlot = QtPlotObjForJupyter(self._remote_plotmon, "main_QtPlot")

Retrieves the image of the main window when used as the final statement in a cell of a Jupyter-like notebook.

secondary_QtPlot = QtPlotObjForJupyter( self._remote_plotmon, "secondary_QtPlot" )

Retrieves the image of the secondary window when used as the final statement in a cell of a Jupyter-like notebook.

tuids = Parameter( initial_cache_value=[], vals=vals.Lists(elt_validator=vals.Strings()), get_cmd=self._get_tuids, set_cmd=self._set_tuids, name="tuids", instrument=self, )

The tuids of the auto-accumulated previous datasets when specified through tuids_append. Can be set to a list ['tuid_one', 'tuid_two', ...]. Can be reset by setting to []. See also tuids_extra.

tuids_extra = Parameter( initial_cache_value=[], vals=vals.Lists(elt_validator=vals.Strings()), set_cmd=self._set_tuids_extra, get_cmd=self._get_tuids_extra, name="tuids_extra", instrument=self, )

Extra tuids whose datasets are never affected by tuids_append or tuids_max_num. As opposed to the tuids, these ones never vanish. Can be reset by setting to []. Intended to perform realtime measurements and have a live comparison with previously measured datasets.

tuids_max_num = Parameter( vals=vals.Ints(min_value=1, max_value=100), set_cmd=self._set_tuids_max_num, get_cmd=self._get_tuids_max_num, initial_cache_value=3, name="tuids_max_num", instrument=self, )

The maximum number of auto-accumulated datasets in tuids. Older datasets are discarded when tuids_append is called [directly or from update()].

class QtPlotObjForJupyter(remote_plotmon, attr_name)[source]

A wrapper to be able to display a QtPlot window in Jupyter notebooks

color_utilities

Module containing utilities for color manipulation

set_hlsa(color, h=None, l=None, s=None, a=None, to_hex=False)[source]

Accepts a matplotlib color specification and returns an RGB color with the specified HLS values plus an optional alpha.

Return type

tuple

mpl_plotting

Module containing matplotlib and xarray plotting utilities.

Naming convention: plotting functions that require Xarray object(s) as inputs are named plot_xr_....

flex_colormesh_plot_vs_xy(xvals, yvals, zvals, ax=None, normalize=False, log=False, cmap='viridis', vlim=(None, None), transpose=False)[source]

Add a rectangular block to a color plot using pcolormesh().

Parameters
  • xvals (ndarrayndarray) – Length N array corresponding to settable x0.

  • yvals (ndarrayndarray) – Length M array corresponding to settable x1.

  • zvals (ndarrayndarray) – M*N array corresponding to gettable yi.

  • ax (Axes | NoneOptional[Axes] (default: None)) – Axis to which to add the colormesh.

  • normalize (boolbool (default: False)) – If True, normalizes each row of data.

  • log (boolbool (default: False)) – if True, uses a logarithmic colorscale.

  • cmap (strstr (default: 'viridis')) – Colormap to use. See matplotlib docs for choosing an appropriate colormap.

  • vlim (listlist (default: (None, None))) – Limits of the z-axis.

  • transpose (boolbool (default: False)) – If True transposes the figure.

Return type

QuadMesh

Returns

The created matplotlib QuadMesh.

Warning

The grid orientation of zvals is the same as used by pcolormesh(): the column index corresponds to the x-coordinate and the row index to the y-coordinate. This can be counter-intuitive: zvals[y_idx, x_idx], and can be inconsistent with some arrays of zvals (such as a 2D histogram from numpy).
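The orientation convention can be verified with plain numpy (a standalone sketch, not using quantify_core itself): for pcolormesh-style data, zvals has shape (M, N) with N = len(xvals) and M = len(yvals), i.e. zvals[y_idx, x_idx].

```python
import numpy as np

xvals = np.linspace(0.0, 1.0, 5)  # length N = 5, settable x0
yvals = np.linspace(0.0, 2.0, 3)  # length M = 3, settable x1
zvals = np.outer(yvals, xvals)    # shape (M, N): rows index y, columns index x
assert zvals.shape == (len(yvals), len(xvals))

# By contrast, np.histogram2d returns shape (x_bins, y_bins), so such an
# array must be transposed before being used as zvals with pcolormesh().
counts, _, _ = np.histogram2d(np.random.rand(100), np.random.rand(100),
                              bins=(len(xvals), len(yvals)))
```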

get_unit_from_attrs(data_array, str_format=' [{}]')[source]

Extracts and formats the unit/units from an xarray.DataArray attribute.

Parameters
  • data_array (DataArray) – Xarray array (coordinate or variable).

  • str_format (str (default: ' [{}]')) – String that will be formatted if a unit is found.

Return type

str

Returns

str_format formatted with data_array.unit or data_array.units, in that order of precedence. An empty string is returned if neither attribute is present.
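The described precedence can be sketched without xarray, using a plain attrs dictionary (an illustrative reimplementation, not the library code):

```python
def format_unit_from_attrs(attrs, str_format=" [{}]"):
    """Format attrs["unit"] or attrs["units"], in that order of precedence;
    return an empty string if neither key is present (illustrative sketch)."""
    for key in ("unit", "units"):
        if key in attrs:
            return str_format.format(attrs[key])
    return ""

# e.g. format_unit_from_attrs({"units": "V"}) -> " [V]"
```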

plot_2d_grid(x, y, z, xlabel, xunit, ylabel, yunit, zlabel, zunit, ax, cax=None, add_cbar=True, title=None, normalize=False, log=False, cmap='viridis', vlim=(None, None), transpose=False)[source]

Creates a heatmap of x, y, z data acquired on a grid; expects three “columns” of data of equal length.

Parameters
  • x – Length N array corresponding to x values.

  • y – Length N array corresponding to y values.

  • z – Length N array corresponding to gettable z values.

  • xlabel (str) – x label to add to the heatmap.

  • ylabel (str) – y label to add to the heatmap.

  • xunit (str) – x unit used in unit-aware axis labels.

  • yunit (str) – y unit used in unit-aware axis labels.

  • zlabel (str) – Label used for the colorbar.

  • ax (Axes) – Axis to which to add the colormesh.

  • cax (Optional[Axes] (default: None)) – Axis on which to add the colorbar; if set to None, a new axis is created.

  • add_cbar (bool (default: True)) – If True, adds a colorbar.

  • title (Optional[str] (default: None)) – Text to add as title to the axis.

  • normalize (bool (default: False)) – If True, normalizes each row of data.

  • log (bool (default: False)) – If True, uses a logarithmic colorscale.

  • cmap (str (default: 'viridis')) – Colormap to use. See matplotlib docs for choosing an appropriate colormap.

  • vlim (list (default: (None, None))) – Limits of the z-axis.

  • transpose (bool (default: False)) – If True, transposes the figure.

Return type

Tuple[QuadMesh, Colorbar]

Returns

The new matplotlib QuadMesh and Colorbar.

plot_complex_points(points, colors=None, labels=None, markers=None, legend=True, ax=None, **kwargs)[source]

Plots complex points with (by default) different colors and markers on the imaginary plane using matplotlib.axes.Axes.plot().

Intended for a small number of points.

Example

from quantify_core.utilities.examples_support import plot_centroids

_ = plot_centroids([1 + 1j, -1.5 - 2j])

Parameters
Return type

Tuple[Figure, Axes]

plot_fit(ax, fit_res, plot_init=True, plot_numpoints=1000, range_casting='real')[source]

Plot a fit of an lmfit model with a real domain.

Parameters
  • ax – axis on which to plot the fit.

  • fit_res – an lmfit fit results object.

  • plot_init (bool (default: True)) – If True, plot the initial guess of the fit.

  • plot_numpoints (int (default: 1000)) – The number of points used to evaluate the fit.

  • range_casting (Literal['abs', 'angle', 'real', 'imag'] (default: 'real')) – How to plot fit functions that have a complex range. Casting of values happens using absolute, angle, real, and imag. Angle is in degrees.

Return type

None

plot_fit_complex_plane(ax, fit_res, plot_init=True, plot_numpoints=1000)[source]

Plot a fit of an lmfit model with a real domain in the complex plane.

Return type

None

plot_textbox(ax, text, **kw)[source]

Plot a textbox with sensible defaults using text.

Parameters
Return type

Text

Returns

The new text object.

plot_xr_complex(var, marker_scatter='o', label_real='Real', label_imag='Imag', cmap='viridis', c=None, kwargs_line=None, kwargs_scatter=None, title='{} [{}]; shape = {}', legend=True, ax=None)[source]

Plots the real and imaginary parts of complex data. Points are colored by default according to their order in the array.

Parameters
Return type

Tuple[Figure, Axes]

plot_xr_complex_on_plane(var, marker='o', label='Data on imaginary plane', cmap='viridis', c=None, xlabel='Real{}{}{}', ylabel='Imag{}{}{}', legend=True, ax=None, **kwargs)[source]

Plots complex data on the imaginary plane. Points are colored by default according to their order in the array.

Parameters
  • var (DataArray) – 1D array of complex data.

  • marker (str (default: 'o')) – Marker used for the scatter plot.

  • label (str (default: 'Data on imaginary plane')) – Data label for the legend.

  • cmap (str (default: 'viridis')) – The colormap to use for coloring the points.

  • c (Optional[ndarray] (default: None)) – Color of the points. Defaults to an array of integers.

  • xlabel (str (default: 'Real{}{}{}')) – Label of the x axis.

  • ylabel (str (default: 'Imag{}{}{}')) – Label of the y axis.

  • legend (bool (default: True)) – Calls legend() if True.

  • ax (Optional[object] (default: None)) – The matplotlib axes. If None, a new axes (and figure) is created.

Return type

Tuple[Figure, Axes]

set_cyclic_colormap(image_or_collection, shifted=False, unit='deg', clim=None)[source]

Sets a cyclic colormap on a matplotlib 2D color plot if cyclic units are detected.

Parameters
Return type

None

set_suptitle_from_dataset(fig, dataset, prefix='')[source]

Sets the suptitle of a matplotlib figure based on

  • (optional) prefix;

  • dataset.name;

  • dataset.tuid.

Intended for tagging figures with the unique ID of the original dataset.

Parameters
  • prefix (str (default: '')) – Optional string to prepend, e.g., x0-y0.

  • fig (Figure) – The matplotlib figure.

  • dataset (Dataset) – A dataset expected to have .name and .tuid attributes.

Return type

None

plot_interpolation

Plot interpolations.

interpolate_heatmap(x, y, z, n=None, interp_method='linear')[source]

The output of this method can directly be used for plt.imshow(z_grid, extent=extent, aspect='auto'), where the extent is determined by the min and max of x_grid and y_grid.

The output can also be used as input for ax.pcolormesh(x, y, Z, **kw).

Parameters
  • x (numpy.ndarray) – x data points

  • y (numpy.ndarray) – y data points

  • z (numpy.ndarray) – z data points

  • n (Optional[int] (default: None)) – Number of points for each dimension of the interpolated grid; if set to None, the number of points needed is determined automatically.

  • interp_method (Literal['linear', 'nearest', 'deg'] (default: 'linear')) – Determines which interpolation method is used.

Returns

  • x_grid (numpy.ndarray) – N*1 array of x-values of the interpolated grid

  • y_grid (numpy.ndarray) – N*1 array of y-values of the interpolated grid

  • z_grid (numpy.ndarray) – N*N array of z-values that form a grid.
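A numpy-only sketch of producing such a grid with nearest-neighbour lookup (the library's 'linear' mode performs proper interpolation; the function name here is illustrative):

```python
import numpy as np

def nearest_grid(x, y, z, n=10):
    """Map scattered (x, y, z) points onto an n*n grid by
    nearest-neighbour lookup (illustrative sketch)."""
    x_grid = np.linspace(x.min(), x.max(), n)
    y_grid = np.linspace(y.min(), y.max(), n)
    xx, yy = np.meshgrid(x_grid, y_grid)
    # For each grid point, take z of the closest scattered point.
    dist = (xx[..., None] - x) ** 2 + (yy[..., None] - y) ** 2
    z_grid = z[np.argmin(dist, axis=-1)]
    return x_grid, y_grid, z_grid

rng = np.random.default_rng(0)
x, y = rng.random(50), rng.random(50)
z = np.sin(x) * np.cos(y)
x_grid, y_grid, z_grid = nearest_grid(x, y, z, n=8)
# z_grid has shape (8, 8) and can be passed to plt.imshow or ax.pcolormesh.
```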

SI Utilities

Utilities for managing SI units with plotting systems.

SI_prefix_and_scale_factor(val, unit=None)[source]

Takes in a value and unit and if applicable returns the proper scale factor and SI prefix.

If the unit is None, no scaling is done. If the unit is “SI_PREFIX_ONLY”, the value is scaled and an SI prefix is applied without a base unit.

Parameters
  • val (float) – the value

  • unit (str) – the unit of the value

Returns

  • scale_factor (float) – scale_factor needed to convert value

  • scaled_unit (str) – unit including the prefix
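The prefix selection can be sketched as follows (an illustrative reimplementation covering a few common prefixes; the library's own version also handles non-SI units and "SI_PREFIX_ONLY"):

```python
import math

# Illustrative sketch of SI prefix selection based on the value's exponent.
_PREFIXES = {-9: "n", -6: "µ", -3: "m", 0: "", 3: "k", 6: "M", 9: "G"}

def si_prefix_and_scale(val, unit):
    """Return (scale_factor, scaled_unit) so that val * scale_factor is O(1)."""
    if val == 0 or unit is None:
        return 1.0, unit
    exp3 = int(math.floor(math.log10(abs(val)) / 3)) * 3
    exp3 = max(min(exp3, 9), -9)  # clamp to the prefixes defined above
    return 10.0 ** -exp3, _PREFIXES[exp3] + unit

# e.g. si_prefix_and_scale(2e-6, "V") -> (1e6, "µV")
```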

SI_val_to_msg_str(val, unit=None, return_type=<class 'str'>)[source]

Takes in a value with an optional unit and returns a string tuple consisting of (value_str, unit), where the value and unit are rescaled according to SI prefixes, if the unit is an SI unit (according to the comprehensive list of SI units in this file).

The value_str is of the type specified in return_type (str by default).

adjust_axeslabels_SI(ax)[source]

Auto adjust the labels of a plot generated by xarray to SI-unit aware labels.

Return type

None

format_value_string(par_name, parameter, end_char='', unit=None)[source]

Format an lmfit parameter or uncertainties ufloat to a string of value with uncertainty.

If there is no stderr, use 5 significant figures. If there is a standard error use a precision one order of magnitude more precise than the size of the error and display the stderr itself to two significant figures in standard index notation in the same units as the value.

Parameters
  • par_name (str) – The name of the parameter to use in the string.

  • parameter (lmfit.parameter.Parameter, uncertainties.core.Variable or float) – A Parameter object or an object, e.g., returned by uncertainties.ufloat(). The value and stderr of this parameter will be used. If a float is given, the stderr is taken to be None.

  • end_char – A character that will be put at the end of the line.

  • unit – A unit. If this is an SI unit, it will be used to automatically determine a prefix for the unit and rescale accordingly.

Return type

str

Returns

The parameter and its error formatted as a string

set_cbarlabel(cbar, label, unit=None, **kw)[source]

Add a unit-aware z-label to a colorbar object.

Parameters
  • cbar – colorbar object to set label on

  • label – the desired label

  • unit – the unit

  • **kw – keyword argument to be passed to cbar.set_label

set_xlabel(axis, label, unit=None, **kw)[source]

Add a unit-aware x-label to an axis object.

Parameters
  • axis – matplotlib axis object to set label on

  • label – the desired label

  • unit – the unit

  • **kw – keyword argument to be passed to matplotlib.set_xlabel

set_ylabel(axis, label, unit=None, **kw)[source]

Add a unit-aware y-label to an axis object.

Parameters
  • axis – matplotlib axis object to set label on

  • label – the desired label

  • unit – the unit

  • **kw – keyword argument to be passed to matplotlib.set_ylabel

value_precision(val, stderr=None)[source]

Calculate the precision to which a parameter is to be specified, according to its standard error. Returns the appropriate format specifier string.

If there is no stderr, use 5 significant figures. If there is a standard error use a precision one order of magnitude more precise than the size of the error and display the stderr itself to two significant figures in standard index notation in the same units as the value.

Parameters
  • val (float) – the nominal value of the parameter

  • stderr (float) – the standard error on the parameter

Return type

Tuple[str, str]

Returns

  • val_format_specifier (str) – python format specifier which sets the precision of the parameter value

  • err_format_specifier (str) – python format specifier which set the precision of the error
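The precision rule can be sketched in plain Python (an illustrative reimplementation of the behaviour described above, not the library code):

```python
import math

def value_precision_sketch(val, stderr=None):
    """Return (val_format_specifier, err_format_specifier) following the rule
    described above (illustrative sketch, not the library implementation)."""
    if stderr is None or stderr == 0 or val == 0:
        # No usable error: 5 significant figures, no error specifier.
        return "{:.5g}", ""
    # One order of magnitude more precise than the size of the error;
    # the error itself is shown to two significant figures.
    val_digits = max(
        int(math.floor(math.log10(abs(val))))
        - int(math.floor(math.log10(abs(stderr)))) + 2,
        1,
    )
    return "{{:.{}g}}".format(val_digits), "{:#.2g}"

# e.g. value_precision_sketch(1.2345, 0.01) -> ("{:.4g}", "{:#.2g}")
```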
