quantify_core

analysis

base_analysis

Module containing the analysis abstract base class and several basic analyses.

class AnalysisMeta(name, bases, namespace, /, **kwargs)[source]

Bases: ABCMeta

Metaclass whose purpose is to avoid storing large amounts of figures in memory.

By convention, an analysis object stores figures in the self.figs_mpl and self.axs_mpl dictionaries. This causes trouble for long-running operations, because all figures are kept in memory, eventually exhausting the available memory of the PC. To avoid this, BaseAnalysis.create_figures() and its derivatives are patched so that all figures are put in an LRU cache and reconstructed upon request to BaseAnalysis.figs_mpl or BaseAnalysis.axs_mpl if they were removed from the cache.

Provided that analysis subclasses follow the convention of creating figures in BaseAnalysis.create_figures(), this approach solves the memory issue while preserving backwards compatibility with existing code.

class AnalysisSteps(value)[source]

Bases: Enum

An enumeration of the steps executed by BaseAnalysis (and the default for subclasses).

The involved steps are specified below.

# <STEP>                                          # <corresponding class method>

AnalysisSteps.STEP_1_PROCESS_DATA                 # BaseAnalysis.process_data
AnalysisSteps.STEP_2_RUN_FITTING                  # BaseAnalysis.run_fitting
AnalysisSteps.STEP_3_ANALYZE_FIT_RESULTS          # BaseAnalysis.analyze_fit_results
AnalysisSteps.STEP_4_CREATE_FIGURES               # BaseAnalysis.create_figures
AnalysisSteps.STEP_5_ADJUST_FIGURES               # BaseAnalysis.adjust_figures
AnalysisSteps.STEP_6_SAVE_FIGURES                 # BaseAnalysis.save_figures
AnalysisSteps.STEP_7_SAVE_QUANTITIES_OF_INTEREST  # BaseAnalysis.save_quantities_of_interest
AnalysisSteps.STEP_8_SAVE_PROCESSED_DATASET       # BaseAnalysis.save_processed_dataset
AnalysisSteps.STEP_9_SAVE_FIT_RESULTS             # BaseAnalysis.save_fit_results

Tip

A custom analysis flow (e.g. inserting new steps) can be created by implementing an object similar to this one and overriding the analysis_steps.
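
A minimal sketch of such a custom flow, assuming (as the step-to-method listing above suggests) that each enum value is the name of the corresponding method to execute; the class names and the choice of steps here are hypothetical:

from enum import Enum

from quantify_core.analysis.base_analysis import BaseAnalysis

class CustomAnalysisSteps(Enum):
    # hypothetical flow that skips the fitting-related steps
    STEP_1_PROCESS_DATA = "process_data"
    STEP_2_CREATE_FIGURES = "create_figures"
    STEP_3_ADJUST_FIGURES = "adjust_figures"
    STEP_4_SAVE_FIGURES = "save_figures"

class MyAnalysis(BaseAnalysis):
    # override the class attribute so run() executes the custom flow
    analysis_steps = CustomAnalysisSteps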

class BaseAnalysis(dataset=None, tuid=None, label='', settings_overwrite=None, plot_figures=True)[source]

Bases: object

A template for analysis classes.

analysis_steps

Defines the steps of the analysis specified as an Enum. Can be overridden in a subclass in order to define a custom analysis flow. See AnalysisSteps for a template.

alias of AnalysisSteps

__init__(dataset=None, tuid=None, label='', settings_overwrite=None, plot_figures=True)[source]

Initializes the variables that are used in the analysis and in which data is stored.

Warning

We highly discourage overriding the class initialization. If the analysis requires the user to pass in any arguments, run() should be overridden and extended (see its docstring for an example, and the sketch below).
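
A minimal sketch of this pattern (MyAnalysis and its extra argument are hypothetical):

from quantify_core.analysis.base_analysis import BaseAnalysis

class MyAnalysis(BaseAnalysis):
    def run(self, my_threshold=0.5):
        # store the user input for later use in the analysis steps,
        # then execute the standard flow and return the instance
        self.my_threshold = my_threshold
        return super().run()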

Settings schema:

Base analysis settings

properties

  • mpl_dpi (integer) – Matplotlib figures DPI.

  • mpl_exclude_fig_titles (boolean) – If True, matplotlib figures will not include the title.

  • mpl_transparent_background (boolean) – If True, matplotlib figures will have a transparent background (when applicable).

  • mpl_fig_formats (array of string) – List of formats in which matplotlib figures will be saved, e.g. ['svg'].

Parameters
  • dataset (Optional[Dataset] (default: None)) – an unprocessed (raw) quantify dataset to perform the analysis on.

  • tuid (Union[TUID, str, None] (default: None)) – if no dataset is specified, will look for the dataset with the matching tuid in the data directory.

  • label (str (default: '')) – if no dataset and no tuid is provided, will look for the most recent dataset that contains “label” in the name.

  • settings_overwrite (Optional[dict] (default: None)) – A dictionary containing overrides for the global base_analysis.settings for this specific instance. See Settings schema above for available settings.

  • plot_figures (bool (default: True)) – Option to create and save figures for analysis.

adjust_clim(vmin, vmax, ax_ids=None)[source]

Adjust the clim of the matplotlib figures generated by the analysis object.

Parameters
  • vmin (float) – The bottom vlim in data coordinates. Passing None leaves the limit unchanged.

  • vmax (float) – The top vlim in data coordinates. Passing None leaves the limit unchanged.

  • ax_ids (Optional[List[str]] (default: None)) – A list of ax_ids specifying what axes to adjust. Passing None results in all axes of an analysis object being adjusted.

Return type

None

adjust_figures()[source]

Perform global adjustments after creating the figures but before saving them.

By default applies mpl_exclude_fig_titles and mpl_transparent_background from .settings_overwrite to any matplotlib figures in .figs_mpl.

Can be extended in a subclass for additional adjustments.

adjust_xlim(xmin=None, xmax=None, ax_ids=None)[source]

Adjust the xlim of the matplotlib figures generated by the analysis object.

Parameters
  • xmin (Optional[float] (default: None)) – The bottom xlim in data coordinates. Passing None leaves the limit unchanged.

  • xmax (Optional[float] (default: None)) – The top xlim in data coordinates. Passing None leaves the limit unchanged.

  • ax_ids (Optional[List[str]] (default: None)) – A list of ax_ids specifying what axes to adjust. Passing None results in all axes of an analysis object being adjusted.

Return type

None

adjust_ylim(ymin=None, ymax=None, ax_ids=None)[source]

Adjust the ylim of the matplotlib figures generated by the analysis object.

Parameters
  • ymin (Optional[float] (default: None)) – The bottom ylim in data coordinates. Passing None leaves the limit unchanged.

  • ymax (Optional[float] (default: None)) – The top ylim in data coordinates. Passing None leaves the limit unchanged.

  • ax_ids (Optional[List[str]] (default: None)) – A list of ax_ids specifying what axes to adjust. Passing None results in all axes of an analysis object being adjusted.

Return type

None
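
A usage sketch for these helpers (the label is a placeholder):

from quantify_core.analysis.base_analysis import BasicAnalysis

a_obj = BasicAnalysis(label="Cosine experiment").run()
a_obj.adjust_xlim(xmin=0, xmax=50e-6)   # zoom in on the x-axis
a_obj.adjust_ylim(ymin=-1.5, ymax=1.5)  # fix the y-range
a_obj.save_figures()                    # persist the adjusted figures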

analyze_fit_results()[source]

To be implemented by subclasses.

Should analyze and process the .fit_results and add the quantities of interest to the .quantities_of_interest dictionary.

create_figures()[source]

To be implemented by subclasses.

Should generate figures of interest. Matplotlib figures and axes objects should be added to the .figs_mpl and .axs_mpl dictionaries, respectively.

display_figs_mpl()[source]

Displays figures in .figs_mpl in all frontends.

execute_analysis_steps()[source]

Executes the methods corresponding to the analysis steps as defined by the analysis_steps.

Intended to be called by .run when creating a custom analysis that requires passing analysis configuration arguments to run().

extract_data()[source]

If no dataset is provided, populates .dataset with data from the experiment matching the tuid/label.

This method should be overwritten if an analysis does not relate to a single datafile.

get_flow()[source]

Returns a tuple with the ordered methods to be called by the run analysis. The figure methods are only included if self.plot_figures is True.

Return type

tuple

classmethod load_fit_result(tuid, fit_name)[source]

Load a saved lmfit.model.ModelResult object from file. For analyses that use custom fit functions, the cls.fit_function_definitions object must be defined in the subclass for that analysis.

Parameters
  • tuid (TUID) – The TUID reference of the saved analysis.

  • fit_name (str) – The name of the fit result to be loaded.

Return type

ModelResult

Returns

The lmfit model result object.
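
A usage sketch (the TUID and fit name are placeholders; fit_name must match a key of the .fit_results dictionary saved by the analysis):

from quantify_core.analysis.base_analysis import BaseAnalysis

fit_result = BaseAnalysis.load_fit_result(
    tuid="20210301-123456-789-abcdef",  # placeholder TUID
    fit_name="my_fit",                  # placeholder fit name
)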

process_data()[source]

To be implemented by subclasses.

Should process, e.g., reshape, filter etc. the data before starting the analysis.

run()[source]

This function is at the core of all analysis. It calls execute_analysis_steps(), which executes all the methods defined in the analysis_steps.

The first step of any analysis is always extracting data; that step is not configurable, and errors in extract_data() are considered fatal for the analysis. Later steps are configurable by overriding analysis_steps; exceptions in these steps are logged and suppressed, and the analysis is considered partially successful.

This function is typically called right after instantiating an analysis class.

Return type

BaseAnalysis

Returns

The instance of the analysis object, so that you can initialize, run and assign it to a variable on a single line, e.g. a_obj = MyAnalysis().run().

run_fitting()[source]

To be implemented by subclasses.

Should create fitting model(s) and fit data to the model(s) adding the result to the .fit_results dictionary.

save_figures()[source]

Saves figures to disk. By default saves matplotlib figures.

Can be overridden or extended to make use of other plotting packages.

save_figures_mpl(close_figs=True)[source]

Saves all the matplotlib figures in the .figs_mpl dict.

Parameters

close_figs (bool (default: True)) – If True, closes matplotlib figures after saving.

save_fit_results()[source]

Saves the lmfit.model.ModelResult object for each fit in a sub-directory within the analysis directory.

save_processed_dataset()[source]

Saves a copy of the processed .dataset_processed in the analysis folder of the experiment.

save_quantities_of_interest()[source]

Saves the .quantities_of_interest as a JSON file in the analysis directory.

The file is written using json.dump() with the qcodes.utils.NumpyJSONEncoder custom encoder.

property analysis_dir

Analysis directory based on the TUID of the analysis class instance. The directory is created if it does not exist.

property name

The name of the analysis, used in data saving.

property results_dir

Analysis directory for this analysis. The directory is created if it does not exist.

class Basic1DAnalysis(dataset=None, tuid=None, label='', settings_overwrite=None, plot_figures=True)[source]

Bases: BasicAnalysis

Deprecated. Alias of BasicAnalysis for backwards compatibility.

run()[source]

This function is at the core of all analysis. It calls execute_analysis_steps(), which executes all the methods defined in the analysis_steps.

The first step of any analysis is always extracting data; that step is not configurable, and errors in extract_data() are considered fatal for the analysis. Later steps are configurable by overriding analysis_steps; exceptions in these steps are logged and suppressed, and the analysis is considered partially successful.

This function is typically called right after instantiating an analysis class.

Return type

BaseAnalysis

Returns

The instance of the analysis object, so that you can initialize, run and assign it to a variable on a single line, e.g. a_obj = MyAnalysis().run().

class Basic2DAnalysis(dataset=None, tuid=None, label='', settings_overwrite=None, plot_figures=True)[source]

Bases: BaseAnalysis

A basic analysis that extracts the data from the latest file matching the label, then plots and stores the data in the experiment container.

create_figures()[source]

To be implemented by subclasses.

Should generate figures of interest. Matplotlib figures and axes objects should be added to the .figs_mpl and .axs_mpl dictionaries, respectively.

class BasicAnalysis(dataset=None, tuid=None, label='', settings_overwrite=None, plot_figures=True)[source]

Bases: BaseAnalysis

A basic analysis that extracts the data from the latest file matching the label, then plots and stores the data in the experiment container.

create_figures()[source]

Creates a line plot x vs y for every data variable yi and coordinate xi in the dataset.

analysis_steps_to_str(analysis_steps, class_name='BaseAnalysis')[source]

A utility for generating the docstring for the analysis steps.

Return type

str

Returns

A formatted string version of the analysis_steps and corresponding methods.

check_lmfit(fit_res)[source]

Check that lmfit was able to successfully return a valid fit, and give a warning if not.

The function looks at lmfit’s success parameter, and also checks whether the fit was able to obtain valid error bars on the fitted parameters.

Parameters

fit_res (ModelResult) – The ModelResult object output by lmfit

Return type

str

Returns

A warning message if there is a problem with the fit.

flatten_lmfit_modelresult(model)[source]

Flatten an lmfit model result to a dictionary in order to be able to save it to disk.

Notes

We use this method as opposed to save_modelresult() as the corresponding load_modelresult() cannot handle loading data with a custom fit function.

lmfit_par_to_ufloat(param)[source]

Safe conversion of an lmfit.parameter.Parameter to uncertainties.ufloat(value, std_dev).

This function is intended to be used in custom analyses to avoid errors when an lmfit fit fails and the stderr is None.

Parameters

param (Parameter) – The Parameter to be converted

Returns

An object representing the value and the uncertainty of the parameter.

Return type

uncertainties.UFloat
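
A usage sketch with an arbitrary lmfit model and synthetic data:

import numpy as np
from lmfit.models import LinearModel

from quantify_core.analysis.base_analysis import lmfit_par_to_ufloat

x = np.linspace(0, 1, 20)
y = 2.0 * x + 0.3

fit_res = LinearModel().fit(y, x=x)  # any lmfit ModelResult works
slope = lmfit_par_to_ufloat(fit_res.params["slope"])
print(slope.nominal_value, slope.std_dev)  # std_dev is nan if stderr was None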

wrap_text(text, width=35, replace_whitespace=True, **kwargs)[source]

A text wrapping (breaking over multiple lines) utility.

Intended to be used with plot_textbox() in order to avoid a figure that is too wide when, e.g., check_lmfit() fails and a warning message is generated.

For usage see, for example, source code of create_figures().

Parameters
  • text – The text string to be wrapped over several lines.

  • width (default: 35) – Maximum line width in characters.

  • kwargs – Any other keyword arguments to be passed to textwrap.wrap().

Returns

The wrapped text (or None if text is None).

settings = {'mpl_dpi': 450, 'mpl_fig_formats': ['png', 'svg'], 'mpl_exclude_fig_titles': False, 'mpl_transparent_background': True}

For convenience the analysis framework provides a set of global settings.

For available settings see BaseAnalysis. These can be overwritten for each instance of an analysis.

Example

from quantify_core.analysis import base_analysis as ba
ba.settings["mpl_dpi"] = 300  # set resolution of matplotlib figures
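
The same settings can also be overridden per instance via settings_overwrite (the label is a placeholder):

a_obj = ba.BasicAnalysis(
    label="Cosine experiment",
    settings_overwrite={"mpl_dpi": 300, "mpl_fig_formats": ["png"]},
).run()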

cosine_analysis

Module containing an educational example of an analysis subclass.

See Tutorial 3. Building custom analyses - the data analysis framework that guides you through the process of building this analysis.

class CosineAnalysis(dataset=None, tuid=None, label='', settings_overwrite=None, plot_figures=True)[source]

Bases: BaseAnalysis

Exemplary analysis subclass that fits a cosine to a dataset.

analyze_fit_results()[source]

Checks fit success and populates quantities_of_interest.

create_figures()[source]

Creates a figure with the data and the fit.

process_data()[source]

In some cases, you might need to process the data, e.g., reshape, filter etc., before starting the analysis. This is the method where it should be done.

See process_data() for an implementation example.

run_fitting()[source]

Fits a CosineModel to the data.
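
A usage sketch, assuming a dataset whose name contains the given label exists in the data directory:

from quantify_core.analysis.cosine_analysis import CosineAnalysis

a_obj = CosineAnalysis(label="Cosine experiment").run()
print(a_obj.quantities_of_interest)  # e.g. the fitted frequency and amplitude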

spectroscopy_analysis

class ResonatorSpectroscopyAnalysis(dataset=None, tuid=None, label='', settings_overwrite=None, plot_figures=True)[source]

Bases: BaseAnalysis

Analysis for a spectroscopy experiment of a hanger resonator.

analyze_fit_results()[source]

Checks fit success and populates .quantities_of_interest.

create_figures()[source]

Plots the measured and fitted transmission \(S_{21}\) as the I and Q component vs frequency, the magnitude and phase vs frequency, and on the complex I,Q plane.

process_data()[source]

Verifies that the data is measured as magnitude and phase and casts it to a dataset of complex valued transmission \(S_{21}\).

run_fitting()[source]

Fits a ResonatorModel to the data.

single_qubit_timedomain

Module containing analyses for common single qubit timedomain experiments.

class AllXYAnalysis(dataset=None, tuid=None, label='', settings_overwrite=None, plot_figures=True)[source]

Bases: SingleQubitTimedomainAnalysis

Normalizes the data from an AllXY experiment and plots against an ideal curve.

See section 2.3.2 of Reed [2013] for an explanation of the AllXY experiment and its applications in diagnosing errors in single-qubit control pulses.

create_figures()[source]

To be implemented by subclasses.

Should generate figures of interest. Matplotlib figures and axes objects should be added to the .figs_mpl and .axs_mpl dictionaries, respectively.

process_data()[source]

Processes the data so that the analysis can make assumptions on the format.

Populates self.dataset_processed.S21 with the complex (I,Q) valued transmission, and if calibration points are present for the 0 and 1 state, populates self.dataset_processed.pop_exc with the excited state population.

run()[source]

Executes the analysis using specific datapoints as calibration points.

Returns

The instance of this analysis.

Return type

AllXYAnalysis

class EchoAnalysis(dataset=None, tuid=None, label='', settings_overwrite=None, plot_figures=True)[source]

Bases: SingleQubitTimedomainAnalysis, _DecayFigMixin

Analysis class for a qubit spin-echo experiment, which fits an exponential decay and extracts the T2_echo time.

analyze_fit_results()[source]

Checks fit success and populates .quantities_of_interest.

create_figures()[source]

Create a figure showing the exponential decay and fit.

run_fitting()[source]

Fit the data to ExpDecayModel.

class RabiAnalysis(dataset=None, tuid=None, label='', settings_overwrite=None, plot_figures=True)[source]

Bases: SingleQubitTimedomainAnalysis

Fits a cosine curve to Rabi oscillation data and finds the qubit drive amplitude required to implement a pi-pulse.

The analysis will automatically rotate the data so that the data lies along the axis with the best SNR.

analyze_fit_results()[source]

Checks fit success and populates .quantities_of_interest.

create_figures()[source]

Creates the Rabi oscillation figure.

run(calibration_points=True)[source]
Parameters

calibration_points (bool (default: True)) – Specifies if the data should be rotated so that it lies along the axis with the best SNR.

Returns

The instance of this analysis.

Return type

RabiAnalysis

run_fitting()[source]

Fits a RabiModel to the data.

class RamseyAnalysis(dataset=None, tuid=None, label='', settings_overwrite=None, plot_figures=True)[source]

Bases: SingleQubitTimedomainAnalysis, _DecayFigMixin

Fits a decaying cosine curve to Ramsey data (possibly with artificial detuning) and finds the true detuning, qubit frequency and T2* time.

analyze_fit_results()[source]

Extract the real detuning and qubit frequency based on the artificial detuning and fitted detuning.

create_figures()[source]

Plot Ramsey decay figure.

run(artificial_detuning=0, qubit_frequency=None, calibration_points='auto')[source]
Parameters
  • artificial_detuning (float (default: 0)) – The detuning in Hz that will be emulated by adding an extra phase in software.

  • qubit_frequency (Optional[float] (default: None)) – The initial recorded value of the qubit frequency (before accurate fitting is done) in Hz.

  • calibration_points (Union[bool, Literal['auto']] (default: 'auto')) – Indicates if the data analyzed includes calibration points. If set to True, will interpret the last two data points in the dataset as \(|0\rangle\) and \(|1\rangle\) respectively. If "auto", will use has_calibration_points() to determine if the data contains calibration points.

Returns

The instance of this analysis.

Return type

RamseyAnalysis

run_fitting()[source]

Fits a DecayOscillationModel to the data.

class SingleQubitTimedomainAnalysis(dataset=None, tuid=None, label='', settings_overwrite=None, plot_figures=True)[source]

Bases: BaseAnalysis

Base Analysis class for single-qubit timedomain experiments.

process_data()[source]

Processes the data so that the analysis can make assumptions on the format.

Populates self.dataset_processed.S21 with the complex (I,Q) valued transmission, and if calibration points are present for the 0 and 1 state, populates self.dataset_processed.pop_exc with the excited state population.

run(calibration_points='auto')[source]
Parameters

calibration_points (Union[bool, Literal['auto']] (default: 'auto')) – Indicates if the data analyzed includes calibration points. If set to True, will interpret the last two data points in the dataset as \(|0\rangle\) and \(|1\rangle\) respectively. If "auto", will use has_calibration_points() to determine if the data contains calibration points.

Returns

The instance of this analysis.

Return type

SingleQubitTimedomainAnalysis

class T1Analysis(dataset=None, tuid=None, label='', settings_overwrite=None, plot_figures=True)[source]

Bases: SingleQubitTimedomainAnalysis, _DecayFigMixin

Analysis class for a qubit T1 experiment, which fits an exponential decay and extracts the T1 time.

analyze_fit_results()[source]

Checks fit success and populates .quantities_of_interest.

create_figures()[source]

Create a figure showing the exponential decay and fit.

run_fitting()[source]

Fit the data to ExpDecayModel.

interpolation_analysis

class InterpolationAnalysis2D(dataset=None, tuid=None, label='', settings_overwrite=None, plot_figures=True)[source]

Bases: BaseAnalysis

An analysis class which generates a 2D interpolating plot for each yi variable in the dataset.

create_figures()[source]

Create a 2D interpolating figure for each yi.

optimization_analysis

class OptimizationAnalysis(dataset=None, tuid=None, label='', settings_overwrite=None, plot_figures=True)[source]

Bases: BaseAnalysis

An analysis class which extracts the optimal quantities from an N-dimensional interpolating experiment.

create_figures()[source]

Plot each of the x variables against each of the y variables.

process_data()[source]

Finds the optimal (minimum or maximum) for y0 and saves the xi and y0 values in the quantities_of_interest.

run(minimize=True)[source]
Parameters

minimize (bool (default: True)) – Boolean which determines whether to report the minimum or the maximum. True for minimize. False for maximize.

Returns

The instance of this analysis.

Return type

OptimizationAnalysis

iteration_plots(dataset, quantities_of_interest)[source]

For every x and y variable, plot a graph of that variable vs the iteration index.

fitting_models

Models and fit functions to be used with the lmfit fitting framework.

class CosineModel(*args, **kwargs)[source]

Bases: Model

Exemplary lmfit model with a guess for a cosine.

Note

The lmfit.models module provides several fitting models that might fit your needs out of the box.

__init__(*args, **kwargs)[source]
Parameters
  • independent_vars (list of str) – Arguments to the model function that are independent variables (default is ['x']).

  • prefix (str) – String to prepend to parameter names, needed to add two Models that have parameter names in common.

  • nan_policy – How to handle NaN and missing values in data. See Notes below.

  • **kwargs – Keyword arguments to pass to Model.

Notes

1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:

  • ‘raise’ : raise a ValueError (default)

  • ‘propagate’ : do nothing

  • ‘omit’ : drop missing data

See also

cos_func()

guess(data, x, **kws)[source]

Guess starting values for the parameters of a model.

Parameters
  • data (ndarray) – Array of data (i.e., y-values) to use to guess parameter values.

  • x (ndarray) – Array of values for the independent variable (i.e., x-values).

  • **kws – Additional keyword arguments, passed to model function.

Return type

Parameters

Returns

  • params (Parameters) – Initial, guessed values for the parameters of a Model.

Changed in version 1.0.3: Argument x is now explicitly required to estimate starting values.
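
A usage sketch of the model with synthetic data:

import numpy as np

from quantify_core.analysis.fitting_models import CosineModel

x = np.linspace(0, 2e-6, 101)  # evenly spaced, as required by the FFT-based guess
data = 0.4 * np.cos(2 * np.pi * 3e6 * x + 0.2) + 0.1

model = CosineModel()
params = model.guess(data, x=x)                # initial values for the fit
fit_res = model.fit(data, params=params, x=x)  # lmfit ModelResult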

class DecayOscillationModel(*args, **kwargs)[source]

Bases: Model

Model for a decaying oscillation which decays to a point with 0 offset from the centre of the oscillation (as in a Ramsey experiment, for example).

__init__(*args, **kwargs)[source]
Parameters
  • independent_vars (list of str) – Arguments to the model function that are independent variables (default is ['x']).

  • prefix (str) – String to prepend to parameter names, needed to add two Models that have parameter names in common.

  • nan_policy – How to handle NaN and missing values in data. See Notes below.

  • **kwargs – Keyword arguments to pass to Model.

Notes

1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:

  • ‘raise’ : raise a ValueError (default)

  • ‘propagate’ : do nothing

  • ‘omit’ : drop missing data

guess(data, **kws)[source]

Guess starting values for the parameters of a model.

Parameters
  • data (ndarray) – Array of data (i.e., y-values) to use to guess parameter values.

  • x (ndarray) – Array of values for the independent variable (i.e., x-values).

  • **kws – Additional keyword arguments, passed to model function.

Return type

Parameters

Returns

  • params (Parameters) – Initial, guessed values for the parameters of a Model.

Changed in version 1.0.3: Argument x is now explicitly required to estimate starting values.

class ExpDecayModel(*args, **kwargs)[source]

Bases: Model

Model for an exponential decay, such as a qubit T1 measurement.

__init__(*args, **kwargs)[source]
Parameters
  • independent_vars (list of str) – Arguments to the model function that are independent variables (default is ['x']).

  • prefix (str) – String to prepend to parameter names, needed to add two Models that have parameter names in common.

  • nan_policy – How to handle NaN and missing values in data. See Notes below.

  • **kwargs – Keyword arguments to pass to Model.

Notes

1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:

  • ‘raise’ : raise a ValueError (default)

  • ‘propagate’ : do nothing

  • ‘omit’ : drop missing data

See also

exp_decay_func()

guess(data, **kws)[source]

Guess starting values for the parameters of a model.

Parameters
  • data (ndarray) – Array of data (i.e., y-values) to use to guess parameter values.

  • x (ndarray) – Array of values for the independent variable (i.e., x-values).

  • **kws – Additional keyword arguments, passed to model function.

Return type

Parameters

Returns

  • params (Parameters) – Initial, guessed values for the parameters of a Model.

Changed in version 1.0.3: Argument x is now explicitly required to estimate starting values.

class RabiModel(*args, **kwargs)[source]

Bases: Model

Model for a Rabi oscillation as a function of the microwave drive amplitude. Phase of oscillation is fixed at \(\pi\) in order to ensure that the oscillation is at a minimum when the drive amplitude is 0.

__init__(*args, **kwargs)[source]
Parameters
  • independent_vars (list of str) – Arguments to the model function that are independent variables (default is ['x']).

  • prefix (str) – String to prepend to parameter names, needed to add two Models that have parameter names in common.

  • nan_policy – How to handle NaN and missing values in data. See Notes below.

  • **kwargs – Keyword arguments to pass to Model.

Notes

1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:

  • ‘raise’ : raise a ValueError (default)

  • ‘propagate’ : do nothing

  • ‘omit’ : drop missing data

See also

cos_func()

guess(data, **kws)[source]

Guess starting values for the parameters of a model.

Parameters
  • data (ndarray) – Array of data (i.e., y-values) to use to guess parameter values.

  • x (ndarray) – Array of values for the independent variable (i.e., x-values).

  • **kws – Additional keyword arguments, passed to model function.

Return type

Parameters

Returns

  • params (Parameters) – Initial, guessed values for the parameters of a Model.

Changed in version 1.0.3: Argument x is now explicitly required to estimate starting values.

class ResonatorModel(*args, **kwargs)[source]

Bases: Model

Resonator model

Implementation and design patterns inspired by the complex resonator model example (lmfit documentation).

__init__(*args, **kwargs)[source]
Parameters
  • independent_vars (list of str) – Arguments to the model function that are independent variables (default is ['x']).

  • prefix (str) – String to prepend to parameter names, needed to add two Models that have parameter names in common.

  • nan_policy – How to handle NaN and missing values in data. See Notes below.

  • **kwargs – Keyword arguments to pass to Model.

Notes

1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:

  • ‘raise’ : raise a ValueError (default)

  • ‘propagate’ : do nothing

  • ‘omit’ : drop missing data

guess(data, **kws)[source]

Guess starting values for the parameters of a model.

Parameters
  • data (ndarray) – Array of data (i.e., y-values) to use to guess parameter values.

  • x (ndarray) – Array of values for the independent variable (i.e., x-values).

  • **kws – Additional keyword arguments, passed to model function.

Return type

Parameters

Returns

  • params (Parameters) – Initial, guessed values for the parameters of a Model.

Changed in version 1.0.3: Argument x is now explicitly required to estimate starting values.

cos_func(x, frequency, amplitude, offset, phase=0)[source]

An oscillating cosine function:

\(y = \mathrm{amplitude} \times \cos(2 \pi \times \mathrm{frequency} \times x + \mathrm{phase}) + \mathrm{offset}\)

Parameters
  • x (float) – The independent variable (time, for example)

  • frequency (float) – A generalized frequency (in units of inverse x)

  • amplitude (float) – Amplitude of the oscillation

  • offset (float) – Output signal vertical offset

  • phase (float (default: 0)) – Phase offset / rad

Return type

float

Returns

Output signal magnitude

exp_damp_osc_func(t, tau, n_factor, frequency, phase, amplitude, oscillation_offset, exponential_offset)[source]

A sinusoidal oscillation with an exponentially decaying envelope function:

\(y = \mathrm{amplitude} \times \exp\left(-(t/\tau)^{\mathrm{n\_factor}}\right)\left(\cos(2\pi \times \mathrm{frequency} \times t + \mathrm{phase}) + \mathrm{oscillation\_offset}\right) + \mathrm{exponential\_offset}\)

Parameters
  • t (float) – time

  • tau (float) – decay time

  • n_factor (float) – exponential decay factor

  • frequency (float) – frequency of the oscillation

  • phase (float) – phase of the oscillation

  • amplitude (float) – initial amplitude of the oscillation

  • oscillation_offset (float) – vertical offset of the cosine oscillation relative to the exponential asymptote

  • exponential_offset (float) – offset of the exponential asymptote

Returns

Output of decaying cosine function as a float

exp_decay_func(t, tau, amplitude, offset, n_factor)[source]

This is a general exponential decay function:

\(y = \mathrm{amplitude} \times \exp\left(-(t/\tau)^\mathrm{n\_factor}\right) + \mathrm{offset}\)

Parameters
  • t (float) – time

  • tau (float) – decay time

  • amplitude (float) – amplitude of the exponential decay

  • offset (float) – asymptote of the exponential decay, the value at t=infinity

  • n_factor (float) – exponential decay factor

Return type

float

Returns

Output of exponential function as a float

fft_freq_phase_guess(data, t)[source]

Guess for a cosine fit using FFT, only works for evenly spaced points.

Parameters
  • data (ndarray) – Input data to FFT

  • t (ndarray) – Independent variable (e.g. time)

Return type

Tuple[float, float]

Returns

  • freq_guess – Guess for the frequency of the cosine function

  • ph_guess – Guess for the phase of the cosine function
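
A usage sketch with synthetic data:

import numpy as np

from quantify_core.analysis.fitting_models import fft_freq_phase_guess

t = np.linspace(0, 1e-6, 200)  # evenly spaced points, as required
data = np.cos(2 * np.pi * 4e6 * t + 0.3)

freq_guess, ph_guess = fft_freq_phase_guess(data, t)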

get_guess_common_doc()[source]

Returns a common docstring to be used for the guess() method of custom fitting Models.

Return type

str

get_model_common_doc()[source]

Returns a common docstring to be used with custom fitting Models.

Return type

str

hanger_func_complex_SI(f, fr, Ql, Qe, A, theta, phi_v, phi_0, alpha=1)[source]

This is the complex function for a hanger (lambda/4 resonator).

Parameters
  • f (float) – frequency

  • fr (float) – resonance frequency

  • A (float) – background transmission amplitude

  • Ql (float) – loaded quality factor of the resonator

  • Qe (float) – magnitude of extrinsic quality factor Qe = |Q_extrinsic|

  • theta (float) – phase of extrinsic quality factor (in rad)

  • phi_v (float) – phase to account for propagation delay to sample

  • phi_0 (float) – phase to account for propagation delay from sample

  • alpha (float (default: 1)) – slope of signal around the resonance

Return type

complex

Returns

complex valued transmission

See eq. S4 from Bruno et al. (2015) ArXiv:1502.04082.

\[S_{21} = A \left(1+\alpha \frac{f-f_r}{f_r} \right) \left(1- \frac{\frac{Q_l}{|Q_e|}e^{i\theta} }{1+2iQ_l \frac{f-f_r}{f_r}} \right) e^{i (\phi_v f + \phi_0)}\]

The loaded and extrinsic quality factors are related to the internal and coupled Q according to:

\[\frac{1}{Q_l} = \frac{1}{Q_c}+\frac{1}{Q_i}\]

and

\[\frac{1}{Q_c} = \mathrm{Re}\left(\frac{1}{|Q_e|e^{-i\theta}}\right)\]
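
For reference, a direct NumPy transcription of eq. S4 above (a sketch for illustration, not the library implementation):

import numpy as np

def hanger_s21_sketch(f, fr, Ql, Qe, A, theta, phi_v, phi_0, alpha=1):
    slope = 1 + alpha * (f - fr) / fr                  # background slope term
    dip = 1 - (Ql / abs(Qe)) * np.exp(1j * theta) / (
        1 + 2j * Ql * (f - fr) / fr
    )                                                  # resonance dip term
    return A * slope * dip * np.exp(1j * (phi_v * f + phi_0))  # delay phases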

mk_seealso(function_name, role='func', prefix='\\n\\n', module_location='.')[source]

Returns a sphinx seealso pointing to a function.

Intended to be used for building custom fitting model docstrings.

Parameters
  • function_name (str) – name of the function to point to

  • role (str (default: 'func')) – a sphinx role, e.g. "func"

  • prefix (str (default: '\\n\\n')) – string preceding the seealso

  • module_location (str (default: '.')) – can be used to indicate a function outside this module, e.g., my_module.submodule which contains the function.

Return type

str

Returns

resulting string

resonator_phase_guess(s21, freq)[source]

Guesses the phase velocity in resonator spectroscopy, based on the median of all the differences between consecutive phases.

Parameters
  • s21 (ndarray) – Resonator S21 data

  • freq (ndarray) – Frequency of the spectroscopy pulse

Return type

Tuple[float, float]

Returns

  • phi_0 – Guess for the phase offset

  • phi_v – Guess for the phase velocity

calibration

Module containing analysis utilities for calibration procedures.

In particular, manipulation of data and calibration points for qubit readout calibration.

has_calibration_points(s21, indices_state_0=(-2,), indices_state_1=(-1,))[source]

Attempts to determine if the provided complex S21 data has calibration points for the ground and first excited states of a qubit.

In the ideal scenario, if the datapoints indicated by the indices correspond to the calibration points, then these points will be located at the extremities of a “segment” on the IQ plane.

Three pieces of information are used to infer the presence of calibration points:

  • The angle of the calibration points with respect to the average of the datapoints,

  • The distance between the calibration points, and

  • The average distance to the line defined by the calibration points.

The detection is made robust by averaging 3 datapoints for each extremity of the “segment” described by the data on the IQ-plane.

Parameters
  • s21 (ndarray) – Array of complex datapoints corresponding to the experiment on the IQ plane.

  • indices_state_0 (tuple (default: (-2,))) – Indices in the s21 array that correspond to the ground state.

  • indices_state_1 (tuple (default: (-1,))) – Indices in the s21 array that correspond to the first excited state.

Return type

bool

Returns

The inferred presence of calibration points.

rotate_to_calibrated_axis(data, ref_val_0, ref_val_1)[source]

Rotates, normalizes and offsets complex valued data based on calibration points.

Parameters
  • data (ndarray) – An array of complex valued data points.

  • ref_val_0 (complex) – The reference value corresponding to the 0 state.

  • ref_val_1 (complex) – The reference value corresponding to the 1 state.

Return type

ndarray

Returns

Calibrated array of complex data points.
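
A usage sketch, assuming (per the common convention) that the last two points of the S21 trace are the 0- and 1-state calibration points; the data here is a toy example:

import numpy as np

from quantify_core.analysis.calibration import rotate_to_calibrated_axis

s21 = np.array([0.2 + 0.1j, 0.3 + 0.2j, 0.4 + 0.3j, 0.1 + 0.0j, 0.5 + 0.4j])
calibrated = rotate_to_calibrated_axis(s21, ref_val_0=s21[-2], ref_val_1=s21[-1])
pop_exc = calibrated.real  # population along the calibrated (real) axis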

data

types

Module containing the core data concepts of quantify.

class TUID(value: str)[source]

A human-readable unique identifier based on the timestamp. This class does not wrap the passed-in object but simply verifies and returns it.

A tuid is a string formatted as YYYYmmDD-HHMMSS-sss-******. The tuid serves as a unique identifier for experiments in quantify.

See also

The handling module.

classmethod datetime(tuid)[source]
Returns

The datetime object corresponding to the TUID.

Return type

datetime

classmethod datetime_seconds(tuid)[source]
Returns

The datetime object corresponding to the TUID, with microseconds discarded.

Return type

datetime

classmethod is_valid(tuid)[source]

Test if tuid is valid. A valid tuid is a string formatted as YYYYmmDD-HHMMSS-sss-******.

Parameters

tuid (str) – a tuid string

Returns

True if the string is a valid TUID.

Return type

bool

Raises

ValueError – Invalid format

classmethod uuid(tuid)[source]
Returns

the uuid (universally unique identifier) component of the TUID, corresponding to the last 6 characters.

Return type

str
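
A usage sketch (the TUID value is a placeholder in the required format):

from quantify_core.data.types import TUID

tuid = TUID("20210301-123456-789-abcdef")
TUID.is_valid(tuid)   # True
TUID.datetime(tuid)   # datetime.datetime(2021, 3, 1, 12, 34, 56, 789000)
TUID.uuid(tuid)       # 'abcdef'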

handling

Utilities for handling data.

class DecodeToNumpy(list_to_ndarray=False, *args, **kwargs)[source]
__init__(list_to_ndarray=False, *args, **kwargs)[source]

Decodes a JSON object to Python/Numpy’s objects.

Example

json.loads(json_string, cls=DecodeToNumpy, list_to_ndarray=True)

Parameters

list_to_ndarray – If True, will try to convert python lists to a numpy array.

concat_dataset(tuids, dim='dim_0', name=None, analysis_name=None)[source]

This function takes in a list of TUIDs and concatenates the corresponding datasets. It adds the TUIDs as a coordinate in the new dataset.

By default, we will extract the unprocessed dataset from each directory, but if analysis_name is specified, we will extract the processed dataset for that analysis.

Parameters
  • tuids (List[TUID]) – List of TUIDs.

  • dim (str (default: 'dim_0')) – Dimension along which to concatenate the datasets.

  • analysis_name (Optional[str] (default: None)) – In the case that we want to extract the processed dataset for a given analysis, this is the name of the analysis.

  • name (Optional[str] (default: None)) – The name of the concatenated dataset. If None, use the name of the first dataset in the list.

Return type

Dataset

Returns

Concatenated dataset with new TUID and references to the old TUIDs.

create_exp_folder(tuid, name='', datadir=None)[source]

Creates an empty folder to store an experiment container.

If the folder already exists, simply returns the experiment folder corresponding to the TUID.

Parameters
  • tuid (TUID) – A timestamp based human-readable unique identifier.

  • name (str (default: '')) – Optional name to identify the folder.

  • datadir (Optional[str] (default: None)) – path of the data directory. If None, uses get_datadir() to determine the data directory.

Return type

str

Returns

Full path of the experiment folder following format: /datadir/YYYYmmDD/YYYYmmDD-HHMMSS-sss-******-name/.

default_datadir(verbose=True)[source]

Returns (and optionally prints) a default datadir path.

Intended for fast prototyping, tutorials, examples, etc.

Parameters

verbose (bool (default: True)) – If True prints the returned datadir.

Return type

Path

Returns

The Path.home() / "quantify-data" path.

extract_parameter_from_snapshot(snapshot, parameter)[source]

A function which takes a parameter and extracts it from a snapshot, including the case where the parameter is part of a nested submodule within a QCoDeS instrument.

Parameters
  • snapshot (Dict[str, Any]) – The snapshot

  • parameter (str) – The full address of the QCoDeS parameter as a string, in the format "instrument.submodule.submodule.parameter" (an arbitrary number of nested submodules is allowed).

Return type

Dict[str, Any]

Returns

The dict specifying the parameter properties which was extracted from the snapshot

gen_tuid(time_stamp=None)[source]

Generates a TUID based on the current time.

Parameters

time_stamp (Optional[datetime] (default: None)) – Optional, can be passed to ensure the tuid is based on a specific time.

Return type

TUID

Returns

Timestamp based uid.

get_datadir()[source]

Returns the current data directory. The data directory can be changed using set_datadir().

Return type

str

Returns

The current data directory.
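
A usage sketch, combining this with default_datadir() and set_datadir(), both documented in this module:

from quantify_core.data import handling as dh

dh.set_datadir(dh.default_datadir())  # e.g. ~/quantify-data, for prototyping
print(dh.get_datadir())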

get_latest_tuid(contains='')[source]

Returns the most recent tuid.

Tip

This function is similar to get_tuids_containing() but is preferred if one is only interested in the most recent TUID for performance reasons.

Parameters

contains (str (default: '')) – An optional string contained in the experiment name.

Return type

TUID

Returns

The latest TUID.

Raises

FileNotFoundError – No data found.

get_tuids_containing(contains='', t_start=None, t_stop=None, max_results=9223372036854775807, reverse=False)[source]

Returns a list of tuids containing a specific label.

Tip

If one is only interested in the most recent TUID, get_latest_tuid() is preferred for performance reasons.

Parameters
  • contains (default: '') – A string contained in the experiment name.

  • t_start (default: None) – datetime to search from, inclusive. If a string is specified, it will be converted to a datetime object using parse. If no value is specified, will use the year 1 as a reference t_start.

  • t_stop (default: None) – datetime to search until, exclusive. If a string is specified, it will be converted to a datetime object using parse. If no value is specified, will use the current time as a reference t_stop.

  • max_results (default: 9223372036854775807) – Maximum number of results to return. Defaults to unlimited.

  • reverse (default: False) – If False, sorts tuids chronologically, if True sorts by most recent.

Returns

A list of TUID objects.

Return type

list

Raises

FileNotFoundError – No data found.

get_varying_parameter_values(tuids, parameter)[source]

A function that gets a parameter which varies over multiple experiments and puts it in an ndarray.

Parameters
  • tuids (List[TUID]) – The list of TUIDs from which to get the varying parameter.

  • parameter (str) – The name and address of the QCoDeS parameter from which to get the value, including the instrument name and all submodules. For example "current_source.module0.dac0.current".

Return type

ndarray

Returns

The values of the varying parameter.

grow_dataset(dataset)[source]

Resizes the dataset by doubling the current length of all arrays.

Parameters

dataset (Dataset) – The dataset to resize.

Return type

Dataset

Returns

The resized dataset.

initialize_dataset(settable_pars, setpoints, gettable_pars)[source]

Initialize an empty dataset based on settable_pars, setpoints and gettable_pars.

Parameters
  • settable_pars (Iterable) – A list of M settables.

  • setpoints (ndarray) – An (N*M) array.

  • gettable_pars (Iterable) – A list of gettables.

Returns

The dataset.
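
A minimal sketch, assuming QCoDeS ManualParameters as stand-ins for the settable and gettable (the import path shown is for recent QCoDeS versions):

import numpy as np
from qcodes.parameters import ManualParameter

from quantify_core.data.handling import initialize_dataset

time_par = ManualParameter("time", unit="s", label="Time")
signal_par = ManualParameter("signal", unit="V", label="Signal")

setpoints = np.linspace(0, 100e-6, 50).reshape(-1, 1)  # (N, M=1) array
dset = initialize_dataset([time_par], setpoints, [signal_par])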

load_dataset(tuid, datadir=None, name='dataset.hdf5')[source]

Loads a dataset specified by a tuid.

Tip

This method also works when specifying only the first part of a TUID.

Note

This method uses load_dataset() to ensure the file is closed after loading, as datasets are intended to be immutable after performing the initial experiment.

Parameters
  • tuid (TUID) – A TUID string. It is also possible to specify only the first part of a tuid.

  • datadir (Optional[str] (default: None)) – Path of the data directory. If None, uses get_datadir() to determine the data directory.

Return type

Dataset

Returns

The dataset.

Raises

FileNotFoundError – No data found for specified date.

load_dataset_from_path(path)[source]

Loads a Dataset with a specific engine preference.

Before returning the dataset AdapterH5NetCDF.recover() is applied.

This function tries to load the dataset until success with the following engine preference:

  • h5netcdf

  • netcdf4

  • No engine specified (xarray default)

Parameters

path (Union[Path, str]) – Path to the dataset.

Return type

Dataset

Returns

The loaded dataset.

load_processed_dataset(tuid, analysis_name)[source]

Given an experiment TUID and the name of an analysis previously run on it, retrieves the processed dataset resulting from that analysis.

Parameters
  • tuid (TUID) – TUID of the experiment from which to load the data.

  • analysis_name (str) – Name of the Analysis from which to load the data.

Return type

Dataset

Returns

A dataset containing the results of the analysis.

load_quantities_of_interest(tuid, analysis_name)[source]

Given an experiment TUID and the name of an analysis previously run on it, retrieves the corresponding “quantities of interest” data.

Parameters
  • tuid (TUID) – TUID of the experiment.

  • analysis_name (str) – Name of the Analysis from which to load the data.

Return type

dict

Returns

A dictionary containing the loaded quantities of interest.

load_snapshot(tuid, datadir=None, list_to_ndarray=False, file='snapshot.json')[source]

Loads a snapshot specified by a tuid.

Parameters
  • tuid (TUID) – A TUID string. It is also possible to specify only the first part of a tuid.

  • datadir (Optional[str] (default: None)) – Path of the data directory. If None, uses get_datadir() to determine the data directory.

  • list_to_ndarray (bool (default: False)) – Uses an internal DecodeToNumpy decoder which allows a user to automatically convert a list to numpy array during deserialization of the snapshot.

  • file (str (default: 'snapshot.json')) – Filename to load.

Return type

dict

Returns

The snapshot.

Raises

FileNotFoundError – No data found for specified date.

locate_experiment_container(tuid, datadir=None)[source]

Returns the path to the experiment container of the specified tuid.

Parameters
  • tuid (TUID) – A TUID string. It is also possible to specify only the first part of a tuid.

  • datadir (Optional[str] (default: None)) – Path of the data directory. If None, uses get_datadir() to determine the data directory.

Return type

str

Returns

The path to the experiment container

Raises

FileNotFoundError – Experiment container not found.

multi_experiment_data_extractor(experiment, parameter, *, new_name=None, t_start=None, t_stop=None, analysis_name=None, dimension='dim_0')[source]

A data extraction function which loops through multiple quantify data directories and extracts the selected varying parameter value and corresponding datasets, then compiles this data into a single dataset for further analysis.

By default, we will extract the unprocessed dataset from each directory, but if analysis_name is specified, we will extract the processed dataset for that analysis.

Parameters
  • experiment (str) – The experiment to be included in the new dataset. For example “Pulsed spectroscopy”

  • parameter (str) – The name and address of the QCoDeS parameter from which to get the value, including the instrument name and all submodules. For example "current_source.module0.dac0.current".

  • new_name (Optional[str] (default: None)) – The name of the new multifile dataset. If no new name is given, it will create a new name as experiment vs instrument.

  • t_start (Optional[str] (default: None)) – Datetime to search from, inclusive. If a string is specified, it will be converted to a datetime object using parse. If no value is specified, will use the year 1 as a reference t_start.

  • t_stop (Optional[str] (default: None)) – Datetime to search until, exclusive. If a string is specified, it will be converted to a datetime object using parse. If no value is specified, will use the current time as a reference t_stop.

  • analysis_name (Optional[str] (default: None)) – In the case that we want to extract the processed dataset for a given analysis, this is the name of the analysis.

  • dimension (Optional[str] (default: 'dim_0')) – The name of the dataset dimension to concatenate over

Return type

Dataset

Returns

The compiled quantify dataset.

set_datadir(datadir=None)[source]

Sets the data directory.

Parameters

datadir (Optional[str] (default: None)) – Path of the data directory. If set to None, resets the datadir to the default datadir (<top_level>/data).

Return type

None

snapshot(update=False, clean=True)[source]

State of all instruments in the setup as a JSON-compatible dictionary (everything that the custom JSON encoder class NumpyJSONEncoder supports).

Parameters
  • update (bool (default: False)) – If True, first gets all values before filling the snapshot.

  • clean (bool (default: True)) – If True, removes certain keys from the snapshot to create a more readable and compact snapshot.

Return type

dict

to_gridded_dataset(quantify_dataset, dimension='dim_0', coords_names=None)[source]

Converts a flattened (a.k.a. “stacked”) dataset, such as the one generated by initialize_dataset(), to a dataset in which the measured values are mapped onto a grid in the xarray format.

This will be meaningful only if the data itself corresponds to a gridded measurement.

Note

Each individual (x0[i], x1[i], x2[i], ...) setpoint must be unique.

Conversions applied:

  • The names "x0", "x1", ... will correspond to the names of the Dimensions.

  • The unique values for each of the x0, x1, ... Variables are converted to Coordinates.

  • The y0, y1, ... Variables are reshaped into a (multi-)dimensional grid and associated to the Coordinates.

Parameters
  • quantify_dataset (Dataset) – Input dataset in the format generated by the initialize_dataset.

  • dimension (str (default: 'dim_0')) – The flattened xarray Dimension.

  • coords_names (Optional[Iterable] (default: None)) – Optionally specify explicitly which Variables correspond to orthogonal coordinates, e.g. datasets holds values for ("x0", "x1") but only “x0” is independent: to_gridded_dataset(dset, coords_names=["x0"]).

Return type

Dataset

Returns

The new dataset.
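
A usage sketch, assuming the TUID (a placeholder here) refers to a gridded 2D sweep stored in the data directory:

from quantify_core.data.handling import load_dataset, to_gridded_dataset

dset = load_dataset(tuid="20210301-123456-789-abcdef")  # placeholder TUID
dset_grid = to_gridded_dataset(dset)  # "x0", "x1" become Dimensions/Coordinates
dset_grid.y0.plot()                   # xarray can now plot y0 on the grid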

trim_dataset(dataset)[source]

Trim NaNs from a dataset, useful in the case of a dynamically resized dataset (e.g. adaptive loops).

Parameters

dataset (Dataset) – The dataset to trim.

Return type

Dataset

Returns

The dataset, trimmed and resized if necessary or unchanged.

write_dataset(path, dataset)[source]

Writes a Dataset to a file with the h5netcdf engine.

Before writing the AdapterH5NetCDF.adapt() is applied.

To accommodate complex-type numbers and arrays, invalid_netcdf=True is used.

Parameters
  • path (Union[Path, str]) – Path to the file including filename and extension

  • dataset (Dataset) – The Dataset to be written to file.

Return type

None

dataset_adapters

Utilities for dataset (python object) handling.

class AdapterH5NetCDF[source]

Quantify dataset adapter for the h5netcdf engine.

It has the functionality of adapting the Quantify dataset to a format compatible with the h5netcdf xarray backend engine that is used to write and load the dataset to/from disk.

Warning

The h5netcdf engine has minor issues when performing a two-way trip of the dataset. The types of some attributes are not preserved, e.g., list- and tuple-like objects are loaded as numpy arrays of dtype=object.

classmethod adapt(dataset)[source]

Serializes to JSON the dataset and variables attributes.

To prevent the JSON serialization for specific items, their names should be listed under the attribute named json_serialize_exclude (for each attrs dictionary).

Parameters

dataset (Dataset) – Dataset that needs to be adapted.

Return type

Dataset

Returns

Dataset in which the attributes have been replaced with their JSON strings version.

static attrs_convert(attrs, inplace=False, vals_converter=<function dumps>)[source]

Converts to/from JSON string the values of the keys which are not listed in the json_serialize_exclude list.

Parameters
  • attrs – The input dictionary.

  • inplace (default: False) – If True the values are replaced in place, otherwise a deepcopy of attrs is performed first.

classmethod recover(dataset)[source]

Reverts the action of .adapt().

To prevent the JSON de-serialization for specific items, their names should be listed under the attribute named json_serialize_exclude (for each attrs dictionary).

Parameters

dataset (Dataset) – Dataset from which to recover the original format.

Return type

Dataset

Returns

Dataset in which the attributes have been replaced with their python objects version.

class DatasetAdapterBase[source]

A generic interface for a dataset adapter.

Note

It might be difficult to grasp the generic purpose of this class. See AdapterH5NetCDF for a specialized use case.

A dataset adapter is intended to “adapt”/“convert” a dataset to a format compatible with some other piece of software such as a function, interface, read/write back end, etc. The main use case is to define the interface of the AdapterH5NetCDF that converts the Quantify dataset for loading and writing to/from disk.

Subclasses implementing this interface are intended to be a two-way bridge to some other object/interface/backend, which we refer to as the “Target” of the adapter.

The function .adapt() should return a dataset to be consumed by the Target.

The function .recover() should receive a dataset generated by the Target.

abstract classmethod adapt(dataset)[source]

Converts the dataset to a format consumed by the Target.

Return type

Dataset

abstract classmethod recover(dataset)[source]

Inverts the action of the .adapt() method.

Return type

Dataset

class DatasetAdapterIdentity[source]

A dataset adapter that does not modify the datasets in any way.

Intended to be used just as an object that respects the adapter interface defined by DatasetAdapterBase.

A particular use case is the backwards compatibility for loading and writing older versions of the Quantify dataset.

classmethod adapt(dataset)[source]
Return type

Dataset

Returns

Same dataset with no modifications.

classmethod recover(dataset)[source]
Return type

Dataset

Returns

Same dataset with no modifications.

dataset_attrs

Utilities for handling the attributes of xarray.Dataset and xarray.DataArray (python objects).

class QCoordAttrs(unit='', long_name='', is_main_coord=None, uniformly_spaced=None, is_dataset_ref=False, json_serialize_exclude=<factory>)[source]

A dataclass representing the attrs attribute of main and secondary coordinates.

All attributes are mandatory to be present but can be None.

Examples

from quantify_core.utilities import examples_support
examples_support.mk_main_coord_attrs()
{'unit': '',
 'long_name': '',
 'is_main_coord': True,
 'uniformly_spaced': True,
 'is_dataset_ref': False,
 'json_serialize_exclude': []}
examples_support.mk_secondary_coord_attrs()
{'unit': '',
 'long_name': '',
 'is_main_coord': False,
 'uniformly_spaced': True,
 'is_dataset_ref': False,
 'json_serialize_exclude': []}
is_dataset_ref: bool = False

Flags if it is an array of quantify_core.data.types.TUIDs of other datasets.

is_main_coord: bool = None

When set to True, flags the xarray coordinate to correspond to a main coordinate, otherwise (False) it corresponds to a secondary coordinate.

json_serialize_exclude: List[str]

A list of strings corresponding to the names of other attributes that should not be json-serialized when writing the dataset to disk. Empty by default.

long_name: str = ''

A long name for this coordinate.

uniformly_spaced: Optional[bool] = None

Indicates if the values are uniformly spaced.

unit: str = ''

The units of the values.

class QDatasetAttrs(tuid=None, dataset_name='', dataset_state=None, timestamp_start=None, timestamp_end=None, quantify_dataset_version='2.0.0', software_versions=<factory>, relationships=<factory>, json_serialize_exclude=<factory>)[source]

A dataclass representing the attrs attribute of the Quantify dataset.

All attributes are mandatory to be present but can be None.

Example

import pendulum
from quantify_core.utilities import examples_support

examples_support.mk_dataset_attrs(
    dataset_name="Bias scan",
    timestamp_start=pendulum.now().to_iso8601_string(),
    timestamp_end=pendulum.now().add(minutes=2).to_iso8601_string(),
    dataset_state="done",
)
{'tuid': '20230309-211157-830-1edad1',
 'dataset_name': 'Bias scan',
 'dataset_state': 'done',
 'timestamp_start': '2023-03-09T21:11:57.830400+00:00',
 'timestamp_end': '2023-03-09T21:13:57.830462+00:00',
 'quantify_dataset_version': '2.0.0',
 'software_versions': {},
 'relationships': [],
 'json_serialize_exclude': []}
dataset_name: str = ''

The dataset name, usually the same as the experiment name included in the name of the experiment container.

dataset_state: Literal[None, 'running', 'interrupted (safety)', 'interrupted (forced)', 'done'] = None

Denotes the last known state of the experiment/data acquisition that served to ‘build’ this dataset. Can be used later to filter ‘bad’ datasets.

json_serialize_exclude: List[str]

A list of strings corresponding to the names of other attributes that should not be json-serialized when writing the dataset to disk. Empty by default.

quantify_dataset_version: str = '2.0.0'

A string identifying the version of this Quantify dataset for backwards compatibility.

relationships: List[QDatasetIntraRelationship]

A list of relationships within the dataset specified as list of dictionaries that comply with the QDatasetIntraRelationship.

software_versions: Dict[str, str]

A mapping of other software packages whose versions are relevant to log for this dataset, e.g. the git tag or commit hash of a lab repository.

Example

import pendulum
from quantify_core.utilities import examples_support

examples_support.mk_dataset_attrs(
    dataset_name="My experiment",
    timestamp_start=pendulum.now().to_iso8601_string(),
    timestamp_end=pendulum.now().add(minutes=2).to_iso8601_string(),
    software_versions={
        "lab_fridge_magnet_driver": "v1.4.2",  # software version/tag
        "my_lab_repo": "9d8acf63f48c469c1b9fa9f2c3cf230845f67b18",  # git commit hash
    },
)
{'tuid': '20230309-211157-849-a56d28',
 'dataset_name': 'My experiment',
 'dataset_state': None,
 'timestamp_start': '2023-03-09T21:11:57.849184+00:00',
 'timestamp_end': '2023-03-09T21:13:57.849232+00:00',
 'quantify_dataset_version': '2.0.0',
 'software_versions': {'lab_fridge_magnet_driver': 'v1.4.2',
  'my_lab_repo': '9d8acf63f48c469c1b9fa9f2c3cf230845f67b18'},
 'relationships': [],
 'json_serialize_exclude': []}
timestamp_end: Optional[str] = None

Human-readable timestamp (ISO8601) as returned by pendulum.now().to_iso8601_string() (docs). Specifies when the experiment/data acquisition ended.

timestamp_start: Optional[str] = None

Human-readable timestamp (ISO8601) as returned by pendulum.now().to_iso8601_string() (docs). Specifies when the experiment/data acquisition started.

tuid: Optional[str] = None

The time-based unique identifier of the dataset. See quantify_core.data.types.TUID.

class QDatasetIntraRelationship(item_name=None, relation_type=None, related_names=<factory>, relation_metadata=<factory>)[source]

A dataclass representing a dictionary that specifies a relationship between dataset variables.

A prominent example is calibration points, contained within one variable or several variables, that are necessary to correctly interpret the data of another variable.

Examples

This is what the attributes of a dataset containing a q0 main variable and q0_cal secondary variables would look like. The q0_cal variable holds calibration datapoints. See Quantify dataset - examples for examples with more context.

from quantify_core.data.dataset_attrs import QDatasetIntraRelationship
from quantify_core.utilities import examples_support

attrs = examples_support.mk_dataset_attrs(
    relationships=[
        QDatasetIntraRelationship(
            item_name="q0",
            relation_type="calibration",
            related_names=["q0_cal"],
        ).to_dict()
    ]
)
item_name: str | None = None

The name of the coordinate/variable to which we want to relate other coordinates/variables.

related_names: List[str]

A list of names related to the item_name.

relation_metadata: Dict[str, Any]

A free-form dictionary to store additional information relevant to this relationship.

relation_type: str | None = None

A string specifying the type of relationship.

Reserved relation types:

"calibration" - Specifies a list of main variables used as calibration data for the main variables whose name is specified by the item_name.

class QVarAttrs(unit='', long_name='', is_main_var=None, uniformly_spaced=None, grid=None, is_dataset_ref=False, has_repetitions=False, json_serialize_exclude=<factory>)[source]

A dataclass representing the attrs attribute of main and secondary variables.

All attributes must be present, but they may be None.

Examples

from quantify_core.utilities import examples_support
examples_support.mk_main_var_attrs(coords=["time"])
{'unit': '',
 'long_name': '',
 'is_main_var': True,
 'uniformly_spaced': True,
 'grid': True,
 'is_dataset_ref': False,
 'has_repetitions': False,
 'json_serialize_exclude': [],
 'coords': ['time']}
examples_support.mk_secondary_var_attrs(coords=["cal"])
{'unit': '',
 'long_name': '',
 'is_main_var': False,
 'uniformly_spaced': True,
 'grid': True,
 'is_dataset_ref': False,
 'has_repetitions': False,
 'json_serialize_exclude': [],
 'coords': ['cal']}
grid: bool | None = None

Indicates if the variable's data lie on a grid, which does not need to be uniformly spaced along all dimensions. In other words, specifies if the corresponding main coordinates are the 'unrolled' points (also known as 'unstacked') corresponding to a grid.

If True, then it is possible to use quantify_core.data.handling.to_gridded_dataset() to convert the variable to a 'stacked' version.

has_repetitions: bool = False

Indicates that the outermost dimension of this variable is a repetitions dimension. This attribute is intended to allow easy programmatic detection of such dimension. It can be used, for example, to average along this dimension before an automatic live plotting or analysis.

is_dataset_ref: bool = False

Flags if this is an array of quantify_core.data.types.TUIDs referring to other datasets. See also Dataset for a "nested MeasurementControl" experiment.

is_main_var: bool | None = None

When set to True, flags this xarray data variable to correspond to a main variable, otherwise (False) it corresponds to a secondary variable.

json_serialize_exclude: List[str]

A list of strings corresponding to the names of other attributes that should not be json-serialized when writing the dataset to disk. Empty by default.

long_name: str = ''

A long name for this variable.

uniformly_spaced: bool | None = None

Indicates if the values are uniformly spaced. This does not apply to ‘true’ main variables but, because a MultiIndex is not supported yet by xarray when writing to disk, some coordinate variables have to be stored as main variables instead.

unit: str = ''

The units of the values.

get_main_coords(dataset)[source]

Finds the main coordinates in the dataset (except secondary coordinates).

Finds the xarray coordinates in the dataset that have their attributes is_main_coord set to True (inside the xarray.DataArray.attrs dictionary).

Parameters

dataset (Dataset) – The dataset to scan.

Return type

List[str]

Returns

The names of the main coordinates.

get_main_dims(dataset)[source]

Determines the ‘main’ dimensions in the dataset.

Each of the dimensions returned is the outermost dimension of a main coordinate/variable, or the second one when a repetitions dimension is present (see has_repetitions).

These dimensions are detected based on is_main_coord and is_main_var attributes.

Warning

The dimensions listed here should be considered "incompatible" in the sense that a main coordinate/variable must lie on one and only one such dimension.

Note

The dimensions on which the secondary coordinates/variables lie are not included in this list. See also get_secondary_dims().

Parameters

dataset (Dataset) – The dataset from which to extract the main dimensions.

Return type

List[str]

Returns

The names of the main dimensions in the dataset.

get_main_vars(dataset)[source]

Finds the main variables in the dataset (except secondary variables).

Finds the xarray data variables in the dataset that have their attributes is_main_var set to True (inside the xarray.DataArray.attrs dictionary).

Parameters

dataset (Dataset) – The dataset to scan.

Return type

List[str]

Returns

The names of the main variables.

get_secondary_coords(dataset)[source]

Finds the secondary coordinates in the dataset.

Finds the xarray coordinates in the dataset that have their attributes is_main_coord set to False (inside the xarray.DataArray.attrs dictionary).

Parameters

dataset (Dataset) – The dataset to scan.

Return type

List[str]

Returns

The names of the secondary coordinates.

get_secondary_dims(dataset)[source]

Returns the ‘main’ secondary dimensions.

For details see get_main_dims(), is_main_var and is_main_coord.

Parameters

dataset (Dataset) – The dataset from which to extract the secondary dimensions.

Return type

List[str]

Returns

The names of the ‘main’ dimensions of secondary coordinates/variables in the dataset.

get_secondary_vars(dataset)[source]

Finds the secondary variables in the dataset.

Finds the xarray data variables in the dataset that have their attributes is_main_var set to False (inside the xarray.DataArray.attrs dictionary).

Parameters

dataset (Dataset) – The dataset to scan.

Return type

List[str]

Returns

The names of the secondary variables.
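A minimal sketch of these accessors in action, using one of the mock dataset factories documented further below (the exact names returned depend on the factory used):

from quantify_core.data import dataset_attrs
from quantify_core.utilities import dataset_examples

dataset = dataset_examples.mk_t1_av_with_cal_dataset()
dataset_attrs.get_main_vars(dataset)       # e.g. ['q0']
dataset_attrs.get_secondary_vars(dataset)  # e.g. ['q0_cal']
dataset_attrs.get_main_dims(dataset)       # e.g. ['main_dim']
dataset_attrs.get_secondary_dims(dataset)  # e.g. ['cal_dim']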

experiment

Utilities for managing experiment data.

class QuantifyExperiment(tuid, dataset=None)[source]

Class which represents all data related to an experiment. This allows the user to run experiments and store data without the quantify_core.measurement.control.MeasurementControl. The class serves as an initial interface for other data storage backends.

__init__(tuid, dataset=None)[source]

Creates an instance of the QuantifyExperiment.

Parameters
  • tuid (Optional[str]) – TUID to use

  • dataset (default: None) – If the TUID is None, use the TUID from this dataset

load_dataset()[source]

Loads the quantify dataset associated with the TUID set within the class.

Raises

FileNotFoundError – If no file with a dataset can be found

Return type

Dataset

load_metadata()[source]

Loads the metadata from the directory specified by experiment_directory.

Return type

Dict[str, Any]

Returns

The loaded metadata from disk. None if no file is found.

Raises

FileNotFoundError – If no file with metadata can be found

load_snapshot()[source]

Loads the snapshot from the directory specified by experiment_directory.

Return type

Dict[str, Any]

Returns

The loaded snapshot from disk

Raises

FileNotFoundError – If no file with a snapshot can be found

load_text(rel_path)[source]

Loads a string from a text file from the path specified by experiment_directory / rel_path.

Parameters

rel_path (str) – path relative to the base directory of the experiment, e.g. “data.json” or “my_folder/data.txt”

Return type

str

Returns

The loaded text from disk

Raises

FileNotFoundError – If no file can be found at rel_path

save_metadata(metadata=None)[source]

Writes the metadata to disk in the directory specified by experiment_directory.

Parameters

metadata (Optional[Dict[str, Any]] (default: None)) – The metadata to be written to the directory

save_snapshot(snapshot=None)[source]

Writes the snapshot to disk in the directory specified by experiment_directory.

Parameters

snapshot (Optional[Dict[str, Any]] (default: None)) – The snapshot to be written to the directory

save_text(text, rel_path)[source]

Saves a string to a text file in the path specified by experiment_directory / rel_path.

Parameters
  • text (str) – text to be saved

  • rel_path (str) – path relative to the base directory of the experiment, e.g. “data.json” or “my_folder/data.txt”

Return type

None

write_dataset(dataset)[source]

Writes the quantify dataset to the directory specified by experiment_directory.

Parameters

dataset (Dataset) – The dataset to be written to the directory

property experiment_directory: Path

Returns a path to the experiment directory containing the TUID set within the class.
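A minimal usage sketch, assuming a data directory has been configured (e.g. via quantify_core.data.handling.set_datadir) and an experiment container with this illustrative TUID exists:

from quantify_core.data.experiment import QuantifyExperiment

experiment = QuantifyExperiment(tuid="20230309-211157-830-1edad1")
experiment.save_text("some notes", "notes.txt")  # stored inside experiment_directory
notes = experiment.load_text("notes.txt")        # 'some notes'
dataset = experiment.load_dataset()              # raises FileNotFoundError if absent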

measurement

Import aliases and the objects they map to:

  • quantify_core.measurement.MeasurementControl maps to quantify_core.measurement.control.MeasurementControl

  • quantify_core.measurement.grid_setpoints maps to quantify_core.measurement.control.grid_setpoints

  • quantify_core.measurement.Gettable maps to quantify_core.measurement.types.Gettable

  • quantify_core.measurement.Settable maps to quantify_core.measurement.types.Settable

types

Module containing the core types for use with the MeasurementControl.

class Gettable(obj: Any)[source]

Defines the Gettable concept. An object is considered a complete Gettable if it satisfies the schema below. This class does not wrap the passed-in object; it simply verifies it and returns it.

attributes

properties

  • name – identifier. Type: string, or an array of strings.

  • label – axis descriptor. Type: string, or an array of strings.

  • unit – unit of measurement. Type: string, or an array of strings.

  • batched – true if data is processed in batches, false otherwise. Type: boolean.

  • batch_size – When .batched=True, indicates the (maximum) size of the batch of datapoints that this gettable supports. The measurement loop will effectively use the min(settable(s).batch_size, gettable(s).batch_size). Type: integer.

methods

properties

  • get – get values from this device. Type: object.

  • prepare – called before the acquisition loop. Type: object.

  • finish – called once after the acquisition loop. Type: object.

class Settable(obj: Any)[source]

Defines the Settable concept. An object is considered a complete Settable if it satisfies the schema below. This class does not wrap the passed-in object; it simply verifies it and returns it.

attributes

properties

  • name – identifier. Type: string.

  • label – axis descriptor. Type: string.

  • unit – unit of measurement. Type: string.

  • batched – true if data is processed in batches, false otherwise. Type: boolean.

  • batch_size – When .batched=True, indicates the (maximum) size of the batch of datapoints that this settable supports. The measurement loop will effectively use the min(settable(s).batch_size, gettable(s).batch_size). Type: integer.

methods

properties

  • set – send data to this device. Type: object.

  • prepare – called before the acquisition loop. Type: object.

  • finish – called once after the acquisition loop. Type: object.
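A minimal sketch of objects satisfying both concepts (all names and values are illustrative):

import numpy as np

from quantify_core.measurement.types import Gettable, Settable

class MySettable:
    # bare-bones object satisfying the Settable concept
    def __init__(self):
        self.name = "amp"
        self.label = "Amplitude"
        self.unit = "V"
        self.value = 0.0

    def set(self, value):
        self.value = value

class MyGettable:
    # bare-bones object satisfying the Gettable concept
    def __init__(self, settable):
        self.name = "sig"
        self.label = "Signal"
        self.unit = "V"
        self._settable = settable

    def get(self):
        return np.cos(self._settable.value)

settable = Settable(MySettable())  # verifies the object and returns it
gettable = Gettable(MyGettable(settable))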

is_batched(obj)[source]
Return type

bool

Returns

The .batched attribute of the settable/gettable obj; False if not present.

is_object_or_function(checker, instance)[source]

Checks if an instance is an object/function

Return type

bool

Returns

True if the instance is an object or a function, False otherwise

control

Module containing the MeasurementControl.

class MeasurementControl(name)[source]

Instrument responsible for controlling the data acquisition loop.

MeasurementControl (MC) is based on the notion that every experiment consists of the following steps:

  1. Set some parameter(s) (settable_pars)

  2. Measure some other parameter(s) (gettable_pars)

  3. Store the data.

Example

meas_ctrl.settables(mw_source1.freq)
meas_ctrl.setpoints(np.arange(5e9, 5.2e9, 100e3))
meas_ctrl.gettables(pulsar_QRM.signal)
dataset = meas_ctrl.run(name='Frequency sweep')

MC exists to enforce structure on experiments. Enforcing this structure allows:

  • Standardization of data storage.

  • Providing basic real-time visualization.

MC imposes minimal constraints and allows:

  • Iterative loops, experiments in which setpoints are processed step by step.

  • Batched loops, experiments in which setpoints are processed in batches.

  • Adaptive loops, setpoints are determined based on measured values.

__init__(name)[source]

Creates an instance of the Measurement Control.

Parameters

name (str) – name of this instrument.

clear_experiment_data()[source]

Removes all experiment_data parameters from the experiment_data submodule.

gettables(gettable_pars)[source]

Define the parameters to be acquired during the acquisition loop.

The Gettable helper class defines the requirements for a Gettable object.

Parameters

gettable_pars

parameter(s) to get during the acquisition loop; accepts:
  • list or tuple of multiple Gettable objects

  • a single Gettable object

measurement_description()[source]

Return a serializable description of the latest measurement

Users can add additional information to the description manually.

Return type

Dict[str, Any]

Returns

Dictionary with description of the measurement

print_progress(progress_message=None)[source]

Prints the provided progress_message or a default one, and calls the callback specified by on_progress_callback. Printing can be suppressed with .verbose(False).

run(name='', soft_avg=1, lazy_set=None, save_data=True)[source]

Starts a data acquisition loop.

Parameters
  • name (str (default: '')) – Name of the measurement. It is included in the name of the data files.

  • soft_avg (int (default: 1)) – Number of software averages to be performed by the measurement control. E.g. if soft_avg=3 the full dataset will be measured 3 times and the measured values will be averaged element-wise, the averaged dataset is then returned.

  • lazy_set (Optional[bool] (default: None)) –

    If True and a setpoint equals the previous setpoint, the .set method of the settable will not be called for that iteration. If this argument is None, the .lazy_set() ManualParameter is used instead (which by default is False).

    Warning

    This feature is not available yet when running in batched mode.

  • save_data (bool (default: True)) – If True, the measurement data is stored.

Return type

Dataset

run_adaptive(name, params, lazy_set=None)[source]

Starts a data acquisition loop using an adaptive function.

Warning

The functionality of this mode can be complex - it is recommended to read the relevant long form documentation.

Parameters
  • name – Name of the measurement. This name is included in the name of the data files.

  • params – Key-value parameters that describe the adaptive function to use and any further parameters for that function.

  • lazy_set (Optional[bool] (default: None)) – If True and a setpoint equals the previous setpoint, the .set method of the settable will not be called for that iteration. If this argument is None, the .lazy_set() ManualParameter is used instead (which by default is False).

Return type

Dataset
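A sketch of the params format, reusing the illustrative mw_source1 and pulsar_QRM instruments from the example above; the exact keys depend on the chosen adaptive function:

from scipy import optimize

meas_ctrl.settables(mw_source1.freq)
meas_ctrl.gettables(pulsar_QRM.signal)
af_pars = {
    "adaptive_function": optimize.minimize,  # the adaptive function to use
    "x0": [5.1e9],                           # initial guess passed to that function
    "method": "Nelder-Mead",                 # further parameter for that function
}
dataset = meas_ctrl.run_adaptive("Frequency minimization", af_pars)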

set_experiment_data(experiment_data, overwrite=True)[source]

Populates the experiment_data submodule with experiment_data parameters

Parameters
  • experiment_data (Dict[str, Any]) –

    Dict specifying the names of the experiment_data parameters and their values. Follows the format:

{
    "parameter_name": {
        "value": 10.2,
        "label": "parameter label",
        "unit": "Hz",
    }
}
    

  • overwrite (bool (default: True)) – If True, clear all previously saved experiment_data parameters and save new ones. If False, keep all previously saved experiment_data parameters and change their values if necessary

setpoints(setpoints)[source]

Set setpoints that determine values to be set in acquisition loop.

Tip

Use column_stack() to reshape multiple 1D arrays when setting multiple settables; see the sketch after setpoints_grid() below.

Parameters

setpoints (ndarray) – An array that defines the values to loop over in the experiment. The shape of the array has to be either (N,) or (N,1) for a 1D loop; or (N, M) in the case of an MD loop.

setpoints_grid(setpoints)[source]

Makes a grid from the provided setpoints assuming each array element corresponds to an orthogonal dimension. The resulting gridded points determine values to be set in the acquisition loop.

The gridding is such that the innermost loop corresponds to the batched settable with the smallest .batch_size.

Parameters

setpoints – The values to loop over in the experiment. The grid is reshaped in the same order.
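A sketch contrasting the two setpoint styles for two hypothetical settables t and amp:

import numpy as np

meas_ctrl.settables([t, amp])
# explicit (N, 2) setpoints via column_stack:
meas_ctrl.setpoints(
    np.column_stack((np.linspace(0, 1e-6, 50), np.linspace(0, 0.5, 50)))
)
# or a 50 x 5 = 250 point grid built from two 1D arrays:
meas_ctrl.setpoints_grid([np.linspace(0, 1e-6, 50), np.linspace(0, 0.5, 5)])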

settables(settable_pars)[source]

Define the settable parameters for the acquisition loop.

The Settable helper class defines the requirements for a Settable object.

Parameters

settable_pars – parameter(s) to be set during the acquisition loop, accepts a list or tuple of multiple Settable objects or a single Settable object.

show()[source]

Print short representation of the object to stdout.

instr_plotmon = InstrumentRefParameter( vals=vals.MultiType(vals.Strings(), vals.Enum(None)), instrument=self, name="instr_plotmon", )

Instrument responsible for live plotting. Can be set to None to disable live plotting.

lazy_set = ManualParameter( vals=vals.Bool(), initial_value=False, name="lazy_set", instrument=self, )

If set to True, only set any settable if the setpoint differs from the previous setpoint. Note that this parameter is overridden by the lazy_set argument passed to the run() and run_adaptive() methods.

on_progress_callback = ManualParameter( vals=vals.Callable(), instrument=self, name="on_progress_callback", )

A callback to communicate progress. This should be a callable accepting floats between 0 and 100 indicating the percentage done.

update_interval = ManualParameter( initial_value=0.5, vals=vals.Numbers(min_value=0.1), instrument=self, name="update_interval", )

Interval for updates during the data acquisition loop. Every time more than update_interval has elapsed while acquiring new data points, data is written to file (and the live monitoring detects the updates).

verbose = ManualParameter( vals=vals.Bool(), initial_value=True, instrument=self, name="verbose", )

If set to True, prints to std_out during experiments.

grid_setpoints(setpoints, settables=None)[source]

Makes gridded setpoints. If settables is provided, the gridding is such that the innermost loop corresponds to the batched settable with the smallest .batch_size.

Warning

Using this method typecasts all values into the same type. This may lead to validator errors when setting e.g., a float instead of an int.

Parameters
  • setpoints (Iterable) – A list of arrays that defines the values to loop over in the experiment for each orthogonal dimension. The grid is reshaped in the same order.

  • settables (Optional[Iterable] (default: None)) – A list of settable objects to which the elements in setpoints correspond. Used to correctly grid data when mixing batched and iterative settables.

Returns

An array where the first numpy axis corresponds to individual setpoints.

Return type

ndarray
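A minimal sketch (the output shape follows the description above; the exact row ordering is an implementation detail):

import numpy as np

from quantify_core.measurement import grid_setpoints

gridded = grid_setpoints([np.array([0, 1]), np.array([10, 20, 30])])
gridded.shape  # (6, 2): six setpoints for two settables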

utilities

experiment_helpers

Helpers for performing experiments.

create_plotmon_from_historical(tuid=None, label='')[source]

Creates a plotmon using the dataset of the provided experiment denoted by the tuid in the datadir. Loads the data and draws any required figures.

NB Creating a new plotmon can be slow. Consider using PlotMonitor_pyqt.tuids_extra() to visualize a dataset in the same plotmon.

Parameters
  • tuid (Optional[TUID] (default: None)) – the TUID of the experiment.

  • label (str (default: '')) – if the tuid is not provided, the label will be used to search for the latest dataset.

Return type

PlotMonitor_pyqt

Returns

the plotmon

get_all_parents(instr_mod)[source]

Get a list of all the parent submodules and instruments of a given QCodes instrument, submodule or parameter.

Parameters

instr_mod (Union[Instrument, InstrumentChannel, Parameter]) – The QCodes instrument, submodule or parameter whose parents we wish to find

Return type

List

Returns

A list of all the parents of that object (and the object itself)

load_settings_onto_instrument(instrument, tuid=None, datadir=None, exception_handling='raise')[source]

Loads settings from a previous experiment onto a current Instrument, or any of its submodules or parameters. This information is loaded from the ‘snapshot.json’ file in the provided experiment directory.

Parameters
  • instrument (Union[Instrument, InstrumentChannel, Parameter]) – the Instrument, InstrumentChannel or Parameter to be configured.

  • tuid (TUID) – the TUID of the experiment. If None use latest TUID.

  • datadir (str) – path of the data directory. If None, uses get_datadir() to determine the data directory.

  • exception_handling (Literal['raise', 'warn'] (default: 'raise')) – desired behaviour if an error occurs when trying to get a parameter: raise an exception or give a warning.

Raises

ValueError – if the provided instrument has no match in the loaded snapshot.

Return type

None
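A usage sketch, assuming mw_source1 is an existing QCodes instrument and the data directory contains an experiment with a snapshot.json; the TUID is illustrative:

from quantify_core.utilities.experiment_helpers import load_settings_onto_instrument

# restore all parameters of the instrument from a previous experiment
load_settings_onto_instrument(mw_source1, tuid="20230309-211157-830-1edad1")

# or restore a single parameter, warning instead of raising on failure
load_settings_onto_instrument(
    mw_source1.freq, tuid="20230309-211157-830-1edad1", exception_handling="warn"
)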

dataset_examples

Factories of exemplary and mock datasets to be used for testing and documentation.

mk_2d_dataset_v1(num_amps=10, num_times=100)[source]

Generates a 2D Quantify dataset (v1).

Parameters
  • num_amps (int (default: 10)) – Number of x points.

  • num_times (int (default: 100)) – Number of y points.

mk_nested_mc_dataset(num_points=12, flux_bias_min_max=(-0.04, 0.04), resonator_freqs_min_max=(7000000000.0, 7300000000.0), qubit_freqs_min_max=(4500000000.0, 5000000000.0), t1_values_min_max=(2e-05, 5e-05), seed=112233)[source]

Generates a dataset with dataset references and several coordinates that serve to index the same variables.

Note that each value of resonator_freqs, qubit_freqs and t1_values would have been extracted from a separate dataset corresponding to an individual experiment.

Parameters
  • num_points (int (default: 12)) – Number of datapoints to generate (used for all variables/coordinates).

  • flux_bias_min_max (tuple (default: (-0.04, 0.04))) – Range for mock values.

  • resonator_freqs_min_max (tuple (default: (7000000000.0, 7300000000.0))) – Range for mock values.

  • qubit_freqs_min_max (tuple (default: (4500000000.0, 5000000000.0))) – Range for mock values.

  • t1_values_min_max (tuple (default: (2e-05, 5e-05))) – Range for mock random values.

  • seed (Optional[int] (default: 112233)) – Random number generator seed passed to numpy.random.default_rng.

Return type

Dataset

mk_shots_from_probabilities(probabilities, **kwargs)[source]

Generates multiple shots for a list of probabilities assuming two states.

Parameters
  • probabilities (Union[ndarray, list]) – The list/array of the probabilities of one of the states.

  • **kwargs – Keyword arguments passed to mk_iq_shots().

Returns

Array containing the shots. Shape: (num_shots, len(probabilities)).

mk_surface7_cyles_dataset(num_cycles=3, **kwargs)[source]

See also quantify_core.utilities.examples_support.mk_surface7_sched().

Parameters
  • num_cycles (int (default: 3)) – The number of repeating cycles before the final measurement.

  • **kwargs – Keyword arguments passed to mk_shots_from_probabilities().

Return type

Dataset

mk_t1_av_dataset(t1_times=None, probabilities=None, **kwargs)[source]

Generates a dataset with mock data of a T1 experiment for a single qubit.

Parameters
  • t1_times (Optional[ndarray] (default: None)) – Array with the T1 times corresponding to each probability in probabilities.

  • probabilities (Optional[ndarray] (default: None)) – The probabilities of finding the qubit in the excited state.

  • **kwargs – Keyword arguments passed to mk_iq_shots().

Return type

Dataset

mk_t1_av_with_cal_dataset(t1_times=None, probabilities=None, **kwargs)[source]

Generates a dataset with mock data of a T1 experiment for a single qubit including calibration points for the ground and excited states.

Parameters
  • t1_times (Optional[ndarray] (default: None)) – Array with the T1 times corresponding to each probability in probabilities.

  • probabilities (Optional[ndarray] (default: None)) – The probabilities of finding the qubit in the excited state.

  • **kwargs – Keyword arguments passed to mk_iq_shots().

Return type

Dataset

mk_t1_shots_dataset(t1_times=None, probabilities=None, **kwargs)[source]

Generates a dataset with mock data of a T1 experiment for a single qubit including calibration points for the ground and excited states, including all the individual shots (repeated qubit state measurement for the same exact experiment).

Parameters
  • t1_times (Optional[ndarray] (default: None)) – Array with the T1 times corresponding to each probability in probabilities.

  • probabilities (Optional[ndarray] (default: None)) – The probabilities of finding the qubit in the excited state.

  • **kwargs – Keyword arguments passed to mk_iq_shots().

Return type

Dataset

mk_t1_traces_dataset(t1_times=None, probabilities=None, **kwargs)[source]

Generates a dataset with mock data of a T1 experiment for a single qubit including calibration points for the ground and excited states, including all the individual shots (repeated qubit state measurement for the same exact experiment); and including all the signals that had to be digitized to obtain the rest of the data.

Parameters
  • t1_times (Optional[ndarray] (default: None)) – Array with the T1 times corresponding to each probability in probabilities.

  • probabilities (Optional[ndarray] (default: None)) – The probabilities of finding the qubit in the excited state.

  • **kwargs – Keyword arguments passed to mk_iq_shots().

Return type

Dataset

mk_two_qubit_chevron_data(rep_num=5, seed=112233)[source]

Generates data that look similar to a two-qubit Chevron experiment.

Parameters
  • rep_num (int (default: 5)) – The number of repetitions with noise to generate.

  • seed (Optional[int] (default: 112233)) – Random number generator seed passed to numpy.random.default_rng.

Returns

  • amp_values – Amplitude values.

  • time_values – Time values.

  • population_q0 – Q0 population values.

  • population_q1 – Q1 population values.

mk_two_qubit_chevron_dataset(**kwargs)[source]

Generates a dataset that looks similar to a two-qubit Chevron experiment.

Parameters

**kwargs – Keyword arguments passed to mk_two_qubit_chevron_data().

Return type

Dataset

Returns

A mock Quantify dataset.

examples_support

Utilities used for creating examples for docs/tutorials/tests.

mk_cosine_instrument()[source]

A container of parameters (mock instrument) providing a cosine model.

Return type

Instrument

mk_dataset_attrs(tuid=<function gen_tuid>, **kwargs)[source]

A factory of attributes for Quantify dataset.

See QDatasetAttrs for details.

Parameters
  • tuid (Union[TUID, Callable[[], TUID]] (default: <function gen_tuid>)) – If no tuid is provided, a new one will be generated. See also tuid.

  • **kwargs – Any other items used to update the output dictionary.

Return type

Dict[str, Any]

mk_iq_shots(num_shots=128, sigmas=(0.1, 0.1), centers=(-0.2 + 0.65j, 0.7 + 4j), probabilities=(0.4, 0.6), seed=112233)[source]

Generates clusters of (I + 1j*Q) points with a Gaussian distribution, with the specified sigmas and centers, according to the probabilities of each cluster.

Parameters
  • num_shots (int (default: 128)) – The number of shots to generate.

  • sigmas – The sigma of the Gaussian distribution used for both the real and imaginary parts.

  • centers (Union[Tuple[complex], ndarray[Any, dtype[complex128]]] (default: ((-0.2+0.65j), (0.7+4j)))) – The center of each cluster on the imaginary plane.

  • probabilities (Union[Tuple[float], ndarray[Any, dtype[float64]]] (default: (0.4, 0.6))) – The probabilities of each cluster being randomly selected for each shot.

  • seed (Optional[int] (default: 112233)) – Random number generator seed passed to numpy.random.default_rng.

Return type

ndarray[Any, dtype[TypeVar(ScalarType, bound= generic, covariant=True)]]
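A minimal sketch (the shape follows from the parameters; the values are random):

from quantify_core.utilities.examples_support import mk_iq_shots

shots = mk_iq_shots(num_shots=256, probabilities=(0.3, 0.7))
shots.shape  # (256,): one complex IQ point per shot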

mk_main_coord_attrs(uniformly_spaced=True, is_main_coord=True, **kwargs)[source]

A factory of attributes for main coordinates.

See QCoordAttrs for details.

Parameters
  • uniformly_spaced (bool (default: True)) – See QCoordAttrs.uniformly_spaced.

  • is_main_coord (bool (default: True)) – See QCoordAttrs.is_main_coord.

  • **kwargs – Any other items used to update the output dictionary.
Return type

Dict[str, Any]

mk_main_var_attrs(grid=True, uniformly_spaced=True, is_main_var=True, has_repetitions=False, **kwargs)[source]

A factory of attributes for main variables.

See QVarAttrs for details.

Parameters
  • grid (bool (default: True)) – See QVarAttrs.grid.

  • uniformly_spaced (bool (default: True)) – See QVarAttrs.uniformly_spaced.

  • is_main_var (bool (default: True)) – See QVarAttrs.is_main_var.

  • has_repetitions (bool (default: False)) – See QVarAttrs.has_repetitions.

  • **kwargs – Any other items used to update the output dictionary.
Return type

Dict[str, Any]

mk_secondary_coord_attrs(uniformly_spaced=True, is_main_coord=False, **kwargs)[source]

A factory of attributes for secondary coordinates.

See QCoordAttrs for details.

Parameters
  • uniformly_spaced (bool (default: True)) – See QCoordAttrs.uniformly_spaced.

  • is_main_coord (bool (default: False)) – See QCoordAttrs.is_main_coord.

  • **kwargs – Any other items used to update the output dictionary.
Return type

Dict[str, Any]

mk_secondary_var_attrs(grid=True, uniformly_spaced=True, is_main_var=False, has_repetitions=False, **kwargs)[source]

A factory of attributes for secondary variables.

See QVarAttrs for details.

Parameters
  • grid (bool (default: True)) – See QVarAttrs.grid.

  • uniformly_spaced (bool (default: True)) – See QVarAttrs.uniformly_spaced.

  • is_main_var (bool (default: False)) – See QVarAttrs.is_main_var.

  • has_repetitions (bool (default: False)) – See QVarAttrs.has_repetitions.

  • **kwargs – Any other items used to update the output dictionary.
Return type

Dict[str, Any]

mk_trace_for_iq_shot(iq_point, time_values=None, intermediate_freq=50000000.0)[source]

Generates mock “traces” that a physical instrument would digitize for the readout of a transmon qubit when using a down-converting IQ mixer.

Parameters
  • iq_point (complex) – A complex number representing a point on the IQ-plane.

  • time_values (Optional[ndarray[Any, dtype[TypeVar(ScalarType, bound= generic, covariant=True)]]] (default: None)) – The time instants at which the mock intermediate-frequency signal is sampled.

  • intermediate_freq (float (default: 50000000.0)) – The intermediate frequency used in the down-conversion scheme.

Return type

ndarray[Any, dtype[TypeVar(ScalarType, bound= generic, covariant=True)]]

Returns

An array of complex numbers.

mk_trace_time(sampling_rate=1000000000.0, duration=3e-07)[source]

Generates an array (via numpy.arange) in which the entries correspond to time instants up to duration seconds, sampled according to sampling_rate in Hz.

See mk_trace_for_iq_shot() for a usage example.

Parameters
  • sampling_rate (float (default: 1000000000.0)) – The sampling rate in Hz.

  • duration (float (default: 3e-07)) – Total duration in seconds.

Return type

ndarray[Any, dtype[TypeVar(ScalarType, bound= generic, covariant=True)]]

Returns

An array with the time instants.

round_trip_dataset(dataset)[source]

Writes a dataset to disk and loads it back returning it.

Return type

Dataset
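A sketch assuming a data directory has been configured (e.g. via quantify_core.data.handling.set_datadir):

from quantify_core.utilities import dataset_examples
from quantify_core.utilities.examples_support import round_trip_dataset

dataset = dataset_examples.mk_two_qubit_chevron_dataset()
loaded = round_trip_dataset(dataset)  # written to disk and read back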

deprecation

Utilities used to maintain deprecation and reverse-compatibility of the code.

deprecated(drop_version, message_or_alias)[source]

A decorator for deprecating classes and methods.

For each deprecation we must provide a version when this function or class will be removed completely and an instruction to a user about how to port their existing code to a new software version. This is easily done using this decorator.

If callable is passed instead of a message, this decorator assumes that the function or class has moved to another module and generates the standard instruction to use a new function or class. There is no need to re-implement the function logic in two places, since the implementation of new function or class is used in both new and old aliases.

Parameters
  • drop_version (str) – A version of the package when the deprecated function or class will be dropped.

  • message_or_alias (Union[str, Callable]) – Either an instruction about how to port the software to a new version without the usage of deprecated calls (string), or the new drop-in replacement to the deprecated class or function (callable).

Return type

Callable[[Callable], Callable]
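A minimal sketch (old_function and the porting message are illustrative):

from quantify_core.utilities.deprecation import deprecated

@deprecated("2.0", "Use new_function() instead.")
def old_function():
    ...

old_function()  # emits a deprecation warning with the drop version and instruction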

visualization

The visualization module contains tools for real-time visualization as well as utilities to help in plotting.

Import aliases and the objects they map to:

  • quantify_core.visualization.InstrumentMonitor maps to quantify_core.visualization.instrument_monitor.InstrumentMonitor

  • quantify_core.visualization.PlotMonitor_pyqt maps to quantify_core.visualization.pyqt_plotmon.PlotMonitor_pyqt

instrument_monitor

Module containing the pyqtgraph based plotting monitor.

class InstrumentMonitor(name, window_size=(600, 600), remote=True, update_interval=5)[source]

Creates a pyqtgraph widget that displays the instrument monitor window.

Example

from quantify_core.measurement import MeasurementControl
from quantify_core.visualization import InstrumentMonitor

meas_ctrl = MeasurementControl("meas_ctrl")
instrument_monitor = InstrumentMonitor("instrument_monitor")
# Set True if you want to query the instruments about each parameter
# before updating the window. Can be slow due to communication overhead.
instrument_monitor.update_snapshot(False)
__init__(name, window_size=(600, 600), remote=True, update_interval=5)[source]

Initializes the pyqtgraph window.

Parameters
  • name – name of the InstrumentMonitor object.

  • window_size (tuple (default: (600, 600))) – The size of the InstrumentMonitor window in px.

  • remote (bool (default: True)) – Switch to use a remote instance of the pyqtgraph class.

  • update_interval (int (default: 5)) – Interval in seconds between two updates.

close()[source]

(Modified from Instrument class)

Irreversibly stop this instrument and free its resources.

Subclasses should override this if they have other specific resources to close.

Return type

None

create_widget(window_size=(1000, 600))[source]

Saves an instance of the quantify_core.visualization.ins_mon_widget.qc_snapshot_widget.QcSnapshotWidget class during startup. Creates the snapshot tree to display within the remote widget window.

Parameters

window_size (tuple (default: (1000, 600))) – The size of the InstrumentMonitor window in px.

setGeometry(x, y, w, h)[source]

Set the geometry of the main widget window

Parameters
  • x (int) – Horizontal position of the top-left corner of the window.

  • y (int) – Vertical position of the top-left corner of the window.

  • w (int) – Width of the window.

  • h (int) – Height of the window.

update_interval = Parameter( get_cmd=self._get_update_interval, set_cmd=self._set_update_interval, unit="s", initial_value=update_interval, vals=vals.Numbers(min_value=0.001), name="update_interval", instrument=self, )

Only update the window if this amount of time has passed since the last update.

update_snapshot = ManualParameter( initial_value=False, vals=vals.Bool(), name="update_snapshot", instrument=self, )

Set to True in order to query the instruments about each parameter before updating the window. Can be slow due to communication overhead.

class RepeatTimer(interval, function, args=None, kwargs=None)[source]
cancel()[source]

Stop the timer (and exit the loop/thread).

pause()[source]

Pause the timer, i.e. do not execute the function, but stay in the loop/thread.

run()[source]

Function called in separate thread after calling .start() on the instance.

unpause()[source]

Unpause the timer, i.e. execute the function in the loop again.

pyqt_plotmon

Module containing the pyqtgraph based plotting monitor.

class PlotMonitor_pyqt(name)[source]

Pyqtgraph based plot monitor instrument.

A plot monitor is intended to provide a real-time visualization of a dataset.

Interactions with this virtual instrument are virtually instantaneous: all the heavier computations and plotting happen in a separate QtProcess.

__init__(name)[source]

Creates an instance of the plot monitor.

Parameters

name (str) – Name of this instrument instance

close()[source]

(Modified from Instrument class)

Irreversibly stop this instrument and free its resources.

Subclasses should override this if they have other specific resources to close.

Return type

None

create_plot_monitor()[source]

Creates the PyQtGraph plotting monitors. Can also be used to recreate these when plotting has crashed.

setGeometry_main(x, y, w, h)[source]

Set the geometry of the main plotmon

Parameters
  • x (int) – Horizontal position of the top-left corner of the window

  • y (int) – Vertical position of the top-left corner of the window

  • w (int) – Width of the window

  • h (int) – Height of the window

setGeometry_secondary(x, y, w, h)[source]

Set the geometry of the secondary plotmon

Parameters
  • x (int) – Horizontal position of the top-left corner of the window

  • y (int) – Vertical position of the top-left corner of the window

  • w (int) – Width of the window

  • h (int) – Height of the window

tuids_append(tuid=None)[source]

Appends a tuid to tuids and also discards older datasets according to tuids_max_num.

The corresponding data will be plotted in the main window with blue circles.

NB: do not call this before the corresponding dataset file has been created and filled with data.

update(tuid=None)[source]

Updates the curves/heatmaps of a specific dataset.

If the dataset is not specified the latest dataset in tuids is used.

If tuids is empty and tuid is provided then tuids_append(tuid) will be called. NB: this is intended mainly for MC to avoid issues when the file was not yet created or is empty.

main_QtPlot = QtPlotObjForJupyter(self._remote_plotmon, "main_QtPlot")

Retrieves the image of the main window when used as the final statement in a cell of a Jupyter-like notebook.

secondary_QtPlot = QtPlotObjForJupyter( self._remote_plotmon, "secondary_QtPlot" )

Retrieves the image of the secondary window when used as the final statement in a cell of a Jupyter-like notebook.

tuids = Parameter( initial_cache_value=[], vals=vals.Lists(elt_validator=vals.Strings()), get_cmd=self._get_tuids, set_cmd=self._set_tuids, name="tuids", instrument=self, )

The tuids of the auto-accumulated previous datasets when specified through tuids_append. Can be set to a list ['tuid_one', 'tuid_two', ...]. Can be reset by setting to []. See also tuids_extra.

tuids_extra = Parameter( initial_cache_value=[], vals=vals.Lists(elt_validator=vals.Strings()), set_cmd=self._set_tuids_extra, get_cmd=self._get_tuids_extra, name="tuids_extra", instrument=self, )

Extra tuids whose datasets are never affected by tuids_append or tuids_max_num. As opposed to tuids, these never vanish. Can be reset by setting to []. Intended for performing real-time measurements while comparing live against previously measured datasets.

tuids_max_num = Parameter( vals=vals.Ints(min_value=1, max_value=100), set_cmd=self._set_tuids_max_num, get_cmd=self._get_tuids_max_num, initial_cache_value=3, name="tuids_max_num", instrument=self, )

The maximum number of auto-accumulated datasets in tuids. Older datasets are discarded when tuids_append is called (directly or from update()).
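A usage sketch, assuming a dataset with the illustrative TUID below already exists on disk:

from quantify_core.visualization import PlotMonitor_pyqt

plotmon = PlotMonitor_pyqt("plotmon")
plotmon.tuids_append("20230309-211157-830-1edad1")  # plot and auto-accumulate
plotmon.tuids_max_num(2)  # keep at most two auto-accumulated datasets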

class QtPlotObjForJupyter(remote_plotmon, attr_name)[source]

A wrapper to be able to display a QtPlot window in Jupyter notebooks

color_utilities

Module containing utilities for color manipulation

set_hlsa(color, h=None, l=None, s=None, a=None, to_hex=False)[source]

Accepts a matplotlib color specification and returns an RGB color with the specified HLS values, plus an optional alpha.

Return type

tuple
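A small sketch (the lightness and alpha values are illustrative):

from quantify_core.visualization.color_utilities import set_hlsa

# lighten matplotlib's default blue and add 50% transparency
light_blue = set_hlsa("C0", l=0.8, a=0.5)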

mpl_plotting

Module containing matplotlib and xarray plotting utilities.

Naming convention: plotting functions that require Xarray object(s) as inputs are named plot_xr_....

flex_colormesh_plot_vs_xy(xvals, yvals, zvals, ax=None, normalize=False, log=False, cmap='viridis', vlim=(None, None), transpose=False)[source]

Add a rectangular block to a color plot using pcolormesh().

Parameters
  • xvals (ndarray) – Length N array corresponding to settable x0.

  • yvals (ndarray) – Length M array corresponding to settable x1.

  • zvals (ndarray) – M*N array corresponding to gettable yi.

  • ax (Optional[Axes] (default: None)) – Axis to which to add the colormesh.

  • normalize (bool (default: False)) – If True, normalizes each row of data.

  • log (bool (default: False)) – if True, uses a logarithmic colorscale.

  • cmap (str (default: 'viridis')) – Colormap to use. See matplotlib docs for choosing an appropriate colormap.

  • vlim (list (default: (None, None))) – Limits of the z-axis.

  • transpose (bool (default: False)) – If True transposes the figure.

Return type

QuadMesh

Returns

The created matplotlib QuadMesh.

Warning

The grid orientation for the zvals is the same as used in pcolormesh(). Note that the column index corresponds to the x-coordinate and the row index to y. This can be counter-intuitive (zvals[y_idx, x_idx]) and inconsistent with some arrays of zvals (such as a 2D histogram from numpy).

get_unit_from_attrs(data_array, str_format=' [{}]')[source]

Extracts and formats the unit/units from an xarray.DataArray attribute.

Parameters
  • data_array (DataArray) – Xarray array (coordinate or variable).

  • str_format (str (default: ' [{}]')) – String that will be formatted if a unit is found.

Return type

str

Returns

str_format string formatted with the data_array.unit or data_array.units, with that order of precedence. Empty string is returned if none of these arguments are present.

plot_2d_grid(x, y, z, xlabel, xunit, ylabel, yunit, zlabel, zunit, ax, cax=None, add_cbar=True, title=None, normalize=False, log=False, cmap='viridis', vlim=(None, None), transpose=False)[source]

Creates a heatmap of x, y, z data acquired on a grid. Expects three "columns" of data of equal length.

Parameters
  • x – Length N array corresponding to x values.

  • y – Length N array corresponding to y values.

  • z – Length N array corresponding to gettable z values.

  • xlabel (str) – x label to add to the heatmap.

  • ylabel (str) – y label to add to the heatmap.

  • xunit (str) – x unit used in unit aware axis labels.

  • yunit (str) – y unit used in unit aware axis labels.

  • zlabel (str) – Label used for the colorbar.

  • ax (Axes) – Axis to which to add the colormesh.

  • cax (Optional[Axes] (default: None)) – Axis on which to add the colorbar, if set to None, will create a new axis.

  • add_cbar (bool (default: True)) – if True, adds a colorbar.

  • title (Optional[str] (default: None)) – Text to add as title to the axis.

  • normalize (bool (default: False)) – if True, normalizes each row of data.

  • log (bool (default: False)) – if True, uses a logarithmic colorscale

  • cmap (str (default: 'viridis')) –

    The colormap to use. See matplotlib docs for choosing an appropriate colormap.

  • vlim (list (default: (None, None))) – limits of the z-axis.

  • transpose (bool (default: False)) – if True transposes the figure.

Return type

Tuple[QuadMesh, Colorbar]

Returns

The new matplotlib QuadMesh and Colorbar.
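A minimal sketch with mock gridded data (labels and units are illustrative):

import matplotlib.pyplot as plt
import numpy as np

from quantify_core.visualization.mpl_plotting import plot_2d_grid

# three equal-length "columns" of data acquired on an 11 x 7 grid
x = np.tile(np.linspace(0, 1, 11), 7)
y = np.repeat(np.linspace(0, 6, 7), 11)
z = np.sin(2 * np.pi * x) * y

fig, ax = plt.subplots()
quadmesh, cbar = plot_2d_grid(
    x, y, z, xlabel="Amplitude", xunit="V", ylabel="Time", yunit="s",
    zlabel="Signal", zunit="a.u.", ax=ax,
)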

plot_complex_points(points, colors=None, labels=None, markers=None, legend=True, ax=None, **kwargs)[source]

Plots complex points with (by default) different colors and markers on the imaginary plane using matplotlib.axes.Axes.plot().

Intended for a small number of points.

Example

from quantify_core.utilities.examples_support import plot_centroids

_ = plot_centroids([1 + 1j, -1.5 - 2j])

Parameters
  • ax (Optional[Axes] (default: None)) – A matplotlib axis to plot on.

  • points (Union[list, ndarray]) – Array of complex numbers.

  • colors (Optional[list] (default: None)) – Colors to use for each point.

  • labels (Optional[list] (default: None)) – Labels to use for each point. Defaults to f"|{i}>"

  • markers (Optional[list] (default: None)) – Markers to use for each point.

  • legend (bool (default: True)) – Calls legend() if True.

  • **kwargs – Keyword arguments passed to the plot().

Return type

Tuple[Figure, Axes]

plot_fit(ax, fit_res, plot_init=True, plot_numpoints=1000, range_casting='real', fit_kwargs=None, init_kwargs=None)[source]

Plot a fit of an lmfit model with a real domain.

Parameters
  • ax – axis on which to plot the fit.

  • fit_res – an lmfit fit results object.

  • plot_init (bool (default: True)) – if True, plot the initial guess of the fit.

  • plot_numpoints (int (default: 1000)) – the number of points used on which to evaluate the fit.

  • range_casting (Literal['abs', 'angle', 'real', 'imag'] (default: 'real')) – how to plot fit functions that have a complex range. Casting of values happens using absolute, angle, real and imag. Angle is in degrees.

  • fit_kwargs (Optional[dict] (default: None)) – Matplotlib pyplot formatting and label keyword arguments for the fit plot. The default value is {"color": "C3", "label": "Fit"}.

  • init_kwargs (Optional[dict] (default: None)) – Matplotlib pyplot formatting and label keyword arguments for the init plot. The default value is {"color": "grey", "linestyle": "--", "label": "Guess"}.

Return type

List[Line2D]

Returns

list of matplotlib pyplot Line2D objects

plot_fit_complex_plane(ax, fit_res, plot_init=True, plot_numpoints=1000)[source]

Plot a fit of an lmfit model with a real domain in the complex plane.

Return type

None

plot_textbox(ax, text, **kw)[source]

Plot a textbox with sensible defaults using text.

Parameters
  • ax (Axes) – The Axes on which to plot.

  • text (str) – The text of the textbox.

Return type

Text

Returns

the new text object

plot_xr_complex(var, marker_scatter='o', label_real='Real', label_imag='Imag', cmap='viridis', c=None, kwargs_line=None, kwargs_scatter=None, title='{} [{}]; shape = {}', legend=True, ax=None)[source]

Plots the real and imaginary parts of complex data. Points are colored by default according to their order in the array.

Parameters
  • var (DataArray) – 1D array of complex data.

  • marker_scatter (str (default: 'o')) – Marker used for the scatter plot.

  • label_real (str (default: 'Real')) – Label for legend.

  • label_imag (str (default: 'Imag')) – Label for legend.

  • cmap (str (default: 'viridis')) – The colormap to use for coloring the points.

  • c (Optional[ndarray] (default: None)) – Color of the points. Defaults to an array of integers.

  • kwargs_line (Optional[dict] (default: None)) – Keyword arguments passed to matplotlib.axes.Axes.plot().

  • kwargs_scatter (Optional[dict] (default: None)) – Keyword arguments passed to matplotlib.axes.Axes.scatter().

  • title (str (default: '{} [{}]; shape = {}')) – Axes title. By default gets formatted with var.long_name, var.name and var.shape.

  • legend (bool (default: True)) – Calls legend() if True.

  • ax (Optional[object] (default: None)) – The matplotlib axes. If None a new axes (and figure) is created.

Return type

Tuple[Figure, Axes]
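A sketch using one of the mock dataset factories documented above, whose main variable holds complex data:

from quantify_core.utilities import dataset_examples
from quantify_core.visualization.mpl_plotting import plot_xr_complex

dataset = dataset_examples.mk_t1_av_dataset()
fig, ax = plot_xr_complex(dataset.q0)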

plot_xr_complex_on_plane(var, marker='o', label='Data on imaginary plane', cmap='viridis', c=None, xlabel='Real{}{}{}', ylabel='Imag{}{}{}', legend=True, ax=None, **kwargs)[source]

Plots complex data on the imaginary plane. Points are colored by default according to their order in the array.

Parameters
  • var (DataArray) – 1D array of complex data.

  • marker (str (default: 'o')) – Marker used for the scatter plot.

  • label (str (default: 'Data on imaginary plane')) – Data label for the legend.

  • cmap (str (default: 'viridis')) – The colormap to use for coloring the points.

  • c (Optional[ndarray] (default: None)) – Color of the points. Defaults to an array of integers.

  • xlabel (str (default: 'Real{}{}{}')) – Label of the x axis.

  • ylabel (str (default: 'Imag{}{}{}')) – Label of the y axis.

  • legend (bool (default: True)) – Calls legend() if True.

  • ax (Optional[object] (default: None)) – The matplotlib axes. If None a new axes (and figure) is created.

Return type

Tuple[Figure, Axes]

set_cyclic_colormap(image_or_collection, shifted=False, unit='deg', clim=None)[source]

Sets a cyclic colormap on a matplotlib 2D color plot if cyclic units are detected.

Parameters
Return type

None

set_suptitle_from_dataset(fig, dataset, prefix='')[source]

Sets the suptitle of a matplotlib figure based on

  • (optional) prefix;

  • dataset.name;

  • dataset.tuid.

Intended for tagging figures with unique ID of the original dataset.

Parameters
  • prefix (str (default: '')) – Optional string to pre-pend, e.g., x0-y0.

  • fig (Figure) – The matplotlib figure.

  • dataset (Dataset) – A dataset expected to have .name and .tuid attributes.

Return type

None

plot_interpolation

Plot interpolations.

interpolate_heatmap(x, y, z, n=None, interp_method='linear')[source]

The output of this method can be used directly for plt.imshow(z_grid, extent=extent, aspect='auto'), where the extent is determined by the min and max of the x_grid and y_grid.

The output can also be used as input for ax.pcolormesh(x, y, Z, **kw).

Parameters
  • x (numpy.ndarray) – x data points

  • y (numpy.ndarray) – y data points

  • z (numpy.ndarray) – z data points

  • n (Optional[int] (default: None)) – number of points for each dimension of the interpolated grid; if set to None, the number of points is determined automatically.

  • interp_method (Literal['linear', 'nearest', 'deg'] (default: 'linear')) – determines what interpolation method is used.

Returns

  • x_grid (numpy.ndarray) – N*1 array of x-values of the interpolated grid

  • y_grid (numpy.ndarray) – N*1 array of y-values of the interpolated grid

  • z_grid (numpy.ndarray) – N*N array of z-values that form a grid.
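A sketch interpolating scattered points onto a grid and displaying the result as described above:

import matplotlib.pyplot as plt
import numpy as np

from quantify_core.visualization.plot_interpolation import interpolate_heatmap

rng = np.random.default_rng(seed=0)
x, y = rng.random(100), rng.random(100)  # scattered sample positions
z = np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)

x_grid, y_grid, z_grid = interpolate_heatmap(x, y, z, interp_method="linear")
extent = (x_grid.min(), x_grid.max(), y_grid.min(), y_grid.max())
plt.imshow(z_grid, extent=extent, aspect="auto", origin="lower")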

SI Utilities

Utilities for managing SI units with plotting systems.

SI_prefix_and_scale_factor(val, unit=None)[source]

Takes in a value and unit, returns a scale factor and scaled unit. It returns a scale factor to convert the input value to a value in the range [1.0, 1000.0), plus the corresponding scaled SI unit (e.g. ‘mT’, ‘kV’), deduced from the input unit, to represent the input value in those scaled units.

The scaling is only applied if the unit is an unscaled or scaled unit present in the variable SI_UNITS.

If the unit is None, no scaling is done. If the unit is “SI_PREFIX_ONLY”, the value is scaled and an SI prefix is applied without a base unit.

Parameters
  • val (float) – the value

  • unit (str) – the unit of the value

Return type

Tuple[float, str]

Returns

  • scale_factor (float) – scale_factor needed to convert value

  • scaled_unit (str) – unit including the prefix
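A minimal sketch:

from quantify_core.visualization.SI_utilities import SI_prefix_and_scale_factor

scale_factor, scaled_unit = SI_prefix_and_scale_factor(4.5e-7, unit="s")
# scale_factor == 1e9 and scaled_unit == 'ns', i.e. 4.5e-7 s -> 450 ns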

SI_val_to_msg_str(val, unit=None, return_type=<class 'str'>)[source]

Takes in a value with an optional unit and returns a string tuple (value_str, unit), where the value and unit are rescaled according to SI prefixes, IF the unit is an SI unit (according to the comprehensive list of SI units in this file ;).

The value_str is of the type specified in return_type (str by default).

adjust_axeslabels_SI(ax)[source]

Auto adjust the labels of a plot generated by xarray to SI-unit aware labels.

Return type

None

format_value_string(par_name, parameter, end_char='', unit=None)[source]

Format an lmfit parameter or uncertainties ufloat to a string of value with uncertainty.

If there is no stderr, use 5 significant figures. If there is a standard error use a precision one order of magnitude more precise than the size of the error and display the stderr itself to two significant figures in standard index notation in the same units as the value.

Parameters
  • par_name (str) – the name of the parameter to use in the string

  • parameter (lmfit.parameter.Parameter, uncertainties.core.Variable or float) – A Parameter object or an object returned by e.g. uncertainties.ufloat(). The value and stderr of this parameter will be used. If a float is given, the stderr is taken to be None.

  • end_char (default: '') – A character that will be put at the end of the line.

  • unit (default: None) – a unit. If this is an SI unit it will be used in automatically determining a prefix for the unit and rescaling accordingly.

Return type

str

Returns

The parameter and its error formatted as a string
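A sketch with an uncertainties ufloat (the exact rendering of the string may differ):

from uncertainties import ufloat

from quantify_core.visualization.SI_utilities import format_value_string

format_value_string("T1", ufloat(2.35e-5, 1.2e-7), unit="s")
# e.g. a string like 'T1: 23.50±0.12 μs'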

set_cbarlabel(cbar, label, unit=None, **kw)[source]

Add a unit aware z-label to a colorbar object

Parameters
  • cbar – colorbar object to set label on

  • label – the desired label

  • unit (default: None) – the unit

  • **kw – keyword argument to be passed to cbar.set_label

set_xlabel(label, unit=None, axis=None, **kw)[source]

Add a unit aware x-label to an axis object.

Parameters
  • label – the desired label

  • unit (default: None) – the unit

  • axis (default: None) – matplotlib axis object to set label on

  • **kw – keyword argument to be passed to matplotlib.set_xlabel

set_ylabel(label, unit=None, axis=None, **kw)[source]

Add a unit aware y-label to an axis object.

Parameters
  • label – the desired label

  • unit (default: None) – the unit

  • axis (default: None) – matplotlib axis object to set label on

  • **kw – keyword argument to be passed to matplotlib.set_ylabel
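A usage sketch for both label setters (data values are illustrative):

import matplotlib.pyplot as plt

from quantify_core.visualization.SI_utilities import set_xlabel, set_ylabel

fig, ax = plt.subplots()
ax.plot([0, 1e-6, 2e-6], [0, 1e-3, 4e-3])
set_xlabel("Time", unit="s", axis=ax)       # rendered with an auto-chosen SI prefix
set_ylabel("Amplitude", unit="V", axis=ax)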

value_precision(val, stderr=None)[source]

Calculate the precision to which a parameter is to be specified, according to its standard error. Returns the appropriate format specifier string.

If there is no stderr, use 5 significant figures. If there is a standard error use a precision one order of magnitude more precise than the size of the error and display the stderr itself to two significant figures in standard index notation in the same units as the value.

Parameters
  • val (float) – the nominal value of the parameter

  • stderr (float) – the standard error on the parameter

Return type

Tuple[str, str]

Returns

  • val_format_specifier (str) – python format specifier which sets the precision of the parameter value

  • err_format_specifier (str) – python format specifier which set the precision of the error
