quantify_core
analysis
base_analysis
Module containing the analysis abstract base class and several basic analyses.
- class AnalysisMeta(name, bases, namespace, /, **kwargs)[source]
Bases:
ABCMeta
Metaclass whose purpose is to avoid storing large amounts of figures in memory.
By convention, an analysis object stores figures in the self.figs_mpl and self.axs_mpl dictionaries. This causes trouble for long-running operations, because all figures are kept in memory and eventually exhaust the available memory of the PC. To avoid this, BaseAnalysis.create_figures() and its derivatives are patched so that all figures are put in an LRU cache and reconstructed upon request to BaseAnalysis.figs_mpl or BaseAnalysis.axs_mpl if they were removed from the cache.
Provided that analysis subclasses follow the convention of creating figures in BaseAnalysis.create_figures(), this approach solves the memory issue and preserves backwards compatibility with existing code.
- class AnalysisSteps(value)[source]
Bases:
Enum
An enumerate of the steps executed by BaseAnalysis (and the default for subclasses). The involved steps are specified below.
# <STEP>                                         # <corresponding class method>
AnalysisSteps.STEP_1_PROCESS_DATA                # BaseAnalysis.process_data
AnalysisSteps.STEP_2_RUN_FITTING                 # BaseAnalysis.run_fitting
AnalysisSteps.STEP_3_ANALYZE_FIT_RESULTS         # BaseAnalysis.analyze_fit_results
AnalysisSteps.STEP_4_CREATE_FIGURES              # BaseAnalysis.create_figures
AnalysisSteps.STEP_5_ADJUST_FIGURES              # BaseAnalysis.adjust_figures
AnalysisSteps.STEP_6_SAVE_FIGURES                # BaseAnalysis.save_figures
AnalysisSteps.STEP_7_SAVE_QUANTITIES_OF_INTEREST # BaseAnalysis.save_quantities_of_interest
AnalysisSteps.STEP_8_SAVE_PROCESSED_DATASET      # BaseAnalysis.save_processed_dataset
AnalysisSteps.STEP_9_SAVE_FIT_RESULTS            # BaseAnalysis.save_fit_results
Tip
A custom analysis flow (e.g. inserting new steps) can be created by implementing an object similar to this one and overriding analysis_steps, as sketched below.
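A minimal sketch of such a custom flow, assuming (in line with the table above) that each enum value names the BaseAnalysis method to execute for that step; the class and enum names are illustrative:
from enum import Enum
from quantify_core.analysis.base_analysis import BaseAnalysis

class MinimalAnalysisSteps(Enum):
    # Only the steps this hypothetical analysis needs; each value is the
    # name of the BaseAnalysis method executed for that step.
    STEP_1_PROCESS_DATA = "process_data"
    STEP_2_CREATE_FIGURES = "create_figures"
    STEP_3_ADJUST_FIGURES = "adjust_figures"
    STEP_4_SAVE_FIGURES = "save_figures"

class MinimalAnalysis(BaseAnalysis):
    # Override the class attribute so that execute_analysis_steps() walks
    # this custom flow instead of the default AnalysisSteps.
    analysis_steps = MinimalAnalysisSteps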
- class BaseAnalysis(dataset=None, tuid=None, label='', settings_overwrite=None, plot_figures=True)[source]
Bases:
object
A template for analysis classes.
- analysis_steps
Defines the steps of the analysis specified as an Enum. Can be overridden in a subclass in order to define a custom analysis flow. See AnalysisSteps for a template.
alias of AnalysisSteps
- __init__(dataset=None, tuid=None, label='', settings_overwrite=None, plot_figures=True)[source]
Initializes the variables used in the analysis and to which data is stored.
Warning
We highly discourage overriding the class initialization. If the analysis requires the user to pass in any arguments, run() should be overridden and extended (see its docstring for an example).
Settings schema:
Base analysis settings
properties
mpl_dpi
Matplotlib figures DPI.
type: integer
mpl_exclude_fig_titles
If True, matplotlib figures will not include the title.
type: boolean
mpl_transparent_background
If True, matplotlib figures will have a transparent background (when applicable).
type: boolean
mpl_fig_formats
List of formats in which matplotlib figures will be saved. E.g. ['svg']
type: array
items type: string
- Parameters
dataset (Optional[Dataset] (default: None)) – An unprocessed (raw) quantify dataset to perform the analysis on.
tuid (Union[TUID, str, None] (default: None)) – If no dataset is specified, will look for the dataset with the matching tuid in the data directory.
label (str (default: '')) – If no dataset and no tuid is provided, will look for the most recent dataset that contains “label” in the name.
settings_overwrite (Optional[dict] (default: None)) – A dictionary containing overrides for the global base_analysis.settings for this specific instance. See the Settings schema above for available settings.
plot_figures (bool (default: True)) – Option to create and save figures for the analysis.
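A minimal usage sketch of these arguments, assuming a data directory that contains an experiment whose name includes the (illustrative) label:
from quantify_core.analysis.base_analysis import BasicAnalysis

# Analyze the most recent matching dataset, overriding two of the global
# settings listed in the schema above for this instance only.
a_obj = BasicAnalysis(
    label="T1 experiment",  # illustrative label
    settings_overwrite={"mpl_dpi": 300, "mpl_fig_formats": ["png"]},
).run()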
- adjust_clim(vmin, vmax, ax_ids=None)[source]
Adjust the clim of matplotlib figures generated by the analysis object.
- Parameters
vmin (float) – The bottom vlim in data coordinates. Passing None leaves the limit unchanged.
vmax (float) – The top vlim in data coordinates. Passing None leaves the limit unchanged.
ax_ids (Optional[List[str]] (default: None)) – A list of ax_ids specifying which axes to adjust. Passing None results in all axes of the analysis object being adjusted.
- Return type
- adjust_figures()[source]
Perform global adjustments after creating the figures but before saving them.
By default applies mpl_exclude_fig_titles and mpl_transparent_background from .settings_overwrite to any matplotlib figures in .figs_mpl.
Can be extended in a subclass for additional adjustments.
- adjust_xlim(xmin=None, xmax=None, ax_ids=None)[source]
Adjust the xlim of matplotlib figures generated by the analysis object.
- Parameters
xmin (Optional[float] (default: None)) – The bottom xlim in data coordinates. Passing None leaves the limit unchanged.
xmax (Optional[float] (default: None)) – The top xlim in data coordinates. Passing None leaves the limit unchanged.
ax_ids (Optional[List[str]] (default: None)) – A list of ax_ids specifying which axes to adjust. Passing None results in all axes of the analysis object being adjusted.
- Return type
- adjust_ylim(ymin=None, ymax=None, ax_ids=None)[source]
Adjust the ylim of matplotlib figures generated by the analysis object.
- Parameters
ymin (Optional[float] (default: None)) – The bottom ylim in data coordinates. Passing None leaves the limit unchanged.
ymax (Optional[float] (default: None)) – The top ylim in data coordinates. Passing None leaves the limit unchanged.
ax_ids (Optional[List[str]] (default: None)) – A list of ax_ids specifying which axes to adjust. Passing None results in all axes of the analysis object being adjusted.
- Return type
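A short usage sketch of these adjustment helpers, assuming an analysis object a_obj (e.g. from a Basic2DAnalysis) that has already created its figures; all limits are illustrative:
# Rescale the axes and colour limits of all figures of the analysis object,
# then save the adjusted figures to the analysis directory.
a_obj.adjust_xlim(xmin=0.0, xmax=20e-6)
a_obj.adjust_ylim(ymin=-1.0, ymax=1.0)
a_obj.adjust_clim(vmin=0.0, vmax=1.0)
a_obj.save_figures_mpl()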
- analyze_fit_results()[source]
To be implemented by subclasses.
Should analyze and process the .fit_results and add the quantities of interest to the .quantities_of_interest dictionary.
- create_figures()[source]
To be implemented by subclasses.
Should generate figures of interest. Matplotlib figures and axes objects should be added to the .figs_mpl and .axs_mpl dictionaries, respectively.
- execute_analysis_steps()[source]
Executes the methods corresponding to the analysis steps as defined by analysis_steps.
Intended to be called by .run when creating a custom analysis that requires passing analysis configuration arguments to run().
- extract_data()[source]
If no dataset is provided, populates .dataset with data from the experiment matching the tuid/label.
This method should be overwritten if an analysis does not relate to a single datafile.
- get_flow()[source]
Returns a tuple with the ordered methods to be called by run analysis. Only returns the figure methods if self.plot_figures is True.
- Return type
- classmethod load_fit_result(tuid, fit_name)[source]
Load a saved lmfit.model.ModelResult object from file. For analyses that use custom fit functions, the cls.fit_function_definitions object must be defined in the subclass for that analysis.
- Parameters
- Return type
- Returns
The lmfit model result object.
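A minimal usage sketch, assuming an earlier analysis saved a fit for the given experiment; both the TUID and the fit name are hypothetical placeholders:
from quantify_core.analysis.cosine_analysis import CosineAnalysis

# Reload a previously saved lmfit result without re-running the analysis.
fit_result = CosineAnalysis.load_fit_result(
    tuid="20230309-211844-461-ff97e5",  # hypothetical TUID
    fit_name="cos_fit",                 # hypothetical name used when the fit was saved
)
print(fit_result.best_values)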
- process_data()[source]
To be implemented by subclasses.
Should process, e.g., reshape, filter etc. the data before starting the analysis.
- run()[source]
This function is at the core of all analysis. It calls execute_analysis_steps(), which executes all the methods defined in analysis_steps.
The first step of any analysis is always extracting data; that step is not configurable. Errors in extract_data() are considered fatal for the analysis. Later steps are configurable by overriding analysis_steps. Exceptions in these steps are logged and suppressed, and the analysis is considered partially successful.
This function is typically called right after instantiating an analysis class.
Implementing a custom analysis that requires user input
When implementing your own custom analysis you might need to pass in a few configuration arguments. That should be achieved by overriding this function, as shown below.
from quantify_core.analysis.base_analysis import BaseAnalysis

class MyAnalysis(BaseAnalysis):

    def run(self, optional_argument_one: float = 3.5e9):
        # Save the value to be used in some step of the analysis
        self.optional_argument_one = optional_argument_one

        # Execute the analysis steps
        self.execute_analysis_steps()

        # Return the analysis object
        return self

    # ... other relevant methods ...
- Return type
- Returns
The instance of the analysis object, so that run() returns an analysis object. You can initialize, run, and assign it to a variable on a single line, e.g. a_obj = MyAnalysis().run().
- run_fitting()[source]
To be implemented by subclasses.
Should create fitting model(s) and fit data to the model(s), adding the result to the .fit_results dictionary.
- save_figures()[source]
Saves figures to disk. By default saves matplotlib figures.
Can be overridden or extended to make use of other plotting packages.
- save_figures_mpl(close_figs=True)[source]
Saves all the matplotlib figures in the .figs_mpl dict.
- Parameters
close_figs (bool (default: True)) – If True, closes matplotlib figures after saving.
- save_fit_results()[source]
Saves the lmfit.model.model_result objects for each fit in a sub-directory within the analysis directory.
- save_processed_dataset()[source]
Saves a copy of the processed .dataset_processed in the analysis folder of the experiment.
- save_quantities_of_interest()[source]
Saves the .quantities_of_interest as a JSON file in the analysis directory.
The file is written using json.dump() with the qcodes.utils.NumpyJSONEncoder custom encoder.
- property analysis_dir
Analysis dir based on the tuid of the analysis class instance. Will create a directory if it does not exist.
- property name
The name of the analysis, used in data saving.
- property results_dir
Analysis directory for this analysis. Will create a directory if it does not exist.
- class Basic1DAnalysis(dataset=None, tuid=None, label='', settings_overwrite=None, plot_figures=True)[source]
Bases:
BasicAnalysis
Deprecated. Alias of BasicAnalysis for backwards compatibility.
- run()[source]
This function is at the core of all analysis. It calls execute_analysis_steps(), which executes all the methods defined in analysis_steps.
The first step of any analysis is always extracting data; that step is not configurable. Errors in extract_data() are considered fatal for the analysis. Later steps are configurable by overriding analysis_steps. Exceptions in these steps are logged and suppressed, and the analysis is considered partially successful.
This function is typically called right after instantiating an analysis class.
Implementing a custom analysis that requires user input
When implementing your own custom analysis you might need to pass in a few configuration arguments. That should be achieved by overriding this function, as shown below.
from quantify_core.analysis.base_analysis import BaseAnalysis

class MyAnalysis(BaseAnalysis):

    def run(self, optional_argument_one: float = 3.5e9):
        # Save the value to be used in some step of the analysis
        self.optional_argument_one = optional_argument_one

        # Execute the analysis steps
        self.execute_analysis_steps()

        # Return the analysis object
        return self

    # ... other relevant methods ...
- Return type
- Returns
The instance of the analysis object, so that run() returns an analysis object. You can initialize, run, and assign it to a variable on a single line, e.g. a_obj = MyAnalysis().run().
- class Basic2DAnalysis(dataset=None, tuid=None, label='', settings_overwrite=None, plot_figures=True)[source]
Bases:
BaseAnalysis
A basic analysis that extracts the data from the latest file matching the label, plots it, and stores the data in the experiment container.
- class BasicAnalysis(dataset=None, tuid=None, label='', settings_overwrite=None, plot_figures=True)[source]
Bases:
BaseAnalysis
A basic analysis that extracts the data from the latest file matching the label, plots it, and stores the data in the experiment container.
- analysis_steps_to_str(analysis_steps, class_name='BaseAnalysis')[source]
A utility for generating the docstring for the analysis steps.
- Parameters
analysis_steps (Enum) – An Enum similar to quantify_core.analysis.base_analysis.AnalysisSteps.
class_name (str (default: 'BaseAnalysis')) – The class name that has the analysis_steps methods and for which the analysis_steps are intended.
- Return type
- Returns
A formatted string version of the analysis_steps and corresponding methods.
- check_lmfit(fit_res)[source]
Check that lmfit was able to successfully return a valid fit, and give a warning if not.
The function looks at lmfit’s success parameter, and also checks whether the fit was able to obtain valid error bars on the fitted parameters.
- Parameters
fit_res (ModelResult) – The ModelResult object output by lmfit.
- Return type
- Returns
A warning message if there is a problem with the fit.
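A minimal self-contained sketch, using a simple lmfit model purely for illustration:
import numpy as np
import lmfit
from quantify_core.analysis.base_analysis import check_lmfit

# Fit a line to noisy data and check whether lmfit produced a valid result.
x = np.linspace(0, 1, 50)
y = 2.0 * x + 0.1 * np.random.default_rng(0).normal(size=x.size)
model = lmfit.models.LinearModel()
fit_res = model.fit(y, x=x, params=model.guess(y, x=x))

warning = check_lmfit(fit_res)  # a warning message, or None if the fit is fine
if warning is not None:
    print(warning)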
- flatten_lmfit_modelresult(model)[source]
Flatten an lmfit model result to a dictionary in order to be able to save it to disk.
Notes
We use this method as opposed to save_modelresult() as the corresponding load_modelresult() cannot handle loading data with a custom fit function.
- lmfit_par_to_ufloat(param)[source]
Safe conversion of an lmfit.parameter.Parameter to uncertainties.ufloat(value, std_dev).
This function is intended to be used in custom analyses to avoid errors when an lmfit fit fails and the stderr is None.
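A minimal self-contained sketch, again using a simple lmfit model for illustration:
import numpy as np
import lmfit
from quantify_core.analysis.base_analysis import lmfit_par_to_ufloat

# Convert a fitted parameter to a ufloat (value +/- std_dev); this stays
# safe even when the fit did not produce a stderr.
x = np.linspace(0, 1, 50)
y = 3.0 * x + 1.0 + 0.05 * np.random.default_rng(1).normal(size=x.size)
result = lmfit.models.LinearModel().fit(y, x=x)
slope = lmfit_par_to_ufloat(result.params["slope"])
print(slope.nominal_value, slope.std_dev)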
- wrap_text(text, width=35, replace_whitespace=True, **kwargs)[source]
A text wrapping (breaking over multiple lines) utility.
Intended to be used with plot_textbox() in order to avoid too wide a figure when, e.g., check_lmfit() fails and a warning message is generated.
For usage see, for example, the source code of create_figures().
- Parameters
text – The text string to be wrapped over several lines.
width (default: 35) – Maximum line width in characters.
kwargs – Any other keyword arguments to be passed to textwrap.wrap().
- Returns
The wrapped text (or None if text is None).
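A minimal usage sketch, relying only on the defaults listed above:
from quantify_core.analysis.base_analysis import wrap_text

message = "Warning: lmfit could not estimate error bars for one or more fitted parameters."
# Break the long warning into lines of at most 35 characters (the default
# width) so it fits in a matplotlib text box.
print(wrap_text(message))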
- settings = {'mpl_dpi': 450, 'mpl_fig_formats': ['png', 'svg'], 'mpl_exclude_fig_titles': False, 'mpl_transparent_background': True}
For convenience the analysis framework provides a set of global settings.
For available settings see BaseAnalysis. These can be overwritten for each instance of an analysis.
Example
from quantify_core.analysis import base_analysis as ba ba.settings["mpl_dpi"] = 300 # set resolution of matplotlib figures
cosine_analysis
Module containing an education example of an analysis subclass.
See Tutorial 3. Building custom analyses - the data analysis framework that guides you through the process of building this analysis.
- class CosineAnalysis(dataset=None, tuid=None, label='', settings_overwrite=None, plot_figures=True)[source]
Bases:
BaseAnalysis
Exemplary analysis subclass that fits a cosine to a dataset.
- process_data()[source]
In some cases, you might need to process the data, e.g., reshape, filter etc., before starting the analysis. This is the method where it should be done.
See process_data() for an implementation example.
- run_fitting()[source]
Fits a CosineModel to the data.
spectroscopy_analysis
- class ResonatorSpectroscopyAnalysis(dataset=None, tuid=None, label='', settings_overwrite=None, plot_figures=True)[source]
Bases:
BaseAnalysis
Analysis for a spectroscopy experiment of a hanger resonator.
- create_figures()[source]
Plots the measured and fitted transmission \(S_{21}\) as the I and Q component vs frequency, the magnitude and phase vs frequency, and on the complex I,Q plane.
- process_data()[source]
Verifies that the data is measured as magnitude and phase and casts it to a dataset of complex valued transmission \(S_{21}\).
- run_fitting()[source]
Fits a ResonatorModel to the data.
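A minimal usage sketch of this analysis class, assuming a resonator spectroscopy dataset can be found by an (illustrative) label:
from quantify_core.analysis.spectroscopy_analysis import ResonatorSpectroscopyAnalysis

# Analyze the most recent matching dataset; fitted resonator parameters
# end up in the quantities of interest and figures are saved to disk.
a_obj = ResonatorSpectroscopyAnalysis(label="resonator_spectroscopy").run()
print(a_obj.quantities_of_interest)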
single_qubit_timedomain
Module containing analyses for common single qubit timedomain experiments.
- class AllXYAnalysis(dataset=None, tuid=None, label='', settings_overwrite=None, plot_figures=True)[source]
Bases:
SingleQubitTimedomainAnalysis
Normalizes the data from an AllXY experiment and plots against an ideal curve.
See section 2.3.2 of Reed [2013] for an explanation of the AllXY experiment and its applications in diagnosing errors in single-qubit control pulses.
- create_figures()[source]
To be implemented by subclasses.
Should generate figures of interest. Matplotlib figures and axes objects should be added to the .figs_mpl and .axs_mpl dictionaries, respectively.
- process_data()[source]
Processes the data so that the analysis can make assumptions on the format.
Populates self.dataset_processed.S21 with the complex (I,Q) valued transmission, and if calibration points are present for the 0 and 1 state, populates self.dataset_processed.pop_exc with the excited state population.
- class EchoAnalysis(dataset=None, tuid=None, label='', settings_overwrite=None, plot_figures=True)[source]
Bases:
SingleQubitTimedomainAnalysis
,_DecayFigMixin
Analysis class for a qubit spin-echo experiment, which fits an exponential decay and extracts the T2_echo time.
- run_fitting()[source]
Fit the data to ExpDecayModel.
- class RabiAnalysis(dataset=None, tuid=None, label='', settings_overwrite=None, plot_figures=True)[source]
Bases:
SingleQubitTimedomainAnalysis
Fits a cosine curve to Rabi oscillation data and finds the qubit drive amplitude required to implement a pi-pulse.
The analysis will automatically rotate the data so that the data lies along the axis with the best SNR.
- class RamseyAnalysis(dataset=None, tuid=None, label='', settings_overwrite=None, plot_figures=True)[source]
Bases:
SingleQubitTimedomainAnalysis
,_DecayFigMixin
Fits a decaying cosine curve to Ramsey data (possibly with artificial detuning) and finds the true detuning, qubit frequency and T2* time.
- analyze_fit_results()[source]
Extract the real detuning and qubit frequency based on the artificial detuning and fitted detuning.
- run(artificial_detuning=0, qubit_frequency=None, calibration_points='auto')[source]
- Parameters
artificial_detuning (float (default: 0)) – The detuning in Hz that will be emulated by adding an extra phase in software.
qubit_frequency (Optional[float] (default: None)) – The initial recorded value of the qubit frequency (before accurate fitting is done) in Hz.
calibration_points (Union[bool, Literal['auto']] (default: 'auto')) – Indicates if the data analyzed includes calibration points. If set to True, will interpret the last two data points in the dataset as \(|0\rangle\) and \(|1\rangle\) respectively. If "auto", will use has_calibration_points() to determine if the data contains calibration points.
- Returns
The instance of this analysis.
- Return type
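A minimal usage sketch, assuming a Ramsey dataset measured with 250 kHz of artificial detuning (both the label and the detuning value are illustrative):
from quantify_core.analysis.single_qubit_timedomain import RamseyAnalysis

# Fit the Ramsey oscillation and extract the true detuning, qubit frequency
# and T2* time.
a_obj = RamseyAnalysis(label="Ramsey").run(artificial_detuning=250e3)
print(a_obj.quantities_of_interest)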
- run_fitting()[source]
Fits a DecayOscillationModel to the data.
- class SingleQubitTimedomainAnalysis(dataset=None, tuid=None, label='', settings_overwrite=None, plot_figures=True)[source]
Bases:
BaseAnalysis
Base Analysis class for single-qubit timedomain experiments.
- process_data()[source]
Processes the data so that the analysis can make assumptions on the format.
Populates self.dataset_processed.S21 with the complex (I,Q) valued transmission, and if calibration points are present for the 0 and 1 state, populates self.dataset_processed.pop_exc with the excited state population.
- run(calibration_points='auto')[source]
- Parameters
calibration_points (Union[bool, Literal['auto']] (default: 'auto')) – Indicates if the data analyzed includes calibration points. If set to True, will interpret the last two data points in the dataset as \(|0\rangle\) and \(|1\rangle\) respectively. If "auto", will use has_calibration_points() to determine if the data contains calibration points.
- Returns
The instance of this analysis.
- Return type
- class T1Analysis(dataset=None, tuid=None, label='', settings_overwrite=None, plot_figures=True)[source]
Bases:
SingleQubitTimedomainAnalysis
,_DecayFigMixin
Analysis class for a qubit T1 experiment, which fits an exponential decay and extracts the T1 time.
- run_fitting()[source]
Fit the data to ExpDecayModel.
interpolation_analysis
- class InterpolationAnalysis2D(dataset=None, tuid=None, label='', settings_overwrite=None, plot_figures=True)[source]
Bases:
BaseAnalysis
An analysis class which generates a 2D interpolating plot for each yi variable in the dataset.
optimization_analysis
- class OptimizationAnalysis(dataset=None, tuid=None, label='', settings_overwrite=None, plot_figures=True)[source]
Bases:
BaseAnalysis
An analysis class which extracts the optimal quantities from an N-dimensional interpolating experiment.
- process_data()[source]
Finds the optimal (minimum or maximum) for y0 and saves the xi and y0 values in the quantities_of_interest.
fitting_models
Models and fit functions to be used with the lmfit fitting framework.
- class CosineModel(*args, **kwargs)[source]
Bases:
Model
Exemplary lmfit model with a guess for a cosine.
Note
The lmfit.models module provides several fitting models that might fit your needs out of the box.
- __init__(*args, **kwargs)[source]
- Parameters
independent_vars (list of str) – Arguments to the model function that are independent variables (default is ['x']).
prefix (str) – String to prepend to parameter names, needed to add two Models that have parameter names in common.
nan_policy – How to handle NaN and missing values in data. See Notes below.
**kwargs – Keyword arguments to pass to Model.
Notes
1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:
‘raise’ : raise a ValueError (default)
‘propagate’ : do nothing
‘omit’ : drop missing data
See also
- guess(data, x, **kws)[source]
Guess starting values for the parameters of a model.
- Parameters
- Return type
- Returns
params (Parameters) – Initial, guessed values for the parameters of a Model.
.. versionchanged:: 1.0.3 – Argument x is now explicitly required to estimate starting values.
- class DecayOscillationModel(*args, **kwargs)[source]
Bases:
Model
Model for a decaying oscillation which decays to a point with 0 offset from the centre of the oscillation (as in a Ramsey experiment, for example).
- __init__(*args, **kwargs)[source]
- Parameters
independent_vars (list of str) – Arguments to the model function that are independent variables (default is ['x']).
prefix (str) – String to prepend to parameter names, needed to add two Models that have parameter names in common.
nan_policy – How to handle NaN and missing values in data. See Notes below.
**kwargs – Keyword arguments to pass to Model.
Notes
1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:
‘raise’ : raise a ValueError (default)
‘propagate’ : do nothing
‘omit’ : drop missing data
See also
- guess(data, **kws)[source]
Guess starting values for the parameters of a model.
- Parameters
- Return type
- Returns
params (Parameters) – Initial, guessed values for the parameters of a Model.
.. versionchanged:: 1.0.3 – Argument x is now explicitly required to estimate starting values.
- class ExpDecayModel(*args, **kwargs)[source]
Bases:
Model
Model for an exponential decay, such as a qubit T1 measurement.
- __init__(*args, **kwargs)[source]
- Parameters
independent_vars (list of str) – Arguments to the model function that are independent variables (default is ['x']).
prefix (str) – String to prepend to parameter names, needed to add two Models that have parameter names in common.
nan_policy – How to handle NaN and missing values in data. See Notes below.
**kwargs – Keyword arguments to pass to Model.
Notes
1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:
‘raise’ : raise a ValueError (default)
‘propagate’ : do nothing
‘omit’ : drop missing data
See also
- guess(data, **kws)[source]
Guess starting values for the parameters of a model.
- Parameters
- Return type
- Returns
params (Parameters) – Initial, guessed values for the parameters of a Model.
.. versionchanged:: 1.0.3 – Argument x is now explicitly required to estimate starting values.
- class RabiModel(*args, **kwargs)[source]
Bases:
Model
Model for a Rabi oscillation as a function of the microwave drive amplitude. Phase of oscillation is fixed at \(\pi\) in order to ensure that the oscillation is at a minimum when the drive amplitude is 0.
- __init__(*args, **kwargs)[source]
- Parameters
independent_vars (list of str) – Arguments to the model function that are independent variables (default is ['x']).
prefix (str) – String to prepend to parameter names, needed to add two Models that have parameter names in common.
nan_policy – How to handle NaN and missing values in data. See Notes below.
**kwargs – Keyword arguments to pass to Model.
Notes
1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:
‘raise’ : raise a ValueError (default)
‘propagate’ : do nothing
‘omit’ : drop missing data
See also
- guess(data, **kws)[source]
Guess starting values for the parameters of a model.
- Parameters
- Return type
- Returns
params (Parameters) – Initial, guessed values for the parameters of a Model.
.. versionchanged:: 1.0.3 – Argument x is now explicitly required to estimate starting values.
- class ResonatorModel(*args, **kwargs)[source]
Bases:
Model
Resonator model
Implementation and design patterns inspired by the complex resonator model example (lmfit documentation).
- __init__(*args, **kwargs)[source]
- Parameters
independent_vars (list of str) – Arguments to the model function that are independent variables (default is ['x']).
prefix (str) – String to prepend to parameter names, needed to add two Models that have parameter names in common.
nan_policy – How to handle NaN and missing values in data. See Notes below.
**kwargs – Keyword arguments to pass to Model.
Notes
1. nan_policy sets what to do when a NaN or missing value is seen in the data. Should be one of:
‘raise’ : raise a ValueError (default)
‘propagate’ : do nothing
‘omit’ : drop missing data
See also
- guess(data, **kws)[source]
Guess starting values for the parameters of a model.
- Parameters
- Return type
- Returns
params (Parameters) – Initial, guessed values for the parameters of a Model.
.. versionchanged:: 1.0.3 – Argument x is now explicitly required to estimate starting values.
- cos_func(x, frequency, amplitude, offset, phase=0)[source]
An oscillating cosine function:
\(y = \mathrm{amplitude} \times \cos(2 \pi \times \mathrm{frequency} \times x + \mathrm{phase}) + \mathrm{offset}\)
- Parameters
- Return type
- Returns
Output signal magnitude
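A minimal numeric sketch evaluating the formula above at a few points (all values are illustrative):
import numpy as np
from quantify_core.analysis.fitting_models import cos_func

# Evaluate a 1 MHz cosine with unit amplitude and zero offset over one period.
x = np.linspace(0, 1e-6, 5)
y = cos_func(x, frequency=1e6, amplitude=1.0, offset=0.0, phase=0.0)
print(y)  # starts at +1 (cos(0)) and follows the formula above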
- exp_damp_osc_func(t, tau, n_factor, frequency, phase, amplitude, offset)[source]
A sinusoidal oscillation with an exponentially decaying envelope function:
\(y = \mathrm{amplitude} \times \exp\left(-(t/\tau)^\mathrm{n\_factor}\right)(\cos(2 \pi \times \mathrm{frequency} \times t + \mathrm{phase}) + \mathrm{oscillation\_offset}) + \mathrm{exponential\_offset}\)
- Parameters
t (float) – time
tau (float) – decay time
n_factor (float) – exponential decay factor
frequency (float) – frequency of the oscillation
phase (float) – phase of the oscillation
amplitude (float) – initial amplitude of the oscillation
oscillation_offset – vertical offset of cosine oscillation relative to exponential asymptote
exponential_offset – offset of exponential asymptote
- Returns
Output of decaying cosine function as a float
- exp_decay_func(t, tau, amplitude, offset, n_factor)[source]
This is a general exponential decay function:
\(y = \mathrm{amplitude} \times \exp\left(-(t/\tau)^\mathrm{n\_factor}\right) + \mathrm{offset}\)
- fft_freq_phase_guess(data, t)[source]
Guess for a cosine fit using FFT, only works for evenly spaced points.
- get_guess_common_doc()[source]
Returns a common docstring to be used for the guess() method of custom fitting Models.
- Return type
str
Usage example for a custom fitting model
See the usage example at the end of the
ResonatorModel
source-code:class ResonatorModel(lmfit.model.Model): """ Resonator model Implementation and design patterns inspired by the `complex resonator model example <https://lmfit.github.io/lmfit-py/examples/example_complex_resonator_model.html>`_ (`lmfit` documentation). """ # pylint: disable=line-too-long # pylint: disable=empty-docstring # pylint: disable=abstract-method def __init__(self, *args, **kwargs): # pass in the defining equation so the user doesn't have to later. super().__init__(hanger_func_complex_SI, *args, **kwargs) self.set_param_hint("Ql", min=0) # Enforce Q is positive self.set_param_hint("Qe", min=0) # Enforce Q is positive # Internal and coupled quality factor can be derived from fitted params self.set_param_hint("Qi", expr="1./(1./Ql-1./Qe*cos(theta))", vary=False) self.set_param_hint("Qc", expr="Qe/cos(theta)", vary=False) # pylint: disable=too-many-locals # pylint: disable=missing-function-docstring def guess(self, data, **kws) -> lmfit.parameter.Parameters: f = kws.get("f", None) if f is None: return None argmin_s21 = np.abs(data).argmin() fmin = f.min() fmax = f.max() # guess that the resonance is the lowest point fr_guess = f[argmin_s21] # assume the user isn't trying to fit just a small part of a resonance curve. Q_min = 0.1 * (fr_guess / (fmax - fmin)) delta_f = np.diff(f) # assume f is sorted min_delta_f = delta_f[delta_f > 0].min() Q_max = ( fr_guess / min_delta_f ) # assume data actually samples the resonance reasonably Q_guess = np.sqrt(Q_min * Q_max) # geometric mean, why not? (phi_0_guess, phi_v_guess) = resonator_phase_guess( data, f ) # Come up with a guess for phase velocity self.set_param_hint("fr", value=fr_guess, min=fmin, max=fmax) self.set_param_hint("Ql", value=Q_guess * 1.01, min=Q_min, max=Q_max) self.set_param_hint("Qe", value=Q_guess * 0.99, min=0) self.set_param_hint("A", value=np.mean(abs(data)), min=0) # The parameters below need a proper guess. self.set_param_hint("theta", value=0, min=-np.pi / 2, max=np.pi / 2) self.set_param_hint("phi_0", value=phi_0_guess) self.set_param_hint("phi_v", value=phi_v_guess) self.set_param_hint("alpha", value=0, min=-1, max=1) params = self.make_params() return lmfit.models.update_param_vals(params, self.prefix, **kws) # Same design patter is used in lmfit.models __init__.__doc__ = get_model_common_doc() + mk_seealso("hanger_func_complex_SI") guess.__doc__ = get_guess_common_doc()
- get_model_common_doc()[source]
Returns a common docstring to be used with custom fitting Models.
- Return type
str
Usage example for a custom fitting model
See the usage example at the end of the
ResonatorModel
source-code:class ResonatorModel(lmfit.model.Model): """ Resonator model Implementation and design patterns inspired by the `complex resonator model example <https://lmfit.github.io/lmfit-py/examples/example_complex_resonator_model.html>`_ (`lmfit` documentation). """ # pylint: disable=line-too-long # pylint: disable=empty-docstring # pylint: disable=abstract-method def __init__(self, *args, **kwargs): # pass in the defining equation so the user doesn't have to later. super().__init__(hanger_func_complex_SI, *args, **kwargs) self.set_param_hint("Ql", min=0) # Enforce Q is positive self.set_param_hint("Qe", min=0) # Enforce Q is positive # Internal and coupled quality factor can be derived from fitted params self.set_param_hint("Qi", expr="1./(1./Ql-1./Qe*cos(theta))", vary=False) self.set_param_hint("Qc", expr="Qe/cos(theta)", vary=False) # pylint: disable=too-many-locals # pylint: disable=missing-function-docstring def guess(self, data, **kws) -> lmfit.parameter.Parameters: f = kws.get("f", None) if f is None: return None argmin_s21 = np.abs(data).argmin() fmin = f.min() fmax = f.max() # guess that the resonance is the lowest point fr_guess = f[argmin_s21] # assume the user isn't trying to fit just a small part of a resonance curve. Q_min = 0.1 * (fr_guess / (fmax - fmin)) delta_f = np.diff(f) # assume f is sorted min_delta_f = delta_f[delta_f > 0].min() Q_max = ( fr_guess / min_delta_f ) # assume data actually samples the resonance reasonably Q_guess = np.sqrt(Q_min * Q_max) # geometric mean, why not? (phi_0_guess, phi_v_guess) = resonator_phase_guess( data, f ) # Come up with a guess for phase velocity self.set_param_hint("fr", value=fr_guess, min=fmin, max=fmax) self.set_param_hint("Ql", value=Q_guess * 1.01, min=Q_min, max=Q_max) self.set_param_hint("Qe", value=Q_guess * 0.99, min=0) self.set_param_hint("A", value=np.mean(abs(data)), min=0) # The parameters below need a proper guess. self.set_param_hint("theta", value=0, min=-np.pi / 2, max=np.pi / 2) self.set_param_hint("phi_0", value=phi_0_guess) self.set_param_hint("phi_v", value=phi_v_guess) self.set_param_hint("alpha", value=0, min=-1, max=1) params = self.make_params() return lmfit.models.update_param_vals(params, self.prefix, **kws) # Same design patter is used in lmfit.models __init__.__doc__ = get_model_common_doc() + mk_seealso("hanger_func_complex_SI") guess.__doc__ = get_guess_common_doc()
- hanger_func_complex_SI(f, fr, Ql, Qe, A, theta, phi_v, phi_0, alpha=1)[source]
This is the complex function for a hanger (lambda/4 resonator).
- Parameters
f (float) – frequency
fr (float) – resonance frequency
A (float) – background transmission amplitude
Ql (float) – loaded quality factor of the resonator
Qe (float) – magnitude of extrinsic quality factor Qe = |Q_extrinsic|
theta (float) – phase of extrinsic quality factor (in rad)
phi_v (float) – phase to account for propagation delay to sample
phi_0 (float) – phase to account for propagation delay from sample
alpha (float (default: 1)) – slope of signal around the resonance
- Return type
- Returns
complex valued transmission
See eq. S4 from Bruno et al. (2015) ArXiv:1502.04082.
\[S_{21} = A \left(1+\alpha \frac{f-f_r}{f_r} \right) \left(1- \frac{\frac{Q_l}{|Q_e|}e^{i\theta} }{1+2iQ_l \frac{f-f_r}{f_r}} \right) e^{i (\phi_v f + \phi_0)}\]
The loaded and extrinsic quality factors are related to the internal and coupled Q according to:
\[\frac{1}{Q_l} = \frac{1}{Q_c}+\frac{1}{Q_i}\]
and
\[\frac{1}{Q_c} = \mathrm{Re}\left(\frac{1}{|Q_e|e^{-i\theta}}\right)\]
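A minimal numeric sketch evaluating the hanger function around resonance, with illustrative parameter values:
import numpy as np
from quantify_core.analysis.fitting_models import hanger_func_complex_SI

# Complex transmission of a hanger resonator; |S21| dips at f = fr.
f = np.linspace(4.99e9, 5.01e9, 5)  # illustrative frequency sweep in Hz
s21 = hanger_func_complex_SI(
    f, fr=5e9, Ql=1e4, Qe=2e4, A=1.0, theta=0.0, phi_v=0.0, phi_0=0.0
)
print(np.abs(s21))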
- mk_seealso(function_name, role='func', prefix='\\n\\n', module_location='.')[source]
Returns a sphinx seealso pointing to a function.
Intended to be used for building custom fitting model docstrings.
Usage example for a custom fitting model
See the usage example at the end of the
ResonatorModel
source-code:class ResonatorModel(lmfit.model.Model): """ Resonator model Implementation and design patterns inspired by the `complex resonator model example <https://lmfit.github.io/lmfit-py/examples/example_complex_resonator_model.html>`_ (`lmfit` documentation). """ # pylint: disable=line-too-long # pylint: disable=empty-docstring # pylint: disable=abstract-method def __init__(self, *args, **kwargs): # pass in the defining equation so the user doesn't have to later. super().__init__(hanger_func_complex_SI, *args, **kwargs) self.set_param_hint("Ql", min=0) # Enforce Q is positive self.set_param_hint("Qe", min=0) # Enforce Q is positive # Internal and coupled quality factor can be derived from fitted params self.set_param_hint("Qi", expr="1./(1./Ql-1./Qe*cos(theta))", vary=False) self.set_param_hint("Qc", expr="Qe/cos(theta)", vary=False) # pylint: disable=too-many-locals # pylint: disable=missing-function-docstring def guess(self, data, **kws) -> lmfit.parameter.Parameters: f = kws.get("f", None) if f is None: return None argmin_s21 = np.abs(data).argmin() fmin = f.min() fmax = f.max() # guess that the resonance is the lowest point fr_guess = f[argmin_s21] # assume the user isn't trying to fit just a small part of a resonance curve. Q_min = 0.1 * (fr_guess / (fmax - fmin)) delta_f = np.diff(f) # assume f is sorted min_delta_f = delta_f[delta_f > 0].min() Q_max = ( fr_guess / min_delta_f ) # assume data actually samples the resonance reasonably Q_guess = np.sqrt(Q_min * Q_max) # geometric mean, why not? (phi_0_guess, phi_v_guess) = resonator_phase_guess( data, f ) # Come up with a guess for phase velocity self.set_param_hint("fr", value=fr_guess, min=fmin, max=fmax) self.set_param_hint("Ql", value=Q_guess * 1.01, min=Q_min, max=Q_max) self.set_param_hint("Qe", value=Q_guess * 0.99, min=0) self.set_param_hint("A", value=np.mean(abs(data)), min=0) # The parameters below need a proper guess. self.set_param_hint("theta", value=0, min=-np.pi / 2, max=np.pi / 2) self.set_param_hint("phi_0", value=phi_0_guess) self.set_param_hint("phi_v", value=phi_v_guess) self.set_param_hint("alpha", value=0, min=-1, max=1) params = self.make_params() return lmfit.models.update_param_vals(params, self.prefix, **kws) # Same design patter is used in lmfit.models __init__.__doc__ = get_model_common_doc() + mk_seealso("hanger_func_complex_SI") guess.__doc__ = get_guess_common_doc()
- Parameters
function_name (str) – name of the function to point to
role (str (default: 'func')) – a sphinx role, e.g. "func"
prefix (str (default: '\\n\\n')) – string preceding the seealso
module_location (str (default: '.')) – can be used to indicate a function outside this module, e.g., my_module.submodule which contains the function.
- Return type
- Returns
resulting string
calibration
Module containing analysis utilities for calibration procedures.
In particular, manipulation of data and calibration points for qubit readout calibration.
- has_calibration_points(s21, indices_state_0=(-2,), indices_state_1=(-1,))[source]
Attempts to determine if the provided complex S21 data has calibration points for the ground and first excited states of a qubit.
In this ideal scenario, if the datapoints indicated by the indices correspond to the calibration points, then these points will be located on the extremities of a “segment” on the IQ plane.
Three pieces of information are used to infer the presence of calibration points:
The angle of the calibration points with respect to the average of the datapoints,
The distance between the calibration points, and
The average distance to the line defined by the calibration points.
The detection is made robust by averaging 3 datapoints for each extremity of the “segment” described by the data on the IQ-plane.
Examples
In these examples, this function correctly predicts the presence of the calibration points in each case.
import matplotlib.pyplot as plt import numpy as np from quantify_core.analysis.calibration import has_calibration_points from quantify_core.utilities.examples_support import mk_iq_shots def _with_cal(data_): return np.concatenate((data_, (center_0, center_1))) def _print(ax_, data_): ax_.set_title( f"W/out cal.: {has_calibration_points(data_)}\n" f"With cal.: {has_calibration_points(_with_cal(data_))}" ) fig, ((ax0, ax1), (ax2, ax3)) = plt.subplots( 2, 2, figsize=(10, 10 / 1.6), sharex=True, sharey=True ) center_0, center_1, center_2 = 0.6 + 1.2j, -0.2 + 0.5j, 0 + 1.5j NUM_SHOTS = 50 data = mk_iq_shots( NUM_SHOTS, sigmas=[0.1] * 2, centers=(center_0, center_1), probabilities=[0.3, 1 - 0.3], ) ax0.plot(data.real, data.imag, "o", label="Shots") _print(ax0, data) data = mk_iq_shots( NUM_SHOTS, sigmas=[0.1] * 3, centers=(center_0, center_1, center_2), probabilities=[0.35, 0.35, 1 - 0.35 - 0.35], ) ax1.plot(data.real, data.imag, "o", label="Shots") _print(ax1, data) data = mk_iq_shots( NUM_SHOTS, sigmas=[0.1], centers=(center_0,), probabilities=[1], ) ax2.plot(data.real, data.imag, "o", label="Shots") _print(ax2, data) data = np.fromiter( ( mk_iq_shots( NUM_SHOTS * 2, sigmas=[0.5] * 2, centers=(center_0, center_1), probabilities=[prob, 1 - prob], ).mean() for prob in np.linspace(0, 1, 35) ), dtype=complex, ) ax3.plot(data.real, data.imag, "o", label="Shots") _print(ax3, data) for i, ax in enumerate(fig.axes): ax.plot(center_0.real, center_0.imag, "^", label="|0>", markersize=10) ax.plot(center_1.real, center_1.imag, "d", label="|1>", markersize=10) if i == 1: ax.plot( center_2.real, center_2.imag, "*", label="|2>", markersize=10 ) ax.legend()
- Parameters
s21 (ndarray) – Array of complex datapoints corresponding to the experiment on the IQ plane.
indices_state_0 (tuple (default: (-2,))) – Indices in the s21 array that correspond to the ground state.
indices_state_1 (tuple (default: (-1,))) – Indices in the s21 array that correspond to the first excited state.
- Return type
- Returns
The inferred presence of calibration points.
data
types
Module containing the core data concepts of quantify.
- class TUID(value: str)[source]
A human readable unique identifier based on the timestamp. This class does not wrap the passed in object but simply verifies and returns it.
A tuid is a string formatted as YYYYmmDD-HHMMSS-sss-******. The tuid serves as a unique identifier for experiments in quantify.
See also
The handling module.
- classmethod datetime_seconds(tuid)[source]
- Returns
object corresponding to the TUID with microseconds discarded
- Return type
- classmethod is_valid(tuid)[source]
Test if tuid is valid. A valid tuid is a string formatted as YYYYmmDD-HHMMSS-sss-******.
- Parameters
tuid (str) – a tuid string
- Returns
True if the string is a valid TUID.
- Return type
- Raises
ValueError – Invalid format
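A minimal usage sketch of the TUID helpers, using an illustrative identifier:
from quantify_core.data.types import TUID

tuid = TUID("20230309-211844-461-ff97e5")  # illustrative identifier
print(TUID.is_valid(tuid))          # True for a correctly formatted tuid
print(TUID.datetime_seconds(tuid))  # datetime with microseconds discarded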
handling
Utilities for handling data.
- concat_dataset(tuids, dim='dim_0', name=None, analysis_name=None)[source]
This function takes in a list of TUIDs and concatenates the corresponding datasets. It adds the TUIDs as a coordinate in the new dataset.
By default, we will extract the unprocessed dataset from each directory, but if analysis_name is specified, we will extract the processed dataset for that analysis.
- Parameters
dim (str (default: 'dim_0')) – Dimension along which to concatenate the datasets.
analysis_name (Optional[str] (default: None)) – In the case that we want to extract the processed dataset for a given analysis, this is the name of the analysis.
name (Optional[str] (default: None)) – The name of the concatenated dataset. If None, use the name of the first dataset in the list.
- Return type
- Returns
Concatenated dataset with new TUID and references to the old TUIDs.
- create_exp_folder(tuid, name='', datadir=None)[source]
Creates an empty folder to store an experiment container.
If the folder already exists, simply returns the experiment folder corresponding to the TUID.
- Parameters
tuid (TUID) – A timestamp based human-readable unique identifier.
name (str (default: '')) – Optional name to identify the folder.
datadir (Optional[str] (default: None)) – Path of the data directory. If None, uses get_datadir() to determine the data directory.
- Return type
- Returns
Full path of the experiment folder following the format: /datadir/YYYYmmDD/YYYYmmDD-HHMMSS-sss-******-name/.
- default_datadir(verbose=True)[source]
Returns (and optionally prints) a default datadir path.
Intended for fast prototyping, tutorials, examples, etc.
- extract_parameter_from_snapshot(snapshot, parameter)[source]
A function which takes a parameter and extracts it from a snapshot, including in the case where the parameter is part of a nested submodule within a QCoDeS instrument.
- Parameters
- Return type
- Returns
The dict specifying the parameter properties which was extracted from the snapshot
- get_datadir()[source]
Returns the current data directory. The data directory can be changed using set_datadir().
- Return type
- Returns
The current data directory.
- get_latest_tuid(contains='')[source]
Returns the most recent tuid.
Tip
This function is similar to get_tuids_containing() but is preferred if one is only interested in the most recent TUID, for performance reasons.
- Parameters
contains (str (default: '')) – An optional string contained in the experiment name.
- Return type
- Returns
The latest TUID.
- Raises
FileNotFoundError – No data found.
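A minimal usage sketch combining get_latest_tuid() with load_dataset(), assuming the data directory has been set with set_datadir() and contains a matching experiment (the label is illustrative):
from quantify_core.data.handling import get_latest_tuid, load_dataset

# Find the most recent experiment whose name contains the label and load it.
tuid = get_latest_tuid(contains="T1 experiment")
dataset = load_dataset(tuid)
print(dataset)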
- get_tuids_containing(contains='', t_start=None, t_stop=None, max_results=9223372036854775807, reverse=False)[source]
Returns a list of tuids containing a specific label.
Tip
If one is only interested in the most recent TUID, get_latest_tuid() is preferred for performance reasons.
- Parameters
contains (default: '') – A string contained in the experiment name.
t_start (default: None) – datetime to search from, inclusive. If a string is specified, it will be converted to a datetime object using parse. If no value is specified, will use the year 1 as a reference t_start.
t_stop (default: None) – datetime to search until, exclusive. If a string is specified, it will be converted to a datetime object using parse. If no value is specified, will use the current time as a reference t_stop.
max_results (default: 9223372036854775807) – Maximum number of results to return. Defaults to unlimited.
reverse (default: False) – If False, sorts tuids chronologically; if True, sorts by most recent.
- Returns
A list of TUID objects.
- Raises
FileNotFoundError – No data found.
- get_varying_parameter_values(tuids, parameter)[source]
A function that gets a parameter which varies over multiple experiments and puts it in an ndarray.
- Parameters
- Return type
- Returns
The values of the varying parameter.
- initialize_dataset(settable_pars, setpoints, gettable_pars)[source]
Initialize an empty dataset based on settable_pars, setpoints and gettable_pars.
- load_dataset(tuid, datadir=None, name='dataset.hdf5')[source]
Loads a dataset specified by a tuid.
Tip
This method also works when specifying only the first part of a TUID.
Note
This method uses load_dataset() to ensure the file is closed after loading, as datasets are intended to be immutable after performing the initial experiment.
- Parameters
tuid (TUID) – A TUID string. It is also possible to specify only the first part of a tuid.
datadir (Optional[str] (default: None)) – Path of the data directory. If None, uses get_datadir() to determine the data directory.
- Return type
- Returns
The dataset.
- Raises
FileNotFoundError – No data found for specified date.
- load_dataset_from_path(path)[source]
Loads a Dataset with a specific engine preference.
Before returning the dataset, AdapterH5NetCDF.recover() is applied.
This function tries to load the dataset until success with the following engine preference:
"h5netcdf"
"netcdf4"
No engine specified (load_dataset() default)
- load_processed_dataset(tuid, analysis_name)[source]
Given an experiment TUID and the name of an analysis previously run on it, retrieves the processed dataset resulting from that analysis.
- load_quantities_of_interest(tuid, analysis_name)[source]
Given an experiment TUID and the name of an analysis previously run on it, retrieves the corresponding “quantities of interest” data.
- load_snapshot(tuid, datadir=None, list_to_ndarray=False, file='snapshot.json')[source]
Loads a snapshot specified by a tuid.
- Parameters
tuid (TUID) – A TUID string. It is also possible to specify only the first part of a tuid.
datadir (Optional[str] (default: None)) – Path of the data directory. If None, uses get_datadir() to determine the data directory.
list_to_ndarray (bool (default: False)) – Uses an internal DecodeToNumpy decoder which allows a user to automatically convert a list to a numpy array during deserialization of the snapshot.
file (str (default: 'snapshot.json')) – Filename to load.
- Return type
- Returns
The snapshot.
- Raises
FileNotFoundError – No data found for specified date.
- locate_experiment_container(tuid, datadir=None)[source]
Returns the path to the experiment container of the specified tuid.
- Parameters
tuid (TUID) – A TUID string. It is also possible to specify only the first part of a tuid.
datadir (Optional[str] (default: None)) – Path of the data directory. If None, uses get_datadir() to determine the data directory.
- Return type
- Returns
The path to the experiment container
- Raises
FileNotFoundError – Experiment container not found.
- multi_experiment_data_extractor(experiment, parameter, *, new_name=None, t_start=None, t_stop=None, analysis_name=None, dimension='dim_0')[source]
A data extraction function which loops through multiple quantify data directories and extracts the selected varying parameter value and corresponding datasets, then compiles this data into a single dataset for further analysis.
By default, we will extract the unprocessed dataset from each directory, but if analysis_name is specified, we will extract the processed dataset for that analysis.
- Parameters
experiment (str) – The experiment to be included in the new dataset. For example "Pulsed spectroscopy".
parameter (str) – The name and address of the QCoDeS parameter from which to get the value, including the instrument name and all submodules. For example "current_source.module0.dac0.current".
new_name (Optional[str] (default: None)) – The name of the new multifile dataset. If no new name is given, it will create a new name as experiment vs instrument.
t_start (Optional[str] (default: None)) – Datetime to search from, inclusive. If a string is specified, it will be converted to a datetime object using parse. If no value is specified, will use the year 1 as a reference t_start.
t_stop (Optional[str] (default: None)) – Datetime to search until, exclusive. If a string is specified, it will be converted to a datetime object using parse. If no value is specified, will use the current time as a reference t_stop.
analysis_name (Optional[str] (default: None)) – In the case that we want to extract the processed dataset for a given analysis, this is the name of the analysis.
dimension (Optional[str] (default: 'dim_0')) – The name of the dataset dimension to concatenate over.
- Return type
- Returns
The compiled quantify dataset.
- snapshot(update=False, clean=True)[source]
State of all instruments setup as a JSON-compatible dictionary (everything that the custom JSON encoder class
NumpyJSONEncoder
supports).
- to_gridded_dataset(quantify_dataset, dimension='dim_0', coords_names=None)[source]
Converts a flattened (a.k.a. “stacked”) dataset, such as the one generated by initialize_dataset(), to a dataset in which the measured values are mapped onto a grid in the xarray format.
This will be meaningful only if the data itself corresponds to a gridded measurement.
Note
Each individual (x0[i], x1[i], x2[i], ...) setpoint must be unique.
Conversions applied:
- The names "x0", "x1", ... will correspond to the names of the Dimensions.
- The unique values for each of the x0, x1, ... Variables are converted to Coordinates.
- The y0, y1, ... Variables are reshaped into a (multi-)dimensional grid and associated to the Coordinates.
See also
Examples
from pathlib import Path import numpy as np from qcodes import ManualParameter, Parameter, validators from quantify_core.data.handling import set_datadir, to_gridded_dataset from quantify_core.measurement import MeasurementControl set_datadir(Path.home() / "quantify-data") time_a = ManualParameter( name="time_a", label="Time A", unit="s", vals=validators.Numbers(), initial_value=1, ) time_b = ManualParameter( name="time_b", label="Time B", unit="s", vals=validators.Numbers(), initial_value=1, ) signal = Parameter( name="sig_a", label="Signal A", unit="V", get_cmd=lambda: np.exp(time_a()) + 0.5 * np.exp(time_b()), ) meas_ctrl = MeasurementControl("meas_ctrl") meas_ctrl.settables([time_a, time_b]) meas_ctrl.gettables(signal) meas_ctrl.setpoints_grid([np.linspace(0, 5, 10), np.linspace(5, 0, 12)]) dset = meas_ctrl.run("2D-single-float-valued-settable-gettable") dset_grid = to_gridded_dataset(dset) dset_grid.y0.plot(cmap="viridis")
Starting iterative measurement... 100% completed | elapsed time: 0s | time left: 0s 100% completed | elapsed time: 0s | time left: 0s
<matplotlib.collections.QuadMesh at 0x7f0395e49dc0>
- Parameters
quantify_dataset (Dataset) – Input dataset in the format generated by initialize_dataset.
dimension (str (default: 'dim_0')) – The flattened xarray Dimension.
coords_names (Optional[Iterable] (default: None)) – Optionally specify explicitly which Variables correspond to orthogonal coordinates, e.g. the dataset holds values for ("x0", "x1") but only "x0" is independent: to_gridded_dataset(dset, coords_names=["x0"]).
- Return type
- Returns
The new dataset.
- trim_dataset(dataset)[source]
Trim NaNs from a dataset, useful in the case of a dynamically resized dataset (e.g. adaptive loops).
- write_dataset(path, dataset)[source]
Writes a Dataset to a file with the h5netcdf engine.
Before writing, AdapterH5NetCDF.adapt() is applied.
To accommodate complex-type numbers and arrays, invalid_netcdf=True is used.
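A minimal round-trip sketch using these two functions; the dataset contents and file path are illustrative:
import numpy as np
import xarray as xr
from quantify_core.data.handling import write_dataset, load_dataset_from_path

# Write a small xarray Dataset to disk and read it back.
dataset = xr.Dataset({"y0": ("dim_0", np.linspace(0, 1, 5))})
write_dataset("example_dataset.hdf5", dataset)  # illustrative file path
loaded = load_dataset_from_path("example_dataset.hdf5")
print(loaded)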
dataset_adapters
Utilities for dataset (python object) handling.
- class AdapterH5NetCDF[source]
Quantify dataset adapter for the h5netcdf engine.
It has the functionality of adapting the Quantify dataset to a format compatible with the h5netcdf xarray backend engine that is used to write and load the dataset to/from disk.
Warning
The h5netcdf engine has minor issues when performing a two-way trip of the dataset. The type of some attributes is not preserved. E.g., list- and tuple-like objects are loaded as numpy arrays of dtype=object.
- classmethod adapt(dataset)[source]
Serializes the dataset and variable attributes to JSON.
To prevent the JSON serialization of specific items, their names should be listed under the attribute named json_serialize_exclude (for each attrs dictionary).
- static attrs_convert(attrs, inplace=False, vals_converter=<function dumps>)[source]
Converts to/from a JSON string the values of the keys which are not listed in the json_serialize_exclude list.
- Parameters
attrs – The input dictionary.
inplace (default: False) – If True, the values are replaced in place; otherwise a deepcopy of attrs is performed first.
- class DatasetAdapterBase[source]
A generic interface for a dataset adapter.
Note
It might be difficult to grasp the generic purpose of this class. See AdapterH5NetCDF for a specialized use case.
A dataset adapter is intended to “adapt”/”convert” a dataset to a format compatible with some other piece of software, such as a function, interface, read/write back end, etc. The main use case is to define the interface of the AdapterH5NetCDF that converts the Quantify dataset for loading and writing to/from disk.
Subclasses implementing this interface are intended to be a two-way bridge to some other object/interface/backend, which we refer to as the “Target” of the adapter.
The function .adapt() should return a dataset to be consumed by the Target.
The function .recover() should receive a dataset generated by the Target.
- class DatasetAdapterIdentity[source]
A dataset adapter that does not modify the datasets in any way.
Intended to be used just as an object that respects the adapter interface defined by DatasetAdapterBase.
A particular use case is backwards compatibility for loading and writing older versions of the Quantify dataset.
dataset_attrs
Utilities for handling the attributes of xarray.Dataset and xarray.DataArray (python objects).
- class QCoordAttrs(unit='', long_name='', is_main_coord=None, uniformly_spaced=None, is_dataset_ref=False, json_serialize_exclude=<factory>)[source]
A dataclass representing the attrs attribute of main and secondary coordinates.
All attributes are mandatory to be present but can be None.
Examples
from quantify_core.utilities import examples_support examples_support.mk_main_coord_attrs()
{'unit': '', 'long_name': '', 'is_main_coord': True, 'uniformly_spaced': True, 'is_dataset_ref': False, 'json_serialize_exclude': []}
examples_support.mk_secondary_coord_attrs()
{'unit': '', 'long_name': '', 'is_main_coord': False, 'uniformly_spaced': True, 'is_dataset_ref': False, 'json_serialize_exclude': []}
- is_dataset_ref: bool = False
Flags if it is an array of quantify_core.data.types.TUIDs of another dataset.
- is_main_coord: bool = None
When set to True, flags the xarray coordinate to correspond to a main coordinate, otherwise (False) it corresponds to a secondary coordinate.
- class QDatasetAttrs(tuid=None, dataset_name='', dataset_state=None, timestamp_start=None, timestamp_end=None, quantify_dataset_version='2.0.0', software_versions=<factory>, relationships=<factory>, json_serialize_exclude=<factory>)[source]
A dataclass representing the attrs attribute of the Quantify dataset.
All attributes are mandatory to be present but can be None.
Example
import pendulum from quantify_core.utilities import examples_support examples_support.mk_dataset_attrs( dataset_name="Bias scan", timestamp_start=pendulum.now().to_iso8601_string(), timestamp_end=pendulum.now().add(minutes=2).to_iso8601_string(), dataset_state="done", )
{'tuid': '20230309-211844-461-ff97e5', 'dataset_name': 'Bias scan', 'dataset_state': 'done', 'timestamp_start': '2023-03-09T21:18:44.461203+00:00', 'timestamp_end': '2023-03-09T21:20:44.461266+00:00', 'quantify_dataset_version': '2.0.0', 'software_versions': {}, 'relationships': [], 'json_serialize_exclude': []}
- dataset_name: str = ''
The dataset name, usually the same as the experiment name included in the name of the experiment container.
- dataset_state: Literal[None, 'running', 'interrupted (safety)', 'interrupted (forced)', 'done'] = None
Denotes the last known state of the experiment/data acquisition that served to ‘build’ this dataset. Can be used later to filter ‘bad’ datasets.
-
json_serialize_exclude:
List
[str
] A list of strings corresponding to the names of other attributes that should not be json-serialized when writing the dataset to disk. Empty by default.
-
quantify_dataset_version:
str
= '2.0.0' A string identifying the version of this Quantify dataset for backwards compatibility.
-
relationships:
List
[QDatasetIntraRelationship
] A list of relationships within the dataset specified as list of dictionaries that comply with the
QDatasetIntraRelationship
.
-
software_versions:
Dict
[str
,str
] A mapping of other software packages that are relevant to log for this dataset. Another example is the git tag or hash of a commit of a lab repository.
Example
import pendulum

from quantify_core.utilities import examples_support

examples_support.mk_dataset_attrs(
    dataset_name="My experiment",
    timestamp_start=pendulum.now().to_iso8601_string(),
    timestamp_end=pendulum.now().add(minutes=2).to_iso8601_string(),
    software_versions={
        "lab_fridge_magnet_driver": "v1.4.2",  # software version/tag
        "my_lab_repo": "9d8acf63f48c469c1b9fa9f2c3cf230845f67b18",  # git commit hash
    },
)
{'tuid': '20230309-211844-480-631950', 'dataset_name': 'My experiment', 'dataset_state': None, 'timestamp_start': '2023-03-09T21:18:44.480035+00:00', 'timestamp_end': '2023-03-09T21:20:44.480085+00:00', 'quantify_dataset_version': '2.0.0', 'software_versions': {'lab_fridge_magnet_driver': 'v1.4.2', 'my_lab_repo': '9d8acf63f48c469c1b9fa9f2c3cf230845f67b18'}, 'relationships': [], 'json_serialize_exclude': []}
-
timestamp_end:
Optional
[str
] = None Human-readable timestamp (ISO8601) as returned by
pendulum.now().to_iso8601_string()
(docs). Specifies when the experiment/data acquisition ended.
-
timestamp_start:
Optional
[str
] = None Human-readable timestamp (ISO8601) as returned by
pendulum.now().to_iso8601_string()
(docs). Specifies when the experiment/data acquisition started.
-
tuid:
Optional
[str
] = None The time-based unique identifier of the dataset. See
quantify_core.data.types.TUID
.
- class QDatasetIntraRelationship(item_name=None, relation_type=None, related_names=<factory>, relation_metadata=<factory>)[source]
A dataclass representing a dictionary that specifies a relationship between dataset variables.
A prominent example is calibration points contained in one or several variables that are necessary to correctly interpret the data of another variable.
Examples
This is how the attributes of a dataset containing a
q0
main variable andq0_cal
secondary variables would look. Theq0_cal
corresponds to calibration datapoints. See Quantify dataset - examples for examples with more context.

from quantify_core.data.dataset_attrs import QDatasetIntraRelationship
from quantify_core.utilities import examples_support

attrs = examples_support.mk_dataset_attrs(
    relationships=[
        QDatasetIntraRelationship(
            item_name="q0",
            relation_type="calibration",
            related_names=["q0_cal"],
        ).to_dict()
    ]
)
- item_name: str | None = None
The name of the coordinate/variable to which we want to relate other coordinates/variables.
- related_names: List[str]
A list of names related to the
item_name
.
- relation_metadata: Dict[str, Any]
A free-form dictionary to store additional information relevant to this relationship.
- relation_type: str | None = None
A string specifying the type of relationship.
Reserved relation types:
"calibration"
- Specifies a list of main variables used as calibration data for the main variables whose name is specified by theitem_name
.
- class QVarAttrs(unit='', long_name='', is_main_var=None, uniformly_spaced=None, grid=None, is_dataset_ref=False, has_repetitions=False, json_serialize_exclude=<factory>)[source]
A dataclass representing the
attrs
attribute of main and secondary variables.All attributes are mandatory to be present but can be
None
.Examples
from quantify_core.utilities import examples_support

examples_support.mk_main_var_attrs(coords=["time"])
{'unit': '', 'long_name': '', 'is_main_var': True, 'uniformly_spaced': True, 'grid': True, 'is_dataset_ref': False, 'has_repetitions': False, 'json_serialize_exclude': [], 'coords': ['time']}
examples_support.mk_secondary_var_attrs(coords=["cal"])
{'unit': '', 'long_name': '', 'is_main_var': False, 'uniformly_spaced': True, 'grid': True, 'is_dataset_ref': False, 'has_repetitions': False, 'json_serialize_exclude': [], 'coords': ['cal']}
- grid: bool | None = None
Indicates if the variable's data are located on a grid, which does not need to be uniformly spaced along all dimensions. In other words, specifies if the corresponding main coordinates are the ‘unrolled’ points (also known as ‘unstacked’) corresponding to a grid.
If
True
then it is possible to use quantify_core.data.handling.to_gridded_dataset()
to convert the variables to a ‘stacked’ version.
- has_repetitions: bool = False
Indicates that the outermost dimension of this variable is a repetitions dimension. This attribute is intended to allow easy programmatic detection of such dimension. It can be used, for example, to average along this dimension before an automatic live plotting or analysis.
- is_dataset_ref: bool = False
Flags if it is an array of
quantify_core.data.types.TUID
s of other datasets. See also Dataset for a “nested MeasurementControl” experiment.
- is_main_var: bool | None = None
When set to
True
, flags this xarray data variable to correspond to a main variable, otherwise (False
) it corresponds to a secondary variable.
- json_serialize_exclude: List[str]
A list of strings corresponding to the names of other attributes that should not be json-serialized when writing the dataset to disk. Empty by default.
- long_name: str = ''
A long name for this variable.
- uniformly_spaced: bool | None = None
Indicates if the values are uniformly spaced. This does not apply to ‘true’ main variables but, because a MultiIndex is not supported yet by xarray when writing to disk, some coordinate variables have to be stored as main variables instead.
- unit: str = ''
The units of the values.
- get_main_coords(dataset)[source]
Finds the main coordinates in the dataset (except secondary coordinates).
Finds the xarray coordinates in the dataset that have their attributes
is_main_coord
set toTrue
(inside thexarray.DataArray.attrs
dictionary).
- get_main_dims(dataset)[source]
Determines the ‘main’ dimensions in the dataset.
Each of the dimensions returned is the outermost dimension for a main coordinate/variable, OR the second one when a repetitions dimension is present (see
has_repetitions
).These dimensions are detected based on
is_main_coord
andis_main_var
attributes.Warning
The dimensions in this list should be considered “incompatible” in the sense that each main coordinate/variable must lie on one and only one of these dimensions.
Note
The dimensions on which the secondary coordinates/variables lie are not included in this list. See also
get_secondary_dims()
.
- get_main_vars(dataset)[source]
Finds the main variables in the dataset (except secondary variables).
Finds the xarray data variables in the dataset that have their attributes
is_main_var
set toTrue
(inside thexarray.DataArray.attrs
dictionary).
- get_secondary_coords(dataset)[source]
Finds the secondary coordinates in the dataset.
Finds the xarray coordinates in the dataset that have their attributes
is_main_coord
set toFalse
(inside thexarray.DataArray.attrs
dictionary).
- get_secondary_dims(dataset)[source]
Returns the ‘main’ secondary dimensions.
For details see
get_main_dims()
,is_main_var
andis_main_coord
.
- get_secondary_vars(dataset)[source]
Finds the secondary variables in the dataset.
Finds the xarray data variables in the dataset that have their attributes
is_main_var
set toFalse
(inside thexarray.DataArray.attrs
dictionary).
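A short sketch of the getters above applied to a mock dataset; mk_t1_av_with_cal_dataset is the factory documented under dataset_examples further below, and the names in the comments are indicative only.

from quantify_core.data import dataset_attrs as dattrs
from quantify_core.utilities import dataset_examples

dataset = dataset_examples.mk_t1_av_with_cal_dataset()

print(dattrs.get_main_vars(dataset))       # e.g. ['q0']
print(dattrs.get_secondary_vars(dataset))  # e.g. ['q0_cal']
print(dattrs.get_main_dims(dataset))       # dimension(s) of the main variables
print(dattrs.get_secondary_dims(dataset))  # dimension(s) of the secondary variables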
experiment
Utilities for managing experiment data.
- class QuantifyExperiment(tuid, dataset=None)[source]
Class which represents all data related to an experiment. This allows the user to run experiments and store data without the quantify_core.measurement.control.MeasurementControl. The class serves as an initial interface for other data storage backends.
- load_dataset()[source]
Loads the quantify dataset associated with the TUID set within the class.
- Raises
FileNotFoundError – If no file with a dataset can be found
- Return type
- load_metadata()[source]
Loads the metadata from the directory specified by ~.experiment_directory.
- Return type
- Returns
The loaded metadata from disk. None if no file is found.
- Raises
FileNotFoundError – If no file with metadata can be found
- load_snapshot()[source]
Loads the snapshot from the directory specified by ~.experiment_directory.
- Return type
- Returns
The loaded snapshot from disk
- Raises
FileNotFoundError – If no file with a snapshot can be found
- load_text(rel_path)[source]
Loads a string from a text file from the path specified by ~.experiment_directory / rel_path.
- Parameters
rel_path (
str
) – path relative to the base directory of the experiment, e.g. “data.json” or “my_folder/data.txt”- Return type
- Returns
The loaded text from disk
- Raises
FileNotFoundError – If no file can be found at rel_path
- save_metadata(metadata=None)[source]
Writes the metadata to disk as specified by ~.experiment_directory.
- save_snapshot(snapshot=None)[source]
Writes the snapshot to disk as specified by ~.experiment_directory.
- save_text(text, rel_path)[source]
Saves a string to a text file in the path specified by ~.experiment_directory / rel_path.
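A minimal usage sketch; the import path quantify_core.data.experiment is assumed, and the TUID below is only a placeholder for an experiment that already exists in the data directory.

from quantify_core.data.experiment import QuantifyExperiment

experiment = QuantifyExperiment(tuid="20230309-211844-461-ff97e5")

dataset = experiment.load_dataset()             # raises FileNotFoundError if absent
experiment.save_text("wiring notes ...", "notes.txt")
notes = experiment.load_text("notes.txt")
experiment.save_metadata({"operator": "jane"})  # assumes metadata is a plain dict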
measurement
types
Module containing the core types for use with the MeasurementControl.
- class Gettable(obj: Any)[source]
Defines the Gettable concept, which is considered complete if the given type satisfies the schema below. This class does not wrap the passed-in object but simply verifies and returns it.
attributes
properties
name
identifier
oneOf
type
string
type
array
items
type
string
label
axis descriptor
oneOf
type
string
type
array
items
type
string
unit
unit of measurement
oneOf
type
string
type
array
items
type
string
batched
true if data is processed in batches, false otherwise
type
boolean
batch_size
When .batched=True, indicates the (maximum) size of the batch of datapoints that this gettable supports. The measurement loop will effectively use the min(settable(s).batch_size, gettable(s).batch_size).
type
integer
methods
properties
get
get values from this device
type
object
prepare
called before the acquisition loop
type
object
finish
called once after the acquisition loop
type
object
- class Settable(obj: Any)[source]
Defines the Settable concept, which is considered complete if the given type satisfies the schema below. This class does not wrap the passed-in object but simply verifies and returns it. A minimal sketch of conforming objects follows the schema.
attributes
properties
name
identifier
type
string
label
axis descriptor
type
string
unit
unit of measurement
type
string
batched
true if data is processed in batches, false otherwise
type
boolean
batch_size
When .batched=True, indicates the (maximum) size of the batch of datapoints that this settable supports. The measurement loop will effectively use the min(settable(s).batch_size, gettable(s).batch_size).
type
integer
methods
properties
set
send data to this device
type
object
prepare
called before the acquisition loop
type
object
finish
called once after the acquisition loop
type
object
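A minimal sketch of objects that satisfy the Settable and Gettable concepts above. The attribute and method names follow the two schemas; the classes, values and the import path quantify_core.measurement.types are otherwise illustrative.

import numpy as np

from quantify_core.measurement.types import Gettable, Settable


class CurrentSource:
    """Mock settable: name/label/unit attributes plus a ``set`` method."""

    def __init__(self):
        self.name = "bias_current"
        self.label = "Bias current"
        self.unit = "A"
        self.batched = False
        self._value = 0.0

    def set(self, value):
        self._value = value


class VoltageMeter:
    """Mock gettable: name/label/unit attributes plus a ``get`` method."""

    def __init__(self, source):
        self.name = "voltage"
        self.label = "Voltage"
        self.unit = "V"
        self.batched = False
        self._source = source

    def get(self):
        # Fake a linear response with a bit of noise.
        return 50.0 * self._source._value + np.random.normal(scale=1e-3)


# ``Settable``/``Gettable`` only verify the objects and return them unchanged.
settable = Settable(CurrentSource())
gettable = Gettable(VoltageMeter(settable))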
control
Module containing the MeasurementControl.
- class MeasurementControl(name)[source]
Instrument responsible for controlling the data acquisition loop.
MeasurementControl (MC) is based on the notion that every experiment consists of the following steps:
Set some parameter(s) (settable_pars)
Measure some other parameter(s) (gettable_pars)
Store the data.
Example
meas_ctrl.settables(mw_source1.freq)
meas_ctrl.setpoints(np.arange(5e9, 5.2e9, 100e3))
meas_ctrl.gettables(pulsar_QRM.signal)
dataset = meas_ctrl.run(name='Frequency sweep')
MC exists to enforce structure on experiments. Enforcing this structure allows:
Standardization of data storage.
Providing basic real-time visualization.
MC imposes minimal constraints and allows:
Iterative loops, experiments in which setpoints are processed step by step.
Batched loops, experiments in which setpoints are processed in batches.
Adaptive loops, setpoints are determined based on measured values.
- __init__(name)[source]
Creates an instance of the Measurement Control.
- Parameters
name (
str
) – name of this instrument.
- clear_experiment_data()[source]
Remove all experiment_data parameters from the experiment_data submodule
- gettables(gettable_pars)[source]
Define the parameters to be acquired during the acquisition loop.
The
Gettable
helper class defines the requirements for a Gettable object.- Parameters
gettable_pars –
- parameter(s) to be acquired during the acquisition loop; accepts:
list or tuple of multiple Gettable objects
a single Gettable object
- measurement_description()[source]
Return a serializable description of the latest measurement
Users can add additional information to the description manually.
- print_progress(progress_message=None)[source]
Prints the provided progress_message or a default one, and calls the callback specified by on_progress_callback. Printing can be suppressed with .verbose(False).
- run(name='', soft_avg=1, lazy_set=None, save_data=True)[source]
Starts a data acquisition loop.
- Parameters
name (
str
(default:''
)) – Name of the measurement. It is included in the name of the data files.soft_avg (
int
(default:1
)) – Number of software averages to be performed by the measurement control. E.g. if soft_avg=3 the full dataset will be measured 3 times and the measured values will be averaged element-wise, the averaged dataset is then returned.lazy_set (
Optional
[bool
] (default:None
)) –If
True
and a setpoint equals the previous setpoint, the.set
method of the settable will not be called for that iteration. If this argument isNone
, the.lazy_set()
ManualParameter is used instead (which by default isFalse
).Warning
This feature is not available yet when running in batched mode.
save_data (
bool
(default:True
)) – IfTrue
, the measurement data is stored.
- Return type
- run_adaptive(name, params, lazy_set=None)[source]
Starts a data acquisition loop using an adaptive function.
Warning
The functionality of this mode can be complex - it is recommended to read the relevant long form documentation.
- Parameters
name – Name of the measurement. This name is included in the name of the data files.
params – Key value parameters describe the adaptive function to use, and any further parameters for that function.
lazy_set (
Optional
[bool
] (default:None
)) – IfTrue
and a setpoint equals the previous setpoint, the.set
method of the settable will not be called for that iteration. If this argument isNone
, the.lazy_set()
ManualParameter is used instead (which by default isFalse
).
- Return type
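A hedged sketch of an adaptive run, reusing the instrument names from the example at the top of this class; the params keys shown (adaptive_function, x0, method) follow the convention from the long-form documentation and are assumptions here.

from scipy import optimize

meas_ctrl.settables(mw_source1.freq)
meas_ctrl.gettables(pulsar_QRM.signal)
dset = meas_ctrl.run_adaptive(
    "adaptive frequency sweep",
    {
        "adaptive_function": optimize.minimize,  # any scipy-minimize-style function
        "x0": [5.1e9],                           # initial setpoint(s)
        "method": "Nelder-Mead",
    },
)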
- set_experiment_data(experiment_data, overwrite=True)[source]
Populates the experiment_data submodule with experiment_data parameters
- Parameters
experiment_data (
Dict
[str
,Any
]) –Dict specifying the names of the experiment_data parameters and their values. Follows the format:
{ "parameter_name": { "value": 10.2 "label": "parameter label" "unit": "Hz" } }
overwrite (
bool
(default:True
)) – If True, clear all previously saved experiment_data parameters and save new ones. If False, keep all previously saved experiment_data parameters and change their values if necessary
- setpoints(setpoints)[source]
Set the setpoints that determine the values to be set in the acquisition loop.
Tip
Use
column_stack()
to reshape multiple 1D arrays when setting multiple settables.- Parameters
setpoints (
ndarray
) – An array that defines the values to loop over in the experiment. The shape of the array has to be either (N,) or (N,1) for a 1D loop; or (N, M) in the case of an MD loop.
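A short sketch of providing setpoints for two settables at once, using numpy.column_stack as suggested in the tip above; par0, par1 and sig refer to the mock parameters defined in the Examples section further below.

import numpy as np

x0_vals = np.linspace(0, 1, 50)
x1_vals = np.linspace(-1, 1, 50)

meas_ctrl.settables([par0, par1])
meas_ctrl.setpoints(np.column_stack((x0_vals, x1_vals)))  # shape (50, 2)
meas_ctrl.gettables(sig)
dset = meas_ctrl.run("two-settable iterative sweep")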
- setpoints_grid(setpoints)[source]
Makes a grid from the provided setpoints assuming each array element corresponds to an orthogonal dimension. The resulting gridded points determine values to be set in the acquisition loop.
The gridding is such that the innermost loop corresponds to the batched settable with the smallest .batch_size.
- Parameters
setpoints – The values to loop over in the experiment. The grid is reshaped in the same order.
Examples
We first prepare some utilities necessary for the examples.
from pathlib import Path

import matplotlib.pyplot as plt
import numpy as np
import xarray as xr
from qcodes import ManualParameter, Parameter

import quantify_core.data.handling as dh
from quantify_core.measurement import MeasurementControl

dh.set_datadir(Path.home() / "quantify-data")
meas_ctrl = MeasurementControl("meas_ctrl")

par0 = ManualParameter(name="x0", label="X0", unit="s")
par1 = ManualParameter(name="x1", label="X1", unit="s")
par2 = ManualParameter(name="x2", label="X2", unit="s")
par3 = ManualParameter(name="x3", label="X3", unit="s")
sig = Parameter(name="sig", label="Signal", unit="V", get_cmd=lambda: np.exp(par0()))
Iterative-only settables
par0.batched = False
par1.batched = False
par2.batched = False
sig.batched = False

meas_ctrl.settables([par0, par1, par2])
meas_ctrl.setpoints_grid(
    [
        np.linspace(0, 1, 4),
        np.linspace(1, 2, 5),
        np.linspace(2, 3, 6),
    ]
)
meas_ctrl.gettables(sig)
dset = meas_ctrl.run("demo")

list(xr.plot.line(xi, label=name) for name, xi in dset.coords.items())
plt.gca().legend()
Starting iterative measurement... 100% completed | elapsed time: 0s | time left: 0s 100% completed | elapsed time: 0s | time left: 0s
<matplotlib.legend.Legend at 0x7f039591fdf0>
Batched-only settables
Note that the settable with the lowest .batch_size will correspond to the innermost loop.
par0.batched = True
par0.batch_size = 8
par1.batched = True
par1.batch_size = 8
par2.batched = True
par2.batch_size = 4

sig = Parameter(name="sig", label="Signal", unit="V", get_cmd=lambda: np.exp(par2()))
sig.batched = True
sig.batch_size = 32

meas_ctrl.settables([par0, par1, par2])
meas_ctrl.setpoints_grid(
    [
        np.linspace(0, 1, 3),
        np.linspace(1, 2, 5),
        np.linspace(2, 3, 4),
    ]
)
meas_ctrl.gettables(sig)
dset = meas_ctrl.run("demo")

list(xr.plot.line(xi, label=name) for name, xi in dset.coords.items())
plt.gca().legend()
Starting batched measurement... Iterative settable(s) [outer loop(s)]: --- (None) --- Batched settable(s): x0, x1, x2 Batch size limit: 4 6% completed | elapsed time: 0s | time left: 0s last batch size: 4 6% completed | elapsed time: 0s | time left: 0s last batch size: 4 100% completed | elapsed time: 0s | time left: 0s last batch size: 4 100% completed | elapsed time: 0s | time left: 0s last batch size: 4
<matplotlib.legend.Legend at 0x7f03958bee20>
Batched and iterative settables
Note that the settable with the lowest .batch_size will correspond to the innermost loop. Furthermore, the iterative settables will be the outermost loops.
par0.batched = False
par1.batched = True
par1.batch_size = 8
par2.batched = False
par3.batched = True
par3.batch_size = 4

sig = Parameter(name="sig", label="Signal", unit="V", get_cmd=lambda: np.exp(par3()))
sig.batched = True
sig.batch_size = 32

meas_ctrl.settables([par0, par1, par2, par3])
meas_ctrl.setpoints_grid(
    [
        np.linspace(0, 1, 3),
        np.linspace(1, 2, 5),
        np.linspace(2, 3, 4),
        np.linspace(3, 4, 6),
    ]
)
meas_ctrl.gettables(sig)
dset = meas_ctrl.run("demo")

list(xr.plot.line(xi, label=name) for name, xi in dset.coords.items())
plt.gca().legend()
Starting batched measurement... Iterative settable(s) [outer loop(s)]: x0, x2 Batched settable(s): x1, x3 Batch size limit: 4 1% completed | elapsed time: 0s | time left: 4s last batch size: 4 1% completed | elapsed time: 0s | time left: 4s last batch size: 4 100% completed | elapsed time: 0s | time left: 0s last batch size: 2 100% completed | elapsed time: 0s | time left: 0s last batch size: 2
<matplotlib.legend.Legend at 0x7f039591fb20>
- settables(settable_pars)[source]
Define the settable parameters for the acquisition loop.
The
Settable
helper class defines the requirements for a Settable object.- Parameters
settable_pars – parameter(s) to be set during the acquisition loop, accepts a list or tuple of multiple Settable objects or a single Settable object.
- instr_plotmon = InstrumentRefParameter( vals=vals.MultiType(vals.Strings(), vals.Enum(None)), instrument=self, name="instr_plotmon", )
Instrument responsible for live plotting. Can be set to
None
to disable live plotting.
- lazy_set = ManualParameter( vals=vals.Bool(), initial_value=False, name="lazy_set", instrument=self, )
If set to
True
, only set any settable if the setpoint differs from the previous setpoint. Note that this parameter is overridden by thelazy_set
argument passed to therun()
andrun_adaptive()
methods.
- on_progress_callback = ManualParameter( vals=vals.Callable(), instrument=self, name="on_progress_callback", )
A callback to communicate progress. This should be a callable accepting floats between 0 and 100 indicating the percentage done.
- update_interval = ManualParameter( initial_value=0.5, vals=vals.Numbers(min_value=0.1), instrument=self, name="update_interval", )
Interval for updates during the data acquisition loop, every time more than
update_interval
time has elapsed when acquiring new data points, data is written to file (and the live monitoring detects updates).
- verbose = ManualParameter( vals=vals.Bool(), initial_value=True, instrument=self, name="verbose", )
If set to
True
, prints tostd_out
during experiments.
- grid_setpoints(setpoints, settables=None)[source]
Makes gridded setpoints. If settables is provided, the gridding is such that the innermost loop corresponds to the batched settable with the smallest .batch_size.
Warning
Using this method typecasts all values into the same type. This may lead to validator errors when setting e.g., a float instead of an int.
- Parameters
setpoints (
Iterable
) – A list of arrays that defines the values to loop over in the experiment for each orthogonal dimension. The grid is reshaped in the same order.settables (
Optional
[Iterable
] (default:None
)) – A list of settable objects to which the elements in the setpoints correspond to. Used to correctly grid data when mixing batched and iterative settables.
- Returns
An array where the first numpy axis corresponds to individual setpoints.
- Return type
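A minimal sketch of calling grid_setpoints directly (without settables); the import path quantify_core.measurement.control is assumed from this section.

import numpy as np

from quantify_core.measurement.control import grid_setpoints

gridded = grid_setpoints([np.array([0.0, 1.0]), np.array([10.0, 20.0, 30.0])])
print(gridded.shape)  # (6, 2): one row per setpoint, one column per settable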
utilities
experiment_helpers
Helpers for performing experiments.
- create_plotmon_from_historical(tuid=None, label='')[source]
Creates a plotmon using the dataset of the provided experiment denoted by the tuid in the datadir. Loads the data and draws any required figures.
NB Creating a new plotmon can be slow. Consider using
PlotMonitor_pyqt.tuids_extra()
to visualize datasets in the same plotmon.- Parameters
- Return type
- Returns
the plotmon
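A minimal usage sketch; the TUID is only a placeholder for an existing dataset in the data directory, and the import path is assumed from this section.

from quantify_core.utilities.experiment_helpers import create_plotmon_from_historical

plotmon = create_plotmon_from_historical(tuid="20230309-211844-461-ff97e5")
plotmon.main_QtPlot  # display the main window in a Jupyter-like notebook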
- get_all_parents(instr_mod)[source]
Get a list of all the parent submodules and instruments of a given QCodes instrument, submodule or parameter.
- Parameters
instr_mod (
Union
[Instrument
,InstrumentChannel
,Parameter
]) – The QCodes instrument, submodule or parameter whose parents we wish to find- Return type
- Returns
A list of all the parents of that object (and the object itself)
- load_settings_onto_instrument(instrument, tuid=None, datadir=None, exception_handling='raise')[source]
Loads settings from a previous experiment onto a current
Instrument
, or any of its submodules or parameters. This information is loaded from the ‘snapshot.json’ file in the provided experiment directory.- Parameters
instrument (
Union
[Instrument
,InstrumentChannel
,Parameter
]) – theInstrument
,InstrumentChannel
orParameter
to be configured.tuid (
TUID
) – the TUID of the experiment. If None use latest TUID.datadir (str) – path of the data directory. If None, uses get_datadir() to determine the data directory.
exception_handling (
Literal
['raise'
,'warn'
] (default:'raise'
)) – desired behaviour if error occurs when trying to get parameter: raise exception or give warning.
- Raises
ValueError – if the provided instrument has no match in the loaded snapshot.
- Return type
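A minimal sketch; mw_source1 reuses the mock instrument name from the MeasurementControl example above, and the TUID is a placeholder.

from quantify_core.utilities.experiment_helpers import load_settings_onto_instrument

# Restore every matching parameter of the instrument from 'snapshot.json'.
load_settings_onto_instrument(mw_source1, tuid="20230309-211844-461-ff97e5")

# Or restore a single parameter, warning instead of raising on problems.
load_settings_onto_instrument(
    mw_source1.freq,
    tuid="20230309-211844-461-ff97e5",
    exception_handling="warn",
)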
dataset_examples
Factories of exemplary and mock datasets to be used for testing and documentation.
- mk_nested_mc_dataset(num_points=12, flux_bias_min_max=(-0.04, 0.04), resonator_freqs_min_max=(7000000000.0, 7300000000.0), qubit_freqs_min_max=(4500000000.0, 5000000000.0), t1_values_min_max=(2e-05, 5e-05), seed=112233)[source]
Generates a dataset with dataset references and several coordinates that serve to index the same variables.
Note that each value for
resonator_freqs
,qubit_freqs
andt1_values
would have been extracted from other datasets, each corresponding to an individual experiment.- Parameters
num_points (
int
(default:12
)) – Number of datapoints to generate (used for all variables/coordinates).flux_bias_min_max (
tuple
(default:(-0.04, 0.04)
)) – Range for mock values.resonator_freqs_min_max (
tuple
(default:(7000000000.0, 7300000000.0)
)) – Range for mock values.qubit_freqs_min_max (
tuple
(default:(4500000000.0, 5000000000.0)
)) – Range for mock values.t1_values_min_max (
tuple
(default:(2e-05, 5e-05)
)) – Range for mock random values.seed (
Optional
[int
] (default:112233
)) – Random number generator seed passed tonumpy.random.default_rng
.
- Return type
- mk_shots_from_probabilities(probabilities, **kwargs)[source]
Generates multiple shots for a list of probabilities assuming two states.
- Parameters
probabilities (
Union
[ndarray
,list
]) – The list/array of the probabilities of one of the states.**kwargs – Keyword arguments passed to
mk_iq_shots()
.
- Returns
Array containing the shots. Shape: (num_shots, len(probabilities)).
- mk_surface7_cyles_dataset(num_cycles=3, **kwargs)[source]
See also
quantify_core.utilities.examples_support.mk_surface7_sched()
.- Parameters
num_cycles (
int
(default:3
)) – The number of repeating cycles before the final measurement.**kwargs – Keyword arguments passed to
mk_shots_from_probabilities()
.
- Return type
- mk_t1_av_dataset(t1_times=None, probabilities=None, **kwargs)[source]
Generates a dataset with mock data of a T1 experiment for a single qubit.
- Parameters
t1_times (
Optional
[ndarray
] (default:None
)) – Array with the T1 times corresponding to each probability inprobabilities
.probabilities (
Optional
[ndarray
] (default:None
)) – The probabilities of finding the qubit in the excited state.**kwargs – Keyword arguments passed to
mk_iq_shots()
.
- Return type
- mk_t1_av_with_cal_dataset(t1_times=None, probabilities=None, **kwargs)[source]
Generates a dataset with mock data of a T1 experiment for a single qubit including calibration points for the ground and excited states.
- Parameters
t1_times (
Optional
[ndarray
] (default:None
)) – Array with the T1 times corresponding to each probability inprobabilities
.probabilities (
Optional
[ndarray
] (default:None
)) – The probabilities of finding the qubit in the excited state.**kwargs – Keyword arguments passed to
mk_iq_shots()
.
- Return type
- mk_t1_shots_dataset(t1_times=None, probabilities=None, **kwargs)[source]
Generates a dataset with mock data of a T1 experiment for a single qubit including calibration points for the ground and excited states, including all the individual shots (repeated qubit state measurement for the same exact experiment).
- Parameters
t1_times (
Optional
[ndarray
] (default:None
)) – Array with the T1 times corresponding to each probability inprobabilities
.probabilities (
Optional
[ndarray
] (default:None
)) – The probabilities of finding the qubit in the excited state.**kwargs – Keyword arguments passed to
mk_iq_shots()
.
- Return type
- mk_t1_traces_dataset(t1_times=None, probabilities=None, **kwargs)[source]
Generates a dataset with mock data of a T1 experiment for a single qubit including calibration points for the ground and excited states, including all the individual shots (repeated qubit state measurement for the same exact experiment); and including all the signals that had to be digitized to obtain the rest of the data.
- Parameters
t1_times (
Optional
[ndarray
] (default:None
)) – Array with the T1 times corresponding to each probability inprobabilities
.probabilities (
Optional
[ndarray
] (default:None
)) – The probabilities of finding the qubit in the excited state.**kwargs – Keyword arguments passed to
mk_iq_shots()
.
- Return type
- mk_two_qubit_chevron_data(rep_num=5, seed=112233)[source]
Generates data that look similar to a two-qubit Chevron experiment.
- Parameters
- Returns
amp_values – Amplitude values.
time_values – Time values.
population_q0 – Q0 population values.
population_q1 – Q1 population values.
- mk_two_qubit_chevron_dataset(**kwargs)[source]
Generates a dataset that looks similar to a two-qubit Chevron experiment.
- Parameters
**kwargs – Keyword arguments passed to
mk_two_qubit_chevron_data()
.- Return type
- Returns
A mock Quantify dataset.
examples_support
Utilities used for creating examples for docs/tutorials/tests.
- mk_cosine_instrument()[source]
A container of parameters (mock instrument) providing a cosine model.
- Return type
- mk_dataset_attrs(tuid=<function gen_tuid>, **kwargs)[source]
A factory of attributes for Quantify dataset.
See
QDatasetAttrs
for details.
- mk_iq_shots(num_shots=128, sigmas=(0.1, 0.1), centers=(-0.2 + 0.65j, 0.7 + 4j), probabilities=(0.4, 0.6), seed=112233)[source]
Generates clusters of (I + 1j*Q) points with a Gaussian distribution with the specified sigmas and centers according to the probabilities of each cluster
Examples
import matplotlib.pyplot as plt

from quantify_core.utilities.examples_support import mk_iq_shots

center_0, center_1, center_2 = 0.6 + 1.2j, -0.2 + 0.5j, 0 + 1.5j

data = mk_iq_shots(
    100,
    sigmas=[0.1] * 2,
    centers=(center_0, center_1),
    probabilities=(0.3, 1 - 0.3),
)

fig, ax = plt.subplots()
ax.plot(data.real, data.imag, "o", label="Shots")
ax.plot(center_0.real, center_0.imag, "^", label="|0>", markersize=10)
ax.plot(center_1.real, center_1.imag, "d", label="|1>", markersize=10)
_ = ax.legend()

data = mk_iq_shots(
    200,
    sigmas=[0.1] * 3,
    centers=(center_0, center_1, center_2),
    probabilities=[0.35, 0.35, 1 - 0.35 - 0.35],
)

fig, ax = plt.subplots()
ax.plot(data.real, data.imag, "o", label="Shots")
ax.plot(center_0.real, center_0.imag, "^", label="|0>", markersize=10)
ax.plot(center_1.real, center_1.imag, "d", label="|1>", markersize=10)
ax.plot(center_2.real, center_2.imag, "*", label="|2>", markersize=10)
_ = ax.legend()
- Parameters
num_shots (
int
(default:128
)) – The number of shots to generate.sigmas – The sigma of the Gaussian distribution used for both real and imaginary parts.
centers (
Union
[Tuple
[complex
],ndarray
[Any
,dtype
[complex128
]]] (default:((-0.2+0.65j), (0.7+4j))
)) – The center of each cluster on the imaginary plane.probabilities (
Union
[Tuple
[float
],ndarray
[Any
,dtype
[float64
]]] (default:(0.4, 0.6)
)) – The probabilities of each cluster being randomly selected for each shot.seed (
Optional
[int
] (default:112233
)) – Random number generator seed passed tonumpy.random.default_rng
.
- Return type
ndarray
[Any
,dtype
[TypeVar
(ScalarType
, bound=generic
, covariant=True)]]
- mk_main_coord_attrs(uniformly_spaced=True, is_main_coord=True, **kwargs)[source]
A factory of attributes for main coordinates.
See
QCoordAttrs
for details.- Parameters
uniformly_spaced (
bool
(default:True
)) – Seequantify_core.data.dataset_attrs.QCoordAttrs.uniformly_spaced
.is_main_coord (
bool
(default:True
)) – Seequantify_core.data.dataset_attrs.QCoordAttrs.is_main_coord
.**kwargs – Any other items used to update the output dictionary.
- Return type
- mk_main_var_attrs(grid=True, uniformly_spaced=True, is_main_var=True, has_repetitions=False, **kwargs)[source]
A factory of attributes for main variables.
See
QVarAttrs
for details.- Parameters
grid (
bool
(default:True
)) – Seequantify_core.data.dataset_attrs.QVarAttrs.grid
.uniformly_spaced (
bool
(default:True
)) – Seequantify_core.data.dataset_attrs.QVarAttrs.uniformly_spaced
.is_main_var (
bool
(default:True
)) – Seequantify_core.data.dataset_attrs.QVarAttrs.is_main_var
.has_repetitions (
bool
(default:False
)) – Seequantify_core.data.dataset_attrs.QVarAttrs.has_repetitions
.**kwargs – Any other items used to update the output dictionary.
- Return type
- mk_secondary_coord_attrs(uniformly_spaced=True, is_main_coord=False, **kwargs)[source]
A factory of attributes for secondary coordinates.
See
QCoordAttrs
for details.- Parameters
uniformly_spaced (
bool
(default:True
)) – Seequantify_core.data.dataset_attrs.QCoordAttrs.uniformly_spaced
.is_main_coord (
bool
(default:False
)) – Seequantify_core.data.dataset_attrs.QCoordAttrs.is_main_coord
.**kwargs – Any other items used to update the output dictionary.
- Return type
- mk_secondary_var_attrs(grid=True, uniformly_spaced=True, is_main_var=False, has_repetitions=False, **kwargs)[source]
A factory of attributes for secondary variables.
See
QVarAttrs
for details.- Parameters
grid (
bool
(default:True
)) – Seequantify_core.data.dataset_attrs.QVarAttrs.grid
.uniformly_spaced (
bool
(default:True
)) – Seequantify_core.data.dataset_attrs.QVarAttrs.uniformly_spaced
.is_main_var (
bool
(default:False
)) – Seequantify_core.data.dataset_attrs.QVarAttrs.is_main_var
.has_repetitions (
bool
(default:False
)) – Seequantify_core.data.dataset_attrs.QVarAttrs.has_repetitions
.**kwargs – Any other items used to update the output dictionary.
- Return type
- mk_trace_for_iq_shot(iq_point, time_values=None, intermediate_freq=50000000.0)[source]
Generates mock “traces” that a physical instrument would digitize for the readout of a transmon qubit when using a down-converting IQ mixer.
Examples
import matplotlib.pyplot as plt

from quantify_core.utilities.examples_support import mk_trace_for_iq_shot, mk_trace_time

SHOT = 0.6 + 1.2j

time = mk_trace_time()
trace = mk_trace_for_iq_shot(SHOT)

fig, ax = plt.subplots(1, 1, figsize=(12, 12 / 1.61 / 2))
_ = ax.plot(time * 1e6, trace.imag, ".-", label="I-quadrature")
_ = ax.plot(time * 1e6, trace.real, ".-", label="Q-quadrature")
_ = ax.set_xlabel("Time [µs]")
_ = ax.set_ylabel("Amplitude [V]")
_ = ax.legend()
- Parameters
iq_point (
complex
) – A complex number representing a point on the IQ-plane.time_values (
Optional
[ndarray
[Any
,dtype
[TypeVar
(ScalarType
, bound=generic
, covariant=True)]]] (default:None
)) – The time instants at which the mock intermediate-frequency signal is sampled.intermediate_freq (
float
(default:50000000.0
)) – The intermediate frequency used in the down-conversion scheme.
- Return type
ndarray
[Any
,dtype
[TypeVar
(ScalarType
, bound=generic
, covariant=True)]]- Returns
An array of complex numbers.
- mk_trace_time(sampling_rate=1000000000.0, duration=3e-07)[source]
Generates an
arange
in which the entries correspond to time instants up toduration
seconds sampled according tosampling_rate
in Hz.See
mk_trace_for_iq_shot()
for a usage example.
deprecation
Utilities used to maintain deprecation and reverse-compatibility of the code.
- deprecated(drop_version, message_or_alias)[source]
A decorator for deprecating classes and methods.
For each deprecation we must provide a version in which this function or class will be removed completely and an instruction to the user about how to port their existing code to the new software version. This is easily done using this decorator.
If a callable is passed instead of a message, this decorator assumes that the function or class has moved to another module and generates the standard instruction to use the new function or class. There is no need to re-implement the function logic in two places, since the implementation of the new function or class is used in both the new and old aliases.
Example
import warnings

from quantify_core.utilities import deprecated
@deprecated("99.99", 'Initialize the "foo" literal directly.') def get_foo(): return "foo" with warnings.catch_warnings(record=True) as w: warnings.simplefilter("always") get_foo() # issues deprecation warning. assert len(w) == 1 assert w[0].category is FutureWarning print(w[0].message)
Function __main__.get_foo() is deprecated and will be removed in --main---99.99. Initialize the "foo" literal directly.
class NewClass:
    def __init__(self, val):
        self._val = val

    def val(self):
        return self._val


@deprecated("99.99", NewClass)
class OldClass:
    pass


with warnings.catch_warnings(record=True) as w:
    warnings.simplefilter("always")
    obj = OldClass(42)  # type: ignore
    assert len(w) == 1
    assert w[0].category is FutureWarning
    print(w[0].message)

print("obj.val() =", obj.val())  # type: ignore
Class __main__.OldClass is deprecated and will be removed in --main---99.99. Use __main__.NewClass instead. obj.val() = 42
class SomeClass:
    def __init__(self, val):
        self._val = val

    def val(self):
        return self._val

    @deprecated("7.77", val)
    def get_val(self):
        '''Deprecated alias'''


with warnings.catch_warnings(record=True) as w:
    warnings.simplefilter("always")
    val = SomeClass(42).get_val()  # issues deprecation warning.
    assert len(w) == 1
    assert w[0].category is FutureWarning
    print(w[0].message)

print("obj.get_val() =", val)
Function __main__.SomeClass.get_val() is deprecated and will be removed in --main---7.77. Use __main__.SomeClass.val() instead. obj.get_val() = 42
- Parameters
drop_version (
str
) – A version of the package when the deprecated function or class will be dropped.message_or_alias (
Union
[str
,Callable
]) – Either an instruction about how to port the software to a new version without the usage of deprecated calls (string), or the new drop-in replacement to the deprecated class or function (callable).
- Return type
visualization
The visualization module contains tools for real-time visualization as well as utilities to help in plotting.
instrument_monitor
Module containing the pyqtgraph based plotting monitor.
- class InstrumentMonitor(name, window_size=(600, 600), remote=True, update_interval=5)[source]
Creates a pyqtgraph widget that displays the instrument monitor window.
Example
from quantify_core.measurement import MeasurementControl
from quantify_core.visualization import InstrumentMonitor

meas_ctrl = MeasurementControl("meas_ctrl")
instrument_monitor = InstrumentMonitor("instrument_monitor")
# Set True if you want to query the instruments about each parameter
# before updating the window. Can be slow due to communication overhead.
instrument_monitor.update_snapshot(False)
- __init__(name, window_size=(600, 600), remote=True, update_interval=5)[source]
Initializes the pyqtgraph window.
- Parameters
name – name of the
InstrumentMonitor
object.window_size (
tuple
(default:(600, 600)
)) – The size of theInstrumentMonitor
window in px.remote (
bool
(default:True
)) – Switch to use a remote instance of the pyqtgraph class.update_interval (
int
(default:5
)) – Interval in seconds between two updates
- close()[source]
(Modified from Instrument class)
Irreversibly stop this instrument and free its resources.
Subclasses should override this if they have other specific resources to close.
- Return type
- create_widget(window_size=(1000, 600))[source]
Saves an instance of the
quantify_core.visualization.ins_mon_widget.qc_snapshot_widget.QcSnapshotWidget
class during startup. Creates thesnapshot
tree to display within the remote widget window.- Parameters
window_size (
tuple
(default:(1000, 600)
)) – The size of theInstrumentMonitor
window in px.
- update_interval = Parameter( get_cmd=self._get_update_interval, set_cmd=self._set_update_interval, unit="s", initial_value=update_interval, vals=vals.Numbers(min_value=0.001), name="update_interval", instrument=self, )
Only update the window if this amount of time has passed since the last update.
- update_snapshot = ManualParameter( initial_value=False, vals=vals.Bool(), name="update_snapshot", instrument=self, )
Set to True in order to query the instruments about each parameter before updating the window. Can be slow due to communication overhead.
pyqt_plotmon
Module containing the pyqtgraph based plotting monitor.
- class PlotMonitor_pyqt(name)[source]
Pyqtgraph based plot monitor instrument.
A plot monitor is intended to provide a real-time visualization of a dataset.
Interactions with this virtual instrument are virtually instantaneous. All the heavier computations and plotting happen in a separate QtProcess.
- __init__(name)[source]
Creates an instance of the plot monitor.
- Parameters
name (
str
) – Name of this instrument instance
- close()[source]
(Modified from Instrument class)
Irreversibly stop this instrument and free its resources.
Subclasses should override this if they have other specific resources to close.
- Return type
- create_plot_monitor()[source]
Creates the PyQtGraph plotting monitors. Can also be used to recreate these when plotting has crashed.
- tuids_append(tuid=None)[source]
Appends a tuid to
tuids
and also discards older datasets according totuids_max_num
The corresponding data will be plotted in the main window with blue circles.
NB: do not call this before the corresponding dataset file has been created and filled with data.
- update(tuid=None)[source]
Updates the curves/heatmaps of a specific dataset.
If the dataset is not specified the latest dataset in
tuids
is used.If
tuids
is empty andtuid
is provided thentuids_append(tuid)
will be called. NB: this is intended mainly for MC to avoid issues when the file was not yet created or is empty.
- main_QtPlot = QtPlotObjForJupyter(self._remote_plotmon, "main_QtPlot")
Retrieves the image of the main window when used as the final statement in a cell of a Jupyter-like notebook.
- secondary_QtPlot = QtPlotObjForJupyter( self._remote_plotmon, "secondary_QtPlot" )
Retrieves the image of the secondary window when used as the final statement in a cell of a Jupyter-like notebook.
- tuids = Parameter( initial_cache_value=[], vals=vals.Lists(elt_validator=vals.Strings()), get_cmd=self._get_tuids, set_cmd=self._set_tuids, name="tuids", instrument=self, )
The tuids of the auto-accumulated previous datasets when specified through
tuids_append
. Can be set to a list['tuid_one', 'tuid_two', ...]
. Can be reset by setting to[]
. See alsotuids_extra
.
- tuids_extra = Parameter( initial_cache_value=[], vals=vals.Lists(elt_validator=vals.Strings()), set_cmd=self._set_tuids_extra, get_cmd=self._get_tuids_extra, name="tuids_extra", instrument=self, )
Extra tuids whose datasets are never affected by
tuids_append
ortuids_max_num
. As opposed to thetuids
, these ones never vanish. Can be reset by setting to[]
. Intended to perform realtime measurements and have a live comparison with previously measured datasets.
- tuids_max_num = Parameter( vals=vals.Ints(min_value=1, max_value=100), set_cmd=self._set_tuids_max_num, get_cmd=self._get_tuids_max_num, initial_cache_value=3, name="tuids_max_num", instrument=self, )
The maximum number of auto-accumulated datasets in
tuids
. Older datasets are discarded whentuids_append
is called [directly or fromupdate()
].
color_utilities
Module containing utilities for color manipulation
- set_hlsa(color, h=None, l=None, s=None, a=None, to_hex=False)[source]
Accepts a matplotlib color specification and returns an RGB color with the specified HLS values plus an optional alpha.
- Return type
tuple
Example
In this example we use this function to create a custom colormap using several base colors for which we adjust the saturation and transparency (alpha, only visible when exporting the image).
import colorsys

import matplotlib.colors as mplc
import matplotlib.pyplot as plt
import numpy as np

from quantify_core.visualization.color_utilities import set_hlsa

color_cycle = ["#1f77b4", "#ff7f0e", "#2ca02c", "#d62728"]
all_colors = []
for col in color_cycle:
    hls = colorsys.rgb_to_hls(*mplc.to_rgb(mplc.to_rgb(col)))
    sat_vals = (np.linspace(0.0, 1.0, 20) ** 2) * hls[2]
    alpha_vals = np.linspace(0.4, 1.0, 20)
    colors = [
        list(set_hlsa(col, s=s)) for s, a in zip(sat_vals, alpha_vals)
    ]
    all_colors += colors

cmap = mplc.ListedColormap(all_colors)

np.random.seed(19680801)
data = np.random.randn(30, 30)

fig, ax = plt.subplots(1, 1, figsize=(6, 3), constrained_layout=True)
psm = ax.pcolormesh(data, cmap=cmap, rasterized=True, vmin=-4, vmax=4)
fig.colorbar(psm, ax=ax)
plt.show()
mpl_plotting
Module containing matplotlib and xarray plotting utilities.
Naming convention: plotting functions that require Xarray object(s) as inputs are named
plot_xr_...
.
- flex_colormesh_plot_vs_xy(xvals, yvals, zvals, ax=None, normalize=False, log=False, cmap='viridis', vlim=(None, None), transpose=False)[source]
Add a rectangular block to a color plot using
pcolormesh()
.- Parameters
xvals (
ndarray
) – Length N array corresponding to settable x0.yvals (
ndarray
) – Length M array corresponding to settable x1.zvals (
ndarray
) – M*N array corresponding to gettable yi.ax (
Optional
[Axes
] (default:None
)) – Axis to which to add the colormesh.normalize (
bool
(default:False
)) – IfTrue
, normalizes each row of data.log (
bool
(default:False
)) – ifTrue
, uses a logarithmic colorscale.cmap (
str
(default:'viridis'
)) – Colormap to use. See matplotlib docs for choosing an appropriate colormap.vlim (
list
(default:(None, None)
)) – Limits of the z-axis.transpose (
bool
(default:False
)) – IfTrue
transposes the figure.
- Return type
- Returns
The created matplotlib QuadMesh.
Warning
The grid orientation for the zvals is the same as is used in
pcolormesh()
. Note that the column index corresponds to the x-coordinate, and the row index corresponds to y. This can be counter-intuitive: zvals(y_idx, x_idx), and can be inconsistent with some arrays of zvals (such as a 2D histogram from numpy).
- get_unit_from_attrs(data_array, str_format=' [{}]')[source]
Extracts and formats the unit/units from an
xarray.DataArray
attribute.- Parameters
- Return type
- Returns
str_format
string formatted with thedata_array.unit
ordata_array.units
, with that order of precedence. Empty string is returned if none of these arguments are present.
- plot_2d_grid(x, y, z, xlabel, xunit, ylabel, yunit, zlabel, zunit, ax, cax=None, add_cbar=True, title=None, normalize=False, log=False, cmap='viridis', vlim=(None, None), transpose=False)[source]
Creates a heatmap of x,y,z data that was acquired on a grid. Expects three “columns” of data of equal length.
- Parameters
x – Length N array corresponding to x values.
y – Length N array corresponding to y values.
z – Length N array corresponding to gettable z values.
xlabel (
str
) – x label to add to the heatmap.ylabel (
str
) – y label to add to the heatmap.xunit (
str
) – x unit used in unit aware axis labels.yunit (
str
) – y unit used in unit aware axis labels.zlabel (
str
) – Label used for the colorbar.ax (
Axes
) – Axis to which to add the colormesh.cax (
Optional
[Axes
] (default:None
)) – Axis on which to add the colorbar, if set toNone
, will create a new axis.add_cbar (
bool
(default:True
)) – ifTrue
, adds a colorbar.title (
Optional
[str
] (default:None
)) – Text to add as title to the axis.normalize (
bool
(default:False
)) – ifTrue
, normalizes each row of data.log (
bool
(default:False
)) – ifTrue
, uses a logarithmic colorscalecmap (
str
(default:'viridis'
)) –The colormap to use. See matplotlib docs for choosing an appropriate colormap.
vlim (
list
(default:(None, None)
)) – limits of the z-axis.transpose (
bool
(default:False
)) – ifTrue
transposes the figure.
- Return type
- Returns
The new matplotlib QuadMesh and Colorbar.
- plot_complex_points(points, colors=None, labels=None, markers=None, legend=True, ax=None, **kwargs)[source]
Plots complex points with (by default) different colors and markers on the imaginary plane using
matplotlib.axes.Axes.plot()
.Intended for a small number of points.
Example
from quantify_core.utilities.examples_support import plot_centroids
_ = plot_centroids([1 + 1j, -1.5 - 2j])
- Parameters
ax (
Optional
[Axes
] (default:None
)) – A matplotlib axis to plot on.colors (
Optional
[list
] (default:None
)) – Colors to use for each point.labels (
Optional
[list
] (default:None
)) – Labels to use for each point. Defaults tof"|{i}>"
markers (
Optional
[list
] (default:None
)) – Markers to use for each point.**kwargs – Keyword arguments passed to the
plot()
.
- Return type
- plot_fit(ax, fit_res, plot_init=True, plot_numpoints=1000, range_casting='real', fit_kwargs=None, init_kwargs=None)[source]
Plot a fit of an lmfit model with a real domain.
- Parameters
ax – axis on which to plot the fit.
fit_res – an lmfit fit results object.
plot_init (
bool
(default:True
)) – if True, plot the initial guess of the fit.plot_numpoints (
int
(default:1000
)) – the number of points used on which to evaluate the fit.range_casting (
Literal
['abs'
,'angle'
,'real'
,'imag'
] (default:'real'
)) – how to plot fit functions that have a complex range. Casting of values happens usingabsolute
,angle
,real
andimag
. Angle is in degrees.fit_kwargs (
Optional
[dict
] (default:None
)) – Matplotlib pyplot formatting and label keyword arguments for the fit plot. default value is {“color”: “C3”, “label”: “Fit”}
init_kwargs (
Optional
[dict
] (default:None
)) – Matplotlib pyplot formatting and label keyword arguments for the init plot. default value is {“color”: “grey”, “linestyle”: “–”, “label”: “Guess”}
- Return type
- Returns
list of matplotlib pyplot Line2D objects
- plot_fit_complex_plane(ax, fit_res, plot_init=True, plot_numpoints=1000)[source]
Plot a fit of an lmfit model with a real domain in the complex plane.
- Return type
- plot_xr_complex(var, marker_scatter='o', label_real='Real', label_imag='Imag', cmap='viridis', c=None, kwargs_line=None, kwargs_scatter=None, title='{} [{}]; shape = {}', legend=True, ax=None)[source]
Plots the real and imaginary parts of complex data. Points are colored by default according to their order in the array.
- Parameters
var (
DataArray
) – 1D array of complex data.marker_scatter (
str
(default:'o'
)) – Marker used for the scatter plot.label_real (
str
(default:'Real'
)) – Label for legend.label_imag (
str
(default:'Imag'
)) – Label for legend.cmap (
str
(default:'viridis'
)) – The colormap to use for coloring the points.c (
Optional
[ndarray
] (default:None
)) – Color of the points. Defaults to an array of integers.kwargs_line (
Optional
[dict
] (default:None
)) – Keyword arguments passed tomatplotlib.axes.Axes.plot()
.kwargs_scatter (
Optional
[dict
] (default:None
)) – Keyword arguments passed tomatplotlib.axes.Axes.scatter()
.title (
str
(default:'{} [{}]; shape = {}'
)) – Axes title. By default gets formatted withvar.long_name
,var.name
and var.shape.ax (
Optional
[object
] (default:None
)) – The matplotlib axes. IfNone
a new axes (and figure) is created.
- Return type
- plot_xr_complex_on_plane(var, marker='o', label='Data on imaginary plane', cmap='viridis', c=None, xlabel='Real{}{}{}', ylabel='Imag{}{}{}', legend=True, ax=None, **kwargs)[source]
Plots complex data on the imaginary plane. Points are colored by default according to their order in the array.
- Parameters
var (
DataArray
) – 1D array of complex data.marker (
str
(default:'o'
)) – Marker used for the scatter plot.label (
str
(default:'Data on imaginary plane'
)) – Data label for the legend.cmap (
str
(default:'viridis'
)) – The colormap to use for coloring the points.c (
Optional
[ndarray
] (default:None
)) – Color of the points. Defaults to an array of integers.xlabel (
str
(default:'Real{}{}{}'
)) – Label of the x axis.ylabel (
str
(default:'Imag{}{}{}'
)) – Label of the y axis.ax (
Optional
[object
] (default:None
)) – The matplotlib axes. IfNone
a new axes (and figure) is created.
- Return type
- set_cyclic_colormap(image_or_collection, shifted=False, unit='deg', clim=None)[source]
Sets a cyclic colormap on a matplotlib 2D color plot if cyclic units are detected.
Example
import matplotlib.pyplot as plt
import numpy as np
import xarray as xr

from quantify_core.visualization.mpl_plotting import set_cyclic_colormap

zvals = xr.DataArray(np.random.rand(6, 10) * 360)
zvals.attrs["units"] = "deg"
zvals.plot()

fig, ax = plt.subplots(1, 1)
color_plot = zvals.plot(ax=ax)
set_cyclic_colormap(color_plot)

zvals_shifted = zvals - 180

fig, ax = plt.subplots(1, 1)
color_plot = zvals_shifted.plot(ax=ax)
ax.set_title("Shifted cyclic colormap")
set_cyclic_colormap(color_plot, shifted=zvals_shifted.min() < 0)

fig, ax = plt.subplots(1, 1)
color_plot = (zvals / 2).plot(ax=ax)
ax.set_title("Overwrite clim")
set_cyclic_colormap(color_plot, clim=(0, 180), unit="deg")

fig, ax = plt.subplots(1, 1)
zvals_rad = zvals / 180 * np.pi
zvals_rad.attrs["units"] = "rad"
color_plot = zvals_rad.plot(ax=ax)
ax.set_title("Radians")
set_cyclic_colormap(color_plot, unit=zvals_rad.units)
- Parameters
image_or_collection (
Union
[AxesImage
,QuadMesh
,Collection
]) – A matplotlib object returned by either one ofpcolor()
,pcolormesh()
,imshow()
ormatshow()
.shifted (
bool
(default:False
)) – Chooses between"twilight_shifted"
/"twilight"
colormap and the colormap range.unit (
Literal
['deg'
,'rad'
] (default:'deg'
)) – Used to fix the colormap range.clim (
Optional
[tuple
] (default:None
)) – The colormap limit.
- Return type
plot_interpolation
Plot interpolations.
- interpolate_heatmap(x, y, z, n=None, interp_method='linear')[source]
The output of this method can directly be used for plt.imshow(z_grid, extent=extent, aspect='auto') where the extent is determined by the min and max of the x_grid and y_grid.
The output can also be used as input for ax.pcolormesh(x, y, Z,**kw)
- Parameters
x (
numpy.ndarray
) – x data pointsy (
numpy.ndarray
) – y data pointsz (
numpy.ndarray
) – z data pointsn (
Optional
[int
] (default:None
)) – number of points for each dimension on the interpolated grid if set to None will auto determine amount of points neededinterp_method (
Literal
['linear'
,'nearest'
,'deg'
] (default:'linear'
)) – determines what interpolation method is used.
- Returns
x_grid (
numpy.ndarray
) – N*1 array of x-values of the interpolated gridy_grid (
numpy.ndarray
) – N*1 array of y-values of the interpolated gridz_grid (
numpy.ndarray
) – N*N array of z-values that form a grid.
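A short sketch following the description above: interpolate scattered (x, y, z) points onto a grid and display them with plt.imshow; the module path quantify_core.visualization.plot_interpolation is assumed from this section.

import matplotlib.pyplot as plt
import numpy as np

from quantify_core.visualization.plot_interpolation import interpolate_heatmap

rng = np.random.default_rng(seed=1)
x, y = rng.uniform(0, 1, 200), rng.uniform(0, 1, 200)
z = np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)

x_grid, y_grid, z_grid = interpolate_heatmap(x, y, z, interp_method="linear")
extent = (x_grid.min(), x_grid.max(), y_grid.min(), y_grid.max())
plt.imshow(z_grid, extent=extent, aspect="auto")
plt.colorbar()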
SI Utilities
Utilities for managing SI units with plotting systems.
- SI_prefix_and_scale_factor(val, unit=None)[source]
Takes in a value and unit, returns a scale factor and scaled unit. It returns a scale factor to convert the input value to a value in the range [1.0, 1000.0), plus the corresponding scaled SI unit (e.g. ‘mT’, ‘kV’), deduced from the input unit, to represent the input value in those scaled units.
The scaling is only applied if the unit is an unscaled or scaled unit present in the variable SI_UNITS.
If the unit is None, no scaling is done. If the unit is “SI_PREFIX_ONLY”, the value is scaled and an SI prefix is applied without a base unit.
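A short usage sketch, assuming the module path quantify_core.visualization.SI_utilities and the return order (scale_factor, scaled_unit).

from quantify_core.visualization.SI_utilities import SI_prefix_and_scale_factor

scale, scaled_unit = SI_prefix_and_scale_factor(4.2e-7, unit="s")
print(scale * 4.2e-7, scaled_unit)  # expected to print roughly: 420.0 ns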
- SI_val_to_msg_str(val, unit=None, return_type=<class 'str'>)[source]
Takes in a value with optional unit and returns a string tuple consisting of (value_str, unit) where the value and unit are rescaled according to SI prefixes, IF the unit is an SI unit (according to the comprehensive list of SI units in this file ;).
The value_str is of the type specified in return_type (str by default).
- adjust_axeslabels_SI(ax)[source]
Auto adjust the labels of a plot generated by xarray to SI-unit aware labels.
- Return type
- format_value_string(par_name, parameter, end_char='', unit=None)[source]
Format an lmfit parameter or uncertainties ufloat to a string of value with uncertainty.
If there is no stderr, use 5 significant figures. If there is a standard error use a precision one order of magnitude more precise than the size of the error and display the stderr itself to two significant figures in standard index notation in the same units as the value.
- Parameters
par_name (
str
) – the name of the parameter to use in the stringparameter (
lmfit.parameter.Parameter
,) –uncertainties.core.Variable
or float. AParameter
object or an object e.g., returned byuncertainties.ufloat()
. The value and stderr of this parameter will be used. If a float is given, the stderr is taken to be None.end_char (default:
''
) – A character that will be put at the end of the line.unit (default:
None
) – a unit. If this is an SI unit it will be used in automatically determining a prefix for the unit and rescaling accordingly.
- Return type
- Returns
The parameter and its error formatted as a string
- set_cbarlabel(cbar, label, unit=None, **kw)[source]
Add a unit aware z-label to a colorbar object
- Parameters
cbar – colorbar object to set label on
label – the desired label
unit (default:
None
) – the unit**kw – keyword argument to be passed to cbar.set_label
- set_xlabel(label, unit=None, axis=None, **kw)[source]
Add a unit aware x-label to an axis object.
- Parameters
label – the desired label
unit (default:
None
) – the unitaxis (default:
None
) – matplotlib axis object to set label on**kw – keyword argument to be passed to matplotlib.set_xlabel
- set_ylabel(label, unit=None, axis=None, **kw)[source]
Add a unit aware y-label to an axis object.
- Parameters
label – the desired label
unit (default:
None
) – the unitaxis (default:
None
) – matplotlib axis object to set label on**kw – keyword argument to be passed to matplotlib.set_ylabel
- value_precision(val, stderr=None)[source]
Calculate the precision to which a parameter is to be specified, according to its standard error. Returns the appropriate format specifier string.
If there is no stderr, use 5 significant figures. If there is a standard error use a precision one order of magnitude more precise than the size of the error and display the stderr itself to two significant figures in standard index notation in the same units as the value.
- Parameters
- Return type
- Returns
val_format_specifier (str) – python format specifier which sets the precision of the parameter value
err_format_specifier (str) – python format specifier which set the precision of the error