qibocal.protocols package#
Subpackages#
- qibocal.protocols.allxy package
- Submodules
- qibocal.protocols.allxy.allxy module
- qibocal.protocols.allxy.allxy_resonator_depletion_tuning module
AllXYResonatorParameters
AllXYResonatorParameters.delay_start
AllXYResonatorParameters.delay_end
AllXYResonatorParameters.delay_step
AllXYResonatorParameters.readout_delay
AllXYResonatorParameters.unrolling
AllXYResonatorParameters.beta_param
AllXYResonatorParameters.hardware_average
AllXYResonatorParameters.nshots
AllXYResonatorParameters.relaxation_time
AllXYResonatorResults
AllXYResonatorData
AllXYResonatorData._to_json()
AllXYResonatorData._to_npz()
AllXYResonatorData.delay_param
AllXYResonatorData.load_data()
AllXYResonatorData.load_params()
AllXYResonatorData.pairs
AllXYResonatorData.params
AllXYResonatorData.qubits
AllXYResonatorData.register_qubit()
AllXYResonatorData.save()
AllXYResonatorData.data
AllXYResonatorData.delay_params
_acquisition()
_fit()
_plot()
allxy_resonator_depletion_tuning
- qibocal.protocols.classification package
- Submodules
- qibocal.protocols.classification.classification module
single_shot_classification
SingleShotClassificationData
SingleShotClassificationData.nshots
SingleShotClassificationData.savedir
SingleShotClassificationData.qubit_frequencies
SingleShotClassificationData.data
SingleShotClassificationData.classifiers_list
SingleShotClassificationData.state_zero()
SingleShotClassificationData.state_one()
SingleShotClassificationData._to_json()
SingleShotClassificationData._to_npz()
SingleShotClassificationData.load_data()
SingleShotClassificationData.load_params()
SingleShotClassificationData.pairs
SingleShotClassificationData.params
SingleShotClassificationData.qubits
SingleShotClassificationData.register_qubit()
SingleShotClassificationData.save()
SingleShotClassificationParameters
ClassificationType
- qibocal.protocols.classification.qutrit_classification module
- qibocal.protocols.coherence package
- Submodules
- qibocal.protocols.coherence.cpmg module
- qibocal.protocols.coherence.spin_echo module
- qibocal.protocols.coherence.spin_echo_signal module
SpinEchoSignalParameters
SpinEchoSignalParameters.delay_between_pulses_start
SpinEchoSignalParameters.delay_between_pulses_end
SpinEchoSignalParameters.delay_between_pulses_step
SpinEchoSignalParameters.single_shot
SpinEchoSignalParameters.hardware_average
SpinEchoSignalParameters.nshots
SpinEchoSignalParameters.relaxation_time
SpinEchoSignalResults
spin_echo_signal
update_spin_echo()
- qibocal.protocols.coherence.t1 module
- qibocal.protocols.coherence.t1_signal module
- qibocal.protocols.coherence.t2 module
- qibocal.protocols.coherence.t2_signal module
- qibocal.protocols.coherence.utils module
- qibocal.protocols.coherence.zeno module
- qibocal.protocols.dispersive_shift package
- Submodules
- qibocal.protocols.dispersive_shift.dispersive_shift module
dispersive_shift
DispersiveShiftData
DispersiveShiftData._to_json()
DispersiveShiftData._to_npz()
DispersiveShiftData.load_data()
DispersiveShiftData.load_params()
DispersiveShiftData.pairs
DispersiveShiftData.params
DispersiveShiftData.qubits
DispersiveShiftData.register_qubit()
DispersiveShiftData.save()
DispersiveShiftData.resonator_type
DispersiveShiftData.data
DispersiveShiftParameters
- qibocal.protocols.dispersive_shift.dispersive_shift_qutrit module
- qibocal.protocols.drag package
- qibocal.protocols.flux_dependence package
- Submodules
- qibocal.protocols.flux_dependence.cryoscope module
CryoscopeData
CryoscopeData._to_json()
CryoscopeData._to_npz()
CryoscopeData.load_data()
CryoscopeData.load_params()
CryoscopeData.pairs
CryoscopeData.params
CryoscopeData.qubits
CryoscopeData.register_qubit()
CryoscopeData.save()
CryoscopeData.flux_pulse_amplitude
CryoscopeData.fir
CryoscopeData.flux_coefficients
CryoscopeData.filters
CryoscopeData.data
CryoscopeData.has_filters()
CryoscopeResults
CryoscopeResults.fitted_parameters
CryoscopeResults.detuning
CryoscopeResults.amplitude
CryoscopeResults.step_response
CryoscopeResults.exp_amplitude
CryoscopeResults.tau
CryoscopeResults.feedforward_taps
CryoscopeResults.feedforward_taps_iir
CryoscopeResults.feedback_taps
CryoscopeResults._to_json()
CryoscopeResults._to_npz()
CryoscopeResults.load_data()
CryoscopeResults.load_params()
CryoscopeResults.params
CryoscopeResults.save()
- qibocal.protocols.flux_dependence.flux_amplitude_frequency module
- qibocal.protocols.flux_dependence.flux_gate module
- qibocal.protocols.flux_dependence.qubit_crosstalk module
- qibocal.protocols.flux_dependence.qubit_flux_dependence module
QubitFluxData
QubitFluxData.resonator_type
QubitFluxData._to_json()
QubitFluxData._to_npz()
QubitFluxData.load_data()
QubitFluxData.load_params()
QubitFluxData.pairs
QubitFluxData.params
QubitFluxData.qubits
QubitFluxData.save()
QubitFluxData.charging_energy
QubitFluxData.qubit_frequency
QubitFluxData.data
QubitFluxData.register_qubit()
QubitFluxParameters
QubitFluxResults
QubitFluxType
qubit_flux
- qibocal.protocols.flux_dependence.qubit_vz module
- qibocal.protocols.flux_dependence.resonator_flux_dependence module
- qibocal.protocols.flux_dependence.utils module
- qibocal.protocols.qubit_spectroscopies package
- Submodules
- qibocal.protocols.qubit_spectroscopies.qubit_power_spectroscopy module
- qibocal.protocols.qubit_spectroscopies.qubit_spectroscopy module
qubit_spectroscopy
QubitSpectroscopyParameters
QubitSpectroscopyResults
QubitSpectroscopyResults.frequency
QubitSpectroscopyResults.amplitude
QubitSpectroscopyResults.fitted_parameters
QubitSpectroscopyResults._to_json()
QubitSpectroscopyResults._to_npz()
QubitSpectroscopyResults.load_data()
QubitSpectroscopyResults.load_params()
QubitSpectroscopyResults.params
QubitSpectroscopyResults.save()
QubitSpectroscopyResults.chi2_reduced
QubitSpectroscopyResults.error_fit_pars
QubitSpectroscopyData
QubitSpectroscopyData._to_json()
QubitSpectroscopyData._to_npz()
QubitSpectroscopyData.fit_function
QubitSpectroscopyData.load_data()
QubitSpectroscopyData.load_params()
QubitSpectroscopyData.pairs
QubitSpectroscopyData.params
QubitSpectroscopyData.phase_sign
QubitSpectroscopyData.power_level
QubitSpectroscopyData.qubits
QubitSpectroscopyData.register_qubit()
QubitSpectroscopyData.save()
QubitSpectroscopyData.resonator_type
QubitSpectroscopyData.amplitudes
QubitSpectroscopyData.data
_fit()
- qibocal.protocols.qubit_spectroscopies.qubit_spectroscopy_ef module
- qibocal.protocols.rabi package
- Submodules
- qibocal.protocols.rabi.amplitude module
- qibocal.protocols.rabi.amplitude_frequency module
- qibocal.protocols.rabi.amplitude_frequency_signal module
RabiAmplitudeSignalResults
RabiAmplitudeSignalResults.amplitude
RabiAmplitudeSignalResults.length
RabiAmplitudeSignalResults.fitted_parameters
RabiAmplitudeSignalResults.rx90
RabiAmplitudeSignalResults._to_json()
RabiAmplitudeSignalResults._to_npz()
RabiAmplitudeSignalResults.load_data()
RabiAmplitudeSignalResults.load_params()
RabiAmplitudeSignalResults.params
RabiAmplitudeSignalResults.save()
RabiAmplitudeFrequencySignalParameters
RabiAmplitudeFrequencySignalParameters.min_amp
RabiAmplitudeFrequencySignalParameters.max_amp
RabiAmplitudeFrequencySignalParameters.step_amp
RabiAmplitudeFrequencySignalParameters.min_freq
RabiAmplitudeFrequencySignalParameters.max_freq
RabiAmplitudeFrequencySignalParameters.step_freq
RabiAmplitudeFrequencySignalParameters.rx90
RabiAmplitudeFrequencySignalParameters.pulse_length
RabiAmplitudeFrequencySignalParameters.hardware_average
RabiAmplitudeFrequencySignalParameters.nshots
RabiAmplitudeFrequencySignalParameters.relaxation_time
RabiAmplitudeFreqSignalData
RabiAmplitudeFreqSignalData._to_json()
RabiAmplitudeFreqSignalData._to_npz()
RabiAmplitudeFreqSignalData.load_data()
RabiAmplitudeFreqSignalData.load_params()
RabiAmplitudeFreqSignalData.pairs
RabiAmplitudeFreqSignalData.params
RabiAmplitudeFreqSignalData.qubits
RabiAmplitudeFreqSignalData.save()
RabiAmplitudeFreqSignalData.rx90
RabiAmplitudeFreqSignalData.durations
RabiAmplitudeFreqSignalData.data
RabiAmplitudeFreqSignalData.register_qubit()
RabiAmplitudeFreqSignalData.amplitudes()
RabiAmplitudeFreqSignalData.frequencies()
_update()
rabi_amplitude_frequency_signal
- qibocal.protocols.rabi.amplitude_signal module
RabiAmplitudeSignalResults
RabiAmplitudeSignalResults.amplitude
RabiAmplitudeSignalResults.length
RabiAmplitudeSignalResults.fitted_parameters
RabiAmplitudeSignalResults.rx90
RabiAmplitudeSignalResults._to_json()
RabiAmplitudeSignalResults._to_npz()
RabiAmplitudeSignalResults.load_data()
RabiAmplitudeSignalResults.load_params()
RabiAmplitudeSignalResults.params
RabiAmplitudeSignalResults.save()
RabiAmplitudeSignalParameters
RabiAmplitudeSignalParameters.min_amp
RabiAmplitudeSignalParameters.max_amp
RabiAmplitudeSignalParameters.step_amp
RabiAmplitudeSignalParameters.pulse_length
RabiAmplitudeSignalParameters.rx90
RabiAmplitudeSignalParameters.hardware_average
RabiAmplitudeSignalParameters.nshots
RabiAmplitudeSignalParameters.relaxation_time
RabiAmplitudeSignalData
RabiAmplitudeSignalData._to_json()
RabiAmplitudeSignalData._to_npz()
RabiAmplitudeSignalData.load_data()
RabiAmplitudeSignalData.load_params()
RabiAmplitudeSignalData.pairs
RabiAmplitudeSignalData.params
RabiAmplitudeSignalData.qubits
RabiAmplitudeSignalData.register_qubit()
RabiAmplitudeSignalData.save()
RabiAmplitudeSignalData.rx90
RabiAmplitudeSignalData.durations
RabiAmplitudeSignalData.data
rabi_amplitude_signal
_fit()
RabiAmpSignalType
- qibocal.protocols.rabi.ef module
- qibocal.protocols.rabi.length module
- qibocal.protocols.rabi.length_frequency module
- qibocal.protocols.rabi.length_frequency_signal module
rabi_length_frequency_signal
RabiLengthFrequencySignalParameters
RabiLengthFrequencySignalParameters.pulse_duration_start
RabiLengthFrequencySignalParameters.pulse_duration_end
RabiLengthFrequencySignalParameters.pulse_duration_step
RabiLengthFrequencySignalParameters.min_freq
RabiLengthFrequencySignalParameters.max_freq
RabiLengthFrequencySignalParameters.step_freq
RabiLengthFrequencySignalParameters.pulse_amplitude
RabiLengthFrequencySignalParameters.rx90
RabiLengthFrequencySignalParameters.interpolated_sweeper
RabiLengthFrequencySignalParameters.hardware_average
RabiLengthFrequencySignalParameters.nshots
RabiLengthFrequencySignalParameters.relaxation_time
RabiLengthFreqSignalData
RabiLengthFreqSignalData._to_json()
RabiLengthFreqSignalData._to_npz()
RabiLengthFreqSignalData.load_data()
RabiLengthFreqSignalData.load_params()
RabiLengthFreqSignalData.pairs
RabiLengthFreqSignalData.params
RabiLengthFreqSignalData.qubits
RabiLengthFreqSignalData.save()
RabiLengthFreqSignalData.rx90
RabiLengthFreqSignalData.amplitudes
RabiLengthFreqSignalData.data
RabiLengthFreqSignalData.register_qubit()
RabiLengthFreqSignalData.durations()
RabiLengthFreqSignalData.frequencies()
_update()
RabiLengthFrequencySignalResults
RabiLengthFrequencySignalResults.rx90
RabiLengthFrequencySignalResults.frequency
RabiLengthFrequencySignalResults._to_json()
RabiLengthFrequencySignalResults._to_npz()
RabiLengthFrequencySignalResults.load_data()
RabiLengthFrequencySignalResults.load_params()
RabiLengthFrequencySignalResults.params
RabiLengthFrequencySignalResults.save()
RabiLengthFrequencySignalResults.length
RabiLengthFrequencySignalResults.amplitude
RabiLengthFrequencySignalResults.fitted_parameters
- qibocal.protocols.rabi.length_signal module
rabi_length_signal
RabiLengthSignalResults
RabiLengthSignalResults.length
RabiLengthSignalResults.amplitude
RabiLengthSignalResults.fitted_parameters
RabiLengthSignalResults.rx90
RabiLengthSignalResults._to_json()
RabiLengthSignalResults._to_npz()
RabiLengthSignalResults.load_data()
RabiLengthSignalResults.load_params()
RabiLengthSignalResults.params
RabiLengthSignalResults.save()
- qibocal.protocols.rabi.utils module
- qibocal.protocols.ramsey package
- Submodules
- qibocal.protocols.ramsey.ramsey module
- qibocal.protocols.ramsey.ramsey_signal module
ramsey_signal
RamseySignalParameters
RamseySignalParameters.delay_between_pulses_start
RamseySignalParameters.delay_between_pulses_end
RamseySignalParameters.delay_between_pulses_step
RamseySignalParameters.detuning
RamseySignalParameters.unrolling
RamseySignalParameters.hardware_average
RamseySignalParameters.nshots
RamseySignalParameters.relaxation_time
RamseySignalData
RamseySignalData._to_json()
RamseySignalData._to_npz()
RamseySignalData.load_data()
RamseySignalData.load_params()
RamseySignalData.pairs
RamseySignalData.params
RamseySignalData.qubits
RamseySignalData.register_qubit()
RamseySignalData.save()
RamseySignalData.detuning
RamseySignalData.qubit_freqs
RamseySignalData.data
RamseySignalData.waits
_update()
RamseySignalResults
RamseySignalResults.detuning
RamseySignalResults.frequency
RamseySignalResults.t2
RamseySignalResults.delta_phys
RamseySignalResults.delta_fitting
RamseySignalResults.fitted_parameters
RamseySignalResults._to_json()
RamseySignalResults._to_npz()
RamseySignalResults.load_data()
RamseySignalResults.load_params()
RamseySignalResults.params
RamseySignalResults.save()
- qibocal.protocols.ramsey.ramsey_zz module
- qibocal.protocols.ramsey.utils module
- qibocal.protocols.randomized_benchmarking package
- Submodules
- qibocal.protocols.randomized_benchmarking.dict_utils module
- qibocal.protocols.randomized_benchmarking.filtered_rb module
- qibocal.protocols.randomized_benchmarking.fitting module
- qibocal.protocols.randomized_benchmarking.standard_rb module
StandardRBParameters
RBData
RBData.depths
RBData.uncertainties
RBData.seed
RBData.nshots
RBData.niter
RBData.data
RBData.circuits
RBData.npulses_per_clifford
RBData.extract_probabilities()
RBData._to_json()
RBData._to_npz()
RBData.load_data()
RBData.load_params()
RBData.pairs
RBData.params
RBData.qubits
RBData.register_qubit()
RBData.save()
_plot()
- qibocal.protocols.randomized_benchmarking.standard_rb_2q module
StandardRB2QParameters
StandardRB2QParameters.file
StandardRB2QParameters.file_inv
StandardRB2QParameters.hardware_average
StandardRB2QParameters.nshots
StandardRB2QParameters.seed
StandardRB2QParameters.uncertainties
StandardRB2QParameters.unrolling
StandardRB2QParameters.depths
StandardRB2QParameters.niter
StandardRB2QParameters.relaxation_time
- qibocal.protocols.randomized_benchmarking.standard_rb_2q_inter module
- qibocal.protocols.randomized_benchmarking.utils module
NPULSES_PER_CLIFFORD
RBType
random_clifford()
random_2q_clifford()
random_circuits()
number_to_str()
data_uncertainties()
RB_Generator
RBData
RBData.depths
RBData.uncertainties
RBData.seed
RBData.nshots
RBData.niter
RBData.data
RBData.circuits
RBData.npulses_per_clifford
RBData.extract_probabilities()
RBData._to_json()
RBData._to_npz()
RBData.load_data()
RBData.load_params()
RBData.pairs
RBData.params
RBData.qubits
RBData.register_qubit()
RBData.save()
RB2QData
RB2QData.npulses_per_clifford
RB2QData.extract_probabilities()
RB2QData._to_json()
RB2QData._to_npz()
RB2QData.load_data()
RB2QData.load_params()
RB2QData.pairs
RB2QData.params
RB2QData.qubits
RB2QData.register_qubit()
RB2QData.save()
RB2QData.depths
RB2QData.uncertainties
RB2QData.seed
RB2QData.nshots
RB2QData.niter
RB2QData.data
RB2QData.circuits
RB2QInterData
RB2QInterData.fidelity
RB2QInterData._to_json()
RB2QInterData._to_npz()
RB2QInterData.extract_probabilities()
RB2QInterData.load_data()
RB2QInterData.load_params()
RB2QInterData.npulses_per_clifford
RB2QInterData.pairs
RB2QInterData.params
RB2QInterData.qubits
RB2QInterData.register_qubit()
RB2QInterData.save()
RB2QInterData.depths
RB2QInterData.uncertainties
RB2QInterData.seed
RB2QInterData.nshots
RB2QInterData.niter
RB2QInterData.data
RB2QInterData.circuits
StandardRBResult
StandardRBResult.fidelity
StandardRBResult._to_json()
StandardRBResult._to_npz()
StandardRBResult.load_data()
StandardRBResult.load_params()
StandardRBResult.params
StandardRBResult.save()
StandardRBResult.pulse_fidelity
StandardRBResult.fit_parameters
StandardRBResult.fit_uncertainties
StandardRBResult.error_bars
setup()
get_circuits()
execute_circuits()
rb_acquisition()
twoq_rb_acquisition()
layer_circuit()
add_inverse_layer()
add_measurement_layer()
fit()
- qibocal.protocols.readout package
- qibocal.protocols.readout_optimization package
- qibocal.protocols.resonator_spectroscopies package
- Submodules
- qibocal.protocols.resonator_spectroscopies.resonator_punchout module
resonator_punchout
ResonatorPunchoutData
ResonatorPunchoutData._to_json()
ResonatorPunchoutData._to_npz()
ResonatorPunchoutData.load_data()
ResonatorPunchoutData.load_params()
ResonatorPunchoutData.pairs
ResonatorPunchoutData.params
ResonatorPunchoutData.qubits
ResonatorPunchoutData.save()
ResonatorPunchoutData.resonator_type
ResonatorPunchoutData.amplitudes
ResonatorPunchoutData.data
ResonatorPunchoutData.register_qubit()
- qibocal.protocols.resonator_spectroscopies.resonator_spectroscopy module
resonator_spectroscopy
ResonatorSpectroscopyData
ResonatorSpectroscopyData._to_json()
ResonatorSpectroscopyData._to_npz()
ResonatorSpectroscopyData.load_data()
ResonatorSpectroscopyData.load_params()
ResonatorSpectroscopyData.pairs
ResonatorSpectroscopyData.params
ResonatorSpectroscopyData.qubits
ResonatorSpectroscopyData.register_qubit()
ResonatorSpectroscopyData.save()
ResonatorSpectroscopyData.resonator_type
ResonatorSpectroscopyData.amplitudes
ResonatorSpectroscopyData.fit_function
ResonatorSpectroscopyData.phase_sign
ResonatorSpectroscopyData.data
ResonatorSpectroscopyData.power_level
ResSpecType
- qibocal.protocols.resonator_spectroscopies.resonator_utils module
- qibocal.protocols.signal_experiments package
- qibocal.protocols.tomographies package
- qibocal.protocols.two_qubit_interaction package
Submodules#
qibocal.protocols.flipping module#
- qibocal.protocols.flipping.flipping = Routine(acquisition=<function _acquisition>, fit=<function _fit>, report=<function _plot>, update=<function _update>, two_qubit_gates=False)#
Flipping Routine object.
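Every qibocal protocol is packaged as a `Routine` bundling an acquisition, a fit, a report, and an update stage. The sketch below is a hypothetical, heavily simplified stand-in for the real `Routine` class (the lambdas replace the actual `_acquisition`/`_fit`/`_plot`/`_update` functions), just to illustrate how the four stages compose:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Routine:
    """Simplified sketch of a protocol container (not the real class)."""
    acquisition: Callable
    fit: Callable
    report: Callable
    update: Callable
    two_qubit_gates: bool = False

# Toy stages standing in for the real _acquisition/_fit/_plot/_update.
flipping = Routine(
    acquisition=lambda params: {"flips": [0, 1, 2], "signal": [0.1, 0.4, 0.9]},
    fit=lambda data: {"slope": 0.4},
    report=lambda data, fit: f"slope={fit['slope']}",
    update=lambda fit, platform: None,
)

data = flipping.acquisition(None)
results = flipping.fit(data)
print(flipping.report(data, results))  # slope=0.4
```

The executor runs these stages in order, which is why each protocol module in this package exports an `_acquisition`, `_fit`, and `_plot` alongside the routine instance.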
qibocal.protocols.utils module#
- qibocal.protocols.utils.HBAR = 1.0545718176461565e-34#
Reduced Planck constant in J·s.
- qibocal.protocols.utils.CONFIDENCE_INTERVAL_FIRST_MASK = 99#
Confidence interval used to mask flux data.
- qibocal.protocols.utils.CONFIDENCE_INTERVAL_SECOND_MASK = 70#
Confidence interval used to clean outliers.
- qibocal.protocols.utils.DELAY_FIT_PERCENTAGE = 10#
Percentage of the first and last points used to fit the cable delay.
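The idea behind `DELAY_FIT_PERCENTAGE` can be sketched as follows: the cable delay shows up as a linear phase slope versus frequency, and only the outermost 10% of points on each side (far from the resonance) are used for the fit. This is an illustrative reimplementation under that assumption, not the library's exact code:

```python
import numpy as np

DELAY_FIT_PERCENTAGE = 10  # as in qibocal.protocols.utils

def fit_cable_delay(frequencies, phases, percentage=DELAY_FIT_PERCENTAGE):
    """Estimate cable delay from the linear phase slope, using only the
    first and last `percentage` percent of points (a sketch of the idea)."""
    n = int(len(frequencies) * percentage / 100)
    freq = np.concatenate([frequencies[:n], frequencies[-n:]])
    phase = np.concatenate([phases[:n], phases[-n:]])
    slope, _ = np.polyfit(freq, phase, 1)
    return slope / (2 * np.pi)  # delay in seconds if freq in Hz, phase in rad

freqs = np.linspace(7e9, 7.1e9, 101)
true_delay = 40e-9
phases = 2 * np.pi * freqs * true_delay
print(fit_cable_delay(freqs, phases))  # ≈ 4e-08
```

Restricting the fit to the edges avoids biasing the slope with the rapid phase winding near the resonator dip.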
- class qibocal.protocols.utils.PowerLevel(value)[source]#
Power regime for resonator spectroscopy.
- high = 'high'#
- low = 'low'#
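`PowerLevel` is a string-valued enum with just the two members above. A minimal mirror of it shows why the string base is convenient (members compare equal to plain strings, e.g. values coming from a YAML runcard):

```python
from enum import Enum

class PowerLevel(str, Enum):
    """Power regime for resonator spectroscopy (mirror of qibocal's enum)."""
    high = "high"
    low = "low"

# Round-trip from a plain string, as when parsing a runcard.
level = PowerLevel("low")
print(level is PowerLevel.low)   # True
print(level == "low")            # True
print(PowerLevel.high.value)     # high
```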
- qibocal.protocols.utils.readout_frequency(target: Union[int, str], platform: CalibrationPlatform, power_level: PowerLevel = PowerLevel.low, state=0) float [source]#
Returns readout frequency depending on power level.
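The dispatch on power level reflects punchout physics: at high readout power the resonator responds at its bare frequency, at low power at the qubit-dressed one. The following is a hypothetical sketch of that selection logic only — the `Resonator` class and field names are invented stand-ins for the platform's calibration data, not qibocal's API:

```python
from dataclasses import dataclass
from enum import Enum

class PowerLevel(str, Enum):
    high = "high"
    low = "low"

@dataclass
class Resonator:
    """Hypothetical stand-in for a platform's resonator calibration entry."""
    bare_frequency: float     # high-power (bare) regime
    dressed_frequency: float  # low-power (dressed) regime

def readout_frequency(resonator, power_level=PowerLevel.low):
    """Sketch of the dispatch: high power probes the bare resonator,
    low power the qubit-dressed one."""
    if power_level is PowerLevel.high:
        return resonator.bare_frequency
    return resonator.dressed_frequency

res = Resonator(bare_frequency=7.001e9, dressed_frequency=7.000e9)
print(readout_frequency(res, PowerLevel.high))  # 7001000000.0
```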
- class qibocal.protocols.utils.DcFilteredConfig(*, kind: Literal['dc-filter'] = 'dc-filter', offset: float, filter: list)[source]#
Bases: Config
Dummy configuration for a DC channel with filters, required by the cryoscope protocol.
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'forbid', 'frozen': True}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- classmethod model_construct(_fields_set: set[str] | None = None, **values: Any) Self #
Creates a new instance of the Model class with validated data.
Creates a new model setting __dict__ and __pydantic_fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed.
- !!! note
model_construct() generally respects the model_config.extra setting on the provided model. That is, if model_config.extra == ‘allow’, then all extra passed values are added to the model instance’s __dict__ and __pydantic_extra__ fields. If model_config.extra == ‘ignore’ (the default), then all extra passed values are ignored. Because no validation is performed with a call to model_construct(), having model_config.extra == ‘forbid’ does not result in an error if extra values are passed, but they will be ignored.
- Parameters:
_fields_set – A set of field names that were originally explicitly set during instantiation. If provided, this is directly used for the [model_fields_set][pydantic.BaseModel.model_fields_set] attribute. Otherwise, the field names from the values argument will be used.
values – Trusted or pre-validated data dictionary.
- Returns:
A new instance of the Model class with validated data.
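For illustration, a minimal sketch of model_construct (assuming Pydantic v2; the Point model is hypothetical). Note that, despite the docstring, no validation is performed on the passed values:

```python
from pydantic import BaseModel

class Point(BaseModel):
    x: int
    y: int = 0

# Bypasses validation: trusted data is assigned directly to __dict__.
p = Point.model_construct(x=1)
# Defaults are respected, and the explicitly passed fields populate
# model_fields_set.
assert p.x == 1 and p.y == 0
assert p.model_fields_set == {"x"}
```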
- model_copy(*, update: collections.abc.Mapping[str, Any] | None = None, deep: bool = False) Self #
- !!! abstract “Usage Documentation”
[model_copy](../concepts/serialization.md#model_copy)
Returns a copy of the model.
- !!! note
The underlying instance’s [__dict__][object.__dict__] attribute is copied. This might have unexpected side effects if you store anything in it, on top of the model fields (e.g. the value of [cached properties][functools.cached_property]).
- Parameters:
update – Values to change/add in the new model. Note: the data is not validated before creating the new model. You should trust this data.
deep – Set to True to make a deep copy of the model.
- Returns:
New model instance.
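A short sketch of model_copy with an update mapping (Pydantic v2; the Channel model is hypothetical). As the docstring warns, update values are not validated:

```python
from pydantic import BaseModel

class Channel(BaseModel):
    offset: float

ch = Channel(offset=0.1)
# Shallow copy with one field overridden; `update` data is trusted,
# so no validation runs here.
ch2 = ch.model_copy(update={"offset": 0.5})
```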
- model_dump(*, mode: Union[Literal['json', 'python'], str] = 'python', include: Optional[Union[set[int], set[str], Mapping[int, Union[set[int], set[str], Mapping[int, Union[IncEx, bool]], Mapping[str, Union[IncEx, bool]], bool]], Mapping[str, Union[set[int], set[str], Mapping[int, Union[IncEx, bool]], Mapping[str, Union[IncEx, bool]], bool]]]] = None, exclude: Optional[Union[set[int], set[str], Mapping[int, Union[set[int], set[str], Mapping[int, Union[IncEx, bool]], Mapping[str, Union[IncEx, bool]], bool]], Mapping[str, Union[set[int], set[str], Mapping[int, Union[IncEx, bool]], Mapping[str, Union[IncEx, bool]], bool]]]] = None, context: Any | None = None, by_alias: bool | None = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, round_trip: bool = False, warnings: Union[bool, Literal['none', 'warn', 'error']] = True, fallback: Optional[Callable[[Any], Any]] = None, serialize_as_any: bool = False) dict[str, Any] #
- !!! abstract “Usage Documentation”
[model_dump](../concepts/serialization.md#modelmodel_dump)
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
- Parameters:
mode – The mode in which to_python should run. If mode is ‘json’, the output will only contain JSON serializable types. If mode is ‘python’, the output may contain non-JSON-serializable Python objects.
include – A set of fields to include in the output.
exclude – A set of fields to exclude from the output.
context – Additional context to pass to the serializer.
by_alias – Whether to use the field’s alias in the dictionary key if defined.
exclude_unset – Whether to exclude fields that have not been explicitly set.
exclude_defaults – Whether to exclude fields that are set to their default value.
exclude_none – Whether to exclude fields that have a value of None.
round_trip – If True, dumped values should be valid as input for non-idempotent types such as Json[T].
warnings – How to handle serialization errors. False/”none” ignores them, True/”warn” logs errors, “error” raises a [PydanticSerializationError][pydantic_core.PydanticSerializationError].
fallback – A function to call when an unknown value is encountered. If not provided, a [PydanticSerializationError][pydantic_core.PydanticSerializationError] error is raised.
serialize_as_any – Whether to serialize fields with duck-typing serialization behavior.
- Returns:
A dictionary representation of the model.
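For example, exclude_unset distinguishes fields left at their defaults from fields assigned explicitly (Pydantic v2; the Pulse model is hypothetical):

```python
from pydantic import BaseModel

class Pulse(BaseModel):
    amplitude: float
    duration: int = 40

p = Pulse(amplitude=0.5)
full = p.model_dump()                      # all fields
sparse = p.model_dump(exclude_unset=True)  # only explicitly set fields
```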
- model_dump_json(*, indent: int | None = None, include: Optional[Union[set[int], set[str], Mapping[int, Union[set[int], set[str], Mapping[int, Union[IncEx, bool]], Mapping[str, Union[IncEx, bool]], bool]], Mapping[str, Union[set[int], set[str], Mapping[int, Union[IncEx, bool]], Mapping[str, Union[IncEx, bool]], bool]]]] = None, exclude: Optional[Union[set[int], set[str], Mapping[int, Union[set[int], set[str], Mapping[int, Union[IncEx, bool]], Mapping[str, Union[IncEx, bool]], bool]], Mapping[str, Union[set[int], set[str], Mapping[int, Union[IncEx, bool]], Mapping[str, Union[IncEx, bool]], bool]]]] = None, context: Any | None = None, by_alias: bool | None = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, round_trip: bool = False, warnings: Union[bool, Literal['none', 'warn', 'error']] = True, fallback: Optional[Callable[[Any], Any]] = None, serialize_as_any: bool = False) str #
- !!! abstract “Usage Documentation”
[model_dump_json](../concepts/serialization.md#modelmodel_dump_json)
Generates a JSON representation of the model using Pydantic’s to_json method.
- Parameters:
indent – Indentation to use in the JSON output. If None is passed, the output will be compact.
include – Field(s) to include in the JSON output.
exclude – Field(s) to exclude from the JSON output.
context – Additional context to pass to the serializer.
by_alias – Whether to serialize using field aliases.
exclude_unset – Whether to exclude fields that have not been explicitly set.
exclude_defaults – Whether to exclude fields that are set to their default value.
exclude_none – Whether to exclude fields that have a value of None.
round_trip – If True, dumped values should be valid as input for non-idempotent types such as Json[T].
warnings – How to handle serialization errors. False/”none” ignores them, True/”warn” logs errors, “error” raises a [PydanticSerializationError][pydantic_core.PydanticSerializationError].
fallback – A function to call when an unknown value is encountered. If not provided, a [PydanticSerializationError][pydantic_core.PydanticSerializationError] error is raised.
serialize_as_any – Whether to serialize fields with duck-typing serialization behavior.
- Returns:
A JSON string representation of the model.
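A minimal usage sketch (Pydantic v2; the Pulse model is hypothetical):

```python
import json
from pydantic import BaseModel

class Pulse(BaseModel):
    amplitude: float

# Compact JSON by default; pass indent=... for pretty printing.
s = Pulse(amplitude=0.5).model_dump_json()
```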
- property model_extra: dict[str, Any] | None#
Get extra fields set during validation.
- Returns:
A dictionary of extra fields, or None if config.extra is not set to “allow”.
- model_fields = {'filter': FieldInfo(annotation=list, required=True), 'kind': FieldInfo(annotation=Literal['dc-filter'], required=False, default='dc-filter'), 'offset': FieldInfo(annotation=float, required=True)}#
- property model_fields_set: set[str]#
Returns the set of fields that have been explicitly set on this model instance.
- Returns:
A set of strings representing the fields that have been set, i.e. that were not filled from defaults.
- classmethod model_json_schema(by_alias: bool = True, ref_template: str = '#/$defs/{model}', schema_generator: type[pydantic.json_schema.GenerateJsonSchema] = <class 'pydantic.json_schema.GenerateJsonSchema'>, mode: ~typing.Literal['validation', 'serialization'] = 'validation') dict[str, Any] #
Generates a JSON schema for a model class.
- Parameters:
by_alias – Whether to use attribute aliases or not.
ref_template – The reference template.
schema_generator – To override the logic used to generate the JSON schema, as a subclass of GenerateJsonSchema with your desired modifications
mode – The mode in which to generate the schema.
- Returns:
The JSON schema for the given model class.
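For example, the generated schema for a one-field model (Pydantic v2; the Pulse model is hypothetical):

```python
from pydantic import BaseModel

class Pulse(BaseModel):
    amplitude: float

schema = Pulse.model_json_schema()
# Field types map to JSON Schema types; required fields are listed.
assert schema["properties"]["amplitude"]["type"] == "number"
assert "amplitude" in schema["required"]
```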
- classmethod model_parametrized_name(params: tuple[type[Any], ...]) str #
Compute the class name for parametrizations of generic classes.
This method can be overridden to achieve a custom naming scheme for generic BaseModels.
- Parameters:
params – Tuple of types of the class. Given a generic class Model with 2 type variables and a concrete model Model[str, int], the value (str, int) would be passed to params.
- Returns:
String representing the new class where params are passed to cls as type variables.
- Raises:
TypeError – Raised when trying to generate concrete names for non-generic models.
- model_post_init(context: Any, /) None #
Override this method to perform additional initialization after __init__ and model_construct. This is useful if you want to do some validation that requires the entire model to be initialized.
- classmethod model_rebuild(*, force: bool = False, raise_errors: bool = True, _parent_namespace_depth: int = 2, _types_namespace: MappingNamespace | None = None) bool | None #
Try to rebuild the pydantic-core schema for the model.
This may be necessary when one of the annotations is a ForwardRef which could not be resolved during the initial attempt to build the schema, and automatic rebuilding fails.
- Parameters:
force – Whether to force the rebuilding of the model schema, defaults to False.
raise_errors – Whether to raise errors, defaults to True.
_parent_namespace_depth – The depth level of the parent namespace, defaults to 2.
_types_namespace – The types namespace, defaults to None.
- Returns:
Returns None if the schema is already “complete” and rebuilding was not required. If rebuilding _was_ required, returns True if rebuilding was successful, otherwise False.
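As an illustration (Pydantic v2; the Node model is hypothetical), model_rebuild is typically relevant for models with forward references, such as self-referencing models:

```python
from typing import Optional
from pydantic import BaseModel

class Node(BaseModel):
    value: int
    child: Optional["Node"] = None

# Returns None if the schema is already complete; True/False otherwise.
Node.model_rebuild()
n = Node(value=1, child={"value": 2})
```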
- classmethod model_validate(obj: Any, *, strict: bool | None = None, from_attributes: bool | None = None, context: Any | None = None, by_alias: bool | None = None, by_name: bool | None = None) Self #
Validate a pydantic model instance.
- Parameters:
obj – The object to validate.
strict – Whether to enforce types strictly.
from_attributes – Whether to extract data from object attributes.
context – Additional context to pass to the validator.
by_alias – Whether to use the field’s alias when validating against the provided input data.
by_name – Whether to use the field’s name when validating against the provided input data.
- Raises:
ValidationError – If the object could not be validated.
- Returns:
The validated model instance.
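A small sketch of lax versus strict validation (Pydantic v2; the Pulse model is hypothetical):

```python
from pydantic import BaseModel, ValidationError

class Pulse(BaseModel):
    amplitude: float

p = Pulse.model_validate({"amplitude": 0.3})

# strict=True disables lax coercion, so a numeric string is rejected.
try:
    Pulse.model_validate({"amplitude": "0.3"}, strict=True)
    raised = False
except ValidationError:
    raised = True
```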
- classmethod model_validate_json(json_data: str | bytes | bytearray, *, strict: bool | None = None, context: Any | None = None, by_alias: bool | None = None, by_name: bool | None = None) Self #
- !!! abstract “Usage Documentation”
[JSON Parsing](../concepts/json.md#json-parsing)
Validate the given JSON data against the Pydantic model.
- Parameters:
json_data – The JSON data to validate.
strict – Whether to enforce types strictly.
context – Extra variables to pass to the validator.
by_alias – Whether to use the field’s alias when validating against the provided input data.
by_name – Whether to use the field’s name when validating against the provided input data.
- Returns:
The validated Pydantic model.
- Raises:
ValidationError – If json_data is not a JSON string or the object could not be validated.
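A minimal usage sketch (Pydantic v2; the Pulse model is hypothetical). Defaults are applied for fields absent from the JSON:

```python
from pydantic import BaseModel

class Pulse(BaseModel):
    amplitude: float
    duration: int = 40

# Accepts str, bytes, or bytearray JSON input.
p = Pulse.model_validate_json('{"amplitude": 0.3}')
```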
- classmethod model_validate_strings(obj: Any, *, strict: bool | None = None, context: Any | None = None, by_alias: bool | None = None, by_name: bool | None = None) Self #
Validate the given object with string data against the Pydantic model.
- Parameters:
obj – The object containing string data to validate.
strict – Whether to enforce types strictly.
context – Extra variables to pass to the validator.
by_alias – Whether to use the field’s alias when validating against the provided input data.
by_name – Whether to use the field’s name when validating against the provided input data.
- Returns:
The validated Pydantic model.
- classmethod parse_file(path: str | Path, *, content_type: str | None = None, encoding: str = 'utf8', proto: DeprecatedParseProtocol | None = None, allow_pickle: bool = False) Self #
- classmethod parse_raw(b: str | bytes, *, content_type: str | None = None, encoding: str = 'utf8', proto: DeprecatedParseProtocol | None = None, allow_pickle: bool = False) Self #
- qibocal.protocols.utils.effective_qubit_temperature(prob_0: ndarray[Any, dtype[_ScalarType_co]], prob_1: ndarray[Any, dtype[_ScalarType_co]], qubit_frequency: float, nshots: int)[source]#
Calculates the qubit effective temperature.
The formula used is:
k_B * T_eff = - hbar * qubit_freq / ln(prob_1 / prob_0)
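The formula above can be sketched as follows. This is an illustrative reimplementation, not qibocal's code: the constants and the function name are mine, and the formula is taken literally (hbar * qubit_freq, with no 2*pi factor for angular frequency, which the actual implementation may include):

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
KB = 1.380649e-23       # Boltzmann constant, J/K

def effective_temperature(prob_0, prob_1, qubit_frequency):
    # kB * Teff = -hbar * qubit_freq / ln(prob_1 / prob_0)
    return -HBAR * qubit_frequency / (KB * math.log(prob_1 / prob_0))

# A qubit mostly in |0> at 5 GHz gives a low effective temperature (~17 mK).
t_eff = effective_temperature(0.9, 0.1, 5e9)
```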
- qibocal.protocols.utils.calculate_frequencies(results, ro_pulses)[source]#
Calculates outcome frequencies from individual shots.
- Parameters:
results (dict) – return of execute_pulse_sequence.
qubit_list (list) – list of qubit ids executed in pulse sequence.
- Returns:
dictionary containing frequencies.
- qibocal.protocols.utils.cumulative(input_data, points)[source]#
Evaluates the cumulative distribution function of points at the values in input_data.
- qibocal.protocols.utils.fit_punchout(data: Data, fit_type: str)[source]#
Punchout fitting function.
- Parameters:
data (Data) – Punchout acquisition data.
fit_type (str) – Punchout type; either amp (amplitude) or att (attenuation).
- Returns:
List of dictionaries containing the low and high amplitude (attenuation) frequencies and the readout amplitude (attenuation) for each qubit.
- qibocal.protocols.utils.round_report(measure: list) tuple[list, list] [source]#
Rounds the measured values and their errors according to their significant digits.
- Parameters:
measure (list) – value-error couples.
- Returns:
A tuple with the lists of values and errors in the correct string format.
- qibocal.protocols.utils.format_error_single_cell(measure: tuple)[source]#
Helper function to print mean value and error in one line.
- qibocal.protocols.utils.chi2_reduced(observed: ndarray[Any, dtype[_ScalarType_co]], estimated: ndarray[Any, dtype[_ScalarType_co]], errors: ndarray[Any, dtype[_ScalarType_co]], dof: Optional[float] = None)[source]#
- qibocal.protocols.utils.chi2_reduced_complex(observed: tuple[numpy.ndarray[typing.Any, numpy.dtype[+_ScalarType_co]], numpy.ndarray[typing.Any, numpy.dtype[+_ScalarType_co]]], estimated: ndarray[Any, dtype[_ScalarType_co]], errors: tuple[numpy.ndarray[typing.Any, numpy.dtype[+_ScalarType_co]], numpy.ndarray[typing.Any, numpy.dtype[+_ScalarType_co]]], dof: Optional[float] = None)[source]#
- qibocal.protocols.utils.significant_digit(number: float)[source]#
Computes the position of the first significant digit of a given number.
- Parameters:
number (Number) – number for which the significant digit is computed. Can be complex.
- Returns:
Position of the first significant digit. Returns -1 if the given number is >= 1, equal to 0, or inf.
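A sketch of the documented behavior (my own reconstruction, not qibocal's code): the position of the first significant digit of a number below 1 is the negated floor of its base-10 logarithm:

```python
import math

def significant_digit(number):
    # Position of the first significant digit after the decimal point;
    # -1 for numbers >= 1, equal to 0, or infinite. abs() also covers
    # complex inputs, as the docstring allows.
    number = abs(number)
    if number >= 1 or number == 0 or math.isinf(number):
        return -1
    return -int(math.floor(math.log10(number)))
```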
- qibocal.protocols.utils.evaluate_grid(data: ndarray[Any, dtype[_ScalarType_co]])[source]#
This function returns a matrix grid evaluated from the datapoints data.
- qibocal.protocols.utils.plot_results(data: Data, qubit: Union[int, str], qubit_states: list, fit: Results)[source]#
Plots for the qubit and qutrit classification.
- qibocal.protocols.utils.table_dict(qubit: Union[list[Union[int, str]], int, str], names: list[str], values: list, display_error=False) dict [source]#
Build a dictionary to generate HTML table with table_html.
- Parameters:
qubit (Union[list[QubitId], QubitId]) – If qubit is a scalar value, the "Qubit" entries will have only this value, repeated.
names (list[str]) – List of the names of the parameters.
values (list) – List of the values of the parameters.
display_error (bool) – If True, values is a list of value-error couples, so an Errors key will be displayed in the dictionary. The function will round the couples according to their significant digits. Default: False.
- Returns:
A dictionary with keys Qubit, Parameters, Values (Errors).
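A minimal sketch of the documented behavior (an illustrative reimplementation, not qibocal's code; error handling and rounding are omitted):

```python
def table_dict(qubit, names, values):
    # A scalar qubit id is repeated to match the number of parameters.
    if not isinstance(qubit, list):
        qubit = [qubit] * len(names)
    return {"Qubit": qubit, "Parameters": names, "Values": values}

row = table_dict(0, ["T1", "T2"], [12.3, 4.5])
```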
- qibocal.protocols.utils.table_html(data: dict) str [source]#
This function converts a dictionary into an HTML table.
- Parameters:
data (dict) – the keys will be converted into table entries and the values will be the columns of the table. Values must be valid HTML strings.
- Returns:
str
- qibocal.protocols.utils.extract_feature(x: ndarray, y: ndarray, z: ndarray, feat: str, ci_first_mask: float = 99, ci_second_mask: float = 70)[source]#
Extract feature using confidence intervals.
Given a dataset of the form (x, y, z) in which a spike or a valley is expected, this function discriminates the points (x, y) carrying a signal from the pure noise and returns the former.
A first mask is constructed by looking at the ci_first_mask confidence interval for each y bin. A second mask is applied using the ci_second_mask confidence interval to remove outliers. feat can be min or max: in the first case the function looks for valleys, otherwise for peaks.
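The masking idea can be sketched as follows. This is a simplified, global version (my own illustration, not qibocal's code): the actual implementation applies the percentile mask per y bin and adds a second outlier-removal pass:

```python
import numpy as np

def extract_feature_sketch(x, y, z, feat="max", ci=99):
    # Keep points whose signal exceeds the `ci` percentile (peaks, feat="max")
    # or falls below the 100-ci percentile (valleys, feat="min").
    if feat == "max":
        mask = z > np.percentile(z, ci)
    else:
        mask = z < np.percentile(z, 100 - ci)
    return x[mask], y[mask], z[mask]

x = np.arange(100.0)
y = np.zeros(100)
z = np.zeros(100)
z[50] = 10.0  # a single spike
fx, fy, fz = extract_feature_sketch(x, y, z, "max", ci=99)
```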