maskit package¶
Subpackages¶
Submodules¶
maskit.circuits module¶
- maskit.circuits.basic_variational_circuit(params, rotations, masked_circuit: MaskedCircuit)¶
- maskit.circuits.basis_circuit(params, data, rotations, masked_circuit, wires, wires_to_measure)¶
- maskit.circuits.cost(circuit, params, rotations: List, masked_circuit: MaskedCircuit)¶
- maskit.circuits.cost_basis(circuit, params, data, target, rotations: List, masked_circuit: MaskedCircuit, wires: int, wires_to_measure: Tuple[int, ...], interpret: Tuple[int, ...])¶
- maskit.circuits.variational_circuit(params, rotations, masked_circuit)¶
maskit.ensembles module¶
- class maskit.ensembles.AdaptiveEnsemble(dropout: Optional[Dict[str, Dict]], size: int, epsilon: float)¶
Bases: Ensemble
- epsilon¶
- step(masked_circuit: MaskedCircuit, optimizer, objective_fn, *args, ensemble_steps: int = 1) → EnsembleResult¶
The parameter ensemble_steps defines the number of training steps that are executed for each ensemble branch in addition to one training step that is done before the branching.
- class maskit.ensembles.Ensemble(dropout: Optional[Dict])¶
Bases: object
- dropout¶
- perturb¶
- step(masked_circuit: MaskedCircuit, optimizer, objective_fn, *args, ensemble_steps: int = 0) → EnsembleResult¶
The parameter ensemble_steps defines the number of training steps that are executed for each ensemble branch in addition to one training step that is done before the branching.
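The following is a minimal, hypothetical sketch of a training loop driven by an Ensemble. The parameter shape, the objective function, and the assumption that objective_fn receives the current parameters are illustrative and not part of the documented API.

    # Hypothetical usage sketch; shapes, the objective function, and the
    # calling convention of objective_fn are assumptions for illustration.
    from maskit.ensembles import Ensemble
    from maskit.masks import MaskedCircuit
    from maskit.optimizers import ExtendedGradientDescentOptimizer
    from pennylane import numpy as np

    parameters = np.random.uniform(0, np.pi, (4, 3))  # assumed layers x wires
    masked_circuit = MaskedCircuit(parameters=parameters, layers=4, wires=3)
    ensemble = Ensemble(dropout=None)  # no perturbation branches configured
    optimizer = ExtendedGradientDescentOptimizer(stepsize=0.01)

    def objective_fn(params, masked_circuit=None):
        # stand-in cost; a real setup would evaluate a quantum circuit here
        return np.sum(params ** 2)

    for _ in range(10):
        result = ensemble.step(masked_circuit, optimizer, objective_fn,
                               ensemble_steps=1)
        # EnsembleResult exposes the best branch and its metrics (see below)
        print(result.branch_name, result.cost, result.active)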
- class maskit.ensembles.EnsembleResult(branch, branch_name, active, cost, gradient, brutto_steps, netto_steps, brutto, netto, ensemble)¶
Bases: tuple
- property active¶
number of active gates of selected branch
- property branch¶
branch that performs best
- property branch_name¶
name of branch as configured
- property brutto¶
training with respect to the count of parameters, including all other branches
- property brutto_steps¶
training steps including all other branches
- property cost¶
cost of selected branch
- property ensemble¶
True in case the ensemble was evaluated, otherwise False
- property gradient¶
gradient of selected branch
- property netto¶
training with respect to the count of parameters for the selected branch
- property netto_steps¶
training steps for selected branch
- class maskit.ensembles.IntervalEnsemble(dropout: Optional[Dict], interval: int)¶
Bases: Ensemble
- step(masked_circuit: MaskedCircuit, optimizer, objective_fn, *args, ensemble_steps: int = 1) → EnsembleResult¶
The parameter ensemble_steps defines the number of training steps that are executed for each ensemble branch in addition to one training step that is done before the branching.
maskit.log_results module¶
- maskit.log_results.log_results(executor: CJ, exclude: Tuple[str, ...]) → CJ¶
- maskit.log_results.serialize(o)¶
maskit.masks module¶
- class maskit.masks.FreezableMaskedCircuit(parameters: ndarray, layers: int, wires: int, default_value: Optional[float] = None)¶
Bases: MaskedCircuit
A FreezableMaskedCircuit not only supports masking of different components, including wires, layers, and parameters, but also supports freezing a subset of parameters, again defined on the components wires, layers, and parameters.
- copy() → Self¶
Returns a copy of the current FreezableMaskedCircuit.
- freeze(axis: PerturbationAxis = PerturbationAxis.LAYERS, amount: Optional[Union[int, float]] = None, mode: PerturbationMode = PerturbationMode.ADD)¶
Freezes the parameter values for a given axis of type PerturbationAxis. The freezing is applied amount times and depends on the given mode of type PerturbationMode. If no amount is given, that is amount=None, a random amount is determined based on the actual size of the mask; the amount is automatically limited to the actual size of the mask. A usage sketch follows this entry.
- Parameters:
amount – Number of items to freeze, defaults to None
axis – Which mask to freeze, defaults to PerturbationAxis.LAYERS
mode – How to freeze, defaults to PerturbationMode.ADD
- Raises:
NotImplementedError – Raised in case of an unknown mode
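A minimal, hypothetical sketch of freezing one layer; it assumes PerturbationAxis and PerturbationMode can be imported from maskit.masks and uses an illustrative parameter shape.

    # Hypothetical usage sketch; the import location of the enums and the
    # parameter shape are assumptions for illustration.
    from maskit.masks import (FreezableMaskedCircuit, PerturbationAxis,
                              PerturbationMode)
    from pennylane import numpy as np

    parameters = np.random.uniform(0, np.pi, (4, 3))  # assumed layers x wires
    circuit = FreezableMaskedCircuit(parameters=parameters, layers=4, wires=3)

    # freeze the values of one randomly chosen layer so that optimisation
    # can no longer modify them
    circuit.freeze(axis=PerturbationAxis.LAYERS, amount=1,
                   mode=PerturbationMode.ADD)
    print(circuit.mask)  # accumulated mask of masking and freezing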
- property mask: ndarray¶
Accumulated mask of layers, wires, and parameters for both masking and freezing. Note that this mask is readonly.
- class maskit.masks.Mask(shape: Tuple[int, ...], parent: Optional[MaskedCircuit] = None, mask: Optional[ndarray] = None)¶
Bases: object
A Mask encapsulates a mask storing boolean values that indicate whether a specific value is masked or not. In case a specific position is True, the corresponding value is masked, otherwise it is not. A usage sketch follows the parameter list.
- Parameters:
shape – The shape of the mask
parent – MaskedCircuit that owns the mask
mask – Preset of values that the mask is initialised with, defaults to None
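A minimal, hypothetical sketch of the masking semantics; the exact return value of apply_mask (e.g. only the unmasked values) is an assumption.

    # Hypothetical usage sketch.
    from maskit.masks import Mask
    import numpy as np

    mask = Mask(shape=(4,))
    mask.mask[1] = True  # position 1 is now masked
    values = np.array([0.1, 0.2, 0.3, 0.4])
    # apply_mask is assumed to suppress the masked positions of values
    active_values = mask.apply_mask(values)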
- apply_mask(values: ndarray)¶
Applies the encapsulated mask to the given values. Note that the values should have the same shape as the mask.
- Parameters:
values – Values the mask should be applied to
- clear() → None¶
Resets the mask to not mask anything.
- copy(parent: Optional[MaskedCircuit] = None) → Mask¶
Returns a copy of the current Mask.
- mask: ndarray¶
encapsulated mask
- perturb(amount: Optional[Union[int, float]] = None, mode: PerturbationMode = PerturbationMode.INVERT)¶
Perturbs the Mask by the given mode of type PerturbationMode amount times. If no amount is given, that is amount=None, a random amount is determined based on the actual size of the mask. If amount is smaller than 1, it is interpreted as a fraction of the mask's size. Note that the amount is automatically limited to the actual size of the mask. A usage sketch follows this entry.
- Parameters:
amount – Number of items to perturb given either by an absolute amount when amount >= 1 or a fraction of the mask, defaults to None
mode – How to perturb, defaults to PerturbationMode.INVERT
- Raises:
NotImplementedError – Raised in case of an unknown mode
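A minimal, hypothetical sketch of perturb, assuming PerturbationMode can be imported from maskit.masks.

    # Hypothetical usage sketch.
    from maskit.masks import Mask, PerturbationMode

    mask = Mask(shape=(3, 3))
    # amount < 1 is read as a fraction: invert roughly half of the entries
    mask.perturb(amount=0.5, mode=PerturbationMode.INVERT)
    # amount >= 1 is an absolute count: invert two randomly chosen entries
    mask.perturb(amount=2, mode=PerturbationMode.INVERT)
    print(mask.mask)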
- shrink(amount: int = 1)¶
- class maskit.masks.MaskedCircuit(parameters: ndarray, layers: int, wires: int, dynamic_parameters: bool = True, default_value: Optional[float] = None, parameter_mask: Optional[ndarray] = None, layer_mask: Optional[ndarray] = None, wire_mask: Optional[ndarray] = None)¶
Bases: object
A MaskedCircuit supports masking of different components including wires, layers, and parameters. Masking naturally removes active parameters from a circuit. However, some optimisers expect the array of parameters to remain stable across iteration steps; use dynamic_parameters=False to force the mask to always yield the full set of parameters in such cases. The mask will still prevent modification of inactive parameters. A construction sketch follows the parameter list.
- Parameters:
parameters – Initial parameter set for circuit
layers – Number of layers
wires – Number of wires
dynamic_parameters – Whether the array of differentiable parameters may change size/order
default_value – Default value for gates that are added back in. In case of None that is also the default, the last known value is assumed
parameter_mask – Initialization values of parameter mask, defaults to None
layer_mask – Initialization values of layer mask, defaults to None
wire_mask – Initialization values of wire mask, defaults to None
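A minimal, hypothetical construction sketch; the parameter shape is an assumption, as the expected layout depends on the circuit template in use.

    # Hypothetical usage sketch.
    from maskit.masks import MaskedCircuit
    from pennylane import numpy as np

    parameters = np.random.uniform(0, np.pi, (4, 3))  # assumed layers x wires
    circuit = MaskedCircuit(
        parameters=parameters,
        layers=4,
        wires=3,
        dynamic_parameters=False,  # keep the parameter array stable
        default_value=0.0,         # value for gates that are added back in
    )
    print(circuit.active())  # number of active gates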
- active() → int¶
Number of active gates in the circuit.
- apply_mask(values: ndarray)¶
Applies the encapsulated masks to the given values. Note that the values should have the same shape as the mask.
- Parameters:
values – Values the masks should be applied to
- clear()¶
Resets all masks.
- copy() → Self¶
Returns a copy of the current MaskedCircuit.
- default_value¶
- property differentiable_parameters: ndarray¶
Subset of parameters that are not masked and therefore differentiable.
- static execute(masked_circuit: MaskedCircuit, operations: List[Dict])¶
- expanded_parameters(changed_parameters: ndarray) → ndarray¶
This method helps to build a circuit from a current instance of differentiable parameters. Differentiable parameters are wrapped in a box by autograd, e.g. for proper tracing. As the structure of the circuit cannot be inferred from those parameters alone, this method expands them into a view that combines the full set of parameters with the differentiable parameters. Note that the returned parameters are based on a copy of the underlying parameters and therefore should not be changed manually. A usage sketch follows this entry.
- Parameters:
changed_parameters – Current set of differentiable parameters
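A minimal, hypothetical sketch of the intended interplay: the optimiser only sees differentiable_parameters, and a cost function expands them back to the full structure. evaluate_circuit is a stand-in, not part of the package.

    # Hypothetical usage sketch; evaluate_circuit is a placeholder.
    def evaluate_circuit(params):
        # placeholder for an actual (quantum) circuit evaluation
        return (params ** 2).sum()

    def cost_fn(changed_parameters, masked_circuit):
        # expand the optimiser-side view back to the full parameter array;
        # the result is based on a copy and must not be modified in place
        full_params = masked_circuit.expanded_parameters(changed_parameters)
        return evaluate_circuit(full_params)

    # the optimiser operates on the unmasked subset only:
    # trainable = masked_circuit.differentiable_parameters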
- property layer_mask¶
Returns the encapsulated layer mask.
- property mask: ndarray¶
Accumulated mask of layer, wire, and parameter masks. Note that this mask is readonly.
- mask_changed(mask: Mask, indices: ndarray)¶
Callback function that is invoked whenever one of the encapsulated masks changes. In case the change adds a parameter back into the circuit, the configured default_value is applied.
- Raises:
NotImplementedError – In case an unimplemented mask reports change
- property parameter_mask¶
Returns the encapsulated parameter mask.
- parameters¶
- perturb(axis: PerturbationAxis = PerturbationAxis.RANDOM, amount: Optional[Union[int, float]] = None, mode: PerturbationMode = PerturbationMode.INVERT)¶
Perturbs the MaskedCircuit for a given axis of type PerturbationAxis. The perturbation is applied amount times and depends on the given mode of type PerturbationMode. If no amount is given, that is amount=None, a random amount is determined based on the actual size of the mask; the amount is automatically limited to the actual size of the mask. A usage sketch follows this entry.
- Parameters:
amount – Number of items to perturb, defaults to None
axis – Which mask to perturb, defaults to PerturbationAxis.RANDOM
mode – How to perturb, defaults to PerturbationMode.INVERT
- Raises:
NotImplementedError – Raised in case of an unknown mode
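A minimal, hypothetical sketch, again assuming the enums are importable from maskit.masks and using an illustrative parameter shape.

    # Hypothetical usage sketch.
    from maskit.masks import (MaskedCircuit, PerturbationAxis,
                              PerturbationMode)
    from pennylane import numpy as np

    circuit = MaskedCircuit(parameters=np.random.uniform(0, np.pi, (4, 3)),
                            layers=4, wires=3)
    # invert one randomly chosen entry of the layer mask
    circuit.perturb(axis=PerturbationAxis.LAYERS, amount=1,
                    mode=PerturbationMode.INVERT)
    print(circuit.active())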
- shrink(axis: PerturbationAxis = PerturbationAxis.LAYERS, amount: int = 1)¶
- property wire_mask¶
Returns the encapsulated wire mask.
maskit.optimizers module¶
- class maskit.optimizers.ExtendedAdamOptimizer(stepsize=0.01, beta1=0.9, beta2=0.99, eps=1e-08)¶
Bases: ExtendedGradientDescentOptimizer, AdamOptimizer
- class maskit.optimizers.ExtendedGradientDescentOptimizer(stepsize=0.01)¶
Bases: GradientDescentOptimizer
- step_cost_and_grad(objective_fn, *args, grad_fn=None, **kwargs)¶
This function copies the functionality of the GradientDescentOptimizer one-to-one but changes the return statement to also return the gradient.
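A minimal, hypothetical sketch on a classical objective; the assumed return order (parameters, cost, gradient) follows from the description above.

    # Hypothetical usage sketch; the return order is an assumption.
    from maskit.optimizers import ExtendedGradientDescentOptimizer
    from pennylane import numpy as np

    opt = ExtendedGradientDescentOptimizer(stepsize=0.1)

    def objective_fn(params):
        return np.sum(params ** 2)

    params = np.array([1.0, -2.0], requires_grad=True)
    params, cost, gradient = opt.step_cost_and_grad(objective_fn, params)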
- class maskit.optimizers.ExtendedOptimizers(value)¶
Bases: Enum
An enumeration.
- ADAM = <class 'maskit.optimizers.ExtendedAdamOptimizer'>¶
- GD = <class 'maskit.optimizers.ExtendedGradientDescentOptimizer'>¶
- L_BFGS_B = <class 'maskit.optimizers.L_BFGS_B'>¶
- class maskit.optimizers.L_BFGS_B(bounds: Optional[ndarray] = None, m: int = 10, factr: float = 10000000.0, pgtol: float = 1e-05, epsilon: float = 1e-08, iprint: int = -1, maxfun: int = 15000, maxiter: int = 15000, disp=None, callback=None, maxls: int = 20)¶
Bases: object
The L-BFGS-B optimiser provides a wrapper for the implementation provided in scipy. Please see the fmin_l_bfgs_b() documentation for further details. In case the method step() is used, the value of the parameter maxiter is ignored and interpreted as 1 instead. A usage sketch follows the method listing of this class.
- Parameters:
bounds – tuple of min and max for each value of provided parameters
m – maximum number of variable metric corrections used to define the limited memory matrix
factr – information on when to stop iterating, e.g. 1e12 for low accuracy; 1e7 for moderate accuracy; 10.0 for extremely high accuracy
pgtol – when to stop iterating with regard to the gradient
epsilon – Step size used when approx_grad is True
iprint – Frequency of output
maxfun – Maximum number of function evaluations
maxiter – Maximum number of iterations
disp – If zero, then no output. If positive, this overrides iprint
callback – Called after each iteration with current parameters
maxls – Maximum number of line search steps (per iteration)
- bounds¶
- callback¶
- disp¶
- epsilon¶
- factr¶
- iprint¶
- m¶
- maxfun¶
- maxiter¶
- maxls¶
- optimize(objective_fn, parameters: ndarray, *args, grad_fn=None, **kwargs) → Tuple[ndarray, float, ndarray]¶
- Parameters:
objective_fn – Function to minimize
parameters – Initial guess of parameters
grad_fn – The gradient of objective_fn. In case of None, the gradient is approximated numerically
- pgtol¶
- step(objective_fn, parameters, *args, grad_fn=None, **kwargs) → ndarray¶
- Parameters:
objective_fn – Function to minimize
parameters – Initial guess of parameters
grad_fn – The gradient of objective_fn. In case of None, the gradient is approximated numerically
- step_and_cost(objective_fn, parameters, *args, grad_fn=None, **kwargs) → Tuple[ndarray, float]¶
- Parameters:
objective_fn – Function to minimize
parameters – Initial guess of parameters
grad_fn – The gradient of objective_fn. In case of None, the gradient is approximated numerically
- step_cost_and_grad(objective_fn, parameters, *args, grad_fn=None, **kwargs) → Tuple[ndarray, float, ndarray]¶
- Parameters:
objective_fn – Function to minimize
parameters – Initial guess of parameters
grad_fn – The gradient of objective_fn. In case of None, the gradient is approximated numerically
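A minimal, hypothetical sketch of the wrapper on a classical objective; with grad_fn=None the gradient is approximated numerically.

    # Hypothetical usage sketch.
    from maskit.optimizers import L_BFGS_B
    import numpy as np

    def objective_fn(params):
        return float(np.sum((params - 1.0) ** 2))

    opt = L_BFGS_B(maxiter=100)
    params = np.zeros(3)

    # full optimisation run, returning parameters, cost and gradient
    params, cost, gradient = opt.optimize(objective_fn, params)

    # single iteration: step() ignores maxiter and performs one iteration
    params = opt.step(objective_fn, params)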
maskit.plotting module¶
maskit.utils module¶
- maskit.utils.check_params(train_params)¶
- maskit.utils.cross_entropy(predictions: ndarray, targets: ndarray, epsilon: float = 1e-15) → float¶
Cross entropy calculation between targets (encoded as one-hot vectors) and predictions. Predictions are normalized to sum up to 1.0. A usage sketch follows the parameter list.
Note
The implementation of this function is based on a discussion on StackOverflow. Due to the ArrayBoxes that are required for automatic differentiation, we currently use this implementation instead of implementations provided, for example, by sklearn.
- Parameters:
predictions – Predictions in same order as targets. In case predictions for several samples are given, the weighted cross entropy is returned.
targets – Ground truth labels for supplied samples.
epsilon – Amount to clip predictions as log is not defined for 0 and 1.
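A minimal, hypothetical sketch with one-hot targets; the values are illustrative.

    # Hypothetical usage sketch.
    from maskit.utils import cross_entropy
    import numpy as np

    predictions = np.array([[0.7, 0.2, 0.1],
                            [0.1, 0.8, 0.1]])
    targets = np.array([[1.0, 0.0, 0.0],
                        [0.0, 1.0, 0.0]])  # one-hot encoded
    # weighted cross entropy over the two supplied samples
    loss = cross_entropy(predictions, targets)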
Module contents¶
Python package to explore masking gates in variational circuits