Bound Options
Bound options can be set by passing a dictionary to the `bound_opts` argument of `BoundedModule`. This page lists the available bound options.
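For example, here is a minimal sketch of passing bound options (the toy model and the chosen option value are illustrative only):

```python
import torch
import torch.nn as nn
from auto_LiRPA import BoundedModule

# A toy model and dummy input; any PyTorch model is wrapped the same way.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
dummy_input = torch.zeros(1, 4)

# Bound options are passed as a dictionary via the `bound_opts` argument.
bounded_model = BoundedModule(model, dummy_input, bound_opts={'relu': 'adaptive'})
```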
Arguments for Optimizing Bounds (`optimize_bound_args`)
Arguments for optimizing bounds with the `CROWN-Optimized` method can be provided as a dictionary. Available arguments include (a usage sketch follows the list):
- `enable_alpha_crown` (bool, default `True`): Enable α-CROWN (optimized CROWN/LiRPA).
- `enable_beta_crown` (bool, default `False`): Enable β-CROWN.
- `optimizer` (str, default `adam`): Optimizer. Set it to `adam-autolr` to use `AdamElementLR`, or `sgd` to use SGD.
- `lr_alpha` (float, default 0.5), `lr_beta` (float, default 0.05): Learning rates for the α and β parameters in α-CROWN and β-CROWN, respectively.
- `lr_decay` (float, default 0.98): Learning rate decay factor for the `ExponentialLR` scheduler.
- `iteration` (int): Number of optimization iterations.
- `loss_reduction_func` (function): Function for reducing the loss over the specification dimension. By default, `auto_LiRPA.utils.reduction_sum` is used, which sums the bound over all batch elements and specifications.
- `stop_criterion_func` (function): Criterion for stopping the optimization early; it returns a `torch.bool` tensor with `batch_size` elements. By default, it is a lambda function that always returns `False`. Several pre-defined options are `auto_LiRPA.utils.stop_criterion_min`, `auto_LiRPA.utils.stop_criterion_mean`, `auto_LiRPA.utils.stop_criterion_max`, and `auto_LiRPA.utils.stop_criterion_sum`. For example, `auto_LiRPA.utils.stop_criterion_min` checks the minimum bound over all specifications of a batch element and returns `True` for that element when the minimum bound is greater than a specified threshold.
- `keep_best` (bool, default `True`): If `True`, save α, β, and bounds from the best iteration; otherwise the result of the last iteration is used.
- `use_shared_alpha` (bool, default `False`): If `True`, all intermediate neurons from the same layer share the same set of α variables during bound optimization. For a very large model, enabling this option can save memory, at the cost of slightly looser bounds.
- `fix_intermediate_layer_bounds` (bool, default `True`): Only optimize the bounds of the last layer during α/β-CROWN.
- `init_alpha` (bool, default `True`): Initialize α variables by calling CROWN once.
- `early_stop_patience` (int, default 10): Number of iterations with no improvement after which early stopping is considered.
- `start_save_best` (float, default 0.5): Start saving the best optimized bounds when `current_iteration > int(iteration * start_save_best)`.
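As an illustration, here is a self-contained sketch of setting a few of these arguments (the toy model, input, and chosen values are placeholders, not recommendations):

```python
import numpy as np
import torch
import torch.nn as nn
from auto_LiRPA import BoundedModule, BoundedTensor
from auto_LiRPA.perturbations import PerturbationLpNorm

# Toy model and input; any PyTorch model works the same way.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
data = torch.zeros(1, 4)

# Pass the optimization arguments via `optimize_bound_args`.
bounded_model = BoundedModule(
    model, data,
    bound_opts={
        'optimize_bound_args': {
            'iteration': 20,   # run 20 optimization iterations
            'lr_alpha': 0.5,   # learning rate for the alpha parameters
            'lr_decay': 0.98,  # decay factor for the ExponentialLR scheduler
        }
    })

# Wrap the input with an L-infinity perturbation and compute optimized bounds.
ptb = PerturbationLpNorm(norm=np.inf, eps=0.1)
x = BoundedTensor(data, ptb)
lb, ub = bounded_model.compute_bounds(x=(x,), method='CROWN-Optimized')
```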
ReLU (`relu`)
There are different choices for the lower bound relaxation of unstable ReLU activations (see the CROWN paper); a usage sketch follows the list:
- `adaptive` (default): For unstable neurons, when the slope of the upper bound is greater than one, use 1 as the slope of the lower bound; otherwise use 0 (this is described as CROWN-Ada in the original CROWN paper). Please also use this option if the `CROWN-Optimized` bound is used and the lower bound needs to be optimized.
- `same-slope`: Make the slope of the lower bound the same as that of the upper bound.
- `zero-lb`: Always use 0 as the slope of the lower bound for unstable neurons.
- `one-lb`: Always use 1 as the slope of the lower bound for unstable neurons.
- `reversed-adaptive`: For unstable neurons, when the slope of the upper bound is greater than one, use 0 as the slope of the lower bound; otherwise use 1.
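For instance, a sketch (reusing the toy `model` and `dummy_input` from the first example) that selects the same-slope relaxation:

```python
# Select the same-slope lower bound relaxation instead of the default.
bounded_model = BoundedModule(model, dummy_input, bound_opts={'relu': 'same-slope'})
```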
Other Options
- `loss_fusion`: If `True`, this bounded module has loss fusion, i.e., the loss function is also included in the module, and the output of the model is the loss rather than the logits.
- `deterministic`: If `True`, make PyTorch use deterministic algorithms.
- `matmul`: If set to `economic`, use a memory-efficient IBP implementation for relaxing the `matmul` operation when both arguments of `matmul` are perturbed; it avoids expanding all the elementary multiplications in order to save memory. A combined usage sketch follows the list.
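As a final sketch (again reusing the toy `model` and `dummy_input` from the first example), several of these options can be combined in one dictionary:

```python
# Enable deterministic PyTorch algorithms and the memory-efficient
# IBP relaxation for matmul operations with two perturbed operands.
bounded_model = BoundedModule(
    model, dummy_input,
    bound_opts={'deterministic': True, 'matmul': 'economic'})
```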