Build and train Lipschitz-constrained networks: TensorFlow implementation of k-Lipschitz layers
New features:
- `TauSparseCategoricalCrossentropy` loss, equivalent to the Keras `SparseCategoricalCrossentropy` (see the sketch below)
- `TauBinaryCrossentropy` loss, equivalent to the Keras `BinaryCrossentropy`
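A minimal usage sketch for the two new losses; the `tau` constructor argument is an assumption, taken by analogy with `TauCategoricalCrossentropy` from v1.4.0:

```python
import tensorflow as tf
from deel.lip.losses import TauSparseCategoricalCrossentropy, TauBinaryCrossentropy

# Temperature-scaled loss for integer class labels; the `tau` argument name
# is assumed by analogy with TauCategoricalCrossentropy.
sparse_loss = TauSparseCategoricalCrossentropy(tau=10.0)

# Same idea for binary labels.
binary_loss = TauBinaryCrossentropy(tau=10.0)

y_true = tf.constant([0, 2, 1])
y_pred = tf.random.normal((3, 3))  # raw scores from a Lipschitz network
print(sparse_loss(y_true, y_pred).numpy())
```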
Also new: the `deel.lip.compute_layer_sv` module computes the largest and lowest singular values of individual layers (`compute_layer_sv()`) or of a whole model (`compute_model_sv()`).
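A short sketch of the singular-value check; the module and function names come from the release notes, but the exact return format (assumed here to be a per-layer pair of lowest and largest singular values) is an assumption:

```python
from deel.lip.compute_layer_sv import compute_layer_sv, compute_model_sv
from deel.lip.layers import SpectralDense
from deel.lip.model import Sequential

model = Sequential([SpectralDense(32, input_shape=(16,)), SpectralDense(1)])
model.build((None, 16))

# For a 1-Lipschitz layer the largest singular value should be close to 1.0.
print(compute_layer_sv(model.layers[0]))

# Whole-model variant, assumed to report one entry per layer.
print(compute_model_sv(model))
```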
API changes:
- Activation layers now live in the `deel.lip.layers` submodule, e.g. `deel.lip.layers.GroupSort` instead of `deel.lip.activations.GroupSort`. We adopted the same convention as Keras. The legacy submodule is still available for backward compatibility but will be removed in a future release.
- `PadConv2D` now lives in the new `deel.lip.layers.unconstrained` submodule, e.g. `deel.lip.layers.unconstrained.PadConv2D`.

Bug fixes:
- Fixed `__call__()` returning `None`.

Full changelog: https://github.com/deel-ai/deel-lip/compare/v1.4.0...v1.5.0
New features:
- `SpectralConv2DTranspose`, a Lipschitz version of the Keras `Conv2DTranspose` layer
- `Householder` activation, a parametrized generalization of `GroupSort2`
- `LorthRegularizer` for orthogonal convolutions
- `OrthDenseRegularizer` for an orthogonal `Dense` kernel matrix
- `TauCategoricalCrossentropy`, a categorical cross-entropy loss with temperature scaling `tau`
- `CategoricalHinge`, a hinge loss for multi-class problems based on the implementation of the Keras `CategoricalHinge`

Several of these additions can be combined; a sketch follows.
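A hedged sketch of a small decoder-style block using the new layers and loss. The `Conv2DTranspose`-like constructor arguments of `SpectralConv2DTranspose`, the default `Householder()` constructor, and the `tau` value are assumptions for illustration:

```python
from deel.lip.layers import (
    SpectralConv2D,
    SpectralConv2DTranspose,
    ScaledGlobalL2NormPooling2D,
)
from deel.lip.activations import Householder
from deel.lip.losses import TauCategoricalCrossentropy
from deel.lip.model import Sequential

model = Sequential([
    # Lipschitz transposed convolution; Keras Conv2DTranspose-like arguments assumed
    SpectralConv2DTranspose(16, (3, 3), strides=2, padding="same",
                            input_shape=(16, 16, 8)),
    Householder(),                  # parametrized generalization of GroupSort2
    SpectralConv2D(10, (3, 3), padding="same"),
    ScaledGlobalL2NormPooling2D(),  # 1-Lipschitz global pooling
])

model.compile(
    optimizer="adam",
    loss=TauCategoricalCrossentropy(tau=8.0),  # temperature-scaled cross-entropy
)
```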
Also added:
- `LossParamScheduler` to change loss hyper-parameters during training, e.g. `min_margin`, `alpha` and `tau` (sketched below)
- `LossParamLog` to log the values of loss parameters
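A heavily hedged sketch of the two callbacks; the constructor signatures below (a parameter name plus a schedule function) are assumptions, not the documented API:

```python
from deel.lip.callbacks import LossParamScheduler, LossParamLog

# Assumed signature: name of the loss hyper-parameter to update, plus a
# function mapping training progress to its new value (hypothetical ramp-up).
scheduler = LossParamScheduler(
    param_name="alpha",
    fn=lambda step: min(10.0, 1.0 + 0.1 * step),
)

# Assumed signature: log the current value of the named loss parameter.
logger = LossParamLog(param_name="alpha")

# model.fit(x, y, callbacks=[scheduler, logger])  # once a model is compiled
```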
Global settings:
- `tf.while_loop` is now used internally, and its `swap_memory` argument can be set globally using `set_swap_memory(bool)`. The default value is `True`, to save GPU memory usage.
- `set_stop_grad_spectral(bool)` allows bypassing back-propagation through the power iteration algorithm that computes the spectral norm. The default value is `True`, since stopping gradient propagation reduces runtime.
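These switches can be flipped once at import time. A short sketch; the import path is an assumption, since the release notes only name the two setters:

```python
# Import path assumed; only the setter names appear in the release notes.
from deel.lip.utils import set_swap_memory, set_stop_grad_spectral

set_swap_memory(True)         # default True: lower GPU memory for the while_loop
set_stop_grad_spectral(True)  # default True: skip gradients of the power iteration
```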
Bug fixes:
- `SpectralInitializer` no longer reuses the same base initializer across multiple instances.

Full Changelog: https://github.com/deel-ai/deel-lip/compare/v1.3.0...v1.4.0
New features:
- `PadConv2D` to handle, in particular, circular padding in convolutional layers (see the sketch after this list)
- The `reduction` parameter in custom losses can be set to `None`
- `ProvableAvgRobustness` and `ProvableRobustAccuracy` metrics
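A sketch of circular padding with `PadConv2D`; the `padding="circular"` value follows the note above, while the remaining `Conv2D`-style arguments are assumptions:

```python
from deel.lip.layers import PadConv2D  # moved to deel.lip.layers.unconstrained in v1.5.0

# Convolution whose input is padded circularly (wrap-around) before the kernel
# is applied; Conv2D-style arguments assumed.
conv = PadConv2D(filters=16, kernel_size=(3, 3), padding="circular")
```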
Breaking changes:
- `KR` is no longer a function but a class derived from `tf.keras.losses.Loss` (see the sketch below).
- The `negative_KR` function was removed; use the loss `HKR(alpha=0)` instead.
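Since `KR` is now a `Loss` subclass, it is instantiated and called like any Keras loss; a minimal sketch (the label/score shapes are illustrative):

```python
import tensorflow as tf
from deel.lip.losses import KR, HKR

kr = KR()              # now a tf.keras.losses.Loss subclass
neg_kr = HKR(alpha=0)  # replacement for the removed negative_KR

y_true = tf.constant([[1.0], [-1.0], [1.0]])
y_pred = tf.constant([[0.7], [-0.3], [0.1]])
print(kr(y_true, y_pred).numpy(), neg_kr(y_true, y_pred).numpy())
```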
- The `niter_spectral` and `niter_bjorck` iteration counts were removed. The methods are now stopped based on the difference between two iterations, controlled by `eps_spectral` and `eps_bjorck` (a sketch follows this list). This API change occurs in:
  - `SpectralDense` and `SpectralConv2D`
  - `reshaped_kernel_orthogonalization`
  - `SpectralConstraint`
  - `SpectralInitializer`
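A sketch of the new stopping criteria on a layer; the parameter names come from the note above, the tolerance values are illustrative:

```python
from deel.lip.layers import SpectralConv2D, SpectralDense

# Iterations now stop when the change between two successive iterates falls
# below the tolerance, instead of running a fixed number of steps.
dense = SpectralDense(64, eps_spectral=1e-3, eps_bjorck=1e-3)
conv = SpectralConv2D(32, (3, 3), eps_spectral=1e-3, eps_bjorck=1e-3)
```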
Full Changelog: https://github.com/deel-ai/deel-lip/compare/v1.2.0...v1.3.0
This revision contains:
- `ScaledGlobalL2NormPooling2D`

This ends the support of TF 2.0: only versions >= TF 2.1 are supported.
This revision contains:
- `losses.py`: fixed a data type problem in `HKR_loss` and a weighting problem in `KR_multiclass_loss`.
- Fixed `FrobeniusDense` in the multi-class setup: using `FrobeniusDense` with 10 output neurons is now equivalent to stacking 10 `FrobeniusDense` layers with 1 output neuron each. The L2 normalization is performed on each neuron instead of on the full weight matrix.

This version adds new features:
- `InvertibleDownSampling` and `InvertibleUpSampling` (see the sketch below)
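A hedged sketch of the invertible resampling pair; the `pool_size` argument name is an assumption:

```python
from deel.lip.layers import InvertibleDownSampling, InvertibleUpSampling

# Pixel-shuffle style resampling: spatial blocks are moved into channels and
# back, losing no information, so the Lipschitz constant is preserved.
down = InvertibleDownSampling(pool_size=2)  # (H, W, C) -> (H/2, W/2, 4C)
up = InvertibleUpSampling(pool_size=2)      # inverse mapping
```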
It also contains multiple fixes for:
- `L2NormPooling`
- `vanilla_export`
- The `tf.function` annotation causing an incorrect Lipschitz constant in `Sequential` (for constants other than 1)

Breaking changes:
- The `true_values` parameter has been removed in binary HKR, since both (1, -1) and (1, 0) label conventions are handled automatically.

Controlling the Lipschitz constant of a layer or of a whole neural network has many applications, ranging from adversarial robustness to Wasserstein distance estimation.
This library provides implementations of k-Lipschitz layers for Keras.
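For orientation, a minimal end-to-end sketch of a 1-Lipschitz classifier built with this library; the layer and loss choices are illustrative, and `k_coef_lip` sets the model-level Lipschitz factor:

```python
import tensorflow as tf
from deel.lip.layers import SpectralConv2D, SpectralDense, ScaledL2NormPooling2D
from deel.lip.activations import GroupSort2
from deel.lip.losses import MulticlassHKR
from deel.lip.model import Sequential

# A small 1-Lipschitz network: every layer is constrained, and the overall
# constant is controlled through k_coef_lip.
model = Sequential(
    [
        SpectralConv2D(16, (3, 3), input_shape=(28, 28, 1)),
        GroupSort2(),
        ScaledL2NormPooling2D(pool_size=(2, 2)),
        tf.keras.layers.Flatten(),
        SpectralDense(64),
        GroupSort2(),
        SpectralDense(10, activation=None),
    ],
    k_coef_lip=1.0,
)

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-3),
    loss=MulticlassHKR(alpha=50.0, min_margin=0.25),  # robust multi-class loss
    metrics=["accuracy"],
)
```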