Deel Lip Versions

Build and train Lipschitz-constrained networks: TensorFlow implementation of k-Lipschitz layers

v1.5.0

7 months ago

New features and improvements

  • Two new losses based on standard Keras cross-entropy losses with a settable temperature for softmax:
    • TauSparseCategoricalCrossentropy equivalent to Keras SparseCategoricalCrossentropy
    • TauBinaryCrossentropy equivalent to Keras BinaryCrossentropy
  • New module deel.lip.compute_layer_sv to compute the largest and smallest singular values of a single layer (compute_layer_sv()) or of a whole model (compute_model_sv()); see the sketch after this list.
  • Power iteration algorithm for convolution.
  • New "Getting Started" tutorial to introduce 1-Lipschitz neural networks.
  • Documentation migration from Sphinx to MkDocs.
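A minimal usage sketch combining the new losses and the singular-value utilities. The layer stack, the tau value, and the assumption that compute_model_sv() returns a per-layer mapping of (lowest, largest) singular values are illustrative, not prescriptions from these notes:

```python
import tensorflow as tf
from deel.lip.layers import SpectralDense, GroupSort2
from deel.lip.losses import TauSparseCategoricalCrossentropy
from deel.lip.compute_layer_sv import compute_model_sv

# Toy 1-Lipschitz classifier on flattened 28x28 inputs.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    SpectralDense(64),
    GroupSort2(),
    SpectralDense(10),
])

# tau is the softmax temperature of the cross-entropy (value chosen for illustration).
model.compile(
    optimizer="adam",
    loss=TauSparseCategoricalCrossentropy(tau=10.0),
    metrics=["accuracy"],
)

# Estimate the lowest and largest singular values of each layer of the model.
print(compute_model_sv(model))
```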

API changes

  • Activations are now imported via the deel.lip.layers submodule, e.g. deel.lip.layers.GroupSort instead of deel.lip.activations.GroupSort, following the same convention as Keras. The legacy submodule is still available for backward compatibility but will be removed in a future release.
  • Unconstrained layers must now be imported from the deel.lip.layers.unconstrained submodule, e.g. deel.lip.layers.unconstrained.PadConv2D (see the import sketch below).
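The new import paths in practice; the legacy deel.lip.activations path still works but is deprecated:

```python
# New convention (deel-lip >= 1.5.0): activations live in deel.lip.layers,
# unconstrained layers in deel.lip.layers.unconstrained.
from deel.lip.layers import GroupSort
from deel.lip.layers.unconstrained import PadConv2D

# Legacy path, kept for backward compatibility but scheduled for removal:
# from deel.lip.activations import GroupSort
```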

Fixes

  • Fix InvertibleUpSampling __call__() returning None.

Full changelog: https://github.com/deel-ai/deel-lip/compare/v1.4.0...v1.5.0

v1.4.0

1 year ago

New features and improvements

  • Two new layers:
    • SpectralConv2DTranspose, a Lipschitz version of the Keras Conv2DTranspose layer
    • the activation layer Householder, a parametrized generalization of GroupSort2
  • Two new regularizers to foster orthogonality:
    • LorthRegularizer for an orthogonal convolution
    • OrthDenseRegularizer for an orthogonal Dense matrix kernel
  • Two new losses for Lipschitz networks:
    • TauCategoricalCrossentropy, a categorical cross-entropy loss with temperature scaling tau (see the sketch after this list)
    • CategoricalHinge, a hinge loss for multi-class problems based on the Keras CategoricalHinge implementation
  • Two new custom callbacks:
    • LossParamScheduler to change loss hyper-parameters during training, e.g. min_margin, alpha and tau
    • LossParamLog to log the value of loss parameters
  • The Björck orthogonalization algorithm was accelerated.
  • Normalizers (power iteration and Björck) use tf.while_loop, and the swap_memory argument can be set globally using set_swap_memory(bool). The default value is True to reduce GPU memory usage.
  • The new function set_stop_grad_spectral(bool) allows bypassing back-propagation through the power iteration algorithm that computes the spectral norm. The default value is True; stopping gradient propagation reduces runtime.
  • Due to bugs in TensorFlow serialization of custom losses and metrics (versions 2.0 and 2.1), deel-lip now only supports TensorFlow >= 2.2.
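A minimal sketch of compiling a model with the temperature-scaled categorical cross-entropy; the toy architecture and the tau value are illustrative assumptions:

```python
import tensorflow as tf
from deel.lip.layers import SpectralDense
from deel.lip.losses import TauCategoricalCrossentropy

# Toy model with a Lipschitz-constrained dense layer.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32,)),
    SpectralDense(10),
])

# tau is the softmax temperature; it can also be adjusted during training
# with the LossParamScheduler callback introduced in this release.
model.compile(
    optimizer="adam",
    loss=TauCategoricalCrossentropy(tau=5.0),
)
```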

Fixes

  • SpectralInitializer no longer reuses the same base initializer across multiple instances.

Full Changelog: https://github.com/deel-ai/deel-lip/compare/v1.3.0...v1.4.0

v1.3.0

1 year ago

New features and improvements

  • New layer PadConv2D to handle additional padding modes in convolutional layers, in particular circular padding
  • Losses now handle multi-label classification
  • Losses are now element-wise: the reduction parameter in custom losses can be set to None.
  • New metrics are introduced: ProvableAvgRobustness and ProvableRobustAccuracy

API changes

  • KR is no longer a function but a class derived from tf.keras.losses.Loss.
  • The negative_KR function was removed; use the loss HKR(alpha=0) instead.
  • The stopping criterion for spectral normalization and Björck orthogonalization (iterative methods) is no longer the number of iterations (niter_spectral and niter_bjorck). The methods now stop based on the difference between two successive iterations, controlled by eps_spectral and eps_bjorck (see the sketch after this list). This API change affects:
    • Lipschitz layers, such as SpectralDense and SpectralConv2D
    • normalizer reshaped_kernel_orthogonalization
    • constraint SpectralConstraint
    • initializer SpectralInitializer
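For example, a convergence-based configuration now looks like this; the tolerance values below are illustrative:

```python
from deel.lip.layers import SpectralConv2D, SpectralDense

# Before v1.3.0: niter_spectral=... / niter_bjorck=... (fixed iteration counts).
# From v1.3.0: eps_spectral / eps_bjorck set the tolerance on the difference
# between two successive iterations.
dense = SpectralDense(64, eps_spectral=1e-3, eps_bjorck=1e-3)
conv = SpectralConv2D(16, (3, 3), eps_spectral=1e-3, eps_bjorck=1e-3)
```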

Full Changelog: https://github.com/deel-ai/deel-lip/compare/v1.2.0...v1.3.0

v1.2.0

2 years ago

This revision contains:

  • code refactoring: wbar is now stored in a tf.Variable
  • updated documentation notebooks
  • updated callbacks, initializers, constraints, ...
  • updated losses and loss tests
  • improved loss stability for small batches
  • added ScaledGlobalL2NormPooling2D
  • new way to export Keras-serializable objects

This release ends support for TensorFlow 2.0; only versions >= 2.1 are supported.

v1.1.1

3 years ago

This revision contains:

  • bug fixes in losses.py: fixed a data-type problem in HKR_loss and a weighting problem in KR_multiclass_loss.
  • changed behavior of FrobeniusDense in the multi-class setup: using FrobeniusDense with 10 output neurons is now equivalent to stacking 10 FrobeniusDense layers with 1 output neuron each. The L2 normalization is performed on each neuron instead of the full weight matrix (see the illustrative sketch below).
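An illustrative NumPy sketch of the difference (not deel-lip code): the new behavior normalizes each output neuron's weight column independently, whereas the old behavior normalized the whole weight matrix at once:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(32, 10))  # (input_dim, n_output_neurons)

per_neuron = W / np.linalg.norm(W, axis=0, keepdims=True)  # new: one norm per neuron
whole_matrix = W / np.linalg.norm(W)                       # old: one global norm

print(np.linalg.norm(per_neuron, axis=0))  # each column has unit L2 norm
print(np.linalg.norm(whole_matrix))        # the full matrix has unit Frobenius norm
```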

v1.1.0

3 years ago

This version adds new features:

  • InvertibleDownSampling and InvertibleUpSampling
  • multiclass extension of the HKR loss

It also contains fixes for:

  • bug with L2NormPooling
  • bug with vanilla_export
  • bug with the tf.function annotation causing an incorrect Lipschitz constant in Sequential (for constants other than 1).

Breaking changes:

  • the true_values parameter has been removed from the binary HKR loss, since both (1, -1) and (1, 0) label encodings are now handled automatically.

v1.0.2

3 years ago

Features

  • TensorFlow 2.3 support.

v1.0.1

3 years ago

Features

  • Improvements for Björck initializers.
  • Stride handling in convolutional layers.

Bug fixes

  • Fixed a bug with ScaledL2NormPooling that caused NaN values to appear after the first training step.

v1.0.0

4 years ago

Controlling the Lipschitz constant of a layer or a whole neural network has many applications ranging from adversarial robustness to Wasserstein distance estimation.

This library provides an implementation of k-Lipschitz layers for Keras.