# InvertibleNetworks.jl
InvertibleNetworks.jl provides memory-efficient building blocks for invertible neural networks with hand-derived gradients, Jacobians, and log-determinants. The package is designed for high-performance scientific computing and machine learning applications.
## Features

- Memory Efficient: Hand-derived gradients, Jacobians, and log-determinants (log |det J|) keep memory usage low
- Flux Integration: Seamless integration with Flux.jl for automatic differentiation
- AD Support: Support for Zygote and ChainRules automatic differentiation
- GPU Support: Nvidia GPU support via CuArray
- Comprehensive Examples: Various examples of invertible neural networks, normalizing flows, variational inference, and uncertainty quantification
## Installation

In the Julia REPL:

```julia
] add InvertibleNetworks
```

Or via the package manager API:

```julia
using Pkg
Pkg.develop("InvertibleNetworks")
```

To run the test suite:

```julia
using Pkg
Pkg.test("InvertibleNetworks")
```

## Quick Start

```julia
using InvertibleNetworks, Flux, LinearAlgebra

# Create a simple activation normalization layer
an = ActNorm(10; logdet=true)
# Forward pass
X = randn(Float32, 64, 64, 10, 4)
Y, logdet = an.forward(X)
# Inverse pass
X_reconstructed = an.inverse(Y)
# Test invertibility
@assert norm(X - X_reconstructed) / norm(X) < 1f-6
```
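Gradients flow through the same layer via its hand-derived backward rule, which recomputes the input from the output instead of storing activations. A minimal sketch continuing the example above (the squared-norm loss and the learning rate are illustrative, not part of the API):

```julia
# Gradient of a toy loss 0.5 * ||Y||^2 with respect to the output Y
ΔY = copy(Y)

# The hand-derived backward pass returns the input gradient and
# recomputes X from Y, so activations need not be kept in memory
ΔX, X_ = an.backward(ΔY, Y)

# Parameter gradients accumulate on the layer's parameters
for p in get_params(an)
    p.data .-= 1f-3 .* p.grad   # plain gradient-descent update
end
clear_grad!(an)                  # reset gradients for the next iteration
```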
## GPU Support

```julia
using InvertibleNetworks, Flux
# (with CUDA.jl installed, `gpu` moves arrays and layers to an NVIDIA device)

# Move data to GPU
X = randn(Float32, 64, 64, 10, 4) |> gpu
AN = ActNorm(10; logdet=true) |> gpu
# Forward pass on GPU
Y, logdet = AN.forward(X)
```

## Building Blocks

- ActNorm: Activation normalization (Kingma and Dhariwal, 2018) (example)
- Conv1x1: 1x1 Convolutions using Householder transformations (example)
- ResidualBlock: Invertible residual blocks (example)
- CouplingLayerGlow: Invertible coupling layer from Dinh et al. (2017) (example; see also the sketch after this list)
- CouplingLayerHINT: Invertible recursive coupling layer HINT from Kruse et al. (2020) (example)
- CouplingLayerHyperbolic: Invertible hyperbolic layer from Lensink et al. (2019) (example)
- CouplingLayerIRIM: Invertible coupling layer from Putzky and Welling (2019) (example)
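All coupling layers above share the same `forward`/`inverse`/`backward` interface, so invertibility can be checked the same way for each. A small sketch for CouplingLayerGlow (the channel count and hidden width are illustrative, and the two-argument constructor form is an assumption):

```julia
using InvertibleNetworks, LinearAlgebra

# Coupling layer on 4 channels; the channel dimension is split in half internally
CL = CouplingLayerGlow(4, 16; logdet=true)

X = randn(Float32, 32, 32, 4, 2)
Y, logdet = CL.forward(X)

# The input is recovered from the output alone (up to Float32 round-off)
@assert norm(X - CL.inverse(Y)) / norm(X) < 1f-3
```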
Activation functions:

- ReLU: Rectified Linear Unit
- LeakyReLU: Leaky Rectified Linear Unit
- Sigmoid: Sigmoid activation with optional scaling
- Sigmoid2: Modified sigmoid activation
- GaLU: Gated Linear Unit
- ExpClamp: Exponential with clamping
Additional utilities:

- Jacobian Computation: Hand-derived Jacobians for memory efficiency
- Dimensionality Manipulation: squeeze/unsqueeze (column, patch, checkerboard), split/cat (see the sketch after this list)
- Wavelet Transform: squeeze/unsqueeze using the wavelet transform
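The squeeze/unsqueeze utilities trade spatial resolution for channels, which multi-scale networks rely on, and `unsqueeze` exactly reverses `squeeze`. A short sketch (assuming the keyword-style `pattern` argument):

```julia
using InvertibleNetworks

X = randn(Float32, 64, 64, 2, 4)

# Reshuffle pixels into channels: 64x64x2 -> 32x32x8
Xs = squeeze(X; pattern="checkerboard")
@assert size(Xs) == (32, 32, 8, 4)

# Pure reindexing, so the round trip is exact
@assert unsqueeze(Xs; pattern="checkerboard") == X
```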
## Networks

- NetworkGlow: Generative flow with invertible 1x1 convolutions (generic example, source; see the training sketch after this list)
- NetworkHINT: Multi-scale HINT networks
- NetworkHyperbolic: Hyperbolic networks
- NetworkIRIM: Invertible recurrent inference machines (Putzky and Welling, 2019) (generic example)
- NetworkConditionalGlow: Conditional Glow networks
- NetworkConditionalHINT: Conditional HINT networks
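As referenced in the NetworkGlow item, a maximum-likelihood training step follows the same forward/backward pattern as individual layers. A sketch (the architecture sizes, Gaussian-likelihood loss, and learning rate are all illustrative):

```julia
using InvertibleNetworks, LinearAlgebra

# Glow: 2 input channels, 32 hidden channels, 2 scales, 4 flow steps per scale
G = NetworkGlow(2, 32, 2, 4)
X = randn(Float32, 32, 32, 2, 10)   # toy training batch

# Map to latent space; minimize 0.5||Z||^2 - logdet (negative log-likelihood)
ZX, logdet = G.forward(X)
f = 0.5f0 * norm(ZX)^2 / size(X, 4) - logdet

# Hand-derived backward pass populates the parameter gradients
G.backward(ZX ./ size(X, 4), ZX)

for p in get_params(G)
    p.data .-= 1f-4 .* p.grad   # gradient-descent update
end
clear_grad!(G)
```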
## Simulation-Based Inference

Thanks to its favorable memory scaling, InvertibleNetworks.jl has been particularly successful at Bayesian posterior sampling with simulation-based inference. To get started with this application, see the conditional sampling example for MNIST inpainting in `examples/applications/conditional_sampling/amortized_glow_mnist_inpainting.jl`. Feel free to adapt that script to your own application, and please reach out to us for help.
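In heavily abbreviated form, the script trains a conditional flow on (sample, condition) pairs and then draws posterior samples by resampling the latent variable while keeping the condition fixed. The sketch below assumes the conditional constructor, the `split_scales` keyword, and the `forward(X, Y)`/`inverse(ZX, ZY)` signatures; defer to the example file for the authoritative version:

```julia
using InvertibleNetworks

# Conditional Glow: 1 target channel, 1 condition channel, 32 hidden channels,
# 2 scales, 4 flow steps per scale (all sizes illustrative)
G = NetworkConditionalGlow(1, 1, 32, 2, 4; split_scales=true)

X = randn(Float32, 16, 16, 1, 8)   # stand-in for training images
Y = randn(Float32, 16, 16, 1, 8)   # stand-in for observed (masked) images

# Joint forward pass of samples and conditions
ZX, ZY, logdet = G.forward(X, Y)

# Posterior sampling: fresh latent noise, same condition
X_post = G.inverse(randn(Float32, size(ZX)...), ZY)
```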
## Further Examples

- Invertible recurrent inference machines (Putzky and Welling, 2019) (generic example)
- Generative models with maximum likelihood via the change of variable formula (example)
- Glow: Generative flow with invertible 1x1 convolutions (Kingma and Dhariwal, 2018) (generic example, source)
## Documentation

- API Documentation: Stable | Development
- Examples: see the `examples/` directory for comprehensive usage examples
- Tests: the `test/` directory contains extensive unit tests
## Contributing

We welcome contributions! Please see CONTRIBUTING.md for guidelines.
## Citation

If you use InvertibleNetworks.jl in your research, please cite:
```bibtex
@article{Orozco2024,
    doi = {10.21105/joss.06554}, 
    url = {https://doi.org/10.21105/joss.06554}, 
    year = {2024}, 
    publisher = {The Open Journal}, 
    volume = {9}, 
    number = {99}, 
    pages = {6554}, 
    author = {Rafael Orozco and Philipp Witte and Mathias Louboutin and Ali Siahkoohi and Gabrio Rizzuti and Bas Peters and Felix J. Herrmann}, 
    title = {InvertibleNetworks.jl: A Julia package for scalable normalizing flows}, 
    journal = {Journal of Open Source Software} 
}
```

## Related Publications

The following publications use InvertibleNetworks.jl:
- Reliable amortized variational inference with physics-based latent distribution correction
  - Paper: https://arxiv.org/abs/2207.11640
  - Code: ReliableAVI.jl
- Learning by example: fast reliability-aware seismic imaging with normalizing flows
- Enabling uncertainty quantification for seismic data pre-processing using normalizing flows
- Preconditioned training of normalizing flows for variational inference in inverse problems
- Parameterizing uncertainty by deep invertible networks, an application to reservoir characterization
## Authors

- Rafael Orozco - Georgia Institute of Technology [[email protected]]
- Philipp Witte - Georgia Institute of Technology (now Microsoft)
- Gabrio Rizzuti - Utrecht University
- Mathias Louboutin - Georgia Institute of Technology
- Ali Siahkoohi - Georgia Institute of Technology
## Acknowledgments

This package uses functions from NNlib.jl, Flux.jl, and Wavelets.jl.
## License

This project is licensed under the MIT License - see the LICENSE file for details.
