An educational implementation of topology optimization that uses automatic differentiation (AD) in modern ML frameworks (JAX and PyTorch) for sensitivity analysis. It includes examples of writing custom AD rules for linear solvers and root finders via the Implicit Function Theorem.
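For instance, differentiating through a linear solve can be done with a custom reverse-mode rule derived from the implicit function theorem, instead of unrolling the solver's iterations. The following is a minimal JAX sketch of this pattern; the name `linear_solve` and the dense `jnp.linalg.solve` calls are illustrative and not the package's actual API:

```python
import jax
import jax.numpy as jnp

@jax.custom_vjp
def linear_solve(A, b):
    """Solve A x = b; gradients come from the custom rule below."""
    return jnp.linalg.solve(A, b)

def _fwd(A, b):
    x = jnp.linalg.solve(A, b)
    return x, (A, x)  # residuals saved for the backward pass

def _bwd(res, x_bar):
    A, x = res
    # Implicit function theorem on r(x; A, b) = A x - b = 0:
    # one adjoint solve with A^T replaces differentiating the solver.
    lam = jnp.linalg.solve(A.T, x_bar)
    return -jnp.outer(lam, x), lam  # cotangents (A_bar, b_bar)

linear_solve.defvjp(_fwd, _bwd)

# Usage: gradient of a scalar loss with respect to the right-hand side.
A = jnp.array([[4.0, 1.0], [1.0, 3.0]])
b = jnp.array([1.0, 2.0])
g = jax.grad(lambda b: jnp.sum(linear_solve(A, b)))(b)
```

The same recipe extends to root finders: differentiate the residual at the converged solution rather than through the iterations.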
```
ADTO/
├── src/adto/                 # Main package
│   ├── __init__.py
│   ├── nn_models.py          # Neural network architectures
│   ├── non_ad_ops.py         # Non-AD operations
│   ├── utils.py              # Utility functions
│   └── backends/
│       ├── interface.py      # Backend interface
│       ├── jax_backend.py    # JAX implementation
│       ├── torch_backend.py  # PyTorch implementation
│       └── ad_backend.py     # AD utilities
├── examples/                 # Jupyter notebooks
│   ├── TO.ipynb              # Standard TO (OC method)
│   └── neuralTO.ipynb        # Neural TO
└── pyproject.toml            # Package configuration
```
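TO.ipynb implements the classic Optimality Criteria (OC) update for the design densities. For orientation, here is a minimal NumPy sketch of that update, modeled on the well-known 88/99-line topology optimization codes; the function name, signature, and defaults are illustrative and may differ from the notebook:

```python
import numpy as np

def oc_update(x, dc, dv, volfrac, move=0.2):
    """One OC density update: bisect the Lagrange multiplier of the
    volume constraint until the updated design satisfies it."""
    l1, l2 = 1e-9, 1e9
    while (l2 - l1) / (l1 + l2) > 1e-3:
        lmid = 0.5 * (l1 + l2)
        # Fixed-point step x * sqrt(B) with B = -dc / (lmid * dv),
        # clipped to the move limits and the box [0, 1].
        x_new = np.clip(x * np.sqrt(-dc / (lmid * dv)),
                        np.maximum(x - move, 0.0),
                        np.minimum(x + move, 1.0))
        if x_new.mean() > volfrac:  # assumes uniform element volumes
            l1 = lmid
        else:
            l2 = lmid
    return x_new
```

Here `dc` is the (negative) compliance sensitivity and `dv` the volume sensitivity per element.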
The easiest way to get started is Google Colab; no installation is needed:

- Open any notebook in Colab.
- Select your backend (JAX or PyTorch) and run the cells.

To run the code locally instead:

- Clone the repository:

  ```bash
  git clone https://github.com/SNMS95/ADTO.git
  cd ADTO
  ```

- Create a conda environment:

  ```bash
  conda create -n adto_env python=3.12
  conda activate adto_env
  ```

- Install the package and dependencies:

  ```bash
  pip install -e .
  ```

- Install a backend (choose one), e.g. `pip install jax` or `pip install torch`.

- For notebook support:

  ```bash
  conda install jupyter ipykernel
  ```
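After installing, you can optionally sanity-check which backends are importable; this small snippet is illustrative and not part of the package:

```python
import importlib.util

# Report which AD backends are available in the current environment.
for name in ("jax", "torch"):
    found = importlib.util.find_spec(name) is not None
    print(f"{name}: {'installed' if found else 'missing'}")
```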
Run any notebook in the examples/ directory:

```bash
jupyter notebook examples/neuralTO.ipynb
```

Select your backend (JAX or PyTorch) within the notebook and execute the cells sequentially.
If you use this code, please cite the accompanying article in Structural and Multidisciplinary Optimization.