---
license: mit
datasets:
- hrishivish23/MPM-Verse-MaterialSim-Small
language:
- en
metrics:
- accuracy
pipeline_tag: graph-ml
tags:
- Physics
- scientific-ml
- lagrangian-dynamics
- neural-operator
- neural-operator-transformer
- graph-neural-networks
- graph-transformer
- sequence-to-sequence
- autoregressive
- temporal-dynamics
---
|
|
|
|
|
|
|
|
# 🚀 PhysicsEngine: Reduced-Order Neural Operators for Lagrangian Dynamics
|
|
|
|
|
**By [Hrishikesh Viswanath](https://huggingface.co/hrishivish23), Yue Chang, Julius Berner, Peter Yichen Chen, Aniket Bera** |
|
|
|
|
|
 |
|
|
|
|
|
--- |
|
|
|
|
|
## 📌 Model Overview
|
|
**GIOROM** is a **Reduced-Order Neural Operator Transformer** designed for **Lagrangian dynamics simulations on highly sparse graphs**. The model enables hybrid **Eulerian-Lagrangian learning** by: |
|
|
|
|
|
- **Projecting Lagrangian inputs onto uniform grids** with a **Graph-Interaction-Operator**. |
|
|
- **Predicting acceleration from sparse velocity inputs** using past time windows with a **Neural Operator Transformer**. |
|
|
- **Learning physics from sparse inputs (n ≪ N)** while allowing reconstruction at arbitrarily dense resolutions via an **Integral Transform Model**.
|
|
- **Dataset Compatibility**: This model is compatible with [`MPM-Verse-MaterialSim-Small/Sand3DNCLAWSmall_longer_duration`](https://huggingface.co/datasets/hrishivish23/MPM-Verse-MaterialSim-Small/tree/main/Sand3DNCLAWSmall_longer_duration).
|
|
|
|
|
⚠️ **Note:** While the model can infer using an integral transform, **this repository only provides weights for the time-stepper model that predicts acceleration.**
|
|
|
|
|
--- |
|
|
|
|
|
## 📦 Available Model Variants
|
|
Each variant corresponds to a specific dataset; the table shows the reduction in particle count (n: reduced-order, N: full-order).
|
|
|
|
|
| Model Name | n (Reduced) | N (Full) |
|---------------------------------|-------------|----------|
| `giorom-3d-t-sand3d-long` | 3.0K | 32K |
| `giorom-3d-t-water3d` | 1.7K | 55K |
| `giorom-3d-t-elasticity` | 2.6K | 78K |
| `giorom-3d-t-plasticine` | 1.1K | 5K |
| `giorom-2d-t-water` | 0.12K | 1K |
| `giorom-2d-t-sand` | 0.3K | 2K |
| `giorom-2d-t-jelly` | 0.2K | 1.9K |
| `giorom-2d-t-multimaterial` | 0.25K | 2K |
|
|
|
|
|
--- |
|
|
|
|
|
## 💡 How It Works
|
|
|
|
|
### 🔹 Input Representation
|
|
The model predicts **acceleration** from past velocity inputs: |
|
|
|
|
|
- **Input Shape:** `[n, D, W]` |
|
|
  - `n`: Number of particles (reduced-order, n ≪ N)
|
|
- `D`: Dimension (2D or 3D) |
|
|
- `W`: Time window (past velocity states) |
|
|
|
|
|
- **Projected to a uniform latent space** of size `[c^D, D]` where: |
|
|
  - `c ∈ {8, 16, 32}`
|
|
  - `n - δn ≤ c^D ≤ n + δn`
|
|
|
|
|
This allows the model to generalize physics across different resolutions and discretizations. |
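As a concrete sketch of the input layout (function and variable names here are illustrative, not the repository's API), the `[n, D, W]` window can be assembled from a short history of particle positions by finite-differencing into velocities:

```python
import torch

def build_velocity_window(positions: torch.Tensor, dt: float = 1.0) -> torch.Tensor:
    """positions: [W + 1, n, D] past particle positions.
    Returns past velocities stacked as [n, D, W]."""
    velocities = (positions[1:] - positions[:-1]) / dt  # [W, n, D]
    return velocities.permute(1, 2, 0)                  # [n, D, W]

# Example: n = 3000 reduced-order particles, D = 3, window W = 5
history = torch.randn(6, 3000, 3)
window = build_velocity_window(history)
print(window.shape)  # torch.Size([3000, 3, 5])
```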
|
|
|
|
|
### 🔹 Prediction & Reconstruction
|
|
- The model **learns physical dynamics** on the sparse input representation. |
|
|
- The **integral transform model** reconstructs dense outputs at arbitrary resolutions (not included in this repo). |
|
|
- Enables **highly efficient, scalable simulations** without requiring full-resolution training. |
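To build intuition for the reconstruction step (the trained integral transform model itself is not in this repo), here is a toy kernel-weighted interpolation from a sparse particle set to arbitrary dense query points; inverse-distance weighting is a stand-in for the learned kernel:

```python
import torch

def kernel_reconstruct(sparse_pos, sparse_vals, query_pos, eps=1e-8):
    """Toy stand-in for the integral transform: inverse-distance weighting.
    sparse_pos: [n, D], sparse_vals: [n, D], query_pos: [N, D] -> [N, D]."""
    d = torch.cdist(query_pos, sparse_pos)   # [N, n] pairwise distances
    w = 1.0 / (d + eps)                      # inverse-distance weights
    w = w / w.sum(dim=1, keepdim=True)       # normalize per query point
    return w @ sparse_vals                   # [N, D] interpolated field

sparse_pos = torch.rand(100, 3)   # n = 100 reduced-order particles
sparse_vel = torch.rand(100, 3)
dense_pos = torch.rand(5000, 3)   # N = 5000 full-order query points
dense_vel = kernel_reconstruct(sparse_pos, sparse_vel, dense_pos)
print(dense_vel.shape)  # torch.Size([5000, 3])
```

Because the weights are a convex combination, each reconstructed value stays within the range of the sparse field; the learned integral transform replaces this fixed kernel with a trained one.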
|
|
|
|
|
--- |
|
|
|
|
|
## 📖 Usage Guide
|
|
### 1️⃣ Install Dependencies
|
|
```bash
pip install transformers huggingface_hub torch
```

```bash
git clone https://github.com/HrishikeshVish/GIOROM/
cd GIOROM
```
|
|
|
|
|
|
|
|
### 2️⃣ Load a Model
|
|
|
|
|
|
|
|
|
|
|
```python
from models.giorom3d_T import PhysicsEngine
from models.config import TimeStepperConfig

repo_id = "hrishivish23/giorom-3d-t-sand3d-long"

# Load the pretrained config, then the time-stepper weights
time_stepper_config = TimeStepperConfig()
time_stepper_config = time_stepper_config.from_pretrained(repo_id)

simulator = PhysicsEngine(time_stepper_config)
simulator = simulator.from_pretrained(repo_id, config=time_stepper_config)
```
|
|
|
|
|
### 3️⃣ Run Inference

```python
import torch

# The loaded simulator predicts per-particle acceleration from a window of
# past velocities; see the GIOROM GitHub repository for the full rollout script.
```
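The time-stepper is autoregressive: each predicted acceleration is Euler-integrated and the updated velocity is fed back into the window. The rollout sketch below uses a dummy stepper in place of the loaded `simulator` (whose exact call signature, and the timestep `dt`, should be checked in the GIOROM repository):

```python
import torch

def rollout(stepper, positions, velocity_window, dt=0.0025, steps=10):
    """Autoregressive rollout sketch.
    positions: [n, D]; velocity_window: [n, D, W] past velocities."""
    trajectory = [positions]
    for _ in range(steps):
        accel = stepper(velocity_window)                  # [n, D]
        new_vel = velocity_window[:, :, -1] + dt * accel  # Euler step
        positions = positions + dt * new_vel
        trajectory.append(positions)
        velocity_window = torch.cat(                      # slide the window
            [velocity_window[:, :, 1:], new_vel.unsqueeze(-1)], dim=-1)
    return torch.stack(trajectory)                        # [steps + 1, n, D]

dummy_stepper = lambda vel: torch.zeros_like(vel[:, :, -1])  # stand-in model
traj = rollout(dummy_stepper, torch.rand(3000, 3), torch.zeros(3000, 3, 5))
print(traj.shape)  # torch.Size([11, 3000, 3])
```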
|
|
|
|
|
--- |
|
|
|
|
|
## 💾 Model Weights and Checkpoints
|
|
| Model Name | Model ID |
|---------------------------------|----------|
| `giorom-3d-t-sand3d-long` | [`hrishivish23/giorom-3d-t-sand3d-long`](https://huggingface.co/hrishivish23/giorom-3d-t-sand3d-long) |
| `giorom-3d-t-water3d` | [`hrishivish23/giorom-3d-t-water3d`](https://huggingface.co/hrishivish23/giorom-3d-t-water3d) |
|
|
|
|
|
|
|
|
--- |
|
|
|
|
|
## 📊 Training Details
|
|
### 🔧 Hyperparameters
|
|
- **Graph Interaction Operator** layers: **4** |
|
|
- **Transformer Heads**: **4** |
|
|
- **Embedding Dimension:** **128** |
|
|
- **Latent Grid Sizes:** `{8×8, 16×16, 32×32}`
|
|
- **Learning Rate:** `1e-4` |
|
|
- **Optimizer:** `Adamax` |
|
|
- **Loss Function:** `MSE + Physics Regularization (Loss computed on Euler integrated outputs)` |
|
|
- **Training Steps:** `1M+ steps` |
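The loss entry above ("MSE on Euler-integrated outputs") means the predicted acceleration is first integrated to positions before comparison with the ground truth. A hedged sketch (tensor names, shapes, and `dt` are illustrative, not the repository's exact training code):

```python
import torch
import torch.nn.functional as F

def euler_integrated_loss(pred_accel, last_vel, last_pos, target_pos, dt=0.0025):
    """MSE computed on Euler-integrated positions, not on raw acceleration.
    All tensors: [n, D]."""
    next_vel = last_vel + dt * pred_accel  # integrate acceleration -> velocity
    next_pos = last_pos + dt * next_vel    # integrate velocity -> position
    return F.mse_loss(next_pos, target_pos)

n, D = 1000, 3
loss = euler_integrated_loss(
    torch.randn(n, D), torch.randn(n, D), torch.randn(n, D), torch.randn(n, D))
print(loss.item())
```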
|
|
|
|
|
### 🖥️ Hardware
|
|
- **Trained on:** NVIDIA RTX 3050 |
|
|
- **Batch Size:** `2` |
|
|
|
|
|
--- |
|
|
|
|
|
## 📜 Citation
|
|
If you use this model, please cite: |
|
|
```bibtex
@article{viswanath2024reduced,
  title={Reduced-Order Neural Operators: Learning Lagrangian Dynamics on Highly Sparse Graphs},
  author={Viswanath, Hrishikesh and Chang, Yue and Berner, Julius and Chen, Peter Yichen and Bera, Aniket},
  journal={arXiv preprint arXiv:2407.03925},
  year={2024}
}
```
|
|
|
|
|
--- |
|
|
|
|
|
## 💬 Contact
|
|
For questions or collaborations: |
|
|
- 🧑‍💻 Author: [Hrishikesh Viswanath](https://hrishikeshvish.github.io)
|
|
- 📧 Email: [email protected]
|
|
- 💬 Hugging Face Discussion: [Model Page](https://huggingface.co/hrishivish23/giorom-3d-t-sand3d-long/discussions)
|
|
|
|
|
--- |
|
|
|
|
|
## 🔗 Related Work
|
|
- **Neural Operators for PDEs**: Fourier Neural Operators, Graph Neural Operators |
|
|
- **Lagrangian Methods**: Material Point Methods, SPH, NCLAW, CROM, LiCROM |
|
|
- **Physics-Based ML**: PINNs, GNS, MeshGraphNet |
|
|
|
|
|
--- |
|
|
|
|
|
### 🔹 Summary
|
|
This model is ideal for **fast and scalable physics simulations** where full-resolution computation is infeasible. The reduced-order approach allows **efficient learning on sparse inputs**, with the ability to **reconstruct dense outputs using an integral transform model (not included in this repo).** |
|
|
|