Merged

Changes from all commits (79 commits)
46d2880
Move filesystems and version_check to core
coreyjadams Nov 3, 2025
c6d04ad
Fix version check tests
coreyjadams Nov 3, 2025
6f36f03
Reorganize distributed, domain_parallel, and begin nn / utils cleanup.
coreyjadams Nov 3, 2025
7824091
Move modules and meta to core. Move registry to core.
coreyjadams Nov 3, 2025
f753573
Add missing init files
coreyjadams Nov 3, 2025
2ef835e
Update build system and specify some deps.
coreyjadams Nov 3, 2025
1603067
Merge branch 'main' into refactor
coreyjadams Nov 3, 2025
1e8df52
Reorganize tests.
coreyjadams Nov 3, 2025
2e1195c
Update init files
coreyjadams Nov 3, 2025
a698685
Clean up neighbor tools.
coreyjadams Nov 3, 2025
258d988
Update testing
coreyjadams Nov 3, 2025
0638b97
Fix compat tests
coreyjadams Nov 3, 2025
b6327cb
Move core model tests to tests/core/
coreyjadams Nov 3, 2025
3ce049a
Add import lint config
coreyjadams Nov 3, 2025
95fa450
Relocate layers
coreyjadams Nov 3, 2025
ba6813d
Move graphcast utils into model directory
coreyjadams Nov 3, 2025
3f10463
Relocating util functionalities.
coreyjadams Nov 4, 2025
339b484
Further clean up and organize tests.
coreyjadams Nov 5, 2025
18df402
Merge branch 'NVIDIA:main' into refactor
coreyjadams Nov 5, 2025
d6946d9
utils tests are passing now
coreyjadams Nov 5, 2025
66f8d15
Cleaning up distributed tests
coreyjadams Nov 5, 2025
2ee76db
Patching tests working again in nn
coreyjadams Nov 5, 2025
33d525d
Fix sdf test
coreyjadams Nov 5, 2025
a06ad0a
Fix zenith angle tests
coreyjadams Nov 5, 2025
4c845cc
Some organization of tests. Checkpoints is moved into utils.
coreyjadams Nov 5, 2025
3bb64f4
Remove launch.utils and launch.config. Checkpointing is moved to
coreyjadams Nov 5, 2025
4aa332e
Most nn tests are passing
coreyjadams Nov 5, 2025
45686cc
Further cleanup. Getting there!
coreyjadams Nov 5, 2025
bbc54f6
Remove constants file
coreyjadams Nov 5, 2025
8453fea
Add import linting to pre-commit.
coreyjadams Nov 5, 2025
f850488
Merge branch 'main' into refactor
coreyjadams Nov 5, 2025
a6a083a
Update crash readme (#1212)
mnabian Nov 6, 2025
cdd0f84
Bump multi-storage-client to v0.33.0 with rust client (#1156)
dreamtalen Nov 6, 2025
337c91e
Merge branch 'v2.0-refactor' into refactor
coreyjadams Nov 7, 2025
4583c42
Move gnn layers and start to fix several model tests.
coreyjadams Nov 7, 2025
e326d4a
AFNO is now passing.
coreyjadams Nov 7, 2025
b95097d
Rnn models passing.
coreyjadams Nov 7, 2025
d8bc6f9
Fix improt
coreyjadams Nov 7, 2025
314f1b2
Healpix tests are working
coreyjadams Nov 7, 2025
9c7d287
Domino and unet working
coreyjadams Nov 7, 2025
bf85887
Add jaxtyping to requirements.txt for crash sample (#1218)
mnabian Nov 8, 2025
afa903f
Updating to address some test issues
coreyjadams Nov 10, 2025
91ceb0a
Merge branch 'v2.0-refactor' into refactor
coreyjadams Nov 10, 2025
f9130a6
Merge branch 'main' into v2.0-refactor
coreyjadams Nov 10, 2025
ceb1eb8
Merge branch 'main' into refactor
coreyjadams Nov 10, 2025
a228f62
Replace 'License' link with 'Dev blog' link (#1215)
ram-cherukuri Nov 10, 2025
0592d80
MGN tests passing again
coreyjadams Nov 10, 2025
857b3db
Most graphcast tests passing again
coreyjadams Nov 10, 2025
f89a2fb
Move nd conv layers.
coreyjadams Nov 10, 2025
409200d
update fengwu and pangu
coreyjadams Nov 10, 2025
14b51fd
Update sfno and pix2pix test
coreyjadams Nov 10, 2025
27fd304
update tests for figconvnet, swinrnn, superresnet
coreyjadams Nov 10, 2025
0d22d11
updating more models to pass
coreyjadams Nov 10, 2025
60ba0ce
Update distributed tests, now passing.
coreyjadams Nov 10, 2025
f8fd198
Validation fu added to examples/structural_mechanics/crash/train.py (…
dakhare-creator Nov 10, 2025
7ec2251
Domain parallel tests now passing.
coreyjadams Nov 11, 2025
d9fe7a4
Merge branch 'v2.0-refactor' into refactor
coreyjadams Nov 12, 2025
af9e359
Fix active learning imports so tests pass in refactor
coreyjadams Nov 12, 2025
e3b7849
Fix some metric imports
coreyjadams Nov 12, 2025
b1f2ef9
Remove deploy package
coreyjadams Nov 12, 2025
f46ff8c
Remove unused test file
coreyjadams Nov 12, 2025
edd2224
unmigrate these files ... again?
coreyjadams Nov 12, 2025
1c769e3
Update import linter.
coreyjadams Nov 12, 2025
8d8255a
Merge branch 'main' into refactor
coreyjadams Nov 12, 2025
059fe5d
Add saikrishnanc-nv to github actors (#1225)
saikrishnanc-nv Nov 12, 2025
8252271
Integrate Curator instructions to the Crash example (#1213)
saikrishnanc-nv Nov 12, 2025
adc6602
Adding code of conduct (#1214)
ram-cherukuri Nov 12, 2025
8b266b0
Cleaning up diffusion models. Not quite done yet.
coreyjadams Nov 12, 2025
8a8a05a
Merge branch 'main' into refactor
coreyjadams Nov 12, 2025
9b0d40d
Merge branch 'v2.0-refactor' into refactor
coreyjadams Nov 13, 2025
ff0aacf
Restore deleted files
coreyjadams Nov 13, 2025
f11fcd7
Updating more tests.
coreyjadams Nov 13, 2025
1a52284
Fixed minor bug in shape validation in SongUNet (#1230)
CharlelieLrt Nov 14, 2025
7277097
Add Zarr reader for Crash (#1228)
saikrishnanc-nv Nov 14, 2025
9e32712
Further updates to tests. Datapipes almost working.
coreyjadams Nov 14, 2025
0b78d6c
Merge branch 'NVIDIA:main' into refactor
coreyjadams Nov 17, 2025
ac1fcef
update import paths
coreyjadams Nov 17, 2025
d81ee43
Starting to clean up dependency tree.
coreyjadams Nov 18, 2025
dff27b3
Merge branch 'v2.0-refactor' into refactor
coreyjadams Nov 18, 2025
91 changes: 58 additions & 33 deletions .importlinter
@@ -1,75 +1,100 @@
[importlinter]
root_package = physicsnemo
include_external_packages = True
contract_types =
    forbidden_import: prevent_untracked_imports.ForbiddenImportContract

[importlinter:contract:physicsnemo-modules]
name = Prevent Upward Imports in the PhysicsNemo Structure
type = layers
containers=
    physicsnemo
layers =
    experimental
    active_learning
    models : registry : datapipes : metrics : domain_parallel
    nn
    utils
    distributed
    core

[importlinter:contract:physicsnemo-core]
name = Control Dependencies in PhysicsNeMo core
type = layers
containers=
    physicsnemo.core
layers =
    module : registry
    meta
    warnings | version_check | filesystem

[importlinter:contract:physicsnemo-distributed]
name = Control Dependencies in PhysicsNeMo distributed
type = layers
containers=
    physicsnemo.distributed
layers =
    fft | autograd
    mappings
    utils
    manager
    config

[importlinter:contract:physicsnemo-utils]
name = Control Dependencies in PhysicsNeMo utils
type = layers
containers=
    physicsnemo.utils
layers =
    mesh | insolation | zenith_angle
    profiling
    checkpoint
    capture
    logging | memory

[importlinter:contract:physicsnemo-nn]
name = Control Dependencies in PhysicsNeMo nn
type = layers
containers=
    physicsnemo.nn
layers =
    fourier_layers | transformer_layers
    dgm_layers | mlp_layers | fully_connected_layers | gnn_layers
    activations | attention_layers | ball_query | conv_layers | drop | fft | fused_silu | interpolation | kan_layers | resample_layers | sdf | siren_layers | spectral_layers | transformer_decoder | weight_fact | weight_norm
    neighbors
    utils

[importlinter:contract:physicsnemo-models]
name = Prevent Imports between physicsnemo models
type = layers
containers=
    physicsnemo.models
layers =
    mesh_reduced
    afno | dlwp | dlwp_healpix | domino | dpot | fengwu | figconvnet | fno | graphcast | meshgraphnet | pangu | pix2pix | rnn | srrn | swinvrnn | topodiff | transolver | vfgn
    unet | diffusion | dlwp_healpix_layers

[importlinter:contract:physicsnemo-core-external-imports]
name = Prevent Non-listed external imports in physicsnemo core
type = forbidden_import
container = physicsnemo.core
dependency_group = core

[importlinter:contract:physicsnemo-distributed-external-imports]
name = Prevent Non-listed external imports in physicsnemo distributed
type = forbidden_import
container = physicsnemo.distributed
dependency_group = distributed

[importlinter:contract:physicsnemo-utils-external-imports]
name = Prevent Non-listed external imports in physicsnemo utils
type = forbidden_import
container = physicsnemo.utils
dependency_group = utils

[importlinter:contract:physicsnemo-nn-external-imports]
name = Prevent Non-listed external imports in physicsnemo nn
type = forbidden_import
container = physicsnemo.nn
dependency_group = nn
121 changes: 104 additions & 17 deletions examples/structural_mechanics/crash/README.md
@@ -36,7 +36,7 @@ For an in-depth comparison between the Transolver and MeshGraphNet models and th
```yaml
# conf/config.yaml
defaults:
  - reader: vtp # vtp, zarr, d3plot, or your custom reader
  - datapipe: point_cloud # or graph
  - model: transolver_time_conditional # or an MGN variant
  - training: default
@@ -47,7 +47,7 @@ defaults:
2) Point to your datasets and core training knobs.

- `conf/training/default.yaml`:
  - `raw_data_dir`: path to TRAIN runs (folder of run folders for d3plot, folder of .vtp files for VTP, or folder of .zarr stores for Zarr)
  - `num_time_steps`: number of frames to use per run
  - `num_training_samples`: how many runs to load

@@ -77,6 +77,7 @@ features: [thickness] # or [] for no features; preserve order if adding more
4) Reader‑specific options (optional).

- d3plot: `conf/reader/d3plot.yaml` → `wall_node_disp_threshold`
- VTP and Zarr readers have no additional options (they read pre-processed data)

5) Model config: ensure input dimensions match your features.

@@ -127,26 +128,38 @@ This will install:
[PhysicsNeMo-Curator](https://github.com/NVIDIA/physicsnemo-curator).
Using `PhysicsNeMo-Curator`, crash simulation data from LS-DYNA can easily be processed into training-ready formats.

PhysicsNeMo-Curator can preprocess d3plot files into **VTP** (for visualization and smaller datasets) or **Zarr** (for large-scale ML training).

### Quick Start

Install PhysicsNeMo-Curator following
[these instructions](https://github.com/NVIDIA/physicsnemo-curator?tab=readme-ov-file#installation-and-usage).

Process your LS-DYNA data to **VTP format**:

```bash
export PYTHONPATH=$PYTHONPATH:examples &&
physicsnemo-curator-etl \
    --config-dir=examples/structural_mechanics/crash/config \
    --config-name=crash_etl \
    serialization_format=vtp \
    etl.source.input_dir=/data/crash_sims/ \
    serialization_format.sink.output_dir=/data/crash_vtp/ \
    etl.processing.num_processes=4
```

Or process to **Zarr format** for large-scale training:

```bash
export PYTHONPATH=$PYTHONPATH:examples &&
physicsnemo-curator-etl \
    --config-dir=examples/structural_mechanics/crash/config \
    --config-name=crash_etl \
    serialization_format=zarr \
    etl.source.input_dir=/data/crash_sims/ \
    serialization_format.sink.output_dir=/data/crash_zarr/ \
    etl.processing.num_processes=4
```

### Input Data Structure

@@ -165,7 +178,7 @@ crash_sims/

### Output Formats

#### VTP Format

Produces a single VTP file per run, with all timesteps stored as displacement fields:

@@ -179,10 +192,33 @@ crash_processed_vtp/
Each VTP contains:
- Reference coordinates at t=0
- Displacement fields: `displacement_t0.000`, `displacement_t0.005`, etc.
- Node thickness and other point data features

This format is directly compatible with the VTP reader in this example.

#### Zarr Format

Produces one Zarr store per run with pre-computed graph structure:

```
crash_processed_zarr/
├── Run100.zarr/
│ ├── mesh_pos # (timesteps, nodes, 3) - temporal positions
│ ├── thickness # (nodes,) - node features
│ └── edges # (num_edges, 2) - pre-computed graph connectivity
├── Run101.zarr/
└── ...
```

Each Zarr store contains:
- `mesh_pos`: Full temporal trajectory (no displacement reconstruction needed)
- `thickness`: Per-node features
- `edges`: Pre-computed edge connectivity (no edge rebuilding during training)

**NOTE:** All heavy preprocessing (node filtering, edge building, thickness computation) is done once during curation using PhysicsNeMo-Curator. The reader simply loads pre-computed arrays.

This format is directly compatible with the Zarr reader in this example.
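
As a quick sanity check, a curated store can be opened directly with the `zarr` Python package. A minimal sketch, assuming the array names listed above (the store path is a placeholder):

```python
import zarr

# Open one run's store read-only; the path is illustrative.
store = zarr.open("crash_processed_zarr/Run100.zarr", mode="r")

mesh_pos = store["mesh_pos"][:]    # (timesteps, nodes, 3)
thickness = store["thickness"][:]  # (nodes,)
edges = store["edges"][:]          # (num_edges, 2)

print(mesh_pos.shape, thickness.shape, edges.shape)
```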

## Training

Training is managed via Hydra configurations located in conf/.
@@ -277,14 +313,15 @@ If you use the graph datapipe, the edge list is produced by walking the filtered

### Built‑in VTP reader (PolyData)

A lightweight VTP reader is provided in `vtp_reader.py`. It treats each `.vtp` file in a directory as a separate run and expects point displacements to be stored as vector arrays in `poly.point_data` with names like `displacement_t0.000`, `displacement_t0.005`, … (a more permissive fallback of any `displacement_t*` is also supported). The reader:

- loads the reference coordinates from `poly.points`
- builds absolute positions per timestep as `[t0: coords, t>0: coords + displacement_t]`
- extracts cell connectivity from the PolyData faces and converts it to unique edges
- extracts all point data fields dynamically (e.g., thickness, modulus)
- returns `(srcs, dsts, point_data)` where `point_data` contains `'coords': [T, N, 3]` and all feature arrays

The VTP reader dynamically extracts all non-displacement point data fields from the VTP file and makes them available to the datapipe. If your `.vtp` files include additional per‑point arrays (e.g., thickness or modulus), simply add their names to the `features` list in your datapipe config.

Example Hydra configuration for the VTP reader:

@@ -304,12 +341,58 @@

```yaml
defaults:
  - reader: vtp
```

And configure features in `conf/datapipe/point_cloud.yaml` or `conf/datapipe/graph.yaml`:

```yaml
features: [thickness] # or [] for no features
```

### Built‑in Zarr reader

A Zarr reader is provided in `zarr_reader.py`. It reads pre-processed Zarr stores created by PhysicsNeMo-Curator, where all heavy computation (node filtering, edge building, thickness computation) has already been done during the ETL pipeline. The reader:

- loads pre-computed temporal positions directly from `mesh_pos` (no displacement reconstruction)
- loads pre-computed edges (no connectivity-to-edge conversion needed)
- dynamically extracts all point data fields (thickness, etc.) from the Zarr store
- returns `(srcs, dsts, point_data)` similar to VTP reader

Data layout expected by Zarr reader:
- `<DATA_DIR>/*.zarr/` (each `.zarr` directory is treated as one run)
- Each Zarr store must contain:
  - `mesh_pos`: `[T, N, 3]` temporal positions
  - `edges`: `[E, 2]` pre-computed edge connectivity
  - Feature arrays (e.g., `thickness`): `[N]` or `[N, K]` per-node features

Example Hydra configuration for the Zarr reader:

```yaml
# conf/reader/zarr.yaml
_target_: zarr_reader.Reader
```

Select it in `conf/config.yaml`:

```yaml
defaults:
  - reader: zarr # Options are: vtp, d3plot, zarr
  - datapipe: point_cloud # will be overridden by model configs
  - model: transolver_autoregressive_rollout_training
  - training: default
  - inference: default
  - _self_
```

And configure features in `conf/datapipe/graph.yaml`:

```yaml
features: [thickness] # Must match fields stored in Zarr
```

**Recommended workflow:**
1. Use PhysicsNeMo-Curator to preprocess d3plot → VTP or Zarr once
2. Use corresponding reader for all training/validation
3. Optionally use d3plot reader for quick prototyping on raw data

### Data layout expected by readers

- d3plot reader (`d3plot_reader.py`):
@@ -320,6 +403,10 @@
- VTP reader (`vtp_reader.py`):
  - `<DATA_DIR>/*.vtp` (each `.vtp` is treated as one run)
  - Displacements stored as 3‑component arrays in point_data with names like `displacement_t0.000`, `displacement_t0.005`, ... (fallback accepts any `displacement_t*`).

- Zarr reader (`zarr_reader.py`):
  - `<DATA_DIR>/*.zarr/` (each `.zarr` directory is treated as one run)
  - Contains pre-computed `mesh_pos`, `edges`, and feature arrays

### Write your own reader

To write your own reader, implement a Hydra‑instantiable function or class whose call returns a three‑tuple `(srcs, dsts, point_data)`. The first two entries are lists of integer arrays describing edges per run (they can be empty lists if you are not producing a graph), and `point_data` is a list of Python dicts with one dict per run. Each dict must contain `'coords'` as a `[T, N, 3]` array and one array per feature name listed in `conf/datapipe/*.yaml` under `features`. Feature arrays can be `[N]` or `[N, K]` and should use the same node indexing as `'coords'`. For convenience, a simple class reader can accept the Hydra `split` argument (e.g., "train" or "test") and decide whether to save VTP frames, but this is optional.
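
As a starting point, a minimal class-based skeleton satisfying that contract might look like the sketch below; the `.npz` file layout and the `positions`/`thickness`/`edges` key names are illustrative assumptions, not part of this example:

```python
# my_reader.py: hypothetical reader; only the returned three-tuple is prescribed.
from pathlib import Path

import numpy as np


class Reader:
    def __init__(self, data_dir: str):
        self.data_dir = Path(data_dir)

    def __call__(self, split: str = "train"):
        srcs, dsts, point_data = [], [], []
        for run_file in sorted(self.data_dir.glob("*.npz")):
            run = np.load(run_file)
            record = {"coords": run["positions"]}   # [T, N, 3] absolute positions
            record["thickness"] = run["thickness"]  # [N]; key must match `features`
            edges = run["edges"]                    # [E, 2] integer edge list
            srcs.append(edges[:, 0])  # srcs/dsts may stay empty for point clouds
            dsts.append(edges[:, 1])
            point_data.append(record)
        return srcs, dsts, point_data
```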
2 changes: 1 addition & 1 deletion examples/structural_mechanics/crash/conf/config.yaml
@@ -25,7 +25,7 @@ experiment_desc: "unified training recipe for crash models"
run_desc: "unified training recipe for crash models"

defaults:
  - reader: vtp # Options are: vtp, d3plot, zarr
  - datapipe: point_cloud # will be overridden by model configs
  - model: transolver_autoregressive_rollout_training
  - training: default
- training: default
18 changes: 18 additions & 0 deletions examples/structural_mechanics/crash/conf/reader/zarr.yaml
@@ -0,0 +1,18 @@
# SPDX-FileCopyrightText: Copyright (c) 2023 - 2025 NVIDIA CORPORATION & AFFILIATES.
# SPDX-FileCopyrightText: All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

_target_: zarr_reader.Reader
_convert_: all
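
For context, Hydra resolves `_target_` to the `Reader` class in `zarr_reader.py` at instantiation time. A minimal sketch of the equivalent call, assuming `zarr_reader.py` is importable and using an illustrative config path:

```python
from hydra.utils import instantiate
from omegaconf import OmegaConf

cfg = OmegaConf.load("conf/reader/zarr.yaml")  # illustrative path
reader = instantiate(cfg)          # resolves _target_ -> zarr_reader.Reader
srcs, dsts, point_data = reader()  # the three-tuple reader contract
```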