This directory contains scripts for preparing benchmark datasets and evaluating predicted 4D geometry / motion outputs.
- `eval.py`: Main evaluation entrypoint (runs on a single dataset directory).
- `metrics.py`: Metric implementations used by `eval.py`.
- `preprocess/`: Dataset-specific preprocessing scripts.
For each sample, the evaluator expects:
- Prediction `.npz` under the prediction root:
  - `point_map`: `[T, H, W, 3]`
  - `scene_flow`: `[T, H, W, 3]` (optional)
- Ground-truth `.hdf5` under the GT root:
  - `point_map`: `[T, H, W, 3]`
  - `valid_mask`: `[T, H, W]`
  - `camera_pose`: `[T, 4, 4]` (camera-to-world)
  - `scene_flow`: `[T, H, W, 3]` (optional)
  - `deform_mask`: `[T, H, W]` (optional)
- Metadata file under the GT root:
  - `filename_list.txt` (default)
  - `meta_infos.txt` when using `--use_normed_data`
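The per-sample layout can be sketched with toy zero-filled arrays (shapes and key names as listed above; `scene_0001` is a made-up sample name, and `numpy` and `h5py` are assumed to be installed):

```python
# Minimal sketch of one prediction/GT sample pair, with toy shapes.
import os
import tempfile

import h5py
import numpy as np

T, H, W = 4, 6, 8  # toy frame count and resolution
root = tempfile.mkdtemp()

# Prediction .npz: point_map is required, scene_flow is optional.
pred_path = os.path.join(root, "scene_0001.npz")
np.savez(
    pred_path,
    point_map=np.zeros((T, H, W, 3), dtype=np.float32),
    scene_flow=np.zeros((T, H, W, 3), dtype=np.float32),
)

# Ground-truth .hdf5 with the required datasets (optional ones omitted).
gt_path = os.path.join(root, "scene_0001.hdf5")
with h5py.File(gt_path, "w") as f:
    f.create_dataset("point_map", data=np.zeros((T, H, W, 3), dtype=np.float32))
    f.create_dataset("valid_mask", data=np.ones((T, H, W), dtype=bool))
    # camera-to-world poses, one 4x4 matrix per frame
    f.create_dataset("camera_pose", data=np.tile(np.eye(4, dtype=np.float32), (T, 1, 1)))

# Shape check mirroring what the evaluator expects per sample.
pred = np.load(pred_path)
with h5py.File(gt_path, "r") as f:
    pred_shape = pred["point_map"].shape
    gt_pose_shape = f["camera_pose"].shape
```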
```bash
python evaluation/eval.py \
    --gt_data_dir workspace/benchmark_datasets/Virtual_KITTI_2_video \
    --pred_data_dir workspace/benchmark_outputs/MotionCrafter/Virtual_KITTI_2_video \
    --use_normed_data \
    --is_pred_world_map
```

Useful flags:
- `--device {auto,cuda,cpu}`: Choose the runtime device.
- `--strict_missing`: Fail immediately on missing files.
- `--save_aligned_world`: Save aligned world-space predictions as `*_aligned_world.npz`.
- `--static_pose_for_flow`: Use the same pose for the flow transform (useful for some datasets).
- `--max_frames_no_flow` / `--max_frames_with_flow`: Frame caps for evaluation.
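To evaluate several benchmarks in a row, the invocation above can be assembled programmatically. The roots and dataset name mirror the example command; `build_eval_cmd` is a hypothetical helper, not part of `eval.py`:

```python
# Sketch: building the eval.py argv list for one dataset directory,
# e.g. to sweep multiple benchmarks from a driver script.
GT_ROOT = "workspace/benchmark_datasets"
PRED_ROOT = "workspace/benchmark_outputs/MotionCrafter"

def build_eval_cmd(dataset, device="auto", strict=False):
    """Return the argv list for evaluating one dataset directory."""
    cmd = [
        "python", "evaluation/eval.py",
        "--gt_data_dir", f"{GT_ROOT}/{dataset}",
        "--pred_data_dir", f"{PRED_ROOT}/{dataset}",
        "--use_normed_data",
        "--is_pred_world_map",
        "--device", device,
    ]
    if strict:
        cmd.append("--strict_missing")
    return cmd

cmd = build_eval_cmd("Virtual_KITTI_2_video", strict=True)
# Run with: subprocess.run(cmd, check=True)
```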
The scripts under `evaluation/preprocess` now support CLI arguments, so paths and devices are configurable.
Sintel:

```bash
python evaluation/preprocess/gen_sintel_video.py \
    --data_dir workspace/datasets/SintelComplete \
    --output_dir workspace/benchmark_datasets/Sintel_video \
    --device auto
```

Monkaa:

```bash
python evaluation/preprocess/gen_monkaa_video.py \
    --data_dir workspace/datasets/SceneFlowDataset/Monkaa \
    --output_dir workspace/benchmark_datasets/Monkaa_video \
    --device auto
```

DDAD:

```bash
python evaluation/preprocess/gen_ddad_video.py \
    --data_dir workspace/datasets/DDAD/ddad_train_val \
    --output_dir workspace/benchmark_datasets/DDAD_video \
    --dgp_root evaluation/preprocess/dgp \
    --device auto
```

All preprocess scripts generate:
- dataset videos and `.hdf5` files under the output directory
- `filename_list.txt` for downstream evaluator input
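A downstream consumer might enumerate these outputs roughly as follows. This assumes `filename_list.txt` holds one sample name per line with a matching `<name>.hdf5` beside it; the exact naming convention may differ per dataset:

```python
# Sketch: pairing filename_list.txt entries with their .hdf5 files.
import tempfile
from pathlib import Path

def list_samples(gt_root):
    """Return (sample_name, hdf5_path) pairs from filename_list.txt."""
    root = Path(gt_root)
    lines = (root / "filename_list.txt").read_text().splitlines()
    return [(name, root / f"{name}.hdf5")
            for name in (ln.strip() for ln in lines) if name]

# Demo with a throwaway directory and two invented sample names.
tmp = Path(tempfile.mkdtemp())
(tmp / "filename_list.txt").write_text("scene_0001\nscene_0002\n")
samples = list_samples(tmp)
```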
The evaluator writes `metrics.json` to `--pred_data_dir` (or to the path given by `--save_file_name`).
It includes:
- global metric means
- per-sample metric list
- `_meta` summary (`num_samples_total`, `num_samples_evaluated`, `num_samples_skipped`, `device`)
- `_skipped` entries when files are missing and `--strict_missing` is not set
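A downstream script might consume the report like this. The top-level keys `_meta` and `_skipped` come from the description above; the metric name `abs_rel` and the per-sample entry layout are invented for illustration:

```python
# Sketch: parsing metrics.json with an assumed (illustrative) layout.
import json

report = json.loads("""
{
  "abs_rel": 0.12,
  "per_sample": [{"name": "scene_0001", "abs_rel": 0.12}],
  "_meta": {"num_samples_total": 2, "num_samples_evaluated": 1,
            "num_samples_skipped": 1, "device": "cuda"},
  "_skipped": ["scene_0002"]
}
""")

meta = report["_meta"]
coverage = meta["num_samples_evaluated"] / meta["num_samples_total"]
skipped = report["_skipped"]
```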