It would be great to have MLPerf Llama 3.1 8B pre-training working out of the box with TorchTitan. Some references:

- [MLPerf Training Adds Llama 3.1 8B Benchmark](https://mlcommons.org/2025/10/training-llama-3-1-8b/)
- [small_llm_pretraining/nemo](https://github.com/mlcommons/training/tree/master/small_llm_pretraining/nemo)