
Improvements to distributed training


@lucaslie released this 08 May 20:17

This release fixes a bug in distributed training: when using more than one GPU, training could stall at the end of the last epoch.
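
For context only (this is an assumption, not the project's actual fix): a common cause of this kind of end-of-epoch stall in multi-GPU training is ranks hitting a collective operation (all-reduce, barrier) a different number of times, for example when the final epoch leaves ranks with unequal batch counts. A minimal PyTorch DDP sketch of one way to guard against that, using the `Join` context manager, is shown below; the function name and training loop details are illustrative.

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.distributed.algorithms.join import Join


def train_one_epoch(model: DDP, loader, optimizer):
    # Join() keeps collective communication consistent even if some ranks
    # run out of batches before others, which otherwise can hang the job
    # at the end of an epoch.
    with Join([model]):
        for inputs, targets in loader:
            optimizer.zero_grad()
            loss = torch.nn.functional.mse_loss(model(inputs), targets)
            loss.backward()  # gradients are all-reduced across ranks here
            optimizer.step()
    # Synchronize all ranks at the epoch boundary before any rank-0-only work
    # (checkpointing, logging, etc.).
    dist.barrier()
```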