Hi @trevor-m
I am using your repo as a reference to implement an SRGAN. I have some differences, especially in the dataset.
I have two .npy files that contain 8000 patches of size 16x16 (both LR and SR), and I made the corresponding modifications to implement mini-batch training using tf.train.shuffle_batch, roughly as in the sketch below.
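My setup looks roughly like this (file names, patch shape, batch size, and queue capacities are placeholder values, not my exact ones):

```python
import numpy as np
import tensorflow as tf

# Load the two patch arrays once, outside the training loop.
lr_patches = np.load("lr_patches.npy")  # e.g. shape (8000, 16, 16, 3)
hr_patches = np.load("hr_patches.npy")  # e.g. shape (8000, 16, 16, 3)

# Wrap the arrays as constants and build the input pipeline once.
lr_const = tf.constant(lr_patches, dtype=tf.float32)
hr_const = tf.constant(hr_patches, dtype=tf.float32)

# Produce one (LR, HR) pair per dequeue; shuffle_batch does the shuffling.
lr_single, hr_single = tf.train.slice_input_producer(
    [lr_const, hr_const], shuffle=False)

lr_batch, hr_batch = tf.train.shuffle_batch(
    [lr_single, hr_single],
    batch_size=16,
    capacity=2000,
    min_after_dequeue=1000)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    for step in range(10000):
        # Only sess.run calls inside the loop; no new ops are created here.
        lr_np, hr_np = sess.run([lr_batch, hr_batch])
        # ... feed these to the SRGAN train ops ...
    coord.request_stop()
    coord.join(threads)
```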
During training, memory usage grows by roughly 1 GB per 50 iterations, and sooner or later, depending on the batch size, CPU memory consumption reaches its maximum and training stops.
Perhaps my problem is naive; I am a newbie with TensorFlow. What would you recommend to prevent such high memory consumption?