
OOM problem #5

@luissalgueiro


Hi @trevor-m

I am using your repo as a basis to implement an SRGAN. My setup has some differences, especially in the dataset.

I have two .npy files, each containing 8000 patches of size 16×16 (one for LR and one for SR), and I made the corresponding modifications to implement mini-batch training using tf.train.shuffle_batch.
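For reference, here is roughly how I wired the patches into the queue (a minimal sketch; the file names, channel count, and batch size are placeholders for my actual values):

```python
import numpy as np
import tensorflow as tf

# Load both patch arrays once, outside the training loop.
lr_patches = np.load("lr_patches.npy")  # hypothetical path, shape (8000, 16, 16, C)
sr_patches = np.load("sr_patches.npy")  # hypothetical path, same layout

# Build the input pipeline once; slice_input_producer yields one patch pair
# per dequeue, and shuffle_batch assembles them into shuffled mini-batches.
lr_single, sr_single = tf.train.slice_input_producer(
    [tf.constant(lr_patches), tf.constant(sr_patches)], shuffle=False)
lr_batch, sr_batch = tf.train.shuffle_batch(
    [lr_single, sr_single],
    batch_size=16,          # placeholder; I vary this
    capacity=2000,
    min_after_dequeue=1000,
    num_threads=4)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # shuffle_batch is queue-based, so the queue runners must be started.
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    # ... training loop runs sess.run on the ops built above ...
    coord.request_stop()
    coord.join(threads)
```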

During training, memory usage grows by roughly 1 GB per 50 iterations, and sooner or later, depending on the batch size, CPU memory consumption reaches its maximum and training stops.
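In case it helps narrow things down, one check I can add is to finalize the graph before the loop, so that any op accidentally created inside the loop raises an error instead of silently growing the graph (a minimal sketch; `train_op` and `num_steps` stand in for my actual training op and step count):

```python
# Freeze the graph: after this, any tf.* call that tries to add a new op
# (a stray tf.cast, tf.summary.*, or a tensor built inside the loop)
# raises a RuntimeError instead of leaking memory every iteration.
tf.get_default_graph().finalize()

for step in range(num_steps):  # num_steps is a placeholder
    sess.run(train_op)         # train_op is my SRGAN update op
```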

Perhaps my problem is naive; I am a newbie with TensorFlow. What would you recommend to prevent such high memory consumption?
