My testing, based on a variation of demo.py for classification with 7 labels/classes, shows choppy performance on a GPU. Excluding Python post-processing and ignoring the first two inferences, I see per-image processing durations like 0.09, 0.089, 0.56, 0.56, 0.079, 0.39, 0.09 ...; the average over 19 images is 0.19 s per image.
I'm surprised by the variance.
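For reference, the timings were collected with a loop roughly like the one below (a sketch only; `sess`, `inputs`, `logits`, `image_paths`, and `load_image` stand in for whatever my variation of demo.py actually builds):

```python
import time

durations = []
for i, path in enumerate(image_paths):
    batch = load_image(path)                      # hypothetical preprocessing helper
    start = time.perf_counter()
    sess.run(logits, feed_dict={inputs: batch})   # one sess.run per image
    elapsed = time.perf_counter() - start
    if i >= 2:                                    # skip the first two warm-up runs
        durations.append(elapsed)

print("avg %.3fs  min %.3fs  max %.3fs" %
      (sum(durations) / len(durations), min(durations), max(durations)))
```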
At ~5 images/sec it is workable, but it could be better. Would tensorflow-serving help by getting Python out of the loop? I need to process 1M images per day.
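One thing I could try before (or instead of) tensorflow-serving is batching: 1M images/day is only ~12 images/sec, and a single sess.run over a batch amortizes the Python and kernel-launch overhead across many images. A rough sketch (the batch size and tensor names are assumptions, not what demo.py uses):

```python
import numpy as np

BATCH_SIZE = 32  # assumed; tune for the available GPU memory

def run_batched(sess, inputs, logits, images):
    """Run inference over `images` with one sess.run call per batch."""
    outputs = []
    for start in range(0, len(images), BATCH_SIZE):
        chunk = np.stack(images[start:start + BATCH_SIZE])  # (N, H, W, C)
        outputs.append(sess.run(logits, feed_dict={inputs: chunk}))
    return np.concatenate(outputs, axis=0)
```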
(The GPU is a GeForce GTX 1080 using 10.8GB of its 11GB of memory; only one TF session is used for all the inferences.)
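(The 10.8GB figure is probably just TensorFlow's default behaviour of reserving nearly all GPU memory up front rather than the model actually needing it; if that matters, the session can be created with allow_growth, assuming TF 1.x:)

```python
import tensorflow as tf

# Assuming TF 1.x: allocate GPU memory on demand instead of
# reserving nearly the whole card at session creation.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
```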