How was the validation accuracy profile while training the foreground/background segmentation model with noisy segments obtained from the motion segmentation algorithm?
#3
Open
aditya-vora opened this issue on Feb 4, 2018 · 1 comment
Hi,
I just wanted to know how the validation accuracy profile looked when you were training the foreground/background segmentation model with the AlexNet/CaffeNet architecture. From what accuracy did you start the training, and what validation accuracy did you obtain at the end? Were you getting low validation accuracy because of the noisy labels produced by the inaccurate motion segmentation algorithm, or did you observe a general trend of increasing validation accuracy?
Thanks,
Aditya Vora
@aditya-vora The main trick in getting it to work is the Trimap loss (described in the paper).
If you look at the validation loss on the full image without Trimap thresholding, it won't give you enough signal because the data is too noisy. But the Trimap loss on the validation set is a decent indicator (it goes down as training progresses), so tracking it is useful.
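For anyone landing here later, a minimal sketch of what a trimap-restricted validation metric could look like (not the authors' code; it assumes the trimap is a fixed-width band around the ground-truth boundary, and the `band` width and helper names are made up for illustration):

```python
# Sketch only: evaluate segmentation accuracy inside a trimap band around
# the ground-truth boundary, so noisy labels far from the boundary don't
# swamp the signal. Band width and function names are assumptions.
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def trimap_band(gt_mask, band=5):
    """Boolean mask covering a band of `band` pixels around the
    foreground/background boundary of the binary ground-truth mask."""
    fg = gt_mask.astype(bool)
    dilated = binary_dilation(fg, iterations=band)
    eroded = binary_erosion(fg, iterations=band)
    return dilated & ~eroded  # uncertain region near the boundary

def trimap_accuracy(pred_mask, gt_mask, band=5):
    """Pixel accuracy evaluated only inside the trimap band."""
    region = trimap_band(gt_mask, band)
    if region.sum() == 0:  # no boundary pixels in this image
        return np.nan
    return (pred_mask[region] == gt_mask[region]).mean()
```

Averaging such a metric (or the corresponding masked loss) over the validation set each epoch and watching its trend is the kind of tracking described above, rather than full-image accuracy.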