
What was the validation accuracy profile while training the foreground/background segmentation model with noisy segments obtained from the motion segmentation algorithm? #3

Open
aditya-vora opened this issue Feb 4, 2018 · 1 comment

@aditya-vora

Hi,
I just wanted to know what the validation accuracy profile looked like when you were training the foreground/background segmentation model with the AlexNet/CaffeNet architecture. At what accuracy did the training start, and what validation accuracy did you obtain at the end? Were you getting low validation accuracy because of the noisy labels produced by the inaccuracy of the motion segmentation algorithm, or did you observe a general trend of increasing validation accuracy?
Thanks,
Aditya Vora

@pathak22
Owner

pathak22 commented Feb 17, 2018

@aditya-vora The main trick in getting it to work is the Trimap loss (described in the paper).

So, if you look at the validation loss on the full image without Trimap thresholding, it won't give you enough signal because the data is too noisy. But the Trimap loss computed on the validation set is a decent indicator (it goes down as training progresses), so tracking it is useful.
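For readers landing here: one common way to realize a Trimap-style loss is to ignore pixels near the boundary of the noisy mask, where the motion-segmentation labels are least reliable, and compute the loss only on confidently-inside and confidently-outside pixels. The sketch below is my own rough illustration of that idea (the erosion/dilation construction and the `band` width are assumptions on my part, not the paper's exact recipe):

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def trimap_masked_loss(pred_probs, noisy_mask, band=5):
    """Binary cross-entropy restricted to confident pixels.

    A band of `band` pixels around the noisy mask's boundary is
    treated as "unknown" and excluded from the loss, since labels
    there are most likely to be wrong.

    pred_probs: float array of predicted foreground probabilities.
    noisy_mask: boolean array, the noisy motion-segmentation labels.
    """
    struct = np.ones((2 * band + 1, 2 * band + 1), dtype=bool)
    # Confidently foreground: survives erosion of the noisy mask.
    inside = binary_erosion(noisy_mask, structure=struct)
    # Confidently background: outside the dilated noisy mask.
    outside = ~binary_dilation(noisy_mask, structure=struct)
    confident = inside | outside

    eps = 1e-7
    p = np.clip(pred_probs, eps, 1.0 - eps)
    ce = -(noisy_mask * np.log(p) + (1 - noisy_mask) * np.log(1.0 - p))
    return float(ce[confident].mean()) if confident.any() else 0.0
```

Tracking this masked loss on a held-out set gives a cleaner training signal than the full-image loss, because the excluded boundary band is where the label noise concentrates.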
