very slow training #43

Open
businiaoo opened this issue Jun 27, 2022 · 3 comments

Comments

@businiaoo

At line 86 of the forward function in mmdet/distillation/losses/fgd.py there are two for loops. In my tests, these two for loops slow down training considerably. Is there any solution?
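
In case it helps the discussion: since the mask in FGD is derived from each image's ground-truth boxes, the outer loop over the batch is hard to remove entirely, but the per-box work inside it can usually be vectorized with broadcasting. Below is a minimal sketch, not the repository's actual code; the function name `build_fg_mask`, its arguments, and the assumption that the boxes are already scaled to the feature-map resolution are mine.

```python
import torch

def build_fg_mask(gt_boxes: torch.Tensor, H: int, W: int) -> torch.Tensor:
    """Build a binary foreground mask for one image.

    gt_boxes: (N, 4) tensor of [x1, y1, x2, y2] already scaled to the
    feature-map size (H, W). Returns a (H, W) float mask that is 1 inside
    any GT box and 0 elsewhere, without a Python loop over the boxes.
    """
    ys = torch.arange(H, device=gt_boxes.device).view(1, H, 1)  # (1, H, 1)
    xs = torch.arange(W, device=gt_boxes.device).view(1, 1, W)  # (1, 1, W)
    x1, y1 = gt_boxes[:, 0].view(-1, 1, 1), gt_boxes[:, 1].view(-1, 1, 1)
    x2, y2 = gt_boxes[:, 2].view(-1, 1, 1), gt_boxes[:, 3].view(-1, 1, 1)
    # Broadcast box coordinates against the pixel grid: (N, H, W) booleans.
    inside = (xs >= x1) & (xs <= x2) & (ys >= y1) & (ys <= y2)
    # A pixel is foreground if it falls inside at least one box.
    return inside.any(dim=0).float()
```

This keeps one loop over the batch (boxes per image vary in number), but replaces the nested per-box loop with a single batched comparison.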

@dw763j

dw763j commented Jul 1, 2022

The for loops iterate over the batch size, so I would not expect a large slowdown; there are only 16 samples in total.
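
The cost is often not the 16 iterations themselves but the many small tensor ops they launch and the CPU-GPU synchronizations in between. A small timing helper like the sketch below (my own names, not part of the repository) can confirm whether the loop really dominates a training step:

```python
import time
import torch

def timed(fn, *args, warmup: int = 3, iters: int = 20) -> float:
    """Average wall-clock time of fn(*args), with CUDA sync so GPU work is counted."""
    for _ in range(warmup):
        fn(*args)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        fn(*args)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters
```

Timing just the mask-building loop and comparing it against the full forward pass should show whether it is worth optimizing.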

@Muke6

Muke6 commented Jul 20, 2022

I encountered the same problem. Have you solved it?

@Tongfengyu

Is it necessary to add a `with torch.no_grad():` line when computing the mask?
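
If the mask is only used to weight the distillation loss and no gradient should flow through its construction, wrapping that step in `torch.no_grad()` keeps it out of the autograd graph. A minimal sketch of the idea, using a hypothetical `mask_fn` rather than the repository's actual mask code:

```python
import torch

def masked_distill_loss(student_feat: torch.Tensor,
                        teacher_feat: torch.Tensor,
                        mask_fn, gt_boxes: torch.Tensor) -> torch.Tensor:
    """student_feat, teacher_feat: (C, H, W); mask_fn builds a (H, W) weight map."""
    # The mask is a fixed weighting derived from GT boxes, so it needs no
    # gradients; building it under no_grad avoids tracking it in autograd.
    with torch.no_grad():
        mask = mask_fn(gt_boxes, student_feat.shape[-2], student_feat.shape[-1])
    # Gradients still flow through the student features themselves.
    return (mask * (student_feat - teacher_feat) ** 2).mean()
```

Note that this mainly saves autograd bookkeeping and memory; it will not by itself remove the Python-loop overhead discussed above, since the slow part is the loop, not backpropagation through the mask.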
