Hi, can I ask some questions about your paper? (●'◡'●) #2
jun0wanan:
Thank you very much for an extraordinary job! I'm very interested in your work and I hope to follow in your footsteps (●'◡'●). Can your work run on multiple GPUs? Hope to receive your reply.
Author's reply:
Hi @jun0wanan,

In its current form, our code base only supports single-GPU training. Part of the challenge in supporting multi-GPU training is the contrastive loss, which requires each caption to be compared to all other images in the mini-batch to compute the loss. Note that this is different from typical classification tasks, where an image and its label are sufficient to compute the loss for that sample, so it is easy to partition the batch and place each partition on a separate GPU.

One solution that might work reasonably well is to use only the images placed on the same GPU as negatives for the contrastive loss. For example, if the batch size is 100 and you have 4 GPUs, each GPU handles a subset of size 25, so instead of using 99 images as negatives for each caption, you would be using 24. Hope this helps!
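To make the suggestion concrete, here is a minimal sketch of a contrastive loss restricted to local negatives, assuming a PyTorch codebase with an InfoNCE-style loss (the function name and temperature value are hypothetical, not taken from this repo). Under DataParallel or DistributedDataParallel, each replica would call this on its own shard of the batch, so a caption is only contrasted against the images on the same GPU.

```python
import torch
import torch.nn.functional as F

def local_contrastive_loss(image_feats, caption_feats, temperature=0.1):
    """InfoNCE-style loss computed over the local shard only (hypothetical sketch).

    image_feats:   (B_local, D) image embeddings on this GPU
    caption_feats: (B_local, D) caption embeddings on this GPU

    With a global batch of 100 split across 4 GPUs, B_local is 25, so each
    caption sees 24 negatives instead of 99, as described above.
    """
    # Cosine-similarity logits between every caption and every local image.
    image_feats = F.normalize(image_feats, dim=-1)
    caption_feats = F.normalize(caption_feats, dim=-1)
    logits = caption_feats @ image_feats.t() / temperature  # (B_local, B_local)

    # The matching image for caption i is image i within the local shard.
    targets = torch.arange(logits.size(0), device=logits.device)

    # Symmetric loss: caption-to-image and image-to-caption directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```

Because the negatives never cross GPUs, no extra communication (such as an all-gather of embeddings) is needed; the trade-off is a weaker training signal from fewer negatives per caption.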
Follow-up comment:
hi, author~ model_nums = find_all_model_numbers(exp_const.model_dir)