Generalized Sparse Convolution #77
Conversation
Great work! I will take a closer look this week and test it.
I was wondering, will this make C++ functions like `convert_map_forward` and `insertion_forward` obsolete? Unless maybe somebody is using `torchsparse.nn.functional.voxelize` outside of `torchsparse.nn.functional.downsample` for some reason? We might want to see if we can remove these.
Thanks! It looks good to me overall.
The code looks good to me! It's also much cleaner than the code on master, which is great.
I haven't tested the code, but I assume you checked it on different stride/kernel configs and it worked as expected?
LGTM.
In this PR we implement the generalized sparse convolution operation. This operation mainly differs from the current sparse convolution in the downsampling stage when `kernel_size` is not equal to `stride`. The output will be dilated (similar to strided `nn.Conv3d`) and is not calculated with `c_out = torch.floor(c_in // 2) * 2`. Besides, we also add support for asymmetric convolutional kernel sizes and strides.
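To illustrate the dilated-output behavior described above, here is a minimal NumPy sketch of how output coordinates could be derived during downsampling. This is an assumption-laden illustration, not the actual torchsparse implementation: `downsample_coords` is a hypothetical helper, and it only models the coordinate lattice (each output coordinate snaps to a multiple of the stride, as with strided `nn.Conv3d`), not the feature computation.

```python
import numpy as np

def downsample_coords(coords: np.ndarray, stride: int) -> np.ndarray:
    """Illustrative sketch (not torchsparse internals): map sparse input
    coordinates onto the strided output lattice and deduplicate.

    Each output coordinate is a multiple of `stride`, so the output is
    "dilated" in the sense described in the PR text.
    """
    # Snap every coordinate down to the nearest multiple of the stride.
    out = (coords // stride) * stride
    # Multiple inputs can land on the same output site; keep unique rows.
    return np.unique(out, axis=0)

coords = np.array([[0, 1, 2],
                   [3, 4, 5],
                   [2, 2, 2]])
print(downsample_coords(coords, 2))
# → [[0 0 2]
#    [2 2 2]
#    [2 4 4]]
```

With `kernel_size != stride`, the kernel gathers neighbors around each such lattice site, which is where the generalized operation diverges from the equal-kernel-and-stride case.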