ShapeNet TextureUV support (by combining multiple texture images) #694
[*] See the corresponding code invoking the OBJ loader.
Thanks @patricklabatut for the reply. Yes, PyTorch3D loads them as a TextureAtlas [Faces, 4, 4, 3] tensor. After these textures are applied to each part of the mesh, is there a way to convert them into a single TextureUV map? I need one image per mesh for my research problem, and I am wondering if there is some way to have one map per mesh. Thank you.
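For reference, a minimal sketch (not from the thread) of how a ShapeNet-style OBJ can be loaded with a per-face texture atlas via pytorch3d.io.load_obj; the file path is a placeholder, and texture_atlas_size=4 is chosen only to match the (F, 4, 4, 3) tensor mentioned above.

```python
# Minimal sketch: load an OBJ with a per-face texture atlas (placeholder path).
from pytorch3d.io import load_obj
from pytorch3d.renderer import TexturesAtlas
from pytorch3d.structures import Meshes

# create_texture_atlas=True bakes each face's texture into an RxR patch,
# so aux.texture_atlas has shape (F, R, R, 3).
verts, faces, aux = load_obj(
    "model_normalized.obj",   # placeholder path to a ShapeNet OBJ
    load_textures=True,
    create_texture_atlas=True,
    texture_atlas_size=4,
    texture_wrap="repeat",
)

atlas = aux.texture_atlas                   # (F, 4, 4, 3) per-face patches
textures = TexturesAtlas(atlas=[atlas])     # one atlas per mesh in the batch
mesh = Meshes(verts=[verts], faces=[faces.verts_idx], textures=textures)
```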
One could try to repack all the per-face texture images into a single large texture image and generate matching vertex UV coordinates. That would, however, require as much if not more processing than what follows below, including having to break the model into disjoint triangles and generally undoing a significant part of the […]
This is technically possible but unfortunately not really out-of-the-box in PyTorch3D at this time. I will mark this as a possible enhancement to consider for future releases. At a high level, one would have to create a large image to store all the model's images in non-overlapping regions (with some possible padding). With this, the original vertex UV coordinates (typically in [0,1]^2) referencing the original images would have to be remapped to match the specific regions where the different images have been placed. Some of that logic is actually implemented already in […]
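To illustrate the remapping idea, here is a rough sketch of packing several texture images into one map and shifting/scaling the corresponding UVs. This is not an existing PyTorch3D utility; the function name and arguments are my own placeholders, all images are assumed to share the same height, and padding/bleeding between regions is ignored.

```python
# Rough sketch of the repacking idea: place the images side by side in one
# large map and remap each UV set into its sub-region. Not a PyTorch3D API;
# assumes all images have the same height and ignores padding between regions.
import torch

def pack_textures_horizontally(images, verts_uvs_per_image):
    """
    images:              list of (H, Wi, 3) float tensors in [0, 1]
    verts_uvs_per_image: list of (Vi, 2) UV tensors in [0, 1]^2, one per image
    Returns the packed (H, sum(Wi), 3) map and the remapped UV tensors.
    """
    H = images[0].shape[0]
    W = sum(img.shape[1] for img in images)
    packed = torch.zeros(H, W, 3)

    remapped, x = [], 0
    for img, uvs in zip(images, verts_uvs_per_image):
        w = img.shape[1]
        packed[:, x:x + w] = img
        # Scale U into the width of this image's strip, then shift it to the
        # strip's offset inside the packed map; V is unchanged (same height).
        u = (uvs[:, 0] * w + x) / W
        remapped.append(torch.stack([u, uvs[:, 1]], dim=1))
        x += w

    return packed, remapped
```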
Thank you @patricklabatut, that makes sense. I will wait for this feature. Thank you again.
@ck-amrahd Were you able to solve the problem? I am also looking to pass images and generate their 3D OBJ for my work.
Hi,
I am working with the ShapeNet dataset, and PyTorch3D loads the textures as a TextureAtlas. Is there a way to convert this TextureAtlas into a TextureUV? Or can I load them as a TextureUV on my own? However, these OBJ files have multiple texture images each; does PyTorch3D support loading a TextureUV from multiple texture images? Thank you.
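For completeness, a brief sketch of how a single packed map plus remapped UVs could be wrapped into a TexturesUV once they exist; the tiny tensors below are dummy stand-ins for the output of a real packing/remapping step.

```python
# Sketch: wrap one packed texture map and remapped UVs into TexturesUV.
# Dummy data stands in for a real packed result: a single textured triangle.
import torch
from pytorch3d.renderer import TexturesUV
from pytorch3d.structures import Meshes

packed_map = torch.rand(8, 8, 3)                                 # single packed texture image
verts = torch.tensor([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
faces_idx = torch.tensor([[0, 1, 2]])                            # (F, 3) vertex indices
verts_uvs = torch.tensor([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # remapped UVs in [0, 1]^2
faces_uvs = torch.tensor([[0, 1, 2]])                            # (F, 3) UV indices

textures = TexturesUV(maps=[packed_map], faces_uvs=[faces_uvs], verts_uvs=[verts_uvs])
mesh = Meshes(verts=[verts], faces=[faces_idx], textures=textures)
```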