[PyTorch] [Frontend] Add support for 'aten::new_zeros' & 'aten::copy_' #9375
Conversation
LGTM. cc @masahi
Please add tests. Also, we need to discuss
Yes, we should definitely add a warning at least. A safer alternative, which I'm more comfortable with, is to make use of the custom convert map (tvm/python/tvm/relay/frontend/pytorch.py, line 3810 at 0df4edc): users explicitly provide their own conversion for `copy_` and pass it to `from_torch`. If that's acceptable for your use cases, I think this is the most reasonable compromise.
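For concreteness, a minimal sketch of that opt-in path, assuming the `custom_convert_map` argument of `relay.frontend.from_pytorch` (the API the comment refers to as `from_torch`); the `CopyModel` module and the `convert_copy_` body are illustrative, not code from this PR:

```python
import torch
from tvm import relay

class CopyModel(torch.nn.Module):
    def forward(self, x):
        y = torch.zeros_like(x)
        y.copy_(x)            # recorded as aten::copy_ in the traced graph
        return y + 1.0

def convert_copy_(inputs, input_types):
    # inputs[0] is the destination, inputs[1] the source. Returning the
    # source is only sound when the destination is not aliased elsewhere;
    # that is exactly what the user acknowledges by opting in here.
    return inputs[1]

example = torch.rand(2, 3)
scripted = torch.jit.trace(CopyModel(), example)
mod, params = relay.frontend.from_pytorch(
    scripted,
    [("x", example.shape)],
    custom_convert_map={"aten::copy_": convert_copy_},
)
```

Without the `custom_convert_map` entry, the frontend would reject the graph as containing an unsupported operator, which is the "fail early" behavior being proposed.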
Good suggestion. We should definitely do so for our use cases in particular, although I guess the same issue may happen periodically when someone tries to work on BART lol
Yeah, users need to acknowledge that TVM cannot represent all PT models. It is better to reject such models early than to return corrupted models. Since this request comes up so often, how about we do the following:
@comaniac @masahi I find that the output will not be correct for code like the pattern below. In BART it comes from a function, and the function as a whole is not in-place.
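A hedged reconstruction of the pattern in question, modeled on the `shift_tokens_right` helper in transformers' BART implementation (the exact snippet was not preserved here); scripting it shows the two ops this PR targets:

```python
import torch

def shift_tokens_right(input_ids: torch.Tensor, pad_token_id: int) -> torch.Tensor:
    # Pure from the caller's point of view: a fresh tensor is returned.
    shifted = input_ids.new_zeros(input_ids.shape)   # -> aten::new_zeros
    # TorchScript records this slice assignment as aten::slice + aten::copy_.
    shifted[:, 1:] = input_ids[:, :-1].clone()
    # Scalar stores lower to a similar in-place op (aten::fill_).
    shifted[:, 0] = pad_token_id
    return shifted

print(torch.jit.script(shift_tokens_right).graph)  # shows new_zeros / copy_ nodes
```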
Also, I find that after pytorch/pytorch#52063 (torch version >= 1.9), this kind of subgraph can be converted to ONNX. Maybe the torch->onnx path will work for these models?
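A sketch of that alternative path, under the assumption that the exporter rewrites these in-place writes into functional ONNX ops at opset 11+ on torch >= 1.9; the `Shift` module and file name are illustrative:

```python
import torch
import torch.nn as nn

class Shift(nn.Module):
    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        shifted = input_ids.new_zeros(input_ids.shape)
        shifted[:, 1:] = input_ids[:, :-1].clone()
        return shifted

example = torch.randint(0, 100, (2, 8))
# The exporter removes the in-place slice write, so no aten::copy_
# survives in the exported graph.
torch.onnx.export(Shift(), (example,), "shift.onnx", opset_version=11)

# The ONNX file could then be imported into TVM:
# import onnx
# from tvm import relay
# mod, params = relay.frontend.from_onnx(onnx.load("shift.onnx"))
```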
Closed now.
I've tried `aten::new_zeros` & `aten::copy_`. This PR enables converting the 'bart_base' model from transformers.
@comaniac
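For readers landing here from search: a rough sketch of what a frontend converter for `new_zeros` could look like, assuming the converter convention in tvm/python/tvm/relay/frontend/pytorch.py (functions taking `inputs` and `input_types`); this is an illustration, not the exact code merged in this PR:

```python
from tvm.relay import expr as _expr
from tvm.relay import op as _op

def new_zeros(inputs, input_types):
    # inputs ~ (self, size, dtype, layout, device, pin_memory) in the
    # TorchScript signature; this sketch hard-codes the dtype, whereas a
    # real converter would map the PyTorch dtype argument.
    shape = inputs[1]
    dtype = "float32"  # assumption for the sketch
    return _op.full(_expr.const(0, dtype=dtype), shape, dtype=dtype)
```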