Yeah, the BN used in my repo is not the best choice.
Since we have no plan to update it, I suggest you replace it with PyTorch's official SyncBN.
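For anyone making that swap, here is a minimal sketch of one way to do it with PyTorch's built-in `nn.SyncBatchNorm`. The helper name and `build_network()` are placeholders rather than code from this repo, and it assumes the model is first rebuilt with standard `nn.BatchNorm2d` layers in place of the custom bn_lib modules.

```python
import torch
import torch.nn as nn
import torch.distributed as dist


def wrap_with_official_syncbn(model: nn.Module, local_rank: int) -> nn.Module:
    """Swap plain BatchNorm layers for torch.nn.SyncBatchNorm and wrap in DDP."""
    # convert_sync_batchnorm only converts standard nn.BatchNorm*d modules,
    # so the network should be built with nn.BatchNorm2d instead of the
    # repo's custom bn_lib layers before calling this.
    model = nn.SyncBatchNorm.convert_sync_batchnorm(model)
    # SyncBatchNorm requires DistributedDataParallel (one process per GPU);
    # it does not synchronize statistics under nn.DataParallel.
    return nn.parallel.DistributedDataParallel(
        model.cuda(local_rank), device_ids=[local_rank]
    )


# Usage, after dist.init_process_group("nccl") has been called in each process
# (local_rank comes from your launcher, e.g. torchrun; build_network() is a
# stand-in for however you construct the model):
# model = wrap_with_official_syncbn(build_network(), local_rank)
```

Note that `convert_sync_batchnorm` only touches modules that are standard PyTorch BatchNorm layers and leaves anything else untouched, which is why the custom BN layers need to be replaced with `nn.BatchNorm2d` first.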
I added another block to replace EMAU, but I get some warnings. I guess the bn_lib you used is not suitable for my block:
```
2020-07-30 20:05:26,727 - INFO - step: 1 loss: 2.429 lr: 0.009
WARNING batched routines are designed for small sizes. It might be better to use the
Native/Hybrid classical routines if you want good performance.
=========================================================================================
WARNING batched routines are designed for small sizes. It might be better to use the
Native/Hybrid classical routines if you want good performance.
=========================================================================================
WARNING batched routines are designed for small sizes. It might be better to use the
Native/Hybrid classical routines if you want good performance.
=========================================================================================
WARNING batched routines are designed for small sizes. It might be better to use the
Native/Hybrid classical routines if you want good performance.
2020-07-30 20:05:29,586 - INFO - step: 2 loss: 2.398 lr: 0.009
```