Is Torch faster than Caffe? #6
Comments
LuaJIT is faster than Python and Torch is generally faster than Caffe from what I've seen in benchmarks, which are a bit outdated: https://github.com/soumith/convnet-benchmarks. I hear Nvidia gives the best support to Torch, which they use for much of their own work (e.g., autonomous car demonstrations), as others like Google and Nervana/Intel compete on hardware. cuDNN 5 speeds up Torch quite a bit: https://devblogs.nvidia.com/parallelforall/optimizing-recurrent-neural-networks-cudnn-5/.
@adam-erickson: 👍 thanks. But the huge difference on a single frame is the concern, and that benchmark does not compare LuaJIT with Caffe.
Hi @nitish11, thanks for your interest in our work and thanks for this cool repository. It's embarrassing to admit, but I have never worked with Torch, so I really can't say which one is faster (or, in this case, why you get faster run-times in Torch). Best,
I checked the BLAS computation engine that Torch and Caffe use. On Ubuntu 14.04, I inspected which BLAS library each framework is linked against.
From the output, I observed that Torch is linked against OpenBLAS while Caffe is linked against libcblas, which might be the reason Caffe is slower. Solution: build Caffe with OpenBLAS.
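For reference, a minimal sketch of that kind of check and fix, assuming a Makefile-based Caffe build; the library paths below are examples rather than the exact ones from this thread, so adjust them to your install:

```bash
# Check which BLAS implementation each framework's core library pulls in
# (point ldd at your actual libcaffe.so / libTH.so locations).
ldd /usr/local/lib/libcaffe.so   | grep -i -E 'blas|atlas'
ldd ~/torch/install/lib/libTH.so | grep -i -E 'blas|atlas'

# To rebuild Caffe against OpenBLAS, edit Makefile.config so that
#   BLAS := open
# (optionally set BLAS_INCLUDE / BLAS_LIB if OpenBLAS lives in a
# non-standard location), then rebuild:
make clean
make all -j"$(nproc)"
make pycaffe
```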
Are you sure it's not simply the difference in looping speed between LuaJIT and Python? It can be quite large. Similar to Julia, LuaJIT is closer to C.
I am not sure about the looping-speed difference between LuaJIT and Python.
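One rough way to test that hypothesis would be to time the Python-side loop work in isolation. The sketch below is only illustrative, and `per_frame_bookkeeping` is a hypothetical stand-in, since the actual pipeline code is not shown in this thread:

```python
# Illustrative only: estimate pure Python per-iteration overhead.
# per_frame_bookkeeping stands in for the Python-side work done around
# each forward pass (lookups, appends, etc.).
import timeit

def per_frame_bookkeeping():
    acc = 0
    for i in range(1000):
        acc += i
    return acc

n = 10000
total = timeit.timeit(per_frame_bookkeeping, number=n)
print("avg Python-side overhead per iteration: %.6f s" % (total / n))
# If this comes out in the microsecond-to-millisecond range, interpreter
# looping alone cannot account for a ~0.85 s per-frame gap.
```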
There is a better way: calling Torch from Python code through a wrapper.
Hi,
I am using the gender detection model in Torch and in Caffe for detection from a live camera.
Running the code on the CPU with the same model files, I am getting different prediction times:
For Caffe, it is ~1.30 seconds per frame.
For Torch, it is ~0.45 seconds per frame.
What could be the possible reason for the time difference? Is Torch faster than Caffe?
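To separate the framework's forward-pass cost from Python-side capture and preprocessing, one could time the Caffe forward call in isolation. A minimal sketch, assuming pycaffe is installed and using placeholder prototxt/caffemodel/blob names for the gender model (an analogous loop around the Torch model, e.g. with torch.Timer(), would give the Torch-side number):

```python
# Minimal sketch: time only the Caffe forward pass per frame, excluding
# camera capture and preprocessing. File names and the 'data' blob name
# are placeholders and may differ in your setup.
import time
import numpy as np
import caffe

caffe.set_mode_cpu()
net = caffe.Net('deploy_gender.prototxt', 'gender_net.caffemodel', caffe.TEST)

# Dummy input with the network's expected shape; a real run would feed the
# preprocessed camera frame here instead.
dummy = np.random.rand(*net.blobs['data'].data.shape).astype(np.float32)

times = []
for _ in range(20):
    net.blobs['data'].data[...] = dummy
    start = time.time()
    net.forward()
    times.append(time.time() - start)

# Skip the first few iterations so caches and allocations can warm up.
steady = times[5:]
print('mean forward time per frame: %.3f s' % (sum(steady) / len(steady)))
```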