TensorBoard currently, in its Python process, encodes a text tensor into markdown (and sometimes into a table).
Recently, we have noticed that it is notably slower on large tensors, which we attributed largely to the markdown parser. While debugging, we noticed one particular hotspot. Hint: it is not the markdown parser!
Test script
Faster
```python
import time

from tensorboard import plugin_util

for i in range(5000, 50000, 5000):
    start = time.time()
    plugin_util.markdown_to_safe_html(
        "[" + " ".join([str(j) for j in range(i)]) + "]"
    )
    end = time.time()
    print("Converting %d length string took %f" % (i, end - start))

# Converting 5000 length string took 0.016349
# Converting 10000 length string took 0.033403
# Converting 15000 length string took 0.053684
# Converting 20000 length string took 0.070949
# Converting 25000 length string took 0.090817
# Converting 30000 length string took 0.108568
# Converting 35000 length string took 0.129561
# Converting 40000 length string took 0.147728
# Converting 45000 length string took 0.169784
```
Slower
```python
import time

from tensorboard import plugin_util

for i in range(5000, 50000, 5000):
    start = time.time()
    plugin_util.markdown_to_safe_html(
        ("[" + " ".join([str(j) for j in range(i)]) + "]").encode("utf8")
    )
    end = time.time()
    print("Converting %d length string took %f" % (i, end - start))

# Converting 5000 length string took 0.110732
# Converting 10000 length string took 0.428144
# Converting 15000 length string took 0.969050
# Converting 20000 length string took 1.694072
# Converting 25000 length string took 2.639599
# Converting 30000 length string took 3.787034
# Converting 35000 length string took 5.172249
# Converting 40000 length string took 6.771216
# Converting 45000 length string took 8.655072
```
When inspected further, it looks like one innocuous line (tensorboard/plugin_util.py, line 138 at 65371ef) is contributing most of the time spent in the binary mode.
For the 45000-length string specifically, we spent about 7.8s on just the replacement, which is quite surprising to me.
Decode: 0.140072
Plus Replace: 7.920270
Plus Convert: 7.945538
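For context, the per-step numbers above can be reproduced with a standalone timing harness along the following lines. This is a minimal sketch: the step names mirror the decode, null-character replace, and markdown-convert stages inside `markdown_to_safe_html`, but it is not the exact TensorBoard code path.

```python
# Minimal sketch of the per-step measurement; assumes the same bracketed
# number payload as the test script above. Not the exact TensorBoard source.
import time

import markdown

payload = ("[" + " ".join(str(j) for j in range(45000)) + "]").encode("utf8")

start = time.time()
decoded = payload.decode("utf-8")       # step 1: decode the bytes
print("Decode: %f" % (time.time() - start))

replaced = decoded.replace("\x00", "")  # step 2: strip null characters
print("Plus Replace: %f" % (time.time() - start))

markdown.markdown(replaced)             # step 3: markdown conversion
print("Plus Convert: %f" % (time.time() - start))
```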
In fact, when I removed the replace line, I was able to restore comparable performance.
Now, this is NOT the behavior I get when testing under the hosted service, so there might be something I am missing with this test setup, or it may be due to version skew in the markdown library we are using.
Interestingly, when I pinned to markdown==2.6.9, it exhibited wildly different behavior: the markdown conversion itself took a very long time while virtually no time was spent on replace.
```
Converting 5000 length string took 6.921765
Converting 10000 length string took 29.544734
Converting 15000 length string took 77.178752
Converting 20000 length string took 147.837926
Converting 25000 length string took 229.252741
Converting 30000 length string took 338.251870
Converting 35000 length string took 470.788789
Converting 40000 length string took 624.641989
Converting 45000 length string took 835.322498
```
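To isolate the parser itself from TensorBoard's wrapper under the pinned version, timing `markdown.markdown` directly along these lines should show the same growth. This is a hypothetical repro sketch; it assumes the environment was prepared with `pip install markdown==2.6.9` beforehand.

```python
# Hypothetical repro sketch: times the markdown parser directly, bypassing
# TensorBoard's markdown_to_safe_html wrapper. Assumes the environment was
# prepared with `pip install markdown==2.6.9`.
import time

import markdown

for i in range(5000, 50000, 5000):
    payload = "[" + " ".join(str(j) for j in range(i)) + "]"
    start = time.time()
    markdown.markdown(payload)
    print("Converting %d length string took %f" % (i, time.time() - start))
```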
At this point, I suspect that the old version of the markdown library has a performance problem of its own, and that it might be patching the replace function in a way that caused unwanted side effects; more updates to follow after experiments.
Now, I can no longer reproduce the issue where Python took a long time replacing a null character in the string.
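Indeed, a standalone `str.replace` on a string of this size completes almost instantly, which supports the idea that the slowdown came from elsewhere. A quick sanity check, independent of TensorBoard:

```python
# Quick sanity check: plain str.replace on a string of a few hundred KB is
# effectively instantaneous on CPython, nowhere near the 7.8s observed above.
import time

s = "[" + " ".join(str(j) for j in range(45000)) + "]"
start = time.time()
s.replace("\x00", "")
print("Plain str.replace took %f" % (time.time() - start))
```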
At this point, perhaps we can simply conclude that the performance issue can be solved by upgrading the markdown parser library internally.
Well, we don't actually require a newer version of the package, though, so this still seems like an issue for OSS users. But OK, if you are fine closing it, we can close it.