model_size: str. The size of the model to use. It can be 'n' (nano), 's' (small), 'm' (medium) or 'l' (large). Larger models are more accurate but slower. Default: 's'.
I haven't tested with a large sample size, but at first pass, if I change the model size to anything other than 's', it stops detecting.
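For anyone trying to reproduce this, here is a rough sketch of the comparison described above. The `Detector` class and `predict()` call are hypothetical placeholders (the library's real entry point isn't shown in this thread); only the `model_size` values come from the documented parameter:

```python
# Placeholder sketch: 'Detector' and 'predict()' are hypothetical stand-ins for
# the library's real entry point; only the model_size values ('n', 's', 'm', 'l')
# come from the documented parameter.
from my_detection_lib import Detector  # hypothetical import, replace with the real one

for size in ("n", "s", "m", "l"):
    detector = Detector(model_size=size)                 # documented parameter
    detections = detector.predict("sample_frame.jpg")    # hypothetical call
    # Reported behaviour: non-empty results only when size == "s".
    print(f"model_size={size}: {len(detections)} detections")
```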
Larger models are usually more capable on difficult tasks, but they are also more prone to overfitting when the data is not varied enough.
I manually tag the dataset I use for detection, so its size grows with every tagging-training round, but its current size may not be large enough to give the larger models the data they need to make the most of their capacity.
So it is possible that, while the dataset remains too small, the larger models have overfit, which actually decreases their performance on real-world data.
Sorry for the inconvenience; I'll revise that part of the documentation. While that claim should eventually hold as the dataset grows, it is likely not true right now.
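To make the capacity-versus-data-size point concrete, here is a small illustrative sketch. It uses scikit-learn and synthetic data as stand-ins, not this project's models or training code: as capacity grows, training accuracy climbs toward 1.0 while cross-validated accuracy on a small dataset plateaus well below it.

```python
# Illustrative only: a generic demonstration of overfitting with growing model
# capacity on a small dataset, not this project's detection models.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Small synthetic dataset standing in for a hand-tagged one.
X, y = make_classification(n_samples=200, n_features=20, n_informative=5,
                           random_state=0)

# Increasing tree depth stands in for the n/s/m/l model sizes.
for depth in (2, 5, 10, None):
    model = DecisionTreeClassifier(max_depth=depth, random_state=0)
    train_acc = model.fit(X, y).score(X, y)
    val_acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"capacity={depth}: train={train_acc:.2f}  cross-val={val_acc:.2f}")
```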