Hey team, thank you for the great work on Bactrian-X.
Do the trained LoRA weights shared in this work support the second version of LLaMA (Llama-2)?
If not, how much compute would be required to retrain them on the new model variants?

Hi, sorry for the late reply. We currently have no plans to train on Llama-2; our next step is to improve data quality by removing low-quality samples.
Training a single LoRA adapter for the 7B model (one language) can be done on a single 40 GB A100 within 12 hours.
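For reference, here is a minimal sketch of what attaching a LoRA adapter to Llama-2-7B could look like with the Hugging Face peft library. This is not the Bactrian-X training script; the base model name and the hyperparameters (rank, alpha, target modules) are illustrative assumptions, not values from this repo.

```python
# Sketch only: attach a LoRA adapter to Llama-2-7B with Hugging Face peft.
# Hyperparameters below are assumptions, not the Bactrian-X settings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "meta-llama/Llama-2-7b-hf"  # gated model; requires access approval

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.float16,  # fp16 keeps the 7B weights within a 40 GB GPU
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,                                 # assumed adapter rank
    lora_alpha=32,                        # assumed scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections (assumed)
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

Because only the small adapter matrices are trained while the base weights stay frozen, the memory footprint is dominated by the frozen fp16 model, which is consistent with the single 40 GB A100 requirement mentioned above.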