- Support TF eager completely
- Better support for native Datasets
  - For TensorFlow, use `tf.data.Dataset` while still supporting multi-GPU
  - For PyTorch, use `DataLoader` (see the loader sketch after this list)
- Better a la carte support (providing layers that work outside of mead)
- Underlying layers API that is nearly identical between PyTorch and TF (a sketch follows this list)
  - In some cases PyTorch requires explicit input dimensions where TF no longer does, which makes things clunkier than they need to be
- Get rid of Python 2 and make typing more explicit
- Improved documentation over previous offering
- Simplify by removing `dynet` support and non-TF-Keras support
- Simplify services and provide a more straightforward `predict`-like functionality with an array (see the example after this list)
- Updated API examples for accessing layers directly, including large-scale Transformer pre-training code with fine-tuning addons
- Native support for BERT and BPE-based Transformer models in TF and PyTorch with no library dependencies
- Support for mead-hub, a centralized repository for sharing models, tasks, vectorizers and embeddings
- An ELMo example, enabled by specifying the addon in your mead config
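
To give a feel for the native PyTorch loader path mentioned above, here is a minimal sketch using a plain `TensorDataset` and `DataLoader`; the shapes, vocabulary size, and batch size are arbitrary values for illustration, not mead defaults.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical pre-vectorized data: 100 examples of length-40 token IDs
x = torch.randint(0, 30000, (100, 40))
y = torch.randint(0, 2, (100,))

# The native PyTorch path: wrap tensors in a Dataset and iterate with DataLoader
loader = DataLoader(TensorDataset(x, y), batch_size=20, shuffle=True)

for batch_x, batch_y in loader:
    # batch_x is [20, 40] token IDs, batch_y is [20] labels
    pass
```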
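The a la carte layers might be composed outside of mead along these lines; the import path and constructor arguments below are assumptions for illustration, so consult the layers API docs for the real names and signatures.

```python
import torch
import torch.nn as nn

# Hypothetical import; the exact module path, class name, and signature are
# assumptions, not confirmed API
from eight_mile.pytorch.layers import DenseStack

class TinyClassifier(nn.Module):
    """Compose a library layer with vanilla PyTorch, no mead involved."""
    def __init__(self, input_dim: int, num_classes: int):
        super().__init__()
        # Note the explicit input dim, which PyTorch requires in some cases
        self.stack = DenseStack(input_dim, 128)
        self.output = nn.Linear(128, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.output(self.stack(x))
```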
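The simplified service flow could look roughly like this; the service class name, bundle path, and load signature are assumptions for illustration rather than confirmed API.

```python
import baseline as bl

# Hypothetical bundle path; load a trained classifier outside of mead-train
service = bl.ClassifierService.load('path/to/model.zip')

# Straightforward predict-like call on an array of pre-tokenized sentences
tokens = [['the', 'movie', 'was', 'great']]
print(service.predict(tokens))
```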
### Note on TensorFlow Eager Mode
By default, eager mode is turned off. To turn it on, pass `--prefer-eager true` to `mead-train`. This only works in TF 2.x and recent versions of TF 1.x; in cases where it is not supported, it should degrade gracefully.
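
For example, assuming a hypothetical config path:

```bash
mead-train --config config/sst2.json --prefer-eager true
```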