Add Evaluate Usage for Keras and Tensorflow #370
Conversation
Created a markdown file to show how to use Evaluate when training and evaluating a model using Keras and Tensorflow.
Second commit for adding issue number.
Hi @arjunpatel7, thanks for adding this. It looks great, I just left a few minor comments. Also, if you merge main into your branch the unit tests should pass again.
Sounds great, will do. Thanks so much @lvwerra!
BTW, in case you don't know, you can commit all the suggestions directly on GitHub: just go to "Files changed" and select the suggestions you want to commit.
Fixes included some rephrasing, changing to use keras logs instead of print, and focusing on evaluating on the test set. Co-authored-by: Leandro von Werra <[email protected]>
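(For a rough idea of what "using keras logs instead of print" could look like inside a callback: the following is a minimal sketch, not the PR's actual code. The class name, the `x_val`/`y_val` arguments, and the logged key are placeholders of my own; only `evaluate.load` and `compute` are the library's real API.)

```python
import numpy as np
import tensorflow as tf
import evaluate


class EvaluateAccuracyCallback(tf.keras.callbacks.Callback):
    """Hypothetical callback: computes accuracy with `evaluate` at the end of
    each epoch and writes it into Keras `logs` instead of printing it."""

    def __init__(self, x_val, y_val):
        super().__init__()
        self.x_val = x_val
        self.y_val = y_val
        self.acc = evaluate.load("accuracy")

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        preds = np.argmax(self.model.predict(self.x_val, verbose=0), axis=-1)
        result = self.acc.compute(predictions=preds, references=self.y_val)
        # Storing the value in `logs` lets Keras pick it up in the training
        # history rather than relying on a manual print statement.
        logs["val_evaluate_accuracy"] = result["accuracy"]
```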
A typo from the last commit accidentally broke a code block.
Hi @lvwerra, I checked and it looks like I am up to date with main, yet the tests are still failing. It looks like there are conflicts that may be causing an issue? Anything I should do to fix this? Sorry in advance!
@arjunpatel7 Did you try merging from the upstream branch? Add the upstream branch and check:

git remote add upstream [email protected]:huggingface/evaluate.git
git remote -v

Then simply merge with the upstream branch:

git fetch upstream
git merge upstream/main
Thanks @awinml! Looks like I was able to resolve the merge conflict here, but I wasn't able to pass all the tests. After doing that, I pulled my changes locally and followed your steps. It looks like I'm still up to date. Maybe something else is going on?
Yes, the Windows tests have been a bit flaky lately. I ran the pipeline again, but since these are only documentation changes this should be fine. Thanks for updating the PR!
```python
print("Test accuracy is : ", acc.compute(predictions=test_preds, references=test_labels))
```
Would you mind adding the output of the print (replace the number with the real one) after the code (e.g. as a comment)?
# Test accuracy is : 0.9999...
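(Side note, not from the thread itself: `compute` returns a dict, so if the comment mirrors the raw printed output it would look roughly like the line below; the accuracy value shown is purely illustrative.)

```python
print("Test accuracy is : ", acc.compute(predictions=test_preds, references=test_labels))
# Test accuracy is :  {'accuracy': 0.99}
```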
Could you also add a reference to the section in the toctree?
Yep, will add both of those when I have time next, probably Monday :). Thanks!
Fixed the incorrect ref addition to the toctree, and added a comment describing the accuracy score of the example in the Keras documentation.
@lvwerra, both changes should be added now!
The documentation is not available anymore as the PR was closed or merged.
Changes Made
This PR adds an example of using Evaluate with Keras and Tensorflow, specifically inside a callback while training and afterwards while evaluating model performance.
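(As a rough, self-contained sketch of that post-training evaluation pattern, not the code from the added doc; the tiny synthetic dataset, the model, and all variable names are placeholders of my own. The in-training callback use is sketched earlier in the thread.)

```python
import numpy as np
import tensorflow as tf
import evaluate

# Placeholder data standing in for a real train/test split.
x_train = np.random.rand(256, 20).astype("float32")
y_train = np.random.randint(0, 2, size=256)
x_test = np.random.rand(64, 20).astype("float32")
y_test = np.random.randint(0, 2, size=64)

# Small stand-in classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x_train, y_train, epochs=2, verbose=0)

# After training, score the held-out test set with the `evaluate` accuracy metric.
acc = evaluate.load("accuracy")
test_preds = np.argmax(model.predict(x_test, verbose=0), axis=-1)
print("Test accuracy is : ", acc.compute(predictions=test_preds, references=y_test))
```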
Issue
Deals with issue #326
Comments
I followed the example from @awinml's PR and added a separate keras_integrations.md doc. I also noticed a typo in the contribution docs located under docs: the build docs command uses the transformers library instead of the evaluate one, so heads up!