test,flake: async-hooks/test-fseventwrap flaky on Travis #21310
Comments
I mean, is it flaky if the device is out of space?
And here I was thinking Travis would give us a stable infrastructure for lightweight test runs ¯\_(ツ)_/¯
I think so, since this will result in failures on PRs. But are there any actions we can take to avoid this?
If the test does not run reliably and gives intermittent false negatives, IMHO that's the definition of flaky. Anyway, this issue is here so that peeps seeing this can cross-reference.
According to Travis's documentation there should be about 9 GB of disk space available for the configuration we're currently using. If we instead used a VM (…). Has anyone tried to rerun an instance with debug mode enabled so you can ssh in and verify disk space availability?
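For what it's worth, a throwaway sketch of that disk-space check (assuming a Linux instance where `df` is available; written as a Node script only to match the rest of this thread):

```js
// Sketch: print free disk space for the current working directory by
// shelling out to `df -h .` (available on standard Linux images).
const { execSync } = require('child_process');
console.log(execSync('df -h .', { encoding: 'utf8' }));
```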
My guess is that it's not an actual out-of-space condition, but some other fragility. We do run the tests, and they pass, in our CI cluster on machines with ~1 GB of free disk space. I was quick to …
Hrmm... from what I'm finding it could also be that the maximum number of system file watchers is too low. This could be addressed by raising the limit with `echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p`, using 524288 or some other value larger than whatever is currently set on Travis.
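As a quick sanity check, a minimal sketch (assuming a Linux host, which is what Travis runs here) for reading the current limit back out of procfs; 524288 above is just a commonly used larger value, not something mandated by the test:

```js
// Sketch: read the current per-user inotify watch limit on Linux.
// /proc/sys/fs/inotify/max_user_watches mirrors the sysctl of the same name.
const fs = require('fs');

const limit = parseInt(
  fs.readFileSync('/proc/sys/fs/inotify/max_user_watches', 'utf8'),
  10
);
console.log(`fs.inotify.max_user_watches = ${limit}`);
```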
Flaky on CI too: https://ci.nodejs.org/job/node-test-commit-linux/22236/nodes=ubuntu1804-docker/console
22:32:46 not ok 2100 async-hooks/test-fseventwrap
22:32:46 ---
22:32:46 duration_ms: 0.208
22:32:46 severity: fail
22:32:46 exitcode: 1
22:32:46 stack: |-
22:32:46 internal/fs/watchers.js:173
22:32:46 throw error;
22:32:46 ^
22:32:46
22:32:46 Error: ENOSPC: System limit for number of file watchers reached, watch '/home/iojs/build/workspace/node-test-commit-linux/nodes/ubuntu1804-docker/test/async-hooks/test-fseventwrap.js'
22:32:46 at FSWatcher.start (internal/fs/watchers.js:165:26)
22:32:46 at Object.watch (fs.js:1269:11)
22:32:46 at Object.<anonymous> (/home/iojs/build/workspace/node-test-commit-linux/nodes/ubuntu1804-docker/test/async-hooks/test-fseventwrap.js:16:20)
22:32:46 at Module._compile (internal/modules/cjs/loader.js:706:30)
22:32:46 at Object.Module._extensions..js (internal/modules/cjs/loader.js:717:10)
22:32:46 at Module.load (internal/modules/cjs/loader.js:604:32)
22:32:46 at tryModuleLoad (internal/modules/cjs/loader.js:543:12)
22:32:46 at Function.Module._load (internal/modules/cjs/loader.js:535:3)
22:32:46 at Function.Module.runMain (internal/modules/cjs/loader.js:759:12)
22:32:46 at startup (internal/bootstrap/node.js:303:19)
22:32:46 ...
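For context, the failing call boils down to `fs.watch()`: on Linux it registers an inotify watcher, and once the per-user watcher limit is exhausted the call throws ENOSPC even when the disk has plenty of free space. A minimal illustrative sketch (not the actual test) of that behavior:

```js
// Sketch: fs.watch() throws ENOSPC when the kernel's inotify watcher limit
// is reached, matching the stack trace above. Watching __filename is
// arbitrary; any existing path would do.
const fs = require('fs');

try {
  const watcher = fs.watch(__filename, () => {});
  watcher.close();
  console.log('watcher registered and closed');
} catch (err) {
  if (err.code === 'ENOSPC') {
    // Not a disk-space problem: fs.inotify.max_user_watches (or
    // max_user_instances) has been exhausted on this machine.
    console.error('inotify limit reached:', err.message);
  } else {
    throw err;
  }
}
```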
Closing as there has been no activity in more than three years and we don't use Travis anymore.
Branch: master
CI: Travis
Console output: https://travis-ci.com/nodejs/node/jobs/128751722#L9568