Have TestExecutor run multiple tests with a single instance of node #1383

Merged
merged 9 commits into from Oct 24, 2016
Conversation

ozyx
Contributor

@ozyx ozyx commented Oct 21, 2016

Hello!

In this batch of changes, I've enabled TestExecutor to run multiple tests with a single instance of node. I've also changed run_tests.js to accept a list of tests and return a list of results, and I've modified the test frameworks to record results and test durations.
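A rough sketch of the reshaped entry point (the runTests name, the testInfoList fields, and the framework lookup are illustrative, not the exact NTVS code):

// run_tests.js (sketch): dispatch a list of tests to the matching framework
// module and hand back a list of results, one per test.
function runTests(testInfoList, callback) {
    // All tests in a batch are assumed to target the same framework.
    var framework = require('./' + testInfoList[0].framework + '.js');
    framework.run_tests(testInfoList, function(results) {
        callback(results);
    });
}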

Some caveats and questions:

  • For tape.js, I haven't found a good way to determine test pass/failure, so tape results are not 100% accurate right now. Using the template provided by NTVS, both tests report as passing (the expected outcome is one pass and one fail). I looked at the API but couldn't find anything exposed that would let me determine whether a test passed. Maybe I'm looking in the wrong place... Any suggestions?
  • I've altered RecordEnd to accept a list of results. Since I call RecordStart for every test in the list up front, the VS Test Explorer shows all tests as running until they all finish at the same time, rather than marking each test as running while the framework is actually executing it. This is not ideal, but I can't think of a way to do it properly while passing around a list of tests... Let me know if there are any issues with this; I can elaborate if need be.
  • Tests taking 0ms don't display as '<1ms' in VS Test Explorer. Do you know why? To record test duration I simply time the tests within the test frameworks themselves and then pass the value of result.time to TestResult.Duration (see the sketch after this list).
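As a minimal sketch of that timing approach (timeTest and the result shape are hypothetical names, not the actual NTVS API):

// Hypothetical wrapper around a single framework test run.
function timeTest(title, execute, done) {
    var start = Date.now();
    execute(function(passed) {
        done({
            title: title,
            passed: passed,
            time: Date.now() - start // milliseconds, passed through to TestResult.Duration
        });
    });
}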

Please let me know if there are any areas where I should take a second look or improve. At this point the changes are functional but need more testing. Mocha tests now run in the proper order (your grep suggestion worked like a charm, thanks!)

Thanks again

@msftclas

Hi @ozyx, I'm your friendly neighborhood Microsoft Pull Request Bot (You can call me MSBOT). Thanks for your contribution!
You've already signed the contribution license agreement. Thanks!

The agreement was validated by Microsoft and real humans are currently evaluating your PR.

TTYL, MSBOT;

@mjbvz
Contributor

mjbvz commented Oct 24, 2016

Great progress! My apologies for not being able to review this before the weekend.


Support for Tape.js was contributed externally, so I am not too familiar with its API. I think you'll need to change the runner to do something like this, though:

// Inside run_tests
var tape = findTape(testInfo[0].projectFolder);
var harness = tape.getHarness();

var results = [];
testInfo.forEach(function(info) {
    runTest(info, harness, function(result) {
        results.push(result);
    });
});

// https://github.com/substack/tape#testonfinishfn
tape.onFinish(function() {
    callback(results);
});



function runTest(testInfo, harness, done) {
    var title = testInfo.testName;
    var start = Date.now();
    try {
        var htest = harness(title, {}, function() {
            // invoked when tape starts running this test
            start = Date.now();

            // hook up stderr and stdout here
        });

        // https://github.com/substack/tape#var-stream--testcreatestreamopts
        var stream = htest.createStream({ objectMode: true });

        stream.on('data', function(result) {
            var runTime = Date.now() - start;

            // get stdout and stderr here, then report the result
            done({
                title: title,
                passed: result.ok,
                time: runTime
                // ... other result fields elided
            });
        });
    } catch (e) {
        done({
            title: title,
            passed: false,
            time: 0
            // ... other result fields elided
        });
    }
}

I believe the proper way to solve this would be to change the test runner to also emit test-started events before each test is run. The C# code would read these events and handle them, which would allow you to call RecordStart at the proper time for each test.

Making this work would also require the test results to be returned incrementally instead of as one array.

Let me know if you want to look into this further.


I'm not sure about the short-running test problem. What is displayed right now? Nothing?


Otherwise, I'm going to merge this in. I left a few comments, but these are mostly just things to consider and can be handled in a later review.

Please let me know if anything is unclear or if you would like help testing this.

@mjbvz mjbvz merged commit 4a188f6 into microsoft:test-adapter-improvements Oct 24, 2016
@ozyx ozyx deleted the test-adapter-improvements branch October 24, 2016 20:14
@ozyx
Contributor Author

ozyx commented Oct 24, 2016

No worries! I hope you enjoyed your weekend.

  • Tape.js stuff looks promising. I'll be implementing this next.
  • Yes, this is also what I was thinking. I'm not too familiar with event handling between JavaScript and C#, but I think this approach would be better and more in line with what the expected functionality of the Test Explorer is in the first place. What should I look into in terms of events between JavaScript and C#?
  • Yes, instead of displaying <1ms it displays nothing. This could be because Date.now() is only accurate to milliseconds, so TestResult.Duration can be 0. It could also be because I no longer specify TestResult.StartTime. If we start emitting events when tests begin running, we could start recording this again (a higher-resolution timing alternative is sketched below).
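If millisecond resolution is the culprit, here is a sketch of sub-millisecond timing with Node's process.hrtime() (standard Node API; the timeTest name is hypothetical):

// process.hrtime() returns [seconds, nanoseconds], avoiding the 1ms floor of Date.now().
function timeTest(execute, done) {
    var start = process.hrtime();
    execute(function() {
        var diff = process.hrtime(start);
        done(diff[0] * 1e3 + diff[1] / 1e6); // fractional milliseconds
    });
}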

As for testing, I've done some manual testing using VSIX but nothing extensive. This is my first significant contribution to an open-source project and I'm a bit new to testing, as well. In order for this to be merged I should be writing tests, right? Where should I start with this? Any tips on testing are welcome and appreciated.

Thanks!

@mjbvz
Contributor

mjbvz commented Oct 24, 2016

For communicating between the test runner's node process and the C# code, you could look into some of the communication channel approaches I mentioned previously, or you can keep using stdout as the channel.

The main difference between what the code is currently doing and my proposal is that you'd send results incrementally, as soon as they become available. Here's an overview of what this would look like:

  1. C# code spawns node test runner and sends it a JSON list of tests to run.
  2. In the C# code, start a loop that processes lines received from stdout of the node test runner.
  3. In the test runner JavaScript code, create a callback function called postResult that will be invoked when each test is completed.
    • This function takes a result object, and converts it to a single line JSON string, logging this string to stdout.
  4. For each test:
    • Schedule it to be run, passing in the postResult callback to be invoked when test is completed.
  5. Once all tests have been run on the JavaScript side, terminate the program (see the sketch after this list).
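A minimal sketch of the JavaScript side of this protocol (postResult, runTest, and testList are illustrative names, not existing NTVS code):

// Write one JSON result per line to stdout, where the C# loop picks it up.
function postResult(result) {
    console.log(JSON.stringify(result));
}

// Run each test, reporting results incrementally as they complete.
var remaining = testList.length;
testList.forEach(function(info) {
    runTest(info, function(result) {
        postResult(result);
        if (--remaining === 0) {
            process.exit(0); // all results sent; terminate the runner
        }
    });
});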

This would then allow you to easily add support for additional events, such as when a test starts. To support these, in the JS code, instead of sending results directly, send event objects:

{
    "type": "result",
    "result": {
        "title": "my test",
        "passed": true,
        ...
    }
}

Then you can define additional events for the C# code to handle.

Right before a test is run, for example, you could send a test start event:

{
    "type": "testStart",
    "title": "my test"
}

The C# code would then only have to tell what type of event it received on stdout, based on the type field, and handle the event appropriately. Adding a test start event would allow you to correctly set the test run times and durations (a sketch of the emitting side follows).
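On the JavaScript side, emitting the typed events could look like this (postEvent and the literal titles are illustrative):

// Emit a typed event as a single JSON line on stdout.
function postEvent(event) {
    console.log(JSON.stringify(event));
}

// Right before running a test:
postEvent({ type: 'testStart', title: 'my test' });

// When the test completes:
postEvent({ type: 'result', result: { title: 'my test', passed: true } });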


For testing, unfortunately we do not have a good automated testing story in this area, so most of it has to be manual testing. I recommend that you set up a few projects that cover the core test runner scenarios. Make sure to hit all the test frameworks: export runner, Mocha (both v2 and v3), and Tape. Then start thinking of other cases that should be handled properly too. This could include:

  • Running only selected tests.
  • Multiple tests in multiple files.
  • Test files in different folders.
  • Tests that write to stdout or stderr.
  • The test runner throws an exception (perhaps the test library was not found) or crashes.
  • There is a long-running test.
  • Multiple test frameworks used in a single project (both Export and Mocha, for example).
  • Asynchronous tests (usually this should be handled by the test library itself, but it's worth checking).
  • Tests with odd names.
  • Tests with the same name.
  • ...

Just make sure to document what should be tested and how. We'll add those notes to our test plans. This doesn't have to be terribly extensive, but it should give a developer some idea of what to cover on a test pass.

Once we are confident that the new test runner code handles the majority of cases well enough, we'll merge the code into master and start getting it into people's hands using devbuilds. This will give the code some real world exposure and probably reveal some bugs that will have to be addressed before shipping.
