Expose CARGO_FEATURE_<foo> environment variables to tests #2517
Conversation
Thanks for the pull request, and welcome! The Rust team is excited to review your changes, and you should hear from @huonw (or someone else) soon. If any changes to this PR are deemed necessary, please add them as extra commits. This ensures that the reviewer can see what has changed since they last reviewed the code. Due to the way GitHub handles out-of-date commits, this should also make it reasonably obvious what issues have or haven't been addressed. Large or tricky changes may require several passes of review and changes. Please see the contribution instructions for more information.
r? @alexcrichton

I'm unsure if this has been intentionally unimplemented!
Isn't this possible via routes like: `pub fn foo_activated() -> bool { cfg!(feature = "foo") }`
It is possible, but it can be error prone, since the feature list then has to be respecified in each place and kept in sync by hand.

More concretely, the intended use case is to have some infrastructure to test examples within a project, shelling out to cargo for the builds. It does it this way so that whatever features the user enables for the test run are replicated in the example builds. Currently we have something like the above in place for […]:

```rust
#[test]
fn foo() {
    // the Example lib knows to enumerate CARGO_FEATURE_* and add it to the sub-builds
    let process = Example::run("bar");
    assert!(process.stdout.contains("foo"));
}

// vs

fn active_features() -> Vec<Option<&'static str>> {
    vec![
        if cfg!(feature = "baz") { Some("baz") } else { None },
    ]
}

#[test]
fn foo() {
    // would need to respecify this in every test, since we don't have any kind of global init for test executables
    let process = Example::run_with_features("bar", active_features());
    assert!(process.stdout.contains("foo"));
}
```

Since all of the tests will have been compiled with the same set of features, and all of the example builds should be replicating that active set, every test has to make the same calls, or the user has to implement their own wrapper to DRY it up.

Perhaps an alternative would be to allow examples to have test code themselves (which cargo would know about). That would keep things simpler in many ways and add a form of additional documentation to examples (what the expected output is, how to test it). I have seen some projects which have their own […].

Very open to better suggestions!
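A rough sketch of the kind of `Example` helper described above, assuming the `CARGO_FEATURE_*` variables this PR proposes are visible to the test at runtime; the `Example` and `ExampleOutput` names are illustrative rather than an existing API, and the feature-name recovery from environment variable names is lossy:

```rust
use std::env;
use std::process::Command;

struct ExampleOutput {
    stdout: String,
}

struct Example;

impl Example {
    /// Build and run an example, replicating the feature set the test
    /// binary itself was built with.
    fn run(name: &str) -> ExampleOutput {
        // Recover feature names from the environment. Note the mapping is
        // approximate: env var names are uppercased with `-` turned into `_`.
        let features: Vec<String> = env::vars()
            .filter(|(k, _)| k.starts_with("CARGO_FEATURE_"))
            .map(|(k, _)| k["CARGO_FEATURE_".len()..].to_lowercase())
            .collect();

        let output = Command::new("cargo")
            .args(&["run", "--example", name])
            .arg("--features")
            .arg(features.join(" "))
            .output()
            .expect("failed to spawn cargo");

        ExampleOutput {
            stdout: String::from_utf8_lossy(&output.stdout).into_owned(),
        }
    }
}
```

With something along these lines, the first test block above would transparently forward whatever features the outer `cargo test` invocation enabled into the example sub-build.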
Oh, these variables are also set at runtime for tests! Isn't that just another footgun in a sense? I think lots of tools like to be able to run binaries directly rather than through Cargo. Could the test harness provide this itself?
Sorry for the delayed response, lost track of this!
This is something I hadn't considered at all and it probably would have been broken for a lot of cross compilation stuff. Good point!
I didn't quite follow this: what is the test harness in this case? The user's test code, or the supporting test library? If it's the latter, then I don't know how it could work, as it wouldn't know which features to check for, since it's meant to be generic upstream library code. Perhaps something like […]?

I've opened #2667 in relation to the example testing.
Oh, I just meant that in your example the […]. Something like […].
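One way to sidestep the runtime-environment concern raised above (test binaries being run directly rather than through Cargo) is for the crate to bake its feature list in at compile time from its own build script, which does see `CARGO_FEATURE_*`. A minimal sketch, assuming a Cargo version with `cargo:rustc-env` support; the `ACTIVE_FEATURES` name is made up for illustration:

```rust
// build.rs
use std::env;

fn main() {
    // Build scripts already receive CARGO_FEATURE_<name> for each active feature.
    let features: Vec<String> = env::vars()
        .filter(|(k, _)| k.starts_with("CARGO_FEATURE_"))
        .map(|(k, _)| k["CARGO_FEATURE_".len()..].to_lowercase())
        .collect();

    // Expose the list to the crate (including its tests) at compile time; it
    // can then be read with env!("ACTIVE_FEATURES") regardless of how the
    // compiled test binary is later invoked.
    println!("cargo:rustc-env=ACTIVE_FEATURES={}", features.join(","));
}
```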
☔ The latest upstream changes (presumably #2743) made this pull request unmergeable. Please resolve the merge conflicts.

Closing due to inactivity, but feel free to reopen with a rebase!
This can be useful if you need to enumerate the active features within an integration test which spawns builds.
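A minimal illustration of what the change would enable, assuming the runtime variables follow the same naming convention as the ones already passed to build scripts (feature name uppercased, `-` mapped to `_`); the `foo` feature here is hypothetical:

```rust
#[test]
fn feature_env_vars_are_visible() {
    // With this PR, `cargo test --features foo` would set CARGO_FEATURE_FOO
    // in the environment of the test process itself, not just the build script.
    if cfg!(feature = "foo") {
        assert!(std::env::var("CARGO_FEATURE_FOO").is_ok());
    }
}
```

Note that such an assertion only holds when the test binary is started via `cargo test`, which is exactly the concern raised above about tools that run test binaries directly.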