If a package supports Android, the command will now report failure instead of a skip when no tests run. This matches the new behavior of drive-examples, and is intended to prevent a recurrence of situations where we silently fail to run tests because of, e.g., tests being in the wrong directory.
Also fixes a long-standing but unnoticed problem where a run that tried to run more than one package's tests would hang forever (although on the bots it doesn't seem to time out; the logs just end abruptly) due to a logic error in the call that configures gcloud.
Fixes flutter/flutter#86732
The purpose of this PR is to make running all unit tests on Windows pass (vs. failing a large portion of the tests, as currently happens). This does not mean that the commands actually work when run on Windows, or that Windows support is tested, only that it is possible to run the tests themselves. This is prep for actually supporting parts of the tool on Windows in future PRs.
Major changes:
- Make the tests significantly more hermetic:
  - Make almost all tools take a `Platform` constructor argument that can be used, under test, to inject a mock platform controlling which OS the command behaves as if it is running on (see the sketch after this list).
  - Add a path `Context` object to the base command, whose style matches the `Platform`, and use it almost everywhere instead of the top-level `path` functions.
  - In cases where Posix behavior is always required (such as parsing `git` output), explicitly use the `posix` context object for `path` functions.
- Start laying the groundwork for actual Windows support:
  - Replace all uses of `flutter` as a command with a getter that returns `flutter` or `flutter.bat` as appropriate.
  - For user messages that include relative paths, use a helper that always produces Posix-style relative paths, for consistent output across platforms.
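As a rough illustration of the injection pattern (a sketch with hypothetical names, not the tool's actual classes), a command can take a `Platform` from `package:platform` and derive a matching `path` `Context` from `package:path`:

```dart
import 'package:path/path.dart' as p;
import 'package:platform/platform.dart';

/// Sketch of a command that gets its platform injected instead of
/// reading the real host OS directly.
class ExampleCommand {
  ExampleCommand({this.platform = const LocalPlatform()})
      : path = p.Context(
            style: platform.isWindows ? p.Style.windows : p.Style.posix);

  /// The platform the command behaves as if it is running on.
  final Platform platform;

  /// Path context whose style matches [platform]; used in place of the
  /// top-level `path` functions.
  final p.Context path;

  /// The flutter executable name for [platform].
  String get flutterCommand => platform.isWindows ? 'flutter.bat' : 'flutter';
}

void main() {
  // Under test, force Windows behavior regardless of the host OS.
  final ExampleCommand command =
      ExampleCommand(platform: FakePlatform(operatingSystem: 'windows'));
  print(command.flutterCommand); // flutter.bat
  print(command.path.join('example', 'lib')); // example\lib
}
```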
This bumps the version since quite a few changes have built up, and having a cut point before starting to make more changes to the commands to support Windows seems like a good idea.
Part of https://github.com/flutter/flutter/issues/86113
Many commands had insufficient failure testing. This adds new tests that
ensure that, for every Process call, at least one test fails if a failure
from that process were ignored. (Calls in the `publish` command are the
exception: it has a custom process mocking system, so it was out of scope
here, though it already had more failure coverage than most commands' tests did.)
For a few existing failure tests, adds output checks to ensure that they
are testing for the *right* failures.
Other changes:
- Adds convenience constructors to MockProcess for the common cases of a
  mock process that just exits with a 0 or 1 status, to reduce test
  verbosity (see the sketch after this list).
- Fixes a few bugs that were found by the new tests.
- Minor test cleanup, especially cases where a mock process was being
set up just to make all calls succeed, which is the default as of
recent changes.
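For illustration, the exit-code-only cases can be covered with constructors along these lines (a sketch; the repository's actual `MockProcess` differs in detail):

```dart
import 'dart:io';

/// Sketch of a Process fake whose only interesting behavior is its exit
/// code; stream and signal handling are stubbed out.
class MockProcess implements Process {
  MockProcess(this._exitCode);

  /// Convenience constructors for the common success/failure cases.
  MockProcess.succeeding() : this(0);
  MockProcess.failing() : this(1);

  final int _exitCode;

  @override
  Future<int> get exitCode async => _exitCode;

  @override
  Stream<List<int>> get stdout => const Stream<List<int>>.empty();

  @override
  Stream<List<int>> get stderr => const Stream<List<int>>.empty();

  @override
  IOSink get stdin => throw UnsupportedError('stdin is not mocked');

  @override
  int get pid => 99;

  @override
  bool kill([ProcessSignal signal = ProcessSignal.sigterm]) => true;
}
```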
The intent of package-looping commands is that we always run them for all the targeted packages, accumulating failures, and then report them all at the end, so that we don't end up finding errors only one at a time. However, some of them were using `exitOnError: true` for the underlying commands, causing the first error to be fatal to the entire run.
This PR:
- Removes all use of `exitOnError: true` from the package-looping commands. (It's still used in `format`, but fixing that is a larger issue, and less important due to the way `format` is currently structured.)
- Fixes the mock process runner used in our tests to correctly simulate `exitOnError: true`; the fact that it didn't was hiding this problem in tests (e.g., `drive-examples` had a test asserting that all failures were summarized correctly, which passed only because under test the command behaved as if `exitOnError` were false).
- Adjusts the mock process runner to allow setting a list of mock results for a specific executable, instead of one result for all calls to anything (see the sketch after this list). This fixes some tests that were broken by the two changes above and were unfixable without it (e.g., a test of a command that called one executable with `exitOnError: true` and then another with `exitOnError: false`, asserting things about the second call failing, which only worked because the first call's failure wasn't actually checked). To limit the scope of the PR, the old method of setting a single result for all calls is still supported for now as a fallback.
- Fixes the fact that the mock `run` and `runAndStream` had opposite default exit-code behavior when no mock process was set (that mismatch caused me a lot of confusion while fixing the above until I figured it out).
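A rough sketch of the per-executable queueing and `exitOnError` simulation described above (hypothetical names; the real recording runner differs in detail):

```dart
import 'dart:io';

/// Sketch of a process-runner fake that returns queued results per
/// executable and simulates exitOnError; names are hypothetical.
class FakeProcessRunner {
  /// Queued results keyed by executable name; each call consumes the
  /// next result in that executable's queue.
  final Map<String, List<ProcessResult>> resultsByExecutable =
      <String, List<ProcessResult>>{};

  Future<ProcessResult> run(
    String executable,
    List<String> args, {
    bool exitOnError = false,
  }) async {
    final List<ProcessResult>? queue = resultsByExecutable[executable];
    // Default to success when nothing is queued; the real fix made
    // `run` and `runAndStream` agree on this default.
    final ProcessResult result = (queue == null || queue.isEmpty)
        ? ProcessResult(0, 0, '', '')
        : queue.removeAt(0);
    // Mirror the real runner: a nonzero exit is fatal when
    // exitOnError is true.
    if (exitOnError && result.exitCode != 0) {
      throw ProcessException(
          executable, args, 'exited with ${result.exitCode}', result.exitCode);
    }
    return result;
  }
}
```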
Add a summary to the end of successful runs for everything using the new looping base command, similar to what we do for summarizing failures. This will make it easy to manually check results for PRs that we know should change the set of packages run (adding a new package, adding a new test type to a package, adding a new test type to the tool), as well as to spot-check when we see unexpected results (e.g., looking back at why a PR didn't fail CI when we discover that it should have).
To better surface skips, this restructures the return value of `runForPackage` so that "skip" is one of the options. Since skipping is now a return value, commands that used `printSkip` to indicate that *parts* of the command were being skipped have been changed to no longer do that.
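A minimal sketch of a return type with an explicit skip state (illustrative; the tool's actual class may differ):

```dart
/// Possible outcomes of running a command for a single package.
enum RunState { succeeded, skipped, failed }

/// Sketch of a per-package result that makes "skip" an explicit
/// outcome rather than something printed mid-run.
class PackageResult {
  const PackageResult.success() : this._(RunState.succeeded);
  const PackageResult.skip(String reason)
      : this._(RunState.skipped, details: reason);
  const PackageResult.fail([List<String> errors = const <String>[]])
      : this._(RunState.failed, errors: errors);

  const PackageResult._(this.state,
      {this.details, this.errors = const <String>[]});

  final RunState state;

  /// For skips, the reason the package was skipped.
  final String? details;

  /// For failures, the individual errors to include in the summary.
  final List<String> errors;
}
```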
Fixes https://github.com/flutter/flutter/issues/85626
Migrates firebase-test-lab to use the new package-looping base command.
Other changes:
- Extracts several helpers to make the main flow easier to follow.
- Removes support for finding and running `*_e2e.dart` files, since we no longer use that file structure for integration tests.
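For context, an illustrative sketch of discovering tests under the `integration_test/` layout that replaced `*_e2e.dart` files (simplified; a hypothetical helper, not the command's actual code):

```dart
import 'dart:io';

import 'package:path/path.dart' as p;

/// Sketch: find integration tests under an example's integration_test/
/// directory. Error handling and filtering are simplified.
Iterable<File> findIntegrationTests(Directory example) {
  final Directory testDir =
      Directory(p.join(example.path, 'integration_test'));
  if (!testDir.existsSync()) {
    return const <File>[];
  }
  return testDir
      .listSync(recursive: true)
      .whereType<File>()
      .where((File file) => file.path.endsWith('_test.dart'));
}
```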
Part of https://github.com/flutter/flutter/issues/83413