Parameterisation is not actually needed the way things are currently
set up, and it confused me when trying to understand what the code does.
All but one test sources vars-and-functions.sh, which nominally only
defines variables, but in practice is always coupled with the actual
initialisation. While the cleaner way of making this more legible would
be to source the variable definitions and the initialisation separately,
that would produce a huge diff.
The change requires a few small fixes to keep the tests working:
- Only create the test home directory during initialisation.
  That vars-and-functions.sh wrote to the file system seems not right.
- Fix creation of the test directory.
  Due to statefulness, creating the test home directory was implicitly
  creating the test root, too. Decoupling that made it apparent that
  this was probably not intentional, and certainly confusing.
- Only source vars-and-functions.sh when init.sh is not needed.
  There is one test case that only needs a helper function but none of
  the initialisation side effects (sketched below).
- Remove some unnecessary cleanups and split parts of re-used test code.
  There were confusing bits in how initialisation code was repurposed,
  which would break when trying to refactor the outer layers naively.
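A sketch of the intended distinction (paths abbreviated, not the literal
diff):

```bash
# the common case: a test wants both the definitions and the
# initialisation side effects (test home directory, etc.)
source vars-and-functions.sh
source init.sh

# the exception: a test that only needs a helper function sources just
# the definitions and skips the side effects
source vars-and-functions.sh
```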
The basic idea here is to separate a few intertwined notions:
1. Not all "run bash tests" are "install tests"
2. Not all "run bash tests" use `tests/functional/init.sh`, or any
pre-test initialization at all.
This will be used in the next commit, when we have a test that checks
unit test golden master data.
Also, move our custom `PS4` from the tests to the test runner, as it is
part of how we want to display the tests, not part of the tests
themselves.
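For illustration (the exact string now lives in the test runner), a
`PS4` of this shape prefixes every traced command with its source file
and line number when a test runs under `set -x`:

```bash
# single-quoted so it is expanded at trace time, not at assignment time
export PS4='+(${BASH_SOURCE[0]-$0}:$LINENO) '
```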
Co-authored-by: Robert Hensing <roberth@users.noreply.github.com>
Use `set -u` and `set -o pipefail` to catch accidental mistakes and
failures more strongly.
- `set -u` catches the use of undefined variables
- `set -o pipefail` catches failures earlier in the pipeline (which
  `set -e` alone would miss, since it only checks the status of the
  last command).
This makes the tests a bit more robust. It is nice to read code without
worrying about spurious success paths (via uncaught errors) undermining
the tests. Indeed, I caught some bugs doing this.
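A contrived sketch of the kind of mistake each option now catches
(variable and command names are made up):

```bash
set -u
set -o pipefail

# `set -u`: a typo'd variable (hypothetical name) now aborts the test
# instead of silently expanding to the empty string
resultPath="$TEST_HOEM/result"

# `set -o pipefail`: the pipeline now fails when the producer fails,
# even though `grep`, the last command in the pipeline, succeeds
some-failing-command 2>&1 | grep 'expected error message'
```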
There are a few tests where we run a command that should fail, and then
search its output to make sure the failure message is one that we
expect. Before, since the `grep` was the last command in the pipeline,
the exit code of those failing programs was silently ignored. Now, with
`set -o pipefail`, it won't be, and we have to do something so the
expected failure doesn't accidentally fail the test.
To do that we use `expect` and a new `expectStderr` to check for the
exact failing exit code. See the comments on each for why.
`grep -q` is replaced with `grepQuiet`, see the comments on that
function for why.
`grep -v`, when we just want the exit code, is replaced with
`grepInverse`; see the comments on that function for why.
`grep -q -v` together is, surprise surprise, replaced with
`grepQuietInverse`, which combines both.
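For concreteness, a sketch of the before/after shape of such a check
(command, message, and exit code are made up):

```bash
# before: if `nix foo` failed for the wrong reason, or did not fail at
# all, the pipeline still succeeded as long as grep found the text
nix foo 2>&1 | grep -q "is not allowed"

# after: the expected exit code is asserted explicitly, and the quiet
# grep goes through grepQuiet, which avoids `grep -q`'s early exit
# (early exit can SIGPIPE the producer and trip `set -o pipefail`)
expectStderr 1 nix foo | grepQuiet "is not allowed"
```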
Co-authored-by: Robert Hensing <roberth@users.noreply.github.com>
First, logic is consolidated in the shell scripts instead of being
spread between them and the makefiles. That makes it a little easier to
understand what is going on.
This would not be super interesting by itself, but it gives us a way to
debug tests more easily. *That* in turn I hope is much more compelling.
See the updated manual for details.
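The general shape is roughly this (hypothetical file name and contents;
the real interface is described in the manual): the makefile rule does
nothing but invoke a runner script, so the same logic can be re-run by
hand, e.g. under `bash -x`, when a test needs debugging.

```bash
#!/usr/bin/env bash
# hypothetical runner sketch: all per-test logic lives here rather than
# in make, so it can also be invoked directly when debugging
set -eu -o pipefail

test=$1                      # e.g. tests/functional/foo.sh

cd "$(dirname "$test")"
bash -e "$(basename "$test")"
```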
Co-authored-by: Valentin Gagarin <valentin.gagarin@tweag.io>
Co-authored-by: Eelco Dolstra <edolstra@gmail.com>
Co-authored-by: Robert Hensing <roberth@users.noreply.github.com>