This re-enables support for older bdwgc versions without complicating
the code too much.
Coroutines generally only interfere with GC during source filtering,
so this is not much of a regression on older bdwgc.
This seems preferable to conditional compilation to enable the patch,
etc.; we've already spent a lot of complexity budget on this GC-coroutine
interaction...
Manually tested by printing to stderr in both branches (sp in OS
stack, or not), and triggering a GC in a filterSource function,
e.g.:
    let
      generateTree = n: if n == 0 then "ha" else { left = generateTree (n - 1); right = generateTree (n - 1); };
    in
    builtins.deepSeq (generateTree 18) ...
Note that Darwin still uses the strategy of disabling GC, despite
having an implementation that compiles. The proper solution will be
enabled and tested later.
After the removal of the InputAccessor::fetchToStore() method, the
only remaining functionality in InputAccessor was `fingerprint` and
`getLastModified()`, and there is no reason to keep those in a
separate class.
This missing GC root wasn't much of a problem before, because the
heap would end up with a reference to the `baseEnv` pretty soon,
but when unit testing, the construction of `EvalState` doesn't
necessarily happen well before GC runs for the first time.
Found while unit testing the Rust bindings that currently reside
at https://github.com/nixops4/nixops4/tree/main/rust
At this point many features are stripped out, but this works:
- Can run libnix{util,store,expr} unit tests
- Can run some Nix commands
Co-authored-by: volth <volth@volth.com>
Co-authored-by: Brian McKenna <brian@brianmckenna.org>
Thunks are now overwritten by a helper function
`Value::finishValue(newType, payload)` (where `payload` is the
original anonymous union inside `Value`). This helps to ensure we
never update a value elsewhere, since that would be incompatible with
parallel evaluation (i.e. after a value has transitioned from being a
thunk to being a non-thunk, it should be immutable).
There were two places where this happened: `Value::mkString()` and
`ExprAttrs::eval()`.
This PR also adds a bunch of accessor functions for value contents,
like `Value::integer()` to access the integer field in the union.
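A minimal, self-contained sketch of the write-once discipline this enforces (the names and layout below are illustrative, not the actual `Value` internals):

    #include <cassert>
    #include <cstdint>

    enum InternalTypeSketch { tThunk, tInt, tBool };

    struct ValueSketch {
        union Payload { int64_t integer; bool boolean; };

        InternalTypeSketch internalType = tThunk;
        Payload payload;

        // The only place where a thunk is overwritten: the contents and the
        // new type are set together, after which the value is treated as
        // immutable.
        void finishValue(InternalTypeSketch newType, Payload newPayload) {
            assert(internalType == tThunk);
            payload = newPayload;
            internalType = newType;
        }

        // Accessor in the style of Value::integer(): checks the type before
        // reading the matching union field.
        int64_t integer() const {
            assert(internalType == tInt);
            return payload.integer;
        }
    };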
Previously, `state.mkList()` would set the type of the value to tList
and allocate the list vector, but it would not initialize the values
in the list. This has two problems:
* If an exception occurs, the list is left in an undefined state.
* More importantly, for multithreaded evaluation, if a value
transitions from thunk to non-thunk, it should be final (i.e. other
threads should be able to access the value safely).
To address this, there now is a `ListBuilder` class (analogous to
`BindingsBuilder`) to build the list vector prior to the call to
`Value::mkList()`. Typical usage:
    auto list = state.buildList(size);
    for (auto & v : list)
        v = ... set value ...;
    vRes.mkList(list);
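For illustration, a rough sketch of the shape of this design (assumed names, not the real `ListBuilder`/`Value` classes): the element vector is filled in first, and the value is published only once, in its final state.

    #include <cstddef>
    #include <vector>

    struct ValueSketch;

    // Builds the element vector up front, analogous to BindingsBuilder.
    struct ListBuilderSketch {
        std::vector<ValueSketch *> elems;
        explicit ListBuilderSketch(std::size_t size) : elems(size, nullptr) {}
        auto begin() { return elems.begin(); }
        auto end() { return elems.end(); }
    };

    struct ValueSketch {
        std::vector<ValueSketch *> list;
        // Single, final write: the value never exposes a half-built list.
        void mkList(const ListBuilderSketch & b) { list = b.elems; }
    };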
we now keep not a table of all positions, but a table of all origins and
their sizes. position indices are now direct pointers into the virtual
concatenation of all parsed contents. this slightly reduces memory usage
and time spent in the parser, at the cost of not being able to report
positions if the total input size exceeds 4GiB. this limit is not unique
to nix though, rustc and clang also limit their input to 4GiB (although
at least clang refuses to process inputs that are larger, we will not).
this new 4GiB limit probably will not cause any problems for quite a
while, all of nixpkgs together is less than 100MiB in size and already
needs over 700MiB of memory and multiple seconds just to parse. 4GiB
worth of input will easily take multiple minutes and over 30GiB of
memory without even evaluating anything. if problems *do* arise we can
probably recover the old table-based system by adding some tracking to
Pos::Origin (or increasing the size of PosIdx outright), but for the
time being this looks like more complexity than it's worth.
since we now need to read the entire input again to determine the
line/column of a position we'll make unsafeGetAttrPos slightly lazy:
mostly the set it returns is only used to determine the file of origin
of an attribute, not its exact location. the thunks do not add
measurable runtime overhead.
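a rough, self-contained sketch of the lookup this implies (types and field names are assumptions, not the actual position tables): find the origin whose byte range contains the index, then count line breaks up to the offset.

    #include <cstdint>
    #include <string>
    #include <vector>

    struct OriginSketch {
        std::uint32_t offset; // start of this input in the virtual concatenation
        std::string text;     // the parsed contents, re-read on demand in practice
    };

    struct LineColSketch { std::uint32_t line = 1, column = 1; };

    LineColSketch resolve(const std::vector<OriginSketch> & origins, std::uint32_t pos)
    {
        LineColSketch lc;
        for (const auto & o : origins) {
            if (pos < o.offset || pos - o.offset >= o.text.size()) continue;
            // Scan the owning input up to the byte offset to recover line/column.
            for (std::uint32_t i = 0, end = pos - o.offset; i < end; ++i) {
                if (o.text[i] == '\n') { ++lc.line; lc.column = 1; }
                else ++lc.column;
            }
            break;
        }
        return lc;
    }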
notably this change is necessary to allow changing the parser since
apparently nothing supports nix's very idiosyncratic line ending choice
of "anything goes", making it very hard to calculate line/column
positions in the parser (while byte offsets are very easy).
the parser modifies its inputs, which means that sharing them between
the error context reporting system and the parser itself can confuse the
reporting system. usually this led to early truncation of error context
reports which, while not dangerous, can be quite confusing.
desugaring inherit-from to syntactic duplication of the source expr also
duplicates side effects of the source expr (such as trace calls) and
expensive computations (such as derivationStrict).
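for example, desugaring `inherit (builtins.trace "effect" src) a b;` into `a = (builtins.trace "effect" src).a; b = (builtins.trace "effect" src).b;` makes the `trace` fire once per forced attribute instead of once overall.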
As discussed in the last Nix team meeting (2024-02-95), this method
doesn't belong here: `CanonPath` is a virtual/ideal absolute path format
that is not tied to any actual file system, unlike the native OS path
format, which is the only one for which a "current working directory" is
defined.
Progress towards #9205
While preparing PRs like #9753, I've had to change error messages in
dozens of code paths. It would be nice if instead of
EvalError("expected 'boolean' but found '%1%'", showType(v))
we could write
TypeError(v, "boolean")
or similar. Then, changing the error message could be a mechanical
refactor with the compiler pointing out places the constructor needs to
be changed, rather than the error-prone process of grepping through the
codebase. Structured errors would also help prevent the "same" error
from having multiple slightly different messages, and could be a first
step towards error codes / an error index.
This PR reworks the exception infrastructure in `libexpr` to
support exception types with different constructor signatures than
`BaseError`. Actually refactoring the exceptions to use structured data
will come in a future PR (this one is big enough already, as it has to
touch every exception in `libexpr`).
The core design is in `eval-error.hh`. Generally, errors like this:
state.error("'%s' is not a string", getAttrPathStr())
.debugThrow<TypeError>()
are transformed like this:
state.error<TypeError>("'%s' is not a string", getAttrPathStr())
.debugThrow()
The type annotation has moved from `ErrorBuilder::debugThrow` to
`EvalState::error`.
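A compilable sketch of the new shape (simplified; formatting, positions, and the debugger hooks are elided, and these exact helper definitions are assumptions, not the real `eval-error.hh`):

    #include <stdexcept>
    #include <string>
    #include <utility>

    struct TypeError : std::runtime_error { using std::runtime_error::runtime_error; };

    template<class ErrorT>
    struct ErrorBuilderSketch {
        std::string msg;
        [[noreturn]] void debugThrow() { throw ErrorT(msg); }
    };

    struct EvalStateSketch {
        // The exception type is now a parameter of error(); changing a
        // constructor signature turns every affected call site into a
        // compile error instead of a silent mismatch.
        template<class ErrorT, class... Args>
        ErrorBuilderSketch<ErrorT> error(std::string fmt, Args &&...) {
            return ErrorBuilderSketch<ErrorT>{std::move(fmt)}; // formatting elided
        }
    };

    int main() {
        EvalStateSketch state;
        try {
            state.error<TypeError>("'%s' is not a string", "example").debugThrow();
        } catch (const TypeError &) { /* expected */ }
    }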
This extends the `error: cannot coerce a TYPE to a string` message
to print the value that could not be coerced. This helps with debugging
by making it easier to track down where the value is being produced
from, especially in errors with deep or unhelpful stack traces.
Low-hanging fruit in the spirit of #9753 and #9754 (meaning 9999years did
all the hard work already).
This basically prints out the value that was attempted to be called as a
function, i.e.
map (import <nixpkgs> {}) [ 1 2 3 ]
now gives the following error message:
error:
… while calling the 'map' builtin
at «string»:1:1:
1| map (import <nixpkgs> {}) [ 1 2 3 ]
| ^
… while evaluating the first argument passed to builtins.map
error: expected a function but found a set: { _type = "pkgs"; AAAAAASomeThingsFailToEvaluate = «thunk»; AMB-plugins = «thunk»; ArchiSteamFarm = «thunk»; BeatSaberModManager = «thunk»; CHOWTapeModel = «thunk»; ChowCentaur = «thunk»; ChowKick = «thunk»; ChowPhaser = «thunk»; CoinMP = «thunk»; «18783 attributes elided»}
these symbols are used a *lot*, so it makes sense to cache them. this
mostly increases clarity of the code (however clear one may wish to call
the parser desugaring here), but it also provides a small performance
benefit.
there's no reason the parser itself should be doing semantic analysis
like bindVars. split this bit apart (retaining the previous name in
EvalState) and have the parser really do *only* parsing, decoupled from
EvalState.
most EvalState and Expr members defined here could be elsewhere, where
they'd be easier to maintain (not being embedded in a file with arcane
syntax) and *somewhat* more faithfully placed according to the path of
the file they're defined in.
Previously, there were two mostly-identical value printers -- one in
`libexpr/eval.cc` (which didn't force values) and one in
`libcmd/repl.cc` (which did force values and also printed ANSI color
codes).
This PR unifies both of these printers into `print.cc` and provides a
`PrintOptions` struct for controlling the output, which allows for
toggling whether values are forced, whether repeated values are tracked,
and whether ANSI color codes are displayed.
Additionally, `PrintOptions` allows tuning the maximum number of
attributes, list items, and bytes in a string that will be displayed;
this makes it ideal for contexts where printing too much output (e.g.
all of Nixpkgs) is distracting. (As requested by @roberth in
https://github.com/NixOS/nix/pull/9554#issuecomment-1845095735)
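As a sketch, such an options struct might look like the following (the field names here are assumptions rather than the exact `PrintOptions` members):

    #include <cstddef>
    #include <limits>

    // Each knob corresponds to a behaviour described above.
    struct PrintOptionsSketch {
        bool ansiColors = false;   // emit ANSI color codes
        bool force = false;        // force thunks before printing them
        bool trackRepeated = true; // detect and elide repeated values
        std::size_t maxAttrs = std::numeric_limits<std::size_t>::max();        // attrs shown per set
        std::size_t maxListItems = std::numeric_limits<std::size_t>::max();    // items shown per list
        std::size_t maxStringLength = std::numeric_limits<std::size_t>::max(); // bytes shown per string
    };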
Please read the tests for example output.
Future work:
- It would be nice to provide this function as a builtin, perhaps
`builtins.toStringDebug` -- a printing function that never fails would
be useful when debugging Nix code.
- It would be nice to support customizing `PrintOptions` members on the
command line, e.g. `--option to-string-max-attrs 1000`.
Also move `SourcePath` into `libutil`.
These changes allow `error.hh` and `error.cc` to access source path and
position information, which we can use to produce better error messages
(for example, we could consider omitting filenames when two or more
consecutive stack frames originate from the same file).
This avoids a Value allocation for empty list constants. During a `nix
search nixpkgs`, about 82% of all thunked lists are empty, so this
removes about 3 million Value allocations.
Performance comparison on `nix search github:NixOS/nixpkgs/e1fa12d4f6c6fe19ccb59cac54b5b3f25e160870 --no-eval-cache`:
maximum RSS: median = 3845432.0000 mean = 3845432.0000 stddev = 0.0000 min = 3845432.0000 max = 3845432.0000 [rejected?, p=0.00000, Δ=-70084.00000±0.00000]
soft page faults: median = 965395.0000 mean = 965394.6667 stddev = 1.1181 min = 965392.0000 max = 965396.0000 [rejected?, p=0.00000, Δ=-17929.77778±38.59610]
system CPU time: median = 1.8029 mean = 1.7702 stddev = 0.0621 min = 1.6749 max = 1.8417 [rejected, p=0.00064, Δ=-0.12873±0.09905]
user CPU time: median = 14.1022 mean = 14.0633 stddev = 0.1869 min = 13.8118 max = 14.3190 [not rejected, p=0.03006, Δ=-0.18248±0.24928]
elapsed time: median = 15.8205 mean = 15.8618 stddev = 0.2312 min = 15.5033 max = 16.1670 [not rejected, p=0.00558, Δ=-0.28963±0.29434]
since `up` and `values` are both pointer-aligned, the type field will
also be pointer-aligned, wasting 48 bits of space on most machines. we
can get away with removing the type field altogether by encoding some
information into the `with` expr that created the env to begin with,
reducing the GC load for the absolutely massive amount of single-entry
envs we create for lambdas. this reduces memory usage of system eval by
quite a bit (reducing heap size of our system eval from 8.4GB to 8.23GB)
and gives similar savings in eval time.
running `nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'`
before:
Time (mean ± σ): 5.576 s ± 0.003 s [User: 5.197 s, System: 0.378 s]
Range (min … max): 5.572 s … 5.581 s 10 runs
after:
Time (mean ± σ): 5.408 s ± 0.002 s [User: 5.019 s, System: 0.388 s]
Range (min … max): 5.405 s … 5.411 s 10 runs
This fixes a segfault on infinite function call recursion (rather than
infinite thunk recursion) by tracking the function call depth in
`EvalState`.
Additionally, to avoid printing extremely long stack traces, stack
frames are now deduplicated, with a `(19997 duplicate traces omitted)`
message. This should only really be triggered in infinite recursion
scenarios.
Before:
$ nix-instantiate --eval --expr '(x: x x) (x: x x)'
Segmentation fault: 11
After:
$ nix-instantiate --eval --expr '(x: x x) (x: x x)'
error: stack overflow
at «string»:1:14:
1| (x: x x) (x: x x)
| ^
$ nix-instantiate --eval --expr '(x: x x) (x: x x)' --show-trace
error:
… from call site
at «string»:1:1:
1| (x: x x) (x: x x)
| ^
… while calling anonymous lambda
at «string»:1:2:
1| (x: x x) (x: x x)
| ^
… from call site
at «string»:1:5:
1| (x: x x) (x: x x)
| ^
… while calling anonymous lambda
at «string»:1:11:
1| (x: x x) (x: x x)
| ^
… from call site
at «string»:1:14:
1| (x: x x) (x: x x)
| ^
(19997 duplicate traces omitted)
error: stack overflow
at «string»:1:14:
1| (x: x x) (x: x x)
| ^
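A self-contained sketch of the depth tracking (the names and the limit are assumptions, not the exact implementation):

    #include <cstddef>
    #include <stdexcept>

    struct EvalStateSketch {
        std::size_t callDepth = 0;
        std::size_t maxCallDepth = 10000; // illustrative limit
    };

    // RAII guard placed at the top of the function-call path: it increments
    // the depth, throws a catchable evaluation error instead of smashing the
    // C stack, and restores the depth even when evaluation unwinds.
    struct CallDepthGuard {
        EvalStateSketch & state;
        explicit CallDepthGuard(EvalStateSketch & s) : state(s) {
            if (state.callDepth >= state.maxCallDepth)
                throw std::runtime_error("stack overflow");
            ++state.callDepth;
        }
        ~CallDepthGuard() { --state.callDepth; }
    };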
this also reduces forceValue code size and removes the need for
hideInDiagnostics. coopting thunk forcing like this has the additional
benefit of clarifying how these errors can happen in the first place.
almost all uses of this are interactive, except for deepSeq. deepSeq is
going to be expensive and rare enough to not care much about, and
Value::determinePos should usually be cheap enough to not be too much of
a burden in any case.
checking for isBlackhole in the forceValue hot path is rather more
expensive than necessary, and with a little bit of trickery we can move
such handling into the isApp case. usually a small performance benefit,
but under some circumstances we've seen a 2% improvement as well.
〉 nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
before:
Time (mean ± σ): 4.429 s ± 0.002 s [User: 3.929 s, System: 0.500 s]
Range (min … max): 4.427 s … 4.433 s 10 runs
after:
Time (mean ± σ): 4.396 s ± 0.002 s [User: 3.894 s, System: 0.501 s]
Range (min … max): 4.393 s … 4.399 s 10 runs
According to https://en.cppreference.com/w/cpp/io/strstream, it has been
deprecated since C++98! The Clang + Linux builds do not seem to have it
at all, or at least hide it.
We can just use `std::stringstream` instead, I think.
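A sketch of the replacement pattern (not the actual call sites):

    #include <sstream>
    #include <string>

    std::string render(int n) {
        std::stringstream out; // replaces the deprecated std::strstream
        out << "value: " << n;
        return out.str();      // no freeze() / manual buffer management needed
    }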
* Print the value in `error: cannot coerce` messages
This extends the `error: cannot coerce a TYPE to a string` message
to print the value that could not be coerced. This helps with debugging
by making it easier to track down where the value is being produced
from, especially in errors with deep or unhelpful stack traces.
Co-authored-by: Valentin Gagarin <valentin.gagarin@tweag.io>
This includes position information in more places, making debugging
easier.
Before:
```
$ nix-instantiate --show-trace --eval tests/functional/lang/eval-fail-using-set-as-attr-name.nix
error:
… while evaluating an attribute name
at «none»:0: (source not available)
error: value is a set while a string was expected
```
After:
```
error:
… while evaluating an attribute name
at /pwd/lang/eval-fail-using-set-as-attr-name.nix:5:10:
4| in
5| attr.${key}
| ^
6|
error: value is a set while a string was expected
```