The old `std::variant` was a poor fit because we aren't adding a new
case to `FileIngestionMethod` so much as defining a separate concept:
store object content addressing rather than file system object content
addressing. As such, it is more correct to create a fresh enumeration.
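As a sketch of the distinction (type and variant names here are
illustrative, not necessarily the real ones):

  #include <cstdint>

  // How file system objects are ingested (hashed) - the existing concept.
  enum struct FileIngestionMethod : uint8_t {
      Flat,      // hash the file contents as-is
      Recursive, // hash a NAR serialisation of the file system object
  };

  // How store objects are content-addressed: a separate concept, hence a
  // fresh enumeration rather than another variant alternative.
  enum struct ContentAddressMethod : uint8_t {
      Text, // illustrative
      Flat,
      Recursive,
  };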
Co-authored-by: Robert Hensing <roberth@users.noreply.github.com>
This is not really part of the evaluator: it is just an integration
between Boehm GC and Boost coroutines usable for any purpose. The
evaluator (merely) optionally uses it.
This re-enables support for older bdwgc versions without complicating
the code too much.
Coroutines generally only interfere with GC during source filtering,
so it's not too bad of a regression on older bdwgc.
This seems preferable to conditional compilation to enable the patch
etc; we've already spent a lot of complexity budget on this GC-coroutine
interaction...
Manually tested by printing to stderr in both branches (sp in the OS
stack, or not) and triggering a GC in a filterSource function,
e.g.:
  let
    generateTree = n:
      if n == 0 then "ha"
      else { left = generateTree (n - 1); right = generateTree (n - 1); };
  in
    builtins.deepSeq (generateTree 18) ...
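For context, a minimal sketch of what the branch condition could look
like on Linux, using glibc's pthread introspection (illustrative only,
not the actual patch):

  #include <pthread.h>
  #include <cstddef>

  // True if `sp` lies within the current OS thread's stack. On a Boost
  // coroutine stack this returns false, and the GC must not scan from
  // `sp` as though it were the thread stack.
  static bool spInOsStack(void * sp)
  {
      pthread_attr_t attr;
      void * stackAddr = nullptr;
      size_t stackSize = 0;
      pthread_getattr_np(pthread_self(), &attr); // glibc-specific
      pthread_attr_getstack(&attr, &stackAddr, &stackSize);
      pthread_attr_destroy(&attr);
      return (char *) sp >= (char *) stackAddr
          && (char *) sp < (char *) stackAddr + stackSize;
  }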
Note that Darwin still uses the strategy of disabling GC, despite
having an implementation that compiles. The proper solution will be
enabled and tested later.
After the removal of the `InputAccessor::fetchToStore()` method, the
only remaining functionality in `InputAccessor` was `fingerprint` and
`getLastModified()`, and there is no reason to keep those in a
separate class.
This missing GC root wasn't much of a problem before, because the
heap would soon end up with a reference to `baseEnv` anyway,
but when unit testing, the construction of `EvalState` doesn't
necessarily happen well before GC runs for the first time.
Found while unit testing the Rust bindings that currently reside
at https://github.com/nixops4/nixops4/tree/main/rust
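For illustration only, bdwgc can be told about additional root ranges
via `GC_add_roots()`; a hypothetical sketch of pinning a pointer field
the collector would otherwise not scan (not necessarily the actual fix):

  #include <gc/gc.h>

  struct EvalState {
      void * baseEnv; // points into the GC heap

      EvalState()
      {
          // Register the field itself as a root range, so the collector
          // keeps *baseEnv alive even before anything on the heap
          // references it (hypothetical sketch).
          GC_add_roots(&baseEnv, &baseEnv + 1);
      }
  };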
At this point many features are stripped out, but this works:
- Can run libnix{util,store,expr} unit tests
- Can run some Nix commands
Co-authored-by: volth <volth@volth.com>
Co-authored-by: Brian McKenna <brian@brianmckenna.org>
Thunks are now overwritten by a helper function
`Value::finishValue(newType, payload)` (where `payload` is the
original anonymous union inside `Value`). This helps to ensure that we
never mutate a value anywhere else, since such mutation would be
incompatible with parallel evaluation (i.e. after a value has
transitioned from being a thunk to being a non-thunk, it should be
immutable). There were two places where values were mutated after the
fact: `Value::mkString()` and `ExprAttrs::eval()`.
This PR also adds a bunch of accessor functions for value contents,
like `Value::integer()` to access the integer field in the union.
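A self-contained sketch of the pattern, with simplified stand-in types
(not the real definitions):

  #include <cassert>
  #include <cstdint>

  enum class InternalType : uint8_t { tThunk, tInt, tBool };

  struct Value {
      union Payload {
          int64_t integer;
          bool boolean;
      };

      InternalType internalType = InternalType::tThunk;
      Payload payload;

      // The single place where a thunk is overwritten: write the payload
      // first, then the type, so the value is complete by the time it
      // stops being a thunk. (A real parallel evaluator would also need
      // memory-ordering guarantees here; this sketch ignores them.)
      void finishValue(InternalType newType, Payload newPayload)
      {
          payload = newPayload;
          internalType = newType;
      }

      // Accessors assert that the value has the expected type.
      int64_t integer() const
      {
          assert(internalType == InternalType::tInt);
          return payload.integer;
      }
  };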
Previously, `state.mkList()` would set the type of the value to `tList`
and allocate the list vector, but it would not initialize the values
in the list. This had two problems:
* If an exception occurs, the list is left in an undefined state.
* More importantly, for multithreaded evaluation, if a value
transitions from thunk to non-thunk, it should be final (i.e. other
threads should be able to access the value safely).
To address this, there now is a `ListBuilder` class (analogous to
`BindingsBuilder`) to build the list vector prior to the call to
`Value::mkList()`. Typical usage:
  auto list = state.buildList(size);
  for (auto & v : list)
      v = ... set value ...;
  vRes.mkList(list);
We now keep not a table of all positions, but a table of all origins and
their sizes. Position indices are now direct pointers into the virtual
concatenation of all parsed contents. This slightly reduces memory usage
and time spent in the parser, at the cost of not being able to report
positions if the total input size exceeds 4GiB. This limit is not unique
to Nix; rustc and clang also limit their input to 4GiB (although clang
at least refuses to process larger inputs, while we will not).
This new 4GiB limit probably will not cause any problems for quite a
while: all of nixpkgs together is less than 100MiB in size and already
needs over 700MiB of memory and multiple seconds just to parse. 4GiB
worth of input would easily take multiple minutes and over 30GiB of
memory without even evaluating anything. If problems *do* arise we can
probably recover the old table-based system by adding some tracking to
Pos::Origin (or by increasing the size of PosIdx outright), but for the
time being this looks like more complexity than it's worth.
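As an illustrative sketch of the lookup (hypothetical names and layout;
assumes origins are sorted by offset and the first starts at 0):

  #include <algorithm>
  #include <cstdint>
  #include <string_view>
  #include <utility>
  #include <vector>

  // Each origin records where it starts in the virtual concatenation of
  // all parsed contents, plus its text so lines can be recounted on demand.
  struct Origin {
      uint32_t offset;
      std::string_view source;
  };

  // Resolve a 32-bit position index to a 1-based (line, column).
  std::pair<uint32_t, uint32_t>
  lineCol(const std::vector<Origin> & origins, uint32_t posIdx)
  {
      // Find the last origin that starts at or before posIdx.
      auto it = std::prev(std::upper_bound(
          origins.begin(), origins.end(), posIdx,
          [](uint32_t p, const Origin & o) { return p < o.offset; }));
      uint32_t line = 1, col = 1;
      // Re-scan the origin's text up to the local offset. (Simplified:
      // only '\n' is recognised, unlike Nix's permissive line endings.)
      for (uint32_t i = 0, local = posIdx - it->offset; i < local; ++i) {
          if (it->source[i] == '\n') { ++line; col = 1; }
          else ++col;
      }
      return {line, col};
  }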
Since we now need to read the entire input again to determine the
line/column of a position, we make unsafeGetAttrPos slightly lazy:
mostly the set it returns is only used to determine the file an
attribute originates from, not its exact location. The thunks do not
add measurable runtime overhead.
Notably, this change is necessary to allow changing the parser:
apparently nothing supports Nix's very idiosyncratic line ending choice
of "anything goes", making it very hard to calculate line/column
positions in the parser (while byte offsets are very easy).
The parser modifies its inputs, which means that sharing them between
the error context reporting system and the parser itself can confuse the
reporting system. Usually this led to early truncation of error context
reports which, while not dangerous, can be quite confusing.
Desugaring inherit-from to syntactic duplication of the source
expression also duplicates side effects of the source expression (such
as trace calls) and expensive computations (such as derivationStrict).
For example, naively desugaring `inherit (f x) a b;` to
`a = (f x).a; b = (f x).b;` would evaluate `f x` once per inherited
attribute.