I realized it was checking the NAR hashes of added objects, which
makes little sense --- we don't really care about ancillary NAR hashes.
Now, the bottom `nix store add` tests compare the CA field (a Git hash)
to hashes calculated by Git. This matches the top `nix hash path` tests
in using Git as a source of truth.
Previously, `state.mkList()` would set the type of the value to tList
and allocate the list vector, but it would not initialize the values
in the list. This has two problems:
* If an exception occurs, the list is left in an undefined state.
* More importantly, for multithreaded evaluation, if a value
transitions from thunk to non-thunk, it should be final (i.e. other
threads should be able to access the value safely).
To address this, there is now a `ListBuilder` class (analogous to
`BindingsBuilder`) for building the list vector prior to the call to
`Value::mkList()`. Typical usage:
    auto list = state.buildList(size);
    for (auto & v : list)
        v = ... set value ...;
    vRes.mkList(list);
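A more concrete sketch of the same pattern (the element type and the
helpers used here are assumptions based on the usage above, not the
definitive interface):

    // Build a list of three integer values; assumes list elements are
    // Value * and that allocValue()/mkInt() behave as elsewhere in the
    // evaluator.
    auto list = state.buildList(3);
    int n = 0;
    for (auto & v : list)
        (v = state.allocValue())->mkInt(n++);
    vRes.mkList(list);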
* Add regression test
* Fix the 'no repo' test so it doesn't succeed if the data is still in the cache
* Use git_revparse_single inside git-utils instead of reimplementing the same logic (see the sketch below).
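For reference, a minimal sketch of resolving a rev spec via libgit2's
git_revparse_single (the wrapper and its error handling here are
illustrative, not the actual git-utils code):

    #include <git2.h>
    #include <stdexcept>

    // Resolve a rev spec to a Git object; `repo` is an already-opened
    // git_repository *. The caller frees the result with git_object_free().
    git_object * resolveRev(git_repository * repo, const char * spec) {
        git_object * obj = nullptr;
        if (git_revparse_single(&obj, repo, spec) < 0)
            throw std::runtime_error(git_error_last()->message);
        return obj;
    }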
Currently there isn't a convenient way to check for multiline output. In
addition, these outputs will easily change, and having a diff between the
expected and the actual output upon failure is convenient.
we now keep not a table of all positions, but a table of all origins and
their sizes. position indices are now direct pointers into the virtual
concatenation of all parsed contents. this slightly reduces memory usage
and time spent in the parser, at the cost of not being able to report
positions if the total input size exceeds 4GiB. this limit is not unique
to nix though; rustc and clang also limit their input to 4GiB (although
clang at least refuses to process larger inputs, whereas we will not).
this new 4GiB limit probably will not cause any problems for quite a
while: all of nixpkgs together is less than 100MiB in size and already
needs over 700MiB of memory and multiple seconds just to parse. 4GiB
worth of input will easily take multiple minutes and over 30GiB of
memory without even evaluating anything. if problems *do* arise we can
probably recover the old table-based system by adding some tracking to
Pos::Origin (or increasing the size of PosIdx outright), but for the
time being this looks like more complexity than it's worth.
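as a rough illustration of the scheme (hypothetical names, not the
actual PosTable interface):

    #include <algorithm>
    #include <cstdint>
    #include <iterator>
    #include <string>
    #include <vector>

    struct Origin {
        std::string path;  // where the contents came from
        uint32_t offset;   // start within the virtual concatenation
        uint32_t size;     // length of the contents
    };

    struct PosTable {
        std::vector<Origin> origins;
        uint32_t totalSize = 0;

        // register an origin; its positions occupy [offset, offset + size)
        uint32_t addOrigin(std::string path, uint32_t size) {
            uint32_t base = totalSize;
            origins.push_back({std::move(path), base, size});
            totalSize += size;  // this is what overflows past 4GiB
            return base;
        }

        // map a position index back to its origin by binary search
        const Origin & originOf(uint32_t posIdx) const {
            auto it = std::upper_bound(
                origins.begin(), origins.end(), posIdx,
                [](uint32_t p, const Origin & o) { return p < o.offset; });
            return *std::prev(it);
        }
    };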
since we now need to read the entire input again to determine the
line/column of a position, we'll make unsafeGetAttrPos slightly lazy:
in most cases the set it returns is only used to determine the file of origin
of an attribute, not its exact location. the thunks do not add
measurable runtime overhead.
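a rough sketch of the deferred conversion (hypothetical names): given
an origin's contents and a byte offset, derive line/column only when
asked, treating \n, \r\n, and a lone \r as line endings:

    #include <cstdint>
    #include <string_view>

    struct LineCol { uint32_t line = 1, column = 1; };

    // scan the contents up to `offset`, counting line endings; this is
    // the work that is now deferred behind a thunk instead of being
    // done eagerly for every attribute position.
    LineCol lineColAt(std::string_view contents, uint32_t offset) {
        LineCol lc;
        uint32_t i = 0;
        while (i < offset && i < contents.size()) {
            char c = contents[i++];
            if (c == '\n' || c == '\r') {
                if (c == '\r' && i < contents.size() && contents[i] == '\n')
                    ++i;  // \r\n counts as a single line ending
                ++lc.line;
                lc.column = 1;
            } else
                ++lc.column;
        }
        return lc;
    }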
notably this change is necessary to allow changing the parser since
apparently nothing supports nix's very idiosyncratic line ending choice
of "anything goes", making it very hard to calculate line/column
positions in the parser (while byte offsets are very easy).
this needs a string comparison because there seems to be no other way to
get that information out of bison. usually the location info is going to
be correct (pointing at a bad token), but since EOF isn't a token as
such, it'll be wrong in that case.
this hasn't shown up much so far because a single line ending *is* a
token, so any file formatted in the usual manner (i.e., ending in a line
ending) would have its EOF position reported correctly.
the parser treats a plain \r as a newline; error reports do not. this
can lead to interesting divergences if anything makes use of this
feature, with error reports pointing to wrong locations in the input (or
even outside the input altogether).
previously we reported the error at the beginning of the binding
block (for plain inherits) or the beginning of the attr list (for
inherit-from), effectively hiding where exactly the error happened.
this also carries over to runtime positions of attributes in sets as
reported by unsafeGetAttrPos. we're not worried about this changing
observable eval behavior because it *is* marked unsafe, and the new
behavior is much more useful.
we already normalize attr order to lexicographic, doing the same for
formals makes sense. doubly so because the order of formals would
otherwise depend on the context of the expression, which is not quite as
useful as one might expect.
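a hedged sketch of the normalization; the Formal record here is a
stand-in, not the parser's actual representation:

    #include <algorithm>
    #include <string>
    #include <vector>

    struct Formal { std::string name; /* default expression, etc. */ };

    // sort formals by name so their order no longer depends on the
    // context in which the expression appeared
    void sortFormals(std::vector<Formal> & formals) {
        std::sort(formals.begin(), formals.end(),
            [](const Formal & a, const Formal & b) { return a.name < b.name; });
    }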
the parser modifies its inputs, which means that sharing them between
the error context reporting system and the parser itself can confuse the
reporting system. usually this led to early truncation of error context
reports, which, while not dangerous, can be quite confusing.
When reviewing old PRs, I found that #9997 adds some code to ensure one
particular assert is always present. But removing asserts isn't
something we do in our own release builds either in the flake here or in
nixpkgs, and is plainly a bad idea that increases support burden,
especially if other distros make bad choices of build flags in their Nix
packaging.
For context, the assert macro in the C standard is defined to do nothing
if NDEBUG is set.
There is no way in our build system to set -DNDEBUG without manually
adding it to CFLAGS, so this is simply a configuration we do not use.
Let's ban it at compile time.
I put this preprocessor directive in src/libutil.cc because it is not
obvious where else to put it, and it seems like the most logical file
since you are not getting a usable Nix without it.
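A sketch of such a guard (the exact wording of the real directive may
differ):

    // Refuse to compile if assertions are disabled: the C standard
    // defines assert() to expand to nothing when NDEBUG is defined.
    #ifdef NDEBUG
    #error "Nix may not be built with assertions disabled (NDEBUG)."
    #endif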
Directly fail if a flakeref points to something that isn't a directory,
instead of falling back to the logic of searching up the directory
hierarchy to find a valid flake root.
Fixes https://github.com/NixOS/nix/issues/9868
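A minimal sketch of the new behavior (hypothetical helper; the real
check lives in the flakeref resolution code):

    #include <filesystem>
    #include <stdexcept>

    // Reject non-directory flakerefs outright instead of walking up
    // the tree looking for a flake root.
    void requireFlakeDir(const std::filesystem::path & path) {
        if (!std::filesystem::is_directory(path))
            throw std::runtime_error(
                "path '" + path.string() + "' is not a directory");
    }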