https://github.com/NixOS/nix/pull/10456 fixed the addition of symlink
store paths to the sandbox, but also made it so that the hardcoded
sandbox paths (like `/etc/hosts`) were now bind-mounted without
following possible symlinks. This made these files unreadable if they
were symlinks (because the sandbox would then contain a symlink to an
unreachable file rather than the underlying file).
In particular, this broke FOD derivations on NixOS as `/etc/hosts` is a
symlink there.
Fix that by canonicalizing all these hardcoded sandbox paths before
adding them to the sandbox.
This requires moving resolveSymlinks() into SourceAccessor. Also, it
requires LocalStoreAccessor::maybeLstat() to work on parents of the
store (to avoid an error like "/nix is not in the store").
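As an illustration, here is a minimal self-contained sketch of the idea (the helper names are hypothetical and this is not the actual sandbox-setup code): canonicalize each hardcoded path on the host first, then expose the resolved target under the original name.

```
#include <filesystem>
#include <functional>
#include <string>
#include <system_error>
#include <vector>

// Hypothetical sketch: resolve symlinks on each hardcoded sandbox path on the
// host before bind-mounting it, so a symlinked /etc/hosts stays readable
// inside the sandbox. `bindMount` stands in for the real bind-mount step.
void addHardcodedSandboxPaths(
    const std::vector<std::string> & paths, // e.g. {"/etc/hosts", "/etc/resolv.conf"}
    const std::function<void(const std::string & source, const std::string & target)> & bindMount)
{
    for (auto & p : paths) {
        std::error_code ec;
        auto resolved = std::filesystem::canonical(p, ec); // follows symlinks on the host
        if (ec) continue;                                   // path doesn't exist: skip it
        bindMount(resolved.string(), p);                    // mount the target, keep the original name
    }
}
```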
Fixes #10375.
Instead of relying on the setup script to set output variables when
structured attributes are enabled, iterate over the values of the
`outputs` associative array.
See also
374fa3532e/pkgs/stdenv/generic/setup.sh (L23-L26)
Now that we have a few things identifying content address methods by
name, we should be consistent about it.
Move `parseHashAlgoOpt` up for tidiness too.
Discussed this change for consistency's sake as part of #8876
Co-authored-by: Eelco Dolstra <edolstra@gmail.com>
This probably snuck in during a refactor that relied on truthiness or
similar. The trustedness flag was checking whether the optional was set
at all, rather than the trust level it actually contained.
Also adds some tests.
```
m1@6876551b-255d-4cb0-af02-8a4f17b27e2e ~ % nix store ping
warning: 'nix store ping' is a deprecated alias for 'nix store info'
Store URL: daemon
Version: 2.20.4
Trusted: 0
m1@6876551b-255d-4cb0-af02-8a4f17b27e2e ~ % nix doctor
warning: 'doctor' is a deprecated alias for 'config check'
[PASS] PATH contains only one nix version.
[PASS] All profiles are gcroots.
[PASS] Client protocol matches store protocol.
[INFO] You are trusted by store uri: daemon
```
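A minimal sketch of the kind of bug described above, using a local stand-in for the trust flag (illustrative only, not the actual daemon code):

```
#include <optional>

enum class TrustedFlag { NotTrusted, Trusted };

// Buggy: converting the optional itself to bool only checks whether a
// trust level is present at all ("optional fullness").
bool isTrustedBuggy(const std::optional<TrustedFlag> & trust)
{
    return (bool) trust; // true even when *trust == TrustedFlag::NotTrusted
}

// Fixed: compare the trust level actually contained in the optional.
bool isTrustedFixed(const std::optional<TrustedFlag> & trust)
{
    return trust == TrustedFlag::Trusted;
}
```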
When querying all paths in a binary cache store, the path's representation
is `<hash>-x` (where `x` is the value of `MissingName`) because the .narinfo
filenames only contain the hash.
Before cc46ea1630 this worked correctly,
because the entire path info was read and the path from this
representation was printed, i.e. in the form `<hash>-<name>`. Since
then, however, the direct result from `queryAllValidPaths()` was used as
`path`.
Added a regression test to make sure the behavior remains correct.
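For context, a hedged sketch of why going through the path info matters when listing a binary cache, written against the public Store API rather than the exact code that was fixed:

```
#include <iostream>

#include "store-api.hh" // Nix's Store interface

// On a binary cache, queryAllValidPaths() only knows the hash part (the
// .narinfo filenames carry no name), so printing its result directly yields
// "<hash>-x". Reading the path info recovers the real "<hash>-<name>".
void listPaths(nix::Store & store)
{
    for (auto & path : store.queryAllValidPaths()) {
        auto info = store.queryPathInfo(path); // reads the .narinfo, which records the name
        std::cout << store.printStorePath(info->path) << "\n";
    }
}
```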
It's a little weird that we don't check the return status for these,
but changing that would introduce risk, so I did not.
Co-authored-by: Théophane Hufschmitt <7226587+thufschmitt@users.noreply.github.com>
This introduces new utility functions to get elements from JSON in an
ergonomic way, with nice error messages if the expected type does not
match.
Co-authored-by: John Ericson <John.Ericson@Obsidian.Systems>
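A self-contained sketch of the idea with toy stand-ins for the helpers (the real names and signatures in the utility header may differ):

```
#include <nlohmann/json.hpp>
#include <stdexcept>
#include <string>

// Toy stand-ins: fail with messages that name the missing key or the
// expected type, instead of nlohmann's terser exceptions.
static const nlohmann::json & valueAt(const nlohmann::json & obj, const std::string & key)
{
    if (!obj.contains(key))
        throw std::runtime_error("expected JSON object to contain key '" + key + "'");
    return obj.at(key);
}

static std::string getString(const nlohmann::json & v)
{
    if (!v.is_string())
        throw std::runtime_error("expected a JSON string, but got: " + v.dump());
    return v.get<std::string>();
}

int main()
{
    nlohmann::json drv = {{"system", "x86_64-linux"}};
    std::string system = getString(valueAt(drv, "system")); // clear error if missing or wrong type
}
```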
Thunks are now overwritten by a helper function
`Value::finishValue(newType, payload)` (where `payload` is the
original anonymous union inside `Value`). This helps to ensure we
never update a value elsewhere, since that would be incompatible with
parallel evaluation (i.e. after a value has transitioned from being a
thunk to being a non-thunk, it should be immutable).
There were two places where this happened: `Value::mkString()` and
`ExprAttrs::eval()`.
This PR also adds a bunch of accessor functions for value contents,
like `Value::integer()` to access the integer field in the union.
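A heavily simplified, self-contained model of the pattern (not the real `Value` layout):

```
#include <cassert>
#include <cstdint>

struct Value
{
    enum class Type { Thunk, Int, Bool } type = Type::Thunk;

    union Payload { int64_t integer; bool boolean; } payload;

    // The only way a thunk becomes a non-thunk; after this the value is
    // treated as immutable, which is what parallel evaluation relies on.
    void finishValue(Type newType, Payload newPayload)
    {
        assert(type == Type::Thunk); // never overwrite an already-forced value
        payload = newPayload;
        type = newType;
    }

    void mkInt(int64_t n)
    {
        Payload p;
        p.integer = n;
        finishValue(Type::Int, p);
    }

    // Accessors check the type before reading the union.
    int64_t integer() const
    {
        assert(type == Type::Int);
        return payload.integer;
    }
};
```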
I realized it was previously checking the NAR hashes of added objects,
which makes little sense: we don't really care about ancillary NAR
hashes.
Now the bottom `nix store add` tests compare the CA field (a git hash)
against hashes calculated by Git itself. This matches the top `nix hash
path` tests in using Git as the source of truth.
Previously, `state.mkList()` would set the type of the value to tList
and allocate the list vector, but it would not initialize the values
in the list. This has two problems:
* If an exception occurs, the list is left in an undefined state.
* More importantly, for multithreaded evaluation, if a value
transitions from thunk to non-thunk, it should be final (i.e. other
threads should be able to access the value safely).
To address this, there now is a `ListBuilder` class (analogous to
`BindingsBuilder`) to build the list vector prior to the call to
`Value::mkList()`. Typical usage:
```
auto list = state.buildList(size);
for (auto & v : list)
    v = ... set value ...;
vRes.mkList(list);
```
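A slightly more concrete variant (assuming the evaluator's usual `EvalState::allocValue()` and `Value::mkInt()` helpers), building the list `[0, 1, 2]`:

```
auto list = state.buildList(3);
int i = 0;
for (auto & v : list)
    (v = state.allocValue())->mkInt(i++);
vRes.mkList(list);
```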
* Add regression test
* Fix 'no repo' test so it doesn't succeed if the data is still in the cache
* Use git_revparse_single inside git-utils instead of reimplementing the same logic.
Currently there isn't a convenient way to check for multiline output.
In addition, these outputs can easily change, and having a diff between
the expected and the actual output upon failure is convenient.
we now keep not a table of all positions, but a table of all origins and
their sizes. position indices are now direct pointers into the virtual
concatenation of all parsed contents. this slightly reduces memory usage
and time spent in the parser, at the cost of not being able to report
positions if the total input size exceeds 4GiB. this limit is not unique
to nix though, rustc and clang also limit their input to 4GiB (although
at least clang refuses to process larger inputs, while we will not).
this new 4GiB limit probably will not cause any problems for quite a
while, all of nixpkgs together is less than 100MiB in size and already
needs over 700MiB of memory and multiple seconds just to parse. 4GiB
worth of input will easily take multiple minutes and over 30GiB of
memory without even evaluating anything. if problems *do* arise we can
probably recover the old table-based system by adding some tracking to
Pos::Origin (or increasing the size of PosIdx outright), but for the
time being this looks like more complexity than it's worth.
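a rough, self-contained sketch of the scheme (simplified; not the real PosTable): each origin records where it starts in the virtual concatenation of all inputs, and a position index is just a 32-bit byte offset into that concatenation.

```
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

struct PosTable
{
    struct Origin { std::string name; uint32_t start; uint32_t size; };

    std::vector<Origin> origins; // ordered by start offset
    uint32_t totalSize = 0;

    // register an origin; its positions occupy [totalSize, totalSize + size)
    uint32_t addOrigin(std::string name, uint32_t size)
    {
        uint32_t start = totalSize;
        origins.push_back(Origin{std::move(name), start, size});
        totalSize += size; // wraps past 4 GiB of total input, hence the limit
        return start;
    }

    // resolving a position is a binary search over origins; line and column
    // are recomputed lazily by re-reading that origin's contents
    const Origin & originOf(uint32_t pos) const
    {
        assert(!origins.empty());
        size_t lo = 0, hi = origins.size();
        while (hi - lo > 1) {
            size_t mid = lo + (hi - lo) / 2;
            if (origins[mid].start <= pos) lo = mid; else hi = mid;
        }
        return origins[lo];
    }
};
```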
since we now need to read the entire input again to determine the
line/column of a position we'll make unsafeGetAttrPos slightly lazy:
mostly the set it returns is only used to determine the file of origin
of an attribute, not its exact location. the thunks do not add
measurable runtime overhead.
notably this change is necessary to allow changing the parser since
apparently nothing supports nix's very idiosyncratic line ending choice
of "anything goes", making it very hard to calculate line/column
positions in the parser (while byte offsets are very easy).
this needs a string comparison because there seems to be no other way to
get that information out of bison. usually the location info is going to
be correct (pointing at a bad token), but since EOF isn't a token as
such it'll be wrong in that case.
this hasn't shown up much so far because a single line ending *is* a
token, so any file formatted in the usual manner (ie, ending in a line
ending) would have its EOF position reported correctly.