In many cases we are dealing with a collection of realisations that are
all outputs of the same derivation. In that case, we don't need
"derivation hash modulos" to be part of our map key, because the
output names alone will be unique. Those hashes are still part of the
realisation proper, so we aren't losing any information; we're just
"normalizing our schema" by narrowing the "primary key".
Besides making our data model a bit "tighter", this allows us to avoid a
double `for` loop in `DerivationGoal::waiteeDone`. The inner `for` loop
existed only to select the output we cared about without knowing its
hash; now we can select the output by name directly.
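For intuition, here is a minimal sketch of the narrowed key (the types
and the alias name are simplified stand-ins, not the exact Nix code):

```cpp
#include <map>
#include <string>

// Simplified stand-in for the real Realisation type.
struct Realisation { std::string outPath; };

// Before, the map was keyed by the full DrvOutput (derivation hash
// modulo plus output name). For outputs of a single derivation, the
// output name alone is unique, so it can serve as the narrower key:
using SingleDrvOutputs = std::map<std::string, Realisation>;

// Selecting the output we care about is now a direct lookup by name,
// rather than an inner loop over all outputs:
const Realisation * lookup(const SingleDrvOutputs & outputs, const std::string & name)
{
    auto it = outputs.find(name);
    return it == outputs.end() ? nullptr : &it->second;
}
```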
Note that neither protocol is changed as part of this: we are still
transferring `DrvOutputs` over the wire for `BuildResult`s. I would only
consider revising this once #6223 is merged, and we can mention protocol
versions inside factored-out serialization logic. Until then, it is
better not to change anything, because it would come at the cost of code
reuse.
In https://github.com/NixOS/nix/pull/6311#discussion_r834863823, I
realized that since derivation goals' wanted outputs can "grow" due to
overlapping dependencies (see `DerivationGoal::addWantedOutputs`, called
by `Worker::makeDerivationGoalCommon`), the previous bug fix had the
unfortunate side effect of causing more pointless rebuilds.
In particular, we have this situation:
1. Goal made from `DerivedPath::Built { foo, {a} }`.
2. Goal gives up on substituting, starts building.
3. Goal made from `DerivedPath::Built { foo, {b} }`, in fact is just
modified original goal.
4. Though the goal had already gotten as far as building, so all
   outputs were going to be produced anyway, `addWantedOutputs` no
   longer knows that, and so the goal is flagged to be restarted.
This might sound far-fetched with input-addressed drvs, where we usually
have all our goals "planned out" before we start doing anything, but
with CA derivation goals, and especially with RFC 92, where *drv
resolution* means goals are created after some building has completed,
it is more likely to happen.
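For concreteness, here is a simplified sketch of the kind of logic that
triggers the restart (illustrative, not the exact Nix code):

```cpp
#include <set>
#include <string>

struct DerivationGoal
{
    std::set<std::string> wantedOutputs;
    bool needRestart = false;

    // When a later DerivedPath reuses this goal with extra outputs, the
    // goal may already be past the point where it planned what to build,
    // so any genuinely new output forces a restart -- even if building
    // would have produced all outputs anyway.
    void addWantedOutputs(const std::set<std::string> & outputs)
    {
        for (auto & o : outputs)
            if (wantedOutputs.insert(o).second)
                needRestart = true;
    }
};
```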
So the first thing to do was restore the clearing of `wantedOutputs` we
used to do, and then filter the outputs in `buildPathsWithResults` to
only get the ones we care about.
But that fix also has its own side effect: the `DerivedPath` in the
`BuildResult` in `DerivationGoal` cannot be trusted; it is merely the
*first* `DerivedPath` for which this goal was originally created.
To remedy this, I made `BuildResult` be like it was before, and instead
made `KeyedBuildResult` be a subclass with the path. Only
`buildPathsWithResults` returns `KeyedBuildResult`s; everything else
just becomes like it was before, where the "key" is unambiguous from
context.
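Roughly, the split looks like this (fields simplified; `DerivedPath`
here is a stand-in for the real type):

```cpp
#include <string>

// Stand-in for Nix's DerivedPath (which derivation, which outputs).
struct DerivedPath { std::string drvPath; };

// The result fields themselves, unkeyed -- as BuildResult was before.
struct BuildResult
{
    bool success = false;
    std::string errorMsg;
};

// The keyed variant, used only where the caller needs to know which
// DerivedPath a result belongs to.
struct KeyedBuildResult : BuildResult
{
    DerivedPath path;
};
```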
I think separating the "primary key" field(s) from the other fields is
good practice in general anyway. (I would like to do the same thing
for `ValidPathInfo`.) Among other things, it allows constructions like
`std::map<Key, ThingWithoutKey>`, which doesn't duplicate the key at
all and so precludes the possibility of duplicate keys getting out of
sync.
We might leverage the above someday to overload `buildPathsWithResults`
to take a *set* and return a *map*.
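One possible shape for that overload, reusing the simplified types from
the sketch above (a hypothetical signature, not a committed API):

```cpp
#include <map>
#include <set>

// Hypothetical signature; DerivedPath would need an operator< (or a
// comparator) to serve as a set/map key.
std::map<DerivedPath, BuildResult> buildPathsWithResults(
    const std::set<DerivedPath> & paths);
```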
-----
Unfortunately, we need to avoid C++20 strictness on designated
initializers.
(BTW
https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2021/p2287r1.html
this offers some new syntax for this use-case. Hopefully this will be
adopted and we can eventually use it.)
Not having that yet, maybe it would be better not to make
`KeyedBuildResult` a subclass, just to avoid this.
Co-authored-by: Robert Hensing <roberth@users.noreply.github.com>
This function returns true or false depending on whether the Nix client
is trusted or not. Mostly relevant when speaking to a remote store with
a daemon.
We include this information in `nix ping store` and `nix doctor`.
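A hypothetical sketch of what such a query could look like; the method
name, return type, and semantics here are assumptions for illustration,
not the actual API:

```cpp
#include <optional>

struct Store
{
    // std::nullopt: the store cannot tell (e.g. the daemon is too old
    // to report it); otherwise, whether the daemon trusts this client.
    virtual std::optional<bool> isTrustedClient() = 0;
    virtual ~Store() = default;
};
```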
Co-Authored-By: John Ericson <John.Ericson@Obsidian.Systems>
Otherwise, when running as root and user namespaces are enabled,
opening the slave fails with EPERM.
Fixes "opening pseudoterminal slave: Permission denied" followed by a
hang (https://hydra.nixos.org/build/213104244), and "error: getting
sandbox mount namespace: No such file or directory" (#8072), which
happens when the child fails very quickly and consequently reading
/proc/<child>/ns fails.
Hopefully this fixes "unexpected EOF" failures on macOS
(#3137, #3605, #7242, #7702).
The problem appears to be that under some circumstances, macOS
discards the output written to the slave side of the
pseudoterminal. Hence the parent never sees the "sandbox initialized"
message from the child, even though it succeeded. The conditions are:
* The child finishes very quickly. That's why this bug is likely to
trigger in nix-env tests, since that uses a builtin builder. Adding
a short sleep before the child exits makes the problem go away.
* The parent has closed its duplicate of the slave file
descriptor. This shouldn't matter, since the child has a duplicate
as well, but it does. E.g. moving the close to the bottom of
startBuilder() makes the problem go away. However, that's not a
solution because it would make Nix hang if the child dies before
sending the "sandbox initialized" message.
* The system is under high load. E.g. "make installcheck -j16" makes
the issue pretty reproducible, while it's very rare under "make
installcheck -j1".
As a fix/workaround, we now open the pseudoterminal slave in the
child, rather than the parent. This removes the second condition
(i.e. the parent no longer needs to close the slave fd) and I haven't
been able to reproduce the "unexpected EOF" with this.
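A minimal sketch of the workaround, assuming standard POSIX pty APIs
(this is illustrative, not Nix's actual builder code):

```cpp
#include <fcntl.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main()
{
    // Parent creates only the master side of the pseudoterminal.
    int master = posix_openpt(O_RDWR | O_NOCTTY);
    if (master < 0 || grantpt(master) || unlockpt(master)) abort();

    pid_t pid = fork();
    if (pid == 0) {
        // Child opens the slave side itself, so the parent never needs
        // to hold (and later close) a slave fd of its own.
        int slave = open(ptsname(master), O_RDWR | O_NOCTTY);
        if (slave < 0) _exit(1);
        const char msg[] = "sandbox initialized\n";
        write(slave, msg, sizeof msg - 1);
        _exit(0);
    }

    // Parent reads the child's message from the master side.
    char buf[64];
    read(master, buf, sizeof buf);
    waitpid(pid, nullptr, 0);
}
```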
We make sure the env var paths are actually set (i.e. not "") before
sending them to the canonicalization function. If we forget to do so,
the user will end up facing a puzzling failed-assertion internal error.
We issue a non-failing warning as a stop-gap measure; we may want to
revisit this and issue a detailed failing error message in the future.
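A minimal sketch of the guard; the function and variable names here are
hypothetical:

```cpp
#include <iostream>
#include <string>

// Only canonicalize paths that are actually set; an empty string would
// otherwise trip an assertion deep inside the canonicalization code.
std::string canonicalizeIfSet(const std::string & envValue, const char * name)
{
    if (envValue.empty()) {
        // Stop-gap: a non-failing warning rather than a hard error.
        std::cerr << "warning: environment variable '" << name
                  << "' is set to an empty path; ignoring it\n";
        return "";
    }
    return envValue; // real code would return canonPath(envValue)
}
```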
In unprivileged podman containers, /proc is not fully visible (there
are other filesystems mounted on subdirectories of /proc). Therefore
we can't mount a new /proc in the sandbox that matches the PID
namespace of the sandbox. So this commit automatically disables
sandboxing if /proc is not fully visible.
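One way to express the visibility check, sketched here against
`/proc/self/mountinfo` (illustrative; the actual detection code may
differ):

```cpp
#include <fstream>
#include <sstream>
#include <string>

// Treat /proc as "not fully visible" if any other filesystem is
// mounted on a subdirectory of /proc.
bool procIsFullyVisible()
{
    std::ifstream mountinfo("/proc/self/mountinfo");
    std::string line;
    while (std::getline(mountinfo, line)) {
        std::istringstream fields(line);
        std::string field, mountPoint;
        // The mount point is the 5th whitespace-separated field.
        for (int i = 0; i < 5 && fields >> field; ++i) mountPoint = field;
        if (mountPoint.rfind("/proc/", 0) == 0) return false;
    }
    return true;
}
```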
This didn't work because sandboxing doesn't work in Docker. However,
the sandboxing check is done lazily - after clone(CLONE_NEWNS) fails,
we retry with sandboxing disabled. But at that point, we've already
done UID allocation under the assumption that user namespaces are
enabled.
So let's get rid of the "goto fallback" logic and just detect early
whether user / mount namespaces are enabled.
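A sketch of what "detect early" could look like, assuming the usual
`/proc/self/ns/*` files (illustrative only):

```cpp
#include <sys/stat.h>

// If these namespace files are missing, user/mount namespaces are
// unavailable, so don't bother attempting a namespaced clone() at all.
static bool userNamespacesSupported()
{
    struct stat st;
    return stat("/proc/self/ns/user", &st) == 0;
}

static bool mountNamespacesSupported()
{
    struct stat st;
    return stat("/proc/self/ns/mnt", &st) == 0;
}
```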
This commit also gets rid of a compatibility hack for some ancient
Linux kernels (<2.13).
With the switch to C++20, the rules became stricter, and we can no
longer use designated initializers to initialize members of base
classes. Make those initializers comments instead.
(BTW
https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2021/p2287r1.html
this offers some new syntax for this use-case. Hopefully this will be
adopted and we can eventually use it.)
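A small self-contained example of the restriction and the comment-based
workaround:

```cpp
struct Base { int status = 0; };
struct Derived : Base { int extra = 0; };

int main()
{
    // Error in C++20: a designated initializer may only name a direct
    // (non-inherited) member, so `.status` is rejected here:
    //   Derived d { .status = 1, .extra = 2 };

    // Workaround: initialize the base positionally with nested braces
    // and keep the member names as comments.
    Derived d { { /* .status = */ 1 }, /* .extra = */ 2 };
    return d.extra;
}
```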
Rather than using `/nix/var/nix/{profiles,gcroots}/per-user/`, put the user
profiles and gcroots under `$XDG_DATA_HOME/nix/{profiles,gcroots}`.
This means that the daemon no longer needs to manage these paths itself
(they are fully handled client-side). In particular, it doesn’t have to
`chown` them anymore (removing one need for root).
This does change the layout of the gc-roots created by nix-env, and is
likely to break some stuff, so I’m not sure how to properly handle that.
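For example, assuming the conventional `$XDG_DATA_HOME` default of
`~/.local/share`, a per-user profile that previously lived under
`/nix/var/nix/profiles/per-user/alice/` would now live under
`~/.local/share/nix/profiles/` (the exact leaf layout here is
illustrative).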