Add an access-control list to the realisations in recursive-nix (similar
to the already existing one for store paths), so that we can build
content-addressed derivations in the restricted store.
Fix #4353
Previously, the build system used uname(1) output when it wanted to
check the operating system it was being built for, which meant that it
didn't take into account cross-compilation when the build and host
operating systems were different.
To fix this, instead of consulting uname output, we consult the host
triple, specifically the third "kernel" part.
For "kernel"s with stable ABIs, like Linux or Cygwin, we can use a
simple ifeq to test whether we're compiling for that system, but for
other platforms, like Darwin, FreeBSD, or Solaris, we have to use a
more complicated check to take into account the version numbers at the
end of the "kernel"s. I couldn't find a way to just strip these
version numbers in GNU Make without shelling out, which would be even
more ugly IMO. Because these checks differ between kernels, and the
patsubst ones are quite fiddly, I've added variables for each host OS
we might want to check to make them easier to reuse.
This way no derivation has to expect that these files are in the `cwd`
during the build. Expecting them there is problematic for `nix-shell`, where these
files would have to be inserted into the nix-shell's `cwd`, which gets
messy with e.g. recursive `nix-shell`.
To remain backwards-compatible, the location inside the build sandbox
will be kept, however using these files directly should be deprecated
from now on.
This is needed to push the adoption of structured attrs[1] forward. It's
now checked if a `__json` exists in the environment-map of the derivation
to be opened in a `nix-shell`.
Derivations with structured attributes enabled also make use of a file
named `.attrs.json` containing every environment variable represented as
JSON which is useful for e.g. `exportReferencesGraph`[2]. To
provide an environment similar to the build sandbox, `nix-shell` now
adds a `.attrs.json` to `cwd` (which is mostly equal to the one in the
build sandbox) and removes it using an exit hook when closing the shell.
To avoid leaking internals of the build-process to the `nix-shell`, the
entire logic to generate JSON and shell code for structured attrs was
moved into the `ParsedDerivation` class.
[1] https://nixos.mayflower.consulting/blog/2020/01/20/structured-attrs/
[2] https://nixos.org/manual/nix/unstable/expressions/advanced-attributes.html#advanced-attributes
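For illustration only, a minimal sketch (not the actual `ParsedDerivation` code) of the kind of serialisation involved, assuming the nlohmann/json library that Nix already uses and a hypothetical `writeAttrsJson` helper; real structured attrs hold arbitrarily nested JSON values, not just strings:
```cpp
// Sketch: dump an environment map as a ".attrs.json"-style file.
// Hypothetical helper, for illustration of the mechanism only.
#include <nlohmann/json.hpp>
#include <fstream>
#include <map>
#include <string>

void writeAttrsJson(const std::map<std::string, std::string> & env,
                    const std::string & path)
{
    nlohmann::json json = nlohmann::json::object();
    for (auto & [name, value] : env)
        json[name] = value;             // real attrs can be lists/attrsets too
    std::ofstream out(path);
    out << json.dump(2) << "\n";        // pretty-printed JSON
}

int main()
{
    writeAttrsJson({{"name", "hello"}, {"system", "x86_64-linux"}}, ".attrs.json");
}
```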
Useful when we're using a daemon with a chroot store, e.g.
$ NIX_DAEMON_SOCKET_PATH=/tmp/chroot/nix/var/nix/daemon-socket/socket nix-daemon --store /tmp/chroot
Then the client can now connect with
$ nix build --store unix:///tmp/chroot/nix/var/nix/daemon-socket/socket?root=/tmp/chroot nixpkgs#hello
That doesn’t really make sense with CA derivations (and wasn’t even
really correct before because of FO derivations, though that probably
didn’t matter much in practice)
Make ca-derivations require a `ca-derivations` machine feature, and
make ca-aware builders expose it.
That way, a network of builders can mix ca-aware and non-ca-aware
machines, and the scheduler will send them to the right place.
When adding a path to the local store (via `LocalStore::addToStore`),
ensure that the `ca` field of the provided `ValidPathInfo` does indeed
correspond to the content of the path.
Otherwise any untrusted user (or any binary cache) can add arbitrary
content-addressed paths to the store (as content-addressed paths don’t
need a signature).
Linux is (as far as I know) the only mainstream operating system that
requires linking with libdl for dlopen. On BSD, libdl doesn't exist,
so on non-FreeBSD BSDs linking will currently fail. On macOS, it's
apparently just a symlink to libSystem (macOS libc), presumably
present for compatibility with things that assume Linux.
So the right thing to do here is to only add -ldl on Linux, not to add
it for everything that isn't FreeBSD.
Only considers the closure in terms of `Realisation`, ignoring all the
opaque inputs.
Dunno whether that’s the nicest solution, need to think it through a bit
Align all the worker protocol with `buildDerivation` which inlines the
realisations as one opaque json blob.
That way we don’t have to bother changing the remote store protocol
when the definition of `Realisation` changes, as long as we keep the
json backwards-compatible
Move the `closure` logic of `computeFSClosure` to its own (templated) function.
This doesn’t bring much by itself (except for the ability to properly
test the “closure” functionality independently from the rest), but it
allows reusing it (in particular for the realisations which will require
a very similar closure computation)
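A minimal standalone sketch of such a templated closure helper (the real Nix helper differs, e.g. it is callback-based, but the shape is the same: start from some roots and follow edges until nothing new appears):
```cpp
#include <functional>
#include <set>
#include <vector>

// Generic closure computation, templated over the node type and the
// function returning a node's direct edges.
template<typename T>
std::set<T> computeClosure(
    std::set<T> roots,
    std::function<std::set<T>(const T &)> getEdges)
{
    std::set<T> res;
    std::vector<T> queue(roots.begin(), roots.end());
    while (!queue.empty()) {
        T node = queue.back();
        queue.pop_back();
        if (!res.insert(node).second) continue;   // already visited
        for (auto & next : getEdges(node))
            queue.push_back(next);
    }
    return res;
}

int main()
{
    // Toy graph: 1 -> {2, 3}, 2 -> {3}, 3 -> {}
    auto edges = [](const int & n) -> std::set<int> {
        if (n == 1) return {2, 3};
        if (n == 2) return {3};
        return {};
    };
    return computeClosure<int>({1}, edges).size() == 3 ? 0 : 1;
}
```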
For whatever reason, many programs trying to access SystemVersion.plist
also open SystemVersionCompat.plist; this includes Python code and
coreutils’ `cat(1)` (but not the native macOS `/bin/cat`). Illustrative
`dtruss(1m)` output:
open("/System/Library/CoreServices/SystemVersion.plist\0", 0x0, 0x0) = 3 0
open("/System/Library/CoreServices/SystemVersionCompat.plist\0", 0x0, 0x0) = 4 0
I assume this is a Big Sur change relating to the 10.16.x/11.x
version compatibility divide and that it’s something along the lines of
a hook inside libSystem.
Fixes a lot of sandboxed package builds under Big Sur.
When we don’t have enough free job slots to run a goal, we put it in
the waitForBuildSlot list & unlock its output locks. This will
continue from where we left off (tryLocalBuild). However, we need the
locks to get reacquired when/if the goal ever restarts. So, we need to
send it back through tryToBuild to reacquire those locks.
I think this bug was introduced in
https://github.com/NixOS/nix/pull/4570. It leads to some builds
starting without proper locks.
Similar to the nar-info disk cache (and using the same db).
This makes rebuilds much faster.
- This works regardless of the ca-derivations experimental feature.
I could modify the logic to not touch the db if the flag isn’t there,
but given that this is a trash-able local cache, it doesn’t seem to be
really worth it.
- We could unify the `NARs` and `Realisation` tables to only have one
generic kv table. This is left as an exercise to the reader.
- I didn’t update the cache db version number as the new schema just
adds a new table to the previous one, so the db will be transparently
migrated and is backwards-compatible.
Fix #4746
I guess I misunderstood John's initial explanation about why wildcards
for outputs are sent to older stores[1]. My `nix-daemon` from 2021-03-26
also has version 1.29, but misses the wildcard[2]. So bumping seems to
be the right call.
[1] https://github.com/NixOS/nix/pull/4759#issuecomment-830812464
[2] 255d145ba7
Starting in macOS 11, the on-disk dylib bundles are no longer available,
but nixpkgs needs to be able to keep compatibility with older versions
that require `/usr/lib/libSystem.B.dylib` in `__impureHostDeps`. Allow
it to keep backwards compatibility with these versions by marking these
dependencies as optional.
Fixes #4658.
They are equivalent according to
<https://spec.commonmark.org/0.29/#hard-line-breaks>,
and the trailing spaces tend to be a pain (because they make git
complain, editors tend to want to remove them − the `.editorconfig`
actually specifies that − etc.).
If there were many top-level goals (which are not destroyed until the
very end), commands like
$ nix copy --to 'ssh://localhost?remote-store=/tmp/nix' \
/run/current-system --no-check-sigs --substitute-on-destination
could fail with "Too many open files". So now we do some explicit
cleanup from amDone(). It would be cleaner to separate goals from
their temporary internal state, but that would be a bigger refactor.
This avoids an ambiguity where the `StorePathWithOutputs { drvPath, {}
}` could mean "build `drvPath`" or "substitute `drvPath`" depending on
context.
It also brings the internals closer in line with the new CLI by
generalizing the `Buildable` type that is used there, which already
makes that distinction.
In doing so, relegate `StorePathWithOutputs` to being a type just for
backwards compatibility (CLI and RPC).
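Roughly, the distinction looks like the following sketch (illustrative names and fields, not the exact Nix types):
```cpp
#include <set>
#include <string>
#include <variant>

// "Substitute this plain store path"...
struct BuildableOpaque {
    std::string path;
};

// ...versus "build these outputs of this derivation". No more guessing
// from an empty output set what the caller actually meant.
struct BuildableFromDrv {
    std::string drvPath;
    std::set<std::string> outputs;   // e.g. {"out", "dev"}
};

using Buildable = std::variant<BuildableOpaque, BuildableFromDrv>;

int main()
{
    Buildable a = BuildableOpaque{"/nix/store/aaaa-source"};
    Buildable b = BuildableFromDrv{"/nix/store/bbbb-hello.drv", {"out"}};
    return std::holds_alternative<BuildableFromDrv>(b) ? 0 : 1;
}
```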
These are by no means part of the notion of a store, but rather are
things that happen to use stores. (Or put another way, there's no way
we'd make them virtual methods any time soon.) It's better to move them
out of that too-big class then.
Also, this helps us remove StorePathWithOutputs from the Store interface
altogether next commit.
A few versioning mistakes were corrected:
- In 27b5747ca7, the daemon protocol had a version check `>= 0xc` that
should have been `>= 0x1c`, i.e. `28`, since the other conditions used
decimal.
- In a2b69660a9, legacy SSH gated new CAS
info on version 6, but version 5 in the server. It is now 6
everywhere.
Additionally, legacy ssh was sending over more metadata than the daemon
one was. The daemon now sends that data too.
CC @regnat
Co-authored-by: Cole Helbling <cole.e.helbling@outlook.com>
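A tiny illustration of why the hex/decimal mix-up above is easy to miss: `0xc` is 12, not 28, so a gate written with it fires for much older clients than intended:
```cpp
#include <cassert>

int main()
{
    static_assert(0xc  == 12, "0xc is twelve, not twenty-eight");
    static_assert(0x1c == 28, "0x1c is the literal that means 28");

    int clientMinorVersion = 27;                  // an older client, say
    bool gatedWrong = clientMinorVersion >= 0xc;  // true: feature wrongly enabled
    bool gatedRight = clientMinorVersion >= 0x1c; // false: correctly gated
    assert(gatedWrong && !gatedRight);
    return 0;
}
```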
I guess the rationale behind the old name was that
`pathInfoIsTrusted(info)` returns `true` iff we would need to `blindly`
trust the path (because it has no valid signature and `requireSigs` is
set), but I find it to be a really confusing footgun because it's quite
natural to give it the opposite meaning.
What happened was that Nix was trying to unconditionally mount these
paths in fixed-output derivations, but since the outer derivation was
pure, those paths did not exist. The solution is to only mount those
paths when they exist.
This separates the scheduling logic (including simple hook pathway) from
the local-store needing code.
This should be the final split for now. I'm reasonably happy with how
it's turning out, even before I'm done moving code into
`local-derivation-goal`. Benefits:
1. This will help "witness" that the hook case is indeed a lot simpler,
and also compensate for the increased complexity that comes from
content-addressed derivation outputs.
2. It also moves us ever so slightly towards a world where we could use
off-the-shelf storage or sandboxing, since `local-derivation-goal`
would be gutted in those cases, but `derivation-goal` should remain
nearly the same.
The new `#if 0` in the new files will be deleted in the following
commit. I keep it here so if it turns out more stuff can be moved over,
it's easy to do so in a way that preserves ordering --- and thus
prevents conflicts.
N.B.
```sh
git diff HEAD^^ --color-moved --find-copies-harder --patience --stat
```
makes nicer output.
This is already used by Hydra, and is very useful when materializing
a remote builder list from service discovery. This allows the service
discovery tool to only sync one file instead of two.
This is technically a breaking change, since attempting to set plugin
files after the first non-flag argument will now throw an error. This
is acceptable given the relative lack of stability in a plugin
interface and the need to tie the knot somewhere once plugins can
actually define new subcommands.
This field used to be a `BasicDerivation`, but this `BasicDerivation`
was downcast to a `Derivation` when needed (implicitly or not), so we
might as well make it a full `Derivation` and upcast it when needed.
This also allows getting rid of a weird duplication in the way we
compute the static output hashes for the derivation. We had to
do it differently and in a different place depending on whether the
derivation was a full derivation or just a basic drv, but we can now do
it unconditionally on the full derivation.
Fix #4559
- Pass it the name of the outputs rather than their output paths (as
these don't exist for ca derivations)
- Get the built output paths from the remote builder
- Register the new received realisations
When performing distributed builds of machine learning packages, it
would be nice if builders without the required SIMD instructions can
be excluded as build nodes.
Since x86_64 has accumulated a large number of different instruction
set extensions, listing all possible extensions would be unwieldy.
AMD, Intel, Red Hat, and SUSE have recently defined four different
microarchitecture levels that are now part of the x86-64 psABI
supplement and will be used in glibc 2.33:
https://gitlab.com/x86-psABIs/x86-64-ABI
https://lwn.net/Articles/844831/
This change uses libcpuid to detect CPU features and then uses them to
add the supported x86_64 levels to the additional system types. For
example on a Ryzen 3700X:
$ ~/aps/bin/nix -vv --version | grep "Additional system"
Additional system types: i686-linux, x86_64-v1-linux, x86_64-v2-linux, x86_64-v3-linux
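A rough sketch of the idea, using GCC/Clang's `__builtin_cpu_supports` instead of libcpuid and only a few representative flags per level (the real psABI levels require longer feature lists than checked here):
```cpp
#include <cstdio>
#include <string>
#include <vector>

// Map detected CPU features to extra "x86_64-vN-linux" system types.
// Illustrative only; requires GCC or Clang on x86_64.
std::vector<std::string> extraSystemTypes()
{
    std::vector<std::string> res;
    __builtin_cpu_init();
    bool v2 = __builtin_cpu_supports("sse4.2") && __builtin_cpu_supports("popcnt");
    bool v3 = v2 && __builtin_cpu_supports("avx2") && __builtin_cpu_supports("bmi2");
    bool v4 = v3 && __builtin_cpu_supports("avx512f");
    res.push_back("x86_64-v1-linux");
    if (v2) res.push_back("x86_64-v2-linux");
    if (v3) res.push_back("x86_64-v3-linux");
    if (v4) res.push_back("x86_64-v4-linux");
    return res;
}

int main()
{
    for (auto & s : extraSystemTypes())
        std::printf("%s\n", s.c_str());
}
```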
That way we
1. Don't have to recompute them several times
2. Can compute them in a place where we know the type of the parent
derivation, meaning that we don't need the casting dance we had before
Once a build is done, get back to the original derivation, and register
all the newly built outputs for this derivation.
This allows Nix to work properly with derivations that don't have all
their build inputs available − thus allowing garbage collection and
(once it's implemented) binary substitution
In addition to being some ugly template trickery, it was also totally
useless as it was used in only one place, where I could replace it
with just a few extra characters.
Where a `RealisedPath` is a store path with its history, meaning either
an opaque path for stuff that has been directly added to the store, or a
`Realisation` for stuff that has been built by a derivation
This is a low-level refactoring that doesn't bring anything by itself
(except a few dozen extra lines of code :/ ), but raising the
abstraction level a bit is important on a number of levels:
- Commands like `nix build` have to query for the realisations after the
build is finished which is fragile (see
27905f12e4a7207450abe37c9ed78e31603b67e1 for example). Having them
operate directly at the realisation level would avoid that
- Others like `nix copy` currently operate directly on (built) store
paths, but need a bit more information as they will need to register
the realisations on the remote side
Fix a mismatch in the errors thrown when a needed output was missing
from an input derivation, which led to a wrong and quite misleading
error message.
Don't only show the name of the output, but also the derivation to which
this output belongs (as otherwise it's very hard to track back what went
wrong)
Changes:
* The divider lines are gone. These were in practice a bit confusing,
in particular with --show-trace or --keep-going, since then there
were multiple lines, suggesting a start/end which wasn't the case.
* Instead, multi-line error messages are now indented to align with
the prefix (e.g. "error: ").
* The 'description' field is gone since we weren't really using it.
* 'hint' is renamed to 'msg' since it really wasn't a hint.
* The error is now printed *before* the location info.
* The 'name' field is no longer printed since most of the time it
wasn't very useful since it was just the name of the exception (like
EvalError). Ideally in the future this would be a unique, easily
googleable error ID (like rustc).
* "trace:" is now just "…". This assumes error contexts start with
something like "while doing X".
Example before:
error: --- AssertionError ---------------------------------------------------------------------------------------- nix
at: (7:7) in file: /home/eelco/Dev/nixpkgs/pkgs/applications/misc/hello/default.nix
6|
7| x = assert false; 1;
| ^
8|
assertion 'false' failed
----------------------------------------------------- show-trace -----------------------------------------------------
trace: while evaluating the attribute 'x' of the derivation 'hello-2.10'
at: (192:11) in file: /home/eelco/Dev/nixpkgs/pkgs/stdenv/generic/make-derivation.nix
191| // (lib.optionalAttrs (!(attrs ? name) && attrs ? pname && attrs ? version)) {
192| name = "${attrs.pname}-${attrs.version}";
| ^
193| } // (lib.optionalAttrs (stdenv.hostPlatform != stdenv.buildPlatform && !dontAddHostSuffix && (attrs ? name || (attrs ? pname && attrs ? version)))) {
Example after:
error: assertion 'false' failed
at: (7:7) in file: /home/eelco/Dev/nixpkgs/pkgs/applications/misc/hello/default.nix
6|
7| x = assert false; 1;
| ^
8|
… while evaluating the attribute 'x' of the derivation 'hello-2.10'
at: (192:11) in file: /home/eelco/Dev/nixpkgs/pkgs/stdenv/generic/make-derivation.nix
191| // (lib.optionalAttrs (!(attrs ? name) && attrs ? pname && attrs ? version)) {
192| name = "${attrs.pname}-${attrs.version}";
| ^
193| } // (lib.optionalAttrs (stdenv.hostPlatform != stdenv.buildPlatform && !dontAddHostSuffix && (attrs ? name || (attrs ? pname && attrs ? version)))) {
This change is to simplify [Trustix](https://github.com/tweag/trustix) indexing and makes it possible to reconstruct this URL regardless of the compression used.
In particular this means that 7c2e9ca597/contrib/nix/nar/nar.go (L61-L71) can be removed and only the bits that are required to establish trust need to be published in the Trustix build logs.
With the `ca-derivation` experimental features, non-ca derivations used
to have their output paths returned as unknown as long as they weren't
built (because of a mistake in the code that systematically erased the
previous value)
Thanks @regnat and @edolstra for catching this and coming up with the
solution.
The way I had generalized those is wrong, because local settings for
non-local stores are a confusing default. And due to the nature of C++
inheritance, fixing the defaults is more annoying than it should be.
Additionally, I thought we might just drop the check in the substitution
logic since `Store::addToStore` is now streaming, but @regnat rightfully
pointed out that as it downloads dependencies first, that would still be
too late, and also waste effort on possibly unneeded/unwanted
dependencies.
The simple and correct thing to do is just make a store method for the
boolean logic, keeping all the setting and key stuff the way it was
before. That new method is both used by `LocalStore::addToStore` and the
substitution goal check. Perhaps we might eventually make it fancier,
e.g. sending the ValidPathInfo to remote stores for them to validate,
but this is good enough for now.
By default, once you enter x86_64 Rosetta 2, macOS will try to run
everything in x86_64. So an x86_64 Nix will still try to use x86_64
even when system = aarch64-darwin. To avoid this we can set
kern.curproc_arch_affinity sysctl. With kern.curproc_arch_affinity=0,
we ignore this preference.
This is based on how
https://opensource.apple.com/source/system_cmds/system_cmds-880.40.5/arch.tproj/arch.c.auto.html
works. Completely undocumented, but seems to work!
Note, you can verify this works with this impure Nix expression:
```
{
a = derivation {
name = "a";
system = "aarch64-darwin";
builder = "/bin/sh";
args = [ "-e" (builtins.toFile "builder" ''
[ "$(/usr/bin/arch)" = arm64 ]
[ "$(/usr/bin/arch -arch x86_64 /bin/sh -c /usr/bin/arch)" = i386 ]
[ "$(/usr/bin/arch -arch arm64 /bin/sh -c /usr/bin/arch)" = arm64 ]
/usr/bin/touch $out
'') ];
};
b = derivation {
name = "b";
system = "x86_64-darwin";
builder = "/bin/sh";
args = [ "-e" (builtins.toFile "builder" ''
[ "$(/usr/bin/arch)" = i386 ]
[ "$(/usr/bin/arch -arch x86_64 /bin/sh -c /usr/bin/arch)" = i386 ]
[ "$(/usr/bin/arch -arch arm64 /bin/sh -c /usr/bin/arch)" = arm64 ]
/usr/bin/touch $out
'') ];
};
}
```
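A minimal macOS-only sketch of the sysctl in question, assuming it can be set through `sysctlbyname` the way `arch(1)` appears to do (undocumented, so treat this as an assumption):
```cpp
#include <sys/sysctl.h>
#include <cstdio>

int main()
{
    // 0 = don't pin children to the current process's architecture.
    int affinity = 0;
    if (sysctlbyname("kern.curproc_arch_affinity",
                     nullptr, nullptr, &affinity, sizeof(affinity)) != 0) {
        std::perror("sysctlbyname(kern.curproc_arch_affinity)");
        return 1;
    }
    return 0;
}
```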
This resolves #3810 by changing the behavior of `max-jobs = 0`, so
that specifying the option also avoids local building of derivations
with the attribute `preferLocalBuild = true`.
We embrace virtual the rest of the way, and get rid of the
`assert(false)` 0-param constructors.
We also list config base classes first, so the constructor order is
always:
1. all the configs
2. all the stores
Each in the same order
PRs #4370 and #4348 had a bad interaction in that the second broke the
first one in a non-trivial way.
The issue was that since #4348 the logic for detecting whether a
derivation output is already built requires some logic that was specific
to the `LocalStore`.
It happens though that most of this logic could be upstreamed to any `Store`,
which is what this commit does.
This ignore was here because `queryPartialDrvOutputMap` was used both
1. as a cache to avoid having to re-read the derivation (when gc-ing for
example), and
2. as the source of truth for ca realisations
The use-case 2. required it to be able to work even when the derivation
wasn't there anymore (see https://github.com/NixOS/nix/issues/4138).
However, this use-case is now handled by `queryRealisation`, meaning
that we can safely error out if the derivation isn't there anymore
`buildPaths` can be called even for stores where it's not defined in case it's
bound to be a no-op.
The “no-op detection” mechanism was only detecting the case where `buildPaths`
was called on a set of (non-drv) paths that were already present on the store.
This commit extends this mechanism to also detect the case where `buildPaths`
is called on a set of derivation outputs which are already built on the store.
This only works with the ca-derivations flag. It could be possible to
extend this to also work without it, but it would add quite a bit of
complexity, and it's not used without it anyway.
Extend `FSAccessor::readFile` to allow not checking that the path is a
valid one, and rewrite `readInvalidDerivation` using this extended
`readFile`.
Several places in the code use `readInvalidDerivation`, either because
they need to read a derivation that has been written in the store but
not registered yet, or more generally to prevent a deadlock because
`readDerivation` tries to lock the state, so can't be called from a
place where the lock is already held.
However, `readInvalidDerivation` implicitly assumes that the store is a
`LocalFSStore`, which isn't always the case.
The concrete motivation for this is that it's required for `nix copy
--from someBinaryCache` to work, which is tremendously useful for the
tests.
Because of an overly eager refactoring, `addTextToStore` used to throw an
error because the input wasn't a valid nar.
Partially revert that refactoring to wrap the text into a proper nar
(using `dumpString`) to make this method work again
Rather than storing the derivation outputs as `drvPath!outputName` internally,
store them as `drvHashModulo!outputName` (or `outputHash!outputName` for
fixed-output derivations).
This makes the storage slightly more opaque, but enables an earlier
cutoff in cases where a fixed-output dependency changes (but keeps the
same output hash) − same as what we already do for input-addressed
derivations.
Add a new table for tracking the derivation output mappings.
We used to hijack the `DerivationOutputs` table for that, but (despite its
name), it isn't a really good fit:
- Its entries depend on the drv being a valid path, making it play badly with
garbage collection and preventing us from copying a drv output without copying
the whole drv closure too;
- It doesn't guarantee that the output path exists;
By using a different table, we can experiment with a different schema better
suited for tracking the output mappings of CA derivations.
(incidentally, this also fixes #4138)
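For illustration, one possible shape for such a table, keyed on the derivation output rather than on the drv path's validity (hypothetical column names, not necessarily the exact schema Nix ends up using):
```cpp
#include <sqlite3.h>
#include <cstdio>

int main()
{
    sqlite3 * db = nullptr;
    if (sqlite3_open("test-realisations.sqlite", &db) != SQLITE_OK) return 1;

    const char * schema =
        "CREATE TABLE IF NOT EXISTS Realisations ("
        "  drvOutputId TEXT NOT NULL,"   /* e.g. "<drv hash>!out" */
        "  outputPath  TEXT NOT NULL,"   /* the store path it resolved to */
        "  PRIMARY KEY (drvOutputId)"
        ");";

    char * err = nullptr;
    if (sqlite3_exec(db, schema, nullptr, nullptr, &err) != SQLITE_OK) {
        std::fprintf(stderr, "sqlite error: %s\n", err);
        sqlite3_free(err);
        sqlite3_close(db);
        return 1;
    }
    sqlite3_close(db);
    return 0;
}
```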
For each known realisation, store:
- its output
- its output path
This comes with a set of needed changes:
- New `realisations` module declaring the types needed for describing
these mappings
- New `Store::registerDrvOutput` method registering all the needed information
about a derivation output (also replaces `LocalStore::linkDeriverToPath`)
- New `Store::queryRealisation` method to retrieve the information for a
derivation output
This introduces some redundancy on the remote-store side between
`wopQueryDerivationOutputMap` and `wopQueryRealisation`.
However we might need to keep both (regardless of backwards compat)
because we sometimes need to get some info for all the outputs of a
derivation (where `wopQueryDerivationOutputMap` is handy), but all the
stores can't implement it − because listing all the outputs of a
derivation isn't really possible for binary caches where the server
doesn't allow listing a directory.
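A simplified sketch of the shape described above (not the real signatures, which carry more information and go through the worker protocol):
```cpp
#include <map>
#include <optional>
#include <string>

// A derivation output, identified as "<drv hash>!<output name>".
struct DrvOutput {
    std::string drvHash;
    std::string outputName;
    std::string to_string() const { return drvHash + "!" + outputName; }
    bool operator<(const DrvOutput & other) const
    { return to_string() < other.to_string(); }
};

// A realisation maps that output to the store path it resolved to.
struct Realisation {
    DrvOutput id;
    std::string outPath;
};

struct Store {
    virtual void registerDrvOutput(const Realisation & info) = 0;
    virtual std::optional<Realisation> queryRealisation(const DrvOutput & id) = 0;
    virtual ~Store() = default;
};

// Toy in-memory store, just to show how the two methods relate.
struct MemoryStore : Store {
    std::map<DrvOutput, Realisation> realisations;
    void registerDrvOutput(const Realisation & info) override
    { realisations.emplace(info.id, info); }
    std::optional<Realisation> queryRealisation(const DrvOutput & id) override
    {
        auto it = realisations.find(id);
        if (it == realisations.end()) return std::nullopt;
        return it->second;
    }
};

int main()
{
    MemoryStore store;
    store.registerDrvOutput({{"sha256:0000", "out"}, "/nix/store/cccc-hello"});
    return store.queryRealisation({"sha256:0000", "out"}) ? 0 : 1;
}
```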
In `nixStable` (2.3.7 to be precise) it's possible to connect to stores
using an IPv6 address:
nix ping-store --store ssh://root@2001:db8::1
This is also useful for `nixops(1)` where you could specify an IPv6
address in `deployment.targetHost`.
However, this behavior is broken on `nixUnstable` and fails with the
following error:
$ nix store ping --store ssh://root@2001:db8::1
don't know how to open Nix store 'ssh://root@2001:db8::1'
This happened because `openStore` from `libstore` uses the `parseURL`
function from `libfetchers` which expects a valid URL as defined in
RFC2732. However, this is unsupported by `ssh(1)`:
$ nix store ping --store 'ssh://root@[2001:db8::1]'
cannot connect to 'root@[2001:db8::1]'
This patch now allows both ways of specifying a store (`root@2001:db8::1`) and
also `root@[2001:db8::1]` since the latter one is useful to pass query
parameters to the remote store.
In order to achieve this, the following changes were made:
* The URL regex from `url-parts.hh` now allows an IPv6 address in the
form `2001:db8::1` and also `[2001:db8::1]`.
* In `libstore`, a new function named `extractConnStr` ensures that a
proper URL is passed to e.g. `ssh(1)`:
* If a URL looks like either `[2001:db8::1]` or `root@[2001:db8::1]`,
the brackets will be removed using a regex. No additional validation
is done here as only strings parsed by `parseURL` are expected.
* In any other case, the string will be left untouched.
* The rules above only apply for `LegacySSHStore` and `SSHStore` (a.k.a
`ssh://` and `ssh-ng://`).
Unresolved questions:
* I'm not really sure whether we want to allow both variants of IPv6
addresses in the URL parser. However it should be noted that both seem
to be possible according to RFC2732:
> This document includes an update to the generic syntax for Uniform
> Resource Identifiers defined in RFC 2396 [URL]. It defines a syntax
> for IPv6 addresses and allows the use of "[" and "]" within a URI
> explicitly for this reserved purpose.
* Currently, it's not possible to specify a port number after the
hostname, but that doesn't seem to be really supported by the URL
parser anyway. Hence, this is probably out of scope here.
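A standalone sketch of the bracket-stripping step described above (not the exact `extractConnStr` implementation):
```cpp
#include <cassert>
#include <regex>
#include <string>

// Turn "root@[2001:db8::1]" into "root@2001:db8::1" before handing the
// connection string to ssh(1); anything else is left untouched.
std::string stripIPv6Brackets(const std::string & connStr)
{
    static const std::regex bracketed("^(.*@)?\\[(.*)\\]$");
    std::smatch match;
    if (std::regex_match(connStr, match, bracketed))
        return match[1].str() + match[2].str();
    return connStr;
}

int main()
{
    assert(stripIPv6Brackets("root@[2001:db8::1]") == "root@2001:db8::1");
    assert(stripIPv6Brackets("[2001:db8::1]") == "2001:db8::1");
    assert(stripIPv6Brackets("root@example.org") == "root@example.org");
    return 0;
}
```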
The `DerivationGoal` has a variable storing the “final” derivation
output paths that is used (amongst other things) to fill the environment
for the post build hook. However this variable wasn't set when the
build-hook is used, causing a crash when both hooks are used together.
Fix this by setting this variable (from the information in the db) after a run
of the post build hook.
This reverts commit 1b1e076033.
Using `queryPartialDerivationOutputMap` assumes that the derivation
exists locally which isn't the case for remote builders.
Since 0744f7f, it is now useful to have cache.nixos.org in substituters
even if /nix/store is not the Nix Store Dir. This can always be
overridden via configuration, though.
When running universal binaries like /bin/bash, Darwin XNU will choose
which architecture of the binary to use based on "binary preferences".
This change sets that to the current platform for aarch64 and x86_64
builds. In addition it now uses posix_spawn instead of the usual
execve. Note that this does not prevent the other architecture from
being run, just advises which to use.
Unfortunately, posix_spawnattr_setbinpref_np does not appear to be
inherited by child processes in x86_64 Rosetta 2 translations, meaning
that this will not always work as expected.
For example:
{
arm = derivation {
name = "test";
system = "aarch64-darwin";
builder = "/bin/bash";
args = [ "-e" (builtins.toFile "test" ''
set -x
/usr/sbin/sysctl sysctl.proc_translated
/usr/sbin/sysctl sysctl.proc_native
[ "$(/usr/bin/arch)" = arm64 ]
/usr/bin/touch $out
'') ];
};
rosetta = derivation {
name = "test";
system = "x86_64-darwin";
builder = "/bin/bash";
args = [ "-e" (builtins.toFile "test" ''
set -x
/usr/sbin/sysctl sysctl.proc_translated
/usr/sbin/sysctl sysctl.proc_native
[ "$(/usr/bin/arch)" = i386 ]
echo It works!
/usr/bin/touch $out
'') ];
};
}
`arm' fails on x86_64-compiled Nix, but `arm' and `rosetta' succeed on
aarch64-compiled Nix. I suspect there is a way to fix this since:
$ /usr/bin/arch -arch x86_64 /bin/bash \
-c '/usr/bin/arch -arch arm64e /bin/bash -c /usr/bin/arch'
arm64
seems to work correctly. We may need to wait for Apple to update
system_cmds in opensource.apple.com to find out how though.
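For reference, a macOS-only sketch of setting such a binary preference before spawning (placeholder program, not the actual sandbox setup code; as noted above the preference is only advisory):
```cpp
#include <mach/machine.h>
#include <spawn.h>
#include <cstdio>

extern char ** environ;

int main()
{
    posix_spawnattr_t attr;
    posix_spawnattr_init(&attr);

    cpu_type_t pref = CPU_TYPE_ARM64;   // or CPU_TYPE_X86_64
    size_t copied = 0;
    posix_spawnattr_setbinpref_np(&attr, 1, &pref, &copied);

    char * const argv[] = { (char *) "/usr/bin/arch", nullptr };
    pid_t pid;
    int res = posix_spawn(&pid, argv[0], nullptr, &attr, argv, environ);
    posix_spawnattr_destroy(&attr);
    if (res != 0) { std::fprintf(stderr, "posix_spawn failed: %d\n", res); return 1; }
    return 0;
}
```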
macOS systems with ARM64 can utilize a translation layer at
/Library/Apple/usr/libexec/oah to run x86_64 binaries. This change
makes Nix recognize that and add it to "extra-platforms". Note that there
are two cases here since Nix could be built for either x86_64 or
aarch64. In either case, we can switch to the other architecture.
Unfortunately there is not a good way to prevent aarch64 binaries from
being run in x86_64 contexts or vice versa - programs can always
execute programs for the other architecture.
If the build closure contains some CA derivations, then we can't know
ahead of time that we won't build anything, as early cutoff might come
in at a later stage.
This fixes a bug I encountered where `nix-store -qR` will deadlock when
the `--include-outputs` flag is passed and `max-connections=1`.
The deadlock occurs because `RemoteStore::queryDerivationOutputs` takes
the only connection from the connection pool and uses it to check the
daemon version. If the version is new enough, it calls
`Store::queryDerivationOutputs`, which eventually calls
`RemoteStore::queryPartialDerivationOutputMap`, where we take another
connection from the connection pool to check the version again. Because
we still haven't released the connection from the caller, this waits for
a connection to be available, causing a deadlock.
This diff solves the issue by using `getProtocol` to check the protocol
version in the caller `RemoteStore::queryDerivationOutputs`, which
immediately frees the connection back to the pool before returning the
protocol version. That way we've already freed the connection by the
time we call `RemoteStore::queryPartialDerivationOutputMap`.
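An illustration of the fix pattern with a toy pool (not the real `RemoteStore` code): the first connection handle has to be back in the pool before anything else tries to take one:
```cpp
#include <cassert>
#include <memory>
#include <vector>

struct Pool {
    std::vector<int> free = {0};   // a pool with a single connection

    struct Handle {
        Pool & pool;
        int conn;
        Handle(Pool & p, int c) : pool(p), conn(c) {}
        Handle(const Handle &) = delete;
        ~Handle() { pool.free.push_back(conn); }   // hand the connection back
    };

    std::unique_ptr<Handle> get()
    {
        assert(!free.empty() && "would block here: no free connection");
        int conn = free.back();
        free.pop_back();
        return std::make_unique<Handle>(*this, conn);
    }
};

// Like getProtocol(): borrow a connection only long enough to read the
// version, then give it back before returning.
int getProtocol(Pool & pool)
{
    auto handle = pool.get();
    return 28;   // pretend the daemon reported protocol minor version 28
}

int main()
{
    Pool pool;
    int version = getProtocol(pool);   // connection already returned to the pool
    auto conn = pool.get();            // so taking another one cannot deadlock
    return version >= 28 ? 0 : 1;
}
```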
Until now, it was not possible to substitute missing paths from e.g.
`https://cache.nixos.org` on a remote server when building on it using
the new `ssh-ng` protocol.
This is because every store implementation except legacy `ssh://`
ignores the substitution flag passed to `Store::queryValidPaths` while
the `legacy-ssh-store` substitutes the remote store using
`cmdQueryValidPaths` when the remote store is opened with `nix-store
--serve`.
This patch slightly modifies the daemon protocol to allow passing an
integer value suggesting whether to substitute missing paths during
`wopQueryValidPaths`. To implement this on the daemon-side, the
substitution logic from `nix-store --serve` has been moved into a
protected method named `Store::substitutePaths` which gets currently
called from `LocalStore::queryValidPaths` and `Store::queryValidPaths`
if `maybeSubstitute` is `true`.
Fixes #2770
This removes the extra-substituters and extra-sandbox-paths settings
and instead makes every array setting extensible by setting
"extra-<name> = <value>" in the configuration file or passing
"--<name> <value>" on the command line.
This makes it even clearer which of the two hashes was specified in the
nix files. Some may think that "wanted" and "got" are obvious, but:
"got" could mean "got in nix file" and "wanted" could mean "want to see in nix file".
Observed on Centos 7 when user namespaces are disabled:
DerivationGoal::startBuilder() throws an exception, ~DerivationGoal()
waits for the child process to exit, but the child process hangs
forever in drainFD(userNamespaceSync.readSide.get()) in
DerivationGoal::runChild(). Not sure why the SIGKILL doesn't get
through.
Issue #4092.
This change provides support for using access tokens with other
instances of GitHub and GitLab beyond just github.com and
gitlab.com (especially company-specific or foundation-specific
instances).
This change also provides the ability to specify the type of access
token being used, where different types may have different handling,
based on the forge type.
After 0ed946aa61, max-jobs setting (-j/--max-jobs)
stopped working.
The reason was that nrLocalBuilds (which is compared to maxBuildJobs to figure
out whether the limit is reached or not) is not incremented yet when tryBuild
is started; So, the solution is to move the check to tryLocalBuild.
Closes https://github.com/nixos/nix/issues/3763
We don't need it yet, but we could/should in the future, and it's a
cost-free change since we already have the reference. I like it.
Co-authored-by: Robert Hensing <roberth@users.noreply.github.com>
Rather than showing an integer as the default, instead show the boolean
referenced in the description.
The nix.conf.5 manpage used to show "default: 0", which is unnecessarily
opaque and confusing (doesn't 0 mean false, even though the default is
true?); now it properly shows that the default is true.
`nix flake info` calls the github 'commits' API, which requires
authorization when the repository is private. Currently this request
fails with a 404.
This commit adds an authorization header when calling the 'commits' API.
It also changes the way that the 'tarball' API authenticates, moving the
user's token from a query parameter into the Authorization header.
The query parameter method is recently deprecated and will be disallowed
in November 2020. Using them today triggers a warning email.
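A generic libcurl sketch of the header-based approach (Nix goes through its own curl wrapper; the URL and token below are placeholders):
```cpp
#include <curl/curl.h>
#include <string>

int main()
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL * curl = curl_easy_init();
    if (!curl) return 1;

    std::string token = "ghp_exampletoken";   // hypothetical token
    struct curl_slist * headers = nullptr;
    // Token goes in the Authorization header, not in "?access_token=...".
    headers = curl_slist_append(headers, ("Authorization: token " + token).c_str());

    curl_easy_setopt(curl, CURLOPT_URL,
        "https://api.github.com/repos/NixOS/nix/commits?per_page=1");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);

    CURLcode res = curl_easy_perform(curl);

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return res == CURLE_OK ? 0 : 1;
}
```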
Rework the `Store` hierarchy so that there's now one hierarchy for the
store configs and one for the implementations (where each implementation
extends the corresponding config). So a class hierarchy like
```
StoreConfig-------->Store
| |
v v
SubStoreConfig----->SubStore
| |
v v
SubSubStoreConfig-->SubSubStore
```
(with virtual inheritance to prevent the dreaded diamond problem).
The advantage of this architecture is that we can now introspect the configuration of a store without having to instantiate the store itself
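A sketch of the diagram above in C++, using virtual inheritance so each store object contains a single copy of its config (generic names, no real settings):
```cpp
#include <string>

struct StoreConfig {
    virtual ~StoreConfig() = default;
    virtual std::string name() const { return "store"; }
};

struct Store : virtual StoreConfig {
    // actual store operations would live here
};

struct SubStoreConfig : virtual StoreConfig {
    std::string name() const override { return "sub-store"; }
};

struct SubStore : virtual SubStoreConfig, Store {
    // inherits both the config (once, thanks to the virtual base) and Store
};

int main()
{
    // A config can exist on its own, without instantiating the store...
    SubStoreConfig config;
    // ...and the store *is a* config, so introspection works on both.
    SubStore store;
    return store.name() == config.name() ? 0 : 1;
}
```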
Add a new `init()` method to the `Store` class that is supposed to
handle all the effectful initialisation needed to set up the store.
The constructor should remain side-effect free and just initialize the
C++ data structure.
The goal behind that is that we can create “dummy” instances of each
store to query static properties about it (the parameters it accepts for
example)
Directly register the store classes rather than a function to build an
instance of them.
This gives the possibility to introspect static members of the class or
choose different ways of instantiating them.
Add a fallback path in `queryPartialDerivationOutputMap` for daemons
that don't support it.
Also upstreams a couple methods from `SSHStore` to `RemoteStore` as this
is needed to handle the fallback path.
When deploying a Hydra instance with current Nix master, most builds
would not run because of errors like this:
queue monitor: error: --- Error --- hydra-queue-runner
error: --- UsageError --- nix-daemon
not a content address because it is not in the form '<prefix>:<rest>': /nix/store/...-somedrv
The last error message is from parseContentAddress, which expects a
colon-separated string, however what we got here is a store path.
Looking at the worker protocol, the following message sent to the Nix
daemon caused the error above:
0x1E -> wopQuerySubstitutablePathInfos
0x01 -> Number of paths
0x16 -> Length of string
"/nix/store/...-somedrv"
0x00 -> Length of string
""
Looking at writeStorePathCAMap, the store path is indeed the first field
that's transmitted. However, readStorePathCAMap expects it to be the
*second* field *on my machine*, since expression evaluation order is a
classic form of unspecified behaviour[1] in C++.
This has been introduced in https://github.com/NixOS/nix/pull/3689,
specifically in commit 66a62b3189.
[1]: https://en.wikipedia.org/wiki/Unspecified_behavior#Order_of_evaluation_of_subexpressions
Signed-off-by: aszlig <aszlig@nix.build>
If we resolve using the known path of a derivation whose output we
didn't have, we previously blew up. Now we just fail gracefully,
returning the map of all outputs unknown.
This means profiles outside of /nix/var/nix/profiles don't get
garbage-collected. It also means we don't need to scan
/nix/var/nix/profiles for GC roots anymore, except for compatibility
with previously existing generations.
Evidently this was never implemented because Nix switched to using
`buildDerivation` exclusively before `build-remote.pl` was rewritten.
The `nix-copy-ssh` test (already) tests this.
Include a long comment explaining the policy. Perhaps this can be moved
to the manual at some point in the future.
Also bump the daemon protocol minor version, so clients can tell whether
`wopBuildDerivation` supports trustless CA derivation building. I hope
to take advantage of this in a follow-up PR to support trustless remote
building with the minimal sending of derivation closures.
This seems more correct. It also means one can specify the features a
store should support with --store and remote-store=..., which is useful.
I use this to clean up the build remotes test.
Before, processConnection wanted to know a user name and user id, and
`nix-daemon --stdio`, when it isn't proxying to an underlying daemon,
would just assume "root" and 0. But `nix-daemon --stdio` (no proxying)
shouldn't make guesses about who holds the other end of its standard
streams.
Now processConnection takes an "auth hook", so `nix-daemon` can provide
the appropriate policy and daemon.cc doesn't need to know or care what
it is.
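A sketch of the auth-hook idea with simplified signatures (the real types differ):
```cpp
#include <functional>
#include <stdexcept>
#include <string>

struct AuthInfo {
    std::string user;
    bool trusted = false;
};

using AuthHook = std::function<AuthInfo()>;

// processConnection no longer guesses: the caller supplies the policy.
void processConnection(const AuthHook & authHook)
{
    AuthInfo auth = authHook();
    // ... serve the connection as `auth.user`, honouring `auth.trusted` ...
}

int main()
{
    // A socket-based daemon can look up the real peer...
    processConnection([] { return AuthInfo{"alice", false}; });
    // ...while an un-proxied `nix-daemon --stdio` can simply refuse to guess.
    try {
        processConnection([]() -> AuthInfo {
            throw std::runtime_error("refusing to guess the client's identity");
        });
    } catch (const std::runtime_error &) { /* expected */ }
    return 0;
}
```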
Thanks @regnat for catching one of them. The other follows for many of
the same reasons. I'm fine fixing others on a need-to-fix basis,
provided there are no regressions.
Some users have their own hashed-mirrors setup that is used to mirror
things in addition to what’s available on tarballs.nixos.org. Although
this should be feasible to do with a Binary Cache, it’s not always
easy, since you have to remember what "name" each of the tarballs has.
Continuing to support hashed-mirrors is cheap, so it’s best to leave
support in Nix. Note that NIX_HASHED_MIRRORS is also supported in
Nixpkgs through fetchurl.nix.
Note that this excludes tarballs.nixos.org from the default, as in
#3689. All of these are available on cache.nixos.org.
Since 6185d25e52, this was very
latency-bound since it required a round-trip for every 32 KiB. So for
example copying a 514 MiB closure over a virtual ethernet device with
an artificial delay of just 1 ms took 343s. Now it takes 2.7s.
Fixes #3372.
The new interface we offer provides a way of getting all the
DerivationOutputs with the storePaths directly, based on the observation
that it's the most common use case.
istream->tellg() returns -1 so we can't get the number of bytes
written.
Fixes 'uploaded 's3://nix-cache/nar/00819r9lp5kajr6baxfw5dhhc0cx8ndxaz43qmd2f0gn1hk1ynlp.nar.xz' (-1 bytes) in 11620 ms' messages.
This assumption is broken by CA derivations. Making a PR now to do the
breaking daemon change as soon as possible (if it is already too late,
we can bump the protocol instead).
It's a tiny function which is:
- hardly worth abstracting over, and also only used once.
- doesn't work once we get CA drvs
I rewrote the one callsite to be forwards compatible with CA
derivations, and also potentially more performant: instead of reading in
the derivation it can just consult the SQLite DB in the common case.
to each Store implementation. The generic regStore implementation will
only be for the ambiguous shorthands, like "" and "auto".
This also could get us close to simplifying the daemon command.
I got it to just become `LocalStore::addToStoreFromDump`, cleanly taking
a store and then doing nothing too fancy with it.
`LocalStore::addToStore(...Path...)` is now just a simple wrapper with a
bare-bones sinkToSource of the right dump command.
This reverts commit a2c27022e9. See
addToStoreSlow(), we don't need to handle this case efficiently
anymore. In fact, we can almost remove the method/hashAlgo arguments
since the non-recursive and/or non-SHA256 cases are almost never used anymore.
We were calculating the nar hash wrong when the file ingestion method
was flat. I don't think there's anything we can do in that case but dump
the file again, so that's what I do.
As an optimization, we again could reuse the original dump for just the
recursive and non-sha256 case, but I'd rather do that after this fix, and
after my other PRs which deduplicate this code.
We've added the variant to `DerivationOutput` to support them, but made
`DerivationOutput::path` partial to avoid actually implementing them.
With this change, we can all collaborate on "just" removing
`DerivationOutput::path` calls to implement CA derivations.
This reduces memory consumption of
nix-instantiate \
-E 'with import <nixpkgs> {}; runCommand "foo" { src = ./blender; } "echo foo"' \
--option nar-buffer-size 10000
(where ./blender is a 1.1 GiB tree) from 1716 to 36 MiB, while still
ensuring that we don't do any write I/O for small source paths (up to
'nar-buffer-size' bytes). The downside is that large paths are now
always written to a temporary location in the store, even if they
produce an already valid store path. Thus, adding large paths might be
slower and run out of disk space. ¯\_(ツ)_/¯ Of course, you can always
restore the old behaviour by setting 'nar-buffer-size' to a very high
value.
"uid-range" provides 65536 UIDs to a build and runs the build as root
in its user namespace. "systemd-cgroup" allows the build to mount the
systemd cgroup controller (needed for running systemd-nspawn and NixOS
containers).
Also, add a configuration option "auto-allocate-uids" which is needed
to enable these features, and some experimental feature gates.
So to enable support for containers you need the following in
nix.conf:
experimental-features = auto-allocate-uids systemd-cgroup
auto-allocate-uids = true
system-features = uid-range systemd-cgroup
2^18 was overkill. The idea was to enable multiple containers to run
inside a build. However, those containers can use the same UID range -
we don't really care about perfect isolation between containers inside
a build.
Also, run builds in a cgroup namespace (ensuring /proc/self/cgroup
doesn't leak information about the outside world) and mount /sys. This
enables running systemd-nspawn and thus NixOS containers in a Nix
build.
Rather than rely on a nixbld group, we now allocate UIDs/GIDs
dynamically starting at a configurable ID (872415232 by default).
Also, we allocate 2^18 UIDs and GIDs per build, and run the build as
root in its UID namespace. (This should not be the default since it
breaks some builds. We probably should enable this conditional on a
requiredSystemFeature.) The goal is to be able to run (NixOS)
containers in a build. However, this will also require some cgroup
initialisation.
The 2^18 UIDs/GIDs is intended to provide enough ID space to run
multiple containers per build, e.g. for distributed NixOS tests.