This makes nix search always go through the first level of an
attribute set, even if it's not a top level attribute. For instance,
you can now list all GHC compilers with:
$ nix search nixpkgs#haskell.compiler
...
This is similar to how nix-env works when you pass in -A.
This fixes an issue where derivations with a primary output that is
not "out" would fail with:
$ nix profile install nixpkgs#sqlite
error: opening directory '/nix/store/2a2ydlgyydly5czcc8lg12n6qqkfz863-sqlite-3.34.1-bin': No such file or directory
This happens because while derivations produce every output when
built, you might not have them if you didn't build the derivation
yourself (for instance, the store path was fetched from a binary cache).
This uses the outputName provided by DerivationInfo, which appears to
match the first output of the derivation.
Where a `RealisedPath` is a store path together with its history, meaning either
an opaque path for something that has been added directly to the store, or a
`Realisation` for something that has been built by a derivation.
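As a rough illustration (not the actual Nix classes; the stand-in types below are simplified assumptions), a `RealisedPath` can be thought of as a tagged union of those two cases:
```
#include <string>
#include <type_traits>
#include <variant>

// Simplified stand-ins for the real types.
struct StorePath { std::string path; };                   // an opaque store path
struct Realisation { std::string drvOutput, outPath; };   // output built by a derivation

// A RealisedPath is either an opaque path (added directly to the store)
// or a Realisation (carrying the history of how the path was built).
using RealisedPath = std::variant<StorePath, Realisation>;

// Whatever the case, we can always get the underlying store path back:
std::string storePathOf(const RealisedPath & p)
{
    return std::visit([](auto && v) -> std::string {
        if constexpr (std::is_same_v<std::decay_t<decltype(v)>, StorePath>)
            return v.path;
        else
            return v.outPath;
    }, p);
}
```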
This is a low-level refactoring that doesn't bring anything by itself
(except a few dozen extra lines of code :/ ), but raising the
abstraction level a bit is important for a number of reasons:
- Commands like `nix build` have to query for the realisations after the
build is finished, which is fragile (see
27905f12e4a7207450abe37c9ed78e31603b67e1 for example). Having them
operate directly at the realisation level would avoid that
- Others like `nix copy` currently operate directly on (built) store
paths, but need a bit more information as they will need to register
the realisations on the remote side
Example:
error: builder for '/nix/store/9ysqfidhipyzfiy54mh77iqn29j6cpsb-failing.drv' failed with exit code 1;
last 1 log lines:
> FAIL
For full logs, run 'nix log /nix/store/9ysqfidhipyzfiy54mh77iqn29j6cpsb-failing.drv'.
… while importing '/nix/store/pfp4a4bjh642ylxyipncqs03z6kkgfvy-failing'
at /nix/store/25wgzr2qrqqiqfbdb1chpiry221cjglc-source/flake.nix:58:15:
57|
58| ifd = import self.hydraJobs.broken;
| ^
59|
Fix a mismatch in the errors thrown when a needed output was missing
from an input derivation, which led to a wrong and quite misleading error
message.
Show not only the name of the output, but also the derivation to which
this output belongs (as otherwise it's very hard to track down what went
wrong).
It's now
at /home/eelco/Dev/nixpkgs/pkgs/applications/misc/hello/default.nix:7:7:
instead of
at: (7:7) in file: /home/eelco/Dev/nixpkgs/pkgs/applications/misc/hello/default.nix
The new format is more standard and clickable.
Changes:
* The divider lines are gone. These were in practice a bit confusing,
in particular with --show-trace or --keep-going, since then there
were multiple lines, suggesting a start/end which wasn't the case.
* Instead, multi-line error messages are now indented to align with
the prefix (e.g. "error: ").
* The 'description' field is gone since we weren't really using it.
* 'hint' is renamed to 'msg' since it really wasn't a hint.
* The error is now printed *before* the location info.
* The 'name' field is no longer printed since most of the time it
wasn't very useful: it was just the name of the exception (like
EvalError). Ideally in the future this would be a unique, easily
googleable error ID (like rustc).
* "trace:" is now just "…". This assumes error contexts start with
something like "while doing X".
Example before:
error: --- AssertionError ---------------------------------------------------------------------------------------- nix
at: (7:7) in file: /home/eelco/Dev/nixpkgs/pkgs/applications/misc/hello/default.nix
6|
7| x = assert false; 1;
| ^
8|
assertion 'false' failed
----------------------------------------------------- show-trace -----------------------------------------------------
trace: while evaluating the attribute 'x' of the derivation 'hello-2.10'
at: (192:11) in file: /home/eelco/Dev/nixpkgs/pkgs/stdenv/generic/make-derivation.nix
191| // (lib.optionalAttrs (!(attrs ? name) && attrs ? pname && attrs ? version)) {
192| name = "${attrs.pname}-${attrs.version}";
| ^
193| } // (lib.optionalAttrs (stdenv.hostPlatform != stdenv.buildPlatform && !dontAddHostSuffix && (attrs ? name || (attrs ? pname && attrs ? version)))) {
Example after:
error: assertion 'false' failed
at: (7:7) in file: /home/eelco/Dev/nixpkgs/pkgs/applications/misc/hello/default.nix
6|
7| x = assert false; 1;
| ^
8|
… while evaluating the attribute 'x' of the derivation 'hello-2.10'
at: (192:11) in file: /home/eelco/Dev/nixpkgs/pkgs/stdenv/generic/make-derivation.nix
191| // (lib.optionalAttrs (!(attrs ? name) && attrs ? pname && attrs ? version)) {
192| name = "${attrs.pname}-${attrs.version}";
| ^
193| } // (lib.optionalAttrs (stdenv.hostPlatform != stdenv.buildPlatform && !dontAddHostSuffix && (attrs ? name || (attrs ? pname && attrs ? version)))) {
This change simplifies [Trustix](https://github.com/tweag/trustix) indexing and makes it possible to reconstruct this URL regardless of the compression used.
In particular this means that 7c2e9ca597/contrib/nix/nar/nar.go (L61-L71) can be removed and only the bits that are required to establish trust need to be published in the Trustix build logs.
With the `ca-derivations` experimental feature, non-ca derivations used
to have their output paths returned as unknown as long as they weren't
built (because of a mistake in the code that systematically erased the
previous value).
Thanks @regnat and @edolstra for catching this and coming up with the
solution.
The way I had generalized those is wrong, because local settings for
non-local stores are a confusing default. And due to the nature of C++
inheritance, fixing the defaults is more annoying than it should be.
Additionally, I thought we might just drop the check in the substitution
logic since `Store::addToStore` is now streaming, but @regnat rightfully
pointed out that as it downloads dependencies first, that would still be
too late, and also waste effort on possibly unneeded/unwanted
dependencies.
The simple and correct thing to do is just make a store method for the
boolean logic, keeping all the setting and key stuff the way it was
before. That new method is both used by `LocalStore::addToStore` and the
substitution goal check. Perhaps we might eventually make it fancier,
e.g. sending the ValidPathInfo to remote stores for them to validate,
but this is good enough for now.
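As a sketch of the shape this takes (names, members and signatures here are illustrative assumptions, not the exact Nix code):
```
#include <stdexcept>
#include <string>

// Simplified stand-in for the real type.
struct ValidPathInfo { std::string path; std::string signature; };

struct Store
{
    virtual ~Store() = default;

    // Hypothetical method holding the boolean trust logic, so that both
    // LocalStore::addToStore and the substitution goal can ask the same question.
    virtual bool pathInfoIsTrusted(const ValidPathInfo &) { return true; }
};

struct LocalStore : Store
{
    bool requireSigs = true;   // the setting/key handling stays as before

    bool pathInfoIsTrusted(const ValidPathInfo & info) override
    {
        return !requireSigs || !info.signature.empty();
    }

    void addToStore(const ValidPathInfo & info)
    {
        if (!pathInfoIsTrusted(info))
            throw std::runtime_error("refusing to add untrusted path " + info.path);
        // ... stream the contents into the store ...
    }
};
```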
By default, once you enter x86_64 Rosetta 2, macOS will try to run
everything in x86_64. So an x86_64 Nix will still try to use x86_64
even when system = aarch64-darwin. To avoid this we can set the
kern.curproc_arch_affinity sysctl. With kern.curproc_arch_affinity=0,
we ignore this preference.
This is based on how
https://opensource.apple.com/source/system_cmds/system_cmds-880.40.5/arch.tproj/arch.c.auto.html
works. Completely undocumented, but seems to work!
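For illustration only, setting that sysctl for the current process might look roughly like this (whether Nix does it exactly this way, and via `sysctlbyname`, is an assumption):
```
// Darwin-only sketch: clear the architecture affinity inherited from an
// x86_64 (Rosetta 2) parent so that aarch64-darwin builds run natively.
#include <sys/sysctl.h>
#include <cstdio>

int main()
{
    int affinity = 0;  // 0 = ignore the inherited architecture preference
    if (sysctlbyname("kern.curproc_arch_affinity",
                     nullptr, nullptr,            // not interested in the old value
                     &affinity, sizeof affinity) != 0)
        std::perror("sysctlbyname(kern.curproc_arch_affinity)");
    return 0;
}
```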
Note, you can verify this works with this impure Nix expression:
```
{
a = derivation {
name = "a";
system = "aarch64-darwin";
builder = "/bin/sh";
args = [ "-e" (builtins.toFile "builder" ''
[ "$(/usr/bin/arch)" = arm64 ]
[ "$(/usr/bin/arch -arch x86_64 /bin/sh -c /usr/bin/arch)" = i386 ]
[ "$(/usr/bin/arch -arch arm64 /bin/sh -c /usr/bin/arch)" = arm64 ]
/usr/bin/touch $out
'') ];
};
b = derivation {
name = "b";
system = "x86_64-darwin";
builder = "/bin/sh";
args = [ "-e" (builtins.toFile "builder" ''
[ "$(/usr/bin/arch)" = i386 ]
[ "$(/usr/bin/arch -arch x86_64 /bin/sh -c /usr/bin/arch)" = i386 ]
[ "$(/usr/bin/arch -arch arm64 /bin/sh -c /usr/bin/arch)" = arm64 ]
/usr/bin/touch $out
'') ];
};
}
```
This resolves #3810 by changing the behavior of `max-jobs = 0`, so
that specifying the option also avoids local building of derivations
with the attribute `preferLocalBuild = true`.
It appears as though the fetch attribute, which
is simply a variant with 3 alternatives, implicitly
converts boolean arguments to integers. One must
use Explicit<bool> to correctly populate it with
a boolean. This was missing from the implementation,
and resulted in clearly boolean JSON fields being
treated as numbers.
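A minimal sketch of the idea (the concrete variant alternatives below are an assumption, chosen only to show the bool-to-integer pitfall and the `Explicit<bool>` fix):
```
#include <cstdint>
#include <iostream>
#include <string>
#include <variant>

// Wrapper that only matches when a bool is wrapped explicitly,
// so it can never be reached through an implicit conversion.
template<typename T>
struct Explicit { T t; };

using Attr = std::variant<std::string, uint64_t, Explicit<bool>>;

int main()
{
    Attr oops = true;                        // silently becomes the uint64_t alternative
    Attr submodules = Explicit<bool>{true};  // stays a boolean

    std::cout << std::boolalpha
              << std::holds_alternative<uint64_t>(oops) << '\n'               // true
              << std::holds_alternative<Explicit<bool>>(submodules) << '\n';  // true
}
```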
libc++10 seems to be stricter on what it allows in variant conversion.
I'm not sure what the rules are here, but this is the minimal change
needed to get through the compilation errors.
Sometimes it's necessary to fetch a git repository at a revision and
it's unknown which ref contains the revision in question. An example
would be a Cargo.lock which only provides the URL and the revision when
using a git repository as build input.
However it's considered a bad practice to perform a full checkout of a
repository since this may take a lot of time and can eat up a lot of
disk space. This patch makes a full checkout explicit by adding an
`allRefs` argument to `builtins.fetchGit` which fetches all refs if
explicitly set to true.
Closes #2409
A common pitfall when using e.g. `builtins.fetchGit` is the `fatal: not
a tree object` error when trying to fetch a revision of a git repository
that isn't on the `master` branch and no `ref` is specified.
To make the problem clear, I added a simple check
whether the revision in question exists; if it doesn't, a more
meaningful error message is displayed:
```
nix-repl> builtins.fetchGit { url = "https://github.com/owner/myrepo"; rev = "<commit not on master>"; }
error: --- Error -------------------------------------------------------------------- nix
Cannot find Git revision 'bf1cc5c648e6aed7360448a3745bb2fe4fbbf0e9' in ref 'master' of repository 'https://gitlab.com/Ma27/nvim.nix'! Please make sure that the rev exists on the ref you've specified or add allRefs = true; to fetchGit.
```
Closes #2431
We embrace virtual the rest of the way, and get rid of the
`assert(false)` 0-param constructors.
We also list config base classes first, so the constructor order is
always:
1. all the configs
2. all the stores
Each in the same order
Move clearValue inside Value
mkInt instead of setInt
mkBool instead of setBool
mkString instead of setString
mkPath instead of setPath
mkNull instead of setNull
mkAttrs instead of setAttrs
mkList instead of setList*
mkThunk instead of setThunk
mkApp instead of setApp
mkLambda instead of setLambda
mkBlackhole instead of setBlackhole
mkPrimOp instead of setPrimOp
mkPrimOpApp instead of setPrimOpApp
mkExternal instead of setExternal
mkFloat instead of setFloat
Add a note that the static mk* functions should be removed eventually
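A greatly simplified sketch of the new shape (the `Value` layout below is illustrative, not the evaluator's actual one):
```
#include <cassert>

// Simplified stand-in for the evaluator's Value.
struct Value
{
    enum class Type { Uninitialized, Int, Bool } type = Type::Uninitialized;
    union { long long integer; bool boolean; };

    // Member functions replace the old setters / free-standing helpers.
    void mkInt(long long n) { type = Type::Int; integer = n; }
    void mkBool(bool b)     { type = Type::Bool; boolean = b; }
};

int main()
{
    Value v;
    v.mkInt(42);   // previously: setInt(v, 42) (or a static mkInt helper)
    assert(v.type == Value::Type::Int && v.integer == 42);
}
```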
PRs #4370 and #4348 had a bad interaction in that the second broke the first
one in a non-trivial way.
The issue was that since #4348, detecting whether a
derivation output is already built requires some logic that was specific
to the `LocalStore`.
It happens though that most of this logic could be upstreamed to any `Store`,
which is what this commit does.
This ignore was here because `queryPartialDrvOutputMap` was used both
1. as a cache to avoid having to re-read the derivation (when gc-ing for
example), and
2. as the source of truth for ca realisations
Use-case 2 required it to be able to work even when the derivation
wasn't there anymore (see https://github.com/NixOS/nix/issues/4138).
However, this use-case is now handled by `queryRealisation`, meaning
that we can safely error out if the derivation isn't there anymore.
`buildPaths` can be called even for stores where it's not defined in case it's
bound to be a no-op.
The “no-op detection” mechanism was only detecting the case where `buildPaths`
was called on a set of (non-drv) paths that were already present on the store.
This commit extends this mechanism to also detect the case where `buildPaths`
is called on a set of derivation outputs which are already built on the store.
This only works with the ca-derivations flag. It could be possible to
extend this to also work without it, but it would add quite a bit of
complexity, and it's not used without it anyways.
Extend `FSAccessor::readFile` to allow not checking that the path is a
valid one, and rewrite `readInvalidDerivation` using this extended
`readFile`.
Several places in the code use `readInvalidDerivation`, either because
they need to read a derivation that has been written in the store but
not registered yet, or more generally to prevent a deadlock because
`readDerivation` tries to lock the state, so can't be called from a
place where the lock is already held.
However, `readInvalidDerivation` implicitly assumes that the store is a
`LocalFSStore`, which isn't always the case.
The concrete motivation for this is that it's required for `nix copy
--from someBinaryCache` to work, which is tremendously useful for the
tests.
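Roughly, the extension amounts to a flag on `readFile` (the parameter name and default below are assumptions):
```
#include <string>

// Simplified sketch of the accessor interface.
struct FSAccessor
{
    virtual ~FSAccessor() = default;

    // When requireValidPath is false, skip the "is this a registered store
    // path?" check, which is exactly what reading a not-yet-registered .drv needs.
    virtual std::string readFile(const std::string & path,
                                 bool requireValidPath = true) = 0;
};

// readInvalidDerivation can then work for any store that exposes an
// FSAccessor, instead of implicitly assuming a LocalFSStore.
std::string readInvalidDerivation(FSAccessor & accessor, const std::string & drvPath)
{
    return accessor.readFile(drvPath, /* requireValidPath */ false);
}
```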
Because of an overly eager refactoring, `addTextToStore` used to throw an
error because the input wasn't a valid nar.
Partially revert that refactoring to wrap the text into a proper nar
(using `dumpString`) to make this method work again.
Rather than storing the derivation outputs as `drvPath!outputName` internally,
store them as `drvHashModulo!outputName` (or `outputHash!outputName` for
fixed-output derivations).
This makes the storage slightly more opaque, but enables an earlier
cutoff in cases where a fixed-output dependency changes (but keeps the
same output hash) − same as what we already do for input-addressed
derivations.
Add a new table for tracking the derivation output mappings.
We used to hijack the `DerivationOutputs` table for that, but (despite its
name) it isn't really a good fit:
- Its entries depend on the drv being a valid path, making it play badly with
garbage collection and preventing us from copying a drv output without copying
the whole drv closure too;
- It doesn't guarantee that the output path exists;
By using a different table, we can experiment with a different schema better
suited for tracking the output mappings of CA derivations.
(incidentally, this also fixes #4138)
For each known realisation, store:
- its output
- its output path
This comes with a set of needed changes:
- New `realisations` module declaring the types needed for describing
these mappings
- New `Store::registerDrvOutput` method registering all the needed information
about a derivation output (also replaces `LocalStore::linkDeriverToPath`)
- New `Store::queryRealisation` method to retrieve the information for a
derivation output
This introduces some redundancy on the remote-store side between
`wopQueryDerivationOutputMap` and `wopQueryRealisation`.
However we might need to keep both (regardless of backwards compat)
because we sometimes need to get the info for all the outputs of a
derivation (where `wopQueryDerivationOutputMap` is handy), but not all
stores can implement it − because listing all the outputs of a
derivation isn't really possible for binary caches where the server
doesn't allow listing a directory.
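Condensed into a sketch, the new pieces fit together roughly like this (types and signatures are simplified; the real interfaces carry more information):
```
#include <map>
#include <optional>
#include <string>
#include <tuple>

// A derivation output is identified by the derivation and the output name.
struct DrvOutput
{
    std::string drvHash;     // identifier of the derivation
    std::string outputName;  // e.g. "out", "bin", ...
    bool operator<(const DrvOutput & other) const
    {
        return std::tie(drvHash, outputName)
             < std::tie(other.drvHash, other.outputName);
    }
};

// A realisation maps a derivation output to the store path it was built to.
struct Realisation
{
    DrvOutput id;
    std::string outPath;
};

struct Store
{
    virtual ~Store() = default;

    // Register everything we know about a built derivation output.
    virtual void registerDrvOutput(const Realisation &) = 0;

    // Retrieve the realisation for a single derivation output, if known.
    virtual std::optional<Realisation> queryRealisation(const DrvOutput &) = 0;
};

// Trivial in-memory store, just to show how the two methods relate.
struct DummyStore : Store
{
    std::map<DrvOutput, Realisation> realisations;

    void registerDrvOutput(const Realisation & r) override
    {
        realisations.insert_or_assign(r.id, r);
    }

    std::optional<Realisation> queryRealisation(const DrvOutput & id) override
    {
        auto it = realisations.find(id);
        if (it == realisations.end()) return std::nullopt;
        return it->second;
    }
};
```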
In `nixStable` (2.3.7 to be precise) it's possible to connect to stores
using an IPv6 address:
nix ping-store --store ssh://root@2001:db8::1
This is also useful for `nixops(1)` where you could specify an IPv6
address in `deployment.targetHost`.
However, this behavior is broken on `nixUnstable` and fails with the
following error:
$ nix store ping --store ssh://root@2001:db8::1
don't know how to open Nix store 'ssh://root@2001:db8::1'
This happened because `openStore` from `libstore` uses the `parseURL`
function from `libfetchers` which expects a valid URL as defined in
RFC2732. However, this is unsupported by `ssh(1)`:
$ nix store ping --store 'ssh://root@[2001:db8::1]'
cannot connect to 'root@[2001:db8::1]'
This patch now allows both ways of specifying a store, `root@2001:db8::1` and
`root@[2001:db8::1]`, since the latter one is useful to pass query
parameters to the remote store.
In order to achieve this, the following changes were made:
* The URL regex from `url-parts.hh` now allows an IPv6 address in the
form `2001:db8::1` and also `[2001:db8::1]`.
* In `libstore`, a new function named `extractConnStr` ensures that a
proper URL is passed to e.g. `ssh(1)` (see the sketch after this list):
* If a URL looks like either `[2001:db8::1]` or `root@[2001:db8::1]`,
the brackets will be removed using a regex. No additional validation
is done here as only strings parsed by `parseURL` are expected.
* In any other case, the string will be left untouched.
* The rules above only apply to `LegacySSHStore` and `SSHStore` (a.k.a.
`ssh://` and `ssh-ng://`).
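A sketch of the bracket-stripping step described above (the regex and error handling are simplified):
```
#include <iostream>
#include <regex>
#include <string>

// Turn "root@[2001:db8::1]" or "[2001:db8::1]" into something ssh(1)
// accepts, and leave every other connection string untouched.
std::string extractConnStr(const std::string & connStr)
{
    static const std::regex bracketed{R"(^(.*@)?\[([0-9a-fA-F:]+)\]$)"};
    std::smatch match;
    if (std::regex_match(connStr, match, bracketed))
        return match[1].str() + match[2].str();   // drop the brackets
    return connStr;
}

int main()
{
    std::cout << extractConnStr("root@[2001:db8::1]") << '\n'  // root@2001:db8::1
              << extractConnStr("root@2001:db8::1") << '\n'    // unchanged
              << extractConnStr("example.org") << '\n';        // unchanged
}
```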
Unresolved questions:
* I'm not really sure whether we want to allow both variants of IPv6
addresses in the URL parser. However it should be noted that both seem
to be possible according to RFC2732:
> This document incudes an update to the generic syntax for Uniform
> Resource Identifiers defined in RFC 2396 [URL]. It defines a syntax
> for IPv6 addresses and allows the use of "[" and "]" within a URI
> explicitly for this reserved purpose.
* Currently, it's not possible to specify a port number after the
hostname; however, it seems as if this is not really supported by the URL
parser either. Hence, this is probably out of scope here.
The `DerivationGoal` has a variable storing the “final” derivation
output paths that is used (amongst other things) to fill the environment
for the post build hook. However this variable wasn't set when the
build-hook is used, causing a crash when both hooks are used together.
Fix this by setting this variable (from the information in the db) after a run
of the post build hook.
This reverts commit 1b1e076033.
Using `queryPartialDerivationOutputMap` assumes that the derivation
exists locally which isn't the case for remote builders.
Since 0744f7f, it is now useful to have cache.nixos.org in substituters
even if /nix/store is not the Nix store dir. This can always be
overridden via configuration, though.
When running universal binaries like /bin/bash, Darwin XNU will choose
which architecture of the binary to use based on "binary preferences".
This change sets that to the current platform for aarch64 and x86_64
builds. In addition it now uses posix_spawn instead of the usual
execve. Note that this does not prevent the other architecture from
being run; it just advises which one to use.
Unfortunately, posix_spawnattr_setbinpref_np does not appear to be
inherited by child processes in x86_64 Rosetta 2 translations, meaning
that this will not always work as expected.
For example:
{
arm = derivation {
name = "test";
system = "aarch64-darwin";
builder = "/bin/bash";
args = [ "-e" (builtins.toFile "test" ''
set -x
/usr/sbin/sysctl sysctl.proc_translated
/usr/sbin/sysctl sysctl.proc_native
[ "$(/usr/bin/arch)" = arm64 ]
/usr/bin/touch $out
'') ];
};
rosetta = derivation {
name = "test";
system = "x86_64-darwin";
builder = "/bin/bash";
args = [ "-e" (builtins.toFile "test" ''
set -x
/usr/sbin/sysctl sysctl.proc_translated
/usr/sbin/sysctl sysctl.proc_native
[ "$(/usr/bin/arch)" = i386 ]
echo It works!
/usr/bin/touch $out
'') ];
};
}
`arm' fails on x86_64-compiled Nix, but `arm' and `rosetta' succeed on
aarch64-compiled Nix. I suspect there is a way to fix this since:
$ /usr/bin/arch -arch x86_64 /bin/bash \
-c '/usr/bin/arch -arch arm64e /bin/bash -c /usr/bin/arch'
arm64
seems to work correctly. We may need to wait for Apple to update
system_cmds in opensource.apple.com to find out how though.
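For reference, setting a binary preference before spawning looks roughly like this standalone sketch (not the actual code in Nix):
```
// Darwin-only sketch: spawn /usr/bin/arch with a preference for arm64,
// using posix_spawnattr_setbinpref_np as described above.
#include <cstdio>
#include <mach/machine.h>
#include <spawn.h>
#include <sys/wait.h>

extern char **environ;

int main()
{
    posix_spawnattr_t attr;
    posix_spawnattr_init(&attr);

    cpu_type_t pref[] = { CPU_TYPE_ARM64 };   // or CPU_TYPE_X86_64
    size_t count = 0;
    posix_spawnattr_setbinpref_np(&attr, 1, pref, &count);

    char * const argv[] = { (char *) "/usr/bin/arch", nullptr };
    pid_t pid;
    int err = posix_spawn(&pid, argv[0], nullptr, &attr, argv, environ);
    posix_spawnattr_destroy(&attr);
    if (err != 0) { std::fprintf(stderr, "posix_spawn failed: %d\n", err); return 1; }

    int status;
    waitpid(pid, &status, 0);   // the child prints arm64 or i386 depending on the preference
    return 0;
}
```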
macOS systems with ARM64 can utilize a translation layer at
/Library/Apple/usr/libexec/oah to run x86_64 binaries. This change
makes Nix recognize that and add it to "extra-platforms". Note that there
are two cases here since Nix could be built for either x86_64 or
aarch64. In either case, we can switch to the other architecture.
Unfortunately there is not a good way to prevent aarch64 binaries from
being run in x86_64 contexts or vice versa - programs can always
execute programs for the other architecture.
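As a rough sketch of that detection (the path is the one mentioned above; how the result is wired into "extra-platforms" is simplified):
```
#include <set>
#include <string>
#include <sys/stat.h>

// Sketch: compute which extra platforms this machine can run, based on the
// presence of Apple's translation layer described above.
std::set<std::string> getExtraPlatforms(const std::string & thisSystem)
{
    struct stat st;
    bool hasRosetta = stat("/Library/Apple/usr/libexec/oah", &st) == 0;

    std::set<std::string> extra;
    if (hasRosetta) {
        if (thisSystem == "x86_64-darwin")
            extra.insert("aarch64-darwin");  // an x86_64 Nix running under Rosetta on an ARM Mac
        else if (thisSystem == "aarch64-darwin")
            extra.insert("x86_64-darwin");   // a native ARM Nix can run x86_64 via Rosetta
    }
    return extra;
}
```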