By default, once you enter x86_64 Rosetta 2, macOS will try to run
everything in x86_64. So an x86_64 Nix will still try to use x86_64
even when system = aarch64-darwin. To avoid this we can set the
kern.curproc_arch_affinity sysctl. With kern.curproc_arch_affinity=0,
we ignore this preference.
This is based on how
https://opensource.apple.com/source/system_cmds/system_cmds-880.40.5/arch.tproj/arch.c.auto.html
works. Completely undocumented, but seems to work!
Note: you can verify that this works with the following impure Nix expression:
```
{
  a = derivation {
    name = "a";
    system = "aarch64-darwin";
    builder = "/bin/sh";
    args = [ "-e" (builtins.toFile "builder" ''
      [ "$(/usr/bin/arch)" = arm64 ]
      [ "$(/usr/bin/arch -arch x86_64 /bin/sh -c /usr/bin/arch)" = i386 ]
      [ "$(/usr/bin/arch -arch arm64 /bin/sh -c /usr/bin/arch)" = arm64 ]
      /usr/bin/touch $out
    '') ];
  };
  b = derivation {
    name = "b";
    system = "x86_64-darwin";
    builder = "/bin/sh";
    args = [ "-e" (builtins.toFile "builder" ''
      [ "$(/usr/bin/arch)" = i386 ]
      [ "$(/usr/bin/arch -arch x86_64 /bin/sh -c /usr/bin/arch)" = i386 ]
      [ "$(/usr/bin/arch -arch arm64 /bin/sh -c /usr/bin/arch)" = arm64 ]
      /usr/bin/touch $out
    '') ];
  };
}
```
This resolves #3810 by changing the behavior of `max-jobs = 0`, so
that specifying the option also avoids local building of derivations
with the attribute `preferLocalBuild = true`.
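For illustration, here is a minimal derivation carrying that attribute (its
name, system, and build script are made up); with `max-jobs = 0`, such a
derivation is now dispatched to a remote builder instead of being built
locally:
```
derivation {
  name = "prefer-local-example";
  system = "x86_64-linux";
  builder = "/bin/sh";
  args = [ "-c" "echo ok > $out" ];
  preferLocalBuild = true;   # previously this still forced a local build with max-jobs = 0
}
```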
It appears as though the fetch attribute, which
is simply a variant with 3 elements, implicitly
converts boolean arguments to integers. One must
use Explicit<bool> to correctly populate it with
a boolean. This was missing from the implementation,
and resulted in clearly boolean JSON fields being
treated as numbers.
libc++10 seems to be stricter about what it allows in variant conversion.
I'm not sure what the rules are here, but this is the minimal change
needed to get through the compilation errors.
Sometimes it's necessary to fetch a git repository at a revision and
it's unknown which ref contains the revision in question. An example
would be a Cargo.lock which only provides the URL and the revision when
using a git repository as build input.
However, it's considered bad practice to perform a full checkout of a
repository, since this may take a lot of time and can eat up a lot of
disk space. This patch makes a full checkout explicit by adding an
`allRefs` argument to `builtins.fetchGit` which fetches all refs when
explicitly set to `true`.
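As an illustration (the repository URL and revision below are made-up
placeholders), fetching a dependency that a Cargo.lock pins only by revision
would look like this:
```
builtins.fetchGit {
  url = "https://example.org/some/dependency.git";   # hypothetical URL from Cargo.lock
  rev = "0000000000000000000000000000000000000000";  # pinned revision, branch unknown
  allRefs = true;                                     # fetch all refs so the rev can be found
}
```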
Closes #2409
A common pitfall when using e.g. `builtins.fetchGit` is the `fatal: not
a tree object` error when trying to fetch a revision of a Git repository
that isn't on the `master` branch while no `ref` is specified.
To make the problem clear, I added a simple check for whether the
revision in question exists; if it doesn't, a more meaningful
error message is displayed:
```
nix-repl> builtins.fetchGit { url = "https://github.com/owner/myrepo"; rev = "<commit not on master>"; }
error: --- Error -------------------------------------------------------------------- nix
Cannot find Git revision 'bf1cc5c648e6aed7360448a3745bb2fe4fbbf0e9' in ref 'master' of repository 'https://gitlab.com/Ma27/nvim.nix'! Please make sure that the rev exists on the ref you've specified or add allRefs = true; to fetchGit.
```
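Following the hint in the new message (the repository and revision are the
ones from the error text above), the failing fetch can be resolved either by
pointing `ref` at the branch containing the commit or by opting into a full
fetch:
```
builtins.fetchGit {
  url = "https://gitlab.com/Ma27/nvim.nix";
  rev = "bf1cc5c648e6aed7360448a3745bb2fe4fbbf0e9";
  allRefs = true;   # alternatively, set ref to the branch containing this rev
}
```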
Closes #2431
We embrace virtual the rest of the way, and get rid of the
`assert(false)` 0-param constructors.
We also list config base classes first, so the constructor order is
always:
1. all the configs
2. all the stores
Each in the same order
Move clearValue inside Value
mkInt instead of setInt
mkBool instead of setBool
mkString instead of setString
mkPath instead of setPath
mkNull instead of setNull
mkAttrs instead of setAttrs
mkList instead of setList*
mkThunk instead of setThunk
mkApp instead of setApp
mkLambda instead of setLambda
mkBlackhole instead of setBlackhole
mkPrimOp instead of setPrimOp
mkPrimOpApp instead of setPrimOpApp
mkExternal instead of setExternal
mkFloat instead of setFloat
Add note that the static mk* function should be removed eventually
PRs #4370 and #4348 had a bad interaction in that the second broke the first
one in a non-trivial way.
The issue was that since #4348 the logic for detecting whether a
derivation output is already built requires some logic that was specific
to the `LocalStore`.
It happens, though, that most of this logic can be moved up into the generic `Store`,
which is what this commit does.
This ignore was here because `queryPartialDrvOutputMap` was used both
1. as a cache to avoid having to re-read the derivation (when gc-ing for
example), and
2. as the source of truth for ca realisations
Use case 2 required it to work even when the derivation
wasn't there anymore (see https://github.com/NixOS/nix/issues/4138).
However, that use case is now handled by `queryRealisation`, meaning
that we can safely error out if the derivation isn't there anymore.
`buildPaths` can be called even for stores where it's not defined in case it's
bound to be a no-op.
The “no-op detection” mechanism was only detecting the case where `buildPaths`
was called on a set of (non-drv) paths that were already present on the store.
This commit extends this mechanism to also detect the case where `buildPaths`
is called on a set of derivation outputs which are already built on the store.
This only works with the ca-derivations flag. It would be possible to
extend this to also work without it, but that would add quite a bit of
complexity, and it's not used without it anyway.
Extend `FSAccessor::readFile` to allow not checking that the path is a
valid one, and rewrite `readInvalidDerivation` using this extended
`readFile`.
Several places in the code use `readInvalidDerivation`, either because
they need to read a derivation that has been written in the store but
not registered yet, or more generally to prevent a deadlock because
`readDerivation` tries to lock the state, so can't be called from a
place where the lock is already held.
However, `readInvalidDerivation` implicitly assumes that the store is a
`LocalFSStore`, which isn't always the case.
The concrete motivation for this is that it's required for `nix copy
--from someBinaryCache` to work, which is tremendously useful for the
tests.
Because of an overly eager refactoring, `addTextToStore` used to throw an
error because the input wasn't a valid nar.
Partially revert that refactoring to wrap the text into a proper nar
(using `dumpString`) so that this method works again.
Rather than storing the derivation outputs as `drvPath!outputName` internally,
store them as `drvHashModulo!outputName` (or `outputHash!outputName` for
fixed-output derivations).
This makes the storage slightly more opaque, but enables an earlier
cutoff in cases where a fixed-output dependency changes (but keeps the
same output hash) − same as what we already do for input-addressed
derivations.
Add a new table for tracking the derivation output mappings.
We used to hijack the `DerivationOutputs` table for that, but (despite its
name) it isn't really a good fit:
- Its entries depend on the drv being a valid path, making it play badly with
garbage collection and preventing us from copying a drv output without copying
the whole drv closure too;
- It doesn't guarantee that the output path exists;
By using a different table, we can experiment with a different schema better
suited for tracking the output mappings of CA derivations.
(incidentally, this also fixes #4138)
For each known realisation, store:
- its output
- its output path
This comes with a set of needed changes:
- New `realisations` module declaring the types needed for describing
these mappings
- New `Store::registerDrvOutput` method registering all the needed information
about a derivation output (also replaces `LocalStore::linkDeriverToPath`)
- New `Store::queryRealisation` method to retrieve the information for a
derivation output
This introduces some redundancy on the remote-store side between
`wopQueryDerivationOutputMap` and `wopQueryRealisation`.
However we might need to keep both (regardless of backwards compat)
because we sometimes need to get some info for all the outputs of a
derivation (where `wopQueryDerivationOutputMap` is handy), but not all
stores can implement it − because listing all the outputs of a
derivation isn't really possible for binary caches where the server
doesn't allow listing a directory.
In `nixStable` (2.3.7 to be precise) it's possible to connect to stores
using an IPv6 address:
nix ping-store --store ssh://root@2001:db8::1
This is also useful for `nixops(1)` where you could specify an IPv6
address in `deployment.targetHost`.
However, this behavior is broken on `nixUnstable` and fails with the
following error:
$ nix store ping --store ssh://root@2001:db8::1
don't know how to open Nix store 'ssh://root@2001:db8::1'
This happened because `openStore` from `libstore` uses the `parseURL`
function from `libfetchers` which expects a valid URL as defined in
RFC2732. However, this is unsupported by `ssh(1)`:
$ nix store ping --store 'ssh://root@[2001:db8::1]'
cannot connect to 'root@[2001:db8::1]'
This patch now allows both ways of specifying a store, `root@2001:db8::1` and
`root@[2001:db8::1]`; the latter is useful for passing query
parameters to the remote store.
In order to achieve this, the following changes were made:
* The URL regex from `url-parts.hh` now allows an IPv6 address in the
form `2001:db8::1` and also `[2001:db8::1]`.
* In `libstore`, a new function named `extractConnStr` ensures that a
proper URL is passed to e.g. `ssh(1)`:
* If a URL looks like either `[2001:db8::1]` or `root@[2001:db8::1]`,
the brackets will be removed using a regex. No additional validation
is done here as only strings parsed by `parseURL` are expected.
* In any other case, the string will be left untouched.
* The rules above only apply for `LegacySSHStore` and `SSHStore` (a.k.a
`ssh://` and `ssh-ng://`).
Unresolved questions:
* I'm not really sure whether we want to allow both variants of IPv6
addresses in the URL parser. However it should be noted that both seem
to be possible according to RFC2732:
> This document incudes an update to the generic syntax for Uniform
> Resource Identifiers defined in RFC 2396 [URL]. It defines a syntax
> for IPv6 addresses and allows the use of "[" and "]" within a URI
> explicitly for this reserved purpose.
* Currently it's not possible to specify a port number after the
hostname; however, it seems as though this isn't really supported by the URL
parser either. Hence, this is probably out of scope here.
The `DerivationGoal` has a variable storing the “final” derivation
output paths that is used (amongst other things) to fill the environment
for the post build hook. However, this variable wasn't set when the
build hook is used, causing a crash when both hooks are used together.
Fix this by setting this variable (from the information in the db) after a run
of the post build hook.
This reverts commit 1b1e076033.
Using `queryPartialDerivationOutputMap` assumes that the derivation
exists locally which isn't the case for remote builders.
Since 0744f7f, it is now useful to have cache.nixos.org in substituters
even if /nix/store is not the Nix store directory. This can always be
overridden via configuration, though.
When running universal binaries like /bin/bash, Darwin XNU will choose
which architecture of the binary to use based on "binary preferences".
This change sets that to the current platform for aarch64 and x86_64
builds. In addition it now uses posix_spawn instead of the usual
execve. Note that this does not prevent the other architecture from
being run, just advises which to use.
Unfortunately, posix_spawnattr_setbinpref_np does not appear to be
inherited by child processes in x86_64 Rosetta 2 translations, meaning
that this will not always work as expected.
For example:
```
{
  arm = derivation {
    name = "test";
    system = "aarch64-darwin";
    builder = "/bin/bash";
    args = [ "-e" (builtins.toFile "test" ''
      set -x
      /usr/sbin/sysctl sysctl.proc_translated
      /usr/sbin/sysctl sysctl.proc_native
      [ "$(/usr/bin/arch)" = arm64 ]
      /usr/bin/touch $out
    '') ];
  };
  rosetta = derivation {
    name = "test";
    system = "x86_64-darwin";
    builder = "/bin/bash";
    args = [ "-e" (builtins.toFile "test" ''
      set -x
      /usr/sbin/sysctl sysctl.proc_translated
      /usr/sbin/sysctl sysctl.proc_native
      [ "$(/usr/bin/arch)" = i386 ]
      echo It works!
      /usr/bin/touch $out
    '') ];
  };
}
```
`arm' fails on x86_64-compiled Nix, but `arm' and `rosetta' succeed on
aarch64-compiled Nix. I suspect there is a way to fix this since:
$ /usr/bin/arch -arch x86_64 /bin/bash \
-c '/usr/bin/arch -arch arm64e /bin/bash -c /usr/bin/arch'
arm64
seems to work correctly. We may need to wait for Apple to update
system_cmds in opensource.apple.com to find out how though.
macOS systems with ARM64 can utilize a translation layer at
/Library/Apple/usr/libexec/oah to run x86_64 binaries. This change
makes Nix recognize that and add it to "extra-platforms". Note that there
are two cases here since Nix could be built for either x86_64 or
aarch64. In either case, we can switch to the other architecture.
Unfortunately there is not a good way to prevent aarch64 binaries from
being run in x86_64 contexts or vice versa - programs can always
execute programs for the other architecture.
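As a rough sketch (the derivation name and build script are made up), an
aarch64-built Nix on such a system should now accept an x86_64-darwin
derivation like the following without any manual extra-platforms
configuration:
```
derivation {
  name = "rosetta-check";
  system = "x86_64-darwin";   # runs via the Rosetta 2 translator on an ARM64 Mac
  builder = "/bin/sh";
  args = [ "-c" "/usr/bin/touch $out" ];
}
```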
If the build closure contains some CA derivations, then we can't know
ahead of time that we won't build anything, as early cutoff might come in
at a later stage.
Using fallocate() to preallocate file space does more harm than good:
- breaks compression on btrfs
- has been called "not the right thing to do" by xfs developers
(because the delayed allocation that most filesystems implement leads to smarter
allocation than what the filesystem has to do if we fallocate files up front)
Without setting HGPLAIN, the user's environment leaks into
hg invocations, which means that the output may not be in the
expected format.
HGPLAIN is the Mercurial-recommended solution for this, in that
it's intended for use by scripts and programs which are looking
to parse Mercurial's output in a consistent manner.
This fixes a bug I encountered where `nix-store -qR` will deadlock when
the `--include-outputs` flag is passed and `max-connections=1`.
The deadlock occurs because `RemoteStore::queryDerivationOutputs` takes
the only connection from the connection pool and uses it to check the
daemon version. If the version is new enough, it calls
`Store::queryDerivationOutputs`, which eventually calls
`RemoteStore::queryPartialDerivationOutputMap`, where we take another
connection from the connection pool to check the version again. Because
we still haven't released the connection from the caller, this waits for
a connection to be available, causing a deadlock.
This diff solves the issue by using `getProtocol` to check the protocol
version in the caller `RemoteStore::queryDerivationOutputs`, which
immediately frees the connection back to the pool before returning the
protocol version. That way we've already freed the connection by the
time we call `RemoteStore::queryPartialDerivationOutputMap`.
Fixes:
$ nix build --store /tmp/nix /home/eelco/Dev/patchelf#hydraJobs.build.x86_64-linux
warning: Git tree '/home/eelco/Dev/patchelf' is dirty
error: --- RestrictedPathError ------------------------------------------------------------------------------------------- nix
access to path '/tmp/nix/nix/store/xmkvfmffk7xfnazykb5kx999aika8an4-source/flake.nix' is forbidden in restricted mode
(use '--show-trace' to show detailed location information)
Until now, it was not possible to substitute missing paths from e.g.
`https://cache.nixos.org` on a remote server when building on it using
the new `ssh-ng` protocol.
This is because every store implementation except legacy `ssh://`
ignores the substitution flag passed to `Store::queryValidPaths` while
the `legacy-ssh-store` substitutes the remote store using
`cmdQueryValidPaths` when the remote store is opened with `nix-store
--serve`.
This patch slightly modifies the daemon protocol to allow passing an
integer value suggesting whether to substitute missing paths during
`wopQueryValidPaths`. To implement this on the daemon-side, the
substitution logic from `nix-store --serve` has been moved into a
protected method named `Store::substitutePaths` which gets currently
called from `LocalStore::queryValidPaths` and `Store::queryValidPaths`
if `maybeSubstitute` is `true`.
Fixes #2770
Crucially this introduces BoehmGCStackAllocator, but it also
adds a bunch of wiring to avoid making libutil depend on bdw-gc.
Part of the solution for #4178 and #4200
This removes the extra-substituters and extra-sandbox-paths settings
and instead makes every array setting extensible by setting
"extra-<name> = <value>" in the configuration file or passing
"--<name> <value>" on the command line.
This makes it even clearer which of the two hashes was specified in the
Nix files. Some may think that "wanted" and "got" are obvious, but
"got" could mean "got in the Nix file" and "wanted" could mean "want to see in the Nix file".
This makes it possible to have per-project configuration in flake.nix,
e.g. binary caches and other stuff:
nixConfig.bash-prompt-suffix = "[1;35mngi# [0m";
nixConfig.substituters = [ "https://cache.ngi0.nixos.org/" ];
This is primarily useful if you're hacking simultaneously on a package
and one of its dependencies. E.g. if you're hacking on Hydra and Nix,
you would start a dev shell for Nix, and then a dev shell for Hydra as
follows:
$ nix develop \
--redirect .#hydraJobs.build.x86_64-linux.nix ~/Dev/nix/outputs/out \
--redirect .#hydraJobs.build.x86_64-linux.nix.dev ~/Dev/nix/outputs/dev
(This assumes hydraJobs.build.x86_64-linux has a passthru.nix
attribute. You can also use a store path.)
This causes all references in the environment to those store paths to
be rewritten to ~/Dev/nix/outputs/{out,dev}. Note: unfortunately, you
may need to set LD_LIBRARY_PATH=~/Dev/nix/outputs/out/lib because
Nixpkgs' ld-wrapper only adds -rpath entries for -L flags that point
to the Nix store.
Fix #3975: currently, if Ctrl-C is pressed during a phase, the interactive subshell
is not exited. Removing --rcfile when --phase is present makes bash
non-interactive.
Make nix output completions in the form `completion\tdescription`.
This can't be used by bash (afaik), but other shells like zsh or fish
can display it alongside the completion choices.