Following discussion with Shea and Graham. It's a big enough change
from the last release. Also, from a semver perspective, 2.0 makes more
sense because we did remove some interfaces (like nix-pull/nix-push).
Some servers, such as Artifactory, allow uploading with PUT and HTTP
Basic auth. This allows 'nix copy' to upload binaries to those
servers.
Worked on together with @adelbertc
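For example, an upload might look like this (a sketch; the cache URL is
a placeholder, and the Basic auth credentials would typically come from
a netrc file via the netrc-file setting):

  $ nix copy --to https://artifactory.example.com/artifactory/nix-cache \
      /nix/store/<hash>-some-package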
In this mode, the following restrictions apply:
* The builtins currentTime, currentSystem and storePath throw an
error.
* $NIX_PATH and -I are ignored.
* fetchGit and fetchMercurial require a revision hash.
* fetchurl and fetchTarball require a sha256 attribute.
* No file system access is allowed outside of the paths returned by
fetch{Git,Mercurial,url,Tarball}. Thus 'nix build -f ./foo.nix' is
not allowed.
Thus, the evaluation result is completely reproducible from the
command line arguments. E.g.
  nix build --pure-eval '(
    let
      nix = fetchGit { url = https://github.com/NixOS/nixpkgs.git; rev = "9c927de4b179a6dd210dd88d34bda8af4b575680"; };
      nixpkgs = fetchGit { url = https://github.com/NixOS/nixpkgs.git; ref = "release-17.09"; rev = "66b4de79e3841530e6d9c6baf98702aa1f7124e4"; };
    in (import (nix + "/release.nix") { inherit nix nixpkgs; }).build.x86_64-linux
  )'
The goal is to enable completely reproducible and traceable
evaluation. For example, a NixOS configuration could be fully
described by a single Git commit hash. 'nixos-rebuild' would do
something like
  nix build --pure-eval '(
    (import (fetchGit { url = file:///my-nixos-config; rev = "..."; })).system
  )'
where the Git repository /my-nixos-config would use further fetchGit
calls or Git externals to fetch Nixpkgs and whatever other
dependencies it has. Either way, the commit hash would uniquely
identify the NixOS configuration and allow it to be reproduced.
E.g.
$ time nix cat-store --store https://cache.nixos.org?local-nar-cache=/tmp/nars \
/nix/store/b0w2hafndl09h64fhb86kw6bmhbmnpm1-blender-2.79/share/icons/hicolor/scalable/apps/blender.svg > /dev/null
real 0m4.139s
$ time nix cat-store --store https://cache.nixos.org?local-nar-cache=/tmp/nars \
/nix/store/b0w2hafndl09h64fhb86kw6bmhbmnpm1-blender-2.79/share/icons/hicolor/scalable/apps/blender.svg > /dev/null
real 0m0.024s
(Before, the second call took ~0.220s.)
This will use a NAR listing in
/tmp/nars/b0w2hafndl09h64fhb86kw6bmhbmnpm1.ls containing all metadata,
including the offsets of regular files inside the NAR. Thus, we don't
need to read the entire NAR. (We do read the entire listing, but
that's generally pretty small. We could use a SQLite DB by borrowing
some more code from nixos-channel-scripts/file-cache.hh.)
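For illustration, such a listing is a JSON document roughly of this
shape (the file names, sizes and offsets here are made up):

  {
    "version": 1,
    "root": {
      "type": "directory",
      "entries": {
        "share": {
          "type": "directory",
          "entries": {
            "blender.svg": { "type": "regular", "size": 4093, "narOffset": 3247120 }
          }
        }
      }
    }
  }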
This is primarily useful when Hydra is serving files from an S3 binary
cache, in particular when you have giant NARs. E.g. we had some 12 GiB
NARs, so accessing individual files was pretty slow.
The name had become a misnomer since it's not only used for
substitution from binary caches, but also when adding/copying any
(non-content-addressed) path to a store.
This allows specifying the AWS configuration profile to use. E.g.
nix copy --from s3://my-cache?profile=aws-dev-account /nix/store/cf3isrlqavvd5w7rpky1fa8j9lcnlggm-...
As far as we're concerned, not being able to access a file just means
the file is missing. Plus, AWS explicitly goes out of its way to
return a 403 if the file is missing and the requester doesn't have
permission to list the bucket.
Also getting rid of an old hack that Eelco said was only relevant
to an older AWS SDK.
This will allow bind and connect to 127.0.0.1, which can reduce purity/
security (if you're running a vulnerable service on localhost) but is
also needed for a ton of test suites, so I'm leaving it turned off by
default but allowing certain derivations to turn it on as needed.
It also allows DNS resolution of arbitrary hostnames but I haven't found
a way to avoid that. In principle I'd just want to allow resolving
localhost but that doesn't seem to be possible.
I don't think this belongs under `build-use-sandbox = relaxed` because we
want it on Hydra and I don't think it's the end of the world.
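The per-derivation opt-in is a special attribute; a minimal sketch,
assuming the attribute is spelled __darwinAllowLocalNetworking (as it
is in current Nix):

  with import <nixpkgs> {};
  runCommand "needs-localhost" { __darwinAllowLocalNetworking = true; } ''
    # this build may now bind/connect to 127.0.0.1
    touch $out
  ''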
The computation of urlHash didn't take the name into account, so
subsequent fetchurl calls with the same URL but a different name would
resolve to the same cached store path.
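For example (illustrative URL), before this fix both of these would
have yielded the same cached store path despite the different names:

  builtins.fetchurl { url = "https://example.org/data.tar.gz"; name = "foo.tar.gz"; }
  builtins.fetchurl { url = "https://example.org/data.tar.gz"; name = "bar.tar.gz"; }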
You can now include files via the "builders" option, using the syntax
"@<filename>". Having only one option makes it easier to override
builders completely.
For backward compatibility, the default is "@/etc/nix/machines", or
"@<filename>" for each file name in NIX_REMOTE_SYSTEMS.
This makes it slightly easier to see at a glance what in a build's
sandbox profile is unique to the build and what is standard. It's also
a first step toward factoring more of our Darwin logic into Scheme
functions, which will give us a bit more flexibility. And of course
less of that nasty codegen in C++! 😀
This speeds up commands like "nix cat-store". For example:
$ time nix cat-store --store https://cache.nixos.org?local-nar-cache=/tmp/nar-cache /nix/store/i60yncmq6w9dyv37zd2k454g0fkl3arl-systemd-234/etc/udev/udev.conf
real 0m4.336s
$ time nix cat-store --store https://cache.nixos.org?local-nar-cache=/tmp/nar-cache /nix/store/i60yncmq6w9dyv37zd2k454g0fkl3arl-systemd-234/etc/udev/udev.conf
real 0m0.045s
The primary motivation is to allow hydra-server to serve files from S3
binary caches. Previously Hydra had a hack to do "nix-store -r
<path>", but that fetches the entire closure so is prohibitively
expensive.
There is no garbage collection of the NAR cache yet. Also, the entire
NAR is read when accessing a single member file. We could generate the
NAR listing to provide random access.
Note: the NAR cache is indexed by the store path hash, not the content
hash, so NAR caches should not be shared between binary caches, unless
you're sure that all your builds are binary-reproducible.
Probably as a result of a bad merge in
4b8f1b0ec0, we had both a
BinaryCacheStoreAccessor and a
RemoteFSAccessor. BinaryCacheStore::getFSAccessor() returned the
latter, but BinaryCacheStore::addToStore() checked for the
former. This probably caused hydra-queue-runner to download paths that
it just uploaded.
I needed this to test ACL/xattr removal in
canonicalisePathMetaData(). Might also be useful if you need to build
old Nixpkgs versions that don't have the required patches to remove
setuid/setgid creation.
It was getting too much like whac-a-mole listing all the retriable error
conditions, so we now retry by default and list the cases where retrying
is almost certainly hopeless.
This is a hack to make hydra-queue-runner free its temproots
periodically, thereby ensuring that garbage collection of the
corresponding paths is not blocked until the queue runner is
restarted.
It would be better if temproots could be released earlier than at
process exit. I started working on a RAII object returned by functions
like addToStore() that releases temproots. However, this would be a
pretty massive change so I gave up on it for now.
For example,
$ nix-store -q --roots /nix/store/7phd2sav7068nivgvmj2vpm3v47fd27l-patchelf-0.8pre845_0315148
{temp:1}
denotes that the path is only being kept alive by a temporary root
(i.e. /nix/var/nix/temproots/). Similarly,
$ nix-store --gc --print-roots
...
{memory:9} -> /nix/store/094gpjn9f15ip17wzxhma4r51nvsj17p-curl-7.53.1
shows that curl is being used by some process.
Since we may use a dedicated file descriptor in the future, this
allows us to change it. So builders can do
  if [[ -n $NIX_LOG_FD ]]; then
      echo "@nix { message... }" >&$NIX_LOG_FD
  fi
Nix can now automatically run the garbage collector during builds or
while adding paths to the store. The option "min-free = <bytes>"
specifies that Nix should run the garbage collector whenever free
space in the Nix store drops below <bytes>. It will then delete
garbage until "max-free" bytes are available.
Garbage collection during builds is asynchronous; running builds are
not paused and new builds are not blocked. However, there is also a
synchronous GC run prior to the first build/substitution.
Currently, no old GC roots are deleted (as in "nix-collect-garbage
-d").
Since file locks are per-process rather than per-file-descriptor, the
garbage collector would always acquire a lock on its own temproots
file and conclude that it's stale.
Without this, substitute info is fetched sequentially, which is very
slow. In the old UI (e.g. nix-build), we call printMissing(), which
calls queryMissing(), thereby preheating the binary cache cache
(i.e. the local cache of binary cache lookups). But the new UI doesn't
do that.
In particular, drop the "build-" and "gc-" prefixes which are
pointless. So now you can say
nix build --no-sandbox
instead of
nix build --no-build-use-sandbox
In particular, don't use base-64, which we don't support. (We do have
base-32 redirects for hysterical reasons.)
Also, add a test for the hashed mirror feature.
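For reference, hashed mirrors are configured via the hashed-mirrors
setting and are queried by hash algorithm and base-16 hash, roughly
like this (the hash is a placeholder; http://tarballs.nixos.org is the
default mirror, if I remember correctly):

  # nix.conf
  hashed-mirrors = http://tarballs.nixos.org

  # a fixed-output fetchurl with a known sha256 <h> is then first tried at
  #   http://tarballs.nixos.org/sha256/<h in base-16>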
This allows builds to call setuid binaries. This was previously
possible until we started using seccomp. Turns out that seccomp by
default disallows processes from acquiring new privileges. Generally,
any use of setuid binaries (except those created by the builder
itself) is by definition impure, but some people were relying on this
ability for certain tests.
Example:
$ nix build '(with import <nixpkgs> {}; runCommand "foo" {} "/run/wrappers/bin/ping -c 1 8.8.8.8; exit 1")' --no-allow-new-privileges
builder for ‘/nix/store/j0nd8kv85hd6r4kxgnwzvr0k65ykf6fv-foo.drv’ failed with exit code 1; last 2 log lines:
cannot raise the capability into the Ambient set
: Operation not permitted
$ nix build '(with import <nixpkgs> {}; runCommand "foo" {} "/run/wrappers/bin/ping -c 1 8.8.8.8; exit 1")' --allow-new-privileges
builder for ‘/nix/store/j0nd8kv85hd6r4kxgnwzvr0k65ykf6fv-foo.drv’ failed with exit code 1; last 6 log lines:
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=46 time=15.2 ms
Fixes #1429.
Functions like copyClosure() had 3 bool arguments, which creates a
severe risk of mixing up arguments.
Also, implement copyClosure() using copyPaths().
Cygwin sqlite3 is patched to call SetDllDirectory("/usr/bin") on init, which
affects the current process and is inherited by child processes. It causes
DLLs to be loaded from /usr/bin/ before $PATH, which breaks all sorts of
things. A typical failure would be a header/lib version mismatch (e.g.
openssl when running checkPhase on openssh). We'll just set it back to the
default value.
Note that this is a problem with the cygwin version of sqlite3 (currently
3.18.0). nixpkgs doesn't have the problematic patch.
Recently aws-sdk-cpp quietly switched to using S3 virtual host URIs
(https://github.com/aws/aws-sdk-cpp/commit/69d9c53882), i.e. it sends
requests to http://<bucket>.<region>.s3.amazonaws.com rather than
http://<region>.s3.amazonaws.com/<bucket>. However this interacts
badly with curl connection reuse. For example, if we do the following:
1) Check whether a bucket exists using GetBucketLocation.
2) If it doesn't, create it using CreateBucket.
3) Do operations on the bucket.
then 3) will fail for a minute or so with a NoSuchBucket exception,
presumably because the server being hit is a fallback for cases when
buckets don't exist.
Disabling the use of virtual hosts ensures that 3) succeeds
immediately. (I don't know what S3's consistency guarantees are for
bucket creation, but in practice buckets appear to be available
immediately.)
Newer versions of aws-sdk-cpp call CalculateDelayBeforeNextRetry()
even for non-retriable errors (like NoSuchKey), which causes log spam in
hydra-queue-runner.
Sandboxes cannot be nested, so if Nix's build runs inside a sandbox,
it cannot use a sandbox itself. I don't see a clean way to detect
whether we're in a sandbox, so use a test-specific hack.
https://github.com/NixOS/nix/issues/1413
In particular, UF_IMMUTABLE (uchg) needs to be cleared to allow the
path to be garbage-collected or optimised.
See https://github.com/NixOS/nixpkgs/issues/25819.
Thus, instead of ‘--option <name> <value>’, you can write ‘--<name>
<value>’. So
--option http-connections 100
becomes
--http-connections 100
Apart from brevity, the difference is that it's not an error to set a
non-existent option via --option, but unrecognized arguments are
fatal.
Boolean options have special treatment: they're mapped to the
argument-less flags ‘--<name>’ and ‘--no-<name>’. E.g.
--option auto-optimise-store false
becomes
--no-auto-optimise-store
Even with "build-use-sandbox = false", we now use sandboxing with a
permissive profile that allows everything except the creation of
setuid/setgid binaries.