The --insecure flag tells curl not to bother checking whether the TLS
certificate presented by the server actually matches the requested
hostname and is issued by a trusted CA chain. This almost entirely
negates any benefit of using TLS in the first place.
This removes the --insecure flag to ensure we actually have a secure
connection to the intended hostname before downloading binaries.
Manually tested locally within a dev-shell; was able to download
binaries from https://cache.nixos.org without issue.
[Note: --insecure was only used for fetching NARs, whose integrity is
verified by Nix anyway using the hash from the .narinfo. But if we can
fetch the .narinfo without --insecure, we can also fetch the .nar, so
there is not much point to using --insecure. --Eelco]
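For illustration, with the flag removed the fetch is an ordinary curl
invocation that performs full certificate verification (hypothetical
example):
$ curl --fail --location https://cache.nixos.org/nix-cache-info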
Some benchmarking suggested this as a good value. Running
$ benchmark -f ... -t 25 -- sh -c 'rm -f /nix/var/nix/binary-cache*; nix-store -r /nix/store/x5z8a2yvz8h6ccmhwrwrp9igg03575jg-nixos-15.09.git.5fd87e1M.drv --dry-run --option binary-caches-parallel-connections <N>'
gave the following mean elapsed times for these values of N:
N=10: 3.3541
N=20: 2.9320
N=25: 2.6690
N=30: 2.9417
N=50: 3.2021
N=100: 3.5718
N=150: 4.2079
Memory usage is also reduced (N=150 used 186 MB, N=25 only 68 MB).
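For reference, the same setting can be made persistent in nix.conf (a
sketch, using the option name from the benchmark command above):
binary-caches-parallel-connections = 25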
Closes #708.
This makes that option even more insecure, by also not checking the SSL host.
But without this parameter, one can still get SSL errors even when
"verify-https-binary-caches" is false, which is unexpected IMO.
This reverts commit 76f985b92d. We
shouldn't mess with $MANPATH, because on some "man" implementations
(like NixOS'), the default value of $MANPATH is derived from $PATH. So
if you set $MANPATH, you lose the default locations.
This is not strictly needed for integrity (since we already include
the NAR hash in the fingerprint) but it helps against endless data
attacks [1]. (However, this will also require
download-from-binary-cache.pl to bail out if it receives more than the
specified number of bytes.)
[1] https://isis.poly.edu/~jcappos/papers/cappos_mirror_ccs_08.pdf
In some cases the bash builtin command "cd" can print the new working
directory to stdout (e.g. when $CDPATH is set). This caused the
install script to fail while copying files because the source path was
wrong.
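A minimal sketch of the defensive pattern (hypothetical variable
names), silencing "cd" inside a command substitution:
# "cd" may echo the resolved directory, so redirect its stdout
self="$(cd "$(dirname "$0")" > /dev/null && pwd)"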
Fixes #476.
We only need to sign the store path, NAR hash and references (the
"fingerprint"). Everything else is irrelevant to security. For
instance, the compression algorithm or the hash of the compressed NAR
don't matter as long as the contents of the uncompressed NAR are
correct.
(Maybe we should include derivers in the fingerprint, but they're
broken and nobody cares about them. Also, it might be nice in the
future if .narinfos contained signatures from multiple independent
signers. But that's impossible if the deriver is included in the
fingerprint, since everybody will tend to have a different deriver for
the same store path.)
Also renamed the "Signature" field to "Sig" since the format changed
in an incompatible way.
Sodium's Ed25519 signatures are much shorter than OpenSSL's RSA
signatures. Public keys are also much shorter, so they're now
specified directly in the nix.conf option ‘binary-cache-public-keys’.
The new command ‘nix-store --generate-binary-cache-key’ generates and
prints a public and secret key.
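A sketch of the workflow, assuming the three-argument form the
command eventually settled on, with a placeholder for the base64
public key:
$ nix-store --generate-binary-cache-key cache.example.org-1 cache-key.sec cache-key.pub
Then on the clients, in nix.conf:
binary-cache-public-keys = cache.example.org-1:<base64 public key>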
This moves runHook to a later position in the rcfile, which makes it
possible to set the PS1 environment variable for a nix-shell
environment, e.g.:
# turn the color of the prompt to blue
shellHook = ''
  export PS1="\n\[\033[1;34m\][\u@\h:\w]$\[\033[0m\] ";
'';
‘--run’ is like ‘--command’, except that it runs the command in a
non-interactive shell. This is important if you do things like:
$ nix-shell --command make
Hitting Ctrl-C while make is running drops you into the interactive
Nix shell, which is probably not what you want. So you can now do
$ nix-shell --run make
instead.
So you can have a script like:
#! /usr/bin/env nix-shell
#! nix-shell script.nix -i python
import prettytable
x = prettytable.PrettyTable(["Foo", "Bar"])
for i in range(1, 10): x.add_row([i, i**2])
print x
with a ‘script.nix’ in the same directory:
with import <nixpkgs> {};
runCommand "dummy" { buildInputs = [ python pythonPackages.prettytable ]; } ""
(Of course, in this particular case, using the ‘-p’ flag is more
convenient.)
This allows scripts to fetch their own dependencies via nix-shell. For
instance, here is a Haskell script that, when executed, pulls in GHC
and the HTTP package:
#! /usr/bin/env nix-shell
#! nix-shell -i runghc -p haskellPackages.ghc haskellPackages.HTTP
import Network.HTTP
main = do
  resp <- Network.HTTP.simpleHTTP (getRequest "http://nixos.org/")
  body <- getResponseBody resp
  print (take 100 body)
Or a Perl script that pulls in Perl and some CPAN packages:
#! /usr/bin/env nix-shell
#! nix-shell -i perl -p perl perlPackages.HTMLTokeParserSimple perlPackages.LWP
use HTML::TokeParser::Simple;
my $p = HTML::TokeParser::Simple->new(url => 'http://nixos.org/');
while (my $token = $p->get_tag("a")) {
  my $href = $token->get_attr("href");
  print "$href\n" if $href;
}
Note that the options to nix-shell must be given on a separate line
that starts with the magic string ‘#! nix-shell’. This is because
‘env’ does not allow passing arguments to an interpreter directly.
Apparently, turning on utf8 encoding on stderr changes its flushing
behaviour, causing sendReply to not send anything.
http://hydra.nixos.org/build/13944384
This makes things more efficient (we don't need to use an SSH master
connection, and we only start a single remote process) and gets rid of
locking issues (the remote nix-store process will keep inputs and
outputs locked as long as they're needed).
It also makes it more or less secure to connect directly to the root
account on the build machine, using a forced command
(e.g. ‘command="nix-store --serve --write"’). This bypasses the Nix
daemon and is therefore more efficient.
Also, don't call nix-store to import the output paths.
If a derivation declares multiple outputs and the first (default)
output is not "out", then "nix-instantiate" returns the path with the
output name appended after "!". That suffix must be stripped before
any path checks are done.
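A minimal shell sketch of the stripping (hypothetical variable name):
# remove an output name appended after "!", if present
drvPath="${drvPath%%!*}"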
This allows you to easily set up a build environment containing the
specified packages from Nixpkgs. For example:
$ nix-shell -p sqlite xorg.libX11 hello
will start a shell in which the given packages are present.
The flag ‘--check’ to ‘nix-store -r’ or ‘nix-build’ will cause Nix to
redo the build of a derivation whose output paths are already valid.
If the new output differs from the original output, an error is
printed. This makes it easier to test if a build is deterministic.
(Obviously this cannot catch all sources of non-determinism, but it
catches the most common one, namely the current time.)
For example:
$ nix-build '<nixpkgs>' -A patchelf
...
$ nix-build '<nixpkgs>' -A patchelf --check
error: derivation `/nix/store/1ipvxsdnbhl1rw6siz6x92s7sc8nwkkb-patchelf-0.6' may not be deterministic: hash mismatch in output `/nix/store/4pc1dmw5xkwmc6q3gdc9i5nbjl4dkjpp-patchelf-0.6.drv'
The --check build fails if not all outputs are valid. Thus the first
call to nix-build is necessary to ensure that all outputs are valid.
The current outputs are left untouched: the new outputs are either put
in a chroot or diverted to a different location in the store using
hash rewriting.
The tarball can now be unpacked anywhere. The installation script
uses "sudo" to create /nix if it doesn't exist. It also fetches the
nixpkgs-unstable channel.
NAR info files in binary caches can now have a cryptographic signature
that Nix will verify before using the corresponding NAR file.
To create a private/public key pair for signing and verifying a binary
cache, do:
$ openssl genrsa -out ./cache-key.sec 2048
$ openssl rsa -in ./cache-key.sec -pubout > ./cache-key.pub
You should also come up with a symbolic name for the key, such as
"cache.example.org-1". This will be used by clients to look up the
public key. (It's a good idea to number keys, in case you ever need
to revoke/replace one.)
To create a binary cache signed with the private key:
$ nix-push --dest /path/to/binary-cache --key ./cache-key.sec --key-name cache.example.org-1
The public key (cache-key.pub) should be distributed to the clients.
Their nix.conf should contain something like:
signed-binary-caches = *
binary-cache-public-key-cache.example.org-1 = /path/to/cache-key.pub
If all works well, then if Nix fetches something from the signed
binary cache, you will see a message like:
*** Downloading ‘http://cache.example.org/nar/7dppcj5sc1nda7l54rjc0g5l1hamj09j-subversion-1.7.11’ (signed by ‘cache.example.org-1’) to ‘/nix/store/7dppcj5sc1nda7l54rjc0g5l1hamj09j-subversion-1.7.11’...
On the other hand, if the signature is wrong, you get a message like
NAR info file `http://cache.example.org/7dppcj5sc1nda7l54rjc0g5l1hamj09j.narinfo' has an invalid signature; ignoring
Signatures are implemented as a single line appended to the NAR info
file, which looks like this:
Signature: 1;cache.example.org-1;HQ9Xzyanq9iV...muQ==
Thus the signature has 3 fields: a version (currently "1"), the ID of
the key, and the base64-encoded signature of the SHA-256 hash of the
contents of the NAR info file up to but not including the Signature
line.
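A rough sketch of checking a signature by hand, assuming GNU
coreutils and that it was produced with "openssl dgst -sha256 -sign":
$ grep '^Signature:' foo.narinfo | cut -d';' -f3 | base64 -d > sig.bin
$ sed '/^Signature:/,$d' foo.narinfo | openssl dgst -sha256 -verify cache-key.pub -signature sig.bin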
Issue #75.
This reverts commit 194e3374b8.
Checking the command line for GC roots means that
$ nix-store --delete $path
will fail because $path is now a root because it's mentioned on the
command line.
Ever since SQLite in Nixpkgs was updated to 3.8.0.2, Nix has randomly
segfaulted on Darwin:
http://hydra.nixos.org/build/6175515
http://hydra.nixos.org/build/6611038
It turns out that this is because the binary cache substituter somehow
ends up loading two versions of SQLite: the one in Nixpkgs and the
other from /usr/lib/libsqlite3.dylib. It's not exactly clear why the
latter is loaded, but it appears to be because WWW::Curl indirectly loads
/System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation,
which in turn seems to load /usr/lib/libsqlite3.dylib. This leads to
a segfault when Perl exits:
#0 0x00000001010375f4 in sqlite3_finalize ()
#1 0x000000010125806e in sqlite_st_destroy ()
#2 0x000000010124bc30 in XS_DBD__SQLite__st_DESTROY ()
#3 0x00000001001c8155 in XS_DBI_dispatch ()
...
#14 0x0000000100023224 in perl_destruct ()
#15 0x0000000100000d6a in main ()
...
The workaround is to explicitly load DBD::SQLite before WWW::Curl.
nix-shell with the --command flag might be used non-interactively, but
if bash starts non-interactively (i.e. with stdin or stderr not a
terminal), it won't source the script given in --rcfile. However, in
that case it *will* source the script found in $BASH_ENV, so we can use
that instead.
Also, don't source ~/.bashrc in a non-interactive shell (detectable
by checking the PS1 environment variable).
Signed-off-by: Shea Levy <shea@shealevy.com>
Nixpkgs's stdenv setup script sets the "nullglob" option, but doing so
breaks Bash completion on NixOS (when ‘programs.bash.enableCompletion’
is set) and on Ubuntu. So clear that flag afterwards. Of course,
this may break stdenv functions in subtle ways...
Nixpkgs' stdenv disables dependency tracking by default. That makes
sense for one-time builds, but in an interactive environment we expect
repeated "make" invocations to do the right thing.
This reverts commit 69b8f9980f.
The timeout should be enforced remotely. Otherwise, if the garbage
collector is running either locally or remotely, it will block the
build or closure copying for some time. If the garbage collector
takes too long, the build may time out, which is not what we want.
Also, on heavily loaded systems, copying large paths to and from the
remote machine can take a long time, also potentially resulting in a
timeout.
For instance, it's pointless to keep copy-from-other-stores running if
there are no other stores, or download-using-manifests if there are no
manifests. This also speeds things up because we don't send queries
to those substituters.
Previously, if a binary cache was hanging/unreachable/slow,
download-from-binary-cache.pl would also hang without any indication
to the user. Now, if fetching a URL takes more than 5 seconds, it
will print a message to that effect.
Amazon S3 returns HTTP status code 403 if a file doesn't exist and the
user has no permission to list the contents of the bucket. So treat
it as 404 (meaning it's cached in the NARExistence table).
The "$UID != 0" makes no sense: if the local side has write access to
the Nix store (which is always the case) then it doesn't matter if
we're root - we can import unsigned paths either way.
Otherwise it will set the parent's stdin to non-blocking mode, causing
the subsequent read of the set of inputs/outputs to fail randomly.
That's insane.
Before selecting a machine, build-remote.pl will try to run the
command "nix-builds-inhibited" on the machine. If this command exists
and returns a 0 exit code, then the machine won't be used. It's up to
the user to provide this command, but it would typically be a script
that checks whether there is enough disk space and whether the load is
not too high.
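A hypothetical "nix-builds-inhibited" script along those lines:
#! /bin/sh
# Exit 0 (inhibiting builds) when less than 1 GiB is free on /nix.
free=$(df -Pk /nix | awk 'NR==2 { print $4 }')
[ "$free" -lt 1048576 ]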
Don't pass --timeout / --max-silent-time to the remote builder.
Instead, let the local Nix process terminate the build if it exceeds a
timeout. The remote builder will be killed as a side-effect. This
gives better error reporting (since the timeout message from the
remote side wasn't properly propagated) and handles non-Nix problems
like SSH hangs.
This allows providing additional binary caches, useful in scripts like
Hydra's build reproduction scripts, in particular because untrusted
caches are ignored.
This should make life easier for single-user (non-daemon)
installations. Note that when the daemon is used, the "calling user"
is root, so we're not using any untrusted caches.
For example, given a derivation with outputs "out", "man" and "bin":
$ nix-build -A pkg
produces ./result pointing to the "out" output;
$ nix-build -A pkg.man
produces ./result-man pointing to the "man" output;
$ nix-build -A pkg.all
produces ./result, ./result-man and ./result-bin;
$ nix-build -A pkg.all -A pkg2
produces ./result, ./result-man, ./result-bin and ./result-2.
Binary caches can now specify a priority in their nix-cache-info file.
The binary cache substituter checks caches in order of priority. This
is to ensure that fast, static caches like nixos.org/binary-cache are
processed before slow, dynamic caches like hydra.nixos.org.
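For example, a fast static cache might advertise a low value in its
nix-cache-info file (lower values presumably sort first):
Priority: 30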
This allows disabling the use of binary caches, e.g.
$ nix-build ... --option use-binary-caches false
Note that
$ nix-build ... --option binary-caches ''
does not disable all binary caches, since the caches defined by
channels will still be used.
If ‘--link’ is given, nix-push will create hard links to the NAR files
in the store, rather than copying them. This is faster and requires
less disk space. However, it doesn't work if the store is on a
different file system.
I.e. do what git does. I'm too lazy to keep the builtin help text up
to date :-)
Also add ‘--help’ to various commands that lacked it
(e.g. nix-collect-garbage).
This operation allows fixing corrupted or accidentally deleted store
paths by redownloading them using substituters, if available.
Since the corrupted path cannot be replaced atomically, there is a
very small time window (one system call) during which neither the old
(corrupted) nor the new (repaired) contents are available. So
repairing should be used with some care on critical packages like
Glibc.
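Assuming the operation is exposed as a nix-store flag, usage would
look something like:
$ nix-store --repair-path /nix/store/<hash>-glibc-2.13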
Commit 6a214f3e06 copied most of the Nix
shell initialisation code from NixOS to nix-profile.sh; however, that
code assumes a multi-user install and is Linux-specific (e.g. it calls
the "stat" command). So go back to the simple single-user version.
Fixes #49.
Negative lookups are purged from the DB after a day, at most once per
day. However, for non-"have" lookups (e.g. all except "nix-env
-qas"), negative lookups are ignored after one hour. This is to
ensure that you don't have to wait a day for an operation like
"nix-env -i" to start using new binaries in the cache.
Should probably make this configurable.
Note that this will only work if the client has a very recent Nix
version (post 15e1b2c223), otherwise the
--option flag will just be ignored.
Fixes #50.
This handles the chroot and build hook cases, which are easy.
Supporting the non-chroot-build case will require more work (hash
rewriting!).
Issue #21.
Output names are now appended to resulting GC symlinks, e.g. by
nix-build. For backwards compatibility, if the output is named "out",
nothing is appended. E.g. doing "nix-build -A foo" on a derivation
that produces outputs "out", "bin" and "dev" will produce symlinks
"./result", "./result-bin" and "./result-dev", respectively.
Channels can now advertise a binary cache by creating a file
<channel-url>/binary-cache-url. The channel unpacker puts these in
its "binary-caches" subdirectory. Thus, the URLS of the binary caches
for the channels added by root appear in
/nix/var/nix/profiles/per-user/eelco/channels/binary-caches/*. The
binary cache substituter reads these and adds them to the list of
binary caches.
The .nixpkg file format is extended to optionally include the URL of a
binary cache, which will be used in preference to the manifest URL
(which can be set to a non-existent value).
Querying all substitutable paths via "nix-env -qas" is potentially
hard on a server, since it involves sending thousands of HEAD
requests. So a binary cache must now have a meta-info file named
"nix-cache-info" that specifies whether the server wants this. It
also specifies the store prefix so that we don't send useless queries
to a binary cache for a different store prefix.
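A nix-cache-info file for such a cache might look like:
StoreDir: /nix/store
WantMassQuery: 1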
Since SubstitutionGoal::finished() in build.cc computes the hash
anyway, we can prevent the inefficiency of computing the hash twice by
letting the substituter tell Nix about the expected hash, which Nix
can then verify.
Commit 6a214f3e06 reused the NixOS
environment initialisation for nix-profile.sh, but this is
inappropriate on systems that don't have multi-user support enabled.
In "nix-env -qas", we don't need the substitute info, we just need to
know if it exists. This can be done using a HTTP HEAD request, which
saves bandwidth.
Note however that curl currently has a bug that prevents it from
reusing HTTP connections if HEAD requests return a 404:
https://sourceforge.net/tracker/?func=detail&aid=3542731&group_id=976&atid=100976
Without the patch attached to the issue, using HEAD is actually quite
a bit slower than GET.
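A sketch of such an existence check, with a hypothetical cache URL:
$ curl --silent --head http://cache.example.org/<hash>.narinfo | head -n 1
A 200 status means the .narinfo (and hence the substitute) exists.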
Getting substitute information using the binary cache substituter has
non-trivial latency overhead. A package or NixOS system configuration
can have hundreds of dependencies, and in the worst case (when the
local info cache is empty) we have to do a separate HTTP request for
each of these. If the ping time to the server is t, getting N info
files will take tN seconds; e.g., with a ping time of 0.1s to
nixos.org, sequentially downloading 1000 info files (a typical NixOS
config) will take at least 100 seconds.
To fix this problem, the binary cache substituter can now perform
requests in parallel. This required changing the substituter
interface to support a function querySubstitutablePathInfos() that
queries multiple paths at the same time, and rewriting queryMissing()
to take advantage of parallelism. (Due to local caching,
parallelising queryMissing() is sufficient for most use cases, since
it's almost always called before building a derivation and thus fills
the local info cache.)
For example, parallelism speeds up querying all 1056 paths in a
particular NixOS system configuration from 116s to 2.6s. It works so
well because the eccentricity of the top-level derivation in the
dependency graph is only 9. So we only need 10 round-trips (when
using an unlimited number of parallel connections) to get everything.
Currently we do a maximum of 150 parallel connections to the server.
Thus it's important that the binary cache server (e.g. nixos.org) has
a high connection limit. Alternatively we could use HTTP pipelining,
but WWW::Curl doesn't support it and libcurl has a hard-coded limit of
5 requests per pipeline.
Using WWW::Curl rather than running an external curl process for every
NAR info file halves the time it takes to get info thanks to libcurl's
support for persistent HTTP connections. (We save a roundtrip per
file.) But the real gain will come from using parallel and/or
pipelined requests.
XZ compresses significantly better than bzip2. Here are the
compression ratios and execution times (using 4 cores in parallel) on
my /var/run/current-system (3.1 GiB):
bzip2: total compressed size 849.56 MiB, 30.8% [2m08]
xz -6: total compressed size 641.84 MiB, 23.4% [6m53]
xz -7: total compressed size 621.82 MiB, 22.6% [7m19]
xz -8: total compressed size 599.33 MiB, 21.8% [7m18]
xz -9: total compressed size 588.18 MiB, 21.4% [7m40]
Note that compression takes much longer. More importantly, however,
decompression is much faster:
bzip2: 1m47.274s
xz -6: 0m55.446s
xz -7: 0m54.119s
xz -8: 0m52.388s
xz -9: 0m51.842s
The only downside to using -9 is that decompression takes a fair
amount (~65 MB) of memory.
Manifests are a huge pain, since users need to run nix-pull directly
or indirectly to obtain them. They tend to be large and lag behind
the available binaries; also, the downloaded manifests in
/nix/var/nix/manifest need to be in sync with the Nixpkgs sources. So
we want to get rid of them.
The idea of manifest-free operation works as follows. Nix is
configured with a set of URIs of binary caches, e.g.
http://nixos.org/binary-cache
Whenever Nix needs a store path X, it checks each binary cache for the
existence of a file <CACHE-URI>/<SHA-256 hash of X>.narinfo, e.g.
http://nixos.org/binary-cache/bi1gh9...ia17.narinfo
The .narinfo file contains the necessary information about the store
path that was formerly kept in the manifest, i.e., (relative) URI of
the compressed NAR, references, size, hash, etc. For example:
StorePath: /nix/store/xqp4l88cr9bxv01jinkz861mnc9p7qfi-neon-0.29.6
URL: 1bjxbg52l32wj8ww47sw9f4qz0r8n5vs71l93lcbgk2506v3cpfd.nar.bz2
CompressedHash: sha256:1bjxbg52l32wj8ww47sw9f4qz0r8n5vs71l93lcbgk2506v3cpfd
CompressedSize: 202542
NarHash: sha256:1af26536781e6134ab84201b33408759fc59b36cc5530f57c0663f67b588e15f
NarSize: 700440
References: 043zrsanirjh8nbc5vqpjn93hhrf107f-bash-4.2-p24 cj7a81wsm1ijwwpkks3725661h3263p5-glibc-2.13 ...
Deriver: 4idz1bgi58h3pazxr3akrw4fsr6zrf3r-neon-0.29.6.drv
System: x86_64-linux
Nix then knows that it needs to download
http://nixos.org/binary-cache/1bjxbg52l32wj8ww47sw9f4qz0r8n5vs71l93lcbgk2506v3cpfd.nar.bz2
to substitute the store path.
Note that the store directory is omitted from the References and
Deriver fields to save space, and that the URL can be relative to the
binary cache prefix.
This patch just makes nix-push create binary caches in this format.
The next step is to make a substituter that supports them.
For several platforms we don't currently have "native" Nix packages
(e.g. Mac OS X and FreeBSD). This provides the next best thing: a
tarball containing the closure of Nix, plus a simple script
"nix-finish-install" that initialises the Nix database, registers the
paths in the closure as valid, and runs "nix-env -i /path/to/nix" to
initialise the user profile.
The tarball must be unpacked in the root directory. It creates
/nix/store/... and /usr/bin/nix-finish-install. Typical installation
is as follows:
$ cd /
$ tar xvf /path/to/nix-1.1pre1234_abcdef-x86_64-linux.tar.bz2
$ nix-finish-install
(if necessary add ~/.nix-profile/etc/profile.d/nix.sh to the shell
login scripts)
After this, /usr/bin/nix-finish-install can be deleted, if desired.
The downside to the binary tarball is that it's pretty big (~55 MiB
for x86_64-linux).
Mandatory features are features that MUST be present in a derivation's
requiredSystemFeatures attribute. One application is performance
testing, where we have a dedicated machine to run performance tests
(and nothing else). Then we would add the label "perf" to the
machine's mandatory features and to the performance testing
derivations.
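A hypothetical line for such a machine in the build machines file
(fields: host, platform, SSH key, max jobs, speed factor, supported
features, mandatory features):
perf-machine x86_64-linux /root/.ssh/id_buildfarm 1 1 perf perf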
"nix-channel --add" now accepts a second argument: the channel name.
This allows channels to have a nicer name than (say) nixpkgs_unstable.
If no name is given, it defaults to the last component of the URL
(with "-unstable" or "-stable" removed).
Also, channels are now stored in a profile
(/nix/var/nix/profiles/per-user/$USER/channels). One advantage of
this is that it allows rollbacks (e.g. if "nix-channel --update" gives
an undesirable update).
Sometimes when doing "nix-build --run-env" you don't want all
dependencies to be built. For instance, if we want to do "--run-env"
on the "build" attribute in Hydra's release.nix (to get Hydra's build
environment), we don't want its "tarball" dependency to be built. So
we can do:
$ nix-build --run-env release.nix -A build --exclude 'hydra-tarball'
This will skip the dependency whose name matches the "hydra-tarball"
regular expression. The "--exclude" option can be repeated any number
of times.
This command builds or fetches all dependencies of the given
derivation, then starts a shell with the environment variables from
the derivation. This shell also sources $stdenv/setup to initialise
the environment further.
The current directory is not changed. Thus this is a convenient way
to reproduce a build environment in an existing working tree.
Existing environment variables are left untouched (unless the
derivation overrides them). As a special hack, the original value of
$PATH is appended to the $PATH produced by $stdenv/setup.
Example session:
$ nix-build --run-env '<nixpkgs>' -A xterm
(the dependencies of xterm are built/fetched...)
$ tar xf $src
$ ./configure
$ make
$ emacs
(... hack source ...)
$ make
$ ./xterm
directory. Previously in this situation we did add the Nix
expressions from the channel to allow installation from source, but
this doesn't work for binary-only channels and leads to confusing
error messages.
scripts.
* Include the version and architecture in the -I flag so that there is
at least a chance that a Nix binary built for one Perl version will
run on another version.
other simplifications.
* Use <nix/...> to locate the corepkgs. This allows them to be
overriden through $NIX_PATH.
* Use bash's pipefail option in the NAR builder so that we don't need
to create a temporary file.
closure to a given machine at the same time. This prevents the case
where multiple instances try to copy the same missing store path to
the target machine, which is very wasteful.
‘nix-store --export’.
* Add a Perl module that provides the functionality of
‘nix-copy-closure --to’. This is used by build-remote.pl so it no
longer needs to start a separate nix-copy-closure process. Also, it
uses the Perl API to do the export, so it doesn't need to start a
separate nix-store process either. As a result, nix-copy-closure
and build-remote.pl should no longer fail on very large closures due
to an "Argument list too long" error. (Note that having very many
dependencies in a single derivation can still fail because the
environment can become too large. Can't be helped though.)
read the manifest just to check the version and print the number of
paths. This makes nix-pull very fast for the cached cache (speeding
up nixos-rebuild without the ‘--no-pull’ or ‘--fast’ options).
Paths between angle brackets, e.g.
import <nixpkgs/pkgs/lib>
are resolved by looking them up relative to the elements listed in
the search path. This allows us to get rid of hacks like
import "${builtins.getEnv "NIXPKGS_ALL"}/pkgs/lib"
The search path can be specified through the ‘-I’ command-line flag
and through the colon-separated ‘NIX_PATH’ environment variable,
e.g.,
$ nix-build -I /etc/nixos ...
If a file is not found in the search path, an error message is
lazily thrown.
SQLite manifest cache. The DBI AutoCommit feature caused every
process to have an active transaction at all times, which could
indefinitely block processes wanting to update the manifest cache.
* Disable fsync() in the manifest cache because we don't need
integrity (the cache can always be recreated if it gets corrupted).
them into memory. This brings memory use down to (more or less)
O(1). For instance, on my test case, the maximum resident size of
download-using-manifests while filling the DB went from 142 MiB to
11 MiB.
This significantly speeds up the download-using-manifests
substituter, especially if manifests are very large. For instance,
one "nix-build -A geeqie" operation that updated four packages using
binary patches went from 18.5s to 1.6s. It also significantly
reduces memory use.
The cache is kept in /nix/var/nix/manifests/cache.sqlite. It's
updated automatically when manifests are added to or removed from
/nix/var/nix/manifests. It might be interesting to have nix-pull
store manifests directly in the DB, rather than storing them as
separate flat files, but then we would need a command line interface
to delete manifests from the DB.
`+' and `?' in filenames. This is very slow if /nix/store is very
large. (This is a quick hack - a cleaner solution would be to
bypass the shell entirely.)