Otherwise subsequent invocations of "--repair" will keep rebuilding
the path. This only happens if the path content differs between
builds (e.g. due to timestamps).
Don't pass --timeout / --max-silent-time to the remote builder.
Instead, let the local Nix process terminate the build if it exceeds a
timeout. The remote builder will be killed as a side-effect. This
gives better error reporting (since the timeout message from the
remote side wasn't properly propagated) and handles non-Nix problems
like SSH hangs.
I'm not sure if it has ever worked correctly. The line "lastWait =
after;" seems to mean that the timer was reset every time a build
produced log output.
Note that the timeout is now per build, as documented ("the maximum
number of seconds that a builder can run").
It is surprisingly impossible to check if a mountpoint is a bind mount
on Linux, and in my previous commit I forgot to check if /nix/store was
even a mountpoint at all. statvfs.f_flag is not populated with MS_BIND
(and even if it were, my check was wrong in the previous commit).
Luckily, the semantics of mount with MS_REMOUNT | MS_BIND make both
checks unnecessary: if /nix/store is not a mountpoint, then mount will
fail with EINVAL, and if /nix/store is not a bind-mount, then it will
not be made writable. Thus, if /nix/store is not a mountpoint, we fail
immediately (since we don't know how to make it writable), and if
/nix/store IS a mountpoint but not a bind-mount, we fail at first write
(see below for why we can't check and fail immediately).
Note that, due to what is IMO buggy behavior in Linux, calling mount
with MS_REMOUNT | MS_BIND on a non-bind readonly mount makes the
mountpoint appear writable in two places: In the sixth (but not the
10th!) column of mountinfo, and in the f_flags member of struct statfs.
All other syscalls behave as if the mount point were still readonly (at
least for Linux 3.9-rc1, but I don't think this has changed recently or
is expected to soon). My preferred semantics would be for MS_REMOUNT |
MS_BIND to fail on a non-bind mount, as it doesn't make sense to
remount a non-bind mount as a bind mount.
/nix/store could be a read-only bind mount even if it is / in its own filesystem, so checking the 4th field in mountinfo is insufficient.
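In code, the resulting logic looks roughly like this (an illustrative
sketch, not the actual Nix source; the function name is made up):

    #include <sys/mount.h>
    #include <cerrno>
    #include <stdexcept>

    /* Sketch: remount /nix/store read-write in place. */
    void remountStoreWritable()
    {
        if (mount(0, "/nix/store", 0, MS_REMOUNT | MS_BIND, 0) == -1) {
            /* EINVAL means /nix/store is not a mountpoint at all, so we
               don't know how to make it writable: fail immediately. */
            if (errno == EINVAL)
                throw std::runtime_error("/nix/store is not a mountpoint");
            throw std::runtime_error("remounting /nix/store failed");
        }
        /* If /nix/store is a mountpoint but not a bind mount, this call
           "succeeds" without making it writable; the first write fails. */
    }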
Signed-off-by: Shea Levy <shea@shealevy.com>
It turns out that in multi-user Nix, a builder may be able to do
ln /etc/shadow $out/foo
Afterwards, canonicalisePathMetaData() will be applied to $out/foo,
causing /etc/shadow's mode to be set to 444 (readable by everybody but
writable by nobody). That's obviously Very Bad.
Fortunately, this fails in NixOS's default configuration because
/nix/store is a bind mount, so "ln" will fail with "Invalid
cross-device link". It also fails if hard-link restrictions are
enabled, so a workaround is:
echo 1 > /proc/sys/fs/protected_hardlinks
The solution is to check that all files in $out are owned by the build
user. This means that innocuous operations like "ln
${pkgs.foo}/some-file $out/" are now rejected, but that already failed
in chroot builds anyway.
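A minimal sketch of such a check (illustrative only; the real check
lives in canonicalisePathMetaData(), and the helper name here is made
up):

    #include <sys/types.h>
    #include <sys/stat.h>
    #include <stdexcept>
    #include <string>

    /* Sketch: reject output files not owned by the build user, since
       they may be hard links to foreign files like /etc/shadow. */
    void checkOwnership(const std::string & path, uid_t buildUser)
    {
        struct stat st;
        if (lstat(path.c_str(), &st) == -1)
            throw std::runtime_error("cannot stat " + path);
        if (st.st_uid != buildUser)
            throw std::runtime_error("output " + path +
                " is not owned by the build user");
    }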
...where <XX> is the first two characters of the derivation.
Otherwise /nix/var/log/nix/drvs may become so large that we run into
all sorts of weird filesystem limits/inefficiencies. For instance,
ext3/ext4 filesystems will barf with "ext4_dx_add_entry:1551:
Directory index full!" once you hit a few million files.
So if a path is not garbage solely because it's reachable from a root
due to the gc-keep-outputs or gc-keep-derivations settings, ‘nix-store
-q --roots’ now shows that root.
But this time it's *obviously* correct! No more segfaults due to
infinite recursions for sure, etc.
Also, move directories to /nix/store/trash instead of renaming them to
/nix/store/bla-gc-<pid>. Then we can just delete /nix/store/trash at
the end.
This prevents zillions of derivations from being kept, and fixes an
infinite recursion in the garbage collector (due to an obscure cycle
that can occur with fixed-output derivations).
The integer constant ‘langVersion’ denotes the current language
version. It gets increased every time a language feature is
added/changed/removed. It's currently 1.
The string constant ‘nixVersion’ contains the current Nix version,
e.g. "1.2pre2980_9de6bc5".
If a derivation has multiple outputs, then we only want to download
those outputs that are actually needed. So if we do "nix-build -A
openssl.man", then only the "man" output should be downloaded.
Likewise if another package depends on ${openssl.man}.
The tricky part is that different derivations can depend on different
outputs of a given derivation, so we may need to restart the
corresponding derivation goal if that happens.
For example, given a derivation with outputs "out", "man" and "bin":
$ nix-build -A pkg
produces ./result pointing to the "out" output;
$ nix-build -A pkg.man
produces ./result-man pointing to the "man" output;
$ nix-build -A pkg.all
produces ./result, ./result-man and ./result-bin;
$ nix-build -A pkg.all -A pkg2
produces ./result, ./result-man, ./result-bin and ./result-2.
vfork() is just too weird. For instance, in this build:
http://hydra.nixos.org/build/3330487
the value fromHook.writeSide becomes corrupted in the parent, even
though the child only reads from it. At -O0 the problem goes away.
Probably the child is clobbering some spilled temporary variable.
If I get bored I may reimplement this using posix_spawn() instead.
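Purely as a sketch of that idea (not actual code; a real version would
also set up file actions with posix_spawn_file_actions_adddup2() to
redirect the builder's output):

    #include <spawn.h>
    #include <stdexcept>

    extern char * * environ;

    /* Speculative sketch: start the builder without sharing the
       parent's address space the way vfork() does. */
    pid_t startBuilder(const char * path, char * const argv[])
    {
        pid_t pid;
        if (posix_spawn(&pid, path, 0, 0, argv, environ) != 0)
            throw std::runtime_error("posix_spawn failed");
        return pid;
    }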
With this flag, if any valid derivation output is missing or corrupt,
it will be recreated by using a substitute if available, or by
rebuilding the derivation. The latter may use hash rewriting if
chroots are not available.
This operation allows fixing corrupted or accidentally deleted store
paths by redownloading them using substituters, if available.
Since the corrupted path cannot be replaced atomically, there is a
very small time window (one system call) during which neither the old
(corrupted) nor the new (repaired) contents are available. So
repairing should be used with some care on critical packages like
Glibc.
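Schematically, with hypothetical helper and path names, the swap looks
like:

    #include <cstdio>  // std::rename

    /* Hypothetical sketch: swap the repaired copy into place.  Between
       the two renames -- one system call -- the path doesn't exist. */
    void swapInRepairedPath(const char * storePath,
                            const char * repairedTmp, const char * oldTmp)
    {
        std::rename(storePath, oldTmp);      /* corrupted contents gone */
        std::rename(repairedTmp, storePath); /* repaired contents appear */
    }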
Using the immutable bit is problematic, especially in conjunction with
store optimisation. For instance, if the garbage collector deletes a
file, it has to clear its immutable bit, but if the file has
additional hard links, we can't set the bit afterwards because we
don't know the remaining paths.
So now that we support having the entire Nix store as a read-only
mount, we may as well drop the immutable bit. Unfortunately, we have
to keep the code to clear the immutable bit for backwards
compatibility.
It turns out that the immutable bit doesn't work all that well. A
better way is to make the entire Nix store a read-only bind mount,
i.e. by doing
$ mount --bind /nix/store /nix/store
$ mount -o remount,ro,bind /nix/store
(This would typically be done in an early boot script, before anything
from /nix/store is used.)
Since Nix needs to be able to write to the Nix store, it now detects
if /nix/store is a read-only bind mount and then makes it writable in
a private mount namespace.
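In outline (a sketch assuming Nix runs with CAP_SYS_ADMIN; not the
exact code):

    #include <sched.h>
    #include <sys/mount.h>
    #include <stdexcept>

    /* Sketch: get a writable view of the read-only /nix/store that is
       visible only to this process and its children. */
    void makeStoreWritable()
    {
        if (unshare(CLONE_NEWNS) == -1)
            throw std::runtime_error("cannot set up private mount namespace");
        if (mount(0, "/nix/store", 0, MS_REMOUNT | MS_BIND, 0) == -1)
            throw std::runtime_error("cannot remount /nix/store writable");
    }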
The outputs of a derivation can refer to each other (even though they
cannot have cycles), so they have to be deleted in the right order.
http://hydra.nixos.org/build/3026118
If the options gc-keep-outputs and gc-keep-derivations are both
enabled, you can get a cycle in the liveness graph. There was a hack
to handle this, but it didn't work with multiple-output derivations,
causing the garbage collector to fail with errors like ‘error: cannot
delete path `...' because it is in use by `...'’. The garbage
collector now handles strongly connected components in the liveness
graph as a unit and decides whether to delete all or none of the paths
in an SCC.
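Schematically (an illustrative sketch, not the collector's actual
code), the components can be obtained with Tarjan's algorithm and then
deleted or kept as units:

    #include <algorithm>
    #include <functional>
    #include <map>
    #include <set>
    #include <string>
    #include <vector>

    typedef std::string Path;
    typedef std::map<Path, std::set<Path> > Graph; /* p -> paths p keeps alive */

    /* Tarjan's algorithm: returns the SCCs of the liveness graph in
       reverse topological order (sinks first). */
    std::vector<std::vector<Path> > computeSCCs(const Graph & g)
    {
        std::map<Path, int> index, low;
        std::vector<Path> stack;
        std::set<Path> onStack;
        std::vector<std::vector<Path> > res;
        int next = 0;
        std::function<void(const Path &)> visit = [&](const Path & v) {
            index[v] = low[v] = next++;
            stack.push_back(v); onStack.insert(v);
            Graph::const_iterator i = g.find(v);
            if (i != g.end())
                for (const Path & w : i->second) {
                    if (!index.count(w)) { visit(w); low[v] = std::min(low[v], low[w]); }
                    else if (onStack.count(w)) low[v] = std::min(low[v], index[w]);
                }
            if (low[v] == index[v]) {   /* v is the root of an SCC */
                std::vector<Path> scc;
                Path w;
                do {
                    w = stack.back(); stack.pop_back(); onStack.erase(w);
                    scc.push_back(w);
                } while (w != v);
                res.push_back(scc);
            }
        };
        for (Graph::const_iterator i = g.begin(); i != g.end(); ++i)
            if (!index.count(i->first)) visit(i->first);
        return res;
    }

    /* Each SCC is then deleted in its entirety, or kept in its
       entirety if any member is reachable from a root. */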
Note that this will only work if the client has a very recent Nix
version (post 15e1b2c223), otherwise the
--option flag will just be ignored.
Fixes #50.
This handles the chroot and build hook cases, which are easy.
Supporting the non-chroot-build case will require more work (hash
rewriting!).
Issue #21.
"config.h" must be included first, because otherwise the compiler
might not see the right value of _FILE_OFFSET_BITS. We've had this
before; see 705868a8a9. In this case,
GCC would compute a different address for ‘settings.useSubstitutes’ in
misc.cc because of the off_t in ‘settings’.
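The hazard, schematically (illustrative declarations, not the actual
ones):

    #include <sys/types.h>

    /* settings.hh -- sketch */
    struct Settings {
        off_t maxLogSize;    /* 4 or 8 bytes depending on _FILE_OFFSET_BITS */
        bool useSubstitutes; /* its offset shifts with the size of off_t */
    };

    /* If one translation unit pulls in system headers before "config.h"
       (which defines _FILE_OFFSET_BITS=64), it sees a 32-bit off_t while
       the others see a 64-bit one, so &settings.useSubstitutes differs
       between object files: silent memory corruption. */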
Reverts 3854fc9b42.
http://hydra.nixos.org/build/3016700
This is required on systems running systemd, which mounts filesystems
as "shared"
subtrees. Changes to shared trees in a private mount namespace are
propagated to the outside world, which is bad.
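Concretely, something like this in the freshly unshared namespace
(sketch, not the exact code):

    #include <sys/mount.h>
    #include <stdexcept>

    /* Sketch: recursively mark the tree private so that mount changes
       made for the build don't propagate to systemd's shared subtrees. */
    void dontLeakMounts()
    {
        if (mount(0, "/", 0, MS_PRIVATE | MS_REC, 0) == -1)
            throw std::runtime_error("cannot make mounts private");
    }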
This is a problem because one process may set the immutable bit before
the second process has created its link.
This addresses random Hydra failures such as:
error: cannot rename `/nix/store/.tmp-link-17397-1804289383' to
`/nix/store/rsvzm574rlfip3830ac7kmaa028bzl6h-nixos-0.1pre-git/upstart-interface-version':
Operation not permitted
Since SubstitutionGoal::finished() in build.cc computes the hash
anyway, we can prevent the inefficiency of computing the hash twice by
letting the substituter tell Nix about the expected hash, which Nix
can then verify.
Incremental optimisation requires creating links in /nix/store/.links
to all files in the store. However, this means that if we delete a
store path, no files are actually deleted because links in
/nix/store/.links still exists. So we need to check /nix/store/.links
for files with a link count of 1 and delete them.
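Roughly (sketch, not the actual implementation):

    #include <dirent.h>
    #include <string>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Sketch: delete entries in /nix/store/.links whose link count has
       dropped to 1, i.e. no store path references them any more. */
    void deleteUnusedLinks()
    {
        std::string dirName = "/nix/store/.links";
        DIR * dir = opendir(dirName.c_str());
        if (!dir) return;
        struct dirent * ent;
        while ((ent = readdir(dir))) {
            std::string name = ent->d_name;
            if (name == "." || name == "..") continue;
            std::string path = dirName + "/" + name;
            struct stat st;
            if (lstat(path.c_str(), &st) == 0 && st.st_nlink == 1)
                unlink(path.c_str()); /* only the .links entry was left */
        }
        closedir(dir);
    }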
optimiseStore() now creates persistent, content-addressed hard links
in /nix/store/.links. For instance, if it encounters a file P with
hash H, it will create a hard link
P' = /nix/store/.links/<H>
to P if P' doesn't already exist; if P' does exist, then P is replaced by a
hard link to P'. This is better than the previous in-memory map,
because it had the tendency to unnecessarily replace hard links with a
hard link to whatever happened to be the first file with a given hash
it encountered. It also allows on-the-fly, incremental optimisation.
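The core of the idea, as a sketch (the real optimiseStore() also
compares file contents, handles permissions, and so on):

    #include <cerrno>
    #include <cstdio>   // std::rename
    #include <stdexcept>
    #include <string>
    #include <unistd.h> // link

    /* Sketch: deduplicate 'path', whose content hash is 'hash'. */
    void optimisePath(const std::string & path, const std::string & hash)
    {
        std::string linkPath = "/nix/store/.links/" + hash;

        /* First file seen with this hash: register it as canonical. */
        if (link(path.c_str(), linkPath.c_str()) == 0) return;
        if (errno != EEXIST)
            throw std::runtime_error("cannot link " + linkPath);

        /* Otherwise replace 'path' with a hard link to the canonical
           copy, via a temporary name so 'path' never disappears. */
        std::string tmp = path + ".tmp-optimise";
        if (link(linkPath.c_str(), tmp.c_str()) == -1)
            throw std::runtime_error("cannot link " + tmp);
        if (std::rename(tmp.c_str(), path.c_str()) == -1)
            throw std::runtime_error("cannot rename " + tmp);
    }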
To implement binary caches efficiently, Hydra needs to be able to map
the hash part of a store path (e.g. "gbg...zr7") to the full store
path (e.g. "/nix/store/gbg...kzr7-subversion-1.7.5"). (The binary
cache mechanism uses hash parts as a key for looking up store paths to
ensure privacy.) However, doing a search in the Nix store for
/nix/store/<hash>* is expensive since it requires reading the entire
directory. queryPathFromHashPart() prevents this by doing a cheap
database lookup.
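One cheap way to do this, sketched here under the assumption of an
index on ValidPaths.path (not necessarily the exact query Nix uses):
the first path lexicographically >= the prefix either starts with the
prefix, or no valid path does.

    #include <sqlite3.h>
    #include <string>

    /* Sketch: map a hash part to the full store path with one indexed
       lookup instead of scanning the /nix/store directory. */
    std::string queryPathFromHashPart(sqlite3 * db, const std::string & hashPart)
    {
        std::string prefix = "/nix/store/" + hashPart;
        sqlite3_stmt * stmt = 0;
        std::string res;
        if (sqlite3_prepare_v2(db,
                "select path from ValidPaths where path >= ? order by path limit 1;",
                -1, &stmt, 0) != SQLITE_OK)
            return res;
        sqlite3_bind_text(stmt, 1, prefix.c_str(), -1, SQLITE_TRANSIENT);
        if (sqlite3_step(stmt) == SQLITE_ROW) {
            const char * p = (const char *) sqlite3_column_text(stmt, 0);
            if (p && std::string(p).compare(0, prefix.size(), prefix) == 0)
                res = p; /* first path >= prefix really has the prefix */
        }
        sqlite3_finalize(stmt);
        return res;
    }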
queryValidPaths() combines multiple calls to isValidPath() in one.
This matters when using the Nix daemon because it reduces latency.
For instance, on "nix-env -qas \*" it reduces execution time from 5.7s
to 4.7s (which is indistinguishable from the non-daemon case).
Instead make a single call to querySubstitutablePathInfo() per
derivation output. This is faster and prevents having to implement
the "have" function in the binary cache substituter.
Getting substitute information using the binary cache substituter has
non-trivial latency overhead. A package or NixOS system configuration
can have hundreds of dependencies, and in the worst case (when the
local info cache is empty) we have to do a separate HTTP request for
each of these. If the ping time to the server is t, getting N info
files will take tN seconds; e.g., with a ping time of 0.1s to
nixos.org, sequentially downloading 1000 info files (a typical NixOS
config) will take at least 100 seconds.
To fix this problem, the binary cache substituter can now perform
requests in parallel. This required changing the substituter
interface to support a function querySubstitutablePathInfos() that
queries multiple paths at the same time, and rewriting queryMissing()
to take advantage of parallelism. (Due to local caching,
parallelising queryMissing() is sufficient for most use cases, since
it's almost always called before building a derivation and thus fills
the local info cache.)
For example, parallelism speeds up querying all 1056 paths in a
particular NixOS system configuration from 116s to 2.6s. It works so
well because the eccentricity of the top-level derivation in the
dependency graph is only 9. So we only need 10 round-trips (when
using an unlimited number of parallel connections) to get everything.
Currently we do a maximum of 150 parallel connections to the server.
Thus it's important that the binary cache server (e.g. nixos.org) has
a high connection limit. Alternatively we could use HTTP pipelining,
but WWW::Curl doesn't support it and libcurl has a hard-coded limit of
5 requests per pipeline.
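The substituter itself drives WWW::Curl from Perl, but in libcurl
terms the parallelism looks roughly like this (sketch; response
handling omitted):

    #include <curl/curl.h>
    #include <string>
    #include <vector>

    /* Sketch: fetch many .narinfo URLs concurrently using the multi
       interface instead of one blocking request at a time. */
    void fetchAll(const std::vector<std::string> & urls)
    {
        CURLM * multi = curl_multi_init();
        std::vector<CURL *> handles;
        for (size_t i = 0; i < urls.size(); ++i) {
            CURL * h = curl_easy_init();
            curl_easy_setopt(h, CURLOPT_URL, urls[i].c_str());
            curl_multi_add_handle(multi, h);
            handles.push_back(h);
        }
        int running = 0;
        do {
            curl_multi_perform(multi, &running);
            curl_multi_wait(multi, 0, 0, 1000, 0); /* block until activity */
        } while (running > 0);
        for (size_t i = 0; i < handles.size(); ++i) {
            curl_multi_remove_handle(multi, handles[i]);
            curl_easy_cleanup(handles[i]);
        }
        curl_multi_cleanup(multi);
    }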
In a private PID namespace, processes have PIDs that are separate from
the rest of the system. The initial child gets PID 1. Processes in
the chroot cannot see processes outside of the chroot. This improves
isolation between builds. However, processes on the outside can see
processes in the chroot and send signals to them (if they have
appropriate rights).
Since the builder gets PID 1, it serves as the reaper for zombies in
the chroot. This might turn out to be a problem. In that case we'll
need to have a small PID 1 process that sits in a loop calling wait().
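Such a PID 1 could be as small as this (sketch):

    #include <sys/wait.h>
    #include <unistd.h>

    /* Sketch: a minimal init for the PID namespace.  It forks the real
       builder and reaps every zombie until the builder itself exits. */
    int main(int argc, char * * argv)
    {
        pid_t builder = fork();
        if (builder == 0) {
            execv(argv[1], argv + 1); /* the real builder */
            _exit(1);
        }
        int status = 0;
        pid_t pid;
        while ((pid = wait(&status)) != -1)
            if (pid == builder) break; /* namespace dies with PID 1 */
        return WIFEXITED(status) ? WEXITSTATUS(status) : 1;
    }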
In chroot builds, set the host name to "localhost" and the domain name
to "(none)" (the latter being the kernel's default). This improves
determinism a bit further.
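In terms of the syscalls involved, roughly (sketch, not the exact
code):

    #include <sched.h>
    #include <stdexcept>
    #include <unistd.h>

    /* Sketch: fixed, deterministic host and domain names, visible only
       inside the builder's own UTS namespace. */
    void setDeterministicNames()
    {
        if (unshare(CLONE_NEWUTS) == -1)
            throw std::runtime_error("cannot unshare UTS namespace");
        if (sethostname("localhost", 9) == -1 ||
            setdomainname("(none)", 6) == -1)
            throw std::runtime_error("cannot set host/domain name");
    }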
P.S. I have no idea what UTS stands for.
This improves isolation a bit further, and it's just one extra flag in
the unshare() call.
P.S. It would be very cool to use CLONE_NEWPID (to put the builder in
a private PID namespace) as well, but that's slightly more risky since
having a builder start as PID 1 may cause problems.
On Linux it's possible to run a process in its own network namespace,
meaning that it gets its own set of network interfaces, disjoint from
the rest of the system. We use this to completely remove network
access to chroot builds, except that they get a private loopback
interface. This means that:
- Builders cannot connect to the outside network or to other processes
on the same machine, except processes within the same build.
- Vice versa, other processes cannot connect to processes in a chroot
build, and open ports/connections do not show up in "netstat".
- If two concurrent builders try to listen on the same port (e.g. as
part of a test), they no longer conflict with each other.
This was inspired by the "PrivateNetwork" flag in systemd.
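The mechanism, roughly (sketch; this mirrors the usual way of bringing
up the namespace-local loopback):

    #include <cstring>
    #include <net/if.h>
    #include <sched.h>
    #include <stdexcept>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Sketch: cut the build off from the network, keeping only a
       private loopback interface. */
    void privatiseNetwork()
    {
        if (unshare(CLONE_NEWNET) == -1)
            throw std::runtime_error("cannot unshare network namespace");
        /* The fresh namespace contains only a downed 'lo'; bring it up. */
        int fd = socket(PF_INET, SOCK_DGRAM, IPPROTO_IP);
        struct ifreq ifr;
        std::memset(&ifr, 0, sizeof ifr);
        std::strcpy(ifr.ifr_name, "lo");
        ifr.ifr_flags = IFF_UP | IFF_LOOPBACK | IFF_RUNNING;
        if (fd == -1 || ioctl(fd, SIOCSIFFLAGS, &ifr) == -1)
            throw std::runtime_error("cannot bring up loopback");
        close(fd);
    }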
We can't open a SQLite database if the disk is full. Since this
prevents the garbage collector from running when it's most needed, we
reserve some dummy space that we can free just before doing a garbage
collection. This actually revives some old code from the Berkeley DB
days.
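For example (sketch; the path and size here are illustrative, not
Nix's actual choices):

    #include <fcntl.h>
    #include <string>
    #include <unistd.h>

    /* Sketch: pre-allocate a dummy file that the garbage collector can
       delete to guarantee SQLite enough free space to start. */
    void reserveSpace(const std::string & path, off_t size)
    {
        int fd = open(path.c_str(), O_WRONLY | O_CREAT, 0600);
        if (fd == -1) return;
        posix_fallocate(fd, 0, size); /* really allocate, don't make a hole */
        close(fd);
        /* ...and unlink(path) just before running the collector. */
    }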
Fixes #27.
There is a race condition when doing parallel builds with chroots and
the immutable bit enabled. One process may call makeImmutable()
before the other has called link(), in which case link() will fail
with EPERM. We could retry or wrap the operation in a lock, but since
this condition is rare and I'm lazy, we just use the existing copy
fallback.
Fixes #9.
This should fix rare Hydra errors of the form:
error: symlinking `/nix/var/nix/gcroots/per-user/hydra/hydra-roots/7sfhs5fdmjxm8sqgcpd0pgcsmz1kq0l0-nixos-iso-0.1pre33785-33795' to `/nix/store/7sfhs5fdmjxm8sqgcpd0pgcsmz1kq0l0-nixos-iso-0.1pre33785-33795': File exists
Setting the UNAME26 personality causes "uname" to return "2.6.x",
regardless of the kernel version. This improves determinism in
a few misbehaved packages.
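The mechanism (sketch; UNAME26 may be missing from older headers,
hence the fallback define):

    #include <sys/personality.h>

    #ifndef UNAME26
    #define UNAME26 0x0020000
    #endif

    /* Sketch: make uname() report a 2.6.x kernel inside the build. */
    void impersonateKernel26()
    {
        personality(PER_LINUX | UNAME26);
    }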
Make the garbage collector more concurrent by deleting valid paths
outside the region where we're holding the global GC lock. This
should greatly reduce the time during which new builds are blocked,
since the deletion accounts for the vast majority of the time spent in
the GC.
To ensure that this is safe, the valid paths are invalidated and
renamed to some arbitrary path while we're holding the lock. This
ensures that when we finally delete the path, it's not a (newly)
valid or locked path.
Nix now requires SQLite and bzip2 to be pre-installed. SQLite is
detected using pkg-config. We required DBD::SQLite anyway, so
depending on SQLite is not a big problem.
The --with-bzip2, --with-openssl and --with-sqlite flags are gone.
By moving the destructor object to libstore.so, it's also run when
download-using-manifests and nix-prefetch-url exit. This prevents
them from cluttering /nix/var/nix/temproots with stale files.
Not all SQLite builds have the function sqlite3_table_column_metadata.
We were only using it in a schema upgrade check for compatibility with
databases that were probably never seen in the wild. So remove it.
The variable ‘useChroot’ was not initialised properly. This caused
random failures if using the build hook. Seen on Mac OS X 10.7 with Clang.
Thanks to KolibriFX for finding this :-)
Chroots are initialised by hard-linking inputs from the Nix store to
the chroot. This doesn't work if the input has its immutable bit set,
because it's forbidden to create hard links to immutable files. So
temporarily clear the immutable bit when creating and destroying the
chroot.
Note that making regular files in the Nix store immutable isn't very
reliable, since the bit can easily become cleared: for instance, if we
run the garbage collector after running ‘nix-store --optimise’. So
maybe we should only make directories immutable.
I was bitten one time too many by Python modifying the Nix store by
creating *.pyc files when run as root. On Linux, we can prevent this
by setting the immutable bit on files and directories (as in ‘chattr
+i’). This isn't supported by all filesystems, so it's not an error
if setting the bit fails. The immutable bit is cleared by the garbage
collector before deleting a path. The only tricky aspect is in
optimiseStore(), since it's forbidden to create hard links to an
immutable file. Thus optimiseStore() temporarily clears the immutable
bit before creating the link.
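The bit is manipulated with the same ioctl that chattr uses, roughly
(sketch, not the exact code):

    #include <fcntl.h>
    #include <linux/fs.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    /* Sketch: set or clear the immutable bit, like 'chattr +i/-i'.
       Errors are ignored since not all filesystems support the flag. */
    void setImmutable(const char * path, bool immutable)
    {
        int fd = open(path, O_RDONLY);
        if (fd == -1) return;
        int flags;
        if (ioctl(fd, FS_IOC_GETFLAGS, &flags) == 0) {
            if (immutable) flags |= FS_IMMUTABLE_FL;
            else flags &= ~FS_IMMUTABLE_FL;
            ioctl(fd, FS_IOC_SETFLAGS, &flags);
        }
        close(fd);
    }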
* In the garbage collector, delete invalid paths before deleting
unreachable paths. This matters when using --max-freed etc.:
unreachable paths could become reachable again, so it's nicer to
keep them if there is "real" garbage to be deleted. Also, don't use
readDirectory() but read the Nix store and delete invalid paths in
parallel. This reduces GC latency on very large Nix stores.
* Buffer the HashSink. This speeds up hashing a bit because it
prevents lots of calls to the hash update functions (e.g. nix-hash
went from 9.3s to 8.7s of user time on the closure of my
/var/run/current-system).
* If the connection is unexpectedly closed by the client or the
daemon (which is an error), print a nicer error message than
"Connection reset by peer" or "broken pipe".
* In the daemon, log errors that occur during request parameter
processing.
* Add a Perl API for ‘nix-store --export’.
* Add a Perl module that provides the functionality of
‘nix-copy-closure --to’. This is used by build-remote.pl so it no
longer needs to start a separate nix-copy-closure process. Also, it
uses the Perl API to do the export, so it doesn't need to start a
separate nix-store process either. As a result, nix-copy-closure
and build-remote.pl should no longer fail on very large closures due
to an "Argument list too long" error. (Note that having very many
dependencies in a single derivation can still fail because the
environment can become too large. Can't be helped though.)
* Process the Nix configuration file in
libstore so that the Perl bindings can use it as well. It's vital
that the Perl bindings use the configuration file, because otherwise
nix-copy-closure will fail with a ‘database locked’ message if the
value of ‘use-sqlite-wal’ is changed from the default.