Thus, instead of ‘--option <name> <value>’, you can write ‘--<name>
<value>’. So
--option http-connections 100
becomes
--http-connections 100
Apart from brevity, the difference is that it's not an error to set a
non-existent option via --option, but unrecognized arguments are
fatal.
Boolean options have special treatment: they're mapped to the
argument-less flags ‘--<name>’ and ‘--no-<name>’. E.g.
--option auto-optimise-store false
becomes
--no-auto-optimise-store
This caused "nix-store --import" to compute an incorrect hash on NARs
that don't fit in an unsigned int. The import would succeed, but
"nix-store --verify-path" or subsequent exports would detect an
incorrect hash.
A deeper issue is that the export/import format does not contain a
hash, so we can't detect such issues early.
Also, I learned that -Wall does not warn about this.
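A minimal illustration of the underlying truncation (not the actual Nix
code; the variable names are made up):
    // A NAR size over 4 GiB silently wraps around when stored in a
    // 32-bit unsigned int, so any hash computed over that many bytes
    // is wrong. -Wall does not warn about this implicit conversion.
    unsigned long long narSize = 5ULL * 1024 * 1024 * 1024;  // 5 GiB
    unsigned int truncated = narSize;                         // wraps to 1 GiB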
The typical use is to inherit Config and add Setting<T> members:
class MyClass : private Config
{
    Setting<int> foo{this, 123, "foo", "the number of foos to use"};
    Setting<std::string> bar{this, "blabla", "bar", "the name of the bar"};
    MyClass() : Config(readConfigFile("/etc/my-app.conf"))
    {
        std::cout << foo << "\n"; // will print 123 unless overridden
    }
};
Currently, this is used by Store and its subclasses for store
parameters. You now get a warning if you specify a non-existent store
parameter in a store URI.
This is useless because the client also caches path info, and can
cause problems for long-running clients like hydra-queue-runner
(i.e. it may return cached info about paths that have been
garbage-collected).
You can now set the store parameter "text-compression=br" to compress
textual files in the binary cache (i.e. narinfo and logs) using
Brotli. This sets the Content-Encoding header; the extension of
compressed files is unchanged.
You can separately specify the compression of log files using
"log-compression=br". This is useful when you don't want to compress
narinfo files for backward compatibility.
Build logs on cache.nixos.org are compressed using Brotli (since this
allows them to be decompressed automatically by Chrome and Firefox),
so it's handy if "nix log" can decompress them.
* Unify SSH code in SSHStore and LegacySSHStore.
* Fix a race starting the SSH master. We now wait synchronously for
the SSH master to finish starting. This prevents the SSH clients
from starting their own connections.
* Don't use a master if max-connections == 1.
* Add a "max-connections" store parameter.
* Add a "compress" store parameter.
This allows <nix/fetchurl.nix> to fetch private Git/Mercurial
repositories, e.g.
import <nix/fetchurl.nix> {
  url = https://edolstra@bitbucket.org/edolstra/my-private-repo/get/80a14018daed.tar.bz2;
  sha256 = "1mgqzn7biqkq3hf2697b0jc4wabkqhmzq2srdymjfa6sb9zb6qs7";
}
where /etc/nix/netrc contains:
machine bitbucket.org
login edolstra
password blabla...
This works even when sandboxing is enabled.
To do: add unpacking support (i.e. fetchzip functionality).
This adds support for s3:// URIs in all places where Nix allows URIs,
e.g. in builtins.fetchurl, builtins.fetchTarball, <nix/fetchurl.nix>
and NIX_PATH. It allows fetching resources from private S3 buckets,
using credentials obtained from the standard places (i.e. AWS_*
environment variables, ~/.aws/credentials and the EC2 metadata
server). This may not be super-useful in general, but since we already
depend on aws-sdk-cpp, it's a cheap feature to add.
Because config.h can #define things like _FILE_OFFSET_BITS=64 and not
every compilation unit includes config.h, we currently compile half of
Nix with _FILE_OFFSET_BITS=64 and the other half with _FILE_OFFSET_BITS
unset. This causes major havoc with the Settings class on e.g. 32-bit ARM,
where different compilation units disagree about the struct layout.
E.g.:
diff --git a/src/libstore/globals.cc b/src/libstore/globals.cc
@@ -166,6 +166,8 @@ void Settings::update()
_get(useSubstitutes, "build-use-substitutes");
+ fprintf(stderr, "at Settings::update(): &useSubstitutes = %p\n", &nix::settings.useSubstitutes);
_get(buildUsersGroup, "build-users-group");
diff --git a/src/libstore/remote-store.cc b/src/libstore/remote-store.cc
+++ b/src/libstore/remote-store.cc
@@ -138,6 +138,8 @@ void RemoteStore::initConnection(Connection & conn)
void RemoteStore::setOptions(Connection & conn)
{
+ fprintf(stderr, "at RemoteStore::setOptions(): &useSubstitutes = %p\n", &nix::settings.useSubstitutes);
conn.to << wopSetOptions
Gave me:
at Settings::update(): &useSubstitutes = 0xb6e5c5cb
at RemoteStore::setOptions(): &useSubstitutes = 0xb6e5c5c7
That was not a fun one to debug!
This closes a long-standing bug that allowed builds to hang Nix
indefinitely (regardless of timeouts) simply by doing
exec > /dev/null 2>&1; while true; do true; done
Now, on EOF, we just send SIGKILL to the child to make sure it's
really gone.
This allows other threads to install callbacks that run in a regular,
non-signal context. In particular, we can use this to signal the
downloader thread to quit.
Closes #1183.
It failed with
AWS error uploading ‘6gaxphsyhg66mz0a00qghf9nqf7majs2.ls.xz’: Unable to parse ExceptionName: MissingContentLength Message: You must provide the Content-Length HTTP header.
possibly because the istringstream_nocopy introduced in
0d2ebb4373 doesn't supply the seek
method that the AWS library expects. So bring back the old version,
but only for S3BinaryCacheStore.
We can now write
throw Error("file '%s' not found", path);
instead of
throw Error(format("file '%s' not found") % path);
and similarly
printError("file '%s' not found", path);
instead of
printMsg(lvlError, format("file '%s' not found") % path);
We were passing "p=$PATH" rather than "p=$PATH;", resulting in some
invalid shell code.
Also, construct a separate environment for the child rather than
overwriting the parent's.
The fact that queryPathInfo() is synchronous meant that we needed a
thread for every concurrent binary cache lookup, even though they end
up being handled by the same download thread. Requiring hundreds of
threads is not a good idea. So now there is an asynchronous version of
queryPathInfo() that takes a callback function to process the
result. Similarly, enqueueDownload() now takes a callback rather than
returning a future.
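A rough sketch of the callback style (simplified; the exact signatures
in the header may differ from this):
    store->queryPathInfo(storePath,
        [](ref<ValidPathInfo> info) {
            // success callback, invoked from the download thread
            std::cout << info->narSize << "\n";
        },
        [](std::exception_ptr exc) {
            // failure callback; rethrow or log the exception
        });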
Thus, a command like
nix path-info --store https://cache.nixos.org/ -r /nix/store/slljrzwmpygy1daay14kjszsr9xix063-nixos-16.09beta231.dccf8c5
that returns 4941 paths now takes 1.87s using only 2 threads (the main
thread and the downloader thread). (This is with a prewarmed
CloudFront.)
This fixes instantiation of pythonPackages.pytest, which produces a
directory with reduced permissions during one of its tests, leading to
a Nix error like:
error: opening directory ‘/tmp/nix-build-python2.7-pytest-2.9.2.drv-0/pytest-of-user/pytest-0/testdir/test_cache_failure_warns0/.cache’: Permission denied
E.g.
$ nix-build -I nixpkgs=git://github.com/NixOS/nixpkgs '<nixpkgs>' -A hello
This is not extremely useful yet because you can't specify a
branch/revision.
Caching path info is generally useful. For instance, it speeds up "nix
path-info -rS /run/current-system" (i.e. showing the closure sizes of
all paths in the closure of the current system) from 5.6s to 0.15s.
This also eliminates some APIs like Store::queryDeriver() and
Store::queryReferences().
The flag remembering whether an Interrupted exception was thrown is
now thread-local. Thus, all threads will (eventually) throw
Interrupted. Previously, one thread would throw Interrupted, and then
the other threads wouldn't see that they were supposed to quit.
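A minimal sketch of the idea (the names here are illustrative, not the
actual ones):
    std::atomic<bool> interruptRequested{false};  // set by the SIGINT handler
    thread_local bool interruptThrown = false;    // per thread: did we throw yet?

    void checkInterrupt()
    {
        if (interruptRequested && !interruptThrown) {
            interruptThrown = true;
            throw Interrupted("interrupted by the user");
        }
    }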
Unlike "nix-store --verify-path", this command verifies signatures in
addition to store path contents, is multi-threaded (especially useful
when verifying binary caches), and has a progress indicator.
Example use:
$ nix verify-paths --store https://cache.nixos.org -r $(type -p thunderbird)
...
[17/132 checked] checking ‘/nix/store/rawakphadqrqxr6zri2rmnxh03gqkrl3-autogen-5.18.6’
This allows a RemoteStore object to be used safely from multiple
threads concurrently. It will make multiple daemon connections if
necessary.
Note: pool.hh and sync.hh have been copied from the Hydra source tree.
Also, move a few free-standing functions into StoreAPI and Derivation.
Also, introduce a non-nullable smart pointer, ref<T>, which is just a
wrapper around std::shared_ptr ensuring that the pointer is never
null. (For reference-counted values, this is better than passing a
"T&", because the latter doesn't maintain the refcount. Usually, the
caller will have a shared_ptr keeping the value alive, but that's not
always the case, e.g., when passing a reference to a std::thread via
std::bind.)
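A stripped-down sketch of what ref<T> looks like (the real class has a
few more conversions and constructors):
    template<typename T>
    class ref
    {
        std::shared_ptr<T> p;
    public:
        explicit ref(const std::shared_ptr<T> & p) : p(p)
        {
            // the whole point: a ref<T> can never hold a null pointer
            if (!p) throw std::invalid_argument("null pointer cast to ref");
        }
        T * operator ->() const { return &*p; }
        T & operator *() const { return *p; }
    };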
edolstra:
“…since callers of readDirectory have to handle the possibility of
DT_UNKNOWN anyway, and we don't want to do a stat call for every
directory entry unless it's really needed.”
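The usual fallback a caller does, sketched (the `entry' and `dir' names
here are illustrative):
    // If the filesystem doesn't report the type (DT_UNKNOWN), fall back
    // to an explicit lstat() for this entry only.
    if (entry.type == DT_UNKNOWN) {
        struct stat st;
        if (lstat((dir + "/" + entry.name).c_str(), &st) == 0)
            entry.type = S_ISDIR(st.st_mode) ? DT_DIR
                       : S_ISLNK(st.st_mode) ? DT_LNK
                       : DT_REG;   // simplification
    }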
Previously, to build a derivation remotely, we had to copy the entire
closure of the .drv file to the remote machine, even though we only
need the top-level derivation. This is very wasteful: the closure can
contain thousands of store paths, and in some Hydra use cases, include
source paths that are very large (e.g. Git/Mercurial checkouts).
So now there is a new operation, StoreAPI::buildDerivation(), that
performs a build from an in-memory representation of a derivation
(BasicDerivation) rather than from an on-disk .drv file. The only files
that need to be in the Nix store are the sources of the derivation
(drv.inputSrcs), and the needed output paths of the dependencies (as
described by drv.inputDrvs). "nix-store --serve" exposes this
interface.
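Roughly, the shape of the new operation (a simplified sketch; see the
store API header for the real declaration):
    // Build a derivation passed in memory; only drv.inputSrcs and the
    // needed outputs of drv.inputDrvs have to be present in the store.
    BuildResult buildDerivation(const Path & drvPath,
        const BasicDerivation & drv, BuildMode buildMode = bmNormal);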
Note that this is a privileged operation, because you can construct a
derivation that builds any store path whatsoever. Fixing this will
require changing the hashing scheme (i.e., the output paths should be
computed from the other fields in BasicDerivation, allowing them to be
verified without access to other derivations). However, this would be
quite nice because it would allow .drv-free building (e.g. "nix-env
-i" wouldn't have to write any .drv files to disk).
Fixes #173.
Sodium's Ed25519 signatures are much shorter than OpenSSL's RSA
signatures. Public keys are also much shorter, so they're now
specified directly in the nix.conf option ‘binary-cache-public-keys’.
The new command ‘nix-store --generate-binary-cache-key’ generates and
prints a public and secret key.
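For example (the key name and file names are placeholders):
$ nix-store --generate-binary-cache-key cache.example.org-1 secret-key public-key
The contents of ‘public-key’ can then be listed in
‘binary-cache-public-keys’ on the clients.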
Let's not just improve the error message itself, but also change the
behaviour to actually work around the ntfs-3g symlink bug. If the readlink() call
returns a smaller size than the stat() call, this really isn't a problem
even if the symlink target really has changed between the calls.
So if stat() reports the size for the absolute path, it's most likely
that the relative path is smaller and thus it should also work for file
system bugs as mentioned in 93002d69fc58c2b71e2dfad202139230c630c53a.
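Roughly what the relaxed check looks like (a simplified sketch, not the
exact code):
    std::vector<char> buf(st.st_size + 1);
    ssize_t n = readlink(path.c_str(), buf.data(), buf.size());
    if (n < 0)
        throw SysError("reading symbolic link '%s'", path);
    // A shorter result than stat() promised is fine; a longer one means
    // the target changed under us and could overflow a fixed-size buffer.
    if (n > st.st_size)
        throw Error("symbolic link '%s' changed between stat() and readlink()", path);
    return std::string(buf.data(), n);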
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Tested-by: John Ericson <Ericson2314@Yahoo.com>
A message like "error: reading symbolic link `...' : Success" really is
quite confusing, so let's not indicate "success" but rather point out
the real issue.
We could also limit this check to just checking for non-negative
values, but that would introduce a race condition between stat() and
readlink() if the link target changes between those two calls, thus
leading to a buffer overflow vulnerability.
Reported by @Ericson2314 on IRC. Happened due to a possible ntfs-3g bug
where a relative symlink returned the absolute path (st_)size in stat()
while readlink() returned the relative size.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Tested-by: John Ericson <Ericson2314@Yahoo.com>
In low memory environments, "nix-env -qa" failed because the fork to
run the pager hit the kernel's overcommit limits. Using posix_spawn
gets around this. (Actually, you have to use posix_spawn with the
undocumented POSIX_SPAWN_USEVFORK flag, otherwise it just uses
fork/exec...)
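A minimal sketch of the trick (not the actual Nix code; the helper name
is made up and error handling is trimmed):
    #define _GNU_SOURCE        // for the glibc-specific POSIX_SPAWN_USEVFORK
    #include <spawn.h>
    #include <stdexcept>
    extern char * * environ;

    pid_t spawnPager(const char * pager)
    {
        posix_spawnattr_t attr;
        posix_spawnattr_init(&attr);
        posix_spawnattr_setflags(&attr, POSIX_SPAWN_USEVFORK);
        char * const argv[] = { (char *) pager, nullptr };
        pid_t pid;
        int err = posix_spawnp(&pid, pager, nullptr, &attr, argv, environ);
        posix_spawnattr_destroy(&attr);
        if (err != 0) throw std::runtime_error("unable to start the pager");
        return pid;
    }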
The function ‘builtins.match’ takes a POSIX extended regular
expression and an arbitrary string. It returns ‘null’ if the string
does not match the regular expression. Otherwise, it returns a list
containing substring matches corresponding to parenthesis groups in
the regex. The regex must match the entire string (i.e. there is an
implied "^<pat>$" around the regex). For example:
match "foo" "foobar" => null
match "foo" "foo" => []
match "f(o+)(.*)" "foooobar" => ["oooo" "bar"]
match "(.*/)?([^/]*)" "/dir/file.nix" => ["/dir/" "file.nix"]
match "(.*/)?([^/]*)" "file.nix" => [null "file.nix"]
The following example finds all regular files with extension .nix or
.patch underneath the current directory:
let
  findFiles = pat: dir: concatLists (mapAttrsToList (name: type:
    if type == "directory" then
      findFiles pat (dir + "/" + name)
    else if type == "regular" && match pat name != null then
      [(dir + "/" + name)]
    else []) (readDir dir));
in findFiles ".*\\.(nix|patch)" (toString ./.)
This was preventing destructors from running. In particular, it was
preventing the deletion of the temproot file for each worker
process. It may also have been responsible for the excessive WAL
growth on Hydra (due to the SQLite database not being closed
properly).
Apparently broken by accident in
8e9140cfde.
In particular, GCC 4.6's std::exception::~exception has an exception
specification in C++0x mode, which requires us to use that deprecated
feature in Nix (and led to breakage after some recent changes that were
valid C++11).
Nix already uses several C++11 features, and GCC 4.7 has been around for
over two years.
Signal handlers are process-wide, so sending SIGINT to the monitor
thread will cause the normal SIGINT handler to run. This sets the
isInterrupted flag, which is not what we want. So use pthread_cancel
instead.
The thread calls poll() to wait until a HUP (or other error event)
happens on the client connection. If so, it sends SIGINT to the main
thread, which is then cleaned up normally. This is much nicer than
messing around with SIGPOLL.
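Sketched, the monitor thread's body is roughly this (the names are
illustrative, not the actual ones):
    void monitorClientHup(int clientFd, pthread_t mainThread)
    {
        struct pollfd pfd;
        pfd.fd = clientFd;
        pfd.events = 0;        // POLLHUP and POLLERR are reported regardless
        if (poll(&pfd, 1, -1) == 1 && (pfd.revents & (POLLHUP | POLLERR)))
            pthread_kill(mainThread, SIGINT);  // interrupt the main thread
    }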
When running NixOps under Mac OS X, we need to be able to import store
paths built on Linux into the local Nix store. However, HFS+ is
usually case-insensitive, so if there are directories with file names
that differ only in case, then importing will fail.
The solution is to add a suffix ("~nix~case~hack~<integer>") to
colliding files. For instance, if we have a directory containing
xt_CONNMARK.h and xt_connmark.h, then the latter will be renamed to
"xt_connmark.h~nix~case~hack~1". If a store path is dumped as a NAR,
the suffixes are removed. Thus, importing and then exporting via a
case-insensitive Nix store round-trips. So when NixOps calls
nix-copy-closure to copy the path to a Linux machine, you get the
original file names back.
Closes#119.
If a build log is not available locally, then ‘nix-store -l’ will now
try to download it from the servers listed in the ‘log-servers’ option
in nix.conf. For instance, if you have:
log-servers = http://hydra.nixos.org/log
then it will try to get logs from http://hydra.nixos.org/log/<base
name of the store path>. So you can do things like:
$ nix-store -l $(which xterm)
and get a log even if xterm wasn't built locally.
Ludo reported this error:
unexpected Nix daemon error: boost::too_few_args: format-string refered to more arguments than were passed
coming from this line:
printMsg(lvlError, run.program + ": " + string(err, 0, p));
The problem here is that the string ends up implicitly converted to a
Boost format() object, so % characters are treated specially. I
always assumed (wrongly) that strings are converted to a format object
that outputs the string as-is.
Since this assumption appears in several places that may be hard to
grep for, I've added some C++ type hackery to ensure that the right
thing happens. So you don't have to worry about % in statements like
printMsg(lvlError, "foo: " + s);
or
throw Error("foo: " + s);
In particular "libutil" was always a problem because it collides with
Glibc's libutil. Even if we install into $(libdir)/nix, the linker
sometimes got confused (e.g. if a program links against libstore but
not libutil, then ld would report undefined symbols in libstore
because it was looking at Glibc's libutil).
As discovered by Todd Veldhuizen, the shell started by nix-shell has
its affinity set to a single CPU. This is because nix-shell connects
to the Nix daemon, which causes the affinity hack to be applied. So
we turn this off for Perl programs.
On a system with multiple CPUs, running Nix operations through the
daemon is significantly slower than "direct" mode:
$ NIX_REMOTE= nix-instantiate '<nixos>' -A system
real 0m0.974s
user 0m0.875s
sys 0m0.088s
$ NIX_REMOTE=daemon nix-instantiate '<nixos>' -A system
real 0m2.118s
user 0m1.463s
sys 0m0.218s
The main reason seems to be that the client and the worker get moved
to a different CPU after every call to the worker. This patch adds a
hack to lock them to the same CPU. With this, the overhead of going
through the daemon is very small:
$ NIX_REMOTE=daemon nix-instantiate '<nixos>' -A system
real 0m1.074s
user 0m0.809s
sys 0m0.098s
The kill(2) in Apple's libc follows POSIX semantics, which means that
kill(-1, SIGKILL) will kill the calling process too. Since nix has no
way to distinguish between the process successfully killing everything
and the process being killed by a rogue builder in that case, it can't
safely conclude that killUser was successful.
Luckily, the actual kill syscall takes a parameter that determines
whether POSIX semantics are followed, so we can call that syscall
directly and avoid the issue on Apple.
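The workaround, sketched (close to, but not necessarily identical to,
the actual code inside the kill loop):
    #ifdef __APPLE__
        /* Call the kill syscall directly and pass false for its third,
           Darwin-specific "posix semantics" argument, so the calling
           process is not included in kill(-1, ...). */
        if (syscall(SYS_kill, -1, SIGKILL, false) == 0) break;
    #else
        if (kill(-1, SIGKILL) == 0) break;
    #endif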
Signed-off-by: Shea Levy <shea@shealevy.com>
For instance, it's pointless to keep copy-from-other-stores running if
there are no other stores, or download-using-manifests if there are no
manifests. This also speeds things up because we don't send queries
to those substituters.
For example, given a derivation with outputs "out", "man" and "bin":
$ nix-build -A pkg
produces ./result pointing to the "out" output;
$ nix-build -A pkg.man
produces ./result-man pointing to the "man" output;
$ nix-build -A pkg.all
produces ./result, ./result-man and ./result-bin;
$ nix-build -A pkg.all -A pkg2
produces ./result, ./result-man, ./result-bin and ./result-2.
Using the immutable bit is problematic, especially in conjunction with
store optimisation. For instance, if the garbage collector deletes a
file, it has to clear its immutable bit, but if the file has
additional hard links, we can't set the bit afterwards because we
don't know the remaining paths.
So now that we support having the entire Nix store as a read-only
mount, we may as well drop the immutable bit. Unfortunately, we have
to keep the code to clear the immutable bit for backwards
compatibility.
"config.h" must be included first, because otherwise the compiler
might not see the right value of _FILE_OFFSET_BITS. We've had this
before; see 705868a8a9. In this case,
GCC would compute a different address for ‘settings.useSubstitutes’ in
misc.cc because of the off_t in ‘settings’.
Reverts 3854fc9b42.
http://hydra.nixos.org/build/3016700
This is required on systemd, which mounts filesystems as "shared"
subtrees. Changes to shared trees in a private mount namespace are
propagated to the outside world, which is bad.
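The corresponding remount, roughly (a sketch; the error message is made
up):
    /* In the child's new mount namespace, turn every mount into a
       private one so that changes don't propagate back to the host. */
    if (mount(0, "/", 0, MS_REC | MS_PRIVATE, 0) == -1)
        throw SysError("unable to make '/' private");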
Setting the environment variable NIX_COUNT_CALLS to 1 enables some
basic profiling in the evaluator. It will count calls to functions
and primops as well as evaluations of attributes.
For example, to see where evaluation of a NixOS configuration spends
its time:
$ NIX_SHOW_STATS=1 NIX_COUNT_CALLS=1 ./src/nix-instantiate/nix-instantiate '<nixos>' -A system --readonly-mode
...
calls to 39 primops:
239532 head
233962 tail
191252 hasAttr
...
calls to 1595 functions:
224157 `/nix/var/nix/profiles/per-user/root/channels/nixos/nixpkgs/pkgs/lib/lists.nix:17:19'
221767 `/nix/var/nix/profiles/per-user/root/channels/nixos/nixpkgs/pkgs/lib/lists.nix:17:14'
221767 `/nix/var/nix/profiles/per-user/root/channels/nixos/nixpkgs/pkgs/lib/lists.nix:17:10'
...
evaluations of 7088 attributes:
167377 undefined position
132459 `/nix/var/nix/profiles/per-user/root/channels/nixos/nixpkgs/pkgs/lib/attrsets.nix:119:41'
47322 `/nix/var/nix/profiles/per-user/root/channels/nixos/nixpkgs/pkgs/lib/attrsets.nix:13:21'
...
In a private PID namespace, processes have PIDs that are separate from
the rest of the system. The initial child gets PID 1. Processes in
the chroot cannot see processes outside of the chroot. This improves
isolation between builds. However, processes on the outside can see
processes in the chroot and send signals to them (if they have
appropriate rights).
Since the builder gets PID 1, it serves as the reaper for zombies in
the chroot. This might turn out to be a problem. In that case we'll
need to have a small PID 1 process that sits in a loop calling wait().
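Such a reaper would be tiny; a purely hypothetical sketch (the builder
variables are placeholders):
    /* Hypothetical PID 1 for the build: start the real builder, then
       reap zombies until the builder itself exits. */
    pid_t builder = fork();
    if (builder == 0) {
        execve(builderProgram, builderArgv, builderEnv);  // placeholders
        _exit(1);
    }
    for (;;) {
        int status;
        pid_t pid = wait(&status);
        if (pid == builder) _exit(WIFEXITED(status) ? WEXITSTATUS(status) : 1);
        if (pid == -1 && errno == ECHILD) _exit(1);
    }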
Nix now requires SQLite and bzip2 to be pre-installed. SQLite is
detected using pkg-config. We required DBD::SQLite anyway, so
depending on SQLite is not a big problem.
The --with-bzip2, --with-openssl and --with-sqlite flags are gone.
I was bitten one time too many by Python modifying the Nix store by
creating *.pyc files when run as root. On Linux, we can prevent this
by setting the immutable bit on files and directories (as in ‘chattr
+i’). This isn't supported by all filesystems, so it's not an error
if setting the bit fails. The immutable bit is cleared by the garbage
collector before deleting a path. The only tricky aspect is in
optimiseStore(), since it's forbidden to create hard links to an
immutable file. Thus optimiseStore() temporarily clears the immutable
bit before creating the link.
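On Linux the bit is toggled through the FS_IOC_GETFLAGS / FS_IOC_SETFLAGS
ioctls; a minimal sketch (the helper name is made up, error handling is
simplified):
    #include <sys/ioctl.h>
    #include <linux/fs.h>

    static void setImmutable(int fd, bool immutable)
    {
        unsigned int flags;
        if (ioctl(fd, FS_IOC_GETFLAGS, &flags) == -1) return;  // fs doesn't support it
        if (immutable) flags |= FS_IMMUTABLE_FL; else flags &= ~FS_IMMUTABLE_FL;
        ioctl(fd, FS_IOC_SETFLAGS, &flags);                    // ignore failure, see above
    }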
unreachable paths. This matters when using --max-freed etc.:
unreachable paths could become reachable again, so it's nicer to
keep them if there is "real" garbage to be deleted. Also, don't use
readDirectory() but read the Nix store and delete invalid paths in
parallel. This reduces GC latency on very large Nix stores.