https://hydra.nixos.org/build/107467517
Seems that on i686-linux, gcc and rustc disagree on how to return
1-word structs: gcc has the caller pass a pointer to the result, while
rustc has the callee return the result in a register. Work around this
by using a bare pointer.
Also, fetchGit now runs in O(1) memory since we pipe the output of
'git archive' directly into unpackTarball() (rather than first reading
it all into memory).
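Roughly, the streaming approach looks like the sketch below (illustrative C++ with hypothetical names, not the actual fetchGit code): the bytes produced by 'git archive' are handed to the unpacker chunk by chunk instead of being accumulated in a buffer first.

  #include <cstddef>
  #include <cstdio>
  #include <functional>
  #include <stdexcept>
  #include <string>
  #include <vector>

  // Stream 'git archive' output straight into a tar unpacker, so the
  // whole archive is never held in memory at once (POSIX popen()).
  void unpackGitArchive(const std::string & repoDir, const std::string & rev,
                        const std::function<void(const char *, size_t)> & feedUnpacker)
  {
      std::string cmd = "git -C " + repoDir + " archive " + rev;
      FILE * pipe = popen(cmd.c_str(), "r");
      if (!pipe) throw std::runtime_error("cannot run git archive");
      std::vector<char> buf(64 * 1024);
      size_t n;
      while ((n = fread(buf.data(), 1, buf.size(), pipe)) > 0)
          feedUnpacker(buf.data(), n);   // O(1) memory: one chunk at a time
      pclose(pipe);
  }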
We can now convert Rust Errors to C++ exceptions. At the Rust->C++ FFI
boundary, Result<T, Error> will cause Error to be converted to and
thrown as a C++ exception.
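As a rough sketch of what that conversion amounts to (field names and layout here are illustrative, not the actual FFI types): the Rust side hands back a C-compatible result value, and a C++ helper rethrows the error case as an exception.

  #include <stdexcept>

  // Hypothetical C-compatible result produced on the Rust side.
  struct FFIResult
  {
      bool ok;
      const char * errorMsg;   // valid when ok == false
      void * value;            // valid when ok == true
  };

  // Stand-in for a Rust function whose signature is Result<T, Error>.
  extern "C" FFIResult rust_fetch_something()
  {
      return FFIResult{false, "something went wrong", nullptr};
  }

  static void * unwrapOrThrow(const FFIResult & r)
  {
      if (!r.ok) throw std::runtime_error(r.errorMsg);
      return r.value;
  }

  int main()
  {
      try {
          unwrapOrThrow(rust_fetch_something());
      } catch (const std::exception & e) {
          // the Rust error surfaces as an ordinary C++ exception here
      }
  }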
E.g.
$ nix-build '<nixpkgs>' -A hello --experimental-features no-url-literals
error: URL literals are disabled, at /nix/store/vsjamkzh15r3c779q2711az826hqgvzr-nixpkgs-20.03pre194957.bef773ed53f/nixpkgs/pkgs/top-level/all-packages.nix:1236:11
Helps with implementing https://github.com/NixOS/rfcs/pull/45.
A corrupt entry in .links prevents adding a fixed version of that file
to the store in any path. The user experience is that corruption
present in the store 'spreads' to new paths added to the store:
(With store optimisation enabled)
1. A file in the store gets corrupted somehow (eg: filesystem bug).
2. The user tries to add a thing to the store which contains a good copy
of the corrupted file.
3. The file being added to the store is hashed, found to match the bad
.links entry, and is replaced by a link to the bad .links entry.
(The .links entry's hash is not verified during add -- this would
impose a substantial performance burden.)
4. The user observes that the thing in the store that is supposed to be
a copy of what they were trying to add is not a correct copy -- some
files have different contents! Running "nix-store --verify
--check-contents --repair" does not fix the problem.
This change makes "nix-store --verify --check-contents --repair" fix
this problem. Bad .links entries are simply removed, allowing future
attempts to insert a good copy of the file to succeed.
Derivations that want to use recursion should now set
requiredSystemFeatures = [ "recursive-nix" ];
to make the daemon socket appear.
Also, Nix should be configured with "experimental-features =
recursive-nix".
This allows Nix builders to call Nix to build derivations, with some
limitations.
Example:
let nixpkgs = fetchTarball channel:nixos-18.03; in
with import <nixpkgs> {};
runCommand "foo"
  {
    buildInputs = [ nix jq ];
    NIX_PATH = "nixpkgs=${nixpkgs}";
  }
  ''
    hello=$(nix-build -E '(import <nixpkgs> {}).hello.overrideDerivation (args: { name = "hello-3.5"; })')
    $hello/bin/hello
    mkdir -p $out/bin
    ln -s $hello/bin/hello $out/bin/hello
    nix path-info -r --json $hello | jq .
  ''
This derivation makes a recursive Nix call to build GNU Hello and
symlinks it from its $out, i.e.
# ll ./result/bin/
lrwxrwxrwx 1 root root 63 Jan 1 1970 hello -> /nix/store/s0awxrs71gickhaqdwxl506hzccb30y5-hello-3.5/bin/hello
# nix-store -qR ./result
/nix/store/hwwqshlmazzjzj7yhrkyjydxamvvkfd3-glibc-2.26-131
/nix/store/s0awxrs71gickhaqdwxl506hzccb30y5-hello-3.5
/nix/store/sgmvvyw8vhfqdqb619bxkcpfn9lvd8ss-foo
This is implemented as follows:
* Before running the outer builder, Nix creates a Unix domain socket
'.nix-socket' in the builder's temporary directory and sets
$NIX_REMOTE to point to it. It starts a thread to process
connections to this socket. (Thus you don't need to have nix-daemon
running; a rough sketch of this step follows the list below.)
* The daemon thread uses a wrapper store (RestrictedStore) to keep
track of paths added through recursive Nix calls, to implement some
restrictions (see below), and to do some censorship (e.g. for
purity, queryPathInfo() won't return impure information such as
signatures and timestamps).
* After the build finishes, the output paths are scanned for
references to the paths added through recursive Nix calls (in
addition to the inputs closure). Thus, in the example above, $out
has a reference to $hello.
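A rough sketch of the first point (illustrative names, error handling omitted; not the actual code): create the '.nix-socket' socket in the build's temporary directory and serve connections to it from a thread.

  #include <sys/socket.h>
  #include <sys/un.h>
  #include <unistd.h>
  #include <cstdlib>
  #include <cstring>
  #include <string>
  #include <thread>

  void startRecursiveNixDaemon(const std::string & tmpDir)
  {
      std::string path = tmpDir + "/.nix-socket";

      int fd = socket(AF_UNIX, SOCK_STREAM, 0);

      sockaddr_un addr{};
      addr.sun_family = AF_UNIX;
      std::strncpy(addr.sun_path, path.c_str(), sizeof(addr.sun_path) - 1);
      bind(fd, (sockaddr *) &addr, sizeof(addr));
      listen(fd, 5);

      // The builder sees this and talks to our in-process daemon thread.
      setenv("NIX_REMOTE", ("unix://" + path).c_str(), 1);

      std::thread([fd]() {
          while (true) {
              int client = accept(fd, nullptr, nullptr);
              if (client < 0) break;
              // ... speak the daemon protocol on 'client', backed by a
              // RestrictedStore-style wrapper ...
              close(client);
          }
      }).detach();
  }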
The main restriction on recursive Nix calls is that they cannot do
arbitrary substitutions. For example, doing
nix-store -r /nix/store/kmwd1hq55akdb9sc7l3finr175dajlby-hello-2.10
is forbidden unless /nix/store/kmwd... is in the inputs closure or
previously built by a recursive Nix call. This is to prevent
irreproducible derivations that have hidden dependencies on
substituters or the current store contents. Building a derivation is
fine, however, and Nix will use substitutes if available. In other
words, the builder has to present proof that it knows how to build a
desired store path from scratch by constructing a derivation graph for
that path.
Probably we should also disallow instantiating/building fixed-output
derivations (specifically, those that access the network, but
currently we have no way to mark fixed-output derivations that don't
access the network). Otherwise sandboxed derivations can bypass
sandbox restrictions and access the network.
When sandboxing is enabled, we make paths appear in the sandbox of the
builder by entering the mount namespace of the builder and
bind-mounting each path. This is tricky because we do a pivot_root()
in the builder to change the root directory of its mount namespace,
and thus the host /nix/store is not visible in the mount namespace of
the builder. To get around this, just before doing pivot_root(), we
branch a second mount namespace that shares its /nix/store mountpoint
with the parent.
Recursive Nix currently doesn't work on macOS in sandboxed mode
(because we can't change the sandbox policy of a running build) and in
non-root mode (because setns() barfs).
The intent of the code was that if the window size cannot be determined,
it would be treated as having the maximum possible size. Because of a
missing assignment, it was actually treated as having a width of 0.
The width could not be determined because it was obtained from stdout
rather than stderr, even though the printing was done to stderr.
This commit addresses both issues.
Add missing docstring on InstallableCommand. Also, some of these were wrapped
when they're right next to a line longer than the unwrapped line, so we can just
unwrap them to save vertical space.
Fixes the following warning and the potential issue it indicates:
src/libstore/worker-protocol.hh:66:1: warning: class 'Source' was previously declared as a struct; this is valid, but may result in linker errors
under the Microsoft C++ ABI [-Wmismatched-tags]
(cherry picked from commit 6e1bb04870b1b723282d32182af286646f13bf3c)
This is an alternative to the IN_NIX_SHELL environment variable,
allowing the expression to adapt itself to nix-shell without
triggering those adaptations when used as a dependency of another
shell.
Closes #3147
This allows a repl-centric workflow for working on nixpkgs.
Usage:
  :edit <package> - heuristic that finds the package's file path
  :edit <path> - just opens the editor on the file path
Once invoked, `nix repl` will open $EDITOR on that file path. Once the
editor exits, `nix repl` will automatically reload itself.
This adds a command 'nix make-content-addressable' that rewrites the
specified store paths into content-addressable paths. The advantage of
such paths is that 1) they can be imported without signatures; 2) they
can enable deduplication in cases where derivation changes do not
cause output changes (apart from store path hashes).
For example,
$ nix make-content-addressable -r nixpkgs.cowsay
rewrote '/nix/store/g1g31ah55xdia1jdqabv1imf6mcw0nb1-glibc-2.25-49' to '/nix/store/48jfj7bg78a8n4f2nhg269rgw1936vj4-glibc-2.25-49'
...
rewrote '/nix/store/qbi6rzpk0bxjw8lw6azn2mc7ynnn455q-cowsay-3.03+dfsg1-16' to '/nix/store/iq6g2x4q62xp7y7493bibx0qn5w7xz67-cowsay-3.03+dfsg1-16'
We can then copy the resulting closure to another store without
signatures:
$ nix copy --trusted-public-keys '' --to ~/my-nix /nix/store/iq6g2x4q62xp7y7493bibx0qn5w7xz67-cowsay-3.03+dfsg1-16
In order to support self-references in content-addressable paths,
these paths are hashed "modulo" self-references, meaning that
self-references are zeroed out during hashing. Somewhat annoyingly,
this means that the NAR hash stored in the Nix database is no longer
necessarily equal to the output of "nix hash-path"; for
content-addressable paths, you need to pass the --modulo flag:
$ nix path-info --json /nix/store/iq6g2x4q62xp7y7493bibx0qn5w7xz67-cowsay-3.03+dfsg1-16 | jq -r .[].narHash
sha256:0ri611gdilz2c9rsibqhsipbfs9vwcqvs811a52i2bnkhv7w9mgw
$ nix hash-path --type sha256 --base32 /nix/store/iq6g2x4q62xp7y7493bibx0qn5w7xz67-cowsay-3.03+dfsg1-16
1ggznh07khq0hz6id09pqws3a8q9pn03ya3c03nwck1kwq8rclzs
$ nix hash-path --type sha256 --base32 /nix/store/iq6g2x4q62xp7y7493bibx0qn5w7xz67-cowsay-3.03+dfsg1-16 --modulo iq6g2x4q62xp7y7493bibx0qn5w7xz67
0ri611gdilz2c9rsibqhsipbfs9vwcqvs811a52i2bnkhv7w9mgw
Experimental features are now opt-in. There is currently one
experimental feature: "nix-command" (which enables the "nix"
command). This will allow us to merge experimental features more
quickly, without committing to supporting them indefinitely.
Typical usage:
$ nix build --experimental-features 'nix-command flakes' nixpkgs#hello
(cherry picked from commit 8e478c2341,
without the "flakes" feature)
tmpDirInSandbox differs between sandboxed and non-sandboxed builds.
Since we don’t know ahead of time here whether sandboxing is enabled,
we need to reset all of the env vars we’ve set previously. This fixes
the issue encountered in https://github.com/NixOS/nixpkgs/issues/70856.
Previously, SANDBOX_SHELL was set to empty when unavailable. This
caused issues when actually generating the sandbox. Instead, just set
SANDBOX_SHELL when --with-sandbox-shell= is non-empty. Alternative
implementation to https://github.com/NixOS/nix/pull/3038.
Pure mode should not try to source the user’s bashrc file. These may
have many impurities that the user does not expect to get into their
shell.
Fixes #3090
When running nix doctor on a healthy system, it just prints the store URI and
nothing else. This makes it unclear whether the system is in a good state and
what check(s) it actually ran, since some of the checks are optional depending
on the store type.
This commit updates nix doctor to print a colored log message for every check
that it does, and to explicitly state whether that check was a PASS or FAIL, making
it clear to the user whether the system passed its checkup with the doctor.
Fixes #3084
This integrates the functionality of the index-debuginfo program in
nixos-channel-scripts to maintain an index of DWARF debuginfo files in
a format usable by dwarffs. Thus the debug info index is updated by
Hydra rather than by the channel mirroring script.
Example usage:
$ nix copy --to 'file:///tmp/binary-cache?index-debug-info=true' /nix/store/vr9mhcch3fljzzkjld3kvkggvpq38cva-nix-2.2.2-debug
$ cat /tmp/binary-cache/debuginfo/036b210b03bad75ab2d8fc80b7a146f98e7f1ecf.debug
{"archive":"../nar/0313h2kdhk4v73xna9ysiksp2v8xrsk5xsw79mmwr3rg7byb4ka8.nar.xz","member":"lib/debug/.build-id/03/6b210b03bad75ab2d8fc80b7a146f98e7f1ecf.debug"}
Fixes #3083.
Previously we allowed any length of name for Nix derivations. This is
bad because different file systems have different max lengths. To make
things predictable, I have picked a max. This was done by trying to
build this derivation:
derivation {
name = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa";
builder = "/no-such-path";
system = "x86_64-linux";
}
Take off one 'a' and it no longer fails with 'File name too long'. That ends
up being 212 a’s. An even smaller max could be picked if we want to
support more file systems.
Working backwards, this is why:
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa-${name}.drv.chroot
255 (max file name) - 32 (hash) - 1 (dash) - 4 ('.drv') - 7 ('.chroot') = 211
These are already handled separately. This fixes warnings like
warning: ignoring the user-specified setting 'max-jobs', because it is a restricted setting and you are not a trusted user
when using the -j flag.
If a NAR is already in the store, addToStore doesn't read the source,
which makes the protocol go out of sync. This happens for example when
two clients try to nix-copy-closure the same derivation at the same time.
Introduce the SizeSource, which makes it possible to bound how much data is
read from a source. It also contains a drainAll() function to discard
the rest of the source, useful for keeping the Nix protocol in sync.
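The idea, in simplified form (a sketch, not the exact Nix classes), is a source wrapper that knows the size of the incoming payload, so whatever the consumer leaves unread can be skipped:

  #include <algorithm>
  #include <cstddef>
  #include <stdexcept>

  struct Source
  {
      virtual size_t read(char * data, size_t len) = 0;
      virtual ~Source() { }
  };

  // Wraps another source and never reads past 'size' bytes of it.
  struct SizeSource : Source
  {
      Source & inner;
      size_t remaining;

      SizeSource(Source & inner, size_t size) : inner(inner), remaining(size) { }

      size_t read(char * data, size_t len) override
      {
          if (remaining == 0) throw std::runtime_error("sized source exhausted");
          size_t n = inner.read(data, std::min(len, remaining));
          remaining -= n;
          return n;
      }

      // Skip whatever the consumer did not read, so the next message on
      // the connection starts at the right offset.
      void drainAll()
      {
          char buf[8192];
          while (remaining > 0) read(buf, sizeof(buf));
      }
  };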
With this patch, and this file I called `log.py`:
#!/usr/bin/env nix-shell
#!nix-shell -i python3 -p python3 --pure
import sys
from pprint import pprint
stack = []
timestack = []
for line in open(sys.argv[1]):
    components = line.strip().split(" ", 2)
    if components[0] != "function-trace":
        continue
    direction = components[1]
    components = components[2].rsplit(" ", 2)
    loc = components[0]
    _at = components[1]
    time = int(components[2])
    if direction == "entered":
        stack.append(loc)
        timestack.append(time)
    elif direction == "exited":
        dur = time - timestack.pop()
        vst = ";".join(stack)
        print(f"{vst} {dur}")
        stack.pop()
and:
nix-instantiate --trace-function-calls -vvvv ../nixpkgs/pkgs/top-level/release.nix -A unstable > log.matthewbauer 2>&1
./log.py ./log.matthewbauer > log.matthewbauer.folded
flamegraph.pl --title matthewbauer-post-pr log.matthewbauer.folded > log.matthewbauer.folded.svg
I can make flame graphs like: http://gsc.io/log.matthewbauer.folded.svg
---
Includes test cases around function call failures and tryEval. Uses
RAII so the finish is always called at the end of the function.
Make curl's low speed limit configurable via stalled-download-timeout.
Before, this limit was five minutes without receiving a single byte.
This is much too long, given that the remote end may not even have
acknowledged the HTTP request.
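For reference, this is roughly how such a limit maps onto libcurl options (a sketch assuming a timeout setting in seconds; not the exact download code):

  #include <curl/curl.h>

  // Treat a transfer that stays below 1 byte/second for
  // 'stalledDownloadTimeout' seconds as stalled and abort it.
  void applyStalledDownloadTimeout(CURL * curl, long stalledDownloadTimeout)
  {
      curl_easy_setopt(curl, CURLOPT_LOW_SPEED_LIMIT, 1L);
      curl_easy_setopt(curl, CURLOPT_LOW_SPEED_TIME, stalledDownloadTimeout);
  }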
POSIX file locks are essentially incompatible with multithreading. BSD
locks have much saner semantics. We need this now that there can be
multiple concurrent LocalStore::buildPaths() invocations.
This currently fails because we're using POSIX file locks. So when the
garbage collector opens and closes its own temproots file, it causes
the lock to be released and then deleted by another GC instance.
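A minimal standalone illustration of the semantic difference (not Nix code): a POSIX fcntl() lock is dropped as soon as the process closes any descriptor for the file, while a BSD flock() lock is tied to the open file description.

  #include <fcntl.h>
  #include <sys/file.h>
  #include <unistd.h>

  int main()
  {
      int a = open("/tmp/lock-demo", O_CREAT | O_RDWR, 0600);
      flock(a, LOCK_EX);        // BSD lock acquired via 'a'

      int b = open("/tmp/lock-demo", O_RDWR);
      close(b);                 // does NOT release the flock() lock;
                                // it WOULD have released an fcntl() lock

      close(a);                 // lock released here
      return 0;
  }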
Passing `--post-build-hook /foo/bar` to a nix-* command will cause
`/foo/bar` to be executed after each build with the following
environment variables set:
DRV_PATH=/nix/store/drv-that-has-been-built.drv
OUT_PATHS=/nix/store/...build /nix/store/...build-bin /nix/store/...build-dev
This can be useful in particular to upload all the built artifacts to
the cache (including the ones that don't appear in the runtime closure
of the final derivation or are built because of IFD).
This new feature prints the hook's stderr/stdout output to the `nix-build`
and `nix build` client, and the output is printed in a Nix 2
compatible format:
[nix]$ ./inst/bin/nix-build ./test.nix
these derivations will be built:
/nix/store/ishzj9ni17xq4hgrjvlyjkfvm00b0ch9-my-example-derivation.drv
building '/nix/store/ishzj9ni17xq4hgrjvlyjkfvm00b0ch9-my-example-derivation.drv'...
hello!
bye!
running post-build-hook '/home/grahamc/projects/github.com/NixOS/nix/post-hook.sh'...
post-build-hook: + sleep 1
post-build-hook: + echo 'Signing paths' /nix/store/qr213vjmibrqwnyp5fw678y7whbkqyny-my-example-derivation
post-build-hook: Signing paths /nix/store/qr213vjmibrqwnyp5fw678y7whbkqyny-my-example-derivation
post-build-hook: + sleep 1
post-build-hook: + echo 'Uploading paths' /nix/store/qr213vjmibrqwnyp5fw678y7whbkqyny-my-example-derivation
post-build-hook: Uploading paths /nix/store/qr213vjmibrqwnyp5fw678y7whbkqyny-my-example-derivation
post-build-hook: + sleep 1
post-build-hook: + printf 'very important stuff'
/nix/store/qr213vjmibrqwnyp5fw678y7whbkqyny-my-example-derivation
[nix-shell:~/projects/github.com/NixOS/nix]$ ./inst/bin/nix build -L -f ./test.nix
my-example-derivation> hello!
my-example-derivation> bye!
my-example-derivation (post)> + sleep 1
my-example-derivation (post)> + echo 'Signing paths' /nix/store/c263gzj2kb2609mz8wrbmh53l14wzmfs-my-example-derivation
my-example-derivation (post)> Signing paths /nix/store/c263gzj2kb2609mz8wrbmh53l14wzmfs-my-example-derivation
my-example-derivation (post)> + sleep 1
my-example-derivation (post)> + echo 'Uploading paths' /nix/store/c263gzj2kb2609mz8wrbmh53l14wzmfs-my-example-derivation
my-example-derivation (post)> Uploading paths /nix/store/c263gzj2kb2609mz8wrbmh53l14wzmfs-my-example-derivation
my-example-derivation (post)> + sleep 1
my-example-derivation (post)> + printf 'very important stuff'
[1 built, 0.0 MiB DL]
Co-authored-by: Graham Christensen <graham@grahamc.com>
Co-authored-by: Eelco Dolstra <edolstra@gmail.com>
startProcess does not appear to send the exit code to the helper
correctly. Not sure why this is, but it is probably safe to just
fall back on all sandbox errors.
When sandbox-fallback = true (the default), the Nix builder will fall
back to disabled sandbox mode when the kernel doesn’t allow users to
set it up. This prevents hard errors from occurring in tricky places,
especially the initial installer. To restore the previous behavior,
users can set:
sandbox-fallback = false
in their /etc/nix/nix.conf configuration.
Some kernels disable "unprivileged user namespaces". This is
unfortunate, but we can still use mount namespaces. Anyway, since each
builder has its own nixbld user, we already have most of the benefits
of user namespaces.
This is a much simpler fix to the 'error 9 while decompressing xz
file' problem than 78fa47a7f0. We just
do a ranged HTTP request starting after the data that we previously
wrote into the sink.
Fixes #2952, #379.
Our use of boost::coroutine2 depends on -lboost_context,
which in turn depends on `-lboost_thread`, which in turn depends
on `-lboost_system`.
I suspect that this builds on nix only because of low-level hacks
like NIX_LDFLAGS.
This commit passes the proper linker flags, thus fixing bootstrap
builds on non-nix distributions like Ubuntu 16.04.
With these changes, I can build Nix on Ubuntu 16.04 using:
./bootstrap.sh
./configure --prefix=$HOME/editline-prefix \
  --disable-doc-gen \
  CXX=g++-7 \
  --with-boost=$HOME/boost-prefix \
  EDITLINE_CFLAGS=-I$HOME/editline-prefix/include \
  EDITLINE_LIBS=-leditline \
  LDFLAGS=-L$HOME/editline-prefix/lib
make
where
* g++-7 comes from gcc-7 from
https://launchpad.net/~ubuntu-toolchain-r/+archive/ubuntu/test,
* editline 1.14 from https://github.com/troglobit/editline/releases/tag/1.14.0
was installed into `$HOME/editline-prefix`
(because Ubuntu 16.04's `editline` is too old to have the function nix uses),
* boost 1.66 from
https://www.boost.org/doc/libs/1_66_0/more/getting_started/unix-variants.html
was installed into $HOME/boost-prefix (because Ubuntu 16.04 only has 1.58)
$ sudo ./inst/bin/nix-instantiate -E '"${./.git}"'
error: The path name '.git' is invalid: it is illegal to start the
name with a period. Path names are alphanumeric and can include the
symbols +-._?= and must not begin with a period. Note: If '.git' is a
source file and you cannot rename it on disk,
builtins.path { name = ... } can be used to give it an alternative
name.
If multiple builds fail with different errors, it will be reflected
in the status code.
e.g.
103 => timeout + hash mismatch
105 => timeout + check mismatch
106 => hash mismatch + check mismatch
107 => timeout + hash mismatch + check mismatch
Setting `http2 = false` in nix config (e.g. /etc/nix/nix.conf)
had no effect, and `nix-env -vvvvv -i hello` still downloaded .nar
packages using HTTP/2.
In `src/libstore/download.cc`, the `CURL_HTTP_VERSION_2TLS` option was
being explicitly set when `downloadSettings.enableHttp2` was `true`,
but, `CURL_HTTP_VERSION_1_1` option was not being explicitly set when
`downloadSettings.enableHttp2` was `false`.
This may be because `https://curl.haxx.se/libcurl/c/libcurl-env.html` states:
"You have to set this option if you want to use libcurl's HTTP/2 support."
but, also, in the changelog, states:
"DEFAULT
Since curl 7.62.0: CURL_HTTP_VERSION_2TLS
Before that: CURL_HTTP_VERSION_1_1"
So, the default setting for `libcurl` is HTTP/2 for version >= 7.62.0.
In this commit, option `CURLOPT_HTTP_VERSION` is explicitly set to
`CURL_HTTP_VERSION_1_1` when `downloadSettings.enableHttp2` nix config
setting is `false`.
This can be tested by running `nix-env -vvvvv -i hello | grep HTTP`
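The conditional is roughly the following (illustrative sketch, not the exact download.cc code):

  #include <curl/curl.h>

  // Force HTTP/1.1 when http2 is disabled in the Nix config, instead of
  // relying on libcurl's version-dependent default.
  void applyHttpVersion(CURL * curl, bool enableHttp2)
  {
      if (enableHttp2)
          curl_easy_setopt(curl, CURLOPT_HTTP_VERSION, (long) CURL_HTTP_VERSION_2TLS);
      else
          curl_easy_setopt(curl, CURLOPT_HTTP_VERSION, (long) CURL_HTTP_VERSION_1_1);
  }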
The default nsswitch.conf(5) file in most distros can handle many
different things including host name, user names, groups, etc. In Nix,
we want to limit the amount of impurities that come from these things.
As a result, we should only allow nss to be used for gethostbyname(3)
and getservent(3).
/cc @Ericson2314
'updateCV.notify_one()' does nothing if the update thread is not
waiting for updateCV (in particular this happens when it is sleeping
on quitCV). So also set a variable to ensure that the update isn't
lost.
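The pattern involved is roughly the following (standalone sketch, not the actual progress-bar code): pair the notification with a flag set under the mutex, so the update is picked up even if the thread was not waiting on updateCV at that moment.

  #include <condition_variable>
  #include <mutex>
  #include <thread>

  std::mutex m;
  std::condition_variable updateCV;
  bool haveUpdate = false;

  void postUpdate()
  {
      {
          std::lock_guard<std::mutex> lock(m);
          haveUpdate = true;          // recorded even if nobody is waiting yet
      }
      updateCV.notify_one();
  }

  void updateThread()
  {
      std::unique_lock<std::mutex> lock(m);
      updateCV.wait(lock, [] { return haveUpdate; });
      haveUpdate = false;
      // ... redraw the progress bar ...
  }

  int main()
  {
      std::thread t(updateThread);
      postUpdate();
      t.join();
  }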
--no-net causes tarballTtl to be set to the largest 32-bit integer,
which causes comparisons like 'time + tarballTtl < other_time' to
fail on 32-bit systems. So cast them to 64-bit first.
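To see why the cast matters, here is a standalone sketch of the wrap-around (unsigned types are used to keep the arithmetic well defined; the values are made up):

  #include <cstdint>
  #include <iostream>

  int main()
  {
      uint32_t lastChecked = 1600000000;
      uint32_t tarballTtl  = UINT32_MAX;   // "infinite" TTL set by --no-net
      uint32_t now         = 1600000100;

      // 32-bit arithmetic wraps, so the cache looks expired even though
      // the TTL is effectively infinite.
      bool expired32 = lastChecked + tarballTtl < now;            // true (wrong)

      // Casting to 64 bits first gives the intended result.
      bool expired64 = (uint64_t) lastChecked + tarballTtl < now; // false (right)

      std::cout << expired32 << " " << expired64 << "\n";
  }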
https://hydra.nixos.org/build/95076624
(cherry picked from commit 29ccb2e969)
This flag
* Disables substituters.
* Sets the tarball-ttl to infinity (ensuring e.g. that the flake
registry and any downloaded flakes are considered current).
* Disables retrying downloads and sets the connection timeout to the
minimum. (So it doesn't completely disable downloads at the moment.)
(cherry picked from commit 8ea842260b)
Once we've started writing data to a Sink, we can't restart a download
request, because then we end up writing duplicate data to the
Sink. Therefore we shouldn't handle retries in Downloader but at a
higher level (in particular, in copyStorePath()).
Fixes #2952.
(cherry picked from commit a67cf5a358)
Also, make fetchGit and fetchMercurial update allowedPaths properly.
(Maybe the evaluator, rather than the caller of the evaluator, should
apply toRealPath(), but that's a bigger change.)
(cherry picked from commit 5c34d66538)
This allows many programs (e.g. gcc, clang, cmake) to print colorized
log output (assuming $TERM is set to a value like "xterm").
There are other ways to get colors, in particular setting
CLICOLOR_FORCE, but they're less widely supported and can break
programs that parse tool output.
In a daemon-based Nix setup, some options cannot be overridden by a
client unless the client's user is considered trusted.
Currently, if an untrusted user tries to override one of those
options, we are silently ignoring it.
This can be pretty confusing in certain situations.
e.g. a user thinks he disabled the sandbox when in reality he did not.
We now send a warning message letting the user know that some options
have been ignored.
Related to #1761.
For the SIGWINCH signal to be caught, it needs to be set in sigaction
on the main thread. Previously, this was broken, and updateWindowSize
was never being called. Tested on macOS 10.14.
This causes 'nix' to print build log output to stderr rather than
showing the last log line in the progress bar. Log lines are prefixed
by the name of the derivation (minus the version string), e.g.
binutils> make[1]: Leaving directory '/build/binutils-2.31.1'
binutils-wrapper> unpacking sources
binutils-wrapper> patching sources
...
binutils-wrapper> Using dynamic linker: '/nix/store/kr51dlsj9v5cr4n8700jliyz8v5b2q7q-bootstrap-stage0-glibc/lib/ld-linux-x86-64.so.2'
bootstrap-stage2-gcc-wrapper> unpacking sources
...
linux-headers> unpacking sources
linux-headers> unpacking source archive /nix/store/8javli69jhj3bkql2c35gsj5vl91p382-linux-4.19.16.tar.xz
The value of useChroot is not set yet in the constructor, resulting in
hash rewriting being enabled in certain cases where it should not be.
Fixes #2801
Sometimes, "expected" can be "0", but in fact means "unknown".
This is for example the case when downloading a file while the http
server doesn't send the `Content-Length` header, like when running `nix
build` pointing to a nixpkgs checkout streamed from GitHub:
⇒ nix build -f https://github.com/NixOS/nixpkgs/archive/master.tar.gz hello
[1.8/0.0 MiB DL] downloading 'https://github.com/NixOS/nixpkgs/archive/master.tar.gz'
In that case, don't show that weird progress bar, but only the (slowly
increasing) downloaded size ("done").
⇒ nix build -f https://github.com/NixOS/nixpkgs/archive/master.tar.gz hello
[1.8 MiB DL] downloading 'https://github.com/NixOS/nixpkgs/archive/master.tar.gz'
This commit also updates fmt calls with three numbers (when something is
currently 'running' too) - I'm not sure if this can be provoked, but
showing "0" as expected doesn't make any sense, as we're obviously doing
more than nothing.
For text files it is possible to do it like so:
`builtins.hashString "sha256" (builtins.readFile /tmp/a)`
but that doesn't work for binary files.
With builtins.hashFile any kind of file can be conveniently hashed.
To determine which seccomp filters to install, we were incorrectly
using settings.thisSystem, which doesn't denote the actual system when
--system is used.
Fixes #2791.
Scanning of /proc/<pid>/{exe,cwd} was broken because '{memory:' was
prepended twice. Also, get rid of the whole '{memory:...}' thing
because it's unnecessary, we can just list the file in /proc directly.
This new structure makes more sense as there may be many sources rooting
the same store path. Many profiles can reference the same path but this
is even more true with /proc/<pid>/maps where distinct pids can and
often do map the same store path.
This implementation is also more efficient as the `Roots` map contains
only one entry per rooted store path.
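Schematically, the new shape is something like this (types are illustrative, not the actual declarations):

  #include <map>
  #include <set>
  #include <string>

  // One map entry per rooted store path, with every source that roots it
  // (profiles, /proc/<pid>/maps, ...) collected in a set.
  using Path = std::string;
  using Roots = std::map<Path, std::set<std::string>>;

  int main()
  {
      Roots roots;
      roots["/nix/store/...-glibc-2.27"].insert("/home/alice/.nix-profile");
      roots["/nix/store/...-glibc-2.27"].insert("/proc/1234/maps");
  }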
It could happen that the local builder matches the system but lacks some features.
Currently this results in a failure.
The fix gracefully excludes the local builder from the set of available builders for derivations that require the feature, so the derivation is built on remote builders only (as though it had an incompatible system, like `aarch64-linux` when the local machine is x86).
This reverts commit a0ef21262f. This
doesn't work in 'nix run' and nix-shell because setns() fails in
multithreaded programs, and Boehm GC mark threads are uncancellable.
Fixes #2646.
Inside a derivation, exportReferencesGraph already provides a way to
dump the Nix database for a specific closure. On the command line,
--dump-db gave us the same information, but only for the entire Nix
database at once.
With this change, one can now pass a list of paths to --dump-db to get
the Nix database dumped for just those paths. (The user is responsible
for ensuring this is a closure, like for --export).
Among other things, this is useful for deploying a closure to a new
host without using --import/--export; one can use tar to transfer the
store paths, and --dump-db/--load-db to transfer the validity
information. This is useful if the new host doesn't actually have Nix
yet, and the closure that is being deployed itself contains Nix.
Previously, plain derivation paths in the string context (e.g. those
that arose from builtins.storePath on a drv file, not those that arose
from accessing .drvPath of a derivation) were treated somewhat like
derivaiton paths derived from .drvPath, except their dependencies
weren't recursively added to the input set. With this change, such
plain derivation paths are simply treated as paths and added to the
source inputs set accordingly, simplifying context handling code and
removing the inconsistency. If drvPath-like behavior is desired, the
.drv file can be imported and then .drvPath can be accessed.
This is a backwards-incompatibility, but storePath is never used on
drv files within nixpkgs and almost never used elsewhere.
Trying to fetch refs that are not in refs/heads currently fails because
it looks for refs/heads/refs/foo instead of refs/foo.
e.g.
  builtins.fetchGit {
    url = https://github.com/NixOS/nixpkgs.git;
    ref = "refs/pull/1024/head";
  }
SRI hashes (https://www.w3.org/TR/SRI/) combine the hash algorithm and
a base-64 hash. This allows more concise and standard hash
specifications. For example, instead of
  import <nix/fetchurl.nl> {
    url = https://nixos.org/releases/nix/nix-2.1.3/nix-2.1.3.tar.xz;
    sha256 = "5d22dad058d5c800d65a115f919da22938c50dd6ba98c5e3a183172d149840a4";
  };
you can write
  import <nix/fetchurl.nl> {
    url = https://nixos.org/releases/nix/nix-2.1.3/nix-2.1.3.tar.xz;
    hash = "sha256-XSLa0FjVyADWWhFfkZ2iKTjFDda6mMXjoYMXLRSYQKQ=";
  };
In fixed-output derivations, the outputHashAlgo is no longer mandatory
if outputHash specifies the hash (either as an SRI or in the old
"<type>:<hash>" format).
'nix hash-{file,path}' now print hashes in SRI format by default. I
also reverted them to use SHA-256 by default because that's what we're
using most of the time in Nixpkgs.
Suggested by @zimbatm.
Use the same output ordering and format everywhere.
This is such a common issue that we trade the single-line error message for
more readability.
Old message:
```
fixed-output derivation produced path '/nix/store/d4nw9x2sy9q3r32f3g5l5h1k833c01vq-example.com' with sha256 hash '08y4734bm2zahw75b16bcmcg587vvyvh0n11gwiyir70divwp1rm' instead of the expected hash '1xzwnipjd54wl8g93vpw6hxnpmdabq0wqywriiwmh7x8k0lvpq5m'
```
New message:
```
hash mismatch in fixed-output derivation '/nix/store/d4nw9x2sy9q3r32f3g5l5h1k833c01vq-example.com':
wanted: sha256:1xzwnipjd54wl8g93vpw6hxnpmdabq0wqywriiwmh7x8k0lvpq5m
got: sha256:08y4734bm2zahw75b16bcmcg587vvyvh0n11gwiyir70divwp1rm
```
Without this information, the content-addressable state and hashes are
lost after the first request. This causes signatures to be required for
everything, even though the path could be verified without signing.
This enables using http for S3 requests, for debugging or for
implementations that don't have https configured. This is not a problem
for binary caches since they should not contain sensitive information.
Both package signatures and AWS auth already protect against tampering.
The goal is to support libeditline AND libreadline and let the user
decide at compile time which one to use.
Add a compile time option to use libreadline instead of
libeditline. If compiled against libreadline, completion functionality
is lost because of an incompatibility between libeditline's and
libreadline's completion functions. Completion with libreadline is
possible and can be added later.
To use libreadline instead of libeditline, the environment
variables 'EDITLINE_LIBS' and 'EDITLINE_CFLAGS' have to be set
during the ./configure step.
Example:
EDITLINE_LIBS="/usr/lib/x86_64-linux-gnu/libhistory.so /usr/lib/x86_64-linux-gnu/libreadline.so"
EDITLINE_CFLAGS="-DREADLINE"
The reason for this change is that, for example, on Debian three
different editline libraries already exist, but none of them is compatible with the
flavor used by nix. My hope is that with this change it will be
easier to port nix to systems that already have libreadline available.
Previously, config would only be read from XDG_CONFIG_HOME. This change
allows reading config from additional directories, which enables e.g.
per-project binary caches or chroot stores with the help of direnv.
The use of TransferManager has several issues, including that it
doesn't allow setting a Content-Encoding without a patch, and it
doesn't handle exceptions in worker threads (causing termination on
memory allocation failure).
Fixes #2493.
This commit partially reverts 48662d151b. When
copying from an older store (in my case a store running Nix 1.11.7), nix would
throw errors about there being no hash. This is fixed by recalculating the hash.
In structured-attributes derivations, you can now specify per-output
checks such as:
  outputChecks."out" = {
    # The closure of 'out' must not be larger than 256 MiB.
    maxClosureSize = 256 * 1024 * 1024;
    # It must not refer to C compiler or to the 'dev' output.
    disallowedRequisites = [ stdenv.cc "dev" ];
  };
  outputChecks."dev" = {
    # The 'dev' output must not be larger than 128 KiB.
    maxSize = 128 * 1024;
  };
Also fixed a bug in allowedRequisites that caused it to ignore
self-references.
This prints the references graph of the store paths in the GraphML
format [1]. The GraphML format is supported by several graph tools
such as the Python NetworkX library or the Apache TinkerPop project.
[1] http://graphml.graphdrawing.org
Since its superclass RemoteStore::Connection contains 'to' and 'from'
fields that refer to the file descriptor maintained in the subclass,
it was possible for the flush() call in Connection::~Connection() to
write to a closed file descriptor (or worse, a file descriptor now
referencing another file). So make sure that the file descriptor
survives 'to' and 'from'.
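The underlying C++ rule can be sketched in isolation (illustrative types, not the actual RemoteStore code): members are destroyed in reverse declaration order, so anything whose destructor still uses the file descriptor must be destroyed before it, i.e. declared after it.

  #include <cstdio>

  struct Fd
  {
      int fd = -1;
      ~Fd() { std::puts("closing fd"); }
  };

  struct BufferedStream
  {
      Fd * fd;
      ~BufferedStream() { std::puts("flushing to fd"); }  // needs fd to be open
  };

  struct Connection
  {
      Fd fd;                       // declared first => destroyed last
      BufferedStream to{&fd};
      BufferedStream from{&fd};
  };

  int main()
  {
      Connection c;
      // On destruction: "flushing to fd" (from), "flushing to fd" (to),
      // then "closing fd".
  }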
This is primarily because Derivation::{can,will}BuildLocally() depends
on attributes like preferLocalBuild and requiredSystemFeatures, but it
can't handle them properly because it doesn't have access to the
structured attributes.
E.g. __noChroot and allowedReferences now work correctly. We also now
check that the attribute type is correct. For instance, instead of
allowedReferences = "out";
you have to write
allowedReferences = [ "out" ];
Fixes #2453.
This meant that making a typo in an s3:// URI would cause a bucket to
be created. Also it didn't handle eventual consistency very well. Now
it's up to the user to create the bucket.
Tools which re-exec `$SHELL` or `$0` or `basename $SHELL` or even just
`bash` will otherwise get the non-interactive bash, providing a
broken shell for the same reasons described in
https://github.com/NixOS/nixpkgs/issues/27493.
Extends c94f3d5575
Calculating roots seems significantly slower on darwin compared to
linux. Checking for /profile/ links could show some false positives but
should still catch most issues.
* Don't wait forever for the client to remove data from the
buffer. This does mean that the buffer can grow without bounds
(e.g. when downloading is faster than writing to disk), but meh.
* Don't hold the state lock while calling the sink. The sink could
take any amount of time to process the data (in particular when it's
actually a coroutine), so we don't want to block the download
thread. (A sketch of this pattern appears at the end of this note.)
In particular this causes copyStorePath() from HttpBinaryCacheStore to
only start a download if needed. E.g. if the destination LocalStore
goes to sleep waiting for the path lock and another process creates
the path, then LocalStore::addToStore() will never read from the
source so we don't have to do the download.
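A minimal sketch of the second point above (simplified, not the actual code): take the buffered data out while holding the lock, and invoke the sink only after the lock has been released.

  #include <functional>
  #include <mutex>
  #include <string>
  #include <utility>

  struct State
  {
      std::mutex mutex;
      std::string buffer;          // data received but not yet consumed
  };

  void drainBuffer(State & state, const std::function<void(const std::string &)> & sink)
  {
      std::string chunk;
      {
          std::lock_guard<std::mutex> lock(state.mutex);
          std::swap(chunk, state.buffer);   // take the data under the lock
      }
      if (!chunk.empty())
          sink(chunk);                      // call the sink without the lock held
  }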