This fixes a class of crashes and introduces ptr<T> to make the
code robust against this failure mode going forward.
Thanks regnat for the idea of a ref<T> without overhead!
Closes #4895. Closes #4893. Closes #5127. Closes #5113.
Previously, despite having a boolean that tracked initialization, the
decode characters were "calculated" every single time a base64
string was decoded.
With this change we only initialize the decode array once in a
thread-safe manner.
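A minimal sketch of that one-time, thread-safe setup (illustrative only, not Nix's actual base64 code): C++11 guarantees that a function-local static is initialized exactly once, even with concurrent callers.
```
#include <array>
#include <string>

static const std::string base64Chars =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

// Build the reverse lookup table on first use only; later calls just return it.
static const std::array<signed char, 256> & base64DecodeTable()
{
    static const std::array<signed char, 256> table = [] {
        std::array<signed char, 256> t;
        t.fill(-1);                                    // -1 marks invalid input bytes
        for (size_t i = 0; i < base64Chars.size(); ++i)
            t[(unsigned char) base64Chars[i]] = (signed char) i;
        return t;
    }();                                               // runs exactly once, thread-safely
    return table;
}
```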
Otherwise I get a compiler error when building for NetBSD:
src/libutil/util.cc: In function 'void nix::_deletePath(const Path&, uint64_t&)':
src/libutil/util.cc:438:17: error: base operand of '->' is not a pointer
438 | AutoCloseFD dirfd(open(dir.c_str(), O_RDONLY));
| ^~~~~
src/libutil/util.cc:439:10: error: 'dirfd' was not declared in this scope
439 | if (!dirfd) {
| ^~~~~
src/libutil/util.cc:444:17: error: 'dirfd' was not declared in this scope
444 | _deletePath(dirfd.get(), path, bytesFreed);
| ^~~~~
Fill `NIX_CONFIG` with the value of the current Nix configuration before
calling the post-build-hook.
That way the whole configuration (including any
`experimental-features`, a possible `--store` option, and so on) will
be made available to the hook.
Move the `closure` logic of `computeFSClosure` to its own (templated) function.
This doesn’t bring much by itself (except for the ability to properly
test the “closure” functionality independently from the rest), but it
allows reusing it (in particular for the realisations, which will require
a very similar closure computation).
When you have a symlink like:
/tmp -> ./private/tmp
you need to resolve ./private/tmp relative to /tmp’s dir: ‘/’. Unlike
any other path output by dirOf, / ends with a slash. We don’t want
trailing slashes here since we will append another slash in the next
component, so clear `s` as we would if it were a symlink to an absolute
path.
This should fix at least part of the issue in
https://github.com/NixOS/nix/issues/4822; it will need confirmation that
it actually fixes the problem before closing, though.
Introduced in f3f228700a.
This function doesn't support all compression methods (i.e. 'none' and
'br') so it shouldn't be exposed.
Also restore the original decompress() as a wrapper around
makeDecompressionSink().
The S3 store relies on being able to decompress things with an
empty method, because it just passes the value of the Content-Encoding
header directly to decompress().
If the file is not compressed, then this will cause the compression
routine to get confused.
This caused NixOS/nixpkgs#120120.
If there were many top-level goals (which are not destroyed until the
very end), commands like
$ nix copy --to 'ssh://localhost?remote-store=/tmp/nix' \
/run/current-system --no-check-sigs --substitute-on-destination
could fail with "Too many open files". So now we do some explicit
cleanup from amDone(). It would be cleaner to separate goals from
their temporary internal state, but that would be a bigger refactor.
According to RFC4007[1], IPv6 addresses can have a so-called zone_id
separated from the actual address with `%` as delimiter. In contrast to
Nix 2.3, the version on `master` doesn't recognize it as such:
$ nix ping-store --store ssh://root@fe80::1%18 --experimental-features nix-command
warning: 'ping-store' is a deprecated alias for 'store ping'
error: --- Error ----------------------------------------------------------------- nix
don't know how to open Nix store 'ssh://root@fe80::1%18'
I modified the IPv6 match-regex accordingly to optionally detect this
part of the address. As we don't seem to do anything special with it, I
decided to leave it as part of the URL for now.
Fixes #4490
[1] https://tools.ietf.org/html/rfc4007
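A simplified sketch of such a pattern (illustrative; Nix's actual regex in `url-parts.hh` is more involved): the zone part is just an optional `%<id>` suffix on the address.
```
#include <iostream>
#include <regex>
#include <string>

int main()
{
    // Hex-and-colon body with an optional RFC 4007 "%<zone_id>" suffix.
    std::regex ipv6Addr("[0-9a-fA-F:]+(?:%[0-9a-zA-Z]+)?");
    for (std::string s : {"fe80::1", "fe80::1%18", "fe80::1%eth0", "not-an-address"})
        std::cout << s << " -> "
                  << (std::regex_match(s, ipv6Addr) ? "match" : "no match") << "\n";
}
```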
This is probably what most people expect it to do. Fixes#3781.
There is a new command 'nix flake lock' that has the old behaviour of
'nix flake update', i.e. it just adds missing lock file entries unless
overridden using --update-input.
This is technically a breaking change, since attempting to set plugin
files after the first non-flag argument will now throw an error. This
is acceptable given the relative lack of stability in a plugin
interface and the need to tie the knot somewhere once plugins can
actually define new subcommands.
When performing distributed builds of machine learning packages, it
would be nice if builders without the required SIMD instructions could
be excluded as build nodes.
Since x86_64 has accumulated a large number of different instruction
set extensions, listing all possible extensions would be unwieldy.
AMD, Intel, Red Hat, and SUSE have recently defined four different
microarchitecture levels that are now part of the x86-64 psABI
supplement and will be used in glibc 2.33:
https://gitlab.com/x86-psABIs/x86-64-ABI
https://lwn.net/Articles/844831/
This change uses libcpuid to detect CPU features and then uses them to
add the supported x86_64 levels to the additional system types. For
example on a Ryzen 3700X:
$ ~/aps/bin/nix -vv --version | grep "Additional system"
Additional system types: i686-linux, x86_64-v1-linux, x86_64-v2-linux, x86_64-v3-linux
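A rough sketch of the detection with libcpuid (the feature checks here are abridged and illustrative; the full per-level requirements come from the psABI supplement):
```
#include <libcpuid.h>
#include <iostream>
#include <string>
#include <vector>

int main()
{
    struct cpu_raw_data_t raw;
    struct cpu_id_t data;
    if (cpuid_get_raw_data(&raw) < 0 || cpu_identify(&raw, &data) < 0)
        return 1;                                       // CPU detection failed

    auto has = [&](int feature) { return data.flags[feature] != 0; };

    // Abridged checks: the real v2/v3 lists include more features.
    std::vector<std::string> levels = { "x86_64-v1" };
    if (has(CPU_FEATURE_SSE4_2) && has(CPU_FEATURE_POPCNT) && has(CPU_FEATURE_CX16))
        levels.push_back("x86_64-v2");
    if (levels.size() == 2 && has(CPU_FEATURE_AVX2) && has(CPU_FEATURE_FMA3)
        && has(CPU_FEATURE_BMI2))
        levels.push_back("x86_64-v3");

    for (auto & l : levels)
        std::cout << l << "-linux\n";
}
```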
I tested a trivial program that called kill(-1, SIGKILL), which was
run as the only process for an unprivileged user, on Linux and
FreeBSD. On Linux, kill reported success, while on FreeBSD it failed
with EPERM.
POSIX says:
> If pid is -1, sig shall be sent to all processes (excluding an
> unspecified set of system processes) for which the process has
> permission to send that signal.
and
> The kill() function is successful if the process has permission to
> send sig to any of the processes specified by pid. If kill() fails,
> no signal shall be sent.
and
> [EPERM]
> The process does not have permission to send the signal to any
> receiving process.
My reading of this is that kill(-1, ...) may fail with EPERM when
there are no other processes to kill (since the current process is
ignored). Since kill(-1, ...) only attempts to kill processes the
user has permission to kill, the EPERM can't mean that we tried to
kill something we weren't allowed to, so it should be fine to
interpret EPERM the same as success here for any POSIX-compliant
system.
This fixes an issue that Mic92 encountered[1] when he tried to review a
Nixpkgs PR on FreeBSD.
[1]: https://github.com/NixOS/nixpkgs/pull/81459#issuecomment-606073668
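A hedged sketch of that interpretation (not Nix's exact killUser() code; it assumes the calling process already runs as the target user):
```
#include <signal.h>
#include <cerrno>
#include <cstring>
#include <stdexcept>
#include <string>

// Sweep all processes the current user is permitted to signal.
static void killAll()
{
    if (kill(-1, SIGKILL) == 0) return;            // at least one process was signalled
    if (errno == ESRCH || errno == EPERM) return;  // nothing (killable) left: treat as success
    throw std::runtime_error(std::string("cannot kill processes: ") + std::strerror(errno));
}
```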
It's now
at /home/eelco/Dev/nixpkgs/pkgs/applications/misc/hello/default.nix:7:7:
instead of
at: (7:7) in file: /home/eelco/Dev/nixpkgs/pkgs/applications/misc/hello/default.nix
The new format is more standard and clickable.
Changes:
* The divider lines are gone. These were in practice a bit confusing,
in particular with --show-trace or --keep-going, since then there
were multiple lines, suggesting a start/end which wasn't the case.
* Instead, multi-line error messages are now indented to align with
the prefix (e.g. "error: ").
* The 'description' field is gone since we weren't really using it.
* 'hint' is renamed to 'msg' since it really wasn't a hint.
* The error is now printed *before* the location info.
* The 'name' field is no longer printed since most of the time it
wasn't very useful since it was just the name of the exception (like
EvalError). Ideally in the future this would be a unique, easily
googleable error ID (like rustc).
* "trace:" is now just "…". This assumes error contexts start with
something like "while doing X".
Example before:
error: --- AssertionError ---------------------------------------------------------------------------------------- nix
at: (7:7) in file: /home/eelco/Dev/nixpkgs/pkgs/applications/misc/hello/default.nix
6|
7| x = assert false; 1;
| ^
8|
assertion 'false' failed
----------------------------------------------------- show-trace -----------------------------------------------------
trace: while evaluating the attribute 'x' of the derivation 'hello-2.10'
at: (192:11) in file: /home/eelco/Dev/nixpkgs/pkgs/stdenv/generic/make-derivation.nix
191| // (lib.optionalAttrs (!(attrs ? name) && attrs ? pname && attrs ? version)) {
192| name = "${attrs.pname}-${attrs.version}";
| ^
193| } // (lib.optionalAttrs (stdenv.hostPlatform != stdenv.buildPlatform && !dontAddHostSuffix && (attrs ? name || (attrs ? pname && attrs ? version)))) {
Example after:
error: assertion 'false' failed
at: (7:7) in file: /home/eelco/Dev/nixpkgs/pkgs/applications/misc/hello/default.nix
6|
7| x = assert false; 1;
| ^
8|
… while evaluating the attribute 'x' of the derivation 'hello-2.10'
at: (192:11) in file: /home/eelco/Dev/nixpkgs/pkgs/stdenv/generic/make-derivation.nix
191| // (lib.optionalAttrs (!(attrs ? name) && attrs ? pname && attrs ? version)) {
192| name = "${attrs.pname}-${attrs.version}";
| ^
193| } // (lib.optionalAttrs (stdenv.hostPlatform != stdenv.buildPlatform && !dontAddHostSuffix && (attrs ? name || (attrs ? pname && attrs ? version)))) {
In `nixStable` (2.3.7 to be precise) it's possible to connect to stores
using an IPv6 address:
nix ping-store --store ssh://root@2001:db8::1
This is also useful for `nixops(1)` where you could specify an IPv6
address in `deployment.targetHost`.
However, this behavior is broken on `nixUnstable` and fails with the
following error:
$ nix store ping --store ssh://root@2001:db8::1
don't know how to open Nix store 'ssh://root@2001:db8::1'
This happened because `openStore` from `libstore` uses the `parseURL`
function from `libfetchers` which expects a valid URL as defined in
RFC2732. However, this is unsupported by `ssh(1)`:
$ nix store ping --store 'ssh://root@[2001:db8::1]'
cannot connect to 'root@[2001:db8::1]'
This patch now allows both ways of specifying a store, `root@2001:db8::1`
and `root@[2001:db8::1]`, since the latter is useful to pass query
parameters to the remote store.
In order to achieve this, the following changes were made:
* The URL regex from `url-parts.hh` now allows an IPv6 address in the
form `2001:db8::1` and also `[2001:db8::1]`.
* In `libstore`, a new function named `extractConnStr` ensures that a
proper URL is passed to e.g. `ssh(1)`:
* If a URL looks like either `[2001:db8::1]` or `root@[2001:db8::1]`,
the brackets will be removed using a regex. No additional validation
is done here as only strings parsed by `parseURL` are expected.
* In any other case, the string will be left untouched.
* The rules above only apply to `LegacySSHStore` and `SSHStore` (a.k.a.
`ssh://` and `ssh-ng://`).
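A simplified sketch of that bracket-stripping step (illustrative; not the actual `extractConnStr` implementation):
```
#include <iostream>
#include <regex>
#include <string>

// Turn "root@[2001:db8::1]" into "root@2001:db8::1" before handing it to ssh(1);
// anything that doesn't look bracketed is left untouched.
static std::string stripBrackets(const std::string & connStr)
{
    static const std::regex bracketed("(.*@)?\\[([0-9a-fA-F:]+)\\]");
    std::smatch m;
    if (std::regex_match(connStr, m, bracketed))
        return m[1].str() + m[2].str();
    return connStr;
}

int main()
{
    std::cout << stripBrackets("root@[2001:db8::1]") << "\n"; // root@2001:db8::1
    std::cout << stripBrackets("[2001:db8::1]") << "\n";      // 2001:db8::1
    std::cout << stripBrackets("example.org") << "\n";        // example.org
}
```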
Unresolved questions:
* I'm not really sure whether we want to allow both variants of IPv6
addresses in the URL parser. However it should be noted that both seem
to be possible according to RFC2732:
> This document includes an update to the generic syntax for Uniform
> Resource Identifiers defined in RFC 2396 [URL]. It defines a syntax
> for IPv6 addresses and allows the use of "[" and "]" within a URI
> explicitly for this reserved purpose.
* Currently, it's not supported to specify a port number after the
hostname; however, it seems as if this is not really supported by the URL
parser. Hence, this is probably out of scope here.
Using fallocate() to preallocate file space does more harm than good:
- it breaks compression on btrfs
- it has been called "not the right thing to do" by XFS developers
(because the delayed allocation that most filesystems implement leads to smarter
allocation than what the filesystem has to do if we fallocate files up front)
Crucially this introduces BoehmGCStackAllocator, but it also
adds a bunch of wiring to avoid making libutil depend on bdw-gc.
Part of the solutions for #4178 and #4200
This removes the extra-substituters and extra-sandbox-paths settings
and instead makes every array setting extensible by setting
"extra-<name> = <value>" in the configuration file or passing
"--<name> <value>" on the command line.
Make nix output completions in the form `completion\tdescription`.
This can't be used by bash (AFAIK), but other shells like zsh or fish
can display it alongside the completion choices.
`nix flake info` calls the github 'commits' API, which requires
authorization when the repository is private. Currently this request
fails with a 404.
This commit adds an authorization header when calling the 'commits' API.
It also changes the way that the 'tarball' API authenticates, moving the
user's token from a query parameter into the Authorization header.
The query parameter method was recently deprecated and will be disallowed
in November 2020. Using it today triggers a warning email.
It is apparently required for using `toJSONObject()`, which we do inside
the header file (because it's in a template).
This was accidentally working when building Nix itself (presumably because
`config.hh` was always included after `nlohmann/json.hpp`) but caused a
(pretty dirty) build failure in the perl bindings package.
The new error format is pretty nice from a UX point of view; however,
it's fairly hard to parse the output, e.g. for editor plugins such as
vim-ale[1] that use `nix-instantiate --parse` to determine syntax errors in
Nix expression files.
This patch extends the `internal-json` logger by adding the fields
`line`, `column` and `file` to easily locate an error in a file, and the
field `raw_msg`, which contains the error message itself without
code lines and additional helpers.
Example output may look like this:
```
[nix-shell]$ ./inst/bin/nix-instantiate ~/test.nix --log-format minimal
{"action":"msg","column":1,"file":"/home/ma27/test.nix","level":0,"line":4,"raw_msg":"syntax error, unexpected IF, expecting $end","msg":"<full error-msg with code-lines etc>"}
```
[1] https://github.com/dense-analysis/ale
I got it to just become `LocalStore::addToStoreFromDump`, cleanly taking
a store and then doing nothing too fancy with it.
`LocalStore::addToStore(...Path...)` is now just a simple wrapper with a
bare-bones sinkToSource of the right dump command.
The `m` acts as a termination symbol when declaring graphics. Because
of this, the `;1m` doesn't have any effect and is printed directly to
the console:
```
$ nix repl
> builtins.fetchGit { /* ... */ }
{ outPath = "/nix/store/s0f0iz4a41cxx2h055lmh6p2d5k5bc6r-source"; rev = "e73e45b723a9a6eecb98bd5f3df395d9ab3633b6"; revCount = ;1m428; shortRev = "e73e45b"; submodules = ;1mfalse; }
```
Introduced by 6403508f5a.
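A small illustration of the problem (assuming an ANSI-capable terminal): the first `m` closes an SGR escape sequence, so anything appended after it is treated as plain text.
```
#include <iostream>

int main()
{
    std::cout << "\033[35;1m" << "one escape: bold magenta" << "\033[0m\n";
    std::cout << "\033[35m" << ";1m" << " <- the ';1m' is printed literally" << "\033[0m\n";
}
```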
On nix-env -qa -f '<nixpkgs>', this reduces maximum RSS by 20970 KiB
and runtime by 0.8%. This is mostly because we're not parsing the hash
part as a hash anymore (just validating that it consists of base-32
characters).
Also, replace storePathToHash() by StorePath::hashPart().
Needed so that we can include it as a logger in loggers.cc without
adding a dependency on nix
This also requires moving names.hh to libutil to prevent a circular
dependency between libmain and libexpr
Make the printing of the build logs systematically go through the
logger, and replicate the behavior of `no-build-output` by having two
different loggers (one that prints the build logs and one that doesn't)
Add a new `--log-format` cli argument to change the format of the logs.
The possible values are
- raw (the default one for old-style commands)
- bar (the default one for new-style commands)
- bar-with-logs (equivalent to `--print-build-logs`)
- internal-json (the internal machine-readable json format)
Instead, `Hash` uses `std::optional<HashType>`. In the future, we may
also make `Hash` itself require a known hash type, encouraging people to
use `std::optional<Hash>` instead.
The previous regex was too strict and did not match what git was allowing. It
could lead to `fetchGit` not accepting valid branch names, even though they
exist in a repository (for example, branch names containing `/`, which are
pretty standard, like `release/1.0` branches).
The new regex defines what a branch name should **NOT** contain. It takes the
definitions from `refs.c` in https://github.com/git/git and the `git help
check-ref-format` page.
This change also introduces a test for ref name validity checking, which
compares the result from Nix with the result of `git check-ref-format --branch`.
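A heavily simplified sketch of the "reject what is forbidden" approach (illustrative only; git's real rules in `refs.c` are more extensive):
```
#include <iostream>
#include <regex>
#include <string>

// Reject names containing characters/sequences git forbids; accept the rest,
// so slashes as in "release/1.0" are fine.
static bool isValidBranchName(const std::string & name)
{
    static const std::regex forbidden(
        "(^\\.)|(\\.\\.)|([[:cntrl:] ~^:?*\\[\\\\])|(/$)|(\\.lock$)|(@\\{)");
    return !name.empty() && !std::regex_search(name, forbidden);
}

int main()
{
    for (std::string n : {"release/1.0", "main", "feat..x", "bad name", "refs/heads/"})
        std::cout << n << " -> " << (isValidBranchName(n) ? "ok" : "rejected") << "\n";
}
```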
This moves the actual parsing of configuration contents into applyConfig
which applyConfigFile is then going to call. By changing this we can now
test the configuration file parsing without actually creating a file on
disk.
The raw stderr output isn't logged anymore so the build logs need to be
printed by the default logger in order for the old commands like
nix-build to still show build output.
When used with `readFile`, we have a pretty good estimate of the file
size, so `reserve` that in the `string`. This will save some allocation
/ copy while the string is growing.
This closes #3026 by allowing `builtins.readFile` to read a file with a
wrongly reported file size; for example, files in `/proc` may report a
file size of 0. Reading files in `/proc` is not a good enough motivation by
itself, but I do think it just makes Nix more robust by allowing more files
to be read. In particular, I consider the previous behavior to be
dangerous because Nix was silently reading truncated files. Examples
of file systems which incorrectly report file sizes are network file
systems or dynamic file systems (for performance reasons, a dynamic file
system such as FUSE may generate the content of a file on demand).
```
nix-repl> builtins.readFile "/proc/version"
""
```
With this commit:
```
nix-repl> builtins.readFile "/proc/version"
"Linux version 5.6.7 (nixbld@localhost) (gcc version 9.3.0 (GCC)) #1-NixOS SMP Thu Apr 23 08:38:27 UTC 2020\n"
```
Here is a summary of the behavior changes:
- If the reported size was smaller, the previous implementation
silently returned truncated file content. The new implementation
returns the correct file content.
- If a file had a bigger reported file size, the previous implementation
failed with an exception, while the new implementation returns the
correct file content. This change of behavior is consistent with this pull
request.
Open questions
- The behavior is unchanged for correctly reported file sizes; however,
performance may vary because it uses the more complex sink interface.
Considering that sinks are used a lot, I don't think this impacts
performance much.
- `builtins.readFile` on an infinite file, such as `/dev/random`, may
fill up memory.
- It does not support adding files to the store, such as `${/proc/version}`.
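A minimal sketch of the underlying idea (Nix's real implementation goes through its Sink/Source abstractions): use the reported size only as a `reserve` hint and keep reading until EOF.
```
#include <fstream>
#include <iostream>
#include <stdexcept>
#include <string>
#include <sys/stat.h>

static std::string readWholeFile(const std::string & path)
{
    std::ifstream in(path, std::ios::binary);
    if (!in) throw std::runtime_error("cannot open " + path);

    std::string result;
    struct stat st {};
    if (stat(path.c_str(), &st) == 0 && st.st_size > 0)
        result.reserve(st.st_size);                    // a hint, not a limit

    char buf[64 * 1024];
    while (in.read(buf, sizeof buf) || in.gcount() > 0)
        result.append(buf, in.gcount());               // append whatever was actually read
    return result;
}

int main()
{
    std::cout << readWholeFile("/proc/version");       // st_size is 0, content still complete
}
```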
Suppose I have a path /nix/store/[hash]-[name]/a/a/a/a/a/[...]/a,
long enough that everything after "/nix/store/" is longer than 4096
(PATH_MAX) bytes.
Nix will happily allow such a path to be inserted into the store,
because it doesn't look at all the nested structure. It just cares
about the /nix/store/[hash]-[name] part. But, when the path is deleted,
we encounter a problem. Nix will move the path to /nix/store/trash, but
then when it's trying to recursively delete the trash directory, it will
at some point try to unlink
/nix/store/trash/[hash]-[name]/a/a/a/a/a/[...]/a. This will fail,
because the path is too long. After this has failed, any store deletion
operation will never work again, because Nix needs to delete the trash
directory before recreating it to move new things to it. (I assume this
is because otherwise a path being deleted could already exist in the
trash, and then moving it would fail.)
This means that if I can trick somebody into just fetching a tarball
containing a path of the right length, they won't be able to delete
store paths or garbage collect ever again, until the offending path is
manually removed from /nix/store/trash. (And even fixing this manually
is quite difficult if you don't understand the issue, because the
absolute path that Nix says it failed to remove is also too long for
rm(1).)
This patch fixes the issue by making Nix's recursive delete operation
use unlinkat(2). This function takes a relative path and a directory
file descriptor. We ensure that the relative path is always just the
name of the directory entry, and therefore its length will never exceed
255 bytes. This means that it will never even come close to PATH_MAX,
and Nix will therefore be able to handle removing arbitrarily deep
directory hierarchies.
Since the directory file descriptor is used for recursion after being
used in readDirectory, I made a variant of readDirectory that takes an
already open directory stream, to avoid the directory being opened
multiple times. As we have seen from this issue, the less we have to
interact with paths, the better, and so it's good to reuse file
descriptors where possible.
I left _deletePath as succeeding even if the parent directory doesn't
exist, even though that feels wrong to me, because without that early
return, the linux-sandbox test failed.
Reported-by: Alyssa Ross <hi@alyssa.is>
Thanks-to: Puck Meerburg <puck@puckipedia.com>
Tested-by: Puck Meerburg <puck@puckipedia.com>
Reviewed-by: Puck Meerburg <puck@puckipedia.com>
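A hedged sketch of the unlinkat(2)-based recursion (illustrative; Nix's `_deletePath` handles more cases, such as making read-only entries deletable): every syscall only ever sees a parent directory fd plus a single name, so the depth of the tree no longer matters.
```
#include <cerrno>
#include <cstring>
#include <dirent.h>
#include <fcntl.h>
#include <stdexcept>
#include <string>
#include <sys/stat.h>
#include <unistd.h>

static void deleteRecursive(int parentFd, const std::string & name)
{
    struct stat st;
    if (fstatat(parentFd, name.c_str(), &st, AT_SYMLINK_NOFOLLOW) == -1)
        throw std::runtime_error(std::string("fstatat: ") + std::strerror(errno));

    if (S_ISDIR(st.st_mode)) {
        int fd = openat(parentFd, name.c_str(), O_RDONLY | O_DIRECTORY | O_NOFOLLOW);
        if (fd == -1) throw std::runtime_error(std::string("openat: ") + std::strerror(errno));
        DIR * dir = fdopendir(fd);                     // reuse the fd for reading entries
        if (!dir) { close(fd); throw std::runtime_error("fdopendir failed"); }
        while (struct dirent * e = readdir(dir)) {
            std::string entry = e->d_name;
            if (entry == "." || entry == "..") continue;
            deleteRecursive(dirfd(dir), entry);        // recurse with fd + basename only
        }
        closedir(dir);                                 // also closes fd
        if (unlinkat(parentFd, name.c_str(), AT_REMOVEDIR) == -1)
            throw std::runtime_error(std::string("rmdir: ") + std::strerror(errno));
    } else if (unlinkat(parentFd, name.c_str(), 0) == -1)
        throw std::runtime_error(std::string("unlink: ") + std::strerror(errno));
}

// e.g. deleteRecursive(AT_FDCWD, "some/trash/dir") removes arbitrarily deep trees.
```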
Usually this just writes to stdout, but for ProgressBar, we need to
clear the current line, write the line to stdout, and then redraw the
progress bar.
(cherry picked from commit 696c026006)
Motivation: maintain project-level configuration files.
Document the whole situation a bit better so that it corresponds to the
implementation, and add NIX_USER_CONF_FILES that allows overriding
which user files Nix will load during startup.
This provides a pluggable mechanism for defining new fetchers. It adds
a builtin function 'fetchTree' that generalizes existing fetchers like
'fetchGit', 'fetchMercurial' and 'fetchTarball'. 'fetchTree' takes a
set of attributes, e.g.
fetchTree {
type = "git";
url = "https://example.org/repo.git";
ref = "some-branch";
rev = "abcdef...";
}
The existing fetchers are just wrappers around this. Note that the
input attributes to fetchTree are the same as flake input
specifications and flake lock file entries.
All fetchers share a common cache stored in
~/.cache/nix/fetcher-cache-v1.sqlite. This replaces the ad hoc caching
mechanisms in fetchGit and download.cc (e.g. ~/.cache/nix/{tarballs,git-revs*}).
This also adds support for Git worktrees (c169ea5904).
This allows overriding the priority of substituters, e.g.
$ nix-store --store ~/my-nix/ -r /nix/store/df3m4da96d84ljzxx4mygfshm1p0r2n3-geeqie-1.4 \
--substituters 'http://cache.nixos.org?priority=100 daemon?priority=10'
Fixes #3264.
Most functions now take a StorePath argument rather than a Path (which
is just an alias for std::string). The StorePath constructor ensures
that the path is syntactically correct (i.e. it looks like
<store-dir>/<base32-hash>-<name>). Similarly, functions like
buildPaths() now take a StorePathWithOutputs, rather than abusing Path
by adding a '!<outputs>' suffix.
Note that the StorePath type is implemented in Rust. This involves
some hackery to allow Rust values to be used directly in C++, via a
helper type whose destructor calls the Rust type's drop()
function. The main issue is the dynamic nature of C++ move semantics:
after we have moved a Rust value, we should not call the drop function
on the original value. So when we move a value, we set the original
value to bitwise zero, and the destructor only calls drop() if the
value is not bitwise zero. This should be sufficient for most types.
Also lots of minor cleanups to the C++ API to make it more modern
(e.g. using std::optional and std::string_view in some places).
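A hedged sketch of that guard (illustrative names, not the actual C++/Rust glue): the destructor calls the Rust drop function only if the value hasn't been zeroed out by a move.
```
#include <cstring>

// Assumes T is a trivially copyable FFI struct handed over from Rust.
template<typename T, void (*rustDrop)(T *)>
struct RustValue
{
    T value;

    explicit RustValue(T v) : value(v) { }
    RustValue(const RustValue &) = delete;

    RustValue(RustValue && other) noexcept : value(other.value)
    {
        std::memset(&other.value, 0, sizeof(T));        // mark the moved-from value as dead
    }

    ~RustValue()
    {
        unsigned char zeros[sizeof(T)] = {};
        if (std::memcmp(&value, zeros, sizeof(T)) != 0) // only live values get dropped
            rustDrop(&value);
    }
};
```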
https://hydra.nixos.org/build/107467517
Seems that on i686-linux, gcc and rustc disagree on how to return
1-word structs: gcc has the caller pass a pointer to the result, while
rustc has the callee return the result in a register. Work around this
by using a bare pointer.
Also, fetchGit now runs in O(1) memory since we pipe the output of
'git archive' directly into unpackTarball() (rather than first reading
it all into memory).