The new experimental feature 'cgroups' enables the use of cgroups for
all builds. This allows better containment, makes it possible to set
resource limits, and provides some build statistics.
- call close explicitly in writeFile to prevent the close exception
from being ignored
- fsync after writing schema file to flush data to disk
- fsync schema file parent to flush metadata to disk
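Taken together, these form the standard durability pattern; a minimal
sketch under POSIX assumptions (`writeFileDurably` is a hypothetical
name for illustration, not Nix's API):

  #include <cerrno>
  #include <cstring>
  #include <fcntl.h>
  #include <stdexcept>
  #include <string>
  #include <unistd.h>

  static void writeFileDurably(const std::string & path, const std::string & data)
  {
      int fd = open(path.c_str(), O_WRONLY | O_CREAT | O_TRUNC, 0666);
      if (fd == -1) throw std::runtime_error("open failed");
      if (write(fd, data.data(), data.size()) != (ssize_t) data.size()
          || fsync(fd) == -1) {        // flush the file data to disk
          close(fd);
          throw std::runtime_error("write/fsync failed");
      }
      if (close(fd) == -1)             // close explicitly: don't ignore its error
          throw std::runtime_error("close failed");
      // fsync the parent directory to flush the new entry (the metadata)
      auto slash = path.rfind('/');
      std::string dir = slash == std::string::npos ? "."
          : slash == 0 ? "/" : path.substr(0, slash);
      if (int dirFd = open(dir.c_str(), O_RDONLY); dirFd != -1) {
          fsync(dirFd);
          close(dirFd);
      }
  }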
https://github.com/NixOS/nix/issues/7064
This was caused by -L calling setLogFormat() again, which caused the
creation of a new progress bar without destroying the old one. So we
had two progress bars clobbering each other.
We should change 'logger' to be a smart pointer, but I'll do that in a
future PR.
Fixes #6931.
Rather than copying the source directly to its destination, copy it
first to a temporary location, and finally move that temporary.
That way, the move is at least atomic from the point of view of the
destination.
The recursive copy from the stl doesn't exactly do what we need because
1. It doesn't delete things as we go
2. It doesn't keep the mtime, which changes the NARs
So re-implement it ourselves. A bit dull, but that way we have what we
want.
In `nix::rename`, if the call to `rename` fails with `EXDEV` (failure
because the source and the destination are on different filesystems),
switch to copying and removing the source.
To avoid having to re-implement the copy manually, I switched the
function to use the C++17 `filesystem` library (which has a `copy`
function that should do what we want).
Fix #6262
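For illustration, a minimal sketch of the fallback using
std::filesystem::copy (`moveFile` is a hypothetical name and error
handling is simplified; per the preceding entry, the copy was later
reimplemented to preserve mtimes):

  #include <cstdio>
  #include <cerrno>
  #include <filesystem>
  #include <system_error>

  static void moveFile(const std::filesystem::path & from,
                       const std::filesystem::path & to)
  {
      if (std::rename(from.c_str(), to.c_str()) == 0) return;
      if (errno != EXDEV)
          throw std::system_error(errno, std::generic_category(), "rename");
      // Different filesystems: copy to a temporary next to the
      // destination, then rename it into place so the final move is
      // atomic from the destination's point of view.
      auto tmp = to;
      tmp += ".tmp";
      std::filesystem::copy(from, tmp,
          std::filesystem::copy_options::recursive
          | std::filesystem::copy_options::copy_symlinks);
      std::filesystem::rename(tmp, to);
      std::filesystem::remove_all(from);
  }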
Defers completion of flake inputs until the whole command line is parsed
so that we know what flakes we need to complete the inputs of.
Previously, `nix build flake --update-input <Tab>` always behaved like
`nix build . --update-input <Tab>`.
Useful because a default `sudo` on darwin doesn't clear `$HOME`, so things like `sudo nix-channel --list`
will surprisingly return the user's channels, rather than `root`'s.
Other counterintuitive outcomes can be seen in this PR description:
https://github.com/NixOS/nix/pull/6622
To quote Eelco in #5867:
> Unfortunately we can't do
>
> evalSettings.pureEval.setDefault(false);
>
> because then we have to do the same in main.cc (where
> pureEval is set to true), and that would allow pure-eval
> to be disabled globally from nix.conf.
Instead, a command should specify that it should be impure by
default. Then, `evalSettings.pureEval` will be set to `false` unless
it's overridden by e.g. a CLI flag.
In that case it's IMHO OK to be (theoretically) able to override
`pure-eval` via `nix.conf` because it doesn't have an effect on commands
where `forceImpureByDefault` returns `false` (i.e. everything where pure
eval actually matters).
Closes #5867
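A minimal sketch of the approach, with simplified stand-in types (only
`forceImpureByDefault` is named by this change; everything else here is
illustrative):

  struct EvalSettings
  {
      bool pureEval = true;
      bool pureEvalOverridden = false; // set when e.g. a CLI flag was given
  };

  struct EvalCommand
  {
      virtual ~EvalCommand() = default;
      // Commands that are impure by nature override this to return true.
      virtual bool forceImpureByDefault() const { return false; }
  };

  void applyEvalDefaults(const EvalCommand & cmd, EvalSettings & settings)
  {
      // Only lowers the default; an explicit override still wins.
      if (cmd.forceImpureByDefault() && !settings.pureEvalOverridden)
          settings.pureEval = false;
  }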
Without the change the build fails on this week's gcc-13 snapshot as:
src/libutil/json.cc: In function 'void nix::toJSON(std::ostream&, const char*, const char*)':
src/libutil/json.cc:33:22: error: 'uint16_t' was not declared in this scope
33 | put(hex[(uint16_t(*i) >> 12) & 0xf]);
| ^~~~~~~~
src/libutil/json.cc:5:1: note: 'uint16_t' is defined in header '<cstdint>'; did you forget to '#include <cstdint>'?
4 | #include <cstring>
+++ |+#include <cstdint>
5 |
This solves the error
error: cannot connect to socket at '/nix/var/nix/daemon-socket/socket': Connection refused
on build farm systems that are loaded but operating normally.
I've seen this happen on an M1 mac running a loaded hercules-ci-agent.
Hercules CI uses multiple worker processes, which may connect to
the Nix daemon around the same time. It's not unthinkable that
the Nix daemon listening process isn't scheduled until after 6
workers try to connect, especially on a system under load with
many workers.
Is the increase safe?
The number is the number of connections that the kernel will buffer
while the listening process hasn't `accept`-ed them yet.
It did not, and will not, restrict the total number of daemon
forks that a client can create.
History
The number 5 has remained unchanged since the introduction in
nix-worker with 0130ef88ea in 2006.
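For illustration, the backlog is the second argument to listen(2); a
sketch with error handling elided (the value 64 is a placeholder, since
the exact new number isn't restated here):

  #include <cstring>
  #include <sys/socket.h>
  #include <sys/un.h>

  int makeListeningSocket(const char * path)
  {
      int fd = socket(AF_UNIX, SOCK_STREAM, 0);
      sockaddr_un addr{};
      addr.sun_family = AF_UNIX;
      strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
      bind(fd, (sockaddr *) &addr, sizeof(addr));
      // The backlog caps how many completed connections the kernel
      // queues before accept(2) collects them; it does not limit the
      // total number of connections.
      listen(fd, 64); // previously: listen(fd, 5)
      return fd;
  }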
Add a new `file` fetcher type, which will fetch a plain file over
http(s), or from the local filesystem.
Because plain `http(s)://` or `file://` urls can already correspond to
`tarball` inputs (if the path ends with a known archive extension),
the URL parsing logic is a bit convoluted in that:
- {http,https,file}:// urls will be interpreted as either a tarball or a
file input, depending on the extension of the path part (so
`https://foo.com/bar` will be a `file` input and
`https://foo.com/bar.tar.gz` a `tarball` input)
- `file+{something}://` urls will be interpreted as `file` urls (with
the `file+` part removed)
- `tarball+{something}://` urls will be interpreted as `tarball` urls (with
the `tarball+` part removed)
Fix #3785
Co-Authored-By: Tony Olagbaiye <me@fron.io>
Since a26be9f3b8, the same parser is used
to parse the result of sourcehut’s `HEAD` endpoint (coming from [git
dumb protocol]) and the output of `git ls-remote`. However, they are very
slightly different (the former doesn’t specify the current reference
since it’s implied to be `HEAD`).
Unify both, and make the parser a bit more robust and understandable (by
making it more typed and adding tests for it).
[git dumb protocol]: https://git-scm.com/book/en/v2/Git-Internals-Transfer-Protocols#_the_dumb_protocol
this slightly increases the amount of memory used for any given symbol, but this
increase is more than made up for if the symbol is referenced more than once in
the EvalState that holds it. on average every symbol should be referenced at
least twice (once to introduce a binding, once to use it), so we expect no
increase in memory on average.
symbol tables are limited to 2³² entries like position tables, and similar
arguments apply to why overflow is not likely: 2³² symbols would require as many
string instances (at 24 bytes each) and map entries (at 24 bytes or more each,
assuming that the map holds on average at most one item per bucket as the docs
say). a full symbol table would require at least 192GB of memory just for
symbols, which is well out of reach. (an ofborg eval of nixpkgs today creates
less than a million symbols!)
PosTable deduplicates origin information, so using symbols for paths is no
longer necessary. moving away from path Symbols also reduces the usage of
symbols for things that are not keys in attribute sets, which will become
important in the future when we turn symbols into indices as well.
Pos objects are somewhat wasteful as they duplicate the origin file name and
input type for each object. on files that produce more than one Pos when parsed
this is a sizeable waste of memory (one pointer per Pos). the same goes for
ptr<Pos> on 64 bit machines: parsing enough source to require 8 bytes to locate
a position would need at least 8GB of input and 64GB of expression memory. it's
not likely that we'll hit that any time soon, so we can use a uint32_t index to
locate positions instead.
nix show-config --json was serializing experimental features as ints.
nlohmann::json will automatically use these definitions to serialize
and deserialize ExperimentalFeatures.
Strictly speaking, we don't use the from_json instance yet; it's
provided for completeness and hopefully future use.
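For illustration, a sketch of one way such definitions can be provided,
using nlohmann::json's enum-serialization macro (the feature list is
abbreviated, and whether the actual change uses this macro or
hand-written to_json/from_json is not restated here):

  #include <nlohmann/json.hpp>

  enum class ExperimentalFeature { CaDerivations, Flakes, NixCommand };

  // Generates to_json/from_json so the enum round-trips as strings.
  NLOHMANN_JSON_SERIALIZE_ENUM(ExperimentalFeature, {
      {ExperimentalFeature::CaDerivations, "ca-derivations"},
      {ExperimentalFeature::Flakes, "flakes"},
      {ExperimentalFeature::NixCommand, "nix-command"},
  })

  // nlohmann::json j = ExperimentalFeature::Flakes; // serializes to "flakes"
  // auto f = j.get<ExperimentalFeature>();          // deserializes back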
This was a problem when writing a fetcher that uses e.g. sha256 hashes
for revisions. This doesn't actually do anything new, but allows for
creating such fetchers in the future (perhaps when support for Git's
SHA256 object format gains more popularity).
Saving the cwd fd didn't actually work well -- prior to this commit, the
following would happen:
: ~/w/vc/nix ; doas outputs/out/bin/nix --experimental-features 'nix-command flakes' run nixpkgs#coreutils -- --coreutils-prog=pwd
pwd: couldn't find directory entry in ‘../../../..’ with matching i-node
: ~/w/vc/nix ; doas outputs/out/bin/nix --experimental-features 'nix-command flakes' develop -c pwd
pwd: couldn't find directory entry in ‘../../../..’ with matching i-node
This doesn't work very well (maybe I'm misunderstanding the desired
implementation):
: ~/w/vc/nix ; doas outputs/out/bin/nix --experimental-features 'nix-command flakes' develop -c pwd
pwd: couldn't find directory entry in ‘../../../..’ with matching i-node
I regularly pass around simple scripts by using nix-shell as the script
interpreter, e.g. like this:
#!/usr/bin/env nix-shell
#!nix-shell -p dd_rescue coreutils bash -i bash
While this works most of the time, I recently had one occasion where it
would not and the above would result in the following:
$ sudo ./myscript.sh
bash: ./myscript.sh: No such file or directory
Note the "sudo" here, because this error only occurs if we're root.
The reason for the latter is that running Nix as root means we can
access the store directly, in which case Nix sets up a filesystem
namespace to make the store writable.
So when stracing the process, I stumbled on the following sequence:
openat(AT_FDCWD, "/proc/self/ns/mnt", O_RDONLY) = 3
unshare(CLONE_NEWNS) = 0
... later ...
getcwd("/the/real/cwd", 4096) = 14
setns(3, CLONE_NEWNS) = 0
getcwd("/", 4096) = 2
In the whole strace output there are no calls to chdir() whatsoever, so
I decided to look into the kernel source to see what else could change
directories and found this[1]:
/* Update the pwd and root */
set_fs_pwd(fs, &root);
set_fs_root(fs, &root);
The set_fs_pwd() call is roughly equivalent to a chdir() syscall and
this is called when the setns() syscall is invoked[2].
[1]: b14ffae378/fs/namespace.c (L4659)
[2]: b14ffae378/kernel/nsproxy.c (L346)
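Given that analysis, the natural fix is to remember the working
directory as a path and chdir() back after setns(); a minimal sketch
(error handling elided, not the exact Nix code):

  #include <climits>
  #include <sched.h>
  #include <unistd.h>

  void joinMountNamespace(int nsFd)
  {
      char cwd[PATH_MAX];
      getcwd(cwd, sizeof cwd);   // remember where we were
      setns(nsFd, CLONE_NEWNS);  // implicitly resets cwd and root to "/"
      chdir(cwd);                // go back, if the path exists in the new ns
  }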
Impure derivations are derivations that can produce a different result
every time they're built. Example:
  stdenv.mkDerivation {
    name = "impure";
    __impure = true; # marks this derivation as impure
    outputHashAlgo = "sha256";
    outputHashMode = "recursive";
    buildCommand = "date > $out";
  };
Some important characteristics:
* This requires the 'impure-derivations' experimental feature.
* Impure derivations are not "cached". Thus, running "nix-build" on
the example above multiple times will cause a rebuild every time.
* They are implemented similarly to CA derivations, i.e. the output is
moved to a content-addressed path in the store. The difference is
that we don't register a realisation in the Nix database.
* Pure derivations are not allowed to depend on impure derivations. In
the future fixed-output derivations will be allowed to depend on
impure derivations, thus forming an "impurity barrier" in the
dependency graph.
* When sandboxing is enabled, impure derivations can access the
network in the same way as fixed-output derivations. In relaxed
sandboxing mode, they can access the local filesystem.
The return value of BaseError::addTrace(...) is never used, and it is
error-prone: subclasses calling it get back a BaseError instead of
the subclass.
This commit changes its return value to be void.
When importing e.g. a local `nixpkgs` in a flake to test a change like

  {
    inputs.nixpkgs.url = path:/home/ma27/Projects/nixpkgs;
    outputs = /* ... */
  }

then the input is missing a `lastModified` field that's e.g. used in
`nixpkgs.lib.nixosSystem`. Due to the missing `lastModified` field, the
mtime is set to 19700101:
result -> /nix/store/b7dg1lmmsill2rsgyv2w7b6cnmixkvc1-nixos-system-nixos-22.05.19700101.dirty
With this change, the `path` fetcher now sets a `lastModified` attribute
to the `mtime`, just as is already the case in the `tarball` fetcher.
When building NixOS systems with `nixpkgs` being a `path`-input and this
patch, the output-path now looks like this:
result -> /nix/store/ld2qf9c1s98dxmiwcaq5vn9k5ylzrm1s-nixos-system-nixos-22.05.20220217.dirty
Before the change on a system with `auto-optimise-store = true`:
$ nix store gc --verbose --max 1
deleted all the paths instead of just one path (we requested a 1-byte
limit).
This happens because with `auto-optimise-store = true` every file has at
least 2 links: the file itself and a link in the /nix/store/.links
directory.
The change conservatively assumes that any file that has one link (as
before) or two links (assuming auto-optimise mode) will free space.
Co-authored-by: Sandro <sandro.jaeckel@gmail.com>
no need for function<> with c++17 deduction. this saves allocations and virtual
calls, but has the same semantics otherwise. not going through function has the
side effect of giving compilers more insight into the cleanup code, so we need a
few local warning disables.
reduces peak heap memory use on eval of our test system from 264.4MB to 242.3MB,
possibly also a slight performance boost.
theoretically memory use could be cut down by another eight bytes per Pos on
average by turning it into a tuple containing an index into a global base
position table with row and column offsets, but that doesn't seem worth the
effort at this point.
No real need for keeping a separate header for such a simple class.
This requires changing `OrSuggestions<T>::operator*` a bit to not throw
an `Error`, to prevent a cyclic dependency. But since this error is only
thrown on programmer error, we can replace the whole method by a direct
call to `std::get`, which will raise its own assertion if need be.
Allows completing `nix build ~/flake#<Tab>`.
We can implement expansion for `~user` later if needed.
Not using wordexp(3) since that expands way too much.
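For illustration, a minimal sketch of the `~/` expansion (a hypothetical
helper, not the exact implementation):

  #include <cstdlib>
  #include <string>
  #include <string_view>

  // Expands a leading "~/" using $HOME; "~user" and everything else
  // are left untouched.
  std::string expandTilde(std::string_view path)
  {
      if (path.rfind("~/", 0) == 0)
          if (const char * home = std::getenv("HOME"))
              return std::string(home) + std::string(path.substr(1));
      return std::string(path);
  }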
Starts progress on #5729.
The idea is that we should not have these default methods throwing
"unimplemented". This is a small step in that direction.
I kept `addTempRoot` because it is a no-op rather than a failure. Also,
as a practical matter, it is called all over the place while doing
other tasks, so the downcasting would be annoying.
Maybe in the future I could move the "real" `addTempRoot` to `GcStore`
and have the existing use cases go through a `tryAddTempRoot` wrapper
that downcasts or does nothing, but I wasn't sure whether that was a
good idea, so with a bias toward less churn I didn't do it yet.
To avoid that JSON messages are parsed twice in case of
remote builds with `ssh-ng://`, I split up the original
`handleJSONLogMessage` into three parts:
* `parseJSONMessage(const std::string&)` checks if it's a message in the
form of `@nix {...}` and tries to parse it (and prints an error if the
parsing fails).
* `handleJSONLogMessage(nlohmann::json&, ...)` reads the fields from the
message and passes them to the logger.
* `handleJSONLogMessage(const std::string&, ...)` behaves as before, but
uses the two functions mentioned above as implementation.
In the case of `ssh-ng://` logs, the first two functions are invoked manually.
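For illustration, a simplified sketch of the first part (the real
signatures also carry logger arguments):

  #include <iostream>
  #include <optional>
  #include <string>
  #include <nlohmann/json.hpp>

  std::optional<nlohmann::json> parseJSONMessage(const std::string & msg)
  {
      // Only lines of the form "@nix {...}" are JSON log messages.
      if (msg.rfind("@nix ", 0) != 0) return std::nullopt;
      try {
          return nlohmann::json::parse(msg.substr(5));
      } catch (nlohmann::json::exception & e) {
          std::cerr << "bad JSON log message: " << e.what() << "\n";
          return std::nullopt;
      }
  }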
This changes the representation of the interrupt callback list to
be safe to use during interrupt handling.
Holding a lock while executing arbitrary functions is something to
avoid in general, because of the risk of deadlock.
Such a deadlock occurs in https://github.com/NixOS/nix/issues/3294
where ~CurlDownloader tries to deregister its interrupt callback.
This happens during what seems to be a triggerInterrupt() by the
daemon connection's MonitorFdHup thread. I can not confirm this bit
based on the stack trace though; it's based on reading the code,
so no absolute certainty, but a smoking gun nonetheless.
we'll retain the old coerceToString interface that returns a string, but callers
that don't need the returned value to outlive the Value it came from can save
copies by using the new interface instead. for values that weren't stringy we'll
pass a new buffer argument that'll be used for storage and shouldn't be
inspected.
when given a string yacc will copy the entire input to a newly allocated
location so that it can add a second terminating NUL byte. since the
parser is a very internal thing to EvalState we can ensure that having
two terminating NUL bytes is always possible without copying, and have
the parser itself merely check that the expected NULs are present.
# before
Benchmark 1: nix search --offline nixpkgs hello
Time (mean ± σ): 572.4 ms ± 2.3 ms [User: 563.4 ms, System: 8.6 ms]
Range (min … max): 566.9 ms … 579.1 ms 50 runs
Benchmark 2: nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 381.7 ms ± 1.0 ms [User: 348.3 ms, System: 33.1 ms]
Range (min … max): 380.2 ms … 387.7 ms 50 runs
Benchmark 3: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.936 s ± 0.005 s [User: 2.715 s, System: 0.221 s]
Range (min … max): 2.923 s … 2.946 s 50 runs
# after
Benchmark 1: nix search --offline nixpkgs hello
Time (mean ± σ): 571.7 ms ± 2.4 ms [User: 563.3 ms, System: 8.0 ms]
Range (min … max): 566.7 ms … 579.7 ms 50 runs
Benchmark 2: nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 376.6 ms ± 1.0 ms [User: 345.8 ms, System: 30.5 ms]
Range (min … max): 374.5 ms … 379.1 ms 50 runs
Benchmark 3: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.922 s ± 0.006 s [User: 2.707 s, System: 0.215 s]
Range (min … max): 2.906 s … 2.934 s 50 runs
there's a couple of places that can be easily converted from using strings to using
string_views instead. gives a slight (~1%) boost to system eval.
# before
nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.946 s ± 0.026 s [User: 2.655 s, System: 0.209 s]
Range (min … max): 2.905 s … 2.995 s 20 runs
# after
Time (mean ± σ): 2.928 s ± 0.024 s [User: 2.638 s, System: 0.211 s]
Range (min … max): 2.893 s … 2.970 s 20 runs
this avoids one copy from `s` into `str`, and possibly another copy needed to
construct `s` at the call site. lexical_cast is also more efficient in general.
There already existed a smoke test for the link content length,
but it appears that there exist corruptions pernicious enough
to replace the file content with zeros while keeping the same length.
--repair-path now goes as far as checking the content of the link,
making it true to its name and actually repairing the path for such
corruption cases.
we don't have to create an ostream sentry object for every character of a JSON
string we write. format a bunch of characters and flush them to the stream all
at once instead.
this doesn't affect small numbers of string characters, but larger numbers of
total JSON string characters written gain a lot. at 1MB of total string written
we gain almost 30%, at 16MB it's almost a factor of 3x. large numbers of JSON
string characters do occur naturally in a nixos system evaluation to generate
documentation (though this is now somewhat mitigated by caching the largest part
of nixos option docs).
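a minimal sketch of the buffering idea (not the actual json.cc code;
buffer size and escaping rules are abbreviated):

  #include <cstddef>
  #include <ostream>
  #include <string_view>

  void writeJSONString(std::ostream & out, std::string_view s)
  {
      char buf[1024];
      size_t n = 0;
      auto flush = [&] { out.write(buf, n); n = 0; };
      auto put = [&](char c) { if (n == sizeof buf) flush(); buf[n++] = c; };
      put('"');
      for (char c : s) {
          // each direct stream operation constructs a sentry, so batch
          // characters locally and write them out in large chunks
          if (c == '"' || c == '\\') { put('\\'); put(c); }
          else if (c == '\n') { put('\\'); put('n'); }
          else put(c); // (other control characters need \uXXXX escapes)
      }
      put('"');
      flush();
  }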
benchmarked with
hyperfine 'nix eval --raw --expr "let s = __concatStringsSep \"\" (__genList (_: \"c\") 256); in __toJSON (__genList (_: s) {e})"' --warmup 1 -L e 1,4,256,4096,65536
before:
Benchmark 1: nix eval --raw --expr "let s = __concatStringsSep \"\" (__genList (_: \"c\") 256); in __toJSON (__genList (_: s) 1)"
Time (mean ± σ): 12.5 ms ± 0.2 ms [User: 9.2 ms, System: 4.0 ms]
Range (min … max): 11.9 ms … 13.1 ms 223 runs
Benchmark 2: nix eval --raw --expr "let s = __concatStringsSep \"\" (__genList (_: \"c\") 256); in __toJSON (__genList (_: s) 4)"
Time (mean ± σ): 12.5 ms ± 0.2 ms [User: 9.3 ms, System: 3.8 ms]
Range (min … max): 11.9 ms … 13.2 ms 220 runs
Benchmark 3: nix eval --raw --expr "let s = __concatStringsSep \"\" (__genList (_: \"c\") 256); in __toJSON (__genList (_: s) 256)"
Time (mean ± σ): 13.2 ms ± 0.3 ms [User: 9.8 ms, System: 4.0 ms]
Range (min … max): 12.6 ms … 14.3 ms 205 runs
Benchmark 4: nix eval --raw --expr "let s = __concatStringsSep \"\" (__genList (_: \"c\") 256); in __toJSON (__genList (_: s) 4096)"
Time (mean ± σ): 24.0 ms ± 0.4 ms [User: 19.4 ms, System: 5.2 ms]
Range (min … max): 22.7 ms … 25.8 ms 119 runs
Benchmark 5: nix eval --raw --expr "let s = __concatStringsSep \"\" (__genList (_: \"c\") 256); in __toJSON (__genList (_: s) 65536)"
Time (mean ± σ): 196.0 ms ± 3.7 ms [User: 171.2 ms, System: 25.8 ms]
Range (min … max): 190.6 ms … 201.5 ms 14 runs
after:
Benchmark 1: nix eval --raw --expr "let s = __concatStringsSep \"\" (__genList (_: \"c\") 256); in __toJSON (__genList (_: s) 1)"
Time (mean ± σ): 12.4 ms ± 0.3 ms [User: 9.1 ms, System: 4.0 ms]
Range (min … max): 11.7 ms … 13.3 ms 204 runs
Benchmark 2: nix eval --raw --expr "let s = __concatStringsSep \"\" (__genList (_: \"c\") 256); in __toJSON (__genList (_: s) 4)"
Time (mean ± σ): 12.4 ms ± 0.2 ms [User: 9.2 ms, System: 3.9 ms]
Range (min … max): 11.8 ms … 13.0 ms 214 runs
Benchmark 3: nix eval --raw --expr "let s = __concatStringsSep \"\" (__genList (_: \"c\") 256); in __toJSON (__genList (_: s) 256)"
Time (mean ± σ): 12.6 ms ± 0.2 ms [User: 9.5 ms, System: 3.8 ms]
Range (min … max): 12.1 ms … 13.3 ms 209 runs
Benchmark 4: nix eval --raw --expr "let s = __concatStringsSep \"\" (__genList (_: \"c\") 256); in __toJSON (__genList (_: s) 4096)"
Time (mean ± σ): 15.9 ms ± 0.2 ms [User: 11.4 ms, System: 5.1 ms]
Range (min … max): 15.2 ms … 16.4 ms 171 runs
Benchmark 5: nix eval --raw --expr "let s = __concatStringsSep \"\" (__genList (_: \"c\") 256); in __toJSON (__genList (_: s) 65536)"
Time (mean ± σ): 69.0 ms ± 0.9 ms [User: 44.3 ms, System: 25.3 ms]
Range (min … max): 67.2 ms … 70.9 ms 42 runs
This was already accidentally disabled in ba87b08. It also no longer
appears to be beneficial, and in fact slows things down, e.g. when
evaluating a NixOS system configuration:
elapsed time: median = 3.8170 mean = 3.8202 stddev = 0.0195 min = 3.7894 max = 3.8600 [rejected, p=0.00000, Δ=0.36929±0.02513]
Due to a missing <atomic> include, the build fails as:
src/libutil/util.hh:350:24: error: no match for 'operator||' (operand types are 'std::atomic<bool>' and 'bool')
350 | if (_isInterrupted || (interruptCheck && interruptCheck()))
| ~~~~~~~~~~~~~~ ^~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| | |
| std::atomic<bool> bool
The manual is generated from default values which are themselves
derived from various sources (cpuid, BIOS settings (KVM), number of
cores), so those defaults are not reproducible. This commit hides such
non-reproducible settings from the manual output.
Rather than having them as plain strings scattered through the whole
codebase, create an enum containing all the known experimental features.
This means that
- Nix can now `warn` when an unknown experimental feature is passed
(making it much easier to spot typos and deprecated features); see the
sketch below
- It's now easy to remove a feature altogether (once the feature isn't
experimental anymore or is dropped) by just removing the value from the
enum and letting the compiler point us to all the now-invalid usages
of it.
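A minimal sketch of the idea (the feature names here are just examples;
the real enum and parser cover every known feature):

  #include <optional>
  #include <string_view>

  enum struct ExperimentalFeature { CaDerivations, Flakes, NixCommand };

  // Unknown names yield std::nullopt, so the caller can warn about a
  // typo or a dropped feature instead of silently accepting it.
  std::optional<ExperimentalFeature> parseExperimentalFeature(std::string_view name)
  {
      if (name == "ca-derivations") return ExperimentalFeature::CaDerivations;
      if (name == "flakes") return ExperimentalFeature::Flakes;
      if (name == "nix-command") return ExperimentalFeature::NixCommand;
      return std::nullopt;
  }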
Fixed a bug in the initialization of the 'base64DecodeChars' variable:
the decoder did not fail on invalid Base64 strings.
Added a test case to verify the fix.
Also made 'base64DecodeChars' computed at compile time (see the sketch
below), and added a test case that encodes/decodes a string with
non-printable characters.
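A rough sketch of the compile-time table (close in spirit to the change,
not necessarily the exact code):

  #include <array>
  #include <cstdint>

  constexpr char base64Chars[] =
      "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

  // Built once at compile time; invalid characters map to -1 so the
  // decoder can reject bad input instead of silently accepting it.
  constexpr std::array<int8_t, 256> makeBase64DecodeChars()
  {
      std::array<int8_t, 256> t{};
      for (auto & c : t) c = -1;
      for (int i = 0; i < 64; i++)
          t[(unsigned char) base64Chars[i]] = (int8_t) i;
      return t;
  }

  constexpr std::array<int8_t, 256> base64DecodeChars = makeBase64DecodeChars();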
This ensures any started processes can't write to /nix/store (except
during builds). This partially reverts 01d07b1e, which happened because
of #2646.
The problem was only happening after nix downloads anything, causing
me to suspect the download thread. The problem turns out to be:
"A process can't join a new mount namespace if it is sharing
filesystem-related attributes with another process", in this case this
process is the curl thread.
Ideally, we might kill it before spawning the shell process, but it's
inside a static variable in the getFileTransfer() function. So
instead, stop it from sharing FS state using unshare(). A strategy
such as the one from #5057 (single-threaded chroot helper binary) is
also very much on the table.
Fixes #4337.
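The unshare() call looks roughly like this (a sketch; in Nix it would
run at the start of the download thread):

  #include <cerrno>
  #include <cstring>
  #include <sched.h>
  #include <stdexcept>
  #include <string>

  // After this, the thread's filesystem attributes (cwd, root, umask)
  // are no longer shared, so other threads can join new mount namespaces.
  void unshareFilesystem()
  {
      if (unshare(CLONE_FS) != 0 && errno != EPERM)
          throw std::runtime_error(
              std::string("unshare(CLONE_FS): ") + strerror(errno));
  }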
The garbage collector no longer blocks other processes from
adding/building store paths or adding GC roots. To prevent the
collector from deleting store paths just added by another process,
processes need to connect to the garbage collector via a Unix domain
socket to register new temporary roots.
9c766a40cb broke logging from the
daemon, because commonChildInit is called when starting the build hook
in a vfork, so it ends up resetting the parent's logger. So don't
vfork.
It might be best to get rid of vfork altogether, but that may cause
problems, e.g. when we call an external program like git from the
evaluator.
This fixes a class of crashes and introduces ptr<T> to make the
code robust against this failure mode going forward.
Thanks regnat for the idea of a ref<T> without overhead!
Closes #4895. Closes #4893. Closes #5127. Closes #5113.
Previously, despite having a boolean that tracked initialization, the
decode characters were recalculated every single time a Base64
string was decoded.
With this change we only initialize the decode array once in a
thread-safe manner.
Otherwise I get a compiler error when building for NetBSD:
src/libutil/util.cc: In function 'void nix::_deletePath(const Path&, uint64_t&)':
src/libutil/util.cc:438:17: error: base operand of '->' is not a pointer
438 | AutoCloseFD dirfd(open(dir.c_str(), O_RDONLY));
| ^~~~~
src/libutil/util.cc:439:10: error: 'dirfd' was not declared in this scope
439 | if (!dirfd) {
| ^~~~~
src/libutil/util.cc:444:17: error: 'dirfd' was not declared in this scope
444 | _deletePath(dirfd.get(), path, bytesFreed);
| ^~~~~
Fill `NIX_CONFIG` with the value of the current Nix configuration before
calling the post-build-hook.
That way the whole configuration (including possible
`experimental-features`, a possible `--store` option, or whatever) will
be made available to the hook.
Move the `closure` logic of `computeFSClosure` to its own (templated) function.
This doesn’t bring much by itself (except for the ability to properly
test the “closure” functionality independently from the rest), but it
allows reusing it (in particular for the realisations which will require
a very similar closure computation).
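A simplified, synchronous sketch of such a templated closure function
(the real version also has to deal with concurrency and error
propagation):

  #include <set>

  template<typename T, typename GetEdges>
  std::set<T> computeClosure(std::set<T> startElts, GetEdges getEdges)
  {
      std::set<T> res;
      std::set<T> pending = std::move(startElts);
      while (!pending.empty()) {
          T elt = *pending.begin();
          pending.erase(pending.begin());
          if (!res.insert(elt).second) continue; // already visited
          // getEdges yields the direct successors, e.g. the references
          // of a store path, or the dependencies of a realisation
          for (auto & next : getEdges(elt))
              pending.insert(next);
      }
      return res;
  }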
When you have a symlink like:
/tmp -> ./private/tmp
you need to resolve ./private/tmp relative to /tmp’s dir: ‘/’. Unlike
any other path output by dirOf, / ends with a slash. We don’t want
trailing slashes here since we will append another slash in the next
component, so clear s like we would if it was a symlink to an absolute
path.
This should fix at least part of the issue in
https://github.com/NixOS/nix/issues/4822; it will need confirmation
that it actually fixes the problem before closing, though.
Introduced in f3f228700a.
This function doesn't support all compression methods (namely 'none'
and 'br'), so it shouldn't be exposed.
Also restore the original decompress() as a wrapper around
makeDecompressionSink().
The S3 store relies on being able to decompress things with
an empty method, because it just passes the value of the Content-Encoding
header directly to decompress.
If the file is not compressed, then this will cause the compression
routine to get confused.
This caused NixOS/nixpkgs#120120.
If there were many top-level goals (which are not destroyed until the
very end), commands like
$ nix copy --to 'ssh://localhost?remote-store=/tmp/nix' \
/run/current-system --no-check-sigs --substitute-on-destination
could fail with "Too many open files". So now we do some explicit
cleanup from amDone(). It would be cleaner to separate goals from
their temporary internal state, but that would be a bigger refactor.
According to RFC4007[1], IPv6 addresses can have a so-called zone_id
separated from the actual address with `%` as delimiter. In contrast to
Nix 2.3, the version on `master` doesn't recognize it as such:
$ nix ping-store --store ssh://root@fe80::1%18 --experimental-features nix-command
warning: 'ping-store' is a deprecated alias for 'store ping'
error: --- Error ----------------------------------------------------------------- nix
don't know how to open Nix store 'ssh://root@fe80::1%18'
I modified the IPv6 match-regex accordingly to optionally detect this
part of the address. As we don't seem to do anything special with it, I
decided to leave it as part of the URL for now.
Fixes #4490
[1] https://tools.ietf.org/html/rfc4007
This is probably what most people expect it to do. Fixes #3781.
There is a new command 'nix flake lock' that has the old behaviour of
'nix flake update', i.e. it just adds missing lock file entries unless
overridden using --update-input.
This is technically a breaking change, since attempting to set plugin
files after the first non-flag argument will now throw an error. This
is acceptable given the relative lack of stability in a plugin
interface and the need to tie the knot somewhere once plugins can
actually define new subcommands.
When performing distributed builds of machine learning packages, it
would be nice if builders without the required SIMD instructions could
be excluded as build nodes.
Since x86_64 has accumulated a large number of different instruction
set extensions, listing all possible extensions would be unwieldy.
AMD, Intel, Red Hat, and SUSE have recently defined four different
microarchitecture levels that are now part of the x86-64 psABI
supplement and will be used in glibc 2.33:
https://gitlab.com/x86-psABIs/x86-64-ABI
https://lwn.net/Articles/844831/
This change uses libcpuid to detect CPU features and then uses them to
add the supported x86_64 levels to the additional system types. For
example on a Ryzen 3700X:
$ ~/aps/bin/nix -vv --version | grep "Additional system"
Additional system types: i686-linux, x86_64-v1-linux, x86_64-v2-linux, x86_64-v3-linux
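For illustration, a sketch of the detection with libcpuid's C API (the
per-level feature checks are abbreviated; the real psABI levels require
more flags than shown here):

  #include <libcpuid/libcpuid.h>
  #include <string>
  #include <vector>

  std::vector<std::string> computeLevels()
  {
      std::vector<std::string> levels;
      if (!cpuid_present()) return levels;
      cpu_raw_data_t raw;
      cpu_id_t data;
      if (cpuid_get_raw_data(&raw) < 0 || cpu_identify(&raw, &data) < 0)
          return levels;
      levels.push_back("x86_64-v1");
      // v2 also requires e.g. CX16 and SSSE3; v3 also BMI1/2, MOVBE, ...
      if (data.flags[CPU_FEATURE_SSE4_2] && data.flags[CPU_FEATURE_POPCNT])
          levels.push_back("x86_64-v2");
      if (data.flags[CPU_FEATURE_AVX2] && data.flags[CPU_FEATURE_FMA3])
          levels.push_back("x86_64-v3");
      return levels;
  }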
I tested a trivial program that called kill(-1, SIGKILL), which was
run as the only process for an unprivileged user, on Linux and
FreeBSD. On Linux, kill reported success, while on FreeBSD it failed
with EPERM.
POSIX says:
> If pid is -1, sig shall be sent to all processes (excluding an
> unspecified set of system processes) for which the process has
> permission to send that signal.
and
> The kill() function is successful if the process has permission to
> send sig to any of the processes specified by pid. If kill() fails,
> no signal shall be sent.
and
> [EPERM]
> The process does not have permission to send the signal to any
> receiving process.
My reading of this is that kill(-1, ...) may fail with EPERM when
there are no other processes to kill (since the current process is
ignored). Since kill(-1, ...) only attempts to kill processes the
user has permission to kill, EPERM can't mean that we tried to kill
something we weren't allowed to, so it should be fine to interpret
EPERM the same as success here for any POSIX-compliant system (see
the sketch below).
This fixes an issue that Mic92 encountered[1] when he tried to review a
Nixpkgs PR on FreeBSD.
[1]: https://github.com/NixOS/nixpkgs/pull/81459#issuecomment-606073668
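A sketch of the resulting handling (`killAll` is a hypothetical name,
and the real code would retry on EINTR in a loop):

  #include <cerrno>
  #include <csignal>
  #include <stdexcept>

  void killAll()
  {
      if (kill(-1, SIGKILL) == 0) return;
      // ESRCH: no processes matched; EPERM: none we may signal are left
      // (e.g. FreeBSD when only the caller itself remains).
      if (errno == ESRCH || errno == EPERM) return;
      if (errno != EINTR)
          throw std::runtime_error("cannot kill processes");
  }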
It's now
at /home/eelco/Dev/nixpkgs/pkgs/applications/misc/hello/default.nix:7:7:
instead of
at: (7:7) in file: /home/eelco/Dev/nixpkgs/pkgs/applications/misc/hello/default.nix
The new format is more standard and clickable.
Changes:
* The divider lines are gone. These were in practice a bit confusing,
in particular with --show-trace or --keep-going, since then there
were multiple lines, suggesting a start/end which wasn't the case.
* Instead, multi-line error messages are now indented to align with
the prefix (e.g. "error: ").
* The 'description' field is gone since we weren't really using it.
* 'hint' is renamed to 'msg' since it really wasn't a hint.
* The error is now printed *before* the location info.
* The 'name' field is no longer printed, since most of the time it
just named the exception class (like EvalError) and wasn't very
useful. Ideally in the future this would be a unique, easily
googleable error ID (like rustc's).
* "trace:" is now just "…". This assumes error contexts start with
something like "while doing X".
Example before:
error: --- AssertionError ---------------------------------------------------------------------------------------- nix
at: (7:7) in file: /home/eelco/Dev/nixpkgs/pkgs/applications/misc/hello/default.nix
6|
7| x = assert false; 1;
| ^
8|
assertion 'false' failed
----------------------------------------------------- show-trace -----------------------------------------------------
trace: while evaluating the attribute 'x' of the derivation 'hello-2.10'
at: (192:11) in file: /home/eelco/Dev/nixpkgs/pkgs/stdenv/generic/make-derivation.nix
191| // (lib.optionalAttrs (!(attrs ? name) && attrs ? pname && attrs ? version)) {
192| name = "${attrs.pname}-${attrs.version}";
| ^
193| } // (lib.optionalAttrs (stdenv.hostPlatform != stdenv.buildPlatform && !dontAddHostSuffix && (attrs ? name || (attrs ? pname && attrs ? version)))) {
Example after:
error: assertion 'false' failed
at: (7:7) in file: /home/eelco/Dev/nixpkgs/pkgs/applications/misc/hello/default.nix
6|
7| x = assert false; 1;
| ^
8|
… while evaluating the attribute 'x' of the derivation 'hello-2.10'
at: (192:11) in file: /home/eelco/Dev/nixpkgs/pkgs/stdenv/generic/make-derivation.nix
191| // (lib.optionalAttrs (!(attrs ? name) && attrs ? pname && attrs ? version)) {
192| name = "${attrs.pname}-${attrs.version}";
| ^
193| } // (lib.optionalAttrs (stdenv.hostPlatform != stdenv.buildPlatform && !dontAddHostSuffix && (attrs ? name || (attrs ? pname && attrs ? version)))) {
In `nixStable` (2.3.7 to be precise) it's possible to connect to stores
using an IPv6 address:
nix ping-store --store ssh://root@2001:db8::1
This is also useful for `nixops(1)` where you could specify an IPv6
address in `deployment.targetHost`.
However, this behavior is broken on `nixUnstable` and fails with the
following error:
$ nix store ping --store ssh://root@2001:db8::1
don't know how to open Nix store 'ssh://root@2001:db8::1'
This happened because `openStore` from `libstore` uses the `parseURL`
function from `libfetchers` which expects a valid URL as defined in
RFC2732. However, this is unsupported by `ssh(1)`:
$ nix store ping --store 'ssh://root@[2001:db8::1]'
cannot connect to 'root@[2001:db8::1]'
This patch now allows both ways of specifying a store (`root@2001:db8::1`
and `root@[2001:db8::1]`), since the latter is useful to pass query
parameters to the remote store.
In order to achieve this, the following changes were made:
* The URL regex from `url-parts.hh` now allows an IPv6 address in the
form `2001:db8::1` and also `[2001:db8::1]`.
* In `libstore`, a new function named `extractConnStr` ensures that a
proper URL is passed to e.g. `ssh(1)`:
* If a URL looks like either `[2001:db8::1]` or `root@[2001:db8::1]`,
the brackets will be removed using a regex (see the sketch below). No
additional validation is done here as only strings parsed by `parseURL`
are expected.
* In any other case, the string will be left untouched.
* The rules above only apply for `LegacySSHStore` and `SSHStore` (a.k.a
`ssh://` and `ssh-ng://`).
Unresolved questions:
* I'm not really sure whether we want to allow both variants of IPv6
addresses in the URL parser. However it should be noted that both seem
to be possible according to RFC2732:
> This document includes an update to the generic syntax for Uniform
> Resource Identifiers defined in RFC 2396 [URL]. It defines a syntax
> for IPv6 addresses and allows the use of "[" and "]" within a URI
> explicitly for this reserved purpose.
* Currently, it's not possible to specify a port number after the
hostname; however, that doesn't seem to be supported by the URL
parser anyway. Hence, this is probably out of scope here.
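For illustration, a rough sketch of the bracket-stripping step
(`stripBrackets` is a hypothetical name; the real `extractConnStr` does
more than this):

  #include <regex>
  #include <string>

  std::string stripBrackets(const std::string & host)
  {
      // "root@[2001:db8::1]" -> "root@2001:db8::1"; other strings are
      // returned untouched.
      static const std::regex re(R"((.*@)?\[([0-9a-fA-F:]+)\](.*))");
      std::smatch m;
      if (std::regex_match(host, m, re))
          return m[1].str() + m[2].str() + m[3].str();
      return host;
  }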
using fallocate() to preallocate file space does more harm than good:
- it breaks compression on btrfs
- it has been called "not the right thing to do" by xfs developers
(because the delayed allocation that most filesystems implement leads to
smarter allocation than what the filesystem is forced to do when we
fallocate files upfront)
Crucially this introduces BoehmGCStackAllocator, but it also
adds a bunch of wiring to avoid making libutil depend on bdw-gc.
Part of the solution for #4178 and #4200
This removes the extra-substituters and extra-sandbox-paths settings
and instead makes every array setting extensible by setting
"extra-<name> = <value>" in the configuration file or passing
"--<name> <value>" on the command line.
Make nix output completions in the form `completion\tdescription`.
This can't be used by bash (AFAIK), but other shells like zsh or fish
can display it alongside the completion choices.
`nix flake info` calls the github 'commits' API, which requires
authorization when the repository is private. Currently this request
fails with a 404.
This commit adds an authorization header when calling the 'commits' API.
It also changes the way that the 'tarball' API authenticates, moving the
user's token from a query parameter into the Authorization header.
The query parameter method was recently deprecated and will be disallowed
in November 2020. Using it today triggers a warning email.