Similar to the nar-info disk cache (and using the same db).
This makes rebuilds much faster.
- This works regardless of the ca-derivations experimental feature.
I could modify the logic to not touch the db if the flag isn’t there,
but given that this is a trash-able local cache, it doesn’t really seem
worth it.
- We could unify the `NARs` and `Realisation` tables to only have one
generic kv table. This is left as an exercise to the reader.
- I didn’t update the cache db version number as the new schema just
adds a new table to the previous one, so the db will be transparently
migrated and is backwards-compatible.
Fix #4746
This was previously done in https://github.com/NixOS/nix/pull/4515 but
got clobbered away in https://github.com/NixOS/nix/pull/4594.
--------------------------------------------------------------------------------
This fixes an issue where derivations with a primary output that is
not "out" would fail with:
$ nix profile install nixpkgs#sqlite
error: opening directory '/nix/store/2a2ydlgyydly5czcc8lg12n6qqkfz863-sqlite-3.34.1-bin': No such file or directory
This happens because while derivations produce every output when
built, you might not have them if you didn't build the derivation
yourself (for instance, the store path was fetched from a binary cache).
This uses outputName provided from DerivationInfo which appears to
match the first output of the derivation.
I guess I misunderstood John's initial explanation about why wildcards
for outputs are sent to older stores[1]. My `nix-daemon` from 2021-03-26
also has version 1.29, but misses the wildcard[2]. So bumping seems to
be the right call.
[1] https://github.com/NixOS/nix/pull/4759#issuecomment-830812464
[2] 255d145ba7
Starting in macOS 11, the on-disk dylib bundles are no longer available,
but nixpkgs needs to be able to keep compatibility with older versions
that require `/usr/lib/libSystem.B.dylib` in `__impureHostDeps`. Allow
it to keep backwards compatibility with these versions by marking these
dependencies as optional.
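As a rough sketch of the kind of derivation involved (everything here is hypothetical except the `__impureHostDeps` path mentioned above):
```nix
# Hypothetical sandboxed macOS derivation that still declares the pre-11
# dylib path; with this change the missing on-disk file is tolerated.
derivation {
  name = "impure-host-deps-example";
  system = "x86_64-darwin";
  builder = "/bin/sh";
  args = [ "-c" "echo ok > $out" ];
  __impureHostDeps = [ "/usr/lib/libSystem.B.dylib" ];
}
```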
Fixes #4658.
As described in #4745 it's otherwise fairly hard to understand where
this is coming from. Say you have an expression which uses e.g.
`types.package`:
``` nix
{ outputs = { self, nixpkgs }: {
packages.x86_64-linux.hello = let
foo = nixpkgs.lib.evalModules {
modules = [
{
options.foo.bar = with nixpkgs.lib; mkOption { type = types.package; };
}
{
foo.bar = ./.;
}
];
};
in builtins.trace foo.config.foo.bar.outPath nixpkgs.legacyPackages.x86_64-linux.hello;
defaultPackage.x86_64-linux = self.packages.x86_64-linux.hello;
};
}
```
Then you'll get an error trace like this:
```
error: 'builtins.storePath' is not allowed in pure evaluation mode
at /nix/store/p4h2x6r80njkb0j2rc1xjhhl99yri3zb-source/lib/attrsets.nix:328:15:
327| let
328| path' = builtins.storePath path;
| ^
329| res =
… while evaluating the attribute 'config.foo.bar.outPath'
at /nix/store/p4h2x6r80njkb0j2rc1xjhhl99yri3zb-source/lib/attrsets.nix:332:11:
331| name = sanitizeDerivationName (builtins.substring 33 (-1) (baseNameOf path'));
332| outPath = path';
| ^
333| outputs = [ "out" ];
… while evaluating the attribute 'packages.x86_64-linux.hello'
at /nix/store/6c1rfsqzrhjw1235palzjmf5vihcpci7-source/flake.nix:3:5:
2| { outputs = { self, nixpkgs }: {
3| packages.x86_64-linux.hello = let
| ^
4| foo = nixpkgs.lib.evalModules {
```
Fixes #4745
They are equivalent according to
<https://spec.commonmark.org/0.29/#hard-line-breaks>,
and the trailing spaces tend to be a pain (because they make git
complain, editors tend to want to remove them − the `.editorconfig`
actually specifies that − etc.).
This function doesn't support all compression methods (i.e. 'none' and
'br') so it shouldn't be exposed.
Also restore the original decompress() as a wrapper around
makeDecompressionSink().
The S3 store relies on the ability to be able to decompress things with
an empty method, because it just passes the value of the Content-Encoding
directly to decompress.
If the file is not compressed, then this will cause the compression
routine to get confused.
This caused NixOS/nixpkgs#120120.
This makes Nix look up the derivations of paths when they are passed as
store paths. So:
$ nix path-info --derivation /nix/store/0pisd259nldh8yfjvw663mspm60cn5v8-hello-2.10
now gives
/nix/store/qp724w90516n4bk5r9gfb37vzmqdh3z7-hello-2.10.drv
instead of "".
If no deriver is available, Nix now errors instead of silently
ignoring that argument.
In case of pure input-addressed derivations, the build loop doesn't
guarantee that the realisations are stored in the db (if the output paths
are already there or can be substituted, the realisations won't be
registered). This caused `nix shell` to fail in some cases because it
was assuming that the realisations always existed.
A better (but more involved) fix would probably be to ensure that we always
register the realisations, but in the meantime this patches the surface
issue.
Fix #4721
I think that it's not very helpful to get "cached failures" in a wrong
`flake.nix`. This can be very confusing when debugging a Nix expression.
See for instance NixOS/nixpkgs#118115.
In fact, the eval cache allows a forced reevaluation which is used for
e.g. `nix eval`.
This change makes sure that this is the case for `nix build` as well. So
rather than
λ ma27 [~/Projects/exp] → ../nix/outputs/out/bin/nix build -L --rebuild --experimental-features 'nix-command flakes'
error: cached failure of attribute 'defaultPackage.x86_64-linux'
the evaluation of already-evaluated (and failed) attributes looks like
this now:
λ ma27 [~/Projects/exp] → ../nix/outputs/out/bin/nix build -L --rebuild --experimental-features 'nix-command flakes'
error: attribute 'hell' missing
at /nix/store/mrnvi9ss8zn5wj6gpn4bcd68vbh42mfh-source/flake.nix:6:35:
5|
6| packages.x86_64-linux.hello = nixpkgs.legacyPackages.x86_64-linux.hell;
| ^
7|
(use '--show-trace' to show detailed location information)
When working on some more complex Nix code, there are sometimes rather
unhelpful or misleading error messages, especially if coerce-errors are
thrown.
This patch is a first steps towards improving that. I'm happy to file
more changes after that, but I'd like to gather some feedback first.
To summarize, this patch does the following things:
* Attrsets (a.k.a. `Bindings` in `libexpr`) now have a `Pos`. This is
helpful e.g. to identify which attribute-set in `listToAttrs` is
invalid.
* The `Value`-struct has a new method named `determinePos` which tries
to guess the position of a value and falls back to a default if that's
not possible.
This can be used to provide better messages if a coercion fails.
* The new `determinePos`-API is used by `builtins.concatMap` now. With
that change, Nix shows the exact position in the error where a wrong
value was returned by the lambda.
To make sure it's still obvious that `concatMap` is the problem,
another stack-frame was added.
* The changes described above can be added to every other `primop`, but
first I'd like to get some feedback about the overall approach.
* The position of the `name`-attribute appears in the trace.
* If e.g. `meta` has no `outPath`-attribute, a `cannot coerce set to
string` error will be thrown where `pos` points to `name =` which is
highly misleading.
If there were many top-level goals (which are not destroyed until the
very end), commands like
$ nix copy --to 'ssh://localhost?remote-store=/tmp/nix' \
/run/current-system --no-check-sigs --substitute-on-destination
could fail with "Too many open files". So now we do some explicit
cleanup from amDone(). It would be cleaner to separate goals from
their temporary internal state, but that would be a bigger refactor.
This avoids an ambiguity where the `StorePathWithOutputs { drvPath, {}
}` could mean "build `drvPath`" or "substitute `drvPath`" depending on
context.
It also brings the internals closer in line with the new CLI, by
generalizing the `Buildable` type that is used there and already makes
that distinction.
In doing so, relegate `StorePathWithOutputs` to being a type just for
backwards compatibility (CLI and RPC).
These are by no means part of the notion of a store, but rather are
things that happen to use stores. (Or put another way, there's no way
we'd make them virtual methods any time soon.) It's better to move them
out of that too-big class then.
Also, this helps us remove StorePathWithOutputs from the Store interface
altogether next commit.
This fixes builtins.fetchGit { url = ...; ref = "HEAD"; }, that works in
stable nix (v2.3.10), but is broken in nix master:
$ ./result/bin/nix repl
Welcome to Nix version 2.4pre19700101_dd77f71. Type :? for help.
nix-repl> builtins.fetchGit { url = "https://github.com/NixOS/nix"; ref = "HEAD"; }
fetching Git repository 'https://github.com/NixOS/nix'fatal: couldn't find remote ref refs/heads/HEAD
error: program 'git' failed with exit code 128
The documentation for builtins.fetchGit says ref = "HEAD" is the
default, so it should also be supported to explicitly pass it.
I came across this issue because poetry2nix can use ref = "HEAD" in some
situations.
Fixes #4674.
A few versioning mistakes were corrected:
- In 27b5747ca7, the Daemon protocol had some
version `>= 0xc` that should have been `>= 0x1c`, or `28` since the
other conditions used decimal.
- In a2b69660a9, legacy SSH gated new CAS
info on version 6, but version 5 in the server. It is now 6
everywhere.
Additionally, legacy ssh was sending over more metadata than the daemon
one was. The daemon now sends that data too.
CC @regnat
Co-authored-by: Cole Helbling <cole.e.helbling@outlook.com>
According to RFC4007[1], IPv6 addresses can have a so-called zone_id
separated from the actual address with `%` as delimiter. In contrast to
Nix 2.3, the version on `master` doesn't recognize it as such:
$ nix ping-store --store ssh://root@fe80::1%18 --experimental-features nix-command
warning: 'ping-store' is a deprecated alias for 'store ping'
error: --- Error ----------------------------------------------------------------- nix
don't know how to open Nix store 'ssh://root@fe80::1%18'
I modified the IPv6 match-regex accordingly to optionally detect this
part of the address. As we don't seem to do anything special with it, I
decided to leave it as part of the URL for now.
Fixes #4490
[1] https://tools.ietf.org/html/rfc4007
I guess the rationale behind the old name was that
`pathInfoIsTrusted(info)` returns `true` iff we would need to `blindly`
trust the path (because it has no valid signature and `requireSigs` is
set), but I find it to be a really confusing footgun because it's quite
natural to give it the opposite meaning.
When starting a nix-shell with `-i` it was previously possible for it to
silently fail in the scenario where the specified interpreter didn't
exist. This happened due to the `exec` call masking the issue.
With this change we enable `execfail`, which causes the script using
`nix-shell` as interpreter to correctly exit with code 127.
Fixes: #4598
Basically, if a tarball URL is used as a flake input, and the URL leads
to a redirect, the final redirect destination would be recorded as the
locked URL.
This allows tarballs under https://nixos.org/channels to be used as
flake inputs. If we, as before, lock on to the original URL it would
break every time the channel updates.
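As a hedged sketch (the input name and channel URL are just an example of such a redirecting tarball):
```nix
# A redirecting channel tarball as a flake input (illustrative URL); the lock
# file now records the redirect target instead of this mutable channel URL,
# so the locked input stays stable while the channel advances.
{
  inputs.nixpkgs.url = "https://nixos.org/channels/nixos-unstable/nixexprs.tar.xz";
  outputs = { self, nixpkgs }: { };
}
```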
Local git repositories are normally used directly instead of
cloning. This commit checks if a repo is bare and forces a
clone.
Co-authored-by: Théophane Hufschmitt <regnat@users.noreply.github.com>
What happened was that Nix was trying to unconditionally mount these
paths in fixed-output derivations, but since the outer derivation was
pure, those paths did not exist. The solution is to only mount those
paths when they exist.
This separates the scheduling logic (including simple hook pathway) from
the local-store needing code.
This should be the final split for now. I'm reasonably happy with how
it's turning out, even before I'm done moving code into
`local-derivation-goal`. Benefits:
1. This will help "witness" that the hook case is indeed a lot simpler,
and also compensate for the increased complexity that comes from
content-addressed derivation outputs.
2. It also moves us ever so slightly towards a world where we could use
off-the-shelf storage or sandboxing, since `local-derivation-goal`
would be gutted in those cases, but `derivation-goal` should remain
nearly the same.
The new `#if 0` in the new files will be deleted in the following
commit. I keep it here so if it turns out more stuff can be moved over,
it's easy to do so in a way that preserves ordering --- and thus
prevents conflicts.
N.B.
```sh
git diff HEAD^^ --color-moved --find-copies-harder --patience --stat
```
makes nicer output.
This is probably what most people expect it to do. Fixes #3781.
There is a new command 'nix flake lock' that has the old behaviour of
'nix flake update', i.e. it just adds missing lock file entries unless
overridden using --update-input.
This is already used by Hydra, and is very useful when materializing
a remote builder list from service discovery. This allows the service
discovery tool to only sync one file instead of two.
This is technically a breaking change, since attempting to set plugin
files after the first non-flag argument will now throw an error. This
is acceptable given the relative lack of stability in a plugin
interface and the need to tie the knot somewhere once plugins can
actually define new subcommands.
This field used to be a `BasicDerivation`, but this `BasicDerivation`
was downcasted to a `Derivation` when needed (implicitly or not), so we
might as well make it a full `Derivation` and upcast it when needed.
This also allows getting rid of a weird duplication in the way we
compute the static output hashes for the derivation. We had to
do it differently and in a different place depending on whether the
derivation was a full derivation or just a basic drv, but we can now do
it unconditionally on the full derivation.
Fix #4559
- Pass it the name of the outputs rather than their output paths (as
these don't exist for ca derivations)
- Get the built output paths from the remote builder
- Register the new received realisations
The PR #4240 changed the message of the error that was thrown when an auto-called
function was missing an argument.
However this change also changed the type of the error, from `EvalError`
to a new `MissingArgumentError`. This broke hydra which was relying on
an `EvalError` being thrown.
Make `MissingArgumentError` a subclass of `EvalError` to un-break hydra.
When performing distributed builds of machine learning packages, it
would be nice if builders without the required SIMD instructions can
be excluded as build nodes.
Since x86_64 has accumulated a large number of different instruction
set extensions, listing all possible extensions would be unwieldy.
AMD, Intel, Red Hat, and SUSE have recently defined four different
microarchitecture levels that are now part of the x86-64 psABI
supplement and will be used in glibc 2.33:
https://gitlab.com/x86-psABIs/x86-64-ABI
https://lwn.net/Articles/844831/
This change uses libcpuid to detect CPU features and then uses them to
add the supported x86_64 levels to the additional system types. For
example on a Ryzen 3700X:
$ ~/aps/bin/nix -vv --version | grep "Additional system"
Additional system types: i686-linux, x86_64-v1-linux, x86_64-v2-linux, x86_64-v3-linux
That way we
1. Don't have to recompute them several times
2. Can compute them in a place where we know the type of the parent
derivation, meaning that we don't need the casting dance we had before
Once a build is done, get back to the original derivation, and register
all the newly built outputs for this derivation.
This allows Nix to work properly with derivations that don't have all
their build inputs available − thus allowing garbage collection and
(once it's implemented) binary substitution
Change the `nix-build` logic for linking/printing the output paths to allow for
some outputs to be missing. This might happen when the toplevel
derivation didn't have to be built, either because all the required
outputs were already there, or because they have all been substituted.
I tested a trivial program that called kill(-1, SIGKILL), which was
run as the only process for an unprivileged user, on Linux and
FreeBSD. On Linux, kill reported success, while on FreeBSD it failed
with EPERM.
POSIX says:
> If pid is -1, sig shall be sent to all processes (excluding an
> unspecified set of system processes) for which the process has
> permission to send that signal.
and
> The kill() function is successful if the process has permission to
> send sig to any of the processes specified by pid. If kill() fails,
> no signal shall be sent.
and
> [EPERM]
> The process does not have permission to send the signal to any
> receiving process.
My reading of this is that kill(-1, ...) may fail with EPERM when
there are no other processes to kill (since the current process is
ignored). Since kill(-1, ...) only attempts to kill processes the
user has permission to kill, it can't mean that we tried to do
something we didn't have permission to do, so it should be fine to
interpret EPERM the same as success here for any POSIX-compliant
system.
This fixes an issue that Mic92 encountered[1] when he tried to review a
Nixpkgs PR on FreeBSD.
[1]: https://github.com/NixOS/nixpkgs/pull/81459#issuecomment-606073668
We upgrade to lowdown 0.8.0 [1] which contains a fix/improvement to a
behavior mentioned in this issue thread [2] where a big part of
lowdown's API would just call exit(1) on allocation errors since that
is satisfactory behavior for the lowdown binary.
Now lowdown_term_rndr returns 0 if an allocation error occurred which we
check for in libcmd/markdown.cc.
Also the extern "C" { } wrapper around lowdown.h has been removed as it
is not necessary.
[1]: 6ca7c855a0/versions.xml (L987-L1006)
[2]: https://github.com/kristapsdz/lowdown/issues/45#issuecomment-756681153
In addition to being some ugly template trickery, it was also totally
useless, as it was used in only one place where I could replace it with
just a few extra characters.
This makes nix search always go through the first level of an
attribute set, even if it's not a top level attribute. For instance,
you can now list all GHC compilers with:
$ nix search nixpkgs#haskell.compiler
...
This is similar to how nix-env works when you pass in -A.
Where a `RealisedPath` is a store path with its history, meaning either
an opaque path for stuff that has been directly added to the store, or a
`Realisation` for stuff that has been built by a derivation
This is a low-level refactoring that doesn't bring anything by itself
(except a few dozen extra lines of code :/ ), but raising the
abstraction level a bit is important on a number of levels:
- Commands like `nix build` have to query for the realisations after the
build is finished which is fragile (see
27905f12e4a7207450abe37c9ed78e31603b67e1 for example). Having them
operate directly at the realisation level would avoid that
- Others like `nix copy` currently operate directly on (built) store
paths, but need a bit more information as they will need to register
the realisations on the remote side
Example:
error: builder for '/nix/store/9ysqfidhipyzfiy54mh77iqn29j6cpsb-failing.drv' failed with exit code 1;
last 1 log lines:
> FAIL
For full logs, run 'nix log /nix/store/9ysqfidhipyzfiy54mh77iqn29j6cpsb-failing.drv'.
… while importing '/nix/store/pfp4a4bjh642ylxyipncqs03z6kkgfvy-failing'
at /nix/store/25wgzr2qrqqiqfbdb1chpiry221cjglc-source/flake.nix:58:15:
57|
58| ifd = import self.hydraJobs.broken;
| ^
59|
Fix a mismatch in the errors thrown when a needed output was missing
from an input derivation, which was leading to a wrong and quite
misleading error message.
Don't only show the name of the output, but also the derivation to which
this output belongs (as otherwise it's very hard to track back what went
wrong)
It's now
at /home/eelco/Dev/nixpkgs/pkgs/applications/misc/hello/default.nix:7:7:
instead of
at: (7:7) in file: /home/eelco/Dev/nixpkgs/pkgs/applications/misc/hello/default.nix
The new format is more standard and clickable.
Changes:
* The divider lines are gone. These were in practice a bit confusing,
in particular with --show-trace or --keep-going, since then there
were multiple lines, suggesting a start/end which wasn't the case.
* Instead, multi-line error messages are now indented to align with
the prefix (e.g. "error: ").
* The 'description' field is gone since we weren't really using it.
* 'hint' is renamed to 'msg' since it really wasn't a hint.
* The error is now printed *before* the location info.
* The 'name' field is no longer printed since most of the time it
wasn't very useful since it was just the name of the exception (like
EvalError). Ideally in the future this would be a unique, easily
googleable error ID (like rustc).
* "trace:" is now just "…". This assumes error contexts start with
something like "while doing X".
Example before:
error: --- AssertionError ---------------------------------------------------------------------------------------- nix
at: (7:7) in file: /home/eelco/Dev/nixpkgs/pkgs/applications/misc/hello/default.nix
6|
7| x = assert false; 1;
| ^
8|
assertion 'false' failed
----------------------------------------------------- show-trace -----------------------------------------------------
trace: while evaluating the attribute 'x' of the derivation 'hello-2.10'
at: (192:11) in file: /home/eelco/Dev/nixpkgs/pkgs/stdenv/generic/make-derivation.nix
191| // (lib.optionalAttrs (!(attrs ? name) && attrs ? pname && attrs ? version)) {
192| name = "${attrs.pname}-${attrs.version}";
| ^
193| } // (lib.optionalAttrs (stdenv.hostPlatform != stdenv.buildPlatform && !dontAddHostSuffix && (attrs ? name || (attrs ? pname && attrs ? version)))) {
Example after:
error: assertion 'false' failed
at: (7:7) in file: /home/eelco/Dev/nixpkgs/pkgs/applications/misc/hello/default.nix
6|
7| x = assert false; 1;
| ^
8|
… while evaluating the attribute 'x' of the derivation 'hello-2.10'
at: (192:11) in file: /home/eelco/Dev/nixpkgs/pkgs/stdenv/generic/make-derivation.nix
191| // (lib.optionalAttrs (!(attrs ? name) && attrs ? pname && attrs ? version)) {
192| name = "${attrs.pname}-${attrs.version}";
| ^
193| } // (lib.optionalAttrs (stdenv.hostPlatform != stdenv.buildPlatform && !dontAddHostSuffix && (attrs ? name || (attrs ? pname && attrs ? version)))) {
This change is to simplify [Trustix](https://github.com/tweag/trustix) indexing and makes it possible to reconstruct this URL regardless of the compression used.
In particular this means that 7c2e9ca597/contrib/nix/nar/nar.go (L61-L71) can be removed and only the bits that are required to establish trust need to be published in the Trustix build logs.
With the `ca-derivations` experimental feature, non-ca derivations used
to have their output paths returned as unknown as long as they weren't
built (because of a mistake in the code that systematically erased the
previous value)
Thanks @regnat and @edolstra for catching this and coming up with the
solution.
The way I had generalized those is wrong, because local settings for
non-local stores are a confusing default. And due to the nature of C++
inheritance, fixing the defaults is more annoying than it should be.
Additionally, I thought we might just drop the check in the substitution
logic since `Store::addToStore` is now streaming, but @regnat rightfully
pointed out that as it downloads dependencies first, that would still be
too late, and also waste effort on possibly unneeded/unwanted
dependencies.
The simple and correct thing to do is just make a store method for the
boolean logic, keeping all the setting and key stuff the way it was
before. That new method is both used by `LocalStore::addToStore` and the
substitution goal check. Perhaps we might eventually make it fancier,
e.g. sending the ValidPathInfo to remote stores for them to validate,
but this is good enough for now.
By default, once you enter x86_64 Rosetta 2, macOS will try to run
everything in x86_64. So an x86_64 Nix will still try to use x86_64
even when system = aarch64-darwin. To avoid this we can set
kern.curproc_arch_affinity sysctl. With kern.curproc_arch_affinity=0,
we ignore this preference.
This is based on how
https://opensource.apple.com/source/system_cmds/system_cmds-880.40.5/arch.tproj/arch.c.auto.html
works. Completely undocumented, but seems to work!
Note, you can verify this works with this impure Nix expression:
```
{
a = derivation {
name = "a";
system = "aarch64-darwin";
builder = "/bin/sh";
args = [ "-e" (builtins.toFile "builder" ''
[ "$(/usr/bin/arch)" = arm64 ]
[ "$(/usr/bin/arch -arch x86_64 /bin/sh -c /usr/bin/arch)" = i386 ]
[ "$(/usr/bin/arch -arch arm64 /bin/sh -c /usr/bin/arch)" = arm64 ]
/usr/bin/touch $out
'') ];
};
b = derivation {
name = "b";
system = "x86_64-darwin";
builder = "/bin/sh";
args = [ "-e" (builtins.toFile "builder" ''
[ "$(/usr/bin/arch)" = i386 ]
[ "$(/usr/bin/arch -arch x86_64 /bin/sh -c /usr/bin/arch)" = i386 ]
[ "$(/usr/bin/arch -arch arm64 /bin/sh -c /usr/bin/arch)" = arm64 ]
/usr/bin/touch $out
'') ];
};
}
```
This resolves #3810 by changing the behavior of `max-jobs = 0`, so
that specifying the option also avoids local building of derivations
with the attribute `preferLocalBuild = true`.
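A minimal sketch of an affected derivation (all values are illustrative; only the `preferLocalBuild` attribute matters here):
```nix
# Illustrative derivation: with max-jobs = 0 it is now dispatched to a remote
# builder instead of being built locally.
derivation {
  name = "prefer-local-example";
  system = "x86_64-linux";
  builder = "/bin/sh";
  args = [ "-c" "echo ok > $out" ];
  preferLocalBuild = true;
}
```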
It appears as though the fetch attribute, which
is simply a variant with 3 elements, implicitly
converts boolean arguments to integers. One must
use Explicit<bool> to correctly populate it with
a boolean. This was missing from the implementation,
and resulted in clearly boolean JSON fields being
treated as numbers.
libc++10 seems to be stricter on what it allows in variant conversion.
I'm not sure what the rules are here, but this is the minimal change
needed to get through the compilation errors.
Sometimes it's necessary to fetch a git repository at a revision and
it's unknown which ref contains the revision in question. An example
would be a Cargo.lock which only provides the URL and the revision when
using a git repository as build input.
However it's considered a bad practice to perform a full checkout of a
repository since this may take a lot of time and can eat up a lot of
disk space. This patch makes a full checkout explicit by adding an
`allRefs` argument to `builtins.fetchGit` which fetches all refs if
explicitly set to true.
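A minimal sketch of the new argument (the URL and revision are placeholders):
```nix
# Fetch a revision whose containing ref is unknown by explicitly opting into
# fetching all refs.
builtins.fetchGit {
  url = "https://example.org/some/repo.git";
  rev = "<commit not on the default branch>";
  allRefs = true;
}
```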
Closes #2409
A common pitfall when using e.g. `builtins.fetchGit` is the `fatal: not
a tree object`-error when trying to fetch a revision of a git-repository
that isn't on the `master` branch and no `ref` is specified.
In order to make clear what the problem is, I added a simple check
whether the revision in question exists, and if it doesn't, a more
meaningful error message is displayed:
```
nix-repl> builtins.fetchGit { url = "https://github.com/owner/myrepo"; rev = "<commit not on master>"; }
error: --- Error -------------------------------------------------------------------- nix
Cannot find Git revision 'bf1cc5c648e6aed7360448a3745bb2fe4fbbf0e9' in ref 'master' of repository 'https://gitlab.com/Ma27/nvim.nix'! Please make sure that the rev exists on the ref you've specified or add allRefs = true; to fetchGit.
```
Closes #2431
We embrace virtual the rest of the way, and get rid of the
`assert(false)` 0-param constructors.
We also list config base classes first, so the constructor order is
always:
1. all the configs
2. all the stores
Each in the same order
Move clearValue inside Value
mkInt instead of setInt
mkBool instead of setBool
mkString instead of setString
mkPath instead of setPath
mkNull instead of setNull
mkAttrs instead of setAttrs
mkList instead of setList*
mkThunk instead of setThunk
mkApp instead of setApp
mkLambda instead of setLambda
mkBlackhole instead of setBlackhole
mkPrimOp instead of setPrimOp
mkPrimOpApp instead of setPrimOpApp
mkExternal instead of setExternal
mkFloat instead of setFloat
Add note that the static mk* function should be removed eventually
PRs #4370 and #4348 had a bad interaction in that the second broke the
first one in a non-trivial way.
The issue was that since #4348 the logic for detecting whether a
derivation output is already built requires some logic that was specific
to the `LocalStore`.
It happens though that most of this logic could be upstreamed to any `Store`,
which is what this commit does.
This ignore was here because `queryPartialDrvOutputMap` was used both
1. as a cache to avoid having to re-read the derivation (when gc-ing for
example), and
2. as the source of truth for ca realisations
The use-case 2. required it to be able to work even when the derivation
wasn't there anymore (see https://github.com/NixOS/nix/issues/4138).
However, this use-case is now handled by `queryRealisation`, meaning
that we can safely error out if the derivation isn't there anymore
`buildPaths` can be called even for stores where it's not defined in case it's
bound to be a no-op.
The “no-op detection” mechanism was only detecting the case where `buildPaths`
was called on a set of (non-drv) paths that were already present on the store.
This commit extends this mechanism to also detect the case where `buildPaths`
is called on a set of derivation outputs which are already built on the store.
This only works with the ca-derivations flag. It could be possible to
extend this to also work without it, but it would add quite a bit of
complexity, and it's not used without it anyways.
Extend `FSAccessor::readFile` to allow not checking that the path is a
valid one, and rewrite `readInvalidDerivation` using this extended
`readFile`.
Several places in the code use `readInvalidDerivation`, either because
they need to read a derivation that has been written in the store but
not registered yet, or more generally to prevent a deadlock because
`readDerivation` tries to lock the state, so can't be called from a
place where the lock is already held.
However, `readInvalidDerivation` implicitly assumes that the store is a
`LocalFSStore`, which isn't always the case.
The concrete motivation for this is that it's required for `nix copy
--from someBinaryCache` to work, which is tremendously useful for the
tests.
Because of an overly eager refactoring, `addTextToStore` used to throw an
error because the input wasn't a valid nar.
Partially revert that refactoring to wrap the text into a proper nar
(using `dumpString`) to make this method work again
Rather than storing the derivation outputs as `drvPath!outputName` internally,
store them as `drvHashModulo!outputName` (or `outputHash!outputName` for
fixed-output derivations).
This makes the storage slightly more opaque, but enables an earlier
cutoff in cases where a fixed-output dependency changes (but keeps the
same output hash) − same as what we already do for input-addressed
derivations.
Add a new table for tracking the derivation output mappings.
We used to hijack the `DerivationOutputs` table for that, but (despite its
name) it isn't really a good fit:
- Its entries depend on the drv being a valid path, making it play badly with
garbage collection and preventing us from copying a drv output without copying
the whole drv closure too;
- It doesn't guarantee that the output path exists;
By using a different table, we can experiment with a different schema better
suited for tracking the output mappings of CA derivations.
(incidentally, this also fixes #4138)
For each known realisation, store:
- its output
- its output path
This comes with a set of needed changes:
- New `realisations` module declaring the types needed for describing
these mappings
- New `Store::registerDrvOutput` method registering all the needed information
about a derivation output (also replaces `LocalStore::linkDeriverToPath`)
- New `Store::queryRealisation` method to retrieve the information for a
derivation output
This introduces some redundancy on the remote-store side between
`wopQueryDerivationOutputMap` and `wopQueryRealisation`.
However we might need to keep both (regardless of backwards compat)
because we sometimes need to get some infos for all the outputs of a
derivation (where `wopQueryDerivationOutputMap` is handy), but not all
stores can implement it − because listing all the outputs of a
derivation isn't really possible for binary caches where the server
doesn't allow listing a directory.
In `nixStable` (2.3.7 to be precise) it's possible to connect to stores
using an IPv6 address:
nix ping-store --store ssh://root@2001:db8::1
This is also useful for `nixops(1)` where you could specify an IPv6
address in `deployment.targetHost`.
However, this behavior is broken on `nixUnstable` and fails with the
following error:
$ nix store ping --store ssh://root@2001:db8::1
don't know how to open Nix store 'ssh://root@2001:db8::1'
This happened because `openStore` from `libstore` uses the `parseURL`
function from `libfetchers` which expects a valid URL as defined in
RFC2732. However, this is unsupported by `ssh(1)`:
$ nix store ping --store 'ssh://root@[2001:db8::1]'
cannot connect to 'root@[2001:db8::1]'
This patch now allows both ways of specifying a store (`root@2001:db8::1`) and
also `root@[2001:db8::1]` since the latter one is useful to pass query
parameters to the remote store.
In order to achieve this, the following changes were made:
* The URL regex from `url-parts.hh` now allows an IPv6 address in the
form `2001:db8::1` and also `[2001:db8::1]`.
* In `libstore`, a new function named `extractConnStr` ensures that a
proper URL is passed to e.g. `ssh(1)`:
* If a URL looks like either `[2001:db8::1]` or `root@[2001:db8::1]`,
the brackets will be removed using a regex. No additional validation
is done here as only strings parsed by `parseURL` are expected.
* In any other case, the string will be left untouched.
* The rules above only apply for `LegacySSHStore` and `SSHStore` (a.k.a
`ssh://` and `ssh-ng://`).
Unresolved questions:
* I'm not really sure whether we want to allow both variants of IPv6
addresses in the URL parser. However it should be noted that both seem
to be possible according to RFC2732:
> This document includes an update to the generic syntax for Uniform
> Resource Identifiers defined in RFC 2396 [URL]. It defines a syntax
> for IPv6 addresses and allows the use of "[" and "]" within a URI
> explicitly for this reserved purpose.
* Currently, it's not supported to specify a port number after the
hostname; however, this doesn't seem to be really supported by the URL
parser either. Hence, this is probably out of scope here.
The `DerivationGoal` has a variable storing the “final” derivation
output paths that is used (amongst other things) to fill the environment
for the post build hook. However this variable wasn't set when the
build-hook is used, causing a crash when both hooks are used together.
Fix this by setting this variable (from the information in the db) after a run
of the post build hook.
This reverts commit 1b1e076033.
Using `queryPartialDerivationOutputMap` assumes that the derivation
exists locally which isn't the case for remote builders.
Since 0744f7f, it is now useful to have cache.nixos.org in substituters
even if /nix/store is not the Nix Store Dir. This can always be
overridden via configuration, though.
When running universal binaries like /bin/bash, Darwin XNU will choose
which architecture of the binary to use based on "binary preferences".
This change sets that to the current platform for aarch64 and x86_64
builds. In addition it now uses posix_spawn instead of the usual
execve. Note, that this does not prevent the other architecture from
being run, just advises which to use.
Unfortunately, posix_spawnattr_setbinpref_np does not appear to be
inherited by child processes in x86_64 Rosetta 2 translations, meaning
that this will not always work as expected.
For example:
{
arm = derivation {
name = "test";
system = "aarch64-darwin";
builder = "/bin/bash";
args = [ "-e" (builtins.toFile "test" ''
set -x
/usr/sbin/sysctl sysctl.proc_translated
/usr/sbin/sysctl sysctl.proc_native
[ "$(/usr/bin/arch)" = arm64 ]
/usr/bin/touch $out
'') ];
};
rosetta = derivation {
name = "test";
system = "x86_64-darwin";
builder = "/bin/bash";
args = [ "-e" (builtins.toFile "test" ''
set -x
/usr/sbin/sysctl sysctl.proc_translated
/usr/sbin/sysctl sysctl.proc_native
[ "$(/usr/bin/arch)" = i386 ]
echo It works!
/usr/bin/touch $out
'') ];
};
}
`arm' fails on x86_64-compiled Nix, but `arm' and `rosetta' succeed on
aarch64-compiled Nix. I suspect there is a way to fix this since:
$ /usr/bin/arch -arch x86_64 /bin/bash \
-c '/usr/bin/arch -arch arm64e /bin/bash -c /usr/bin/arch'
arm64
seems to work correctly. We may need to wait for Apple to update
system_cmds in opensource.apple.com to find out how though.
macOS systems with ARM64 can utilize a translation layer at
/Library/Apple/usr/libexec/oah to run x86_64 binaries. This change
makes Nix recognize that and add it to "extra-platforms". Note that there
are two cases here since Nix could be built for either x86_64 or
aarch64. In either case, we can switch to the other architecture.
Unfortunately there is not a good way to prevent aarch64 binaries from
being run in x86_64 contexts or vice versa - programs can always
execute programs for the other architecture.
If the build closure contains some CA derivations, then we can't know
ahead-of-time that we won't build anything, as early-cutoff might come in
at a later stage
Using fallocate() to preallocate file space does more harm than good:
- breaks compression on btrfs
- has been called "not the right thing to do" by xfs developers
(because delayed allocation that most filesystems implement leads to smarter
allocation than what the filesystem needs to do if we upfront fallocate files)
Without setting HGPLAIN, the user's environment leaks into
hg invocations, which means that the output may not be in the
expected format.
HGPLAIN is the Mercurial-recommended solution for this in that
it's intended for uses by scripts and programs which are looking
to parse Mercurial's output in a consistent manner.
This fixes a bug I encountered where `nix-store -qR` will deadlock when
the `--include-outputs` flag is passed and `max-connections=1`.
The deadlock occurs because `RemoteStore::queryDerivationOutputs` takes
the only connection from the connection pool and uses it to check the
daemon version. If the version is new enough, it calls
`Store::queryDerivationOutputs`, which eventually calls
`RemoteStore::queryPartialDerivationOutputMap`, where we take another
connection from the connection pool to check the version again. Because
we still haven't released the connection from the caller, this waits for
a connection to be available, causing a deadlock.
This diff solves the issue by using `getProtocol` to check the protocol
version in the caller `RemoteStore::queryDerivationOutputs`, which
immediately frees the connection back to the pool before returning the
protocol version. That way we've already freed the connection by the
time we call `RemoteStore::queryPartialDerivationOutputMap`.
Fixes:
$ nix build --store /tmp/nix /home/eelco/Dev/patchelf#hydraJobs.build.x86_64-linux
warning: Git tree '/home/eelco/Dev/patchelf' is dirty
error: --- RestrictedPathError ------------------------------------------------------------------------------------------- nix
access to path '/tmp/nix/nix/store/xmkvfmffk7xfnazykb5kx999aika8an4-source/flake.nix' is forbidden in restricted mode
(use '--show-trace' to show detailed location information)
Until now, it was not possible to substitute missing paths from e.g.
`https://cache.nixos.org` on a remote server when building on it using
the new `ssh-ng` protocol.
This is because every store implementation except legacy `ssh://`
ignores the substitution flag passed to `Store::queryValidPaths` while
the `legacy-ssh-store` substitutes the remote store using
`cmdQueryValidPaths` when the remote store is opened with `nix-store
--serve`.
This patch slightly modifies the daemon protocol to allow passing an
integer value suggesting whether to substitute missing paths during
`wopQueryValidPaths`. To implement this on the daemon-side, the
substitution logic from `nix-store --serve` has been moved into a
protected method named `Store::substitutePaths` which gets currently
called from `LocalStore::queryValidPaths` and `Store::queryValidPaths`
if `maybeSubstitute` is `true`.
Fixes #2770
Crucially this introduces BoehmGCStackAllocator, but it also
adds a bunch of wiring to avoid making libutil depend on bdw-gc.
Part of the solutions for #4178, #4200
This removes the extra-substituters and extra-sandbox-paths settings
and instead makes every array setting extensible by setting
"extra-<name> = <value>" in the configuration file or passing
"--<name> <value>" on the command line.
This makes it even clearer which of the two hashes was specified in the
nix files. Some may think that "wanted" and "got" is obvious, but:
"got" could mean "got in nix file" and "wanted" could mean "want to see in nix file".
This makes it possible to have per-project configuration in flake.nix,
e.g. binary caches and other stuff:
nixConfig.bash-prompt-suffix = "[1;35mngi# [0m";
nixConfig.substituters = [ "https://cache.ngi0.nixos.org/" ];
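For illustration, a minimal flake carrying such settings might look like this (values are just examples):
```nix
# Per-project settings declared in the flake itself (example values).
{
  nixConfig = {
    substituters = [ "https://cache.ngi0.nixos.org/" ];
    bash-prompt-suffix = "ngi# ";
  };

  outputs = { self }: { };
}
```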
This is primarily useful if you're hacking simultaneously on a package
and one of its dependencies. E.g. if you're hacking on Hydra and Nix,
you would start a dev shell for Nix, and then a dev shell for Hydra as
follows:
$ nix develop \
--redirect .#hydraJobs.build.x86_64-linux.nix ~/Dev/nix/outputs/out \
--redirect .#hydraJobs.build.x86_64-linux.nix.dev ~/Dev/nix/outputs/dev
(This assumes hydraJobs.build.x86_64-linux has a passthru.nix
attribute. You can also use a store path.)
This causes all references in the environment to those store paths to
be rewritten to ~/Dev/nix/outputs/{out,dev}. Note: unfortunately, you
may need to set LD_LIBRARY_PATH=~/Dev/nix/outputs/out/lib because
Nixpkgs' ld-wrapper only adds -rpath entries for -L flags that point
to the Nix store.
Fix #3975: Currently if Ctrl-C is pressed during a phase, the interactive subshell
is not exited. Removing --rcfile when --phase is present makes bash
non-interactive
In particular, this means that derivations can output derivations. But
that ramification isn't (yet!) as useful as we would want, since there is
no way to have a dependent derivation that is itself a dependent
derivation.
Make nix output completions in the form `completion\tdescription`.
This can't be used by bash (afaik), but other shells like zsh or fish
can display it along the completion choices
Observed on Centos 7 when user namespaces are disabled:
DerivationGoal::startBuilder() throws an exception, ~DerivationGoal()
waits for the child process to exit, but the child process hangs
forever in drainFD(userNamespaceSync.readSide.get()) in
DerivationGoal::runChild(). Not sure why the SIGKILL doesn't get
through.
Issue #4092.
When running `nix build -L` it can be fairly hard to read the output if
the build program intentionally renders whitespace on the left. A
typical example is `g++` displaying compilation errors.
With this patch, the whitespace on the left is retained to make the log
more readable:
```
foo> no configure script, doing nothing
foo> building
foo> foobar.cc: In function 'int main()':
foo> foobar.cc:5:5: error: 'wrong_func' was not declared in this scope
foo> 5 | wrong_func(1);
foo> | ^~~~~~~~~~
error: --- Error ------------------------------------------------------------------------------------- nix
error: --- Error --- nix-daemon
builder for '/nix/store/i1q76cw6cyh91raaqg5p5isd1l2x6rx2-foo-1.0.drv' failed with exit code 1
```
The registry targets generally follow a URL formatting schema with
support for a query parameter of "?dir=subpath" to specify a sub-path
location below the URL root.
Alternatively, an absolute path can be specified. This specification
mode accepts the query parameter but ignores/drops it. It would
probably be better to either (a) disallow the query parameter for the
path form, or (b) recognize the query parameter and add it to the path.
This patch implements (b) for consistency, and to make it easier for
tooling that might switch between a remote git reference and a local
path reference.
See also issue #4050.
This change provides support for using access tokens with other
instances of GitHub and GitLab beyond just github.com and
gitlab.com (especially company-specific or foundation-specific
instances).
This change also provides the ability to specify the type of access
token being used, where different types may have different handling,
based on the forge type.
std::optional had redundant checks for whether it had a value.
An object is emplaced either way so it can be dereferenced
without repeating a value check
After 0ed946aa61, the max-jobs setting (-j/--max-jobs)
stopped working.
The reason was that nrLocalBuilds (which is compared to maxBuildJobs to figure
out whether the limit is reached or not) is not yet incremented when tryBuild
is started; so the solution is to move the check to tryLocalBuild.
Closes https://github.com/nixos/nix/issues/3763
We don't need it yet, but we could/should in the future, and it's a
cost-free change since we already have the reference. I like it.
Co-authored-by: Robert Hensing <roberth@users.noreply.github.com>
Rather than showing an integer as the default, instead show the boolean
referenced in the description.
The nix.conf.5 manpage used to show "default: 0", which is unnecessarily
opaque and confusing (doesn't 0 mean false, even though the default is
true?); now it properly shows that the default is true.
pkgs.fetchurl supports an executable argument, which is especially nice
when downloading a large executable. This patch adds the same option to
nix-prefetch-url.
I have tested this to work on the simple case of prefetching a little
executable:
1. nix-prefetch-url --executable https://my/little/script
2. Paste the hash into a pkgs.fetchurl-based package, script-pkg.nix
3. Delete the output from the store to avoid any misidentified artifacts
4. Realise the package script-pkg.nix
5. Run the executable
I repeated the above while using --name, as well.
I suspect --executable would have no meaningful effect if combined with
--unpack, but I have not tried it.
Since 108debef6f we allow a
`url`-attribute for the `github`-fetcher to fetch tarballs from
self-hosted `gitlab`/`github` instances.
However it's not used when defining e.g. a flake-input
foobar = {
type = "github";
url = "gitlab.myserver";
/* ... */
}
and breaks with an evaluation-error:
error: --- Error --------------------------------------nix
unsupported input attribute 'url'
(use '--show-trace' to show detailed location information)
This patch allows flake-inputs to be fetched from self-hosted instances
as well.
`nix flake info` calls the github 'commits' API, which requires
authorization when the repository is private. Currently this request
fails with a 404.
This commit adds an authorization header when calling the 'commits' API.
It also changes the way that the 'tarball' API authenticates, moving the
user's token from a query parameter into the Authorization header.
The query parameter method is recently deprecated and will be disallowed
in November 2020. Using them today triggers a warning email.
It is apparently required for using `toJSONObject()`, which we do inside
the header file (because it's in a template).
This was accidentally working when building Nix itself (presumably because
`config.hh` was always included after `nlohmann/json.hpp`) but caused a
(pretty dirty) build failure in the perl bindings package.
Rework the `Store` hierarchy so that there's now one hierarchy for the
store configs and one for the implementations (where each implementation
extends the corresponding config). So a class hierarchy like
```
StoreConfig-------->Store
| |
v v
SubStoreConfig----->SubStore
| |
v v
SubSubStoreConfig-->SubSubStore
```
(with virtual inheritance to prevent DDD, i.e. the diamond problem).
The advantage of this architecture is that we can now introspect the configuration of a store without having to instantiate the store itself
Add a new `init()` method to the `Store` class that is supposed to
handle all the effectful initialisation needed to set up the store.
The constructor should remain side-effect free and just initialize the
C++ data structure.
The goal behind that is that we can create “dummy” instances of each
store to query static properties about it (the parameters it accepts for
example)
Directly register the store classes rather than a function to build an
instance of them.
This gives the possibility to introspect static members of the class or
choose different ways of instantiating them.
Add a fallback path in `queryPartialDerivationOutputMap` for daemons
that don't support it.
Also upstreams a couple methods from `SSHStore` to `RemoteStore` as this
is needed to handle the fallback path.
Otherwise the result of the printing can't be parsed back correctly by
Nix (because the unescaped `${` will be parsed as the beginning of an
anti-quotation).
Fix #3989
When deploying a Hydra instance with current Nix master, most builds
would not run because of errors like this:
queue monitor: error: --- Error --- hydra-queue-runner
error: --- UsageError --- nix-daemon
not a content address because it is not in the form '<prefix>:<rest>': /nix/store/...-somedrv
The last error message is from parseContentAddress, which expects a
colon-separated string, however what we got here is a store path.
Looking at the worker protocol, the following message sent to the Nix
daemon caused the error above:
0x1E -> wopQuerySubstitutablePathInfos
0x01 -> Number of paths
0x16 -> Length of string
"/nix/store/...-somedrv"
0x00 -> Length of string
""
Looking at writeStorePathCAMap, the store path is indeed the first field
that's transmitted. However, readStorePathCAMap expects it to be the
*second* field *on my machine*, since expression evaluation order is a
classic form of unspecified behaviour[1] in C++.
This has been introduced in https://github.com/NixOS/nix/pull/3689,
specifically in commit 66a62b3189.
[1]: https://en.wikipedia.org/wiki/Unspecified_behavior#Order_of_evaluation_of_subexpressions
Signed-off-by: aszlig <aszlig@nix.build>
The change in 626200713b didn't account
for when the number of auto arguments is bigger than the number of
formal arguments. This causes the following:
$ nix-instantiate --eval -E '{ ... }@args: args.foo' --argstr foo foo
nix-instantiate: src/libexpr/attr-set.hh:55: void nix::Bindings::push_back(const nix::Attr&): Assertion `size_ < capacity_' failed.
Aborted (core dumped)
If we resolve using the known path of a derivation whose output we
didn't have, we previously blew up. Now we just fail gracefully,
returning the map of all outputs unknown.
This means profiles outside of /nix/var/nix/profiles don't get
garbage-collected. It also means we don't need to scan
/nix/var/nix/profiles for GC roots anymore, except for compatibility
with previously existing generations.
This was broken in 50f13b06fb. Once
again it turns out that putting a bool in a std::variant is a bad
idea, since pointers get silently cast to them...
The command line options --arg and --argstr that are used by a bunch of
CLI commands to pass arguments to top-level functions in files go
through the same code-path as auto-calling top-level functions with
their default arguments - this, however, was only passing the arguments
that were *explicitly* mentioned in the formals of the function - in the
case of an as-pattern with an ellipsis (eg args @ { ... }) extra passed
arguments would get omitted. This fixes that to instead pass *all*
specified auto args in the case that our function has an ellipsis.
Fixes #598
When the log.showSignature git setting is enabled, the output of
"git log" contains signature verification information in addition to the
timestamp GitInputScheme::fetch wants:
$ git log -1 --format=%ct
gpg: Signature made Sat 07 Sep 2019 02:02:03 PM PDT
gpg: using RSA key 0123456789ABCDEF0123456789ABCDEF01234567
gpg: issuer "user@example.com"
gpg: Good signature from "User <user@example.com>" [ultimate] 1567890123
1567890123
For folks that had log.showSignature set, this caused all nix operations
on flakes to fail:
$ nix build
error: stoull
Evidently this was never implemented because Nix switched to using
`buildDerivation` exclusively before `build-remote.pl` was rewritten.
The `nix-copy-ssh` test (already) tests this.
Include a long comment explaining the policy. Perhaps this can be moved
to the manual at some point in the future.
Also bump the daemon protocol minor version, so clients can tell whether
`wopBuildDerivation` supports trustless CA derivation building. I hope
to take advantage of this in a follow-up PR to support trustless remote
building with the minimal sending of derivation closures.
This seems more correct. It also means one can specify the features a
store should support with --store and remote-store=..., which is useful.
I use this to clean up the build remotes test.
Before, processConnection wanted to know a user name and user id, and
`nix-daemon --stdio`, when it isn't proxying to an underlying daemon,
would just assume "root" and 0. But `nix-daemon --stdio` (no proxying)
shouldn't make guesses about who holds the other end of its standard
streams.
Now processConnection takes an "auth hook", so `nix-daemon` can provide
the appropriate policy and daemon.cc doesn't need to know or care what
it is.
Thanks @regnat for catching one of them. The other follows for many of
the same reasons. I'm fine with fixing others on a need-to-fix basis,
provided there are no regressions.
When having a message like `waiting for a machine to build X` and
building with `nix build -L`, the log-prefix is always colored yellow[1]
at a small terminal width, as everything (including the ANSI color-reset) is
stripped away.
To work around that problem, this patch explicitly adds an `ANSI_NORMAL`
to the end of the line.
[1] https://imgur.com/a/FjtJOk3
Fixes #3872.
This is a bit hacky. Ideally we would automatically re-evaluate the
failed attribute iff we need to print the error message (so in
commands like 'nix search' we wouldn't re-evaluate because we're
suppressing errors).
Some users have their own hashed-mirrors setup, that is used to mirror
things in addition to what’s available on tarballs.nixos.org. Although
this should be feasible to do with a Binary Cache, it’s not always
easy, since you have to remember what "name" each of the tarballs has.
Continuing to support hashed-mirrors is cheap, so it’s best to leave
support in Nix. Note that NIX_HASHED_MIRRORS is also supported in
Nixpkgs through fetchurl.nix.
Note that this excludes tarballs.nixos.org from the default, as in
#3689. All of these are available on cache.nixos.org.
This allows interactively inspecting the state of the evaluator at the
point of failure.
Example:
$ nix eval path:///home/eelco/Dev/nix/flake2#modules.hello-closure._final --start-repl-on-eval-errors
error: --- TypeError -------------------------------------------------------------------------------------------------------------------------------------------------------------------- nix
at: (20:53) in file: /nix/store/4264z41dxfdiqr95svmpnxxxwhfplhy0-source/flake.nix
19|
20| _final = builtins.foldl' (xs: mod: xs // (mod._module.config { config = _final; })) _defaults _allModules;
| ^
21| };
attempt to call something which is not a function but a set
Starting REPL to allow you to inspect the current state of the evaluator.
The following extra variables are in scope: arg, fun
Welcome to Nix version 2.4. Type :? for help.
nix-repl> fun
error: --- EvalError -------------------------------------------------------------------------------------------------------------------------------------------------------------------- nix
at: (150:28) in file: /nix/store/4264z41dxfdiqr95svmpnxxxwhfplhy0-source/flake.nix
149|
150| tarballClosure = (module {
| ^
151| extends = [ self.modules.derivation ];
attribute 'derivation' missing
nix-repl> :t fun
a set
nix-repl> builtins.attrNames fun
[ "tarballClosure" ]
nix-repl>
Occasionally, `nix-build --check` is fairly helpful and I'd like to be
able to use this feature for flakes that need to be built with `nix
build` as well.
This fixes an error found in builtins.path that looks like:
store path mismatch in (possibly filtered) path added from '/private/tmp/nix-shell.CyXViH/nix-test/filter-source/filterin'
when no hash is specified
This adds a ‘nix export’ command which hooks into nix-bundle. It can
be used in a similar way as nix-bundle, with the benefit of hooking
into the new “app” functionality. For instance,
$ nix export nixpkgs#jq
$ ./jq --help
jq - commandline JSON processor [version 1.6]
...
$ scp jq machine-without-nix:
$ ssh machine-without-nix ./jq
jq - commandline JSON processor [version 1.6]
...
Note that nix-bundle currently requires Linux to run. Other exporters
might not have that requirement.
“exporters” are meant to be reusable, so that other repos can
implement their own bundling.
Fixes#3705
match_continuous limits the search to the current start position,
instead of searching the entire file.
On libc++, this improves performance dramatically:
$ time /nix/store/70ai68dfm6xbzwn26j5n4li9di52ylia-nix-3.0pre20200728_c159f48/bin/nix print-dev-env >/dev/null
/nix/store/70ai68dfm6xbzwn26j5n4li9di52ylia-nix-3.0pre20200728_c159f48/bin/ni 2.39s user 0.19s system 64% cpu 4.032 total
$ time /nix/store/cwjfxxlp83zln4mfyy1d2dbsx7f6s962-nix-3.0pre20200728_dirty/bin/nix print-dev-env >/dev/null
/nix/store/cwjfxxlp83zln4mfyy1d2dbsx7f6s962-nix-3.0pre20200728_dirty/bin/nix 0.09s user 0.05s system 65% cpu 0.204 total
Fixes#3874
Since 6185d25e52, this was very
latency-bound since it required a round-trip for every 32 KiB. So for
example copying a 514 MiB closure over a virtual ethernet device with
an artificial delay of just 1 ms took 343s. Now it takes 2.7s.
Fixes#3372.
If a repo is dirty, it used to return a `rev` object with an "empty"
sha1 (0000000000000000000000000000000000000000). Please note that this
only applies for `builtins.fetchGit` and *not* for `builtins.fetchTree{
type = "git"; }`.
The new interface we offer provides a way of getting all the
DerivationOutputs with the storePaths directly, based on the observation
that it's the most common use case.
istream->tellg() returns -1 so we can't get the number of bytes
written.
Fixes 'uploaded 's3://nix-cache/nar/00819r9lp5kajr6baxfw5dhhc0cx8ndxaz43qmd2f0gn1hk1ynlp.nar.xz' (-1 bytes) in 11620 ms' messages.
The original idea was to implement a git-fetcher in Nix's core that
supports content hashes[1]. In #3549[2] it has been suggested to
actually use `fetchTree` for this since it's a fairly generic wrapper
over the new fetcher-API[3] and already supports content-hashes.
This patch implements a new git-fetcher based on `fetchTree` by
incorporating the following changes:
* Removed the original `fetchGit`-implementation and replaced it with an
alias on the `fetchTree` implementation.
* Ensured that the `git`-fetcher from `libfetchers` always computes a
content-hash and returns an "empty" revision on dirty trees (the
latter one is needed to retain backwards-compatibility).
* The hash-mismatch error in the fetcher-API exits with code 102 as it
usually happens whenever a hash-mismatch is detected by Nix.
* Removed the `flakes`-feature-flag: I didn't see a reason why this API
is so tightly coupled to the flakes-API and at least `fetchGit` should
remain usable without any feature-flags.
* It's only possible to specify a `narHash` for a `git`-tree if either a
`ref` or a `rev` is given[4].
* It's now possible to specify a URL without a protocol. If it's missing,
`file://` is automatically added as it was the case in the original
`fetchGit`-implementation.
[1] https://github.com/NixOS/nix/pull/3216
[2] https://github.com/NixOS/nix/pull/3549#issuecomment-625194383
[3] https://github.com/NixOS/nix/pull/3459
[4] https://github.com/NixOS/nix/pull/3216#issuecomment-553956703
The new error-format is pretty nice from a UX point-of-view, however
it's fairly hard to parse the output e.g. for editor plugins such as
vim-ale[1] that use `nix-instantiate --parse` to determine syntax errors in
Nix expression files.
This patch extends the `internal-json` logger by adding the fields
`line`, `column` and `file` to easily locate an error in a file and the
field `raw_msg` which contains the error-message itself without
code-lines and additional helpers.
An example output may look like this:
```
[nix-shell]$ ./inst/bin/nix-instantiate ~/test.nix --log-format minimal
{"action":"msg","column":1,"file":"/home/ma27/test.nix","level":0,"line":4,"raw_msg":"syntax error, unexpected IF, expecting $end","msg":"<full error-msg with code-lines etc>"}
```
[1] https://github.com/dense-analysis/ale
This assumption is broken by CA derivations. Making a PR now to do the
breaking daemon change as soon as possible (if it is already too late,
we can bump the protocol instead).
I think this better captures the intent of what's going on: we either
have an opaque store path, or a drv path with some outputs.
Having this structure will also help us support CA derivations: we'll
have to allow the output paths to be optional, so the structure we gain
now makes up for the structure we lose then.
It's a tiny function which:
- is hardly worth abstracting over, and is also only used once.
- doesn't work once we get CA drvs.
I rewrote the one call site to be forwards compatible with CA
derivations, and also potentially more performant: instead of reading in
the derivation it can just consult the SQLite DB in the common case.
to each Store implementation. The generic regStore implementation will
only be for the ambiguous shorthands, like "" and "auto".
This could also get us closer to simplifying the daemon command.
Currently resizing of the terminal doesn't play nicely with
nix edit when using kakoune as the editor, as it relies on the
SIGWINCH signal which is trapped by nix. How this is not a problem
with e.g. vim is beyond me.
Virtually all other exec* calls are following a call to
restoreSignals(). This commit adds this behavior to nix edit
as well.
I got it to just become `LocalStore::addToStoreFromDump`, cleanly taking
a store and then doing nothing too fancy with it.
`LocalStore::addToStore(...Path...)` is now just a simple wrapper with a
bare-bones sinkToSource of the right dump command.
That is, the commands 'nix path-info nixpkgs#hello' and 'nix path-info
/nix/store/00ls0qi49qkqpqblmvz5s1ajl3gc63lr-hello-2.10.drv' now do the
same thing (i.e. build the derivation and operate on the output store
path, rather than the .drv path).
This reverts commit a2c27022e9. See
addToStoreSlow(), we don't need to handle this case efficiently
anymore. In fact, we can almost remove the method/hashAlgo arguments
since the non-recursive and/or non-SHA256 cases are hardly used anymore.
We were calculating the nar hash wrong when the file ingestion method
was flat. I don't think there's anything we can do in that case but dump
the file again, so that's what I do.
As an optimization, we again could reuse the original dump for just the
recursive and non-sha256 case, but I'd rather do that after this fix, and
after my other PRs which deduplicate this code.
Until now, the `gitlab`-fetcher determined the source's rev by checking
the latest commit of the given `ref` using the
`/repository/branches`-API.
This breaks however when trying to fetch a gitlab-repo by its tag:
```
$ nix repl
nix-repl> builtins.fetchTree gitlab:Ma27/nvim.nix/0.2.0
error: --- Error ------------------------------------------------------------------------------------- nix
unable to download 'https://gitlab.com/api/v4/projects/Ma27%2Fnvim.nix/repository/branches/0.2.0': HTTP error 404 ('')
```
When using the `/commits?ref_name`-endpoint[1] you can pass any kind of
valid ref to the `gitlab`-fetcher.
Please note that this fetches only the first 20 commits on a ref;
unfortunately there's currently no endpoint which retrieves just the
latest commit of any kind of `ref`.
[1] https://docs.gitlab.com/ee/api/commits.html#list-repository-commits
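For completeness, the same fetch written in attribute form (a sketch; the
`owner`/`repo` attribute names are assumed to match the github-style
fetchers):
```
builtins.fetchTree {
  type = "gitlab";
  owner = "Ma27";
  repo = "nvim.nix";
  ref = "0.2.0";  # a tag, resolved via the /commits?ref_name endpoint
}
```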
We've added the variant to `DerivationOutput` to support them, but made
`DerivationOutput::path` partial to avoid actually implementing them.
With this change, we can all collaborate on "just" removing
`DerivationOutput::path` calls to implement CA derivations.
The `m` acts as the termination symbol of an ANSI graphics (SGR) escape
sequence. Because of this, the `;1m` doesn't have any effect and is
printed directly to the console:
```
$ nix repl
> builtins.fetchGit { /* ... */ }
{ outPath = "/nix/store/s0f0iz4a41cxx2h055lmh6p2d5k5bc6r-source"; rev = "e73e45b723a9a6eecb98bd5f3df395d9ab3633b6"; revCount = ;1m428; shortRev = "e73e45b"; submodules = ;1mfalse; }
```
Introduced by 6403508f5a.
This reduces memory consumption of
nix-instantiate \
-E 'with import <nixpkgs> {}; runCommand "foo" { src = ./blender; } "echo foo"' \
--option nar-buffer-size 10000
(where ./blender is a 1.1 GiB tree) from 1716 to 36 MiB, while still
ensuring that we don't do any write I/O for small source paths (up to
'nar-buffer-size' bytes). The downside is that large paths are now
always written to a temporary location in the store, even if they
produce an already valid store path. Thus, adding large paths might be
slower and run out of disk space. ¯\_(ツ)_/¯ Of course, you can always
restore the old behaviour by setting 'nar-buffer-size' to a very high
value.
"uid-range" provides 65536 UIDs to a build and runs the build as root
in its user namespace. "systemd-cgroup" allows the build to mount the
systemd cgroup controller (needed for running systemd-nspawn and NixOS
containers).
Also, add a configuration option "auto-allocate-uids" which is needed
to enable these features, and some experimental feature gates.
So to enable support for containers you need the following in
nix.conf:
experimental-features = auto-allocate-uids systemd-cgroup
auto-allocate-uids = true
system-features = uid-range systemd-cgroup
2^18 was overkill. The idea was to enable multiple containers to run
inside a build. However, those containers can use the same UID range -
we don't really care about perfect isolation between containers inside
a build.
Also, run builds in a cgroup namespace (ensuring /proc/self/cgroup
doesn't leak information about the outside world) and mount /sys. This
enables running systemd-nspawn and thus NixOS containers in a Nix
build.
Rather than rely on a nixbld group, we now allocate UIDs/GIDs
dynamically starting at a configurable ID (872415232 by default).
Also, we allocate 2^18 UIDs and GIDs per build, and run the build as
root in its UID namespace. (This should not be the default since it
breaks some builds. We probably should enable this conditional on a
requiredSystemFeature.) The goal is to be able to run (NixOS)
containers in a build. However, this will also require some cgroup
initialisation.
The 2^18 UIDs/GIDs is intended to provide enough ID space to run
multiple containers per build, e.g. for distributed NixOS tests.
This allows you to refer to an input from another flake. For example,
$ nix run --inputs-from /path/to/hydra nixpkgs#hello
runs 'hello' from the 'nixpkgs' inputs of the 'hydra' flake.
Fixes#3769.
'nix run' will try to run $out/bin/<name>, where <name> is the
derivation name (excluding the version). This often works well:
$ nix run nixpkgs#hello
Hello, world!
$ nix run nix -- --version
nix (Nix) 2.4pre20200626_adf2fbb
$ nix run patchelf -- --version
patchelf 0.11.20200623.e61654b
$ nix run nixpkgs#firefox -- --version
Mozilla Firefox 77.0.1
$ nix run nixpkgs#gimp -- --version
GNU Image Manipulation Program version 2.10.14
though not always:
$ nix run nixpkgs#git
error: unable to execute '/nix/store/kp7wp760l4gryq9s36x481b2x4rfklcy-git-2.25.4/bin/git-minimal': No such file or directory
Generalize `queryDerivationOutputNames` and `queryDerivationOutputs` by
adding a `queryDerivationOutputMap` that returns the map
`outputName=>outputPath`
(note that this is not equivalent to merging the results of
`queryDerivationOutputs` and `queryDerivationOutputNames` as sets don't
preserve the order, so we would end up with an incorrect mapping).
Add a way to get all the outputs of a derivation with their label.
Rename StorePathMap to OutputPathMap.
Now that the derivation outputs are parsed up front, we can avoid a
reparse by reusing them. Also, this just feels a bit better as the `output*` env
vars are more of a `libnixexpr` interface than `libnixstore` interface:
ultimately, it's the derivation outputs that decide whether the
derivation is fixed-output.
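For context, a sketch of what "the derivation outputs decide" means at the
language level: it is the fixed-output attributes below (placeholder
builder and hash), not the `output*` environment variables, that make a
derivation fixed-output:
```
derivation {
  name = "example-fixed-output";
  system = builtins.currentSystem;
  builder = "/bin/sh";                 # placeholder builder
  args = [ "-c" "echo hi > $out" ];
  # These attributes end up in the parsed derivation outputs:
  outputHashMode = "flat";
  outputHashAlgo = "sha256";
  outputHash = "0000000000000000000000000000000000000000000000000000";  # placeholder
}
```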
Yes, hashed mirrors might go away with #3689, but this bit of code would
be moved rather than deleted, so it's worth doing a cleanup anyways I
think.
When we merge with master, the new lack of string types makes this case
impossible (after parsing). Later, when we actually implement
CA-derivations, we'll change the types to allow that.
We have a larger problem that passing computed strings to the first
variable argument of many exception constructors is unsafe, because that
first variable argument is interpreted not as a plain string but as a
format string, and if it contains '%' boost::format will abort, since
there are no arguments to the format string.
In this particular instance '%' was used as part of an escape code in a
URL, which, when the download failed, caused Nix to abort displaying the
`FileTransferError`.
This is a bit complex because we want to expose extra functionality the
wrapped class has. Perhaps there is some inheritance trickery to do this
nicer, but I don't know it, and this is the first thing we tried after a
series of attempts that did build.
This design is kind of like that of Rust's Writer, Reader, or Iter
adapters, which implement more traits based on what the inner type
implements.
This was introduced in fa125b9b28, and
then "reverted" in 1cf4801108, except that
revert left the struct around doing nothing useful.
We're removing it all the way now because we want to make a new
`TeeSink` complementing the already-existing `TeeSource`, which is
actually a completely different concept as far as the class hierarchy is
concerned.
I’m not 100% sure this is wanted since it kind of makes everything
have to know about ca even if they don’t really want to. But it also
makes it easier to deal with looking up ca.
E.g.
$ nix run nixpkgs#hello
error: --- Error ---------- nix
flake 'flake:nixpkgs' does not provide attribute 'apps.x86_64-linux.hello' or 'hello'
instead of
$ nix run nixpkgs#hello
error: --- Error ---------- nix
flake 'flake:nixpkgs' does not provide attribute 'hello'
Not implementing anything here, just throwing an error if a derivation
sets `__contentAddressed = true` without
`--experimental-features content-addressed-paths`
(and also with it as there's nothing implemented yet)
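For reference, a hedged sketch of the opt-in this checks for (placeholder
builder; with this change it only produces the error described above
unless the experimental feature is enabled):
```
derivation {
  name = "example-ca";
  system = builtins.currentSystem;
  builder = "/bin/sh";               # placeholder builder
  args = [ "-c" "echo hi > $out" ];
  __contentAddressed = true;         # triggers the new experimental-feature error
  outputHashMode = "recursive";      # how CA derivations are typically declared
  outputHashAlgo = "sha256";
}
```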
This further continues with the dependency inversion. Also I just went
ahead and exposed `parseDerivation`: it seems like the more proper
building block, and not a bad thing to expose if we are trying to be
less wedded to drv files on disk anyway.
On nix-env -qa -f '<nixpkgs>', this reduces maximum RSS by 20970 KiB
and runtime by 0.8%. This is mostly because we're not parsing the hash
part as a hash anymore (just validating that it consists of base-32
characters).
Also, replace storePathToHash() by StorePath::hashPart().
E.g. instead of
error: --- BuildError ----------------------------------------------- nix
builder for '/nix/store/03nk0a3n8h2948k4lqfgnnmym7knkcma-foo.drv' failed with exit code 1
error: --- Error ---------------------------------------------------- nix
build of '/nix/store/03nk0a3n8h2948k4lqfgnnmym7knkcma-foo.drv' failed
we now get
error: --- Error ---------------------------------------------------- nix
builder for '/nix/store/03nk0a3n8h2948k4lqfgnnmym7knkcma-foo.drv' failed with exit code 1
Originally, the test was only checking for different “real” storeDir.
That’s an easy case to handle, but the much harder one is if different
virtual store dirs are used. To do this, we need the SubstitutionGoal
to know about the ca, so it can recalculate the path to copy it over.
An important note here is that the store path passed to copyStorePath
needs to be one for srcStore - so that queryPathInfo works properly.
This also adds an error message when the store path from queryPathInfo
is different from the one we requested.
Substituters can substitute from one store dir to another with a
little bit of help. The store api just needs to have a CA so it can
recompute the store path based on the new store dir. We can only do
this for fixed output derivations with no references, though.
This function was used in only one place, where it could easily be
replaced by readDerivation() since it's not
performance-critical. (This function appears to have been modelled
after queryDerivationOutputs(), which exists only to make the garbage
collector faster.)
This fixes an issue where lockfile generation was not idempotent:
after updating a lockfile, a "follows" node would end up pointing to a
new copy of the node, rather than to the original node.
fetchTarball, fetchTree, and fetchGit all have *optional* hash attrs.
This means that we need to be careful with what we allow to avoid
accidentally making these defaults. When ‘hash = ""’ we assume the
empty hash is wanted.
follow up of https://github.com/NixOS/nix/pull/3544
This allows hash="" so that it can be used for debugging purposes. For
instance, this gives you an error message like:
warning: found empty hash, assuming you wanted 'sha256:0000000000000000000000000000000000000000000000000000'
hash mismatch in fixed-output derivation '/nix/store/asx6qw1r1xk6iak6y6jph4n58h4hdmbm-nix':
wanted: sha256:0000000000000000000000000000000000000000000000000000
got: sha256:0fpfhipl9v1mfzw2ffmxiyyzqwlkvww22bh9wcy4qrfslb4jm429
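A minimal sketch of the expression side of that debugging workflow (the
URL is hypothetical; `sha256` is the usual fixed-output hash attribute):
```
builtins.fetchTarball {
  url = "https://example.org/source.tar.gz";  # hypothetical URL
  sha256 = "";  # treated as the all-zero hash, yielding the warning above
}
```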
Needed so that we can include it as a logger in loggers.cc without
adding a dependency on nix
This also requires moving names.hh to libutil to prevent a circular
dependency between libmain and libexpr
Make the printing of the build logs systematically go through the
logger, and replicate the behavior of `no-build-output` by having two
different loggers (one that prints the build logs and one that doesn't)
Add a new `--log-format` cli argument to change the format of the logs.
The possible values are
- raw (the default one for old-style commands)
- bar (the default one for new-style commands)
- bar-with-logs (equivalent to `--print-build-logs`)
- internal-json (the internal machine-readable json format)
The initial contents of the flake is specified by the
'templates.<name>' or 'defaultTemplate' output of another flake. E.g.
outputs = { self }: {
templates = {
nixos-container = {
path = ./nixos-container;
description = "An example of a NixOS container";
};
};
};
allows
$ nix flake init -t templates#nixos-container
Also add a command 'nix flake new', which is identical to 'nix flake
init' except that it initializes a specified directory rather than the
current directory.
bool coerces anything >0 to true, but in the future we may have other
file ingestion methods. This shows a better error message when the
“recursive” byte isn’t 1.
Instead, `Hash` uses `std::optional<HashType>`. In the future, we may
also make `Hash` itself require a known hash type, encouraging people to
use `std::optional<Hash>` instead.
Caches tree in addition to lockedRef, and explicitly writes out the
logic for different combinations of cached/uncached flakes and
indirect/resolved/locked flakes. This eliminates unnecessary calls to
lookupInFlakeCache, fetchTree, maybeLookupFlake, and flakeCache.push_back.
As `git fetch` may choose to interpret a refspec to its liking, ensure
that we pass refs that begin with `refs/` as-is, and prepend `refs/heads/`
to all others. Otherwise, branches named `heads/foo` (I know it's bad, but
it's allowed) would be fetched as `foo` instead of `heads/foo`.
The previous regex was too strict and did not match what git was allowing. It
could lead to `fetchGit` not accepting valid branch names, even though they
exist in a repository (for example, branch names containing `/`, which are
pretty standard, like `release/1.0` branches).
The new regex defines what a branch name should **NOT** contain. It takes the
definitions from `refs.c` in https://github.com/git/git and `git help
check-ref-format` pages.
This change also introduces a test for ref name validity checking, which
compares the result from Nix with the result of `git check-ref-format --branch`.
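For example, a branch name containing a slash, which the old regex
rejected, is now accepted (hypothetical repository URL):
```
builtins.fetchGit {
  url = "https://example.org/repo.git";  # hypothetical
  ref = "release/1.0";                   # refs with '/' now pass validation
}
```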
The attributes previously stored in TreeInfo (narHash, revCount,
lastModified) are now stored in Input. This makes it less arbitrary
what attributes are stored where.
As a result, the lock file format has changed. An entry like
"info": {
"lastModified": 1585405475,
"narHash": "sha256-bESW0n4KgPmZ0luxvwJ+UyATrC6iIltVCsGdLiphVeE="
},
"locked": {
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "b88ff468e9850410070d4e0ccd68c7011f15b2be",
"type": "github"
},
is now stored as
"locked": {
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "b88ff468e9850410070d4e0ccd68c7011f15b2be",
"type": "github",
"lastModified": 1585405475,
"narHash": "sha256-bESW0n4KgPmZ0luxvwJ+UyATrC6iIltVCsGdLiphVeE="
},
The 'Input' class is now a dumb set of attributes. All the fetcher
implementations subclass InputScheme, not Input. This simplifies the
API.
Also, fix substitution of flake inputs. This was broken since lazy
flake fetching started using fetchTree internally.
The idea is it's always more flexible to consume a `Source` than a
plain string, and it might even reduce memory consumption.
I also looked at `addToStoreFromDump` with its `// FIXME: remove?`, but
the work needed for that is far more up for interpretation, so I
punted for now.
This moves the actual parsing of configuration contents into applyConfig
which applyConfigFile is then going to call. By changing this we can now
test the configuration file parsing without actually creating a file on
disk.
This makes 'nix flake' less cluttered and more consistent (it's only
subcommands that operate on a flake). Also, the registry is not
inherently flake-related (e.g. fetchTree could also use it to remap
inputs).
This completes flakerefs using the registry (e.g. 'nix<TAB>' => 'nix
nixpkgs') and flake output attributes by evaluating the flake
(e.g. 'dwarffs#nix<TAB>' => 'dwarffs#nixosModules').
InstallableValue has children InstallableFlake and InstallableAttrPath,
but InstallableFlake was overriding toDerivations, and usage was changed
so that InstallableFlake didn't need cmd. So these changes were made:
InstallableValue::toDerivations() -> InstallableAttrPath::toDerivations()
InstallableValue::cmd -> InstallableAttrPath::cmd
InstallableValue uses state instead of cmd
toBuildables() and toDerivations() were made abstract
- the result list will always be empty if --json is passed
- for scripts an empty search result is not really an error;
we rather want to distinguish between evaluation errors and empty results
The raw stderr output isn't logged anymore so the build logs need to be
printed by the default logger in order for the old commands like
nix-build to still show build output.
For remote stores the log messages are already forwarded as structured
STDERR_RESULT messages so the old format is duplicate information. But
still included with -vvv since it could be useful for debugging
problems.
$ nix build -L /nix/store/nl71b2niws857ffiaggyrkjwgx9jjzc0-foo.drv --store ssh-ng://localhost
Hello World!
foo> Hello World!
[1/0/1 built] building foo
Fixes#3556
When used with `readFile`, we have a pretty good heuristic of the file
size, so `reserve` this in the `string`. This will save some allocation
/ copy when the string is growing.
This means you now get an error message *before* stuff gets built:
$ nix copy .#hydraJobs.vendoredCrates
error: you must pass '--from' and/or '--to'
Try 'nix --help' for more information.
The commit 3cc1125595 adds a `grantpt`
call on the builder pseudo-terminal fd. This call is actually only
required on macOS, but it requires RW access to /dev/pts, which is only
bind-mounted read-only in the Bazel Linux sandbox. So Nix cannot
actually be run in the Bazel Linux sandbox, for no good reason.
This closes#3026 by allowing `builtins.readFile` to read a file with a
wrongly reported file size; for example, files in `/proc` may report a
file size of 0. Reading files in `/proc` is not a good enough motivation
on its own, but I do think it makes Nix more robust by allowing more
files to be read. In particular, I consider the previous behavior
dangerous, because Nix was silently reading truncated files. Examples of
file systems which incorrectly report file sizes are network file
systems or dynamic file systems (for performance reasons, a dynamic file
system such as FUSE may generate the content of a file on demand).
```
nix-repl> builtins.readFile "/proc/version"
""
```
With this commit:
```
nix-repl> builtins.readFile "/proc/version"
"Linux version 5.6.7 (nixbld@localhost) (gcc version 9.3.0 (GCC)) #1-NixOS SMP Thu Apr 23 08:38:27 UTC 2020\n"
```
Here is a summary of the behavior changes:
- If the reported size is smaller, the previous implementation silently
returned truncated file content. The new implementation returns the
correct file content.
- If a file had a bigger reported size, the previous implementation
failed with an exception, while the new implementation returns the
correct file content. This change of behavior is consistent with this
pull request.
Open questions:
- The behavior is unchanged for correctly reported file sizes, however
performance may vary because it uses the more complex sink interface.
Considering that sinks are used a lot elsewhere, I don't think this
impacts performance much.
- `builtins.readFile` on an infinite file, such as `/dev/random`, may
fill up memory.
- It does not support adding files to the store, such as `${/proc/version}`.
In particular, doing 'nix build /path/to/dir' now works if
/path/to/dir is not a Git tree (it only has to contain a flake.nix
file).
Also, 'nix flake init' no longer requires a Git tree (but it will do a
'git add flake.nix' if it's a Git tree)
Suppose I have a path /nix/store/[hash]-[name]/a/a/a/a/a/[...]/a,
long enough that everything after "/nix/store/" is longer than 4096
(MAX_PATH) bytes.
Nix will happily allow such a path to be inserted into the store,
because it doesn't look at all the nested structure. It just cares
about the /nix/store/[hash]-[name] part. But, when the path is deleted,
we encounter a problem. Nix will move the path to /nix/store/trash, but
then when it's trying to recursively delete the trash directory, it will
at some point try to unlink
/nix/store/trash/[hash]-[name]/a/a/a/a/a/[...]/a. This will fail,
because the path is too long. After this has failed, any store deletion
operation will never work again, because Nix needs to delete the trash
directory before recreating it to move new things to it. (I assume this
is because otherwise a path being deleted could already exist in the
trash, and then moving it would fail.)
This means that if I can trick somebody into just fetching a tarball
containing a path of the right length, they won't be able to delete
store paths or garbage collect ever again, until the offending path is
manually removed from /nix/store/trash. (And even fixing this manually
is quite difficult if you don't understand the issue, because the
absolute path that Nix says it failed to remove is also too long for
rm(1).)
This patch fixes the issue by making Nix's recursive delete operation
use unlinkat(2). This function takes a relative path and a directory
file descriptor. We ensure that the relative path is always just the
name of the directory entry, and therefore its length will never exceed
255 bytes. This means that it will never even come close to MAX_PATH,
and Nix will therefore be able to handle removing arbitrarily deep
directory hierarchies.
Since the directory file descriptor is used for recursion after being
used in readDirectory, I made a variant of readDirectory that takes an
already open directory stream, to avoid the directory being opened
multiple times. As we have seen from this issue, the less we have to
interact with paths, the better, and so it's good to reuse file
descriptors where possible.
I left _deletePath as succeeding even if the parent directory doesn't
exist, even though that feels wrong to me, because without that early
return, the linux-sandbox test failed.
Reported-by: Alyssa Ross <hi@alyssa.is>
Thanks-to: Puck Meerburg <puck@puckipedia.com>
Tested-by: Puck Meerburg <puck@puckipedia.com>
Reviewed-by: Puck Meerburg <puck@puckipedia.com>
Reduces the number of store queries it performs. Also prints a warning
if any of the selectors did not match any installed derivations.
UX Caveats:
- Will print a warning that nothing matched if a previous selector
already removed the path
- Will not do anything if no selectors were provided (no change from
before).
Fixes#3531
In particular, we store whether an attribute failed to evaluate (threw
an exception) or was an unsupported type. This is to ensure that a
repeated 'nix flake show' never has to evaluate anything, so it can
execute without fetching the flake.
With this, 'nix flake show nixpkgs/nixos-20.03 --legacy' executes in
0.6s (was 3.4s).
This speeds up the creation of the cache for the nixpkgs flake from
21.2s to 10.2s. Oddly, it also speeds up querying the cache
(i.e. running 'nix flake show nixpkgs/nixos-20.03 --legacy') from 4.2s
to 3.4s.
(For comparison, running with --no-eval-cache takes 9.5s, so the
overhead of building the SQLite cache is only 0.7s.)
In the fully cached case for the 'nixpkgs' flake, it went from 101s to
4.6s. Populating the cache went from 132s to 17.4s (which could
probably be improved further by combining INSERTs).
Usually this just writes to stdout, but for ProgressBar, we need to
clear the current line, write the line to stdout, and then redraw the
progress bar.
(cherry picked from commit 696c026006)
Motivation: maintain project-level configuration files.
Document the whole situation a bit better so that it corresponds to the
implementation, and add NIX_USER_CONF_FILES that allows overriding
which user files Nix will load during startup.
Previously the memory would occasionally be collected during eval since
the GC doesn't consider the member variable as alive / doesn't scan the
region of memory where the pointer lives.
By using the traceable_allocator<T> allocator provided by Boehm GC we
can ensure the memory isn't collected. It should be properly freed when
SourceExprCommand goes out of scope.
This doesn't just cause problems for nix-store --serve but also results
in certain build failures. Builds that use unix domain sockets in their
tests often fail because the /var/folders prefix already consumes more
than half of the maximum length of socket paths.
struct sockaddr_un {
sa_family_t sun_family; /* AF_UNIX */
char sun_path[108]; /* Pathname */
};
Temporarily add user-write permission to build directory so that it
can be moved out of the sandbox to the store with a .check suffix.
This is necessary because the build directory has already had its
permissions set read-only, but write permission is required
to update the directory's parent link to move it out of the sandbox.
Updated the related --check "derivation may not be deterministic"
messages to consistently use the real store paths.
Added test for non-root sandbox nix-build --check -K to demonstrate
issue and help prevent regressions.
Future editions of flakes or the Nix language can be supported by
renaming flake.nix (e.g. flake-v2.nix). This avoids a bootstrap
problem where we don't know which grammar to use to parse
flake*.nix. It also allows a project to support multiple flake
editions, in theory.
With --check and the --keep-failed (-K) flag, the temporary directory
was being retained regardless of whether the build was successful and
reproducible. This removes the temporary directory, as expected, on
a reproducible check build.
Added tests to verify that temporary build directories are not
retained unnecessarily, particularly when using --check with
--keep-failed.
This fetcher copies a plain directory (i.e. not a Git/Mercurial
repository) to the store (or does nothing if the path is already a
store path).
One use case is to pin the 'nixpkgs' flake used to build the current
NixOS system, and prevent it from being garbage-collected, via a
system registry entry like this:
{
"from": {
"id": "nixpkgs",
"type": "indirect"
},
"to": {
"type": "path",
"path": "/nix/store/rralhl3wj4rdwzjn16g7d93mibvlr521-source",
"lastModified": 1585388205,
"rev": "b0c285807d6a9f1b7562ec417c24fa1a30ecc31a"
},
"exact": true
}
Note the fake "lastModified" and "rev" attributes that ensure that the
flake gives the same evaluation results as the corresponding
Git/GitHub inputs.
(cherry picked from commit 12f9379123)
This provides a pluggable mechanism for defining new fetchers. It adds
a builtin function 'fetchTree' that generalizes existing fetchers like
'fetchGit', 'fetchMercurial' and 'fetchTarball'. 'fetchTree' takes a
set of attributes, e.g.
fetchTree {
type = "git";
url = "https://example.org/repo.git";
ref = "some-branch";
rev = "abcdef...";
}
The existing fetchers are just wrappers around this. Note that the
input attributes to fetchTree are the same as flake input
specifications and flake lock file entries.
All fetchers share a common cache stored in
~/.cache/nix/fetcher-cache-v1.sqlite. This replaces the ad hoc caching
mechanisms in fetchGit and download.cc (e.g. ~/.cache/nix/{tarballs,git-revs*}).
This also adds support for Git worktrees (c169ea5904).
This is useful for finding out what a registry lookup resolves to, e.g
$ nix flake info patchelf
Resolved URL: github:NixOS/patchelf
Locked URL: github:NixOS/patchelf/cd7955af31698c571c30b7a0f78e59fd624d0229
When encountering an unsupported protocol, there's no need to retry.
Chances are, it won't suddenly be supported between retry attempts;
error instead. Otherwise, you see something like the following:
$ nix-env -i -f git://git@github.com/foo/bar
warning: unable to download 'git://git@github.com/foo/bar': Unsupported protocol (1); retrying in 335 ms
warning: unable to download 'git://git@github.com/foo/bar': Unsupported protocol (1); retrying in 604 ms
warning: unable to download 'git://git@github.com/foo/bar': Unsupported protocol (1); retrying in 1340 ms
warning: unable to download 'git://git@github.com/foo/bar': Unsupported protocol (1); retrying in 2685 ms
With this change, you now see:
$ nix-env -i -f git://git@github.com/foo/bar
error: unable to download 'git://git@github.com/foo/bar': Unsupported protocol (1)
When we do something like 'nix flake update --override-input nixpkgs
...', the override is now kept on subsequent calls. (If you don't want
this behaviour, you can use --no-write-lock-file.)
This allows querying the location of function arguments. E.g.
builtins.unsafeGetAttrPos "x" (builtins.functionArgs ({ x }: null))
=> { column = 57; file = "/home/infinisil/src/nix/inst/test.nix"; line = 1; }
An example use is pinning the "nixpkgs" entry of the system-wide
registry to a particular store path. Inexact matches
(e.g. "nixpkgs/master") should still use the global registry.
One application for this is pinning the 'nixpkgs' flake to the exact
revision used to build the NixOS system, e.g.
{
"flakes": [
{
"from": {
"id": "nixpkgs",
"type": "indirect"
},
"to": {
"owner": "NixOS",
"repo": "nixpkgs",
"type": "github",
"rev": "b0c285807d6a9f1b7562ec417c24fa1a30ecc31a"
}
}
],
"version": 2
}
Using std::filesystem means also having to link with -lstdc++fs on
some platforms and it's hard to discover for what platforms this is
needed. As all the functionality is already implemented as utilities,
use those instead.
Due to fetchGit not checking if rev is an ancestor of ref (there is even
a FIXME comment about it in the code), the cache repo might not have the
ref even though it has the rev. This doesn't matter when submodule =
false, but the submodule = true code blows up because it tries to fetch
the (missing) ref from the cache repo.
Fix this in the simplest way possible: fetch all refs from the local
cache repo when submodules = true.
TODO: Add tests.
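A hedged sketch of the kind of call that used to blow up and should now
work (URL, ref and rev are placeholders):
```
builtins.fetchGit {
  url = "https://example.org/repo.git";              # placeholder
  ref = "some-branch";                                # possibly absent from the cache repo
  rev = "0000000000000000000000000000000000000000";   # placeholder rev already cached
  submodules = true;  # this code path previously required the ref to be present locally
}
```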
The .link file is used as a lock, so I think we should put the
"submodule" attribute in there since turning on submodules creates a new
.link file path.
This is now done in a single pass. Also fixes some issues when
updating flakes with circular dependencies. Finally, when using
'--recreate-lock-file --commit-lock-file', the commit message now
correctly shows the differences.
Sadly, macOS 10.15 changed /bin/sh to a shim which executes bash, which
means it can't be used anymore without also opening up the sandbox to
allow bash.
Failed to exec /bin/bash as variant for /bin/sh (1: Operation not permitted).
This is used to determine the dependency tree of impure libraries so nix
knows what paths to open in the sandbox. With the less restrictive
defaults it isn't needed anymore.
Running `nix-store --gc --delete` will, as of Nix 2.3.3, simply fail
because the --delete option conflicts with the --delete operation.
$ nix-store --gc --delete
error: only one operation may be specified
Try 'nix-store --help' for more information.
Furthermore, it has been broken since at least Nix 0.16 (which was
released sometime in 2010), which means that any scripts which depend
on it should have been broken at least nine years ago. This commit
simply formally removes the option. There should be no actual difference
in behaviour as far as the user is concerned: it errors with the exact
same error message. The manual has been edited to remove any references
to the (now gone) --delete option.
Other information:
* Path for Nix 0.16 used:
/nix/store/rp3sgmskn0p0pj1ia2qwd5al6f6pinz4-nix-0.16
See the documentation in the header and the comments in the
implementation for details.
This is actually done in preparation for floating ca derivations, not
multi-output fixed ca derivations, but the distinction doesn't yet
matter.
Thanks @cole-h for finding and fixing a bunch of typos.
If you do a fetchTree on a Git repository, whether the result contains
a revCount attribute should not depend on whether that repository
happens to be a shallow clone or not. That would complicate caching a
lot and would be semantically messy. So applying fetchTree/fetchGit to
a shallow repository is now an error unless you pass the attribute
'shallow = true'. If 'shallow = true', we don't return revCount, even
if the repository is not actually shallow.
Note that Nix itself is not doing shallow clones at the moment. But it
could do so as an optimisation if the user specifies 'shallow = true'.
Issue #2988.
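A sketch of the new opt-in (hypothetical URL); with `shallow = true` the
result simply omits `revCount`:
```
builtins.fetchGit {
  url = "https://example.org/repo.git";  # hypothetical
  shallow = true;  # required for shallow repositories; revCount is then not returned
}
```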
Today's fixed output derivations and regular derivations differ in a few
ways which are largely orthogonal. This replaces `isFixedOutput` with a
`type` that returns an enum of possible combinations.
In
nix-instantiate --dry-run '<nixpkgs/nixos/release-combined.nix>' -A nixos.tests.simple.x86_64-linux
this reduces time spent in unparse() from 9.15% to 4.31%. The main
culprit was appending characters one at a time to the destination
string. Even though the string has enough capacity, push_back() still
needs to check this on every call.
Note: like 'nix run', and unlike 'nix-shell', this takes an argv
vector rather than a shell command. So
nix dev-shell -c 'echo $PATH'
doesn't work. Instead you need to do
nix dev-shell -c bash -c 'echo $PATH'
The problem fixed: each nix-shell invocation creates a new temporary
directory (`/tmp/nix-shell-*`) and never cleans up.
And while I'm here, shellescape all variables inlined into the rcfile.
See what might happen without escaping:
$ export TZ="';echo pwned'"
$ nix-shell -p hello --run hello
pwned
Hello, world!
Worktrees[1] are a feature of git which allow you to check out a ref in
a different directory.
While playing around with flakes I realized that git repositories in a
worktree checkout break when trying to build a flake:
```
$ git worktree add ../nixpkgs-flakes nixpkgs-flakes
$ cd ../nixpkgs-flakes
$ nix build .#hello
error: opening directory '/home/ma27/Projects/nixpkgs-flakes/.git/refs/heads': Not a directory
```
This issue has been fixed by determining with `git rev-parse --git-common-dir`
where the actual `.git` directory is.
Please note that this issue only exists on the `flakes` branch, fetching
worktree checkouts with Nix master seems to work fine.
[1] https://git-scm.com/docs/git-worktree
The ssh client is lazily started by the first worker thread that
requires an ssh connection. To avoid the ssh client being killed when
the worker process is stopped, do not set PR_SET_PDEATHSIG.
This allows all supported fetchers to be used, e.g.
builtins.fetchTree {
type = "github";
owner = "NixOS";
repo = "nix";
rev = "d4df99a3349cf2228a8ee78dea320afef86eb3ba";
}
Brings the functionality of ssh-ng:// in sync with the legacy ssh://
implementation. Specifying the remote store uri enables various useful
things. eg.
$ nix copy --to ssh-ng://cache?remote-store=file://mnt/cache --all
This improves reproducibility and may be faster than fetching from the
original source (especially for git/hg inputs, but probably also for
github inputs - our binary cache is probably faster than GitHub's
dynamically generated tarballs).
Unfortunately this doesn't work for the top-level flake since even if
we know the NAR hash of the tree, we don't know the other tree
attributes such as revCount and lastModified.
Fixes#3253.
This copies a flake and all its inputs recursively to a store (e.g. a
binary cache). This is intended to enable long-term reproducibility
for flakes. However this will also require #3253.
Example:
$ nix flake archive --json --to file:///tmp/my-cache nixops
{"path":"/nix/store/272igzkgl1gdzmabsjvb2kb2zqbphb3p-source","inputs":{"nixops-aws":{"path":"/nix/store/ybcykw13gr7iq1pzg18iyibbcv8k9q1v-source","inputs":{}},"nixops-hetzner":{"path":"/nix/store/6yn0205x3nz55w8ms3335p2841javz2d-source","inputs":{}},"nixpkgs":{"path":"/nix/store/li3lkr2ajrzphqqz3jj2avndnyd3i5lc-source","inputs":{}}}}
$ ll /tmp/my-cache
total 16
-rw-r--r-- 1 eelco users 403 Jan 30 01:01 272igzkgl1gdzmabsjvb2kb2zqbphb3p.narinfo
-rw-r--r-- 1 eelco users 403 Jan 30 01:01 6yn0205x3nz55w8ms3335p2841javz2d.narinfo
-rw-r--r-- 1 eelco users 408 Jan 30 01:01 li3lkr2ajrzphqqz3jj2avndnyd3i5lc.narinfo
drwxr-xr-x 2 eelco users 6 Jan 30 01:01 nar
-rw-r--r-- 1 eelco users 21 Jan 30 01:01 nix-cache-info
-rw-r--r-- 1 eelco users 404 Jan 30 01:01 ybcykw13gr7iq1pzg18iyibbcv8k9q1v.narinfo
Fixes#3336.
Typical usage:
$ nix flake update ~/Misc/eelco-configurations/hagbard --update-input nixpkgs
to update the 'nixpkgs' input of a flake while leaving every other
input unchanged.
The argument is an input path, so you can do e.g. '--update-input
dwarffs/nixpkgs' to update an input of an input.
Fixes#2928.
Added a flag --no-update-lock-file to barf if the lock file needs any
changes. This is useful for CI systems if you're building a
checkout. Fixes#2947.
Renamed --no-save-lock-file to --no-write-lock-file. It is now a fatal
error if the lock file needs changes but --no-write-lock-file is not
given.
E.g.
$ nix flake update ~/Misc/eelco-configurations/hagbard \
--override-input 'dwarffs/nixpkgs' ../my-nixpkgs
overrides the 'nixpkgs' input of the 'dwarffs' input of the top-level
flake.
Fixes#2837.
When computing a lock file, we now respect the lock files of flake
inputs. This is important for usability / reproducibility. For
example, the 'nixops' flake depends on the 'nixops-aws' and
'nixops-hetzner' repositories. So when the 'nixops' flake is used in
another flake, we want the versions of 'nixops-aws' and
'nixops-hetzner' locked by the 'nixops' flake because those
presumably have been tested.
This can lead to a proliferation of versions of flakes like 'nixpkgs'
(since every flake's lock file could depend on a different version of
'nixpkgs'). This is not a major issue when using Nixpkgs overlays or
NixOS modules, since then the top-level flake composes those
overlays/modules into *its* version of Nixpkgs and all other versions
are ignored. Lock file computation has been made a bit more lazy so it
won't try to fetch all those versions of 'nixpkgs'.
However, in case it's necessary to minimize flake versions, there now
are two input attributes that allow this. First, you can copy an input
from another flake, as follows:
inputs.nixpkgs.follows = "dwarffs/nixpkgs";
This states that the calling flake's 'nixpkgs' input shall be the same
as the 'nixpkgs' input of the 'dwarffs' input.
Second, you can override inputs of inputs:
inputs.nixpkgs.url = github:edolstra/nixpkgs/<hash>;
inputs.nixops.inputs.nixpkgs.url = github:edolstra/nixpkgs/<hash>;
or equivalently, using 'follows':
inputs.nixpkgs.url = github:edolstra/nixpkgs/<hash>;
inputs.nixops.inputs.nixpkgs.follows = "nixpkgs";
This states that the 'nixpkgs' input of the 'nixops' input shall be
the same as the calling flake's 'nixpkgs' input.
Finally, at '-v' Nix now prints the changes to the lock file, e.g.
$ nix flake update ~/Misc/eelco-configurations/hagbard
inputs of flake 'git+file:///home/eelco/Misc/eelco-configurations?subdir=hagbard' changed:
updated 'nixpkgs': 'github:edolstra/nixpkgs/7845bf5f4b3013df1cf036e9c9c3a55a30331db9' -> 'github:edolstra/nixpkgs/03f3def66a104a221aac8b751eeb7075374848fd'
removed 'nixops'
removed 'nixops/nixops-aws'
removed 'nixops/nixops-hetzner'
removed 'nixops/nixpkgs'
Fixes
error: derivation '/nix/store/klivma7r7h5lndb99f7xxmlh5whyayvg-zlib-1.2.11.drv' has incorrect output '/nix/store/fv98nnx5ykgbq8sqabilkgkbc4169q05-zlib-1.2.11-dev', should be '/nix/store/adm7pilzlj3z5k249s8b4wv3scprhzi1-zlib-1.2.11-dev'
Includes the expression of the condition in the assertion message if
the assertion failed, making assertions much easier to debug. eg.
error: assertion (withPython -> (python2Packages != null)) failed at pkgs/tools/security/nmap/default.nix:11:1
As fromTOML supports \u and \U escapes, bring fromJSON on par with it. As
JSON defaults to UTF-8 encoding (every JSON parser must support UTF-8),
this change parses the `\u hex hex hex hex` sequence (\u followed by 4
hexadecimal digits) into a UTF-8 representation.
Add a test to verify correct parsing, using all escape sequences from json.org.
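As a small illustration of the new behavior (U+03C0, Greek small letter pi):
```
builtins.fromJSON ''"\u03c0"''
# => "π", i.e. the escape is decoded into its UTF-8 representation
```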
This prevents them from being inlined. On gcc 9, this reduces the
stack size needed for
nix-instantiate '<nixpkgs>' -A texlive.combined.scheme-full --dry-run
from 12.9 MiB to 4.8 MiB.
Having a colon in the path may cause issues, and having the hash
function indicated isn't actually necessary. We now verify the path
format in the tests to prevent regressions.
Before, we would get:
[deploy@bastion:~]$ nix-store -r /nix/store/grfnl76cahwls0igd2by2pqv0dimi8h2-nixos-system-eris-19.09.20191213.03f3def.drv
these derivations will be built:
/nix/store/3ka4ihvwh6wsyhpd2qa9f59506mnxvx1-initrd-linux-4.19.88.drv
/nix/store/ssxwmll7v21did1c8j027q0m8w6pg41i-unit-prometheus-alertmanager-irc-notifier.service.drv
/nix/store/mvyvkj46ay7pp7b1znqbkck2mq98k0qd-unit-script-network-local-commands-start.drv
/nix/store/vsl1y9mz38qfk6pyirjwnfzfggz5akg6-unit-network-local-commands.service.drv
/nix/store/wi5ighfwwb83fdmav6z6n2fw6npm9ffl-unit-prometheus-hydra-exporter.service.drv
/nix/store/x0qkv535n75pbl3xn6nn1w7qkrg9wwyg-unit-prometheus-packet-sd.service.drv
/nix/store/lv491znsjxdf51xnfxh9ld7r1zg14d52-unit-script-packet-sd-env-key-pre-start.drv
/nix/store/nw4nzlca49agsajvpibx7zg5b873gk9f-unit-script-packet-sd-env-key-start.drv
/nix/store/x674wwabdwjrkhnykair4c8mpxa9532w-unit-packet-sd-env-key.service.drv
/nix/store/ywivz64ilb1ywlv652pkixw3vxzfvgv8-unit-wireguard-wg0.service.drv
/nix/store/v3b648293g3zl8pnn0m1345nvmyd8dwb-unit-script-acme-selfsigned-status.nixos.org-start.drv
/nix/store/zci5d3zvr6fgdicz6k7jjka6lmx0v3g4-unit-acme-selfsigned-status.nixos.org.service.drv
/nix/store/f6pwvnm63d0kw5df0v7sipd1rkhqxk5g-system-units.drv
/nix/store/iax8071knxk9c7krpm9jqg0lcrawf4lc-etc.drv
/nix/store/grfnl76cahwls0igd2by2pqv0dimi8h2-nixos-system-eris-19.09.20191213.03f3def.drv
error: invalid file name 'closure-init-0' in 'exportReferencesGraph'
This was tough to debug; I didn't figure out which one was broken until I did:
nix-store -r /nix/store/grfnl76cahwls0igd2by2pqv0dimi8h2-nixos-system-eris-19.09.20191213.03f3def.drv 2>&1 | grep nix/store | xargs -n1 nix-store -r
and then looking at the remaining build graph:
$ nix-store -r /nix/store/grfnl76cahwls0igd2by2pqv0dimi8h2-nixos-system-eris-19.09.20191213.03f3def.drv
these derivations will be built:
/nix/store/3ka4ihvwh6wsyhpd2qa9f59506mnxvx1-initrd-linux-4.19.88.drv
/nix/store/grfnl76cahwls0igd2by2pqv0dimi8h2-nixos-system-eris-19.09.20191213.03f3def.drv
error: invalid file name 'closure-init-0' in 'exportReferencesGraph'
and knowing the initrd build is before the system, then:
$ nix show-derivation /nix/store/3ka4ihvwh6wsyhpd2qa9f59506mnxvx1-initrd-linux-4.19.88.drv
{
"/nix/store/3ka4ihvwh6wsyhpd2qa9f59506mnxvx1-initrd-linux-4.19.88.drv": {
[...]
"exportReferencesGraph": "closure-init-0 /nix/store/...-stage-1-init.sh closure-mdadm.conf-1 /nix/store/...-mdadm.conf closure-ubuntu.conf-2 ...",
[...]
}
}
I then searched the repo for "in 'exportReferencesGraph'", found this
recently updated regex, and realized it was missing a "-".
Before:
$ nix-channel --update
unpacking channels...
warning: SQLite database '/nix/var/nix/db/db.sqlite' is busy (SQLITE_PROTOCOL)
warning: SQLite database '/nix/var/nix/db/db.sqlite' is busy (SQLITE_PROTOCOL)
warning: SQLite database '/nix/var/nix/db/db.sqlite' is busy (SQLITE_PROTOCOL)
warning: SQLite database '/nix/var/nix/db/db.sqlite' is busy (SQLITE_PROTOCOL)
warning: SQLite database '/nix/var/nix/db/db.sqlite' is busy (SQLITE_PROTOCOL)
After:
$ inst/bin/nix-channel --update
unpacking channels...
created 1 symlinks in user environment
I've seen complaints that "sandbox" caused problems under WSL but I'm
having no problems. I think recent changes could have fixed the issue.
This allows overriding the priority of substituters, e.g.
$ nix-store --store ~/my-nix/ -r /nix/store/df3m4da96d84ljzxx4mygfshm1p0r2n3-geeqie-1.4 \
--substituters 'http://cache.nixos.org?priority=100 daemon?priority=10'
Fixes#3264.
There is no termination condition for evaluation of cyclical
expression paths which can lead to infinite loops. This addresses
one spot in the parser in a similar fashion as utils.cc/canonPath
does.
This issue can be reproduced by something like:
```
ln -s a b
ln -s b a
nix-instantiate -E 'import ./a'
```
If the `throw` is reached, this means that execvp into `ssh` wasn’t
successful. We can hint at a common problem, which is a missing `ssh`
executable.
Test with:
```
env PATH= ./result/bin/nix-copy-closure --builders '' unusedhost
```
and the bash version with
```
env PATH= ./result/bin/nix-copy-closure --builders '' localhost
```
Most functions now take a StorePath argument rather than a Path (which
is just an alias for std::string). The StorePath constructor ensures
that the path is syntactically correct (i.e. it looks like
<store-dir>/<base32-hash>-<name>). Similarly, functions like
buildPaths() now take a StorePathWithOutputs, rather than abusing Path
by adding a '!<outputs>' suffix.
Note that the StorePath type is implemented in Rust. This involves
some hackery to allow Rust values to be used directly in C++, via a
helper type whose destructor calls the Rust type's drop()
function. The main issue is the dynamic nature of C++ move semantics:
after we have moved a Rust value, we should not call the drop function
on the original value. So when we move a value, we set the original
value to bitwise zero, and the destructor only calls drop() if the
value is not bitwise zero. This should be sufficient for most types.
Also lots of minor cleanups to the C++ API to make it more modern
(e.g. using std::optional and std::string_view in some places).
The FunctionCallTrace object consumes a few hundred bytes of stack
space, even when tracing is disabled. This was causing stack overflows:
$ nix-instantiate '<nixpkgs>' -A texlive.combined.scheme-full --dry-run
error: stack overflow (possible infinite recursion)
This is with the default stack size of 8 MiB.
Putting the object on the heap reduces stack usage to < 5 MiB.
https://hydra.nixos.org/build/107467517
Seems that on i686-linux, gcc and rustc disagree on how to return
1-word structs: gcc has the caller pass a pointer to the result, while
rustc has the callee return the result in a register. Work around this
by using a bare pointer.
This replaces the '(...)' installable syntax, which is not very
discoverable. The downside is that you can't have multiple expressions
or mix expressions and other installables.
Also, fetchGit now runs in O(1) memory since we pipe the output of
'git archive' directly into unpackTarball() (rather than first reading
it all into memory).
We can now convert Rust Errors to C++ exceptions. At the Rust->C++ FFI
boundary, Result<T, Error> will cause Error to be converted to and
thrown as a C++ exception.
E.g.
$ nix-build '<nixpkgs>' -A hello --experimental-features no-url-literals
error: URL literals are disabled, at /nix/store/vsjamkzh15r3c779q2711az826hqgvzr-nixpkgs-20.03pre194957.bef773ed53f/nixpkgs/pkgs/top-level/all-packages.nix:1236:11
Helps with implementing https://github.com/NixOS/rfcs/pull/45.
A corrupt entry in .links prevents adding a fixed version of that file
to the store in any path. The user experience is that corruption
present in the store 'spreads' to new paths added to the store:
(With store optimisation enabled)
1. A file in the store gets corrupted somehow (eg: filesystem bug).
2. The user tries to add a thing to the store which contains a good copy
of the corrupted file.
3. The file being added to the store is hashed, found to match the bad
.links entry, and is replaced by a link to the bad .links entry.
(The .links entry's hash is not verified during add -- this would
impose a substantial performance burden.)
4. The user observes that the thing in the store that is supposed to be
a copy of what they were trying to add is not a correct copy -- some
files have different contents! Running "nix-store --verify
--check-contents --repair" does not fix the problem.
This change makes "nix-store --verify --check-contents --repair" fix
this problem. Bad .links entries are simply removed, allowing future
attempts to insert a good copy of the file to succeed.
This doesn't work anymore since `packages` was removed from the
`nixpkgs`-fork with flake support[1]; now it's only possible to refer to
pkgs via `legacyPackages`.
[1] 49c9b71e4c
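A sketch of the adjustment this implies for expressions that used the
removed attribute (names are illustrative):
```
{ nixpkgs }:  # a flake input providing legacyPackages
{
  defaultPackage.x86_64-linux = nixpkgs.legacyPackages.x86_64-linux.hello;
  # previously: nixpkgs.packages.x86_64-linux.hello (attribute no longer exists)
}
```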
The store path is not enough. For example, when we build a dirty tree,
commit, and build the clean tree, a re-evaluation is necessary because
the flake may depend on the lastModified or revCount attributes.
Derivations that want to use recursion should now set
requiredSystemFeatures = [ "recursive-nix" ];
to make the daemon socket appear.
Also, Nix should be configured with "experimental-features =
recursive-nix".
This allows Nix builders to call Nix to build derivations, with some
limitations.
Example:
let nixpkgs = fetchTarball channel:nixos-18.03; in
with import <nixpkgs> {};
runCommand "foo"
{
buildInputs = [ nix jq ];
NIX_PATH = "nixpkgs=${nixpkgs}";
}
''
hello=$(nix-build -E '(import <nixpkgs> {}).hello.overrideDerivation (args: { name = "hello-3.5"; })')
$hello/bin/hello
mkdir -p $out/bin
ln -s $hello/bin/hello $out/bin/hello
nix path-info -r --json $hello | jq .
''
This derivation makes a recursive Nix call to build GNU Hello and
symlinks it from its $out, i.e.
# ll ./result/bin/
lrwxrwxrwx 1 root root 63 Jan 1 1970 hello -> /nix/store/s0awxrs71gickhaqdwxl506hzccb30y5-hello-3.5/bin/hello
# nix-store -qR ./result
/nix/store/hwwqshlmazzjzj7yhrkyjydxamvvkfd3-glibc-2.26-131
/nix/store/s0awxrs71gickhaqdwxl506hzccb30y5-hello-3.5
/nix/store/sgmvvyw8vhfqdqb619bxkcpfn9lvd8ss-foo
This is implemented as follows:
* Before running the outer builder, Nix creates a Unix domain socket
'.nix-socket' in the builder's temporary directory and sets
$NIX_REMOTE to point to it. It starts a thread to process
connections to this socket. (Thus you don't need to have nix-daemon
running.)
* The daemon thread uses a wrapper store (RestrictedStore) to keep
track of paths added through recursive Nix calls, to implement some
restrictions (see below), and to do some censorship (e.g. for
purity, queryPathInfo() won't return impure information such as
signatures and timestamps).
* After the build finishes, the output paths are scanned for
references to the paths added through recursive Nix calls (in
addition to the inputs closure). Thus, in the example above, $out
has a reference to $hello.
The main restriction on recursive Nix calls is that they cannot do
arbitrary substitutions. For example, doing
nix-store -r /nix/store/kmwd1hq55akdb9sc7l3finr175dajlby-hello-2.10
is forbidden unless /nix/store/kmwd... is in the inputs closure or
previously built by a recursive Nix call. This is to prevent
irreproducible derivations that have hidden dependencies on
substituters or the current store contents. Building a derivation is
fine, however, and Nix will use substitutes if available. In other
words, the builder has to present proof that it knows how to build a
desired store path from scratch by constructing a derivation graph for
that path.
Probably we should also disallow instantiating/building fixed-output
derivations (specifically, those that access the network, but
currently we have no way to mark fixed-output derivations that don't
access the network). Otherwise sandboxed derivations can bypass
sandbox restrictions and access the network.
When sandboxing is enabled, we make paths appear in the sandbox of the
builder by entering the mount namespace of the builder and
bind-mounting each path. This is tricky because we do a pivot_root()
in the builder to change the root directory of its mount namespace,
and thus the host /nix/store is not visible in the mount namespace of
the builder. To get around this, just before doing pivot_root(), we
branch a second mount namespace that shares its /nix/store mountpoint
with the parent.
Recursive Nix currently doesn't work on macOS in sandboxed mode
(because we can't change the sandbox policy of a running build) and in
non-root mode (because setns() barfs).
The intent of the code was that if the window size cannot be determined,
it would be treated as having the maximum possible size. Because of a
missing assignment, it was actually treated as having a width of 0.
The reason the width could not be determined was because it was obtained
from stdout, not stderr, even though the printing was done to stderr.
This commit addresses both issues.
Add missing docstring on InstallableCommand. Also, some of these were wrapped
when they're right next to a line longer than the unwrapped line, so we can just
unwrap them to save vertical space.
Fixes the following warning and the potential issue it indicates:
src/libstore/worker-protocol.hh:66:1: warning: class 'Source' was previously declared as a struct; this is valid, but may result in linker errors
under the Microsoft C++ ABI [-Wmismatched-tags]
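A minimal illustration of the class/struct tag mismatch (the declarations here are simplified stand-ins, not Nix's exact ones):
```c++
// serialise.hh defines Source with the 'struct' tag:
struct Source { virtual ~Source() = default; };

// A forward declaration elsewhere that uses the 'class' tag triggers
// -Wmismatched-tags; keeping the tags consistent silences it:
struct Source;   // instead of: class Source;
```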
(cherry picked from commit 6e1bb04870b1b723282d32182af286646f13bf3c)
git show seems to print the entire tag message when being called on a tag
instead of a commit. git log instead always prints the correct timestamp
in my tests.
The error nix prints is: `error: stoull`.
This is an alternative to the IN_NIX_SHELL environment variable,
allowing the expression to adapt itself to nix-shell without
triggering those adaptations when used as a dependency of another
shell.
Closes#3147
This allows a repl-centric workflow for working on nixpkgs.
Usage:
:edit <package> - heuristic that finds the package file path
:edit <path> - just open the editor on the file path
Once invoked, `nix repl` will open $EDITOR on that file path. Once the
editor exits, `nix repl` will automatically reload itself.
This adds a command 'nix make-content-addressable' that rewrites the
specified store paths into content-addressable paths. The advantage of
such paths is that 1) they can be imported without signatures; 2) they
can enable deduplication in cases where derivation changes do not
cause output changes (apart from store path hashes).
For example,
$ nix make-content-addressable -r nixpkgs.cowsay
rewrote '/nix/store/g1g31ah55xdia1jdqabv1imf6mcw0nb1-glibc-2.25-49' to '/nix/store/48jfj7bg78a8n4f2nhg269rgw1936vj4-glibc-2.25-49'
...
rewrote '/nix/store/qbi6rzpk0bxjw8lw6azn2mc7ynnn455q-cowsay-3.03+dfsg1-16' to '/nix/store/iq6g2x4q62xp7y7493bibx0qn5w7xz67-cowsay-3.03+dfsg1-16'
We can then copy the resulting closure to another store without
signatures:
$ nix copy --trusted-public-keys '' --to ~/my-nix /nix/store/iq6g2x4q62xp7y7493bibx0qn5w7xz67-cowsay-3.03+dfsg1-16
In order to support self-references in content-addressable paths,
these paths are hashed "modulo" self-references, meaning that
self-references are zeroed out during hashing. Somewhat annoyingly,
this means that the NAR hash stored in the Nix database is no longer
necessarily equal to the output of "nix hash-path"; for
content-addressable paths, you need to pass the --modulo flag:
$ nix path-info --json /nix/store/iq6g2x4q62xp7y7493bibx0qn5w7xz67-cowsay-3.03+dfsg1-16 | jq -r .[].narHash
sha256:0ri611gdilz2c9rsibqhsipbfs9vwcqvs811a52i2bnkhv7w9mgw
$ nix hash-path --type sha256 --base32 /nix/store/iq6g2x4q62xp7y7493bibx0qn5w7xz67-cowsay-3.03+dfsg1-16
1ggznh07khq0hz6id09pqws3a8q9pn03ya3c03nwck1kwq8rclzs
$ nix hash-path --type sha256 --base32 /nix/store/iq6g2x4q62xp7y7493bibx0qn5w7xz67-cowsay-3.03+dfsg1-16 --modulo iq6g2x4q62xp7y7493bibx0qn5w7xz67
0ri611gdilz2c9rsibqhsipbfs9vwcqvs811a52i2bnkhv7w9mgw
Fixes
error: the content hash of flake '/home/eelco/Dev/nixpkgs-flake?ref=HEAD&rev=0000000000000000000000000000000000000000' doesn't match the hash recorded in the referring lockfile
Experimental features are now opt-in. There is currently one
experimental feature: "nix-command" (which enables the "nix"
command). This will allow us to merge experimental features more
quickly, without committing to supporting them indefinitely.
Typical usage:
$ nix build --experimental-features 'nix-command flakes' nixpkgs#hello
(cherry picked from commit 8e478c2341,
without the "flakes" feature)
Experimental features are now opt-in. There are currently two
experimental features: "nix-command" (which enables the "nix"
command), and "flakes" (which enables support for flakes). This will
allow us to merge experimental features more quickly, without
committing to supporting them indefinitely.
Typical usage:
$ nix build --experimental-features 'nix-command flakes' nixpkgs#hello
In particular, when building a flake lock file, inputs like 'nixpkgs'
are now downloaded only once. Previously, it would fetch
https://api.github.com/repos/<owner>/<repo>/tarball/<ref> and then
later https://api.github.com/repos/<owner>/<repo>/tarball/<rev>, even
though they produce the same result.
Git and GitHub now also share a cache that maps revs to a store path
and other info.
A command like
$ nix run nixpkgs#hello
will now build the attribute 'packages.${system}.hello' rather than
'packages.hello'. Note that this does mean that the flake needs to
export an attribute for every system type it supports, and you can't
build on unsupported systems. So 'packages' typically looks like this:
packages = nixpkgs.lib.genAttrs ["x86_64-linux" "i686-linux"] (system: {
hello = ...;
});
The 'checks', 'defaultPackage', 'devShell', 'apps' and 'defaultApp'
outputs similarly are now attrsets that map system types to
derivations/apps. 'nix flake check' checks that the derivations for
all platforms evaluate correctly, but only builds the derivations in
'checks.${system}'.
Fixes#2861. (That issue also talks about access to ~/.config/nixpkgs
and --arg, but I think it's reasonable to say that flakes shouldn't
support those.)
The alternative to attribute selection is to pass the system type as
an argument to the flake's 'outputs' function, e.g. 'outputs = { self,
nixpkgs, system }: ...'. However, that approach would be at odds with
hermetic evaluation and make it impossible to enumerate the packages
provided by a flake.
The tmpDirInSandbox is different when in sandboxed vs. non-sandboxed.
Since we don’t know ahead of time here whether sandboxing is enabled,
we need to reset all of the env vars we’ve set previously. This fixes
the issue encountered in https://github.com/NixOS/nixpkgs/issues/70856.
Previously, SANDBOX_SHELL was set to empty when unavailable. This
caused issues when actually generating the sandbox. Instead, just set
SANDBOX_SHELL when --with-sandbox-shell= is non-empty. Alternative
implementation to https://github.com/NixOS/nix/pull/3038.
Pure mode should not try to source the user’s bashrc file. These may
have many impurities that the user does not expect to get into their
shell.
Fixes#3090
Fixes
$ nix build
fatal: bad revision 'HEAD'
error: program 'git' failed with exit code 128
on a new flake. It is now detected as a dirty tree with revCount = 0.
When running nix doctor on a healthy system, it just prints the store URI and
nothing else. This makes it unclear whether the system is in a good state and
what check(s) it actually ran, since some of the checks are optional depending
on the store type.
This commit updates nix doctor to print a colored log message for every check
that it does, and explicitly state whether that check was a PASS or FAIL to make
it clear to the user whether the system passed its checkup with the doctor.
Fixes#3084
Only variables that were marked as exported are exported in the dev
shell. Also, we no longer try to parse the function section of the env
file, fixing
$ nix dev-shell
error: shell environment '/nix/store/h7ama3kahb8lypf4nvjx34z06g9ncw4h-nixops-1.7pre20190926.4c7acbb-env' has unexpected line '/^[a-z]?"""/ {'
For example, if the top-level flake depends on
"nixpkgs/release-19.03", and one of its dependencies depends on
"nixpkgs", then the latter will be mapped to "nixpkgs/release-19.03",
rather than whatever the default branch of "nixpkgs" is. Thus you get
only one "nixpkgs" dependency rather than two.
This currently only works in a breadth-first way, so the other way
around (i.e. if the top-level flake depends on "nixpkgs", and a
dependency depends on "nixpkgs/release-19.03") still results in two
"nixpkgs" dependencies.
If 'input.<name>.uri' changes, then the entry in the lockfile for
input <name> should be considered stale.
Also print some messages when lock file entries are added/updated.
This integrates the functionality of the index-debuginfo program in
nixos-channel-scripts to maintain an index of DWARF debuginfo files in
a format usable by dwarffs. Thus the debug info index is updated by
Hydra rather than by the channel mirroring script.
Example usage:
$ nix copy --to 'file:///tmp/binary-cache?index-debug-info=true' /nix/store/vr9mhcch3fljzzkjld3kvkggvpq38cva-nix-2.2.2-debug
$ cat /tmp/binary-cache/debuginfo/036b210b03bad75ab2d8fc80b7a146f98e7f1ecf.debug
{"archive":"../nar/0313h2kdhk4v73xna9ysiksp2v8xrsk5xsw79mmwr3rg7byb4ka8.nar.xz","member":"lib/debug/.build-id/03/6b210b03bad75ab2d8fc80b7a146f98e7f1ecf.debug"}
Fixes#3083.
So you now get
$ nix build
error: path '.' is not a flake (because it does not reference a Git repository)
rather than
$ nix build
error: unsupported argument '.'
Instead of a list, inputs are now an attrset like
inputs = {
nixpkgs.uri = github:NixOS/nixpkgs;
};
If 'uri' is omitted, then the flake is looked up in the flake registry, e.g.
inputs = {
nixpkgs = {};
};
but in that case, you can also just omit the input altogether and
specify it as an argument to the 'outputs' function, as in
outputs = { self, nixpkgs }: ...
This also gets rid of 'nonFlakeInputs', which are now just a special
kind of input that have a 'flake = false' attribute, e.g.
inputs = {
someRepo = {
uri = github:example/repo;
flake = false;
};
};
Previously we allowed any length of name for Nix derivations. This is
bad because different file systems have different max lengths. To make
things predictable, I have picked a max. This was done by trying to
build this derivation:
derivation {
name = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa";
builder = "/no-such-path";
system = "x86_64-linux";
}
Take off one 'a' and it no longer fails with "file name too long". That ends
up being 212 a's. An even smaller max could be picked if we want to
support more file systems.
Working backwards, this is why: the longest file name Nix creates for a
derivation has the form
  aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa-${name}.drv.chroot
so with a 255-character file name limit the name may use at most
255 - 32 (hash) - 1 (dash) - 4 (".drv") - 7 (".chroot") = 211 characters.
These are already handled separately. This fixes warnings like
warning: ignoring the user-specified setting 'max-jobs', because it is a restricted setting and you are not a trusted user
when using the -j flag.
If a NAR is already in the store, addToStore doesn't read the source,
which makes the protocol go out of sync. This happens for example when
two clients try to nix-copy-closure the same derivation at the same time.
Introduce the SizeSource, which allows bounding how much data is
read from a source. It also contains a drainAll() function to discard
the rest of the source, useful for keeping the Nix protocol in sync.
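Roughly, the idea is a wrapper source that never yields more than a fixed number of bytes and can discard any unread remainder (a hedged sketch, not Nix's actual class; `Source` here is a stand-in interface):
```c++
#include <algorithm>
#include <cstddef>
#include <stdexcept>

// Stand-in for a pull-based source: read() returns how many bytes it produced.
struct Source {
    virtual size_t read(char * data, size_t len) = 0;
    virtual ~Source() = default;
};

// Wrap an underlying source, but never hand out more than `remaining` bytes.
struct BoundedSource : Source {
    Source & inner;
    size_t remaining;
    BoundedSource(Source & inner, size_t size) : inner(inner), remaining(size) {}

    size_t read(char * data, size_t len) override {
        if (remaining == 0) throw std::runtime_error("bounded source exhausted");
        size_t n = inner.read(data, std::min(len, remaining));
        remaining -= n;
        return n;
    }

    // Discard whatever the consumer didn't read, so the underlying protocol
    // stream stays in sync.
    void drainAll() {
        char buf[8192];
        while (remaining > 0) read(buf, sizeof(buf));
    }
};
```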
With this patch, and this file I called `log.py`:
#!/usr/bin/env nix-shell
#!nix-shell -i python3 -p python3 --pure

import sys
from pprint import pprint

stack = []
timestack = []

for line in open(sys.argv[1]):
    components = line.strip().split(" ", 2)
    if components[0] != "function-trace":
        continue

    direction = components[1]
    components = components[2].rsplit(" ", 2)

    loc = components[0]
    _at = components[1]
    time = int(components[2])

    if direction == "entered":
        stack.append(loc)
        timestack.append(time)
    elif direction == "exited":
        dur = time - timestack.pop()
        vst = ";".join(stack)
        print(f"{vst} {dur}")
        stack.pop()
and:
nix-instantiate --trace-function-calls -vvvv ../nixpkgs/pkgs/top-level/release.nix -A unstable > log.matthewbauer 2>&1
./log.py ./log.matthewbauer > log.matthewbauer.folded
flamegraph.pl --title matthewbauer-post-pr log.matthewbauer.folded > log.matthewbauer.folded.svg
I can make flame graphs like: http://gsc.io/log.matthewbauer.folded.svg
---
Includes test cases around function call failures and tryEval. Uses
RAII so the finish is always called at the end of the function.
Make curl's low speed limit configurable via stalled-download-timeout.
Before, this limit was five minutes without receiving a single byte.
This is much too long, given that the remote end may not even have
acknowledged the HTTP request.
POSIX file locks are essentially incompatible with multithreading. BSD
locks have much saner semantics. We need this now that there can be
multiple concurrent LocalStore::buildPaths() invocations.
This currently fails because we're using POSIX file locks. So when the
garbage collector opens and closes its own temproots file, it causes
the lock to be released and then deleted by another GC instance.
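For illustration, BSD flock() locks attach to the open file description rather than to the (process, file) pair, so one part of the process opening and closing its own descriptor doesn't drop another part's lock. A minimal sketch (not the actual PathLocks code):
```c++
#include <fcntl.h>
#include <sys/file.h>
#include <unistd.h>

// Take an exclusive BSD lock on a lock file. The lock lives as long as the
// returned descriptor stays open; other descriptors for the same path being
// opened and closed elsewhere in the process (e.g. by the GC touching its
// temproots file) do not release it, unlike POSIX fcntl() locks.
int lockFileExclusive(const char * path)
{
    int fd = open(path, O_RDWR | O_CREAT | O_CLOEXEC, 0600);
    if (fd == -1) return -1;
    if (flock(fd, LOCK_EX | LOCK_NB) == -1) { close(fd); return -1; }
    return fd;
}
```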
Passing `--post-build-hook /foo/bar` to a nix-* command will cause
`/foo/bar` to be executed after each build with the following
environment variables set:
DRV_PATH=/nix/store/drv-that-has-been-built.drv
OUT_PATHS=/nix/store/...build /nix/store/...build-bin /nix/store/...build-dev
This can be useful in particular to upload all the built artifacts to
the cache (including the ones that don't appear in the runtime closure
of the final derivation or are built because of IFD).
This new feature prints the stderr/stdout output to the `nix-build`
and `nix build` client, and the output is printed in a Nix 2
compatible format:
[nix]$ ./inst/bin/nix-build ./test.nix
these derivations will be built:
/nix/store/ishzj9ni17xq4hgrjvlyjkfvm00b0ch9-my-example-derivation.drv
building '/nix/store/ishzj9ni17xq4hgrjvlyjkfvm00b0ch9-my-example-derivation.drv'...
hello!
bye!
running post-build-hook '/home/grahamc/projects/github.com/NixOS/nix/post-hook.sh'...
post-build-hook: + sleep 1
post-build-hook: + echo 'Signing paths' /nix/store/qr213vjmibrqwnyp5fw678y7whbkqyny-my-example-derivation
post-build-hook: Signing paths /nix/store/qr213vjmibrqwnyp5fw678y7whbkqyny-my-example-derivation
post-build-hook: + sleep 1
post-build-hook: + echo 'Uploading paths' /nix/store/qr213vjmibrqwnyp5fw678y7whbkqyny-my-example-derivation
post-build-hook: Uploading paths /nix/store/qr213vjmibrqwnyp5fw678y7whbkqyny-my-example-derivation
post-build-hook: + sleep 1
post-build-hook: + printf 'very important stuff'
/nix/store/qr213vjmibrqwnyp5fw678y7whbkqyny-my-example-derivation
[nix-shell:~/projects/github.com/NixOS/nix]$ ./inst/bin/nix build -L -f ./test.nix
my-example-derivation> hello!
my-example-derivation> bye!
my-example-derivation (post)> + sleep 1
my-example-derivation (post)> + echo 'Signing paths' /nix/store/c263gzj2kb2609mz8wrbmh53l14wzmfs-my-example-derivation
my-example-derivation (post)> Signing paths /nix/store/c263gzj2kb2609mz8wrbmh53l14wzmfs-my-example-derivation
my-example-derivation (post)> + sleep 1
my-example-derivation (post)> + echo 'Uploading paths' /nix/store/c263gzj2kb2609mz8wrbmh53l14wzmfs-my-example-derivation
my-example-derivation (post)> Uploading paths /nix/store/c263gzj2kb2609mz8wrbmh53l14wzmfs-my-example-derivation
my-example-derivation (post)> + sleep 1
my-example-derivation (post)> + printf 'very important stuff'
[1 built, 0.0 MiB DL]
Co-authored-by: Graham Christensen <graham@grahamc.com>
Co-authored-by: Eelco Dolstra <edolstra@gmail.com>
startProcess does not appear to send the exit code to the helper
correctly. Not sure why this is, but it is probably safe to just
fallback on all sandbox errors.
This ensures that stdenv / setup hooks take $IN_NIX_SHELL into
account. For example, stdenv only sets
NIX_SSL_CERT_FILE=/no-cert-file.crt if we're not in a shell.
When sandbox-fallback = true (the default), the Nix builder will fall
back to disabled sandbox mode when the kernel doesn’t allow users to
set it up. This prevents hard errors from occurring in tricky places,
especially the initial installer. To restore the previous behavior,
users can set:
sandbox-fallback = false
in their /etc/nix/nix.conf configuration.
Some kernels disable "unprivileged user namespaces". This is
unfortunate, but we can still use mount namespaces. Anyway, since each
builder has its own nixbld user, we already have most of the benefits
of user namespaces.
This replaces 'nix-env --set'. For example:
$ nix build --profile /nix/var/nix/profiles/system \
~/Misc/eelco-configurations:nixosConfigurations.vyr.config.system.build.toplevel
updates the NixOS system profile from a flake.
This could have been a separate command (e.g. 'nix set-profile') but
1) '--profile' is pretty similar to '--out-link'; and 2) '--profile'
could be useful for other commands (like 'nix dev-shell').
This is a much simpler fix to the 'error 9 while decompressing xz
file' problem than 78fa47a7f0. We just
do a ranged HTTP request starting after the data that we previously
wrote into the sink.
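A hedged libcurl sketch of the idea (not the exact Nix code): if some number of bytes already reached the sink, the retry requests only the remaining range:
```c++
#include <curl/curl.h>
#include <string>

// Ask the server for bytes [bytesWritten, end) instead of restarting the
// download from scratch after a transfer/decompression error.
void resumeAfter(CURL * req, size_t bytesWritten)
{
    std::string range = std::to_string(bytesWritten) + "-";
    curl_easy_setopt(req, CURLOPT_RANGE, range.c_str());
}
```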
Fixes#2952, #379.
Our use of boost::coroutine2 depends on -lboost_context,
which in turn depends on `-lboost_thread`, which in turn depends
on `-lboost_system`.
I suspect that this builds on nix only because of low-level hacks
like NIX_LDFLAGS.
This commit passes the proper linker flags, thus fixing bootstrap
builds on non-nix distributions like Ubuntu 16.04.
With these changes, I can build Nix on Ubuntu 16.04 using:
./bootstrap.sh
./configure --prefix=$HOME/editline-prefix \
--disable-doc-gen \
CXX=g++-7 \
--with-boost=$HOME/boost-prefix \
EDITLINE_CFLAGS=-I$HOME/editline-prefix/include \
EDITLINE_LIBS=-leditline \
LDFLAGS=-L$HOME/editline-prefix/lib
make
where
* g++-7 comes from gcc-7 from
https://launchpad.net/~ubuntu-toolchain-r/+archive/ubuntu/test,
* editline 1.14 from https://github.com/troglobit/editline/releases/tag/1.14.0
was installed into `$HOME/editline-prefix`
(because Ubuntu 16.04's `editline` is too old to have the function nix uses),
* boost 1.66 from
https://www.boost.org/doc/libs/1_66_0/more/getting_started/unix-variants.html
was installed into $HOME/boost-prefix (because Ubuntu 16.04 only has 1.58)
$ sudo ./inst/bin/nix-instantiate -E '"${./.git}"'
error: The path name '.git' is invalid: it is illegal to start the
name with a period. Path names are alphanumeric and can include the
symbols +-._?= and must not begin with a period. Note: If '.git' is a
source file and you cannot rename it on disk,
builtins.path { name = ... } can be used to give it an alternative
name.
If multiple builds fail with different errors, this is reflected
in the status code, e.g.
103 => timeout + hash mismatch
105 => timeout + check mismatch
106 => hash mismatch + check mismatch
107 => timeout + hash mismatch + check mismatch
(that is, 100 plus a bitmask: 1 for timeout, 2 for hash mismatch, 4 for check mismatch)
Setting `http2 = false` in nix config (e.g. /etc/nix/nix.conf)
had no effect, and `nix-env -vvvvv -i hello` still downloaded .nar
packages using HTTP/2.
In `src/libstore/download.cc`, the `CURL_HTTP_VERSION_2TLS` option was
being explicitly set when `downloadSettings.enableHttp2` was `true`,
but, `CURL_HTTP_VERSION_1_1` option was not being explicitly set when
`downloadSettings.enableHttp2` was `false`.
This may be because `https://curl.haxx.se/libcurl/c/libcurl-env.html` states:
"You have to set this option if you want to use libcurl's HTTP/2 support."
but, also, in the changelog, states:
"DEFAULT
Since curl 7.62.0: CURL_HTTP_VERSION_2TLS
Before that: CURL_HTTP_VERSION_1_1"
So, the default setting for `libcurl` is HTTP/2 for version >= 7.62.0.
In this commit, option `CURLOPT_HTTP_VERSION` is explicitly set to
`CURL_HTTP_VERSION_1_1` when `downloadSettings.enableHttp2` nix config
setting is `false`.
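As a sketch (the settings field name is approximate), the option is now set explicitly in both branches:
```c++
#include <curl/curl.h>

// Pin the HTTP version explicitly, so `http2 = false` takes effect even on
// curl >= 7.62.0, where HTTP/2 over TLS is the default.
void setHttpVersion(CURL * req, bool enableHttp2)
{
    curl_easy_setopt(req, CURLOPT_HTTP_VERSION,
        enableHttp2 ? CURL_HTTP_VERSION_2TLS : CURL_HTTP_VERSION_1_1);
}
```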
This can be tested by running `nix-env -vvvvv -i hello | grep HTTP`
The default nsswitch.conf(5) file in most distros can handle many
different things including host name, user names, groups, etc. In Nix,
we want to limit the amount of impurities that come from these things.
As a result, we should only allow nss to be used for gethostbyname(3)
and getservent(3).
/cc @Ericson2314
'updateCV.notify_one()' does nothing if the update thread is not
waiting for updateCV (in particular this happens when it is sleeping
on quitCV). So also set a variable to ensure that the update isn't
lost.
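The usual shape of the fix is to pair the condition variable with a flag that the waiting thread re-checks, e.g. (a generic sketch, not the actual progress-bar code):
```c++
#include <condition_variable>
#include <mutex>

std::mutex mtx;
std::condition_variable updateCV;
bool updatePending = false;

void requestUpdate()
{
    {
        std::lock_guard<std::mutex> lock(mtx);
        updatePending = true;           // remembered even if nobody is waiting yet
    }
    updateCV.notify_one();
}

void updateLoop()
{
    std::unique_lock<std::mutex> lock(mtx);
    for (;;) {
        updateCV.wait(lock, [] { return updatePending; });  // no lost wakeups
        updatePending = false;
        // ... redraw the progress bar ...
    }
}
```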
--no-net causes tarballTtl to be set to the largest 32-bit integer,
which causes comparison like 'time + tarballTtl < other_time' to
fail on 32-bit systems. So cast them to 64-bit first.
https://hydra.nixos.org/build/95076624
(cherry picked from commit 29ccb2e969)
This flag
* Disables substituters.
* Sets the tarball-ttl to infinity (ensuring e.g. that the flake
registry and any downloaded flakes are considered current).
* Disables retrying downloads and sets the connection timeout to the
minimum. (So it doesn't completely disable downloads at the moment.)
(cherry picked from commit 8ea842260b)
Once we've started writing data to a Sink, we can't restart a download
request, because then we end up writing duplicate data to the
Sink. Therefore we shouldn't handle retries in Downloader but at a
higher level (in particular, in copyStorePath()).
Fixes#2952.
(cherry picked from commit a67cf5a358)
Also, make fetchGit and fetchMercurial update allowedPaths properly.
(Maybe the evaluator, rather than the caller of the evaluator, should
apply toRealPath(), but that's a bigger change.)
(cherry picked from commit 5c34d66538)
Once we've started writing data to a Sink, we can't restart a download
request, because then we end up writing duplicate data to the
Sink. Therefore we shouldn't handle retries in Downloader but at a
higher level (in particular, in copyStorePath()).
Fixes#2952.
--no-net causes tarballTtl to be set to the largest 32-bit integer,
which causes comparison like 'time + tarballTtl < other_time' to
fail on 32-bit systems. So cast them to 64-bit first.
https://hydra.nixos.org/build/95076624
This flag
* Disables substituters.
* Sets the tarball-ttl to infinity (ensuring e.g. that the flake
registry and any downloaded flakes are considered current).
* Disables retrying downloads and sets the connection timeout to the
minimum. (So it doesn't completely disable downloads at the moment.)
This allows many programs (e.g. gcc, clang, cmake) to print colorized
log output (assuming $TERM is set to a value like "xterm").
There are other ways to get colors, in particular setting
CLICOLOR_FORCE, but they're less widely supported and can break
programs that parse tool output.
In a daemon-based Nix setup, some options cannot be overridden by a
client unless the client's user is considered trusted.
Currently, if an untrusted user tries to override one of those
options, we are silently ignoring it.
This can be pretty confusing in certain situations.
e.g. a user thinks he disabled the sandbox when in reality he did not.
We are now sending a warning message letting the user know that some options
have been ignored.
Related to #1761.
This exploits the hermetic nature of flake evaluation to speed up
repeated evaluations of a flake output attribute.
For example (doing 'nix build' on an already present package):
$ time nix build nixpkgs:firefox
real 0m1.497s
user 0m1.160s
sys 0m0.139s
$ time nix build nixpkgs:firefox
real 0m0.052s
user 0m0.038s
sys 0m0.007s
The cache is ~/.cache/nix/eval-cache-v1.sqlite, which has entries like
INSERT INTO Attributes VALUES(
X'92a907d4efe933af2a46959b082cdff176aa5bfeb47a98fabd234809a67ab195',
'packages.firefox',
1,
'/nix/store/pbalzf8x19hckr8cwdv62rd6g0lqgc38-firefox-67.0.drv /nix/store/g6q0gx0v6xvdnizp8lrcw7c4gdkzana0-firefox-67.0 out');
where the hash 92a9... is a fingerprint over the flake store path and
the contents of the lockfile. Because flakes are evaluated in pure
mode, this uniquely identifies the evaluation result.
For the SIGWINCH signal to be caught, it needs to be set in sigaction
on the main thread. Previously, this was broken, and updateWindowSize
was never being called. Tested on macOS 10.14.
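A minimal sketch of installing the handler early on the main thread (handler name and body are placeholders):
```c++
#include <signal.h>

static void onSigwinch(int)
{
    // async-signal-safe work only: e.g. set a flag and re-read the window
    // size later from the main loop
}

int main()
{
    struct sigaction act = {};
    act.sa_handler = onSigwinch;
    sigemptyset(&act.sa_mask);
    // Install on the main thread, before any worker threads are spawned.
    sigaction(SIGWINCH, &act, nullptr);
    // ... rest of the program ...
}
```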
As long as the flake input is locked, it is now only fetched when it
is evaluated (e.g. "nixpkgs" is fetched when
"inputs.nixpkgs.<something>" is evaluated).
This required adding an "id" attribute to the members of "inputs" in
lockfiles, e.g.
"inputs": {
"nixpkgs/release-19.03": {
"id": "nixpkgs",
"inputs": {},
"narHash": "sha256-eYtxncIMFVmOHaHBtTdPGcs/AnJqKqA6tHCm0UmPYQU=",
"nonFlakeInputs": {},
"uri": "github:edolstra/nixpkgs/e9d5882bb861dc48f8d46960e7c820efdbe8f9c1"
}
}
because the flake ID needs to be known beforehand to construct the
"inputs" attrset.
Fixes#2913.
This is like 'nix run', except that the command to execute is defined
in a flake output, e.g.
defaultApp = {
type = "app";
program = "${packages.blender_2_80}/bin/blender";
};
Thus you can do
$ nix app blender-bin
to start Blender from the 'blender-bin' flake.
In the future, we can extend this with sandboxing. (For example we
would want to be able to specify that Blender should not have network
access by default and should only have access to certain paths in the
user's home directory.)
This is primarily useful for version string generation, where we need
a monotonically increasing number. The revcount is the preferred thing
to use, but isn't available for GitHub flakes (since it requires
fetching the entire history). The last commit timestamp OTOH can be
extracted from GitHub tarballs.
This ensures that flakes don't get garbage-collected, which is
important to get nix-channel-like behaviour.
For example, running
$ nix build hydra:
will create a GC root
~/.cache/nix/flake-closures/hydra -> /nix/store/xarfiqcwa4w8r4qpz1a769xxs8c3phgn-flake-closure
where the contents/references of the linked file in the store are the
flake source trees used by the 'hydra' flake:
/nix/store/n6d5f5lkpfjbmkyby0nlg8y1wbkmbc7i-source
/nix/store/vbkg4zy1qd29fnhflsv9k2j9jnbqd5m2-source
/nix/store/z46xni7d47s5wk694359mq9ay353ar94-source
Note that this in itself is not enough to allow offline use; the
fetcher for the flakeref (e.g. fetchGit or downloadCached) must not
fail if it cannot fetch the latest version of the file, so long as it
knows a cached version.
Issue #2868.
This causes 'nix' to print build log output to stderr rather than
showing the last log line in the progress bar. Log lines are prefixed
by the name of the derivation (minus the version string), e.g.
binutils> make[1]: Leaving directory '/build/binutils-2.31.1'
binutils-wrapper> unpacking sources
binutils-wrapper> patching sources
...
binutils-wrapper> Using dynamic linker: '/nix/store/kr51dlsj9v5cr4n8700jliyz8v5b2q7q-bootstrap-stage0-glibc/lib/ld-linux-x86-64.so.2'
bootstrap-stage2-gcc-wrapper> unpacking sources
...
linux-headers> unpacking sources
linux-headers> unpacking source archive /nix/store/8javli69jhj3bkql2c35gsj5vl91p382-linux-4.19.16.tar.xz
It was getting confused between logical and real store paths.
Also, make fetchGit and fetchMercurial update allowedPaths properly.
(Maybe the evaluator, rather than the caller of the evaluator, should
apply toRealPath(), but that's a bigger change.)
The value of useChroot is not set yet in the constructor, resulting in
hash rewriting being enabled in certain cases where it should not be.
Fixes#2801
Sometimes, "expected" can be "0", but in fact means "unknown".
This is for example the case when downloading a file while the http
server doesn't send the `Content-Length` header, like when running `nix
build` pointing to a nixpkgs checkout streamed from GitHub:
⇒ nix build -f https://github.com/NixOS/nixpkgs/archive/master.tar.gz hello
[1.8/0.0 MiB DL] downloading 'https://github.com/NixOS/nixpkgs/archive/master.tar.gz'
In that case, don't show that weird progress bar, but only the (slowly
increasing) downloaded size ("done").
⇒ nix build -f https://github.com/NixOS/nixpkgs/archive/master.tar.gz hello
[1.8 MiB DL] downloading 'https://github.com/NixOS/nixpkgs/archive/master.tar.gz'
This commit also updates fmt calls with three numbers (when something is
currently 'running' too) - I'm not sure if this can be provoked, but
showing "0" as expected doesn't make any sense, as we're obviously doing
more than nothing.
For text files it is possible to do it like so:
`builtins.hashString "sha256" (builtins.readFile /tmp/a)`
but that doesn't work for binary files.
With builtins.hashFile any kind of file can be conveniently hashed.
To determine which seccomp filters to install, we were incorrectly
using settings.thisSystem, which doesn't denote the actual system when
--system is used.
Fixes#2791.
Thus
$ nix dev-shell
will now build the 'provides.devShell' attribute from the flake in the
current directory. If it doesn't exist, it falls back to
'provides.defaultPackage'.
'nix dev-shell' is intended to replace nix-shell. It supports flakes,
e.g.
$ nix dev-shell nixpkgs:hello
starts a bash shell providing an environment for building 'hello'.
Like Lorri (and unlike nix-shell), it computes the build environment
by building a modified top-level derivation that writes the
environment after running $stdenv/setup to $out and exits. This
provides some caching, so it's faster than nix-shell in some cases
(especially for packages with lots of dependencies, where the setup
script takes a long time).
There also is a command 'nix print-dev-env' that prints out shell code
for setting up the build environment in an existing shell, e.g.
$ . <(nix print-dev-env nixpkgs:hello)
https://github.com/tweag/nix/issues/21
Example:
$ nix flake info dwarffs
ID: dwarffs
URI: github:edolstra/dwarffs/a83d182fe3fe528ed6366a5cec3458bcb1a5f6e1
Description: A filesystem that fetches DWARF debug info from the Internet on demand
Revision: a83d182fe3fe528ed6366a5cec3458bcb1a5f6e1
Path: /nix/store/grgd14kxxk8q4n503j87mpz48gcqpqw7-source
This ensures that the lock file is updated *before* evaluating it, and
that it gets updated for any nix command, not just 'nix build'.
Also, while computing the lock file, allow arbitrary registry lookups,
not just at top-level.
Also, improve some error messages slightly.
Also allow "." as an installable to refer to the flake in the current
directory. E.g.
$ nix build .
will build 'provides.defaultPackage' in the flake in the current
directory.
Unlike file://<path>, this allows the path to be a dirty Git tree, so
nix build /path/to/flake:attr
is a convenient way to test building a local flake.
The general syntax for an installable is now
<flakeref>:<attrpath>. The attrpath is relative to the flake's
'provides.packages' or 'provides' if the former doesn't yield a
result. E.g.
$ nix build nixpkgs:hello
is equivalent to
$ nix build nixpkgs:packages.hello
Also, '<flakeref>:' can be omitted, in which case it defaults to
'nixpkgs', e.g.
$ nix build hello
Scanning of /proc/<pid>/{exe,cwd} was broken because '{memory:' was
prepended twice. Also, get rid of the whole '{memory:...}' thing
because it's unnecessary, we can just list the file in /proc directly.
This new structure makes more sense as there may be many sources rooting
the same store path. Many profiles can reference the same path but this
is even more true with /proc/<pid>/maps where distinct pids can and
often do map the same store path.
This implementation is also more efficient as the `Roots` map contains
only one entry per rooted store path.
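In other words, the mapping is now keyed on the store path, with every root source collected in the value (types approximate, for illustration only):
```c++
#include <map>
#include <set>
#include <string>

using Path = std::string;

// One entry per rooted store path; the set collects everything that roots it
// (profile links, /proc/<pid>/maps entries, ...).
using Roots = std::map<Path, std::set<std::string>>;
```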
It could happen that the local builder matches the derivation's system but lacks some required features; currently this results in a failure.
The fix gracefully excludes the local builder from the set of available builders for derivations that require such a feature, so the derivation is built on remote builders only (as though it had an incompatible system, like `aarch64-linux` when the local machine is x86).
This allows using an arbitrary "provides" attribute from the specified
flake. For example:
nix build --flake nixpkgs packages.hello
(Maybe provides.packages should be used for consistency...)
We want to encourage a brave new world of hermetic evaluation for
source-level reproducibility, so flakes should not poke around in the
filesystem outside of their explicit dependencies.
Note that the default installation source remains impure in that it
can refer to mutable flakes, so "nix build nixpkgs.hello" still works
(and fetches the latest nixpkgs, unless it has been pinned by the
user).
A problem with pure evaluation is that builtins.currentSystem is
unavailable. For the moment, I've hard-coded "x86_64-linux" in the
nixpkgs flake. Eventually, "system" should be a flake function
argument.
For example, github:edolstra/dwarffs is more-or-less equivalent to
https://github.com/edolstra/dwarffs.git. It's a much faster way to get
GitHub repositories: it fetches tarballs rather than entire Git
repositories. It also allows fetching specific revisions by hash
without specifying a ref (e.g. a branch name):
github:edolstra/dwarffs/41c0c1bf292ea3ac3858ff393b49ca1123dbd553
This reverts commit a0ef21262f. This
doesn't work in 'nix run' and nix-shell because setns() fails in
multithreaded programs, and Boehm GC mark threads are uncancellable.
Fixes#2646.
Inside a derivation, exportReferencesGraph already provides a way to
dump the Nix database for a specific closure. On the command line,
--dump-db gave us the same information, but only for the entire Nix
database at once.
With this change, one can now pass a list of paths to --dump-db to get
the Nix database dumped for just those paths. (The user is responsible
for ensuring this is a closure, like for --export).
Among other things, this is useful for deploying a closure to a new
host without using --import/--export; one can use tar to transfer the
store paths, and --dump-db/--load-db to transfer the validity
information. This is useful if the new host doesn't actually have Nix
yet, and the closure that is being deployed itself contains Nix.
Previously, plain derivation paths in the string context (e.g. those
that arose from builtins.storePath on a drv file, not those that arose
from accessing .drvPath of a derivation) were treated somewhat like
derivation paths derived from .drvPath, except their dependencies
weren't recursively added to the input set. With this change, such
plain derivation paths are simply treated as paths and added to the
source inputs set accordingly, simplifying context handling code and
removing the inconsistency. If drvPath-like behavior is desired, the
.drv file can be imported and then .drvPath can be accessed.
This is a backwards-incompatibility, but storePath is never used on
drv files within nixpkgs and almost never used elsewhere.
Trying to fetch refs that are not in refs/heads currently fails because
it looks for refs/heads/refs/foo instead of refs/foo.
e.g.
builtins.fetchGit {
url = https://github.com/NixOS/nixpkgs.git;
ref = "refs/pull/1024/head";
}
SRI hashes (https://www.w3.org/TR/SRI/) combine the hash algorithm and
a base-64 hash. This allows more concise and standard hash
specifications. For example, instead of
import <nix/fetchurl.nl> {
url = https://nixos.org/releases/nix/nix-2.1.3/nix-2.1.3.tar.xz;
sha256 = "5d22dad058d5c800d65a115f919da22938c50dd6ba98c5e3a183172d149840a4";
};
you can write
import <nix/fetchurl.nl> {
url = https://nixos.org/releases/nix/nix-2.1.3/nix-2.1.3.tar.xz;
hash = "sha256-XSLa0FjVyADWWhFfkZ2iKTjFDda6mMXjoYMXLRSYQKQ=";
};
In fixed-output derivations, the outputHashAlgo is no longer mandatory
if outputHash specifies the hash (either as an SRI or in the old
"<type>:<hash>" format).
'nix hash-{file,path}' now print hashes in SRI format by default. I
also reverted them to use SHA-256 by default because that's what we're
using most of the time in Nixpkgs.
Suggested by @zimbatm.
Use the same output ordering and format everywhere.
This is such a common issue that we trade the single-line error message for
more readability.
Old message:
```
fixed-output derivation produced path '/nix/store/d4nw9x2sy9q3r32f3g5l5h1k833c01vq-example.com' with sha256 hash '08y4734bm2zahw75b16bcmcg587vvyvh0n11gwiyir70divwp1rm' instead of the expected hash '1xzwnipjd54wl8g93vpw6hxnpmdabq0wqywriiwmh7x8k0lvpq5m'
```
New message:
```
hash mismatch in fixed-output derivation '/nix/store/d4nw9x2sy9q3r32f3g5l5h1k833c01vq-example.com':
wanted: sha256:1xzwnipjd54wl8g93vpw6hxnpmdabq0wqywriiwmh7x8k0lvpq5m
got: sha256:08y4734bm2zahw75b16bcmcg587vvyvh0n11gwiyir70divwp1rm
```
Without this information the content addressable state and hashes are
lost after the first request; this causes signatures to be required for
everything even though the path could be verified without signing.
This enables using http for S3 requests, for debugging or for
implementations that don't have https configured. This is not a problem
for binary caches since they should not contain sensitive information.
Both package signatures and AWS auth already protect against tampering.
The goal is to support libeditline AND libreadline and let the user
decide at compile time which one to use.
Add a compile time option to use libreadline instead of
libeditline. If compiled against libreadline, completion functionality
is lost because of an incompatibility between libeditline's and
libreadline's completion functions. Completion with libreadline is
possible and can be added later.
To use libreadline instead of libeditline, the environment
variables 'EDITLINE_LIBS' and 'EDITLINE_CFLAGS' have to be set
during the ./configure step.
Example:
EDITLINE_LIBS="/usr/lib/x86_64-linux-gnu/libhistory.so /usr/lib/x86_64-linux-gnu/libreadline.so"
EDITLINE_CFLAGS="-DREADLINE"
The reason for this change is that, for example, on Debian three
different editline libraries already exist, but none of them is compatible with the
flavor used by Nix. My hope is that with this change it will be
easier to port Nix to systems that already have libreadline available.
Previously, config would only be read from XDG_CONFIG_HOME. This change
allows reading config from additional directories, which enables e.g.
per-project binary caches or chroot stores with the help of direnv.
The use of TransferManager has several issues, including that it
doesn't allow setting a Content-Encoding without a patch, and it
doesn't handle exceptions in worker threads (causing termination on
memory allocation failure).
Fixes#2493.
This commit partially reverts 48662d151b. When
copying from an older store (in my case a store running Nix 1.11.7), nix would
throw errors about there being no hash. This is fixed by recalculating the hash.
In structured-attributes derivations, you can now specify per-output
checks such as:
outputChecks."out" = {
# The closure of 'out' must not be larger than 256 MiB.
maxClosureSize = 256 * 1024 * 1024;
# It must not refer to C compiler or to the 'dev' output.
disallowedRequisites = [ stdenv.cc "dev" ];
};
outputChecks."dev" = {
# The 'dev' output must not be larger than 128 KiB.
maxSize = 128 * 1024;
};
Also fixed a bug in allowedRequisites that caused it to ignore
self-references.
This prints the references graph of the store paths in the graphML
format [1]. The graphML format is supported by several graph tools
such as the Python NetworkX library or the Apache TinkerPop project.
[1] http://graphml.graphdrawing.org
Since its superclass RemoteStore::Connection contains 'to' and 'from'
fields that refer to the file descriptor maintained in the subclass,
it was possible for the flush() call in Connection::~Connection() to
write to a closed file descriptor (or worse, a file descriptor now
referencing another file). So make sure that the file descriptor
survives 'to' and 'from'.
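The underlying C++ rule: members are destroyed in reverse declaration order, and derived-class members are destroyed before the base-class subobject, so anything a destructor still flushes to must be declared to outlive it. A generic illustration (placeholder types, not the real classes):
```c++
// Placeholder types for illustration only.
struct AutoCloseFd  { int fd = -1;  ~AutoCloseFd()  { /* close(fd) */ } };
struct BufferedPipe { int fd = -1;  ~BufferedPipe() { /* flush to fd */ } };

struct Connection {
    // Declared first => destroyed last, so the flush in ~BufferedPipe still
    // writes to a live descriptor instead of a closed (or reused) one.
    AutoCloseFd socket;
    BufferedPipe to;
    BufferedPipe from;
};
```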
This is primarily because Derivation::{can,will}BuildLocally() depends
on attributes like preferLocalBuild and requiredSystemFeatures, but it
can't handle them properly because it doesn't have access to the
structured attributes.
E.g. __noChroot and allowedReferences now work correctly. We also now
check that the attribute type is correct. For instance, instead of
allowedReferences = "out";
you have to write
allowedReferences = [ "out" ];
Fixes#2453.
This meant that making a typo in an s3:// URI would cause a bucket to
be created. Also it didn't handle eventual consistency very well. Now
it's up to the user to create the bucket.
Tools which re-exec `$SHELL` or `$0` or `basename $SHELL` or even just
`bash` will otherwise get the non-interactive bash, providing a
broken shell for the same reasons described in
https://github.com/NixOS/nixpkgs/issues/27493.
Extends c94f3d5575
Calculating roots seems significantly slower on darwin compared to
linux. Checking for /profile/ links could show some false positives but
should still catch most issues.
* Don't wait forever for the client to remove data from the
buffer. This does mean that the buffer can grow without bounds
(e.g. when downloading is faster than writing to disk), but meh.
* Don't hold the state lock while calling the sink. The sink could
take any amount of time to process the data (in particular when it's
actually a coroutine), so we don't want to block the download
thread.
In particular this causes copyStorePath() from HttpBinaryCacheStore to
only start a download if needed. E.g. if the destination LocalStore
goes to sleep waiting for the path lock and another process creates
the path, then LocalStore::addToStore() will never read from the
source so we don't have to do the download.
‘geteuid’ gives us the user that the command is being run as,
including in setuid modes. By using geteuid to determine the id, we can
avoid the ‘sudo -i’ hack when upgrading Nix. So now, upgrading Nix on
macOS is as simple as:
$ sudo nix-channel --update
$ sudo nix-env -u
$ sudo launchctl stop org.nixos.nix-daemon
$ sudo launchctl start org.nixos.nix-daemon
or
$ sudo systemctl restart nix-daemon
It's pretty easy to unintentionally install a second version of nix
into the user profile when using a daemon install. In this case it
looks like nix was upgraded while the nix-daemon is probably still
running an older version.
A protocol mismatch can sometimes cause problems when using specific
features with an older daemon. For example:
Nix 2.0 changed the way files are copied to the store. The daemon is
backwards compatible and can still handle older clients, however a 1.11
nix-daemon isn't forwards compatible.
This is already done by coerceToString(), provided that the argument
is a path (e.g. 'fetchGit ./bla'). It fixes the handling of URLs like
git@github.com:owner/repo.git. It breaks 'fetchGit "./bla"', but that
was never intended to work anyway and is inconsistent with other
builtin functions (e.g. 'readFile "./bla"' fails).
E.g.
$ nix upgrade-nix
error: directory '/home/eelco/Dev/nix/inst/bin' does not appear to be part of a Nix profile
instead of
$ nix upgrade-nix
error: '/home/eelco/Dev/nix/inst' is not a symlink
Fix a 32-bit overflow that resulted in negative numbers being printed;
use fmt() instead of boost::format(); change -H to -h for consistency
with 'ls' and 'du'; make the columns narrower (since they can't be
bigger than 1024.0).
If the user has an object greater than 1024 yottabytes, it'll just display it as
N yottabytes instead of overflowing.
Swaps to use boost::format strings instead of std::setw and std::setprecision.
Using a 64bit integer on 32bit systems will come with a bit of a
performance overhead, but given that Nix doesn't use a lot of integers
compared to other types, I think the overhead is negligible also
considering that 32bit systems are in decline.
The biggest advantage however is that when we use a consistent integer
size across all platforms it's less likely that we miss things that we
break due to that. One example would be:
https://github.com/NixOS/nixpkgs/pull/44233
On Hydra it will evaluate, because the evaluator runs on a 64bit
machine, but when evaluating the same on a 32bit machine it will fail,
so using 64bit integers should make that consistent.
While the change of the type in value.hh is rather easy to do, we have a
few more options available for doing the conversion in the lexer:
* Via an #ifdef on the architecture and using strtol() or strtoll()
accordingly depending on which architecture we are. For the #ifdef
we would need another AX_COMPILE_CHECK_SIZEOF in configure.ac.
* Using istringstream, which would involve copying the value.
* As we're already using boost, lexical_cast might be a good idea.
Spoiler: I went for the latter, first of all because lexical_cast does
have an overload for const char* and second of all, because it doesn't
involve copying around the input string. Also, because istringstream
seems to come with a bigger overhead than boost::lexical_cast:
https://www.boost.org/doc/libs/release/doc/html/boost_lexical_cast/performance.html
The first method (still using strtol/strtoll) also wasn't something I
pursued further, because it is also locale-aware which I doubt is what
we want, given that the regex for int is [0-9]+.
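A hedged sketch of the lexer-side conversion with that choice (function and variable names are placeholders):
```c++
#include <boost/lexical_cast.hpp>
#include <cstdint>

// Convert the token matched by [0-9]+ to a 64-bit integer on every platform,
// without copying into an intermediate std::string and without the
// locale-aware behaviour of strtol()/strtoll(). Throws
// boost::bad_lexical_cast if the value doesn't fit.
int64_t parseIntToken(const char * yytext)
{
    return boost::lexical_cast<int64_t>(yytext);
}
```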
Signed-off-by: aszlig <aszlig@nix.build>
Fixes: #2339
The profile present in PATH is not necessarily the actual profile
location. User profiles are generally added as $HOME/.nix-profile
in which case the indirect profile link needs to be resolved first.
/home/user/.nix-profile -> /nix/var/nix/profiles/per-user/user/profile
/nix/var/nix/profiles/per-user/user/profile -> profile-15-link
/nix/var/nix/profiles/per-user/user/profile-14-link -> /nix/store/hyi4kkjh3bwi2z3wfljrkfymz9904h62-user-environment
/nix/var/nix/profiles/per-user/user/profile-15-link -> /nix/store/6njpl3qvihz46vj911pwx7hfcvwhifl9-user-environment
To upgrade nix here we want /nix/var/nix/profiles/per-user/user/profile-16-link
instead of /home/user/.nix-profile-1-link. The latter is not a gcroot
and would be garbage collected, resulting in a broken profile.
Fixes#2175
The current usage technically works by putting multiple different
repos into the same git directory. However, it is very slow as
Git tries very hard to find common commits between the two
repositories. If the two repositories are large (like Nixpkgs and
another long-running project), it is maddeningly slow.
This change busts the cache for existing deployments, but users
will be promptly repaid in per-repository performance.
TransferManager allocates a lot of memory (50 MiB by default), and it
might leak but I'm not sure about that. In any case it was causing
OOMs in hydra-queue-runner. So allocate only one TransferManager per
S3BinaryCacheStore.
Hopefully fixes https://github.com/NixOS/hydra/issues/586.
This callback is executed on a different thread, so exceptions thrown
from the callback are not caught:
Aug 08 16:25:48 chef hydra-queue-runner[11967]: terminate called after throwing an instance of 'nix::Error'
Aug 08 16:25:48 chef hydra-queue-runner[11967]: what(): AWS error: failed to upload 's3://nix-cache/19dbddlfb0vp68g68y19p9fswrgl0bg7.ls'
Therefore, just check the transfer status after it completes. Also
include the S3 error message in the exception.
This didn't work anymore since decompression was only done in the
non-coroutine case.
Decompressors are now sinks, just like compressors.
Also fixed a bug in bzip2 API handling (we have to handle BZ_RUN_OK
rather than BZ_OK), which we didn't notice because there was a missing
'throw':
if (ret != BZ_OK)
CompressionError("error while compressing bzip2 file");
It adds a new operation, cmdAddToStoreNar, that does the same thing as
the corresponding nix-daemon operation, i.e. call addToStore(). This
replaces cmdImportPaths, which has the major issue that it sends the
NAR first and the store path second, thus requiring us to store the
incoming NAR either in memory or on disk until we decide what to do
with it.
For example, this reduces the memory usage of
$ nix copy --to 'ssh://localhost?remote-store=/tmp/nix' /nix/store/95cwv4q54dc6giaqv6q6p4r02ia2km35-blender-2.79
from 267 MiB to 12 MiB.
Probably fixes#1988.
In EvalState::checkSourcePath, the path is checked against the list of
allowed paths first and later it's checked again *after* resolving
symlinks.
The resolving of the symlinks is done via canonPath, which also strips
out "../" and "./". However after the canonicalisation the error message
pointing out that the path is not allowed prints the symlink target in
the error message.
Even if we'd suppress the message, symlink targets could still be leaked
if the symlink target doesn't exist (in this case the error is thrown in
canonPath).
So instead, we now do canonPath() without symlink resolving first before
even checking against the list of allowed paths and then later do the
symlink resolving and checking the allowed paths again.
The first call to canonPath() should get rid of all the "../" and "./",
so in theory the only way to leak a symlink is if the attacker is able to
put a symlink in one of the paths allowed by restricted evaluation mode.
For the latter I don't think this is part of the threat model, because
if the attacker can write to that path, the attack vector is even
larger.
Signed-off-by: aszlig <aszlig@nix.build>
This particular `shell` variable wasn't used, since a new one was
declared in the only side of the `if` branch that used a `shell`
variable.
It could realistically confuse developers into thinking it could use
`$SHELL` in some situations.
forceValue() was called after a value had been copied, effectively forcing only one of the copies and leaving the other copy unevaluated.
This resulted in the same lazy value being evaluated more than once (though the number of hits is not big).
Not ready for this yet, causes the prompt to disappear in nix repl
and more generally can overwrite non-progress-bar messages.
This reverts commit 44de71a396.
Before:
$ command time nix-prefetch-url https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.17.6.tar.xz
1.19user 1.02system 0:41.96elapsed 5%CPU (0avgtext+0avgdata 182720maxresident)k
After:
1.38user 1.05system 0:39.73elapsed 6%CPU (0avgtext+0avgdata 16204maxresident)k
Note however that addToStore() can still take a lot of memory
(e.g. RemoteStore::addToStore() is constant space, but
LocalStore::addToStore() isn't; that's fixed by
c94b4fc7ee
though).
Fixes#1400.
For example, this allows you to run nix-daemon as a non-privileged
user:
eelco$ NIX_STATE_DIR=~/my-nix/nix/var nix-daemon --store ~/my-nix/
The NIX_STATE_DIR is still needed because settings.nixDaemonSocketFile
is not derived from settings.storeUri (and we can't derive it from the
store's state directory because we don't want to open the store in the
parent process).
As proposed in #1634 the `nix search` command could use some
improvements. Initially 0413aeb35d added
some basic sorting behavior using `std::map`; a next step would be an
improvement of the output.
This patch includes the following changes:
* Use `$PAGER` for outputs with `RunPager` from `shared.hh`:
The same behavior is defined for `nix-env --query`, furthermore it
makes searching huge results way easier.
* Simplified result blocks:
The new output is heavily inspired by the output from `nox`, the first
line shows the attribute path and the derivation name
(`attribute path (derivation name)`) and the description in the second
line.
Slightly nicer behavior when updates are somewhat far apart
(during a long linking step, perhaps) ensuring things
don't appear unresponsive.
If we wait the maximum amount for the update,
don't bother waiting another 50ms (for rate-limiting purposes)
and just check if we should quit.
This also ensures we'll notice the request to quit within 1s
if quit is signalled but there is no update.
(I'm not sure if this happens or not)
I hate to make this such a large check but the lack of documentation means we really have no idea what's allowed. All of them reported so far have been within ".app/Contents" directories. That appears to be a safe starting point. However, I would not be surprised to also find more paths that are disallowed for instance in .framework or .bundle directories.
Fixes#2031, #2229
This makes 'nix copy' and 'nix path-info' work on .drv store
paths. Removing special treatment of .drv files seems the most
future-proof approach given the possible removal of .drv files in the
future.
Note that 'nix build' will still build (rather than substitute) .drv
paths due to the unfortunate overloading in Store::buildPaths().
EvalState contains a few counters (e.g. nrValues) that increase
quickly enough that they end up being interpreted as pointers by the
garbage collector. Moving it to the heap makes them invisible to the
garbage collector.
This reduces the max RSS doing 100 evaluations of
nixos.tests.firefox.x86_64-linux.drvPath from 455 MiB to 292 MiB.
Note: ideally, allocations would be much further up in the 64-bit
address space to reduce the odds of an integer being misinterpreted as
a pointer. Maybe we can use some linker magic to move the .bss segment
to a higher address.
This reduces the risk of object liveness misdetection. For example,
Glibc has an internal variable "mp_" that often points to a Boehm
object, keeping it alive unnecessarily. Since we don't store any
actual roots in global variables, we can just disable data segment
scanning.
With this, the max RSS doing 100 evaluations of
nixos.tests.firefox.x86_64-linux.drvPath went from 718 MiB to 455 MiB.
If a process disappears between the time /proc/[pid]/maps is opened and
the time it is read, the read() syscall will return ESRCH. This should be ignored.
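A sketch of the tolerant read (error handling simplified, names hypothetical):
```c++
#include <cerrno>
#include <unistd.h>

// Read from an already-opened /proc/<pid>/maps. If the process exited between
// open() and read(), the kernel reports ESRCH; treat that as "no data".
ssize_t readProcMaps(int fd, char * buf, size_t len)
{
    ssize_t n = read(fd, buf, len);
    if (n == -1 && errno == ESRCH)
        return 0;
    return n;
}
```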
In some versions/configurations libcurl doesn't handle timeouts
(especially DNS timeouts) in a way that wakes curl_multi_wait.
This doesn't appear to be a problem if using c-ares, FWIW.
This reduces memory consumption of
nix copy --from file://... --to ~/my-nix /nix/store/95cwv4q54dc6giaqv6q6p4r02ia2km35-blender-2.79
from 514 MiB to 18 MiB for an uncompressed binary cache, and from 192
MiB to 53 MiB for a bzipped binary cache. It may also be faster
because fetching can happen concurrently with decompression/writing.
Continuation of 48662d151b.
Issue https://github.com/NixOS/nix/issues/1681.
Allow global config settings to be defined in multiple Config
classes. For example, this means that libutil can have settings and
evaluator settings can be moved out of libstore. The Config classes
are registered in a new GlobalConfig class to which config files
etc. are applied.
Relevant to https://github.com/NixOS/nix/issues/2009 in that it
removes the need for ad hoc handling of useCaseHack, which was the
underlying cause of that issue.
Continuation of 97002b684c. This makes
the daemon use constant memory. For example, it reduces the daemon's
maximum RSS on
$ nix copy --from ~/my-nix --to daemon /nix/store/1n7x0yv8vq6zi90hfmian84vdhd04bgp-blender-2.79a
from 264 MiB to 7 MiB.
We now use a TunnelSource to prevent the connection from ending up in
an undefined state if an exception is thrown while the NAR is being
sent.
Issue https://github.com/NixOS/nix/issues/1681.
If the Env denotes a 'with', then values[0] may be an Expr* cast to a
Value*. For code that generically traverses Values/Envs, it's useful
to know this.
E.g. this makes
nix eval --restrict-eval -I /nix/store/foo '(builtins.readFile "/nix/store/foo/symlink/bla")'
(where /nix/store/foo/symlink is a symlink to another path in the
closure of /nix/store/foo) succeed.
This fixes a regression in Hydra compared to Nix 1.x (where there were
no restrictions at all on access to the Nix store).
The existing ordering linked `libutil` before `libstore`, which causes
link failures when building statically. This is due to `libstore` using
functions from `libutil`, and the fact that symbol resolution works
"forward" - i.e. if you pass `-lfoo -lbar -lbaz`, any symbols that
`libbar` uses from `libbaz` will be resolved, but symbols that `libbaz`
needs from `libfoo` will not, because `libfoo` has already been
processed by the time `libbaz` is reached.
All this to say: this commit reorders the libraries, which fixes the
link errors.
Don't bind-mount these to themselves,
mount them into the chroot directory.
Fixes pty issues when using sandbox on CentOS 7.4.
(build of perlPackages.IOTty fails before this change)
The common use case is to search for packages containing multiple words,
like a "git" "frontend". Having only one regular expression makes this
simple use case very complicated. Instead, search now accepts multiple
regular expressions, all of which need to match. For example,
nix search git 'gui|frontend'
returns a list of all git UIs.
Fixes
error: getting status of '/nix/store/j8p0vv89k1pf0cn7kmfsdcs7bshwga1i-firefox-52.7.2esr/share/icons/hicolor/48x48/apps/firefox.png': No such file or directory
https://github.com/NixOS/nix/issues/1934
Also improve error message on directory/non-directory collisions.
For instance, this reduced the memory consumption of
$ nix copy --from ssh://localhost --to ~/my-nix /nix/store/1n7x0yv8vq6zi90hfmian84vdhd04bgp-blender-2.79a
from 632 MiB to 16 MiB.
From exec(3):
> The list of arguments must be terminated by a null pointer, and, since these
> are variadic functions, this pointer must be cast (char *) NULL
E.g.
cannot build on 'ssh://mac1': cannot connect to 'mac1': bash: nix-store: command not found
cannot build on 'ssh://mac2': cannot connect to 'mac2': Host key verification failed.
cannot build on 'ssh://mac3': cannot connect to 'mac3': Received disconnect from 213... port 6001:2: Too many authentication failures
Authentication failed.
copyStorePath() now pipes the output of srcStore->narFromPath()
directly into dstStore->addToStore(). The sink used by the former is
converted into a source usable by the latter using
boost::coroutine2. This is based on [1].
This reduces the maximum resident size of
$ nix build --store ~/my-nix/ /nix/store/b0zlxla7dmy1iwc3g459rjznx59797xy-binutils-2.28.1 --substituters file:///tmp/binary-cache-xz/ --no-require-sigs
from 418592 KiB to 53416 KiB. (The previous commit also reduced the
runtime from ~4.2s to ~3.4s, not sure why.) A further improvement will
be to download files into a Sink.
[1] https://github.com/NixOS/nix/compare/master...Mathnerd314:dump-fix-coroutine#diff-dcbcac55a634031f9cc73707da6e4b18
Issue #1969.
It was holding on to a Value* (i.e. a std::shared_ptr<ValidPathInfo>*)
outside of the pathInfoCache lock, so the std::shared_ptr could be
destroyed between the release of the lock and the decrement of the
std::shared_ptr refcount. This can happen if more than
'path-info-cache-size' paths are added in the meantime, *or* if
clearPathInfoCache() is called. The hydra-queue-runner queue monitor
thread periodically calls the latter, so it is likely to trigger a crash.
Fixes https://github.com/NixOS/hydra/issues/542.
Doing so prevents emacs tags from working and makes the code extremely
confusing for a newbie.
In the prior state, if someone wants to find the definition of "ExprApp" for
example, a grep through the code reveals nothing. Since the definition could be
hiding in numerous ".h" files, it's really difficult to find. This personally
took me several hours to figure out.
This can be iterated on and currently leaves out settings we know we
want to forward, but it fixes#1713 and fixes#1935 and isn't
fundamentally broken like the status quo. Future changes are suggested
in a comment.
Flex's regexes have an annoying feature: the dot matches everything
except a newline. This causes problems for expressions like:
"${0}\
"
where the backslash-newline combination matches this rule instead of the
intended one mentioned in the comment:
<STRING>\$|\\|\$\\ {
/* This can only occur when we reach EOF, otherwise the above
(...|\$[^\{\"\\]|\\.|\$\\.)+ would have triggered.
This is technically invalid, but we leave the problem to the
parser who fails with exact location. */
return STR;
}
However, the parser actually accepts the resulting token sequence
('"' DOLLAR_CURLY 0 '}' STR '"'), which is a problem because the lexer
rule didn't assign anything to yylval. Ultimately this leads to a crash
when dereferencing a NULL pointer in ExprConcatStrings::bindVars().
The fix does change the syntax of the language in some corner cases
but I think it's only turning previously invalid (or crashing) syntax
to valid syntax. E.g.
"a\
b"
and
''a''\
b''
were previously syntax errors but now both result in "a\nb".
Found by afl-fuzz.
This allows building armv[67]l-linux derivations on compatible aarch64
machines. Failure to add the architecture may result from missing
hardware support, in which case we can't run 32-bit binaries and don't
need to restrict them with seccomp anyway.
This allows specifying additional systems that a machine is able to
build for. This may apply on some armv7-capable aarch64 processors, or
on systems using qemu-user with binfmt-misc to support transparent
execution of foreign-arch programs.
This removes the previous hard-coded assumptions about which systems are
ABI-compatible with which other systems, and instead relies on the user
to specify any additional platforms that they have ensured compatibility
for and wish to build for locally.
NixOS should probably add i686-linux on x86_64-linux systems for this
setting by default.
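As a rough sketch (the setting name `extra-platforms` is assumed here; it is not spelled out above), such a machine could declare in nix.conf:
extra-platforms = armv6l-linux armv7l-linux
and an x86_64-linux machine could similarly add:
extra-platforms = i686-linux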
Otherwise, running e.g.
nix-instantiate --eval -E --strict 'builtins.replaceStrings [""] ["X"] "abc"'
would just hang in an infinite loop.
Found by afl-fuzz.
First attempt of this was reverted in e2d71bd186 because it caused
another infinite loop, which is fixed now and a test added.
This is important since this is given as an example.
Other patterns containing "empty search string" will still
be handled differently on different platforms ("asdf|")
but that's less of an issue.
The overhead of sandbox builds is a problem on NixOS (since building a
NixOS configuration involves a lot of small derivations) but not for
typical non-NixOS use cases. So outside of NixOS we can enable it.
Issue #179.
The assertion is broken because there is no one-to-one mapping from
length of a base64 string to the length of the output.
E.g.
"1q69lz7Empb06nzfkj651413n9icx0njmyr3xzq1j9q=" results in a 32-byte output.
"1q69lz7Empb06nzfkj651413n9icx0njmyr3xzq1j9qy" results in a 33-byte output.
To reproduce, evaluate:
builtins.derivationStrict {
name = "0";
builder = "0";
system = "0";
outputHashAlgo = "sha256";
outputHash = "1q69lz7Empb06nzfkj651413n9icx0njmyr3xzq1j9qy";
}
Found by afl-fuzz.
Otherwise, running e.g.
nix-instantiate --eval -E --strict 'builtins.replaceStrings [""] ["X"] "abc"'
would just hang in an infinite loop.
Found by afl-fuzz.
Instead of having lexicographicOrder() create a temporary sorted array
of Attr*:s and copying attr names from that, copy the attr names
first and then sort that.
Previously, this would fail at startup for non-NixOS installs:
nix-env --help
The fix for this is to just use "nixManDir" as the value for MANPATH
when spawning "man".
To test this, I’m using the following:
$ nix-build release.nix -A build
$ MANPATH= ./result/bin/nix-env --help
Fixes#1627
nix-store --export, nix-store --dump, and nix dump-path would previously
fail silently if writing the data out failed, because
a) FdSink::write ignored exceptions, and
b) the commands relied on FdSink's destructor, which ignores
exceptions, to flush the data out.
This could cause rather opaque issues with installing nixos, because
nix-store --export would happily proceed even if it couldn't write its
data out (e.g. if nix-store --import on the other side of the pipe
failed).
This commit adds tests that expose these issues in the nix-store
commands, and fixes them for all three.
This was caused by derivations with 'allowSubstitutes = false'. Such
derivations will be built locally. However, if there is another
SubstitutionGoal that has the output of the first derivation in its
closure, then the path will be simultaneously built and substituted.
There was a check to catch this situation (via pathIsLockedByMe()),
but it no longer worked reliably because substitutions are now done in
another thread. (Thus the comment 'It can't happen between here and
the lockPaths() call below because we're not allowing multi-threading'
was no longer valid.)
The fix is to handle the path already being locked in both
SubstitutionGoal and DerivationGoal.
All ANSI sequences except color setting are now filtered out. In
particular, terminal resets (such as from NixOS VM tests) are filtered
out.
Also, fix the completely broken tab character handling.
builtins.path allows specifying the name of a path (which makes paths
with store-illegal names now addable), allows adding paths with flat
instead of recursive hashes, allows specifying a filter (so is a
generalization of filterSource), and allows specifying an expected
hash (enabling safe path adding in pure mode).
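A minimal sketch of how this might be invoked (all attribute values are illustrative, not taken from the change itself):
builtins.path {
  path = ./data;
  # Pick the store path name explicitly; this is what makes paths
  # whose base name is illegal in the store addable.
  name = "my-data";
  # Optional filter, as with builtins.filterSource.
  filter = path: type: baseNameOf path != ".git";
  # A flat (non-recursive) hash and an expected hash can likewise be
  # requested via the 'recursive' and 'sha256' attributes.
}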
Especially in the case of Hydra, the overhead of single-threaded
encoding is noticeable: e.g. most of the time spent in "Sending
inputs"/"Receiving outputs" is due to compression, while the actual
upload to the binary cache seems to be negligible.
This is needed by nixos-install, which uses the Nix store on the
installation CD as a substituter. We don't want to disable signature
checking entirely because substitutes from cache.nixos.org should
still be checked. So now we can pass "local?trusted=1" to mark only the
Nix store in /nix as not requiring signatures.
Fixes#1819.
Instead, if a fixed-output derivation produces an output with an
incorrect hash, we now unconditionally move the outputs to the path
corresponding to the actual hash and register it as valid. Thus,
after correcting the hash in the Nix expression (e.g. in a fetchurl
call), the fixed-output derivation doesn't have to be built again.
It would still be good to have a command for reporting the actual hash
of a fixed-output derivation (instead of throwing an error), but
"nix-build --hash" didn't do that.
Following discussion with Shea and Graham. It's a big enough change
from the last release. Also, from a semver perspective, 2.0 makes more
sense because we did remove some interfaces (like nix-pull/nix-push).
Some servers, such as Artifactory, allow uploading with PUT and BASIC
auth. This allows 'nix copy' to upload binaries to those servers.
Worked on together with @adelbertc
In this mode, the following restrictions apply:
* The builtins currentTime, currentSystem and storePath throw an
error.
* $NIX_PATH and -I are ignored.
* fetchGit and fetchMercurial require a revision hash.
* fetchurl and fetchTarball require a sha256 attribute.
* No file system access is allowed outside of the paths returned by
fetch{Git,Mercurial,url,Tarball}. Thus 'nix build -f ./foo.nix' is
not allowed.
Thus, the evaluation result is completely reproducible from the
command line arguments. E.g.
nix build --pure-eval '(
let
nix = fetchGit { url = https://github.com/NixOS/nixpkgs.git; rev = "9c927de4b179a6dd210dd88d34bda8af4b575680"; };
nixpkgs = fetchGit { url = https://github.com/NixOS/nixpkgs.git; ref = "release-17.09"; rev = "66b4de79e3841530e6d9c6baf98702aa1f7124e4"; };
in (import (nix + "/release.nix") { inherit nix nixpkgs; }).build.x86_64-linux
)'
The goal is to enable completely reproducible and traceable
evaluation. For example, a NixOS configuration could be fully
described by a single Git commit hash. 'nixos-rebuild' would do
something like
nix build --pure-eval '(
(import (fetchGit { url = file:///my-nixos-config; rev = "..."; })).system
)'
where the Git repository /my-nixos-config would use further fetchGit
calls or Git externals to fetch Nixpkgs and whatever other
dependencies it has. Either way, the commit hash would uniquely
identify the NixOS configuration and allow it to be reproduced.
* Look for both 'brotli' and 'bro' as external commands,
since upstream has renamed it in newer versions.
If neither is found, the current runtime behavior
is preserved: try to find 'bro' on PATH.
* Limit amount handed to BrotliEncoderCompressStream
to ensure interrupts are processed in a timely manner.
Testing shows negligible performance impact.
(Other compression sinks don't seem to require this)
E.g.
$ time nix cat-store --store https://cache.nixos.org?local-nar-cache=/tmp/nars \
/nix/store/b0w2hafndl09h64fhb86kw6bmhbmnpm1-blender-2.79/share/icons/hicolor/scalable/apps/blender.svg > /dev/null
real 0m4.139s
$ time nix cat-store --store https://cache.nixos.org?local-nar-cache=/tmp/nars \
/nix/store/b0w2hafndl09h64fhb86kw6bmhbmnpm1-blender-2.79/share/icons/hicolor/scalable/apps/blender.svg > /dev/null
real 0m0.024s
(Before, the second call took ~0.220s.)
This will use a NAR listing in
/tmp/nars/b0w2hafndl09h64fhb86kw6bmhbmnpm1.ls containing all metadata,
including the offsets of regular files inside the NAR. Thus, we don't
need to read the entire NAR. (We do read the entire listing, but
that's generally pretty small. We could use a SQLite DB by borrowing
some more code from nixos-channel-scripts/file-cache.hh.)
This is primarily useful when Hydra is serving files from an S3 binary
cache, in particular when you have giant NARs. E.g. we had some 12 GiB
NARs, so accessing individual files was pretty slow.
propagated-user-env-packages files in nixpkgs aren't all terminated by
newlines, as buildenv expected. Now it does not require a terminating
newline; note that this introduces a behaviour change: propagated user
env packages may now be spread across multiple lines. However, nix
1.11.x still expects them to be on a single line so this shouldn't be
used in nixpkgs for now.
The storeUri variable in the build-remote hook is declared near the
start of the main function, and a bunch of lines later the same
variable gets checked via hasPrefix(), but it gets assigned *after*
that check, when the most suitable machine for the build has been
chosen.
So I guess this was just a typo in d16fd24973
and what we really want is to either check the prefix *after* assigning
storeUri or use bestMachine->storeUri directly.
I chose the latter, because the former could introduce even more
regressions if the try block where the variable gets assigned terminates
early.
Nevertheless, the reason why the log output didn't work is because
hasPrefix() checked for "ssh://" in front of storeUri, but if the
storeUri isn't set correctly (or at all), we don't get the log file
descriptor set up properly, leading to no log output.
I've adjusted the remote-builds test to include a regression test for
this, so that we can make sure we get a build output when using remote
builds.
In addition to that I've tested this with two of my build farms and the
build logs are emitted correctly again.
Signed-off-by: aszlig <aszlig@nix.build>
The name had become a misnomer since it's not only for substitution
from binary caches, but when adding/copying any
(non-content-addressed) path to a store.
This allows specifying the AWS configuration profile to use. E.g.
nix copy --from s3://my-cache?profile=aws-dev-account /nix/store/cf3isrlqavvd5w7rpky1fa8j9lcnlggm-...
As far as we're concerned, not being able to access a file just means
the file is missing. Plus, AWS explicitly goes out of its way to
return a 403 if the file is missing and the requester doesn't have
permission to list the bucket.
Also getting rid of an old hack that Eelco said was only relevant
to an older AWS SDK.
For example, you can write
src = fetchgit ./.;
and if ./. refers to an unclean working tree, that tree will be copied
to the Nix store. This removes the need for "cleanSource".
This will allow bind and connect to 127.0.0.1, which can reduce purity/
security (if you're running a vulnerable service on localhost) but is
also needed for a ton of test suites, so I'm leaving it turned off by
default but allowing certain derivations to turn it on as needed.
It also allows DNS resolution of arbitrary hostnames but I haven't found
a way to avoid that. In principle I'd just want to allow resolving
localhost but that doesn't seem to be possible.
I don't think this belongs under `build-use-sandbox = relaxed` because we
want it on Hydra and I don't think it's the end of the world.
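A hedged sketch of what opting in could look like for a single derivation; the attribute name `__darwinAllowLocalNetworking` is an assumption on my part, not given above:
with import <nixpkgs> {};
runCommand "localhost-tests" { __darwinAllowLocalNetworking = true; } ''
  # run a test suite here that needs to bind/connect to 127.0.0.1
  touch $out
''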
Previously, the symlink size was determined with stat() and its value
read with readlink(). This could technically result in garbage if the
symlink changed between the two calls. It also gets around the broken
stat() implementation in our network filesystem (which returns size + 1,
giving a byte of garbage).
The computation of urlHash didn't take the name into account, so
subsequent fetchurl calls with the same URL but a different name would
resolve to the same cached store path.
The "name" attribute defaults to "source", which we should use for all
similar functions (e.g. fetchTarball and in Hydra) to ensure that we
get a consistent store path regardless of how the tree is fetched.
"source" is not necessarily a correct label, but using an empty name
is problematic: you get an ugly store path ending in a dash, and it's
impossible to have a fixed-output derivation that produces that path
because ".drv" is not a valid store name.
Fixes#904.
You can now include files via the "builders" option, using the syntax
"@<filename>". Having only one option makes it easier to override
builders completely.
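A hedged sketch of what this can look like in nix.conf (the machine entries are purely illustrative):
builders = @/etc/nix/machines ; ssh://mac x86_64-darwin
where /etc/nix/machines contains one machine per line, e.g.
ssh://builder@mac x86_64-darwin /home/builder/.ssh/id_rsa 4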
For backward compatibility, the default is "@/etc/nix/machines", or
"@<filename>" for each file name in NIX_REMOTE_SYSTEMS.
This makes it slightly more manageable to see at a glance what in a
build's sandbox profile is unique to the build and what is standard. Also
a first step to factoring more of our Darwin logic into scheme functions
that will allow us a bit more flexibility. And of course less of that
nasty codegen in C++! 😀
This speeds up commands like "nix cat-store". For example:
$ time nix cat-store --store https://cache.nixos.org?local-nar-cache=/tmp/nar-cache /nix/store/i60yncmq6w9dyv37zd2k454g0fkl3arl-systemd-234/etc/udev/udev.conf
real 0m4.336s
$ time nix cat-store --store https://cache.nixos.org?local-nar-cache=/tmp/nar-cache /nix/store/i60yncmq6w9dyv37zd2k454g0fkl3arl-systemd-234/etc/udev/udev.conf
real 0m0.045s
The primary motivation is to allow hydra-server to serve files from S3
binary caches. Previously Hydra had a hack to do "nix-store -r
<path>", but that fetches the entire closure so is prohibitively
expensive.
There is no garbage collection of the NAR cache yet. Also, the entire
NAR is read when accessing a single member file. We could generate the
NAR listing to provide random access.
Note: the NAR cache is indexed by the store path hash, not the content
hash, so NAR caches should not be shared between binary caches, unless
you're sure that all your builds are binary-reproducible.
Probably as a result of a bad merge in
4b8f1b0ec0, we had both a
BinaryCacheStoreAccessor and a
RemoteFSAccessor. BinaryCacheStore::getFSAccessor() returned the
latter, but BinaryCacheStore::addToStore() checked for the
former. This probably caused hydra-queue-runner to download paths that
it just uploaded.
This check spuriously fails for e.g. git@github.com:NixOS/nixpkgs.git,
and even for ssh://git@github.com/NixOS/nixpkgs.git, and is made
redundant by the checks git itself will do when fetching the repo. We
instead pass a -- before passing the URI to git to avoid injection.
I needed this to test ACL/xattr removal in
canonicalisePathMetaData(). Might also be useful if you need to build
old Nixpkgs that doesn't have the required patches to remove
setuid/setgid creation.
The worker threads could exit prematurely if they finished processing
all items while the main thread was still adding items. In particular,
this caused hanging nix-store --serve processes in the build farm.
Also, process items from the main thread.
It was getting too much like whac-a-mole listing all the retriable error
conditions, so we now retry by default and list the cases where retrying
is almost certainly hopeless.
I find the error message 'nix-env --set-flag priority NUMBER PKGNAME'
not as helpful as it could be:
- doesn't share the current priorities
- doesn't say that the command must be run on the already installed
PKGNAME (which is confusing the first time)
- the doc needs careful reading:
"If there are multiple derivations matching a name in args that have the same name (e.g., gcc-3.3.6 and gcc-4.1.1), then the derivation with the highest priority is used."
if one stops reading there, he is screwed. Salvation comes with reading "A derivation can define a priority by declaring the meta.priority attribute. This attribute should be a number, with a higher value denoting a lower priority. The default priority is 0."
To sum it up, lower number wins. I tried to convey this idea in the
message too.
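For reference, a minimal sketch of the package side, following the documentation quoted above (the value 5 is just an example): in a derivation's meta attributes,
meta.priority = 5;  # higher value = lower priority; the default is 0
so the lower number wins when two packages collide.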
This is a hack to make hydra-queue-runner free its temproots
periodically, thereby ensuring that garbage collection of the
corresponding paths is not blocked until the queue runner is
restarted.
It would be better if temproots could be released earlier than at
process exit. I started working on a RAII object returned by functions
like addToStore() that releases temproots. However, this would be a
pretty massive change so I gave up on it for now.
For example,
$ nix-store -q --roots /nix/store/7phd2sav7068nivgvmj2vpm3v47fd27l-patchelf-0.8pre845_0315148
{temp:1}
denotes that the path is only being kept alive by a temporary root
(i.e. /nix/var/nix/temproots/). Similarly,
$ nix-store --gc --print-roots
...
{memory:9} -> /nix/store/094gpjn9f15ip17wzxhma4r51nvsj17p-curl-7.53.1
shows that curl is being used by some process.
This command shows why a package has another package in its runtime
closure. For example, to see why VLC has libdrm.dev in its closure:
$ nix why-depends nixpkgs.vlc nixpkgs.libdrm.dev
/nix/store/g901z9pcj0n5yy5n6ykxk3qm4ina1d6z-vlc-2.2.5.1:
lib/libvlccore.so.8.0.0: …nfig:/nix/store/405lmx6jl8lp0ad1vrr6j498chrqhz8g-libdrm-2.4.75-d…
/nix/store/s3nm7kd8hlcg0facn2q1ff2n7wrwdi2l-mesa-noglu-17.0.7-dev:
nix-support/propagated-native-build-inputs: …-dev /nix/store/405lmx6jl8lp0ad1vrr6j498chrqhz8g-libdrm-2.4.75-d…
Thus, VLC's lib/libvlccore.so.8.0.0 as well as mesa-noglu's
nix-support/propagated-native-build-inputs cause the dependency.
In particular, process() won't return as long as there are active
items. This prevents work item lambdas from referring to stack frames
that no longer exist.
Since we may use a dedicated file descriptor in the future, this
allows us to change it. So builders can do
if [[ -n $NIX_LOG_FD ]]; then
echo "@nix { message... }" >&$NIX_LOG_FD
fi
Nix can now automatically run the garbage collector during builds or
while adding paths to the store. The option "min-free = <bytes>"
specifies that Nix should run the garbage collector whenever free
space in the Nix store drops below <bytes>. It will then delete
garbage until "max-free" bytes are available.
Garbage collection during builds is asynchronous; running builds are
not paused and new builds are not blocked. However, there also is a
synchronous GC run prior to the first build/substitution.
Currently, no old GC roots are deleted (as in "nix-collect-garbage
-d").
Since file locks are per-process rather than per-file-descriptor, the
garbage collector would always acquire a lock on its own temproots
file and conclude that it's stale.
Without this, substitute info is fetched sequentially, which is
superslow. In the old UI (e.g. nix-build), we call printMissing(),
which calls queryMissing(), thereby preheating the binary cache
cache. But the new UI doesn't do that.
In particular, drop the "build-" and "gc-" prefixes which are
pointless. So now you can say
nix build --no-sandbox
instead of
nix build --no-build-use-sandbox
This is useful for testing commands in isolation.
For example,
$ nix run nixpkgs.geeqie -i -k DISPLAY -k XAUTHORITY -c geeqie
runs geeqie in an empty environment, except for $DISPLAY and
$XAUTHORITY.
E.g.
nix run nixpkgs.hello -c hello --greeting Hallo
Note that unlike "nix-shell --command", no quoting of arguments is
necessary.
"-c" (short for "--command") cannot be combined with "--" because they
both consume all remaining arguments. But since installables shouldn't
start with a dash, this is unlikely to cause problems.
Running "nix run" with a diverted store, e.g.
$ nix run --store local?root=/tmp/nix nixpkgs.hello
stopped working when Nix became multithreaded, because
unshare(CLONE_NEWUSER) doesn't work in multithreaded processes. The
obvious solution is to terminate all other threads first, but 1) there
is no way to terminate Boehm GC marker threads; and 2) it appears that
the kernel has a race where unshare(CLONE_NEWUSER) will still fail for
some indeterminate amount of time after joining other threads.
So instead, "nix run" will now exec() a single-threaded helper ("nix
__run_in_chroot") that performs the actual unshare()/chroot()/exec().
Now that we use threads in lots of places, it's possible for
TunnelLogger::log() to be called asynchronously from other threads
than the main loop. So we need to ensure that STDERR_NEXT messages
don't clobber other messages.
Besides being unused, this function has a bug that it will incorrectly
decode the path component Ubuntu\04016.04.2\040LTS\040amd64 as
"Ubuntu.04.2 LTS amd64" instead of "Ubuntu 16.04.2 LTS amd64".
This adds an argument "rev" specifying the Git commit hash. The
existing argument "rev" is renamed to "ref". The default value for
"ref" is "master". When specifying a hash, it's necessary to specify a
ref since we're not cloning the entire repository but only fetching a
specific ref.
Example usage:
builtins.fetchgit {
url = https://github.com/NixOS/nixpkgs.git;
ref = "release-16.03";
rev = "c1c0484041ab6f9c6858c8ade80a8477c9ae4442";
};
The package list is now cached in
~/.cache/nix/package-search.json. This gives a substantial speedup to
"nix search" queries. For example (on an SSD):
First run: (no package search cache, cold page cache)
$ time nix search blender
Attribute name: nixpkgs.blender
Package name: blender
Version: 2.78c
Description: 3D Creation/Animation/Publishing System
real 0m6.516s
Second run: (package search cache populated)
$ time nix search blender
Attribute name: nixpkgs.blender
Package name: blender
Version: 2.78c
Description: 3D Creation/Animation/Publishing System
real 0m0.143s
In particular, don't use base-64, which we don't support. (We do have
base-32 redirects for hysterical reasons.)
Also, add a test for the hashed mirror feature.
This doesn't work in read-only mode, ensuring that operations like
nix path-info --store https://cache.nixos.org -S nixpkgs.hello
(asking for the closure size of nixpkgs.hello in cache.nixos.org) work
when nixpkgs.hello doesn't exist in the local store.
On second thought this was annoying. E.g. "nix log nixpkgs.hello" would
build/download Hello first, even though the log can be fetched
directly from the binary cache.
May need to revisit this.
This allows builds to call setuid binaries. This was previously
possible until we started using seccomp. Turns out that seccomp by
default disallows processes from acquiring new privileges. Generally,
any use of setuid binaries (except those created by the builder
itself) is by definition impure, but some people were relying on this
ability for certain tests.
Example:
$ nix build '(with import <nixpkgs> {}; runCommand "foo" {} "/run/wrappers/bin/ping -c 1 8.8.8.8; exit 1")' --no-allow-new-privileges
builder for ‘/nix/store/j0nd8kv85hd6r4kxgnwzvr0k65ykf6fv-foo.drv’ failed with exit code 1; last 2 log lines:
cannot raise the capability into the Ambient set
: Operation not permitted
$ nix build '(with import <nixpkgs> {}; runCommand "foo" {} "/run/wrappers/bin/ping -c 1 8.8.8.8; exit 1")' --allow-new-privileges
builder for ‘/nix/store/j0nd8kv85hd6r4kxgnwzvr0k65ykf6fv-foo.drv’ failed with exit code 1; last 6 log lines:
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=46 time=15.2 ms
Fixes#1429.
Functions like copyClosure() had 3 bool arguments, which creates a
severe risk of mixing up arguments.
Also, implement copyClosure() using copyPaths().
Cygwin sqlite3 is patched to call SetDllDirectory("/usr/bin") on init, which
affects the current process and is inherited by child processes. It causes
DLLs to be loaded from /usr/bin/ before $PATH, which breaks all sorts of
things. Typical failures would be header/lib version mismatches (e.g.
openssl when running checkPhase on openssh). We'll just set it back to the
default value.
Note that this is a problem with the cygwin version of sqlite3 (currently
3.18.0). nixpkgs doesn't have the problematic patch.
There's no reason to restrict this to Error exceptions. This shouldn't
matter to #1407 since the repl doesn't catch non-Error exceptions
anyway, but you never know...
Recently aws-sdk-cpp quietly switched to using S3 virtual host URIs
(https://github.com/aws/aws-sdk-cpp/commit/69d9c53882), i.e. it sends
requests to http://<bucket>.<region>.s3.amazonaws.com rather than
http://<region>.s3.amazonaws.com/<bucket>. However this interacts
badly with curl connection reuse. For example, if we do the following:
1) Check whether a bucket exists using GetBucketLocation.
2) If it doesn't, create it using CreateBucket.
3) Do operations on the bucket.
then 3) will fail for a minute or so with a NoSuchBucket exception,
presumably because the server being hit is a fallback for cases when
buckets don't exist.
Disabling the use of virtual hosts ensures that 3) succeeds
immediately. (I don't know what S3's consistency guarantees are for
bucket creation, but in practice buckets appear to be available
immediately.)
Newer versions of aws-sdk-cpp call CalculateDelayBeforeNextRetry()
even for non-retriable errors (like NoSuchKey) which causes log spam in
hydra-queue-runner.
Sandboxes cannot be nested, so if Nix's build runs inside a sandbox,
it cannot use a sandbox itself. I don't see a clean way to detect
whether we're in a sandbox, so use a test-specific hack.
https://github.com/NixOS/nix/issues/1413
In particular, UF_IMMUTABLE (uchg) needs to be cleared to allow the
path to be garbage-collected or optimised.
See https://github.com/NixOS/nixpkgs/issues/25819.
Thus, instead of ‘--option <name> <value>’, you can write ‘--<name>
<value>’. So
--option http-connections 100
becomes
--http-connections 100
Apart from brevity, the difference is that it's not an error to set a
non-existent option via --option, but unrecognized arguments are
fatal.
Boolean options have special treatment: they're mapped to the
argument-less flags ‘--<name>’ and ‘--no-<name>’. E.g.
--option auto-optimise-store false
becomes
--no-auto-optimise-store
Even with "build-use-sandbox = false", we now use sandboxing with a
permissive profile that allows everything except the creation of
setuid/setgid binaries.
Also, add rules to allow fixed-output derivations to access the
network.
These rules are sufficient to build stdenvDarwin without any
__sandboxProfile magic.
The filename used was not unique and owned by the build user, so
builds could fail with
error: while setting up the build environment: cannot unlink ‘/nix/store/99i210ihnsjacajaw8r33fmgjvzpg6nr-bison-3.0.4.drv.sb’: Permission denied
runResolver() was barfing on directories like
/System/Library/Frameworks/Security.framework/Versions/Current/PlugIns. It
should probably do something sophisticated for frameworks, but let's
ignore them for now.
This fixes
error: getting attributes of path ‘Versions/Current/CoreFoundation’: No such file or directory
when /System/Library/Frameworks/CoreFoundation.framework/CoreFoundation is a symlink.
Also fixes a segfault when encountering a file that is not a Mach-O binary (such
as /dev/null, which is included in __impureHostDeps in Nixpkgs).
Possibly fixes#786.
Fixes
src/libstore/build.cc:2321:45: error: non-constant-expression cannot be narrowed from type 'int' to 'scmp_datum_t' (aka 'unsigned long') in initializer list [-Wc++11-narrowing]
EAs/ACLs are not part of the NAR canonicalisation. Worse, setting an
ACL allows a builder to create writable files in the Nix store. So get
rid of them.
Closes#185.
This prevents builders from setting the S_ISUID or S_ISGID bits,
preventing users from using a nixbld* user to create a setuid/setgid
binary to interfere with subsequent builds under the same nixbld* uid.
This is based on aszlig's seccomp code
(47f587700d).
Reported by Linus Heckemann.
Fixes
client# error: size mismatch importing path ‘/nix/store/ywf5fihjlxwijm6ygh6s0a353b5yvq4d-libidn2-0.16’; expected 0, got 120264
This is mostly an artifact of the NixOS VM test environment, where the
Nix database doesn't contain hashes/sizes.
http://hydra.nixos.org/build/53537471
And add a 116 KiB ash shell from busybox to the release build. This
helps to make sandbox builds work out of the box on non-NixOS systems
and with diverted stores.
This is useful when we're using a diverted store (e.g. "--store
local?root=/tmp/nix") in conjunction with a statically-linked sh from
the host store (e.g. "sandbox-paths =/bin/sh=/nix/store/.../bin/busybox").