A common pitfall when using e.g. `builtins.fetchGit` is the `fatal: not
a tree object` error when trying to fetch a revision of a Git repository
that isn't on the `master` branch while no `ref` is specified.
To make the problem clear, I added a simple check for whether the
revision in question exists; if it doesn't, a more meaningful error
message is displayed:
```
nix-repl> builtins.fetchGit { url = "https://github.com/owner/myrepo"; rev = "<commit not on master>"; }
error: --- Error -------------------------------------------------------------------- nix
Cannot find Git revision 'bf1cc5c648e6aed7360448a3745bb2fe4fbbf0e9' in ref 'master' of repository 'https://gitlab.com/Ma27/nvim.nix'! Please make sure that the rev exists on the ref you've specified or add allRefs = true; to fetchGit.
```
Closes #2431
We embrace virtual inheritance the rest of the way, and get rid of the
`assert(false)` 0-param constructors.
We also list config base classes first, so the constructor order is
always:
1. all the configs
2. all the stores
each in the same order.
Move clearValue inside Value
mkInt instead of setInt
mkBool instead of setBool
mkString instead of setString
mkPath instead of setPath
mkNull instead of setNull
mkAttrs instead of setAttrs
mkList instead of setList*
mkThunk instead of setThunk
mkApp instead of setApp
mkLambda instead of setLambda
mkBlackhole instead of setBlackhole
mkPrimOp instead of setPrimOp
mkPrimOpApp instead of setPrimOpApp
mkExternal instead of setExternal
mkFloat instead of setFloat
Add a note that the static mk* functions should be removed eventually.
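A rough sketch of the new calling convention (illustrative only; the exact signatures in the Nix headers differ in detail):
```
// Hedged sketch: each mk* member sets both the type tag and the payload on
// the Value it is called on, replacing the old set* helpers and the
// free-standing static mk* functions mentioned above.
Value vInt, vStr, vNull;
vInt.mkInt(42);          // was: mkInt(vInt, 42) / vInt.setInt(...)
vStr.mkString("hello");  // was: mkString(vStr, "hello")
vNull.mkNull();          // was: mkNull(vNull)
```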
PRs #4370 and #4348 had a bad interaction: the second broke the first
one in a non-trivial way.
The issue was that, since #4348, detecting whether a derivation output
is already built requires some logic that was specific to the
`LocalStore`.
It turns out, though, that most of this logic can be lifted to the generic
`Store`, which is what this commit does.
This ignore was here because `queryPartialDrvOutputMap` was used both
1. as a cache to avoid having to re-read the derivation (when gc-ing, for
example), and
2. as the source of truth for ca realisations.
Use-case 2 required it to work even when the derivation
wasn't there anymore (see https://github.com/NixOS/nix/issues/4138).
However, that use-case is now handled by `queryRealisation`, meaning
that we can safely error out if the derivation isn't there anymore.
`buildPaths` can be called even for stores where it's not defined if it's
bound to be a no-op.
The “no-op detection” mechanism was only detecting the case where `buildPaths`
was called on a set of (non-drv) paths that were already present in the store.
This commit extends this mechanism to also detect the case where `buildPaths`
is called on a set of derivation outputs which are already built in the store.
This only works with the ca-derivations flag. It would be possible to
extend this to also work without it, but that would add quite a bit of
complexity, and it's not used without it anyway.
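A minimal sketch of what the extended no-op detection has to check, assuming the usual Store query methods; the name `allAlreadyBuilt` and the exact structure are illustrative, not the actual implementation:
```
// Illustrative sketch only: buildPaths would be a no-op iff every requested
// (non-derivation) path is already valid and every requested derivation
// output is already built.
bool allAlreadyBuilt(Store & store, const std::vector<StorePathWithOutputs> & paths)
{
    for (auto & p : paths) {
        if (!p.path.isDerivation()) {
            if (!store.isValidPath(p.path)) return false;
            continue;
        }
        auto outputs = store.queryPartialDerivationOutputMap(p.path);
        for (auto & outputName : p.outputs) {
            auto i = outputs.find(outputName);
            // Unknown or missing output path => something still has to be built.
            if (i == outputs.end() || !i->second || !store.isValidPath(*i->second))
                return false;
        }
    }
    return true;
}
```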
Extend `FSAccessor::readFile` to allow not checking that the path is a
valid one, and rewrite `readInvalidDerivation` using this extended
`readFile`.
Several places in the code use `readInvalidDerivation`, either because
they need to read a derivation that has been written in the store but
not registered yet, or more generally to prevent a deadlock because
`readDerivation` tries to lock the state, so it can't be called from a
place where the lock is already held.
However, `readInvalidDerivation` implicitly assumes that the store is a
`LocalFSStore`, which isn't always the case.
The concrete motivation for this is that it's required for `nix copy
--from someBinaryCache` to work, which is tremendously useful for the
tests.
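A rough sketch of the reworked helper, assuming the extended `readFile` takes a boolean that disables the validity check (the parameter name and details are assumptions, not the exact code):
```
// Illustrative sketch: read the .drv through the store's FS accessor without
// requiring it to be a registered valid path, then parse it as usual.
Derivation Store::readInvalidDerivation(const StorePath & drvPath)
{
    return parseDerivation(
        *this,
        getFSAccessor()->readFile(printStorePath(drvPath), /* requireValidPath */ false),
        Derivation::nameFromPath(drvPath));
}
```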
Because of an overly eager refactoring, `addTextToStore` used to throw an
error because the input wasn't a valid nar.
Partially revert that refactoring to wrap the text into a proper nar
(using `dumpString`) to make this method work again.
Rather than storing the derivation outputs as `drvPath!outputName` internally,
store them as `drvHashModulo!outputName` (or `outputHash!outputName` for
fixed-output derivations).
This makes the storage slightly more opaque, but enables an earlier
cutoff in cases where a fixed-output dependency changes (but keeps the
same output hash) − same as what we already do for input-addressed
derivations.
Add a new table for tracking the derivation output mappings.
We used to hijack the `DerivationOutputs` table for that, but (despite its
name), it isn't a really good fit:
- Its entries depend on the drv being a valid path, making it play badly with
garbage collection and preventing us from copying a drv output without copying
the whole drv closure too;
- It doesn't guarantee that the output path exists;
By using a different table, we can experiment with a different schema better
suited for tracking the output mappings of CA derivations.
(incidentally, this also fixes #4138)
For each known realisation, store:
- its output
- its output path
This comes with a set of needed changes:
- New `realisations` module declaring the types needed for describing
these mappings
- New `Store::registerDrvOutput` method registering all the needed information
about a derivation output (also replaces `LocalStore::linkDeriverToPath`)
- New `Store::queryRealisation` method to retrieve the information for a
derivation output
This introduces some redundancy on the remote-store side between
`wopQueryDerivationOutputMap` and `wopQueryRealisation`.
However we might need to keep both (regardless of backwards compat)
because we sometimes need information for all the outputs of a
derivation (where `wopQueryDerivationOutputMap` is handy), but not all
stores can implement it − because listing all the outputs of a
derivation isn't really possible for binary caches where the server
doesn't allow listing a directory.
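A hedged sketch of the data the new `realisations` module needs to carry (field names are assumptions close to, but not necessarily identical with, the real declarations):
```
// Illustrative sketch: a DrvOutput identifies "output <name> of the derivation
// with this hash", and a Realisation records which store path that output
// resolved to. This pair is what registerDrvOutput stores and what
// queryRealisation looks up.
struct DrvOutput {
    Hash drvHash;            // drv hash modulo / output hash, as described above
    std::string outputName;  // e.g. "out"
};

struct Realisation {
    DrvOutput id;
    StorePath outPath;       // the realised output path
};
```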
In `nixStable` (2.3.7 to be precise) it's possible to connect to stores
using an IPv6 address:
nix ping-store --store ssh://root@2001:db8::1
This is also useful for `nixops(1)` where you could specify an IPv6
address in `deployment.targetHost`.
However, this behavior is broken on `nixUnstable` and fails with the
following error:
$ nix store ping --store ssh://root@2001:db8::1
don't know how to open Nix store 'ssh://root@2001:db8::1'
This happened because `openStore` from `libstore` uses the `parseURL`
function from `libfetchers` which expects a valid URL as defined in
RFC2732. However, this is unsupported by `ssh(1)`:
$ nix store ping --store 'ssh://root@[2001:db8::1]'
cannot connect to 'root@[2001:db8::1]'
This patch now allows both ways of specifying a store, `root@2001:db8::1` and
`root@[2001:db8::1]`; the latter is useful for passing query
parameters to the remote store.
In order to achieve this, the following changes were made:
* The URL regex from `url-parts.hh` now allows an IPv6 address in the
form `2001:db8::1` and also `[2001:db8::1]`.
* In `libstore`, a new function named `extractConnStr` ensures that a
proper URL is passed to e.g. `ssh(1)` (a rough sketch follows this list):
* If a URL looks like either `[2001:db8::1]` or `root@[2001:db8::1]`,
the brackets will be removed using a regex. No additional validation
is done here as only strings parsed by `parseURL` are expected.
* In any other case, the string will be left untouched.
* The rules above only apply to `LegacySSHStore` and `SSHStore` (a.k.a.
`ssh://` and `ssh-ng://`).
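A rough, self-contained sketch of the bracket-stripping step (the real `extractConnStr` differs in detail and also handles splitting off query parameters):
```
#include <regex>
#include <string>

// Strip the RFC 2732 brackets from "user@[2001:db8::1]" or "[2001:db8::1]",
// since ssh(1) only accepts the bare address form. Anything else is returned
// untouched; no additional validation is done here.
std::string stripIPv6Brackets(const std::string & connStr)
{
    static const std::regex bracketed(R"(^(.*@)?\[([0-9a-fA-F:]+)\]$)");
    std::smatch match;
    if (std::regex_match(connStr, match, bracketed))
        return match[1].str() + match[2].str();
    return connStr;
}
```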
Unresolved questions:
* I'm not really sure whether we want to allow both variants of IPv6
addresses in the URL parser. However it should be noted that both seem
to be possible according to RFC2732:
> This document includes an update to the generic syntax for Uniform
> Resource Identifiers defined in RFC 2396 [URL]. It defines a syntax
> for IPv6 addresses and allows the use of "[" and "]" within a URI
> explicitly for this reserved purpose.
* Currently, it's not possible to specify a port number after the
hostname; however, it seems this is not really supported by the URL
parser anyway. Hence, this is probably out of scope here.
The `DerivationGoal` has a variable storing the “final” derivation
output paths that is used (amongst other things) to fill the environment
for the post build hook. However, this variable wasn't set when the
build hook is used, causing a crash when both hooks are used together.
Fix this by setting this variable (from the information in the db) after a run
of the post build hook.
This reverts commit 1b1e076033.
Using `queryPartialDerivationOutputMap` assumes that the derivation
exists locally, which isn't the case for remote builders.
Since 0744f7f, it is now useful to have cache.nixos.org in substituters
even if /nix/store is not the Nix store dir. This can always be
overridden via configuration, though.
When running universal binaries like /bin/bash, Darwin XNU will choose
which architecture of the binary to use based on "binary preferences".
This change sets that to the current platform for aarch64 and x86_64
builds. In addition, it now uses posix_spawn instead of the usual
execve. Note that this does not prevent the other architecture from
being run, just advises which to use.
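A hedged sketch of the Darwin-specific call involved; error handling and the surrounding spawn logic are omitted, and the helper name is made up for illustration:
```
#include <spawn.h>
#include <mach/machine.h>

// Illustrative only: advise XNU which slice of a universal binary to run.
// An aarch64 build of Nix would prefer CPU_TYPE_ARM64; an x86_64 build,
// CPU_TYPE_X86_64.
void setBinaryPreference(posix_spawnattr_t & attrs)
{
    posix_spawnattr_init(&attrs);
    cpu_type_t pref = CPU_TYPE_ARM64;
    size_t set = 0;
    posix_spawnattr_setbinpref_np(&attrs, 1, &pref, &set);
    // ...then posix_spawn(..., &attrs, ...) instead of execve(...)
}
```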
Unfortunately, posix_spawnattr_setbinpref_np does not appear to be
inherited by child processes in x86_64 Rosetta 2 translations, meaning
that this will not always work as expected.
For example:
{
  arm = derivation {
    name = "test";
    system = "aarch64-darwin";
    builder = "/bin/bash";
    args = [ "-e" (builtins.toFile "test" ''
      set -x
      /usr/sbin/sysctl sysctl.proc_translated
      /usr/sbin/sysctl sysctl.proc_native
      [ "$(/usr/bin/arch)" = arm64 ]
      /usr/bin/touch $out
    '') ];
  };
  rosetta = derivation {
    name = "test";
    system = "x86_64-darwin";
    builder = "/bin/bash";
    args = [ "-e" (builtins.toFile "test" ''
      set -x
      /usr/sbin/sysctl sysctl.proc_translated
      /usr/sbin/sysctl sysctl.proc_native
      [ "$(/usr/bin/arch)" = i386 ]
      echo It works!
      /usr/bin/touch $out
    '') ];
  };
}
`arm' fails on x86_64-compiled Nix, but `arm' and `rosetta' succeed on
aarch64-compiled Nix. I suspect there is a way to fix this since:
$ /usr/bin/arch -arch x86_64 /bin/bash \
-c '/usr/bin/arch -arch arm64e /bin/bash -c /usr/bin/arch'
arm64
seems to work correctly. We may need to wait for Apple to update
system_cmds in opensource.apple.com to find out how though.
macOS systems with ARM64 can utilize a translation layer at
/Library/Apple/usr/libexec/oah to run x86_64 binaries. This change
makes Nix recognize that and add it to "extra-platforms". Note that there
are two cases here since Nix could be built for either x86_64 or
aarch64. In either case, we can switch to the other architecture.
Unfortunately there is not a good way to prevent aarch64 binaries from
being run in x86_64 contexts or vice versa - programs can always
execute programs for the other architecture.
If the build closure contains some CA derivations, then we can't know
ahead of time that we won't build anything, as early cutoff might come in
at a later stage.
Using fallocate() to preallocate file space does more harm than good:
- breaks compression on btrfs
- has been called "not the right thing to do" by xfs developers
(because the delayed allocation that most filesystems implement leads to smarter
allocation than what the filesystem has to do if we fallocate files upfront)
Without setting HGPLAIN, the user's environment leaks into
hg invocations, which means that the output may not be in the
expected format.
HGPLAIN is the Mercurial-recommended solution for this in that
it's intended for use by scripts and programs which are looking
to parse Mercurial's output in a consistent manner.
This fixes a bug I encountered where `nix-store -qR` will deadlock when
the `--include-outputs` flag is passed and `max-connections=1`.
The deadlock occurs because `RemoteStore::queryDerivationOutputs` takes
the only connection from the connection pool and uses it to check the
daemon version. If the version is new enough, it calls
`Store::queryDerivationOutputs`, which eventually calls
`RemoteStore::queryPartialDerivationOutputMap`, where we take another
connection from the connection pool to check the version again. Because
we still haven't released the connection from the caller, this waits for
a connection to be available, causing a deadlock.
This diff solves the issue by using `getProtocol` to check the protocol
version in the caller `RemoteStore::queryDerivationOutputs`, which
immediately frees the connection back to the pool before returning the
protocol version. That way we've already freed the connection by the
time we call `RemoteStore::queryPartialDerivationOutputMap`.
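A sketch of the shape of the fix (the protocol threshold and the fallback helper are illustrative, not the literal code):
```
StorePathSet RemoteStore::queryDerivationOutputs(const StorePath & path)
{
    // getProtocol() briefly takes a pool connection, reads the daemon version
    // and returns the connection before we go any further, so the later
    // queryPartialDerivationOutputMap() call can take a connection itself.
    if (GET_PROTOCOL_MINOR(getProtocol()) >= 0x16)  // threshold is illustrative
        return Store::queryDerivationOutputs(path);
    return queryDerivationOutputsLegacy(path);      // hypothetical old-protocol path
}
```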
Fixes:
$ nix build --store /tmp/nix /home/eelco/Dev/patchelf#hydraJobs.build.x86_64-linux
warning: Git tree '/home/eelco/Dev/patchelf' is dirty
error: --- RestrictedPathError ------------------------------------------------------------------------------------------- nix
access to path '/tmp/nix/nix/store/xmkvfmffk7xfnazykb5kx999aika8an4-source/flake.nix' is forbidden in restricted mode
(use '--show-trace' to show detailed location information)
Until now, it was not possible to substitute missing paths from e.g.
`https://cache.nixos.org` on a remote server when building on it using
the new `ssh-ng` protocol.
This is because every store implementation except legacy `ssh://`
ignores the substitution flag passed to `Store::queryValidPaths` while
the `legacy-ssh-store` substitutes the remote store using
`cmdQueryValidPaths` when the remote store is opened with `nix-store
--serve`.
This patch slightly modifies the daemon protocol to allow passing an
integer value suggesting whether to substitute missing paths during
`wopQueryValidPaths`. To implement this on the daemon side, the
substitution logic from `nix-store --serve` has been moved into a
protected method named `Store::substitutePaths`, which is currently
called from `LocalStore::queryValidPaths` and `Store::queryValidPaths`
if `maybeSubstitute` is `true`.
Fixes #2770
Crucially, this introduces BoehmGCStackAllocator, but it also
adds a bunch of wiring to avoid making libutil depend on bdw-gc.
Part of the solution for #4178 and #4200.
This removes the extra-substituters and extra-sandbox-paths settings
and instead makes every array setting extensible by setting
"extra-<name> = <value>" in the configuration file or passing
"--<name> <value>" on the command line.
This makes it even clearer which of the two hashes was specified in the
Nix files. Some may think that "wanted" and "got" are obvious, but:
"got" could mean "got in the Nix file" and "wanted" could mean "want to see in the Nix file".
This makes it possible to have per-project configuration in flake.nix,
e.g. binary caches and other stuff:
nixConfig.bash-prompt-suffix = "[1;35mngi# [0m";
nixConfig.substituters = [ "https://cache.ngi0.nixos.org/" ];
This is primarily useful if you're hacking simultaneously on a package
and one of its dependencies. E.g. if you're hacking on Hydra and Nix,
you would start a dev shell for Nix, and then a dev shell for Hydra as
follows:
$ nix develop \
--redirect .#hydraJobs.build.x86_64-linux.nix ~/Dev/nix/outputs/out \
--redirect .#hydraJobs.build.x86_64-linux.nix.dev ~/Dev/nix/outputs/dev
(This assumes hydraJobs.build.x86_64-linux has a passthru.nix
attribute. You can also use a store path.)
This causes all references in the environment to those store paths to
be rewritten to ~/Dev/nix/outputs/{out,dev}. Note: unfortunately, you
may need to set LD_LIBRARY_PATH=~/Dev/nix/outputs/out/lib because
Nixpkgs' ld-wrapper only adds -rpath entries for -L flags that point
to the Nix store.
Fix #3975: currently, if Ctrl-C is pressed during a phase, the interactive subshell
is not exited. Removing --rcfile when --phase is present makes bash
non-interactive.
Make nix output completions in the form `completion\tdescription`.
This can't be used by bash (AFAIK), but other shells like zsh or fish
can display it alongside the completion choices.
Observed on CentOS 7 when user namespaces are disabled:
DerivationGoal::startBuilder() throws an exception, ~DerivationGoal()
waits for the child process to exit, but the child process hangs
forever in drainFD(userNamespaceSync.readSide.get()) in
DerivationGoal::runChild(). Not sure why the SIGKILL doesn't get
through.
Issue #4092.
When running `nix build -L` it can be fairly hard to read the output if
the build program intentionally renders whitespace on the left. A
typical example is `g++` displaying compilation errors.
With this patch, the whitespace on the left is retained to make the log
more readable:
```
foo> no configure script, doing nothing
foo> building
foo> foobar.cc: In function 'int main()':
foo> foobar.cc:5:5: error: 'wrong_func' was not declared in this scope
foo> 5 | wrong_func(1);
foo> | ^~~~~~~~~~
error: --- Error ------------------------------------------------------------------------------------- nix
error: --- Error --- nix-daemon
builder for '/nix/store/i1q76cw6cyh91raaqg5p5isd1l2x6rx2-foo-1.0.drv' failed with exit code 1
```
The registry targets generally follow a URL formatting schema with
support for a query parameter of "?dir=subpath" to specify a sub-path
location below the URL root.
Alternatively, an absolute path can be specified. This specification
mode accepts the query parameter but ignores/drops it. It would
probably be better to either (a) disallow the query parameter for the
path form, or (b) recognize the query parameter and add to the path.
This patch implements (b) for consistency, and to make it easier for
tooling that might switch between a remote git reference and a local
path reference.
See also issue #4050.
This change provides support for using access tokens with other
instances of GitHub and GitLab beyond just github.com and
gitlab.com (especially company-specific or foundation-specific
instances).
This change also provides the ability to specify the type of access
token being used, where different types may have different handling,
based on the forge type.
std::optional had redundant checks for whether it had a value.
An object is emplaced either way, so it can be dereferenced
without repeating a value check.
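A minimal standalone illustration of the pattern (nothing Nix-specific):
```
#include <optional>

// After emplace() the optional is guaranteed to hold a value, so it can be
// dereferenced directly instead of repeating a has_value() check.
int cachedAnswer()
{
    static std::optional<int> cached;
    if (!cached)
        cached.emplace(6 * 7);  // computed once
    return *cached;             // safe: emplaced just above if it was empty
}
```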
After 0ed946aa61, the max-jobs setting (-j/--max-jobs)
stopped working.
The reason was that nrLocalBuilds (which is compared to maxBuildJobs to figure
out whether the limit is reached) is not yet incremented when tryBuild
is started; so the solution is to move the check to tryLocalBuild.
Closes https://github.com/nixos/nix/issues/3763
We don't need it yet, but we could/should in the future, and it's a
cost-free change since we already have the reference. I like it.
Co-authored-by: Robert Hensing <roberth@users.noreply.github.com>
Rather than showing an integer as the default, instead show the boolean
referenced in the description.
The nix.conf.5 manpage used to show "default: 0", which is unnecessarily
opaque and confusing (doesn't 0 mean false, even though the default is
true?); now it properly shows that the default is true.
pkgs.fetchurl supports an executable argument, which is especially nice
when downloading a large executable. This patch adds the same option to
nix-prefetch-url.
I have tested this to work on the simple case of prefetching a little
executable:
1. nix-prefetch-url --executable https://my/little/script
2. Paste the hash into a pkgs.fetchurl-based package, script-pkg.nix
3. Delete the output from the store to avoid any misidentified artifacts
4. Realise the package script-pkg.nix
5. Run the executable
I repeated the above while using --name, as well.
I suspect --executable would have no meaningful effect if combined with
--unpack, but I have not tried it.
Since 108debef6f we allow a
`url`-attribute for the `github`-fetcher to fetch tarballs from
self-hosted `gitlab`/`github` instances.
However it's not used when defining e.g. a flake-input
  foobar = {
    type = "github";
    url = "gitlab.myserver";
    /* ... */
  }
and breaks with an evaluation-error:
error: --- Error --------------------------------------nix
unsupported input attribute 'url'
(use '--show-trace' to show detailed location information)
This patch allows flake-inputs to be fetched from self-hosted instances
as well.
`nix flake info` calls the github 'commits' API, which requires
authorization when the repository is private. Currently this request
fails with a 404.
This commit adds an authorization header when calling the 'commits' API.
It also changes the way that the 'tarball' API authenticates, moving the
user's token from a query parameter into the Authorization header.
The query parameter method was recently deprecated and will be disallowed
in November 2020. Using it today triggers a warning email.
It is apparently required for using `toJSONObject()`, which we do inside
the header file (because it's in a template).
This was accidentally working when building Nix itself (presumably because
`config.hh` was always included after `nlohmann/json.hpp`) but caused a
(pretty dirty) build failure in the perl bindings package.
Rework the `Store` hierarchy so that there's now one hierarchy for the
store configs and one for the implementations (where each implementation
extends the corresponding config). So a class hierarchy like
```
StoreConfig-------->Store
| |
v v
SubStoreConfig----->SubStore
| |
v v
SubSubStoreConfig-->SubSubStore
```
(with virtual inheritance to prevent the dreaded diamond (DDD) problem).
The advantage of this architecture is that we can now introspect the configuration of a store without having to instantiate the store itself.
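A compact sketch of the resulting shape (names and contents are illustrative):
```
// Configs form one chain, stores form another, and each store virtually
// inherits its config so the shared bases of the diamond aren't duplicated.
struct StoreConfig                 { /* common settings */ };
struct Store : virtual StoreConfig { /* effectful operations */ };

struct SubStoreConfig : virtual StoreConfig  { /* extra settings */ };
struct SubStore       : virtual SubStoreConfig,
                        virtual Store        { /* implementation */ };

// A SubStoreConfig can be instantiated (and its parameters inspected) without
// ever constructing a SubStore.
```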
Add a new `init()` method to the `Store` class that is supposed to
handle all the effectful initialisation needed to set up the store.
The constructor should remain side-effect free and just initialize the
C++ data structures.
The goal behind that is that we can create “dummy” instances of each
store to query static properties about it (the parameters it accepts,
for example).
Directly register the store classes rather than a function to build an
instance of them.
This makes it possible to introspect static members of the class or to
choose different ways of instantiating them.
Add a fallback path in `queryPartialDerivationOutputMap` for daemons
that don't support it.
Also upstreams a couple of methods from `SSHStore` to `RemoteStore`, as this
is needed to handle the fallback path.
Otherwise the result of the printing can't be parsed back correctly by
Nix (because the unescaped `${` will be parsed as the beginning of an
antiquotation).
Fix #3989
When deploying a Hydra instance with current Nix master, most builds
would not run because of errors like this:
queue monitor: error: --- Error --- hydra-queue-runner
error: --- UsageError --- nix-daemon
not a content address because it is not in the form '<prefix>:<rest>': /nix/store/...-somedrv
The last error message is from parseContentAddress, which expects a
colon-separated string, however what we got here is a store path.
Looking at the worker protocol, the following message sent to the Nix
daemon caused the error above:
0x1E -> wopQuerySubstitutablePathInfos
0x01 -> Number of paths
0x16 -> Length of string
"/nix/store/...-somedrv"
0x00 -> Length of string
""
Looking at writeStorePathCAMap, the store path is indeed the first field
that's transmitted. However, readStorePathCAMap expects it to be the
*second* field *on my machine*, since expression evaluation order is a
classic form of unspecified behaviour[1] in C++.
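A tiny self-contained example of the same class of bug, nothing Nix-specific about it:
```
#include <iostream>
#include <sstream>
#include <string>
#include <utility>

std::string next(std::istringstream & in) { std::string s; in >> s; return s; }

int main()
{
    std::istringstream in("storePath contentAddress");
    // The two next(in) calls are indeterminately sequenced: the compiler may
    // evaluate either one first, so `fields.first` can hold either token --
    // exactly the trap readStorePathCAMap fell into.
    auto fields = std::make_pair(next(in), next(in));
    std::cout << fields.first << "\n";
}
```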
This has been introduced in https://github.com/NixOS/nix/pull/3689,
specifically in commit 66a62b3189.
[1]: https://en.wikipedia.org/wiki/Unspecified_behavior#Order_of_evaluation_of_subexpressions
Signed-off-by: aszlig <aszlig@nix.build>
The change in 626200713b didn't account
for when the number of auto arguments is bigger than the number of
formal arguments. This causes the following:
$ nix-instantiate --eval -E '{ ... }@args: args.foo' --argstr foo foo
nix-instantiate: src/libexpr/attr-set.hh:55: void nix::Bindings::push_back(const nix::Attr&): Assertion `size_ < capacity_' failed.
Aborted (core dumped)
If we resolve using the known path of a derivation whose output we
didn't have, we previously blew up. Now we just fail gracefully,
returning a map in which all outputs are unknown.
This means profiles outside of /nix/var/nix/profiles don't get
garbage-collected. It also means we don't need to scan
/nix/var/nix/profiles for GC roots anymore, except for compatibility
with previously existing generations.
This was broken in 50f13b06fb. Once
again it turns out that putting a bool in a std::variant is a bad
idea, since pointers get silently cast to them...
The command line options --arg and --argstr, which are used by a bunch of
CLI commands to pass arguments to top-level functions in files, go
through the same code path as auto-calling top-level functions with
their default arguments. This, however, only passed the arguments
that were *explicitly* mentioned in the formals of the function; in the
case of an as-pattern with an ellipsis (e.g. args @ { ... }) extra passed
arguments would get omitted. This fixes that to instead pass *all*
specified auto args when the function has an ellipsis.
Fixes #598
When the log.showSignature git setting is enabled, the output of
"git log" contains signature verification information in addition to the
timestamp GitInputScheme::fetch wants:
$ git log -1 --format=%ct
gpg: Signature made Sat 07 Sep 2019 02:02:03 PM PDT
gpg: using RSA key 0123456789ABCDEF0123456789ABCDEF01234567
gpg: issuer "user@example.com"
gpg: Good signature from "User <user@example.com>" [ultimate] 1567890123
1567890123
For folks that had log.showSignature set, this caused all nix operations
on flakes to fail:
$ nix build
error: stoull
Evidently this was never implemented because Nix switched to using
`buildDerivation` exclusively before `build-remote.pl` was rewritten.
The `nix-copy-ssh` test (already) tests this.
Include a long comment explaining the policy. Perhaps this can be moved
to the manual at some point in the future.
Also bump the daemon protocol minor version, so clients can tell whether
`wopBuildDerivation` supports trustless CA derivation building. I hope
to take advantage of this in a follow-up PR to support trustless remote
building with the minimal sending of derivation closures.
This seems more correct. It also means one can specify the features a
store should support with --store and remote-store=..., which is useful.
I use this to clean up the build remotes test.
Before, processConnection wanted to know a user name and user id, and
`nix-daemon --stdio`, when it isn't proxying to an underlying daemon,
would just assume "root" and 0. But `nix-daemon --stdio` (no proxying)
shouldn't make guesses about who holds the other end of its standard
streams.
Now processConnection takes an "auth hook", so `nix-daemon` can provide
the appropriate policy and daemon.cc doesn't need to know or care what
it is.
Thanks @regnat for catching one of them. The other follows for many of
the same reasons. I'm fine fixing others on a need-to-fix basis,
provided there are no regressions.
When having a message like `waiting for a machine to build X` and
building with `nix build -L`, the log-prefix is always colored yellow[1]
on a narrow terminal, as everything (including the ANSI color reset) is
stripped away.
To work around that problem, this patch explicitly adds an `ANSI_NORMAL`
to the end of the line.
[1] https://imgur.com/a/FjtJOk3
Fixes #3872.
This is a bit hacky. Ideally we would automatically re-evaluate the
failed attribute iff we need to print the error message (so in
commands like 'nix search' we wouldn't re-evaluate because we're
suppressing errors).
Some users have their own hashed-mirrors setup that is used to mirror
things in addition to what’s available on tarballs.nixos.org. Although
this should be feasible to do with a binary cache, it’s not always
easy, since you have to remember what "name" each of the tarballs has.
Continuing to support hashed-mirrors is cheap, so it’s best to leave
support in Nix. Note that NIX_HASHED_MIRRORS is also supported in
Nixpkgs through fetchurl.nix.
Note that this excludes tarballs.nixos.org from the default, as in
#3689. All of these are available on cache.nixos.org.
Occasionally, `nix-build --check` is fairly helpful and I'd like to be
able to use this feature for flakes that need to be built with `nix
build` as well.
This fixes an error found in builtins.path that looks like:
store path mismatch in (possibly filtered) path added from '/private/tmp/nix-shell.CyXViH/nix-test/filter-source/filterin'
when no hash is specified
This adds a ‘nix export’ command which hooks into nix-bundle. It can
be used in a similar way as nix-bundle, with the benefit of hooking
into the new “app” functionality. For instance,
$ nix export nixpkgs#jq
$ ./jq --help
jq - commandline JSON processor [version 1.6]
...
$ scp jq machine-without-nix:
$ ssh machine-without-nix ./jq
jq - commandline JSON processor [version 1.6]
...
Note that nix-bundle currently requires Linux to run. Other exporters
might not have that requirement.
“exporters” are meant to be reusable, so that other repos can
implement their own bundling.
Fixes #3705
match_continuous limits the search to the current start position,
instead of searching the entire file.
On libc++, this improves performance dramatically:
$ time /nix/store/70ai68dfm6xbzwn26j5n4li9di52ylia-nix-3.0pre20200728_c159f48/bin/nix print-dev-env >/dev/null
/nix/store/70ai68dfm6xbzwn26j5n4li9di52ylia-nix-3.0pre20200728_c159f48/bin/ni 2.39s user 0.19s system 64% cpu 4.032 total
$ time /nix/store/cwjfxxlp83zln4mfyy1d2dbsx7f6s962-nix-3.0pre20200728_dirty/bin/nix print-dev-env >/dev/null
/nix/store/cwjfxxlp83zln4mfyy1d2dbsx7f6s962-nix-3.0pre20200728_dirty/bin/nix 0.09s user 0.05s system 65% cpu 0.204 total
Fixes #3874
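For reference, a minimal standalone demonstration of what the flag changes:
```
#include <iostream>
#include <regex>
#include <string>

int main()
{
    std::string s = "xxxfoo";
    std::regex re("foo");
    std::smatch m;
    // Default: regex_search scans forward through the whole input.
    std::cout << std::regex_search(s, m, re) << "\n";  // 1
    // match_continuous: the match must start at the current position, so no
    // scanning happens -- this is what keeps print-dev-env fast.
    std::cout << std::regex_search(s, m, re, std::regex_constants::match_continuous)
              << "\n";                                 // 0
}
```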
Since 6185d25e52, this was very
latency-bound since it required a round-trip for every 32 KiB. So for
example copying a 514 MiB closure over a virtual ethernet device with
an artificial delay of just 1 ms took 343s. Now it takes 2.7s.
Fixes #3372.
If a repo is dirty, it used to return a `rev` object with an "empty"
sha1 (0000000000000000000000000000000000000000). Please note that this
only applies to `builtins.fetchGit` and *not* to `builtins.fetchTree{
type = "git"; }`.
The new interface we offer provides a way of getting all the
DerivationOutputs with the storePaths directly, based on the observation
that it's the most common use case.
istream->tellg() returns -1, so we can't get the number of bytes
written.
Fixes 'uploaded 's3://nix-cache/nar/00819r9lp5kajr6baxfw5dhhc0cx8ndxaz43qmd2f0gn1hk1ynlp.nar.xz' (-1 bytes) in 11620 ms' messages.
The original idea was to implement a git-fetcher in Nix's core that
supports content hashes[1]. In #3549[2] it has been suggested to
actually use `fetchTree` for this since it's a fairly generic wrapper
over the new fetcher-API[3] and already supports content-hashes.
This patch implements a new git-fetcher based on `fetchTree` by
incorporating the following changes:
* Removed the original `fetchGit`-implementation and replaced it with an
alias on the `fetchTree` implementation.
* Ensured that the `git`-fetcher from `libfetchers` always computes a
content-hash and returns an "empty" revision on dirty trees (the
latter one is needed to retain backwards-compatibility).
* The hash-mismatch error in the fetcher API exits with code 102, as
usually happens whenever a hash mismatch is detected by Nix.
* Removed the `flakes`-feature-flag: I didn't see a reason why this API
is so tightly coupled to the flakes-API and at least `fetchGit` should
remain usable without any feature-flags.
* It's only possible to specify a `narHash` for a `git`-tree if either a
`ref` or a `rev` is given[4].
* It's now possible to specify a URL without a protocol. If it's missing,
`file://` is automatically added, as was the case in the original
`fetchGit`-implementation.
[1] https://github.com/NixOS/nix/pull/3216
[2] https://github.com/NixOS/nix/pull/3549#issuecomment-625194383
[3] https://github.com/NixOS/nix/pull/3459
[4] https://github.com/NixOS/nix/pull/3216#issuecomment-553956703
The new error format is pretty nice from a UX point of view; however,
it's fairly hard to parse the output, e.g. for editor plugins such as
vim-ale[1] that use `nix-instantiate --parse` to determine syntax errors in
Nix expression files.
This patch extends the `internal-json` logger by adding the fields
`line`, `column` and `file` to easily locate an error in a file and the
field `raw_msg`, which contains the error message itself without
code lines and additional helpers.
An exemplary output may look like this:
```
[nix-shell]$ ./inst/bin/nix-instantiate ~/test.nix --log-format minimal
{"action":"msg","column":1,"file":"/home/ma27/test.nix","level":0,"line":4,"raw_msg":"syntax error, unexpected IF, expecting $end","msg":"<full error-msg with code-lines etc>"}
```
[1] https://github.com/dense-analysis/ale
This assumption is broken by CA derivations. Making a PR now to do the
breaking daemon change as soon as possible (if it is already too late,
we can bump the protocol instead).
I think this better captures the intent of what's going on: we either
have an opaque store path, or a drv path with some outputs.
Having this structure will also help us support CA derivations: we'll
have to allow the output paths to be optional, so the structure we gain
now makes up for the structure we lose then.
It's a tiny function which is:
- hardly worth abstracting over, and also only used once
- doesn't work once we get CA drvs
I rewrote the one call site to be forward compatible with CA
derivations, and also potentially more performant: instead of reading in
the derivation it can just consult the SQLite DB in the common case.
to each Store implementation. The generic regStore implementation will
only be for the ambiguous shorthands, like "" and "auto".
This could also get us closer to simplifying the daemon command.
Currently resizing of the terminal doesn't play nicely with
nix edit when using kakoune as the editor, as it relies on the
SIGWINCH signal which is trapped by nix. How this is not a problem
with e.g. vim is beyond me.
Virtually all other exec* calls follow a call to
restoreSignals(). This commit adds this behavior to nix edit
as well.
I got it to just become `LocalStore::addToStoreFromDump`, cleanly taking
a store and then doing nothing too fancy with it.
`LocalStore::addToStore(...Path...)` is now just a simple wrapper with a
bare-bones sinkToSource of the right dump command.
That is, the commands 'nix path-info nixpkgs#hello' and 'nix path-info
/nix/store/00ls0qi49qkqpqblmvz5s1ajl3gc63lr-hello-2.10.drv' now do the
same thing (i.e. build the derivation and operate on the output store
path, rather than the .drv path).
This reverts commit a2c27022e9. See
addToStoreSlow(): we don't need to handle this case efficiently
anymore. In fact, we can almost remove the method/hashAlgo arguments
since the non-recursive and/or non-SHA256 cases are almost never used anymore.
We were calculating the nar hash incorrectly when the file ingestion method
was flat. I don't think there's anything we can do in that case but dump
the file again, so that's what I do.
As an optimization, we could again reuse the original dump for just the
recursive and non-SHA256 case, but I'd rather do that after this fix, and
after my other PRs which deduplicate this code.
Until now, the `gitlab`-fetcher determined the source's rev by checking
the latest commit of the given `ref` using the
`/repository/branches`-API.
This breaks, however, when trying to fetch a GitLab repo by its tag:
```
$ nix repl
nix-repl> builtins.fetchTree gitlab:Ma27/nvim.nix/0.2.0
error: --- Error ------------------------------------------------------------------------------------- nix
unable to download 'https://gitlab.com/api/v4/projects/Ma27%2Fnvim.nix/repository/branches/0.2.0': HTTP error 404 ('')
```
When using the `/commits?ref_name`-endpoint[1] you can pass any kind of
valid ref to the `gitlab`-fetcher.
Please note that this fetches only the first 20 commits on a ref;
unfortunately, there's currently no endpoint which retrieves just the
latest commit for any kind of `ref`.
[1] https://docs.gitlab.com/ee/api/commits.html#list-repository-commits
We've added the variant to `DerivationOutput` to support them, but made
`DerivationOutput::path` partial to avoid actually implementing them.
With this change, we can all collaborate on "just" removing
`DerivationOutput::path` calls to implement CA derivations.
The `m` acts as the termination symbol when declaring graphics. Because
of this, the `;1m` doesn't have any effect and is directly printed to
the console:
```
$ nix repl
> builtins.fetchGit { /* ... */ }
{ outPath = "/nix/store/s0f0iz4a41cxx2h055lmh6p2d5k5bc6r-source"; rev = "e73e45b723a9a6eecb98bd5f3df395d9ab3633b6"; revCount = ;1m428; shortRev = "e73e45b"; submodules = ;1mfalse; }
```
Introduced by 6403508f5a.
This reduces memory consumption of
nix-instantiate \
-E 'with import <nixpkgs> {}; runCommand "foo" { src = ./blender; } "echo foo"' \
--option nar-buffer-size 10000
(where ./blender is a 1.1 GiB tree) from 1716 to 36 MiB, while still
ensuring that we don't do any write I/O for small source paths (up to
'nar-buffer-size' bytes). The downside is that large paths are now
always written to a temporary location in the store, even if they
produce an already valid store path. Thus, adding large paths might be
slower and run out of disk space. ¯\_(ツ)_/¯ Of course, you can always
restore the old behaviour by setting 'nar-buffer-size' to a very high
value.
This allows you to refer to an input from another flake. For example,
$ nix run --inputs-from /path/to/hydra nixpkgs#hello
runs 'hello' from the 'nixpkgs' inputs of the 'hydra' flake.
Fixes #3769.
'nix run' will try to run $out/bin/<name>, where <name> is the
derivation name (excluding the version). This often works well:
$ nix run nixpkgs#hello
Hello, world!
$ nix run nix -- --version
nix (Nix) 2.4pre20200626_adf2fbb
$ nix run patchelf -- --version
patchelf 0.11.20200623.e61654b
$ nix run nixpkgs#firefox -- --version
Mozilla Firefox 77.0.1
$ nix run nixpkgs#gimp -- --version
GNU Image Manipulation Program version 2.10.14
though not always:
$ nix run nixpkgs#git
error: unable to execute '/nix/store/kp7wp760l4gryq9s36x481b2x4rfklcy-git-2.25.4/bin/git-minimal': No such file or directory
Generalize `queryDerivationOutputNames` and `queryDerivationOutputs` by
adding a `queryDerivationOutputMap` that returns the map
`outputName=>outputPath`
(note that this is not equivalent to merging the results of
`queryDerivationOutputs` and `queryDerivationOutputNames` as sets don't
preserve the order, so we would end up with an incorrect mapping).
squash! Add a way to get all the outputs of a derivation with their label
Rename StorePathMap to OutputPathMap
Now that the derivation outputs are parsed up front, we can avoid a reparse.
Also, this just feels a bit better, as the `output*` env
vars are more of a `libnixexpr` interface than a `libnixstore` interface:
ultimately, it's the derivation outputs that decide whether the
derivation is fixed-output.
Yes, hashed mirrors might go away with #3689, but this bit of code would
be moved rather than deleted, so it's worth doing a cleanup anyway, I
think.
When we merge with master, the new lack of string types makes this case
impossible (after parsing). Later, when we actually implement
CA derivations, we'll change the types to allow that.
We have a larger problem: passing computed strings as the first
variable argument of many exception constructors is unsafe, because that
first variable argument is interpreted not as a plain string but as a format
string, and if it contains '%', boost::format will abort, since there are
no arguments to the format string.
In this particular instance '%' was used as part of an escape code in a
URL, which, when the download failed, caused Nix to abort while displaying the
`FileTransferError`.
This is a bit complex because we want to expose extra functionality the
wrapped class has. Perhaps there is some inheritance trickery to do this
nicer, but I don't know it, and this is the first thing we tried after a
series of attempts that did build.
This design is kind of like that of Rust's Writer, Reader, or Iter
adapters, which implement more traits based on what the inner type
implements.
This was introduced in fa125b9b28, and
then "reverted" in 1cf4801108, except that
revert left the struct around doing nothing useful.
We're removing it all the way now because we want to make a new
`TeeSink` complementing the already-existing `TeeSource`, which is
actually a completely different concept as far as the class hierarchy is
concerned.
I’m not 100% sure this is wanted since it kind of makes everything
have to know about ca even if they don’t really want to. But it also
makes things easier in dealing with looking up ca.
E.g.
$ nix run nixpkgs#hello
error: --- Error ---------- nix
flake 'flake:nixpkgs' does not provide attribute 'apps.x86_64-linux.hello' or 'hello'
instead of
$ nix run nixpkgs#hello
error: --- Error ---------- nix
flake 'flake:nixpkgs' does not provide attribute 'hello'
Not implementing anything here, just throwing an error if a derivation
sets `__contentAddressed = true` without
`--experimental-features content-addressed-paths`
(and also with it, as there's nothing implemented yet).
This further continues the dependency inversion. Also I just went
ahead and exposed `parseDerivation`: it seems like the more proper
building block, and not a bad thing to expose if we are trying to be
less wedded to drv files on disk anyway.
On nix-env -qa -f '<nixpkgs>', this reduces maximum RSS by 20970 KiB
and runtime by 0.8%. This is mostly because we're not parsing the hash
part as a hash anymore (just validating that it consists of base-32
characters).
Also, replace storePathToHash() by StorePath::hashPart().
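For illustration, checking membership in Nix's base-32 alphabet (which omits e, o, u and t) is all the cheap validation needs to do; this standalone helper is a sketch, not the actual code:
```
#include <string_view>

// A hash part is accepted if every character belongs to the Nix base-32
// alphabet; no decoding into a Hash object is required.
bool looksLikeNixBase32(std::string_view s)
{
    constexpr std::string_view alphabet = "0123456789abcdfghijklmnpqrsvwxyz";
    for (char c : s)
        if (alphabet.find(c) == std::string_view::npos)
            return false;
    return true;
}
```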
E.g. instead of
error: --- BuildError ----------------------------------------------- nix
builder for '/nix/store/03nk0a3n8h2948k4lqfgnnmym7knkcma-foo.drv' failed with exit code 1
error: --- Error ---------------------------------------------------- nix
build of '/nix/store/03nk0a3n8h2948k4lqfgnnmym7knkcma-foo.drv' failed
we now get
error: --- Error ---------------------------------------------------- nix
builder for '/nix/store/03nk0a3n8h2948k4lqfgnnmym7knkcma-foo.drv' failed with exit code 1
Originally, the test was only checking for different “real” storeDir.
That’s an easy case to handle, but the much harder one is if different
virtual store dirs are used. To do this, we need the SubstitutionGoal
to know about the ca, so it can recalculate the path to copy it over.
An important note here is that the store path passed to copyStorePath
needs to be one for srcStore - so that queryPathInfo works properly.
This also adds an error message when the store path from queryPathInfo
is different from the one we requested.
Substituters can substitute from one store dir to another with a
little bit of help. The store api just needs to have a CA so it can
recompute the store path based on the new store dir. We can only do
this for fixed output derivations with no references, though.
This function was used in only one place, where it could easily be
replaced by readDerivation() since it's not
performance-critical. (This function appears to have been modelled
after queryDerivationOutputs(), which exists only to make the garbage
collector faster.)
This fixes an issue where lockfile generation was not idempotent:
after updating a lockfile, a "follows" node would end up pointing to a
new copy of the node, rather than to the original node.
fetchTarball, fetchTree, and fetchGit all have *optional* hash attrs.
This means that we need to be careful with what we allow to avoid
accidentally making these defaults. When ‘hash = ""’ we assume the
empty hash is wanted.
follow up of https://github.com/NixOS/nix/pull/3544
This allows hash="" so that it can be used for debugging purposes. For
instance, this gives you an error message like:
warning: found empty hash, assuming you wanted 'sha256:0000000000000000000000000000000000000000000000000000'
hash mismatch in fixed-output derivation '/nix/store/asx6qw1r1xk6iak6y6jph4n58h4hdmbm-nix':
wanted: sha256:0000000000000000000000000000000000000000000000000000
got: sha256:0fpfhipl9v1mfzw2ffmxiyyzqwlkvww22bh9wcy4qrfslb4jm429
Needed so that we can include it as a logger in loggers.cc without
adding a dependency on nix.
This also requires moving names.hh to libutil to prevent a circular
dependency between libmain and libexpr
Make the printing of the build logs systematically go through the
logger, and replicate the behavior of `no-build-output` by having two
different loggers (one that prints the build logs and one that doesn't)
Add a new `--log-format` cli argument to change the format of the logs.
The possible values are
- raw (the default one for old-style commands)
- bar (the default one for new-style commands)
- bar-with-logs (equivalent to `--print-build-logs`)
- internal-json (the internal machine-readable json format)
The initial contents of the flake is specified by the
'templates.<name>' or 'defaultTemplate' output of another flake. E.g.
  outputs = { self }: {
    templates = {
      nixos-container = {
        path = ./nixos-container;
        description = "An example of a NixOS container";
      };
    };
  };
allows
$ nix flake init -t templates#nixos-container
Also add a command 'nix flake new', which is identical to 'nix flake
init' except that it initializes a specified directory rather than the
current directory.
bool coerces anything >0 to true, but in the future we may have other
file ingestion methods. This shows a better error message when the
“recursive” byte isn’t 1.
Instead, `Hash` uses `std::optional<HashType>`. In the future, we may
also make `Hash` itself require a known hash type, encouraging people to
use `std::optional<Hash>` instead.
Caches the tree in addition to lockedRef, and explicitly writes out the logic for different combinations of cached/uncached flakes and indirect/resolved/locked flakes. This eliminates unnecessary calls to lookupInFlakeCache, fetchTree, maybeLookupFlake, and flakeCache.push_back.
As `git fetch` may choose to interpret the refspec to its liking, ensure that we
only pass refs that begin with `refs/` as-is; otherwise, prepend them with
`refs/heads/`. Otherwise, branches named `heads/foo` (I know it's bad, but it's
allowed) would be fetched as `foo` instead of `heads/foo`.
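A small self-contained sketch of that normalization (the actual code in the git fetcher differs in detail):
```
#include <string>

// Fully qualified refs pass through unchanged; everything else is qualified
// with refs/heads/ so that a branch literally named "heads/foo" is fetched as
// refs/heads/heads/foo rather than as "foo".
std::string qualifyRef(const std::string & ref)
{
    if (ref.rfind("refs/", 0) == 0)
        return ref;
    return "refs/heads/" + ref;
}
```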
The previous regex was too strict and did not match what git was allowing. It
could lead to `fetchGit` not accepting valid branch names, even though they
exist in a repository (for example, branch names containing `/`, which are
pretty standard, like `release/1.0` branches).
The new regex defines what a branch name should **NOT** contain. It takes the
definitions from `refs.c` in https://github.com/git/git and the `git help
check-ref-format` page.
This change also introduces a test for ref name validity checking, which
compares the result from Nix with the result of `git check-ref-format --branch`.
The attributes previously stored in TreeInfo (narHash, revCount,
lastModified) are now stored in Input. This makes it less arbitrary
what attributes are stored where.
As a result, the lock file format has changed. An entry like
"info": {
"lastModified": 1585405475,
"narHash": "sha256-bESW0n4KgPmZ0luxvwJ+UyATrC6iIltVCsGdLiphVeE="
},
"locked": {
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "b88ff468e9850410070d4e0ccd68c7011f15b2be",
"type": "github"
},
is now stored as
"locked": {
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "b88ff468e9850410070d4e0ccd68c7011f15b2be",
"type": "github",
"lastModified": 1585405475,
"narHash": "sha256-bESW0n4KgPmZ0luxvwJ+UyATrC6iIltVCsGdLiphVeE="
},
The 'Input' class is now a dumb set of attributes. All the fetcher
implementations subclass InputScheme, not Input. This simplifies the
API.
Also, fix substitution of flake inputs. This was broken since lazy
flake fetching started using fetchTree internally.
The idea is that it's always more flexible to consume a `Source` than a
plain string, and it might even reduce memory consumption.
I also looked at `addToStoreFromDump` with its `// FIXME: remove?`, but
the work needed for that is far more up for interpretation, so I
punted for now.
This moves the actual parsing of configuration contents into applyConfig,
which applyConfigFile then calls. By changing this we can now
test the configuration file parsing without actually creating a file on
disk.
This makes 'nix flake' less cluttered and more consistent (it's only
subcommands that operate on a flake). Also, the registry is not
inherently flake-related (e.g. fetchTree could also use it to remap
inputs).
This completes flakerefs using the registry (e.g. 'nix<TAB>' => 'nix
nixpkgs') and flake output attributes by evaluating the flake
(e.g. 'dwarffs#nix<TAB>' => 'dwarffs#nixosModules').
InstallableValue has children InstallableFlake and InstallableAttrPath, but InstallableFlake was overriding toDerivations, and usage was changed so that InstallableFlake didn't need cmd. So these changes were made:
InstallableValue::toDerivations() -> InstallableAttrPath::toDerivations()
InstallableValue::cmd -> InstallableAttrPath::cmd
InstallableValue uses state instead of cmd
toBuildables() and toDerivations() were made abstract
- the result list will always be empty if --json is passed
- for scripts, an empty search result is not really an error;
we rather want to distinguish between evaluation errors and empty results
The raw stderr output isn't logged anymore so the build logs need to be
printed by the default logger in order for the old commands like
nix-build to still show build output.
For remote stores the log messages are already forwarded as structured
STDERR_RESULT messages so the old format is duplicate information. But
still included with -vvv since it could be useful for debugging
problems.
$ nix build -L /nix/store/nl71b2niws857ffiaggyrkjwgx9jjzc0-foo.drv --store ssh-ng://localhost
Hello World!
foo> Hello World!
[1/0/1 built] building foo
Fixes #3556
When used with `readFile`, we have a pretty good heuristic of the file
size, so `reserve` this in the `string`. This will save some allocation
/ copying while the string is growing.
This means you now get an error message *before* stuff gets built:
$ nix copy .#hydraJobs.vendoredCrates
error: you must pass '--from' and/or '--to'
Try 'nix --help' for more information.
The commit 3cc1125595 adds a `grantpt`
call on the builder pseudo-terminal fd. This call is actually only
required for macOS, but it requires RW access to /dev/pts, which is
only bind-mounted read-only in the Bazel Linux sandbox. So Nix cannot
actually be run in the Bazel Linux sandbox, for no good reason.
This closes #3026 by allowing `builtins.readFile` to read a file with a
wrongly reported file size; for example, files in `/proc` may report a
file size of 0. Reading files in `/proc` is not a good enough motivation,
however I do think it just makes Nix more robust by allowing more files
to be read. Especially, I do consider the previous behavior to be
dangerous, because Nix was previously reading truncated files. Examples
of file systems which incorrectly report file sizes are network file
systems or dynamic file systems (for performance reasons, a dynamic file
system such as FUSE may generate the content of the file on demand).
```
nix-repl> builtins.readFile "/proc/version"
""
```
With this commit:
```
nix-repl> builtins.readFile "/proc/version"
"Linux version 5.6.7 (nixbld@localhost) (gcc version 9.3.0 (GCC)) #1-NixOS SMP Thu Apr 23 08:38:27 UTC 2020\n"
```
Here is a summary of the behavior changes:
- If the reported size is smaller, the previous implementation
silently returned truncated file content. The new implementation
returns the correct file content.
- If a file had a bigger reported file size, the previous implementation
failed with an exception, but the new implementation returns the
correct file content. This change of behavior is coherent with this pull
request.
Open questions:
- The behavior is unchanged for correctly reported file sizes; however,
performance may vary because it uses the more complex sink interface.
Considering that sink is used a lot, I don't think this impacts the
performance much.
- `builtins.readFile` on an infinite file, such as `/dev/random`, may
fill up memory.
- It does not support adding files to the store, such as `${/proc/version}`.
In particular, doing 'nix build /path/to/dir' now works if
/path/to/dir is not a Git tree (it only has to contain a flake.nix
file).
Also, 'nix flake init' no longer requires a Git tree (but it will do a
'git add flake.nix' if it is a Git tree).
Suppose I have a path /nix/store/[hash]-[name]/a/a/a/a/a/[...]/a,
long enough that everything after "/nix/store/" is longer than 4096
(PATH_MAX) bytes.
Nix will happily allow such a path to be inserted into the store,
because it doesn't look at all the nested structure. It just cares
about the /nix/store/[hash]-[name] part. But, when the path is deleted,
we encounter a problem. Nix will move the path to /nix/store/trash, but
then when it's trying to recursively delete the trash directory, it will
at some point try to unlink
/nix/store/trash/[hash]-[name]/a/a/a/a/a/[...]/a. This will fail,
because the path is too long. After this has failed, any store deletion
operation will never work again, because Nix needs to delete the trash
directory before recreating it to move new things to it. (I assume this
is because otherwise a path being deleted could already exist in the
trash, and then moving it would fail.)
This means that if I can trick somebody into just fetching a tarball
containing a path of the right length, they won't be able to delete
store paths or garbage collect ever again, until the offending path is
manually removed from /nix/store/trash. (And even fixing this manually
is quite difficult if you don't understand the issue, because the
absolute path that Nix says it failed to remove is also too long for
rm(1).)
This patch fixes the issue by making Nix's recursive delete operation
use unlinkat(2). This function takes a relative path and a directory
file descriptor. We ensure that the relative path is always just the
name of the directory entry, and therefore its length will never exceed
255 bytes. This means that it will never even come close to PATH_MAX,
and Nix will therefore be able to handle removing arbitrarily deep
directory hierarchies.
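A rough, self-contained sketch of such an unlinkat()-based recursive delete; error handling is trimmed and the real `_deletePath` differs in detail:
```
#include <dirent.h>
#include <fcntl.h>
#include <string>
#include <sys/stat.h>
#include <unistd.h>

// Every syscall gets a directory fd plus a single path component, so no
// argument ever approaches PATH_MAX however deep the tree is.
static void deleteEntry(int parentFd, const std::string & name)
{
    struct stat st;
    if (fstatat(parentFd, name.c_str(), &st, AT_SYMLINK_NOFOLLOW) == -1) return;
    if (S_ISDIR(st.st_mode)) {
        int fd = openat(parentFd, name.c_str(), O_RDONLY | O_DIRECTORY);
        if (fd != -1) {
            if (DIR * dir = fdopendir(fd)) {          // dir now owns fd
                while (struct dirent * e = readdir(dir)) {
                    std::string entry = e->d_name;
                    if (entry != "." && entry != "..")
                        deleteEntry(fd, entry);       // recurse with short names only
                }
                closedir(dir);
            } else close(fd);
        }
        unlinkat(parentFd, name.c_str(), AT_REMOVEDIR);
    } else {
        unlinkat(parentFd, name.c_str(), 0);
    }
}
```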
Since the directory file descriptor is used for recursion after being
used in readDirectory, I made a variant of readDirectory that takes an
already open directory stream, to avoid the directory being opened
multiple times. As we have seen from this issue, the less we have to
interact with paths, the better, and so it's good to reuse file
descriptors where possible.
I left _deletePath as succeeding even if the parent directory doesn't
exist, even though that feels wrong to me, because without that early
return, the linux-sandbox test failed.
Reported-by: Alyssa Ross <hi@alyssa.is>
Thanks-to: Puck Meerburg <puck@puckipedia.com>
Tested-by: Puck Meerburg <puck@puckipedia.com>
Reviewed-by: Puck Meerburg <puck@puckipedia.com>
Reduces the number of store queries it performs. Also prints a warning
if any of the selectors did not match any installed derivations.
UX Caveats:
- Will print a warning that nothing matched if a previous selector
already removed the path
- Will not do anything if no selectors were provided (no change from
before).
Fixes #3531
In particular, we store whether an attribute failed to evaluate (threw
an exception) or was an unsupported type. This is to ensure that a
repeated 'nix flake show' never has to evaluate anything, so it can
execute without fetching the flake.
With this, 'nix flake show nixpkgs/nixos-20.03 --legacy' executes in
0.6s (was 3.4s).
This speeds up the creation of the cache for the nixpkgs flake from
21.2s to 10.2s. Oddly, it also speeds up querying the cache
(i.e. running 'nix flake show nixpkgs/nixos-20.03 --legacy') from 4.2s
to 3.4s.
(For comparison, running with --no-eval-cache takes 9.5s, so the
overhead of building the SQLite cache is only 0.7s.)
In the fully cached case for the 'nixpkgs' flake, it went from 101s to
4.6s. Populating the cache went from 132s to 17.4s (which could
probably be improved further by combining INSERTs).
Usually this just writes to stdout, but for ProgressBar, we need to
clear the current line, write the line to stdout, and then redraw the
progress bar.
(cherry picked from commit 696c026006)
Motivation: maintain project-level configuration files.
Document the whole situation a bit better so that it corresponds to the
implementation, and add NIX_USER_CONF_FILES that allows overriding
which user files Nix will load during startup.
Previously the memory would occasionally be collected during eval since
the GC doesn't consider the member variable as alive / doesn't scan the
region of memory where the pointer lives.
By using the traceable_allocator<T> allocator provided by Boehm GC we
can ensure the memory isn't collected. It should be properly freed when
SourceExprCommand goes out of scope.
This doesn't just cause problems for nix-store --serve but also results
in certain build failures. Builds that use Unix domain sockets in their
tests often fail because the /var/folders prefix already consumes more
than half of the maximum length of socket paths.
    struct sockaddr_un {
        sa_family_t sun_family;    /* AF_UNIX */
        char        sun_path[108]; /* Pathname */
    };
};