Instead of needing to run `nix show-config --json | jq -r
'."warn-dirty".value'` to view the value of `warn-dirty`, you can now
run `nix show-config warn-dirty`.
This should be a non-empty set, and so we don't want people constructing
an empty one by accident. We remove the zero-argument constructor with a
little inheritance trickery.
`DerivedPath::Built` and `DerivationGoal` were previously using a
regular set with the convention that the empty set means all outputs.
But it is easy to forget about this rule when processing those sets.
Using `OutputSpec` forces us to get it right.
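A minimal sketch of the kind of "inheritance trickery" this refers to, assuming a plain `std::set<std::string>` wrapper (the name `OutputNames` and the surrounding code are illustrative, not the exact Nix declarations):

```cpp
// A string set that cannot be default-constructed, so "empty set means all
// outputs" can no longer happen by accident.
#include <cassert>
#include <set>
#include <string>

struct OutputNames : std::set<std::string>
{
    using std::set<std::string>::set; // inherit the useful constructors...

    OutputNames() = delete;           // ...but not the zero-argument one

    // "all outputs" would be represented by a separate case (e.g. in a
    // variant), not by an empty set.
};

int main()
{
    OutputNames outs { "out", "dev" };
    assert(!outs.empty()); // constructed non-empty; the default constructor is gone
}
```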
It appears that on current macOS versions, our use of poll() to detect
client disconnects no longer works. As a workaround, poll() for
POLLRDNORM, since this *will* wake up when the client has
disconnected. The downside is that it also wakes up when input is
available. So just sleep for a bit in that case. This means that on
macOS, a client disconnect may take up to a second to be detected,
but that's better than not being detected at all.
Fixes #7584.
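A rough sketch of the workaround, assuming a single client file descriptor and a one-second back-off (the function and the exact sleep interval are illustrative, not the actual daemon code):

```cpp
#include <poll.h>
#include <unistd.h>

// Block until the peer on `clientFd` has hung up.
bool waitForDisconnect(int clientFd)
{
    while (true) {
        struct pollfd fds[1];
        fds[0].fd = clientFd;
        // On macOS, polling with no requested events no longer reports the
        // hang-up, but POLLRDNORM does become ready when the peer closes.
        fds[0].events = POLLRDNORM;
        fds[0].revents = 0;

        if (poll(fds, 1, -1) <= 0) continue;

        if (fds[0].revents & (POLLHUP | POLLERR)) return true;

        // POLLRDNORM also fires when ordinary input arrives; we only care
        // about disconnects here, so back off briefly and poll again.
        sleep(1); // hence "may take up to a second to be detected"
    }
}
```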
This way the links are clearly within the manual (i.e. not absolute
paths), while allowing snippets to reference the documentation root
reliably, regardless of the base URL at which they're included.
Prior to this change, we had a bunch of ad-hoc string manipulation code
scattered around. This made it hard to figure out what the data model
for string contexts is.
Now, we still store string contexts most of the time as encoded strings
--- I was wary of the performance implications of changing that --- but
whenever we parse them we do so only through the
`NixStringContextElem::parse` method, which handles all cases. This
creates a data type that is very similar to `DerivedPath` but:
- Represents the funky `=<drvpath>` case as properly distinct from the
others.
- Only encodes a single output, no wildcards and no set, for the
"built" case.
(I would like to deprecate `=<path>`, after which we are in spitting
distance of `DerivedPath` and could maybe get away with fewer types, but
that is another topic for another day.)
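The shape of the parsed representation is roughly the following (a hedged sketch; the names, the `std::string` stand-in for `StorePath`, and the exact fields are illustrative rather than the real Nix declarations):

```cpp
#include <string>
#include <variant>

using StorePath = std::string; // stand-in for the real StorePath type

struct ContextOpaque  { StorePath path; };     // a plain store path
struct ContextDrvDeep { StorePath drvPath; };  // the funky "=<drvpath>" case,
                                               // kept properly distinct
struct ContextBuilt   {                        // a single built output of a
    StorePath drvPath;                         // derivation: no wildcards,
    std::string output;                        // no set of outputs
};

using StringContextElem =
    std::variant<ContextOpaque, ContextDrvDeep, ContextBuilt>;
```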
macOS doesn't have user namespacing, so the gid of the builder needs
to be nixbld. The logic got "has sandboxing enabled" confused with
"has user namespaces".
Fixes #7529.
This basically reverts 6e5165b773.
It fixes errors like
sandbox-exec: <internal init prelude>:292:47: unable to open sandbox-minimal.sb: not found
when trying to run a development Nix installed in a user's home
directory.
Also, we're trying to minimize the number of installed files
to make it possible to deploy Nix as a single statically-linked
binary.
Adds a new boolean structured attribute
`outputChecks.<output>.unsafeDiscardReferences` which disables scanning
an output for runtime references.
__structuredAttrs = true;
outputChecks.out.unsafeDiscardReferences = true;
This is useful when creating filesystem images containing their own embedded Nix
store: they are self-contained blobs of data with no runtime dependencies.
Setting this attribute requires the experimental feature
`discard-references` to be enabled.
Previously addTempRoot() acquired the LocalStore state lock and waited
for the garbage collector to reply. If the garbage collector is in the
same process (as is the case with auto-GC), this would deadlock as
soon as the garbage collector thread needs the LocalStore state lock.
So now addTempRoot() uses separate Syncs for the state that it
needs. As long as the auto-GC thread doesn't call addTempRoot() (which
it shouldn't), it shouldn't deadlock.
Fixes #3224.
This also moves the file handle into its own Sync object so we're not
holding the _state while acquiring the file lock. There was no real
deadlock risk here since locking a newly created file cannot block,
but it's still a bit nicer.
This has the same goal as b13fd4c58e81b2b2b0d72caa5ce80de861622610, but
achieves it in a different way in order to not break
`nix why-depends --derivation`.
In principle, this should avoid deadlocks where two instances of Nix are
holding a shared lock on big-lock and are both waiting to get an
exclusive lock.
However, it seems like `flock(2)` is supposed to do this automatically,
so it's not clear whether this is actually where the problem comes from.
This makes 'nix develop' set the Linux personality in the same way
that the actual build does, allowing a command like 'nix develop
nix#devShells.i686-linux.default' on x86_64-linux to work correctly.
Without this, the error is lost, and it makes for a hard-to-debug
situation. Also remove some of the busyness inside the sqlite3_open_v2
args.
The errcode returned is not the extended one. The only way to make open
return an extended code would be to add SQLITE_OPEN_EXRESCODE to the
flags. In the future it might be worth making this change,
which would also simplify the existing SQLiteError code.
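For illustration, that possible future change could look roughly like this (a sketch only; `SQLITE_OPEN_EXRESCODE` requires SQLite >= 3.37, and the file name is an example):

```cpp
#include <sqlite3.h>
#include <cstdio>

int main()
{
    sqlite3 * db = nullptr;
    int ret = sqlite3_open_v2(
        "example.sqlite", &db,
        SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE | SQLITE_OPEN_EXRESCODE,
        nullptr);
    if (ret != SQLITE_OK)
        // With SQLITE_OPEN_EXRESCODE this is already an extended result
        // code, e.g. SQLITE_CANTOPEN_ISDIR instead of plain SQLITE_CANTOPEN.
        std::printf("open failed: %d (%s)\n", ret, sqlite3_errstr(ret));
    sqlite3_close(db);
}
```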
This makes 'nix build' work on paths (which will be copied to the
store) and store paths (returned as is). E.g. the following flake
output attributes can be built using 'nix build .#foo':
foo = ./src;
foo = self.outPath;
foo = builtins.fetchTarball { ... };
foo = (builtins.fetchTree { .. }).outPath;
foo = builtins.fetchTree { .. } + "/README.md";
foo = builtins.storePath /nix/store/...;
Note that this is potentially risky, e.g.
foo = /.;
will cause Nix to try to copy the entire file system to the store.
What doesn't work yet:
foo = self;
foo = builtins.fetchTree { .. };
because we don't handle attrsets with an outPath attribute in them yet,
and
foo = builtins.storePath /nix/store/.../README.md;
since result symlinks have to point to a store path currently (rather
than a file inside a store path).
Fixes #7417.
They did not include the detailed error message, losing essential
information for troubleshooting.
Example message:
warning: creating statement 'insert or rplace into NARs(cache, hashPart, namePart, url, compression, fileHash, fileSize, narHash, narSize, refs, deriver, sigs, ca, timestamp, present) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, 1)': at offset 10: SQL logic error, near "rplace": syntax error (in '/tmp/nix-shell.grQ6f7/nix-test/tests/binary-cache/test-home/.cache/nix/binary-cache-v6.sqlite')
It's not the best example; more important information will be in
the message for e.g. a constraint violation.
I don't see why this specific error is printed as a warning, but
that's for another commit.
Unsetting `build-users-group` (without `auto-allocate-uids` enabled)
gives the following error:
```
src/libstore/lock.cc:25: static std::unique_ptr<nix::UserLock> nix::SimpleUserLock::acquire(): Assertion `settings.buildUsersGroup != ""' failed.
```
Fix the logic in `useBuildUsers` and document the default value
for `build-users-group`.
This makes the position object used in exceptions abstract, with a
method getSource() to get the source code of the file in which the
error originated. This is needed for lazy trees because source files
don't necessarily exist in the filesystem, and we don't want to make
libutil depend on the InputAccessor type in libfetcher.
Make everything be in the form "while ..." (most things were already),
and in particular *don't* use other propositions that must go after or
before specific "while ..." clauses to make sense.
When debugging nix expressions the outermost trace tends to be more useful
than the innermost. It is therefore printed last to save developers from
scrolling.
We used to set enforceDeterminism to true in the settings (by default)
and thus did send a non-zero value over the wire. The value should
probably be ignored as it should only matter if nrRounds is non-zero
as well.
Having the old code here where the value is expected to be zero only
works with the same version of Nix where we are sending zero. We
should always test this with older Nix versions acting as client or
server, as otherwise upgrades in larger networks might be a pain.
Fixes 8e0946e8df
Fix #6209
When trying to run `nix log <installable>`, try first to resolve the derivation pointed to
by `<installable>` as it is the resolved one that holds the build log.
This has a couple of shortcomings:
1. It’s expensive as it requires re-reading the derivation
2. It’s brittle because if the derivation doesn’t exist anymore or can’t
be resolved (which is the case if any one of its build inputs is missing),
then we can’t access the log anymore
However, I don’t think we can do better (at least not right now).
The alternatives I see are:
1. Copy the build log for the un-resolved derivation. But that means a
lot of duplication
2. Store the results of the resolving in the db. Which might be the best
long-term solution, but leads to a whole new class of potential
issues.
These only functioned if a very narrow combination of conditions held:
- The result path does not yet exist (--check did not result in
repeated builds), AND
- The result path is not available from any configured substituters, AND
- No remote builders that can build the path are available.
If any of these do not hold, a derivation would be built 0 or 1 times
regardless of the repeat option. Thus, remove it to avoid confusion.
The old way was not correct.
Here is an example:
```
$ nix-instantiate --eval --expr 'let x = a: throw "asdf"; in x 1' --show-trace
error: asdf
… while evaluating 'x'
at «string»:1:9:
1| let x = a: throw "asdf"; in x 1
| ^
… from call site
at «string»:1:29:
1| let x = a: throw "asdf"; in x 1
| ^
```
and yet also:
```
$ nix-instantiate --eval --expr 'let x = a: throw "asdf"; in x' --show-trace
<LAMBDA>
```
Here is the thing: in both cases we are evaluating `x`!
Nix is a higher-order language, and functions are a sort of value. When
we write `x = a: ...`, `a: ...` is the expression that `x` is being
defined to be, and that is already a value. Therefore, we should *never*
get a trace that says "while evaluating `x`", because evaluating `a:
...` is *trivial* and nothing happens during it!
What is actually happening here is that we are applying `x` and evaluating
its *body* with arguments substituted for parameters. I think the
simplest way to say it is just "while *calling* `x`", and so that is what I
changed it to.
We need to close the GC server socket before shutting down the active
GC client connections, otherwise a client may (re)connect and get
ECONNRESET. But also handle ECONNRESET for resilience.
Fixes random failures like
GC socket disconnected
connecting to '/tmp/nix-shell.y07M0H/nix-test/default/var/nix/gc-socket/socket'
sending GC root '/tmp/nix-shell.y07M0H/nix-test/default/store/kb5yzija0f1x5xkqkgclrdzldxj6nnc6-non-blocking'
reading GC root from client: error: unexpected EOF reading a line
1 store paths deleted, 0.00 MiB freed
error: reading from file: Connection reset by peer
in gc-non-blocking.sh.
It calls strlen() on the input (rather than simply copying at most
`size` bytes), which can fail if the input is not zero-terminated and
is inefficient in any case.
Fixes #7347.
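The difference in a generic illustration (the helper names are made up, not the functions touched by this change):

```cpp
#include <cstring>
#include <string>

// Problematic: relies on the input being NUL-terminated and scans it twice.
void appendBad(std::string & out, const char * data, size_t size)
{
    out.append(data, strlen(data)); // undefined if `data` isn't NUL-terminated
    (void) size;                    // the known length is simply ignored
}

// Better: copy at most `size` bytes; no scan for a terminator is needed.
void appendGood(std::string & out, const char * data, size_t size)
{
    out.append(data, size);
}
```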
why-depends assumed that we knew the output path of the second argument.
For CA derivations, we might not know until it's built. One way to solve
this would be to build the second installable to get the output path.
In this case we don't need to, though. If the first installable (A)
depends on the second (B), then getting the store path of A will
necessitate having the store path of B. The contrapositive is: if the store
path of B is not known (i.e. it's a CA derivation which hasn't been
built), then A does not depend on B.
We shouldn't skip this if the supplementary group list is empty,
because then the sandbox won't drop the supplementary groups of the
parent (like "root").
The new experimental feature 'cgroups' enables the use of cgroups for
all builds. This allows better containment and enables setting
resource limits and getting some build stats.
It occurred when an output of the dependency was already available,
so it didn't need rebuilding and didn't get added to the
inputDrvOutputs.
This process-related info wasn't suitable for the purpose of finding
the actual input paths for the builder. It is better to do this in
absolute terms by querying the store.
This change is needed to support aws-sdk-cpp 1.10 and newer.
I opted not to make this dependent on the sdk version because
the crt dependency has been in the interface of the older
sdk as well, and it was only coincidence that libstore didn't
make use of any privately defined symbols directly.
When calling `builtins.readFile` on a store path, the references of that
path are currently added to the resulting string's context.
This change makes those references the *possible* context of the string,
but filters them to keep only the references whose hash actually appears
in the string, similarly to what is done for determining the runtime
references of a path.
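The filtering idea, as a hedged sketch (names are illustrative; the real code works on `StorePath`s and their hash parts rather than raw strings):

```cpp
#include <set>
#include <string>

using HashPart = std::string; // e.g. "zakqwc529rb6xcj8pwixjsxscvlx9fbi"

// Keep only the candidate references whose hash part actually occurs in
// the file contents.
std::set<HashPart> filterReferences(
    const std::string & contents,
    const std::set<HashPart> & candidates)
{
    std::set<HashPart> kept;
    for (auto & hash : candidates)
        if (contents.find(hash) != std::string::npos)
            kept.insert(hash);
    return kept;
}
```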
Cgroups are now only used for derivations that require the uid-range
feature. This allows auto UID allocation even on systems that
don't have cgroups (like macOS).
Also, make things work on modern systems that use cgroups v2 (where
there is a single hierarchy and no "systemd" controller).
after discussing this with multiple people, I'm convinced that "build
task" is more precise: a derivation is not an action, but inert until it
is built. also it's easier to pronounce.
proposal: use "build task" for the generic concept "description of how
to derive new files from the contents of existing files". then it will
be easier to distinguish what we mean by "derivation" (a specific data
structure and Nix language value type) and "store derivation" (a
serialisation of a derivation into a file in the Nix store).
readline is not re-entrant, so entering the debugger from the
completion callback results in an eventual segfault.
The workaround is to temporarily disable the debugger when searching
for possible completions.
Call it as `['nix', '__build-remote', ... ]` rather than the previous
`["__build-remote", "nix __build-remote", ... ]`, which seems to have
been unintended.
The description of the --profile option talks about the "update" operation.
This is probably meant for operations such as "nix profile install", but the
same option is reused in other subcommands, which do not update the profile,
such as "nix profile {list,history,diff-closures}".
We update the description to make sense in both contexts.
Currently, Nix passes `-a` when it runs commands on a remote machine via
SSH, which disables agent forwarding. This causes issues when the
`ForwardAgent` option is set in SSH config files, as the command line
option always overrides those.
In particular, this causes issues if the command being run is `sudo`
and the remote machine is configured with the equivalent of NixOS's
`security.pam.enableSSHAgentAuth` option. Not allowing SSH agent
forwarding can cause authentication to fail unexpectedly.
This can currently be worked around by setting `NIX_SSHOPTS="-A"`, but
we should defer to the options in the SSH config files to be least
surprising for users.
* Clarify the documentation of foldl': That the arguments are forced
before application (?) of `op` is necessarily true. What is important
to stress is that we force every application of `op`, even when the
value turns out to be unused.
* Move the example before the comment about strictness to make it less
confusing: It is a general example and doesn't really showcase anything
about foldl' strictness.
* Add test cases which nail down aspects of foldl' strictness:
* The initial accumulator value is not forced unconditionally.
* Applications of op are forced.
* The list elements are not forced unconditionally.
After we've sent "\2\n" to the parent, we can't send a serialized
exception anymore. It will show up garbled like
$ nix-build --store /tmp/nix --expr 'derivation { name = "foo"; system = "x86_64-linux"; builder = "/foo/bar"; }'
this derivation will be built:
/nix/store/xmdip0z5x1zqpp6gnxld3vqng7zbpapp-foo.drv
building '/nix/store/xmdip0z5x1zqpp6gnxld3vqng7zbpapp-foo.drv'...
ErrorErrorEexecuting '/foo/bar': No such file or directory
error: builder for '/nix/store/xmdip0z5x1zqpp6gnxld3vqng7zbpapp-foo.drv' failed with exit code 1
While trying to use an alternate directory for my Nix installation, I
noticed that nix's output didn't reflect the updated state
directory. This patch corrects that and now prints the warning before
attempting to create the directory (if the directory creation fails,
it wouldn't have been obvious why nix was attempting to create the
directory in the first place).
With this patch, I now get the following warning:
warning: '/home/deck/.var/app/org.nixos.nix/var/nix' does not
exist, so Nix will use '/home/deck/.local/share/nix/root' as a
chroot store
The documentation for `parseDrvName` does not agree with the implementation when
the derivation name contains a dash which is followed by something that is
neither a letter nor a digit. This commit corrects the documentation to agree
with the implementation.
If we don't have any github token, we won't be able to fetch private
repos, but we are also more likely to run into API limits since
we don't have a token. To mitigate this, only ever use the GitHub API
if we actually have a token.
The current definition of `intersectAttrs` is incorrect:
> Return a set consisting of the attributes in the set e2 that also exist in the
> set e1.
Recall that (Nix manual, section 5.1):
> An attribute set is a collection of name-value-pairs (called attributes)
According to the existing description of `intersectAttrs`, the following should
evaluate to the empty set, since no key-value *pair* (i.e. attribute) exists in
both sets:
```
builtins.intersectAttrs { x=3; } {x="foo";}
```
And yet:
```
nix-repl> builtins.intersectAttrs { x=3; } {x="foo";}
{ x = "foo"; }
```
Clearly the intent here was for the *names* of the resulting attribute set to be
the intersection of the *names* of the two arguments, and for the values of the
resulting attribute set to be the values from the second argument.
This commit corrects the definition, making it match the implementation and intent.
Make sure that people who run Nix in non-interactive mode (and so don't have the possibility to interactively accept the individual flake configuration settings) are aware of this flag.
Fix #7086
These settings seem harmless, they control the same polling
functionality that timeout does, but with different behavior. Should
be safe for untrusted users to pass in.
I just had a colleague get confused by the previous phrase for good
reason. "valid" sounds like an *objective* criterion, e.g. an *invalid
signature* would be one that would be trusted by no one, e.g. because it
is misformatted or something.
What is actually going on is that there might be a signature which is
perfectly valid to *someone else*, but not to the user, because they
don't trust the corresponding public key. This is a *subjective*
criterion, because it depends on the arbitrary and personal choice of
which public keys to trust.
I therefore think "trustworthy" is a better adjective to use. Whether
something is worthy of trust is clearly subjective, and then "trust"
within that word nicely evokes `trusted-public-keys` and friends.
The history is not critical to the functionality of nix repl, so it's
enough to warn here, rather than refuse to start if the directory Nix
thinks the history should live in can't be created.
- call close explicitly in writeFile to prevent the close exception
from being ignored
- fsync after writing schema file to flush data to disk
- fsync schema file parent to flush metadata to disk
https://github.com/NixOS/nix/issues/7064
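The pattern these three points describe, sketched with plain POSIX calls (error handling trimmed; the function name and parameters are illustrative):

```cpp
#include <fcntl.h>
#include <unistd.h>
#include <stdexcept>
#include <string>

void writeFileSynced(const std::string & path, const std::string & data,
                     const std::string & parentDir)
{
    int fd = open(path.c_str(), O_WRONLY | O_CREAT | O_TRUNC, 0666);
    if (fd == -1) throw std::runtime_error("cannot open " + path);

    bool ok =
        write(fd, data.data(), data.size()) == (ssize_t) data.size()
        && fsync(fd) == 0;        // flush the file data to disk

    // Close explicitly and check the result, instead of letting a destructor
    // swallow the error.
    ok = close(fd) == 0 && ok;

    if (!ok) throw std::runtime_error("cannot write " + path);

    // Also flush the new directory entry (the metadata) to disk.
    int dirFd = open(parentDir.c_str(), O_RDONLY);
    if (dirFd != -1) {
        fsync(dirFd);
        close(dirFd);
    }
}
```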
Remove the `verify TLS: Nix CA file = 'blah'` message that Nix used to print when fetching anything, as it's both useless (`libcurl` prints the same info in its logs) and misleading (it gives the impression that a new TLS connection is being established, which might not be the case because of multiplexing; see #7011).
This commit adds an optional `__impure` parameter to fetchurl.nix, which allows
the caller to use `libfetcher`'s fetcher in an impure derivation. This allows
nixpkgs' patch-normalizing fetcher (fetchpatch) to be rewritten to use nix's
internal fetchurl, thereby eliminating the awkward "you can't use fetchpatch
here" banners scattered all over the place.
See also: https://github.com/NixOS/nixpkgs/pull/188587
Implements the approach suggested by feedback on PR #6994, where
tempdir paths are created in the store (now with an exclusive lock).
As part of this work, the currently-broken and unused
`createTempDirInStore` function is updated to create an exclusive lock
on the temp directory in the store.
The GC now makes a non-blocking attempt to lock any store directories
that "look like" the temp directories created by this function, and if
it can't acquire one, ignores the directory.
Stdenv sets this to a bash that doesn't have readline/completion
support, so running 'nix (develop|shell)' inside a 'nix develop' gives
you a crippled shell. So let's just ignore the derivation's $SHELL.
This could break interactive use of build phases that use $SHELL, but
they appear to be fairly rare.
Disables the SA_RESTART behavior on macOS which causes:
> Restarting of pending calls is requested by setting the SA_RESTART bit
> in sa_flags. The affected system calls include read(2), write(2),
> sendto(2), recvfrom(2), sendmsg(2) and recvmsg(2) on a communications
> channel or a slow device (such as a terminal, but not a regular file)
> and during a wait(2) or ioctl(2).
From: https://man.openbsd.org/sigaction#SA_RESTART
This being set on macOS caused a bug where read() calls to the daemon
socket were blocking after a SIGINT was received. As a result,
checkInterrupt was never reached even though the signal was received
by the signal handler thread.
On Linux, SA_RESTART is disabled by default. This probably affects
other BSDs but I don’t have the ability to test it there right now.
readDerivation is pretty slow, and while it may not be significant for
some use cases, on things like ghc-nix where we have thousands of
derivations it really slows things down.
So, this just doesn’t do the impure derivation check if the impure
derivation experimental feature is disabled. Perhaps we could cache
the result of isPure() and keep the check, but this is a quick fix
for the slowdown introduced with the impure derivations feature in 2.8.0.
this simplifies the setup a lot, and avoids weird looking `./file.md`
links showing up.
it also does not show regular URLs any more. currently the command
reference only has few of them, and not showing them in the offline
documentation is hopefully not a big deal.
instead of building more special-case solutions, clumsily preprocessing
the input, or issuing verbal rules on dealing with URLs, this should
be solved sustainably by not rendering relative links in `lowdown`:
https://github.com/kristapsdz/lowdown/issues/105
This was caused by -L calling setLogFormat() again, which caused the
creation of a new progress bar without destroying the old one. So we
had two progress bars clobbering each other.
We should change 'logger' to be a smart pointer, but I'll do that in a
future PR.
Fixes #6931.
98e361ad4c introduced a regression where
previously stored attributes were replaced by placeholders. As a
result, a command like 'nix build nixpkgs#hello' had to be executed at
least twice to get caching.
This code does not seem necessary for suggestions to work.
This issue made it impossible for clients using a serve protocol of
version <= 2.3 to use the `cmdBuildDerivation` command of servers using
a protocol of version >= 2.6. The faulty version check makes the server
send back build outputs that the client is not expecting.
This hang for some reason didn't trigger in the Nix build, but did
running 'make installcheck' interactively. What happened:
* Store::addMultipleToStore() calls a SinkToSource object to copy a
path, which in turn calls LegacySSHStore::narFromPath(), which
acquires a connection.
* The SinkToSource object is not destroyed after the last bytes have
been read, so the coroutine's stack is still alive and its
destructors are not run. So the connection is not released.
* Then when the next path is copied, because max-connections = 1,
LegacySSHStore::narFromPath() hangs forever waiting for a connection
to be released.
The fix is to make sure that the source object is destroyed when we're
done with it.
Makes `printValueAsJSON` not copy paths to the store for `nix eval
--json`, `nix-instantiate --eval --json` and `nix-env --json`.
Fixes https://github.com/NixOS/nix/issues/5612
RewritingSink can handle being fed input where a reference crosses a
chunk boundary. We don't need to load the whole source into memory, and
in fact *not* loading the whole source lets nix build FODs that do not
fit into memory (eg fetchurl'ing data files larger than system memory).
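A conceptual sketch of how a match that straddles two chunks can still be found: keep the last `patternLength - 1` bytes already seen and prepend them to the next chunk before searching (this only counts matches and is not the real RewritingSink, which also rewrites and re-emits the data):

```cpp
#include <iostream>
#include <string>

struct ChunkedMatcher
{
    std::string pattern;
    std::string carry;   // tail of the previous chunk, shorter than pattern
    size_t hits = 0;

    void feed(const std::string & chunk)
    {
        std::string buf = carry + chunk;
        for (size_t pos = 0; (pos = buf.find(pattern, pos)) != std::string::npos; ++pos)
            ++hits;
        // Keep just enough of the tail for a boundary-crossing match.
        size_t keep = pattern.empty() ? 0 : pattern.size() - 1;
        carry = buf.size() > keep ? buf.substr(buf.size() - keep) : buf;
    }
};

int main()
{
    ChunkedMatcher m{"needle"};
    m.feed("hay nee");   // a reference split across...
    m.feed("dle stack"); // ...two chunks is still found
    std::cout << m.hits << "\n"; // prints 1
}
```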
Some activities are numerous but usually very short (e.g. copying a
source file to the store) which would cause a lot of flickering. So
only show activities that have been running for at least 10 ms.
Rather than directly copying the source to its dest, copy it first to a
temporary location, and eventually move that temporary.
That way, the move is at least atomic from the point of view of the destination.
The recursive copy from the stl doesn’t exactly do what we need because
1. It doesn’t delete things as we go
2. It doesn’t keep the mtime, which changes the NARs
So re-implement it ourselves. A bit dull, but that way we have what we want.
In `nix::rename`, if the call to `rename` fails with `EXDEV` (failure
because the source and the destination are in different filesystems)
switch to copying and removing the source.
To avoid having to re-implement the copy manually, I switched the
function to use the c++17 `filesystem` library (which has a `copy`
function that should do what we want).
Fix #6262
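A minimal sketch of that fallback, assuming POSIX paths and using `std::filesystem` for the copy (a simplification of the real `nix::rename` helper, not its exact code):

```cpp
#include <cerrno>
#include <cstdio>       // std::rename
#include <filesystem>
#include <system_error>

namespace fs = std::filesystem;

void renameOrCopy(const fs::path & from, const fs::path & to)
{
    if (std::rename(from.c_str(), to.c_str()) == 0) return;

    if (errno != EXDEV)
        throw std::system_error(errno, std::generic_category(),
                                "renaming '" + from.string() + "'");

    // Source and destination live on different filesystems:
    // copy, then remove the source.
    fs::copy(from, to, fs::copy_options::recursive | fs::copy_options::copy_symlinks);
    fs::remove_all(from);
}
```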
Once a derivation goal has been completed, we check whether or not
this goal was meant to be repeated to check its output.
An early return branch was preventing the worker from reaching that repeat
code branch, hence breaking the --check command (#2619).
It seems like this early return branch is an artifact of a past
refactoring. As far as I can tell, buildDone's main branch also
cleans up the tmp directory before returning.
By default, Nix sets the "cores" setting to the number of CPUs which are
physically present on the machine. If cgroups are used to limit the CPU
and memory consumption of a large Nix build, the OOM killer may be
invoked.
For example, consider a GitLab CI pipeline which builds a large software
package. The GitLab runner spawns a container whose CPU is limited to 4
cores and whose memory is limited to 16 GiB. If the underlying machine
has 64 cores, Nix will invoke the build with -j64. In many cases, that
level of parallelism will invoke the OOM killer and the build will
completely fail.
This change sets the default value of "cores" to be
ceil(cpu_quota / cpu_period), with a fallback to
std::thread::hardware_concurrency() if cgroups v2 is not detected.
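A sketch of that computation, assuming the cgroup v2 `cpu.max` file at the cgroup root (the real code would resolve the process's own cgroup; path and parsing here are simplified):

```cpp
#include <cmath>
#include <fstream>
#include <string>
#include <thread>

unsigned int defaultCores()
{
    // cgroup v2 format: "<quota> <period>" or "max <period>".
    std::ifstream in("/sys/fs/cgroup/cpu.max");
    std::string quota;
    unsigned long period = 0;
    if (in >> quota >> period && quota != "max" && period != 0)
        return static_cast<unsigned int>(std::ceil(std::stod(quota) / period));
    // No usable quota: fall back to the number of hardware threads.
    return std::thread::hardware_concurrency();
}
```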
The workaround for "Some distros patch Linux" mentioned in
local-derivation-goal.cc will not help in the `--option
sandbox-fallback false` case. To provide the user more helpful
guidance on how to get the sandbox working, let's check to see if the
`/proc` node created by the aforementioned patch is present and
configured in a way that will cause us problems. If so, give the user
a suggestion for how to troubleshoot the problem.
local-derivation-goal.cc contains a comment stating that "Some distros
patch Linux to not allow unprivileged user namespaces." Let's give a
pointer to a common version of this patch for those who want more
details about this failure mode.
This commit causes nix to `warn()` if sandbox setup has failed and
`/proc/self/ns/user` does not exist. This is usually a sign that the
kernel was compiled without `CONFIG_USER_NS=y`, which is required for
sandboxing.
This commit uses `warn()` to notify the user if sandbox setup fails
with errno==EPERM and /proc/sys/user/max_user_namespaces is missing or
zero, since that is at least part of the reason why sandbox setup
failed.
Note that `echo -n 0 > /proc/sys/user/max_user_namespaces` or
equivalent at boot time has been the recommended mitigation for
several Linux LPE vulnerabilities over the past few years. Many users
have applied this mitigation and then forgotten that they have done
so.
The failure modes for nix's sandboxing setup are pretty complicated.
When nix is unable to set up the sandbox, let's provide more detail
about what went wrong. Specifically:
* Make sure the error message includes the word "sandbox" so the user
knows that the failure was related to sandboxing.
* If `--option sandbox-fallback false` was provided, and removing it
would have allowed further attempts to make progress, let the user
know.
I recently got fairly confused why the following expression didn't have
any effect
{
description = "Foobar";
inputs.sops-nix = {
url = github:mic92/sops-nix;
inputs.nixpkgs_22_05.follows = "nixpkgs";
};
}
until I found out that the input was called `nixpkgs-22_05` (please note
the dash vs. underscore).
IMHO it's not a good idea to not throw an error in that case and
probably leave end-users rather confused, so I implemented a small check
for that which basically checks whether `follows`-declarations from
overrides actually have corresponding inputs in the transitive flake.
In fact this was done by accident already in our own test-suite where
the removal of a `follows` was apparently forgotten[1].
Since the key of the `std::map` that holds the `overrides` is a vector
and we have to find the last element of each vector (i.e. the override)
this has to be done with a for loop in O(n) complexity with `n` being
the total amount of overrides (which shouldn't be that large though).
Please note that this doesn't work with nested expressions, i.e.
inputs.fenix.inputs.nixpkgs.follows = "...";
which is a known problem[2].
For the expression demonstrated above, an error like this will be
thrown:
error: sops-nix has a `follows'-declaration for a non-existant input nixpkgs_22_05!
[1] 2664a216e5
[2] https://github.com/NixOS/nix/issues/5790
Defers completion of flake inputs until the whole command line is parsed
so that we know what flakes we need to complete the inputs of.
Previously, `nix build flake --update-input <Tab>` always behaved like
`nix build . --update-input <Tab>`.
Prevents errors when running with UBSan:
/nix/store/j5vhrywqmz1ixwhsmmjjxa85fpwryzh0-gcc-11.3.0/include/c++/11.3.0/bits/stl_pair.h:353:4: runtime error: load of value 229, which is not a valid value for type 'AttrType'
Specifically, if we're not root and the daemon socket does not exist,
then we use ~/.local/share/nix/root as a chroot store. This enables
non-root users to download nix-static and have it work out of the box,
e.g.
ubuntu@ip-10-13-1-146:~$ ~/nix run nixpkgs#hello
warning: '/nix' does not exists, so Nix will use '/home/ubuntu/.local/share/nix/root' as a chroot store
Hello, world!
With this, Nix will write a copy of the sandbox shell to /bin/sh in
the sandbox rather than bind-mounting it from the host filesystem.
This makes /bin/sh work out of the box with nix-static, i.e. you no
longer get
/nix/store/qa36xhc5gpf42l3z1a8m1lysi40l9p7s-bootstrap-stage4-stdenv-linux/setup: ./configure: /bin/sh: bad interpreter: No such file or directory
This allows changes to nix-cache-info to be picked up by existing
clients. Previously, the only way for this to happen would be for
clients to delete binary-cache-v6.sqlite, which is quite awkward for
users.
On the other hand, updates to nix-cache-info should be pretty rare,
hence the choice of a fairly long TTL. Configurability is probably not
useful enough to warrant implementing it.
Allow `nix build flake1 flake2 --update-input <Tab>` to complete the
inputs of both flakes.
Also do tilde expansion so that `nix build ~/flake --update-input <Tab>`
works.
Useful because a default `sudo` on darwin doesn't clear `$HOME`, so things like `sudo nix-channel --list`
will surprisingly return the user's channels, rather than `root`'s.
Other counterintuitive outcomes can be seen in this PR description:
https://github.com/NixOS/nix/pull/6622
Basically an attempt to resume fixing #5543 for a breakage introduced
earlier[1]. When evaluating an older `nixpkgs` with
`nix-shell`, the following error occurs:
λ ma27 [~] → nix-shell -I nixpkgs=channel:nixos-18.03 -p nix
error: anonymous function at /nix/store/zakqwc529rb6xcj8pwixjsxscvlx9fbi-source/pkgs/top-level/default.nix:20:1 called with unexpected argument 'inNixShell'
at /nix/store/zakqwc529rb6xcj8pwixjsxscvlx9fbi-source/pkgs/top-level/impure.nix:82:1:
81|
82| import ./. (builtins.removeAttrs args [ "system" "platform" ] // {
| ^
83| inherit config overlays crossSystem;
This is a problem because one of the main selling points of Nix is that
you can evaluate any old Nix expression and still get the same result
(which also means that it *still evaluates*). In fact we're deprecating,
but not removing a lot of stuff for that reason such as unquoted URLs[2]
or `builtins.toPath`. However this property was essentially thrown away
here.
The change is rather simple: check if `inNixShell` is specified in the
formals of an auto-called function. This means that
{ inNixShell ? false }:
builtins.trace inNixShell
(with import <nixpkgs> { }; makeShell { name = "foo"; })
will show `trace: true` while
args@{ ... }:
builtins.trace args.inNixShell
(with import <nixpkgs> { }; makeShell { name = "foo"; })
will throw the following error:
error: attribute 'inNixShell' missing
This is explicitly needed because the function in
`pkgs/top-level/impure.nix` of e.g. NixOS 18.03 has an ellipsis[3], but
passes the attribute-set on to another lambda with formals that doesn't
have an ellipsis anymore (hence the error from above). This was perhaps
a mistake, but we can't fix it anymore. This also means that there's
AFAICS no proper way to check if the attr-set that's passed to the Nix
code via `EvalState::autoCallFunction` is eventually passed to a lambda
with formals where `inNixShell` is missing.
However, this fix comes with a certain price. Essentially every
`shell.nix` that assumes `inNixShell` to be passed to the formals even
without explicitly specifying it would break with this[4]. However I think
that this is ugly, but preferable:
* Nix 2.3 was declared stable by NixOS up until recently (well, it still
is as long as 21.11 is alive), so most people might not have even
noticed that feature.
* We're talking about a way shorter time-span with this change being
in the wild, so the fallout should be smaller IMHO.
[1] 9d612c393a
[2] https://github.com/NixOS/rfcs/pull/45#issuecomment-488232537
[3] https://github.com/NixOS/nixpkgs/blob/release-18.03/pkgs/top-level/impure.nix#L75
[4] See e.g. the second expression in this commit-message or the changes
for `tests/ca/nix-shell.sh`.
Overrides for inputs with flake=false were non-sticky, since they
changed the `original` in `flake.lock`. This fixes it, by using the same
locked original for both flake and non-flake inputs.
nixos/nix#6290 introduced a regex pattern to account for tags when
resolving sourcehut refs. nixos/nix#4638 refactored the code,
accidentally treating the pattern as a regular string, causing all
non-HEAD ref resolving to break.
This fixes the regression and adds more test cases to avoid future
breakage.
The manpage for `getgrouplist` says:
> If the number of groups of which user is a member is less than or
> equal to *ngroups, then the value *ngroups is returned.
>
> If the user is a member of more than *ngroups groups, then
> getgrouplist() returns -1. In this case, the value returned in
> *ngroups can be used to resize the buffer passed to a further
> call getgrouplist().
In our original code, however, we allocated a list of size `10` and, if
`getgrouplist` returned `-1`, threw an exception. In practice, this
caused the code to fail for any user belonging to more than 10 groups.
While unusual for single-user systems, large companies commonly have a
huge number of POSIX groups users belong to, causing this issue to crop
up and make multi-user Nix unusable in such settings.
The fix is relatively simple: when `getgrouplist` fails, it stores the
real number of GIDs in `ngroups`, so we must resize our list and retry.
Only then, if it errors once more, do we raise an exception.
This should be backported to, at least, 2.9.x.
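The resize-and-retry pattern described above, as a sketch (Linux `getgrouplist` signature; simplified error handling):

```cpp
#include <grp.h>
#include <pwd.h>
#include <stdexcept>
#include <vector>

std::vector<gid_t> supplementaryGroups(const struct passwd * pw)
{
    int ngroups = 10;                 // initial guess, as in the old code
    std::vector<gid_t> gids(ngroups);
    if (getgrouplist(pw->pw_name, pw->pw_gid, gids.data(), &ngroups) == -1) {
        // The user is in more than `ngroups` groups; getgrouplist() has
        // written the real count into `ngroups`, so resize and retry.
        gids.resize(ngroups);
        if (getgrouplist(pw->pw_name, pw->pw_gid, gids.data(), &ngroups) == -1)
            throw std::runtime_error("failed to get the supplementary group list");
    }
    gids.resize(ngroups);             // `ngroups` now holds the actual count
    return gids;
}
```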
Bring back the possibility to copy CA paths with no reference (like the
outputs of FO derivations or stuff imported at eval time) between stores
that have a different prefix.
If a package's attribute path, description or name contains a match for any of the
regexes specified via `-e` or `--exclude`, that package is excluded from
the final output.
Currently nix-build prints the "printMissing" information by default,
but nix build doesn’t.
People generally don’t notice this because the standard log format of
nix build does not display the printMissing
output long enough to perceive the information.
This addresses https://github.com/NixOS/nix/issues/6561
To quote Eelco in #5867:
> Unfortunately we can't do
>
> evalSettings.pureEval.setDefault(false);
>
> because then we have to do the same in main.cc (where
> pureEval is set to true), and that would allow pure-eval
> to be disabled globally from nix.conf.
Instead, a command should specify that it should be impure by
default. Then, `evalSettings.pureEval` will be set to `false;` unless
it's overridden by e.g. a CLI flag.
In that case it's IMHO OK to be (theoretically) able to override
`pure-eval` via `nix.conf` because it doesn't have an effect on commands
where `forceImpureByDefault` returns `false` (i.e. everything where pure
eval actually matters).
Closes #5867
The git fetcher code used to dereference the (potentially empty) `ref`
input attribute. This was magically working, probably because the
compiler somehow outsmarted us, but is now blowing up with newer nixpkgs
versions.
Fix that by not trying to access this field while we don't know for sure
that it has been defined.
Fix #6554
Without the change llvm build fails on this week's gcc-13 snapshot as:
src/libutil/json.cc: In function 'void nix::toJSON(std::ostream&, const char*, const char*)':
src/libutil/json.cc:33:22: error: 'uint16_t' was not declared in this scope
33 | put(hex[(uint16_t(*i) >> 12) & 0xf]);
| ^~~~~~~~
src/libutil/json.cc:5:1: note: 'uint16_t' is defined in header '<cstdint>'; did you forget to '#include <cstdint>'?
4 | #include <cstring>
+++ |+#include <cstdint>
5 |
This solves the error
error: cannot connect to socket at '/nix/var/nix/daemon-socket/socket': Connection refused
on build farm systems that are loaded but operating normally.
I've seen this happen on an M1 mac running a loaded hercules-ci-agent.
Hercules CI uses multiple worker processes, which may connect to
the Nix daemon around the same time. It's not unthinkable that
the Nix daemon listening process isn't scheduled until after 6
workers try to connect, especially on a system under load with
many workers.
Is the increase safe?
The number is the number of connections that the kernel will buffer
while the listening process hasn't `accept`-ed them yet.
It did not - and will not - restrict the total number of daemon
forks that a client can create.
History
The number 5 has remained unchanged since the introduction in
nix-worker with 0130ef88ea in 2006.
Add a new `file` fetcher type, which will fetch a plain file over
http(s), or from the local filesystem.
Because plain `http(s)://` or `file://` urls can already correspond to
`tarball` inputs (if the path ends with a known archive extension),
the URL parsing logic is a bit convoluted, in that:
- {http,https,file}:// urls will be interpreted as either a tarball or a
file input, depending on the extensions of the path part (so
`https://foo.com/bar` will be a `file` input and
`https://foo.com/bar.tar.gz` as a `tarball` input)
- `file+{something}://` urls will be interpreted as `file` urls (with
the `file+` part removed)
- `tarball+{something}://` urls will be interpreted as `tarball` urls (with
the `tarball+` part removed)
Fix #3785
Co-Authored-By: Tony Olagbaiye <me@fron.io>
If experimental feature "flakes" is enabled, args passed to `nix repl`
will now be considered flake refs and imported using the existing
`:load-flake` machinery.
In addition, `:load-flake` now supports loading flake fragments.
Don’t explicitly give it a constructor, but use aggregate
initialization instead (this also prevents having an implicit coercion, which
is probably good here).
Ensures the logger is stopped on exit in legacy commands. Without this,
when using `nix-build --log-format bar` and stopping nix with CTRL+C,
the bar is not cleared from the screen.
* libexpr: fix builtins.split example
The example was previously indicating that multiple whitespaces would be
collapsed into a single captured whitespace. That isn't true and was
likely a mistake when being documented initially.
* Fix segfault on uninitialized list when printing value
Since lists are just chunks of memory, the individual elements in the
list might be uninitialized when a programming error happens within Nix.
In this case the values are null-initialized (at least with Boehm GC)
and we can avoid a nullptr deref when printing them.
I ran into this issue while ensuring that new expression tests would
show the actual value on an assertion failure.
This is unlikely to cause any runtime performance regressions as
printing values is not really in the hot path (unless the repl is the
primary use case).
* Add operator<< for ValueTypes
* Add libexpr tests
This introduces tests for libexpr that evaluate various trivial Nix
language expressions and primop invocations that should be good smoke
tests for whether or not the implementation is behaving as expected.
Since a26be9f3b8, the same parser is used
to parse the result of sourcehut’s `HEAD` endpoint (coming from [git
dumb protocol]) and the output of `git ls-remote`. However, they are very
slightly different (the former doesn’t specify the current reference
since it’s implied to be `HEAD`).
Unify both, and make the parser a bit more robust and understandable (by
making it more typed and adding tests for it)
[git dumb protocol]: https://git-scm.com/book/en/v2/Git-Internals-Transfer-Protocols#_the_dumb_protocol
I'm afraid I missed a few problematic `git(1)`-calls while implementing
PR #6440, sorry for that! Upon investigating what went wrong, I realized
that I only tested against the "cached"-case by accident because my
git-checkout with my system's flake was apparently cached during my
debugging.
I managed to trigger the original issue again by running:
$ git commit --allow-empty -m "tmp"
$ sudo nixos-rebuild switch --flake .# -L --builders ''
Since `repoDir` points to the checkout that's potentially owned by
another user, I decided to add `--git-dir` to each call affecting
`repoDir`.
Since the `tmpDir` for the temporary submodule-checkout is created by
Nix itself, it doesn't seem to be an issue.
Sorry for that, it should be fine now.
The previous head caching implementation stored two paths in the local
cache; one for the cached git repo and another textfile containing the
resolved HEAD ref. This commit instead stores the resolved HEAD by
setting the HEAD ref in the local cache appropriately.
A mips64el Linux MIPS kernel can execute userspace code using any of
three ABIs:
mips64el-linux-*abin64
mips64el-linux-*abin32
mipsel-linux-*
The first of these is the native 64-bit ABI, and the only ABI with
64-bit pointers; this is sometimes called "n64". The last of these is
the old legacy 32-bit ABI, whose binaries can execute natively on
32-bit MIPS hardware; this is sometimes called "o32".
The second ABI, "n32" is essentially the 64-bit ABI with 32-bit
pointers and address space. Hardware 64-bit integer/floating
arithmetic is still allowed, as well as the much larger mips64
register set and more-efficient calling convention.
Let's enable seccomp filters for all of these. Likewise for big
endian (mips64-linux-*).
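A hedged sketch of what enabling those ABIs in a libseccomp filter can look like (`seccomp_arch_add` and the `SCMP_ARCH_*` constants come from libseccomp; the wrapper function itself is illustrative):

```cpp
#include <cerrno>
#include <cstdint>
#include <seccomp.h>
#include <stdexcept>

void addMipsAbis(scmp_filter_ctx ctx)
{
    const uint32_t archs[] = {
        SCMP_ARCH_MIPSEL64,    // little-endian n64
        SCMP_ARCH_MIPSEL64N32, // little-endian n32
        SCMP_ARCH_MIPSEL,      // little-endian o32
        SCMP_ARCH_MIPS64,      // and the big-endian counterparts
        SCMP_ARCH_MIPS64N32,
        SCMP_ARCH_MIPS,
    };
    for (uint32_t arch : archs) {
        int r = seccomp_arch_add(ctx, arch);
        // -EEXIST just means this is already the filter's native arch.
        if (r != 0 && r != -EEXIST)
            throw std::runtime_error("cannot add architecture to seccomp filter");
    }
}
```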
'nix profile install' will now install all outputs listed in the
package's meta.outputsToInstall attribute, or all outputs if that
attribute doesn't exist. This makes it behave consistently with
nix-env. Fixes #6385.
Furthermore, for consistency, all other 'nix' commands do this as
well. E.g. 'nix build' will build and symlink the outputs in
meta.outputsToInstall, defaulting to all outputs. Previously, it only
built/symlinked the first output. Note that this means that selecting
a specific output using attrpath selection (e.g. 'nix build
nixpkgs#libxml2.dev') no longer works. A subsequent PR will add a way
to specify the desired outputs explicitly.
after #6218 `Symbol` no longer confers a uniqueness invariant on the
string it wraps, it is now possible to create multiple symbols that
compare equal but whose string contents have different addresses. this
guarantee is now only provided by `SymbolIdx`, leaving `Symbol` only as
a string wrapper that knows about the intricacies of how symbols need to
be formatted for output.
this change renames `SymbolIdx` to `Symbol` to restore the previous
semantics of `Symbol` to that name. we also keep the wrapper type and
rename it to `SymbolStr` instead of returning plain strings from lookups
into the symbol table because symbols are formatted for output in many
places. theoretically we do not need `SymbolStr`, only a function that
formats a string for output as a symbol, but having to wrap every symbol
that appears in a message into eg `formatSymbol()` is error-prone and
inconvenient.
The `--git-dir=` must be `.` in some cases (for cached repos that are
"bare" repos in `~/.cache/nix/gitv3`). With this fix we can add
`--git-dir` to each `git`-invocation needed for `nixos-rebuild`.
To demonstrate the problem:
* You need a `git` at 2.33.3 in your $PATH
* An expression like this in a git repository:
``` nix
{
outputs = { self, nixpkgs }: {
packages.foo.x86_64-linux = with nixpkgs.legacyPackages.x86_64-linux;
runCommand "snens" { } ''
echo ${(builtins.fetchGit ./.).lastModifiedDate} > $out
'';
};
}
```
Now, when instantiating the package via `builtins.getFlake`, it fails on
Nix 2.7 like this:
$ nix-instantiate -E '(builtins.getFlake "'"$(pwd)"'").packages.foo.x86_64-linux'
fatal: unsafe repository ('/nix/store/a7j3125km4h8l0p71q6ssfkxamfh5d61-source' is owned by someone else)
To add an exception for this directory, call:
git config --global --add safe.directory /nix/store/a7j3125km4h8l0p71q6ssfkxamfh5d61-source
error: program 'git' failed with exit code 128
(use '--show-trace' to show detailed location information)
This breaks e.g. `nixops`-deployments using flakes with similar
expressions as shown above.
The cause for this is that `git(1)` tries to find the highest
`.git`-directory in the directory tree and if it finds such a
directory, but with another owning user (root vs. the user who evaluates
the expression), it fails as above. This was changed recently to fix
CVE-2022-24765[1].
By explicitly specifying `--git-dir`, Git assumes to be in the top-level
directory and doesn't attempt to look for a `.git`-directory in the
parent directories and thus the code-path leading to said error is never
reached.
[1] https://lore.kernel.org/git/xmqqv8veb5i6.fsf@gitster.g/
The produced path is then allowed be imported or utilized elsewhere:
```
assert (43 == import (builtins.toFile "source" "43")); "good"
```
This will still fail on write-only stores.
with position and symbol tables in place we can now shrink Attr by a full
pointer with some simple field reordering. since Attr is a very hot struct this
has substantial impact on memory use, decreasing GC allocations and heap size by
10-15% each. we also get a ~15% performance improvement due to reduced GC
loading.
pure parsing has taken a hit over the branch base because positions are now
slightly more expensive to create, but overall we get a noticeable improvement.
before (on memory-friendliness):
Benchmark 1: nix search --no-eval-cache --offline ../nixpkgs hello
Time (mean ± σ): 6.960 s ± 0.028 s [User: 5.832 s, System: 0.897 s]
Range (min … max): 6.886 s … 7.005 s 20 runs
Benchmark 2: nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 328.1 ms ± 1.7 ms [User: 295.8 ms, System: 32.2 ms]
Range (min … max): 324.9 ms … 331.2 ms 20 runs
Benchmark 3: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.688 s ± 0.029 s [User: 2.365 s, System: 0.238 s]
Range (min … max): 2.642 s … 2.742 s 20 runs
after:
Benchmark 1: nix search --no-eval-cache --offline ../nixpkgs hello
Time (mean ± σ): 6.902 s ± 0.039 s [User: 5.844 s, System: 0.783 s]
Range (min … max): 6.820 s … 6.956 s 20 runs
Benchmark 2: nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 330.7 ms ± 2.2 ms [User: 300.6 ms, System: 30.0 ms]
Range (min … max): 327.5 ms … 334.5 ms 20 runs
Benchmark 3: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.330 s ± 0.027 s [User: 2.040 s, System: 0.234 s]
Range (min … max): 2.272 s … 2.383 s 20 runs
this slightly increases the amount of memory used for any given symbol, but this
increase is more than made up for if the symbol is referenced more than once in
the EvalState that holds it. on average every symbol should be referenced at
least twice (once to introduce a binding, once to use it), so we expect no
increase in memory on average.
symbol tables are limited to 2³² entries like position tables, and similar
arguments apply to why overflow is not likely: 2³² symbols would require as many
string instances (at 24 bytes each) and map entries (at 24 bytes or more each,
assuming that the map holds on average at most one item per bucket as the docs
say). a full symbol table would require at least 192GB of memory just for
symbols, which is well out of reach. (an ofborg eval of nixpkgs today creates
less than a million symbols!)
PosTable deduplicates origin information, so using symbols for paths is no
longer necessary. moving away from path Symbols also reduces the usage of
symbols for things that are not keys in attribute sets, which will become
important in the future when we turn symbols into indices as well.
Pos objects are somewhat wasteful as they duplicate the origin file name and
input type for each object. on files that produce more than one Pos when parsed
this is a sizeable waste of memory (one pointer per Pos). the same goes for
ptr<Pos> on 64 bit machines: parsing enough source to require 8 bytes to locate
a position would need at least 8GB of input and 64GB of expression memory. it's
not likely that we'll hit that any time soon, so we can use a uint32_t index to
locate positions instead.
when we introduce position and symbol tables we'll need to do lookups to turn
indices into those tables into actual positions/symbols. having the error
functions as members of EvalState will avoid a lot of churn for adding lookups
into the tables for each caller.
only file and line of the returned position were ever used, it wasn't actually
used as a position. as such we may as well use a path+int pair for only those two
values and remove a use of Pos that would not work well with a position table.
a future commit will remove the ability to convert the symbol type used in
bindings to strings. since we only have two users we can inline the error check.
the only use of this function is to determine whether a lambda has a non-set
formal, but this use is arguably better served by Symbol::set and using a
non-Symbol instead of an empty symbol in the parser when no such formal is present.
we don't *need* symbols here. the only advantage they have over strings is
making call-counting slightly faster, but that's a diagnostic feature and thus
needn't be optimized.
this also fixes a move bug that previously didn't show up: PrimOp structs were
accessed after being moved from, which technically invalidates them. previously
the names remained valid because Symbol copies on move, but strings are
invalidated. we now copy the entire primop struct instead of moving since primop
registrations happen once and are not performance-sensitive.
Without the change any CA deletion triggers linear scan on large
RealisationsRefs table:
sqlite>.eqp full
sqlite> delete from RealisationsRefs where realisationReference IN ( select id from Realisations where outputPath = 1234567890 );
QUERY PLAN
|--SCAN RealisationsRefs
`--LIST SUBQUERY 1
`--SEARCH Realisations USING COVERING INDEX IndexRealisationsRefsOnOutputPath (outputPath=?)
With the change it gets turned into a lookup:
sqlite> CREATE INDEX IndexRealisationsRefsRealisationReference on RealisationsRefs(realisationReference);
sqlite> delete from RealisationsRefs where realisationReference IN ( select id from Realisations where outputPath = 1234567890 );
QUERY PLAN
|--SEARCH RealisationsRefs USING INDEX IndexRealisationsRefsRealisationReference (realisationReference=?)
`--LIST SUBQUERY 1
`--SEARCH Realisations USING COVERING INDEX IndexRealisationsRefsOnOutputPath (outputPath=?)
If the derivation `foo` depends on `bar`, and they both have the same
output path (because they are CA derivations), then this output path
will depend both on the realisation of `foo` and of `bar`, which
themselves depend on each other.
This confuses SQLite which isn’t able to automatically solve this
diamond dependency scheme.
Help it by adding a trigger to delete all the references between the
relevant realisations.
Fix #5320
Otherwise the clang builds fail because the constructor of `SQLiteBusy`
inherits it, `SQLiteError::_throw` tries to call it, which fails.
Strangely, gcc works fine with it. Not sure what the correct behavior is
and who is buggy here, but either way, making it public is at the worst
a reasonable workaround
Don’t say that the derivation is CA as it might happen on a non-ca
derivation too.
Technically we could always recover _something_ for a purely
input-addressed derivation (like we already do when the `ca-derivations`
xp feature isn’t enabled), but it seems better to consistently fail −
the end-result wouldn’t really make sense anyways in most cases.
This ensures that use-sites properly trigger new monomorphisations on
one hand, and on the other hand keeps the main `sqlite.hh` clean and
interface-only. I think that is good practice in general, but in this
situation in particular we do indeed have `sqlite.hh` users that don't
need the `throw_` function.
nix show-config --json was serializing experimental features as ints.
nlohmann::json will automatically use these definitions to serialize
and deserialize ExperimentalFeatures.
Strictly, we don't use the from_json instance yet, it's provided for
completeness and hopefully future use.
Requested by ppepino on the Matrix:
https://matrix.to/#/!KqkRjyTEzAGRiZFBYT:nixos.org/$Tb32BS3rVE2BSULAX4sPm0h6CDewX2hClOTGzTC7gwM?via=nixos.org&via=matrix.org&via=nixos.dev
This adds a new command, :bl, which works like :b but also creates
a GC root symlink to the various derivation outputs.
ckie@cookiemonster ~/git/nix -> ./outputs/out/bin/nix repl
Welcome to Nix 2.6.0. Type :? for help.
nix-repl> :l <nixpkgs>
Added 16118 variables.
nix-repl> :b runCommand "hello" {} "echo hi > $out"
This derivation produced the following outputs:
./repl-result-out -> /nix/store/kidqq2acdpi05c4a9mlbg2baikmzik44-hello
[1 built, 0.0 MiB DL]
ckie@cookiemonster ~/git/nix -> cat ./repl-result-out
hi
In particular, this means that 'nix eval` (which uses toValue()) no
longer auto-calls functions or functors (because
AttrCursor::findAlongAttrPath() doesn't).
Fixes #6152.
Also use ref<> in a few places, and don't return attrpaths from
getCursor() because cursors already have a getAttrPath() method.
Previously it only logged the builder's path; this changes it to log the
arguments at the same log level, and the environment variables at the
vomit level.
This helped me debug https://github.com/svanderburg/node2nix/issues/75
This was a problem when writing a fetcher that uses e.g. sha256 hashes
for revisions. This doesn't actually do anything new, but allows for
creating such fetchers in the future (perhaps when support for Git's
SHA256 object format gains more popularity).
The filter expects all paths to have a prefix of the raw `actualUrl`, but
`Store::addToStore(...)` provides absolute canonicalized paths.
To fix this create an absolute and canonicalized path from the `actualUrl` and
use it instead.
Fixes #6195.
This was caused by SubstitutionGoal not setting the errorMsg field in
its BuildResult. We now get a more descriptive message than in 2.7.0, e.g.
error: path '/nix/store/13mh...' is required, but there is no substituter that can build it
instead of the misleading (since there was no build)
error: build of '/nix/store/13mh...' failed
Fixes #6295.
Saving the cwd fd didn't actually work well -- prior to this commit, the
following would happen:
: ~/w/vc/nix ; doas outputs/out/bin/nix --experimental-features 'nix-command flakes' run nixpkgs#coreutils -- --coreutils-prog=pwd
pwd: couldn't find directory entry in ‘../../../..’ with matching i-node
: ~/w/vc/nix ; doas outputs/out/bin/nix --experimental-features 'nix-command flakes' develop -c pwd
pwd: couldn't find directory entry in ‘../../../..’ with matching i-node
This doesn't work very well (maybe I'm misunderstanding the desired
implementation):
: ~/w/vc/nix ; doas outputs/out/bin/nix --experimental-features 'nix-command flakes' develop -c pwd
pwd: couldn't find directory entry in ‘../../../..’ with matching i-node
I regularly pass around simple scripts by using nix-shell as the script
interpreter, eg. like this:
#!/usr/bin/env nix-shell
#!nix-shell -p dd_rescue coreutils bash -i bash
While this works most of the time, I recently had one occasion where it
would not and the above would result in the following:
$ sudo ./myscript.sh
bash: ./myscript.sh: No such file or directory
Note the "sudo" here, because this error only occurs if we're root.
The reason for the latter is that running Nix as root means that we
can directly access the store, in which case Nix uses a filesystem
namespace to make the store writable.
So when stracing the process, I stumbled on the following sequence:
openat(AT_FDCWD, "/proc/self/ns/mnt", O_RDONLY) = 3
unshare(CLONE_NEWNS) = 0
... later ...
getcwd("/the/real/cwd", 4096) = 14
setns(3, CLONE_NEWNS) = 0
getcwd("/", 4096) = 2
In the whole strace output there are no calls to chdir() whatsoever, so
I decided to look into the kernel source to see what else could change
directories and found this[1]:
/* Update the pwd and root */
set_fs_pwd(fs, &root);
set_fs_root(fs, &root);
The set_fs_pwd() call is roughly equivalent to a chdir() syscall and
this is called when the setns() syscall is invoked[2].
[1]: b14ffae378/fs/namespace.c (L4659)
[2]: b14ffae378/kernel/nsproxy.c (L346)
Impure derivations are derivations that can produce a different result
every time they're built. Example:
stdenv.mkDerivation {
name = "impure";
__impure = true; # marks this derivation as impure
outputHashAlgo = "sha256";
outputHashMode = "recursive";
buildCommand = "date > $out";
};
Some important characteristics:
* This requires the 'impure-derivations' experimental feature.
* Impure derivations are not "cached". Thus, running "nix-build" on
the example above multiple times will cause a rebuild every time.
* They are implemented similar to CA derivations, i.e. the output is
moved to a content-addressed path in the store. The difference is
that we don't register a realisation in the Nix database.
* Pure derivations are not allowed to depend on impure derivations. In
the future fixed-output derivations will be allowed to depend on
impure derivations, thus forming an "impurity barrier" in the
dependency graph.
* When sandboxing is enabled, impure derivations can access the
network in the same way as fixed-output derivations. In relaxed
sandboxing mode, they can access the local filesystem.
The return value of BaseError::addTrace(...) is never used and
error-prone as subclasses calling it will return a BaseError instead of
the subclass.
This commit changes its return value to be void.
Rather than having four different but very similar types of hashes, make
only one, with a tag indicating whether it corresponds to a regular or
deferred derivation.
This implies a slight logical change: The original Nix+multiple-outputs
model assumed only one hash-modulo per derivation. Adding
multiple-outputs CA derivations changed this as these have one
hash-modulo per output. This change is now treating each derivation as
having one hash modulo per output.
This obviously means that we internally lose the guarantee that
all the outputs of input-addressed derivations have the same hash
modulo. But it turns out that it doesn’t matter because there’s nothing
in the code taking advantage of that fact (and it probably shouldn’t
anyway).
The upside is that it is now much easier to work with these hashes, and
we can get rid of a lot of useless `std::visit{ overloaded`.
Co-authored-by: John Ericson <John.Ericson@Obsidian.Systems>
Before this change, processLine always used the first character
as the start of the line. This caused whitespace to matter at the
beginning of the line, whereas it does not matter anywhere else.
This commit trims leading white spaces of the string line so that
subsequent operations can be performed on the string without explicitly
tracking starting and ending indices of the string.
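A minimal sketch of the kind of leading-whitespace trim described above
(not the exact helper used in the repl code):
```c++
#include <string_view>

// Return the line with any leading blanks removed, so that command
// dispatch can look at the first non-blank character.
std::string_view trimLeadingWhitespace(std::string_view line)
{
    auto pos = line.find_first_not_of(" \t\r\n");
    return pos == std::string_view::npos ? std::string_view{} : line.substr(pos);
}
```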
This avoids an infinite loop in the final test in
tests/binary-cache.sh. I think this was only not triggered previously
by accident (because we were clearing wantedOutputs in between).
LocalStore::addToStore() since
79ae9e4558 expects a regular NAR hash,
rather than a NAR hash modulo self-references. Fixes #6300.
Also, makeContentAddressed() now rewrites the entire closure (so 'nix
store make-content-addressable' no longer needs '-r'). See #6301.
This allows closures to be imported at evaluation time, without
requiring the user to configure substituters. E.g.
builtins.fetchClosure {
storePath = /nix/store/f89g6yi63m1ywfxj96whv5sxsm74w5ka-python3.9-sqlparse-0.4.2;
from = "https://cache.ngi0.nixos.org";
}
Before the change, lexer errors did not report the location:
$ nix build -f. mc
error: path has a trailing slash
(use '--show-trace' to show detailed location information)
Note that it's not clear what file generates the error.
After the change location is reported:
$ src/nix/nix --extra-experimental-features nix-command build -f ~/nm mc
error: path has a trailing slash
at .../pkgs/development/libraries/glib/default.nix:54:18:
53| };
54| src = /tmp/foo/;
| ^
55|
(use '--show-trace' to show detailed location information)
Here we see both problematic file and the string itself.
1. `DerivationOutput` now has the `std::variant` as a base class, and the
variants are given hierarchical names under `DerivationOutput`.
In 8e0d0689be @matthewbauer and I
didn't know a better idiom, and so we made it a field. But this sort
of "newtype" is annoying for literals downstream.
Since then we learned the base-class, inherit-the-constructors trick,
e.g. used in `DerivedPath`. Switching to use that makes this more
ergonomic and consistent (see the sketch after this list).
2. `store-api.hh` and `derivations.hh` are now independent.
In bcde5456cc I swapped the dependency,
but I now know it is better to just keep on using incomplete types as
much as possible for faster compilation and good separation of
concerns.
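A minimal sketch of the idiom referred to in point 1, with illustrative
stand-in types rather than the real `DerivationOutput` variants:
```c++
#include <iostream>
#include <string>
#include <type_traits>
#include <variant>

struct InputAddressed { std::string path; };
struct Deferred { };

// The variant is the base class; inheriting its constructors keeps literals terse.
struct OutputKind : std::variant<InputAddressed, Deferred>
{
    using Raw = std::variant<InputAddressed, Deferred>;
    using Raw::Raw;
    const Raw & raw() const { return *this; }
};

int main()
{
    OutputKind o = InputAddressed{"/nix/store/example"};
    std::visit([](auto && v) {
        if constexpr (std::is_same_v<std::decay_t<decltype(v)>, InputAddressed>)
            std::cout << v.path << "\n";
        else
            std::cout << "deferred\n";
    }, o.raw());
}
```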
Before the change the garbage collector was not considering
`.drv` files and outputs as alive even if the configuration says otherwise.
As a result `nix store gc --dry-run` could visit (and parse)
`.drv` files multiple times (in the worst case it's quadratic).
It happens because the `alive` set was populated with only the runtime
closure, without regard for the actual configuration. The change fixes it.
Benchmark: my system has about 139MB of `.drv` files, around 40K of them.
Performance before the change:
$ time nix store gc --dry-run
real 4m22,148s
Performance after the change:
$ time nix store gc --dry-run
real 0m14,178s
Don’t assume that we know the output paths when we’ve just built
with `--dry-run`. Instead, make `--dry-run` follow a different code path
that won’t assume knowledge of the output paths at all.
Fix #6275
The current `--out-path` flag has two disadvantages when one is only
concerned with querying the names of outputs:
- it requires evaluating every output's `outPath`, which takes
significantly more resources and runs into more failures
- it destroys the information of the order of outputs so we can't tell
which one is the main output
This patch makes the output names always present (replacing paths with
`null` in JSON if `--out-path` isn't given), and adds an `outputName`
field.
When importing e.g. a local `nixpkgs` in a flake to test a change like
{
inputs.nixpkgs.url = path:/home/ma27/Projects/nixpkgs;
outputs = /* ... */
}
then the input is missing a `lastModified`-field that's e.g. used in
`nixpkgs.lib.nixosSystem`. Due to the missing `lastModified`-field, the
mtime is set to 19700101:
result -> /nix/store/b7dg1lmmsill2rsgyv2w7b6cnmixkvc1-nixos-system-nixos-22.05.19700101.dirty
With this change, the `path`-fetcher now sets a `lastModified` attribute
to the `mtime` just like it's the case in the `tarball`-fetcher already.
When building NixOS systems with `nixpkgs` being a `path`-input and this
patch, the output-path now looks like this:
result -> /nix/store/ld2qf9c1s98dxmiwcaq5vn9k5ylzrm1s-nixos-system-nixos-22.05.20220217.dirty
Before the change on a system with `auto-optimise-store = true`:
$ nix store gc --verbose --max 1
deleted all the paths instead of one path (we requested 1 byte limit).
It happens because every file in `auto-optimise-store = true` mode has at
least 2 links: the file itself and a link in the /nix/store/.links/ directory.
The change conservatively assumes that any file that has one (as before)
or two links (assuming auto-optimise mode) will free space.
Co-authored-by: Sandro <sandro.jaeckel@gmail.com>
help new users find a solution to their problem
./result/bin/nix-env -qa hello
warning: name collision in input Nix expressions, skipping '/home/artturin/.nix-defexpr/channels_root/master'
suggestion: remove 'master' from either the root channels or the user channels
hello-2.12
hello-2.12
This change was taken from dynamic derivations (#4628). It somewhat
undoes the refactors I first did for floating CA derivations, as the
benefit of hindsight + the requirements of dynamic derivations made me
reconsider some things.
They aren't too consequential, but I figured they might be good to land
first, before the more profound changes @thufschmitt has in the works.
Continue progress on #5729.
Just as I hoped, this uncovered an issue: the daemon protocol is missing
a way to query build logs. This doesn't affect `unix://`, but does
affect `ssh://`. A FIXME is left for this, so we can come back to it later.
no need for function<> with c++17 deduction. this saves allocations and virtual
calls, but has the same semantics otherwise. not going through function has the
side effect of giving compilers more insight into the cleanup code, so we need a
few local warning disables.
reduces peak heap memory use on eval of our test system from 264.4MB to 242.3MB,
possibly also a slight performance boost.
theoretically memory use could be cut down by another eight bytes per Pos on
average by turning it into a tuple containing an index into a global base
position table with row and column offsets, but that doesn't seem worth the
effort at this point.
This function is like buildPaths(), except that it returns a vector of
BuildResults containing the exact statuses and output paths of each
derivation / substitution. This is convenient for functions like
Installable::build(), because they then don't need to do another
series of calls to get the outputs of CA derivations. It's also a
precondition to impure derivations, where we *can't* query the output
of those derivations since they're not stored in the Nix database.
Note that PathSubstitutionGoal can now also return a BuildStatus.
```console
$ nix eval --expr '({ foo ? 1 }: foo) { fob = 2; }'
error: anonymous function at (string):1:2 called with unexpected argument 'fob'
at «string»:1:1:
1| ({ foo ? 1 }: foo) { fob = 2; }
| ^
Did you mean foo?
```
Note that because Nix will first check for _missing_ arguments before
checking for extra arguments, `({ foo }: foo) { fob = 1; }` will
complain about the missing `foo` argument (rather than extra `fob`) and
so won’t display a suggestion.
Make the evaluator show some suggestions when trying to access an
invalid field from an attrset.
```console
$ nix eval --expr '{ foo = 1; }.foa'
error: attribute 'foa' missing
at «string»:1:1:
1| { foo = 1; }.foa
| ^
Did you mean foo?
```
No real need for keeping a separate header for such a simple class.
This requires changing a bit `OrSuggestions<T>::operator*` to not throw
an `Error` to prevent a cyclic dependency. But since this error is only
thrown on programmer error, we can replace the whole method by a direct
call to `std::get` which will raise its own assertion if needs be.
Refactor the `size == 0` logic into a new helper function that
replaces dupStringWithLen.
The name had to change, because unlike a `dup`-function, it does
not always allocate a new string.
Allows completing `nix build ~/flake#<Tab>`.
We can implement expansion for `~user` later if needed.
Not using wordexp(3) since that expands way too much.
Starts progress on #5729.
The idea is that we should not have these default methods throwing
"unimplemented". This is a small step in that direction.
I kept `addTempRoot` because it is a no-op, rather than failure. Also,
as a practical matter, it is called all over the place, while doing
other tasks, so the downcasting would be annoying.
Maybe in the future I could move the "real" `addTempRoot` to `GcStore`,
and the existing usecases use a `tryAddTempRoot` wrapper to downcast or
do nothing, but I wasn't sure whether that was a good idea so with a
bias to less churn I didn't do it yet.
Setting the `_NIX_FORCE_HTTP` environment variable is supposed to force `file://` store urls to use the `HttpBinaryCacheStore` implementation rather than the `LocalBinaryCacheStore` one (very useful for testing).
However because of a name mismatch, the `LocalBinaryCacheStore` was still registering the `file` scheme when this variable was set, meaning that the actual store implementation picked up on `file://` uris was dependent on the registration order of the stores (itself dependent on the link order of the object files).
Fix this by making the `LocalBinaryCacheStore` gracefully not register the `file` uri scheme when the variable is set.
We now memoize on Bindings / list element vectors rather than Values,
so that e.g. two Values that point to the same Bindings will be
printed only once.
This is useful whenever we want to evaluate something to a store path
(e.g. in get-drvs.cc).
Extracted from the lazy-trees branch (where we can require that a
store path must come from a store source tree accessor).
This was introduced in #6174. However fetch{url,Tarball} are legacy
and we shouldn't have an undocumented attribute that does the same
thing as one that already exists ('sha256').
Starting work on #5638
The exact boundary between `FetchSettings` and `EvalSettings` is not
clear to me, but that's fine. First let's clean out `libstore`, and then
worry about what, if anything, should be the separation between those
two.
To avoid JSON messages being parsed twice in case of
remote builds with `ssh-ng://`, I split up the original
`handleJSONLogMessage` into three parts (see the sketch after this list):
* `parseJSONMessage(const std::string&)` checks if it's a message in the
form of `@nix {...}` and tries to parse it (and prints an error if the
parsing fails).
* `handleJSONLogMessage(nlohmann::json&, ...)` reads the fields from the
message and passes them to the logger.
* `handleJSONLogMessage(const std::string&, ...)` behaves as before, but
uses the two functions mentioned above as implementation.
In case of `ssh-ng://`-logs the first two methods are invoked manually.
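As referenced above, a rough sketch of the first part (recognizing and
parsing `@nix {...}` messages), assuming nlohmann::json; the error
reporting here is simplified and not the real logger interface:
```c++
#include <nlohmann/json.hpp>
#include <iostream>
#include <optional>
#include <string>

// Recognize "@nix {...}" lines and parse the JSON payload, returning
// nothing (and reporting the error) if the payload is malformed.
std::optional<nlohmann::json> parseJSONMessage(const std::string & line)
{
    if (line.rfind("@nix ", 0) != 0) return std::nullopt;   // not a structured message
    try {
        return nlohmann::json::parse(line.substr(5));
    } catch (nlohmann::json::exception & e) {
        std::cerr << "bad JSON log message: " << e.what() << "\n";
        return std::nullopt;
    }
}
```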
Right now when building a derivation remotely via
$ nix build -j0 -f . hello -L --builders 'ssh://builder'
it's possible later to read through the entire build-log by running
`nix log -f . hello`. This isn't possible however when using `ssh-ng`
rather than `ssh`.
The reason for that is that there are two different ways to transfer
logs in Nix through e.g. an SSH tunnel (that are used by `ssh`/`ssh-ng`
respectively):
* `ssh://` receives its logs from the fd pointing to `builderOut`. This
is directly passed to the "log-sink" (and to the logger on each `\n`),
hence `nix log` works here.
* `ssh-ng://` however expects JSON-like messages (i.e. `@nix {log data
in here}`) and passes it directly to the logger without doing anything
with the `logSink`. However it's certainly possible to extract
log-lines from this format as these have their own message-type in the
JSON payload (i.e. `resBuildLogLine`).
This is basically what I changed in this patch: if the code-path for
`builderOut` is not reached and a `logSink` is initialized, the
message was successfully processed by the JSON logger (i.e. it's in
the expected format) and the line is of the expected type (i.e.
`resBuildLogLine`), the line will be written to the log-sink as well.
Closes #5079
diff-index operates on the view that git has of the working tree,
which might be outdated. The higher-level diff command does this
automatically. This change also adds handling for submodules.
fixes #4140
Alternative fixes would be invoking update-index before diff-index or
matching more closely what require_clean_work_tree from git-sh-setup.sh
does, but both those options make it more difficult to reason about
correctness.
The .git/refs/heads directory might be empty for a valid
usable git repository. This often happens in CI environments,
which might only fetch commits, not branches.
Therefore instead we let git itself check if HEAD points to
something that looks like a commit.
fixes #5302
On Nix 2.6 the output of `nix why-depends --all` seems to be somewhat
off:
$ nix why-depends /nix/store/kn47hayxab8gc01jhr98dwyywbx561aq-nixos-system-roflmayr-21.11.20220207.6c202a9.drv /nix/store/srn5jbs1q30jpybdmxqrwskyny659qgc-nix-2.6.drv --derivation --extra-experimental-features nix-command --all
/nix/store/kn47hayxab8gc01jhr98dwyywbx561aq-nixos-system-roflmayr-21.11.20220207.6c202a9.drv
└───/nix/store/g8bpgfjhh5vxrdq0w6r6s64f9kkm9z6c-etc.drv
│ └───/nix/store/hm0jmhp8shbf3cl846a685nv4f5cp3fy-nspawn-inst.drv
| [...]
└───/nix/store/2d6q3ygiim9ijl5d4h0qqx6vnjgxywyr-system-units.drv
└───/nix/store/dil014y1b8qyjhhhf5fpaah5fzdf0bzs-unit-systemd-nspawn-hydra.service.drv
└───/nix/store/a9r72wwx8qrxyp7hjydyg0gsrwnn26zb-activate.drv
└───/nix/store/99hlc7i4gl77wq087lbhag4hkf3kvssj-nixos-system-hydra-21.11pre-git.drv
Please note that `[...]-system-units.drv` is supposed to be a direct
child of `[...]-etc.drv`.
The reason for that is that each new level printed by `printNode` is
four spaces off in comparison to `nix why-depends --precise` because the
recursive `printNode()` only prints the path and not the `tree*`-chars in
the case of `--precise` and in this format the path is four spaces further
indented, i.e. on a newline, but on the same level as the path's children, i.e.
/nix/store/kn47hayxab8gc01jhr98dwyywbx561aq-nixos-system-roflmayr-21.11.20220207.6c202a9.drv
└───/: …1-p8.drv",["out"]),("/nix/store/g8bpgfjhh5vxrdq0w6r6s64f9kkm9z6c-etc.drv",["out"]),("/nix/store/…
→ /nix/store/g8bpgfjhh5vxrdq0w6r6s64f9kkm9z6c-etc.drv
As you can see `[...]-etc.drv` is a direct child of the root, but four
spaces indented. This logic was directly applied to the code-path with
`precise=false` which resulted in `tree*` being printed four spaces too
deep.
In case of no `--precise`, `hits[hash]` is empty and the path itself
should be printed rather than hits using the same logic as for `hits[hash]`.
With this fix, the output looks correct now:
/nix/store/kn47hayxab8gc01jhr98dwyywbx561aq-nixos-system-roflmayr-21.11.20220207.6c202a9.drv
└───/nix/store/g8bpgfjhh5vxrdq0w6r6s64f9kkm9z6c-etc.drv
├───/nix/store/hm0jmhp8shbf3cl846a685nv4f5cp3fy-nspawn-inst.drv
| [...]
└───/nix/store/2d6q3ygiim9ijl5d4h0qqx6vnjgxywyr-system-units.drv
└───/nix/store/dil014y1b8qyjhhhf5fpaah5fzdf0bzs-unit-systemd-nspawn-hydra.service.drv
└───/nix/store/a9r72wwx8qrxyp7hjydyg0gsrwnn26zb-activate.drv
└───/nix/store/99hlc7i4gl77wq087lbhag4hkf3kvssj-nixos-system-hydra-21.11pre-git.drv
This changes the representation of the interrupt callback list to
be safe to use during interrupt handling.
Holding a lock while executing arbitrary functions is something to
avoid in general, because of the risk of deadlock.
Such a deadlock occurs in https://github.com/NixOS/nix/issues/3294
where ~CurlDownloader tries to deregister its interrupt callback.
This happens during what seems to be a triggerInterrupt() by the
daemon connection's MonitorFdHup thread. This bit I can not confirm
based on the stack trace though; it's based on reading the code,
so no absolute certainty, but a smoking gun nonetheless.
previously :a would override old bindings of a name with new values if the added
set contained names that were already bound. in nix 2.6 this doesn't happen any
more, which is potentially confusing.
fixes #6041
At this point, we don’t know if the input is a flake or not. So, we
should allow the user to override the input with a directory without a
flake.nix.
Ideally, we could figure out whether the input was originally a flake or
not, but that would require instantiating the whole flake. So just
allow it to be missing here, and rely on checks later on to verify the
input for us.
Bundlers are now responsible for correctly handling their inputs which
are no longer constrained to be (Drv->Drv)->Drv->Drv, but can be of
type (attrset->Drv)->attrset->Drv.
we'll retain the old coerceToString interface that returns a string, but callers
that don't need the returned value to outlive the Value it came from can save
copies by using the new interface instead. for values that weren't stringy we'll
pass a new buffer argument that'll be used for storage and shouldn't be
inspected.
It’s totally valid to have entries in `NIX_PATH` that aren’t valid paths
(they can even be arbitrary urls or `channel:<channel-name>`).
Fix #5998 and #5980
keeping it as a simple data member means it won't be scanned by the GC, so
eventually the GC will collect a cache that is still referenced (resulting in
use-after-free of cache elements).
fixes #5962
- Make passing the position to `forceValue` mandatory;
this way we remind people that the position is
important for better error messages
- Add pos to all `forceValue` calls
If we want to be careful about hitting the stack protector page, we should use `-fstack-check` instead.
Co-authored-by: Eelco Dolstra <edolstra@gmail.com>
This no longer worked correctly because 'path' is uninitialised when
an exception occurs, leading to errors like
… while importing ''
at /nix/store/rrzz5b1pshvzh1437ac9nkl06br81lkv-source/flake.nix:352:13:
So move the adding of the error context into realisePath().
This was removed in 2e199673a5 when
`copyPath` transitioned to use `RealisedPath`. But then in
e9848beca7 we added it back just for
`realisedPath`.
I think it is a good utility function --- one can easily imagine it
becoming optimized in the future, and copying paths *violating* the
closure is a very niche feature.
So if we have `copyPaths` for both sorts of paths, I think we should
have `copyClosure` for both sorts too.
if we defer the duplicate argument check for lambda formals we can use more
efficient data structures for the formals set, and we can get rid of the
duplication of formals names to boot. instead of a list of formals we've seen
and a set of names we'll keep a vector instead and run a sort+dupcheck step
before moving the parsed formals into a newly created lambda. this improves
performance on search and rebuild by ~1%, pure parsing gains more (about 4%).
this does reorder lambda arguments in the xml output, but the output is still
stable. this shouldn't be a problem since argument order is not semantically
important anyway.
before
nix search --no-eval-cache --offline ../nixpkgs hello
Time (mean ± σ): 8.550 s ± 0.060 s [User: 6.470 s, System: 1.664 s]
Range (min … max): 8.435 s … 8.666 s 20 runs
nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 346.7 ms ± 2.1 ms [User: 312.4 ms, System: 34.2 ms]
Range (min … max): 343.8 ms … 353.4 ms 20 runs
nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.720 s ± 0.031 s [User: 2.415 s, System: 0.231 s]
Range (min … max): 2.662 s … 2.780 s 20 runs
after
nix search --no-eval-cache --offline ../nixpkgs hello
Time (mean ± σ): 8.462 s ± 0.063 s [User: 6.398 s, System: 1.661 s]
Range (min … max): 8.339 s … 8.542 s 20 runs
nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 329.1 ms ± 1.4 ms [User: 296.8 ms, System: 32.3 ms]
Range (min … max): 326.1 ms … 330.8 ms 20 runs
nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.687 s ± 0.035 s [User: 2.392 s, System: 0.228 s]
Range (min … max): 2.626 s … 2.754 s 20 runs
This removes a dynamic stack allocation, making the derivation
unparsing logic robust against overflows when large strings are
added to a derivation.
Overflow behavior depends on the platform and stack configuration.
For instance, x86_64-linux/glibc behaves as (somewhat) expected:
$ (ulimit -s 20000; nix-instantiate tests/lang/eval-okay-big-derivation-attr.nix)
error: stack overflow (possible infinite recursion)
$ (ulimit -s 40000; nix-instantiate tests/lang/eval-okay-big-derivation-attr.nix)
error: expression does not evaluate to a derivation (or a set or list of those)
However, on aarch64-darwin:
$ nix-instantiate big-attr.nix ~
zsh: segmentation fault nix-instantiate big-attr.nix
This indicates a slight flaw in the single stack protection page
approach that is not encountered with normal stack frames.
string expressions by and large do not need the benefits a Symbol gives us,
instead they pollute the symbol table and cause unnecessary overhead for almost
all strings. the one place we can think of that benefits from them (attrpaths
with expressions) extracts the benefit in the parser, which we'll have to touch
anyway when changing ExprString to hold strings.
this gives a sizeable improvement of 3-5% on all benchmarks we've run.
before
nix search --no-eval-cache --offline ../nixpkgs hello
Time (mean ± σ): 8.844 s ± 0.045 s [User: 6.750 s, System: 1.663 s]
Range (min … max): 8.758 s … 8.922 s 20 runs
nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 367.4 ms ± 3.3 ms [User: 332.3 ms, System: 35.2 ms]
Range (min … max): 364.0 ms … 375.2 ms 20 runs
nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.810 s ± 0.030 s [User: 2.517 s, System: 0.225 s]
Range (min … max): 2.742 s … 2.854 s 20 runs
after
nix search --no-eval-cache --offline ../nixpkgs hello
Time (mean ± σ): 8.533 s ± 0.068 s [User: 6.485 s, System: 1.642 s]
Range (min … max): 8.404 s … 8.657 s 20 runs
nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 347.6 ms ± 3.1 ms [User: 313.1 ms, System: 34.5 ms]
Range (min … max): 343.3 ms … 354.6 ms 20 runs
nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.709 s ± 0.032 s [User: 2.414 s, System: 0.232 s]
Range (min … max): 2.655 s … 2.788 s 20 runs
Unless `--precise` is passed, make `nix why-depends` only show the
dependencies between the store paths, without introspecting them to
find the actual references.
This also makes it ~3x faster
it can be replaced with StringToken if we add another bit of information to
StringToken, namely whether this string should take part in indentation scanning
or not. since all escaping terminates indentation scanning we need to set this
bit only for the non-escaped IND_STRING rule.
this improves performance by about 1%.
before
nix search --no-eval-cache --offline ../nixpkgs hello
Time (mean ± σ): 8.880 s ± 0.048 s [User: 6.809 s, System: 1.643 s]
Range (min … max): 8.781 s … 8.993 s 20 runs
nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 375.0 ms ± 2.2 ms [User: 339.8 ms, System: 35.2 ms]
Range (min … max): 371.5 ms … 379.3 ms 20 runs
nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.831 s ± 0.040 s [User: 2.536 s, System: 0.225 s]
Range (min … max): 2.769 s … 2.912 s 20 runs
after
nix search --no-eval-cache --offline ../nixpkgs hello
Time (mean ± σ): 8.832 s ± 0.048 s [User: 6.757 s, System: 1.657 s]
Range (min … max): 8.743 s … 8.921 s 20 runs
nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 367.4 ms ± 3.2 ms [User: 332.7 ms, System: 34.7 ms]
Range (min … max): 364.6 ms … 374.6 ms 20 runs
nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.810 s ± 0.030 s [User: 2.517 s, System: 0.225 s]
Range (min … max): 2.742 s … 2.854 s 20 runs
This is needed to get the path of a derivation that might not exist
(e.g. for 'nix store copy-log').
InstallableStorePath::toDerivedPaths() cannot be used for this because
it calls readDerivation(), so it fails if the store doesn't have the
derivation.
When stderr is not connected to a tty, show "building" and
"substituting" messages, a-la nix-build et al.
Closes https://github.com/NixOS/nix/issues/4402
Co-authored-by: Théophane Hufschmitt <7226587+thufschmitt@users.noreply.github.com>
gives about 1% improvement on system eval, a bit less on nix search.
# before
nix search --no-eval-cache --offline ../nixpkgs hello
Time (mean ± σ): 7.419 s ± 0.045 s [User: 6.362 s, System: 0.794 s]
Range (min … max): 7.335 s … 7.517 s 20 runs
nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.921 s ± 0.023 s [User: 2.626 s, System: 0.210 s]
Range (min … max): 2.883 s … 2.957 s 20 runs
# after
nix search --no-eval-cache --offline ../nixpkgs hello
Time (mean ± σ): 7.370 s ± 0.059 s [User: 6.333 s, System: 0.791 s]
Range (min … max): 7.286 s … 7.541 s 20 runs
nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.891 s ± 0.033 s [User: 2.606 s, System: 0.210 s]
Range (min … max): 2.823 s … 2.958 s 20 runs
`nix why-depends` pipes its output into a pager by default.
However the pager was only started after the first path was printed,
causing that path to be excluded from the pager output.
(Actually the pager was started *inside* the recursive function that was
printing the dependency chain, so a new instance was started at each
level. It’s a little miracle that it worked at all).
Fix #5911
mainly to avoid an allocation and a copy of a string that can be
modified in place (ever since EvalState holds on to the buffer, not the
generated parser itself).
# before
Benchmark 1: nix search --offline nixpkgs hello
Time (mean ± σ): 571.7 ms ± 2.4 ms [User: 563.3 ms, System: 8.0 ms]
Range (min … max): 566.7 ms … 579.7 ms 50 runs
Benchmark 2: nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 376.6 ms ± 1.0 ms [User: 345.8 ms, System: 30.5 ms]
Range (min … max): 374.5 ms … 379.1 ms 50 runs
Benchmark 3: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.922 s ± 0.006 s [User: 2.707 s, System: 0.215 s]
Range (min … max): 2.906 s … 2.934 s 50 runs
# after
Benchmark 1: nix search --offline nixpkgs hello
Time (mean ± σ): 570.4 ms ± 2.8 ms [User: 561.3 ms, System: 8.6 ms]
Range (min … max): 564.6 ms … 578.1 ms 50 runs
Benchmark 2: nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 375.4 ms ± 1.3 ms [User: 343.2 ms, System: 31.7 ms]
Range (min … max): 373.4 ms … 378.2 ms 50 runs
Benchmark 3: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.925 s ± 0.006 s [User: 2.704 s, System: 0.219 s]
Range (min … max): 2.910 s … 2.942 s 50 runs
when given a string yacc will copy the entire input to a newly allocated
location so that it can add a second terminating NUL byte. since the
parser is a very internal thing to EvalState we can ensure that having
two terminating NUL bytes is always possible without copying, and have
the parser itself merely check that the expected NULs are present.
# before
Benchmark 1: nix search --offline nixpkgs hello
Time (mean ± σ): 572.4 ms ± 2.3 ms [User: 563.4 ms, System: 8.6 ms]
Range (min … max): 566.9 ms … 579.1 ms 50 runs
Benchmark 2: nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 381.7 ms ± 1.0 ms [User: 348.3 ms, System: 33.1 ms]
Range (min … max): 380.2 ms … 387.7 ms 50 runs
Benchmark 3: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.936 s ± 0.005 s [User: 2.715 s, System: 0.221 s]
Range (min … max): 2.923 s … 2.946 s 50 runs
# after
Benchmark 1: nix search --offline nixpkgs hello
Time (mean ± σ): 571.7 ms ± 2.4 ms [User: 563.3 ms, System: 8.0 ms]
Range (min … max): 566.7 ms … 579.7 ms 50 runs
Benchmark 2: nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 376.6 ms ± 1.0 ms [User: 345.8 ms, System: 30.5 ms]
Range (min … max): 374.5 ms … 379.1 ms 50 runs
Benchmark 3: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.922 s ± 0.006 s [User: 2.707 s, System: 0.215 s]
Range (min … max): 2.906 s … 2.934 s 50 runs
speeds up parsing by ~3%, system builds by a bit more than 1%
# before
Benchmark 1: nix search --offline nixpkgs hello
Time (mean ± σ): 574.7 ms ± 2.8 ms [User: 566.3 ms, System: 8.0 ms]
Range (min … max): 569.2 ms … 580.7 ms 50 runs
Benchmark 2: nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 394.4 ms ± 0.8 ms [User: 361.8 ms, System: 32.3 ms]
Range (min … max): 392.7 ms … 395.7 ms 50 runs
Benchmark 3: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.976 s ± 0.005 s [User: 2.757 s, System: 0.218 s]
Range (min … max): 2.966 s … 2.990 s 50 runs
# after
Benchmark 1: nix search --offline nixpkgs hello
Time (mean ± σ): 572.4 ms ± 2.3 ms [User: 563.4 ms, System: 8.6 ms]
Range (min … max): 566.9 ms … 579.1 ms 50 runs
Benchmark 2: nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 381.7 ms ± 1.0 ms [User: 348.3 ms, System: 33.1 ms]
Range (min … max): 380.2 ms … 387.7 ms 50 runs
Benchmark 3: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.936 s ± 0.005 s [User: 2.715 s, System: 0.221 s]
Range (min … max): 2.923 s … 2.946 s 50 runs
every stringy token the lexer returns is turned into a Symbol and not
used further, so we don't have to strdup. using a string_view is
sufficient, but due to limitations of the current parser we have to use
a POD type that holds the same information.
gives ~2% on system build, 6% on search, 8% on parsing alone
# before
Benchmark 1: nix search --offline nixpkgs hello
Time (mean ± σ): 610.6 ms ± 2.4 ms [User: 602.5 ms, System: 7.8 ms]
Range (min … max): 606.6 ms … 617.3 ms 50 runs
Benchmark 2: nix eval -f hackage-packages.nix
Time (mean ± σ): 430.1 ms ± 1.4 ms [User: 393.1 ms, System: 36.7 ms]
Range (min … max): 428.2 ms … 434.2 ms 50 runs
Benchmark 3: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 3.032 s ± 0.005 s [User: 2.808 s, System: 0.223 s]
Range (min … max): 3.023 s … 3.041 s 50 runs
# after
Benchmark 1: nix search --offline nixpkgs hello
Time (mean ± σ): 574.7 ms ± 2.8 ms [User: 566.3 ms, System: 8.0 ms]
Range (min … max): 569.2 ms … 580.7 ms 50 runs
Benchmark 2: nix eval -f hackage-packages.nix
Time (mean ± σ): 394.4 ms ± 0.8 ms [User: 361.8 ms, System: 32.3 ms]
Range (min … max): 392.7 ms … 395.7 ms 50 runs
Benchmark 3: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.976 s ± 0.005 s [User: 2.757 s, System: 0.218 s]
Range (min … max): 2.966 s … 2.990 s 50 runs
there's a few symbols in primops we can create once and pick them out of
EvalState afterwards instead of creating them every time we need them. this
gives almost 1% speedup to an uncached nix search.
there's a couple places that can be easily converted from using strings to using
string_views instead. gives a slight (~1%) boost to system eval.
# before
nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.946 s ± 0.026 s [User: 2.655 s, System: 0.209 s]
Range (min … max): 2.905 s … 2.995 s 20 runs
# after
Time (mean ± σ): 2.928 s ± 0.024 s [User: 2.638 s, System: 0.211 s]
Range (min … max): 2.893 s … 2.970 s 20 runs
this avoids one copy from `s` into `str`, and possibly another copy needed to
construct `s` at the call site. lexical_cast is also more efficient in general.
constructing an ostringstream for non-string concats (like integer addition) is
a small constant cost that we can avoid. for string concats we can keep all the
string temporaries we get from coerceToString and concatenate them in one go,
which saves a lot of intermediate temporaries and copies in ostringstream. we
can also avoid copying the concatenated string again by directly allocating it
in GC memory and moving ownership of the concatenated string into the target
value.
saves about 2% on system eval.
before:
Benchmark 1: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.837 s ± 0.031 s [User: 2.562 s, System: 0.191 s]
Range (min … max): 2.796 s … 2.892 s 20 runs
after:
Benchmark 1: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.790 s ± 0.035 s [User: 2.532 s, System: 0.187 s]
Range (min … max): 2.722 s … 2.836 s 20 runs
There already existed a smoke test for the link content length,
but it appears that there exist corruptions pernicious enough
to replace the file content with zeros while keeping the same length.
--repair-path now goes as far as checking the content of the link,
making it true to its name and actually repairing the path for such
corruption cases.
we don't have to create an ostream sentry object for every character of a JSON
string we write. format a bunch of characters and flush them to the stream all
at once instead.
this doesn't affect small numbers of string characters, but larger numbers of
total JSON string characters written gain a lot. at 1MB of total string written
we gain almost 30%, at 16MB it's almost a factor of 3x. large numbers of JSON
string characters do occur naturally in a nixos system evaluation to generate
documentation (though this is now somewhat mitigated by caching the largest part
of nixos option docs).
benchmarked with
hyperfine 'nix eval --raw --expr "let s = __concatStringsSep \"\" (__genList (_: \"c\") 256); in __toJSON (__genList (_: s) {e})"' --warmup 1 -L e 1,4,256,4096,65536
before:
Benchmark 1: nix eval --raw --expr "let s = __concatStringsSep \"\" (__genList (_: \"c\") 256); in __toJSON (__genList (_: s) 1)"
Time (mean ± σ): 12.5 ms ± 0.2 ms [User: 9.2 ms, System: 4.0 ms]
Range (min … max): 11.9 ms … 13.1 ms 223 runs
Benchmark 2: nix eval --raw --expr "let s = __concatStringsSep \"\" (__genList (_: \"c\") 256); in __toJSON (__genList (_: s) 4)"
Time (mean ± σ): 12.5 ms ± 0.2 ms [User: 9.3 ms, System: 3.8 ms]
Range (min … max): 11.9 ms … 13.2 ms 220 runs
Benchmark 3: nix eval --raw --expr "let s = __concatStringsSep \"\" (__genList (_: \"c\") 256); in __toJSON (__genList (_: s) 256)"
Time (mean ± σ): 13.2 ms ± 0.3 ms [User: 9.8 ms, System: 4.0 ms]
Range (min … max): 12.6 ms … 14.3 ms 205 runs
Benchmark 4: nix eval --raw --expr "let s = __concatStringsSep \"\" (__genList (_: \"c\") 256); in __toJSON (__genList (_: s) 4096)"
Time (mean ± σ): 24.0 ms ± 0.4 ms [User: 19.4 ms, System: 5.2 ms]
Range (min … max): 22.7 ms … 25.8 ms 119 runs
Benchmark 5: nix eval --raw --expr "let s = __concatStringsSep \"\" (__genList (_: \"c\") 256); in __toJSON (__genList (_: s) 65536)"
Time (mean ± σ): 196.0 ms ± 3.7 ms [User: 171.2 ms, System: 25.8 ms]
Range (min … max): 190.6 ms … 201.5 ms 14 runs
after:
Benchmark 1: nix eval --raw --expr "let s = __concatStringsSep \"\" (__genList (_: \"c\") 256); in __toJSON (__genList (_: s) 1)"
Time (mean ± σ): 12.4 ms ± 0.3 ms [User: 9.1 ms, System: 4.0 ms]
Range (min … max): 11.7 ms … 13.3 ms 204 runs
Benchmark 2: nix eval --raw --expr "let s = __concatStringsSep \"\" (__genList (_: \"c\") 256); in __toJSON (__genList (_: s) 4)"
Time (mean ± σ): 12.4 ms ± 0.2 ms [User: 9.2 ms, System: 3.9 ms]
Range (min … max): 11.8 ms … 13.0 ms 214 runs
Benchmark 3: nix eval --raw --expr "let s = __concatStringsSep \"\" (__genList (_: \"c\") 256); in __toJSON (__genList (_: s) 256)"
Time (mean ± σ): 12.6 ms ± 0.2 ms [User: 9.5 ms, System: 3.8 ms]
Range (min … max): 12.1 ms … 13.3 ms 209 runs
Benchmark 4: nix eval --raw --expr "let s = __concatStringsSep \"\" (__genList (_: \"c\") 256); in __toJSON (__genList (_: s) 4096)"
Time (mean ± σ): 15.9 ms ± 0.2 ms [User: 11.4 ms, System: 5.1 ms]
Range (min … max): 15.2 ms … 16.4 ms 171 runs
Benchmark 5: nix eval --raw --expr "let s = __concatStringsSep \"\" (__genList (_: \"c\") 256); in __toJSON (__genList (_: s) 65536)"
Time (mean ± σ): 69.0 ms ± 0.9 ms [User: 44.3 ms, System: 25.3 ms]
Range (min … max): 67.2 ms … 70.9 ms 42 runs
Previously you had to remember to call value->attrs->sort() after
populating value->attrs. Now there is a BindingsBuilder helper that
wraps Bindings and ensures that sort() is called before you can use
it.
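A minimal sketch of the builder-that-sorts pattern, with illustrative
stand-in types rather than the real Bindings/Value machinery:
```c++
#include <algorithm>
#include <string>
#include <utility>
#include <vector>

using Attr = std::pair<std::string, int>;   // stand-in for (Symbol, Value *)
using Bindings = std::vector<Attr>;         // stand-in for the sorted attribute array

class BindingsBuilder
{
    Bindings attrs;
public:
    void insert(std::string name, int value)
    {
        attrs.emplace_back(std::move(name), value);
    }

    // The only way to obtain the Bindings is through finish(), which sorts,
    // so callers can no longer forget the sort() step.
    Bindings finish() &&
    {
        std::sort(attrs.begin(), attrs.end(),
            [](const Attr & a, const Attr & b) { return a.first < b.first; });
        return std::move(attrs);
    }
};
```
Usage would then look like `auto attrs = std::move(builder).finish();`,
with no way to hand out an unsorted attribute set.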
nixpkgs can save a good bit of eval memory with this primop. zipAttrsWith is
used quite a bit around nixpkgs (eg in the form of recursiveUpdate), but the
most costly application for this primop is in the module system. it improves
the implementation of zipAttrsWith from nixpkgs by not checking an attribute
multiple times if it occurs more than once in the input list, allocates less
values and set elements, and just avoids many a temporary object in general.
nixpkgs has a more generic version of this operation, zipAttrsWithNames, but
this version is only used once so isn't suitable for being the base of a new
primop. if it were to be used more we should add a second primop instead.
When we check for disappeared overrides, we can get "false positives"
for follows and overrides which are defined in the dependencies of the
flake we are locking, since they are not parsed by
parseFlakeInputs. However, at that point we already know that the
overrides couldn't possibly have been changed if the input itself
hasn't changed (since we check that oldLock->originalRef == *input.ref
for the input's parent). So, to prevent this, only perform this check
when it was possible that the flake changed (e.g. the flake we're
locking, or a new input, or the input has changed and mustRefetch ==
true).
This makes sure that values parsed from TOML have a proper size. Using
e.g. `double` caused issues on i686 where the size of `double` (32bit)
was too small to accommodate some values.
This was already accidentally disabled in ba87b08. It also no longer
appears to be beneficial, and in fact slows things down, e.g. when
evaluating a NixOS system configuration:
elapsed time: median = 3.8170 mean = 3.8202 stddev = 0.0195 min = 3.7894 max = 3.8600 [rejected, p=0.00000, Δ=0.36929±0.02513]
calling GC_malloc for each value is significantly more expensive than
allocating a bunch of values at once with GC_malloc_many. "a bunch" here
is a GC block size, ie 16KiB or less.
this gives a 1.5% performance boost when evaluating our nixos system.
tested with
nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
# on master
Time (mean ± σ): 3.335 s ± 0.007 s [User: 2.774 s, System: 0.293 s]
Range (min … max): 3.315 s … 3.347 s 50 runs
# with this change
Time (mean ± σ): 3.288 s ± 0.006 s [User: 2.728 s, System: 0.292 s]
Range (min … max): 3.274 s … 3.307 s 50 runs
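A rough sketch of the free-list pattern described above, assuming the
Boehm GC API (`GC_malloc_many`, `GC_NEXT`); the real allocation path in
the evaluator is more involved:
```c++
#include <gc/gc.h>
#include <new>

struct Value { void * fields[3]; };   // illustrative payload

static void * valueFreeList = nullptr;

// Hand out Values from a simple free list; refill the list with one
// GC_malloc_many call (a whole GC block of objects) when it runs dry.
void * allocValue()
{
    if (!valueFreeList) {
        valueFreeList = GC_malloc_many(sizeof(Value));
        if (!valueFreeList) throw std::bad_alloc();
    }
    void * p = valueFreeList;
    valueFreeList = GC_NEXT(p);   // free objects are chained through their first word
    GC_NEXT(p) = nullptr;
    return p;
}
```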
Add a `_NIX_TRACE_BUILT_OUTPUTS` environment variable that can be set to
a filename in which the result of each build will be logged.
This is intentionally crude and undocumented as it’s only meant to be a
temporary thing to assess the usefulness of CA derivations.
Any other use would need a cleaner re-implementation first.
Make the build of unresolved derivations return the same status as the
resolved one, except in the case of an `AlreadyValid` in which case it
will return `ResolvesToAlreadyValid` to mean that the outputs of the unresolved
derivation weren’t known, but the resolved one is.
When a variable is assigned in the REPL, make sure to remove any possible reference to the old one so that we correctly pick the new one afterwards.
Fix #5706
I downloaded Nix tonight, and immediately broke it by accidentally removing the default binary caching.
After figuring this out, I also failed to fix it properly, due to using the wrong key for Nix's default binary cache
If the diagnostic message had been clearer about what/where a "signature" for a "substituter" is + comes from, it probably would have saved me a few hours.
Maybe we can save other noobs the same pain?
Before this change, stdout was closed after the pager exits. This is
fine for non-interactive commands where we want to exit right after
the pager exits anyway, but for interactive things (e.g. nix repl)
this breaks the output after we quit the pager.
Keep the initial stdout fd as part of RunPager, and restore it in
RunPager::~RunPager using dup2.
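A minimal sketch of the save-and-restore idea with plain POSIX dup/dup2
(the actual RunPager does more, e.g. forking the pager itself):
```c++
#include <unistd.h>
#include <stdexcept>

class RunPager
{
    int savedStdout = -1;
public:
    RunPager()
    {
        savedStdout = dup(STDOUT_FILENO);        // remember the original stdout
        if (savedStdout == -1) throw std::runtime_error("dup failed");
        // ... redirect STDOUT_FILENO into the pager's stdin here ...
    }

    ~RunPager()
    {
        if (savedStdout != -1) {
            dup2(savedStdout, STDOUT_FILENO);    // restore stdout instead of leaving it closed
            close(savedStdout);
        }
    }
};
```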
This function is very useful in nixpkgs, but its implementation in Nix
itself is rather slow due to it requiring a lot of attribute set and
list appends.
Previously, when we were attempting to reuse the old lockfile
information in the computeLocks function, we have passed the parent of
the current input to the next computeLocks call. This was incorrect,
since the follows are resolved relative to the parent. This caused
issues when we tried to reuse oldLock but couldn't for some
reason (read: mustRefetch is true), in that case the follows were
resolved incorrectly.
Fix this by passing the correct parent, and adding some tests to
prevent this particular regression from happening again.
Closes https://github.com/NixOS/nix/issues/5697
- Previous to this commit the boundary was exclusive of the
top level flake.
- This is wrong since the top level flake is still a valid
relative reference.
- Now, the check boundary is inclusive of the top level flake.
Signed-off-by: Timothy DeHerrera <tim.deh@pm.me>
Due to missing <atomic> declaration the build fails as:
src/libutil/util.hh:350:24: error: no match for 'operator||' (operand types are 'std::atomic<bool>' and 'bool')
350 | if (_isInterrupted || (interruptCheck && interruptCheck()))
| ~~~~~~~~~~~~~~ ^~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| | |
| std::atomic<bool> bool
Because the manual is generated from default values which are themselves
generated from various sources (cpuid, BIOS settings (KVM), number of
cores), this commit hides non-reproducible settings from the manual
output.
No matter what, we need to resize the buffer to not have any scratch
space after we do the `read`. In the end-of-file case, `got` will be 0
from its initial value.
Before, we forgot to resize in the EOF case with the break. Yes, we know
we didn't receive any data in that case, but we still have the scratch
space to undo.
Co-Authored-By: Will Fancher <Will.Fancher@Obsidian.Systems>
This doesn't fix the bug, but makes the code less difficult to read.
Also improve the comments, now that it is clear what part is needed in
each code path.
Moving arguments of the primOp into the registration structure makes it
impossible to initialize a second EvalState with the correct primOp
registration. It will end up registering all those "RegisterPrimOp"'s
with an arity of zero on all but the 2nd instance of the EvalState.
Not moving the memory will add a tiny bit of memory overhead during the
eval since we need a copy of all the argument lists of all the primOp's.
The overhead shouldn't be too bad as it is static (based on the amount
of registered operations) and only occurs once during the interpreter
startup.
If we’re in pure eval mode, then say so in the error message rather
than (wrongly) speaking about restricted mode.
Fix https://github.com/NixOS/nix/issues/5611
We let DISPLAY (X11) through, so we should let the Wayland equivalents
through as well. Similarly, we let HOME through, so it should be okay
to allow XDG_RUNTIME_DIR (which is needed for connecting to Wayland
with WAYLAND_DISPLAY) through as well. Otherwise graphical
applications will either fall back to X11 (if they support it), or
just not work (if they don't).
This is not really useful on its own, but it does recover the
'infinite recursion' error message for '{ __functor = x: x; } 1', and
is more efficient in conjunction with #3718.
Fixes #5515.
- This change applies to builtins.toXML and inner workings
- Proof of concept:
```nix
let e = builtins.toXML e; in e
```
- Before:
```
$ nix-instantiate --eval poc.nix
error: infinite recursion encountered
```
- After:
```
$ nix-instantiate --eval poc.nix
error: infinite recursion encountered
at /data/github/kamadorueda/nix/poc.nix:1:9:
1| let e = builtins.toXML e; in e
|
```
These headers are included by the libexpr, libfetchers, libstore
and libutil headers.
Considering that these are vendored sources, Nix should expose them,
as it is not a good idea for reverse dependencies to rely on a
potentially different source that can go out of sync.
When an input follows disappears, we can't just reuse the old lock
file entries since we may be missing some required ones. Refetch the
input when this happens.
Closes https://github.com/NixOS/nix/issues/5289
For a typical desktop system (~2K packages) we can easily get 100K
entries in RealisationsRefs. Without indices, a query on RealisationsRefs
requires a linear scan.
RealisationsRefs(referrer)
--------------------------
Inefficiency is seen as a 100% CPU load of nix-daemon for the following
scenario:
$ nix edit -f . bash # add unused environment variable, like FOO="1"
# populate RealisationsRefs, build fresh system
$ nix build -f nixos system --arg config '{ contentAddressedByDefault = true; }'
$ nix edit -f . bash # add unused environment variable, like FOO="2"
$ time nix build -f nixos system --arg config '{ contentAddressedByDefault = true; }'
In this case `bash` will be rebuilt a few times and then the rest of the CPU
time is spent on scanning the RealisationsRefs table (about 5 CPU-minutes
on my machine).
Before the change:
$ time nix build -f nixos system ... # step 4 above
real 34m3,613s
user 0m5,232s
sys 0m0,758s
Of all this time about 29.5 minutes are taken by nix-daemon's CPU time.
After the change:
$ time nix build -f nixos system ... # step 4 above
real 4m50,061s
user 0m5,038s
sys 0m0,677s
Of all this time about 1 minute is taken by nix-daemon's CPU time.
Most of the time is spent polling for non-existent realisations on
cache.nixos.org.
Realisations(outputPath)
------------------------
After running CA system for two weeks I got ~1M entries in Realisations
table. `nix-collect-garbage` became very slow (seemingly 100 path deletions
per second). It happens due to a slow cascading delete from Realisations
triggered by deletion from ValidPaths.
The fix is to add an index on the column referencing the ValidPaths(id)
primary key, so that the cascading deletions no longer rescan the table.
Before the change:
$ time nix-collect-garbage -d --max-freed 100G
<interrupted before finish, took too long>
real 23m32.411s
user 17m49.679s
sys 4m50.609s
Most of the time was spent re-scanning the Realisations table on each path deletion.
After the change:
$ time nix-collect-garbage -d --max-freed 100G
real 8m43.226s
user 6m16.317s
sys 1m40.188s
Time is spent scanning sqlite indices and in kernel when unlinking directories.
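For illustration, the kind of statements involved, run here through the
plain sqlite3 C API; the index names are made up for the example and not
necessarily the ones shipped in Nix:
```c++
#include <sqlite3.h>
#include <stdexcept>
#include <string>

// Index the referrer column (for the referrer scans) and the outputPath
// column (for the cascading deletes triggered from ValidPaths).
void addRealisationIndices(sqlite3 * db)
{
    const char * sql =
        "create index if not exists IndexRealisationsRefsOnReferrer "
        "  on RealisationsRefs(referrer); "
        "create index if not exists IndexRealisationsOnOutputPath "
        "  on Realisations(outputPath);";
    char * err = nullptr;
    if (sqlite3_exec(db, sql, nullptr, nullptr, &err) != SQLITE_OK) {
        std::string msg = err ? err : "unknown error";
        sqlite3_free(err);
        throw std::runtime_error("creating indices failed: " + msg);
    }
}
```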
Doing it as a side-effect of calling LocalStore::makeStoreWritable()
is very ugly.
Also, make sure that stopping the progress bar joins the update
thread, otherwise that thread should be unshared as well.
Since 4806f2f6b0, we can't have paths with
references passed to builtins.{path,filterSource}. This prevents many cases
of those functions called on IFD outputs from working. Resolve this by
passing the references found in the original path to the added path.
When setting flake-local options (with the `nixConfig` field), forward
these options to the daemon in case we’re using one.
This is necessary in particular for options like `binary-caches` or
`post-build-hook` to make sense.
Fix <343239fc8a (r44356843)>
When running a `:b` command in the repl, after building the derivations
query the store for its outputs rather than just assuming that they are
known in the derivation itself (which isn’t true for CA derivations)
Fix #5328
We now parse function applications as a vector of arguments rather
than as a chain of binary applications, e.g. 'substring 1 2 "foo"' is
parsed as
ExprCall { .fun = <substring>, .args = [ <1>, <2>, <"foo"> ] }
rather than
ExprApp (ExprApp (ExprApp <substring> <1>) <2>) <"foo">
This allows primops to be called immediately (if enough arguments are
supplied) without having to allocate intermediate tPrimOpApp values.
On
$ nix-instantiate --dry-run '<nixpkgs/nixos/release-combined.nix>' -A nixos.tests.simple.x86_64-linux
this gives a substantial performance improvement:
user CPU time: median = 0.9209 mean = 0.9218 stddev = 0.0073 min = 0.9086 max = 0.9340 [rejected, p=0.00000, Δ=-0.21433±0.00677]
elapsed time: median = 1.0585 mean = 1.0584 stddev = 0.0024 min = 1.0523 max = 1.0623 [rejected, p=0.00000, Δ=-0.20594±0.00236]
because it reduces the number of tPrimOpApp allocations from 551990 to
42534 (i.e. only small minority of primop calls are partially
applied) which in turn reduces time spent in the garbage collector.
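A minimal sketch of the shape change (illustrative AST types, not the
real Nix expression classes):
```c++
#include <memory>
#include <vector>

struct Expr { virtual ~Expr() = default; };
using ExprPtr = std::unique_ptr<Expr>;

// Old shape: ((f a) b) c, i.e. one application node per argument.
struct ExprApp : Expr
{
    ExprPtr fun, arg;
    ExprApp(ExprPtr f, ExprPtr a) : fun(std::move(f)), arg(std::move(a)) {}
};

// New shape: f applied to [a, b, c] in a single node, so a primop with
// enough arguments can be invoked directly, without building intermediate
// partial-application values.
struct ExprCall : Expr
{
    ExprPtr fun;
    std::vector<ExprPtr> args;
    ExprCall(ExprPtr f, std::vector<ExprPtr> a)
        : fun(std::move(f)), args(std::move(a)) {}
};
```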
Rather than having them plain strings scattered through the whole
codebase, create an enum containing all the known experimental features.
This means that
- Nix can now `warn` when an unknown experimental feature is passed
(making it much nicer to spot typos and spot deprecated features)
- It’s now easy to remove a feature altogether (once the feature isn’t
experimental anymore or is dropped) by just removing the field for the
enum and letting the compiler point us to all the now invalid usages
of it.
Currently the machine specification (`/etc/nix/machine`) parser fails
with a vague exception if the file has an incorrect format.
This commit adds verbose exceptions and unit tests for the parser.
Fixed a bug in the initialization of the 'base64DecodeChars' variable;
currently the decoder does not fail on invalid Base64 strings.
Added a test case to verify the fix.
Also made 'base64DecodeChars' be computed at compile time,
and added a test case to encode/decode a string with non-printable characters.
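A rough sketch of the compile-time decode table, assuming the standard
Base64 alphabet; the naming is illustrative rather than the exact Nix
implementation:
```c++
#include <array>
#include <cstddef>
#include <cstdint>
#include <string_view>

constexpr std::string_view base64Chars =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

// Build the reverse lookup table at compile time; -1 marks bytes that are
// not valid Base64 input, so the decoder can reject them instead of
// silently accepting garbage.
constexpr std::array<int8_t, 256> makeBase64DecodeChars()
{
    std::array<int8_t, 256> t{};
    for (auto & c : t) c = -1;
    for (std::size_t i = 0; i < base64Chars.size(); ++i)
        t[static_cast<unsigned char>(base64Chars[i])] = static_cast<int8_t>(i);
    return t;
}

constexpr auto base64DecodeChars = makeBase64DecodeChars();
```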
- This way we improve error messages
on infinite recursion
- Demo:
```nix
let x = builtins.fetchTree x;
in x
```
- Before:
```bash
$ nix-instantiate --extra-experimental-features flakes --strict
error: infinite recursion encountered
```
- After:
```bash
$ nix-instantiate --extra-experimental-features flakes --strict
error: infinite recursion encountered
at /data/github/kamadorueda/nix/test.nix:1:9:
1| let x = builtins.fetchTree x;
| ^
2| in x
```
Mentions: #3505
This ensures any started processes can't write to /nix/store (except
during builds). This partially reverts 01d07b1e, which happened because
of #2646.
The problem was only happening after nix downloads anything, causing
me to suspect the download thread. The problem turns out to be:
"A process can't join a new mount namespace if it is sharing
filesystem-related attributes with another process", in this case this
process is the curl thread.
Ideally, we might kill it before spawning the shell process, but it's
inside a static variable in the getFileTransfer() function. So
instead, stop it from sharing FS state using unshare(). A strategy
such as the one from #5057 (single-threaded chroot helper binary) is
also very much on the table.
Fixes #4337.
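A minimal sketch of the workaround, assuming Linux and glibc (`unshare`
with `CLONE_FS`); in the real code this would run on the download thread
itself:
```c++
#include <sched.h>
#include <cerrno>
#include <cstring>
#include <stdexcept>
#include <string>

// Stop this thread from sharing filesystem attributes (cwd, root, umask)
// with the rest of the process, so other threads can still enter a new
// mount namespace later.
void unshareFilesystemState()
{
    if (unshare(CLONE_FS) == -1)
        throw std::runtime_error(
            std::string("unshare(CLONE_FS) failed: ") + std::strerror(errno));
}
```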
- This way we improve error messages
on infinite recursion
- Demo:
```nix
let
x = builtins.fetchMercurial x;
in
x
```
- Before:
```bash
$ nix-instantiate --show-trace --strict
error: infinite recursion encountered
```
- After:
```bash
nix-instantiate --show-trace --strict
error: infinite recursion encountered
at /data/github/kamadorueda/test/default.nix:2:7:
1| let
2| x = builtins.fetchMercurial x;
| ^
3| in
```
Mentions: #3505
We can actually just load nss ourselves and call into nss to configure it,
so we don't need to run a dummy query just to have nss load nss_dns
as a side effect.
Signed-off-by: Arthur Gautier <baloo@superbaloo.net>
This fixes a bug in the garbage collector where if a path
/nix/store/abcd-foo is valid, but we do a
isValidPath("/nix/store/abcd-foo.lock") first, then a negative entry
for /nix/store/abcd is added to pathInfoCache, so /nix/store/abcd-foo
is subsequently considered invalid and deleted.
(where "referrers" includes the reverse of derivation outputs and
derivers). Now we do a full traversal to see whether we can reach any
root. If not, all paths reached can be deleted.
The garbage collector no longer blocks other processes from
adding/building store paths or adding GC roots. To prevent the
collector from deleting store paths just added by another process,
processes need to connect to the garbage collector via a Unix domain
socket to register new temporary roots.
This reverts some parts of commit
8430a8f086 which was trying to rethrow
some exceptions while we weren’t in the context of a `catch` block,
causing some weird “terminate called without an active exception”
errors.
Fix#5368
In https://github.com/NixOS/nix/pull/5350 we noticed link failures
for pkgsStatic.nixUnstable. Adding an explicit dependency on libutil
fixes the libstore-tests linking.
When I stop a download with Ctrl-C in a `nix repl` of a flake, the REPL
refuses to do any other downloads:
nix-repl> builtins.getFlake "nix-serve"
[0.0 MiB DL] downloading 'https://api.github.com/repos/edolstra/nix-serve/tarball/e9828a9e01a14297d15ca41 error: download of 'e9828a9e01' was interrupted
[0.0 MiB DL]
nix-repl> builtins.getFlake "nix-serve"
error: interrupted by the user
[0.0 MiB DL]
To fix this issue, two changes were necessary:
* Reset the global `_isInterrupted` variable: even though a single
operation was aborted, it should still be possible to continue the
session.
* Recreate a `fileTransfer`-instance if the current one was shut down by
an abort.
We now build the context (so this has the side-effect of making
builtins.{path,filterSource} work on derivation outputs, if IFD is
enabled) and then check that the path has no references (which is what
we really care about).
The boolean is only used to determine if the formals are set to a
non-null pointer in all our cases. We can get rid of that allocation and
instead just compare the pointer value with NULL. This saves up to
sizeof(bool) plus platform-specific alignment per ExprLambda instance.
Probably not a lot of memory, but perhaps a few kilobytes with nixpkgs?
This also gets rid of a potential issue with dereferencing formals based on
the value of a boolean that didn't have to agree with the formals
pointer but did in all our cases.
9c766a40cb broke logging from the
daemon, because commonChildInit is called when starting the build hook
in a vfork, so it ends up resetting the parent's logger. So don't
vfork.
It might be best to get rid of vfork altogether, but that may cause
problems, e.g. when we call an external program like git from the
evaluator.
Before the changes when building the whole system with
`contentAddressedByDefault = true;` we get many noninformative messages:
$ nix build -f nixos system --keep-going
...
warning: rewriting hashes in '/nix/store/...-clang-11.1.0.drv.chroot/nix/store/...-11.1.0'; cross fingers
warning: rewriting hashes in '/nix/store/...-clang-11.1.0.drv.chroot/nix/store/...-11.1.0-dev'; cross fingers
warning: rewriting hashes in '/nix/store/...-clang-11.1.0.drv.chroot/nix/store/...-11.1.0-python'; cross fingers
error: 2 dependencies of derivation '/nix/store/...-hub-2.14.2.drv' failed to build
warning: rewriting hashes in '/nix/store/...-subversion-1.14.1.drv.chroot/nix/store/...-subversion-1.14.1-dev'; cross fingers
warning: rewriting hashes in '/nix/store/...-subversion-1.14.1.drv.chroot/nix/store/...-subversion-1.14.1-man'; cross fingers
...
Let's downgrade these messages down to debug().
I had started the trend of doing `std::visit` by value (because a type
error once misled me into thinking that was the only form that
existed). While the optimizer in principle should be able to deal with
the extra copying or indirection once the lambdas are inlined, sticking
with by-reference is the conventional default. I hope this might even
improve performance.
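A small example of the by-reference form argued for above (a generic lambda taking `const auto &`); the types here are placeholders:
```cpp
#include <iostream>
#include <string>
#include <variant>

using Node = std::variant<int, std::string>;

// Visiting by const reference (the conventional default) avoids copying the
// alternative currently held by the variant into the lambda parameter.
void describe(const Node & node)
{
    std::visit([](const auto & value) { std::cout << value << "\n"; }, node);
}

int main()
{
    describe(Node{42});
    describe(Node{std::string("hello")});
}
```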
I found it somewhat confusing to have an error like
error: attribute 'getFlake' missing
if the required experimental-feature (`flakes`) is not enabled. Instead,
I'd expect Nix to throw an error, just as is the case when using e.g. `nix
flake` without `flakes` being enabled.
With this change, the error looks like this:
$ nix-instantiate -E 'builtins.getFlake "nixpkgs"'
error: Cannot call 'builtins.getFlake' because experimental Nix feature 'flakes' is disabled. You can enable it via '--extra-experimental-features flakes'.
at «string»:1:1:
1| builtins.getFlake "nixpkgs"
| ^
I didn't use `settings.requireExperimentalFeature` here on purpose
because this doesn't contain a position. Also, it doesn't seem as if we
need to catch the error and check for the missing feature here since
this already happens at evaluation time.
This actually bit me quite recently in `nixpkgs` because I assumed that
`nix-build --check` would also error out if hashes don't match anymore[1],
and so I wrongly concluded that I couldn't reproduce the mismatch error.
The fix is rather simple, during the output registration a so-called
`delayedException` is instantiated e.g. if a FOD hash-mismatch occurs.
However, in case of `nix-build --check` (or `--rebuild` in case of `nix
build`), the code-path where this exception is thrown will never be
reached.
By adding that check to the if-clause that causes an early exit in case
of `bmCheck`, the issue is gone. Also added a (previously failing)
test-case to demonstrate the problem.
[1] https://github.com/NixOS/nixpkgs/pull/139238, the underlying issue
was that `nix-prefetch-git` returns different hashes than `fetchgit`
because the latter one fetches submodules by default.
This fixes
$ nix path-info -r $(type -P ls)
/nix/store/vfilzcp8a467w3p0mp54ybq6bdzb8w49-coreutils-8.32
/nix/store/5d821pjgzb90lw4zbg6xwxs7llm335wr-libunistring-0.9.10
...
/nix/store/mrv4y369nw6hg4pw8d9p9bfdxj9pjw0x-acl-2.3.0
/nix/store/vfilzcp8a467w3p0mp54ybq6bdzb8w49-coreutils-8.32
Also, output the paths in topologically sorted order like we used to.
This is important if the remote side *does* execute
nix-store/nix-daemon successfully, but stdout is polluted
(e.g. because the remote user's bashrc script prints something to
stdout). In that case we have to shutdown the write side to force the
remote nix process to exit.
Instead of
error: serialised integer 7161674624452356180 is too large for type 'j'
we now get
error: 'nix-store --serve' protocol mismatch from 'sshtest@localhost', got 'This account is currently not available.'
Fixes https://github.com/NixOS/nixpkgs/issues/37287.
Previously, type or coercion errors for string interpolation, path
interpolation, and plus expressions were always reported at the
beginning of the outer expression. This leads to confusing evaluation
error messages making it hard to accurately diagnose and then fix the
error.
For example, errors were reported as follows.
```
cannot coerce an integer to a string
1| let foo = 7; in "bar" + foo
| ^
cannot add a string to an integer
1| let foo = "bar"; in 4 + foo
| ^
cannot coerce an integer to a string
1| let foo = 7; in "x${foo}"
| ^
```
This commit changes the ExprConcatStrings expression vector to store a
sequence of expressions *and* their expansion locations so that error
locations can be reported accurately. For interpolation, the error is
reported at the beginning of the entire `${foo}`, not at the beginning
of `foo` because I thought this was slightly clearer. The previous
errors are now reported as:
```
cannot coerce an integer to a string
1| let foo = 7; in "bar" + foo
| ^
cannot add a string to an integer
1| let foo = "bar"; in 4 + foo
| ^
cannot coerce an integer to a string
1| let foo = 7; in "x${foo}"
| ^
```
The error is reported at this kind of precise location even for
multi-line indented strings.
This probably helps with at least some of the cases mentioned in #561
It's now disabled by default for the following:
* 'nix search' (this was already implied by read-only mode)
* 'nix flake show'
* 'nix flake check', but only on the hydraJobs output
Without this, flakes within the same tree and same lock data will have
the same fingerprint and the eval cache for one flake will be
incorrectly used for another.
Alternative to #4639. You can still read flake.lock, but at least in
reproducible workflows like NixOS configurations where you require a
non-dirty tree, evaluation will fail because there is no rev.
With --no-write-lock-file, it's possible that flake.lock is out of
sync with the actual inputs used by the evaluation. So doing fromJSON
(readFile ./flake.lock) will give wrong results.
Fixes#4639.
Before this commit, the dns lookup in preloadNSS would still go through
nscd. This did not have the effect of loading the nss_dns.so as expected
(nss_dns.so being out of reach from within the sandbox).
Should the LOCALDOMAIN environment variable be defined, nss will completely
avoid nscd and will do its dns resolution on its own.
By temporarily setting the LOCALDOMAIN variable before calling into NSS, we can
force NSS to load the shared libraries as expected.
Fixes#5089
Signed-off-by: Arthur Gautier <baloo@superbaloo.net>
In dry run mode, new derivations can't be created, so running the command on anything that has not been evaluated before results in an error message of the form `don't know how to build these paths (may be caused by read-only store access)`.
For comparison, the classical `nix-build --dry-run` doesn't use read-only mode.
Closes#1795
(cherry picked from commit 54525682df707742e58311c32e9c9cb18de1e31f)
If the store path contains a flake, this means that a command like
"nix path-info /path" will show info about /path, not about the
default output of the flake in /path. If you want the latter, you can
explicitly ask for it by doing "nix path-info path:/path".
Fixes#4568.
Store paths are only allowed to contain a limited subset of the
alphabet, which doesn’t include `!`. So don’t create lockfiles that
contain this `!` character as that would otherwise confuse (and break)
the gc.
Fix#5176
When doing e.g.
nix-build -A package --keep-failed --option \
builders \
'ssh://mfhydra?remote-store=/home/bosch/store x86_64-linux - 10 4 big-parallel'
this doesn't work properly because this build-setting is ignored.
I changed this behavior by passing the `settings.keepFailed` through the
serve-protocol to remote machines to make sure that I can introspect the
build-directory (which is particularly helpful when I have to look at a
`config.log` from a failed build for instance).
This fixes a use-after-free bug:
1. s = new EvalState();
2. callFlake()
3. static vCallFlake now references s
4. delete s;
5. s2 = new EvalState();
6. callFlake()
7. static vCallFlake still references s
8. crash
Nix 2.3 did not have a problem with recreating EvalState.
This fixes a class of crashes and introduces ptr<T> to make the
code robust against this failure mode going forward.
Thanks regnat for the idea of a ref<T> without overhead!
Closes#4895
Closes#4893
Closes#5127
Closes#5113
“packages” was probably meant to be “packages.${system}.” but that
is already listed in `getDefaultFlakeAttrPathPrefixes` in `installables`,
which is probably why no one noticed it was broken.
It currently fails with the following error:
error: flake 'git+file://…' does not provide attribute 'devShells.x86_64-linuxhaskell', 'packages.x86_64-linux.haskell', 'legacyPackages.x86_64-linux.haskell' or 'haskell'
For git+file and path flakes, chdir to flake directory so that phases
that expect to be in the flake directory can run
Fixes https://github.com/NixOS/nix/issues/3976
Using `$(libdir)` when installing .pc files looks like a more generic
solution. For example, it will work for distributions like RHEL or
Fedora where .pc files are installed in `/usr/lib64/pkgconfig`.
This replaces the O(n) search complexity in our insert code with a
lookup of O(log n). It also makes removing waitees easier as we can use
the extract method provided by the set class.
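Roughly what this looks like with a std::set; the element type and names are placeholders, not the actual goal types:
```cpp
#include <cassert>
#include <set>

int main()
{
    std::set<int> waiting = {1, 2, 3};
    std::set<int> done;

    // find() is O(log n), replacing a linear scan; extract() detaches the node
    // so it can be moved into another set without copying.
    if (auto it = waiting.find(2); it != waiting.end())
        done.insert(waiting.extract(it));

    assert(waiting.count(2) == 0);
    assert(done.count(2) == 1);
}
```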
Previously the code only ensured that the isBase32 array would be
initialised once in a single-threaded context. If two threads happened to
call the function before the initialisation was completed, both of them
would have run the initialisation step. This allowed for a
race condition where one thread might be done with the initialisation
but the other thread sets all the fields to false again. For a brief
moment the base32 detection would then produce false negatives.
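One common way to get that once-only, thread-safe initialisation is a function-local static ("magic statics"); a sketch, assuming the Nix base-32 alphabet and illustrative names:
```cpp
#include <array>

// A function-local static is initialised exactly once, and since C++11 that
// initialisation is guaranteed to be thread-safe, so two threads racing into
// this function can never observe a half-built table.
static const std::array<bool, 256> & isBase32Table()
{
    static const std::array<bool, 256> table = [] {
        std::array<bool, 256> t{};   // value-initialised: all false
        for (char c : "0123456789abcdfghijklmnpqrsvwxyz")
            if (c) t[static_cast<unsigned char>(c)] = true;
        return t;
    }();
    return table;
}

bool isBase32Char(char c)
{
    return isBase32Table()[static_cast<unsigned char>(c)];
}
```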
`nix-shell --pure` when applied to a non-stdenv derivation doesn't seem
to clear the PATH. It expects the stdenv/setup file to do so.
This adds an explicit `unset PATH` in nix-build.cc (nix-shell) itself so
that it's not reliant on stdenv/setup anymore.
This does not break impure nix-shell, since the PATH is persisted in the
variable `p` earlier in the bash rc file.
fixes#5092
Previously, despite having a boolean that tracked initialization, the
decode characters were "calculated" every single time a base64
string was being decoded.
With this change we only initialize the decode array once in a
thread-safe manner.
The experimental features are, well, experimental, and shouldn’t be
carelessly and transparently enabled.
Besides, some (`ca-derivations` at least) need to be enabled at startup
in order to work properly.
So it’s better to just require that the daemon be started with the right
`experimental-features` option.
Fix#5017
This extends https://github.com/NixOS/nix/pull/4978 by
supporting SCP-like urls in expressions like
```nix
builtins.fetchGit {
url = "git@github.com:NixOS/nix.git";
ref = "master";
}
```
This adds a new store operation 'addMultipleToStore' that reads a
number of NARs and ValidPathInfos from a Source, allowing any number
of store paths to be copied in a single call. This is much faster on
high-latency links when copying a lot of small files, like .drv
closures.
For example, on a connection with a 50 ms delay:
Before:
$ nix copy --to 'unix:///tmp/proxy-socket?root=/tmp/dest-chroot' \
/nix/store/90jjw94xiyg5drj70whm9yll6xjj0ca9-hello-2.10.drv \
--derivation --no-check-sigs
real 0m57.868s
user 0m0.103s
sys 0m0.056s
After:
real 0m0.690s
user 0m0.017s
sys 0m0.011s
Otherwise I get a compiler error when building for NetBSD:
src/libutil/util.cc: In function 'void nix::_deletePath(const Path&, uint64_t&)':
src/libutil/util.cc:438:17: error: base operand of '->' is not a pointer
438 | AutoCloseFD dirfd(open(dir.c_str(), O_RDONLY));
| ^~~~~
src/libutil/util.cc:439:10: error: 'dirfd' was not declared in this scope
439 | if (!dirfd) {
| ^~~~~
src/libutil/util.cc:444:17: error: 'dirfd' was not declared in this scope
444 | _deletePath(dirfd.get(), path, bytesFreed);
| ^~~~~
With this, we don't have to copy the entire .drv closure to the
destination store ahead of time (or at all). Instead, buildPaths()
reads .drv files from the eval store and copies inputSrcs to the
destination store if it needs to build a derivation.
Issue #5025.
In particular, this now works:
$ nix path-info --eval-store auto --store https://cache.nixos.org nixpkgs#hello
Previously this would fail as it would try to upload the hello .drv to
cache.nixos.org. Now the .drv is instantiated in the local store, and
then we check for the existence of the outputs in cache.nixos.org.
It supports functions as well. Also change `package` to
`derivation` because it operates at the language level and does
not open the derivation (which would be useful but not nearly
as much).
It does not operate on a derivation and does not return a
derivation path. Instead it works at the language level,
where a distinct term "package" is more appropriate to
distinguish the parent object of `meta.position`; an
attribute which doesn't even make it into the derivation.
Some people want to avoid using registries at all on their system; instead
of having to add --no-registries to every command, this commit allows setting
use-registries = false in the config. --no-registries is still allowed
everywhere it was allowed previously, but is now deprecated.
Co-authored-by: Eelco Dolstra <edolstra@gmail.com>
- This can legitimately happen (for example because non-determinism
causes a build-time dependency to be kept or not as a runtime
reference)
- Because of older Nix versions, it can happen that we encounter a
realisation with an (erroneously) empty set of dependencies, in which
case we don’t want to fail, but just warn the user and try to fix it.
Fill `NIX_CONFIG` with the value of the current Nix configuration before
calling the nix subprocesses in the repl
That way the whole configuration (including the possible
`experimental-features`, a possible `--store` option or whatever) will
be made available.
This is required for example to make `nix repl` work with a custom
`--store`
Fill `NIX_CONFIG` with the value of the current Nix configuration before
calling the post-build-hook.
That way the whole configuration (including the possible
`experimental-features`, a possible `--store` option or whatever) will
be made available to the hook.
Previously, '--delete-older-than 10' deleted the generations older than a single day, and '--delete-older-than 12m' deleted all generations older than 12 days.
This change makes it throw on those invalid inputs, and gives an example of a valid input.
We need to support it for the “old” fetch* functions for backwards
compatibility, but we don’t need it for fetchTree (as it’s a new
function).
Given that changing the `name` messes up the content hashing, we can
just forbid passing a custom `name` argument to it
Before this commit, nixConfig.flake-registry didn't have any real effect
on the evaluation, since config was applied after inputs were evaluated.
Change this behavior: apply the config in the beginning of flake::lockFile.
Pass the current experimental features using `NIX_CONFIG` to the various
Nix subprocesses that `nix repl` invokes.
This is quite a hack, but having `nix repl` call Nix with a subprocess
is a hack already, so I guess that’s fine.
`nix develop` is getting bash from an (assumed existing) `nixpkgs`
flake. However, when doing so, it reuses the `lockFlags` passed to the
current flake, including the `--override-input` and `--update-input`
flags, which generally don’t make sense anymore at that point (and trigger a
warning because of that).
Clear these overrides before getting the nixpkgs flake to get rid of the
warning.
Add an access-control list to the realisations in recursive-nix (similar
to the already existing one for store paths), so that we can build
content-addressed derivations in the restricted store.
Fix#4353
Previously, the build system used uname(1) output when it wanted to
check the operating system it was being built for, which meant that it
didn't take into-account cross-compilation when the build and host
operating systems were different.
To fix this, instead of consulting uname output, we consult the host
triple, specifically the third "kernel" part.
For "kernel"s with stable ABIs, like Linux or Cygwin, we can use a
simple ifeq to test whether we're compiling for that system, but for
other platforms, like Darwin, FreeBSD, or Solaris, we have to use a
more complicated check to take into account the version numbers at the
end of the "kernel"s. I couldn't find a way to just strip these
version numbers in GNU Make without shelling out, which would be even
more ugly IMO. Because these checks differ between kernels, and the
patsubst ones are quite fiddly, I've added variables for each host OS
we might want to check to make them easier to reuse.
This way no derivation has to expect that these files are in the `cwd`
during the build. This is an issue for `nix-shell`, where these files
would have to be inserted into the nix-shell's `cwd`, which becomes
problematic with e.g. recursive `nix-shell`.
To remain backwards-compatible, the location inside the build sandbox
will be kept, however using these files directly should be deprecated
from now on.
This is needed to push the adoption of structured attrs[1] forward. It's
now checked if a `__json` exists in the environment-map of the derivation
to be opened in a `nix-shell`.
Derivations with structured attributes enabled also make use of a file
named `.attrs.json` containing every environment variable represented as
JSON which is useful for e.g. `exportReferencesGraph`[2]. To
provide an environment similar to the build sandbox, `nix-shell` now
adds a `.attrs.json` to `cwd` (which is mostly equal to the one in the
build sandbox) and removes it using an exit hook when closing the shell.
To avoid leaking internals of the build-process to the `nix-shell`, the
entire logic to generate JSON and shell code for structured attrs was
moved into the `ParsedDerivation` class.
[1] https://nixos.mayflower.consulting/blog/2020/01/20/structured-attrs/
[2] https://nixos.org/manual/nix/unstable/expressions/advanced-attributes.html#advanced-attributes
Make sure that the derivation we send to the remote builder is exactly
the one that we want to build locally so that the output ids are exactly
the same
Fix#4845
Useful when we're using a daemon with a chroot store, e.g.
$ NIX_DAEMON_SOCKET_PATH=/tmp/chroot/nix/var/nix/daemon-socket/socket nix-daemon --store /tmp/chroot
Then the client can now connect with
$ nix build --store unix:///tmp/chroot/nix/var/nix/daemon-socket/socket?root=/tmp/chroot nixpkgs#hello
That doesn’t really make sense with CA derivations (and wasn’t even
really correct before because of FO derivations, though that probably
didn’t matter much in practice)
Resolve the derivation before trying to load its environment −
essentially reproducing what the build loop does − so that we can
effectively access our dependencies (and not just their placeholders).
Fix#4821
Make ca-derivations require a `ca-derivations` machine feature, and
ca-aware builders expose it.
That way, a network of builders can mix ca-aware and non-ca-aware
machines, and the scheduler will send them in the right place.
When the `keep-going` option is set to `true`, make `nix flake check`
continue as much as it can before failing.
The UI isn’t perfect as it is, since all the lines currently start with a
mostly useless `error (ignored): error:` prefix, but I’m not sure what
the best output would be, so I’ll leave it like this for the time being.
(This is a bit of a hijack of the `keep-going` flag, as it’s supposed to be a
build-time-only thing. But I think it’s fair to reuse it here.)
Fix https://github.com/NixOS/nix/issues/4450
When adding a path to the local store (via `LocalStore::addToStore`),
ensure that the `ca` field of the provided `ValidPathInfo` does indeed
correspond to the content of the path.
Otherwise any untrusted user (or any binary cache) can add arbitrary
content-addressed paths to the store (as content-addressed paths don’t
need a signature).
Linux is (as far as I know) the only mainstream operating system that
requires linking with libdl for dlopen. On BSD, libdl doesn't exist,
so on non-FreeBSD BSDs linking will currently fail. On macOS, it's
apparently just a symlink to libSystem (macOS libc), presumably
present for compatibility with things that assume Linux.
So the right thing to do here is to only add -ldl on Linux, not to add
it for everything that isn't FreeBSD.
Only considers the closure in terms of `Realisation`, ignoring all the
opaque inputs.
Dunno whether that’s the nicest solution, need to think it through a bit
Align all the worker protocol with `buildDerivation` which inlines the
realisations as one opaque json blob.
That way we don’t have to bother changing the remote store protocol
when the definition of `Realisation` changes, as long as we keep the
json backwards-compatible
Move the `closure` logic of `computeFSClosure` to its own (templated) function.
This doesn’t bring much by itself (except for the ability to properly
test the “closure” functionality independently from the rest), but it
allows reusing it (in particular for the realisations which will require
a very similar closure computation)
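A simplified, synchronous sketch of what such a templated closure helper might look like; the names are illustrative, and the real Nix version is callback-based and more general:
```cpp
#include <functional>
#include <set>
#include <string>

// Generic closure computation: start from some roots and repeatedly add the
// direct "edges" of every element until a fixed point is reached. The same
// skeleton works for store-path closures and, with a different getEdges,
// for realisation closures.
template<typename T>
std::set<T> computeClosure(
    std::set<T> roots,
    std::function<std::set<T>(const T &)> getEdges)
{
    std::set<T> res;
    std::set<T> queue = std::move(roots);

    while (!queue.empty()) {
        auto node = *queue.begin();
        queue.erase(queue.begin());
        if (!res.insert(node).second) continue;   // already visited
        for (auto & next : getEdges(node))
            if (!res.count(next))
                queue.insert(next);
    }

    return res;
}

int main()
{
    // Toy dependency graph: "a" -> {"b", "c"}, "b" -> {"c"}.
    auto edges = [](const std::string & n) -> std::set<std::string> {
        if (n == "a") return {"b", "c"};
        if (n == "b") return {"c"};
        return {};
    };
    auto closure = computeClosure<std::string>({"a"}, edges);
    return closure.size() == 3 ? 0 : 1;
}
```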
When you have a symlink like:
/tmp -> ./private/tmp
you need to resolve ./private/tmp relative to /tmp’s dir: ‘/’. Unlike
any other path output by dirOf, / ends with a slash. We don’t want
trailing slashes here since we will append another slash in the next
component, so clear s like we would if it was a symlink to an absolute
path.
This should fix at least part of the issue in
https://github.com/NixOS/nix/issues/4822, will need confirmation that
it actually fixes the problem to close though.
Introduced in f3f228700a.
Accidentally removed in ca96f52194. This
caused `nix run` to systematically fail with
```
error: app program '/nix/store/…' is not in the Nix store
```
~/.bashrc should be sourced first in the rc script so that the PATH &
other env vars set later in the rc script take precedence over the bashrc PATH.
Also, in my bashrc I alias rm as:
alias rm='rm -Iv'
To avoid running this alias (which shows ‘removed '/tmp/nix-shell.*'’),
we can just prefix rm with command.
For whatever reason, many programs trying to access SystemVersion.plist
also open SystemVersionCompat.plist; this includes Python code and
coreutils’ `cat(1)` (but not the native macOS `/bin/cat`). Illustrative
`dtruss(1m)` output:
open("/System/Library/CoreServices/SystemVersion.plist\0", 0x0, 0x0) = 3 0
open("/System/Library/CoreServices/SystemVersionCompat.plist\0", 0x0, 0x0) = 4 0
I assume this is a Big Sur change relating to the 10.16.x/11.x
version compatibility divide and that it’s something along the lines of
a hook inside libSystem.
Fixes a lot of sandboxed package builds under Big Sur.
When we don’t have enough free job slots to run a goal, we put it in
the waitForBuildSlot list & unlock its output locks. This will
continue from where we left off (tryLocalBuild). However, we need the
locks to get reacquired when/if the goal ever restarts. So, we need to
send it back through tryToBuild to reacquire those locks.
I think this bug was introduced in
https://github.com/NixOS/nix/pull/4570. It leads to some builds
starting without proper locks.
Similar to the nar-info disk cache (and using the same db).
This makes rebuilds muuch faster.
- This works regardless of the ca-derivations experimental feature.
I could modify the logic to not touch the db if the flag isn’t there,
but given that this is a trash-able local cache, it doesn’t seem to be
really worth it.
- We could unify the `NARs` and `Realisation` tables to only have one
generic kv table. This is left as an exercise to the reader.
- I didn’t update the cache db version number as the new schema just
adds a new table to the previous one, so the db will be transparently
migrated and is backwards-compatible.
Fix#4746
This was previously done in https://github.com/NixOS/nix/pull/4515 but
got clobbered away in https://github.com/NixOS/nix/pull/4594.
--------------------------------------------------------------------------------
This fixes an issue where derivations with a primary output that is
not "out" would fail with:
$ nix profile install nixpkgs#sqlite
error: opening directory '/nix/store/2a2ydlgyydly5czcc8lg12n6qqkfz863-sqlite-3.34.1-bin': No such file or directory
This happens because while derivations produce every output when
built, you might not have them if you didn't build the derivation
yourself (for instance, the store path was fetched from a binary cache).
This uses outputName provided from DerivationInfo which appears to
match the first output of the derivation.
I guess I misunderstood John's initial explanation about why wildcards
for outputs are sent to older stores[1]. My `nix-daemon` from 2021-03-26
also has version 1.29, but misses the wildcard[2]. So bumping seems to
be the right call.
[1] https://github.com/NixOS/nix/pull/4759#issuecomment-830812464
[2] 255d145ba7
Starting in macOS 11, the on-disk dylib bundles are no longer available,
but nixpkgs needs to be able to keep compatibility with older versions
that require `/usr/lib/libSystem.B.dylib` in `__impureHostDeps`. Allow
it to keep backwards compatibility with these versions by marking these
dependencies as optional.
Fixes#4658.
As described in #4745 it's otherwise fairly hard to understand where
this is coming from. Say you have an expression which uses e.g.
`types.package`:
``` nix
{ outputs = { self, nixpkgs }: {
packages.x86_64-linux.hello = let
foo = nixpkgs.lib.evalModules {
modules = [
{
options.foo.bar = with nixpkgs.lib; mkOption { type = types.package; };
}
{
foo.bar = ./.;
}
];
};
in builtins.trace foo.config.foo.bar.outPath nixpkgs.legacyPackages.x86_64-linux.hello;
defaultPackage.x86_64-linux = self.packages.x86_64-linux.hello;
};
}
```
Then you'll get an error trace like this:
```
error: 'builtins.storePath' is not allowed in pure evaluation mode
at /nix/store/p4h2x6r80njkb0j2rc1xjhhl99yri3zb-source/lib/attrsets.nix:328:15:
327| let
328| path' = builtins.storePath path;
| ^
329| res =
… while evaluating the attribute 'config.foo.bar.outPath'
at /nix/store/p4h2x6r80njkb0j2rc1xjhhl99yri3zb-source/lib/attrsets.nix:332:11:
331| name = sanitizeDerivationName (builtins.substring 33 (-1) (baseNameOf path'));
332| outPath = path';
| ^
333| outputs = [ "out" ];
… while evaluating the attribute 'packages.x86_64-linux.hello'
at /nix/store/6c1rfsqzrhjw1235palzjmf5vihcpci7-source/flake.nix:3:5:
2| { outputs = { self, nixpkgs }: {
3| packages.x86_64-linux.hello = let
| ^
4| foo = nixpkgs.lib.evalModules {
```
Fixes#4745
They are equivalent according to
<https://spec.commonmark.org/0.29/#hard-line-breaks>,
and the trailing spaces tend to be a pain (because they make git
complain, editors tend to want to remove them − the `.editorconfig`
actually specifies that − etc.).
This function doesn't support all compression methods (i.e. 'none' and
'br') so it shouldn't be exposed.
Also restore the original decompress() as a wrapper around
makeDecompressionSink().
The S3 store relies on being able to decompress things with
an empty method, because it just passes the value of the Content-Encoding
directly to decompress.
If the file is not compressed, then this will cause the compression
routine to get confused.
This caused NixOS/nixpkgs#120120.
This makes Nix look up the derivations of paths when they are passed as
store paths. So:
$ nix path-info --derivation /nix/store/0pisd259nldh8yfjvw663mspm60cn5v8-hello-2.10
now gives
/nix/store/qp724w90516n4bk5r9gfb37vzmqdh3z7-hello-2.10.drv
instead of "".
If no deriver is available, Nix now errors instead of silently
ignoring that argument.
In case of pure input-addressed derivations, the build loop doesn't
guarantee that the realisations are stored in the db (if the output paths
are already there or can be substituted, the realisations won't be
registered). This caused `nix shell` to fail in some cases because it
was assuming that the realisations were always existing.
A better (but more involved) fix would probably be to ensure that we always
register the realisations, but in the meantime this patches the surface
issue.
Fix#4721
I think that it's not very helpful to get "cached failures" in a wrong
`flake.nix`. This can be very confusing when debugging a Nix expression.
See for instance NixOS/nixpkgs#118115.
In fact, the eval cache allows a forced reevaluation which is used for
e.g. `nix eval`.
This change makes sure that this is the case for `nix build` as well. So
rather than
λ ma27 [~/Projects/exp] → ../nix/outputs/out/bin/nix build -L --rebuild --experimental-features 'nix-command flakes'
error: cached failure of attribute 'defaultPackage.x86_64-linux'
the evaluation of already-evaluated (and failed) attributes looks like
this now:
λ ma27 [~/Projects/exp] → ../nix/outputs/out/bin/nix build -L --rebuild --experimental-features 'nix-command flakes'
error: attribute 'hell' missing
at /nix/store/mrnvi9ss8zn5wj6gpn4bcd68vbh42mfh-source/flake.nix:6:35:
5|
6| packages.x86_64-linux.hello = nixpkgs.legacyPackages.x86_64-linux.hell;
| ^
7|
(use '--show-trace' to show detailed location information)
When working on some more complex Nix code, there are sometimes rather
unhelpful or misleading error messages, especially if coerce-errors are
thrown.
This patch is a first steps towards improving that. I'm happy to file
more changes after that, but I'd like to gather some feedback first.
To summarize, this patch does the following things:
* Attrsets (a.k.a. `Bindings` in `libexpr`) now have a `Pos`. This is
helpful e.g. to identify which attribute-set in `listToAttrs` is
invalid.
* The `Value`-struct has a new method named `determinePos` which tries
to guess the position of a value and falls back to a default if that's
not possible.
This can be used to provide better messages if a coercion fails.
* The new `determinePos`-API is used by `builtins.concatMap` now. With
that change, Nix shows the exact position in the error where a wrong
value was returned by the lambda.
To make sure it's still obvious that `concatMap` is the problem,
another stack-frame was added.
* The changes described above can be added to every other `primop`, but
first I'd like to get some feedback about the overall approach.
* The position of the `name`-attribute appears in the trace.
* If e.g. `meta` has no `outPath`-attribute, a `cannot coerce set to
string` error will be thrown where `pos` points to `name =` which is
highly misleading.
If there were many top-level goals (which are not destroyed until the
very end), commands like
$ nix copy --to 'ssh://localhost?remote-store=/tmp/nix' \
/run/current-system --no-check-sigs --substitute-on-destination
could fail with "Too many open files". So now we do some explicit
cleanup from amDone(). It would be cleaner to separate goals from
their temporary internal state, but that would be a bigger refactor.
This avoids an ambiguity where the `StorePathWithOutputs { drvPath, {}
}` could mean "build `drvPath`" or "substitute `drvPath`" depending on
context.
It also brings the internals closer in line to the new CLI, by
generalizing the `Buildable` type that is used there and already makes
that distinction.
In doing so, relegate `StorePathWithOutputs` to being a type just for
backwards compatibility (CLI and RPC).
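Conceptually, the distinction looks something like the sketch below; the names are placeholders, not the exact Nix types:
```cpp
#include <set>
#include <string>
#include <variant>

// Either "use this store path as-is" or "build these outputs of this
// derivation". With a variant, an empty output set can no longer silently
// turn a build request into a substitution request.
struct OpaquePath { std::string path; };
struct BuiltPath  { std::string drvPath; std::set<std::string> outputs; };

using BuildableLike = std::variant<OpaquePath, BuiltPath>;

bool isBuild(const BuildableLike & b)
{
    return std::holds_alternative<BuiltPath>(b);
}
```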
These are by no means part of the notion of a store, but rather are
things that happen to use stores. (Or put another way, there's no way
we'd make them virtual methods any time soon.) It's better to move them
out of that too-big class then.
Also, this helps us remove StorePathWithOutputs from the Store interface
altogether next commit.
This fixes builtins.fetchGit { url = ...; ref = "HEAD"; }, that works in
stable nix (v2.3.10), but is broken in nix master:
$ ./result/bin/nix repl
Welcome to Nix version 2.4pre19700101_dd77f71. Type :? for help.
nix-repl> builtins.fetchGit { url = "https://github.com/NixOS/nix"; ref = "HEAD"; }
fetching Git repository 'https://github.com/NixOS/nix'fatal: couldn't find remote ref refs/heads/HEAD
error: program 'git' failed with exit code 128
The documentation for builtins.fetchGit says ref = "HEAD" is the
default, so it should also be supported to explicitly pass it.
I came across this issue because poetry2nix can use ref = "HEAD" in some
situations.
Fixes#4674.
A few versioning mistakes were corrected:
- In 27b5747ca7, Daemon protocol had some
version `>= 0xc` that should have been `>= 0x1c`, or `28` since the
other conditions used decimal.
- In a2b69660a9, legacy SSH gated new CAS
info on version 6, but version 5 in the server. It is now 6
everywhere.
Additionally, legacy ssh was sending over more metadata than the daemon
one was. The daemon now sends that data too.
CC @regnat
Co-authored-by: Cole Helbling <cole.e.helbling@outlook.com>
According to RFC4007[1], IPv6 addresses can have a so-called zone_id
separated from the actual address with `%` as delimiter. In contrast to
Nix 2.3, the version on `master` doesn't recognize it as such:
$ nix ping-store --store ssh://root@fe80::1%18 --experimental-features nix-command
warning: 'ping-store' is a deprecated alias for 'store ping'
error: --- Error ----------------------------------------------------------------- nix
don't know how to open Nix store 'ssh://root@fe80::1%18'
I modified the IPv6 match-regex accordingly to optionally detect this
part of the address. As we don't seem to do anything special with it, I
decided to leave it as part of the URL for now.
Fixes#4490
[1] https://tools.ietf.org/html/rfc4007
I guess the rationale behind the old name was that
`pathInfoIsTrusted(info)` returns `true` iff we would need to `blindly`
trust the path (because it has no valid signature and `requireSigs` is
set), but I find it to be a really confusing footgun because it's quite
natural to give it the opposite meaning.
When starting a nix-shell with `-i` it was previously possible for it to
silently fail in the scenario where the specified interpreter didn't
exist. This happened due to the `exec` call masking the issue.
With this change we enable `execfail`, which causes the script using
`nix-shell` as interpreter to correctly exit with code 127.
Fixes: #4598
Basically, if a tarball URL is used as a flake input, and the URL leads
to a redirect, the final redirect destination would be recorded as the
locked URL.
This allows tarballs under https://nixos.org/channels to be used as
flake inputs. If we, as before, lock on to the original URL it would
break every time the channel updates.
Local git repositories are normally used directly instead of
cloning. This commit checks if a repo is bare and forces a
clone.
Co-authored-by: Théophane Hufschmitt <regnat@users.noreply.github.com>
What happened was that Nix was trying to unconditionally mount these
paths in fixed-output derivations, but since the outer derivation was
pure, those paths did not exist. The solution is to only mount those
paths when they exist.
This separates the scheduling logic (including simple hook pathway) from
the local-store needing code.
This should be the final split for now. I'm reasonably happy with how
it's turning out, even before I'm done moving code into
`local-derivation-goal`. Benefits:
1. This will help "witness" that the hook case is indeed a lot simpler,
and also compensate for the increased complexity that comes from
content-addressed derivation outputs.
2. It also moves us ever so slightly towards a world where we could use
off-the-shelf storage or sandboxing, since `local-derivation-goal`
would be gutted in those cases, but `derivation-goal` should remain
nearly the same.
The new `#if 0` in the new files will be deleted in the following
commit. I keep it here so if it turns out more stuff can be moved over,
it's easy to do so in a way that preserves ordering --- and thus
prevents conflicts.
N.B.
```sh
git diff HEAD^^ --color-moved --find-copies-harder --patience --stat
```
makes nicer output.
This is probably what most people expect it to do. Fixes#3781.
There is a new command 'nix flake lock' that has the old behaviour of
'nix flake update', i.e. it just adds missing lock file entries unless
overridden using --update-input.
This is already used by Hydra, and is very useful when materializing
a remote builder list from service discovery. This allows the service
discovery tool to only sync one file instead of two.
This is technically a breaking change, since attempting to set plugin
files after the first non-flag argument will now throw an error. This
is acceptable given the relative lack of stability in a plugin
interface and the need to tie the knot somewhere once plugins can
actually define new subcommands.
This field used to be a `BasicDerivation`, but this `BasicDerivation`
was downcast to a `Derivation` when needed (implicitly or not), so we
might as well make it a full `Derivation` and upcast it when needed.
This also allows getting rid of a weird duplication in the way we
compute the static output hashes for the derivation. We had to
do it differently and in a different place depending on whether the
derivation was a full derivation or just a basic drv, but we can now do
it unconditionally on the full derivation.
Fix#4559
- Pass it the name of the outputs rather than their output paths (as
these don't exist for ca derivations)
- Get the built output paths from the remote builder
- Register the new received realisations
PR #4240 changed the message of the error that was thrown when an auto-called
function was missing an argument.
However this change also changed the type of the error, from `EvalError`
to a new `MissingArgumentError`. This broke hydra which was relying on
an `EvalError` being thrown.
Make `MissingArgumentError` a subclass of `EvalError` to un-break hydra.
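The shape of that fix, sketched with plain exception classes (Nix uses its own error machinery, so this is only illustrative):
```cpp
#include <stdexcept>

// Making MissingArgumentError derive from EvalError means any existing
// `catch (EvalError &)` handler, like Hydra's, keeps working.
struct EvalError : std::runtime_error
{
    using std::runtime_error::runtime_error;
};

struct MissingArgumentError : EvalError
{
    using EvalError::EvalError;
};

int main()
{
    try {
        throw MissingArgumentError("cannot auto-call a function that has an argument without a default value");
    } catch (EvalError &) {
        // The more general handler still catches the more specific error.
        return 0;
    }
}
```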
When performing distributed builds of machine learning packages, it
would be nice if builders without the required SIMD instructions can
be excluded as build nodes.
Since x86_64 has accumulated a large number of different instruction
set extensions, listing all possible extensions would be unwieldy.
AMD, Intel, Red Hat, and SUSE have recently defined four different
microarchitecture levels that are now part of the x86-64 psABI
supplement and will be used in glibc 2.33:
https://gitlab.com/x86-psABIs/x86-64-ABI
https://lwn.net/Articles/844831/
This change uses libcpuid to detect CPU features and then uses them to
add the supported x86_64 levels to the additional system types. For
example on a Ryzen 3700X:
$ ~/aps/bin/nix -vv --version | grep "Additional system"
Additional system types: i686-linux, x86_64-v1-linux, x86_64-v2-linux, x86_64-v3-linux
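A heavily simplified sketch of that detection with libcpuid, checking only a couple of representative flags per level; the real x86-64-v2/v3 psABI levels are defined by longer feature lists, and the function name here is illustrative:
```cpp
#include <iostream>
#include <string>
#include <vector>

#include <libcpuid/libcpuid.h>

std::vector<std::string> extraX86_64Systems()
{
    std::vector<std::string> res;
    if (!cpuid_present()) return res;

    cpu_raw_data_t raw;
    cpu_id_t data;
    if (cpuid_get_raw_data(&raw) < 0 || cpu_identify(&raw, &data) < 0)
        return res;

    // Representative flags only; not the full psABI requirements.
    if (data.flags[CPU_FEATURE_SSE4_2])
        res.push_back("x86_64-v2-linux");
    if (data.flags[CPU_FEATURE_AVX2] && data.flags[CPU_FEATURE_FMA3])
        res.push_back("x86_64-v3-linux");
    return res;
}

int main()
{
    for (auto & s : extraX86_64Systems())
        std::cout << s << "\n";
}
```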
That way we
1. Don't have to recompute them several times
2. Can compute them in a place where we know the type of the parent
derivation, meaning that we don't need the casting dance we had before
Once a build is done, get back to the original derivation, and register
all the newly built outputs for this derivation.
This allows Nix to work properly with derivations that don't have all
their build inputs available − thus allowing garbage collection and
(once it's implemented) binary substitution
Change the `nix-build` logic for linking/printing the output paths to allow for
some outputs to be missing. This might happen when the toplevel
derivation didn't have to be built, either because all the required
outputs were already there, or because they have all been substituted.
I tested a trivial program that called kill(-1, SIGKILL), which was
run as the only process for an unprivileged user, on Linux and
FreeBSD. On Linux, kill reported success, while on FreeBSD it failed
with EPERM.
POSIX says:
> If pid is -1, sig shall be sent to all processes (excluding an
> unspecified set of system processes) for which the process has
> permission to send that signal.
and
> The kill() function is successful if the process has permission to
> send sig to any of the processes specified by pid. If kill() fails,
> no signal shall be sent.
and
> [EPERM]
> The process does not have permission to send the signal to any
> receiving process.
My reading of this is that kill(-1, ...) may fail with EPERM when
there are no other processes to kill (since the current process is
ignored). Since kill(-1, ...) only attempts to kill processes the
user has permission to kill, it can't mean that we tried to kill
something we didn't have permission to kill, so it should be fine to
interpret EPERM the same as success here for any POSIX-compliant
system.
This fixes an issue that Mic92 encountered[1] when he tried to review a
Nixpkgs PR on FreeBSD.
[1]: https://github.com/NixOS/nixpkgs/pull/81459#issuecomment-606073668
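A sketch of that interpretation in code (simplified; the real code runs in a forked child and handles more cases, and the function name is hypothetical):
```cpp
#include <signal.h>

#include <cerrno>
#include <cstring>
#include <stdexcept>
#include <string>

// Kill everything this (unprivileged) user is allowed to kill. EPERM can
// simply mean "nothing left that we may signal" (e.g. on FreeBSD), so treat
// it like ESRCH rather than as a fatal error.
void killAllUserProcesses()
{
    if (kill(-1, SIGKILL) == 0) return;
    if (errno == ESRCH || errno == EPERM) return;
    throw std::runtime_error(
        std::string("cannot kill user processes: ") + std::strerror(errno));
}
```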
We upgrade to lowdown 0.8.0 [1] which contains a fix/improvement to a
behavior mentioned in this issue thread [2] where a big part of
lowdown's API would just call exit(1) on allocation errors since that
is a satisfying behavior for the lowdown binary.
Now lowdown_term_rndr returns 0 if an allocation error occurred which we
check for in libcmd/markdown.cc.
Also the extern "C" { } wrapper around lowdown.h has been removed as it
is not necessary.
[1]: 6ca7c855a0/versions.xml (L987-L1006)
[2]: https://github.com/kristapsdz/lowdown/issues/45#issuecomment-756681153
In addition to being some ugly template trickery, it was also totally
useless, as it was used in only one place where I could replace it with
just a few extra characters.
This makes nix search always go through the first level of an
attribute set, even if it's not a top level attribute. For instance,
you can now list all GHC compilers with:
$ nix search nixpkgs#haskell.compiler
...
This is similar to how nix-env works when you pass in -A.
This fixes an issue where derivations with a primary output that is
not "out" would fail with:
$ nix profile install nixpkgs#sqlite
error: opening directory '/nix/store/2a2ydlgyydly5czcc8lg12n6qqkfz863-sqlite-3.34.1-bin': No such file or directory
This happens because while derivations produce every output when
built, you might not have them if you didn't build the derivation
yourself (for instance, the store path was fetched from a binary cache).
This uses outputName provided from DerivationInfo which appears to
match the first output of the derivation.
Where a `RealisedPath` is a store path with its history, meaning either
an opaque path for stuff that has been directly added to the store, or a
`Realisation` for stuff that has been built by a derivation
This is a low-level refactoring that doesn't bring anything by itself
(except a few dozen extra lines of code :/ ), but raising the
abstraction level a bit is important on a number of levels:
- Commands like `nix build` have to query for the realisations after the
build is finished which is fragile (see
27905f12e4a7207450abe37c9ed78e31603b67e1 for example). Having them
operate directly at the realisation level would avoid that
- Others like `nix copy` currently operate directly on (built) store
paths, but need a bit more information as they will need to register
the realisations on the remote side
Example:
error: builder for '/nix/store/9ysqfidhipyzfiy54mh77iqn29j6cpsb-failing.drv' failed with exit code 1;
last 1 log lines:
> FAIL
For full logs, run 'nix log /nix/store/9ysqfidhipyzfiy54mh77iqn29j6cpsb-failing.drv'.
… while importing '/nix/store/pfp4a4bjh642ylxyipncqs03z6kkgfvy-failing'
at /nix/store/25wgzr2qrqqiqfbdb1chpiry221cjglc-source/flake.nix:58:15:
57|
58| ifd = import self.hydraJobs.broken;
| ^
59|