This removes a dynamic stack allocation, making the derivation
unparsing logic robust against overflows when large strings are
added to a derivation.
Overflow behavior depends on the platform and stack configuration.
For instance, x86_64-linux/glibc behaves as (somewhat) expected:
$ (ulimit -s 20000; nix-instantiate tests/lang/eval-okay-big-derivation-attr.nix)
error: stack overflow (possible infinite recursion)
$ (ulimit -s 40000; nix-instantiate tests/lang/eval-okay-big-derivation-attr.nix)
error: expression does not evaluate to a derivation (or a set or list of those)
However, on aarch64-darwin:
$ nix-instantiate big-attr.nix
zsh: segmentation fault nix-instantiate big-attr.nix
This indicates a slight flaw in the single stack protection page
approach that is not encountered with normal stack frames.
Unless `--precise` is passed, make `nix why-depends` only show the
dependencies between the store paths, without introspecting them to
find the actual references.
This also makes it ~3x faster.
There already existed a smoke test for the link content length,
but it appears that some corruptions are pernicious enough
to replace the file content with zeros while keeping the same length.
--repair-path now goes as far as checking the content of the link,
making it true to its name and actually repairing the path in such
corruption cases.
nixpkgs can save a good bit of eval memory with this primop. zipAttrsWith is
used quite a bit around nixpkgs (eg in the form of recursiveUpdate), but the
most costly application for this primop is in the module system. it improves
on the nixpkgs implementation of zipAttrsWith by not checking an attribute
multiple times if it occurs more than once in the input list, by allocating
fewer values and set elements, and by generally avoiding temporary objects.
nixpkgs has a more generic version of this operation, zipAttrsWithNames, but
that version is only used once, so it isn't suitable as the basis of a new
primop. if it were used more we should add a second primop instead.
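roughly, the intended semantics (a sketch mirroring nixpkgs' lib.zipAttrsWith; the attribute names are made up):

    builtins.zipAttrsWith (name: values: values)
      [ { a = 1; } { a = 2; b = 3; } ]
    # => { a = [ 1 2 ]; b = [ 3 ]; }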
When we check for disappeared overrides, we can get "false positives"
for follows and overrides which are defined in the dependencies of the
flake we are locking, since they are not parsed by
parseFlakeInputs. However, at that point we already know that the
overrides couldn't possibly have been changed if the input itself
hasn't changed (since we check that oldLock->originalRef == *input.ref
for the input's parent). So, to prevent this, only perform this check
when it was possible for the flake to have changed (e.g. it's the flake
we're locking, or a new input, or the input has changed and mustRefetch ==
true).
When a variable is assigned in the REPL, make sure to remove any possible reference to the old one so that we correctly pick up the new one afterwards.
Fix #5706
Previously, when we were attempting to reuse the old lockfile
information in the computeLocks function, we passed the parent of
the current input to the next computeLocks call. This was incorrect,
since the follows are resolved relative to the parent. This caused
issues when we tried to reuse oldLock but couldn't for some
reason (read: mustRefetch is true); in that case the follows were
resolved incorrectly.
Fix this by passing the correct parent, and adding some tests to
prevent this particular regression from happening again.
Closes https://github.com/NixOS/nix/issues/5697
If we’re in pure eval mode, then tell that in the error message rather
than (wrongly) speaking about restricted mode.
Fix https://github.com/NixOS/nix/issues/5611
When an input follows disappears, we can't just reuse the old lock
file entries since we may be missing some required ones. Refetch the
input when this happens.
Closes https://github.com/NixOS/nix/issues/5289
When setting flake-local options (with the `nixConfig` field), forward
these options to the daemon in case we’re using one.
This is necessary in particular for options like `binary-caches` or
`post-build-hook` to make sense.
Fix <343239fc8a (r44356843)>
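For illustration, a flake-local setting might look like this (just a sketch; the hook path is hypothetical):

    {
      # settings declared here are now also forwarded to the daemon
      nixConfig.post-build-hook = "/etc/nix/upload-to-cache.sh";
      outputs = { self }: { };
    }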
Having the `post-build-hook` use `nix` from the client package can lead
to a deadlock in case there’s a db migration to do between both, as a
`nix` command running inside the hook will run as root (and as such will
bypass the daemon), so might trigger a db migration, which will get
stuck trying to get a global lock on the DB (as the daemon that ran the
hook already has a lock on it).
When running a `:b` command in the repl, after building the derivations,
query the store for their outputs rather than just assuming that they are
known in the derivation itself (which isn’t true for CA derivations).
Fix #5328
Rather than having them plain strings scattered through the whole
codebase, create an enum containing all the known experimental features.
This means that
- Nix can now `warn` when an unknown experimental feature is passed
(making it much nicer to spot typos and deprecated features)
- It’s now easy to remove a feature altogether (once the feature isn’t
experimental anymore or is dropped) by just removing the field from the
enum and letting the compiler point us to all the now-invalid usages
of it.
The min bound written corresponds to the date of the commit that
introduced the change, but it only got merged on master some weeks
later. Since the version is essentially the commit date, that means that
there’s a whole range of commits on master (including the current
`nixUnstable`) that have a higher version but don’t contain the required
change.
This fixes a bug in the garbage collector where if a path
/nix/store/abcd-foo is valid, but we do a
isValidPath("/nix/store/abcd-foo.lock") first, then a negative entry
for /nix/store/abcd is added to pathInfoCache, so /nix/store/abcd-foo
is subsequently considered invalid and deleted.
The garbage collector no longer blocks other processes from
adding/building store paths or adding GC roots. To prevent the
collector from deleting store paths just added by another process,
processes need to connect to the garbage collector via a Unix domain
socket to register new temporary roots.
We now build the context (so this has the side-effect of making
builtins.{path,filterSource} work on derivations outputs, if IFD is
enabled) and then check that the path has no references (which is what
we really care about).
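A rough sketch of what this enables (assuming import-from-derivation is allowed; the derivation below is a made-up example that expects a working /bin/sh in the build environment):

    let
      drv = derivation {
        name = "example";
        system = builtins.currentSystem;
        builder = "/bin/sh";
        args = [ "-c" "echo hello > $out" ];
      };
    in builtins.path {
      # building the string context forces `drv` to be realised first,
      # so its output exists on disk and can be re-added to the store
      path = "${drv}";
      name = "example-copy";
    }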
This actually bit me quite recently in `nixpkgs` because I assumed that
`nix-build --check` would also error out if hashes don't match anymore[1]
and so I wrongly assumed that I couldn't reproduce the mismatch error.
The fix is rather simple, during the output registration a so-called
`delayedException` is instantiated e.g. if a FOD hash-mismatch occurs.
However, in case of `nix-build --check` (or `--rebuild` in case of `nix
build`), the code-path where this exception is thrown will never be
reached.
By adding that check to the if-clause that causes an early exit in case
of `bmCheck`, the issue is gone. Also added a (previously failing)
test-case to demonstrate the problem.
[1] https://github.com/NixOS/nixpkgs/pull/139238, the underlying issue
was that `nix-prefetch-git` returns different hashes than `fetchgit`
because the latter one fetches submodules by default.
If the store path contains a flake, this means that a command like
"nix path-info /path" will show info about /path, not about the
default output of the flake in /path. If you want the latter, you can
explicitly ask for it by doing "nix path-info path:/path".
Fixes #4568.
When doing e.g.
nix-build -A package --keep-failed --option \
builders \
'ssh://mfhydra?remote-store=/home/bosch/store x86_64-linux - 10 4 big-parallel'
this doesn't work properly because this build-setting is ignored.
I changed this behavior by passing the `settings.keepFailed` through the
serve-protocol to remote machines to make sure that I can introspect the
build-directory (which is particularly helpful when I have to look at a
`config.log` from a failed build for instance).
When `NIX_DAEMON_PACKAGE` is set, make all the tests use the Nix daemon.
That way we can test every piece of Nix functionality both with and
without the daemon.
Tests for which using the daemon isn’t possible or doesn’t make sense can
selectively be disabled with `needLocalStore`.
Some people want to avoid using registries at all on their system; instead
of having to add --no-registries to every command, this commit allows setting
use-registries = false in the config. --no-registries is still allowed
everywhere it was allowed previously, but is now deprecated.
Co-authored-by: Eelco Dolstra <edolstra@gmail.com>
Fill `NIX_CONFIG` with the value of the current Nix configuration before
calling the nix subprocesses in the repl.
That way the whole configuration (including the possible
`experimental-features`, a possible `--store` option or whatever) will
be made available.
This is required for example to make `nix repl` work with a custom
`--store`.
We need to support it for the “old” fetch* functions for backwards
compatibility, but we don’t need it for fetchTree (as it’s a new
function).
Given that changing the `name` messes up the content hashing, we can
just forbid passing a custom `name` argument to it.
Fixes this random failure:
error: hash mismatch in fixed-output derivation '/tmp/nix-shell.EUgAVU/nix-test/tests/check/store/sfps3l3c5n7dabpx34kigxnfhmrwk2h6-dummy.drv':
specified: sha256-47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=
got: sha256-0qhPS4tlCTfsj3PNi+LHSt1akRumTfJ0WO2CKdqASiY=
which happens because multiple tests were writing to ./dummy.
Add an access-control list to the realisations in recursive-nix (similar
to the already existing one for store paths), so that we can build
content-addressed derivations in the restricted store.
Fix #4353
This way no derivation has to expect that these files are in the `cwd`
during the build. This is problematic for `nix-shell`, where these files
would have to be inserted into the nix-shell's `cwd`, which can become
an issue with e.g. recursive `nix-shell`.
To remain backwards-compatible, the location inside the build sandbox
will be kept, however using these files directly should be deprecated
from now on.
This is needed to push the adoption of structured attrs[1] forward. It's
now checked if a `__json` exists in the environment-map of the derivation
to be opened in a `nix-shell`.
Derivations with structured attributes enabled also make use of a file
named `.attrs.json` containing every environment variable represented as
JSON which is useful for e.g. `exportReferencesGraph`[2]. To
provide an environment similar to the build sandbox, `nix-shell` now
adds a `.attrs.json` to `cwd` (which is mostly equal to the one in the
build sandbox) and removes it using an exit hook when closing the shell.
To avoid leaking internals of the build-process to the `nix-shell`, the
entire logic to generate JSON and shell code for structured attrs was
moved into the `ParsedDerivation` class.
[1] https://nixos.mayflower.consulting/blog/2020/01/20/structured-attrs/
[2] https://nixos.org/manual/nix/unstable/expressions/advanced-attributes.html#advanced-attributes
Resolve the derivation before trying to load its environment −
essentially reproducing what the build loop does − so that we can
effectively access our dependencies (and not just their placeholders).
Fix #4821
Make ca-derivations require a `ca-derivations` machine feature, and
ca-aware builders expose it.
That way, a network of builders can mix ca-aware and non-ca-aware
machines, and the scheduler will send them to the right place.
When the `keep-going` option is set to `true`, make `nix flake check`
continue as much as it can before failing.
The UI isn’t perfect as it is, since all the lines currently start with a
mostly useless `error (ignored): error:` prefix, but I’m not sure what
the best output would be, so I’ll leave it like this for the time being.
(This is a bit of a hijack of the `keep-going` flag as it’s supposed to be a
build-time-only thing. But I think it’s fair to reuse it here.)
Fix https://github.com/NixOS/nix/issues/4450
When adding a path to the local store (via `LocalStore::addToStore`),
ensure that the `ca` field of the provided `ValidPathInfo` does indeed
correspond to the content of the path.
Otherwise any untrusted user (or any binary cache) can add arbitrary
content-addressed paths to the store (as content-addressed paths don’t
need a signature).
Similar to the nar-info disk cache (and using the same db).
This makes rebuilds much faster.
- This works regardless of the ca-derivations experimental feature.
I could modify the logic to not touch the db if the flag isn’t there,
but given that this is a trash-able local cache, it doesn’t seem to be
really worth it.
- We could unify the `NARs` and `Realisation` tables to only have one
generic kv table. This is left as an exercise to the reader.
- I didn’t update the cache db version number as the new schema just
adds a new table to the previous one, so the db will be transparently
migrated and is backwards-compatible.
Fix #4746
First, "XDG_CONFIG_HOME" shouldn't be named "home", as it may be
confusing compared with `$HOME`, which an upcoming test will be using.
Then, using a fixed location for the test is problematic. Use
`$TEST_ROOT` instead.
This requires adding `nix` to its own closure which is a bit unfortunate,
but as it is optional (the test will be disabled if `OUTER_NIX` is unset) it
shouldn't be too much of an issue.
(Ideally this should go in another derivation so that we can build Nix and run
the test independently, but as the tests are running in the same derivation
as the build it's a bit complicated to do so).
Requires a slight update to the test infra to work properly, but
having the possibility to group tests that way makes the whole thing
much cleaner imho.
This was
- Added in dbf96e10ec.
- Commented out in 07975979aa, which I
believe only reached master by mistake.
- Deleted in c32168c9bc, when
`tests/build-hook-ca.nix` was reused for a new test.
But the test works, and we ought to have it.
This is probably what most people expect it to do. Fixes #3781.
There is a new command 'nix flake lock' that has the old behaviour of
'nix flake update', i.e. it just adds missing lock file entries unless
overridden using --update-input.
This is technically a breaking change, since attempting to set plugin
files after the first non-flag argument will now throw an error. This
is acceptable given the relative lack of stability in a plugin
interface and the need to tie the knot somewhere once plugins can
actually define new subcommands.
When performing distributed builds of machine learning packages, it
would be nice if builders without the required SIMD instructions could
be excluded as build nodes.
Since x86_64 has accumulated a large number of different instruction
set extensions, listing all possible extensions would be unwieldy.
AMD, Intel, Red Hat, and SUSE have recently defined four different
microarchitecture levels that are now part of the x86-64 psABI
supplement and will be used in glibc 2.33:
https://gitlab.com/x86-psABIs/x86-64-ABI
https://lwn.net/Articles/844831/
This change uses libcpuid to detect CPU features and then uses them to
add the supported x86_64 levels to the additional system types. For
example on a Ryzen 3700X:
$ ~/aps/bin/nix -vv --version | grep "Additional system"
Additional system types: i686-linux, x86_64-v1-linux, x86_64-v2-linux, x86_64-v3-linux
It's now
at /home/eelco/Dev/nixpkgs/pkgs/applications/misc/hello/default.nix:7:7:
instead of
at: (7:7) in file: /home/eelco/Dev/nixpkgs/pkgs/applications/misc/hello/default.nix
The new format is more standard and clickable.
This change is to simplify [Trustix](https://github.com/tweag/trustix) indexing and makes it possible to reconstruct this URL regardless of the compression used.
In particular this means that 7c2e9ca597/contrib/nix/nar/nar.go (L61-L71) can be removed and only the bits that are required to establish trust need to be published in the Trustix build logs.
For each known realisation, store:
- its output
- its output path
This comes with a set of needed changes:
- New `realisations` module declaring the types needed for describing
these mappings
- New `Store::registerDrvOutput` method registering all the needed information
about a derivation output (also replaces `LocalStore::linkDeriverToPath`)
- new `Store::queryRealisation` method to retrieve the information for a
derivation output
This introduces some redundancy on the remote-store side between
`wopQueryDerivationOutputMap` and `wopQueryRealisation`.
However we might need to keep both (regardless of backwards compat)
because we sometimes need to get some information for all the outputs of a
derivation (where `wopQueryDerivationOutputMap` is handy), but not all the
stores can implement it − because listing all the outputs of a
derivation isn't really possible for binary caches where the server
doesn't allow listing a directory.
Otherwise https://cache.nixos.org is chosen by default, causing the OSX
testsuite to hang inside the sandbox.
(In a way, this is probably sweeping an actual bug under the carpet, as
Nix should be able to time out gracefully in such a case, but that's
beyond my macOS-fu)
Without setting HGPLAIN, the user's environment leaks into
hg invocations, which means that the output may not be in the
expected format.
HGPLAIN is the Mercurial-recommended solution for this in that
it's intended for use by scripts and programs which are looking
to parse Mercurial's output in a consistent manner.
Although the non-resolved derivation will never get a cache-hit (it
doesn't have an output path to query the cache for anyways), we might
get one on the resolved derivation.
Having vm-test-run-unnamed for all the test derivations doesn't look very
nice, so in order to better distinguish them from their store path,
let's actually give them proper names.
Signed-off-by: aszlig <aszlig@nix.build>
Perl-based tests are deprecated since NixOS 20.03 and subsequently got
removed in NixOS 20.09, which effectively means that tests are going to
fail as soon as we build it with NixOS 20.09 or anything newer.
I've put "# fmt: off" at the start of every testScript, because
formatting with Black really messes up indentation and I don't think it
really adds anything in value or readability for inlined Python scripts.
Signed-off-by: aszlig <aszlig@nix.build>
In particular, this means that derivations can output derivations. But
that ramification isn't (yet!) as useful as we would want, since there is
no way to have a dependent derivation that is itself a dependent
derivation.
tar(1) on FreeBSD does not use standard output or input when the -f flag
is not provided. Instead, it defaults to /dev/sa0.
Make this tar invocation a bit more robust and explicitly tell tar(1) to
use standard output.
This is one of the issues discovered while porting Nix to FreeBSD. It has
been tested and committed locally to FreeBSD ports:
https://svnweb.freebsd.org/ports/head/sysutils/nix/Makefile?revision=550026&view=markup#l108
Otherwise the result of the printing can't be parsed back correctly by
Nix (because the unescaped `${` will be parsed as the beginning of an
anti-quotation).
Fix #3989
This seems more correct. It also means one can specify the features a
store should support with --store and remote-store=..., which is useful.
I use this to clean up the build remotes test.
Before, processConnection wanted to know a user name and user id, and
`nix-daemon --stdio`, when it isn't proxying to an underlying daemon,
would just assume "root" and 0. But `nix-daemon --stdio` (no proxying)
shouldn't make guesses about who holds the other end of its standard
streams.
Now processConnection takes an "auth hook", so `nix-daemon` can provide
the appropriate policy and daemon.cc doesn't need to know or care what
it is.
If a repo is dirty, it used to return a `rev` object with an "empty"
sha1 (0000000000000000000000000000000000000000). Please note that this
only applies for `builtins.fetchGit` and *not* for `builtins.fetchTree{
type = "git"; }`.
The original idea was to implement a git-fetcher in Nix's core that
supports content hashes[1]. In #3549[2] it has been suggested to
actually use `fetchTree` for this since it's a fairly generic wrapper
over the new fetcher-API[3] and already supports content-hashes.
This patch implements a new git-fetcher based on `fetchTree` by
incorporating the following changes:
* Removed the original `fetchGit`-implementation and replaced it with an
alias on the `fetchTree` implementation.
* Ensured that the `git`-fetcher from `libfetchers` always computes a
content-hash and returns an "empty" revision on dirty trees (the
latter one is needed to retain backwards-compatibility).
* The hash-mismatch error in the fetcher-API exits with code 102, as
usually happens whenever a hash mismatch is detected by Nix.
* Removed the `flakes`-feature-flag: I didn't see a reason why this API
is so tightly coupled to the flakes-API and at least `fetchGit` should
remain usable without any feature-flags.
* It's only possible to specify a `narHash` for a `git`-tree if either a
`ref` or a `rev` is given[4].
* It's now possible to specify a URL without a protocol. If it's missing,
`file://` is automatically added, as was the case in the original
`fetchGit` implementation.
[1] https://github.com/NixOS/nix/pull/3216
[2] https://github.com/NixOS/nix/pull/3549#issuecomment-625194383
[3] https://github.com/NixOS/nix/pull/3459
[4] https://github.com/NixOS/nix/pull/3216#issuecomment-553956703
Originally, the test was only checking for a different “real” storeDir.
That’s an easy case to handle, but the much harder one is if different
virtual store dirs are used. To do this, we need the SubstitutionGoal
to know about the ca, so it can recalculate the path to copy it over.
An important note here is that the store path passed to copyStorePath
needs to be one for srcStore - so that queryPathInfo works properly.
This also adds an error message when the store path from queryPathInfo
is different from the one we requested.
We can’t use a custom name here because different names will have
different store paths. This is a limitation of the Store API’s
reliance on store paths.
We might be able to get around the above in the future by using a
dummy name for certain fixed output paths.
This fixes an issue where lockfile generation was not idempotent:
after updating a lockfile, a "follows" node would end up pointing to a
new copy of the node, rather than to the original node.
The `remote-store` test loads the `user-env` one to test nix-env when
using the daemon, but actually does it incorrectly because every test
starts (in `common.sh`) by resetting the value of `NIX_REMOTE`, meaning
that the `user-env` test will never use the daemon.
Fix this by setting `NIX_REMOTE_` before sourcing `user-env.sh` in the
`remote-store` test, so that `NIX_REMOTE` is correctly set inside the
test.
The initial contents of the flake are specified by the
'templates.<name>' or 'defaultTemplate' output of another flake. E.g.
outputs = { self }: {
templates = {
nixos-container = {
path = ./nixos-container;
description = "An example of a NixOS container";
};
};
};
allows
$ nix flake init -t templates#nixos-container
Also add a command 'nix flake new', which is identical to 'nix flake
init' except that it initializes a specified directory rather than the
current directory.
The previous regex was too strict and did not match what git was allowing. It
could lead to `fetchGit` not accepting valid branch names, even though they
exist in a repository (for example, branch names containing `/`, which are
pretty standard, like `release/1.0` branches).
The new regex defines what a branch name should **NOT** contain. It takes the
definitions from `refs.c` in https://github.com/git/git and `git help
check-ref-format` pages.
This change also introduces a test for ref name validity checking, which
compares the result from Nix with the result of `git check-ref-format --branch`.
This makes 'nix flake' less cluttered and more consistent (it's only
subcommands that operate on a flake). Also, the registry is not
inherently flake-related (e.g. fetchTree could also use it to remap
inputs).
Motivation: maintain project-level configuration files.
Document the whole situation a bit better so that it corresponds to the
implementation, and add NIX_USER_CONF_FILES that allows overriding
which user files Nix will load during startup.
A test case for correct handling of temporary directory deletion that
was added to check.sh as part of PR #2689 was initially disabled for
Darwin because of a directory permission issue in PR #2688.
Now that the issue in PR #2688 is fixed, this commit enables the test
case for Darwin.
Temporarily add user-write permission to build directory so that it
can be moved out of the sandbox to the store with a .check suffix.
This is necessary because the build directory has already had its
permissions set read-only, but write permission is required
to update the directory's parent link to move it out of the sandbox.
Updated the related --check "derivation may not be deterministic"
messages to consistently use the real store paths.
Added test for non-root sandbox nix-build --check -K to demonstrate
issue and help prevent regressions.
Future editions of flakes or the Nix language can be supported by
renaming flake.nix (e.g. flake-v2.nix). This avoids a bootstrap
problem where we don't know which grammar to use to parse
flake*.nix. It also allows a project to support multiple flake
editions, in theory.
With --check and the --keep-failed (-K) flag, the temporary directory
was being retained regardless of whether the build was successful and
reproducible. This removes the temporary directory, as expected, on
a reproducible check build.
Added tests to verify that temporary build directories are not
retained unnecessarily, particularly when using --check with
--keep-failed.
This provides a pluggable mechanism for defining new fetchers. It adds
a builtin function 'fetchTree' that generalizes existing fetchers like
'fetchGit', 'fetchMercurial' and 'fetchTarball'. 'fetchTree' takes a
set of attributes, e.g.
fetchTree {
type = "git";
url = "https://example.org/repo.git";
ref = "some-branch";
rev = "abcdef...";
}
The existing fetchers are just wrappers around this. Note that the
input attributes to fetchTree are the same as flake input
specifications and flake lock file entries.
All fetchers share a common cache stored in
~/.cache/nix/fetcher-cache-v1.sqlite. This replaces the ad hoc caching
mechanisms in fetchGit and download.cc (e.g. ~/.cache/nix/{tarballs,git-revs*}).
This also adds support for Git worktrees (c169ea5904).
This allows querying the location of function arguments. E.g.
builtins.unsafeGetAttrPos "x" (builtins.functionArgs ({ x }: null))
=> { column = 57; file = "/home/infinisil/src/nix/inst/test.nix"; line = 1; }
This is now done in a single pass. Also fixes some issues when
updating flakes with circular dependencies. Finally, when using
'--recreate-lock-file --commit-lock-file', the commit message now
correctly shows the differences.
If you do a fetchTree on a Git repository, whether the result contains
a revCount attribute should not depend on whether that repository
happens to be a shallow clone or not. That would complicate caching a
lot and would be semantically messy. So applying fetchTree/fetchGit to
a shallow repository is now an error unless you pass the attribute
'shallow = true'. If 'shallow = true', we don't return revCount, even
if the repository is not actually shallow.
Note that Nix itself is not doing shallow clones at the moment. But it
could do so as an optimisation if the user specifies 'shallow = true'.
Issue #2988.
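For example (a sketch; the URL is just a placeholder):

    builtins.fetchGit {
      url = "https://example.org/repo.git";
      # required when the underlying clone is shallow; revCount is then omitted
      shallow = true;
    }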
Worktrees[1] are a feature of git which allow you to check out a ref in
a different directory.
While playing around with flakes I realized that git repositories in a
worktree checkout break when trying to build a flake:
```
$ git worktree add ../nixpkgs-flakes nixpkgs-flakes
$ cd ../nixpkgs-flakes
$ nix build .#hello
error: opening directory '/home/ma27/Projects/nixpkgs-flakes/.git/refs/heads': Not a directory
```
This issue has been fixed by determining with `git rev-parse --git-common-dir`
where the actual `.git` directory is.
Please note that this issue only exists on the `flakes` branch, fetching
worktree checkouts with Nix master seems to work fine.
[1] https://git-scm.com/docs/git-worktree
This allows all supported fetchers to be used, e.g.
builtins.fetchTree {
type = "github";
owner = "NixOS";
repo = "nix";
rev = "d4df99a3349cf2228a8ee78dea320afef86eb3ba";
}
This improves reproducibility and may be faster than fetching from the
original source (especially for git/hg inputs, but probably also for
github inputs - our binary cache is probably faster than GitHub's
dynamically generated tarballs).
Unfortunately this doesn't work for the top-level flake since even if
we know the NAR hash of the tree, we don't know the other tree
attributes such as revCount and lastModified.
Fixes #3253.
Typical usage:
$ nix flake update ~/Misc/eelco-configurations/hagbard --update-input nixpkgs
to update the 'nixpkgs' input of a flake while leaving every other
input unchanged.
The argument is an input path, so you can do e.g. '--update-input
dwarffs/nixpkgs' to update an input of an input.
Fixes #2928.
Added a flag --no-update-lock-file to barf if the lock file needs any
changes. This is useful for CI systems if you're building a
checkout. Fixes #2947.
Renamed --no-save-lock-file to --no-write-lock-file. It is now a fatal
error if the lock file needs changes but --no-write-lock-file is not
given.
E.g.
$ nix flake update ~/Misc/eelco-configurations/hagbard \
--override-input 'dwarffs/nixpkgs' ../my-nixpkgs
overrides the 'nixpkgs' input of the 'dwarffs' input of the top-level
flake.
Fixes #2837.
When computing a lock file, we now respect the lock files of flake
inputs. This is important for usability / reproducibility. For
example, the 'nixops' flake depends on the 'nixops-aws' and
'nixops-hetzner' repositories. So when the 'nixops' flake is used in
another flake, we want the versions of 'nixops-aws' and
'nixops-hetzner' locked by the 'nixops' flake because those
presumably have been tested.
This can lead to a proliferation of versions of flakes like 'nixpkgs'
(since every flake's lock file could depend on a different version of
'nixpkgs'). This is not a major issue when using Nixpkgs overlays or
NixOS modules, since then the top-level flake composes those
overlays/modules into *its* version of Nixpkgs and all other versions
are ignored. Lock file computation has been made a bit more lazy so it
won't try to fetch all those versions of 'nixpkgs'.
However, in case it's necessary to minimize flake versions, there now
are two input attributes that allow this. First, you can copy an input
from another flake, as follows:
inputs.nixpkgs.follows = "dwarffs/nixpkgs";
This states that the calling flake's 'nixpkgs' input shall be the same
as the 'nixpkgs' input of the 'dwarffs' input.
Second, you can override inputs of inputs:
inputs.nixpkgs.url = github:edolstra/nixpkgs/<hash>;
inputs.nixops.inputs.nixpkgs.url = github:edolstra/nixpkgs/<hash>;
or equivalently, using 'follows':
inputs.nixpkgs.url = github:edolstra/nixpkgs/<hash>;
inputs.nixops.inputs.nixpkgs.follows = "nixpkgs";
This states that the 'nixpkgs' input of the 'nixops' input shall be
the same as the calling flake's 'nixpkgs' input.
Finally, at '-v' Nix now prints the changes to the lock file, e.g.
$ nix flake update ~/Misc/eelco-configurations/hagbard
inputs of flake 'git+file:///home/eelco/Misc/eelco-configurations?subdir=hagbard' changed:
updated 'nixpkgs': 'github:edolstra/nixpkgs/7845bf5f4b3013df1cf036e9c9c3a55a30331db9' -> 'github:edolstra/nixpkgs/03f3def66a104a221aac8b751eeb7075374848fd'
removed 'nixops'
removed 'nixops/nixops-aws'
removed 'nixops/nixops-hetzner'
removed 'nixops/nixpkgs'
Fixes
error: derivation '/nix/store/klivma7r7h5lndb99f7xxmlh5whyayvg-zlib-1.2.11.drv' has incorrect output '/nix/store/fv98nnx5ykgbq8sqabilkgkbc4169q05-zlib-1.2.11-dev', should be '/nix/store/adm7pilzlj3z5k249s8b4wv3scprhzi1-zlib-1.2.11-dev'
As fromTOML supports \u and \U escapes, bring fromJSON on par with it. As JSON
defaults to UTF-8 encoding (every JSON parser must support UTF-8), this change
parses the `\u hex hex hex hex` sequence (\u followed by 4 hexadecimal digits)
into a UTF-8 representation.
Add a test to verify correct parsing, using all escape sequences from json.org.
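For example (a small sketch of the behaviour; the snowman is arbitrary):

    builtins.fromJSON ''{"snowman": "\u2603"}''
    # => { snowman = "☃"; }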
Having a colon in the path may cause issues, and having the hash
function indicated isn't actually necessary. We now verify the path
format in the tests to prevent regressions.
This replaces the '(...)' installable syntax, which is not very
discoverable. The downside is that you can't have multiple expressions
or mix expressions and other installables.
Derivations that want to use recursion should now set
requiredSystemFeatures = [ "recursive-nix" ];
to make the daemon socket appear.
Also, Nix should be configured with "experimental-features =
recursive-nix".
This is an alternative to the IN_NIX_SHELL environment variable,
allowing the expression to adapt itself to nix-shell without
triggering those adaptations when used as a dependency of another
shell.
Closes #3147
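A sketch of how an expression can use this (assuming nix-shell passes `inNixShell = true` when the auto-called function accepts such an argument, and that <nixpkgs> provides mkShell and hello):

    { pkgs ? import <nixpkgs> { }, inNixShell ? false }:
    if inNixShell
    # shell-only adaptations, not triggered when used as a dependency
    then pkgs.mkShell { buildInputs = [ pkgs.hello ]; }
    else pkgs.hello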
Experimental features are now opt-in. There is currently one
experimental feature: "nix-command" (which enables the "nix"
command. This will allow us to merge experimental features more
quickly, without committing to supporting them indefinitely.
Typical usage:
$ nix build --experimental-features 'nix-command flakes' nixpkgs#hello
(cherry picked from commit 8e478c2341,
without the "flakes" feature)
Experimental features are now opt-in. There are currently two
experimental features: "nix-command" (which enables the "nix"
command), and "flakes" (which enables support for flakes). This will
allow us to merge experimental features more quickly, without
committing to supporting them indefinitely.
Typical usage:
$ nix build --experimental-features 'nix-command flakes' nixpkgs#hello
A command like
$ nix run nixpkgs#hello
will now build the attribute 'packages.${system}.hello' rather than
'packages.hello'. Note that this does mean that the flake needs to
export an attribute for every system type it supports, and you can't
build on unsupported systems. So 'packages' typically looks like this:
packages = nixpkgs.lib.genAttrs ["x86_64-linux" "i686-linux"] (system: {
hello = ...;
});
The 'checks', 'defaultPackage', 'devShell', 'apps' and 'defaultApp'
outputs similarly are now attrsets that map system types to
derivations/apps. 'nix flake check' checks that the derivations for
all platforms evaluate correctly, but only builds the derivations in
'checks.${system}'.
Fixes #2861. (That issue also talks about access to ~/.config/nixpkgs
and --arg, but I think it's reasonable to say that flakes shouldn't
support those.)
The alternative to attribute selection is to pass the system type as
an argument to the flake's 'outputs' function, e.g. 'outputs = { self,
nixpkgs, system }: ...'. However, that approach would be at odds with
hermetic evaluation and make it impossible to enumerate the packages
provided by a flake.
Instead of a list, inputs are now an attrset like
inputs = {
nixpkgs.uri = github:NixOS/nixpkgs;
};
If 'uri' is omitted, then the flake is looked up in the flake registry, e.g.
inputs = {
nixpkgs = {};
};
but in that case, you can also just omit the input altogether and
specify it as an argument to the 'outputs' function, as in
outputs = { self, nixpkgs }: ...
This also gets rid of 'nonFlakeInputs', which are now just a special
kind of input that have a 'flake = false' attribute, e.g.
inputs = {
someRepo = {
uri = github:example/repo;
flake = false;
};
};
With this patch, and this file I called `log.py`:
#!/usr/bin/env nix-shell
#!nix-shell -i python3 -p python3 --pure
import sys
from pprint import pprint
stack = []
timestack = []
for line in open(sys.argv[1]):
    components = line.strip().split(" ", 2)
    if components[0] != "function-trace":
        continue
    direction = components[1]
    components = components[2].rsplit(" ", 2)
    loc = components[0]
    _at = components[1]
    time = int(components[2])
    if direction == "entered":
        stack.append(loc)
        timestack.append(time)
    elif direction == "exited":
        dur = time - timestack.pop()
        vst = ";".join(stack)
        print(f"{vst} {dur}")
        stack.pop()
and:
nix-instantiate --trace-function-calls -vvvv ../nixpkgs/pkgs/top-level/release.nix -A unstable > log.matthewbauer 2>&1
./log.py ./log.matthewbauer > log.matthewbauer.folded
flamegraph.pl --title matthewbauer-post-pr log.matthewbauer.folded > log.matthewbauer.folded.svg
I can make flame graphs like: http://gsc.io/log.matthewbauer.folded.svg
---
Includes test cases around function call failures and tryEval. Uses
RAII so the finish is always called at the end of the function.
This currently fails because we're using POSIX file locks. So when the
garbage collector opens and closes its own temproots file, it causes
the lock to be released and then deleted by another GC instance.
Passing `--post-build-hook /foo/bar` to a nix-* command will cause
`/foo/bar` to be executed after each build with the following
environment variables set:
DRV_PATH=/nix/store/drv-that-has-been-built.drv
OUT_PATHS=/nix/store/...build /nix/store/...build-bin /nix/store/...build-dev
This can be useful in particular to upload all the built artifacts to
the cache (including the ones that don't appear in the runtime closure
of the final derivation or are built because of IFD).
This new feature prints the stderr/stdout output to the `nix-build`
and `nix build` client, and the output is printed in a Nix 2
compatible format:
[nix]$ ./inst/bin/nix-build ./test.nix
these derivations will be built:
/nix/store/ishzj9ni17xq4hgrjvlyjkfvm00b0ch9-my-example-derivation.drv
building '/nix/store/ishzj9ni17xq4hgrjvlyjkfvm00b0ch9-my-example-derivation.drv'...
hello!
bye!
running post-build-hook '/home/grahamc/projects/github.com/NixOS/nix/post-hook.sh'...
post-build-hook: + sleep 1
post-build-hook: + echo 'Signing paths' /nix/store/qr213vjmibrqwnyp5fw678y7whbkqyny-my-example-derivation
post-build-hook: Signing paths /nix/store/qr213vjmibrqwnyp5fw678y7whbkqyny-my-example-derivation
post-build-hook: + sleep 1
post-build-hook: + echo 'Uploading paths' /nix/store/qr213vjmibrqwnyp5fw678y7whbkqyny-my-example-derivation
post-build-hook: Uploading paths /nix/store/qr213vjmibrqwnyp5fw678y7whbkqyny-my-example-derivation
post-build-hook: + sleep 1
post-build-hook: + printf 'very important stuff'
/nix/store/qr213vjmibrqwnyp5fw678y7whbkqyny-my-example-derivation
[nix-shell:~/projects/github.com/NixOS/nix]$ ./inst/bin/nix build -L -f ./test.nix
my-example-derivation> hello!
my-example-derivation> bye!
my-example-derivation (post)> + sleep 1
my-example-derivation (post)> + echo 'Signing paths' /nix/store/c263gzj2kb2609mz8wrbmh53l14wzmfs-my-example-derivation
my-example-derivation (post)> Signing paths /nix/store/c263gzj2kb2609mz8wrbmh53l14wzmfs-my-example-derivation
my-example-derivation (post)> + sleep 1
my-example-derivation (post)> + echo 'Uploading paths' /nix/store/c263gzj2kb2609mz8wrbmh53l14wzmfs-my-example-derivation
my-example-derivation (post)> Uploading paths /nix/store/c263gzj2kb2609mz8wrbmh53l14wzmfs-my-example-derivation
my-example-derivation (post)> + sleep 1
my-example-derivation (post)> + printf 'very important stuff'
[1 built, 0.0 MiB DL]
Co-authored-by: Graham Christensen <graham@grahamc.com>
Co-authored-by: Eelco Dolstra <edolstra@gmail.com>
As long as the flake input is locked, it is now only fetched when it
is evaluated (e.g. "nixpkgs" is fetched when
"inputs.nixpkgs.<something>" is evaluated).
This required adding an "id" attribute to the members of "inputs" in
lockfiles, e.g.
"inputs": {
"nixpkgs/release-19.03": {
"id": "nixpkgs",
"inputs": {},
"narHash": "sha256-eYtxncIMFVmOHaHBtTdPGcs/AnJqKqA6tHCm0UmPYQU=",
"nonFlakeInputs": {},
"uri": "github:edolstra/nixpkgs/e9d5882bb861dc48f8d46960e7c820efdbe8f9c1"
}
}
because the flake ID needs to be known beforehand to construct the
"inputs" attrset.
Fixes #2913.
This is primarily useful for version string generation, where we need
a monotonically increasing number. The revcount is the preferred thing
to use, but isn't available for GitHub flakes (since it requires
fetching the entire history). The last commit timestamp OTOH can be
extracted from GitHub tarballs.
This ensures that flakes don't get garbage-collected, which is
important to get nix-channel-like behaviour.
For example, running
$ nix build hydra:
will create a GC root
~/.cache/nix/flake-closures/hydra -> /nix/store/xarfiqcwa4w8r4qpz1a769xxs8c3phgn-flake-closure
where the contents/references of the linked file in the store are the
flake source trees used by the 'hydra' flake:
/nix/store/n6d5f5lkpfjbmkyby0nlg8y1wbkmbc7i-source
/nix/store/vbkg4zy1qd29fnhflsv9k2j9jnbqd5m2-source
/nix/store/z46xni7d47s5wk694359mq9ay353ar94-source
Note that this in itself is not enough to allow offline use; the
fetcher for the flakeref (e.g. fetchGit or downloadCached) must not
fail if it cannot fetch the latest version of the file, so long as it
knows a cached version.
Issue #2868.
This PR was not intended to be merged until those tests were actually
passing. So disable them for now to unbreak the flakes branch.
https://hydra.nixos.org/eval/1519271
I.e. flake3 depends on flake2 which depends on flake1. Currently this
fails with
error: indirect flake reference 'flake1' is not allowed
because we're not propagating lockfiles downwards properly.
For text files it is possible to do it like so:
`builtins.hashString "sha256" (builtins.readFile /tmp/a)`
but that doesn't work for binary files.
With builtins.hashFile any kind of file can be conveniently hashed.
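For example (a sketch; ./some-binary-file is just a placeholder path):

    builtins.hashFile "sha256" ./some-binary-file
    # => the SHA-256 hash of the file's contents, as a string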
SRI hashes (https://www.w3.org/TR/SRI/) combine the hash algorithm and
a base-64 hash. This allows more concise and standard hash
specifications. For example, instead of
import <nix/fetchurl.nix> {
url = https://nixos.org/releases/nix/nix-2.1.3/nix-2.1.3.tar.xz;
sha256 = "5d22dad058d5c800d65a115f919da22938c50dd6ba98c5e3a183172d149840a4";
};
you can write
import <nix/fetchurl.nix> {
url = https://nixos.org/releases/nix/nix-2.1.3/nix-2.1.3.tar.xz;
hash = "sha256-XSLa0FjVyADWWhFfkZ2iKTjFDda6mMXjoYMXLRSYQKQ=";
};
In fixed-output derivations, the outputHashAlgo is no longer mandatory
if outputHash specifies the hash (either as an SRI or in the old
"<type>:<hash>" format).
'nix hash-{file,path}' now print hashes in SRI format by default. I
also reverted them to use SHA-256 by default because that's what we're
using most of the time in Nixpkgs.
Suggested by @zimbatm.
In structured-attributes derivations, you can now specify per-output
checks such as:
outputChecks."out" = {
# The closure of 'out' must not be larger than 256 MiB.
maxClosureSize = 256 * 1024 * 1024;
# It must not refer to C compiler or to the 'dev' output.
disallowedRequisites = [ stdenv.cc "dev" ];
};
outputChecks."dev" = {
# The 'dev' output must not be larger than 128 KiB.
maxSize = 128 * 1024;
};
Also fixed a bug in allowedRequisites that caused it to ignore
self-references.
The current usage technically works by putting multiple different
repos into the same git directory. However, it is very slow as
Git tries very hard to find common commits between the two
repositories. If the two repositories are large (like Nixpkgs and
another long-running project,) it is maddeningly slow.
This change busts the cache for existing deployments, but users
will be promptly repaid in per-repository performance.
In EvalState::checkSourcePath, the path is checked against the list of
allowed paths first and later it's checked again *after* resolving
symlinks.
The resolving of the symlinks is done via canonPath, which also strips
out "../" and "./". However after the canonicalisation the error message
pointing out that the path is not allowed prints the symlink target in
the error message.
Even if we'd suppress the message, symlink targets could still be leaked
if the symlink target doesn't exist (in this case the error is thrown in
canonPath).
So instead, we now do canonPath() without symlink resolving first before
even checking against the list of allowed paths and then later do the
symlink resolving and checking the allowed paths again.
The first call to canonPath() should get rid of all the "../" and "./",
so in theory the only way to leak a symlink is if the attacker is able to
put a symlink in one of the paths allowed by restricted evaluation mode.
For the latter I don't think this is part of the threat model, because
if the attacker can write to that path, the attack vector is even
larger.
Signed-off-by: aszlig <aszlig@nix.build>
Allow global config settings to be defined in multiple Config
classes. For example, this means that libutil can have settings and
evaluator settings can be moved out of libstore. The Config classes
are registered in a new GlobalConfig class to which config files
etc. are applied.
Relevant to https://github.com/NixOS/nix/issues/2009 in that it
removes the need for ad hoc handling of useCaseHack, which was the
underlying cause of that issue.
Flex's regexes have an annoying feature: the dot matches everything
except a newline. This causes problems for expressions like:
"${0}\
"
where the backslash-newline combination matches this rule instead of the
intended one mentioned in the comment:
<STRING>\$|\\|\$\\ {
/* This can only occur when we reach EOF, otherwise the above
(...|\$[^\{\"\\]|\\.|\$\\.)+ would have triggered.
This is technically invalid, but we leave the problem to the
parser who fails with exact location. */
return STR;
}
However, the parser actually accepts the resulting token sequence
('"' DOLLAR_CURLY 0 '}' STR '"'), which is a problem because the lexer
rule didn't assign anything to yylval. Ultimately this leads to a crash
when dereferencing a NULL pointer in ExprConcatStrings::bindVars().
The fix does change the syntax of the language in some corner cases
but I think it's only turning previously invalid (or crashing) syntax
to valid syntax. E.g.
"a\
b"
and
''a''\
b''
were previously syntax errors but now both result in "a\nb".
Found by afl-fuzz.
Otherwise, running e.g.
nix-instantiate --eval -E --strict 'builtins.replaceStrings [""] ["X"] "abc"'
would just hang in an infinite loop.
Found by afl-fuzz.
The first attempt at this was reverted in e2d71bd186 because it caused
another infinite loop, which is fixed now and a test added.
Otherwise, running e.g.
nix-instantiate --eval -E --strict 'builtins.replaceStrings [""] ["X"] "abc"'
would just hang in an infinite loop.
Found by afl-fuzz.
nix-store --export, nix-store --dump, and nix dump-path would previously
fail silently if writing the data out failed, because
a) FdSink::write ignored exceptions, and
b) the commands relied on FdSink's destructor, which ignores
exceptions, to flush the data out.
This could cause rather opaque issues with installing nixos, because
nix-store --export would happily proceed even if it couldn't write its
data out (e.g. if nix-store --import on the other side of the pipe
failed).
This commit adds tests that expose these issues in the nix-store
commands, and fixes them for all three.
All ANSI sequences except color setting are now filtered out. In
particular, terminal resets (such as from NixOS VM tests) are filtered
out.
Also, fix the completely broken tab character handling.
builtins.path allows specifying the name of a path (which makes paths
with store-illegal names now addable), allows adding paths with flat
instead of recursive hashes, allows specifying a filter (so is a
generalization of filterSource), and allows specifying an expected
hash (enabling safe path adding in pure mode).
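A sketch of the new primop (the directory and names below are made up):

    builtins.path {
      path = ./data;
      # the resulting store path gets this name instead of "data"
      name = "my-data";
      # generalises filterSource: drop symlinks, keep everything else
      filter = path: type: type != "symlink";
      # an expected `sha256` can also be given, which allows adding the
      # path even in pure evaluation mode
    }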
Instead, if a fixed-output derivation produces an output with an incorrect
hash, we now unconditionally move the outputs to the path
corresponding to the actual hash and register it as valid. Thus,
after correcting the hash in the Nix expression (e.g. in a fetchurl
call), the fixed-output derivation doesn't have to be built again.
It would still be good to have a command for reporting the actual hash
of a fixed-output derivation (instead of throwing an error), but
"nix-build --hash" didn't do that.
In this mode, the following restrictions apply:
* The builtins currentTime, currentSystem and storePath throw an
error.
* $NIX_PATH and -I are ignored.
* fetchGit and fetchMercurial require a revision hash.
* fetchurl and fetchTarball require a sha256 attribute.
* No file system access is allowed outside of the paths returned by
fetch{Git,Mercurial,url,Tarball}. Thus 'nix build -f ./foo.nix' is
not allowed.
Thus, the evaluation result is completely reproducible from the
command line arguments. E.g.
nix build --pure-eval '(
let
nix = fetchGit { url = https://github.com/NixOS/nixpkgs.git; rev = "9c927de4b179a6dd210dd88d34bda8af4b575680"; };
nixpkgs = fetchGit { url = https://github.com/NixOS/nixpkgs.git; ref = "release-17.09"; rev = "66b4de79e3841530e6d9c6baf98702aa1f7124e4"; };
in (import (nix + "/release.nix") { inherit nix nixpkgs; }).build.x86_64-linux
)'
The goal is to enable completely reproducible and traceable
evaluation. For example, a NixOS configuration could be fully
described by a single Git commit hash. 'nixos-rebuild' would do
something like
nix build --pure-eval '(
(import (fetchGit { url = file:///my-nixos-config; rev = "..."; })).system
)'
where the Git repository /my-nixos-config would use further fetchGit
calls or Git externals to fetch Nixpkgs and whatever other
dependencies it has. Either way, the commit hash would uniquely
identify the NixOS configuration and allow it to be reproduced.
Disable various tests if the kernel doesn't support unprivileged user
namespaces (e.g. Arch Linux disables them) or they are disabled via a sysctl
(Debian, Ubuntu).
Fixes #1521. Fixes #1625.
* Look for both 'brotli' and 'bro' as external commands,
since upstream has renamed it in newer versions.
If neither are found, current runtime behavior
is preserved: try to find 'bro' on PATH.
* Limit amount handed to BrotliEncoderCompressStream
to ensure interrupts are processed in a timely manner.
Testing shows negligible performance impact.
(Other compression sinks don't seem to require this)
The storeUri variable in the build-remote hook is declared very much to
the start of the main function and a bunch of lines later, the same
variable gets checked via hasPrefix() but it gets assigned *after* that
check when the most suitable machine for the build was chosen.
So I guess this was just a typo in d16fd24973
and what we really want is to either check the prefix *after* assigning
storeUri or use bestMachine->storeUri directly.
I chose the latter, because the former could introduce even more
regressions if the try block where the variable gets assigned terminates
early.
Nevertheless, the reason why the log output didn't work is because
hasPrefix() checked for "ssh://" in front of storeUri, but if the
storeUri isn't set correctly (or at all), we don't get the log file
descriptor set up properly, leading to no log output.
I've adjusted the remote-builds test to include a regression test for
this, so that we can make sure we get a build output when using remote
builds.
In addition to that I've tested this with two of my build farms and the
build logs are emitted correctly again.
Signed-off-by: aszlig <aszlig@nix.build>
The name had become a misnomer since it's not only for substitution
from binary caches, but when adding/copying any
(non-content-addressed) path to a store.
In particular, drop the "build-" and "gc-" prefixes which are
pointless. So now you can say
nix build --no-sandbox
instead of
nix build --no-build-use-sandbox
In particular, don't use base-64, which we don't support. (We do have
base-32 redirects for hysterical reasons.)
Also, add a test for the hashed mirror feature.
Sandboxes cannot be nested, so if Nix's build runs inside a sandbox,
it cannot use a sandbox itself. I don't see a clean way to detect
whether we're in a sandbox, so use a test-specific hack.
https://github.com/NixOS/nix/issues/1413
This is to simplify remote build configuration. These environment
variables predate nix.conf.
The build hook now has a sensible default (namely build-remote).
The current load is kept in the Nix state directory now.
Timeout tests rely on a failed build to determine success,
so make sure these derivations (silent in particular)
don't fail regardless of timeout behavior.
The nix-shell fix in 668fef2e4f revealed
that we had some --pure tests that incorrectly depended on PATH from
config.nix's mkDerivation being overwritten by the caller's PATH.
http://hydra.nixos.org/build/49242478
This closes a long-time bug that allowed builds to hang Nix
indefinitely (regardless of timeouts) simply by doing
exec > /dev/null 2>&1; while true; do true; done
Now, on EOF, we just send SIGKILL to the child to make sure it's
really gone.
Commands such as "cp -p" also use fsetxattr() in addition to fchown(),
so we need to make sure these syscalls always return successful as well
in order to avoid nasty "Invalid value" errors.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Right now it only tests whether seccomp correctly forges the return
value of chown, but the long-term goal is to test the full sandboxing
functionality at some point in the future.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
The implementation of "partition" in Nixpkgs is O(n^2) (because of the
use of ++), and for some reason was causing stack overflows in
multi-threaded evaluation (not sure why).
This reduces "nix-env -qa --drv-path" runtime by 0.197s and memory
usage by 298 MiB (in non-Boehm mode).
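For example, the primop behaves like the Nixpkgs function (a small sketch):

    builtins.partition (x: x > 2) [ 5 1 2 3 4 ]
    # => { right = [ 5 3 4 ]; wrong = [ 1 2 ]; }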
For example, you can now say:
configureFlags = "--prefix=${placeholder "out"} --includedir=${placeholder "dev"}";
The strings returned by the ‘placeholder’ builtin are replaced at
build time by the actual store paths corresponding to the specified
outputs.
Previously, you had to work around the inability to self-reference by doing stuff like:
preConfigure = ''
configureFlags+=" --prefix $out --includedir=$dev"
'';
or rely on ad-hoc variable interpolation semantics in Autoconf or Make
(e.g. --prefix=\$(out)), which doesn't always work.
Thus, -I / $NIX_PATH entries are now downloaded only when they are
needed for evaluation. A failure to download an entry is a non-fatal
warning (just like non-existent paths).
This does change the semantics of builtins.nixPath, which now returns
the original, rather than resulting path. E.g., before we had
[ { path = "/nix/store/hgm3yxf1lrrwa3z14zpqaj5p9vs0qklk-nixexprs.tar.xz"; prefix = "nixpkgs"; } ... ]
but now
[ { path = "https://nixos.org/channels/nixos-16.03/nixexprs.tar.xz"; prefix = "nixpkgs"; } ... ]
Fixes #792.
This removes the need to have multiple downloads in the stdenv
bootstrap process (like a separate busybox binary for Linux, or
curl/mkdir/sh/bzip2 for Darwin). Now all those files can be combined
into a single NAR.
The $channelName variable passed to the channel builder is the last
portion of the URL and while that works in the previous test for
channels prior to #519, it doesn't work if the last portion is
nixexprs.tar.bz2.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Sodium's Ed25519 signatures are much shorter than OpenSSL's RSA
signatures. Public keys are also much shorter, so they're now
specified directly in the nix.conf option ‘binary-cache-public-keys’.
The new command ‘nix-store --generate-binary-cache-key’ generates and
prints a public and secret key.
The function ‘builtins.match’ takes a POSIX extended regular
expression and an arbitrary string. It returns ‘null’ if the string
does not match the regular expression. Otherwise, it returns a list
containing substring matches corresponding to parenthesis groups in
the regex. The regex must match the entire string (i.e. there is an
implied "^<pat>$" around the regex). For example:
match "foo" "foobar" => null
match "foo" "foo" => []
match "f(o+)(.*)" "foooobar" => ["oooo" "bar"]
match "(.*/)?([^/]*)" "/dir/file.nix" => ["/dir/" "file.nix"]
match "(.*/)?([^/]*)" "file.nix" => [null "file.nix"]
The following example finds all regular files with extension .nix or
.patch underneath the current directory:
let
findFiles = pat: dir: concatLists (mapAttrsToList (name: type:
if type == "directory" then
findFiles pat (dir + "/" + name)
else if type == "regular" && match pat name != null then
[(dir + "/" + name)]
else []) (readDir dir));
in findFiles ".*\\.(nix|patch)" (toString ./.)
With this, attribute sets with a `__functor` attribute can be applied
just like normal functions. This can be used to attach arbitrary
metadata to a function without callers needing to treat it specially.
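For example (a minimal sketch):

    let
      adder = {
        amount = 2;
        # when the set is applied, Nix calls `__functor` with the set
        # itself and then the actual argument
        __functor = self: x: x + self.amount;
      };
    in adder 40
    # => 42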
For the "stdenv accidentally referring to bootstrap-tools", it seems
easier to specify the path that we don't want to depend on, e.g.
disallowedRequisites = [ bootstrapTools ];
So all these years I was totally deluded about the meaning of "set
-e". You might think that it causes statements like "false && true" or
"! true" to fail, but it doesn't...
When running NixOps under Mac OS X, we need to be able to import store
paths built on Linux into the local Nix store. However, HFS+ is
usually case-insensitive, so if there are directories with file names
that differ only in case, then importing will fail.
The solution is to add a suffix ("~nix~case~hack~<integer>") to
colliding files. For instance, if we have a directory containing
xt_CONNMARK.h and xt_connmark.h, then the latter will be renamed to
"xt_connmark.h~nix~case~hack~1". If a store path is dumped as a NAR,
the suffixes are removed. Thus, importing and exporting via a
case-insensitive Nix store is round-tripping. So when NixOps calls
nix-copy-closure to copy the path to a Linux machine, you get the
original file names back.
Closes #119.
Nix search path lookups like <nixpkgs> are now desugared to ‘findFile
nixPath <nixpkgs>’, where ‘findFile’ is a new primop. Thus you can
override the search path simply by saying
  let
    nixPath = [ { prefix = "nixpkgs"; path = "/my-nixpkgs"; } ];
  in ... <nixpkgs> ...
In conjunction with ‘scopedImport’ (commit
c273c15cb1), the Nix search path can be
propagated across imports, e.g.
  let
    overrides = {
      nixPath = [ ... ] ++ builtins.nixPath;
      import = fn: scopedImport overrides fn;
      scopedImport = attrs: fn: scopedImport (overrides // attrs) fn;
      builtins = builtins // overrides;
    };
  in scopedImport overrides ./nixos
‘scopedImport’ works like ‘import’, except that it takes a set of
attributes to be added to the lexical scope of the expression,
essentially extending or overriding the builtin variables. For
instance, the expression
scopedImport { x = 1; } ./foo.nix
where foo.nix contains ‘x’, will evaluate to 1.
This has a few applications:
* It allows getting rid of function argument specifications in package
expressions. For instance, a package expression like:
{ stdenv, fetchurl, libfoo }:
stdenv.mkDerivation { ... buildInputs = [ libfoo ]; }
can now be written as just
stdenv.mkDerivation { ... buildInputs = [ libfoo ]; }
and imported in all-packages.nix as:
bar = scopedImport pkgs ./bar.nix;
So whereas we once had dependencies listed in three places
(buildInputs, the function, and the call site), they now only need
to appear in one place.
* It allows overriding builtin functions. For instance, to trace all
calls to ‘map’:
  let
    overrides = {
      map = f: xs: builtins.trace "map called!" (map f xs);
      # Ensure that our override gets propagated by calls to
      # import/scopedImport.
      import = fn: scopedImport overrides fn;
      scopedImport = attrs: fn: scopedImport (overrides // attrs) fn;
      # Also update ‘builtins’.
      builtins = builtins // overrides;
    };
  in scopedImport overrides ./bla.nix
* Similarly, it allows extending the set of builtin functions. For
instance, during Nixpkgs/NixOS evaluation, the Nixpkgs library
functions could be added to the default scope.
There is a downside: calls to scopedImport are not memoized, unlike
import. So importing a file multiple times leads to multiple parsings
/ evaluations. It would be possible to construct the AST only once,
but that would require careful handling of variables/environments.
Now, in addition to a."${b}".c, you can write a.${b}.c (applicable
wherever dynamic attributes are valid).
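For instance (illustrative names):
  let b = "x"; a = { x.c = 1; }; in a.${b}.c
evaluates to 1.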
Signed-off-by: Shea Levy <shea@shealevy.com>
NAR info files in binary caches can now have a cryptographic signature
that Nix will verify before using the corresponding NAR file.
To create a private/public key pair for signing and verifying a binary
cache, do:
$ openssl genrsa -out ./cache-key.sec 2048
$ openssl rsa -in ./cache-key.sec -pubout > ./cache-key.pub
You should also come up with a symbolic name for the key, such as
"cache.example.org-1". This will be used by clients to look up the
public key. (It's a good idea to number keys, in case you ever need
to revoke/replace one.)
To create a binary cache signed with the private key:
$ nix-push --dest /path/to/binary-cache --key ./cache-key.sec --key-name cache.example.org-1
The public key (cache-key.pub) should be distributed to the clients.
Their nix.conf should contain something like:
signed-binary-caches = *
binary-cache-public-key-cache.example.org-1 = /path/to/cache-key.pub
If all works well, then if Nix fetches something from the signed
binary cache, you will see a message like:
*** Downloading ‘http://cache.example.org/nar/7dppcj5sc1nda7l54rjc0g5l1hamj09j-subversion-1.7.11’ (signed by ‘cache.example.org-1’) to ‘/nix/store/7dppcj5sc1nda7l54rjc0g5l1hamj09j-subversion-1.7.11’...
On the other hand, if the signature is wrong, you get a message like
NAR info file `http://cache.example.org/7dppcj5sc1nda7l54rjc0g5l1hamj09j.narinfo' has an invalid signature; ignoring
Signatures are implemented as a single line appended to the NAR info
file, which looks like this:
Signature: 1;cache.example.org-1;HQ9Xzyanq9iV...muQ==
Thus the signature has 3 fields: a version (currently "1"), the ID of
the key, and the base64-encoded signature of the SHA-256 hash of the
contents of the NAR info file up to but not including the Signature
line.
Issue #75.
Since addAttr has to iterate through the AttrPath we pass it, it makes
more sense to just iterate through the AttrNames in addAttr instead. As
an added bonus, this allows attrsets where two dynamic attribute paths
have the same static leading part (see added test case for an example
that failed previously).
Signed-off-by: Shea Levy <shea@shealevy.com>
This adds new syntax for attribute names:
* attrs."${name}" => getAttr name attrs
* attrs ? "${name}" => isAttrs attrs && hasAttr name attrs
* attrs."${name}" or def => if attrs ? "${name}" then attrs."${name}" else def
* { "${name}" = value; } => listToAttrs [{ inherit name value; }]
Of course, it's a bit more complicated than that. The attribute chains
can be arbitrarily long and contain combinations of static and dynamic
parts (e.g. attrs."${foo}".bar."${baz}" or qux), which is relatively
straightforward for the getAttrs/hasAttrs cases but is more complex for
the listToAttrs case due to rules about duplicate attribute definitions.
For attribute sets with dynamic attribute names, duplicate static
attributes are detected at parse time while duplicate dynamic attributes
are detected when the attribute set is forced. So, for example, { a =
null; a.b = null; "${"c"}" = true; } will be a parse-time error, while
{ a = {}; "${"a"}".b = null; c = true; } will be an eval-time error
(technically that case could theoretically be detected at parse time,
but the general case would require full evaluation). Moreover, duplicate
dynamic attributes are not allowed even in cases where they would be
with static attributes ({ a.b.d = true; a.b.c = false; } is legal, but {
a."${"b"}".d = true; a."${"b"}".c = false; } is not). This restriction
might be relaxed in the future in cases where the static variant would
not be an error, but it is not obvious that that is desirable.
Finally, recursive attribute sets with dynamic attributes have the
static attributes in scope but not the dynamic ones. So rec { a = true;
"${"b"}" = a; } is equivalent to { a = true; b = true; } but rec {
"${"a"}" = true; b = a; } would be an error or use a from the
surrounding scope if it exists.
Note that the getAttr, getAttr or default, and hasAttr are all
implemented purely in the parser as syntactic sugar, while attribute
sets with dynamic attribute names required changes to the AST to be
implemented cleanly.
This is an alternative solution to, and closes, #167
Signed-off-by: Shea Levy <shea@shealevy.com>
Certain desugaring schemes may require the parser to use some builtin
function to do some of the work (e.g. currently `throw` is used to
lazily cause an error if a `<>`-style path is not in the search path).
Unfortunately, these names are not reserved keywords, so an expression
that uses such a syntactic sugar will not see the expected behavior
(see tests/lang/eval-okay-redefine-builtin.nix for an example).
This adds the ExprBuiltin AST type, which when evaluated uses the value
from the rootmost variable scope (which of course is initialized
internally and can't shadow any of the builtins).
Signed-off-by: Shea Levy <shea@shealevy.com>
This is required if you have attribute names with dots in them. So
you can now say:
$ nix-instantiate '<nixos>' -A 'config.systemd.units."postgresql.service".text' --eval-only
Fixes #151.
Antiquotes should evaluate to strings or paths. This is usually
checked, except in the case where the antiquote makes up the entire
string, as in "${expr}". This is optimised to expr, which discards
the runtime type checks / coercions.
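For instance, an expression such as
  let x = 123; in "${x}"
is expected to fail with a coercion error (an integer cannot be
coerced to a string), even though the antiquote makes up the whole
string.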
This reduces the difference between inherited and non-inherited
attribute handling to the choice of which env to use (in recs and lets)
by setting the AttrDef::e to a new ExprVar in the parser rather than
carrying a separate AttrDef::v VarRef member.
As an added bonus, this allows inherited attributes that inherit from a
with to delay forcing evaluation of the with's attributes.
Signed-off-by: Shea Levy <shea@shealevy.com>
Evaluation of attribute sets is strict in the attribute names, which
means immediate evaluation of `with` attribute sets rules out some
potentially interesting use cases (e.g. where the attribute names of one
set depend in some way on another but we want to bring those names into
scope for some values in the second set).
The major example of this is overridable self-referential package sets
(e.g. all-packages.nix). With immediate `with` evaluation, the only
options for such sets are to either make them non-recursive and
explicitly use the name of the overridden set in non-overridden one
every time you want to reference another package, or make the set
recursive and use the `__overrides` hack. As shown in the test case that
comes with this commit, though, delayed `with` evaluation allows a nicer
third alternative.
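A small sketch of the kind of pattern this enables (names are
hypothetical):
  let
    pkgs = (with pkgs; { a = 1; b = a + 1; }) // overrides;
    overrides = { a = 10; };
  in pkgs.b
With delayed `with' evaluation this yields 11, since `a' inside the
`with' body resolves to the overridden attribute.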
Signed-off-by: Shea Levy <shea@shealevy.com>
For instance, it's pointless to keep copy-from-other-stores running if
there are no other stores, or download-using-manifests if there are no
manifests. This also speeds things up because we don't send queries
to those substituters.
In Nixpkgs, the attribute in all-packages.nix corresponding to a
package is usually equal to the package name. However, this doesn't
work if the package contains a dash, which is fairly common. The
convention is to replace the dash with an underscore (e.g. "dbus-glib"
becomes "dbus_glib"), but that's annoying. So now dashes are valid in
variable / attribute names, allowing you to write:
dbus-glib = callPackage ../development/libraries/dbus-glib { };
and
buildInputs = [ dbus-glib ];
Since we don't have a negation or subtraction operation in Nix, this
is unambiguous.
If the options gc-keep-outputs and gc-keep-derivations are both
enabled, you can get a cycle in the liveness graph. There was a hack
to handle this, but it didn't work with multiple-output derivations,
causing the garbage collector to fail with errors like ‘error: cannot
delete path `...' because it is in use by `...'’. The garbage
collector now handles strongly connected components in the liveness
graph as a unit and decides whether to delete all or none of the paths
in an SCC.
Apparently our DBD::SQLite links against /usr/lib/libsqlite3.dylib,
which is an old version that doesn't respect foreign key constraints.
So manifests/cache.sqlite doesn't get updated properly when a manifest
disappears. We should fix our DBD::SQLite, but in the meantime this
will fix the test.
http://hydra.nixos.org/build/3017959
Querying all substitutable paths via "nix-env -qas" is potentially
hard on a server, since it involves sending thousands of HEAD
requests. So a binary cache must now have a meta-info file named
"nix-cache-info" that specifies whether the server wants this. It
also specifies the store prefix so that we don't send useless queries
to a binary cache for a different store prefix.
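A nix-cache-info file might look like this (WantMassQuery being the
field that controls whether clients may mass-query the cache):
  StoreDir: /nix/store
  WantMassQuery: 1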
Since SubstitutionGoal::finished() in build.cc computes the hash
anyway, we can prevent the inefficiency of computing the hash twice by
letting the substituter tell Nix about the expected hash, which Nix
can then verify.
"nix-channel --add" now accepts a second argument: the channel name.
This allows channels to have a nicer name than (say) nixpkgs_unstable.
If no name is given, it defaults to the last component of the URL
(with "-unstable" or "-stable" removed).
Also, channels are now stored in a profile
(/nix/var/nix/profiles/per-user/$USER/channels). One advantage of
this is that it allows rollbacks (e.g. if "nix-channel --update" gives
an undesirable update).
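For instance, rolling back an unwanted channel update is just a
profile rollback (sketch):
  $ nix-env --profile /nix/var/nix/profiles/per-user/$USER/channels --rollback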
Ensuring that the tests work from the build tree requires a growing
number of nasty hacks. The tests also don't verify that the installed
Nix actually works. Thus, the tests now require "make install" to
have been run.
Nix now requires SQLite and bzip2 to be pre-installed. SQLite is
detected using pkg-config. We required DBD::SQLite anyway, so
depending on SQLite is not a big problem.
The --with-bzip2, --with-openssl and --with-sqlite flags are gone.
other simplifications.
* Use <nix/...> to locate the corepkgs. This allows them to be
overridden through $NIX_PATH.
* Use bash's pipefail option in the NAR builder so that we don't need
to create a temporary file.
directory
/home/eelco/src/stdenv-updates
that you want to use as the directory for import such as
with (import <nixpkgs> { });
then you can say
$ nix-build -I nixpkgs=/home/eelco/src/stdenv-updates
brackets, e.g.
import <nixpkgs/pkgs/lib>
are resolved by looking them up relative to the elements listed in
the search path. This allows us to get rid of hacks like
import "${builtins.getEnv "NIXPKGS_ALL"}/pkgs/lib"
The search path can be specified through the ‘-I’ command-line flag
and through the colon-separated ‘NIX_PATH’ environment variable,
e.g.,
$ nix-build -I /etc/nixos ...
If a file is not found in the search path, an error message is
lazily thrown.
the hook every time we want to ask whether we can run a remote build
(which can be very often), we now reuse a hook process for answering
those queries until it accepts a build. So if there are N
derivations to be built, at most N hooks will be started.
The /bin/sh interpreter on Solaris doesn't understand $(...) syntax for running
sub-shells. Consequently, this test fails on Solaris. To remedy the situation,
the script either needs to be run by /bin/bash -- which is non-standard --, or
it needs to use the ancient but portable `...` syntax.
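For instance (hypothetical snippet), the test script should use
  out=`nix-store --add foo`
rather than
  out=$(nix-store --add foo)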
check' now succeeds :-)
* An attribute set such as `{ foo = { enable = true; };
foo.port = 23; }' now parses. It was previously rejected, but I'm
too lazy to implement the check. (The only reason to reject it is
that the reverse, `{ foo.port = 23; foo = { enable = true; }; }', is
rejected, which is kind of ugly.)
* src/libexpr/expr-to-xml.cc (nix::showAttrs): Add `location'
parameter. Provide location XML attributes when it's true. Update
callers.
(nix::printTermAsXML): Likewise.
* src/libexpr/expr-to-xml.hh (nix::printTermAsXML): Update prototype;
have `location' default to `false'.
* src/nix-instantiate/nix-instantiate.cc (printResult, processExpr): Add
`location' parameter; update callers.
(run): Add support for `--no-location'.
* src/nix-instantiate/help.txt: Update accordingly.
* tests/lang.sh: Invoke `nix-instantiate' with `--no-location' for the
XML tests.
* tests/lang/eval-okay-toxml.exp, tests/lang/eval-okay-to-xml.nix: New
files.
allowed. So `name1@name2', `{attrs1}@{attrs2}' and so on are now no
longer legal. This is no big loss because they were not useful
anyway.
This also changes the output of builtins.toXML for @-patterns
slightly.
doesn't exist. The Debian packages don't include the manifests
directory, so nix-channel would silently skip doing a nix-pull,
resulting in everything being built from source. Thanks to Juan
Pedro Bolívar Puente.
intersectAttrs returns the (right-biased) intersection between two
attribute sets, e.g. every attribute from the second set that also
exists in the first. functionArgs returns the set of attributes
expected by a function.
The main goal of these is to allow the elimination of most of
all-packages.nix. Most package instantiations in all-packages.nix
have this form:
  foo = import ./foo.nix {
    inherit a b c;
  };
With intersectAttrs and functionArgs, this can be written as:
foo = callPackage (import ./foo.nix) { };
where
  callPackage = f: args:
    f ((builtins.intersectAttrs (builtins.functionArgs f) pkgs) // args);
I.e., foo.nix is called with all attributes from "pkgs" that it
actually needs (e.g., pkgs.a, pkgs.b and pkgs.c). (callPackage can
do any other generic package-level stuff we might want, such as
applying makeOverridable.) Of course, the automatically supplied
arguments can be overridden if needed, e.g.
  foo = callPackage (import ./foo.nix) {
    c = c_version_2;
  };
but for the vast majority of packages, this won't be needed.
The advantages are to reduce the amount of typing needed to add a
dependency (from three sites to two), and to reduce the number of
trivial commits to all-packages.nix. For the former, there have
been two previous attempts:
- Use "args: with args;" in the package's function definition.
This however obscures the actual expected arguments of a
function, which is very bad.
- Use "{ arg1, arg2, ... }:" in the package's function definition
(i.e. use the ellipsis "..." to allow arbitrary additional
arguments), and then call the function with all of "pkgs" as an
argument. But this inhibits error detection if you call it with
a misspelled (or obsolete) argument.
attributes of the rec are in scope of `e'. This is useful in
expressions such as
  rec {
    lib = import ./lib;
    inherit (lib) concatStrings;
  }
It does change the semantics of expressions such as
let x = {y = 1;}; in rec { x = {y = 2;}; inherit (x) y; }.y
This now returns 2 instead of 1. However, no code in Nixpkgs or
NixOS seems to rely on the old behaviour.
shorthand for {x = {y = {z = ...;};};}. This is especially useful
for NixOS configuration files, e.g.
  {
    services = {
      sshd = {
        enable = true;
        port = 2022;
      };
    };
  }
can now be written as
  {
    services.sshd.enable = true;
    services.sshd.port = 2022;
  }
However, it is currently not permitted to write
  {
    services.sshd = {enable = true;};
    services.sshd.port = 2022;
  }
as this is considered a duplicate definition of `services.sshd'.
sure that it works as expected when you pass it a derivation. That
is, we have to make sure that all build-time dependencies are built,
and that they are all in the input closure (otherwise remote builds
might fail, for example). This is ensured at instantiation time by
adding all derivations and their sources to inputDrvs and inputSrcs.
hook. This fixes a problem with log files being partially or
completely filled with 0's because another nix-store process
truncates the log file. It should also be more efficient.
SHA-256 outputs of fixed-output derivations. I.e. they now produce
the same store path:
$ nix-store --add x
/nix/store/j2fq9qxvvxgqymvpszhs773ncci45xsj-x
$ nix-store --add-fixed --recursive sha256 x
/nix/store/j2fq9qxvvxgqymvpszhs773ncci45xsj-x
the latter being the same as the path that a derivation
  derivation {
    name = "x";
    outputHashAlgo = "sha256";
    outputHashMode = "recursive";
    outputHash = "...";
    ...
  };
produces.
This does change the output path for such fixed-output derivations.
Fortunately they are quite rare. The most common use is fetchsvn
calls with SHA-256 hashes. (There are a handful of those in
Nixpkgs, mostly unstable development packages.)
* Documented the computation of store paths (in store-api.cc).
in attribute set pattern matches. This allows defining a function
that takes *at least* the listed attributes, while ignoring
additional attributes. For instance,
  {stdenv, fetchurl, fuse, ...}:
  stdenv.mkDerivation {
    ...
  };
defines a function that requires an attribute set that contains the
specified attributes but ignores others. The main advantage is that
we can then write in all-packages.nix
aefs = import ../bla/aefs pkgs;
instead of
  aefs = import ../bla/aefs {
    inherit stdenv fetchurl fuse;
  };
This saves a lot of typing (not to mention not having to update
all-packages.nix with purely mechanical changes). It saves as much
typing as the "args: with args;" style, but has the advantage that
the function arguments are properly declared (not implicit in what
the body of the "with" uses).
functions that take a single argument (plain lambdas) into one AST
node (Function) that contains a Pattern node describing the
arguments. Current patterns are single lazy arguments (VarPat) and
matching against an attribute set (AttrsPat).
This refactoring allows other kinds of patterns to be added easily,
such as Haskell-style @-patterns, or list pattern matching.
again. (After the previous substituter mechanism refactoring I
didn't update the code that obtains the references of substitutable
paths.) This required some refactoring: the substituter programs
are now kept running and receive/respond to info requests via
stdin/stdout.
logic through the `parseDrvName' and `compareVersions' primops.
This will allow expressions to easily check whether some dependency
is a specific needed version or falls in some version range. See
tests/lang/eval-okay-versions.nix for examples.
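For instance (illustrative values):
  parseDrvName "firefox-33.0.2" => { name = "firefox"; version = "33.0.2"; }
  compareVersions "1.0" "1.10" => -1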
single quotes. Example (from NixOS):
    job = ''
      start on network-interfaces
      start script
        rm -f /var/run/opengl-driver
        ${if videoDriver == "nvidia"
          then "ln -sf ${nvidiaDrivers} /var/run/opengl-driver"
          else if cfg.driSupport
          then "ln -sf ${mesa} /var/run/opengl-driver"
          else ""
        }
        rm -f /var/log/slim.log
      end script
    '';
This style has two big advantages:
- \, ' and " aren't special, only '' and ${. So you get a lot less
escaping in shell scripts / configuration files in Nixpkgs/NixOS.
The delimiter '' is rare in scripts (and can usually be written as
""). ${ is also fairly rare.
Other delimiters such as <<...>>, {{...}} and <|...|> were also
considered but this one appears to have the fewest drawbacks
(thanks Martin).
- Indentation is intelligently stripped so that multi-line strings
can follow the nesting structure of the containing Nix
expression. E.g. in the example above 6 spaces are stripped from
the start of each line. This prevents unnecessary indentation in
generated files (which sometimes even breaks things).
See tests/lang/eval-okay-ind-string.nix for some examples.
$ nix-env -e $(which firefox)
or
$ nix-env -e /nix/store/nywzlygrkfcgz7dfmhm5xixlx1l0m60v-pan-0.132
* nix-env -i: if an argument contains a slash anywhere, treat it as a
path and follow it through symlinks into the Nix store. This allows
things like
$ nix-build -A firefox
$ nix-env -i ./result
* nix-env -q/-i/-e: don't complain when the `*' selector doesn't match
anything. In particular, `nix-env -q \*' doesn't fail anymore on an
empty profile.
derivations that produce the same output path don't work properly
wrt locking. This happens a lot in the build farm when fetchurl
derivations downloading the same file on different platforms are
executed in parallel and then copied back to the main machine.
* `sub' to subtract two numbers.
* `stringLength' to get the length of a string.
* `substring' to get a substring of a string. These should be enough
to allow most string operations to be expressed.
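For example:
  sub 5 2 => 3
  stringLength "foobar" => 6
  substring 0 3 "foobar" => "foo"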
from a source directory. All files for which a predicate function
returns true are copied to the store. Typical example is to leave
out the .svn directory:
  stdenv.mkDerivation {
    ...
    src = builtins.filterSource
      (path: baseNameOf (toString path) != ".svn")
      ./source-dir;
    # as opposed to
    # src = ./source-dir;
  }
This is important because the .svn directory influences the hash in
a rather unpredictable and variable way.
attribute existence and to return an attribute from an attribute
set, respectively. Example: `hasAttr "foo" {foo = 1;}'. They
differ from the `?' and `.' operators in that the attribute name is
an arbitrary expression. (NIX-61)
* `nix-install-package --help' (NIX-9).
* `nix-install-package --non-interactive': don't prompt or pause.
* Tests for nix-install-package.
* Security fixes: filter the values obtained from the nixpkg.
and returns its path. This can be used to (for instance) write
builders inside a Nix expression, e.g.,
  stdenv.mkDerivation {
    builder = "
      source $stdenv/setup
      ...
    ";
    ...
  }
all the primops. This allows Nix expressions to test for new
primops and take appropriate action if they're not available. For
instance, rather than calling a primop `foo' directly, they could
say `if builtins ? foo then builtins.foo ... else ...'.
argument has a valid value, i.e., is in a certain domain. E.g.,
{ foo : [true false]
, bar : ["a" "b" "c"]
}: ...
This previously could be done using assertions, but domain checks
will allow the buildfarm to automatically extract the configuration
space from functions.
"--with-freetype2-library=" + freetype + "/lib"
can now be written as
"--with-freetype2-library=${freetype}/lib"
An arbitrary expression can be enclosed within ${...}, not just
identifiers.
* Escaping in string literals: \n, \r, \t interpreted as in C, any
other character following \ is interpreted as-is.
* Newlines are now allowed in string literals.
externals directory. This is particularly useful because, though
most systems have bzip2/bunzip2, they don't always have libbz2,
which we need for bsdiff/bspatch.
packages (provided that they have a `meta.description' attribute).
E.g.,
$ ./src/nix-env/nix-env -qa --description gcc
gcc-4.0.2 GNU Compiler Collection, 4.0.x (cross-compiler for sparc-linux)
gcc-4.0.2 GNU Compiler Collection, 4.0.x (cross-compiler for mips-linux)
gcc-4.0.2 GNU Compiler Collection, 4.0.x (cross-compiler for arm-linux)
gcc-4.0.2 GNU Compiler Collection, 4.0.x
to be queried, e.g., `nix-env -qa firefox'. This does require the
argument '*' to be passed if one wants information about all
derivations, so the old `nix-env -qa' now is `nix-env -qa "*"'.
nix-store query options `--referer' and `--referer-closure' have
been changed to `--referrer' and `--referrer-closure' (but the old
ones are still accepted for compatibility).
`removeAttrs attrs ["x", "y"]' returns the set `attrs' with the
attributes named `x' and `y' removed. It is not an error for the
named attributes to be missing from the input set.
* Removed some dead code (successor stuff) from nix-push.
* Updated terminology in the tests (store expr -> drv path).
* Check that the deriver is set properly in the tests.
being created after the garbage collector has read the temproots
directory. This blocks the creation of new processes, but the
garbage collector could periodically release the GC lock to allow
them to run.
roots to a per-process temporary file in /nix/var/nix/temproots
while holding a write lock on that file. The garbage collector
acquires read locks on all those files, thus blocking further
progress in other Nix processes, and reads the sets of temporary
roots.
closure of the referrers relation rather than the references
relation, i.e., the set of all paths that directly or indirectly
refer to the given path. Note that contrary to the references
closure this set is not fixed; it can change as paths are added to
or removed from the store.
Whenever Nix attempts to realise a derivation for which a closure is
already known, but this closure cannot be realised, fall back on
normalising the derivation.
The most common scenario in which this is useful is when we have
registered substitutes in order to perform binary distribution from,
say, a network repository. If the repository is down, the
realisation of the derivation will fail. When this option is
specified, Nix will build the derivation instead. Thus, binary
installation falls back on a source installation. This option is
not the default since it is generally not desirable for a transient
failure in obtaining the substitutes to lead to a full build from
source (with the related consumption of resources).
much as possible. (This is similar to GNU Make's `-k' flag.)
* Refactoring to implement this: previously we just bombed out when
a build failed, but now we have to clean up. In particular this
means that goals must be freed quickly --- they shouldn't hang
around until the worker exits. So the worker now maintains weak
pointers in order not to prevent garbage collection.
* Documented the `-k' and `-j' flags.
* A better substitute mechanism.
Instead of generating a store expression for each store path for
which we have a substitute, we can have a single store expression
that builds a generic program that is invoked to build the desired
store path, which is passed as an argument.
This means that operations like `nix-pull' only produce O(1) files
instead of O(N) files in the store when registering N substitutes.
(It consumes O(N) database storage, of course, but that's not a
performance problem).
* Added a test for the substitute mechanism.
* `nix-store --substitute' reads the substitutes from standard input,
instead of from the command line. This prevents us from running
into the kernel's limit on command line length.
in parallel. Hooks are more efficient: locks on output paths are
only acquired when the hook says that it is willing to accept a
build job. Hooks now work in two phases. First, they tell Nix
whether they are willing to accept a job. Nix guarantees
that no two hooks will ever be in the first phase at the same time
(this simplifies the implementation of hooks, since they don't have
to perform locking (?)). Second, if they accept a job, they are
then responsible for building it (on the remote system), and copying
the result back. These can be run in parallel with other hooks and
locally executed jobs.
The implementation is a bit messy right now, though.
* The directory `distributed' shows a (hacky) example of a hook that
distributes build jobs over a set of machines listed in a
configuration file.
parallel as possible (similar to GNU Make's `-j' switch). This is
useful on SMP systems, but it is especially useful for doing builds
on multiple machines. The idea is that a large derivation is
initiated on one master machine, which then distributes
sub-derivations to any number of slave machines. This should not
happen synchronously or in lock-step, so the master must be capable
of dealing with multiple parallel build jobs. We now have the
infrastructure to support this.
TODO: substitutes are currently broken.