this slightly increases the amount of memory used for any given symbol, but this
increase is more than made up for if the symbol is referenced more than once in
the EvalState that holds it. on average every symbol should be referenced at
least twice (once to introduce a binding, once to use it), so we expect no
increase in memory on average.
symbol tables are limited to 2³² entries like position tables, and similar
arguments apply to why overflow is not likely: 2³² symbols would require as many
string instances (at 24 bytes each) and map entries (at 24 bytes or more each,
assuming that the map holds on average at most one item per bucket as the docs
say). a full symbol table would require at least 192GB of memory just for
symbols, which is well out of reach. (an ofborg eval of nixpkgs today creates
less than a million symbols!)
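as a rough sketch of the interning scheme (simplified, not the actual SymbolTable layout in Nix): each distinct string is stored exactly once, and callers hold a 4-byte index into the table, so the second and later references are essentially free.
```
// simplified sketch of a 32-bit-indexed symbol table; not Nix's exact layout
#include <cstdint>
#include <deque>
#include <string>
#include <string_view>
#include <unordered_map>

class SymbolTable {
    std::deque<std::string> store;                        // owns each string exactly once
    std::unordered_map<std::string_view, uint32_t> ids;   // views into `store`
public:
    uint32_t create(std::string_view s) {
        auto it = ids.find(s);
        if (it != ids.end()) return it->second;           // every further reference is free
        store.emplace_back(s);
        uint32_t id = store.size() - 1;                    // capped at 2^32 entries
        ids.emplace(store.back(), id);
        return id;
    }
    std::string_view operator[](uint32_t id) const { return store[id]; }
};
```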
PosTable deduplicates origin information, so using symbols for paths is no
longer necessary. moving away from path Symbols also reduces the usage of
symbols for things that are not keys in attribute sets, which will become
important in the future when we turn symbols into indices as well.
Pos objects are somewhat wasteful as they duplicate the origin file name and
input type for each object. on files that produce more than one Pos when parsed
this is a sizeable waste of memory (one pointer per Pos). the same goes for
ptr<Pos> on 64-bit machines: parsing enough source to require 8 bytes to locate
a position would need at least 8GB of input and 64GB of expression memory. it's
not likely that we'll hit that any time soon, so we can use a uint32_t index to
locate positions instead.
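roughly, the scheme looks like this (an illustrative sketch; the real PosTable differs in detail): origins are stored once, and expressions refer to positions through a plain 32-bit index.
```
// illustrative sketch of an indexed position table; details differ from Nix's PosTable
#include <cstdint>
#include <string>
#include <vector>

using PosIdx = uint32_t;                        // 0 can serve as "no position"

struct Origin { std::string file; };            // stored once per parsed file

struct Pos {
    uint32_t line, column;
    uint32_t origin;                            // index into PosTable::origins
};

class PosTable {
    std::vector<Origin> origins;
    std::vector<Pos> positions;
public:
    uint32_t addOrigin(std::string file) {
        origins.push_back({std::move(file)});
        return origins.size() - 1;
    }
    PosIdx add(uint32_t origin, uint32_t line, uint32_t column) {
        positions.push_back({line, column, origin});
        return positions.size() - 1;            // 2^32 positions is ample (see above)
    }
    const Pos & operator[](PosIdx i) const { return positions[i]; }
};
```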
the only use of this function is to determine whether a lambda has a non-set
formal, but this use is arguably better served by Symbol::set and using a
non-Symbol instead of an empty symbol in the parser when no such formal is present.
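in other words (a sketch; hasArg is an illustrative name, not the actual member): the parser leaves the Symbol default-constructed instead of interning an empty string, and callers ask the Symbol itself whether it is set.
```
// sketch only; member and method names are illustrative
#include <cstdint>

struct Symbol {
    uint32_t id = 0;                      // 0 is never handed out by the symbol table
    bool set() const { return id != 0; }
};

struct Formals;

struct ExprLambda {
    Symbol arg;                           // left default-constructed when there is no named argument
    Formals * formals = nullptr;
    bool hasArg() const { return arg.set(); }
};
```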
if we defer the duplicate argument check for lambda formals we can use more
efficient data structures for the formals set, and we can get rid of the
duplication of formals names to boot. instead of a list of formals we've seen
and a set of their names we keep a single vector and run a sort+dupcheck step
before moving the parsed formals into a newly created lambda. this improves
performance on search and rebuild by ~1%, pure parsing gains more (about 4%).
this does reorder lambda arguments in the xml output, but the output is still
stable. this shouldn't be a problem since argument order is not semantically
important anyway.
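a sketch of the deferred check (names are illustrative): formals are collected into a plain vector while parsing, sorted once, and adjacent equal names are rejected just before the lambda is built.
```
// sketch of a deferred duplicate-formal check; names and types are illustrative
#include <algorithm>
#include <stdexcept>
#include <string>
#include <vector>

struct Formal { std::string name; /* Expr * def; ... */ };

void checkAndSortFormals(std::vector<Formal> & formals)
{
    std::sort(formals.begin(), formals.end(),
              [](const Formal & a, const Formal & b) { return a.name < b.name; });
    auto dup = std::adjacent_find(formals.begin(), formals.end(),
              [](const Formal & a, const Formal & b) { return a.name == b.name; });
    if (dup != formals.end())
        throw std::runtime_error("duplicate formal function argument '" + dup->name + "'");
    // the sorted vector is then moved into the newly created lambda's formals
}
```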
before
nix search --no-eval-cache --offline ../nixpkgs hello
Time (mean ± σ): 8.550 s ± 0.060 s [User: 6.470 s, System: 1.664 s]
Range (min … max): 8.435 s … 8.666 s 20 runs
nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 346.7 ms ± 2.1 ms [User: 312.4 ms, System: 34.2 ms]
Range (min … max): 343.8 ms … 353.4 ms 20 runs
nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.720 s ± 0.031 s [User: 2.415 s, System: 0.231 s]
Range (min … max): 2.662 s … 2.780 s 20 runs
after
nix search --no-eval-cache --offline ../nixpkgs hello
Time (mean ± σ): 8.462 s ± 0.063 s [User: 6.398 s, System: 1.661 s]
Range (min … max): 8.339 s … 8.542 s 20 runs
nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 329.1 ms ± 1.4 ms [User: 296.8 ms, System: 32.3 ms]
Range (min … max): 326.1 ms … 330.8 ms 20 runs
nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.687 s ± 0.035 s [User: 2.392 s, System: 0.228 s]
Range (min … max): 2.626 s … 2.754 s 20 runs
string expressions by and large do not need the benefits a Symbol gives us;
instead they pollute the symbol table and cause unnecessary overhead for almost
all strings. the one place we can think of that benefits from them (attrpaths
with expressions) extracts the benefit in the parser, which we'll have to touch
anyway when changing ExprString to hold strings.
this gives a sizeable improvement of 3-5% on all benchmarks we've run.
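the change, sketched in types (simplified, evaluation details elided):
```
// simplified before/after sketch; the cached Value and constructors are elided
#include <cstdint>
#include <string>

struct Symbol { uint32_t id; };          // stand-in for the interned symbol type

// before: every string literal went through the symbol table
struct ExprStringOld { Symbol s; };

// after: the literal is stored directly; only attrpath parsing still interns strings
struct ExprString { std::string s; };
```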
before
nix search --no-eval-cache --offline ../nixpkgs hello
Time (mean ± σ): 8.844 s ± 0.045 s [User: 6.750 s, System: 1.663 s]
Range (min … max): 8.758 s … 8.922 s 20 runs
nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 367.4 ms ± 3.3 ms [User: 332.3 ms, System: 35.2 ms]
Range (min … max): 364.0 ms … 375.2 ms 20 runs
nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.810 s ± 0.030 s [User: 2.517 s, System: 0.225 s]
Range (min … max): 2.742 s … 2.854 s 20 runs
after
nix search --no-eval-cache --offline ../nixpkgs hello
Time (mean ± σ): 8.533 s ± 0.068 s [User: 6.485 s, System: 1.642 s]
Range (min … max): 8.404 s … 8.657 s 20 runs
nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 347.6 ms ± 3.1 ms [User: 313.1 ms, System: 34.5 ms]
Range (min … max): 343.3 ms … 354.6 ms 20 runs
nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.709 s ± 0.032 s [User: 2.414 s, System: 0.232 s]
Range (min … max): 2.655 s … 2.788 s 20 runs
it can be replaced with StringToken if we add another bit of information to
StringToken, namely whether this string should take part in indentation scanning
or not. since all escaping terminates indentation scanning we need to set this
bit only for the non-escaped IND_STRING rule.
this improves performance by about 1%.
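roughly (a sketch; the field name is illustrative): the token carries a flag telling the grammar whether the piece may still take part in indentation stripping.
```
// sketch; field names are illustrative, not the exact parser types
#include <cstddef>
#include <string_view>

struct StringToken {
    const char * p;
    size_t l;
    bool hasIndentation;     // set only by the plain IND_STRING rule; any escape clears it
    operator std::string_view() const { return {p, l}; }
};
```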
before
nix search --no-eval-cache --offline ../nixpkgs hello
Time (mean ± σ): 8.880 s ± 0.048 s [User: 6.809 s, System: 1.643 s]
Range (min … max): 8.781 s … 8.993 s 20 runs
nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 375.0 ms ± 2.2 ms [User: 339.8 ms, System: 35.2 ms]
Range (min … max): 371.5 ms … 379.3 ms 20 runs
nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.831 s ± 0.040 s [User: 2.536 s, System: 0.225 s]
Range (min … max): 2.769 s … 2.912 s 20 runs
after
nix search --no-eval-cache --offline ../nixpkgs hello
Time (mean ± σ): 8.832 s ± 0.048 s [User: 6.757 s, System: 1.657 s]
Range (min … max): 8.743 s … 8.921 s 20 runs
nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 367.4 ms ± 3.2 ms [User: 332.7 ms, System: 34.7 ms]
Range (min … max): 364.6 ms … 374.6 ms 20 runs
nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.810 s ± 0.030 s [User: 2.517 s, System: 0.225 s]
Range (min … max): 2.742 s … 2.854 s 20 runs
when given a string, flex will copy the entire input to a newly allocated
location so that it can add a second terminating NUL byte. since the
parser is a very internal thing to EvalState we can ensure that having
two terminating NUL bytes is always possible without copying, and have
the parser itself merely check that the expected NULs are present.
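a minimal sketch of the caller side (the helper name is made up, not Nix's actual API): EvalState reads the input with the two trailing NUL bytes already in place, so the parser only has to assert that they are there instead of copying to establish them.
```
// sketch; readFileForParsing is an illustrative helper, not Nix's actual API
#include <cassert>
#include <fstream>
#include <iterator>
#include <string>

std::string readFileForParsing(const std::string & path)
{
    std::ifstream in(path, std::ios::binary);
    std::string s{std::istreambuf_iterator<char>(in), std::istreambuf_iterator<char>()};
    s.append("\0\0", 2);    // room for an in-place lexer scan buffer, no extra copy later
    return s;
}

void checkParserInput(const std::string & s)
{
    // the parser merely verifies the invariant instead of copying to establish it
    assert(s.size() >= 2 && s[s.size() - 1] == '\0' && s[s.size() - 2] == '\0');
}
```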
# before
Benchmark 1: nix search --offline nixpkgs hello
Time (mean ± σ): 572.4 ms ± 2.3 ms [User: 563.4 ms, System: 8.6 ms]
Range (min … max): 566.9 ms … 579.1 ms 50 runs
Benchmark 2: nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 381.7 ms ± 1.0 ms [User: 348.3 ms, System: 33.1 ms]
Range (min … max): 380.2 ms … 387.7 ms 50 runs
Benchmark 3: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.936 s ± 0.005 s [User: 2.715 s, System: 0.221 s]
Range (min … max): 2.923 s … 2.946 s 50 runs
# after
Benchmark 1: nix search --offline nixpkgs hello
Time (mean ± σ): 571.7 ms ± 2.4 ms [User: 563.3 ms, System: 8.0 ms]
Range (min … max): 566.7 ms … 579.7 ms 50 runs
Benchmark 2: nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 376.6 ms ± 1.0 ms [User: 345.8 ms, System: 30.5 ms]
Range (min … max): 374.5 ms … 379.1 ms 50 runs
Benchmark 3: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.922 s ± 0.006 s [User: 2.707 s, System: 0.215 s]
Range (min … max): 2.906 s … 2.934 s 50 runs
every stringy token the lexer returns is turned into a Symbol and not
used further, so we don't have to strdup. using a string_view is
sufficient, but due to limitations of the current parser we have to use
a POD type that holds the same information.
gives ~2% on system build, 6% on search, 8% on parsing alone
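sketched (illustrative names, not the exact parser plumbing): the lexer hands back a pointer/length pair into its own buffer, and the parser interns it into a Symbol immediately.
```
// sketch; a plain pointer/length pair standing in for std::string_view, since the
// current parser can only hold plain types in its semantic value union
#include <cstddef>
#include <string_view>

struct StringToken {
    const char * p;          // points into the lexer's internal buffer, no strdup
    size_t l;
    operator std::string_view() const { return {p, l}; }
};

// parser side, roughly: the token is interned into a Symbol right away, so the
// pointed-to bytes never need to outlive the current token, e.g.
//   $$ = state.symbols.create($1);   // create(std::string_view), illustrative
```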
# before
Benchmark 1: nix search --offline nixpkgs hello
Time (mean ± σ): 610.6 ms ± 2.4 ms [User: 602.5 ms, System: 7.8 ms]
Range (min … max): 606.6 ms … 617.3 ms 50 runs
Benchmark 2: nix eval -f hackage-packages.nix
Time (mean ± σ): 430.1 ms ± 1.4 ms [User: 393.1 ms, System: 36.7 ms]
Range (min … max): 428.2 ms … 434.2 ms 50 runs
Benchmark 3: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 3.032 s ± 0.005 s [User: 2.808 s, System: 0.223 s]
Range (min … max): 3.023 s … 3.041 s 50 runs
# after
Benchmark 1: nix search --offline nixpkgs hello
Time (mean ± σ): 574.7 ms ± 2.8 ms [User: 566.3 ms, System: 8.0 ms]
Range (min … max): 569.2 ms … 580.7 ms 50 runs
Benchmark 2: nix eval -f hackage-packages.nix
Time (mean ± σ): 394.4 ms ± 0.8 ms [User: 361.8 ms, System: 32.3 ms]
Range (min … max): 392.7 ms … 395.7 ms 50 runs
Benchmark 3: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.976 s ± 0.005 s [User: 2.757 s, System: 0.218 s]
Range (min … max): 2.966 s … 2.990 s 50 runs
We now parse function applications as a vector of arguments rather
than as a chain of binary applications, e.g. 'substring 1 2 "foo"' is
parsed as
ExprCall { .fun = <substring>, .args = [ <1>, <2>, <"foo"> ] }
rather than
ExprApp (ExprApp (ExprApp <substring> <1>) <2>) <"foo">
This allows primops to be called immediately (if enough arguments are
supplied) without having to allocate intermediate tPrimOpApp values.
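In terms of the AST types, the change is roughly this (a simplified sketch):
```
// simplified sketch of the two representations
#include <vector>

struct Expr { virtual ~Expr() = default; };

// before: nested binary applications, one node per argument
struct ExprApp : Expr { Expr * fun; Expr * arg; };

// after: one node carrying the function and all of its arguments
struct ExprCall : Expr { Expr * fun; std::vector<Expr *> args; };
```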
On
$ nix-instantiate --dry-run '<nixpkgs/nixos/release-combined.nix>' -A nixos.tests.simple.x86_64-linux
this gives a substantial performance improvement:
user CPU time: median = 0.9209 mean = 0.9218 stddev = 0.0073 min = 0.9086 max = 0.9340 [rejected, p=0.00000, Δ=-0.21433±0.00677]
elapsed time: median = 1.0585 mean = 1.0584 stddev = 0.0024 min = 1.0523 max = 1.0623 [rejected, p=0.00000, Δ=-0.20594±0.00236]
because it reduces the number of tPrimOpApp allocations from 551990 to
42534 (i.e. only a small minority of primop calls are partially
applied) which in turn reduces time spent in the garbage collector.
Rather than having them as plain strings scattered through the whole
codebase, create an enum containing all the known experimental features.
This means that
- Nix can now `warn` when an unknown experimental feature is passed
(making it much easier to spot typos and deprecated features)
- It’s now easy to remove a feature altogether (once the feature isn’t
experimental anymore or is dropped) by just removing its value from the
enum and letting the compiler point us to all the now-invalid usages
of it.
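A sketch of the idea (the feature list is abbreviated and parseFeature is an illustrative helper, not the exact API):
```
// sketch only; the feature list is abbreviated and the helpers are illustrative
#include <iostream>
#include <optional>
#include <string_view>

enum class ExperimentalFeature {
    Flakes,
    NixCommand,
    // ...
};

std::optional<ExperimentalFeature> parseFeature(std::string_view name)
{
    if (name == "flakes") return ExperimentalFeature::Flakes;
    if (name == "nix-command") return ExperimentalFeature::NixCommand;
    return std::nullopt;
}

void enableFeature(std::string_view name)
{
    if (auto f = parseFeature(name)) {
        /* record *f in the set of enabled features */
    } else {
        std::cerr << "warning: unknown experimental feature '" << name << "'\n";
    }
}
```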
In all our cases the boolean is only used to determine whether the
formals pointer is non-null. We can get rid of the extra member and
instead just compare the pointer value with NULL, saving up to
sizeof(bool) + platform-specific alignment per ExprLambda instance.
Probably not a lot of memory, but perhaps a few kilobytes with nixpkgs?
This also gets rid of a potential issue where formals could be
dereferenced based on the value of a boolean that didn't have to agree
with the formals pointer (though it did in all our cases).
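Roughly (a sketch following the description above):
```
// sketch of the simplification: the pointer itself encodes "has formals"
struct Formals { /* ... */ };

struct ExprLambda {
    // before: a separate bool plus the pointer, two things that had to stay in sync
    Formals * formals = nullptr;                    // after: one source of truth
    bool hasFormals() const { return formals != nullptr; }
};
```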
Previously, type or coercion errors for string interpolation, path
interpolation, and plus expressions were always reported at the
beginning of the outer expression. This leads to confusing evaluation
error messages, making it hard to accurately diagnose and then fix the
error.
For example, errors were reported as follows.
```
cannot coerce an integer to a string
1| let foo = 7; in "bar" + foo
 |                 ^
cannot add a string to an integer
1| let foo = "bar"; in 4 + foo
 |                     ^
cannot coerce an integer to a string
1| let foo = 7; in "x${foo}"
 |                 ^
```
This commit changes the ExprConcatStrings expression vector to store a
sequence of expressions *and* their expansion locations so that error
locations can be reported accurately. For interpolation, the error is
reported at the beginning of the entire `${foo}`, not at the beginning
of `foo` because I thought this was slightly clearer. The previous
errors are now reported as:
```
cannot coerce an integer to a string
1| let foo = 7; in "bar" + foo
 |                         ^
cannot add a string to an integer
1| let foo = "bar"; in 4 + foo
 |                         ^
cannot coerce an integer to a string
1| let foo = 7; in "x${foo}"
 |                   ^
```
The error is reported at this kind of precise location even for
multi-line indented strings.
This probably helps with at least some of the cases mentioned in #561
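In terms of the data structure, the change looks roughly like this (a simplified sketch; the real member is heap-allocated and Pos is reduced to a placeholder here):
```
// simplified sketch: every concatenated piece carries its own source position
#include <utility>
#include <vector>

struct Expr;
struct Pos { /* origin, line, column */ };

struct ExprConcatStrings {
    Pos pos;                                        // start of the whole expression
    bool forceString;                               // true for "..." literals, false for +
    std::vector<std::pair<Pos, Expr *>> es;         // (piece location, piece) for precise errors
};
```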
When working on some more complex Nix code, there are sometimes rather
unhelpful or misleading error messages, especially when coercion errors
are thrown.
This patch is a first step towards improving that. I'm happy to file
more changes after that, but I'd like to gather some feedback first.
To summarize, this patch does the following things:
* Attrsets (a.k.a. `Bindings` in `libexpr`) now have a `Pos`. This is
helpful e.g. to identify which attribute-set in `listToAttrs` is
invalid.
* The `Value` struct has a new method named `determinePos` which tries
to guess the position of a value and falls back to a default if that's
not possible (see the sketch after this list).
This can be used to provide better messages if a coercion fails.
* The new `determinePos`-API is used by `builtins.concatMap` now. With
that change, Nix shows the exact position in the error where a wrong
value was returned by the lambda.
To make sure it's still obvious that `concatMap` is the problem,
another stack-frame was added.
* The changes described above can be added to every other `primop`, but
first I'd like to get some feedback about the overall approach.
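A rough sketch of the `determinePos` idea with a heavily simplified `Value` (the real method inspects more value kinds):
```
// sketch with a heavily simplified Value; the real determinePos covers more cases
struct Pos { /* origin, line, column */ };

enum class ValueType { Attrs, Lambda, Other };

struct Bindings { Pos pos; /* attributes... */ };      // attrsets now remember where they were written
struct Lambda   { Pos pos; /* formals, body... */ };

struct Value {
    ValueType type;
    union { Bindings * attrs; Lambda * lambda; };

    Pos determinePos(const Pos & fallback) const {
        switch (type) {
            case ValueType::Attrs:  return attrs->pos;
            case ValueType::Lambda: return lambda->pos;
            default:                return fallback;   // guessing failed, use the caller's position
        }
    }
};
```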
Changes:
* The divider lines are gone. These were in practice a bit confusing,
in particular with --show-trace or --keep-going, since then there
were multiple divider lines, suggesting a start/end which wasn't the case.
* Instead, multi-line error messages are now indented to align with
the prefix (e.g. "error: ").
* The 'description' field is gone since we weren't really using it.
* 'hint' is renamed to 'msg' since it really wasn't a hint.
* The error is now printed *before* the location info.
* The 'name' field is no longer printed since most of the time it
wasn't very useful, being just the name of the exception (like
EvalError). Ideally in the future this would be a unique, easily
googleable error ID (like rustc).
* "trace:" is now just "…". This assumes error contexts start with
something like "while doing X".
Example before:
error: --- AssertionError ---------------------------------------------------------------------------------------- nix
at: (7:7) in file: /home/eelco/Dev/nixpkgs/pkgs/applications/misc/hello/default.nix
6|
7| x = assert false; 1;
| ^
8|
assertion 'false' failed
----------------------------------------------------- show-trace -----------------------------------------------------
trace: while evaluating the attribute 'x' of the derivation 'hello-2.10'
at: (192:11) in file: /home/eelco/Dev/nixpkgs/pkgs/stdenv/generic/make-derivation.nix
191| // (lib.optionalAttrs (!(attrs ? name) && attrs ? pname && attrs ? version)) {
192| name = "${attrs.pname}-${attrs.version}";
| ^
193| } // (lib.optionalAttrs (stdenv.hostPlatform != stdenv.buildPlatform && !dontAddHostSuffix && (attrs ? name || (attrs ? pname && attrs ? version)))) {
Example after:
error: assertion 'false' failed
at: (7:7) in file: /home/eelco/Dev/nixpkgs/pkgs/applications/misc/hello/default.nix
6|
7| x = assert false; 1;
| ^
8|
… while evaluating the attribute 'x' of the derivation 'hello-2.10'
at: (192:11) in file: /home/eelco/Dev/nixpkgs/pkgs/stdenv/generic/make-derivation.nix
191| // (lib.optionalAttrs (!(attrs ? name) && attrs ? pname && attrs ? version)) {
192| name = "${attrs.pname}-${attrs.version}";
| ^
193| } // (lib.optionalAttrs (stdenv.hostPlatform != stdenv.buildPlatform && !dontAddHostSuffix && (attrs ? name || (attrs ? pname && attrs ? version)))) {
`nix flake info` calls the github 'commits' API, which requires
authorization when the repository is private. Currently this request
fails with a 404.
This commit adds an authorization header when calling the 'commits' API.
It also changes the way that the 'tarball' API authenticates, moving the
user's token from a query parameter into the Authorization header.
The query parameter method was recently deprecated and will be disallowed
in November 2020; using it today triggers a warning email.
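As a minimal illustration of the two request styles (plain libcurl rather than Nix's actual downloader; names are made up): the token moves out of the URL's query string and into an Authorization header.
```
// illustration only, using plain libcurl rather than Nix's downloader
#include <curl/curl.h>
#include <string>

void fetchGithubCommits(const std::string & owner, const std::string & repo,
                        const std::string & ref, const std::string & token)
{
    CURL * curl = curl_easy_init();
    std::string url = "https://api.github.com/repos/" + owner + "/" + repo + "/commits/" + ref;
    // deprecated style: append ?access_token=<token> to the URL (triggers warning emails)
    // current style: send the token in a header instead
    struct curl_slist * headers = nullptr;
    headers = curl_slist_append(headers, ("Authorization: token " + token).c_str());
    curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_perform(curl);
    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
}
```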
The attributes previously stored in TreeInfo (narHash, revCount,
lastModified) are now stored in Input. This makes it less arbitrary
what attributes are stored where.
As a result, the lock file format has changed. An entry like
"info": {
"lastModified": 1585405475,
"narHash": "sha256-bESW0n4KgPmZ0luxvwJ+UyATrC6iIltVCsGdLiphVeE="
},
"locked": {
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "b88ff468e9850410070d4e0ccd68c7011f15b2be",
"type": "github"
},
is now stored as
"locked": {
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "b88ff468e9850410070d4e0ccd68c7011f15b2be",
"type": "github",
"lastModified": 1585405475,
"narHash": "sha256-bESW0n4KgPmZ0luxvwJ+UyATrC6iIltVCsGdLiphVeE="
},
The 'Input' class is now a dumb set of attributes. All the fetcher
implementations subclass InputScheme, not Input. This simplifies the
API.
Also, fix substitution of flake inputs. This was broken since lazy
flake fetching started using fetchTree internally.
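Roughly, the resulting split looks like this (a hedged, simplified sketch; the real InputScheme interface has more methods and different signatures, this only illustrates the "dumb Input plus pluggable scheme" split):
```
// hedged sketch; not the actual interface in libfetchers
#include <cstdint>
#include <map>
#include <optional>
#include <string>
#include <variant>

using Attr  = std::variant<std::string, uint64_t, bool>;
using Attrs = std::map<std::string, Attr>;

struct Input { Attrs attrs; };                  // a dumb set of attributes, nothing more

struct InputScheme {
    virtual ~InputScheme() = default;
    virtual std::optional<Input> inputFromAttrs(const Attrs & attrs) const = 0;
    // fetching, locking, etc. also live here in the real interface
};

struct GitHubInputScheme : InputScheme {
    std::optional<Input> inputFromAttrs(const Attrs & attrs) const override {
        auto type = attrs.find("type");
        if (type == attrs.end()) return std::nullopt;
        auto s = std::get_if<std::string>(&type->second);
        if (!s || *s != "github") return std::nullopt;   // not ours; another scheme may claim it
        return Input{attrs};
    }
};
```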
This provides a pluggable mechanism for defining new fetchers. It adds
a builtin function 'fetchTree' that generalizes existing fetchers like
'fetchGit', 'fetchMercurial' and 'fetchTarball'. 'fetchTree' takes a
set of attributes, e.g.
fetchTree {
  type = "git";
  url = "https://example.org/repo.git";
  ref = "some-branch";
  rev = "abcdef...";
}
The existing fetchers are just wrappers around this. Note that the
input attributes to fetchTree are the same as flake input
specifications and flake lock file entries.
All fetchers share a common cache stored in
~/.cache/nix/fetcher-cache-v1.sqlite. This replaces the ad hoc caching
mechanisms in fetchGit and download.cc (e.g. ~/.cache/nix/{tarballs,git-revs*}).
This also adds support for Git worktrees (c169ea5904).
This allows querying the location of function arguments. E.g.
builtins.unsafeGetAttrPos "x" (builtins.functionArgs ({ x }: null))
=> { column = 57; file = "/home/infinisil/src/nix/inst/test.nix"; line = 1; }
There is no termination condition when resolving cyclic expression
paths, which can lead to infinite loops. This addresses one spot in
the parser in a similar fashion to what utils.cc/canonPath does.
This issue can be reproduced by something like:
```
ln -s a b
ln -s b a
nix-instantiate -E 'import ./a'
```
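A sketch of the canonPath-style guard (the helper and the exact limit are illustrative): symlink resolution simply gives up after a fixed number of hops instead of following them forever.
```
// sketch of a canonPath-style symlink-follow limit; names and the limit are illustrative
#include <filesystem>
#include <stdexcept>

std::filesystem::path resolveSymlinks(std::filesystem::path p)
{
    unsigned int followCount = 0, maxFollow = 1024;
    while (std::filesystem::is_symlink(p)) {
        if (++followCount >= maxFollow)
            throw std::runtime_error("infinite symlink recursion in path '" + p.string() + "'");
        p = std::filesystem::read_symlink(p);   // note: relative targets would need rebasing in real code
    }
    return p;
}
```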
E.g.
$ nix-build '<nixpkgs>' -A hello --experimental-features no-url-literals
error: URL literals are disabled, at /nix/store/vsjamkzh15r3c779q2711az826hqgvzr-nixpkgs-20.03pre194957.bef773ed53f/nixpkgs/pkgs/top-level/all-packages.nix:1236:11
Helps with implementing https://github.com/NixOS/rfcs/pull/45.
For example, you can write
src = fetchgit ./.;
and if ./. refers to an unclean working tree, that tree will be copied
to the Nix store. This removes the need for "cleanSource".