a bunch of derivation strings contain no escape sequences. we can
optimize for this common case by first scanning for the end of a
derivation string and simply returning its contents unmodified if no
escape sequences were found. to make this even more efficient we can
also use BackedStringViews to avoid copies, eliminating heap
allocations for transient data.
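roughly, the fast path looks like this (a simplified sketch, not the actual parser; the function name and the BackedStringView stand-in are made up for illustration):

    // simplified sketch, not the actual parser: scan ahead for the closing
    // quote; if no backslash was seen, return a borrowed view into the input
    // instead of building an unescaped copy on the heap.
    #include <optional>
    #include <string>
    #include <string_view>
    #include <variant>

    // stand-in for BackedStringView: either a borrowed view or an owned string
    using StringOrView = std::variant<std::string_view, std::string>;

    std::optional<StringOrView> parseQuotedString(std::string_view input)
    {
        // `input` starts just after the opening '"'
        size_t i = 0;
        bool sawEscape = false;
        while (i < input.size() && input[i] != '"') {
            if (input[i] == '\\') { sawEscape = true; i += 2; }
            else i++;
        }
        if (i >= input.size()) return std::nullopt;  // unterminated string
        auto body = input.substr(0, i);
        if (!sawEscape)
            return StringOrView{body};               // fast path: zero copies

        std::string out;                             // slow path: unescape
        out.reserve(body.size());
        for (size_t j = 0; j < body.size(); j++) {
            if (body[j] == '\\') j++;                // simplified: a real unescaper
            if (j < body.size()) out.push_back(body[j]); // would map \n, \t, etc.
        }
        return StringOrView{std::move(out)};
    }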
before:
Benchmark 1: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 6.952 s ± 0.015 s [User: 5.294 s, System: 1.452 s]
Range (min … max): 6.926 s … 6.974 s 10 runs
after:
Benchmark 1: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 6.907 s ± 0.012 s [User: 5.272 s, System: 1.429 s]
Range (min … max): 6.893 s … 6.926 s 10 runs
This fixes a segfault on infinite function call recursion (rather than
infinite thunk recursion) by tracking the function call depth in
`EvalState`.
Additionally, to avoid printing extremely long stack traces, stack
frames are now deduplicated, with a `(19997 duplicate traces omitted)`
message. This should only really be triggered in infinite recursion
scenarios.
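Conceptually the guard looks like this (an illustrative sketch only; the member and limit names are made up, not the real `EvalState` fields):

    // illustrative sketch: count nested function calls and throw a catchable
    // evaluation error before the C stack overflows.
    #include <cstddef>
    #include <stdexcept>

    struct EvalState {
        size_t callDepth = 0;
        static constexpr size_t maxCallDepth = 10000;   // hypothetical limit

        void callFunction(/* Value & fun, Value & arg, Value & result */)
        {
            if (callDepth >= maxCallDepth)
                throw std::runtime_error("stack overflow");
            callDepth++;
            // ... evaluate the function body, which may recurse back into
            //     callFunction ...
            callDepth--;    // a real version would use an RAII guard so the
                            // counter also unwinds when an exception is thrown
        }
    };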
Before:
$ nix-instantiate --eval --expr '(x: x x) (x: x x)'
Segmentation fault: 11
After:
$ nix-instantiate --eval --expr '(x: x x) (x: x x)'
error: stack overflow
at «string»:1:14:
1| (x: x x) (x: x x)
| ^
$ nix-instantiate --eval --expr '(x: x x) (x: x x)' --show-trace
error:
… from call site
at «string»:1:1:
1| (x: x x) (x: x x)
| ^
… while calling anonymous lambda
at «string»:1:2:
1| (x: x x) (x: x x)
| ^
… from call site
at «string»:1:5:
1| (x: x x) (x: x x)
| ^
… while calling anonymous lambda
at «string»:1:11:
1| (x: x x) (x: x x)
| ^
… from call site
at «string»:1:14:
1| (x: x x) (x: x x)
| ^
(19997 duplicate traces omitted)
error: stack overflow
at «string»:1:14:
1| (x: x x) (x: x x)
| ^
more buffers that can be left uninitialized and placed on the stack.
small difference, but still worth doing.
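the pattern, roughly (illustrative only, not the actual call sites):

    // illustrative only: a short-lived read buffer that used to be a
    // zero-initialized heap allocation becomes an uninitialized stack array.
    #include <array>
    #include <unistd.h>

    ssize_t readSome(int fd)
    {
        // before (conceptually): std::vector<char> buf(16384); allocates and
        // zero-fills the buffer on every call before a single byte is read.
        std::array<char, 16384> buf;    // on the stack, contents left indeterminate
        ssize_t n = read(fd, buf.data(), buf.size());
        // ... real code would process buf[0..n) here ...
        return n;
    }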
before:
Benchmark 1: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 6.963 s ± 0.011 s [User: 5.330 s, System: 1.421 s]
Range (min … max): 6.943 s … 6.974 s 10 runs
after:
Benchmark 1: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 6.952 s ± 0.015 s [User: 5.294 s, System: 1.452 s]
Range (min … max): 6.926 s … 6.974 s 10 runs
istream sentry objects are very expensive for single-character
operations, and since we don't configure exception masks for the
istreams used here they don't even do anything. all we need is
end-of-string checks and an advancing position in an immutable memory
buffer, both of which can be had much more cheaply than istreams allow.
the effect of this change is most apparent on empty stores.
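a minimal sketch of such a reader (illustrative names, not the actual classes):

    // an advancing cursor over an immutable buffer, replacing istream-based
    // single-character reads.
    #include <stdexcept>
    #include <string_view>

    class StringReader {
        std::string_view data;
        size_t pos = 0;
    public:
        explicit StringReader(std::string_view s) : data(s) {}

        bool eof() const { return pos >= data.size(); }

        char peek() const
        {
            if (eof()) throw std::runtime_error("unexpected end of string");
            return data[pos];
        }

        char get() { char c = peek(); pos++; return c; }
    };

every read is a bounds check plus an index increment, with no sentry construction, no locale machinery, and no virtual dispatch through a streambuf.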
before:
Benchmark 1: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 7.167 s ± 0.013 s [User: 5.528 s, System: 1.431 s]
Range (min … max): 7.147 s … 7.182 s 10 runs
after:
Benchmark 1: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 6.963 s ± 0.011 s [User: 5.330 s, System: 1.421 s]
Range (min … max): 6.943 s … 6.974 s 10 runs
Previously, IFDs would be built within the eval store, even though one
is typically using `--eval-store` precisely to *avoid* local builds.
Because the resulting Nix expression must be copied back to the eval
store in order to be imported, this requires the eval store to trust
the build store's signatures.
Prior to this change, Nix would prepend every installable to the PATH
list in order to ensure that installables appeared before the current
PATH from the ambient environment.
With this change, all the installables are still prepended to the PATH,
but in the same order as they appear on the command line. This means
that the first of two packages that expose an executable `hello` would
appear in the PATH first, and thus be executed first.
See the test in the prior commit for a more concrete example.
There's no good reason to deprecate it:
- For consistency reasons it should continue to exist, such that all
primitive types have a corresponding `builtins.is*` primop.
- There's no implementation cost to continuing to have this function.
- It costs users time to try to migrate away from it, e.g.
https://github.com/NixOS/nixpkgs/pull/219747 and https://github.com/NixOS/nixpkgs/pull/275548
- Using it can give easier-to-read code like `all isNull list`
Co-authored-by: Robert Hensing <roberth@users.noreply.github.com>
this also reduces forceValue code size and removes the need for
hideInDiagnostics. coopting thunk forcing like this has the additional
benefit of clarifying how these errors can happen in the first place.
forceValue is extremely hot. interestingly, adding likeliness annotations
to the branches does not seem to make a difference.
before:
Time (mean ± σ): 4.224 s ± 0.005 s [User: 3.711 s, System: 0.512 s]
Range (min … max): 4.218 s … 4.234 s 10 runs
after:
Time (mean ± σ): 4.140 s ± 0.009 s [User: 3.647 s, System: 0.492 s]
Range (min … max): 4.130 s … 4.152 s 10 runs
almost all uses of this are interactive, except for deepSeq. deepSeq is
going to be expensive and rare enough that we don't need to care much
about it, and Value::determinePos should usually be cheap enough not to
be too much of a burden in any case.
~1% parser speedup from not using TLS indirections, less on system eval.
this could also have gone into flex's yyextra data, but that's significantly
slower for some reason (albeit still faster than thread locals).
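a contrived illustration of the difference (made-up names, not the actual parser state):

    // every access to a thread_local goes through a TLS lookup, while state
    // owned by the caller is a plain member access.
    #include <cstddef>

    thread_local size_t parseDepthTls = 0;           // before: thread-local globals

    struct ParserState { size_t parseDepth = 0; };   // after: state passed explicitly

    void enterTls()              { parseDepthTls++; }   // TLS indirection on every call
    void enter(ParserState & st) { st.parseDepth++; }   // ordinary load/store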
before:
Time (mean ± σ): 4.231 s ± 0.004 s [User: 3.725 s, System: 0.504 s]
Range (min … max): 4.226 s … 4.240 s 10 runs
after:
Time (mean ± σ): 4.224 s ± 0.005 s [User: 3.711 s, System: 0.512 s]
Range (min … max): 4.218 s … 4.234 s 10 runs
~2% speedup on parsing without eval, less (but still significant) on
system eval. having flex generate faster parsers leads to very strange
misparses. maybe re2c is worth investigating.
before:
Time (mean ± σ): 4.260 s ± 0.003 s [User: 3.754 s, System: 0.505 s]
Range (min … max): 4.257 s … 4.266 s 10 runs
after:
Time (mean ± σ): 4.231 s ± 0.004 s [User: 3.725 s, System: 0.504 s]
Range (min … max): 4.226 s … 4.240 s 10 runs
as written, the comparisons generate copies, even though it looks as
though they shouldn't.
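a contrived example of the kind of accidental copy involved (hypothetical types, not the actual code):

    // passing the operands by value copies both strings on every comparison,
    // even though the comparison itself only reads them.
    #include <string>

    struct Entry { std::string prefix; std::string path; };

    // before (conceptually): by-value parameters, two string copies per call
    bool equalCopying(Entry a, Entry b)
    {
        return a.prefix == b.prefix && a.path == b.path;
    }

    // after: const references compare in place, no copies
    bool equal(const Entry & a, const Entry & b)
    {
        return a.prefix == b.prefix && a.path == b.path;
    }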
before:
Time (mean ± σ): 4.396 s ± 0.002 s [User: 3.894 s, System: 0.501 s]
Range (min … max): 4.393 s … 4.399 s 10 runs
after:
Time (mean ± σ): 4.260 s ± 0.003 s [User: 3.754 s, System: 0.505 s]
Range (min … max): 4.257 s … 4.266 s 10 runs
checking for isBlackhole in the forceValue hot path is rather more
expensive than necessary, and with a little bit of trickery we can move
such handling into the isApp case. small performance benefit, but under
some circumstances we've seen a 2% improvement as well.
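conceptually, the trick looks something like this (a simplified sketch with stand-in types, not the real Value representation):

    // a value under evaluation is turned into an "application" whose callee
    // unconditionally throws, so forceValue needs no dedicated blackhole
    // branch on the hot path; re-entry fails inside the existing app case.
    #include <stdexcept>

    struct Value;
    using Callee = void (*)(Value &);

    enum class Type { Int, Thunk, App };

    struct Value {
        Type type = Type::Int;
        Callee callee = nullptr;    // set when type == App (simplified)
    };

    void throwInfiniteRecursion(Value &)
    {
        throw std::runtime_error("infinite recursion encountered");
    }

    void forceValue(Value & v)
    {
        switch (v.type) {
        case Type::Thunk:
            // guard against re-entry: make v look like an app that throws
            v.type = Type::App;
            v.callee = throwInfiniteRecursion;
            // ... evaluate the thunk, then overwrite v with the result ...
            break;
        case Type::App:
            v.callee(v);            // a blackholed value ends up throwing here
            break;
        default:
            break;                  // already in normal form
        }
    }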
〉 nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
before:
Time (mean ± σ): 4.429 s ± 0.002 s [User: 3.929 s, System: 0.500 s]
Range (min … max): 4.427 s … 4.433 s 10 runs
after:
Time (mean ± σ): 4.396 s ± 0.002 s [User: 3.894 s, System: 0.501 s]
Range (min … max): 4.393 s … 4.399 s 10 runs
resizing a std::string zero-fills the newly added bytes, which is not
necessary here and comes with a ~1.4% slowdown on our test nixos config.
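one way to avoid the zero-fill is shown below (a hedged sketch, not necessarily how this commit does it; resize_and_overwrite is C++23):

    // grow the string without value-initializing the new bytes, so only the
    // bytes actually read get written.
    #include <cstring>
    #include <string>
    #include <unistd.h>

    std::string readAll(int fd)
    {
        std::string out;
        char buf[16 * 1024];                 // uninitialized stack buffer
        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0) {
            size_t old = out.size();
            out.resize_and_overwrite(old + n, [&](char * p, size_t newSize) {
                std::memcpy(p + old, buf, n);   // copy only what was read
                return newSize;                 // keep the full new length
            });
        }
        return out;
    }

on pre-C++23 toolchains the same effect takes a manually managed buffer, or reading into a separately filled stack buffer and appending from it.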
〉 nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
before:
Time (mean ± σ): 4.486 s ± 0.003 s [User: 3.978 s, System: 0.507 s]
Range (min … max): 4.482 s … 4.492 s 10 runs
after:
Time (mean ± σ): 4.429 s ± 0.002 s [User: 3.929 s, System: 0.500 s]
Range (min … max): 4.427 s … 4.433 s 10 runs