This also bans various ways of sneaking negative numbers from the
language into unsuspecting builtins, as was exposed while auditing the
consequences of changing the Nix language integer type to a newtype.
It's unlikely that this change comprehensively ensures correctness when
passing integers out of the Nix language and we should probably add a
checked-narrowing function or something similar, but that's out of scope
for the immediate change.
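(For illustration only: such a checked-narrowing helper could look roughly
like this. The name and shape below are hypothetical and not part of this
change.)

    #include <cstdint>
    #include <limits>
    #include <optional>
    #include <utility>

    // Hypothetical helper: narrow a Nix language integer (int64_t) into a
    // smaller or unsigned parameter type, refusing values that do not fit
    // instead of silently truncating or wrapping.
    template<typename To>
    std::optional<To> checkedNarrow(int64_t value)
    {
        if (std::cmp_less(value, std::numeric_limits<To>::min())
            || std::cmp_greater(value, std::numeric_limits<To>::max()))
            return std::nullopt;
        return static_cast<To>(value);
    }

A call site would then raise an evaluation error when the optional is
empty, rather than passing a wrapped value on to the builtin.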
During the development of this I found a few fun facts about the
language:
- You could overflow integers by converting from unsigned JSON values.
- You could overflow unsigned integers by converting negative numbers
into them when going into Nix config, into fetchTree, and into flake
inputs.
The flake inputs and Nix config cannot actually be tested properly
since they both ban thunks; however, we put in checks anyway because
it's possible these could somehow be used to do such shenanigans some
other way.
Note that Lix has banned Nix language integer overflows since the very
first public beta, but threw a SIGILL about them because we run with
-fsanitize=signed-integer-overflow -fsanitize-undefined-trap-on-error in
production builds. Since the Nix language uses signed integers, overflow
was simply undefined behaviour, and since we defined that to trap, it
did.
Trapping on it was bad UX, but we didn't even entirely notice that we
had done this at all until it was reported as a bug a couple of months
later (which is, to be fair, that flag working as intended). It has had
enough production time that, aside from code that is IMHO buggy (and
which is, in any case, not in nixpkgs) such as
https://git.lix.systems/lix-project/lix/issues/445, we don't think
anyone doing anything reasonable actually depends on wrapping overflow.
Even for weird use cases such as doing funny bit crimes, it doesn't make
sense IMO to have wrapping behaviour, since two's complement arithmetic
overflow behaviour is so *aggressively* not what you want for *any* kind
of mathematics/algorithms. The Nix language exists for package
management, a domain where bit crimes are already only dubiously in
scope to begin with, and in that domain it makes a lot more sense for
integers to never lose precision, either by throwing errors when they
would, or by being arbitrary-precision.
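As a sketch of what error-on-overflow arithmetic looks like (assuming the
GCC/Clang overflow builtins; this is illustrative, not the exact code used):

    #include <cstdint>
    #include <stdexcept>

    // Add two Nix integers, raising an error instead of wrapping.
    // __builtin_add_overflow reports overflow without any undefined behaviour.
    int64_t addNixInts(int64_t a, int64_t b)
    {
        int64_t result;
        if (__builtin_add_overflow(a, b, &result))
            throw std::overflow_error("integer overflow in addition");
        return result;
    }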
Fixes: https://github.com/NixOS/nix/issues/10968
Original-CL: https://gerrit.lix.systems/c/lix/+/1596
Change-Id: I51f253840c4af2ea5422b8a420aa5fafbf8fae75
Known behavior changes:
- `MemorySourceAccessor`'s comparison operators no longer forget to
compare the `SourceAccessor` base class.
Progress on #10832
What remains for that issue is hopefully much easier!
After the removal of the InputAccessor::fetchToStore() method, the
only remaining functionality in InputAccessor was `fingerprint` and
`getLastModified()`, and there is no reason to keep those in a
separate class.
Add a method to check if a value has been initialized. This helps avoid
segfaults when calling `type()`.
Useful in the context of the new C API.
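A rough sketch of the idea (a simplified model; names and internals may
differ from the actual implementation):

    // Simplified model: an uninitialized Value keeps a sentinel internal
    // type, so a cheap check can run before type() reads the payload.
    enum InternalType { tUninitialized = 0, tInt, tBool /* ... */ };

    struct Value
    {
        InternalType internalType = tUninitialized;

        bool isValid() const { return internalType != tUninitialized; }
    };

A C API caller can then report an error for an uninitialized value
instead of crashing inside `type()`.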
Closes #10524
Thunks are now overwritten by a helper function
`Value::finishValue(newType, payload)` (where `payload` is the
original anonymous union inside `Value`). This helps to ensure we
never update a value elsewhere, since that would be incompatible with
parallel evaluation (i.e. after a value has transitioned from being a
thunk to being a non-thunk, it should be immutable).
There were two places where this happened: `Value::mkString()` and
`ExprAttrs::eval()`.
This PR also adds a bunch of accessor functions for value contents,
like `Value::integer()` to access the integer field in the union.
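In rough terms, the intended pattern looks like this (a simplified
fragment, not the exact code):

    // Write side: a thunk transitions exactly once into its final value.
    v.mkThunk(env, expr);   // starts life as a thunk
    // ... later, when forced, a single overwrite happens, e.g. inside
    // mkInt(): finishValue(tInt, payload). After that, v is immutable.

    // Read side: use accessors rather than poking the union directly.
    NixInt n = v.integer();  // only valid once v.type() == nInt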
Previously, `state.mkList()` would set the type of the value to tList
and allocate the list vector, but it would not initialize the values
in the list. This has two problems:
* If an exception occurs, the list is left in an undefined state.
* More importantly, for multithreaded evaluation, if a value
transitions from thunk to non-thunk, it should be final (i.e. other
threads should be able to access the value safely).
To address this, there now is a `ListBuilder` class (analogous to
`BindingsBuilder`) to build the list vector prior to the call to
`Value::mkList()`. Typical usage:
auto list = state.buildList(size);
for (auto & v : list)
    v = ... set value ...;
vRes.mkList(list);
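For example, filling the list with integers might look like this (a
sketch that assumes `ListBuilder` supports indexed assignment of
`Value *` elements):

    // Build a list of the integers 0..9; each element is a freshly
    // allocated Value owned by the evaluator.
    const size_t size = 10;
    auto list = state.buildList(size);
    for (size_t n = 0; n < size; ++n)
        (list[n] = state.allocValue())->mkInt(n);
    vRes.mkList(list);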
While preparing PRs like #9753, I've had to change error messages in
dozens of code paths. It would be nice if instead of
EvalError("expected 'boolean' but found '%1%'", showType(v))
we could write
TypeError(v, "boolean")
or similar. Then, changing the error message could be a mechanical
refactor with the compiler pointing out places the constructor needs to
be changed, rather than the error-prone process of grepping through the
codebase. Structured errors would also help prevent the "same" error
from having multiple slightly different messages, and could be a first
step towards error codes / an error index.
This PR reworks the exception infrastructure in `libexpr` to
support exception types with different constructor signatures than
`BaseError`. Actually refactoring the exceptions to use structured data
will come in a future PR (this one is big enough already, as it has to
touch every exception in `libexpr`).
The core design is in `eval-error.hh`. Generally, errors like this:
state.error("'%s' is not a string", getAttrPathStr())
.debugThrow<TypeError>()
are transformed like this:
state.error<TypeError>("'%s' is not a string", getAttrPathStr())
.debugThrow()
The type annotation has moved from `ErrorBuilder::debugThrow` to
`EvalState::error`.
Previously, there were two mostly-identical value printers -- one in
`libexpr/eval.cc` (which didn't force values) and one in
`libcmd/repl.cc` (which did force values and also printed ANSI color
codes).
This PR unifies both of these printers into `print.cc` and provides a
`PrintOptions` struct for controlling the output, which allows for
toggling whether values are forced, whether repeated values are tracked,
and whether ANSI color codes are displayed.
Additionally, `PrintOptions` allows tuning the maximum number of
attributes, list items, and bytes in a string that will be displayed;
this makes it ideal for contexts where printing too much output (e.g.
all of Nixpkgs) is distracting. (As requested by @roberth in
https://github.com/NixOS/nix/pull/9554#issuecomment-1845095735)
Please read the tests for example output.
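For illustration, a caller might configure the printer roughly like this
(field names are indicative; check the `PrintOptions` definition for the
exact set):

    printValue(state, std::cout, v, PrintOptions {
        .ansiColors = true,       // emit ANSI color codes
        .force = true,            // force thunks before printing
        .maxAttrs = 32,           // show at most 32 attributes per set
        .maxListItems = 32,       // show at most 32 list items
        .maxStringLength = 1024,  // truncate displayed strings beyond 1 KiB
    });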
Future work:
- It would be nice to provide this function as a builtin, perhaps
`builtins.toStringDebug` -- a printing function that never fails would
be useful when debugging Nix code.
- It would be nice to support customizing `PrintOptions` members on the
command line, e.g. `--option to-string-max-attrs 1000`.
Also move `SourcePath` into `libutil`.
These changes allow `error.hh` and `error.cc` to access source path and
position information, which we can use to produce better error messages
(for example, we could consider omitting filenames when two or more
consecutive stack frames originate from the same file).
this also reduces forceValue code size and removes the need for
hideInDiagnostics. coopting thunk forcing like this has the additional
benefit of clarifying how these errors can happen in the first place.
checking for isBlackhole in the forceValue hot path is rather more
expensive than necessary, and with a little bit of trickery we can move
such handling into the isApp case. this is mostly a small performance
benefit, but under some circumstances we've seen a 2% improvement as
well.
〉 nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
before:
Time (mean ± σ): 4.429 s ± 0.002 s [User: 3.929 s, System: 0.500 s]
Range (min … max): 4.427 s … 4.433 s 10 runs
after:
Time (mean ± σ): 4.396 s ± 0.002 s [User: 3.894 s, System: 0.501 s]
Range (min … max): 4.393 s … 4.399 s 10 runs
**`Value` and `const`**
These two deserve some explanation. We'll get to lists later.
Values can normally be thought of as immutable, except that they are
also the vehicle for call by need, which must be implemented using
mutation.
This circumstance makes a `const Value` a rather useless thing:
- If it's a thunk, you can't evaluate it, except by copying, but
that would not be call by need.
- If it's not a thunk, you know the type, so the method that
acquired it for you should have returned something more specific,
such as a `const Bindings &` (which actually does make sense
because that's an immutable span of pointers to mutable `Value`s).
- If you don't care about the type yet, you might establish the
convention that `const Value` means `deepSeq`-ed data, but this is
hardly useful, and not actually as safe as you would want to trust it
to be - it is just a convention.
**Lists**
`std::span` is a tuple of pointer and size - just what we need.
We don't return them as `const Value`, because considering the
first bullet point we discussed before, we'd have to force all
the list values, which isn't what we want.
So what we end up with is a nice representation of a list in
weak head normal form: the spine is immutable, but the
items may need some evaluation later.
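Concretely, consuming such a list looks roughly like this (a sketch;
accessor names are approximate):

    // The spine (the span of Value pointers) is immutable, but each
    // pointed-to Value may still be a thunk that needs forcing on demand.
    for (Value * elem : v.listItems())
        state.forceValue(*elem, noPos);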
This is done in roughly the same way builtin functions are documented.
Also auto-link experimental features for primops, subsuming PR #8371.
Co-authored-by: Eelco Dolstra <edolstra@gmail.com>
Co-authored-by: Robert Hensing <roberth@users.noreply.github.com>
Co-authored-by: Valentin Gagarin <valentin.gagarin@tweag.io>
Motivation
`PathSet` is not correct because string contexts have other forms
(`Built` and `DrvDeep`) that are not rendered as plain store paths.
Instead of wrongly using `PathSet`, or "stringly typed" using
`StringSet`, use `std::set<StringContextElem>`.
----
In support of this change, `NixStringContext` is now defined as
`std::set<StringContextElem>`, not `std::vector<StringContextElem>`. The
old definition was just used by a `getContext` method which was only
used by the eval cache. It can be deleted altogether since the types are
now unified and the preexisting `copyContext` function already suffices.
Summarizing the previous paragraph:
Old:
- `value/context.hh`: `NixStringContext = std::vector<StringContextElem>`
- `value.hh`: `NixStringContext Value::getContext(...)`
- `value.hh`: `copyContext(...)`
New:
- `value/context.hh`: `NixStringContext = std::set<StringContextElem>`
- `value.hh`: `copyContext(...)`
----
The string representation of string context elements no longer contains
the store dir. The diff of `src/libexpr/tests/value/context.cc` should
make clear what the new representation is, so we recommend reviewing
that file first. This was done for two reasons:
Less API churn:
`Value::mkString` and friends did not take a `Store` before. But if
`NixStringContextElem::{parse, to_string}` *do* take a store (as they
did before), then we cannot have the `Value` functions use them (in
order to work with the fully-structured `NixStringContext`) without
adding that argument.
That would have been a lot of churn threading the store through, and this
diff is already large enough, so the easier and less invasive thing to
do was simply make the element `parse` and `to_string` functions not
take the `Store` reference, and the easiest way to do that was to simply
drop the store dir.
Space usage:
Dropping the `/nix/store/` (or similar) from the internal representation
will save space in the heap of the Nix program being interpreted. If
the heap contains many strings with non-trivial contexts, the saving
could add up to something significant.
----
The eval cache version is bumped.
The eval cache serialization uses `NixStringContextElem::{parse,
to_string}`, and since those functions are changed per the above, that
means the on-disk representation is also changed.
This is done simply by changing the name used for the eval cache
from `eval-cache-v4` to `eval-cache-v5`.
----
To avoid some duplication `EvalCache::mkPathString` is added to abstract
over the simple case of turning a store path into a string with just
that path in the context.
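Roughly, the idea is (a sketch; the exact construction and signature may
differ):

    // A string value rendering the store path, whose context is the
    // singleton set containing that path as a plain (Opaque) element.
    NixStringContext context;
    context.insert(NixStringContextElem::Opaque { .path = storePath });
    v.mkString(store.printStorePath(storePath), context);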
Context
This PR picks up where #7543 left off. That one introduced the fully
structured `NixStringContextElem` data type, but kept `PathSet context`
as an awkward middle ground between internal `char[][]` interpreter heap
string contexts and `NixStringContext` fully parsed string contexts.
The infelicity of `PathSet context` was specifically called out during
Nix team group review, but it was agreed that fixing it could be left
as future work. This is that future work.
A possible follow-up step would be to get rid of the `char[][]`
evaluator heap representation, too, but it is not yet clear how to do
that. To use `NixStringContextElem` there we would need to get the STL
containers to GC pointers in the GC build, and I am not sure how to do
that.
----
PR #7543 is effectively writing the inverse of `mkPathString`,
`mkOutputString`, and one more such function for the `DrvDeep` case. I
would like that PR to have property tests ensuring it is actually the
inverse as expected.
This PR sets things up nicely so that reworking that PR to be in that
more elegant and better tested way is possible.
Co-authored-by: Théophane Hufschmitt <7226587+thufschmitt@users.noreply.github.com>
* Finish converting existing comments for internal API docs
99% of this was just reformatting existing comments. Only two exceptions:
- Expanded upon `BuildResult::status` compat note
- Split up file-level `symbol-table.hh` doc comments to get
per-definition docs
Also fixed a few whitespace goofs, turning leading tabs to spaces and
removing trailing spaces.
Picking up from #8133
* Fix two things from comments
* Use triple-backtick not indent for `dumpPath`
* Convert GNU-style `\`..'` quotes to markdown style in API docs
This will render correctly.
This introduces the SourcePath type from lazy-trees as an abstraction
for accessing files from inputs that may not be materialized in the
real filesystem (e.g. Git repositories). Currently, however, it's just
a wrapper around CanonPath, so it shouldn't change any behaviour. (On
lazy-trees, SourcePath is a <InputAccessor, CanonPath> tuple.)
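At this stage the type is essentially the following (a simplified
sketch; the real class has more helpers):

    // Currently just a canonical path; file access still goes through the
    // real filesystem, so behaviour is unchanged.
    struct SourcePath
    {
        CanonPath path;

        std::string readFile() const;  // read the file at 'path'
        bool pathExists() const;       // check existence on the real filesystem
    };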