Motivation
`PathSet` is not correct because string contexts have other forms
(`Built` and `DrvDeep`) that are not rendered as plain store paths.
Instead of wrongly using `PathSet`, or going "stringly typed" with
`StringSet`, use `std::set<StringContextElem>`.
-----
In support of this change, `NixStringContext` is now defined as
`std::set<StringContextElem>` rather than `std::vector<StringContextElem>`. The
old definition was only used by a `getContext` method, which itself was only
used by the eval cache. That method can be deleted altogether since the types
are now unified and the preexisting `copyContext` function already suffices.
Summarizing the previous paragraph:
Old:
- `value/context.hh`: `NixStringContext = std::vector<StringContextElem>`
- `value.hh`: `NixStringContext Value::getContext(...)`
- `value.hh`: `copyContext(...)`
New:
- `value/context.hh`: `NixStringContext = std::set<StringContextElem>`
- `value.hh`: `copyContext(...)`
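To illustrate what callers look like after the unification, here is a minimal sketch using placeholder types (the real declarations live in `value/context.hh` and `value.hh`, and `StringContextElem` is richer than shown):
```
#include <set>
#include <string>

// Placeholders for the real libexpr types; illustrative only.
struct StringContextElem {
    std::string raw;   // stand-in for the parsed Opaque/Built/DrvDeep forms
    bool operator<(const StringContextElem & other) const { return raw < other.raw; }
};

using NixStringContext = std::set<StringContextElem>;   // was std::vector<...>

struct Value { std::string payload; };   // stand-in for the evaluator's Value

// The preexisting helper now targets the set-based alias directly, so the
// old `Value::getContext` accessor is redundant and can be deleted.
void copyContext(const Value & v, NixStringContext & out)
{
    out.insert(StringContextElem{v.payload});   // illustrative body
}
```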
----
The string representation of string context elements no longer contains
the store dir. The diff of `src/libexpr/tests/value/context.cc` should
make clear what the new representation is, so we recommend reviewing
that file first. This was done for two reasons:
Less API churn:
`Value::mkString` and friends did not take a `Store` before. But if
`NixStringContextElem::{parse, to_string}` *do* take a store (as they
did before), then we cannot have the `Value` functions use them (in
order to work with the fully-structured `NixStringContext`) without
adding that argument.
That would have meant a lot of churn just to thread the store through, and this
diff is already large enough, so the easier and less invasive thing to do was
simply to make the element `parse` and `to_string` functions not take the
`Store` reference, and the easiest way to do that was to drop the store dir.
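To make the signature change concrete, here is a hedged sketch of the before/after shape of the element functions (the real declarations in `value/context.hh` may spell the arguments differently):
```
#include <string>
#include <string_view>

class Store;   // forward declaration; only the old signatures needed it

struct NixStringContextElem {
    // Old shape (sketch): the Store was only used to add/strip the store dir.
    //   static NixStringContextElem parse(const Store & store, std::string_view s);
    //   std::string to_string(const Store & store) const;

    // New shape (sketch): the rendering is store-dir-free, so no Store argument.
    static NixStringContextElem parse(std::string_view s);
    std::string to_string() const;
};
```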
Space usage:
Dropping the `/nix/store/` (or similar) prefix from the internal representation
will save space in the heap of the Nix program being interpreted. If the heap
contains many strings with non-trivial contexts, the savings could add up to
something significant.
----
The eval cache version is bumped.
The eval cache serialization uses `NixStringContextElem::{parse,
to_string}`, and since those functions are changed per the above, that
means the on-disk representation is also changed.
This is done simply by changing the name used for the eval cache
from `eval-cache-v4` to `eval-cache-v5`.
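A hedged sketch of what that amounts to; the actual path construction in `eval-cache.cc` may differ, and `evalCacheDbPath` and its parameters are placeholders:
```
#include <string>

// Illustrative only: the cache location embeds a version tag, so renaming the
// tag makes old caches invisible to new binaries rather than misread.
std::string evalCacheDbPath(const std::string & cacheDir, const std::string & fingerprint)
{
    // was "eval-cache-v4" before this change
    return cacheDir + "/nix/eval-cache-v5/" + fingerprint + ".sqlite";
}
```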
----
To avoid some duplication, `EvalCache::mkPathString` is added to abstract
over the simple case of turning a store path into a string with just that
path in the context.
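A minimal sketch of such a helper, with placeholder types (the real one is a member of `EvalCache` and uses the store's own path printing):
```
#include <set>
#include <string>

// Placeholders for the real types; illustrative only.
using StorePath = std::string;

struct StringContextElem {
    StorePath path;
    bool operator<(const StringContextElem & other) const { return path < other.path; }
};

using NixStringContext = std::set<StringContextElem>;

struct StringWithContext {
    std::string s;
    NixStringContext context;
};

// Abstracts the simple case: render a store path as a string whose context
// contains exactly that one (opaque) path.
StringWithContext mkPathString(const std::string & storeDir, const StorePath & path)
{
    return {
        storeDir + "/" + path,           // the rendered string
        { StringContextElem{path} },     // the single context element
    };
}
```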
Context
This PR picks up where #7543 left off. That one introduced the fully
structured `NixStringContextElem` data type, but kept `PathSet context`
as an awkward middle ground between internal `char[][]` interpreter heap
string contexts and `NixStringContext` fully parsed string contexts.
The infelicity of `PathSet context` was specifically called out during
Nix team group review, but it was agreed that fixing it could be left as
future work. This is that future work.
A possible follow-up step would be to get rid of the `char[][]`
evaluator heap representation, too, but it is not yet clear how to do
that. To use `NixStringContextElem` there we would need the STL
containers to allocate from (and be traced by) the GC in the GC build,
and I am not sure how to do that.
----
PR #7543 is effectively writing the inverse of `mkPathString`,
`mkOutputString`, and one more such function for the `DrvDeep` case. I
would like that PR to have property tests ensuring it is actually the
inverse, as expected.
This PR sets things up nicely so that reworking that PR to be in that
more elegant and better tested way is possible.
Co-authored-by: Théophane Hufschmitt <7226587+thufschmitt@users.noreply.github.com>
This introduces the SourcePath type from lazy-trees as an abstraction
for accessing files from inputs that may not be materialized in the
real filesystem (e.g. Git repositories). Currently, however, it's just
a wrapper around CanonPath, so it shouldn't change any behaviour. (On
lazy-trees, SourcePath is a <InputAccessor, CanonPath> tuple.)
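A rough sketch of the abstraction as described above, using placeholder members (the real class has more accessors):
```
#include <string>

// Placeholder for the real CanonPath: a canonicalised absolute path.
struct CanonPath { std::string abs; };

// For now SourcePath is just a thin wrapper, so behaviour is unchanged. On
// lazy-trees it additionally carries an InputAccessor, so file reads can go
// through the input (e.g. a Git tree) instead of the real filesystem.
struct SourcePath {
    CanonPath path;
    // InputAccessor & accessor;   // lazy-trees only, not part of this commit

    std::string to_string() const { return path.abs; }
};
```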
XDG Base Directory is a standard for locations for storing various
files. Nix has a few files which seem to fit in the standard, but
currently use a custom location directly in the user's ~, polluting
it:
- ~/.nix-profile
- ~/.nix-defexpr
- ~/.nix-channels
This commit adds a config option (use-xdg-base-directories) to follow
the XDG spec and instead use the following locations:
- $XDG_STATE_HOME/nix/profile
- $XDG_STATE_HOME/nix/defexpr
- $XDG_STATE_HOME/nix/channels
If $XDG_STATE_HOME is not set, it is assumed to be ~/.local/state.
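A hedged sketch of the intended lookup order; the helper names here are hypothetical, and the real implementation reads the `use-xdg-base-directories` setting and the environment through Nix's own utilities:
```
#include <cstdlib>
#include <string>

// Hypothetical helper: where per-user Nix state lives when
// use-xdg-base-directories is enabled.
std::string xdgStateDir(const std::string & home)
{
    const char * xdg = std::getenv("XDG_STATE_HOME");
    // If $XDG_STATE_HOME is unset, the XDG spec's default is ~/.local/state.
    return (xdg && *xdg) ? std::string(xdg) : home + "/.local/state";
}

std::string profileLink(bool useXdgBaseDirectories, const std::string & home)
{
    return useXdgBaseDirectories
        ? xdgStateDir(home) + "/nix/profile"   // new location
        : home + "/.nix-profile";              // legacy location directly in ~
}
```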
Co-authored-by: Théophane Hufschmitt <7226587+thufschmitt@users.noreply.github.com>
Co-authored-by: Tim Fenney <kodekata@gmail.com>
Co-authored-by: pasqui23 <pasqui23@users.noreply.github.com>
Co-authored-by: Artturin <Artturin@artturin.com>
Co-authored-by: John Ericson <Ericson2314@Yahoo.com>
Previously, getDefaultNixPath was called too early: at initialisation
time, before the CLI and config had been processed, when `restrictEval` and
`pureEval` both still had their default value `false`. Call it when
initialising the EvalState instead, and use `setDefault`.
Prior to this change, we had a bunch of ad-hoc string manipulation code
scattered around. This made it hard to figure out what the data model for
string contexts actually is.
Now, we still store string contexts most of the time as encoded strings
--- I was wary of the performance implications of changing that --- but
whenever we parse them we do so only through the
`NixStringContextElem::parse` method, which handles all cases. This
creates a data type that is very similar to `DerivedPath` but:
- Represents the funky `=<drvpath>` case as properly distinct from the
others.
- Only encodes a single output, no wildcards and no set, for the
"built" case.
(I would like to deprecate `=<path>`, after which we are in spitting
distance of `DerivedPath` and could maybe get away with fewer types, but
that is another topic for another day.)
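For reference, a hedged sketch of the resulting shape; the real type in `value/context.hh` is a variant with richer members, and the names below are placeholders:
```
#include <string>
#include <variant>

// Placeholder for the real StorePath type.
using StorePath = std::string;

// The three cases the parser now distinguishes:
struct Opaque  { StorePath path; };                         // plain store path
struct DrvDeep { StorePath drvPath; };                      // the funky "=<drvpath>" case
struct Built   { StorePath drvPath; std::string output; };  // exactly one output, no wildcards

using NixStringContextElemSketch = std::variant<Opaque, DrvDeep, Built>;
```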
This makes the position object used in exceptions abstract, with a
method getSource() to get the source code of the file in which the
error originated. This is needed for lazy trees because source files
don't necessarily exist in the filesystem, and we don't want to make
libutil depend on the InputAccessor type in libfetchers.
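A hedged sketch of the shape of that abstraction (the concrete member names in the tree differ):
```
#include <cstdint>
#include <optional>
#include <string>

// Sketch: the origin of a position is abstract, so libutil does not need to
// know about InputAccessor. A lazy-tree origin can fetch the source text on
// demand; a purely synthetic origin can simply return std::nullopt.
struct AbstractPos {
    uint32_t line = 0, column = 0;

    // Return the full source text of the file this position points into,
    // if it can be recovered at all.
    virtual std::optional<std::string> getSource() const = 0;

    virtual ~AbstractPos() = default;
};
```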
Make everything be in the form "while ..." (most things were already),
and in particular *don't* use other propositions that must go after or
before specific "while ..." clauses to make sense.
The old way was not correct.
Here is an example:
```
$ nix-instantiate --eval --expr 'let x = a: throw "asdf"; in x 1' --show-trace
error: asdf
… while evaluating 'x'
at «string»:1:9:
1| let x = a: throw "asdf"; in x 1
| ^
… from call site
at «string»:1:29:
1| let x = a: throw "asdf"; in x 1
| ^
```
and yet also:
```
$ nix-instantiate --eval --expr 'let x = a: throw "asdf"; in x' --show-trace
<LAMBDA>
```
Here is the thing: in both cases we are evaluating `x`!
Nix is a higher-order language, and functions are a sort of value. When
we write `x = a: ...`, `a: ...` is the expression that `x` is being
defined to be, and that is already a value. Therefore, we should *never*
get a trace that says "while evaluating `x`", because evaluating `a:
...` is *trivial* and nothing happens during it!
What is actually happening here is that we are applying `x` and evaluating
its *body* with arguments substituted for parameters. I think the
simplest way to say that is just "while *calling* `x`", and so that is what I
changed it to.
It calls strlen() on the input (rather than simply copying at most
`size` bytes), which can fail if the input is not zero-terminated and
is inefficient in any case.
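A hedged sketch of the difference, with placeholder function names (the real call site is in libexpr):
```
#include <cstddef>
#include <cstring>
#include <string>

// Problematic shape: rescans the buffer and requires a terminating NUL.
std::string copyOld(const char * s, size_t size)
{
    (void) size;                              // the explicit length was ignored
    return std::string(s, std::strlen(s));    // wrong if `s` is not NUL-terminated
}

// Fixed shape: copy exactly `size` bytes; no scan, no NUL requirement.
std::string copyNew(const char * s, size_t size)
{
    return std::string(s, size);
}
```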
Fixes #7347.
* libexpr: fix builtins.split example
The example was previously indicating that multiple whitespaces would be
collapsed into a single captured whitespace. That isn't true and was
likely a mistake when being documented initially.
* Fix segfault on uninitialized list when printing value
Since lists are just chunks of memory, the individual elements in the
list might be uninitialized when a programming error happens within Nix.
In this case the values are null-initialized (at least with Boehm GC)
and we can avoid a nullptr deref when printing them (see the sketch below).
I ran into this issue while ensuring that new expression tests would
show the actual value on an assertion failure.
This is unlikely to cause any runtime performance regressions as
printing values is not really in the hot path (unless the repl is the
primary use case).
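A hedged sketch of the guard, with placeholder types (the real printer walks the evaluator's `Value` lists and prints real representations):
```
#include <iostream>
#include <string>
#include <vector>

struct Value { std::string repr; };                      // placeholder for the evaluator's Value

void printValue(std::ostream & out, const Value & v)     // placeholder printer
{
    out << v.repr;
}

void printList(std::ostream & out, const std::vector<Value *> & elems)
{
    out << "[ ";
    for (auto * elem : elems) {
        if (elem)
            printValue(out, *elem);
        else
            // the slot was never initialised: Boehm GC hands out zeroed
            // memory, so it is a null pointer rather than garbage
            out << "(nullptr)";
        out << " ";
    }
    out << "]";
}
```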
* Add operator<< for ValueTypes
* Add libexpr tests
This introduces tests for libexpr that evaluate various trivial Nix
language expressions and primop invocations; they should be good smoke
tests for whether or not the implementation is behaving as expected.
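A hedged sketch of what such a smoke test can look like, assuming a GoogleTest setup; `evalInt` is a hypothetical helper left unimplemented here, and the real fixtures in `src/libexpr/tests` differ:
```
#include <cstdint>
#include <gtest/gtest.h>

// Hypothetical helper: parse and evaluate a Nix expression with a shared
// EvalState and return the resulting integer.
int64_t evalInt(const char * expr);

TEST(TrivialExpressions, Addition)
{
    // "1 + 2" is about as trivial as an expression gets; if this fails,
    // something fundamental in the evaluator is broken.
    ASSERT_EQ(evalInt("1 + 2"), 3);
}
```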
after #6218 `Symbol` no longer confers a uniqueness invariant on the
string it wraps, it is now possible to create multiple symbols that
compare equal but whose string contents have different addresses. this
guarantee is now only provided by `SymbolIdx`, leaving `Symbol` only as
a string wrapper that knows about the intricacies of how symbols need to
be formatted for output.
this change renames `SymbolIdx` to `Symbol` to restore the previous
semantics of `Symbol` to that name. we also keep the wrapper type and
rename it to `SymbolStr` instead of returning plain strings from lookups
into the symbol table because symbols are formatted for output in many
places. theoretically we do not need `SymbolStr`, only a function that
formats a string for output as a symbol, but having to wrap every symbol
that appears in a message in e.g. `formatSymbol()` is error-prone and
inconvenient.
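a hedged sketch of the resulting split, with illustrative member names and an illustrative quoting rule:
```
#include <cstdint>
#include <string>
#include <string_view>

// After the rename: `Symbol` is the small table index and carries the
// uniqueness guarantee (equal symbols compare equal by index).
struct Symbol {
    uint32_t id = 0;
    bool operator==(const Symbol & other) const { return id == other.id; }
};

// `SymbolStr` wraps the interned string and knows how a symbol has to be
// quoted when shown to the user, so call sites don't need a formatSymbol().
struct SymbolStr {
    std::string_view s;

    std::string toFormatted() const
    {
        // illustrative rule: quote anything containing a space
        bool plain = !s.empty() && s.find(' ') == std::string_view::npos;
        return plain ? std::string(s) : '"' + std::string(s) + '"';
    }
};
```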
this slightly increases the amount of memory used for any given symbol, but this
increase is more than made up for if the symbol is referenced more than once in
the EvalState that holds it. on average every symbol should be referenced at
least twice (once to introduce a binding, once to use it), so we expect no
increase in memory on average.
symbol tables are limited to 2³² entries like position tables, and similar
arguments apply to why overflow is not likely: 2³² symbols would require as many
string instances (at 24 bytes each) and map entries (at 24 bytes or more each,
assuming that the map holds on average at most one item per bucket as the docs
say). a full symbol table would require at least 192GB of memory just for
symbols, which is well out of reach. (an ofborg eval of nixpkgs today creates
less than a million symbols!)
PosTable deduplicates origin information, so using symbols for paths is no
longer necessary. moving away from path Symbols also reduces the usage of
symbols for things that are not keys in attribute sets, which will become
important in the future when we turn symbols into indices as well.
Pos objects are somewhat wasteful as they duplicate the origin file name and
input type for each object. on files that produce more than one Pos when parsed
this is a sizeable waste of memory (one pointer per Pos). the same goes for
ptr<Pos> on 64 bit machines: parsing enough source to require 8 bytes to locate
a position would need at least 8GB of input and 64GB of expression memory. it's
not likely that we'll hit that any time soon, so we can use a uint32_t index to
locate positions instead.
when we introduce position and symbol tables we'll need to do lookups to turn
indices into those tables into actual positions/symbols. having the error
functions as members of EvalState will avoid a lot of churn for adding lookups
into the tables for each caller.
the only use of this function is to determine whether a lambda has a non-set
formal, but this use is arguably better served by Symbol::set and using a
non-Symbol instead of an empty symbol in the parser when no such formal is present.