Thunks are now overwritten by a helper function
`Value::finishValue(newType, payload)` (where `payload` is the
original anonymous union inside `Value`). This helps to ensure we
never update a value elsewhere, since that would be incompatible with
parallel evaluation (i.e. after a value has transitioned from being a
thunk to being a non-thunk, it should be immutable).
There were two places where this happened: `Value::mkString()` and
`ExprAttrs::eval()`.
This PR also adds a bunch of accessor functions for value contents,
like `Value::integer()` to access the integer field in the union.
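To make the pattern concrete, here is a small self-contained analog with hypothetical names; it is not Nix's actual `Value`, whose union, `finishValue()` and accessors live in `value.hh`:

```
#include <cassert>
#include <cstdint>

enum MiniType { tThunk, tInt, tBool };

struct MiniValue {
    union Payload { int64_t integer; bool boolean; };

    // The only way to turn a thunk into a finished value: write the payload,
    // then publish the final type, exactly once.
    void finishValue(MiniType newType, Payload p)
    {
        assert(type_ == tThunk);
        payload_ = p;
        type_ = newType;
    }

    // Accessors instead of reaching into the union directly.
    MiniType type() const { return type_; }
    int64_t integer() const { assert(type_ == tInt); return payload_.integer; }

private:
    MiniType type_ = tThunk;
    Payload payload_{};
};
```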
I realized it was previously checking the NAR hashes of the added objects,
which makes little sense --- we don't really care about ancillary NAR hashes.
Now the `nix store add` tests at the bottom compare the git-hash CA field
against hashes calculated by Git itself, matching the `nix hash path` tests
at the top in using Git as the source of truth.
Previously, `state.mkList()` would set the type of the value to tList
and allocate the list vector, but it would not initialize the values
in the list. This has two problems:
* If an exception occurs, the list is left in an undefined state.
* More importantly, for multithreaded evaluation, if a value
transitions from thunk to non-thunk, it should be final (i.e. other
threads should be able to access the value safely).
To address this, there is now a `ListBuilder` class (analogous to
`BindingsBuilder`) to build the list vector prior to the call to
`Value::mkList()`. Typical usage:
    auto list = state.buildList(size);
    for (auto & v : list)
        v = ... set value ...;
    vRes.mkList(list);
* Add regression test
* Fix 'no repo' test so it doesn't succeed if the data is still in cache
* Use git_revparse_single inside git-utils instead of reimplementing the same logic.
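For reference, a hedged sketch of what resolving a revspec through libgit2 looks like (assuming an already-opened `git_repository * repo`; the actual call site in git-utils differs in error handling):

```
#include <git2.h>
#include <optional>
#include <string>

// Resolve a revspec (e.g. "HEAD" or a branch name) to an object id via
// libgit2 instead of reimplementing ref/rev parsing by hand.
std::optional<std::string> resolveRev(git_repository * repo, const char * spec)
{
    git_object * obj = nullptr;
    if (git_revparse_single(&obj, repo, spec) != 0)
        return std::nullopt;
    char hex[GIT_OID_HEXSZ + 1];
    git_oid_tostr(hex, sizeof hex, git_object_id(obj));
    git_object_free(obj);
    return std::string(hex);
}
```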
Currently there isn't a convenient way to check for multiline output. In
addition, these outputs will easily change, and having a diff between the
expected and the actual output upon failure is convenient.
we now keep not a table of all positions, but a table of all origins and
their sizes. position indices are now direct pointers into the virtual
concatenation of all parsed contents. this slightly reduces memory usage
and time spent in the parser, at the cost of not being able to report
positions if the total input size exceeds 4GiB. this limit is not unique
to nix though, rustc and clang also limit their input to 4GiB (although
clang at least refuses to process larger inputs, whereas we will not).
this new 4GiB limit probably will not cause any problems for quite a
while, all of nixpkgs together is less than 100MiB in size and already
needs over 700MiB of memory and multiple seconds just to parse. 4GiB
worth of input will easily take multiple minutes and over 30GiB of
memory without even evaluating anything. if problems *do* arise we can
probably recover the old table-based system by adding some tracking to
Pos::Origin (or increasing the size of PosIdx outright), but for the time
being this looks like more complexity than it's worth.
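as a rough sketch of the lookup this implies (illustrative names only, not the actual PosTable code), each origin records its start offset in the virtual concatenation, and resolving a position is a search plus a subtraction:

```
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <string>
#include <utility>
#include <vector>

struct OriginEntry {
    std::string name;   // e.g. the file path
    uint32_t offset;    // start of this origin in the virtual concatenation
    uint32_t size;      // size of its contents
};

// Map a 32-bit position index back to (origin, byte offset within origin).
std::pair<const OriginEntry *, uint32_t>
resolve(const std::vector<OriginEntry> & origins, uint32_t posIdx)
{
    // origins are sorted by offset; find the last one starting at or before posIdx
    auto it = std::upper_bound(
        origins.begin(), origins.end(), posIdx,
        [](uint32_t p, const OriginEntry & o) { return p < o.offset; });
    assert(it != origins.begin());
    --it;
    assert(posIdx - it->offset < it->size);
    return {&*it, posIdx - it->offset};
}
```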
since we now need to read the entire input again to determine the
line/column of a position we'll make unsafeGetAttrPos slightly lazy:
the set it returns is mostly only used to determine the file of origin
of an attribute, not its exact location. the thunks do not add
measurable runtime overhead.
notably this change is necessary to allow changing the parser since
apparently nothing supports nix's very idiosyncratic line ending choice
of "anything goes", making it very hard to calculate line/column
positions in the parser (while byte offsets are very easy).
this needs a string comparison because there seems to be no other way to
get that information out of bison. usually the location info is going to
be correct (pointing at a bad token), but since EOF isn't a token as
such, it'll be wrong in this case.
this hasn't shown up much so far because a single line ending *is* a
token, so any file formatted in the usual manner (ie, ending in a line
ending) would have its EOF position reported correctly.
the parser treats a plain \r as a newline; error reports do not. this
can lead to interesting divergences if anything makes use of this
feature, with error reports pointing to wrong locations in the input (or
even outside the input altogether).
previously we reported the error at the beginning of the binding
block (for plain inherits) or the beginning of the attr list (for
inherit-from), effectively hiding where exactly the error happened.
this also carries over to runtime positions of attributes in sets as
reported by unsafeGetAttrPos. we're not worried about this changing
observable eval behavior because it *is* marked unsafe, and the new
behavior is much more useful.
we already normalize attr order to lexicographic; doing the same for
formals makes sense. doubly so because the order of formals would
otherwise depend on the context of the expression, which is not quite as
useful as one might expect.
the parser modifies its inputs, which means that sharing them between
the error context reporting system and the parser itself can confuse the
reporting system. usually this led to early truncation of error context
reports which, while not dangerous, can be quite confusing.
When reviewing old PRs, I found that #9997 adds some code to ensure one
particular assert is always present. But removing asserts isn't
something we do in our own release builds, either in the flake here or in
nixpkgs, and it is plainly a bad idea that increases the support burden,
especially if other distros make bad choices of build flags in their Nix
packaging.
For context, the assert macro in the C standard is defined to do nothing
if NDEBUG is set.
There is no way in our build system to set -DNDEBUG without manually
adding it to CFLAGS, so this is simply a configuration we do not use.
Let's ban it at compile time.
I put this preprocessor directive in src/libutil.cc because it is not
obvious where else to put it, and it seems like the most logical file
since you are not getting a usable nix without it.
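The guard itself is just a preprocessor check along these lines (the exact wording in the tree may differ):

```
// Refuse to compile at all if assertions have been disabled.
#ifdef NDEBUG
#  error "Nix may not be built with assertions disabled (i.e. with -DNDEBUG)."
#endif
```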
Directly fail if a flakeref points to something that isn't a directory
instead of falling back to the logic of trying to look up the hierarchy
to find a valid flake root.
Fix https://github.com/NixOS/nix/issues/9868
Part of RFC 133
Extracted from our old IPFS branches.
Co-Authored-By: Matthew Bauer <mjbauer95@gmail.com>
Co-Authored-By: Carlo Nucera <carlo.nucera@protonmail.com>
Co-authored-by: Robert Hensing <roberth@users.noreply.github.com>
Co-authored-by: Florian Klink <flokli@flokli.de>
desugaring inherit-from to syntactic duplication of the source expr also
duplicates side effects of the source expr (such as trace calls) and
expensive computations (such as derivationStrict).
When a file conflict arises during a package install, a suggestion is
made to remove the old entry. This was previously done using the
installable URLs of the old entry. These URLs are quite verbose and
often do not equal the URL of the existing entry.
This change uses the recently introduced profile entry name for the
suggestion, resulting in a simpler output.
The improvement is easily seen in the change to the functional test.
- `nix store add` supports text hashing
  With a functional test ensuring it matches `builtins.toFile`.
- Factored-out flags for both commands
- Move all common reusable flags to `libcmd`
  - They are not part of the *definition* of the CLI infra, just a usage
    of it.
  - The `libstore` flag couldn't go in `args.hh` in libutil anyways; it
    would be awkward for it to live alone.
- Shuffle around `Cmd*` hierarchy so flags for deprecated commands don't
end up on the new ones
It's better to just check whether the input has all the attributes
needed to consider itself locked (e.g. whether a Git input has a
'rev' attribute).
Also, the 'locked' field was actually incorrect for Git inputs: it
would be set to true even for dirty worktrees. As a result, we got
away with using fetchTree() internally even though fetchTree()
requires a locked input in pure mode. In particular, this allowed
'--override-input' to work by accident.
The fix is to pass a set of "overrides" to call-flake.nix for all the
unlocked inputs (i.e. the top-level flake and any --override-inputs).
This fixes warnings like
    warning: Ignoring setting 'auto-allocate-uids' because experimental feature 'auto-allocate-uids' is not enabled
    warning: Ignoring setting 'impure-env' because experimental feature 'configurable-impure-env' is not enabled
when using the daemon and the user didn't actually set those settings.
Note: this also hides those settings from `nix config show`, but that
seems a good thing.
`canonPath` and `absPath` work on native paths, and so should switch
between supporting Unix paths and Windows paths accordingly.
The templating is because `CanonPath`, which shares the implementation,
should always be Unix style. It is the pure "nix-native" path type for
virtual file operations --- it is part of Nix's "business logic", and
should not vary with the host OS.
The core `CanonPath` constructors were using `absPath`, but `absPath` in
some situations does IO which is not appropriate. It turns out that
these constructors avoided those situations, and thus were pure, but it
was far from obvious this was the case.
To remedy the situation, abstract the core algorithm from `canonPath` to
use separately in `CanonPath` without any IO. Now we know by construction
that those constructors are pure.
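To illustrate the kind of IO-free core this refers to, here is a hedged sketch of purely lexical canonicalization of an absolute path; the real `canonPath`/`CanonPath` code differs in details (e.g. optional symlink resolution, which is exactly the part that needs IO):

```
#include <string>
#include <string_view>

// Collapse repeated slashes, drop ".", and resolve ".." against what has
// already been built. Assumes an absolute input path; never touches the
// filesystem.
std::string canonicalizeLexically(std::string_view path)
{
    std::string out;
    size_t i = 0;
    while (i < path.size()) {
        while (i < path.size() && path[i] == '/') ++i;      // skip slashes
        size_t start = i;
        while (i < path.size() && path[i] != '/') ++i;
        std::string_view comp = path.substr(start, i - start);
        if (comp.empty() || comp == ".") continue;           // "" and "." are no-ops
        if (comp == "..") {
            auto slash = out.rfind('/');                      // pop the last component, if any
            if (slash != std::string::npos) out.erase(slash);
            continue;
        }
        out += '/';
        out += comp;
    }
    return out.empty() ? "/" : out;
}
```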
That leaves `CanonPath::fromCWD` as the only operation which uses IO /
is impure. Add docs on it, and `CanonPath` as a whole, explaining the
situation.
This is also necessary to support Windows paths on windows without
messing up `CanonPath`. But, I think it is good even without that.
Co-authored-by: Eelco Dolstra <edolstra@gmail.com>
Co-authored-by: Robert Hensing <roberth@users.noreply.github.com>
Commit 83c067c0fa changed `builtins.pathExists`
to resolve symlinks before checking for existence. Consequently, if the path
itself referred to a symlink, the existence of the symlink's target (rather than
the symlink itself) was checked. Restore the previous behavior by skipping
symlink resolution for the last component.
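The restored behavior corresponds to checking the final component without following it, roughly like this sketch (not the actual primop code):

```
#include <sys/stat.h>

// Existence check that does not follow a symlink in the last component:
// parent directories are still resolved, and a dangling symlink counts
// as existing.
bool pathExistsNoFollow(const char * path)
{
    struct stat st;
    return lstat(path, &st) == 0;
}
```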
No outward facing behavior is changed.
Older methods with the same names that operate on a method + algo pair (for
old-style `<method>:algo`) are renamed to `*WithAlgo`.
The functions are unit-tested in the same way the names for the hash
algorithms are tested.
for plain inherits this is really just a stylistic choice, but for
inherit-from it actually fixes an exponential size increase problem
during expr printing (as may happen during assertion failure reporting,
or during duplicate attr detection in the parser).
this also has the effect of sorting let bindings lexicographically
rather than by symbol creation order as was previously done, giving a
better canonicalization in the process.
When I started contributing to Nix, I found the mix of definitions and
names in `fmt.hh` to be rather confusing, especially the small
difference between `hintfmt` and `hintformat`. I've renamed many classes
and added documentation to most definitions.
- `formatHelper` is no longer exported.
- `fmt`'s documentation is now with `fmt` rather than (misleadingly)
above `formatHelper`.
- `yellowtxt` is renamed to `Magenta`.
`yellowtxt` wraps its value with `ANSI_WARNING`, but `ANSI_WARNING`
has been equal to `ANSI_MAGENTA` for a long time. Now the name is
updated.
- `normaltxt` is renamed to `Uncolored`.
- `hintfmt` has been merged into `hintformat` as extra constructor
functions.
- `hintformat` has been renamed to `hintfmt`.
- The single-argument `hintformat(std::string)` constructor has been
renamed to a static member `hintformat::interpolate` to avoid pitfalls
with using user-generated strings as format strings.
Pretty-print values in the REPL by printing each item in a list or
attrset on a separate line. When possible, single-item lists and
attrsets are printed on one line, as long as they don't contain a nested
list, attrset, or thunk.
Before:
```
{ attrs = { a = { b = { c = { }; }; }; }; list = [ 1 ]; list' = [ 1 2 3 ]; }
```
After:
```
{
  attrs = {
    a = {
      b = {
        c = { };
      };
    };
  };
  list = [ 1 ];
  list' = [
    1
    2
    3
  ];
}
```
Some tools which consume the "nix print-dev-env" rc script (such as
"nix-direnv") are sensitive to the use of unbound variables. They use
"set -u".
The "nix print-dev-env" rc script initially unsets "shellHook", then
loads variables from the derivation, and then evaluates "shellHook".
However, most derivations don't have a "shellHook" attribute.
So users get the error "shellHook: unbound variable". This can be
demonstrated with the command:
    nix print-dev-env nixpkgs#hello | bash -u
This commit changes the rc script to provide an empty fallback value
for the "shellHook" variable.
Closes: #7951, #8253
While preparing PRs like #9753, I've had to change error messages in
dozens of code paths. It would be nice if instead of
    EvalError("expected 'boolean' but found '%1%'", showType(v))
we could write
    TypeError(v, "boolean")
or similar. Then, changing the error message could be a mechanical
refactor with the compiler pointing out places the constructor needs to
be changed, rather than the error-prone process of grepping through the
codebase. Structured errors would also help prevent the "same" error
from having multiple slightly different messages, and could be a first
step towards error codes / an error index.
This PR reworks the exception infrastructure in `libexpr` to
support exception types with different constructor signatures than
`BaseError`. Actually refactoring the exceptions to use structured data
will come in a future PR (this one is big enough already, as it has to
touch every exception in `libexpr`).
The core design is in `eval-error.hh`. Generally, errors like this:
    state.error("'%s' is not a string", getAttrPathStr())
        .debugThrow<TypeError>()
are transformed like this:
    state.error<TypeError>("'%s' is not a string", getAttrPathStr())
        .debugThrow()
The type annotation has moved from `ErrorBuilder::debugThrow` to
`EvalState::error`.
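As a self-contained analog of that move (illustrative types only, not the actual `eval-error.hh` classes), the exception type is now fixed when the builder is created, so `debugThrow()` no longer needs a template argument:

```
#include <stdexcept>
#include <string>
#include <utility>

template<class E>
struct ErrorBuilder {
    std::string msg;
    [[noreturn]] void debugThrow() { throw E(msg); }   // type already known here
};

struct EvalState {
    // The error class is a template parameter of error(), mirroring the new
    // EvalState::error<TypeError>(...) call pattern described above.
    template<class E, typename... Args>
    ErrorBuilder<E> error(std::string msg, Args &&...) {
        // the real code formats the arguments into the message; elided here
        return ErrorBuilder<E>{std::move(msg)};
    }
};

// usage sketch:
//   state.error<std::runtime_error>("'%s' is not a string", attrPath).debugThrow();
```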