This is the core functionality, but it is only unit-tested and not yet
made part of the store layer. This is because there is some tech debt
that needs to be cleaned up first around (a) repeated boilerplate for
hashing objects and (b) better integration of the new `SourceAccessor`
type.
Part of RFC 133
Co-Authored-By: Matthew Bauer <mjbauer95@gmail.com>
Co-Authored-By: Carlo Nucera <carlo.nucera@protonmail.com>
Co-authored-by: Robert Hensing <roberth@users.noreply.github.com>
Co-authored-by: Florian Klink <flokli@flokli.de>
As discussed in our last meeting, we need a bit more time, but we are
"time boxing" the work left to do to ensure there is not unbounded
delay.
Rather than putting it back underneath `flakes`, though, put it
underneath its own `fetch-tree` experimental feature (which `flakes`
includes/implies). This signals our commitment to the plan to stabilize
it first without waiting to go through the rest of Flakes, and also will
give users a "release candidate" when we get closer to stabilization.
This reverts commit 4112dd1fc9.
Enables shebang usage of `nix shell`. All arguments on `#! nix` lines get
added to the nix invocation. This implementation does NOT set any
additional arguments; it only places the script path itself as the
first argument so that the interpreter can make use of it.
Example below:
```
#!/usr/bin/env nix
#! nix shell --quiet
#! nix nixpkgs#bash
#! nix nixpkgs#shellcheck
#! nix nixpkgs#hello
#! nix --ignore-environment --command bash
# shellcheck shell=bash
set -eu
shellcheck "$0" || exit 1
function main {
  hello
  echo 0:"$0" 1:"$1" 2:"$2"
}
"$@"
```
fix: include programName usage
EDIT: For posterity I've changed shellwords to shellwords2 in order
not to interfere with other changes during a rebase.
shellwords2 is removed in a later commit. -- roberth
Users may select specific outputs using the `^output` syntax, or select
any output using `^*`.
URL parsing previously didn't support these kinds of output references:
parsing would fail.
`queryRegex` was reused for URL fragments, which didn't include support
for `^`. Now `queryRegex` has been split from `fragmentRegex`, and only
`fragmentRegex` supports `^`.
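For example (the flake references are illustrative), selecting outputs on
the command line looks like this:
```shell-session
$ nix build 'nixpkgs#openssl^bin,dev'   # just the bin and dev outputs
$ nix build 'nixpkgs#openssl^*'         # every output
```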
`Store::pathInfoToJSON` was a rather baroque function, being full of
parameters to support both parsed derivations and `nix path-info`. The
common core of each, a simple `UnkeyedValidPathInfo::toJSON` function, is
factored out, but the rest of the logic is just duplicated and then
specialized to its use-case (at which point it is no longer that
duplicated).
This keeps the human-oriented CLI logic (which is currently unstable)
and the core domain logic (export reference graphs with structured
attrs, which is stable) separate, which I think is better.
All OS and IO operations should be moved out, leaving only some misc
portable pure functions.
This is useful to avoid copious CPP when doing things like Windows and
Emscripten ports.
Newly exposed functions to break cycles:
- `restoreSignals`
- `updateWindowSize`
The new `MemorySourceAccessor`, rather than being a slightly lossy flat
map, is a complete in-memory model of file system objects.
Co-authored-by: Eelco Dolstra <edolstra@gmail.com>
This implements the git input attributes `verifyCommit`, `keytype`,
`publicKey` and `publicKeys` as the experimental feature
`verified-fetches`. `publicKeys` should be a JSON string.
This representation was chosen because all attributes must be of type
bool, int or string so they can be included in flake URIs (see the
definition of `fetchers::Attr`).
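A sketch of how these attributes might be exercised; the URL, ref, and key
material are placeholders, and routing them through `builtins.fetchGit` is
an assumption on my part, not something stated in this change:
```shell-session
$ nix eval --impure \
    --extra-experimental-features verified-fetches \
    --expr 'builtins.fetchGit {
      url = "ssh://git@example.org/owner/repo.git";
      ref = "main";
      verifyCommit = true;
      keytype = "ssh-ed25519";
      publicKey = "AAAAC3NzaC1lZDI1NTE5AAAA...";  # placeholder public key
    }'
```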
Deduplicating code and, moreover, enforcing the pattern means:
- It is easier to write new characterization tests because there is less
  boilerplate.
- It is harder to mess up new tests because there are fewer places to
  make mistakes.
Co-authored-by: Jacek Galowicz <jacek@galowicz.de>
I wouldn't call it *good* yet, but this will do for now.
- `RetrieveRegularNARSink` renamed to `RegularFileSink` and moved
accordingly because it actually has nothing to do with NARs in
particular.
- its `fd` field is also marked private
- `copyRecursive` introduced to dump a `SourceAccessor` into a
`ParseSink`.
- `NullParseSink` made so `ParseSink` no longer has sketchy default
methods.
This was done while updating #8918 to work with the new
`SourceAccessor`.
Adding the inputPath as a positional argument uncovered this bug.
While positional argument forms were discarded from the `expectedArgs`
list, their closures were not. When the `.completer` closure was then
called, part of the surrounding object did not exist anymore.
This didn't cause an issue before, but with the new call to
`getEvalState()` in the "inputs" completer in nix/flake.cc, a segfault
was triggered reproducibly on invalid memory access to the `this`
pointer, which was always 0.
The solution of splicing the argument forms into a new list to extend
their lifetime is a bit of a hack, but I was unable to get the "nicer"
iterator-based solution to work.
As I complained in
https://github.com/NixOS/nix/pull/6784#issuecomment-1421777030 (a
comment on the wrong PR, sorry again!), #6693 introduced a second
completions mechanism to fix a bug. Having two completion mechanisms
isn't so nice.
As @thufschmitt also pointed out, it was a bummer to go from `FlakeRef`
to `std::string` when collecting flake refs. Now it is `FlakeRefs`
again.
The underlying issue it sought to work around was that completion of
arguments not at the end can still benefit from the information in
later arguments.
To fix this better, we rip out that change and simply defer all
completion processing until after all the (regular, already-complete)
arguments have been passed.
In addition, I noticed the original completion logic used some global
variables. I do not like global variables, because even if they save
lines of code, they also obfuscate the architecture of the code.
I got rid of them by moving them to a new `RootArgs` class, which now
has `parseCmdline` instead of `Args`. The idea is that we have many
argument parsers from subcommands and what-not, but only one root args
that owns the others per actual parsing invocation. The state that was
global is now part of the root args instead.
This did, admittedly, add a bunch of new code. And I do feel bad about
that. So I went and added a lot of API docs to try to at least make the
current state of things clear to the next person.
--
This is needed for RFC 134 (tracking issue #7868). It was very hard to
modularize `Installable` parsing when there were two completion
arguments. I wouldn't go as far as to say it is *easy* now, but at least
it is less hard (and the completions test finally passed).
Co-authored-by: Valentin Gagarin <valentin.gagarin@tweag.io>
hashBase is ambiguous, since it's not about numerical bases, but about
the format of hashes. Base16, Base32 and Base64 are all character maps
for binary encoding.
Rename the enum `Base` to `HashFormat`.
Rename variables of type `HashFormat` from [hash]Base to hashFormat,
including `CmdHashBase::hashFormat` and `CmdToBase::hashFormat`.
Two changes:
* The (probably unintentional) hack to handle paths as tarballs has
been removed. This is almost certainly not what users expect and is
inconsistent with flakeref handling everywhere else.
* The hack to support scp-style Git URLs has been moved to the Git
fetcher, so it's now supported not just by fetchTree but by flake
inputs.
Add a new experimental `impure-env` setting that is a key-value list of
environment variables to inject into FOD derivations that specify the
corresponding `impureEnvVars`.
This allows clients to make use of this feature (without having to change the
environment of the daemon itself) and might eventually let us deprecate the
current behaviour (picking up whatever is in the environment of the daemon),
as it's more principled and might prevent information leakage.
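A minimal sketch of client-side usage. The flake output name and variable
values are placeholders, and the `--option` spelling plus the
`configurable-impure-env` feature name are assumptions rather than things
stated in this change; the derivation still has to list the names in its
`impureEnvVars`:
```shell-session
$ nix build .#my-fixed-output-fetcher \
    --extra-experimental-features configurable-impure-env \
    --option impure-env 'AWS_ACCESS_KEY_ID=... AWS_SECRET_ACCESS_KEY=...'
```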
I think it is bad for these reasons when `tests/` contains a mix of
functional and integration tests:
- Concepts are harder to understand: the documentation makes a good
  unit vs functional vs integration distinction, but when the
  integration tests are just two subdirs within `tests/` this is not
  clear.
- Source filtering in the `flake.nix` is more complex. We need to
filter out some of the dirs from `tests/`, rather than simply pick
the dirs we want and take all of them. This is a good sign the
structure of what we are trying to do is not matching the structure
of the files.
With this change we have a clean:
```shell-session
$ git show 'HEAD:tests'
tree HEAD:tests
functional/
installer/
nixos/
```
While `nix` has always been respectful towards requests for `NO_COLOR=1`, this change represents a new stage of maturity for `nix` - making it also respect requests for `NOCOLOR=1`.
This ideally makes the tool more accessible to folks like me, who are exhausted by guessing whether `NO_COLOR` or `NOCOLOR` is the right environment variable to set.
<3
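For instance (hypothetical invocations):
```shell-session
$ NO_COLOR=1 nix build nixpkgs#hello   # already respected
$ NOCOLOR=1  nix build nixpkgs#hello   # now also respected
```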
Behavior change:
Before, we only showed options if the command-specific options were
non-empty. But that is somewhat odd since we also show common options.
Now, we do everything based on the union of both sorts of options (with
hidden-categories filtered, as before).
Implementation change:
The JSON dumping once again includes all options; the filtering of
hidden categories is done in the Nix code instead. This is better
separation of "content" vs "presentation", and prepares the way for the
HTML manual vs manpages / `--help` doing different things.
We will soon add a new implementation so the one for NARs in `archive.cc`
isn't the only one.
Co-Authored-By: Matthew Bauer <mjbauer95@gmail.com>
Co-Authored-By: Carlo Nucera <carlo.nucera@protonmail.com>
Support using nix flakes in paths with spaces or arbitrary Unicode characters.
This introduces the convention that the path part of the URL should be
percent-encoded when dealing with `path:` URLs and not when using
filepaths (following the convention of Firefox).
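A sketch of the resulting convention, with made-up paths:
```shell-session
$ nix flake metadata 'path:/home/alice/my%20flake'   # path: URL, percent-encoded
$ nix flake metadata '/home/alice/my flake'          # plain filepath, no encoding
```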
Co-authored-by: Rendal <rasmus@rend.al>
We use the same nested map representation we used for goals, again in
order to save space. We might someday want to combine with `inputDrvs`,
by doing `V = bool` instead of `V = std::set<OutputName>`, but we are
not doing that yet for the sake of a smaller diff.
The ATerm format for Derivations also needs to be extended, in addition
to the in-memory format. To accommodate this, we added a new basic
versioning scheme, so old versions of Nix will get nice errors. (And
going forward, if the ATerm format changes again the errors will be even
better.)
`parseStrings`, an internal function used as part of parsing
derivations in A-Term format, used to consume the final `]` but expect
the initial `[` to already be consumed. This made for what looked like
unbalanced brackets at call sites, which was confusing. Now it consumes
both, which is hopefully less confusing.
As part of testing, we also created a unit test for the A-Term format for
regular non-experimental derivations too.
Co-authored-by: Robert Hensing <roberth@users.noreply.github.com>
Co-authored-by: Valentin Gagarin <valentin.gagarin@tweag.io>
Apply suggestions from code review
Co-authored-by: Robert Hensing <roberth@users.noreply.github.com>
this removes a lot of noise from the web search, which precludes finding
the actual documentation.
some configuration settings have enough documentation to warrant
individual pages, so the alternative of including full setting
documentation in each command page doesn't make much sense here.
this change technically means that the command line flags to override
settings are "invisible", and not exported as JSON. this may or may not
be desirable. a more explicit approach would be adding a `hidden` field
to the flag's JSON output, but would also require adjusting
post-processing of that JSON for manual rendering.
Solves 1/3 of the infinite recursion at unknown location meme.
See #8879 for ensuring we always have a trace (for stack overflows)
We might want to re-add this for finding missing location info
*while hacking on that problem only*.
Types converted:
- `NixStringContextElem`
- `OutputsSpec`
- `ExtendedOutputsSpec`
- `DerivationOutput`
- `DerivationType`
Existing ones mostly conforming to the pattern were cleaned up:
- `ContentAddressMethod`
- `ContentAddressWithReferences`
The `DerivationGoal::derivationType` field had a bogus initialization,
now caught, so I made it `std::optional`. I think #8829 can make it
non-optional again because it will ensure we always have the derivation
when we construct a `DerivationGoal`.
See that issue (#7479) for details on the general goal.
`git grep 'Raw::Raw'` indicates the two types I didn't yet convert:
`DerivedPath` and `BuiltPath` (and their `Single` variants). This is
because @roberth and I (can't find the issue right now...) plan on
reworking them somewhat, so I didn't want to churn them more just yet.
Co-authored-by: Eelco Dolstra <edolstra@gmail.com>
If you have a URL that needs to be percent-encoded, such as
`http://localhost:8181/test/+3d.tar.gz`, and try to lock that in a Nix
flake such as the following:
    {
      inputs.test = { url = "http://localhost:8181/test/+3d.tar.gz"; flake = false; };
      outputs = { test, ... }: {
        t = builtins.readFile test;
      };
    }
running `nix flake metadata` shows that the input URL has been
incorrectly double-encoded (despite the flake.lock being correctly
encoded only once):
    [...snip...]
    Inputs:
    └───test: http://localhost:8181/test/%252B3d.tar.gz?narHash=sha256-EFUdrtf6Rn0LWIJufrmg8q99aT3jGfLvd1//zaJEufY%3D
(Notice the `%252B`? That's just `%2B` but percent-encoded again)
With this patch, the double-encoding is gone; running `nix flake
metadata` will show the proper URL:
    [...snip...]
    Inputs:
    └───test: http://localhost:8181/test/%2B3d.tar.gz?narHash=sha256-EFUdrtf6Rn0LWIJufrmg8q99aT3jGfLvd1//zaJEufY%3D
---
As far as I can tell, this happens because Nix already percent-encodes
the URL and stores this as the value of `inputs.asdf.url`.
However, when Nix later tries to read this out of the eval state as a
string (via `getStrAttr`), it has to run it through `parseURL` again to
get the `ParsedURL` structure.
Now, this itself isn't a problem -- the true problem arises when using
`ParsedURL::to_string` later, which then _re-escapes the path_. It is
at this point that what would have been `%2B` (`+`) becomes `%252B`
(`%2B`).