Enables shebang usage of nix shell. All arguments on `#! nix` lines get
added to the nix invocation. This implementation does NOT set any
additional arguments other than placing the script path itself as the
first argument, so that the interpreter can make use of it.
Example below:
```
#!/usr/bin/env nix
#! nix shell --quiet
#! nix nixpkgs#bash
#! nix nixpkgs#shellcheck
#! nix nixpkgs#hello
#! nix --ignore-environment --command bash
# shellcheck shell=bash
set -eu
shellcheck "$0" || exit 1
function main {
  hello
  echo 0:"$0" 1:"$1" 2:"$2"
}
"$@"
```
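A hedged usage sketch (the file name and arguments are illustrative, and the output is approximate; in particular, the value of `$0` depends on how nix re-invokes the script). Because the script ends in `"$@"`, the first argument names the function to run and the remaining arguments are passed along:
```shell-session
$ chmod +x example.sh
$ ./example.sh main foo bar
Hello, world!
0:./example.sh 1:foo 2:bar
```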
fix: include programName usage
EDIT: For posterity I've changed shellwords to shellwords2 in order
not to interfere with other changes during a rebase.
shellwords2 is removed in a later commit. -- roberth
Users may select specific outputs using the ^output syntax, or select
any output using ^*.
URL parsing currently doesn't support these kinds of output references:
parsing will fail.
Previously, `queryRegex` was reused for URL fragments, which didn't
include support for ^. Now `queryRegex` has been split from
`fragmentRegex`, and only `fragmentRegex` supports ^.
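For example (a sketch; the installable and output names are just illustrative), the split should allow output references like these to parse in flake-ref fragments:
```shell-session
$ nix build 'nixpkgs#openssl^bin,dev'
$ nix build 'nixpkgs#openssl^*'
```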
`Store::pathInfoToJSON` was a rather baroque function, full of
parameters to support both parsed derivations and `nix path-info`. The
common core of each, a simple `UnkeyedValidPathInfo::toJSON` function,
is factored out, but the rest of the logic is just duplicated and then
specialized to its use-case (at which point it is no longer that
duplicated).
This keeps the human-oriented CLI logic (which is currently unstable)
separate from the core domain logic (exporting reference graphs with
structured attrs, which is stable), which I think is better.
All OS and IO operations should be moved out, leaving only some misc
portable pure functions.
This is useful to avoid copious CPP when doing things like Windows and
Emscripten ports.
Newly exposed functions to break cycles:
- `restoreSignals`
- `updateWindowSize`
The new `MemorySourceAccessor`, rather than being a slightly lossy flat
map, is a complete in-memory model of file system objects.
Co-authored-by: Eelco Dolstra <edolstra@gmail.com>
This implements the git input attributes `verifyCommit`, `keytype`,
`publicKey` and `publicKeys` as the experimental feature
`verified-fetches`. `publicKeys` should be a JSON string.
This representation was chosen because all attributes must be of type
bool, int or string, so that they can be included in flake URIs (see the
definition of fetchers::Attr).
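A hedged sketch of what this might look like in a flake URI (the host and key are made up, and the exact query-parameter spelling is an assumption based on the attribute names above):
```shell-session
$ nix flake metadata --extra-experimental-features verified-fetches \
    'git+ssh://git@example.org/owner/repo?verifyCommit=1&keytype=ssh-ed25519&publicKey=AAAAC3NzaC1lZDI1NTE5AAAAIFakeKeyForIllustrationOnly'
```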
Deduplicating the code, and moreover enforcing the pattern, means:
- It is easier to write new characterization tests because there is less boilerplate.
- It is harder to mess up new tests because there are fewer places to
make mistakes.
Co-authored-by: Jacek Galowicz <jacek@galowicz.de>
I wouldn't call it *good* yet, but this will do for now.
- `RetrieveRegularNARSink` renamed to `RegularFileSink` and moved
accordingly because it actually has nothing to do with NARs in
particular.
- its `fd` field is also marked private
- `copyRecursive` introduced to dump a `SourceAccessor` into a
`ParseSink`.
- `NullParseSink` made so `ParseSink` no longer has sketchy default
methods.
This was done while updating #8918 to work with the new
`SourceAccessor`.
Adding the inputPath as a positional argument uncovered this bug.
As positional argument forms were discarded from the `expectedArgs`
list, their closures were not. When the `.completer` closure was then
called, part of the surrounding object did not exist anymore.
This didn't cause an issue before, but with the new call to
`getEvalState()` in the "inputs" completer in nix/flake.cc, a segfault
was triggered reproducibly on invalid memory access to the `this`
pointer, which was always 0.
The solution of splicing the argument forms into a new list to extend
their lifetime is a bit of a hack, but I was unable to get the "nicer"
iterator-based solution to work.
As I complained in
https://github.com/NixOS/nix/pull/6784#issuecomment-1421777030 (a
comment on the wrong PR, sorry again!), #6693 introduced a second
completions mechanism to fix a bug. Having two completion mechanisms
isn't so nice.
As @thufschmitt also pointed out, it was a bummer to go from `FlakeRef`
to `std::string` when collecting flake refs. Now it is `FlakeRefs`
again.
The underlying issue that it sought to work around was that completion
of arguments not at the end can still benefit from the information from
later arguments.
To fix this better, we rip out that change and simply defer all
completion processing until after all the (regular, already-complete)
arguments have been passed.
In addition, I noticed the original completion logic used some global
variables. I do not like global variables, because even if they save
lines of code, they also obfuscate the architecture of the code.
I got rid of them by moving them to a new `RootArgs` class, which now
has `parseCmdline` instead of `Args`. The idea is that we have many
argument parsers from subcommands and what-not, but only one root args
object that owns the others per actual parsing invocation. The state
that was global is
now part of the root args instead.
This did, admittedly, add a bunch of new code. And I do feel bad about
that. So I went and added a lot of API docs to try to at least make the
current state of things clear to the next person.
--
This is needed for RFC 134 (tracking issue #7868). It was very hard to
modularize `Installable` parsing when there were two completion
arguments. I wouldn't go as far as to say it is *easy* now, but at least
it is less hard (and the completions test finally passed).
Co-authored-by: Valentin Gagarin <valentin.gagarin@tweag.io>
hashBase is ambiguous, since it's not about numerical bases, but about
the format of hashes. Base16, Base32 and Base64 are all character maps
for binary encoding.
Rename the enum Base to HashFormat.
Rename variables of type HashFormat from [hash]Base to hashFormat,
including CmdHashBase::hashFormat and CmdToBase::hashFormat.
Two changes:
* The (probably unintentional) hack to handle paths as tarballs has
been removed. This is almost certainly not what users expect and is
inconsistent with flakeref handling everywhere else.
* The hack to support scp-style Git URLs has been moved to the Git
fetcher, so it's now supported not just by fetchTree but also by flake
inputs.
Add a new experimental `impure-env` setting that is a key-value list of
environment variables to inject into FOD derivations that specify the
corresponding `impureEnvVars`.
This allows clients to make use of this feature (without having to
change the environment of the daemon itself) and might eventually let us
deprecate the current behaviour (picking up whatever is in the
environment of the daemon), as it's more principled and might prevent
information leakage.
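A hedged sketch of client-side usage (the installable and variable name are hypothetical, and the relevant experimental feature must be enabled); it assumes the fixed-output derivation declares `impureEnvVars = [ "GITHUB_TOKEN" ]`:
```shell-session
$ nix build --option impure-env 'GITHUB_TOKEN=<token>' .#fetchPrivateSource
```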
I think it is bad, for these reasons, when `tests/` contains a mix of
functional and integration tests:
- Concepts are harder to understand: the documentation makes a good
unit vs functional vs integration distinction, but when the
integration tests are just two subdirs within `tests/` this is not
clear.
- Source filtering in the `flake.nix` is more complex. We need to
filter out some of the dirs from `tests/`, rather than simply picking
the dirs we want and taking all of them. This is a good sign that the
structure of what we are trying to do is not matching the structure
of the files.
With this change we have a clean:
```shell-session
$ git show 'HEAD:tests'
tree HEAD:tests
functional/
installer/
nixos/
```
While `nix` has always been respectful towards requests for `NO_COLOR=1`, this change represents a new stage of maturity for `nix` - making it also respect requests for `NOCOLOR=1`.
This ideally makes the tool more accessible to folks like me, who are exhausted by guessing whether `NO_COLOR` or `NOCOLOR` is the right environment variable to set.
<3
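For example (the command is arbitrary), either spelling should now yield uncolored output:
```shell-session
$ NO_COLOR=1 nix search nixpkgs hello
$ NOCOLOR=1 nix search nixpkgs hello
```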
Behavior change:
Before, we only showed options if the command-specific options were
non-empty. But that is somewhat odd, since we also show common options.
Now, we do everything based on the union of both sorts of options (with
hidden-categories filtered, as before).
Implementation change:
The JSON dumping once again includes all options; the filtering of
hidden categories is done in the Nix code instead. This is a better
separation of "content" vs "presentation", and prepares the way for the
HTML manual vs manpages / `--help` doing different things.
We will soon add a new implementation, so the one for NARs in
`archive.cc` isn't the only one.
Co-Authored-By: Matthew Bauer <mjbauer95@gmail.com>
Co-Authored-By: Carlo Nucera <carlo.nucera@protonmail.com>
Support using nix flakes in paths with spaces or arbitrary unicode
characters.
This introduces the convention that the path part of the URL should be
percent-encoded when dealing with `path:` URLs, and not when using
filepaths (following the convention of Firefox).
Co-authored-by: Rendal <rasmus@rend.al>
We use the same nested map representation we used for goals, again in
order to save space. We might someday want to combine with `inputDrvs`,
by doing `V = bool` instead of `V = std::set<OutputName>`, but we are
not doing that yet for sake of a smaller diff.
The ATerm format for Derivations also needs to be extended, in addition
to the in-memory format. To accommodate this, we added a new basic
versioning scheme, so old versions of Nix will get nice errors. (And
going forward, if the ATerm format changes again the errors will be even
better.)
`parseStrings`, an internal function used as part of parsing
derivations in A-Term format, used to consume the final `]` but expect
the initial `[` to already be consumed. This made for what looked like
unbalanced brackets at call sites, which was confusing. Now it consumes
both, which is hopefully less confusing.
As part of testing, we also created a unit test for the A-Term format
for regular, non-experimental derivations.
Co-authored-by: Robert Hensing <roberth@users.noreply.github.com>
Co-authored-by: Valentin Gagarin <valentin.gagarin@tweag.io>
Apply suggestions from code review
Co-authored-by: Robert Hensing <roberth@users.noreply.github.com>
this removes a lot of noise from web search results; that noise gets in
the way of finding the actual documentation.
some configuration settings have enough documentation to warrant
individual pages, so the alternative of including full setting
documentation in each command page doesn't make much sense here.
this change technically means that the command line flags to override
settings are "invisible", and not exported as JSON. this may or may not
be desirable. a more explicit approach would be adding a `hidden` field
to the flag's JSON output, but would also require adjusting
post-processing of that JSON for manual rendering.
Solves 1/3 of the infinite recursion at unknown location meme.
See #8879 for ensuring we always have a trace (for stack overflows)
We might want to re-add this for finding missing location info
*while hacking on that problem only*.
Types converted:
- `NixStringContextElem`
- `OutputsSpec`
- `ExtendedOutputsSpec`
- `DerivationOutput`
- `DerivationType`
Existing ones that mostly conformed to the pattern were cleaned up:
- `ContentAddressMethod`
- `ContentAddressWithReferences`
The `DerivationGoal::derivationType` field had a bogus initialization,
now caught, so I made it `std::optional`. I think #8829 can make it
non-optional again because it will ensure we always have the derivation
when we construct a `DerivationGoal`.
See that issue (#7479) for details on the general goal.
`git grep 'Raw::Raw'` indicates the two types I didn't yet convert:
`DerivedPath` and `BuiltPath` (and their `Single` variants). This is
because @roberth and I (can't find the issue right now...) plan on
reworking them somewhat, so I didn't want to churn them more just yet.
Co-authored-by: Eelco Dolstra <edolstra@gmail.com>
If you have a URL that needs to be percent-encoded, such as
`http://localhost:8181/test/+3d.tar.gz`, and try to lock that in a Nix
flake such as the following:
    {
      inputs.test = { url = "http://localhost:8181/test/+3d.tar.gz"; flake = false; };
      outputs = { test, ... }: {
        t = builtins.readFile test;
      };
    }
running `nix flake metadata` shows that the input URL has been
incorrectly double-encoded (despite the flake.lock being correctly
encoded only once):
    [...snip...]
    Inputs:
    └───test: http://localhost:8181/test/%252B3d.tar.gz?narHash=sha256-EFUdrtf6Rn0LWIJufrmg8q99aT3jGfLvd1//zaJEufY%3D
(Notice the `%252B`? That's just `%2B` but percent-encoded again)
With this patch, the double-encoding is gone; running `nix flake
metadata` will show the proper URL:
    [...snip...]
    Inputs:
    └───test: http://localhost:8181/test/%2B3d.tar.gz?narHash=sha256-EFUdrtf6Rn0LWIJufrmg8q99aT3jGfLvd1//zaJEufY%3D
---
As far as I can tell, this happens because Nix already percent-encodes
the URL and stores this as the value of `inputs.asdf.url`.
However, when Nix later tries to read this out of the eval state as a
string (via `getStrAttr`), it has to run it through `parseURL` again to
get the `ParsedURL` structure.
Now, this itself isn't a problem -- the true problem arises when using
`ParsedURL::to_string` later, which then _re-escapes the path_. It is
at this point that what would have been `%2B` (`+`) becomes `%252B`
(`%2B`).
We want to be able to write down `foo.drv^bar.drv^baz`:
`foo.drv^bar.drv` is the dynamic derivation (since it is itself a
derivation output, `bar.drv` from `foo.drv`).
To that end, we create `Single{Derived,Built}Path` types, which are
very similar except instead of having multiple outputs (in a set or
map), they have a single one. This is for everything to the left of the
rightmost `^`.
`NixStringContextElem` has an analogous change, and now can reuse
`SingleDerivedPath` at the top level. In fact, if we ever get rid of
`DrvDeep`, `NixStringContextElem` could be replaced with
`SingleDerivedPath` entirely!
Important note: some JSON formats have changed.
We already can *produce* dynamic derivations, but we can't refer to them
directly. Today, we can merely express building our example at the top
imperatively over time, by building `foo.drv^bar.drv` and then, with a
second nix invocation, doing `<result-from-first>^baz`, but this is not
declarative. The ethos of Nix is being able to write down the full plan
of everything you want to do and then execute that plan with a single
command, and for that we need the new inductive form of these types.
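A hedged sketch of the single invocation this works toward (the store path is a placeholder, and the `dynamic-derivations` experimental feature is assumed):
```shell-session
$ nix build --extra-experimental-features 'nix-command dynamic-derivations' \
    '/nix/store/<hash>-foo.drv^bar.drv^baz'
```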
Co-authored-by: Robert Hensing <roberth@users.noreply.github.com>
Co-authored-by: Valentin Gagarin <valentin.gagarin@tweag.io>
When loading a derivation from a JSON, malformed input would trigger
cryptic "assertion failed" errors. Simply replacing calls to `operator []`
with calls to `.at()` was not enough, as this would cause json exceptions
to be printed verbatim.
Display nice error messages instead and give some indication where the
error happened.
*Before:*
```
$ echo 4 | nix derivation add
error: [json.exception.type_error.305] cannot use operator[] with a string argument with number
$ nix derivation show nixpkgs#hello | nix derivation add
Assertion failed: (it != m_value.object->end()), function operator[], file /nix/store/8h9pxgq1776ns6qi5arx08ifgnhmgl22-nlohmann_json-3.11.2/include/nlohmann/json.hpp, line 2135.
$ nix derivation show nixpkgs#hello | jq '.[] | .name = 5' | nix derivation add
error: [json.exception.type_error.302] type must be string, but is object
$ nix derivation show nixpkgs#hello | jq '.[] | .outputs = { out: "/nix/store/8j3f8j-hello" }' | nix derivation add
error: [json.exception.type_error.302] type must be object, but is string
```
*After:*
```
$ echo 4 | nix derivation add
error: Expected JSON of derivation to be of type 'object', but it is of type 'number'
$ nix derivation show nixpkgs#hello | nix derivation add
error: Expected JSON object to contain key 'name' but it doesn't
$ nix derivation show nixpkgs#hello | jq '.[] | .name = 5' | nix derivation add
error: Expected JSON value to be of type 'string' but it is of type 'number'
$ nix derivation show nixpkgs#hello | jq '.[] | .outputs = { out: "/nix/store/8j3f8j-hello" }' | nix derivation add
error:
… while reading key 'outputs'
error: Expected JSON value to be of type 'object' but it is of type 'string'
```
Previously it was not possible to open a local store when its database is on a read-only filesystem. Obviously a store on a read-only filesystem cannot be modified, but it would still be useful to be able to query it.
This change adds a new `read-only` setting to `LocalStore`. When set to true, Nix will skip operations that fail when the database is on a read-only filesystem (acquiring the big lock, schema migration, etc.), and the store database will be opened in immutable mode.
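For example (hedged; this assumes the setting is exposed as a store-URI query parameter, like other store settings), a store on a read-only filesystem could be queried with:
```shell-session
$ nix path-info --all --store 'local?read-only=true'
```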
Co-authored-by: Ben Radford <benradf@users.noreply.github.com>
Co-authored-by: cidkidnix <cidkidnix@protonmail.com>
Co-authored-by: Dylan Green <67574902+cidkidnix@users.noreply.github.com>
Co-authored-by: John Ericson <git@JohnEricson.me>
Co-authored-by: Valentin Gagarin <valentin.gagarin@tweag.io>
This is generally a fine practice: Putting implementations in headers
makes them harder to read and slows compilation. Unfortunately it is
necessary for templates, but we can ameliorate that by putting them in a
separate header. Only files which need to instantiate those templates
will need to include the header with the implementation; the rest can
just include the declaration.
This is now documented in the contributing guide.
Also, it just happens that these polymorphic serializers are the
protocol-agnostic ones. (The worker and serve protocols have the same
logic for these container types.) This means that by doing this general
template cleanup, we are also getting a head start on better indicating
which code is protocol-specific and which code is shared between
protocols.