Mirror of https://github.com/privatevoid-net/nix-super.git, synced 2024-11-22 22:16:16 +02:00

Merge branch 'master' into read-only-local-store

Commit abb3bb7133: 52 changed files with 851 additions and 272 deletions

@@ -11,6 +11,10 @@ assignees: ''

 <!-- describe your problem -->

+## Proposal
+
+<!-- propose a solution -->
+
 ## Checklist

 <!-- make sure this issue is not redundant or obsolete -->

@@ -22,10 +26,6 @@ assignees: ''
 [source]: https://github.com/NixOS/nix/tree/master/doc/manual/src
 [open documentation issues and pull requests]: https://github.com/NixOS/nix/labels/documentation

-## Proposal
-
-<!-- propose a solution -->
-
 ## Priorities

 Add :+1: to [issues you find important](https://github.com/NixOS/nix/issues?q=is%3Aissue+is%3Aopen+sort%3Areactions-%2B1-desc).

.github/workflows/backport.yml (vendored, 2 changes)

@@ -21,7 +21,7 @@ jobs:
           fetch-depth: 0
       - name: Create backport PRs
         # should be kept in sync with `version`
-        uses: zeebe-io/backport-action@v1.3.0
+        uses: zeebe-io/backport-action@v1.3.1
         with:
          # Config README: https://github.com/zeebe-io/backport-action#backport-action
          github_token: ${{ secrets.GITHUB_TOKEN }}

@@ -4,7 +4,7 @@

 # Synopsis

-`nix-channel` {`--add` url [*name*] | `--remove` *name* | `--list` | `--update` [*names…*] | `--rollback` [*generation*] }
+`nix-channel` {`--add` url [*name*] | `--remove` *name* | `--list` | `--update` [*names…*] | `--list-generations` | `--rollback` [*generation*] }

 # Description

@@ -39,6 +39,15 @@ This command has the following operations:
   for `nix-env` operations (by symlinking them from the directory
   `~/.nix-defexpr`).

+- `--list-generations`\
+  Prints a list of all the currently existing generations for the
+  channel profile.
+
+  Works the same way as
+  ```
+  nix-env --profile /nix/var/nix/profiles/per-user/$USER/channels --list-generations
+  ```
+
 - `--rollback` \[*generation*\]\
   Reverts the previous call to `nix-channel
   --update`. Optionally, you can specify a specific channel generation
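
For illustration, a session using the new operation might look like this (the generation numbers and dates are made up; the output format is whatever `nix-env --list-generations` prints for the channels profile):

```console
$ nix-channel --list-generations
   1   2023-05-01 10:12:03
   2   2023-06-14 09:45:20   (current)
```
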
@@ -1,6 +1,6 @@
 # Name

-`nix-collect-garbage` - delete unreachable store paths
+`nix-collect-garbage` - delete unreachable [store objects]

 # Synopsis

@@ -8,17 +8,57 @@

 # Description

-The command `nix-collect-garbage` is mostly an alias of [`nix-store
---gc`](@docroot@/command-ref/nix-store/gc.md), that is, it deletes all
-unreachable paths in the Nix store to clean up your system. However,
-it provides two additional options: `-d` (`--delete-old`), which
-deletes all old generations of all profiles in `/nix/var/nix/profiles`
-by invoking `nix-env --delete-generations old` on all profiles (of
-course, this makes rollbacks to previous configurations impossible);
-and `--delete-older-than` *period*, where period is a value such as
-`30d`, which deletes all generations older than the specified number
-of days in all profiles in `/nix/var/nix/profiles` (except for the
-generations that were active at that point in time).
+The command `nix-collect-garbage` is mostly an alias of [`nix-store --gc`](@docroot@/command-ref/nix-store/gc.md).
+That is, it deletes all unreachable [store objects] in the Nix store to clean up your system.
+
+However, it provides two additional options,
+[`--delete-old`](#opt-delete-old) and [`--delete-older-than`](#opt-delete-older-than),
+which also delete old [profiles], allowing potentially more [store objects] to be deleted because profiles are also garbage collection roots.
+These options are the equivalent of running
+[`nix-env --delete-generations`](@docroot@/command-ref/nix-env/delete-generations.md)
+with various arguments on multiple profiles,
+prior to running `nix-collect-garbage` (or just `nix-store --gc`) without any flags.
+
+> **Note**
+>
+> Deleting previous configurations makes rollbacks to them impossible.
+
+These flags should be used with care, because they potentially delete generations of profiles used by other users on the system.
+
+## Locations searched for profiles
+
+`nix-collect-garbage` cannot know about all profiles; that information doesn't exist.
+Instead, it looks in a few locations, and acts on all profiles it finds there:
+
+1. The default profile locations as specified in the [profiles] section of the manual.
+
+2. > **NOTE**
+   >
+   > Not stable; subject to change
+   >
+   > Do not rely on this functionality; it just exists for migration purposes and may change in the future.
+   > These deprecated paths remain a private implementation detail of Nix.
+
+   `$NIX_STATE_DIR/profiles` and `$NIX_STATE_DIR/profiles/per-user`.
+
+   With the exception of `$NIX_STATE_DIR/profiles/per-user/root` and `$NIX_STATE_DIR/profiles/default`, these directories are no longer used by other commands.
+   `nix-collect-garbage` looks there anyway in order to clean up profiles from older versions of Nix.
+
+# Options
+
+These options are for deleting old [profiles] prior to deleting unreachable [store objects].
+
+- <span id="opt-delete-old">[`--delete-old`](#opt-delete-old)</span> / `-d`\
+  Delete all old generations of profiles.
+
+  This is the equivalent of invoking `nix-env --delete-generations old` on each found profile.
+
+- <span id="opt-delete-older-than">[`--delete-older-than`](#opt-delete-older-than)</span> *period*\
+  Delete all generations of profiles older than the specified amount (except for the generations that were active at that point in time).
+  *period* is a value such as `30d`, which would mean 30 days.
+
+  This is the equivalent of invoking [`nix-env --delete-generations <period>`](@docroot@/command-ref/nix-env/delete-generations.md#generations-days) on each found profile.
+  See the documentation of that command for additional information about the *period* argument.

 {{#include ./opt-common.md}}

@@ -32,3 +72,6 @@ generations of each profile, do
 ```console
 $ nix-collect-garbage -d
 ```
+
+[profiles]: @docroot@/command-ref/files/profiles.md
+[store objects]: @docroot@/glossary.md#gloss-store-object
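
As a companion sketch to the options added above, deleting every profile generation older than 30 days before collecting garbage would look like this (the store paths actually freed will vary per system):

```console
$ nix-collect-garbage --delete-older-than 30d
```
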
@@ -9,14 +9,39 @@

 # Description

 This operation deletes the specified generations of the current profile.
-The generations can be a list of generation numbers, the special value
-`old` to delete all non-current generations, a value such as `30d` to
-delete all generations older than the specified number of days (except
-for the generation that was active at that point in time), or a value
-such as `+5` to keep the last `5` generations ignoring any newer than
-current, e.g., if `30` is the current generation `+5` will delete
-generation `25` and all older generations. Periodically deleting old
-generations is important to make garbage collection effective.
+
+*generations* can be one of the following:
+
+- <span id="generations-list">`<number>...`</span>:\
+  A list of generation numbers, each one a separate command-line argument.
+
+  Delete exactly the profile generations given by their generation number.
+  Deleting the current generation is not allowed.
+
+- The special value <span id="generations-old">`old`</span>
+
+  Delete all generations older than the current one.
+
+- <span id="generations-days">`<days>d`</span>:\
+  The last *days* days
+
+  *Example*: `30d`
+
+  Delete all generations older than *days* days.
+  The generation that was active at that point in time is excluded, and will not be deleted.
+
+- <span id="generations-count">`+<count>`</span>:\
+  The last *count* generations up to the present
+
+  *Example*: `+5`
+
+  Keep the last *count* generations, along with any newer than current.
+
+Periodically deleting old generations is important to make garbage collection
+effective.
+This is because profiles are also garbage collection roots — any [store object] reachable from a profile is "alive" and ineligible for deletion.
+
+[store object]: @docroot@/glossary.md#gloss-store-object

 {{#include ./opt-common.md}}

@@ -28,19 +53,35 @@ generations is important to make garbage collection effective.

 # Examples

+## Delete explicit generation numbers
+
 ```console
 $ nix-env --delete-generations 3 4 8
 ```

+Delete the generations numbered 3, 4, and 8, so long as the current active generation is not any of those.
+
+## Keep most-recent by count
+
 ```console
 $ nix-env --delete-generations +5
 ```

+Suppose `30` is the current generation, and we currently have generations numbered `20` through `32`.
+
+Then this command will delete generations `20` through `25` (`<= 30 - 5`),
+and keep generations `26` through `32` (`> 30 - 5`).
+
+## Keep most-recent in days
+
 ```console
 $ nix-env --delete-generations 30d
 ```

+This command will delete all generations older than 30 days, except for the generation that was active 30 days ago (if it currently exists).
+
+## Delete all older
+
 ```console
 $ nix-env --profile other_profile --delete-generations old
 ```
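
A useful preliminary step for the examples above: list the existing generations first, so the numbers and ages passed to `--delete-generations` are informed (sample output is illustrative):

```console
$ nix-env --list-generations
  20   2023-04-01 12:00:00
  30   2023-06-01 12:00:00   (current)
```
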
@@ -112,9 +112,10 @@
   from some server.

 - [substituter]{#gloss-substituter}\
-  A *substituter* is an additional store from which Nix will
-  copy store objects it doesn't have. For details, see the
-  [`substituters` option](./command-ref/conf-file.md#conf-substituters).
+  An additional [store]{#gloss-store} from which Nix can obtain store objects instead of building them.
+  Often the substituter is a [binary cache](#gloss-binary-cache), but any store can serve as substituter.
+
+  See the [`substituters` configuration option](./command-ref/conf-file.md#conf-substituters) for details.

 [substituter]: #gloss-substituter

@@ -10,7 +10,7 @@
 - Bash Shell. The `./configure` script relies on bashisms, so Bash is
   required.

-- A version of GCC or Clang that supports C++17.
+- A version of GCC or Clang that supports C++20.

 - `pkg-config` to locate dependencies. If your distribution does not
   provide it, you can get it from

@@ -1,2 +1,3 @@
 # Release X.Y (202?-??-??)

+- [`nix-channel`](../command-ref/nix-channel.md) now supports a `--list-generations` subcommand

@@ -80,6 +80,38 @@ my $s3_us = Net::Amazon::S3->new(

 my $channelsBucket = $s3_us->bucket($channelsBucketName) or die;

+sub getStorePath {
+    my ($jobName, $output) = @_;
+    my $buildInfo = decode_json(fetch("$evalUrl/job/$jobName", 'application/json'));
+    return $buildInfo->{buildoutputs}->{$output or "out"}->{path} or die "cannot get store path for '$jobName'";
+}
+
+sub copyManual {
+    my $manual = getStorePath("build.x86_64-linux", "doc");
+    print "$manual\n";
+
+    my $manualNar = "$tmpDir/$releaseName-manual.nar.xz";
+    print "$manualNar\n";
+
+    unless (-e $manualNar) {
+        system("NIX_REMOTE=$binaryCache nix store dump-path '$manual' | xz > '$manualNar'.tmp") == 0
+            or die "unable to fetch $manual\n";
+        rename("$manualNar.tmp", $manualNar) or die;
+    }
+
+    unless (-e "$tmpDir/manual") {
+        system("xz -d < '$manualNar' | nix-store --restore $tmpDir/manual.tmp") == 0
+            or die "unable to unpack $manualNar\n";
+        rename("$tmpDir/manual.tmp/share/doc/nix/manual", "$tmpDir/manual") or die;
+        system("rm -rf '$tmpDir/manual.tmp'") == 0 or die;
+    }
+
+    system("aws s3 sync '$tmpDir/manual' s3://$releasesBucketName/$releaseDir/manual") == 0
+        or die "syncing manual to S3\n";
+}
+
+copyManual;
+
 sub downloadFile {
     my ($jobName, $productNr, $dstName) = @_;

@@ -179,9 +211,20 @@ if ($isLatest) {
     system("docker manifest push nixos/nix:latest") == 0 or die;
 }

+# Upload nix-fallback-paths.nix.
+write_file("$tmpDir/fallback-paths.nix",
+    "{\n" .
+    "  x86_64-linux = \"" . getStorePath("build.x86_64-linux") . "\";\n" .
+    "  i686-linux = \"" . getStorePath("build.i686-linux") . "\";\n" .
+    "  aarch64-linux = \"" . getStorePath("build.aarch64-linux") . "\";\n" .
+    "  x86_64-darwin = \"" . getStorePath("build.x86_64-darwin") . "\";\n" .
+    "  aarch64-darwin = \"" . getStorePath("build.aarch64-darwin") . "\";\n" .
+    "}\n");
+
 # Upload release files to S3.
 for my $fn (glob "$tmpDir/*") {
     my $name = basename($fn);
+    next if $name eq "manual";
     my $dstKey = "$releaseDir/" . $name;
     unless (defined $releasesBucket->head_key($dstKey)) {
         print STDERR "uploading $fn to s3://$releasesBucketName/$dstKey...\n";

@@ -189,8 +232,7 @@ for my $fn (glob "$tmpDir/*") {
         my $configuration = ();
         $configuration->{content_type} = "application/octet-stream";

-        if ($fn =~ /.sha256|install/) {
-            # Text files
+        if ($fn =~ /.sha256|install|\.nix$/) {
             $configuration->{content_type} = "text/plain";
         }

@@ -199,24 +241,6 @@ for my $fn (glob "$tmpDir/*") {
     }
 }

-# Print new nix-fallback-paths.nix.
-if ($isLatest) {
-    sub getStorePath {
-        my ($jobName) = @_;
-        my $buildInfo = decode_json(fetch("$evalUrl/job/$jobName", 'application/json'));
-        return $buildInfo->{buildoutputs}->{out}->{path} or die "cannot get store path for '$jobName'";
-    }
-
-    print STDERR "nixos/modules/installer/tools/nix-fallback-paths.nix:\n" .
-        "{\n" .
-        "  x86_64-linux = \"" . getStorePath("build.x86_64-linux") . "\";\n" .
-        "  i686-linux = \"" . getStorePath("build.i686-linux") . "\";\n" .
-        "  aarch64-linux = \"" . getStorePath("build.aarch64-linux") . "\";\n" .
-        "  x86_64-darwin = \"" . getStorePath("build.x86_64-darwin") . "\";\n" .
-        "  aarch64-darwin = \"" . getStorePath("build.aarch64-darwin") . "\";\n" .
-        "}\n";
-}
-
 # Update the "latest" symlink.
 $channelsBucket->add_key(
     "nix-latest/install", "",

@@ -100,7 +100,7 @@ poly_extra_try_me_commands() {
 poly_configure_nix_daemon_service() {
     task "Setting up the nix-daemon LaunchDaemon"
     _sudo "to set up the nix-daemon as a LaunchDaemon" \
-        /bin/cp -f "/nix/var/nix/profiles/default$NIX_DAEMON_DEST" "$NIX_DAEMON_DEST"
+        /usr/bin/install -m -rw-r--r-- "/nix/var/nix/profiles/default$NIX_DAEMON_DEST" "$NIX_DAEMON_DEST"

     _sudo "to load the LaunchDaemon plist for nix-daemon" \
         launchctl load /Library/LaunchDaemons/org.nixos.nix-daemon.plist

@@ -700,6 +700,10 @@ EOF
 }

 welcome_to_nix() {
+    local -r NIX_UID_RANGES="${NIX_FIRST_BUILD_UID}..$((NIX_FIRST_BUILD_UID + NIX_USER_COUNT - 1))"
+    local -r RANGE_TEXT=$(echo -ne "${BLUE}(uids [${NIX_UID_RANGES}])${ESC}")
+    local -r GROUP_TEXT=$(echo -ne "${BLUE}(gid ${NIX_BUILD_GROUP_ID})${ESC}")
+
     ok "Welcome to the Multi-User Nix Installation"

     cat <<EOF

@@ -713,8 +717,8 @@ manager. This will happen in a few stages:
 2. Show you what I am going to install and where. Then I will ask
    if you are ready to continue.

-3. Create the system users and groups that the Nix daemon uses to run
-   builds.
+3. Create the system users ${RANGE_TEXT} and groups ${GROUP_TEXT}
+   that the Nix daemon uses to run builds.

 4. Perform the basic installation of the Nix files daemon.

@@ -880,7 +884,7 @@ configure_shell_profile() {
         fi
     done

-    task "Setting up shell profiles for Fish with with ${PROFILE_FISH_SUFFIX} inside ${PROFILE_FISH_PREFIXES[*]}"
+    task "Setting up shell profiles for Fish with ${PROFILE_FISH_SUFFIX} inside ${PROFILE_FISH_PREFIXES[*]}"
     for fish_prefix in "${PROFILE_FISH_PREFIXES[@]}"; do
         if [ ! -d "$fish_prefix" ]; then
             # this specific prefix (ie: /etc/fish) is very likely to exist

@@ -701,7 +701,7 @@ RawInstallablesCommand::RawInstallablesCommand()
 {
     addFlag({
         .longName = "stdin",
-        .description = "Read installables from the standard input.",
+        .description = "Read installables from the standard input. No default installable applied.",
         .handler = {&readFromStdIn, true}
     });

@@ -730,9 +730,9 @@ void RawInstallablesCommand::run(ref<Store> store)
         while (std::cin >> word) {
             rawInstallables.emplace_back(std::move(word));
         }
+    } else {
+        applyDefaultInstallables(rawInstallables);
     }
-
-    applyDefaultInstallables(rawInstallables);

     run(store, std::move(rawInstallables));
 }
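
A hedged usage sketch of the changed flag (`nixpkgs#hello` is only an example installable): with this change, an explicit `--stdin` no longer has the default installable appended to whatever is read from standard input:

```console
$ echo "nixpkgs#hello" | nix build --stdin
```
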
@@ -741,7 +741,7 @@ struct EvalSettings : Config
           If set to `true`, the Nix evaluator will not allow access to any
           files outside of the Nix search path (as set via the `NIX_PATH`
           environment variable or the `-I` option), or to URIs outside of
-          `allowed-uri`. The default is `false`.
+          `allowed-uris`. The default is `false`.
        )"};

     Setting<bool> pureEval{this, false, "pure-eval",
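
For context on the corrected name: `allowed-uris` works together with `restrict-eval`; a minimal illustrative configuration (the URI is an example, not a recommendation):

```console
$ cat /etc/nix/nix.conf
restrict-eval = true
allowed-uris = https://github.com/NixOS
```
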
@@ -6,7 +6,7 @@
 #include "globals.hh"
 #include "json-to-value.hh"
 #include "names.hh"
-#include "references.hh"
+#include "path-references.hh"
 #include "store-api.hh"
 #include "util.hh"
 #include "value-to-json.hh"

@@ -4058,18 +4058,6 @@ static RegisterPrimOp primop_splitVersion({
 RegisterPrimOp::PrimOps * RegisterPrimOp::primOps;


-RegisterPrimOp::RegisterPrimOp(std::string name, size_t arity, PrimOpFun fun)
-{
-    if (!primOps) primOps = new PrimOps;
-    primOps->push_back({
-        .name = name,
-        .args = {},
-        .arity = arity,
-        .fun = fun,
-    });
-}
-
-
 RegisterPrimOp::RegisterPrimOp(Info && info)
 {
     if (!primOps) primOps = new PrimOps;

@@ -28,11 +28,6 @@ struct RegisterPrimOp
      * will get called during EvalState initialization, so there
      * may be primops not yet added and builtins is not yet sorted.
      */
-    RegisterPrimOp(
-        std::string name,
-        size_t arity,
-        PrimOpFun fun);
-
     RegisterPrimOp(Info && info);
 };

@@ -12,7 +12,11 @@ static void prim_unsafeDiscardStringContext(EvalState & state, const PosIdx pos,
     v.mkString(*s);
 }

-static RegisterPrimOp primop_unsafeDiscardStringContext("__unsafeDiscardStringContext", 1, prim_unsafeDiscardStringContext);
+static RegisterPrimOp primop_unsafeDiscardStringContext({
+    .name = "__unsafeDiscardStringContext",
+    .arity = 1,
+    .fun = prim_unsafeDiscardStringContext
+});

 static void prim_hasContext(EvalState & state, const PosIdx pos, Value * * args, Value & v)

@@ -22,7 +26,16 @@ static void prim_hasContext(EvalState & state, const PosIdx pos, Value * * args,
     v.mkBool(!context.empty());
 }

-static RegisterPrimOp primop_hasContext("__hasContext", 1, prim_hasContext);
+static RegisterPrimOp primop_hasContext({
+    .name = "__hasContext",
+    .args = {"s"},
+    .doc = R"(
+      Return `true` if string *s* has a non-empty context. The
+      context can be obtained with
+      [`getContext`](#builtins-getContext).
+    )",
+    .fun = prim_hasContext
+});

 /* Sometimes we want to pass a derivation path (i.e. pkg.drvPath) to a

@@ -51,7 +64,11 @@ static void prim_unsafeDiscardOutputDependency(EvalState & state, const PosIdx p
     v.mkString(*s, context2);
 }

-static RegisterPrimOp primop_unsafeDiscardOutputDependency("__unsafeDiscardOutputDependency", 1, prim_unsafeDiscardOutputDependency);
+static RegisterPrimOp primop_unsafeDiscardOutputDependency({
+    .name = "__unsafeDiscardOutputDependency",
+    .arity = 1,
+    .fun = prim_unsafeDiscardOutputDependency
+});

 /* Extract the context of a string as a structured Nix value.

@@ -119,7 +136,30 @@ static void prim_getContext(EvalState & state, const PosIdx pos, Value * * args,
     v.mkAttrs(attrs);
 }

-static RegisterPrimOp primop_getContext("__getContext", 1, prim_getContext);
+static RegisterPrimOp primop_getContext({
+    .name = "__getContext",
+    .args = {"s"},
+    .doc = R"(
+      Return the string context of *s*.
+
+      The string context tracks references to derivations within a string.
+      It is represented as an attribute set of [store derivation](@docroot@/glossary.md#gloss-store-derivation) paths mapping to output names.
+
+      Using [string interpolation](@docroot@/language/string-interpolation.md) on a derivation will add that derivation to the string context.
+      For example,
+
+      ```nix
+      builtins.getContext "${derivation { name = "a"; builder = "b"; system = "c"; }}"
+      ```
+
+      evaluates to
+
+      ```
+      { "/nix/store/arhvjaf6zmlyn8vh8fgn55rpwnxq0n7l-a.drv" = { outputs = [ "out" ]; }; }
+      ```
+    )",
+    .fun = prim_getContext
+});

 /* Append the given context to a given string.

@@ -192,6 +232,10 @@ static void prim_appendContext(EvalState & state, const PosIdx pos, Value * * ar
     v.mkString(orig, context);
 }

-static RegisterPrimOp primop_appendContext("__appendContext", 2, prim_appendContext);
+static RegisterPrimOp primop_appendContext({
+    .name = "__appendContext",
+    .arity = 2,
+    .fun = prim_appendContext
+});

 }

|
||||||
state.allowPath(tree.storePath);
|
state.allowPath(tree.storePath);
|
||||||
}
|
}
|
||||||
|
|
||||||
static RegisterPrimOp r_fetchMercurial("fetchMercurial", 1, prim_fetchMercurial);
|
static RegisterPrimOp r_fetchMercurial({
|
||||||
|
.name = "fetchMercurial",
|
||||||
|
.arity = 1,
|
||||||
|
.fun = prim_fetchMercurial
|
||||||
|
});
|
||||||
|
|
||||||
}
|
}
|
||||||
|
|
|
@@ -194,7 +194,11 @@ static void prim_fetchTree(EvalState & state, const PosIdx pos, Value * * args,
 }

 // FIXME: document
-static RegisterPrimOp primop_fetchTree("fetchTree", 1, prim_fetchTree);
+static RegisterPrimOp primop_fetchTree({
+    .name = "fetchTree",
+    .arity = 1,
+    .fun = prim_fetchTree
+});

 static void fetch(EvalState & state, const PosIdx pos, Value * * args, Value & v,
     const std::string & who, bool unpack, std::string name)

@@ -3,6 +3,8 @@

 #include "../../toml11/toml.hpp"

+#include <sstream>
+
 namespace nix {

 static void prim_fromTOML(EvalState & state, const PosIdx pos, Value * * args, Value & val)

@@ -58,8 +60,18 @@ static void prim_fromTOML(EvalState & state, const PosIdx pos, Value * * args, V
                 case toml::value_t::offset_datetime:
                 case toml::value_t::local_date:
                 case toml::value_t::local_time:
-                    // We fail since Nix doesn't have date and time types
-                    throw std::runtime_error("Dates and times are not supported");
+                    {
+                        if (experimentalFeatureSettings.isEnabled(Xp::ParseTomlTimestamps)) {
+                            auto attrs = state.buildBindings(2);
+                            attrs.alloc("_type").mkString("timestamp");
+                            std::ostringstream s;
+                            s << t;
+                            attrs.alloc("value").mkString(s.str());
+                            v.mkAttrs(attrs);
+                        } else {
+                            throw std::runtime_error("Dates and times are not supported");
+                        }
+                    }
                     break;;
                 case toml::value_t::empty:
                     v.mkNull();

@@ -78,6 +90,24 @@ static void prim_fromTOML(EvalState & state, const PosIdx pos, Value * * args, V
     }
 }

-static RegisterPrimOp primop_fromTOML("fromTOML", 1, prim_fromTOML);
+static RegisterPrimOp primop_fromTOML({
+    .name = "fromTOML",
+    .args = {"e"},
+    .doc = R"(
+      Convert a TOML string to a Nix value. For example,
+
+      ```nix
+      builtins.fromTOML ''
+        x=1
+        s="a"
+        [table]
+        y=2
+      ''
+      ```
+
+      returns the value `{ s = "a"; table = { y = 2; }; x = 1; }`.
+    )",
+    .fun = prim_fromTOML
+});

 }
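
Assuming the `parse-toml-timestamps` experimental feature added elsewhere in this commit, the new code path above turns a TOML datetime into an attribute set with `_type = "timestamp"` and the serialized value. An illustrative evaluation (exact output formatting may differ):

```console
$ nix-instantiate --eval --strict --extra-experimental-features parse-toml-timestamps \
    --expr 'builtins.fromTOML "date = 2023-01-01"'
{ date = { _type = "timestamp"; value = "2023-01-01"; }; }
```
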
@@ -4,7 +4,7 @@
 #include "worker.hh"
 #include "builtins.hh"
 #include "builtins/buildenv.hh"
-#include "references.hh"
+#include "path-references.hh"
 #include "finally.hh"
 #include "util.hh"
 #include "archive.hh"

@@ -1457,7 +1457,7 @@ void LocalDerivationGoal::startDaemon()
             (struct sockaddr *) &remoteAddr, &remoteAddrLen);
         if (!remote) {
             if (errno == EINTR || errno == EAGAIN) continue;
-            if (errno == EINVAL) break;
+            if (errno == EINVAL || errno == ECONNABORTED) break;
             throw SysError("accepting connection");
         }

@@ -1487,8 +1487,22 @@ void LocalDerivationGoal::startDaemon()

 void LocalDerivationGoal::stopDaemon()
 {
-    if (daemonSocket && shutdown(daemonSocket.get(), SHUT_RDWR) == -1)
-        throw SysError("shutting down daemon socket");
+    if (daemonSocket && shutdown(daemonSocket.get(), SHUT_RDWR) == -1) {
+        // According to the POSIX standard, the 'shutdown' function should
+        // return an ENOTCONN error when attempting to shut down a socket that
+        // hasn't been connected yet. This situation occurs when the 'accept'
+        // function is called on a socket without any accepted connections,
+        // leaving the socket unconnected. While Linux doesn't seem to produce
+        // an error for sockets that have only been accepted, more
+        // POSIX-compliant operating systems like OpenBSD, macOS, and others do
+        // return the ENOTCONN error. Therefore, we handle this error here to
+        // avoid raising an exception for compliant behaviour.
+        if (errno == ENOTCONN) {
+            daemonSocket.close();
+        } else {
+            throw SysError("shutting down daemon socket");
+        }
+    }

     if (daemonThread.joinable())
         daemonThread.join();

@@ -1499,7 +1513,8 @@ void LocalDerivationGoal::stopDaemon()
         thread.join();
     daemonWorkerThreads.clear();

-    daemonSocket = -1;
+    // release the socket.
+    daemonSocket.close();
 }

@@ -2379,18 +2394,21 @@ SingleDrvOutputs LocalDerivationGoal::registerOutputs()
             continue;
         auto references = *referencesOpt;

-        auto rewriteOutput = [&]() {
+        auto rewriteOutput = [&](const StringMap & rewrites) {
             /* Apply hash rewriting if necessary. */
-            if (!outputRewrites.empty()) {
+            if (!rewrites.empty()) {
                 debug("rewriting hashes in '%1%'; cross fingers", actualPath);

-                /* FIXME: this is in-memory. */
-                StringSink sink;
-                dumpPath(actualPath, sink);
+                /* FIXME: Is this actually streaming? */
+                auto source = sinkToSource([&](Sink & nextSink) {
+                    RewritingSink rsink(rewrites, nextSink);
+                    dumpPath(actualPath, rsink);
+                    rsink.flush();
+                });
+                Path tmpPath = actualPath + ".tmp";
+                restorePath(tmpPath, *source);
                 deletePath(actualPath);
-                sink.s = rewriteStrings(sink.s, outputRewrites);
-                StringSource source(sink.s);
-                restorePath(actualPath, source);
+                movePath(tmpPath, actualPath);

                 /* FIXME: set proper permissions in restorePath() so
                    we don't have to do another traversal. */

@@ -2439,7 +2457,7 @@ SingleDrvOutputs LocalDerivationGoal::registerOutputs()
                     "since recursive hashing is not enabled (one of outputHashMode={flat,text} is true)",
                     actualPath);
             }
-            rewriteOutput();
+            rewriteOutput(outputRewrites);
             /* FIXME optimize and deduplicate with addToStore */
             std::string oldHashPart { scratchPath->hashPart() };
             HashModuloSink caSink { outputHash.hashType, oldHashPart };

@@ -2477,16 +2495,14 @@ SingleDrvOutputs LocalDerivationGoal::registerOutputs()
                 Hash::dummy,
             };
             if (*scratchPath != newInfo0.path) {
-                // Also rewrite the output path
-                auto source = sinkToSource([&](Sink & nextSink) {
-                    RewritingSink rsink2(oldHashPart, std::string(newInfo0.path.hashPart()), nextSink);
-                    dumpPath(actualPath, rsink2);
-                    rsink2.flush();
-                });
-                Path tmpPath = actualPath + ".tmp";
-                restorePath(tmpPath, *source);
-                deletePath(actualPath);
-                movePath(tmpPath, actualPath);
+                // If the path has some self-references, we need to rewrite
+                // them.
+                // (note that this doesn't invalidate the ca hash we calculated
+                // above because it's computed *modulo the self-references*, so
+                // it already takes this rewrite into account).
+                rewriteOutput(
+                    StringMap{{oldHashPart,
+                               std::string(newInfo0.path.hashPart())}});
             }

             HashResult narHashAndSize = hashPath(htSHA256, actualPath);

@@ -2508,7 +2524,7 @@ SingleDrvOutputs LocalDerivationGoal::registerOutputs()
                 outputRewrites.insert_or_assign(
                     std::string { scratchPath->hashPart() },
                     std::string { requiredFinalPath.hashPart() });
-                rewriteOutput();
+                rewriteOutput(outputRewrites);
                 auto narHashAndSize = hashPath(htSHA256, actualPath);
                 ValidPathInfo newInfo0 { requiredFinalPath, narHashAndSize.first };
                 newInfo0.narSize = narHashAndSize.second;

@@ -21,7 +21,8 @@ void setPersonality(std::string_view system)
         && (std::string_view(SYSTEM) == "x86_64-linux"
             || (!strcmp(utsbuf.sysname, "Linux") && !strcmp(utsbuf.machine, "x86_64"))))
         || system == "armv7l-linux"
-        || system == "armv6l-linux")
+        || system == "armv6l-linux"
+        || system == "armv5tel-linux")
     {
         if (personality(PER_LINUX32) == -1)
             throw SysError("cannot set 32-bit personality");

@@ -691,20 +691,19 @@ public:
         Strings{"https://cache.nixos.org/"},
         "substituters",
         R"(
-          A list of [URLs of Nix stores](@docroot@/command-ref/new-cli/nix3-help-stores.md#store-url-format)
-          to be used as substituters, separated by whitespace.
-          Substituters are tried based on their Priority value, which each substituter can set
-          independently. Lower value means higher priority.
-          The default is `https://cache.nixos.org`, with a Priority of 40.
+          A list of [URLs of Nix stores](@docroot@/command-ref/new-cli/nix3-help-stores.md#store-url-format) to be used as substituters, separated by whitespace.
+          A substituter is an additional [store](@docroot@/glossary.md#gloss-store) from which Nix can obtain [store objects](@docroot@/glossary.md#gloss-store-object) instead of building them.

-          At least one of the following conditions must be met for Nix to use
-          a substituter:
+          Substituters are tried based on their priority value, which each substituter can set independently.
+          Lower value means higher priority.
+          The default is `https://cache.nixos.org`, which has a priority of 40.
+
+          At least one of the following conditions must be met for Nix to use a substituter:

           - the substituter is in the [`trusted-substituters`](#conf-trusted-substituters) list
           - the user calling Nix is in the [`trusted-users`](#conf-trusted-users) list

-          In addition, each store path should be trusted as described
-          in [`trusted-public-keys`](#conf-trusted-public-keys)
+          In addition, each store path should be trusted as described in [`trusted-public-keys`](#conf-trusted-public-keys)
        )",
        {"binary-caches"}};
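
A minimal configuration matching the rewritten description (the second URL and its key are placeholders, not real endpoints):

```console
$ cat /etc/nix/nix.conf
substituters = https://cache.nixos.org https://example-cache.example.org
trusted-public-keys = example-cache.example.org-1:PLACEHOLDERKEY=
```
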
@@ -896,12 +895,11 @@ public:
         this, {}, "hashed-mirrors",
         R"(
           A list of web servers used by `builtins.fetchurl` to obtain files by
-          hash. The default is `http://tarballs.nixos.org/`. Given a hash type
-          *ht* and a base-16 hash *h*, Nix will try to download the file from
-          *hashed-mirror*/*ht*/*h*. This allows files to be downloaded even if
-          they have disappeared from their original URI. For example, given
-          the default mirror `http://tarballs.nixos.org/`, when building the
-          derivation
+          hash. Given a hash type *ht* and a base-16 hash *h*, Nix will try to
+          download the file from *hashed-mirror*/*ht*/*h*. This allows files to
+          be downloaded even if they have disappeared from their original URI.
+          For example, given an example mirror `http://tarballs.nixos.org/`,
+          when building the derivation

           ```nix
           builtins.fetchurl {

src/libstore/path-references.cc (new file, 73 lines)

@@ -0,0 +1,73 @@
+#include "path-references.hh"
+#include "hash.hh"
+#include "util.hh"
+#include "archive.hh"
+
+#include <map>
+#include <cstdlib>
+#include <mutex>
+#include <algorithm>
+
+
+namespace nix {
+
+
+PathRefScanSink::PathRefScanSink(StringSet && hashes, std::map<std::string, StorePath> && backMap)
+    : RefScanSink(std::move(hashes))
+    , backMap(std::move(backMap))
+{ }
+
+PathRefScanSink PathRefScanSink::fromPaths(const StorePathSet & refs)
+{
+    StringSet hashes;
+    std::map<std::string, StorePath> backMap;
+
+    for (auto & i : refs) {
+        std::string hashPart(i.hashPart());
+        auto inserted = backMap.emplace(hashPart, i).second;
+        assert(inserted);
+        hashes.insert(hashPart);
+    }
+
+    return PathRefScanSink(std::move(hashes), std::move(backMap));
+}
+
+StorePathSet PathRefScanSink::getResultPaths()
+{
+    /* Map the hashes found back to their store paths. */
+    StorePathSet found;
+    for (auto & i : getResult()) {
+        auto j = backMap.find(i);
+        assert(j != backMap.end());
+        found.insert(j->second);
+    }
+
+    return found;
+}
+
+
+std::pair<StorePathSet, HashResult> scanForReferences(
+    const std::string & path,
+    const StorePathSet & refs)
+{
+    HashSink hashSink { htSHA256 };
+    auto found = scanForReferences(hashSink, path, refs);
+    auto hash = hashSink.finish();
+    return std::pair<StorePathSet, HashResult>(found, hash);
+}
+
+StorePathSet scanForReferences(
+    Sink & toTee,
+    const Path & path,
+    const StorePathSet & refs)
+{
+    PathRefScanSink refsSink = PathRefScanSink::fromPaths(refs);
+    TeeSink sink { refsSink, toTee };
+
+    /* Look for the hashes in the NAR dump of the path. */
+    dumpPath(path, sink);
+
+    return refsSink.getResultPaths();
+}
+
+}

src/libstore/path-references.hh (new file, 25 lines)

@@ -0,0 +1,25 @@
+#pragma once
+
+#include "references.hh"
+#include "path.hh"
+
+namespace nix {
+
+std::pair<StorePathSet, HashResult> scanForReferences(const Path & path, const StorePathSet & refs);
+
+StorePathSet scanForReferences(Sink & toTee, const Path & path, const StorePathSet & refs);
+
+class PathRefScanSink : public RefScanSink
+{
+    std::map<std::string, StorePath> backMap;
+
+    PathRefScanSink(StringSet && hashes, std::map<std::string, StorePath> && backMap);
+
+public:
+
+    static PathRefScanSink fromPaths(const StorePathSet & refs);
+
+    StorePathSet getResultPaths();
+};
+
+}

@@ -13,6 +13,14 @@

 namespace nix {

+std::string UDSRemoteStoreConfig::doc()
+{
+    return
+      #include "uds-remote-store.md"
+      ;
+}
+
+
 UDSRemoteStore::UDSRemoteStore(const Params & params)
     : StoreConfig(params)
     , LocalFSStoreConfig(params)

@@ -17,12 +17,7 @@ struct UDSRemoteStoreConfig : virtual LocalFSStoreConfig, virtual RemoteStoreConfig

     const std::string name() override { return "Local Daemon Store"; }

-    std::string doc() override
-    {
-        return
-          #include "uds-remote-store.md"
-          ;
-    }
+    std::string doc() override;
 };

 class UDSRemoteStore : public virtual UDSRemoteStoreConfig, public virtual LocalFSStore, public virtual RemoteStore

@@ -12,7 +12,7 @@ struct ExperimentalFeatureDetails
     std::string_view description;
 };

-constexpr std::array<ExperimentalFeatureDetails, 14> xpFeatureDetails = {{
+constexpr std::array<ExperimentalFeatureDetails, 15> xpFeatureDetails = {{
     {
         .tag = Xp::CaDerivations,
         .name = "ca-derivations",

@@ -214,6 +214,13 @@ constexpr std::array<ExperimentalFeatureDetails, 14> xpFeatureDetails = {{
           derivations that are themselves derivations outputs.
        )",
     },
+    {
+        .tag = Xp::ParseTomlTimestamps,
+        .name = "parse-toml-timestamps",
+        .description = R"(
+          Allow parsing of timestamps in builtins.fromTOML.
+        )",
+    },
     {
         .tag = Xp::ReadOnlyLocalStore,
         .name = "read-only-local-store",

@@ -263,7 +270,7 @@ std::string_view showExperimentalFeature(const ExperimentalFeature tag)
     return xpFeatureDetails[(size_t)tag].name;
 }

 nlohmann::json documentExperimentalFeatures()
 {
     StringMap res;
     for (auto & xpFeature : xpFeatureDetails)

@@ -30,6 +30,7 @@ enum struct ExperimentalFeature
     DiscardReferences,
     DaemonTrustOverride,
     DynamicDerivations,
+    ParseTomlTimestamps,
     ReadOnlyLocalStore,
 };

@@ -63,30 +63,19 @@ std::pair<AutoCloseFD, Path> createTempFile(const Path & prefix)
     return {std::move(fd), tmpl};
 }

-void createSymlink(const Path & target, const Path & link,
-    std::optional<time_t> mtime)
+void createSymlink(const Path & target, const Path & link)
 {
     if (symlink(target.c_str(), link.c_str()))
         throw SysError("creating symlink from '%1%' to '%2%'", link, target);
-    if (mtime) {
-        struct timeval times[2];
-        times[0].tv_sec = *mtime;
-        times[0].tv_usec = 0;
-        times[1].tv_sec = *mtime;
-        times[1].tv_usec = 0;
-        if (lutimes(link.c_str(), times))
-            throw SysError("setting time of symlink '%s'", link);
-    }
 }

-void replaceSymlink(const Path & target, const Path & link,
-    std::optional<time_t> mtime)
+void replaceSymlink(const Path & target, const Path & link)
 {
     for (unsigned int n = 0; true; n++) {
         Path tmp = canonPath(fmt("%s/.%d_%s", dirOf(link), n, baseNameOf(link)));

         try {
-            createSymlink(target, tmp, mtime);
+            createSymlink(target, tmp);
         } catch (SysError & e) {
             if (e.errNo == EEXIST) continue;
             throw;

@ -6,6 +6,7 @@
|
||||||
#include <map>
|
#include <map>
|
||||||
#include <cstdlib>
|
#include <cstdlib>
|
||||||
#include <mutex>
|
#include <mutex>
|
||||||
|
#include <algorithm>
|
||||||
|
|
||||||
|
|
||||||
namespace nix {
|
namespace nix {
|
||||||
|
@ -66,69 +67,20 @@ void RefScanSink::operator () (std::string_view data)
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
||||||
PathRefScanSink::PathRefScanSink(StringSet && hashes, std::map<std::string, StorePath> && backMap)
|
|
||||||
: RefScanSink(std::move(hashes))
|
|
||||||
, backMap(std::move(backMap))
|
|
||||||
{ }
|
|
||||||
|
|
||||||
PathRefScanSink PathRefScanSink::fromPaths(const StorePathSet & refs)
|
|
||||||
{
|
|
||||||
StringSet hashes;
|
|
||||||
std::map<std::string, StorePath> backMap;
|
|
||||||
|
|
||||||
for (auto & i : refs) {
|
|
||||||
std::string hashPart(i.hashPart());
|
|
||||||
auto inserted = backMap.emplace(hashPart, i).second;
|
|
||||||
assert(inserted);
|
|
||||||
hashes.insert(hashPart);
|
|
||||||
}
|
|
||||||
|
|
||||||
return PathRefScanSink(std::move(hashes), std::move(backMap));
|
|
||||||
}
|
|
||||||
|
|
||||||
StorePathSet PathRefScanSink::getResultPaths()
|
|
||||||
{
|
|
||||||
/* Map the hashes found back to their store paths. */
|
|
||||||
StorePathSet found;
|
|
||||||
for (auto & i : getResult()) {
|
|
||||||
auto j = backMap.find(i);
|
|
||||||
assert(j != backMap.end());
|
|
||||||
found.insert(j->second);
|
|
||||||
}
|
|
||||||
|
|
||||||
return found;
|
|
||||||
}
|
|
||||||
|
|
||||||
|
|
||||||
std::pair<StorePathSet, HashResult> scanForReferences(
|
|
||||||
const std::string & path,
|
|
||||||
const StorePathSet & refs)
|
|
||||||
{
|
|
||||||
HashSink hashSink { htSHA256 };
|
|
||||||
auto found = scanForReferences(hashSink, path, refs);
|
|
||||||
auto hash = hashSink.finish();
|
|
||||||
return std::pair<StorePathSet, HashResult>(found, hash);
|
|
||||||
}
|
|
||||||
|
|
||||||
StorePathSet scanForReferences(
|
|
||||||
Sink & toTee,
|
|
||||||
const Path & path,
|
|
||||||
const StorePathSet & refs)
|
|
||||||
{
|
|
||||||
PathRefScanSink refsSink = PathRefScanSink::fromPaths(refs);
|
|
||||||
TeeSink sink { refsSink, toTee };
|
|
||||||
|
|
||||||
/* Look for the hashes in the NAR dump of the path. */
|
|
||||||
dumpPath(path, sink);
|
|
||||||
|
|
||||||
return refsSink.getResultPaths();
|
|
||||||
}
|
|
||||||
|
|
||||||
|
|
||||||
RewritingSink::RewritingSink(const std::string & from, const std::string & to, Sink & nextSink)
|
RewritingSink::RewritingSink(const std::string & from, const std::string & to, Sink & nextSink)
|
||||||
: from(from), to(to), nextSink(nextSink)
|
: RewritingSink({{from, to}}, nextSink)
|
||||||
{
|
{
|
||||||
assert(from.size() == to.size());
|
}
|
||||||
|
|
||||||
|
RewritingSink::RewritingSink(const StringMap & rewrites, Sink & nextSink)
|
||||||
|
: rewrites(rewrites), nextSink(nextSink)
|
||||||
|
{
|
||||||
|
long unsigned int maxRewriteSize = 0;
|
||||||
|
for (auto & [from, to] : rewrites) {
|
||||||
|
assert(from.size() == to.size());
|
||||||
|
maxRewriteSize = std::max(maxRewriteSize, from.size());
|
||||||
|
}
|
||||||
|
this->maxRewriteSize = maxRewriteSize;
|
||||||
}
|
}
|
||||||
|
|
||||||
void RewritingSink::operator () (std::string_view data)
|
void RewritingSink::operator () (std::string_view data)
|
||||||
|
@ -136,13 +88,13 @@ void RewritingSink::operator () (std::string_view data)
|
||||||
std::string s(prev);
|
std::string s(prev);
|
||||||
s.append(data);
|
s.append(data);
|
||||||
|
|
||||||
size_t j = 0;
|
s = rewriteStrings(s, rewrites);
|
||||||
while ((j = s.find(from, j)) != std::string::npos) {
|
|
||||||
matches.push_back(pos + j);
|
|
||||||
s.replace(j, from.size(), to);
|
|
||||||
}
|
|
||||||
|
|
||||||
prev = s.size() < from.size() ? s : std::string(s, s.size() - from.size() + 1, from.size() - 1);
|
prev = s.size() < maxRewriteSize
|
||||||
|
? s
|
||||||
|
: maxRewriteSize == 0
|
||||||
|
? ""
|
||||||
|
: std::string(s, s.size() - maxRewriteSize + 1, maxRewriteSize - 1);
|
||||||
|
|
||||||
auto consumed = s.size() - prev.size();
|
auto consumed = s.size() - prev.size();
|
||||||
|
|
|
@@ -2,14 +2,9 @@
 ///@file

 #include "hash.hh"
-#include "path.hh"

 namespace nix {

-std::pair<StorePathSet, HashResult> scanForReferences(const Path & path, const StorePathSet & refs);
-
-StorePathSet scanForReferences(Sink & toTee, const Path & path, const StorePathSet & refs);
-
 class RefScanSink : public Sink
 {
     StringSet hashes;

@@ -28,28 +23,18 @@ public:
     void operator () (std::string_view data) override;
 };

-class PathRefScanSink : public RefScanSink
-{
-    std::map<std::string, StorePath> backMap;
-
-    PathRefScanSink(StringSet && hashes, std::map<std::string, StorePath> && backMap);
-
-public:
-
-    static PathRefScanSink fromPaths(const StorePathSet & refs);
-
-    StorePathSet getResultPaths();
-};
-
 struct RewritingSink : Sink
 {
-    std::string from, to, prev;
+    const StringMap rewrites;
+    long unsigned int maxRewriteSize;
+    std::string prev;
     Sink & nextSink;
     uint64_t pos = 0;

     std::vector<uint64_t> matches;

     RewritingSink(const std::string & from, const std::string & to, Sink & nextSink);
+    RewritingSink(const StringMap & rewrites, Sink & nextSink);

     void operator () (std::string_view data) override;

src/libutil/tests/references.cc (new file)

@@ -0,0 +1,46 @@
+#include "references.hh"
+#include <gtest/gtest.h>
+
+namespace nix {
+
+using std::string;
+
+struct RewriteParams {
+    string originalString, finalString;
+    StringMap rewrites;
+
+    friend std::ostream& operator<<(std::ostream& os, const RewriteParams& bar) {
+        StringSet strRewrites;
+        for (auto & [from, to] : bar.rewrites)
+            strRewrites.insert(from + "->" + to);
+        return os <<
+            "OriginalString: " << bar.originalString << std::endl <<
+            "Rewrites: " << concatStringsSep(",", strRewrites) << std::endl <<
+            "Expected result: " << bar.finalString;
+    }
+};
+
+class RewriteTest : public ::testing::TestWithParam<RewriteParams> {
+};
+
+TEST_P(RewriteTest, IdentityRewriteIsIdentity) {
+    RewriteParams param = GetParam();
+    StringSink rewritten;
+    auto rewriter = RewritingSink(param.rewrites, rewritten);
+    rewriter(param.originalString);
+    rewriter.flush();
+    ASSERT_EQ(rewritten.s, param.finalString);
+}
+
+INSTANTIATE_TEST_CASE_P(
+    references,
+    RewriteTest,
+    ::testing::Values(
+        RewriteParams{ "foooo", "baroo", {{"foo", "bar"}, {"bar", "baz"}}},
+        RewriteParams{ "foooo", "bazoo", {{"fou", "bar"}, {"foo", "baz"}}},
+        RewriteParams{ "foooo", "foooo", {}}
+    )
+);
+
+}
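The parametrized cases above feed each input in a single call; a rewrite split across two writes also works, because `operator ()` re-scans the carried tail (`prev`, at most `maxRewriteSize - 1` bytes) together with the next chunk. A minimal sketch of that behaviour (illustrative only; `chunkedRewriteDemo` is not part of the commit):

    // Sketch: a pattern cut in half by a chunk boundary is still
    // rewritten, since the tail of the first chunk is re-scanned
    // together with the second one.
    #include "references.hh"
    #include <cassert>

    void chunkedRewriteDemo()
    {
        nix::StringSink out;
        nix::RewritingSink sink({{"foo", "bar"}}, out);
        sink("xxfo");   // "foo" starts here...
        sink("oyy");    // ...and ends here
        sink.flush();
        assert(out.s == "xxbaryy");
    }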
src/libutil/util.hh

@@ -256,14 +256,12 @@ inline Paths createDirs(PathView path)
 /**
  * Create a symlink.
  */
-void createSymlink(const Path & target, const Path & link,
-    std::optional<time_t> mtime = {});
+void createSymlink(const Path & target, const Path & link);
 
 /**
  * Atomically create or replace a symlink.
  */
-void replaceSymlink(const Path & target, const Path & link,
-    std::optional<time_t> mtime = {});
+void replaceSymlink(const Path & target, const Path & link);
 
 void renameFile(const Path & src, const Path & dst);
src/nix-channel/nix-channel.cc

@@ -177,6 +177,7 @@ static int main_nix_channel(int argc, char ** argv)
         cRemove,
         cList,
         cUpdate,
+        cListGenerations,
         cRollback
     } cmd = cNone;
     std::vector<std::string> args;
@@ -193,6 +194,8 @@ static int main_nix_channel(int argc, char ** argv)
             cmd = cList;
         } else if (*arg == "--update") {
             cmd = cUpdate;
+        } else if (*arg == "--list-generations") {
+            cmd = cListGenerations;
         } else if (*arg == "--rollback") {
             cmd = cRollback;
         } else {
@@ -237,6 +240,11 @@ static int main_nix_channel(int argc, char ** argv)
         case cUpdate:
             update(StringSet(args.begin(), args.end()));
             break;
+        case cListGenerations:
+            if (!args.empty())
+                throw UsageError("'--list-generations' expects no arguments");
+            std::cout << runProgram(settings.nixBinDir + "/nix-env", false, {"--profile", profile, "--list-generations"}) << std::flush;
+            break;
         case cRollback:
             if (args.size() > 1)
                 throw UsageError("'--rollback' has at most one argument");
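Illustratively (the timestamps below are made up; the output format is that of `nix-env --list-generations`, which the new case shells out to):

    $ nix-channel --list-generations
      1   2023-05-30 10:19:48
      2   2023-06-01 09:02:17   (current)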
@@ -24,6 +24,7 @@
 #include <sys/stat.h>
 #include <sys/socket.h>
 #include <sys/un.h>
+#include <sys/select.h>
 #include <errno.h>
 #include <pwd.h>
 #include <grp.h>
tests/build.sh

@@ -129,3 +129,7 @@ nix build --impure -f multiple-outputs.nix --json e --no-link | jq --exit-status
   (.drvPath | match(".*multiple-outputs-e.drv")) and
   (.outputs | keys == ["a_a", "b"]))
 '
+
+# Make sure that `--stdin` works and does not apply any defaults
+printf "" | nix build --no-link --stdin --json | jq --exit-status '. == []'
+printf "%s\n" "$drv^*" | nix build --no-link --stdin --json | jq --exit-status '.[0]|has("drvPath")'
tests/fetchClosure.sh

@@ -5,6 +5,12 @@ enableFeatures "fetch-closure"
 clearStore
 clearCacheCache
 
+# Old daemons don't properly zero out the self-references when
+# calculating the CA hashes, so this breaks `nix store
+# make-content-addressed` which expects the client and the daemon to
+# compute the same hash
+requireDaemonNewerThan "2.16.0pre20230524"
+
 # Initialize binary cache.
 nonCaPath=$(nix build --json --file ./dependencies.nix --no-link | jq -r .[].outputs.out)
 caPath=$(nix store make-content-addressed --json $nonCaPath | jq -r '.rewrites | map(.) | .[]')
tests/gc.sh

@@ -50,31 +50,3 @@ if test -e $outPath/foobar; then false; fi
 # Check that the store is empty.
 rmdir $NIX_STORE_DIR/.links
 rmdir $NIX_STORE_DIR
-
-## Test `nix-collect-garbage -d`
-testCollectGarbageD () {
-    clearProfiles
-    # Run two `nix-env` commands, should create two generations of
-    # the profile
-    nix-env -f ./user-envs.nix -i foo-1.0
-    nix-env -f ./user-envs.nix -i foo-2.0pre1
-    [[ $(nix-env --list-generations | wc -l) -eq 2 ]]
-
-    # Clear the profile history. There should be only one generation
-    # left
-    nix-collect-garbage -d
-    [[ $(nix-env --list-generations | wc -l) -eq 1 ]]
-}
-# `nix-env` doesn't work with CA derivations, so let's ignore that bit if we're
-# using them
-if [[ -z "${NIX_TESTS_CA_BY_DEFAULT:-}" ]]; then
-    testCollectGarbageD
-
-    # Run the same test, but forcing the profiles at their legacy location under
-    # /nix/var/nix.
-    #
-    # Regression test for #8294
-    rm ~/.nix-profile
-    ln -s $NIX_STATE_DIR/profiles/per-user/me ~/.nix-profile
-    testCollectGarbageD
-fi
tests/lang/eval-fail-fromTOML-timestamps.nix (new file)

@@ -0,0 +1,130 @@
+builtins.fromTOML ''
+  key = "value"
+  bare_key = "value"
+  bare-key = "value"
+  1234 = "value"
+
+  "127.0.0.1" = "value"
+  "character encoding" = "value"
+  "ʎǝʞ" = "value"
+  'key2' = "value"
+  'quoted "value"' = "value"
+
+  name = "Orange"
+
+  physical.color = "orange"
+  physical.shape = "round"
+  site."google.com" = true
+
+  # This is legal according to the spec, but cpptoml doesn't handle it.
+  #a.b.c = 1
+  #a.d = 2
+
+  str = "I'm a string. \"You can quote me\". Name\tJos\u00E9\nLocation\tSF."
+
+  int1 = +99
+  int2 = 42
+  int3 = 0
+  int4 = -17
+  int5 = 1_000
+  int6 = 5_349_221
+  int7 = 1_2_3_4_5
+
+  hex1 = 0xDEADBEEF
+  hex2 = 0xdeadbeef
+  hex3 = 0xdead_beef
+
+  oct1 = 0o01234567
+  oct2 = 0o755
+
+  bin1 = 0b11010110
+
+  flt1 = +1.0
+  flt2 = 3.1415
+  flt3 = -0.01
+  flt4 = 5e+22
+  flt5 = 1e6
+  flt6 = -2E-2
+  flt7 = 6.626e-34
+  flt8 = 9_224_617.445_991_228_313
+
+  bool1 = true
+  bool2 = false
+
+  odt1 = 1979-05-27T07:32:00Z
+  odt2 = 1979-05-27T00:32:00-07:00
+  odt3 = 1979-05-27T00:32:00.999999-07:00
+  odt4 = 1979-05-27 07:32:00Z
+  ldt1 = 1979-05-27T07:32:00
+  ldt2 = 1979-05-27T00:32:00.999999
+  ld1 = 1979-05-27
+  lt1 = 07:32:00
+  lt2 = 00:32:00.999999
+
+  arr1 = [ 1, 2, 3 ]
+  arr2 = [ "red", "yellow", "green" ]
+  arr3 = [ [ 1, 2 ], [3, 4, 5] ]
+  arr4 = [ "all", 'strings', """are the same""", ''''type'''']
+  arr5 = [ [ 1, 2 ], ["a", "b", "c"] ]
+
+  arr7 = [
+    1, 2, 3
+  ]
+
+  arr8 = [
+    1,
+    2, # this is ok
+  ]
+
+  [table-1]
+  key1 = "some string"
+  key2 = 123
+
+
+  [table-2]
+  key1 = "another string"
+  key2 = 456
+
+  [dog."tater.man"]
+  type.name = "pug"
+
+  [a.b.c]
+  [ d.e.f ]
+  [ g . h . i ]
+  [ j . "ʞ" . 'l' ]
+  [x.y.z.w]
+
+  name = { first = "Tom", last = "Preston-Werner" }
+  point = { x = 1, y = 2 }
+  animal = { type.name = "pug" }
+
+  [[products]]
+  name = "Hammer"
+  sku = 738594937
+
+  [[products]]
+
+  [[products]]
+  name = "Nail"
+  sku = 284758393
+  color = "gray"
+
+  [[fruit]]
+  name = "apple"
+
+  [fruit.physical]
+  color = "red"
+  shape = "round"
+
+  [[fruit.variety]]
+  name = "red delicious"
+
+  [[fruit.variety]]
+  name = "granny smith"
+
+  [[fruit]]
+  name = "banana"
+
+  [[fruit.variety]]
+  name = "plantain"
+''
tests/lang/eval-okay-fromTOML-timestamps.exp (new file)

@@ -0,0 +1 @@
+{ "1234" = "value"; "127.0.0.1" = "value"; a = { b = { c = { }; }; }; arr1 = [ 1 2 3 ]; arr2 = [ "red" "yellow" "green" ]; arr3 = [ [ 1 2 ] [ 3 4 5 ] ]; arr4 = [ "all" "strings" "are the same" "type" ]; arr5 = [ [ 1 2 ] [ "a" "b" "c" ] ]; arr7 = [ 1 2 3 ]; arr8 = [ 1 2 ]; bare-key = "value"; bare_key = "value"; bin1 = 214; bool1 = true; bool2 = false; "character encoding" = "value"; d = { e = { f = { }; }; }; dog = { "tater.man" = { type = { name = "pug"; }; }; }; flt1 = 1; flt2 = 3.1415; flt3 = -0.01; flt4 = 5e+22; flt5 = 1e+06; flt6 = -0.02; flt7 = 6.626e-34; flt8 = 9.22462e+06; fruit = [ { name = "apple"; physical = { color = "red"; shape = "round"; }; variety = [ { name = "red delicious"; } { name = "granny smith"; } ]; } { name = "banana"; variety = [ { name = "plantain"; } ]; } ]; g = { h = { i = { }; }; }; hex1 = 3735928559; hex2 = 3735928559; hex3 = 3735928559; int1 = 99; int2 = 42; int3 = 0; int4 = -17; int5 = 1000; int6 = 5349221; int7 = 12345; j = { "ʞ" = { l = { }; }; }; key = "value"; key2 = "value"; ld1 = { _type = "timestamp"; value = "1979-05-27"; }; ldt1 = { _type = "timestamp"; value = "1979-05-27T07:32:00"; }; ldt2 = { _type = "timestamp"; value = "1979-05-27T00:32:00.999999"; }; lt1 = { _type = "timestamp"; value = "07:32:00"; }; lt2 = { _type = "timestamp"; value = "00:32:00.999999"; }; name = "Orange"; oct1 = 342391; oct2 = 493; odt1 = { _type = "timestamp"; value = "1979-05-27T07:32:00Z"; }; odt2 = { _type = "timestamp"; value = "1979-05-27T00:32:00-07:00"; }; odt3 = { _type = "timestamp"; value = "1979-05-27T00:32:00.999999-07:00"; }; odt4 = { _type = "timestamp"; value = "1979-05-27T07:32:00Z"; }; physical = { color = "orange"; shape = "round"; }; products = [ { name = "Hammer"; sku = 738594937; } { } { color = "gray"; name = "Nail"; sku = 284758393; } ]; "quoted \"value\"" = "value"; site = { "google.com" = true; }; str = "I'm a string. \"You can quote me\". Name\tJosé\nLocation\tSF."; table-1 = { key1 = "some string"; key2 = 123; }; table-2 = { key1 = "another string"; key2 = 456; }; x = { y = { z = { w = { animal = { type = { name = "pug"; }; }; name = { first = "Tom"; last = "Preston-Werner"; }; point = { x = 1; y = 2; }; }; }; }; }; "ʎǝʞ" = "value"; }
tests/lang/eval-okay-fromTOML-timestamps.flags (new file)

@@ -0,0 +1 @@
+--extra-experimental-features parse-toml-timestamps
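For a sense of what these fixtures exercise (hypothetical invocation, not part of the test suite): with the `parse-toml-timestamps` feature enabled, a TOML datetime evaluates to a tagged attribute set, as in the `.exp` file above, instead of aborting evaluation:

    $ nix-instantiate --eval --strict \
        --extra-experimental-features parse-toml-timestamps \
        --expr 'builtins.fromTOML "ld1 = 1979-05-27"'
    { ld1 = { _type = "timestamp"; value = "1979-05-27"; }; }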
tests/lang/eval-okay-fromTOML-timestamps.nix (new file, 130 lines; identical to tests/lang/eval-fail-fromTOML-timestamps.nix above)
tests/local.mk

@@ -16,6 +16,7 @@ nix_tests = \
   flakes/flake-in-submodule.sh \
   ca/gc.sh \
   gc.sh \
+  nix-collect-garbage-d.sh \
   remote-store.sh \
   legacy-ssh-store.sh \
   lang.sh \
tests/nix-channel.sh

@@ -8,6 +8,7 @@ rm -f $TEST_HOME/.nix-channels $TEST_HOME/.nix-profile
 nix-channel --add http://foo/bar xyzzy
 nix-channel --list | grepQuiet http://foo/bar
 nix-channel --remove xyzzy
+[[ $(nix-channel --list-generations | wc -l) == 1 ]]
 
 [ -e $TEST_HOME/.nix-channels ]
 [ "$(cat $TEST_HOME/.nix-channels)" = '' ]
@@ -38,6 +39,7 @@ ln -s dependencies.nix $TEST_ROOT/nixexprs/default.nix
 # Test the update action.
 nix-channel --add file://$TEST_ROOT/foo
 nix-channel --update
+[[ $(nix-channel --list-generations | wc -l) == 2 ]]
 
 # Do a query.
 nix-env -qa \* --meta --xml --out-path > $TEST_ROOT/meta.xml
tests/nix-collect-garbage-d.sh (new file)

@@ -0,0 +1,40 @@
+source common.sh
+
+clearStore
+
+## Test `nix-collect-garbage -d`
+
+# TODO make `nix-env` doesn't work with CA derivations, and make
+# `ca/nix-collect-garbage-d.sh` wrapper.
+
+testCollectGarbageD () {
+    clearProfiles
+    # Run two `nix-env` commands, should create two generations of
+    # the profile
+    nix-env -f ./user-envs.nix -i foo-1.0 "$@"
+    nix-env -f ./user-envs.nix -i foo-2.0pre1 "$@"
+    [[ $(nix-env --list-generations "$@" | wc -l) -eq 2 ]]
+
+    # Clear the profile history. There should be only one generation
+    # left
+    nix-collect-garbage -d
+    [[ $(nix-env --list-generations "$@" | wc -l) -eq 1 ]]
+}
+
+testCollectGarbageD
+
+# Run the same test, but forcing the profiles an arbitrary location.
+rm ~/.nix-profile
+ln -s $TEST_ROOT/blah ~/.nix-profile
+testCollectGarbageD
+
+# Run the same test, but forcing the profiles at their legacy location under
+# /nix/var/nix.
+#
+# Note that we *don't* use the default profile; `nix-collect-garbage` will
+# need to check the legacy conditional unconditionally not just follow
+# `~/.nix-profile` to pass this test.
+#
+# Regression test for #8294
+rm ~/.nix-profile
+testCollectGarbageD --profile "$NIX_STATE_DIR/profiles/per-user/me"
tests/plugins/plugintest.cc

@@ -21,4 +21,8 @@ static void prim_anotherNull (EvalState & state, const PosIdx pos, Value ** args
     v.mkBool(false);
 }
 
-static RegisterPrimOp rp("anotherNull", 0, prim_anotherNull);
+static RegisterPrimOp rp({
+    .name = "anotherNull",
+    .arity = 0,
+    .fun = prim_anotherNull,
+});
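One benefit of the designated-initializer form is that optional fields can be set without adding positional overloads. A hypothetical variant (not in this commit; `rpDocumented` and its docstring are made up for illustration) could attach documentation alongside the registration:

    static RegisterPrimOp rpDocumented({
        .name = "anotherNull",
        .arity = 0,
        // A docstring is optional in the struct form; shown only as a sketch.
        .doc = "Always returns `false`; exists only to exercise plugin loading.",
        .fun = prim_anotherNull,
    });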
tests/recursive.sh

@@ -1,8 +1,5 @@
 source common.sh
 
-# FIXME
-if [[ $(uname) != Linux ]]; then skipTest "Not running Linux"; fi
-
 enableFeatures 'recursive-nix'
 restartDaemon