Merge remote-tracking branch 'upstream/master' into overlayfs-store
Commit 8434f23c97: 29 changed files with 479 additions and 338 deletions
@ -16,7 +16,7 @@ Full reference documentation can be found in the [Nix manual](https://nixos.org/

## Building And Developing

See our [Hacking guide](https://nixos.org/manual/nix/unstable/contributing/hacking.html) in our manual for instructions on how to
to set up a development environment and build Nix from source.
set up a development environment and build Nix from source.

## Contributing
@ -16,15 +16,8 @@
- [Environment Variables](installation/env-variables.md)
- [Upgrading Nix](installation/upgrading.md)
- [Uninstalling Nix](installation/uninstall.md)
- [Package Management](package-management/package-management.md)
- [Profiles](package-management/profiles.md)
- [Garbage Collection](package-management/garbage-collection.md)
- [Garbage Collector Roots](package-management/garbage-collector-roots.md)
- [Sharing Packages Between Machines](package-management/sharing-packages.md)
- [Serving a Nix store via HTTP](package-management/binary-cache-substituter.md)
- [Copying Closures via SSH](package-management/copy-closure.md)
- [Serving a Nix store via SSH](package-management/ssh-substituter.md)
- [Serving a Nix store via S3](package-management/s3-substituter.md)
- [Nix Store](store/index.md)
- [File System Object](store/file-system-object.md)
- [Nix Language](language/index.md)
- [Data Types](language/values.md)
- [Language Constructs](language/constructs.md)
@ -36,7 +29,16 @@
- [Import From Derivation](language/import-from-derivation.md)
- [Built-in Constants](language/builtin-constants.md)
- [Built-in Functions](language/builtins.md)
- [Package Management](package-management/package-management.md)
- [Profiles](package-management/profiles.md)
- [Garbage Collection](package-management/garbage-collection.md)
- [Garbage Collector Roots](package-management/garbage-collector-roots.md)
- [Advanced Topics](advanced-topics/advanced-topics.md)
- [Sharing Packages Between Machines](package-management/sharing-packages.md)
- [Serving a Nix store via HTTP](package-management/binary-cache-substituter.md)
- [Copying Closures via SSH](package-management/copy-closure.md)
- [Serving a Nix store via SSH](package-management/ssh-substituter.md)
- [Serving a Nix store via S3](package-management/s3-substituter.md)
- [Remote Builds](advanced-topics/distributed-builds.md)
- [Tuning Cores and Jobs](advanced-topics/cores-vs-jobs.md)
- [Verifying Build Reproducibility](advanced-topics/diff-hook.md)
@ -98,7 +100,6 @@
- [Channels](command-ref/files/channels.md)
- [Default Nix expression](command-ref/files/default-nix-expression.md)
- [Architecture and Design](architecture/architecture.md)
- [File System Object](architecture/file-system-object.md)
- [Protocols](protocols/protocols.md)
- [Serving Tarball Flakes](protocols/tarball-fetcher.md)
- [Derivation "ATerm" file format](protocols/derivation-aterm.md)
@ -17,9 +17,8 @@ the build loop.

# Prerequisites

This tutorial assumes you have [configured an S3-compatible binary
cache](../package-management/s3-substituter.md), and that the `root`
user's default AWS profile can upload to the bucket.
This tutorial assumes you have configured an [S3-compatible binary cache](@docroot@/command-ref/new-cli/nix3-help-stores.md#s3-binary-cache-store) as a [substituter](../command-ref/conf-file.md#conf-substituters),
and that the `root` user's default AWS profile can upload to the bucket.

# Set up a Signing Key
@ -59,6 +59,7 @@ The [Nix language](../language/index.md) evaluator transforms Nix expressions in
The command line interface and Nix expressions are what users deal with most.

> **Note**
>
> The Nix language itself does not have a notion of *packages* or *configurations*.
> As far as we are concerned here, the inputs and results of a build plan are just data.
@ -58,22 +58,16 @@

- [store]{#gloss-store}

The location in the file system where store objects live. Typically
`/nix/store`.
A collection of store objects, with operations to manipulate that collection.
See [Nix Store] for details.

From the perspective of the location where Nix is
invoked, the Nix store can be referred to
as a "_local_" or a "_remote_" one:
There are many types of stores.
See [`nix help-stores`](@docroot@/command-ref/new-cli/nix3-help-stores.md) for a complete list.

+ A [local store]{#gloss-local-store} exists on the filesystem of
the machine where Nix is invoked. You can use other
local stores by passing the `--store` flag to the
`nix` command. Local stores can be used for building derivations.

+ A *remote store* exists anywhere other than the
local filesystem. One example is the `/nix/store`
directory on another machine, accessed via `ssh` or
served by the `nix-serve` Perl script.
From the perspective of the location where Nix is invoked, the Nix store can be referred to as _local_ or _remote_.
Only a [local store]{#gloss-local-store} exposes a location in the file system of the machine where Nix is invoked that allows access to store objects, typically `/nix/store`.
Local stores can be used for building [derivations](#derivation).
See [Local Store](@docroot@/command-ref/new-cli/nix3-help-stores.md#local-store) for details.

[store]: #gloss-store
[local store]: #gloss-local-store
@ -103,15 +97,19 @@

The Nix data model for representing simplified file system data.

See [File System Object](@docroot@/architecture/file-system-object.md) for details.
See [File System Object](@docroot@/store/file-system-object.md) for details.

[file system object]: #gloss-file-system-object

- [store object]{#gloss-store-object}

A store object consists of a [file system object], [reference]s to other store objects, and other metadata.
Part of the contents of a [store].

A store object consists of a [file system object], [references][reference] to other store objects, and other metadata.
It can be referred to by a [store path].

See [Store Object](@docroot@/store/index.md#store-object) for details.

[store object]: #gloss-store-object

- [IFD]{#gloss-ifd}
@ -1,115 +0,0 @@
# Serving a Nix store via S3

Nix has [built-in support](@docroot@/command-ref/new-cli/nix3-help-stores.md#s3-binary-cache-store)
for storing and fetching store paths from
Amazon S3 and S3-compatible services. This uses the same *binary*
cache mechanism that Nix usually uses to fetch prebuilt binaries from
[cache.nixos.org](https://cache.nixos.org/).

In this example we will use the bucket named `example-nix-cache`.

## Anonymous Reads to your S3-compatible binary cache

If your binary cache is publicly accessible and does not require
authentication, the simplest and easiest way to use Nix with your S3
compatible binary cache is to use the HTTP URL for that cache.

For AWS S3 the binary cache URL for the example bucket will be exactly
<https://example-nix-cache.s3.amazonaws.com> or
<s3://example-nix-cache>. For S3 compatible binary caches, consult that
cache's documentation.

Your bucket will need the following bucket policy:

```json
{
  "Id": "DirectReads",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowDirectReads",
      "Action": [
        "s3:GetObject",
        "s3:GetBucketLocation"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::example-nix-cache",
        "arn:aws:s3:::example-nix-cache/*"
      ],
      "Principal": "*"
    }
  ]
}
```

## Authenticated Reads to your S3 binary cache

For AWS S3 the binary cache URL for the example bucket will be exactly
<s3://example-nix-cache>.

Nix will use the [default credential provider
chain](https://docs.aws.amazon.com/sdk-for-cpp/v1/developer-guide/credentials.html)
for authenticating requests to Amazon S3.

Nix supports authenticated reads from Amazon S3 and S3 compatible binary
caches.

Your bucket will need a bucket policy allowing the desired users to
perform the `s3:GetObject` and `s3:GetBucketLocation` action on all
objects in the bucket. The [anonymous policy given
above](#anonymous-reads-to-your-s3-compatible-binary-cache) can be
updated to have a restricted `Principal` to support this.

## Authenticated Writes to your S3-compatible binary cache

Nix fully supports writing to Amazon S3 and S3 compatible
buckets. The binary cache URL for our example bucket will be
<s3://example-nix-cache>.

Nix will use the [default credential provider
chain](https://docs.aws.amazon.com/sdk-for-cpp/v1/developer-guide/credentials.html)
for authenticating requests to Amazon S3.

Your account will need the following IAM policy to upload to the cache:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "UploadToCache",
      "Effect": "Allow",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads",
        "s3:ListMultipartUploadParts",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::example-nix-cache",
        "arn:aws:s3:::example-nix-cache/*"
      ]
    }
  ]
}
```

## Examples

To upload with a specific credential profile for Amazon S3:

```console
$ nix copy nixpkgs.hello \
  --to 's3://example-nix-cache?profile=cache-upload&region=eu-west-2'
```

To upload to an S3-compatible binary cache:

```console
$ nix copy nixpkgs.hello --to \
  's3://example-nix-cache?profile=cache-upload&scheme=https&endpoint=minio.example.com'
```
@ -44,7 +44,7 @@
(The store always had to check whether it trusts the client, but now the client is informed of the store's decision.)
This is useful for scripting interactions with (non-legacy-ssh) remote Nix stores.

`nix store info` and `nix doctor` now display this information.
`nix store ping` and `nix doctor` now display this information.

* The new command `nix derivation add` allows adding derivations to the store without involving the Nix language.
It exists to round out our collection of basic utility/plumbing commands, and allow for a low barrier-to-entry way of experimenting with alternative front-ends to the Nix Store.
@ -24,7 +24,7 @@
[repository](https://github.com/NixOS/bundlers) has various bundlers
implemented.

* `nix store info` now reports the version of the remote Nix daemon.
* `nix store ping` now reports the version of the remote Nix daemon.

* `nix flake {init,new}` now display information about which files have been
created.
doc/manual/src/store/index.md (new file, +4)
@ -0,0 +1,4 @@
# Nix Store

The *Nix store* is an abstraction used by Nix to store immutable filesystem artifacts (such as software packages) that can have dependencies (*references*) between them.
There are multiple implementations of the Nix store, such as the actual filesystem (`/nix/store`) and binary caches.
@ -3720,10 +3720,11 @@ static RegisterPrimOp primop_substring({
.doc = R"(
Return the substring of *s* from character position *start*
(zero-based) up to but not including *start + len*. If *start* is
greater than the length of the string, an empty string is returned,
and if *start + len* lies beyond the end of the string, only the
substring up to the end of the string is returned. *start* must be
non-negative. For example,
greater than the length of the string, an empty string is returned.
If *start + len* lies beyond the end of the string or *len* is `-1`,
only the substring up to the end of the string is returned.
*start* must be non-negative.
For example,

```nix
builtins.substring 0 3 "nixos"
@ -30,20 +30,27 @@ static RegisterPrimOp primop_hasContext({
.name = "__hasContext",
.args = {"s"},
.doc = R"(
Return `true` if string *s* has a non-empty context. The
context can be obtained with
Return `true` if string *s* has a non-empty context.
The context can be obtained with
[`getContext`](#builtins-getContext).

> **Example**
>
> Many operations require a string context to be empty because they are intended only to work with "regular" strings, and also to help users avoid unintentionally losing track of string context elements.
> `builtins.hasContext` can help create better domain-specific errors in those cases.
>
> ```nix
> name: meta:
>
> if builtins.hasContext name
> then throw "package name cannot contain string context"
> else { ${name} = meta; }
> ```
)",
.fun = prim_hasContext
});

/* Sometimes we want to pass a derivation path (i.e. pkg.drvPath) to a
builder without causing the derivation to be built (for instance,
in the derivation that builds NARs in nix-push, when doing
source-only deployment). This primop marks the string context so
that builtins.derivation adds the path to drv.inputSrcs rather than
drv.inputDrvs. */
static void prim_unsafeDiscardOutputDependency(EvalState & state, const PosIdx pos, Value * * args, Value & v)
{
NixStringContext context;
@ -66,11 +73,83 @@ static void prim_unsafeDiscardOutputDependency(EvalState & state, const PosIdx p

static RegisterPrimOp primop_unsafeDiscardOutputDependency({
.name = "__unsafeDiscardOutputDependency",
.arity = 1,
.args = {"s"},
.doc = R"(
Create a copy of the given string where every "derivation deep" string context element is turned into a constant string context element.

This is the opposite of [`builtins.addDrvOutputDependencies`](#builtins-addDrvOutputDependencies).

This is unsafe because it allows us to "forget" store objects we would have otherwise referred to with the string context,
whereas Nix normally tracks all dependencies consistently.
Safe operations "grow" but never "shrink" string contexts.
[`builtins.addDrvOutputDependencies`] in contrast is safe because a "derivation deep" string context element always refers to the underlying derivation (among many more things).
Replacing a constant string context element with a "derivation deep" element is a safe operation that just enlarges the string context without forgetting anything.

[`builtins.addDrvOutputDependencies`]: #builtins-addDrvOutputDependencies
)",
.fun = prim_unsafeDiscardOutputDependency
});
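
As a rough usage sketch (assuming some derivation `drv`; this mirrors the new language tests further down in this diff), discarding the output dependency keeps a reference to the `.drv` file itself but drops the implied dependency on its outputs:

```nix
# The "derivation deep" element of drv.drvPath's context becomes a
# constant element; the shape of the result is illustrative only.
builtins.getContext (builtins.unsafeDiscardOutputDependency drv.drvPath)
# => { "/nix/store/…-example.drv" = { path = true; }; }
```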

static void prim_addDrvOutputDependencies(EvalState & state, const PosIdx pos, Value * * args, Value & v)
{
    NixStringContext context;
    auto s = state.coerceToString(pos, *args[0], context, "while evaluating the argument passed to builtins.addDrvOutputDependencies");

    auto contextSize = context.size();
    if (contextSize != 1) {
        throw EvalError({
            .msg = hintfmt("context of string '%s' must have exactly one element, but has %d", *s, contextSize),
            .errPos = state.positions[pos]
        });
    }
    NixStringContext context2 {
        (NixStringContextElem { std::visit(overloaded {
            [&](const NixStringContextElem::Opaque & c) -> NixStringContextElem::DrvDeep {
                if (!c.path.isDerivation()) {
                    throw EvalError({
                        .msg = hintfmt("path '%s' is not a derivation",
                            state.store->printStorePath(c.path)),
                        .errPos = state.positions[pos],
                    });
                }
                return NixStringContextElem::DrvDeep {
                    .drvPath = c.path,
                };
            },
            [&](const NixStringContextElem::Built & c) -> NixStringContextElem::DrvDeep {
                throw EvalError({
                    .msg = hintfmt("`addDrvOutputDependencies` can only act on derivations, not on a derivation output such as '%1%'", c.output),
                    .errPos = state.positions[pos],
                });
            },
            [&](const NixStringContextElem::DrvDeep & c) -> NixStringContextElem::DrvDeep {
                /* Reuse original item because we want this to be idempotent. */
                return std::move(c);
            },
        }, context.begin()->raw) }),
    };

    v.mkString(*s, context2);
}

static RegisterPrimOp primop_addDrvOutputDependencies({
.name = "__addDrvOutputDependencies",
.args = {"s"},
.doc = R"(
Create a copy of the given string where a single constant string context element is turned into a "derivation deep" string context element.

The store path that is the constant string context element should point to a valid derivation, and end in `.drv`.

The original string context must not be empty or have multiple elements, and it must not contain any element other than a constant or "derivation deep" element.
The latter is supported so this function is idempotent.

This is the opposite of [`builtins.unsafeDiscardOutputDependency`](#builtins-unsafeDiscardOutputDependency).
)",
.fun = prim_addDrvOutputDependencies
});
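
And a small round-trip sketch (again assuming a derivation `drv`; compare the eta and idempotence rules in the updated `eval-okay-context-introspection` test below):

```nix
# Re-adding the output dependency after discarding it restores the
# original "derivation deep" context, so this evaluates to true.
drv.drvPath == builtins.addDrvOutputDependencies
  (builtins.unsafeDiscardOutputDependency drv.drvPath)
```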

/* Extract the context of a string as a structured Nix value.

The context is represented as an attribute set whose keys are the
@ -386,27 +386,27 @@ void LocalDerivationGoal::cleanupPostOutputsRegisteredModeNonCheck()
cleanupPostOutputsRegisteredModeCheck();
}

#if __linux__
static void linkOrCopy(const Path & from, const Path & to)
{
if (link(from.c_str(), to.c_str()) == -1) {
/* Hard-linking fails if we exceed the maximum link count on a
file (e.g. 32000 of ext3), which is quite possible after a
'nix-store --optimise'. FIXME: actually, why don't we just
bind-mount in this case?

It can also fail with EPERM in BeegFS v7 and earlier versions
or fail with EXDEV in OpenAFS
which don't allow hard-links to other directories */
if (errno != EMLINK && errno != EPERM && errno != EXDEV)
throw SysError("linking '%s' to '%s'", to, from);
copyPath(from, to);
static void doBind(const Path & source, const Path & target, bool optional = false) {
debug("bind mounting '%1%' to '%2%'", source, target);
struct stat st;
if (stat(source.c_str(), &st) == -1) {
if (optional && errno == ENOENT)
return;
else
throw SysError("getting attributes of path '%1%'", source);
}
}
if (S_ISDIR(st.st_mode))
createDirs(target);
else {
createDirs(dirOf(target));
writeFile(target, "");
}
if (mount(source.c_str(), target.c_str(), "", MS_BIND | MS_REC, 0) == -1)
throw SysError("bind mount from '%1%' to '%2%' failed", source, target);
};
#endif

void LocalDerivationGoal::startBuilder()
{
if ((buildUser && buildUser->getUIDCount() != 1)
@ -581,7 +581,7 @@ void LocalDerivationGoal::startBuilder()

/* Allow a user-configurable set of directories from the
host file system. */
dirsInChroot.clear();
pathsInChroot.clear();

for (auto i : settings.sandboxPaths.get()) {
if (i.empty()) continue;
@ -592,19 +592,19 @@ void LocalDerivationGoal::startBuilder()
}
size_t p = i.find('=');
if (p == std::string::npos)
dirsInChroot[i] = {i, optional};
pathsInChroot[i] = {i, optional};
else
dirsInChroot[i.substr(0, p)] = {i.substr(p + 1), optional};
pathsInChroot[i.substr(0, p)] = {i.substr(p + 1), optional};
}
if (hasPrefix(worker.store.storeDir, tmpDirInSandbox))
{
throw Error("`sandbox-build-dir` must not contain the storeDir");
}
dirsInChroot[tmpDirInSandbox] = tmpDir;
pathsInChroot[tmpDirInSandbox] = tmpDir;

/* Add the closure of store paths to the chroot. */
StorePathSet closure;
for (auto & i : dirsInChroot)
for (auto & i : pathsInChroot)
try {
if (worker.store.isInStore(i.second.source))
worker.store.computeFSClosure(worker.store.toStorePath(i.second.source).first, closure);
@ -615,7 +615,7 @@ void LocalDerivationGoal::startBuilder()
}
for (auto & i : closure) {
auto p = worker.store.printStorePath(i);
dirsInChroot.insert_or_assign(p, p);
pathsInChroot.insert_or_assign(p, p);
}

PathSet allowedPaths = settings.allowedImpureHostPrefixes;
@ -643,7 +643,7 @@ void LocalDerivationGoal::startBuilder()

/* Allow files in __impureHostDeps to be missing; e.g.
macOS 11+ has no /usr/lib/libSystem*.dylib */
dirsInChroot[i] = {i, true};
pathsInChroot[i] = {i, true};
}

#if __linux__
@ -711,15 +711,12 @@ void LocalDerivationGoal::startBuilder()
for (auto & i : inputPaths) {
auto p = worker.store.printStorePath(i);
Path r = worker.store.toRealPath(p);
if (S_ISDIR(lstat(r).st_mode))
dirsInChroot.insert_or_assign(p, r);
else
linkOrCopy(r, chrootRootDir + p);
pathsInChroot.insert_or_assign(p, r);
}

/* If we're repairing, checking or rebuilding part of a
multiple-outputs derivation, it's possible that we're
rebuilding a path that is in settings.dirsInChroot
rebuilding a path that is in settings.sandbox-paths
(typically the dependencies of /bin/sh). Throw them
out. */
for (auto & i : drv->outputsAndOptPaths(worker.store)) {
@ -729,7 +726,7 @@ void LocalDerivationGoal::startBuilder()
is already in the sandbox, so we don't need to worry about
removing it. */
if (i.second.second)
dirsInChroot.erase(worker.store.printStorePath(*i.second.second));
pathsInChroot.erase(worker.store.printStorePath(*i.second.second));
}

if (cgroup) {
@ -787,9 +784,9 @@ void LocalDerivationGoal::startBuilder()
} else {
auto p = line.find('=');
if (p == std::string::npos)
dirsInChroot[line] = line;
pathsInChroot[line] = line;
else
dirsInChroot[line.substr(0, p)] = line.substr(p + 1);
pathsInChroot[line.substr(0, p)] = line.substr(p + 1);
}
}
}
@ -1565,41 +1562,32 @@ void LocalDerivationGoal::addDependency(const StorePath & path)

Path source = worker.store.Store::toRealPath(path);
Path target = chrootRootDir + worker.store.printStorePath(path);
debug("bind-mounting %s -> %s", target, source);

if (pathExists(target))
// There is a similar debug message in doBind, so only run it in this block to not have double messages.
debug("bind-mounting %s -> %s", target, source);
throw Error("store path '%s' already exists in the sandbox", worker.store.printStorePath(path));

auto st = lstat(source);
/* Bind-mount the path into the sandbox. This requires
entering its mount namespace, which is not possible
in multithreaded programs. So we do this in a
child process.*/
Pid child(startProcess([&]() {

if (S_ISDIR(st.st_mode)) {
if (usingUserNamespace && (setns(sandboxUserNamespace.get(), 0) == -1))
throw SysError("entering sandbox user namespace");

/* Bind-mount the path into the sandbox. This requires
entering its mount namespace, which is not possible
in multithreaded programs. So we do this in a
child process.*/
Pid child(startProcess([&]() {
if (setns(sandboxMountNamespace.get(), 0) == -1)
throw SysError("entering sandbox mount namespace");

if (usingUserNamespace && (setns(sandboxUserNamespace.get(), 0) == -1))
throw SysError("entering sandbox user namespace");
doBind(source, target);

if (setns(sandboxMountNamespace.get(), 0) == -1)
throw SysError("entering sandbox mount namespace");
_exit(0);
}));

createDirs(target);

if (mount(source.c_str(), target.c_str(), "", MS_BIND, 0) == -1)
throw SysError("bind mount from '%s' to '%s' failed", source, target);

_exit(0);
}));

int status = child.wait();
if (status != 0)
throw Error("could not add path '%s' to sandbox", worker.store.printStorePath(path));

} else
linkOrCopy(source, target);
int status = child.wait();
if (status != 0)
throw Error("could not add path '%s' to sandbox", worker.store.printStorePath(path));

#else
throw Error("don't know how to make path '%s' (produced by a recursive Nix call) appear in the sandbox",
@ -1789,7 +1777,7 @@ void LocalDerivationGoal::runChild()
/* Set up a nearly empty /dev, unless the user asked to
bind-mount the host /dev. */
Strings ss;
if (dirsInChroot.find("/dev") == dirsInChroot.end()) {
if (pathsInChroot.find("/dev") == pathsInChroot.end()) {
createDirs(chrootRootDir + "/dev/shm");
createDirs(chrootRootDir + "/dev/pts");
ss.push_back("/dev/full");
@ -1824,34 +1812,15 @@ void LocalDerivationGoal::runChild()
ss.push_back(path);

if (settings.caFile != "")
dirsInChroot.try_emplace("/etc/ssl/certs/ca-certificates.crt", settings.caFile, true);
pathsInChroot.try_emplace("/etc/ssl/certs/ca-certificates.crt", settings.caFile, true);
}

for (auto & i : ss) dirsInChroot.emplace(i, i);
for (auto & i : ss) pathsInChroot.emplace(i, i);

/* Bind-mount all the directories from the "host"
filesystem that we want in the chroot
environment. */
auto doBind = [&](const Path & source, const Path & target, bool optional = false) {
debug("bind mounting '%1%' to '%2%'", source, target);
struct stat st;
if (stat(source.c_str(), &st) == -1) {
if (optional && errno == ENOENT)
return;
else
throw SysError("getting attributes of path '%1%'", source);
}
if (S_ISDIR(st.st_mode))
createDirs(target);
else {
createDirs(dirOf(target));
writeFile(target, "");
}
if (mount(source.c_str(), target.c_str(), "", MS_BIND | MS_REC, 0) == -1)
throw SysError("bind mount from '%1%' to '%2%' failed", source, target);
};

for (auto & i : dirsInChroot) {
for (auto & i : pathsInChroot) {
if (i.second.source == "/proc") continue; // backwards compatibility

#if HAVE_EMBEDDED_SANDBOX_SHELL
@ -1892,7 +1861,7 @@ void LocalDerivationGoal::runChild()
if /dev/ptx/ptmx exists). */
if (pathExists("/dev/pts/ptmx") &&
!pathExists(chrootRootDir + "/dev/ptmx")
&& !dirsInChroot.count("/dev/pts"))
&& !pathsInChroot.count("/dev/pts"))
{
if (mount("none", (chrootRootDir + "/dev/pts").c_str(), "devpts", 0, "newinstance,mode=0620") == 0)
{
@ -2027,7 +1996,7 @@ void LocalDerivationGoal::runChild()
/* We build the ancestry before adding all inputPaths to the store because we know they'll
all have the same parents (the store), and there might be lots of inputs. This isn't
particularly efficient... I doubt it'll be a bottleneck in practice */
for (auto & i : dirsInChroot) {
for (auto & i : pathsInChroot) {
Path cur = i.first;
while (cur.compare("/") != 0) {
cur = dirOf(cur);
@ -2035,7 +2004,7 @@ void LocalDerivationGoal::runChild()
}
}

/* And we want the store in there regardless of how empty dirsInChroot. We include the innermost
/* And we want the store in there regardless of how empty pathsInChroot. We include the innermost
path component this time, since it's typically /nix/store and we care about that. */
Path cur = worker.store.storeDir;
while (cur.compare("/") != 0) {
@ -2046,7 +2015,7 @@ void LocalDerivationGoal::runChild()
/* Add all our input paths to the chroot */
for (auto & i : inputPaths) {
auto p = worker.store.printStorePath(i);
dirsInChroot[p] = p;
pathsInChroot[p] = p;
}

/* Violations will go to the syslog if you set this. Unfortunately the destination does not appear to be configurable */
@ -2077,7 +2046,7 @@ void LocalDerivationGoal::runChild()
without file-write* allowed, access() incorrectly returns EPERM
*/
sandboxProfile += "(allow file-read* file-write* process-exec\n";
for (auto & i : dirsInChroot) {
for (auto & i : pathsInChroot) {
if (i.first != i.second.source)
throw Error(
"can't map '%1%' to '%2%': mismatched impure paths not supported on Darwin",
@ -86,8 +86,8 @@ struct LocalDerivationGoal : public DerivationGoal
: source(source), optional(optional)
{ }
};
typedef map<Path, ChrootPath> DirsInChroot; // maps target path to source path
DirsInChroot dirsInChroot;
typedef map<Path, ChrootPath> PathsInChroot; // maps target path to source path
PathsInChroot pathsInChroot;

typedef map<std::string, std::string> Environment;
Environment env;
@ -174,15 +174,19 @@ void builtinBuildenv(const BasicDerivation & drv)
/* Convert the stuff we get from the environment back into a
* coherent data type. */
Packages pkgs;
auto derivations = tokenizeString<Strings>(getAttr("derivations"));
while (!derivations.empty()) {
/* !!! We're trusting the caller to structure derivations env var correctly */
auto active = derivations.front(); derivations.pop_front();
auto priority = stoi(derivations.front()); derivations.pop_front();
auto outputs = stoi(derivations.front()); derivations.pop_front();
for (auto n = 0; n < outputs; n++) {
auto path = derivations.front(); derivations.pop_front();
pkgs.emplace_back(path, active != "false", priority);
{
auto derivations = tokenizeString<Strings>(getAttr("derivations"));

auto itemIt = derivations.begin();
while (itemIt != derivations.end()) {
/* !!! We're trusting the caller to structure derivations env var correctly */
const bool active = "false" != *itemIt++;
const int priority = stoi(*itemIt++);
const size_t outputs = stoul(*itemIt++);

for (size_t n {0}; n < outputs; n++) {
pkgs.emplace_back(std::move(*itemIt++), active, priority);
}
}
}
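
For reference, a hypothetical value of the `derivations` environment variable that this loop would parse (paths made up; each package contributes an active flag, a priority, an output count, and then that many store paths):

```nix
# Hypothetical input for the parser above, written as a Nix string:
# active  priority  #outputs  paths...
"true 5 2 /nix/store/aaa-hello /nix/store/aaa-hello-man false 10 1 /nix/store/bbb-cowsay"
```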
@ -29,12 +29,13 @@ std::string ContentAddressMethod::renderPrefix() const

ContentAddressMethod ContentAddressMethod::parsePrefix(std::string_view & m)
{
ContentAddressMethod method = FileIngestionMethod::Flat;
if (splitPrefix(m, "r:"))
method = FileIngestionMethod::Recursive;
else if (splitPrefix(m, "text:"))
method = TextIngestionMethod {};
return method;
if (splitPrefix(m, "r:")) {
return FileIngestionMethod::Recursive;
}
else if (splitPrefix(m, "text:")) {
return TextIngestionMethod {};
}
return FileIngestionMethod::Flat;
}

std::string ContentAddressMethod::render(HashType ht) const
@ -716,7 +716,7 @@ public:

- `apple-virt`

Included on darwin if virtualization is available.
Included on Darwin if virtualization is available.

- `kvm`
@ -7,6 +7,31 @@

namespace nix {

#if __linux__

static std::vector<gid_t> get_group_list(const char *username, gid_t group_id)
{
    std::vector<gid_t> gids;
    gids.resize(32); // Initial guess

    auto getgroupl_failed {[&] {
        int ngroups = gids.size();
        int err = getgrouplist(username, group_id, gids.data(), &ngroups);
        gids.resize(ngroups);
        return err == -1;
    }};

    // The first error means that the vector was not big enough.
    // If it happens again, there is some different problem.
    if (getgroupl_failed() && getgroupl_failed()) {
        throw SysError("failed to get list of supplementary groups for '%s'", username);
    }

    return gids;
}
#endif

struct SimpleUserLock : UserLock
{
AutoCloseFD fdUserLock;
@ -67,37 +92,14 @@ struct SimpleUserLock : UserLock
throw Error("the Nix user should not be a member of '%s'", settings.buildUsersGroup);

#if __linux__
/* Get the list of supplementary groups of this build
user. This is usually either empty or contains a
group such as "kvm". */
int ngroups = 32; // arbitrary initial guess
std::vector<gid_t> gids;
gids.resize(ngroups);

int err = getgrouplist(
pw->pw_name, pw->pw_gid,
gids.data(),
&ngroups);

/* Our initial size of 32 wasn't sufficient, the
correct size has been stored in ngroups, so we try
again. */
if (err == -1) {
gids.resize(ngroups);
err = getgrouplist(
pw->pw_name, pw->pw_gid,
gids.data(),
&ngroups);
}

// If it failed once more, then something must be broken.
if (err == -1)
throw Error("failed to get list of supplementary groups for '%s'", pw->pw_name);
/* Get the list of supplementary groups of this user. This is
* usually either empty or contains a group such as "kvm". */

// Finally, trim back the GID list to its real size.
for (auto i = 0; i < ngroups; i++)
if (gids[i] != lock->gid)
lock->supplementaryGIDs.push_back(gids[i]);
for (auto gid : get_group_list(pw->pw_name, pw->pw_gid)) {
if (gid != lock->gid)
lock->supplementaryGIDs.push_back(gid);
}
#endif

return lock;
@ -2,7 +2,103 @@ R"(
|
|||
|
||||
**Store URL format**: `s3://`*bucket-name*
|
||||
|
||||
This store allows reading and writing a binary cache stored in an AWS
|
||||
S3 bucket.
|
||||
This store allows reading and writing a binary cache stored in an AWS S3 (or S3-compatible service) bucket.
|
||||
This store shares many idioms with the [HTTP Binary Cache Store](#http-binary-cache-store).
|
||||
|
||||
For AWS S3, the binary cache URL for a bucket named `example-nix-cache` will be exactly <s3://example-nix-cache>.
|
||||
For S3 compatible binary caches, consult that cache's documentation.
|
||||
|
||||
### Anonymous reads to your S3-compatible binary cache
|
||||
|
||||
> If your binary cache is publicly accessible and does not require authentication,
|
||||
> it is simplest to use the [HTTP Binary Cache Store] rather than S3 Binary Cache Store with
|
||||
> <https://example-nix-cache.s3.amazonaws.com> instead of <s3://example-nix-cache>.
|
||||
|
||||
Your bucket will need a
|
||||
[bucket policy](https://docs.aws.amazon.com/AmazonS3/v1/userguide/bucket-policies.html)
|
||||
like the following to be accessible:
|
||||
|
||||
```json
|
||||
{
|
||||
"Id": "DirectReads",
|
||||
"Version": "2012-10-17",
|
||||
"Statement": [
|
||||
{
|
||||
"Sid": "AllowDirectReads",
|
||||
"Action": [
|
||||
"s3:GetObject",
|
||||
"s3:GetBucketLocation"
|
||||
],
|
||||
"Effect": "Allow",
|
||||
"Resource": [
|
||||
"arn:aws:s3:::example-nix-cache",
|
||||
"arn:aws:s3:::example-nix-cache/*"
|
||||
],
|
||||
"Principal": "*"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
### Authentication
|
||||
|
||||
Nix will use the
|
||||
[default credential provider chain](https://docs.aws.amazon.com/sdk-for-cpp/v1/developer-guide/credentials.html)
|
||||
for authenticating requests to Amazon S3.
|
||||
|
||||
Note that this means Nix will read environment variables and files with different idioms than with Nix's own settings, as implemented by the AWS SDK.
|
||||
Consult the documentation linked above for further details.
|
||||
|
||||
### Authenticated reads to your S3 binary cache
|
||||
|
||||
Your bucket will need a bucket policy allowing the desired users to perform the `s3:GetObject` and `s3:GetBucketLocation` action on all objects in the bucket.
|
||||
The [anonymous policy given above](#anonymous-reads-to-your-s3-compatible-binary-cache) can be updated to have a restricted `Principal` to support this.
|
||||
|
||||
### Authenticated writes to your S3-compatible binary cache
|
||||
|
||||
Your account will need an IAM policy to support uploading to the bucket:
|
||||
|
||||
```json
|
||||
{
|
||||
"Version": "2012-10-17",
|
||||
"Statement": [
|
||||
{
|
||||
"Sid": "UploadToCache",
|
||||
"Effect": "Allow",
|
||||
"Action": [
|
||||
"s3:AbortMultipartUpload",
|
||||
"s3:GetBucketLocation",
|
||||
"s3:GetObject",
|
||||
"s3:ListBucket",
|
||||
"s3:ListBucketMultipartUploads",
|
||||
"s3:ListMultipartUploadParts",
|
||||
"s3:PutObject"
|
||||
],
|
||||
"Resource": [
|
||||
"arn:aws:s3:::example-nix-cache",
|
||||
"arn:aws:s3:::example-nix-cache/*"
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
### Examples
|
||||
|
||||
With bucket policies and authentication set up as described above, uploading works via [`nix copy`](@docroot@/command-ref/new-cli/nix3-copy.md) (experimental).
|
||||
|
||||
- To upload with a specific credential profile for Amazon S3:
|
||||
|
||||
```console
|
||||
$ nix copy nixpkgs.hello \
|
||||
--to 's3://example-nix-cache?profile=cache-upload®ion=eu-west-2'
|
||||
```
|
||||
|
||||
- To upload to an S3-compatible binary cache:
|
||||
|
||||
```console
|
||||
$ nix copy nixpkgs.hello --to \
|
||||
's3://example-nix-cache?profile=cache-upload&scheme=https&endpoint=minio.example.com'
|
||||
```
|
||||
|
||||
)"
|
||||
|
|
|
@ -57,7 +57,7 @@ protected:
std::function<void(std::vector<std::string>)> fun;
size_t arity;

Handler() {}
Handler() = default;

Handler(std::function<void(std::vector<std::string>)> && fun)
: fun(std::move(fun))
@ -84,29 +84,29 @@ protected:
{ }

Handler(std::vector<std::string> * dest)
: fun([=](std::vector<std::string> ss) { *dest = ss; })
: fun([dest](std::vector<std::string> ss) { *dest = ss; })
, arity(ArityAny)
{ }

Handler(std::string * dest)
: fun([=](std::vector<std::string> ss) { *dest = ss[0]; })
: fun([dest](std::vector<std::string> ss) { *dest = ss[0]; })
, arity(1)
{ }

Handler(std::optional<std::string> * dest)
: fun([=](std::vector<std::string> ss) { *dest = ss[0]; })
: fun([dest](std::vector<std::string> ss) { *dest = ss[0]; })
, arity(1)
{ }

template<class T>
Handler(T * dest, const T & val)
: fun([=](std::vector<std::string> ss) { *dest = val; })
: fun([dest, val](std::vector<std::string> ss) { *dest = val; })
, arity(0)
{ }

template<class I>
Handler(I * dest)
: fun([=](std::vector<std::string> ss) {
: fun([dest](std::vector<std::string> ss) {
*dest = string2IntWithUnitPrefix<I>(ss[0]);
})
, arity(1)
@ -114,7 +114,7 @@ protected:

template<class I>
Handler(std::optional<I> * dest)
: fun([=](std::vector<std::string> ss) {
: fun([dest](std::vector<std::string> ss) {
*dest = string2IntWithUnitPrefix<I>(ss[0]);
})
, arity(1)
@ -130,7 +130,7 @@ protected:
* The `AddCompletions` that is passed is an interface to the state
* stored as part of the root command
*/
typedef void CompleterFun(AddCompletions &, size_t, std::string_view);
using CompleterFun = void(AddCompletions &, size_t, std::string_view);

/**
* The closure type of the completion callback.
@ -138,7 +138,7 @@ protected:
* This is what is actually stored as part of each Flag / Expected
* Arg.
*/
typedef std::function<CompleterFun> CompleterClosure;
using CompleterClosure = std::function<CompleterFun>;

/**
* Description of flags / options
@ -148,7 +148,7 @@ protected:
*/
struct Flag
{
typedef std::shared_ptr<Flag> ptr;
using ptr = std::shared_ptr<Flag>;

std::string longName;
std::set<std::string> aliases;
@ -296,14 +296,14 @@ struct Command : virtual public Args
{
friend class MultiCommand;

virtual ~Command() { }
virtual ~Command() = default;

/**
* Entry point to the command
*/
virtual void run() = 0;

typedef int Category;
using Category = int;

static constexpr Category catDefault = 0;
@ -312,7 +312,7 @@ struct Command : virtual public Args
virtual Category category() { return catDefault; }
};

typedef std::map<std::string, std::function<ref<Command>()>> Commands;
using Commands = std::map<std::string, std::function<ref<Command>()>>;

/**
* An argument parser that supports multiple subcommands,
@ -98,7 +98,15 @@ Here are some examples of flake references in their URL-like representation:

## Path-like syntax

Flakes corresponding to a local path can also be referred to by a direct path reference, either `/absolute/path/to/the/flake` or `./relative/path/to/the/flake` (note that the leading `./` is mandatory for relative paths to avoid any ambiguity).
Flakes corresponding to a local path can also be referred to by a direct
path reference, either `/absolute/path/to/the/flake` or `./relative/path/to/the/flake`.
Note that the leading `./` is mandatory for relative paths. If it is
omitted, the path will be interpreted as [URL-like syntax](#url-like-syntax),
which will cause error messages like this:

```console
error: cannot find flake 'flake:relative/path/to/the/flake' in the flake registries
```

The semantics of such a path are as follows:
@ -153,18 +161,39 @@ can occur in *locked* flake references and are available to Nix code:

Currently the `type` attribute can be one of the following:

* `path`: arbitrary local directories, or local Git trees. The
required attribute `path` specifies the path of the flake. The URL
form is
* `indirect`: *The default*. Indirection through the flake registry.
These have the form

```
[path:]<path>(\?<params)?
[flake:]<flake-id>(/<rev-or-ref>(/rev)?)?
```

where *path* is an absolute path.
These perform a lookup of `<flake-id>` in the flake registry. For
example, `nixpkgs` and `nixpkgs/release-20.09` are indirect flake
references. The specified `rev` and/or `ref` are merged with the
entry in the registry; see [nix registry](./nix3-registry.md) for
details.

*path* must be a directory in the file system containing a file
named `flake.nix`.
For example, these are valid indirect flake references:

* `nixpkgs`
* `nixpkgs/nixos-unstable`
* `nixpkgs/a3a3dda3bacf61e8a39258a0ed9c924eeca8e293`
* `nixpkgs/nixos-unstable/a3a3dda3bacf61e8a39258a0ed9c924eeca8e293`
* `sub/dir` (if a flake named `sub` is in the registry)

* `path`: arbitrary local directories. The required attribute `path`
specifies the path of the flake. The URL form is

```
path:<path>(\?<params>)?
```

where *path* is an absolute path to a directory in the file system
containing a file named `flake.nix`.

If the flake at *path* is not inside a git repository, the `path:`
prefix is implied and can be omitted.

*path* generally must be an absolute path. However, on the command
line, it can be a relative path (e.g. `.` or `./foo`) which is
@ -173,20 +202,24 @@ Currently the `type` attribute can be one of the following:
(e.g. `nixpkgs` is a registry lookup; `./nixpkgs` is a relative
path).

For example, these are valid path flake references:

* `path:/home/user/sub/dir`
* `/home/user/sub/dir` (if `dir/flake.nix` is *not* in a git repository)
* `./sub/dir` (when used on the command line and `dir/flake.nix` is *not* in a git repository)

* `git`: Git repositories. The location of the repository is specified
by the attribute `url`.

They have the URL form

```
git(+http|+https|+ssh|+git|+file|):(//<server>)?<path>(\?<params>)?
git(+http|+https|+ssh|+git|+file):(//<server>)?<path>(\?<params>)?
```

or

```
<user>@<server>:<path>
```

If *path* starts with `/` (or `./` when used as an argument on the
command line) and is a local path to a git repository, the leading
`git:` or `+file` prefixes are implied and can be omitted.

The `ref` attribute defaults to resolving the `HEAD` reference.
@ -203,6 +236,9 @@ Currently the `type` attribute can be one of the following:

For example, the following are valid Git flake references:

* `git:/home/user/sub/dir`
* `/home/user/sub/dir` (if `dir/flake.nix` is in a git repository)
* `./sub/dir` (when used on the command line and `dir/flake.nix` is in a git repository)
* `git+https://example.org/my/repo`
* `git+https://example.org/my/repo?dir=flake1`
* `git+ssh://git@github.com/NixOS/nix?ref=v1.2.3`
@ -324,19 +360,6 @@ Currently the `type` attribute can be one of the following:
* `sourcehut:~misterio/nix-colors/182b4b8709b8ffe4e9774a4c5d6877bf6bb9a21c`
* `sourcehut:~misterio/nix-colors/21c1a380a6915d890d408e9f22203436a35bb2de?host=hg.sr.ht`

* `indirect`: Indirections through the flake registry. These have the
form

```
[flake:]<flake-id>(/<rev-or-ref>(/rev)?)?
```

These perform a lookup of `<flake-id>` in the flake registry. For
example, `nixpkgs` and `nixpkgs/release-20.09` are indirect flake
references. The specified `rev` and/or `ref` are merged with the
entry in the registry; see [nix registry](./nix3-registry.md) for
details.

# Flake format

As an example, here is a simple `flake.nix` that depends on the
@ -0,0 +1,10 @@
error:
… while calling the 'addDrvOutputDependencies' builtin

at /pwd/lang/eval-fail-addDrvOutputDependencies-empty-context.nix:1:1:

1| builtins.addDrvOutputDependencies ""
| ^
2|

error: context of string '' must have exactly one element, but has 0
@ -0,0 +1 @@
builtins.addDrvOutputDependencies ""
@ -0,0 +1,11 @@
error:
… while calling the 'addDrvOutputDependencies' builtin

at /pwd/lang/eval-fail-addDrvOutputDependencies-multi-elem-context.nix:18:4:

17|
18| in builtins.addDrvOutputDependencies combo-path
| ^
19|

error: context of string '/nix/store/pg9yqs4yd85yhdm3f4i5dyaqp5jahrsz-fail.drv/nix/store/2dxd5frb715z451vbf7s8birlf3argbk-fail-2.drv' must have exactly one element, but has 2
@ -0,0 +1,18 @@
let
  drv0 = derivation {
    name = "fail";
    builder = "/bin/false";
    system = "x86_64-linux";
    outputs = [ "out" "foo" ];
  };

  drv1 = derivation {
    name = "fail-2";
    builder = "/bin/false";
    system = "x86_64-linux";
    outputs = [ "out" "foo" ];
  };

  combo-path = "${drv0.drvPath}${drv1.drvPath}";

in builtins.addDrvOutputDependencies combo-path
@ -0,0 +1,11 @@
error:
… while calling the 'addDrvOutputDependencies' builtin

at /pwd/lang/eval-fail-addDrvOutputDependencies-wrong-element-kind.nix:9:4:

8|
9| in builtins.addDrvOutputDependencies drv.outPath
| ^
10|

error: `addDrvOutputDependencies` can only act on derivations, not on a derivation output such as 'out'
@ -0,0 +1,9 @@
let
  drv = derivation {
    name = "fail";
    builder = "/bin/false";
    system = "x86_64-linux";
    outputs = [ "out" "foo" ];
  };

in builtins.addDrvOutputDependencies drv.outPath
@ -1 +1 @@
[ true true true true true true ]
[ true true true true true true true true true true true true true ]
@ -31,11 +31,29 @@ let
  (builtins.unsafeDiscardStringContext str)
  (builtins.getContext str);

# Only holds true if string context contains both a `DrvDeep` and
# `Opaque` element.
almostEtaRule = str:
  str == builtins.addDrvOutputDependencies
    (builtins.unsafeDiscardOutputDependency str);

addDrvOutputDependencies_idempotent = str:
  builtins.addDrvOutputDependencies str ==
  builtins.addDrvOutputDependencies (builtins.addDrvOutputDependencies str);

rules = str: [
  (etaRule str)
  (almostEtaRule str)
  (addDrvOutputDependencies_idempotent str)
];

in [
  (legit-context == desired-context)
  (reconstructed-path == combo-path)
  (etaRule "foo")
  (etaRule drv.drvPath)
  (etaRule drv.foo.outPath)
  (etaRule (builtins.unsafeDiscardOutputDependency drv.drvPath))
] ++ builtins.concatMap rules [
  drv.drvPath
  (builtins.addDrvOutputDependencies drv.drvPath)
  (builtins.unsafeDiscardOutputDependency drv.drvPath)
]