Merge remote-tracking branch 'upstream/master' into hash-always-has-type
Commit 699fc89b39

35 changed files with 542 additions and 522 deletions

@@ -19,6 +19,7 @@ LIBLZMA_LIBS = @LIBLZMA_LIBS@
 OPENSSL_LIBS = @OPENSSL_LIBS@
 PACKAGE_NAME = @PACKAGE_NAME@
 PACKAGE_VERSION = @PACKAGE_VERSION@
 SHELL = @bash@
 SODIUM_LIBS = @SODIUM_LIBS@
 SQLITE3_LIBS = @SQLITE3_LIBS@
+bash = @bash@

README.md (25 lines changed)

@@ -12,7 +12,7 @@ for more details.
 On Linux and macOS the easiest way to Install Nix is to run the following shell command
 (as a user other than root):
 
-```
+```console
 $ curl -L https://nixos.org/nix/install | sh
 ```
 
@@ -20,27 +20,8 @@ Information on additional installation methods is available on the [Nix download
 
-## Building And Developing
-
-### Building Nix
-
-You can build Nix using one of the targets provided by [release.nix](./release.nix):
-
-```
-$ nix-build ./release.nix -A build.aarch64-linux
-$ nix-build ./release.nix -A build.x86_64-darwin
-$ nix-build ./release.nix -A build.i686-linux
-$ nix-build ./release.nix -A build.x86_64-linux
-```
-
-### Development Environment
-
-You can use the provided `shell.nix` to get a working development environment:
-
-```
-$ nix-shell
-$ ./bootstrap.sh
-$ ./configure
-$ make
-```
+See our [Hacking guide](https://hydra.nixos.org/job/nix/master/build.x86_64-linux/latest/download-by-type/doc/manual#chap-hacking) in our manual for instruction on how to
+build nix from source with nix-build or how to get a development environment.
 
 ## Additional Resources
 

@@ -1,119 +0,0 @@
-<section xmlns="http://docbook.org/ns/docbook"
-         xmlns:xlink="http://www.w3.org/1999/xlink"
-         xmlns:xi="http://www.w3.org/2001/XInclude"
-         version="5.0"
-         xml:id='sec-builder-syntax'>
-
-<title>Builder Syntax</title>
-
-<example xml:id='ex-hello-builder'><title>Build script for GNU Hello
-(<filename>builder.sh</filename>)</title>
-<programlisting>
-source $stdenv/setup <co xml:id='ex-hello-builder-co-1' />
-
-PATH=$perl/bin:$PATH <co xml:id='ex-hello-builder-co-2' />
-
-tar xvfz $src <co xml:id='ex-hello-builder-co-3' />
-cd hello-*
-./configure --prefix=$out <co xml:id='ex-hello-builder-co-4' />
-make <co xml:id='ex-hello-builder-co-5' />
-make install</programlisting>
-</example>
-
-<para><xref linkend='ex-hello-builder' /> shows the builder referenced
-from Hello's Nix expression (stored in
-<filename>pkgs/applications/misc/hello/ex-1/builder.sh</filename>).
-The builder can actually be made a lot shorter by using the
-<emphasis>generic builder</emphasis> functions provided by
-<varname>stdenv</varname>, but here we write out the build steps to
-elucidate what a builder does. It performs the following
-steps:</para>
-
-<calloutlist>
-
-  <callout arearefs='ex-hello-builder-co-1'>
-
-    <para>When Nix runs a builder, it initially completely clears the
-    environment (except for the attributes declared in the
-    derivation). For instance, the <envar>PATH</envar> variable is
-    empty<footnote><para>Actually, it's initialised to
-    <filename>/path-not-set</filename> to prevent Bash from setting it
-    to a default value.</para></footnote>. This is done to prevent
-    undeclared inputs from being used in the build process. If for
-    example the <envar>PATH</envar> contained
-    <filename>/usr/bin</filename>, then you might accidentally use
-    <filename>/usr/bin/gcc</filename>.</para>
-
-    <para>So the first step is to set up the environment. This is
-    done by calling the <filename>setup</filename> script of the
-    standard environment. The environment variable
-    <envar>stdenv</envar> points to the location of the standard
-    environment being used. (It wasn't specified explicitly as an
-    attribute in <xref linkend='ex-hello-nix' />, but
-    <varname>mkDerivation</varname> adds it automatically.)</para>
-
-  </callout>
-
-  <callout arearefs='ex-hello-builder-co-2'>
-
-    <para>Since Hello needs Perl, we have to make sure that Perl is in
-    the <envar>PATH</envar>. The <envar>perl</envar> environment
-    variable points to the location of the Perl package (since it
-    was passed in as an attribute to the derivation), so
-    <filename><replaceable>$perl</replaceable>/bin</filename> is the
-    directory containing the Perl interpreter.</para>
-
-  </callout>
-
-  <callout arearefs='ex-hello-builder-co-3'>
-
-    <para>Now we have to unpack the sources. The
-    <varname>src</varname> attribute was bound to the result of
-    fetching the Hello source tarball from the network, so the
-    <envar>src</envar> environment variable points to the location in
-    the Nix store to which the tarball was downloaded. After
-    unpacking, we <command>cd</command> to the resulting source
-    directory.</para>
-
-    <para>The whole build is performed in a temporary directory
-    created in <varname>/tmp</varname>, by the way. This directory is
-    removed after the builder finishes, so there is no need to clean
-    up the sources afterwards. Also, the temporary directory is
-    always newly created, so you don't have to worry about files from
-    previous builds interfering with the current build.</para>
-
-  </callout>
-
-  <callout arearefs='ex-hello-builder-co-4'>
-
-    <para>GNU Hello is a typical Autoconf-based package, so we first
-    have to run its <filename>configure</filename> script. In Nix
-    every package is stored in a separate location in the Nix store,
-    for instance
-    <filename>/nix/store/9a54ba97fb71b65fda531012d0443ce2-hello-2.1.1</filename>.
-    Nix computes this path by cryptographically hashing all attributes
-    of the derivation. The path is passed to the builder through the
-    <envar>out</envar> environment variable. So here we give
-    <filename>configure</filename> the parameter
-    <literal>--prefix=$out</literal> to cause Hello to be installed in
-    the expected location.</para>
-
-  </callout>
-
-  <callout arearefs='ex-hello-builder-co-5'>
-
-    <para>Finally we build Hello (<literal>make</literal>) and install
-    it into the location specified by <envar>out</envar>
-    (<literal>make install</literal>).</para>
-
-  </callout>
-
-</calloutlist>
-
-<para>If you are wondering about the absence of error checking on the
-result of various commands called in the builder: this is because the
-shell script is evaluated with Bash's <option>-e</option> option,
-which causes the script to be aborted if any command fails without an
-error check.</para>
-
-</section>

@@ -7,15 +7,34 @@
 <para>This section provides some notes on how to hack on Nix. To get
 the latest version of Nix from GitHub:
 <screen>
-$ git clone git://github.com/NixOS/nix.git
+$ git clone https://github.com/NixOS/nix.git
 $ cd nix
 </screen>
 </para>
 
-<para>To build it and its dependencies:
+<para>To build Nix for the current operating system/architecture use
+
 <screen>
-$ nix-build release.nix -A build.x86_64-linux
+$ nix-build
 </screen>
+
+or if you have a flakes-enabled nix:
+
+<screen>
+$ nix build
+</screen>
+
+This will build <literal>defaultPackage</literal> attribute defined in the <literal>flake.nix</literal> file.
+
+To build for other platforms add one of the following suffixes to it: aarch64-linux,
+i686-linux, x86_64-darwin, x86_64-linux.
+
+i.e.
+
+<screen>
+nix-build -A defaultPackage.x86_64-linux
+</screen>
+
 </para>
 
 <para>To build all dependencies and start a shell in which all

@@ -27,13 +46,27 @@ $ nix-shell
 To build Nix itself in this shell:
 <screen>
 [nix-shell]$ ./bootstrap.sh
-[nix-shell]$ configurePhase
-[nix-shell]$ make
+[nix-shell]$ ./configure $configureFlags
+[nix-shell]$ make -j $NIX_BUILD_CORES
 </screen>
 To install it in <literal>$(pwd)/inst</literal> and test it:
 <screen>
 [nix-shell]$ make install
 [nix-shell]$ make installcheck
+[nix-shell]$ ./inst/bin/nix --version
+nix (Nix) 2.4
 </screen>
+
+If you have a flakes-enabled nix you can replace:
+
+<screen>
+$ nix-shell
+</screen>
+
+by:
+
+<screen>
+$ nix develop
+</screen>
+
 </para>

@@ -21,13 +21,13 @@ clean-files += $(GCH) $(PCH)
 
 ifeq ($(PRECOMPILE_HEADERS), 1)
 
-ifeq ($(CXX), g++)
+ifeq ($(findstring g++,$(CXX)), g++)
 
   GLOBAL_CXXFLAGS_PCH += -include $(buildprefix)precompiled-headers.h -Winvalid-pch
 
   GLOBAL_ORDER_AFTER += $(GCH)
 
-else ifeq ($(CXX), clang++)
+else ifeq ($(findstring clang++,$(CXX)), clang++)
 
   GLOBAL_CXXFLAGS_PCH += -include-pch $(PCH) -Winvalid-pch
 

@@ -207,7 +207,7 @@ if [ -z "$NIX_INSTALLER_NO_MODIFY_PROFILE" ]; then
         if [ -w "$fn" ]; then
             if ! grep -q "$p" "$fn"; then
                 echo "modifying $fn..." >&2
-                echo "if [ -e $p ]; then . $p; fi # added by Nix installer" >> "$fn"
+                echo -e "\nif [ -e $p ]; then . $p; fi # added by Nix installer" >> "$fn"
             fi
             added=1
             break

@@ -218,7 +218,7 @@ if [ -z "$NIX_INSTALLER_NO_MODIFY_PROFILE" ]; then
         if [ -w "$fn" ]; then
             if ! grep -q "$p" "$fn"; then
                 echo "modifying $fn..." >&2
-                echo "if [ -e $p ]; then . $p; fi # added by Nix installer" >> "$fn"
+                echo -e "\nif [ -e $p ]; then . $p; fi # added by Nix installer" >> "$fn"
             fi
             added=1
             break

shell.nix (new file, 3 lines)

@@ -0,0 +1,3 @@
+(import (fetchTarball https://github.com/edolstra/flake-compat/archive/master.tar.gz) {
+  src = ./.;
+}).shellNix

@@ -102,14 +102,15 @@ std::pair<FlakeRef, std::string> parseFlakeRefWithFragment(
             percentDecode(std::string(match[6])));
     }
 
-    /* Check if 'url' is a path (either absolute or relative to
-       'baseDir'). If so, search upward to the root of the repo
-       (i.e. the directory containing .git). */
-
     else if (std::regex_match(url, match, pathUrlRegex)) {
         std::string path = match[1];
-        if (!baseDir && !hasPrefix(path, "/"))
-            throw BadURL("flake reference '%s' is not an absolute path", url);
+        std::string fragment = percentDecode(std::string(match[3]));
+
+        if (baseDir) {
+            /* Check if 'url' is a path (either absolute or relative
+               to 'baseDir'). If so, search upward to the root of the
+               repo (i.e. the directory containing .git). */
+
+            path = absPath(path, baseDir, true);
 
             if (!S_ISDIR(lstat(path).st_mode))

@@ -118,8 +119,6 @@ std::pair<FlakeRef, std::string> parseFlakeRefWithFragment(
             if (!allowMissing && !pathExists(path + "/flake.nix"))
                 throw BadURL("path '%s' is not a flake (because it doesn't contain a 'flake.nix' file)", path);
 
-            auto fragment = percentDecode(std::string(match[3]));
-
             auto flakeRoot = path;
             std::string subdir;
 

@@ -154,6 +153,12 @@ std::pair<FlakeRef, std::string> parseFlakeRefWithFragment(
                 flakeRoot = dirOf(flakeRoot);
             }
 
+        } else {
+            if (!hasPrefix(path, "/"))
+                throw BadURL("flake reference '%s' is not an absolute path", url);
+            path = canonPath(path);
+        }
+
         fetchers::Attrs attrs;
         attrs.insert_or_assign("type", "path");
         attrs.insert_or_assign("path", path);

@@ -2774,7 +2774,7 @@ struct RestrictedStore : public LocalFSStore
         goal.addDependency(info.path);
     }
 
-    StorePath addToStoreFromDump(const string & dump, const string & name,
+    StorePath addToStoreFromDump(Source & dump, const string & name,
         FileIngestionMethod method = FileIngestionMethod::Recursive, HashType hashAlgo = htSHA256, RepairFlag repair = NoRepair) override
     {
         auto path = next->addToStoreFromDump(dump, name, method, hashAlgo, repair);

@@ -173,31 +173,6 @@ struct TunnelSource : BufferedSource
     }
 };
 
-/* If the NAR archive contains a single file at top-level, then save
-   the contents of the file to `s'. Otherwise barf. */
-struct RetrieveRegularNARSink : ParseSink
-{
-    bool regular;
-    string s;
-
-    RetrieveRegularNARSink() : regular(true) { }
-
-    void createDirectory(const Path & path)
-    {
-        regular = false;
-    }
-
-    void receiveContents(unsigned char * data, unsigned int len)
-    {
-        s.append((const char *) data, len);
-    }
-
-    void createSymlink(const Path & path, const string & target)
-    {
-        regular = false;
-    }
-};
-
 struct ClientSettings
 {
     bool keepFailed;

@@ -375,25 +350,28 @@ static void performOp(TunnelLogger * logger, ref<Store> store,
     }
 
     case wopAddToStore: {
-        std::string s, baseName;
+        HashType hashAlgo;
+        std::string baseName;
         FileIngestionMethod method;
         {
-            bool fixed; uint8_t recursive;
-            from >> baseName >> fixed /* obsolete */ >> recursive >> s;
+            bool fixed;
+            uint8_t recursive;
+            std::string hashAlgoRaw;
+            from >> baseName >> fixed /* obsolete */ >> recursive >> hashAlgoRaw;
             if (recursive > (uint8_t) FileIngestionMethod::Recursive)
                 throw Error("unsupported FileIngestionMethod with value of %i; you may need to upgrade nix-daemon", recursive);
             method = FileIngestionMethod { recursive };
             /* Compatibility hack. */
             if (!fixed) {
-                s = "sha256";
+                hashAlgoRaw = "sha256";
                 method = FileIngestionMethod::Recursive;
             }
+            hashAlgo = parseHashType(hashAlgoRaw);
         }
-        HashType hashAlgo = parseHashType(s);
 
-        StringSink savedNAR;
-        TeeSource savedNARSource(from, savedNAR);
-        RetrieveRegularNARSink savedRegular;
+        StringSink saved;
+        TeeSource savedNARSource(from, saved);
+        RetrieveRegularNARSink savedRegular { saved };
 
         if (method == FileIngestionMethod::Recursive) {
             /* Get the entire NAR dump from the client and save it to

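The compatibility hack in the hunk above is the easy part to misread: when an old client didn't mark the upload as fixed-output, the daemon must treat it as a recursive SHA-256 ingestion regardless of what else was sent. A minimal standalone sketch of just that decode step (the names and the plain struct standing in for the protocol `Source` are ours, and `parseHashType` is elided):

```cpp
// Sketch only: mirrors the decode logic, not the real wire protocol.
#include <cstdint>
#include <iostream>
#include <stdexcept>
#include <string>

enum class FileIngestionMethod : uint8_t { Flat = 0, Recursive = 1 };

// Hypothetical stand-in for the fields read off the wire.
struct Frame { std::string baseName; bool fixed; uint8_t recursive; std::string hashAlgoRaw; };

static FileIngestionMethod decodeMethod(Frame & f)
{
    if (f.recursive > (uint8_t) FileIngestionMethod::Recursive)
        throw std::runtime_error("unsupported FileIngestionMethod; upgrade nix-daemon");
    auto method = FileIngestionMethod { f.recursive };
    /* Compatibility hack: clients that don't send fixed-output info
       always meant "recursive, SHA-256". */
    if (!f.fixed) {
        f.hashAlgoRaw = "sha256";
        method = FileIngestionMethod::Recursive;
    }
    return method;
}

int main()
{
    Frame f { "example", /*fixed=*/false, /*recursive=*/0, "md5" };
    auto m = decodeMethod(f);
    std::cout << "method=" << int(m) << " hash=" << f.hashAlgoRaw << "\n"; // method=1 hash=sha256
}
```
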
@@ -407,11 +385,9 @@ static void performOp(TunnelLogger * logger, ref<Store> store,
         logger->startWork();
         if (!savedRegular.regular) throw Error("regular file expected");
 
-        auto path = store->addToStoreFromDump(
-            method == FileIngestionMethod::Recursive ? *savedNAR.s : savedRegular.s,
-            baseName,
-            method,
-            hashAlgo);
+        // FIXME: try to stream directly from `from`.
+        StringSource dumpSource { *saved.s };
+        auto path = store->addToStoreFromDump(dumpSource, baseName, method, hashAlgo);
         logger->stopWork();
 
         to << store->printStorePath(path);

@@ -727,15 +703,15 @@ static void performOp(TunnelLogger * logger, ref<Store> store,
         if (!trusted)
             info.ultimate = false;
 
-        std::string saved;
         std::unique_ptr<Source> source;
         if (GET_PROTOCOL_MINOR(clientVersion) >= 21)
             source = std::make_unique<TunnelSource>(from, to);
         else {
-            TeeParseSink tee(from);
-            parseDump(tee, tee.source);
-            saved = std::move(*tee.saved.s);
-            source = std::make_unique<StringSource>(saved);
+            StringSink saved;
+            TeeSource tee { from, saved };
+            ParseSink ether;
+            parseDump(ether, tee);
+            source = std::make_unique<StringSource>(std::move(*saved.s));
         }
 
         logger->startWork();

@@ -7,14 +7,6 @@
 
 namespace nix {
 
-const StorePath & BasicDerivation::findOutput(const string & id) const
-{
-    auto i = outputs.find(id);
-    if (i == outputs.end())
-        throw Error("derivation has no output '%s'", id);
-    return i->second.path;
-}
-
 
 bool BasicDerivation::isBuiltin() const
 {

@@ -39,10 +39,6 @@ struct BasicDerivation
     BasicDerivation() { }
     virtual ~BasicDerivation() { };
 
-    /* Return the path corresponding to the output identifier `id' in
-       the given derivation. */
-    const StorePath & findOutput(const std::string & id) const;
-
     bool isBuiltin() const;
 
     /* Return true iff this is a fixed-output derivation. */

@@ -60,8 +60,10 @@ StorePaths Store::importPaths(Source & source, CheckSigsFlag checkSigs)
         if (n != 1) throw Error("input doesn't look like something created by 'nix-store --export'");
 
         /* Extract the NAR from the source. */
-        TeeParseSink tee(source);
-        parseDump(tee, tee.source);
+        StringSink saved;
+        TeeSource tee { source, saved };
+        ParseSink ether;
+        parseDump(ether, tee);
 
         uint32_t magic = readInt(source);
         if (magic != exportMagic)

|
|||
if (deriver != "")
|
||||
info.deriver = parseStorePath(deriver);
|
||||
|
||||
info.narHash = hashString(htSHA256, *tee.saved.s);
|
||||
info.narSize = tee.saved.s->size();
|
||||
info.narHash = hashString(htSHA256, *saved.s);
|
||||
info.narSize = saved.s->size();
|
||||
|
||||
// Ignore optional legacy signature.
|
||||
if (readInt(source) == 1)
|
||||
readString(source);
|
||||
|
||||
// Can't use underlying source, which would have been exhausted
|
||||
auto source = StringSource { *tee.saved.s };
|
||||
auto source = StringSource { *saved.s };
|
||||
addToStore(info, source, NoRepair, checkSigs);
|
||||
|
||||
res.push_back(info.path);
|
||||
|
|
|
@@ -124,7 +124,7 @@ struct curlFileTransfer : public FileTransfer
             if (requestHeaders) curl_slist_free_all(requestHeaders);
             try {
                 if (!done)
-                    fail(FileTransferError(Interrupted, "download of '%s' was interrupted", request.uri));
+                    fail(FileTransferError(Interrupted, nullptr, "download of '%s' was interrupted", request.uri));
             } catch (...) {
                 ignoreException();
             }

@@ -145,6 +145,7 @@ struct curlFileTransfer : public FileTransfer
 
         LambdaSink finalSink;
         std::shared_ptr<CompressionSink> decompressionSink;
+        std::optional<StringSink> errorSink;
 
         std::exception_ptr writeException;
 

@@ -154,9 +155,19 @@ struct curlFileTransfer : public FileTransfer
             size_t realSize = size * nmemb;
             result.bodySize += realSize;
 
-            if (!decompressionSink)
+            if (!decompressionSink) {
                 decompressionSink = makeDecompressionSink(encoding, finalSink);
+                if (! successfulStatuses.count(getHTTPStatus())) {
+                    // In this case we want to construct a TeeSink, to keep
+                    // the response around (which we figure won't be big
+                    // like an actual download should be) to improve error
+                    // messages.
+                    errorSink = StringSink { };
+                }
+            }
 
+            if (errorSink)
+                (*errorSink)((unsigned char *) contents, realSize);
             (*decompressionSink)((unsigned char *) contents, realSize);
 
             return realSize;

|
|||
|
||||
attempt++;
|
||||
|
||||
std::shared_ptr<std::string> response;
|
||||
if (errorSink)
|
||||
response = errorSink->s;
|
||||
auto exc =
|
||||
code == CURLE_ABORTED_BY_CALLBACK && _isInterrupted
|
||||
? FileTransferError(Interrupted, fmt("%s of '%s' was interrupted", request.verb(), request.uri))
|
||||
? FileTransferError(Interrupted, response, "%s of '%s' was interrupted", request.verb(), request.uri)
|
||||
: httpStatus != 0
|
||||
? FileTransferError(err,
|
||||
response,
|
||||
fmt("unable to %s '%s': HTTP error %d ('%s')",
|
||||
request.verb(), request.uri, httpStatus, statusMsg)
|
||||
+ (code == CURLE_OK ? "" : fmt(" (curl error: %s)", curl_easy_strerror(code)))
|
||||
)
|
||||
: FileTransferError(err,
|
||||
response,
|
||||
fmt("unable to %s '%s': %s (%d)",
|
||||
request.verb(), request.uri, curl_easy_strerror(code), code));
|
||||
|
||||
|
@@ -679,7 +695,7 @@ struct curlFileTransfer : public FileTransfer
             auto s3Res = s3Helper.getObject(bucketName, key);
             FileTransferResult res;
             if (!s3Res.data)
-                throw FileTransferError(NotFound, fmt("S3 object '%s' does not exist", request.uri));
+                throw FileTransferError(NotFound, nullptr, "S3 object '%s' does not exist", request.uri);
             res.data = s3Res.data;
             callback(std::move(res));
 #else

@@ -824,6 +840,21 @@ void FileTransfer::download(FileTransferRequest && request, Sink & sink)
     }
 }
 
+template<typename... Args>
+FileTransferError::FileTransferError(FileTransfer::Error error, std::shared_ptr<string> response, const Args & ... args)
+    : Error(args...), error(error), response(response)
+{
+    const auto hf = hintfmt(args...);
+    // FIXME: Due to https://github.com/NixOS/nix/issues/3841 we don't know how
+    // to print different messages for different verbosity levels. For now
+    // we add some heuristics for detecting when we want to show the response.
+    if (response && (response->size() < 1024 || response->find("<html>") != string::npos)) {
+        err.hint = hintfmt("%1%\n\nresponse body:\n\n%2%", normaltxt(hf.str()), *response);
+    } else {
+        err.hint = hf;
+    }
+}
+
 bool isUri(const string & s)
 {
     if (s.compare(0, 8, "channel:") == 0) return true;

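The interesting part of the new constructor is the rule for when the captured body is worth showing. Restated as a small self-contained check (the function name `shouldShowResponse` is ours, not the library's):

```cpp
// Sketch of the heuristic only; the real code formats the message via hintfmt.
#include <cassert>
#include <memory>
#include <string>

static bool shouldShowResponse(const std::shared_ptr<std::string> & response)
{
    // Show the body only when it is small, or when it looks like an HTML
    // error page (which is usually more informative than the status line).
    return response
        && (response->size() < 1024
            || response->find("<html>") != std::string::npos);
}

int main()
{
    assert(!shouldShowResponse(nullptr));
    assert(shouldShowResponse(std::make_shared<std::string>("404 not found")));
    auto big = std::make_shared<std::string>(std::string(4096, 'x') + "<html>");
    assert(shouldShowResponse(big)); // large, but HTML -> still shown
}
```
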
@@ -103,10 +103,12 @@ class FileTransferError : public Error
 {
 public:
     FileTransfer::Error error;
+    std::shared_ptr<string> response; // intentionally optional
 
     template<typename... Args>
-    FileTransferError(FileTransfer::Error error, const Args & ... args)
-        : Error(args...), error(error)
-    { }
+    FileTransferError(FileTransfer::Error error, std::shared_ptr<string> response, const Args & ... args);
 
     virtual const char* sname() const override { return "FileTransferError"; }
 };
 
 bool isUri(const string & s);

@@ -1033,82 +1033,26 @@ void LocalStore::addToStore(const ValidPathInfo & info, Source & source,
 }
 
 
-StorePath LocalStore::addToStoreFromDump(const string & dump, const string & name,
-    FileIngestionMethod method, HashType hashAlgo, RepairFlag repair)
-{
-    Hash h = hashString(hashAlgo, dump);
-
-    auto dstPath = makeFixedOutputPath(method, h, name);
-
-    addTempRoot(dstPath);
-
-    if (repair || !isValidPath(dstPath)) {
-
-        /* The first check above is an optimisation to prevent
-           unnecessary lock acquisition. */
-
-        auto realPath = Store::toRealPath(dstPath);
-
-        PathLocks outputLock({realPath});
-
-        if (repair || !isValidPath(dstPath)) {
-
-            deletePath(realPath);
-
-            autoGC();
-
-            if (method == FileIngestionMethod::Recursive) {
-                StringSource source(dump);
-                restorePath(realPath, source);
-            } else
-                writeFile(realPath, dump);
-
-            canonicalisePathMetaData(realPath, -1);
-
-            /* Register the SHA-256 hash of the NAR serialisation of
-               the path in the database. We may just have computed it
-               above (if called with recursive == true and hashAlgo ==
-               sha256); otherwise, compute it here. */
-            HashResult hash = method == FileIngestionMethod::Recursive
-                ? HashResult {
-                    hashAlgo == htSHA256 ? h : hashString(htSHA256, dump),
-                    dump.size(),
-                }
-                : hashPath(htSHA256, realPath);
-
-            optimisePath(realPath); // FIXME: combine with hashPath()
-
-            ValidPathInfo info(dstPath);
-            info.narHash = hash.first;
-            info.narSize = hash.second;
-            info.ca = FixedOutputHash { .method = method, .hash = h };
-            registerValidPath(info);
-        }
-
-        outputLock.setDeletion(true);
-    }
-
-    return dstPath;
-}
-
-
 StorePath LocalStore::addToStore(const string & name, const Path & _srcPath,
     FileIngestionMethod method, HashType hashAlgo, PathFilter & filter, RepairFlag repair)
 {
     Path srcPath(absPath(_srcPath));
+    auto source = sinkToSource([&](Sink & sink) {
+        if (method == FileIngestionMethod::Recursive)
+            dumpPath(srcPath, sink, filter);
+        else
+            readFile(srcPath, sink);
+    });
+    return addToStoreFromDump(*source, name, method, hashAlgo, repair);
+}
 
-    if (method != FileIngestionMethod::Recursive)
-        return addToStoreFromDump(readFile(srcPath), name, method, hashAlgo, repair);
-
-    /* For computing the NAR hash. */
-    auto sha256Sink = std::make_unique<HashSink>(htSHA256);
-
-    /* For computing the store path. In recursive SHA-256 mode, this
-       is the same as the NAR hash, so no need to do it again. */
-    std::unique_ptr<HashSink> hashSink =
-        hashAlgo == htSHA256
-        ? nullptr
-        : std::make_unique<HashSink>(hashAlgo);
+
+StorePath LocalStore::addToStoreFromDump(Source & source0, const string & name,
+    FileIngestionMethod method, HashType hashAlgo, RepairFlag repair)
+{
+    /* For computing the store path. */
+    auto hashSink = std::make_unique<HashSink>(hashAlgo);
+    TeeSource source { source0, *hashSink };
 
     /* Read the source path into memory, but only if it's up to
        narBufferSize bytes. If it's larger, write it to a temporary

@@ -1116,55 +1060,49 @@ StorePath LocalStore::addToStore(const string & name, const Path & _srcPath,
       destination store path is already valid, we just delete the
       temporary path. Otherwise, we move it to the destination store
       path. */
-    bool inMemory = true;
-    std::string nar;
-
-    auto source = sinkToSource([&](Sink & sink) {
-
-        LambdaSink sink2([&](const unsigned char * buf, size_t len) {
-            (*sha256Sink)(buf, len);
-            if (hashSink) (*hashSink)(buf, len);
-
-            if (inMemory) {
-                if (nar.size() + len > settings.narBufferSize) {
-                    inMemory = false;
-                    sink << 1;
-                    sink((const unsigned char *) nar.data(), nar.size());
-                    nar.clear();
-                } else {
-                    nar.append((const char *) buf, len);
-                }
-            }
-
-            if (!inMemory) sink(buf, len);
-        });
-
-        dumpPath(srcPath, sink2, filter);
-    });
+    std::string dump;
+    bool inMemory = false;
+
+    /* Fill out buffer, and decide whether we are working strictly in
+       memory based on whether we break out because the buffer is full
+       or the original source is empty */
+    while (dump.size() < settings.narBufferSize) {
+        auto oldSize = dump.size();
+        constexpr size_t chunkSize = 65536;
+        auto want = std::min(chunkSize, settings.narBufferSize - oldSize);
+        dump.resize(oldSize + want);
+        auto got = 0;
+        try {
+            got = source.read((uint8_t *) dump.data() + oldSize, want);
+        } catch (EndOfFile &) {
+            inMemory = true;
+            break;
+        }
+        dump.resize(oldSize + got);
+    }
 
     std::unique_ptr<AutoDelete> delTempDir;
     Path tempPath;
 
-    try {
-        /* Wait for the source coroutine to give us some dummy
-           data. This is so that we don't create the temporary
-           directory if the NAR fits in memory. */
-        readInt(*source);
+    if (!inMemory) {
+        /* Drain what we pulled so far, and then keep on pulling */
+        StringSource dumpSource { dump };
+        ChainSource bothSource { dumpSource, source };
 
         auto tempDir = createTempDir(realStoreDir, "add");
         delTempDir = std::make_unique<AutoDelete>(tempDir);
         tempPath = tempDir + "/x";
 
-        restorePath(tempPath, *source);
+        if (method == FileIngestionMethod::Recursive)
+            restorePath(tempPath, bothSource);
+        else
+            writeFile(tempPath, bothSource);
 
-    } catch (EndOfFile &) {
-        if (!inMemory) throw;
-        /* The NAR fits in memory, so we didn't do restorePath(). */
+        dump.clear();
     }
 
-    auto sha256 = sha256Sink->finish();
-
-    Hash hash = hashSink ? hashSink->finish().first : sha256.first;
+    auto [hash, size] = hashSink->finish();
 
     auto dstPath = makeFixedOutputPath(method, hash, name);
 

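The heart of the rework is the fill-then-spill loop above: buffer up to `narBufferSize` bytes, and only fall back to a temporary path when the source outlives the buffer. A runnable sketch of that loop with a toy `Source` (the constant, the exception type, and the `StringSource` here are simplified stand-ins for nix's):

```cpp
// Sketch only: demonstrates the buffering decision, not the store logic.
#include <algorithm>
#include <cstring>
#include <iostream>
#include <stdexcept>
#include <string>

struct EndOfFile : std::runtime_error { EndOfFile() : std::runtime_error("eof") {} };

struct StringSource {
    const std::string & s; size_t pos = 0;
    size_t read(char * data, size_t len) {
        if (pos == s.size()) throw EndOfFile();
        size_t n = std::min(len, s.size() - pos);
        std::memcpy(data, s.data() + pos, n); pos += n; return n;
    }
};

int main()
{
    std::string payload(100000, 'a');       // larger than the buffer below
    StringSource source { payload };
    const size_t narBufferSize = 65536;     // assumption for this sketch

    std::string dump; bool inMemory = false;
    while (dump.size() < narBufferSize) {
        auto oldSize = dump.size();
        constexpr size_t chunkSize = 16384;
        auto want = std::min(chunkSize, narBufferSize - oldSize);
        dump.resize(oldSize + want);
        size_t got = 0;
        try { got = source.read(dump.data() + oldSize, want); }
        catch (EndOfFile &) { inMemory = true; break; }   // source drained first
        dump.resize(oldSize + got);
    }
    // Buffer filled first, so the real code would spill to a temp file and
    // keep reading the remainder through ChainSource.
    std::cout << (inMemory ? "in memory" : "spill to disk") << "\n";
}
```
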
@@ -1186,22 +1124,34 @@ StorePath LocalStore::addToStore(const string & name, const Path & _srcPath,
             autoGC();
 
             if (inMemory) {
+                StringSource dumpSource { dump };
                 /* Restore from the NAR in memory. */
-                StringSource source(nar);
-                restorePath(realPath, source);
+                if (method == FileIngestionMethod::Recursive)
+                    restorePath(realPath, dumpSource);
+                else
+                    writeFile(realPath, dumpSource);
             } else {
                 /* Move the temporary path we restored above. */
                 if (rename(tempPath.c_str(), realPath.c_str()))
                     throw Error("renaming '%s' to '%s'", tempPath, realPath);
             }
 
+            /* For computing the nar hash. In recursive SHA-256 mode, this
+               is the same as the store hash, so no need to do it again. */
+            auto narHash = std::pair { hash, size };
+            if (method != FileIngestionMethod::Recursive || hashAlgo != htSHA256) {
+                HashSink narSink { htSHA256 };
+                dumpPath(realPath, narSink);
+                narHash = narSink.finish();
+            }
+
             canonicalisePathMetaData(realPath, -1); // FIXME: merge into restorePath
 
             optimisePath(realPath);
 
             ValidPathInfo info(dstPath);
-            info.narHash = sha256.first;
-            info.narSize = sha256.second;
+            info.narHash = narHash.first;
+            info.narSize = narHash.second;
             info.ca = FixedOutputHash { .method = method, .hash = hash };
             registerValidPath(info);
         }

@@ -153,7 +153,7 @@ public:
        in `dump', which is either a NAR serialisation (if recursive ==
        true) or simply the contents of a regular file (if recursive ==
        false). */
-    StorePath addToStoreFromDump(const string & dump, const string & name,
+    StorePath addToStoreFromDump(Source & dump, const string & name,
         FileIngestionMethod method = FileIngestionMethod::Recursive, HashType hashAlgo = htSHA256, RepairFlag repair = NoRepair) override;
 
     StorePath addTextToStore(const string & name, const string & s,

@@ -12,30 +12,24 @@
 namespace nix {
 
 
-static bool cmpGensByNumber(const Generation & a, const Generation & b)
-{
-    return a.number < b.number;
-}
-
-
 /* Parse a generation name of the format
    `<profilename>-<number>-link'. */
-static int parseName(const string & profileName, const string & name)
+static std::optional<GenerationNumber> parseName(const string & profileName, const string & name)
 {
-    if (string(name, 0, profileName.size() + 1) != profileName + "-") return -1;
+    if (string(name, 0, profileName.size() + 1) != profileName + "-") return {};
     string s = string(name, profileName.size() + 1);
     string::size_type p = s.find("-link");
-    if (p == string::npos) return -1;
-    int n;
+    if (p == string::npos) return {};
+    unsigned int n;
     if (string2Int(string(s, 0, p), n) && n >= 0)
         return n;
     else
-        return -1;
+        return {};
 }
 
 
-Generations findGenerations(Path profile, int & curGen)
+std::pair<Generations, std::optional<GenerationNumber>> findGenerations(Path profile)
 {
     Generations gens;
 

@@ -43,30 +37,34 @@ Generations findGenerations(Path profile, int & curGen)
     auto profileName = std::string(baseNameOf(profile));
 
     for (auto & i : readDirectory(profileDir)) {
-        int n;
-        if ((n = parseName(profileName, i.name)) != -1) {
-            Generation gen;
-            gen.path = profileDir + "/" + i.name;
-            gen.number = n;
+        if (auto n = parseName(profileName, i.name)) {
+            auto path = profileDir + "/" + i.name;
             struct stat st;
-            if (lstat(gen.path.c_str(), &st) != 0)
-                throw SysError("statting '%1%'", gen.path);
-            gen.creationTime = st.st_mtime;
-            gens.push_back(gen);
+            if (lstat(path.c_str(), &st) != 0)
+                throw SysError("statting '%1%'", path);
+            gens.push_back({
+                .number = *n,
+                .path = path,
+                .creationTime = st.st_mtime
+            });
         }
     }
 
-    gens.sort(cmpGensByNumber);
+    gens.sort([](const Generation & a, const Generation & b)
+    {
+        return a.number < b.number;
+    });
 
-    curGen = pathExists(profile)
+    return {
+        gens,
+        pathExists(profile)
         ? parseName(profileName, readLink(profile))
-        : -1;
-
-    return gens;
+        : std::nullopt
+    };
 }
 
 
-static void makeName(const Path & profile, unsigned int num,
+static void makeName(const Path & profile, GenerationNumber num,
     Path & outLink)
 {
     Path prefix = (format("%1%-%2%") % profile % num).str();

@@ -78,10 +76,9 @@ Path createGeneration(ref<LocalFSStore> store, Path profile, Path outPath)
 {
     /* The new generation number should be higher than old the
        previous ones. */
-    int dummy;
-    Generations gens = findGenerations(profile, dummy);
+    auto [gens, dummy] = findGenerations(profile);
 
-    unsigned int num;
+    GenerationNumber num;
     if (gens.size() > 0) {
         Generation last = gens.back();

@@ -121,7 +118,7 @@ static void removeFile(const Path & path)
 }
 
 
-void deleteGeneration(const Path & profile, unsigned int gen)
+void deleteGeneration(const Path & profile, GenerationNumber gen)
 {
     Path generation;
     makeName(profile, gen, generation);

@@ -129,7 +126,7 @@ void deleteGeneration(const Path & profile, unsigned int gen)
 }
 
 
-static void deleteGeneration2(const Path & profile, unsigned int gen, bool dryRun)
+static void deleteGeneration2(const Path & profile, GenerationNumber gen, bool dryRun)
 {
     if (dryRun)
         printInfo(format("would remove generation %1%") % gen);

@@ -140,31 +137,29 @@ static void deleteGeneration2(const Path & profile, unsigned int gen, bool dryRu
 }
 
 
-void deleteGenerations(const Path & profile, const std::set<unsigned int> & gensToDelete, bool dryRun)
+void deleteGenerations(const Path & profile, const std::set<GenerationNumber> & gensToDelete, bool dryRun)
 {
     PathLocks lock;
     lockProfile(lock, profile);
 
-    int curGen;
-    Generations gens = findGenerations(profile, curGen);
+    auto [gens, curGen] = findGenerations(profile);
 
-    if (gensToDelete.find(curGen) != gensToDelete.end())
+    if (gensToDelete.count(*curGen))
         throw Error("cannot delete current generation of profile %1%'", profile);
 
     for (auto & i : gens) {
-        if (gensToDelete.find(i.number) == gensToDelete.end()) continue;
+        if (!gensToDelete.count(i.number)) continue;
         deleteGeneration2(profile, i.number, dryRun);
     }
 }
 
-void deleteGenerationsGreaterThan(const Path & profile, int max, bool dryRun)
+void deleteGenerationsGreaterThan(const Path & profile, GenerationNumber max, bool dryRun)
 {
     PathLocks lock;
     lockProfile(lock, profile);
 
-    int curGen;
     bool fromCurGen = false;
-    Generations gens = findGenerations(profile, curGen);
+    auto [gens, curGen] = findGenerations(profile);
     for (auto i = gens.rbegin(); i != gens.rend(); ++i) {
         if (i->number == curGen) {
             fromCurGen = true;

@@ -186,8 +181,7 @@ void deleteOldGenerations(const Path & profile, bool dryRun)
     PathLocks lock;
     lockProfile(lock, profile);
 
-    int curGen;
-    Generations gens = findGenerations(profile, curGen);
+    auto [gens, curGen] = findGenerations(profile);
 
     for (auto & i : gens)
         if (i.number != curGen)

@@ -200,8 +194,7 @@ void deleteGenerationsOlderThan(const Path & profile, time_t t, bool dryRun)
     PathLocks lock;
     lockProfile(lock, profile);
 
-    int curGen;
-    Generations gens = findGenerations(profile, curGen);
+    auto [gens, curGen] = findGenerations(profile);
 
     bool canDelete = false;
     for (auto i = gens.rbegin(); i != gens.rend(); ++i)

@@ -9,37 +9,32 @@
 namespace nix {
 
 
+typedef unsigned int GenerationNumber;
+
 struct Generation
 {
-    int number;
+    GenerationNumber number;
     Path path;
     time_t creationTime;
-    Generation()
-    {
-        number = -1;
-    }
-    operator bool() const
-    {
-        return number != -1;
-    }
 };
 
-typedef list<Generation> Generations;
+typedef std::list<Generation> Generations;
 
 
 /* Returns the list of currently present generations for the specified
-   profile, sorted by generation number. */
-Generations findGenerations(Path profile, int & curGen);
+   profile, sorted by generation number. Also returns the number of
+   the current generation. */
+std::pair<Generations, std::optional<GenerationNumber>> findGenerations(Path profile);
 
 class LocalFSStore;
 
 Path createGeneration(ref<LocalFSStore> store, Path profile, Path outPath);
 
-void deleteGeneration(const Path & profile, unsigned int gen);
+void deleteGeneration(const Path & profile, GenerationNumber gen);
 
-void deleteGenerations(const Path & profile, const std::set<unsigned int> & gensToDelete, bool dryRun);
+void deleteGenerations(const Path & profile, const std::set<GenerationNumber> & gensToDelete, bool dryRun);
 
-void deleteGenerationsGreaterThan(const Path & profile, const int max, bool dryRun);
+void deleteGenerationsGreaterThan(const Path & profile, GenerationNumber max, bool dryRun);
 
 void deleteOldGenerations(const Path & profile, bool dryRun);
 

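Call sites migrate from the `int & curGen` out-parameter and its `-1` sentinel to a structured binding over the returned pair. A compilable sketch with local stand-ins for the header's types (the stub `findGenerations` body here is fabricated purely to make the example run):

```cpp
// Sketch of the new calling convention; types mirror profiles.hh loosely.
#include <iostream>
#include <list>
#include <optional>
#include <string>
#include <utility>

typedef unsigned int GenerationNumber;
struct Generation { GenerationNumber number; std::string path; };
typedef std::list<Generation> Generations;

static std::pair<Generations, std::optional<GenerationNumber>> findGenerations()
{
    Generations gens { {1, "p-1-link"}, {2, "p-2-link"} };
    return { gens, 2 };   // current generation is 2; nullopt if none
}

int main()
{
    auto [gens, curGen] = findGenerations();   // no sentinel, no out-param
    for (auto & g : gens)
        std::cout << g.number
                  << (curGen && *curGen == g.number ? " (current)" : "")
                  << "\n";
}
```
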
@@ -48,13 +48,12 @@ static void search(const unsigned char * s, size_t len,
 
 struct RefScanSink : Sink
 {
-    HashSink hashSink;
     StringSet hashes;
     StringSet seen;
 
     string tail;
 
-    RefScanSink() : hashSink(htSHA256) { }
+    RefScanSink() { }
 
     void operator () (const unsigned char * data, size_t len);
 };

@@ -62,8 +61,6 @@ struct RefScanSink : Sink
 
 void RefScanSink::operator () (const unsigned char * data, size_t len)
 {
-    hashSink(data, len);
-
     /* It's possible that a reference spans the previous and current
        fragment, so search in the concatenation of the tail of the
        previous fragment and the start of the current fragment. */

@@ -82,7 +79,9 @@ void RefScanSink::operator () (const unsigned char * data, size_t len)
 std::pair<PathSet, HashResult> scanForReferences(const string & path,
     const PathSet & refs)
 {
-    RefScanSink sink;
+    RefScanSink refsSink;
+    HashSink hashSink { htSHA256 };
+    TeeSink sink { refsSink, hashSink };
     std::map<string, Path> backMap;
 
     /* For efficiency (and a higher hit rate), just search for the

@@ -97,7 +96,7 @@ std::pair<PathSet, HashResult> scanForReferences(const string & path,
         assert(s.size() == refLength);
         assert(backMap.find(s) == backMap.end());
         // parseHash(htSHA256, s);
-        sink.hashes.insert(s);
+        refsSink.hashes.insert(s);
         backMap[s] = i;
     }
 

|
@ -106,13 +105,13 @@ std::pair<PathSet, HashResult> scanForReferences(const string & path,
|
|||
|
||||
/* Map the hashes found back to their store paths. */
|
||||
PathSet found;
|
||||
for (auto & i : sink.seen) {
|
||||
for (auto & i : refsSink.seen) {
|
||||
std::map<string, Path>::iterator j;
|
||||
if ((j = backMap.find(i)) == backMap.end()) abort();
|
||||
found.insert(j->second);
|
||||
}
|
||||
|
||||
auto hash = sink.hashSink.finish();
|
||||
auto hash = hashSink.finish();
|
||||
|
||||
return std::pair<PathSet, HashResult>(found, hash);
|
||||
}
|
||||
|
|
|
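The essence of this change: hashing no longer lives inside `RefScanSink`; a `TeeSink` fans the single input stream out to the scanner and the hasher, so the data is still read only once. A minimal sketch of that fan-out (simplified `Sink` interface; the two concrete sinks are toy stand-ins for `RefScanSink`/`HashSink`):

```cpp
// Sketch only: one write call is replicated to two consumers.
#include <cstddef>
#include <iostream>
#include <string>

struct Sink { virtual void operator()(const unsigned char * data, size_t len) = 0; virtual ~Sink() = default; };

struct CountSink : Sink {              // stand-in for HashSink
    size_t n = 0;
    void operator()(const unsigned char *, size_t len) override { n += len; }
};

struct GrepSink : Sink {               // stand-in for RefScanSink
    std::string needle, buf; bool found = false;
    void operator()(const unsigned char * data, size_t len) override {
        buf.append((const char *) data, len);          // toy: buffers everything
        found = found || buf.find(needle) != std::string::npos;
    }
};

struct TeeSink : Sink {
    Sink & a, & b;
    TeeSink(Sink & a, Sink & b) : a(a), b(b) { }
    void operator()(const unsigned char * data, size_t len) override { a(data, len); b(data, len); }
};

int main()
{
    GrepSink refs; refs.needle = "abc";
    CountSink hash;
    TeeSink sink { refs, hash };
    std::string chunk = "xxabcxx";
    sink((const unsigned char *) chunk.data(), chunk.size());
    std::cout << "found=" << refs.found << " bytes=" << hash.n << "\n"; // found=1 bytes=7
}
```
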
@@ -222,20 +222,73 @@ StorePath Store::computeStorePathForText(const string & name, const string & s,
 }
 
 
+/*
+The aim of this function is to compute in one pass the correct ValidPathInfo for
+the files that we are trying to add to the store. To accomplish that in one
+pass, given the different kind of inputs that we can take (normal nar archives,
+nar archives with non SHA-256 hashes, and flat files), we set up a net of sinks
+and aliases. Also, since the dataflow is obfuscated by this, we include here a
+graphviz diagram:
+
+digraph graphname {
+    node [shape=box]
+    fileSource -> narSink
+    narSink [style=dashed]
+    narSink -> unsualHashTee [style = dashed, label = "Recursive && !SHA-256"]
+    narSink -> narHashSink [style = dashed, label = "else"]
+    unsualHashTee -> narHashSink
+    unsualHashTee -> caHashSink
+    fileSource -> parseSink
+    parseSink [style=dashed]
+    parseSink-> fileSink [style = dashed, label = "Flat"]
+    parseSink -> blank [style = dashed, label = "Recursive"]
+    fileSink -> caHashSink
+}
+*/
 ValidPathInfo Store::addToStoreSlow(std::string_view name, const Path & srcPath,
     FileIngestionMethod method, HashType hashAlgo,
     std::optional<Hash> expectedCAHash)
 {
-    /* FIXME: inefficient: we're reading/hashing 'tmpFile' three
-       times. */
-
-    auto [narHash, narSize] = hashPath(htSHA256, srcPath);
-
-    auto hash = method == FileIngestionMethod::Recursive
-        ? hashAlgo == htSHA256
-            ? narHash
-            : hashPath(hashAlgo, srcPath).first
-        : hashFile(hashAlgo, srcPath);
+    HashSink narHashSink { htSHA256 };
+    HashSink caHashSink { hashAlgo };
+
+    /* Note that fileSink and unusualHashTee must be mutually exclusive, since
+       they both write to caHashSink. Note that that requisite is currently true
+       because the former is only used in the flat case. */
+    RetrieveRegularNARSink fileSink { caHashSink };
+    TeeSink unusualHashTee { narHashSink, caHashSink };
+
+    auto & narSink = method == FileIngestionMethod::Recursive && hashAlgo != htSHA256
+        ? static_cast<Sink &>(unusualHashTee)
+        : narHashSink;
+
+    /* Functionally, this means that fileSource will yield the content of
+       srcPath. The fact that we use scratchpadSink as a temporary buffer here
+       is an implementation detail. */
+    auto fileSource = sinkToSource([&](Sink & scratchpadSink) {
+        dumpPath(srcPath, scratchpadSink);
+    });
+
+    /* tapped provides the same data as fileSource, but we also write all the
+       information to narSink. */
+    TeeSource tapped { *fileSource, narSink };
+
+    ParseSink blank;
+    auto & parseSink = method == FileIngestionMethod::Flat
+        ? fileSink
+        : blank;
+
+    /* The information that flows from tapped (besides being replicated in
+       narSink), is now put in parseSink. */
+    parseDump(parseSink, tapped);
+
+    /* We extract the result of the computation from the sink by calling
+       finish. */
+    auto [narHash, narSize] = narHashSink.finish();
+
+    auto hash = method == FileIngestionMethod::Recursive && hashAlgo == htSHA256
+        ? narHash
+        : caHashSink.finish().first;
 
     if (expectedCAHash && expectedCAHash != hash)
         throw Error("hash mismatch for '%s'", srcPath);

@@ -246,8 +299,8 @@ ValidPathInfo Store::addToStoreSlow(std::string_view name, const Path & srcPath,
     info.ca = FixedOutputHash { .method = method, .hash = hash };
 
     if (!isValidPath(info.path)) {
-        auto source = sinkToSource([&](Sink & sink) {
-            dumpPath(srcPath, sink);
+        auto source = sinkToSource([&](Sink & scratchpadSink) {
+            dumpPath(srcPath, scratchpadSink);
         });
         addToStore(info, *source);
     }

@@ -914,12 +967,20 @@ ref<Store> openStore(const std::string & uri_,
     throw Error("don't know how to open Nix store '%s'", uri);
 }
 
+static bool isNonUriPath(const std::string & spec) {
+    return
+        // is not a URL
+        spec.find("://") == std::string::npos
+        // Has at least one path separator, and so isn't a single word that
+        // might be special like "auto"
+        && spec.find("/") != std::string::npos;
+}
+
 StoreType getStoreType(const std::string & uri, const std::string & stateDir)
 {
     if (uri == "daemon") {
         return tDaemon;
-    } else if (uri == "local" || hasPrefix(uri, "/")) {
+    } else if (uri == "local" || isNonUriPath(uri)) {
         return tLocal;
     } else if (uri == "" || uri == "auto") {
         if (access(stateDir.c_str(), R_OK | W_OK) == 0)

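The predicate is small enough to restate as a runnable check. The rule: no `://` and at least one `/`, so relative store paths like `./x` are accepted while bare words like `auto` keep their special meaning (the follow-up hunk then applies `absPath()` so relative paths actually work):

```cpp
// Self-contained restatement of isNonUriPath with a few expected cases.
#include <cassert>
#include <string>

static bool isNonUriPath(const std::string & spec) {
    return spec.find("://") == std::string::npos   // not a URL
        && spec.find("/") != std::string::npos;    // has a path separator
}

int main()
{
    assert(isNonUriPath("/nix/store"));
    assert(isNonUriPath("./x"));                    // relative paths now qualify
    assert(!isNonUriPath("auto"));                  // single word, still special
    assert(!isNonUriPath("daemon"));
    assert(!isNonUriPath("https://cache.nixos.org"));
}
```
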
@@ -943,8 +1004,9 @@ static RegisterStoreImplementation regStore([](
             return std::shared_ptr<Store>(std::make_shared<UDSRemoteStore>(params));
         case tLocal: {
             Store::Params params2 = params;
-            if (hasPrefix(uri, "/"))
-                params2["root"] = uri;
+            if (isNonUriPath(uri)) {
+                params2["root"] = absPath(uri);
+            }
             return std::shared_ptr<Store>(std::make_shared<LocalStore>(params2));
         }
         default:

@@ -461,7 +461,7 @@ public:
         std::optional<Hash> expectedCAHash = {});
 
     // FIXME: remove?
-    virtual StorePath addToStoreFromDump(const string & dump, const string & name,
+    virtual StorePath addToStoreFromDump(Source & dump, const string & name,
         FileIngestionMethod method = FileIngestionMethod::Recursive, HashType hashAlgo = htSHA256, RepairFlag repair = NoRepair)
     {
         throw Error("addToStoreFromDump() is not supported by this store");

@@ -63,12 +63,29 @@ struct ParseSink
     virtual void createSymlink(const Path & path, const string & target) { };
 };
 
-struct TeeParseSink : ParseSink
+/* If the NAR archive contains a single file at top-level, then save
+   the contents of the file to `s'. Otherwise barf. */
+struct RetrieveRegularNARSink : ParseSink
 {
-    StringSink saved;
-    TeeSource source;
+    bool regular = true;
+    Sink & sink;
 
-    TeeParseSink(Source & source) : source(source, saved) { }
+    RetrieveRegularNARSink(Sink & sink) : sink(sink) { }
+
+    void createDirectory(const Path & path)
+    {
+        regular = false;
+    }
+
+    void receiveContents(unsigned char * data, unsigned int len)
+    {
+        sink(data, len);
+    }
+
+    void createSymlink(const Path & path, const string & target)
+    {
+        regular = false;
+    }
 };
 
 void parseDump(ParseSink & sink, Source & source);

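Where the removed `TeeParseSink` buffered a whole NAR, the relocated `RetrieveRegularNARSink` streams a single regular file straight into a wrapped `Sink`, flipping `regular` to false if the archive turns out to contain a directory or symlink. A simplified, runnable mirror of that behaviour (interfaces trimmed to the relevant callbacks; no real NAR parsing here):

```cpp
// Sketch only: hand-fed "parse events" instead of parseDump().
#include <cstddef>
#include <iostream>
#include <string>

struct Sink { virtual void operator()(const unsigned char * d, size_t n) = 0; virtual ~Sink() = default; };
struct StringSink : Sink {
    std::string s;
    void operator()(const unsigned char * d, size_t n) override { s.append((const char *) d, n); }
};

struct RetrieveRegularNARSink {
    bool regular = true;
    Sink & sink;
    RetrieveRegularNARSink(Sink & sink) : sink(sink) { }
    void createDirectory(const std::string &) { regular = false; }           // not a single file
    void receiveContents(const unsigned char * data, unsigned int len) { sink(data, len); }
    void createSymlink(const std::string &, const std::string &) { regular = false; }
};

int main()
{
    StringSink out;
    RetrieveRegularNARSink saved { out };
    std::string data = "hello";
    saved.receiveContents((const unsigned char *) data.data(), data.size());
    std::cout << out.s << " regular=" << saved.regular << "\n"; // hello regular=1
}
```
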
@@ -322,5 +322,18 @@ void StringSink::operator () (const unsigned char * data, size_t len)
     s->append((const char *) data, len);
 }
 
+size_t ChainSource::read(unsigned char * data, size_t len)
+{
+    if (useSecond) {
+        return source2.read(data, len);
+    } else {
+        try {
+            return source1.read(data, len);
+        } catch (EndOfFile &) {
+            useSecond = true;
+            return this->read(data, len);
+        }
+    }
+}
 
 }

@@ -189,7 +189,7 @@ struct TeeSource : Source
     size_t read(unsigned char * data, size_t len)
     {
         size_t n = orig.read(data, len);
-        sink(data, len);
+        sink(data, n);
         return n;
     }
 };

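This one-character fix matters because `read()` may legitimately return fewer bytes than requested; teeing `len` bytes would copy trailing garbage past what was actually read. A self-contained demonstration with a deliberately short-reading source (simplified mirrors; the real `Source` signals end-of-stream with `EndOfFile` rather than returning 0):

```cpp
// Sketch only: shows why the tee must forward n, not len.
#include <algorithm>
#include <cassert>
#include <cstring>
#include <string>

struct Source { virtual size_t read(unsigned char * data, size_t len) = 0; virtual ~Source() = default; };
struct Sink { std::string s; void operator()(const unsigned char * d, size_t n) { s.append((const char *) d, n); } };

struct ShortSource : Source {          // always yields at most 3 bytes
    std::string data = "abcdef"; size_t pos = 0;
    size_t read(unsigned char * buf, size_t len) override {
        size_t n = std::min(std::min(len, (size_t) 3), data.size() - pos);
        std::memcpy(buf, data.data() + pos, n); pos += n; return n;
    }
};

struct TeeSource : Source {
    Source & orig; Sink & sink;
    TeeSource(Source & orig, Sink & sink) : orig(orig), sink(sink) { }
    size_t read(unsigned char * data, size_t len) override {
        size_t n = orig.read(data, len);
        sink(data, n);                 // the fix: was sink(data, len)
        return n;
    }
};

int main()
{
    ShortSource src; Sink copy; TeeSource tee { src, copy };
    unsigned char buf[16];
    while (tee.read(buf, sizeof buf)) { }
    assert(copy.s == "abcdef");        // with sink(data, len) the copy would contain junk
}
```
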
@@ -256,6 +256,19 @@ struct LambdaSource : Source
     }
 };
 
+/* Chain two sources together so after the first is exhausted, the second is
+   used */
+struct ChainSource : Source
+{
+    Source & source1, & source2;
+    bool useSecond = false;
+    ChainSource(Source & s1, Source & s2)
+        : source1(s1), source2(s2)
+    { }
+
+    size_t read(unsigned char * data, size_t len) override;
+};
+
 
 /* Convert a function that feeds data into a Sink into a Source. The
    Source executes the function as a coroutine. */

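`ChainSource` exists for the spill path in `addToStoreFromDump`: replay the already-buffered prefix, then keep reading from the live source. A runnable usage sketch with simplified mirrors of `StringSource`/`ChainSource` (as in the diff, `EndOfFile` from the first source is what flips the chain to the second):

```cpp
// Sketch only: chain a buffered prefix with the remainder of a stream.
#include <algorithm>
#include <cassert>
#include <cstring>
#include <stdexcept>
#include <string>

struct EndOfFile : std::runtime_error { EndOfFile() : std::runtime_error("eof") {} };
struct Source { virtual size_t read(unsigned char * data, size_t len) = 0; virtual ~Source() = default; };

struct StringSource : Source {
    const std::string & s; size_t pos = 0;
    StringSource(const std::string & s) : s(s) { }
    size_t read(unsigned char * data, size_t len) override {
        if (pos == s.size()) throw EndOfFile();
        size_t n = std::min(len, s.size() - pos);
        std::memcpy(data, s.data() + pos, n); pos += n; return n;
    }
};

struct ChainSource : Source {
    Source & source1, & source2; bool useSecond = false;
    ChainSource(Source & s1, Source & s2) : source1(s1), source2(s2) { }
    size_t read(unsigned char * data, size_t len) override {
        if (useSecond) return source2.read(data, len);
        try { return source1.read(data, len); }
        catch (EndOfFile &) { useSecond = true; return read(data, len); }
    }
};

int main()
{
    std::string bufferedPrefix = "nar-head-", rest = "nar-tail";
    StringSource dumpSource { bufferedPrefix }, live { rest };
    ChainSource both { dumpSource, live };

    std::string out; unsigned char buf[4];
    try { while (true) out.append((char *) buf, both.read(buf, sizeof buf)); }
    catch (EndOfFile &) { }
    assert(out == "nar-head-nar-tail");
}
```
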
@@ -1581,7 +1581,7 @@ AutoCloseFD createUnixDomainSocket(const Path & path, mode_t mode)
 
     struct sockaddr_un addr;
     addr.sun_family = AF_UNIX;
-    if (path.size() >= sizeof(addr.sun_path))
+    if (path.size() + 1 >= sizeof(addr.sun_path))
         throw Error("socket path '%1%' is too long", path);
     strcpy(addr.sun_path, path.c_str());
 

@@ -381,7 +381,8 @@ static void queryInstSources(EvalState & state,
 
                 if (path.isDerivation()) {
                     elem.setDrvPath(state.store->printStorePath(path));
-                    elem.setOutPath(state.store->printStorePath(state.store->derivationFromPath(path).findOutput("out")));
+                    auto outputs = state.store->queryDerivationOutputMap(path);
+                    elem.setOutPath(state.store->printStorePath(outputs.at("out")));
                     if (name.size() >= drvExtension.size() &&
                         string(name, name.size() - drvExtension.size()) == drvExtension)
                         name = string(name, 0, name.size() - drvExtension.size());

@@ -1208,18 +1209,17 @@ static void opSwitchProfile(Globals & globals, Strings opFlags, Strings opArgs)
 }
 
 
-static const int prevGen = -2;
+static constexpr GenerationNumber prevGen = std::numeric_limits<GenerationNumber>::max();
 
 
-static void switchGeneration(Globals & globals, int dstGen)
+static void switchGeneration(Globals & globals, GenerationNumber dstGen)
 {
     PathLocks lock;
     lockProfile(lock, globals.profile);
 
-    int curGen;
-    Generations gens = findGenerations(globals.profile, curGen);
+    auto [gens, curGen] = findGenerations(globals.profile);
 
-    Generation dst;
+    std::optional<Generation> dst;
     for (auto & i : gens)
         if ((dstGen == prevGen && i.number < curGen) ||
             (dstGen >= 0 && i.number == dstGen))

@@ -1227,18 +1227,16 @@ static void switchGeneration(Globals & globals, int dstGen)
 
     if (!dst) {
         if (dstGen == prevGen)
-            throw Error("no generation older than the current (%1%) exists",
-                curGen);
+            throw Error("no generation older than the current (%1%) exists", curGen.value_or(0));
         else
             throw Error("generation %1% does not exist", dstGen);
     }
 
-    printInfo(format("switching from generation %1% to %2%")
-        % curGen % dst.number);
+    printInfo("switching from generation %1% to %2%", curGen.value_or(0), dst->number);
 
     if (globals.dryRun) return;
 
-    switchLink(globals.profile, dst.path);
+    switchLink(globals.profile, dst->path);
 }
 
 

@@ -1249,7 +1247,7 @@ static void opSwitchGeneration(Globals & globals, Strings opFlags, Strings opArg
     if (opArgs.size() != 1)
         throw UsageError("exactly one argument expected");
 
-    int dstGen;
+    GenerationNumber dstGen;
     if (!string2Int(opArgs.front(), dstGen))
         throw UsageError("expected a generation number");
 

@@ -1278,8 +1276,7 @@ static void opListGenerations(Globals & globals, Strings opFlags, Strings opArgs
     PathLocks lock;
     lockProfile(lock, globals.profile);
 
-    int curGen;
-    Generations gens = findGenerations(globals.profile, curGen);
+    auto [gens, curGen] = findGenerations(globals.profile);
 
     RunPager pager;
 

@@ -1308,14 +1305,14 @@ static void opDeleteGenerations(Globals & globals, Strings opFlags, Strings opAr
             if(opArgs.front().size() < 2)
                 throw Error("invalid number of generations ‘%1%’", opArgs.front());
             string str_max = string(opArgs.front(), 1, opArgs.front().size());
-            int max;
+            GenerationNumber max;
             if (!string2Int(str_max, max) || max == 0)
                 throw Error("invalid number of generations to keep ‘%1%’", opArgs.front());
             deleteGenerationsGreaterThan(globals.profile, max, globals.dryRun);
         } else {
-            std::set<unsigned int> gens;
+            std::set<GenerationNumber> gens;
             for (auto & i : opArgs) {
-                unsigned int n;
+                GenerationNumber n;
                 if (!string2Int(i, n))
                     throw UsageError("invalid generation number '%1%'", i);
                 gens.insert(n);

@@ -244,4 +244,10 @@ void completeFlakeRefWithFragment(
     const Strings & defaultFlakeAttrPaths,
     std::string_view prefix);
 
+void printClosureDiff(
+    ref<Store> store,
+    const StorePath & beforePath,
+    const StorePath & afterPath,
+    std::string_view indent);
+
 }

@@ -6,7 +6,7 @@
 
 #include <regex>
 
-using namespace nix;
+namespace nix {
 
 struct Info
 {

@@ -52,40 +52,12 @@ std::string showVersions(const std::set<std::string> & versions)
     return concatStringsSep(", ", versions2);
 }
 
-struct CmdDiffClosures : SourceExprCommand
+void printClosureDiff(
+    ref<Store> store,
+    const StorePath & beforePath,
+    const StorePath & afterPath,
+    std::string_view indent)
 {
-    std::string _before, _after;
-
-    CmdDiffClosures()
-    {
-        expectArg("before", &_before);
-        expectArg("after", &_after);
-    }
-
-    std::string description() override
-    {
-        return "show what packages and versions were added and removed between two closures";
-    }
-
-    Category category() override { return catSecondary; }
-
-    Examples examples() override
-    {
-        return {
-            {
-                "To show what got added and removed between two versions of the NixOS system profile:",
-                "nix diff-closures /nix/var/nix/profiles/system-655-link /nix/var/nix/profiles/system-658-link",
-            },
-        };
-    }
-
-    void run(ref<Store> store) override
-    {
-        auto before = parseInstallable(store, _before);
-        auto beforePath = toStorePath(store, Realise::Outputs, operateOn, before);
-        auto after = parseInstallable(store, _after);
-        auto afterPath = toStorePath(store, Realise::Outputs, operateOn, after);
-
-        auto beforeClosure = getClosureInfo(store, beforePath);
-        auto afterClosure = getClosureInfo(store, afterPath);
+    auto beforeClosure = getClosureInfo(store, beforePath);
+    auto afterClosure = getClosureInfo(store, afterPath);
 

@@ -125,9 +97,49 @@ struct CmdDiffClosures : SourceExprCommand
                 items.push_back(fmt("%s → %s", showVersions(removed), showVersions(added)));
             if (showDelta)
                 items.push_back(fmt("%s%+.1f KiB" ANSI_NORMAL, sizeDelta > 0 ? ANSI_RED : ANSI_GREEN, sizeDelta / 1024.0));
-            std::cout << fmt("%s: %s\n", name, concatStringsSep(", ", items));
+            std::cout << fmt("%s%s: %s\n", indent, name, concatStringsSep(", ", items));
         }
     }
 }
 
+}
+
+using namespace nix;
+
+struct CmdDiffClosures : SourceExprCommand
+{
+    std::string _before, _after;
+
+    CmdDiffClosures()
+    {
+        expectArg("before", &_before);
+        expectArg("after", &_after);
+    }
+
+    std::string description() override
+    {
+        return "show what packages and versions were added and removed between two closures";
+    }
+
+    Category category() override { return catSecondary; }
+
+    Examples examples() override
+    {
+        return {
+            {
+                "To show what got added and removed between two versions of the NixOS system profile:",
+                "nix diff-closures /nix/var/nix/profiles/system-655-link /nix/var/nix/profiles/system-658-link",
+            },
+        };
+    }
+
+    void run(ref<Store> store) override
+    {
+        auto before = parseInstallable(store, _before);
+        auto beforePath = toStorePath(store, Realise::Outputs, operateOn, before);
+        auto after = parseInstallable(store, _after);
+        auto afterPath = toStorePath(store, Realise::Outputs, operateOn, after);
+        printClosureDiff(store, beforePath, afterPath, "");
+    }
+};
+

@@ -45,6 +45,7 @@ struct CmdEdit : InstallableCommand
 
         auto args = editorFor(pos);
 
+        restoreSignals();
         execvp(args.front().c_str(), stringsToCharPtrs(args).data());
 
         std::string command;

@@ -7,6 +7,7 @@
 #include "builtins/buildenv.hh"
 #include "flake/flakeref.hh"
 #include "../nix-env/user-env.hh"
+#include "profiles.hh"
 
 #include <nlohmann/json.hpp>
 #include <regex>

@@ -394,6 +395,46 @@ struct CmdProfileInfo : virtual EvalCommand, virtual StoreCommand, MixDefaultPro
     }
 };
 
+struct CmdProfileDiffClosures : virtual StoreCommand, MixDefaultProfile
+{
+    std::string description() override
+    {
+        return "show the closure difference between each generation of a profile";
+    }
+
+    Examples examples() override
+    {
+        return {
+            Example{
+                "To show what changed between each generation of the NixOS system profile:",
+                "nix profile diff-closure --profile /nix/var/nix/profiles/system"
+            },
+        };
+    }
+
+    void run(ref<Store> store) override
+    {
+        auto [gens, curGen] = findGenerations(*profile);
+
+        std::optional<Generation> prevGen;
+        bool first = true;
+
+        for (auto & gen : gens) {
+            if (prevGen) {
+                if (!first) std::cout << "\n";
+                first = false;
+                std::cout << fmt("Generation %d -> %d:\n", prevGen->number, gen.number);
+                printClosureDiff(store,
+                    store->followLinksToStorePath(prevGen->path),
+                    store->followLinksToStorePath(gen.path),
+                    "  ");
+            }
+
+            prevGen = gen;
+        }
+    }
+};
+
 struct CmdProfile : virtual MultiCommand, virtual Command
 {
     CmdProfile()

@@ -402,6 +443,7 @@ struct CmdProfile : virtual MultiCommand, virtual Command
           {"remove", []() { return make_ref<CmdProfileRemove>(); }},
           {"upgrade", []() { return make_ref<CmdProfileUpgrade>(); }},
           {"info", []() { return make_ref<CmdProfileInfo>(); }},
+          {"diff-closures", []() { return make_ref<CmdProfileDiffClosures>(); }},
       })
     { }
 

@@ -425,4 +467,3 @@ struct CmdProfile : virtual MultiCommand, virtual Command
 };
 
 static auto r1 = registerCommand<CmdProfile>("profile");
-

@@ -111,6 +111,7 @@ struct CmdRegistryPin : virtual Args, EvalCommand
         fetchers::Attrs extraAttrs;
         if (ref.subdir != "") extraAttrs["dir"] = ref.subdir;
         userRegistry->add(ref.input, resolved, extraAttrs);
+        userRegistry->write(fetchers::getUserRegistryPath());
     }
 };

@@ -18,7 +18,6 @@ registry=$TEST_ROOT/registry.json
 flake1Dir=$TEST_ROOT/flake1
 flake2Dir=$TEST_ROOT/flake2
 flake3Dir=$TEST_ROOT/flake3
-flake4Dir=$TEST_ROOT/flake4
 flake5Dir=$TEST_ROOT/flake5
 flake6Dir=$TEST_ROOT/flake6
 flake7Dir=$TEST_ROOT/flake7

@@ -390,14 +389,12 @@ cat > $flake3Dir/flake.nix <<EOF
   };
 }
 EOF
-git -C $flake3Dir add flake.nix
+nix flake update $flake3Dir
+git -C $flake3Dir add flake.nix flake.lock
 git -C $flake3Dir commit -m 'Remove packages.xyzzy'
 git -C $flake3Dir checkout master
 
-# Test whether fuzzy-matching works for IsAlias
-(! nix build -o $TEST_ROOT/result flake4/removeXyzzy#xyzzy)
-
-# Test whether fuzzy-matching works for IsGit
+# Test whether fuzzy-matching works for registry entries.
 (! nix build -o $TEST_ROOT/result flake4/removeXyzzy#xyzzy)
 nix build -o $TEST_ROOT/result flake4/removeXyzzy#sth
 

tests/local-store.sh (new file, 20 lines)

@@ -0,0 +1,20 @@
+source common.sh
+
+cd $TEST_ROOT
+
+echo example > example.txt
+mkdir -p ./x
+
+NIX_STORE_DIR=$TEST_ROOT/x
+
+CORRECT_PATH=$(nix-store --store ./x --add example.txt)
+
+PATH1=$(nix path-info --store ./x $CORRECT_PATH)
+[ $CORRECT_PATH == $PATH1 ]
+
+PATH2=$(nix path-info --store "$PWD/x" $CORRECT_PATH)
+[ $CORRECT_PATH == $PATH2 ]
+
+# FIXME we could also test the query parameter version:
+# PATH3=$(nix path-info --store "local?store=$PWD/x" $CORRECT_PATH)
+# [ $CORRECT_PATH == $PATH3 ]

@@ -6,7 +6,7 @@ nix_tests = \
   gc-auto.sh \
   referrers.sh user-envs.sh logging.sh nix-build.sh misc.sh fixed.sh \
   gc-runtime.sh check-refs.sh filter-source.sh \
-  remote-store.sh export.sh export-graph.sh \
+  local-store.sh remote-store.sh export.sh export-graph.sh \
   timeout.sh secure-drv-outputs.sh nix-channel.sh \
   multiple-outputs.sh import-derivation.sh fetchurl.sh optimise-store.sh \
   binary-cache.sh nix-profile.sh repair.sh dump-db.sh case-hack.sh \