This provides a pluggable mechanism for defining new fetchers. It adds
a builtin function 'fetchTree' that generalizes existing fetchers like
'fetchGit', 'fetchMercurial' and 'fetchTarball'. 'fetchTree' takes a
set of attributes, e.g.
  fetchTree {
    type = "git";
    url = "https://example.org/repo.git";
    ref = "some-branch";
    rev = "abcdef...";
  }
The existing fetchers are just wrappers around this. Note that the
input attributes to fetchTree are the same as flake input
specifications and flake lock file entries.
All fetchers share a common cache stored in
~/.cache/nix/fetcher-cache-v1.sqlite. This replaces the ad hoc caching
mechanisms in fetchGit and download.cc (e.g. ~/.cache/nix/{tarballs,git-revs*}).
This also adds support for Git worktrees (c169ea5904).
When encountering an unsupported protocol, there's no need to retry.
Chances are, it won't suddenly become supported between retry attempts,
so fail with an error instead. Otherwise, you see something like the following:
$ nix-env -i -f git://git@github.com/foo/bar
warning: unable to download 'git://git@github.com/foo/bar': Unsupported protocol (1); retrying in 335 ms
warning: unable to download 'git://git@github.com/foo/bar': Unsupported protocol (1); retrying in 604 ms
warning: unable to download 'git://git@github.com/foo/bar': Unsupported protocol (1); retrying in 1340 ms
warning: unable to download 'git://git@github.com/foo/bar': Unsupported protocol (1); retrying in 2685 ms
With this change, you now see:
$ nix-env -i -f git://git@github.com/foo/bar
error: unable to download 'git://git@github.com/foo/bar': Unsupported protocol (1)
Most functions now take a StorePath argument rather than a Path (which
is just an alias for std::string). The StorePath constructor ensures
that the path is syntactically correct (i.e. it looks like
<store-dir>/<base32-hash>-<name>). Similarly, functions like
buildPaths() now take a StorePathWithOutputs, rather than abusing Path
by adding a '!<outputs>' suffix.
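As a rough sketch of the idea (the real StorePath is implemented in Rust, as described below, and does full validation; the field names here are illustrative), the pair of a store path and its requested outputs now looks something like this instead of a '!'-suffixed string:
```
#include <set>
#include <string>

// Illustrative only: a syntactically checked store path plus the set of
// requested outputs, instead of a Path like "<path>!out,dev".
struct StorePath
{
    std::string baseName;   // "<base32-hash>-<name>", checked on construction
};

struct StorePathWithOutputs
{
    StorePath path;
    std::set<std::string> outputs;   // e.g. {"out", "dev"}
};
```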
Note that the StorePath type is implemented in Rust. This involves
some hackery to allow Rust values to be used directly in C++, via a
helper type whose destructor calls the Rust type's drop()
function. The main issue is the dynamic nature of C++ move semantics:
after we have moved a Rust value, we should not call the drop function
on the original value. So when we move a value, we set the original
value to bitwise zero, and the destructor only calls drop() if the
value is not bitwise zero. This should be sufficient for most types.
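A minimal sketch of that helper type, assuming a hypothetical extern "C" function `rust_drop_storepath` that forwards to the Rust drop() implementation and an illustrative payload size:
```
#include <cstring>

// Hypothetical FFI shim that calls the Rust type's drop().
extern "C" void rust_drop_storepath(void * self);

struct RustStorePath
{
    unsigned char raw[16];   // opaque bytes owned by Rust (size is illustrative)

    RustStorePath(RustStorePath && other)
    {
        std::memcpy(raw, other.raw, sizeof(raw));
        // Mark the moved-from value as "empty" so its destructor is a no-op.
        std::memset(other.raw, 0, sizeof(other.raw));
    }

    ~RustStorePath()
    {
        // Only drop values that haven't been moved away: a bitwise-zero
        // payload marks a moved-from (or already dropped) value.
        unsigned char zero[sizeof(raw)] = {};
        if (std::memcmp(raw, zero, sizeof(raw)) != 0)
            rust_drop_storepath(raw);
    }
};
```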
Also lots of minor cleanups to the C++ API to make it more modern
(e.g. using std::optional and std::string_view in some places).
Also, fetchGit now runs in O(1) memory since we pipe the output of
'git archive' directly into unpackTarball() (rather than first reading
it all into memory).
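The streaming idea, in simplified form (the `TarSink` type and `streamGitArchive` helper are hypothetical, not Nix's actual API; the real code feeds the stream into unpackTarball()):
```
#include <cstdio>
#include <functional>
#include <stdexcept>
#include <string>

using TarSink = std::function<void(const char * data, size_t len)>;

void streamGitArchive(const std::string & rev, TarSink sink)
{
    std::string cmd = "git archive " + rev;
    FILE * pipe = popen(cmd.c_str(), "r");
    if (!pipe) throw std::runtime_error("cannot run git archive");
    char buf[64 * 1024];
    size_t n;
    // Memory use stays bounded by the buffer size, regardless of repo size.
    while ((n = fread(buf, 1, sizeof(buf), pipe)) > 0)
        sink(buf, n);
    pclose(pipe);
}
```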
Make curl's low speed limit configurable via stalled-download-timeout.
Before, this limit was five minutes without receiving a single byte.
This is much too long, since in that time the remote end may not even
have acknowledged the HTTP request.
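In libcurl terms, the stall detection maps onto the "low speed" options; a sketch, where `stalledDownloadTimeout` stands in for the value of the new stalled-download-timeout setting (in seconds):
```
#include <curl/curl.h>

// Sketch only: abort the transfer if less than one byte per second
// arrives for `stalledDownloadTimeout` seconds, instead of the old
// hard-coded 300 seconds.
void applyStalledDownloadTimeout(CURL * req, long stalledDownloadTimeout)
{
    curl_easy_setopt(req, CURLOPT_LOW_SPEED_LIMIT, 1L);
    curl_easy_setopt(req, CURLOPT_LOW_SPEED_TIME, stalledDownloadTimeout);
}
```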
This is a much simpler fix to the 'error 9 while decompressing xz
file' problem than 78fa47a7f0. We just
do a ranged HTTP request starting after the data that we previously
wrote into the sink.
Fixes #2952, #379.
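A sketch of the ranged-request idea using libcurl's `CURLOPT_RANGE` (the function name and the way the byte count is tracked are assumptions for illustration):
```
#include <curl/curl.h>
#include <string>

// If `bytesAlreadyWritten` bytes were already fed into the sink, ask the
// server to resume from that offset instead of restarting from byte 0.
void resumeFrom(CURL * req, size_t bytesAlreadyWritten)
{
    if (bytesAlreadyWritten > 0) {
        std::string range = std::to_string(bytesAlreadyWritten) + "-";
        curl_easy_setopt(req, CURLOPT_RANGE, range.c_str());
    }
}
```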
Setting `http2 = false` in nix config (e.g. /etc/nix/nix.conf)
had no effect, and `nix-env -vvvvv -i hello` still downloaded .nar
packages using HTTP/2.
In `src/libstore/download.cc`, the `CURL_HTTP_VERSION_2TLS` option was
being explicitly set when `downloadSettings.enableHttp2` was `true`,
but the `CURL_HTTP_VERSION_1_1` option was not being explicitly set when
`downloadSettings.enableHttp2` was `false`.
This may be because `https://curl.haxx.se/libcurl/c/libcurl-env.html` states:
"You have to set this option if you want to use libcurl's HTTP/2 support."
but also states, in the changelog section:
"DEFAULT
Since curl 7.62.0: CURL_HTTP_VERSION_2TLS
Before that: CURL_HTTP_VERSION_1_1"
So, the default setting for `libcurl` is HTTP/2 for version >= 7.62.0.
In this commit, option `CURLOPT_HTTP_VERSION` is explicitly set to
`CURL_HTTP_VERSION_1_1` when `downloadSettings.enableHttp2` nix config
setting is `false`.
This can be tested by running `nix-env -vvvvv -i hello | grep HTTP`
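A sketch of the change in libcurl terms, assuming a curl easy handle `req` and the `downloadSettings.enableHttp2` setting mentioned above:
```
#include <curl/curl.h>

// Explicitly pin HTTP/1.1 when http2 = false, rather than relying on
// libcurl's default (CURL_HTTP_VERSION_2TLS since curl 7.62.0).
void applyHttpVersion(CURL * req, bool enableHttp2)
{
    if (enableHttp2)
        curl_easy_setopt(req, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_2TLS);
    else
        curl_easy_setopt(req, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_1_1);
}
```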
--no-net causes tarballTtl to be set to the largest 32-bit integer,
which causes comparisons like 'time + tarballTtl < other_time' to
overflow (and thus give the wrong result) on 32-bit systems. So cast
the operands to 64-bit first.
https://hydra.nixos.org/build/95076624
(cherry picked from commit 29ccb2e969)
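An illustration of the overflow and the cast (the function and variable names are made up for the example):
```
#include <cstdint>
#include <ctime>

// With a 32-bit time_t, adding a TTL near the 32-bit maximum to a
// timestamp wraps around, so the freshness check spuriously fails.
// Widening to 64 bits before the addition avoids the wraparound.
bool isFresh(time_t lastFetched, uint32_t tarballTtl)
{
    return (uint64_t) lastFetched + tarballTtl >= (uint64_t) time(nullptr);
}
```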
This flag
* Disables substituters.
* Sets the tarball-ttl to infinity (ensuring e.g. that the flake
registry and any downloaded flakes are considered current).
* Disables retrying downloads and sets the connection timeout to the
minimum. (So it doesn't completely disable downloads at the moment.)
(cherry picked from commit 8ea842260b)
Once we've started writing data to a Sink, we can't restart a download
request, because then we end up writing duplicate data to the
Sink. Therefore we shouldn't handle retries in Downloader but at a
higher level (in particular, in copyStorePath()).
Fixes #2952.
(cherry picked from commit a67cf5a358)
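A simplified sketch of what "retrying at a higher level" means: the caller repeats the whole copy operation, so each attempt gets a fresh download and a fresh sink. The helper and its parameters are illustrative, not Nix's actual API:
```
#include <chrono>
#include <cstdio>
#include <exception>
#include <functional>
#include <thread>

// Retry the whole operation with exponential backoff; each attempt
// starts a new download, so no half-written sink is ever reused.
void withRetries(std::function<void()> copyOnce, unsigned int maxAttempts = 5)
{
    std::chrono::milliseconds delay(250);
    for (unsigned int attempt = 1; ; ++attempt) {
        try {
            copyOnce();
            return;
        } catch (std::exception & e) {
            if (attempt >= maxAttempts) throw;
            fprintf(stderr, "warning: %s; retrying in %lld ms\n",
                e.what(), (long long) delay.count());
            std::this_thread::sleep_for(delay);
            delay *= 2;
        }
    }
}
```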
Also, make fetchGit and fetchMercurial update allowedPaths properly.
(Maybe the evaluator, rather than the caller of the evaluator, should
apply toRealPath(), but that's a bigger change.)
(cherry picked from commit 5c34d66538)
Use the same output ordering and format everywhere.
This is such a common issue that we trade the single-line error message
for a more readable multi-line format.
Old message:
```
fixed-output derivation produced path '/nix/store/d4nw9x2sy9q3r32f3g5l5h1k833c01vq-example.com' with sha256 hash '08y4734bm2zahw75b16bcmcg587vvyvh0n11gwiyir70divwp1rm' instead of the expected hash '1xzwnipjd54wl8g93vpw6hxnpmdabq0wqywriiwmh7x8k0lvpq5m'
```
New message:
```
hash mismatch in fixed-output derivation '/nix/store/d4nw9x2sy9q3r32f3g5l5h1k833c01vq-example.com':
wanted: sha256:1xzwnipjd54wl8g93vpw6hxnpmdabq0wqywriiwmh7x8k0lvpq5m
got: sha256:08y4734bm2zahw75b16bcmcg587vvyvh0n11gwiyir70divwp1rm
```
This enables using http for S3 requests, for debugging or for
implementations that don't have https configured. This is not a problem
for binary caches, since they should not contain sensitive information.
Both package signatures and AWS auth already protect against tampering.
* Don't wait forever for the client to remove data from the
buffer. This does mean that the buffer can grow without bounds
(e.g. when downloading is faster than writing to disk), but meh.
* Don't hold the state lock while calling the sink. The sink could
take any amount of time to process the data (in particular when it's
actually a coroutine), so we don't want to block the download
thread; see the sketch below.
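A sketch of that locking pattern: the reader copies the buffered data out under the lock, then calls the sink with the lock released. The type and function names are illustrative:
```
#include <condition_variable>
#include <mutex>
#include <string>

struct TransferState
{
    std::mutex mutex;
    std::condition_variable avail;
    std::string buffer;   // grows without bounds by design, see above
    bool done = false;
};

template<typename Sink>
void drain(TransferState & st, Sink sink)
{
    while (true) {
        std::string chunk;
        {
            std::unique_lock<std::mutex> lock(st.mutex);
            st.avail.wait(lock, [&] { return !st.buffer.empty() || st.done; });
            if (st.buffer.empty() && st.done) return;
            chunk.swap(st.buffer);   // take the data out of the shared state
        }
        sink(chunk);                 // called without holding the lock
    }
}
```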
This didn't work anymore since decompression was only done in the
non-coroutine case.
Decompressors are now sinks, just like compressors.
Also fixed a bug in bzip2 API handling (we have to handle BZ_RUN_OK
rather than BZ_OK), which we didn't notice because there was a missing
'throw':
if (ret != BZ_OK)
    CompressionError("error while compressing bzip2 file");
In some versions/configurations libcurl doesn't handle timeouts
(especially DNS timeouts) in a way that wakes curl_multi_wait.
This doesn't appear to be a problem if using c-ares, FWIW.
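One common workaround for this (an assumption here, not necessarily the exact change made) is to cap the timeout passed to curl_multi_wait() so the loop wakes up periodically and gives libcurl a chance to notice expired timeouts:
```
#include <curl/curl.h>

// Sketch: never sleep longer than maxSleepMs, even if libcurl reports a
// long (or unknown, i.e. -1) timeout, so stuck DNS timeouts are noticed.
void waitForActivity(CURLM * multi)
{
    long timeoutMs = -1;
    curl_multi_timeout(multi, &timeoutMs);
    long maxSleepMs = 1000;
    if (timeoutMs < 0 || timeoutMs > maxSleepMs) timeoutMs = maxSleepMs;
    int numfds = 0;
    curl_multi_wait(multi, nullptr, 0, (int) timeoutMs, &numfds);
}
```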