This function was used in only one place, where it could easily be
replaced by readDerivation() since it's not
performance-critical. (This function appears to have been modelled
after queryDerivationOutputs(), which exists only to make the garbage
collector faster.)
The idea is that it's always more flexible to consume a `Source` than a
plain string, and it might even reduce memory consumption.
I also looked at `addToStoreFromDump` with its `// FIXME: remove?`, but
the work needed for that is far more open to interpretation, so I
punted for now.
This provides a pluggable mechanism for defining new fetchers. It adds
a builtin function 'fetchTree' that generalizes existing fetchers like
'fetchGit', 'fetchMercurial' and 'fetchTarball'. 'fetchTree' takes a
set of attributes, e.g.
  fetchTree {
    type = "git";
    url = "https://example.org/repo.git";
    ref = "some-branch";
    rev = "abcdef...";
  }
The existing fetchers are just wrappers around this. Note that the
input attributes to fetchTree are the same as flake input
specifications and flake lock file entries.
All fetchers share a common cache stored in
~/.cache/nix/fetcher-cache-v1.sqlite. This replaces the ad hoc caching
mechanisms in fetchGit and download.cc (e.g. ~/.cache/nix/{tarballs,git-revs*}).
This also adds support for Git worktrees (c169ea5904).
This allows overriding the priority of substituters, e.g.
$ nix-store --store ~/my-nix/ -r /nix/store/df3m4da96d84ljzxx4mygfshm1p0r2n3-geeqie-1.4 \
--substituters 'http://cache.nixos.org?priority=100 daemon?priority=10'
Fixes #3264.
Most functions now take a StorePath argument rather than a Path (which
is just an alias for std::string). The StorePath constructor ensures
that the path is syntactically correct (i.e. it looks like
<store-dir>/<base32-hash>-<name>). Similarly, functions like
buildPaths() now take a StorePathWithOutputs, rather than abusing Path
by adding a '!<outputs>' suffix.
Note that the StorePath type is implemented in Rust. This involves
some hackery to allow Rust values to be used directly in C++, via a
helper type whose destructor calls the Rust type's drop()
function. The main issue is the dynamic nature of C++ move semantics:
after we have moved a Rust value, we should not call the drop function
on the original value. So when we move a value, we set the original
value to bitwise zero, and the destructor only calls drop() if the
value is not bitwise zero. This should be sufficient for most types.
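The moved-from check can be sketched as follows. This is illustrative
only: it assumes the Rust value is an opaque block of bytes and uses a
made-up FFI symbol rust_value_drop, not the actual Nix/Rust bridge code.

  #include <cstddef>
  #include <cstring>

  extern "C" void rust_value_drop(void * value); // hypothetical Rust-side symbol

  template<std::size_t N>
  struct RustValue
  {
      char raw[N]; // opaque bytes owned by the Rust side

      RustValue(RustValue && other)
      {
          std::memcpy(raw, other.raw, N);
          // Zero the moved-from value so its destructor becomes a no-op.
          std::memset(other.raw, 0, N);
      }

      ~RustValue()
      {
          char zero[N] = {};
          // Only call drop() if the value hasn't been moved away.
          if (std::memcmp(raw, zero, N) != 0)
              rust_value_drop(raw);
      }
  };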
Also lots of minor cleanups to the C++ API to make it more modern
(e.g. using std::optional and std::string_view in some places).
This adds a command 'nix make-content-addressable' that rewrites the
specified store paths into content-addressable paths. The advantages of
such paths are that 1) they can be imported without signatures; 2) they
can enable deduplication in cases where derivation changes do not
cause output changes (apart from store path hashes).
For example,
$ nix make-content-addressable -r nixpkgs.cowsay
rewrote '/nix/store/g1g31ah55xdia1jdqabv1imf6mcw0nb1-glibc-2.25-49' to '/nix/store/48jfj7bg78a8n4f2nhg269rgw1936vj4-glibc-2.25-49'
...
rewrote '/nix/store/qbi6rzpk0bxjw8lw6azn2mc7ynnn455q-cowsay-3.03+dfsg1-16' to '/nix/store/iq6g2x4q62xp7y7493bibx0qn5w7xz67-cowsay-3.03+dfsg1-16'
We can then copy the resulting closure to another store without
signatures:
$ nix copy --trusted-public-keys '' --to ~/my-nix /nix/store/iq6g2x4q62xp7y7493bibx0qn5w7xz67-cowsay-3.03+dfsg1-16
In order to support self-references in content-addressable paths,
these paths are hashed "modulo" self-references, meaning that
self-references are zeroed out during hashing. Somewhat annoyingly,
this means that the NAR hash stored in the Nix database is no longer
necessarily equal to the output of "nix hash-path"; for
content-addressable paths, you need to pass the --modulo flag:
$ nix path-info --json /nix/store/iq6g2x4q62xp7y7493bibx0qn5w7xz67-cowsay-3.03+dfsg1-16 | jq -r .[].narHash
sha256:0ri611gdilz2c9rsibqhsipbfs9vwcqvs811a52i2bnkhv7w9mgw
$ nix hash-path --type sha256 --base32 /nix/store/iq6g2x4q62xp7y7493bibx0qn5w7xz67-cowsay-3.03+dfsg1-16
1ggznh07khq0hz6id09pqws3a8q9pn03ya3c03nwck1kwq8rclzs
$ nix hash-path --type sha256 --base32 /nix/store/iq6g2x4q62xp7y7493bibx0qn5w7xz67-cowsay-3.03+dfsg1-16 --modulo iq6g2x4q62xp7y7493bibx0qn5w7xz67
0ri611gdilz2c9rsibqhsipbfs9vwcqvs811a52i2bnkhv7w9mgw
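Conceptually, the "modulo" hash can be pictured as rewriting every
occurrence of the path's own hash part to a fixed placeholder before
hashing. A rough sketch (illustrative only, using '0' characters as the
placeholder; the real implementation works on the NAR stream):

  #include <string>

  // Zero out self-references so the hash doesn't depend on the path itself.
  std::string zeroSelfReferences(std::string nar, const std::string & selfHashPart)
  {
      std::string zeros(selfHashPart.size(), '0');
      for (std::string::size_type pos = 0;
           (pos = nar.find(selfHashPart, pos)) != std::string::npos;
           pos += zeros.size())
          nar.replace(pos, selfHashPart.size(), zeros);
      return nar; // hash this instead of the original NAR
  }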
Scanning of /proc/<pid>/{exe,cwd} was broken because '{memory:' was
prepended twice. Also, get rid of the whole '{memory:...}' thing,
because it's unnecessary: we can just list the file in /proc directly.
This new structure makes more sense as there may be many sources rooting
the same store path. Many profiles can reference the same path but this
is even more true with /proc/<pid>/maps where distinct pids can and
often do map the same store path.
This implementation is also more efficient as the `Roots` map contains
only one entry per rooted store path.
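Sketched as a type (names assumed, not the actual declaration):

  #include <map>
  #include <set>
  #include <string>

  using Path = std::string;

  // One entry per rooted store path; the value collects every root that
  // keeps it alive (profile links, /proc/<pid>/maps entries, and so on).
  using Roots = std::map<Path, std::set<Path>>;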
copyStorePath() now pipes the output of srcStore->narFromPath()
directly into dstStore->addToStore(). The sink used by the former is
converted into a source usable by the latter using
boost::coroutine2. This is based on [1].
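Roughly, the conversion looks like this. The sketch below uses
simplified stand-ins for Nix's Sink and Source types; the real code
wraps the same idea in a helper so addToStore() can pull from it.

  #include <boost/coroutine2/coroutine.hpp>
  #include <functional>
  #include <iostream>
  #include <string>

  using coro_t = boost::coroutines2::coroutine<std::string>;

  int main()
  {
      // Plays the role of srcStore->narFromPath(path, sink): it pushes
      // data into whatever sink it is given.
      auto producer = [](std::function<void(std::string)> sink) {
          sink("chunk-1");
          sink("chunk-2");
      };

      // Wrap the push-style producer in a coroutine so the data can be
      // *pulled* one chunk at a time; the producer is suspended between chunks.
      coro_t::pull_type source([&](coro_t::push_type & yield) {
          producer([&](std::string chunk) { yield(std::move(chunk)); });
      });

      // Plays the role of dstStore->addToStore(..., source).
      for (auto chunk : source)
          std::cout << chunk << "\n";
  }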
This reduces the maximum resident size of
$ nix build --store ~/my-nix/ /nix/store/b0zlxla7dmy1iwc3g459rjznx59797xy-binutils-2.28.1 --substituters file:///tmp/binary-cache-xz/ --no-require-sigs
from 418592 KiB to 53416 KiB. (The previous commit also reduced the
runtime from ~4.2s to ~3.4s, not sure why.) A further improvement will
be to download files into a Sink.
[1] https://github.com/NixOS/nix/compare/master...Mathnerd314:dump-fix-coroutine#diff-dcbcac55a634031f9cc73707da6e4b18
Issue #1969.
builtins.path allows specifying the name of a path (which makes paths
with store-illegal names addable), allows adding paths with flat
instead of recursive hashes, allows specifying a filter (so it is a
generalization of filterSource), and allows specifying an expected
hash (enabling safe path adding in pure mode).
This is needed by nixos-install, which uses the Nix store on the
installation CD as a substituter. We don't want to disable signature
checking entirely because substitutes from cache.nixos.org should
still be checked. So now we can pass "local?trusted=1" to mark only the
Nix store in /nix as not requiring signatures.
Fixes #1819.
Instead, if a fixed-output derivation produces an output with an
incorrect hash, we now unconditionally move the output to the path
corresponding to the actual hash and register it as valid. Thus,
after correcting the hash in the Nix expression (e.g. in a fetchurl
call), the fixed-output derivation doesn't have to be built again.
It would still be good to have a command for reporting the actual hash
of a fixed-output derivation (instead of throwing an error), but
"nix-build --hash" didn't do that.
Functions like copyClosure() had 3 bool arguments, which creates a
severe risk of mixing up arguments.
Also, implement copyClosure() using copyPaths().
It now means "paths that were built locally". It no longer includes
paths that were added locally. For those we don't need info.ultimate,
since we have the content-addressability assertion (info.ca).
Opening an SSHStore or LegacySSHStore does not actually establish a
connection, so the try/catch block here did nothing. Added a
Store::connect() method to test whether a connection can be
established.
For backwards compatibility, if the URI is just a hostname, ssh://
(i.e. LegacySSHStore) is prepended automatically.
Also, all fields except the URI are now optional. For example, this is
a valid nix.machines file:
local?root=/tmp/nix
This is useful for testing the remote build machinery since you don't
have to mess around with ssh.
This default implementation of buildPaths() does nothing if all
requested paths are already valid, and throws an "unsupported
operation" error otherwise. This fixes a regression introduced by
c30330df6f in binary cache and legacy
SSH stores.
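A simplified sketch of that default (not the exact Nix code):

  #include <stdexcept>
  #include <string>
  #include <vector>

  struct Store
  {
      virtual bool isValidPath(const std::string & path) = 0;

      // Stores that cannot build (binary caches, legacy SSH) inherit this:
      // succeed when everything is already valid, otherwise refuse.
      virtual void buildPaths(const std::vector<std::string> & paths)
      {
          for (auto & path : paths)
              if (!isValidPath(path))
                  throw std::runtime_error("store does not support building path '" + path + "'");
      }

      virtual ~Store() = default;
  };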
The typical use is to inherit Config and add Setting<T> members:
  class MyClass : private Config
  {
    Setting<int> foo{this, 123, "foo", "the number of foos to use"};
    Setting<std::string> bar{this, "blabla", "bar", "the name of the bar"};

    MyClass() : Config(readConfigFile("/etc/my-app.conf"))
    {
      std::cout << foo << "\n"; // will print 123 unless overridden
    }
  };
Currently, this is used by Store and its subclasses for store
parameters. You now get a warning if you specify a non-existent store
parameter in a store URI.
This provides a significant speedup, e.g. 64 s -> 12 s for
nix-build --dry-run -I nixpkgs=channel:nixos-16.03 '<nixpkgs/nixos/tests/misc.nix>' -A test
on a cold local and CloudFront cache.
The alternative is to use lots of concurrent daemon connections but
that seems wasteful.
This is useless because the client also caches path info, and can
cause problems for long-running clients like hydra-queue-runner
(i.e. it may return cached info about paths that have been
garbage-collected).
This allows various Store implementations to provide different ways to
get build logs. For example, BinaryCacheStore can get the build logs
from the binary cache.
Also, remove the log-servers option since we can use substituters for
this.
... and use this in Downloader::downloadCached(). This fixes
$ nix-build https://nixos.org/channels/nixos-16.09-small/nixexprs.tar.xz -A hello
error: cannot import path ‘/nix/store/csfbp1s60dkgmk9f8g0zk0mwb7hzgabd-nixexprs.tar.xz’ because it lacks a valid signature
This writes info about every path in the closure in the same format as
‘nix path-info --json’. Thus it also includes NAR hashes and sizes.
Example:
  [
    {
      "path": "/nix/store/10h6li26i7g6z3mdpvra09yyf10mmzdr-hello-2.10",
      "narHash": "sha256:0ckdc4z20kkmpqdilx0wl6cricxv90lh85xpv2qljppcmz6vzcxl",
      "narSize": 197648,
      "references": [
        "/nix/store/10h6li26i7g6z3mdpvra09yyf10mmzdr-hello-2.10",
        "/nix/store/27binbdy296qvjycdgr1535v8872vz3z-glibc-2.24"
      ],
      "closureSize": 20939776
    },
    {
      "path": "/nix/store/27binbdy296qvjycdgr1535v8872vz3z-glibc-2.24",
      "narHash": "sha256:1nfn3m3p98y1c0kd0brp80dn9n5mycwgrk183j17rajya0h7gax3",
      "narSize": 20742128,
      "references": [
        "/nix/store/27binbdy296qvjycdgr1535v8872vz3z-glibc-2.24"
      ],
      "closureSize": 20742128
    }
  ]
Fixes #1134.
That is, when build-repeat > 0 and the outputs of two rounds differ,
print a warning rather than failing the build. This is primarily to
let Hydra check the reproducibility of all packages.
The removal of CachedFailure caused the value of TimedOut to change,
which broke timed-out handling in Hydra (so timed-out builds would
show up as "aborted" and would be retried, e.g. at
http://hydra.nixos.org/build/42537427).
The store parameter "write-nar-listing=1" will cause BinaryCacheStore
to write a file ‘<store-hash>.ls.xz’ for each ‘<store-hash>.narinfo’
added to the binary cache. This file contains an XZ-compressed JSON
document describing the contents of the NAR, excluding the contents of
regular files.
E.g.
  {
    "version": 1,
    "root": {
      "type": "directory",
      "entries": {
        "lib": {
          "type": "directory",
          "entries": {
            "Mcrt1.o": {
              "type": "regular",
              "size": 1288
            },
            "Scrt1.o": {
              "type": "regular",
              "size": 3920
            },
            ...
          }
        }
      }
    }
  }
(The actual file has no indentation.)
This is intended to speed up the NixOS channels programs index
generator [1], since fetching gazillions of large NARs from
cache.nixos.org is currently a bottleneck for updating the regular
(non-small) channel.
[1] https://github.com/NixOS/nixos-channel-scripts/blob/master/generate-programs-index.cc
The fact that queryPathInfo() is synchronous meant that we needed a
thread for every concurrent binary cache lookup, even though they end
up being handled by the same download thread. Requiring hundreds of
threads is not a good idea. So now there is an asynchronous version of
queryPathInfo() that takes a callback function to process the
result. Similarly, enqueueDownload() now takes a callback rather than
returning a future.
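The shape of the new interface, with simplified signatures (the actual
Nix API differs in detail):

  #include <exception>
  #include <functional>
  #include <memory>
  #include <string>

  struct ValidPathInfo;

  struct Store
  {
      // Instead of blocking on a result, the caller provides success and
      // failure continuations that the download thread invokes later.
      virtual void queryPathInfo(
          const std::string & storePath,
          std::function<void(std::shared_ptr<const ValidPathInfo>)> success,
          std::function<void(std::exception_ptr)> failure) = 0;

      virtual ~Store() = default;
  };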
Thus, a command like
nix path-info --store https://cache.nixos.org/ -r /nix/store/slljrzwmpygy1daay14kjszsr9xix063-nixos-16.09beta231.dccf8c5
that returns 4941 paths now takes 1.87s using only 2 threads (the main
thread and the downloader thread). (This is with a prewarmed
CloudFront.)
This makes it easier to create a diverted store, i.e.
NIX_REMOTE="local?root=/tmp/root"
instead of
NIX_REMOTE="local?real=/tmp/root/nix/store&state=/tmp/root/nix/var/nix" NIX_LOG_DIR=/tmp/root/nix/var/log
This allows an unprivileged user to perform builds on a diverted store
(i.e. where the physical store location differs from the logical
location).
Example:
$ NIX_LOG_DIR=/tmp/log NIX_REMOTE="local?real=/tmp/store&state=/tmp/var" nix-build -E \
'with import <nixpkgs> {}; runCommand "foo" { buildInputs = [procps nettools]; } "id; ps; ifconfig; echo $out > $out"'
will do a build in the Nix store physically in /tmp/store but
logically in /nix/store (and thus using substituters for the latter).
This allows commands like "nix verify --all" or "nix path-info --all"
to work on S3 caches.
Unfortunately, this requires some ugly hackery: when querying the
contents of the bucket, we don't want to have to read every .narinfo
file. But the S3 bucket keys only include the hash part of each store
path, not the name part. So as a special exception
queryAllValidPaths() can now return store paths *without* the name
part, and queryPathInfo() accepts such store paths (returning a
ValidPathInfo object containing the full name).
Caching path info is generally useful. For instance, it speeds up "nix
path-info -rS /run/current-system" (i.e. showing the closure sizes of
all paths in the closure of the current system) from 5.6s to 0.15s.
This also eliminates some APIs like Store::queryDeriver() and
Store::queryReferences().
This specifies the number of distinct signatures required to consider
each path "trusted".
Also renamed ‘--no-sigs’ to ‘--no-trust’ for the flag that disables
verifying whether a path is trusted (since a path can also be trusted
if it has no signatures, but was built locally).
This enables an optimisation in hydra-queue-runner, preventing a
download of a NAR it just uploaded to the cache when reading files
like hydra-build-products.
Also, move a few free-standing functions into StoreAPI and Derivation.
Also, introduce a non-nullable smart pointer, ref<T>, which is just a
wrapper around std::shared_ptr ensuring that the pointer is never
null. (For reference-counted values, this is better than passing a
"T&", because the latter doesn't maintain the refcount. Usually, the
caller will have a shared_ptr keeping the value alive, but that's not
always the case, e.g., when passing a reference to a std::thread via
std::bind.)
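A minimal sketch of such a wrapper (simplified; the real class has more
constructors and conveniences):

  #include <memory>
  #include <stdexcept>

  template<typename T>
  class ref
  {
      std::shared_ptr<T> p;

  public:
      explicit ref(const std::shared_ptr<T> & p) : p(p)
      {
          if (!p) throw std::invalid_argument("null pointer passed to ref<T>");
      }

      T & operator *() const { return *p; }
      T * operator ->() const { return p.get(); }

      // Still convertible back to shared_ptr, so the refcount is maintained.
      operator std::shared_ptr<T>() const { return p; }
  };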
For example,
$ nix-build --hash -A nix-repl.src
will build the fixed-output derivation nix-repl.src (a fetchFromGitHub
call), but instead of *verifying* the hash given in the Nix
expression, it prints out the resulting hash, and then moves the
result to its content-addressed location in the Nix store. E.g.
build produced path ‘/nix/store/504a4k6zi69dq0yjc0bm12pa65bccxam-nix-repl-8a2f5f0607540ffe56b56d52db544373e1efb980-src’ with sha256 hash ‘0cjablz01i0g9smnavhf86imwx1f9mnh5flax75i615ml71gsr88’
The goal of this is to make all nix-prefetch-* scripts unnecessary: we
can just let Nix run the real thing (i.e., the corresponding fetch*
derivation).
Another example:
$ nix-build --hash -E 'with import <nixpkgs> {}; fetchgit { url = "https://github.com/NixOS/nix.git"; sha256 = "ffffffffffffffffffffffffffffffffffffffffffffffffffff"; }'
...
git revision is 9e7c1a4bbd
...
build produced path ‘/nix/store/gmsnh9i7x4mb7pyd2ns7n3c9l90jfsi1-nix’ with sha256 hash ‘1188xb621diw89n25rifqg9lxnzpz7nj5bfh4i1y3dnis0dmc0zp’
(Having to specify a fake sha256 hash is a bit annoying...)
Passing "--option build-repeat <N>" will cause every build to be
repeated N times. If the build output differs between any two rounds, the
build is rejected, and the output paths are not registered as
valid. This is primarily useful to verify build determinism. (We
already had a --check option to repeat a previously succeeded
build. However, with --check, non-deterministic builds are registered
in the DB. Preventing that is useful for Hydra to ensure that
non-deterministic builds don't end up getting published at all.)
In particular, hydra-queue-runner can now distinguish between remote
build / substitution / already-valid. For instance, if a path already
existed on the remote side, we don't want to store a log file.
Previously, to build a derivation remotely, we had to copy the entire
closure of the .drv file to the remote machine, even though we only
need the top-level derivation. This is very wasteful: the closure can
contain thousands of store paths, and in some Hydra use cases, include
source paths that are very large (e.g. Git/Mercurial checkouts).
So now there is a new operation, StoreAPI::buildDerivation(), that
performs a build from an in-memory representation of a derivation
(BasicDerivation) rather than from an on-disk .drv file. The only files
that need to be in the Nix store are the sources of the derivation
(drv.inputSrcs), and the needed output paths of the dependencies (as
described by drv.inputDrvs). "nix-store --serve" exposes this
interface.
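Roughly, the in-memory representation looks like this (a sketch; field
names are partly assumed):

  #include <map>
  #include <set>
  #include <string>
  #include <vector>

  struct DerivationOutput
  {
      std::string path;      // output path to be produced
      std::string hashAlgo;  // non-empty only for fixed-output outputs
      std::string hash;
  };

  struct BasicDerivation
  {
      std::map<std::string, DerivationOutput> outputs; // keyed by output name
      std::set<std::string> inputSrcs;                 // sources already in the store
      std::string platform;
      std::string builder;
      std::vector<std::string> args;
      std::map<std::string, std::string> env;
  };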
Note that this is a privileged operation, because you can construct a
derivation that builds any store path whatsoever. Fixing this will
require changing the hashing scheme (i.e., the output paths should be
computed from the other fields in BasicDerivation, allowing them to be
verified without access to other derivations). However, this would be
quite nice because it would allow .drv-free building (e.g. "nix-env
-i" wouldn't have to write any .drv files to disk).
Fixes #173.
Hello!
The patch below adds a ‘verifyStore’ RPC with the same signature as the
current LocalStore::verifyStore method.
Thanks,
Ludo’.
From aef46c03ca77eb6344f4892672eb6d9d06432041 Mon Sep 17 00:00:00 2001
From: Ludovic Courtès <ludo@gnu.org>
Date: Mon, 1 Jun 2015 23:17:10 +0200
Subject: [PATCH] Add a 'verifyStore' remote procedure call.
The flag ‘--check’ to ‘nix-store -r’ or ‘nix-build’ will cause Nix to
redo the build of a derivation whose output paths are already valid.
If the new output differs from the original output, an error is
printed. This makes it easier to test if a build is deterministic.
(Obviously this cannot catch all sources of non-determinism, but it
catches the most common one, namely the current time.)
For example:
$ nix-build '<nixpkgs>' -A patchelf
...
$ nix-build '<nixpkgs>' -A patchelf --check
error: derivation `/nix/store/1ipvxsdnbhl1rw6siz6x92s7sc8nwkkb-patchelf-0.6' may not be deterministic: hash mismatch in output `/nix/store/4pc1dmw5xkwmc6q3gdc9i5nbjl4dkjpp-patchelf-0.6.drv'
The --check build fails if not all outputs are valid. Thus the first
call to nix-build is necessary to ensure that all outputs are valid.
The current outputs are left untouched: the new outputs are either put
in a chroot or diverted to a different location in the store using
hash rewriting.
So if a path is not garbage solely because it's reachable from a root
due to the gc-keep-outputs or gc-keep-derivations settings, ‘nix-store
-q --roots’ now shows that root.
With this flag, if any valid derivation output is missing or corrupt,
it will be recreated by using a substitute if available, or by
rebuilding the derivation. The latter may use hash rewriting if
chroots are not available.
To implement binary caches efficiently, Hydra needs to be able to map
the hash part of a store path (e.g. "gbg...zr7") to the full store
path (e.g. "/nix/store/gbg...kzr7-subversion-1.7.5"). (The binary
cache mechanism uses hash parts as a key for looking up store paths to
ensure privacy.) However, doing a search in the Nix store for
/nix/store/<hash>* is expensive since it requires reading the entire
directory. queryPathFromHashPart() prevents this by doing a cheap
database lookup.
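A sketch of how the lookup stays cheap, assuming the schema has an
indexed ValidPaths(path) column (the query text is illustrative, not
verbatim):

  #include <sqlite3.h>
  #include <optional>
  #include <string>

  std::optional<std::string> queryPathFromHashPart(
      sqlite3 * db, const std::string & storeDir, const std::string & hashPart)
  {
      std::string prefix = storeDir + "/" + hashPart;
      sqlite3_stmt * stmt = nullptr;
      // A prefix probe on the indexed 'path' column is a single B-tree seek,
      // unlike readdir() over the whole store directory.
      if (sqlite3_prepare_v2(db,
              "select path from ValidPaths where path >= ? limit 1;",
              -1, &stmt, nullptr) != SQLITE_OK)
          return std::nullopt;
      sqlite3_bind_text(stmt, 1, prefix.c_str(), -1, SQLITE_TRANSIENT);
      std::optional<std::string> res;
      if (sqlite3_step(stmt) == SQLITE_ROW) {
          std::string path = reinterpret_cast<const char *>(sqlite3_column_text(stmt, 0));
          if (path.compare(0, prefix.size(), prefix) == 0) res = path;
      }
      sqlite3_finalize(stmt);
      return res;
  }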
queryValidPaths() combines multiple calls to isValidPath() in one.
This matters when using the Nix daemon because it reduces latency.
For instance, on "nix-env -qas \*" it reduces execution time from 5.7s
to 4.7s (which is indistinguishable from the non-daemon case).
Getting substitute information using the binary cache substituter has
non-trivial latency overhead. A package or NixOS system configuration
can have hundreds of dependencies, and in the worst case (when the
local info cache is empty) we have to do a separate HTTP request for
each of these. If the ping time to the server is t, getting N info
files will take tN seconds; e.g., with a ping time of 0.1s to
nixos.org, sequentially downloading 1000 info files (a typical NixOS
config) will take at least 100 seconds.
To fix this problem, the binary cache substituter can now perform
requests in parallel. This required changing the substituter
interface to support a function querySubstitutablePathInfos() that
queries multiple paths at the same time, and rewriting queryMissing()
to take advantage of parallelism. (Due to local caching,
parallelising queryMissing() is sufficient for most use cases, since
it's almost always called before building a derivation and thus fills
the local info cache.)
For example, parallelism speeds up querying all 1056 paths in a
particular NixOS system configuration from 116s to 2.6s. It works so
well because the eccentricity of the top-level derivation in the
dependency graph is only 9. So we only need 10 round-trips (when
using an unlimited number of parallel connections) to get everything.
Currently we do a maximum of 150 parallel connections to the server.
Thus it's important that the binary cache server (e.g. nixos.org) has
a high connection limit. Alternatively we could use HTTP pipelining,
but WWW::Curl doesn't support it and libcurl has a hard-coded limit of
5 requests per pipeline.
We can't open a SQLite database if the disk is full. Since this
prevents the garbage collector from running when it's most needed, we
reserve some dummy space that we can free just before doing a garbage
collection. This actually revives some old code from the Berkeley DB
days.
Fixes #27.
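Roughly, the idea is (with an assumed reserve location and size, not
the actual Nix code):

  #include <cstddef>
  #include <cstdio>
  #include <fstream>
  #include <string>

  const std::string reservedPath = "/nix/var/nix/db/reserved"; // assumed location
  constexpr std::size_t reservedSize = 8 * 1024 * 1024;

  // Written once, e.g. when the database is opened.
  void createReserve()
  {
      std::ofstream f(reservedPath, std::ios::binary);
      std::string zeros(reservedSize, '\0');
      f.write(zeros.data(), zeros.size());
  }

  // Deleted just before collecting garbage, so SQLite (and its journal)
  // has room to work even when the disk is otherwise full.
  void freeReserveBeforeGC()
  {
      std::remove(reservedPath.c_str());
  }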
daemon (which is an error), print a nicer error message than
"Connection reset by peer" or "broken pipe".
* In the daemon, log errors that occur during request parameter
processing.
‘nix-store --export’.
* Add a Perl module that provides the functionality of
‘nix-copy-closure --to’. This is used by build-remote.pl so it no
longer needs to start a separate nix-copy-closure process. Also, it
uses the Perl API to do the export, so it doesn't need to start a
separate nix-store process either. As a result, nix-copy-closure
and build-remote.pl should no longer fail on very large closures due
to an "Argument list too long" error. (Note that having very many
dependencies in a single derivation can still fail because the
environment can become too large. Can't be helped though.)
This should also fix:
nix-instantiate: ./../boost/shared_ptr.hpp:254: T* boost::shared_ptr<T>::operator->() const [with T = nix::StoreAPI]: Assertion `px != 0' failed.
which was caused by hashDerivationModulo() calling the ‘store’
object (during store upgrades) before openStore() assigned it.
derivations added to the store by clients have "correct" output
paths (meaning that the output paths are computed by hashing the
derivation according to a certain algorithm). This means that a
malicious user could craft a special .drv file to build *any*
desired path in the store with any desired contents (so long as the
path doesn't already exist). Then the attacker just needs to wait
for a victim to come along and install the compromised path.
For instance, if Alice (the attacker) knows that the latest Firefox
derivation in Nixpkgs produces the path
/nix/store/1a5nyfd4ajxbyy97r1fslhgrv70gj8a7-firefox-5.0.1
then (provided this path doesn't already exist) she can craft a .drv
file that creates that path (i.e., has it as one of its outputs),
add it to the store using "nix-store --add", and build it with
"nix-store -r". So the fake .drv could write a Trojan to the
Firefox path. Then, if user Bob (the victim) comes along and does
$ nix-env -i firefox
$ firefox
he executes the Trojan injected by Alice.
The fix is to have the Nix daemon verify that derivation outputs are
correct (in addValidPath()). This required some refactoring to move
the hash computation code to libstore.
because it defines _FILE_OFFSET_BITS. Without this, on
OpenSolaris the system headers define it to be 32, and then
the 32-bit stat() ends up being called with a 64-bit "struct
stat", or vice versa.
This also ensures that we get 64-bit file sizes everywhere.
* Remove the redundant call to stat() in parseExprFromFile().
The file cannot be a symlink because that's the exit condition
of the loop before.
(Linux) machines no longer maintain the atime because it's too
expensive, and on the machines where --use-atime is useful (like the
buildfarm), reading the atimes on the entire Nix store takes way too
much time to make it practical.
SHA-256 outputs of fixed-output derivations. I.e. they now produce
the same store path:
$ nix-store --add x
/nix/store/j2fq9qxvvxgqymvpszhs773ncci45xsj-x
$ nix-store --add-fixed --recursive sha256 x
/nix/store/j2fq9qxvvxgqymvpszhs773ncci45xsj-x
the latter being the same as the path that a derivation
  derivation {
    name = "x";
    outputHashAlgo = "sha256";
    outputHashMode = "recursive";
    outputHash = "...";
    ...
  };
produces.
This does change the output path for such fixed-output derivations.
Fortunately they are quite rare. The most common use is fetchsvn
calls with SHA-256 hashes. (There are a handful of those in
Nixpkgs, mostly unstable development packages.)
* Documented the computation of store paths (in store-api.cc).
accessed time of paths that may be deleted. Anything more recently
used won't be deleted. The time is specified in time_t,
i.e. seconds since 1970-01-01 00:00:00 UTC; use `date +%s' to
convert to time_t from the command line.
Example: to delete everything that hasn't been used in the last two
months:
$ nix-store --gc -v --max-atime $(date +%s -d "2 months ago")
order of ascending last access time. This is useful in conjunction
with --max-freed or --max-links to prefer deleting non-recently used
garbage, which is good (especially in the build farm) since garbage
may become live again.
The code could easily be modified to accept other criteria for
ordering garbage by changing the comparison operator used by the
priority queue in collectGarbage().
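An illustrative sketch of that ordering (not the actual collector code):

  #include <ctime>
  #include <queue>
  #include <string>
  #include <vector>

  struct GarbagePath
  {
      std::string path;
      std::time_t atime; // last access time, e.g. from stat()
  };

  // With this comparison, top() is always the least recently used path,
  // so the collector deletes the oldest garbage first.
  struct MoreRecentlyUsed
  {
      bool operator()(const GarbagePath & a, const GarbagePath & b) const
      {
          return a.atime > b.atime;
      }
  };

  using GarbageQueue =
      std::priority_queue<GarbagePath, std::vector<GarbagePath>, MoreRecentlyUsed>;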
again. (After the previous substituter mechanism refactoring I
didn't update the code that obtains the references of substitutable
paths.) This required some refactoring: the substituter programs
are now kept running and receive/respond to info requests via
stdin/stdout.
bytes have been freed, `--max-links' to stop when the Nix store
directory has fewer than N hard links (the latter being important
for very large Nix stores on filesystems with a 32000 subdirectories
limit).
$ nix-env -e $(which firefox)
or
$ nix-env -e /nix/store/nywzlygrkfcgz7dfmhm5xixlx1l0m60v-pan-0.132
* nix-env -i: if an argument contains a slash anywhere, treat it as a
path and follow it through symlinks into the Nix store. This allows
things like
$ nix-build -A firefox
$ nix-env -i ./result
* nix-env -q/-i/-e: don't complain when the `*' selector doesn't match
anything. In particular, `nix-env -q \*' doesn't fail anymore on an
empty profile.
need any info on substitutable paths, we just call the substituters
(such as download-using-manifests.pl) directly. This means that
it's no longer necessary for nix-pull to register substitutes or for
nix-channel to clear them, which makes those operations much faster
(NIX-95). Also, we don't have to worry about keeping nix-pull
manifests (in /nix/var/nix/manifests) and the database in sync with
each other.
The downside is that there is some overhead in calling an external
program to get the substitutes info. For instance, "nix-env -qas"
takes a bit longer.
Abolishing the substitutes table also makes the logic in
local-store.cc simpler, as we don't need to store info for invalid
paths. On the downside, you cannot do things like "nix-store -qR"
on a substitutable but invalid path (but nobody did that anyway).
* Never catch interrupts (the Interrupted exception).
--export' into the Nix store, and optionally check the cryptographic
signatures against /nix/etc/nix/signing-key.pub. (TODO: verify
against a set of public keys.)
path. This is like `nix-store --dump', only it also dumps the
meta-information of the store path (references, deriver). Will add
a `--sign' flag later to add a cryptographic signature, which we
will use for exchanging store paths between build farm machines in a
secure manner.
computing the store path (NIX-77). This is an important security
property in multi-user Nix stores.
Note that this changes the store paths of derivations (since the
derivation aterms are added using addTextToStore), but not most
outputs (unless they use builtins.toFile).
from a source directory. All files for which a predicate function
returns true are copied to the store. Typical example is to leave
out the .svn directory:
  stdenv.mkDerivation {
    ...
    src = builtins.filterSource
      (path: baseNameOf (toString path) != ".svn")
      ./source-dir;
    # as opposed to
    # src = ./source-dir;
  }
This is important because the .svn directory influences the hash in
a rather unpredictable and variable way.
`nix-store --delete'. But unprivileged users are not allowed to
ignore liveness.
* `nix-store --delete --ignore-liveness': ignore the runtime roots as
well.
process, so forward the operation.
* Spam the user about GC misconfigurations (NIX-71).
* findRoots: skip all roots that are unreadable - the warnings with
which we spam the user should be enough.
processes can register indirect roots. Of course, there is still
the problem that the garbage collector can only read the targets of
the indirect roots when it's running as root...
containing functions that operate on the Nix store. One
implementation is LocalStore, which operates on the Nix store
directly. The next step, to enable secure multi-user Nix, is to
create a different implementation RemoteStore that talks to a
privileged daemon process that uses LocalStore to perform the actual
operations.