It was failing with:
error: AWS error fetching 'nix-cache-info': The specified bucket does not exist
because `S3BinaryCacheStoreImpl` had a `bucketName` field that
shadowed the inherited `bucketName` from `S3BinaryCacheStoreConfig`.
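For illustration, this is the kind of field shadowing involved (class and field names trimmed down; not the actual Nix code):

```cpp
#include <iostream>
#include <string>

struct Config
{
    std::string bucketName = "my-cache"; // populated from the store URI
};

struct StoreImpl : Config
{
    // Re-declaring the field shadows Config::bucketName: code in StoreImpl
    // that reads `bucketName` sees this empty string, not the configured value.
    std::string bucketName;
};

int main()
{
    StoreImpl store;
    std::cout << "what the implementation sees: '" << store.bucketName << "'\n";
    std::cout << "what was configured:          '"
              << static_cast<Config &>(store).bucketName << "'\n";
}
```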
This change updates the seccomp profile to return ENOTSUP for the
getxattr family of functions. This reflects the behavior of filesystems that don’t
support extended attributes (or have an option to disable them), e.g.
ext2.
The current behavior is confusing for some programs because we can read
extended attributes, but only find out that they are not supported when
setting them. In addition, ACLs on Linux are implemented internally via
extended attributes, and if we don’t return ENOTSUP, the acl library
converts the file mode to an ACL.
https://git.savannah.nongnu.org/cgit/acl.git/tree/libacl/acl_get_file.c?id=d9bb1759d4dad2f28a6dcc8c1742ff75d16dd10d#n69
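As an illustration of the mechanism, this is roughly how such a rule can be expressed with libseccomp; the actual filter in Nix's sandbox is set up differently and covers more syscalls, so treat this only as a sketch:

```cpp
#include <cerrno>
#include <stdexcept>

#include <seccomp.h>

// Make the getxattr family fail with ENOTSUP, mimicking a filesystem
// without extended-attribute support. Everything else stays allowed.
void denyXattrReads()
{
    scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ALLOW);
    if (!ctx)
        throw std::runtime_error("unable to initialize seccomp");

    for (int syscallNr : {SCMP_SYS(getxattr), SCMP_SYS(lgetxattr), SCMP_SYS(fgetxattr)})
        if (seccomp_rule_add(ctx, SCMP_ACT_ERRNO(ENOTSUP), syscallNr, 0) != 0)
            throw std::runtime_error("unable to add seccomp rule");

    if (seccomp_load(ctx) != 0)
        throw std::runtime_error("unable to load seccomp filter");

    seccomp_release(ctx);
}
```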
By syncing with Nixpkgs, we reuse the same derivation, which is
generally a good idea, and has the benefit that it is transitively
a channel blocker.
Changes:
- https://github.com/NixOS/nixpkgs/pull/163313 (SuperSandro2000)
> nix: disable big-parallel for aws-sdk-cpp
> aws-sdk-cpp only takes ~1m52s on a 4 core machine under 50% load
> which does not justify the requirement on big parallel.
> Tested with `nix-build -A nixVersions.nix_2_6.aws-sdk-cpp`.
> I can finally build nix without requiring a big-parallel machine.
- https://github.com/NixOS/nixpkgs/pull/227506 (Artturin)
> nix: use [ ] instead null to empty requiredSystemFeatures
> fixes 'error: value is null while a list was expected' with 'nixpkgs.hostPlatform.gcc.arch = "x86_64";'
* manual: Contributing -> Development, Hacking -> Building
what's currently called "hacking" is really a set of instructions for setting up
a development environment and compiling from source. we have
a contribution guide in the repo (which rightly focuses on GitHub
workflows), and the material in the manual is more about working
on the code itself.
since we'd otherwise have three headings that amount to "Building Nix",
this change also moves the "classic Nix" instructions to the top.
we may want to reorganise this in the future, and bring
contributor-oriented information closer to the code, but for now let's
stick to more accurate names to ease navigation.
We are piping curl downloads into `unpackTarfileToSink()`, but the
latter is typically slower than the former if you're on a fast
connection. So the download could appear unnecessarily slow. (There is
even a risk that if the Git import is *really* slow for whatever
reason, the TCP connection could time out.)
So let's make the download buffer bigger by default - 64 MiB is big
enough for the Nixpkgs tarball. Perhaps in the future, we could have
an unlimited buffer that spills data to disk beyond a certain
threshold, but that's probably overkill.
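To make the trade-off concrete, here is a toy bounded buffer between a download thread and an unpack thread (purely illustrative; the real code is built on Nix's source/sink abstractions rather than a class like this). Once the buffer fills up, the producer blocks, so a slow unpacker makes the download itself stall; a larger capacity absorbs the speed difference:

```cpp
#include <condition_variable>
#include <cstddef>
#include <deque>
#include <mutex>
#include <string>

class BoundedBuffer
{
    std::mutex mutex;
    std::condition_variable notFull, notEmpty;
    std::deque<std::string> chunks;
    size_t size = 0;
    const size_t capacity;

public:
    explicit BoundedBuffer(size_t capacity) : capacity(capacity) {}

    void push(std::string chunk) // called from the download side
    {
        std::unique_lock lock(mutex);
        notFull.wait(lock, [&] { return size < capacity; }); // may overshoot by one chunk
        size += chunk.size();
        chunks.push_back(std::move(chunk));
        notEmpty.notify_one();
    }

    std::string pop() // called from the unpacking side
    {
        std::unique_lock lock(mutex);
        notEmpty.wait(lock, [&] { return !chunks.empty(); });
        auto chunk = std::move(chunks.front());
        chunks.pop_front();
        size -= chunk.size();
        notFull.notify_one();
        return chunk;
    }
};
```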
Currently, the worker protocol has a version number that we increment
whenever we change something in the protocol. However, this can cause
a collision between Nix PRs / forks that make protocol changes
(e.g. PR #9857 increments the version, which could collide with
another PR). So instead, the client and daemon now exchange a set of
protocol features (such as `auth-forwarding`). They will use the
intersection of the sets of features, i.e. the features they both
support.
Note that protocol features are completely distinct from
`ExperimentalFeature`s.
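Conceptually, the negotiation amounts to a set intersection. A minimal sketch (names are illustrative; the real implementation also serializes the sets over the wire):

```cpp
#include <algorithm>
#include <iostream>
#include <iterator>
#include <set>
#include <string>

using ProtocolFeatures = std::set<std::string>;

// Both sides advertise what they support; only the intersection is used.
ProtocolFeatures negotiate(const ProtocolFeatures & ours, const ProtocolFeatures & theirs)
{
    ProtocolFeatures common;
    std::set_intersection(
        ours.begin(), ours.end(),
        theirs.begin(), theirs.end(),
        std::inserter(common, common.end()));
    return common;
}

int main()
{
    ProtocolFeatures daemon{"auth-forwarding", "some-future-feature"};
    ProtocolFeatures client{"auth-forwarding"};
    for (auto & feature : negotiate(daemon, client))
        std::cout << feature << "\n"; // prints only "auth-forwarding"
}
```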
* Only build perl subproject on Linux
* Fix various Windows regressions
* Don't put the emulator hook in test builds
We run the tests in a separate derivation; we only need the hook for the dev shell.
* Fix native dev shells
* Fix cross dev shells we don't know how to emulate
Co-authored-by: PoweredByPie <poweredbypie@users.noreply.github.com>
Co-authored-by: Joachim Schiele <js@lastlog.de>
Co-authored-by: John Ericson <John.Ericson@Obsidian.Systems>
Following what is outlined in #10766, refactor the uds-remote-store such
that the member variables (state) live in the config object rather than
in the store itself.
Additionally, the config object gains a new, necessary constructor that
takes a scheme & authority.
Tests are commented out because of linking errors with the current config system.
When there is a new config system we can re-enable them.
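A rough sketch of the resulting shape, with hypothetical, simplified names (the real config uses `Setting<>` fields and more machinery):

```cpp
#include <string>
#include <utility>

// All state lives on the config; the store only holds the config and
// reads through it.
struct UDSRemoteStoreConfigSketch
{
    std::string path; // simplified stand-in for the real settings

    // the new constructor: built from a scheme and an authority,
    // e.g. ("unix", "/run/custom-daemon-socket")
    UDSRemoteStoreConfigSketch(std::string scheme, std::string authority)
        : path(authority.empty() ? "/nix/var/nix/daemon-socket/socket" : std::move(authority))
    {
        (void) scheme; // a real implementation would validate the scheme here
    }
};

struct UDSRemoteStoreSketch
{
    UDSRemoteStoreConfigSketch config;

    explicit UDSRemoteStoreSketch(UDSRemoteStoreConfigSketch cfg)
        : config(std::move(cfg))
    {
        // no store-local copies of config fields; everything goes through `config`
    }
};
```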
Co-authored-by: John Ericson <John.Ericson@Obsidian.Systems>
This was accidentally introduced
in f71b4da0b3. We didn't notice this
because the version got interpreted by the daemon as the obsolete "CPU
affinity will follow" field, and being non-zero, it would then read
another integer for the ignored CPU affinity.
Progress towards #10766
I thought that #10768 achieved this, but when I went to use this stuff (in
Hydra), it turned out it did not. (Those `using FooConfig;` lines were not
working --- they are so finicky!) This PR gets the job done, and adds
some trivial unit tests to make sure I did what I intended.
I had to add a header to expose `SSHStoreConfig`, after which the
preexisting `ssh-store-config.*` files were very confusingly named, so I
renamed them to `common-ssh-store-config.hh` to match the type defined
therein.
They are not actually part of the store layer, but instead part of the
Nix executable infra (libraries don't need plugins, executables do).
This is part of a larger project of moving all of our legacy settings
infra to libmain, and having the underlying libraries just have plain
configuration structs detached from any settings infra / UI layer.
Progress on #5638
... at call sites that may be in the hot path.
I do not know how clever the compiler gets at these sites.
My primary concern is to not regress performance and I am confident
that this achieves it the easy way.
(System) features are unlikely to be empty strings, but when they
come in through `structuredAttrs`, they probably can be.
I don't think this means we should drop them, but most likely they
will be dropped after this anyway, because next time they'll be parsed with
`tokenizeString`.
TODO: We should forbid empty features.
I don't think it's completely impossible, but I can't construct
one easily, as `derivationStrict` seems to (re)tokenize the `outputs`
attribute, dropping the empty output.
It's not a scenario we have to account for here.
Bug not reported in 6 years, but here you go.
Also, it is safe to switch to the normal `concatStringsSep` behavior
because `tokenizeString` does not produce empty items.
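For context, a sketch of the splitting behaviour relied on here (the real `tokenizeString` in libutil differs in details, but likewise never yields empty tokens):

```cpp
#include <iostream>
#include <string>
#include <string_view>
#include <vector>

// Split on whitespace, skipping runs of separators, so the result never
// contains empty items; joining it back with a single separator is then safe.
std::vector<std::string> tokenize(std::string_view s, std::string_view seps = " \t\n\r")
{
    std::vector<std::string> result;
    size_t pos = 0;
    while (pos < s.size()) {
        auto start = s.find_first_not_of(seps, pos);
        if (start == std::string_view::npos) break;
        auto end = s.find_first_of(seps, start);
        if (end == std::string_view::npos) end = s.size();
        result.emplace_back(s.substr(start, end - start)); // never empty
        pos = end;
    }
    return result;
}

int main()
{
    for (auto & token : tokenize("  kvm   nixos-test \n"))
        std::cout << '[' << token << "]\n"; // [kvm] [nixos-test]
}
```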