Default branch: MAIN
Revision 1.4, Fri Aug 14 00:53:16 2020 UTC (2 years, 9 months ago) by riastradh
Branch: MAIN
CVS Tags: thorpej-i2c-spi-conf2-base, thorpej-i2c-spi-conf2,
    thorpej-i2c-spi-conf-base, thorpej-i2c-spi-conf, thorpej-futex2-base,
    thorpej-futex2, thorpej-futex-base, thorpej-futex, thorpej-cfargs2-base,
    thorpej-cfargs2, thorpej-cfargs-base, thorpej-cfargs, netbsd-10-base,
    netbsd-10, cjep_sun2x-base1, cjep_sun2x-base, cjep_sun2x,
    cjep_staticlib_x-base1, cjep_staticlib_x-base, cjep_staticlib_x,
    bouyer-sunxi-drm-base, bouyer-sunxi-drm, HEAD
Changes since 1.3: +5 -3 lines
New system call getrandom() compatible with Linux and others.

Three ways to call:

getrandom(p, n, 0)
    Blocks at boot until full entropy.
    Returns up to n bytes at p; guarantees up to 256 bytes even if
    interrupted after blocking.
    getrandom(0,0,0) serves as an entropy barrier: return only after
    the system has full entropy.

getrandom(p, n, GRND_INSECURE)
    Never blocks.  Guarantees up to 256 bytes even if interrupted.
    Equivalent to /dev/urandom.
    Safe only after successful getrandom(...,0), getrandom(...,GRND_RANDOM),
    or read from /dev/random.

getrandom(p, n, GRND_RANDOM)
    May block at any time.
    Returns up to n bytes at p, but no guarantees about how many --
    may return as short as 1 byte.
    Equivalent to /dev/random.
    Legacy.  Provided only for source compatibility with Linux.

Can also use flags|GRND_NONBLOCK to fail with EWOULDBLOCK/EAGAIN without
producing any output instead of blocking.

- The combination GRND_INSECURE|GRND_NONBLOCK is the same as
  GRND_INSECURE, since GRND_INSECURE never blocks anyway.

- The combinations GRND_INSECURE|GRND_RANDOM and
  GRND_INSECURE|GRND_RANDOM|GRND_NONBLOCK are nonsensical and fail
  with EINVAL.

As proposed on tech-userlevel, tech-crypto, tech-security, and tech-kern,
and subsequently adopted by core (minus the getentropy part of the
proposal, because other operating systems and participants in the
discussion couldn't come to an agreement about getentropy and blocking
semantics):

https://mail-index.netbsd.org/tech-userlevel/2020/05/02/msg012333.html
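[Editor's note: the following is an illustrative sketch only, not part of
the commit.  It shows one way a userland program might use the calling
conventions described in the log message above, assuming the getrandom()
prototype and GRND_* flags from <sys/random.h>.]

#include <sys/types.h>
#include <sys/random.h>

#include <err.h>
#include <errno.h>
#include <stddef.h>

int
main(void)
{
	unsigned char key[32];		/* <= 256 bytes: no short read once
					   the call succeeds */
	unsigned char cookie[16];

	/*
	 * GRND_NONBLOCK probe: fails with EAGAIN/EWOULDBLOCK instead of
	 * blocking if the system is not yet fully seeded.
	 */
	if (getrandom(key, sizeof(key), GRND_NONBLOCK) == -1) {
		if (errno != EAGAIN && errno != EWOULDBLOCK)
			err(1, "getrandom(GRND_NONBLOCK)");
		warnx("not yet seeded; waiting for full entropy");

		/* Entropy barrier: returns only at full entropy. */
		if (getrandom(NULL, 0, 0) == -1)
			err(1, "getrandom(0,0,0)");

		/* Blocking default: suitable for long-term keys. */
		if (getrandom(key, sizeof(key), 0) == -1)
			err(1, "getrandom");
	}

	/*
	 * GRND_INSECURE never blocks; safe here only because a blocking
	 * call (or the barrier) already succeeded above.
	 */
	if (getrandom(cookie, sizeof(cookie), GRND_INSECURE) == -1)
		err(1, "getrandom(GRND_INSECURE)");

	/* ... use key and cookie ... */
	return 0;
}

GRND_RANDOM is omitted from the sketch because the log message marks it
as legacy, provided only for source compatibility with Linux.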
Revision 1.3, Fri May 8 15:54:11 2020 UTC (3 years, 1 month ago) by riastradh
Branch: MAIN
Changes since 1.2: +1 -3 lines
Make variable unused outside kern_entropy.c static.
Revision 1.2, Thu May 7 19:05:51 2020 UTC (3 years, 1 month ago) by riastradh
Branch: MAIN
Changes since 1.1: +2 -1 lines
Consolidate entropy on RNDADDDATA and writes to /dev/random.

The man page for some time has advertised:

  Writing to either /dev/random or /dev/urandom influences subsequent
  output of both devices, guaranteed to take effect at next open.

So let's make that true again.

It is a conscious choice _not_ to consolidate entropy frequently.  For
example, if you have a _slow_ HWRNG, which provides 32 bits of entropy
every few seconds, and you reveal a hash of that to the adversary before
any more comes in, the adversary can in principle just keep guessing the
intermediate state by a brute force search over ~2^32 possibilities.

To mitigate this, the kernel generally tries to avoid consolidating
entropy from the per-CPU pools until doing so would bring us from zero
entropy to full entropy.

However, there are various _possible_ sources of entropy which are just
hard to give honest estimates for that are valid on ~all machines -- like
interrupt timings.  The time at which we read a seed in, which usually
happens via /etc/rc.d/random_seed early in userland, is a reasonable time
to gather this up.  An operator or system engineer who knows another
opportune moment can always issue `sysctl -w kern.entropy.consolidate=1'.

Prompted by a suggestion from nia@ to consolidate entropy at the first
transition to userland.  I chose not to do that because it would likely
cause warning fatigue on systems that are perfectly fine with a random
seed -- doing it this way instead lets rndctl -L trigger the
consolidation automatically.  A subsequent commit will reorder the
operations in rndctl again to make it work out better.
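[Editor's note: an illustrative sketch only, not part of the commit.  It
feeds a seed into the pool by writing to /dev/random, which with this
change also consolidates entropy, guaranteed to take effect at next open.
The seed source is a placeholder; an equivalent operator action is
`sysctl -w kern.entropy.consolidate=1'.]

#include <err.h>
#include <fcntl.h>
#include <unistd.h>

int
main(void)
{
	unsigned char seed[32] = { 0 };	/* placeholder: bytes saved from a
					   previous boot or another source */
	int fd;

	if ((fd = open("/dev/random", O_WRONLY)) == -1)
		err(1, "open /dev/random");
	if (write(fd, seed, sizeof(seed)) == -1)
		err(1, "write /dev/random");
	(void)close(fd);
	return 0;
}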
Revision 1.1, Thu Apr 30 03:28:19 2020 UTC (3 years, 1 month ago) by riastradh
Branch: MAIN
Rewrite entropy subsystem.

Primary goals:

1. Use cryptography primitives designed and vetted by cryptographers.
2. Be honest about entropy estimation.
3. Propagate full entropy as soon as possible.
4. Simplify the APIs.
5. Reduce overhead of rnd_add_data and cprng_strong.
6. Reduce side channels of HWRNG data and human input sources.
7. Improve visibility of operation with sysctl and event counters.

Caveat: rngtest is no longer used generically for RND_TYPE_RNG
rndsources.  Hardware RNG devices should have hardware-specific health
tests.  For example, checking for two repeated 256-bit outputs works to
detect AMD's 2019 RDRAND bug.  Not all hardware RNGs are necessarily
designed to produce exactly uniform output.

ENTROPY POOL

- A Keccak sponge, with test vectors, replaces the old LFSR/SHA-1 kludge
  as the cryptographic primitive.

- `Entropy depletion' is available for testing purposes with a sysctl
  knob kern.entropy.depletion; otherwise it is disabled, and once the
  system reaches full entropy it is assumed to stay there as far as
  modern cryptography is concerned.

- No `entropy estimation' based on sample values.  Such `entropy
  estimation' is a contradiction in terms, dishonest to users, and a
  potential source of side channels.  It is the responsibility of the
  driver author to study the entropy of the process that generates the
  samples.

- Per-CPU gathering pools avoid contention on a global queue.

- Entropy is occasionally consolidated into the global pool -- as soon
  as it's ready, if we've never reached full entropy, and with a rate
  limit afterward.  Operators can force consolidation now by running
  sysctl -w kern.entropy.consolidate=1.

- The rndsink(9) API has been replaced by an epoch counter which changes
  whenever entropy is consolidated into the global pool.
  . Usage: Cache entropy_epoch() when you seed.  If entropy_epoch() has
    changed when you're about to use whatever you seeded, reseed.
  . Epoch is never zero, so initialize the cache to 0 if you want to
    reseed on first use.
  . Epoch is -1 iff we have never reached full entropy -- in other
    words, the old rnd_initial_entropy is (entropy_epoch() != -1) -- but
    it is better if you check for changes rather than for -1, so that if
    the system estimated its own entropy incorrectly, entropy
    consolidation has the opportunity to prevent future compromise.

- Sysctls and event counters provide operator visibility into what's
  happening:
  . kern.entropy.needed - bits of entropy short of full entropy
  . kern.entropy.pending - bits known to be pending in per-CPU pools,
    can be consolidated with sysctl -w kern.entropy.consolidate=1
  . kern.entropy.epoch - number of times consolidation has happened,
    never 0, and -1 iff we have never reached full entropy

CPRNG_STRONG

- A cprng_strong instance is now a collection of per-CPU NIST
  Hash_DRBGs.  There are only two in the system: user_cprng for
  /dev/urandom and sysctl kern.?random, and kern_cprng for kernel users
  which may need to operate in interrupt context up to IPL_VM.

  (Calling cprng_strong in interrupt context does not strike me as a
  particularly good idea, so I added an event counter to see whether
  anything actually does.)

- Event counters provide operator visibility into when reseeding
  happens.

INTEL RDRAND/RDSEED, VIA C3 RNG (CPU_RNG)

- Unwired for now; will be rewired in a subsequent commit.
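[Editor's note: an illustrative sketch only, not part of the commit.  It
shows the reseed-on-epoch-change pattern described above for a
hypothetical in-kernel consumer.  struct my_state, my_reseed(), and
my_generate() are placeholders; entropy_epoch() is the interface
introduced here, assumed to be declared in <sys/entropy.h> and to return
an unsigned value per the never-0 / -1 convention in the log message.]

#include <sys/types.h>
#include <sys/cdefs.h>
#include <sys/entropy.h>

struct my_state {
	unsigned	ms_epoch;	/* 0 = never seeded; entropy_epoch()
					   is never 0 */
	/* ... generator state seeded from the pool, e.g. a key ... */
};

static void
my_reseed(struct my_state *ms)
{
	/* Placeholder: draw fresh seed material, e.g. via cprng(9). */
	(void)ms;
}

static void
my_generate(struct my_state *ms, void *buf, size_t len)
{
	unsigned epoch = entropy_epoch();

	/* Reseed if entropy has been consolidated since we last seeded. */
	if (__predict_false(ms->ms_epoch != epoch)) {
		my_reseed(ms);
		ms->ms_epoch = epoch;
	}

	/* Placeholder: produce len bytes of output into buf. */
	(void)buf;
	(void)len;
}

Checking for a changed epoch rather than for -1 means a consumer still
reseeds after a later consolidation even if the system initially
overestimated its own entropy.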