
Initialize the random driver earlier; fix CRNG initialization when we
trust the CPU's RNG on NUMA systems; other miscellaneous cleanups and
fixes.
 -----BEGIN PGP SIGNATURE-----
 
 iQEzBAABCAAdFiEEK2m5VNv+CHkogTfJ8vlZVpUNgaMFAlzSEyoACgkQ8vlZVpUN
 gaPJKAf/cBOEU9Zn0PzdCiybl6IT88A/EcL2FPPFbMrRI/aUDZ6jBURsa2Ds0+Rb
 XiMjElnxMGSv03P+MTo0SPTVwYLGPpvNgplL25I4HMfKUPbpAdxk5UZS9pUllv2T
 3ftQfGgdDalewlT8BH0K5EY9E8Whe9ODrLgGq6jXgXHm2sssDzPF+pB2ySuDRvsA
 W/6rF+PW4n/2n3An6h9jc/0cShurarpHjvWzuFWY3Mevgrl53r8SppIt085/5K6A
 tsSdXIqIBvhCp9OvXBHzEDCEPpdVBlL81XauIu6uMSlJ/oofOqjJs2Ib1k04Xx9z
 dp4/7REfm/HFMyT9MNAYPmhmXruiiQ==
 =56QI
 -----END PGP SIGNATURE-----

Merge tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random

Pull randomness updates from Ted Ts'o:

 - initialize the random driver earlier

 - fix CRNG initialization when we trust the CPU's RNG on NUMA systems

 - other miscellaneous cleanups and fixes.

* tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random:
  random: add a spinlock_t to struct batched_entropy
  random: document get_random_int() family
  random: fix CRNG initialization when random.trust_cpu=1
  random: move rand_initialize() earlier
  random: only read from /dev/random after its pool has received 128 bits
  drivers/char/random.c: make primary_crng static
  drivers/char/random.c: remove unused struct poolinfo::poolbits
  drivers/char/random.c: constify poolinfo_table
Linus Torvalds 2019-05-07 21:42:23 -07:00
commit dd5001e21a
4 changed files with 156 additions and 78 deletions

drivers/char/random.c

@@ -101,15 +101,13 @@
* Exported interfaces ---- output
* ===============================
*
* There are three exported interfaces; the first is one designed to
* be used from within the kernel:
* There are four exported interfaces; two for use within the kernel,
* and two for use from userspace.
*
* void get_random_bytes(void *buf, int nbytes);
* Exported interfaces ---- userspace output
* -----------------------------------------
*
* This interface will return the requested number of random bytes,
* and place it in the requested buffer.
*
* The two other interfaces are two character devices /dev/random and
* The userspace interfaces are two character devices /dev/random and
* /dev/urandom. /dev/random is suitable for use when very high
* quality randomness is desired (for example, for key generation or
* one-time pads), as it will only return a maximum of the number of
@@ -122,6 +120,77 @@
* this will result in random numbers that are merely cryptographically
* strong. For many applications, however, this is acceptable.
*
* Exported interfaces ---- kernel output
* --------------------------------------
*
* The primary kernel interface is
*
* void get_random_bytes(void *buf, int nbytes);
*
* This interface will return the requested number of random bytes,
* and place it in the requested buffer. This is equivalent to a
* read from /dev/urandom.
*
* For less critical applications, there are the functions:
*
* u32 get_random_u32()
* u64 get_random_u64()
* unsigned int get_random_int()
* unsigned long get_random_long()
*
* These are produced by a cryptographic RNG seeded from get_random_bytes,
* and so do not deplete the entropy pool as much. These are recommended
* for most in-kernel operations *if the result is going to be stored in
* the kernel*.
*
* Specifically, the get_random_int() family do not attempt to do
* "anti-backtracking". If you capture the state of the kernel (e.g.
* by snapshotting the VM), you can figure out previous get_random_int()
* return values. But if the value is stored in the kernel anyway,
* this is not a problem.
*
* It *is* safe to expose get_random_int() output to attackers (e.g. as
* network cookies); given outputs 1..n, it's not feasible to predict
* outputs 0 or n+1. The only concern is an attacker who breaks into
* the kernel later; the get_random_int() engine is not reseeded as
* often as the get_random_bytes() one.
*
* get_random_bytes() is needed for keys that need to stay secret after
* they are erased from the kernel. For example, any key that will
* be wrapped and stored encrypted. And session encryption keys: we'd
* like to know that after the session is closed and the keys erased,
* the plaintext is unrecoverable to someone who recorded the ciphertext.
*
* But for network ports/cookies, stack canaries, PRNG seeds, address
* space layout randomization, session *authentication* keys, or other
* applications where the sensitive data is stored in the kernel in
* plaintext for as long as it's sensitive, the get_random_int() family
* is just fine.
*
* Consider ASLR. We want to keep the address space secret from an
* outside attacker while the process is running, but once the address
* space is torn down, it's of no use to an attacker any more. And it's
* stored in kernel data structures as long as it's alive, so worrying
* about an attacker's ability to extrapolate it from the get_random_int()
* CRNG is silly.
*
* Even some cryptographic keys are safe to generate with get_random_int().
* In particular, keys for SipHash are generally fine. Here, knowledge
* of the key authorizes you to do something to a kernel object (inject
* packets to a network connection, or flood a hash table), and the
* key is stored with the object being protected. Once it goes away,
* we no longer care if anyone knows the key.
*
* prandom_u32()
* -------------
*
* For even weaker applications, see the pseudorandom generator
* prandom_u32(), prandom_max(), and prandom_bytes(). If the random
* numbers aren't security-critical at all, these are *far* cheaper.
* Useful for self-tests, random error simulation, randomized backoffs,
* and any other application where you trust that nobody is trying to
* maliciously mess with you by guessing the "random" numbers.
*
* Exported interfaces ---- input
* ==============================
*
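
The contrast the new comment draws is easier to see side by side. Below is a minimal sketch, not from the patch, of how an in-kernel caller might choose between the two families; the struct and function names are hypothetical, only the <linux/random.h> calls are real.

#include <linux/random.h>

/* Hypothetical example, for illustration only. */
struct example_session {
	u8 key[32];	 /* must stay secret even after it is erased */
	u32 cookie_seed; /* worthless once the session dies */
};

static void example_session_init(struct example_session *s)
{
	/*
	 * Long-term secret: get_random_bytes() provides
	 * anti-backtracking, so a later kernel compromise does not
	 * reveal keys that were already erased.
	 */
	get_random_bytes(s->key, sizeof(s->key));

	/*
	 * Sensitive only while it lives in kernel memory: the cheaper
	 * batched generator is fine.
	 */
	s->cookie_seed = get_random_u32();
}
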
@@ -295,7 +364,7 @@
* To allow fractional bits to be tracked, the entropy_count field is
* denominated in units of 1/8th bits.
*
* 2*(ENTROPY_SHIFT + log2(poolbits)) must <= 31, or the multiply in
* 2*(ENTROPY_SHIFT + poolbitshift) must <= 31, or the multiply in
* credit_entropy_bits() needs to be 64 bits wide.
*/
#define ENTROPY_SHIFT 3
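
A quick worked check, not from the patch, of what the fractional units mean and why the 31-bit bound holds for the default pools:

#include <linux/log2.h>

#define ENTROPY_SHIFT 3

/* Illustrative helpers, not kernel API: entropy_count is kept in
 * 1/8th bits, so crediting 12 bits stores 12 << 3 = 96 units. */
static inline int bits_to_frac(int nbits)
{
	return nbits << ENTROPY_SHIFT;
}

static inline int frac_to_bits(int nfrac)
{
	return nfrac >> ENTROPY_SHIFT;
}

/* For the 4096-bit input pool, poolbitshift = ilog2(128) + 5 = 12,
 * so 2 * (ENTROPY_SHIFT + poolbitshift) = 30 <= 31 and the multiply
 * in credit_entropy_bits() fits in a signed 32-bit value. */
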
@@ -359,9 +428,9 @@ static int random_write_wakeup_bits = 28 * OUTPUT_POOL_WORDS;
* polynomial which improves the resulting TGFSR polynomial to be
* irreducible, which we have made here.
*/
static struct poolinfo {
int poolbitshift, poolwords, poolbytes, poolbits, poolfracbits;
#define S(x) ilog2(x)+5, (x), (x)*4, (x)*32, (x) << (ENTROPY_SHIFT+5)
static const struct poolinfo {
int poolbitshift, poolwords, poolbytes, poolfracbits;
#define S(x) ilog2(x)+5, (x), (x)*4, (x) << (ENTROPY_SHIFT+5)
int tap1, tap2, tap3, tap4, tap5;
} poolinfo_table[] = {
/* was: x^128 + x^103 + x^76 + x^51 +x^25 + x + 1 */
@@ -415,7 +484,7 @@ struct crng_state {
spinlock_t lock;
};
struct crng_state primary_crng = {
static struct crng_state primary_crng = {
.lock = __SPIN_LOCK_UNLOCKED(primary_crng.lock),
};
@@ -470,7 +539,6 @@ struct entropy_store {
unsigned short add_ptr;
unsigned short input_rotate;
int entropy_count;
int entropy_total;
unsigned int initialized:1;
unsigned int last_data_init:1;
__u8 last_data[EXTRACT_SIZE];
@@ -643,7 +711,7 @@ static void process_random_ready_list(void)
*/
static void credit_entropy_bits(struct entropy_store *r, int nbits)
{
int entropy_count, orig;
int entropy_count, orig, has_initialized = 0;
const int pool_size = r->poolinfo->poolfracbits;
int nfrac = nbits << ENTROPY_SHIFT;
@@ -698,23 +766,25 @@ retry:
entropy_count = 0;
} else if (entropy_count > pool_size)
entropy_count = pool_size;
if ((r == &blocking_pool) && !r->initialized &&
(entropy_count >> ENTROPY_SHIFT) > 128)
has_initialized = 1;
if (cmpxchg(&r->entropy_count, orig, entropy_count) != orig)
goto retry;
r->entropy_total += nbits;
if (!r->initialized && r->entropy_total > 128) {
if (has_initialized)
r->initialized = 1;
r->entropy_total = 0;
}
trace_credit_entropy_bits(r->name, nbits,
entropy_count >> ENTROPY_SHIFT,
r->entropy_total, _RET_IP_);
entropy_count >> ENTROPY_SHIFT, _RET_IP_);
if (r == &input_pool) {
int entropy_bits = entropy_count >> ENTROPY_SHIFT;
struct entropy_store *other = &blocking_pool;
if (crng_init < 2 && entropy_bits >= 128) {
if (crng_init < 2) {
if (entropy_bits < 128)
return;
crng_reseed(&primary_crng, r);
entropy_bits = r->entropy_count >> ENTROPY_SHIFT;
}
@@ -725,20 +795,14 @@ retry:
wake_up_interruptible(&random_read_wait);
kill_fasync(&fasync, SIGIO, POLL_IN);
}
/* If the input pool is getting full, send some
* entropy to the blocking pool until it is 75% full.
/* If the input pool is getting full, and the blocking
* pool has room, send some entropy to the blocking
* pool.
*/
if (entropy_bits > random_write_wakeup_bits &&
r->initialized &&
r->entropy_total >= 2*random_read_wakeup_bits) {
struct entropy_store *other = &blocking_pool;
if (other->entropy_count <=
3 * other->poolinfo->poolfracbits / 4) {
schedule_work(&other->push_work);
r->entropy_total = 0;
}
}
if (!work_pending(&other->push_work) &&
(ENTROPY_BITS(r) > 6 * r->poolinfo->poolbytes) &&
(ENTROPY_BITS(other) <= 6 * other->poolinfo->poolbytes))
schedule_work(&other->push_work);
}
}
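
Expanding the units shows the replacement test preserves the old 75% rule: a pool holds 8 * poolbytes bits, so ENTROPY_BITS(r) > 6 * r->poolinfo->poolbytes fires at three-quarters full, which is 3072 bits for the 4096-bit input pool and 768 bits for the 1024-bit blocking pool. Entropy is pushed only while the input pool is above that mark and the blocking pool is at or below it, and the work_pending() check avoids queueing a push that is already scheduled.
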
@@ -777,6 +841,7 @@ static struct crng_state **crng_node_pool __read_mostly;
#endif
static void invalidate_batched_entropy(void);
static void numa_crng_init(void);
static bool trust_cpu __ro_after_init = IS_ENABLED(CONFIG_RANDOM_TRUST_CPU);
static int __init parse_trust_cpu(char *arg)
@@ -805,7 +870,9 @@ static void crng_initialize(struct crng_state *crng)
}
crng->state[i] ^= rv;
}
if (trust_cpu && arch_init) {
if (trust_cpu && arch_init && crng == &primary_crng) {
invalidate_batched_entropy();
numa_crng_init();
crng_init = 2;
pr_notice("random: crng done (trusting CPU's manufacturer)\n");
}
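
With the added calls, trusting the CPU vendor's RNG now seeds every NUMA node's crng and discards any batched output generated beforehand, before crng_init is bumped to 2. The trust can be selected at build time via CONFIG_RANDOM_TRUST_CPU or, as in the commit subject, on the kernel command line:

random.trust_cpu=1
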
@@ -1553,6 +1620,11 @@ static ssize_t extract_entropy_user(struct entropy_store *r, void __user *buf,
int large_request = (nbytes > 256);
trace_extract_entropy_user(r->name, nbytes, ENTROPY_BITS(r), _RET_IP_);
if (!r->initialized && r->pull) {
xfer_secondary_pool(r, ENTROPY_BITS(r->pull)/8);
if (!r->initialized)
return 0;
}
xfer_secondary_pool(r, nbytes);
nbytes = account(r, nbytes, 0, 0);
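
The effect of the added check: a read from /dev/random first attempts one pull from the input pool, and if the blocking pool still has not accumulated its initial 128 bits, the read produces no output rather than emitting data from a barely seeded pool.
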
@@ -1783,7 +1855,7 @@ EXPORT_SYMBOL(get_random_bytes_arch);
* data into the pool to prepare it for use. The pool is not cleared
* as that can only decrease the entropy in the pool.
*/
static void init_std_data(struct entropy_store *r)
static void __init init_std_data(struct entropy_store *r)
{
int i;
ktime_t now = ktime_get_real();
@@ -1810,7 +1882,7 @@ static void init_std_data(struct entropy_store *r)
* take care not to overwrite the precious per platform data
* we were given.
*/
static int rand_initialize(void)
int __init rand_initialize(void)
{
init_std_data(&input_pool);
init_std_data(&blocking_pool);
@@ -1822,7 +1894,6 @@ static int rand_initialize(void)
}
return 0;
}
early_initcall(rand_initialize);
#ifdef CONFIG_BLOCK
void rand_initialize_disk(struct gendisk *disk)
@@ -2211,8 +2282,8 @@ struct batched_entropy {
u32 entropy_u32[CHACHA_BLOCK_SIZE / sizeof(u32)];
};
unsigned int position;
spinlock_t batch_lock;
};
static rwlock_t batched_entropy_reset_lock = __RW_LOCK_UNLOCKED(batched_entropy_reset_lock);
/*
* Get a random word for internal kernel use only. The quality of the random
@@ -2222,12 +2293,14 @@ static rwlock_t batched_entropy_reset_lock = __RW_LOCK_UNLOCKED(batched_entropy_
* wait_for_random_bytes() should be called and return 0 at least once
* at any point prior.
*/
static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64);
static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64) = {
.batch_lock = __SPIN_LOCK_UNLOCKED(batched_entropy_u64.lock),
};
u64 get_random_u64(void)
{
u64 ret;
bool use_lock;
unsigned long flags = 0;
unsigned long flags;
struct batched_entropy *batch;
static void *previous;
@@ -2242,28 +2315,25 @@ u64 get_random_u64(void)
warn_unseeded_randomness(&previous);
use_lock = READ_ONCE(crng_init) < 2;
batch = &get_cpu_var(batched_entropy_u64);
if (use_lock)
read_lock_irqsave(&batched_entropy_reset_lock, flags);
batch = raw_cpu_ptr(&batched_entropy_u64);
spin_lock_irqsave(&batch->batch_lock, flags);
if (batch->position % ARRAY_SIZE(batch->entropy_u64) == 0) {
extract_crng((u8 *)batch->entropy_u64);
batch->position = 0;
}
ret = batch->entropy_u64[batch->position++];
if (use_lock)
read_unlock_irqrestore(&batched_entropy_reset_lock, flags);
put_cpu_var(batched_entropy_u64);
spin_unlock_irqrestore(&batch->batch_lock, flags);
return ret;
}
EXPORT_SYMBOL(get_random_u64);
static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u32);
static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u32) = {
.batch_lock = __SPIN_LOCK_UNLOCKED(batched_entropy_u32.lock),
};
u32 get_random_u32(void)
{
u32 ret;
bool use_lock;
unsigned long flags = 0;
unsigned long flags;
struct batched_entropy *batch;
static void *previous;
@@ -2272,18 +2342,14 @@ u32 get_random_u32(void)
warn_unseeded_randomness(&previous);
use_lock = READ_ONCE(crng_init) < 2;
batch = &get_cpu_var(batched_entropy_u32);
if (use_lock)
read_lock_irqsave(&batched_entropy_reset_lock, flags);
batch = raw_cpu_ptr(&batched_entropy_u32);
spin_lock_irqsave(&batch->batch_lock, flags);
if (batch->position % ARRAY_SIZE(batch->entropy_u32) == 0) {
extract_crng((u8 *)batch->entropy_u32);
batch->position = 0;
}
ret = batch->entropy_u32[batch->position++];
if (use_lock)
read_unlock_irqrestore(&batched_entropy_reset_lock, flags);
put_cpu_var(batched_entropy_u32);
spin_unlock_irqrestore(&batch->batch_lock, flags);
return ret;
}
EXPORT_SYMBOL(get_random_u32);
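
The seeding caveat in the comment above these functions looks like this in practice; a hedged sketch with a hypothetical caller, using only the wait_for_random_bytes() and get_random_u64() calls declared in <linux/random.h>:

#include <linux/random.h>

/* Hypothetical caller: obtain a u64 guaranteed to come from a fully
 * seeded CRNG. */
static int example_seeded_u64(u64 *out)
{
	int ret;

	ret = wait_for_random_bytes(); /* sleeps until the CRNG is ready */
	if (ret)
		return ret;	       /* e.g. interrupted by a signal */

	*out = get_random_u64();
	return 0;
}
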
@@ -2297,12 +2363,19 @@ static void invalidate_batched_entropy(void)
int cpu;
unsigned long flags;
write_lock_irqsave(&batched_entropy_reset_lock, flags);
for_each_possible_cpu (cpu) {
per_cpu_ptr(&batched_entropy_u32, cpu)->position = 0;
per_cpu_ptr(&batched_entropy_u64, cpu)->position = 0;
struct batched_entropy *batched_entropy;
batched_entropy = per_cpu_ptr(&batched_entropy_u32, cpu);
spin_lock_irqsave(&batched_entropy->batch_lock, flags);
batched_entropy->position = 0;
spin_unlock(&batched_entropy->batch_lock);
batched_entropy = per_cpu_ptr(&batched_entropy_u64, cpu);
spin_lock(&batched_entropy->batch_lock);
batched_entropy->position = 0;
spin_unlock_irqrestore(&batched_entropy->batch_lock, flags);
}
write_unlock_irqrestore(&batched_entropy_reset_lock, flags);
}
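
The net effect of the batch rework: the global batched_entropy_reset_lock, which every pre-seed get_random_u32()/get_random_u64() call had to share, is replaced by a spinlock embedded in each per-CPU batch. The fast path now only ever takes a lock local to its own CPU's cache line, and invalidate_batched_entropy() walks the CPUs taking each batch lock in turn, with interrupts held off across the u32/u64 pair by the split irqsave/irqrestore.
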
/**

include/linux/random.h

@@ -36,6 +36,7 @@ extern void add_interrupt_randomness(int irq, int irq_flags) __latent_entropy;
extern void get_random_bytes(void *buf, int nbytes);
extern int wait_for_random_bytes(void);
extern int __init rand_initialize(void);
extern bool rng_is_initialized(void);
extern int add_random_ready_callback(struct random_ready_callback *rdy);
extern void del_random_ready_callback(struct random_ready_callback *rdy);

include/trace/events/random.h

@@ -62,15 +62,14 @@ DEFINE_EVENT(random__mix_pool_bytes, mix_pool_bytes_nolock,
TRACE_EVENT(credit_entropy_bits,
TP_PROTO(const char *pool_name, int bits, int entropy_count,
int entropy_total, unsigned long IP),
unsigned long IP),
TP_ARGS(pool_name, bits, entropy_count, entropy_total, IP),
TP_ARGS(pool_name, bits, entropy_count, IP),
TP_STRUCT__entry(
__field( const char *, pool_name )
__field( int, bits )
__field( int, entropy_count )
__field( int, entropy_total )
__field(unsigned long, IP )
),
@@ -78,14 +77,12 @@ TRACE_EVENT(credit_entropy_bits,
__entry->pool_name = pool_name;
__entry->bits = bits;
__entry->entropy_count = entropy_count;
__entry->entropy_total = entropy_total;
__entry->IP = IP;
),
TP_printk("%s pool: bits %d entropy_count %d entropy_total %d "
"caller %pS", __entry->pool_name, __entry->bits,
__entry->entropy_count, __entry->entropy_total,
(void *)__entry->IP)
TP_printk("%s pool: bits %d entropy_count %d caller %pS",
__entry->pool_name, __entry->bits,
__entry->entropy_count, (void *)__entry->IP)
);
TRACE_EVENT(push_to_pool,

init/main.c

@@ -569,13 +569,6 @@ asmlinkage __visible void __init start_kernel(void)
page_address_init();
pr_notice("%s", linux_banner);
setup_arch(&command_line);
/*
* Set up the initial canary and entropy after arch
* and after adding latent and command line entropy.
*/
add_latent_entropy();
add_device_randomness(command_line, strlen(command_line));
boot_init_stack_canary();
mm_init_cpumask(&init_mm);
setup_command_line(command_line);
setup_nr_cpu_ids();
@@ -660,6 +653,20 @@ asmlinkage __visible void __init start_kernel(void)
hrtimers_init();
softirq_init();
timekeeping_init();
/*
* For best initial stack canary entropy, prepare it after:
* - setup_arch() for any UEFI RNG entropy and boot cmdline access
* - timekeeping_init() for ktime entropy used in rand_initialize()
* - rand_initialize() to get any arch-specific entropy like RDRAND
* - add_latent_entropy() to get any latent entropy
* - adding command line entropy
*/
rand_initialize();
add_latent_entropy();
add_device_randomness(command_line, strlen(command_line));
boot_init_stack_canary();
time_init();
printk_safe_init();
perf_event_init();