diff --git a/Documentation/RCU/Design/Requirements/Requirements.html b/Documentation/RCU/Design/Requirements/Requirements.html index 95b30fa25d56..62e847bcdcdd 100644 --- a/Documentation/RCU/Design/Requirements/Requirements.html +++ b/Documentation/RCU/Design/Requirements/Requirements.html @@ -2080,6 +2080,8 @@ Some of the relevant points of interest are as follows:
+The kernel transitions between in-kernel non-idle execution, userspace
+execution, and the idle loop.
+Depending on kernel configuration, RCU handles these states differently:
+
+<table border=3>
+<tr><th><tt>HZ</tt> Kconfig</th>
+	<th>In-Kernel</th>
+		<th>Usermode</th>
+			<th>Idle</th></tr>
+<tr><th align="left"><tt>HZ_PERIODIC</tt></th>
+	<td>Can rely on scheduling-clock interrupt.</td>
+		<td>Can rely on scheduling-clock interrupt and its
+		detection of interrupt from usermode.</td>
+			<td>Can rely on RCU's dyntick-idle detection.</td></tr>
+<tr><th align="left"><tt>NO_HZ_IDLE</tt></th>
+	<td>Can rely on scheduling-clock interrupt.</td>
+		<td>Can rely on scheduling-clock interrupt and its
+		detection of interrupt from usermode.</td>
+			<td>Can rely on RCU's dyntick-idle detection.</td></tr>
+<tr><th align="left"><tt>NO_HZ_FULL</tt></th>
+	<td>Can only sometimes rely on scheduling-clock interrupt.
+		In other cases, it is necessary to bound kernel execution
+		times and/or use IPIs.</td>
+		<td>Can rely on RCU's dyntick-idle detection.</td>
+			<td>Can rely on RCU's dyntick-idle detection.</td></tr>
+</table>
+<table>
+<tr><th>&nbsp;</th></tr>
+<tr><th align="left">Quick Quiz:</th></tr>
+<tr><td>
+	Why can't <tt>NO_HZ_FULL</tt> in-kernel execution rely on the
+	scheduling-clock interrupt, just like <tt>HZ_PERIODIC</tt>
+	and <tt>NO_HZ_IDLE</tt> do?
+</td></tr>
+<tr><th align="left">Answer:</th></tr>
+<tr><td bgcolor="#ffffff"><font color="ffffff">
+	Because, as a performance optimization, <tt>NO_HZ_FULL</tt>
+	does not necessarily re-enable the scheduling-clock interrupt
+	on entry to each and every system call.
+</font></td></tr>
+<tr><td>&nbsp;</td></tr>
+</table>
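The "dyntick-idle detection" in the table's right-hand columns can be illustrated with a small user-space sketch. This is an assumption-laden simplification, not the kernel's implementation: it models only the core even/odd counter idea, where a per-CPU counter is incremented on every idle transition, so it is even while the CPU is idle and odd while it is busy. A grace-period detector can snapshot the counter and later conclude, without disturbing the CPU, that the CPU was or became quiescent.

```c
/*
 * User-space sketch (NOT the kernel's actual code) of the even/odd
 * counter idea behind RCU's dyntick-idle detection.
 */
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

struct cpu_state {
	atomic_long dynticks;		/* even: idle, odd: non-idle */
};

static void cpu_idle_enter(struct cpu_state *cpu)
{
	long prev = atomic_fetch_add(&cpu->dynticks, 1);
	assert(prev & 1);		/* was non-idle (odd), now even */
}

static void cpu_idle_exit(struct cpu_state *cpu)
{
	long prev = atomic_fetch_add(&cpu->dynticks, 1);
	assert(!(prev & 1));		/* was idle (even), now odd */
}

/* Snapshot taken when a grace period starts. */
static long gp_snapshot(struct cpu_state *cpu)
{
	return atomic_load(&cpu->dynticks);
}

/*
 * Later check: the CPU need not be interrupted if it was idle at the
 * snapshot (even counter) or has transitioned since (counter changed).
 */
static bool cpu_was_quiescent(struct cpu_state *cpu, long snap)
{
	long cur = atomic_load(&cpu->dynticks);
	return !(snap & 1) || cur != snap;
}
```

The payoff is that an idle CPU imposes no work at all on grace-period detection: the detector reads the counter remotely instead of sending the CPU an interrupt.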
+However, RCU must be reliably informed as to whether any given
+CPU is currently in the idle loop, and, for <tt>NO_HZ_FULL</tt>,
+also whether that CPU is executing in usermode, as discussed
+earlier.
+It also requires that the scheduling-clock interrupt be enabled when
+RCU needs it to be:
+
+<table>
+<tr><th>&nbsp;</th></tr>
+<tr><th align="left">Quick Quiz:</th></tr>
+<tr><td>
+	But what if my driver has a hardware interrupt handler
+	that can run for many seconds?
+	I cannot invoke <tt>schedule()</tt> from a hardware
+	interrupt handler, after all!
+</td></tr>
+<tr><th align="left">Answer:</th></tr>
+<tr><td bgcolor="#ffffff"><font color="ffffff">
+	One approach is to do <tt>rcu_irq_exit();rcu_irq_enter();</tt>
+	every so often.
+	But given that long-running interrupt handlers can cause
+	other problems, not least for response time, shouldn't you
+	work to keep your interrupt handler's runtime within reasonable
+	bounds?
+</font></td></tr>
+<tr><td>&nbsp;</td></tr>
+</table>
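The `rcu_irq_exit();rcu_irq_enter();` trick from the answer can be sketched with the same even/odd counter model. This is a hedged user-space illustration under stated assumptions: the `fake_*()` functions below only mimic the effect of the real kernel APIs (briefly making the CPU look idle to RCU), and `long_running_handler()` is a hypothetical stand-in for a driver's handler.

```c
/*
 * User-space sketch of why rcu_irq_exit();rcu_irq_enter(); every so
 * often lets grace periods progress: each pair momentarily drives the
 * counter through an even ("idle") value that a waiting grace period
 * can observe.  NOT the kernel's implementation.
 */
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_long dynticks = 1;	/* odd: non-idle, even: idle */

static void fake_rcu_irq_exit(void)	/* stand-in for rcu_irq_exit() */
{
	atomic_fetch_add(&dynticks, 1);	/* counter becomes even: "idle" */
}

static void fake_rcu_irq_enter(void)	/* stand-in for rcu_irq_enter() */
{
	atomic_fetch_add(&dynticks, 1);	/* counter becomes odd again */
}

/* A grace period that began at snapshot @snap may end once this is true. */
static bool gp_may_end(long snap)
{
	long cur = atomic_load(&dynticks);
	return !(snap & 1) || cur != snap;
}

/* Hypothetical long-running handler that lets grace periods progress. */
static void long_running_handler(int chunks)
{
	for (int i = 0; i < chunks; i++) {
		/* ...process one bounded chunk of work here... */
		fake_rcu_irq_exit();
		fake_rcu_irq_enter();
	}
}
```

Note that even in the sketch, the handler never actually yields the CPU; it merely tells RCU that nothing in the intervening window holds an RCU read-side critical section, which is why the real pattern must only be used at points where that is true.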
+But as long as RCU is properly informed of kernel state transitions between
+in-kernel execution, usermode execution, and idle, and as long as the
+scheduling-clock interrupt is enabled when RCU needs it to be, you
+can rest assured that the bugs you encounter will be in some other
+part of RCU or some other part of the kernel!