diff --git a/Documentation/RCU/Design/Requirements/Requirements.html b/Documentation/RCU/Design/Requirements/Requirements.html
index 95b30fa25d56..62e847bcdcdd 100644
--- a/Documentation/RCU/Design/Requirements/Requirements.html
+++ b/Documentation/RCU/Design/Requirements/Requirements.html
@@ -2080,6 +2080,8 @@ Some of the relevant points of interest are as follows:
  • Scheduler and RCU.
  • Tracing and RCU.
  • Energy Efficiency.
  • Scheduling-Clock Interrupts and RCU.
  • Memory Efficiency.
  • Performance, Scalability, Response Time, and Reliability.
@@ -2532,6 +2534,134 @@
I learned of many of these requirements via angry phone calls:
Flaming me on the Linux-kernel mailing list was apparently not
sufficient to fully vent their ire at RCU's energy-efficiency bugs!

Scheduling-Clock Interrupts and RCU

The kernel transitions between in-kernel non-idle execution, userspace
execution, and the idle loop.
Depending on kernel configuration, RCU handles these states differently:

  HZ Kconfig    In-Kernel                 Usermode                  Idle
  ------------  ------------------------  ------------------------  ------------------------
  HZ_PERIODIC   Can rely on               Can rely on               Can rely on RCU's
                scheduling-clock          scheduling-clock          dyntick-idle detection.
                interrupt.                interrupt and its
                                          detection of interrupt
                                          from usermode.

  NO_HZ_IDLE    Can rely on               Can rely on               Can rely on RCU's
                scheduling-clock          scheduling-clock          dyntick-idle detection.
                interrupt.                interrupt and its
                                          detection of interrupt
                                          from usermode.

  NO_HZ_FULL    Can only sometimes rely   Can rely on RCU's         Can rely on RCU's
                on scheduling-clock       dyntick-idle detection.   dyntick-idle detection.
                interrupt.  In other
                cases, it is necessary
                to bound kernel execution
                times and/or use IPIs.
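The "Can rely on RCU's dyntick-idle detection" entries refer to the idle
loop (and, for NO_HZ_FULL, the usermode transitions) explicitly telling
RCU when a CPU enters and leaves idle.  The toy sketch below is
illustrative only: the my_toy_idle_wait() function is hypothetical, and
the kernel's real idle loop is considerably more involved (interrupt
disabling, cpuidle governors, and so on), so please do not mistake this
for the actual implementation.  It merely shows the shape of the
rcu_idle_enter()/rcu_idle_exit() bracketing.

#include <linux/cpu.h>          /* arch_cpu_idle() */
#include <linux/rcupdate.h>     /* rcu_idle_enter(), rcu_idle_exit() */
#include <linux/sched.h>        /* need_resched() */

/* Toy sketch only: bracket low-power waits so that RCU's
 * dyntick-idle detection knows when this CPU is idle. */
static void my_toy_idle_wait(void)
{
        while (!need_resched()) {
                rcu_idle_enter();       /* Tell RCU this CPU is entering idle. */
                arch_cpu_idle();        /* Architecture-specific low-power wait. */
                rcu_idle_exit();        /* Tell RCU this CPU has left idle. */
        }
}

In the real kernel, this bracketing lives in the idle task and in the
NO_HZ_FULL context-tracking code rather than in driver code; it appears
here only to make the table's "dyntick-idle detection" entries concrete.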
Quick Quiz:
	Why can't NO_HZ_FULL in-kernel execution rely on the
	scheduling-clock interrupt, just like HZ_PERIODIC
	and NO_HZ_IDLE do?

Answer:
	Because, as a performance optimization, NO_HZ_FULL
	does not necessarily re-enable the scheduling-clock interrupt
	on entry to each and every system call.


However, RCU must be reliably informed as to whether any given
CPU is currently in the idle loop, and, for NO_HZ_FULL,
also whether that CPU is executing in usermode, as discussed
earlier.
It also requires that the scheduling-clock interrupt be enabled when
RCU needs it to be:

1.	If a CPU is either idle or executing in usermode, and RCU believes
	it is non-idle, the scheduling-clock tick had better be running.
	Otherwise, you will get RCU CPU stall warnings, or at best very
	long (11-second) grace periods, with a pointless IPI waking the
	CPU from time to time.

2.	If a CPU is in a portion of the kernel that executes RCU read-side
	critical sections, and RCU believes this CPU to be idle, you will get
	random memory corruption.  DON'T DO THIS!!!

	This is one reason to test with lockdep, which will complain
	about this sort of thing.

3.	If a CPU is in a portion of the kernel that is absolutely
	positively no-joking guaranteed to never execute any RCU read-side
	critical sections, and RCU believes this CPU to be idle,
	no problem.  This sort of thing is used by some architectures
	for light-weight exception handlers, which can then avoid the
	overhead of rcu_irq_enter() and rcu_irq_exit()
	at exception entry and exit, respectively.
	Some go further and avoid the entireties of irq_enter()
	and irq_exit().

	Just make very sure you are running some of your tests with
	CONFIG_PROVE_RCU=y, just in case one of your code paths
	was in fact joking about not doing RCU read-side critical sections.

4.	If a CPU is executing in the kernel with the scheduling-clock
	interrupt disabled and RCU believes this CPU to be non-idle,
	and if the CPU goes idle (from an RCU perspective) every few
	jiffies, no problem.  It is usually OK for there to be the
	occasional gap between idle periods of up to a second or so.

	If the gap grows too long, you get RCU CPU stall warnings.

5.	If a CPU is either idle or executing in usermode, and RCU believes
	it to be idle, of course no problem.

6.	If a CPU is executing in the kernel, the kernel code
	path is passing through quiescent states at a reasonable
	frequency (preferably about once per few jiffies, but the
	occasional excursion to a second or so is usually OK), and the
	scheduling-clock interrupt is enabled, of course no problem.

	If the gap between a successive pair of quiescent states grows
	too long, you get RCU CPU stall warnings.  (One way for a
	long-running in-kernel code path to supply such quiescent states
	is sketched just after this list.)
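For item 6, the following is a minimal sketch, illustrative only, of a
long-running process-context kernel loop that supplies RCU with
quiescent states.  The my_scan_one() helper, the my_table[] array, and
MY_TABLE_SIZE are hypothetical stand-ins for real work; the point is
simply that invoking cond_resched() every so often gives the scheduler,
and hence RCU, regular opportunities to note a quiescent state on this
CPU well before the stall-warning timeout.  Code with stricter
constraints, for example hardware interrupt handlers, may need other
means, as the Quick Quiz below discusses.

#include <linux/sched.h>        /* cond_resched() */

#define MY_TABLE_SIZE 100000                            /* Hypothetical. */
struct my_elem;                                         /* Hypothetical element type. */
extern struct my_elem *my_table[MY_TABLE_SIZE];         /* Hypothetical. */
extern void my_scan_one(struct my_elem *ep);            /* Hypothetical per-element work. */

/* Illustrative only: a long-running loop running in process context. */
static void my_long_scan(void)
{
        int i;

        for (i = 0; i < MY_TABLE_SIZE; i++) {
                my_scan_one(my_table[i]);
                cond_resched(); /* A resulting context switch is a quiescent
                                 * state from RCU's point of view. */
        }
}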
Quick Quiz:
	But what if my driver has a hardware interrupt handler
	that can run for many seconds?
	I cannot invoke schedule() from a hardware
	interrupt handler, after all!

Answer:
	One approach is to do rcu_irq_exit(); rcu_irq_enter();
	every so often.
	But given that long-running interrupt handlers can cause
	other problems, not least for response time, shouldn't you
	work to keep your interrupt handler's runtime within reasonable
	bounds?

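The fragment below is a rough sketch, nothing more, of the
rcu_irq_exit(); rcu_irq_enter(); pairing suggested in the answer above,
placed inside a (hopefully rare) long-running hardware interrupt
handler.  The struct my_dev, the my_dev_poll_once() helper, and the
once-per-1000-iterations pacing are hypothetical; only the handler's
irqreturn_t shape and the rcu_irq_exit()/rcu_irq_enter() calls are the
kernel's actual interfaces.

#include <linux/interrupt.h>    /* irqreturn_t, IRQ_HANDLED */
#include <linux/rcupdate.h>     /* rcu_irq_enter(), rcu_irq_exit() */

struct my_dev;                                          /* Hypothetical device state. */
extern bool my_dev_poll_once(struct my_dev *mdp);       /* Hypothetical device work. */

/* Illustrative only: a long-running hardware interrupt handler that
 * periodically lets RCU see this CPU as momentarily idle, so that
 * grace periods are not held up for the handler's full duration. */
static irqreturn_t my_dev_irq_handler(int irq, void *dev_id)
{
        struct my_dev *mdp = dev_id;
        int i = 0;

        while (my_dev_poll_once(mdp)) {
                if (++i % 1000 == 0) {
                        rcu_irq_exit();         /* Momentarily appear idle to RCU... */
                        rcu_irq_enter();        /* ...then immediately non-idle again. */
                }
        }
        return IRQ_HANDLED;
}

That said, as the answer also points out, bounding the handler's runtime
in the first place, for example by deferring the bulk of the work to a
threaded interrupt handler or a workqueue, is usually the better fix.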

But as long as RCU is properly informed of kernel state transitions between
in-kernel execution, usermode execution, and idle, and as long as the
scheduling-clock interrupt is enabled when RCU needs it to be, you
can rest assured that the bugs you encounter will be in some other
part of RCU or some other part of the kernel!

    Memory Efficiency