Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'rcu.release.v7.0' of git://git.kernel.org/pub/scm/linux/kernel/git/rcu/linux

Pull RCU updates from Boqun Feng:

- RCU Tasks Trace:

Re-implement RCU Tasks Trace in terms of SRCU-fast. Not only does the
reimplementation save more than 500 lines of code, it also makes a new
set of APIs, rcu_read_{,un}lock_tasks_trace(), possible. Compared to
the previous rcu_read_{,un}lock_trace(), the new API avoids the
task_struct accesses thanks to the SRCU-fast semantics.

As a result, the old rcu_read_{,un}lock_trace() API is now deprecated.
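
A minimal sketch of how a tracing hook might use the new scoped reader
API, based on the rcu_read_{,un}lock_tasks_trace() signatures shown in
the include/linux/rcupdate_trace.h hunk further down; the hook function
itself is hypothetical:

	#include <linux/rcupdate_trace.h>

	/* Hypothetical hook whose data is protected by Tasks Trace RCU. */
	static void my_trace_hook(void)
	{
		struct srcu_ctr __percpu *scp;

		/* The cookie returned by the lock must be handed back to the unlock. */
		scp = rcu_read_lock_tasks_trace();
		/* ... dereference trampoline or BPF program state here ... */
		rcu_read_unlock_tasks_trace(scp);
	}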

- RCU Torture Test:
- Multiple improvements to kvm-series.sh (parallelized guest-OS
runs plus run and build progress totals)
- Add context checks to rcu_torture_timer()
- Make config2csv.sh properly handle comments in .boot files
- Include commit description in testid.txt

- Miscellaneous RCU changes:
- Reduce synchronize_rcu() latency by reporting GP kthread's
CPU QS early
- Use suitable gfp_flags for the init_srcu_struct_nodes()
- Fix rcu_read_unlock() deadloop due to softirq
- Correctly compute probability to invoke ->exp_current()
in rcutorture (see the sketch after this list)
- Make expedited RCU CPU stall warnings detect stall-end races

- RCU nocb:
- Remove unnecessary WakeOvfIsDeferred wake path and callback
overload handling
- Extract nocb_defer_wakeup_cancel() helper
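
One of the rcutorture changes above fixes an operator-precedence bug in
the check that decides whether to invoke ->exp_current() (visible in the
kernel/rcu/rcutorture.c hunk further down). A minimal before/after
sketch, with prandom() and expedite() as hypothetical stand-ins for
torture_random() and the ->exp_current() callback:

	/* Hypothetical stand-ins for torture_random() and ->exp_current(). */
	static unsigned long prandom(void);
	static void expedite(void);

	static void maybe_expedite_old(void)
	{
		/* Parses as (!prandom()) % 0xff, which is nonzero only when
		 * the generator returns exactly zero, i.e. almost never. */
		if (!prandom() % 0xff)
			expedite();
	}

	static void maybe_expedite_new(void)
	{
		/* Nonzero whenever the low eight bits are zero, i.e. with
		 * probability of roughly 1 in 256, as intended. */
		if (!(prandom() & 0xff))
			expedite();
	}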

* tag 'rcu.release.v7.0' of git://git.kernel.org/pub/scm/linux/kernel/git/rcu/linux: (25 commits)
rcu/nocb: Extract nocb_defer_wakeup_cancel() helper
rcu/nocb: Remove dead callback overload handling
rcu/nocb: Remove unnecessary WakeOvfIsDeferred wake path
rcu: Reduce synchronize_rcu() latency by reporting GP kthread's CPU QS early
srcu: Use suitable gfp_flags for the init_srcu_struct_nodes()
rcu: Fix rcu_read_unlock() deadloop due to softirq
rcutorture: Correctly compute probability to invoke ->exp_current()
rcu: Make expedited RCU CPU stall warnings detect stall-end races
rcutorture: Add --kill-previous option to terminate previous kvm.sh runs
rcutorture: Prevent concurrent kvm.sh runs on same source tree
torture: Include commit discription in testid.txt
torture: Make config2csv.sh properly handle comments in .boot files
torture: Make kvm-series.sh give run numbers and totals
torture: Make kvm-series.sh give build numbers and totals
torture: Parallelize kvm-series.sh guest-OS execution
rcutorture: Add context checks to rcu_torture_timer()
rcutorture: Test rcu_tasks_trace_expedite_current()
srcu: Create an rcu_tasks_trace_expedite_current() function
checkpatch: Deprecate rcu_read_{,un}lock_trace()
rcu: Update Requirements.rst for RCU Tasks Trace
...

+459 -932
+6 -6
Documentation/RCU/Design/Requirements/Requirements.rst
··· 2780 2780 ~~~~~~~~~~~~~~~ 2781 2781 2782 2782 Some forms of tracing need to sleep in readers, but cannot tolerate 2783 - SRCU's read-side overhead, which includes a full memory barrier in both 2784 - srcu_read_lock() and srcu_read_unlock(). This need is handled by a 2785 - Tasks Trace RCU that uses scheduler locking and IPIs to synchronize with 2786 - readers. Real-time systems that cannot tolerate IPIs may build their 2787 - kernels with ``CONFIG_TASKS_TRACE_RCU_READ_MB=y``, which avoids the IPIs at 2788 - the expense of adding full memory barriers to the read-side primitives. 2783 + SRCU's read-side overhead, which includes a full memory barrier in 2784 + both srcu_read_lock() and srcu_read_unlock(). This need is handled by 2785 + a Tasks Trace RCU API implemented as thin wrappers around SRCU-fast, 2786 + which avoids the read-side memory barriers, at least for architectures 2787 + that apply noinstr to kernel entry/exit code (or that build with 2788 + ``CONFIG_TASKS_TRACE_RCU_NO_MB=y``. 2789 2789 2790 2790 The tasks-trace-RCU API is also reasonably compact, 2791 2791 consisting of rcu_read_lock_trace(), rcu_read_unlock_trace(),
-15
Documentation/admin-guide/kernel-parameters.txt
··· 6289 6289 dynamically) adjusted. This parameter is intended 6290 6290 for use in testing. 6291 6291 6292 - rcupdate.rcu_task_ipi_delay= [KNL] 6293 - Set time in jiffies during which RCU tasks will 6294 - avoid sending IPIs, starting with the beginning 6295 - of a given grace period. Setting a large 6296 - number avoids disturbing real-time workloads, 6297 - but lengthens grace periods. 6298 - 6299 6292 rcupdate.rcu_task_lazy_lim= [KNL] 6300 6293 Number of callbacks on a given CPU that will 6301 6294 cancel laziness on that CPU. Use -1 to disable ··· 6331 6338 A negative value will take the default. A value 6332 6339 of zero will disable batching. Batching is 6333 6340 always disabled for synchronize_rcu_tasks(). 6334 - 6335 - rcupdate.rcu_tasks_trace_lazy_ms= [KNL] 6336 - Set timeout in milliseconds RCU Tasks 6337 - Trace asynchronous callback batching for 6338 - call_rcu_tasks_trace(). A negative value 6339 - will take the default. A value of zero will 6340 - disable batching. Batching is always disabled 6341 - for synchronize_rcu_tasks_trace(). 6342 6341 6343 6342 rcupdate.rcu_self_test= [KNL] 6344 6343 Run the RCU early boot self tests
+1 -30
include/linux/rcupdate.h
··· 175 175 # define synchronize_rcu_tasks synchronize_rcu 176 176 # endif 177 177 178 - # ifdef CONFIG_TASKS_TRACE_RCU 179 - // Bits for ->trc_reader_special.b.need_qs field. 180 - #define TRC_NEED_QS 0x1 // Task needs a quiescent state. 181 - #define TRC_NEED_QS_CHECKED 0x2 // Task has been checked for needing quiescent state. 182 - 183 - u8 rcu_trc_cmpxchg_need_qs(struct task_struct *t, u8 old, u8 new); 184 - void rcu_tasks_trace_qs_blkd(struct task_struct *t); 185 - 186 - # define rcu_tasks_trace_qs(t) \ 187 - do { \ 188 - int ___rttq_nesting = READ_ONCE((t)->trc_reader_nesting); \ 189 - \ 190 - if (unlikely(READ_ONCE((t)->trc_reader_special.b.need_qs) == TRC_NEED_QS) && \ 191 - likely(!___rttq_nesting)) { \ 192 - rcu_trc_cmpxchg_need_qs((t), TRC_NEED_QS, TRC_NEED_QS_CHECKED); \ 193 - } else if (___rttq_nesting && ___rttq_nesting != INT_MIN && \ 194 - !READ_ONCE((t)->trc_reader_special.b.blocked)) { \ 195 - rcu_tasks_trace_qs_blkd(t); \ 196 - } \ 197 - } while (0) 198 - void rcu_tasks_trace_torture_stats_print(char *tt, char *tf); 199 - # else 200 - # define rcu_tasks_trace_qs(t) do { } while (0) 201 - # endif 202 - 203 - #define rcu_tasks_qs(t, preempt) \ 204 - do { \ 205 - rcu_tasks_classic_qs((t), (preempt)); \ 206 - rcu_tasks_trace_qs(t); \ 207 - } while (0) 178 + #define rcu_tasks_qs(t, preempt) rcu_tasks_classic_qs((t), (preempt)) 208 179 209 180 # ifdef CONFIG_TASKS_RUDE_RCU 210 181 void synchronize_rcu_tasks_rude(void);
+139 -27
include/linux/rcupdate_trace.h
··· 12 12 #include <linux/rcupdate.h> 13 13 #include <linux/cleanup.h> 14 14 15 - extern struct lockdep_map rcu_trace_lock_map; 15 + #ifdef CONFIG_TASKS_TRACE_RCU 16 + extern struct srcu_struct rcu_tasks_trace_srcu_struct; 17 + #endif // #ifdef CONFIG_TASKS_TRACE_RCU 16 18 17 - #ifdef CONFIG_DEBUG_LOCK_ALLOC 19 + #if defined(CONFIG_DEBUG_LOCK_ALLOC) && defined(CONFIG_TASKS_TRACE_RCU) 18 20 19 21 static inline int rcu_read_lock_trace_held(void) 20 22 { 21 - return lock_is_held(&rcu_trace_lock_map); 23 + return srcu_read_lock_held(&rcu_tasks_trace_srcu_struct); 22 24 } 23 25 24 - #else /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */ 26 + #else // #if defined(CONFIG_DEBUG_LOCK_ALLOC) && defined(CONFIG_TASKS_TRACE_RCU) 25 27 26 28 static inline int rcu_read_lock_trace_held(void) 27 29 { 28 30 return 1; 29 31 } 30 32 31 - #endif /* #else #ifdef CONFIG_DEBUG_LOCK_ALLOC */ 33 + #endif // #else // #if defined(CONFIG_DEBUG_LOCK_ALLOC) && defined(CONFIG_TASKS_TRACE_RCU) 32 34 33 35 #ifdef CONFIG_TASKS_TRACE_RCU 34 36 35 - void rcu_read_unlock_trace_special(struct task_struct *t); 37 + /** 38 + * rcu_read_lock_tasks_trace - mark beginning of RCU-trace read-side critical section 39 + * 40 + * When synchronize_rcu_tasks_trace() is invoked by one task, then that 41 + * task is guaranteed to block until all other tasks exit their read-side 42 + * critical sections. Similarly, if call_rcu_trace() is invoked on one 43 + * task while other tasks are within RCU read-side critical sections, 44 + * invocation of the corresponding RCU callback is deferred until after 45 + * the all the other tasks exit their critical sections. 46 + * 47 + * For more details, please see the documentation for 48 + * srcu_read_lock_fast(). For a description of how implicit RCU 49 + * readers provide the needed ordering for architectures defining the 50 + * ARCH_WANTS_NO_INSTR Kconfig option (and thus promising never to trace 51 + * code where RCU is not watching), please see the __srcu_read_lock_fast() 52 + * (non-kerneldoc) header comment. Otherwise, the smp_mb() below provided 53 + * the needed ordering. 54 + */ 55 + static inline struct srcu_ctr __percpu *rcu_read_lock_tasks_trace(void) 56 + { 57 + struct srcu_ctr __percpu *ret = __srcu_read_lock_fast(&rcu_tasks_trace_srcu_struct); 58 + 59 + rcu_try_lock_acquire(&rcu_tasks_trace_srcu_struct.dep_map); 60 + if (!IS_ENABLED(CONFIG_TASKS_TRACE_RCU_NO_MB)) 61 + smp_mb(); // Provide ordering on noinstr-incomplete architectures. 62 + return ret; 63 + } 64 + 65 + /** 66 + * rcu_read_unlock_tasks_trace - mark end of RCU-trace read-side critical section 67 + * @scp: return value from corresponding rcu_read_lock_tasks_trace(). 68 + * 69 + * Pairs with the preceding call to rcu_read_lock_tasks_trace() that 70 + * returned the value passed in via scp. 71 + * 72 + * For more details, please see the documentation for rcu_read_unlock(). 73 + * For memory-ordering information, please see the header comment for the 74 + * rcu_read_lock_tasks_trace() function. 75 + */ 76 + static inline void rcu_read_unlock_tasks_trace(struct srcu_ctr __percpu *scp) 77 + { 78 + if (!IS_ENABLED(CONFIG_TASKS_TRACE_RCU_NO_MB)) 79 + smp_mb(); // Provide ordering on noinstr-incomplete architectures. 
80 + __srcu_read_unlock_fast(&rcu_tasks_trace_srcu_struct, scp); 81 + srcu_lock_release(&rcu_tasks_trace_srcu_struct.dep_map); 82 + } 36 83 37 84 /** 38 85 * rcu_read_lock_trace - mark beginning of RCU-trace read-side critical section ··· 97 50 { 98 51 struct task_struct *t = current; 99 52 100 - WRITE_ONCE(t->trc_reader_nesting, READ_ONCE(t->trc_reader_nesting) + 1); 101 - barrier(); 102 - if (IS_ENABLED(CONFIG_TASKS_TRACE_RCU_READ_MB) && 103 - t->trc_reader_special.b.need_mb) 104 - smp_mb(); // Pairs with update-side barriers 105 - rcu_lock_acquire(&rcu_trace_lock_map); 53 + rcu_try_lock_acquire(&rcu_tasks_trace_srcu_struct.dep_map); 54 + if (t->trc_reader_nesting++) { 55 + // In case we interrupted a Tasks Trace RCU reader. 56 + return; 57 + } 58 + barrier(); // nesting before scp to protect against interrupt handler. 59 + t->trc_reader_scp = __srcu_read_lock_fast(&rcu_tasks_trace_srcu_struct); 60 + if (!IS_ENABLED(CONFIG_TASKS_TRACE_RCU_NO_MB)) 61 + smp_mb(); // Placeholder for more selective ordering 106 62 } 107 63 108 64 /** ··· 119 69 */ 120 70 static inline void rcu_read_unlock_trace(void) 121 71 { 122 - int nesting; 72 + struct srcu_ctr __percpu *scp; 123 73 struct task_struct *t = current; 124 74 125 - rcu_lock_release(&rcu_trace_lock_map); 126 - nesting = READ_ONCE(t->trc_reader_nesting) - 1; 127 - barrier(); // Critical section before disabling. 128 - // Disable IPI-based setting of .need_qs. 129 - WRITE_ONCE(t->trc_reader_nesting, INT_MIN + nesting); 130 - if (likely(!READ_ONCE(t->trc_reader_special.s)) || nesting) { 131 - WRITE_ONCE(t->trc_reader_nesting, nesting); 132 - return; // We assume shallow reader nesting. 75 + scp = t->trc_reader_scp; 76 + barrier(); // scp before nesting to protect against interrupt handler. 77 + if (!--t->trc_reader_nesting) { 78 + if (!IS_ENABLED(CONFIG_TASKS_TRACE_RCU_NO_MB)) 79 + smp_mb(); // Placeholder for more selective ordering 80 + __srcu_read_unlock_fast(&rcu_tasks_trace_srcu_struct, scp); 133 81 } 134 - WARN_ON_ONCE(nesting != 0); 135 - rcu_read_unlock_trace_special(t); 82 + srcu_lock_release(&rcu_tasks_trace_srcu_struct.dep_map); 136 83 } 137 84 138 - void call_rcu_tasks_trace(struct rcu_head *rhp, rcu_callback_t func); 139 - void synchronize_rcu_tasks_trace(void); 140 - void rcu_barrier_tasks_trace(void); 141 - struct task_struct *get_rcu_tasks_trace_gp_kthread(void); 85 + /** 86 + * call_rcu_tasks_trace() - Queue a callback trace task-based grace period 87 + * @rhp: structure to be used for queueing the RCU updates. 88 + * @func: actual callback function to be invoked after the grace period 89 + * 90 + * The callback function will be invoked some time after a trace rcu-tasks 91 + * grace period elapses, in other words after all currently executing 92 + * trace rcu-tasks read-side critical sections have completed. These 93 + * read-side critical sections are delimited by calls to rcu_read_lock_trace() 94 + * and rcu_read_unlock_trace(). 95 + * 96 + * See the description of call_rcu() for more detailed information on 97 + * memory ordering guarantees. 98 + */ 99 + static inline void call_rcu_tasks_trace(struct rcu_head *rhp, rcu_callback_t func) 100 + { 101 + call_srcu(&rcu_tasks_trace_srcu_struct, rhp, func); 102 + } 103 + 104 + /** 105 + * synchronize_rcu_tasks_trace - wait for a trace rcu-tasks grace period 106 + * 107 + * Control will return to the caller some time after a trace rcu-tasks 108 + * grace period has elapsed, in other words after all currently executing 109 + * trace rcu-tasks read-side critical sections have elapsed. 
These read-side 110 + * critical sections are delimited by calls to rcu_read_lock_trace() 111 + * and rcu_read_unlock_trace(). 112 + * 113 + * This is a very specialized primitive, intended only for a few uses in 114 + * tracing and other situations requiring manipulation of function preambles 115 + * and profiling hooks. The synchronize_rcu_tasks_trace() function is not 116 + * (yet) intended for heavy use from multiple CPUs. 117 + * 118 + * See the description of synchronize_rcu() for more detailed information 119 + * on memory ordering guarantees. 120 + */ 121 + static inline void synchronize_rcu_tasks_trace(void) 122 + { 123 + synchronize_srcu(&rcu_tasks_trace_srcu_struct); 124 + } 125 + 126 + /** 127 + * rcu_barrier_tasks_trace - Wait for in-flight call_rcu_tasks_trace() callbacks. 128 + * 129 + * Note that rcu_barrier_tasks_trace() is not obligated to actually wait, 130 + * for example, if there are no pending callbacks. 131 + */ 132 + static inline void rcu_barrier_tasks_trace(void) 133 + { 134 + srcu_barrier(&rcu_tasks_trace_srcu_struct); 135 + } 136 + 137 + /** 138 + * rcu_tasks_trace_expedite_current - Expedite the current Tasks Trace RCU grace period 139 + * 140 + * Cause the current Tasks Trace RCU grace period to become expedited. 141 + * The grace period following the current one might also be expedited. 142 + * If there is no current grace period, one might be created. If the 143 + * current grace period is currently sleeping, that sleep will complete 144 + * before expediting will take effect. 145 + */ 146 + static inline void rcu_tasks_trace_expedite_current(void) 147 + { 148 + srcu_expedite_current(&rcu_tasks_trace_srcu_struct); 149 + } 150 + 151 + // Placeholders to enable stepwise transition. 152 + void __init rcu_tasks_trace_suppress_unused(void); 153 + 142 154 #else 143 155 /* 144 156 * The BPF JIT forms these addresses even when it doesn't call these
+1 -5
include/linux/sched.h
··· 945 945 946 946 #ifdef CONFIG_TASKS_TRACE_RCU 947 947 int trc_reader_nesting; 948 - int trc_ipi_to_cpu; 949 - union rcu_special trc_reader_special; 950 - struct list_head trc_holdout_list; 951 - struct list_head trc_blkd_node; 952 - int trc_blkd_cpu; 948 + struct srcu_ctr __percpu *trc_reader_scp; 953 949 #endif /* #ifdef CONFIG_TASKS_TRACE_RCU */ 954 950 955 951 struct sched_info sched_info;
-3
init/init_task.c
··· 195 195 #endif 196 196 #ifdef CONFIG_TASKS_TRACE_RCU 197 197 .trc_reader_nesting = 0, 198 - .trc_reader_special.s = 0, 199 - .trc_holdout_list = LIST_HEAD_INIT(init_task.trc_holdout_list), 200 - .trc_blkd_node = LIST_HEAD_INIT(init_task.trc_blkd_node), 201 198 #endif 202 199 #ifdef CONFIG_CPUSETS 203 200 .mems_allowed_seq = SEQCNT_SPINLOCK_ZERO(init_task.mems_allowed_seq,
-20
kernel/context_tracking.c
··· 54 54 #endif /* #if defined(CONFIG_TASKS_RCU) && defined(CONFIG_NO_HZ_FULL) */ 55 55 } 56 56 57 - /* Turn on heavyweight RCU tasks trace readers on kernel exit. */ 58 - static __always_inline void rcu_task_trace_heavyweight_enter(void) 59 - { 60 - #ifdef CONFIG_TASKS_TRACE_RCU 61 - if (IS_ENABLED(CONFIG_TASKS_TRACE_RCU_READ_MB)) 62 - current->trc_reader_special.b.need_mb = true; 63 - #endif /* #ifdef CONFIG_TASKS_TRACE_RCU */ 64 - } 65 - 66 - /* Turn off heavyweight RCU tasks trace readers on kernel entry. */ 67 - static __always_inline void rcu_task_trace_heavyweight_exit(void) 68 - { 69 - #ifdef CONFIG_TASKS_TRACE_RCU 70 - if (IS_ENABLED(CONFIG_TASKS_TRACE_RCU_READ_MB)) 71 - current->trc_reader_special.b.need_mb = false; 72 - #endif /* #ifdef CONFIG_TASKS_TRACE_RCU */ 73 - } 74 - 75 57 /* 76 58 * Record entry into an extended quiescent state. This is only to be 77 59 * called when not already in an extended quiescent state, that is, ··· 67 85 * critical sections, and we also must force ordering with the 68 86 * next idle sojourn. 69 87 */ 70 - rcu_task_trace_heavyweight_enter(); // Before CT state update! 71 88 // RCU is still watching. Better not be in extended quiescent state! 72 89 WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !rcu_is_watching_curr_cpu()); 73 90 (void)ct_state_inc(offset); ··· 89 108 */ 90 109 seq = ct_state_inc(offset); 91 110 // RCU is now watching. Better not be in an extended quiescent state! 92 - rcu_task_trace_heavyweight_exit(); // After CT state update! 93 111 WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !(seq & CT_RCU_WATCHING)); 94 112 } 95 113
-3
kernel/fork.c
··· 1828 1828 #endif /* #ifdef CONFIG_TASKS_RCU */ 1829 1829 #ifdef CONFIG_TASKS_TRACE_RCU 1830 1830 p->trc_reader_nesting = 0; 1831 - p->trc_reader_special.s = 0; 1832 - INIT_LIST_HEAD(&p->trc_holdout_list); 1833 - INIT_LIST_HEAD(&p->trc_blkd_node); 1834 1831 #endif /* #ifdef CONFIG_TASKS_TRACE_RCU */ 1835 1832 } 1836 1833
+24 -19
kernel/rcu/Kconfig
··· 82 82 def_bool HAVE_NMI && !ARCH_HAS_NMI_SAFE_THIS_CPU_OPS && !TINY_SRCU 83 83 84 84 config TASKS_RCU_GENERIC 85 - def_bool TASKS_RCU || TASKS_RUDE_RCU || TASKS_TRACE_RCU 85 + def_bool TASKS_RCU || TASKS_RUDE_RCU 86 86 help 87 87 This option enables generic infrastructure code supporting 88 88 task-based RCU implementations. Not for manual selection. ··· 141 141 bool 142 142 default n 143 143 select IRQ_WORK 144 + 145 + config TASKS_TRACE_RCU_NO_MB 146 + bool "Override RCU Tasks Trace inclusion of read-side memory barriers" 147 + depends on RCU_EXPERT && TASKS_TRACE_RCU 148 + default ARCH_WANTS_NO_INSTR 149 + help 150 + This option prevents the use of read-side memory barriers in 151 + rcu_read_lock_tasks_trace() and rcu_read_unlock_tasks_trace() 152 + even in kernels built with CONFIG_ARCH_WANTS_NO_INSTR=n, that is, 153 + in kernels that do not have noinstr set up in entry/exit code. 154 + By setting this option, you are promising to carefully review 155 + use of ftrace, BPF, and friends to ensure that no tracing 156 + operation is attached to a function that runs in that portion 157 + of the entry/exit code that RCU does not watch, that is, 158 + where rcu_is_watching() returns false. Alternatively, you 159 + might choose to never remove traces except by rebooting. 160 + 161 + Those wishing to disable read-side memory barriers for an entire 162 + architecture can select this Kconfig option, hence the polarity. 163 + 164 + Say Y here if you need speed and will review use of tracing. 165 + Say N here for certain esoteric testing of RCU itself. 166 + Take the default if you are unsure. 144 167 145 168 config RCU_STALL_COMMON 146 169 def_bool TREE_RCU ··· 335 312 336 313 Say Y here if you want to set RT priority for offloading kthreads. 337 314 Say N here if you are building a !PREEMPT_RT kernel and are unsure. 338 - 339 - config TASKS_TRACE_RCU_READ_MB 340 - bool "Tasks Trace RCU readers use memory barriers in user and idle" 341 - depends on RCU_EXPERT && TASKS_TRACE_RCU 342 - default PREEMPT_RT || NR_CPUS < 8 343 - help 344 - Use this option to further reduce the number of IPIs sent 345 - to CPUs executing in userspace or idle during tasks trace 346 - RCU grace periods. Given that a reasonable setting of 347 - the rcupdate.rcu_task_ipi_delay kernel boot parameter 348 - eliminates such IPIs for many workloads, proper setting 349 - of this Kconfig option is important mostly for aggressive 350 - real-time installations and for battery-powered devices, 351 - hence the default chosen above. 352 - 353 - Say Y here if you hate IPIs. 354 - Say N here if you hate read-side memory barriers. 355 - Take the default if you are unsure. 356 315 357 316 config RCU_LAZY 358 317 bool "RCU callback lazy invocation functionality"
-9
kernel/rcu/rcu.h
··· 544 544 void rcu_tasks_rude_get_gp_data(int *flags, unsigned long *gp_seq); 545 545 #endif // # ifdef CONFIG_TASKS_RUDE_RCU 546 546 547 - #ifdef CONFIG_TASKS_TRACE_RCU 548 - void rcu_tasks_trace_get_gp_data(int *flags, unsigned long *gp_seq); 549 - #endif 550 - 551 547 #ifdef CONFIG_TASKS_RCU_GENERIC 552 548 void tasks_cblist_init_generic(void); 553 549 #else /* #ifdef CONFIG_TASKS_RCU_GENERIC */ ··· 668 672 void show_rcu_tasks_rude_gp_kthread(void); 669 673 #else 670 674 static inline void show_rcu_tasks_rude_gp_kthread(void) {} 671 - #endif 672 - #if !defined(CONFIG_TINY_RCU) && defined(CONFIG_TASKS_TRACE_RCU) 673 - void show_rcu_tasks_trace_gp_kthread(void); 674 - #else 675 - static inline void show_rcu_tasks_trace_gp_kthread(void) {} 676 675 #endif 677 676 678 677 #ifdef CONFIG_TINY_RCU
-7
kernel/rcu/rcuscale.c
··· 400 400 rcu_read_unlock_trace(); 401 401 } 402 402 403 - static void rcu_tasks_trace_scale_stats(void) 404 - { 405 - rcu_tasks_trace_torture_stats_print(scale_type, SCALE_FLAG); 406 - } 407 - 408 403 static struct rcu_scale_ops tasks_tracing_ops = { 409 404 .ptype = RCU_TASKS_FLAVOR, 410 405 .init = rcu_sync_scale_init, ··· 411 416 .gp_barrier = rcu_barrier_tasks_trace, 412 417 .sync = synchronize_rcu_tasks_trace, 413 418 .exp_sync = synchronize_rcu_tasks_trace, 414 - .rso_gp_kthread = get_rcu_tasks_trace_gp_kthread, 415 - .stats = IS_ENABLED(CONFIG_TINY_RCU) ? NULL : rcu_tasks_trace_scale_stats, 416 419 .name = "tasks-tracing" 417 420 }; 418 421
+6 -4
kernel/rcu/rcutorture.c
··· 1178 1178 .deferred_free = rcu_tasks_tracing_torture_deferred_free, 1179 1179 .sync = synchronize_rcu_tasks_trace, 1180 1180 .exp_sync = synchronize_rcu_tasks_trace, 1181 + .exp_current = rcu_tasks_trace_expedite_current, 1181 1182 .call = call_rcu_tasks_trace, 1182 1183 .cb_barrier = rcu_barrier_tasks_trace, 1183 - .gp_kthread_dbg = show_rcu_tasks_trace_gp_kthread, 1184 - .get_gp_data = rcu_tasks_trace_get_gp_data, 1185 1184 .cbflood_max = 50000, 1186 1185 .irq_capable = 1, 1187 1186 .slow_gps = 1, ··· 1749 1750 ulo[i] = cur_ops->get_comp_state(); 1750 1751 gp_snap = cur_ops->start_gp_poll(); 1751 1752 rcu_torture_writer_state = RTWS_POLL_WAIT; 1752 - if (cur_ops->exp_current && !torture_random(&rand) % 0xff) 1753 + if (cur_ops->exp_current && !(torture_random(&rand) & 0xff)) 1753 1754 cur_ops->exp_current(); 1754 1755 while (!cur_ops->poll_gp_state(gp_snap)) { 1755 1756 gp_snap1 = cur_ops->get_gp_state(); ··· 1771 1772 cur_ops->get_comp_state_full(&rgo[i]); 1772 1773 cur_ops->start_gp_poll_full(&gp_snap_full); 1773 1774 rcu_torture_writer_state = RTWS_POLL_WAIT_FULL; 1774 - if (cur_ops->exp_current && !torture_random(&rand) % 0xff) 1775 + if (cur_ops->exp_current && !(torture_random(&rand) & 0xff)) 1775 1776 cur_ops->exp_current(); 1776 1777 while (!cur_ops->poll_gp_state_full(&gp_snap_full)) { 1777 1778 cur_ops->get_gp_state_full(&gp_snap1_full); ··· 2454 2455 */ 2455 2456 static void rcu_torture_timer(struct timer_list *unused) 2456 2457 { 2458 + WARN_ON_ONCE(!in_serving_softirq()); 2459 + WARN_ON_ONCE(in_hardirq()); 2460 + WARN_ON_ONCE(in_nmi()); 2457 2461 atomic_long_inc(&n_rcu_torture_timers); 2458 2462 (void)rcu_torture_one_read(this_cpu_ptr(&rcu_torture_timer_rand), -1); 2459 2463
+1 -1
kernel/rcu/srcutree.c
··· 262 262 ssp->srcu_sup->srcu_gp_seq_needed_exp = SRCU_GP_SEQ_INITIAL_VAL; 263 263 ssp->srcu_sup->srcu_last_gp_end = ktime_get_mono_fast_ns(); 264 264 if (READ_ONCE(ssp->srcu_sup->srcu_size_state) == SRCU_SIZE_SMALL && SRCU_SIZING_IS_INIT()) { 265 - if (!init_srcu_struct_nodes(ssp, GFP_ATOMIC)) 265 + if (!init_srcu_struct_nodes(ssp, is_static ? GFP_ATOMIC : GFP_KERNEL)) 266 266 goto err_free_sda; 267 267 WRITE_ONCE(ssp->srcu_sup->srcu_size_state, SRCU_SIZE_BIG); 268 268 }
+16 -692
kernel/rcu/tasks.h
··· 161 161 static DEFINE_TIMER(tasks_rcu_exit_srcu_stall_timer, tasks_rcu_exit_srcu_stall); 162 162 #endif 163 163 164 - /* Avoid IPIing CPUs early in the grace period. */ 165 - #define RCU_TASK_IPI_DELAY (IS_ENABLED(CONFIG_TASKS_TRACE_RCU_READ_MB) ? HZ / 2 : 0) 166 - static int rcu_task_ipi_delay __read_mostly = RCU_TASK_IPI_DELAY; 167 - module_param(rcu_task_ipi_delay, int, 0644); 168 - 169 164 /* Control stall timeouts. Disable with <= 0, otherwise jiffies till stall. */ 170 165 #define RCU_TASK_BOOT_STALL_TIMEOUT (HZ * 30) 171 166 #define RCU_TASK_STALL_TIMEOUT (HZ * 60 * 10) ··· 713 718 #endif /* #ifdef CONFIG_TASKS_TRACE_RCU */ 714 719 } 715 720 716 - 717 721 /* Dump out rcutorture-relevant state common to all RCU-tasks flavors. */ 718 722 static void show_rcu_tasks_generic_gp_kthread(struct rcu_tasks *rtp, char *s) 719 723 { ··· 795 801 796 802 #endif // #ifndef CONFIG_TINY_RCU 797 803 798 - static void exit_tasks_rcu_finish_trace(struct task_struct *t); 799 - 800 - #if defined(CONFIG_TASKS_RCU) || defined(CONFIG_TASKS_TRACE_RCU) 804 + #if defined(CONFIG_TASKS_RCU) 801 805 802 806 //////////////////////////////////////////////////////////////////////// 803 807 // ··· 890 898 rtp->postgp_func(rtp); 891 899 } 892 900 893 - #endif /* #if defined(CONFIG_TASKS_RCU) || defined(CONFIG_TASKS_TRACE_RCU) */ 901 + #endif /* #if defined(CONFIG_TASKS_RCU) */ 894 902 895 903 #ifdef CONFIG_TASKS_RCU 896 904 ··· 1314 1322 raw_spin_lock_irqsave_rcu_node(rtpcp, flags); 1315 1323 list_del_init(&t->rcu_tasks_exit_list); 1316 1324 raw_spin_unlock_irqrestore_rcu_node(rtpcp, flags); 1317 - 1318 - exit_tasks_rcu_finish_trace(t); 1319 1325 } 1320 1326 1321 1327 #else /* #ifdef CONFIG_TASKS_RCU */ 1322 1328 void exit_tasks_rcu_start(void) { } 1323 - void exit_tasks_rcu_finish(void) { exit_tasks_rcu_finish_trace(current); } 1329 + void exit_tasks_rcu_finish(void) { } 1324 1330 #endif /* #else #ifdef CONFIG_TASKS_RCU */ 1325 1331 1326 1332 #ifdef CONFIG_TASKS_RUDE_RCU ··· 1439 1449 1440 1450 #endif /* #ifdef CONFIG_TASKS_RUDE_RCU */ 1441 1451 1442 - //////////////////////////////////////////////////////////////////////// 1443 - // 1444 - // Tracing variant of Tasks RCU. This variant is designed to be used 1445 - // to protect tracing hooks, including those of BPF. This variant 1446 - // therefore: 1447 - // 1448 - // 1. Has explicit read-side markers to allow finite grace periods 1449 - // in the face of in-kernel loops for PREEMPT=n builds. 1450 - // 1451 - // 2. Protects code in the idle loop, exception entry/exit, and 1452 - // CPU-hotplug code paths, similar to the capabilities of SRCU. 1453 - // 1454 - // 3. Avoids expensive read-side instructions, having overhead similar 1455 - // to that of Preemptible RCU. 1456 - // 1457 - // There are of course downsides. For example, the grace-period code 1458 - // can send IPIs to CPUs, even when those CPUs are in the idle loop or 1459 - // in nohz_full userspace. If needed, these downsides can be at least 1460 - // partially remedied. 1461 - // 1462 - // Perhaps most important, this variant of RCU does not affect the vanilla 1463 - // flavors, rcu_preempt and rcu_sched. The fact that RCU Tasks Trace 1464 - // readers can operate from idle, offline, and exception entry/exit in no 1465 - // way allows rcu_preempt and rcu_sched readers to also do so. 1466 - // 1467 - // The implementation uses rcu_tasks_wait_gp(), which relies on function 1468 - // pointers in the rcu_tasks structure. 
The rcu_spawn_tasks_trace_kthread() 1469 - // function sets these function pointers up so that rcu_tasks_wait_gp() 1470 - // invokes these functions in this order: 1471 - // 1472 - // rcu_tasks_trace_pregp_step(): 1473 - // Disables CPU hotplug, adds all currently executing tasks to the 1474 - // holdout list, then checks the state of all tasks that blocked 1475 - // or were preempted within their current RCU Tasks Trace read-side 1476 - // critical section, adding them to the holdout list if appropriate. 1477 - // Finally, this function re-enables CPU hotplug. 1478 - // The ->pertask_func() pointer is NULL, so there is no per-task processing. 1479 - // rcu_tasks_trace_postscan(): 1480 - // Invokes synchronize_rcu() to wait for late-stage exiting tasks 1481 - // to finish exiting. 1482 - // check_all_holdout_tasks_trace(), repeatedly until holdout list is empty: 1483 - // Scans the holdout list, attempting to identify a quiescent state 1484 - // for each task on the list. If there is a quiescent state, the 1485 - // corresponding task is removed from the holdout list. Once this 1486 - // list is empty, the grace period has completed. 1487 - // rcu_tasks_trace_postgp(): 1488 - // Provides the needed full memory barrier and does debug checks. 1489 - // 1490 - // The exit_tasks_rcu_finish_trace() synchronizes with exiting tasks. 1491 - // 1492 - // Pre-grace-period update-side code is ordered before the grace period 1493 - // via the ->cbs_lock and barriers in rcu_tasks_kthread(). Pre-grace-period 1494 - // read-side code is ordered before the grace period by atomic operations 1495 - // on .b.need_qs flag of each task involved in this process, or by scheduler 1496 - // context-switch ordering (for locked-down non-running readers). 1497 - 1498 - // The lockdep state must be outside of #ifdef to be useful. 1499 - #ifdef CONFIG_DEBUG_LOCK_ALLOC 1500 - static struct lock_class_key rcu_lock_trace_key; 1501 - struct lockdep_map rcu_trace_lock_map = 1502 - STATIC_LOCKDEP_MAP_INIT("rcu_read_lock_trace", &rcu_lock_trace_key); 1503 - EXPORT_SYMBOL_GPL(rcu_trace_lock_map); 1504 - #endif /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */ 1505 - 1506 - #ifdef CONFIG_TASKS_TRACE_RCU 1507 - 1508 - // Record outstanding IPIs to each CPU. No point in sending two... 1509 - static DEFINE_PER_CPU(bool, trc_ipi_to_cpu); 1510 - 1511 - // The number of detections of task quiescent state relying on 1512 - // heavyweight readers executing explicit memory barriers. 1513 - static unsigned long n_heavy_reader_attempts; 1514 - static unsigned long n_heavy_reader_updates; 1515 - static unsigned long n_heavy_reader_ofl_updates; 1516 - static unsigned long n_trc_holdouts; 1517 - 1518 - void call_rcu_tasks_trace(struct rcu_head *rhp, rcu_callback_t func); 1519 - DEFINE_RCU_TASKS(rcu_tasks_trace, rcu_tasks_wait_gp, call_rcu_tasks_trace, 1520 - "RCU Tasks Trace"); 1521 - 1522 - /* Load from ->trc_reader_special.b.need_qs with proper ordering. */ 1523 - static u8 rcu_ld_need_qs(struct task_struct *t) 1524 - { 1525 - smp_mb(); // Enforce full grace-period ordering. 1526 - return smp_load_acquire(&t->trc_reader_special.b.need_qs); 1527 - } 1528 - 1529 - /* Store to ->trc_reader_special.b.need_qs with proper ordering. */ 1530 - static void rcu_st_need_qs(struct task_struct *t, u8 v) 1531 - { 1532 - smp_store_release(&t->trc_reader_special.b.need_qs, v); 1533 - smp_mb(); // Enforce full grace-period ordering. 
1534 - } 1535 - 1536 - /* 1537 - * Do a cmpxchg() on ->trc_reader_special.b.need_qs, allowing for 1538 - * the four-byte operand-size restriction of some platforms. 1539 - * 1540 - * Returns the old value, which is often ignored. 1541 - */ 1542 - u8 rcu_trc_cmpxchg_need_qs(struct task_struct *t, u8 old, u8 new) 1543 - { 1544 - return cmpxchg(&t->trc_reader_special.b.need_qs, old, new); 1545 - } 1546 - EXPORT_SYMBOL_GPL(rcu_trc_cmpxchg_need_qs); 1547 - 1548 - /* 1549 - * If we are the last reader, signal the grace-period kthread. 1550 - * Also remove from the per-CPU list of blocked tasks. 1551 - */ 1552 - void rcu_read_unlock_trace_special(struct task_struct *t) 1553 - { 1554 - unsigned long flags; 1555 - struct rcu_tasks_percpu *rtpcp; 1556 - union rcu_special trs; 1557 - 1558 - // Open-coded full-word version of rcu_ld_need_qs(). 1559 - smp_mb(); // Enforce full grace-period ordering. 1560 - trs = smp_load_acquire(&t->trc_reader_special); 1561 - 1562 - if (IS_ENABLED(CONFIG_TASKS_TRACE_RCU_READ_MB) && t->trc_reader_special.b.need_mb) 1563 - smp_mb(); // Pairs with update-side barriers. 1564 - // Update .need_qs before ->trc_reader_nesting for irq/NMI handlers. 1565 - if (trs.b.need_qs == (TRC_NEED_QS_CHECKED | TRC_NEED_QS)) { 1566 - u8 result = rcu_trc_cmpxchg_need_qs(t, TRC_NEED_QS_CHECKED | TRC_NEED_QS, 1567 - TRC_NEED_QS_CHECKED); 1568 - 1569 - WARN_ONCE(result != trs.b.need_qs, "%s: result = %d", __func__, result); 1570 - } 1571 - if (trs.b.blocked) { 1572 - rtpcp = per_cpu_ptr(rcu_tasks_trace.rtpcpu, t->trc_blkd_cpu); 1573 - raw_spin_lock_irqsave_rcu_node(rtpcp, flags); 1574 - list_del_init(&t->trc_blkd_node); 1575 - WRITE_ONCE(t->trc_reader_special.b.blocked, false); 1576 - raw_spin_unlock_irqrestore_rcu_node(rtpcp, flags); 1577 - } 1578 - WRITE_ONCE(t->trc_reader_nesting, 0); 1579 - } 1580 - EXPORT_SYMBOL_GPL(rcu_read_unlock_trace_special); 1581 - 1582 - /* Add a newly blocked reader task to its CPU's list. */ 1583 - void rcu_tasks_trace_qs_blkd(struct task_struct *t) 1584 - { 1585 - unsigned long flags; 1586 - struct rcu_tasks_percpu *rtpcp; 1587 - 1588 - local_irq_save(flags); 1589 - rtpcp = this_cpu_ptr(rcu_tasks_trace.rtpcpu); 1590 - raw_spin_lock_rcu_node(rtpcp); // irqs already disabled 1591 - t->trc_blkd_cpu = smp_processor_id(); 1592 - if (!rtpcp->rtp_blkd_tasks.next) 1593 - INIT_LIST_HEAD(&rtpcp->rtp_blkd_tasks); 1594 - list_add(&t->trc_blkd_node, &rtpcp->rtp_blkd_tasks); 1595 - WRITE_ONCE(t->trc_reader_special.b.blocked, true); 1596 - raw_spin_unlock_irqrestore_rcu_node(rtpcp, flags); 1597 - } 1598 - EXPORT_SYMBOL_GPL(rcu_tasks_trace_qs_blkd); 1599 - 1600 - /* Add a task to the holdout list, if it is not already on the list. */ 1601 - static void trc_add_holdout(struct task_struct *t, struct list_head *bhp) 1602 - { 1603 - if (list_empty(&t->trc_holdout_list)) { 1604 - get_task_struct(t); 1605 - list_add(&t->trc_holdout_list, bhp); 1606 - n_trc_holdouts++; 1607 - } 1608 - } 1609 - 1610 - /* Remove a task from the holdout list, if it is in fact present. */ 1611 - static void trc_del_holdout(struct task_struct *t) 1612 - { 1613 - if (!list_empty(&t->trc_holdout_list)) { 1614 - list_del_init(&t->trc_holdout_list); 1615 - put_task_struct(t); 1616 - n_trc_holdouts--; 1617 - } 1618 - } 1619 - 1620 - /* IPI handler to check task state. */ 1621 - static void trc_read_check_handler(void *t_in) 1622 - { 1623 - int nesting; 1624 - struct task_struct *t = current; 1625 - struct task_struct *texp = t_in; 1626 - 1627 - // If the task is no longer running on this CPU, leave. 
1628 - if (unlikely(texp != t)) 1629 - goto reset_ipi; // Already on holdout list, so will check later. 1630 - 1631 - // If the task is not in a read-side critical section, and 1632 - // if this is the last reader, awaken the grace-period kthread. 1633 - nesting = READ_ONCE(t->trc_reader_nesting); 1634 - if (likely(!nesting)) { 1635 - rcu_trc_cmpxchg_need_qs(t, 0, TRC_NEED_QS_CHECKED); 1636 - goto reset_ipi; 1637 - } 1638 - // If we are racing with an rcu_read_unlock_trace(), try again later. 1639 - if (unlikely(nesting < 0)) 1640 - goto reset_ipi; 1641 - 1642 - // Get here if the task is in a read-side critical section. 1643 - // Set its state so that it will update state for the grace-period 1644 - // kthread upon exit from that critical section. 1645 - rcu_trc_cmpxchg_need_qs(t, 0, TRC_NEED_QS | TRC_NEED_QS_CHECKED); 1646 - 1647 - reset_ipi: 1648 - // Allow future IPIs to be sent on CPU and for task. 1649 - // Also order this IPI handler against any later manipulations of 1650 - // the intended task. 1651 - smp_store_release(per_cpu_ptr(&trc_ipi_to_cpu, smp_processor_id()), false); // ^^^ 1652 - smp_store_release(&texp->trc_ipi_to_cpu, -1); // ^^^ 1653 - } 1654 - 1655 - /* Callback function for scheduler to check locked-down task. */ 1656 - static int trc_inspect_reader(struct task_struct *t, void *bhp_in) 1657 - { 1658 - struct list_head *bhp = bhp_in; 1659 - int cpu = task_cpu(t); 1660 - int nesting; 1661 - bool ofl = cpu_is_offline(cpu); 1662 - 1663 - if (task_curr(t) && !ofl) { 1664 - // If no chance of heavyweight readers, do it the hard way. 1665 - if (!IS_ENABLED(CONFIG_TASKS_TRACE_RCU_READ_MB)) 1666 - return -EINVAL; 1667 - 1668 - // If heavyweight readers are enabled on the remote task, 1669 - // we can inspect its state despite its currently running. 1670 - // However, we cannot safely change its state. 1671 - n_heavy_reader_attempts++; 1672 - // Check for "running" idle tasks on offline CPUs. 1673 - if (!rcu_watching_zero_in_eqs(cpu, &t->trc_reader_nesting)) 1674 - return -EINVAL; // No quiescent state, do it the hard way. 1675 - n_heavy_reader_updates++; 1676 - nesting = 0; 1677 - } else { 1678 - // The task is not running, so C-language access is safe. 1679 - nesting = t->trc_reader_nesting; 1680 - WARN_ON_ONCE(ofl && task_curr(t) && (t != idle_task(task_cpu(t)))); 1681 - if (IS_ENABLED(CONFIG_TASKS_TRACE_RCU_READ_MB) && ofl) 1682 - n_heavy_reader_ofl_updates++; 1683 - } 1684 - 1685 - // If not exiting a read-side critical section, mark as checked 1686 - // so that the grace-period kthread will remove it from the 1687 - // holdout list. 1688 - if (!nesting) { 1689 - rcu_trc_cmpxchg_need_qs(t, 0, TRC_NEED_QS_CHECKED); 1690 - return 0; // In QS, so done. 1691 - } 1692 - if (nesting < 0) 1693 - return -EINVAL; // Reader transitioning, try again later. 1694 - 1695 - // The task is in a read-side critical section, so set up its 1696 - // state so that it will update state upon exit from that critical 1697 - // section. 1698 - if (!rcu_trc_cmpxchg_need_qs(t, 0, TRC_NEED_QS | TRC_NEED_QS_CHECKED)) 1699 - trc_add_holdout(t, bhp); 1700 - return 0; 1701 - } 1702 - 1703 - /* Attempt to extract the state for the specified task. */ 1704 - static void trc_wait_for_one_reader(struct task_struct *t, 1705 - struct list_head *bhp) 1706 - { 1707 - int cpu; 1708 - 1709 - // If a previous IPI is still in flight, let it complete. 1710 - if (smp_load_acquire(&t->trc_ipi_to_cpu) != -1) // Order IPI 1711 - return; 1712 - 1713 - // The current task had better be in a quiescent state. 
1714 - if (t == current) { 1715 - rcu_trc_cmpxchg_need_qs(t, 0, TRC_NEED_QS_CHECKED); 1716 - WARN_ON_ONCE(READ_ONCE(t->trc_reader_nesting)); 1717 - return; 1718 - } 1719 - 1720 - // Attempt to nail down the task for inspection. 1721 - get_task_struct(t); 1722 - if (!task_call_func(t, trc_inspect_reader, bhp)) { 1723 - put_task_struct(t); 1724 - return; 1725 - } 1726 - put_task_struct(t); 1727 - 1728 - // If this task is not yet on the holdout list, then we are in 1729 - // an RCU read-side critical section. Otherwise, the invocation of 1730 - // trc_add_holdout() that added it to the list did the necessary 1731 - // get_task_struct(). Either way, the task cannot be freed out 1732 - // from under this code. 1733 - 1734 - // If currently running, send an IPI, either way, add to list. 1735 - trc_add_holdout(t, bhp); 1736 - if (task_curr(t) && 1737 - time_after(jiffies + 1, rcu_tasks_trace.gp_start + rcu_task_ipi_delay)) { 1738 - // The task is currently running, so try IPIing it. 1739 - cpu = task_cpu(t); 1740 - 1741 - // If there is already an IPI outstanding, let it happen. 1742 - if (per_cpu(trc_ipi_to_cpu, cpu) || t->trc_ipi_to_cpu >= 0) 1743 - return; 1744 - 1745 - per_cpu(trc_ipi_to_cpu, cpu) = true; 1746 - t->trc_ipi_to_cpu = cpu; 1747 - rcu_tasks_trace.n_ipis++; 1748 - if (smp_call_function_single(cpu, trc_read_check_handler, t, 0)) { 1749 - // Just in case there is some other reason for 1750 - // failure than the target CPU being offline. 1751 - WARN_ONCE(1, "%s(): smp_call_function_single() failed for CPU: %d\n", 1752 - __func__, cpu); 1753 - rcu_tasks_trace.n_ipis_fails++; 1754 - per_cpu(trc_ipi_to_cpu, cpu) = false; 1755 - t->trc_ipi_to_cpu = -1; 1756 - } 1757 - } 1758 - } 1759 - 1760 - /* 1761 - * Initialize for first-round processing for the specified task. 1762 - * Return false if task is NULL or already taken care of, true otherwise. 1763 - */ 1764 - static bool rcu_tasks_trace_pertask_prep(struct task_struct *t, bool notself) 1765 - { 1766 - // During early boot when there is only the one boot CPU, there 1767 - // is no idle task for the other CPUs. Also, the grace-period 1768 - // kthread is always in a quiescent state. In addition, just return 1769 - // if this task is already on the list. 1770 - if (unlikely(t == NULL) || (t == current && notself) || !list_empty(&t->trc_holdout_list)) 1771 - return false; 1772 - 1773 - rcu_st_need_qs(t, 0); 1774 - t->trc_ipi_to_cpu = -1; 1775 - return true; 1776 - } 1777 - 1778 - /* Do first-round processing for the specified task. */ 1779 - static void rcu_tasks_trace_pertask(struct task_struct *t, struct list_head *hop) 1780 - { 1781 - if (rcu_tasks_trace_pertask_prep(t, true)) 1782 - trc_wait_for_one_reader(t, hop); 1783 - } 1784 - 1785 - /* Initialize for a new RCU-tasks-trace grace period. */ 1786 - static void rcu_tasks_trace_pregp_step(struct list_head *hop) 1787 - { 1788 - LIST_HEAD(blkd_tasks); 1789 - int cpu; 1790 - unsigned long flags; 1791 - struct rcu_tasks_percpu *rtpcp; 1792 - struct task_struct *t; 1793 - 1794 - // There shouldn't be any old IPIs, but... 1795 - for_each_possible_cpu(cpu) 1796 - WARN_ON_ONCE(per_cpu(trc_ipi_to_cpu, cpu)); 1797 - 1798 - // Disable CPU hotplug across the CPU scan for the benefit of 1799 - // any IPIs that might be needed. This also waits for all readers 1800 - // in CPU-hotplug code paths. 1801 - cpus_read_lock(); 1802 - 1803 - // These rcu_tasks_trace_pertask_prep() calls are serialized to 1804 - // allow safe access to the hop list. 
1805 - for_each_online_cpu(cpu) { 1806 - rcu_read_lock(); 1807 - // Note that cpu_curr_snapshot() picks up the target 1808 - // CPU's current task while its runqueue is locked with 1809 - // an smp_mb__after_spinlock(). This ensures that either 1810 - // the grace-period kthread will see that task's read-side 1811 - // critical section or the task will see the updater's pre-GP 1812 - // accesses. The trailing smp_mb() in cpu_curr_snapshot() 1813 - // does not currently play a role other than simplify 1814 - // that function's ordering semantics. If these simplified 1815 - // ordering semantics continue to be redundant, that smp_mb() 1816 - // might be removed. 1817 - t = cpu_curr_snapshot(cpu); 1818 - if (rcu_tasks_trace_pertask_prep(t, true)) 1819 - trc_add_holdout(t, hop); 1820 - rcu_read_unlock(); 1821 - cond_resched_tasks_rcu_qs(); 1822 - } 1823 - 1824 - // Only after all running tasks have been accounted for is it 1825 - // safe to take care of the tasks that have blocked within their 1826 - // current RCU tasks trace read-side critical section. 1827 - for_each_possible_cpu(cpu) { 1828 - rtpcp = per_cpu_ptr(rcu_tasks_trace.rtpcpu, cpu); 1829 - raw_spin_lock_irqsave_rcu_node(rtpcp, flags); 1830 - list_splice_init(&rtpcp->rtp_blkd_tasks, &blkd_tasks); 1831 - while (!list_empty(&blkd_tasks)) { 1832 - rcu_read_lock(); 1833 - t = list_first_entry(&blkd_tasks, struct task_struct, trc_blkd_node); 1834 - list_del_init(&t->trc_blkd_node); 1835 - list_add(&t->trc_blkd_node, &rtpcp->rtp_blkd_tasks); 1836 - raw_spin_unlock_irqrestore_rcu_node(rtpcp, flags); 1837 - rcu_tasks_trace_pertask(t, hop); 1838 - rcu_read_unlock(); 1839 - raw_spin_lock_irqsave_rcu_node(rtpcp, flags); 1840 - } 1841 - raw_spin_unlock_irqrestore_rcu_node(rtpcp, flags); 1842 - cond_resched_tasks_rcu_qs(); 1843 - } 1844 - 1845 - // Re-enable CPU hotplug now that the holdout list is populated. 1846 - cpus_read_unlock(); 1847 - } 1848 - 1849 - /* 1850 - * Do intermediate processing between task and holdout scans. 1851 - */ 1852 - static void rcu_tasks_trace_postscan(struct list_head *hop) 1853 - { 1854 - // Wait for late-stage exiting tasks to finish exiting. 1855 - // These might have passed the call to exit_tasks_rcu_finish(). 1856 - 1857 - // If you remove the following line, update rcu_trace_implies_rcu_gp()!!! 1858 - synchronize_rcu(); 1859 - // Any tasks that exit after this point will set 1860 - // TRC_NEED_QS_CHECKED in ->trc_reader_special.b.need_qs. 1861 - } 1862 - 1863 - /* Communicate task state back to the RCU tasks trace stall warning request. */ 1864 - struct trc_stall_chk_rdr { 1865 - int nesting; 1866 - int ipi_to_cpu; 1867 - u8 needqs; 1868 - }; 1869 - 1870 - static int trc_check_slow_task(struct task_struct *t, void *arg) 1871 - { 1872 - struct trc_stall_chk_rdr *trc_rdrp = arg; 1873 - 1874 - if (task_curr(t) && cpu_online(task_cpu(t))) 1875 - return false; // It is running, so decline to inspect it. 1876 - trc_rdrp->nesting = READ_ONCE(t->trc_reader_nesting); 1877 - trc_rdrp->ipi_to_cpu = READ_ONCE(t->trc_ipi_to_cpu); 1878 - trc_rdrp->needqs = rcu_ld_need_qs(t); 1879 - return true; 1880 - } 1881 - 1882 - /* Show the state of a task stalling the current RCU tasks trace GP. 
*/ 1883 - static void show_stalled_task_trace(struct task_struct *t, bool *firstreport) 1884 - { 1885 - int cpu; 1886 - struct trc_stall_chk_rdr trc_rdr; 1887 - bool is_idle_tsk = is_idle_task(t); 1888 - 1889 - if (*firstreport) { 1890 - pr_err("INFO: rcu_tasks_trace detected stalls on tasks:\n"); 1891 - *firstreport = false; 1892 - } 1893 - cpu = task_cpu(t); 1894 - if (!task_call_func(t, trc_check_slow_task, &trc_rdr)) 1895 - pr_alert("P%d: %c%c\n", 1896 - t->pid, 1897 - ".I"[t->trc_ipi_to_cpu >= 0], 1898 - ".i"[is_idle_tsk]); 1899 - else 1900 - pr_alert("P%d: %c%c%c%c nesting: %d%c%c cpu: %d%s\n", 1901 - t->pid, 1902 - ".I"[trc_rdr.ipi_to_cpu >= 0], 1903 - ".i"[is_idle_tsk], 1904 - ".N"[cpu >= 0 && tick_nohz_full_cpu(cpu)], 1905 - ".B"[!!data_race(t->trc_reader_special.b.blocked)], 1906 - trc_rdr.nesting, 1907 - " !CN"[trc_rdr.needqs & 0x3], 1908 - " ?"[trc_rdr.needqs > 0x3], 1909 - cpu, cpu_online(cpu) ? "" : "(offline)"); 1910 - sched_show_task(t); 1911 - } 1912 - 1913 - /* List stalled IPIs for RCU tasks trace. */ 1914 - static void show_stalled_ipi_trace(void) 1915 - { 1916 - int cpu; 1917 - 1918 - for_each_possible_cpu(cpu) 1919 - if (per_cpu(trc_ipi_to_cpu, cpu)) 1920 - pr_alert("\tIPI outstanding to CPU %d\n", cpu); 1921 - } 1922 - 1923 - /* Do one scan of the holdout list. */ 1924 - static void check_all_holdout_tasks_trace(struct list_head *hop, 1925 - bool needreport, bool *firstreport) 1926 - { 1927 - struct task_struct *g, *t; 1928 - 1929 - // Disable CPU hotplug across the holdout list scan for IPIs. 1930 - cpus_read_lock(); 1931 - 1932 - list_for_each_entry_safe(t, g, hop, trc_holdout_list) { 1933 - // If safe and needed, try to check the current task. 1934 - if (READ_ONCE(t->trc_ipi_to_cpu) == -1 && 1935 - !(rcu_ld_need_qs(t) & TRC_NEED_QS_CHECKED)) 1936 - trc_wait_for_one_reader(t, hop); 1937 - 1938 - // If check succeeded, remove this task from the list. 1939 - if (smp_load_acquire(&t->trc_ipi_to_cpu) == -1 && 1940 - rcu_ld_need_qs(t) == TRC_NEED_QS_CHECKED) 1941 - trc_del_holdout(t); 1942 - else if (needreport) 1943 - show_stalled_task_trace(t, firstreport); 1944 - cond_resched_tasks_rcu_qs(); 1945 - } 1946 - 1947 - // Re-enable CPU hotplug now that the holdout list scan has completed. 1948 - cpus_read_unlock(); 1949 - 1950 - if (needreport) { 1951 - if (*firstreport) 1952 - pr_err("INFO: rcu_tasks_trace detected stalls? (Late IPI?)\n"); 1953 - show_stalled_ipi_trace(); 1954 - } 1955 - } 1956 - 1957 - static void rcu_tasks_trace_empty_fn(void *unused) 1958 - { 1959 - } 1960 - 1961 - /* Wait for grace period to complete and provide ordering. */ 1962 - static void rcu_tasks_trace_postgp(struct rcu_tasks *rtp) 1963 - { 1964 - int cpu; 1965 - 1966 - // Wait for any lingering IPI handlers to complete. Note that 1967 - // if a CPU has gone offline or transitioned to userspace in the 1968 - // meantime, all IPI handlers should have been drained beforehand. 1969 - // Yes, this assumes that CPUs process IPIs in order. If that ever 1970 - // changes, there will need to be a recheck and/or timed wait. 1971 - for_each_online_cpu(cpu) 1972 - if (WARN_ON_ONCE(smp_load_acquire(per_cpu_ptr(&trc_ipi_to_cpu, cpu)))) 1973 - smp_call_function_single(cpu, rcu_tasks_trace_empty_fn, NULL, 1); 1974 - 1975 - smp_mb(); // Caller's code must be ordered after wakeup. 1976 - // Pairs with pretty much every ordering primitive. 1977 - } 1978 - 1979 - /* Report any needed quiescent state for this exiting task. 
*/ 1980 - static void exit_tasks_rcu_finish_trace(struct task_struct *t) 1981 - { 1982 - union rcu_special trs = READ_ONCE(t->trc_reader_special); 1983 - 1984 - rcu_trc_cmpxchg_need_qs(t, 0, TRC_NEED_QS_CHECKED); 1985 - WARN_ON_ONCE(READ_ONCE(t->trc_reader_nesting)); 1986 - if (WARN_ON_ONCE(rcu_ld_need_qs(t) & TRC_NEED_QS || trs.b.blocked)) 1987 - rcu_read_unlock_trace_special(t); 1988 - else 1989 - WRITE_ONCE(t->trc_reader_nesting, 0); 1990 - } 1991 - 1992 - /** 1993 - * call_rcu_tasks_trace() - Queue a callback trace task-based grace period 1994 - * @rhp: structure to be used for queueing the RCU updates. 1995 - * @func: actual callback function to be invoked after the grace period 1996 - * 1997 - * The callback function will be invoked some time after a trace rcu-tasks 1998 - * grace period elapses, in other words after all currently executing 1999 - * trace rcu-tasks read-side critical sections have completed. These 2000 - * read-side critical sections are delimited by calls to rcu_read_lock_trace() 2001 - * and rcu_read_unlock_trace(). 2002 - * 2003 - * See the description of call_rcu() for more detailed information on 2004 - * memory ordering guarantees. 2005 - */ 2006 - void call_rcu_tasks_trace(struct rcu_head *rhp, rcu_callback_t func) 2007 - { 2008 - call_rcu_tasks_generic(rhp, func, &rcu_tasks_trace); 2009 - } 2010 - EXPORT_SYMBOL_GPL(call_rcu_tasks_trace); 2011 - 2012 - /** 2013 - * synchronize_rcu_tasks_trace - wait for a trace rcu-tasks grace period 2014 - * 2015 - * Control will return to the caller some time after a trace rcu-tasks 2016 - * grace period has elapsed, in other words after all currently executing 2017 - * trace rcu-tasks read-side critical sections have elapsed. These read-side 2018 - * critical sections are delimited by calls to rcu_read_lock_trace() 2019 - * and rcu_read_unlock_trace(). 2020 - * 2021 - * This is a very specialized primitive, intended only for a few uses in 2022 - * tracing and other situations requiring manipulation of function preambles 2023 - * and profiling hooks. The synchronize_rcu_tasks_trace() function is not 2024 - * (yet) intended for heavy use from multiple CPUs. 2025 - * 2026 - * See the description of synchronize_rcu() for more detailed information 2027 - * on memory ordering guarantees. 2028 - */ 2029 - void synchronize_rcu_tasks_trace(void) 2030 - { 2031 - RCU_LOCKDEP_WARN(lock_is_held(&rcu_trace_lock_map), "Illegal synchronize_rcu_tasks_trace() in RCU Tasks Trace read-side critical section"); 2032 - synchronize_rcu_tasks_generic(&rcu_tasks_trace); 2033 - } 2034 - EXPORT_SYMBOL_GPL(synchronize_rcu_tasks_trace); 2035 - 2036 - /** 2037 - * rcu_barrier_tasks_trace - Wait for in-flight call_rcu_tasks_trace() callbacks. 2038 - * 2039 - * Although the current implementation is guaranteed to wait, it is not 2040 - * obligated to, for example, if there are no pending callbacks. 
2041 - */ 2042 - void rcu_barrier_tasks_trace(void) 2043 - { 2044 - rcu_barrier_tasks_generic(&rcu_tasks_trace); 2045 - } 2046 - EXPORT_SYMBOL_GPL(rcu_barrier_tasks_trace); 2047 - 2048 - int rcu_tasks_trace_lazy_ms = -1; 2049 - module_param(rcu_tasks_trace_lazy_ms, int, 0444); 2050 - 2051 - static int __init rcu_spawn_tasks_trace_kthread(void) 2052 - { 2053 - if (IS_ENABLED(CONFIG_TASKS_TRACE_RCU_READ_MB)) { 2054 - rcu_tasks_trace.gp_sleep = HZ / 10; 2055 - rcu_tasks_trace.init_fract = HZ / 10; 2056 - } else { 2057 - rcu_tasks_trace.gp_sleep = HZ / 200; 2058 - if (rcu_tasks_trace.gp_sleep <= 0) 2059 - rcu_tasks_trace.gp_sleep = 1; 2060 - rcu_tasks_trace.init_fract = HZ / 200; 2061 - if (rcu_tasks_trace.init_fract <= 0) 2062 - rcu_tasks_trace.init_fract = 1; 2063 - } 2064 - if (rcu_tasks_trace_lazy_ms >= 0) 2065 - rcu_tasks_trace.lazy_jiffies = msecs_to_jiffies(rcu_tasks_trace_lazy_ms); 2066 - rcu_tasks_trace.pregp_func = rcu_tasks_trace_pregp_step; 2067 - rcu_tasks_trace.postscan_func = rcu_tasks_trace_postscan; 2068 - rcu_tasks_trace.holdouts_func = check_all_holdout_tasks_trace; 2069 - rcu_tasks_trace.postgp_func = rcu_tasks_trace_postgp; 2070 - rcu_spawn_tasks_kthread_generic(&rcu_tasks_trace); 2071 - return 0; 2072 - } 2073 - 2074 - #if !defined(CONFIG_TINY_RCU) 2075 - void show_rcu_tasks_trace_gp_kthread(void) 2076 - { 2077 - char buf[64]; 2078 - 2079 - snprintf(buf, sizeof(buf), "N%lu h:%lu/%lu/%lu", 2080 - data_race(n_trc_holdouts), 2081 - data_race(n_heavy_reader_ofl_updates), 2082 - data_race(n_heavy_reader_updates), 2083 - data_race(n_heavy_reader_attempts)); 2084 - show_rcu_tasks_generic_gp_kthread(&rcu_tasks_trace, buf); 2085 - } 2086 - EXPORT_SYMBOL_GPL(show_rcu_tasks_trace_gp_kthread); 2087 - 2088 - void rcu_tasks_trace_torture_stats_print(char *tt, char *tf) 2089 - { 2090 - rcu_tasks_torture_stats_print_generic(&rcu_tasks_trace, tt, tf, ""); 2091 - } 2092 - EXPORT_SYMBOL_GPL(rcu_tasks_trace_torture_stats_print); 2093 - #endif // !defined(CONFIG_TINY_RCU) 2094 - 2095 - struct task_struct *get_rcu_tasks_trace_gp_kthread(void) 2096 - { 2097 - return rcu_tasks_trace.kthread_ptr; 2098 - } 2099 - EXPORT_SYMBOL_GPL(get_rcu_tasks_trace_gp_kthread); 2100 - 2101 - void rcu_tasks_trace_get_gp_data(int *flags, unsigned long *gp_seq) 2102 - { 2103 - *flags = 0; 2104 - *gp_seq = rcu_seq_current(&rcu_tasks_trace.tasks_gp_seq); 2105 - } 2106 - EXPORT_SYMBOL_GPL(rcu_tasks_trace_get_gp_data); 2107 - 2108 - #else /* #ifdef CONFIG_TASKS_TRACE_RCU */ 2109 - static void exit_tasks_rcu_finish_trace(struct task_struct *t) { } 2110 - #endif /* #else #ifdef CONFIG_TASKS_TRACE_RCU */ 2111 - 2112 1452 #ifndef CONFIG_TINY_RCU 2113 1453 void show_rcu_tasks_gp_kthreads(void) 2114 1454 { 2115 1455 show_rcu_tasks_classic_gp_kthread(); 2116 1456 show_rcu_tasks_rude_gp_kthread(); 2117 - show_rcu_tasks_trace_gp_kthread(); 2118 1457 } 2119 1458 #endif /* #ifndef CONFIG_TINY_RCU */ 2120 1459 ··· 1570 2251 #ifdef CONFIG_TASKS_RUDE_RCU 1571 2252 cblist_init_generic(&rcu_tasks_rude); 1572 2253 #endif 1573 - 1574 - #ifdef CONFIG_TASKS_TRACE_RCU 1575 - cblist_init_generic(&rcu_tasks_trace); 1576 - #endif 1577 2254 } 1578 2255 1579 2256 static int __init rcu_init_tasks_generic(void) ··· 1582 2267 rcu_spawn_tasks_rude_kthread(); 1583 2268 #endif 1584 2269 1585 - #ifdef CONFIG_TASKS_TRACE_RCU 1586 - rcu_spawn_tasks_trace_kthread(); 1587 - #endif 1588 - 1589 2270 // Run the self-tests. 
1590 2271 rcu_tasks_initiate_self_tests(); 1591 2272 ··· 1592 2281 #else /* #ifdef CONFIG_TASKS_RCU_GENERIC */ 1593 2282 static inline void rcu_tasks_bootup_oddness(void) {} 1594 2283 #endif /* #else #ifdef CONFIG_TASKS_RCU_GENERIC */ 2284 + 2285 + #ifdef CONFIG_TASKS_TRACE_RCU 2286 + 2287 + //////////////////////////////////////////////////////////////////////// 2288 + // 2289 + // Tracing variant of Tasks RCU. This variant is designed to be used 2290 + // to protect tracing hooks, including those of BPF. This variant 2291 + // is implemented via a straightforward mapping onto SRCU-fast. 2292 + 2293 + DEFINE_SRCU_FAST(rcu_tasks_trace_srcu_struct); 2294 + EXPORT_SYMBOL_GPL(rcu_tasks_trace_srcu_struct); 2295 + 2296 + #endif /* #else #ifdef CONFIG_TASKS_TRACE_RCU */
+13 -1
kernel/rcu/tree.c
··· 160 160 unsigned long gps, unsigned long flags); 161 161 static void invoke_rcu_core(void); 162 162 static void rcu_report_exp_rdp(struct rcu_data *rdp); 163 + static void rcu_report_qs_rdp(struct rcu_data *rdp); 163 164 static void check_cb_ovld_locked(struct rcu_data *rdp, struct rcu_node *rnp); 164 165 static bool rcu_rdp_is_offloaded(struct rcu_data *rdp); 165 166 static bool rcu_rdp_cpu_online(struct rcu_data *rdp); ··· 1984 1983 if (IS_ENABLED(CONFIG_RCU_STRICT_GRACE_PERIOD)) 1985 1984 on_each_cpu(rcu_strict_gp_boundary, NULL, 0); 1986 1985 1986 + /* 1987 + * Immediately report QS for the GP kthread's CPU. The GP kthread 1988 + * cannot be in an RCU read-side critical section while running 1989 + * the FQS scan. This eliminates the need for a second FQS wait 1990 + * when all CPUs are idle. 1991 + */ 1992 + preempt_disable(); 1993 + rcu_qs(); 1994 + rcu_report_qs_rdp(this_cpu_ptr(&rcu_data)); 1995 + preempt_enable(); 1996 + 1987 1997 return true; 1988 1998 } 1989 1999 ··· 3781 3769 } 3782 3770 rcu_nocb_unlock(rdp); 3783 3771 if (wake_nocb) 3784 - wake_nocb_gp(rdp, false); 3772 + wake_nocb_gp(rdp); 3785 3773 smp_store_release(&rdp->barrier_seq_snap, gseq); 3786 3774 } 3787 3775
+2 -3
kernel/rcu/tree.h
··· 203 203 /* during and after the last grace */ 204 204 /* period it is aware of. */ 205 205 struct irq_work defer_qs_iw; /* Obtain later scheduler attention. */ 206 - int defer_qs_iw_pending; /* Scheduler attention pending? */ 206 + int defer_qs_pending; /* irqwork or softirq pending? */ 207 207 struct work_struct strict_work; /* Schedule readers for strict GPs. */ 208 208 209 209 /* 2) batch handling */ ··· 301 301 #define RCU_NOCB_WAKE_BYPASS 1 302 302 #define RCU_NOCB_WAKE_LAZY 2 303 303 #define RCU_NOCB_WAKE 3 304 - #define RCU_NOCB_WAKE_FORCE 4 305 304 306 305 #define RCU_JIFFIES_TILL_FORCE_QS (1 + (HZ > 250) + (HZ > 500)) 307 306 /* For jiffies_till_first_fqs and */ ··· 499 500 static struct swait_queue_head *rcu_nocb_gp_get(struct rcu_node *rnp); 500 501 static void rcu_nocb_gp_cleanup(struct swait_queue_head *sq); 501 502 static void rcu_init_one_nocb(struct rcu_node *rnp); 502 - static bool wake_nocb_gp(struct rcu_data *rdp, bool force); 503 + static bool wake_nocb_gp(struct rcu_data *rdp); 503 504 static bool rcu_nocb_flush_bypass(struct rcu_data *rdp, struct rcu_head *rhp, 504 505 unsigned long j, bool lazy); 505 506 static void call_rcu_nocb(struct rcu_data *rdp, struct rcu_head *head,
+6 -1
kernel/rcu/tree_exp.h
··· 589 589 pr_cont(" } %lu jiffies s: %lu root: %#lx/%c\n", 590 590 j - jiffies_start, rcu_state.expedited_sequence, data_race(rnp_root->expmask), 591 591 ".T"[!!data_race(rnp_root->exp_tasks)]); 592 - if (ndetected) { 592 + if (!ndetected) { 593 + // This is invoked from the grace-period worker, so 594 + // a new grace period cannot have started. And if this 595 + // worker were stalled, we would not get here. ;-) 596 + pr_err("INFO: Expedited stall ended before state dump start\n"); 597 + } else { 593 598 pr_err("blocking rcu_node structures (internal RCU debug):"); 594 599 rcu_for_each_node_breadth_first(rnp) { 595 600 if (rnp == rnp_root)
+27 -53
kernel/rcu/tree_nocb.h
··· 190 190 init_swait_queue_head(&rnp->nocb_gp_wq[1]); 191 191 } 192 192 193 + /* Clear any pending deferred wakeup timer (nocb_gp_lock must be held). */ 194 + static void nocb_defer_wakeup_cancel(struct rcu_data *rdp_gp) 195 + { 196 + if (rdp_gp->nocb_defer_wakeup > RCU_NOCB_WAKE_NOT) { 197 + WRITE_ONCE(rdp_gp->nocb_defer_wakeup, RCU_NOCB_WAKE_NOT); 198 + timer_delete(&rdp_gp->nocb_timer); 199 + } 200 + } 201 + 193 202 static bool __wake_nocb_gp(struct rcu_data *rdp_gp, 194 203 struct rcu_data *rdp, 195 - bool force, unsigned long flags) 204 + unsigned long flags) 196 205 __releases(rdp_gp->nocb_gp_lock) 197 206 { 198 207 bool needwake = false; ··· 213 204 return false; 214 205 } 215 206 216 - if (rdp_gp->nocb_defer_wakeup > RCU_NOCB_WAKE_NOT) { 217 - WRITE_ONCE(rdp_gp->nocb_defer_wakeup, RCU_NOCB_WAKE_NOT); 218 - timer_delete(&rdp_gp->nocb_timer); 219 - } 207 + nocb_defer_wakeup_cancel(rdp_gp); 220 208 221 - if (force || READ_ONCE(rdp_gp->nocb_gp_sleep)) { 209 + if (READ_ONCE(rdp_gp->nocb_gp_sleep)) { 222 210 WRITE_ONCE(rdp_gp->nocb_gp_sleep, false); 223 211 needwake = true; 224 212 } ··· 231 225 /* 232 226 * Kick the GP kthread for this NOCB group. 233 227 */ 234 - static bool wake_nocb_gp(struct rcu_data *rdp, bool force) 228 + static bool wake_nocb_gp(struct rcu_data *rdp) 235 229 { 236 230 unsigned long flags; 237 231 struct rcu_data *rdp_gp = rdp->nocb_gp_rdp; 238 232 239 233 raw_spin_lock_irqsave(&rdp_gp->nocb_gp_lock, flags); 240 - return __wake_nocb_gp(rdp_gp, rdp, force, flags); 234 + return __wake_nocb_gp(rdp_gp, rdp, flags); 241 235 } 242 236 243 237 #ifdef CONFIG_RCU_LAZY ··· 524 518 } 525 519 526 520 /* 527 - * Awaken the no-CBs grace-period kthread if needed, either due to it 528 - * legitimately being asleep or due to overload conditions. 529 - * 530 - * If warranted, also wake up the kthread servicing this CPUs queues. 521 + * Awaken the no-CBs grace-period kthread if needed due to it legitimately 522 + * being asleep. 531 523 */ 532 524 static void __call_rcu_nocb_wake(struct rcu_data *rdp, bool was_alldone, 533 525 unsigned long flags) 534 526 __releases(rdp->nocb_lock) 535 527 { 536 528 long bypass_len; 537 - unsigned long cur_gp_seq; 538 - unsigned long j; 539 529 long lazy_len; 540 530 long len; 541 531 struct task_struct *t; 542 - struct rcu_data *rdp_gp = rdp->nocb_gp_rdp; 543 532 544 533 // If we are being polled or there is no kthread, just leave. 545 534 t = READ_ONCE(rdp->nocb_gp_kthread); ··· 550 549 lazy_len = READ_ONCE(rdp->lazy_len); 551 550 if (was_alldone) { 552 551 rdp->qlen_last_fqs_check = len; 552 + rcu_nocb_unlock(rdp); 553 553 // Only lazy CBs in bypass list 554 554 if (lazy_len && bypass_len == lazy_len) { 555 - rcu_nocb_unlock(rdp); 556 555 wake_nocb_gp_defer(rdp, RCU_NOCB_WAKE_LAZY, 557 556 TPS("WakeLazy")); 558 557 } else if (!irqs_disabled_flags(flags)) { 559 558 /* ... if queue was empty ... */ 560 - rcu_nocb_unlock(rdp); 561 - wake_nocb_gp(rdp, false); 559 + wake_nocb_gp(rdp); 562 560 trace_rcu_nocb_wake(rcu_state.name, rdp->cpu, 563 561 TPS("WakeEmpty")); 564 562 } else { 565 - rcu_nocb_unlock(rdp); 566 563 wake_nocb_gp_defer(rdp, RCU_NOCB_WAKE, 567 564 TPS("WakeEmptyIsDeferred")); 568 565 } 569 - } else if (len > rdp->qlen_last_fqs_check + qhimark) { 570 - /* ... or if many callbacks queued. 
*/ 571 - rdp->qlen_last_fqs_check = len; 572 - j = jiffies; 573 - if (j != rdp->nocb_gp_adv_time && 574 - rcu_segcblist_nextgp(&rdp->cblist, &cur_gp_seq) && 575 - rcu_seq_done(&rdp->mynode->gp_seq, cur_gp_seq)) { 576 - rcu_advance_cbs_nowake(rdp->mynode, rdp); 577 - rdp->nocb_gp_adv_time = j; 578 - } 579 - smp_mb(); /* Enqueue before timer_pending(). */ 580 - if ((rdp->nocb_cb_sleep || 581 - !rcu_segcblist_ready_cbs(&rdp->cblist)) && 582 - !timer_pending(&rdp_gp->nocb_timer)) { 583 - rcu_nocb_unlock(rdp); 584 - wake_nocb_gp_defer(rdp, RCU_NOCB_WAKE_FORCE, 585 - TPS("WakeOvfIsDeferred")); 586 - } else { 587 - rcu_nocb_unlock(rdp); 588 - trace_rcu_nocb_wake(rcu_state.name, rdp->cpu, TPS("WakeNot")); 589 - } 590 - } else { 591 - rcu_nocb_unlock(rdp); 592 - trace_rcu_nocb_wake(rcu_state.name, rdp->cpu, TPS("WakeNot")); 566 + 567 + return; 593 568 } 569 + 570 + rcu_nocb_unlock(rdp); 571 + trace_rcu_nocb_wake(rcu_state.name, rdp->cpu, TPS("WakeNot")); 594 572 } 595 573 596 574 static void call_rcu_nocb(struct rcu_data *rdp, struct rcu_head *head, ··· 794 814 if (rdp_toggling) 795 815 my_rdp->nocb_toggling_rdp = NULL; 796 816 797 - if (my_rdp->nocb_defer_wakeup > RCU_NOCB_WAKE_NOT) { 798 - WRITE_ONCE(my_rdp->nocb_defer_wakeup, RCU_NOCB_WAKE_NOT); 799 - timer_delete(&my_rdp->nocb_timer); 800 - } 817 + nocb_defer_wakeup_cancel(my_rdp); 801 818 WRITE_ONCE(my_rdp->nocb_gp_sleep, true); 802 819 raw_spin_unlock_irqrestore(&my_rdp->nocb_gp_lock, flags); 803 820 } else { ··· 943 966 unsigned long flags) 944 967 __releases(rdp_gp->nocb_gp_lock) 945 968 { 946 - int ndw; 947 969 int ret; 948 970 949 971 if (!rcu_nocb_need_deferred_wakeup(rdp_gp, level)) { ··· 950 974 return false; 951 975 } 952 976 953 - ndw = rdp_gp->nocb_defer_wakeup; 954 - ret = __wake_nocb_gp(rdp_gp, rdp, ndw == RCU_NOCB_WAKE_FORCE, flags); 977 + ret = __wake_nocb_gp(rdp_gp, rdp, flags); 955 978 trace_rcu_nocb_wake(rcu_state.name, rdp->cpu, TPS("DeferredWake")); 956 979 957 980 return ret; ··· 966 991 trace_rcu_nocb_wake(rcu_state.name, rdp->cpu, TPS("Timer")); 967 992 968 993 raw_spin_lock_irqsave(&rdp->nocb_gp_lock, flags); 969 - smp_mb__after_spinlock(); /* Timer expire before wakeup. */ 970 994 do_nocb_deferred_wakeup_common(rdp, rdp, RCU_NOCB_WAKE_BYPASS, flags); 971 995 } 972 996 ··· 1246 1272 } 1247 1273 rcu_nocb_try_flush_bypass(rdp, jiffies); 1248 1274 rcu_nocb_unlock_irqrestore(rdp, flags); 1249 - wake_nocb_gp(rdp, false); 1275 + wake_nocb_gp(rdp); 1250 1276 sc->nr_to_scan -= _count; 1251 1277 count += _count; 1252 1278 if (sc->nr_to_scan <= 0) ··· 1631 1657 { 1632 1658 } 1633 1659 1634 - static bool wake_nocb_gp(struct rcu_data *rdp, bool force) 1660 + static bool wake_nocb_gp(struct rcu_data *rdp) 1635 1661 { 1636 1662 return false; 1637 1663 }
+9 -6
kernel/rcu/tree_plugin.h
··· 487 487 union rcu_special special; 488 488 489 489 rdp = this_cpu_ptr(&rcu_data); 490 - if (rdp->defer_qs_iw_pending == DEFER_QS_PENDING) 491 - rdp->defer_qs_iw_pending = DEFER_QS_IDLE; 490 + if (rdp->defer_qs_pending == DEFER_QS_PENDING) 491 + rdp->defer_qs_pending = DEFER_QS_IDLE; 492 492 493 493 /* 494 494 * If RCU core is waiting for this CPU to exit its critical section, ··· 645 645 * 5. Deferred QS reporting does not happen. 646 646 */ 647 647 if (rcu_preempt_depth() > 0) 648 - WRITE_ONCE(rdp->defer_qs_iw_pending, DEFER_QS_IDLE); 648 + WRITE_ONCE(rdp->defer_qs_pending, DEFER_QS_IDLE); 649 649 } 650 650 651 651 /* ··· 747 747 // Using softirq, safe to awaken, and either the 748 748 // wakeup is free or there is either an expedited 749 749 // GP in flight or a potential need to deboost. 750 - raise_softirq_irqoff(RCU_SOFTIRQ); 750 + if (rdp->defer_qs_pending != DEFER_QS_PENDING) { 751 + rdp->defer_qs_pending = DEFER_QS_PENDING; 752 + raise_softirq_irqoff(RCU_SOFTIRQ); 753 + } 751 754 } else { 752 755 // Enabling BH or preempt does reschedule, so... 753 756 // Also if no expediting and no possible deboosting, ··· 758 755 // tick enabled. 759 756 set_need_resched_current(); 760 757 if (IS_ENABLED(CONFIG_IRQ_WORK) && irqs_were_disabled && 761 - needs_exp && rdp->defer_qs_iw_pending != DEFER_QS_PENDING && 758 + needs_exp && rdp->defer_qs_pending != DEFER_QS_PENDING && 762 759 cpu_online(rdp->cpu)) { 763 760 // Get scheduler to re-evaluate and call hooks. 764 761 // If !IRQ_WORK, FQS scan will eventually IPI. 765 - rdp->defer_qs_iw_pending = DEFER_QS_PENDING; 762 + rdp->defer_qs_pending = DEFER_QS_PENDING; 766 763 irq_work_queue_on(&rdp->defer_qs_iw, rdp->cpu); 767 764 } 768 765 }
+3 -1
scripts/checkpatch.pl
··· 863 863 #These should be enough to drive away new IDR users 864 864 "DEFINE_IDR" => "DEFINE_XARRAY", 865 865 "idr_init" => "xa_init", 866 - "idr_init_base" => "xa_init_flags" 866 + "idr_init_base" => "xa_init_flags", 867 + "rcu_read_lock_trace" => "rcu_read_lock_tasks_trace", 868 + "rcu_read_unlock_trace" => "rcu_read_unlock_tasks_trace", 867 869 ); 868 870 869 871 #Create a search pattern for all these strings to speed up a loop below
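These two entries extend checkpatch's deprecated-API table, so a patch that still uses the old names now draws a warning along the lines of "Deprecated use of 'rcu_read_lock_trace', prefer 'rcu_read_lock_tasks_trace'" (wording approximate). The fix is a mechanical rename of the lock/unlock calls, as in the reader sketch shown after the kernel/rcu/tasks.h hunk above.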
+1
tools/testing/selftests/rcutorture/.gitignore
··· 3 3 b[0-9]* 4 4 res 5 5 *.swp 6 + .kvm.sh.lock
+1 -1
tools/testing/selftests/rcutorture/bin/config2csv.sh
··· 42 42 grep -v '^#' < $i | grep -v '^ *$' > $T/p 43 43 if test -r $i.boot 44 44 then 45 - tr -s ' ' '\012' < $i.boot | grep -v '^#' >> $T/p 45 + sed -e 's/#.*$//' < $i.boot | tr -s ' ' '\012' >> $T/p 46 46 fi 47 47 sed -e 's/^[^=]*$/&=?/' < $T/p | 48 48 sed -e 's/^\([^=]*\)=\(.*\)$/\tp["\1:'"$i"'"] = "\2";\n\tc["\1"] = 1;/' >> $T/p.awk
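The old pipeline split the .boot file into one token per line and only then dropped tokens that themselves began with '#', so a trailing comment leaked its words into the parameter list. The sed substitution strips the whole comment before the split. For example, a hypothetical .boot line "rcutorture.onoff_interval=200 # hotplug every 200 jiffies" previously contributed the bogus parameters "hotplug", "every", "200", and "jiffies" to the CSV; it now yields only rcutorture.onoff_interval=200.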
+162 -22
tools/testing/selftests/rcutorture/bin/kvm-series.sh
··· 15 15 # This script is intended to replace kvm-check-branches.sh by providing 16 16 # ease of use and faster execution. 17 17 18 - T="`mktemp -d ${TMPDIR-/tmp}/kvm-series.sh.XXXXXX`" 18 + T="`mktemp -d ${TMPDIR-/tmp}/kvm-series.sh.XXXXXX`"; export T 19 19 trap 'rm -rf $T' 0 20 20 21 21 scriptname=$0 ··· 32 32 echo "$0: Repetition ('*') not allowed in config list." 33 33 exit 1 34 34 fi 35 + config_list_len="`echo ${config_list} | wc -w | awk '{ print $1; }'`" 35 36 36 37 commit_list="${2}" 37 38 if test -z "${commit_list}" ··· 48 47 exit 2 49 48 fi 50 49 sha1_list=`cat $T/commits` 50 + sha1_list_len="`echo ${sha1_list} | wc -w | awk '{ print $1; }'`" 51 51 52 52 shift 53 53 shift 54 54 55 55 RCUTORTURE="`pwd`/tools/testing/selftests/rcutorture"; export RCUTORTURE 56 56 PATH=${RCUTORTURE}/bin:$PATH; export PATH 57 + RES="${RCUTORTURE}/res"; export RES 57 58 . functions.sh 58 59 59 60 ret=0 60 - nfail=0 61 + nbuildfail=0 62 + nrunfail=0 61 63 nsuccess=0 62 - faillist= 64 + ncpus=0 65 + buildfaillist= 66 + runfaillist= 63 67 successlist= 64 68 cursha1="`git rev-parse --abbrev-ref HEAD`" 65 69 ds="`date +%Y.%m.%d-%H.%M.%S`-series" 70 + DS="${RES}/${ds}"; export DS 66 71 startdate="`date`" 67 72 starttime="`get_starttime`" 68 73 69 74 echo " --- " $scriptname $args | tee -a $T/log 70 75 echo " --- Results directory: " $ds | tee -a $T/log 71 76 77 + # Do all builds. Iterate through commits within a given scenario 78 + # because builds normally go faster from one commit to the next within a 79 + # given scenario. In contrast, switching scenarios on each rebuild will 80 + # often force a full rebuild due to Kconfig differences, for example, 81 + # turning preemption on and off. Defer actual runs in order to run 82 + # lots of them concurrently on large systems. 83 + touch $T/torunlist 84 + n2build="$((config_list_len*sha1_list_len))" 85 + nbuilt=0 72 86 for config in ${config_list} 73 87 do 74 88 sha_n=0 75 89 for sha in ${sha1_list} 76 90 do 77 91 sha1=${sha_n}.${sha} # Enable "sort -k1nr" to list commits in order. 78 - echo Starting ${config}/${sha1} at `date` | tee -a $T/log 79 - git checkout "${sha}" 80 - time tools/testing/selftests/rcutorture/bin/kvm.sh --configs "$config" --datestamp "$ds/${config}/${sha1}" --duration 1 "$@" 92 + echo 93 + echo Starting ${config}/${sha1} "($((nbuilt+1)) of ${n2build})" at `date` | tee -a $T/log 94 + git checkout --detach "${sha}" 95 + tools/testing/selftests/rcutorture/bin/kvm.sh --configs "$config" --datestamp "$ds/${config}/${sha1}" --duration 1 --build-only --trust-make "$@" 81 96 curret=$? 82 97 if test "${curret}" -ne 0 83 98 then 84 - nfail=$((nfail+1)) 85 - faillist="$faillist ${config}/${sha1}(${curret})" 99 + nbuildfail=$((nbuildfail+1)) 100 + buildfaillist="$buildfaillist ${config}/${sha1}(${curret})" 86 101 else 87 - nsuccess=$((nsuccess+1)) 88 - successlist="$successlist ${config}/${sha1}" 89 - # Successful run, so remove large files. 
90 - rm -f ${RCUTORTURE}/$ds/${config}/${sha1}/{vmlinux,bzImage,System.map,Module.symvers} 102 + batchncpus="`grep -v "^# cpus=" "${DS}/${config}/${sha1}/batches" | awk '{ sum += $3 } END { print sum }'`" 103 + echo run_one_qemu ${sha_n} ${config}/${sha1} ${batchncpus} >> $T/torunlist 104 + if test "${ncpus}" -eq 0 105 + then 106 + ncpus="`grep "^# cpus=" "${DS}/${config}/${sha1}/batches" | sed -e 's/^# cpus=//'`" 107 + case "${ncpus}" in 108 + ^[0-9]*$) 109 + ;; 110 + *) 111 + ncpus=0 112 + ;; 113 + esac 114 + fi 91 115 fi 92 116 if test "${ret}" -eq 0 93 117 then 94 118 ret=${curret} 95 119 fi 96 120 sha_n=$((sha_n+1)) 121 + nbuilt=$((nbuilt+1)) 97 122 done 98 123 done 124 + 125 + # If the user did not specify the number of CPUs, use them all. 126 + if test "${ncpus}" -eq 0 127 + then 128 + ncpus="`identify_qemu_vcpus`" 129 + fi 130 + 131 + cpusused=0 132 + touch $T/successlistfile 133 + touch $T/faillistfile 134 + n2run="`wc -l $T/torunlist | awk '{ print $1; }'`" 135 + nrun=0 136 + 137 + # do_run_one_qemu ds resultsdir qemu_curout 138 + # 139 + # Start the specified qemu run and record its success or failure. 140 + do_run_one_qemu () { 141 + local ret 142 + local ds="$1" 143 + local resultsdir="$2" 144 + local qemu_curout="$3" 145 + 146 + tools/testing/selftests/rcutorture/bin/kvm-again.sh "${DS}/${resultsdir}" --link inplace-force > ${qemu_curout} 2>&1 147 + ret=$? 148 + if test "${ret}" -eq 0 149 + then 150 + echo ${resultsdir} >> $T/successlistfile 151 + # Successful run, so remove large files. 152 + rm -f ${DS}/${resultsdir}/{vmlinux,bzImage,System.map,Module.symvers} 153 + else 154 + echo "${resultsdir}(${ret})" >> $T/faillistfile 155 + fi 156 + } 157 + 158 + # cleanup_qemu_batch batchncpus 159 + # 160 + # Update success and failure lists, files, and counts at the end of 161 + # a batch. 162 + cleanup_qemu_batch () { 163 + local batchncpus="$1" 164 + 165 + echo Waiting, cpusused=${cpusused}, ncpus=${ncpus} `date` | tee -a $T/log 166 + wait 167 + cpusused="${batchncpus}" 168 + nsuccessbatch="`wc -l $T/successlistfile | awk '{ print $1 }'`" 169 + nsuccess=$((nsuccess+nsuccessbatch)) 170 + successlist="$successlist `cat $T/successlistfile`" 171 + rm $T/successlistfile 172 + touch $T/successlistfile 173 + nfailbatch="`wc -l $T/faillistfile | awk '{ print $1 }'`" 174 + nrunfail=$((nrunfail+nfailbatch)) 175 + runfaillist="$runfaillist `cat $T/faillistfile`" 176 + rm $T/faillistfile 177 + touch $T/faillistfile 178 + } 179 + 180 + # run_one_qemu sha_n config/sha1 batchncpus 181 + # 182 + # Launch into the background the sha_n-th qemu job whose results directory 183 + # is config/sha1 and which uses batchncpus CPUs. Once we reach a job that 184 + # would overflow the number of available CPUs, wait for the previous jobs 185 + # to complete and record their results. 186 + run_one_qemu () { 187 + local sha_n="$1" 188 + local config_sha1="$2" 189 + local batchncpus="$3" 190 + local qemu_curout 191 + 192 + cpusused=$((cpusused+batchncpus)) 193 + if test "${cpusused}" -gt $ncpus 194 + then 195 + cleanup_qemu_batch "${batchncpus}" 196 + fi 197 + echo Starting ${config_sha1} using ${batchncpus} CPUs "($((nrun+1)) of ${n2run})" `date` 198 + qemu_curout="${DS}/${config_sha1}/qemu-series" 199 + do_run_one_qemu "$ds" "${config_sha1}" ${qemu_curout} & 200 + nrun="$((nrun+1))" 201 + } 202 + 203 + # Re-ordering the runs will mess up the affinity chosen at build time 204 + # (among other things, over-using CPU 0), so suppress it. 
205 + TORTURE_NO_AFFINITY="no-affinity"; export TORTURE_NO_AFFINITY 206 + 207 + # Run the kernels (if any) that built correctly. 208 + echo | tee -a $T/log # Put a blank line between build and run messages. 209 + . $T/torunlist 210 + cleanup_qemu_batch "${batchncpus}" 211 + 212 + # Get back to initial checkout/SHA-1. 99 213 git checkout "${cursha1}" 100 214 101 - echo ${nsuccess} SUCCESSES: | tee -a $T/log 102 - echo ${successlist} | fmt | tee -a $T/log 103 - echo | tee -a $T/log 104 - echo ${nfail} FAILURES: | tee -a $T/log 105 - echo ${faillist} | fmt | tee -a $T/log 106 - if test -n "${faillist}" 215 + # Throw away leading and trailing space characters for fmt. 216 + successlist="`echo ${successlist} | sed -e 's/^ *//' -e 's/ *$//'`" 217 + buildfaillist="`echo ${buildfaillist} | sed -e 's/^ *//' -e 's/ *$//'`" 218 + runfaillist="`echo ${runfaillist} | sed -e 's/^ *//' -e 's/ *$//'`" 219 + 220 + # Print lists of successes, build failures, and run failures, if any. 221 + if test "${nsuccess}" -gt 0 107 222 then 108 223 echo | tee -a $T/log 109 - echo Failures across commits: | tee -a $T/log 110 - echo ${faillist} | tr ' ' '\012' | sed -e 's,^[^/]*/,,' -e 's/([0-9]*)//' | 224 + echo ${nsuccess} SUCCESSES: | tee -a $T/log 225 + echo ${successlist} | fmt | tee -a $T/log 226 + fi 227 + if test "${nbuildfail}" -gt 0 228 + then 229 + echo | tee -a $T/log 230 + echo ${nbuildfail} BUILD FAILURES: | tee -a $T/log 231 + echo ${buildfaillist} | fmt | tee -a $T/log 232 + fi 233 + if test "${nrunfail}" -gt 0 234 + then 235 + echo | tee -a $T/log 236 + echo ${nrunfail} RUN FAILURES: | tee -a $T/log 237 + echo ${runfaillist} | fmt | tee -a $T/log 238 + fi 239 + 240 + # If there were build or runtime failures, map them to commits. 241 + if test "${nbuildfail}" -gt 0 || test "${nrunfail}" -gt 0 242 + then 243 + echo | tee -a $T/log 244 + echo Build failures across commits: | tee -a $T/log 245 + echo ${buildfaillist} | tr ' ' '\012' | sed -e 's,^[^/]*/,,' -e 's/([0-9]*)//' | 111 246 sort | uniq -c | sort -k2n | tee -a $T/log 112 247 fi 248 + 249 + # Print run summary. 250 + echo | tee -a $T/log 113 251 echo Started at $startdate, ended at `date`, duration `get_starttime_duration $starttime`. | tee -a $T/log 114 - echo Summary: Successes: ${nsuccess} Failures: ${nfail} | tee -a $T/log 115 - cp $T/log tools/testing/selftests/rcutorture/res/${ds} 252 + echo Summary: Successes: ${nsuccess} " "Build Failures: ${nbuildfail} " "Runtime Failures: ${nrunfail}| tee -a $T/log 253 + cp $T/log ${DS} 116 254 117 255 exit "${ret}"
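The run_one_qemu()/cleanup_qemu_batch() pair above implements simple CPU-count batching: guest runs are launched in the background until the next one would push cpusused past ncpus, at which point the script waits for the whole in-flight batch. For example (hypothetical numbers), on a 64-CPU host with scenarios whose batches each need 16 CPUs, four runs proceed concurrently before the first wait; a scenario needing more CPUs than the host provides simply runs in a batch by itself.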
+40
tools/testing/selftests/rcutorture/bin/kvm.sh
··· 80 80 echo " --kasan" 81 81 echo " --kconfig Kconfig-options" 82 82 echo " --kcsan" 83 + echo " --kill-previous" 83 84 echo " --kmake-arg kernel-make-arguments" 84 85 echo " --mac nn:nn:nn:nn:nn:nn" 85 86 echo " --memory megabytes|nnnG" ··· 207 206 --kcsan) 208 207 TORTURE_KCONFIG_KCSAN_ARG="$debuginfo CONFIG_KCSAN=y CONFIG_KCSAN_STRICT=y CONFIG_KCSAN_REPORT_ONCE_IN_MS=100000 CONFIG_KCSAN_VERBOSE=y CONFIG_DEBUG_LOCK_ALLOC=y CONFIG_PROVE_LOCKING=y"; export TORTURE_KCONFIG_KCSAN_ARG 209 208 ;; 209 + --kill-previous) 210 + TORTURE_KILL_PREVIOUS=1 211 + ;; 210 212 --kmake-arg|--kmake-args) 211 213 checkarg --kmake-arg "(kernel make arguments)" $# "$2" '.*' '^error$' 212 214 TORTURE_KMAKE_ARG="`echo "$TORTURE_KMAKE_ARG $2" | sed -e 's/^ *//' -e 's/ *$//'`" ··· 278 274 esac 279 275 shift 280 276 done 277 + 278 + # Prevent concurrent kvm.sh runs on the same source tree. The flock 279 + # is automatically released when the script exits, even if killed. 280 + TORTURE_LOCK="$RCUTORTURE/.kvm.sh.lock" 281 + 282 + # Terminate any processes holding the lock file, if requested. 283 + if test -n "$TORTURE_KILL_PREVIOUS" 284 + then 285 + if test -e "$TORTURE_LOCK" 286 + then 287 + echo "Killing processes holding $TORTURE_LOCK..." 288 + if fuser -k "$TORTURE_LOCK" >/dev/null 2>&1 289 + then 290 + sleep 2 291 + echo "Previous kvm.sh processes killed." 292 + else 293 + echo "No processes were holding the lock." 294 + fi 295 + else 296 + echo "No lock file exists, nothing to kill." 297 + fi 298 + fi 299 + 300 + if test -z "$dryrun" 301 + then 302 + # Create a file descriptor and flock it, so that when kvm.sh (and its 303 + # children) exit, the flock is released by the kernel automatically. 304 + exec 9>"$TORTURE_LOCK" 305 + if ! flock -n 9 306 + then 307 + echo "ERROR: Another kvm.sh instance is already running on this tree." 308 + echo " Lock file: $TORTURE_LOCK" 309 + echo " To run kvm.sh, kill all existing kvm.sh runs first (--kill-previous)." 310 + exit 1 311 + fi 312 + fi 281 313 282 314 if test -n "$dryrun" || test -z "$TORTURE_INITRD" || tools/testing/selftests/rcutorture/bin/mkinitrd.sh 283 315 then
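In effect, a second kvm.sh invocation on the same source tree now fails fast: it cannot take the non-blocking flock on .kvm.sh.lock, prints the ERROR text above, and exits rather than silently racing the first run for the build tree. Passing --kill-previous first terminates whatever holds the lock file (via fuser -k) and then proceeds, and --dryrun invocations skip the locking entirely, as shown by the test -z "$dryrun" guard.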
+1 -1
tools/testing/selftests/rcutorture/bin/mktestid.sh
··· 18 18 echo Build directory: `pwd` > ${resdir}/testid.txt 19 19 if test -d .git 20 20 then 21 - echo Current commit: `git rev-parse HEAD` >> ${resdir}/testid.txt 21 + echo Current commit: `git show --oneline --no-patch HEAD` >> ${resdir}/testid.txt 22 22 echo >> ${resdir}/testid.txt 23 23 echo ' ---' Output of "'"git status"'": >> ${resdir}/testid.txt 24 24 git status >> ${resdir}/testid.txt
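With this change testid.txt records the abbreviated SHA-1 plus the commit's subject line rather than the bare 40-character hash, since git show --oneline --no-patch prints exactly that one-line summary without the diff body. For example (hypothetical commit): "Current commit: 1a2b3c4d5e6f torture: Example subject line".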
-1
tools/testing/selftests/rcutorture/configs/rcu/TRACE01
··· 10 10 #CHECK#CONFIG_PROVE_RCU=n 11 11 CONFIG_FORCE_TASKS_TRACE_RCU=y 12 12 #CHECK#CONFIG_TASKS_TRACE_RCU=y 13 - CONFIG_TASKS_TRACE_RCU_READ_MB=y 14 13 CONFIG_RCU_EXPERT=y
-1
tools/testing/selftests/rcutorture/configs/rcu/TRACE02
··· 9 9 #CHECK#CONFIG_PROVE_RCU=y 10 10 CONFIG_FORCE_TASKS_TRACE_RCU=y 11 11 #CHECK#CONFIG_TASKS_TRACE_RCU=y 12 - CONFIG_TASKS_TRACE_RCU_READ_MB=n 13 12 CONFIG_RCU_EXPERT=y 14 13 CONFIG_DEBUG_OBJECTS=y