Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'locking-core-2026-02-08' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull locking updates from Ingo Molnar:
"Lock debugging:

- Implement compiler-driven static analysis locking context checking,
using the upcoming Clang 22 compiler's context analysis features
(Marco Elver)

We removed Sparse context analysis support, because prior to
removal even a defconfig kernel produced 1,700+ context tracking
Sparse warnings, the overwhelming majority of which were false
positives. On an allmodconfig kernel the number of false positive
context tracking Sparse warnings grows to over 5,200... On the plus
side of the balance, actual locking bugs found by Sparse context
analysis are also rather ... sparse: I found only 3 such commits in
the last 3 years. So the rate of false positives and the
maintenance overhead are rather high, and there appears to be no
active policy in place to achieve a zero-warnings baseline or to
move the annotations & fixes to developers who introduce new code.

Clang context analysis is more complete and more aggressive in
trying to find bugs, at least in principle. Plus it has a different
model for enabling it: it's enabled subsystem by subsystem, which
results in zero warnings on all relevant kernel builds (as far as
our testing managed to cover it). This allowed us to enable it by
default, similar to other compiler warnings, with the expectation
that there are no warnings going forward. This enforces a
zero-warnings baseline on clang-22+ builds (which are still limited
in distribution, admittedly).

Hopefully the Clang approach can lead to a more maintainable
zero-warnings status quo and policy, with more and more subsystems
and drivers enabling the feature. Context tracking can be enabled
for all kernel code via WARN_CONTEXT_ANALYSIS_ALL=y (default
disabled), but this will generate a lot of false positives.

( Having said that, Sparse support could still be added back,
if anyone is interested - the removal patch is still
relatively straightforward to revert at this stage. )

Rust integration updates: (Alice Ryhl, Fujita Tomonori, Boqun Feng)

- Add support for Atomic<i8/i16/bool> and replace most Rust native
AtomicBool usages with Atomic<bool>

- Clean up LockClassKey and improve its documentation

- Add missing Send and Sync trait implementation for SetOnce

- Make ARef Unpin as it is supposed to be

- Add __rust_helper to a few Rust helpers as a preparation for
helper LTO

- Inline various lock related functions to avoid additional function
calls

WW mutexes:

- Extend ww_mutex tests and other test-ww_mutex updates (John
Stultz)

Misc fixes and cleanups:

- rcu: Mark lockdep_assert_rcu_helper() __always_inline (Arnd
Bergmann)

- locking/local_lock: Include more missing headers (Peter Zijlstra)

- seqlock: fix scoped_seqlock_read kernel-doc (Randy Dunlap)

- rust: sync: Replace `kernel::c_str!` with C-Strings (Tamir
Duberstein)"

* tag 'locking-core-2026-02-08' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (90 commits)
locking/rwlock: Fix write_trylock_irqsave() with CONFIG_INLINE_WRITE_TRYLOCK
rcu: Mark lockdep_assert_rcu_helper() __always_inline
compiler-context-analysis: Remove __assume_ctx_lock from initializers
tomoyo: Use scoped init guard
crypto: Use scoped init guard
kcov: Use scoped init guard
compiler-context-analysis: Introduce scoped init guards
cleanup: Make __DEFINE_LOCK_GUARD handle commas in initializers
seqlock: fix scoped_seqlock_read kernel-doc
tools: Update context analysis macros in compiler_types.h
rust: sync: Replace `kernel::c_str!` with C-Strings
rust: sync: Inline various lock related methods
rust: helpers: Move #define __rust_helper out of atomic.c
rust: wait: Add __rust_helper to helpers
rust: time: Add __rust_helper to helpers
rust: task: Add __rust_helper to helpers
rust: sync: Add __rust_helper to helpers
rust: refcount: Add __rust_helper to helpers
rust: rcu: Add __rust_helper to helpers
rust: processor: Add __rust_helper to helpers
...
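
As a concrete illustration of the annotation style described in the
pull message above: a minimal sketch based on the examples in the new
Documentation/dev-tools/context-analysis.rst added below, with
illustrative struct and function names, and checking enabled per
object file via CONTEXT_ANALYSIS_myfile.o := y in the Makefile:

    struct my_data {
        spinlock_t lock;
        int counter __guarded_by(&lock);  /* writes require &lock held */
    };

    /* Callers must hold d->lock; the analysis checks every caller. */
    static void my_data_inc(struct my_data *d) __must_hold(&d->lock)
    {
        d->counter++;
    }

    static void my_data_poke(struct my_data *d)
    {
        guard(spinlock)(&d->lock);  /* scoped acquire, visible to the analysis */
        my_data_inc(d);
    }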

+3233 -872
+169
Documentation/dev-tools/context-analysis.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + .. Copyright (C) 2025, Google LLC. 3 + 4 + .. _context-analysis: 5 + 6 + Compiler-Based Context Analysis 7 + =============================== 8 + 9 + Context Analysis is a language extension, which enables statically checking 10 + that required contexts are active (or inactive) by acquiring and releasing 11 + user-definable "context locks". An obvious application is lock-safety checking 12 + for the kernel's various synchronization primitives (each of which represents a 13 + "context lock"), and checking that locking rules are not violated. 14 + 15 + The Clang compiler currently supports the full set of context analysis 16 + features. To enable for Clang, configure the kernel with:: 17 + 18 + CONFIG_WARN_CONTEXT_ANALYSIS=y 19 + 20 + The feature requires Clang 22 or later. 21 + 22 + The analysis is *opt-in by default*, and requires declaring which modules and 23 + subsystems should be analyzed in the respective `Makefile`:: 24 + 25 + CONTEXT_ANALYSIS_mymodule.o := y 26 + 27 + Or for all translation units in the directory:: 28 + 29 + CONTEXT_ANALYSIS := y 30 + 31 + It is possible to enable the analysis tree-wide, however, which will result in 32 + numerous false positive warnings currently and is *not* generally recommended:: 33 + 34 + CONFIG_WARN_CONTEXT_ANALYSIS_ALL=y 35 + 36 + Programming Model 37 + ----------------- 38 + 39 + The below describes the programming model around using context lock types. 40 + 41 + .. note:: 42 + Enabling context analysis can be seen as enabling a dialect of Linux C with 43 + a Context System. Some valid patterns involving complex control-flow are 44 + constrained (such as conditional acquisition and later conditional release 45 + in the same function). 46 + 47 + Context analysis is a way to specify permissibility of operations to depend on 48 + context locks being held (or not held). Typically we are interested in 49 + protecting data and code in a critical section by requiring a specific context 50 + to be active, for example by holding a specific lock. The analysis ensures that 51 + callers cannot perform an operation without the required context being active. 52 + 53 + Context locks are associated with named structs, along with functions that 54 + operate on struct instances to acquire and release the associated context lock. 55 + 56 + Context locks can be held either exclusively or shared. This mechanism allows 57 + assigning more precise privileges when a context is active, typically to 58 + distinguish where a thread may only read (shared) or also write (exclusive) to 59 + data guarded within a context. 60 + 61 + The set of contexts that are actually active in a given thread at a given point 62 + in program execution is a run-time concept. The static analysis works by 63 + calculating an approximation of that set, called the context environment. The 64 + context environment is calculated for every program point, and describes the 65 + set of contexts that are statically known to be active, or inactive, at that 66 + particular point. This environment is a conservative approximation of the full 67 + set of contexts that will actually be active in a thread at run-time. 68 + 69 + More details are also documented `here 70 + <https://clang.llvm.org/docs/ThreadSafetyAnalysis.html>`_. 71 + 72 + .. note:: 73 + Clang's analysis explicitly does not infer context locks acquired or 74 + released by inline functions. 
It requires explicit annotations to (a) assert 75 + that it's not a bug if a context lock is released or acquired, and (b) to 76 + retain consistency between inline and non-inline function declarations. 77 + 78 + Supported Kernel Primitives 79 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~ 80 + 81 + Currently the following synchronization primitives are supported: 82 + `raw_spinlock_t`, `spinlock_t`, `rwlock_t`, `mutex`, `seqlock_t`, 83 + `bit_spinlock`, RCU, SRCU (`srcu_struct`), `rw_semaphore`, `local_lock_t`, 84 + `ww_mutex`. 85 + 86 + To initialize variables guarded by a context lock with an initialization 87 + function (``type_init(&lock)``), prefer using ``guard(type_init)(&lock)`` or 88 + ``scoped_guard(type_init, &lock) { ... }`` to initialize such guarded members 89 + or globals in the enclosing scope. This initializes the context lock and treats 90 + the context as active within the initialization scope (initialization implies 91 + exclusive access to the underlying object). 92 + 93 + For example:: 94 + 95 + struct my_data { 96 + spinlock_t lock; 97 + int counter __guarded_by(&lock); 98 + }; 99 + 100 + void init_my_data(struct my_data *d) 101 + { 102 + ... 103 + guard(spinlock_init)(&d->lock); 104 + d->counter = 0; 105 + ... 106 + } 107 + 108 + Alternatively, initializing guarded variables can be done with context analysis 109 + disabled, preferably in the smallest possible scope (due to lack of any other 110 + checking): either with a ``context_unsafe(var = init)`` expression, or by 111 + marking small initialization functions with the ``__context_unsafe(init)`` 112 + attribute. 113 + 114 + Lockdep assertions, such as `lockdep_assert_held()`, inform the compiler's 115 + context analysis that the associated synchronization primitive is held after 116 + the assertion. This avoids false positives in complex control-flow scenarios 117 + and encourages the use of Lockdep where static analysis is limited. For 118 + example, this is useful when a function doesn't *always* require a lock, making 119 + `__must_hold()` inappropriate. 120 + 121 + Keywords 122 + ~~~~~~~~ 123 + 124 + .. kernel-doc:: include/linux/compiler-context-analysis.h 125 + :identifiers: context_lock_struct 126 + token_context_lock token_context_lock_instance 127 + __guarded_by __pt_guarded_by 128 + __must_hold 129 + __must_not_hold 130 + __acquires 131 + __cond_acquires 132 + __releases 133 + __must_hold_shared 134 + __acquires_shared 135 + __cond_acquires_shared 136 + __releases_shared 137 + __acquire 138 + __release 139 + __acquire_shared 140 + __release_shared 141 + __acquire_ret 142 + __acquire_shared_ret 143 + context_unsafe 144 + __context_unsafe 145 + disable_context_analysis enable_context_analysis 146 + 147 + .. note:: 148 + The function attribute `__no_context_analysis` is reserved for internal 149 + implementation of context lock types, and should be avoided in normal code. 150 + 151 + Background 152 + ---------- 153 + 154 + Clang originally called the feature `Thread Safety Analysis 155 + <https://clang.llvm.org/docs/ThreadSafetyAnalysis.html>`_, with some keywords 156 + and documentation still using the thread-safety-analysis-only terminology. This 157 + was later changed and the feature became more flexible, gaining the ability to 158 + define custom "capabilities". Its foundations can be found in `Capability 159 + Systems <https://www.cs.cornell.edu/talc/papers/capabilities.pdf>`_, used to 160 + specify the permissibility of operations to depend on some "capability" being 161 + held (or not held). 
162 + 163 + Because the feature is not just able to express capabilities related to 164 + synchronization primitives, and "capability" is already overloaded in the 165 + kernel, the naming chosen for the kernel departs from Clang's initial "Thread 166 + Safety" and "capability" nomenclature; we refer to the feature as "Context 167 + Analysis" to avoid confusion. The internal implementation still makes 168 + references to Clang's terminology in a few places, such as `-Wthread-safety` 169 + being the warning option that also still appears in diagnostic messages.
+1
Documentation/dev-tools/index.rst
··· 21 21 checkpatch 22 22 clang-format 23 23 coccinelle 24 + context-analysis 24 25 sparse 25 26 kcov 26 27 gcov
-19
Documentation/dev-tools/sparse.rst
··· 53 53 vs cpu-endian vs whatever), and there the constant "0" really _is_ 54 54 special. 55 55 56 - Using sparse for lock checking 57 - ------------------------------ 58 - 59 - The following macros are undefined for gcc and defined during a sparse 60 - run to use the "context" tracking feature of sparse, applied to 61 - locking. These annotations tell sparse when a lock is held, with 62 - regard to the annotated function's entry and exit. 63 - 64 - __must_hold - The specified lock is held on function entry and exit. 65 - 66 - __acquires - The specified lock is held on function exit, but not entry. 67 - 68 - __releases - The specified lock is held on function entry, but not exit. 69 - 70 - If the function enters and exits without the lock held, acquiring and 71 - releasing the lock inside the function in a balanced way, no 72 - annotation is needed. The three annotations above are for cases where 73 - sparse would otherwise report a context imbalance. 74 - 75 56 Getting sparse 76 57 -------------- 77 58
+3 -3
Documentation/mm/process_addrs.rst
··· 583 583 :c:func:`!pte_offset_map` can be used depending on stability requirements. 584 584 These map the page table into kernel memory if required, take the RCU lock, and 585 585 depending on variant, may also look up or acquire the PTE lock. 586 - See the comment on :c:func:`!__pte_offset_map_lock`. 586 + See the comment on :c:func:`!pte_offset_map_lock`. 587 587 588 588 Atomicity 589 589 ^^^^^^^^^ ··· 667 667 .. note:: There are some variants on this, such as 668 668 :c:func:`!pte_offset_map_rw_nolock` when we know we hold the PTE stable but 669 669 for brevity we do not explore this. See the comment for 670 - :c:func:`!__pte_offset_map_lock` for more details. 670 + :c:func:`!pte_offset_map_lock` for more details. 671 671 672 672 When modifying data in ranges we typically only wish to allocate higher page 673 673 tables as necessary, using these locks to avoid races or overwriting anything, ··· 686 686 as we have separate PMD and PTE locks and a THP collapse for instance might have 687 687 eliminated the PMD entry as well as the PTE from under us. 688 688 689 - This is why :c:func:`!__pte_offset_map_lock` locklessly retrieves the PMD entry 689 + This is why :c:func:`!pte_offset_map_lock` locklessly retrieves the PMD entry 690 690 for the PTE, carefully checking it is as expected, before acquiring the 691 691 PTE-specific lock, and then *again* checking that the PMD entry is as expected. 692 692
+11
MAINTAINERS
··· 6153 6153 S: Supported 6154 6154 F: drivers/infiniband/hw/usnic/ 6155 6155 6156 + CLANG CONTEXT ANALYSIS 6157 + M: Marco Elver <elver@google.com> 6158 + R: Bart Van Assche <bvanassche@acm.org> 6159 + L: llvm@lists.linux.dev 6160 + S: Maintained 6161 + F: Documentation/dev-tools/context-analysis.rst 6162 + F: include/linux/compiler-context-analysis.h 6163 + F: lib/test_context-analysis.c 6164 + F: scripts/Makefile.context-analysis 6165 + F: scripts/context-analysis-suppression.txt 6166 + 6156 6167 CLANG CONTROL FLOW INTEGRITY SUPPORT 6157 6168 M: Sami Tolvanen <samitolvanen@google.com> 6158 6169 M: Kees Cook <kees@kernel.org>
+1
Makefile
··· 1125 1125 include-$(CONFIG_KSTACK_ERASE) += scripts/Makefile.kstack_erase 1126 1126 include-$(CONFIG_AUTOFDO_CLANG) += scripts/Makefile.autofdo 1127 1127 include-$(CONFIG_PROPELLER_CLANG) += scripts/Makefile.propeller 1128 + include-$(CONFIG_WARN_CONTEXT_ANALYSIS) += scripts/Makefile.context-analysis 1128 1129 include-$(CONFIG_GCC_PLUGINS) += scripts/Makefile.gcc-plugins 1129 1130 1130 1131 include $(addprefix $(srctree)/, $(include-y))
+3 -2
arch/um/include/shared/skas/mm_id.h
··· 21 21 int syscall_fd_map[STUB_MAX_FDS]; 22 22 }; 23 23 24 - void enter_turnstile(struct mm_id *mm_id) __acquires(turnstile); 25 - void exit_turnstile(struct mm_id *mm_id) __releases(turnstile); 24 + struct mutex *__get_turnstile(struct mm_id *mm_id); 25 + void enter_turnstile(struct mm_id *mm_id) __acquires(__get_turnstile(mm_id)); 26 + void exit_turnstile(struct mm_id *mm_id) __releases(__get_turnstile(mm_id)); 26 27 27 28 void notify_mm_kill(int pid); 28 29
+8 -5
arch/um/kernel/skas/mmu.c
··· 23 23 static spinlock_t mm_list_lock; 24 24 static struct list_head mm_list; 25 25 26 - void enter_turnstile(struct mm_id *mm_id) __acquires(turnstile) 26 + struct mutex *__get_turnstile(struct mm_id *mm_id) 27 27 { 28 28 struct mm_context *ctx = container_of(mm_id, struct mm_context, id); 29 29 30 - mutex_lock(&ctx->turnstile); 30 + return &ctx->turnstile; 31 31 } 32 32 33 - void exit_turnstile(struct mm_id *mm_id) __releases(turnstile) 33 + void enter_turnstile(struct mm_id *mm_id) 34 34 { 35 - struct mm_context *ctx = container_of(mm_id, struct mm_context, id); 35 + mutex_lock(__get_turnstile(mm_id)); 36 + } 36 37 37 - mutex_unlock(&ctx->turnstile); 38 + void exit_turnstile(struct mm_id *mm_id) 39 + { 40 + mutex_unlock(__get_turnstile(mm_id)); 38 41 } 39 42 40 43 int init_new_context(struct task_struct *task, struct mm_struct *mm)
+1
arch/x86/um/Kconfig
··· 9 9 config UML_X86 10 10 def_bool y 11 11 select ARCH_USE_QUEUED_RWLOCKS 12 + select ARCH_SUPPORTS_ATOMIC_RMW 12 13 select ARCH_USE_QUEUED_SPINLOCKS 13 14 select DCACHE_WORD_ACCESS 14 15 select HAVE_EFFICIENT_UNALIGNED_ACCESS
+2
crypto/Makefile
··· 3 3 # Cryptographic API 4 4 # 5 5 6 + CONTEXT_ANALYSIS := y 7 + 6 8 obj-$(CONFIG_CRYPTO) += crypto.o 7 9 crypto-y := api.o cipher.o 8 10
+3 -3
crypto/acompress.c
··· 443 443 } 444 444 EXPORT_SYMBOL_GPL(crypto_acomp_alloc_streams); 445 445 446 - struct crypto_acomp_stream *crypto_acomp_lock_stream_bh( 447 - struct crypto_acomp_streams *s) __acquires(stream) 446 + struct crypto_acomp_stream *_crypto_acomp_lock_stream_bh( 447 + struct crypto_acomp_streams *s) 448 448 { 449 449 struct crypto_acomp_stream __percpu *streams = s->streams; 450 450 int cpu = raw_smp_processor_id(); ··· 463 463 spin_lock(&ps->lock); 464 464 return ps; 465 465 } 466 - EXPORT_SYMBOL_GPL(crypto_acomp_lock_stream_bh); 466 + EXPORT_SYMBOL_GPL(_crypto_acomp_lock_stream_bh); 467 467 468 468 void acomp_walk_done_src(struct acomp_walk *walk, int used) 469 469 {
+2
crypto/algapi.c
··· 244 244 245 245 static void crypto_alg_finish_registration(struct crypto_alg *alg, 246 246 struct list_head *algs_to_put) 247 + __must_hold(&crypto_alg_sem) 247 248 { 248 249 struct crypto_alg *q; 249 250 ··· 300 299 301 300 static struct crypto_larval * 302 301 __crypto_register_alg(struct crypto_alg *alg, struct list_head *algs_to_put) 302 + __must_hold(&crypto_alg_sem) 303 303 { 304 304 struct crypto_alg *q; 305 305 struct crypto_larval *larval;
+1
crypto/api.c
··· 57 57 58 58 static struct crypto_alg *__crypto_alg_lookup(const char *name, u32 type, 59 59 u32 mask) 60 + __must_hold_shared(&crypto_alg_sem) 60 61 { 61 62 struct crypto_alg *q, *alg = NULL; 62 63 int best = -2;
+1 -1
crypto/crypto_engine.c
··· 453 453 snprintf(engine->name, sizeof(engine->name), 454 454 "%s-engine", dev_name(dev)); 455 455 456 + guard(spinlock_init)(&engine->queue_lock); 456 457 crypto_init_queue(&engine->queue, qlen); 457 - spin_lock_init(&engine->queue_lock); 458 458 459 459 engine->kworker = kthread_run_worker(0, "%s", engine->name); 460 460 if (IS_ERR(engine->kworker)) {
+6 -1
crypto/drbg.c
··· 231 231 */ 232 232 static bool drbg_fips_continuous_test(struct drbg_state *drbg, 233 233 const unsigned char *entropy) 234 + __must_hold(&drbg->drbg_mutex) 234 235 { 235 236 unsigned short entropylen = drbg_sec_strength(drbg->core->flags); 236 237 ··· 846 845 static inline void drbg_get_random_bytes(struct drbg_state *drbg, 847 846 unsigned char *entropy, 848 847 unsigned int entropylen) 848 + __must_hold(&drbg->drbg_mutex) 849 849 { 850 850 do 851 851 get_random_bytes(entropy, entropylen); ··· 854 852 } 855 853 856 854 static int drbg_seed_from_random(struct drbg_state *drbg) 855 + __must_hold(&drbg->drbg_mutex) 857 856 { 858 857 struct drbg_string data; 859 858 LIST_HEAD(seedlist); ··· 909 906 */ 910 907 static int drbg_seed(struct drbg_state *drbg, struct drbg_string *pers, 911 908 bool reseed) 909 + __must_hold(&drbg->drbg_mutex) 912 910 { 913 911 int ret; 914 912 unsigned char entropy[((32 + 16) * 2)]; ··· 1142 1138 static int drbg_generate(struct drbg_state *drbg, 1143 1139 unsigned char *buf, unsigned int buflen, 1144 1140 struct drbg_string *addtl) 1141 + __must_hold(&drbg->drbg_mutex) 1145 1142 { 1146 1143 int len = 0; 1147 1144 LIST_HEAD(addtllist); ··· 1765 1760 if (!drbg) 1766 1761 return -ENOMEM; 1767 1762 1768 - mutex_init(&drbg->drbg_mutex); 1763 + guard(mutex_init)(&drbg->drbg_mutex); 1769 1764 drbg->core = &drbg_cores[coreref]; 1770 1765 drbg->reseed_threshold = drbg_max_requests(drbg); 1771 1766
+1 -1
crypto/internal.h
··· 61 61 /* Maximum number of (rtattr) parameters for each template. */ 62 62 #define CRYPTO_MAX_ATTRS 32 63 63 64 - extern struct list_head crypto_alg_list; 65 64 extern struct rw_semaphore crypto_alg_sem; 65 + extern struct list_head crypto_alg_list __guarded_by(&crypto_alg_sem); 66 66 extern struct blocking_notifier_head crypto_chain; 67 67 68 68 int alg_test(const char *driver, const char *alg, u32 type, u32 mask);
+3
crypto/proc.c
··· 19 19 #include "internal.h" 20 20 21 21 static void *c_start(struct seq_file *m, loff_t *pos) 22 + __acquires_shared(&crypto_alg_sem) 22 23 { 23 24 down_read(&crypto_alg_sem); 24 25 return seq_list_start(&crypto_alg_list, *pos); 25 26 } 26 27 27 28 static void *c_next(struct seq_file *m, void *p, loff_t *pos) 29 + __must_hold_shared(&crypto_alg_sem) 28 30 { 29 31 return seq_list_next(p, &crypto_alg_list, pos); 30 32 } 31 33 32 34 static void c_stop(struct seq_file *m, void *p) 35 + __releases_shared(&crypto_alg_sem) 33 36 { 34 37 up_read(&crypto_alg_sem); 35 38 }
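
The c_start()/c_stop() hooks above use the shared (reader-side)
variants: down_read() enters the context shared, permitting reads but
not writes of guarded data. A minimal sketch of the same pairing, with
illustrative names:

    static void reader_enter(struct rw_semaphore *sem) __acquires_shared(sem)
    {
        down_read(sem);
    }

    static void reader_exit(struct rw_semaphore *sem) __releases_shared(sem)
    {
        up_read(sem);
    }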
+12 -12
crypto/scompress.c
··· 28 28 struct scomp_scratch { 29 29 spinlock_t lock; 30 30 union { 31 - void *src; 32 - unsigned long saddr; 31 + void *src __guarded_by(&lock); 32 + unsigned long saddr __guarded_by(&lock); 33 33 }; 34 34 }; 35 35 ··· 38 38 }; 39 39 40 40 static const struct crypto_type crypto_scomp_type; 41 - static int scomp_scratch_users; 42 41 static DEFINE_MUTEX(scomp_lock); 42 + static int scomp_scratch_users __guarded_by(&scomp_lock); 43 43 44 44 static cpumask_t scomp_scratch_want; 45 45 static void scomp_scratch_workfn(struct work_struct *work); ··· 65 65 } 66 66 67 67 static void crypto_scomp_free_scratches(void) 68 + __context_unsafe(/* frees @scratch */) 68 69 { 69 70 struct scomp_scratch *scratch; 70 71 int i; ··· 100 99 struct scomp_scratch *scratch; 101 100 102 101 scratch = per_cpu_ptr(&scomp_scratch, cpu); 103 - if (scratch->src) 102 + if (context_unsafe(scratch->src)) 104 103 continue; 105 104 if (scomp_alloc_scratch(scratch, cpu)) 106 105 break; ··· 110 109 } 111 110 112 111 static int crypto_scomp_alloc_scratches(void) 112 + __context_unsafe(/* allocates @scratch */) 113 113 { 114 114 unsigned int i = cpumask_first(cpu_possible_mask); 115 115 struct scomp_scratch *scratch; ··· 139 137 return ret; 140 138 } 141 139 142 - static struct scomp_scratch *scomp_lock_scratch(void) __acquires(scratch) 140 + #define scomp_lock_scratch(...) __acquire_ret(_scomp_lock_scratch(__VA_ARGS__), &__ret->lock) 141 + static struct scomp_scratch *_scomp_lock_scratch(void) __acquires_ret 143 142 { 144 143 int cpu = raw_smp_processor_id(); 145 144 struct scomp_scratch *scratch; ··· 160 157 } 161 158 162 159 static inline void scomp_unlock_scratch(struct scomp_scratch *scratch) 163 - __releases(scratch) 160 + __releases(&scratch->lock) 164 161 { 165 162 spin_unlock(&scratch->lock); 166 163 } ··· 172 169 bool src_isvirt = acomp_request_src_isvirt(req); 173 170 bool dst_isvirt = acomp_request_dst_isvirt(req); 174 171 struct crypto_scomp *scomp = *tfm_ctx; 175 - struct crypto_acomp_stream *stream; 176 - struct scomp_scratch *scratch; 177 172 unsigned int slen = req->slen; 178 173 unsigned int dlen = req->dlen; 179 174 struct page *spage, *dpage; ··· 231 230 } while (0); 232 231 } 233 232 234 - stream = crypto_acomp_lock_stream_bh(&crypto_scomp_alg(scomp)->streams); 233 + struct crypto_acomp_stream *stream = crypto_acomp_lock_stream_bh(&crypto_scomp_alg(scomp)->streams); 235 234 236 235 if (!src_isvirt && !src) { 237 - const u8 *src; 236 + struct scomp_scratch *scratch = scomp_lock_scratch(); 237 + const u8 *src = scratch->src; 238 238 239 - scratch = scomp_lock_scratch(); 240 - src = scratch->src; 241 239 memcpy_from_sglist(scratch->src, req->src, 0, slen); 242 240 243 241 if (dir)
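
The scomp_lock_scratch() change above shows the new __acquire_ret
pattern: a lookup helper returns a pointer whose embedded lock is
already held, and a thin wrapper macro names that lock for the
analysis. A hedged sketch of the same shape, with illustrative types:

    struct item {
        spinlock_t lock;
        int data __guarded_by(&lock);
    };

    #define item_lock(...) __acquire_ret(_item_lock(__VA_ARGS__), &__ret->lock)
    static struct item *_item_lock(struct item *i) __acquires_ret
    {
        spin_lock(&i->lock);
        return i;   /* returned with i->lock held */
    }

    static void item_unlock(struct item *i) __releases(&i->lock)
    {
        spin_unlock(&i->lock);
    }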
+9 -11
drivers/android/binder/rust_binder_main.rs
··· 18 18 prelude::*, 19 19 seq_file::SeqFile, 20 20 seq_print, 21 + sync::atomic::{ordering::Relaxed, Atomic}, 21 22 sync::poll::PollTable, 22 23 sync::Arc, 23 24 task::Pid, ··· 29 28 30 29 use crate::{context::Context, page_range::Shrinker, process::Process, thread::Thread}; 31 30 32 - use core::{ 33 - ptr::NonNull, 34 - sync::atomic::{AtomicBool, AtomicUsize, Ordering}, 35 - }; 31 + use core::ptr::NonNull; 36 32 37 33 mod allocation; 38 34 mod context; ··· 88 90 } 89 91 90 92 fn next_debug_id() -> usize { 91 - static NEXT_DEBUG_ID: AtomicUsize = AtomicUsize::new(0); 93 + static NEXT_DEBUG_ID: Atomic<usize> = Atomic::new(0); 92 94 93 - NEXT_DEBUG_ID.fetch_add(1, Ordering::Relaxed) 95 + NEXT_DEBUG_ID.fetch_add(1, Relaxed) 94 96 } 95 97 96 98 /// Provides a single place to write Binder return values via the ··· 213 215 214 216 struct DeliverCode { 215 217 code: u32, 216 - skip: AtomicBool, 218 + skip: Atomic<bool>, 217 219 } 218 220 219 221 kernel::list::impl_list_arc_safe! { ··· 224 226 fn new(code: u32) -> Self { 225 227 Self { 226 228 code, 227 - skip: AtomicBool::new(false), 229 + skip: Atomic::new(false), 228 230 } 229 231 } 230 232 ··· 233 235 /// This is used instead of removing it from the work list, since `LinkedList::remove` is 234 236 /// unsafe, whereas this method is not. 235 237 fn skip(&self) { 236 - self.skip.store(true, Ordering::Relaxed); 238 + self.skip.store(true, Relaxed); 237 239 } 238 240 } 239 241 ··· 243 245 _thread: &Thread, 244 246 writer: &mut BinderReturnWriter<'_>, 245 247 ) -> Result<bool> { 246 - if !self.skip.load(Ordering::Relaxed) { 248 + if !self.skip.load(Relaxed) { 247 249 writer.write_code(self.code)?; 248 250 } 249 251 Ok(true) ··· 257 259 258 260 fn debug_print(&self, m: &SeqFile, prefix: &str, _tprefix: &str) -> Result<()> { 259 261 seq_print!(m, "{}", prefix); 260 - if self.skip.load(Ordering::Relaxed) { 262 + if self.skip.load(Relaxed) { 261 263 seq_print!(m, "(skipped) "); 262 264 } 263 265 if self.code == defs::BR_TRANSACTION_COMPLETE {
+4 -4
drivers/android/binder/stats.rs
··· 5 5 //! Keep track of statistics for binder_logs. 6 6 7 7 use crate::defs::*; 8 - use core::sync::atomic::{AtomicU32, Ordering::Relaxed}; 8 + use kernel::sync::atomic::{ordering::Relaxed, Atomic}; 9 9 use kernel::{ioctl::_IOC_NR, seq_file::SeqFile, seq_print}; 10 10 11 11 const BC_COUNT: usize = _IOC_NR(BC_REPLY_SG) as usize + 1; ··· 14 14 pub(crate) static GLOBAL_STATS: BinderStats = BinderStats::new(); 15 15 16 16 pub(crate) struct BinderStats { 17 - bc: [AtomicU32; BC_COUNT], 18 - br: [AtomicU32; BR_COUNT], 17 + bc: [Atomic<u32>; BC_COUNT], 18 + br: [Atomic<u32>; BR_COUNT], 19 19 } 20 20 21 21 impl BinderStats { 22 22 pub(crate) const fn new() -> Self { 23 23 #[expect(clippy::declare_interior_mutable_const)] 24 - const ZERO: AtomicU32 = AtomicU32::new(0); 24 + const ZERO: Atomic<u32> = Atomic::new(0); 25 25 26 26 Self { 27 27 bc: [ZERO; BC_COUNT],
+11 -13
drivers/android/binder/thread.rs
··· 15 15 security, 16 16 seq_file::SeqFile, 17 17 seq_print, 18 + sync::atomic::{ordering::Relaxed, Atomic}, 18 19 sync::poll::{PollCondVar, PollTable}, 19 20 sync::{Arc, SpinLock}, 20 21 task::Task, ··· 35 34 BinderReturnWriter, DArc, DLArc, DTRWrap, DeliverCode, DeliverToRead, 36 35 }; 37 36 38 - use core::{ 39 - mem::size_of, 40 - sync::atomic::{AtomicU32, Ordering}, 41 - }; 37 + use core::mem::size_of; 42 38 43 39 fn is_aligned(value: usize, to: usize) -> bool { 44 40 value % to == 0 ··· 282 284 impl InnerThread { 283 285 fn new() -> Result<Self> { 284 286 fn next_err_id() -> u32 { 285 - static EE_ID: AtomicU32 = AtomicU32::new(0); 286 - EE_ID.fetch_add(1, Ordering::Relaxed) 287 + static EE_ID: Atomic<u32> = Atomic::new(0); 288 + EE_ID.fetch_add(1, Relaxed) 287 289 } 288 290 289 291 Ok(Self { ··· 1566 1568 1567 1569 #[pin_data] 1568 1570 struct ThreadError { 1569 - error_code: AtomicU32, 1571 + error_code: Atomic<u32>, 1570 1572 #[pin] 1571 1573 links_track: AtomicTracker, 1572 1574 } ··· 1574 1576 impl ThreadError { 1575 1577 fn try_new() -> Result<DArc<Self>> { 1576 1578 DTRWrap::arc_pin_init(pin_init!(Self { 1577 - error_code: AtomicU32::new(BR_OK), 1579 + error_code: Atomic::new(BR_OK), 1578 1580 links_track <- AtomicTracker::new(), 1579 1581 })) 1580 1582 .map(ListArc::into_arc) 1581 1583 } 1582 1584 1583 1585 fn set_error_code(&self, code: u32) { 1584 - self.error_code.store(code, Ordering::Relaxed); 1586 + self.error_code.store(code, Relaxed); 1585 1587 } 1586 1588 1587 1589 fn is_unused(&self) -> bool { 1588 - self.error_code.load(Ordering::Relaxed) == BR_OK 1590 + self.error_code.load(Relaxed) == BR_OK 1589 1591 } 1590 1592 } 1591 1593 ··· 1595 1597 _thread: &Thread, 1596 1598 writer: &mut BinderReturnWriter<'_>, 1597 1599 ) -> Result<bool> { 1598 - let code = self.error_code.load(Ordering::Relaxed); 1599 - self.error_code.store(BR_OK, Ordering::Relaxed); 1600 + let code = self.error_code.load(Relaxed); 1601 + self.error_code.store(BR_OK, Relaxed); 1600 1602 writer.write_code(code)?; 1601 1603 Ok(true) 1602 1604 } ··· 1612 1614 m, 1613 1615 "{}transaction error: {}\n", 1614 1616 prefix, 1615 - self.error_code.load(Ordering::Relaxed) 1617 + self.error_code.load(Relaxed) 1616 1618 ); 1617 1619 Ok(()) 1618 1620 }
+8 -8
drivers/android/binder/transaction.rs
··· 2 2 3 3 // Copyright (C) 2025 Google LLC. 4 4 5 - use core::sync::atomic::{AtomicBool, Ordering}; 6 5 use kernel::{ 7 6 prelude::*, 8 7 seq_file::SeqFile, 9 8 seq_print, 9 + sync::atomic::{ordering::Relaxed, Atomic}, 10 10 sync::{Arc, SpinLock}, 11 11 task::Kuid, 12 12 time::{Instant, Monotonic}, ··· 33 33 pub(crate) to: Arc<Process>, 34 34 #[pin] 35 35 allocation: SpinLock<Option<Allocation>>, 36 - is_outstanding: AtomicBool, 36 + is_outstanding: Atomic<bool>, 37 37 code: u32, 38 38 pub(crate) flags: u32, 39 39 data_size: usize, ··· 105 105 offsets_size: trd.offsets_size as _, 106 106 data_address, 107 107 allocation <- kernel::new_spinlock!(Some(alloc.success()), "Transaction::new"), 108 - is_outstanding: AtomicBool::new(false), 108 + is_outstanding: Atomic::new(false), 109 109 txn_security_ctx_off, 110 110 oneway_spam_detected, 111 111 start_time: Instant::now(), ··· 145 145 offsets_size: trd.offsets_size as _, 146 146 data_address: alloc.ptr, 147 147 allocation <- kernel::new_spinlock!(Some(alloc.success()), "Transaction::new"), 148 - is_outstanding: AtomicBool::new(false), 148 + is_outstanding: Atomic::new(false), 149 149 txn_security_ctx_off: None, 150 150 oneway_spam_detected, 151 151 start_time: Instant::now(), ··· 215 215 216 216 pub(crate) fn set_outstanding(&self, to_process: &mut ProcessInner) { 217 217 // No race because this method is only called once. 218 - if !self.is_outstanding.load(Ordering::Relaxed) { 219 - self.is_outstanding.store(true, Ordering::Relaxed); 218 + if !self.is_outstanding.load(Relaxed) { 219 + self.is_outstanding.store(true, Relaxed); 220 220 to_process.add_outstanding_txn(); 221 221 } 222 222 } ··· 227 227 // destructor, which is guaranteed to not race with any other operations on the 228 228 // transaction. It also cannot race with `set_outstanding`, since submission happens 229 229 // before delivery. 230 - if self.is_outstanding.load(Ordering::Relaxed) { 231 - self.is_outstanding.store(false, Ordering::Relaxed); 230 + if self.is_outstanding.load(Relaxed) { 231 + self.is_outstanding.store(false, Relaxed); 232 232 self.to.drop_outstanding_txn(); 233 233 } 234 234 }
+2 -2
drivers/net/wireless/intel/iwlwifi/iwl-trans.c
··· 548 548 return iwl_trans_pcie_read_config32(trans, ofs, val); 549 549 } 550 550 551 - bool _iwl_trans_grab_nic_access(struct iwl_trans *trans) 551 + bool iwl_trans_grab_nic_access(struct iwl_trans *trans) 552 552 { 553 553 return iwl_trans_pcie_grab_nic_access(trans); 554 554 } 555 - IWL_EXPORT_SYMBOL(_iwl_trans_grab_nic_access); 555 + IWL_EXPORT_SYMBOL(iwl_trans_grab_nic_access); 556 556 557 557 void __releases(nic_access) 558 558 iwl_trans_release_nic_access(struct iwl_trans *trans)
+1 -5
drivers/net/wireless/intel/iwlwifi/iwl-trans.h
··· 1063 1063 void iwl_trans_set_bits_mask(struct iwl_trans *trans, u32 reg, 1064 1064 u32 mask, u32 value); 1065 1065 1066 - bool _iwl_trans_grab_nic_access(struct iwl_trans *trans); 1067 - 1068 - #define iwl_trans_grab_nic_access(trans) \ 1069 - __cond_lock(nic_access, \ 1070 - likely(_iwl_trans_grab_nic_access(trans))) 1066 + bool iwl_trans_grab_nic_access(struct iwl_trans *trans); 1071 1067 1072 1068 void __releases(nic_access) 1073 1069 iwl_trans_release_nic_access(struct iwl_trans *trans);
+1 -4
drivers/net/wireless/intel/iwlwifi/pcie/gen1_2/internal.h
··· 553 553 void iwl_trans_pcie_free_pnvm_dram_regions(struct iwl_dram_regions *dram_regions, 554 554 struct device *dev); 555 555 556 - bool __iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans, bool silent); 557 - #define _iwl_trans_pcie_grab_nic_access(trans, silent) \ 558 - __cond_lock(nic_access_nobh, \ 559 - likely(__iwl_trans_pcie_grab_nic_access(trans, silent))) 556 + bool _iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans, bool silent); 560 557 561 558 void iwl_trans_pcie_check_product_reset_status(struct pci_dev *pdev); 562 559 void iwl_trans_pcie_check_product_reset_mode(struct pci_dev *pdev);
+2 -2
drivers/net/wireless/intel/iwlwifi/pcie/gen1_2/trans.c
··· 2327 2327 * This version doesn't disable BHs but rather assumes they're 2328 2328 * already disabled. 2329 2329 */ 2330 - bool __iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans, bool silent) 2330 + bool _iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans, bool silent) 2331 2331 { 2332 2332 int ret; 2333 2333 struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); ··· 2415 2415 bool ret; 2416 2416 2417 2417 local_bh_disable(); 2418 - ret = __iwl_trans_pcie_grab_nic_access(trans, false); 2418 + ret = _iwl_trans_pcie_grab_nic_access(trans, false); 2419 2419 if (ret) { 2420 2420 /* keep BHs disabled until iwl_trans_pcie_release_nic_access */ 2421 2421 return ret;
+1 -1
fs/dlm/lock.c
··· 343 343 /* TODO move this to lib/refcount.c */ 344 344 static __must_check bool 345 345 dlm_refcount_dec_and_write_lock_bh(refcount_t *r, rwlock_t *lock) 346 - __cond_acquires(lock) 346 + __cond_acquires(true, lock) 347 347 { 348 348 if (refcount_dec_not_one(r)) 349 349 return false;
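
Note the two-argument __cond_acquires() form above: the first argument
names the return value that signals a successful acquire (true, false,
nonzero, 0, nonnull or NULL, per the new header). A trylock-style
wrapper would be annotated like this (sketch):

    static bool my_trylock(spinlock_t *lock) __cond_acquires(true, lock)
    {
        return spin_trylock(lock);
    }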
+4 -3
include/crypto/internal/acompress.h
··· 191 191 void crypto_acomp_free_streams(struct crypto_acomp_streams *s); 192 192 int crypto_acomp_alloc_streams(struct crypto_acomp_streams *s); 193 193 194 - struct crypto_acomp_stream *crypto_acomp_lock_stream_bh( 195 - struct crypto_acomp_streams *s) __acquires(stream); 194 + #define crypto_acomp_lock_stream_bh(...) __acquire_ret(_crypto_acomp_lock_stream_bh(__VA_ARGS__), &__ret->lock); 195 + struct crypto_acomp_stream *_crypto_acomp_lock_stream_bh( 196 + struct crypto_acomp_streams *s) __acquires_ret; 196 197 197 198 static inline void crypto_acomp_unlock_stream_bh( 198 - struct crypto_acomp_stream *stream) __releases(stream) 199 + struct crypto_acomp_stream *stream) __releases(&stream->lock) 199 200 { 200 201 spin_unlock_bh(&stream->lock); 201 202 }
+1 -1
include/crypto/internal/engine.h
··· 45 45 46 46 struct list_head list; 47 47 spinlock_t queue_lock; 48 - struct crypto_queue queue; 48 + struct crypto_queue queue __guarded_by(&queue_lock); 49 49 struct device *dev; 50 50 51 51 struct kthread_worker *kworker;
+9 -9
include/linux/atomic/atomic-arch-fallback.h
··· 2121 2121 * 2122 2122 * Safe to use in noinstr code; prefer atomic_try_cmpxchg() elsewhere. 2123 2123 * 2124 - * Return: @true if the exchange occured, @false otherwise. 2124 + * Return: @true if the exchange occurred, @false otherwise. 2125 2125 */ 2126 2126 static __always_inline bool 2127 2127 raw_atomic_try_cmpxchg(atomic_t *v, int *old, int new) ··· 2155 2155 * 2156 2156 * Safe to use in noinstr code; prefer atomic_try_cmpxchg_acquire() elsewhere. 2157 2157 * 2158 - * Return: @true if the exchange occured, @false otherwise. 2158 + * Return: @true if the exchange occurred, @false otherwise. 2159 2159 */ 2160 2160 static __always_inline bool 2161 2161 raw_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new) ··· 2189 2189 * 2190 2190 * Safe to use in noinstr code; prefer atomic_try_cmpxchg_release() elsewhere. 2191 2191 * 2192 - * Return: @true if the exchange occured, @false otherwise. 2192 + * Return: @true if the exchange occurred, @false otherwise. 2193 2193 */ 2194 2194 static __always_inline bool 2195 2195 raw_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new) ··· 2222 2222 * 2223 2223 * Safe to use in noinstr code; prefer atomic_try_cmpxchg_relaxed() elsewhere. 2224 2224 * 2225 - * Return: @true if the exchange occured, @false otherwise. 2225 + * Return: @true if the exchange occurred, @false otherwise. 2226 2226 */ 2227 2227 static __always_inline bool 2228 2228 raw_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new) ··· 4247 4247 * 4248 4248 * Safe to use in noinstr code; prefer atomic64_try_cmpxchg() elsewhere. 4249 4249 * 4250 - * Return: @true if the exchange occured, @false otherwise. 4250 + * Return: @true if the exchange occurred, @false otherwise. 4251 4251 */ 4252 4252 static __always_inline bool 4253 4253 raw_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new) ··· 4281 4281 * 4282 4282 * Safe to use in noinstr code; prefer atomic64_try_cmpxchg_acquire() elsewhere. 4283 4283 * 4284 - * Return: @true if the exchange occured, @false otherwise. 4284 + * Return: @true if the exchange occurred, @false otherwise. 4285 4285 */ 4286 4286 static __always_inline bool 4287 4287 raw_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new) ··· 4315 4315 * 4316 4316 * Safe to use in noinstr code; prefer atomic64_try_cmpxchg_release() elsewhere. 4317 4317 * 4318 - * Return: @true if the exchange occured, @false otherwise. 4318 + * Return: @true if the exchange occurred, @false otherwise. 4319 4319 */ 4320 4320 static __always_inline bool 4321 4321 raw_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new) ··· 4348 4348 * 4349 4349 * Safe to use in noinstr code; prefer atomic64_try_cmpxchg_relaxed() elsewhere. 4350 4350 * 4351 - * Return: @true if the exchange occured, @false otherwise. 4351 + * Return: @true if the exchange occurred, @false otherwise. 4352 4352 */ 4353 4353 static __always_inline bool 4354 4354 raw_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new) ··· 4690 4690 } 4691 4691 4692 4692 #endif /* _LINUX_ATOMIC_FALLBACK_H */ 4693 - // b565db590afeeff0d7c9485ccbca5bb6e155749f 4693 + // 206314f82b8b73a5c3aa69cf7f35ac9e7b5d6b58
+13 -13
include/linux/atomic/atomic-instrumented.h
··· 1269 1269 * 1270 1270 * Unsafe to use in noinstr code; use raw_atomic_try_cmpxchg() there. 1271 1271 * 1272 - * Return: @true if the exchange occured, @false otherwise. 1272 + * Return: @true if the exchange occurred, @false otherwise. 1273 1273 */ 1274 1274 static __always_inline bool 1275 1275 atomic_try_cmpxchg(atomic_t *v, int *old, int new) ··· 1292 1292 * 1293 1293 * Unsafe to use in noinstr code; use raw_atomic_try_cmpxchg_acquire() there. 1294 1294 * 1295 - * Return: @true if the exchange occured, @false otherwise. 1295 + * Return: @true if the exchange occurred, @false otherwise. 1296 1296 */ 1297 1297 static __always_inline bool 1298 1298 atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new) ··· 1314 1314 * 1315 1315 * Unsafe to use in noinstr code; use raw_atomic_try_cmpxchg_release() there. 1316 1316 * 1317 - * Return: @true if the exchange occured, @false otherwise. 1317 + * Return: @true if the exchange occurred, @false otherwise. 1318 1318 */ 1319 1319 static __always_inline bool 1320 1320 atomic_try_cmpxchg_release(atomic_t *v, int *old, int new) ··· 1337 1337 * 1338 1338 * Unsafe to use in noinstr code; use raw_atomic_try_cmpxchg_relaxed() there. 1339 1339 * 1340 - * Return: @true if the exchange occured, @false otherwise. 1340 + * Return: @true if the exchange occurred, @false otherwise. 1341 1341 */ 1342 1342 static __always_inline bool 1343 1343 atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new) ··· 2847 2847 * 2848 2848 * Unsafe to use in noinstr code; use raw_atomic64_try_cmpxchg() there. 2849 2849 * 2850 - * Return: @true if the exchange occured, @false otherwise. 2850 + * Return: @true if the exchange occurred, @false otherwise. 2851 2851 */ 2852 2852 static __always_inline bool 2853 2853 atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new) ··· 2870 2870 * 2871 2871 * Unsafe to use in noinstr code; use raw_atomic64_try_cmpxchg_acquire() there. 2872 2872 * 2873 - * Return: @true if the exchange occured, @false otherwise. 2873 + * Return: @true if the exchange occurred, @false otherwise. 2874 2874 */ 2875 2875 static __always_inline bool 2876 2876 atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new) ··· 2892 2892 * 2893 2893 * Unsafe to use in noinstr code; use raw_atomic64_try_cmpxchg_release() there. 2894 2894 * 2895 - * Return: @true if the exchange occured, @false otherwise. 2895 + * Return: @true if the exchange occurred, @false otherwise. 2896 2896 */ 2897 2897 static __always_inline bool 2898 2898 atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new) ··· 2915 2915 * 2916 2916 * Unsafe to use in noinstr code; use raw_atomic64_try_cmpxchg_relaxed() there. 2917 2917 * 2918 - * Return: @true if the exchange occured, @false otherwise. 2918 + * Return: @true if the exchange occurred, @false otherwise. 2919 2919 */ 2920 2920 static __always_inline bool 2921 2921 atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new) ··· 4425 4425 * 4426 4426 * Unsafe to use in noinstr code; use raw_atomic_long_try_cmpxchg() there. 4427 4427 * 4428 - * Return: @true if the exchange occured, @false otherwise. 4428 + * Return: @true if the exchange occurred, @false otherwise. 4429 4429 */ 4430 4430 static __always_inline bool 4431 4431 atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new) ··· 4448 4448 * 4449 4449 * Unsafe to use in noinstr code; use raw_atomic_long_try_cmpxchg_acquire() there. 4450 4450 * 4451 - * Return: @true if the exchange occured, @false otherwise. 
4451 + * Return: @true if the exchange occurred, @false otherwise. 4452 4452 */ 4453 4453 static __always_inline bool 4454 4454 atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new) ··· 4470 4470 * 4471 4471 * Unsafe to use in noinstr code; use raw_atomic_long_try_cmpxchg_release() there. 4472 4472 * 4473 - * Return: @true if the exchange occured, @false otherwise. 4473 + * Return: @true if the exchange occurred, @false otherwise. 4474 4474 */ 4475 4475 static __always_inline bool 4476 4476 atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new) ··· 4493 4493 * 4494 4494 * Unsafe to use in noinstr code; use raw_atomic_long_try_cmpxchg_relaxed() there. 4495 4495 * 4496 - * Return: @true if the exchange occured, @false otherwise. 4496 + * Return: @true if the exchange occurred, @false otherwise. 4497 4497 */ 4498 4498 static __always_inline bool 4499 4499 atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new) ··· 5050 5050 5051 5051 5052 5052 #endif /* _LINUX_ATOMIC_INSTRUMENTED_H */ 5053 - // f618ac667f868941a84ce0ab2242f1786e049ed4 5053 + // 9dd948d3012b22c4e75933a5172983f912e46439
+5 -5
include/linux/atomic/atomic-long.h
··· 1449 1449 * 1450 1450 * Safe to use in noinstr code; prefer atomic_long_try_cmpxchg() elsewhere. 1451 1451 * 1452 - * Return: @true if the exchange occured, @false otherwise. 1452 + * Return: @true if the exchange occurred, @false otherwise. 1453 1453 */ 1454 1454 static __always_inline bool 1455 1455 raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new) ··· 1473 1473 * 1474 1474 * Safe to use in noinstr code; prefer atomic_long_try_cmpxchg_acquire() elsewhere. 1475 1475 * 1476 - * Return: @true if the exchange occured, @false otherwise. 1476 + * Return: @true if the exchange occurred, @false otherwise. 1477 1477 */ 1478 1478 static __always_inline bool 1479 1479 raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new) ··· 1497 1497 * 1498 1498 * Safe to use in noinstr code; prefer atomic_long_try_cmpxchg_release() elsewhere. 1499 1499 * 1500 - * Return: @true if the exchange occured, @false otherwise. 1500 + * Return: @true if the exchange occurred, @false otherwise. 1501 1501 */ 1502 1502 static __always_inline bool 1503 1503 raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new) ··· 1521 1521 * 1522 1522 * Safe to use in noinstr code; prefer atomic_long_try_cmpxchg_relaxed() elsewhere. 1523 1523 * 1524 - * Return: @true if the exchange occured, @false otherwise. 1524 + * Return: @true if the exchange occurred, @false otherwise. 1525 1525 */ 1526 1526 static __always_inline bool 1527 1527 raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new) ··· 1809 1809 } 1810 1810 1811 1811 #endif /* _LINUX_ATOMIC_LONG_H */ 1812 - // eadf183c3600b8b92b91839dd3be6bcc560c752d 1812 + // 4b882bf19018602c10816c52f8b4ae280adc887b
+20 -4
include/linux/bit_spinlock.h
··· 7 7 #include <linux/atomic.h> 8 8 #include <linux/bug.h> 9 9 10 + #include <asm/processor.h> /* for cpu_relax() */ 11 + 12 + /* 13 + * For static context analysis, we need a unique token for each possible bit 14 + * that can be used as a bit_spinlock. The easiest way to do that is to create a 15 + * fake context that we can cast to with the __bitlock(bitnum, addr) macro 16 + * below, which will give us unique instances for each (bit, addr) pair that the 17 + * static analysis can use. 18 + */ 19 + context_lock_struct(__context_bitlock) { }; 20 + #define __bitlock(bitnum, addr) (struct __context_bitlock *)(bitnum + (addr)) 21 + 10 22 /* 11 23 * bit-based spin_lock() 12 24 * ··· 26 14 * are significantly faster. 27 15 */ 28 16 static __always_inline void bit_spin_lock(int bitnum, unsigned long *addr) 17 + __acquires(__bitlock(bitnum, addr)) 29 18 { 30 19 /* 31 20 * Assuming the lock is uncontended, this never enters ··· 45 32 preempt_disable(); 46 33 } 47 34 #endif 48 - __acquire(bitlock); 35 + __acquire(__bitlock(bitnum, addr)); 49 36 } 50 37 51 38 /* 52 39 * Return true if it was acquired 53 40 */ 54 41 static __always_inline int bit_spin_trylock(int bitnum, unsigned long *addr) 42 + __cond_acquires(true, __bitlock(bitnum, addr)) 55 43 { 56 44 preempt_disable(); 57 45 #if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK) ··· 61 47 return 0; 62 48 } 63 49 #endif 64 - __acquire(bitlock); 50 + __acquire(__bitlock(bitnum, addr)); 65 51 return 1; 66 52 } 67 53 ··· 69 55 * bit-based spin_unlock() 70 56 */ 71 57 static __always_inline void bit_spin_unlock(int bitnum, unsigned long *addr) 58 + __releases(__bitlock(bitnum, addr)) 72 59 { 73 60 #ifdef CONFIG_DEBUG_SPINLOCK 74 61 BUG_ON(!test_bit(bitnum, addr)); ··· 78 63 clear_bit_unlock(bitnum, addr); 79 64 #endif 80 65 preempt_enable(); 81 - __release(bitlock); 66 + __release(__bitlock(bitnum, addr)); 82 67 } 83 68 84 69 /* ··· 87 72 * protecting the rest of the flags in the word. 88 73 */ 89 74 static __always_inline void __bit_spin_unlock(int bitnum, unsigned long *addr) 75 + __releases(__bitlock(bitnum, addr)) 90 76 { 91 77 #ifdef CONFIG_DEBUG_SPINLOCK 92 78 BUG_ON(!test_bit(bitnum, addr)); ··· 96 80 __clear_bit_unlock(bitnum, addr); 97 81 #endif 98 82 preempt_enable(); 99 - __release(bitlock); 83 + __release(__bitlock(bitnum, addr)); 100 84 } 101 85 102 86 /*
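
Callers of bit_spin_lock() need no changes: the __bitlock(bitnum, addr)
cast only manufactures a distinct analysis token per (bit, address)
pair, so acquires and releases pair up statically. Illustrative use:

    static void set_flag(unsigned long *word)
    {
        bit_spin_lock(0, word);    /* acquires __bitlock(0, word) */
        /* ... critical section ... */
        bit_spin_unlock(0, word);  /* releases the same token */
    }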
+54 -4
include/linux/cleanup.h
··· 278 278 279 279 #define DEFINE_CLASS(_name, _type, _exit, _init, _init_args...) \ 280 280 typedef _type class_##_name##_t; \ 281 + typedef _type lock_##_name##_t; \ 281 282 static __always_inline void class_##_name##_destructor(_type *p) \ 283 + __no_context_analysis \ 282 284 { _type _T = *p; _exit; } \ 283 285 static __always_inline _type class_##_name##_constructor(_init_args) \ 286 + __no_context_analysis \ 284 287 { _type t = _init; return t; } 285 288 286 289 #define EXTEND_CLASS(_name, ext, _init, _init_args...) \ 290 + typedef lock_##_name##_t lock_##_name##ext##_t; \ 287 291 typedef class_##_name##_t class_##_name##ext##_t; \ 288 292 static __always_inline void class_##_name##ext##_destructor(class_##_name##_t *p) \ 289 293 { class_##_name##_destructor(p); } \ 290 294 static __always_inline class_##_name##_t class_##_name##ext##_constructor(_init_args) \ 295 + __no_context_analysis \ 291 296 { class_##_name##_t t = _init; return t; } 292 297 293 298 #define CLASS(_name, var) \ ··· 479 474 */ 480 475 481 476 #define __DEFINE_UNLOCK_GUARD(_name, _type, _unlock, ...) \ 477 + typedef _type lock_##_name##_t; \ 482 478 typedef struct { \ 483 479 _type *lock; \ 484 480 __VA_ARGS__; \ 485 481 } class_##_name##_t; \ 486 482 \ 487 483 static __always_inline void class_##_name##_destructor(class_##_name##_t *_T) \ 484 + __no_context_analysis \ 488 485 { \ 489 486 if (!__GUARD_IS_ERR(_T->lock)) { _unlock; } \ 490 487 } \ 491 488 \ 492 489 __DEFINE_GUARD_LOCK_PTR(_name, &_T->lock) 493 490 494 - #define __DEFINE_LOCK_GUARD_1(_name, _type, _lock) \ 491 + #define __DEFINE_LOCK_GUARD_1(_name, _type, ...) \ 495 492 static __always_inline class_##_name##_t class_##_name##_constructor(_type *l) \ 493 + __no_context_analysis \ 496 494 { \ 497 495 class_##_name##_t _t = { .lock = l }, *_T = &_t; \ 498 - _lock; \ 496 + __VA_ARGS__; \ 499 497 return _t; \ 500 498 } 501 499 502 - #define __DEFINE_LOCK_GUARD_0(_name, _lock) \ 500 + #define __DEFINE_LOCK_GUARD_0(_name, ...) \ 503 501 static __always_inline class_##_name##_t class_##_name##_constructor(void) \ 502 + __no_context_analysis \ 504 503 { \ 505 504 class_##_name##_t _t = { .lock = (void*)1 }, \ 506 505 *_T __maybe_unused = &_t; \ 507 - _lock; \ 506 + __VA_ARGS__; \ 508 507 return _t; \ 509 508 } 509 + 510 + #define DECLARE_LOCK_GUARD_0_ATTRS(_name, _lock, _unlock) \ 511 + static inline class_##_name##_t class_##_name##_constructor(void) _lock;\ 512 + static inline void class_##_name##_destructor(class_##_name##_t *_T) _unlock; 513 + 514 + /* 515 + * To support Context Analysis, we need to allow the compiler to see the 516 + * acquisition and release of the context lock. However, the "cleanup" helpers 517 + * wrap the lock in a struct passed through separate helper functions, which 518 + * hides the lock alias from the compiler (no inter-procedural analysis). 519 + * 520 + * To make it work, we introduce an explicit alias to the context lock instance 521 + * that is "cleaned" up with a separate cleanup helper. This helper is a dummy 522 + * function that does nothing at runtime, but has the "_unlock" attribute to 523 + * tell the compiler what happens at the end of the scope. 524 + * 525 + * To generalize the pattern, the WITH_LOCK_GUARD_1_ATTRS() macro should be used 526 + * to redefine the constructor, which then also creates the alias variable with 527 + * the right "cleanup" attribute, *after* DECLARE_LOCK_GUARD_1_ATTRS() has been 528 + * used. 
529 + * 530 + * Example usage: 531 + * 532 + * DECLARE_LOCK_GUARD_1_ATTRS(mutex, __acquires(_T), __releases(*(struct mutex **)_T)) 533 + * #define class_mutex_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(mutex, _T) 534 + * 535 + * Note: To support the for-loop based scoped helpers, the auxiliary variable 536 + * must be a pointer to the "class" type because it is defined in the same 537 + * statement as the guard variable. However, we initialize it with the lock 538 + * pointer (despite the type mismatch, the compiler's alias analysis still works 539 + * as expected). The "_unlock" attribute receives a pointer to the auxiliary 540 + * variable (a double pointer to the class type), and must be cast and 541 + * dereferenced appropriately. 542 + */ 543 + #define DECLARE_LOCK_GUARD_1_ATTRS(_name, _lock, _unlock) \ 544 + static inline class_##_name##_t class_##_name##_constructor(lock_##_name##_t *_T) _lock;\ 545 + static __always_inline void __class_##_name##_cleanup_ctx(class_##_name##_t **_T) \ 546 + __no_context_analysis _unlock { } 547 + #define WITH_LOCK_GUARD_1_ATTRS(_name, _T) \ 548 + class_##_name##_constructor(_T), \ 549 + *__UNIQUE_ID(unlock) __cleanup(__class_##_name##_cleanup_ctx) = (void *)(unsigned long)(_T) 510 550 511 551 #define DEFINE_LOCK_GUARD_1(_name, _type, _lock, _unlock, ...) \ 512 552 __DEFINE_CLASS_IS_CONDITIONAL(_name, false); \
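
Once a guard class has its context attributes declared as in the
example usage above, ordinary guard() scopes become visible to the
analysis. A sketch, assuming the mutex guard is wired up this way:

    struct counted {
        struct mutex lock;
        long count __guarded_by(&lock);
    };

    static void bump(struct counted *c)
    {
        guard(mutex)(&c->lock);
        c->count++;   /* ok: &c->lock is held for this scope */
    }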
+436
include/linux/compiler-context-analysis.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Macros and attributes for compiler-based static context analysis. 4 + */ 5 + 6 + #ifndef _LINUX_COMPILER_CONTEXT_ANALYSIS_H 7 + #define _LINUX_COMPILER_CONTEXT_ANALYSIS_H 8 + 9 + #if defined(WARN_CONTEXT_ANALYSIS) && !defined(__CHECKER__) && !defined(__GENKSYMS__) 10 + 11 + /* 12 + * These attributes define new context lock (Clang: capability) types. 13 + * Internal only. 14 + */ 15 + # define __ctx_lock_type(name) __attribute__((capability(#name))) 16 + # define __reentrant_ctx_lock __attribute__((reentrant_capability)) 17 + # define __acquires_ctx_lock(...) __attribute__((acquire_capability(__VA_ARGS__))) 18 + # define __acquires_shared_ctx_lock(...) __attribute__((acquire_shared_capability(__VA_ARGS__))) 19 + # define __try_acquires_ctx_lock(ret, var) __attribute__((try_acquire_capability(ret, var))) 20 + # define __try_acquires_shared_ctx_lock(ret, var) __attribute__((try_acquire_shared_capability(ret, var))) 21 + # define __releases_ctx_lock(...) __attribute__((release_capability(__VA_ARGS__))) 22 + # define __releases_shared_ctx_lock(...) __attribute__((release_shared_capability(__VA_ARGS__))) 23 + # define __returns_ctx_lock(var) __attribute__((lock_returned(var))) 24 + 25 + /* 26 + * The below are used to annotate code being checked. Internal only. 27 + */ 28 + # define __excludes_ctx_lock(...) __attribute__((locks_excluded(__VA_ARGS__))) 29 + # define __requires_ctx_lock(...) __attribute__((requires_capability(__VA_ARGS__))) 30 + # define __requires_shared_ctx_lock(...) __attribute__((requires_shared_capability(__VA_ARGS__))) 31 + 32 + /* 33 + * The "assert_capability" attribute is a bit confusingly named. It does not 34 + * generate a check. Instead, it tells the analysis to *assume* the capability 35 + * is held. This is used for augmenting runtime assertions, that can then help 36 + * with patterns beyond the compiler's static reasoning abilities. 37 + */ 38 + # define __assumes_ctx_lock(...) __attribute__((assert_capability(__VA_ARGS__))) 39 + # define __assumes_shared_ctx_lock(...) __attribute__((assert_shared_capability(__VA_ARGS__))) 40 + 41 + /** 42 + * __guarded_by - struct member and globals attribute, declares variable 43 + * only accessible within active context 44 + * 45 + * Declares that the struct member or global variable is only accessible within 46 + * the context entered by the given context lock. Read operations on the data 47 + * require shared access, while write operations require exclusive access. 48 + * 49 + * .. code-block:: c 50 + * 51 + * struct some_state { 52 + * spinlock_t lock; 53 + * long counter __guarded_by(&lock); 54 + * }; 55 + */ 56 + # define __guarded_by(...) __attribute__((guarded_by(__VA_ARGS__))) 57 + 58 + /** 59 + * __pt_guarded_by - struct member and globals attribute, declares pointed-to 60 + * data only accessible within active context 61 + * 62 + * Declares that the data pointed to by the struct member pointer or global 63 + * pointer is only accessible within the context entered by the given context 64 + * lock. Read operations on the data require shared access, while write 65 + * operations require exclusive access. 66 + * 67 + * .. code-block:: c 68 + * 69 + * struct some_state { 70 + * spinlock_t lock; 71 + * long *counter __pt_guarded_by(&lock); 72 + * }; 73 + */ 74 + # define __pt_guarded_by(...) 
__attribute__((pt_guarded_by(__VA_ARGS__))) 75 + 76 + /** 77 + * context_lock_struct() - declare or define a context lock struct 78 + * @name: struct name 79 + * 80 + * Helper to declare or define a struct type that is also a context lock. 81 + * 82 + * .. code-block:: c 83 + * 84 + * context_lock_struct(my_handle) { 85 + * int foo; 86 + * long bar; 87 + * }; 88 + * 89 + * struct some_state { 90 + * ... 91 + * }; 92 + * // ... declared elsewhere ... 93 + * context_lock_struct(some_state); 94 + * 95 + * Note: The implementation defines several helper functions that can acquire 96 + * and release the context lock. 97 + */ 98 + # define context_lock_struct(name, ...) \ 99 + struct __ctx_lock_type(name) __VA_ARGS__ name; \ 100 + static __always_inline void __acquire_ctx_lock(const struct name *var) \ 101 + __attribute__((overloadable)) __no_context_analysis __acquires_ctx_lock(var) { } \ 102 + static __always_inline void __acquire_shared_ctx_lock(const struct name *var) \ 103 + __attribute__((overloadable)) __no_context_analysis __acquires_shared_ctx_lock(var) { } \ 104 + static __always_inline bool __try_acquire_ctx_lock(const struct name *var, bool ret) \ 105 + __attribute__((overloadable)) __no_context_analysis __try_acquires_ctx_lock(1, var) \ 106 + { return ret; } \ 107 + static __always_inline bool __try_acquire_shared_ctx_lock(const struct name *var, bool ret) \ 108 + __attribute__((overloadable)) __no_context_analysis __try_acquires_shared_ctx_lock(1, var) \ 109 + { return ret; } \ 110 + static __always_inline void __release_ctx_lock(const struct name *var) \ 111 + __attribute__((overloadable)) __no_context_analysis __releases_ctx_lock(var) { } \ 112 + static __always_inline void __release_shared_ctx_lock(const struct name *var) \ 113 + __attribute__((overloadable)) __no_context_analysis __releases_shared_ctx_lock(var) { } \ 114 + static __always_inline void __assume_ctx_lock(const struct name *var) \ 115 + __attribute__((overloadable)) __assumes_ctx_lock(var) { } \ 116 + static __always_inline void __assume_shared_ctx_lock(const struct name *var) \ 117 + __attribute__((overloadable)) __assumes_shared_ctx_lock(var) { } \ 118 + struct name 119 + 120 + /** 121 + * disable_context_analysis() - disables context analysis 122 + * 123 + * Disables context analysis. Must be paired with a later 124 + * enable_context_analysis(). 125 + */ 126 + # define disable_context_analysis() \ 127 + __diag_push(); \ 128 + __diag_ignore_all("-Wunknown-warning-option", "") \ 129 + __diag_ignore_all("-Wthread-safety", "") \ 130 + __diag_ignore_all("-Wthread-safety-pointer", "") 131 + 132 + /** 133 + * enable_context_analysis() - re-enables context analysis 134 + * 135 + * Re-enables context analysis. Must be paired with a prior 136 + * disable_context_analysis(). 137 + */ 138 + # define enable_context_analysis() __diag_pop() 139 + 140 + /** 141 + * __no_context_analysis - function attribute, disables context analysis 142 + * 143 + * Function attribute denoting that context analysis is disabled for the 144 + * whole function. Prefer use of `context_unsafe()` where possible. 145 + */ 146 + # define __no_context_analysis __attribute__((no_thread_safety_analysis)) 147 + 148 + #else /* !WARN_CONTEXT_ANALYSIS */ 149 + 150 + # define __ctx_lock_type(name) 151 + # define __reentrant_ctx_lock 152 + # define __acquires_ctx_lock(...) 153 + # define __acquires_shared_ctx_lock(...) 
154 + # define __try_acquires_ctx_lock(ret, var) 155 + # define __try_acquires_shared_ctx_lock(ret, var) 156 + # define __releases_ctx_lock(...) 157 + # define __releases_shared_ctx_lock(...) 158 + # define __assumes_ctx_lock(...) 159 + # define __assumes_shared_ctx_lock(...) 160 + # define __returns_ctx_lock(var) 161 + # define __guarded_by(...) 162 + # define __pt_guarded_by(...) 163 + # define __excludes_ctx_lock(...) 164 + # define __requires_ctx_lock(...) 165 + # define __requires_shared_ctx_lock(...) 166 + # define __acquire_ctx_lock(var) do { } while (0) 167 + # define __acquire_shared_ctx_lock(var) do { } while (0) 168 + # define __try_acquire_ctx_lock(var, ret) (ret) 169 + # define __try_acquire_shared_ctx_lock(var, ret) (ret) 170 + # define __release_ctx_lock(var) do { } while (0) 171 + # define __release_shared_ctx_lock(var) do { } while (0) 172 + # define __assume_ctx_lock(var) do { (void)(var); } while (0) 173 + # define __assume_shared_ctx_lock(var) do { (void)(var); } while (0) 174 + # define context_lock_struct(name, ...) struct __VA_ARGS__ name 175 + # define disable_context_analysis() 176 + # define enable_context_analysis() 177 + # define __no_context_analysis 178 + 179 + #endif /* WARN_CONTEXT_ANALYSIS */ 180 + 181 + /** 182 + * context_unsafe() - disable context checking for contained code 183 + * 184 + * Disables context checking for contained statements or expression. 185 + * 186 + * .. code-block:: c 187 + * 188 + * struct some_data { 189 + * spinlock_t lock; 190 + * int counter __guarded_by(&lock); 191 + * }; 192 + * 193 + * int foo(struct some_data *d) 194 + * { 195 + * // ... 196 + * // other code that is still checked ... 197 + * // ... 198 + * return context_unsafe(d->counter); 199 + * } 200 + */ 201 + #define context_unsafe(...) \ 202 + ({ \ 203 + disable_context_analysis(); \ 204 + __VA_ARGS__; \ 205 + enable_context_analysis() \ 206 + }) 207 + 208 + /** 209 + * __context_unsafe() - function attribute, disable context checking 210 + * @comment: comment explaining why opt-out is safe 211 + * 212 + * Function attribute denoting that context analysis is disabled for the 213 + * whole function. Forces adding an inline comment as argument. 214 + */ 215 + #define __context_unsafe(comment) __no_context_analysis 216 + 217 + /** 218 + * context_unsafe_alias() - helper to insert a context lock "alias barrier" 219 + * @p: pointer aliasing a context lock or object containing context locks 220 + * 221 + * No-op function that acts as a "context lock alias barrier", where the 222 + * analysis rightfully detects that we're switching aliases, but the switch is 223 + * considered safe but beyond the analysis reasoning abilities. 224 + * 225 + * This should be inserted before the first use of such an alias. 226 + * 227 + * Implementation Note: The compiler ignores aliases that may be reassigned but 228 + * their value cannot be determined (e.g. when passing a non-const pointer to an 229 + * alias as a function argument). 230 + */ 231 + #define context_unsafe_alias(p) _context_unsafe_alias((void **)&(p)) 232 + static inline void _context_unsafe_alias(void **p) { } 233 + 234 + /** 235 + * token_context_lock() - declare an abstract global context lock instance 236 + * @name: token context lock name 237 + * 238 + * Helper that declares an abstract global context lock instance @name, but not 239 + * backed by a real data structure (linker error if accidentally referenced). 240 + * The type name is `__ctx_lock_@name`. 241 + */ 242 + #define token_context_lock(name, ...) 
\ 243 + context_lock_struct(__ctx_lock_##name, ##__VA_ARGS__) {}; \ 244 + extern const struct __ctx_lock_##name *name 245 + 246 + /** 247 + * token_context_lock_instance() - declare another instance of a global context lock 248 + * @ctx: token context lock previously declared with token_context_lock() 249 + * @name: name of additional global context lock instance 250 + * 251 + * Helper that declares an additional instance @name of the same token context 252 + * lock class @ctx. This is helpful where multiple related token contexts are 253 + * declared, to allow using the same underlying type (`__ctx_lock_@ctx`) as 254 + * function arguments. 255 + */ 256 + #define token_context_lock_instance(ctx, name) \ 257 + extern const struct __ctx_lock_##ctx *name 258 + 259 + /* 260 + * Common keywords for static context analysis. 261 + */ 262 + 263 + /** 264 + * __must_hold() - function attribute, caller must hold exclusive context lock 265 + * 266 + * Function attribute declaring that the caller must hold the given context 267 + * lock instance(s) exclusively. 268 + */ 269 + #define __must_hold(...) __requires_ctx_lock(__VA_ARGS__) 270 + 271 + /** 272 + * __must_not_hold() - function attribute, caller must not hold context lock 273 + * 274 + * Function attribute declaring that the caller must not hold the given context 275 + * lock instance(s). 276 + */ 277 + #define __must_not_hold(...) __excludes_ctx_lock(__VA_ARGS__) 278 + 279 + /** 280 + * __acquires() - function attribute, function acquires context lock exclusively 281 + * 282 + * Function attribute declaring that the function acquires the given context 283 + * lock instance(s) exclusively, but does not release them. 284 + */ 285 + #define __acquires(...) __acquires_ctx_lock(__VA_ARGS__) 286 + 287 + /* 288 + * Clang's analysis does not care precisely about the value, only that it is 289 + * either zero or non-zero. So the __cond_acquires() interface might be 290 + * misleading if we say that @ret is the value returned if acquired. Instead, 291 + * provide symbolic variants which we translate. 292 + */ 293 + #define __cond_acquires_impl_true(x, ...) __try_acquires##__VA_ARGS__##_ctx_lock(1, x) 294 + #define __cond_acquires_impl_false(x, ...) __try_acquires##__VA_ARGS__##_ctx_lock(0, x) 295 + #define __cond_acquires_impl_nonzero(x, ...) __try_acquires##__VA_ARGS__##_ctx_lock(1, x) 296 + #define __cond_acquires_impl_0(x, ...) __try_acquires##__VA_ARGS__##_ctx_lock(0, x) 297 + #define __cond_acquires_impl_nonnull(x, ...) __try_acquires##__VA_ARGS__##_ctx_lock(1, x) 298 + #define __cond_acquires_impl_NULL(x, ...) __try_acquires##__VA_ARGS__##_ctx_lock(0, x) 299 + 300 + /** 301 + * __cond_acquires() - function attribute, function conditionally 302 + * acquires a context lock exclusively 303 + * @ret: abstract value returned by function if context lock acquired 304 + * @x: context lock instance pointer 305 + * 306 + * Function attribute declaring that the function conditionally acquires the 307 + * given context lock instance @x exclusively, but does not release it. The 308 + * function return value @ret denotes when the context lock is acquired. 309 + * 310 + * @ret may be one of: true, false, nonzero, 0, nonnull, NULL. 311 + */ 312 + #define __cond_acquires(ret, x) __cond_acquires_impl_##ret(x) 313 + 314 + /** 315 + * __releases() - function attribute, function releases a context lock exclusively 316 + * 317 + * Function attribute declaring that the function releases the given context 318 + * lock instance(s) exclusively. 
The associated context(s) must be active on 319 + * entry. 320 + */ 321 + #define __releases(...) __releases_ctx_lock(__VA_ARGS__) 322 + 323 + /** 324 + * __acquire() - function to acquire context lock exclusively 325 + * @x: context lock instance pointer 326 + * 327 + * No-op function that acquires the given context lock instance @x exclusively. 328 + */ 329 + #define __acquire(x) __acquire_ctx_lock(x) 330 + 331 + /** 332 + * __release() - function to release context lock exclusively 333 + * @x: context lock instance pointer 334 + * 335 + * No-op function that releases the given context lock instance @x. 336 + */ 337 + #define __release(x) __release_ctx_lock(x) 338 + 339 + /** 340 + * __must_hold_shared() - function attribute, caller must hold shared context lock 341 + * 342 + * Function attribute declaring that the caller must hold the given context 343 + * lock instance(s) with shared access. 344 + */ 345 + #define __must_hold_shared(...) __requires_shared_ctx_lock(__VA_ARGS__) 346 + 347 + /** 348 + * __acquires_shared() - function attribute, function acquires context lock shared 349 + * 350 + * Function attribute declaring that the function acquires the given 351 + * context lock instance(s) with shared access, but does not release them. 352 + */ 353 + #define __acquires_shared(...) __acquires_shared_ctx_lock(__VA_ARGS__) 354 + 355 + /** 356 + * __cond_acquires_shared() - function attribute, function conditionally 357 + * acquires a context lock shared 358 + * @ret: abstract value returned by function if context lock acquired 359 + * @x: context lock instance pointer 360 + * 361 + * Function attribute declaring that the function conditionally acquires the 362 + * given context lock instance @x with shared access, but does not release it. 363 + * The function return value @ret denotes when the context lock is acquired. 364 + * 365 + * @ret may be one of: true, false, nonzero, 0, nonnull, NULL. 366 + */ 367 + #define __cond_acquires_shared(ret, x) __cond_acquires_impl_##ret(x, _shared) 368 + 369 + /** 370 + * __releases_shared() - function attribute, function releases a 371 + * context lock shared 372 + * 373 + * Function attribute declaring that the function releases the given context 374 + * lock instance(s) with shared access. The associated context(s) must be 375 + * active on entry. 376 + */ 377 + #define __releases_shared(...) __releases_shared_ctx_lock(__VA_ARGS__) 378 + 379 + /** 380 + * __acquire_shared() - function to acquire context lock shared 381 + * @x: context lock instance pointer 382 + * 383 + * No-op function that acquires the given context lock instance @x with shared 384 + * access. 385 + */ 386 + #define __acquire_shared(x) __acquire_shared_ctx_lock(x) 387 + 388 + /** 389 + * __release_shared() - function to release context lock shared 390 + * @x: context lock instance pointer 391 + * 392 + * No-op function that releases the given context lock instance @x with shared 393 + * access. 
394 + */ 395 + #define __release_shared(x) __release_shared_ctx_lock(x) 396 + 397 + /** 398 + * __acquire_ret() - helper to acquire context lock of return value 399 + * @call: call expression 400 + * @ret_expr: acquire expression that uses __ret 401 + */ 402 + #define __acquire_ret(call, ret_expr) \ 403 + ({ \ 404 + __auto_type __ret = call; \ 405 + __acquire(ret_expr); \ 406 + __ret; \ 407 + }) 408 + 409 + /** 410 + * __acquire_shared_ret() - helper to acquire context lock shared of return value 411 + * @call: call expression 412 + * @ret_expr: acquire shared expression that uses __ret 413 + */ 414 + #define __acquire_shared_ret(call, ret_expr) \ 415 + ({ \ 416 + __auto_type __ret = call; \ 417 + __acquire_shared(ret_expr); \ 418 + __ret; \ 419 + }) 420 + 421 + /* 422 + * Attributes to mark functions returning acquired context locks. 423 + * 424 + * This is purely cosmetic to help readability, and should be used with the 425 + * above macros as follows: 426 + * 427 + * struct foo { spinlock_t lock; ... }; 428 + * ... 429 + * #define myfunc(...) __acquire_ret(_myfunc(__VA_ARGS__), &__ret->lock) 430 + * struct foo *_myfunc(int bar) __acquires_ret; 431 + * ... 432 + */ 433 + #define __acquires_ret __no_context_analysis 434 + #define __acquires_shared_ret __no_context_analysis 435 + 436 + #endif /* _LINUX_COMPILER_CONTEXT_ANALYSIS_H */
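Taken together, the keywords above compose as follows. This is a minimal sketch under assumptions: the struct and functions are hypothetical, and the spinlock API is assumed to carry matching __acquires()/__releases() annotations, as done elsewhere in this series:

	struct counter_state {
		spinlock_t	lock;
		u64		value __guarded_by(&lock);
	};

	static void counter_add(struct counter_state *s, u64 n)
		__must_not_hold(&s->lock)
	{
		spin_lock(&s->lock);
		s->value += n;		/* OK: exclusive context active */
		spin_unlock(&s->lock);
	}

	static u64 counter_peek(struct counter_state *s)
	{
		/* Deliberately racy snapshot, opted out of checking: */
		return context_unsafe(READ_ONCE(s->value));
	}

With WARN_CONTEXT_ANALYSIS=y on clang-22+, any access to ->value outside the lock's context becomes a -Wthread-safety warning at compile time; context_unsafe() marks and silences the one intentional exception.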
+2
include/linux/compiler.h
···
190 190	#define data_race(expr) \
191 191	({ \
192 192		__kcsan_disable_current(); \
193 +		disable_context_analysis(); \
193 194		auto __v = (expr); \
195 +		enable_context_analysis(); \
194 196		__kcsan_enable_current(); \
195 197		__v; \
196 198	})
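A short hedged illustration of why data_race() needs the disable/enable pair (the struct is hypothetical): an intentionally lockless diagnostic read of a __guarded_by() member must not trip the new compile-time checking, just as it must not trip KCSAN at runtime:

	struct stats {
		spinlock_t	lock;
		unsigned long	count __guarded_by(&lock);
	};

	/* Given a pointer s to struct stats; both KCSAN and the context
	 * analysis stand down for the marked expression: */
	pr_debug("count=%lu\n", data_race(s->count));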
+2 -16
include/linux/compiler_types.h
··· 41 41 # define BTF_TYPE_TAG(value) /* nothing */ 42 42 #endif 43 43 44 + #include <linux/compiler-context-analysis.h> 45 + 44 46 /* sparse defines __CHECKER__; see Documentation/dev-tools/sparse.rst */ 45 47 #ifdef __CHECKER__ 46 48 /* address spaces */ ··· 53 51 # define __rcu __attribute__((noderef, address_space(__rcu))) 54 52 static inline void __chk_user_ptr(const volatile void __user *ptr) { } 55 53 static inline void __chk_io_ptr(const volatile void __iomem *ptr) { } 56 - /* context/locking */ 57 - # define __must_hold(x) __attribute__((context(x,1,1))) 58 - # define __acquires(x) __attribute__((context(x,0,1))) 59 - # define __cond_acquires(x) __attribute__((context(x,0,-1))) 60 - # define __releases(x) __attribute__((context(x,1,0))) 61 - # define __acquire(x) __context__(x,1) 62 - # define __release(x) __context__(x,-1) 63 - # define __cond_lock(x,c) ((c) ? ({ __acquire(x); 1; }) : 0) 64 54 /* other */ 65 55 # define __force __attribute__((force)) 66 56 # define __nocast __attribute__((nocast)) ··· 73 79 74 80 # define __chk_user_ptr(x) (void)0 75 81 # define __chk_io_ptr(x) (void)0 76 - /* context/locking */ 77 - # define __must_hold(x) 78 - # define __acquires(x) 79 - # define __cond_acquires(x) 80 - # define __releases(x) 81 - # define __acquire(x) (void)0 82 - # define __release(x) (void)0 83 - # define __cond_lock(x,c) (c) 84 82 /* other */ 85 83 # define __force 86 84 # define __nocast
+2 -2
include/linux/console.h
···
492 492	extern int console_srcu_read_lock(void);
493 493	extern void console_srcu_read_unlock(int cookie);
494 494
495 -	extern void console_list_lock(void) __acquires(console_mutex);
496 -	extern void console_list_unlock(void) __releases(console_mutex);
495 +	extern void console_list_lock(void);
496 +	extern void console_list_unlock(void);
497 497
498 498	extern struct hlist_head console_list;
499 499
+5 -7
include/linux/debugfs.h
···
239 239	 * @cancel: callback to call
240 240	 * @cancel_data: extra data for the callback to call
241 241	 */
242 -	struct debugfs_cancellation {
242 +	context_lock_struct(debugfs_cancellation) {
243 243		struct list_head list;
244 244		void (*cancel)(struct dentry *, void *);
245 245		void *cancel_data;
246 246	};
247 247
248 -	void __acquires(cancellation)
249 -	debugfs_enter_cancellation(struct file *file,
250 -			struct debugfs_cancellation *cancellation);
251 -	void __releases(cancellation)
252 -	debugfs_leave_cancellation(struct file *file,
253 -			struct debugfs_cancellation *cancellation);
248 +	void debugfs_enter_cancellation(struct file *file,
249 +			struct debugfs_cancellation *cancellation) __acquires(cancellation);
250 +	void debugfs_leave_cancellation(struct file *file,
251 +			struct debugfs_cancellation *cancellation) __releases(cancellation);
254 252
255 253	#else
256 254
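Two things change here: the struct becomes a context lock via context_lock_struct(), and the __acquires()/__releases() attributes move from before the declarator to after the parameter list, where Clang expects declaration attributes. A hedged usage sketch (my_cancel and do_blocking_work are hypothetical):

	static void my_cancel(struct dentry *dentry, void *data)
	{
		/* wake up the blocked reader ... */
	}

	static ssize_t my_read(struct file *file, char __user *buf,
			       size_t count, loff_t *ppos)
	{
		struct debugfs_cancellation c = { .cancel = my_cancel };
		ssize_t ret;

		debugfs_enter_cancellation(file, &c);	/* context entered */
		ret = do_blocking_work(file, buf, count, ppos);
		debugfs_leave_cancellation(file, &c);	/* and left again */
		return ret;
	}

A return path that skips debugfs_leave_cancellation() would now warn at compile time.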
+2
include/linux/kref.h
···
 81  81	static inline int kref_put_mutex(struct kref *kref,
 82  82					 void (*release)(struct kref *kref),
 83  83					 struct mutex *mutex)
 84 +					 __cond_acquires(true, mutex)
 84  85	{
 85  86		if (refcount_dec_and_mutex_lock(&kref->refcount, mutex)) {
 86  87			release(kref);
···
103 102	static inline int kref_put_lock(struct kref *kref,
104 103					void (*release)(struct kref *kref),
105 104					spinlock_t *lock)
105 +					__cond_acquires(true, lock)
106 106	{
107 107		if (refcount_dec_and_lock(&kref->refcount, lock)) {
108 108			release(kref);
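__cond_acquires(true, mutex) encodes the calling convention in the type: on a non-zero return the mutex is held and, per this annotation, the caller is responsible for dropping it. A hedged sketch (the object, mutex and callback are hypothetical):

	if (kref_put_mutex(&obj->ref, obj_release, &obj_list_mutex)) {
		/* Last reference dropped: obj_release() ran, mutex held. */
		mutex_unlock(&obj_list_mutex);
	}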
+2
include/linux/list_bl.h
···
144 144	}
145 145
146 146	static inline void hlist_bl_lock(struct hlist_bl_head *b)
147 +		__acquires(__bitlock(0, b))
147 148	{
148 149		bit_spin_lock(0, (unsigned long *)b);
149 150	}
150 151
151 152	static inline void hlist_bl_unlock(struct hlist_bl_head *b)
153 +		__releases(__bitlock(0, b))
152 154	{
153 155		__bit_spin_unlock(0, (unsigned long *)b);
154 156	}
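The __bitlock(0, b) expression names the bit spinlock embedded in the head pointer as a context-lock instance, so lock and unlock pair up statically per list head. A minimal sketch (the helper is hypothetical):

	static void bl_add(struct hlist_bl_head *h, struct hlist_bl_node *n)
	{
		hlist_bl_lock(h);		/* acquires __bitlock(0, h) */
		hlist_bl_add_head(n, h);
		hlist_bl_unlock(h);		/* releases __bitlock(0, h) */
	}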
+36 -19
include/linux/local_lock.h
··· 14 14 * local_lock - Acquire a per CPU local lock 15 15 * @lock: The lock variable 16 16 */ 17 - #define local_lock(lock) __local_lock(this_cpu_ptr(lock)) 17 + #define local_lock(lock) __local_lock(__this_cpu_local_lock(lock)) 18 18 19 19 /** 20 20 * local_lock_irq - Acquire a per CPU local lock and disable interrupts 21 21 * @lock: The lock variable 22 22 */ 23 - #define local_lock_irq(lock) __local_lock_irq(this_cpu_ptr(lock)) 23 + #define local_lock_irq(lock) __local_lock_irq(__this_cpu_local_lock(lock)) 24 24 25 25 /** 26 26 * local_lock_irqsave - Acquire a per CPU local lock, save and disable ··· 29 29 * @flags: Storage for interrupt flags 30 30 */ 31 31 #define local_lock_irqsave(lock, flags) \ 32 - __local_lock_irqsave(this_cpu_ptr(lock), flags) 32 + __local_lock_irqsave(__this_cpu_local_lock(lock), flags) 33 33 34 34 /** 35 35 * local_unlock - Release a per CPU local lock 36 36 * @lock: The lock variable 37 37 */ 38 - #define local_unlock(lock) __local_unlock(this_cpu_ptr(lock)) 38 + #define local_unlock(lock) __local_unlock(__this_cpu_local_lock(lock)) 39 39 40 40 /** 41 41 * local_unlock_irq - Release a per CPU local lock and enable interrupts 42 42 * @lock: The lock variable 43 43 */ 44 - #define local_unlock_irq(lock) __local_unlock_irq(this_cpu_ptr(lock)) 44 + #define local_unlock_irq(lock) __local_unlock_irq(__this_cpu_local_lock(lock)) 45 45 46 46 /** 47 47 * local_unlock_irqrestore - Release a per CPU local lock and restore ··· 50 50 * @flags: Interrupt flags to restore 51 51 */ 52 52 #define local_unlock_irqrestore(lock, flags) \ 53 - __local_unlock_irqrestore(this_cpu_ptr(lock), flags) 53 + __local_unlock_irqrestore(__this_cpu_local_lock(lock), flags) 54 54 55 55 /** 56 56 * local_trylock_init - Runtime initialize a lock instance ··· 66 66 * locking constrains it will _always_ fail to acquire the lock in NMI or 67 67 * HARDIRQ context on PREEMPT_RT. 68 68 */ 69 - #define local_trylock(lock) __local_trylock(this_cpu_ptr(lock)) 69 + #define local_trylock(lock) __local_trylock(__this_cpu_local_lock(lock)) 70 70 71 71 #define local_lock_is_locked(lock) __local_lock_is_locked(lock) 72 72 ··· 81 81 * HARDIRQ context on PREEMPT_RT. 
82 82 */ 83 83 #define local_trylock_irqsave(lock, flags) \ 84 - __local_trylock_irqsave(this_cpu_ptr(lock), flags) 84 + __local_trylock_irqsave(__this_cpu_local_lock(lock), flags) 85 85 86 - DEFINE_GUARD(local_lock, local_lock_t __percpu*, 87 - local_lock(_T), 88 - local_unlock(_T)) 89 - DEFINE_GUARD(local_lock_irq, local_lock_t __percpu*, 90 - local_lock_irq(_T), 91 - local_unlock_irq(_T)) 86 + DEFINE_LOCK_GUARD_1(local_lock, local_lock_t __percpu, 87 + local_lock(_T->lock), 88 + local_unlock(_T->lock)) 89 + DEFINE_LOCK_GUARD_1(local_lock_irq, local_lock_t __percpu, 90 + local_lock_irq(_T->lock), 91 + local_unlock_irq(_T->lock)) 92 92 DEFINE_LOCK_GUARD_1(local_lock_irqsave, local_lock_t __percpu, 93 93 local_lock_irqsave(_T->lock, _T->flags), 94 94 local_unlock_irqrestore(_T->lock, _T->flags), 95 95 unsigned long flags) 96 96 97 97 #define local_lock_nested_bh(_lock) \ 98 - __local_lock_nested_bh(this_cpu_ptr(_lock)) 98 + __local_lock_nested_bh(__this_cpu_local_lock(_lock)) 99 99 100 100 #define local_unlock_nested_bh(_lock) \ 101 - __local_unlock_nested_bh(this_cpu_ptr(_lock)) 101 + __local_unlock_nested_bh(__this_cpu_local_lock(_lock)) 102 102 103 - DEFINE_GUARD(local_lock_nested_bh, local_lock_t __percpu*, 104 - local_lock_nested_bh(_T), 105 - local_unlock_nested_bh(_T)) 103 + DEFINE_LOCK_GUARD_1(local_lock_nested_bh, local_lock_t __percpu, 104 + local_lock_nested_bh(_T->lock), 105 + local_unlock_nested_bh(_T->lock)) 106 + 107 + DEFINE_LOCK_GUARD_1(local_lock_init, local_lock_t, local_lock_init(_T->lock), /* */) 108 + 109 + DECLARE_LOCK_GUARD_1_ATTRS(local_lock, __acquires(_T), __releases(*(local_lock_t __percpu **)_T)) 110 + #define class_local_lock_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(local_lock, _T) 111 + DECLARE_LOCK_GUARD_1_ATTRS(local_lock_irq, __acquires(_T), __releases(*(local_lock_t __percpu **)_T)) 112 + #define class_local_lock_irq_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(local_lock_irq, _T) 113 + DECLARE_LOCK_GUARD_1_ATTRS(local_lock_irqsave, __acquires(_T), __releases(*(local_lock_t __percpu **)_T)) 114 + #define class_local_lock_irqsave_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(local_lock_irqsave, _T) 115 + DECLARE_LOCK_GUARD_1_ATTRS(local_lock_nested_bh, __acquires(_T), __releases(*(local_lock_t __percpu **)_T)) 116 + #define class_local_lock_nested_bh_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(local_lock_nested_bh, _T) 117 + DECLARE_LOCK_GUARD_1_ATTRS(local_lock_init, __acquires(_T), __releases(*(local_lock_t **)_T)) 118 + #define class_local_lock_init_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(local_lock_init, _T) 119 + 120 + DEFINE_LOCK_GUARD_1(local_trylock_init, local_trylock_t, local_trylock_init(_T->lock), /* */) 121 + DECLARE_LOCK_GUARD_1_ATTRS(local_trylock_init, __acquires(_T), __releases(*(local_trylock_t **)_T)) 122 + #define class_local_trylock_init_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(local_trylock_init, _T) 106 123 107 124 #endif
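The guards move from DEFINE_GUARD() on raw pointers to DEFINE_LOCK_GUARD_1(), which gives the constructor and destructor a place to hang the __acquires()/__releases() attributes declared above. A hedged sketch with a hypothetical per-CPU counter:

	static DEFINE_PER_CPU(local_lock_t, stats_lock) = INIT_LOCAL_LOCK(stats_lock);
	static DEFINE_PER_CPU(unsigned long, stats_count);

	static void stats_inc(void)
	{
		guard(local_lock)(&stats_lock);	/* context active until scope end */
		this_cpu_inc(stats_count);
	}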
+58 -14
include/linux/local_lock_internal.h
··· 4 4 #endif 5 5 6 6 #include <linux/percpu-defs.h> 7 + #include <linux/irqflags.h> 7 8 #include <linux/lockdep.h> 9 + #include <linux/debug_locks.h> 10 + #include <asm/current.h> 8 11 9 12 #ifndef CONFIG_PREEMPT_RT 10 13 11 - typedef struct { 14 + context_lock_struct(local_lock) { 12 15 #ifdef CONFIG_DEBUG_LOCK_ALLOC 13 16 struct lockdep_map dep_map; 14 17 struct task_struct *owner; 15 18 #endif 16 - } local_lock_t; 19 + }; 20 + typedef struct local_lock local_lock_t; 17 21 18 22 /* local_trylock() and local_trylock_irqsave() only work with local_trylock_t */ 19 - typedef struct { 23 + context_lock_struct(local_trylock) { 20 24 #ifdef CONFIG_DEBUG_LOCK_ALLOC 21 25 struct lockdep_map dep_map; 22 26 struct task_struct *owner; 23 27 #endif 24 28 u8 acquired; 25 - } local_trylock_t; 29 + }; 30 + typedef struct local_trylock local_trylock_t; 26 31 27 32 #ifdef CONFIG_DEBUG_LOCK_ALLOC 28 33 # define LOCAL_LOCK_DEBUG_INIT(lockname) \ ··· 89 84 local_lock_debug_init(lock); \ 90 85 } while (0) 91 86 92 - #define __local_trylock_init(lock) __local_lock_init((local_lock_t *)lock) 87 + #define __local_trylock_init(lock) \ 88 + do { \ 89 + __local_lock_init((local_lock_t *)lock); \ 90 + } while (0) 93 91 94 92 #define __spinlock_nested_bh_init(lock) \ 95 93 do { \ ··· 125 117 do { \ 126 118 preempt_disable(); \ 127 119 __local_lock_acquire(lock); \ 120 + __acquire(lock); \ 128 121 } while (0) 129 122 130 123 #define __local_lock_irq(lock) \ 131 124 do { \ 132 125 local_irq_disable(); \ 133 126 __local_lock_acquire(lock); \ 127 + __acquire(lock); \ 134 128 } while (0) 135 129 136 130 #define __local_lock_irqsave(lock, flags) \ 137 131 do { \ 138 132 local_irq_save(flags); \ 139 133 __local_lock_acquire(lock); \ 134 + __acquire(lock); \ 140 135 } while (0) 141 136 142 137 #define __local_trylock(lock) \ 143 - ({ \ 138 + __try_acquire_ctx_lock(lock, ({ \ 144 139 local_trylock_t *__tl; \ 145 140 \ 146 141 preempt_disable(); \ ··· 157 146 (local_lock_t *)__tl); \ 158 147 } \ 159 148 !!__tl; \ 160 - }) 149 + })) 161 150 162 151 #define __local_trylock_irqsave(lock, flags) \ 163 - ({ \ 152 + __try_acquire_ctx_lock(lock, ({ \ 164 153 local_trylock_t *__tl; \ 165 154 \ 166 155 local_irq_save(flags); \ ··· 174 163 (local_lock_t *)__tl); \ 175 164 } \ 176 165 !!__tl; \ 177 - }) 166 + })) 178 167 179 168 /* preemption or migration must be disabled before calling __local_lock_is_locked */ 180 169 #define __local_lock_is_locked(lock) READ_ONCE(this_cpu_ptr(lock)->acquired) ··· 197 186 198 187 #define __local_unlock(lock) \ 199 188 do { \ 189 + __release(lock); \ 200 190 __local_lock_release(lock); \ 201 191 preempt_enable(); \ 202 192 } while (0) 203 193 204 194 #define __local_unlock_irq(lock) \ 205 195 do { \ 196 + __release(lock); \ 206 197 __local_lock_release(lock); \ 207 198 local_irq_enable(); \ 208 199 } while (0) 209 200 210 201 #define __local_unlock_irqrestore(lock, flags) \ 211 202 do { \ 203 + __release(lock); \ 212 204 __local_lock_release(lock); \ 213 205 local_irq_restore(flags); \ 214 206 } while (0) ··· 220 206 do { \ 221 207 lockdep_assert_in_softirq(); \ 222 208 local_lock_acquire((lock)); \ 209 + __acquire(lock); \ 223 210 } while (0) 224 211 225 212 #define __local_unlock_nested_bh(lock) \ 226 - local_lock_release((lock)) 213 + do { \ 214 + __release(lock); \ 215 + local_lock_release((lock)); \ 216 + } while (0) 227 217 228 218 #else /* !CONFIG_PREEMPT_RT */ 219 + 220 + #include <linux/sched.h> 221 + #include <linux/spinlock.h> 229 222 230 223 /* 231 224 * On PREEMPT_RT local_lock maps to a 
per CPU spinlock, which protects the ··· 288 267 } while (0) 289 268 290 269 #define __local_trylock(lock) \ 291 - ({ \ 270 + __try_acquire_ctx_lock(lock, context_unsafe(({ \ 292 271 int __locked; \ 293 272 \ 294 273 if (in_nmi() | in_hardirq()) { \ ··· 300 279 migrate_enable(); \ 301 280 } \ 302 281 __locked; \ 303 - }) 282 + }))) 304 283 305 284 #define __local_trylock_irqsave(lock, flags) \ 306 - ({ \ 285 + __try_acquire_ctx_lock(lock, ({ \ 307 286 typecheck(unsigned long, flags); \ 308 287 flags = 0; \ 309 288 __local_trylock(lock); \ 310 - }) 289 + })) 311 290 312 291 /* migration must be disabled before calling __local_lock_is_locked */ 313 292 #define __local_lock_is_locked(__lock) \ 314 293 (rt_mutex_owner(&this_cpu_ptr(__lock)->lock) == current) 315 294 316 295 #endif /* CONFIG_PREEMPT_RT */ 296 + 297 + #if defined(WARN_CONTEXT_ANALYSIS) 298 + /* 299 + * Because the compiler only knows about the base per-CPU variable, use this 300 + * helper function to make the compiler think we lock/unlock the @base variable, 301 + * and hide the fact we actually pass the per-CPU instance to lock/unlock 302 + * functions. 303 + */ 304 + static __always_inline local_lock_t *__this_cpu_local_lock(local_lock_t __percpu *base) 305 + __returns_ctx_lock(base) __attribute__((overloadable)) 306 + { 307 + return this_cpu_ptr(base); 308 + } 309 + #ifndef CONFIG_PREEMPT_RT 310 + static __always_inline local_trylock_t *__this_cpu_local_lock(local_trylock_t __percpu *base) 311 + __returns_ctx_lock(base) __attribute__((overloadable)) 312 + { 313 + return this_cpu_ptr(base); 314 + } 315 + #endif /* CONFIG_PREEMPT_RT */ 316 + #else /* WARN_CONTEXT_ANALYSIS */ 317 + #define __this_cpu_local_lock(base) this_cpu_ptr(base) 318 + #endif /* WARN_CONTEXT_ANALYSIS */
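The __returns_ctx_lock(base) helper is the load-bearing trick here: at runtime this_cpu_ptr() yields a different address on every CPU, but the analysis attributes both the acquire and the release to the single per-CPU base variable, so the two sides balance. Roughly (pool_lock is a hypothetical per-CPU lock):

	local_lock(&pool_lock);		/* analysis: acquires "pool_lock" */
	/* ... touch this-CPU data associated with pool_lock ... */
	local_unlock(&pool_lock);	/* analysis: releases the same instance */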
+6 -6
include/linux/lockdep.h
···
282 282		do { WARN_ON_ONCE(debug_locks && !(cond)); } while (0)
283 283
284 284	#define lockdep_assert_held(l) \
285 -		lockdep_assert(lockdep_is_held(l) != LOCK_STATE_NOT_HELD)
285 +		do { lockdep_assert(lockdep_is_held(l) != LOCK_STATE_NOT_HELD); __assume_ctx_lock(l); } while (0)
286 286
287 287	#define lockdep_assert_not_held(l) \
288 288		lockdep_assert(lockdep_is_held(l) != LOCK_STATE_HELD)
289 289
290 290	#define lockdep_assert_held_write(l) \
291 -		lockdep_assert(lockdep_is_held_type(l, 0))
291 +		do { lockdep_assert(lockdep_is_held_type(l, 0)); __assume_ctx_lock(l); } while (0)
292 292
293 293	#define lockdep_assert_held_read(l) \
294 -		lockdep_assert(lockdep_is_held_type(l, 1))
294 +		do { lockdep_assert(lockdep_is_held_type(l, 1)); __assume_shared_ctx_lock(l); } while (0)
295 295
296 296	#define lockdep_assert_held_once(l) \
297 297		lockdep_assert_once(lockdep_is_held(l) != LOCK_STATE_NOT_HELD)
···
389 389	#define lockdep_assert(c)			do { } while (0)
390 390	#define lockdep_assert_once(c)			do { } while (0)
391 391
392 -	#define lockdep_assert_held(l)			do { (void)(l); } while (0)
392 +	#define lockdep_assert_held(l)			__assume_ctx_lock(l)
393 393	#define lockdep_assert_not_held(l)		do { (void)(l); } while (0)
394 -	#define lockdep_assert_held_write(l)		do { (void)(l); } while (0)
395 -	#define lockdep_assert_held_read(l)		do { (void)(l); } while (0)
394 +	#define lockdep_assert_held_write(l)		__assume_ctx_lock(l)
395 +	#define lockdep_assert_held_read(l)		__assume_shared_ctx_lock(l)
396 396	#define lockdep_assert_held_once(l)		do { (void)(l); } while (0)
397 397	#define lockdep_assert_none_held_once()		do { } while (0)
398 398
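The practical effect is that a runtime lockdep assertion now doubles as a static assumption, covering lock hand-off patterns the compiler cannot follow on its own. A hedged sketch (the struct is hypothetical):

	struct foo {
		spinlock_t	lock;
		int		counter __guarded_by(&lock);
	};

	static void foo_update(struct foo *f)
	{
		lockdep_assert_held(&f->lock);	/* checks at runtime, assumes statically */
		f->counter++;			/* no warning: context assumed active */
	}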
+1 -3
include/linux/lockref.h
···
49 49	void lockref_get(struct lockref *lockref);
50 50	int lockref_put_return(struct lockref *lockref);
51 51	bool lockref_get_not_zero(struct lockref *lockref);
52 -	bool lockref_put_or_lock(struct lockref *lockref);
53 -	#define lockref_put_or_lock(_lockref) \
54 -		(!__cond_lock((_lockref)->lock, !lockref_put_or_lock(_lockref)))
52 +	bool lockref_put_or_lock(struct lockref *lockref) __cond_acquires(false, &lockref->lock);
55 53
56 54	void lockref_mark_dead(struct lockref *lockref);
57 55	bool lockref_get_not_dead(struct lockref *lockref);
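Note the inverted polarity: __cond_acquires(false, &lockref->lock) says the spinlock is held precisely when the function returns false, i.e. when the count could not simply be decremented. A hedged sketch (the object and teardown helper are hypothetical):

	if (!lockref_put_or_lock(&obj->ref)) {
		/* count was <= 1: ->lock is now held */
		do_teardown(obj);
		spin_unlock(&obj->ref.lock);
	}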
+5 -28
include/linux/mm.h
··· 2979 2979 } 2980 2980 #endif /* CONFIG_ARCH_SUPPORTS_PUD_PFNMAP */ 2981 2981 2982 - extern pte_t *__get_locked_pte(struct mm_struct *mm, unsigned long addr, 2983 - spinlock_t **ptl); 2984 - static inline pte_t *get_locked_pte(struct mm_struct *mm, unsigned long addr, 2985 - spinlock_t **ptl) 2986 - { 2987 - pte_t *ptep; 2988 - __cond_lock(*ptl, ptep = __get_locked_pte(mm, addr, ptl)); 2989 - return ptep; 2990 - } 2982 + extern pte_t *get_locked_pte(struct mm_struct *mm, unsigned long addr, 2983 + spinlock_t **ptl); 2991 2984 2992 2985 #ifdef __PAGETABLE_P4D_FOLDED 2993 2986 static inline int __p4d_alloc(struct mm_struct *mm, pgd_t *pgd, ··· 3334 3341 return true; 3335 3342 } 3336 3343 3337 - pte_t *___pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp); 3338 - static inline pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, 3339 - pmd_t *pmdvalp) 3340 - { 3341 - pte_t *pte; 3344 + pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp); 3342 3345 3343 - __cond_lock(RCU, pte = ___pte_offset_map(pmd, addr, pmdvalp)); 3344 - return pte; 3345 - } 3346 3346 static inline pte_t *pte_offset_map(pmd_t *pmd, unsigned long addr) 3347 3347 { 3348 3348 return __pte_offset_map(pmd, addr, NULL); 3349 3349 } 3350 3350 3351 - pte_t *__pte_offset_map_lock(struct mm_struct *mm, pmd_t *pmd, 3352 - unsigned long addr, spinlock_t **ptlp); 3353 - static inline pte_t *pte_offset_map_lock(struct mm_struct *mm, pmd_t *pmd, 3354 - unsigned long addr, spinlock_t **ptlp) 3355 - { 3356 - pte_t *pte; 3357 - 3358 - __cond_lock(RCU, __cond_lock(*ptlp, 3359 - pte = __pte_offset_map_lock(mm, pmd, addr, ptlp))); 3360 - return pte; 3361 - } 3351 + pte_t *pte_offset_map_lock(struct mm_struct *mm, pmd_t *pmd, 3352 + unsigned long addr, spinlock_t **ptlp); 3362 3353 3363 3354 pte_t *pte_offset_map_ro_nolock(struct mm_struct *mm, pmd_t *pmd, 3364 3355 unsigned long addr, spinlock_t **ptlp);
+25 -15
include/linux/mutex.h
··· 182 182 * Also see Documentation/locking/mutex-design.rst. 183 183 */ 184 184 #ifdef CONFIG_DEBUG_LOCK_ALLOC 185 - extern void mutex_lock_nested(struct mutex *lock, unsigned int subclass); 185 + extern void mutex_lock_nested(struct mutex *lock, unsigned int subclass) __acquires(lock); 186 186 extern void _mutex_lock_nest_lock(struct mutex *lock, struct lockdep_map *nest_lock); 187 187 extern int __must_check mutex_lock_interruptible_nested(struct mutex *lock, 188 - unsigned int subclass); 188 + unsigned int subclass) __cond_acquires(0, lock); 189 189 extern int __must_check _mutex_lock_killable(struct mutex *lock, 190 - unsigned int subclass, struct lockdep_map *nest_lock); 191 - extern void mutex_lock_io_nested(struct mutex *lock, unsigned int subclass); 190 + unsigned int subclass, struct lockdep_map *nest_lock) __cond_acquires(0, lock); 191 + extern void mutex_lock_io_nested(struct mutex *lock, unsigned int subclass) __acquires(lock); 192 192 193 193 #define mutex_lock(lock) mutex_lock_nested(lock, 0) 194 194 #define mutex_lock_interruptible(lock) mutex_lock_interruptible_nested(lock, 0) ··· 211 211 _mutex_lock_killable(lock, subclass, NULL) 212 212 213 213 #else 214 - extern void mutex_lock(struct mutex *lock); 215 - extern int __must_check mutex_lock_interruptible(struct mutex *lock); 216 - extern int __must_check mutex_lock_killable(struct mutex *lock); 217 - extern void mutex_lock_io(struct mutex *lock); 214 + extern void mutex_lock(struct mutex *lock) __acquires(lock); 215 + extern int __must_check mutex_lock_interruptible(struct mutex *lock) __cond_acquires(0, lock); 216 + extern int __must_check mutex_lock_killable(struct mutex *lock) __cond_acquires(0, lock); 217 + extern void mutex_lock_io(struct mutex *lock) __acquires(lock); 218 218 219 219 # define mutex_lock_nested(lock, subclass) mutex_lock(lock) 220 220 # define mutex_lock_interruptible_nested(lock, subclass) mutex_lock_interruptible(lock) ··· 232 232 */ 233 233 234 234 #ifdef CONFIG_DEBUG_LOCK_ALLOC 235 - extern int _mutex_trylock_nest_lock(struct mutex *lock, struct lockdep_map *nest_lock); 235 + extern int _mutex_trylock_nest_lock(struct mutex *lock, struct lockdep_map *nest_lock) __cond_acquires(true, lock); 236 236 237 237 #define mutex_trylock_nest_lock(lock, nest_lock) \ 238 238 ( \ ··· 242 242 243 243 #define mutex_trylock(lock) _mutex_trylock_nest_lock(lock, NULL) 244 244 #else 245 - extern int mutex_trylock(struct mutex *lock); 245 + extern int mutex_trylock(struct mutex *lock) __cond_acquires(true, lock); 246 246 #define mutex_trylock_nest_lock(lock, nest_lock) mutex_trylock(lock) 247 247 #endif 248 248 249 - extern void mutex_unlock(struct mutex *lock); 249 + extern void mutex_unlock(struct mutex *lock) __releases(lock); 250 250 251 - extern int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock); 251 + extern int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock) __cond_acquires(true, lock); 252 252 253 - DEFINE_GUARD(mutex, struct mutex *, mutex_lock(_T), mutex_unlock(_T)) 254 - DEFINE_GUARD_COND(mutex, _try, mutex_trylock(_T)) 255 - DEFINE_GUARD_COND(mutex, _intr, mutex_lock_interruptible(_T), _RET == 0) 253 + DEFINE_LOCK_GUARD_1(mutex, struct mutex, mutex_lock(_T->lock), mutex_unlock(_T->lock)) 254 + DEFINE_LOCK_GUARD_1_COND(mutex, _try, mutex_trylock(_T->lock)) 255 + DEFINE_LOCK_GUARD_1_COND(mutex, _intr, mutex_lock_interruptible(_T->lock), _RET == 0) 256 + DEFINE_LOCK_GUARD_1(mutex_init, struct mutex, mutex_init(_T->lock), /* */) 257 + 258 + DECLARE_LOCK_GUARD_1_ATTRS(mutex, 
__acquires(_T), __releases(*(struct mutex **)_T)) 259 + #define class_mutex_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(mutex, _T) 260 + DECLARE_LOCK_GUARD_1_ATTRS(mutex_try, __acquires(_T), __releases(*(struct mutex **)_T)) 261 + #define class_mutex_try_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(mutex_try, _T) 262 + DECLARE_LOCK_GUARD_1_ATTRS(mutex_intr, __acquires(_T), __releases(*(struct mutex **)_T)) 263 + #define class_mutex_intr_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(mutex_intr, _T) 264 + DECLARE_LOCK_GUARD_1_ATTRS(mutex_init, __acquires(_T), __releases(*(struct mutex **)_T)) 265 + #define class_mutex_init_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(mutex_init, _T) 256 266 257 267 extern unsigned long mutex_get_owner(struct mutex *lock); 258 268
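As with local locks, the guards are rebuilt on DEFINE_LOCK_GUARD_1() plus DECLARE_LOCK_GUARD_1_ATTRS(), so a scoped guard transfers the mutex context to its scope. A hedged sketch (the struct is hypothetical):

	struct config {
		struct mutex	lock;
		int		mode __guarded_by(&lock);
	};

	static void config_set_mode(struct config *c, int mode)
	{
		guard(mutex)(&c->lock);
		c->mode = mode;		/* checked: mutex context active in scope */
	}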
+2 -2
include/linux/mutex_types.h
···
38 38	 * - detects multi-task circular deadlocks and prints out all affected
39 39	 *   locks and tasks (and only those tasks)
40 40	 */
41 -	struct mutex {
41 +	context_lock_struct(mutex) {
42 42		atomic_long_t	owner;
43 43		raw_spinlock_t	wait_lock;
44 44	#ifdef CONFIG_MUTEX_SPIN_ON_OWNER
···
59 59	 */
60 60	#include <linux/rtmutex.h>
61 61
62 -	struct mutex {
62 +	context_lock_struct(mutex) {
63 63		struct rt_mutex_base	rtmutex;
64 64	#ifdef CONFIG_DEBUG_LOCK_ALLOC
65 65		struct lockdep_map	dep_map;
+53 -37
include/linux/rcupdate.h
··· 31 31 #include <asm/processor.h> 32 32 #include <linux/context_tracking_irq.h> 33 33 34 + token_context_lock(RCU, __reentrant_ctx_lock); 35 + token_context_lock_instance(RCU, RCU_SCHED); 36 + token_context_lock_instance(RCU, RCU_BH); 37 + 38 + /* 39 + * A convenience macro that can be used for RCU-protected globals or struct 40 + * members; adds type qualifier __rcu, and also enforces __guarded_by(RCU). 41 + */ 42 + #define __rcu_guarded __rcu __guarded_by(RCU) 43 + 34 44 #define ULONG_CMP_GE(a, b) (ULONG_MAX / 2 >= (a) - (b)) 35 45 #define ULONG_CMP_LT(a, b) (ULONG_MAX / 2 < (a) - (b)) 36 46 ··· 406 396 407 397 // See RCU_LOCKDEP_WARN() for an explanation of the double call to 408 398 // debug_lockdep_rcu_enabled(). 409 - static inline bool lockdep_assert_rcu_helper(bool c) 399 + static __always_inline bool lockdep_assert_rcu_helper(bool c, const struct __ctx_lock_RCU *ctx) 400 + __assumes_shared_ctx_lock(RCU) __assumes_shared_ctx_lock(ctx) 410 401 { 411 402 return debug_lockdep_rcu_enabled() && 412 403 (c || !rcu_is_watching() || !rcu_lockdep_current_cpu_online()) && ··· 420 409 * Splats if lockdep is enabled and there is no rcu_read_lock() in effect. 421 410 */ 422 411 #define lockdep_assert_in_rcu_read_lock() \ 423 - WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_lock_map))) 412 + WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_lock_map), RCU)) 424 413 425 414 /** 426 415 * lockdep_assert_in_rcu_read_lock_bh - WARN if not protected by rcu_read_lock_bh() ··· 430 419 * actual rcu_read_lock_bh() is required. 431 420 */ 432 421 #define lockdep_assert_in_rcu_read_lock_bh() \ 433 - WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_bh_lock_map))) 422 + WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_bh_lock_map), RCU_BH)) 434 423 435 424 /** 436 425 * lockdep_assert_in_rcu_read_lock_sched - WARN if not protected by rcu_read_lock_sched() ··· 440 429 * instead an actual rcu_read_lock_sched() is required. 
441 430 */ 442 431 #define lockdep_assert_in_rcu_read_lock_sched() \ 443 - WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_sched_lock_map))) 432 + WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_sched_lock_map), RCU_SCHED)) 444 433 445 434 /** 446 435 * lockdep_assert_in_rcu_reader - WARN if not within some type of RCU reader ··· 458 447 WARN_ON_ONCE(lockdep_assert_rcu_helper(!lock_is_held(&rcu_lock_map) && \ 459 448 !lock_is_held(&rcu_bh_lock_map) && \ 460 449 !lock_is_held(&rcu_sched_lock_map) && \ 461 - preemptible())) 450 + preemptible(), RCU)) 462 451 463 452 #else /* #ifdef CONFIG_PROVE_RCU */ 464 453 465 454 #define RCU_LOCKDEP_WARN(c, s) do { } while (0 && (c)) 466 455 #define rcu_sleep_check() do { } while (0) 467 456 468 - #define lockdep_assert_in_rcu_read_lock() do { } while (0) 469 - #define lockdep_assert_in_rcu_read_lock_bh() do { } while (0) 470 - #define lockdep_assert_in_rcu_read_lock_sched() do { } while (0) 471 - #define lockdep_assert_in_rcu_reader() do { } while (0) 457 + #define lockdep_assert_in_rcu_read_lock() __assume_shared_ctx_lock(RCU) 458 + #define lockdep_assert_in_rcu_read_lock_bh() __assume_shared_ctx_lock(RCU_BH) 459 + #define lockdep_assert_in_rcu_read_lock_sched() __assume_shared_ctx_lock(RCU_SCHED) 460 + #define lockdep_assert_in_rcu_reader() __assume_shared_ctx_lock(RCU) 472 461 473 462 #endif /* #else #ifdef CONFIG_PROVE_RCU */ 474 463 ··· 488 477 #endif /* #else #ifdef __CHECKER__ */ 489 478 490 479 #define __unrcu_pointer(p, local) \ 491 - ({ \ 480 + context_unsafe( \ 492 481 typeof(*p) *local = (typeof(*p) *__force)(p); \ 493 482 rcu_check_sparse(p, __rcu); \ 494 - ((typeof(*p) __force __kernel *)(local)); \ 495 - }) 483 + ((typeof(*p) __force __kernel *)(local)) \ 484 + ) 496 485 /** 497 486 * unrcu_pointer - mark a pointer as not being RCU protected 498 487 * @p: pointer needing to lose its __rcu property ··· 568 557 * other macros that it invokes. 569 558 */ 570 559 #define rcu_assign_pointer(p, v) \ 571 - do { \ 560 + context_unsafe( \ 572 561 uintptr_t _r_a_p__v = (uintptr_t)(v); \ 573 562 rcu_check_sparse(p, __rcu); \ 574 563 \ ··· 576 565 WRITE_ONCE((p), (typeof(p))(_r_a_p__v)); \ 577 566 else \ 578 567 smp_store_release(&p, RCU_INITIALIZER((typeof(p))_r_a_p__v)); \ 579 - } while (0) 568 + ) 580 569 581 570 /** 582 571 * rcu_replace_pointer() - replace an RCU pointer, returning its old value ··· 843 832 * only when acquiring spinlocks that are subject to priority inheritance. 844 833 */ 845 834 static __always_inline void rcu_read_lock(void) 835 + __acquires_shared(RCU) 846 836 { 847 837 __rcu_read_lock(); 848 - __acquire(RCU); 838 + __acquire_shared(RCU); 849 839 rcu_lock_acquire(&rcu_lock_map); 850 840 RCU_LOCKDEP_WARN(!rcu_is_watching(), 851 841 "rcu_read_lock() used illegally while idle"); ··· 874 862 * See rcu_read_lock() for more information. 875 863 */ 876 864 static inline void rcu_read_unlock(void) 865 + __releases_shared(RCU) 877 866 { 878 867 RCU_LOCKDEP_WARN(!rcu_is_watching(), 879 868 "rcu_read_unlock() used illegally while idle"); 880 869 rcu_lock_release(&rcu_lock_map); /* Keep acq info for rls diags. */ 881 - __release(RCU); 870 + __release_shared(RCU); 882 871 __rcu_read_unlock(); 883 872 } 884 873 ··· 898 885 * was invoked from some other task. 
899 886 */ 900 887 static inline void rcu_read_lock_bh(void) 888 + __acquires_shared(RCU) __acquires_shared(RCU_BH) 901 889 { 902 890 local_bh_disable(); 903 - __acquire(RCU_BH); 891 + __acquire_shared(RCU); 892 + __acquire_shared(RCU_BH); 904 893 rcu_lock_acquire(&rcu_bh_lock_map); 905 894 RCU_LOCKDEP_WARN(!rcu_is_watching(), 906 895 "rcu_read_lock_bh() used illegally while idle"); ··· 914 899 * See rcu_read_lock_bh() for more information. 915 900 */ 916 901 static inline void rcu_read_unlock_bh(void) 902 + __releases_shared(RCU) __releases_shared(RCU_BH) 917 903 { 918 904 RCU_LOCKDEP_WARN(!rcu_is_watching(), 919 905 "rcu_read_unlock_bh() used illegally while idle"); 920 906 rcu_lock_release(&rcu_bh_lock_map); 921 - __release(RCU_BH); 907 + __release_shared(RCU_BH); 908 + __release_shared(RCU); 922 909 local_bh_enable(); 923 910 } 924 911 ··· 940 923 * rcu_read_lock_sched() was invoked from an NMI handler. 941 924 */ 942 925 static inline void rcu_read_lock_sched(void) 926 + __acquires_shared(RCU) __acquires_shared(RCU_SCHED) 943 927 { 944 928 preempt_disable(); 945 - __acquire(RCU_SCHED); 929 + __acquire_shared(RCU); 930 + __acquire_shared(RCU_SCHED); 946 931 rcu_lock_acquire(&rcu_sched_lock_map); 947 932 RCU_LOCKDEP_WARN(!rcu_is_watching(), 948 933 "rcu_read_lock_sched() used illegally while idle"); ··· 952 933 953 934 /* Used by lockdep and tracing: cannot be traced, cannot call lockdep. */ 954 935 static inline notrace void rcu_read_lock_sched_notrace(void) 936 + __acquires_shared(RCU) __acquires_shared(RCU_SCHED) 955 937 { 956 938 preempt_disable_notrace(); 957 - __acquire(RCU_SCHED); 939 + __acquire_shared(RCU); 940 + __acquire_shared(RCU_SCHED); 958 941 } 959 942 960 943 /** ··· 965 944 * See rcu_read_lock_sched() for more information. 966 945 */ 967 946 static inline void rcu_read_unlock_sched(void) 947 + __releases_shared(RCU) __releases_shared(RCU_SCHED) 968 948 { 969 949 RCU_LOCKDEP_WARN(!rcu_is_watching(), 970 950 "rcu_read_unlock_sched() used illegally while idle"); 971 951 rcu_lock_release(&rcu_sched_lock_map); 972 - __release(RCU_SCHED); 952 + __release_shared(RCU_SCHED); 953 + __release_shared(RCU); 973 954 preempt_enable(); 974 955 } 975 956 976 957 /* Used by lockdep and tracing: cannot be traced, cannot call lockdep. */ 977 958 static inline notrace void rcu_read_unlock_sched_notrace(void) 959 + __releases_shared(RCU) __releases_shared(RCU_SCHED) 978 960 { 979 - __release(RCU_SCHED); 961 + __release_shared(RCU_SCHED); 962 + __release_shared(RCU); 980 963 preempt_enable_notrace(); 981 964 } 982 965 983 966 static __always_inline void rcu_read_lock_dont_migrate(void) 967 + __acquires_shared(RCU) 984 968 { 985 969 if (IS_ENABLED(CONFIG_PREEMPT_RCU)) 986 970 migrate_disable(); ··· 993 967 } 994 968 995 969 static inline void rcu_read_unlock_migrate(void) 970 + __releases_shared(RCU) 996 971 { 997 972 rcu_read_unlock(); 998 973 if (IS_ENABLED(CONFIG_PREEMPT_RCU)) ··· 1039 1012 * ordering guarantees for either the CPU or the compiler. 
1040 1013 */ 1041 1014 #define RCU_INIT_POINTER(p, v) \ 1042 - do { \ 1015 + context_unsafe( \ 1043 1016 rcu_check_sparse(p, __rcu); \ 1044 1017 WRITE_ONCE(p, RCU_INITIALIZER(v)); \ 1045 - } while (0) 1018 + ) 1046 1019 1047 1020 /** 1048 1021 * RCU_POINTER_INITIALIZER() - statically initialize an RCU protected pointer ··· 1190 1163 extern int rcu_expedited; 1191 1164 extern int rcu_normal; 1192 1165 1193 - DEFINE_LOCK_GUARD_0(rcu, 1194 - do { 1195 - rcu_read_lock(); 1196 - /* 1197 - * sparse doesn't call the cleanup function, 1198 - * so just release immediately and don't track 1199 - * the context. We don't need to anyway, since 1200 - * the whole point of the guard is to not need 1201 - * the explicit unlock. 1202 - */ 1203 - __release(RCU); 1204 - } while (0), 1205 - rcu_read_unlock()) 1166 + DEFINE_LOCK_GUARD_0(rcu, rcu_read_lock(), rcu_read_unlock()) 1167 + DECLARE_LOCK_GUARD_0_ATTRS(rcu, __acquires_shared(RCU), __releases_shared(RCU)) 1206 1168 1207 1169 #endif /* __LINUX_RCUPDATE_H */
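RCU becomes a reentrant token context lock, with RCU_BH and RCU_SCHED as additional instances of the same class, and __rcu_guarded ties pointers to it. A hedged sketch (both structs are hypothetical):

	struct entry {
		int	val;
	};

	struct cache {
		struct entry __rcu_guarded *cur;
	};

	static int cache_read_val(struct cache *c)
	{
		guard(rcu)();				/* enters the shared RCU context */
		return rcu_dereference(c->cur)->val;	/* checked: RCU held */
	}

Dereferencing c->cur outside an RCU read-side context is now a compile-time warning in WARN_CONTEXT_ANALYSIS builds.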
+3 -3
include/linux/refcount.h
···
478 478
479 479	extern __must_check bool refcount_dec_if_one(refcount_t *r);
480 480	extern __must_check bool refcount_dec_not_one(refcount_t *r);
481 -	extern __must_check bool refcount_dec_and_mutex_lock(refcount_t *r, struct mutex *lock) __cond_acquires(lock);
482 -	extern __must_check bool refcount_dec_and_lock(refcount_t *r, spinlock_t *lock) __cond_acquires(lock);
481 +	extern __must_check bool refcount_dec_and_mutex_lock(refcount_t *r, struct mutex *lock) __cond_acquires(true, lock);
482 +	extern __must_check bool refcount_dec_and_lock(refcount_t *r, spinlock_t *lock) __cond_acquires(true, lock);
483 483	extern __must_check bool refcount_dec_and_lock_irqsave(refcount_t *r,
484 484								spinlock_t *lock,
485 -								unsigned long *flags) __cond_acquires(lock);
485 +								unsigned long *flags) __cond_acquires(true, lock);
486 486	#endif /* _LINUX_REFCOUNT_H */
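With the symbolic @ret convention, the classic dec-and-lock idiom reads as before but is now checked. A hedged sketch (the object is hypothetical):

	if (refcount_dec_and_lock(&obj->ref, &obj->lock)) {
		/* true return: the spinlock is held here */
		list_del(&obj->node);
		spin_unlock(&obj->lock);
		kfree(obj);
	}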
+13 -3
include/linux/rhashtable.h
··· 245 245 void rhashtable_walk_enter(struct rhashtable *ht, 246 246 struct rhashtable_iter *iter); 247 247 void rhashtable_walk_exit(struct rhashtable_iter *iter); 248 - int rhashtable_walk_start_check(struct rhashtable_iter *iter) __acquires(RCU); 248 + int rhashtable_walk_start_check(struct rhashtable_iter *iter) __acquires_shared(RCU); 249 249 250 250 static inline void rhashtable_walk_start(struct rhashtable_iter *iter) 251 + __acquires_shared(RCU) 251 252 { 252 253 (void)rhashtable_walk_start_check(iter); 253 254 } 254 255 255 256 void *rhashtable_walk_next(struct rhashtable_iter *iter); 256 257 void *rhashtable_walk_peek(struct rhashtable_iter *iter); 257 - void rhashtable_walk_stop(struct rhashtable_iter *iter) __releases(RCU); 258 + void rhashtable_walk_stop(struct rhashtable_iter *iter) __releases_shared(RCU); 258 259 259 260 void rhashtable_free_and_destroy(struct rhashtable *ht, 260 261 void (*free_fn)(void *ptr, void *arg), ··· 326 325 327 326 static inline unsigned long rht_lock(struct bucket_table *tbl, 328 327 struct rhash_lock_head __rcu **bkt) 328 + __acquires(__bitlock(0, bkt)) 329 329 { 330 330 unsigned long flags; 331 331 ··· 339 337 static inline unsigned long rht_lock_nested(struct bucket_table *tbl, 340 338 struct rhash_lock_head __rcu **bucket, 341 339 unsigned int subclass) 340 + __acquires(__bitlock(0, bucket)) 342 341 { 343 342 unsigned long flags; 344 343 ··· 352 349 static inline void rht_unlock(struct bucket_table *tbl, 353 350 struct rhash_lock_head __rcu **bkt, 354 351 unsigned long flags) 352 + __releases(__bitlock(0, bkt)) 355 353 { 356 354 lock_map_release(&tbl->dep_map); 357 355 bit_spin_unlock(0, (unsigned long *)bkt); ··· 428 424 struct rhash_lock_head __rcu **bkt, 429 425 struct rhash_head *obj, 430 426 unsigned long flags) 427 + __releases(__bitlock(0, bkt)) 431 428 { 432 429 if (rht_is_a_nulls(obj)) 433 430 obj = NULL; 434 431 lock_map_release(&tbl->dep_map); 435 432 rcu_assign_pointer(*bkt, (void *)obj); 436 433 preempt_enable(); 437 - __release(bitlock); 434 + __release(__bitlock(0, bkt)); 438 435 local_irq_restore(flags); 439 436 } 440 437 ··· 617 612 struct rhashtable *ht, const void *key, 618 613 const struct rhashtable_params params, 619 614 const enum rht_lookup_freq freq) 615 + __must_hold_shared(RCU) 620 616 { 621 617 struct rhashtable_compare_arg arg = { 622 618 .ht = ht, ··· 672 666 static __always_inline void *rhashtable_lookup( 673 667 struct rhashtable *ht, const void *key, 674 668 const struct rhashtable_params params) 669 + __must_hold_shared(RCU) 675 670 { 676 671 struct rhash_head *he = __rhashtable_lookup(ht, key, params, 677 672 RHT_LOOKUP_NORMAL); ··· 683 676 static __always_inline void *rhashtable_lookup_likely( 684 677 struct rhashtable *ht, const void *key, 685 678 const struct rhashtable_params params) 679 + __must_hold_shared(RCU) 686 680 { 687 681 struct rhash_head *he = __rhashtable_lookup(ht, key, params, 688 682 RHT_LOOKUP_LIKELY); ··· 735 727 static __always_inline struct rhlist_head *rhltable_lookup( 736 728 struct rhltable *hlt, const void *key, 737 729 const struct rhashtable_params params) 730 + __must_hold_shared(RCU) 738 731 { 739 732 struct rhash_head *he = __rhashtable_lookup(&hlt->ht, key, params, 740 733 RHT_LOOKUP_NORMAL); ··· 746 737 static __always_inline struct rhlist_head *rhltable_lookup_likely( 747 738 struct rhltable *hlt, const void *key, 748 739 const struct rhashtable_params params) 740 + __must_hold_shared(RCU) 749 741 { 750 742 struct rhash_head *he = __rhashtable_lookup(&hlt->ht, key, params, 
751 743 RHT_LOOKUP_LIKELY);
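The lookup side now statically requires the shared RCU context via __must_hold_shared(RCU), instead of relying solely on the runtime lockdep check inside the hash functions. A hedged sketch (table, key, params and helpers are hypothetical):

	struct test_obj *obj;

	rcu_read_lock();
	obj = rhashtable_lookup(&ht, &key, params);
	if (obj)
		use_obj(obj);
	rcu_read_unlock();

Calling rhashtable_lookup() without an active RCU read-side context becomes a compile-time warning rather than a lockdep splat.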
+7 -12
include/linux/rwlock.h
··· 29 29 #endif 30 30 31 31 #ifdef CONFIG_DEBUG_SPINLOCK 32 - extern void do_raw_read_lock(rwlock_t *lock) __acquires(lock); 32 + extern void do_raw_read_lock(rwlock_t *lock) __acquires_shared(lock); 33 33 extern int do_raw_read_trylock(rwlock_t *lock); 34 - extern void do_raw_read_unlock(rwlock_t *lock) __releases(lock); 34 + extern void do_raw_read_unlock(rwlock_t *lock) __releases_shared(lock); 35 35 extern void do_raw_write_lock(rwlock_t *lock) __acquires(lock); 36 36 extern int do_raw_write_trylock(rwlock_t *lock); 37 37 extern void do_raw_write_unlock(rwlock_t *lock) __releases(lock); 38 38 #else 39 - # define do_raw_read_lock(rwlock) do {__acquire(lock); arch_read_lock(&(rwlock)->raw_lock); } while (0) 39 + # define do_raw_read_lock(rwlock) do {__acquire_shared(lock); arch_read_lock(&(rwlock)->raw_lock); } while (0) 40 40 # define do_raw_read_trylock(rwlock) arch_read_trylock(&(rwlock)->raw_lock) 41 - # define do_raw_read_unlock(rwlock) do {arch_read_unlock(&(rwlock)->raw_lock); __release(lock); } while (0) 41 + # define do_raw_read_unlock(rwlock) do {arch_read_unlock(&(rwlock)->raw_lock); __release_shared(lock); } while (0) 42 42 # define do_raw_write_lock(rwlock) do {__acquire(lock); arch_write_lock(&(rwlock)->raw_lock); } while (0) 43 43 # define do_raw_write_trylock(rwlock) arch_write_trylock(&(rwlock)->raw_lock) 44 44 # define do_raw_write_unlock(rwlock) do {arch_write_unlock(&(rwlock)->raw_lock); __release(lock); } while (0) ··· 49 49 * regardless of whether CONFIG_SMP or CONFIG_PREEMPT are set. The various 50 50 * methods are defined as nops in the case they are not required. 51 51 */ 52 - #define read_trylock(lock) __cond_lock(lock, _raw_read_trylock(lock)) 53 - #define write_trylock(lock) __cond_lock(lock, _raw_write_trylock(lock)) 52 + #define read_trylock(lock) _raw_read_trylock(lock) 53 + #define write_trylock(lock) _raw_write_trylock(lock) 54 54 55 55 #define write_lock(lock) _raw_write_lock(lock) 56 56 #define read_lock(lock) _raw_read_lock(lock) ··· 112 112 } while (0) 113 113 #define write_unlock_bh(lock) _raw_write_unlock_bh(lock) 114 114 115 - #define write_trylock_irqsave(lock, flags) \ 116 - ({ \ 117 - local_irq_save(flags); \ 118 - write_trylock(lock) ? \ 119 - 1 : ({ local_irq_restore(flags); 0; }); \ 120 - }) 115 + #define write_trylock_irqsave(lock, flags) _raw_write_trylock_irqsave(lock, &(flags)) 121 116 122 117 #ifdef arch_rwlock_is_contended 123 118 #define rwlock_is_contended(lock) \
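write_trylock_irqsave() was an open-coded statement expression that conditionally took the lock, a shape the analysis cannot express on a macro; it is now backed by the annotated helper _raw_write_trylock_irqsave() declared in rwlock_api_smp.h below. Usage is unchanged (the lock is hypothetical):

	unsigned long flags;

	if (write_trylock_irqsave(&my_rwlock, flags)) {
		/* write context active, interrupts saved and disabled */
		write_unlock_irqrestore(&my_rwlock, flags);
	}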
+35 -8
include/linux/rwlock_api_smp.h
··· 15 15 * Released under the General Public License (GPL). 16 16 */ 17 17 18 - void __lockfunc _raw_read_lock(rwlock_t *lock) __acquires(lock); 18 + void __lockfunc _raw_read_lock(rwlock_t *lock) __acquires_shared(lock); 19 19 void __lockfunc _raw_write_lock(rwlock_t *lock) __acquires(lock); 20 20 void __lockfunc _raw_write_lock_nested(rwlock_t *lock, int subclass) __acquires(lock); 21 - void __lockfunc _raw_read_lock_bh(rwlock_t *lock) __acquires(lock); 21 + void __lockfunc _raw_read_lock_bh(rwlock_t *lock) __acquires_shared(lock); 22 22 void __lockfunc _raw_write_lock_bh(rwlock_t *lock) __acquires(lock); 23 - void __lockfunc _raw_read_lock_irq(rwlock_t *lock) __acquires(lock); 23 + void __lockfunc _raw_read_lock_irq(rwlock_t *lock) __acquires_shared(lock); 24 24 void __lockfunc _raw_write_lock_irq(rwlock_t *lock) __acquires(lock); 25 25 unsigned long __lockfunc _raw_read_lock_irqsave(rwlock_t *lock) 26 26 __acquires(lock); 27 27 unsigned long __lockfunc _raw_write_lock_irqsave(rwlock_t *lock) 28 28 __acquires(lock); 29 - int __lockfunc _raw_read_trylock(rwlock_t *lock); 30 - int __lockfunc _raw_write_trylock(rwlock_t *lock); 31 - void __lockfunc _raw_read_unlock(rwlock_t *lock) __releases(lock); 29 + int __lockfunc _raw_read_trylock(rwlock_t *lock) __cond_acquires_shared(true, lock); 30 + int __lockfunc _raw_write_trylock(rwlock_t *lock) __cond_acquires(true, lock); 31 + void __lockfunc _raw_read_unlock(rwlock_t *lock) __releases_shared(lock); 32 32 void __lockfunc _raw_write_unlock(rwlock_t *lock) __releases(lock); 33 - void __lockfunc _raw_read_unlock_bh(rwlock_t *lock) __releases(lock); 33 + void __lockfunc _raw_read_unlock_bh(rwlock_t *lock) __releases_shared(lock); 34 34 void __lockfunc _raw_write_unlock_bh(rwlock_t *lock) __releases(lock); 35 - void __lockfunc _raw_read_unlock_irq(rwlock_t *lock) __releases(lock); 35 + void __lockfunc _raw_read_unlock_irq(rwlock_t *lock) __releases_shared(lock); 36 36 void __lockfunc _raw_write_unlock_irq(rwlock_t *lock) __releases(lock); 37 37 void __lockfunc 38 38 _raw_read_unlock_irqrestore(rwlock_t *lock, unsigned long flags) ··· 137 137 return 0; 138 138 } 139 139 140 + static inline bool _raw_write_trylock_irqsave(rwlock_t *lock, unsigned long *flags) 141 + __cond_acquires(true, lock) __no_context_analysis 142 + { 143 + local_irq_save(*flags); 144 + if (_raw_write_trylock(lock)) 145 + return true; 146 + local_irq_restore(*flags); 147 + return false; 148 + } 149 + 140 150 /* 141 151 * If lockdep is enabled then we use the non-preemption spin-ops 142 152 * even on CONFIG_PREEMPT, because lockdep assumes that interrupts are ··· 155 145 #if !defined(CONFIG_GENERIC_LOCKBREAK) || defined(CONFIG_DEBUG_LOCK_ALLOC) 156 146 157 147 static inline void __raw_read_lock(rwlock_t *lock) 148 + __acquires_shared(lock) __no_context_analysis 158 149 { 159 150 preempt_disable(); 160 151 rwlock_acquire_read(&lock->dep_map, 0, 0, _RET_IP_); ··· 163 152 } 164 153 165 154 static inline unsigned long __raw_read_lock_irqsave(rwlock_t *lock) 155 + __acquires_shared(lock) __no_context_analysis 166 156 { 167 157 unsigned long flags; 168 158 ··· 175 163 } 176 164 177 165 static inline void __raw_read_lock_irq(rwlock_t *lock) 166 + __acquires_shared(lock) __no_context_analysis 178 167 { 179 168 local_irq_disable(); 180 169 preempt_disable(); ··· 184 171 } 185 172 186 173 static inline void __raw_read_lock_bh(rwlock_t *lock) 174 + __acquires_shared(lock) __no_context_analysis 187 175 { 188 176 __local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET); 189 177 
rwlock_acquire_read(&lock->dep_map, 0, 0, _RET_IP_); ··· 192 178 } 193 179 194 180 static inline unsigned long __raw_write_lock_irqsave(rwlock_t *lock) 181 + __acquires(lock) __no_context_analysis 195 182 { 196 183 unsigned long flags; 197 184 ··· 204 189 } 205 190 206 191 static inline void __raw_write_lock_irq(rwlock_t *lock) 192 + __acquires(lock) __no_context_analysis 207 193 { 208 194 local_irq_disable(); 209 195 preempt_disable(); ··· 213 197 } 214 198 215 199 static inline void __raw_write_lock_bh(rwlock_t *lock) 200 + __acquires(lock) __no_context_analysis 216 201 { 217 202 __local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET); 218 203 rwlock_acquire(&lock->dep_map, 0, 0, _RET_IP_); ··· 221 204 } 222 205 223 206 static inline void __raw_write_lock(rwlock_t *lock) 207 + __acquires(lock) __no_context_analysis 224 208 { 225 209 preempt_disable(); 226 210 rwlock_acquire(&lock->dep_map, 0, 0, _RET_IP_); ··· 229 211 } 230 212 231 213 static inline void __raw_write_lock_nested(rwlock_t *lock, int subclass) 214 + __acquires(lock) __no_context_analysis 232 215 { 233 216 preempt_disable(); 234 217 rwlock_acquire(&lock->dep_map, subclass, 0, _RET_IP_); ··· 239 220 #endif /* !CONFIG_GENERIC_LOCKBREAK || CONFIG_DEBUG_LOCK_ALLOC */ 240 221 241 222 static inline void __raw_write_unlock(rwlock_t *lock) 223 + __releases(lock) 242 224 { 243 225 rwlock_release(&lock->dep_map, _RET_IP_); 244 226 do_raw_write_unlock(lock); ··· 247 227 } 248 228 249 229 static inline void __raw_read_unlock(rwlock_t *lock) 230 + __releases_shared(lock) 250 231 { 251 232 rwlock_release(&lock->dep_map, _RET_IP_); 252 233 do_raw_read_unlock(lock); ··· 256 235 257 236 static inline void 258 237 __raw_read_unlock_irqrestore(rwlock_t *lock, unsigned long flags) 238 + __releases_shared(lock) 259 239 { 260 240 rwlock_release(&lock->dep_map, _RET_IP_); 261 241 do_raw_read_unlock(lock); ··· 265 243 } 266 244 267 245 static inline void __raw_read_unlock_irq(rwlock_t *lock) 246 + __releases_shared(lock) 268 247 { 269 248 rwlock_release(&lock->dep_map, _RET_IP_); 270 249 do_raw_read_unlock(lock); ··· 274 251 } 275 252 276 253 static inline void __raw_read_unlock_bh(rwlock_t *lock) 254 + __releases_shared(lock) 277 255 { 278 256 rwlock_release(&lock->dep_map, _RET_IP_); 279 257 do_raw_read_unlock(lock); ··· 283 259 284 260 static inline void __raw_write_unlock_irqrestore(rwlock_t *lock, 285 261 unsigned long flags) 262 + __releases(lock) 286 263 { 287 264 rwlock_release(&lock->dep_map, _RET_IP_); 288 265 do_raw_write_unlock(lock); ··· 292 267 } 293 268 294 269 static inline void __raw_write_unlock_irq(rwlock_t *lock) 270 + __releases(lock) 295 271 { 296 272 rwlock_release(&lock->dep_map, _RET_IP_); 297 273 do_raw_write_unlock(lock); ··· 301 275 } 302 276 303 277 static inline void __raw_write_unlock_bh(rwlock_t *lock) 278 + __releases(lock) 304 279 { 305 280 rwlock_release(&lock->dep_map, _RET_IP_); 306 281 do_raw_write_unlock(lock);
+28 -15
include/linux/rwlock_rt.h
··· 24 24 __rt_rwlock_init(rwl, #rwl, &__key); \ 25 25 } while (0) 26 26 27 - extern void rt_read_lock(rwlock_t *rwlock) __acquires(rwlock); 28 - extern int rt_read_trylock(rwlock_t *rwlock); 29 - extern void rt_read_unlock(rwlock_t *rwlock) __releases(rwlock); 27 + extern void rt_read_lock(rwlock_t *rwlock) __acquires_shared(rwlock); 28 + extern int rt_read_trylock(rwlock_t *rwlock) __cond_acquires_shared(true, rwlock); 29 + extern void rt_read_unlock(rwlock_t *rwlock) __releases_shared(rwlock); 30 30 extern void rt_write_lock(rwlock_t *rwlock) __acquires(rwlock); 31 31 extern void rt_write_lock_nested(rwlock_t *rwlock, int subclass) __acquires(rwlock); 32 - extern int rt_write_trylock(rwlock_t *rwlock); 32 + extern int rt_write_trylock(rwlock_t *rwlock) __cond_acquires(true, rwlock); 33 33 extern void rt_write_unlock(rwlock_t *rwlock) __releases(rwlock); 34 34 35 35 static __always_inline void read_lock(rwlock_t *rwlock) 36 + __acquires_shared(rwlock) 36 37 { 37 38 rt_read_lock(rwlock); 38 39 } 39 40 40 41 static __always_inline void read_lock_bh(rwlock_t *rwlock) 42 + __acquires_shared(rwlock) 41 43 { 42 44 local_bh_disable(); 43 45 rt_read_lock(rwlock); 44 46 } 45 47 46 48 static __always_inline void read_lock_irq(rwlock_t *rwlock) 49 + __acquires_shared(rwlock) 47 50 { 48 51 rt_read_lock(rwlock); 49 52 } ··· 58 55 flags = 0; \ 59 56 } while (0) 60 57 61 - #define read_trylock(lock) __cond_lock(lock, rt_read_trylock(lock)) 58 + #define read_trylock(lock) rt_read_trylock(lock) 62 59 63 60 static __always_inline void read_unlock(rwlock_t *rwlock) 61 + __releases_shared(rwlock) 64 62 { 65 63 rt_read_unlock(rwlock); 66 64 } 67 65 68 66 static __always_inline void read_unlock_bh(rwlock_t *rwlock) 67 + __releases_shared(rwlock) 69 68 { 70 69 rt_read_unlock(rwlock); 71 70 local_bh_enable(); 72 71 } 73 72 74 73 static __always_inline void read_unlock_irq(rwlock_t *rwlock) 74 + __releases_shared(rwlock) 75 75 { 76 76 rt_read_unlock(rwlock); 77 77 } 78 78 79 79 static __always_inline void read_unlock_irqrestore(rwlock_t *rwlock, 80 80 unsigned long flags) 81 + __releases_shared(rwlock) 81 82 { 82 83 rt_read_unlock(rwlock); 83 84 } 84 85 85 86 static __always_inline void write_lock(rwlock_t *rwlock) 87 + __acquires(rwlock) 86 88 { 87 89 rt_write_lock(rwlock); 88 90 } 89 91 90 92 #ifdef CONFIG_DEBUG_LOCK_ALLOC 91 93 static __always_inline void write_lock_nested(rwlock_t *rwlock, int subclass) 94 + __acquires(rwlock) 92 95 { 93 96 rt_write_lock_nested(rwlock, subclass); 94 97 } ··· 103 94 #endif 104 95 105 96 static __always_inline void write_lock_bh(rwlock_t *rwlock) 97 + __acquires(rwlock) 106 98 { 107 99 local_bh_disable(); 108 100 rt_write_lock(rwlock); 109 101 } 110 102 111 103 static __always_inline void write_lock_irq(rwlock_t *rwlock) 104 + __acquires(rwlock) 112 105 { 113 106 rt_write_lock(rwlock); 114 107 } ··· 122 111 flags = 0; \ 123 112 } while (0) 124 113 125 - #define write_trylock(lock) __cond_lock(lock, rt_write_trylock(lock)) 114 + #define write_trylock(lock) rt_write_trylock(lock) 126 115 127 - #define write_trylock_irqsave(lock, flags) \ 128 - ({ \ 129 - int __locked; \ 130 - \ 131 - typecheck(unsigned long, flags); \ 132 - flags = 0; \ 133 - __locked = write_trylock(lock); \ 134 - __locked; \ 135 - }) 116 + static __always_inline bool _write_trylock_irqsave(rwlock_t *rwlock, unsigned long *flags) 117 + __cond_acquires(true, rwlock) 118 + { 119 + *flags = 0; 120 + return rt_write_trylock(rwlock); 121 + } 122 + #define write_trylock_irqsave(lock, flags) 
_write_trylock_irqsave(lock, &(flags)) 136 123 137 124 static __always_inline void write_unlock(rwlock_t *rwlock) 125 + __releases(rwlock) 138 126 { 139 127 rt_write_unlock(rwlock); 140 128 } 141 129 142 130 static __always_inline void write_unlock_bh(rwlock_t *rwlock) 131 + __releases(rwlock) 143 132 { 144 133 rt_write_unlock(rwlock); 145 134 local_bh_enable(); 146 135 } 147 136 148 137 static __always_inline void write_unlock_irq(rwlock_t *rwlock) 138 + __releases(rwlock) 149 139 { 150 140 rt_write_unlock(rwlock); 151 141 } 152 142 153 143 static __always_inline void write_unlock_irqrestore(rwlock_t *rwlock, 154 144 unsigned long flags) 145 + __releases(rwlock) 155 146 { 156 147 rt_write_unlock(rwlock); 157 148 }
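As on the SMP side above, write_trylock_irqsave() stops being a statement-expression macro and becomes a real inline function: __cond_acquires() and friends attach to function declarations, not macro bodies, so every trylock that should be visible to the analysis gets this treatment. A sketch of the caller pattern the annotation models, with a hypothetical lock:

    static DEFINE_RWLOCK(obj_lock);         /* hypothetical */

    static bool obj_try_update(void)
    {
            unsigned long flags;

            if (!write_trylock_irqsave(&obj_lock, flags))
                    return false;           /* lock not held on this path */
            /* lock held here; __cond_acquires(true, ...) ties that to the return value */
            write_unlock_irqrestore(&obj_lock, flags);
            return true;
    }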
+6 -4
include/linux/rwlock_types.h
··· 22 22 * portions Copyright 2005, Red Hat, Inc., Ingo Molnar 23 23 * Released under the General Public License (GPL). 24 24 */ 25 - typedef struct { 25 + context_lock_struct(rwlock) { 26 26 arch_rwlock_t raw_lock; 27 27 #ifdef CONFIG_DEBUG_SPINLOCK 28 28 unsigned int magic, owner_cpu; ··· 31 31 #ifdef CONFIG_DEBUG_LOCK_ALLOC 32 32 struct lockdep_map dep_map; 33 33 #endif 34 - } rwlock_t; 34 + }; 35 + typedef struct rwlock rwlock_t; 35 36 36 37 #define RWLOCK_MAGIC 0xdeaf1eed 37 38 ··· 55 54 56 55 #include <linux/rwbase_rt.h> 57 56 58 - typedef struct { 57 + context_lock_struct(rwlock) { 59 58 struct rwbase_rt rwbase; 60 59 atomic_t readers; 61 60 #ifdef CONFIG_DEBUG_LOCK_ALLOC 62 61 struct lockdep_map dep_map; 63 62 #endif 64 - } rwlock_t; 63 + }; 64 + typedef struct rwlock rwlock_t; 65 65 66 66 #define __RWLOCK_RT_INITIALIZER(name) \ 67 67 { \
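context_lock_struct() gives the previously anonymous lock structure a tag name and declares it as a capability to the compiler; the trailing typedef keeps rwlock_t source-compatible. Once the type is a capability, individual fields can be tied to the lock instance that protects them via __guarded_by(), the companion annotation referenced in the srcu.h hunk further down. A sketch with a hypothetical structure:

    struct conn_stats {                             /* hypothetical */
            rwlock_t lock;
            u64 rx_bytes __guarded_by(&lock);       /* access requires lock */
            u64 tx_bytes __guarded_by(&lock);
    };

With that in place, a clang-22+ build in an opted-in subsystem can warn when rx_bytes is written without the lock held exclusively, or read without it held at least shared.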
+49 -25
include/linux/rwsem.h
··· 45 45 * reduce the chance that they will share the same cacheline causing 46 46 * cacheline bouncing problem. 47 47 */ 48 - struct rw_semaphore { 48 + context_lock_struct(rw_semaphore) { 49 49 atomic_long_t count; 50 50 /* 51 51 * Write owner or one of the read owners as well flags regarding ··· 76 76 } 77 77 78 78 static inline void rwsem_assert_held_nolockdep(const struct rw_semaphore *sem) 79 + __assumes_ctx_lock(sem) 79 80 { 80 81 WARN_ON(atomic_long_read(&sem->count) == RWSEM_UNLOCKED_VALUE); 81 82 } 82 83 83 84 static inline void rwsem_assert_held_write_nolockdep(const struct rw_semaphore *sem) 85 + __assumes_ctx_lock(sem) 84 86 { 85 87 WARN_ON(!(atomic_long_read(&sem->count) & RWSEM_WRITER_LOCKED)); 86 88 } ··· 150 148 151 149 #include <linux/rwbase_rt.h> 152 150 153 - struct rw_semaphore { 151 + context_lock_struct(rw_semaphore) { 154 152 struct rwbase_rt rwbase; 155 153 #ifdef CONFIG_DEBUG_LOCK_ALLOC 156 154 struct lockdep_map dep_map; ··· 182 180 } 183 181 184 182 static __always_inline void rwsem_assert_held_nolockdep(const struct rw_semaphore *sem) 183 + __assumes_ctx_lock(sem) 185 184 { 186 185 WARN_ON(!rwsem_is_locked(sem)); 187 186 } 188 187 189 188 static __always_inline void rwsem_assert_held_write_nolockdep(const struct rw_semaphore *sem) 189 + __assumes_ctx_lock(sem) 190 190 { 191 191 WARN_ON(!rw_base_is_write_locked(&sem->rwbase)); 192 192 } ··· 206 202 */ 207 203 208 204 static inline void rwsem_assert_held(const struct rw_semaphore *sem) 205 + __assumes_ctx_lock(sem) 209 206 { 210 207 if (IS_ENABLED(CONFIG_LOCKDEP)) 211 208 lockdep_assert_held(sem); ··· 215 210 } 216 211 217 212 static inline void rwsem_assert_held_write(const struct rw_semaphore *sem) 213 + __assumes_ctx_lock(sem) 218 214 { 219 215 if (IS_ENABLED(CONFIG_LOCKDEP)) 220 216 lockdep_assert_held_write(sem); ··· 226 220 /* 227 221 * lock for reading 228 222 */ 229 - extern void down_read(struct rw_semaphore *sem); 230 - extern int __must_check down_read_interruptible(struct rw_semaphore *sem); 231 - extern int __must_check down_read_killable(struct rw_semaphore *sem); 223 + extern void down_read(struct rw_semaphore *sem) __acquires_shared(sem); 224 + extern int __must_check down_read_interruptible(struct rw_semaphore *sem) __cond_acquires_shared(0, sem); 225 + extern int __must_check down_read_killable(struct rw_semaphore *sem) __cond_acquires_shared(0, sem); 232 226 233 227 /* 234 228 * trylock for reading -- returns 1 if successful, 0 if contention 235 229 */ 236 - extern int down_read_trylock(struct rw_semaphore *sem); 230 + extern int down_read_trylock(struct rw_semaphore *sem) __cond_acquires_shared(true, sem); 237 231 238 232 /* 239 233 * lock for writing 240 234 */ 241 - extern void down_write(struct rw_semaphore *sem); 242 - extern int __must_check down_write_killable(struct rw_semaphore *sem); 235 + extern void down_write(struct rw_semaphore *sem) __acquires(sem); 236 + extern int __must_check down_write_killable(struct rw_semaphore *sem) __cond_acquires(0, sem); 243 237 244 238 /* 245 239 * trylock for writing -- returns 1 if successful, 0 if contention 246 240 */ 247 - extern int down_write_trylock(struct rw_semaphore *sem); 241 + extern int down_write_trylock(struct rw_semaphore *sem) __cond_acquires(true, sem); 248 242 249 243 /* 250 244 * release a read lock 251 245 */ 252 - extern void up_read(struct rw_semaphore *sem); 246 + extern void up_read(struct rw_semaphore *sem) __releases_shared(sem); 253 247 254 248 /* 255 249 * release a write lock 256 250 */ 257 - extern void up_write(struct 
rw_semaphore *sem); 251 + extern void up_write(struct rw_semaphore *sem) __releases(sem); 258 252 259 - DEFINE_GUARD(rwsem_read, struct rw_semaphore *, down_read(_T), up_read(_T)) 260 - DEFINE_GUARD_COND(rwsem_read, _try, down_read_trylock(_T)) 261 - DEFINE_GUARD_COND(rwsem_read, _intr, down_read_interruptible(_T), _RET == 0) 253 + DEFINE_LOCK_GUARD_1(rwsem_read, struct rw_semaphore, down_read(_T->lock), up_read(_T->lock)) 254 + DEFINE_LOCK_GUARD_1_COND(rwsem_read, _try, down_read_trylock(_T->lock)) 255 + DEFINE_LOCK_GUARD_1_COND(rwsem_read, _intr, down_read_interruptible(_T->lock), _RET == 0) 262 256 263 - DEFINE_GUARD(rwsem_write, struct rw_semaphore *, down_write(_T), up_write(_T)) 264 - DEFINE_GUARD_COND(rwsem_write, _try, down_write_trylock(_T)) 265 - DEFINE_GUARD_COND(rwsem_write, _kill, down_write_killable(_T), _RET == 0) 257 + DECLARE_LOCK_GUARD_1_ATTRS(rwsem_read, __acquires_shared(_T), __releases_shared(*(struct rw_semaphore **)_T)) 258 + #define class_rwsem_read_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(rwsem_read, _T) 259 + DECLARE_LOCK_GUARD_1_ATTRS(rwsem_read_try, __acquires_shared(_T), __releases_shared(*(struct rw_semaphore **)_T)) 260 + #define class_rwsem_read_try_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(rwsem_read_try, _T) 261 + DECLARE_LOCK_GUARD_1_ATTRS(rwsem_read_intr, __acquires_shared(_T), __releases_shared(*(struct rw_semaphore **)_T)) 262 + #define class_rwsem_read_intr_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(rwsem_read_intr, _T) 263 + 264 + DEFINE_LOCK_GUARD_1(rwsem_write, struct rw_semaphore, down_write(_T->lock), up_write(_T->lock)) 265 + DEFINE_LOCK_GUARD_1_COND(rwsem_write, _try, down_write_trylock(_T->lock)) 266 + DEFINE_LOCK_GUARD_1_COND(rwsem_write, _kill, down_write_killable(_T->lock), _RET == 0) 267 + 268 + DECLARE_LOCK_GUARD_1_ATTRS(rwsem_write, __acquires(_T), __releases(*(struct rw_semaphore **)_T)) 269 + #define class_rwsem_write_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(rwsem_write, _T) 270 + DECLARE_LOCK_GUARD_1_ATTRS(rwsem_write_try, __acquires(_T), __releases(*(struct rw_semaphore **)_T)) 271 + #define class_rwsem_write_try_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(rwsem_write_try, _T) 272 + DECLARE_LOCK_GUARD_1_ATTRS(rwsem_write_kill, __acquires(_T), __releases(*(struct rw_semaphore **)_T)) 273 + #define class_rwsem_write_kill_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(rwsem_write_kill, _T) 274 + 275 + DEFINE_LOCK_GUARD_1(rwsem_init, struct rw_semaphore, init_rwsem(_T->lock), /* */) 276 + DECLARE_LOCK_GUARD_1_ATTRS(rwsem_init, __acquires(_T), __releases(*(struct rw_semaphore **)_T)) 277 + #define class_rwsem_init_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(rwsem_init, _T) 266 278 267 279 /* 268 280 * downgrade write lock to read lock 269 281 */ 270 - extern void downgrade_write(struct rw_semaphore *sem); 282 + extern void downgrade_write(struct rw_semaphore *sem) __releases(sem) __acquires_shared(sem); 271 283 272 284 #ifdef CONFIG_DEBUG_LOCK_ALLOC 273 285 /* ··· 301 277 * lockdep_set_class() at lock initialization time. 302 278 * See Documentation/locking/lockdep-design.rst for more details.) 
303 279 */ 304 - extern void down_read_nested(struct rw_semaphore *sem, int subclass); 305 - extern int __must_check down_read_killable_nested(struct rw_semaphore *sem, int subclass); 306 - extern void down_write_nested(struct rw_semaphore *sem, int subclass); 307 - extern int down_write_killable_nested(struct rw_semaphore *sem, int subclass); 308 - extern void _down_write_nest_lock(struct rw_semaphore *sem, struct lockdep_map *nest_lock); 280 + extern void down_read_nested(struct rw_semaphore *sem, int subclass) __acquires_shared(sem); 281 + extern int __must_check down_read_killable_nested(struct rw_semaphore *sem, int subclass) __cond_acquires_shared(0, sem); 282 + extern void down_write_nested(struct rw_semaphore *sem, int subclass) __acquires(sem); 283 + extern int down_write_killable_nested(struct rw_semaphore *sem, int subclass) __cond_acquires(0, sem); 284 + extern void _down_write_nest_lock(struct rw_semaphore *sem, struct lockdep_map *nest_lock) __acquires(sem); 309 285 310 286 # define down_write_nest_lock(sem, nest_lock) \ 311 287 do { \ ··· 319 295 * [ This API should be avoided as much as possible - the 320 296 * proper abstraction for this case is completions. ] 321 297 */ 322 - extern void down_read_non_owner(struct rw_semaphore *sem); 323 - extern void up_read_non_owner(struct rw_semaphore *sem); 298 + extern void down_read_non_owner(struct rw_semaphore *sem) __acquires_shared(sem); 299 + extern void up_read_non_owner(struct rw_semaphore *sem) __releases_shared(sem); 324 300 #else 325 301 # define down_read_nested(sem, subclass) down_read(sem) 326 302 # define down_read_killable_nested(sem, subclass) down_read_killable(sem)
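The rwsem guards move from DEFINE_GUARD() to DEFINE_LOCK_GUARD_1() so that DECLARE_LOCK_GUARD_1_ATTRS() can hang acquire/release attributes on the guard constructor and destructor, and the new rwsem_init guard models initialization as an acquire immediately followed by a release, leaving a fresh rwsem in the unheld state. Caller-visible usage is unchanged; a sketch with a hypothetical semaphore:

    #include <linux/rwsem.h>
    #include <linux/cleanup.h>

    static DECLARE_RWSEM(cfg_rwsem);        /* hypothetical */

    static int cfg_show(void)
    {
            guard(rwsem_read)(&cfg_rwsem);  /* up_read() runs at scope exit */
            /* shared read-side critical section */
            return 0;
    }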
+3 -3
include/linux/sched.h
··· 2095 2095 _cond_resched(); \ 2096 2096 }) 2097 2097 2098 - extern int __cond_resched_lock(spinlock_t *lock); 2099 - extern int __cond_resched_rwlock_read(rwlock_t *lock); 2100 - extern int __cond_resched_rwlock_write(rwlock_t *lock); 2098 + extern int __cond_resched_lock(spinlock_t *lock) __must_hold(lock); 2099 + extern int __cond_resched_rwlock_read(rwlock_t *lock) __must_hold_shared(lock); 2100 + extern int __cond_resched_rwlock_write(rwlock_t *lock) __must_hold(lock); 2101 2101 2102 2102 #define MIGHT_RESCHED_RCU_SHIFT 8 2103 2103 #define MIGHT_RESCHED_PREEMPT_MASK ((1U << MIGHT_RESCHED_RCU_SHIFT) - 1)
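__must_hold() (and the new __must_hold_shared()) states that the lock is held on entry and still held on return, which is the right contract for the cond_resched helpers: they may drop and re-take the lock internally, but the caller's hold surrounds the call. A sketch of the calling shape:

    static void scan_under_lock(spinlock_t *lock)   /* hypothetical */
    {
            spin_lock(lock);
            /* ... first half of a long scan ... */
            cond_resched_lock(lock);        /* lock held before and after */
            /* ... second half ... */
            spin_unlock(lock);
    }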
+4 -12
include/linux/sched/signal.h
··· 737 737 #define delay_group_leader(p) \ 738 738 (thread_group_leader(p) && !thread_group_empty(p)) 739 739 740 - extern struct sighand_struct *__lock_task_sighand(struct task_struct *task, 741 - unsigned long *flags); 742 - 743 - static inline struct sighand_struct *lock_task_sighand(struct task_struct *task, 744 - unsigned long *flags) 745 - { 746 - struct sighand_struct *ret; 747 - 748 - ret = __lock_task_sighand(task, flags); 749 - (void)__cond_lock(&task->sighand->siglock, ret); 750 - return ret; 751 - } 740 + extern struct sighand_struct *lock_task_sighand(struct task_struct *task, 741 + unsigned long *flags) 742 + __acquires(&task->sighand->siglock); 752 743 753 744 static inline void unlock_task_sighand(struct task_struct *task, 754 745 unsigned long *flags) 746 + __releases(&task->sighand->siglock) 755 747 { 756 748 spin_unlock_irqrestore(&task->sighand->siglock, *flags); 757 749 }
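With __cond_lock() gone, the __lock_task_sighand()/lock_task_sighand() wrapper pair collapses into a single annotated declaration. The canonical NULL-checked caller pattern is unchanged (sketch; note the unconditional __acquires() on the declaration is a simplification, since a NULL return means nothing was locked):

    static void poke_sighand(struct task_struct *task)      /* hypothetical */
    {
            unsigned long flags;
            struct sighand_struct *sighand;

            sighand = lock_task_sighand(task, &flags);
            if (!sighand)
                    return;         /* task is exiting; siglock not taken */
            /* task->sighand->siglock held here */
            unlock_task_sighand(task, &flags);
    }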
+5 -1
include/linux/sched/task.h
··· 214 214 * write_lock_irq(&tasklist_lock), neither inside nor outside. 215 215 */ 216 216 static inline void task_lock(struct task_struct *p) 217 + __acquires(&p->alloc_lock) 217 218 { 218 219 spin_lock(&p->alloc_lock); 219 220 } 220 221 221 222 static inline void task_unlock(struct task_struct *p) 223 + __releases(&p->alloc_lock) 222 224 { 223 225 spin_unlock(&p->alloc_lock); 224 226 } 225 227 226 - DEFINE_GUARD(task_lock, struct task_struct *, task_lock(_T), task_unlock(_T)) 228 + DEFINE_LOCK_GUARD_1(task_lock, struct task_struct, task_lock(_T->lock), task_unlock(_T->lock)) 229 + DECLARE_LOCK_GUARD_1_ATTRS(task_lock, __acquires(&_T->alloc_lock), __releases(&(*(struct task_struct **)_T)->alloc_lock)) 230 + #define class_task_lock_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(task_lock, _T) 227 231 228 232 #endif /* _LINUX_SCHED_TASK_H */
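task_lock() takes p->alloc_lock, which protects ->comm among other task fields; the guard conversion lets the scoped form carry the same acquire/release information through the constructor attributes. Sketch:

    #include <linux/sched/task.h>

    static void dump_comm(struct task_struct *p)    /* hypothetical */
    {
            guard(task_lock)(p);    /* task_unlock(p) at scope exit */
            pr_info("comm=%s\n", p->comm);
    }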
+3
include/linux/sched/wake_q.h
··· 66 66 /* Spin unlock helpers to unlock and call wake_up_q with preempt disabled */ 67 67 static inline 68 68 void raw_spin_unlock_wake(raw_spinlock_t *lock, struct wake_q_head *wake_q) 69 + __releases(lock) 69 70 { 70 71 guard(preempt)(); 71 72 raw_spin_unlock(lock); ··· 78 77 79 78 static inline 80 79 void raw_spin_unlock_irq_wake(raw_spinlock_t *lock, struct wake_q_head *wake_q) 80 + __releases(lock) 81 81 { 82 82 guard(preempt)(); 83 83 raw_spin_unlock_irq(lock); ··· 91 89 static inline 92 90 void raw_spin_unlock_irqrestore_wake(raw_spinlock_t *lock, unsigned long flags, 93 91 struct wake_q_head *wake_q) 92 + __releases(lock) 94 93 { 95 94 guard(preempt)(); 96 95 raw_spin_unlock_irqrestore(lock, flags);
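These helpers drop the lock on the caller's behalf before waking, so a plain __releases() on each tells the analysis that the caller's hold ends inside the helper. Caller-side shape, as a sketch:

    static void complete_op(raw_spinlock_t *lock, struct wake_q_head *wake_q)
    {
            raw_spin_lock(lock);
            /* ... queue waiters onto wake_q ... */
            raw_spin_unlock_wake(lock, wake_q);     /* unlocks, then wake_up_q() */
    }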
+48 -9
include/linux/seqlock.h
··· 14 14 */ 15 15 16 16 #include <linux/compiler.h> 17 + #include <linux/cleanup.h> 17 18 #include <linux/kcsan-checks.h> 18 19 #include <linux/lockdep.h> 19 20 #include <linux/mutex.h> ··· 833 832 * Return: count, to be passed to read_seqretry() 834 833 */ 835 834 static inline unsigned read_seqbegin(const seqlock_t *sl) 835 + __acquires_shared(sl) __no_context_analysis 836 836 { 837 837 return read_seqcount_begin(&sl->seqcount); 838 838 } ··· 850 848 * Return: true if a read section retry is required, else false 851 849 */ 852 850 static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start) 851 + __releases_shared(sl) __no_context_analysis 853 852 { 854 853 return read_seqcount_retry(&sl->seqcount, start); 855 854 } ··· 875 872 * _irqsave or _bh variants of this function instead. 876 873 */ 877 874 static inline void write_seqlock(seqlock_t *sl) 875 + __acquires(sl) __no_context_analysis 878 876 { 879 877 spin_lock(&sl->lock); 880 878 do_write_seqcount_begin(&sl->seqcount.seqcount); ··· 889 885 * critical section of given seqlock_t. 890 886 */ 891 887 static inline void write_sequnlock(seqlock_t *sl) 888 + __releases(sl) __no_context_analysis 892 889 { 893 890 do_write_seqcount_end(&sl->seqcount.seqcount); 894 891 spin_unlock(&sl->lock); ··· 903 898 * other write side sections, can be invoked from softirq contexts. 904 899 */ 905 900 static inline void write_seqlock_bh(seqlock_t *sl) 901 + __acquires(sl) __no_context_analysis 906 902 { 907 903 spin_lock_bh(&sl->lock); 908 904 do_write_seqcount_begin(&sl->seqcount.seqcount); ··· 918 912 * write_seqlock_bh(). 919 913 */ 920 914 static inline void write_sequnlock_bh(seqlock_t *sl) 915 + __releases(sl) __no_context_analysis 921 916 { 922 917 do_write_seqcount_end(&sl->seqcount.seqcount); 923 918 spin_unlock_bh(&sl->lock); ··· 932 925 * other write sections, can be invoked from hardirq contexts. 933 926 */ 934 927 static inline void write_seqlock_irq(seqlock_t *sl) 928 + __acquires(sl) __no_context_analysis 935 929 { 936 930 spin_lock_irq(&sl->lock); 937 931 do_write_seqcount_begin(&sl->seqcount.seqcount); ··· 946 938 * seqlock_t write side section opened with write_seqlock_irq(). 947 939 */ 948 940 static inline void write_sequnlock_irq(seqlock_t *sl) 941 + __releases(sl) __no_context_analysis 949 942 { 950 943 do_write_seqcount_end(&sl->seqcount.seqcount); 951 944 spin_unlock_irq(&sl->lock); 952 945 } 953 946 954 947 static inline unsigned long __write_seqlock_irqsave(seqlock_t *sl) 948 + __acquires(sl) __no_context_analysis 955 949 { 956 950 unsigned long flags; 957 951 ··· 986 976 */ 987 977 static inline void 988 978 write_sequnlock_irqrestore(seqlock_t *sl, unsigned long flags) 979 + __releases(sl) __no_context_analysis 989 980 { 990 981 do_write_seqcount_end(&sl->seqcount.seqcount); 991 982 spin_unlock_irqrestore(&sl->lock, flags); ··· 1009 998 * The opened read section must be closed with read_sequnlock_excl(). 1010 999 */ 1011 1000 static inline void read_seqlock_excl(seqlock_t *sl) 1001 + __acquires_shared(sl) __no_context_analysis 1012 1002 { 1013 1003 spin_lock(&sl->lock); 1014 1004 } ··· 1019 1007 * @sl: Pointer to seqlock_t 1020 1008 */ 1021 1009 static inline void read_sequnlock_excl(seqlock_t *sl) 1010 + __releases_shared(sl) __no_context_analysis 1022 1011 { 1023 1012 spin_unlock(&sl->lock); 1024 1013 } ··· 1034 1021 * from softirq contexts. 
1035 1022 */ 1036 1023 static inline void read_seqlock_excl_bh(seqlock_t *sl) 1024 + __acquires_shared(sl) __no_context_analysis 1037 1025 { 1038 1026 spin_lock_bh(&sl->lock); 1039 1027 } ··· 1045 1031 * @sl: Pointer to seqlock_t 1046 1032 */ 1047 1033 static inline void read_sequnlock_excl_bh(seqlock_t *sl) 1034 + __releases_shared(sl) __no_context_analysis 1048 1035 { 1049 1036 spin_unlock_bh(&sl->lock); 1050 1037 } ··· 1060 1045 * hardirq context. 1061 1046 */ 1062 1047 static inline void read_seqlock_excl_irq(seqlock_t *sl) 1048 + __acquires_shared(sl) __no_context_analysis 1063 1049 { 1064 1050 spin_lock_irq(&sl->lock); 1065 1051 } ··· 1071 1055 * @sl: Pointer to seqlock_t 1072 1056 */ 1073 1057 static inline void read_sequnlock_excl_irq(seqlock_t *sl) 1058 + __releases_shared(sl) __no_context_analysis 1074 1059 { 1075 1060 spin_unlock_irq(&sl->lock); 1076 1061 } 1077 1062 1078 1063 static inline unsigned long __read_seqlock_excl_irqsave(seqlock_t *sl) 1064 + __acquires_shared(sl) __no_context_analysis 1079 1065 { 1080 1066 unsigned long flags; 1081 1067 ··· 1107 1089 */ 1108 1090 static inline void 1109 1091 read_sequnlock_excl_irqrestore(seqlock_t *sl, unsigned long flags) 1092 + __releases_shared(sl) __no_context_analysis 1110 1093 { 1111 1094 spin_unlock_irqrestore(&sl->lock, flags); 1112 1095 } ··· 1144 1125 * parameter of the next read_seqbegin_or_lock() iteration. 1145 1126 */ 1146 1127 static inline void read_seqbegin_or_lock(seqlock_t *lock, int *seq) 1128 + __acquires_shared(lock) __no_context_analysis 1147 1129 { 1148 1130 if (!(*seq & 1)) /* Even */ 1149 1131 *seq = read_seqbegin(lock); ··· 1160 1140 * Return: true if a read section retry is required, false otherwise 1161 1141 */ 1162 1142 static inline int need_seqretry(seqlock_t *lock, int seq) 1143 + __releases_shared(lock) __no_context_analysis 1163 1144 { 1164 1145 return !(seq & 1) && read_seqretry(lock, seq); 1165 1146 } ··· 1174 1153 * with read_seqbegin_or_lock() and validated by need_seqretry(). 1175 1154 */ 1176 1155 static inline void done_seqretry(seqlock_t *lock, int seq) 1156 + __no_context_analysis 1177 1157 { 1178 1158 if (seq & 1) 1179 1159 read_sequnlock_excl(lock); ··· 1202 1180 */ 1203 1181 static inline unsigned long 1204 1182 read_seqbegin_or_lock_irqsave(seqlock_t *lock, int *seq) 1183 + __acquires_shared(lock) __no_context_analysis 1205 1184 { 1206 1185 unsigned long flags = 0; 1207 1186 ··· 1228 1205 */ 1229 1206 static inline void 1230 1207 done_seqretry_irqrestore(seqlock_t *lock, int seq, unsigned long flags) 1208 + __no_context_analysis 1231 1209 { 1232 1210 if (seq & 1) 1233 1211 read_sequnlock_excl_irqrestore(lock, flags); ··· 1249 1225 }; 1250 1226 1251 1227 static __always_inline void __scoped_seqlock_cleanup(struct ss_tmp *sst) 1228 + __no_context_analysis 1252 1229 { 1253 1230 if (sst->lock) 1254 1231 spin_unlock(sst->lock); ··· 1279 1254 1280 1255 static __always_inline void 1281 1256 __scoped_seqlock_next(struct ss_tmp *sst, seqlock_t *lock, enum ss_state target) 1257 + __no_context_analysis 1282 1258 { 1283 1259 switch (sst->state) { 1284 1260 case ss_done: ··· 1322 1296 } 1323 1297 } 1324 1298 1299 + /* 1300 + * Context analysis no-op helper to release seqlock at the end of the for-scope; 1301 + * the alias analysis of the compiler will recognize that the pointer @s is an 1302 + * alias to @_seqlock passed to read_seqbegin(_seqlock) below. 
1303 + */ 1304 + static __always_inline void __scoped_seqlock_cleanup_ctx(struct ss_tmp **s) 1305 + __releases_shared(*((seqlock_t **)s)) __no_context_analysis {} 1306 + 1325 1307 #define __scoped_seqlock_read(_seqlock, _target, _s) \ 1326 1308 for (struct ss_tmp _s __cleanup(__scoped_seqlock_cleanup) = \ 1327 - { .state = ss_lockless, .data = read_seqbegin(_seqlock) }; \ 1309 + { .state = ss_lockless, .data = read_seqbegin(_seqlock) }, \ 1310 + *__UNIQUE_ID(ctx) __cleanup(__scoped_seqlock_cleanup_ctx) =\ 1311 + (struct ss_tmp *)_seqlock; \ 1328 1312 _s.state != ss_done; \ 1329 1313 __scoped_seqlock_next(&_s, _seqlock, _target)) 1330 1314 1331 1315 /** 1332 - * scoped_seqlock_read (lock, ss_state) - execute the read side critical 1333 - * section without manual sequence 1334 - * counter handling or calls to other 1335 - * helpers 1336 - * @lock: pointer to seqlock_t protecting the data 1337 - * @ss_state: one of {ss_lock, ss_lock_irqsave, ss_lockless} indicating 1338 - * the type of critical read section 1316 + * scoped_seqlock_read() - execute the read-side critical section 1317 + * without manual sequence counter handling 1318 + * or calls to other helpers 1319 + * @_seqlock: pointer to seqlock_t protecting the data 1320 + * @_target: an enum ss_state: one of {ss_lock, ss_lock_irqsave, ss_lockless} 1321 + * indicating the type of critical read section 1339 1322 * 1340 - * Example: 1323 + * Example:: 1341 1324 * 1342 1325 * scoped_seqlock_read (&lock, ss_lock) { 1343 1326 * // read-side critical section ··· 1357 1322 */ 1358 1323 #define scoped_seqlock_read(_seqlock, _target) \ 1359 1324 __scoped_seqlock_read(_seqlock, _target, __UNIQUE_ID(seqlock)) 1325 + 1326 + DEFINE_LOCK_GUARD_1(seqlock_init, seqlock_t, seqlock_init(_T->lock), /* */) 1327 + DECLARE_LOCK_GUARD_1_ATTRS(seqlock_init, __acquires(_T), __releases(*(seqlock_t **)_T)) 1328 + #define class_seqlock_init_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(seqlock_init, _T) 1360 1329 1361 1330 #endif /* __LINUX_SEQLOCK_H */
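Two things are going on in this file. First, the plain seqlock read side: read_seqbegin()/read_seqretry() are modeled as a shared acquire/release pair even though no lock is taken, with __no_context_analysis keeping the compiler out of their bodies, so the classic retry loop type-checks as a balanced read-side section. A sketch with hypothetical data:

    #include <linux/seqlock.h>

    static DEFINE_SEQLOCK(state_lock);      /* hypothetical */
    static u64 state_a, state_b;

    static u64 state_snapshot(void)
    {
            unsigned seq;
            u64 a, b;

            do {
                    seq = read_seqbegin(&state_lock);       /* shared acquire */
                    a = state_a;
                    b = state_b;
            } while (read_seqretry(&state_lock, seq));      /* shared release */

            return a + b;
    }

Second, scoped_seqlock_read() smuggles the release annotation into the for-scope through the __cleanup'd dummy pointer, relying on the compiler's alias analysis to connect it back to the seqlock, as the new comment in the hunk explains.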
+3 -2
include/linux/seqlock_types.h
··· 81 81 * - Comments on top of seqcount_t 82 82 * - Documentation/locking/seqlock.rst 83 83 */ 84 - typedef struct { 84 + context_lock_struct(seqlock) { 85 85 /* 86 86 * Make sure that readers don't starve writers on PREEMPT_RT: use 87 87 * seqcount_spinlock_t instead of seqcount_t. Check __SEQ_LOCK(). 88 88 */ 89 89 seqcount_spinlock_t seqcount; 90 90 spinlock_t lock; 91 - } seqlock_t; 91 + }; 92 + typedef struct seqlock seqlock_t; 92 93 93 94 #endif /* __LINUX_SEQLOCK_TYPES_H */
+88 -31
include/linux/spinlock.h
··· 212 212 * various methods are defined as nops in the case they are not 213 213 * required. 214 214 */ 215 - #define raw_spin_trylock(lock) __cond_lock(lock, _raw_spin_trylock(lock)) 215 + #define raw_spin_trylock(lock) _raw_spin_trylock(lock) 216 216 217 217 #define raw_spin_lock(lock) _raw_spin_lock(lock) 218 218 ··· 283 283 } while (0) 284 284 #define raw_spin_unlock_bh(lock) _raw_spin_unlock_bh(lock) 285 285 286 - #define raw_spin_trylock_bh(lock) \ 287 - __cond_lock(lock, _raw_spin_trylock_bh(lock)) 286 + #define raw_spin_trylock_bh(lock) _raw_spin_trylock_bh(lock) 288 287 289 - #define raw_spin_trylock_irq(lock) \ 290 - ({ \ 291 - local_irq_disable(); \ 292 - raw_spin_trylock(lock) ? \ 293 - 1 : ({ local_irq_enable(); 0; }); \ 294 - }) 288 + #define raw_spin_trylock_irq(lock) _raw_spin_trylock_irq(lock) 295 289 296 - #define raw_spin_trylock_irqsave(lock, flags) \ 297 - ({ \ 298 - local_irq_save(flags); \ 299 - raw_spin_trylock(lock) ? \ 300 - 1 : ({ local_irq_restore(flags); 0; }); \ 301 - }) 290 + #define raw_spin_trylock_irqsave(lock, flags) _raw_spin_trylock_irqsave(lock, &(flags)) 302 291 303 292 #ifndef CONFIG_PREEMPT_RT 304 293 /* Include rwlock functions for !RT */ ··· 336 347 #endif 337 348 338 349 static __always_inline void spin_lock(spinlock_t *lock) 350 + __acquires(lock) __no_context_analysis 339 351 { 340 352 raw_spin_lock(&lock->rlock); 341 353 } 342 354 343 355 static __always_inline void spin_lock_bh(spinlock_t *lock) 356 + __acquires(lock) __no_context_analysis 344 357 { 345 358 raw_spin_lock_bh(&lock->rlock); 346 359 } 347 360 348 361 static __always_inline int spin_trylock(spinlock_t *lock) 362 + __cond_acquires(true, lock) __no_context_analysis 349 363 { 350 364 return raw_spin_trylock(&lock->rlock); 351 365 } ··· 356 364 #define spin_lock_nested(lock, subclass) \ 357 365 do { \ 358 366 raw_spin_lock_nested(spinlock_check(lock), subclass); \ 367 + __release(spinlock_check(lock)); __acquire(lock); \ 359 368 } while (0) 360 369 361 370 #define spin_lock_nest_lock(lock, nest_lock) \ 362 371 do { \ 363 372 raw_spin_lock_nest_lock(spinlock_check(lock), nest_lock); \ 373 + __release(spinlock_check(lock)); __acquire(lock); \ 364 374 } while (0) 365 375 366 376 static __always_inline void spin_lock_irq(spinlock_t *lock) 377 + __acquires(lock) __no_context_analysis 367 378 { 368 379 raw_spin_lock_irq(&lock->rlock); 369 380 } ··· 374 379 #define spin_lock_irqsave(lock, flags) \ 375 380 do { \ 376 381 raw_spin_lock_irqsave(spinlock_check(lock), flags); \ 382 + __release(spinlock_check(lock)); __acquire(lock); \ 377 383 } while (0) 378 384 379 385 #define spin_lock_irqsave_nested(lock, flags, subclass) \ 380 386 do { \ 381 387 raw_spin_lock_irqsave_nested(spinlock_check(lock), flags, subclass); \ 388 + __release(spinlock_check(lock)); __acquire(lock); \ 382 389 } while (0) 383 390 384 391 static __always_inline void spin_unlock(spinlock_t *lock) 392 + __releases(lock) __no_context_analysis 385 393 { 386 394 raw_spin_unlock(&lock->rlock); 387 395 } 388 396 389 397 static __always_inline void spin_unlock_bh(spinlock_t *lock) 398 + __releases(lock) __no_context_analysis 390 399 { 391 400 raw_spin_unlock_bh(&lock->rlock); 392 401 } 393 402 394 403 static __always_inline void spin_unlock_irq(spinlock_t *lock) 404 + __releases(lock) __no_context_analysis 395 405 { 396 406 raw_spin_unlock_irq(&lock->rlock); 397 407 } 398 408 399 409 static __always_inline void spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags) 410 + __releases(lock) __no_context_analysis 400 411 { 
401 412 raw_spin_unlock_irqrestore(&lock->rlock, flags); 402 413 } 403 414 404 415 static __always_inline int spin_trylock_bh(spinlock_t *lock) 416 + __cond_acquires(true, lock) __no_context_analysis 405 417 { 406 418 return raw_spin_trylock_bh(&lock->rlock); 407 419 } 408 420 409 421 static __always_inline int spin_trylock_irq(spinlock_t *lock) 422 + __cond_acquires(true, lock) __no_context_analysis 410 423 { 411 424 return raw_spin_trylock_irq(&lock->rlock); 412 425 } 413 426 414 - #define spin_trylock_irqsave(lock, flags) \ 415 - ({ \ 416 - raw_spin_trylock_irqsave(spinlock_check(lock), flags); \ 417 - }) 427 + static __always_inline bool _spin_trylock_irqsave(spinlock_t *lock, unsigned long *flags) 428 + __cond_acquires(true, lock) __no_context_analysis 429 + { 430 + return raw_spin_trylock_irqsave(spinlock_check(lock), *flags); 431 + } 432 + #define spin_trylock_irqsave(lock, flags) _spin_trylock_irqsave(lock, &(flags)) 418 433 419 434 /** 420 435 * spin_is_locked() - Check whether a spinlock is locked. ··· 502 497 * Decrements @atomic by 1. If the result is 0, returns true and locks 503 498 * @lock. Returns false for all other cases. 504 499 */ 505 - extern int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock); 506 - #define atomic_dec_and_lock(atomic, lock) \ 507 - __cond_lock(lock, _atomic_dec_and_lock(atomic, lock)) 500 + extern int atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock) __cond_acquires(true, lock); 508 501 509 502 extern int _atomic_dec_and_lock_irqsave(atomic_t *atomic, spinlock_t *lock, 510 - unsigned long *flags); 511 - #define atomic_dec_and_lock_irqsave(atomic, lock, flags) \ 512 - __cond_lock(lock, _atomic_dec_and_lock_irqsave(atomic, lock, &(flags))) 503 + unsigned long *flags) __cond_acquires(true, lock); 504 + #define atomic_dec_and_lock_irqsave(atomic, lock, flags) _atomic_dec_and_lock_irqsave(atomic, lock, &(flags)) 513 505 514 - extern int _atomic_dec_and_raw_lock(atomic_t *atomic, raw_spinlock_t *lock); 515 - #define atomic_dec_and_raw_lock(atomic, lock) \ 516 - __cond_lock(lock, _atomic_dec_and_raw_lock(atomic, lock)) 506 + extern int atomic_dec_and_raw_lock(atomic_t *atomic, raw_spinlock_t *lock) __cond_acquires(true, lock); 517 507 518 508 extern int _atomic_dec_and_raw_lock_irqsave(atomic_t *atomic, raw_spinlock_t *lock, 519 - unsigned long *flags); 520 - #define atomic_dec_and_raw_lock_irqsave(atomic, lock, flags) \ 521 - __cond_lock(lock, _atomic_dec_and_raw_lock_irqsave(atomic, lock, &(flags))) 509 + unsigned long *flags) __cond_acquires(true, lock); 510 + #define atomic_dec_and_raw_lock_irqsave(atomic, lock, flags) _atomic_dec_and_raw_lock_irqsave(atomic, lock, &(flags)) 522 511 523 512 int __alloc_bucket_spinlocks(spinlock_t **locks, unsigned int *lock_mask, 524 513 size_t max_size, unsigned int cpu_mult, ··· 534 535 DEFINE_LOCK_GUARD_1(raw_spinlock, raw_spinlock_t, 535 536 raw_spin_lock(_T->lock), 536 537 raw_spin_unlock(_T->lock)) 538 + DECLARE_LOCK_GUARD_1_ATTRS(raw_spinlock, __acquires(_T), __releases(*(raw_spinlock_t **)_T)) 539 + #define class_raw_spinlock_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(raw_spinlock, _T) 537 540 538 541 DEFINE_LOCK_GUARD_1_COND(raw_spinlock, _try, raw_spin_trylock(_T->lock)) 542 + DECLARE_LOCK_GUARD_1_ATTRS(raw_spinlock_try, __acquires(_T), __releases(*(raw_spinlock_t **)_T)) 543 + #define class_raw_spinlock_try_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(raw_spinlock_try, _T) 539 544 540 545 DEFINE_LOCK_GUARD_1(raw_spinlock_nested, raw_spinlock_t, 541 546 raw_spin_lock_nested(_T->lock, 
SINGLE_DEPTH_NESTING), 542 547 raw_spin_unlock(_T->lock)) 548 + DECLARE_LOCK_GUARD_1_ATTRS(raw_spinlock_nested, __acquires(_T), __releases(*(raw_spinlock_t **)_T)) 549 + #define class_raw_spinlock_nested_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(raw_spinlock_nested, _T) 543 550 544 551 DEFINE_LOCK_GUARD_1(raw_spinlock_irq, raw_spinlock_t, 545 552 raw_spin_lock_irq(_T->lock), 546 553 raw_spin_unlock_irq(_T->lock)) 554 + DECLARE_LOCK_GUARD_1_ATTRS(raw_spinlock_irq, __acquires(_T), __releases(*(raw_spinlock_t **)_T)) 555 + #define class_raw_spinlock_irq_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(raw_spinlock_irq, _T) 547 556 548 557 DEFINE_LOCK_GUARD_1_COND(raw_spinlock_irq, _try, raw_spin_trylock_irq(_T->lock)) 558 + DECLARE_LOCK_GUARD_1_ATTRS(raw_spinlock_irq_try, __acquires(_T), __releases(*(raw_spinlock_t **)_T)) 559 + #define class_raw_spinlock_irq_try_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(raw_spinlock_irq_try, _T) 549 560 550 561 DEFINE_LOCK_GUARD_1(raw_spinlock_bh, raw_spinlock_t, 551 562 raw_spin_lock_bh(_T->lock), 552 563 raw_spin_unlock_bh(_T->lock)) 564 + DECLARE_LOCK_GUARD_1_ATTRS(raw_spinlock_bh, __acquires(_T), __releases(*(raw_spinlock_t **)_T)) 565 + #define class_raw_spinlock_bh_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(raw_spinlock_bh, _T) 553 566 554 567 DEFINE_LOCK_GUARD_1_COND(raw_spinlock_bh, _try, raw_spin_trylock_bh(_T->lock)) 568 + DECLARE_LOCK_GUARD_1_ATTRS(raw_spinlock_bh_try, __acquires(_T), __releases(*(raw_spinlock_t **)_T)) 569 + #define class_raw_spinlock_bh_try_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(raw_spinlock_bh_try, _T) 555 570 556 571 DEFINE_LOCK_GUARD_1(raw_spinlock_irqsave, raw_spinlock_t, 557 572 raw_spin_lock_irqsave(_T->lock, _T->flags), 558 573 raw_spin_unlock_irqrestore(_T->lock, _T->flags), 559 574 unsigned long flags) 575 + DECLARE_LOCK_GUARD_1_ATTRS(raw_spinlock_irqsave, __acquires(_T), __releases(*(raw_spinlock_t **)_T)) 576 + #define class_raw_spinlock_irqsave_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(raw_spinlock_irqsave, _T) 560 577 561 578 DEFINE_LOCK_GUARD_1_COND(raw_spinlock_irqsave, _try, 562 579 raw_spin_trylock_irqsave(_T->lock, _T->flags)) 580 + DECLARE_LOCK_GUARD_1_ATTRS(raw_spinlock_irqsave_try, __acquires(_T), __releases(*(raw_spinlock_t **)_T)) 581 + #define class_raw_spinlock_irqsave_try_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(raw_spinlock_irqsave_try, _T) 582 + 583 + DEFINE_LOCK_GUARD_1(raw_spinlock_init, raw_spinlock_t, raw_spin_lock_init(_T->lock), /* */) 584 + DECLARE_LOCK_GUARD_1_ATTRS(raw_spinlock_init, __acquires(_T), __releases(*(raw_spinlock_t **)_T)) 585 + #define class_raw_spinlock_init_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(raw_spinlock_init, _T) 563 586 564 587 DEFINE_LOCK_GUARD_1(spinlock, spinlock_t, 565 588 spin_lock(_T->lock), 566 589 spin_unlock(_T->lock)) 590 + DECLARE_LOCK_GUARD_1_ATTRS(spinlock, __acquires(_T), __releases(*(spinlock_t **)_T)) 591 + #define class_spinlock_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(spinlock, _T) 567 592 568 593 DEFINE_LOCK_GUARD_1_COND(spinlock, _try, spin_trylock(_T->lock)) 594 + DECLARE_LOCK_GUARD_1_ATTRS(spinlock_try, __acquires(_T), __releases(*(spinlock_t **)_T)) 595 + #define class_spinlock_try_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(spinlock_try, _T) 569 596 570 597 DEFINE_LOCK_GUARD_1(spinlock_irq, spinlock_t, 571 598 spin_lock_irq(_T->lock), 572 599 spin_unlock_irq(_T->lock)) 600 + DECLARE_LOCK_GUARD_1_ATTRS(spinlock_irq, __acquires(_T), __releases(*(spinlock_t **)_T)) 601 + #define class_spinlock_irq_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(spinlock_irq, _T) 573 602 574 603 
DEFINE_LOCK_GUARD_1_COND(spinlock_irq, _try, 575 604 spin_trylock_irq(_T->lock)) 605 + DECLARE_LOCK_GUARD_1_ATTRS(spinlock_irq_try, __acquires(_T), __releases(*(spinlock_t **)_T)) 606 + #define class_spinlock_irq_try_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(spinlock_irq_try, _T) 576 607 577 608 DEFINE_LOCK_GUARD_1(spinlock_bh, spinlock_t, 578 609 spin_lock_bh(_T->lock), 579 610 spin_unlock_bh(_T->lock)) 611 + DECLARE_LOCK_GUARD_1_ATTRS(spinlock_bh, __acquires(_T), __releases(*(spinlock_t **)_T)) 612 + #define class_spinlock_bh_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(spinlock_bh, _T) 580 613 581 614 DEFINE_LOCK_GUARD_1_COND(spinlock_bh, _try, 582 615 spin_trylock_bh(_T->lock)) 616 + DECLARE_LOCK_GUARD_1_ATTRS(spinlock_bh_try, __acquires(_T), __releases(*(spinlock_t **)_T)) 617 + #define class_spinlock_bh_try_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(spinlock_bh_try, _T) 583 618 584 619 DEFINE_LOCK_GUARD_1(spinlock_irqsave, spinlock_t, 585 620 spin_lock_irqsave(_T->lock, _T->flags), 586 621 spin_unlock_irqrestore(_T->lock, _T->flags), 587 622 unsigned long flags) 623 + DECLARE_LOCK_GUARD_1_ATTRS(spinlock_irqsave, __acquires(_T), __releases(*(spinlock_t **)_T)) 624 + #define class_spinlock_irqsave_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(spinlock_irqsave, _T) 588 625 589 626 DEFINE_LOCK_GUARD_1_COND(spinlock_irqsave, _try, 590 627 spin_trylock_irqsave(_T->lock, _T->flags)) 628 + DECLARE_LOCK_GUARD_1_ATTRS(spinlock_irqsave_try, __acquires(_T), __releases(*(spinlock_t **)_T)) 629 + #define class_spinlock_irqsave_try_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(spinlock_irqsave_try, _T) 630 + 631 + DEFINE_LOCK_GUARD_1(spinlock_init, spinlock_t, spin_lock_init(_T->lock), /* */) 632 + DECLARE_LOCK_GUARD_1_ATTRS(spinlock_init, __acquires(_T), __releases(*(spinlock_t **)_T)) 633 + #define class_spinlock_init_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(spinlock_init, _T) 591 634 592 635 DEFINE_LOCK_GUARD_1(read_lock, rwlock_t, 593 636 read_lock(_T->lock), 594 637 read_unlock(_T->lock)) 638 + DECLARE_LOCK_GUARD_1_ATTRS(read_lock, __acquires(_T), __releases(*(rwlock_t **)_T)) 639 + #define class_read_lock_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(read_lock, _T) 595 640 596 641 DEFINE_LOCK_GUARD_1(read_lock_irq, rwlock_t, 597 642 read_lock_irq(_T->lock), 598 643 read_unlock_irq(_T->lock)) 644 + DECLARE_LOCK_GUARD_1_ATTRS(read_lock_irq, __acquires(_T), __releases(*(rwlock_t **)_T)) 645 + #define class_read_lock_irq_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(read_lock_irq, _T) 599 646 600 647 DEFINE_LOCK_GUARD_1(read_lock_irqsave, rwlock_t, 601 648 read_lock_irqsave(_T->lock, _T->flags), 602 649 read_unlock_irqrestore(_T->lock, _T->flags), 603 650 unsigned long flags) 651 + DECLARE_LOCK_GUARD_1_ATTRS(read_lock_irqsave, __acquires(_T), __releases(*(rwlock_t **)_T)) 652 + #define class_read_lock_irqsave_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(read_lock_irqsave, _T) 604 653 605 654 DEFINE_LOCK_GUARD_1(write_lock, rwlock_t, 606 655 write_lock(_T->lock), 607 656 write_unlock(_T->lock)) 657 + DECLARE_LOCK_GUARD_1_ATTRS(write_lock, __acquires(_T), __releases(*(rwlock_t **)_T)) 658 + #define class_write_lock_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(write_lock, _T) 608 659 609 660 DEFINE_LOCK_GUARD_1(write_lock_irq, rwlock_t, 610 661 write_lock_irq(_T->lock), 611 662 write_unlock_irq(_T->lock)) 663 + DECLARE_LOCK_GUARD_1_ATTRS(write_lock_irq, __acquires(_T), __releases(*(rwlock_t **)_T)) 664 + #define class_write_lock_irq_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(write_lock_irq, _T) 612 665 613 666 
DEFINE_LOCK_GUARD_1(write_lock_irqsave, rwlock_t, 614 667 write_lock_irqsave(_T->lock, _T->flags), 615 668 write_unlock_irqrestore(_T->lock, _T->flags), 616 669 unsigned long flags) 670 + DECLARE_LOCK_GUARD_1_ATTRS(write_lock_irqsave, __acquires(_T), __releases(*(rwlock_t **)_T)) 671 + #define class_write_lock_irqsave_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(write_lock_irqsave, _T) 672 + 673 + DEFINE_LOCK_GUARD_1(rwlock_init, rwlock_t, rwlock_init(_T->lock), /* */) 674 + DECLARE_LOCK_GUARD_1_ATTRS(rwlock_init, __acquires(_T), __releases(*(rwlock_t **)_T)) 675 + #define class_rwlock_init_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(rwlock_init, _T) 617 676 618 677 #undef __LINUX_INSIDE_SPINLOCK_H 619 678 #endif /* __LINUX_SPINLOCK_H */
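Every guard in this block now gets a DECLARE_LOCK_GUARD_1_ATTRS() plus a class_*_constructor() override, so that from the analysis's point of view the guard constructor acquires the lock and the destructor releases it; the new *_init guards treat initialization as acquire-then-release, leaving a freshly initialized lock unheld. Guard usage itself does not change; a sketch with a hypothetical queue:

    #include <linux/spinlock.h>
    #include <linux/list.h>
    #include <linux/cleanup.h>

    static DEFINE_SPINLOCK(q_lock);         /* hypothetical */
    static LIST_HEAD(q_head);

    static void q_push(struct list_head *item)
    {
            scoped_guard(spinlock_irqsave, &q_lock) {
                    /* q_lock held, IRQs saved; both restored at scope exit */
                    list_add_tail(item, &q_head);
            }
    }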
+32 -2
include/linux/spinlock_api_smp.h
··· 34 34 unsigned long __lockfunc 35 35 _raw_spin_lock_irqsave_nested(raw_spinlock_t *lock, int subclass) 36 36 __acquires(lock); 37 - int __lockfunc _raw_spin_trylock(raw_spinlock_t *lock); 38 - int __lockfunc _raw_spin_trylock_bh(raw_spinlock_t *lock); 37 + int __lockfunc _raw_spin_trylock(raw_spinlock_t *lock) __cond_acquires(true, lock); 38 + int __lockfunc _raw_spin_trylock_bh(raw_spinlock_t *lock) __cond_acquires(true, lock); 39 39 void __lockfunc _raw_spin_unlock(raw_spinlock_t *lock) __releases(lock); 40 40 void __lockfunc _raw_spin_unlock_bh(raw_spinlock_t *lock) __releases(lock); 41 41 void __lockfunc _raw_spin_unlock_irq(raw_spinlock_t *lock) __releases(lock); ··· 84 84 #endif 85 85 86 86 static inline int __raw_spin_trylock(raw_spinlock_t *lock) 87 + __cond_acquires(true, lock) 87 88 { 88 89 preempt_disable(); 89 90 if (do_raw_spin_trylock(lock)) { ··· 95 94 return 0; 96 95 } 97 96 97 + static __always_inline bool _raw_spin_trylock_irq(raw_spinlock_t *lock) 98 + __cond_acquires(true, lock) 99 + { 100 + local_irq_disable(); 101 + if (_raw_spin_trylock(lock)) 102 + return true; 103 + local_irq_enable(); 104 + return false; 105 + } 106 + 107 + static __always_inline bool _raw_spin_trylock_irqsave(raw_spinlock_t *lock, unsigned long *flags) 108 + __cond_acquires(true, lock) 109 + { 110 + local_irq_save(*flags); 111 + if (_raw_spin_trylock(lock)) 112 + return true; 113 + local_irq_restore(*flags); 114 + return false; 115 + } 116 + 98 117 /* 99 118 * If lockdep is enabled then we use the non-preemption spin-ops 100 119 * even on CONFIG_PREEMPTION, because lockdep assumes that interrupts are ··· 123 102 #if !defined(CONFIG_GENERIC_LOCKBREAK) || defined(CONFIG_DEBUG_LOCK_ALLOC) 124 103 125 104 static inline unsigned long __raw_spin_lock_irqsave(raw_spinlock_t *lock) 105 + __acquires(lock) __no_context_analysis 126 106 { 127 107 unsigned long flags; 128 108 ··· 135 113 } 136 114 137 115 static inline void __raw_spin_lock_irq(raw_spinlock_t *lock) 116 + __acquires(lock) __no_context_analysis 138 117 { 139 118 local_irq_disable(); 140 119 preempt_disable(); ··· 144 121 } 145 122 146 123 static inline void __raw_spin_lock_bh(raw_spinlock_t *lock) 124 + __acquires(lock) __no_context_analysis 147 125 { 148 126 __local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET); 149 127 spin_acquire(&lock->dep_map, 0, 0, _RET_IP_); ··· 152 128 } 153 129 154 130 static inline void __raw_spin_lock(raw_spinlock_t *lock) 131 + __acquires(lock) __no_context_analysis 155 132 { 156 133 preempt_disable(); 157 134 spin_acquire(&lock->dep_map, 0, 0, _RET_IP_); ··· 162 137 #endif /* !CONFIG_GENERIC_LOCKBREAK || CONFIG_DEBUG_LOCK_ALLOC */ 163 138 164 139 static inline void __raw_spin_unlock(raw_spinlock_t *lock) 140 + __releases(lock) 165 141 { 166 142 spin_release(&lock->dep_map, _RET_IP_); 167 143 do_raw_spin_unlock(lock); ··· 171 145 172 146 static inline void __raw_spin_unlock_irqrestore(raw_spinlock_t *lock, 173 147 unsigned long flags) 148 + __releases(lock) 174 149 { 175 150 spin_release(&lock->dep_map, _RET_IP_); 176 151 do_raw_spin_unlock(lock); ··· 180 153 } 181 154 182 155 static inline void __raw_spin_unlock_irq(raw_spinlock_t *lock) 156 + __releases(lock) 183 157 { 184 158 spin_release(&lock->dep_map, _RET_IP_); 185 159 do_raw_spin_unlock(lock); ··· 189 161 } 190 162 191 163 static inline void __raw_spin_unlock_bh(raw_spinlock_t *lock) 164 + __releases(lock) 192 165 { 193 166 spin_release(&lock->dep_map, _RET_IP_); 194 167 do_raw_spin_unlock(lock); ··· 197 168 } 198 169 199 170 static inline int 
__raw_spin_trylock_bh(raw_spinlock_t *lock) 171 + __cond_acquires(true, lock) 200 172 { 201 173 __local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET); 202 174 if (do_raw_spin_trylock(lock)) {
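_raw_spin_trylock_irq() and _raw_spin_trylock_irqsave() were previously open-coded ({ ... }) statement expressions in spinlock.h (see the hunk there); they become real inline functions here because __cond_acquires() needs a function declaration to attach to. A sketch of a caller:

    static bool kick_if_idle(spinlock_t *lock)      /* hypothetical */
    {
            if (!spin_trylock_irq(lock))
                    return false;
            /* lock held, IRQs disabled */
            spin_unlock_irq(lock);
            return true;
    }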
+82 -30
include/linux/spinlock_api_up.h
··· 24 24 * flags straight, to suppress compiler warnings of unused lock 25 25 * variables, and to add the proper checker annotations: 26 26 */ 27 - #define ___LOCK(lock) \ 27 + #define ___LOCK_(lock) \ 28 28 do { __acquire(lock); (void)(lock); } while (0) 29 29 30 - #define __LOCK(lock) \ 31 - do { preempt_disable(); ___LOCK(lock); } while (0) 30 + #define ___LOCK_shared(lock) \ 31 + do { __acquire_shared(lock); (void)(lock); } while (0) 32 32 33 - #define __LOCK_BH(lock) \ 34 - do { __local_bh_disable_ip(_THIS_IP_, SOFTIRQ_LOCK_OFFSET); ___LOCK(lock); } while (0) 33 + #define __LOCK(lock, ...) \ 34 + do { preempt_disable(); ___LOCK_##__VA_ARGS__(lock); } while (0) 35 35 36 - #define __LOCK_IRQ(lock) \ 37 - do { local_irq_disable(); __LOCK(lock); } while (0) 36 + #define __LOCK_BH(lock, ...) \ 37 + do { __local_bh_disable_ip(_THIS_IP_, SOFTIRQ_LOCK_OFFSET); ___LOCK_##__VA_ARGS__(lock); } while (0) 38 38 39 - #define __LOCK_IRQSAVE(lock, flags) \ 40 - do { local_irq_save(flags); __LOCK(lock); } while (0) 39 + #define __LOCK_IRQ(lock, ...) \ 40 + do { local_irq_disable(); __LOCK(lock, ##__VA_ARGS__); } while (0) 41 41 42 - #define ___UNLOCK(lock) \ 42 + #define __LOCK_IRQSAVE(lock, flags, ...) \ 43 + do { local_irq_save(flags); __LOCK(lock, ##__VA_ARGS__); } while (0) 44 + 45 + #define ___UNLOCK_(lock) \ 43 46 do { __release(lock); (void)(lock); } while (0) 44 47 45 - #define __UNLOCK(lock) \ 46 - do { preempt_enable(); ___UNLOCK(lock); } while (0) 48 + #define ___UNLOCK_shared(lock) \ 49 + do { __release_shared(lock); (void)(lock); } while (0) 47 50 48 - #define __UNLOCK_BH(lock) \ 51 + #define __UNLOCK(lock, ...) \ 52 + do { preempt_enable(); ___UNLOCK_##__VA_ARGS__(lock); } while (0) 53 + 54 + #define __UNLOCK_BH(lock, ...) \ 49 55 do { __local_bh_enable_ip(_THIS_IP_, SOFTIRQ_LOCK_OFFSET); \ 50 - ___UNLOCK(lock); } while (0) 56 + ___UNLOCK_##__VA_ARGS__(lock); } while (0) 51 57 52 - #define __UNLOCK_IRQ(lock) \ 53 - do { local_irq_enable(); __UNLOCK(lock); } while (0) 58 + #define __UNLOCK_IRQ(lock, ...) \ 59 + do { local_irq_enable(); __UNLOCK(lock, ##__VA_ARGS__); } while (0) 54 60 55 - #define __UNLOCK_IRQRESTORE(lock, flags) \ 56 - do { local_irq_restore(flags); __UNLOCK(lock); } while (0) 61 + #define __UNLOCK_IRQRESTORE(lock, flags, ...) 
\ 62 + do { local_irq_restore(flags); __UNLOCK(lock, ##__VA_ARGS__); } while (0) 57 63 58 64 #define _raw_spin_lock(lock) __LOCK(lock) 59 65 #define _raw_spin_lock_nested(lock, subclass) __LOCK(lock) 60 - #define _raw_read_lock(lock) __LOCK(lock) 66 + #define _raw_read_lock(lock) __LOCK(lock, shared) 61 67 #define _raw_write_lock(lock) __LOCK(lock) 62 68 #define _raw_write_lock_nested(lock, subclass) __LOCK(lock) 63 69 #define _raw_spin_lock_bh(lock) __LOCK_BH(lock) 64 - #define _raw_read_lock_bh(lock) __LOCK_BH(lock) 70 + #define _raw_read_lock_bh(lock) __LOCK_BH(lock, shared) 65 71 #define _raw_write_lock_bh(lock) __LOCK_BH(lock) 66 72 #define _raw_spin_lock_irq(lock) __LOCK_IRQ(lock) 67 - #define _raw_read_lock_irq(lock) __LOCK_IRQ(lock) 73 + #define _raw_read_lock_irq(lock) __LOCK_IRQ(lock, shared) 68 74 #define _raw_write_lock_irq(lock) __LOCK_IRQ(lock) 69 75 #define _raw_spin_lock_irqsave(lock, flags) __LOCK_IRQSAVE(lock, flags) 70 - #define _raw_read_lock_irqsave(lock, flags) __LOCK_IRQSAVE(lock, flags) 76 + #define _raw_read_lock_irqsave(lock, flags) __LOCK_IRQSAVE(lock, flags, shared) 71 77 #define _raw_write_lock_irqsave(lock, flags) __LOCK_IRQSAVE(lock, flags) 72 - #define _raw_spin_trylock(lock) ({ __LOCK(lock); 1; }) 73 - #define _raw_read_trylock(lock) ({ __LOCK(lock); 1; }) 74 - #define _raw_write_trylock(lock) ({ __LOCK(lock); 1; }) 75 - #define _raw_spin_trylock_bh(lock) ({ __LOCK_BH(lock); 1; }) 78 + 79 + static __always_inline int _raw_spin_trylock(raw_spinlock_t *lock) 80 + __cond_acquires(true, lock) 81 + { 82 + __LOCK(lock); 83 + return 1; 84 + } 85 + 86 + static __always_inline int _raw_spin_trylock_bh(raw_spinlock_t *lock) 87 + __cond_acquires(true, lock) 88 + { 89 + __LOCK_BH(lock); 90 + return 1; 91 + } 92 + 93 + static __always_inline int _raw_spin_trylock_irq(raw_spinlock_t *lock) 94 + __cond_acquires(true, lock) 95 + { 96 + __LOCK_IRQ(lock); 97 + return 1; 98 + } 99 + 100 + static __always_inline int _raw_spin_trylock_irqsave(raw_spinlock_t *lock, unsigned long *flags) 101 + __cond_acquires(true, lock) 102 + { 103 + __LOCK_IRQSAVE(lock, *(flags)); 104 + return 1; 105 + } 106 + 107 + static __always_inline int _raw_read_trylock(rwlock_t *lock) 108 + __cond_acquires_shared(true, lock) 109 + { 110 + __LOCK(lock, shared); 111 + return 1; 112 + } 113 + 114 + static __always_inline int _raw_write_trylock(rwlock_t *lock) 115 + __cond_acquires(true, lock) 116 + { 117 + __LOCK(lock); 118 + return 1; 119 + } 120 + 121 + static __always_inline int _raw_write_trylock_irqsave(rwlock_t *lock, unsigned long *flags) 122 + __cond_acquires(true, lock) 123 + { 124 + __LOCK_IRQSAVE(lock, *(flags)); 125 + return 1; 126 + } 127 + 76 128 #define _raw_spin_unlock(lock) __UNLOCK(lock) 77 - #define _raw_read_unlock(lock) __UNLOCK(lock) 129 + #define _raw_read_unlock(lock) __UNLOCK(lock, shared) 78 130 #define _raw_write_unlock(lock) __UNLOCK(lock) 79 131 #define _raw_spin_unlock_bh(lock) __UNLOCK_BH(lock) 80 132 #define _raw_write_unlock_bh(lock) __UNLOCK_BH(lock) 81 - #define _raw_read_unlock_bh(lock) __UNLOCK_BH(lock) 133 + #define _raw_read_unlock_bh(lock) __UNLOCK_BH(lock, shared) 82 134 #define _raw_spin_unlock_irq(lock) __UNLOCK_IRQ(lock) 83 - #define _raw_read_unlock_irq(lock) __UNLOCK_IRQ(lock) 135 + #define _raw_read_unlock_irq(lock) __UNLOCK_IRQ(lock, shared) 84 136 #define _raw_write_unlock_irq(lock) __UNLOCK_IRQ(lock) 85 137 #define _raw_spin_unlock_irqrestore(lock, flags) \ 86 138 __UNLOCK_IRQRESTORE(lock, flags) 87 139 #define _raw_read_unlock_irqrestore(lock, flags) \ 88 
- __UNLOCK_IRQRESTORE(lock, flags) 140 + __UNLOCK_IRQRESTORE(lock, flags, shared) 89 141 #define _raw_write_unlock_irqrestore(lock, flags) \ 90 142 __UNLOCK_IRQRESTORE(lock, flags) 91 143
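On UP builds, "locking" reduces to preempt/IRQ manipulation, so the shared flavor is selected by an optional trailing macro argument pasted onto the helper name: __LOCK(lock) expands to ___LOCK_(lock), while __LOCK(lock, shared) expands to ___LOCK_shared(lock). A standalone illustration of this suffix-dispatch idiom (plain C with the GNU ##__VA_ARGS__ extension, not kernel code):

    #include <stdio.h>

    #define TRACE_(op)              printf("%s: exclusive\n", op)
    #define TRACE_shared(op)        printf("%s: shared\n", op)
    #define TRACE(op, ...)          TRACE_##__VA_ARGS__(op)

    int main(void)
    {
            TRACE("lock");                  /* expands to TRACE_("lock") */
            TRACE("lock", shared);          /* expands to TRACE_shared("lock") */
            return 0;
    }

The UP trylocks likewise become inline functions rather than ({ __LOCK(lock); 1; }) expressions, for the same reason as on the SMP side: the __cond_acquires() attribute needs a declaration to live on.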
+19 -17
include/linux/spinlock_rt.h
··· 36 36 extern void rt_spin_lock_nest_lock(spinlock_t *lock, struct lockdep_map *nest_lock) __acquires(lock); 37 37 extern void rt_spin_unlock(spinlock_t *lock) __releases(lock); 38 38 extern void rt_spin_lock_unlock(spinlock_t *lock); 39 - extern int rt_spin_trylock_bh(spinlock_t *lock); 40 - extern int rt_spin_trylock(spinlock_t *lock); 39 + extern int rt_spin_trylock_bh(spinlock_t *lock) __cond_acquires(true, lock); 40 + extern int rt_spin_trylock(spinlock_t *lock) __cond_acquires(true, lock); 41 41 42 42 static __always_inline void spin_lock(spinlock_t *lock) 43 + __acquires(lock) 43 44 { 44 45 rt_spin_lock(lock); 45 46 } ··· 83 82 __spin_lock_irqsave_nested(lock, flags, subclass) 84 83 85 84 static __always_inline void spin_lock_bh(spinlock_t *lock) 85 + __acquires(lock) 86 86 { 87 87 /* Investigate: Drop bh when blocking ? */ 88 88 local_bh_disable(); ··· 91 89 } 92 90 93 91 static __always_inline void spin_lock_irq(spinlock_t *lock) 92 + __acquires(lock) 94 93 { 95 94 rt_spin_lock(lock); 96 95 } ··· 104 101 } while (0) 105 102 106 103 static __always_inline void spin_unlock(spinlock_t *lock) 104 + __releases(lock) 107 105 { 108 106 rt_spin_unlock(lock); 109 107 } 110 108 111 109 static __always_inline void spin_unlock_bh(spinlock_t *lock) 110 + __releases(lock) 112 111 { 113 112 rt_spin_unlock(lock); 114 113 local_bh_enable(); 115 114 } 116 115 117 116 static __always_inline void spin_unlock_irq(spinlock_t *lock) 117 + __releases(lock) 118 118 { 119 119 rt_spin_unlock(lock); 120 120 } 121 121 122 122 static __always_inline void spin_unlock_irqrestore(spinlock_t *lock, 123 123 unsigned long flags) 124 + __releases(lock) 124 125 { 125 126 rt_spin_unlock(lock); 126 127 } 127 128 128 - #define spin_trylock(lock) \ 129 - __cond_lock(lock, rt_spin_trylock(lock)) 129 + #define spin_trylock(lock) rt_spin_trylock(lock) 130 130 131 - #define spin_trylock_bh(lock) \ 132 - __cond_lock(lock, rt_spin_trylock_bh(lock)) 131 + #define spin_trylock_bh(lock) rt_spin_trylock_bh(lock) 133 132 134 - #define spin_trylock_irq(lock) \ 135 - __cond_lock(lock, rt_spin_trylock(lock)) 133 + #define spin_trylock_irq(lock) rt_spin_trylock(lock) 136 134 137 - #define spin_trylock_irqsave(lock, flags) \ 138 - ({ \ 139 - int __locked; \ 140 - \ 141 - typecheck(unsigned long, flags); \ 142 - flags = 0; \ 143 - __locked = spin_trylock(lock); \ 144 - __locked; \ 145 - }) 135 + static __always_inline bool _spin_trylock_irqsave(spinlock_t *lock, unsigned long *flags) 136 + __cond_acquires(true, lock) 137 + { 138 + *flags = 0; 139 + return rt_spin_trylock(lock); 140 + } 141 + #define spin_trylock_irqsave(lock, flags) _spin_trylock_irqsave(lock, &(flags)) 146 142 147 143 #define spin_is_contended(lock) (((void)(lock), 0)) 148 144
+6 -4
include/linux/spinlock_types.h
··· 14 14 #ifndef CONFIG_PREEMPT_RT 15 15 16 16 /* Non PREEMPT_RT kernels map spinlock to raw_spinlock */ 17 - typedef struct spinlock { 17 + context_lock_struct(spinlock) { 18 18 union { 19 19 struct raw_spinlock rlock; 20 20 ··· 26 26 }; 27 27 #endif 28 28 }; 29 - } spinlock_t; 29 + }; 30 + typedef struct spinlock spinlock_t; 30 31 31 32 #define ___SPIN_LOCK_INITIALIZER(lockname) \ 32 33 { \ ··· 48 47 /* PREEMPT_RT kernels map spinlock to rt_mutex */ 49 48 #include <linux/rtmutex.h> 50 49 51 - typedef struct spinlock { 50 + context_lock_struct(spinlock) { 52 51 struct rt_mutex_base lock; 53 52 #ifdef CONFIG_DEBUG_LOCK_ALLOC 54 53 struct lockdep_map dep_map; 55 54 #endif 56 - } spinlock_t; 55 + }; 56 + typedef struct spinlock spinlock_t; 57 57 58 58 #define __SPIN_LOCK_UNLOCKED(name) \ 59 59 { \
+3 -2
include/linux/spinlock_types_raw.h
··· 11 11 12 12 #include <linux/lockdep_types.h> 13 13 14 - typedef struct raw_spinlock { 14 + context_lock_struct(raw_spinlock) { 15 15 arch_spinlock_t raw_lock; 16 16 #ifdef CONFIG_DEBUG_SPINLOCK 17 17 unsigned int magic, owner_cpu; ··· 20 20 #ifdef CONFIG_DEBUG_LOCK_ALLOC 21 21 struct lockdep_map dep_map; 22 22 #endif 23 - } raw_spinlock_t; 23 + }; 24 + typedef struct raw_spinlock raw_spinlock_t; 24 25 25 26 #define SPINLOCK_MAGIC 0xdead4ead 26 27
+50 -23
include/linux/srcu.h
··· 21 21 #include <linux/workqueue.h> 22 22 #include <linux/rcu_segcblist.h> 23 23 24 - struct srcu_struct; 24 + context_lock_struct(srcu_struct, __reentrant_ctx_lock); 25 25 26 26 #ifdef CONFIG_DEBUG_LOCK_ALLOC 27 27 ··· 77 77 #define SRCU_READ_FLAVOR_SLOWGP (SRCU_READ_FLAVOR_FAST | SRCU_READ_FLAVOR_FAST_UPDOWN) 78 78 // Flavors requiring synchronize_rcu() 79 79 // instead of smp_mb(). 80 - void __srcu_read_unlock(struct srcu_struct *ssp, int idx) __releases(ssp); 80 + void __srcu_read_unlock(struct srcu_struct *ssp, int idx) __releases_shared(ssp); 81 81 82 82 #ifdef CONFIG_TINY_SRCU 83 83 #include <linux/srcutiny.h> ··· 131 131 } 132 132 133 133 #ifdef CONFIG_NEED_SRCU_NMI_SAFE 134 - int __srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquires(ssp); 135 - void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx) __releases(ssp); 134 + int __srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquires_shared(ssp); 135 + void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx) __releases_shared(ssp); 136 136 #else 137 137 static inline int __srcu_read_lock_nmisafe(struct srcu_struct *ssp) 138 + __acquires_shared(ssp) 138 139 { 139 140 return __srcu_read_lock(ssp); 140 141 } 141 142 static inline void __srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx) 143 + __releases_shared(ssp) 142 144 { 143 145 __srcu_read_unlock(ssp, idx); 144 146 } ··· 212 210 213 211 #endif /* #else #ifdef CONFIG_DEBUG_LOCK_ALLOC */ 214 212 213 + /* 214 + * No-op helper to denote that ssp must be held. Because SRCU-protected pointers 215 + * should still be marked with __rcu_guarded, and we do not want to mark them 216 + * with __guarded_by(ssp) as it would complicate annotations for writers, we 217 + * choose the following strategy: srcu_dereference_check() calls this helper 218 + * that checks that the passed ssp is held, and then fake-acquires 'RCU'. 219 + */ 220 + static inline void __srcu_read_lock_must_hold(const struct srcu_struct *ssp) __must_hold_shared(ssp) { } 215 221 216 222 /** 217 223 * srcu_dereference_check - fetch SRCU-protected pointer for later dereferencing ··· 233 223 * to 1. The @c argument will normally be a logical expression containing 234 224 * lockdep_is_held() calls. 235 225 */ 236 - #define srcu_dereference_check(p, ssp, c) \ 237 - __rcu_dereference_check((p), __UNIQUE_ID(rcu), \ 238 - (c) || srcu_read_lock_held(ssp), __rcu) 226 + #define srcu_dereference_check(p, ssp, c) \ 227 + ({ \ 228 + __srcu_read_lock_must_hold(ssp); \ 229 + __acquire_shared_ctx_lock(RCU); \ 230 + __auto_type __v = __rcu_dereference_check((p), __UNIQUE_ID(rcu), \ 231 + (c) || srcu_read_lock_held(ssp), __rcu); \ 232 + __release_shared_ctx_lock(RCU); \ 233 + __v; \ 234 + }) 239 235 240 236 /** 241 237 * srcu_dereference - fetch SRCU-protected pointer for later dereferencing ··· 284 268 * invoke srcu_read_unlock() from one task and the matching srcu_read_lock() 285 269 * from another. 286 270 */ 287 - static inline int srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp) 271 + static inline int srcu_read_lock(struct srcu_struct *ssp) 272 + __acquires_shared(ssp) 288 273 { 289 274 int retval; 290 275 ··· 321 304 * contexts where RCU is watching, that is, from contexts where it would 322 305 * be legal to invoke rcu_read_lock(). Otherwise, lockdep will complain. 
323 306 */ 324 - static inline struct srcu_ctr __percpu *srcu_read_lock_fast(struct srcu_struct *ssp) __acquires(ssp) 307 + static inline struct srcu_ctr __percpu *srcu_read_lock_fast(struct srcu_struct *ssp) 308 + __acquires_shared(ssp) 325 309 { 326 310 struct srcu_ctr __percpu *retval; 327 311 ··· 362 344 * complain. 363 345 */ 364 346 static inline struct srcu_ctr __percpu *srcu_read_lock_fast_updown(struct srcu_struct *ssp) 365 - __acquires(ssp) 347 + __acquires_shared(ssp) 366 348 { 367 349 struct srcu_ctr __percpu *retval; 368 350 ··· 378 360 * See srcu_read_lock_fast() for more information. 379 361 */ 380 362 static inline struct srcu_ctr __percpu *srcu_read_lock_fast_notrace(struct srcu_struct *ssp) 381 - __acquires(ssp) 363 + __acquires_shared(ssp) 382 364 { 383 365 struct srcu_ctr __percpu *retval; 384 366 ··· 399 381 * and srcu_read_lock_fast(). However, the same definition/initialization 400 382 * requirements called out for srcu_read_lock_fast() apply. 401 383 */ 402 - static inline struct srcu_ctr __percpu *srcu_down_read_fast(struct srcu_struct *ssp) __acquires(ssp) 384 + static inline struct srcu_ctr __percpu *srcu_down_read_fast(struct srcu_struct *ssp) __acquires_shared(ssp) 403 385 { 404 386 WARN_ON_ONCE(IS_ENABLED(CONFIG_PROVE_RCU) && in_nmi()); 405 387 RCU_LOCKDEP_WARN(!rcu_is_watching(), "RCU must be watching srcu_down_read_fast()."); ··· 418 400 * then none of the other flavors may be used, whether before, during, 419 401 * or after. 420 402 */ 421 - static inline int srcu_read_lock_nmisafe(struct srcu_struct *ssp) __acquires(ssp) 403 + static inline int srcu_read_lock_nmisafe(struct srcu_struct *ssp) 404 + __acquires_shared(ssp) 422 405 { 423 406 int retval; 424 407 ··· 431 412 432 413 /* Used by tracing, cannot be traced and cannot invoke lockdep. */ 433 414 static inline notrace int 434 - srcu_read_lock_notrace(struct srcu_struct *ssp) __acquires(ssp) 415 + srcu_read_lock_notrace(struct srcu_struct *ssp) 416 + __acquires_shared(ssp) 435 417 { 436 418 int retval; 437 419 ··· 463 443 * which calls to down_read() may be nested. The same srcu_struct may be 464 444 * used concurrently by srcu_down_read() and srcu_read_lock(). 465 445 */ 466 - static inline int srcu_down_read(struct srcu_struct *ssp) __acquires(ssp) 446 + static inline int srcu_down_read(struct srcu_struct *ssp) 447 + __acquires_shared(ssp) 467 448 { 468 449 WARN_ON_ONCE(in_nmi()); 469 450 srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_NORMAL); ··· 479 458 * Exit an SRCU read-side critical section. 480 459 */ 481 460 static inline void srcu_read_unlock(struct srcu_struct *ssp, int idx) 482 - __releases(ssp) 461 + __releases_shared(ssp) 483 462 { 484 463 WARN_ON_ONCE(idx & ~0x1); 485 464 srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_NORMAL); ··· 495 474 * Exit a light-weight SRCU read-side critical section. 496 475 */ 497 476 static inline void srcu_read_unlock_fast(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp) 498 - __releases(ssp) 477 + __releases_shared(ssp) 499 478 { 500 479 srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_FAST); 501 480 srcu_lock_release(&ssp->dep_map); ··· 511 490 * Exit an SRCU-fast-updown read-side critical section. 
512 491 */ 513 492 static inline void 514 - srcu_read_unlock_fast_updown(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp) __releases(ssp) 493 + srcu_read_unlock_fast_updown(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp) __releases_shared(ssp) 515 494 { 516 495 srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_FAST_UPDOWN); 517 496 srcu_lock_release(&ssp->dep_map); ··· 525 504 * See srcu_read_unlock_fast() for more information. 526 505 */ 527 506 static inline void srcu_read_unlock_fast_notrace(struct srcu_struct *ssp, 528 - struct srcu_ctr __percpu *scp) __releases(ssp) 507 + struct srcu_ctr __percpu *scp) __releases_shared(ssp) 529 508 { 530 509 srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_FAST); 531 510 __srcu_read_unlock_fast(ssp, scp); ··· 540 519 * the same context as the matching srcu_down_read_fast(). 541 520 */ 542 521 static inline void srcu_up_read_fast(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp) 543 - __releases(ssp) 522 + __releases_shared(ssp) 544 523 { 545 524 WARN_ON_ONCE(IS_ENABLED(CONFIG_PROVE_RCU) && in_nmi()); 546 525 srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_FAST_UPDOWN); ··· 556 535 * Exit an SRCU read-side critical section, but in an NMI-safe manner. 557 536 */ 558 537 static inline void srcu_read_unlock_nmisafe(struct srcu_struct *ssp, int idx) 559 - __releases(ssp) 538 + __releases_shared(ssp) 560 539 { 561 540 WARN_ON_ONCE(idx & ~0x1); 562 541 srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_NMI); ··· 566 545 567 546 /* Used by tracing, cannot be traced and cannot call lockdep. */ 568 547 static inline notrace void 569 - srcu_read_unlock_notrace(struct srcu_struct *ssp, int idx) __releases(ssp) 548 + srcu_read_unlock_notrace(struct srcu_struct *ssp, int idx) __releases_shared(ssp) 570 549 { 571 550 srcu_check_read_flavor(ssp, SRCU_READ_FLAVOR_NORMAL); 572 551 __srcu_read_unlock(ssp, idx); ··· 581 560 * the same context as the matching srcu_down_read(). 582 561 */ 583 562 static inline void srcu_up_read(struct srcu_struct *ssp, int idx) 584 - __releases(ssp) 563 + __releases_shared(ssp) 585 564 { 586 565 WARN_ON_ONCE(idx & ~0x1); 587 566 WARN_ON_ONCE(in_nmi()); ··· 621 600 _T->idx = srcu_read_lock(_T->lock), 622 601 srcu_read_unlock(_T->lock, _T->idx), 623 602 int idx) 603 + DECLARE_LOCK_GUARD_1_ATTRS(srcu, __acquires_shared(_T), __releases_shared(*(struct srcu_struct **)_T)) 604 + #define class_srcu_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(srcu, _T) 624 605 625 606 DEFINE_LOCK_GUARD_1(srcu_fast, struct srcu_struct, 626 607 _T->scp = srcu_read_lock_fast(_T->lock), 627 608 srcu_read_unlock_fast(_T->lock, _T->scp), 628 609 struct srcu_ctr __percpu *scp) 610 + DECLARE_LOCK_GUARD_1_ATTRS(srcu_fast, __acquires_shared(_T), __releases_shared(*(struct srcu_struct **)_T)) 611 + #define class_srcu_fast_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(srcu_fast, _T) 629 612 630 613 DEFINE_LOCK_GUARD_1(srcu_fast_notrace, struct srcu_struct, 631 614 _T->scp = srcu_read_lock_fast_notrace(_T->lock), 632 615 srcu_read_unlock_fast_notrace(_T->lock, _T->scp), 633 616 struct srcu_ctr __percpu *scp) 617 + DECLARE_LOCK_GUARD_1_ATTRS(srcu_fast_notrace, __acquires_shared(_T), __releases_shared(*(struct srcu_struct **)_T)) 618 + #define class_srcu_fast_notrace_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(srcu_fast_notrace, _T) 634 619 635 620 #endif
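With the reader-side API annotated as shared acquire/release of the srcu_struct (and the fake-acquire of 'RCU' inside srcu_dereference_check() above), a balanced reader passes the analysis while a leaked critical section warns. A hedged sketch of a conforming reader (my_srcu, struct foo, and global_foo are illustrative only):

DEFINE_SRCU(my_srcu);

struct foo { int x; };
static struct foo __rcu *global_foo;

static void reader(void)
{
	int idx = srcu_read_lock(&my_srcu);	/* shared-acquires my_srcu */
	struct foo *f = srcu_dereference(global_foo, &my_srcu);

	if (f)
		pr_info("x=%d\n", f->x);
	srcu_read_unlock(&my_srcu, idx);	/* shared-releases my_srcu */
}

Returning without the srcu_read_unlock() would now be flagged at compile time on clang-22+ builds.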
+6
include/linux/srcutiny.h
··· 73 73 * index that must be passed to the matching srcu_read_unlock(). 74 74 */ 75 75 static inline int __srcu_read_lock(struct srcu_struct *ssp) 76 + __acquires_shared(ssp) 76 77 { 77 78 int idx; 78 79 ··· 81 80 idx = ((READ_ONCE(ssp->srcu_idx) + 1) & 0x2) >> 1; 82 81 WRITE_ONCE(ssp->srcu_lock_nesting[idx], READ_ONCE(ssp->srcu_lock_nesting[idx]) + 1); 83 82 preempt_enable(); 83 + __acquire_shared(ssp); 84 84 return idx; 85 85 } 86 86 ··· 98 96 } 99 97 100 98 static inline struct srcu_ctr __percpu *__srcu_read_lock_fast(struct srcu_struct *ssp) 99 + __acquires_shared(ssp) 101 100 { 102 101 return __srcu_ctr_to_ptr(ssp, __srcu_read_lock(ssp)); 103 102 } 104 103 105 104 static inline void __srcu_read_unlock_fast(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp) 105 + __releases_shared(ssp) 106 106 { 107 107 __srcu_read_unlock(ssp, __srcu_ptr_to_ctr(ssp, scp)); 108 108 } 109 109 110 110 static inline struct srcu_ctr __percpu *__srcu_read_lock_fast_updown(struct srcu_struct *ssp) 111 + __acquires_shared(ssp) 111 112 { 112 113 return __srcu_ctr_to_ptr(ssp, __srcu_read_lock(ssp)); 113 114 } 114 115 115 116 static inline 116 117 void __srcu_read_unlock_fast_updown(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp) 118 + __releases_shared(ssp) 117 119 { 118 120 __srcu_read_unlock(ssp, __srcu_ptr_to_ctr(ssp, scp)); 119 121 }
+9 -1
include/linux/srcutree.h
··· 233 233 #define DEFINE_STATIC_SRCU_FAST_UPDOWN(name) \ 234 234 __DEFINE_SRCU(name, SRCU_READ_FLAVOR_FAST_UPDOWN, static) 235 235 236 - int __srcu_read_lock(struct srcu_struct *ssp) __acquires(ssp); 236 + int __srcu_read_lock(struct srcu_struct *ssp) __acquires_shared(ssp); 237 237 void synchronize_srcu_expedited(struct srcu_struct *ssp); 238 238 void srcu_barrier(struct srcu_struct *ssp); 239 239 void srcu_expedite_current(struct srcu_struct *ssp); ··· 286 286 * implementations of this_cpu_inc(). 287 287 */ 288 288 static inline struct srcu_ctr __percpu notrace *__srcu_read_lock_fast(struct srcu_struct *ssp) 289 + __acquires_shared(ssp) 289 290 { 290 291 struct srcu_ctr __percpu *scp = READ_ONCE(ssp->srcu_ctrp); 291 292 ··· 295 294 else 296 295 atomic_long_inc(raw_cpu_ptr(&scp->srcu_locks)); // Y, and implicit RCU reader. 297 296 barrier(); /* Avoid leaking the critical section. */ 297 + __acquire_shared(ssp); 298 298 return scp; 299 299 } 300 300 ··· 310 308 */ 311 309 static inline void notrace 312 310 __srcu_read_unlock_fast(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp) 311 + __releases_shared(ssp) 313 312 { 313 + __release_shared(ssp); 314 314 barrier(); /* Avoid leaking the critical section. */ 315 315 if (!IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE)) 316 316 this_cpu_inc(scp->srcu_unlocks.counter); // Z, and implicit RCU reader. ··· 330 326 */ 331 327 static inline 332 328 struct srcu_ctr __percpu notrace *__srcu_read_lock_fast_updown(struct srcu_struct *ssp) 329 + __acquires_shared(ssp) 333 330 { 334 331 struct srcu_ctr __percpu *scp = READ_ONCE(ssp->srcu_ctrp); 335 332 ··· 339 334 else 340 335 atomic_long_inc(raw_cpu_ptr(&scp->srcu_locks)); // Y, and implicit RCU reader. 341 336 barrier(); /* Avoid leaking the critical section. */ 337 + __acquire_shared(ssp); 342 338 return scp; 343 339 } 344 340 ··· 354 348 */ 355 349 static inline void notrace 356 350 __srcu_read_unlock_fast_updown(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp) 351 + __releases_shared(ssp) 357 352 { 353 + __release_shared(ssp); 358 354 barrier(); /* Avoid leaking the critical section. */ 359 355 if (!IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE)) 360 356 this_cpu_inc(scp->srcu_unlocks.counter); // Z, and implicit RCU reader.
+15 -6
include/linux/ww_mutex.h
··· 44 44 unsigned int is_wait_die; 45 45 }; 46 46 47 - struct ww_mutex { 47 + context_lock_struct(ww_mutex) { 48 48 struct WW_MUTEX_BASE base; 49 49 struct ww_acquire_ctx *ctx; 50 50 #ifdef DEBUG_WW_MUTEXES ··· 52 52 #endif 53 53 }; 54 54 55 - struct ww_acquire_ctx { 55 + context_lock_struct(ww_acquire_ctx) { 56 56 struct task_struct *task; 57 57 unsigned long stamp; 58 58 unsigned int acquired; ··· 141 141 */ 142 142 static inline void ww_acquire_init(struct ww_acquire_ctx *ctx, 143 143 struct ww_class *ww_class) 144 + __acquires(ctx) __no_context_analysis 144 145 { 145 146 ctx->task = current; 146 147 ctx->stamp = atomic_long_inc_return_relaxed(&ww_class->stamp); ··· 180 179 * data structures. 181 180 */ 182 181 static inline void ww_acquire_done(struct ww_acquire_ctx *ctx) 182 + __releases(ctx) __acquires_shared(ctx) __no_context_analysis 183 183 { 184 184 #ifdef DEBUG_WW_MUTEXES 185 185 lockdep_assert_held(ctx); ··· 198 196 * mutexes have been released with ww_mutex_unlock. 199 197 */ 200 198 static inline void ww_acquire_fini(struct ww_acquire_ctx *ctx) 199 + __releases_shared(ctx) __no_context_analysis 201 200 { 202 201 #ifdef CONFIG_DEBUG_LOCK_ALLOC 203 202 mutex_release(&ctx->first_lock_dep_map, _THIS_IP_); ··· 248 245 * 249 246 * A mutex acquired with this function must be released with ww_mutex_unlock. 250 247 */ 251 - extern int /* __must_check */ ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx); 248 + extern int /* __must_check */ ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx) 249 + __cond_acquires(0, lock) __must_hold(ctx); 252 250 253 251 /** 254 252 * ww_mutex_lock_interruptible - acquire the w/w mutex, interruptible ··· 282 278 * A mutex acquired with this function must be released with ww_mutex_unlock. 283 279 */ 284 280 extern int __must_check ww_mutex_lock_interruptible(struct ww_mutex *lock, 285 - struct ww_acquire_ctx *ctx); 281 + struct ww_acquire_ctx *ctx) 282 + __cond_acquires(0, lock) __must_hold(ctx); 286 283 287 284 /** 288 285 * ww_mutex_lock_slow - slowpath acquiring of the w/w mutex ··· 310 305 */ 311 306 static inline void 312 307 ww_mutex_lock_slow(struct ww_mutex *lock, struct ww_acquire_ctx *ctx) 308 + __acquires(lock) __must_hold(ctx) __no_context_analysis 313 309 { 314 310 int ret; 315 311 #ifdef DEBUG_WW_MUTEXES ··· 348 342 static inline int __must_check 349 343 ww_mutex_lock_slow_interruptible(struct ww_mutex *lock, 350 344 struct ww_acquire_ctx *ctx) 345 + __cond_acquires(0, lock) __must_hold(ctx) 351 346 { 352 347 #ifdef DEBUG_WW_MUTEXES 353 348 DEBUG_LOCKS_WARN_ON(!ctx->contending_lock); ··· 356 349 return ww_mutex_lock_interruptible(lock, ctx); 357 350 } 358 351 359 - extern void ww_mutex_unlock(struct ww_mutex *lock); 352 + extern void ww_mutex_unlock(struct ww_mutex *lock) __releases(lock); 360 353 361 354 extern int __must_check ww_mutex_trylock(struct ww_mutex *lock, 362 - struct ww_acquire_ctx *ctx); 355 + struct ww_acquire_ctx *ctx) 356 + __cond_acquires(true, lock) __must_hold(ctx); 363 357 364 358 /*** 365 359 * ww_mutex_destroy - mark a w/w mutex unusable ··· 371 363 * this function is called. 372 364 */ 373 365 static inline void ww_mutex_destroy(struct ww_mutex *lock) 366 + __must_not_hold(lock) 374 367 { 375 368 #ifndef CONFIG_PREEMPT_RT 376 369 mutex_destroy(&lock->base);
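The new annotations encode the ww_mutex protocol itself: individual locks may only be taken while an acquire context is held, and ww_acquire_done() downgrades that context from exclusive to shared. A sketch of a conforming caller (-EDEADLK backoff handling deliberately elided; names are illustrative):

static void lock_pair(struct ww_mutex *a, struct ww_mutex *b,
		      struct ww_class *class)
{
	struct ww_acquire_ctx ctx;

	ww_acquire_init(&ctx, class);	/* acquires ctx */
	ww_mutex_lock(a, &ctx);		/* __must_hold(ctx) is satisfied */
	ww_mutex_lock(b, &ctx);
	ww_acquire_done(&ctx);		/* ctx: exclusive -> shared */
	/* ... critical section over both objects ... */
	ww_mutex_unlock(b);
	ww_mutex_unlock(a);
	ww_acquire_fini(&ctx);		/* releases ctx */
}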
+2
kernel/Makefile
··· 43 43 KCSAN_SANITIZE_kcov.o := n 44 44 UBSAN_SANITIZE_kcov.o := n 45 45 KMSAN_SANITIZE_kcov.o := n 46 + 47 + CONTEXT_ANALYSIS_kcov.o := y 46 48 CFLAGS_kcov.o := $(call cc-option, -fno-conserve-stack) -fno-stack-protector 47 49 48 50 obj-y += sched/
+25 -11
kernel/kcov.c
··· 55 55 refcount_t refcount; 56 56 /* The lock protects mode, size, area and t. */ 57 57 spinlock_t lock; 58 - enum kcov_mode mode; 58 + enum kcov_mode mode __guarded_by(&lock); 59 59 /* Size of arena (in long's). */ 60 - unsigned int size; 60 + unsigned int size __guarded_by(&lock); 61 61 /* Coverage buffer shared with user space. */ 62 - void *area; 62 + void *area __guarded_by(&lock); 63 63 /* Task for which we collect coverage, or NULL. */ 64 - struct task_struct *t; 64 + struct task_struct *t __guarded_by(&lock); 65 65 /* Collecting coverage from remote (background) threads. */ 66 66 bool remote; 67 67 /* Size of remote area (in long's). */ ··· 391 391 } 392 392 393 393 static void kcov_reset(struct kcov *kcov) 394 + __must_hold(&kcov->lock) 394 395 { 395 396 kcov->t = NULL; 396 397 kcov->mode = KCOV_MODE_INIT; ··· 401 400 } 402 401 403 402 static void kcov_remote_reset(struct kcov *kcov) 403 + __must_hold(&kcov->lock) 404 404 { 405 405 int bkt; 406 406 struct kcov_remote *remote; ··· 421 419 } 422 420 423 421 static void kcov_disable(struct task_struct *t, struct kcov *kcov) 422 + __must_hold(&kcov->lock) 424 423 { 425 424 kcov_task_reset(t); 426 425 if (kcov->remote) ··· 438 435 static void kcov_put(struct kcov *kcov) 439 436 { 440 437 if (refcount_dec_and_test(&kcov->refcount)) { 441 - kcov_remote_reset(kcov); 442 - vfree(kcov->area); 438 + /* Context-safety: no references left, object being destroyed. */ 439 + context_unsafe( 440 + kcov_remote_reset(kcov); 441 + vfree(kcov->area); 442 + ); 443 443 kfree(kcov); 444 444 } 445 445 } ··· 497 491 unsigned long size, off; 498 492 struct page *page; 499 493 unsigned long flags; 494 + void *area; 500 495 501 496 spin_lock_irqsave(&kcov->lock, flags); 502 497 size = kcov->size * sizeof(unsigned long); ··· 506 499 res = -EINVAL; 507 500 goto exit; 508 501 } 502 + area = kcov->area; 509 503 spin_unlock_irqrestore(&kcov->lock, flags); 510 504 vm_flags_set(vma, VM_DONTEXPAND); 511 505 for (off = 0; off < size; off += PAGE_SIZE) { 512 - page = vmalloc_to_page(kcov->area + off); 506 + page = vmalloc_to_page(area + off); 513 507 res = vm_insert_page(vma, vma->vm_start + off, page); 514 508 if (res) { 515 509 pr_warn_once("kcov: vm_insert_page() failed\n"); ··· 530 522 kcov = kzalloc(sizeof(*kcov), GFP_KERNEL); 531 523 if (!kcov) 532 524 return -ENOMEM; 525 + guard(spinlock_init)(&kcov->lock); 533 526 kcov->mode = KCOV_MODE_DISABLED; 534 527 kcov->sequence = 1; 535 528 refcount_set(&kcov->refcount, 1); 536 - spin_lock_init(&kcov->lock); 537 529 filep->private_data = kcov; 538 530 return nonseekable_open(inode, filep); 539 531 } ··· 564 556 * vmalloc fault handling path is instrumented. 
565 557 */ 566 558 static void kcov_fault_in_area(struct kcov *kcov) 559 + __must_hold(&kcov->lock) 567 560 { 568 561 unsigned long stride = PAGE_SIZE / sizeof(unsigned long); 569 562 unsigned long *area = kcov->area; ··· 593 584 594 585 static int kcov_ioctl_locked(struct kcov *kcov, unsigned int cmd, 595 586 unsigned long arg) 587 + __must_hold(&kcov->lock) 596 588 { 597 589 struct task_struct *t; 598 590 unsigned long flags, unused; ··· 824 814 } 825 815 826 816 static void kcov_remote_softirq_start(struct task_struct *t) 817 + __must_hold(&kcov_percpu_data.lock) 827 818 { 828 819 struct kcov_percpu_data *data = this_cpu_ptr(&kcov_percpu_data); 829 820 unsigned int mode; ··· 842 831 } 843 832 844 833 static void kcov_remote_softirq_stop(struct task_struct *t) 834 + __must_hold(&kcov_percpu_data.lock) 845 835 { 846 836 struct kcov_percpu_data *data = this_cpu_ptr(&kcov_percpu_data); 847 837 ··· 908 896 /* Put in kcov_remote_stop(). */ 909 897 kcov_get(kcov); 910 898 /* 911 - * Read kcov fields before unlock to prevent races with 912 - * KCOV_DISABLE / kcov_remote_reset(). 899 + * Read kcov fields before unlocking kcov_remote_lock to prevent races 900 + * with KCOV_DISABLE and kcov_remote_reset(); cannot acquire kcov->lock 901 + * here, because it might lead to deadlock given kcov_remote_lock is 902 + * acquired _after_ kcov->lock elsewhere. 913 903 */ 914 - mode = kcov->mode; 904 + mode = context_unsafe(kcov->mode); 915 905 sequence = kcov->sequence; 916 906 if (in_task()) { 917 907 size = kcov->remote_size;
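The kcov conversion shows both ends of the annotation spectrum: data members marked __guarded_by() their lock, and a loudly documented escape hatch where the analysis cannot see why an access is safe. A condensed sketch of the same teardown pattern (struct and function names hypothetical):

struct obj {
	spinlock_t lock;
	void *area __guarded_by(&lock);
};

static void obj_put_last_ref(struct obj *o)
{
	/*
	 * No references remain, so taking o->lock would be pointless;
	 * the exception is annotated explicitly rather than silently.
	 */
	context_unsafe(vfree(o->area));
	kfree(o);
}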
+2
kernel/kcsan/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 + CONTEXT_ANALYSIS := y 3 + 2 4 KCSAN_SANITIZE := n 3 5 KCOV_INSTRUMENT := n 4 6 UBSAN_SANITIZE := n
+8 -3
kernel/kcsan/report.c
··· 116 116 * been reported since (now - KCSAN_REPORT_ONCE_IN_MS). 117 117 */ 118 118 static bool rate_limit_report(unsigned long frame1, unsigned long frame2) 119 + __must_hold(&report_lock) 119 120 { 120 121 struct report_time *use_entry = &report_times[0]; 121 122 unsigned long invalid_before; ··· 367 366 368 367 static void 369 368 print_stack_trace(unsigned long stack_entries[], int num_entries, unsigned long reordered_to) 369 + __must_hold(&report_lock) 370 370 { 371 371 stack_trace_print(stack_entries, num_entries, 0); 372 372 if (reordered_to) ··· 375 373 } 376 374 377 375 static void print_verbose_info(struct task_struct *task) 376 + __must_hold(&report_lock) 378 377 { 379 378 if (!task) 380 379 return; ··· 392 389 const struct access_info *ai, 393 390 struct other_info *other_info, 394 391 u64 old, u64 new, u64 mask) 392 + __must_hold(&report_lock) 395 393 { 396 394 unsigned long reordered_to = 0; 397 395 unsigned long stack_entries[NUM_STACK_ENTRIES] = { 0 }; ··· 500 496 } 501 497 502 498 static void release_report(unsigned long *flags, struct other_info *other_info) 499 + __releases(&report_lock) 503 500 { 504 501 /* 505 502 * Use size to denote valid/invalid, since KCSAN entirely ignores ··· 512 507 513 508 /* 514 509 * Sets @other_info->task and awaits consumption of @other_info. 515 - * 516 - * Precondition: report_lock is held. 517 - * Postcondition: report_lock is held. 518 510 */ 519 511 static void set_other_info_task_blocking(unsigned long *flags, 520 512 const struct access_info *ai, 521 513 struct other_info *other_info) 514 + __must_hold(&report_lock) 522 515 { 523 516 /* 524 517 * We may be instrumenting a code-path where current->state is already ··· 575 572 static void prepare_report_producer(unsigned long *flags, 576 573 const struct access_info *ai, 577 574 struct other_info *other_info) 575 + __must_not_hold(&report_lock) 578 576 { 579 577 raw_spin_lock_irqsave(&report_lock, *flags); 580 578 ··· 607 603 static bool prepare_report_consumer(unsigned long *flags, 608 604 const struct access_info *ai, 609 605 struct other_info *other_info) 606 + __cond_acquires(true, &report_lock) 610 607 { 611 608 612 609 raw_spin_lock_irqsave(&report_lock, *flags);
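prepare_report_consumer() above demonstrates __cond_acquires(): the return value tells the analysis on which paths report_lock is held afterwards. A reduced sketch of the same shape (report_pending() is a hypothetical predicate):

static bool try_begin_report(unsigned long *flags)
	__cond_acquires(true, &report_lock)
{
	raw_spin_lock_irqsave(&report_lock, *flags);
	if (!report_pending()) {
		raw_spin_unlock_irqrestore(&report_lock, *flags);
		return false;	/* lock not held on this path */
	}
	return true;		/* caller must release report_lock */
}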
+137 -58
kernel/locking/test-ww_mutex.c
··· 13 13 #include <linux/slab.h> 14 14 #include <linux/ww_mutex.h> 15 15 16 - static DEFINE_WD_CLASS(ww_class); 16 + static DEFINE_WD_CLASS(wd_class); 17 + static DEFINE_WW_CLASS(ww_class); 17 18 struct workqueue_struct *wq; 18 19 19 20 #ifdef CONFIG_DEBUG_WW_MUTEX_SLOWPATH ··· 55 54 ww_mutex_unlock(&mtx->mutex); 56 55 } 57 56 58 - static int __test_mutex(unsigned int flags) 57 + static int __test_mutex(struct ww_class *class, unsigned int flags) 59 58 { 60 59 #define TIMEOUT (HZ / 16) 61 60 struct test_mutex mtx; 62 61 struct ww_acquire_ctx ctx; 63 62 int ret; 64 63 65 - ww_mutex_init(&mtx.mutex, &ww_class); 64 + ww_mutex_init(&mtx.mutex, class); 66 65 if (flags & TEST_MTX_CTX) 67 - ww_acquire_init(&ctx, &ww_class); 66 + ww_acquire_init(&ctx, class); 68 67 69 68 INIT_WORK_ONSTACK(&mtx.work, test_mutex_work); 70 69 init_completion(&mtx.ready); ··· 72 71 init_completion(&mtx.done); 73 72 mtx.flags = flags; 74 73 75 - schedule_work(&mtx.work); 74 + queue_work(wq, &mtx.work); 76 75 77 76 wait_for_completion(&mtx.ready); 78 77 ww_mutex_lock(&mtx.mutex, (flags & TEST_MTX_CTX) ? &ctx : NULL); ··· 107 106 #undef TIMEOUT 108 107 } 109 108 110 - static int test_mutex(void) 109 + static int test_mutex(struct ww_class *class) 111 110 { 112 111 int ret; 113 112 int i; 114 113 115 114 for (i = 0; i < __TEST_MTX_LAST; i++) { 116 - ret = __test_mutex(i); 115 + ret = __test_mutex(class, i); 117 116 if (ret) 118 117 return ret; 119 118 } ··· 121 120 return 0; 122 121 } 123 122 124 - static int test_aa(bool trylock) 123 + static int test_aa(struct ww_class *class, bool trylock) 125 124 { 126 125 struct ww_mutex mutex; 127 126 struct ww_acquire_ctx ctx; 128 127 int ret; 129 128 const char *from = trylock ? "trylock" : "lock"; 130 129 131 - ww_mutex_init(&mutex, &ww_class); 132 - ww_acquire_init(&ctx, &ww_class); 130 + ww_mutex_init(&mutex, class); 131 + ww_acquire_init(&ctx, class); 133 132 134 133 if (!trylock) { 135 134 ret = ww_mutex_lock(&mutex, &ctx); ··· 178 177 179 178 struct test_abba { 180 179 struct work_struct work; 180 + struct ww_class *class; 181 181 struct ww_mutex a_mutex; 182 182 struct ww_mutex b_mutex; 183 183 struct completion a_ready; ··· 193 191 struct ww_acquire_ctx ctx; 194 192 int err; 195 193 196 - ww_acquire_init_noinject(&ctx, &ww_class); 194 + ww_acquire_init_noinject(&ctx, abba->class); 197 195 if (!abba->trylock) 198 196 ww_mutex_lock(&abba->b_mutex, &ctx); 199 197 else ··· 219 217 abba->result = err; 220 218 } 221 219 222 - static int test_abba(bool trylock, bool resolve) 220 + static int test_abba(struct ww_class *class, bool trylock, bool resolve) 223 221 { 224 222 struct test_abba abba; 225 223 struct ww_acquire_ctx ctx; 226 224 int err, ret; 227 225 228 - ww_mutex_init(&abba.a_mutex, &ww_class); 229 - ww_mutex_init(&abba.b_mutex, &ww_class); 226 + ww_mutex_init(&abba.a_mutex, class); 227 + ww_mutex_init(&abba.b_mutex, class); 230 228 INIT_WORK_ONSTACK(&abba.work, test_abba_work); 231 229 init_completion(&abba.a_ready); 232 230 init_completion(&abba.b_ready); 231 + abba.class = class; 233 232 abba.trylock = trylock; 234 233 abba.resolve = resolve; 235 234 236 - schedule_work(&abba.work); 235 + queue_work(wq, &abba.work); 237 236 238 - ww_acquire_init_noinject(&ctx, &ww_class); 237 + ww_acquire_init_noinject(&ctx, class); 239 238 if (!trylock) 240 239 ww_mutex_lock(&abba.a_mutex, &ctx); 241 240 else ··· 281 278 282 279 struct test_cycle { 283 280 struct work_struct work; 281 + struct ww_class *class; 284 282 struct ww_mutex a_mutex; 285 283 struct ww_mutex *b_mutex; 286 284 
struct completion *a_signal; ··· 295 291 struct ww_acquire_ctx ctx; 296 292 int err, erra = 0; 297 293 298 - ww_acquire_init_noinject(&ctx, &ww_class); 294 + ww_acquire_init_noinject(&ctx, cycle->class); 299 295 ww_mutex_lock(&cycle->a_mutex, &ctx); 300 296 301 297 complete(cycle->a_signal); ··· 318 314 cycle->result = err ?: erra; 319 315 } 320 316 321 - static int __test_cycle(unsigned int nthreads) 317 + static int __test_cycle(struct ww_class *class, unsigned int nthreads) 322 318 { 323 319 struct test_cycle *cycles; 324 320 unsigned int n, last = nthreads - 1; ··· 331 327 for (n = 0; n < nthreads; n++) { 332 328 struct test_cycle *cycle = &cycles[n]; 333 329 334 - ww_mutex_init(&cycle->a_mutex, &ww_class); 330 + cycle->class = class; 331 + ww_mutex_init(&cycle->a_mutex, class); 335 332 if (n == last) 336 333 cycle->b_mutex = &cycles[0].a_mutex; 337 334 else ··· 372 367 return ret; 373 368 } 374 369 375 - static int test_cycle(unsigned int ncpus) 370 + static int test_cycle(struct ww_class *class, unsigned int ncpus) 376 371 { 377 372 unsigned int n; 378 373 int ret; 379 374 380 375 for (n = 2; n <= ncpus + 1; n++) { 381 - ret = __test_cycle(n); 376 + ret = __test_cycle(class, n); 382 377 if (ret) 383 378 return ret; 384 379 } ··· 389 384 struct stress { 390 385 struct work_struct work; 391 386 struct ww_mutex *locks; 387 + struct ww_class *class; 392 388 unsigned long timeout; 393 389 int nlocks; 394 390 }; ··· 449 443 int contended = -1; 450 444 int n, err; 451 445 452 - ww_acquire_init(&ctx, &ww_class); 446 + ww_acquire_init(&ctx, stress->class); 453 447 retry: 454 448 err = 0; 455 449 for (n = 0; n < nlocks; n++) { ··· 517 511 order = NULL; 518 512 519 513 do { 520 - ww_acquire_init(&ctx, &ww_class); 514 + ww_acquire_init(&ctx, stress->class); 521 515 522 516 list_for_each_entry(ll, &locks, link) { 523 517 err = ww_mutex_lock(ll->lock, &ctx); ··· 576 570 #define STRESS_ONE BIT(2) 577 571 #define STRESS_ALL (STRESS_INORDER | STRESS_REORDER | STRESS_ONE) 578 572 579 - static int stress(int nlocks, int nthreads, unsigned int flags) 573 + static int stress(struct ww_class *class, int nlocks, int nthreads, unsigned int flags) 580 574 { 581 575 struct ww_mutex *locks; 582 576 struct stress *stress_array; ··· 594 588 } 595 589 596 590 for (n = 0; n < nlocks; n++) 597 - ww_mutex_init(&locks[n], &ww_class); 591 + ww_mutex_init(&locks[n], class); 598 592 599 593 count = 0; 600 594 for (n = 0; nthreads; n++) { ··· 623 617 stress = &stress_array[count++]; 624 618 625 619 INIT_WORK(&stress->work, fn); 620 + stress->class = class; 626 621 stress->locks = locks; 627 622 stress->nlocks = nlocks; 628 623 stress->timeout = jiffies + 2*HZ; ··· 642 635 return 0; 643 636 } 644 637 645 - static int __init test_ww_mutex_init(void) 638 + static int run_tests(struct ww_class *class) 646 639 { 647 640 int ncpus = num_online_cpus(); 648 641 int ret, i; 649 642 650 - printk(KERN_INFO "Beginning ww mutex selftests\n"); 643 + ret = test_mutex(class); 644 + if (ret) 645 + return ret; 646 + 647 + ret = test_aa(class, false); 648 + if (ret) 649 + return ret; 650 + 651 + ret = test_aa(class, true); 652 + if (ret) 653 + return ret; 654 + 655 + for (i = 0; i < 4; i++) { 656 + ret = test_abba(class, i & 1, i & 2); 657 + if (ret) 658 + return ret; 659 + } 660 + 661 + ret = test_cycle(class, ncpus); 662 + if (ret) 663 + return ret; 664 + 665 + ret = stress(class, 16, 2 * ncpus, STRESS_INORDER); 666 + if (ret) 667 + return ret; 668 + 669 + ret = stress(class, 16, 2 * ncpus, STRESS_REORDER); 670 + if (ret) 671 + return 
ret; 672 + 673 + ret = stress(class, 2046, hweight32(STRESS_ALL) * ncpus, STRESS_ALL); 674 + if (ret) 675 + return ret; 676 + 677 + return 0; 678 + } 679 + 680 + static int run_test_classes(void) 681 + { 682 + int ret; 683 + 684 + pr_info("Beginning ww (wound) mutex selftests\n"); 685 + 686 + ret = run_tests(&ww_class); 687 + if (ret) 688 + return ret; 689 + 690 + pr_info("Beginning ww (die) mutex selftests\n"); 691 + ret = run_tests(&wd_class); 692 + if (ret) 693 + return ret; 694 + 695 + pr_info("All ww mutex selftests passed\n"); 696 + return 0; 697 + } 698 + 699 + static DEFINE_MUTEX(run_lock); 700 + 701 + static ssize_t run_tests_store(struct kobject *kobj, struct kobj_attribute *attr, 702 + const char *buf, size_t count) 703 + { 704 + if (!mutex_trylock(&run_lock)) { 705 + pr_err("Test already running\n"); 706 + return count; 707 + } 708 + 709 + run_test_classes(); 710 + mutex_unlock(&run_lock); 711 + 712 + return count; 713 + } 714 + 715 + static struct kobj_attribute run_tests_attribute = 716 + __ATTR(run_tests, 0664, NULL, run_tests_store); 717 + 718 + static struct attribute *attrs[] = { 719 + &run_tests_attribute.attr, 720 + NULL, /* need to NULL terminate the list of attributes */ 721 + }; 722 + 723 + static struct attribute_group attr_group = { 724 + .attrs = attrs, 725 + }; 726 + 727 + static struct kobject *test_ww_mutex_kobj; 728 + 729 + static int __init test_ww_mutex_init(void) 730 + { 731 + int ret; 651 732 652 733 prandom_seed_state(&rng, get_random_u64()); 653 734 ··· 743 648 if (!wq) 744 649 return -ENOMEM; 745 650 746 - ret = test_mutex(); 747 - if (ret) 748 - return ret; 749 - 750 - ret = test_aa(false); 751 - if (ret) 752 - return ret; 753 - 754 - ret = test_aa(true); 755 - if (ret) 756 - return ret; 757 - 758 - for (i = 0; i < 4; i++) { 759 - ret = test_abba(i & 1, i & 2); 760 - if (ret) 761 - return ret; 651 + test_ww_mutex_kobj = kobject_create_and_add("test_ww_mutex", kernel_kobj); 652 + if (!test_ww_mutex_kobj) { 653 + destroy_workqueue(wq); 654 + return -ENOMEM; 762 655 } 763 656 764 - ret = test_cycle(ncpus); 765 - if (ret) 657 + /* Create the files associated with this kobject */ 658 + ret = sysfs_create_group(test_ww_mutex_kobj, &attr_group); 659 + if (ret) { 660 + kobject_put(test_ww_mutex_kobj); 661 + destroy_workqueue(wq); 766 662 return ret; 663 + } 767 664 768 - ret = stress(16, 2*ncpus, STRESS_INORDER); 769 - if (ret) 770 - return ret; 665 + mutex_lock(&run_lock); 666 + ret = run_test_classes(); 667 + mutex_unlock(&run_lock); 771 668 772 - ret = stress(16, 2*ncpus, STRESS_REORDER); 773 - if (ret) 774 - return ret; 775 - 776 - ret = stress(2046, hweight32(STRESS_ALL)*ncpus, STRESS_ALL); 777 - if (ret) 778 - return ret; 779 - 780 - printk(KERN_INFO "All ww mutex selftests passed\n"); 781 - return 0; 669 + return ret; 782 670 } 783 671 784 672 static void __exit test_ww_mutex_exit(void) 785 673 { 674 + kobject_put(test_ww_mutex_kobj); 786 675 destroy_workqueue(wq); 787 676 } 788 677
+2
kernel/printk/printk.c
··· 245 245 * For console list or console->flags updates 246 246 */ 247 247 void console_list_lock(void) 248 + __acquires(&console_mutex) 248 249 { 249 250 /* 250 251 * In unregister_console() and console_force_preferred_locked(), ··· 270 269 * Counterpart to console_list_lock() 271 270 */ 272 271 void console_list_unlock(void) 272 + __releases(&console_mutex) 273 273 { 274 274 mutex_unlock(&console_mutex); 275 275 }
+3
kernel/sched/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 3 + CONTEXT_ANALYSIS_core.o := y 4 + CONTEXT_ANALYSIS_fair.o := y 5 + 3 6 # The compilers are complaining about unused variables inside an if(0) scope 4 7 # block. This is daft, shut them up. 5 8 ccflags-y += $(call cc-disable-warning, unused-but-set-variable)
+62 -27
kernel/sched/core.c
··· 396 396 static struct cpumask sched_core_mask; 397 397 398 398 static void sched_core_lock(int cpu, unsigned long *flags) 399 + __context_unsafe(/* acquires multiple */) 400 + __acquires(&runqueues.__lock) /* overapproximation */ 399 401 { 400 402 const struct cpumask *smt_mask = cpu_smt_mask(cpu); 401 403 int t, i = 0; ··· 408 406 } 409 407 410 408 static void sched_core_unlock(int cpu, unsigned long *flags) 409 + __context_unsafe(/* releases multiple */) 410 + __releases(&runqueues.__lock) /* overapproximation */ 411 411 { 412 412 const struct cpumask *smt_mask = cpu_smt_mask(cpu); 413 413 int t; ··· 634 630 */ 635 631 636 632 void raw_spin_rq_lock_nested(struct rq *rq, int subclass) 633 + __context_unsafe() 637 634 { 638 635 raw_spinlock_t *lock; 639 636 ··· 660 655 } 661 656 662 657 bool raw_spin_rq_trylock(struct rq *rq) 658 + __context_unsafe() 663 659 { 664 660 raw_spinlock_t *lock; 665 661 bool ret; ··· 702 696 raw_spin_rq_lock(rq1); 703 697 if (__rq_lockp(rq1) != __rq_lockp(rq2)) 704 698 raw_spin_rq_lock_nested(rq2, SINGLE_DEPTH_NESTING); 699 + else 700 + __acquire_ctx_lock(__rq_lockp(rq2)); /* fake acquire */ 705 701 706 702 double_rq_clock_clear_update(rq1, rq2); 707 703 } 708 704 709 705 /* 710 - * __task_rq_lock - lock the rq @p resides on. 706 + * ___task_rq_lock - lock the rq @p resides on. 711 707 */ 712 - struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf) 713 - __acquires(rq->lock) 708 + struct rq *___task_rq_lock(struct task_struct *p, struct rq_flags *rf) 714 709 { 715 710 struct rq *rq; 716 711 ··· 734 727 /* 735 728 * task_rq_lock - lock p->pi_lock and lock the rq @p resides on. 736 729 */ 737 - struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf) 738 - __acquires(p->pi_lock) 739 - __acquires(rq->lock) 730 + struct rq *_task_rq_lock(struct task_struct *p, struct rq_flags *rf) 740 731 { 741 732 struct rq *rq; 742 733 ··· 2436 2431 */ 2437 2432 static struct rq *move_queued_task(struct rq *rq, struct rq_flags *rf, 2438 2433 struct task_struct *p, int new_cpu) 2434 + __must_hold(__rq_lockp(rq)) 2439 2435 { 2440 2436 lockdep_assert_rq_held(rq); 2441 2437 ··· 2483 2477 */ 2484 2478 static struct rq *__migrate_task(struct rq *rq, struct rq_flags *rf, 2485 2479 struct task_struct *p, int dest_cpu) 2480 + __must_hold(__rq_lockp(rq)) 2486 2481 { 2487 2482 /* Affinity changed (again). */ 2488 2483 if (!is_cpu_allowed(p, dest_cpu)) ··· 2519 2512 * during wakeups, see set_cpus_allowed_ptr()'s TASK_WAKING test. 2520 2513 */ 2521 2514 flush_smp_call_function_queue(); 2515 + 2516 + /* 2517 + * We may change the underlying rq, but the locks held will 2518 + * appropriately be "transferred" when switching. 
2519 + */ 2520 + context_unsafe_alias(rq); 2522 2521 2523 2522 raw_spin_lock(&p->pi_lock); 2524 2523 rq_lock(rq, &rf); ··· 2636 2623 2637 2624 if (!lowest_rq) 2638 2625 goto out_unlock; 2626 + 2627 + lockdep_assert_rq_held(lowest_rq); 2639 2628 2640 2629 // XXX validate p is still the highest prio task 2641 2630 if (task_rq(p) == rq) { ··· 2849 2834 */ 2850 2835 static int affine_move_task(struct rq *rq, struct task_struct *p, struct rq_flags *rf, 2851 2836 int dest_cpu, unsigned int flags) 2852 - __releases(rq->lock) 2853 - __releases(p->pi_lock) 2837 + __releases(__rq_lockp(rq), &p->pi_lock) 2854 2838 { 2855 2839 struct set_affinity_pending my_pending = { }, *pending = NULL; 2856 2840 bool stop_pending, complete = false; ··· 3004 2990 struct affinity_context *ctx, 3005 2991 struct rq *rq, 3006 2992 struct rq_flags *rf) 3007 - __releases(rq->lock) 3008 - __releases(p->pi_lock) 2993 + __releases(__rq_lockp(rq), &p->pi_lock) 3009 2994 { 3010 2995 const struct cpumask *cpu_allowed_mask = task_cpu_possible_mask(p); 3011 2996 const struct cpumask *cpu_valid_mask = cpu_active_mask; ··· 4286 4273 */ 4287 4274 int task_call_func(struct task_struct *p, task_call_f func, void *arg) 4288 4275 { 4289 - struct rq *rq = NULL; 4290 4276 struct rq_flags rf; 4291 4277 int ret; 4292 4278 4293 4279 raw_spin_lock_irqsave(&p->pi_lock, rf.flags); 4294 4280 4295 - if (__task_needs_rq_lock(p)) 4296 - rq = __task_rq_lock(p, &rf); 4281 + if (__task_needs_rq_lock(p)) { 4282 + struct rq *rq = __task_rq_lock(p, &rf); 4297 4283 4298 - /* 4299 - * At this point the task is pinned; either: 4300 - * - blocked and we're holding off wakeups (pi->lock) 4301 - * - woken, and we're holding off enqueue (rq->lock) 4302 - * - queued, and we're holding off schedule (rq->lock) 4303 - * - running, and we're holding off de-schedule (rq->lock) 4304 - * 4305 - * The called function (@func) can use: task_curr(), p->on_rq and 4306 - * p->__state to differentiate between these states. 4307 - */ 4308 - ret = func(p, arg); 4284 + /* 4285 + * At this point the task is pinned; either: 4286 + * - blocked and we're holding off wakeups (pi->lock) 4287 + * - woken, and we're holding off enqueue (rq->lock) 4288 + * - queued, and we're holding off schedule (rq->lock) 4289 + * - running, and we're holding off de-schedule (rq->lock) 4290 + * 4291 + * The called function (@func) can use: task_curr(), p->on_rq and 4292 + * p->__state to differentiate between these states. 4293 + */ 4294 + ret = func(p, arg); 4309 4295 4310 - if (rq) 4311 4296 __task_rq_unlock(rq, p, &rf); 4297 + } else { 4298 + ret = func(p, arg); 4299 + } 4312 4300 4313 4301 raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags); 4314 4302 return ret; ··· 4986 4972 4987 4973 static inline void 4988 4974 prepare_lock_switch(struct rq *rq, struct task_struct *next, struct rq_flags *rf) 4975 + __releases(__rq_lockp(rq)) 4976 + __acquires(__rq_lockp(this_rq())) 4989 4977 { 4990 4978 /* 4991 4979 * Since the runqueue lock will be released by the next ··· 5001 4985 /* this is a valid case when another task releases the spinlock */ 5002 4986 rq_lockp(rq)->owner = next; 5003 4987 #endif 4988 + /* 4989 + * Model the rq reference switcheroo. 
4990 + */ 4991 + __release(__rq_lockp(rq)); 4992 + __acquire(__rq_lockp(this_rq())); 5004 4993 } 5005 4994 5006 4995 static inline void finish_lock_switch(struct rq *rq) 4996 + __releases(__rq_lockp(rq)) 5007 4997 { 5008 4998 /* 5009 4999 * If we are tracking spinlock dependencies then we have to ··· 5065 5043 static inline void 5066 5044 prepare_task_switch(struct rq *rq, struct task_struct *prev, 5067 5045 struct task_struct *next) 5046 + __must_hold(__rq_lockp(rq)) 5068 5047 { 5069 5048 kcov_prepare_switch(prev); 5070 5049 sched_info_switch(rq, prev, next); ··· 5096 5073 * because prev may have moved to another CPU. 5097 5074 */ 5098 5075 static struct rq *finish_task_switch(struct task_struct *prev) 5099 - __releases(rq->lock) 5076 + __releases(__rq_lockp(this_rq())) 5100 5077 { 5101 5078 struct rq *rq = this_rq(); 5102 5079 struct mm_struct *mm = rq->prev_mm; ··· 5192 5169 * @prev: the thread we just switched away from. 5193 5170 */ 5194 5171 asmlinkage __visible void schedule_tail(struct task_struct *prev) 5195 - __releases(rq->lock) 5172 + __releases(__rq_lockp(this_rq())) 5196 5173 { 5197 5174 /* 5198 5175 * New tasks start with FORK_PREEMPT_COUNT, see there and ··· 5224 5201 static __always_inline struct rq * 5225 5202 context_switch(struct rq *rq, struct task_struct *prev, 5226 5203 struct task_struct *next, struct rq_flags *rf) 5204 + __releases(__rq_lockp(rq)) 5227 5205 { 5228 5206 prepare_task_switch(rq, prev, next); 5229 5207 ··· 5893 5869 */ 5894 5870 static inline struct task_struct * 5895 5871 __pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf) 5872 + __must_hold(__rq_lockp(rq)) 5896 5873 { 5897 5874 const struct sched_class *class; 5898 5875 struct task_struct *p; ··· 5994 5969 5995 5970 static struct task_struct * 5996 5971 pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf) 5972 + __must_hold(__rq_lockp(rq)) 5997 5973 { 5998 5974 struct task_struct *next, *p, *max; 5999 5975 const struct cpumask *smt_mask; ··· 6303 6277 } 6304 6278 6305 6279 static void sched_core_balance(struct rq *rq) 6280 + __must_hold(__rq_lockp(rq)) 6306 6281 { 6307 6282 struct sched_domain *sd; 6308 6283 int cpu = cpu_of(rq); ··· 6449 6422 6450 6423 static struct task_struct * 6451 6424 pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf) 6425 + __must_hold(__rq_lockp(rq)) 6452 6426 { 6453 6427 return __pick_next_task(rq, prev, rf); 6454 6428 } ··· 8073 8045 int cpu; 8074 8046 8075 8047 scoped_guard (raw_spinlock_irq, &p->pi_lock) { 8048 + /* 8049 + * We may change the underlying rq, but the locks held will 8050 + * appropriately be "transferred" when switching. 8051 + */ 8052 + context_unsafe_alias(rq); 8053 + 8076 8054 cpu = select_fallback_rq(rq->cpu, p); 8077 8055 8078 8056 rq_lock(rq, &rf); ··· 8102 8068 * effective when the hotplug motion is down. 8103 8069 */ 8104 8070 static void balance_push(struct rq *rq) 8071 + __must_hold(__rq_lockp(rq)) 8105 8072 { 8106 8073 struct task_struct *push_task = rq->curr; 8107 8074
+6 -1
kernel/sched/fair.c
··· 2860 2860 } 2861 2861 2862 2862 static void task_numa_placement(struct task_struct *p) 2863 + __context_unsafe(/* conditional locking */) 2863 2864 { 2864 2865 int seq, nid, max_nid = NUMA_NO_NODE; 2865 2866 unsigned long max_faults = 0; ··· 4782 4781 return cfs_rq->avg.load_avg; 4783 4782 } 4784 4783 4785 - static int sched_balance_newidle(struct rq *this_rq, struct rq_flags *rf); 4784 + static int sched_balance_newidle(struct rq *this_rq, struct rq_flags *rf) 4785 + __must_hold(__rq_lockp(this_rq)); 4786 4786 4787 4787 static inline unsigned long task_util(struct task_struct *p) 4788 4788 { ··· 6190 6188 * used to track this state. 6191 6189 */ 6192 6190 static int do_sched_cfs_period_timer(struct cfs_bandwidth *cfs_b, int overrun, unsigned long flags) 6191 + __must_hold(&cfs_b->lock) 6193 6192 { 6194 6193 int throttled; 6195 6194 ··· 8912 8909 8913 8910 struct task_struct * 8914 8911 pick_next_task_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf) 8912 + __must_hold(__rq_lockp(rq)) 8915 8913 { 8916 8914 struct sched_entity *se; 8917 8915 struct task_struct *p; ··· 12846 12842 * > 0 - success, new (fair) tasks present 12847 12843 */ 12848 12844 static int sched_balance_newidle(struct rq *this_rq, struct rq_flags *rf) 12845 + __must_hold(__rq_lockp(this_rq)) 12849 12846 { 12850 12847 unsigned long next_balance = jiffies + HZ; 12851 12848 int this_cpu = this_rq->cpu;
+91 -35
kernel/sched/sched.h
··· 1362 1362 return prandom_u32_state(this_cpu_ptr(&sched_rnd_state)); 1363 1363 } 1364 1364 1365 + static __always_inline struct rq *__this_rq(void) 1366 + { 1367 + return this_cpu_ptr(&runqueues); 1368 + } 1369 + 1365 1370 #define cpu_rq(cpu) (&per_cpu(runqueues, (cpu))) 1366 - #define this_rq() this_cpu_ptr(&runqueues) 1371 + #define this_rq() __this_rq() 1367 1372 #define task_rq(p) cpu_rq(task_cpu(p)) 1368 1373 #define cpu_curr(cpu) (cpu_rq(cpu)->curr) 1369 1374 #define raw_rq() raw_cpu_ptr(&runqueues) ··· 1435 1430 } 1436 1431 1437 1432 static inline raw_spinlock_t *__rq_lockp(struct rq *rq) 1433 + __returns_ctx_lock(rq_lockp(rq)) /* alias them */ 1438 1434 { 1439 1435 if (rq->core_enabled) 1440 1436 return &rq->core->__lock; ··· 1535 1529 } 1536 1530 1537 1531 static inline raw_spinlock_t *__rq_lockp(struct rq *rq) 1532 + __returns_ctx_lock(rq_lockp(rq)) /* alias them */ 1538 1533 { 1539 1534 return &rq->__lock; 1540 1535 } ··· 1578 1571 #endif /* !CONFIG_RT_GROUP_SCHED */ 1579 1572 1580 1573 static inline void lockdep_assert_rq_held(struct rq *rq) 1574 + __assumes_ctx_lock(__rq_lockp(rq)) 1581 1575 { 1582 1576 lockdep_assert_held(__rq_lockp(rq)); 1583 1577 } 1584 1578 1585 - extern void raw_spin_rq_lock_nested(struct rq *rq, int subclass); 1586 - extern bool raw_spin_rq_trylock(struct rq *rq); 1587 - extern void raw_spin_rq_unlock(struct rq *rq); 1579 + extern void raw_spin_rq_lock_nested(struct rq *rq, int subclass) 1580 + __acquires(__rq_lockp(rq)); 1581 + 1582 + extern bool raw_spin_rq_trylock(struct rq *rq) 1583 + __cond_acquires(true, __rq_lockp(rq)); 1584 + 1585 + extern void raw_spin_rq_unlock(struct rq *rq) 1586 + __releases(__rq_lockp(rq)); 1588 1587 1589 1588 static inline void raw_spin_rq_lock(struct rq *rq) 1589 + __acquires(__rq_lockp(rq)) 1590 1590 { 1591 1591 raw_spin_rq_lock_nested(rq, 0); 1592 1592 } 1593 1593 1594 1594 static inline void raw_spin_rq_lock_irq(struct rq *rq) 1595 + __acquires(__rq_lockp(rq)) 1595 1596 { 1596 1597 local_irq_disable(); 1597 1598 raw_spin_rq_lock(rq); 1598 1599 } 1599 1600 1600 1601 static inline void raw_spin_rq_unlock_irq(struct rq *rq) 1602 + __releases(__rq_lockp(rq)) 1601 1603 { 1602 1604 raw_spin_rq_unlock(rq); 1603 1605 local_irq_enable(); 1604 1606 } 1605 1607 1606 1608 static inline unsigned long _raw_spin_rq_lock_irqsave(struct rq *rq) 1609 + __acquires(__rq_lockp(rq)) 1607 1610 { 1608 1611 unsigned long flags; 1609 1612 ··· 1624 1607 } 1625 1608 1626 1609 static inline void raw_spin_rq_unlock_irqrestore(struct rq *rq, unsigned long flags) 1610 + __releases(__rq_lockp(rq)) 1627 1611 { 1628 1612 raw_spin_rq_unlock(rq); 1629 1613 local_irq_restore(flags); ··· 1873 1855 rq->clock_update_flags |= rf->clock_update_flags; 1874 1856 } 1875 1857 1876 - extern 1877 - struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf) 1878 - __acquires(rq->lock); 1858 + #define __task_rq_lock(...) __acquire_ret(___task_rq_lock(__VA_ARGS__), __rq_lockp(__ret)) 1859 + extern struct rq *___task_rq_lock(struct task_struct *p, struct rq_flags *rf) __acquires_ret; 1879 1860 1880 - extern 1881 - struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf) 1882 - __acquires(p->pi_lock) 1883 - __acquires(rq->lock); 1861 + #define task_rq_lock(...) 
__acquire_ret(_task_rq_lock(__VA_ARGS__), __rq_lockp(__ret)) 1862 + extern struct rq *_task_rq_lock(struct task_struct *p, struct rq_flags *rf) 1863 + __acquires(&p->pi_lock) __acquires_ret; 1884 1864 1885 1865 static inline void 1886 1866 __task_rq_unlock(struct rq *rq, struct task_struct *p, struct rq_flags *rf) 1887 - __releases(rq->lock) 1867 + __releases(__rq_lockp(rq)) 1888 1868 { 1889 1869 rq_unpin_lock(rq, rf); 1890 1870 raw_spin_rq_unlock(rq); ··· 1890 1874 1891 1875 static inline void 1892 1876 task_rq_unlock(struct rq *rq, struct task_struct *p, struct rq_flags *rf) 1893 - __releases(rq->lock) 1894 - __releases(p->pi_lock) 1877 + __releases(__rq_lockp(rq), &p->pi_lock) 1895 1878 { 1896 1879 __task_rq_unlock(rq, p, rf); 1897 1880 raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags); ··· 1900 1885 _T->rq = task_rq_lock(_T->lock, &_T->rf), 1901 1886 task_rq_unlock(_T->rq, _T->lock, &_T->rf), 1902 1887 struct rq *rq; struct rq_flags rf) 1888 + DECLARE_LOCK_GUARD_1_ATTRS(task_rq_lock, __acquires(_T->pi_lock), __releases((*(struct task_struct **)_T)->pi_lock)) 1889 + #define class_task_rq_lock_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(task_rq_lock, _T) 1903 1890 1904 1891 DEFINE_LOCK_GUARD_1(__task_rq_lock, struct task_struct, 1905 1892 _T->rq = __task_rq_lock(_T->lock, &_T->rf), ··· 1909 1892 struct rq *rq; struct rq_flags rf) 1910 1893 1911 1894 static inline void rq_lock_irqsave(struct rq *rq, struct rq_flags *rf) 1912 - __acquires(rq->lock) 1895 + __acquires(__rq_lockp(rq)) 1913 1896 { 1914 1897 raw_spin_rq_lock_irqsave(rq, rf->flags); 1915 1898 rq_pin_lock(rq, rf); 1916 1899 } 1917 1900 1918 1901 static inline void rq_lock_irq(struct rq *rq, struct rq_flags *rf) 1919 - __acquires(rq->lock) 1902 + __acquires(__rq_lockp(rq)) 1920 1903 { 1921 1904 raw_spin_rq_lock_irq(rq); 1922 1905 rq_pin_lock(rq, rf); 1923 1906 } 1924 1907 1925 1908 static inline void rq_lock(struct rq *rq, struct rq_flags *rf) 1926 - __acquires(rq->lock) 1909 + __acquires(__rq_lockp(rq)) 1927 1910 { 1928 1911 raw_spin_rq_lock(rq); 1929 1912 rq_pin_lock(rq, rf); 1930 1913 } 1931 1914 1932 1915 static inline void rq_unlock_irqrestore(struct rq *rq, struct rq_flags *rf) 1933 - __releases(rq->lock) 1916 + __releases(__rq_lockp(rq)) 1934 1917 { 1935 1918 rq_unpin_lock(rq, rf); 1936 1919 raw_spin_rq_unlock_irqrestore(rq, rf->flags); 1937 1920 } 1938 1921 1939 1922 static inline void rq_unlock_irq(struct rq *rq, struct rq_flags *rf) 1940 - __releases(rq->lock) 1923 + __releases(__rq_lockp(rq)) 1941 1924 { 1942 1925 rq_unpin_lock(rq, rf); 1943 1926 raw_spin_rq_unlock_irq(rq); 1944 1927 } 1945 1928 1946 1929 static inline void rq_unlock(struct rq *rq, struct rq_flags *rf) 1947 - __releases(rq->lock) 1930 + __releases(__rq_lockp(rq)) 1948 1931 { 1949 1932 rq_unpin_lock(rq, rf); 1950 1933 raw_spin_rq_unlock(rq); ··· 1955 1938 rq_unlock(_T->lock, &_T->rf), 1956 1939 struct rq_flags rf) 1957 1940 1941 + DECLARE_LOCK_GUARD_1_ATTRS(rq_lock, __acquires(__rq_lockp(_T)), __releases(__rq_lockp(*(struct rq **)_T))); 1942 + #define class_rq_lock_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(rq_lock, _T) 1943 + 1958 1944 DEFINE_LOCK_GUARD_1(rq_lock_irq, struct rq, 1959 1945 rq_lock_irq(_T->lock, &_T->rf), 1960 1946 rq_unlock_irq(_T->lock, &_T->rf), 1961 1947 struct rq_flags rf) 1948 + 1949 + DECLARE_LOCK_GUARD_1_ATTRS(rq_lock_irq, __acquires(__rq_lockp(_T)), __releases(__rq_lockp(*(struct rq **)_T))); 1950 + #define class_rq_lock_irq_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(rq_lock_irq, _T) 1962 1951 1963 1952 
DEFINE_LOCK_GUARD_1(rq_lock_irqsave, struct rq, 1964 1953 rq_lock_irqsave(_T->lock, &_T->rf), 1965 1954 rq_unlock_irqrestore(_T->lock, &_T->rf), 1966 1955 struct rq_flags rf) 1967 1956 1968 - static inline struct rq *this_rq_lock_irq(struct rq_flags *rf) 1969 - __acquires(rq->lock) 1957 + DECLARE_LOCK_GUARD_1_ATTRS(rq_lock_irqsave, __acquires(__rq_lockp(_T)), __releases(__rq_lockp(*(struct rq **)_T))); 1958 + #define class_rq_lock_irqsave_constructor(_T) WITH_LOCK_GUARD_1_ATTRS(rq_lock_irqsave, _T) 1959 + 1960 + #define this_rq_lock_irq(...) __acquire_ret(_this_rq_lock_irq(__VA_ARGS__), __rq_lockp(__ret)) 1961 + static inline struct rq *_this_rq_lock_irq(struct rq_flags *rf) __acquires_ret 1970 1962 { 1971 1963 struct rq *rq; 1972 1964 ··· 3103 3077 #define DEFINE_LOCK_GUARD_2(name, type, _lock, _unlock, ...) \ 3104 3078 __DEFINE_UNLOCK_GUARD(name, type, _unlock, type *lock2; __VA_ARGS__) \ 3105 3079 static inline class_##name##_t class_##name##_constructor(type *lock, type *lock2) \ 3080 + __no_context_analysis \ 3106 3081 { class_##name##_t _t = { .lock = lock, .lock2 = lock2 }, *_T = &_t; \ 3107 3082 _lock; return _t; } 3083 + #define DECLARE_LOCK_GUARD_2_ATTRS(_name, _lock, _unlock1, _unlock2) \ 3084 + static inline class_##_name##_t class_##_name##_constructor(lock_##_name##_t *_T1, \ 3085 + lock_##_name##_t *_T2) _lock; \ 3086 + static __always_inline void __class_##_name##_cleanup_ctx1(class_##_name##_t **_T1) \ 3087 + __no_context_analysis _unlock1 { } \ 3088 + static __always_inline void __class_##_name##_cleanup_ctx2(class_##_name##_t **_T2) \ 3089 + __no_context_analysis _unlock2 { } 3090 + #define WITH_LOCK_GUARD_2_ATTRS(_name, _T1, _T2) \ 3091 + class_##_name##_constructor(_T1, _T2), \ 3092 + *__UNIQUE_ID(unlock1) __cleanup(__class_##_name##_cleanup_ctx1) = (void *)(_T1),\ 3093 + *__UNIQUE_ID(unlock2) __cleanup(__class_##_name##_cleanup_ctx2) = (void *)(_T2) 3108 3094 3109 3095 static inline bool rq_order_less(struct rq *rq1, struct rq *rq2) 3110 3096 { ··· 3144 3106 return rq1->cpu < rq2->cpu; 3145 3107 } 3146 3108 3147 - extern void double_rq_lock(struct rq *rq1, struct rq *rq2); 3109 + extern void double_rq_lock(struct rq *rq1, struct rq *rq2) 3110 + __acquires(__rq_lockp(rq1), __rq_lockp(rq2)); 3148 3111 3149 3112 #ifdef CONFIG_PREEMPTION 3150 3113 ··· 3158 3119 * also adds more overhead and therefore may reduce throughput. 3159 3120 */ 3160 3121 static inline int _double_lock_balance(struct rq *this_rq, struct rq *busiest) 3161 - __releases(this_rq->lock) 3162 - __acquires(busiest->lock) 3163 - __acquires(this_rq->lock) 3122 + __must_hold(__rq_lockp(this_rq)) 3123 + __acquires(__rq_lockp(busiest)) 3164 3124 { 3165 3125 raw_spin_rq_unlock(this_rq); 3166 3126 double_rq_lock(this_rq, busiest); ··· 3176 3138 * regardless of entry order into the function. 
3177 3139 */ 3178 3140 static inline int _double_lock_balance(struct rq *this_rq, struct rq *busiest) 3179 - __releases(this_rq->lock) 3180 - __acquires(busiest->lock) 3181 - __acquires(this_rq->lock) 3141 + __must_hold(__rq_lockp(this_rq)) 3142 + __acquires(__rq_lockp(busiest)) 3182 3143 { 3183 - if (__rq_lockp(this_rq) == __rq_lockp(busiest) || 3184 - likely(raw_spin_rq_trylock(busiest))) { 3144 + if (__rq_lockp(this_rq) == __rq_lockp(busiest)) { 3145 + __acquire(__rq_lockp(busiest)); /* already held */ 3146 + double_rq_clock_clear_update(this_rq, busiest); 3147 + return 0; 3148 + } 3149 + 3150 + if (likely(raw_spin_rq_trylock(busiest))) { 3185 3151 double_rq_clock_clear_update(this_rq, busiest); 3186 3152 return 0; 3187 3153 } ··· 3208 3166 * double_lock_balance - lock the busiest runqueue, this_rq is locked already. 3209 3167 */ 3210 3168 static inline int double_lock_balance(struct rq *this_rq, struct rq *busiest) 3169 + __must_hold(__rq_lockp(this_rq)) 3170 + __acquires(__rq_lockp(busiest)) 3211 3171 { 3212 3172 lockdep_assert_irqs_disabled(); 3213 3173 ··· 3217 3173 } 3218 3174 3219 3175 static inline void double_unlock_balance(struct rq *this_rq, struct rq *busiest) 3220 - __releases(busiest->lock) 3176 + __releases(__rq_lockp(busiest)) 3221 3177 { 3222 3178 if (__rq_lockp(this_rq) != __rq_lockp(busiest)) 3223 3179 raw_spin_rq_unlock(busiest); 3180 + else 3181 + __release(__rq_lockp(busiest)); /* fake release */ 3224 3182 lock_set_subclass(&__rq_lockp(this_rq)->dep_map, 0, _RET_IP_); 3225 3183 } 3226 3184 3227 3185 static inline void double_lock(spinlock_t *l1, spinlock_t *l2) 3186 + __acquires(l1, l2) 3228 3187 { 3229 3188 if (l1 > l2) 3230 3189 swap(l1, l2); ··· 3237 3190 } 3238 3191 3239 3192 static inline void double_lock_irq(spinlock_t *l1, spinlock_t *l2) 3193 + __acquires(l1, l2) 3240 3194 { 3241 3195 if (l1 > l2) 3242 3196 swap(l1, l2); ··· 3247 3199 } 3248 3200 3249 3201 static inline void double_raw_lock(raw_spinlock_t *l1, raw_spinlock_t *l2) 3202 + __acquires(l1, l2) 3250 3203 { 3251 3204 if (l1 > l2) 3252 3205 swap(l1, l2); ··· 3257 3208 } 3258 3209 3259 3210 static inline void double_raw_unlock(raw_spinlock_t *l1, raw_spinlock_t *l2) 3211 + __releases(l1, l2) 3260 3212 { 3261 3213 raw_spin_unlock(l1); 3262 3214 raw_spin_unlock(l2); ··· 3267 3217 double_raw_lock(_T->lock, _T->lock2), 3268 3218 double_raw_unlock(_T->lock, _T->lock2)) 3269 3219 3220 + DECLARE_LOCK_GUARD_2_ATTRS(double_raw_spinlock, 3221 + __acquires(_T1, _T2), 3222 + __releases(*(raw_spinlock_t **)_T1), 3223 + __releases(*(raw_spinlock_t **)_T2)); 3224 + #define class_double_raw_spinlock_constructor(_T1, _T2) \ 3225 + WITH_LOCK_GUARD_2_ATTRS(double_raw_spinlock, _T1, _T2) 3226 + 3270 3227 /* 3271 3228 * double_rq_unlock - safely unlock two runqueues 3272 3229 * ··· 3281 3224 * you need to do so manually after calling. 3282 3225 */ 3283 3226 static inline void double_rq_unlock(struct rq *rq1, struct rq *rq2) 3284 - __releases(rq1->lock) 3285 - __releases(rq2->lock) 3227 + __releases(__rq_lockp(rq1), __rq_lockp(rq2)) 3286 3228 { 3287 3229 if (__rq_lockp(rq1) != __rq_lockp(rq2)) 3288 3230 raw_spin_rq_unlock(rq2); 3289 3231 else 3290 - __release(rq2->lock); 3232 + __release(__rq_lockp(rq2)); /* fake release */ 3291 3233 raw_spin_rq_unlock(rq1); 3292 3234 } 3293 3235
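Because the scheduler's lock identity hangs off the return value (__rq_lockp(rq) may be the core-wide lock), the plain __acquires() form does not fit; the task_rq_lock() wrappers above instead bind the acquired lock to the returned pointer. Assuming __acquire_ret() exposes that pointer as __ret, as the definitions above suggest, the pattern generalizes roughly like this (obj_lock() and struct obj are hypothetical):

/* The out-of-line function promises "acquires a lock reachable from
 * the object it returns"; the macro names that lock for the analysis. */
extern struct obj *__obj_lock(int id) __acquires_ret;
#define obj_lock(...) \
	__acquire_ret(__obj_lock(__VA_ARGS__), &__ret->lock)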
+2 -2
kernel/signal.c
··· 1355 1355 return count; 1356 1356 } 1357 1357 1358 - struct sighand_struct *__lock_task_sighand(struct task_struct *tsk, 1359 - unsigned long *flags) 1358 + struct sighand_struct *lock_task_sighand(struct task_struct *tsk, 1359 + unsigned long *flags) 1360 1360 { 1361 1361 struct sighand_struct *sighand; 1362 1362
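With Sparse support removed, the __cond_lock() wrapper macro around the leading-underscore __lock_task_sighand() is no longer needed, so the implementation takes the lock_task_sighand name directly. The calling convention is unchanged, sketched here for context:

static void inspect_sighand(struct task_struct *tsk)
{
	unsigned long flags;

	if (lock_task_sighand(tsk, &flags)) {
		/* tsk->sighand is pinned and its siglock is held here */
		unlock_task_sighand(tsk, &flags);
	}
}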
+3 -10
kernel/time/posix-timers.c
··· 66 66 #error "SIGEV_THREAD_ID must not share bit with other SIGEV values!" 67 67 #endif 68 68 69 - static struct k_itimer *__lock_timer(timer_t timer_id); 70 - 71 - #define lock_timer(tid) \ 72 - ({ struct k_itimer *__timr; \ 73 - __cond_lock(&__timr->it_lock, __timr = __lock_timer(tid)); \ 74 - __timr; \ 75 - }) 76 - 69 + static struct k_itimer *lock_timer(timer_t timer_id); 77 70 static inline void unlock_timer(struct k_itimer *timr) 78 71 { 79 72 if (likely((timr))) ··· 78 85 79 86 #define scoped_timer (scope) 80 87 81 - DEFINE_CLASS(lock_timer, struct k_itimer *, unlock_timer(_T), __lock_timer(id), timer_t id); 88 + DEFINE_CLASS(lock_timer, struct k_itimer *, unlock_timer(_T), lock_timer(id), timer_t id); 82 89 DEFINE_CLASS_IS_COND_GUARD(lock_timer); 83 90 84 91 static struct timer_hash_bucket *hash_bucket(struct signal_struct *sig, unsigned int nr) ··· 593 600 } 594 601 #endif 595 602 596 - static struct k_itimer *__lock_timer(timer_t timer_id) 603 + static struct k_itimer *lock_timer(timer_t timer_id) 597 604 { 598 605 struct k_itimer *timr; 599 606
+44
lib/Kconfig.debug
··· 616 616 To ensure that generic code follows the above rules, this 617 617 option forces all percpu variables to be defined as weak. 618 618 619 + config WARN_CONTEXT_ANALYSIS 620 + bool "Compiler context-analysis warnings" 621 + depends on CC_IS_CLANG && CLANG_VERSION >= 220000 622 + # Branch profiling re-defines "if", which messes with the compiler's 623 + # ability to analyze __cond_acquires(..), resulting in false positives. 624 + depends on !TRACE_BRANCH_PROFILING 625 + default y 626 + help 627 + Context Analysis is a language extension, which enables statically 628 + checking that required contexts are active (or inactive) by acquiring 629 + and releasing user-definable "context locks". 630 + 631 + Clang's name of the feature is "Thread Safety Analysis". Requires 632 + Clang 22 or later. 633 + 634 + Produces warnings by default. Select CONFIG_WERROR if you wish to 635 + turn these warnings into errors. 636 + 637 + For more details, see Documentation/dev-tools/context-analysis.rst. 638 + 639 + config WARN_CONTEXT_ANALYSIS_ALL 640 + bool "Enable context analysis for all source files" 641 + depends on WARN_CONTEXT_ANALYSIS 642 + depends on EXPERT && !COMPILE_TEST 643 + help 644 + Enable tree-wide context analysis. This is likely to produce a 645 + large number of false positives - enable at your own risk. 646 + 647 + If unsure, say N. 648 + 619 649 endmenu # "Compiler options" 620 650 621 651 menu "Generic Kernel Debugging Instruments" ··· 2840 2810 Tests the linear_ranges logic correctness. 2841 2811 For more information on KUnit and unit tests in general please refer 2842 2812 to the KUnit documentation in Documentation/dev-tools/kunit/. 2813 + 2814 + If unsure, say N. 2815 + 2816 + config CONTEXT_ANALYSIS_TEST 2817 + bool "Compiler context-analysis warnings test" 2818 + depends on EXPERT 2819 + help 2820 + This builds the test for compiler-based context analysis. The test 2821 + does not add executable code to the kernel, but is meant to test that 2822 + common patterns supported by the analysis do not result in false 2823 + positive warnings. 2824 + 2825 + When adding support for new context locks, it is strongly recommended 2826 + to add supported patterns to this test. 2843 2827 2844 2828 If unsure, say N. 2845 2829
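For a concrete sense of what WARN_CONTEXT_ANALYSIS buys, this is the class of report produced at build time; the snippet is a sketch and the exact diagnostic wording is Clang's:

struct counter {
	spinlock_t lock;
	int val __guarded_by(&lock);
};

static void broken(struct counter *c)
{
	c->val++;	/* warning: writing 'val' requires holding
			 * 'lock' exclusively */
}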
+6
lib/Makefile
··· 50 50 lib-y += kobject.o klist.o 51 51 obj-y += lockref.o 52 52 53 + CONTEXT_ANALYSIS_rhashtable.o := y 54 + 53 55 obj-y += bcd.o sort.o parser.o debug_locks.o random32.o \ 54 56 bust_spinlocks.o kasprintf.o bitmap.o scatterlist.o \ 55 57 list_sort.o uuid.o iov_iter.o clz_ctz.o \ ··· 252 250 # Prevent the compiler from calling builtins like memcmp() or bcmp() from this 253 251 # file. 254 252 CFLAGS_stackdepot.o += -fno-builtin 253 + CONTEXT_ANALYSIS_stackdepot.o := y 255 254 obj-$(CONFIG_STACKDEPOT) += stackdepot.o 256 255 KASAN_SANITIZE_stackdepot.o := n 257 256 # In particular, instrumenting stackdepot.c with KMSAN will result in infinite ··· 333 330 obj-$(CONFIG_GENERIC_LIB_DEVMEM_IS_ALLOWED) += devmem_is_allowed.o 334 331 335 332 obj-$(CONFIG_FIRMWARE_TABLE) += fw_table.o 333 + 334 + CONTEXT_ANALYSIS_test_context-analysis.o := y 335 + obj-$(CONFIG_CONTEXT_ANALYSIS_TEST) += test_context-analysis.o 336 336 337 337 subdir-$(CONFIG_FORTIFY_SOURCE) += test_fortify
+4 -4
lib/dec_and_lock.c
··· 18 18 * because the spin-lock and the decrement must be 19 19 * "atomic". 20 20 */ 21 - int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock) 21 + int atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock) 22 22 { 23 23 /* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */ 24 24 if (atomic_add_unless(atomic, -1, 1)) ··· 32 32 return 0; 33 33 } 34 34 35 - EXPORT_SYMBOL(_atomic_dec_and_lock); 35 + EXPORT_SYMBOL(atomic_dec_and_lock); 36 36 37 37 int _atomic_dec_and_lock_irqsave(atomic_t *atomic, spinlock_t *lock, 38 38 unsigned long *flags) ··· 50 50 } 51 51 EXPORT_SYMBOL(_atomic_dec_and_lock_irqsave); 52 52 53 - int _atomic_dec_and_raw_lock(atomic_t *atomic, raw_spinlock_t *lock) 53 + int atomic_dec_and_raw_lock(atomic_t *atomic, raw_spinlock_t *lock) 54 54 { 55 55 /* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */ 56 56 if (atomic_add_unless(atomic, -1, 1)) ··· 63 63 raw_spin_unlock(lock); 64 64 return 0; 65 65 } 66 - EXPORT_SYMBOL(_atomic_dec_and_raw_lock); 66 + EXPORT_SYMBOL(atomic_dec_and_raw_lock); 67 67 68 68 int _atomic_dec_and_raw_lock_irqsave(atomic_t *atomic, raw_spinlock_t *lock, 69 69 unsigned long *flags)
-1
lib/lockref.c
··· 105 105 * @lockref: pointer to lockref structure 106 106 * Return: 1 if count updated successfully or 0 if count <= 1 and lock taken 107 107 */ 108 - #undef lockref_put_or_lock 109 108 bool lockref_put_or_lock(struct lockref *lockref) 110 109 { 111 110 CMPXCHG_LOOP(
+3 -2
lib/rhashtable.c
··· 358 358 static int rhashtable_rehash_alloc(struct rhashtable *ht, 359 359 struct bucket_table *old_tbl, 360 360 unsigned int size) 361 + __must_hold(&ht->mutex) 361 362 { 362 363 struct bucket_table *new_tbl; 363 364 int err; ··· 393 392 * bucket locks or concurrent RCU protected lookups and traversals. 394 393 */ 395 394 static int rhashtable_shrink(struct rhashtable *ht) 395 + __must_hold(&ht->mutex) 396 396 { 397 397 struct bucket_table *old_tbl = rht_dereference(ht->tbl, ht); 398 398 unsigned int nelems = atomic_read(&ht->nelems); ··· 726 724 * resize events and always continue. 727 725 */ 728 726 int rhashtable_walk_start_check(struct rhashtable_iter *iter) 729 - __acquires(RCU) 727 + __acquires_shared(RCU) 730 728 { 731 729 struct rhashtable *ht = iter->ht; 732 730 bool rhlist = ht->rhlist; ··· 942 940 * hash table. 943 941 */ 944 942 void rhashtable_walk_stop(struct rhashtable_iter *iter) 945 - __releases(RCU) 946 943 { 947 944 struct rhashtable *ht; 948 945 struct bucket_table *tbl = iter->walker.tbl;
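The rhashtable hunk shows the two annotation kinds used throughout this series: __must_hold(&ht->mutex) names an exclusive capability the caller must already own, while __acquires_shared(RCU) marks a read-side (shared) capability that many holders may take concurrently. A minimal userspace sketch of the shared variants, again using raw Clang spellings and illustrative pthread-based names:

#include <pthread.h>

#define CAPABILITY(x)		__attribute__((capability(x)))
#define ACQUIRE_SHARED(x)	__attribute__((acquire_shared_capability(x)))
#define RELEASE_SHARED(x)	__attribute__((release_shared_capability(x)))
#define REQUIRES_SHARED(x)	__attribute__((requires_shared_capability(x)))

struct CAPABILITY("rwlock") rw {
	pthread_rwlock_t l;
};

static void reader_lock(struct rw *r) ACQUIRE_SHARED(r)
{
	pthread_rwlock_rdlock(&r->l);
}

static void reader_unlock(struct rw *r) RELEASE_SHARED(r)
{
	pthread_rwlock_unlock(&r->l);
}

/* Analogue of a __must_hold_shared() function: callers only have to
 * prove shared (read-side) access, not exclusive ownership. */
static int peek(struct rw *r, const int *value) REQUIRES_SHARED(r)
{
	return *value;
}

static int read_value(struct rw *r, const int *value)
{
	reader_lock(r);
	int v = peek(r, value);
	reader_unlock(r);
	return v;
}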
+14 -6
lib/stackdepot.c
··· 61 61 /* Hash mask for indexing the table. */ 62 62 static unsigned int stack_hash_mask; 63 63 64 + /* The lock must be held when performing pool or freelist modifications. */ 65 + static DEFINE_RAW_SPINLOCK(pool_lock); 64 66 /* Array of memory regions that store stack records. */ 65 - static void **stack_pools; 67 + static void **stack_pools __pt_guarded_by(&pool_lock); 66 68 /* Newly allocated pool that is not yet added to stack_pools. */ 67 69 static void *new_pool; 68 70 /* Number of pools in stack_pools. */ 69 71 static int pools_num; 70 72 /* Offset to the unused space in the currently used pool. */ 71 - static size_t pool_offset = DEPOT_POOL_SIZE; 73 + static size_t pool_offset __guarded_by(&pool_lock) = DEPOT_POOL_SIZE; 72 74 /* Freelist of stack records within stack_pools. */ 73 - static LIST_HEAD(free_stacks); 74 - /* The lock must be held when performing pool or freelist modifications. */ 75 - static DEFINE_RAW_SPINLOCK(pool_lock); 75 + static __guarded_by(&pool_lock) LIST_HEAD(free_stacks); 76 76 77 77 /* Statistics counters for debugfs. */ 78 78 enum depot_counter_id { ··· 291 291 * Initializes new stack pool, and updates the list of pools. 292 292 */ 293 293 static bool depot_init_pool(void **prealloc) 294 + __must_hold(&pool_lock) 294 295 { 295 296 lockdep_assert_held(&pool_lock); 296 297 ··· 339 338 340 339 /* Keeps the preallocated memory to be used for a new stack depot pool. */ 341 340 static void depot_keep_new_pool(void **prealloc) 341 + __must_hold(&pool_lock) 342 342 { 343 343 lockdep_assert_held(&pool_lock); 344 344 ··· 359 357 * the current pre-allocation. 360 358 */ 361 359 static struct stack_record *depot_pop_free_pool(void **prealloc, size_t size) 360 + __must_hold(&pool_lock) 362 361 { 363 362 struct stack_record *stack; 364 363 void *current_pool; ··· 394 391 395 392 /* Try to find next free usable entry from the freelist. */ 396 393 static struct stack_record *depot_pop_free(void) 394 + __must_hold(&pool_lock) 397 395 { 398 396 struct stack_record *stack; 399 397 ··· 432 428 /* Allocates a new stack in a stack depot pool. */ 433 429 static struct stack_record * 434 430 depot_alloc_stack(unsigned long *entries, unsigned int nr_entries, u32 hash, depot_flags_t flags, void **prealloc) 431 + __must_hold(&pool_lock) 435 432 { 436 433 struct stack_record *stack = NULL; 437 434 size_t record_size; ··· 491 486 } 492 487 493 488 static struct stack_record *depot_fetch_stack(depot_stack_handle_t handle) 489 + __must_not_hold(&pool_lock) 494 490 { 495 491 const int pools_num_cached = READ_ONCE(pools_num); 496 492 union handle_parts parts = { .handle = handle }; ··· 508 502 return NULL; 509 503 } 510 504 511 - pool = stack_pools[pool_index]; 505 + /* @pool_index either valid, or user passed in corrupted value. */ 506 + pool = context_unsafe(stack_pools[pool_index]); 512 507 if (WARN_ON(!pool)) 513 508 return NULL; 514 509 ··· 522 515 523 516 /* Links stack into the freelist. */ 524 517 static void depot_free_stack(struct stack_record *stack) 518 + __must_not_hold(&pool_lock) 525 519 { 526 520 unsigned long flags; 527 521
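Two details of the stackdepot conversion are worth spelling out. First, pool_lock is moved above the variables it protects so that the annotations can reference it. Second, __guarded_by() and __pt_guarded_by() protect different things, and context_unsafe() marks a single expression the analysis cannot see through (the bounds-checked stack_pools[pool_index] read) rather than disabling checking for a whole function. The guard-attribute distinction, in a short sketch using raw Clang spellings and made-up names:

#include <pthread.h>
#include <stddef.h>

#define CAPABILITY(x)		__attribute__((capability(x)))
#define GUARDED_BY(x)		__attribute__((guarded_by(x)))
#define PT_GUARDED_BY(x)	__attribute__((pt_guarded_by(x)))

struct CAPABILITY("mutex") lock {
	pthread_mutex_t mtx;
};

struct pool_state {
	struct lock lock;
	/* Any read or write of offset requires holding lock... */
	size_t offset GUARDED_BY(lock);
	/* ...whereas for pools only the pointee is protected: the pointer
	 * value itself may be read freely, but *pools requires the lock. */
	void **pools PT_GUARDED_BY(lock);
};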
+598
lib/test_context-analysis.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * Compile-only tests for common patterns that should not generate false 4 + * positive errors when compiled with Clang's context analysis. 5 + */ 6 + 7 + #include <linux/bit_spinlock.h> 8 + #include <linux/build_bug.h> 9 + #include <linux/local_lock.h> 10 + #include <linux/mutex.h> 11 + #include <linux/percpu.h> 12 + #include <linux/rcupdate.h> 13 + #include <linux/rwsem.h> 14 + #include <linux/seqlock.h> 15 + #include <linux/spinlock.h> 16 + #include <linux/srcu.h> 17 + #include <linux/ww_mutex.h> 18 + 19 + /* 20 + * Test that helper macros work as expected. 21 + */ 22 + static void __used test_common_helpers(void) 23 + { 24 + BUILD_BUG_ON(context_unsafe(3) != 3); /* plain expression */ 25 + BUILD_BUG_ON(context_unsafe((void)2; 3) != 3); /* does not swallow semi-colon */ 26 + BUILD_BUG_ON(context_unsafe((void)2, 3) != 3); /* does not swallow commas */ 27 + context_unsafe(do { } while (0)); /* works with void statements */ 28 + } 29 + 30 + #define TEST_SPINLOCK_COMMON(class, type, type_init, type_lock, type_unlock, type_trylock, op) \ 31 + struct test_##class##_data { \ 32 + type lock; \ 33 + int counter __guarded_by(&lock); \ 34 + int *pointer __pt_guarded_by(&lock); \ 35 + }; \ 36 + static void __used test_##class##_init(struct test_##class##_data *d) \ 37 + { \ 38 + guard(type_init)(&d->lock); \ 39 + d->counter = 0; \ 40 + } \ 41 + static void __used test_##class(struct test_##class##_data *d) \ 42 + { \ 43 + unsigned long flags; \ 44 + d->pointer++; \ 45 + type_lock(&d->lock); \ 46 + op(d->counter); \ 47 + op(*d->pointer); \ 48 + type_unlock(&d->lock); \ 49 + type_lock##_irq(&d->lock); \ 50 + op(d->counter); \ 51 + op(*d->pointer); \ 52 + type_unlock##_irq(&d->lock); \ 53 + type_lock##_bh(&d->lock); \ 54 + op(d->counter); \ 55 + op(*d->pointer); \ 56 + type_unlock##_bh(&d->lock); \ 57 + type_lock##_irqsave(&d->lock, flags); \ 58 + op(d->counter); \ 59 + op(*d->pointer); \ 60 + type_unlock##_irqrestore(&d->lock, flags); \ 61 + } \ 62 + static void __used test_##class##_trylock(struct test_##class##_data *d) \ 63 + { \ 64 + if (type_trylock(&d->lock)) { \ 65 + op(d->counter); \ 66 + type_unlock(&d->lock); \ 67 + } \ 68 + } \ 69 + static void __used test_##class##_assert(struct test_##class##_data *d) \ 70 + { \ 71 + lockdep_assert_held(&d->lock); \ 72 + op(d->counter); \ 73 + } \ 74 + static void __used test_##class##_guard(struct test_##class##_data *d) \ 75 + { \ 76 + { guard(class)(&d->lock); op(d->counter); } \ 77 + { guard(class##_irq)(&d->lock); op(d->counter); } \ 78 + { guard(class##_irqsave)(&d->lock); op(d->counter); } \ 79 + } 80 + 81 + #define TEST_OP_RW(x) (x)++ 82 + #define TEST_OP_RO(x) ((void)(x)) 83 + 84 + TEST_SPINLOCK_COMMON(raw_spinlock, 85 + raw_spinlock_t, 86 + raw_spinlock_init, 87 + raw_spin_lock, 88 + raw_spin_unlock, 89 + raw_spin_trylock, 90 + TEST_OP_RW); 91 + static void __used test_raw_spinlock_trylock_extra(struct test_raw_spinlock_data *d) 92 + { 93 + unsigned long flags; 94 + 95 + data_race(d->counter++); /* no warning */ 96 + 97 + if (raw_spin_trylock_irq(&d->lock)) { 98 + d->counter++; 99 + raw_spin_unlock_irq(&d->lock); 100 + } 101 + if (raw_spin_trylock_irqsave(&d->lock, flags)) { 102 + d->counter++; 103 + raw_spin_unlock_irqrestore(&d->lock, flags); 104 + } 105 + scoped_cond_guard(raw_spinlock_try, return, &d->lock) { 106 + d->counter++; 107 + } 108 + } 109 + 110 + TEST_SPINLOCK_COMMON(spinlock, 111 + spinlock_t, 112 + spinlock_init, 113 + spin_lock, 114 + spin_unlock, 115 + spin_trylock, 
116 + TEST_OP_RW); 117 + static void __used test_spinlock_trylock_extra(struct test_spinlock_data *d) 118 + { 119 + unsigned long flags; 120 + 121 + if (spin_trylock_irq(&d->lock)) { 122 + d->counter++; 123 + spin_unlock_irq(&d->lock); 124 + } 125 + if (spin_trylock_irqsave(&d->lock, flags)) { 126 + d->counter++; 127 + spin_unlock_irqrestore(&d->lock, flags); 128 + } 129 + scoped_cond_guard(spinlock_try, return, &d->lock) { 130 + d->counter++; 131 + } 132 + } 133 + 134 + TEST_SPINLOCK_COMMON(write_lock, 135 + rwlock_t, 136 + rwlock_init, 137 + write_lock, 138 + write_unlock, 139 + write_trylock, 140 + TEST_OP_RW); 141 + static void __used test_write_trylock_extra(struct test_write_lock_data *d) 142 + { 143 + unsigned long flags; 144 + 145 + if (write_trylock_irqsave(&d->lock, flags)) { 146 + d->counter++; 147 + write_unlock_irqrestore(&d->lock, flags); 148 + } 149 + } 150 + 151 + TEST_SPINLOCK_COMMON(read_lock, 152 + rwlock_t, 153 + rwlock_init, 154 + read_lock, 155 + read_unlock, 156 + read_trylock, 157 + TEST_OP_RO); 158 + 159 + struct test_mutex_data { 160 + struct mutex mtx; 161 + int counter __guarded_by(&mtx); 162 + }; 163 + 164 + static void __used test_mutex_init(struct test_mutex_data *d) 165 + { 166 + guard(mutex_init)(&d->mtx); 167 + d->counter = 0; 168 + } 169 + 170 + static void __used test_mutex_lock(struct test_mutex_data *d) 171 + { 172 + mutex_lock(&d->mtx); 173 + d->counter++; 174 + mutex_unlock(&d->mtx); 175 + mutex_lock_io(&d->mtx); 176 + d->counter++; 177 + mutex_unlock(&d->mtx); 178 + } 179 + 180 + static void __used test_mutex_trylock(struct test_mutex_data *d, atomic_t *a) 181 + { 182 + if (!mutex_lock_interruptible(&d->mtx)) { 183 + d->counter++; 184 + mutex_unlock(&d->mtx); 185 + } 186 + if (!mutex_lock_killable(&d->mtx)) { 187 + d->counter++; 188 + mutex_unlock(&d->mtx); 189 + } 190 + if (mutex_trylock(&d->mtx)) { 191 + d->counter++; 192 + mutex_unlock(&d->mtx); 193 + } 194 + if (atomic_dec_and_mutex_lock(a, &d->mtx)) { 195 + d->counter++; 196 + mutex_unlock(&d->mtx); 197 + } 198 + } 199 + 200 + static void __used test_mutex_assert(struct test_mutex_data *d) 201 + { 202 + lockdep_assert_held(&d->mtx); 203 + d->counter++; 204 + } 205 + 206 + static void __used test_mutex_guard(struct test_mutex_data *d) 207 + { 208 + guard(mutex)(&d->mtx); 209 + d->counter++; 210 + } 211 + 212 + static void __used test_mutex_cond_guard(struct test_mutex_data *d) 213 + { 214 + scoped_cond_guard(mutex_try, return, &d->mtx) { 215 + d->counter++; 216 + } 217 + scoped_cond_guard(mutex_intr, return, &d->mtx) { 218 + d->counter++; 219 + } 220 + } 221 + 222 + struct test_seqlock_data { 223 + seqlock_t sl; 224 + int counter __guarded_by(&sl); 225 + }; 226 + 227 + static void __used test_seqlock_init(struct test_seqlock_data *d) 228 + { 229 + guard(seqlock_init)(&d->sl); 230 + d->counter = 0; 231 + } 232 + 233 + static void __used test_seqlock_reader(struct test_seqlock_data *d) 234 + { 235 + unsigned int seq; 236 + 237 + do { 238 + seq = read_seqbegin(&d->sl); 239 + (void)d->counter; 240 + } while (read_seqretry(&d->sl, seq)); 241 + } 242 + 243 + static void __used test_seqlock_writer(struct test_seqlock_data *d) 244 + { 245 + unsigned long flags; 246 + 247 + write_seqlock(&d->sl); 248 + d->counter++; 249 + write_sequnlock(&d->sl); 250 + 251 + write_seqlock_irq(&d->sl); 252 + d->counter++; 253 + write_sequnlock_irq(&d->sl); 254 + 255 + write_seqlock_bh(&d->sl); 256 + d->counter++; 257 + write_sequnlock_bh(&d->sl); 258 + 259 + write_seqlock_irqsave(&d->sl, flags); 260 + d->counter++; 261 + 
write_sequnlock_irqrestore(&d->sl, flags); 262 + } 263 + 264 + static void __used test_seqlock_scoped(struct test_seqlock_data *d) 265 + { 266 + scoped_seqlock_read (&d->sl, ss_lockless) { 267 + (void)d->counter; 268 + } 269 + } 270 + 271 + struct test_rwsem_data { 272 + struct rw_semaphore sem; 273 + int counter __guarded_by(&sem); 274 + }; 275 + 276 + static void __used test_rwsem_init(struct test_rwsem_data *d) 277 + { 278 + guard(rwsem_init)(&d->sem); 279 + d->counter = 0; 280 + } 281 + 282 + static void __used test_rwsem_reader(struct test_rwsem_data *d) 283 + { 284 + down_read(&d->sem); 285 + (void)d->counter; 286 + up_read(&d->sem); 287 + 288 + if (down_read_trylock(&d->sem)) { 289 + (void)d->counter; 290 + up_read(&d->sem); 291 + } 292 + } 293 + 294 + static void __used test_rwsem_writer(struct test_rwsem_data *d) 295 + { 296 + down_write(&d->sem); 297 + d->counter++; 298 + up_write(&d->sem); 299 + 300 + down_write(&d->sem); 301 + d->counter++; 302 + downgrade_write(&d->sem); 303 + (void)d->counter; 304 + up_read(&d->sem); 305 + 306 + if (down_write_trylock(&d->sem)) { 307 + d->counter++; 308 + up_write(&d->sem); 309 + } 310 + } 311 + 312 + static void __used test_rwsem_assert(struct test_rwsem_data *d) 313 + { 314 + rwsem_assert_held_nolockdep(&d->sem); 315 + d->counter++; 316 + } 317 + 318 + static void __used test_rwsem_guard(struct test_rwsem_data *d) 319 + { 320 + { guard(rwsem_read)(&d->sem); (void)d->counter; } 321 + { guard(rwsem_write)(&d->sem); d->counter++; } 322 + } 323 + 324 + static void __used test_rwsem_cond_guard(struct test_rwsem_data *d) 325 + { 326 + scoped_cond_guard(rwsem_read_try, return, &d->sem) { 327 + (void)d->counter; 328 + } 329 + scoped_cond_guard(rwsem_write_try, return, &d->sem) { 330 + d->counter++; 331 + } 332 + } 333 + 334 + struct test_bit_spinlock_data { 335 + unsigned long bits; 336 + int counter __guarded_by(__bitlock(3, &bits)); 337 + }; 338 + 339 + static void __used test_bit_spin_lock(struct test_bit_spinlock_data *d) 340 + { 341 + /* 342 + * Note, the analysis seems to have false negatives, because it won't 343 + * precisely recognize the bit of the fake __bitlock() token. 344 + */ 345 + bit_spin_lock(3, &d->bits); 346 + d->counter++; 347 + bit_spin_unlock(3, &d->bits); 348 + 349 + bit_spin_lock(3, &d->bits); 350 + d->counter++; 351 + __bit_spin_unlock(3, &d->bits); 352 + 353 + if (bit_spin_trylock(3, &d->bits)) { 354 + d->counter++; 355 + bit_spin_unlock(3, &d->bits); 356 + } 357 + } 358 + 359 + /* 360 + * Test that we can mark a variable guarded by RCU, and we can dereference and 361 + * write to the pointer with RCU's primitives. 
362 + */ 363 + struct test_rcu_data { 364 + long __rcu_guarded *data; 365 + }; 366 + 367 + static void __used test_rcu_guarded_reader(struct test_rcu_data *d) 368 + { 369 + rcu_read_lock(); 370 + (void)rcu_dereference(d->data); 371 + rcu_read_unlock(); 372 + 373 + rcu_read_lock_bh(); 374 + (void)rcu_dereference(d->data); 375 + rcu_read_unlock_bh(); 376 + 377 + rcu_read_lock_sched(); 378 + (void)rcu_dereference(d->data); 379 + rcu_read_unlock_sched(); 380 + } 381 + 382 + static void __used test_rcu_guard(struct test_rcu_data *d) 383 + { 384 + guard(rcu)(); 385 + (void)rcu_dereference(d->data); 386 + } 387 + 388 + static void __used test_rcu_guarded_updater(struct test_rcu_data *d) 389 + { 390 + rcu_assign_pointer(d->data, NULL); 391 + RCU_INIT_POINTER(d->data, NULL); 392 + (void)unrcu_pointer(d->data); 393 + } 394 + 395 + static void wants_rcu_held(void) __must_hold_shared(RCU) { } 396 + static void wants_rcu_held_bh(void) __must_hold_shared(RCU_BH) { } 397 + static void wants_rcu_held_sched(void) __must_hold_shared(RCU_SCHED) { } 398 + 399 + static void __used test_rcu_lock_variants(void) 400 + { 401 + rcu_read_lock(); 402 + wants_rcu_held(); 403 + rcu_read_unlock(); 404 + 405 + rcu_read_lock_bh(); 406 + wants_rcu_held_bh(); 407 + rcu_read_unlock_bh(); 408 + 409 + rcu_read_lock_sched(); 410 + wants_rcu_held_sched(); 411 + rcu_read_unlock_sched(); 412 + } 413 + 414 + static void __used test_rcu_lock_reentrant(void) 415 + { 416 + rcu_read_lock(); 417 + rcu_read_lock(); 418 + rcu_read_lock_bh(); 419 + rcu_read_lock_bh(); 420 + rcu_read_lock_sched(); 421 + rcu_read_lock_sched(); 422 + 423 + rcu_read_unlock_sched(); 424 + rcu_read_unlock_sched(); 425 + rcu_read_unlock_bh(); 426 + rcu_read_unlock_bh(); 427 + rcu_read_unlock(); 428 + rcu_read_unlock(); 429 + } 430 + 431 + static void __used test_rcu_assert_variants(void) 432 + { 433 + lockdep_assert_in_rcu_read_lock(); 434 + wants_rcu_held(); 435 + 436 + lockdep_assert_in_rcu_read_lock_bh(); 437 + wants_rcu_held_bh(); 438 + 439 + lockdep_assert_in_rcu_read_lock_sched(); 440 + wants_rcu_held_sched(); 441 + } 442 + 443 + struct test_srcu_data { 444 + struct srcu_struct srcu; 445 + long __rcu_guarded *data; 446 + }; 447 + 448 + static void __used test_srcu(struct test_srcu_data *d) 449 + { 450 + init_srcu_struct(&d->srcu); 451 + 452 + int idx = srcu_read_lock(&d->srcu); 453 + long *data = srcu_dereference(d->data, &d->srcu); 454 + (void)data; 455 + srcu_read_unlock(&d->srcu, idx); 456 + 457 + rcu_assign_pointer(d->data, NULL); 458 + } 459 + 460 + static void __used test_srcu_guard(struct test_srcu_data *d) 461 + { 462 + { guard(srcu)(&d->srcu); (void)srcu_dereference(d->data, &d->srcu); } 463 + { guard(srcu_fast)(&d->srcu); (void)srcu_dereference(d->data, &d->srcu); } 464 + { guard(srcu_fast_notrace)(&d->srcu); (void)srcu_dereference(d->data, &d->srcu); } 465 + } 466 + 467 + struct test_local_lock_data { 468 + local_lock_t lock; 469 + int counter __guarded_by(&lock); 470 + }; 471 + 472 + static DEFINE_PER_CPU(struct test_local_lock_data, test_local_lock_data) = { 473 + .lock = INIT_LOCAL_LOCK(lock), 474 + }; 475 + 476 + static void __used test_local_lock_init(struct test_local_lock_data *d) 477 + { 478 + guard(local_lock_init)(&d->lock); 479 + d->counter = 0; 480 + } 481 + 482 + static void __used test_local_lock(void) 483 + { 484 + unsigned long flags; 485 + 486 + local_lock(&test_local_lock_data.lock); 487 + this_cpu_add(test_local_lock_data.counter, 1); 488 + local_unlock(&test_local_lock_data.lock); 489 + 490 + 
local_lock_irq(&test_local_lock_data.lock); 491 + this_cpu_add(test_local_lock_data.counter, 1); 492 + local_unlock_irq(&test_local_lock_data.lock); 493 + 494 + local_lock_irqsave(&test_local_lock_data.lock, flags); 495 + this_cpu_add(test_local_lock_data.counter, 1); 496 + local_unlock_irqrestore(&test_local_lock_data.lock, flags); 497 + 498 + local_lock_nested_bh(&test_local_lock_data.lock); 499 + this_cpu_add(test_local_lock_data.counter, 1); 500 + local_unlock_nested_bh(&test_local_lock_data.lock); 501 + } 502 + 503 + static void __used test_local_lock_guard(void) 504 + { 505 + { guard(local_lock)(&test_local_lock_data.lock); this_cpu_add(test_local_lock_data.counter, 1); } 506 + { guard(local_lock_irq)(&test_local_lock_data.lock); this_cpu_add(test_local_lock_data.counter, 1); } 507 + { guard(local_lock_irqsave)(&test_local_lock_data.lock); this_cpu_add(test_local_lock_data.counter, 1); } 508 + { guard(local_lock_nested_bh)(&test_local_lock_data.lock); this_cpu_add(test_local_lock_data.counter, 1); } 509 + } 510 + 511 + struct test_local_trylock_data { 512 + local_trylock_t lock; 513 + int counter __guarded_by(&lock); 514 + }; 515 + 516 + static DEFINE_PER_CPU(struct test_local_trylock_data, test_local_trylock_data) = { 517 + .lock = INIT_LOCAL_TRYLOCK(lock), 518 + }; 519 + 520 + static void __used test_local_trylock_init(struct test_local_trylock_data *d) 521 + { 522 + guard(local_trylock_init)(&d->lock); 523 + d->counter = 0; 524 + } 525 + 526 + static void __used test_local_trylock(void) 527 + { 528 + local_lock(&test_local_trylock_data.lock); 529 + this_cpu_add(test_local_trylock_data.counter, 1); 530 + local_unlock(&test_local_trylock_data.lock); 531 + 532 + if (local_trylock(&test_local_trylock_data.lock)) { 533 + this_cpu_add(test_local_trylock_data.counter, 1); 534 + local_unlock(&test_local_trylock_data.lock); 535 + } 536 + } 537 + 538 + static DEFINE_WD_CLASS(ww_class); 539 + 540 + struct test_ww_mutex_data { 541 + struct ww_mutex mtx; 542 + int counter __guarded_by(&mtx); 543 + }; 544 + 545 + static void __used test_ww_mutex_lock_noctx(struct test_ww_mutex_data *d) 546 + { 547 + if (!ww_mutex_lock(&d->mtx, NULL)) { 548 + d->counter++; 549 + ww_mutex_unlock(&d->mtx); 550 + } 551 + 552 + if (!ww_mutex_lock_interruptible(&d->mtx, NULL)) { 553 + d->counter++; 554 + ww_mutex_unlock(&d->mtx); 555 + } 556 + 557 + if (ww_mutex_trylock(&d->mtx, NULL)) { 558 + d->counter++; 559 + ww_mutex_unlock(&d->mtx); 560 + } 561 + 562 + ww_mutex_lock_slow(&d->mtx, NULL); 563 + d->counter++; 564 + ww_mutex_unlock(&d->mtx); 565 + 566 + ww_mutex_destroy(&d->mtx); 567 + } 568 + 569 + static void __used test_ww_mutex_lock_ctx(struct test_ww_mutex_data *d) 570 + { 571 + struct ww_acquire_ctx ctx; 572 + 573 + ww_acquire_init(&ctx, &ww_class); 574 + 575 + if (!ww_mutex_lock(&d->mtx, &ctx)) { 576 + d->counter++; 577 + ww_mutex_unlock(&d->mtx); 578 + } 579 + 580 + if (!ww_mutex_lock_interruptible(&d->mtx, &ctx)) { 581 + d->counter++; 582 + ww_mutex_unlock(&d->mtx); 583 + } 584 + 585 + if (ww_mutex_trylock(&d->mtx, &ctx)) { 586 + d->counter++; 587 + ww_mutex_unlock(&d->mtx); 588 + } 589 + 590 + ww_mutex_lock_slow(&d->mtx, &ctx); 591 + d->counter++; 592 + ww_mutex_unlock(&d->mtx); 593 + 594 + ww_acquire_done(&ctx); 595 + ww_acquire_fini(&ctx); 596 + 597 + ww_mutex_destroy(&d->mtx); 598 + }
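Many of the cases above use guard() and scoped_cond_guard() scopes rather than explicit lock/unlock pairs. Those helpers come from include/linux/cleanup.h and are built on the compiler's cleanup attribute, which runs a function when a local variable goes out of scope. A minimal userspace analogue, with hypothetical names:

#include <pthread.h>

struct guard_state {
	pthread_mutex_t *m;
};

static inline void guard_exit(struct guard_state *g)
{
	pthread_mutex_unlock(g->m);
}

/* Declares a local whose cleanup handler unlocks at end of scope. */
#define GUARD(mutex)						\
	__attribute__((cleanup(guard_exit)))			\
	struct guard_state __guard = { .m = (mutex) };		\
	pthread_mutex_lock(__guard.m)

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int counter;

static void bump(void)
{
	GUARD(&lock);	/* unlocked automatically when bump() returns */
	counter++;
}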
+2
mm/kfence/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 3 + CONTEXT_ANALYSIS := y 4 + 3 5 obj-y := core.o report.o 4 6 5 7 CFLAGS_kfence_test.o := -fno-omit-frame-pointer -fno-optimize-sibling-calls
+13 -7
mm/kfence/core.c
··· 133 133 static struct kfence_metadata *kfence_metadata_init __read_mostly; 134 134 135 135 /* Freelist with available objects. */ 136 - static struct list_head kfence_freelist = LIST_HEAD_INIT(kfence_freelist); 137 - static DEFINE_RAW_SPINLOCK(kfence_freelist_lock); /* Lock protecting freelist. */ 136 + DEFINE_RAW_SPINLOCK(kfence_freelist_lock); /* Lock protecting freelist. */ 137 + static struct list_head kfence_freelist __guarded_by(&kfence_freelist_lock) = LIST_HEAD_INIT(kfence_freelist); 138 138 139 139 /* 140 140 * The static key to set up a KFENCE allocation; or if static keys are not used ··· 254 254 } 255 255 256 256 static inline unsigned long metadata_to_pageaddr(const struct kfence_metadata *meta) 257 + __must_hold(&meta->lock) 257 258 { 258 259 unsigned long offset = (meta - kfence_metadata + 1) * PAGE_SIZE * 2; 259 260 unsigned long pageaddr = (unsigned long)&__kfence_pool[offset]; ··· 290 289 static noinline void 291 290 metadata_update_state(struct kfence_metadata *meta, enum kfence_object_state next, 292 291 unsigned long *stack_entries, size_t num_stack_entries) 292 + __must_hold(&meta->lock) 293 293 { 294 294 struct kfence_track *track = 295 295 next == KFENCE_OBJECT_ALLOCATED ? &meta->alloc_track : &meta->free_track; ··· 488 486 alloc_covered_add(alloc_stack_hash, 1); 489 487 490 488 /* Set required slab fields. */ 491 - slab = virt_to_slab((void *)meta->addr); 489 + slab = virt_to_slab(addr); 492 490 slab->slab_cache = cache; 493 491 slab->objects = 1; 494 492 ··· 517 515 static void kfence_guarded_free(void *addr, struct kfence_metadata *meta, bool zombie) 518 516 { 519 517 struct kcsan_scoped_access assert_page_exclusive; 518 + u32 alloc_stack_hash; 520 519 unsigned long flags; 521 520 bool init; 522 521 ··· 550 547 /* Mark the object as freed. */ 551 548 metadata_update_state(meta, KFENCE_OBJECT_FREED, NULL, 0); 552 549 init = slab_want_init_on_free(meta->cache); 550 + alloc_stack_hash = meta->alloc_stack_hash; 553 551 raw_spin_unlock_irqrestore(&meta->lock, flags); 554 552 555 - alloc_covered_add(meta->alloc_stack_hash, -1); 553 + alloc_covered_add(alloc_stack_hash, -1); 556 554 557 555 /* Check canary bytes for memory corruption. */ 558 556 check_canary(meta); ··· 598 594 * which partial initialization succeeded. 
599 595 */ 600 596 static unsigned long kfence_init_pool(void) 597 + __context_unsafe(/* constructor */) 601 598 { 602 599 unsigned long addr, start_pfn; 603 600 int i, rand; ··· 1247 1242 { 1248 1243 const int page_index = (addr - (unsigned long)__kfence_pool) / PAGE_SIZE; 1249 1244 struct kfence_metadata *to_report = NULL; 1245 + unsigned long unprotected_page = 0; 1250 1246 enum kfence_error_type error_type; 1251 1247 unsigned long flags; 1252 1248 ··· 1281 1275 if (!to_report) 1282 1276 goto out; 1283 1277 1284 - raw_spin_lock_irqsave(&to_report->lock, flags); 1285 - to_report->unprotected_page = addr; 1286 1278 error_type = KFENCE_ERROR_OOB; 1279 + unprotected_page = addr; 1287 1280 1288 1281 /* 1289 1282 * If the object was freed before we took the lock we can still ··· 1294 1289 if (!to_report) 1295 1290 goto out; 1296 1291 1297 - raw_spin_lock_irqsave(&to_report->lock, flags); 1298 1292 error_type = KFENCE_ERROR_UAF; 1299 1293 /* 1300 1294 * We may race with __kfence_alloc(), and it is possible that a ··· 1305 1301 1306 1302 out: 1307 1303 if (to_report) { 1304 + raw_spin_lock_irqsave(&to_report->lock, flags); 1305 + to_report->unprotected_page = unprotected_page; 1308 1306 kfence_report_error(addr, is_write, regs, to_report, error_type); 1309 1307 raw_spin_unlock_irqrestore(&to_report->lock, flags); 1310 1308 } else {
+8 -6
mm/kfence/kfence.h
··· 34 34 /* Maximum stack depth for reports. */ 35 35 #define KFENCE_STACK_DEPTH 64 36 36 37 + extern raw_spinlock_t kfence_freelist_lock; 38 + 37 39 /* KFENCE object states. */ 38 40 enum kfence_object_state { 39 41 KFENCE_OBJECT_UNUSED, /* Object is unused. */ ··· 55 53 56 54 /* KFENCE metadata per guarded allocation. */ 57 55 struct kfence_metadata { 58 - struct list_head list; /* Freelist node; access under kfence_freelist_lock. */ 56 + struct list_head list __guarded_by(&kfence_freelist_lock); /* Freelist node. */ 59 57 struct rcu_head rcu_head; /* For delayed freeing. */ 60 58 61 59 /* ··· 93 91 * In case of an invalid access, the page that was unprotected; we 94 92 * optimistically only store one address. 95 93 */ 96 - unsigned long unprotected_page; 94 + unsigned long unprotected_page __guarded_by(&lock); 97 95 98 96 /* Allocation and free stack information. */ 99 - struct kfence_track alloc_track; 100 - struct kfence_track free_track; 97 + struct kfence_track alloc_track __guarded_by(&lock); 98 + struct kfence_track free_track __guarded_by(&lock); 101 99 /* For updating alloc_covered on frees. */ 102 - u32 alloc_stack_hash; 100 + u32 alloc_stack_hash __guarded_by(&lock); 103 101 #ifdef CONFIG_MEMCG 104 102 struct slabobj_ext obj_exts; 105 103 #endif ··· 143 141 void kfence_report_error(unsigned long address, bool is_write, struct pt_regs *regs, 144 142 const struct kfence_metadata *meta, enum kfence_error_type type); 145 143 146 - void kfence_print_object(struct seq_file *seq, const struct kfence_metadata *meta); 144 + void kfence_print_object(struct seq_file *seq, const struct kfence_metadata *meta) __must_hold(&meta->lock); 147 145 148 146 #endif /* MM_KFENCE_KFENCE_H */
+2 -2
mm/kfence/report.c
··· 106 106 107 107 static void kfence_print_stack(struct seq_file *seq, const struct kfence_metadata *meta, 108 108 bool show_alloc) 109 + __must_hold(&meta->lock) 109 110 { 110 111 const struct kfence_track *track = show_alloc ? &meta->alloc_track : &meta->free_track; 111 112 u64 ts_sec = track->ts_nsec; ··· 208 207 if (WARN_ON(type != KFENCE_ERROR_INVALID && !meta)) 209 208 return; 210 209 211 - if (meta) 212 - lockdep_assert_held(&meta->lock); 213 210 /* 214 211 * Because we may generate reports in printk-unfriendly parts of the 215 212 * kernel, such as scheduler code, the use of printk() could deadlock. ··· 262 263 stack_trace_print(stack_entries + skipnr, num_stack_entries - skipnr, 0); 263 264 264 265 if (meta) { 266 + lockdep_assert_held(&meta->lock); 265 267 pr_err("\n"); 266 268 kfence_print_object(NULL, meta); 267 269 }
+2 -2
mm/memory.c
··· 2213 2213 return pmd; 2214 2214 } 2215 2215 2216 - pte_t *__get_locked_pte(struct mm_struct *mm, unsigned long addr, 2217 - spinlock_t **ptl) 2216 + pte_t *get_locked_pte(struct mm_struct *mm, unsigned long addr, 2217 + spinlock_t **ptl) 2218 2218 { 2219 2219 pmd_t *pmd = walk_to_pmd(mm, addr); 2220 2220
+9 -10
mm/pgtable-generic.c
··· 280 280 static void pmdp_get_lockless_end(unsigned long irqflags) { } 281 281 #endif 282 282 283 - pte_t *___pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp) 283 + pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp) 284 284 { 285 285 unsigned long irqflags; 286 286 pmd_t pmdval; ··· 332 332 } 333 333 334 334 /* 335 - * pte_offset_map_lock(mm, pmd, addr, ptlp), and its internal implementation 336 - * __pte_offset_map_lock() below, is usually called with the pmd pointer for 337 - * addr, reached by walking down the mm's pgd, p4d, pud for addr: either while 338 - * holding mmap_lock or vma lock for read or for write; or in truncate or rmap 339 - * context, while holding file's i_mmap_lock or anon_vma lock for read (or for 340 - * write). In a few cases, it may be used with pmd pointing to a pmd_t already 341 - * copied to or constructed on the stack. 335 + * pte_offset_map_lock(mm, pmd, addr, ptlp) is usually called with the pmd 336 + * pointer for addr, reached by walking down the mm's pgd, p4d, pud for addr: 337 + * either while holding mmap_lock or vma lock for read or for write; or in 338 + * truncate or rmap context, while holding file's i_mmap_lock or anon_vma lock 339 + * for read (or for write). In a few cases, it may be used with pmd pointing to 340 + * a pmd_t already copied to or constructed on the stack. 342 341 * 343 342 * When successful, it returns the pte pointer for addr, with its page table 344 343 * kmapped if necessary (when CONFIG_HIGHPTE), and locked against concurrent ··· 388 389 * table, and may not use RCU at all: "outsiders" like khugepaged should avoid 389 390 * pte_offset_map() and co once the vma is detached from mm or mm_users is zero. 390 391 */ 391 - pte_t *__pte_offset_map_lock(struct mm_struct *mm, pmd_t *pmd, 392 - unsigned long addr, spinlock_t **ptlp) 392 + pte_t *pte_offset_map_lock(struct mm_struct *mm, pmd_t *pmd, 393 + unsigned long addr, spinlock_t **ptlp) 393 394 { 394 395 spinlock_t *ptl; 395 396 pmd_t pmdval;
+1 -1
net/ipv4/tcp_sigpool.c
··· 257 257 } 258 258 EXPORT_SYMBOL_GPL(tcp_sigpool_get); 259 259 260 - int tcp_sigpool_start(unsigned int id, struct tcp_sigpool *c) __cond_acquires(RCU_BH) 260 + int tcp_sigpool_start(unsigned int id, struct tcp_sigpool *c) __cond_acquires(0, RCU_BH) 261 261 { 262 262 struct crypto_ahash *hash; 263 263
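This one-liner shows the revised __cond_acquires() signature: the annotation now takes the return value that signals a successful acquisition as its first argument (0 here, because tcp_sigpool_start() returns 0 once it holds RCU_BH). That mirrors Clang's underlying try_acquire_capability attribute, which must be told which result means "acquired" so it can track the conditional paths, as in this sketch (raw Clang spellings, illustrative names):

#include <pthread.h>
#include <stdbool.h>

#define CAPABILITY(x)		__attribute__((capability(x)))
#define TRY_ACQUIRE(ret, x)	__attribute__((try_acquire_capability(ret, x)))
#define RELEASE(x)		__attribute__((release_capability(x)))
#define GUARDED_BY(x)		__attribute__((guarded_by(x)))

struct CAPABILITY("mutex") lock {
	pthread_mutex_t mtx;
};

/* "true" is the value that means the capability was acquired. */
static bool lock_try(struct lock *l) TRY_ACQUIRE(true, l)
{
	return pthread_mutex_trylock(&l->mtx) == 0;
}

static void lock_release(struct lock *l) RELEASE(l)
{
	pthread_mutex_unlock(&l->mtx);
}

struct data {
	struct lock lock;
	int counter GUARDED_BY(lock);
};

static void maybe_bump(struct data *d)
{
	if (lock_try(&d->lock)) {
		d->counter++;	/* only reachable with the lock held */
		lock_release(&d->lock);
	}
}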
+1 -6
rust/helpers/atomic.c
··· 11 11 12 12 #include <linux/atomic.h> 13 13 14 - // TODO: Remove this after INLINE_HELPERS support is added. 15 - #ifndef __rust_helper 16 - #define __rust_helper 17 - #endif 18 - 19 14 __rust_helper int 20 15 rust_helper_atomic_read(const atomic_t *v) 21 16 { ··· 1032 1037 } 1033 1038 1034 1039 #endif /* _RUST_ATOMIC_API_H */ 1035 - // 615a0e0c98b5973a47fe4fa65e92935051ca00ed 1040 + // e4edb6174dd42a265284958f00a7cea7ddb464b1
+139
rust/helpers/atomic_ext.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + #include <asm/barrier.h> 4 + #include <asm/rwonce.h> 5 + #include <linux/atomic.h> 6 + 7 + __rust_helper s8 rust_helper_atomic_i8_read(s8 *ptr) 8 + { 9 + return READ_ONCE(*ptr); 10 + } 11 + 12 + __rust_helper s8 rust_helper_atomic_i8_read_acquire(s8 *ptr) 13 + { 14 + return smp_load_acquire(ptr); 15 + } 16 + 17 + __rust_helper s16 rust_helper_atomic_i16_read(s16 *ptr) 18 + { 19 + return READ_ONCE(*ptr); 20 + } 21 + 22 + __rust_helper s16 rust_helper_atomic_i16_read_acquire(s16 *ptr) 23 + { 24 + return smp_load_acquire(ptr); 25 + } 26 + 27 + __rust_helper void rust_helper_atomic_i8_set(s8 *ptr, s8 val) 28 + { 29 + WRITE_ONCE(*ptr, val); 30 + } 31 + 32 + __rust_helper void rust_helper_atomic_i8_set_release(s8 *ptr, s8 val) 33 + { 34 + smp_store_release(ptr, val); 35 + } 36 + 37 + __rust_helper void rust_helper_atomic_i16_set(s16 *ptr, s16 val) 38 + { 39 + WRITE_ONCE(*ptr, val); 40 + } 41 + 42 + __rust_helper void rust_helper_atomic_i16_set_release(s16 *ptr, s16 val) 43 + { 44 + smp_store_release(ptr, val); 45 + } 46 + 47 + /* 48 + * xchg helpers depend on ARCH_SUPPORTS_ATOMIC_RMW and on the 49 + * architecture providing xchg() support for i8 and i16. 50 + * 51 + * The architectures that currently support Rust (x86_64, armv7, 52 + * arm64, riscv, and loongarch) satisfy these requirements. 53 + */ 54 + __rust_helper s8 rust_helper_atomic_i8_xchg(s8 *ptr, s8 new) 55 + { 56 + return xchg(ptr, new); 57 + } 58 + 59 + __rust_helper s16 rust_helper_atomic_i16_xchg(s16 *ptr, s16 new) 60 + { 61 + return xchg(ptr, new); 62 + } 63 + 64 + __rust_helper s8 rust_helper_atomic_i8_xchg_acquire(s8 *ptr, s8 new) 65 + { 66 + return xchg_acquire(ptr, new); 67 + } 68 + 69 + __rust_helper s16 rust_helper_atomic_i16_xchg_acquire(s16 *ptr, s16 new) 70 + { 71 + return xchg_acquire(ptr, new); 72 + } 73 + 74 + __rust_helper s8 rust_helper_atomic_i8_xchg_release(s8 *ptr, s8 new) 75 + { 76 + return xchg_release(ptr, new); 77 + } 78 + 79 + __rust_helper s16 rust_helper_atomic_i16_xchg_release(s16 *ptr, s16 new) 80 + { 81 + return xchg_release(ptr, new); 82 + } 83 + 84 + __rust_helper s8 rust_helper_atomic_i8_xchg_relaxed(s8 *ptr, s8 new) 85 + { 86 + return xchg_relaxed(ptr, new); 87 + } 88 + 89 + __rust_helper s16 rust_helper_atomic_i16_xchg_relaxed(s16 *ptr, s16 new) 90 + { 91 + return xchg_relaxed(ptr, new); 92 + } 93 + 94 + /* 95 + * try_cmpxchg helpers depend on ARCH_SUPPORTS_ATOMIC_RMW and on the 96 + * architecture providing try_cmpxchg() support for i8 and i16. 97 + * 98 + * The architectures that currently support Rust (x86_64, armv7, 99 + * arm64, riscv, and loongarch) satisfy these requirements.
100 + */ 101 + __rust_helper bool rust_helper_atomic_i8_try_cmpxchg(s8 *ptr, s8 *old, s8 new) 102 + { 103 + return try_cmpxchg(ptr, old, new); 104 + } 105 + 106 + __rust_helper bool rust_helper_atomic_i16_try_cmpxchg(s16 *ptr, s16 *old, s16 new) 107 + { 108 + return try_cmpxchg(ptr, old, new); 109 + } 110 + 111 + __rust_helper bool rust_helper_atomic_i8_try_cmpxchg_acquire(s8 *ptr, s8 *old, s8 new) 112 + { 113 + return try_cmpxchg_acquire(ptr, old, new); 114 + } 115 + 116 + __rust_helper bool rust_helper_atomic_i16_try_cmpxchg_acquire(s16 *ptr, s16 *old, s16 new) 117 + { 118 + return try_cmpxchg_acquire(ptr, old, new); 119 + } 120 + 121 + __rust_helper bool rust_helper_atomic_i8_try_cmpxchg_release(s8 *ptr, s8 *old, s8 new) 122 + { 123 + return try_cmpxchg_release(ptr, old, new); 124 + } 125 + 126 + __rust_helper bool rust_helper_atomic_i16_try_cmpxchg_release(s16 *ptr, s16 *old, s16 new) 127 + { 128 + return try_cmpxchg_release(ptr, old, new); 129 + } 130 + 131 + __rust_helper bool rust_helper_atomic_i8_try_cmpxchg_relaxed(s8 *ptr, s8 *old, s8 new) 132 + { 133 + return try_cmpxchg_relaxed(ptr, old, new); 134 + } 135 + 136 + __rust_helper bool rust_helper_atomic_i16_try_cmpxchg_relaxed(s16 *ptr, s16 *old, s16 new) 137 + { 138 + return try_cmpxchg_relaxed(ptr, old, new); 139 + }
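The new i8/i16 helpers pair smp_store_release() with smp_load_acquire(). The classic use of that pairing is publishing data behind a flag: everything written before the release store is guaranteed visible to a reader whose acquire load observes the flag set. A standalone C11 analogue of the same idiom (userspace stdatomic, illustrative names):

#include <stdatomic.h>
#include <stdbool.h>

static int payload;		/* plain data, published below */
static atomic_bool ready;

/* Writer: every store before the release store... */
static void publish(int v)
{
	payload = v;
	atomic_store_explicit(&ready, true, memory_order_release);
}

/* ...is visible to a reader whose acquire load sees ready == true. */
static bool consume(int *out)
{
	if (!atomic_load_explicit(&ready, memory_order_acquire))
		return false;
	*out = payload;
	return true;
}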
+3 -3
rust/helpers/barrier.c
··· 2 2 3 3 #include <asm/barrier.h> 4 4 5 - void rust_helper_smp_mb(void) 5 + __rust_helper void rust_helper_smp_mb(void) 6 6 { 7 7 smp_mb(); 8 8 } 9 9 10 - void rust_helper_smp_wmb(void) 10 + __rust_helper void rust_helper_smp_wmb(void) 11 11 { 12 12 smp_wmb(); 13 13 } 14 14 15 - void rust_helper_smp_rmb(void) 15 + __rust_helper void rust_helper_smp_rmb(void) 16 16 { 17 17 smp_rmb(); 18 18 }
+2 -2
rust/helpers/blk.c
··· 3 3 #include <linux/blk-mq.h> 4 4 #include <linux/blkdev.h> 5 5 6 - void *rust_helper_blk_mq_rq_to_pdu(struct request *rq) 6 + __rust_helper void *rust_helper_blk_mq_rq_to_pdu(struct request *rq) 7 7 { 8 8 return blk_mq_rq_to_pdu(rq); 9 9 } 10 10 11 - struct request *rust_helper_blk_mq_rq_from_pdu(void *pdu) 11 + __rust_helper struct request *rust_helper_blk_mq_rq_from_pdu(void *pdu) 12 12 { 13 13 return blk_mq_rq_from_pdu(pdu); 14 14 }
+1 -1
rust/helpers/completion.c
··· 2 2 3 3 #include <linux/completion.h> 4 4 5 - void rust_helper_init_completion(struct completion *x) 5 + __rust_helper void rust_helper_init_completion(struct completion *x) 6 6 { 7 7 init_completion(x); 8 8 }
+1 -1
rust/helpers/cpu.c
··· 2 2 3 3 #include <linux/smp.h> 4 4 5 - unsigned int rust_helper_raw_smp_processor_id(void) 5 + __rust_helper unsigned int rust_helper_raw_smp_processor_id(void) 6 6 { 7 7 return raw_smp_processor_id(); 8 8 }
+3
rust/helpers/helpers.c
··· 7 7 * Sorted alphabetically. 8 8 */ 9 9 10 + #define __rust_helper 11 + 10 12 #include "atomic.c" 13 + #include "atomic_ext.c" 11 14 #include "auxiliary.c" 12 15 #include "barrier.c" 13 16 #include "binder.c"
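helpers.c now defines __rust_helper to nothing and then #includes every helper file, so each rust_helper_*() definition picks up the qualifier in one place; per the TODO removed from rust/helpers/atomic.c above, the intent is that a future inline-helpers/LTO build mode can redefine it. A sketch of the pattern, with hypothetical file and macro names:

/* all_helpers.c - umbrella translation unit (hypothetical names) */
#define HELPER_QUAL	/* empty today; a build mode aimed at LTO could
			 * redefine this, e.g. to add inline qualifiers */
#include "helper_add.c"

/* helper_add.c - every helper picks up the qualifier from the umbrella */
HELPER_QUAL int helper_add(int a, int b)
{
	return a + b;
}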
+7 -6
rust/helpers/mutex.c
··· 2 2 3 3 #include <linux/mutex.h> 4 4 5 - void rust_helper_mutex_lock(struct mutex *lock) 5 + __rust_helper void rust_helper_mutex_lock(struct mutex *lock) 6 6 { 7 7 mutex_lock(lock); 8 8 } 9 9 10 - int rust_helper_mutex_trylock(struct mutex *lock) 10 + __rust_helper int rust_helper_mutex_trylock(struct mutex *lock) 11 11 { 12 12 return mutex_trylock(lock); 13 13 } 14 14 15 - void rust_helper___mutex_init(struct mutex *mutex, const char *name, 16 - struct lock_class_key *key) 15 + __rust_helper void rust_helper___mutex_init(struct mutex *mutex, 16 + const char *name, 17 + struct lock_class_key *key) 17 18 { 18 19 __mutex_init(mutex, name, key); 19 20 } 20 21 21 - void rust_helper_mutex_assert_is_held(struct mutex *mutex) 22 + __rust_helper void rust_helper_mutex_assert_is_held(struct mutex *mutex) 22 23 { 23 24 lockdep_assert_held(mutex); 24 25 } 25 26 26 - void rust_helper_mutex_destroy(struct mutex *lock) 27 + __rust_helper void rust_helper_mutex_destroy(struct mutex *lock) 27 28 { 28 29 mutex_destroy(lock); 29 30 }
+1 -1
rust/helpers/processor.c
··· 2 2 3 3 #include <linux/processor.h> 4 4 5 - void rust_helper_cpu_relax(void) 5 + __rust_helper void rust_helper_cpu_relax(void) 6 6 { 7 7 cpu_relax(); 8 8 }
+2 -2
rust/helpers/rcu.c
··· 2 2 3 3 #include <linux/rcupdate.h> 4 4 5 - void rust_helper_rcu_read_lock(void) 5 + __rust_helper void rust_helper_rcu_read_lock(void) 6 6 { 7 7 rcu_read_lock(); 8 8 } 9 9 10 - void rust_helper_rcu_read_unlock(void) 10 + __rust_helper void rust_helper_rcu_read_unlock(void) 11 11 { 12 12 rcu_read_unlock(); 13 13 }
+5 -5
rust/helpers/refcount.c
··· 2 2 3 3 #include <linux/refcount.h> 4 4 5 - refcount_t rust_helper_REFCOUNT_INIT(int n) 5 + __rust_helper refcount_t rust_helper_REFCOUNT_INIT(int n) 6 6 { 7 7 return (refcount_t)REFCOUNT_INIT(n); 8 8 } 9 9 10 - void rust_helper_refcount_set(refcount_t *r, int n) 10 + __rust_helper void rust_helper_refcount_set(refcount_t *r, int n) 11 11 { 12 12 refcount_set(r, n); 13 13 } 14 14 15 - void rust_helper_refcount_inc(refcount_t *r) 15 + __rust_helper void rust_helper_refcount_inc(refcount_t *r) 16 16 { 17 17 refcount_inc(r); 18 18 } 19 19 20 - void rust_helper_refcount_dec(refcount_t *r) 20 + __rust_helper void rust_helper_refcount_dec(refcount_t *r) 21 21 { 22 22 refcount_dec(r); 23 23 } 24 24 25 - bool rust_helper_refcount_dec_and_test(refcount_t *r) 25 + __rust_helper bool rust_helper_refcount_dec_and_test(refcount_t *r) 26 26 { 27 27 return refcount_dec_and_test(r); 28 28 }
+1 -1
rust/helpers/signal.c
··· 2 2 3 3 #include <linux/sched/signal.h> 4 4 5 - int rust_helper_signal_pending(struct task_struct *t) 5 + __rust_helper int rust_helper_signal_pending(struct task_struct *t) 6 6 { 7 7 return signal_pending(t); 8 8 }
+7 -6
rust/helpers/spinlock.c
··· 2 2 3 3 #include <linux/spinlock.h> 4 4 5 - void rust_helper___spin_lock_init(spinlock_t *lock, const char *name, 6 - struct lock_class_key *key) 5 + __rust_helper void rust_helper___spin_lock_init(spinlock_t *lock, 6 + const char *name, 7 + struct lock_class_key *key) 7 8 { 8 9 #ifdef CONFIG_DEBUG_SPINLOCK 9 10 # if defined(CONFIG_PREEMPT_RT) ··· 17 16 #endif /* CONFIG_DEBUG_SPINLOCK */ 18 17 } 19 18 20 - void rust_helper_spin_lock(spinlock_t *lock) 19 + __rust_helper void rust_helper_spin_lock(spinlock_t *lock) 21 20 { 22 21 spin_lock(lock); 23 22 } 24 23 25 - void rust_helper_spin_unlock(spinlock_t *lock) 24 + __rust_helper void rust_helper_spin_unlock(spinlock_t *lock) 26 25 { 27 26 spin_unlock(lock); 28 27 } 29 28 30 - int rust_helper_spin_trylock(spinlock_t *lock) 29 + __rust_helper int rust_helper_spin_trylock(spinlock_t *lock) 31 30 { 32 31 return spin_trylock(lock); 33 32 } 34 33 35 - void rust_helper_spin_assert_is_held(spinlock_t *lock) 34 + __rust_helper void rust_helper_spin_assert_is_held(spinlock_t *lock) 36 35 { 37 36 lockdep_assert_held(lock); 38 37 }
+2 -2
rust/helpers/sync.c
··· 2 2 3 3 #include <linux/lockdep.h> 4 4 5 - void rust_helper_lockdep_register_key(struct lock_class_key *k) 5 + __rust_helper void rust_helper_lockdep_register_key(struct lock_class_key *k) 6 6 { 7 7 lockdep_register_key(k); 8 8 } 9 9 10 - void rust_helper_lockdep_unregister_key(struct lock_class_key *k) 10 + __rust_helper void rust_helper_lockdep_unregister_key(struct lock_class_key *k) 11 11 { 12 12 lockdep_unregister_key(k); 13 13 }
+12 -12
rust/helpers/task.c
··· 3 3 #include <linux/kernel.h> 4 4 #include <linux/sched/task.h> 5 5 6 - void rust_helper_might_resched(void) 6 + __rust_helper void rust_helper_might_resched(void) 7 7 { 8 8 might_resched(); 9 9 } 10 10 11 - struct task_struct *rust_helper_get_current(void) 11 + __rust_helper struct task_struct *rust_helper_get_current(void) 12 12 { 13 13 return current; 14 14 } 15 15 16 - void rust_helper_get_task_struct(struct task_struct *t) 16 + __rust_helper void rust_helper_get_task_struct(struct task_struct *t) 17 17 { 18 18 get_task_struct(t); 19 19 } 20 20 21 - void rust_helper_put_task_struct(struct task_struct *t) 21 + __rust_helper void rust_helper_put_task_struct(struct task_struct *t) 22 22 { 23 23 put_task_struct(t); 24 24 } 25 25 26 - kuid_t rust_helper_task_uid(struct task_struct *task) 26 + __rust_helper kuid_t rust_helper_task_uid(struct task_struct *task) 27 27 { 28 28 return task_uid(task); 29 29 } 30 30 31 - kuid_t rust_helper_task_euid(struct task_struct *task) 31 + __rust_helper kuid_t rust_helper_task_euid(struct task_struct *task) 32 32 { 33 33 return task_euid(task); 34 34 } 35 35 36 36 #ifndef CONFIG_USER_NS 37 - uid_t rust_helper_from_kuid(struct user_namespace *to, kuid_t uid) 37 + __rust_helper uid_t rust_helper_from_kuid(struct user_namespace *to, kuid_t uid) 38 38 { 39 39 return from_kuid(to, uid); 40 40 } 41 41 #endif /* CONFIG_USER_NS */ 42 42 43 - bool rust_helper_uid_eq(kuid_t left, kuid_t right) 43 + __rust_helper bool rust_helper_uid_eq(kuid_t left, kuid_t right) 44 44 { 45 45 return uid_eq(left, right); 46 46 } 47 47 48 - kuid_t rust_helper_current_euid(void) 48 + __rust_helper kuid_t rust_helper_current_euid(void) 49 49 { 50 50 return current_euid(); 51 51 } 52 52 53 - struct user_namespace *rust_helper_current_user_ns(void) 53 + __rust_helper struct user_namespace *rust_helper_current_user_ns(void) 54 54 { 55 55 return current_user_ns(); 56 56 } 57 57 58 - pid_t rust_helper_task_tgid_nr_ns(struct task_struct *tsk, 59 - struct pid_namespace *ns) 58 + __rust_helper pid_t rust_helper_task_tgid_nr_ns(struct task_struct *tsk, 59 + struct pid_namespace *ns) 60 60 { 61 61 return task_tgid_nr_ns(tsk, ns); 62 62 }
+7 -7
rust/helpers/time.c
··· 4 4 #include <linux/ktime.h> 5 5 #include <linux/timekeeping.h> 6 6 7 - void rust_helper_fsleep(unsigned long usecs) 7 + __rust_helper void rust_helper_fsleep(unsigned long usecs) 8 8 { 9 9 fsleep(usecs); 10 10 } 11 11 12 - ktime_t rust_helper_ktime_get_real(void) 12 + __rust_helper ktime_t rust_helper_ktime_get_real(void) 13 13 { 14 14 return ktime_get_real(); 15 15 } 16 16 17 - ktime_t rust_helper_ktime_get_boottime(void) 17 + __rust_helper ktime_t rust_helper_ktime_get_boottime(void) 18 18 { 19 19 return ktime_get_boottime(); 20 20 } 21 21 22 - ktime_t rust_helper_ktime_get_clocktai(void) 22 + __rust_helper ktime_t rust_helper_ktime_get_clocktai(void) 23 23 { 24 24 return ktime_get_clocktai(); 25 25 } 26 26 27 - s64 rust_helper_ktime_to_us(const ktime_t kt) 27 + __rust_helper s64 rust_helper_ktime_to_us(const ktime_t kt) 28 28 { 29 29 return ktime_to_us(kt); 30 30 } 31 31 32 - s64 rust_helper_ktime_to_ms(const ktime_t kt) 32 + __rust_helper s64 rust_helper_ktime_to_ms(const ktime_t kt) 33 33 { 34 34 return ktime_to_ms(kt); 35 35 } 36 36 37 - void rust_helper_udelay(unsigned long usec) 37 + __rust_helper void rust_helper_udelay(unsigned long usec) 38 38 { 39 39 udelay(usec); 40 40 }
+1 -1
rust/helpers/wait.c
··· 2 2 3 3 #include <linux/wait.h> 4 4 5 - void rust_helper_init_wait(struct wait_queue_entry *wq_entry) 5 + __rust_helper void rust_helper_init_wait(struct wait_queue_entry *wq_entry) 6 6 { 7 7 init_wait(wq_entry); 8 8 }
+6 -8
rust/kernel/list/arc.rs
··· 6 6 7 7 use crate::alloc::{AllocError, Flags}; 8 8 use crate::prelude::*; 9 + use crate::sync::atomic::{ordering, Atomic}; 9 10 use crate::sync::{Arc, ArcBorrow, UniqueArc}; 10 11 use core::marker::PhantomPinned; 11 12 use core::ops::Deref; 12 13 use core::pin::Pin; 13 - use core::sync::atomic::{AtomicBool, Ordering}; 14 14 15 15 /// Declares that this type has some way to ensure that there is exactly one `ListArc` instance for 16 16 /// this id. ··· 469 469 /// If the boolean is `false`, then there is no [`ListArc`] for this value. 470 470 #[repr(transparent)] 471 471 pub struct AtomicTracker<const ID: u64 = 0> { 472 - inner: AtomicBool, 472 + inner: Atomic<bool>, 473 473 // This value needs to be pinned to justify the INVARIANT: comment in `AtomicTracker::new`. 474 474 _pin: PhantomPinned, 475 475 } ··· 480 480 // INVARIANT: Pin-init initializers can't be used on an existing `Arc`, so this value will 481 481 // not be constructed in an `Arc` that already has a `ListArc`. 482 482 Self { 483 - inner: AtomicBool::new(false), 483 + inner: Atomic::new(false), 484 484 _pin: PhantomPinned, 485 485 } 486 486 } 487 487 488 - fn project_inner(self: Pin<&mut Self>) -> &mut AtomicBool { 488 + fn project_inner(self: Pin<&mut Self>) -> &mut Atomic<bool> { 489 489 // SAFETY: The `inner` field is not structurally pinned, so we may obtain a mutable 490 490 // reference to it even if we only have a pinned reference to `self`. 491 491 unsafe { &mut Pin::into_inner_unchecked(self).inner } ··· 500 500 501 501 unsafe fn on_drop_list_arc(&self) { 502 502 // INVARIANT: We just dropped a ListArc, so the boolean should be false. 503 - self.inner.store(false, Ordering::Release); 503 + self.inner.store(false, ordering::Release); 504 504 } 505 505 } 506 506 ··· 514 514 fn try_new_list_arc(&self) -> bool { 515 515 // INVARIANT: If this method returns true, then the boolean used to be false, and is no 516 516 // longer false, so it is okay for the caller to create a new [`ListArc`]. 517 - self.inner 518 - .compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed) 519 - .is_ok() 517 + self.inner.cmpxchg(false, true, ordering::Acquire).is_ok() 520 518 } 521 519 }
+55 -18
rust/kernel/sync.rs
··· 32 32 pub use refcount::Refcount; 33 33 pub use set_once::SetOnce; 34 34 35 - /// Represents a lockdep class. It's a wrapper around C's `lock_class_key`. 35 + /// Represents a lockdep class. 36 + /// 37 + /// Wraps the kernel's `struct lock_class_key`. 36 38 #[repr(transparent)] 37 39 #[pin_data(PinnedDrop)] 38 40 pub struct LockClassKey { ··· 42 40 inner: Opaque<bindings::lock_class_key>, 43 41 } 44 42 43 + // SAFETY: Unregistering a lock class key from a different thread than where it was registered is 44 + // allowed. 45 + unsafe impl Send for LockClassKey {} 46 + 45 47 // SAFETY: `bindings::lock_class_key` is designed to be used concurrently from multiple threads and 46 48 // provides its own synchronization. 47 49 unsafe impl Sync for LockClassKey {} 48 50 49 51 impl LockClassKey { 50 - /// Initializes a dynamically allocated lock class key. In the common case of using a 51 - /// statically allocated lock class key, the static_lock_class! macro should be used instead. 52 + /// Initializes a statically allocated lock class key. 53 + /// 54 + /// This is usually used indirectly through the [`static_lock_class!`] macro. See its 55 + /// documentation for more information. 56 + /// 57 + /// # Safety 58 + /// 59 + /// * Before using the returned value, it must be pinned in a static memory location. 60 + /// * The destructor must never run on the returned `LockClassKey`. 61 + pub const unsafe fn new_static() -> Self { 62 + LockClassKey { 63 + inner: Opaque::uninit(), 64 + } 65 + } 66 + 67 + /// Initializes a dynamically allocated lock class key. 68 + /// 69 + /// In the common case of using a statically allocated lock class key, the 70 + /// [`static_lock_class!`] macro should be used instead. 52 71 /// 53 72 /// # Examples 73 + /// 54 74 /// ``` 55 - /// # use kernel::alloc::KBox; 56 - /// # use kernel::types::ForeignOwnable; 57 - /// # use kernel::sync::{LockClassKey, SpinLock}; 58 - /// # use pin_init::stack_pin_init; 75 + /// use kernel::alloc::KBox; 76 + /// use kernel::types::ForeignOwnable; 77 + /// use kernel::sync::{LockClassKey, SpinLock}; 78 + /// use pin_init::stack_pin_init; 59 79 /// 60 80 /// let key = KBox::pin_init(LockClassKey::new_dynamic(), GFP_KERNEL)?; 61 81 /// let key_ptr = key.into_foreign(); ··· 95 71 /// // SAFETY: We dropped `num`, the only use of the key, so the result of the previous 96 72 /// // `borrow` has also been dropped. Thus, it's safe to use from_foreign. 97 73 /// unsafe { drop(<Pin<KBox<LockClassKey>> as ForeignOwnable>::from_foreign(key_ptr)) }; 98 - /// 99 74 /// # Ok::<(), Error>(()) 100 75 /// ``` 101 76 pub fn new_dynamic() -> impl PinInit<Self> { ··· 104 81 }) 105 82 } 106 83 107 - pub(crate) fn as_ptr(&self) -> *mut bindings::lock_class_key { 84 + /// Returns a raw pointer to the inner C struct. 85 + /// 86 + /// It is up to the caller to use the raw pointer correctly. 87 + pub fn as_ptr(&self) -> *mut bindings::lock_class_key { 108 88 self.inner.get() 109 89 } 110 90 } ··· 115 89 #[pinned_drop] 116 90 impl PinnedDrop for LockClassKey { 117 91 fn drop(self: Pin<&mut Self>) { 118 - // SAFETY: self.as_ptr was registered with lockdep and self is pinned, so the address 119 - // hasn't changed. Thus, it's safe to pass to unregister. 92 + // SAFETY: `self.as_ptr()` was registered with lockdep and `self` is pinned, so the address 93 + // hasn't changed. Thus, it's safe to pass it to unregister. 
120 94 unsafe { bindings::lockdep_unregister_key(self.as_ptr()) } 121 95 } 122 96 } 123 97 124 98 /// Defines a new static lock class and returns a pointer to it. 125 - #[doc(hidden)] 99 + /// 100 + /// # Examples 101 + /// 102 + /// ``` 103 + /// use kernel::sync::{static_lock_class, Arc, SpinLock}; 104 + /// 105 + /// fn new_locked_int() -> Result<Arc<SpinLock<u32>>> { 106 + /// Arc::pin_init(SpinLock::new( 107 + /// 42, 108 + /// c"new_locked_int", 109 + /// static_lock_class!(), 110 + /// ), GFP_KERNEL) 111 + /// } 112 + /// ``` 126 113 #[macro_export] 127 114 macro_rules! static_lock_class { 128 115 () => {{ 129 116 static CLASS: $crate::sync::LockClassKey = 130 - // Lockdep expects uninitialized memory when it's handed a statically allocated `struct 131 - // lock_class_key`. 132 - // 133 - // SAFETY: `LockClassKey` transparently wraps `Opaque` which permits uninitialized 134 - // memory. 135 - unsafe { ::core::mem::MaybeUninit::uninit().assume_init() }; 117 + // SAFETY: The returned `LockClassKey` is stored in static memory and we pin it. Drop 118 + // never runs on a static global. 119 + unsafe { $crate::sync::LockClassKey::new_static() }; 136 120 $crate::prelude::Pin::static_ref(&CLASS) 137 121 }}; 138 122 } 123 + pub use static_lock_class; 139 124 140 125 /// Returns the given string, if one is provided, otherwise generates one based on the source code 141 126 /// location.
+3
rust/kernel/sync/aref.rs
··· 83 83 // example, when the reference count reaches zero and `T` is dropped. 84 84 unsafe impl<T: AlwaysRefCounted + Sync + Send> Sync for ARef<T> {} 85 85 86 + // Even if `T` is pinned, pointers to `T` can still move. 87 + impl<T: AlwaysRefCounted> Unpin for ARef<T> {} 88 + 86 89 impl<T: AlwaysRefCounted> ARef<T> { 87 90 /// Creates a new instance of [`ARef`]. 88 91 ///
+90 -24
rust/kernel/sync/atomic/internal.rs
··· 13 13 pub trait Sealed {} 14 14 } 15 15 16 - // `i32` and `i64` are only supported atomic implementations. 16 + // The C side supports atomic primitives only for `i32` and `i64` (`atomic_t` and `atomic64_t`), 17 + // while the Rust side additionally provides atomic support for `i8` and `i16` 18 + // on top of lower-level C primitives. 19 + impl private::Sealed for i8 {} 20 + impl private::Sealed for i16 {} 17 21 impl private::Sealed for i32 {} 18 22 impl private::Sealed for i64 {} 19 23 20 24 /// A marker trait for types that implement atomic operations with C side primitives. 21 25 /// 22 - /// This trait is sealed, and only types that have directly mapping to the C side atomics should 23 - /// impl this: 26 + /// This trait is sealed, and only types that map directly to the C side atomics 27 + /// or can be implemented with lower-level C primitives are allowed to implement this: 24 28 /// 25 - /// - `i32` maps to `atomic_t`. 26 - /// - `i64` maps to `atomic64_t`. 29 + /// - `i8` and `i16` are implemented with lower-level C primitives. 30 + /// - `i32` maps to `atomic_t`. 31 + /// - `i64` maps to `atomic64_t`. 27 32 pub trait AtomicImpl: Sized + Send + Copy + private::Sealed { 28 33 /// The type of the delta in arithmetic or logical operations. 29 34 /// 30 35 /// For example, in `atomic_add(ptr, v)`, it's the type of `v`. Usually it's the same type of 31 36 /// [`Self`], but it may be different for the atomic pointer type. 32 37 type Delta; 38 + } 39 + 40 + // The current load/store helpers use `{WRITE,READ}_ONCE()`, hence atomicity is only 41 + // guaranteed against read-modify-write operations if the architecture supports native atomic RmW. 42 + #[cfg(CONFIG_ARCH_SUPPORTS_ATOMIC_RMW)] 43 + impl AtomicImpl for i8 { 44 + type Delta = Self; 45 + } 46 + 47 + // The current load/store helpers use `{WRITE,READ}_ONCE()`, hence atomicity is only 48 + // guaranteed against read-modify-write operations if the architecture supports native atomic RmW. 49 + #[cfg(CONFIG_ARCH_SUPPORTS_ATOMIC_RMW)] 50 + impl AtomicImpl for i16 { 51 + type Delta = Self; 33 52 } 34 53 35 54 // `atomic_t` implements atomic operations on `i32`. ··· 175 156 } 176 157 } 177 158 178 - // Delcares $ops trait with methods and implements the trait for `i32` and `i64`. 179 - macro_rules! declare_and_impl_atomic_methods { 180 - ($(#[$attr:meta])* $pub:vis trait $ops:ident { 181 - $( 182 - $(#[doc=$doc:expr])* 183 - fn $func:ident [$($variant:ident),*]($($arg_sig:tt)*) $( -> $ret:ty)? { 184 - $unsafe:tt { bindings::#call($($arg:tt)*) } 185 - } 186 - )* 187 - }) => { 159 + macro_rules! declare_atomic_ops_trait { 160 + ( 161 + $(#[$attr:meta])* $pub:vis trait $ops:ident { 162 + $( 163 + $(#[doc=$doc:expr])* 164 + fn $func:ident [$($variant:ident),*]($($arg_sig:tt)*) $( -> $ret:ty)? { 165 + $unsafe:tt { bindings::#call($($arg:tt)*) } 166 + } 167 + )* 168 + } 169 + ) => { 188 170 $(#[$attr])* 189 171 $pub trait $ops: AtomicImpl { 190 172 $( ··· 195 175 ); 196 176 )* 197 177 } 178 + } 179 + } 198 180 199 181 macro_rules! impl_atomic_ops_for_one { 182 + ( 183 + $ty:ty => $ctype:ident, 184 + $(#[$attr:meta])* $pub:vis trait $ops:ident { 200 185 $( 201 - impl_atomic_method!( 202 - (atomic) $func[$($variant)*]($($arg_sig)*) $(-> $ret)? { 203 - $unsafe { call($($arg)*) } 204 - } 205 - ); 186 + $(#[doc=$doc:expr])* 187 + fn $func:ident [$($variant:ident),*]($($arg_sig:tt)*) $( -> $ret:ty)? 
{ 188 + $unsafe:tt { bindings::#call($($arg:tt)*) } 189 + } 206 190 )* 207 191 } 208 - 209 - impl $ops for i64 { 192 + ) => { 193 + impl $ops for $ty { 210 194 $( 211 195 impl_atomic_method!( 212 - (atomic64) $func[$($variant)*]($($arg_sig)*) $(-> $ret)? { 196 + ($ctype) $func[$($variant)*]($($arg_sig)*) $(-> $ret)? { 213 197 $unsafe { call($($arg)*) } 214 198 } 215 199 ); ··· 222 198 } 223 199 } 224 200 201 + // Declares $ops trait with methods and implements the trait. 202 + macro_rules! declare_and_impl_atomic_methods { 203 + ( 204 + [ $($map:tt)* ] 205 + $(#[$attr:meta])* $pub:vis trait $ops:ident { $($body:tt)* } 206 + ) => { 207 + declare_and_impl_atomic_methods!( 208 + @with_ops_def 209 + [ $($map)* ] 210 + ( $(#[$attr])* $pub trait $ops { $($body)* } ) 211 + ); 212 + }; 213 + 214 + (@with_ops_def [ $($map:tt)* ] ( $($ops_def:tt)* )) => { 215 + declare_atomic_ops_trait!( $($ops_def)* ); 216 + 217 + declare_and_impl_atomic_methods!( 218 + @munch 219 + [ $($map)* ] 220 + ( $($ops_def)* ) 221 + ); 222 + }; 223 + 224 + (@munch [] ( $($ops_def:tt)* )) => {}; 225 + 226 + (@munch [ $ty:ty => $ctype:ident $(, $($rest:tt)*)? ] ( $($ops_def:tt)* )) => { 227 + impl_atomic_ops_for_one!( 228 + $ty => $ctype, 229 + $($ops_def)* 230 + ); 231 + 232 + declare_and_impl_atomic_methods!( 233 + @munch 234 + [ $($($rest)*)? ] 235 + ( $($ops_def)* ) 236 + ); 237 + }; 238 + } 239 + 225 240 declare_and_impl_atomic_methods!( 241 + [ i8 => atomic_i8, i16 => atomic_i16, i32 => atomic, i64 => atomic64 ] 226 242 /// Basic atomic operations 227 243 pub trait AtomicBasicOps { 228 244 /// Atomic read (load). ··· 280 216 ); 281 217 282 218 declare_and_impl_atomic_methods!( 219 + [ i8 => atomic_i8, i16 => atomic_i16, i32 => atomic, i64 => atomic64 ] 283 220 /// Exchange and compare-and-exchange atomic operations 284 221 pub trait AtomicExchangeOps { 285 222 /// Atomic exchange. ··· 308 243 ); 309 244 310 245 declare_and_impl_atomic_methods!( 246 + [ i32 => atomic, i64 => atomic64 ] 311 247 /// Atomic arithmetic operations 312 248 pub trait AtomicArithmeticOps { 313 249 /// Atomic add (wrapping).
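As a hedged illustration of the "lower-level C primitives" the new comments refer to (hypothetical names, not the actual generated helpers), an i8 load/store pair built on {READ,WRITE}_ONCE() could look like the sketch below. Plain loads/stores are only atomic with respect to concurrent read-modify-write operations on architectures with native atomic RMW, which is why the impls above are gated on CONFIG_ARCH_SUPPORTS_ATOMIC_RMW.

#include <linux/compiler.h>
#include <linux/types.h>

/* Illustrative sketch only; the real helpers are generated by scripts. */
static inline s8 example_atomic_i8_read(const s8 *v)
{
	return READ_ONCE(*v);	/* tear-free load, not an RMW operation */
}

static inline void example_atomic_i8_set(s8 *v, s8 i)
{
	WRITE_ONCE(*v, i);	/* tear-free store, not an RMW operation */
}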
+52 -3
rust/kernel/sync/atomic/predefine.rs
··· 5 5 use crate::static_assert; 6 6 use core::mem::{align_of, size_of}; 7 7 8 + // Ensure size and alignment requirements are checked. 9 + static_assert!(size_of::<bool>() == size_of::<i8>()); 10 + static_assert!(align_of::<bool>() == align_of::<i8>()); 11 + 12 + // SAFETY: `bool` has the same size and alignment as `i8`, and Rust guarantees that `bool` has 13 + // only two valid bit patterns: 0 (false) and 1 (true). Those are valid `i8` values, so `bool` is 14 + // round-trip transmutable to `i8`. 15 + unsafe impl super::AtomicType for bool { 16 + type Repr = i8; 17 + } 18 + 19 + // SAFETY: `i8` has the same size and alignment as itself, and is round-trip transmutable to 20 + // itself. 21 + unsafe impl super::AtomicType for i8 { 22 + type Repr = i8; 23 + } 24 + 25 + // SAFETY: `i16` has the same size and alignment as itself, and is round-trip transmutable to 26 + // itself. 27 + unsafe impl super::AtomicType for i16 { 28 + type Repr = i16; 29 + } 30 + 8 31 // SAFETY: `i32` has the same size and alignment with itself, and is round-trip transmutable to 9 32 // itself. 10 33 unsafe impl super::AtomicType for i32 { ··· 152 129 153 130 #[test] 154 131 fn atomic_basic_tests() { 155 - for_each_type!(42 in [i32, i64, u32, u64, isize, usize] |v| { 132 + for_each_type!(42 in [i8, i16, i32, i64, u32, u64, isize, usize] |v| { 156 133 let x = Atomic::new(v); 157 134 158 135 assert_eq!(v, x.load(Relaxed)); ··· 160 137 } 161 138 162 139 #[test] 140 + fn atomic_acquire_release_tests() { 141 + for_each_type!(42 in [i8, i16, i32, i64, u32, u64, isize, usize] |v| { 142 + let x = Atomic::new(0); 143 + 144 + x.store(v, Release); 145 + assert_eq!(v, x.load(Acquire)); 146 + }); 147 + } 148 + 149 + #[test] 163 150 fn atomic_xchg_tests() { 164 - for_each_type!(42 in [i32, i64, u32, u64, isize, usize] |v| { 151 + for_each_type!(42 in [i8, i16, i32, i64, u32, u64, isize, usize] |v| { 165 152 let x = Atomic::new(v); 166 153 167 154 let old = v; ··· 184 151 185 152 #[test] 186 153 fn atomic_cmpxchg_tests() { 187 - for_each_type!(42 in [i32, i64, u32, u64, isize, usize] |v| { 154 + for_each_type!(42 in [i8, i16, i32, i64, u32, u64, isize, usize] |v| { 188 155 let x = Atomic::new(v); 189 156 190 157 let old = v; ··· 209 176 210 177 assert_eq!(v + 25, x.load(Relaxed)); 211 178 }); 179 + } 180 + 181 + #[test] 182 + fn atomic_bool_tests() { 183 + let x = Atomic::new(false); 184 + 185 + assert_eq!(false, x.load(Relaxed)); 186 + x.store(true, Relaxed); 187 + assert_eq!(true, x.load(Relaxed)); 188 + 189 + assert_eq!(true, x.xchg(false, Relaxed)); 190 + assert_eq!(false, x.load(Relaxed)); 191 + 192 + assert_eq!(Err(false), x.cmpxchg(true, true, Relaxed)); 193 + assert_eq!(false, x.load(Relaxed)); 194 + assert_eq!(Ok(false), x.cmpxchg(false, true, Full)); 212 195 } 213 196 }
+7
rust/kernel/sync/lock.rs
··· 156 156 /// the whole lifetime of `'a`. 157 157 /// 158 158 /// [`State`]: Backend::State 159 + #[inline] 159 160 pub unsafe fn from_raw<'a>(ptr: *mut B::State) -> &'a Self { 160 161 // SAFETY: 161 162 // - By the safety contract `ptr` must point to a valid initialised instance of `B::State` ··· 170 169 171 170 impl<T: ?Sized, B: Backend> Lock<T, B> { 172 171 /// Acquires the lock and gives the caller access to the data protected by it. 172 + #[inline] 173 173 pub fn lock(&self) -> Guard<'_, T, B> { 174 174 // SAFETY: The constructor of the type calls `init`, so the existence of the object proves 175 175 // that `init` was called. ··· 184 182 /// Returns a guard that can be used to access the data protected by the lock if successful. 185 183 // `Option<T>` is not `#[must_use]` even if `T` is, thus the attribute is needed here. 186 184 #[must_use = "if unused, the lock will be immediately unlocked"] 185 + #[inline] 187 186 pub fn try_lock(&self) -> Option<Guard<'_, T, B>> { 188 187 // SAFETY: The constructor of the type calls `init`, so the existence of the object proves 189 188 // that `init` was called. ··· 278 275 impl<T: ?Sized, B: Backend> core::ops::Deref for Guard<'_, T, B> { 279 276 type Target = T; 280 277 278 + #[inline] 281 279 fn deref(&self) -> &Self::Target { 282 280 // SAFETY: The caller owns the lock, so it is safe to deref the protected data. 283 281 unsafe { &*self.lock.data.get() } ··· 289 285 where 290 286 T: Unpin, 291 287 { 288 + #[inline] 292 289 fn deref_mut(&mut self) -> &mut Self::Target { 293 290 // SAFETY: The caller owns the lock, so it is safe to deref the protected data. 294 291 unsafe { &mut *self.lock.data.get() } ··· 297 292 } 298 293 299 294 impl<T: ?Sized, B: Backend> Drop for Guard<'_, T, B> { 295 + #[inline] 300 296 fn drop(&mut self) { 301 297 // SAFETY: The caller owns the lock, so it is safe to unlock it. 302 298 unsafe { B::unlock(self.lock.state.get(), &self.state) }; ··· 310 304 /// # Safety 311 305 /// 312 306 /// The caller must ensure that it owns the lock. 307 + #[inline] 313 308 pub unsafe fn new(lock: &'a Lock<T, B>, state: B::GuardState) -> Self { 314 309 // SAFETY: The caller can only hold the lock if `Backend::init` has already been called. 315 310 unsafe { B::assert_is_held(lock.state.get()) };
+2
rust/kernel/sync/lock/global.rs
··· 77 77 } 78 78 79 79 /// Lock this global lock. 80 + #[inline] 80 81 pub fn lock(&'static self) -> GlobalGuard<B> { 81 82 GlobalGuard { 82 83 inner: self.inner.lock(), ··· 85 84 } 86 85 87 86 /// Try to lock this global lock. 87 + #[inline] 88 88 pub fn try_lock(&'static self) -> Option<GlobalGuard<B>> { 89 89 Some(GlobalGuard { 90 90 inner: self.inner.try_lock()?,
+5
rust/kernel/sync/lock/mutex.rs
··· 102 102 type State = bindings::mutex; 103 103 type GuardState = (); 104 104 105 + #[inline] 105 106 unsafe fn init( 106 107 ptr: *mut Self::State, 107 108 name: *const crate::ffi::c_char, ··· 113 112 unsafe { bindings::__mutex_init(ptr, name, key) } 114 113 } 115 114 115 + #[inline] 116 116 unsafe fn lock(ptr: *mut Self::State) -> Self::GuardState { 117 117 // SAFETY: The safety requirements of this function ensure that `ptr` points to valid 118 118 // memory, and that it has been initialised before. 119 119 unsafe { bindings::mutex_lock(ptr) }; 120 120 } 121 121 122 + #[inline] 122 123 unsafe fn unlock(ptr: *mut Self::State, _guard_state: &Self::GuardState) { 123 124 // SAFETY: The safety requirements of this function ensure that `ptr` is valid and that the 124 125 // caller is the owner of the mutex. 125 126 unsafe { bindings::mutex_unlock(ptr) }; 126 127 } 127 128 129 + #[inline] 128 130 unsafe fn try_lock(ptr: *mut Self::State) -> Option<Self::GuardState> { 129 131 // SAFETY: The `ptr` pointer is guaranteed to be valid and initialized before use. 130 132 let result = unsafe { bindings::mutex_trylock(ptr) }; ··· 139 135 } 140 136 } 141 137 138 + #[inline] 142 139 unsafe fn assert_is_held(ptr: *mut Self::State) { 143 140 // SAFETY: The `ptr` pointer is guaranteed to be valid and initialized before use. 144 141 unsafe { bindings::mutex_assert_is_held(ptr) }
+5
rust/kernel/sync/lock/spinlock.rs
··· 101 101 type State = bindings::spinlock_t; 102 102 type GuardState = (); 103 103 104 + #[inline] 104 105 unsafe fn init( 105 106 ptr: *mut Self::State, 106 107 name: *const crate::ffi::c_char, ··· 112 111 unsafe { bindings::__spin_lock_init(ptr, name, key) } 113 112 } 114 113 114 + #[inline] 115 115 unsafe fn lock(ptr: *mut Self::State) -> Self::GuardState { 116 116 // SAFETY: The safety requirements of this function ensure that `ptr` points to valid 117 117 // memory, and that it has been initialised before. 118 118 unsafe { bindings::spin_lock(ptr) } 119 119 } 120 120 121 + #[inline] 121 122 unsafe fn unlock(ptr: *mut Self::State, _guard_state: &Self::GuardState) { 122 123 // SAFETY: The safety requirements of this function ensure that `ptr` is valid and that the 123 124 // caller is the owner of the spinlock. 124 125 unsafe { bindings::spin_unlock(ptr) } 125 126 } 126 127 128 + #[inline] 127 129 unsafe fn try_lock(ptr: *mut Self::State) -> Option<Self::GuardState> { 128 130 // SAFETY: The `ptr` pointer is guaranteed to be valid and initialized before use. 129 131 let result = unsafe { bindings::spin_trylock(ptr) }; ··· 138 134 } 139 135 } 140 136 137 + #[inline] 141 138 unsafe fn assert_is_held(ptr: *mut Self::State) { 142 139 // SAFETY: The `ptr` pointer is guaranteed to be valid and initialized before use. 143 140 unsafe { bindings::spin_assert_is_held(ptr) }
+8
rust/kernel/sync/set_once.rs
··· 123 123 } 124 124 } 125 125 } 126 + 127 + // SAFETY: `SetOnce` can be transferred across thread boundaries iff the data it contains can. 128 + unsafe impl<T: Send> Send for SetOnce<T> {} 129 + 130 + // SAFETY: `SetOnce` synchronises access to the inner value via atomic operations, 131 + // so shared references are safe when `T: Sync`. Since the inner `T` may be dropped 132 + // on any thread, we also require `T: Send`. 133 + unsafe impl<T: Send + Sync> Sync for SetOnce<T> {}
+11
scripts/Makefile.context-analysis
··· 1 + # SPDX-License-Identifier: GPL-2.0 2 + 3 + context-analysis-cflags := -DWARN_CONTEXT_ANALYSIS \ 4 + -fexperimental-late-parse-attributes -Wthread-safety \ 5 + -Wthread-safety-pointer -Wthread-safety-beta 6 + 7 + ifndef CONFIG_WARN_CONTEXT_ANALYSIS_ALL 8 + context-analysis-cflags += --warning-suppression-mappings=$(srctree)/scripts/context-analysis-suppression.txt 9 + endif 10 + 11 + export CFLAGS_CONTEXT_ANALYSIS := $(context-analysis-cflags)
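For a feel of what these -Wthread-safety flags check once a translation unit opts in, here is a hedged C sketch with hypothetical names; the exact diagnostic text depends on the Clang version.

#include <linux/mutex.h>

static DEFINE_MUTEX(example_lock);
static int example_count __guarded_by(&example_lock);

/* The annotation promises that every caller holds example_lock. */
static void example_inc(void) __must_hold(&example_lock)
{
	example_count++;	/* OK: guarded access inside the declared context */
}

static void example_caller(void)
{
	example_inc();		/* clang-22 would warn: example_lock is not held */
}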
+10
scripts/Makefile.lib
··· 106 106 endif 107 107 108 108 # 109 + # Enable context analysis flags only where explicitly opted in. 110 + # (depends on variables CONTEXT_ANALYSIS_obj.o, CONTEXT_ANALYSIS) 111 + # 112 + ifeq ($(CONFIG_WARN_CONTEXT_ANALYSIS),y) 113 + _c_flags += $(if $(patsubst n%,, \ 114 + $(CONTEXT_ANALYSIS_$(target-stem).o)$(CONTEXT_ANALYSIS)$(if $(is-kernel-object),$(CONFIG_WARN_CONTEXT_ANALYSIS_ALL))), \ 115 + $(CFLAGS_CONTEXT_ANALYSIS)) 116 + endif 117 + 118 + # 109 119 # Enable AutoFDO build flags except some files or directories we don't want to 110 120 # enable (depends on variables AUTOFDO_PROFILE_obj.o and AUTOFDO_PROFILE). 111 121 #
-5
scripts/atomic/gen-rust-atomic-helpers.sh
··· 47 47 48 48 #include <linux/atomic.h> 49 49 50 - // TODO: Remove this after INLINE_HELPERS support is added. 51 - #ifndef __rust_helper 52 - #define __rust_helper 53 - #endif 54 - 55 50 EOF 56 51 57 52 grep '^[a-z]' "$1" | while read name meta args; do
+1 -1
scripts/atomic/kerneldoc/try_cmpxchg
··· 11 11 * 12 12 * ${desc_noinstr} 13 13 * 14 - * Return: @true if the exchange occured, @false otherwise. 14 + * Return: @true if the exchange occurred, @false otherwise. 15 15 */ 16 16 EOF
+7
scripts/checkpatch.pl
··· 6735 6735 } 6736 6736 } 6737 6737 6738 + # check for context_unsafe without a comment. 6739 + if ($line =~ /\bcontext_unsafe\b/ && 6740 + !ctx_has_comment($first_line, $linenr)) { 6741 + WARN("CONTEXT_UNSAFE", 6742 + "context_unsafe without comment\n" . $herecurr); 6743 + } 6744 + 6738 6745 # check of hardware specific defines 6739 6746 if ($line =~ m@^.\s*\#\s*if.*\b(__i386__|__powerpc64__|__sun__|__s390x__)\b@ && $realfile !~ m@include/asm-@) { 6740 6747 CHK("ARCH_DEFINES",
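A hedged sketch of the pattern this new rule enforces (the object and its fields are hypothetical): the justification comment must accompany the context_unsafe() region.

/* obj is freshly allocated and not yet published; no concurrent users. */
context_unsafe(
	obj->read_buf = NULL;
	obj->write_buf = NULL;
);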
+33
scripts/context-analysis-suppression.txt
··· 1 + # SPDX-License-Identifier: GPL-2.0 2 + # 3 + # The suppressions file should only match common paths such as header files. 4 + # For individual subsystems use Makefile directive CONTEXT_ANALYSIS := [yn]. 5 + # 6 + # The suppressions are ignored when CONFIG_WARN_CONTEXT_ANALYSIS_ALL is 7 + # selected. 8 + 9 + [thread-safety] 10 + src:*arch/*/include/* 11 + src:*include/acpi/* 12 + src:*include/asm-generic/* 13 + src:*include/linux/* 14 + src:*include/net/* 15 + 16 + # Opt-in headers: 17 + src:*include/linux/bit_spinlock.h=emit 18 + src:*include/linux/cleanup.h=emit 19 + src:*include/linux/kref.h=emit 20 + src:*include/linux/list*.h=emit 21 + src:*include/linux/local_lock*.h=emit 22 + src:*include/linux/lockdep.h=emit 23 + src:*include/linux/mutex*.h=emit 24 + src:*include/linux/rcupdate.h=emit 25 + src:*include/linux/refcount.h=emit 26 + src:*include/linux/rhashtable.h=emit 27 + src:*include/linux/rwlock*.h=emit 28 + src:*include/linux/rwsem.h=emit 29 + src:*include/linux/sched*=emit 30 + src:*include/linux/seqlock*.h=emit 31 + src:*include/linux/spinlock*.h=emit 32 + src:*include/linux/srcu*.h=emit 33 + src:*include/linux/ww_mutex.h=emit
+1
scripts/tags.sh
··· 221 221 '/^\<DEFINE_GUARD_COND(\([[:alnum:]_]\+\),[[:space:]]*\([[:alnum:]_]\+\)/class_\1\2/' 222 222 '/^\<DEFINE_LOCK_GUARD_[[:digit:]](\([[:alnum:]_]\+\)/class_\1/' 223 223 '/^\<DEFINE_LOCK_GUARD_[[:digit:]]_COND(\([[:alnum:]_]\+\),[[:space:]]*\([[:alnum:]_]\+\)/class_\1\2/' 224 + '/^context_lock_struct(\([^,)]*\)[^)]*)/struct \1/' 224 225 ) 225 226 regex_kconfig=( 226 227 '/^[[:blank:]]*\(menu\|\)config[[:blank:]]\+\([[:alnum:]_]\+\)/\2/'
+2
security/tomoyo/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 + CONTEXT_ANALYSIS := y 3 + 2 4 obj-y = audit.o common.o condition.o domain.o environ.o file.o gc.o group.o load_policy.o memory.o mount.o network.o realpath.o securityfs_if.o tomoyo.o util.o 3 5 4 6 targets += builtin-policy.h
+49 -5
security/tomoyo/common.c
··· 268 268 */ 269 269 static void tomoyo_io_printf(struct tomoyo_io_buffer *head, const char *fmt, 270 270 ...) 271 + __must_hold(&head->io_sem) 271 272 { 272 273 va_list args; 273 274 size_t len; ··· 417 416 * 418 417 * Returns nothing. 419 418 */ 420 - static void tomoyo_print_number_union_nospace 421 - (struct tomoyo_io_buffer *head, const struct tomoyo_number_union *ptr) 419 + static void 420 + tomoyo_print_number_union_nospace(struct tomoyo_io_buffer *head, const struct tomoyo_number_union *ptr) 421 + __must_hold(&head->io_sem) 422 422 { 423 423 if (ptr->group) { 424 424 tomoyo_set_string(head, "@"); ··· 468 466 */ 469 467 static void tomoyo_print_number_union(struct tomoyo_io_buffer *head, 470 468 const struct tomoyo_number_union *ptr) 469 + __must_hold(&head->io_sem) 471 470 { 472 471 tomoyo_set_space(head); 473 472 tomoyo_print_number_union_nospace(head, ptr); ··· 667 664 * Returns 0 on success, negative value otherwise. 668 665 */ 669 666 static int tomoyo_write_profile(struct tomoyo_io_buffer *head) 667 + __must_hold(&head->io_sem) 670 668 { 671 669 char *data = head->write_buf; 672 670 unsigned int i; ··· 723 719 * Caller prints functionality's name. 724 720 */ 725 721 static void tomoyo_print_config(struct tomoyo_io_buffer *head, const u8 config) 722 + __must_hold(&head->io_sem) 726 723 { 727 724 tomoyo_io_printf(head, "={ mode=%s grant_log=%s reject_log=%s }\n", 728 725 tomoyo_mode[config & 3], ··· 739 734 * Returns nothing. 740 735 */ 741 736 static void tomoyo_read_profile(struct tomoyo_io_buffer *head) 737 + __must_hold(&head->io_sem) 742 738 { 743 739 u8 index; 744 740 struct tomoyo_policy_namespace *ns = ··· 858 852 */ 859 853 static int tomoyo_update_manager_entry(const char *manager, 860 854 const bool is_delete) 855 + __must_hold_shared(&tomoyo_ss) 861 856 { 862 857 struct tomoyo_manager e = { }; 863 858 struct tomoyo_acl_param param = { ··· 890 883 * Caller holds tomoyo_read_lock(). 891 884 */ 892 885 static int tomoyo_write_manager(struct tomoyo_io_buffer *head) 886 + __must_hold_shared(&tomoyo_ss) 887 + __must_hold(&head->io_sem) 893 888 { 894 889 char *data = head->write_buf; 895 890 ··· 910 901 * Caller holds tomoyo_read_lock(). 911 902 */ 912 903 static void tomoyo_read_manager(struct tomoyo_io_buffer *head) 904 + __must_hold_shared(&tomoyo_ss) 913 905 { 914 906 if (head->r.eof) 915 907 return; ··· 937 927 * Caller holds tomoyo_read_lock(). 938 928 */ 939 929 static bool tomoyo_manager(void) 930 + __must_hold_shared(&tomoyo_ss) 940 931 { 941 932 struct tomoyo_manager *ptr; 942 933 const char *exe; ··· 992 981 */ 993 982 static bool tomoyo_select_domain(struct tomoyo_io_buffer *head, 994 983 const char *data) 984 + __must_hold_shared(&tomoyo_ss) 985 + __must_hold(&head->io_sem) 995 986 { 996 987 unsigned int pid; 997 988 struct tomoyo_domain_info *domain = NULL; ··· 1064 1051 * Caller holds tomoyo_read_lock(). 1065 1052 */ 1066 1053 static int tomoyo_write_task(struct tomoyo_acl_param *param) 1054 + __must_hold_shared(&tomoyo_ss) 1067 1055 { 1068 1056 int error = -EINVAL; 1069 1057 ··· 1093 1079 * Caller holds tomoyo_read_lock(). 
1094 1080 */ 1095 1081 static int tomoyo_delete_domain(char *domainname) 1082 + __must_hold_shared(&tomoyo_ss) 1096 1083 { 1097 1084 struct tomoyo_domain_info *domain; 1098 1085 struct tomoyo_path_info name; ··· 1133 1118 static int tomoyo_write_domain2(struct tomoyo_policy_namespace *ns, 1134 1119 struct list_head *list, char *data, 1135 1120 const bool is_delete) 1121 + __must_hold_shared(&tomoyo_ss) 1136 1122 { 1137 1123 struct tomoyo_acl_param param = { 1138 1124 .ns = ns, ··· 1178 1162 * Caller holds tomoyo_read_lock(). 1179 1163 */ 1180 1164 static int tomoyo_write_domain(struct tomoyo_io_buffer *head) 1165 + __must_hold_shared(&tomoyo_ss) 1166 + __must_hold(&head->io_sem) 1181 1167 { 1182 1168 char *data = head->write_buf; 1183 1169 struct tomoyo_policy_namespace *ns; ··· 1241 1223 */ 1242 1224 static bool tomoyo_print_condition(struct tomoyo_io_buffer *head, 1243 1225 const struct tomoyo_condition *cond) 1226 + __must_hold(&head->io_sem) 1244 1227 { 1245 1228 switch (head->r.cond_step) { 1246 1229 case 0: ··· 1383 1364 */ 1384 1365 static void tomoyo_set_group(struct tomoyo_io_buffer *head, 1385 1366 const char *category) 1367 + __must_hold(&head->io_sem) 1386 1368 { 1387 1369 if (head->type == TOMOYO_EXCEPTIONPOLICY) { 1388 1370 tomoyo_print_namespace(head); ··· 1403 1383 */ 1404 1384 static bool tomoyo_print_entry(struct tomoyo_io_buffer *head, 1405 1385 struct tomoyo_acl_info *acl) 1386 + __must_hold(&head->io_sem) 1406 1387 { 1407 1388 const u8 acl_type = acl->type; 1408 1389 bool first = true; ··· 1609 1588 */ 1610 1589 static bool tomoyo_read_domain2(struct tomoyo_io_buffer *head, 1611 1590 struct list_head *list) 1591 + __must_hold_shared(&tomoyo_ss) 1592 + __must_hold(&head->io_sem) 1612 1593 { 1613 1594 list_for_each_cookie(head->r.acl, list) { 1614 1595 struct tomoyo_acl_info *ptr = ··· 1631 1608 * Caller holds tomoyo_read_lock(). 1632 1609 */ 1633 1610 static void tomoyo_read_domain(struct tomoyo_io_buffer *head) 1611 + __must_hold_shared(&tomoyo_ss) 1612 + __must_hold(&head->io_sem) 1634 1613 { 1635 1614 if (head->r.eof) 1636 1615 return; ··· 1711 1686 * using read()/write() interface rather than sysctl() interface. 1712 1687 */ 1713 1688 static void tomoyo_read_pid(struct tomoyo_io_buffer *head) 1689 + __must_hold(&head->io_sem) 1714 1690 { 1715 1691 char *buf = head->write_buf; 1716 1692 bool global_pid = false; ··· 1772 1746 * Caller holds tomoyo_read_lock(). 1773 1747 */ 1774 1748 static int tomoyo_write_exception(struct tomoyo_io_buffer *head) 1749 + __must_hold_shared(&tomoyo_ss) 1750 + __must_hold(&head->io_sem) 1775 1751 { 1776 1752 const bool is_delete = head->w.is_delete; 1777 1753 struct tomoyo_acl_param param = { ··· 1815 1787 * Caller holds tomoyo_read_lock(). 1816 1788 */ 1817 1789 static bool tomoyo_read_group(struct tomoyo_io_buffer *head, const int idx) 1790 + __must_hold_shared(&tomoyo_ss) 1791 + __must_hold(&head->io_sem) 1818 1792 { 1819 1793 struct tomoyo_policy_namespace *ns = 1820 1794 container_of(head->r.ns, typeof(*ns), namespace_list); ··· 1876 1846 * Caller holds tomoyo_read_lock(). 1877 1847 */ 1878 1848 static bool tomoyo_read_policy(struct tomoyo_io_buffer *head, const int idx) 1849 + __must_hold_shared(&tomoyo_ss) 1879 1850 { 1880 1851 struct tomoyo_policy_namespace *ns = 1881 1852 container_of(head->r.ns, typeof(*ns), namespace_list); ··· 1937 1906 * Caller holds tomoyo_read_lock(). 
1938 1907 */ 1939 1908 static void tomoyo_read_exception(struct tomoyo_io_buffer *head) 1909 + __must_hold_shared(&tomoyo_ss) 1910 + __must_hold(&head->io_sem) 1940 1911 { 1941 1912 struct tomoyo_policy_namespace *ns = 1942 1913 container_of(head->r.ns, typeof(*ns), namespace_list); ··· 2130 2097 * Returns nothing. 2131 2098 */ 2132 2099 static void tomoyo_add_entry(struct tomoyo_domain_info *domain, char *header) 2100 + __must_hold_shared(&tomoyo_ss) 2133 2101 { 2134 2102 char *buffer; 2135 2103 char *realpath = NULL; ··· 2335 2301 * @head: Pointer to "struct tomoyo_io_buffer". 2336 2302 */ 2337 2303 static void tomoyo_read_query(struct tomoyo_io_buffer *head) 2304 + __must_hold(&head->io_sem) 2338 2305 { 2339 2306 struct list_head *tmp; 2340 2307 unsigned int pos = 0; ··· 2397 2362 * Returns 0 on success, -EINVAL otherwise. 2398 2363 */ 2399 2364 static int tomoyo_write_answer(struct tomoyo_io_buffer *head) 2365 + __must_hold(&head->io_sem) 2400 2366 { 2401 2367 char *data = head->write_buf; 2402 2368 struct list_head *tmp; ··· 2437 2401 * Returns version information. 2438 2402 */ 2439 2403 static void tomoyo_read_version(struct tomoyo_io_buffer *head) 2404 + __must_hold(&head->io_sem) 2440 2405 { 2441 2406 if (!head->r.eof) { 2442 2407 tomoyo_io_printf(head, "2.6.0"); ··· 2486 2449 * Returns nothing. 2487 2450 */ 2488 2451 static void tomoyo_read_stat(struct tomoyo_io_buffer *head) 2452 + __must_hold(&head->io_sem) 2489 2453 { 2490 2454 u8 i; 2491 2455 unsigned int total = 0; ··· 2531 2493 * Returns 0. 2532 2494 */ 2533 2495 static int tomoyo_write_stat(struct tomoyo_io_buffer *head) 2496 + __must_hold(&head->io_sem) 2534 2497 { 2535 2498 char *data = head->write_buf; 2536 2499 u8 i; ··· 2557 2518 2558 2519 if (!head) 2559 2520 return -ENOMEM; 2560 - mutex_init(&head->io_sem); 2521 + guard(mutex_init)(&head->io_sem); 2561 2522 head->type = type; 2562 2523 switch (type) { 2563 2524 case TOMOYO_DOMAINPOLICY: ··· 2756 2717 * Caller holds tomoyo_read_lock(). 2757 2718 */ 2758 2719 static int tomoyo_parse_policy(struct tomoyo_io_buffer *head, char *line) 2720 + __must_hold_shared(&tomoyo_ss) 2721 + __must_hold(&head->io_sem) 2759 2722 { 2760 2723 /* Delete request? */ 2761 2724 head->w.is_delete = !strncmp(line, "delete ", 7); ··· 3010 2969 break; 3011 2970 *end = '\0'; 3012 2971 tomoyo_normalize_line(start); 3013 - head.write_buf = start; 3014 - tomoyo_parse_policy(&head, start); 2972 + /* head is stack-local and not shared. */ 2973 + context_unsafe( 2974 + head.write_buf = start; 2975 + tomoyo_parse_policy(&head, start); 2976 + ); 3015 2977 start = end + 1; 3016 2978 } 3017 2979 }
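The guard(mutex_init)(&head->io_sem) call above replaces a bare mutex_init(). As a hedged reading of the scoped init guard, inferred from this usage rather than a definitive description: it initialises the mutex and, for the rest of the scope, lets the analysis treat fields __guarded_by(&head->io_sem) as accessible, which suits constructors of objects that are not yet published. A hypothetical constructor following that pattern:

static struct tomoyo_io_buffer *example_open(void)
{
	struct tomoyo_io_buffer *head = kzalloc(sizeof(*head), GFP_NOFS);

	if (!head)
		return NULL;
	guard(mutex_init)(&head->io_sem);	/* init + scoped context for analysis */
	head->read_buf = NULL;			/* ok: __guarded_by(&io_sem) field */
	return head;
}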
+40 -37
security/tomoyo/common.h
··· 827 827 bool is_delete; 828 828 } w; 829 829 /* Buffer for reading. */ 830 - char *read_buf; 830 + char *read_buf __guarded_by(&io_sem); 831 831 /* Size of read buffer. */ 832 - size_t readbuf_size; 832 + size_t readbuf_size __guarded_by(&io_sem); 833 833 /* Buffer for writing. */ 834 - char *write_buf; 834 + char *write_buf __guarded_by(&io_sem); 835 835 /* Size of write buffer. */ 836 - size_t writebuf_size; 836 + size_t writebuf_size __guarded_by(&io_sem); 837 837 /* Type of this interface. */ 838 838 enum tomoyo_securityfs_interface_index type; 839 839 /* Users counter protected by tomoyo_io_buffer_list_lock. */ ··· 922 922 struct tomoyo_domain_info *old_domain_info; 923 923 }; 924 924 925 + /********** External variable definitions. **********/ 926 + 927 + extern bool tomoyo_policy_loaded; 928 + extern int tomoyo_enabled; 929 + extern const char * const tomoyo_condition_keyword 930 + [TOMOYO_MAX_CONDITION_KEYWORD]; 931 + extern const char * const tomoyo_dif[TOMOYO_MAX_DOMAIN_INFO_FLAGS]; 932 + extern const char * const tomoyo_mac_keywords[TOMOYO_MAX_MAC_INDEX 933 + + TOMOYO_MAX_MAC_CATEGORY_INDEX]; 934 + extern const char * const tomoyo_mode[TOMOYO_CONFIG_MAX_MODE]; 935 + extern const char * const tomoyo_path_keyword[TOMOYO_MAX_PATH_OPERATION]; 936 + extern const char * const tomoyo_proto_keyword[TOMOYO_SOCK_MAX]; 937 + extern const char * const tomoyo_socket_keyword[TOMOYO_MAX_NETWORK_OPERATION]; 938 + extern const u8 tomoyo_index2category[TOMOYO_MAX_MAC_INDEX]; 939 + extern const u8 tomoyo_pn2mac[TOMOYO_MAX_PATH_NUMBER_OPERATION]; 940 + extern const u8 tomoyo_pnnn2mac[TOMOYO_MAX_MKDEV_OPERATION]; 941 + extern const u8 tomoyo_pp2mac[TOMOYO_MAX_PATH2_OPERATION]; 942 + extern struct list_head tomoyo_condition_list; 943 + extern struct list_head tomoyo_domain_list; 944 + extern struct list_head tomoyo_name_list[TOMOYO_MAX_HASH]; 945 + extern struct list_head tomoyo_namespace_list; 946 + extern struct mutex tomoyo_policy_lock; 947 + extern struct srcu_struct tomoyo_ss; 948 + extern struct tomoyo_domain_info tomoyo_kernel_domain; 949 + extern struct tomoyo_policy_namespace tomoyo_kernel_namespace; 950 + extern unsigned int tomoyo_memory_quota[TOMOYO_MAX_MEMORY_STAT]; 951 + extern unsigned int tomoyo_memory_used[TOMOYO_MAX_MEMORY_STAT]; 952 + extern struct lsm_blob_sizes tomoyo_blob_sizes; 953 + 925 954 /********** Function prototypes. 
**********/ 926 955 927 956 int tomoyo_interface_init(void); ··· 1000 971 int tomoyo_check_open_permission(struct tomoyo_domain_info *domain, 1001 972 const struct path *path, const int flag); 1002 973 void tomoyo_close_control(struct tomoyo_io_buffer *head); 1003 - int tomoyo_env_perm(struct tomoyo_request_info *r, const char *env); 974 + int tomoyo_env_perm(struct tomoyo_request_info *r, const char *env) __must_hold_shared(&tomoyo_ss); 1004 975 int tomoyo_execute_permission(struct tomoyo_request_info *r, 1005 - const struct tomoyo_path_info *filename); 1006 - int tomoyo_find_next_domain(struct linux_binprm *bprm); 976 + const struct tomoyo_path_info *filename) __must_hold_shared(&tomoyo_ss); 977 + int tomoyo_find_next_domain(struct linux_binprm *bprm) __must_hold_shared(&tomoyo_ss); 1007 978 int tomoyo_get_mode(const struct tomoyo_policy_namespace *ns, const u8 profile, 1008 979 const u8 index); 1009 980 int tomoyo_init_request_info(struct tomoyo_request_info *r, ··· 1031 1002 int tomoyo_socket_sendmsg_permission(struct socket *sock, struct msghdr *msg, 1032 1003 int size); 1033 1004 int tomoyo_supervisor(struct tomoyo_request_info *r, const char *fmt, ...) 1005 + __must_hold_shared(&tomoyo_ss) 1034 1006 __printf(2, 3); 1035 1007 int tomoyo_update_domain(struct tomoyo_acl_info *new_entry, const int size, 1036 1008 struct tomoyo_acl_param *param, ··· 1091 1061 const unsigned long value, const u8 type); 1092 1062 void tomoyo_put_name_union(struct tomoyo_name_union *ptr); 1093 1063 void tomoyo_put_number_union(struct tomoyo_number_union *ptr); 1094 - void tomoyo_read_log(struct tomoyo_io_buffer *head); 1064 + void tomoyo_read_log(struct tomoyo_io_buffer *head) __must_hold(&head->io_sem); 1095 1065 void tomoyo_update_stat(const u8 index); 1096 1066 void tomoyo_warn_oom(const char *function); 1097 1067 void tomoyo_write_log(struct tomoyo_request_info *r, const char *fmt, ...) 1098 1068 __printf(2, 3); 1099 1069 void tomoyo_write_log2(struct tomoyo_request_info *r, int len, const char *fmt, 1100 1070 va_list args) __printf(3, 0); 1101 - 1102 - /********** External variable definitions. 
**********/ 1103 - 1104 - extern bool tomoyo_policy_loaded; 1105 - extern int tomoyo_enabled; 1106 - extern const char * const tomoyo_condition_keyword 1107 - [TOMOYO_MAX_CONDITION_KEYWORD]; 1108 - extern const char * const tomoyo_dif[TOMOYO_MAX_DOMAIN_INFO_FLAGS]; 1109 - extern const char * const tomoyo_mac_keywords[TOMOYO_MAX_MAC_INDEX 1110 - + TOMOYO_MAX_MAC_CATEGORY_INDEX]; 1111 - extern const char * const tomoyo_mode[TOMOYO_CONFIG_MAX_MODE]; 1112 - extern const char * const tomoyo_path_keyword[TOMOYO_MAX_PATH_OPERATION]; 1113 - extern const char * const tomoyo_proto_keyword[TOMOYO_SOCK_MAX]; 1114 - extern const char * const tomoyo_socket_keyword[TOMOYO_MAX_NETWORK_OPERATION]; 1115 - extern const u8 tomoyo_index2category[TOMOYO_MAX_MAC_INDEX]; 1116 - extern const u8 tomoyo_pn2mac[TOMOYO_MAX_PATH_NUMBER_OPERATION]; 1117 - extern const u8 tomoyo_pnnn2mac[TOMOYO_MAX_MKDEV_OPERATION]; 1118 - extern const u8 tomoyo_pp2mac[TOMOYO_MAX_PATH2_OPERATION]; 1119 - extern struct list_head tomoyo_condition_list; 1120 - extern struct list_head tomoyo_domain_list; 1121 - extern struct list_head tomoyo_name_list[TOMOYO_MAX_HASH]; 1122 - extern struct list_head tomoyo_namespace_list; 1123 - extern struct mutex tomoyo_policy_lock; 1124 - extern struct srcu_struct tomoyo_ss; 1125 - extern struct tomoyo_domain_info tomoyo_kernel_domain; 1126 - extern struct tomoyo_policy_namespace tomoyo_kernel_namespace; 1127 - extern unsigned int tomoyo_memory_quota[TOMOYO_MAX_MEMORY_STAT]; 1128 - extern unsigned int tomoyo_memory_used[TOMOYO_MAX_MEMORY_STAT]; 1129 - extern struct lsm_blob_sizes tomoyo_blob_sizes; 1130 1071 1131 1072 /********** Inlined functions. **********/ 1132 1073 ··· 1107 1106 * Returns index number for tomoyo_read_unlock(). 1108 1107 */ 1109 1108 static inline int tomoyo_read_lock(void) 1109 + __acquires_shared(&tomoyo_ss) 1110 1110 { 1111 1111 return srcu_read_lock(&tomoyo_ss); 1112 1112 } ··· 1120 1118 * Returns nothing. 1121 1119 */ 1122 1120 static inline void tomoyo_read_unlock(int idx) 1121 + __releases_shared(&tomoyo_ss) 1123 1122 { 1124 1123 srcu_read_unlock(&tomoyo_ss, idx); 1125 1124 }
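With tomoyo_read_lock()/tomoyo_read_unlock() annotated as acquiring and releasing the shared tomoyo_ss context, a caller satisfies the new __must_hold_shared(&tomoyo_ss) requirements as in this hedged sketch (the function name is hypothetical):

static void example_reader(void)
{
	const int idx = tomoyo_read_lock();	/* __acquires_shared(&tomoyo_ss) */

	/* ... call functions annotated __must_hold_shared(&tomoyo_ss) ... */

	tomoyo_read_unlock(idx);		/* __releases_shared(&tomoyo_ss) */
}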
+1
security/tomoyo/domain.c
··· 611 611 * Returns 0 on success, negative value otherwise. 612 612 */ 613 613 static int tomoyo_environ(struct tomoyo_execve *ee) 614 + __must_hold_shared(&tomoyo_ss) 614 615 { 615 616 struct tomoyo_request_info *r = &ee->r; 616 617 struct linux_binprm *bprm = ee->bprm;
+1
security/tomoyo/environ.c
··· 32 32 * Returns 0 on success, negative value otherwise. 33 33 */ 34 34 static int tomoyo_audit_env_log(struct tomoyo_request_info *r) 35 + __must_hold_shared(&tomoyo_ss) 35 36 { 36 37 return tomoyo_supervisor(r, "misc env %s\n", 37 38 r->param.environ.name->name);
+5
security/tomoyo/file.c
··· 164 164 * Returns 0 on success, negative value otherwise. 165 165 */ 166 166 static int tomoyo_audit_path_log(struct tomoyo_request_info *r) 167 + __must_hold_shared(&tomoyo_ss) 167 168 { 168 169 return tomoyo_supervisor(r, "file %s %s\n", tomoyo_path_keyword 169 170 [r->param.path.operation], ··· 179 178 * Returns 0 on success, negative value otherwise. 180 179 */ 181 180 static int tomoyo_audit_path2_log(struct tomoyo_request_info *r) 181 + __must_hold_shared(&tomoyo_ss) 182 182 { 183 183 return tomoyo_supervisor(r, "file %s %s %s\n", tomoyo_mac_keywords 184 184 [tomoyo_pp2mac[r->param.path2.operation]], ··· 195 193 * Returns 0 on success, negative value otherwise. 196 194 */ 197 195 static int tomoyo_audit_mkdev_log(struct tomoyo_request_info *r) 196 + __must_hold_shared(&tomoyo_ss) 198 197 { 199 198 return tomoyo_supervisor(r, "file %s %s 0%o %u %u\n", 200 199 tomoyo_mac_keywords ··· 213 210 * Returns 0 on success, negative value otherwise. 214 211 */ 215 212 static int tomoyo_audit_path_number_log(struct tomoyo_request_info *r) 213 + __must_hold_shared(&tomoyo_ss) 216 214 { 217 215 const u8 type = r->param.path_number.operation; 218 216 u8 radix; ··· 576 572 */ 577 573 static int tomoyo_path_permission(struct tomoyo_request_info *r, u8 operation, 578 574 const struct tomoyo_path_info *filename) 575 + __must_hold_shared(&tomoyo_ss) 579 576 { 580 577 int error; 581 578
+20 -8
security/tomoyo/gc.c
··· 23 23 tomoyo_memory_used[TOMOYO_MEMORY_POLICY] -= ksize(ptr); 24 24 kfree(ptr); 25 25 } 26 - 27 - /* The list for "struct tomoyo_io_buffer". */ 28 - static LIST_HEAD(tomoyo_io_buffer_list); 29 26 /* Lock for protecting tomoyo_io_buffer_list. */ 30 27 static DEFINE_SPINLOCK(tomoyo_io_buffer_list_lock); 28 + /* The list for "struct tomoyo_io_buffer". */ 29 + static __guarded_by(&tomoyo_io_buffer_list_lock) LIST_HEAD(tomoyo_io_buffer_list); 31 30 32 31 /** 33 32 * tomoyo_struct_used_by_io_buffer - Check whether the list element is used by /sys/kernel/security/tomoyo/ users or not. ··· 384 385 */ 385 386 static void tomoyo_try_to_gc(const enum tomoyo_policy_id type, 386 387 struct list_head *element) 388 + __must_hold(&tomoyo_policy_lock) 387 389 { 388 390 /* 389 391 * __list_del_entry() guarantees that the list element became no longer ··· 484 484 */ 485 485 static void tomoyo_collect_member(const enum tomoyo_policy_id id, 486 486 struct list_head *member_list) 487 + __must_hold(&tomoyo_policy_lock) 487 488 { 488 489 struct tomoyo_acl_head *member; 489 490 struct tomoyo_acl_head *tmp; ··· 505 504 * Returns nothing. 506 505 */ 507 506 static void tomoyo_collect_acl(struct list_head *list) 507 + __must_hold(&tomoyo_policy_lock) 508 508 { 509 509 struct tomoyo_acl_info *acl; 510 510 struct tomoyo_acl_info *tmp; ··· 629 627 if (head->users) 630 628 continue; 631 629 list_del(&head->list); 632 - kfree(head->read_buf); 633 - kfree(head->write_buf); 630 + /* Safe destruction because no users are left. */ 631 + context_unsafe( 632 + kfree(head->read_buf); 633 + kfree(head->write_buf); 634 + ); 634 635 kfree(head); 635 636 } 636 637 spin_unlock(&tomoyo_io_buffer_list_lock); ··· 661 656 head->users = 1; 662 657 list_add(&head->list, &tomoyo_io_buffer_list); 663 658 } else { 664 - is_write = head->write_buf != NULL; 659 + /* 660 + * tomoyo_write_control() can concurrently update write_buf from 661 + * a non-NULL to a new non-NULL pointer with io_sem held. 662 + */ 663 + is_write = data_race(head->write_buf != NULL); 665 664 if (!--head->users) { 666 665 list_del(&head->list); 667 - kfree(head->read_buf); 668 - kfree(head->write_buf); 666 + /* Safe destruction because no users are left. */ 667 + context_unsafe( 668 + kfree(head->read_buf); 669 + kfree(head->write_buf); 670 + ); 669 671 kfree(head); 670 672 } 671 673 }
+2
security/tomoyo/mount.c
··· 28 28 * Returns 0 on success, negative value otherwise. 29 29 */ 30 30 static int tomoyo_audit_mount_log(struct tomoyo_request_info *r) 31 + __must_hold_shared(&tomoyo_ss) 31 32 { 32 33 return tomoyo_supervisor(r, "file mount %s %s %s 0x%lX\n", 33 34 r->param.mount.dev->name, ··· 79 78 const char *dev_name, 80 79 const struct path *dir, const char *type, 81 80 unsigned long flags) 81 + __must_hold_shared(&tomoyo_ss) 82 82 { 83 83 struct tomoyo_obj_info obj = { }; 84 84 struct path path;
+3
security/tomoyo/network.c
··· 363 363 static int tomoyo_audit_net_log(struct tomoyo_request_info *r, 364 364 const char *family, const u8 protocol, 365 365 const u8 operation, const char *address) 366 + __must_hold_shared(&tomoyo_ss) 366 367 { 367 368 return tomoyo_supervisor(r, "network %s %s %s %s\n", family, 368 369 tomoyo_proto_keyword[protocol], ··· 378 377 * Returns 0 on success, negative value otherwise. 379 378 */ 380 379 static int tomoyo_audit_inet_log(struct tomoyo_request_info *r) 380 + __must_hold_shared(&tomoyo_ss) 381 381 { 382 382 char buf[128]; 383 383 int len; ··· 404 402 * Returns 0 on success, negative value otherwise. 405 403 */ 406 404 static int tomoyo_audit_unix_log(struct tomoyo_request_info *r) 405 + __must_hold_shared(&tomoyo_ss) 407 406 { 408 407 return tomoyo_audit_net_log(r, "unix", r->param.unix_network.protocol, 409 408 r->param.unix_network.operation,
+42
tools/include/linux/compiler-context-analysis.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + #ifndef _TOOLS_LINUX_COMPILER_CONTEXT_ANALYSIS_H 3 + #define _TOOLS_LINUX_COMPILER_CONTEXT_ANALYSIS_H 4 + 5 + /* 6 + * Macros and attributes for compiler-based static context analysis. 7 + * No-op stubs for tools. 8 + */ 9 + 10 + #define __guarded_by(...) 11 + #define __pt_guarded_by(...) 12 + 13 + #define context_lock_struct(name, ...) struct __VA_ARGS__ name 14 + 15 + #define __no_context_analysis 16 + #define __context_unsafe(comment) 17 + #define context_unsafe(...) ({ __VA_ARGS__; }) 18 + #define context_unsafe_alias(p) 19 + #define disable_context_analysis() 20 + #define enable_context_analysis() 21 + 22 + #define __must_hold(...) 23 + #define __must_not_hold(...) 24 + #define __acquires(...) 25 + #define __cond_acquires(ret, x) 26 + #define __releases(...) 27 + #define __acquire(x) (void)0 28 + #define __release(x) (void)0 29 + 30 + #define __must_hold_shared(...) 31 + #define __acquires_shared(...) 32 + #define __cond_acquires_shared(ret, x) 33 + #define __releases_shared(...) 34 + #define __acquire_shared(x) (void)0 35 + #define __release_shared(x) (void)0 36 + 37 + #define __acquire_ret(call, expr) (call) 38 + #define __acquire_shared_ret(call, expr) (call) 39 + #define __acquires_ret 40 + #define __acquires_shared_ret 41 + 42 + #endif /* _TOOLS_LINUX_COMPILER_CONTEXT_ANALYSIS_H */
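Because every macro in this stub header expands to nothing (or to a plain expression), annotated code shared with tools/ keeps compiling unchanged. A hedged sketch with made-up names:

#include <stddef.h>
#include <linux/compiler-context-analysis.h>

struct example_buf {
	int lock;			/* placeholder for a real lock type */
	char *data __guarded_by(&lock);	/* expands to nothing in tools builds */
};

static inline void example_reset(struct example_buf *b) __must_hold(&b->lock)
{
	b->data = NULL;	/* checked by clang in kernel builds, ignored here */
}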
+1 -17
tools/include/linux/compiler_types.h
··· 13 13 #define __has_builtin(x) (0) 14 14 #endif 15 15 16 - #ifdef __CHECKER__ 17 - /* context/locking */ 18 - # define __must_hold(x) __attribute__((context(x,1,1))) 19 - # define __acquires(x) __attribute__((context(x,0,1))) 20 - # define __releases(x) __attribute__((context(x,1,0))) 21 - # define __acquire(x) __context__(x,1) 22 - # define __release(x) __context__(x,-1) 23 - # define __cond_lock(x,c) ((c) ? ({ __acquire(x); 1; }) : 0) 24 - #else /* __CHECKER__ */ 25 - /* context/locking */ 26 - # define __must_hold(x) 27 - # define __acquires(x) 28 - # define __releases(x) 29 - # define __acquire(x) (void)0 30 - # define __release(x) (void)0 31 - # define __cond_lock(x,c) (c) 32 - #endif /* __CHECKER__ */ 16 + #include <linux/compiler-context-analysis.h> 33 17 34 18 /* Compiler specific macros. */ 35 19 #ifdef __GNUC__
-4
tools/testing/shared/linux/kernel.h
··· 21 21 #define schedule() 22 22 #define PAGE_SHIFT 12 23 23 24 - #define __acquires(x) 25 - #define __releases(x) 26 - #define __must_hold(x) 27 - 28 24 #define EXPORT_PER_CPU_SYMBOL_GPL(x) 29 25 #endif /* _KERNEL_H */