Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

rust: sync: atomic: Add Atomic<*{mut,const} T> support

Atomic pointers are an important building block of synchronization
algorithms such as RCU, hence provide support for them.

Note that instead of relying on atomic_long or the `Atomic<usize>`
implementation, a new set of helpers (atomic_ptr_*) is introduced
specifically for atomic pointers. This is because a ptr-to-int cast
would lose the provenance of the pointer, and even though in theory a
few tricks could restore the provenance, the implementation is simpler
if C provides atomic pointers directly. The side effects of this
approach are that we don't have arithmetic and logical operations for
pointers yet, and that the current implementation only works on
ARCH_SUPPORTS_ATOMIC_RMW architectures; both are implementation issues
that can be addressed later.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Gary Guo <gary@garyguo.net>
Reviewed-by: FUJITA Tomonori <fujita.tomonori@gmail.com>
Link: https://patch.msgid.link/20260120140503.62804-3-boqun.feng@gmail.com
Link: https://patch.msgid.link/20260303201701.12204-8-boqun@kernel.org
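To make the provenance argument concrete, here is a small standalone sketch in userspace Rust. It uses std's `AtomicPtr` (an analogue of, not the kernel's, `Atomic<*mut T>` / atomic_ptr_* helpers) to show the property the patch preserves: an atomic that stores a pointer as a pointer yields values that keep their provenance and can be dereferenced, which a ptr-to-int round trip through an integer atomic would not guarantee.

```rust
use std::sync::atomic::{AtomicPtr, Ordering};

fn main() {
    let mut v = 42i32;
    // The atomic stores the pointer as a pointer, so the value read back
    // keeps the provenance of `&mut v` and may be dereferenced.
    let p = AtomicPtr::new(&mut v as *mut i32);
    let loaded = p.load(Ordering::Relaxed);
    assert_eq!(unsafe { *loaded }, 42);

    // xchg-style operation: swap returns the previous pointer value.
    let mut u = 7i32;
    let old = p.swap(&mut u as *mut i32, Ordering::AcqRel);
    assert_eq!(old, loaded);
    assert_eq!(unsafe { *p.load(Ordering::Relaxed) }, 7);
}
```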

Authored by Boqun Feng, committed by Peter Zijlstra
ac8f06ad 553c02fb

+75 -10

rust/helpers/atomic_ext.c (+3)
···
 GEN_READ_SET_HELPERS(i8, s8)
 GEN_READ_SET_HELPERS(i16, s16)
+GEN_READ_SET_HELPERS(ptr, const void *)
 
 /*
  * xchg helpers depend on ARCH_SUPPORTS_ATOMIC_RMW and on the
···
 GEN_XCHG_HELPERS(i8, s8)
 GEN_XCHG_HELPERS(i16, s16)
+GEN_XCHG_HELPERS(ptr, const void *)
 
 /*
  * try_cmpxchg helpers depend on ARCH_SUPPORTS_ATOMIC_RMW and on the
···
 GEN_TRY_CMPXCHG_HELPERS(i8, s8)
 GEN_TRY_CMPXCHG_HELPERS(i16, s16)
+GEN_TRY_CMPXCHG_HELPERS(ptr, const void *)
rust/kernel/sync/atomic.rs (+11 -1)
···
 #[repr(transparent)]
 pub struct Atomic<T: AtomicType>(AtomicRepr<T::Repr>);
 
+// SAFETY: `Atomic<T>` is safe to transfer between execution contexts because of the safety
+// requirement of `AtomicType`.
+unsafe impl<T: AtomicType> Send for Atomic<T> {}
+
 // SAFETY: `Atomic<T>` is safe to share among execution contexts because all accesses are atomic.
 unsafe impl<T: AtomicType> Sync for Atomic<T> {}
···
 ///
 /// - [`Self`] must have the same size and alignment as [`Self::Repr`].
 /// - [`Self`] must be [round-trip transmutable] to [`Self::Repr`].
+/// - [`Self`] must be safe to transfer between execution contexts, if it's [`Send`], this is
+///   automatically satisfied. The exception is pointer types that are even though marked as
+///   `!Send` (e.g. raw pointers and [`NonNull<T>`]) but requiring `unsafe` to do anything
+///   meaningful on them. This is because transferring pointer values between execution contexts is
+///   safe as long as the actual `unsafe` dereferencing is justified.
 ///
 /// Note that this is more relaxed than requiring the bi-directional transmutability (i.e.
 /// [`transmute()`] is always sound between `U` and `T`) because of the support for atomic
···
 /// [`transmute()`]: core::mem::transmute
 /// [round-trip transmutable]: AtomicType#round-trip-transmutability
 /// [Examples]: AtomicType#examples
-pub unsafe trait AtomicType: Sized + Send + Copy {
+/// [`NonNull<T>`]: core::ptr::NonNull
+pub unsafe trait AtomicType: Sized + Copy {
     /// The backing atomic implementation type.
     type Repr: AtomicImpl;
 }
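The `Send` reasoning in this diff can be observed in userspace Rust as well: a raw pointer is `!Send`, yet an atomic container holding one may be shared between threads, because only dereferencing the loaded value requires `unsafe`. A minimal sketch using std's `AtomicPtr` (an analogue of, not the kernel's, `Atomic<*mut T>`):

```rust
use std::sync::atomic::{AtomicPtr, Ordering};
use std::thread;

// A raw pointer is `!Send`, but an atomic holding one can be a `static`
// shared by every thread; only dereferencing the loaded value is `unsafe`.
static SHARED: AtomicPtr<i32> = AtomicPtr::new(std::ptr::null_mut());

fn main() {
    let t = thread::spawn(|| {
        // Publish a heap allocation to other execution contexts.
        SHARED.store(Box::into_raw(Box::new(5)), Ordering::Release);
    });
    t.join().unwrap();

    let p = SHARED.load(Ordering::Acquire);
    assert!(!p.is_null());
    // SAFETY: the pointer was created by `Box::into_raw` above and is only
    // dereferenced after the publishing thread has been joined.
    assert_eq!(unsafe { *p }, 5);
    unsafe { drop(Box::from_raw(p)) };
}
```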
rust/kernel/sync/atomic/internal.rs (+15 -9)
···
 use crate::bindings;
 use crate::macros::paste;
 use core::cell::UnsafeCell;
+use ffi::c_void;
 
 mod private {
     /// Sealed trait marker to disable customized impls on atomic implementation traits.
···
 }
 
 // The C side supports atomic primitives only for `i32` and `i64` (`atomic_t` and `atomic64_t`),
-// while the Rust side also layers provides atomic support for `i8` and `i16`
-// on top of lower-level C primitives.
+// while the Rust side also provides atomic support for `i8`, `i16` and `*const c_void` on top of
+// lower-level C primitives.
 impl private::Sealed for i8 {}
 impl private::Sealed for i16 {}
+impl private::Sealed for *const c_void {}
 impl private::Sealed for i32 {}
 impl private::Sealed for i64 {}
···
 /// This trait is sealed, and only types that map directly to the C side atomics
 /// or can be implemented with lower-level C primitives are allowed to implement this:
 ///
-/// - `i8` and `i16` are implemented with lower-level C primitives.
+/// - `i8`, `i16` and `*const c_void` are implemented with lower-level C primitives.
 /// - `i32` map to `atomic_t`
 /// - `i64` map to `atomic64_t`
-pub trait AtomicImpl: Sized + Send + Copy + private::Sealed {
+pub trait AtomicImpl: Sized + Copy + private::Sealed {
     /// The type of the delta in arithmetic or logical operations.
     ///
     /// For example, in `atomic_add(ptr, v)`, it's the type of `v`. Usually it's the same type of
···
     type Delta;
 }
 
-// The current helpers of load/store of atomic `i8` and `i16` use `{WRITE,READ}_ONCE()` hence the
-// atomicity is only guaranteed against read-modify-write operations if the architecture supports
-// native atomic RmW.
+// The current helpers of load/store of atomic `i8`, `i16` and pointers use `{WRITE,READ}_ONCE()`
+// hence the atomicity is only guaranteed against read-modify-write operations if the architecture
+// supports native atomic RmW.
 //
 // In the future when a CONFIG_ARCH_SUPPORTS_ATOMIC_RMW=n architecture plans to support Rust, the
 // load/store helpers that guarantee atomicity against RmW operations (usually via a lock) need to
···
 impl AtomicImpl for i16 {
     type Delta = Self;
+}
+
+impl AtomicImpl for *const c_void {
+    type Delta = isize;
 }
 
 // `atomic_t` implements atomic operations on `i32`.
···
 }
 
 declare_and_impl_atomic_methods!(
-    [ i8 => atomic_i8, i16 => atomic_i16, i32 => atomic, i64 => atomic64 ]
+    [ i8 => atomic_i8, i16 => atomic_i16, *const c_void => atomic_ptr, i32 => atomic, i64 => atomic64 ]
     /// Basic atomic operations
     pub trait AtomicBasicOps {
         /// Atomic read (load).
···
 );
 
 declare_and_impl_atomic_methods!(
-    [ i8 => atomic_i8, i16 => atomic_i16, i32 => atomic, i64 => atomic64 ]
+    [ i8 => atomic_i8, i16 => atomic_i16, *const c_void => atomic_ptr, i32 => atomic, i64 => atomic64 ]
     /// Exchange and compare-and-exchange atomic operations
     pub trait AtomicExchangeOps {
         /// Atomic exchange.
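For readers unfamiliar with the `private::Sealed` supertrait in this diff, here is a standalone userspace sketch (not the kernel code) of the sealed-trait pattern: the supertrait lives in a private module, so no type outside the defining file can implement `AtomicImpl`. The `repr_name` method is a hypothetical addition for demonstration only; the `Delta = isize` choice for pointers mirrors the patch.

```rust
// Sealed-trait sketch: `Sealed` is only reachable inside this file.
mod private {
    pub trait Sealed {}
}

trait AtomicImpl: Sized + Copy + private::Sealed {
    /// The type of the delta in arithmetic or logical operations.
    type Delta;
    /// Name of the implementing type (demonstration-only helper).
    fn repr_name() -> &'static str;
}

impl private::Sealed for i32 {}
impl AtomicImpl for i32 {
    type Delta = i32;
    fn repr_name() -> &'static str {
        "i32"
    }
}

impl private::Sealed for *const () {}
impl AtomicImpl for *const () {
    // Pointer deltas are offsets, hence `isize` as in the patch.
    type Delta = isize;
    fn repr_name() -> &'static str {
        "*const ()"
    }
}

fn main() {
    assert_eq!(<i32 as AtomicImpl>::repr_name(), "i32");
    assert_eq!(<*const () as AtomicImpl>::repr_name(), "*const ()");
}
```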
rust/kernel/sync/atomic/predefine.rs (+46)
···
 use crate::static_assert;
 use core::mem::{align_of, size_of};
+use ffi::c_void;
 
 // Ensure size and alignment requirements are checked.
 static_assert!(size_of::<bool>() == size_of::<i8>());
···
 // itself.
 unsafe impl super::AtomicType for i16 {
     type Repr = i16;
+}
+
+// SAFETY:
+//
+// - `*mut T` has the same size and alignment with `*const c_void`, and is round-trip
+//   transmutable to `*const c_void`.
+// - `*mut T` is safe to transfer between execution contexts. See the safety requirement of
+//   [`AtomicType`].
+unsafe impl<T: Sized> super::AtomicType for *mut T {
+    type Repr = *const c_void;
+}
+
+// SAFETY:
+//
+// - `*const T` has the same size and alignment with `*const c_void`, and is round-trip
+//   transmutable to `*const c_void`.
+// - `*const T` is safe to transfer between execution contexts. See the safety requirement of
+//   [`AtomicType`].
+unsafe impl<T: Sized> super::AtomicType for *const T {
+    type Repr = *const c_void;
 }
 
 // SAFETY: `i32` has the same size and alignment with itself, and is round-trip transmutable to
···
     assert_eq!(Err(false), x.cmpxchg(true, true, Relaxed));
     assert_eq!(false, x.load(Relaxed));
     assert_eq!(Ok(false), x.cmpxchg(false, true, Full));
+}
+
+#[test]
+fn atomic_ptr_tests() {
+    let mut v = 42;
+    let mut u = 43;
+    let x = Atomic::new(&raw mut v);
+
+    assert_eq!(x.load(Acquire), &raw mut v);
+    assert_eq!(x.cmpxchg(&raw mut u, &raw mut u, Relaxed), Err(&raw mut v));
+    assert_eq!(x.cmpxchg(&raw mut v, &raw mut u, Relaxed), Ok(&raw mut v));
+    assert_eq!(x.load(Relaxed), &raw mut u);
+
+    let x = Atomic::new(&raw const v);
+
+    assert_eq!(x.load(Acquire), &raw const v);
+    assert_eq!(
+        x.cmpxchg(&raw const u, &raw const u, Relaxed),
+        Err(&raw const v)
+    );
+    assert_eq!(
+        x.cmpxchg(&raw const v, &raw const u, Relaxed),
+        Ok(&raw const v)
+    );
+    assert_eq!(x.load(Relaxed), &raw const u);
 }
 }
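The new `atomic_ptr_tests` map closely onto std's `AtomicPtr` API. As a cross-check, this standalone userspace sketch reproduces the `*mut T` half of the test using `compare_exchange`, which, like the kernel's `cmpxchg` shown above, returns `Err(current)` on failure and `Ok(previous)` on success:

```rust
use std::sync::atomic::{AtomicPtr, Ordering};

fn main() {
    let mut v = 42;
    let mut u = 43;
    let x = AtomicPtr::new(&raw mut v);

    assert_eq!(x.load(Ordering::Acquire), &raw mut v);
    // Current value is `&raw mut v`, so exchanging against `&raw mut u` fails
    // and returns the current pointer.
    assert_eq!(
        x.compare_exchange(&raw mut u, &raw mut u, Ordering::Relaxed, Ordering::Relaxed),
        Err(&raw mut v)
    );
    // Matching `current` succeeds and returns the previous pointer.
    assert_eq!(
        x.compare_exchange(&raw mut v, &raw mut u, Ordering::Relaxed, Ordering::Relaxed),
        Ok(&raw mut v)
    );
    assert_eq!(x.load(Ordering::Relaxed), &raw mut u);
}
```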