Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

mm: rename VMA flag helpers to be more readable

Patch series "mm: vma flag tweaks".

The ongoing work on introducing VMA flags beyond the system word size has
added a number of helper functions and macros to make working with these
flags easier and to make conversions from legacy use of VM_xxx flags more
straightforward.

This series improves these to reduce confusion as to what they do and to
improve consistency and readability.

Firstly the series renames vma_flags_test() to vma_flags_test_any() to
make it abundantly clear that this function tests whether any of the flags
are set (as opposed to vma_flags_test_all()).

It then renames vma_desc_test_flags() to vma_desc_test_any() for the same
reason. Note that we drop the 'flags' suffix here, as
vma_desc_test_any_flags() would be cumbersome and 'test' implies a flag
test.

Similarly, we rename vma_test_all_flags() to vma_test_all() for
consistency.

Next, we have a couple of instances (erofs, zonefs) where we are now
testing for vma_desc_test_any(desc, VMA_SHARED_BIT) &&
vma_desc_test_any(desc, VMA_MAYWRITE_BIT).

This is silly, so this series introduces vma_desc_test_all() so these
callers can instead invoke vma_desc_test_all(desc, VMA_SHARED_BIT,
VMA_MAYWRITE_BIT).

We then observe that quite a few instances of vma_flags_test_any() and
vma_desc_test_any() are in fact only testing against a single flag.

Using the _any() variant here is misleading - 'any' of a single item reads
strangely and is liable to cause confusion.

So in these instances the series reintroduces vma_flags_test() and
vma_desc_test() as helpers which test against a single flag.

The fact that vma_flags_t is a struct and that vma_flag_t utilises sparse
to avoid confusion with vm_flags_t makes it impossible for a user to
misuse these helpers without it getting flagged somewhere.

The series also updates __mk_vma_flags() and functions invoked by it to
explicitly mark them always inline to match expectation and to be
consistent with other VMA flag helpers.

It also renames vma_flag_set() to vma_flags_set_flag() (a function only
used by __mk_vma_flags()) to be consistent with other VMA flag helpers.

Finally it updates the VMA tests for each of these changes, and introduces
explicit tests for vma_flags_test() and vma_desc_test() to assert that
they behave as expected.


This patch (of 6):

On reflection, it's confusing to have vma_flags_test() and
vma_desc_test_flags() test whether any of the specified VMA flag bits is
set, while vma_flags_test_all() and vma_test_all_flags() separately test
whether all of them are set.

Firstly, rename vma_flags_test() to vma_flags_test_any() to eliminate this
confusion.

Secondly, since the VMA descriptor flag functions are becoming rather
cumbersome, prefer vma_desc_test*() to vma_desc_test_flags*(), and also
rename vma_desc_test_flags() to vma_desc_test_any().

Finally, rename vma_test_all_flags() to vma_test_all() to keep the
VMA-specific helper consistent with the VMA descriptor naming convention
and to help avoid confusion vs. vma_flags_test_all().

While we're here, also update whitespace to be consistent in helper
functions.

Link: https://lkml.kernel.org/r/cover.1772704455.git.ljs@kernel.org
Link: https://lkml.kernel.org/r/0f9cb3c511c478344fac0b3b3b0300bb95be95e9.1772704455.git.ljs@kernel.org
Signed-off-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
Suggested-by: Pedro Falcato <pfalcato@suse.de>
Acked-by: David Hildenbrand (Arm) <david@kernel.org>
Reviewed-by: Pedro Falcato <pfalcato@suse.de>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Babu Moger <babu.moger@amd.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Chao Yu <chao@kernel.org>
Cc: Reinette Chatre <reinette.chatre@intel.com>
Cc: Chunhai Guo <guochunhai@vivo.com>
Cc: Damien Le Moal <dlemoal@kernel.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Jiang <dave.jiang@intel.com>
Cc: Dave Martin <dave.martin@arm.com>
Cc: Gao Xiang <xiang@kernel.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Hongbo Li <lihongbo22@huawei.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: James Morse <james.morse@arm.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Jann Horn <jannh@google.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Jeffle Xu <jefflexu@linux.alibaba.com>
Cc: Johannes Thumshirn <jth@kernel.org>
Cc: Konstantin Komarov <almaz.alexandrovich@paragon-software.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Naohiro Aota <naohiro.aota@wdc.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Sandeep Dhavale <dhavale@google.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Vishal Verma <vishal.l.verma@intel.com>
Cc: Vlastimil Babka <vbabka@kernel.org>
Cc: Yue Hu <zbestahu@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Authored by Lorenzo Stoakes (Oracle) and committed by Andrew Morton
e650bb30 caf55fef

+72 -70
+1 -1
drivers/char/mem.c
···
 #ifndef CONFIG_MMU
 	return -ENOSYS;
 #endif
-	if (vma_desc_test_flags(desc, VMA_SHARED_BIT))
+	if (vma_desc_test_any(desc, VMA_SHARED_BIT))
 		return shmem_zero_setup_desc(desc);

 	desc->action.success_hook = mmap_zero_private_success;
+1 -1
drivers/dax/device.c
···
 		return -ENXIO;

 	/* prevent private mappings from being established */
-	if (!vma_flags_test(&flags, VMA_MAYSHARE_BIT)) {
+	if (!vma_flags_test_any(&flags, VMA_MAYSHARE_BIT)) {
 		dev_info_ratelimited(dev,
 				"%s: %s: fail, attempted private mapping\n",
 				current->comm, func);
+2 -2
fs/erofs/data.c
···
 	if (!IS_DAX(file_inode(desc->file)))
 		return generic_file_readonly_mmap_prepare(desc);

-	if (vma_desc_test_flags(desc, VMA_SHARED_BIT) &&
-	    vma_desc_test_flags(desc, VMA_MAYWRITE_BIT))
+	if (vma_desc_test_any(desc, VMA_SHARED_BIT) &&
+	    vma_desc_test_any(desc, VMA_MAYWRITE_BIT))
 		return -EINVAL;

 	desc->vm_ops = &erofs_dax_vm_ops;
+1 -1
fs/hugetlbfs/inode.c
···
 		goto out;

 	ret = 0;
-	if (vma_desc_test_flags(desc, VMA_WRITE_BIT) && inode->i_size < len)
+	if (vma_desc_test_any(desc, VMA_WRITE_BIT) && inode->i_size < len)
 		i_size_write(inode, len);
 out:
 	inode_unlock(inode);
+1 -1
fs/ntfs3/file.c
···
 	struct file *file = desc->file;
 	struct inode *inode = file_inode(file);
 	struct ntfs_inode *ni = ntfs_i(inode);
-	const bool rw = vma_desc_test_flags(desc, VMA_WRITE_BIT);
+	const bool rw = vma_desc_test_any(desc, VMA_WRITE_BIT);
 	int err;

 	/* Avoid any operation if inode is bad. */
+1 -1
fs/resctrl/pseudo_lock.c
···
 	 * Ensure changes are carried directly to the memory being mapped,
 	 * do not allow copy-on-write mapping.
 	 */
-	if (!vma_desc_test_flags(desc, VMA_SHARED_BIT)) {
+	if (!vma_desc_test_any(desc, VMA_SHARED_BIT)) {
 		mutex_unlock(&rdtgroup_mutex);
 		return -EINVAL;
 	}
+2 -2
fs/zonefs/file.c
···
 	 * ordering between msync() and page cache writeback.
 	 */
 	if (zonefs_inode_is_seq(file_inode(file)) &&
-	    vma_desc_test_flags(desc, VMA_SHARED_BIT) &&
-	    vma_desc_test_flags(desc, VMA_MAYWRITE_BIT))
+	    vma_desc_test_any(desc, VMA_SHARED_BIT) &&
+	    vma_desc_test_any(desc, VMA_MAYWRITE_BIT))
 		return -EINVAL;

 	file_accessed(file);
+2 -2
include/linux/dax.h
···
 				const struct inode *inode,
 				struct dax_device *dax_dev)
 {
-	if (!vma_desc_test_flags(desc, VMA_SYNC_BIT))
+	if (!vma_desc_test_any(desc, VMA_SYNC_BIT))
 		return true;
 	if (!IS_DAX(inode))
 		return false;
···
 				const struct inode *inode,
 				struct dax_device *dax_dev)
 {
-	return !vma_desc_test_flags(desc, VMA_SYNC_BIT);
+	return !vma_desc_test_any(desc, VMA_SYNC_BIT);
 }
 static inline size_t dax_recovery_write(struct dax_device *dax_dev,
 		pgoff_t pgoff, void *addr, size_t bytes, struct iov_iter *i)
+1 -1
include/linux/hugetlb_inline.h
···
 static inline bool is_vma_hugetlb_flags(const vma_flags_t *flags)
 {
-	return vma_flags_test(flags, VMA_HUGETLB_BIT);
+	return vma_flags_test_any(flags, VMA_HUGETLB_BIT);
 }

 #else
+25 -23
include/linux/mm.h
···
 	(const vma_flag_t []){__VA_ARGS__})

 /* Test each of to_test flags in flags, non-atomically. */
-static __always_inline bool vma_flags_test_mask(const vma_flags_t *flags,
+static __always_inline bool vma_flags_test_any_mask(const vma_flags_t *flags,
 						vma_flags_t to_test)
 {
 	const unsigned long *bitmap = flags->__vma_flags;
···
 /*
  * Test whether any specified VMA flag is set, e.g.:
  *
- *   if (vma_flags_test(flags, VMA_READ_BIT, VMA_MAYREAD_BIT)) { ... }
+ *   if (vma_flags_test_any(flags, VMA_READ_BIT, VMA_MAYREAD_BIT)) { ... }
  */
-#define vma_flags_test(flags, ...) \
-	vma_flags_test_mask(flags, mk_vma_flags(__VA_ARGS__))
+#define vma_flags_test_any(flags, ...) \
+	vma_flags_test_any_mask(flags, mk_vma_flags(__VA_ARGS__))

 /* Test that ALL of the to_test flags are set, non-atomically. */
 static __always_inline bool vma_flags_test_all_mask(const vma_flags_t *flags,
···
 	vma_flags_test_all_mask(flags, mk_vma_flags(__VA_ARGS__))

 /* Set each of the to_set flags in flags, non-atomically. */
-static __always_inline void vma_flags_set_mask(vma_flags_t *flags, vma_flags_t to_set)
+static __always_inline void vma_flags_set_mask(vma_flags_t *flags,
+					       vma_flags_t to_set)
 {
 	unsigned long *bitmap = flags->__vma_flags;
 	const unsigned long *bitmap_to_set = to_set.__vma_flags;
···
 	vma_flags_set_mask(flags, mk_vma_flags(__VA_ARGS__))

 /* Clear all of the to-clear flags in flags, non-atomically. */
-static __always_inline void vma_flags_clear_mask(vma_flags_t *flags, vma_flags_t to_clear)
+static __always_inline void vma_flags_clear_mask(vma_flags_t *flags,
+						 vma_flags_t to_clear)
 {
 	unsigned long *bitmap = flags->__vma_flags;
 	const unsigned long *bitmap_to_clear = to_clear.__vma_flags;
···
  * Note: appropriate locks must be held, this function does not acquire them for
  * you.
  */
-static inline bool vma_test_all_flags_mask(const struct vm_area_struct *vma,
-					   vma_flags_t flags)
+static inline bool vma_test_all_mask(const struct vm_area_struct *vma,
+				     vma_flags_t flags)
 {
 	return vma_flags_test_all_mask(&vma->flags, flags);
 }
···
 /*
  * Helper macro for checking that ALL specified flags are set in a VMA, e.g.:
  *
- *   if (vma_test_all_flags(vma, VMA_READ_BIT, VMA_MAYREAD_BIT) { ... }
+ *   if (vma_test_all(vma, VMA_READ_BIT, VMA_MAYREAD_BIT) { ... }
  */
-#define vma_test_all_flags(vma, ...) \
-	vma_test_all_flags_mask(vma, mk_vma_flags(__VA_ARGS__))
+#define vma_test_all(vma, ...) \
+	vma_test_all_mask(vma, mk_vma_flags(__VA_ARGS__))

 /*
  * Helper to set all VMA flags in a VMA.
···
  * you.
  */
 static inline void vma_set_flags_mask(struct vm_area_struct *vma,
-				       vma_flags_t flags)
+				      vma_flags_t flags)
 {
 	vma_flags_set_mask(&vma->flags, flags);
 }
···
 	vma_set_flags_mask(vma, mk_vma_flags(__VA_ARGS__))

 /* Helper to test all VMA flags in a VMA descriptor. */
-static inline bool vma_desc_test_flags_mask(const struct vm_area_desc *desc,
-					    vma_flags_t flags)
+static inline bool vma_desc_test_any_mask(const struct vm_area_desc *desc,
+					  vma_flags_t flags)
 {
-	return vma_flags_test_mask(&desc->vma_flags, flags);
+	return vma_flags_test_any_mask(&desc->vma_flags, flags);
 }

 /*
  * Helper macro for testing VMA flags for an input pointer to a struct
  * vm_area_desc object describing a proposed VMA, e.g.:
  *
- *   if (vma_desc_test_flags(desc, VMA_IO_BIT, VMA_PFNMAP_BIT,
+ *   if (vma_desc_test_any(desc, VMA_IO_BIT, VMA_PFNMAP_BIT,
  *			VMA_DONTEXPAND_BIT, VMA_DONTDUMP_BIT)) { ... }
  */
-#define vma_desc_test_flags(desc, ...) \
-	vma_desc_test_flags_mask(desc, mk_vma_flags(__VA_ARGS__))
+#define vma_desc_test_any(desc, ...) \
+	vma_desc_test_any_mask(desc, mk_vma_flags(__VA_ARGS__))

 /* Helper to set all VMA flags in a VMA descriptor. */
 static inline void vma_desc_set_flags_mask(struct vm_area_desc *desc,
-					    vma_flags_t flags)
+					   vma_flags_t flags)
 {
 	vma_flags_set_mask(&desc->vma_flags, flags);
 }
···
 /* Helper to clear all VMA flags in a VMA descriptor. */
 static inline void vma_desc_clear_flags_mask(struct vm_area_desc *desc,
-					      vma_flags_t flags)
+					     vma_flags_t flags)
 {
 	vma_flags_clear_mask(&desc->vma_flags, flags);
 }
···
 {
 	const vma_flags_t *flags = &desc->vma_flags;

-	return vma_flags_test(flags, VMA_MAYWRITE_BIT) &&
-	       !vma_flags_test(flags, VMA_SHARED_BIT);
+	return vma_flags_test_any(flags, VMA_MAYWRITE_BIT) &&
+	       !vma_flags_test_any(flags, VMA_SHARED_BIT);
 }

 #ifndef CONFIG_MMU
···
 static inline bool is_nommu_shared_vma_flags(const vma_flags_t *flags)
 {
-	return vma_flags_test(flags, VMA_MAYSHARE_BIT, VMA_MAYOVERLAY_BIT);
+	return vma_flags_test_any(flags, VMA_MAYSHARE_BIT, VMA_MAYOVERLAY_BIT);
 }
 #endif
+7 -7
mm/hugetlb.c
···
 static void set_vma_desc_resv_map(struct vm_area_desc *desc, struct resv_map *map)
 {
 	VM_WARN_ON_ONCE(!is_vma_hugetlb_flags(&desc->vma_flags));
-	VM_WARN_ON_ONCE(vma_desc_test_flags(desc, VMA_MAYSHARE_BIT));
+	VM_WARN_ON_ONCE(vma_desc_test_any(desc, VMA_MAYSHARE_BIT));

 	desc->private_data = map;
 }
···
 static void set_vma_desc_resv_flags(struct vm_area_desc *desc, unsigned long flags)
 {
 	VM_WARN_ON_ONCE(!is_vma_hugetlb_flags(&desc->vma_flags));
-	VM_WARN_ON_ONCE(vma_desc_test_flags(desc, VMA_MAYSHARE_BIT));
+	VM_WARN_ON_ONCE(vma_desc_test_any(desc, VMA_MAYSHARE_BIT));

 	desc->private_data = (void *)((unsigned long)desc->private_data | flags);
 }
···
 	 * attempt will be made for VM_NORESERVE to allocate a page
 	 * without using reserves
 	 */
-	if (vma_flags_test(&vma_flags, VMA_NORESERVE_BIT))
+	if (vma_flags_test_any(&vma_flags, VMA_NORESERVE_BIT))
 		return 0;

 	/*
···
 	 * to reserve the full area even if read-only as mprotect() may be
 	 * called to make the mapping read-write. Assume !desc is a shm mapping
 	 */
-	if (!desc || vma_desc_test_flags(desc, VMA_MAYSHARE_BIT)) {
+	if (!desc || vma_desc_test_any(desc, VMA_MAYSHARE_BIT)) {
 		/*
 		 * resv_map can not be NULL as hugetlb_reserve_pages is only
 		 * called for inodes for which resv_maps were created (see
···
 	if (err < 0)
 		goto out_err;

-	if (desc && !vma_desc_test_flags(desc, VMA_MAYSHARE_BIT) && h_cg) {
+	if (desc && !vma_desc_test_any(desc, VMA_MAYSHARE_BIT) && h_cg) {
 		/* For private mappings, the hugetlb_cgroup uncharge info hangs
 		 * of the resv_map.
 		 */
···
 	 * consumed reservations are stored in the map. Hence, nothing
 	 * else has to be done for private mappings here
 	 */
-	if (!desc || vma_desc_test_flags(desc, VMA_MAYSHARE_BIT)) {
+	if (!desc || vma_desc_test_any(desc, VMA_MAYSHARE_BIT)) {
 		add = region_add(resv_map, from, to, regions_needed, h, h_cg);

 		if (unlikely(add < 0)) {
···
 		hugetlb_cgroup_uncharge_cgroup_rsvd(hstate_index(h),
 				chg * pages_per_huge_page(h), h_cg);
 out_err:
-	if (!desc || vma_desc_test_flags(desc, VMA_MAYSHARE_BIT))
+	if (!desc || vma_desc_test_any(desc, VMA_MAYSHARE_BIT))
 		/* Only call region_abort if the region_chg succeeded but the
 		 * region_add failed or didn't run.
 		 */
+1 -1
mm/memory.c
···
 	if (WARN_ON_ONCE(!PAGE_ALIGNED(addr)))
 		return -EINVAL;

-	VM_WARN_ON_ONCE(!vma_test_all_flags_mask(vma, VMA_REMAP_FLAGS));
+	VM_WARN_ON_ONCE(!vma_test_all_mask(vma, VMA_REMAP_FLAGS));

 	BUG_ON(addr >= end);
 	pfn -= addr >> PAGE_SHIFT;
+1 -1
mm/secretmem.c
···
 {
 	const unsigned long len = vma_desc_size(desc);

-	if (!vma_desc_test_flags(desc, VMA_SHARED_BIT, VMA_MAYSHARE_BIT))
+	if (!vma_desc_test_any(desc, VMA_SHARED_BIT, VMA_MAYSHARE_BIT))
 		return -EINVAL;

 	vma_desc_set_flags(desc, VMA_LOCKED_BIT, VMA_DONTDUMP_BIT);
+2 -2
mm/shmem.c
···
 	spin_lock_init(&info->lock);
 	atomic_set(&info->stop_eviction, 0);
 	info->seals = F_SEAL_SEAL;
-	info->flags = vma_flags_test(&flags, VMA_NORESERVE_BIT)
+	info->flags = vma_flags_test_any(&flags, VMA_NORESERVE_BIT)
 			? SHMEM_F_NORESERVE : 0;
 	info->i_crtime = inode_get_mtime(inode);
 	info->fsflags = (dir == NULL) ? 0 :
···
 				   unsigned int i_flags)
 {
 	const unsigned long shmem_flags =
-		vma_flags_test(&flags, VMA_NORESERVE_BIT) ? SHMEM_F_NORESERVE : 0;
+		vma_flags_test_any(&flags, VMA_NORESERVE_BIT) ? SHMEM_F_NORESERVE : 0;
 	struct inode *inode;
 	struct file *res;
+10 -10
tools/testing/vma/include/dup.h
···
 #define mk_vma_flags(...) __mk_vma_flags(COUNT_ARGS(__VA_ARGS__), \
 		(const vma_flag_t []){__VA_ARGS__})

-static __always_inline bool vma_flags_test_mask(const vma_flags_t *flags,
+static __always_inline bool vma_flags_test_any_mask(const vma_flags_t *flags,
 						vma_flags_t to_test)
 {
 	const unsigned long *bitmap = flags->__vma_flags;
···
 	return bitmap_intersects(bitmap_to_test, bitmap, NUM_VMA_FLAG_BITS);
 }

-#define vma_flags_test(flags, ...) \
-	vma_flags_test_mask(flags, mk_vma_flags(__VA_ARGS__))
+#define vma_flags_test_any(flags, ...) \
+	vma_flags_test_any_mask(flags, mk_vma_flags(__VA_ARGS__))

 static __always_inline bool vma_flags_test_all_mask(const vma_flags_t *flags,
 						    vma_flags_t to_test)
···
 #define vma_flags_clear(flags, ...) \
 	vma_flags_clear_mask(flags, mk_vma_flags(__VA_ARGS__))

-static inline bool vma_test_all_flags_mask(const struct vm_area_struct *vma,
+static inline bool vma_test_all_mask(const struct vm_area_struct *vma,
 					   vma_flags_t flags)
 {
 	return vma_flags_test_all_mask(&vma->flags, flags);
 }

-#define vma_test_all_flags(vma, ...) \
-	vma_test_all_flags_mask(vma, mk_vma_flags(__VA_ARGS__))
+#define vma_test_all(vma, ...) \
+	vma_test_all_mask(vma, mk_vma_flags(__VA_ARGS__))

 static inline bool is_shared_maywrite_vm_flags(vm_flags_t vm_flags)
 {
···
 #define vma_set_flags(vma, ...) \
 	vma_set_flags_mask(vma, mk_vma_flags(__VA_ARGS__))

-static inline bool vma_desc_test_flags_mask(const struct vm_area_desc *desc,
+static inline bool vma_desc_test_any_mask(const struct vm_area_desc *desc,
 					    vma_flags_t flags)
 {
-	return vma_flags_test_mask(&desc->vma_flags, flags);
+	return vma_flags_test_any_mask(&desc->vma_flags, flags);
 }

-#define vma_desc_test_flags(desc, ...) \
-	vma_desc_test_flags_mask(desc, mk_vma_flags(__VA_ARGS__))
+#define vma_desc_test_any(desc, ...) \
+	vma_desc_test_any_mask(desc, mk_vma_flags(__VA_ARGS__))

 static inline void vma_desc_set_flags_mask(struct vm_area_desc *desc,
 					   vma_flags_t flags)
+14 -14
tools/testing/vma/tests/vma.c
···
 	return true;
 }

-/* Ensure that vma_flags_test() and friends works correctly. */
-static bool test_vma_flags_test(void)
+/* Ensure that vma_flags_test_any() and friends works correctly. */
+static bool test_vma_flags_test_any(void)
 {
 	const vma_flags_t flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
 					       VMA_EXEC_BIT, 64, 65);
···
 	desc.vma_flags = flags;

 #define do_test(...) \
-	ASSERT_TRUE(vma_flags_test(&flags, __VA_ARGS__)); \
-	ASSERT_TRUE(vma_desc_test_flags(&desc, __VA_ARGS__))
+	ASSERT_TRUE(vma_flags_test_any(&flags, __VA_ARGS__)); \
+	ASSERT_TRUE(vma_desc_test_any(&desc, __VA_ARGS__))

 #define do_test_all_true(...) \
 	ASSERT_TRUE(vma_flags_test_all(&flags, __VA_ARGS__)); \
-	ASSERT_TRUE(vma_test_all_flags(&vma, __VA_ARGS__))
+	ASSERT_TRUE(vma_test_all(&vma, __VA_ARGS__))

 #define do_test_all_false(...) \
 	ASSERT_FALSE(vma_flags_test_all(&flags, __VA_ARGS__)); \
-	ASSERT_FALSE(vma_test_all_flags(&vma, __VA_ARGS__))
+	ASSERT_FALSE(vma_test_all(&vma, __VA_ARGS__))

 	/*
 	 * Testing for some flags that are present, some that are not - should
···
 	 * Check _mask variant. We don't need to test extensively as macro
 	 * helper is the equivalent.
 	 */
-	ASSERT_TRUE(vma_flags_test_mask(&flags, flags));
+	ASSERT_TRUE(vma_flags_test_any_mask(&flags, flags));
 	ASSERT_TRUE(vma_flags_test_all_mask(&flags, flags));

 	/* Single bits. */
···
 	vma_flags_clear_mask(&flags, mask);
 	vma_flags_clear_mask(&vma.flags, mask);
 	vma_desc_clear_flags_mask(&desc, mask);
-	ASSERT_FALSE(vma_flags_test(&flags, VMA_EXEC_BIT, 64));
-	ASSERT_FALSE(vma_flags_test(&vma.flags, VMA_EXEC_BIT, 64));
-	ASSERT_FALSE(vma_desc_test_flags(&desc, VMA_EXEC_BIT, 64));
+	ASSERT_FALSE(vma_flags_test_any(&flags, VMA_EXEC_BIT, 64));
+	ASSERT_FALSE(vma_flags_test_any(&vma.flags, VMA_EXEC_BIT, 64));
+	ASSERT_FALSE(vma_desc_test_any(&desc, VMA_EXEC_BIT, 64));
 	/* Reset. */
 	vma_flags_set(&flags, VMA_EXEC_BIT, 64);
 	vma_set_flags(&vma, VMA_EXEC_BIT, 64);
···
 	vma_flags_clear(&flags, __VA_ARGS__); \
 	vma_flags_clear(&vma.flags, __VA_ARGS__); \
 	vma_desc_clear_flags(&desc, __VA_ARGS__); \
-	ASSERT_FALSE(vma_flags_test(&flags, __VA_ARGS__)); \
-	ASSERT_FALSE(vma_flags_test(&vma.flags, __VA_ARGS__)); \
-	ASSERT_FALSE(vma_desc_test_flags(&desc, __VA_ARGS__)); \
+	ASSERT_FALSE(vma_flags_test_any(&flags, __VA_ARGS__)); \
+	ASSERT_FALSE(vma_flags_test_any(&vma.flags, __VA_ARGS__)); \
+	ASSERT_FALSE(vma_desc_test_any(&desc, __VA_ARGS__)); \
 	vma_flags_set(&flags, __VA_ARGS__); \
 	vma_set_flags(&vma, __VA_ARGS__); \
 	vma_desc_set_flags(&desc, __VA_ARGS__)
···
 	TEST(vma_flags_unchanged);
 	TEST(vma_flags_cleared);
 	TEST(vma_flags_word);
-	TEST(vma_flags_test);
+	TEST(vma_flags_test_any);
 	TEST(vma_flags_clear);
 }