Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 's390-7.1-1' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux

Pull s390 updates from Vasily Gorbik:

- Add support for CONFIG_PAGE_TABLE_CHECK and enable it in
debug_defconfig. s390 can only tell user from kernel PTEs via the mm,
so mm_struct is now passed into pxx_user_accessible_page() callbacks

- Expose the PCI function UID as an arch-specific slot attribute in
sysfs so a function can be identified by its user-defined id while
still in standby. Introduces a generic ARCH_PCI_SLOT_GROUPS hook in
drivers/pci/slot.c

- Refresh s390 PCI documentation to reflect current behavior and cover
previously undocumented sysfs attributes

- zcrypt device driver cleanup series: consistent field types, clearer
variable naming, a kernel-doc warning fix, and a comment explaining
the intentional synchronize_rcu() in pkey_handler_register()

- Provide an s390-specific arch_raw_cpu_ptr() that uses alternatives to
avoid the detour via get_lowcore(), shrinking defconfig by ~27 kB

- Guard identity-base randomization with kaslr_enabled() so nokaslr
keeps the identity mapping at 0 even with RANDOMIZE_IDENTITY_BASE=y

- Build S390_MODULES_SANITY_TEST as a module only by requiring KUNIT &&
m, since built-in would not exercise module loading
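
In Kconfig terms, adding "&& m" to a tristate's dependencies removes the built-in (y) choice, leaving only n and m. A generic sketch with a hypothetical symbol name:

```
# EXAMPLE_MODULE_ONLY_TEST is a hypothetical symbol used for illustration.
# "depends on KUNIT && m" restricts the tristate to n or m, so the test
# code is always exercised as a loadable module.
config EXAMPLE_MODULE_ONLY_TEST
	tristate "Example module-only test"
	depends on KUNIT && m
```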

- Remove the permanently commented-out HMCDRV_DEV_CLASS create_class()
code in the hmcdrv driver

- Drop stale ident_map_size extern conflicting with asm/page.h

* tag 's390-7.1-1' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux:
s390/zcrypt: Fix warning about wrong kernel doc comment
PCI: s390: Expose the UID as an arch specific PCI slot attribute
docs: s390/pci: Improve and update PCI documentation
s390/pkey: Add comment about synchronize_rcu() to pkey base
s390/hmcdrv: Remove commented out code
s390/zcrypt: Slight rework on the agent_id field
s390/zcrypt: Explicitly use a card variable in _zcrypt_send_cprb
s390/zcrypt: Rework MKVP fields and handling
s390/zcrypt: Make apfs a real unsigned int field
s390/zcrypt: Rework domain processing within zcrypt device driver
s390/zcrypt: Move inline function rng_type6cprb_msgx from header to code
s390/percpu: Provide arch_raw_cpu_ptr()
s390: Enable page table check for debug_defconfig
s390/pgtable: Add s390 support for page table check
s390/pgtable: Use set_pmd_bit() to invalidate PMD entry
mm/page_table_check: Pass mm_struct to pxx_user_accessible_page()
s390/boot: Respect kaslr_enabled() for identity randomization
s390/Kconfig: Make modules sanity test a module-only option
s390/setup: Drop stale ident_map_size declaration

+467 -389
+104 -47
Documentation/arch/s390/pci.rst
···
 Authors:
 - Pierre Morel
+- Niklas Schnelle

 Copyright, IBM Corp. 2020
···
 debugfs entries
 ---------------

-The S/390 debug feature (s390dbf) generates views to hold various debug results in sysfs directories of the form:
+The S/390 debug feature (s390dbf) generates views to hold various debug results
+in sysfs directories of the form:

 * /sys/kernel/debug/s390dbf/pci_*/

 For example:

 - /sys/kernel/debug/s390dbf/pci_msg/sprintf
-  Holds messages from the processing of PCI events, like machine check handling
+
+  holds messages from the processing of PCI events, like machine check handling
   and setting of global functionality, like UID checking.

 Change the level of logging to be more or less verbose by piping
···
 Entries specific to zPCI functions and entries that hold zPCI information.

-* /sys/bus/pci/slots/XXXXXXXX
+* /sys/bus/pci/slots/XXXXXXXX:

-  The slot entries are set up using the function identifier (FID) of the
-  PCI function. The format depicted as XXXXXXXX above is 8 hexadecimal digits
-  with 0 padding and lower case hexadecimal digits.
+  The slot entries are set up using the function identifier (FID) of the PCI
+  function as slot name. The format depicted as XXXXXXXX above is 8 hexadecimal
+  digits with 0 padding and lower case hexadecimal digits.

   - /sys/bus/pci/slots/XXXXXXXX/power

+  In addition to using the FID as the name of the slot, the slot directory
+  also contains the following s390-specific slot attributes.
+
+  - uid:
+    The User-defined identifier (UID) of the function which may be configured
+    by this slot. See also the corresponding attribute of the device.
+
   A physical function that currently supports a virtual function cannot be
   powered off until all virtual functions are removed with:
-  echo 0 > /sys/bus/pci/devices/XXXX:XX:XX.X/sriov_numvf
+  echo 0 > /sys/bus/pci/devices/DDDD:BB:dd.f/sriov_numvf

-* /sys/bus/pci/devices/XXXX:XX:XX.X/
+* /sys/bus/pci/devices/DDDD:BB:dd.f/:

-  - function_id
-    A zPCI function identifier that uniquely identifies the function in the Z server.
+  - function_id:
+    The zPCI function identifier (FID) is a 32-bit hexadecimal value that
+    uniquely identifies the PCI function. Unless the hypervisor provides
+    a virtual FID e.g. on KVM this identifier is unique across the machine even
+    between different partitions.

-  - function_handle
-    Low-level identifier used for a configured PCI function.
-    It might be useful for debugging.
+  - function_handle:
+    This 32-bit hexadecimal value is a low-level identifier used for a PCI
+    function. Note that the function handle may be changed and become invalid
+    on PCI events and when enabling/disabling the PCI function.

-  - pchid
-    Model-dependent location of the I/O adapter.
+  - pchid:
+    This 16-bit hexadecimal value encodes a model-dependent location for
+    the PCI function.

-  - pfgid
-    PCI function group ID, functions that share identical functionality
+  - pfgid:
+    PCI function group ID; functions that share identical functionality
     use a common identifier.
     A PCI group defines interrupts, IOMMU, IOTLB, and DMA specifics.

-  - vfn
+  - vfn:
     The virtual function number, from 1 to N for virtual functions,
     0 for physical functions.

-  - pft
-    The PCI function type
+  - pft:
+    The PCI function type is an s390-specific type attribute. It indicates
+    a more general, usage oriented, type than PCI Specification
+    class/vendor/device identifiers. That is PCI functions with the same pft
+    value may be backed by different hardware implementations. At the same time
+    apart from unclassified functions (pft is 0x00) the same pft value
+    generally implies a similar usage model. At the same time the same
+    PCI hardware device may appear with different pft values when in a
+    different usage model. For example NETD and NETH VFs may be implemented
+    by the same PCI hardware device but in NETD the parent Physical Function
+    is user managed while with NETH it is platform managed.

-  - port
-    The port corresponds to the physical port the function is attached to.
-    It also gives an indication of the physical function a virtual function
-    is attached to.
+    Currently the following PFT values are defined:

-  - uid
-    The user identifier (UID) may be defined as part of the machine
-    configuration or the z/VM or KVM guest configuration. If the accompanying
-    uid_is_unique attribute is 1 the platform guarantees that the UID is unique
-    within that instance and no devices with the same UID can be attached
-    during the lifetime of the system.
+    - 0x00 (UNC): Unclassified
+    - 0x02 (ROCE): RoCE Express
+    - 0x05 (ISM): Internal Shared Memory
+    - 0x0a (ROC2): RoCE Express 2
+    - 0x0b (NVMe): NVMe
+    - 0x0c (NETH): Network Express hybrid
+    - 0x0d (CNW): Cloud Network Adapter
+    - 0x0f (NETD): Network Express direct

-  - uid_is_unique
-    Indicates whether the user identifier (UID) is guaranteed to be and remain
-    unique within this Linux instance.
+  - port:
+    The port is a decimal value corresponding to the physical port the function
+    is attached to. Virtual Functions (VFs) share the port with their parent
+    Physical Function (PF). A value of 0 indicates that the port attribute is
+    not applicable for that PCI function type.

-  - pfip/segmentX
+  - uid:
+    The user-defined identifier (UID) for a PCI function is a 32-bit
+    hexadecimal value. It is defined on a per instance basis as part of the
+    partition, KVM guest, or z/VM guest configuration. If UID Checking is
+    enabled the platform ensures that the UID is unique within that instance
+    and no two PCI functions with the same UID will be visible to the instance.
+
+    Independent of this guarantee and unlike the function ID (FID) the UID may
+    be the same in different partitions within the same machine. This allows to
+    create PCI configurations in multiple partitions to be identical in the
+    UID-namespace.
+
+  - uid_is_unique:
+    A 0 or 1 flag indicating whether the user-defined identifier (UID) is
+    guaranteed to be and remain unique within this Linux instance. This
+    platform feature is called UID Checking.
+
+  - pfip/segmentX:
     The segments determine the isolation of a function.
     They correspond to the physical path to the function.
     The more the segments are different, the more the functions are isolated.
+
+  - fidparm:
+    Contains an 8-bit-per-PCI function parameter field in hexadecimal provided
+    by the platform. The meaning of this field is PCI function type specific.
+    For NETH VFs a value of 0x01 indicates that the function supports
+    promiscuous mode.
+
+* /sys/firmware/clp/uid_checking:
+
+  In addition to the per-device uid_is_unique attribute this presents a
+  global indication of whether UID Checking is enabled. This allows users
+  to check for UID Checking even when no PCI functions are configured.

 Enumeration and hotplug
 =======================

 The PCI address consists of four parts: domain, bus, device and function,
-and is of this form: DDDD:BB:dd.f
+and is of this form: DDDD:BB:dd.f.

-* When not using multi-functions (norid is set, or the firmware does not
-  support multi-functions):
+* For a PCI function for which the platform does not expose the RID, the
+  pci=norid kernel parameter is used, or a so-called isolated Virtual Function
+  which does have RID information but is used without its parent Physical
+  Function being part of the same PCI configuration:

   - There is only one function per domain.

-  - The domain is set from the zPCI function's UID as defined during the
-    LPAR creation.
+  - The domain is set from the zPCI function's UID if UID Checking is on;
+    otherwise the domain ID is generated dynamically and is not stable
+    across reboots or hot plug.

-* When using multi-functions (norid parameter is not set),
-  zPCI functions are addressed differently:
+* For a PCI function for which the platform exposes the RID and which
+  is not an Isolated Virtual Function:

   - There is still only one bus per domain.

-  - There can be up to 256 functions per bus.
+  - There can be up to 256 PCI functions per bus.

-  - The domain part of the address of all functions for
-    a multi-Function device is set from the zPCI function's UID as defined
-    in the LPAR creation for the function zero.
+  - The domain part of the address of all functions within the same topology is
+    that of the configured PCI function with the lowest devfn within that
+    topology.

-  - New functions will only be ready for use after the function zero
-    (the function with devfn 0) has been enumerated.
+  - Virtual Functions generated by an SR-IOV capable Physical Function only
+    become visible once SR-IOV is enabled.
+3 -3
arch/arm64/include/asm/pgtable.h
···
 #endif

 #ifdef CONFIG_PAGE_TABLE_CHECK
-static inline bool pte_user_accessible_page(pte_t pte, unsigned long addr)
+static inline bool pte_user_accessible_page(struct mm_struct *mm, unsigned long addr, pte_t pte)
 {
 	return pte_valid(pte) && (pte_user(pte) || pte_user_exec(pte));
 }

-static inline bool pmd_user_accessible_page(pmd_t pmd, unsigned long addr)
+static inline bool pmd_user_accessible_page(struct mm_struct *mm, unsigned long addr, pmd_t pmd)
 {
 	return pmd_valid(pmd) && !pmd_table(pmd) && (pmd_user(pmd) || pmd_user_exec(pmd));
 }

-static inline bool pud_user_accessible_page(pud_t pud, unsigned long addr)
+static inline bool pud_user_accessible_page(struct mm_struct *mm, unsigned long addr, pud_t pud)
 {
 	return pud_valid(pud) && !pud_table(pud) && (pud_user(pud) || pud_user_exec(pud));
 }
+1 -1
arch/powerpc/include/asm/book3s/32/pgtable.h
···
 	return true;
 }

-static inline bool pte_user_accessible_page(pte_t pte, unsigned long addr)
+static inline bool pte_user_accessible_page(struct mm_struct *mm, unsigned long addr, pte_t pte)
 {
 	return pte_present(pte) && !is_kernel_addr(addr);
 }
+5 -5
arch/powerpc/include/asm/book3s/64/pgtable.h
···
 	return arch_pte_access_permitted(pte_val(pte), write, 0);
 }

-static inline bool pte_user_accessible_page(pte_t pte, unsigned long addr)
+static inline bool pte_user_accessible_page(struct mm_struct *mm, unsigned long addr, pte_t pte)
 {
 	return pte_present(pte) && pte_user(pte);
 }
···
 }

 #define pud_user_accessible_page pud_user_accessible_page
-static inline bool pud_user_accessible_page(pud_t pud, unsigned long addr)
+static inline bool pud_user_accessible_page(struct mm_struct *mm, unsigned long addr, pud_t pud)
 {
-	return pud_leaf(pud) && pte_user_accessible_page(pud_pte(pud), addr);
+	return pud_leaf(pud) && pte_user_accessible_page(mm, addr, pud_pte(pud));
 }

 #define __p4d_raw(x)	((p4d_t) { __pgd_raw(x) })
···
 }

 #define pmd_user_accessible_page pmd_user_accessible_page
-static inline bool pmd_user_accessible_page(pmd_t pmd, unsigned long addr)
+static inline bool pmd_user_accessible_page(struct mm_struct *mm, unsigned long addr, pmd_t pmd)
 {
-	return pmd_leaf(pmd) && pte_user_accessible_page(pmd_pte(pmd), addr);
+	return pmd_leaf(pmd) && pte_user_accessible_page(mm, addr, pmd_pte(pmd));
 }

 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
+1 -1
arch/powerpc/include/asm/nohash/pgtable.h
···
 	return true;
 }

-static inline bool pte_user_accessible_page(pte_t pte, unsigned long addr)
+static inline bool pte_user_accessible_page(struct mm_struct *mm, unsigned long addr, pte_t pte)
 {
 	return pte_present(pte) && !is_kernel_addr(addr);
 }
+2 -2
arch/powerpc/include/asm/pgtable.h
···
 #endif /* CONFIG_PPC64 */

 #ifndef pmd_user_accessible_page
-#define pmd_user_accessible_page(pmd, addr)	false
+#define pmd_user_accessible_page(mm, addr, pmd)	false
 #endif

 #ifndef pud_user_accessible_page
-#define pud_user_accessible_page(pud, addr)	false
+#define pud_user_accessible_page(mm, addr, pud)	false
 #endif

 #endif /* __ASSEMBLER__ */
+3 -3
arch/riscv/include/asm/pgtable.h
···
 }

 #ifdef CONFIG_PAGE_TABLE_CHECK
-static inline bool pte_user_accessible_page(pte_t pte, unsigned long addr)
+static inline bool pte_user_accessible_page(struct mm_struct *mm, unsigned long addr, pte_t pte)
 {
 	return pte_present(pte) && pte_user(pte);
 }

-static inline bool pmd_user_accessible_page(pmd_t pmd, unsigned long addr)
+static inline bool pmd_user_accessible_page(struct mm_struct *mm, unsigned long addr, pmd_t pmd)
 {
 	return pmd_leaf(pmd) && pmd_user(pmd);
 }

-static inline bool pud_user_accessible_page(pud_t pud, unsigned long addr)
+static inline bool pud_user_accessible_page(struct mm_struct *mm, unsigned long addr, pud_t pud)
 {
 	return pud_leaf(pud) && pud_user(pud);
 }
+2 -1
arch/s390/Kconfig
···
 	select ARCH_SUPPORTS_INT128 if CC_HAS_INT128 && CC_IS_CLANG
 	select ARCH_SUPPORTS_MSEAL_SYSTEM_MAPPINGS
 	select ARCH_SUPPORTS_NUMA_BALANCING
+	select ARCH_SUPPORTS_PAGE_TABLE_CHECK
 	select ARCH_SUPPORTS_PER_VMA_LOCK
 	select ARCH_USE_BUILTIN_BSWAP
 	select ARCH_USE_CMPXCHG_LOCKREF
···

 config S390_MODULES_SANITY_TEST
 	def_tristate n
-	depends on KUNIT
+	depends on KUNIT && m
 	default KUNIT_ALL_TESTS
 	prompt "Enable s390 specific modules tests"
 	select S390_MODULES_SANITY_TEST_HELPERS
+2 -1
arch/s390/boot/startup.c
···
 	max_mappable = max(ident_map_size, MAX_DCSS_ADDR);
 	max_mappable = min(max_mappable, vmemmap_start);
 #ifdef CONFIG_RANDOMIZE_IDENTITY_BASE
-	__identity_base = round_down(vmemmap_start - max_mappable, rte_size);
+	if (kaslr_enabled())
+		__identity_base = round_down(vmemmap_start - max_mappable, rte_size);
 #endif
 	boot_debug("identity map: 0x%016lx-0x%016lx\n", __identity_base,
 		   __identity_base + ident_map_size);
+2
arch/s390/configs/debug_defconfig
···
 CONFIG_ATOMIC64_SELFTEST=y
 CONFIG_TEST_BITOPS=m
 CONFIG_TEST_BPF=m
+CONFIG_PAGE_TABLE_CHECK=y
+CONFIG_PAGE_TABLE_CHECK_ENFORCED=y
+4
arch/s390/include/asm/pci.h
···
 	&pfip_attr_group, \
 	&zpci_ident_attr_group,

+extern const struct attribute_group zpci_slot_attr_group;
+
+#define ARCH_PCI_SLOT_GROUPS (&zpci_slot_attr_group)
+
 extern unsigned int s390_pci_force_floating __initdata;
 extern unsigned int s390_pci_no_rid;
+18
arch/s390/include/asm/percpu.h
···
  */
 #define __my_cpu_offset get_lowcore()->percpu_offset

+#define arch_raw_cpu_ptr(_ptr)						\
+({									\
+	unsigned long lc_percpu, tcp_ptr__;				\
+									\
+	tcp_ptr__ = (__force unsigned long)(_ptr);			\
+	lc_percpu = offsetof(struct lowcore, percpu_offset);		\
+	asm_inline volatile(						\
+		ALTERNATIVE("ag %[__ptr__],%[offzero](%%r0)\n",		\
+			    "ag %[__ptr__],%[offalt](%%r0)\n",		\
+			    ALT_FEATURE(MFEATURE_LOWCORE))		\
+		: [__ptr__] "+d" (tcp_ptr__)				\
+		: [offzero] "i" (lc_percpu),				\
+		  [offalt] "i" (lc_percpu + LOWCORE_ALT_ADDRESS),	\
+		  "m" (((struct lowcore *)0)->percpu_offset)		\
+		: "cc");						\
+	(TYPEOF_UNQUAL(*(_ptr)) __force __kernel *)tcp_ptr__;		\
+})
+
 /*
  * We use a compare-and-swap loop since that uses less cpu cycles than
  * disabling and enabling interrupts like the generic variant would do.
+53 -7
arch/s390/include/asm/pgtable.h
···
 #include <linux/mm_types.h>
 #include <linux/cpufeature.h>
 #include <linux/page-flags.h>
+#include <linux/page_table_check.h>
 #include <linux/radix-tree.h>
 #include <linux/atomic.h>
+#include <linux/mmap_lock.h>
 #include <asm/ctlreg.h>
 #include <asm/bug.h>
 #include <asm/page.h>
···
 	/* At this point the reference through the mapping is still present */
 	if (mm_is_protected(mm) && pte_present(res))
 		WARN_ON_ONCE(uv_convert_from_secure_pte(res));
+	page_table_check_pte_clear(mm, addr, res);
 	return res;
 }
···
 	/* At this point the reference through the mapping is still present */
 	if (mm_is_protected(vma->vm_mm) && pte_present(res))
 		WARN_ON_ONCE(uv_convert_from_secure_pte(res));
+	page_table_check_pte_clear(vma->vm_mm, addr, res);
 	return res;
 }
···
 	} else {
 		res = ptep_xchg_lazy(mm, addr, ptep, __pte(_PAGE_INVALID));
 	}
+
+	page_table_check_pte_clear(mm, addr, res);
+
 	/* Nothing to do */
 	if (!mm_is_protected(mm) || !pte_present(res))
 		return res;
···
 	if (pte_present(entry))
 		entry = clear_pte_bit(entry, __pgprot(_PAGE_UNUSED));
+	page_table_check_ptes_set(mm, addr, ptep, entry, nr);
 	for (;;) {
 		set_pte(ptep, entry);
 		if (--nr == 0)
···
 static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
 			      pmd_t *pmdp, pmd_t entry)
 {
+	page_table_check_pmd_set(mm, addr, pmdp, entry);
 	set_pmd(pmdp, entry);
 }
···
 static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
 					    unsigned long addr, pmd_t *pmdp)
 {
-	return pmdp_xchg_direct(mm, addr, pmdp, __pmd(_SEGMENT_ENTRY_EMPTY));
+	pmd_t pmd;
+
+	pmd = pmdp_xchg_direct(mm, addr, pmdp, __pmd(_SEGMENT_ENTRY_EMPTY));
+	page_table_check_pmd_clear(mm, addr, pmd);
+	return pmd;
 }

 #define __HAVE_ARCH_PMDP_HUGE_GET_AND_CLEAR_FULL
···
 					    unsigned long addr,
 					    pmd_t *pmdp, int full)
 {
+	pmd_t pmd;
+
 	if (full) {
-		pmd_t pmd = *pmdp;
+		pmd = *pmdp;
 		set_pmd(pmdp, __pmd(_SEGMENT_ENTRY_EMPTY));
+		page_table_check_pmd_clear(vma->vm_mm, addr, pmd);
 		return pmd;
 	}
-	return pmdp_xchg_lazy(vma->vm_mm, addr, pmdp, __pmd(_SEGMENT_ENTRY_EMPTY));
+	pmd = pmdp_xchg_lazy(vma->vm_mm, addr, pmdp, __pmd(_SEGMENT_ENTRY_EMPTY));
+	page_table_check_pmd_clear(vma->vm_mm, addr, pmd);
+	return pmd;
 }

 #define __HAVE_ARCH_PMDP_HUGE_CLEAR_FLUSH
···
 static inline pmd_t pmdp_invalidate(struct vm_area_struct *vma,
 				    unsigned long addr, pmd_t *pmdp)
 {
-	pmd_t pmd;
+	pmd_t pmd = *pmdp;

-	VM_WARN_ON_ONCE(!pmd_present(*pmdp));
-	pmd = __pmd(pmd_val(*pmdp) | _SEGMENT_ENTRY_INVALID);
-	return pmdp_xchg_direct(vma->vm_mm, addr, pmdp, pmd);
+	VM_WARN_ON_ONCE(!pmd_present(pmd));
+	pmd = set_pmd_bit(pmd, __pgprot(_SEGMENT_ENTRY_INVALID));
+#ifdef CONFIG_PAGE_TABLE_CHECK
+	pmd = clear_pmd_bit(pmd, __pgprot(_SEGMENT_ENTRY_READ));
+#endif
+	page_table_check_pmd_set(vma->vm_mm, addr, pmdp, pmd);
+	pmd = pmdp_xchg_direct(vma->vm_mm, addr, pmdp, pmd);
+	return pmd;
 }

 #define __HAVE_ARCH_PMDP_SET_WRPROTECT
···
 	return cpu_has_edat1() ? 1 : 0;
 }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+
+#ifdef CONFIG_PAGE_TABLE_CHECK
+static inline bool pte_user_accessible_page(struct mm_struct *mm, unsigned long addr, pte_t pte)
+{
+	VM_BUG_ON(mm == &init_mm);
+
+	return pte_present(pte);
+}
+
+static inline bool pmd_user_accessible_page(struct mm_struct *mm, unsigned long addr, pmd_t pmd)
+{
+	VM_BUG_ON(mm == &init_mm);
+
+	return pmd_leaf(pmd) && (pmd_val(pmd) & _SEGMENT_ENTRY_READ);
+}
+
+static inline bool pud_user_accessible_page(struct mm_struct *mm, unsigned long addr, pud_t pud)
+{
+	VM_BUG_ON(mm == &init_mm);
+
+	return pud_leaf(pud);
+}
+#endif

 /*
  * 64 bit swap entry format:
-1
arch/s390/include/asm/setup.h
···
 #define ZLIB_DFLTCC_INFLATE_ONLY	3
 #define ZLIB_DFLTCC_FULL_DEBUG		4

-extern unsigned long ident_map_size;
 extern unsigned long max_mappable;

 /* The Write Back bit position in the physaddr is given by the SLPC PCI */
+20
arch/s390/pci/pci_sysfs.c
···
 }
 static DEVICE_ATTR_RO(index);

+static ssize_t zpci_uid_slot_show(struct pci_slot *slot, char *buf)
+{
+	struct zpci_dev *zdev = container_of(slot->hotplug, struct zpci_dev,
+					     hotplug_slot);
+
+	return sysfs_emit(buf, "0x%x\n", zdev->uid);
+}
+
+static struct pci_slot_attribute zpci_slot_attr_uid =
+	__ATTR(uid, 0444, zpci_uid_slot_show, NULL);
+
 static umode_t zpci_index_is_visible(struct kobject *kobj,
 				     struct attribute *attr, int n)
 {
···
 const struct attribute_group pfip_attr_group = {
 	.name = "pfip",
 	.attrs = pfip_attrs,
+};
+
+static struct attribute *zpci_slot_attrs[] = {
+	&zpci_slot_attr_uid.attr,
+	NULL,
+};
+
+const struct attribute_group zpci_slot_attr_group = {
+	.attrs = zpci_slot_attrs,
 };

 static struct attribute *clp_fw_attrs[] = {
+3 -3
arch/x86/include/asm/pgtable.h
···
 #endif

 #ifdef CONFIG_PAGE_TABLE_CHECK
-static inline bool pte_user_accessible_page(pte_t pte, unsigned long addr)
+static inline bool pte_user_accessible_page(struct mm_struct *mm, unsigned long addr, pte_t pte)
 {
 	return (pte_val(pte) & _PAGE_PRESENT) && (pte_val(pte) & _PAGE_USER);
 }

-static inline bool pmd_user_accessible_page(pmd_t pmd, unsigned long addr)
+static inline bool pmd_user_accessible_page(struct mm_struct *mm, unsigned long addr, pmd_t pmd)
 {
 	return pmd_leaf(pmd) && (pmd_val(pmd) & _PAGE_PRESENT) && (pmd_val(pmd) & _PAGE_USER);
 }

-static inline bool pud_user_accessible_page(pud_t pud, unsigned long addr)
+static inline bool pud_user_accessible_page(struct mm_struct *mm, unsigned long addr, pud_t pud)
 {
 	return pud_leaf(pud) && (pud_val(pud) & _PAGE_PRESENT) && (pud_val(pud) & _PAGE_USER);
 }
+12 -1
drivers/pci/slot.c
···
 	&pci_slot_attr_cur_speed.attr,
 	NULL,
 };
-ATTRIBUTE_GROUPS(pci_slot_default);
+
+static const struct attribute_group pci_slot_default_group = {
+	.attrs = pci_slot_default_attrs,
+};
+
+static const struct attribute_group *pci_slot_default_groups[] = {
+	&pci_slot_default_group,
+#ifdef ARCH_PCI_SLOT_GROUPS
+	ARCH_PCI_SLOT_GROUPS,
+#endif
+	NULL,
+};

 static const struct kobj_type pci_slot_ktype = {
 	.sysfs_ops = &pci_slot_sysfs_ops,
+1 -113
drivers/s390/char/hmcdrv_dev.c
···
 #include "hmcdrv_dev.h"
 #include "hmcdrv_ftp.h"

-/* If the following macro is defined, then the HMC device creates it's own
- * separated device class (and dynamically assigns a major number). If not
- * defined then the HMC device is assigned to the "misc" class devices.
- *
- #define HMCDRV_DEV_CLASS "hmcftp"
- */
-
 #define HMCDRV_DEV_NAME  "hmcdrv"
 #define HMCDRV_DEV_BUSY_DELAY	500	/* delay between -EBUSY trials in ms */
 #define HMCDRV_DEV_BUSY_RETRIES	3	/* number of retries on -EBUSY */

 struct hmcdrv_dev_node {
-
-#ifdef HMCDRV_DEV_CLASS
-	struct cdev dev; /* character device structure */
-	umode_t mode;	 /* mode of device node (unused, zero) */
-#else
 	struct miscdevice dev; /* "misc" device structure */
-#endif
-
 };

 static int hmcdrv_dev_open(struct inode *inode, struct file *fp);
···
 };

 static struct hmcdrv_dev_node hmcdrv_dev; /* HMC device struct (static) */
-
-#ifdef HMCDRV_DEV_CLASS
-
-static struct class *hmcdrv_dev_class; /* device class pointer */
-static dev_t hmcdrv_dev_no; /* device number (major/minor) */
-
-/**
- * hmcdrv_dev_name() - provides a naming hint for a device node in /dev
- * @dev: device for which the naming/mode hint is
- * @mode: file mode for device node created in /dev
- *
- * See: devtmpfs.c, function devtmpfs_create_node()
- *
- * Return: recommended device file name in /dev
- */
-static char *hmcdrv_dev_name(const struct device *dev, umode_t *mode)
-{
-	char *nodename = NULL;
-	const char *devname = dev_name(dev); /* kernel device name */
-
-	if (devname)
-		nodename = kasprintf(GFP_KERNEL, "%s", devname);
-
-	/* on device destroy (rmmod) the mode pointer may be NULL
-	 */
-	if (mode)
-		*mode = hmcdrv_dev.mode;
-
-	return nodename;
-}
-
-#endif /* HMCDRV_DEV_CLASS */

 /*
  * open()
···
 */
 int hmcdrv_dev_init(void)
 {
-	int rc;
-
-#ifdef HMCDRV_DEV_CLASS
-	struct device *dev;
-
-	rc = alloc_chrdev_region(&hmcdrv_dev_no, 0, 1, HMCDRV_DEV_NAME);
-
-	if (rc)
-		goto out_err;
-
-	cdev_init(&hmcdrv_dev.dev, &hmcdrv_dev_fops);
-	hmcdrv_dev.dev.owner = THIS_MODULE;
-	rc = cdev_add(&hmcdrv_dev.dev, hmcdrv_dev_no, 1);
-
-	if (rc)
-		goto out_unreg;
-
-	/* At this point the character device exists in the kernel (see
-	 * /proc/devices), but not under /dev nor /sys/devices/virtual. So
-	 * we have to create an associated class (see /sys/class).
-	 */
-	hmcdrv_dev_class = class_create(HMCDRV_DEV_CLASS);
-
-	if (IS_ERR(hmcdrv_dev_class)) {
-		rc = PTR_ERR(hmcdrv_dev_class);
-		goto out_devdel;
-	}
-
-	/* Finally a device node in /dev has to be established (as 'mkdev'
-	 * does from the command line). Notice that assignment of a device
-	 * node name/mode function is optional (only for mode != 0600).
-	 */
-	hmcdrv_dev.mode = 0; /* "unset" */
-	hmcdrv_dev_class->devnode = hmcdrv_dev_name;
-
-	dev = device_create(hmcdrv_dev_class, NULL, hmcdrv_dev_no, NULL,
-			    "%s", HMCDRV_DEV_NAME);
-	if (!IS_ERR(dev))
-		return 0;
-
-	rc = PTR_ERR(dev);
-	class_destroy(hmcdrv_dev_class);
-	hmcdrv_dev_class = NULL;
-
-out_devdel:
-	cdev_del(&hmcdrv_dev.dev);
-
-out_unreg:
-	unregister_chrdev_region(hmcdrv_dev_no, 1);
-
-out_err:
-
-#else /* !HMCDRV_DEV_CLASS */
 	hmcdrv_dev.dev.minor = MISC_DYNAMIC_MINOR;
 	hmcdrv_dev.dev.name = HMCDRV_DEV_NAME;
 	hmcdrv_dev.dev.fops = &hmcdrv_dev_fops;
 	hmcdrv_dev.dev.mode = 0; /* finally produces 0600 */
-	rc = misc_register(&hmcdrv_dev.dev);
-#endif /* HMCDRV_DEV_CLASS */
-
-	return rc;
+	return misc_register(&hmcdrv_dev.dev);
 }

 /**
···
 */
 void hmcdrv_dev_exit(void)
 {
-#ifdef HMCDRV_DEV_CLASS
-	if (!IS_ERR_OR_NULL(hmcdrv_dev_class)) {
-		device_destroy(hmcdrv_dev_class, hmcdrv_dev_no);
-		class_destroy(hmcdrv_dev_class);
-	}
-
-	cdev_del(&hmcdrv_dev.dev);
-	unregister_chrdev_region(hmcdrv_dev_no, 1);
-#else /* !HMCDRV_DEV_CLASS */
 	misc_deregister(&hmcdrv_dev.dev);
-#endif /* HMCDRV_DEV_CLASS */
 }
+7
drivers/s390/crypto/pkey_base.c
···
 	list_add_rcu(&handler->list, &handler_list);
 	spin_unlock(&handler_list_write_lock);
+	/*
+	 * Fast path to push the info about the updated list to the other
+	 * CPUs. If removed, the other CPUs may get the updated list when the
+	 * RCU context is synched. As this code is in general not performance
+	 * critical and the list update mostly only occurs at the early time in
+	 * system startup the focus is on concurrency versus performance.
+	 */
 	synchronize_rcu();

 	module_put(handler->module);
+26 -22
drivers/s390/crypto/pkey_cca.c
··· 87 87 zcrypt_wait_api_operational(); 88 88 89 89 if (hdr->type == TOKTYPE_CCA_INTERNAL) { 90 - u64 cur_mkvp = 0, old_mkvp = 0; 90 + const u8 *ptr_cur_mkvp = NULL; 91 + const u8 *ptr_old_mkvp = NULL; 91 92 int minhwtype = ZCRYPT_CEX3C; 92 93 93 94 if (hdr->version == TOKVER_CCA_AES) { 94 95 struct secaeskeytoken *t = (struct secaeskeytoken *)key; 95 96 96 97 if (flags & PKEY_FLAGS_MATCH_CUR_MKVP) 97 - cur_mkvp = t->mkvp; 98 + ptr_cur_mkvp = t->mkvp; 98 99 if (flags & PKEY_FLAGS_MATCH_ALT_MKVP) 99 - old_mkvp = t->mkvp; 100 + ptr_old_mkvp = t->mkvp; 100 101 } else if (hdr->version == TOKVER_CCA_VLSC) { 101 102 struct cipherkeytoken *t = (struct cipherkeytoken *)key; 102 103 103 104 minhwtype = ZCRYPT_CEX6; 104 105 if (flags & PKEY_FLAGS_MATCH_CUR_MKVP) 105 - cur_mkvp = t->mkvp0; 106 + ptr_cur_mkvp = t->mkvp0; 106 107 if (flags & PKEY_FLAGS_MATCH_ALT_MKVP) 107 - old_mkvp = t->mkvp0; 108 + ptr_old_mkvp = t->mkvp0; 108 109 } else { 109 110 /* unknown CCA internal token type */ 110 111 return -EINVAL; 111 112 } 112 113 rc = cca_findcard2(_apqns, &_nr_apqns, 0xFFFF, 0xFFFF, 113 114 minhwtype, AES_MK_SET, 114 - cur_mkvp, old_mkvp, xflags); 115 + ptr_cur_mkvp, ptr_old_mkvp, xflags); 115 116 if (rc) 116 117 goto out; 117 118 118 119 } else if (hdr->type == TOKTYPE_CCA_INTERNAL_PKA) { 119 120 struct eccprivkeytoken *t = (struct eccprivkeytoken *)key; 120 - u64 cur_mkvp = 0, old_mkvp = 0; 121 + const u8 *ptr_cur_mkvp = NULL; 122 + const u8 *ptr_old_mkvp = NULL; 121 123 122 124 if (t->secid == 0x20) { 123 125 if (flags & PKEY_FLAGS_MATCH_CUR_MKVP) 124 - cur_mkvp = t->mkvp; 126 + ptr_cur_mkvp = t->mkvp; 125 127 if (flags & PKEY_FLAGS_MATCH_ALT_MKVP) 126 - old_mkvp = t->mkvp; 128 + ptr_old_mkvp = t->mkvp; 127 129 } else { 128 130 /* unknown CCA internal 2 token type */ 129 131 return -EINVAL; 130 132 } 131 133 rc = cca_findcard2(_apqns, &_nr_apqns, 0xFFFF, 0xFFFF, 132 134 ZCRYPT_CEX7, APKA_MK_SET, 133 - cur_mkvp, old_mkvp, xflags); 135 + ptr_cur_mkvp, ptr_old_mkvp, xflags); 
134 136 if (rc) 135 137 goto out; 136 138 ··· 169 167 zcrypt_wait_api_operational(); 170 168 171 169 if (ktype == PKEY_TYPE_CCA_DATA || ktype == PKEY_TYPE_CCA_CIPHER) { 172 - u64 cur_mkvp = 0, old_mkvp = 0; 170 + const u8 *ptr_cur_mkvp = NULL; 171 + const u8 *ptr_old_mkvp = NULL; 173 172 int minhwtype = ZCRYPT_CEX3C; 174 173 175 174 if (flags & PKEY_FLAGS_MATCH_CUR_MKVP) 176 - cur_mkvp = *((u64 *)cur_mkvp); 175 + ptr_cur_mkvp = cur_mkvp; 177 176 if (flags & PKEY_FLAGS_MATCH_ALT_MKVP) 178 - old_mkvp = *((u64 *)alt_mkvp); 177 + ptr_old_mkvp = alt_mkvp; 179 178 if (ktype == PKEY_TYPE_CCA_CIPHER) 180 179 minhwtype = ZCRYPT_CEX6; 181 180 rc = cca_findcard2(_apqns, &_nr_apqns, 0xFFFF, 0xFFFF, 182 181 minhwtype, AES_MK_SET, 183 - cur_mkvp, old_mkvp, xflags); 182 + ptr_cur_mkvp, ptr_old_mkvp, xflags); 184 183 if (rc) 185 184 goto out; 186 185 187 186 } else if (ktype == PKEY_TYPE_CCA_ECC) { 188 - u64 cur_mkvp = 0, old_mkvp = 0; 187 + const u8 *ptr_cur_mkvp = NULL; 188 + const u8 *ptr_old_mkvp = NULL; 189 189 190 190 if (flags & PKEY_FLAGS_MATCH_CUR_MKVP) 191 - cur_mkvp = *((u64 *)cur_mkvp); 191 + ptr_cur_mkvp = cur_mkvp; 192 192 if (flags & PKEY_FLAGS_MATCH_ALT_MKVP) 193 - old_mkvp = *((u64 *)alt_mkvp); 193 + ptr_old_mkvp = alt_mkvp; 194 194 rc = cca_findcard2(_apqns, &_nr_apqns, 0xFFFF, 0xFFFF, 195 195 ZCRYPT_CEX7, APKA_MK_SET, 196 - cur_mkvp, old_mkvp, xflags); 196 + ptr_cur_mkvp, ptr_old_mkvp, xflags); 197 197 if (rc) 198 198 goto out; 199 199 ··· 491 487 *keybitsize = t->bitsize; 492 488 rc = cca_findcard2(apqns, &nr_apqns, *card, *dom, 493 489 ZCRYPT_CEX3C, AES_MK_SET, 494 - t->mkvp, 0, xflags); 490 + t->mkvp, NULL, xflags); 495 491 if (!rc) 496 492 *flags = PKEY_FLAGS_MATCH_CUR_MKVP; 497 493 if (rc == -ENODEV) { 498 494 nr_apqns = ARRAY_SIZE(apqns); 499 495 rc = cca_findcard2(apqns, &nr_apqns, *card, *dom, 500 496 ZCRYPT_CEX3C, AES_MK_SET, 501 - 0, t->mkvp, xflags); 497 + NULL, t->mkvp, xflags); 502 498 if (!rc) 503 499 *flags = PKEY_FLAGS_MATCH_ALT_MKVP; 504 500 } 
··· 525 521 *keybitsize = PKEY_SIZE_AES_256; 526 522 rc = cca_findcard2(apqns, &nr_apqns, *card, *dom, 527 523 ZCRYPT_CEX6, AES_MK_SET, 528 - t->mkvp0, 0, xflags); 524 + t->mkvp0, NULL, xflags); 529 525 if (!rc) 530 526 *flags = PKEY_FLAGS_MATCH_CUR_MKVP; 531 527 if (rc == -ENODEV) { 532 528 nr_apqns = ARRAY_SIZE(apqns); 533 529 rc = cca_findcard2(apqns, &nr_apqns, *card, *dom, 534 530 ZCRYPT_CEX6, AES_MK_SET, 535 - 0, t->mkvp0, xflags); 531 + NULL, t->mkvp0, xflags); 536 532 if (!rc) 537 533 *flags = PKEY_FLAGS_MATCH_ALT_MKVP; 538 534 }
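The pkey_cca.c hunks above replace `u64 cur_mkvp, old_mkvp` values, where 0 doubled as "no filter requested", with `const u8 *` pointers, where NULL carries that meaning. A minimal userspace sketch (not the kernel code itself; the helper name and the fixed 8-byte width are assumptions for illustration) shows why the pointer sentinel is the better interface: a legitimately all-zero verification pattern can still be matched.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch: NULL means "caller requested no MKVP match",
 * so an all-zero 8-byte pattern remains a matchable value -- something
 * the old u64-with-0-as-sentinel interface could not express. */
static int mkvp_matches(const unsigned char *want, const unsigned char *have)
{
	if (!want)			/* no match requested */
		return 1;
	return memcmp(want, have, 8) == 0;
}
```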
drivers/s390/crypto/zcrypt_api.c
··· 854 854 struct ica_xcRB *xcrb) 855 855 { 856 856 bool userspace = xflags & ZCRYPT_XFLAG_USERSPACE; 857 - struct zcrypt_card *zc, *pref_zc; 858 - struct zcrypt_queue *zq, *pref_zq; 859 - struct ap_message ap_msg; 857 + unsigned int card, domain, func_code = 0; 860 858 unsigned int wgt = 0, pref_wgt = 0; 861 - unsigned int func_code = 0; 862 - unsigned short *domain, tdom; 859 + struct zcrypt_queue *zq, *pref_zq; 860 + struct zcrypt_card *zc, *pref_zc; 863 861 int cpen, qpen, qid = 0, rc; 862 + struct ap_message ap_msg; 864 863 struct module *mod; 865 864 866 865 trace_s390_zcrypt_req(xcrb, TB_ZSECSENDCPRB); ··· 877 878 print_hex_dump_debug("ccareq: ", DUMP_PREFIX_ADDRESS, 16, 1, 878 879 ap_msg.msg, ap_msg.len, false); 879 880 880 - tdom = *domain; 881 - if (perms != &ap_perms && tdom < AP_DOMAINS) { 881 + if (perms != &ap_perms && domain < AP_DOMAINS) { 882 882 if (ap_msg.flags & AP_MSG_FLAG_ADMIN) { 883 - if (!test_bit_inv(tdom, perms->adm)) { 883 + if (!test_bit_inv(domain, perms->adm)) { 884 884 rc = -ENODEV; 885 885 goto out; 886 886 } ··· 892 894 * If a valid target domain is set and this domain is NOT a usage 893 895 * domain but a control only domain, autoselect target domain. 
894 896 */ 895 - if (tdom < AP_DOMAINS && 896 - !ap_test_config_usage_domain(tdom) && 897 - ap_test_config_ctrl_domain(tdom)) 898 - tdom = AUTOSEL_DOM; 897 + if (domain < AP_DOMAINS && 898 + !ap_test_config_usage_domain(domain) && 899 + ap_test_config_ctrl_domain(domain)) 900 + domain = AUTOSEL_DOM; 899 901 900 902 pref_zc = NULL; 901 903 pref_zq = NULL; 904 + card = xcrb->user_defined; 902 905 spin_lock(&zcrypt_list_lock); 903 906 for_each_zcrypt_card(zc) { 904 907 /* Check for usable CCA card */ ··· 907 908 !zc->card->hwinfo.cca) 908 909 continue; 909 910 /* Check for user selected CCA card */ 910 - if (xcrb->user_defined != AUTOSELECT && 911 - xcrb->user_defined != zc->card->id) 911 + if (card != AUTOSELECT && card != zc->card->id) 912 912 continue; 913 913 /* check if request size exceeds card max msg size */ 914 914 if (ap_msg.len > zc->card->maxmsgsize) ··· 927 929 /* check for device usable and eligible */ 928 930 if (!zq->online || !zq->ops->send_cprb || 929 931 !ap_queue_usable(zq->queue) || 930 - (tdom != AUTOSEL_DOM && 931 - tdom != AP_QID_QUEUE(zq->queue->qid))) 932 + (domain != AUTOSEL_DOM && 933 + domain != AP_QID_QUEUE(zq->queue->qid))) 932 934 continue; 933 935 /* check if device node has admission for this queue */ 934 936 if (!zcrypt_check_queue(perms, ··· 951 953 952 954 if (!pref_zq) { 953 955 pr_debug("no match for address %02x.%04x => ENODEV\n", 954 - xcrb->user_defined, *domain); 956 + card, domain); 955 957 rc = -ENODEV; 956 958 goto out; 957 959 } 958 - 959 - /* in case of auto select, provide the correct domain */ 960 - qid = pref_zq->queue->qid; 961 - if (*domain == AUTOSEL_DOM) 962 - *domain = AP_QID_QUEUE(qid); 963 960 964 961 rc = pref_zq->ops->send_cprb(userspace, pref_zq, xcrb, &ap_msg); 965 962 if (!rc) { ··· 1213 1220 unsigned int wgt = 0, pref_wgt = 0; 1214 1221 unsigned int func_code = 0; 1215 1222 struct ap_message ap_msg; 1216 - unsigned int domain; 1217 1223 int qid = 0, rc = -ENODEV; 1218 1224 struct module *mod; 1219 1225 
··· 1221 1229 rc = ap_init_apmsg(&ap_msg, 0); 1222 1230 if (rc) 1223 1231 goto out; 1224 - rc = prep_rng_ap_msg(&ap_msg, &func_code, &domain); 1232 + rc = prep_rng_ap_msg(&ap_msg, &func_code, NULL); 1225 1233 if (rc) 1226 1234 goto out; 1227 1235
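The zcrypt_api.c rework above drops the `tdom` pointer dance and works on a local `domain` copy, so the user-supplied xcrb is no longer written back to on auto-select. The control-only-domain rule it preserves can be sketched as follows (a simplified stand-in: `is_usage`/`is_ctrl` replace the `ap_test_config_*_domain()` helpers, and the constants mirror the kernel's):

```c
#include <assert.h>

#define AP_DOMAINS	256
#define AUTOSEL_DOM	0xFFFFu

/* A valid target domain that is control-only (not a usage domain)
 * is silently turned into "auto-select". */
static unsigned int resolve_domain(unsigned int dom, int is_usage, int is_ctrl)
{
	if (dom < AP_DOMAINS && !is_usage && is_ctrl)
		return AUTOSEL_DOM;	/* let the dispatcher pick a queue */
	return dom;
}
```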
drivers/s390/crypto/zcrypt_ccamisc.c
··· 305 305 struct CPRBX *prepcblk) 306 306 { 307 307 memset(pxcrb, 0, sizeof(*pxcrb)); 308 - pxcrb->agent_ID = 0x4341; /* 'CA' */ 308 + memcpy(&pxcrb->agent_ID, "CA", 2); 309 309 pxcrb->user_defined = (cardnr == 0xFFFF ? AUTOSELECT : cardnr); 310 310 pxcrb->request_control_blk_length = 311 311 preqcblk->cprb_len + preqcblk->req_parml; ··· 1710 1710 EXPORT_SYMBOL(cca_get_info); 1711 1711 1712 1712 int cca_findcard2(u32 *apqns, u32 *nr_apqns, u16 cardnr, u16 domain, 1713 - int minhwtype, int mktype, u64 cur_mkvp, u64 old_mkvp, 1714 - u32 xflags) 1713 + int minhwtype, int mktype, 1714 + const u8 *ptr_cur_mkvp, const u8 *ptr_old_mkvp, u32 xflags) 1715 1715 { 1716 1716 struct zcrypt_device_status_ext *device_status; 1717 1717 int i, card, dom, curmatch, oldmatch; ··· 1755 1755 /* check min hardware type */ 1756 1756 if (minhwtype > 0 && minhwtype > ci.hwtype) 1757 1757 continue; 1758 - if (cur_mkvp || old_mkvp) { 1758 + if (ptr_cur_mkvp || ptr_old_mkvp) { 1759 1759 /* check mkvps */ 1760 1760 curmatch = oldmatch = 0; 1761 1761 if (mktype == AES_MK_SET) { 1762 - if (cur_mkvp && cur_mkvp == ci.cur_aes_mkvp) 1762 + if (ptr_cur_mkvp && 1763 + !memcmp(ptr_cur_mkvp, ci.cur_aes_mkvp, 1764 + sizeof(ci.cur_aes_mkvp))) 1763 1765 curmatch = 1; 1764 - if (old_mkvp && ci.old_aes_mk_state == '2' && 1765 - old_mkvp == ci.old_aes_mkvp) 1766 + if (ptr_old_mkvp && 1767 + ci.old_aes_mk_state == '2' && 1768 + !memcmp(ptr_old_mkvp, ci.old_aes_mkvp, 1769 + sizeof(ci.old_aes_mkvp))) 1766 1770 oldmatch = 1; 1767 1771 } else { 1768 - if (cur_mkvp && cur_mkvp == ci.cur_apka_mkvp) 1772 + if (ptr_cur_mkvp && 1773 + !memcmp(ptr_cur_mkvp, ci.cur_apka_mkvp, 1774 + sizeof(ci.cur_apka_mkvp))) 1769 1775 curmatch = 1; 1770 - if (old_mkvp && ci.old_apka_mk_state == '2' && 1771 - old_mkvp == ci.old_apka_mkvp) 1776 + if (ptr_old_mkvp && 1777 + ci.old_apka_mk_state == '2' && 1778 + !memcmp(ptr_old_mkvp, ci.old_apka_mkvp, 1779 + sizeof(ci.old_apka_mkvp))) 1772 1780 oldmatch = 1; 1773 1781 } 1774 1782 if 
(curmatch + oldmatch < 1)
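The `agent_ID` change above swaps an integer assignment (`0x4341`) for a `memcpy` of the string `"CA"`. A small sketch of the rationale (stand-in types, not the kernel struct): the `memcpy` form produces the bytes `'C','A'` on any host, whereas storing `0x4341` into a two-byte field only does so on a big-endian machine such as s390, so the new form states the intent without relying on byte order.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical helper: set a two-byte agent id field. Copying the
 * characters is endianness-independent; 'C' is 0x43 and 'A' is 0x41,
 * which is exactly what 0x4341 happens to store on big-endian. */
static void set_agent_id(uint8_t id[2])
{
	memcpy(id, "CA", 2);	/* same bytes on any endianness */
}
```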
drivers/s390/crypto/zcrypt_ccamisc.h
··· 47 47 u8 res1[1]; 48 48 u8 flag; /* key flags */ 49 49 u8 res2[1]; 50 - u64 mkvp; /* master key verification pattern */ 50 + u8 mkvp[8]; /* master key verification pattern */ 51 51 u8 key[32]; /* key value (encrypted) */ 52 52 u8 cv[8]; /* control vector */ 53 53 u16 bitsize; /* key bit size */ ··· 64 64 u8 res1[3]; 65 65 u8 kms; /* key material state, 0x03 means wrapped with MK */ 66 66 u8 kvpt; /* key verification pattern type, should be 0x01 */ 67 - u64 mkvp0; /* master key verification pattern, lo part */ 68 - u64 mkvp1; /* master key verification pattern, hi part (unused) */ 67 + u8 mkvp0[8]; /* master key verification pattern, lo part */ 68 + u8 mkvp1[8]; /* master key verification pattern, hi part (unused) */ 69 69 u8 eskwm; /* encrypted section key wrapping method */ 70 70 u8 hashalg; /* hash algorithmus used for wrapping key */ 71 71 u8 plfver; /* pay load format version */ ··· 113 113 u8 ksrc; /* key source */ 114 114 u16 pbitlen; /* length of prime p in bits */ 115 115 u16 ibmadlen; /* IBM associated data length in bytes */ 116 - u64 mkvp; /* master key verification pattern */ 116 + u8 mkvp[8]; /* master key verification pattern */ 117 117 u8 opk[48]; /* encrypted object protection key data */ 118 118 u16 adatalen; /* associated data length in bytes */ 119 119 u16 fseclen; /* formatted section length in bytes */ ··· 227 227 * If no apqn meeting the criteria is found, -ENODEV is returned. 
228 228 */ 229 229 int cca_findcard2(u32 *apqns, u32 *nr_apqns, u16 cardnr, u16 domain, 230 - int minhwtype, int mktype, u64 cur_mkvp, u64 old_mkvp, 231 - u32 xflags); 230 + int minhwtype, int mktype, 231 + const u8 *cur_mkvp, const u8 *old_mkvp, u32 xflags); 232 232 233 233 #define AES_MK_SET 0 234 234 #define APKA_MK_SET 1 ··· 245 245 char new_asym_mk_state; /* '1' empty, '2' partially full, '3' full */ 246 246 char cur_asym_mk_state; /* '1' invalid, '2' valid */ 247 247 char old_asym_mk_state; /* '1' invalid, '2' valid */ 248 - u64 new_aes_mkvp; /* truncated sha256 of new aes master key */ 249 - u64 cur_aes_mkvp; /* truncated sha256 of current aes master key */ 250 - u64 old_aes_mkvp; /* truncated sha256 of old aes master key */ 251 - u64 new_apka_mkvp; /* truncated sha256 of new apka master key */ 252 - u64 cur_apka_mkvp; /* truncated sha256 of current apka mk */ 253 - u64 old_apka_mkvp; /* truncated sha256 of old apka mk */ 248 + u8 new_aes_mkvp[8]; /* truncated sha256 of new aes master key */ 249 + u8 cur_aes_mkvp[8]; /* truncated sha256 of current aes master key */ 250 + u8 old_aes_mkvp[8]; /* truncated sha256 of old aes master key */ 251 + u8 new_apka_mkvp[8]; /* truncated sha256 of new apka master key */ 252 + u8 cur_apka_mkvp[8]; /* truncated sha256 of current apka mk */ 253 + u8 old_apka_mkvp[8]; /* truncated sha256 of old apka mk */ 254 254 u8 new_asym_mkvp[16]; /* verify pattern of new asym master key */ 255 255 u8 cur_asym_mkvp[16]; /* verify pattern of current asym master key */ 256 256 u8 old_asym_mkvp[16]; /* verify pattern of old asym master key */
drivers/s390/crypto/zcrypt_cex4.c
··· 102 102 .attrs = cca_card_attrs, 103 103 }; 104 104 105 - /* 106 - * CCA queue additional device attributes 107 - */ 105 + /* 106 + * Simple helper macro to format raw mkvp byte array into hex 107 + */ 108 + #define MKVP_TO_HEXBUF(mkvp, buf) \ 109 + do { \ 110 + BUILD_BUG_ON(sizeof(buf) <= 2 * sizeof(mkvp)); \ 111 + bin2hex(buf, mkvp, sizeof(mkvp)); \ 112 + buf[2 * sizeof(mkvp)] = '\0'; \ 113 + } while (0) 114 + 115 + /* 116 + * CCA queue additional device attributes 117 + */ 108 118 static ssize_t cca_mkvps_show(struct device *dev, 109 119 struct device_attribute *attr, 110 120 char *buf) ··· 123 113 static const char * const cao_state[] = { "invalid", "valid" }; 124 114 struct zcrypt_queue *zq = dev_get_drvdata(dev); 125 115 struct cca_info ci; 116 + char hexbuf[2 * 16 + 1]; 126 117 int n = 0; 127 118 128 119 memset(&ci, 0, sizeof(ci)); ··· 132 121 AP_QID_QUEUE(zq->queue->qid), 133 122 &ci, 0); 134 123 135 - if (ci.new_aes_mk_state >= '1' && ci.new_aes_mk_state <= '3') 136 - n += sysfs_emit_at(buf, n, "AES NEW: %s 0x%016llx\n", 124 + if (ci.new_aes_mk_state >= '1' && ci.new_aes_mk_state <= '3') { 125 + MKVP_TO_HEXBUF(ci.new_aes_mkvp, hexbuf); 126 + n += sysfs_emit_at(buf, n, "AES NEW: %s 0x%s\n", 137 127 new_state[ci.new_aes_mk_state - '1'], 138 - ci.new_aes_mkvp); 139 - else 128 + hexbuf); 129 + } else { 140 130 n += sysfs_emit_at(buf, n, "AES NEW: - -\n"); 131 + } 141 132 142 - if (ci.cur_aes_mk_state >= '1' && ci.cur_aes_mk_state <= '2') 143 - n += sysfs_emit_at(buf, n, "AES CUR: %s 0x%016llx\n", 133 + if (ci.cur_aes_mk_state >= '1' && ci.cur_aes_mk_state <= '2') { 134 + MKVP_TO_HEXBUF(ci.cur_aes_mkvp, hexbuf); 135 + n += sysfs_emit_at(buf, n, "AES CUR: %s 0x%s\n", 144 136 cao_state[ci.cur_aes_mk_state - '1'], 145 - ci.cur_aes_mkvp); 146 - else 137 + hexbuf); 138 + } else { 147 139 n += sysfs_emit_at(buf, n, "AES CUR: - -\n"); 140 + } 148 141 149 - if (ci.old_aes_mk_state >= '1' && ci.old_aes_mk_state <= '2') 150 - n += sysfs_emit_at(buf, n, "AES OLD: %s 
0x%016llx\n", 142 + if (ci.old_aes_mk_state >= '1' && ci.old_aes_mk_state <= '2') { 143 + MKVP_TO_HEXBUF(ci.old_aes_mkvp, hexbuf); 144 + n += sysfs_emit_at(buf, n, "AES OLD: %s 0x%s\n", 151 145 cao_state[ci.old_aes_mk_state - '1'], 152 - ci.old_aes_mkvp); 153 - else 146 + hexbuf); 147 + } else { 154 148 n += sysfs_emit_at(buf, n, "AES OLD: - -\n"); 149 + } 155 150 156 - if (ci.new_apka_mk_state >= '1' && ci.new_apka_mk_state <= '3') 157 - n += sysfs_emit_at(buf, n, "APKA NEW: %s 0x%016llx\n", 151 + if (ci.new_apka_mk_state >= '1' && ci.new_apka_mk_state <= '3') { 152 + MKVP_TO_HEXBUF(ci.new_apka_mkvp, hexbuf); 153 + n += sysfs_emit_at(buf, n, "APKA NEW: %s 0x%s\n", 158 154 new_state[ci.new_apka_mk_state - '1'], 159 - ci.new_apka_mkvp); 160 - else 155 + hexbuf); 156 + } else { 161 157 n += sysfs_emit_at(buf, n, "APKA NEW: - -\n"); 158 + } 162 159 163 - if (ci.cur_apka_mk_state >= '1' && ci.cur_apka_mk_state <= '2') 164 - n += sysfs_emit_at(buf, n, "APKA CUR: %s 0x%016llx\n", 160 + if (ci.cur_apka_mk_state >= '1' && ci.cur_apka_mk_state <= '2') { 161 + MKVP_TO_HEXBUF(ci.cur_apka_mkvp, hexbuf); 162 + n += sysfs_emit_at(buf, n, "APKA CUR: %s 0x%s\n", 165 163 cao_state[ci.cur_apka_mk_state - '1'], 166 - ci.cur_apka_mkvp); 167 - else 164 + hexbuf); 165 + } else { 168 166 n += sysfs_emit_at(buf, n, "APKA CUR: - -\n"); 167 + } 169 168 170 - if (ci.old_apka_mk_state >= '1' && ci.old_apka_mk_state <= '2') 171 - n += sysfs_emit_at(buf, n, "APKA OLD: %s 0x%016llx\n", 169 + if (ci.old_apka_mk_state >= '1' && ci.old_apka_mk_state <= '2') { 170 + MKVP_TO_HEXBUF(ci.old_apka_mkvp, hexbuf); 171 + n += sysfs_emit_at(buf, n, "APKA OLD: %s 0x%s\n", 172 172 cao_state[ci.old_apka_mk_state - '1'], 173 - ci.old_apka_mkvp); 174 - else 173 + hexbuf); 174 + } else { 175 175 n += sysfs_emit_at(buf, n, "APKA OLD: - -\n"); 176 + } 176 177 177 - if (ci.new_asym_mk_state >= '1' && ci.new_asym_mk_state <= '3') 178 - n += sysfs_emit_at(buf, n, "ASYM NEW: %s 0x%016llx%016llx\n", 178 + if 
(ci.new_asym_mk_state >= '1' && ci.new_asym_mk_state <= '3') { 179 + MKVP_TO_HEXBUF(ci.new_asym_mkvp, hexbuf); 180 + n += sysfs_emit_at(buf, n, "ASYM NEW: %s 0x%s\n", 179 181 new_state[ci.new_asym_mk_state - '1'], 180 - *((u64 *)(ci.new_asym_mkvp)), 181 - *((u64 *)(ci.new_asym_mkvp + sizeof(u64)))); 182 - else 182 + hexbuf); 183 + } else { 183 184 n += sysfs_emit_at(buf, n, "ASYM NEW: - -\n"); 185 + } 184 186 185 - if (ci.cur_asym_mk_state >= '1' && ci.cur_asym_mk_state <= '2') 186 - n += sysfs_emit_at(buf, n, "ASYM CUR: %s 0x%016llx%016llx\n", 187 + if (ci.cur_asym_mk_state >= '1' && ci.cur_asym_mk_state <= '2') { 188 + MKVP_TO_HEXBUF(ci.cur_asym_mkvp, hexbuf); 189 + n += sysfs_emit_at(buf, n, "ASYM CUR: %s 0x%s\n", 187 190 cao_state[ci.cur_asym_mk_state - '1'], 188 - *((u64 *)(ci.cur_asym_mkvp)), 189 - *((u64 *)(ci.cur_asym_mkvp + sizeof(u64)))); 190 - else 191 + hexbuf); 192 + } else { 191 193 n += sysfs_emit_at(buf, n, "ASYM CUR: - -\n"); 194 + } 192 195 193 - if (ci.old_asym_mk_state >= '1' && ci.old_asym_mk_state <= '2') 194 - n += sysfs_emit_at(buf, n, "ASYM OLD: %s 0x%016llx%016llx\n", 196 + if (ci.old_asym_mk_state >= '1' && ci.old_asym_mk_state <= '2') { 197 + MKVP_TO_HEXBUF(ci.old_asym_mkvp, hexbuf); 198 + n += sysfs_emit_at(buf, n, "ASYM OLD: %s 0x%s\n", 195 199 cao_state[ci.old_asym_mk_state - '1'], 196 - *((u64 *)(ci.old_asym_mkvp)), 197 - *((u64 *)(ci.old_asym_mkvp + sizeof(u64)))); 198 - else 200 + hexbuf); 201 + } else { 199 202 n += sysfs_emit_at(buf, n, "ASYM OLD: - -\n"); 203 + } 200 204 201 205 return n; 202 206 }
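The `MKVP_TO_HEXBUF` macro introduced above formats the raw MKVP byte arrays via the kernel's `bin2hex()` with a `BUILD_BUG_ON` size check. A userspace sketch of the same idea (the local `bin2hex()` is a stand-in for the kernel helper, and `_Static_assert` stands in for `BUILD_BUG_ON`):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Render a raw byte array as a NUL-terminated lowercase hex string
 * into a caller-provided buffer checked at compile time. */
static void bin2hex(char *dst, const unsigned char *src, size_t count)
{
	static const char hex[] = "0123456789abcdef";

	while (count--) {
		*dst++ = hex[*src >> 4];
		*dst++ = hex[*src++ & 0x0f];
	}
}

#define MKVP_TO_HEXBUF(mkvp, buf) \
	do { \
		/* two hex chars per byte plus the terminating NUL */ \
		_Static_assert(sizeof(buf) > 2 * sizeof(mkvp), "buf too small"); \
		bin2hex(buf, mkvp, sizeof(mkvp)); \
		(buf)[2 * sizeof(mkvp)] = '\0'; \
	} while (0)
```

Note that both size checks rely on `mkvp` and `buf` being true arrays in scope, not decayed pointers, which matches how the sysfs show function above declares `hexbuf`.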
drivers/s390/crypto/zcrypt_error.h
··· 78 78 static inline int convert_error(struct zcrypt_queue *zq, 79 79 struct ap_message *reply) 80 80 { 81 - struct error_hdr *ehdr = reply->msg; 82 - int card = AP_QID_CARD(zq->queue->qid); 83 81 int queue = AP_QID_QUEUE(zq->queue->qid); 82 + int card = AP_QID_CARD(zq->queue->qid); 83 + struct error_hdr *ehdr = reply->msg; 84 + struct { 85 + struct type86_hdr hdr; 86 + struct type86_fmt2_ext fmt2; 87 + } __packed * t86hdr = reply->msg; 84 88 85 89 switch (ehdr->reply_code) { 86 90 case REP82_ERROR_INVALID_MSG_LEN: /* 0x23 */ ··· 104 100 /* RY indicates malformed request */ 105 101 if (ehdr->reply_code == REP82_ERROR_FILTERED_BY_HYPERVISOR && 106 102 ehdr->type == TYPE86_RSP_CODE) { 107 - struct { 108 - struct type86_hdr hdr; 109 - struct type86_fmt2_ext fmt2; 110 - } __packed * head = reply->msg; 111 - unsigned int apfs = *((u32 *)head->fmt2.apfs); 112 - 113 103 ZCRYPT_DBF_WARN("%s dev=%02x.%04x RY=0x%02x apfs=0x%x => rc=EINVAL\n", 114 104 __func__, card, queue, 115 - ehdr->reply_code, apfs); 105 + ehdr->reply_code, t86hdr->fmt2.apfs); 116 106 } else { 117 107 ZCRYPT_DBF_WARN("%s dev=%02x.%04x RY=0x%02x => rc=EINVAL\n", 118 - __func__, card, queue, 119 - ehdr->reply_code); 108 + __func__, card, queue, ehdr->reply_code); 120 109 } 121 110 return -EINVAL; 122 111 case REP82_ERROR_MACHINE_FAILURE: /* 0x10 */ ··· 122 125 /* For type 86 response show the apfs value (failure reason) */ 123 126 if (ehdr->reply_code == REP82_ERROR_TRANSPORT_FAIL && 124 127 ehdr->type == TYPE86_RSP_CODE) { 125 - struct { 126 - struct type86_hdr hdr; 127 - struct type86_fmt2_ext fmt2; 128 - } __packed * head = reply->msg; 129 - unsigned int apfs = *((u32 *)head->fmt2.apfs); 130 - 131 128 ZCRYPT_DBF_WARN( 132 129 "%s dev=%02x.%04x RY=0x%02x apfs=0x%x => bus rescan, rc=EAGAIN\n", 133 - __func__, card, queue, ehdr->reply_code, apfs); 130 + __func__, card, queue, ehdr->reply_code, 131 + t86hdr->fmt2.apfs); 134 132 } else { 135 133 ZCRYPT_DBF_WARN("%s dev=%02x.%04x RY=0x%02x => bus rescan, 
rc=EAGAIN\n", 136 134 __func__, card, queue,
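The zcrypt_error.h hunk hoists the type86 overlay struct to the top of `convert_error()` and reads `apfs` as an integer field instead of the old `*((u32 *)head->fmt2.apfs)` cast. A simplified stand-in for that pattern (the layout below is illustrative, not the real type86 reply; `memcpy` replaces the kernel's in-place overlay to stay alignment-safe in portable C):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Declare the reply layout once, then read the status directly. */
struct fmt2_ext {
	unsigned char reserved[4];
	uint32_t apfs;			/* final status, was u8 apfs[4] */
} __attribute__((packed));

static uint32_t reply_apfs(const void *msg)
{
	struct fmt2_ext fmt2;

	/* copying out tolerates any alignment of the raw reply buffer */
	memcpy(&fmt2, msg, sizeof(fmt2));
	return fmt2.apfs;
}
```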
drivers/s390/crypto/zcrypt_msgtype6.c
··· 65 65 static const struct CPRBX static_cprbx = { 66 66 .cprb_len = 0x00DC, 67 67 .cprb_ver_id = 0x02, 68 - .func_id = {0x54, 0x32}, 68 + .func_id = {'T', '2'}, 69 69 }; 70 70 71 71 int speed_idx_cca(int req_type) ··· 328 328 static int xcrb_msg_to_type6cprb_msgx(bool userspace, struct ap_message *ap_msg, 329 329 struct ica_xcRB *xcrb, 330 330 unsigned int *fcode, 331 - unsigned short **dom) 331 + unsigned int *domain) 332 332 { 333 333 static struct type6_hdr static_type6_hdrX = { 334 334 .type = 0x06, ··· 412 412 sizeof(msg->hdr.function_code)); 413 413 414 414 *fcode = (msg->hdr.function_code[0] << 8) | msg->hdr.function_code[1]; 415 - *dom = (unsigned short *)&msg->cprbx.domain; 415 + if (domain) 416 + *domain = msg->cprbx.domain; 416 417 417 418 /* check subfunction, US and AU need special flag with NQAP */ 418 419 if (memcmp(function_code, "US", 2) == 0 || ··· 455 454 .type = 0x06, 456 455 .rqid = {0x00, 0x01}, 457 456 .function_code = {0x00, 0x00}, 458 - .agent_id[0] = 0x58, /* {'X'} */ 459 - .agent_id[1] = 0x43, /* {'C'} */ 457 + .agent_id = {'X', 'C'}, 460 458 .offset1 = 0x00000058, 461 459 }; 462 460 ··· 529 529 else 530 530 ap_msg->flags |= AP_MSG_FLAG_USAGE; 531 531 532 - *domain = msg->cprbx.target_id; 532 + if (domain) 533 + *domain = msg->cprbx.target_id; 533 534 534 535 return 0; 535 536 } ··· 752 751 return convert_error(zq, reply); 753 752 case TYPE86_RSP_CODE: 754 753 if (msg->hdr.reply_code) { 755 - memcpy(&xcrb->status, msg->fmt2.apfs, sizeof(u32)); 754 + xcrb->status = msg->fmt2.apfs; 756 755 return convert_error(zq, reply); 757 756 } 758 757 if (msg->cprbx.cprb_ver_id == 0x02) ··· 1053 1052 */ 1054 1053 int prep_cca_ap_msg(bool userspace, struct ica_xcRB *xcrb, 1055 1054 struct ap_message *ap_msg, 1056 - unsigned int *func_code, unsigned short **dom) 1055 + unsigned int *func_code, unsigned int *domain) 1057 1056 { 1058 1057 struct ap_response_type *resp_type = &ap_msg->response; 1059 1058 ··· 1061 1060 ap_msg->psmid = (((unsigned 
long)current->pid) << 32) + 1062 1061 atomic_inc_return(&zcrypt_step); 1063 1062 resp_type->type = CEXXC_RESPONSE_TYPE_XCRB; 1064 - return xcrb_msg_to_type6cprb_msgx(userspace, ap_msg, xcrb, func_code, dom); 1063 + return xcrb_msg_to_type6cprb_msgx(userspace, ap_msg, 1064 + xcrb, func_code, domain); 1065 1065 } 1066 1066 1067 1067 /* ··· 1106 1104 } 1107 1105 msg->hdr.fromcardlen1 -= delta; 1108 1106 } 1107 + 1108 + /* update domain field within the CPRB struct */ 1109 + msg->cprbx.domain = AP_QID_QUEUE(zq->queue->qid); 1109 1110 1110 1111 init_completion(&resp_type->work); 1111 1112 rc = ap_queue_message(zq->queue, ap_msg); ··· 1215 1210 lfmt = 1; /* length format #1 */ 1216 1211 } 1217 1212 payload_hdr = (struct pld_hdr *)((&msg->pld_lenfmt) + lfmt); 1218 - payload_hdr->dom_val = (unsigned int) 1219 - AP_QID_QUEUE(zq->queue->qid); 1213 + payload_hdr->dom_val = AP_QID_QUEUE(zq->queue->qid); 1220 1214 } 1221 1215 1222 1216 /* ··· 1248 1244 AP_QID_CARD(zq->queue->qid), 1249 1245 AP_QID_QUEUE(zq->queue->qid), rc); 1250 1246 return rc; 1247 + } 1248 + 1249 + /* 1250 + * Prepare a type6 CPRB message for random number generation 1251 + * 1252 + * @ap_dev: AP device pointer 1253 + * @ap_msg: pointer to AP message 1254 + */ 1255 + static inline void rng_type6cprb_msgx(struct ap_message *ap_msg, 1256 + unsigned int random_number_length, 1257 + unsigned int *domain) 1258 + { 1259 + struct { 1260 + struct type6_hdr hdr; 1261 + struct CPRBX cprbx; 1262 + char function_code[2]; 1263 + short int rule_length; 1264 + char rule[8]; 1265 + short int verb_length; 1266 + short int key_length; 1267 + } __packed * msg = ap_msg->msg; 1268 + static struct type6_hdr static_type6_hdrX = { 1269 + .type = 0x06, 1270 + .offset1 = 0x00000058, 1271 + .agent_id = {'C', 'A'}, 1272 + .function_code = {'R', 'L'}, 1273 + .tocardlen1 = sizeof(*msg) - sizeof(msg->hdr), 1274 + .fromcardlen1 = sizeof(*msg) - sizeof(msg->hdr), 1275 + }; 1276 + static struct CPRBX local_cprbx = { 1277 + .cprb_len = 
0x00dc, 1278 + .cprb_ver_id = 0x02, 1279 + .func_id = {'T', '2'}, 1280 + .req_parml = sizeof(*msg) - sizeof(msg->hdr) - 1281 + sizeof(msg->cprbx), 1282 + .rpl_msgbl = sizeof(*msg) - sizeof(msg->hdr), 1283 + }; 1284 + 1285 + msg->hdr = static_type6_hdrX; 1286 + msg->hdr.fromcardlen2 = random_number_length; 1287 + msg->cprbx = local_cprbx; 1288 + msg->cprbx.rpl_datal = random_number_length; 1289 + memcpy(msg->function_code, msg->hdr.function_code, 0x02); 1290 + msg->rule_length = 0x0a; 1291 + memcpy(msg->rule, "RANDOM ", 8); 1292 + msg->verb_length = 0x02; 1293 + msg->key_length = 0x02; 1294 + ap_msg->len = sizeof(*msg); 1295 + if (domain) 1296 + *domain = msg->cprbx.domain; 1251 1297 } 1252 1298 1253 1299 /*
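Throughout the msgtype6 rework above, the domain out-parameter becomes an `unsigned int *` that callers may pass as NULL (as the rng path now does). A minimal sketch of that NULL-tolerant out-parameter pattern (the function name and values are hypothetical):

```c
#include <assert.h>
#include <stddef.h>

/* Callers that do not need the resolved domain pass NULL instead of
 * supplying a dummy variable. */
static int prep_msg(unsigned int *func_code, unsigned int *domain)
{
	*func_code = 0x524c;	/* hypothetical function code */
	if (domain)		/* out-parameter is optional */
		*domain = 6;
	return 0;
}
```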
drivers/s390/crypto/zcrypt_msgtype6.h
··· 34 34 unsigned char right[4]; /* 0x00000000 */ 35 35 unsigned char reserved3[2]; /* 0x0000 */ 36 36 unsigned char reserved4[2]; /* 0x0000 */ 37 - unsigned char apfs[4]; /* 0x00000000 */ 37 + unsigned int apfs; /* 0x00000000 */ 38 38 unsigned int offset1; /* 0x00000058 (offset to CPRB) */ 39 39 unsigned int offset2; /* 0x00000000 */ 40 40 unsigned int offset3; /* 0x00000000 */ ··· 83 83 84 84 struct type86_fmt2_ext { 85 85 unsigned char reserved[4]; /* 0x00000000 */ 86 - unsigned char apfs[4]; /* final status */ 86 + unsigned int apfs; /* final status */ 87 87 unsigned int count1; /* length of CPRB + parameters */ 88 88 unsigned int offset1; /* offset to CPRB */ 89 89 unsigned int count2; /* 0x00000000 */ ··· 96 96 97 97 int prep_cca_ap_msg(bool userspace, struct ica_xcRB *xcrb, 98 98 struct ap_message *ap_msg, 99 - unsigned int *fc, unsigned short **dom); 99 + unsigned int *fc, unsigned int *dom); 100 100 int prep_ep11_ap_msg(bool userspace, struct ep11_urb *xcrb, 101 101 struct ap_message *ap_msg, 102 102 unsigned int *fc, unsigned int *dom); ··· 109 109 110 110 int speed_idx_cca(int); 111 111 int speed_idx_ep11(int); 112 - 113 - /** 114 - * Prepare a type6 CPRB message for random number generation 115 - * 116 - * @ap_dev: AP device pointer 117 - * @ap_msg: pointer to AP message 118 - */ 119 - static inline void rng_type6cprb_msgx(struct ap_message *ap_msg, 120 - unsigned int random_number_length, 121 - unsigned int *domain) 122 - { 123 - struct { 124 - struct type6_hdr hdr; 125 - struct CPRBX cprbx; 126 - char function_code[2]; 127 - short int rule_length; 128 - char rule[8]; 129 - short int verb_length; 130 - short int key_length; 131 - } __packed * msg = ap_msg->msg; 132 - static struct type6_hdr static_type6_hdrX = { 133 - .type = 0x06, 134 - .offset1 = 0x00000058, 135 - .agent_id = {'C', 'A'}, 136 - .function_code = {'R', 'L'}, 137 - .tocardlen1 = sizeof(*msg) - sizeof(msg->hdr), 138 - .fromcardlen1 = sizeof(*msg) - sizeof(msg->hdr), 139 - }; 140 - static 
struct CPRBX local_cprbx = { 141 - .cprb_len = 0x00dc, 142 - .cprb_ver_id = 0x02, 143 - .func_id = {0x54, 0x32}, 144 - .req_parml = sizeof(*msg) - sizeof(msg->hdr) - 145 - sizeof(msg->cprbx), 146 - .rpl_msgbl = sizeof(*msg) - sizeof(msg->hdr), 147 - }; 148 - 149 - msg->hdr = static_type6_hdrX; 150 - msg->hdr.fromcardlen2 = random_number_length; 151 - msg->cprbx = local_cprbx; 152 - msg->cprbx.rpl_datal = random_number_length; 153 - memcpy(msg->function_code, msg->hdr.function_code, 0x02); 154 - msg->rule_length = 0x0a; 155 - memcpy(msg->rule, "RANDOM ", 8); 156 - msg->verb_length = 0x02; 157 - msg->key_length = 0x02; 158 - ap_msg->len = sizeof(*msg); 159 - *domain = (unsigned short)msg->cprbx.domain; 160 - } 161 112 162 113 void zcrypt_msgtype6_init(void); 163 114 void zcrypt_msgtype6_exit(void);
mm/page_table_check.c
··· 151 151 if (&init_mm == mm) 152 152 return; 153 153 154 - if (pte_user_accessible_page(pte, addr)) { 154 + if (pte_user_accessible_page(mm, addr, pte)) 155 155 page_table_check_clear(pte_pfn(pte), PAGE_SIZE >> PAGE_SHIFT); 156 - } 157 156 } 158 157 EXPORT_SYMBOL(__page_table_check_pte_clear); 159 158 ··· 162 163 if (&init_mm == mm) 163 164 return; 164 165 165 - if (pmd_user_accessible_page(pmd, addr)) { 166 + if (pmd_user_accessible_page(mm, addr, pmd)) 166 167 page_table_check_clear(pmd_pfn(pmd), PMD_SIZE >> PAGE_SHIFT); 167 - } 168 168 } 169 169 EXPORT_SYMBOL(__page_table_check_pmd_clear); 170 170 ··· 173 175 if (&init_mm == mm) 174 176 return; 175 177 176 - if (pud_user_accessible_page(pud, addr)) { 178 + if (pud_user_accessible_page(mm, addr, pud)) 177 179 page_table_check_clear(pud_pfn(pud), PUD_SIZE >> PAGE_SHIFT); 178 - } 179 180 } 180 181 EXPORT_SYMBOL(__page_table_check_pud_clear); 181 182 ··· 208 211 209 212 for (i = 0; i < nr; i++) 210 213 __page_table_check_pte_clear(mm, addr + PAGE_SIZE * i, ptep_get(ptep + i)); 211 - if (pte_user_accessible_page(pte, addr)) 214 + if (pte_user_accessible_page(mm, addr, pte)) 212 215 page_table_check_set(pte_pfn(pte), nr, pte_write(pte)); 213 216 } 214 217 EXPORT_SYMBOL(__page_table_check_ptes_set); ··· 238 241 239 242 for (i = 0; i < nr; i++) 240 243 __page_table_check_pmd_clear(mm, addr + PMD_SIZE * i, *(pmdp + i)); 241 - if (pmd_user_accessible_page(pmd, addr)) 244 + if (pmd_user_accessible_page(mm, addr, pmd)) 242 245 page_table_check_set(pmd_pfn(pmd), stride * nr, pmd_write(pmd)); 243 246 } 244 247 EXPORT_SYMBOL(__page_table_check_pmds_set); ··· 254 257 255 258 for (i = 0; i < nr; i++) 256 259 __page_table_check_pud_clear(mm, addr + PUD_SIZE * i, *(pudp + i)); 257 - if (pud_user_accessible_page(pud, addr)) 260 + if (pud_user_accessible_page(mm, addr, pud)) 258 261 page_table_check_set(pud_pfn(pud), stride * nr, pud_write(pud)); 259 262 } 260 263 EXPORT_SYMBOL(__page_table_check_puds_set);
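The page_table_check.c hunks switch every `pxx_user_accessible_page()` call to the new `(mm, addr, entry)` signature, since s390 can only tell user from kernel PTEs via the mm. A simplified stand-in for the reworked callback contract (the struct, the typedef, and the "present" bit below are hypothetical, not the kernel's definitions):

```c
#include <assert.h>

/* Toy stand-ins: the real mm_struct and pte_t are far richer. */
struct mm_struct { int is_init_mm; };
typedef unsigned long pte_t;

static int pte_user_accessible_page(struct mm_struct *mm,
				    unsigned long addr, pte_t pte)
{
	(void)addr;
	/* user-accessible: a present entry not owned by the kernel's mm */
	return !mm->is_init_mm && (pte & 1UL);
}
```

The callers in page_table_check.c already bail out when `mm == &init_mm`, so the mm argument mainly lets architectures like s390 implement the check at all.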