Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'tsm-for-6.19' of git://git.kernel.org/pub/scm/linux/kernel/git/devsec/tsm

Pull PCIe Link Encryption and Device Authentication from Dan Williams:
"New PCI infrastructure and one architecture implementation for PCIe
link encryption establishment via platform firmware services.

This work is the result of multiple vendors coming to consensus on
some core infrastructure (thanks Alexey, Yilun, and Aneesh!), and
three vendor implementations, although only one is included in this
pull. The PCI core changes have an ack from Bjorn, the crypto/ccp/
changes have an ack from Tom, and the iommu/amd/ changes have an ack
from Joerg.

PCIe link encryption is made possible by the soup of acronyms
mentioned in the shortlog below. Link Integrity and Data Encryption
(IDE) is a protocol for installing keys in the transmitter and
receiver at each end of a link. That protocol is transported over Data
Object Exchange (DOE) mailboxes using PCI configuration requests.
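For reference, every DOE data object is prefixed by a two-DWORD header carrying a vendor ID, an object type, and a length in DWORDs. The sketch below models that header packing (field layout as recalled from the PCIe DOE ECN; this is an illustrative userspace model, not kernel code):

```c
/* Illustrative model of the two-DWORD DOE data object header.
 * Field positions follow the PCIe DOE ECN as best recalled here;
 * consult the spec before relying on them. */
#include <assert.h>
#include <stdint.h>

#define DOE_HDR1_VID_MASK   0xffffu   /* header DW0, bits 15:0: Vendor ID */
#define DOE_HDR1_TYPE_SHIFT 16        /* header DW0, bits 23:16: object type */
#define DOE_HDR2_LEN_MASK   0x3ffffu  /* header DW1, bits 17:0: length in DWORDs */

static inline uint32_t doe_hdr1(uint16_t vid, uint8_t type)
{
	return (uint32_t)vid | ((uint32_t)type << DOE_HDR1_TYPE_SHIFT);
}

static inline uint16_t doe_hdr1_vid(uint32_t hdr1)
{
	return hdr1 & DOE_HDR1_VID_MASK;
}

static inline uint8_t doe_hdr1_type(uint32_t hdr1)
{
	return (hdr1 >> DOE_HDR1_TYPE_SHIFT) & 0xff;
}

static inline uint32_t doe_hdr2(uint32_t len_dw)
{
	return len_dw & DOE_HDR2_LEN_MASK;
}
```

A CMA-SPDM object, for instance, would use the PCI-SIG vendor ID with the CMA object type; the kernel's DOE core builds these headers internally.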

The aspect that makes this a "platform firmware service" is that the
key provisioning and protocol is coordinated through a Trusted
Execution Environment (TEE) Security Manager (TSM). That is either
firmware running in a coprocessor (AMD SEV-TIO), or quasi-hypervisor
software (Intel TDX Connect / ARM CCA) running in a protected CPU
mode.

Now, the only reason to ask a TSM to run this protocol and install the
keys rather than have a Linux driver do the same is so that later, a
confidential VM can ask the TSM directly "can you certify this
device?".

That precludes host Linux from provisioning its own keys, because host
Linux is outside the trust domain for the VM. It also turns out that
all architectures, save for one, do not publish a mechanism for an OS
to establish keys in the root port. So "TSM-established link
encryption" is the only cross-architecture path for this capability
for the foreseeable future.

This unblocks the other arch implementations to follow in v6.20/v7.0,
once they clear some other dependencies, and it unblocks the next
phase of work to implement the end-to-end flow of confidential device
assignment. The PCIe specification calls this end-to-end flow Trusted
Execution Environment (TEE) Device Interface Security Protocol
(TDISP).

In the meantime, Linux gets a link encryption facility which has
practical benefits along the same lines as memory encryption. It
authenticates devices via certificates and may protect against
interposer attacks trying to capture clear-text PCIe traffic.

Summary:

- Introduce the PCI/TSM core for the coordination of device
authentication, link encryption and establishment (IDE), and later
management of the device security operational states (TDISP).
Notify the new TSM core layer of PCI device arrival and departure

- Add a low level TSM driver for the link encryption establishment
capabilities of the AMD SEV-TIO architecture

- Add a library of helpers for TSM drivers to use for IDE establishment
and the DOE transport

- Add skeleton support for 'bind' and 'guest_request' operations in
support of TDISP"

* tag 'tsm-for-6.19' of git://git.kernel.org/pub/scm/linux/kernel/git/devsec/tsm: (23 commits)
crypto/ccp: Fix CONFIG_PCI=n build
virt: Fix Kconfig warning when selecting TSM without VIRT_DRIVERS
crypto/ccp: Implement SEV-TIO PCIe IDE (phase1)
iommu/amd: Report SEV-TIO support
psp-sev: Assign numbers to all status codes and add new
ccp: Make snp_reclaim_pages and __sev_do_cmd_locked public
PCI/TSM: Add 'dsm' and 'bound' attributes for dependent functions
PCI/TSM: Add pci_tsm_guest_req() for managing TDIs
PCI/TSM: Add pci_tsm_bind() helper for instantiating TDIs
PCI/IDE: Initialize an ID for all IDE streams
PCI/IDE: Add Address Association Register setup for downstream MMIO
resource: Introduce resource_assigned() for discerning active resources
PCI/TSM: Drop stub for pci_tsm_doe_transfer()
drivers/virt: Drop VIRT_DRIVERS build dependency
PCI/TSM: Report active IDE streams
PCI/IDE: Report available IDE streams
PCI/IDE: Add IDE establishment helpers
PCI: Establish document for PCI host bridge sysfs attributes
PCI: Add PCIe Device 3 Extended Capability enumeration
PCI/TSM: Establish Secure Sessions and Link Encryption
...

+4326 -52
+81
Documentation/ABI/testing/sysfs-bus-pci
···
		number extended capability. The file is read only and due to
		the possible sensitivity of accessible serial numbers, admin
		only.

What:		/sys/bus/pci/devices/.../tsm/
Contact:	linux-coco@lists.linux.dev
Description:
		This directory only appears if a physical device function
		supports authentication (PCIe CMA-SPDM), interface security
		(PCIe TDISP), and is accepted for secure operation by the
		platform TSM driver. This attribute directory appears
		dynamically after the platform TSM driver loads. So, only after
		the /sys/class/tsm/tsm0 device arrives can tools assume that
		devices without a tsm/ attribute directory will never have one;
		before that, the security capabilities of the device relative to
		the platform TSM are unknown. See
		Documentation/ABI/testing/sysfs-class-tsm.

What:		/sys/bus/pci/devices/.../tsm/connect
Contact:	linux-coco@lists.linux.dev
Description:
		(RW) Write the name of a TSM (TEE Security Manager) device from
		/sys/class/tsm to this file to establish a connection with the
		device. This typically includes an SPDM (DMTF Security
		Protocols and Data Models) session over PCIe DOE (Data Object
		Exchange) and may also include PCIe IDE (Integrity and Data
		Encryption) establishment. Reads from this attribute return the
		name of the connected TSM or the empty string if not
		connected. A TSM device signals its readiness to accept PCI
		connections via a KOBJ_CHANGE event.

What:		/sys/bus/pci/devices/.../tsm/disconnect
Contact:	linux-coco@lists.linux.dev
Description:
		(WO) Write the name of the TSM device that was specified to
		'connect' to tear down the connection.

What:		/sys/bus/pci/devices/.../tsm/dsm
Contact:	linux-coco@lists.linux.dev
Description:	(RO) Return the PCI device name of this device's DSM (Device
		Security Manager). When a device is in the connected state it
		indicates that the platform TSM (TEE Security Manager) has made
		a secure-session connection with the device's DSM. A DSM is
		always physical function 0, and when the device supports TDISP
		(TEE Device Interface Security Protocol) its managed functions
		also populate this tsm/dsm attribute. The managed functions of a
		DSM are SR-IOV (Single Root I/O Virtualization) virtual
		functions, non-zero functions of a multi-function device, or
		downstream endpoints, depending on whether the DSM is an SR-IOV
		physical function, function0 of a multi-function device, or an
		upstream PCIe switch port. This is a "link" TSM attribute, see
		Documentation/ABI/testing/sysfs-class-tsm.

What:		/sys/bus/pci/devices/.../tsm/bound
Contact:	linux-coco@lists.linux.dev
Description:	(RO) Return the device name of the TSM when the device is in a
		TDISP (TEE Device Interface Security Protocol) operational state
		(LOCKED, RUN, or ERROR, not UNLOCKED). Bound devices consume
		platform TSM resources and depend on the device's configuration
		(e.g. BME (Bus Master Enable) and MSE (Memory Space Enable)
		among other settings) remaining stable for the duration of the
		bound state. This attribute is only visible for devices that
		support TDISP operation, and it is only populated after a
		successful connect and TSM bind. The TSM bind operation is
		initiated by VFIO/IOMMUFD. This is a "link" TSM attribute, see
		Documentation/ABI/testing/sysfs-class-tsm.

What:		/sys/bus/pci/devices/.../authenticated
Contact:	linux-pci@vger.kernel.org
Description:
		When the device's tsm/ directory is present, device
		authentication (PCIe CMA-SPDM) and link encryption (PCIe IDE)
		are handled by the platform TSM (TEE Security Manager). When the
		tsm/ directory is not present, this attribute reflects only the
		native CMA-SPDM authentication state with the kernel's
		certificate store.

		If the attribute is not present, it indicates that
		authentication is unsupported by the device, or that the TSM has
		no available authentication methods for the device.

		When present and the tsm/ attribute directory is present, the
		authenticated attribute is an alias for the device 'connect'
		state. See the 'tsm/connect' attribute for more details.
+19
Documentation/ABI/testing/sysfs-class-tsm
What:		/sys/class/tsm/tsmN
Contact:	linux-coco@lists.linux.dev
Description:
		"tsmN" is a device that represents the generic attributes of a
		platform TEE Security Manager. It is typically a child of a
		platform-enumerated TSM device. /sys/class/tsm/tsmN/uevent
		signals when the PCI layer is able to support establishment of
		link encryption and other device-security features coordinated
		through a platform TSM.

What:		/sys/class/tsm/tsmN/streamH.R.E
Contact:	linux-pci@vger.kernel.org
Description:
		(RO) When a host bridge has established a secure connection via
		the platform TSM, this symlink appears. Its primary purpose is
		to provide a system-global view of TSM resource consumption
		across host bridges. The link points to the endpoint PCI device
		and matches the same link published by the host bridge. See
		Documentation/ABI/testing/sysfs-devices-pci-host-bridge.
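The streamH.R.E symlink name encodes three indices: the host bridge stream pool (H), the Root Port register block (R), and the Endpoint register block (E). As a rough illustration, a userspace tool could split a name like this (hypothetical helper, not part of the kernel):

```c
/* Hypothetical userspace parser for the "streamH.R.E" symlink naming
 * convention documented above; not kernel code. */
#include <assert.h>
#include <stdio.h>

struct stream_name {
	unsigned int host; /* H: host bridge stream resource */
	unsigned int rp;   /* R: Root Port register block */
	unsigned int ep;   /* E: Endpoint register block */
};

/* Returns 0 on success, -1 if the name does not match "streamH.R.E". */
static int parse_stream_name(const char *name, struct stream_name *s)
{
	if (sscanf(name, "stream%u.%u.%u", &s->host, &s->rp, &s->ep) != 3)
		return -1;
	return 0;
}
```

For example, "stream3.0.1" would decode to host bridge pool slot 3, Root Port block 0, Endpoint block 1.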
+45
Documentation/ABI/testing/sysfs-devices-pci-host-bridge
What:		/sys/devices/pciDDDD:BB
		/sys/devices/.../pciDDDD:BB
Contact:	linux-pci@vger.kernel.org
Description:
		A PCI host bridge device parents a PCI bus device topology. PCI
		controllers may also parent host bridges. The DDDD:BB format
		conveys the PCI domain (ACPI segment) number and root bus number
		(in hexadecimal) of the host bridge. Note that the domain number
		may be larger than the 16 bits that the "DDDD" format implies
		for emulated host bridges.

What:		pciDDDD:BB/firmware_node
Contact:	linux-pci@vger.kernel.org
Description:
		(RO) Symlink to the platform firmware device object "companion"
		of the host bridge. For example, an ACPI device with an _HID of
		PNP0A08 (/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00). See
		the /sys/devices/pciDDDD:BB entry for details about the DDDD:BB
		format.

What:		pciDDDD:BB/streamH.R.E
Contact:	linux-pci@vger.kernel.org
Description:
		(RO) When a platform has established a secure connection (PCIe
		IDE) between two Partner Ports, this symlink appears. A stream
		consumes a Stream ID slot in each of the host bridge (H), Root
		Port (R), and Endpoint (E). The link points to the Endpoint PCI
		device in the Selective IDE Stream pairing. Specifically, "R"
		and "E" represent the assigned Selective IDE Stream Register
		Block in the Root Port and Endpoint, and "H" represents a
		platform-specific pool of stream resources shared by the Root
		Ports in a host bridge. See the /sys/devices/pciDDDD:BB entry
		for details about the DDDD:BB format.

What:		pciDDDD:BB/available_secure_streams
Contact:	linux-pci@vger.kernel.org
Description:
		(RO) When a host bridge has Root Ports that support PCIe IDE
		(link encryption and integrity protection) there may be a
		limited number of Selective IDE Streams that can be used for
		establishing new end-to-end secure links. This attribute
		decrements upon secure link setup, and increments upon secure
		link teardown. The in-use stream count is determined by counting
		stream symlinks. See the /sys/devices/pciDDDD:BB entry for
		details about the DDDD:BB format.
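The accounting contract described for available_secure_streams (decrement on setup, increment on teardown, in-use count equals the number of stream symlinks) can be sketched as a toy model; the struct and helpers below are illustrative only, the kernel tracks this per host bridge internally:

```c
/* Toy model of the available_secure_streams accounting described
 * above. Illustrative only; not the kernel's data structures. */
#include <assert.h>

struct hb_streams {
	int available; /* value a read of available_secure_streams returns */
	int in_use;    /* number of streamH.R.E symlinks */
};

/* Secure link setup consumes a Selective IDE Stream slot. */
static int stream_setup(struct hb_streams *hb)
{
	if (hb->available == 0)
		return -1; /* no stream slots left on this host bridge */
	hb->available--;
	hb->in_use++;
	return 0;
}

/* Secure link teardown returns the slot to the pool. */
static void stream_teardown(struct hb_streams *hb)
{
	if (hb->in_use) {
		hb->in_use--;
		hb->available++;
	}
}
```

The invariant is that available + in_use stays constant at the host bridge's total stream capacity.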
+1
Documentation/driver-api/pci/index.rst
···
 	pci
 	p2pdma
+	tsm

 .. only:: subproject and html
+21
Documentation/driver-api/pci/tsm.rst
.. SPDX-License-Identifier: GPL-2.0
.. include:: <isonum.txt>

========================================================
PCI Trusted Execution Environment Security Manager (TSM)
========================================================

Subsystem Interfaces
====================

.. kernel-doc:: include/linux/pci-ide.h
   :internal:

.. kernel-doc:: drivers/pci/ide.c
   :export:

.. kernel-doc:: include/linux/pci-tsm.h
   :internal:

.. kernel-doc:: drivers/pci/tsm.c
   :export:
+5 -2
MAINTAINERS
···
 B:	https://bugzilla.kernel.org
 C:	irc://irc.oftc.net/linux-pci
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci.git
+F:	Documentation/ABI/testing/sysfs-devices-pci-host-bridge
 F:	Documentation/PCI/
 F:	Documentation/devicetree/bindings/pci/
 F:	arch/x86/kernel/early-quirks.c
···
 S:	Maintained
 F:	Documentation/devicetree/bindings/trigger-source/*

-TRUSTED SECURITY MODULE (TSM) INFRASTRUCTURE
+TRUSTED EXECUTION ENVIRONMENT SECURITY MANAGER (TSM)
 M:	Dan Williams <dan.j.williams@intel.com>
 L:	linux-coco@lists.linux.dev
 S:	Maintained
 F:	Documentation/ABI/testing/configfs-tsm-report
 F:	Documentation/driver-api/coco/
+F:	Documentation/driver-api/pci/tsm.rst
+F:	drivers/pci/tsm.c
 F:	drivers/virt/coco/guest/
-F:	include/linux/tsm*.h
+F:	include/linux/*tsm*.h
 F:	samples/tsm-mr/

 TRUSTED SERVICES TEE DRIVER
+1 -1
drivers/Makefile
···
 obj-$(CONFIG_SOUNDWIRE)	+= soundwire/

 # Virtualization drivers
-obj-$(CONFIG_VIRT_DRIVERS)	+= virt/
+obj-y				+= virt/
 obj-$(CONFIG_HYPERV)		+= hv/

 obj-$(CONFIG_PM_DEVFREQ)	+= devfreq/
+38
drivers/base/bus.c
···
 	return dev;
 }

+static struct device *prev_device(struct klist_iter *i)
+{
+	struct klist_node *n = klist_prev(i);
+	struct device *dev = NULL;
+	struct device_private *dev_prv;
+
+	if (n) {
+		dev_prv = to_device_private_bus(n);
+		dev = dev_prv->device;
+	}
+	return dev;
+}
+
 /**
  * bus_for_each_dev - device iterator.
  * @bus: bus type.
···
 	return dev;
 }
 EXPORT_SYMBOL_GPL(bus_find_device);
+
+struct device *bus_find_device_reverse(const struct bus_type *bus,
+				       struct device *start, const void *data,
+				       device_match_t match)
+{
+	struct subsys_private *sp = bus_to_subsys(bus);
+	struct klist_iter i;
+	struct device *dev;
+
+	if (!sp)
+		return NULL;
+
+	klist_iter_init_node(&sp->klist_devices, &i,
+			     (start ? &start->p->knode_bus : NULL));
+	while ((dev = prev_device(&i))) {
+		if (match(dev, data)) {
+			get_device(dev);
+			break;
+		}
+	}
+	klist_iter_exit(&i);
+	subsys_put(sp);
+	return dev;
+}
+EXPORT_SYMBOL_GPL(bus_find_device_reverse);

 static struct device_driver *next_driver(struct klist_iter *i)
 {
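The new bus_find_device_reverse() mirrors bus_find_device() but walks the bus's device klist from the tail, returning the most recently added match first. A minimal userspace model of that search order (mock types, not the kernel API):

```c
/* Userspace sketch of the reverse-search behavior added above: walk a
 * list tail-to-head and return the first element accepted by the match
 * callback. Mock types only; the real helper iterates a klist. */
#include <assert.h>
#include <stddef.h>

struct mock_dev {
	int id;
	struct mock_dev *prev; /* walked tail-to-head */
};

typedef int (*match_fn)(struct mock_dev *dev, const void *data);

static struct mock_dev *find_device_reverse(struct mock_dev *tail,
					    const void *data, match_fn match)
{
	for (struct mock_dev *dev = tail; dev; dev = dev->prev)
		if (match(dev, data))
			return dev;
	return NULL; /* no device satisfied the match callback */
}

static int match_id(struct mock_dev *dev, const void *data)
{
	return dev->id == *(const int *)data;
}
```

When two devices match, the reverse walk returns the one closest to the tail, which is the key behavioral difference from a forward search.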
+1
drivers/crypto/ccp/Kconfig
···
 	bool "Platform Security Processor (PSP) device"
 	default y
 	depends on CRYPTO_DEV_CCP_DD && X86_64 && AMD_IOMMU
+	select PCI_TSM if PCI
 	help
 	  Provide support for the AMD Platform Security Processor (PSP).
 	  The PSP is a dedicated processor that provides support for key
+4
drivers/crypto/ccp/Makefile
···
 	   hsti.o \
 	   sfs.o

+ifeq ($(CONFIG_PCI_TSM),y)
+ccp-$(CONFIG_CRYPTO_DEV_SP_PSP) += sev-dev-tsm.o sev-dev-tio.o
+endif
+
 obj-$(CONFIG_CRYPTO_DEV_CCP_CRYPTO) += ccp-crypto.o
 ccp-crypto-objs := ccp-crypto-main.o \
 		   ccp-crypto-aes.o \
+864
drivers/crypto/ccp/sev-dev-tio.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + 3 + // Interface to PSP for CCP/SEV-TIO/SNP-VM 4 + 5 + #include <linux/pci.h> 6 + #include <linux/tsm.h> 7 + #include <linux/psp.h> 8 + #include <linux/vmalloc.h> 9 + #include <linux/bitfield.h> 10 + #include <linux/pci-doe.h> 11 + #include <asm/sev-common.h> 12 + #include <asm/sev.h> 13 + #include <asm/page.h> 14 + #include "sev-dev.h" 15 + #include "sev-dev-tio.h" 16 + 17 + #define to_tio_status(dev_data) \ 18 + (container_of((dev_data), struct tio_dsm, data)->sev->tio_status) 19 + 20 + #define SLA_PAGE_TYPE_DATA 0 21 + #define SLA_PAGE_TYPE_SCATTER 1 22 + #define SLA_PAGE_SIZE_4K 0 23 + #define SLA_PAGE_SIZE_2M 1 24 + #define SLA_SZ(s) ((s).page_size == SLA_PAGE_SIZE_2M ? SZ_2M : SZ_4K) 25 + #define SLA_SCATTER_LEN(s) (SLA_SZ(s) / sizeof(struct sla_addr_t)) 26 + #define SLA_EOL ((struct sla_addr_t) { .pfn = ((1UL << 40) - 1) }) 27 + #define SLA_NULL ((struct sla_addr_t) { 0 }) 28 + #define IS_SLA_NULL(s) ((s).sla == SLA_NULL.sla) 29 + #define IS_SLA_EOL(s) ((s).sla == SLA_EOL.sla) 30 + 31 + static phys_addr_t sla_to_pa(struct sla_addr_t sla) 32 + { 33 + u64 pfn = sla.pfn; 34 + u64 pa = pfn << PAGE_SHIFT; 35 + 36 + return pa; 37 + } 38 + 39 + static void *sla_to_va(struct sla_addr_t sla) 40 + { 41 + void *va = __va(__sme_clr(sla_to_pa(sla))); 42 + 43 + return va; 44 + } 45 + 46 + #define sla_to_pfn(sla) (__pa(sla_to_va(sla)) >> PAGE_SHIFT) 47 + #define sla_to_page(sla) virt_to_page(sla_to_va(sla)) 48 + 49 + static struct sla_addr_t make_sla(struct page *pg, bool stp) 50 + { 51 + u64 pa = __sme_set(page_to_phys(pg)); 52 + struct sla_addr_t ret = { 53 + .pfn = pa >> PAGE_SHIFT, 54 + .page_size = SLA_PAGE_SIZE_4K, /* Do not do SLA_PAGE_SIZE_2M ATM */ 55 + .page_type = stp ? 
SLA_PAGE_TYPE_SCATTER : SLA_PAGE_TYPE_DATA 56 + }; 57 + 58 + return ret; 59 + } 60 + 61 + /* the BUFFER Structure */ 62 + #define SLA_BUFFER_FLAG_ENCRYPTION BIT(0) 63 + 64 + /* 65 + * struct sla_buffer_hdr - Scatter list address buffer header 66 + * 67 + * @capacity_sz: Total capacity of the buffer in bytes 68 + * @payload_sz: Size of buffer payload in bytes, must be multiple of 32B 69 + * @flags: Buffer flags (SLA_BUFFER_FLAG_ENCRYPTION: buffer is encrypted) 70 + * @iv: Initialization vector used for encryption 71 + * @authtag: Authentication tag for encrypted buffer 72 + */ 73 + struct sla_buffer_hdr { 74 + u32 capacity_sz; 75 + u32 payload_sz; /* The size of BUFFER_PAYLOAD in bytes. Must be multiple of 32B */ 76 + u32 flags; 77 + u8 reserved1[4]; 78 + u8 iv[16]; /* IV used for the encryption of this buffer */ 79 + u8 authtag[16]; /* Authentication tag for this buffer */ 80 + u8 reserved2[16]; 81 + } __packed; 82 + 83 + enum spdm_data_type_t { 84 + DOBJ_DATA_TYPE_SPDM = 0x1, 85 + DOBJ_DATA_TYPE_SECURE_SPDM = 0x2, 86 + }; 87 + 88 + struct spdm_dobj_hdr_req { 89 + struct spdm_dobj_hdr hdr; /* hdr.id == SPDM_DOBJ_ID_REQ */ 90 + u8 data_type; /* spdm_data_type_t */ 91 + u8 reserved2[5]; 92 + } __packed; 93 + 94 + struct spdm_dobj_hdr_resp { 95 + struct spdm_dobj_hdr hdr; /* hdr.id == SPDM_DOBJ_ID_RESP */ 96 + u8 data_type; /* spdm_data_type_t */ 97 + u8 reserved2[5]; 98 + } __packed; 99 + 100 + /* Defined in sev-dev-tio.h so sev-dev-tsm.c can read types of blobs */ 101 + struct spdm_dobj_hdr_cert; 102 + struct spdm_dobj_hdr_meas; 103 + struct spdm_dobj_hdr_report; 104 + 105 + /* Used in all SPDM-aware TIO commands */ 106 + struct spdm_ctrl { 107 + struct sla_addr_t req; 108 + struct sla_addr_t resp; 109 + struct sla_addr_t scratch; 110 + struct sla_addr_t output; 111 + } __packed; 112 + 113 + static size_t sla_dobj_id_to_size(u8 id) 114 + { 115 + size_t n; 116 + 117 + BUILD_BUG_ON(sizeof(struct spdm_dobj_hdr_resp) != 0x10); 118 + switch (id) { 119 + case 
SPDM_DOBJ_ID_REQ: 120 + n = sizeof(struct spdm_dobj_hdr_req); 121 + break; 122 + case SPDM_DOBJ_ID_RESP: 123 + n = sizeof(struct spdm_dobj_hdr_resp); 124 + break; 125 + default: 126 + WARN_ON(1); 127 + n = 0; 128 + break; 129 + } 130 + 131 + return n; 132 + } 133 + 134 + #define SPDM_DOBJ_HDR_SIZE(hdr) sla_dobj_id_to_size((hdr)->id) 135 + #define SPDM_DOBJ_DATA(hdr) ((u8 *)(hdr) + SPDM_DOBJ_HDR_SIZE(hdr)) 136 + #define SPDM_DOBJ_LEN(hdr) ((hdr)->length - SPDM_DOBJ_HDR_SIZE(hdr)) 137 + 138 + #define sla_to_dobj_resp_hdr(buf) ((struct spdm_dobj_hdr_resp *) \ 139 + sla_to_dobj_hdr_check((buf), SPDM_DOBJ_ID_RESP)) 140 + #define sla_to_dobj_req_hdr(buf) ((struct spdm_dobj_hdr_req *) \ 141 + sla_to_dobj_hdr_check((buf), SPDM_DOBJ_ID_REQ)) 142 + 143 + static struct spdm_dobj_hdr *sla_to_dobj_hdr(struct sla_buffer_hdr *buf) 144 + { 145 + if (!buf) 146 + return NULL; 147 + 148 + return (struct spdm_dobj_hdr *) &buf[1]; 149 + } 150 + 151 + static struct spdm_dobj_hdr *sla_to_dobj_hdr_check(struct sla_buffer_hdr *buf, u32 check_dobjid) 152 + { 153 + struct spdm_dobj_hdr *hdr = sla_to_dobj_hdr(buf); 154 + 155 + if (WARN_ON_ONCE(!hdr)) 156 + return NULL; 157 + 158 + if (hdr->id != check_dobjid) { 159 + pr_err("! 
ERROR: expected %d, found %d\n", check_dobjid, hdr->id); 160 + return NULL; 161 + } 162 + 163 + return hdr; 164 + } 165 + 166 + static void *sla_to_data(struct sla_buffer_hdr *buf, u32 dobjid) 167 + { 168 + struct spdm_dobj_hdr *hdr = sla_to_dobj_hdr(buf); 169 + 170 + if (WARN_ON_ONCE(dobjid != SPDM_DOBJ_ID_REQ && dobjid != SPDM_DOBJ_ID_RESP)) 171 + return NULL; 172 + 173 + if (!hdr) 174 + return NULL; 175 + 176 + return (u8 *) hdr + sla_dobj_id_to_size(dobjid); 177 + } 178 + 179 + /* 180 + * struct sev_data_tio_status - SEV_CMD_TIO_STATUS command 181 + * 182 + * @length: Length of this command buffer in bytes 183 + * @status_paddr: System physical address of the TIO_STATUS structure 184 + */ 185 + struct sev_data_tio_status { 186 + u32 length; 187 + u8 reserved[4]; 188 + u64 status_paddr; 189 + } __packed; 190 + 191 + /* TIO_INIT */ 192 + struct sev_data_tio_init { 193 + u32 length; 194 + u8 reserved[12]; 195 + } __packed; 196 + 197 + /* 198 + * struct sev_data_tio_dev_create - TIO_DEV_CREATE command 199 + * 200 + * @length: Length in bytes of this command buffer 201 + * @dev_ctx_sla: Scatter list address pointing to a buffer to be used as a device context buffer 202 + * @device_id: PCIe Routing Identifier of the device to connect to 203 + * @root_port_id: PCIe Routing Identifier of the root port of the device 204 + * @segment_id: PCIe Segment Identifier of the device to connect to 205 + */ 206 + struct sev_data_tio_dev_create { 207 + u32 length; 208 + u8 reserved1[4]; 209 + struct sla_addr_t dev_ctx_sla; 210 + u16 device_id; 211 + u16 root_port_id; 212 + u8 segment_id; 213 + u8 reserved2[11]; 214 + } __packed; 215 + 216 + /* 217 + * struct sev_data_tio_dev_connect - TIO_DEV_CONNECT command 218 + * 219 + * @length: Length in bytes of this command buffer 220 + * @spdm_ctrl: SPDM control structure defined in Section 5.1 221 + * @dev_ctx_sla: Scatter list address of the device context buffer 222 + * @tc_mask: Bitmask of the traffic classes to initialize for SEV-TIO 
usage. 223 + * Setting the kth bit of the TC_MASK to 1 indicates that the traffic 224 + * class k will be initialized 225 + * @cert_slot: Slot number of the certificate requested for constructing the SPDM session 226 + * @ide_stream_id: IDE stream IDs to be associated with this device. 227 + * Valid only if corresponding bit in TC_MASK is set 228 + */ 229 + struct sev_data_tio_dev_connect { 230 + u32 length; 231 + u8 reserved1[4]; 232 + struct spdm_ctrl spdm_ctrl; 233 + u8 reserved2[8]; 234 + struct sla_addr_t dev_ctx_sla; 235 + u8 tc_mask; 236 + u8 cert_slot; 237 + u8 reserved3[6]; 238 + u8 ide_stream_id[8]; 239 + u8 reserved4[8]; 240 + } __packed; 241 + 242 + /* 243 + * struct sev_data_tio_dev_disconnect - TIO_DEV_DISCONNECT command 244 + * 245 + * @length: Length in bytes of this command buffer 246 + * @flags: Command flags (TIO_DEV_DISCONNECT_FLAG_FORCE: force disconnect) 247 + * @spdm_ctrl: SPDM control structure defined in Section 5.1 248 + * @dev_ctx_sla: Scatter list address of the device context buffer 249 + */ 250 + #define TIO_DEV_DISCONNECT_FLAG_FORCE BIT(0) 251 + 252 + struct sev_data_tio_dev_disconnect { 253 + u32 length; 254 + u32 flags; 255 + struct spdm_ctrl spdm_ctrl; 256 + struct sla_addr_t dev_ctx_sla; 257 + } __packed; 258 + 259 + /* 260 + * struct sev_data_tio_dev_meas - TIO_DEV_MEASUREMENTS command 261 + * 262 + * @length: Length in bytes of this command buffer 263 + * @flags: Command flags (TIO_DEV_MEAS_FLAG_RAW_BITSTREAM: request raw measurements) 264 + * @spdm_ctrl: SPDM control structure defined in Section 5.1 265 + * @dev_ctx_sla: Scatter list address of the device context buffer 266 + * @meas_nonce: Nonce for measurement freshness verification 267 + */ 268 + #define TIO_DEV_MEAS_FLAG_RAW_BITSTREAM BIT(0) 269 + 270 + struct sev_data_tio_dev_meas { 271 + u32 length; 272 + u32 flags; 273 + struct spdm_ctrl spdm_ctrl; 274 + struct sla_addr_t dev_ctx_sla; 275 + u8 meas_nonce[32]; 276 + } __packed; 277 + 278 + /* 279 + * struct 
sev_data_tio_dev_certs - TIO_DEV_CERTIFICATES command 280 + * 281 + * @length: Length in bytes of this command buffer 282 + * @spdm_ctrl: SPDM control structure defined in Section 5.1 283 + * @dev_ctx_sla: Scatter list address of the device context buffer 284 + */ 285 + struct sev_data_tio_dev_certs { 286 + u32 length; 287 + u8 reserved[4]; 288 + struct spdm_ctrl spdm_ctrl; 289 + struct sla_addr_t dev_ctx_sla; 290 + } __packed; 291 + 292 + /* 293 + * struct sev_data_tio_dev_reclaim - TIO_DEV_RECLAIM command 294 + * 295 + * @length: Length in bytes of this command buffer 296 + * @dev_ctx_sla: Scatter list address of the device context buffer 297 + * 298 + * This command reclaims resources associated with a device context. 299 + */ 300 + struct sev_data_tio_dev_reclaim { 301 + u32 length; 302 + u8 reserved[4]; 303 + struct sla_addr_t dev_ctx_sla; 304 + } __packed; 305 + 306 + static struct sla_buffer_hdr *sla_buffer_map(struct sla_addr_t sla) 307 + { 308 + struct sla_buffer_hdr *buf; 309 + 310 + BUILD_BUG_ON(sizeof(struct sla_buffer_hdr) != 0x40); 311 + if (IS_SLA_NULL(sla)) 312 + return NULL; 313 + 314 + if (sla.page_type == SLA_PAGE_TYPE_SCATTER) { 315 + struct sla_addr_t *scatter = sla_to_va(sla); 316 + unsigned int i, npages = 0; 317 + 318 + for (i = 0; i < SLA_SCATTER_LEN(sla); ++i) { 319 + if (WARN_ON_ONCE(SLA_SZ(scatter[i]) > SZ_4K)) 320 + return NULL; 321 + 322 + if (WARN_ON_ONCE(scatter[i].page_type == SLA_PAGE_TYPE_SCATTER)) 323 + return NULL; 324 + 325 + if (IS_SLA_EOL(scatter[i])) { 326 + npages = i; 327 + break; 328 + } 329 + } 330 + if (WARN_ON_ONCE(!npages)) 331 + return NULL; 332 + 333 + struct page **pp = kmalloc_array(npages, sizeof(pp[0]), GFP_KERNEL); 334 + 335 + if (!pp) 336 + return NULL; 337 + 338 + for (i = 0; i < npages; ++i) 339 + pp[i] = sla_to_page(scatter[i]); 340 + 341 + buf = vm_map_ram(pp, npages, 0); 342 + kfree(pp); 343 + } else { 344 + struct page *pg = sla_to_page(sla); 345 + 346 + buf = vm_map_ram(&pg, 1, 0); 347 + } 348 + 349 + 
return buf; 350 + } 351 + 352 + static void sla_buffer_unmap(struct sla_addr_t sla, struct sla_buffer_hdr *buf) 353 + { 354 + if (!buf) 355 + return; 356 + 357 + if (sla.page_type == SLA_PAGE_TYPE_SCATTER) { 358 + struct sla_addr_t *scatter = sla_to_va(sla); 359 + unsigned int i, npages = 0; 360 + 361 + for (i = 0; i < SLA_SCATTER_LEN(sla); ++i) { 362 + if (IS_SLA_EOL(scatter[i])) { 363 + npages = i; 364 + break; 365 + } 366 + } 367 + if (!npages) 368 + return; 369 + 370 + vm_unmap_ram(buf, npages); 371 + } else { 372 + vm_unmap_ram(buf, 1); 373 + } 374 + } 375 + 376 + static void dobj_response_init(struct sla_buffer_hdr *buf) 377 + { 378 + struct spdm_dobj_hdr *dobj = sla_to_dobj_hdr(buf); 379 + 380 + dobj->id = SPDM_DOBJ_ID_RESP; 381 + dobj->version.major = 0x1; 382 + dobj->version.minor = 0; 383 + dobj->length = 0; 384 + buf->payload_sz = sla_dobj_id_to_size(dobj->id) + dobj->length; 385 + } 386 + 387 + static void sla_free(struct sla_addr_t sla, size_t len, bool firmware_state) 388 + { 389 + unsigned int npages = PAGE_ALIGN(len) >> PAGE_SHIFT; 390 + struct sla_addr_t *scatter = NULL; 391 + int ret = 0, i; 392 + 393 + if (IS_SLA_NULL(sla)) 394 + return; 395 + 396 + if (firmware_state) { 397 + if (sla.page_type == SLA_PAGE_TYPE_SCATTER) { 398 + scatter = sla_to_va(sla); 399 + 400 + for (i = 0; i < npages; ++i) { 401 + if (IS_SLA_EOL(scatter[i])) 402 + break; 403 + 404 + ret = snp_reclaim_pages(sla_to_pa(scatter[i]), 1, false); 405 + if (ret) 406 + break; 407 + } 408 + } else { 409 + ret = snp_reclaim_pages(sla_to_pa(sla), 1, false); 410 + } 411 + } 412 + 413 + if (WARN_ON(ret)) 414 + return; 415 + 416 + if (scatter) { 417 + for (i = 0; i < npages; ++i) { 418 + if (IS_SLA_EOL(scatter[i])) 419 + break; 420 + free_page((unsigned long)sla_to_va(scatter[i])); 421 + } 422 + } 423 + 424 + free_page((unsigned long)sla_to_va(sla)); 425 + } 426 + 427 + static struct sla_addr_t sla_alloc(size_t len, bool firmware_state) 428 + { 429 + unsigned long i, npages = 
		PAGE_ALIGN(len) >> PAGE_SHIFT;
	struct sla_addr_t *scatter = NULL;
	struct sla_addr_t ret = SLA_NULL;
	struct sla_buffer_hdr *buf;
	struct page *pg;

	if (npages == 0)
		return ret;

	if (WARN_ON_ONCE(npages > ((PAGE_SIZE / sizeof(struct sla_addr_t)) + 1)))
		return ret;

	BUILD_BUG_ON(PAGE_SIZE < SZ_4K);

	if (npages > 1) {
		pg = alloc_page(GFP_KERNEL | __GFP_ZERO);
		if (!pg)
			return SLA_NULL;

		ret = make_sla(pg, true);
		scatter = page_to_virt(pg);
		for (i = 0; i < npages; ++i) {
			pg = alloc_page(GFP_KERNEL | __GFP_ZERO);
			if (!pg)
				goto no_reclaim_exit;

			scatter[i] = make_sla(pg, false);
		}
		scatter[i] = SLA_EOL;
	} else {
		pg = alloc_page(GFP_KERNEL | __GFP_ZERO);
		if (!pg)
			return SLA_NULL;

		ret = make_sla(pg, false);
	}

	buf = sla_buffer_map(ret);
	if (!buf)
		goto no_reclaim_exit;

	buf->capacity_sz = (npages << PAGE_SHIFT);
	sla_buffer_unmap(ret, buf);

	if (firmware_state) {
		if (scatter) {
			for (i = 0; i < npages; ++i) {
				if (rmp_make_private(sla_to_pfn(scatter[i]), 0,
						     PG_LEVEL_4K, 0, true))
					goto free_exit;
			}
		} else {
			if (rmp_make_private(sla_to_pfn(ret), 0, PG_LEVEL_4K, 0, true))
				goto no_reclaim_exit;
		}
	}

	return ret;

no_reclaim_exit:
	firmware_state = false;
free_exit:
	sla_free(ret, len, firmware_state);
	return SLA_NULL;
}

/* Expands a buffer, only firmware-owned buffers allowed for now */
static int sla_expand(struct sla_addr_t *sla, size_t *len)
{
	struct sla_buffer_hdr *oldbuf = sla_buffer_map(*sla), *newbuf;
	struct sla_addr_t oldsla = *sla, newsla;
	size_t oldlen = *len, newlen;

	if (!oldbuf)
		return -EFAULT;

	newlen = oldbuf->capacity_sz;
	if (oldbuf->capacity_sz == oldlen) {
		/* This buffer does not require expansion, must be another buffer */
		sla_buffer_unmap(oldsla, oldbuf);
		return 1;
	}

	pr_notice("Expanding BUFFER from %ld to %ld bytes\n", oldlen, newlen);

	newsla = sla_alloc(newlen, true);
	if (IS_SLA_NULL(newsla))
		return -ENOMEM;

	newbuf = sla_buffer_map(newsla);
	if (!newbuf) {
		sla_free(newsla, newlen, true);
		return -EFAULT;
	}

	memcpy(newbuf, oldbuf, oldlen);

	sla_buffer_unmap(newsla, newbuf);
	sla_free(oldsla, oldlen, true);
	*sla = newsla;
	*len = newlen;

	return 0;
}

static int sev_tio_do_cmd(int cmd, void *data, size_t data_len, int *psp_ret,
			  struct tsm_dsm_tio *dev_data)
{
	int rc;

	*psp_ret = 0;
	rc = sev_do_cmd(cmd, data, psp_ret);

	if (WARN_ON(!rc && *psp_ret == SEV_RET_SPDM_REQUEST))
		return -EIO;

	if (rc == 0 && *psp_ret == SEV_RET_EXPAND_BUFFER_LENGTH_REQUEST) {
		int rc1, rc2;

		rc1 = sla_expand(&dev_data->output, &dev_data->output_len);
		if (rc1 < 0)
			return rc1;

		rc2 = sla_expand(&dev_data->scratch, &dev_data->scratch_len);
		if (rc2 < 0)
			return rc2;

		if (!rc1 && !rc2)
			/* Neither buffer requires expansion, this is wrong */
			return -EFAULT;

		*psp_ret = 0;
		rc = sev_do_cmd(cmd, data, psp_ret);
	}

	if ((rc == 0 || rc == -EIO) && *psp_ret == SEV_RET_SPDM_REQUEST) {
		struct spdm_dobj_hdr_resp *resp_hdr;
		struct spdm_dobj_hdr_req *req_hdr;
		struct sev_tio_status *tio_status = to_tio_status(dev_data);
		size_t resp_len = tio_status->spdm_req_size_max -
			(sla_dobj_id_to_size(SPDM_DOBJ_ID_RESP) + sizeof(struct sla_buffer_hdr));

		if (!dev_data->cmd) {
			if (WARN_ON_ONCE(!data_len || (data_len != *(u32 *) data)))
				return -EINVAL;
			if (WARN_ON(data_len > sizeof(dev_data->cmd_data)))
				return -EFAULT;
			memcpy(dev_data->cmd_data, data, data_len);
			memset(&dev_data->cmd_data[data_len], 0xFF,
			       sizeof(dev_data->cmd_data) - data_len);
			dev_data->cmd = cmd;
		}

		req_hdr = sla_to_dobj_req_hdr(dev_data->reqbuf);
		resp_hdr = sla_to_dobj_resp_hdr(dev_data->respbuf);
		switch (req_hdr->data_type) {
		case DOBJ_DATA_TYPE_SPDM:
			rc = PCI_DOE_FEATURE_CMA;
			break;
		case DOBJ_DATA_TYPE_SECURE_SPDM:
			rc = PCI_DOE_FEATURE_SSESSION;
			break;
		default:
			return -EINVAL;
		}
		resp_hdr->data_type = req_hdr->data_type;
		dev_data->spdm.req_len = req_hdr->hdr.length -
			sla_dobj_id_to_size(SPDM_DOBJ_ID_REQ);
		dev_data->spdm.rsp_len = resp_len;
	} else if (dev_data && dev_data->cmd) {
		/* For either error or success just stop the bouncing */
		memset(dev_data->cmd_data, 0, sizeof(dev_data->cmd_data));
		dev_data->cmd = 0;
	}

	return rc;
}

int sev_tio_continue(struct tsm_dsm_tio *dev_data)
{
	struct spdm_dobj_hdr_resp *resp_hdr;
	int ret;

	if (!dev_data || !dev_data->cmd)
		return -EINVAL;

	resp_hdr = sla_to_dobj_resp_hdr(dev_data->respbuf);
	resp_hdr->hdr.length = ALIGN(sla_dobj_id_to_size(SPDM_DOBJ_ID_RESP) +
				     dev_data->spdm.rsp_len, 32);
	dev_data->respbuf->payload_sz = resp_hdr->hdr.length;

	ret = sev_tio_do_cmd(dev_data->cmd, dev_data->cmd_data, 0,
			     &dev_data->psp_ret, dev_data);
	if (ret)
		return ret;

	if (dev_data->psp_ret != SEV_RET_SUCCESS)
		return -EINVAL;

	return 0;
}

static void spdm_ctrl_init(struct spdm_ctrl *ctrl, struct tsm_dsm_tio *dev_data)
{
	ctrl->req = dev_data->req;
	ctrl->resp = dev_data->resp;
	ctrl->scratch = dev_data->scratch;
	ctrl->output = dev_data->output;
}

static void spdm_ctrl_free(struct tsm_dsm_tio *dev_data)
{
	struct sev_tio_status *tio_status = to_tio_status(dev_data);
	size_t len = tio_status->spdm_req_size_max -
		(sla_dobj_id_to_size(SPDM_DOBJ_ID_RESP) +
		 sizeof(struct sla_buffer_hdr));
	struct tsm_spdm *spdm = &dev_data->spdm;

	sla_buffer_unmap(dev_data->resp, dev_data->respbuf);
	sla_buffer_unmap(dev_data->req, dev_data->reqbuf);
	spdm->rsp = NULL;
	spdm->req = NULL;
	sla_free(dev_data->req, len, true);
	sla_free(dev_data->resp, len, false);
	sla_free(dev_data->scratch, tio_status->spdm_scratch_size_max, true);

	dev_data->req.sla = 0;
	dev_data->resp.sla = 0;
	dev_data->scratch.sla = 0;
	dev_data->respbuf = NULL;
	dev_data->reqbuf = NULL;
	sla_free(dev_data->output, tio_status->spdm_out_size_max, true);
}

static int spdm_ctrl_alloc(struct tsm_dsm_tio *dev_data)
{
	struct sev_tio_status *tio_status = to_tio_status(dev_data);
	struct tsm_spdm *spdm = &dev_data->spdm;
	int ret;

	dev_data->req = sla_alloc(tio_status->spdm_req_size_max, true);
	dev_data->resp = sla_alloc(tio_status->spdm_req_size_max, false);
	dev_data->scratch_len = tio_status->spdm_scratch_size_max;
	dev_data->scratch = sla_alloc(dev_data->scratch_len, true);
	dev_data->output_len = tio_status->spdm_out_size_max;
	dev_data->output = sla_alloc(dev_data->output_len, true);

	if (IS_SLA_NULL(dev_data->req) || IS_SLA_NULL(dev_data->resp) ||
	    IS_SLA_NULL(dev_data->scratch) || IS_SLA_NULL(dev_data->dev_ctx)) {
		ret = -ENOMEM;
		goto free_spdm_exit;
	}

	dev_data->reqbuf = sla_buffer_map(dev_data->req);
	dev_data->respbuf = sla_buffer_map(dev_data->resp);
	if (!dev_data->reqbuf || !dev_data->respbuf) {
		ret = -EFAULT;
		goto free_spdm_exit;
	}

	spdm->req = sla_to_data(dev_data->reqbuf, SPDM_DOBJ_ID_REQ);
	spdm->rsp = sla_to_data(dev_data->respbuf, SPDM_DOBJ_ID_RESP);
	if (!spdm->req || !spdm->rsp) {
		ret = -EFAULT;
		goto free_spdm_exit;
	}

	dobj_response_init(dev_data->respbuf);

	return 0;

free_spdm_exit:
	spdm_ctrl_free(dev_data);
	return ret;
}

int sev_tio_init_locked(void *tio_status_page)
{
	struct sev_tio_status *tio_status = tio_status_page;
	struct sev_data_tio_status data_status = {
		.length = sizeof(data_status),
	};
	int ret, psp_ret;

	data_status.status_paddr = __psp_pa(tio_status_page);
	ret = __sev_do_cmd_locked(SEV_CMD_TIO_STATUS, &data_status, &psp_ret);
	if (ret)
		return ret;

	if (tio_status->length < offsetofend(struct sev_tio_status, tdictx_size) ||
	    tio_status->reserved)
		return -EFAULT;

	if (!tio_status->tio_en && !tio_status->tio_init_done)
		return -ENOENT;

	if (tio_status->tio_init_done)
		return -EBUSY;

	struct sev_data_tio_init ti = { .length = sizeof(ti) };

	ret = __sev_do_cmd_locked(SEV_CMD_TIO_INIT, &ti, &psp_ret);
	if (ret)
		return ret;

	ret = __sev_do_cmd_locked(SEV_CMD_TIO_STATUS, &data_status, &psp_ret);
	if (ret)
		return ret;

	return 0;
}

int sev_tio_dev_create(struct tsm_dsm_tio *dev_data, u16 device_id,
		       u16 root_port_id, u8 segment_id)
{
	struct sev_tio_status *tio_status = to_tio_status(dev_data);
	struct sev_data_tio_dev_create create = {
		.length = sizeof(create),
		.device_id = device_id,
		.root_port_id = root_port_id,
		.segment_id = segment_id,
	};
	void *data_pg;
	int ret;

	dev_data->dev_ctx = sla_alloc(tio_status->devctx_size, true);
	if (IS_SLA_NULL(dev_data->dev_ctx))
		return -ENOMEM;

	data_pg = snp_alloc_firmware_page(GFP_KERNEL_ACCOUNT);
	if (!data_pg) {
		ret = -ENOMEM;
		goto free_ctx_exit;
	}

	create.dev_ctx_sla = dev_data->dev_ctx;
	ret = sev_do_cmd(SEV_CMD_TIO_DEV_CREATE, &create, &dev_data->psp_ret);
	if (ret)
		goto free_data_pg_exit;

	dev_data->data_pg = data_pg;

	return 0;

free_data_pg_exit:
	snp_free_firmware_page(data_pg);
free_ctx_exit:
	sla_free(create.dev_ctx_sla, tio_status->devctx_size, true);
	return ret;
}

int sev_tio_dev_reclaim(struct tsm_dsm_tio *dev_data)
{
	struct sev_tio_status *tio_status = to_tio_status(dev_data);
	struct sev_data_tio_dev_reclaim r = {
		.length = sizeof(r),
		.dev_ctx_sla = dev_data->dev_ctx,
	};
	int ret;

	if (dev_data->data_pg) {
		snp_free_firmware_page(dev_data->data_pg);
		dev_data->data_pg = NULL;
	}

	if (IS_SLA_NULL(dev_data->dev_ctx))
		return 0;

	ret = sev_do_cmd(SEV_CMD_TIO_DEV_RECLAIM, &r, &dev_data->psp_ret);

	sla_free(dev_data->dev_ctx, tio_status->devctx_size, true);
	dev_data->dev_ctx = SLA_NULL;

	spdm_ctrl_free(dev_data);

	return ret;
}

int sev_tio_dev_connect(struct tsm_dsm_tio *dev_data, u8 tc_mask, u8 ids[8], u8 cert_slot)
{
	struct sev_data_tio_dev_connect connect = {
		.length = sizeof(connect),
		.tc_mask = tc_mask,
		.cert_slot = cert_slot,
		.dev_ctx_sla = dev_data->dev_ctx,
		.ide_stream_id = {
			ids[0], ids[1], ids[2], ids[3],
			ids[4], ids[5], ids[6], ids[7]
		},
	};
	int ret;

	if (WARN_ON(IS_SLA_NULL(dev_data->dev_ctx)))
		return -EFAULT;
	if (!(tc_mask & 1))
		return -EINVAL;

	ret = spdm_ctrl_alloc(dev_data);
	if (ret)
		return ret;

	spdm_ctrl_init(&connect.spdm_ctrl, dev_data);

	return sev_tio_do_cmd(SEV_CMD_TIO_DEV_CONNECT, &connect, sizeof(connect),
			      &dev_data->psp_ret, dev_data);
}

int sev_tio_dev_disconnect(struct tsm_dsm_tio *dev_data, bool force)
{
	struct sev_data_tio_dev_disconnect dc = {
		.length = sizeof(dc),
		.dev_ctx_sla = dev_data->dev_ctx,
		.flags = force ? TIO_DEV_DISCONNECT_FLAG_FORCE : 0,
	};

	if (WARN_ON_ONCE(IS_SLA_NULL(dev_data->dev_ctx)))
		return -EFAULT;

	spdm_ctrl_init(&dc.spdm_ctrl, dev_data);

	return sev_tio_do_cmd(SEV_CMD_TIO_DEV_DISCONNECT, &dc, sizeof(dc),
			      &dev_data->psp_ret, dev_data);
}

int sev_tio_cmd_buffer_len(int cmd)
{
	switch (cmd) {
	case SEV_CMD_TIO_STATUS:		return sizeof(struct sev_data_tio_status);
	case SEV_CMD_TIO_INIT:			return sizeof(struct sev_data_tio_init);
	case SEV_CMD_TIO_DEV_CREATE:		return sizeof(struct sev_data_tio_dev_create);
	case SEV_CMD_TIO_DEV_RECLAIM:		return sizeof(struct sev_data_tio_dev_reclaim);
	case SEV_CMD_TIO_DEV_CONNECT:		return sizeof(struct sev_data_tio_dev_connect);
	case SEV_CMD_TIO_DEV_DISCONNECT:	return sizeof(struct sev_data_tio_dev_disconnect);
	default:				return 0;
	}
}
drivers/crypto/ccp/sev-dev-tio.h (+123)
/* SPDX-License-Identifier: GPL-2.0-only */
#ifndef __PSP_SEV_TIO_H__
#define __PSP_SEV_TIO_H__

#include <linux/pci-tsm.h>
#include <linux/pci-ide.h>
#include <linux/tsm.h>
#include <uapi/linux/psp-sev.h>

struct sla_addr_t {
	union {
		u64 sla;
		struct {
			u64 page_type	:1,
			    page_size	:1,
			    reserved1	:10,
			    pfn		:40,
			    reserved2	:12;
		};
	};
} __packed;

#define SEV_TIO_MAX_COMMAND_LENGTH	128

/* SPDM control structure for DOE */
struct tsm_spdm {
	unsigned long req_len;
	void *req;
	unsigned long rsp_len;
	void *rsp;
};

/* Describes TIO device */
struct tsm_dsm_tio {
	u8 cert_slot;
	struct sla_addr_t dev_ctx;
	struct sla_addr_t req;
	struct sla_addr_t resp;
	struct sla_addr_t scratch;
	struct sla_addr_t output;
	size_t output_len;
	size_t scratch_len;
	struct tsm_spdm spdm;
	struct sla_buffer_hdr *reqbuf;	/* vmap'ed @req for DOE */
	struct sla_buffer_hdr *respbuf;	/* vmap'ed @resp for DOE */

	int cmd;
	int psp_ret;
	u8 cmd_data[SEV_TIO_MAX_COMMAND_LENGTH];
	void *data_pg;	/* Data page for DEV_STATUS/TDI_STATUS/TDI_INFO/ASID_FENCE */

#define TIO_IDE_MAX_TC	8
	struct pci_ide *ide[TIO_IDE_MAX_TC];
};

/* Describes TSM structure for PF0 pointed by pci_dev->tsm */
struct tio_dsm {
	struct pci_tsm_pf0 tsm;
	struct tsm_dsm_tio data;
	struct sev_device *sev;
};

/* Data object IDs */
#define SPDM_DOBJ_ID_NONE	0
#define SPDM_DOBJ_ID_REQ	1
#define SPDM_DOBJ_ID_RESP	2

struct spdm_dobj_hdr {
	u32 id;		/* Data object type identifier */
	u32 length;	/* Length of the data object, INCLUDING THIS HEADER */
	struct {	/* Version of the data object structure */
		u8 minor;
		u8 major;
	} version;
} __packed;

/**
 * struct sev_tio_status - TIO_STATUS command's info_paddr buffer
 *
 * @length: Length of this structure in bytes
 * @tio_en: Indicates that SNP_INIT_EX initialized the RMP for SEV-TIO
 * @tio_init_done: Indicates TIO_INIT has been invoked
 * @spdm_req_size_min: Minimum SPDM request buffer size in bytes
 * @spdm_req_size_max: Maximum SPDM request buffer size in bytes
 * @spdm_scratch_size_min: Minimum SPDM scratch buffer size in bytes
 * @spdm_scratch_size_max: Maximum SPDM scratch buffer size in bytes
 * @spdm_out_size_min: Minimum SPDM output buffer size in bytes
 * @spdm_out_size_max: Maximum SPDM output buffer size in bytes
 * @spdm_rsp_size_min: Minimum SPDM response buffer size in bytes
 * @spdm_rsp_size_max: Maximum SPDM response buffer size in bytes
 * @devctx_size: Size of a device context buffer in bytes
 * @tdictx_size: Size of a TDI context buffer in bytes
 * @tio_crypto_alg: TIO crypto algorithms supported
 */
struct sev_tio_status {
	u32 length;
	u32 tio_en		:1,
	    tio_init_done	:1,
	    reserved		:30;
	u32 spdm_req_size_min;
	u32 spdm_req_size_max;
	u32 spdm_scratch_size_min;
	u32 spdm_scratch_size_max;
	u32 spdm_out_size_min;
	u32 spdm_out_size_max;
	u32 spdm_rsp_size_min;
	u32 spdm_rsp_size_max;
	u32 devctx_size;
	u32 tdictx_size;
	u32 tio_crypto_alg;
	u8 reserved2[12];
} __packed;

int sev_tio_init_locked(void *tio_status_page);
int sev_tio_continue(struct tsm_dsm_tio *dev_data);

int sev_tio_dev_create(struct tsm_dsm_tio *dev_data, u16 device_id, u16 root_port_id,
		       u8 segment_id);
int sev_tio_dev_connect(struct tsm_dsm_tio *dev_data, u8 tc_mask, u8 ids[8], u8 cert_slot);
int sev_tio_dev_disconnect(struct tsm_dsm_tio *dev_data, bool force);
int sev_tio_dev_reclaim(struct tsm_dsm_tio *dev_data);

#endif /* __PSP_SEV_TIO_H__ */
drivers/crypto/ccp/sev-dev-tsm.c (+405)
// SPDX-License-Identifier: GPL-2.0-only

// Interface to CCP/SEV-TIO for generic PCIe TDISP module

#include <linux/pci.h>
#include <linux/device.h>
#include <linux/tsm.h>
#include <linux/iommu.h>
#include <linux/pci-doe.h>
#include <linux/bitfield.h>
#include <linux/module.h>

#include <asm/sev-common.h>
#include <asm/sev.h>

#include "psp-dev.h"
#include "sev-dev.h"
#include "sev-dev-tio.h"

MODULE_IMPORT_NS("PCI_IDE");

#define TIO_DEFAULT_NR_IDE_STREAMS	1

static uint nr_ide_streams = TIO_DEFAULT_NR_IDE_STREAMS;
module_param_named(ide_nr, nr_ide_streams, uint, 0644);
MODULE_PARM_DESC(ide_nr, "Set the maximum number of IDE streams per PHB");

#define dev_to_sp(dev)		((struct sp_device *)dev_get_drvdata(dev))
#define dev_to_psp(dev)		((struct psp_device *)(dev_to_sp(dev)->psp_data))
#define dev_to_sev(dev)		((struct sev_device *)(dev_to_psp(dev)->sev_data))
#define tsm_dev_to_sev(tsmdev)	dev_to_sev((tsmdev)->dev.parent)

#define pdev_to_tio_dsm(pdev)	(container_of((pdev)->tsm, struct tio_dsm, tsm.base_tsm))

static int sev_tio_spdm_cmd(struct tio_dsm *dsm, int ret)
{
	struct tsm_dsm_tio *dev_data = &dsm->data;
	struct tsm_spdm *spdm = &dev_data->spdm;

	/* Check the main command handler response before entering the loop */
	if (ret == 0 && dev_data->psp_ret != SEV_RET_SUCCESS)
		return -EINVAL;

	if (ret <= 0)
		return ret;

	/* ret > 0 means "SPDM requested" */
	while (ret == PCI_DOE_FEATURE_CMA || ret == PCI_DOE_FEATURE_SSESSION) {
		ret = pci_doe(dsm->tsm.doe_mb, PCI_VENDOR_ID_PCI_SIG, ret,
			      spdm->req, spdm->req_len, spdm->rsp, spdm->rsp_len);
		if (ret < 0)
			break;

		WARN_ON_ONCE(ret == 0); /* The response should never be empty */
		spdm->rsp_len = ret;
		ret = sev_tio_continue(dev_data);
	}

	return ret;
}

static int stream_enable(struct pci_ide *ide)
{
	struct pci_dev *rp = pcie_find_root_port(ide->pdev);
	int ret;

	ret = pci_ide_stream_enable(rp, ide);
	if (ret)
		return ret;

	ret = pci_ide_stream_enable(ide->pdev, ide);
	if (ret)
		pci_ide_stream_disable(rp, ide);

	return ret;
}

static int streams_enable(struct pci_ide **ide)
{
	int ret = 0;

	for (int i = 0; i < TIO_IDE_MAX_TC; ++i) {
		if (ide[i]) {
			ret = stream_enable(ide[i]);
			if (ret)
				break;
		}
	}

	return ret;
}

static void stream_disable(struct pci_ide *ide)
{
	pci_ide_stream_disable(ide->pdev, ide);
	pci_ide_stream_disable(pcie_find_root_port(ide->pdev), ide);
}

static void streams_disable(struct pci_ide **ide)
{
	for (int i = 0; i < TIO_IDE_MAX_TC; ++i)
		if (ide[i])
			stream_disable(ide[i]);
}

static void stream_setup(struct pci_ide *ide)
{
	struct pci_dev *rp = pcie_find_root_port(ide->pdev);

	ide->partner[PCI_IDE_EP].rid_start = 0;
	ide->partner[PCI_IDE_EP].rid_end = 0xffff;
	ide->partner[PCI_IDE_RP].rid_start = 0;
	ide->partner[PCI_IDE_RP].rid_end = 0xffff;

	ide->pdev->ide_cfg = 0;
	ide->pdev->ide_tee_limit = 1;
	rp->ide_cfg = 1;
	rp->ide_tee_limit = 0;

	pci_warn(ide->pdev, "Forcing CFG/TEE for %s", pci_name(rp));
	pci_ide_stream_setup(ide->pdev, ide);
	pci_ide_stream_setup(rp, ide);
}

static u8 streams_setup(struct pci_ide **ide, u8 *ids)
{
	bool def = false;
	u8 tc_mask = 0;
	int i;

	for (i = 0; i < TIO_IDE_MAX_TC; ++i) {
		if (!ide[i]) {
			ids[i] = 0xFF;
			continue;
		}

		tc_mask |= BIT(i);
		ids[i] = ide[i]->stream_id;

		if (!def) {
			struct pci_ide_partner *settings;

			settings = pci_ide_to_settings(ide[i]->pdev, ide[i]);
			settings->default_stream = 1;
			def = true;
		}

		stream_setup(ide[i]);
	}

	return tc_mask;
}

static int streams_register(struct pci_ide **ide)
{
	int ret = 0, i;

	for (i = 0; i < TIO_IDE_MAX_TC; ++i) {
		if (ide[i]) {
			ret = pci_ide_stream_register(ide[i]);
			if (ret)
				break;
		}
	}

	return ret;
}

static void streams_unregister(struct pci_ide **ide)
{
	for (int i = 0; i < TIO_IDE_MAX_TC; ++i)
		if (ide[i])
			pci_ide_stream_unregister(ide[i]);
}

static void stream_teardown(struct pci_ide *ide)
{
	pci_ide_stream_teardown(ide->pdev, ide);
	pci_ide_stream_teardown(pcie_find_root_port(ide->pdev), ide);
}

static void streams_teardown(struct pci_ide **ide)
{
	for (int i = 0; i < TIO_IDE_MAX_TC; ++i) {
		if (ide[i]) {
			stream_teardown(ide[i]);
			pci_ide_stream_free(ide[i]);
			ide[i] = NULL;
		}
	}
}

static int stream_alloc(struct pci_dev *pdev, struct pci_ide **ide,
			unsigned int tc)
{
	struct pci_dev *rp = pcie_find_root_port(pdev);
	struct pci_ide *ide1;

	if (ide[tc]) {
		pci_err(pdev, "Stream for class=%d already registered", tc);
		return -EBUSY;
	}

	/* FIXME: find a better way */
	if (nr_ide_streams != TIO_DEFAULT_NR_IDE_STREAMS)
		pci_notice(pdev, "Enable non-default %d streams", nr_ide_streams);
	pci_ide_set_nr_streams(to_pci_host_bridge(rp->bus->bridge), nr_ide_streams);

	ide1 = pci_ide_stream_alloc(pdev);
	if (!ide1)
		return -EFAULT;

	/* Blindly assign streamid=0 to TC=0, and so on */
	ide1->stream_id = tc;

	ide[tc] = ide1;

	return 0;
}

static struct pci_tsm *tio_pf0_probe(struct pci_dev *pdev, struct sev_device *sev)
{
	struct tio_dsm *dsm __free(kfree) = kzalloc(sizeof(*dsm), GFP_KERNEL);
	int rc;

	if (!dsm)
		return NULL;

	rc = pci_tsm_pf0_constructor(pdev, &dsm->tsm, sev->tsmdev);
	if (rc)
		return NULL;

	pci_dbg(pdev, "TSM enabled\n");
	dsm->sev = sev;
	return &no_free_ptr(dsm)->tsm.base_tsm;
}

static struct pci_tsm *dsm_probe(struct tsm_dev *tsmdev, struct pci_dev *pdev)
{
	struct sev_device *sev = tsm_dev_to_sev(tsmdev);

	if (is_pci_tsm_pf0(pdev))
		return tio_pf0_probe(pdev, sev);
	return 0;
}

static void dsm_remove(struct pci_tsm *tsm)
{
	struct pci_dev *pdev = tsm->pdev;

	pci_dbg(pdev, "TSM disabled\n");

	if (is_pci_tsm_pf0(pdev)) {
		struct tio_dsm *dsm = container_of(tsm, struct tio_dsm, tsm.base_tsm);

		pci_tsm_pf0_destructor(&dsm->tsm);
		kfree(dsm);
	}
}

static int dsm_create(struct tio_dsm *dsm)
{
	struct pci_dev *pdev = dsm->tsm.base_tsm.pdev;
	u8 segment_id = pdev->bus ? pci_domain_nr(pdev->bus) : 0;
	struct pci_dev *rootport = pcie_find_root_port(pdev);
	u16 device_id = pci_dev_id(pdev);
	u16 root_port_id;
	u32 lnkcap = 0;

	if (pci_read_config_dword(rootport, pci_pcie_cap(rootport) + PCI_EXP_LNKCAP,
				  &lnkcap))
		return -ENODEV;

	root_port_id = FIELD_GET(PCI_EXP_LNKCAP_PN, lnkcap);

	return sev_tio_dev_create(&dsm->data, device_id, root_port_id, segment_id);
}

static int dsm_connect(struct pci_dev *pdev)
{
	struct tio_dsm *dsm = pdev_to_tio_dsm(pdev);
	struct tsm_dsm_tio *dev_data = &dsm->data;
	u8 ids[TIO_IDE_MAX_TC];
	u8 tc_mask;
	int ret;

	if (pci_find_doe_mailbox(pdev, PCI_VENDOR_ID_PCI_SIG,
				 PCI_DOE_FEATURE_SSESSION) != dsm->tsm.doe_mb) {
		pci_err(pdev, "CMA DOE MB must support SSESSION\n");
		return -EFAULT;
	}

	ret = stream_alloc(pdev, dev_data->ide, 0);
	if (ret)
		return ret;

	ret = dsm_create(dsm);
	if (ret)
		goto ide_free_exit;

	tc_mask = streams_setup(dev_data->ide, ids);

	ret = sev_tio_dev_connect(dev_data, tc_mask, ids, dev_data->cert_slot);
	ret = sev_tio_spdm_cmd(dsm, ret);
	if (ret)
		goto free_exit;

	streams_enable(dev_data->ide);

	ret = streams_register(dev_data->ide);
	if (ret)
		goto free_exit;

	return 0;

free_exit:
	sev_tio_dev_reclaim(dev_data);
	streams_disable(dev_data->ide);
ide_free_exit:
	streams_teardown(dev_data->ide);

	return ret;
}

static void dsm_disconnect(struct pci_dev *pdev)
{
	bool force = SYSTEM_HALT <= system_state && system_state <= SYSTEM_RESTART;
	struct tio_dsm *dsm = pdev_to_tio_dsm(pdev);
	struct tsm_dsm_tio *dev_data = &dsm->data;
	int ret;

	ret = sev_tio_dev_disconnect(dev_data, force);
	ret = sev_tio_spdm_cmd(dsm, ret);
	if (ret && !force) {
		ret = sev_tio_dev_disconnect(dev_data, true);
		sev_tio_spdm_cmd(dsm, ret);
	}

	sev_tio_dev_reclaim(dev_data);

	streams_disable(dev_data->ide);
	streams_unregister(dev_data->ide);
	streams_teardown(dev_data->ide);
}

static struct pci_tsm_ops sev_tsm_ops = {
	.probe = dsm_probe,
	.remove = dsm_remove,
	.connect = dsm_connect,
	.disconnect = dsm_disconnect,
};

void sev_tsm_init_locked(struct sev_device *sev, void *tio_status_page)
{
	struct sev_tio_status *t = kzalloc(sizeof(*t), GFP_KERNEL);
	struct tsm_dev *tsmdev;
	int ret;

	WARN_ON(sev->tio_status);

	if (!t)
		return;

	ret = sev_tio_init_locked(tio_status_page);
	if (ret) {
		pr_warn("SEV-TIO STATUS failed with %d\n", ret);
		goto error_exit;
	}

	tsmdev = tsm_register(sev->dev, &sev_tsm_ops);
	if (IS_ERR(tsmdev))
		goto error_exit;

	memcpy(t, tio_status_page, sizeof(*t));

	pr_notice("SEV-TIO status: EN=%d INIT_DONE=%d rq=%d..%d rs=%d..%d "
		  "scr=%d..%d out=%d..%d dev=%d tdi=%d algos=%x\n",
		  t->tio_en, t->tio_init_done,
		  t->spdm_req_size_min, t->spdm_req_size_max,
		  t->spdm_rsp_size_min, t->spdm_rsp_size_max,
		  t->spdm_scratch_size_min, t->spdm_scratch_size_max,
		  t->spdm_out_size_min, t->spdm_out_size_max,
		  t->devctx_size, t->tdictx_size,
		  t->tio_crypto_alg);

	sev->tsmdev = tsmdev;
	sev->tio_status = t;

	return;

error_exit:
	pr_err("Failed to enable SEV-TIO: ret=%d en=%d initdone=%d SEV=%d\n",
	       ret, t->tio_en, t->tio_init_done, boot_cpu_has(X86_FEATURE_SEV));
	kfree(t);
}

void sev_tsm_uninit(struct sev_device *sev)
{
	if (sev->tsmdev)
		tsm_unregister(sev->tsmdev);

	sev->tsmdev = NULL;
}
drivers/crypto/ccp/sev-dev.c (+56 -10)
@@
 module_param(psp_init_on_probe, bool, 0444);
 MODULE_PARM_DESC(psp_init_on_probe, " if true, the PSP will be initialized on module init. Else the PSP will be initialized on the first command requiring it");
 
+#if IS_ENABLED(CONFIG_PCI_TSM)
+static bool sev_tio_enabled = true;
+module_param_named(tio, sev_tio_enabled, bool, 0444);
+MODULE_PARM_DESC(tio, "Enables TIO in SNP_INIT_EX");
+#else
+static const bool sev_tio_enabled = false;
+#endif
+
 MODULE_FIRMWARE("amd/amd_sev_fam17h_model0xh.sbin");	/* 1st gen EPYC */
 MODULE_FIRMWARE("amd/amd_sev_fam17h_model3xh.sbin");	/* 2nd gen EPYC */
 MODULE_FIRMWARE("amd/amd_sev_fam19h_model0xh.sbin");	/* 3rd gen EPYC */
@@
 	case SEV_CMD_SNP_COMMIT:		return sizeof(struct sev_data_snp_commit);
 	case SEV_CMD_SNP_FEATURE_INFO:		return sizeof(struct sev_data_snp_feature_info);
 	case SEV_CMD_SNP_VLEK_LOAD:		return sizeof(struct sev_user_data_snp_vlek_load);
-	default:				return 0;
+	default:				return sev_tio_cmd_buffer_len(cmd);
 	}
 
 	return 0;
@@
 	return sev_write_init_ex_file();
 }
 
-/*
- * snp_reclaim_pages() needs __sev_do_cmd_locked(), and __sev_do_cmd_locked()
- * needs snp_reclaim_pages(), so a forward declaration is needed.
- */
-static int __sev_do_cmd_locked(int cmd, void *data, int *psp_ret);
-
-static int snp_reclaim_pages(unsigned long paddr, unsigned int npages, bool locked)
+int snp_reclaim_pages(unsigned long paddr, unsigned int npages, bool locked)
 {
 	int ret, err, i;
 
@@
 	snp_leak_pages(__phys_to_pfn(paddr), npages - i);
 	return ret;
 }
+EXPORT_SYMBOL_GPL(snp_reclaim_pages);
 
 static int rmp_mark_pages_firmware(unsigned long paddr, unsigned int npages, bool locked)
 {
@@
 	return 0;
 }
 
-static int __sev_do_cmd_locked(int cmd, void *data, int *psp_ret)
+int __sev_do_cmd_locked(int cmd, void *data, int *psp_ret)
 {
 	struct cmd_buf_desc desc_list[CMD_BUF_DESC_MAX] = {0};
 	struct psp_device *psp = psp_master;
@@
 	 *
 	 */
 	if (sev_version_greater_or_equal(SNP_MIN_API_MAJOR, 52)) {
+		bool tio_supp = !!(sev->snp_feat_info_0.ebx & SNP_SEV_TIO_SUPPORTED);
+
 		/*
 		 * Firmware checks that the pages containing the ranges enumerated
 		 * in the RANGES structure are either in the default page state or in the
@@
 		data.init_rmp = 1;
 		data.list_paddr_en = 1;
 		data.list_paddr = __psp_pa(snp_range_list);
+
+		data.tio_en = tio_supp && sev_tio_enabled && amd_iommu_sev_tio_supported();
+
+		/*
+		 * When psp_init_on_probe is disabled, userspace calling the
+		 * SEV ioctl can inadvertently shut down SNP and SEV-TIO,
+		 * causing unexpected state loss.
+		 */
+		if (data.tio_en && !psp_init_on_probe)
+			dev_warn(sev->dev, "SEV-TIO is incompatible with psp_init_on_probe=0\n");
+
 		cmd = SEV_CMD_SNP_INIT_EX;
 	} else {
 		cmd = SEV_CMD_SNP_INIT;
@@
 	snp_hv_fixed_pages_state_update(sev, HV_FIXED);
 	sev->snp_initialized = true;
-	dev_dbg(sev->dev, "SEV-SNP firmware initialized\n");
+	dev_dbg(sev->dev, "SEV-SNP firmware initialized, SEV-TIO is %s\n",
+		data.tio_en ? "enabled" : "disabled");
 
 	dev_info(sev->dev, "SEV-SNP API:%d.%d build:%d\n", sev->api_major,
 		 sev->api_minor, sev->build);
 
 	atomic_notifier_chain_register(&panic_notifier_list,
 				       &snp_panic_notifier);
+
+	if (data.tio_en) {
+		/*
+		 * This executes with sev_cmd_mutex held, so down the stack
+		 * snp_reclaim_pages(locked=false) might be needed (which is
+		 * extremely unlikely) but would cause a deadlock.
+		 * Instead of exporting __snp_alloc_firmware_pages(), allocate
+		 * a page for this one call here.
+		 */
+		void *tio_status = page_address(__snp_alloc_firmware_pages(
+				GFP_KERNEL_ACCOUNT | __GFP_ZERO, 0, true));
+
+		if (tio_status) {
+			sev_tsm_init_locked(sev, tio_status);
+			__snp_free_firmware_pages(virt_to_page(tio_status), 0, true);
+		}
+	}
 
 	sev_es_tmr_size = SNP_TMR_SIZE;
 
@@
 
 static void sev_firmware_shutdown(struct sev_device *sev)
 {
+	/*
+	 * Called without sev_cmd_mutex held, as the TSM will likely try
+	 * disconnecting IDE and that ends up calling sev_do_cmd(), which
+	 * takes sev_cmd_mutex.
+	 */
+	if (sev->tio_status)
+		sev_tsm_uninit(sev);
+
 	mutex_lock(&sev_cmd_mutex);
 
 	__sev_firmware_shutdown(sev, false);
 
 	kfree(sev->tio_status);
 	sev->tio_status = NULL;
 
 	mutex_unlock(&sev_cmd_mutex);
 }
drivers/crypto/ccp/sev-dev.h (+11)
@@
 	struct miscdevice misc;
 };
 
+struct sev_tio_status;
+
 struct sev_device {
 	struct device *dev;
 	struct psp_device *psp;
@@
 
 	struct sev_user_data_snp_status snp_plat_status;
 	struct snp_feature_info snp_feat_info_0;
+
+	struct tsm_dev *tsmdev;
+	struct sev_tio_status *tio_status;
 };
 
 int sev_dev_init(struct psp_device *psp);
 void sev_dev_destroy(struct psp_device *psp);
+
+int __sev_do_cmd_locked(int cmd, void *data, int *psp_ret);
 
 void sev_pci_init(void);
 void sev_pci_exit(void);
 
 struct page *snp_alloc_hv_fixed_pages(unsigned int num_2mb_pages);
 void snp_free_hv_fixed_pages(struct page *page);
+
+void sev_tsm_init_locked(struct sev_device *sev, void *tio_status_page);
+void sev_tsm_uninit(struct sev_device *sev);
+int sev_tio_cmd_buffer_len(int cmd);
 
 #endif /* __SEV_DEV_H */
drivers/iommu/amd/amd_iommu_types.h (+1)
@@
 
 
 /* Extended Feature 2 Bits */
+#define FEATURE_SEVSNPIO_SUP	BIT_ULL(1)
 #define FEATURE_SNPAVICSUP	GENMASK_ULL(7, 5)
 #define FEATURE_SNPAVICSUP_GAM(x) \
 	(FIELD_GET(FEATURE_SNPAVICSUP, x) == 0x1)
drivers/iommu/amd/init.c (+9)
@@
 	if (check_feature(FEATURE_SNP))
 		pr_cont(" SNP");
 
+	if (check_feature2(FEATURE_SEVSNPIO_SUP))
+		pr_cont(" SEV-TIO");
+
 	pr_cont("\n");
 }
@@
 	return 0;
 }
 EXPORT_SYMBOL_GPL(amd_iommu_snp_disable);
+
+bool amd_iommu_sev_tio_supported(void)
+{
+	return check_feature2(FEATURE_SEVSNPIO_SUP);
+}
+EXPORT_SYMBOL_GPL(amd_iommu_sev_tio_supported);
 #endif
drivers/pci/Kconfig (+18)
@@
 config PCI_ATS
 	bool
 
+config PCI_IDE
+	bool
+
+config PCI_TSM
+	bool "PCI TSM: Device security protocol support"
+	select PCI_IDE
+	select PCI_DOE
+	select TSM
+	help
+	  The TEE (Trusted Execution Environment) Device Interface
+	  Security Protocol (TDISP) defines a "TSM" as a platform agent
+	  that manages device authentication, link encryption, link
+	  integrity protection, and assignment of PCI device functions
+	  (virtual or physical) to confidential computing VMs that can
+	  access (DMA) guest private memory.
+
+	  Enable a platform TSM driver to use this capability.
+
 config PCI_DOE
 	bool "Enable PCI Data Object Exchange (DOE) support"
 	help
drivers/pci/Makefile (+2)
@@
 obj-$(CONFIG_XEN_PCIDEV_FRONTEND)	+= xen-pcifront.o
 obj-$(CONFIG_VGA_ARB)			+= vgaarb.o
 obj-$(CONFIG_PCI_DOE)			+= doe.o
+obj-$(CONFIG_PCI_IDE)			+= ide.o
+obj-$(CONFIG_PCI_TSM)			+= tsm.o
 obj-$(CONFIG_PCI_DYNAMIC_OF_NODES)	+= of_property.o
 obj-$(CONFIG_PCI_NPEM)			+= npem.o
 obj-$(CONFIG_PCIE_TPH)			+= tph.o
+39
drivers/pci/bus.c
··· 8 8 */ 9 9 #include <linux/module.h> 10 10 #include <linux/kernel.h> 11 + #include <linux/cleanup.h> 11 12 #include <linux/pci.h> 12 13 #include <linux/errno.h> 13 14 #include <linux/ioport.h> ··· 436 435 return ret; 437 436 } 438 437 438 + static int __pci_walk_bus_reverse(struct pci_bus *top, 439 + int (*cb)(struct pci_dev *, void *), 440 + void *userdata) 441 + { 442 + struct pci_dev *dev; 443 + int ret = 0; 444 + 445 + list_for_each_entry_reverse(dev, &top->devices, bus_list) { 446 + if (dev->subordinate) { 447 + ret = __pci_walk_bus_reverse(dev->subordinate, cb, 448 + userdata); 449 + if (ret) 450 + break; 451 + } 452 + ret = cb(dev, userdata); 453 + if (ret) 454 + break; 455 + } 456 + return ret; 457 + } 458 + 439 459 /** 440 460 * pci_walk_bus - walk devices on/under bus, calling callback. 441 461 * @top: bus whose devices should be walked ··· 477 455 up_read(&pci_bus_sem); 478 456 } 479 457 EXPORT_SYMBOL_GPL(pci_walk_bus); 458 + 459 + /** 460 + * pci_walk_bus_reverse - walk devices on/under bus, calling callback. 461 + * @top: bus whose devices should be walked 462 + * @cb: callback to be called for each device found 463 + * @userdata: arbitrary pointer to be passed to callback 464 + * 465 + * Same semantics as pci_walk_bus(), but walks the bus in reverse order. 466 + */ 467 + void pci_walk_bus_reverse(struct pci_bus *top, 468 + int (*cb)(struct pci_dev *, void *), void *userdata) 469 + { 470 + down_read(&pci_bus_sem); 471 + __pci_walk_bus_reverse(top, cb, userdata); 472 + up_read(&pci_bus_sem); 473 + } 474 + EXPORT_SYMBOL_GPL(pci_walk_bus_reverse); 480 475 481 476 void pci_walk_bus_locked(struct pci_bus *top, int (*cb)(struct pci_dev *, void *), void *userdata) 482 477 {
-2
drivers/pci/doe.c
··· 24 24 25 25 #include "pci.h" 26 26 27 - #define PCI_DOE_FEATURE_DISCOVERY 0 28 - 29 27 /* Timeout of 1 second from 6.30.2 Operation, PCI Spec r6.0 */ 30 28 #define PCI_DOE_TIMEOUT HZ 31 29 #define PCI_DOE_POLL_INTERVAL (PCI_DOE_TIMEOUT / 128)
+815
drivers/pci/ide.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* Copyright(c) 2024-2025 Intel Corporation. All rights reserved. */ 3 + 4 + /* PCIe r7.0 section 6.33 Integrity & Data Encryption (IDE) */ 5 + 6 + #define dev_fmt(fmt) "PCI/IDE: " fmt 7 + #include <linux/bitfield.h> 8 + #include <linux/bitops.h> 9 + #include <linux/pci.h> 10 + #include <linux/pci-ide.h> 11 + #include <linux/pci_regs.h> 12 + #include <linux/slab.h> 13 + #include <linux/sysfs.h> 14 + #include <linux/tsm.h> 15 + 16 + #include "pci.h" 17 + 18 + static int __sel_ide_offset(u16 ide_cap, u8 nr_link_ide, u8 stream_index, 19 + u8 nr_ide_mem) 20 + { 21 + u32 offset = ide_cap + PCI_IDE_LINK_STREAM_0 + 22 + nr_link_ide * PCI_IDE_LINK_BLOCK_SIZE; 23 + 24 + /* 25 + * Assume a constant number of address association resources per stream 26 + * index 27 + */ 28 + return offset + stream_index * PCI_IDE_SEL_BLOCK_SIZE(nr_ide_mem); 29 + } 30 + 31 + static int sel_ide_offset(struct pci_dev *pdev, 32 + struct pci_ide_partner *settings) 33 + { 34 + return __sel_ide_offset(pdev->ide_cap, pdev->nr_link_ide, 35 + settings->stream_index, pdev->nr_ide_mem); 36 + } 37 + 38 + static bool reserve_stream_index(struct pci_dev *pdev, u8 idx) 39 + { 40 + int ret; 41 + 42 + ret = ida_alloc_range(&pdev->ide_stream_ida, idx, idx, GFP_KERNEL); 43 + return ret >= 0; 44 + } 45 + 46 + static bool reserve_stream_id(struct pci_host_bridge *hb, u8 id) 47 + { 48 + int ret; 49 + 50 + ret = ida_alloc_range(&hb->ide_stream_ids_ida, id, id, GFP_KERNEL); 51 + return ret >= 0; 52 + } 53 + 54 + static bool claim_stream(struct pci_host_bridge *hb, u8 stream_id, 55 + struct pci_dev *pdev, u8 stream_idx) 56 + { 57 + dev_info(&hb->dev, "Stream ID %d active at init\n", stream_id); 58 + if (!reserve_stream_id(hb, stream_id)) { 59 + dev_info(&hb->dev, "Failed to claim %s Stream ID %d\n", 60 + stream_id == PCI_IDE_RESERVED_STREAM_ID ? 
"reserved" : 61 + "active", 62 + stream_id); 63 + return false; 64 + } 65 + 66 + /* No stream index to reserve in the Link IDE case */ 67 + if (!pdev) 68 + return true; 69 + 70 + if (!reserve_stream_index(pdev, stream_idx)) { 71 + pci_info(pdev, "Failed to claim active Selective Stream %d\n", 72 + stream_idx); 73 + return false; 74 + } 75 + 76 + return true; 77 + } 78 + 79 + void pci_ide_init(struct pci_dev *pdev) 80 + { 81 + struct pci_host_bridge *hb = pci_find_host_bridge(pdev->bus); 82 + u16 nr_link_ide, nr_ide_mem, nr_streams; 83 + u16 ide_cap; 84 + u32 val; 85 + 86 + /* 87 + * Unconditionally init so that ida idle state is consistent with 88 + * pdev->ide_cap. 89 + */ 90 + ida_init(&pdev->ide_stream_ida); 91 + 92 + if (!pci_is_pcie(pdev)) 93 + return; 94 + 95 + ide_cap = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_IDE); 96 + if (!ide_cap) 97 + return; 98 + 99 + pci_read_config_dword(pdev, ide_cap + PCI_IDE_CAP, &val); 100 + if ((val & PCI_IDE_CAP_SELECTIVE) == 0) 101 + return; 102 + 103 + /* 104 + * Require endpoint IDE capability to be paired with IDE Root Port IDE 105 + * capability. 
106 + */ 107 + if (pci_pcie_type(pdev) == PCI_EXP_TYPE_ENDPOINT) { 108 + struct pci_dev *rp = pcie_find_root_port(pdev); 109 + 110 + if (!rp->ide_cap) 111 + return; 112 + } 113 + 114 + pdev->ide_cfg = FIELD_GET(PCI_IDE_CAP_SEL_CFG, val); 115 + pdev->ide_tee_limit = FIELD_GET(PCI_IDE_CAP_TEE_LIMITED, val); 116 + 117 + if (val & PCI_IDE_CAP_LINK) 118 + nr_link_ide = 1 + FIELD_GET(PCI_IDE_CAP_LINK_TC_NUM, val); 119 + else 120 + nr_link_ide = 0; 121 + 122 + nr_ide_mem = 0; 123 + nr_streams = 1 + FIELD_GET(PCI_IDE_CAP_SEL_NUM, val); 124 + for (u16 i = 0; i < nr_streams; i++) { 125 + int pos = __sel_ide_offset(ide_cap, nr_link_ide, i, nr_ide_mem); 126 + int nr_assoc; 127 + u32 val; 128 + u8 id; 129 + 130 + pci_read_config_dword(pdev, pos + PCI_IDE_SEL_CAP, &val); 131 + 132 + /* 133 + * Let's not entertain streams that do not have a constant 134 + * number of address association blocks 135 + */ 136 + nr_assoc = FIELD_GET(PCI_IDE_SEL_CAP_ASSOC_NUM, val); 137 + if (i && (nr_assoc != nr_ide_mem)) { 138 + pci_info(pdev, "Unsupported Selective Stream %d capability, SKIP the rest\n", i); 139 + nr_streams = i; 140 + break; 141 + } 142 + 143 + nr_ide_mem = nr_assoc; 144 + 145 + /* 146 + * Claim Stream IDs and Selective Stream blocks that are already 147 + * active on the device 148 + */ 149 + pci_read_config_dword(pdev, pos + PCI_IDE_SEL_CTL, &val); 150 + id = FIELD_GET(PCI_IDE_SEL_CTL_ID, val); 151 + if ((val & PCI_IDE_SEL_CTL_EN) && 152 + !claim_stream(hb, id, pdev, i)) 153 + return; 154 + } 155 + 156 + /* Reserve link stream-ids that are already active on the device */ 157 + for (u16 i = 0; i < nr_link_ide; ++i) { 158 + int pos = ide_cap + PCI_IDE_LINK_STREAM_0 + i * PCI_IDE_LINK_BLOCK_SIZE; 159 + u8 id; 160 + 161 + pci_read_config_dword(pdev, pos + PCI_IDE_LINK_CTL_0, &val); 162 + id = FIELD_GET(PCI_IDE_LINK_CTL_ID, val); 163 + if ((val & PCI_IDE_LINK_CTL_EN) && 164 + !claim_stream(hb, id, NULL, -1)) 165 + return; 166 + } 167 + 168 + for (u16 i = 0; i < nr_streams; i++) { 169 
+ int pos = __sel_ide_offset(ide_cap, nr_link_ide, i, nr_ide_mem); 170 + 171 + pci_read_config_dword(pdev, pos + PCI_IDE_SEL_CAP, &val); 172 + if (val & PCI_IDE_SEL_CTL_EN) 173 + continue; 174 + val &= ~PCI_IDE_SEL_CTL_ID; 175 + val |= FIELD_PREP(PCI_IDE_SEL_CTL_ID, PCI_IDE_RESERVED_STREAM_ID); 176 + pci_write_config_dword(pdev, pos + PCI_IDE_SEL_CTL, val); 177 + } 178 + 179 + for (u16 i = 0; i < nr_link_ide; ++i) { 180 + int pos = ide_cap + PCI_IDE_LINK_STREAM_0 + 181 + i * PCI_IDE_LINK_BLOCK_SIZE; 182 + 183 + pci_read_config_dword(pdev, pos, &val); 184 + if (val & PCI_IDE_LINK_CTL_EN) 185 + continue; 186 + val &= ~PCI_IDE_LINK_CTL_ID; 187 + val |= FIELD_PREP(PCI_IDE_LINK_CTL_ID, PCI_IDE_RESERVED_STREAM_ID); 188 + pci_write_config_dword(pdev, pos, val); 189 + } 190 + 191 + pdev->ide_cap = ide_cap; 192 + pdev->nr_link_ide = nr_link_ide; 193 + pdev->nr_sel_ide = nr_streams; 194 + pdev->nr_ide_mem = nr_ide_mem; 195 + } 196 + 197 + struct stream_index { 198 + struct ida *ida; 199 + u8 stream_index; 200 + }; 201 + 202 + static void free_stream_index(struct stream_index *stream) 203 + { 204 + ida_free(stream->ida, stream->stream_index); 205 + } 206 + 207 + DEFINE_FREE(free_stream, struct stream_index *, if (_T) free_stream_index(_T)) 208 + static struct stream_index *alloc_stream_index(struct ida *ida, u16 max, 209 + struct stream_index *stream) 210 + { 211 + int id; 212 + 213 + if (!max) 214 + return NULL; 215 + 216 + id = ida_alloc_max(ida, max - 1, GFP_KERNEL); 217 + if (id < 0) 218 + return NULL; 219 + 220 + *stream = (struct stream_index) { 221 + .ida = ida, 222 + .stream_index = id, 223 + }; 224 + return stream; 225 + } 226 + 227 + /** 228 + * pci_ide_stream_alloc() - Reserve stream indices and probe for settings 229 + * @pdev: IDE capable PCIe Endpoint Physical Function 230 + * 231 + * Retrieve the Requester ID range of @pdev for programming its Root 232 + * Port IDE RID Association registers, and conversely retrieve the 233 + * Requester ID of the Root Port for 
programming @pdev's IDE RID 234 + * Association registers. 235 + * 236 + * Allocate a Selective IDE Stream Register Block instance per port. 237 + * 238 + * Allocate a platform stream resource from the associated host bridge. 239 + * Retrieve stream association parameters for Requester ID range and 240 + * address range restrictions for the stream. 241 + */ 242 + struct pci_ide *pci_ide_stream_alloc(struct pci_dev *pdev) 243 + { 244 + /* EP, RP, + HB Stream allocation */ 245 + struct stream_index __stream[PCI_IDE_HB + 1]; 246 + struct pci_bus_region pref_assoc = { 0, -1 }; 247 + struct pci_bus_region mem_assoc = { 0, -1 }; 248 + struct resource *mem, *pref; 249 + struct pci_host_bridge *hb; 250 + struct pci_dev *rp, *br; 251 + int num_vf, rid_end; 252 + 253 + if (!pci_is_pcie(pdev)) 254 + return NULL; 255 + 256 + if (pci_pcie_type(pdev) != PCI_EXP_TYPE_ENDPOINT) 257 + return NULL; 258 + 259 + if (!pdev->ide_cap) 260 + return NULL; 261 + 262 + struct pci_ide *ide __free(kfree) = kzalloc(sizeof(*ide), GFP_KERNEL); 263 + if (!ide) 264 + return NULL; 265 + 266 + hb = pci_find_host_bridge(pdev->bus); 267 + struct stream_index *hb_stream __free(free_stream) = alloc_stream_index( 268 + &hb->ide_stream_ida, hb->nr_ide_streams, &__stream[PCI_IDE_HB]); 269 + if (!hb_stream) 270 + return NULL; 271 + 272 + rp = pcie_find_root_port(pdev); 273 + struct stream_index *rp_stream __free(free_stream) = alloc_stream_index( 274 + &rp->ide_stream_ida, rp->nr_sel_ide, &__stream[PCI_IDE_RP]); 275 + if (!rp_stream) 276 + return NULL; 277 + 278 + struct stream_index *ep_stream __free(free_stream) = alloc_stream_index( 279 + &pdev->ide_stream_ida, pdev->nr_sel_ide, &__stream[PCI_IDE_EP]); 280 + if (!ep_stream) 281 + return NULL; 282 + 283 + /* for SR-IOV case, cover all VFs */ 284 + num_vf = pci_num_vf(pdev); 285 + if (num_vf) 286 + rid_end = PCI_DEVID(pci_iov_virtfn_bus(pdev, num_vf), 287 + pci_iov_virtfn_devfn(pdev, num_vf)); 288 + else 289 + rid_end = pci_dev_id(pdev); 290 + 291 + br = 
pci_upstream_bridge(pdev); 292 + if (!br) 293 + return NULL; 294 + 295 + /* 296 + * Check if the device consumes memory and/or prefetch-memory. Setup 297 + * downstream address association ranges for each. 298 + */ 299 + mem = pci_resource_n(br, PCI_BRIDGE_MEM_WINDOW); 300 + pref = pci_resource_n(br, PCI_BRIDGE_PREF_MEM_WINDOW); 301 + if (resource_assigned(mem)) 302 + pcibios_resource_to_bus(br->bus, &mem_assoc, mem); 303 + if (resource_assigned(pref)) 304 + pcibios_resource_to_bus(br->bus, &pref_assoc, pref); 305 + 306 + *ide = (struct pci_ide) { 307 + .pdev = pdev, 308 + .partner = { 309 + [PCI_IDE_EP] = { 310 + .rid_start = pci_dev_id(rp), 311 + .rid_end = pci_dev_id(rp), 312 + .stream_index = no_free_ptr(ep_stream)->stream_index, 313 + /* Disable upstream address association */ 314 + .mem_assoc = { 0, -1 }, 315 + .pref_assoc = { 0, -1 }, 316 + }, 317 + [PCI_IDE_RP] = { 318 + .rid_start = pci_dev_id(pdev), 319 + .rid_end = rid_end, 320 + .stream_index = no_free_ptr(rp_stream)->stream_index, 321 + .mem_assoc = mem_assoc, 322 + .pref_assoc = pref_assoc, 323 + }, 324 + }, 325 + .host_bridge_stream = no_free_ptr(hb_stream)->stream_index, 326 + .stream_id = -1, 327 + }; 328 + 329 + return_ptr(ide); 330 + } 331 + EXPORT_SYMBOL_GPL(pci_ide_stream_alloc); 332 + 333 + /** 334 + * pci_ide_stream_free() - unwind pci_ide_stream_alloc() 335 + * @ide: idle IDE settings descriptor 336 + * 337 + * Free all of the stream index (register block) allocations acquired by 338 + * pci_ide_stream_alloc(). The stream represented by @ide is assumed to 339 + * be unregistered and not instantiated in any device. 
340 + */ 341 + void pci_ide_stream_free(struct pci_ide *ide) 342 + { 343 + struct pci_dev *pdev = ide->pdev; 344 + struct pci_dev *rp = pcie_find_root_port(pdev); 345 + struct pci_host_bridge *hb = pci_find_host_bridge(pdev->bus); 346 + 347 + ida_free(&pdev->ide_stream_ida, ide->partner[PCI_IDE_EP].stream_index); 348 + ida_free(&rp->ide_stream_ida, ide->partner[PCI_IDE_RP].stream_index); 349 + ida_free(&hb->ide_stream_ida, ide->host_bridge_stream); 350 + kfree(ide); 351 + } 352 + EXPORT_SYMBOL_GPL(pci_ide_stream_free); 353 + 354 + /** 355 + * pci_ide_stream_release() - unwind and release an @ide context 356 + * @ide: partially or fully registered IDE settings descriptor 357 + * 358 + * In support of automatic cleanup of IDE setup routines perform IDE 359 + * teardown in expected reverse order of setup and with respect to which 360 + * aspects of IDE setup have successfully completed. 361 + * 362 + * Be careful that setup order mirrors this shutdown order. Otherwise, 363 + * open code releasing the IDE context. 
364 + */ 365 + void pci_ide_stream_release(struct pci_ide *ide) 366 + { 367 + struct pci_dev *pdev = ide->pdev; 368 + struct pci_dev *rp = pcie_find_root_port(pdev); 369 + 370 + if (ide->partner[PCI_IDE_RP].enable) 371 + pci_ide_stream_disable(rp, ide); 372 + 373 + if (ide->partner[PCI_IDE_EP].enable) 374 + pci_ide_stream_disable(pdev, ide); 375 + 376 + if (ide->tsm_dev) 377 + tsm_ide_stream_unregister(ide); 378 + 379 + if (ide->partner[PCI_IDE_RP].setup) 380 + pci_ide_stream_teardown(rp, ide); 381 + 382 + if (ide->partner[PCI_IDE_EP].setup) 383 + pci_ide_stream_teardown(pdev, ide); 384 + 385 + if (ide->name) 386 + pci_ide_stream_unregister(ide); 387 + 388 + pci_ide_stream_free(ide); 389 + } 390 + EXPORT_SYMBOL_GPL(pci_ide_stream_release); 391 + 392 + struct pci_ide_stream_id { 393 + struct pci_host_bridge *hb; 394 + u8 stream_id; 395 + }; 396 + 397 + static struct pci_ide_stream_id * 398 + request_stream_id(struct pci_host_bridge *hb, u8 stream_id, 399 + struct pci_ide_stream_id *sid) 400 + { 401 + if (!reserve_stream_id(hb, stream_id)) 402 + return NULL; 403 + 404 + *sid = (struct pci_ide_stream_id) { 405 + .hb = hb, 406 + .stream_id = stream_id, 407 + }; 408 + 409 + return sid; 410 + } 411 + DEFINE_FREE(free_stream_id, struct pci_ide_stream_id *, 412 + if (_T) ida_free(&_T->hb->ide_stream_ids_ida, _T->stream_id)) 413 + 414 + /** 415 + * pci_ide_stream_register() - Prepare to activate an IDE Stream 416 + * @ide: IDE settings descriptor 417 + * 418 + * After a Stream ID has been acquired for @ide, record the presence of 419 + * the stream in sysfs. The expectation is that @ide is immutable while 420 + * registered. 
421 + */ 422 + int pci_ide_stream_register(struct pci_ide *ide) 423 + { 424 + struct pci_dev *pdev = ide->pdev; 425 + struct pci_host_bridge *hb = pci_find_host_bridge(pdev->bus); 426 + struct pci_ide_stream_id __sid; 427 + u8 ep_stream, rp_stream; 428 + int rc; 429 + 430 + if (ide->stream_id < 0 || ide->stream_id > U8_MAX) { 431 + pci_err(pdev, "Setup fail: Invalid Stream ID: %d\n", ide->stream_id); 432 + return -ENXIO; 433 + } 434 + 435 + struct pci_ide_stream_id *sid __free(free_stream_id) = 436 + request_stream_id(hb, ide->stream_id, &__sid); 437 + if (!sid) { 438 + pci_err(pdev, "Setup fail: Stream ID %d in use\n", ide->stream_id); 439 + return -EBUSY; 440 + } 441 + 442 + ep_stream = ide->partner[PCI_IDE_EP].stream_index; 443 + rp_stream = ide->partner[PCI_IDE_RP].stream_index; 444 + const char *name __free(kfree) = kasprintf(GFP_KERNEL, "stream%d.%d.%d", 445 + ide->host_bridge_stream, 446 + rp_stream, ep_stream); 447 + if (!name) 448 + return -ENOMEM; 449 + 450 + rc = sysfs_create_link(&hb->dev.kobj, &pdev->dev.kobj, name); 451 + if (rc) 452 + return rc; 453 + 454 + ide->name = no_free_ptr(name); 455 + 456 + /* Stream ID reservation recorded in @ide is now successfully registered */ 457 + retain_and_null_ptr(sid); 458 + 459 + return 0; 460 + } 461 + EXPORT_SYMBOL_GPL(pci_ide_stream_register); 462 + 463 + /** 464 + * pci_ide_stream_unregister() - unwind pci_ide_stream_register() 465 + * @ide: idle IDE settings descriptor 466 + * 467 + * In preparation for freeing @ide, remove sysfs enumeration for the 468 + * stream. 
469 + */ 470 + void pci_ide_stream_unregister(struct pci_ide *ide) 471 + { 472 + struct pci_dev *pdev = ide->pdev; 473 + struct pci_host_bridge *hb = pci_find_host_bridge(pdev->bus); 474 + 475 + sysfs_remove_link(&hb->dev.kobj, ide->name); 476 + kfree(ide->name); 477 + ida_free(&hb->ide_stream_ids_ida, ide->stream_id); 478 + ide->name = NULL; 479 + } 480 + EXPORT_SYMBOL_GPL(pci_ide_stream_unregister); 481 + 482 + static int pci_ide_domain(struct pci_dev *pdev) 483 + { 484 + if (pdev->fm_enabled) 485 + return pci_domain_nr(pdev->bus); 486 + return 0; 487 + } 488 + 489 + struct pci_ide_partner *pci_ide_to_settings(struct pci_dev *pdev, struct pci_ide *ide) 490 + { 491 + if (!pci_is_pcie(pdev)) { 492 + pci_warn_once(pdev, "not a PCIe device\n"); 493 + return NULL; 494 + } 495 + 496 + switch (pci_pcie_type(pdev)) { 497 + case PCI_EXP_TYPE_ENDPOINT: 498 + if (pdev != ide->pdev) { 499 + pci_warn_once(pdev, "setup expected Endpoint: %s\n", pci_name(ide->pdev)); 500 + return NULL; 501 + } 502 + return &ide->partner[PCI_IDE_EP]; 503 + case PCI_EXP_TYPE_ROOT_PORT: { 504 + struct pci_dev *rp = pcie_find_root_port(ide->pdev); 505 + 506 + if (pdev != rp) { 507 + pci_warn_once(pdev, "setup expected Root Port: %s\n", 508 + pci_name(rp)); 509 + return NULL; 510 + } 511 + return &ide->partner[PCI_IDE_RP]; 512 + } 513 + default: 514 + pci_warn_once(pdev, "invalid device type\n"); 515 + return NULL; 516 + } 517 + } 518 + EXPORT_SYMBOL_GPL(pci_ide_to_settings); 519 + 520 + static void set_ide_sel_ctl(struct pci_dev *pdev, struct pci_ide *ide, 521 + struct pci_ide_partner *settings, int pos, 522 + bool enable) 523 + { 524 + u32 val = FIELD_PREP(PCI_IDE_SEL_CTL_ID, ide->stream_id) | 525 + FIELD_PREP(PCI_IDE_SEL_CTL_DEFAULT, settings->default_stream) | 526 + FIELD_PREP(PCI_IDE_SEL_CTL_CFG_EN, pdev->ide_cfg) | 527 + FIELD_PREP(PCI_IDE_SEL_CTL_TEE_LIMITED, pdev->ide_tee_limit) | 528 + FIELD_PREP(PCI_IDE_SEL_CTL_EN, enable); 529 + 530 + pci_write_config_dword(pdev, pos + PCI_IDE_SEL_CTL, 
val); 531 + } 532 + 533 + #define SEL_ADDR1_LOWER GENMASK(31, 20) 534 + #define SEL_ADDR_UPPER GENMASK_ULL(63, 32) 535 + #define PREP_PCI_IDE_SEL_ADDR1(base, limit) \ 536 + (FIELD_PREP(PCI_IDE_SEL_ADDR_1_VALID, 1) | \ 537 + FIELD_PREP(PCI_IDE_SEL_ADDR_1_BASE_LOW, \ 538 + FIELD_GET(SEL_ADDR1_LOWER, (base))) | \ 539 + FIELD_PREP(PCI_IDE_SEL_ADDR_1_LIMIT_LOW, \ 540 + FIELD_GET(SEL_ADDR1_LOWER, (limit)))) 541 + 542 + static void mem_assoc_to_regs(struct pci_bus_region *region, 543 + struct pci_ide_regs *regs, int idx) 544 + { 545 + /* convert to u64 range for bitfield size checks */ 546 + struct range r = { region->start, region->end }; 547 + 548 + regs->addr[idx].assoc1 = PREP_PCI_IDE_SEL_ADDR1(r.start, r.end); 549 + regs->addr[idx].assoc2 = FIELD_GET(SEL_ADDR_UPPER, r.end); 550 + regs->addr[idx].assoc3 = FIELD_GET(SEL_ADDR_UPPER, r.start); 551 + } 552 + 553 + /** 554 + * pci_ide_stream_to_regs() - convert IDE settings to association register values 555 + * @pdev: PCIe device object for either a Root Port or Endpoint Partner Port 556 + * @ide: registered IDE settings descriptor 557 + * @regs: output register values 558 + */ 559 + static void pci_ide_stream_to_regs(struct pci_dev *pdev, struct pci_ide *ide, 560 + struct pci_ide_regs *regs) 561 + { 562 + struct pci_ide_partner *settings = pci_ide_to_settings(pdev, ide); 563 + int assoc_idx = 0; 564 + 565 + memset(regs, 0, sizeof(*regs)); 566 + 567 + if (!settings) 568 + return; 569 + 570 + regs->rid1 = FIELD_PREP(PCI_IDE_SEL_RID_1_LIMIT, settings->rid_end); 571 + 572 + regs->rid2 = FIELD_PREP(PCI_IDE_SEL_RID_2_VALID, 1) | 573 + FIELD_PREP(PCI_IDE_SEL_RID_2_BASE, settings->rid_start) | 574 + FIELD_PREP(PCI_IDE_SEL_RID_2_SEG, pci_ide_domain(pdev)); 575 + 576 + if (pdev->nr_ide_mem && pci_bus_region_size(&settings->mem_assoc)) { 577 + mem_assoc_to_regs(&settings->mem_assoc, regs, assoc_idx); 578 + assoc_idx++; 579 + } 580 + 581 + if (pdev->nr_ide_mem > assoc_idx && 582 + pci_bus_region_size(&settings->pref_assoc)) { 583 + 
mem_assoc_to_regs(&settings->pref_assoc, regs, assoc_idx); 584 + assoc_idx++; 585 + } 586 + 587 + regs->nr_addr = assoc_idx; 588 + } 589 + 590 + /** 591 + * pci_ide_stream_setup() - program settings to Selective IDE Stream registers 592 + * @pdev: PCIe device object for either a Root Port or Endpoint Partner Port 593 + * @ide: registered IDE settings descriptor 594 + * 595 + * When @pdev is a PCI_EXP_TYPE_ENDPOINT then the PCI_IDE_EP partner 596 + * settings are written to @pdev's Selective IDE Stream register block, 597 + * and when @pdev is a PCI_EXP_TYPE_ROOT_PORT, the PCI_IDE_RP settings 598 + * are selected. 599 + */ 600 + void pci_ide_stream_setup(struct pci_dev *pdev, struct pci_ide *ide) 601 + { 602 + struct pci_ide_partner *settings = pci_ide_to_settings(pdev, ide); 603 + struct pci_ide_regs regs; 604 + int pos; 605 + 606 + if (!settings) 607 + return; 608 + 609 + pci_ide_stream_to_regs(pdev, ide, &regs); 610 + 611 + pos = sel_ide_offset(pdev, settings); 612 + 613 + pci_write_config_dword(pdev, pos + PCI_IDE_SEL_RID_1, regs.rid1); 614 + pci_write_config_dword(pdev, pos + PCI_IDE_SEL_RID_2, regs.rid2); 615 + 616 + for (int i = 0; i < regs.nr_addr; i++) { 617 + pci_write_config_dword(pdev, pos + PCI_IDE_SEL_ADDR_1(i), 618 + regs.addr[i].assoc1); 619 + pci_write_config_dword(pdev, pos + PCI_IDE_SEL_ADDR_2(i), 620 + regs.addr[i].assoc2); 621 + pci_write_config_dword(pdev, pos + PCI_IDE_SEL_ADDR_3(i), 622 + regs.addr[i].assoc3); 623 + } 624 + 625 + /* clear extra unused address association blocks */ 626 + for (int i = regs.nr_addr; i < pdev->nr_ide_mem; i++) { 627 + pci_write_config_dword(pdev, pos + PCI_IDE_SEL_ADDR_1(i), 0); 628 + pci_write_config_dword(pdev, pos + PCI_IDE_SEL_ADDR_2(i), 0); 629 + pci_write_config_dword(pdev, pos + PCI_IDE_SEL_ADDR_3(i), 0); 630 + } 631 + 632 + /* 633 + * Setup control register early for devices that expect 634 + * stream_id is set during key programming. 
635 + */ 636 + set_ide_sel_ctl(pdev, ide, settings, pos, false); 637 + settings->setup = 1; 638 + } 639 + EXPORT_SYMBOL_GPL(pci_ide_stream_setup); 640 + 641 + /** 642 + * pci_ide_stream_teardown() - disable the stream and clear all settings 643 + * @pdev: PCIe device object for either a Root Port or Endpoint Partner Port 644 + * @ide: registered IDE settings descriptor 645 + * 646 + * For stream destruction, zero all registers that may have been written 647 + * by pci_ide_stream_setup(). Consider pci_ide_stream_disable() to leave 648 + * settings in place while temporarily disabling the stream. 649 + */ 650 + void pci_ide_stream_teardown(struct pci_dev *pdev, struct pci_ide *ide) 651 + { 652 + struct pci_ide_partner *settings = pci_ide_to_settings(pdev, ide); 653 + int pos, i; 654 + 655 + if (!settings) 656 + return; 657 + 658 + pos = sel_ide_offset(pdev, settings); 659 + 660 + pci_write_config_dword(pdev, pos + PCI_IDE_SEL_CTL, 0); 661 + 662 + for (i = 0; i < pdev->nr_ide_mem; i++) { 663 + pci_write_config_dword(pdev, pos + PCI_IDE_SEL_ADDR_1(i), 0); 664 + pci_write_config_dword(pdev, pos + PCI_IDE_SEL_ADDR_2(i), 0); 665 + pci_write_config_dword(pdev, pos + PCI_IDE_SEL_ADDR_3(i), 0); 666 + } 667 + 668 + pci_write_config_dword(pdev, pos + PCI_IDE_SEL_RID_2, 0); 669 + pci_write_config_dword(pdev, pos + PCI_IDE_SEL_RID_1, 0); 670 + settings->setup = 0; 671 + } 672 + EXPORT_SYMBOL_GPL(pci_ide_stream_teardown); 673 + 674 + /** 675 + * pci_ide_stream_enable() - enable a Selective IDE Stream 676 + * @pdev: PCIe device object for either a Root Port or Endpoint Partner Port 677 + * @ide: registered and setup IDE settings descriptor 678 + * 679 + * Activate the stream by writing to the Selective IDE Stream Control 680 + * Register. 681 + * 682 + * Return: 0 if the stream successfully entered the "secure" state, and -EINVAL 683 + * if @ide is invalid, and -ENXIO if the stream fails to enter the secure state. 
684 + * 685 + * Note that the state may go "insecure" at any point after returning 0, but 686 + * those events are equivalent to a "link down" event and handled via 687 + * asynchronous error reporting. 688 + * 689 + * Caller is responsible to clear the enable bit in the -ENXIO case. 690 + */ 691 + int pci_ide_stream_enable(struct pci_dev *pdev, struct pci_ide *ide) 692 + { 693 + struct pci_ide_partner *settings = pci_ide_to_settings(pdev, ide); 694 + int pos; 695 + u32 val; 696 + 697 + if (!settings) 698 + return -EINVAL; 699 + 700 + pos = sel_ide_offset(pdev, settings); 701 + 702 + set_ide_sel_ctl(pdev, ide, settings, pos, true); 703 + settings->enable = 1; 704 + 705 + pci_read_config_dword(pdev, pos + PCI_IDE_SEL_STS, &val); 706 + if (FIELD_GET(PCI_IDE_SEL_STS_STATE, val) != 707 + PCI_IDE_SEL_STS_STATE_SECURE) 708 + return -ENXIO; 709 + 710 + return 0; 711 + } 712 + EXPORT_SYMBOL_GPL(pci_ide_stream_enable); 713 + 714 + /** 715 + * pci_ide_stream_disable() - disable a Selective IDE Stream 716 + * @pdev: PCIe device object for either a Root Port or Endpoint Partner Port 717 + * @ide: registered and setup IDE settings descriptor 718 + * 719 + * Clear the Selective IDE Stream Control Register, but leave all other 720 + * registers untouched. 
721 + */ 722 + void pci_ide_stream_disable(struct pci_dev *pdev, struct pci_ide *ide) 723 + { 724 + struct pci_ide_partner *settings = pci_ide_to_settings(pdev, ide); 725 + int pos; 726 + 727 + if (!settings) 728 + return; 729 + 730 + pos = sel_ide_offset(pdev, settings); 731 + 732 + pci_write_config_dword(pdev, pos + PCI_IDE_SEL_CTL, 0); 733 + settings->enable = 0; 734 + } 735 + EXPORT_SYMBOL_GPL(pci_ide_stream_disable); 736 + 737 + void pci_ide_init_host_bridge(struct pci_host_bridge *hb) 738 + { 739 + hb->nr_ide_streams = 256; 740 + ida_init(&hb->ide_stream_ida); 741 + ida_init(&hb->ide_stream_ids_ida); 742 + reserve_stream_id(hb, PCI_IDE_RESERVED_STREAM_ID); 743 + } 744 + 745 + static ssize_t available_secure_streams_show(struct device *dev, 746 + struct device_attribute *attr, 747 + char *buf) 748 + { 749 + struct pci_host_bridge *hb = to_pci_host_bridge(dev); 750 + int nr = READ_ONCE(hb->nr_ide_streams); 751 + int avail = nr; 752 + 753 + if (!nr) 754 + return -ENXIO; 755 + 756 + /* 757 + * Yes, this is inefficient and racy, but it is only for occasional 758 + * platform resource surveys. Worst case is bounded to 256 streams. 
759 + */ 760 + for (int i = 0; i < nr; i++) 761 + if (ida_exists(&hb->ide_stream_ida, i)) 762 + avail--; 763 + return sysfs_emit(buf, "%d\n", avail); 764 + } 765 + static DEVICE_ATTR_RO(available_secure_streams); 766 + 767 + static struct attribute *pci_ide_attrs[] = { 768 + &dev_attr_available_secure_streams.attr, 769 + NULL 770 + }; 771 + 772 + static umode_t pci_ide_attr_visible(struct kobject *kobj, struct attribute *a, int n) 773 + { 774 + struct device *dev = kobj_to_dev(kobj); 775 + struct pci_host_bridge *hb = to_pci_host_bridge(dev); 776 + 777 + if (a == &dev_attr_available_secure_streams.attr) 778 + if (!hb->nr_ide_streams) 779 + return 0; 780 + 781 + return a->mode; 782 + } 783 + 784 + const struct attribute_group pci_ide_attr_group = { 785 + .attrs = pci_ide_attrs, 786 + .is_visible = pci_ide_attr_visible, 787 + }; 788 + 789 + /** 790 + * pci_ide_set_nr_streams() - sets size of the pool of IDE Stream resources 791 + * @hb: host bridge boundary for the stream pool 792 + * @nr: number of streams 793 + * 794 + * Platform PCI init and/or expert test module use only. Limit IDE 795 + * Stream establishment by setting the number of stream resources 796 + * available at the host bridge. Platform init code must set this before 797 + * the first pci_ide_stream_alloc() call if the platform has less than the 798 + * default of 256 streams per host-bridge. 799 + * 800 + * The "PCI_IDE" symbol namespace is required because this is typically 801 + * a detail that is settled in early PCI init. I.e. this export is not 802 + * for endpoint drivers. 803 + */ 804 + void pci_ide_set_nr_streams(struct pci_host_bridge *hb, u16 nr) 805 + { 806 + hb->nr_ide_streams = min(nr, 256); 807 + WARN_ON_ONCE(!ida_is_empty(&hb->ide_stream_ida)); 808 + sysfs_update_group(&hb->dev.kobj, &pci_ide_attr_group); 809 + } 810 + EXPORT_SYMBOL_NS_GPL(pci_ide_set_nr_streams, "PCI_IDE"); 811 + 812 + void pci_ide_destroy(struct pci_dev *pdev) 813 + { 814 + ida_destroy(&pdev->ide_stream_ida); 815 + }
+4
drivers/pci/pci-sysfs.c
··· 1856 1856 #ifdef CONFIG_PCI_DOE 1857 1857 &pci_doe_sysfs_group, 1858 1858 #endif 1859 + #ifdef CONFIG_PCI_TSM 1860 + &pci_tsm_auth_attr_group, 1861 + &pci_tsm_attr_group, 1862 + #endif 1859 1863 NULL, 1860 1864 };
+21
drivers/pci/pci.h
··· 615 615 static inline void pci_doe_sysfs_teardown(struct pci_dev *pdev) { } 616 616 #endif 617 617 618 + #ifdef CONFIG_PCI_IDE 619 + void pci_ide_init(struct pci_dev *dev); 620 + void pci_ide_init_host_bridge(struct pci_host_bridge *hb); 621 + void pci_ide_destroy(struct pci_dev *dev); 622 + extern const struct attribute_group pci_ide_attr_group; 623 + #else 624 + static inline void pci_ide_init(struct pci_dev *dev) { } 625 + static inline void pci_ide_init_host_bridge(struct pci_host_bridge *hb) { } 626 + static inline void pci_ide_destroy(struct pci_dev *dev) { } 627 + #endif 628 + 629 + #ifdef CONFIG_PCI_TSM 630 + void pci_tsm_init(struct pci_dev *pdev); 631 + void pci_tsm_destroy(struct pci_dev *pdev); 632 + extern const struct attribute_group pci_tsm_attr_group; 633 + extern const struct attribute_group pci_tsm_auth_attr_group; 634 + #else 635 + static inline void pci_tsm_init(struct pci_dev *pdev) { } 636 + static inline void pci_tsm_destroy(struct pci_dev *pdev) { } 637 + #endif 638 + 618 639 /** 619 640 * pci_dev_set_io_state - Set the new error state if possible. 620 641 *
+30 -1
drivers/pci/probe.c
··· 658 658 kfree(bridge); 659 659 } 660 660 661 + static const struct attribute_group *pci_host_bridge_groups[] = { 662 + #ifdef CONFIG_PCI_IDE 663 + &pci_ide_attr_group, 664 + #endif 665 + NULL 666 + }; 667 + 668 + static const struct device_type pci_host_bridge_type = { 669 + .groups = pci_host_bridge_groups, 670 + .release = pci_release_host_bridge_dev, 671 + }; 672 + 661 673 static void pci_init_host_bridge(struct pci_host_bridge *bridge) 662 674 { 663 675 INIT_LIST_HEAD(&bridge->windows); ··· 689 677 bridge->native_dpc = 1; 690 678 bridge->domain_nr = PCI_DOMAIN_NR_NOT_SET; 691 679 bridge->native_cxl_error = 1; 680 + bridge->dev.type = &pci_host_bridge_type; 681 + pci_ide_init_host_bridge(bridge); 692 682 693 683 device_initialize(&bridge->dev); 694 684 } ··· 704 690 return NULL; 705 691 706 692 pci_init_host_bridge(bridge); 707 - bridge->dev.release = pci_release_host_bridge_dev; 708 693 709 694 return bridge; 710 695 } ··· 2309 2296 return 0; 2310 2297 } 2311 2298 2299 + static void pci_dev3_init(struct pci_dev *pdev) 2300 + { 2301 + u16 cap = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_DEV3); 2302 + u32 val = 0; 2303 + 2304 + if (!cap) 2305 + return; 2306 + pci_read_config_dword(pdev, cap + PCI_DEV3_STA, &val); 2307 + pdev->fm_enabled = !!(val & PCI_DEV3_STA_SEGMENT); 2308 + } 2309 + 2312 2310 /** 2313 2311 * pcie_relaxed_ordering_enabled - Probe for PCIe relaxed ordering enable 2314 2312 * @dev: PCI device to query ··· 2704 2680 pci_doe_init(dev); /* Data Object Exchange */ 2705 2681 pci_tph_init(dev); /* TLP Processing Hints */ 2706 2682 pci_rebar_init(dev); /* Resizable BAR */ 2683 + pci_dev3_init(dev); /* Device 3 capabilities */ 2684 + pci_ide_init(dev); /* Link Integrity and Data Encryption */ 2707 2685 2708 2686 pcie_report_downtraining(dev); 2709 2687 pci_init_reset_methods(dev); ··· 2798 2772 /* Notifier could use PCI capabilities */ 2799 2773 ret = device_add(&dev->dev); 2800 2774 WARN_ON(ret < 0); 2775 + 2776 + /* Establish pdev->tsm for newly 
added devices (e.g. new SR-IOV VFs) */ 2777 + pci_tsm_init(dev); 2801 2778 2802 2779 pci_npem_create(dev); 2803 2780
+7
drivers/pci/remove.c
··· 57 57 pci_doe_sysfs_teardown(dev); 58 58 pci_npem_remove(dev); 59 59 60 + /* 61 + * While device is in D0 drop the device from TSM link operations 62 + * including unbind and disconnect (IDE + SPDM teardown). 63 + */ 64 + pci_tsm_destroy(dev); 65 + 60 66 device_del(&dev->dev); 61 67 62 68 down_write(&pci_bus_sem); ··· 70 64 up_write(&pci_bus_sem); 71 65 72 66 pci_doe_destroy(dev); 67 + pci_ide_destroy(dev); 73 68 pcie_aspm_exit_link_state(dev); 74 69 pci_bridge_d3_update(dev); 75 70 pci_pwrctrl_unregister(&dev->dev);
+54 -8
drivers/pci/search.c
··· 282 282 return pdev; 283 283 } 284 284 285 + static struct pci_dev *pci_get_dev_by_id_reverse(const struct pci_device_id *id, 286 + struct pci_dev *from) 287 + { 288 + struct device *dev; 289 + struct device *dev_start = NULL; 290 + struct pci_dev *pdev = NULL; 291 + 292 + if (from) 293 + dev_start = &from->dev; 294 + dev = bus_find_device_reverse(&pci_bus_type, dev_start, (void *)id, 295 + match_pci_dev_by_id); 296 + if (dev) 297 + pdev = to_pci_dev(dev); 298 + pci_dev_put(from); 299 + return pdev; 300 + } 301 + 302 + enum pci_search_direction { 303 + PCI_SEARCH_FORWARD, 304 + PCI_SEARCH_REVERSE, 305 + }; 306 + 307 + static struct pci_dev *__pci_get_subsys(unsigned int vendor, unsigned int device, 308 + unsigned int ss_vendor, unsigned int ss_device, 309 + struct pci_dev *from, enum pci_search_direction dir) 310 + { 311 + struct pci_device_id id = { 312 + .vendor = vendor, 313 + .device = device, 314 + .subvendor = ss_vendor, 315 + .subdevice = ss_device, 316 + }; 317 + 318 + if (dir == PCI_SEARCH_FORWARD) 319 + return pci_get_dev_by_id(&id, from); 320 + else 321 + return pci_get_dev_by_id_reverse(&id, from); 322 + } 323 + 285 324 /** 286 325 * pci_get_subsys - begin or continue searching for a PCI device by vendor/subvendor/device/subdevice id 287 326 * @vendor: PCI vendor id to match, or %PCI_ANY_ID to match all vendor ids ··· 341 302 unsigned int ss_vendor, unsigned int ss_device, 342 303 struct pci_dev *from) 343 304 { 344 - struct pci_device_id id = { 345 - .vendor = vendor, 346 - .device = device, 347 - .subvendor = ss_vendor, 348 - .subdevice = ss_device, 349 - }; 350 - 351 - return pci_get_dev_by_id(&id, from); 305 + return __pci_get_subsys(vendor, device, ss_vendor, ss_device, from, 306 + PCI_SEARCH_FORWARD); 352 307 } 353 308 EXPORT_SYMBOL(pci_get_subsys); 354 309 ··· 366 333 return pci_get_subsys(vendor, device, PCI_ANY_ID, PCI_ANY_ID, from); 367 334 } 368 335 EXPORT_SYMBOL(pci_get_device); 336 + 337 + /* 338 + * Same semantics as pci_get_device(), 
except walks the PCI device list 339 + * in reverse discovery order. 340 + */ 341 + struct pci_dev *pci_get_device_reverse(unsigned int vendor, 342 + unsigned int device, 343 + struct pci_dev *from) 344 + { 345 + return __pci_get_subsys(vendor, device, PCI_ANY_ID, PCI_ANY_ID, from, 346 + PCI_SEARCH_REVERSE); 347 + } 348 + EXPORT_SYMBOL(pci_get_device_reverse); 369 349 370 350 /** 371 351 * pci_get_class - begin or continue searching for a PCI device by class
+900
drivers/pci/tsm.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Interface with platform TEE Security Manager (TSM) objects as defined by 4 + * PCIe r7.0 section 11 TEE Device Interface Security Protocol (TDISP) 5 + * 6 + * Copyright(c) 2024-2025 Intel Corporation. All rights reserved. 7 + */ 8 + 9 + #define dev_fmt(fmt) "PCI/TSM: " fmt 10 + 11 + #include <linux/bitfield.h> 12 + #include <linux/pci.h> 13 + #include <linux/pci-doe.h> 14 + #include <linux/pci-tsm.h> 15 + #include <linux/sysfs.h> 16 + #include <linux/tsm.h> 17 + #include <linux/xarray.h> 18 + #include "pci.h" 19 + 20 + /* 21 + * Provide a read/write lock against the init / exit of pdev tsm 22 + * capabilities and arrival/departure of a TSM instance 23 + */ 24 + static DECLARE_RWSEM(pci_tsm_rwsem); 25 + 26 + /* 27 + * Count of TSMs registered that support physical link operations vs device 28 + * security state management. 29 + */ 30 + static int pci_tsm_link_count; 31 + static int pci_tsm_devsec_count; 32 + 33 + static const struct pci_tsm_ops *to_pci_tsm_ops(struct pci_tsm *tsm) 34 + { 35 + return tsm->tsm_dev->pci_ops; 36 + } 37 + 38 + static inline bool is_dsm(struct pci_dev *pdev) 39 + { 40 + return pdev->tsm && pdev->tsm->dsm_dev == pdev; 41 + } 42 + 43 + static inline bool has_tee(struct pci_dev *pdev) 44 + { 45 + return pdev->devcap & PCI_EXP_DEVCAP_TEE; 46 + } 47 + 48 + /* 'struct pci_tsm_pf0' wraps 'struct pci_tsm' when ->dsm_dev == ->pdev (self) */ 49 + static struct pci_tsm_pf0 *to_pci_tsm_pf0(struct pci_tsm *tsm) 50 + { 51 + /* 52 + * All "link" TSM contexts reference the device that hosts the DSM 53 + * interface for a set of devices. Walk to the DSM device and cast its 54 + * ->tsm context to a 'struct pci_tsm_pf0 *'. 
55 + */ 56 + struct pci_dev *pf0 = tsm->dsm_dev; 57 + 58 + if (!is_pci_tsm_pf0(pf0) || !is_dsm(pf0)) { 59 + pci_WARN_ONCE(tsm->pdev, 1, "invalid context object\n"); 60 + return NULL; 61 + } 62 + 63 + return container_of(pf0->tsm, struct pci_tsm_pf0, base_tsm); 64 + } 65 + 66 + static void tsm_remove(struct pci_tsm *tsm) 67 + { 68 + struct pci_dev *pdev; 69 + 70 + if (!tsm) 71 + return; 72 + 73 + pdev = tsm->pdev; 74 + to_pci_tsm_ops(tsm)->remove(tsm); 75 + pdev->tsm = NULL; 76 + } 77 + DEFINE_FREE(tsm_remove, struct pci_tsm *, if (_T) tsm_remove(_T)) 78 + 79 + static void pci_tsm_walk_fns(struct pci_dev *pdev, 80 + int (*cb)(struct pci_dev *pdev, void *data), 81 + void *data) 82 + { 83 + /* Walk subordinate physical functions */ 84 + for (int i = 0; i < 8; i++) { 85 + struct pci_dev *pf __free(pci_dev_put) = pci_get_slot( 86 + pdev->bus, PCI_DEVFN(PCI_SLOT(pdev->devfn), i)); 87 + 88 + if (!pf) 89 + continue; 90 + 91 + /* on entry function 0 has already run @cb */ 92 + if (i > 0) 93 + cb(pf, data); 94 + 95 + /* walk virtual functions of each pf */ 96 + for (int j = 0; j < pci_num_vf(pf); j++) { 97 + struct pci_dev *vf __free(pci_dev_put) = 98 + pci_get_domain_bus_and_slot( 99 + pci_domain_nr(pf->bus), 100 + pci_iov_virtfn_bus(pf, j), 101 + pci_iov_virtfn_devfn(pf, j)); 102 + 103 + if (!vf) 104 + continue; 105 + 106 + cb(vf, data); 107 + } 108 + } 109 + 110 + /* 111 + * Walk downstream devices, assumes that an upstream DSM is 112 + * limited to downstream physical functions 113 + */ 114 + if (pci_pcie_type(pdev) == PCI_EXP_TYPE_UPSTREAM && is_dsm(pdev)) 115 + pci_walk_bus(pdev->subordinate, cb, data); 116 + } 117 + 118 + static void pci_tsm_walk_fns_reverse(struct pci_dev *pdev, 119 + int (*cb)(struct pci_dev *pdev, 120 + void *data), 121 + void *data) 122 + { 123 + /* Reverse walk downstream devices */ 124 + if (pci_pcie_type(pdev) == PCI_EXP_TYPE_UPSTREAM && is_dsm(pdev)) 125 + pci_walk_bus_reverse(pdev->subordinate, cb, data); 126 + 127 + /* Reverse walk 
subordinate physical functions */ 128 + for (int i = 7; i >= 0; i--) { 129 + struct pci_dev *pf __free(pci_dev_put) = pci_get_slot( 130 + pdev->bus, PCI_DEVFN(PCI_SLOT(pdev->devfn), i)); 131 + 132 + if (!pf) 133 + continue; 134 + 135 + /* reverse walk virtual functions */ 136 + for (int j = pci_num_vf(pf) - 1; j >= 0; j--) { 137 + struct pci_dev *vf __free(pci_dev_put) = 138 + pci_get_domain_bus_and_slot( 139 + pci_domain_nr(pf->bus), 140 + pci_iov_virtfn_bus(pf, j), 141 + pci_iov_virtfn_devfn(pf, j)); 142 + 143 + if (!vf) 144 + continue; 145 + cb(vf, data); 146 + } 147 + 148 + /* on exit, caller will run @cb on function 0 */ 149 + if (i > 0) 150 + cb(pf, data); 151 + } 152 + } 153 + 154 + static void link_sysfs_disable(struct pci_dev *pdev) 155 + { 156 + sysfs_update_group(&pdev->dev.kobj, &pci_tsm_auth_attr_group); 157 + sysfs_update_group(&pdev->dev.kobj, &pci_tsm_attr_group); 158 + } 159 + 160 + static void link_sysfs_enable(struct pci_dev *pdev) 161 + { 162 + bool tee = has_tee(pdev); 163 + 164 + pci_dbg(pdev, "%s Security Manager detected (%s%s%s)\n", 165 + pdev->tsm ? "Device" : "Platform TEE", 166 + pdev->ide_cap ? "IDE" : "", pdev->ide_cap && tee ? " " : "", 167 + tee ? "TEE" : ""); 168 + 169 + sysfs_update_group(&pdev->dev.kobj, &pci_tsm_auth_attr_group); 170 + sysfs_update_group(&pdev->dev.kobj, &pci_tsm_attr_group); 171 + } 172 + 173 + static int probe_fn(struct pci_dev *pdev, void *dsm) 174 + { 175 + struct pci_dev *dsm_dev = dsm; 176 + const struct pci_tsm_ops *ops = to_pci_tsm_ops(dsm_dev->tsm); 177 + 178 + pdev->tsm = ops->probe(dsm_dev->tsm->tsm_dev, pdev); 179 + pci_dbg(pdev, "setup TSM context: DSM: %s status: %s\n", 180 + pci_name(dsm_dev), pdev->tsm ? 
"success" : "failed"); 181 + if (pdev->tsm) 182 + link_sysfs_enable(pdev); 183 + return 0; 184 + } 185 + 186 + static int pci_tsm_connect(struct pci_dev *pdev, struct tsm_dev *tsm_dev) 187 + { 188 + int rc; 189 + struct pci_tsm_pf0 *tsm_pf0; 190 + const struct pci_tsm_ops *ops = tsm_dev->pci_ops; 191 + struct pci_tsm *pci_tsm __free(tsm_remove) = ops->probe(tsm_dev, pdev); 192 + 193 + /* connect() mutually exclusive with subfunction pci_tsm_init() */ 194 + lockdep_assert_held_write(&pci_tsm_rwsem); 195 + 196 + if (!pci_tsm) 197 + return -ENXIO; 198 + 199 + pdev->tsm = pci_tsm; 200 + tsm_pf0 = to_pci_tsm_pf0(pdev->tsm); 201 + 202 + /* mutex_intr assumes connect() is always sysfs/user driven */ 203 + ACQUIRE(mutex_intr, lock)(&tsm_pf0->lock); 204 + if ((rc = ACQUIRE_ERR(mutex_intr, &lock))) 205 + return rc; 206 + 207 + rc = ops->connect(pdev); 208 + if (rc) 209 + return rc; 210 + 211 + pdev->tsm = no_free_ptr(pci_tsm); 212 + 213 + /* 214 + * Now that the DSM is established, probe() all the potential 215 + * dependent functions. Failure to probe a function is not fatal 216 + * to connect(), it just disables subsequent security operations 217 + * for that function. 218 + * 219 + * Note this is done unconditionally, without regard to finding 220 + * PCI_EXP_DEVCAP_TEE on the dependent function, for robustness. The DSM 221 + * is the ultimate arbiter of security state relative to a given 222 + * interface id, and if it says it can manage TDISP state of a function, 223 + * let it. 
224 + */ 225 + if (has_tee(pdev)) 226 + pci_tsm_walk_fns(pdev, probe_fn, pdev); 227 + return 0; 228 + } 229 + 230 + static ssize_t connect_show(struct device *dev, struct device_attribute *attr, 231 + char *buf) 232 + { 233 + struct pci_dev *pdev = to_pci_dev(dev); 234 + struct tsm_dev *tsm_dev; 235 + int rc; 236 + 237 + ACQUIRE(rwsem_read_intr, lock)(&pci_tsm_rwsem); 238 + if ((rc = ACQUIRE_ERR(rwsem_read_intr, &lock))) 239 + return rc; 240 + 241 + if (!pdev->tsm) 242 + return sysfs_emit(buf, "\n"); 243 + 244 + tsm_dev = pdev->tsm->tsm_dev; 245 + return sysfs_emit(buf, "%s\n", dev_name(&tsm_dev->dev)); 246 + } 247 + 248 + /* Is @tsm_dev managing physical link / session properties... */ 249 + static bool is_link_tsm(struct tsm_dev *tsm_dev) 250 + { 251 + return tsm_dev && tsm_dev->pci_ops && tsm_dev->pci_ops->link_ops.probe; 252 + } 253 + 254 + /* ...or is @tsm_dev managing device security state ? */ 255 + static bool is_devsec_tsm(struct tsm_dev *tsm_dev) 256 + { 257 + return tsm_dev && tsm_dev->pci_ops && tsm_dev->pci_ops->devsec_ops.lock; 258 + } 259 + 260 + static ssize_t connect_store(struct device *dev, struct device_attribute *attr, 261 + const char *buf, size_t len) 262 + { 263 + struct pci_dev *pdev = to_pci_dev(dev); 264 + int rc, id; 265 + 266 + rc = sscanf(buf, "tsm%d\n", &id); 267 + if (rc != 1) 268 + return -EINVAL; 269 + 270 + ACQUIRE(rwsem_write_kill, lock)(&pci_tsm_rwsem); 271 + if ((rc = ACQUIRE_ERR(rwsem_write_kill, &lock))) 272 + return rc; 273 + 274 + if (pdev->tsm) 275 + return -EBUSY; 276 + 277 + struct tsm_dev *tsm_dev __free(put_tsm_dev) = find_tsm_dev(id); 278 + if (!is_link_tsm(tsm_dev)) 279 + return -ENXIO; 280 + 281 + rc = pci_tsm_connect(pdev, tsm_dev); 282 + if (rc) 283 + return rc; 284 + return len; 285 + } 286 + static DEVICE_ATTR_RW(connect); 287 + 288 + static int remove_fn(struct pci_dev *pdev, void *data) 289 + { 290 + tsm_remove(pdev->tsm); 291 + link_sysfs_disable(pdev); 292 + return 0; 293 + } 294 + 295 + /* 296 + * Note, 
this helper only returns an error code and takes an argument for 297 + * compatibility with the pci_walk_bus() callback prototype. pci_tsm_unbind() 298 + * always succeeds. 299 + */ 300 + static int __pci_tsm_unbind(struct pci_dev *pdev, void *data) 301 + { 302 + struct pci_tdi *tdi; 303 + struct pci_tsm_pf0 *tsm_pf0; 304 + 305 + lockdep_assert_held(&pci_tsm_rwsem); 306 + 307 + if (!pdev->tsm) 308 + return 0; 309 + 310 + tsm_pf0 = to_pci_tsm_pf0(pdev->tsm); 311 + guard(mutex)(&tsm_pf0->lock); 312 + 313 + tdi = pdev->tsm->tdi; 314 + if (!tdi) 315 + return 0; 316 + 317 + to_pci_tsm_ops(pdev->tsm)->unbind(tdi); 318 + pdev->tsm->tdi = NULL; 319 + 320 + return 0; 321 + } 322 + 323 + void pci_tsm_unbind(struct pci_dev *pdev) 324 + { 325 + guard(rwsem_read)(&pci_tsm_rwsem); 326 + __pci_tsm_unbind(pdev, NULL); 327 + } 328 + EXPORT_SYMBOL_GPL(pci_tsm_unbind); 329 + 330 + /** 331 + * pci_tsm_bind() - Bind @pdev as a TDI for @kvm 332 + * @pdev: PCI device function to bind 333 + * @kvm: Private memory attach context 334 + * @tdi_id: Identifier (virtual BDF) for the TDI as referenced by the TSM and DSM 335 + * 336 + * Returns 0 on success, or a negative error code on failure. 337 + * 338 + * Context: Caller is responsible for constraining the bind lifetime to the 339 + * registered state of the device. For example, pci_tsm_bind() / 340 + * pci_tsm_unbind() limited to the VFIO driver bound state of the device. 
341 + */ 342 + int pci_tsm_bind(struct pci_dev *pdev, struct kvm *kvm, u32 tdi_id) 343 + { 344 + struct pci_tsm_pf0 *tsm_pf0; 345 + struct pci_tdi *tdi; 346 + 347 + if (!kvm) 348 + return -EINVAL; 349 + 350 + guard(rwsem_read)(&pci_tsm_rwsem); 351 + 352 + if (!pdev->tsm) 353 + return -EINVAL; 354 + 355 + if (!is_link_tsm(pdev->tsm->tsm_dev)) 356 + return -ENXIO; 357 + 358 + tsm_pf0 = to_pci_tsm_pf0(pdev->tsm); 359 + guard(mutex)(&tsm_pf0->lock); 360 + 361 + /* Resolve races to bind a TDI */ 362 + if (pdev->tsm->tdi) { 363 + if (pdev->tsm->tdi->kvm != kvm) 364 + return -EBUSY; 365 + return 0; 366 + } 367 + 368 + tdi = to_pci_tsm_ops(pdev->tsm)->bind(pdev, kvm, tdi_id); 369 + if (IS_ERR(tdi)) 370 + return PTR_ERR(tdi); 371 + 372 + pdev->tsm->tdi = tdi; 373 + 374 + return 0; 375 + } 376 + EXPORT_SYMBOL_GPL(pci_tsm_bind); 377 + 378 + /** 379 + * pci_tsm_guest_req() - helper to marshal guest requests to the TSM driver 380 + * @pdev: @pdev representing a bound TDI 381 + * @scope: caller asserts this passthrough request is limited to TDISP operations 382 + * @req_in: Input payload forwarded from the guest 383 + * @in_len: Length of @req_in 384 + * @req_out: Output payload buffer response to the guest 385 + * @out_len: Length of @req_out on input, bytes filled in @req_out on output 386 + * @tsm_code: Optional TSM arch specific result code for the guest TSM 387 + * 388 + * This is a common entry point for requests triggered by userspace KVM-exit 389 + service handlers responding to TDI information or state change requests. The 390 + scope parameter limits requests to TDISP state management, or limited debug. 391 + This path is only suitable for commands and results that the host kernel 392 + has no use for; the host is only facilitating guest to TSM communication. 
393 + * 394 + * Returns 0 on success and -error on failure and positive "residue" on success 395 + when @req_out is filled with less than @out_len, or @req_out is NULL and a 396 + residue number of bytes were not consumed from @req_in. On success or 397 + failure, @tsm_code may be populated with a TSM implementation specific result 398 + code for the guest to consume. 399 + * 400 + * Context: Caller is responsible for calling this within the pci_tsm_bind() 401 + state of the TDI. 402 + */ 403 + ssize_t pci_tsm_guest_req(struct pci_dev *pdev, enum pci_tsm_req_scope scope, 404 + sockptr_t req_in, size_t in_len, sockptr_t req_out, 405 + size_t out_len, u64 *tsm_code) 406 + { 407 + struct pci_tsm_pf0 *tsm_pf0; 408 + struct pci_tdi *tdi; 409 + int rc; 410 + 411 + /* Forbid requests that are not directly related to TDISP operations */ 412 + if (scope > PCI_TSM_REQ_STATE_CHANGE) 413 + return -EINVAL; 414 + 415 + ACQUIRE(rwsem_read_intr, lock)(&pci_tsm_rwsem); 416 + if ((rc = ACQUIRE_ERR(rwsem_read_intr, &lock))) 417 + return rc; 418 + 419 + if (!pdev->tsm) 420 + return -ENXIO; 421 + 422 + if (!is_link_tsm(pdev->tsm->tsm_dev)) 423 + return -ENXIO; 424 + 425 + tsm_pf0 = to_pci_tsm_pf0(pdev->tsm); 426 + ACQUIRE(mutex_intr, ops_lock)(&tsm_pf0->lock); 427 + if ((rc = ACQUIRE_ERR(mutex_intr, &ops_lock))) 428 + return rc; 429 + 430 + tdi = pdev->tsm->tdi; 431 + if (!tdi) 432 + return -ENXIO; 433 + return to_pci_tsm_ops(pdev->tsm)->guest_req(tdi, scope, req_in, in_len, 434 + req_out, out_len, tsm_code); 435 + } 436 + EXPORT_SYMBOL_GPL(pci_tsm_guest_req); 437 + 438 + static void pci_tsm_unbind_all(struct pci_dev *pdev) 439 + { 440 + pci_tsm_walk_fns_reverse(pdev, __pci_tsm_unbind, NULL); 441 + __pci_tsm_unbind(pdev, NULL); 442 + } 443 + 444 + static void __pci_tsm_disconnect(struct pci_dev *pdev) 445 + { 446 + struct pci_tsm_pf0 *tsm_pf0 = to_pci_tsm_pf0(pdev->tsm); 447 + const struct pci_tsm_ops *ops = to_pci_tsm_ops(pdev->tsm); 448 + 449 + /* disconnect() mutually 
exclusive with subfunction pci_tsm_init() */ 450 + lockdep_assert_held_write(&pci_tsm_rwsem); 451 + 452 + pci_tsm_unbind_all(pdev); 453 + 454 + /* 455 + * disconnect() is uninterruptible as it may be called for device 456 + * teardown 457 + */ 458 + guard(mutex)(&tsm_pf0->lock); 459 + pci_tsm_walk_fns_reverse(pdev, remove_fn, NULL); 460 + ops->disconnect(pdev); 461 + } 462 + 463 + static void pci_tsm_disconnect(struct pci_dev *pdev) 464 + { 465 + __pci_tsm_disconnect(pdev); 466 + tsm_remove(pdev->tsm); 467 + } 468 + 469 + static ssize_t disconnect_store(struct device *dev, 470 + struct device_attribute *attr, const char *buf, 471 + size_t len) 472 + { 473 + struct pci_dev *pdev = to_pci_dev(dev); 474 + struct tsm_dev *tsm_dev; 475 + int rc; 476 + 477 + ACQUIRE(rwsem_write_kill, lock)(&pci_tsm_rwsem); 478 + if ((rc = ACQUIRE_ERR(rwsem_write_kill, &lock))) 479 + return rc; 480 + 481 + if (!pdev->tsm) 482 + return -ENXIO; 483 + 484 + tsm_dev = pdev->tsm->tsm_dev; 485 + if (!sysfs_streq(buf, dev_name(&tsm_dev->dev))) 486 + return -EINVAL; 487 + 488 + pci_tsm_disconnect(pdev); 489 + return len; 490 + } 491 + static DEVICE_ATTR_WO(disconnect); 492 + 493 + static ssize_t bound_show(struct device *dev, 494 + struct device_attribute *attr, char *buf) 495 + { 496 + struct pci_dev *pdev = to_pci_dev(dev); 497 + struct pci_tsm_pf0 *tsm_pf0; 498 + struct pci_tsm *tsm; 499 + int rc; 500 + 501 + ACQUIRE(rwsem_read_intr, lock)(&pci_tsm_rwsem); 502 + if ((rc = ACQUIRE_ERR(rwsem_read_intr, &lock))) 503 + return rc; 504 + 505 + tsm = pdev->tsm; 506 + if (!tsm) 507 + return sysfs_emit(buf, "\n"); 508 + tsm_pf0 = to_pci_tsm_pf0(tsm); 509 + 510 + ACQUIRE(mutex_intr, ops_lock)(&tsm_pf0->lock); 511 + if ((rc = ACQUIRE_ERR(mutex_intr, &ops_lock))) 512 + return rc; 513 + 514 + if (!tsm->tdi) 515 + return sysfs_emit(buf, "\n"); 516 + return sysfs_emit(buf, "%s\n", dev_name(&tsm->tsm_dev->dev)); 517 + } 518 + static DEVICE_ATTR_RO(bound); 519 + 520 + static ssize_t dsm_show(struct device 
*dev, struct device_attribute *attr, 521 + char *buf) 522 + { 523 + struct pci_dev *pdev = to_pci_dev(dev); 524 + struct pci_tsm *tsm; 525 + int rc; 526 + 527 + ACQUIRE(rwsem_read_intr, lock)(&pci_tsm_rwsem); 528 + if ((rc = ACQUIRE_ERR(rwsem_read_intr, &lock))) 529 + return rc; 530 + 531 + tsm = pdev->tsm; 532 + if (!tsm) 533 + return sysfs_emit(buf, "\n"); 534 + 535 + return sysfs_emit(buf, "%s\n", pci_name(tsm->dsm_dev)); 536 + } 537 + static DEVICE_ATTR_RO(dsm); 538 + 539 + /* The 'authenticated' attribute is exclusive to the presence of a 'link' TSM */ 540 + static bool pci_tsm_link_group_visible(struct kobject *kobj) 541 + { 542 + struct pci_dev *pdev = to_pci_dev(kobj_to_dev(kobj)); 543 + 544 + if (!pci_tsm_link_count) 545 + return false; 546 + 547 + if (!pci_is_pcie(pdev)) 548 + return false; 549 + 550 + if (is_pci_tsm_pf0(pdev)) 551 + return true; 552 + 553 + /* 554 + * Show 'authenticated' and other attributes for the managed 555 + * sub-functions of a DSM. 556 + */ 557 + if (pdev->tsm) 558 + return true; 559 + 560 + return false; 561 + } 562 + DEFINE_SIMPLE_SYSFS_GROUP_VISIBLE(pci_tsm_link); 563 + 564 + /* 565 + * 'link' and 'devsec' TSMs share the same 'tsm/' sysfs group, so the TSM type 566 + * specific attributes need individual visibility checks. 
567 + */ 568 + static umode_t pci_tsm_attr_visible(struct kobject *kobj, 569 + struct attribute *attr, int n) 570 + { 571 + if (pci_tsm_link_group_visible(kobj)) { 572 + struct pci_dev *pdev = to_pci_dev(kobj_to_dev(kobj)); 573 + 574 + if (attr == &dev_attr_bound.attr) { 575 + if (is_pci_tsm_pf0(pdev) && has_tee(pdev)) 576 + return attr->mode; 577 + if (pdev->tsm && has_tee(pdev->tsm->dsm_dev)) 578 + return attr->mode; 579 + } 580 + 581 + if (attr == &dev_attr_dsm.attr) { 582 + if (is_pci_tsm_pf0(pdev)) 583 + return attr->mode; 584 + if (pdev->tsm && has_tee(pdev->tsm->dsm_dev)) 585 + return attr->mode; 586 + } 587 + 588 + if (attr == &dev_attr_connect.attr || 589 + attr == &dev_attr_disconnect.attr) { 590 + if (is_pci_tsm_pf0(pdev)) 591 + return attr->mode; 592 + } 593 + } 594 + 595 + return 0; 596 + } 597 + 598 + static bool pci_tsm_group_visible(struct kobject *kobj) 599 + { 600 + return pci_tsm_link_group_visible(kobj); 601 + } 602 + DEFINE_SYSFS_GROUP_VISIBLE(pci_tsm); 603 + 604 + static struct attribute *pci_tsm_attrs[] = { 605 + &dev_attr_connect.attr, 606 + &dev_attr_disconnect.attr, 607 + &dev_attr_bound.attr, 608 + &dev_attr_dsm.attr, 609 + NULL 610 + }; 611 + 612 + const struct attribute_group pci_tsm_attr_group = { 613 + .name = "tsm", 614 + .attrs = pci_tsm_attrs, 615 + .is_visible = SYSFS_GROUP_VISIBLE(pci_tsm), 616 + }; 617 + 618 + static ssize_t authenticated_show(struct device *dev, 619 + struct device_attribute *attr, char *buf) 620 + { 621 + /* 622 + * When the SPDM session is established via the TSM, the 'authenticated' state 623 + of the device is identical to the connect state. 
624 + */ 625 + return connect_show(dev, attr, buf); 626 + } 627 + static DEVICE_ATTR_RO(authenticated); 628 + 629 + static struct attribute *pci_tsm_auth_attrs[] = { 630 + &dev_attr_authenticated.attr, 631 + NULL 632 + }; 633 + 634 + const struct attribute_group pci_tsm_auth_attr_group = { 635 + .attrs = pci_tsm_auth_attrs, 636 + .is_visible = SYSFS_GROUP_VISIBLE(pci_tsm_link), 637 + }; 638 + 639 + /* 640 + * Retrieve physical function0 device whether it has TEE capability or not 641 + */ 642 + static struct pci_dev *pf0_dev_get(struct pci_dev *pdev) 643 + { 644 + struct pci_dev *pf_dev = pci_physfn(pdev); 645 + 646 + if (PCI_FUNC(pf_dev->devfn) == 0) 647 + return pci_dev_get(pf_dev); 648 + 649 + return pci_get_slot(pf_dev->bus, 650 + pf_dev->devfn - PCI_FUNC(pf_dev->devfn)); 651 + } 652 + 653 + /* 654 + * Find the PCI Device instance that serves as the Device Security Manager (DSM) 655 + * for @pdev. Note that no additional reference is held for the resulting device 656 + * because that resulting object always has a registered lifetime 657 + * greater-than-or-equal to that of the @pdev argument. This is by virtue of 658 + * @pdev being a descendant of, or identical to, the returned DSM device. 659 + */ 660 + static struct pci_dev *find_dsm_dev(struct pci_dev *pdev) 661 + { 662 + struct device *grandparent; 663 + struct pci_dev *uport; 664 + 665 + if (is_pci_tsm_pf0(pdev)) 666 + return pdev; 667 + 668 + struct pci_dev *pf0 __free(pci_dev_put) = pf0_dev_get(pdev); 669 + if (!pf0) 670 + return NULL; 671 + 672 + if (is_dsm(pf0)) 673 + return pf0; 674 + 675 + /* 676 + * For cases where a switch may be hosting TDISP services on behalf of 677 + * downstream devices, check the first upstream port relative to this 678 + * endpoint. 
679 + */ 680 + if (!pdev->dev.parent) 681 + return NULL; 682 + grandparent = pdev->dev.parent->parent; 683 + if (!grandparent) 684 + return NULL; 685 + if (!dev_is_pci(grandparent)) 686 + return NULL; 687 + uport = to_pci_dev(grandparent); 688 + if (!pci_is_pcie(uport) || 689 + pci_pcie_type(uport) != PCI_EXP_TYPE_UPSTREAM) 690 + return NULL; 691 + 692 + if (is_dsm(uport)) 693 + return uport; 694 + return NULL; 695 + } 696 + 697 + /** 698 + * pci_tsm_tdi_constructor() - base 'struct pci_tdi' initialization for link TSMs 699 + * @pdev: PCI device function representing the TDI 700 + * @tdi: context to initialize 701 + * @kvm: Private memory attach context 702 + * @tdi_id: Identifier (virtual BDF) for the TDI as referenced by the TSM and DSM 703 + */ 704 + void pci_tsm_tdi_constructor(struct pci_dev *pdev, struct pci_tdi *tdi, 705 + struct kvm *kvm, u32 tdi_id) 706 + { 707 + tdi->pdev = pdev; 708 + tdi->kvm = kvm; 709 + tdi->tdi_id = tdi_id; 710 + } 711 + EXPORT_SYMBOL_GPL(pci_tsm_tdi_constructor); 712 + 713 + /** 714 + * pci_tsm_link_constructor() - base 'struct pci_tsm' initialization for link TSMs 715 + * @pdev: The PCI device 716 + * @tsm: context to initialize 717 + * @tsm_dev: Platform TEE Security Manager, initiator of security operations 718 + */ 719 + int pci_tsm_link_constructor(struct pci_dev *pdev, struct pci_tsm *tsm, 720 + struct tsm_dev *tsm_dev) 721 + { 722 + if (!is_link_tsm(tsm_dev)) 723 + return -EINVAL; 724 + 725 + tsm->dsm_dev = find_dsm_dev(pdev); 726 + if (!tsm->dsm_dev) { 727 + pci_warn(pdev, "failed to find Device Security Manager\n"); 728 + return -ENXIO; 729 + } 730 + tsm->pdev = pdev; 731 + tsm->tsm_dev = tsm_dev; 732 + 733 + return 0; 734 + } 735 + EXPORT_SYMBOL_GPL(pci_tsm_link_constructor); 736 + 737 + /** 738 + * pci_tsm_pf0_constructor() - common 'struct pci_tsm_pf0' (DSM) initialization 739 + * @pdev: Physical Function 0 PCI device (as indicated by is_pci_tsm_pf0()) 740 + * @tsm: context to initialize 741 + * @tsm_dev: Platform TEE 
Security Manager, initiator of security operations 742 + */ 743 + int pci_tsm_pf0_constructor(struct pci_dev *pdev, struct pci_tsm_pf0 *tsm, 744 + struct tsm_dev *tsm_dev) 745 + { 746 + mutex_init(&tsm->lock); 747 + tsm->doe_mb = pci_find_doe_mailbox(pdev, PCI_VENDOR_ID_PCI_SIG, 748 + PCI_DOE_FEATURE_CMA); 749 + if (!tsm->doe_mb) { 750 + pci_warn(pdev, "TSM init failure, no CMA mailbox\n"); 751 + return -ENODEV; 752 + } 753 + 754 + return pci_tsm_link_constructor(pdev, &tsm->base_tsm, tsm_dev); 755 + } 756 + EXPORT_SYMBOL_GPL(pci_tsm_pf0_constructor); 757 + 758 + void pci_tsm_pf0_destructor(struct pci_tsm_pf0 *pf0_tsm) 759 + { 760 + mutex_destroy(&pf0_tsm->lock); 761 + } 762 + EXPORT_SYMBOL_GPL(pci_tsm_pf0_destructor); 763 + 764 + int pci_tsm_register(struct tsm_dev *tsm_dev) 765 + { 766 + struct pci_dev *pdev = NULL; 767 + 768 + if (!tsm_dev) 769 + return -EINVAL; 770 + 771 + /* The TSM device must only implement one of link_ops or devsec_ops */ 772 + if (!is_link_tsm(tsm_dev) && !is_devsec_tsm(tsm_dev)) 773 + return -EINVAL; 774 + 775 + if (is_link_tsm(tsm_dev) && is_devsec_tsm(tsm_dev)) 776 + return -EINVAL; 777 + 778 + guard(rwsem_write)(&pci_tsm_rwsem); 779 + 780 + /* On first enable, update sysfs groups */ 781 + if (is_link_tsm(tsm_dev) && pci_tsm_link_count++ == 0) { 782 + for_each_pci_dev(pdev) 783 + if (is_pci_tsm_pf0(pdev)) 784 + link_sysfs_enable(pdev); 785 + } else if (is_devsec_tsm(tsm_dev)) { 786 + pci_tsm_devsec_count++; 787 + } 788 + 789 + return 0; 790 + } 791 + 792 + static void pci_tsm_fn_exit(struct pci_dev *pdev) 793 + { 794 + __pci_tsm_unbind(pdev, NULL); 795 + tsm_remove(pdev->tsm); 796 + } 797 + 798 + /** 799 + * __pci_tsm_destroy() - destroy the TSM context for @pdev 800 + * @pdev: device to cleanup 801 + * @tsm_dev: the TSM device being removed, or NULL if @pdev is being removed. 802 + * 803 + * At device removal or TSM unregistration all established context 804 + * with the TSM is torn down. 
Additionally, if there are no more TSMs 805 + * registered, the PCI tsm/ sysfs attributes are hidden. 806 + */ 807 + static void __pci_tsm_destroy(struct pci_dev *pdev, struct tsm_dev *tsm_dev) 808 + { 809 + struct pci_tsm *tsm = pdev->tsm; 810 + 811 + lockdep_assert_held_write(&pci_tsm_rwsem); 812 + 813 + /* 814 + * First, handle the TSM removal case to shutdown @pdev sysfs, this is 815 + * skipped if the device itself is being removed since sysfs goes away 816 + * naturally at that point 817 + */ 818 + if (is_link_tsm(tsm_dev) && is_pci_tsm_pf0(pdev) && !pci_tsm_link_count) 819 + link_sysfs_disable(pdev); 820 + 821 + /* Nothing else to do if this device never attached to the departing TSM */ 822 + if (!tsm) 823 + return; 824 + 825 + /* Now lookup the tsm_dev to destroy TSM context */ 826 + if (!tsm_dev) 827 + tsm_dev = tsm->tsm_dev; 828 + else if (tsm_dev != tsm->tsm_dev) 829 + return; 830 + 831 + if (is_link_tsm(tsm_dev) && is_pci_tsm_pf0(pdev)) 832 + pci_tsm_disconnect(pdev); 833 + else 834 + pci_tsm_fn_exit(pdev); 835 + } 836 + 837 + void pci_tsm_destroy(struct pci_dev *pdev) 838 + { 839 + guard(rwsem_write)(&pci_tsm_rwsem); 840 + __pci_tsm_destroy(pdev, NULL); 841 + } 842 + 843 + void pci_tsm_init(struct pci_dev *pdev) 844 + { 845 + guard(rwsem_read)(&pci_tsm_rwsem); 846 + 847 + /* 848 + * Subfunctions are either probed synchronous with connect() or later 849 + * when either the SR-IOV configuration is changed, or, unlikely, 850 + * connect() raced initial bus scanning. 851 + */ 852 + if (pdev->tsm) 853 + return; 854 + 855 + if (pci_tsm_link_count) { 856 + struct pci_dev *dsm = find_dsm_dev(pdev); 857 + 858 + if (!dsm) 859 + return; 860 + 861 + /* 862 + * The only path to init a Device Security Manager capable 863 + * device is via connect(). 
864 + */ 865 + if (!dsm->tsm) 866 + return; 867 + 868 + probe_fn(pdev, dsm); 869 + } 870 + } 871 + 872 + void pci_tsm_unregister(struct tsm_dev *tsm_dev) 873 + { 874 + struct pci_dev *pdev = NULL; 875 + 876 + guard(rwsem_write)(&pci_tsm_rwsem); 877 + if (is_link_tsm(tsm_dev)) 878 + pci_tsm_link_count--; 879 + if (is_devsec_tsm(tsm_dev)) 880 + pci_tsm_devsec_count--; 881 + for_each_pci_dev_reverse(pdev) 882 + __pci_tsm_destroy(pdev, tsm_dev); 883 + } 884 + 885 + int pci_tsm_doe_transfer(struct pci_dev *pdev, u8 type, const void *req, 886 + size_t req_sz, void *resp, size_t resp_sz) 887 + { 888 + struct pci_tsm_pf0 *tsm; 889 + 890 + if (!pdev->tsm || !is_pci_tsm_pf0(pdev)) 891 + return -ENXIO; 892 + 893 + tsm = to_pci_tsm_pf0(pdev->tsm); 894 + if (!tsm->doe_mb) 895 + return -ENXIO; 896 + 897 + return pci_doe(tsm->doe_mb, PCI_VENDOR_ID_PCI_SIG, type, req, req_sz, 898 + resp, resp_sz); 899 + } 900 + EXPORT_SYMBOL_GPL(pci_tsm_doe_transfer);
+2 -2
drivers/virt/Kconfig
··· 47 47 48 48 source "drivers/virt/acrn/Kconfig" 49 49 50 - source "drivers/virt/coco/Kconfig" 51 - 52 50 endif 51 + 52 + source "drivers/virt/coco/Kconfig"
+5
drivers/virt/coco/Kconfig
··· 3 3 # Confidential computing related collateral 4 4 # 5 5 6 + if VIRT_DRIVERS 6 7 source "drivers/virt/coco/efi_secret/Kconfig" 7 8 8 9 source "drivers/virt/coco/pkvm-guest/Kconfig" ··· 15 14 source "drivers/virt/coco/arm-cca-guest/Kconfig" 16 15 17 16 source "drivers/virt/coco/guest/Kconfig" 17 + endif 18 + 19 + config TSM 20 + bool
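With `config TSM` defined above as a silent bool (no prompt text), a low-level TSM driver enables tsm-core by selecting the symbol rather than asking the user. A hypothetical driver Kconfig fragment; SAMPLE_TSM is an invented symbol used only to show the `select TSM` wiring:

```
config SAMPLE_TSM
	tristate "Sample platform TSM driver"
	depends on PCI
	select TSM
	help
	  Illustrative entry for a vendor TSM driver that pulls in the
	  tsm-core registration interface via 'select TSM'.
```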
+1
drivers/virt/coco/Makefile
··· 7 7 obj-$(CONFIG_SEV_GUEST) += sev-guest/ 8 8 obj-$(CONFIG_INTEL_TDX_GUEST) += tdx-guest/ 9 9 obj-$(CONFIG_ARM_CCA_GUEST) += arm-cca-guest/ 10 + obj-$(CONFIG_TSM) += tsm-core.o 10 11 obj-$(CONFIG_TSM_GUEST) += guest/
+163
drivers/virt/coco/tsm-core.c
···
+ // SPDX-License-Identifier: GPL-2.0-only
+ /* Copyright(c) 2024-2025 Intel Corporation. All rights reserved. */
+
+ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+ #include <linux/tsm.h>
+ #include <linux/pci.h>
+ #include <linux/rwsem.h>
+ #include <linux/device.h>
+ #include <linux/module.h>
+ #include <linux/cleanup.h>
+ #include <linux/pci-tsm.h>
+ #include <linux/pci-ide.h>
+
+ static struct class *tsm_class;
+ static DECLARE_RWSEM(tsm_rwsem);
+ static DEFINE_IDA(tsm_ida);
+
+ static int match_id(struct device *dev, const void *data)
+ {
+         struct tsm_dev *tsm_dev = container_of(dev, struct tsm_dev, dev);
+         int id = *(const int *)data;
+
+         return tsm_dev->id == id;
+ }
+
+ struct tsm_dev *find_tsm_dev(int id)
+ {
+         struct device *dev = class_find_device(tsm_class, NULL, &id, match_id);
+
+         if (!dev)
+                 return NULL;
+         return container_of(dev, struct tsm_dev, dev);
+ }
+
+ static struct tsm_dev *alloc_tsm_dev(struct device *parent)
+ {
+         struct device *dev;
+         int id;
+
+         struct tsm_dev *tsm_dev __free(kfree) =
+                 kzalloc(sizeof(*tsm_dev), GFP_KERNEL);
+         if (!tsm_dev)
+                 return ERR_PTR(-ENOMEM);
+
+         id = ida_alloc(&tsm_ida, GFP_KERNEL);
+         if (id < 0)
+                 return ERR_PTR(id);
+
+         tsm_dev->id = id;
+         dev = &tsm_dev->dev;
+         dev->parent = parent;
+         dev->class = tsm_class;
+         device_initialize(dev);
+
+         return no_free_ptr(tsm_dev);
+ }
+
+ static struct tsm_dev *tsm_register_pci_or_reset(struct tsm_dev *tsm_dev,
+                                                  struct pci_tsm_ops *pci_ops)
+ {
+         int rc;
+
+         if (!pci_ops)
+                 return tsm_dev;
+
+         tsm_dev->pci_ops = pci_ops;
+         rc = pci_tsm_register(tsm_dev);
+         if (rc) {
+                 dev_err(tsm_dev->dev.parent,
+                         "PCI/TSM registration failure: %d\n", rc);
+                 device_unregister(&tsm_dev->dev);
+                 return ERR_PTR(rc);
+         }
+
+         /* Notify TSM userspace that PCI/TSM operations are now possible */
+         kobject_uevent(&tsm_dev->dev.kobj, KOBJ_CHANGE);
+         return tsm_dev;
+ }
+
+ struct tsm_dev *tsm_register(struct device *parent, struct pci_tsm_ops *pci_ops)
+ {
+         struct tsm_dev *tsm_dev __free(put_tsm_dev) = alloc_tsm_dev(parent);
+         struct device *dev;
+         int rc;
+
+         if (IS_ERR(tsm_dev))
+                 return tsm_dev;
+
+         dev = &tsm_dev->dev;
+         rc = dev_set_name(dev, "tsm%d", tsm_dev->id);
+         if (rc)
+                 return ERR_PTR(rc);
+
+         rc = device_add(dev);
+         if (rc)
+                 return ERR_PTR(rc);
+
+         return tsm_register_pci_or_reset(no_free_ptr(tsm_dev), pci_ops);
+ }
+ EXPORT_SYMBOL_GPL(tsm_register);
+
+ void tsm_unregister(struct tsm_dev *tsm_dev)
+ {
+         if (tsm_dev->pci_ops)
+                 pci_tsm_unregister(tsm_dev);
+         device_unregister(&tsm_dev->dev);
+ }
+ EXPORT_SYMBOL_GPL(tsm_unregister);
+
+ /* must be invoked between tsm_register / tsm_unregister */
+ int tsm_ide_stream_register(struct pci_ide *ide)
+ {
+         struct pci_dev *pdev = ide->pdev;
+         struct pci_tsm *tsm = pdev->tsm;
+         struct tsm_dev *tsm_dev = tsm->tsm_dev;
+         int rc;
+
+         rc = sysfs_create_link(&tsm_dev->dev.kobj, &pdev->dev.kobj, ide->name);
+         if (rc)
+                 return rc;
+
+         ide->tsm_dev = tsm_dev;
+         return 0;
+ }
+ EXPORT_SYMBOL_GPL(tsm_ide_stream_register);
+
+ void tsm_ide_stream_unregister(struct pci_ide *ide)
+ {
+         struct tsm_dev *tsm_dev = ide->tsm_dev;
+
+         ide->tsm_dev = NULL;
+         sysfs_remove_link(&tsm_dev->dev.kobj, ide->name);
+ }
+ EXPORT_SYMBOL_GPL(tsm_ide_stream_unregister);
+
+ static void tsm_release(struct device *dev)
+ {
+         struct tsm_dev *tsm_dev = container_of(dev, typeof(*tsm_dev), dev);
+
+         ida_free(&tsm_ida, tsm_dev->id);
+         kfree(tsm_dev);
+ }
+
+ static int __init tsm_init(void)
+ {
+         tsm_class = class_create("tsm");
+         if (IS_ERR(tsm_class))
+                 return PTR_ERR(tsm_class);
+
+         tsm_class->dev_release = tsm_release;
+         return 0;
+ }
+ module_init(tsm_init)
+
+ static void __exit tsm_exit(void)
+ {
+         class_destroy(tsm_class);
+ }
+ module_exit(tsm_exit)
+
+ MODULE_LICENSE("GPL");
+ MODULE_DESCRIPTION("TEE Security Manager Class Device");
+2
include/linux/amd-iommu.h
···
  struct pci_dev;

  extern void amd_iommu_detect(void);
+ extern bool amd_iommu_sev_tio_supported(void);

  #else /* CONFIG_AMD_IOMMU */

  static inline void amd_iommu_detect(void) { }
+ static inline bool amd_iommu_sev_tio_supported(void) { return false; }

  #endif /* CONFIG_AMD_IOMMU */
+3
include/linux/device/bus.h
···
                          void *data, device_iter_t fn);
  struct device *bus_find_device(const struct bus_type *bus, struct device *start,
                                 const void *data, device_match_t match);
+ struct device *bus_find_device_reverse(const struct bus_type *bus,
+                                        struct device *start, const void *data,
+                                        device_match_t match);
  /**
   * bus_find_device_by_name - device iterator for locating a particular device
   * of a specific name.
+9
include/linux/ioport.h
···
          return true;
  }

+ /*
+  * Check if this resource is added to a resource tree or detached. Caller is
+  * responsible for not racing assignment.
+  */
+ static inline bool resource_assigned(struct resource *res)
+ {
+         return res->parent;
+ }
+
  int find_resource_space(struct resource *root, struct resource *new,
                          resource_size_t size, struct resource_constraint *constraint);
+4
include/linux/pci-doe.h
···

  struct pci_doe_mb;

+ #define PCI_DOE_FEATURE_DISCOVERY       0
+ #define PCI_DOE_FEATURE_CMA             1
+ #define PCI_DOE_FEATURE_SSESSION        2
+
  struct pci_doe_mb *pci_find_doe_mailbox(struct pci_dev *pdev, u16 vendor,
                                          u8 type);
+119
include/linux/pci-ide.h
···
+ /* SPDX-License-Identifier: GPL-2.0 */
+ /*
+  * Common helpers for drivers (e.g. low-level PCI/TSM drivers) implementing the
+  * IDE key management protocol (IDE_KM) as defined by:
+  * PCIe r7.0 section 6.33 Integrity & Data Encryption (IDE)
+  *
+  * Copyright(c) 2024-2025 Intel Corporation. All rights reserved.
+  */
+
+ #ifndef __PCI_IDE_H__
+ #define __PCI_IDE_H__
+
+ enum pci_ide_partner_select {
+         PCI_IDE_EP,
+         PCI_IDE_RP,
+         PCI_IDE_PARTNER_MAX,
+         /*
+          * In addition to the resources in each partner port, the
+          * platform / host bridge has a Stream ID pool that it shares
+          * across root ports. Let pci_ide_stream_alloc() use the same
+          * alloc_stream_index() helper as endpoints and root ports.
+          */
+         PCI_IDE_HB = PCI_IDE_PARTNER_MAX,
+ };
+
+ /**
+  * struct pci_ide_partner - Per port pair Selective IDE Stream settings
+  * @rid_start: Partner Port Requester ID range start
+  * @rid_end: Partner Port Requester ID range end
+  * @stream_index: Selective IDE Stream Register Block selection
+  * @mem_assoc: PCI bus memory address association for targeting peer partner
+  * @pref_assoc: PCI bus prefetchable memory address association for
+  *              targeting peer partner
+  * @default_stream: Endpoint uses this stream for all upstream TLPs regardless
+  *                  of address and RID association registers
+  * @setup: flag to track whether to run pci_ide_stream_teardown() for this
+  *         partner slot
+  * @enable: flag whether to run pci_ide_stream_disable() for this partner slot
+  *
+  * By default, pci_ide_stream_alloc() initializes @mem_assoc and @pref_assoc
+  * with the immediate ancestor downstream port memory ranges (i.e. Type 1
+  * Configuration Space Header values). Caller may zero size ({0, -1}) the range
+  * to drop it from consideration at pci_ide_stream_setup() time.
+  */
+ struct pci_ide_partner {
+         u16 rid_start;
+         u16 rid_end;
+         u8 stream_index;
+         struct pci_bus_region mem_assoc;
+         struct pci_bus_region pref_assoc;
+         unsigned int default_stream:1;
+         unsigned int setup:1;
+         unsigned int enable:1;
+ };
+
+ /**
+  * struct pci_ide_regs - Hardware register association settings for Selective
+  *                       IDE Streams
+  * @rid1: IDE RID Association Register 1
+  * @rid2: IDE RID Association Register 2
+  * @addr: Up to two address association blocks (IDE Address Association
+  *        Register 1 through 3) for MMIO and prefetchable MMIO
+  * @nr_addr: Number of address association blocks initialized
+  *
+  * See pci_ide_stream_to_regs()
+  */
+ struct pci_ide_regs {
+         u32 rid1;
+         u32 rid2;
+         struct {
+                 u32 assoc1;
+                 u32 assoc2;
+                 u32 assoc3;
+         } addr[2];
+         int nr_addr;
+ };
+
+ /**
+  * struct pci_ide - PCIe Selective IDE Stream descriptor
+  * @pdev: PCIe Endpoint in the pci_ide_partner pair
+  * @partner: per-partner settings
+  * @host_bridge_stream: allocated from host bridge @ide_stream_ida pool
+  * @stream_id: unique Stream ID (within Partner Port pairing)
+  * @name: name of the established Selective IDE Stream in sysfs
+  * @tsm_dev: For TSM established IDE, the TSM device context
+  *
+  * Negative @stream_id values indicate "uninitialized" on the
+  * expectation that with TSM established IDE the TSM owns the stream_id
+  * allocation.
+  */
+ struct pci_ide {
+         struct pci_dev *pdev;
+         struct pci_ide_partner partner[PCI_IDE_PARTNER_MAX];
+         u8 host_bridge_stream;
+         int stream_id;
+         const char *name;
+         struct tsm_dev *tsm_dev;
+ };
+
+ /*
+  * Some devices need help with aliased stream-ids even for idle streams. Use
+  * this id as the "never enabled" placeholder.
+  */
+ #define PCI_IDE_RESERVED_STREAM_ID 255
+
+ void pci_ide_set_nr_streams(struct pci_host_bridge *hb, u16 nr);
+ struct pci_ide_partner *pci_ide_to_settings(struct pci_dev *pdev,
+                                             struct pci_ide *ide);
+ struct pci_ide *pci_ide_stream_alloc(struct pci_dev *pdev);
+ void pci_ide_stream_free(struct pci_ide *ide);
+ int pci_ide_stream_register(struct pci_ide *ide);
+ void pci_ide_stream_unregister(struct pci_ide *ide);
+ void pci_ide_stream_setup(struct pci_dev *pdev, struct pci_ide *ide);
+ void pci_ide_stream_teardown(struct pci_dev *pdev, struct pci_ide *ide);
+ int pci_ide_stream_enable(struct pci_dev *pdev, struct pci_ide *ide);
+ void pci_ide_stream_disable(struct pci_dev *pdev, struct pci_ide *ide);
+ void pci_ide_stream_release(struct pci_ide *ide);
+ DEFINE_FREE(pci_ide_stream_release, struct pci_ide *, if (_T) pci_ide_stream_release(_T))
+ #endif /* __PCI_IDE_H__ */
+243
include/linux/pci-tsm.h
···
+ /* SPDX-License-Identifier: GPL-2.0 */
+ #ifndef __PCI_TSM_H
+ #define __PCI_TSM_H
+ #include <linux/mutex.h>
+ #include <linux/pci.h>
+ #include <linux/sockptr.h>
+
+ struct pci_tsm;
+ struct tsm_dev;
+ struct kvm;
+ enum pci_tsm_req_scope;
+
+ /*
+  * struct pci_tsm_ops - manage confidential links and security state
+  * @link_ops: Coordinate PCIe SPDM and IDE establishment via a platform TSM.
+  *            Provide a secure session transport for TDISP state management
+  *            (typically bare metal physical function operations).
+  * @devsec_ops: Lock, unlock, and interrogate the security state of the
+  *              function via the platform TSM (typically virtual function
+  *              operations).
+  *
+  * These operations are mutually exclusive: either a tsm_dev instance
+  * manages physical link properties, or it manages function security
+  * states like TDISP lock/unlock.
+  */
+ struct pci_tsm_ops {
+         /*
+          * struct pci_tsm_link_ops - Manage physical link and the TSM/DSM session
+          * @probe: establish context with the TSM (allocate / wrap 'struct
+          *         pci_tsm') for follow-on link operations
+          * @remove: destroy link operations context
+          * @connect: establish / validate a secure connection (e.g. IDE)
+          *           with the device
+          * @disconnect: teardown the secure link
+          * @bind: bind a TDI in preparation for it to be accepted by a TVM
+          * @unbind: remove a TDI from secure operation with a TVM
+          * @guest_req: marshal TVM information and state change requests
+          *
+          * Context: @probe, @remove, @connect, and @disconnect run under
+          * pci_tsm_rwsem held for write to sync with TSM unregistration and
+          * mutual exclusion of @connect and @disconnect. @connect and
+          * @disconnect additionally run under the DSM lock (struct
+          * pci_tsm_pf0::lock) as well as @probe and @remove of the subfunctions.
+          * @bind, @unbind, and @guest_req run under pci_tsm_rwsem held for read
+          * and the DSM lock.
+          */
+         struct_group_tagged(pci_tsm_link_ops, link_ops,
+                 struct pci_tsm *(*probe)(struct tsm_dev *tsm_dev,
+                                          struct pci_dev *pdev);
+                 void (*remove)(struct pci_tsm *tsm);
+                 int (*connect)(struct pci_dev *pdev);
+                 void (*disconnect)(struct pci_dev *pdev);
+                 struct pci_tdi *(*bind)(struct pci_dev *pdev,
+                                         struct kvm *kvm, u32 tdi_id);
+                 void (*unbind)(struct pci_tdi *tdi);
+                 ssize_t (*guest_req)(struct pci_tdi *tdi,
+                                      enum pci_tsm_req_scope scope,
+                                      sockptr_t req_in, size_t in_len,
+                                      sockptr_t req_out, size_t out_len,
+                                      u64 *tsm_code);
+         );
+
+         /*
+          * struct pci_tsm_devsec_ops - Manage the security state of the function
+          * @lock: establish context with the TSM (allocate / wrap 'struct
+          *        pci_tsm') for follow-on security state transitions from the
+          *        LOCKED state
+          * @unlock: destroy TSM context and return device to UNLOCKED state
+          *
+          * Context: @lock and @unlock run under pci_tsm_rwsem held for write to
+          * sync with TSM unregistration and each other
+          */
+         struct_group_tagged(pci_tsm_devsec_ops, devsec_ops,
+                 struct pci_tsm *(*lock)(struct tsm_dev *tsm_dev,
+                                         struct pci_dev *pdev);
+                 void (*unlock)(struct pci_tsm *tsm);
+         );
+ };
+
+ /**
+  * struct pci_tdi - Core TEE I/O Device Interface (TDI) context
+  * @pdev: host side representation of guest-side TDI
+  * @kvm: TEE VM context of bound TDI
+  * @tdi_id: Identifier (virtual BDF) for the TDI as referenced by the TSM and DSM
+  */
+ struct pci_tdi {
+         struct pci_dev *pdev;
+         struct kvm *kvm;
+         u32 tdi_id;
+ };
+
+ /**
+  * struct pci_tsm - Core TSM context for a given PCIe endpoint
+  * @pdev: Back ref to device function, distinguishes type of pci_tsm context
+  * @dsm_dev: PCI Device Security Manager for link operations on @pdev
+  * @tsm_dev: PCI TEE Security Manager device for Link Confidentiality or Device
+  *           Function Security operations
+  * @tdi: TDI context established by the @bind link operation
+  *
+  * This structure is wrapped by low level TSM driver data and returned by
+  * probe()/lock(); it is freed by the corresponding remove()/unlock().
+  *
+  * For link operations it serves to cache the association between a Device
+  * Security Manager (DSM) and the functions that manager can assign to a TVM.
+  * That can be "self", for assigning function0 of a TEE I/O device, a
+  * sub-function (SR-IOV virtual function, or non-function0
+  * multifunction-device), or a downstream endpoint (PCIe upstream switch-port
+  * as DSM).
+  */
+ struct pci_tsm {
+         struct pci_dev *pdev;
+         struct pci_dev *dsm_dev;
+         struct tsm_dev *tsm_dev;
+         struct pci_tdi *tdi;
+ };
+
+ /**
+  * struct pci_tsm_pf0 - Physical Function 0 TDISP link context
+  * @base_tsm: generic core "tsm" context
+  * @lock: mutual exclusion for pci_tsm_ops invocation
+  * @doe_mb: PCIe Data Object Exchange mailbox
+  */
+ struct pci_tsm_pf0 {
+         struct pci_tsm base_tsm;
+         struct mutex lock;
+         struct pci_doe_mb *doe_mb;
+ };
+
+ /* physical function0 and capable of 'connect' */
+ static inline bool is_pci_tsm_pf0(struct pci_dev *pdev)
+ {
+         if (!pdev)
+                 return false;
+
+         if (!pci_is_pcie(pdev))
+                 return false;
+
+         if (pdev->is_virtfn)
+                 return false;
+
+         /*
+          * Allow for a Device Security Manager (DSM) associated with function0
+          * of an Endpoint to coordinate TDISP requests for other functions
+          * (physical or virtual) of the device, or allow for an Upstream Port
+          * DSM to accept TDISP requests for the Endpoints downstream of the
+          * switch.
+          */
+         switch (pci_pcie_type(pdev)) {
+         case PCI_EXP_TYPE_ENDPOINT:
+         case PCI_EXP_TYPE_UPSTREAM:
+         case PCI_EXP_TYPE_RC_END:
+                 if (pdev->ide_cap || (pdev->devcap & PCI_EXP_DEVCAP_TEE))
+                         break;
+                 fallthrough;
+         default:
+                 return false;
+         }
+
+         return PCI_FUNC(pdev->devfn) == 0;
+ }
+
+ /**
+  * enum pci_tsm_req_scope - Scope of guest requests to be validated by TSM
+  *
+  * Guest requests are a transport for a TVM to communicate with a TSM + DSM for
+  * a given TDI. A TSM driver is responsible for maintaining the kernel security
+  * model and limiting commands that may affect the host, or are otherwise
+  * outside the typical TDISP operational model.
+  */
+ enum pci_tsm_req_scope {
+         /**
+          * @PCI_TSM_REQ_INFO: Read-only, without side effects, request for
+          * typical TDISP collateral information like Device Interface Reports.
+          * No device secrets are permitted, and no device state is changed.
+          */
+         PCI_TSM_REQ_INFO = 0,
+         /**
+          * @PCI_TSM_REQ_STATE_CHANGE: Request to change the TDISP state from
+          * UNLOCKED->LOCKED, LOCKED->RUN, or other architecture specific state
+          * changes to support those transitions for a TDI. No other (unrelated
+          * to TDISP) device / host state, configuration, or data change is
+          * permitted.
+          */
+         PCI_TSM_REQ_STATE_CHANGE = 1,
+         /**
+          * @PCI_TSM_REQ_DEBUG_READ: Read-only request for debug information
+          *
+          * A method to facilitate TVM information retrieval outside of typical
+          * TDISP operational requirements. No device secrets are permitted.
+          */
+         PCI_TSM_REQ_DEBUG_READ = 2,
+         /**
+          * @PCI_TSM_REQ_DEBUG_WRITE: Device state changes for debug purposes
+          *
+          * The request may affect the operational state of the device outside
+          * of the TDISP operational model. If allowed, requires CAP_SYS_RAW_IO,
+          * and will taint the kernel.
+          */
+         PCI_TSM_REQ_DEBUG_WRITE = 3,
+ };
+
+ #ifdef CONFIG_PCI_TSM
+ int pci_tsm_register(struct tsm_dev *tsm_dev);
+ void pci_tsm_unregister(struct tsm_dev *tsm_dev);
+ int pci_tsm_link_constructor(struct pci_dev *pdev, struct pci_tsm *tsm,
+                              struct tsm_dev *tsm_dev);
+ int pci_tsm_pf0_constructor(struct pci_dev *pdev, struct pci_tsm_pf0 *tsm,
+                             struct tsm_dev *tsm_dev);
+ void pci_tsm_pf0_destructor(struct pci_tsm_pf0 *tsm);
+ int pci_tsm_doe_transfer(struct pci_dev *pdev, u8 type, const void *req,
+                          size_t req_sz, void *resp, size_t resp_sz);
+ int pci_tsm_bind(struct pci_dev *pdev, struct kvm *kvm, u32 tdi_id);
+ void pci_tsm_unbind(struct pci_dev *pdev);
+ void pci_tsm_tdi_constructor(struct pci_dev *pdev, struct pci_tdi *tdi,
+                              struct kvm *kvm, u32 tdi_id);
+ ssize_t pci_tsm_guest_req(struct pci_dev *pdev, enum pci_tsm_req_scope scope,
+                           sockptr_t req_in, size_t in_len, sockptr_t req_out,
+                           size_t out_len, u64 *tsm_code);
+ #else
+ static inline int pci_tsm_register(struct tsm_dev *tsm_dev)
+ {
+         return 0;
+ }
+ static inline void pci_tsm_unregister(struct tsm_dev *tsm_dev)
+ {
+ }
+ static inline int pci_tsm_bind(struct pci_dev *pdev, struct kvm *kvm, u32 tdi_id)
+ {
+         return -ENXIO;
+ }
+ static inline void pci_tsm_unbind(struct pci_dev *pdev)
+ {
+ }
+ static inline ssize_t pci_tsm_guest_req(struct pci_dev *pdev,
+                                         enum pci_tsm_req_scope scope,
+                                         sockptr_t req_in, size_t in_len,
+                                         sockptr_t req_out, size_t out_len,
+                                         u64 *tsm_code)
+ {
+         return -ENXIO;
+ }
+ #endif
+ #endif /* __PCI_TSM_H */
+34
include/linux/pci.h
···
          unsigned int    pasid_enabled:1;        /* Process Address Space ID */
          unsigned int    pri_enabled:1;          /* Page Request Interface */
          unsigned int    tph_enabled:1;          /* TLP Processing Hints */
+         unsigned int    fm_enabled:1;           /* Flit Mode (segment captured) */
          unsigned int    is_managed:1;           /* Managed via devres */
          unsigned int    is_msi_managed:1;       /* MSI release via devres installed */
          unsigned int    needs_freset:1;         /* Requires fundamental reset */
···
  #ifdef CONFIG_PCI_NPEM
          struct npem *npem;      /* Native PCIe Enclosure Management */
  #endif
+ #ifdef CONFIG_PCI_IDE
+         u16 ide_cap;            /* Link Integrity & Data Encryption */
+         u8 nr_ide_mem;          /* Address association resources for streams */
+         u8 nr_link_ide;         /* Link Stream count (Selective Stream offset) */
+         u16 nr_sel_ide;         /* Selective Stream count (register block allocator) */
+         struct ida ide_stream_ida;
+         unsigned int ide_cfg:1;         /* Config cycles over IDE */
+         unsigned int ide_tee_limit:1;   /* Disallow T=0 traffic over IDE */
+ #endif
+ #ifdef CONFIG_PCI_TSM
+         struct pci_tsm *tsm;    /* TSM operation state */
+ #endif
          u16 acs_cap;            /* ACS Capability offset */
          u8 supported_speeds;    /* Supported Link Speeds Vector */
          phys_addr_t rom;        /* Physical address if not from BAR */
···

  #define to_pci_dev(n) container_of(n, struct pci_dev, dev)
  #define for_each_pci_dev(d) while ((d = pci_get_device(PCI_ANY_ID, PCI_ANY_ID, d)) != NULL)
+ #define for_each_pci_dev_reverse(d) \
+         while ((d = pci_get_device_reverse(PCI_ANY_ID, PCI_ANY_ID, d)) != NULL)

  static inline int pci_channel_offline(struct pci_dev *pdev)
  {
···
          int domain_nr;
          struct list_head windows;       /* resource_entry */
          struct list_head dma_ranges;    /* dma ranges resource list */
+ #ifdef CONFIG_PCI_IDE
+         u16 nr_ide_streams;     /* Max streams possibly active in @ide_stream_ida */
+         struct ida ide_stream_ida;
+         struct ida ide_stream_ids_ida;  /* track unique ids per domain */
+ #endif
          u8 (*swizzle_irq)(struct pci_dev *, u8 *); /* Platform IRQ swizzler */
          int (*map_irq)(const struct pci_dev *, u8, u8);
          void (*release_fn)(struct pci_host_bridge *);
···
          pci_bus_addr_t start;
          pci_bus_addr_t end;
  };
+
+ static inline pci_bus_addr_t pci_bus_region_size(const struct pci_bus_region *region)
+ {
+         return region->end - region->start + 1;
+ }

  struct pci_dynids {
          spinlock_t lock;        /* Protects list, index */
···

  struct pci_dev *pci_get_device(unsigned int vendor, unsigned int device,
                                 struct pci_dev *from);
+ struct pci_dev *pci_get_device_reverse(unsigned int vendor, unsigned int device,
+                                        struct pci_dev *from);
  struct pci_dev *pci_get_subsys(unsigned int vendor, unsigned int device,
                                 unsigned int ss_vendor, unsigned int ss_device,
                                 struct pci_dev *from);
···

  void pci_walk_bus(struct pci_bus *top, int (*cb)(struct pci_dev *, void *),
                    void *userdata);
+ void pci_walk_bus_reverse(struct pci_bus *top,
+                           int (*cb)(struct pci_dev *, void *), void *userdata);
  int pci_cfg_space_size(struct pci_dev *dev);
  unsigned char pci_bus_max_busnr(struct pci_bus *bus);
  resource_size_t pcibios_window_alignment(struct pci_bus *bus,
···
  static inline struct pci_dev *pci_get_device(unsigned int vendor,
                                               unsigned int device,
                                               struct pci_dev *from)
+ { return NULL; }
+
+ static inline struct pci_dev *pci_get_device_reverse(unsigned int vendor,
+                                                      unsigned int device,
+                                                      struct pci_dev *from)
  { return NULL; }

  static inline struct pci_dev *pci_get_subsys(unsigned int vendor,
+19 -1
include/linux/psp-sev.h
···
          SEV_CMD_SNP_VLEK_LOAD           = 0x0CD,
          SEV_CMD_SNP_FEATURE_INFO        = 0x0CE,

+         /* SEV-TIO commands */
+         SEV_CMD_TIO_STATUS              = 0x0D0,
+         SEV_CMD_TIO_INIT                = 0x0D1,
+         SEV_CMD_TIO_DEV_CREATE          = 0x0D2,
+         SEV_CMD_TIO_DEV_RECLAIM         = 0x0D3,
+         SEV_CMD_TIO_DEV_CONNECT         = 0x0D4,
+         SEV_CMD_TIO_DEV_DISCONNECT      = 0x0D5,
          SEV_CMD_MAX,
  };
···
          u32 list_paddr_en:1;
          u32 rapl_dis:1;
          u32 ciphertext_hiding_en:1;
-         u32 rsvd:28;
+         u32 tio_en:1;
+         u32 rsvd:27;
          u32 rsvd1;
          u64 list_paddr;
          u16 max_snp_asid;
···
          u32 edx;
  } __packed;

+ /* Feature bits in ECX */
  #define SNP_RAPL_DISABLE_SUPPORTED      BIT(2)
  #define SNP_CIPHER_TEXT_HIDING_SUPPORTED BIT(3)
  #define SNP_AES_256_XTS_POLICY_SUPPORTED BIT(4)
  #define SNP_CXL_ALLOW_POLICY_SUPPORTED  BIT(5)
+
+ /* Feature bits in EBX */
+ #define SNP_SEV_TIO_SUPPORTED           BIT(1)

  #ifdef CONFIG_CRYPTO_DEV_SP_PSP
···

  void *psp_copy_user_blob(u64 uaddr, u32 len);
  void *snp_alloc_firmware_page(gfp_t mask);
+ int snp_reclaim_pages(unsigned long paddr, unsigned int npages, bool locked);
  void snp_free_firmware_page(void *addr);
  void sev_platform_shutdown(void);
  bool sev_is_snp_ciphertext_hiding_supported(void);
···
  static inline void *snp_alloc_firmware_page(gfp_t mask)
  {
          return NULL;
+ }
+
+ static inline int snp_reclaim_pages(unsigned long paddr, unsigned int npages, bool locked)
+ {
+         return -ENODEV;
  }

  static inline void snp_free_firmware_page(void *addr) { }
+17
include/linux/tsm.h
···
  #include <linux/sizes.h>
  #include <linux/types.h>
  #include <linux/uuid.h>
+ #include <linux/device.h>

  #define TSM_REPORT_INBLOB_MAX 64
  #define TSM_REPORT_OUTBLOB_MAX SZ_32K
···
          bool (*report_bin_attr_visible)(int n);
  };

+ struct pci_tsm_ops;
+ struct tsm_dev {
+         struct device dev;
+         int id;
+         const struct pci_tsm_ops *pci_ops;
+ };
+
+ DEFINE_FREE(put_tsm_dev, struct tsm_dev *,
+             if (!IS_ERR_OR_NULL(_T)) put_device(&_T->dev))
+
  int tsm_report_register(const struct tsm_report_ops *ops, void *priv);
  int tsm_report_unregister(const struct tsm_report_ops *ops);
+ struct tsm_dev *tsm_register(struct device *parent, struct pci_tsm_ops *ops);
+ void tsm_unregister(struct tsm_dev *tsm_dev);
+ struct tsm_dev *find_tsm_dev(int id);
+ struct pci_ide;
+ int tsm_ide_stream_register(struct pci_ide *ide);
+ void tsm_ide_stream_unregister(struct pci_ide *ide);
  #endif /* __TSM_H */
+89
include/uapi/linux/pci_regs.h
···
  #define PCI_EXP_DEVCAP_PWR_VAL  0x03fc0000 /* Slot Power Limit Value */
  #define PCI_EXP_DEVCAP_PWR_SCL  0x0c000000 /* Slot Power Limit Scale */
  #define PCI_EXP_DEVCAP_FLR      0x10000000 /* Function Level Reset */
+ #define PCI_EXP_DEVCAP_TEE      0x40000000 /* TEE I/O (TDISP) Support */
  #define PCI_EXP_DEVCTL          0x08    /* Device Control */
  #define PCI_EXP_DEVCTL_CERE     0x0001  /* Correctable Error Reporting En. */
  #define PCI_EXP_DEVCTL_NFERE    0x0002  /* Non-Fatal Error Reporting Enable */
···
  #define PCI_EXT_CAP_ID_NPEM     0x29    /* Native PCIe Enclosure Management */
  #define PCI_EXT_CAP_ID_PL_32GT  0x2A    /* Physical Layer 32.0 GT/s */
  #define PCI_EXT_CAP_ID_DOE      0x2E    /* Data Object Exchange */
+ #define PCI_EXT_CAP_ID_DEV3     0x2F    /* Device 3 Capability/Control/Status */
+ #define PCI_EXT_CAP_ID_IDE      0x30    /* Integrity and Data Encryption */
  #define PCI_EXT_CAP_ID_PL_64GT  0x31    /* Physical Layer 64.0 GT/s */
  #define PCI_EXT_CAP_ID_MAX      PCI_EXT_CAP_ID_PL_64GT
···
  /* Deprecated old name, replaced with PCI_DOE_DATA_OBJECT_DISC_RSP_3_TYPE */
  #define PCI_DOE_DATA_OBJECT_DISC_RSP_3_PROTOCOL PCI_DOE_DATA_OBJECT_DISC_RSP_3_TYPE

+ /* Device 3 Extended Capability */
+ #define PCI_DEV3_CAP            0x04    /* Device 3 Capabilities Register */
+ #define PCI_DEV3_CTL            0x08    /* Device 3 Control Register */
+ #define PCI_DEV3_STA            0x0c    /* Device 3 Status Register */
+ #define PCI_DEV3_STA_SEGMENT    0x8     /* Segment Captured (end-to-end flit-mode detected) */
+
  /* Compute Express Link (CXL r3.1, sec 8.1.5) */
  #define PCI_DVSEC_CXL_PORT                      3
  #define PCI_DVSEC_CXL_PORT_CTL                  0x0c
  #define PCI_DVSEC_CXL_PORT_CTL_UNMASK_SBR       0x00000001
+
+ /* Integrity and Data Encryption Extended Capability */
+ #define PCI_IDE_CAP             0x04
+ #define PCI_IDE_CAP_LINK        0x1     /* Link IDE Stream Supported */
+ #define PCI_IDE_CAP_SELECTIVE   0x2     /* Selective IDE Streams Supported */
+ #define PCI_IDE_CAP_FLOWTHROUGH 0x4     /* Flow-Through IDE Stream Supported */
+ #define PCI_IDE_CAP_PARTIAL_HEADER_ENC 0x8 /* Partial Header Encryption Supported */
+ #define PCI_IDE_CAP_AGGREGATION 0x10    /* Aggregation Supported */
+ #define PCI_IDE_CAP_PCRC        0x20    /* PCRC Supported */
+ #define PCI_IDE_CAP_IDE_KM      0x40    /* IDE_KM Protocol Supported */
+ #define PCI_IDE_CAP_SEL_CFG     0x80    /* Selective IDE for Config Request Support */
+ #define PCI_IDE_CAP_ALG         __GENMASK(12, 8) /* Supported Algorithms */
+ #define PCI_IDE_CAP_ALG_AES_GCM_256 0   /* AES-GCM 256 key size, 96b MAC */
+ #define PCI_IDE_CAP_LINK_TC_NUM __GENMASK(15, 13) /* Link IDE TCs */
+ #define PCI_IDE_CAP_SEL_NUM     __GENMASK(23, 16) /* Supported Selective IDE Streams */
+ #define PCI_IDE_CAP_TEE_LIMITED 0x1000000 /* TEE-Limited Stream Supported */
+ #define PCI_IDE_CTL             0x08
+ #define PCI_IDE_CTL_FLOWTHROUGH_IDE 0x4 /* Flow-Through IDE Stream Enabled */
+
+ #define PCI_IDE_LINK_STREAM_0   0xc     /* First Link Stream Register Block */
+ #define PCI_IDE_LINK_BLOCK_SIZE 8
+ /* Link IDE Stream block, up to PCI_IDE_CAP_LINK_TC_NUM */
+ #define PCI_IDE_LINK_CTL_0      0x00    /* First Link Control Register Offset in block */
+ #define PCI_IDE_LINK_CTL_EN     0x1     /* Link IDE Stream Enable */
+ #define PCI_IDE_LINK_CTL_TX_AGGR_NPR __GENMASK(3, 2) /* Tx Aggregation Mode NPR */
+ #define PCI_IDE_LINK_CTL_TX_AGGR_PR __GENMASK(5, 4) /* Tx Aggregation Mode PR */
+ #define PCI_IDE_LINK_CTL_TX_AGGR_CPL __GENMASK(7, 6) /* Tx Aggregation Mode CPL */
+ #define PCI_IDE_LINK_CTL_PCRC_EN 0x100  /* PCRC Enable */
+ #define PCI_IDE_LINK_CTL_PART_ENC __GENMASK(13, 10) /* Partial Header Encryption Mode */
+ #define PCI_IDE_LINK_CTL_ALG    __GENMASK(18, 14) /* Selection from PCI_IDE_CAP_ALG */
+ #define PCI_IDE_LINK_CTL_TC     __GENMASK(21, 19) /* Traffic Class */
+ #define PCI_IDE_LINK_CTL_ID     __GENMASK(31, 24) /* Stream ID */
+ #define PCI_IDE_LINK_STS_0      0x4     /* First Link Status Register Offset in block */
+ #define PCI_IDE_LINK_STS_STATE  __GENMASK(3, 0) /* Link IDE Stream State */
+ #define PCI_IDE_LINK_STS_IDE_FAIL 0x80000000 /* IDE fail message received */
+
+ /* Selective IDE Stream block, up to PCI_IDE_CAP_SEL_NUM */
+ /* Selective IDE Stream Capability Register */
+ #define PCI_IDE_SEL_CAP         0x00
+ #define PCI_IDE_SEL_CAP_ASSOC_NUM __GENMASK(3, 0)
+ /* Selective IDE Stream Control Register */
+ #define PCI_IDE_SEL_CTL         0x04
+ #define PCI_IDE_SEL_CTL_EN      0x1     /* Selective IDE Stream Enable */
+ #define PCI_IDE_SEL_CTL_TX_AGGR_NPR __GENMASK(3, 2) /* Tx Aggregation Mode NPR */
+ #define PCI_IDE_SEL_CTL_TX_AGGR_PR __GENMASK(5, 4) /* Tx Aggregation Mode PR */
+ #define PCI_IDE_SEL_CTL_TX_AGGR_CPL __GENMASK(7, 6) /* Tx Aggregation Mode CPL */
+ #define PCI_IDE_SEL_CTL_PCRC_EN 0x100   /* PCRC Enable */
+ #define PCI_IDE_SEL_CTL_CFG_EN  0x200   /* Selective IDE for Configuration Requests */
+ #define PCI_IDE_SEL_CTL_PART_ENC __GENMASK(13, 10) /* Partial Header Encryption Mode */
+ #define PCI_IDE_SEL_CTL_ALG     __GENMASK(18, 14) /* Selection from PCI_IDE_CAP_ALG */
+ #define PCI_IDE_SEL_CTL_TC      __GENMASK(21, 19) /* Traffic Class */
+ #define PCI_IDE_SEL_CTL_DEFAULT 0x400000 /* Default Stream */
+ #define PCI_IDE_SEL_CTL_TEE_LIMITED 0x800000 /* TEE-Limited Stream */
+ #define PCI_IDE_SEL_CTL_ID      __GENMASK(31, 24) /* Stream ID */
+ #define PCI_IDE_SEL_CTL_ID_MAX  255
+ /* Selective IDE Stream Status Register */
+ #define PCI_IDE_SEL_STS         0x08
+ #define PCI_IDE_SEL_STS_STATE   __GENMASK(3, 0) /* Selective IDE Stream State */
+ #define PCI_IDE_SEL_STS_STATE_INSECURE 0
+ #define PCI_IDE_SEL_STS_STATE_SECURE 2
+ #define PCI_IDE_SEL_STS_IDE_FAIL 0x80000000 /* IDE fail message received */
+ /* IDE RID Association Register 1 */
+ #define PCI_IDE_SEL_RID_1       0x0c
+ #define PCI_IDE_SEL_RID_1_LIMIT __GENMASK(23, 8)
+ /* IDE RID Association Register 2 */
+ #define PCI_IDE_SEL_RID_2       0x10
+ #define PCI_IDE_SEL_RID_2_VALID 0x1
+ #define PCI_IDE_SEL_RID_2_BASE  __GENMASK(23, 8)
+ #define PCI_IDE_SEL_RID_2_SEG   __GENMASK(31, 24)
+ /* Selective IDE Address Association Register Block, up to PCI_IDE_SEL_CAP_ASSOC_NUM */
+ #define PCI_IDE_SEL_ADDR_BLOCK_SIZE 12
+ #define PCI_IDE_SEL_ADDR_1(x)   (20 + (x) * PCI_IDE_SEL_ADDR_BLOCK_SIZE)
+ #define PCI_IDE_SEL_ADDR_1_VALID 0x1
+ #define PCI_IDE_SEL_ADDR_1_BASE_LOW __GENMASK(19, 8)
+ #define PCI_IDE_SEL_ADDR_1_LIMIT_LOW __GENMASK(31, 20)
+ /* IDE Address Association Register 2 is "Memory Limit Upper" */
+ #define PCI_IDE_SEL_ADDR_2(x)   (24 + (x) * PCI_IDE_SEL_ADDR_BLOCK_SIZE)
+ /* IDE Address Association Register 3 is "Memory Base Upper" */
+ #define PCI_IDE_SEL_ADDR_3(x)   (28 + (x) * PCI_IDE_SEL_ADDR_BLOCK_SIZE)
+ #define PCI_IDE_SEL_BLOCK_SIZE(nr_assoc) (20 + PCI_IDE_SEL_ADDR_BLOCK_SIZE * (nr_assoc))

  #endif /* LINUX_PCI_REGS_H */
+41 -25
include/uapi/linux/psp-sev.h
···
 	 * with possible values from the specification.
 	 */
 	SEV_RET_NO_FW_CALL = -1,
-	SEV_RET_SUCCESS = 0,
-	SEV_RET_INVALID_PLATFORM_STATE,
-	SEV_RET_INVALID_GUEST_STATE,
-	SEV_RET_INAVLID_CONFIG,
+	SEV_RET_SUCCESS = 0,
+	SEV_RET_INVALID_PLATFORM_STATE = 0x0001,
+	SEV_RET_INVALID_GUEST_STATE = 0x0002,
+	SEV_RET_INAVLID_CONFIG = 0x0003,
 	SEV_RET_INVALID_CONFIG = SEV_RET_INAVLID_CONFIG,
-	SEV_RET_INVALID_LEN,
-	SEV_RET_ALREADY_OWNED,
-	SEV_RET_INVALID_CERTIFICATE,
-	SEV_RET_POLICY_FAILURE,
-	SEV_RET_INACTIVE,
-	SEV_RET_INVALID_ADDRESS,
-	SEV_RET_BAD_SIGNATURE,
-	SEV_RET_BAD_MEASUREMENT,
-	SEV_RET_ASID_OWNED,
-	SEV_RET_INVALID_ASID,
-	SEV_RET_WBINVD_REQUIRED,
-	SEV_RET_DFFLUSH_REQUIRED,
-	SEV_RET_INVALID_GUEST,
-	SEV_RET_INVALID_COMMAND,
-	SEV_RET_ACTIVE,
-	SEV_RET_HWSEV_RET_PLATFORM,
-	SEV_RET_HWSEV_RET_UNSAFE,
-	SEV_RET_UNSUPPORTED,
-	SEV_RET_INVALID_PARAM,
-	SEV_RET_RESOURCE_LIMIT,
-	SEV_RET_SECURE_DATA_INVALID,
+	SEV_RET_INVALID_LEN = 0x0004,
+	SEV_RET_ALREADY_OWNED = 0x0005,
+	SEV_RET_INVALID_CERTIFICATE = 0x0006,
+	SEV_RET_POLICY_FAILURE = 0x0007,
+	SEV_RET_INACTIVE = 0x0008,
+	SEV_RET_INVALID_ADDRESS = 0x0009,
+	SEV_RET_BAD_SIGNATURE = 0x000A,
+	SEV_RET_BAD_MEASUREMENT = 0x000B,
+	SEV_RET_ASID_OWNED = 0x000C,
+	SEV_RET_INVALID_ASID = 0x000D,
+	SEV_RET_WBINVD_REQUIRED = 0x000E,
+	SEV_RET_DFFLUSH_REQUIRED = 0x000F,
+	SEV_RET_INVALID_GUEST = 0x0010,
+	SEV_RET_INVALID_COMMAND = 0x0011,
+	SEV_RET_ACTIVE = 0x0012,
+	SEV_RET_HWSEV_RET_PLATFORM = 0x0013,
+	SEV_RET_HWSEV_RET_UNSAFE = 0x0014,
+	SEV_RET_UNSUPPORTED = 0x0015,
+	SEV_RET_INVALID_PARAM = 0x0016,
+	SEV_RET_RESOURCE_LIMIT = 0x0017,
+	SEV_RET_SECURE_DATA_INVALID = 0x0018,
 	SEV_RET_INVALID_PAGE_SIZE = 0x0019,
 	SEV_RET_INVALID_PAGE_STATE = 0x001A,
 	SEV_RET_INVALID_MDATA_ENTRY = 0x001B,
···
 	SEV_RET_RESTORE_REQUIRED = 0x0025,
 	SEV_RET_RMP_INITIALIZATION_FAILED = 0x0026,
 	SEV_RET_INVALID_KEY = 0x0027,
+	SEV_RET_SHUTDOWN_INCOMPLETE = 0x0028,
+	SEV_RET_INCORRECT_BUFFER_LENGTH = 0x0030,
+	SEV_RET_EXPAND_BUFFER_LENGTH_REQUEST = 0x0031,
+	SEV_RET_SPDM_REQUEST = 0x0032,
+	SEV_RET_SPDM_ERROR = 0x0033,
+	SEV_RET_SEV_STATUS_ERR_IN_DEV_CONN = 0x0035,
+	SEV_RET_SEV_STATUS_INVALID_DEV_CTX = 0x0036,
+	SEV_RET_SEV_STATUS_INVALID_TDI_CTX = 0x0037,
+	SEV_RET_SEV_STATUS_INVALID_TDI = 0x0038,
+	SEV_RET_SEV_STATUS_RECLAIM_REQUIRED = 0x0039,
+	SEV_RET_IN_USE = 0x003A,
+	SEV_RET_SEV_STATUS_INVALID_DEV_STATE = 0x003B,
+	SEV_RET_SEV_STATUS_INVALID_TDI_STATE = 0x003C,
+	SEV_RET_SEV_STATUS_DEV_CERT_CHANGED = 0x003D,
+	SEV_RET_SEV_STATUS_RESYNC_REQ = 0x003E,
+	SEV_RET_SEV_STATUS_RESPONSE_TOO_LARGE = 0x003F,
 	SEV_RET_MAX,
 } sev_ret_code;