Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'pci-v7.1-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci

Pull pci updates from Bjorn Helgaas:
"Enumeration:

- Allow TLP Processing Hints to be enabled for RCiEPs (George Abraham
P)

- Enable AtomicOps only if we know the Root Port supports them (Gerd
Bayer)

- Don't enable AtomicOps for RCiEPs since none of them need Atomic
Ops and we can't tell whether the Root Complex would support them
(Gerd Bayer)

- Leave Precision Time Measurement disabled until a driver enables it
to avoid PCIe errors (Mika Westerberg)

- Make pci_set_vga_state() fail if bridge doesn't support VGA
routing, i.e., PCI_BRIDGE_CTL_VGA is not writable, and return
errors to vga_get() callers including userspace via
/dev/vga_arbiter (Simon Richter)

- Validate max-link-speed from DT in j721e, brcmstb, mediatek-gen3,
rzg3s drivers (where the actual controller constraints are known),
and remove validation from the generic OF DT accessor (Hans Zhang)

- Remove pc110pad driver (no longer useful after 486 CPU support
removed) and no_pci_devices() (pc110pad was the last user) (Dmitry
Torokhov, Heiner Kallweit)

Resource management:

- Prevent assigning space to unimplemented bridge windows; previously
we mistakenly assumed prefetchable window existed and assigned
space and put a BAR there (Ahmed Naseef)

- Avoid shrinking bridge windows to fit in the initial Root Port
window; fixes one problem with devices with large BARs connected
via switches, e.g., Thunderbolt (Ilpo Järvinen)

- Pass full extent of empty space, not just the aligned space, to
resource_alignf callback so free space before the requested
alignment can be used (Ilpo Järvinen)

- Place small resources before larger ones for better utilization of
address space (Ilpo Järvinen)

- Fix alignment calculation for resource size larger than align,
e.g., bridge windows larger than the 1MB required alignment (Ilpo
Järvinen)

Reset:

- Update slot handling so all ARI functions are treated as being in
the same slot. They're all reset by Secondary Bus Reset, but
previously drivers of ARI functions that appeared to be on a
non-zero device weren't notified and fatal hardware errors could
result (Keith Busch)

- Make sysfs reset_subordinate hotplug safe to avoid spurious hotplug
events (Keith Busch)

- Hide Secondary Bus Reset ('bus') from sysfs reset_methods if masked
by CXL because it has no effect (Vidya Sagar)

- Avoid FLR for AMD NPU device, where it causes the device to hang
(Lizhi Hou)

Error handling:

- Clear only error bits in PCIe Device Status to avoid accidentally
clearing Emergency Power Reduction Detected (Shuai Xue)

- Check for AER errors even in devices without drivers (Lukas Wunner)

- Initialize ratelimit info so DPC and EDR paths log AER error
information (Kuppuswamy Sathyanarayanan)

Power control:

- Add UPD720201/UPD720202 USB 3.0 xHCI Host Controller .compatible so
generic pwrctrl driver can control it (Neil Armstrong)

Hotplug:

- Set LED_HW_PLUGGABLE for NPEM hotplug-capable ports so LED core
doesn't complain when setting brightness fails because the endpoint
is gone (Richard Cheng)

Peer-to-peer DMA:

- Allow wildcards in list of host bridges that support peer-to-peer
DMA between hierarchy domains and add all Google SoCs (Jacob
Moroni)

Endpoint framework:

- Advertise dynamic inbound mapping support in pci-epf-test and
update host pci_endpoint_test to skip doorbell testing if not
advertised by endpoint (Koichiro Den)

- Return 0, not remaining timeout, when MHI eDMA ops complete so
mhi_ep_ring_add_element() doesn't interpret non-zero as failure
(Daniel Hodges)

- Remove vntb and ntb duplicate resource teardown that leads to oops
when .allow_link() fails or .drop_link() is called (Koichiro Den)

- Disable vntb delayed work before clearing BAR mappings and
doorbells to avoid oops caused by doing the work after resources
have been torn down (Koichiro Den)

- Add a way to describe reserved subregions within BARs, e.g.,
platform-owned fixed register windows, and use it for the RK3588
BAR4 DMA ctrl window (Koichiro Den)

- Add BAR_DISABLED for BARs that will never be available to an EPF
driver, and change some BAR_RESERVED annotations to BAR_DISABLED
(Niklas Cassel)

- Add NTB .get_dma_dev() callback for cases where DMA API requires a
different device, e.g., vNTB devices (Koichiro Den)

- Add reserved region types for MSI-X Table and PBA so Endpoint
controllers can describe them as hardware-owned regions in a
BAR_RESERVED BAR (Manikanta Maddireddy)

- Make Tegra194/234 BAR0 programmable and remove 1MB size limit
(Manikanta Maddireddy)

- Expose Tegra BAR2 (MSI-X) and BAR4 (DMA) as 64-bit BAR_RESERVED
(Manikanta Maddireddy)

- Add Tegra194 and Tegra234 device table entries to pci_endpoint_test
(Manikanta Maddireddy)

- Skip the BAR subrange selftest if there are not enough inbound
window resources to run the test (Christian Bruel)

New native PCIe controller drivers:

- Add DT binding and driver for Andes QiLai SoC PCIe host controller
(Randolph Lin)

- Add DT binding and driver for ESWIN PCIe Root Complex (Senchuan
Zhang)

Baikal T-1 PCIe controller driver:

- Remove driver since it never quite became usable (Andy Shevchenko)

Cadence PCIe controller driver:

- Implement byte/word config reads with dword (32-bit) reads because
some Cadence controllers don't support sub-dword accesses (Aksh
Garg)

CIX Sky1 PCIe controller driver:

- Add 'power-domains' to DT binding for SCMI power domain (Gary Yang)

Freescale i.MX6 PCIe controller driver:

- Add i.MX94 and i.MX943 to fsl,imx6q-pcie-ep DT binding (Richard
Zhu)

- Delay instead of polling for L2/L3 Ready after PME_Turn_off when
suspending i.MX6SX because LTSSM registers are inaccessible
(Richard Zhu)

- Separate PERST# assertion (for resetting endpoints) from core reset
(for resetting the RC itself) to prepare for new DTs with PERST#
GPIO in per-Root Port nodes (Sherry Sun)

- Retain Root Port MSI capability on i.MX7D, i.MX8MM, and i.MX8MQ so
MSI from downstream devices will work (Richard Zhu)

- Fix i.MX95 reference clock source selection when internal refclk is
used (Franz Schnyder)

Freescale Layerscape PCIe controller driver:

- Allow building as a removable module (Sascha Hauer)

MediaTek PCIe Gen3 controller driver:

- Use dev_err_probe() to simplify error paths and make deferred probe
messages visible in /sys/kernel/debug/devices_deferred (Chen-Yu
Tsai)

- Power off device if setup fails (Chen-Yu Tsai)

- Integrate new pwrctrl API to enable power control for WiFi/BT
adapters on mainboard or in PCIe or M.2 slots (Chen-Yu Tsai)

NVIDIA Tegra194 PCIe controller driver:

- Poll less aggressively and non-atomically for PME_TO_Ack during
transition to L2 (Vidya Sagar)

- Disable LTSSM after transition to Detect on surprise link down to
stop toggling between Polling and Detect (Manikanta Maddireddy)

- Don't force the device into the D0 state before L2 when suspending
or shutting down the controller (Vidya Sagar)

- Disable PERST# IRQ only in Endpoint mode because it's not
registered in Root Port mode (Manikanta Maddireddy)

- Handle 'nvidia,refclk-select' as optional (Vidya Sagar)

- Disable direct speed change in Endpoint mode so link speed change
is controlled by the host (Vidya Sagar)

- Set LTR values before link up to avoid bogus LTR messages with 0
latency (Vidya Sagar)

- Allow system suspend when the Endpoint link is down (Vidya Sagar)

- Use DWC IP core version, not Tegra custom values, to avoid DWC core
version check warnings (Manikanta Maddireddy)

- Apply ECRC workaround to devices based on DesignWare 5.00a as well
as 4.90a (Manikanta Maddireddy)

- Disable PM Substate L1.2 in Endpoint mode to work around Tegra234
erratum (Vidya Sagar)

- Delay post-PERST# cleanup until core is powered on to avoid CBB
timeout (Manikanta Maddireddy)

- Assert CLKREQ# so switches that forward it to their downstream side
can bring up those links successfully (Vidya Sagar)

- Calibrate pipe to UPHY for Endpoint mode to reset stale PLL state
from any previous bad link state (Vidya Sagar)

- Remove IRQF_ONESHOT flag from Endpoint interrupt registration so
DMA driver and Endpoint controller driver can share the interrupt
line (Vidya Sagar)

- Enable DMA interrupt to support DMA in both Root Port and Endpoint
modes (Vidya Sagar)

- Enable hardware link retraining after link goes down in Endpoint
mode (Vidya Sagar)

- Add DT binding and driver support for core clock monitoring (Vidya
Sagar)

Qualcomm PCIe controller driver:

- Advertise 'Hot-Plug Capable' and set 'No Command Completed Support'
since Qcom Root Ports support hotplug events like DL_Up/Down and
can accept writes to Slot Control without delays between writes
(Krishna Chaitanya Chundru)

Renesas R-Car PCIe controller driver:

- Mark Endpoint BAR0 and BAR2 as Resizable (Koichiro Den)

- Reduce EPC BAR alignment requirement to 4K (Koichiro Den)

Renesas RZ/G3S PCIe controller driver:

- Add RZ/G3E to DT binding and to driver (John Madieu)

- Assert (not deassert) resets in probe error path (John Madieu)

- Assert resets in the suspend path in the reverse of the order they
were deasserted during probe (John Madieu)
- Assert resets in the suspend path in the reverse of the order they

- Rework inbound window algorithm to prevent mapping more than
intended region and enforce alignment on size, to prepare for
RZ/G3E support (John Madieu)

Rockchip DesignWare PCIe controller driver:

- Add tracepoints for PCIe controller LTSSM transitions and link rate
changes (Shawn Lin)

- Trace LTSSM events collected by the dw-rockchip debug FIFO (Shawn
Lin)

SOPHGO PCIe controller driver:

- Disable ASPM L0s and L1 on Sophgo 2042 PCIe Root Ports that
advertise support for them (Yao Zi)

Synopsys DesignWare PCIe controller driver:

- Continue with system suspend even if an Endpoint doesn't respond
with PME_TO_Ack message (Manivannan Sadhasivam)

- Set Endpoint MSI-X Table Size in the correct function of a
multi-function device when configuring MSI-X, not in Function 0
(Aksh Garg)

- Set Max Link Width and Max Link Speed for all functions of a
multi-function device, not just Function 0 (Aksh Garg)

- Expose PCIe event counters in groups 5-7 in debugfs (Hans Zhang)

Miscellaneous:

- Warn only once about invalid ACS kernel parameter format (Richard
Cheng)

- Suppress FW_BUG warning when writing sysfs 'numa_node' with the
current value (Li RongQing)

- Drop redundant 'depends on PCI' from Kconfig (Julian Braha)"

* tag 'pci-v7.1-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci: (165 commits)
PCI/P2PDMA: Add Google SoCs to the P2P DMA host bridge list
PCI/P2PDMA: Allow wildcard Device IDs in host bridge list
PCI: sg2042: Avoid L0s and L1 on Sophgo 2042 PCIe Root Ports
PCI: cadence: Add flags for disabling ASPM capability for broken Root Ports
PCI: tegra194: Add core monitor clock support
dt-bindings: PCI: tegra194: Add monitor clock support
PCI: tegra194: Enable hardware hot reset mode in Endpoint mode
PCI: tegra194: Enable DMA interrupt
PCI: tegra194: Remove IRQF_ONESHOT flag during Endpoint interrupt registration
PCI: tegra194: Calibrate pipe to UPHY for Endpoint mode
PCI: tegra194: Assert CLKREQ# explicitly by default
PCI: tegra194: Fix CBB timeout caused by DBI access before core power-on
PCI: tegra194: Disable L1.2 capability of Tegra234 EP
PCI: dwc: Apply ECRC workaround to DesignWare 5.00a as well
PCI: tegra194: Use DWC IP core version
PCI: tegra194: Free up Endpoint resources during remove()
PCI: tegra194: Allow system suspend when the Endpoint link is not up
PCI: tegra194: Set LTR message request before PCIe link up in Endpoint mode
PCI: tegra194: Disable direct speed change for Endpoint mode
PCI: tegra194: Use devm_gpiod_get_optional() to parse "nvidia,refclk-select"
...

+2981 -2034
+5 -2
Documentation/PCI/msi-howto.rst
···
 
   int pci_irq_vector(struct pci_dev *dev, unsigned int nr);
 
-Any allocated resources should be freed before removing the device using
-the following function::
+If the driver enables the device using pcim_enable_device(), the driver
+shouldn't call pci_free_irq_vectors() because pcim_enable_device()
+activates automatic management for IRQ vectors. Otherwise, the driver should
+free any allocated IRQ vectors before removing the device using the following
+function::
 
   void pci_free_irq_vectors(struct pci_dev *dev);
 
+89
Documentation/devicetree/bindings/pci/andestech,qilai-pcie.yaml
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/andestech,qilai-pcie.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Andes QiLai PCIe host controller

description:
  Andes QiLai PCIe host controller is based on the Synopsys DesignWare
  PCI core.

maintainers:
  - Randolph Lin <randolph@andestech.com>

allOf:
  - $ref: /schemas/pci/snps,dw-pcie.yaml#

properties:
  compatible:
    const: andestech,qilai-pcie

  reg:
    items:
      - description: Data Bus Interface (DBI) registers.
      - description: APB registers.
      - description: PCIe configuration space region.

  reg-names:
    items:
      - const: dbi
      - const: apb
      - const: config

  dma-coherent: true

  ranges:
    maxItems: 2

  interrupts:
    maxItems: 1

  interrupt-names:
    items:
      - const: msi

required:
  - reg
  - reg-names
  - interrupts
  - interrupt-names

unevaluatedProperties: false

examples:
  - |
    #include <dt-bindings/interrupt-controller/irq.h>

    soc {
        #address-cells = <2>;
        #size-cells = <2>;

        pcie@80000000 {
            compatible = "andestech,qilai-pcie";
            device_type = "pci";
            reg = <0x0 0x80000000 0x0 0x20000000>,
                  <0x0 0x04000000 0x0 0x00001000>,
                  <0x0 0x00000000 0x0 0x00010000>;
            reg-names = "dbi", "apb", "config";
            dma-coherent;

            linux,pci-domain = <0>;
            #address-cells = <3>;
            #size-cells = <2>;
            ranges = <0x02000000 0x00 0x10000000 0x00 0x10000000 0x00 0xf0000000>,
                     <0x43000000 0x01 0x00000000 0x01 0x00000000 0x02 0x00000000>;

            #interrupt-cells = <1>;
            interrupts = <0xf>;
            interrupt-names = "msi";
            interrupt-parent = <&plic0>;
            interrupt-map-mask = <0 0 0 0>;
            interrupt-map = <0 0 0 1 &plic0 0xf IRQ_TYPE_LEVEL_HIGH>,
                            <0 0 0 2 &plic0 0xf IRQ_TYPE_LEVEL_HIGH>,
                            <0 0 0 3 &plic0 0xf IRQ_TYPE_LEVEL_HIGH>,
                            <0 0 0 4 &plic0 0xf IRQ_TYPE_LEVEL_HIGH>;
        };
    };
...
-168
Documentation/devicetree/bindings/pci/baikal,bt1-pcie.yaml
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/baikal,bt1-pcie.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Baikal-T1 PCIe Root Port Controller

maintainers:
  - Serge Semin <fancer.lancer@gmail.com>

description:
  Embedded into Baikal-T1 SoC Root Complex controller with a single port
  activated. It's based on the DWC RC PCIe v4.60a IP-core, which is configured
  to have just a single Root Port function and is capable of establishing the
  link up to Gen.3 speed on x4 lanes. It doesn't have embedded clock and reset
  control module, so the proper interface initialization is supposed to be
  performed by software. There four in- and four outbound iATU regions
  which can be used to emit all required TLP types on the PCIe bus.

allOf:
  - $ref: /schemas/pci/snps,dw-pcie.yaml#

properties:
  compatible:
    const: baikal,bt1-pcie

  reg:
    description:
      DBI, DBI2 and at least 4KB outbound iATU-capable region for the
      peripheral devices CFG-space access.
    maxItems: 3

  reg-names:
    items:
      - const: dbi
      - const: dbi2
      - const: config

  interrupts:
    description:
      MSI, AER, PME, Hot-plug, Link Bandwidth Management, Link Equalization
      request and eight Read/Write eDMA IRQ lines are available.
    maxItems: 14

  interrupt-names:
    items:
      - const: dma0
      - const: dma1
      - const: dma2
      - const: dma3
      - const: dma4
      - const: dma5
      - const: dma6
      - const: dma7
      - const: msi
      - const: aer
      - const: pme
      - const: hp
      - const: bw_mg
      - const: l_eq

  clocks:
    description:
      DBI (attached to the APB bus), AXI-bus master and slave interfaces
      are fed up by the dedicated application clocks. A common reference
      clock signal is supposed to be attached to the corresponding Ref-pad
      of the SoC. It will be redistributed amongst the controller core
      sub-modules (pipe, core, aux, etc).
    maxItems: 4

  clock-names:
    items:
      - const: dbi
      - const: mstr
      - const: slv
      - const: ref

  resets:
    description:
      A comprehensive controller reset logic is supposed to be implemented
      by software, so almost all the possible application and core reset
      signals are exposed via the system CCU module.
    maxItems: 9

  reset-names:
    items:
      - const: mstr
      - const: slv
      - const: pwr
      - const: hot
      - const: phy
      - const: core
      - const: pipe
      - const: sticky
      - const: non-sticky

  baikal,bt1-syscon:
    $ref: /schemas/types.yaml#/definitions/phandle
    description:
      Phandle to the Baikal-T1 System Controller DT node. It's required to
      access some additional PM, Reset-related and LTSSM signals.

  num-lanes:
    maximum: 4

  max-link-speed:
    maximum: 3

required:
  - compatible
  - reg
  - reg-names
  - interrupts
  - interrupt-names

unevaluatedProperties: false

examples:
  - |
    #include <dt-bindings/interrupt-controller/mips-gic.h>
    #include <dt-bindings/gpio/gpio.h>

    pcie@1f052000 {
        compatible = "baikal,bt1-pcie";
        device_type = "pci";
        reg = <0x1f052000 0x1000>, <0x1f053000 0x1000>, <0x1bdbf000 0x1000>;
        reg-names = "dbi", "dbi2", "config";
        #address-cells = <3>;
        #size-cells = <2>;
        ranges = <0x81000000 0 0x00000000 0x1bdb0000 0 0x00008000>,
                 <0x82000000 0 0x20000000 0x08000000 0 0x13db0000>;
        bus-range = <0x0 0xff>;

        interrupts = <GIC_SHARED 80 IRQ_TYPE_LEVEL_HIGH>,
                     <GIC_SHARED 81 IRQ_TYPE_LEVEL_HIGH>,
                     <GIC_SHARED 82 IRQ_TYPE_LEVEL_HIGH>,
                     <GIC_SHARED 83 IRQ_TYPE_LEVEL_HIGH>,
                     <GIC_SHARED 84 IRQ_TYPE_LEVEL_HIGH>,
                     <GIC_SHARED 85 IRQ_TYPE_LEVEL_HIGH>,
                     <GIC_SHARED 86 IRQ_TYPE_LEVEL_HIGH>,
                     <GIC_SHARED 87 IRQ_TYPE_LEVEL_HIGH>,
                     <GIC_SHARED 88 IRQ_TYPE_LEVEL_HIGH>,
                     <GIC_SHARED 89 IRQ_TYPE_LEVEL_HIGH>,
                     <GIC_SHARED 90 IRQ_TYPE_LEVEL_HIGH>,
                     <GIC_SHARED 91 IRQ_TYPE_LEVEL_HIGH>,
                     <GIC_SHARED 92 IRQ_TYPE_LEVEL_HIGH>,
                     <GIC_SHARED 93 IRQ_TYPE_LEVEL_HIGH>;
        interrupt-names = "dma0", "dma1", "dma2", "dma3",
                          "dma4", "dma5", "dma6", "dma7",
                          "msi", "aer", "pme", "hp", "bw_mg",
                          "l_eq";

        clocks = <&ccu_sys 1>, <&ccu_axi 6>, <&ccu_axi 7>, <&clk_pcie>;
        clock-names = "dbi", "mstr", "slv", "ref";

        resets = <&ccu_axi 6>, <&ccu_axi 7>, <&ccu_sys 7>, <&ccu_sys 10>,
                 <&ccu_sys 4>, <&ccu_sys 6>, <&ccu_sys 5>, <&ccu_sys 8>,
                 <&ccu_sys 9>;
        reset-names = "mstr", "slv", "pwr", "hot", "phy", "core", "pipe",
                      "sticky", "non-sticky";

        reset-gpios = <&port0 0 GPIO_ACTIVE_LOW>;

        num-lanes = <4>;
        max-link-speed = <3>;
    };
...
+3
Documentation/devicetree/bindings/pci/cix,sky1-pcie-host.yaml
···
   ranges:
     maxItems: 3
 
+  power-domains:
+    maxItems: 1
+
 required:
   - compatible
   - ranges
+166
Documentation/devicetree/bindings/pci/eswin,pcie.yaml
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/pci/eswin,pcie.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: ESWIN PCIe Root Complex

maintainers:
  - Yu Ning <ningyu@eswincomputing.com>
  - Senchuan Zhang <zhangsenchuan@eswincomputing.com>
  - Yanghui Ou <ouyanghui@eswincomputing.com>

description:
  ESWIN SoCs PCIe Root Complex is based on the Synopsys DesignWare PCIe IP.

properties:
  compatible:
    const: eswin,eic7700-pcie

  reg:
    maxItems: 3

  reg-names:
    items:
      - const: dbi
      - const: config
      - const: elbi

  ranges:
    maxItems: 3

  '#interrupt-cells':
    const: 1

  interrupt-names:
    items:
      - const: msi
      - const: inta
      - const: intb
      - const: intc
      - const: intd

  interrupt-map:
    maxItems: 4

  interrupt-map-mask:
    items:
      - const: 0
      - const: 0
      - const: 0
      - const: 7

  clocks:
    maxItems: 4

  clock-names:
    items:
      - const: mstr
      - const: dbi
      - const: phy_reg
      - const: aux

  resets:
    maxItems: 2

  reset-names:
    items:
      - const: dbi
      - const: pwr

patternProperties:
  "^pcie@":
    type: object
    $ref: /schemas/pci/pci-pci-bridge.yaml#

    properties:
      reg:
        maxItems: 1

      num-lanes:
        maximum: 4

      resets:
        maxItems: 1

      reset-names:
        items:
          - const: perst

    required:
      - reg
      - ranges
      - num-lanes
      - resets
      - reset-names

    unevaluatedProperties: false

required:
  - compatible
  - reg
  - ranges
  - interrupts
  - interrupt-names
  - interrupt-map-mask
  - interrupt-map
  - '#interrupt-cells'
  - clocks
  - clock-names
  - resets
  - reset-names

allOf:
  - $ref: /schemas/pci/snps,dw-pcie.yaml#

unevaluatedProperties: false

examples:
  - |
    soc {
        #address-cells = <2>;
        #size-cells = <2>;

        pcie@54000000 {
            compatible = "eswin,eic7700-pcie";
            reg = <0x0 0x54000000 0x0 0x4000000>,
                  <0x0 0x40000000 0x0 0x800000>,
                  <0x0 0x50000000 0x0 0x100000>;
            reg-names = "dbi", "config", "elbi";
            #address-cells = <3>;
            #size-cells = <2>;
            #interrupt-cells = <1>;
            ranges = <0x01000000 0x0 0x40800000 0x0 0x40800000 0x0 0x800000>,
                     <0x02000000 0x0 0x41000000 0x0 0x41000000 0x0 0xf000000>,
                     <0x43000000 0x80 0x00000000 0x80 0x00000000 0x2 0x00000000>;
            bus-range = <0x00 0xff>;
            clocks = <&clock 144>,
                     <&clock 145>,
                     <&clock 146>,
                     <&clock 147>;
            clock-names = "mstr", "dbi", "phy_reg", "aux";
            resets = <&reset 97>,
                     <&reset 98>;
            reset-names = "dbi", "pwr";
            interrupts = <220>, <179>, <180>, <181>, <182>, <183>, <184>, <185>, <186>;
            interrupt-names = "msi", "inta", "intb", "intc", "intd";
            interrupt-parent = <&plic>;
            interrupt-map-mask = <0x0 0x0 0x0 0x7>;
            interrupt-map = <0x0 0x0 0x0 0x1 &plic 179>,
                            <0x0 0x0 0x0 0x2 &plic 180>,
                            <0x0 0x0 0x0 0x3 &plic 181>,
                            <0x0 0x0 0x0 0x4 &plic 182>;
            device_type = "pci";
            pcie@0 {
                reg = <0x0 0x0 0x0 0x0 0x0>;
                #address-cells = <3>;
                #size-cells = <2>;
                ranges;
                device_type = "pci";
                num-lanes = <4>;
                resets = <&reset 99>;
                reset-names = "perst";
            };
        };
    };
+2 -2
Documentation/devicetree/bindings/pci/fsl,imx6q-pcie-common.yaml
···
 properties:
   clocks:
     minItems: 3
-    maxItems: 5
+    maxItems: 6
 
   clock-names:
     minItems: 3
-    maxItems: 5
+    maxItems: 6
 
   num-lanes:
     const: 1
+12 -6
Documentation/devicetree/bindings/pci/fsl,imx6q-pcie-ep.yaml
···
 
 properties:
   compatible:
-    enum:
-      - fsl,imx8mm-pcie-ep
-      - fsl,imx8mq-pcie-ep
-      - fsl,imx8mp-pcie-ep
-      - fsl,imx8q-pcie-ep
-      - fsl,imx95-pcie-ep
+    oneOf:
+      - enum:
+          - fsl,imx8mm-pcie-ep
+          - fsl,imx8mp-pcie-ep
+          - fsl,imx8mq-pcie-ep
+          - fsl,imx8q-pcie-ep
+          - fsl,imx95-pcie-ep
+      - items:
+          - enum:
+              - fsl,imx94-pcie-ep
+              - fsl,imx943-pcie-ep
+          - const: fsl,imx95-pcie-ep
 
   clocks:
     minItems: 3
+18 -11
Documentation/devicetree/bindings/pci/fsl,imx6q-pcie.yaml
···
 
 properties:
   compatible:
-    enum:
-      - fsl,imx6q-pcie
-      - fsl,imx6sx-pcie
-      - fsl,imx6qp-pcie
-      - fsl,imx7d-pcie
-      - fsl,imx8mq-pcie
-      - fsl,imx8mm-pcie
-      - fsl,imx8mp-pcie
-      - fsl,imx95-pcie
-      - fsl,imx8q-pcie
+    oneOf:
+      - enum:
+          - fsl,imx6q-pcie
+          - fsl,imx6qp-pcie
+          - fsl,imx6sx-pcie
+          - fsl,imx7d-pcie
+          - fsl,imx8mm-pcie
+          - fsl,imx8mp-pcie
+          - fsl,imx8mq-pcie
+          - fsl,imx8q-pcie
+          - fsl,imx95-pcie
+      - items:
+          - enum:
+              - fsl,imx94-pcie
+              - fsl,imx943-pcie
+          - const: fsl,imx95-pcie
 
   clocks:
     minItems: 3
···
       - description: PCIe PHY clock.
       - description: Additional required clock entry for imx6sx-pcie,
           imx6sx-pcie-ep, imx8mq-pcie, imx8mq-pcie-ep.
-      - description: PCIe reference clock.
+      - description: PCIe internal reference clock.
+      - description: PCIe additional external reference clock.
 
   clock-names:
     minItems: 3
+5 -1
Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie-ep.yaml
···
       - const: intr
 
   clocks:
+    minItems: 1
     items:
-      - description: module clock
+      - description: core clock
+      - description: monitor clock
 
   clock-names:
+    minItems: 1
     items:
       - const: core
+      - const: core_m
 
   resets:
     items:
+5 -1
Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.yaml
···
       - const: msi
 
   clocks:
+    minItems: 1
     items:
-      - description: module clock
+      - description: core clock
+      - description: monitor clock
 
   clock-names:
+    minItems: 1
     items:
       - const: core
+      - const: core_m
 
   resets:
     items:
+91 -30
Documentation/devicetree/bindings/pci/renesas,r9a08g045-pcie.yaml
···
   - Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
 
 description:
-  Renesas RZ/G3S PCIe host controller complies with PCIe Base Specification
-  4.0 and supports up to 5 GT/s (Gen2).
+  Renesas RZ/G3{E,S} PCIe host controllers comply with PCIe
+  Base Specification 4.0 and support up to 5 GT/s (Gen2) for RZ/G3S and
+  up to 8 GT/s (Gen3) for RZ/G3E.
 
 properties:
   compatible:
-    const: renesas,r9a08g045-pcie # RZ/G3S
+    enum:
+      - renesas,r9a08g045-pcie # RZ/G3S
+      - renesas,r9a09g047-pcie # RZ/G3E
 
   reg:
     maxItems: 1
 
   interrupts:
+    minItems: 16
     items:
       - description: System error interrupt
       - description: System error on correctable error interrupt
···
       - description: PCIe event interrupt
       - description: Message interrupt
       - description: All interrupts
+      - description: Link equalization request interrupt
+      - description: Turn off event interrupt
+      - description: PMU power off interrupt
+      - description: D3 event function 0 interrupt
+      - description: D3 event function 1 interrupt
+      - description: Configuration PMCSR write clear function 0 interrupt
+      - description: Configuration PMCSR write clear function 1 interrupt
 
   interrupt-names:
+    minItems: 16
     items:
-      - description: serr
-      - description: ser_cor
-      - description: serr_nonfatal
-      - description: serr_fatal
-      - description: axi_err
-      - description: inta
-      - description: intb
-      - description: intc
-      - description: intd
-      - description: msi
-      - description: link_bandwidth
-      - description: pm_pme
-      - description: dma
-      - description: pcie_evt
-      - description: msg
-      - description: all
+      - const: serr
+      - const: serr_cor
+      - const: serr_nonfatal
+      - const: serr_fatal
+      - const: axi_err
+      - const: inta
+      - const: intb
+      - const: intc
+      - const: intd
+      - const: msi
+      - const: link_bandwidth
+      - const: pm_pme
+      - const: dma
+      - const: pcie_evt
+      - const: msg
+      - const: all
+      - const: link_equalization_request
+      - const: turn_off_event
+      - const: pmu_poweroff
+      - const: d3_event_f0
+      - const: d3_event_f1
+      - const: cfg_pmcsr_writeclear_f0
+      - const: cfg_pmcsr_writeclear_f1
 
   interrupt-controller: true
 
   clocks:
     items:
       - description: System clock
-      - description: PM control clock
+      - description: PM control clock or clock for L1 substate handling
 
   clock-names:
     items:
-      - description: aclk
-      - description: pm
+      - const: aclk
+      - enum: [pm, pmu]
 
   resets:
+    minItems: 1
     items:
       - description: AXI2PCIe Bridge reset
       - description: Data link layer/transaction layer reset
···
       - description: Configuration register reset
 
   reset-names:
+    minItems: 1
     items:
-      - description: aresetn
-      - description: rst_b
-      - description: rst_gp_b
-      - description: rst_ps_b
-      - description: rst_rsm_b
-      - description: rst_cfg_b
-      - description: rst_load_b
+      - const: aresetn
+      - const: rst_b
+      - const: rst_gp_b
+      - const: rst_ps_b
+      - const: rst_rsm_b
+      - const: rst_cfg_b
+      - const: rst_load_b
 
   power-domains:
     maxItems: 1
···
         const: 0x1912
 
       device-id:
-        const: 0x0033
+        enum:
+          - 0x0033
+          - 0x0039
 
       clocks:
         items:
···
 
 allOf:
   - $ref: /schemas/pci/pci-host-bridge.yaml#
+  - if:
+      properties:
+        compatible:
+          contains:
+            const: renesas,r9a08g045-pcie
+    then:
+      properties:
+        interrupts:
+          maxItems: 16
+        interrupt-names:
+          maxItems: 16
+        clock-names:
+          items:
+            - const: aclk
+            - const: pm
+        resets:
+          minItems: 7
+        reset-names:
+          minItems: 7
+  - if:
+      properties:
+        compatible:
+          contains:
+            const: renesas,r9a09g047-pcie
+    then:
+      properties:
+        interrupts:
+          minItems: 23
+        interrupt-names:
+          minItems: 23
+        clock-names:
+          items:
+            - const: aclk
+            - const: pmu
+        resets:
+          maxItems: 1
+        reset-names:
+          maxItems: 1
 
 unevaluatedProperties: false
 
+42
Documentation/trace/events-pci-controller.rst
.. SPDX-License-Identifier: GPL-2.0

======================================
Subsystem Trace Points: PCI Controller
======================================

Overview
========
The PCI controller tracing system provides tracepoints to monitor controller
level information for debugging purpose. The events normally show up here:

	/sys/kernel/tracing/events/pci_controller

Cf. include/trace/events/pci_controller.h for the events definitions.

Available Tracepoints
=====================

pcie_ltssm_state_transition
---------------------------

Monitors PCIe LTSSM state transition including state and rate information
::

	pcie_ltssm_state_transition "dev: %s state: %s rate: %s\n"

**Parameters**:

* ``dev`` - PCIe controller instance
* ``state`` - PCIe LTSSM state
* ``rate`` - PCIe date rate

**Example Usage**:

.. code-block:: shell

	# Enable the tracepoint
	echo 1 > /sys/kernel/debug/tracing/events/pci_controller/pcie_ltssm_state_transition/enable

	# Monitor events (the following output is generated when a device is linking)
	cat /sys/kernel/debug/tracing/trace_pipe
	kworker/0:0-9 [000] ..... 5.600221: pcie_ltssm_state_transition: dev: a40000000.pcie state: RCVRY_EQ2 rate: 8.0 GT/s
+1
Documentation/trace/index.rst
··· 55 55 events-nmi 56 56 events-msr 57 57 events-pci 58 + events-pci-controller 58 59 boottime-trace 59 60 histogram 60 61 histogram-design
+16
MAINTAINERS
··· 20259 20259 F: Documentation/devicetree/bindings/pci/altr,pcie-root-port.yaml 20260 20260 F: drivers/pci/controller/pcie-altera.c 20261 20261 20262 + PCI DRIVER FOR ANDES QILAI PCIE 20263 + M: Randolph Lin <randolph@andestech.com> 20264 + L: linux-pci@vger.kernel.org 20265 + S: Maintained 20266 + F: Documentation/devicetree/bindings/pci/andestech,qilai-pcie.yaml 20267 + F: drivers/pci/controller/dwc/pcie-andes-qilai.c 20268 + 20262 20269 PCI DRIVER FOR APPLIEDMICRO XGENE 20263 20270 M: Toan Le <toan@os.amperecomputing.com> 20264 20271 L: linux-pci@vger.kernel.org ··· 20533 20526 T: git git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci.git 20534 20527 F: Documentation/ABI/testing/debugfs-pcie-ptm 20535 20528 F: Documentation/devicetree/bindings/pci/ 20529 + F: Documentation/trace/events-pci-controller.rst 20536 20530 F: drivers/pci/controller/ 20537 20531 F: drivers/pci/pci-bridge-emul.c 20538 20532 F: drivers/pci/pci-bridge-emul.h 20533 + F: include/trace/events/pci_controller.h 20539 20534 20540 20535 PCI PEER-TO-PEER DMA (P2PDMA) 20541 20536 M: Bjorn Helgaas <bhelgaas@google.com> ··· 20632 20623 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 20633 20624 S: Odd Fixes 20634 20625 F: drivers/pci/controller/pci-thunder-* 20626 + 20627 + PCIE DRIVER FOR ESWIN 20628 + M: Senchuan Zhang <zhangsenchuan@eswincomputing.com> 20629 + L: linux-pci@vger.kernel.org 20630 + S: Maintained 20631 + F: Documentation/devicetree/bindings/pci/eswin,pcie.yaml 20632 + F: drivers/pci/controller/dwc/pcie-eswin.c 20635 20633 20636 20634 PCIE DRIVER FOR HISILICON 20637 20635 M: Zhou Wang <wangzhou1@hisilicon.com>
+1
arch/alpha/kernel/pci.c
··· 125 125 126 126 resource_size_t 127 127 pcibios_align_resource(void *data, const struct resource *res, 128 + const struct resource *empty_res, 128 129 resource_size_t size, resource_size_t align) 129 130 { 130 131 struct pci_dev *dev = data;
+6 -3
arch/arm/kernel/bios32.c
··· 560 560 * which might be mirrored at 0x0100-0x03ff.. 561 561 */ 562 562 resource_size_t pcibios_align_resource(void *data, const struct resource *res, 563 - resource_size_t size, resource_size_t align) 563 + const struct resource *empty_res, 564 + resource_size_t size, 565 + resource_size_t align) 564 566 { 565 567 struct pci_dev *dev = data; 566 568 resource_size_t start = res->start; ··· 571 569 if (res->flags & IORESOURCE_IO && start & 0x300) 572 570 start = (start + 0x3ff) & ~0x3ff; 573 571 574 - start = (start + align - 1) & ~(align - 1); 575 - 576 572 host_bridge = pci_find_host_bridge(dev->bus); 577 573 578 574 if (host_bridge->align_resource) 579 575 return host_bridge->align_resource(dev, res, 580 576 start, size, align); 577 + 578 + if (res->flags & IORESOURCE_MEM) 579 + return pci_align_resource(dev, res, empty_res, size, align); 581 580 582 581 return start; 583 582 }
+6 -2
arch/m68k/kernel/pcibios.c
··· 27 27 * which might be mirrored at 0x0100-0x03ff.. 28 28 */ 29 29 resource_size_t pcibios_align_resource(void *data, const struct resource *res, 30 - resource_size_t size, resource_size_t align) 30 + const struct resource *empty_res, 31 + resource_size_t size, 32 + resource_size_t align) 31 33 { 34 + struct pci_dev *dev = data; 32 35 resource_size_t start = res->start; 33 36 34 37 if ((res->flags & IORESOURCE_IO) && (start & 0x300)) 35 38 start = (start + 0x3ff) & ~0x3ff; 36 39 37 - start = (start + align - 1) & ~(align - 1); 40 + if (res->flags & IORESOURCE_MEM) 41 + return pci_align_resource(dev, res, empty_res, size, align); 38 42 39 43 return start; 40 44 }
+5 -3
arch/mips/pci/pci-generic.c
··· 22 22 * which might have be mirrored at 0x0100-0x03ff.. 23 23 */ 24 24 resource_size_t pcibios_align_resource(void *data, const struct resource *res, 25 - resource_size_t size, resource_size_t align) 25 + const struct resource *empty_res, 26 + resource_size_t size, resource_size_t align) 26 27 { 27 28 struct pci_dev *dev = data; 28 29 resource_size_t start = res->start; ··· 32 31 if (res->flags & IORESOURCE_IO && start & 0x300) 33 32 start = (start + 0x3ff) & ~0x3ff; 34 33 35 - start = (start + align - 1) & ~(align - 1); 36 - 37 34 host_bridge = pci_find_host_bridge(dev->bus); 38 35 39 36 if (host_bridge->align_resource) 40 37 return host_bridge->align_resource(dev, res, 41 38 start, size, align); 39 + 40 + if (res->flags & IORESOURCE_MEM) 41 + return pci_align_resource(dev, res, empty_res, size, align); 42 42 43 43 return start; 44 44 }
+3
arch/mips/pci/pci-legacy.c
··· 52 52 */ 53 53 resource_size_t 54 54 pcibios_align_resource(void *data, const struct resource *res, 55 + const struct resource *empty_res, 55 56 resource_size_t size, resource_size_t align) 56 57 { 57 58 struct pci_dev *dev = data; ··· 70 69 if (start & 0x300) 71 70 start = (start + 0x3ff) & ~0x3ff; 72 71 } else if (res->flags & IORESOURCE_MEM) { 72 + start = pci_align_resource(dev, res, empty_res, size, align); 73 + 73 74 /* Make sure we start at our min on all hoses */ 74 75 if (start < PCIBIOS_MIN_MEM + hose->mem_resource->start) 75 76 start = PCIBIOS_MIN_MEM + hose->mem_resource->start;
+10 -7
arch/parisc/kernel/pci.c
··· 8 8 * Copyright (C) 1999-2001 Hewlett-Packard Company 9 9 * Copyright (C) 1999-2001 Grant Grundler 10 10 */ 11 + #include <linux/align.h> 11 12 #include <linux/eisa.h> 12 13 #include <linux/init.h> 13 14 #include <linux/module.h> ··· 197 196 * than res->start. 198 197 */ 199 198 resource_size_t pcibios_align_resource(void *data, const struct resource *res, 200 - resource_size_t size, resource_size_t alignment) 199 + const struct resource *empty_res, 200 + resource_size_t size, 201 + resource_size_t alignment) 201 202 { 202 - resource_size_t mask, align, start = res->start; 203 + struct pci_dev *dev = data; 204 + resource_size_t align, start = res->start; 203 205 204 206 DBG_RES("pcibios_align_resource(%s, (%p) [%lx,%lx]/%x, 0x%lx, 0x%lx)\n", 205 207 pci_name(((struct pci_dev *) data)), ··· 211 207 212 208 /* If it's not IO, then it's gotta be MEM */ 213 209 align = (res->flags & IORESOURCE_IO) ? PCIBIOS_MIN_IO : PCIBIOS_MIN_MEM; 214 - 215 - /* Align to largest of MIN or input size */ 216 - mask = max(alignment, align) - 1; 217 - start += mask; 218 - start &= ~mask; 210 + if (align > alignment) 211 + start = ALIGN(start, align); 212 + else 213 + start = pci_align_resource(dev, res, empty_res, size, alignment); 219 214 220 215 return start; 221 216 }
+5 -1
arch/powerpc/kernel/pci-common.c
··· 1132 1132 * which might have be mirrored at 0x0100-0x03ff.. 1133 1133 */ 1134 1134 resource_size_t pcibios_align_resource(void *data, const struct resource *res, 1135 - resource_size_t size, resource_size_t align) 1135 + const struct resource *empty_res, 1136 + resource_size_t size, 1137 + resource_size_t align) 1136 1138 { 1137 1139 struct pci_dev *dev = data; 1138 1140 resource_size_t start = res->start; ··· 1144 1142 return start; 1145 1143 if (start & 0x300) 1146 1144 start = (start + 0x3ff) & ~0x3ff; 1145 + } else if (res->flags & IORESOURCE_MEM) { 1146 + start = pci_align_resource(dev, res, empty_res, size, align); 1147 1147 } 1148 1148 1149 1149 return start;
+1
arch/s390/pci/pci.c
··· 266 266 } 267 267 268 268 resource_size_t pcibios_align_resource(void *data, const struct resource *res, 269 + const struct resource *empty_res, 269 270 resource_size_t size, 270 271 resource_size_t align) 271 272 {
+5 -1
arch/sh/drivers/pci/pci.c
··· 168 168 * modulo 0x400. 169 169 */ 170 170 resource_size_t pcibios_align_resource(void *data, const struct resource *res, 171 - resource_size_t size, resource_size_t align) 171 + const struct resource *empty_res, 172 + resource_size_t size, 173 + resource_size_t align) 172 174 { 173 175 struct pci_dev *dev = data; 174 176 struct pci_channel *hose = dev->sysdata; ··· 185 183 */ 186 184 if (start & 0x300) 187 185 start = (start + 0x3ff) & ~0x3ff; 186 + } else if (res->flags & IORESOURCE_MEM) { 187 + start = pci_align_resource(dev, res, empty_res, size, align); 188 188 } 189 189 190 190 return start;
+4 -1
arch/x86/pci/i386.c
··· 153 153 */ 154 154 resource_size_t 155 155 pcibios_align_resource(void *data, const struct resource *res, 156 - resource_size_t size, resource_size_t align) 156 + const struct resource *empty_res, 157 + resource_size_t size, resource_size_t align) 157 158 { 158 159 struct pci_dev *dev = data; 159 160 resource_size_t start = res->start; ··· 165 164 if (start & 0x300) 166 165 start = (start + 0x3ff) & ~0x3ff; 167 166 } else if (res->flags & IORESOURCE_MEM) { 167 + start = pci_align_resource(dev, res, empty_res, size, align); 168 + 168 169 /* The low 1MB range is reserved for ISA cards */ 169 170 if (start < BIOS_END) 170 171 start = BIOS_END;
+3
arch/xtensa/kernel/pci.c
··· 39 39 */ 40 40 resource_size_t 41 41 pcibios_align_resource(void *data, const struct resource *res, 42 + const struct resource *empty_res, 42 43 resource_size_t size, resource_size_t align) 43 44 { 44 45 struct pci_dev *dev = data; ··· 54 53 55 54 if (start & 0x300) 56 55 start = (start + 0x3ff) & ~0x3ff; 56 + } else if (res->flags & IORESOURCE_MEM) { 57 + start = pci_align_resource(dev, res, empty_res, size, align); 57 58 } 58 59 59 60 return start;
-10
drivers/input/mouse/Kconfig
··· 326 326 To compile this driver as a module, choose M here: the 327 327 module will be called logibm. 328 328 329 - config MOUSE_PC110PAD 330 - tristate "IBM PC110 touchpad" 331 - depends on ISA 332 - help 333 - Say Y if you have the IBM PC-110 micro-notebook and want its 334 - touchpad supported. 335 - 336 - To compile this driver as a module, choose M here: the 337 - module will be called pc110pad. 338 - 339 329 config MOUSE_AMIGA 340 330 tristate "Amiga mouse" 341 331 depends on AMIGA
-1
drivers/input/mouse/Makefile
··· 15 15 obj-$(CONFIG_MOUSE_INPORT) += inport.o 16 16 obj-$(CONFIG_MOUSE_LOGIBM) += logibm.o 17 17 obj-$(CONFIG_MOUSE_MAPLE) += maplemouse.o 18 - obj-$(CONFIG_MOUSE_PC110PAD) += pc110pad.o 19 18 obj-$(CONFIG_MOUSE_PS2) += psmouse.o 20 19 obj-$(CONFIG_MOUSE_RISCPC) += rpcmouse.o 21 20 obj-$(CONFIG_MOUSE_SERIAL) += sermouse.o
-160
drivers/input/mouse/pc110pad.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-or-later 2 - /* 3 - * Copyright (c) 2000-2001 Vojtech Pavlik 4 - * 5 - * Based on the work of: 6 - * Alan Cox Robin O'Leary 7 - */ 8 - 9 - /* 10 - * IBM PC110 touchpad driver for Linux 11 - */ 12 - 13 - #include <linux/module.h> 14 - #include <linux/kernel.h> 15 - #include <linux/errno.h> 16 - #include <linux/ioport.h> 17 - #include <linux/input.h> 18 - #include <linux/init.h> 19 - #include <linux/interrupt.h> 20 - #include <linux/pci.h> 21 - #include <linux/delay.h> 22 - 23 - #include <asm/io.h> 24 - #include <asm/irq.h> 25 - 26 - MODULE_AUTHOR("Vojtech Pavlik <vojtech@ucw.cz>"); 27 - MODULE_DESCRIPTION("IBM PC110 touchpad driver"); 28 - MODULE_LICENSE("GPL"); 29 - 30 - #define PC110PAD_OFF 0x30 31 - #define PC110PAD_ON 0x38 32 - 33 - static int pc110pad_irq = 10; 34 - static int pc110pad_io = 0x15e0; 35 - 36 - static struct input_dev *pc110pad_dev; 37 - static int pc110pad_data[3]; 38 - static int pc110pad_count; 39 - 40 - static irqreturn_t pc110pad_interrupt(int irq, void *ptr) 41 - { 42 - int value = inb_p(pc110pad_io); 43 - int handshake = inb_p(pc110pad_io + 2); 44 - 45 - outb(handshake | 1, pc110pad_io + 2); 46 - udelay(2); 47 - outb(handshake & ~1, pc110pad_io + 2); 48 - udelay(2); 49 - inb_p(0x64); 50 - 51 - pc110pad_data[pc110pad_count++] = value; 52 - 53 - if (pc110pad_count < 3) 54 - return IRQ_HANDLED; 55 - 56 - input_report_key(pc110pad_dev, BTN_TOUCH, 57 - pc110pad_data[0] & 0x01); 58 - input_report_abs(pc110pad_dev, ABS_X, 59 - pc110pad_data[1] | ((pc110pad_data[0] << 3) & 0x80) | ((pc110pad_data[0] << 1) & 0x100)); 60 - input_report_abs(pc110pad_dev, ABS_Y, 61 - pc110pad_data[2] | ((pc110pad_data[0] << 4) & 0x80)); 62 - input_sync(pc110pad_dev); 63 - 64 - pc110pad_count = 0; 65 - return IRQ_HANDLED; 66 - } 67 - 68 - static void pc110pad_close(struct input_dev *dev) 69 - { 70 - outb(PC110PAD_OFF, pc110pad_io + 2); 71 - } 72 - 73 - static int pc110pad_open(struct input_dev *dev) 74 - { 75 - 
pc110pad_interrupt(0, NULL); 76 - pc110pad_interrupt(0, NULL); 77 - pc110pad_interrupt(0, NULL); 78 - outb(PC110PAD_ON, pc110pad_io + 2); 79 - pc110pad_count = 0; 80 - 81 - return 0; 82 - } 83 - 84 - /* 85 - * We try to avoid enabling the hardware if it's not 86 - * there, but we don't know how to test. But we do know 87 - * that the PC110 is not a PCI system. So if we find any 88 - * PCI devices in the machine, we don't have a PC110. 89 - */ 90 - static int __init pc110pad_init(void) 91 - { 92 - int err; 93 - 94 - if (!no_pci_devices()) 95 - return -ENODEV; 96 - 97 - if (!request_region(pc110pad_io, 4, "pc110pad")) { 98 - printk(KERN_ERR "pc110pad: I/O area %#x-%#x in use.\n", 99 - pc110pad_io, pc110pad_io + 4); 100 - return -EBUSY; 101 - } 102 - 103 - outb(PC110PAD_OFF, pc110pad_io + 2); 104 - 105 - if (request_irq(pc110pad_irq, pc110pad_interrupt, 0, "pc110pad", NULL)) { 106 - printk(KERN_ERR "pc110pad: Unable to get irq %d.\n", pc110pad_irq); 107 - err = -EBUSY; 108 - goto err_release_region; 109 - } 110 - 111 - pc110pad_dev = input_allocate_device(); 112 - if (!pc110pad_dev) { 113 - printk(KERN_ERR "pc110pad: Not enough memory.\n"); 114 - err = -ENOMEM; 115 - goto err_free_irq; 116 - } 117 - 118 - pc110pad_dev->name = "IBM PC110 TouchPad"; 119 - pc110pad_dev->phys = "isa15e0/input0"; 120 - pc110pad_dev->id.bustype = BUS_ISA; 121 - pc110pad_dev->id.vendor = 0x0003; 122 - pc110pad_dev->id.product = 0x0001; 123 - pc110pad_dev->id.version = 0x0100; 124 - 125 - pc110pad_dev->evbit[0] = BIT_MASK(EV_KEY) | BIT_MASK(EV_ABS); 126 - pc110pad_dev->absbit[0] = BIT_MASK(ABS_X) | BIT_MASK(ABS_Y); 127 - pc110pad_dev->keybit[BIT_WORD(BTN_TOUCH)] = BIT_MASK(BTN_TOUCH); 128 - 129 - input_abs_set_max(pc110pad_dev, ABS_X, 0x1ff); 130 - input_abs_set_max(pc110pad_dev, ABS_Y, 0x0ff); 131 - 132 - pc110pad_dev->open = pc110pad_open; 133 - pc110pad_dev->close = pc110pad_close; 134 - 135 - err = input_register_device(pc110pad_dev); 136 - if (err) 137 - goto err_free_dev; 138 - 139 - 
return 0; 140 - 141 - err_free_dev: 142 - input_free_device(pc110pad_dev); 143 - err_free_irq: 144 - free_irq(pc110pad_irq, NULL); 145 - err_release_region: 146 - release_region(pc110pad_io, 4); 147 - 148 - return err; 149 - } 150 - 151 - static void __exit pc110pad_exit(void) 152 - { 153 - outb(PC110PAD_OFF, pc110pad_io + 2); 154 - free_irq(pc110pad_irq, NULL); 155 - input_unregister_device(pc110pad_dev); 156 - release_region(pc110pad_io, 4); 157 - } 158 - 159 - module_init(pc110pad_init); 160 - module_exit(pc110pad_exit);
+30 -4
drivers/misc/pci_endpoint_test.c
··· 61 61 #define STATUS_BAR_SUBRANGE_SETUP_FAIL BIT(15) 62 62 #define STATUS_BAR_SUBRANGE_CLEAR_SUCCESS BIT(16) 63 63 #define STATUS_BAR_SUBRANGE_CLEAR_FAIL BIT(17) 64 + #define STATUS_NO_RESOURCE BIT(18) 64 65 65 66 #define PCI_ENDPOINT_TEST_LOWER_SRC_ADDR 0x0c 66 67 #define PCI_ENDPOINT_TEST_UPPER_SRC_ADDR 0x10 ··· 85 84 #define CAP_MSIX BIT(2) 86 85 #define CAP_INTX BIT(3) 87 86 #define CAP_SUBRANGE_MAPPING BIT(4) 87 + #define CAP_DYNAMIC_INBOUND_MAPPING BIT(5) 88 + #define CAP_BAR0_RESERVED BIT(6) 89 + #define CAP_BAR1_RESERVED BIT(7) 90 + #define CAP_BAR2_RESERVED BIT(8) 91 + #define CAP_BAR3_RESERVED BIT(9) 92 + #define CAP_BAR4_RESERVED BIT(10) 93 + #define CAP_BAR5_RESERVED BIT(11) 88 94 89 95 #define PCI_ENDPOINT_TEST_DB_BAR 0x34 90 96 #define PCI_ENDPOINT_TEST_DB_OFFSET 0x38 ··· 114 106 #define PCI_DEVICE_ID_RENESAS_R8A779F0 0x0031 115 107 116 108 #define PCI_DEVICE_ID_ROCKCHIP_RK3588 0x3588 109 + 110 + #define PCI_DEVICE_ID_NVIDIA_TEGRA194_EP 0x1ad4 111 + #define PCI_DEVICE_ID_NVIDIA_TEGRA234_EP 0x229b 117 112 118 113 #define PCI_ENDPOINT_TEST_BAR_SUBRANGE_NSUB 2 119 114 ··· 286 275 return ret; 287 276 } 288 277 278 + static bool bar_is_reserved(struct pci_endpoint_test *test, enum pci_barno bar) 279 + { 280 + return test->ep_caps & BIT(bar + __fls(CAP_BAR0_RESERVED)); 281 + } 282 + 289 283 static const u32 bar_test_pattern[] = { 290 284 0xA0A0A0A0, 291 285 0xA1A1A1A1, ··· 419 403 420 404 /* Write all BARs in order (without reading). */ 421 405 for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) 422 - if (test->bar[bar]) 406 + if (test->bar[bar] && !bar_is_reserved(test, bar)) 423 407 pci_endpoint_test_bars_write_bar(test, bar); 424 408 425 409 /* ··· 429 413 * (Reading back the BAR directly after writing can not detect this.) 
430 414 */ 431 415 for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) { 432 - if (test->bar[bar]) { 416 + if (test->bar[bar] && !bar_is_reserved(test, bar)) { 433 417 ret = pci_endpoint_test_bars_read_bar(test, bar); 434 418 if (ret) 435 419 return ret; ··· 481 465 482 466 status = pci_endpoint_test_readl(test, PCI_ENDPOINT_TEST_STATUS); 483 467 if (status & fail_bit) 484 - return -EIO; 468 + return (status & STATUS_NO_RESOURCE) ? -ENOSPC : -EIO; 485 469 486 470 if (!(status & ok_bit)) 487 471 return -EIO; ··· 551 535 552 536 sub_size = bar_size / nsub; 553 537 if (sub_size < sizeof(u32)) { 554 - ret = -ENOSPC; 538 + ret = -EINVAL; 555 539 goto out_clear; 556 540 } 557 541 ··· 1076 1060 u32 addr; 1077 1061 int left; 1078 1062 1063 + if (!(test->ep_caps & CAP_DYNAMIC_INBOUND_MAPPING)) 1064 + return -EOPNOTSUPP; 1065 + 1079 1066 if (irq_type < PCITEST_IRQ_TYPE_INTX || 1080 1067 irq_type > PCITEST_IRQ_TYPE_MSIX) { 1081 1068 dev_err(dev, "Invalid IRQ type\n"); ··· 1157 1138 goto ret; 1158 1139 if (is_am654_pci_dev(pdev) && bar == BAR_0) 1159 1140 goto ret; 1141 + 1142 + if (bar_is_reserved(test, bar)) { 1143 + ret = -ENOBUFS; 1144 + goto ret; 1145 + } 1160 1146 1161 1147 if (cmd == PCITEST_BAR) 1162 1148 ret = pci_endpoint_test_bar(test, bar); ··· 1442 1418 { PCI_DEVICE(PCI_VENDOR_ID_ROCKCHIP, PCI_DEVICE_ID_ROCKCHIP_RK3588), 1443 1419 .driver_data = (kernel_ulong_t)&rk3588_data, 1444 1420 }, 1421 + { PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_TEGRA194_EP),}, 1422 + { PCI_DEVICE(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_TEGRA234_EP),}, 1445 1423 { } 1446 1424 }; 1447 1425 MODULE_DEVICE_TABLE(pci, pci_endpoint_test_tbl);
+1 -1
drivers/net/ethernet/intel/ice/ice_main.c
··· 5028 5028 } 5029 5029 5030 5030 if (pf->hw.mac_type == ICE_MAC_E830) { 5031 - err = pci_enable_ptm(pf->pdev, NULL); 5031 + err = pci_enable_ptm(pf->pdev); 5032 5032 if (err) 5033 5033 dev_dbg(dev, "PCIe PTM not supported by PCIe bus/controller\n"); 5034 5034 }
+1 -1
drivers/net/ethernet/intel/idpf/idpf_main.c
··· 257 257 goto err_free; 258 258 } 259 259 260 - err = pci_enable_ptm(pdev, NULL); 260 + err = pci_enable_ptm(pdev); 261 261 if (err) 262 262 pci_dbg(pdev, "PCIe PTM is not supported by PCIe bus/controller\n"); 263 263
+1 -1
drivers/net/ethernet/intel/igc/igc_main.c
··· 7141 7141 if (err) 7142 7142 goto err_pci_reg; 7143 7143 7144 - err = pci_enable_ptm(pdev, NULL); 7144 + err = pci_enable_ptm(pdev); 7145 7145 if (err < 0) 7146 7146 dev_info(&pdev->dev, "PCIe PTM not supported by PCIe bus/controller\n"); 7147 7147
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/main.c
··· 960 960 961 961 mlx5_pci_vsc_init(dev); 962 962 963 - pci_enable_ptm(pdev, NULL); 963 + pci_enable_ptm(pdev); 964 964 965 965 return 0; 966 966
+7 -7
drivers/ntb/ntb_transport.c
··· 759 759 static void ntb_free_mw(struct ntb_transport_ctx *nt, int num_mw) 760 760 { 761 761 struct ntb_transport_mw *mw = &nt->mw_vec[num_mw]; 762 - struct pci_dev *pdev = nt->ndev->pdev; 762 + struct device *dma_dev = ntb_get_dma_dev(nt->ndev); 763 763 764 764 if (!mw->virt_addr) 765 765 return; 766 766 767 767 ntb_mw_clear_trans(nt->ndev, PIDX, num_mw); 768 - dma_free_coherent(&pdev->dev, mw->alloc_size, 768 + dma_free_coherent(dma_dev, mw->alloc_size, 769 769 mw->alloc_addr, mw->dma_addr); 770 770 mw->xlat_size = 0; 771 771 mw->buff_size = 0; ··· 835 835 resource_size_t size) 836 836 { 837 837 struct ntb_transport_mw *mw = &nt->mw_vec[num_mw]; 838 - struct pci_dev *pdev = nt->ndev->pdev; 838 + struct device *dma_dev = ntb_get_dma_dev(nt->ndev); 839 839 size_t xlat_size, buff_size; 840 840 resource_size_t xlat_align; 841 841 resource_size_t xlat_align_size; ··· 864 864 mw->buff_size = buff_size; 865 865 mw->alloc_size = buff_size; 866 866 867 - rc = ntb_alloc_mw_buffer(mw, &pdev->dev, xlat_align); 867 + rc = ntb_alloc_mw_buffer(mw, dma_dev, xlat_align); 868 868 if (rc) { 869 869 mw->alloc_size *= 2; 870 - rc = ntb_alloc_mw_buffer(mw, &pdev->dev, xlat_align); 870 + rc = ntb_alloc_mw_buffer(mw, dma_dev, xlat_align); 871 871 if (rc) { 872 - dev_err(&pdev->dev, 872 + dev_err(dma_dev, 873 873 "Unable to alloc aligned MW buff\n"); 874 874 mw->xlat_size = 0; 875 875 mw->buff_size = 0; ··· 882 882 rc = ntb_mw_set_trans(nt->ndev, PIDX, num_mw, mw->dma_addr, 883 883 mw->xlat_size); 884 884 if (rc) { 885 - dev_err(&pdev->dev, "Unable to set mw%d translation", num_mw); 885 + dev_err(dma_dev, "Unable to set mw%d translation", num_mw); 886 886 ntb_free_mw(nt, num_mw); 887 887 return -EIO; 888 888 }
+3 -9
drivers/pci/Kconfig
··· 31 31 32 32 config PCI_DOMAINS 33 33 bool 34 - depends on PCI 35 34 36 35 config PCI_DOMAINS_GENERIC 37 36 bool ··· 254 255 choice 255 256 prompt "PCI Express hierarchy optimization setting" 256 257 default PCIE_BUS_DEFAULT 257 - depends on PCI && EXPERT 258 + depends on EXPERT 258 259 help 259 260 MPS (Max Payload Size) and MRRS (Max Read Request Size) are PCIe 260 261 device parameters that affect performance and the ability to ··· 271 272 272 273 config PCIE_BUS_TUNE_OFF 273 274 bool "Tune Off" 274 - depends on PCI 275 275 help 276 276 Use the BIOS defaults; don't touch MPS at all. This is the same 277 277 as booting with 'pci=pcie_bus_tune_off'. 278 278 279 279 config PCIE_BUS_DEFAULT 280 280 bool "Default" 281 - depends on PCI 282 281 help 283 282 Default choice; ensure that the MPS matches upstream bridge. 284 283 285 284 config PCIE_BUS_SAFE 286 285 bool "Safe" 287 - depends on PCI 288 286 help 289 287 Use largest MPS that boot-time devices support. If you have a 290 288 closed system with no possibility of adding new devices, this ··· 290 294 291 295 config PCIE_BUS_PERFORMANCE 292 296 bool "Performance" 293 - depends on PCI 294 297 help 295 298 Use MPS and MRRS for best performance. Ensure that a given 296 299 device's MPS is no larger than its parent MPS, which allows us to ··· 298 303 299 304 config PCIE_BUS_PEER2PEER 300 305 bool "Peer2peer" 301 - depends on PCI 302 306 help 303 307 Set MPS = 128 for all devices. MPS configuration effected by the 304 308 other options could cause the MPS on one root port to be ··· 311 317 config VGA_ARB 312 318 bool "VGA Arbitration" if EXPERT 313 319 default y 314 - depends on (PCI && !S390) 320 + depends on !S390 315 321 select SCREEN_INFO if X86 316 322 help 317 323 Some "legacy" VGA devices implemented on PCI typically have the same ··· 334 340 source "drivers/pci/switch/Kconfig" 335 341 source "drivers/pci/pwrctrl/Kconfig" 336 342 337 - endif 343 + endif # PCI
+1
drivers/pci/controller/Kconfig
··· 222 222 depends on ARCH_AIROHA || ARCH_MEDIATEK || COMPILE_TEST 223 223 depends on PCI_MSI 224 224 select IRQ_MSI_LIB 225 + select PCI_PWRCTRL_GENERIC 225 226 help 226 227 Adds support for PCIe Gen3 MAC controller for MediaTek SoCs. 227 228 This PCIe controller is compatible with Gen3, Gen2 and Gen1 speed,
+2 -1
drivers/pci/controller/cadence/pci-j721e.c
··· 202 202 int ret; 203 203 204 204 link_speed = of_pci_get_max_link_speed(np); 205 - if (link_speed < 2) 205 + if ((link_speed < 2) || 206 + (pcie_get_link_speed(link_speed) == PCI_SPEED_UNKNOWN)) 206 207 link_speed = 2; 207 208 208 209 val = link_speed - 1;
+4 -2
drivers/pci/controller/cadence/pci-sky1.c
··· 173 173 cdns_pcie->ops = &sky1_pcie_ops; 174 174 cdns_pcie->reg_base = pcie->reg_base; 175 175 cdns_pcie->msg_res = pcie->msg_res; 176 - cdns_pcie->is_rc = 1; 176 + cdns_pcie->is_rc = true; 177 177 178 178 reg_off = devm_kzalloc(dev, sizeof(*reg_off), GFP_KERNEL); 179 - if (!reg_off) 179 + if (!reg_off) { 180 + pci_ecam_free(pcie->cfg); 180 181 return -ENOMEM; 182 + } 181 183 182 184 reg_off->ip_reg_bank_offset = SKY1_IP_REG_BANK; 183 185 reg_off->ip_cfg_ctrl_reg_offset = SKY1_IP_CFG_CTRL_REG_BANK;
+7
drivers/pci/controller/cadence/pcie-cadence-host.c
··· 147 147 cdns_pcie_rp_writeb(pcie, PCI_CLASS_PROG, 0); 148 148 cdns_pcie_rp_writew(pcie, PCI_CLASS_DEVICE, PCI_CLASS_BRIDGE_PCI); 149 149 150 + value = cdns_pcie_rp_readl(pcie, CDNS_PCIE_RP_CAP_OFFSET + PCI_EXP_LNKCAP); 151 + if (rc->quirk_broken_aspm_l0s) 152 + value &= ~PCI_EXP_LNKCAP_ASPM_L0S; 153 + if (rc->quirk_broken_aspm_l1) 154 + value &= ~PCI_EXP_LNKCAP_ASPM_L1; 155 + cdns_pcie_rp_writel(pcie, CDNS_PCIE_RP_CAP_OFFSET + PCI_EXP_LNKCAP, value); 156 + 150 157 return 0; 151 158 } 152 159
+44 -31
drivers/pci/controller/cadence/pcie-cadence.h
··· 115 115 * @quirk_detect_quiet_flag: LTSSM Detect Quiet min delay set as quirk 116 116 * @ecam_supported: Whether the ECAM is supported 117 117 * @no_inbound_map: Whether inbound mapping is supported 118 + * @quirk_broken_aspm_l0s: Disable ASPM L0s support as quirk 119 + * @quirk_broken_aspm_l1: Disable ASPM L1 support as quirk 118 120 */ 119 121 struct cdns_pcie_rc { 120 122 struct cdns_pcie pcie; ··· 129 127 unsigned int quirk_detect_quiet_flag:1; 130 128 unsigned int ecam_supported:1; 131 129 unsigned int no_inbound_map:1; 130 + unsigned int quirk_broken_aspm_l0s:1; 131 + unsigned int quirk_broken_aspm_l1:1; 132 132 }; 133 133 134 134 /** ··· 253 249 return readl(pcie->reg_base + reg); 254 250 } 255 251 256 - static inline u16 cdns_pcie_readw(struct cdns_pcie *pcie, u32 reg) 257 - { 258 - return readw(pcie->reg_base + reg); 259 - } 260 - 261 - static inline u8 cdns_pcie_readb(struct cdns_pcie *pcie, u32 reg) 262 - { 263 - return readb(pcie->reg_base + reg); 264 - } 265 - 266 - static inline int cdns_pcie_read_cfg_byte(struct cdns_pcie *pcie, int where, 267 - u8 *val) 268 - { 269 - *val = cdns_pcie_readb(pcie, where); 270 - return PCIBIOS_SUCCESSFUL; 271 - } 272 - 273 - static inline int cdns_pcie_read_cfg_word(struct cdns_pcie *pcie, int where, 274 - u16 *val) 275 - { 276 - *val = cdns_pcie_readw(pcie, where); 277 - return PCIBIOS_SUCCESSFUL; 278 - } 279 - 280 - static inline int cdns_pcie_read_cfg_dword(struct cdns_pcie *pcie, int where, 281 - u32 *val) 282 - { 283 - *val = cdns_pcie_readl(pcie, where); 284 - return PCIBIOS_SUCCESSFUL; 285 - } 286 - 287 252 static inline u32 cdns_pcie_read_sz(void __iomem *addr, int size) 288 253 { 289 254 void __iomem *aligned_addr = PTR_ALIGN_DOWN(addr, 0x4); ··· 293 320 writel(val, aligned_addr); 294 321 } 295 322 323 + static inline int cdns_pcie_read_cfg_byte(struct cdns_pcie *pcie, int where, 324 + u8 *val) 325 + { 326 + void __iomem *addr = pcie->reg_base + where; 327 + 328 + *val = cdns_pcie_read_sz(addr, 0x1); 329 + 
return PCIBIOS_SUCCESSFUL; 330 + } 331 + 332 + static inline int cdns_pcie_read_cfg_word(struct cdns_pcie *pcie, int where, 333 + u16 *val) 334 + { 335 + void __iomem *addr = pcie->reg_base + where; 336 + 337 + *val = cdns_pcie_read_sz(addr, 0x2); 338 + return PCIBIOS_SUCCESSFUL; 339 + } 340 + 341 + static inline int cdns_pcie_read_cfg_dword(struct cdns_pcie *pcie, int where, 342 + u32 *val) 343 + { 344 + *val = cdns_pcie_readl(pcie, where); 345 + return PCIBIOS_SUCCESSFUL; 346 + } 347 + 296 348 /* Root Port register access */ 297 349 static inline void cdns_pcie_rp_writeb(struct cdns_pcie *pcie, 298 350 u32 reg, u8 value) ··· 340 342 void __iomem *addr = pcie->reg_base + CDNS_PCIE_RP_BASE + reg; 341 343 342 344 return cdns_pcie_read_sz(addr, 0x2); 345 + } 346 + 347 + static inline void cdns_pcie_rp_writel(struct cdns_pcie *pcie, 348 + u32 reg, u32 value) 349 + { 350 + void __iomem *addr = pcie->reg_base + CDNS_PCIE_RP_BASE + reg; 351 + 352 + cdns_pcie_write_sz(addr, 0x4, value); 353 + } 354 + 355 + static inline u32 cdns_pcie_rp_readl(struct cdns_pcie *pcie, u32 reg) 356 + { 357 + void __iomem *addr = pcie->reg_base + CDNS_PCIE_RP_BASE + reg; 358 + 359 + return cdns_pcie_read_sz(addr, 0x4); 343 360 } 344 361 345 362 static inline void cdns_pcie_hpa_rp_writeb(struct cdns_pcie *pcie,
+2
drivers/pci/controller/cadence/pcie-sg2042.c
··· 48 48 bridge->child_ops = &sg2042_pcie_child_ops; 49 49 50 50 rc = pci_host_bridge_priv(bridge); 51 + rc->quirk_broken_aspm_l0s = 1; 52 + rc->quirk_broken_aspm_l1 = 1; 51 53 pcie = &rc->pcie; 52 54 pcie->dev = dev; 53 55
+20 -8
drivers/pci/controller/dwc/Kconfig
··· 61 61 and therefore the driver re-uses the DesignWare core functions to 62 62 implement the driver. 63 63 64 + config PCIE_ANDES_QILAI 65 + tristate "Andes QiLai PCIe controller" 66 + depends on ARCH_ANDES || COMPILE_TEST 67 + depends on PCI_MSI 68 + select PCIE_DW_HOST 69 + help 70 + Say Y here to enable PCIe controller support on Andes QiLai SoCs, 71 + which operate in Root Complex mode. The Andes QiLai SoC PCIe 72 + controller is based on DesignWare IP and therefore the driver 73 + re-uses the DesignWare core functions to implement the driver. 74 + 64 75 config PCIE_ARTPEC6 65 76 bool 66 77 ··· 95 84 Enables support for the PCIe controller in the ARTPEC-6 SoC to work in 96 85 endpoint mode. This uses the DesignWare core. 97 86 98 - config PCIE_BT1 99 - tristate "Baikal-T1 PCIe controller" 100 - depends on MIPS_BAIKAL_T1 || COMPILE_TEST 87 + config PCIE_ESWIN 88 + tristate "ESWIN PCIe controller" 89 + depends on ARCH_ESWIN || COMPILE_TEST 101 90 depends on PCI_MSI 102 91 select PCIE_DW_HOST 103 92 help 104 - Enables support for the PCIe controller in the Baikal-T1 SoC to work 105 - in host mode. It's based on the Synopsys DWC PCIe v4.60a IP-core. 93 + Say Y here if you want PCIe controller support for the ESWIN SoCs. 94 + The PCIe controller in ESWIN SoCs is based on DesignWare hardware, and 95 + works only in host mode. 106 96 107 97 config PCI_IMX6 108 98 bool ··· 133 121 DesignWare core functions to implement the driver. 134 122 135 123 config PCI_LAYERSCAPE 136 - bool "Freescale Layerscape PCIe controller (host mode)" 124 + tristate "Freescale Layerscape PCIe controller (host mode)" 137 125 depends on OF && (ARM || ARCH_LAYERSCAPE || COMPILE_TEST) 138 126 depends on PCI_MSI 139 127 select PCIE_DW_HOST ··· 321 309 select CRC8 322 310 select PCIE_QCOM_COMMON 323 311 select PCI_HOST_COMMON 324 - select PCI_PWRCTRL_SLOT 312 + select PCI_PWRCTRL_GENERIC 325 313 help 326 314 Say Y here to enable PCIe controller support on Qualcomm SoCs. 
The 327 315 PCIe controller uses the DesignWare core plus Qualcomm-specific ··· 443 431 depends on ARCH_SPACEMIT || COMPILE_TEST 444 432 depends on HAS_IOMEM 445 433 select PCIE_DW_HOST 446 - select PCI_PWRCTRL_SLOT 434 + select PCI_PWRCTRL_GENERIC 447 435 default ARCH_SPACEMIT 448 436 help 449 437 Enables support for the DesignWare based PCIe controller in
+2 -1
drivers/pci/controller/dwc/Makefile
··· 5 5 obj-$(CONFIG_PCIE_DW_EP) += pcie-designware-ep.o 6 6 obj-$(CONFIG_PCIE_DW_PLAT) += pcie-designware-plat.o 7 7 obj-$(CONFIG_PCIE_AMD_MDB) += pcie-amd-mdb.o 8 - obj-$(CONFIG_PCIE_BT1) += pcie-bt1.o 8 + obj-$(CONFIG_PCIE_ANDES_QILAI) += pcie-andes-qilai.o 9 + obj-$(CONFIG_PCIE_ESWIN) += pcie-eswin.o 9 10 obj-$(CONFIG_PCI_DRA7XX) += pci-dra7xx.o 10 11 obj-$(CONFIG_PCI_EXYNOS) += pci-exynos.o 11 12 obj-$(CONFIG_PCIE_FU740) += pcie-fu740.o
-4
drivers/pci/controller/dwc/pci-dra7xx.c
··· 378 378 { 379 379 struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 380 380 struct dra7xx_pcie *dra7xx = to_dra7xx_pcie(pci); 381 - enum pci_barno bar; 382 - 383 - for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) 384 - dw_pcie_ep_reset_bar(pci, bar); 385 381 386 382 dra7xx_pcie_enable_wrapper_interrupts(dra7xx); 387 383 }
+39 -42
drivers/pci/controller/dwc/pci-imx6.c
··· 117 117 #define IMX_PCIE_FLAG_HAS_LUT BIT(10) 118 118 #define IMX_PCIE_FLAG_8GT_ECN_ERR051586 BIT(11) 119 119 #define IMX_PCIE_FLAG_SKIP_L23_READY BIT(12) 120 + /* Preserve MSI capability for platforms that require it */ 121 + #define IMX_PCIE_FLAG_KEEP_MSI_CAP BIT(13) 120 122 121 123 #define imx_check_flag(pci, val) (pci->drvdata->flags & val) 122 124 ··· 270 268 IMX95_PCIE_PHY_CR_PARA_SEL); 271 269 272 270 regmap_update_bits(imx_pcie->iomuxc_gpr, IMX95_PCIE_PHY_GEN_CTRL, 273 - ext ? IMX95_PCIE_REF_USE_PAD : 0, 274 - IMX95_PCIE_REF_USE_PAD); 271 + IMX95_PCIE_REF_USE_PAD, 272 + ext ? IMX95_PCIE_REF_USE_PAD : 0); 275 273 regmap_update_bits(imx_pcie->iomuxc_gpr, IMX95_PCIE_SS_RW_REG_0, 276 274 IMX95_PCIE_REF_CLKEN, 277 275 ext ? 0 : IMX95_PCIE_REF_CLKEN); ··· 903 901 904 902 if (imx_pcie->drvdata->core_reset) 905 903 imx_pcie->drvdata->core_reset(imx_pcie, true); 906 - 907 - /* Some boards don't have PCIe reset GPIO. */ 908 - gpiod_set_value_cansleep(imx_pcie->reset_gpiod, 1); 909 904 } 910 905 911 - static int imx_pcie_deassert_core_reset(struct imx_pcie *imx_pcie) 906 + static void imx_pcie_deassert_core_reset(struct imx_pcie *imx_pcie) 912 907 { 913 908 reset_control_deassert(imx_pcie->pciephy_reset); 914 909 915 910 if (imx_pcie->drvdata->core_reset) 916 911 imx_pcie->drvdata->core_reset(imx_pcie, false); 917 - 918 - /* Some boards don't have PCIe reset GPIO. 
*/ 919 - if (imx_pcie->reset_gpiod) { 920 - msleep(100); 921 - gpiod_set_value_cansleep(imx_pcie->reset_gpiod, 0); 922 - /* Wait for 100ms after PERST# deassertion (PCIe r5.0, 6.6.1) */ 923 - msleep(100); 924 - } 925 - 926 - return 0; 927 912 } 928 913 929 914 static int imx_pcie_wait_for_speed_change(struct imx_pcie *imx_pcie) ··· 1222 1233 imx_pcie_remove_lut(imx_pcie, pci_dev_id(pdev)); 1223 1234 } 1224 1235 1236 + static void imx_pcie_assert_perst(struct imx_pcie *imx_pcie, bool assert) 1237 + { 1238 + if (assert) { 1239 + gpiod_set_value_cansleep(imx_pcie->reset_gpiod, 1); 1240 + } else { 1241 + if (imx_pcie->reset_gpiod) { 1242 + msleep(PCIE_T_PVPERL_MS); 1243 + gpiod_set_value_cansleep(imx_pcie->reset_gpiod, 0); 1244 + msleep(PCIE_RESET_CONFIG_WAIT_MS); 1245 + } 1246 + } 1247 + } 1248 + 1225 1249 static int imx_pcie_host_init(struct dw_pcie_rp *pp) 1226 1250 { 1227 1251 struct dw_pcie *pci = to_dw_pcie_from_pp(pp); ··· 1257 1255 } 1258 1256 1259 1257 imx_pcie_assert_core_reset(imx_pcie); 1258 + imx_pcie_assert_perst(imx_pcie, true); 1260 1259 1261 1260 if (imx_pcie->drvdata->init_phy) 1262 1261 imx_pcie->drvdata->init_phy(imx_pcie); ··· 1295 1292 /* Make sure that PCIe LTSSM is cleared */ 1296 1293 imx_pcie_ltssm_disable(dev); 1297 1294 1298 - ret = imx_pcie_deassert_core_reset(imx_pcie); 1299 - if (ret < 0) { 1300 - dev_err(dev, "pcie deassert core reset failed: %d\n", ret); 1301 - goto err_phy_off; 1302 - } 1295 + imx_pcie_deassert_core_reset(imx_pcie); 1296 + imx_pcie_assert_perst(imx_pcie, false); 1303 1297 1304 1298 if (imx_pcie->drvdata->wait_pll_lock) { 1305 1299 ret = imx_pcie->drvdata->wait_pll_lock(imx_pcie); ··· 1401 1401 .stop_link = imx_pcie_stop_link, 1402 1402 }; 1403 1403 1404 - static void imx_pcie_ep_init(struct dw_pcie_ep *ep) 1405 - { 1406 - enum pci_barno bar; 1407 - struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 1408 - 1409 - for (bar = BAR_0; bar <= BAR_5; bar++) 1410 - dw_pcie_ep_reset_bar(pci, bar); 1411 - } 1412 - 1413 1404 static 
int imx_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no, 1414 1405 unsigned int type, u16 interrupt_num) 1415 1406 { ··· 1424 1433 static const struct pci_epc_features imx8m_pcie_epc_features = { 1425 1434 DWC_EPC_COMMON_FEATURES, 1426 1435 .msi_capable = true, 1427 - .bar[BAR_1] = { .type = BAR_RESERVED, }, 1428 - .bar[BAR_3] = { .type = BAR_RESERVED, }, 1436 + .bar[BAR_1] = { .type = BAR_DISABLED, }, 1437 + .bar[BAR_3] = { .type = BAR_DISABLED, }, 1429 1438 .bar[BAR_4] = { .type = BAR_FIXED, .fixed_size = SZ_256, }, 1430 - .bar[BAR_5] = { .type = BAR_RESERVED, }, 1439 + .bar[BAR_5] = { .type = BAR_DISABLED, }, 1431 1440 .align = SZ_64K, 1432 1441 }; 1433 1442 1434 1443 static const struct pci_epc_features imx8q_pcie_epc_features = { 1435 1444 DWC_EPC_COMMON_FEATURES, 1436 1445 .msi_capable = true, 1437 - .bar[BAR_1] = { .type = BAR_RESERVED, }, 1438 - .bar[BAR_3] = { .type = BAR_RESERVED, }, 1439 - .bar[BAR_5] = { .type = BAR_RESERVED, }, 1446 + .bar[BAR_1] = { .type = BAR_DISABLED, }, 1447 + .bar[BAR_3] = { .type = BAR_DISABLED, }, 1448 + .bar[BAR_5] = { .type = BAR_DISABLED, }, 1440 1449 .align = SZ_64K, 1441 1450 }; 1442 1451 ··· 1469 1478 } 1470 1479 1471 1480 static const struct dw_pcie_ep_ops pcie_ep_ops = { 1472 - .init = imx_pcie_ep_init, 1473 1481 .raise_irq = imx_pcie_ep_raise_irq, 1474 1482 .get_features = imx_pcie_ep_get_features, 1475 1483 }; ··· 1583 1593 * clock which saves some power. 
1584 1594 */ 1585 1595 imx_pcie_assert_core_reset(imx_pcie); 1596 + imx_pcie_assert_perst(imx_pcie, true); 1586 1597 imx_pcie->drvdata->enable_ref_clk(imx_pcie, false); 1587 1598 } else { 1588 1599 return dw_pcie_suspend_noirq(imx_pcie->pci); ··· 1604 1613 ret = imx_pcie->drvdata->enable_ref_clk(imx_pcie, true); 1605 1614 if (ret) 1606 1615 return ret; 1607 - ret = imx_pcie_deassert_core_reset(imx_pcie); 1608 - if (ret) 1609 - return ret; 1616 + imx_pcie_deassert_core_reset(imx_pcie); 1617 + imx_pcie_assert_perst(imx_pcie, false); 1610 1618 1611 1619 /* 1612 1620 * Using PCIE_TEST_PD seems to disable MSI and powers down the ··· 1637 1647 struct device *dev = &pdev->dev; 1638 1648 struct dw_pcie *pci; 1639 1649 struct imx_pcie *imx_pcie; 1640 - struct device_node *np; 1641 1650 struct device_node *node = dev->of_node; 1642 1651 int i, ret, domain; 1643 1652 u16 val; ··· 1663 1674 pci->pp.ops = &imx_pcie_host_dw_pme_ops; 1664 1675 1665 1676 /* Find the PHY if one is defined, only imx7d uses it */ 1666 - np = of_parse_phandle(node, "fsl,imx7d-pcie-phy", 0); 1677 + struct device_node *np __free(device_node) = 1678 + of_parse_phandle(node, "fsl,imx7d-pcie-phy", 0); 1667 1679 if (np) { 1668 1680 struct resource res; 1669 1681 ··· 1820 1830 } else { 1821 1831 if (imx_check_flag(imx_pcie, IMX_PCIE_FLAG_SKIP_L23_READY)) 1822 1832 pci->pp.skip_l23_ready = true; 1833 + if (imx_check_flag(imx_pcie, IMX_PCIE_FLAG_KEEP_MSI_CAP)) 1834 + pci->pp.keep_rp_msi_en = true; 1823 1835 pci->pp.use_atu_msg = true; 1824 1836 ret = dw_pcie_host_init(&pci->pp); 1825 1837 if (ret < 0) ··· 1845 1853 1846 1854 /* bring down link, so bootloader gets clean state in case of reboot */ 1847 1855 imx_pcie_assert_core_reset(imx_pcie); 1856 + imx_pcie_assert_perst(imx_pcie, true); 1848 1857 } 1849 1858 1850 1859 static const struct imx_pcie_drvdata drvdata[] = { ··· 1869 1876 .variant = IMX6SX, 1870 1877 .flags = IMX_PCIE_FLAG_IMX_PHY | 1871 1878 IMX_PCIE_FLAG_SPEED_CHANGE_WORKAROUND | 1879 + 
IMX_PCIE_FLAG_SKIP_L23_READY | 1872 1880 IMX_PCIE_FLAG_SUPPORTS_SUSPEND, 1873 1881 .gpr = "fsl,imx6q-iomuxc-gpr", 1874 1882 .ltssm_off = IOMUXC_GPR12, ··· 1901 1907 [IMX7D] = { 1902 1908 .variant = IMX7D, 1903 1909 .flags = IMX_PCIE_FLAG_SUPPORTS_SUSPEND | 1910 + IMX_PCIE_FLAG_KEEP_MSI_CAP | 1904 1911 IMX_PCIE_FLAG_HAS_APP_RESET | 1905 1912 IMX_PCIE_FLAG_SKIP_L23_READY | 1906 1913 IMX_PCIE_FLAG_HAS_PHY_RESET, ··· 1914 1919 [IMX8MQ] = { 1915 1920 .variant = IMX8MQ, 1916 1921 .flags = IMX_PCIE_FLAG_HAS_APP_RESET | 1922 + IMX_PCIE_FLAG_KEEP_MSI_CAP | 1917 1923 IMX_PCIE_FLAG_HAS_PHY_RESET | 1918 1924 IMX_PCIE_FLAG_SUPPORTS_SUSPEND, 1919 1925 .gpr = "fsl,imx8mq-iomuxc-gpr", ··· 1929 1933 [IMX8MM] = { 1930 1934 .variant = IMX8MM, 1931 1935 .flags = IMX_PCIE_FLAG_SUPPORTS_SUSPEND | 1936 + IMX_PCIE_FLAG_KEEP_MSI_CAP | 1932 1937 IMX_PCIE_FLAG_HAS_PHYDRV | 1933 1938 IMX_PCIE_FLAG_HAS_APP_RESET, 1934 1939 .gpr = "fsl,imx8mm-iomuxc-gpr",
+12
drivers/pci/controller/dwc/pci-keystone.c
··· 933 933 DWC_EPC_COMMON_FEATURES, 934 934 .msi_capable = true, 935 935 .msix_capable = true, 936 + /* 937 + * TODO: This driver is the only DWC glue driver that had BAR_RESERVED 938 + * BARs, but did not call dw_pcie_ep_reset_bar() for the reserved BARs. 939 + * 940 + * To not change the existing behavior, these BARs were not migrated to 941 + * BAR_DISABLED. If this driver wants the BAR_RESERVED BARs to be 942 + * disabled, it should migrate them to BAR_DISABLED. 943 + * 944 + * If they actually should be enabled, then the driver must also define 945 + * what is behind these reserved BARs, see the definition of struct 946 + * pci_epc_bar_rsvd_region. 947 + */ 936 948 .bar[BAR_0] = { .type = BAR_RESERVED, }, 937 949 .bar[BAR_1] = { .type = BAR_RESERVED, }, 938 950 .bar[BAR_2] = { .type = BAR_RESIZABLE, },
-6
drivers/pci/controller/dwc/pci-layerscape-ep.c
··· 152 152 struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 153 153 struct ls_pcie_ep *pcie = to_ls_pcie_ep(pci); 154 154 struct dw_pcie_ep_func *ep_func; 155 - enum pci_barno bar; 156 155 157 156 ep_func = dw_pcie_ep_get_func_from_ep(ep, 0); 158 157 if (!ep_func) 159 158 return; 160 - 161 - for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) 162 - dw_pcie_ep_reset_bar(pci, bar); 163 159 164 160 pcie->ls_epc->msi_capable = ep_func->msi_cap ? true : false; 165 161 pcie->ls_epc->msix_capable = ep_func->msix_cap ? true : false; ··· 247 251 pci->ops = pcie->drvdata->dw_pcie_ops; 248 252 249 253 ls_epc->bar[BAR_2].only_64bit = true; 250 - ls_epc->bar[BAR_3].type = BAR_RESERVED; 251 254 ls_epc->bar[BAR_4].only_64bit = true; 252 - ls_epc->bar[BAR_5].type = BAR_RESERVED; 253 255 ls_epc->linkup_notifier = true; 254 256 255 257 pcie->pci = pci;
+15 -1
drivers/pci/controller/dwc/pci-layerscape.c
··· 13 13 #include <linux/interrupt.h> 14 14 #include <linux/init.h> 15 15 #include <linux/iopoll.h> 16 + #include <linux/module.h> 16 17 #include <linux/of_pci.h> 17 18 #include <linux/of_platform.h> 18 19 #include <linux/of_address.h> ··· 404 403 NOIRQ_SYSTEM_SLEEP_PM_OPS(ls_pcie_suspend_noirq, ls_pcie_resume_noirq) 405 404 }; 406 405 406 + static void ls_pcie_remove(struct platform_device *pdev) 407 + { 408 + struct ls_pcie *pcie = platform_get_drvdata(pdev); 409 + 410 + dw_pcie_host_deinit(&pcie->pci->pp); 411 + } 412 + 407 413 static struct platform_driver ls_pcie_driver = { 408 414 .probe = ls_pcie_probe, 415 + .remove = ls_pcie_remove, 409 416 .driver = { 410 417 .name = "layerscape-pcie", 411 418 .of_match_table = ls_pcie_of_match, ··· 421 412 .pm = &ls_pcie_pm_ops, 422 413 }, 423 414 }; 424 - builtin_platform_driver(ls_pcie_driver); 415 + module_platform_driver(ls_pcie_driver); 416 + 417 + MODULE_AUTHOR("Minghuan Lian <Minghuan.Lian@freescale.com>"); 418 + MODULE_DESCRIPTION("Layerscape PCIe host controller driver"); 419 + MODULE_LICENSE("GPL"); 420 + MODULE_DEVICE_TABLE(of, ls_pcie_of_match);
+1 -1
drivers/pci/controller/dwc/pcie-amd-mdb.c
··· 389 389 IRQF_NO_THREAD, NULL, pcie); 390 390 if (err) { 391 391 dev_err(dev, "Failed to request INTx IRQ %d, err=%d\n", 392 - irq, err); 392 + pcie->intx_irq, err); 393 393 return err; 394 394 } 395 395
+197
drivers/pci/controller/dwc/pcie-andes-qilai.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Driver for the PCIe Controller in QiLai from Andes 4 + * 5 + * Copyright (C) 2026 Andes Technology Corporation 6 + */ 7 + 8 + #include <linux/bitfield.h> 9 + #include <linux/bits.h> 10 + #include <linux/kernel.h> 11 + #include <linux/module.h> 12 + #include <linux/pci.h> 13 + #include <linux/platform_device.h> 14 + #include <linux/pm_runtime.h> 15 + #include <linux/types.h> 16 + 17 + #include "pcie-designware.h" 18 + 19 + #define PCIE_INTR_CONTROL1 0x15c 20 + #define PCIE_MSI_CTRL_INT_EN BIT(28) 21 + 22 + #define PCIE_LOGIC_COHERENCY_CONTROL3 0x8e8 23 + 24 + /* 25 + * Refer to Table A4-5 (Memory type encoding) in the 26 + * AMBA AXI and ACE Protocol Specification. 27 + * 28 + * The selected value corresponds to the Memory type field: 29 + * "Write-back, Read and Write-allocate". 30 + * 31 + * The last three rows in the table A4-5 in 32 + * AMBA AXI and ACE Protocol Specification: 33 + * ARCACHE AWCACHE Memory type 34 + * ------------------------------------------------------------------ 35 + * 1111 (0111) 0111 Write-back Read-allocate 36 + * 1011 1111 (1011) Write-back Write-allocate 37 + * 1111 1111 Write-back Read and Write-allocate (selected) 38 + */ 39 + #define IOCP_ARCACHE 0b1111 40 + #define IOCP_AWCACHE 0b1111 41 + 42 + #define PCIE_CFG_MSTR_ARCACHE_MODE GENMASK(6, 3) 43 + #define PCIE_CFG_MSTR_AWCACHE_MODE GENMASK(14, 11) 44 + #define PCIE_CFG_MSTR_ARCACHE_VALUE GENMASK(22, 19) 45 + #define PCIE_CFG_MSTR_AWCACHE_VALUE GENMASK(30, 27) 46 + 47 + #define PCIE_GEN_CONTROL2 0x54 48 + #define PCIE_CFG_LTSSM_EN BIT(0) 49 + 50 + #define PCIE_REGS_PCIE_SII_PM_STATE 0xc0 51 + #define SMLH_LINK_UP BIT(6) 52 + #define RDLH_LINK_UP BIT(7) 53 + 54 + struct qilai_pcie { 55 + struct dw_pcie pci; 56 + void __iomem *apb_base; 57 + }; 58 + 59 + #define to_qilai_pcie(_pci) container_of(_pci, struct qilai_pcie, pci) 60 + 61 + static bool qilai_pcie_link_up(struct dw_pcie *pci) 62 + { 63 + struct qilai_pcie *pcie = 
to_qilai_pcie(pci); 64 + u32 val; 65 + 66 + val = readl(pcie->apb_base + PCIE_REGS_PCIE_SII_PM_STATE); 67 + 68 + return FIELD_GET(SMLH_LINK_UP, val) && FIELD_GET(RDLH_LINK_UP, val); 69 + } 70 + 71 + static int qilai_pcie_start_link(struct dw_pcie *pci) 72 + { 73 + struct qilai_pcie *pcie = to_qilai_pcie(pci); 74 + u32 val; 75 + 76 + val = readl(pcie->apb_base + PCIE_GEN_CONTROL2); 77 + val |= PCIE_CFG_LTSSM_EN; 78 + writel(val, pcie->apb_base + PCIE_GEN_CONTROL2); 79 + 80 + return 0; 81 + } 82 + 83 + static const struct dw_pcie_ops qilai_pcie_ops = { 84 + .link_up = qilai_pcie_link_up, 85 + .start_link = qilai_pcie_start_link, 86 + }; 87 + 88 + /* 89 + * Set up the QiLai PCIe IOCP (IO Coherence Port) Read/Write Behaviors to the 90 + * Write-Back, Read and Write Allocate mode. 91 + * 92 + * The IOCP HW target is SoC last-level cache (L2 Cache), which serves as the 93 + * system cache. The IOCP HW helps maintain cache monitoring, ensuring that 94 + * the device can snoop data from/to the cache. 
95 + */ 96 + static void qilai_pcie_iocp_cache_setup(struct dw_pcie_rp *pp) 97 + { 98 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 99 + u32 val; 100 + 101 + dw_pcie_dbi_ro_wr_en(pci); 102 + 103 + val = dw_pcie_readl_dbi(pci, PCIE_LOGIC_COHERENCY_CONTROL3); 104 + FIELD_MODIFY(PCIE_CFG_MSTR_ARCACHE_MODE, &val, IOCP_ARCACHE); 105 + FIELD_MODIFY(PCIE_CFG_MSTR_AWCACHE_MODE, &val, IOCP_AWCACHE); 106 + FIELD_MODIFY(PCIE_CFG_MSTR_ARCACHE_VALUE, &val, IOCP_ARCACHE); 107 + FIELD_MODIFY(PCIE_CFG_MSTR_AWCACHE_VALUE, &val, IOCP_AWCACHE); 108 + dw_pcie_writel_dbi(pci, PCIE_LOGIC_COHERENCY_CONTROL3, val); 109 + 110 + dw_pcie_dbi_ro_wr_dis(pci); 111 + } 112 + 113 + static void qilai_pcie_enable_msi(struct qilai_pcie *pcie) 114 + { 115 + u32 val; 116 + 117 + val = readl(pcie->apb_base + PCIE_INTR_CONTROL1); 118 + val |= PCIE_MSI_CTRL_INT_EN; 119 + writel(val, pcie->apb_base + PCIE_INTR_CONTROL1); 120 + } 121 + 122 + static int qilai_pcie_host_init(struct dw_pcie_rp *pp) 123 + { 124 + struct dw_pcie *pci = to_dw_pcie_from_pp(pp); 125 + struct qilai_pcie *pcie = to_qilai_pcie(pci); 126 + 127 + qilai_pcie_enable_msi(pcie); 128 + 129 + return 0; 130 + } 131 + 132 + static void qilai_pcie_host_post_init(struct dw_pcie_rp *pp) 133 + { 134 + qilai_pcie_iocp_cache_setup(pp); 135 + } 136 + 137 + static const struct dw_pcie_host_ops qilai_pcie_host_ops = { 138 + .init = qilai_pcie_host_init, 139 + .post_init = qilai_pcie_host_post_init, 140 + }; 141 + 142 + static int qilai_pcie_probe(struct platform_device *pdev) 143 + { 144 + struct qilai_pcie *pcie; 145 + struct dw_pcie *pci; 146 + struct device *dev = &pdev->dev; 147 + int ret; 148 + 149 + pcie = devm_kzalloc(&pdev->dev, sizeof(*pcie), GFP_KERNEL); 150 + if (!pcie) 151 + return -ENOMEM; 152 + 153 + platform_set_drvdata(pdev, pcie); 154 + 155 + pci = &pcie->pci; 156 + pcie->pci.dev = dev; 157 + pcie->pci.ops = &qilai_pcie_ops; 158 + pcie->pci.pp.ops = &qilai_pcie_host_ops; 159 + pci->use_parent_dt_ranges = true; 160 + 161 + 
dw_pcie_cap_set(&pcie->pci, REQ_RES); 162 + 163 + pcie->apb_base = devm_platform_ioremap_resource_byname(pdev, "apb"); 164 + if (IS_ERR(pcie->apb_base)) 165 + return PTR_ERR(pcie->apb_base); 166 + 167 + pm_runtime_set_active(dev); 168 + pm_runtime_no_callbacks(dev); 169 + devm_pm_runtime_enable(dev); 170 + 171 + ret = dw_pcie_host_init(&pcie->pci.pp); 172 + if (ret) 173 + return dev_err_probe(dev, ret, "Failed to initialize PCIe host\n"); 174 + 175 + return 0; 176 + } 177 + 178 + static const struct of_device_id qilai_pcie_of_match[] = { 179 + { .compatible = "andestech,qilai-pcie" }, 180 + {}, 181 + }; 182 + MODULE_DEVICE_TABLE(of, qilai_pcie_of_match); 183 + 184 + static struct platform_driver qilai_pcie_driver = { 185 + .probe = qilai_pcie_probe, 186 + .driver = { 187 + .name = "qilai-pcie", 188 + .of_match_table = qilai_pcie_of_match, 189 + .probe_type = PROBE_PREFER_ASYNCHRONOUS, 190 + }, 191 + }; 192 + 193 + builtin_platform_driver(qilai_pcie_driver); 194 + 195 + MODULE_AUTHOR("Randolph Lin <randolph@andestech.com>"); 196 + MODULE_DESCRIPTION("Andes QiLai PCIe driver"); 197 + MODULE_LICENSE("GPL");
-4
drivers/pci/controller/dwc/pcie-artpec6.c
··· 340 340 { 341 341 struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 342 342 struct artpec6_pcie *artpec6_pcie = to_artpec6_pcie(pci); 343 - enum pci_barno bar; 344 343 345 344 artpec6_pcie_assert_core_reset(artpec6_pcie); 346 345 artpec6_pcie_init_phy(artpec6_pcie); 347 346 artpec6_pcie_deassert_core_reset(artpec6_pcie); 348 347 artpec6_pcie_wait_for_phy(artpec6_pcie); 349 - 350 - for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) 351 - dw_pcie_ep_reset_bar(pci, bar); 352 348 } 353 349 354 350 static int artpec6_pcie_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
-645
drivers/pci/controller/dwc/pcie-bt1.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-only 2 - /* 3 - * Copyright (C) 2021 BAIKAL ELECTRONICS, JSC 4 - * 5 - * Authors: 6 - * Vadim Vlasov <Vadim.Vlasov@baikalelectronics.ru> 7 - * Serge Semin <Sergey.Semin@baikalelectronics.ru> 8 - * 9 - * Baikal-T1 PCIe controller driver 10 - */ 11 - 12 - #include <linux/bitfield.h> 13 - #include <linux/bits.h> 14 - #include <linux/clk.h> 15 - #include <linux/delay.h> 16 - #include <linux/gpio/consumer.h> 17 - #include <linux/kernel.h> 18 - #include <linux/mfd/syscon.h> 19 - #include <linux/module.h> 20 - #include <linux/pci.h> 21 - #include <linux/platform_device.h> 22 - #include <linux/regmap.h> 23 - #include <linux/reset.h> 24 - #include <linux/types.h> 25 - 26 - #include "pcie-designware.h" 27 - 28 - /* Baikal-T1 System CCU control registers */ 29 - #define BT1_CCU_PCIE_CLKC 0x140 30 - #define BT1_CCU_PCIE_REQ_PCS_CLK BIT(16) 31 - #define BT1_CCU_PCIE_REQ_MAC_CLK BIT(17) 32 - #define BT1_CCU_PCIE_REQ_PIPE_CLK BIT(18) 33 - 34 - #define BT1_CCU_PCIE_RSTC 0x144 35 - #define BT1_CCU_PCIE_REQ_LINK_RST BIT(13) 36 - #define BT1_CCU_PCIE_REQ_SMLH_RST BIT(14) 37 - #define BT1_CCU_PCIE_REQ_PHY_RST BIT(16) 38 - #define BT1_CCU_PCIE_REQ_CORE_RST BIT(24) 39 - #define BT1_CCU_PCIE_REQ_STICKY_RST BIT(26) 40 - #define BT1_CCU_PCIE_REQ_NSTICKY_RST BIT(27) 41 - 42 - #define BT1_CCU_PCIE_PMSC 0x148 43 - #define BT1_CCU_PCIE_LTSSM_STATE_MASK GENMASK(5, 0) 44 - #define BT1_CCU_PCIE_LTSSM_DET_QUIET 0x00 45 - #define BT1_CCU_PCIE_LTSSM_DET_ACT 0x01 46 - #define BT1_CCU_PCIE_LTSSM_POLL_ACT 0x02 47 - #define BT1_CCU_PCIE_LTSSM_POLL_COMP 0x03 48 - #define BT1_CCU_PCIE_LTSSM_POLL_CONF 0x04 49 - #define BT1_CCU_PCIE_LTSSM_PRE_DET_QUIET 0x05 50 - #define BT1_CCU_PCIE_LTSSM_DET_WAIT 0x06 51 - #define BT1_CCU_PCIE_LTSSM_CFG_LNKWD_START 0x07 52 - #define BT1_CCU_PCIE_LTSSM_CFG_LNKWD_ACEPT 0x08 53 - #define BT1_CCU_PCIE_LTSSM_CFG_LNNUM_WAIT 0x09 54 - #define BT1_CCU_PCIE_LTSSM_CFG_LNNUM_ACEPT 0x0a 55 - #define BT1_CCU_PCIE_LTSSM_CFG_COMPLETE 0x0b 56 - 
#define BT1_CCU_PCIE_LTSSM_CFG_IDLE 0x0c 57 - #define BT1_CCU_PCIE_LTSSM_RCVR_LOCK 0x0d 58 - #define BT1_CCU_PCIE_LTSSM_RCVR_SPEED 0x0e 59 - #define BT1_CCU_PCIE_LTSSM_RCVR_RCVRCFG 0x0f 60 - #define BT1_CCU_PCIE_LTSSM_RCVR_IDLE 0x10 61 - #define BT1_CCU_PCIE_LTSSM_L0 0x11 62 - #define BT1_CCU_PCIE_LTSSM_L0S 0x12 63 - #define BT1_CCU_PCIE_LTSSM_L123_SEND_IDLE 0x13 64 - #define BT1_CCU_PCIE_LTSSM_L1_IDLE 0x14 65 - #define BT1_CCU_PCIE_LTSSM_L2_IDLE 0x15 66 - #define BT1_CCU_PCIE_LTSSM_L2_WAKE 0x16 67 - #define BT1_CCU_PCIE_LTSSM_DIS_ENTRY 0x17 68 - #define BT1_CCU_PCIE_LTSSM_DIS_IDLE 0x18 69 - #define BT1_CCU_PCIE_LTSSM_DISABLE 0x19 70 - #define BT1_CCU_PCIE_LTSSM_LPBK_ENTRY 0x1a 71 - #define BT1_CCU_PCIE_LTSSM_LPBK_ACTIVE 0x1b 72 - #define BT1_CCU_PCIE_LTSSM_LPBK_EXIT 0x1c 73 - #define BT1_CCU_PCIE_LTSSM_LPBK_EXIT_TOUT 0x1d 74 - #define BT1_CCU_PCIE_LTSSM_HOT_RST_ENTRY 0x1e 75 - #define BT1_CCU_PCIE_LTSSM_HOT_RST 0x1f 76 - #define BT1_CCU_PCIE_LTSSM_RCVR_EQ0 0x20 77 - #define BT1_CCU_PCIE_LTSSM_RCVR_EQ1 0x21 78 - #define BT1_CCU_PCIE_LTSSM_RCVR_EQ2 0x22 79 - #define BT1_CCU_PCIE_LTSSM_RCVR_EQ3 0x23 80 - #define BT1_CCU_PCIE_SMLH_LINKUP BIT(6) 81 - #define BT1_CCU_PCIE_RDLH_LINKUP BIT(7) 82 - #define BT1_CCU_PCIE_PM_LINKSTATE_L0S BIT(8) 83 - #define BT1_CCU_PCIE_PM_LINKSTATE_L1 BIT(9) 84 - #define BT1_CCU_PCIE_PM_LINKSTATE_L2 BIT(10) 85 - #define BT1_CCU_PCIE_L1_PENDING BIT(12) 86 - #define BT1_CCU_PCIE_REQ_EXIT_L1 BIT(14) 87 - #define BT1_CCU_PCIE_LTSSM_RCVR_EQ BIT(15) 88 - #define BT1_CCU_PCIE_PM_DSTAT_MASK GENMASK(18, 16) 89 - #define BT1_CCU_PCIE_PM_PME_EN BIT(20) 90 - #define BT1_CCU_PCIE_PM_PME_STATUS BIT(21) 91 - #define BT1_CCU_PCIE_AUX_PM_EN BIT(22) 92 - #define BT1_CCU_PCIE_AUX_PWR_DET BIT(23) 93 - #define BT1_CCU_PCIE_WAKE_DET BIT(24) 94 - #define BT1_CCU_PCIE_TURNOFF_REQ BIT(30) 95 - #define BT1_CCU_PCIE_TURNOFF_ACK BIT(31) 96 - 97 - #define BT1_CCU_PCIE_GENC 0x14c 98 - #define BT1_CCU_PCIE_LTSSM_EN BIT(1) 99 - #define BT1_CCU_PCIE_DBI2_MODE BIT(2) 100 - 
#define BT1_CCU_PCIE_MGMT_EN BIT(3) 101 - #define BT1_CCU_PCIE_RXLANE_FLIP_EN BIT(16) 102 - #define BT1_CCU_PCIE_TXLANE_FLIP_EN BIT(17) 103 - #define BT1_CCU_PCIE_SLV_XFER_PEND BIT(24) 104 - #define BT1_CCU_PCIE_RCV_XFER_PEND BIT(25) 105 - #define BT1_CCU_PCIE_DBI_XFER_PEND BIT(26) 106 - #define BT1_CCU_PCIE_DMA_XFER_PEND BIT(27) 107 - 108 - #define BT1_CCU_PCIE_LTSSM_LINKUP(_pmsc) \ 109 - ({ \ 110 - int __state = FIELD_GET(BT1_CCU_PCIE_LTSSM_STATE_MASK, _pmsc); \ 111 - __state >= BT1_CCU_PCIE_LTSSM_L0 && __state <= BT1_CCU_PCIE_LTSSM_L2_WAKE; \ 112 - }) 113 - 114 - /* Baikal-T1 PCIe specific control registers */ 115 - #define BT1_PCIE_AXI2MGM_LANENUM 0xd04 116 - #define BT1_PCIE_AXI2MGM_LANESEL_MASK GENMASK(3, 0) 117 - 118 - #define BT1_PCIE_AXI2MGM_ADDRCTL 0xd08 119 - #define BT1_PCIE_AXI2MGM_PHYREG_ADDR_MASK GENMASK(20, 0) 120 - #define BT1_PCIE_AXI2MGM_READ_FLAG BIT(29) 121 - #define BT1_PCIE_AXI2MGM_DONE BIT(30) 122 - #define BT1_PCIE_AXI2MGM_BUSY BIT(31) 123 - 124 - #define BT1_PCIE_AXI2MGM_WRITEDATA 0xd0c 125 - #define BT1_PCIE_AXI2MGM_WDATA GENMASK(15, 0) 126 - 127 - #define BT1_PCIE_AXI2MGM_READDATA 0xd10 128 - #define BT1_PCIE_AXI2MGM_RDATA GENMASK(15, 0) 129 - 130 - /* Generic Baikal-T1 PCIe interface resources */ 131 - #define BT1_PCIE_NUM_APP_CLKS ARRAY_SIZE(bt1_pcie_app_clks) 132 - #define BT1_PCIE_NUM_CORE_CLKS ARRAY_SIZE(bt1_pcie_core_clks) 133 - #define BT1_PCIE_NUM_APP_RSTS ARRAY_SIZE(bt1_pcie_app_rsts) 134 - #define BT1_PCIE_NUM_CORE_RSTS ARRAY_SIZE(bt1_pcie_core_rsts) 135 - 136 - /* PCIe bus setup delays and timeouts */ 137 - #define BT1_PCIE_RST_DELAY_MS 100 138 - #define BT1_PCIE_RUN_DELAY_US 100 139 - #define BT1_PCIE_REQ_DELAY_US 1 140 - #define BT1_PCIE_REQ_TIMEOUT_US 1000 141 - #define BT1_PCIE_LNK_DELAY_US 1000 142 - #define BT1_PCIE_LNK_TIMEOUT_US 1000000 143 - 144 - static const enum dw_pcie_app_clk bt1_pcie_app_clks[] = { 145 - DW_PCIE_DBI_CLK, DW_PCIE_MSTR_CLK, DW_PCIE_SLV_CLK, 146 - }; 147 - 148 - static const enum dw_pcie_core_clk 
bt1_pcie_core_clks[] = { 149 - DW_PCIE_REF_CLK, 150 - }; 151 - 152 - static const enum dw_pcie_app_rst bt1_pcie_app_rsts[] = { 153 - DW_PCIE_MSTR_RST, DW_PCIE_SLV_RST, 154 - }; 155 - 156 - static const enum dw_pcie_core_rst bt1_pcie_core_rsts[] = { 157 - DW_PCIE_NON_STICKY_RST, DW_PCIE_STICKY_RST, DW_PCIE_CORE_RST, 158 - DW_PCIE_PIPE_RST, DW_PCIE_PHY_RST, DW_PCIE_HOT_RST, DW_PCIE_PWR_RST, 159 - }; 160 - 161 - struct bt1_pcie { 162 - struct dw_pcie dw; 163 - struct platform_device *pdev; 164 - struct regmap *sys_regs; 165 - }; 166 - #define to_bt1_pcie(_dw) container_of(_dw, struct bt1_pcie, dw) 167 - 168 - /* 169 - * Baikal-T1 MMIO space must be read/written by the dword-aligned 170 - * instructions. Note the methods are optimized to have the dword operations 171 - * performed with minimum overhead as the most frequently used ones. 172 - */ 173 - static int bt1_pcie_read_mmio(void __iomem *addr, int size, u32 *val) 174 - { 175 - unsigned int ofs = (uintptr_t)addr & 0x3; 176 - 177 - if (!IS_ALIGNED((uintptr_t)addr, size)) 178 - return -EINVAL; 179 - 180 - *val = readl(addr - ofs) >> ofs * BITS_PER_BYTE; 181 - if (size == 4) { 182 - return 0; 183 - } else if (size == 2) { 184 - *val &= 0xffff; 185 - return 0; 186 - } else if (size == 1) { 187 - *val &= 0xff; 188 - return 0; 189 - } 190 - 191 - return -EINVAL; 192 - } 193 - 194 - static int bt1_pcie_write_mmio(void __iomem *addr, int size, u32 val) 195 - { 196 - unsigned int ofs = (uintptr_t)addr & 0x3; 197 - u32 tmp, mask; 198 - 199 - if (!IS_ALIGNED((uintptr_t)addr, size)) 200 - return -EINVAL; 201 - 202 - if (size == 4) { 203 - writel(val, addr); 204 - return 0; 205 - } else if (size == 2 || size == 1) { 206 - mask = GENMASK(size * BITS_PER_BYTE - 1, 0); 207 - tmp = readl(addr - ofs) & ~(mask << ofs * BITS_PER_BYTE); 208 - tmp |= (val & mask) << ofs * BITS_PER_BYTE; 209 - writel(tmp, addr - ofs); 210 - return 0; 211 - } 212 - 213 - return -EINVAL; 214 - } 215 - 216 - static u32 bt1_pcie_read_dbi(struct dw_pcie 
*pci, void __iomem *base, u32 reg, 217 - size_t size) 218 - { 219 - int ret; 220 - u32 val; 221 - 222 - ret = bt1_pcie_read_mmio(base + reg, size, &val); 223 - if (ret) { 224 - dev_err(pci->dev, "Read DBI address failed\n"); 225 - return ~0U; 226 - } 227 - 228 - return val; 229 - } 230 - 231 - static void bt1_pcie_write_dbi(struct dw_pcie *pci, void __iomem *base, u32 reg, 232 - size_t size, u32 val) 233 - { 234 - int ret; 235 - 236 - ret = bt1_pcie_write_mmio(base + reg, size, val); 237 - if (ret) 238 - dev_err(pci->dev, "Write DBI address failed\n"); 239 - } 240 - 241 - static void bt1_pcie_write_dbi2(struct dw_pcie *pci, void __iomem *base, u32 reg, 242 - size_t size, u32 val) 243 - { 244 - struct bt1_pcie *btpci = to_bt1_pcie(pci); 245 - int ret; 246 - 247 - regmap_update_bits(btpci->sys_regs, BT1_CCU_PCIE_GENC, 248 - BT1_CCU_PCIE_DBI2_MODE, BT1_CCU_PCIE_DBI2_MODE); 249 - 250 - ret = bt1_pcie_write_mmio(base + reg, size, val); 251 - if (ret) 252 - dev_err(pci->dev, "Write DBI2 address failed\n"); 253 - 254 - regmap_update_bits(btpci->sys_regs, BT1_CCU_PCIE_GENC, 255 - BT1_CCU_PCIE_DBI2_MODE, 0); 256 - } 257 - 258 - static int bt1_pcie_start_link(struct dw_pcie *pci) 259 - { 260 - struct bt1_pcie *btpci = to_bt1_pcie(pci); 261 - u32 val; 262 - int ret; 263 - 264 - /* 265 - * Enable LTSSM and make sure it was able to establish both PHY and 266 - * data links. This procedure shall work fine to reach 2.5 GT/s speed. 
267 - */ 268 - regmap_update_bits(btpci->sys_regs, BT1_CCU_PCIE_GENC, 269 - BT1_CCU_PCIE_LTSSM_EN, BT1_CCU_PCIE_LTSSM_EN); 270 - 271 - ret = regmap_read_poll_timeout(btpci->sys_regs, BT1_CCU_PCIE_PMSC, val, 272 - (val & BT1_CCU_PCIE_SMLH_LINKUP), 273 - BT1_PCIE_LNK_DELAY_US, BT1_PCIE_LNK_TIMEOUT_US); 274 - if (ret) { 275 - dev_err(pci->dev, "LTSSM failed to set PHY link up\n"); 276 - return ret; 277 - } 278 - 279 - ret = regmap_read_poll_timeout(btpci->sys_regs, BT1_CCU_PCIE_PMSC, val, 280 - (val & BT1_CCU_PCIE_RDLH_LINKUP), 281 - BT1_PCIE_LNK_DELAY_US, BT1_PCIE_LNK_TIMEOUT_US); 282 - if (ret) { 283 - dev_err(pci->dev, "LTSSM failed to set data link up\n"); 284 - return ret; 285 - } 286 - 287 - /* 288 - * Activate direct speed change after the link is established in an 289 - * attempt to reach a higher bus performance (up to Gen.3 - 8.0 GT/s). 290 - * This is required at least to get 8.0 GT/s speed. 291 - */ 292 - val = dw_pcie_readl_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL); 293 - val |= PORT_LOGIC_SPEED_CHANGE; 294 - dw_pcie_writel_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL, val); 295 - 296 - ret = regmap_read_poll_timeout(btpci->sys_regs, BT1_CCU_PCIE_PMSC, val, 297 - BT1_CCU_PCIE_LTSSM_LINKUP(val), 298 - BT1_PCIE_LNK_DELAY_US, BT1_PCIE_LNK_TIMEOUT_US); 299 - if (ret) 300 - dev_err(pci->dev, "LTSSM failed to get into L0 state\n"); 301 - 302 - return ret; 303 - } 304 - 305 - static void bt1_pcie_stop_link(struct dw_pcie *pci) 306 - { 307 - struct bt1_pcie *btpci = to_bt1_pcie(pci); 308 - 309 - regmap_update_bits(btpci->sys_regs, BT1_CCU_PCIE_GENC, 310 - BT1_CCU_PCIE_LTSSM_EN, 0); 311 - } 312 - 313 - static const struct dw_pcie_ops bt1_pcie_ops = { 314 - .read_dbi = bt1_pcie_read_dbi, 315 - .write_dbi = bt1_pcie_write_dbi, 316 - .write_dbi2 = bt1_pcie_write_dbi2, 317 - .start_link = bt1_pcie_start_link, 318 - .stop_link = bt1_pcie_stop_link, 319 - }; 320 - 321 - static struct pci_ops bt1_pci_ops = { 322 - .map_bus = dw_pcie_own_conf_map_bus, 323 - .read = 
pci_generic_config_read32, 324 - .write = pci_generic_config_write32, 325 - }; 326 - 327 - static int bt1_pcie_get_resources(struct bt1_pcie *btpci) 328 - { 329 - struct device *dev = btpci->dw.dev; 330 - int i; 331 - 332 - /* DBI access is supposed to be performed by the dword-aligned IOs */ 333 - btpci->dw.pp.bridge->ops = &bt1_pci_ops; 334 - 335 - /* These CSRs are in MMIO so we won't check the regmap-methods status */ 336 - btpci->sys_regs = 337 - syscon_regmap_lookup_by_phandle(dev->of_node, "baikal,bt1-syscon"); 338 - if (IS_ERR(btpci->sys_regs)) 339 - return dev_err_probe(dev, PTR_ERR(btpci->sys_regs), 340 - "Failed to get syscon\n"); 341 - 342 - /* Make sure all the required resources have been specified */ 343 - for (i = 0; i < BT1_PCIE_NUM_APP_CLKS; i++) { 344 - if (!btpci->dw.app_clks[bt1_pcie_app_clks[i]].clk) { 345 - dev_err(dev, "App clocks set is incomplete\n"); 346 - return -ENOENT; 347 - } 348 - } 349 - 350 - for (i = 0; i < BT1_PCIE_NUM_CORE_CLKS; i++) { 351 - if (!btpci->dw.core_clks[bt1_pcie_core_clks[i]].clk) { 352 - dev_err(dev, "Core clocks set is incomplete\n"); 353 - return -ENOENT; 354 - } 355 - } 356 - 357 - for (i = 0; i < BT1_PCIE_NUM_APP_RSTS; i++) { 358 - if (!btpci->dw.app_rsts[bt1_pcie_app_rsts[i]].rstc) { 359 - dev_err(dev, "App resets set is incomplete\n"); 360 - return -ENOENT; 361 - } 362 - } 363 - 364 - for (i = 0; i < BT1_PCIE_NUM_CORE_RSTS; i++) { 365 - if (!btpci->dw.core_rsts[bt1_pcie_core_rsts[i]].rstc) { 366 - dev_err(dev, "Core resets set is incomplete\n"); 367 - return -ENOENT; 368 - } 369 - } 370 - 371 - return 0; 372 - } 373 - 374 - static void bt1_pcie_full_stop_bus(struct bt1_pcie *btpci, bool init) 375 - { 376 - struct device *dev = btpci->dw.dev; 377 - struct dw_pcie *pci = &btpci->dw; 378 - int ret; 379 - 380 - /* Disable LTSSM for sure */ 381 - regmap_update_bits(btpci->sys_regs, BT1_CCU_PCIE_GENC, 382 - BT1_CCU_PCIE_LTSSM_EN, 0); 383 - 384 - /* 385 - * Application reset controls are trigger-based so assert the 
-	 * core resets only.
-	 */
-	ret = reset_control_bulk_assert(DW_PCIE_NUM_CORE_RSTS, pci->core_rsts);
-	if (ret)
-		dev_err(dev, "Failed to assert core resets\n");
-
-	/*
-	 * Clocks are disabled by default at least in accordance with the clk
-	 * enable counter value on init stage.
-	 */
-	if (!init) {
-		clk_bulk_disable_unprepare(DW_PCIE_NUM_CORE_CLKS, pci->core_clks);
-
-		clk_bulk_disable_unprepare(DW_PCIE_NUM_APP_CLKS, pci->app_clks);
-	}
-
-	/* The peripheral devices are unavailable anyway so reset them too */
-	gpiod_set_value_cansleep(pci->pe_rst, 1);
-
-	/* Make sure all the resets are settled */
-	msleep(BT1_PCIE_RST_DELAY_MS);
-}
-
-/*
- * Implements the cold reset procedure in accordance with the reference manual
- * and available PM signals.
- */
-static int bt1_pcie_cold_start_bus(struct bt1_pcie *btpci)
-{
-	struct device *dev = btpci->dw.dev;
-	struct dw_pcie *pci = &btpci->dw;
-	u32 val;
-	int ret;
-
-	/* First get out of the Power/Hot reset state */
-	ret = reset_control_deassert(pci->core_rsts[DW_PCIE_PWR_RST].rstc);
-	if (ret) {
-		dev_err(dev, "Failed to deassert PHY reset\n");
-		return ret;
-	}
-
-	ret = reset_control_deassert(pci->core_rsts[DW_PCIE_HOT_RST].rstc);
-	if (ret) {
-		dev_err(dev, "Failed to deassert hot reset\n");
-		goto err_assert_pwr_rst;
-	}
-
-	/* Wait for the PM-core to stop requesting the PHY reset */
-	ret = regmap_read_poll_timeout(btpci->sys_regs, BT1_CCU_PCIE_RSTC, val,
-				       !(val & BT1_CCU_PCIE_REQ_PHY_RST),
-				       BT1_PCIE_REQ_DELAY_US, BT1_PCIE_REQ_TIMEOUT_US);
-	if (ret) {
-		dev_err(dev, "Timed out waiting for PM to stop PHY resetting\n");
-		goto err_assert_hot_rst;
-	}
-
-	ret = reset_control_deassert(pci->core_rsts[DW_PCIE_PHY_RST].rstc);
-	if (ret) {
-		dev_err(dev, "Failed to deassert PHY reset\n");
-		goto err_assert_hot_rst;
-	}
-
-	/* Clocks can be now enabled, but the ref one is crucial at this stage */
-	ret = clk_bulk_prepare_enable(DW_PCIE_NUM_APP_CLKS, pci->app_clks);
-	if (ret) {
-		dev_err(dev, "Failed to enable app clocks\n");
-		goto err_assert_phy_rst;
-	}
-
-	ret = clk_bulk_prepare_enable(DW_PCIE_NUM_CORE_CLKS, pci->core_clks);
-	if (ret) {
-		dev_err(dev, "Failed to enable ref clocks\n");
-		goto err_disable_app_clk;
-	}
-
-	/* Wait for the PM to stop requesting the controller core reset */
-	ret = regmap_read_poll_timeout(btpci->sys_regs, BT1_CCU_PCIE_RSTC, val,
-				       !(val & BT1_CCU_PCIE_REQ_CORE_RST),
-				       BT1_PCIE_REQ_DELAY_US, BT1_PCIE_REQ_TIMEOUT_US);
-	if (ret) {
-		dev_err(dev, "Timed out waiting for PM to stop core resetting\n");
-		goto err_disable_core_clk;
-	}
-
-	/* PCS-PIPE interface and controller core can be now activated */
-	ret = reset_control_deassert(pci->core_rsts[DW_PCIE_PIPE_RST].rstc);
-	if (ret) {
-		dev_err(dev, "Failed to deassert PIPE reset\n");
-		goto err_disable_core_clk;
-	}
-
-	ret = reset_control_deassert(pci->core_rsts[DW_PCIE_CORE_RST].rstc);
-	if (ret) {
-		dev_err(dev, "Failed to deassert core reset\n");
-		goto err_assert_pipe_rst;
-	}
-
-	/* It's recommended to reset the core and application logic together */
-	ret = reset_control_bulk_reset(DW_PCIE_NUM_APP_RSTS, pci->app_rsts);
-	if (ret) {
-		dev_err(dev, "Failed to reset app domain\n");
-		goto err_assert_core_rst;
-	}
-
-	/* Sticky/Non-sticky CSR flags can be now unreset too */
-	ret = reset_control_deassert(pci->core_rsts[DW_PCIE_STICKY_RST].rstc);
-	if (ret) {
-		dev_err(dev, "Failed to deassert sticky reset\n");
-		goto err_assert_core_rst;
-	}
-
-	ret = reset_control_deassert(pci->core_rsts[DW_PCIE_NON_STICKY_RST].rstc);
-	if (ret) {
-		dev_err(dev, "Failed to deassert non-sticky reset\n");
-		goto err_assert_sticky_rst;
-	}
-
-	/* Activate the PCIe bus peripheral devices */
-	gpiod_set_value_cansleep(pci->pe_rst, 0);
-
-	/* Make sure the state is settled (LTSSM is still disabled though) */
-	usleep_range(BT1_PCIE_RUN_DELAY_US, BT1_PCIE_RUN_DELAY_US + 100);
-
-	return 0;
-
-err_assert_sticky_rst:
-	reset_control_assert(pci->core_rsts[DW_PCIE_STICKY_RST].rstc);
-
-err_assert_core_rst:
-	reset_control_assert(pci->core_rsts[DW_PCIE_CORE_RST].rstc);
-
-err_assert_pipe_rst:
-	reset_control_assert(pci->core_rsts[DW_PCIE_PIPE_RST].rstc);
-
-err_disable_core_clk:
-	clk_bulk_disable_unprepare(DW_PCIE_NUM_CORE_CLKS, pci->core_clks);
-
-err_disable_app_clk:
-	clk_bulk_disable_unprepare(DW_PCIE_NUM_APP_CLKS, pci->app_clks);
-
-err_assert_phy_rst:
-	reset_control_assert(pci->core_rsts[DW_PCIE_PHY_RST].rstc);
-
-err_assert_hot_rst:
-	reset_control_assert(pci->core_rsts[DW_PCIE_HOT_RST].rstc);
-
-err_assert_pwr_rst:
-	reset_control_assert(pci->core_rsts[DW_PCIE_PWR_RST].rstc);
-
-	return ret;
-}
-
-static int bt1_pcie_host_init(struct dw_pcie_rp *pp)
-{
-	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
-	struct bt1_pcie *btpci = to_bt1_pcie(pci);
-	int ret;
-
-	ret = bt1_pcie_get_resources(btpci);
-	if (ret)
-		return ret;
-
-	bt1_pcie_full_stop_bus(btpci, true);
-
-	return bt1_pcie_cold_start_bus(btpci);
-}
-
-static void bt1_pcie_host_deinit(struct dw_pcie_rp *pp)
-{
-	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
-	struct bt1_pcie *btpci = to_bt1_pcie(pci);
-
-	bt1_pcie_full_stop_bus(btpci, false);
-}
-
-static const struct dw_pcie_host_ops bt1_pcie_host_ops = {
-	.init = bt1_pcie_host_init,
-	.deinit = bt1_pcie_host_deinit,
-};
-
-static struct bt1_pcie *bt1_pcie_create_data(struct platform_device *pdev)
-{
-	struct bt1_pcie *btpci;
-
-	btpci = devm_kzalloc(&pdev->dev, sizeof(*btpci), GFP_KERNEL);
-	if (!btpci)
-		return ERR_PTR(-ENOMEM);
-
-	btpci->pdev = pdev;
-
-	platform_set_drvdata(pdev, btpci);
-
-	return btpci;
-}
-
-static int bt1_pcie_add_port(struct bt1_pcie *btpci)
-{
-	struct device *dev = &btpci->pdev->dev;
-	int ret;
-
-	ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
-	if (ret)
-		return ret;
-
-	btpci->dw.version = DW_PCIE_VER_460A;
-	btpci->dw.dev = dev;
-	btpci->dw.ops = &bt1_pcie_ops;
-
-	btpci->dw.pp.num_vectors = MAX_MSI_IRQS;
-	btpci->dw.pp.ops = &bt1_pcie_host_ops;
-
-	dw_pcie_cap_set(&btpci->dw, REQ_RES);
-
-	ret = dw_pcie_host_init(&btpci->dw.pp);
-
-	return dev_err_probe(dev, ret, "Failed to initialize DWC PCIe host\n");
-}
-
-static void bt1_pcie_del_port(struct bt1_pcie *btpci)
-{
-	dw_pcie_host_deinit(&btpci->dw.pp);
-}
-
-static int bt1_pcie_probe(struct platform_device *pdev)
-{
-	struct bt1_pcie *btpci;
-
-	btpci = bt1_pcie_create_data(pdev);
-	if (IS_ERR(btpci))
-		return PTR_ERR(btpci);
-
-	return bt1_pcie_add_port(btpci);
-}
-
-static void bt1_pcie_remove(struct platform_device *pdev)
-{
-	struct bt1_pcie *btpci = platform_get_drvdata(pdev);
-
-	bt1_pcie_del_port(btpci);
-}
-
-static const struct of_device_id bt1_pcie_of_match[] = {
-	{ .compatible = "baikal,bt1-pcie" },
-	{},
-};
-MODULE_DEVICE_TABLE(of, bt1_pcie_of_match);
-
-static struct platform_driver bt1_pcie_driver = {
-	.probe = bt1_pcie_probe,
-	.remove = bt1_pcie_remove,
-	.driver = {
-		.name = "bt1-pcie",
-		.of_match_table = bt1_pcie_of_match,
-	},
-};
-module_platform_driver(bt1_pcie_driver);
-
-MODULE_AUTHOR("Serge Semin <Sergey.Semin@baikalelectronics.ru>");
-MODULE_DESCRIPTION("Baikal-T1 PCIe driver");
-MODULE_LICENSE("GPL");
drivers/pci/controller/dwc/pcie-designware-debugfs.c (+63 -10)
···
  * supported in DWC RAS DES
  * @name: Name of the error counter
  * @group_no: Group number that the event belongs to. The value can range
- *	      from 0 to 4
+ *	      from 0 to 7
  * @event_no: Event number of the particular event. The value ranges are:
  *	      Group 0: 0 - 10
  *	      Group 1: 5 - 13
  *	      Group 2: 0 - 7
  *	      Group 3: 0 - 5
  *	      Group 4: 0 - 1
+ *	      Group 5: 0 - 13
+ *	      Group 6: 0 - 6
+ *	      Group 7: 0 - 25
  */
 struct dwc_pcie_event_counter {
 	const char *name;
···
 	{"completion_timeout", 0x3, 0x5},
 	{"ebuf_skp_add", 0x4, 0x0},
 	{"ebuf_skp_del", 0x4, 0x1},
+	{"l0_to_recovery_entry", 0x5, 0x0},
+	{"l1_to_recovery_entry", 0x5, 0x1},
+	{"tx_l0s_entry", 0x5, 0x2},
+	{"rx_l0s_entry", 0x5, 0x3},
+	{"aspm_l1_reject", 0x5, 0x4},
+	{"l1_entry", 0x5, 0x5},
+	{"l1_cpm", 0x5, 0x6},
+	{"l1.1_entry", 0x5, 0x7},
+	{"l1.2_entry", 0x5, 0x8},
+	{"l1_short_duration", 0x5, 0x9},
+	{"l1.2_abort", 0x5, 0xa},
+	{"l2_entry", 0x5, 0xb},
+	{"speed_change", 0x5, 0xc},
+	{"link_width_change", 0x5, 0xd},
+	{"tx_ack_dllp", 0x6, 0x0},
+	{"tx_update_fc_dllp", 0x6, 0x1},
+	{"rx_ack_dllp", 0x6, 0x2},
+	{"rx_update_fc_dllp", 0x6, 0x3},
+	{"rx_nullified_tlp", 0x6, 0x4},
+	{"tx_nullified_tlp", 0x6, 0x5},
+	{"rx_duplicate_tlp", 0x6, 0x6},
+	{"tx_memory_write", 0x7, 0x0},
+	{"tx_memory_read", 0x7, 0x1},
+	{"tx_configuration_write", 0x7, 0x2},
+	{"tx_configuration_read", 0x7, 0x3},
+	{"tx_io_write", 0x7, 0x4},
+	{"tx_io_read", 0x7, 0x5},
+	{"tx_completion_without_data", 0x7, 0x6},
+	{"tx_completion_w_data", 0x7, 0x7},
+	{"tx_message_tlp_pcie_vc_only", 0x7, 0x8},
+	{"tx_atomic", 0x7, 0x9},
+	{"tx_tlp_with_prefix", 0x7, 0xa},
+	{"rx_memory_write", 0x7, 0xb},
+	{"rx_memory_read", 0x7, 0xc},
+	{"rx_configuration_write", 0x7, 0xd},
+	{"rx_configuration_read", 0x7, 0xe},
+	{"rx_io_write", 0x7, 0xf},
+	{"rx_io_read", 0x7, 0x10},
+	{"rx_completion_without_data", 0x7, 0x11},
+	{"rx_completion_w_data", 0x7, 0x12},
+	{"rx_message_tlp_pcie_vc_only", 0x7, 0x13},
+	{"rx_atomic", 0x7, 0x14},
+	{"rx_tlp_with_prefix", 0x7, 0x15},
+	{"tx_ccix_tlp", 0x7, 0x16},
+	{"rx_ccix_tlp", 0x7, 0x17},
+	{"tx_deferrable_memory_write_tlp", 0x7, 0x18},
+	{"rx_deferrable_memory_write_tlp", 0x7, 0x19},
 };
 
 static ssize_t lane_detect_read(struct file *file, char __user *buf,
···
 	struct dw_pcie *pci = file->private_data;
 	struct dwc_pcie_rasdes_info *rinfo = pci->debugfs->rasdes_info;
 	u32 lane, val;
+	int ret;
 
-	val = kstrtou32_from_user(buf, count, 0, &lane);
-	if (val)
-		return val;
+	ret = kstrtou32_from_user(buf, count, 0, &lane);
+	if (ret)
+		return ret;
 
 	val = dw_pcie_readl_dbi(pci, rinfo->ras_cap_offset + SD_STATUS_L1LANE_REG);
 	val &= ~(LANE_SELECT);
···
 	struct dw_pcie *pci = pdata->pci;
 	struct dwc_pcie_rasdes_info *rinfo = pci->debugfs->rasdes_info;
 	u32 val, enable;
+	int ret;
 
-	val = kstrtou32_from_user(buf, count, 0, &enable);
-	if (val)
-		return val;
+	ret = kstrtou32_from_user(buf, count, 0, &enable);
+	if (ret)
+		return ret;
 
 	mutex_lock(&rinfo->reg_event_lock);
 	set_event_number(pdata, pci, rinfo);
···
 	struct dw_pcie *pci = pdata->pci;
 	struct dwc_pcie_rasdes_info *rinfo = pci->debugfs->rasdes_info;
 	u32 val, lane;
+	int ret;
 
-	val = kstrtou32_from_user(buf, count, 0, &lane);
-	if (val)
-		return val;
+	ret = kstrtou32_from_user(buf, count, 0, &lane);
+	if (ret)
+		return ret;
 
 	mutex_lock(&rinfo->reg_event_lock);
 	set_event_number(pdata, pci, rinfo);
drivers/pci/controller/dwc/pcie-designware-ep.c (+53 -2)
···
 	val = dw_pcie_ep_readw_dbi(ep, func_no, reg);
 	val &= ~PCI_MSIX_FLAGS_QSIZE;
 	val |= nr_irqs - 1; /* encoded as N-1 */
-	dw_pcie_writew_dbi(pci, reg, val);
+	dw_pcie_ep_writew_dbi(ep, func_no, reg, val);
 
 	reg = ep_func->msix_cap + PCI_MSIX_TABLE;
 	val = offset | bir;
···
 {
 	struct dw_pcie_ep *ep = &pci->ep;
 	u8 funcs = ep->epc->max_functions;
-	u8 func_no;
+	u32 func0_lnkcap, lnkcap;
+	u8 func_no, offset;
 
 	dw_pcie_dbi_ro_wr_en(pci);
···
 		dw_pcie_ep_init_rebar_registers(ep, func_no);
 
 	dw_pcie_setup(pci);
+
+	/*
+	 * PCIe r7.0, section 7.5.3.6 states that for multi-function
+	 * endpoints, max link width and speed fields must report same
+	 * values for all functions. However, dw_pcie_setup() programs
+	 * these fields only for function 0. Hence, mirror these fields
+	 * to all other functions as well.
+	 */
+	if (funcs > 1) {
+		offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
+		func0_lnkcap = dw_pcie_readl_dbi(pci, offset + PCI_EXP_LNKCAP);
+		func0_lnkcap = FIELD_GET(PCI_EXP_LNKCAP_MLW |
+					 PCI_EXP_LNKCAP_SLS, func0_lnkcap);
+
+		for (func_no = 1; func_no < funcs; func_no++) {
+			offset = dw_pcie_ep_find_capability(ep, func_no,
+							    PCI_CAP_ID_EXP);
+			lnkcap = dw_pcie_ep_readl_dbi(ep, func_no,
+						      offset + PCI_EXP_LNKCAP);
+			FIELD_MODIFY(PCI_EXP_LNKCAP_MLW | PCI_EXP_LNKCAP_SLS,
+				     &lnkcap, func0_lnkcap);
+			dw_pcie_ep_writel_dbi(ep, func_no,
+					      offset + PCI_EXP_LNKCAP, lnkcap);
+		}
+	}
+
 	dw_pcie_dbi_ro_wr_dis(pci);
+}
+
+static void dw_pcie_ep_disable_bars(struct dw_pcie_ep *ep)
+{
+	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
+	enum pci_epc_bar_type bar_type;
+	enum pci_barno bar;
+
+	for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) {
+		bar_type = dw_pcie_ep_get_bar_type(ep, bar);
+
+		/*
+		 * Reserved BARs should not get disabled by default. All other
+		 * BAR types are disabled by default.
+		 *
+		 * This is in line with the current EPC core design, where all
+		 * BARs are disabled by default, and then the EPF driver enables
+		 * the BARs it wishes to use.
+		 */
+		if (bar_type != BAR_RESERVED)
+			dw_pcie_ep_reset_bar(pci, bar);
+	}
 }
 
 /**
···
 	if (ep->ops->init)
 		ep->ops->init(ep);
+
+	dw_pcie_ep_disable_bars(ep);
 
 	/*
 	 * PCIe r6.0, section 7.9.15 states that for endpoints that support
drivers/pci/controller/dwc/pcie-designware-host.c (+21 -8)
···
 static void dw_pcie_config_presets(struct dw_pcie_rp *pp)
 {
 	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
-	enum pci_bus_speed speed = pcie_link_speed[pci->max_link_speed];
+	enum pci_bus_speed speed = pcie_get_link_speed(pci->max_link_speed);
 
 	/*
 	 * Lane equalization settings need to be applied for all data rates the
···
 	 * the MSI and MSI-X capabilities of the Root Port to allow the drivers
 	 * to fall back to INTx instead.
 	 */
-	if (pp->use_imsi_rx) {
+	if (pp->use_imsi_rx && !pp->keep_rp_msi_en) {
 		dw_pcie_remove_capability(pci, PCI_CAP_ID_MSI);
 		dw_pcie_remove_capability(pci, PCI_CAP_ID_MSIX);
 	}
···
 				     PCIE_PME_TO_L2_TIMEOUT_US/10,
 				     PCIE_PME_TO_L2_TIMEOUT_US, false, pci);
 	if (ret) {
-		/* Only log message when LTSSM isn't in DETECT or POLL */
-		dev_err(pci->dev, "Timeout waiting for L2 entry! LTSSM: 0x%x\n", val);
-		return ret;
+		/*
+		 * Failure is non-fatal since spec r7.0, sec 5.3.3.2.1,
+		 * recommends proceeding with L2/L3 sequence even if one or more
+		 * devices do not respond with PME_TO_Ack after 10ms timeout.
+		 */
+		dev_warn(pci->dev, "Timeout waiting for L2 entry! LTSSM: 0x%x\n", val);
+		ret = 0;
 	}
 
 	/*
···
 	ret = dw_pcie_start_link(pci);
 	if (ret)
-		return ret;
+		goto err_deinit;
 
 	ret = dw_pcie_wait_for_link(pci);
-	if (ret)
-		return ret;
+	if (ret == -ETIMEDOUT)
+		goto err_stop_link;
 
 	if (pci->pp.ops->post_init)
 		pci->pp.ops->post_init(&pci->pp);
+
+	return 0;
+
+err_stop_link:
+	dw_pcie_stop_link(pci);
+
+err_deinit:
+	if (pci->pp.ops->deinit)
+		pci->pp.ops->deinit(&pci->pp);
 
 	return ret;
 }
drivers/pci/controller/dwc/pcie-designware-plat.c (-10)
···
 static const struct dw_pcie_host_ops dw_plat_pcie_host_ops = {
 };
 
-static void dw_plat_pcie_ep_init(struct dw_pcie_ep *ep)
-{
-	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
-	enum pci_barno bar;
-
-	for (bar = 0; bar < PCI_STD_NUM_BARS; bar++)
-		dw_pcie_ep_reset_bar(pci, bar);
-}
-
 static int dw_plat_pcie_ep_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
 				     unsigned int type, u16 interrupt_num)
 {
···
 }
 
 static const struct dw_pcie_ep_ops pcie_ep_ops = {
-	.init = dw_plat_pcie_ep_init,
 	.raise_irq = dw_plat_pcie_ep_raise_irq,
 	.get_features = dw_plat_pcie_get_features,
 };
drivers/pci/controller/dwc/pcie-designware.c (+9 -9)
···
 static inline u32 dw_pcie_enable_ecrc(u32 val)
 {
 	/*
-	 * DesignWare core version 4.90A has a design issue where the 'TD'
-	 * bit in the Control register-1 of the ATU outbound region acts
-	 * like an override for the ECRC setting, i.e., the presence of TLP
-	 * Digest (ECRC) in the outgoing TLPs is solely determined by this
-	 * bit. This is contrary to the PCIe spec which says that the
-	 * enablement of the ECRC is solely determined by the AER
-	 * registers.
+	 * DWC versions 0x3530302a and 0x3536322a have a design issue where
+	 * the 'TD' bit in the Control register-1 of the ATU outbound
+	 * region acts like an override for the ECRC setting, i.e., the
+	 * presence of TLP Digest (ECRC) in the outgoing TLPs is solely
+	 * determined by this bit. This is contrary to the PCIe spec which
+	 * says that the enablement of the ECRC is solely determined by the
+	 * AER registers.
 	 *
 	 * Because of this, even when the ECRC is enabled through AER
 	 * registers, the transactions going through ATU won't have TLP
···
 	if (upper_32_bits(limit_addr) > upper_32_bits(parent_bus_addr) &&
 	    dw_pcie_ver_is_ge(pci, 460A))
 		val |= PCIE_ATU_INCREASE_REGION_SIZE;
-	if (dw_pcie_ver_is(pci, 490A))
+	if (dw_pcie_ver_is(pci, 490A) || dw_pcie_ver_is(pci, 500A))
 		val = dw_pcie_enable_ecrc(val);
 	dw_pcie_writel_atu_ob(pci, atu->index, PCIE_ATU_REGION_CTRL1, val);
···
 	ctrl2 = dw_pcie_readl_dbi(pci, offset + PCI_EXP_LNKCTL2);
 	ctrl2 &= ~PCI_EXP_LNKCTL2_TLS;
 
-	switch (pcie_link_speed[pci->max_link_speed]) {
+	switch (pcie_get_link_speed(pci->max_link_speed)) {
 	case PCIE_SPEED_2_5GT:
 		link_speed = PCI_EXP_LNKCTL2_TLS_2_5GT;
 		break;
drivers/pci/controller/dwc/pcie-designware.h (+3)
···
 #define DW_PCIE_VER_470A		0x3437302a
 #define DW_PCIE_VER_480A		0x3438302a
 #define DW_PCIE_VER_490A		0x3439302a
+#define DW_PCIE_VER_500A		0x3530302a
 #define DW_PCIE_VER_520A		0x3532302a
 #define DW_PCIE_VER_540A		0x3534302a
+#define DW_PCIE_VER_562A		0x3536322a
 
 #define __dw_pcie_ver_cmp(_pci, _ver, _op) \
 	((_pci)->version _op DW_PCIE_VER_ ## _ver)
···
 struct dw_pcie_rp {
 	bool use_imsi_rx:1;
+	bool keep_rp_msi_en:1;
 	bool cfg0_io_shared:1;
 	u64 cfg0_base;
 	void __iomem *va_cfg0_base;
drivers/pci/controller/dwc/pcie-dw-rockchip.c (+126 -8)
···
 #include <linux/platform_device.h>
 #include <linux/regmap.h>
 #include <linux/reset.h>
+#include <linux/workqueue.h>
+#include <trace/events/pci_controller.h>
 
 #include "../../pci.h"
 #include "pcie-designware.h"
···
 #define PCIE_CLIENT_CDM_RASDES_TBA_L1_1		BIT(4)
 #define PCIE_CLIENT_CDM_RASDES_TBA_L1_2		BIT(5)
 
+/* Debug FIFO information */
+#define PCIE_CLIENT_DBG_FIFO_MODE_CON		0x310
+#define PCIE_CLIENT_DBG_EN			0xffff0007
+#define PCIE_CLIENT_DBG_DIS			0xffff0000
+#define PCIE_CLIENT_DBG_FIFO_PTN_HIT_D0		0x320
+#define PCIE_CLIENT_DBG_FIFO_PTN_HIT_D1		0x324
+#define PCIE_CLIENT_DBG_FIFO_TRN_HIT_D0		0x328
+#define PCIE_CLIENT_DBG_FIFO_TRN_HIT_D1		0x32c
+#define PCIE_CLIENT_DBG_TRANSITION_DATA		0xffff0000
+#define PCIE_CLIENT_DBG_FIFO_STATUS		0x350
+#define PCIE_DBG_FIFO_RATE_MASK			GENMASK(22, 20)
+#define PCIE_DBG_FIFO_L1SUB_MASK		GENMASK(10, 8)
+#define PCIE_DBG_LTSSM_HISTORY_CNT		64
+
 /* Hot Reset Control Register */
 #define PCIE_CLIENT_HOT_RESET_CTRL		0x180
 #define PCIE_LTSSM_APP_DLY2_EN			BIT(1)
···
 	struct irq_domain *irq_domain;
 	const struct rockchip_pcie_of_data *data;
 	bool supports_clkreq;
+	struct delayed_work trace_work;
 };
 
 struct rockchip_pcie_of_data {
···
 	return rockchip_pcie_get_ltssm_reg(rockchip) & PCIE_LTSSM_STATUS_MASK;
 }
 
+#ifdef CONFIG_TRACING
+static void rockchip_pcie_ltssm_trace_work(struct work_struct *work)
+{
+	struct rockchip_pcie *rockchip = container_of(work,
+						      struct rockchip_pcie,
+						      trace_work.work);
+	struct dw_pcie *pci = &rockchip->pci;
+	enum dw_pcie_ltssm state;
+	u32 i, l1ss, prev_val = DW_PCIE_LTSSM_UNKNOWN, rate, val;
+
+	if (!trace_pcie_ltssm_state_transition_enabled())
+		goto skip_trace;
+
+	for (i = 0; i < PCIE_DBG_LTSSM_HISTORY_CNT; i++) {
+		val = rockchip_pcie_readl_apb(rockchip,
+					      PCIE_CLIENT_DBG_FIFO_STATUS);
+		rate = FIELD_GET(PCIE_DBG_FIFO_RATE_MASK, val);
+		l1ss = FIELD_GET(PCIE_DBG_FIFO_L1SUB_MASK, val);
+		val = FIELD_GET(PCIE_LTSSM_STATUS_MASK, val);
+
+		/*
+		 * Hardware Mechanism: The ring FIFO employs two tracking
+		 * counters:
+		 * - 'last-read-point': maintains the user's last read position
+		 * - 'last-valid-point': tracks the HW's last state update
+		 *
+		 * Software Handling: When two consecutive LTSSM states are
+		 * identical, it indicates invalid subsequent data in the FIFO.
+		 * In this case, we skip the remaining entries. The dual counter
+		 * design ensures that on the next state transition, reading can
+		 * resume from the last user position.
+		 */
+		if ((i > 0 && val == prev_val) || val > DW_PCIE_LTSSM_RCVRY_EQ3)
+			break;
+
+		state = prev_val = val;
+		if (val == DW_PCIE_LTSSM_L1_IDLE) {
+			if (l1ss == 2)
+				state = DW_PCIE_LTSSM_L1_2;
+			else if (l1ss == 1)
+				state = DW_PCIE_LTSSM_L1_1;
+		}
+
+		trace_pcie_ltssm_state_transition(dev_name(pci->dev),
+						  dw_pcie_ltssm_status_string(state),
+						  ((rate + 1) > pci->max_link_speed) ?
+						  PCI_SPEED_UNKNOWN : PCIE_SPEED_2_5GT + rate);
+	}
+
+skip_trace:
+	schedule_delayed_work(&rockchip->trace_work, msecs_to_jiffies(5000));
+}
+
+static void rockchip_pcie_ltssm_trace(struct rockchip_pcie *rockchip,
+				      bool enable)
+{
+	if (enable) {
+		rockchip_pcie_writel_apb(rockchip,
+					 PCIE_CLIENT_DBG_TRANSITION_DATA,
+					 PCIE_CLIENT_DBG_FIFO_PTN_HIT_D0);
+		rockchip_pcie_writel_apb(rockchip,
+					 PCIE_CLIENT_DBG_TRANSITION_DATA,
+					 PCIE_CLIENT_DBG_FIFO_PTN_HIT_D1);
+		rockchip_pcie_writel_apb(rockchip,
+					 PCIE_CLIENT_DBG_TRANSITION_DATA,
+					 PCIE_CLIENT_DBG_FIFO_TRN_HIT_D0);
+		rockchip_pcie_writel_apb(rockchip,
+					 PCIE_CLIENT_DBG_TRANSITION_DATA,
+					 PCIE_CLIENT_DBG_FIFO_TRN_HIT_D1);
+		rockchip_pcie_writel_apb(rockchip,
+					 PCIE_CLIENT_DBG_EN,
+					 PCIE_CLIENT_DBG_FIFO_MODE_CON);
+
+		INIT_DELAYED_WORK(&rockchip->trace_work,
+				  rockchip_pcie_ltssm_trace_work);
+		schedule_delayed_work(&rockchip->trace_work, 0);
+	} else {
+		rockchip_pcie_writel_apb(rockchip,
+					 PCIE_CLIENT_DBG_DIS,
+					 PCIE_CLIENT_DBG_FIFO_MODE_CON);
+		cancel_delayed_work_sync(&rockchip->trace_work);
+	}
+}
+#else
+static void rockchip_pcie_ltssm_trace(struct rockchip_pcie *rockchip,
+				      bool enable)
+{
+}
+#endif
+
 static void rockchip_pcie_enable_ltssm(struct rockchip_pcie *rockchip)
 {
 	rockchip_pcie_writel_apb(rockchip, PCIE_CLIENT_ENABLE_LTSSM,
···
 	 * 100us as we don't know how long should the device need to reset.
 	 */
 	msleep(PCIE_T_PVPERL_MS);
+
+	rockchip_pcie_ltssm_trace(rockchip, true);
+
 	gpiod_set_value_cansleep(rockchip->rst_gpio, 1);
 
 	return 0;
···
 	struct rockchip_pcie *rockchip = to_rockchip_pcie(pci);
 
 	rockchip_pcie_disable_ltssm(rockchip);
+	rockchip_pcie_ltssm_trace(rockchip, false);
 }
 
 static int rockchip_pcie_host_init(struct dw_pcie_rp *pp)
···
 static void rockchip_pcie_ep_init(struct dw_pcie_ep *ep)
 {
 	struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
-	enum pci_barno bar;
 
 	rockchip_pcie_enable_l0s(pci);
 	rockchip_pcie_ep_hide_broken_ats_cap_rk3588(ep);
-
-	for (bar = 0; bar < PCI_STD_NUM_BARS; bar++)
-		dw_pcie_ep_reset_bar(pci, bar);
 };
 
 static int rockchip_pcie_raise_irq(struct dw_pcie_ep *ep, u8 func_no,
···
 	.bar[BAR_5] = { .type = BAR_RESIZABLE, },
 };
 
+static const struct pci_epc_bar_rsvd_region rk3588_bar4_rsvd[] = {
+	{
+		/* DMA_CAP (BAR4: DMA Port Logic Structure) */
+		.type = PCI_EPC_BAR_RSVD_DMA_CTRL_MMIO,
+		.offset = 0x0,
+		.size = 0x2000,
+	},
+};
+
 /*
  * BAR4 on rk3588 exposes the ATU Port Logic Structure to the host regardless of
  * iATU settings for BAR4. This means that BAR4 cannot be used by an EPF driver,
- * so mark it as RESERVED. (rockchip_pcie_ep_init() will disable all BARs by
- * default.) If the host could write to BAR4, the iATU settings (for all other
- * BARs) would be overwritten, resulting in (all other BARs) no longer working.
+ * so mark it as RESERVED.
  */
 static const struct pci_epc_features rockchip_pcie_epc_features_rk3588 = {
 	DWC_EPC_COMMON_FEATURES,
···
 	.bar[BAR_1] = { .type = BAR_RESIZABLE, },
 	.bar[BAR_2] = { .type = BAR_RESIZABLE, },
 	.bar[BAR_3] = { .type = BAR_RESIZABLE, },
-	.bar[BAR_4] = { .type = BAR_RESERVED, },
+	.bar[BAR_4] = {
+		.type = BAR_RESERVED,
+		.nr_rsvd_regions = ARRAY_SIZE(rk3588_bar4_rsvd),
+		.rsvd_regions = rk3588_bar4_rsvd,
+	},
 	.bar[BAR_5] = { .type = BAR_RESIZABLE, },
 };
drivers/pci/controller/dwc/pcie-eswin.c (+408, new file)
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * ESWIN PCIe Root Complex driver
+ *
+ * Copyright 2026, Beijing ESWIN Computing Technology Co., Ltd.
+ *
+ * Authors: Yu Ning <ningyu@eswincomputing.com>
+ *	    Senchuan Zhang <zhangsenchuan@eswincomputing.com>
+ *	    Yanghui Ou <ouyanghui@eswincomputing.com>
+ */
+
+#include <linux/interrupt.h>
+#include <linux/iopoll.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/pci.h>
+#include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
+#include <linux/resource.h>
+#include <linux/reset.h>
+#include <linux/types.h>
+
+#include "pcie-designware.h"
+
+/* ELBI registers */
+#define PCIEELBI_CTRL0_OFFSET		0x0
+#define PCIEELBI_STATUS0_OFFSET		0x100
+
+/* LTSSM register fields */
+#define PCIEELBI_APP_LTSSM_ENABLE	BIT(5)
+
+/* APP_HOLD_PHY_RST register fields */
+#define PCIEELBI_APP_HOLD_PHY_RST	BIT(6)
+
+/* PM_SEL_AUX_CLK register fields */
+#define PCIEELBI_PM_SEL_AUX_CLK		BIT(16)
+
+/* DEV_TYPE register fields */
+#define PCIEELBI_CTRL0_DEV_TYPE		GENMASK(3, 0)
+
+/* Vendor and device ID value */
+#define PCI_VENDOR_ID_ESWIN		0x1fe1
+#define PCI_DEVICE_ID_ESWIN_EIC7700	0x2030
+
+#define ESWIN_NUM_RSTS			ARRAY_SIZE(eswin_pcie_rsts)
+
+static const char * const eswin_pcie_rsts[] = {
+	"pwr",
+	"dbi",
+};
+
+struct eswin_pcie_data {
+	bool skip_l23;
+};
+
+struct eswin_pcie_port {
+	struct list_head list;
+	struct reset_control *perst;
+	int num_lanes;
+};
+
+struct eswin_pcie {
+	struct dw_pcie pci;
+	struct clk_bulk_data *clks;
+	struct reset_control_bulk_data resets[ESWIN_NUM_RSTS];
+	struct list_head ports;
+	const struct eswin_pcie_data *data;
+	int num_clks;
+};
+
+#define to_eswin_pcie(x) dev_get_drvdata((x)->dev)
+
+static int eswin_pcie_start_link(struct dw_pcie *pci)
+{
+	u32 val;
+
+	/* Enable LTSSM */
+	val = readl_relaxed(pci->elbi_base + PCIEELBI_CTRL0_OFFSET);
+	val |= PCIEELBI_APP_LTSSM_ENABLE;
+	writel_relaxed(val, pci->elbi_base + PCIEELBI_CTRL0_OFFSET);
+
+	return 0;
+}
+
+static bool eswin_pcie_link_up(struct dw_pcie *pci)
+{
+	u16 offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP);
+	u16 val = dw_pcie_readw_dbi(pci, offset + PCI_EXP_LNKSTA);
+
+	return val & PCI_EXP_LNKSTA_DLLLA;
+}
+
+static int eswin_pcie_perst_reset(struct eswin_pcie_port *port,
+				  struct eswin_pcie *pcie)
+{
+	int ret;
+
+	ret = reset_control_assert(port->perst);
+	if (ret) {
+		dev_err(pcie->pci.dev, "Failed to assert PERST#\n");
+		return ret;
+	}
+
+	/* Ensure that PERST# has been asserted for at least 100 ms */
+	msleep(PCIE_T_PVPERL_MS);
+
+	ret = reset_control_deassert(port->perst);
+	if (ret) {
+		dev_err(pcie->pci.dev, "Failed to deassert PERST#\n");
+		return ret;
+	}
+
+	return 0;
+}
+
+static void eswin_pcie_assert(struct eswin_pcie *pcie)
+{
+	struct eswin_pcie_port *port;
+
+	list_for_each_entry(port, &pcie->ports, list)
+		reset_control_assert(port->perst);
+	reset_control_bulk_assert(ESWIN_NUM_RSTS, pcie->resets);
+}
+
+static int eswin_pcie_parse_port(struct eswin_pcie *pcie,
+				 struct device_node *node)
+{
+	struct device *dev = pcie->pci.dev;
+	struct eswin_pcie_port *port;
+
+	port = devm_kzalloc(dev, sizeof(*port), GFP_KERNEL);
+	if (!port)
+		return -ENOMEM;
+
+	port->perst = of_reset_control_get_exclusive(node, "perst");
+	if (IS_ERR(port->perst)) {
+		dev_err(dev, "Failed to get PERST# reset\n");
+		return PTR_ERR(port->perst);
+	}
+
+	/*
+	 * TODO: Since the Root Port node is separated out by pcie devicetree,
+	 * the DWC core initialization code can't parse the num-lanes attribute
+	 * in the Root Port. Before entering the DWC core initialization code,
+	 * the platform driver code parses the Root Port node. The ESWIN only
+	 * supports one Root Port node, and the num-lanes attribute is suitable
+	 * for the case of one Root Port.
+	 */
+	if (!of_property_read_u32(node, "num-lanes", &port->num_lanes))
+		pcie->pci.num_lanes = port->num_lanes;
+
+	INIT_LIST_HEAD(&port->list);
+	list_add_tail(&port->list, &pcie->ports);
+
+	return 0;
+}
+
+static int eswin_pcie_parse_ports(struct eswin_pcie *pcie)
+{
+	struct eswin_pcie_port *port, *tmp;
+	struct device *dev = pcie->pci.dev;
+	int ret;
+
+	for_each_available_child_of_node_scoped(dev->of_node, of_port) {
+		ret = eswin_pcie_parse_port(pcie, of_port);
+		if (ret)
+			goto err_port;
+	}
+
+	return 0;
+
+err_port:
+	list_for_each_entry_safe(port, tmp, &pcie->ports, list) {
+		reset_control_put(port->perst);
+		list_del(&port->list);
+	}
+
+	return ret;
+}
+
+static int eswin_pcie_host_init(struct dw_pcie_rp *pp)
+{
+	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+	struct eswin_pcie *pcie = to_eswin_pcie(pci);
+	struct eswin_pcie_port *port, *tmp;
+	u32 val;
+	int ret;
+
+	ret = clk_bulk_prepare_enable(pcie->num_clks, pcie->clks);
+	if (ret)
+		return ret;
+
+	/*
+	 * The PWR and DBI reset signals are respectively used to reset the
+	 * PCIe controller and the DBI register.
+	 *
+	 * The PERST# signal is a reset signal that simultaneously controls the
+	 * PCIe controller, PHY, and Endpoint. Before configuring the PHY, the
+	 * PERST# signal must first be deasserted.
+	 *
+	 * The external reference clock is supplied simultaneously to the PHY
+	 * and EP. When the PHY is configurable, the entire chip already has
+	 * stable power and reference clock. The PHY will be ready within 20ms
+	 * after writing app_hold_phy_rst register bit of ELBI register space.
+	 */
+	ret = reset_control_bulk_deassert(ESWIN_NUM_RSTS, pcie->resets);
+	if (ret) {
+		dev_err(pcie->pci.dev, "Failed to deassert resets\n");
+		goto err_deassert;
+	}
+
+	/* Configure Root Port type */
+	val = readl_relaxed(pci->elbi_base + PCIEELBI_CTRL0_OFFSET);
+	val &= ~PCIEELBI_CTRL0_DEV_TYPE;
+	val |= FIELD_PREP(PCIEELBI_CTRL0_DEV_TYPE, PCI_EXP_TYPE_ROOT_PORT);
+	writel_relaxed(val, pci->elbi_base + PCIEELBI_CTRL0_OFFSET);
+
+	list_for_each_entry(port, &pcie->ports, list) {
+		ret = eswin_pcie_perst_reset(port, pcie);
+		if (ret)
+			goto err_perst;
+	}
+
+	/* Configure app_hold_phy_rst */
+	val = readl_relaxed(pci->elbi_base + PCIEELBI_CTRL0_OFFSET);
+	val &= ~PCIEELBI_APP_HOLD_PHY_RST;
+	writel_relaxed(val, pci->elbi_base + PCIEELBI_CTRL0_OFFSET);
+
+	/* The maximum waiting time for the clock switch lock is 20ms */
+	ret = readl_poll_timeout(pci->elbi_base + PCIEELBI_STATUS0_OFFSET, val,
+				 !(val & PCIEELBI_PM_SEL_AUX_CLK), 1000,
+				 20000);
+	if (ret) {
+		dev_err(pci->dev, "Timeout waiting for PM_SEL_AUX_CLK ready\n");
+		goto err_phy_init;
+	}
+
+	/*
+	 * Configure ESWIN VID:DID for Root Port as the default values are
+	 * invalid.
+	 */
+	dw_pcie_dbi_ro_wr_en(pci);
+	dw_pcie_writew_dbi(pci, PCI_VENDOR_ID, PCI_VENDOR_ID_ESWIN);
+	dw_pcie_writew_dbi(pci, PCI_DEVICE_ID, PCI_DEVICE_ID_ESWIN_EIC7700);
+	dw_pcie_dbi_ro_wr_dis(pci);
+
+	return 0;
+
+err_phy_init:
+	list_for_each_entry(port, &pcie->ports, list)
+		reset_control_assert(port->perst);
+err_perst:
+	reset_control_bulk_assert(ESWIN_NUM_RSTS, pcie->resets);
+err_deassert:
+	clk_bulk_disable_unprepare(pcie->num_clks, pcie->clks);
+	list_for_each_entry_safe(port, tmp, &pcie->ports, list) {
+		reset_control_put(port->perst);
+		list_del(&port->list);
+	}
+
+	return ret;
+}
+
+static void eswin_pcie_host_deinit(struct dw_pcie_rp *pp)
+{
+	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+	struct eswin_pcie *pcie = to_eswin_pcie(pci);
+
+	eswin_pcie_assert(pcie);
+	clk_bulk_disable_unprepare(pcie->num_clks, pcie->clks);
+}
+
+static void eswin_pcie_pme_turn_off(struct dw_pcie_rp *pp)
+{
+	struct dw_pcie *pci = to_dw_pcie_from_pp(pp);
+	struct eswin_pcie *pcie = to_eswin_pcie(pci);
+
+	/*
+	 * The ESWIN EIC7700 SoC lacks hardware support for the L2/L3 low-power
+	 * link states. It cannot enter the L2/L3 Ready state through the
+	 * PME_Turn_Off/PME_To_Ack handshake protocol. To avoid this problem,
+	 * the skip_l23_ready has been set.
+	 */
+	pp->skip_l23_ready = pcie->data->skip_l23;
+}
+
+static const struct dw_pcie_host_ops eswin_pcie_host_ops = {
+	.init = eswin_pcie_host_init,
+	.deinit = eswin_pcie_host_deinit,
+	.pme_turn_off = eswin_pcie_pme_turn_off,
+};
+
+static const struct dw_pcie_ops dw_pcie_ops = {
+	.start_link = eswin_pcie_start_link,
+	.link_up = eswin_pcie_link_up,
+};
+
+static int eswin_pcie_probe(struct platform_device *pdev)
+{
+	const struct eswin_pcie_data *data;
+	struct eswin_pcie_port *port, *tmp;
+	struct device *dev = &pdev->dev;
+	struct eswin_pcie *pcie;
+	struct dw_pcie *pci;
+	int ret, i;
+
+	data = of_device_get_match_data(dev);
+	if (!data)
+		return dev_err_probe(dev, -ENODATA, "No platform data\n");
+
+	pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
+	if (!pcie)
+		return -ENOMEM;
+
+	INIT_LIST_HEAD(&pcie->ports);
+
+	pci = &pcie->pci;
+	pci->dev = dev;
+	pci->ops = &dw_pcie_ops;
+	pci->pp.ops = &eswin_pcie_host_ops;
+	pcie->data = data;
+
+	pcie->num_clks = devm_clk_bulk_get_all(dev, &pcie->clks);
+	if (pcie->num_clks < 0)
+		return dev_err_probe(dev, pcie->num_clks,
+				     "Failed to get pcie clocks\n");
+
+	for (i = 0; i < ESWIN_NUM_RSTS; i++)
+		pcie->resets[i].id = eswin_pcie_rsts[i];
+
+	ret = devm_reset_control_bulk_get_exclusive(dev, ESWIN_NUM_RSTS,
+						    pcie->resets);
+	if (ret)
+		return dev_err_probe(dev, ret, "Failed to get resets\n");
+
+	ret = eswin_pcie_parse_ports(pcie);
+	if (ret)
+		return dev_err_probe(dev, ret, "Failed to parse Root Port\n");
+
+	platform_set_drvdata(pdev, pcie);
+
+	pm_runtime_no_callbacks(dev);
+	devm_pm_runtime_enable(dev);
+	ret = pm_runtime_get_sync(dev);
+	if (ret < 0)
+		goto err_pm_runtime_put;
+
+	ret = dw_pcie_host_init(&pci->pp);
+	if (ret) {
+		dev_err(dev, "Failed to init host\n");
+		goto err_init;
+	}
+
+	return 0;
+
+err_pm_runtime_put:
+	list_for_each_entry_safe(port, tmp, &pcie->ports, list) {
+		reset_control_put(port->perst);
+		list_del(&port->list);
+	}
+err_init:
+	pm_runtime_put(dev);
+
+	return ret;
+}
+
+static int eswin_pcie_suspend_noirq(struct device *dev)
+{
+	struct eswin_pcie *pcie = dev_get_drvdata(dev);
+
+	return dw_pcie_suspend_noirq(&pcie->pci);
+}
+
+static int eswin_pcie_resume_noirq(struct device *dev)
+{
+	struct eswin_pcie *pcie = dev_get_drvdata(dev);
+
+	return dw_pcie_resume_noirq(&pcie->pci);
+}
+
+static DEFINE_NOIRQ_DEV_PM_OPS(eswin_pcie_pm, eswin_pcie_suspend_noirq,
+			       eswin_pcie_resume_noirq);
+
+static const struct eswin_pcie_data eswin_eic7700_data = {
+	.skip_l23 = true,
+};
+
+static const struct of_device_id eswin_pcie_of_match[] = {
+	{ .compatible = "eswin,eic7700-pcie", .data = &eswin_eic7700_data },
+	{}
+};
+
+static struct platform_driver eswin_pcie_driver = {
+	.probe = eswin_pcie_probe,
+	.driver = {
+		.name = "eswin-pcie",
+		.of_match_table = eswin_pcie_of_match,
+		.suppress_bind_attrs = true,
+		.pm = &eswin_pcie_pm,
+	},
+};
+builtin_platform_driver(eswin_pcie_driver);
+
+MODULE_DESCRIPTION("ESWIN PCIe Root Complex driver");
+MODULE_AUTHOR("Yu Ning <ningyu@eswincomputing.com>");
+MODULE_AUTHOR("Senchuan Zhang <zhangsenchuan@eswincomputing.com>");
+MODULE_AUTHOR("Yanghui Ou <ouyanghui@eswincomputing.com>");
+MODULE_LICENSE("GPL");
-3
drivers/pci/controller/dwc/pcie-keembay.c
··· 313 313 .msi_capable = true, 314 314 .msix_capable = true, 315 315 .bar[BAR_0] = { .only_64bit = true, }, 316 - .bar[BAR_1] = { .type = BAR_RESERVED, }, 317 316 .bar[BAR_2] = { .only_64bit = true, }, 318 - .bar[BAR_3] = { .type = BAR_RESERVED, }, 319 317 .bar[BAR_4] = { .only_64bit = true, }, 320 - .bar[BAR_5] = { .type = BAR_RESERVED, }, 321 318 .align = SZ_16K, 322 319 }; 323 320
+1 -1
drivers/pci/controller/dwc/pcie-qcom-common.c
··· 22 22 * applied. 23 23 */ 24 24 25 - for (speed = PCIE_SPEED_8_0GT; speed <= pcie_link_speed[pci->max_link_speed]; speed++) { 25 + for (speed = PCIE_SPEED_8_0GT; speed <= pcie_get_link_speed(pci->max_link_speed); speed++) { 26 26 if (speed > PCIE_SPEED_32_0GT) { 27 27 dev_warn(dev, "Skipped equalization settings for unsupported data rate\n"); 28 28 break;
+2 -14
drivers/pci/controller/dwc/pcie-qcom-ep.c
··· 152 152 #define WAKE_DELAY_US 2000 /* 2 ms */ 153 153 154 154 #define QCOM_PCIE_LINK_SPEED_TO_BW(speed) \ 155 - Mbps_to_icc(PCIE_SPEED2MBS_ENC(pcie_link_speed[speed])) 155 + Mbps_to_icc(PCIE_SPEED2MBS_ENC(pcie_get_link_speed(speed))) 156 156 157 157 #define to_pcie_ep(x) dev_get_drvdata((x)->dev) 158 158 ··· 531 531 532 532 qcom_pcie_common_set_equalization(pci); 533 533 534 - if (pcie_link_speed[pci->max_link_speed] == PCIE_SPEED_16_0GT) 534 + if (pcie_get_link_speed(pci->max_link_speed) == PCIE_SPEED_16_0GT) 535 535 qcom_pcie_common_set_16gt_lane_margining(pci); 536 536 537 537 /* ··· 850 850 .msi_capable = true, 851 851 .align = SZ_4K, 852 852 .bar[BAR_0] = { .only_64bit = true, }, 853 - .bar[BAR_1] = { .type = BAR_RESERVED, }, 854 853 .bar[BAR_2] = { .only_64bit = true, }, 855 - .bar[BAR_3] = { .type = BAR_RESERVED, }, 856 854 }; 857 855 858 856 static const struct pci_epc_features * ··· 859 861 return &qcom_pcie_epc_features; 860 862 } 861 863 862 - static void qcom_pcie_ep_init(struct dw_pcie_ep *ep) 863 - { 864 - struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 865 - enum pci_barno bar; 866 - 867 - for (bar = BAR_0; bar <= BAR_5; bar++) 868 - dw_pcie_ep_reset_bar(pci, bar); 869 - } 870 - 871 864 static const struct dw_pcie_ep_ops pci_ep_ops = { 872 - .init = qcom_pcie_ep_init, 873 865 .raise_irq = qcom_pcie_ep_raise_irq, 874 866 .get_features = qcom_pcie_epc_get_features, 875 867 };
+14 -9
drivers/pci/controller/dwc/pcie-qcom.c
··· 170 170 #define QCOM_PCIE_CRC8_POLYNOMIAL (BIT(2) | BIT(1) | BIT(0)) 171 171 172 172 #define QCOM_PCIE_LINK_SPEED_TO_BW(speed) \ 173 - Mbps_to_icc(PCIE_SPEED2MBS_ENC(pcie_link_speed[speed])) 173 + Mbps_to_icc(PCIE_SPEED2MBS_ENC(pcie_get_link_speed(speed))) 174 174 175 175 struct qcom_pcie_resources_1_0_0 { 176 176 struct clk_bulk_data *clks; ··· 320 320 321 321 qcom_pcie_common_set_equalization(pci); 322 322 323 - if (pcie_link_speed[pci->max_link_speed] == PCIE_SPEED_16_0GT) 323 + if (pcie_get_link_speed(pci->max_link_speed) == PCIE_SPEED_16_0GT) 324 324 qcom_pcie_common_set_16gt_lane_margining(pci); 325 325 326 326 /* Enable Link Training state machine */ ··· 350 350 dw_pcie_dbi_ro_wr_dis(pci); 351 351 } 352 352 353 - static void qcom_pcie_clear_hpc(struct dw_pcie *pci) 353 + static void qcom_pcie_set_slot_nccs(struct dw_pcie *pci) 354 354 { 355 355 u16 offset = dw_pcie_find_capability(pci, PCI_CAP_ID_EXP); 356 356 u32 val; 357 357 358 358 dw_pcie_dbi_ro_wr_en(pci); 359 359 360 + /* 361 + * Qcom PCIe Root Ports do not support generating command completion 362 + * notifications for the Hot-Plug commands. So set the NCCS field to 363 + * avoid waiting for the completions. 
364 + */ 360 365 val = readl(pci->dbi_base + offset + PCI_EXP_SLTCAP); 361 - val &= ~PCI_EXP_SLTCAP_HPC; 366 + val |= PCI_EXP_SLTCAP_NCCS; 362 367 writel(val, pci->dbi_base + offset + PCI_EXP_SLTCAP); 363 368 364 369 dw_pcie_dbi_ro_wr_dis(pci); ··· 563 558 writel(CFG_BRIDGE_SB_INIT, 564 559 pci->dbi_base + AXI_MSTR_RESP_COMP_CTRL1); 565 560 566 - qcom_pcie_clear_hpc(pcie->pci); 561 + qcom_pcie_set_slot_nccs(pcie->pci); 567 562 568 563 return 0; 569 564 } ··· 643 638 writel(val, pcie->parf + PARF_AXI_MSTR_WR_ADDR_HALT); 644 639 } 645 640 646 - qcom_pcie_clear_hpc(pcie->pci); 641 + qcom_pcie_set_slot_nccs(pcie->pci); 647 642 648 643 return 0; 649 644 } ··· 736 731 val |= EN; 737 732 writel(val, pcie->parf + PARF_AXI_MSTR_WR_ADDR_HALT_V2); 738 733 739 - qcom_pcie_clear_hpc(pcie->pci); 734 + qcom_pcie_set_slot_nccs(pcie->pci); 740 735 741 736 return 0; 742 737 } ··· 1042 1037 writel(WR_NO_SNOOP_OVERRIDE_EN | RD_NO_SNOOP_OVERRIDE_EN, 1043 1038 pcie->parf + PARF_NO_SNOOP_OVERRIDE); 1044 1039 1045 - qcom_pcie_clear_hpc(pcie->pci); 1040 + qcom_pcie_set_slot_nccs(pcie->pci); 1046 1041 1047 1042 return 0; 1048 1043 } ··· 1584 1579 ret); 1585 1580 } 1586 1581 } else if (pcie->use_pm_opp) { 1587 - freq_mbps = pcie_dev_speed_mbps(pcie_link_speed[speed]); 1582 + freq_mbps = pcie_dev_speed_mbps(pcie_get_link_speed(speed)); 1588 1583 if (freq_mbps < 0) 1589 1584 return; 1590 1585
+6 -14
drivers/pci/controller/dwc/pcie-rcar-gen4.c
··· 386 386 writel(PCIEDMAINTSTSEN_INIT, rcar->base + PCIEDMAINTSTSEN); 387 387 } 388 388 389 - static void rcar_gen4_pcie_ep_init(struct dw_pcie_ep *ep) 390 - { 391 - struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 392 - enum pci_barno bar; 393 - 394 - for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) 395 - dw_pcie_ep_reset_bar(pci, bar); 396 - } 397 - 398 389 static void rcar_gen4_pcie_ep_deinit(struct rcar_gen4_pcie *rcar) 399 390 { 400 391 writel(0, rcar->base + PCIEDMAINTSTSEN); ··· 413 422 static const struct pci_epc_features rcar_gen4_pcie_epc_features = { 414 423 DWC_EPC_COMMON_FEATURES, 415 424 .msi_capable = true, 416 - .bar[BAR_1] = { .type = BAR_RESERVED, }, 417 - .bar[BAR_3] = { .type = BAR_RESERVED, }, 425 + .bar[BAR_0] = { .type = BAR_RESIZABLE, }, 426 + .bar[BAR_1] = { .type = BAR_DISABLED, }, 427 + .bar[BAR_2] = { .type = BAR_RESIZABLE, }, 428 + .bar[BAR_3] = { .type = BAR_DISABLED, }, 418 429 .bar[BAR_4] = { .type = BAR_FIXED, .fixed_size = 256 }, 419 - .bar[BAR_5] = { .type = BAR_RESERVED, }, 420 - .align = SZ_1M, 430 + .bar[BAR_5] = { .type = BAR_DISABLED, }, 431 + .align = SZ_4K, 421 432 }; 422 433 423 434 static const struct pci_epc_features* ··· 442 449 443 450 static const struct dw_pcie_ep_ops pcie_ep_ops = { 444 451 .pre_init = rcar_gen4_pcie_ep_pre_init, 445 - .init = rcar_gen4_pcie_ep_init, 446 452 .raise_irq = rcar_gen4_pcie_ep_raise_irq, 447 453 .get_features = rcar_gen4_pcie_ep_get_features, 448 454 .get_dbi_offset = rcar_gen4_pcie_ep_get_dbi_offset,
-10
drivers/pci/controller/dwc/pcie-stm32-ep.c
··· 28 28 unsigned int perst_irq; 29 29 }; 30 30 31 - static void stm32_pcie_ep_init(struct dw_pcie_ep *ep) 32 - { 33 - struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 34 - enum pci_barno bar; 35 - 36 - for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) 37 - dw_pcie_ep_reset_bar(pci, bar); 38 - } 39 - 40 31 static int stm32_pcie_start_link(struct dw_pcie *pci) 41 32 { 42 33 struct stm32_pcie *stm32_pcie = to_stm32_pcie(pci); ··· 73 82 } 74 83 75 84 static const struct dw_pcie_ep_ops stm32_pcie_ep_ops = { 76 - .init = stm32_pcie_ep_init, 77 85 .raise_irq = stm32_pcie_raise_irq, 78 86 .get_features = stm32_pcie_get_features, 79 87 };
+188 -114
drivers/pci/controller/dwc/pcie-tegra194.c
··· 35 35 #include <soc/tegra/bpmp-abi.h> 36 36 #include "../../pci.h" 37 37 38 - #define TEGRA194_DWC_IP_VER 0x490A 39 - #define TEGRA234_DWC_IP_VER 0x562A 38 + #define TEGRA194_DWC_IP_VER DW_PCIE_VER_500A 39 + #define TEGRA234_DWC_IP_VER DW_PCIE_VER_562A 40 40 41 41 #define APPL_PINMUX 0x0 42 42 #define APPL_PINMUX_PEX_RST BIT(0) ··· 44 44 #define APPL_PINMUX_CLKREQ_OVERRIDE BIT(3) 45 45 #define APPL_PINMUX_CLK_OUTPUT_IN_OVERRIDE_EN BIT(4) 46 46 #define APPL_PINMUX_CLK_OUTPUT_IN_OVERRIDE BIT(5) 47 + #define APPL_PINMUX_CLKREQ_DEFAULT_VALUE BIT(13) 47 48 48 49 #define APPL_CTRL 0x4 49 50 #define APPL_CTRL_SYS_PRE_DET_STATE BIT(6) ··· 91 90 #define APPL_INTR_EN_L1_8_0 0x44 92 91 #define APPL_INTR_EN_L1_8_BW_MGT_INT_EN BIT(2) 93 92 #define APPL_INTR_EN_L1_8_AUTO_BW_INT_EN BIT(3) 93 + #define APPL_INTR_EN_L1_8_EDMA_INT_EN BIT(6) 94 94 #define APPL_INTR_EN_L1_8_INTX_EN BIT(11) 95 95 #define APPL_INTR_EN_L1_8_AER_INT_EN BIT(15) 96 96 ··· 139 137 #define APPL_DEBUG_PM_LINKST_IN_L0 0x11 140 138 #define APPL_DEBUG_LTSSM_STATE_MASK GENMASK(8, 3) 141 139 #define APPL_DEBUG_LTSSM_STATE_SHIFT 3 142 - #define LTSSM_STATE_PRE_DETECT 5 140 + #define LTSSM_STATE_DETECT_QUIET 0x00 141 + #define LTSSM_STATE_DETECT_ACT 0x08 142 + #define LTSSM_STATE_PRE_DETECT_QUIET 0x28 143 + #define LTSSM_STATE_DETECT_WAIT 0x30 144 + #define LTSSM_STATE_L2_IDLE 0xa8 143 145 144 146 #define APPL_RADM_STATUS 0xE4 145 147 #define APPL_PM_XMT_TURNOFF_STATE BIT(0) ··· 204 198 #define CAP_SPCIE_CAP_OFF_USP_TX_PRESET0_MASK GENMASK(11, 8) 205 199 #define CAP_SPCIE_CAP_OFF_USP_TX_PRESET0_SHIFT 8 206 200 207 - #define PME_ACK_TIMEOUT 10000 208 - 209 - #define LTSSM_TIMEOUT 50000 /* 50ms */ 201 + #define LTSSM_DELAY_US 10000 /* 10 ms */ 202 + #define LTSSM_TIMEOUT_US 120000 /* 120 ms */ 210 203 211 204 #define GEN3_GEN4_EQ_PRESET_INIT 5 212 205 ··· 236 231 bool has_sbr_reset_fix; 237 232 bool has_l1ss_exit_fix; 238 233 bool has_ltr_req_fix; 234 + bool disable_l1_2; 239 235 u32 cdm_chk_int_en_bit; 240 236 u32 
gen4_preset_vec; 241 237 u8 n_fts[2]; ··· 249 243 struct resource *atu_dma_res; 250 244 void __iomem *appl_base; 251 245 struct clk *core_clk; 246 + struct clk *core_clk_m; 252 247 struct reset_control *core_apb_rst; 253 248 struct reset_control *core_rst; 254 249 struct dw_pcie pci; ··· 317 310 speed = FIELD_GET(PCI_EXP_LNKSTA_CLS, val); 318 311 width = FIELD_GET(PCI_EXP_LNKSTA_NLW, val); 319 312 320 - val = width * PCIE_SPEED2MBS_ENC(pcie_link_speed[speed]); 313 + val = width * PCIE_SPEED2MBS_ENC(pcie_get_link_speed(speed)); 321 314 322 315 if (icc_set_bw(pcie->icc_path, Mbps_to_icc(val), 0)) 323 316 dev_err(pcie->dev, "can't set bw[%u]\n", val); ··· 489 482 if (val & PCI_COMMAND_MASTER) { 490 483 ktime_t timeout; 491 484 492 - /* 110us for both snoop and no-snoop */ 493 - val = FIELD_PREP(PCI_LTR_VALUE_MASK, 110) | 494 - FIELD_PREP(PCI_LTR_SCALE_MASK, 2) | 495 - LTR_MSG_REQ | 496 - FIELD_PREP(PCI_LTR_NOSNOOP_VALUE, 110) | 497 - FIELD_PREP(PCI_LTR_NOSNOOP_SCALE, 2) | 498 - LTR_NOSNOOP_MSG_REQ; 499 - appl_writel(pcie, val, APPL_LTR_MSG_1); 500 - 501 485 /* Send LTR upstream */ 502 486 val = appl_readl(pcie, APPL_LTR_MSG_2); 503 487 val |= APPL_LTR_MSG_2_LTR_MSG_REQ_STATE; ··· 544 546 return IRQ_WAKE_THREAD; 545 547 546 548 spurious = 0; 549 + } 550 + 551 + if (status_l0 & APPL_INTR_STATUS_L0_INT_INT) { 552 + status_l1 = appl_readl(pcie, APPL_INTR_STATUS_L1_8_0); 553 + 554 + /* 555 + * Interrupt is handled by DMA driver; don't treat it as 556 + * spurious 557 + */ 558 + if (status_l1 & APPL_INTR_STATUS_L1_8_0_EDMA_INT_MASK) 559 + spurious = 0; 547 560 } 548 561 549 562 if (spurious) { ··· 694 685 if (pcie->supports_clkreq) 695 686 pci->l1ss_support = true; 696 687 688 + /* 689 + * Disable L1.2 capability advertisement for Tegra234 Endpoint mode. 690 + * Tegra234 has a hardware bug where during L1.2 exit, the UPHY PLL is 691 + * powered up immediately without waiting for REFCLK to stabilize. 
This 692 + * causes the PLL to fail to lock to the correct frequency, resulting in 693 + * PCIe link loss. Since there is no hardware fix available, we prevent 694 + * the Endpoint from advertising L1.2 support by clearing the L1.2 bits 695 + * in the L1 PM Substates Capabilities register. This ensures the host 696 + * will not attempt to enter L1.2 state with this Endpoint. 697 + */ 698 + if (pcie->of_data->disable_l1_2 && 699 + pcie->of_data->mode == DW_PCIE_EP_TYPE) { 700 + val = dw_pcie_readl_dbi(pci, l1ss + PCI_L1SS_CAP); 701 + val &= ~(PCI_L1SS_CAP_PCIPM_L1_2 | PCI_L1SS_CAP_ASPM_L1_2); 702 + dw_pcie_writel_dbi(pci, l1ss + PCI_L1SS_CAP, val); 703 + } 704 + 697 705 /* Program L0s and L1 entrance latencies */ 698 706 val = dw_pcie_readl_dbi(pci, PCIE_PORT_AFR); 699 707 val &= ~PORT_AFR_L0S_ENTRANCE_LAT_MASK; ··· 793 767 val |= APPL_INTR_EN_L1_8_INTX_EN; 794 768 val |= APPL_INTR_EN_L1_8_AUTO_BW_INT_EN; 795 769 val |= APPL_INTR_EN_L1_8_BW_MGT_INT_EN; 770 + val |= APPL_INTR_EN_L1_8_EDMA_INT_EN; 796 771 if (IS_ENABLED(CONFIG_PCIEAER)) 797 772 val |= APPL_INTR_EN_L1_8_AER_INT_EN; 798 773 appl_writel(pcie, val, APPL_INTR_EN_L1_8_0); ··· 951 924 } 952 925 953 926 clk_set_rate(pcie->core_clk, GEN4_CORE_CLK_FREQ); 927 + if (clk_prepare_enable(pcie->core_clk_m)) 928 + dev_err(pci->dev, "Failed to enable core monitor clock\n"); 954 929 955 930 return 0; 956 931 } ··· 1025 996 val &= ~PCI_DLF_EXCHANGE_ENABLE; 1026 997 dw_pcie_writel_dbi(pci, offset + PCI_DLF_CAP, val); 1027 998 999 + /* 1000 + * core_clk_m is enabled as part of host_init callback in 1001 + * dw_pcie_host_init(). Disable the clock since below 1002 + * tegra_pcie_dw_host_init() will enable it again. 
1003 + */ 1004 + clk_disable_unprepare(pcie->core_clk_m); 1028 1005 tegra_pcie_dw_host_init(pp); 1029 1006 dw_pcie_setup_rc(pp); 1030 1007 ··· 1057 1022 { 1058 1023 struct tegra_pcie_dw *pcie = to_tegra_pcie(pci); 1059 1024 1060 - disable_irq(pcie->pex_rst_irq); 1025 + if (pcie->of_data->mode == DW_PCIE_EP_TYPE) 1026 + disable_irq(pcie->pex_rst_irq); 1061 1027 } 1062 1028 1063 1029 static const struct dw_pcie_ops tegra_dw_pcie_ops = { ··· 1094 1058 ret = phy_power_on(pcie->phys[i]); 1095 1059 if (ret < 0) 1096 1060 goto phy_exit; 1061 + 1062 + if (pcie->of_data->mode == DW_PCIE_EP_TYPE) 1063 + phy_calibrate(pcie->phys[i]); 1097 1064 } 1098 1065 1099 1066 return 0; ··· 1202 1163 return err; 1203 1164 } 1204 1165 1205 - pcie->pex_refclk_sel_gpiod = devm_gpiod_get(pcie->dev, 1206 - "nvidia,refclk-select", 1207 - GPIOD_OUT_HIGH); 1166 + pcie->pex_refclk_sel_gpiod = devm_gpiod_get_optional(pcie->dev, 1167 + "nvidia,refclk-select", 1168 + GPIOD_OUT_HIGH); 1208 1169 if (IS_ERR(pcie->pex_refclk_sel_gpiod)) { 1209 1170 int err = PTR_ERR(pcie->pex_refclk_sel_gpiod); 1210 1171 const char *level = KERN_ERR; ··· 1292 1253 return -EINVAL; 1293 1254 1294 1255 return 0; 1295 - } 1296 - 1297 - static void tegra_pcie_downstream_dev_to_D0(struct tegra_pcie_dw *pcie) 1298 - { 1299 - struct dw_pcie_rp *pp = &pcie->pci.pp; 1300 - struct pci_bus *child, *root_port_bus = NULL; 1301 - struct pci_dev *pdev; 1302 - 1303 - /* 1304 - * link doesn't go into L2 state with some of the endpoints with Tegra 1305 - * if they are not in D0 state. So, need to make sure that immediate 1306 - * downstream devices are in D0 state before sending PME_TurnOff to put 1307 - * link into L2 state. 1308 - * This is as per PCI Express Base r4.0 v1.0 September 27-2017, 1309 - * 5.2 Link State Power Management (Page #428). 
1310 - */ 1311 - 1312 - list_for_each_entry(child, &pp->bridge->bus->children, node) { 1313 - if (child->parent == pp->bridge->bus) { 1314 - root_port_bus = child; 1315 - break; 1316 - } 1317 - } 1318 - 1319 - if (!root_port_bus) { 1320 - dev_err(pcie->dev, "Failed to find downstream bus of Root Port\n"); 1321 - return; 1322 - } 1323 - 1324 - /* Bring downstream devices to D0 if they are not already in */ 1325 - list_for_each_entry(pdev, &root_port_bus->devices, bus_list) { 1326 - if (PCI_SLOT(pdev->devfn) == 0) { 1327 - if (pci_set_power_state(pdev, PCI_D0)) 1328 - dev_err(pcie->dev, 1329 - "Failed to transition %s to D0 state\n", 1330 - dev_name(&pdev->dev)); 1331 - } 1332 - } 1333 1256 } 1334 1257 1335 1258 static int tegra_pcie_get_slot_regulators(struct tegra_pcie_dw *pcie) ··· 1455 1454 val = appl_readl(pcie, APPL_PINMUX); 1456 1455 val |= APPL_PINMUX_CLKREQ_OVERRIDE_EN; 1457 1456 val &= ~APPL_PINMUX_CLKREQ_OVERRIDE; 1457 + val &= ~APPL_PINMUX_CLKREQ_DEFAULT_VALUE; 1458 1458 appl_writel(pcie, val, APPL_PINMUX); 1459 1459 } 1460 1460 ··· 1555 1553 val |= APPL_PM_XMT_TURNOFF_STATE; 1556 1554 appl_writel(pcie, val, APPL_RADM_STATUS); 1557 1555 1558 - return readl_poll_timeout_atomic(pcie->appl_base + APPL_DEBUG, val, 1559 - val & APPL_DEBUG_PM_LINKST_IN_L2_LAT, 1560 - 1, PME_ACK_TIMEOUT); 1556 + return readl_poll_timeout(pcie->appl_base + APPL_DEBUG, val, 1557 + val & APPL_DEBUG_PM_LINKST_IN_L2_LAT, 1558 + PCIE_PME_TO_L2_TIMEOUT_US/10, 1559 + PCIE_PME_TO_L2_TIMEOUT_US); 1561 1560 } 1562 1561 1563 1562 static void tegra_pcie_dw_pme_turnoff(struct tegra_pcie_dw *pcie) ··· 1593 1590 data &= ~APPL_PINMUX_PEX_RST; 1594 1591 appl_writel(pcie, data, APPL_PINMUX); 1595 1592 1593 + err = readl_poll_timeout(pcie->appl_base + APPL_DEBUG, data, 1594 + ((data & APPL_DEBUG_LTSSM_STATE_MASK) == LTSSM_STATE_DETECT_QUIET) || 1595 + ((data & APPL_DEBUG_LTSSM_STATE_MASK) == LTSSM_STATE_DETECT_ACT) || 1596 + ((data & APPL_DEBUG_LTSSM_STATE_MASK) == LTSSM_STATE_PRE_DETECT_QUIET) || 
1597 + ((data & APPL_DEBUG_LTSSM_STATE_MASK) == LTSSM_STATE_DETECT_WAIT), 1598 + LTSSM_DELAY_US, LTSSM_TIMEOUT_US); 1599 + if (err) 1600 + dev_info(pcie->dev, "LTSSM state: 0x%x detect timeout: %d\n", data, err); 1601 + 1596 1602 /* 1597 - * Some cards do not go to detect state even after de-asserting 1598 - * PERST#. So, de-assert LTSSM to bring link to detect state. 1603 + * Deassert LTSSM state to stop the state toggling between 1604 + * Polling and Detect. 1599 1605 */ 1600 1606 data = readl(pcie->appl_base + APPL_CTRL); 1601 1607 data &= ~APPL_CTRL_LTSSM_EN; 1602 1608 writel(data, pcie->appl_base + APPL_CTRL); 1603 - 1604 - err = readl_poll_timeout_atomic(pcie->appl_base + APPL_DEBUG, 1605 - data, 1606 - ((data & 1607 - APPL_DEBUG_LTSSM_STATE_MASK) >> 1608 - APPL_DEBUG_LTSSM_STATE_SHIFT) == 1609 - LTSSM_STATE_PRE_DETECT, 1610 - 1, LTSSM_TIMEOUT); 1611 - if (err) 1612 - dev_info(pcie->dev, "Link didn't go to detect state\n"); 1613 1609 } 1614 1610 /* 1615 1611 * DBI registers may not be accessible after this as PLL-E would be ··· 1624 1622 1625 1623 static void tegra_pcie_deinit_controller(struct tegra_pcie_dw *pcie) 1626 1624 { 1627 - tegra_pcie_downstream_dev_to_D0(pcie); 1625 + clk_disable_unprepare(pcie->core_clk_m); 1628 1626 dw_pcie_host_deinit(&pcie->pci.pp); 1629 1627 tegra_pcie_dw_pme_turnoff(pcie); 1630 1628 tegra_pcie_unconfig_controller(pcie); ··· 1682 1680 if (pcie->ep_state == EP_STATE_DISABLED) 1683 1681 return; 1684 1682 1685 - /* Disable LTSSM */ 1683 + ret = readl_poll_timeout(pcie->appl_base + APPL_DEBUG, val, 1684 + ((val & APPL_DEBUG_LTSSM_STATE_MASK) == LTSSM_STATE_DETECT_QUIET) || 1685 + ((val & APPL_DEBUG_LTSSM_STATE_MASK) == LTSSM_STATE_DETECT_ACT) || 1686 + ((val & APPL_DEBUG_LTSSM_STATE_MASK) == LTSSM_STATE_PRE_DETECT_QUIET) || 1687 + ((val & APPL_DEBUG_LTSSM_STATE_MASK) == LTSSM_STATE_DETECT_WAIT) || 1688 + ((val & APPL_DEBUG_LTSSM_STATE_MASK) == LTSSM_STATE_L2_IDLE), 1689 + LTSSM_DELAY_US, LTSSM_TIMEOUT_US); 1690 + if (ret) 1691 + 
dev_info(pcie->dev, "LTSSM state: 0x%x detect timeout: %d\n", val, ret); 1692 + 1693 + /* 1694 + * Deassert LTSSM state to stop the state toggling between 1695 + * Polling and Detect. 1696 + */ 1686 1697 val = appl_readl(pcie, APPL_CTRL); 1687 1698 val &= ~APPL_CTRL_LTSSM_EN; 1688 1699 appl_writel(pcie, val, APPL_CTRL); 1689 - 1690 - ret = readl_poll_timeout(pcie->appl_base + APPL_DEBUG, val, 1691 - ((val & APPL_DEBUG_LTSSM_STATE_MASK) >> 1692 - APPL_DEBUG_LTSSM_STATE_SHIFT) == 1693 - LTSSM_STATE_PRE_DETECT, 1694 - 1, LTSSM_TIMEOUT); 1695 - if (ret) 1696 - dev_err(pcie->dev, "Failed to go Detect state: %d\n", ret); 1697 1700 1698 1701 reset_control_assert(pcie->core_rst); 1699 1702 ··· 1778 1771 goto fail_phy; 1779 1772 } 1780 1773 1781 - /* Perform cleanup that requires refclk */ 1782 - pci_epc_deinit_notify(pcie->pci.ep.epc); 1783 - dw_pcie_ep_cleanup(&pcie->pci.ep); 1784 - 1785 1774 /* Clear any stale interrupt statuses */ 1786 1775 appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L0); 1787 1776 appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_0_0); ··· 1806 1803 val = appl_readl(pcie, APPL_CTRL); 1807 1804 val |= APPL_CTRL_SYS_PRE_DET_STATE; 1808 1805 val |= APPL_CTRL_HW_HOT_RST_EN; 1806 + val &= ~(APPL_CTRL_HW_HOT_RST_MODE_MASK << APPL_CTRL_HW_HOT_RST_MODE_SHIFT); 1807 + val |= (APPL_CTRL_HW_HOT_RST_MODE_IMDT_RST_LTSSM_EN << APPL_CTRL_HW_HOT_RST_MODE_SHIFT); 1809 1808 appl_writel(pcie, val, APPL_CTRL); 1810 1809 1811 1810 val = appl_readl(pcie, APPL_CFG_MISC); ··· 1831 1826 val |= APPL_INTR_EN_L0_0_SYS_INTR_EN; 1832 1827 val |= APPL_INTR_EN_L0_0_LINK_STATE_INT_EN; 1833 1828 val |= APPL_INTR_EN_L0_0_PCI_CMD_EN_INT_EN; 1829 + val |= APPL_INTR_EN_L0_0_INT_INT_EN; 1834 1830 appl_writel(pcie, val, APPL_INTR_EN_L0_0); 1835 1831 1836 1832 val = appl_readl(pcie, APPL_INTR_EN_L1_0_0); ··· 1839 1833 val |= APPL_INTR_EN_L1_0_0_RDLH_LINK_UP_INT_EN; 1840 1834 appl_writel(pcie, val, APPL_INTR_EN_L1_0_0); 1841 1835 1836 + val = appl_readl(pcie, APPL_INTR_EN_L1_8_0); 1837 + 
val |= APPL_INTR_EN_L1_8_EDMA_INT_EN; 1838 + appl_writel(pcie, val, APPL_INTR_EN_L1_8_0); 1839 + 1840 + /* 110us for both snoop and no-snoop */ 1841 + val = FIELD_PREP(PCI_LTR_VALUE_MASK, 110) | 1842 + FIELD_PREP(PCI_LTR_SCALE_MASK, 2) | 1843 + LTR_MSG_REQ | 1844 + FIELD_PREP(PCI_LTR_NOSNOOP_VALUE, 110) | 1845 + FIELD_PREP(PCI_LTR_NOSNOOP_SCALE, 2) | 1846 + LTR_NOSNOOP_MSG_REQ; 1847 + appl_writel(pcie, val, APPL_LTR_MSG_1); 1848 + 1842 1849 reset_control_deassert(pcie->core_rst); 1850 + 1851 + /* Perform cleanup that requires refclk and core reset deasserted */ 1852 + pci_epc_deinit_notify(pcie->pci.ep.epc); 1853 + dw_pcie_ep_cleanup(&pcie->pci.ep); 1854 + 1855 + val = dw_pcie_readl_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL); 1856 + val &= ~PORT_LOGIC_SPEED_CHANGE; 1857 + dw_pcie_writel_dbi(pci, PCIE_LINK_WIDTH_SPEED_CONTROL, val); 1843 1858 1844 1859 if (pcie->update_fc_fixup) { 1845 1860 val = dw_pcie_readl_dbi(pci, CFG_TIMER_CTRL_MAX_FUNC_NUM_OFF); ··· 1950 1923 return IRQ_HANDLED; 1951 1924 } 1952 1925 1953 - static void tegra_pcie_ep_init(struct dw_pcie_ep *ep) 1954 - { 1955 - struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 1956 - enum pci_barno bar; 1957 - 1958 - for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) 1959 - dw_pcie_ep_reset_bar(pci, bar); 1960 - }; 1961 - 1962 1926 static int tegra_pcie_ep_raise_intx_irq(struct tegra_pcie_dw *pcie, u16 irq) 1963 1927 { 1964 1928 /* Tegra194 supports only INTA */ ··· 2005 1987 return 0; 2006 1988 } 2007 1989 1990 + static const struct pci_epc_bar_rsvd_region tegra194_bar2_rsvd[] = { 1991 + { 1992 + /* MSI-X table structure */ 1993 + .type = PCI_EPC_BAR_RSVD_MSIX_TBL_RAM, 1994 + .offset = 0x0, 1995 + .size = SZ_64K, 1996 + }, 1997 + { 1998 + /* MSI-X PBA structure */ 1999 + .type = PCI_EPC_BAR_RSVD_MSIX_PBA_RAM, 2000 + .offset = 0x10000, 2001 + .size = SZ_64K, 2002 + }, 2003 + }; 2004 + 2005 + static const struct pci_epc_bar_rsvd_region tegra194_bar4_rsvd[] = { 2006 + { 2007 + /* DMA_CAP (BAR4: DMA Port Logic Structure) */ 
2008 + .type = PCI_EPC_BAR_RSVD_DMA_CTRL_MMIO, 2009 + .offset = 0x0, 2010 + .size = SZ_4K, 2011 + }, 2012 + }; 2013 + 2014 + /* Tegra EP: BAR0 = 64-bit programmable BAR, BAR2 = 64-bit MSI-X table, BAR4 = 64-bit DMA regs. */ 2008 2015 static const struct pci_epc_features tegra_pcie_epc_features = { 2009 2016 DWC_EPC_COMMON_FEATURES, 2010 2017 .linkup_notifier = true, 2011 2018 .msi_capable = true, 2012 - .bar[BAR_0] = { .type = BAR_FIXED, .fixed_size = SZ_1M, 2013 - .only_64bit = true, }, 2014 - .bar[BAR_1] = { .type = BAR_RESERVED, }, 2015 - .bar[BAR_2] = { .type = BAR_RESERVED, }, 2016 - .bar[BAR_3] = { .type = BAR_RESERVED, }, 2017 - .bar[BAR_4] = { .type = BAR_RESERVED, }, 2018 - .bar[BAR_5] = { .type = BAR_RESERVED, }, 2019 + .bar[BAR_0] = { .only_64bit = true, }, 2020 + .bar[BAR_2] = { 2021 + .type = BAR_RESERVED, 2022 + .only_64bit = true, 2023 + .nr_rsvd_regions = ARRAY_SIZE(tegra194_bar2_rsvd), 2024 + .rsvd_regions = tegra194_bar2_rsvd, 2025 + }, 2026 + .bar[BAR_4] = { 2027 + .type = BAR_RESERVED, 2028 + .only_64bit = true, 2029 + .nr_rsvd_regions = ARRAY_SIZE(tegra194_bar4_rsvd), 2030 + .rsvd_regions = tegra194_bar4_rsvd, 2031 + }, 2019 2032 .align = SZ_64K, 2020 2033 }; 2021 2034 ··· 2057 2008 } 2058 2009 2059 2010 static const struct dw_pcie_ep_ops pcie_ep_ops = { 2060 - .init = tegra_pcie_ep_init, 2061 2011 .raise_irq = tegra_pcie_ep_raise_irq, 2062 2012 .get_features = tegra_pcie_ep_get_features, 2063 2013 }; ··· 2197 2149 return PTR_ERR(pcie->core_clk); 2198 2150 } 2199 2151 2152 + pcie->core_clk_m = devm_clk_get_optional(dev, "core_m"); 2153 + if (IS_ERR(pcie->core_clk_m)) 2154 + return dev_err_probe(dev, PTR_ERR(pcie->core_clk_m), 2155 + "Failed to get monitor clock\n"); 2156 + 2200 2157 pcie->appl_res = platform_get_resource_byname(pdev, IORESOURCE_MEM, 2201 2158 "appl"); 2202 2159 if (!pcie->appl_res) { ··· 2301 2248 ret = devm_request_threaded_irq(dev, pp->irq, 2302 2249 tegra_pcie_ep_hard_irq, 2303 2250 tegra_pcie_ep_irq_thread, 2304 - 
IRQF_SHARED | IRQF_ONESHOT, 2251 + IRQF_SHARED, 2305 2252 "tegra-pcie-ep-intr", pcie); 2306 2253 if (ret) { 2307 2254 dev_err(dev, "Failed to request IRQ %d: %d\n", pp->irq, ··· 2330 2277 static void tegra_pcie_dw_remove(struct platform_device *pdev) 2331 2278 { 2332 2279 struct tegra_pcie_dw *pcie = platform_get_drvdata(pdev); 2280 + struct dw_pcie_ep *ep = &pcie->pci.ep; 2333 2281 2334 2282 if (pcie->of_data->mode == DW_PCIE_RC_TYPE) { 2335 2283 if (!pcie->link_state) ··· 2342 2288 } else { 2343 2289 disable_irq(pcie->pex_rst_irq); 2344 2290 pex_ep_event_pex_rst_assert(pcie); 2291 + dw_pcie_ep_deinit(ep); 2345 2292 } 2346 2293 2347 2294 pm_runtime_disable(pcie->dev); ··· 2351 2296 gpiod_set_value(pcie->pex_refclk_sel_gpiod, 0); 2352 2297 } 2353 2298 2299 + static int tegra_pcie_dw_suspend(struct device *dev) 2300 + { 2301 + struct tegra_pcie_dw *pcie = dev_get_drvdata(dev); 2302 + 2303 + if (pcie->of_data->mode == DW_PCIE_EP_TYPE) { 2304 + if (pcie->ep_state == EP_STATE_ENABLED) { 2305 + dev_err(dev, "Tegra PCIe is in EP mode, suspend not allowed\n"); 2306 + return -EPERM; 2307 + } 2308 + 2309 + disable_irq(pcie->pex_rst_irq); 2310 + return 0; 2311 + } 2312 + 2313 + return 0; 2314 + } 2315 + 2354 2316 static int tegra_pcie_dw_suspend_late(struct device *dev) 2355 2317 { 2356 2318 struct tegra_pcie_dw *pcie = dev_get_drvdata(dev); 2357 2319 u32 val; 2358 - 2359 - if (pcie->of_data->mode == DW_PCIE_EP_TYPE) { 2360 - dev_err(dev, "Failed to Suspend as Tegra PCIe is in EP mode\n"); 2361 - return -EPERM; 2362 - } 2363 2320 2364 2321 if (!pcie->link_state) 2365 2322 return 0; ··· 2392 2325 { 2393 2326 struct tegra_pcie_dw *pcie = dev_get_drvdata(dev); 2394 2327 2328 + if (pcie->of_data->mode == DW_PCIE_EP_TYPE) 2329 + return 0; 2330 + 2395 2331 if (!pcie->link_state) 2396 2332 return 0; 2397 2333 2398 - tegra_pcie_downstream_dev_to_D0(pcie); 2334 + clk_disable_unprepare(pcie->core_clk_m); 2399 2335 tegra_pcie_dw_pme_turnoff(pcie); 2400 2336 
tegra_pcie_unconfig_controller(pcie); 2401 2337 ··· 2409 2339 { 2410 2340 struct tegra_pcie_dw *pcie = dev_get_drvdata(dev); 2411 2341 int ret; 2342 + 2343 + if (pcie->of_data->mode == DW_PCIE_EP_TYPE) 2344 + return 0; 2412 2345 2413 2346 if (!pcie->link_state) 2414 2347 return 0; ··· 2445 2372 u32 val; 2446 2373 2447 2374 if (pcie->of_data->mode == DW_PCIE_EP_TYPE) { 2448 - dev_err(dev, "Suspend is not supported in EP mode"); 2449 - return -ENOTSUPP; 2375 + enable_irq(pcie->pex_rst_irq); 2376 + return 0; 2450 2377 } 2451 2378 2452 2379 if (!pcie->link_state) ··· 2475 2402 return; 2476 2403 2477 2404 debugfs_remove_recursive(pcie->debugfs); 2478 - tegra_pcie_downstream_dev_to_D0(pcie); 2479 2405 2480 2406 disable_irq(pcie->pci.pp.irq); 2481 2407 if (IS_ENABLED(CONFIG_PCI_MSI)) ··· 2524 2452 .mode = DW_PCIE_EP_TYPE, 2525 2453 .has_l1ss_exit_fix = true, 2526 2454 .has_ltr_req_fix = true, 2455 + .disable_l1_2 = true, 2527 2456 .cdm_chk_int_en_bit = BIT(18), 2528 2457 /* Gen4 - 6, 8 and 9 presets enabled */ 2529 2458 .gen4_preset_vec = 0x340, ··· 2552 2479 }; 2553 2480 2554 2481 static const struct dev_pm_ops tegra_pcie_dw_pm_ops = { 2482 + .suspend = tegra_pcie_dw_suspend, 2555 2483 .suspend_late = tegra_pcie_dw_suspend_late, 2556 2484 .suspend_noirq = tegra_pcie_dw_suspend_noirq, 2557 2485 .resume_noirq = tegra_pcie_dw_resume_noirq,
+2 -17
drivers/pci/controller/dwc/pcie-uniphier-ep.c
··· 203 203 uniphier_pcie_ltssm_enable(priv, false); 204 204 } 205 205 206 - static void uniphier_pcie_ep_init(struct dw_pcie_ep *ep) 207 - { 208 - struct dw_pcie *pci = to_dw_pcie_from_ep(ep); 209 - enum pci_barno bar; 210 - 211 - for (bar = BAR_0; bar <= BAR_5; bar++) 212 - dw_pcie_ep_reset_bar(pci, bar); 213 - } 214 - 215 206 static int uniphier_pcie_ep_raise_intx_irq(struct dw_pcie_ep *ep) 216 207 { 217 208 struct dw_pcie *pci = to_dw_pcie_from_ep(ep); ··· 274 283 } 275 284 276 285 static const struct dw_pcie_ep_ops uniphier_pcie_ep_ops = { 277 - .init = uniphier_pcie_ep_init, 278 286 .raise_irq = uniphier_pcie_ep_raise_irq, 279 287 .get_features = uniphier_pcie_get_features, 280 288 }; ··· 416 426 .msix_capable = false, 417 427 .align = 1 << 16, 418 428 .bar[BAR_0] = { .only_64bit = true, }, 419 - .bar[BAR_1] = { .type = BAR_RESERVED, }, 420 429 .bar[BAR_2] = { .only_64bit = true, }, 421 - .bar[BAR_3] = { .type = BAR_RESERVED, }, 422 - .bar[BAR_4] = { .type = BAR_RESERVED, }, 423 - .bar[BAR_5] = { .type = BAR_RESERVED, }, 430 + .bar[BAR_4] = { .type = BAR_DISABLED, }, 431 + .bar[BAR_5] = { .type = BAR_DISABLED, }, 424 432 }, 425 433 }; 426 434 ··· 433 445 .msix_capable = false, 434 446 .align = 1 << 12, 435 447 .bar[BAR_0] = { .only_64bit = true, }, 436 - .bar[BAR_1] = { .type = BAR_RESERVED, }, 437 448 .bar[BAR_2] = { .only_64bit = true, }, 438 - .bar[BAR_3] = { .type = BAR_RESERVED, }, 439 449 .bar[BAR_4] = { .only_64bit = true, }, 440 - .bar[BAR_5] = { .type = BAR_RESERVED, }, 441 450 }, 442 451 }; 443 452
+4 -4
drivers/pci/controller/pcie-aspeed.c
··· 1052 1052 if (ret) 1053 1053 return ret; 1054 1054 1055 - irq = platform_get_irq(pdev, 0); 1056 - if (irq < 0) 1057 - return irq; 1058 - 1059 1055 ret = devm_add_action_or_reset(dev, aspeed_pcie_irq_domain_free, pcie); 1060 1056 if (ret) 1061 1057 return ret; 1058 + 1059 + irq = platform_get_irq(pdev, 0); 1060 + if (irq < 0) 1061 + return irq; 1062 1062 1063 1063 ret = devm_request_irq(dev, irq, aspeed_pcie_intr_handler, IRQF_SHARED, 1064 1064 dev_name(dev), pcie);
+3 -2
drivers/pci/controller/pcie-brcmstb.c
··· 1442 1442 cls = FIELD_GET(PCI_EXP_LNKSTA_CLS, lnksta); 1443 1443 nlw = FIELD_GET(PCI_EXP_LNKSTA_NLW, lnksta); 1444 1444 dev_info(dev, "link up, %s x%u %s\n", 1445 - pci_speed_string(pcie_link_speed[cls]), nlw, 1445 + pci_speed_string(pcie_get_link_speed(cls)), nlw, 1446 1446 ssc_good ? "(SSC)" : "(!SSC)"); 1447 1447 1448 1448 return 0; ··· 2072 2072 return PTR_ERR(pcie->clk); 2073 2073 2074 2074 ret = of_pci_get_max_link_speed(np); 2075 - pcie->gen = (ret < 0) ? 0 : ret; 2075 + pcie->gen = (pcie_get_link_speed(ret) == PCI_SPEED_UNKNOWN) ? 0 : ret; 2076 2076 2077 2077 pcie->ssc = of_property_read_bool(np, "brcm,enable-ssc"); 2078 2078
+133 -100
drivers/pci/controller/pcie-mediatek-gen3.c
··· 22 22 #include <linux/of_device.h> 23 23 #include <linux/of_pci.h> 24 24 #include <linux/pci.h> 25 + #include <linux/pci-pwrctrl.h> 25 26 #include <linux/phy/phy.h> 26 27 #include <linux/platform_device.h> 27 28 #include <linux/pm_domain.h> ··· 404 403 writel_relaxed(val, pcie->base + PCIE_INT_ENABLE_REG); 405 404 } 406 405 406 + static int mtk_pcie_devices_power_up(struct mtk_gen3_pcie *pcie) 407 + { 408 + int err; 409 + u32 val; 410 + 411 + /* 412 + * Airoha EN7581 has a hw bug asserting/releasing PCIE_PE_RSTB signal 413 + * causing occasional PCIe link down. In order to overcome the issue, 414 + * PCIE_RSTB signals are not asserted/released at this stage and the 415 + * PCIe block is reset using en7523_reset_assert() and 416 + * en7581_pci_enable(). 417 + */ 418 + if (!(pcie->soc->flags & SKIP_PCIE_RSTB)) { 419 + /* Assert all reset signals */ 420 + val = readl_relaxed(pcie->base + PCIE_RST_CTRL_REG); 421 + val |= PCIE_MAC_RSTB | PCIE_PHY_RSTB | PCIE_BRG_RSTB | 422 + PCIE_PE_RSTB; 423 + writel_relaxed(val, pcie->base + PCIE_RST_CTRL_REG); 424 + } 425 + 426 + err = pci_pwrctrl_power_on_devices(pcie->dev); 427 + if (err) { 428 + dev_err(pcie->dev, "Failed to power on devices: %pe\n", ERR_PTR(err)); 429 + return err; 430 + } 431 + 432 + /* 433 + * Described in PCIe CEM specification revision 6.0. 434 + * 435 + * The deassertion of PERST# should be delayed 100ms (TPVPERL) 436 + * for the power and clock to become stable. 
437 + */ 438 + msleep(PCIE_T_PVPERL_MS); 439 + 440 + if (!(pcie->soc->flags & SKIP_PCIE_RSTB)) { 441 + /* De-assert reset signals */ 442 + val &= ~(PCIE_MAC_RSTB | PCIE_PHY_RSTB | PCIE_BRG_RSTB | 443 + PCIE_PE_RSTB); 444 + writel_relaxed(val, pcie->base + PCIE_RST_CTRL_REG); 445 + } 446 + 447 + return 0; 448 + } 449 + 450 + static void mtk_pcie_devices_power_down(struct mtk_gen3_pcie *pcie) 451 + { 452 + u32 val; 453 + 454 + if (!(pcie->soc->flags & SKIP_PCIE_RSTB)) { 455 + /* Assert the PERST# pin */ 456 + val = readl_relaxed(pcie->base + PCIE_RST_CTRL_REG); 457 + val |= PCIE_PE_RSTB; 458 + writel_relaxed(val, pcie->base + PCIE_RST_CTRL_REG); 459 + } 460 + 461 + pci_pwrctrl_power_off_devices(pcie->dev); 462 + } 463 + 407 464 static int mtk_pcie_startup_port(struct mtk_gen3_pcie *pcie) 408 465 { 409 466 struct resource_entry *entry; ··· 523 464 val |= PCIE_DISABLE_DVFSRC_VLT_REQ; 524 465 writel_relaxed(val, pcie->base + PCIE_MISC_CTRL_REG); 525 466 526 - /* 527 - * Airoha EN7581 has a hw bug asserting/releasing PCIE_PE_RSTB signal 528 - * causing occasional PCIe link down. In order to overcome the issue, 529 - * PCIE_RSTB signals are not asserted/released at this stage and the 530 - * PCIe block is reset using en7523_reset_assert() and 531 - * en7581_pci_enable(). 532 - */ 533 - if (!(pcie->soc->flags & SKIP_PCIE_RSTB)) { 534 - /* Assert all reset signals */ 535 - val = readl_relaxed(pcie->base + PCIE_RST_CTRL_REG); 536 - val |= PCIE_MAC_RSTB | PCIE_PHY_RSTB | PCIE_BRG_RSTB | 537 - PCIE_PE_RSTB; 538 - writel_relaxed(val, pcie->base + PCIE_RST_CTRL_REG); 539 - 540 - /* 541 - * Described in PCIe CEM specification revision 6.0. 542 - * 543 - * The deassertion of PERST# should be delayed 100ms (TPVPERL) 544 - * for the power and clock to become stable. 
545 - */ 546 - msleep(PCIE_T_PVPERL_MS); 547 - 548 - /* De-assert reset signals */ 549 - val &= ~(PCIE_MAC_RSTB | PCIE_PHY_RSTB | PCIE_BRG_RSTB | 550 - PCIE_PE_RSTB); 551 - writel_relaxed(val, pcie->base + PCIE_RST_CTRL_REG); 552 - } 553 - 554 - /* Check if the link is up or not */ 555 - err = readl_poll_timeout(pcie->base + PCIE_LINK_STATUS_REG, val, 556 - !!(val & PCIE_PORT_LINKUP), 20, 557 - PCI_PM_D3COLD_WAIT * USEC_PER_MSEC); 558 - if (err) { 559 - const char *ltssm_state; 560 - int ltssm_index; 561 - 562 - val = readl_relaxed(pcie->base + PCIE_LTSSM_STATUS_REG); 563 - ltssm_index = PCIE_LTSSM_STATE(val); 564 - ltssm_state = ltssm_index >= ARRAY_SIZE(ltssm_str) ? 565 - "Unknown state" : ltssm_str[ltssm_index]; 566 - dev_err(pcie->dev, 567 - "PCIe link down, current LTSSM state: %s (%#x)\n", 568 - ltssm_state, val); 569 - return err; 570 - } 571 - 572 467 mtk_pcie_enable_msi(pcie); 573 468 574 469 /* Set PCIe translation windows */ ··· 548 535 return err; 549 536 } 550 537 538 + err = mtk_pcie_devices_power_up(pcie); 539 + if (err) 540 + return err; 541 + 542 + /* Check if the link is up or not */ 543 + err = readl_poll_timeout(pcie->base + PCIE_LINK_STATUS_REG, val, 544 + !!(val & PCIE_PORT_LINKUP), 20, 545 + PCI_PM_D3COLD_WAIT * USEC_PER_MSEC); 546 + if (err) { 547 + const char *ltssm_state; 548 + int ltssm_index; 549 + 550 + val = readl_relaxed(pcie->base + PCIE_LTSSM_STATUS_REG); 551 + ltssm_index = PCIE_LTSSM_STATE(val); 552 + ltssm_state = ltssm_index >= ARRAY_SIZE(ltssm_str) ? 
553 + "Unknown state" : ltssm_str[ltssm_index]; 554 + dev_err(pcie->dev, 555 + "PCIe link down, current LTSSM state: %s (%#x)\n", 556 + ltssm_state, val); 557 + goto err_power_down_device; 558 + } 559 + 551 560 return 0; 561 + 562 + err_power_down_device: 563 + mtk_pcie_devices_power_down(pcie); 564 + return err; 552 565 } 553 566 554 567 #define MTK_MSI_FLAGS_REQUIRED (MSI_FLAG_USE_DEF_DOM_OPS | \ ··· 890 851 struct platform_device *pdev = to_platform_device(dev); 891 852 int err; 892 853 893 - err = mtk_pcie_init_irq_domains(pcie); 894 - if (err) 895 - return err; 896 - 897 854 pcie->irq = platform_get_irq(pdev, 0); 898 855 if (pcie->irq < 0) 899 856 return pcie->irq; 857 + 858 + err = mtk_pcie_init_irq_domains(pcie); 859 + if (err) 860 + return err; 900 861 901 862 irq_set_chained_handler_and_data(pcie->irq, mtk_pcie_irq_handler, pcie); 902 863 ··· 915 876 if (!regs) 916 877 return -EINVAL; 917 878 pcie->base = devm_ioremap_resource(dev, regs); 918 - if (IS_ERR(pcie->base)) { 919 - dev_err(dev, "failed to map register base\n"); 920 - return PTR_ERR(pcie->base); 921 - } 879 + if (IS_ERR(pcie->base)) 880 + return dev_err_probe(dev, PTR_ERR(pcie->base), "failed to map register base\n"); 922 881 923 882 pcie->reg_base = regs->start; 924 883 ··· 925 888 926 889 ret = devm_reset_control_bulk_get_optional_shared(dev, num_resets, 927 890 pcie->phy_resets); 928 - if (ret) { 929 - dev_err(dev, "failed to get PHY bulk reset\n"); 930 - return ret; 931 - } 891 + if (ret) 892 + return dev_err_probe(dev, ret, "failed to get PHY bulk reset\n"); 932 893 933 894 pcie->mac_reset = devm_reset_control_get_optional_exclusive(dev, "mac"); 934 - if (IS_ERR(pcie->mac_reset)) { 935 - ret = PTR_ERR(pcie->mac_reset); 936 - if (ret != -EPROBE_DEFER) 937 - dev_err(dev, "failed to get MAC reset\n"); 938 - 939 - return ret; 940 - } 895 + if (IS_ERR(pcie->mac_reset)) 896 + return dev_err_probe(dev, PTR_ERR(pcie->mac_reset), "failed to get MAC reset\n"); 941 897 942 898 pcie->phy = 
devm_phy_optional_get(dev, "pcie-phy"); 943 - if (IS_ERR(pcie->phy)) { 944 - ret = PTR_ERR(pcie->phy); 945 - if (ret != -EPROBE_DEFER) 946 - dev_err(dev, "failed to get PHY\n"); 947 - 948 - return ret; 949 - } 899 + if (IS_ERR(pcie->phy)) 900 + return dev_err_probe(dev, PTR_ERR(pcie->phy), "failed to get PHY\n"); 950 901 951 902 pcie->num_clks = devm_clk_bulk_get_all(dev, &pcie->clks); 952 - if (pcie->num_clks < 0) { 953 - dev_err(dev, "failed to get clocks\n"); 954 - return pcie->num_clks; 955 - } 903 + if (pcie->num_clks < 0) 904 + return dev_err_probe(dev, pcie->num_clks, "failed to get clocks\n"); 956 905 957 906 ret = of_property_read_u32(dev->of_node, "num-lanes", &num_lanes); 958 907 if (ret == 0) { ··· 1173 1150 return err; 1174 1151 1175 1152 err = of_pci_get_max_link_speed(pcie->dev->of_node); 1176 - if (err) { 1153 + if (pcie_get_link_speed(err) != PCI_SPEED_UNKNOWN) { 1177 1154 /* Get the maximum speed supported by the controller */ 1178 1155 max_speed = mtk_pcie_get_controller_max_link_speed(pcie); ··· 1188 1165 1189 1166 /* Try link up */ 1190 1167 err = mtk_pcie_startup_port(pcie); 1191 - if (err) 1192 - goto err_setup; 1193 - 1194 - err = mtk_pcie_setup_irq(pcie); 1195 1168 if (err) 1196 1169 goto err_setup; 1197 1170 ··· 1216 1197 pcie->soc = device_get_match_data(dev); 1217 1198 platform_set_drvdata(pdev, pcie); 1218 1199 1200 + err = mtk_pcie_setup_irq(pcie); 1201 + if (err) 1202 + return dev_err_probe(dev, err, "Failed to setup IRQ domains\n"); 1203 + 1204 + err = pci_pwrctrl_create_devices(pcie->dev); 1205 + if (err) { 1206 + dev_err_probe(dev, err, "failed to create pwrctrl devices\n"); 1207 + goto err_tear_down_irq; 1208 + } 1209 + 1219 1210 err = mtk_pcie_setup(pcie); 1220 1211 if (err) 1221 - return err; 1212 + goto err_destroy_pwrctrl; 1222 1213 1223 1214 host->ops = &mtk_pcie_ops; 1224 1215 host->sysdata = pcie; 1225 1216 1226 1217 err = pci_host_probe(host); 1227 - if (err) { 1228 - mtk_pcie_irq_teardown(pcie); 1229 - 
mtk_pcie_power_down(pcie); 1230 - return err; 1231 - } 1218 + if (err) 1219 + goto err_power_down_pcie; 1232 1220 1233 1221 return 0; 1222 + 1223 + err_power_down_pcie: 1224 + mtk_pcie_devices_power_down(pcie); 1225 + mtk_pcie_power_down(pcie); 1226 + err_destroy_pwrctrl: 1227 + if (err != -EPROBE_DEFER) 1228 + pci_pwrctrl_destroy_devices(pcie->dev); 1229 + err_tear_down_irq: 1230 + mtk_pcie_irq_teardown(pcie); 1231 + return err; 1234 1232 } 1235 1233 1236 1234 static void mtk_pcie_remove(struct platform_device *pdev) ··· 1260 1224 pci_remove_root_bus(host->bus); 1261 1225 pci_unlock_rescan_remove(); 1262 1226 1263 - mtk_pcie_irq_teardown(pcie); 1227 + pci_pwrctrl_power_off_devices(pcie->dev); 1264 1228 mtk_pcie_power_down(pcie); 1229 + pci_pwrctrl_destroy_devices(pcie->dev); 1230 + mtk_pcie_irq_teardown(pcie); 1265 1231 } 1266 1232 1267 1233 static void mtk_pcie_irq_save(struct mtk_gen3_pcie *pcie) ··· 1321 1283 { 1322 1284 struct mtk_gen3_pcie *pcie = dev_get_drvdata(dev); 1323 1285 int err; 1324 - u32 val; 1325 1286 1326 1287 /* Trigger link to L2 state */ 1327 1288 err = mtk_pcie_turn_off_link(pcie); ··· 1329 1292 return err; 1330 1293 } 1331 1294 1332 - if (!(pcie->soc->flags & SKIP_PCIE_RSTB)) { 1333 - /* Assert the PERST# pin */ 1334 - val = readl_relaxed(pcie->base + PCIE_RST_CTRL_REG); 1335 - val |= PCIE_PE_RSTB; 1336 - writel_relaxed(val, pcie->base + PCIE_RST_CTRL_REG); 1337 - } 1338 - 1295 + mtk_pcie_devices_power_down(pcie); 1339 1296 dev_dbg(pcie->dev, "entered L2 states successfully"); 1340 1297 1341 1298 mtk_pcie_irq_save(pcie); ··· 1348 1317 return err; 1349 1318 1350 1319 err = mtk_pcie_startup_port(pcie); 1351 - if (err) { 1352 - mtk_pcie_power_down(pcie); 1353 - return err; 1354 - } 1320 + if (err) 1321 + goto err_power_down; 1355 1322 1356 1323 mtk_pcie_irq_restore(pcie); 1357 1324 1358 1325 return 0; 1326 + 1327 + err_power_down: 1328 + mtk_pcie_power_down(pcie); 1329 + return err; 1359 1330 } 1360 1331 1361 1332 static const struct dev_pm_ops 
mtk_pcie_pm_ops = {
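The link-up wait in mtk_pcie_startup_port() above relies on the kernel's readl_poll_timeout() helper. Its poll/sleep/timeout contract can be sketched standalone; this is a hedged illustration with a caller-supplied register reader and simulated time, not the kernel implementation:

```c
#include <stdint.h>

/*
 * Poll read_reg() until (val & mask) is non-zero or timeout_us elapses,
 * waiting sleep_us between reads. Returns 0 on success, -1 on timeout
 * (stand-in for -ETIMEDOUT). Time is simulated here; the real helper
 * sleeps with usleep_range() and reads a hardware register.
 */
static int poll_timeout(uint32_t (*read_reg)(void *), void *ctx,
			uint32_t mask, unsigned long sleep_us,
			unsigned long timeout_us, uint32_t *out)
{
	unsigned long elapsed = 0;

	for (;;) {
		uint32_t val = read_reg(ctx);

		if (val & mask) {
			*out = val;
			return 0;
		}
		if (elapsed >= timeout_us) {
			*out = val;	/* last value read, as with the kernel macro */
			return -1;
		}
		elapsed += sleep_us;	/* stand-in for usleep_range() */
	}
}
```

On timeout the caller still gets the last value read, which is what lets the driver decode and print the current LTSSM state in its error message.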
+1 -1
drivers/pci/controller/pcie-mediatek.c
··· 953 953 struct mtk_pcie_port *port; 954 954 struct device *dev = pcie->dev; 955 955 struct platform_device *pdev = to_platform_device(dev); 956 - char name[10]; 956 + char name[20]; 957 957 int err; 958 958 959 959 port = devm_kzalloc(dev, sizeof(*port), GFP_KERNEL);
-3
drivers/pci/controller/pcie-rcar-ep.c
··· 440 440 /* use 64-bit BARs so mark BAR[1,3,5] as reserved */ 441 441 .bar[BAR_0] = { .type = BAR_FIXED, .fixed_size = 128, 442 442 .only_64bit = true, }, 443 - .bar[BAR_1] = { .type = BAR_RESERVED, }, 444 443 .bar[BAR_2] = { .type = BAR_FIXED, .fixed_size = 256, 445 444 .only_64bit = true, }, 446 - .bar[BAR_3] = { .type = BAR_RESERVED, }, 447 445 .bar[BAR_4] = { .type = BAR_FIXED, .fixed_size = 256, 448 446 .only_64bit = true, }, 449 - .bar[BAR_5] = { .type = BAR_RESERVED, }, 450 447 }; 451 448 452 449 static const struct pci_epc_features*
+288 -83
drivers/pci/controller/pcie-rzg3s-host.c
··· 111 111 #define RZG3S_PCI_PERM_CFG_HWINIT_EN BIT(2) 112 112 #define RZG3S_PCI_PERM_PIPE_PHY_REG_EN BIT(1) 113 113 114 + #define RZG3S_PCI_RESET 0x310 115 + #define RZG3S_PCI_RESET_RST_OUT_B BIT(6) 116 + #define RZG3S_PCI_RESET_RST_PS_B BIT(5) 117 + #define RZG3S_PCI_RESET_RST_LOAD_B BIT(4) 118 + #define RZG3S_PCI_RESET_RST_CFG_B BIT(3) 119 + #define RZG3S_PCI_RESET_RST_RSM_B BIT(2) 120 + #define RZG3S_PCI_RESET_RST_GP_B BIT(1) 121 + #define RZG3S_PCI_RESET_RST_B BIT(0) 122 + 114 123 #define RZG3S_PCI_MSIRE(id) (0x600 + (id) * 0x10) 115 124 #define RZG3S_PCI_MSIRE_ENA BIT(0) 116 125 ··· 168 159 169 160 #define RZG3S_PCI_CFG_PCIEC 0x60 170 161 171 - /* System controller registers */ 172 - #define RZG3S_SYS_PCIE_RST_RSM_B 0xd74 173 - #define RZG3S_SYS_PCIE_RST_RSM_B_MASK BIT(0) 174 - 175 162 /* Maximum number of windows */ 176 163 #define RZG3S_MAX_WINDOWS 8 177 164 ··· 178 173 179 174 /* Timeouts experimentally determined */ 180 175 #define RZG3S_REQ_ISSUE_TIMEOUT_US 2500 176 + 177 + /** 178 + * struct rzg3s_sysc_function - System Controller function descriptor 179 + * @offset: Register offset from the System Controller base address 180 + * @mask: Bit mask for the function within the register 181 + */ 182 + struct rzg3s_sysc_function { 183 + u32 offset; 184 + u32 mask; 185 + }; 186 + 187 + /** 188 + * enum rzg3s_sysc_func_id - System controller function IDs 189 + * @RZG3S_SYSC_FUNC_ID_RST_RSM_B: RST_RSM_B SYSC function ID 190 + * @RZG3S_SYSC_FUNC_ID_L1_ALLOW: L1 allow SYSC function ID 191 + * @RZG3S_SYSC_FUNC_ID_MODE: Mode SYSC function ID 192 + * @RZG3S_SYSC_FUNC_ID_MAX: Max SYSC function ID 193 + */ 194 + enum rzg3s_sysc_func_id { 195 + RZG3S_SYSC_FUNC_ID_RST_RSM_B, 196 + RZG3S_SYSC_FUNC_ID_L1_ALLOW, 197 + RZG3S_SYSC_FUNC_ID_MODE, 198 + RZG3S_SYSC_FUNC_ID_MAX, 199 + }; 200 + 201 + /** 202 + * struct rzg3s_sysc_info - RZ/G3S System Controller info 203 + * @functions: SYSC function descriptors array 204 + */ 205 + struct rzg3s_sysc_info { 206 + const struct 
rzg3s_sysc_function functions[RZG3S_SYSC_FUNC_ID_MAX]; 207 + }; 208 + 209 + /** 210 + * struct rzg3s_sysc - RZ/G3S System Controller descriptor 211 + * @regmap: System controller regmap 212 + * @info: System controller info 213 + */ 214 + struct rzg3s_sysc { 215 + struct regmap *regmap; 216 + const struct rzg3s_sysc_info *info; 217 + }; 181 218 182 219 /** 183 220 * struct rzg3s_pcie_msi - RZ/G3S PCIe MSI data structure ··· 246 199 /** 247 200 * struct rzg3s_pcie_soc_data - SoC specific data 248 201 * @init_phy: PHY initialization function 202 + * @config_pre_init: Optional callback for SoC-specific pre-configuration 203 + * @config_post_init: Callback for SoC-specific post-configuration 204 + * @config_deinit: Callback for SoC-specific de-initialization 249 205 * @power_resets: array with the resets that need to be de-asserted after 250 206 * power-on 251 207 * @cfg_resets: array with the resets that need to be de-asserted after 252 208 * configuration 209 + * @sysc_info: SYSC info 253 210 * @num_power_resets: number of power resets 254 211 * @num_cfg_resets: number of configuration resets 255 212 */ 256 213 struct rzg3s_pcie_soc_data { 257 214 int (*init_phy)(struct rzg3s_pcie_host *host); 215 + void (*config_pre_init)(struct rzg3s_pcie_host *host); 216 + int (*config_post_init)(struct rzg3s_pcie_host *host); 217 + int (*config_deinit)(struct rzg3s_pcie_host *host); 258 218 const char * const *power_resets; 259 219 const char * const *cfg_resets; 220 + struct rzg3s_sysc_info sysc_info; 260 221 u8 num_power_resets; 261 222 u8 num_cfg_resets; 262 223 }; ··· 288 233 * @dev: struct device 289 234 * @power_resets: reset control signals that should be set after power up 290 235 * @cfg_resets: reset control signals that should be set after configuration 291 - * @sysc: SYSC regmap 236 + * @sysc: SYSC descriptor 292 237 * @intx_domain: INTx IRQ domain 293 238 * @data: SoC specific data 294 239 * @msi: MSI data structure ··· 303 248 struct device *dev; 304 249 struct 
reset_control_bulk_data *power_resets; 305 250 struct reset_control_bulk_data *cfg_resets; 306 - struct regmap *sysc; 251 + struct rzg3s_sysc *sysc; 307 252 struct irq_domain *intx_domain; 308 253 const struct rzg3s_pcie_soc_data *data; 309 254 struct rzg3s_pcie_msi msi; ··· 314 259 }; 315 260 316 261 #define rzg3s_msi_to_host(_msi) container_of(_msi, struct rzg3s_pcie_host, msi) 262 + 263 + static int rzg3s_sysc_config_func(struct rzg3s_sysc *sysc, 264 + enum rzg3s_sysc_func_id fid, u32 val) 265 + { 266 + const struct rzg3s_sysc_info *info = sysc->info; 267 + const struct rzg3s_sysc_function *functions = info->functions; 268 + 269 + if (fid >= RZG3S_SYSC_FUNC_ID_MAX) 270 + return -EINVAL; 271 + 272 + if (!functions[fid].mask) 273 + return 0; 274 + 275 + return regmap_update_bits(sysc->regmap, functions[fid].offset, 276 + functions[fid].mask, 277 + field_prep(functions[fid].mask, val)); 278 + } 317 279 318 280 static void rzg3s_pcie_update_bits(void __iomem *base, u32 offset, u32 mask, 319 281 u32 val) ··· 1017 945 { 1018 946 u32 remote_supported_link_speeds, max_supported_link_speeds; 1019 947 u32 cs2, tmp, pcie_cap = RZG3S_PCI_CFG_PCIEC; 1020 - u32 cur_link_speed, link_speed; 948 + u32 cur_link_speed, link_speed, hw_max_speed; 1021 949 u8 ltssm_state_l0 = 0xc; 950 + u32 lnkcap; 1022 951 int ret; 1023 952 u16 ls; 1024 953 ··· 1039 966 ls = readw_relaxed(host->pcie + pcie_cap + PCI_EXP_LNKSTA); 1040 967 cs2 = readl_relaxed(host->axi + RZG3S_PCI_PCSTAT2); 1041 968 1042 - switch (pcie_link_speed[host->max_link_speed]) { 969 + /* Read hardware supported link speed from Link Capabilities Register */ 970 + lnkcap = readl_relaxed(host->pcie + pcie_cap + PCI_EXP_LNKCAP); 971 + hw_max_speed = FIELD_GET(PCI_EXP_LNKCAP_SLS, lnkcap); 972 + 973 + /* 974 + * Use DT max-link-speed only as a limit. If specified and lower 975 + * than hardware capability, cap to that value. 
976 + */ 977 + if (host->max_link_speed > 0 && host->max_link_speed < hw_max_speed) 978 + hw_max_speed = host->max_link_speed; 979 + 980 + switch (pcie_get_link_speed(hw_max_speed)) { 981 + case PCIE_SPEED_8_0GT: 982 + max_supported_link_speeds = GENMASK(PCI_EXP_LNKSTA_CLS_8_0GB - 1, 0); 983 + link_speed = PCI_EXP_LNKCTL2_TLS_8_0GT; 984 + break; 1043 985 case PCIE_SPEED_5_0GT: 1044 986 max_supported_link_speeds = GENMASK(PCI_EXP_LNKSTA_CLS_5_0GB - 1, 0); 1045 987 link_speed = PCI_EXP_LNKCTL2_TLS_5_0GT; ··· 1070 982 remote_supported_link_speeds &= max_supported_link_speeds; 1071 983 1072 984 /* 1073 - * Return if max link speed is already set or the connected device 985 + * Return if target link speed is already set or the connected device 1074 986 * doesn't support it. 1075 987 */ 1076 - if (cur_link_speed == host->max_link_speed || 988 + if (cur_link_speed == hw_max_speed || 1077 989 remote_supported_link_speeds != max_supported_link_speeds) 1078 990 return 0; 1079 991 ··· 1110 1022 static int rzg3s_pcie_config_init(struct rzg3s_pcie_host *host) 1111 1023 { 1112 1024 struct pci_host_bridge *bridge = pci_host_bridge_from_priv(host); 1025 + u32 mask = GENMASK(31, 8); 1113 1026 struct resource_entry *ft; 1114 1027 struct resource *bus; 1115 1028 u8 subordinate_bus; ··· 1134 1045 writel_relaxed(0xffffffff, host->pcie + RZG3S_PCI_CFG_BARMSK00L); 1135 1046 writel_relaxed(0xffffffff, host->pcie + RZG3S_PCI_CFG_BARMSK00U); 1136 1047 1048 + /* 1049 + * Explicitly program class code. RZ/G3E requires this configuration. 1050 + * Harmless for RZ/G3S where this matches the hardware default. 
1051 + */ 1052 + rzg3s_pcie_update_bits(host->pcie, PCI_CLASS_REVISION, mask, 1053 + field_prep(mask, PCI_CLASS_BRIDGE_PCI_NORMAL)); 1054 + 1137 1055 /* Disable access control to the CFGU */ 1138 1056 writel_relaxed(0, host->axi + RZG3S_PCI_PERM); 1139 1057 ··· 1148 1052 writeb_relaxed(primary_bus, host->pcie + PCI_PRIMARY_BUS); 1149 1053 writeb_relaxed(secondary_bus, host->pcie + PCI_SECONDARY_BUS); 1150 1054 writeb_relaxed(subordinate_bus, host->pcie + PCI_SUBORDINATE_BUS); 1055 + 1056 + return 0; 1057 + } 1058 + 1059 + static int rzg3s_pcie_config_post_init(struct rzg3s_pcie_host *host) 1060 + { 1061 + return reset_control_bulk_deassert(host->data->num_cfg_resets, 1062 + host->cfg_resets); 1063 + } 1064 + 1065 + static int rzg3s_pcie_config_deinit(struct rzg3s_pcie_host *host) 1066 + { 1067 + return reset_control_bulk_assert(host->data->num_cfg_resets, 1068 + host->cfg_resets); 1069 + } 1070 + 1071 + static void rzg3e_pcie_config_pre_init(struct rzg3s_pcie_host *host) 1072 + { 1073 + u32 mask = RZG3S_PCI_RESET_RST_LOAD_B | RZG3S_PCI_RESET_RST_CFG_B; 1074 + 1075 + /* De-assert LOAD_B and CFG_B */ 1076 + rzg3s_pcie_update_bits(host->axi, RZG3S_PCI_RESET, mask, mask); 1077 + } 1078 + 1079 + static int rzg3e_pcie_config_deinit(struct rzg3s_pcie_host *host) 1080 + { 1081 + writel_relaxed(0, host->axi + RZG3S_PCI_RESET); 1082 + return 0; 1083 + } 1084 + 1085 + static int rzg3e_pcie_config_post_init(struct rzg3s_pcie_host *host) 1086 + { 1087 + u32 mask = RZG3S_PCI_RESET_RST_PS_B | RZG3S_PCI_RESET_RST_GP_B | 1088 + RZG3S_PCI_RESET_RST_B; 1089 + 1090 + /* De-assert PS_B, GP_B, RST_B */ 1091 + rzg3s_pcie_update_bits(host->axi, RZG3S_PCI_RESET, mask, mask); 1092 + 1093 + /* Flush deassert */ 1094 + readl_relaxed(host->axi + RZG3S_PCI_RESET); 1095 + 1096 + /* 1097 + * According to the RZ/G3E HW manual (Rev.1.15, Table 6.6-130 1098 + * Initialization Procedure (RC)), hardware requires >= 500us delay 1099 + * before final reset deassert. 
1100 + */ 1101 + fsleep(500); 1102 + 1103 + /* De-assert OUT_B and RSM_B */ 1104 + mask = RZG3S_PCI_RESET_RST_OUT_B | RZG3S_PCI_RESET_RST_RSM_B; 1105 + rzg3s_pcie_update_bits(host->axi, RZG3S_PCI_RESET, mask, mask); 1151 1106 1152 1107 return 0; 1153 1108 } ··· 1282 1135 if (ret) 1283 1136 return ret; 1284 1137 1285 - return devm_reset_control_bulk_get_exclusive(host->dev, 1286 - data->num_cfg_resets, 1287 - host->cfg_resets); 1138 + return devm_reset_control_bulk_get_optional_exclusive(host->dev, 1139 + data->num_cfg_resets, 1140 + host->cfg_resets); 1288 1141 } 1289 1142 1290 1143 static int rzg3s_pcie_host_parse_port(struct rzg3s_pcie_host *host) ··· 1351 1204 u32 val; 1352 1205 int ret; 1353 1206 1207 + /* SoC-specific pre-configuration */ 1208 + if (host->data->config_pre_init) 1209 + host->data->config_pre_init(host); 1210 + 1354 1211 /* Initialize the PCIe related registers */ 1355 1212 ret = rzg3s_pcie_config_init(host); 1356 1213 if (ret) 1357 - return ret; 1214 + goto config_deinit; 1358 1215 1359 1216 ret = rzg3s_pcie_host_init_port(host); 1360 1217 if (ret) 1361 - return ret; 1218 + goto config_deinit; 1219 + 1220 + /* Enable ASPM L1 transition for SoCs that use it */ 1221 + ret = rzg3s_sysc_config_func(host->sysc, 1222 + RZG3S_SYSC_FUNC_ID_L1_ALLOW, 1); 1223 + if (ret) 1224 + goto config_deinit_and_refclk; 1362 1225 1363 1226 /* Initialize the interrupts */ 1364 1227 rzg3s_pcie_irq_init(host); 1365 1228 1366 - ret = reset_control_bulk_deassert(host->data->num_cfg_resets, 1367 - host->cfg_resets); 1229 + /* SoC-specific post-configuration */ 1230 + ret = host->data->config_post_init(host); 1368 1231 if (ret) 1369 - goto disable_port_refclk; 1232 + goto config_deinit_and_refclk; 1370 1233 1371 1234 /* Wait for link up */ 1372 1235 ret = readl_poll_timeout(host->axi + RZG3S_PCI_PCSTAT1, val, ··· 1385 1228 PCIE_LINK_WAIT_SLEEP_MS * MILLI * 1386 1229 PCIE_LINK_WAIT_MAX_RETRIES); 1387 1230 if (ret) 1388 - goto cfg_resets_deassert; 1231 + goto 
config_deinit_post; 1389 1232 1390 1233 val = readl_relaxed(host->axi + RZG3S_PCI_PCSTAT2); 1391 1234 dev_info(host->dev, "PCIe link status [0x%x]\n", val); 1392 1235 1393 1236 return 0; 1394 1237 1395 - cfg_resets_deassert: 1396 - reset_control_bulk_assert(host->data->num_cfg_resets, 1397 - host->cfg_resets); 1398 - disable_port_refclk: 1238 + config_deinit_post: 1239 + host->data->config_deinit(host); 1240 + config_deinit_and_refclk: 1399 1241 clk_disable_unprepare(host->port.refclk); 1242 + config_deinit: 1243 + if (host->data->config_pre_init) 1244 + host->data->config_deinit(host); 1400 1245 return ret; 1401 1246 } 1402 1247 ··· 1430 1271 u64 pci_addr = entry->res->start - entry->offset; 1431 1272 u64 cpu_addr = entry->res->start; 1432 1273 u64 cpu_end = entry->res->end; 1433 - u64 size_id = 0; 1434 1274 int id = *index; 1435 1275 u64 size; 1436 1276 1437 - while (cpu_addr < cpu_end) { 1277 + /* 1278 + * According to the RZ/G3S HW manual (Rev.1.10, section 34.6.6.7) and 1279 + * RZ/G3E HW manual (Rev.1.15, section 6.6.7.6): 1280 + * - Each window must be a single memory size of power of two 1281 + * - Mask registers must be set to (2^N - 1) 1282 + * - Bit carry must not occur when adding base and mask registers, 1283 + * meaning the base address must be aligned to the window size 1284 + * 1285 + * Split non-power-of-2 regions into multiple windows to satisfy 1286 + * these constraints without over-mapping. 
1287 + */ 1288 + while (cpu_addr <= cpu_end) { 1289 + u64 remaining_size = cpu_end - cpu_addr + 1; 1290 + u64 align_limit; 1291 + 1438 1292 if (id >= RZG3S_MAX_WINDOWS) 1439 1293 return dev_err_probe(host->dev, -ENOSPC, 1440 1294 "Failed to map inbound window for resource (%s)\n", 1441 1295 entry->res->name); 1442 1296 1443 - size = resource_size(entry->res) - size_id; 1297 + /* Start with largest power-of-two that fits in remaining size */ 1298 + size = 1ULL << __fls(remaining_size); 1444 1299 1445 1300 /* 1446 - * According to the RZ/G3S HW manual (Rev.1.10, 1447 - * section 34.3.1.71 AXI Window Mask (Lower) Registers) the min 1448 - * size is 4K. 1301 + * The "no bit carry" rule requires base addresses to be 1302 + * aligned to the window size. Find the maximum window size 1303 + * that both addresses can support based on their natural 1304 + * alignment (lowest set bit). 1305 + */ 1306 + align_limit = min(cpu_addr ? (1ULL << __ffs(cpu_addr)) : ~0ULL, 1307 + pci_addr ? (1ULL << __ffs(pci_addr)) : ~0ULL); 1308 + 1309 + size = min(size, align_limit); 1310 + 1311 + /* 1312 + * Minimum window size is 4KB. 1313 + * See RZ/G3S HW manual (Rev.1.10, section 34.3.1.71) and 1314 + * RZ/G3E HW manual (Rev.1.15, section 6.6.4.1.3.(74)). 1449 1315 */ 1450 1316 size = max(size, SZ_4K); 1451 1317 1452 - /* 1453 - * According the RZ/G3S HW manual (Rev.1.10, sections: 1454 - * - 34.3.1.69 AXI Window Base (Lower) Registers 1455 - * - 34.3.1.71 AXI Window Mask (Lower) Registers 1456 - * - 34.3.1.73 AXI Destination (Lower) Registers) 1457 - * the CPU addr, PCIe addr, size should be 4K aligned and be a 1458 - * power of 2. 1459 - */ 1460 - size = ALIGN(size, SZ_4K); 1461 - size = roundup_pow_of_two(size); 1462 - 1463 - cpu_addr = ALIGN(cpu_addr, SZ_4K); 1464 - pci_addr = ALIGN(pci_addr, SZ_4K); 1465 - 1466 - /* 1467 - * According to the RZ/G3S HW manual (Rev.1.10, section 1468 - * 34.3.1.71 AXI Window Mask (Lower) Registers) HW expects first 1469 - * 12 LSB bits to be 0xfff. 
Subtract 1 from size for this. 1470 - */ 1471 1318 rzg3s_pcie_set_inbound_window(host, cpu_addr, pci_addr, 1472 1319 size - 1, id); 1473 1320 1474 1321 pci_addr += size; 1475 1322 cpu_addr += size; 1476 - size_id = size; 1477 1323 id++; 1478 1324 } 1479 1325 *index = id; ··· 1681 1517 struct device_node *sysc_np __free(device_node) = 1682 1518 of_parse_phandle(np, "renesas,sysc", 0); 1683 1519 struct rzg3s_pcie_host *host; 1520 + struct rzg3s_sysc *sysc; 1684 1521 int ret; 1685 1522 1686 1523 bridge = devm_pci_alloc_host_bridge(dev, sizeof(*host)); ··· 1693 1528 host->data = device_get_match_data(dev); 1694 1529 platform_set_drvdata(pdev, host); 1695 1530 1531 + host->sysc = devm_kzalloc(dev, sizeof(*host->sysc), GFP_KERNEL); 1532 + if (!host->sysc) 1533 + return -ENOMEM; 1534 + 1535 + sysc = host->sysc; 1536 + sysc->info = &host->data->sysc_info; 1537 + 1696 1538 host->axi = devm_platform_ioremap_resource(pdev, 0); 1697 1539 if (IS_ERR(host->axi)) 1698 1540 return PTR_ERR(host->axi); 1699 1541 host->pcie = host->axi + RZG3S_PCI_CFG_BASE; 1700 1542 1701 1543 host->max_link_speed = of_pci_get_max_link_speed(np); 1702 - if (host->max_link_speed < 0) 1703 - host->max_link_speed = 2; 1704 1544 1705 1545 ret = rzg3s_pcie_host_parse_port(host); 1706 1546 if (ret) 1707 1547 return ret; 1708 1548 1709 - host->sysc = syscon_node_to_regmap(sysc_np); 1710 - if (IS_ERR(host->sysc)) { 1711 - ret = PTR_ERR(host->sysc); 1549 + sysc->regmap = syscon_node_to_regmap(sysc_np); 1550 + if (IS_ERR(sysc->regmap)) { 1551 + ret = PTR_ERR(sysc->regmap); 1712 1552 goto port_refclk_put; 1713 1553 } 1714 1554 1715 - ret = regmap_update_bits(host->sysc, RZG3S_SYS_PCIE_RST_RSM_B, 1716 - RZG3S_SYS_PCIE_RST_RSM_B_MASK, 1717 - FIELD_PREP(RZG3S_SYS_PCIE_RST_RSM_B_MASK, 1)); 1555 + /* Put controller in RC mode */ 1556 + ret = rzg3s_sysc_config_func(sysc, RZG3S_SYSC_FUNC_ID_MODE, 1); 1557 + if (ret) 1558 + goto port_refclk_put; 1559 + 1560 + ret = rzg3s_sysc_config_func(sysc, 
 			     RZG3S_SYSC_FUNC_ID_RST_RSM_B, 1);
 	if (ret)
 		goto port_refclk_put;
···
 host_probe_teardown:
 	rzg3s_pcie_teardown_irqdomain(host);
-	reset_control_bulk_deassert(host->data->num_cfg_resets,
-				    host->cfg_resets);
+	host->data->config_deinit(host);
 rpm_put:
 	pm_runtime_put_sync(dev);
 rpm_disable:
···
 	 * SYSC RST_RSM_B signal need to be asserted before turning off the
 	 * power to the PHY.
 	 */
-	regmap_update_bits(host->sysc, RZG3S_SYS_PCIE_RST_RSM_B,
-			   RZG3S_SYS_PCIE_RST_RSM_B_MASK,
-			   FIELD_PREP(RZG3S_SYS_PCIE_RST_RSM_B_MASK, 0));
+	rzg3s_sysc_config_func(sysc, RZG3S_SYSC_FUNC_ID_RST_RSM_B, 0);
 port_refclk_put:
 	clk_put(host->port.refclk);
···
 	struct rzg3s_pcie_host *host = dev_get_drvdata(dev);
 	const struct rzg3s_pcie_soc_data *data = host->data;
 	struct rzg3s_pcie_port *port = &host->port;
-	struct regmap *sysc = host->sysc;
+	struct rzg3s_sysc *sysc = host->sysc;
 	int ret;

 	ret = pm_runtime_put_sync(dev);
···
 	clk_disable_unprepare(port->refclk);

-	ret = reset_control_bulk_assert(data->num_power_resets,
-					host->power_resets);
+	/* SoC-specific de-initialization */
+	ret = data->config_deinit(host);
 	if (ret)
 		goto refclk_restore;

-	ret = reset_control_bulk_assert(data->num_cfg_resets,
-					host->cfg_resets);
+	ret = reset_control_bulk_assert(data->num_power_resets,
+					host->power_resets);
+	if (ret)
+		goto config_reinit;
+
+	ret = rzg3s_sysc_config_func(sysc, RZG3S_SYSC_FUNC_ID_RST_RSM_B, 0);
 	if (ret)
 		goto power_resets_restore;
-
-	ret = regmap_update_bits(sysc, RZG3S_SYS_PCIE_RST_RSM_B,
-				 RZG3S_SYS_PCIE_RST_RSM_B_MASK,
-				 FIELD_PREP(RZG3S_SYS_PCIE_RST_RSM_B_MASK, 0));
-	if (ret)
-		goto cfg_resets_restore;

 	return 0;

 	/* Restore the previous state if any error happens */
-cfg_resets_restore:
-	reset_control_bulk_deassert(data->num_cfg_resets,
-				    host->cfg_resets);
 power_resets_restore:
 	reset_control_bulk_deassert(data->num_power_resets,
 				    host->power_resets);
+config_reinit:
+	if (data->config_pre_init)
+		data->config_pre_init(host);
+	data->config_post_init(host);
 refclk_restore:
 	clk_prepare_enable(port->refclk);
 	pm_runtime_resume_and_get(dev);
···
 {
 	struct rzg3s_pcie_host *host = dev_get_drvdata(dev);
 	const struct rzg3s_pcie_soc_data *data = host->data;
-	struct regmap *sysc = host->sysc;
+	struct rzg3s_sysc *sysc = host->sysc;
 	int ret;

-	ret = regmap_update_bits(sysc, RZG3S_SYS_PCIE_RST_RSM_B,
-				 RZG3S_SYS_PCIE_RST_RSM_B_MASK,
-				 FIELD_PREP(RZG3S_SYS_PCIE_RST_RSM_B_MASK, 1));
+	ret = rzg3s_sysc_config_func(sysc, RZG3S_SYSC_FUNC_ID_MODE, 1);
+	if (ret)
+		return ret;
+
+	ret = rzg3s_sysc_config_func(sysc, RZG3S_SYSC_FUNC_ID_RST_RSM_B, 1);
 	if (ret)
 		return ret;
···
 	reset_control_bulk_assert(data->num_power_resets,
 				  host->power_resets);
 assert_rst_rsm_b:
-	regmap_update_bits(sysc, RZG3S_SYS_PCIE_RST_RSM_B,
-			   RZG3S_SYS_PCIE_RST_RSM_B_MASK,
-			   FIELD_PREP(RZG3S_SYS_PCIE_RST_RSM_B_MASK, 0));
+	rzg3s_sysc_config_func(sysc, RZG3S_SYSC_FUNC_ID_RST_RSM_B, 0);
 	return ret;
 }
···
 	.num_power_resets = ARRAY_SIZE(rzg3s_soc_power_resets),
 	.cfg_resets = rzg3s_soc_cfg_resets,
 	.num_cfg_resets = ARRAY_SIZE(rzg3s_soc_cfg_resets),
+	.config_post_init = rzg3s_pcie_config_post_init,
+	.config_deinit = rzg3s_pcie_config_deinit,
 	.init_phy = rzg3s_soc_pcie_init_phy,
+	.sysc_info = {
+		.functions = {
+			[RZG3S_SYSC_FUNC_ID_RST_RSM_B] = {
+				.offset = 0xd74,
+				.mask = BIT(0),
+			},
+		},
+	},
+};
+
+static const char * const rzg3e_soc_power_resets[] = { "aresetn" };
+
+static const struct rzg3s_pcie_soc_data rzg3e_soc_data = {
+	.power_resets = rzg3e_soc_power_resets,
+	.num_power_resets = ARRAY_SIZE(rzg3e_soc_power_resets),
+	.config_pre_init = rzg3e_pcie_config_pre_init,
+	.config_post_init = rzg3e_pcie_config_post_init,
+	.config_deinit = rzg3e_pcie_config_deinit,
+	.sysc_info = {
+		.functions = {
+			[RZG3S_SYSC_FUNC_ID_L1_ALLOW] = {
+				.offset = 0x1020,
+				.mask = BIT(0),
+			},
+			[RZG3S_SYSC_FUNC_ID_MODE] = {
+				.offset = 0x1024,
+				.mask = BIT(0),
+			},
+		},
+	},
 };

 static const struct of_device_id rzg3s_pcie_of_match[] = {
 	{
 		.compatible = "renesas,r9a08g045-pcie",
 		.data = &rzg3s_soc_data,
+	},
+	{
+		.compatible = "renesas,r9a09g047-pcie",
+		.data = &rzg3e_soc_data,
 	},
 	{}
 };
+4
drivers/pci/endpoint/functions/pci-epf-mhi.c
···
 		dev_err(dev, "DMA transfer timeout\n");
 		dmaengine_terminate_sync(chan);
 		ret = -ETIMEDOUT;
+	} else {
+		ret = 0;
 	}

 err_unmap:
···
 		dev_err(dev, "DMA transfer timeout\n");
 		dmaengine_terminate_sync(chan);
 		ret = -ETIMEDOUT;
+	} else {
+		ret = 0;
 	}

 err_unmap:
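The pci-epf-mhi.c change above is needed because `wait_for_completion_timeout()` returns the remaining jiffies (> 0) on success and 0 on timeout, so `ret` still holds a stale value from an earlier call unless the success branch clears it. A minimal userspace sketch of the same pattern (the helper name is invented for illustration, not kernel API):

```c
#include <errno.h>

/*
 * Illustrative stand-in for the pattern fixed above: map a
 * wait_for_completion_timeout()-style result (0 on timeout, >0 on
 * success) to an errno-style return, making the success path set
 * ret explicitly instead of leaking a value from an earlier call.
 */
static int wait_result_to_errno(unsigned long remaining_jiffies)
{
	int ret;

	if (!remaining_jiffies) {
		/* Timed out: the driver would terminate the DMA transfer here. */
		ret = -ETIMEDOUT;
	} else {
		/* Completed in time: clear any leftover value explicitly. */
		ret = 0;
	}

	return ret;
}
```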
+2 -54
drivers/pci/endpoint/functions/pci-epf-ntb.c
···
 }

 /**
- * epf_ntb_epc_destroy_interface() - Cleanup NTB EPC interface
- * @ntb: NTB device that facilitates communication between HOST1 and HOST2
- * @type: PRIMARY interface or SECONDARY interface
- *
- * Unbind NTB function device from EPC and relinquish reference to pci_epc
- * for each of the interface.
- */
-static void epf_ntb_epc_destroy_interface(struct epf_ntb *ntb,
-					  enum pci_epc_interface_type type)
-{
-	struct epf_ntb_epc *ntb_epc;
-	struct pci_epc *epc;
-	struct pci_epf *epf;
-
-	if (type < 0)
-		return;
-
-	epf = ntb->epf;
-	ntb_epc = ntb->epc[type];
-	if (!ntb_epc)
-		return;
-	epc = ntb_epc->epc;
-	pci_epc_remove_epf(epc, epf, type);
-	pci_epc_put(epc);
-}
-
-/**
- * epf_ntb_epc_destroy() - Cleanup NTB EPC interface
- * @ntb: NTB device that facilitates communication between HOST1 and HOST2
- *
- * Wrapper for epf_ntb_epc_destroy_interface() to cleanup all the NTB interfaces
- */
-static void epf_ntb_epc_destroy(struct epf_ntb *ntb)
-{
-	enum pci_epc_interface_type type;
-
-	for (type = PRIMARY_INTERFACE; type <= SECONDARY_INTERFACE; type++)
-		epf_ntb_epc_destroy_interface(ntb, type);
-}
-
-/**
  * epf_ntb_epc_create_interface() - Create and initialize NTB EPC interface
  * @ntb: NTB device that facilitates communication between HOST1 and HOST2
  * @epc: struct pci_epc to which a particular NTB interface should be associated
···
 	ret = epf_ntb_epc_create_interface(ntb, epf->sec_epc,
 					   SECONDARY_INTERFACE);
-	if (ret) {
+	if (ret)
 		dev_err(dev, "SECONDARY intf: Fail to create NTB EPC\n");
-		goto err_epc_create;
-	}
-
-	return 0;
-
-err_epc_create:
-	epf_ntb_epc_destroy_interface(ntb, PRIMARY_INTERFACE);

 	return ret;
 }
···
 	ret = epf_ntb_init_epc_bar(ntb);
 	if (ret) {
 		dev_err(dev, "Failed to create NTB EPC\n");
-		goto err_bar_init;
+		return ret;
 	}

 	ret = epf_ntb_config_spad_bar_alloc_interface(ntb);
···
 err_bar_alloc:
 	epf_ntb_config_spad_bar_free(ntb);

-err_bar_init:
-	epf_ntb_epc_destroy(ntb);
-
 	return ret;
 }
···
 	epf_ntb_epc_cleanup(ntb);
 	epf_ntb_config_spad_bar_free(ntb);
-	epf_ntb_epc_destroy(ntb);
 }

 #define EPF_NTB_R(_name) \
+36 -3
drivers/pci/endpoint/functions/pci-epf-test.c
···
 #define STATUS_BAR_SUBRANGE_SETUP_FAIL		BIT(15)
 #define STATUS_BAR_SUBRANGE_CLEAR_SUCCESS	BIT(16)
 #define STATUS_BAR_SUBRANGE_CLEAR_FAIL		BIT(17)
+#define STATUS_NO_RESOURCE			BIT(18)

 #define FLAG_USE_DMA				BIT(0)
···
 #define CAP_MSIX				BIT(2)
 #define CAP_INTX				BIT(3)
 #define CAP_SUBRANGE_MAPPING			BIT(4)
+#define CAP_DYNAMIC_INBOUND_MAPPING		BIT(5)
+#define CAP_BAR0_RESERVED			BIT(6)
+#define CAP_BAR1_RESERVED			BIT(7)
+#define CAP_BAR2_RESERVED			BIT(8)
+#define CAP_BAR3_RESERVED			BIT(9)
+#define CAP_BAR4_RESERVED			BIT(10)
+#define CAP_BAR5_RESERVED			BIT(11)

 #define PCI_EPF_TEST_BAR_SUBRANGE_NSUB	2
···
 	struct pci_epf_test_reg *reg = epf_test->reg[epf_test->test_reg_bar];
 	struct pci_epf *epf = epf_test->epf;

-	free_irq(epf->db_msg[0].virq, epf_test);
 	reg->doorbell_bar = cpu_to_le32(NO_BAR);

 	pci_epf_free_doorbell(epf);
···
 			       &epf_test->db_bar.phys_addr, &offset);

 	if (ret)
-		goto err_doorbell_cleanup;
+		goto err_free_irq;

 	reg->doorbell_offset = cpu_to_le32(offset);
···

 	ret = pci_epc_set_bar(epc, epf->func_no, epf->vfunc_no, &epf_test->db_bar);
 	if (ret)
-		goto err_doorbell_cleanup;
+		goto err_free_irq;

 	status |= STATUS_DOORBELL_ENABLE_SUCCESS;
 	reg->status = cpu_to_le32(status);
 	return;

+err_free_irq:
+	free_irq(epf->db_msg[0].virq, epf_test);
 err_doorbell_cleanup:
 	pci_epf_test_doorbell_cleanup(epf_test);
 set_status_err:
···
 	if (bar < BAR_0)
 		goto set_status_err;

+	free_irq(epf->db_msg[0].virq, epf_test);
 	pci_epf_test_doorbell_cleanup(epf_test);

 	/*
···
 	ret = pci_epc_set_bar(epc, epf->func_no, epf->vfunc_no, bar);
 	if (ret) {
 		dev_err(&epf->dev, "pci_epc_set_bar() failed: %d\n", ret);
+		if (ret == -ENOSPC)
+			status |= STATUS_NO_RESOURCE;
 		bar->submap = old_submap;
 		bar->num_submap = old_nsub;
 		ret = pci_epc_set_bar(epc, epf->func_no, epf->vfunc_no, bar);
···
 	if (epf_test->epc_features->intx_capable)
 		caps |= CAP_INTX;

+	if (epf_test->epc_features->dynamic_inbound_mapping)
+		caps |= CAP_DYNAMIC_INBOUND_MAPPING;
+
 	if (epf_test->epc_features->dynamic_inbound_mapping &&
 	    epf_test->epc_features->subrange_mapping)
 		caps |= CAP_SUBRANGE_MAPPING;
+
+	if (epf_test->epc_features->bar[BAR_0].type == BAR_RESERVED)
+		caps |= CAP_BAR0_RESERVED;
+
+	if (epf_test->epc_features->bar[BAR_1].type == BAR_RESERVED)
+		caps |= CAP_BAR1_RESERVED;
+
+	if (epf_test->epc_features->bar[BAR_2].type == BAR_RESERVED)
+		caps |= CAP_BAR2_RESERVED;
+
+	if (epf_test->epc_features->bar[BAR_3].type == BAR_RESERVED)
+		caps |= CAP_BAR3_RESERVED;
+
+	if (epf_test->epc_features->bar[BAR_4].type == BAR_RESERVED)
+		caps |= CAP_BAR4_RESERVED;
+
+	if (epf_test->epc_features->bar[BAR_5].type == BAR_RESERVED)
+		caps |= CAP_BAR5_RESERVED;

 	reg->caps = cpu_to_le32(caps);
 }
+33 -36
drivers/pci/endpoint/functions/pci-epf-vntb.c
···
 	struct msi_msg *msg;
 	size_t sz;
 	int ret;
-	int i;
+	int i, req;

 	ret = pci_epf_alloc_doorbell(epf, ntb->db_count);
 	if (ret)
 		return ret;

-	for (i = 0; i < ntb->db_count; i++) {
-		ret = request_irq(epf->db_msg[i].virq, epf_ntb_doorbell_handler,
+	for (req = 0; req < ntb->db_count; req++) {
+		ret = request_irq(epf->db_msg[req].virq, epf_ntb_doorbell_handler,
 				  0, "pci_epf_vntb_db", ntb);

 		if (ret) {
 			dev_err(&epf->dev,
 				"Failed to request doorbell IRQ: %d\n",
-				epf->db_msg[i].virq);
+				epf->db_msg[req].virq);
 			goto err_free_irq;
 		}
 	}
···
 	return 0;

 err_free_irq:
-	for (i--; i >= 0; i--)
-		free_irq(epf->db_msg[i].virq, ntb);
+	for (req--; req >= 0; req--)
+		free_irq(epf->db_msg[req].virq, ntb);

 	pci_epf_free_doorbell(ntb->epf);
 	return ret;
···
 				 ntb->mws_size[i]);
 		}
 	}
-
-/**
- * epf_ntb_epc_destroy() - Cleanup NTB EPC interface
- * @ntb: NTB device that facilitates communication between HOST and VHOST
- *
- * Wrapper for epf_ntb_epc_destroy_interface() to cleanup all the NTB interfaces
- */
-static void epf_ntb_epc_destroy(struct epf_ntb *ntb)
-{
-	pci_epc_remove_epf(ntb->epf->epc, ntb->epf, 0);
-	pci_epc_put(ntb->epf->epc);
-}
-

 /**
  * epf_ntb_is_bar_used() - Check if a bar is used in the ntb configuration
···
  */
 static void epf_ntb_epc_cleanup(struct epf_ntb *ntb)
 {
+	disable_delayed_work_sync(&ntb->cmd_handler);
 	epf_ntb_mw_bar_clear(ntb, ntb->num_mws);
 	epf_ntb_db_bar_clear(ntb);
 	epf_ntb_config_sspad_bar_clear(ntb);
···
 	struct config_group *group = to_config_group(item);		\
 	struct epf_ntb *ntb = to_epf_ntb(group);			\
 	struct device *dev = &ntb->epf->dev;				\
-	int win_no;							\
+	int win_no, idx;						\
									\
 	if (sscanf(#_name, "mw%d", &win_no) != 1)			\
 		return -EINVAL;						\
									\
-	if (win_no <= 0 || win_no > ntb->num_mws) {			\
-		dev_err(dev, "Invalid num_nws: %d value\n", ntb->num_mws); \
-		return -EINVAL;						\
+	idx = win_no - 1;						\
+	if (idx < 0 || idx >= ntb->num_mws) {				\
+		dev_err(dev, "MW%d out of range (num_mws=%d)\n",	\
+			win_no, ntb->num_mws);				\
+		return -ERANGE;						\
 	}								\
-									\
-	return sprintf(page, "%lld\n", ntb->mws_size[win_no - 1]);	\
+	idx = array_index_nospec(idx, ntb->num_mws);			\
+	return sprintf(page, "%llu\n", ntb->mws_size[idx]);		\
 }

 #define EPF_NTB_MW_W(_name)						\
···
 	struct config_group *group = to_config_group(item);		\
 	struct epf_ntb *ntb = to_epf_ntb(group);			\
 	struct device *dev = &ntb->epf->dev;				\
-	int win_no;							\
+	int win_no, idx;						\
 	u64 val;							\
 	int ret;							\
									\
···
 	if (sscanf(#_name, "mw%d", &win_no) != 1)			\
 		return -EINVAL;						\
									\
-	if (win_no <= 0 || win_no > ntb->num_mws) {			\
-		dev_err(dev, "Invalid num_nws: %d value\n", ntb->num_mws); \
-		return -EINVAL;						\
+	idx = win_no - 1;						\
+	if (idx < 0 || idx >= ntb->num_mws) {				\
+		dev_err(dev, "MW%d out of range (num_mws=%d)\n",	\
+			win_no, ntb->num_mws);				\
+		return -ERANGE;						\
 	}								\
-									\
-	ntb->mws_size[win_no - 1] = val;				\
+	idx = array_index_nospec(idx, ntb->num_mws);			\
+	ntb->mws_size[idx] = val;					\
									\
 	return len;							\
 }
···
 	return 0;
 }

+static struct device *vntb_epf_get_dma_dev(struct ntb_dev *ndev)
+{
+	struct epf_ntb *ntb = ntb_ndev(ndev);
+	struct pci_epc *epc = ntb->epf->epc;
+
+	return epc->dev.parent;
+}
+
 static const struct ntb_dev_ops vntb_epf_ops = {
 	.mw_count	= vntb_epf_mw_count,
 	.spad_count	= vntb_epf_spad_count,
···
 	.db_clear_mask	= vntb_epf_db_clear_mask,
 	.db_clear	= vntb_epf_db_clear,
 	.link_disable	= vntb_epf_link_disable,
+	.get_dma_dev	= vntb_epf_get_dma_dev,
 };

 static int pci_vntb_probe(struct pci_dev *pdev, const struct pci_device_id *id)
···
 	ret = epf_ntb_init_epc_bar(ntb);
 	if (ret) {
 		dev_err(dev, "Failed to create NTB EPC\n");
-		goto err_bar_init;
+		return ret;
 	}

 	ret = epf_ntb_config_spad_bar_alloc(ntb);
···
 err_bar_alloc:
 	epf_ntb_config_spad_bar_free(ntb);

-err_bar_init:
-	epf_ntb_epc_destroy(ntb);
-
 	return ret;
 }
···
 	epf_ntb_epc_cleanup(ntb);
 	epf_ntb_config_spad_bar_free(ntb);
-	epf_ntb_epc_destroy(ntb);

 	pci_unregister_driver(&vntb_pci_driver);
 }
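The configfs `mwN` accessor fix above converts the 1-based window number to a 0-based array index, range-checks it before use, and then sanitizes it with `array_index_nospec()` to close a Spectre-v1 window. `array_index_nospec()` is kernel-only, so this userspace sketch keeps just the arithmetic and range check (the helper name is illustrative):

```c
#include <errno.h>

/*
 * Sketch of the bounds logic in the mwN show/store macros above:
 * "mw1" maps to index 0, and anything outside [0, num_mws) is
 * rejected with -ERANGE before the index ever touches the array.
 * In the kernel the index would additionally be passed through
 * array_index_nospec() before use.
 */
static int mw_index(int win_no, int num_mws)
{
	int idx = win_no - 1;

	if (idx < 0 || idx >= num_mws)
		return -ERANGE;	/* mw0, or mwN beyond num_mws */

	return idx;
}
```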
+18 -12
drivers/pci/endpoint/pci-ep-cfs.c
···
 	pci_epc_remove_epf(epc, epf, SECONDARY_INTERFACE);
 }

-static struct configfs_item_operations pci_secondary_epc_item_ops = {
+static const struct configfs_item_operations pci_secondary_epc_item_ops = {
 	.allow_link	= pci_secondary_epc_epf_link,
 	.drop_link	= pci_secondary_epc_epf_unlink,
 };
···
 	pci_epc_remove_epf(epc, epf, PRIMARY_INTERFACE);
 }

-static struct configfs_item_operations pci_primary_epc_item_ops = {
+static const struct configfs_item_operations pci_primary_epc_item_ops = {
 	.allow_link	= pci_primary_epc_epf_link,
 	.drop_link	= pci_primary_epc_epf_unlink,
 };
···
 	pci_epc_remove_epf(epc, epf, PRIMARY_INTERFACE);
 }

-static struct configfs_item_operations pci_epc_item_ops = {
+static const struct configfs_item_operations pci_epc_item_ops = {
 	.allow_link	= pci_epc_epf_link,
 	.drop_link	= pci_epc_epf_unlink,
 };
···
 	kfree(epf_group);
 }

-static struct configfs_item_operations pci_epf_ops = {
+static const struct configfs_item_operations pci_epf_ops = {
 	.allow_link	= pci_epf_vepf_link,
 	.drop_link	= pci_epf_vepf_unlink,
 	.release	= pci_epf_release,
···

 	if (IS_ERR(group)) {
 		dev_err(&epf_group->epf->dev,
-			"failed to create epf type specific attributes\n");
+			"failed to create epf type specific attributes: %pe\n",
+			group);
 		return;
 	}
···
 	group = pci_ep_cfs_add_primary_group(epf_group);
 	if (IS_ERR(group)) {
-		pr_err("failed to create 'primary' EPC interface\n");
+		dev_err(&epf_group->epf->dev,
+			"failed to create 'primary' EPC interface: %pe\n",
+			group);
 		return;
 	}

 	group = pci_ep_cfs_add_secondary_group(epf_group);
 	if (IS_ERR(group)) {
-		pr_err("failed to create 'secondary' EPC interface\n");
+		dev_err(&epf_group->epf->dev,
+			"failed to create 'secondary' EPC interface: %pe\n",
+			group);
 		return;
 	}
···
 	epf = pci_epf_create(epf_name);
 	if (IS_ERR(epf)) {
-		pr_err("failed to create endpoint function device\n");
-		err = -EINVAL;
+		err = PTR_ERR(epf);
+		pr_err("failed to create endpoint function device (%s): %d\n",
+		       epf_name, err);
 		goto free_name;
 	}
···
 	config_item_put(item);
 }

-static struct configfs_group_operations pci_epf_group_ops = {
+static const struct configfs_group_operations pci_epf_group_ops = {
 	.make_group	= &pci_epf_make,
 	.drop_item	= &pci_epf_drop,
 };
···
 	group = configfs_register_default_group(functions_group, name,
 						&pci_epf_group_type);
 	if (IS_ERR(group))
-		pr_err("failed to register configfs group for %s function\n",
-		       name);
+		pr_err("failed to register configfs group for %s function: %pe\n",
+		       name, group);

 	return group;
 }
+5
drivers/pci/endpoint/pci-ep-msi.c
···
 		return -EINVAL;
 	}

+	if (epf->db_msg)
+		return -EBUSY;
+
 	domain = of_msi_map_get_device_domain(epc->dev.parent, 0,
 					      DOMAIN_BUS_PLATFORM_MSI);
 	if (!domain) {
···
 	if (ret) {
 		dev_err(dev, "Failed to allocate MSI\n");
 		kfree(msg);
+		epf->db_msg = NULL;
+		epf->num_db = 0;
 		return ret;
 	}

+3 -2
drivers/pci/endpoint/pci-epc-core.c
···
 		bar++;

 	for (i = bar; i < PCI_STD_NUM_BARS; i++) {
-		/* If the BAR is not reserved, return it. */
-		if (epc_features->bar[i].type != BAR_RESERVED)
+		/* If the BAR is not reserved or disabled, return it. */
+		if (epc_features->bar[i].type != BAR_RESERVED &&
+		    epc_features->bar[i].type != BAR_DISABLED)
 			return i;
 	}

+1 -1
drivers/pci/endpoint/pci-epf-core.c
···
  * @epf_vf: the virtual EP function to be added
  *
  * A physical endpoint function can be associated with multiple virtual
- * endpoint functions. Invoke pci_epf_add_epf() to add a virtual PCI endpoint
+ * endpoint functions. Invoke pci_epf_add_vepf() to add a virtual PCI endpoint
  * function to a physical PCI endpoint function.
  */
 int pci_epf_add_vepf(struct pci_epf *epf_pf, struct pci_epf *epf_vf)
+2 -1
drivers/pci/hotplug/pciehp_core.c
···
 	snprintf(name, SLOT_NAME_SIZE, "%u", PSN(ctrl));

 	retval = pci_hp_initialize(&ctrl->hotplug_slot,
-				   ctrl->pcie->port->subordinate, 0, name);
+				   ctrl->pcie->port->subordinate,
+				   PCI_SLOT_ALL_DEVICES, name);
 	if (retval) {
 		ctrl_err(ctrl, "pci_hp_initialize failed: error %d\n", retval);
 		kfree(ops);
+7 -12
drivers/pci/hotplug/pnv_php.c
···
 static int pnv_php_populate_changeset(struct of_changeset *ocs,
 				      struct device_node *dn)
 {
-	struct device_node *child;
-	int ret = 0;
+	int ret;

-	for_each_child_of_node(dn, child) {
+	for_each_child_of_node_scoped(dn, child) {
 		ret = of_changeset_attach_node(ocs, child);
-		if (ret) {
-			of_node_put(child);
-			break;
-		}
+		if (ret)
+			return ret;

 		ret = pnv_php_populate_changeset(ocs, child);
-		if (ret) {
-			of_node_put(child);
-			break;
-		}
+		if (ret)
+			return ret;
 	}

-	return ret;
+	return 0;
 }

 static void *pnv_php_add_one_pdn(struct device_node *dn, void *data)
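The `for_each_child_of_node_scoped()` conversion above works because the scoped iterator ties the child reference to the loop-body scope, so every early `return` drops the reference automatically and the manual `of_node_put()` calls disappear. The underlying idea is the gcc/clang `cleanup` variable attribute, sketched here in userspace with `malloc`/`free` standing in for get/put (the counter and helper names are illustrative, not kernel API):

```c
#include <stdlib.h>

/* Counts how many times the scope-exit handler ran (for the demo only). */
static int releases;

/* Release handler invoked when a cleanup-attributed variable leaves scope. */
static void release_ref(int **p)
{
	free(*p);
	releases++;
}

/*
 * Each iteration takes a "reference" bound to the loop-body scope.
 * Whether the loop finishes, or we bail out early at 'fail_on', the
 * compiler runs release_ref() on every path out of the scope, so no
 * explicit put/free is needed before the return.
 */
static int use_refs(int fail_on)
{
	for (int i = 0; i < 3; i++) {
		__attribute__((cleanup(release_ref))) int *ref =
			malloc(sizeof(*ref));

		*ref = i;
		if (i == fail_on)
			return -1;	/* ref is still released here */
	}

	return 0;
}
```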
+1 -3
drivers/pci/hotplug/rpaphp_slot.c
···
 int rpaphp_register_slot(struct slot *slot)
 {
 	struct hotplug_slot *php_slot = &slot->hotplug_slot;
-	struct device_node *child;
 	u32 my_index;
 	int retval;
 	int slotno = -1;
···
 		return -EAGAIN;
 	}

-	for_each_child_of_node(slot->dn, child) {
+	for_each_child_of_node_scoped(slot->dn, child) {
 		retval = of_property_read_u32(child, "ibm,my-drc-index", &my_index);
 		if (my_index == slot->index) {
 			slotno = PCI_SLOT(PCI_DN(child)->devfn);
-			of_node_put(child);
 			break;
 		}
 	}
+5
drivers/pci/msi/api.c
···
  * Undo the interrupt vector allocations and possible device MSI/MSI-X
  * enablement earlier done through pci_alloc_irq_vectors_affinity() or
  * pci_alloc_irq_vectors().
+ *
+ * WARNING: Do not call this function if the device has been enabled
+ * with pcim_enable_device(). In that case, IRQ vectors are automatically
+ * managed via pcim_msi_release() and calling pci_free_irq_vectors() can
+ * lead to double-free issues.
  */
 void pci_free_irq_vectors(struct pci_dev *dev)
 {
+10
drivers/pci/msi/msi.c
···
 /*
  * Needs to be separate from pcim_release to prevent an ordering problem
  * vs. msi_device_data_release() in the MSI core code.
+ *
+ * TODO: Remove the legacy side-effect of pcim_enable_device() that
+ * activates automatic IRQ vector management. This design is dangerous
+ * and confusing because it switches normally un-managed functions
+ * into managed mode. Drivers should explicitly manage their IRQ vectors
+ * without this implicit behavior.
+ *
+ * The current implementation uses both pdev->is_managed and
+ * pdev->is_msi_managed flags, which adds unnecessary complexity.
+ * This should be simplified in a future kernel version.
  */
 static int pcim_setup_msi_release(struct pci_dev *dev)
 {
+1 -1
drivers/pci/npem.c
···
 	led->brightness_get = brightness_get;
 	led->max_brightness = 1;
 	led->default_trigger = "none";
-	led->flags = 0;
+	led->flags = LED_HW_PLUGGABLE;

 	ret = led_classdev_register(&npem->dev->dev, led);
 	if (ret)
+8 -13
drivers/pci/of.c
···

 	/* Check if there is a DT root node to attach the created node */
 	if (!of_root) {
-		pr_err("of_root node is NULL, cannot create PCI host bridge node\n");
+		pr_debug("of_root node is NULL, cannot create PCI host bridge node\n");
 		return;
 	}
···
 /**
  * of_pci_get_max_link_speed - Find the maximum link speed of the given device node.
  * @node: Device tree node with the maximum link speed information.
  *
- * This function will try to find the limitation of link speed by finding
- * a property called "max-link-speed" of the given device node.
+ * This function will try to read the "max-link-speed" property of the given
+ * device tree node. It does NOT validate the value of the property.
  *
- * Return:
- * * > 0 - On success, a maximum link speed.
- * * -EINVAL - Invalid "max-link-speed" property value, or failure to access
- *	       the property of the device tree node.
- *
- * Returns the associated max link speed from DT, or a negative value if the
- * required property is not found or is invalid.
+ * Return: Maximum link speed value on success, errno on failure.
  */
 int of_pci_get_max_link_speed(struct device_node *node)
 {
 	u32 max_link_speed;
+	int ret;

-	if (of_property_read_u32(node, "max-link-speed", &max_link_speed) ||
-	    max_link_speed == 0 || max_link_speed > 4)
-		return -EINVAL;
+	ret = of_property_read_u32(node, "max-link-speed", &max_link_speed);
+	if (ret)
+		return ret;

 	return max_link_speed;
 }
+8 -2
drivers/pci/p2pdma.c
···

 static const struct pci_p2pdma_whitelist_entry {
 	unsigned short vendor;
-	unsigned short device;
+	int device;
 	enum {
 		REQ_SAME_HOST_BRIDGE	= 1 << 0,
 	} flags;
···
 	{PCI_VENDOR_ID_INTEL,	0x2033, 0},
 	{PCI_VENDOR_ID_INTEL,	0x2020, 0},
 	{PCI_VENDOR_ID_INTEL,	0x09a2, 0},
+	/* Google SoCs. */
+	{PCI_VENDOR_ID_GOOGLE,	PCI_ANY_ID, 0},
 	{}
 };
···
 	device = root->device;

 	for (entry = pci_p2pdma_whitelist; entry->vendor; entry++) {
-		if (vendor != entry->vendor || device != entry->device)
+		if (vendor != entry->vendor)
 			continue;
+
+		if (entry->device != PCI_ANY_ID && device != entry->device)
+			continue;
+
 		if (entry->flags & REQ_SAME_HOST_BRIDGE && !same_host_bridge)
 			return false;

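The p2pdma change above widens `device` from `unsigned short` to `int` so that `PCI_ANY_ID` (all-ones, i.e. -1 as an `int`) can be stored alongside real 16-bit device IDs, turning a whitelist entry into a per-vendor wildcard. A self-contained sketch of that two-step vendor/device match (the table contents and names here are invented for the demo):

```c
#include <stdbool.h>

#define ANY_ID (~0)	/* illustrative stand-in for the kernel's PCI_ANY_ID */

struct wl_entry {
	unsigned short vendor;
	int device;	/* int, so ANY_ID can sit alongside 16-bit device IDs */
};

/* Demo table: one exact vendor/device pair, one whole-vendor wildcard. */
static const struct wl_entry demo_whitelist[] = {
	{ 0x8086, 0x2f00 },	/* a single specific device */
	{ 0x1ae0, ANY_ID },	/* any device from this vendor */
	{}			/* vendor 0 terminates the table */
};

/* Match loop mirroring the split vendor check / wildcard device check. */
static bool wl_match(const struct wl_entry *tbl,
		     unsigned short vendor, unsigned short device)
{
	for (const struct wl_entry *e = tbl; e->vendor; e++) {
		if (vendor != e->vendor)
			continue;

		if (e->device != ANY_ID && device != e->device)
			continue;

		return true;
	}

	return false;
}
```

Note that a real 16-bit device ID promoted to `int` can never equal `ANY_ID`, which is why the wildcard needs its own explicit comparison.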
+4 -2
drivers/pci/pci-sysfs.c
···
 	if (node != NUMA_NO_NODE && !node_online(node))
 		return -EINVAL;

+	if (node == dev->numa_node)
+		return count;
+
 	add_taint(TAINT_FIRMWARE_WORKAROUND, LOCKDEP_STILL_OK);
 	pci_alert(pdev, FW_BUG "Overriding NUMA node to %d. Contact your vendor for updates.",
 		  node);
···
 			 const char *buf, size_t count)
 {
 	struct pci_dev *pdev = to_pci_dev(dev);
-	struct pci_bus *bus = pdev->subordinate;
 	unsigned long val;

 	if (!capable(CAP_SYS_ADMIN))
···
 		return -EINVAL;

 	if (val) {
-		int ret = __pci_reset_bus(bus);
+		int ret = pci_try_reset_bridge(pdev);

 		if (ret)
 			return ret;
+173 -156
drivers/pci/pci.c
···

 		ret = pci_dev_str_match(dev, p, &p);
 		if (ret < 0) {
-			pr_info_once("PCI: Can't parse ACS command line parameter\n");
+			pr_warn_once("PCI: Can't parse ACS command line parameter\n");
 			break;
 		} else if (ret == 1) {
 			/* Found a match */
···
 #ifdef CONFIG_PCIEAER
 void pcie_clear_device_status(struct pci_dev *dev)
 {
-	u16 sta;
-
-	pcie_capability_read_word(dev, PCI_EXP_DEVSTA, &sta);
-	pcie_capability_write_word(dev, PCI_EXP_DEVSTA, sta);
+	pcie_capability_write_word(dev, PCI_EXP_DEVSTA,
+				   PCI_EXP_DEVSTA_CED | PCI_EXP_DEVSTA_NFED |
+				   PCI_EXP_DEVSTA_FED | PCI_EXP_DEVSTA_URD);
 }
 #endif
···
  */
 int pci_enable_atomic_ops_to_root(struct pci_dev *dev, u32 cap_mask)
 {
-	struct pci_bus *bus = dev->bus;
-	struct pci_dev *bridge;
+	struct pci_dev *root, *bridge;
 	u32 cap, ctl2;

 	/*
-	 * Per PCIe r5.0, sec 9.3.5.10, the AtomicOp Requester Enable bit
+	 * Per PCIe r7.0, sec 7.5.3.16, the AtomicOp Requester Enable bit
 	 * in Device Control 2 is reserved in VFs and the PF value applies
 	 * to all associated VFs.
 	 */
···
 		return -EINVAL;

 	/*
-	 * Per PCIe r4.0, sec 6.15, endpoints and root ports may be
-	 * AtomicOp requesters. For now, we only support endpoints as
-	 * requesters and root ports as completers. No endpoints as
+	 * Per PCIe r7.0, sec 6.15, endpoints and root ports may be
+	 * AtomicOp requesters. For now, we only support (legacy) endpoints
+	 * as requesters and root ports as completers. No endpoints as
 	 * completers, and no peer-to-peer.
 	 */

 	switch (pci_pcie_type(dev)) {
 	case PCI_EXP_TYPE_ENDPOINT:
 	case PCI_EXP_TYPE_LEG_END:
-	case PCI_EXP_TYPE_RC_END:
 		break;
 	default:
 		return -EINVAL;
 	}

-	while (bus->parent) {
-		bridge = bus->self;
+	root = pcie_find_root_port(dev);
+	if (!root)
+		return -EINVAL;

-		pcie_capability_read_dword(bridge, PCI_EXP_DEVCAP2, &cap);
+	pcie_capability_read_dword(root, PCI_EXP_DEVCAP2, &cap);
+	if ((cap & cap_mask) != cap_mask)
+		return -EINVAL;

+	bridge = pci_upstream_bridge(dev);
+	while (bridge != root) {
 		switch (pci_pcie_type(bridge)) {
-		/* Ensure switch ports support AtomicOp routing */
 		case PCI_EXP_TYPE_UPSTREAM:
-		case PCI_EXP_TYPE_DOWNSTREAM:
-			if (!(cap & PCI_EXP_DEVCAP2_ATOMIC_ROUTE))
-				return -EINVAL;
-			break;
-
-		/* Ensure root port supports all the sizes we care about */
-		case PCI_EXP_TYPE_ROOT_PORT:
-			if ((cap & cap_mask) != cap_mask)
-				return -EINVAL;
-			break;
-		}
-
-		/* Ensure upstream ports don't block AtomicOps on egress */
-		if (pci_pcie_type(bridge) == PCI_EXP_TYPE_UPSTREAM) {
+			/* Upstream ports must not block AtomicOps on egress */
 			pcie_capability_read_dword(bridge, PCI_EXP_DEVCTL2,
 						   &ctl2);
 			if (ctl2 & PCI_EXP_DEVCTL2_ATOMIC_EGRESS_BLOCK)
 				return -EINVAL;
+			fallthrough;
+
+		/* All switch ports need to route AtomicOps */
+		case PCI_EXP_TYPE_DOWNSTREAM:
+			pcie_capability_read_dword(bridge, PCI_EXP_DEVCAP2,
+						   &cap);
+			if (!(cap & PCI_EXP_DEVCAP2_ATOMIC_ROUTE))
+				return -EINVAL;
+			break;
 		}

-		bus = bus->parent;
+		bridge = pci_upstream_bridge(bridge);
 	}

 	pcie_capability_set_word(dev, PCI_EXP_DEVCTL2,
···
 	 * If "dev" is below a CXL port that has SBR control masked, SBR
 	 * won't do anything, so return error.
 	 */
-	if (bridge && cxl_sbr_masked(bridge)) {
-		if (probe)
-			return 0;
-
+	if (bridge && pcie_is_cxl(bridge) && cxl_sbr_masked(bridge))
 		return -ENOTTY;
-	}

 	rc = pci_dev_reset_iommu_prepare(dev);
 	if (rc) {
···
 	return true;
 }

-/* Lock devices from the top of the tree down */
-static void pci_bus_lock(struct pci_bus *bus)
-{
-	struct pci_dev *dev;
+static void pci_bus_lock(struct pci_bus *bus);
+static void pci_bus_unlock(struct pci_bus *bus);
+static int pci_bus_trylock(struct pci_bus *bus);

-	pci_dev_lock(bus->self);
+/* Lock devices from the top of the tree down */
+static void __pci_bus_lock(struct pci_bus *bus, struct pci_slot *slot)
+{
+	struct pci_dev *dev, *bridge = bus->self;
+
+	if (bridge)
+		pci_dev_lock(bridge);
+
 	list_for_each_entry(dev, &bus->devices, bus_list) {
+		if (slot && (!dev->slot || dev->slot != slot))
+			continue;
 		if (dev->subordinate)
 			pci_bus_lock(dev->subordinate);
 		else
···
 }

 /* Unlock devices from the bottom of the tree up */
-static void pci_bus_unlock(struct pci_bus *bus)
+static void __pci_bus_unlock(struct pci_bus *bus, struct pci_slot *slot)
 {
-	struct pci_dev *dev;
+	struct pci_dev *dev, *bridge = bus->self;

 	list_for_each_entry(dev, &bus->devices, bus_list) {
+		if (slot && (!dev->slot || dev->slot != slot))
+			continue;
 		if (dev->subordinate)
 			pci_bus_unlock(dev->subordinate);
 		else
 			pci_dev_unlock(dev);
 	}
-	pci_dev_unlock(bus->self);
+
+	if (bridge)
+		pci_dev_unlock(bridge);
 }

 /* Return 1 on successful lock, 0 on contention */
-static int pci_bus_trylock(struct pci_bus *bus)
+static int __pci_bus_trylock(struct pci_bus *bus, struct pci_slot *slot)
 {
-	struct pci_dev *dev;
+	struct pci_dev *dev, *bridge = bus->self;

-	if (!pci_dev_trylock(bus->self))
+	if (bridge && !pci_dev_trylock(bridge))
 		return 0;

 	list_for_each_entry(dev, &bus->devices, bus_list) {
+		if (slot && (!dev->slot || dev->slot != slot))
+			continue;
 		if (dev->subordinate) {
 			if (!pci_bus_trylock(dev->subordinate))
 				goto unlock;
···

 unlock:
 	list_for_each_entry_continue_reverse(dev, &bus->devices, bus_list) {
+		if (slot && (!dev->slot || dev->slot != slot))
+			continue;
 		if (dev->subordinate)
 			pci_bus_unlock(dev->subordinate);
 		else
 			pci_dev_unlock(dev);
 	}
-	pci_dev_unlock(bus->self);
+
+	if (bridge)
+		pci_dev_unlock(bridge);
 	return 0;
+}
+
+/* Lock devices from the top of the tree down */
+static void pci_bus_lock(struct pci_bus *bus)
+{
+	__pci_bus_lock(bus, NULL);
+}
+
+/* Unlock devices from the bottom of the tree up */
+static void pci_bus_unlock(struct pci_bus *bus)
+{
+	__pci_bus_unlock(bus, NULL);
+}
+
+/* Return 1 on successful lock, 0 on contention */
+static int pci_bus_trylock(struct pci_bus *bus)
+{
+	return __pci_bus_trylock(bus, NULL);
 }

 /* Do any devices on or below this slot prevent a bus reset? */
···
 /* Lock devices from the top of the tree down */
 static void pci_slot_lock(struct pci_slot *slot)
 {
-	struct pci_dev *dev, *bridge = slot->bus->self;
-
-	if (bridge)
-		pci_dev_lock(bridge);
-
-	list_for_each_entry(dev, &slot->bus->devices, bus_list) {
-		if (!dev->slot || dev->slot != slot)
-			continue;
-		if (dev->subordinate)
-			pci_bus_lock(dev->subordinate);
-		else
-			pci_dev_lock(dev);
-	}
+	__pci_bus_lock(slot->bus, slot);
 }

 /* Unlock devices from the bottom of the tree up */
 static void pci_slot_unlock(struct pci_slot *slot)
 {
-	struct pci_dev *dev, *bridge = slot->bus->self;
-
-	list_for_each_entry(dev, &slot->bus->devices, bus_list) {
-		if (!dev->slot || dev->slot != slot)
-			continue;
-		if (dev->subordinate)
-			pci_bus_unlock(dev->subordinate);
-		else
-			pci_dev_unlock(dev);
-	}
-
-	if (bridge)
-		pci_dev_unlock(bridge);
+	__pci_bus_unlock(slot->bus, slot);
 }

 /* Return 1 on successful lock, 0 on contention */
 static int pci_slot_trylock(struct pci_slot *slot)
 {
-	struct pci_dev *dev, *bridge = slot->bus->self;
-
-	if (bridge && !pci_dev_trylock(bridge))
-		return 0;
-
-	list_for_each_entry(dev, &slot->bus->devices, bus_list) {
-		if (!dev->slot || dev->slot != slot)
-			continue;
-		if (dev->subordinate) {
-			if (!pci_bus_trylock(dev->subordinate))
-				goto unlock;
-		} else if (!pci_dev_trylock(dev))
-			goto unlock;
-	}
-	return 1;
-
-unlock:
-	list_for_each_entry_continue_reverse(dev,
-					     &slot->bus->devices, bus_list) {
-		if (!dev->slot || dev->slot != slot)
-			continue;
-		if (dev->subordinate)
-			pci_bus_unlock(dev->subordinate);
-		else
-			pci_dev_unlock(dev);
-	}
-
-	if (bridge)
-		pci_dev_unlock(bridge);
-	return 0;
+	return __pci_bus_trylock(slot->bus, slot);
 }

 /*
···
 EXPORT_SYMBOL_GPL(pci_probe_reset_slot);

 /**
- * __pci_reset_slot - Try to reset a PCI slot
+ * pci_try_reset_slot - Try to reset a PCI slot
  * @slot: PCI slot to reset
  *
  * A PCI bus may host multiple slots, each slot may support a reset mechanism
···
  *
  * Same as above except return -EAGAIN if the slot cannot be locked
  */
-static int __pci_reset_slot(struct pci_slot *slot)
+static int pci_try_reset_slot(struct pci_slot *slot)
 {
 	int rc;
···
 }

 /**
- * pci_bus_error_reset - reset the bridge's subordinate bus
- * @bridge: The parent device that connects to the bus to reset
- *
- * This function will first try to reset the slots on this bus if the method is
- * available. If slot reset fails or is not available, this will fall back to a
- * secondary bus reset.
- */
-int pci_bus_error_reset(struct pci_dev *bridge)
-{
-	struct pci_bus *bus = bridge->subordinate;
-	struct pci_slot *slot;
-
-	if (!bus)
-		return -ENOTTY;
-
-	mutex_lock(&pci_slot_mutex);
-	if (list_empty(&bus->slots))
-		goto bus_reset;
-
-	list_for_each_entry(slot, &bus->slots, list)
-		if (pci_probe_reset_slot(slot))
-			goto bus_reset;
-
-	list_for_each_entry(slot, &bus->slots, list)
-		if (pci_slot_reset(slot, PCI_RESET_DO_RESET))
-			goto bus_reset;
-
-	mutex_unlock(&pci_slot_mutex);
-	return 0;
-bus_reset:
-	mutex_unlock(&pci_slot_mutex);
-	return pci_bus_reset(bridge->subordinate, PCI_RESET_DO_RESET);
-}
-
-/**
- * pci_probe_reset_bus - probe whether a PCI bus can be reset
- * @bus: PCI bus to probe
- *
- * Return 0 if bus can be reset, negative if a bus reset is not supported.
- */
-int pci_probe_reset_bus(struct pci_bus *bus)
-{
-	return pci_bus_reset(bus, PCI_RESET_PROBE);
-}
-EXPORT_SYMBOL_GPL(pci_probe_reset_bus);
-
-/**
- * __pci_reset_bus - Try to reset a PCI bus
+ * pci_try_reset_bus - Try to reset a PCI bus
  * @bus: top level PCI bus to reset
  *
  * Same as above except return -EAGAIN if the bus cannot be locked
  */
-int __pci_reset_bus(struct pci_bus *bus)
+static int pci_try_reset_bus(struct pci_bus *bus)
 {
 	int rc;
···
 	return rc;
 }

+#define PCI_RESET_RESTORE	true
+#define PCI_RESET_NO_RESTORE	false
+/**
+ * pci_reset_bridge - reset a bridge's subordinate bus
+ * @bridge: bridge that connects to the bus to reset
+ * @restore: when true use a reset method that invokes pci_dev_restore() post
+ *	     reset for affected devices
+ *
+ * This function will first try to reset the slots on
this bus if the method is 5681 + * available. If slot reset fails or is not available, this will fall back to a 5682 + * secondary bus reset. 5683 + */ 5684 + static int pci_reset_bridge(struct pci_dev *bridge, bool restore) 5685 + { 5686 + struct pci_bus *bus = bridge->subordinate; 5687 + struct pci_slot *slot; 5688 + 5689 + if (!bus) 5690 + return -ENOTTY; 5691 + 5692 + mutex_lock(&pci_slot_mutex); 5693 + if (list_empty(&bus->slots)) 5694 + goto bus_reset; 5695 + 5696 + list_for_each_entry(slot, &bus->slots, list) 5697 + if (pci_probe_reset_slot(slot)) 5698 + goto bus_reset; 5699 + 5700 + list_for_each_entry(slot, &bus->slots, list) { 5701 + int ret; 5702 + 5703 + if (restore) 5704 + ret = pci_try_reset_slot(slot); 5705 + else 5706 + ret = pci_slot_reset(slot, PCI_RESET_DO_RESET); 5707 + 5708 + if (ret) 5709 + goto bus_reset; 5710 + } 5711 + 5712 + mutex_unlock(&pci_slot_mutex); 5713 + return 0; 5714 + bus_reset: 5715 + mutex_unlock(&pci_slot_mutex); 5716 + 5717 + if (restore) 5718 + return pci_try_reset_bus(bus); 5719 + return pci_bus_reset(bridge->subordinate, PCI_RESET_DO_RESET); 5720 + } 5721 + 5722 + /** 5723 + * pci_bus_error_reset - reset the bridge's subordinate bus 5724 + * @bridge: The parent device that connects to the bus to reset 5725 + */ 5726 + int pci_bus_error_reset(struct pci_dev *bridge) 5727 + { 5728 + return pci_reset_bridge(bridge, PCI_RESET_NO_RESTORE); 5729 + } 5730 + 5731 + int pci_try_reset_bridge(struct pci_dev *bridge) 5732 + { 5733 + return pci_reset_bridge(bridge, PCI_RESET_RESTORE); 5734 + } 5735 + 5736 + /** 5737 + * pci_probe_reset_bus - probe whether a PCI bus can be reset 5738 + * @bus: PCI bus to probe 5739 + * 5740 + * Return 0 if bus can be reset, negative if a bus reset is not supported. 
5741 + */ 5742 + int pci_probe_reset_bus(struct pci_bus *bus) 5743 + { 5744 + return pci_bus_reset(bus, PCI_RESET_PROBE); 5745 + } 5746 + EXPORT_SYMBOL_GPL(pci_probe_reset_bus); 5747 + 5601 5748 /** 5602 5749 * pci_reset_bus - Try to reset a PCI bus 5603 5750 * @pdev: top level PCI device to reset via slot/bus ··· 5683 5678 int pci_reset_bus(struct pci_dev *pdev) 5684 5679 { 5685 5680 return (!pci_probe_reset_slot(pdev->slot)) ? 5686 - __pci_reset_slot(pdev->slot) : __pci_reset_bus(pdev->bus); 5681 + pci_try_reset_slot(pdev->slot) : pci_try_reset_bus(pdev->bus); 5687 5682 } 5688 5683 EXPORT_SYMBOL_GPL(pci_reset_bus); 5689 5684 ··· 6202 6197 cmd &= ~PCI_BRIDGE_CTL_VGA; 6203 6198 pci_write_config_word(bridge, PCI_BRIDGE_CONTROL, 6204 6199 cmd); 6200 + 6201 + 6202 + /* 6203 + * VGA Enable may not be writable if bridge doesn't 6204 + * support it. 6205 + */ 6206 + if (decode) { 6207 + pci_read_config_word(bridge, PCI_BRIDGE_CONTROL, 6208 + &cmd); 6209 + if (!(cmd & PCI_BRIDGE_CTL_VGA)) 6210 + return -EIO; 6211 + } 6205 6212 } 6206 6213 bus = bus->parent; 6207 6214 }
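The `pci_set_vga_state()` hunk above adds a write-then-read-back check so callers learn when a bridge hardwires VGA Enable to zero. A minimal userspace sketch of that pattern (the `fake_bridge` register model and its read-only mask are hypothetical, not kernel code):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustrative sketch (not kernel code): write-then-read-back verification
 * of a bit that may be hardwired read-only, mirroring the
 * PCI_BRIDGE_CTL_VGA check added to pci_set_vga_state().
 */
#define BRIDGE_CTL_VGA 0x0008

struct fake_bridge {
	uint16_t bridge_ctl;	/* current register value */
	uint16_t ro_mask;	/* bits the "hardware" ignores on write */
};

static void cfg_write(struct fake_bridge *b, uint16_t val)
{
	/* read-only bits keep their old value, writable bits take the new one */
	b->bridge_ctl = (b->bridge_ctl & b->ro_mask) | (val & ~b->ro_mask);
}

/* Returns 0 on success, -1 (standing in for -EIO) if VGA Enable didn't stick */
static int set_vga_decode(struct fake_bridge *b, int decode)
{
	uint16_t cmd = b->bridge_ctl;

	if (decode)
		cmd |= BRIDGE_CTL_VGA;
	else
		cmd &= ~BRIDGE_CTL_VGA;
	cfg_write(b, cmd);

	if (decode && !(b->bridge_ctl & BRIDGE_CTL_VGA))
		return -1;
	return 0;
}
```

As in the kernel change, only the enable path is verified; clearing a bit the hardware already ignores is harmless.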
+6 -1
drivers/pci/pci.h
··· 108 108 PCI_EXP_DEVCTL_FERE | PCI_EXP_DEVCTL_URRE) 109 109 110 110 extern const unsigned char pcie_link_speed[]; 111 + unsigned char pcie_get_link_speed(unsigned int speed); 112 + 111 113 extern bool pci_early_dump; 112 114 113 115 extern struct mutex pci_rescan_remove_lock; ··· 233 231 void pci_init_reset_methods(struct pci_dev *dev); 234 232 int pci_bridge_secondary_bus_reset(struct pci_dev *dev); 235 233 int pci_bus_error_reset(struct pci_dev *dev); 236 - int __pci_reset_bus(struct pci_bus *bus); 234 + int pci_try_reset_bridge(struct pci_dev *bridge); 237 235 238 236 struct pci_cap_saved_data { 239 237 u16 cap_nr; ··· 1054 1052 return pci_cardbus_resource_alignment(res); 1055 1053 return resource_alignment(res); 1056 1054 } 1055 + 1056 + resource_size_t pci_min_window_alignment(struct pci_bus *bus, 1057 + unsigned long type); 1057 1058 1058 1059 void pci_acs_init(struct pci_dev *dev); 1059 1060 void pci_enable_acs(struct pci_dev *dev);
-2
drivers/pci/pcie/aer.c
··· 1041 1041 * 3) There are multiple errors and prior ID comparing fails; 1042 1042 * We check AER status registers to find possible reporter. 1043 1043 */ 1044 - if (atomic_read(&dev->enable_cnt) == 0) 1045 - return false; 1046 1044 1047 1045 /* Check if AER is enabled */ 1048 1046 pcie_capability_read_word(dev, PCI_EXP_DEVCTL, &reg16);
+12 -5
drivers/pci/pcie/aspm.c
··· 706 706 } 707 707 708 708 /* Program T_POWER_ON times in both ports */ 709 - pci_write_config_dword(parent, parent->l1ss + PCI_L1SS_CTL2, ctl2); 710 - pci_write_config_dword(child, child->l1ss + PCI_L1SS_CTL2, ctl2); 709 + pci_clear_and_set_config_dword(parent, parent->l1ss + PCI_L1SS_CTL2, 710 + PCI_L1SS_CTL2_T_PWR_ON_VALUE | 711 + PCI_L1SS_CTL2_T_PWR_ON_SCALE, ctl2); 712 + pci_clear_and_set_config_dword(child, child->l1ss + PCI_L1SS_CTL2, 713 + PCI_L1SS_CTL2_T_PWR_ON_VALUE | 714 + PCI_L1SS_CTL2_T_PWR_ON_SCALE, ctl2); 711 715 712 716 /* Program Common_Mode_Restore_Time in upstream device */ 713 717 pci_clear_and_set_config_dword(parent, parent->l1ss + PCI_L1SS_CTL1, 714 - PCI_L1SS_CTL1_CM_RESTORE_TIME, ctl1); 718 + PCI_L1SS_CTL1_CM_RESTORE_TIME, 719 + ctl1 & PCI_L1SS_CTL1_CM_RESTORE_TIME); 715 720 716 721 /* Program LTR_L1.2_THRESHOLD time in both ports */ 717 722 pci_clear_and_set_config_dword(parent, parent->l1ss + PCI_L1SS_CTL1, 718 723 PCI_L1SS_CTL1_LTR_L12_TH_VALUE | 719 724 PCI_L1SS_CTL1_LTR_L12_TH_SCALE, 720 - ctl1); 725 + ctl1 & (PCI_L1SS_CTL1_LTR_L12_TH_VALUE | 726 + PCI_L1SS_CTL1_LTR_L12_TH_SCALE)); 721 727 pci_clear_and_set_config_dword(child, child->l1ss + PCI_L1SS_CTL1, 722 728 PCI_L1SS_CTL1_LTR_L12_TH_VALUE | 723 729 PCI_L1SS_CTL1_LTR_L12_TH_SCALE, 724 - ctl1); 730 + ctl1 & (PCI_L1SS_CTL1_LTR_L12_TH_VALUE | 731 + PCI_L1SS_CTL1_LTR_L12_TH_SCALE)); 725 732 726 733 if (pl1_2_enables || cl1_2_enables) { 727 734 pci_clear_and_set_config_dword(parent,
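The aspm.c fix above masks `ctl1` before passing it as the "set" argument, because `pci_clear_and_set_config_dword()` ORs in every bit of "set", not just the cleared field. A toy demonstration of why the unmasked snapshot leaks stray bits (`clear_and_set()` and `FIELD_MASK` are stand-ins, not kernel APIs):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of the clear-and-set pattern fixed in the L1SS programming above:
 * when "set" carries a full register snapshot, bits outside the cleared
 * field leak into the destination unless the caller masks them first.
 * clear_and_set() stands in for pci_clear_and_set_config_dword().
 */
static uint32_t clear_and_set(uint32_t reg, uint32_t clear, uint32_t set)
{
	reg &= ~clear;
	reg |= set;
	return reg;
}

#define FIELD_MASK  0x0000ff00	/* hypothetical field, like CM_RESTORE_TIME */
```

The same reasoning applies to each of the `PCI_L1SS_CTL1`/`CTL2` writes in the hunk: every "set" value is now pre-masked to the field being programmed.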
+3
drivers/pci/pcie/dpc.c
··· 256 256 257 257 info->dev[0] = dev; 258 258 info->error_dev_num = 1; 259 + info->ratelimit_print[0] = 1; 259 260 260 261 return 1; 261 262 } ··· 373 372 return IRQ_HANDLED; 374 373 } 375 374 375 + pci_dev_get(pdev); 376 376 dpc_process_error(pdev); 377 377 378 378 /* We configure DPC so it only triggers on ERR_FATAL */ 379 379 pcie_do_recovery(pdev, pci_channel_io_frozen, dpc_reset_link); 380 380 381 + pci_dev_put(pdev); 381 382 return IRQ_HANDLED; 382 383 } 383 384
+41 -36
drivers/pci/pcie/ptm.c
··· 52 52 return; 53 53 54 54 dev->ptm_cap = ptm; 55 + atomic_set(&dev->ptm_enable_cnt, 0); 55 56 pci_add_ext_cap_save_buffer(dev, PCI_EXT_CAP_ID_PTM, sizeof(u32)); 56 57 57 58 pci_read_config_dword(dev, ptm + PCI_PTM_CAP, &cap); ··· 86 85 dev->ptm_responder = 1; 87 86 if (cap & PCI_PTM_CAP_REQ) 88 87 dev->ptm_requester = 1; 89 - 90 - if (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT || 91 - pci_pcie_type(dev) == PCI_EXP_TYPE_UPSTREAM) 92 - pci_enable_ptm(dev, NULL); 93 88 } 94 89 95 90 void pci_save_ptm_state(struct pci_dev *dev) ··· 126 129 static int __pci_enable_ptm(struct pci_dev *dev) 127 130 { 128 131 u16 ptm = dev->ptm_cap; 129 - struct pci_dev *ups; 130 132 u32 ctrl; 131 133 132 134 if (!ptm) 133 135 return -EINVAL; 134 - 135 - /* 136 - * A device uses local PTM Messages to request time information 137 - * from a PTM Root that's farther upstream. Every device along the 138 - * path must support PTM and have it enabled so it can handle the 139 - * messages. Therefore, if this device is not a PTM Root, the 140 - * upstream link partner must have PTM enabled before we can enable 141 - * PTM. 142 - */ 143 - if (!dev->ptm_root) { 144 - ups = pci_upstream_ptm(dev); 145 - if (!ups || !ups->ptm_enabled) 146 - return -EINVAL; 147 - } 148 136 149 137 switch (pci_pcie_type(dev)) { 150 138 case PCI_EXP_TYPE_ROOT_PORT: ··· 164 182 /** 165 183 * pci_enable_ptm() - Enable Precision Time Measurement 166 184 * @dev: PCI device 167 - * @granularity: pointer to return granularity 168 185 * 169 - * Enable Precision Time Measurement for @dev. If successful and 170 - * @granularity is non-NULL, return the Effective Granularity. 186 + * Enable Precision Time Measurement for @dev. 171 187 * 172 188 * Return: zero if successful, or -EINVAL if @dev lacks a PTM Capability or 173 189 * is not a PTM Root and lacks an upstream path of PTM-enabled devices. 
174 190 */ 175 - int pci_enable_ptm(struct pci_dev *dev, u8 *granularity) 191 + int pci_enable_ptm(struct pci_dev *dev) 176 192 { 177 193 int rc; 178 194 char clock_desc[8]; 179 195 196 + /* 197 + * A device uses local PTM Messages to request time information 198 + * from a PTM Root that's farther upstream. Every device along 199 + * the path must support PTM and have it enabled so it can 200 + * handle the messages. Therefore, if this device is not a PTM 201 + * Root, the upstream link partner must have PTM enabled before 202 + * we can enable PTM. 203 + */ 204 + if (!dev->ptm_root) { 205 + struct pci_dev *parent; 206 + 207 + parent = pci_upstream_ptm(dev); 208 + if (!parent) 209 + return -EINVAL; 210 + /* Enable PTM for the parent */ 211 + rc = pci_enable_ptm(parent); 212 + if (rc) 213 + return rc; 214 + } 215 + 216 + /* Already enabled? */ 217 + if (atomic_inc_return(&dev->ptm_enable_cnt) > 1) 218 + return 0; 219 + 180 220 rc = __pci_enable_ptm(dev); 181 - if (rc) 221 + if (rc) { 222 + atomic_dec(&dev->ptm_enable_cnt); 182 223 return rc; 183 - 184 - dev->ptm_enabled = 1; 185 - 186 - if (granularity) 187 - *granularity = dev->ptm_granularity; 224 + } 188 225 189 226 switch (dev->ptm_granularity) { 190 227 case 0: ··· 245 244 */ 246 245 void pci_disable_ptm(struct pci_dev *dev) 247 246 { 248 - if (dev->ptm_enabled) { 247 + struct pci_dev *parent; 248 + 249 + if (atomic_dec_and_test(&dev->ptm_enable_cnt)) 249 250 __pci_disable_ptm(dev); 250 - dev->ptm_enabled = 0; 251 - } 251 + 252 + parent = pci_upstream_ptm(dev); 253 + if (parent) 254 + pci_disable_ptm(parent); 252 255 } 253 256 EXPORT_SYMBOL(pci_disable_ptm); 254 257 255 258 /* 256 - * Disable PTM, but preserve dev->ptm_enabled so we silently re-enable it on 259 + * Disable PTM, but preserve dev->ptm_enable_cnt so we silently re-enable it on 257 260 * resume if necessary. 
258 261 */ 259 262 void pci_suspend_ptm(struct pci_dev *dev) 260 263 { 261 - if (dev->ptm_enabled) 264 + if (atomic_read(&dev->ptm_enable_cnt)) 262 265 __pci_disable_ptm(dev); 263 266 } 264 267 265 268 /* If PTM was enabled before suspend, re-enable it when resuming */ 266 269 void pci_resume_ptm(struct pci_dev *dev) 267 270 { 268 - if (dev->ptm_enabled) 271 + if (atomic_read(&dev->ptm_enable_cnt)) 269 272 __pci_enable_ptm(dev); 270 273 } 271 274 ··· 278 273 if (!dev) 279 274 return false; 280 275 281 - return dev->ptm_enabled; 276 + return atomic_read(&dev->ptm_enable_cnt); 282 277 } 283 278 EXPORT_SYMBOL(pcie_ptm_enabled); 284 279
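The ptm.c rework above replaces the `ptm_enabled` flag with a reference count and makes `pci_enable_ptm()` walk up and enable the upstream path first. A userspace sketch of that refcounting scheme (plain ints stand in for the atomics, and `struct ptm_node` is a hypothetical stand-in for `struct pci_dev`):

```c
#include <assert.h>
#include <stddef.h>

/*
 * Sketch of the refcounted PTM enable/disable: enabling a device first
 * enables its upstream parent, and the "hardware" is only touched on the
 * 0->1 and 1->0 transitions of enable_cnt.
 */
struct ptm_node {
	struct ptm_node *parent;	/* upstream PTM link partner, NULL for root */
	int enable_cnt;			/* mirrors dev->ptm_enable_cnt */
	int hw_enabled;			/* mirrors the PTM Control register */
};

static int ptm_enable(struct ptm_node *n)
{
	if (n->parent) {
		int rc = ptm_enable(n->parent);
		if (rc)
			return rc;
	}
	if (++n->enable_cnt == 1)
		n->hw_enabled = 1;	/* __pci_enable_ptm() analogue */
	return 0;
}

static void ptm_disable(struct ptm_node *n)
{
	if (--n->enable_cnt == 0)
		n->hw_enabled = 0;	/* __pci_disable_ptm() analogue */
	if (n->parent)
		ptm_disable(n->parent);
}
```

This is why a switch upstream port stays enabled while any downstream requester still needs the path, and is torn down only when the last one disables PTM.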
+22 -17
drivers/pci/probe.c
··· 68 68 } 69 69 70 70 /* 71 - * Some device drivers need know if PCI is initiated. 72 - * Basically, we think PCI is not initiated when there 73 - * is no device to be found on the pci_bus_type. 74 - */ 75 - int no_pci_devices(void) 76 - { 77 - struct device *dev; 78 - int no_devices; 79 - 80 - dev = bus_find_next_device(&pci_bus_type, NULL); 81 - no_devices = (dev == NULL); 82 - put_device(dev); 83 - return no_devices; 84 - } 85 - EXPORT_SYMBOL(no_pci_devices); 86 - 87 - /* 88 71 * PCI Bus Class 89 72 */ 90 73 static void release_pcibus_dev(struct device *dev) ··· 378 395 unsigned long io_mask, io_granularity, base, limit; 379 396 struct pci_bus_region region; 380 397 398 + if (!dev->io_window) 399 + return; 400 + 381 401 io_mask = PCI_IO_RANGE_MASK; 382 402 io_granularity = 0x1000; 383 403 if (dev->io_window_1k) { ··· 450 464 u64 base64, limit64; 451 465 pci_bus_addr_t base, limit; 452 466 struct pci_bus_region region; 467 + 468 + if (!dev->pref_window) 469 + return; 453 470 454 471 pci_read_config_word(dev, PCI_PREF_MEMORY_BASE, &mem_base_lo); 455 472 pci_read_config_word(dev, PCI_PREF_MEMORY_LIMIT, &mem_limit_lo); ··· 771 782 PCI_SPEED_UNKNOWN /* F */ 772 783 }; 773 784 EXPORT_SYMBOL_GPL(pcie_link_speed); 785 + 786 + /** 787 + * pcie_get_link_speed - Get speed value from PCIe generation number 788 + * @speed: PCIe speed (1-based: 1 = 2.5GT, 2 = 5GT, ...) 789 + * 790 + * Returns the speed value (e.g., PCIE_SPEED_2_5GT) if @speed is valid, 791 + * otherwise returns PCI_SPEED_UNKNOWN. 792 + */ 793 + unsigned char pcie_get_link_speed(unsigned int speed) 794 + { 795 + if (speed >= ARRAY_SIZE(pcie_link_speed)) 796 + return PCI_SPEED_UNKNOWN; 797 + 798 + return pcie_link_speed[speed]; 799 + } 800 + EXPORT_SYMBOL_GPL(pcie_get_link_speed); 774 801 775 802 const char *pci_speed_string(enum pci_bus_speed speed) 776 803 {
+7 -6
drivers/pci/pwrctrl/Kconfig
··· 11 11 select POWER_SEQUENCING 12 12 select PCI_PWRCTRL 13 13 14 - config PCI_PWRCTRL_SLOT 15 - tristate "PCI Power Control driver for PCI slots" 14 + config PCI_PWRCTRL_GENERIC 15 + tristate "Generic PCI Power Control driver for PCI slots and endpoints" 16 16 select POWER_SEQUENCING 17 17 select PCI_PWRCTRL 18 18 help 19 - Say Y here to enable the PCI Power Control driver to control the power 20 - state of PCI slots. 19 + Say Y here to enable the generic PCI Power Control driver to control 20 + the power state of PCI slots and endpoints. 21 21 22 22 This is a generic driver that controls the power state of different 23 - PCI slots. The voltage regulators powering the rails of the PCI slots 24 - are expected to be defined in the devicetree node of the PCI bridge. 23 + PCI slots and endpoints. The voltage regulators powering the rails 24 + of the PCI slots or endpoints are expected to be defined in the 25 + devicetree node of the PCI bridge or endpoint. 25 26 26 27 config PCI_PWRCTRL_TC9563 27 28 tristate "PCI Power Control driver for TC9563 PCIe switch"
+2 -2
drivers/pci/pwrctrl/Makefile
··· 5 5 6 6 obj-$(CONFIG_PCI_PWRCTRL_PWRSEQ) += pci-pwrctrl-pwrseq.o 7 7 8 - obj-$(CONFIG_PCI_PWRCTRL_SLOT) += pci-pwrctrl-slot.o 9 - pci-pwrctrl-slot-y := slot.o 8 + obj-$(CONFIG_PCI_PWRCTRL_GENERIC) += pci-pwrctrl-generic.o 9 + pci-pwrctrl-generic-y := generic.o 10 10 11 11 obj-$(CONFIG_PCI_PWRCTRL_TC9563) += pci-pwrctrl-tc9563.o
+7 -6
drivers/pci/pwrctrl/slot.c → drivers/pci/pwrctrl/generic.c
··· 87 87 88 88 ret = of_regulator_bulk_get_all(dev, dev_of_node(dev), 89 89 &slot->supplies); 90 - if (ret < 0) { 91 - dev_err_probe(dev, ret, "Failed to get slot regulators\n"); 92 - return ret; 93 - } 90 + if (ret < 0) 91 + return dev_err_probe(dev, ret, "Failed to get slot regulators\n"); 94 92 95 93 slot->num_supplies = ret; 96 94 97 95 slot->clk = devm_clk_get_optional(dev, NULL); 98 - if (IS_ERR(slot->clk)) { 96 + if (IS_ERR(slot->clk)) 99 97 return dev_err_probe(dev, PTR_ERR(slot->clk), 100 98 "Failed to enable slot clock\n"); 101 - } 102 99 103 100 skip_resources: 104 101 slot->pwrctrl.power_on = slot_pwrctrl_power_on; ··· 117 120 static const struct of_device_id slot_pwrctrl_of_match[] = { 118 121 { 119 122 .compatible = "pciclass,0604", 123 + }, 124 + /* Renesas UPD720201/UPD720202 USB 3.0 xHCI Host Controller */ 125 + { 126 + .compatible = "pci1912,0014", 120 127 }, 121 128 { } 122 129 };
+3
drivers/pci/quirks.c
··· 5603 5603 * AMD Starship/Matisse HD Audio Controller 0x1487 5604 5604 * AMD Starship USB 3.0 Host Controller 0x148c 5605 5605 * AMD Matisse USB 3.0 Host Controller 0x149c 5606 + * AMD Neural Processing Unit 0x1502 0x17f0 5606 5607 * Intel 82579LM Gigabit Ethernet Controller 0x1502 5607 5608 * Intel 82579V Gigabit Ethernet Controller 0x1503 5608 5609 * Mediatek MT7922 802.11ax PCI Express Wireless Network Adapter ··· 5616 5615 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x148c, quirk_no_flr); 5617 5616 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x149c, quirk_no_flr); 5618 5617 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x7901, quirk_no_flr); 5618 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x1502, quirk_no_flr); 5619 + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x17f0, quirk_no_flr); 5619 5620 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1502, quirk_no_flr); 5620 5621 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1503, quirk_no_flr); 5621 5622 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_MEDIATEK, 0x0616, quirk_no_flr);
+55 -10
drivers/pci/setup-bus.c
··· 434 434 dev = add_res->dev; 435 435 idx = pci_resource_num(dev, res); 436 436 437 + /* Skip this resource if not found in head list */ 438 + if (!res_to_dev_res(head, res)) 439 + continue; 440 + 437 441 /* 438 442 * Skip resource that failed the earlier assignment and is 439 443 * not optional as it would just fail again. ··· 445 441 if (!resource_assigned(res) && resource_size(res) && 446 442 !pci_resource_is_optional(dev, idx)) 447 443 goto out; 448 - 449 - /* Skip this resource if not found in head list */ 450 - if (!res_to_dev_res(head, res)) 451 - continue; 452 444 453 445 res_name = pci_resource_name(dev, idx); 454 446 add_size = add_res->add_size; ··· 1035 1035 #define PCI_P2P_DEFAULT_IO_ALIGN SZ_4K 1036 1036 #define PCI_P2P_DEFAULT_IO_ALIGN_1K SZ_1K 1037 1037 1038 - static resource_size_t window_alignment(struct pci_bus *bus, unsigned long type) 1038 + resource_size_t pci_min_window_alignment(struct pci_bus *bus, unsigned long type) 1039 1039 { 1040 1040 resource_size_t align = 1, arch_align; 1041 1041 ··· 1084 1084 if (resource_assigned(b_res)) 1085 1085 return; 1086 1086 1087 - min_align = window_alignment(bus, IORESOURCE_IO); 1087 + min_align = pci_min_window_alignment(bus, IORESOURCE_IO); 1088 1088 list_for_each_entry(dev, &bus->devices, bus_list) { 1089 1089 struct resource *r; 1090 1090 ··· 1333 1333 r_size = resource_size(r); 1334 1334 size += max(r_size, align); 1335 1335 1336 - aligns[order] += align; 1336 + /* 1337 + * If resource's size is larger than its alignment, 1338 + * some configurations result in an unwanted gap in 1339 + * the head space that the larger resource cannot 1340 + * fill. 
1341 + */ 1342 + if (r_size <= align) 1343 + aligns[order] += align; 1337 1344 if (order > max_order) 1338 1345 max_order = order; 1339 1346 } 1340 1347 } 1341 1348 1342 - win_align = window_alignment(bus, b_res->flags); 1349 + win_align = pci_min_window_alignment(bus, b_res->flags); 1343 1350 min_align = calculate_head_align(aligns, max_order); 1344 1351 min_align = max(min_align, win_align); 1345 1352 size0 = calculate_memsize(size, realloc_head ? 0 : add_size, ··· 1844 1837 resource_size_t new_size) 1845 1838 { 1846 1839 resource_size_t add_size, size = resource_size(res); 1840 + struct pci_dev_resource *dev_res; 1847 1841 1848 1842 if (resource_assigned(res)) 1849 1843 return; ··· 1857 1849 pci_dbg(bridge, "bridge window %pR extended by %pa\n", res, 1858 1850 &add_size); 1859 1851 } else if (new_size < size) { 1852 + int idx = pci_resource_num(bridge, res); 1853 + 1854 + /* 1855 + * hpio/mmio/mmioprefsize hasn't been included at all? See the 1856 + * add_size param at the callsites of calculate_memsize(). 1857 + */ 1858 + if (!add_list) 1859 + return; 1860 + 1861 + /* Only shrink if the hotplug extra relates to window size. 
*/ 1862 + switch (idx) { 1863 + case PCI_BRIDGE_IO_WINDOW: 1864 + if (size > pci_hotplug_io_size) 1865 + return; 1866 + break; 1867 + case PCI_BRIDGE_MEM_WINDOW: 1868 + if (size > pci_hotplug_mmio_size) 1869 + return; 1870 + break; 1871 + case PCI_BRIDGE_PREF_MEM_WINDOW: 1872 + if (size > pci_hotplug_mmio_pref_size) 1873 + return; 1874 + break; 1875 + default: 1876 + break; 1877 + } 1878 + 1879 + dev_res = res_to_dev_res(add_list, res); 1860 1880 add_size = size - new_size; 1861 - pci_dbg(bridge, "bridge window %pR shrunken by %pa\n", res, 1862 - &add_size); 1881 + if (add_size < dev_res->add_size) { 1882 + dev_res->add_size -= add_size; 1883 + pci_dbg(bridge, "bridge window %pR optional size shrunken by %pa\n", 1884 + res, &add_size); 1885 + } else { 1886 + pci_dbg(bridge, "bridge window %pR optional size removed\n", 1887 + res); 1888 + pci_dev_res_remove_from_list(add_list, res); 1889 + } 1890 + return; 1891 + 1863 1892 } else { 1864 1893 return; 1865 1894 }
+39 -1
drivers/pci/setup-res.c
··· 245 245 } 246 246 247 247 /* 248 + * For mem bridge windows, try to relocate tail remainder space to space 249 + * before res->start if there's enough free space there. This enables 250 + * tighter packing for resources. 251 + */ 252 + resource_size_t pci_align_resource(struct pci_dev *dev, 253 + const struct resource *res, 254 + const struct resource *empty_res, 255 + resource_size_t size, 256 + resource_size_t align) 257 + { 258 + resource_size_t remainder, start_addr; 259 + 260 + if (!(res->flags & IORESOURCE_MEM)) 261 + return res->start; 262 + 263 + if (IS_ALIGNED(size, align)) 264 + return res->start; 265 + 266 + remainder = size - ALIGN_DOWN(size, align); 267 + /* Don't mess with size that doesn't align with window size granularity */ 268 + if (!IS_ALIGNED(remainder, pci_min_window_alignment(dev->bus, res->flags))) 269 + return res->start; 270 + /* Try to place remainder that doesn't fill align before */ 271 + if (res->start < remainder) 272 + return res->start; 273 + start_addr = res->start - remainder; 274 + if (empty_res->start > start_addr) 275 + return res->start; 276 + 277 + pci_dbg(dev, "%pR: moving candidate start address below align to %llx\n", 278 + res, (unsigned long long)start_addr); 279 + return start_addr; 280 + } 281 + 282 + /* 248 283 * We don't have to worry about legacy ISA devices, so nothing to do here. 249 284 * This is marked as __weak because multiple architectures define it; it should 250 285 * eventually go away. 251 286 */ 252 287 resource_size_t __weak pcibios_align_resource(void *data, 253 288 const struct resource *res, 289 + const struct resource *empty_res, 254 290 resource_size_t size, 255 291 resource_size_t align) 256 292 { 257 - return res->start; 293 + struct pci_dev *dev = data; 294 + 295 + return pci_align_resource(dev, res, empty_res, size, align); 258 296 } 259 297 260 298 static int __pci_assign_resource(struct pci_bus *bus, struct pci_dev *dev,
+27 -4
drivers/pci/slot.c
··· 42 42 pci_domain_nr(slot->bus), 43 43 slot->bus->number); 44 44 45 + /* 46 + * Preserve legacy ABI expectations that hotplug drivers that manage 47 + * multiple devices per slot emit 0 for the device number. 48 + */ 49 + if (slot->number == PCI_SLOT_ALL_DEVICES) 50 + return sysfs_emit(buf, "%04x:%02x:00\n", 51 + pci_domain_nr(slot->bus), 52 + slot->bus->number); 53 + 45 54 return sysfs_emit(buf, "%04x:%02x:%02x\n", 46 55 pci_domain_nr(slot->bus), 47 56 slot->bus->number, ··· 82 73 83 74 down_read(&pci_bus_sem); 84 75 list_for_each_entry(dev, &slot->bus->devices, bus_list) 85 - if (PCI_SLOT(dev->devfn) == slot->number) 76 + if (slot->number == PCI_SLOT_ALL_DEVICES || 77 + PCI_SLOT(dev->devfn) == slot->number) 86 78 dev->slot = NULL; 87 79 up_read(&pci_bus_sem); 88 80 ··· 176 166 177 167 mutex_lock(&pci_slot_mutex); 178 168 list_for_each_entry(slot, &dev->bus->slots, list) 179 - if (PCI_SLOT(dev->devfn) == slot->number) 169 + if (slot->number == PCI_SLOT_ALL_DEVICES || 170 + PCI_SLOT(dev->devfn) == slot->number) 180 171 dev->slot = slot; 181 172 mutex_unlock(&pci_slot_mutex); 182 173 } ··· 199 188 /** 200 189 * pci_create_slot - create or increment refcount for physical PCI slot 201 190 * @parent: struct pci_bus of parent bridge 202 - * @slot_nr: PCI_SLOT(pci_dev->devfn) or -1 for placeholder 191 + * @slot_nr: PCI_SLOT(pci_dev->devfn), -1 for placeholder, or 192 + * PCI_SLOT_ALL_DEVICES 203 193 * @name: user visible string presented in /sys/bus/pci/slots/<name> 204 194 * @hotplug: set if caller is hotplug driver, NULL otherwise 205 195 * ··· 234 222 * consist solely of a dddd:bb tuple, where dddd is the PCI domain of the 235 223 * %struct pci_bus and bb is the bus number. In other words, the devfn of 236 224 * the 'placeholder' slot will not be displayed. 225 + * 226 + * Bus-wide slots: 227 + * For PCIe hotplug, the physical slot encompasses the entire secondary 228 + * bus, not just a single device number. 
If the device supports ARI and ARI 229 + * Forwarding is enabled in the upstream bridge, a multi-function device 230 + * may include functions that appear to have several different device 231 + * numbers, i.e., PCI_SLOT() values. Pass @slot_nr == PCI_SLOT_ALL_DEVICES 232 + * to create a slot that matches all devices on the bus. Unlike placeholder 233 + * slots, bus-wide slots go through normal slot lookup and reuse existing 234 + * slots if present. 237 235 */ 238 236 struct pci_slot *pci_create_slot(struct pci_bus *parent, int slot_nr, 239 237 const char *name, ··· 307 285 308 286 down_read(&pci_bus_sem); 309 287 list_for_each_entry(dev, &parent->devices, bus_list) 310 - if (PCI_SLOT(dev->devfn) == slot_nr) 288 + if (slot_nr == PCI_SLOT_ALL_DEVICES || 289 + PCI_SLOT(dev->devfn) == slot_nr) 311 290 dev->slot = slot; 312 291 up_read(&pci_bus_sem); 313 292
+6 -3
drivers/pci/tph.c
··· 413 413 else 414 414 pdev->tph_req_type = PCI_TPH_REQ_TPH_ONLY; 415 415 416 - rp_req_type = get_rp_completer_type(pdev); 416 + /* Check if the device is behind a Root Port */ 417 + if (pci_pcie_type(pdev) != PCI_EXP_TYPE_RC_END) { 418 + rp_req_type = get_rp_completer_type(pdev); 417 419 418 - /* Final req_type is the smallest value of two */ 419 - pdev->tph_req_type = min(pdev->tph_req_type, rp_req_type); 420 + /* Final req_type is the smallest value of two */ 421 + pdev->tph_req_type = min(pdev->tph_req_type, rp_req_type); 422 + } 420 423 421 424 if (pdev->tph_req_type == PCI_TPH_REQ_DISABLE) 422 425 return -EINVAL;
+1
drivers/pci/trace.c
··· 9 9 10 10 #define CREATE_TRACE_POINTS 11 11 #include <trace/events/pci.h> 12 + #include <trace/events/pci_controller.h>
+17 -3
drivers/pci/vgaarb.c
··· 215 215 struct vga_device *conflict; 216 216 unsigned int pci_bits; 217 217 u32 flags = 0; 218 + int err; 218 219 219 220 /* 220 221 * Account for "normal" resources to lock. If we decode the legacy, ··· 308 307 if (change_bridge) 309 308 flags |= PCI_VGA_STATE_CHANGE_BRIDGE; 310 309 311 - pci_set_vga_state(conflict->pdev, false, pci_bits, flags); 310 + err = pci_set_vga_state(conflict->pdev, false, pci_bits, flags); 311 + if (err) 312 + return ERR_PTR(err); 312 313 conflict->owns &= ~match; 313 314 314 315 /* If we disabled normal decoding, reflect it in owns */ ··· 340 337 if (wants & VGA_RSRC_LEGACY_MASK) 341 338 flags |= PCI_VGA_STATE_CHANGE_BRIDGE; 342 339 343 - pci_set_vga_state(vgadev->pdev, true, pci_bits, flags); 340 + err = pci_set_vga_state(vgadev->pdev, true, pci_bits, flags); 341 + if (err) 342 + return ERR_PTR(err); 344 343 345 344 vgadev->owns |= wants; 346 345 lock_them: ··· 460 455 } 461 456 conflict = __vga_tryget(vgadev, rsrc); 462 457 spin_unlock_irqrestore(&vga_lock, flags); 458 + if (IS_ERR(conflict)) { 459 + rc = PTR_ERR(conflict); 460 + break; 461 + } 463 462 if (conflict == NULL) 464 463 break; 465 464 ··· 1143 1134 char kbuf[64], *curr_pos; 1144 1135 size_t remaining = count; 1145 1136 1137 + int err; 1146 1138 int ret_val; 1147 1139 int i; 1148 1140 ··· 1175 1165 goto done; 1176 1166 } 1177 1167 1178 - vga_get_uninterruptible(pdev, io_state); 1168 + err = vga_get_uninterruptible(pdev, io_state); 1169 + if (err) { 1170 + ret_val = err; 1171 + goto done; 1172 + } 1179 1173 1180 1174 /* Update the client's locks lists */ 1181 1175 for (i = 0; i < MAX_USER_CARDS; i++) {
+2 -1
drivers/pcmcia/rsrc_nonstatic.c
··· 602 602 603 603 static resource_size_t 604 604 pcmcia_align(void *align_data, const struct resource *res, 605 - resource_size_t size, resource_size_t align) 605 + const struct resource *empty_res, 606 + resource_size_t size, resource_size_t align) 606 607 { 607 608 struct pcmcia_align_data *data = align_data; 608 609 struct resource_map *m;
+19 -3
include/linux/ioport.h
··· 202 202 * typedef resource_alignf - Resource alignment callback 203 203 * @data: Private data used by the callback 204 204 * @res: Resource candidate range (an empty resource space) 205 + * @empty_res: Empty resource range without alignment applied 205 206 * @size: The minimum size of the empty space 206 207 * @align: Alignment from the constraints 207 208 * ··· 213 212 */ 214 213 typedef resource_size_t (*resource_alignf)(void *data, 215 214 const struct resource *res, 215 + const struct resource *empty_res, 216 216 resource_size_t size, 217 217 resource_size_t align); 218 218 ··· 306 304 { 307 305 return res->flags & IORESOURCE_EXT_TYPE_BITS; 308 306 } 309 - /* True iff r1 completely contains r2 */ 310 - static inline bool resource_contains(const struct resource *r1, const struct resource *r2) 307 + 308 + /* 309 + * For checking if @r1 completely contains @r2 for resources that have real 310 + * addresses but are not yet crafted into the resource tree. Normally 311 + * resource_contains() should be used instead of this function as it checks 312 + * also IORESOURCE_UNSET flag. 313 + */ 314 + static inline bool __resource_contains_unbound(const struct resource *r1, 315 + const struct resource *r2) 311 316 { 312 317 if (resource_type(r1) != resource_type(r2)) 313 318 return false; 319 + 320 + return r1->start <= r2->start && r1->end >= r2->end; 321 + } 322 + /* True iff r1 completely contains r2 */ 323 + static inline bool resource_contains(const struct resource *r1, const struct resource *r2) 324 + { 314 325 if (r1->flags & IORESOURCE_UNSET || r2->flags & IORESOURCE_UNSET) 315 326 return false; 316 - return r1->start <= r2->start && r1->end >= r2->end; 327 + 328 + return __resource_contains_unbound(r1, r2); 317 329 } 318 330 319 331 /* True if any part of r1 overlaps r2 */
+24
include/linux/ntb.h
···
  * @msg_clear_mask:	See ntb_msg_clear_mask().
  * @msg_read:		See ntb_msg_read().
  * @peer_msg_write:	See ntb_peer_msg_write().
+ * @get_dma_dev:	See ntb_get_dma_dev().
  */
 struct ntb_dev_ops {
 	int (*port_number)(struct ntb_dev *ntb);
···
 	int (*msg_clear_mask)(struct ntb_dev *ntb, u64 mask_bits);
 	u32 (*msg_read)(struct ntb_dev *ntb, int *pidx, int midx);
 	int (*peer_msg_write)(struct ntb_dev *ntb, int pidx, int midx, u32 msg);
+	struct device *(*get_dma_dev)(struct ntb_dev *ntb);
 };
 
 static inline int ntb_dev_ops_is_valid(const struct ntb_dev_ops *ops)
···
 		/* !ops->msg_clear_mask		== !ops->msg_count && */
 		!ops->msg_read			== !ops->msg_count &&
 		!ops->peer_msg_write		== !ops->msg_count &&
+
+		/* ops->get_dma_dev is optional */
 		1;
 }
···
 		return -EINVAL;
 
 	return ntb->ops->peer_msg_write(ntb, pidx, midx, msg);
+}
+
+/**
+ * ntb_get_dma_dev() - get the device to use for DMA allocations/mappings
+ * @ntb:	NTB device context.
+ *
+ * Return a struct device suitable for DMA API allocations and mappings.
+ * This is typically the parent of the NTB device, but may be overridden by a
+ * driver by implementing .get_dma_dev().
+ *
+ * Drivers that implement .get_dma_dev() must return a non-NULL pointer.
+ *
+ * Return: device pointer to use for DMA operations.
+ */
+static inline struct device *ntb_get_dma_dev(struct ntb_dev *ntb)
+{
+	if (!ntb->ops->get_dma_dev)
+		return ntb->dev.parent;
+
+	return ntb->ops->get_dma_dev(ntb);
 }
 
 /**
+42 -8
include/linux/pci-epc.h
···
  * @BAR_RESIZABLE:	The BAR implements the PCI-SIG Resizable BAR Capability.
  *			NOTE: An EPC driver can currently only set a single supported
  *			size.
- * @BAR_RESERVED:	The BAR should not be touched by an EPF driver.
+ * @BAR_RESERVED:	Used for HW-backed BARs (e.g. MSI-X table, DMA regs). The BAR
+ *			should not be disabled by an EPC driver. The BAR should not be
+ *			reprogrammed by an EPF driver. An EPF driver is allowed to
+ *			disable the BAR if absolutely necessary. (However, right now
+ *			there is no EPC operation to disable a BAR that has not been
+ *			programmed using pci_epc_set_bar().)
+ * @BAR_DISABLED:	The BAR should be disabled by an EPC driver. The BAR will be
+ *			unavailable to an EPF driver.
  */
 enum pci_epc_bar_type {
 	BAR_PROGRAMMABLE = 0,
 	BAR_FIXED,
 	BAR_RESIZABLE,
 	BAR_RESERVED,
+	BAR_DISABLED,
+};
+
+/**
+ * enum pci_epc_bar_rsvd_region_type - type of a fixed subregion behind a BAR
+ * @PCI_EPC_BAR_RSVD_DMA_CTRL_MMIO: Integrated DMA controller MMIO window
+ * @PCI_EPC_BAR_RSVD_MSIX_TBL_RAM: MSI-X table structure
+ * @PCI_EPC_BAR_RSVD_MSIX_PBA_RAM: MSI-X PBA structure
+ *
+ * BARs marked BAR_RESERVED are owned by the SoC/EPC hardware and must not be
+ * reprogrammed by EPF drivers. Some of them still expose fixed subregions that
+ * EPFs may want to reference (e.g. embedded doorbell fallback).
+ */
+enum pci_epc_bar_rsvd_region_type {
+	PCI_EPC_BAR_RSVD_DMA_CTRL_MMIO = 0,
+	PCI_EPC_BAR_RSVD_MSIX_TBL_RAM,
+	PCI_EPC_BAR_RSVD_MSIX_PBA_RAM,
+};
+
+/**
+ * struct pci_epc_bar_rsvd_region - fixed subregion behind a BAR
+ * @type: reserved region type
+ * @offset: offset within the BAR aperture
+ * @size: size of the reserved region
+ */
+struct pci_epc_bar_rsvd_region {
+	enum pci_epc_bar_rsvd_region_type type;
+	resource_size_t offset;
+	resource_size_t size;
 };
 
 /**
···
  * @fixed_size: the fixed size, only applicable if type is BAR_FIXED_MASK.
  * @only_64bit: if true, an EPF driver is not allowed to choose if this BAR
  *		should be configured as 32-bit or 64-bit, the EPF driver must
- *		configure this BAR as 64-bit. Additionally, the BAR succeeding
- *		this BAR must be set to type BAR_RESERVED.
- *
- *		only_64bit should not be set on a BAR of type BAR_RESERVED.
- *		(If BARx is a 64-bit BAR that an EPF driver is not allowed to
- *		touch, then both BARx and BARx+1 must be set to type
- *		BAR_RESERVED.)
+ *		configure this BAR as 64-bit.
+ * @nr_rsvd_regions: number of fixed subregions described for BAR_RESERVED
+ * @rsvd_regions: fixed subregions behind BAR_RESERVED
 */
 struct pci_epc_bar_desc {
 	enum pci_epc_bar_type type;
 	u64 fixed_size;
 	bool only_64bit;
+	u8 nr_rsvd_regions;
+	const struct pci_epc_bar_rsvd_region *rsvd_regions;
 };
 
 /**
+21 -10
include/linux/pci.h
···
 /* return bus from PCI devid = ((u16)bus_number) << 8) | devfn */
 #define PCI_BUS_NUM(x) (((x) >> 8) & 0xff)
 
+/*
+ * PCI_SLOT_ALL_DEVICES indicates a slot that covers all devices on the bus.
+ * Used for PCIe hotplug where the physical slot is the entire secondary bus,
+ * and, if ARI Forwarding is enabled, functions may appear to be on multiple
+ * devices.
+ */
+#define PCI_SLOT_ALL_DEVICES	0xfe
+
 /* pci_slot represents a physical slot */
 struct pci_slot {
 	struct pci_bus		*bus;		/* Bus this slot is on */
 	struct list_head	list;		/* Node in list of slots */
 	struct hotplug_slot	*hotplug;	/* Hotplug info (move here) */
-	unsigned char		number;		/* PCI_SLOT(pci_dev->devfn) */
+	unsigned char		number;		/* Device nr, or PCI_SLOT_ALL_DEVICES */
 	struct kobject		kobj;
 };
···
 	unsigned int	ptm_root:1;
 	unsigned int	ptm_responder:1;
 	unsigned int	ptm_requester:1;
-	unsigned int	ptm_enabled:1;
+	atomic_t	ptm_enable_cnt;
 	u8		ptm_granularity;
 #endif
 #ifdef CONFIG_PCI_MSI
···
 /* Do NOT directly access these two variables, unless you are arch-specific PCI
  * code, or PCI core code. */
 extern struct list_head pci_root_buses;	/* List of all known PCI buses */
-/* Some device drivers need know if PCI is initiated */
-int no_pci_devices(void);
 
 void pcibios_resource_survey_bus(struct pci_bus *bus);
 void pcibios_bus_add_device(struct pci_dev *pdev);
···
 char *pcibios_setup(char *str);
 
 /* Used only when drivers/pci/setup.c is used */
-resource_size_t pcibios_align_resource(void *, const struct resource *,
-				resource_size_t,
-				resource_size_t);
+resource_size_t pcibios_align_resource(void *data, const struct resource *res,
+				       const struct resource *empty_res,
+				       resource_size_t size,
+				       resource_size_t align);
+resource_size_t pci_align_resource(struct pci_dev *dev,
+				   const struct resource *res,
+				   const struct resource *empty_res,
+				   resource_size_t size,
+				   resource_size_t align);
 
 /* Generic PCI functions used internally */
···
 };
 
 #ifdef CONFIG_PCIE_PTM
-int pci_enable_ptm(struct pci_dev *dev, u8 *granularity);
+int pci_enable_ptm(struct pci_dev *dev);
 void pci_disable_ptm(struct pci_dev *dev);
 bool pcie_ptm_enabled(struct pci_dev *dev);
 #else
-static inline int pci_enable_ptm(struct pci_dev *dev, u8 *granularity)
+static inline int pci_enable_ptm(struct pci_dev *dev)
 { return -EINVAL; }
 static inline void pci_disable_ptm(struct pci_dev *dev) { }
 static inline bool pcie_ptm_enabled(struct pci_dev *dev)
···
 static inline int pci_dev_present(const struct pci_device_id *ids)
 { return 0; }
 
-#define no_pci_devices()	(1)
 #define pci_dev_put(dev)	do { } while (0)
 
 static inline void pci_set_master(struct pci_dev *dev)	{ }
+2
include/linux/pci_ids.h
···
 
 #define PCI_VENDOR_ID_AZWAVE		0x1a3b
 
+#define PCI_VENDOR_ID_GOOGLE		0x1ae0
+
 #define PCI_VENDOR_ID_REDHAT_QUMRANET	0x1af4
 #define PCI_SUBVENDOR_ID_REDHAT_QUMRANET	0x1af4
 #define PCI_SUBDEVICE_ID_QEMU		0x1100
+58
include/trace/events/pci_controller.h
···
+/* SPDX-License-Identifier: GPL-2.0 */
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM pci_controller
+
+#if !defined(_TRACE_HW_EVENT_PCI_CONTROLLER_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_HW_EVENT_PCI_CONTROLLER_H
+
+#include <uapi/linux/pci_regs.h>
+#include <linux/tracepoint.h>
+
+#define RATE					\
+	EM(PCIE_SPEED_2_5GT, "2.5 GT/s")	\
+	EM(PCIE_SPEED_5_0GT, "5.0 GT/s")	\
+	EM(PCIE_SPEED_8_0GT, "8.0 GT/s")	\
+	EM(PCIE_SPEED_16_0GT, "16.0 GT/s")	\
+	EM(PCIE_SPEED_32_0GT, "32.0 GT/s")	\
+	EM(PCIE_SPEED_64_0GT, "64.0 GT/s")	\
+	EMe(PCI_SPEED_UNKNOWN, "Unknown")
+
+#undef EM
+#undef EMe
+#define EM(a, b)	TRACE_DEFINE_ENUM(a);
+#define EMe(a, b)	TRACE_DEFINE_ENUM(a);
+
+RATE
+
+#undef EM
+#undef EMe
+#define EM(a, b)	{a, b},
+#define EMe(a, b)	{a, b}
+
+TRACE_EVENT(pcie_ltssm_state_transition,
+	TP_PROTO(const char *dev_name, const char *state, u32 rate),
+	TP_ARGS(dev_name, state, rate),
+
+	TP_STRUCT__entry(
+		__string(dev_name, dev_name)
+		__string(state, state)
+		__field(u32, rate)
+	),
+
+	TP_fast_assign(
+		__assign_str(dev_name);
+		__assign_str(state);
+		__entry->rate = rate;
+	),
+
+	TP_printk("dev: %s state: %s rate: %s",
+		  __get_str(dev_name), __get_str(state),
+		  __print_symbolic(__entry->rate, RATE)
+	)
+);
+
+#endif /* _TRACE_HW_EVENT_PCI_CONTROLLER_H */
+
+/* This part must be outside protection */
+#include <trace/define_trace.h>
+17 -16
kernel/resource.c
···
 			struct resource_constraint *constraint)
 {
 	struct resource *this = root->child;
-	struct resource tmp = *new, avail, alloc;
+	struct resource full_avail = *new, avail, alloc;
 	resource_alignf alignf = constraint->alignf;
 
-	tmp.start = root->start;
+	full_avail.start = root->start;
 	/*
 	 * Skip past an allocated resource that starts at 0, since the assignment
-	 * of this->start - 1 to tmp->end below would cause an underflow.
+	 * of this->start - 1 to full_avail->end below would cause an underflow.
 	 */
 	if (this && this->start == root->start) {
-		tmp.start = (this == old) ? old->start : this->end + 1;
+		full_avail.start = (this == old) ? old->start : this->end + 1;
 		this = this->sibling;
 	}
 	for(;;) {
 		if (this)
-			tmp.end = (this == old) ? this->end : this->start - 1;
+			full_avail.end = (this == old) ? this->end : this->start - 1;
 		else
-			tmp.end = root->end;
+			full_avail.end = root->end;
 
-		if (tmp.end < tmp.start)
+		if (full_avail.end < full_avail.start)
 			goto next;
 
-		resource_clip(&tmp, constraint->min, constraint->max);
-		arch_remove_reservations(&tmp);
+		resource_clip(&full_avail, constraint->min, constraint->max);
+		arch_remove_reservations(&full_avail);
 
 		/* Check for overflow after ALIGN() */
-		avail.start = ALIGN(tmp.start, constraint->align);
-		avail.end = tmp.end;
-		avail.flags = new->flags & ~IORESOURCE_UNSET;
-		if (avail.start >= tmp.start) {
+		avail.start = ALIGN(full_avail.start, constraint->align);
+		avail.end = full_avail.end;
+		avail.flags = new->flags;
+		if (avail.start >= full_avail.start) {
 			alloc.flags = avail.flags;
 			if (alignf) {
 				alloc.start = alignf(constraint->alignf_data,
-						     &avail, size, constraint->align);
+						     &avail, &full_avail,
+						     size, constraint->align);
 			} else {
 				alloc.start = avail.start;
 			}
 			alloc.end = alloc.start + size - 1;
 			if (alloc.start <= alloc.end &&
-			    resource_contains(&avail, &alloc)) {
+			    __resource_contains_unbound(&full_avail, &alloc)) {
 				new->start = alloc.start;
 				new->end = alloc.end;
 				return 0;
···
 			break;
 
 		if (this != old)
-			full_avail.start = this->end + 1;
+			full_avail.start = this->end + 1;
 		this = this->sibling;
 	}
 	return -EBUSY;
+8
tools/testing/selftests/pci_endpoint/pci_endpoint_test.c
···
 	pci_ep_ioctl(PCITEST_BAR, variant->barno);
 	if (ret == -ENODATA)
 		SKIP(return, "BAR is disabled");
+	if (ret == -ENOBUFS)
+		SKIP(return, "BAR is reserved");
 	EXPECT_FALSE(ret) TH_LOG("Test failed for BAR%d", variant->barno);
 }
···
 		SKIP(return, "BAR is test register space");
 	if (ret == -EOPNOTSUPP)
 		SKIP(return, "Subrange map is not supported");
+	if (ret == -ENOBUFS)
+		SKIP(return, "BAR is reserved");
+	if (ret == -ENOSPC)
+		SKIP(return, "Not enough inbound windows");
 	EXPECT_FALSE(ret) TH_LOG("Test failed for BAR%d", variant->barno);
 }
···
 	ASSERT_EQ(0, ret) TH_LOG("Can't set AUTO IRQ type");
 
 	pci_ep_ioctl(PCITEST_DOORBELL, 0);
+	if (ret == -EOPNOTSUPP)
+		SKIP(return, "Doorbell test is not supported");
 	EXPECT_FALSE(ret) TH_LOG("Test failed for Doorbell\n");
 }
 TEST_HARNESS_MAIN