Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'spi-v6.20' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi

Pull spi updates from Mark Brown:
"The highlight here is that David Lechner has added support for
multi-lane SPI devices. Unlike the existing dual/quad SPI support, this
is for devices (typically ADCs/DACs) which support multiple
independent data streams over multiple data lanes: instead of sending
one data stream N times as fast, they simultaneously transfer N
different data streams.

This is very similar to the case where multiple devices are grouped
together but in this case it's a single device in a way that's visible
to software.

Otherwise there's been quite a bit of work on existing drivers, both
cleanup and feature improvement, and a reasonable collection of new
drivers.

- Support for multi-lane SPI devices

- Preparatory work for some memory mapped flash improvements that
will happen in the MTD subsystem

- Several conversions to fwnode APIs

- A bunch of cleanup and hardening work on the ST drivers

- Support for DMA mode on Renesas RZ/V2H and i.MX target mode

- Support for ATCSPI200, Axiado AX3000, NXP xSPI and Renesas RZ/N1"

* tag 'spi-v6.20' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi: (108 commits)
spi: tools: Add include folder to .gitignore
spi: cadence-qspi: Add support for the Renesas RZ/N1 controller
spi: cadence-qspi: Kill cqspi_jh7110_clk_init
spi: dt-bindings: cdns,qspi-nor: Add Renesas RZ/N1D400 to the list
spi: geni-qcom: Add target abort support
spi: geni-qcom: Drop unused msg parameter from timeout handlers
spi: geni-qcom: Fix abort sequence execution for serial engine errors
spi: geni-qcom: Improve target mode allocation by using proper allocation functions
spi: xilinx: use device property accessors.
dt-bindings: spi: Add binding for Faraday FTSSP010
spi: axi-spi-engine: support SPI_MULTI_LANE_MODE_STRIPE
spi: dt-bindings: adi,axi-spi-engine: add multi-lane support
spi: Documentation: add page on multi-lane support
spi: add multi_lane_mode field to struct spi_transfer
spi: support controllers with multiple data lanes
spi: dt-bindings: add spi-{tx,rx}-lane-map properties
spi: dt-bindings: change spi-{rx,tx}-bus-width to arrays
spi: dw: Remove not-going-to-be-supported code for Baikal SoC
spi: cadence-qspi: Use a default value for cdns,fifo-width
spi: cadence-qspi: Make sure write protection is disabled
...
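
As background for the multi-lane work listed above, the new Documentation/spi/multiple-data-lanes.rst file included in this diff shows peripheral drivers selecting a per-transfer mode. A minimal, hedged sketch of a striped two-lane read follows; the field and constant names are taken from that documentation page (the shortlog suggests the constant may ultimately be spelled SPI_MULTI_LANE_MODE_STRIPE, so treat the exact spelling as illustrative):

    /*
     * Sketch only: read one byte from each of two data lanes in a single
     * transfer.  Names follow the documentation added in this series; data
     * for lane 0 and lane 1 is interleaved in rx_buf, so len must be a
     * multiple of the number of lanes.
     */
    #include <linux/spi/spi.h>

    static int example_read_striped(struct spi_device *spi, u8 rx_buf[2])
    {
            struct spi_transfer xfer = {
                    .rx_buf = rx_buf,
                    .len = 2,       /* one byte per lane */
                    .multi_lane_mode = SPI_MULTI_BUS_MODE_STRIPE,
            };

            /* afterwards rx_buf[0] holds the word from lane 0, rx_buf[1] from lane 1 */
            return spi_sync_transfer(spi, &xfer, 1);
    }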

+5812 -1211
+3 -2
Documentation/devicetree/bindings/display/panel/sitronix,st7789v.yaml
··· 34 34 spi-cpol: true 35 35 36 36 spi-rx-bus-width: 37 - minimum: 0 38 - maximum: 1 37 + items: 38 + minimum: 0 39 + maximum: 1 39 40 40 41 dc-gpios: 41 42 maxItems: 1
+41 -1
Documentation/devicetree/bindings/iio/adc/adi,ad4030.yaml
··· 37 37 maximum: 102040816 38 38 39 39 spi-rx-bus-width: 40 - enum: [1, 2, 4] 40 + maxItems: 2 41 + # all lanes must have the same width 42 + oneOf: 43 + - contains: 44 + const: 1 45 + - contains: 46 + const: 2 47 + - contains: 48 + const: 4 41 49 42 50 vdd-5v-supply: true 43 51 vdd-1v8-supply: true ··· 96 88 97 89 unevaluatedProperties: false 98 90 91 + allOf: 92 + - if: 93 + properties: 94 + compatible: 95 + enum: 96 + - adi,ad4030-24 97 + - adi,ad4032-24 98 + then: 99 + properties: 100 + spi-rx-bus-width: 101 + maxItems: 1 102 + 99 103 examples: 100 104 - | 101 105 #include <dt-bindings/gpio/gpio.h> ··· 120 100 compatible = "adi,ad4030-24"; 121 101 reg = <0>; 122 102 spi-max-frequency = <80000000>; 103 + vdd-5v-supply = <&supply_5V>; 104 + vdd-1v8-supply = <&supply_1_8V>; 105 + vio-supply = <&supply_1_8V>; 106 + ref-supply = <&supply_5V>; 107 + cnv-gpios = <&gpio0 0 GPIO_ACTIVE_HIGH>; 108 + reset-gpios = <&gpio0 1 GPIO_ACTIVE_LOW>; 109 + }; 110 + }; 111 + - | 112 + #include <dt-bindings/gpio/gpio.h> 113 + 114 + spi { 115 + #address-cells = <1>; 116 + #size-cells = <0>; 117 + 118 + adc@0 { 119 + compatible = "adi,ad4630-24"; 120 + reg = <0>; 121 + spi-max-frequency = <80000000>; 122 + spi-rx-bus-width = <4>, <4>; 123 123 vdd-5v-supply = <&supply_5V>; 124 124 vdd-1v8-supply = <&supply_1_8V>; 125 125 vio-supply = <&supply_1_8V>;
+3 -2
Documentation/devicetree/bindings/iio/adc/adi,ad4695.yaml
··· 38 38 spi-cpha: true 39 39 40 40 spi-rx-bus-width: 41 - minimum: 1 42 - maximum: 4 41 + items: 42 + minimum: 1 43 + maximum: 4 43 44 44 45 avdd-supply: 45 46 description: Analog power supply.
+15
Documentation/devicetree/bindings/spi/adi,axi-spi-engine.yaml
··· 70 70 71 71 unevaluatedProperties: false 72 72 73 + patternProperties: 74 + "^.*@[0-9a-f]+": 75 + type: object 76 + 77 + properties: 78 + spi-rx-bus-width: 79 + maxItems: 8 80 + items: 81 + enum: [0, 1] 82 + 83 + spi-tx-bus-width: 84 + maxItems: 8 85 + items: 86 + enum: [0, 1] 87 + 73 88 examples: 74 89 - | 75 90 spi@44a00000 {
+4 -2
Documentation/devicetree/bindings/spi/allwinner,sun4i-a10-spi.yaml
··· 55 55 maximum: 4 56 56 57 57 spi-rx-bus-width: 58 - const: 1 58 + items: 59 + - const: 1 59 60 60 61 spi-tx-bus-width: 61 - const: 1 62 + items: 63 + - const: 1 62 64 63 65 required: 64 66 - compatible
+4 -2
Documentation/devicetree/bindings/spi/allwinner,sun6i-a31-spi.yaml
··· 81 81 maximum: 4 82 82 83 83 spi-rx-bus-width: 84 - const: 1 84 + items: 85 + - const: 1 85 86 86 87 spi-tx-bus-width: 87 - const: 1 88 + items: 89 + - const: 1 88 90 89 91 required: 90 92 - compatible
+87
Documentation/devicetree/bindings/spi/andestech,ae350-spi.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/spi/andestech,ae350-spi.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Andes ATCSPI200 SPI controller 8 + 9 + maintainers: 10 + - CL Wang <cl634@andestech.com> 11 + 12 + properties: 13 + compatible: 14 + oneOf: 15 + - items: 16 + - enum: 17 + - andestech,qilai-spi 18 + - const: andestech,ae350-spi 19 + - const: andestech,ae350-spi 20 + 21 + reg: 22 + maxItems: 1 23 + 24 + clocks: 25 + maxItems: 1 26 + 27 + num-cs: 28 + description: Number of chip selects supported 29 + maxItems: 1 30 + 31 + dmas: 32 + items: 33 + - description: Transmit FIFO DMA channel 34 + - description: Receive FIFO DMA channel 35 + 36 + dma-names: 37 + items: 38 + - const: tx 39 + - const: rx 40 + 41 + patternProperties: 42 + "@[0-9a-f]+$": 43 + type: object 44 + additionalProperties: true 45 + 46 + properties: 47 + spi-rx-bus-width: 48 + items: 49 + - enum: [1, 4] 50 + 51 + spi-tx-bus-width: 52 + items: 53 + - enum: [1, 4] 54 + 55 + allOf: 56 + - $ref: spi-controller.yaml# 57 + 58 + required: 59 + - compatible 60 + - reg 61 + - clocks 62 + - dmas 63 + - dma-names 64 + 65 + unevaluatedProperties: false 66 + 67 + examples: 68 + - | 69 + spi@f0b00000 { 70 + compatible = "andestech,ae350-spi"; 71 + reg = <0xf0b00000 0x100>; 72 + clocks = <&clk_spi>; 73 + dmas = <&dma0 0>, <&dma0 1>; 74 + dma-names = "tx", "rx"; 75 + 76 + #address-cells = <1>; 77 + #size-cells = <0>; 78 + 79 + flash@0 { 80 + compatible = "jedec,spi-nor"; 81 + reg = <0>; 82 + spi-tx-bus-width = <4>; 83 + spi-rx-bus-width = <4>; 84 + spi-cpol; 85 + spi-cpha; 86 + }; 87 + };
+1
Documentation/devicetree/bindings/spi/atmel,at91rm9200-spi.yaml
··· 19 19 - const: atmel,at91rm9200-spi 20 20 - items: 21 21 - enum: 22 + - microchip,lan9691-spi 22 23 - microchip,sam9x60-spi 23 24 - microchip,sam9x7-spi 24 25 - microchip,sama7d65-spi
+73
Documentation/devicetree/bindings/spi/axiado,ax3000-spi.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/spi/axiado,ax3000-spi.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Axiado AX3000 SoC SPI controller 8 + 9 + maintainers: 10 + - Vladimir Moravcevic <vmoravcevic@axiado.com> 11 + - Tzu-Hao Wei <twei@axiado.com> 12 + - Swark Yang <syang@axiado.com> 13 + - Prasad Bolisetty <pbolisetty@axiado.com> 14 + 15 + allOf: 16 + - $ref: spi-controller.yaml# 17 + 18 + properties: 19 + compatible: 20 + enum: 21 + - axiado,ax3000-spi 22 + 23 + reg: 24 + maxItems: 1 25 + 26 + interrupts: 27 + maxItems: 1 28 + 29 + clock-names: 30 + items: 31 + - const: ref 32 + - const: pclk 33 + 34 + clocks: 35 + maxItems: 2 36 + 37 + num-cs: 38 + description: | 39 + Number of chip selects used. 40 + $ref: /schemas/types.yaml#/definitions/uint32 41 + minimum: 1 42 + maximum: 4 43 + default: 4 44 + 45 + required: 46 + - compatible 47 + - reg 48 + - interrupts 49 + - clock-names 50 + - clocks 51 + 52 + unevaluatedProperties: false 53 + 54 + examples: 55 + - | 56 + #include <dt-bindings/interrupt-controller/irq.h> 57 + #include <dt-bindings/interrupt-controller/arm-gic.h> 58 + 59 + soc { 60 + #address-cells = <2>; 61 + #size-cells = <2>; 62 + 63 + spi@80510000 { 64 + compatible = "axiado,ax3000-spi"; 65 + reg = <0x00 0x80510000 0x00 0x1000>; 66 + clock-names = "ref", "pclk"; 67 + clocks = <&spi_clk>, <&apb_pclk>; 68 + interrupt-parent = <&gic500>; 69 + interrupts = <GIC_SPI 115 IRQ_TYPE_LEVEL_HIGH>; 70 + num-cs = <4>; 71 + }; 72 + }; 73 + ...
+18 -3
Documentation/devicetree/bindings/spi/cdns,qspi-nor.yaml
··· 61 61 cdns,fifo-depth: 62 62 enum: [ 128, 256 ] 63 63 default: 128 64 + - if: 65 + properties: 66 + compatible: 67 + contains: 68 + const: renesas,rzn1-qspi 69 + then: 70 + properties: 71 + cdns,trigger-address: false 72 + cdns,fifo-depth: false 73 + cdns,fifo-width: false 74 + else: 75 + required: 76 + - cdns,trigger-address 77 + - cdns,fifo-depth 64 78 65 79 properties: 66 80 compatible: ··· 94 80 # controllers are meant to be used with flashes of all kinds, 95 81 # ie. also NAND flashes, not only NOR flashes. 96 82 - const: cdns,qspi-nor 83 + - items: 84 + - const: renesas,r9a06g032-qspi 85 + - const: renesas,rzn1-qspi 97 86 - const: cdns,qspi-nor 98 87 deprecated: true 99 88 ··· 180 163 - reg 181 164 - interrupts 182 165 - clocks 183 - - cdns,fifo-width 184 - - cdns,trigger-address 185 166 - '#address-cells' 186 167 - '#size-cells' 187 168 ··· 187 172 188 173 examples: 189 174 - | 190 - qspi: spi@ff705000 { 175 + spi@ff705000 { 191 176 compatible = "intel,socfpga-qspi", "cdns,qspi-nor"; 192 177 #address-cells = <1>; 193 178 #size-cells = <0>;
+43
Documentation/devicetree/bindings/spi/faraday,ftssp010.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/spi/faraday,ftssp010.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Faraday FTSSP010 SPI Controller 8 + 9 + maintainers: 10 + - Linus Walleij <linusw@kernel.org> 11 + 12 + properties: 13 + compatible: 14 + const: faraday,ftssp010 15 + 16 + interrupts: 17 + maxItems: 1 18 + 19 + reg: 20 + maxItems: 1 21 + 22 + cs-gpios: true 23 + 24 + required: 25 + - compatible 26 + - interrupts 27 + - reg 28 + 29 + allOf: 30 + - $ref: spi-controller.yaml# 31 + 32 + unevaluatedProperties: false 33 + 34 + examples: 35 + - | 36 + #include <dt-bindings/gpio/gpio.h> 37 + spi@4a000000 { 38 + compatible = "faraday,ftssp010"; 39 + #address-cells = <1>; 40 + #size-cells = <0>; 41 + reg = <0x4a000000 0x1000>; 42 + interrupts = <0>; 43 + };
+4 -2
Documentation/devicetree/bindings/spi/nvidia,tegra210-quad.yaml
··· 54 54 55 55 properties: 56 56 spi-rx-bus-width: 57 - enum: [1, 2, 4] 57 + items: 58 + - enum: [1, 2, 4] 58 59 59 60 spi-tx-bus-width: 60 - enum: [1, 2, 4] 61 + items: 62 + - enum: [1, 2, 4] 61 63 62 64 required: 63 65 - compatible
+92
Documentation/devicetree/bindings/spi/nxp,imx94-xspi.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/spi/nxp,imx94-xspi.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: NXP External Serial Peripheral Interface (xSPI) 8 + 9 + maintainers: 10 + - Haibo Chen <haibo.chen@nxp.com> 11 + - Han Xu <han.xu@nxp.com> 12 + 13 + properties: 14 + compatible: 15 + oneOf: 16 + - enum: 17 + - nxp,imx94-xspi 18 + - items: 19 + - enum: 20 + - nxp,imx952-xspi 21 + - const: nxp,imx94-xspi 22 + 23 + reg: 24 + items: 25 + - description: registers address space 26 + - description: memory mapped address space 27 + 28 + reg-names: 29 + items: 30 + - const: base 31 + - const: mmap 32 + 33 + interrupts: 34 + items: 35 + - description: interrupt for EENV0 36 + - description: interrupt for EENV1 37 + - description: interrupt for EENV2 38 + - description: interrupt for EENV3 39 + - description: interrupt for EENV4 40 + 41 + clocks: 42 + items: 43 + - description: SPI serial clock 44 + 45 + clock-names: 46 + items: 47 + - const: per 48 + 49 + required: 50 + - compatible 51 + - reg 52 + - reg-names 53 + - interrupts 54 + - clocks 55 + - clock-names 56 + 57 + allOf: 58 + - $ref: spi-controller.yaml# 59 + 60 + unevaluatedProperties: false 61 + 62 + examples: 63 + - | 64 + #include <dt-bindings/interrupt-controller/arm-gic.h> 65 + 66 + soc { 67 + #address-cells = <2>; 68 + #size-cells = <2>; 69 + 70 + spi@42b90000 { 71 + compatible = "nxp,imx94-xspi"; 72 + reg = <0x0 0x42b90000 0x0 0x50000>, <0x0 0x28000000 0x0 0x08000000>; 73 + reg-names = "base", "mmap"; 74 + interrupts = <GIC_SPI 390 IRQ_TYPE_LEVEL_HIGH>, 75 + <GIC_SPI 391 IRQ_TYPE_LEVEL_HIGH>, 76 + <GIC_SPI 392 IRQ_TYPE_LEVEL_HIGH>, 77 + <GIC_SPI 393 IRQ_TYPE_LEVEL_HIGH>, 78 + <GIC_SPI 394 IRQ_TYPE_LEVEL_HIGH>; 79 + #address-cells = <1>; 80 + #size-cells = <0>; 81 + clocks = <&scmi_1>; 82 + clock-names = "per"; 83 + 84 + flash@0 { 85 + compatible = "jedec,spi-nor"; 86 + reg = <0>; 87 + spi-max-frequency = <200000000>; 88 + spi-rx-bus-width = <8>; 89 + spi-tx-bus-width = <8>; 90 + }; 91 + }; 92 + };
+8
Documentation/devicetree/bindings/spi/nxp,lpc3220-spi.yaml
··· 20 20 clocks: 21 21 maxItems: 1 22 22 23 + dmas: 24 + maxItems: 1 25 + 26 + dma-names: 27 + const: rx-tx 28 + 23 29 allOf: 24 30 - $ref: spi-controller.yaml# 25 31 ··· 44 38 compatible = "nxp,lpc3220-spi"; 45 39 reg = <0x20088000 0x1000>; 46 40 clocks = <&clk LPC32XX_CLK_SPI1>; 41 + dmas = <&dmamux 11 1 0>; 42 + dma-names = "rx-tx"; 47 43 #address-cells = <1>; 48 44 #size-cells = <0>; 49 45 };
+8
Documentation/devicetree/bindings/spi/renesas,rzv2h-rspi.yaml
··· 57 57 - const: presetn 58 58 - const: tresetn 59 59 60 + dmas: 61 + maxItems: 2 62 + 63 + dma-names: 64 + items: 65 + - const: rx 66 + - const: tx 67 + 60 68 power-domains: 61 69 maxItems: 1 62 70
+34 -6
Documentation/devicetree/bindings/spi/spi-peripheral-props.yaml
··· 64 64 description: 65 65 Bus width to the SPI bus used for read transfers. 66 66 If 0 is provided, then no RX will be possible on this device. 67 - $ref: /schemas/types.yaml#/definitions/uint32 68 - enum: [0, 1, 2, 4, 8] 69 - default: 1 67 + 68 + Some SPI peripherals and controllers may have multiple data lanes for 69 + receiving two or more words at the same time. If this is the case, each 70 + index in the array represents the lane on both the SPI peripheral and 71 + controller. Additional mapping properties may be needed if a lane is 72 + skipped on either side. 73 + $ref: /schemas/types.yaml#/definitions/uint32-array 74 + items: 75 + enum: [0, 1, 2, 4, 8] 76 + default: [1] 77 + 78 + spi-rx-lane-map: 79 + description: Mapping of peripheral SDO lanes to controller SDI lanes. 80 + Each index in the array represents a peripheral SDO lane, and the value 81 + at that index represents the corresponding controller SDI lane. 82 + $ref: /schemas/types.yaml#/definitions/uint32-array 83 + default: [0, 1, 2, 3, 4, 5, 6, 7] 70 84 71 85 spi-rx-delay-us: 72 86 description: ··· 95 81 description: 96 82 Bus width to the SPI bus used for write transfers. 97 83 If 0 is provided, then no TX will be possible on this device. 98 - $ref: /schemas/types.yaml#/definitions/uint32 99 - enum: [0, 1, 2, 4, 8] 100 - default: 1 84 + 85 + Some SPI peripherals and controllers may have multiple data lanes for 86 + transmitting two or more words at the same time. If this is the case, each 87 + index in the array represents the lane on both the SPI peripheral and 88 + controller. Additional mapping properties may be needed if a lane is 89 + skipped on either side. 90 + $ref: /schemas/types.yaml#/definitions/uint32-array 91 + items: 92 + enum: [0, 1, 2, 4, 8] 93 + default: [1] 94 + 95 + spi-tx-lane-map: 96 + description: Mapping of peripheral SDI lanes to controller SDO lanes. 97 + Each index in the array represents a peripheral SDI lane, and the value 98 + at that index represents the corresponding controller SDO lane. 99 + $ref: /schemas/types.yaml#/definitions/uint32-array 100 + default: [0, 1, 2, 3, 4, 5, 6, 7] 101 101 102 102 spi-tx-delay-us: 103 103 description:
-1
Documentation/devicetree/bindings/spi/spi-xilinx.yaml
··· 38 38 required: 39 39 - compatible 40 40 - reg 41 - - interrupts 42 41 43 42 unevaluatedProperties: false 44 43
+3
Documentation/devicetree/bindings/spi/st,stm32-spi.yaml
··· 96 96 The region should be defined as child node of the AHB SRAM node 97 97 as per the generic bindings in Documentation/devicetree/bindings/sram/sram.yaml 98 98 99 + power-domains: 100 + maxItems: 1 101 + 99 102 access-controllers: 100 103 minItems: 1 101 104 maxItems: 2
+1
Documentation/spi/index.rst
··· 9 9 10 10 spi-summary 11 11 spidev 12 + multiple-data-lanes 12 13 butterfly 13 14 spi-lm70llp 14 15 spi-sc18is602
+217
Documentation/spi/multiple-data-lanes.rst
··· 1 + ==================================== 2 + SPI devices with multiple data lanes 3 + ==================================== 4 + 5 + Some specialized SPI controllers and peripherals support multiple data lanes 6 + that allow reading more than one word at a time in parallel. This is different 7 + from dual/quad/octal SPI where multiple bits of a single word are transferred 8 + simultaneously. 9 + 10 + For example, controllers that support parallel flash memories have this feature 11 + as do some simultaneous-sampling ADCs where each channel has its own data lane. 12 + 13 + --------------------- 14 + Describing the wiring 15 + --------------------- 16 + 17 + The ``spi-tx-bus-width`` and ``spi-rx-bus-width`` properties in the devicetree 18 + are used to describe how many data lanes are connected between the controller 19 + and how wide each lane is. The number of items in the array indicates how many 20 + lanes there are, and the value of each item indicates how many bits wide that 21 + lane is. 22 + 23 + For example, a dual-simultaneous-sampling ADC with two 4-bit lanes might be 24 + wired up like this:: 25 + 26 + +--------------+ +----------+ 27 + | SPI | | AD4630 | 28 + | Controller | | ADC | 29 + | | | | 30 + | CS0 |--->| CS | 31 + | SCK |--->| SCK | 32 + | SDO |--->| SDI | 33 + | | | | 34 + | SDIA0 |<---| SDOA0 | 35 + | SDIA1 |<---| SDOA1 | 36 + | SDIA2 |<---| SDOA2 | 37 + | SDIA3 |<---| SDOA3 | 38 + | | | | 39 + | SDIB0 |<---| SDOB0 | 40 + | SDIB1 |<---| SDOB1 | 41 + | SDIB2 |<---| SDOB2 | 42 + | SDIB3 |<---| SDOB3 | 43 + | | | | 44 + +--------------+ +----------+ 45 + 46 + It is described in a devicetree like this:: 47 + 48 + spi { 49 + compatible = "my,spi-controller"; 50 + 51 + ... 52 + 53 + adc@0 { 54 + compatible = "adi,ad4630"; 55 + reg = <0>; 56 + ... 57 + spi-rx-bus-width = <4>, <4>; /* 2 lanes of 4 bits each */ 58 + ... 59 + }; 60 + }; 61 + 62 + In most cases, lanes will be wired up symmetrically (A to A, B to B, etc). If 63 + this isn't the case, extra ``spi-rx-lane-map`` and ``spi-tx-lane-map`` 64 + properties are needed to provide a mapping between controller lanes and the 65 + physical lane wires. 66 + 67 + Here is an example where a multi-lane SPI controller has each lane wired to 68 + separate single-lane peripherals:: 69 + 70 + +--------------+ +----------+ 71 + | SPI | | Thing 1 | 72 + | Controller | | | 73 + | | | | 74 + | CS0 |--->| CS | 75 + | SDO0 |--->| SDI | 76 + | SDI0 |<---| SDO | 77 + | SCLK0 |--->| SCLK | 78 + | | | | 79 + | | +----------+ 80 + | | 81 + | | +----------+ 82 + | | | Thing 2 | 83 + | | | | 84 + | CS1 |--->| CS | 85 + | SDO1 |--->| SDI | 86 + | SDI1 |<---| SDO | 87 + | SCLK1 |--->| SCLK | 88 + | | | | 89 + +--------------+ +----------+ 90 + 91 + This is described in a devicetree like this:: 92 + 93 + spi { 94 + compatible = "my,spi-controller"; 95 + 96 + ... 97 + 98 + thing1@0 { 99 + compatible = "my,thing1"; 100 + reg = <0>; 101 + ... 102 + }; 103 + 104 + thing2@1 { 105 + compatible = "my,thing2"; 106 + reg = <1>; 107 + ... 108 + spi-tx-lane-map = <1>; /* lane 0 is not used, lane 1 is used for tx wire */ 109 + spi-rx-lane-map = <1>; /* lane 0 is not used, lane 1 is used for rx wire */ 110 + ... 111 + }; 112 + }; 113 + 114 + 115 + The default values of ``spi-rx-bus-width`` and ``spi-tx-bus-width`` are ``<1>``, 116 + so these properties can still be omitted even when ``spi-rx-lane-map`` and 117 + ``spi-tx-lane-map`` are used. 
118 + 119 + ---------------------------- 120 + Usage in a peripheral driver 121 + ---------------------------- 122 + 123 + These types of SPI controllers generally do not support arbitrary use of the 124 + multiple lanes. Instead, they operate in one of a few defined modes. Peripheral 125 + drivers should set the :c:type:`struct spi_transfer.multi_lane_mode <spi_transfer>` 126 + field to indicate which mode they want to use for a given transfer. 127 + 128 + The possible values for this field have the following semantics: 129 + 130 + - :c:macro:`SPI_MULTI_BUS_MODE_SINGLE`: Only use the first lane. Other lanes are 131 + ignored. This means that it is operating just like a conventional SPI 132 + peripheral. This is the default, so it does not need to be explicitly set. 133 + 134 + Example:: 135 + 136 + tx_buf[0] = 0x88; 137 + 138 + struct spi_transfer xfer = { 139 + .tx_buf = tx_buf, 140 + .len = 1, 141 + }; 142 + 143 + spi_sync_transfer(spi, &xfer, 1); 144 + 145 + Assuming the controller is sending the MSB first, the sequence of bits 146 + sent over the tx wire would be (right-most bit is sent first):: 147 + 148 + controller > data bits > peripheral 149 + ---------- ---------------- ---------- 150 + SDO 0 0-0-0-1-0-0-0-1 SDI 0 151 + 152 + - :c:macro:`SPI_MULTI_BUS_MODE_MIRROR`: Send a single data word over all of the 153 + lanes at the same time. This only makes sense for writes and not 154 + for reads. 155 + 156 + Example:: 157 + 158 + tx_buf[0] = 0x88; 159 + 160 + struct spi_transfer xfer = { 161 + .tx_buf = tx_buf, 162 + .len = 1, 163 + .multi_lane_mode = SPI_MULTI_BUS_MODE_MIRROR, 164 + }; 165 + 166 + spi_sync_transfer(spi, &xfer, 1); 167 + 168 + The data is mirrored on each tx wire:: 169 + 170 + controller > data bits > peripheral 171 + ---------- ---------------- ---------- 172 + SDO 0 0-0-0-1-0-0-0-1 SDI 0 173 + SDO 1 0-0-0-1-0-0-0-1 SDI 1 174 + 175 + - :c:macro:`SPI_MULTI_BUS_MODE_STRIPE`: Send or receive two different data words 176 + at the same time, one on each lane. This means that the buffer needs to be 177 + sized to hold data for all lanes. Data is interleaved in the buffer, with 178 + the first word corresponding to lane 0, the second to lane 1, and so on. 179 + Once the last lane is used, the next word in the buffer corresponds to lane 180 + 0 again. Accordingly, the buffer size must be a multiple of the number of 181 + lanes. This mode works for both reads and writes. 182 + 183 + Example:: 184 + 185 + struct spi_transfer xfer = { 186 + .rx_buf = rx_buf, 187 + .len = 2, 188 + .multi_lane_mode = SPI_MULTI_BUS_MODE_STRIPE, 189 + }; 190 + 191 + spi_sync_transfer(spi, &xfer, 1); 192 + 193 + Each rx wire has a different data word sent simultaneously:: 194 + 195 + controller < data bits < peripheral 196 + ---------- ---------------- ---------- 197 + SDI 0 0-0-0-1-0-0-0-1 SDO 0 198 + SDI 1 1-0-0-0-1-0-0-0 SDO 1 199 + 200 + After the transfer, ``rx_buf[0] == 0x11`` (word from SDO 0) and 201 + ``rx_buf[1] == 0x88`` (word from SDO 1). 202 + 203 + 204 + ----------------------------- 205 + SPI controller driver support 206 + ----------------------------- 207 + 208 + To support multiple data lanes, SPI controller drivers need to set 209 + :c:type:`struct spi_controller.num_data_lanes <spi_controller>` to a value 210 + greater than 1. 
211 + 212 + Then the part of the driver that handles SPI transfers needs to check the 213 + :c:type:`struct spi_transfer.multi_lane_mode <spi_transfer>` field and implement 214 + the appropriate behavior for each supported mode and return an error for 215 + unsupported modes. 216 + 217 + The core SPI code should handle the rest.
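
The controller-side requirements described above reduce to advertising the lane count and refusing modes the hardware cannot perform. A hedged, hypothetical sketch follows; foo_transfer_one(), foo_do_single() and foo_do_striped() are illustrative stand-ins, not code from this series:

    /*
     * Hypothetical controller driver fragment.  At probe time such a driver
     * would set ctlr->num_data_lanes to a value greater than 1 (say 2); per
     * transfer it then checks multi_lane_mode and returns an error for any
     * mode it does not implement, as the documentation above requires.
     */
    #include <linux/errno.h>
    #include <linux/spi/spi.h>

    /* hardware-specific transfer paths, defined elsewhere in this sketch */
    static int foo_do_single(struct spi_controller *ctlr, struct spi_transfer *xfer);
    static int foo_do_striped(struct spi_controller *ctlr, struct spi_transfer *xfer);

    static int foo_transfer_one(struct spi_controller *ctlr,
                                struct spi_device *spi,
                                struct spi_transfer *xfer)
    {
            switch (xfer->multi_lane_mode) {
            case SPI_MULTI_BUS_MODE_SINGLE:         /* plain SPI, the default */
                    return foo_do_single(ctlr, xfer);
            case SPI_MULTI_BUS_MODE_STRIPE:         /* one word per lane */
                    return foo_do_striped(ctlr, xfer);
            default:                                /* e.g. MIRROR not wired up */
                    return -EOPNOTSUPP;
            }
    }
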
+26
MAINTAINERS
··· 1821 1821 F: drivers/clk/analogbits/* 1822 1822 F: include/linux/clk/analogbits* 1823 1823 1824 + ANDES ATCSPI200 SPI DRIVER 1825 + M: CL Wang <cl634@andestech.com> 1826 + S: Supported 1827 + F: Documentation/devicetree/bindings/spi/andestech,ae350-spi.yaml 1828 + F: drivers/spi/spi-atcspi200.c 1829 + 1824 1830 ANDROID DRIVERS 1825 1831 M: Greg Kroah-Hartman <gregkh@linuxfoundation.org> 1826 1832 M: Arve Hjønnevåg <arve@android.com> ··· 4282 4276 W: https://ez.analog.com/linux-software-drivers 4283 4277 F: Documentation/devicetree/bindings/pwm/adi,axi-pwmgen.yaml 4284 4278 F: drivers/pwm/pwm-axi-pwmgen.c 4279 + 4280 + AXIADO SPI DB DRIVER 4281 + M: Vladimir Moravcevic <vmoravcevic@axiado.com> 4282 + M: Tzu-Hao Wei <twei@axiado.com> 4283 + M: Swark Yang <syang@axiado.com> 4284 + M: Prasad Bolisetty <pbolisetty@axiado.com> 4285 + L: linux-spi@vger.kernel.org 4286 + S: Maintained 4287 + F: Documentation/devicetree/bindings/spi/axiado,ax3000-spi.yaml 4288 + F: drivers/spi/spi-axiado.c 4289 + F: drivers/spi/spi-axiado.h 4285 4290 4286 4291 AYANEO PLATFORM EC DRIVER 4287 4292 M: Antheas Kapenekakis <lkml@antheas.dev> ··· 19007 18990 S: Maintained 19008 18991 F: Documentation/devicetree/bindings/sound/trivial-codec.yaml 19009 18992 F: sound/soc/codecs/tfa9879* 18993 + 18994 + NXP XSPI DRIVER 18995 + M: Han Xu <han.xu@nxp.com> 18996 + M: Haibo Chen <haibo.chen@nxp.com> 18997 + L: linux-spi@vger.kernel.org 18998 + L: imx@lists.linux.dev 18999 + S: Maintained 19000 + F: Documentation/devicetree/bindings/spi/nxp,imx94-xspi.yaml 19001 + F: drivers/spi/spi-nxp-xspi.c 19010 19002 19011 19003 NXP-NCI NFC DRIVER 19012 19004 S: Orphan
+30 -27
drivers/spi/Kconfig
··· 136 136 This enables support for the SPI controller present on the 137 137 Qualcomm Atheros AR934X/QCA95XX SoCs. 138 138 139 + config SPI_ATCSPI200 140 + tristate "Andes ATCSPI200 SPI controller" 141 + depends on ARCH_ANDES 142 + help 143 + SPI driver for Andes ATCSPI200 SPI controller. 144 + ATCSPI200 controller supports DMA and PIO modes. When DMA 145 + is not available, the driver automatically falls back to 146 + PIO mode. 147 + 139 148 config SPI_ATH79 140 149 tristate "Atheros AR71XX/AR724X/AR913X SPI controller driver" 141 150 depends on ATH79 || COMPILE_TEST ··· 212 203 This enables support for the Analog Devices AXI SPI Engine SPI controller. 213 204 It is part of the SPI Engine framework that is used in some Analog Devices 214 205 reference designs for FPGAs. 206 + 207 + config SPI_AXIADO 208 + tristate "Axiado DB-H SPI controller" 209 + depends on SPI_MEM 210 + depends on ARCH_AXIADO || COMPILE_TEST 211 + help 212 + Enable support for the SPI controller present on Axiado AX3000 SoCs. 213 + 214 + The implementation supports host-only mode and does not provide target 215 + functionality. It is intended for use cases where the SoC acts as the SPI 216 + host, communicating with peripheral devices such as flash memory. 215 217 216 218 config SPI_BCM2835 217 219 tristate "BCM2835 SPI controller" ··· 385 365 tristate "Memory-mapped io interface driver for DW SPI core" 386 366 depends on HAS_IOMEM 387 367 388 - config SPI_DW_BT1 389 - tristate "Baikal-T1 SPI driver for DW SPI core" 390 - depends on MIPS_BAIKAL_T1 || COMPILE_TEST 391 - select MULTIPLEXER 392 - help 393 - Baikal-T1 SoC is equipped with three DW APB SSI-based MMIO SPI 394 - controllers. Two of them are pretty much normal: with IRQ, DMA, 395 - FIFOs of 64 words depth, 4x CSs, but the third one as being a 396 - part of the Baikal-T1 System Boot Controller has got a very 397 - limited resources: no IRQ, no DMA, only a single native 398 - chip-select and Tx/Rx FIFO with just 8 words depth available. 399 - The later one is normally connected to an external SPI-nor flash 400 - of 128Mb (in general can be of bigger size). 401 - 402 - config SPI_DW_BT1_DIRMAP 403 - bool "Directly mapped Baikal-T1 Boot SPI flash support" 404 - depends on SPI_DW_BT1 405 - help 406 - Directly mapped SPI flash memory is an interface specific to the 407 - Baikal-T1 System Boot Controller. It is a 16MB MMIO region, which 408 - can be used to access a peripheral memory device just by 409 - reading/writing data from/to it. Note that the system APB bus 410 - will stall during each IO from/to the dirmap region until the 411 - operation is finished. So try not to use it concurrently with 412 - time-critical tasks (like the SPI memory operations implemented 413 - in this driver). 414 - 415 368 endif 416 369 417 370 config SPI_DLN2 ··· 471 478 This enables support for the Flex SPI controller in master mode. 472 479 Up to four slave devices can be connected on two buses with two 473 480 chipselects each. 481 + This controller does not support generic SPI messages and only 482 + supports the high-level SPI memory interface. 483 + 484 + config SPI_NXP_XSPI 485 + tristate "NXP xSPI controller" 486 + depends on ARCH_MXC || COMPILE_TEST 487 + depends on HAS_IOMEM 488 + help 489 + This enables support for the xSPI controller. Up to two devices 490 + can be connected to one host. 474 491 This controller does not support generic SPI messages and only 475 492 supports the high-level SPI memory interface. 476 493
+3 -1
drivers/spi/Makefile
··· 26 26 obj-$(CONFIG_SPI_AR934X) += spi-ar934x.o 27 27 obj-$(CONFIG_SPI_ARMADA_3700) += spi-armada-3700.o 28 28 obj-$(CONFIG_SPI_ASPEED_SMC) += spi-aspeed-smc.o 29 + obj-$(CONFIG_SPI_ATCSPI200) += spi-atcspi200.o 29 30 obj-$(CONFIG_SPI_ATMEL) += spi-atmel.o 30 31 obj-$(CONFIG_SPI_ATMEL_QUADSPI) += atmel-quadspi.o 31 32 obj-$(CONFIG_SPI_AT91_USART) += spi-at91-usart.o 32 33 obj-$(CONFIG_SPI_ATH79) += spi-ath79.o 33 34 obj-$(CONFIG_SPI_AU1550) += spi-au1550.o 34 35 obj-$(CONFIG_SPI_AXI_SPI_ENGINE) += spi-axi-spi-engine.o 36 + obj-$(CONFIG_SPI_AXIADO) += spi-axiado.o 35 37 obj-$(CONFIG_SPI_BCM2835) += spi-bcm2835.o 36 38 obj-$(CONFIG_SPI_BCM2835AUX) += spi-bcm2835aux.o 37 39 obj-$(CONFIG_SPI_BCM63XX) += spi-bcm63xx.o ··· 54 52 obj-$(CONFIG_SPI_DESIGNWARE) += spi-dw.o 55 53 spi-dw-y := spi-dw-core.o 56 54 spi-dw-$(CONFIG_SPI_DW_DMA) += spi-dw-dma.o 57 - obj-$(CONFIG_SPI_DW_BT1) += spi-dw-bt1.o 58 55 obj-$(CONFIG_SPI_DW_MMIO) += spi-dw-mmio.o 59 56 obj-$(CONFIG_SPI_DW_PCI) += spi-dw-pci.o 60 57 obj-$(CONFIG_SPI_EP93XX) += spi-ep93xx.o ··· 103 102 obj-$(CONFIG_SPI_NPCM_FIU) += spi-npcm-fiu.o 104 103 obj-$(CONFIG_SPI_NPCM_PSPI) += spi-npcm-pspi.o 105 104 obj-$(CONFIG_SPI_NXP_FLEXSPI) += spi-nxp-fspi.o 105 + obj-$(CONFIG_SPI_NXP_XSPI) += spi-nxp-xspi.o 106 106 obj-$(CONFIG_SPI_OC_TINY) += spi-oc-tiny.o 107 107 spi-octeon-objs := spi-cavium.o spi-cavium-octeon.o 108 108 obj-$(CONFIG_SPI_OCTEON) += spi-octeon.o
-1
drivers/spi/atmel-quadspi.c
··· 1382 1382 ctrl->bus_num = -1; 1383 1383 ctrl->mem_ops = &atmel_qspi_mem_ops; 1384 1384 ctrl->num_chipselect = 1; 1385 - ctrl->dev.of_node = pdev->dev.of_node; 1386 1385 platform_set_drvdata(pdev, ctrl); 1387 1386 1388 1387 /* Map the registers */
-1
drivers/spi/spi-airoha-snfi.c
··· 1124 1124 ctrl->bits_per_word_mask = SPI_BPW_MASK(8); 1125 1125 ctrl->mode_bits = SPI_RX_DUAL; 1126 1126 ctrl->setup = airoha_snand_setup; 1127 - device_set_node(&ctrl->dev, dev_fwnode(dev)); 1128 1127 1129 1128 err = airoha_snand_nfi_init(as_ctrl); 1130 1129 if (err)
-2
drivers/spi/spi-altera-platform.c
··· 67 67 host->bits_per_word_mask = SPI_BPW_RANGE_MASK(1, 16); 68 68 } 69 69 70 - host->dev.of_node = pdev->dev.of_node; 71 - 72 70 hw = spi_controller_get_devdata(host); 73 71 hw->dev = &pdev->dev; 74 72
-1
drivers/spi/spi-amlogic-spifc-a1.c
··· 358 358 return ret; 359 359 360 360 ctrl->num_chipselect = 1; 361 - ctrl->dev.of_node = pdev->dev.of_node; 362 361 ctrl->bits_per_word_mask = SPI_BPW_MASK(8); 363 362 ctrl->auto_runtime_pm = true; 364 363 ctrl->mem_ops = &amlogic_spifc_a1_mem_ops;
-1
drivers/spi/spi-amlogic-spisg.c
··· 781 781 pm_runtime_resume_and_get(&spisg->pdev->dev); 782 782 783 783 ctlr->num_chipselect = 4; 784 - ctlr->dev.of_node = pdev->dev.of_node; 785 784 ctlr->mode_bits = SPI_CPHA | SPI_CPOL | SPI_LSB_FIRST | 786 785 SPI_3WIRE | SPI_TX_QUAD | SPI_RX_QUAD; 787 786 ctlr->max_speed_hz = 1000 * 1000 * 100;
-1
drivers/spi/spi-apple.c
··· 485 485 if (ret) 486 486 return dev_err_probe(&pdev->dev, ret, "Unable to bind to interrupt\n"); 487 487 488 - ctlr->dev.of_node = pdev->dev.of_node; 489 488 ctlr->bus_num = pdev->id; 490 489 ctlr->num_chipselect = 1; 491 490 ctlr->mode_bits = SPI_CPHA | SPI_CPOL | SPI_LSB_FIRST;
-1
drivers/spi/spi-ar934x.c
··· 195 195 ctlr->transfer_one_message = ar934x_spi_transfer_one_message; 196 196 ctlr->bits_per_word_mask = SPI_BPW_MASK(32) | SPI_BPW_MASK(24) | 197 197 SPI_BPW_MASK(16) | SPI_BPW_MASK(8); 198 - ctlr->dev.of_node = pdev->dev.of_node; 199 198 ctlr->num_chipselect = 3; 200 199 201 200 dev_set_drvdata(&pdev->dev, ctlr);
+1 -3
drivers/spi/spi-armada-3700.c
··· 813 813 static int a3700_spi_probe(struct platform_device *pdev) 814 814 { 815 815 struct device *dev = &pdev->dev; 816 - struct device_node *of_node = dev->of_node; 817 816 struct spi_controller *host; 818 817 struct a3700_spi *spi; 819 818 u32 num_cs = 0; ··· 825 826 goto out; 826 827 } 827 828 828 - if (of_property_read_u32(of_node, "num-cs", &num_cs)) { 829 + if (of_property_read_u32(dev->of_node, "num-cs", &num_cs)) { 829 830 dev_err(dev, "could not find num-cs\n"); 830 831 ret = -ENXIO; 831 832 goto error; 832 833 } 833 834 834 835 host->bus_num = pdev->id; 835 - host->dev.of_node = of_node; 836 836 host->mode_bits = SPI_MODE_3; 837 837 host->num_chipselect = num_cs; 838 838 host->bits_per_word_mask = SPI_BPW_MASK(8) | SPI_BPW_MASK(32);
+128 -7
drivers/spi/spi-aspeed-smc.c
··· 48 48 /* CEx Address Decoding Range Register */ 49 49 #define CE0_SEGMENT_ADDR_REG 0x30 50 50 51 + #define FULL_DUPLEX_RX_DATA 0x1e4 52 + 51 53 /* CEx Read timing compensation register */ 52 54 #define CE0_TIMING_COMPENSATION_REG 0x94 53 55 ··· 83 81 u32 hclk_mask; 84 82 u32 hdiv_max; 85 83 u32 min_window_size; 84 + bool full_duplex; 86 85 87 86 phys_addr_t (*segment_start)(struct aspeed_spi *aspi, u32 reg); 88 87 phys_addr_t (*segment_end)(struct aspeed_spi *aspi, u32 reg); ··· 108 105 109 106 struct clk *clk; 110 107 u32 clk_freq; 108 + u8 cs_change; 111 109 112 110 struct aspeed_spi_chip chips[ASPEED_SPI_MAX_NUM_CS]; 113 111 }; ··· 284 280 } 285 281 286 282 /* support for 1-1-1, 1-1-2 or 1-1-4 */ 287 - static bool aspeed_spi_supports_op(struct spi_mem *mem, const struct spi_mem_op *op) 283 + static bool aspeed_spi_supports_mem_op(struct spi_mem *mem, 284 + const struct spi_mem_op *op) 288 285 { 289 286 if (op->cmd.buswidth > 1) 290 287 return false; ··· 310 305 311 306 static const struct aspeed_spi_data ast2400_spi_data; 312 307 313 - static int do_aspeed_spi_exec_op(struct spi_mem *mem, const struct spi_mem_op *op) 308 + static int do_aspeed_spi_exec_mem_op(struct spi_mem *mem, 309 + const struct spi_mem_op *op) 314 310 { 315 311 struct aspeed_spi *aspi = spi_controller_get_devdata(mem->spi->controller); 316 312 struct aspeed_spi_chip *chip = &aspi->chips[spi_get_chipselect(mem->spi, 0)]; ··· 373 367 return ret; 374 368 } 375 369 376 - static int aspeed_spi_exec_op(struct spi_mem *mem, const struct spi_mem_op *op) 370 + static int aspeed_spi_exec_mem_op(struct spi_mem *mem, 371 + const struct spi_mem_op *op) 377 372 { 378 373 int ret; 379 374 380 - ret = do_aspeed_spi_exec_op(mem, op); 375 + ret = do_aspeed_spi_exec_mem_op(mem, op); 381 376 if (ret) 382 377 dev_err(&mem->spi->dev, "operation failed: %d\n", ret); 383 378 return ret; ··· 780 773 } 781 774 782 775 static const struct spi_controller_mem_ops aspeed_spi_mem_ops = { 783 - .supports_op = aspeed_spi_supports_op, 784 - .exec_op = aspeed_spi_exec_op, 776 + .supports_op = aspeed_spi_supports_mem_op, 777 + .exec_op = aspeed_spi_exec_mem_op, 785 778 .get_name = aspeed_spi_get_name, 786 779 .dirmap_create = aspeed_spi_dirmap_create, 787 780 .dirmap_read = aspeed_spi_dirmap_read, ··· 850 843 aspeed_spi_chip_enable(aspi, cs, enable); 851 844 } 852 845 846 + static int aspeed_spi_user_prepare_msg(struct spi_controller *ctlr, 847 + struct spi_message *msg) 848 + { 849 + struct aspeed_spi *aspi = 850 + (struct aspeed_spi *)spi_controller_get_devdata(ctlr); 851 + const struct aspeed_spi_data *data = aspi->data; 852 + struct spi_device *spi = msg->spi; 853 + u32 cs = spi_get_chipselect(spi, 0); 854 + struct aspeed_spi_chip *chip = &aspi->chips[cs]; 855 + u32 ctrl_val; 856 + u32 clk_div = data->get_clk_div(chip, spi->max_speed_hz); 857 + 858 + ctrl_val = chip->ctl_val[ASPEED_SPI_BASE]; 859 + ctrl_val &= ~CTRL_IO_MODE_MASK & data->hclk_mask; 860 + ctrl_val |= clk_div; 861 + chip->ctl_val[ASPEED_SPI_BASE] = ctrl_val; 862 + 863 + if (aspi->cs_change == 0) 864 + aspeed_spi_start_user(chip); 865 + 866 + return 0; 867 + } 868 + 869 + static int aspeed_spi_user_unprepare_msg(struct spi_controller *ctlr, 870 + struct spi_message *msg) 871 + { 872 + struct aspeed_spi *aspi = 873 + (struct aspeed_spi *)spi_controller_get_devdata(ctlr); 874 + struct spi_device *spi = msg->spi; 875 + u32 cs = spi_get_chipselect(spi, 0); 876 + struct aspeed_spi_chip *chip = &aspi->chips[cs]; 877 + 878 + if (aspi->cs_change == 0) 879 + aspeed_spi_stop_user(chip); 880 
+ 881 + return 0; 882 + } 883 + 884 + static void aspeed_spi_user_transfer_tx(struct aspeed_spi *aspi, 885 + struct spi_device *spi, 886 + const u8 *tx_buf, u8 *rx_buf, 887 + void *dst, u32 len) 888 + { 889 + const struct aspeed_spi_data *data = aspi->data; 890 + bool full_duplex_transfer = data->full_duplex && tx_buf == rx_buf; 891 + u32 i; 892 + 893 + if (full_duplex_transfer && 894 + !!(spi->mode & (SPI_TX_DUAL | SPI_TX_QUAD | 895 + SPI_RX_DUAL | SPI_RX_QUAD))) { 896 + dev_err(aspi->dev, 897 + "full duplex is only supported for single IO mode\n"); 898 + return; 899 + } 900 + 901 + for (i = 0; i < len; i++) { 902 + writeb(tx_buf[i], dst); 903 + if (full_duplex_transfer) 904 + rx_buf[i] = readb(aspi->regs + FULL_DUPLEX_RX_DATA); 905 + } 906 + } 907 + 908 + static int aspeed_spi_user_transfer(struct spi_controller *ctlr, 909 + struct spi_device *spi, 910 + struct spi_transfer *xfer) 911 + { 912 + struct aspeed_spi *aspi = 913 + (struct aspeed_spi *)spi_controller_get_devdata(ctlr); 914 + u32 cs = spi_get_chipselect(spi, 0); 915 + struct aspeed_spi_chip *chip = &aspi->chips[cs]; 916 + void __iomem *ahb_base = aspi->chips[cs].ahb_base; 917 + const u8 *tx_buf = xfer->tx_buf; 918 + u8 *rx_buf = xfer->rx_buf; 919 + 920 + dev_dbg(aspi->dev, 921 + "[cs%d] xfer: width %d, len %u, tx %p, rx %p\n", 922 + cs, xfer->bits_per_word, xfer->len, 923 + tx_buf, rx_buf); 924 + 925 + if (tx_buf) { 926 + if (spi->mode & SPI_TX_DUAL) 927 + aspeed_spi_set_io_mode(chip, CTRL_IO_DUAL_DATA); 928 + else if (spi->mode & SPI_TX_QUAD) 929 + aspeed_spi_set_io_mode(chip, CTRL_IO_QUAD_DATA); 930 + 931 + aspeed_spi_user_transfer_tx(aspi, spi, tx_buf, rx_buf, 932 + (void *)ahb_base, xfer->len); 933 + } 934 + 935 + if (rx_buf && rx_buf != tx_buf) { 936 + if (spi->mode & SPI_RX_DUAL) 937 + aspeed_spi_set_io_mode(chip, CTRL_IO_DUAL_DATA); 938 + else if (spi->mode & SPI_RX_QUAD) 939 + aspeed_spi_set_io_mode(chip, CTRL_IO_QUAD_DATA); 940 + 941 + ioread8_rep(ahb_base, rx_buf, xfer->len); 942 + } 943 + 944 + xfer->error = 0; 945 + aspi->cs_change = xfer->cs_change; 946 + 947 + return 0; 948 + } 949 + 853 950 static int aspeed_spi_probe(struct platform_device *pdev) 854 951 { 855 952 struct device *dev = &pdev->dev; ··· 1009 898 ctlr->setup = aspeed_spi_setup; 1010 899 ctlr->cleanup = aspeed_spi_cleanup; 1011 900 ctlr->num_chipselect = of_get_available_child_count(dev->of_node); 1012 - ctlr->dev.of_node = dev->of_node; 901 + ctlr->prepare_message = aspeed_spi_user_prepare_msg; 902 + ctlr->unprepare_message = aspeed_spi_user_unprepare_msg; 903 + ctlr->transfer_one = aspeed_spi_user_transfer; 1013 904 1014 905 aspi->num_cs = ctlr->num_chipselect; 1015 906 ··· 1568 1455 .hclk_mask = 0xfffff0ff, 1569 1456 .hdiv_max = 1, 1570 1457 .min_window_size = 0x800000, 1458 + .full_duplex = false, 1571 1459 .calibrate = aspeed_spi_calibrate, 1572 1460 .get_clk_div = aspeed_get_clk_div_ast2400, 1573 1461 .segment_start = aspeed_spi_segment_start, ··· 1585 1471 .timing = 0x14, 1586 1472 .hclk_mask = 0xfffff0ff, 1587 1473 .hdiv_max = 1, 1474 + .full_duplex = false, 1588 1475 .get_clk_div = aspeed_get_clk_div_ast2400, 1589 1476 .calibrate = aspeed_spi_calibrate, 1590 1477 /* No segment registers */ ··· 1600 1485 .hclk_mask = 0xffffd0ff, 1601 1486 .hdiv_max = 1, 1602 1487 .min_window_size = 0x800000, 1488 + .full_duplex = false, 1603 1489 .get_clk_div = aspeed_get_clk_div_ast2500, 1604 1490 .calibrate = aspeed_spi_calibrate, 1605 1491 .segment_start = aspeed_spi_segment_start, ··· 1618 1502 .hclk_mask = 0xffffd0ff, 1619 1503 .hdiv_max = 1, 1620 1504 
.min_window_size = 0x800000, 1505 + .full_duplex = false, 1621 1506 .get_clk_div = aspeed_get_clk_div_ast2500, 1622 1507 .calibrate = aspeed_spi_calibrate, 1623 1508 .segment_start = aspeed_spi_segment_start, ··· 1637 1520 .hclk_mask = 0xf0fff0ff, 1638 1521 .hdiv_max = 2, 1639 1522 .min_window_size = 0x200000, 1523 + .full_duplex = false, 1640 1524 .get_clk_div = aspeed_get_clk_div_ast2600, 1641 1525 .calibrate = aspeed_spi_ast2600_calibrate, 1642 1526 .segment_start = aspeed_spi_segment_ast2600_start, ··· 1656 1538 .hclk_mask = 0xf0fff0ff, 1657 1539 .hdiv_max = 2, 1658 1540 .min_window_size = 0x200000, 1541 + .full_duplex = false, 1659 1542 .get_clk_div = aspeed_get_clk_div_ast2600, 1660 1543 .calibrate = aspeed_spi_ast2600_calibrate, 1661 1544 .segment_start = aspeed_spi_segment_ast2600_start, ··· 1675 1556 .hclk_mask = 0xf0fff0ff, 1676 1557 .hdiv_max = 2, 1677 1558 .min_window_size = 0x10000, 1559 + .full_duplex = true, 1678 1560 .get_clk_div = aspeed_get_clk_div_ast2600, 1679 1561 .calibrate = aspeed_spi_ast2600_calibrate, 1680 1562 .segment_start = aspeed_spi_segment_ast2700_start, ··· 1693 1573 .hclk_mask = 0xf0fff0ff, 1694 1574 .hdiv_max = 2, 1695 1575 .min_window_size = 0x10000, 1576 + .full_duplex = true, 1696 1577 .get_clk_div = aspeed_get_clk_div_ast2600, 1697 1578 .calibrate = aspeed_spi_ast2600_calibrate, 1698 1579 .segment_start = aspeed_spi_segment_ast2700_start,
+679
drivers/spi/spi-atcspi200.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-or-later 2 + /* 3 + * Driver for Andes ATCSPI200 SPI Controller 4 + * 5 + * Copyright (C) 2025 Andes Technology Corporation. 6 + */ 7 + 8 + #include <linux/bitfield.h> 9 + #include <linux/clk.h> 10 + #include <linux/completion.h> 11 + #include <linux/dev_printk.h> 12 + #include <linux/dmaengine.h> 13 + #include <linux/err.h> 14 + #include <linux/errno.h> 15 + #include <linux/jiffies.h> 16 + #include <linux/minmax.h> 17 + #include <linux/module.h> 18 + #include <linux/mod_devicetable.h> 19 + #include <linux/mutex.h> 20 + #include <linux/platform_device.h> 21 + #include <linux/regmap.h> 22 + #include <linux/spi/spi.h> 23 + #include <linux/spi/spi-mem.h> 24 + 25 + /* Register definitions */ 26 + #define ATCSPI_TRANS_FMT 0x10 /* SPI transfer format register */ 27 + #define ATCSPI_TRANS_CTRL 0x20 /* SPI transfer control register */ 28 + #define ATCSPI_CMD 0x24 /* SPI command register */ 29 + #define ATCSPI_ADDR 0x28 /* SPI address register */ 30 + #define ATCSPI_DATA 0x2C /* SPI data register */ 31 + #define ATCSPI_CTRL 0x30 /* SPI control register */ 32 + #define ATCSPI_STATUS 0x34 /* SPI status register */ 33 + #define ATCSPI_TIMING 0x40 /* SPI interface timing register */ 34 + #define ATCSPI_CONFIG 0x7C /* SPI configuration register */ 35 + 36 + /* Transfer format register */ 37 + #define TRANS_FMT_CPHA BIT(0) 38 + #define TRANS_FMT_CPOL BIT(1) 39 + #define TRANS_FMT_DATA_MERGE_EN BIT(7) 40 + #define TRANS_FMT_DATA_LEN_MASK GENMASK(12, 8) 41 + #define TRANS_FMT_ADDR_LEN_MASK GENMASK(17, 16) 42 + #define TRANS_FMT_DATA_LEN(x) FIELD_PREP(TRANS_FMT_DATA_LEN_MASK, (x) - 1) 43 + #define TRANS_FMT_ADDR_LEN(x) FIELD_PREP(TRANS_FMT_ADDR_LEN_MASK, (x) - 1) 44 + 45 + /* Transfer control register */ 46 + #define TRANS_MODE_MASK GENMASK(27, 24) 47 + #define TRANS_MODE_W_ONLY FIELD_PREP(TRANS_MODE_MASK, 1) 48 + #define TRANS_MODE_R_ONLY FIELD_PREP(TRANS_MODE_MASK, 2) 49 + #define TRANS_MODE_NONE_DATA FIELD_PREP(TRANS_MODE_MASK, 7) 50 + #define TRANS_MODE_DMY_READ FIELD_PREP(TRANS_MODE_MASK, 9) 51 + #define TRANS_FIELD_DECNZ(m, x) ((x) ? 
FIELD_PREP(m, (x) - 1) : 0) 52 + #define TRANS_RD_TRANS_CNT(x) TRANS_FIELD_DECNZ(GENMASK(8, 0), x) 53 + #define TRANS_DUMMY_CNT(x) TRANS_FIELD_DECNZ(GENMASK(10, 9), x) 54 + #define TRANS_WR_TRANS_CNT(x) TRANS_FIELD_DECNZ(GENMASK(20, 12), x) 55 + #define TRANS_DUAL_QUAD(x) FIELD_PREP(GENMASK(23, 22), (x)) 56 + #define TRANS_ADDR_FMT BIT(28) 57 + #define TRANS_ADDR_EN BIT(29) 58 + #define TRANS_CMD_EN BIT(30) 59 + 60 + /* Control register */ 61 + #define CTRL_SPI_RST BIT(0) 62 + #define CTRL_RX_FIFO_RST BIT(1) 63 + #define CTRL_TX_FIFO_RST BIT(2) 64 + #define CTRL_RX_DMA_EN BIT(3) 65 + #define CTRL_TX_DMA_EN BIT(4) 66 + 67 + /* Status register */ 68 + #define ATCSPI_ACTIVE BIT(0) 69 + #define ATCSPI_RX_EMPTY BIT(14) 70 + #define ATCSPI_TX_FULL BIT(23) 71 + 72 + /* Interface timing setting */ 73 + #define TIMING_SCLK_DIV_MASK GENMASK(7, 0) 74 + #define TIMING_SCLK_DIV_MAX 0xFE 75 + 76 + /* Configuration register */ 77 + #define RXFIFO_SIZE(x) FIELD_GET(GENMASK(3, 0), (x)) 78 + #define TXFIFO_SIZE(x) FIELD_GET(GENMASK(7, 4), (x)) 79 + 80 + /* driver configurations */ 81 + #define ATCSPI_MAX_TRANS_LEN 512 82 + #define ATCSPI_MAX_SPEED_HZ 50000000 83 + #define ATCSPI_RDY_TIMEOUT_US 1000000 84 + #define ATCSPI_XFER_TIMEOUT(n) ((n) * 10) 85 + #define ATCSPI_MAX_CS_NUM 1 86 + #define ATCSPI_DMA_THRESHOLD 256 87 + #define ATCSPI_BITS_PER_UINT 8 88 + #define ATCSPI_DATA_MERGE_EN 1 89 + #define ATCSPI_DMA_SUPPORT 1 90 + 91 + /** 92 + * struct atcspi_dev - Andes ATCSPI200 SPI controller private data 93 + * @host: Pointer to the SPI controller structure. 94 + * @mutex_lock: A mutex to protect concurrent access to the controller. 95 + * @dma_completion: A completion to signal the end of a DMA transfer. 96 + * @dev: Pointer to the device structure. 97 + * @regmap: Register map for accessing controller registers. 98 + * @clk: Pointer to the controller's functional clock. 99 + * @dma_addr: The physical address of the SPI data register for DMA. 100 + * @clk_rate: The cached frequency of the functional clock. 101 + * @sclk_rate: The target frequency for the SPI clock (SCLK). 102 + * @txfifo_size: The size of the transmit FIFO in bytes. 103 + * @rxfifo_size: The size of the receive FIFO in bytes. 104 + * @data_merge: A flag indicating if the data merge mode is enabled for 105 + * the current transfer. 106 + * @use_dma: Enable DMA mode if ATCSPI_DMA_SUPPORT is set and DMA is 107 + * successfully configured. 108 + */ 109 + struct atcspi_dev { 110 + struct spi_controller *host; 111 + struct mutex mutex_lock; 112 + struct completion dma_completion; 113 + struct device *dev; 114 + struct regmap *regmap; 115 + struct clk *clk; 116 + dma_addr_t dma_addr; 117 + unsigned int clk_rate; 118 + unsigned int sclk_rate; 119 + unsigned int txfifo_size; 120 + unsigned int rxfifo_size; 121 + bool data_merge; 122 + bool use_dma; 123 + }; 124 + 125 + static int atcspi_wait_fifo_ready(struct atcspi_dev *spi, 126 + enum spi_mem_data_dir dir) 127 + { 128 + unsigned int val; 129 + unsigned int mask; 130 + int ret; 131 + 132 + mask = (dir == SPI_MEM_DATA_OUT) ? 
ATCSPI_TX_FULL : ATCSPI_RX_EMPTY; 133 + ret = regmap_read_poll_timeout(spi->regmap, 134 + ATCSPI_STATUS, 135 + val, 136 + !(val & mask), 137 + 0, 138 + ATCSPI_RDY_TIMEOUT_US); 139 + if (ret) 140 + dev_info(spi->dev, "Timed out waiting for FIFO ready\n"); 141 + 142 + return ret; 143 + } 144 + 145 + static int atcspi_xfer_data_poll(struct atcspi_dev *spi, 146 + const struct spi_mem_op *op) 147 + { 148 + void *rx_buf = op->data.buf.in; 149 + const void *tx_buf = op->data.buf.out; 150 + unsigned int val; 151 + int trans_bytes = op->data.nbytes; 152 + int num_byte; 153 + int ret = 0; 154 + 155 + num_byte = spi->data_merge ? 4 : 1; 156 + while (trans_bytes) { 157 + if (op->data.dir == SPI_MEM_DATA_OUT) { 158 + ret = atcspi_wait_fifo_ready(spi, SPI_MEM_DATA_OUT); 159 + if (ret) 160 + return ret; 161 + 162 + if (spi->data_merge) 163 + val = *(unsigned int *)tx_buf; 164 + else 165 + val = *(unsigned char *)tx_buf; 166 + regmap_write(spi->regmap, ATCSPI_DATA, val); 167 + tx_buf = (unsigned char *)tx_buf + num_byte; 168 + } else { 169 + ret = atcspi_wait_fifo_ready(spi, SPI_MEM_DATA_IN); 170 + if (ret) 171 + return ret; 172 + 173 + regmap_read(spi->regmap, ATCSPI_DATA, &val); 174 + if (spi->data_merge) 175 + *(unsigned int *)rx_buf = val; 176 + else 177 + *(unsigned char *)rx_buf = (unsigned char)val; 178 + rx_buf = (unsigned char *)rx_buf + num_byte; 179 + } 180 + trans_bytes -= num_byte; 181 + } 182 + 183 + return ret; 184 + } 185 + 186 + static void atcspi_set_trans_ctl(struct atcspi_dev *spi, 187 + const struct spi_mem_op *op) 188 + { 189 + unsigned int tc = 0; 190 + 191 + if (op->cmd.nbytes) 192 + tc |= TRANS_CMD_EN; 193 + if (op->addr.nbytes) 194 + tc |= TRANS_ADDR_EN; 195 + if (op->addr.buswidth > 1) 196 + tc |= TRANS_ADDR_FMT; 197 + if (op->data.nbytes) { 198 + tc |= TRANS_DUAL_QUAD(ffs(op->data.buswidth) - 1); 199 + if (op->data.dir == SPI_MEM_DATA_IN) { 200 + if (op->dummy.nbytes) 201 + tc |= TRANS_MODE_DMY_READ | 202 + TRANS_DUMMY_CNT(op->dummy.nbytes); 203 + else 204 + tc |= TRANS_MODE_R_ONLY; 205 + tc |= TRANS_RD_TRANS_CNT(op->data.nbytes); 206 + } else { 207 + tc |= TRANS_MODE_W_ONLY | 208 + TRANS_WR_TRANS_CNT(op->data.nbytes); 209 + } 210 + } else { 211 + tc |= TRANS_MODE_NONE_DATA; 212 + } 213 + regmap_write(spi->regmap, ATCSPI_TRANS_CTRL, tc); 214 + } 215 + 216 + static void atcspi_set_trans_fmt(struct atcspi_dev *spi, 217 + const struct spi_mem_op *op) 218 + { 219 + unsigned int val; 220 + 221 + regmap_read(spi->regmap, ATCSPI_TRANS_FMT, &val); 222 + if (op->data.nbytes) { 223 + if (ATCSPI_DATA_MERGE_EN && ATCSPI_BITS_PER_UINT == 8 && 224 + !(op->data.nbytes % 4)) { 225 + val |= TRANS_FMT_DATA_MERGE_EN; 226 + spi->data_merge = true; 227 + } else { 228 + val &= ~TRANS_FMT_DATA_MERGE_EN; 229 + spi->data_merge = false; 230 + } 231 + } 232 + 233 + val = (val & ~TRANS_FMT_ADDR_LEN_MASK) | 234 + TRANS_FMT_ADDR_LEN(op->addr.nbytes); 235 + regmap_write(spi->regmap, ATCSPI_TRANS_FMT, val); 236 + } 237 + 238 + static void atcspi_prepare_trans(struct atcspi_dev *spi, 239 + const struct spi_mem_op *op) 240 + { 241 + atcspi_set_trans_fmt(spi, op); 242 + atcspi_set_trans_ctl(spi, op); 243 + if (op->addr.nbytes) 244 + regmap_write(spi->regmap, ATCSPI_ADDR, op->addr.val); 245 + regmap_write(spi->regmap, ATCSPI_CMD, op->cmd.opcode); 246 + } 247 + 248 + static int atcspi_adjust_op_size(struct spi_mem *mem, struct spi_mem_op *op) 249 + { 250 + struct atcspi_dev *spi; 251 + 252 + spi = spi_controller_get_devdata(mem->spi->controller); 253 + op->data.nbytes = min(op->data.nbytes, ATCSPI_MAX_TRANS_LEN); 
254 + 255 + /* DMA needs to be aligned to 4 byte */ 256 + if (spi->use_dma && op->data.nbytes >= ATCSPI_DMA_THRESHOLD) 257 + op->data.nbytes = ALIGN_DOWN(op->data.nbytes, 4); 258 + 259 + return 0; 260 + } 261 + 262 + static int atcspi_dma_config(struct atcspi_dev *spi, bool is_rx) 263 + { 264 + struct dma_slave_config conf = { 0 }; 265 + struct dma_chan *chan; 266 + 267 + if (is_rx) { 268 + chan = spi->host->dma_rx; 269 + conf.direction = DMA_DEV_TO_MEM; 270 + conf.src_addr = spi->dma_addr; 271 + } else { 272 + chan = spi->host->dma_tx; 273 + conf.direction = DMA_MEM_TO_DEV; 274 + conf.dst_addr = spi->dma_addr; 275 + } 276 + conf.dst_maxburst = spi->rxfifo_size / 2; 277 + conf.src_maxburst = spi->txfifo_size / 2; 278 + 279 + if (spi->data_merge) { 280 + conf.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES; 281 + conf.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES; 282 + } else { 283 + conf.src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE; 284 + conf.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE; 285 + } 286 + 287 + return dmaengine_slave_config(chan, &conf); 288 + } 289 + 290 + static void atcspi_dma_callback(void *arg) 291 + { 292 + struct completion *dma_completion = arg; 293 + 294 + complete(dma_completion); 295 + } 296 + 297 + static int atcspi_dma_trans(struct atcspi_dev *spi, 298 + const struct spi_mem_op *op) 299 + { 300 + struct dma_async_tx_descriptor *desc; 301 + struct dma_chan *dma_ch; 302 + struct sg_table sgt; 303 + enum dma_transfer_direction dma_dir; 304 + dma_cookie_t cookie; 305 + unsigned int ctrl; 306 + int timeout; 307 + int ret; 308 + 309 + regmap_read(spi->regmap, ATCSPI_CTRL, &ctrl); 310 + ctrl |= CTRL_TX_DMA_EN | CTRL_RX_DMA_EN; 311 + regmap_write(spi->regmap, ATCSPI_CTRL, ctrl); 312 + if (op->data.dir == SPI_MEM_DATA_IN) { 313 + ret = atcspi_dma_config(spi, TRUE); 314 + dma_dir = DMA_DEV_TO_MEM; 315 + dma_ch = spi->host->dma_rx; 316 + } else { 317 + ret = atcspi_dma_config(spi, FALSE); 318 + dma_dir = DMA_MEM_TO_DEV; 319 + dma_ch = spi->host->dma_tx; 320 + } 321 + if (ret) 322 + return ret; 323 + 324 + ret = spi_controller_dma_map_mem_op_data(spi->host, op, &sgt); 325 + if (ret) 326 + return ret; 327 + 328 + desc = dmaengine_prep_slave_sg(dma_ch, sgt.sgl, sgt.nents, dma_dir, 329 + DMA_PREP_INTERRUPT | DMA_CTRL_ACK); 330 + if (!desc) { 331 + ret = -ENOMEM; 332 + goto exit_unmap; 333 + } 334 + 335 + reinit_completion(&spi->dma_completion); 336 + desc->callback = atcspi_dma_callback; 337 + desc->callback_param = &spi->dma_completion; 338 + cookie = dmaengine_submit(desc); 339 + ret = dma_submit_error(cookie); 340 + if (ret) 341 + goto exit_unmap; 342 + 343 + dma_async_issue_pending(dma_ch); 344 + timeout = msecs_to_jiffies(ATCSPI_XFER_TIMEOUT(op->data.nbytes)); 345 + if (!wait_for_completion_timeout(&spi->dma_completion, timeout)) { 346 + ret = -ETIMEDOUT; 347 + dmaengine_terminate_all(dma_ch); 348 + } 349 + 350 + exit_unmap: 351 + spi_controller_dma_unmap_mem_op_data(spi->host, op, &sgt); 352 + 353 + return ret; 354 + } 355 + 356 + static int atcspi_exec_mem_op(struct spi_mem *mem, const struct spi_mem_op *op) 357 + { 358 + struct spi_device *spi_dev = mem->spi; 359 + struct atcspi_dev *spi; 360 + unsigned int val; 361 + int ret; 362 + 363 + spi = spi_controller_get_devdata(spi_dev->controller); 364 + mutex_lock(&spi->mutex_lock); 365 + atcspi_prepare_trans(spi, op); 366 + if (op->data.nbytes) { 367 + if (spi->use_dma && op->data.nbytes >= ATCSPI_DMA_THRESHOLD) 368 + ret = atcspi_dma_trans(spi, op); 369 + else 370 + ret = atcspi_xfer_data_poll(spi, op); 371 + if (ret) { 372 + 
dev_info(spi->dev, "SPI transmission failed\n"); 373 + goto exec_mem_exit; 374 + } 375 + } 376 + 377 + ret = regmap_read_poll_timeout(spi->regmap, 378 + ATCSPI_STATUS, 379 + val, 380 + !(val & ATCSPI_ACTIVE), 381 + 0, 382 + ATCSPI_RDY_TIMEOUT_US); 383 + if (ret) 384 + dev_info(spi->dev, "Timed out waiting for ATCSPI_ACTIVE\n"); 385 + 386 + exec_mem_exit: 387 + mutex_unlock(&spi->mutex_lock); 388 + 389 + return ret; 390 + } 391 + 392 + static const struct spi_controller_mem_ops atcspi_mem_ops = { 393 + .exec_op = atcspi_exec_mem_op, 394 + .adjust_op_size = atcspi_adjust_op_size, 395 + }; 396 + 397 + static int atcspi_setup(struct atcspi_dev *spi) 398 + { 399 + unsigned int ctrl_val; 400 + unsigned int val; 401 + int actual_spi_sclk_f; 402 + int ret; 403 + unsigned char div; 404 + 405 + ctrl_val = CTRL_TX_FIFO_RST | CTRL_RX_FIFO_RST | CTRL_SPI_RST; 406 + regmap_write(spi->regmap, ATCSPI_CTRL, ctrl_val); 407 + ret = regmap_read_poll_timeout(spi->regmap, 408 + ATCSPI_CTRL, 409 + val, 410 + !(val & ctrl_val), 411 + 0, 412 + ATCSPI_RDY_TIMEOUT_US); 413 + if (ret) 414 + return dev_err_probe(spi->dev, ret, 415 + "Timed out waiting for ATCSPI_CTRL\n"); 416 + 417 + val = TRANS_FMT_DATA_LEN(ATCSPI_BITS_PER_UINT) | 418 + TRANS_FMT_CPHA | TRANS_FMT_CPOL; 419 + regmap_write(spi->regmap, ATCSPI_TRANS_FMT, val); 420 + 421 + regmap_read(spi->regmap, ATCSPI_CONFIG, &val); 422 + spi->txfifo_size = BIT(TXFIFO_SIZE(val) + 1); 423 + spi->rxfifo_size = BIT(RXFIFO_SIZE(val) + 1); 424 + 425 + regmap_read(spi->regmap, ATCSPI_TIMING, &val); 426 + val &= ~TIMING_SCLK_DIV_MASK; 427 + 428 + /* 429 + * The SCLK_DIV value 0xFF is special and indicates that the 430 + * SCLK rate should be the same as the SPI clock rate. 431 + */ 432 + if (spi->sclk_rate >= spi->clk_rate) { 433 + div = TIMING_SCLK_DIV_MASK; 434 + } else { 435 + /* 436 + * The divider value is determined as follows: 437 + * 1. If the divider can generate the exact target frequency, 438 + * use that setting. 439 + * 2. If an exact match is not possible, select the closest 440 + * available setting that is lower than the target frequency. 
441 + */ 442 + div = (spi->clk_rate + (spi->sclk_rate * 2 - 1)) / 443 + (spi->sclk_rate * 2) - 1; 444 + 445 + /* Check if the actual SPI clock is lower than the target */ 446 + actual_spi_sclk_f = spi->clk_rate / ((div + 1) * 2); 447 + if (actual_spi_sclk_f < spi->sclk_rate) 448 + dev_info(spi->dev, 449 + "Clock adjusted %d to %d due to divider limitation", 450 + spi->sclk_rate, actual_spi_sclk_f); 451 + 452 + if (div > TIMING_SCLK_DIV_MAX) 453 + return dev_err_probe(spi->dev, -EINVAL, 454 + "Unsupported SPI clock %d\n", 455 + spi->sclk_rate); 456 + } 457 + val |= div; 458 + regmap_write(spi->regmap, ATCSPI_TIMING, val); 459 + 460 + return ret; 461 + } 462 + 463 + static int atcspi_init_resources(struct platform_device *pdev, 464 + struct atcspi_dev *spi, 465 + struct resource **mem_res) 466 + { 467 + void __iomem *base; 468 + const struct regmap_config atcspi_regmap_cfg = { 469 + .name = "atcspi", 470 + .reg_bits = 32, 471 + .val_bits = 32, 472 + .cache_type = REGCACHE_NONE, 473 + .reg_stride = 4, 474 + .pad_bits = 0, 475 + .max_register = ATCSPI_CONFIG 476 + }; 477 + 478 + base = devm_platform_get_and_ioremap_resource(pdev, 0, mem_res); 479 + if (IS_ERR(base)) 480 + return dev_err_probe(spi->dev, PTR_ERR(base), 481 + "Failed to get ioremap resource\n"); 482 + 483 + spi->regmap = devm_regmap_init_mmio(spi->dev, base, 484 + &atcspi_regmap_cfg); 485 + if (IS_ERR(spi->regmap)) 486 + return dev_err_probe(spi->dev, PTR_ERR(spi->regmap), 487 + "Failed to init regmap\n"); 488 + 489 + spi->clk = devm_clk_get(spi->dev, NULL); 490 + if (IS_ERR(spi->clk)) 491 + return dev_err_probe(spi->dev, PTR_ERR(spi->clk), 492 + "Failed to get SPI clock\n"); 493 + 494 + spi->sclk_rate = ATCSPI_MAX_SPEED_HZ; 495 + return 0; 496 + } 497 + 498 + static int atcspi_configure_dma(struct atcspi_dev *spi) 499 + { 500 + struct dma_chan *dma_chan; 501 + int ret = 0; 502 + 503 + dma_chan = devm_dma_request_chan(spi->dev, "rx"); 504 + if (IS_ERR(dma_chan)) { 505 + ret = PTR_ERR(dma_chan); 506 + goto err_exit; 507 + } 508 + spi->host->dma_rx = dma_chan; 509 + 510 + dma_chan = devm_dma_request_chan(spi->dev, "tx"); 511 + if (IS_ERR(dma_chan)) { 512 + ret = PTR_ERR(dma_chan); 513 + goto free_rx; 514 + } 515 + spi->host->dma_tx = dma_chan; 516 + init_completion(&spi->dma_completion); 517 + 518 + return ret; 519 + 520 + free_rx: 521 + dma_release_channel(spi->host->dma_rx); 522 + spi->host->dma_rx = NULL; 523 + err_exit: 524 + return ret; 525 + } 526 + 527 + static int atcspi_enable_clk(struct atcspi_dev *spi) 528 + { 529 + int ret; 530 + 531 + ret = clk_prepare_enable(spi->clk); 532 + if (ret) 533 + return dev_err_probe(spi->dev, ret, 534 + "Failed to enable clock\n"); 535 + 536 + spi->clk_rate = clk_get_rate(spi->clk); 537 + if (!spi->clk_rate) 538 + return dev_err_probe(spi->dev, -EINVAL, 539 + "Failed to get SPI clock rate\n"); 540 + 541 + return 0; 542 + } 543 + 544 + static void atcspi_init_controller(struct platform_device *pdev, 545 + struct atcspi_dev *spi, 546 + struct spi_controller *host, 547 + struct resource *mem_res) 548 + { 549 + /* Get the physical address of the data register for DMA transfers. 
*/ 550 + spi->dma_addr = (dma_addr_t)(mem_res->start + ATCSPI_DATA); 551 + 552 + /* Initialize controller properties */ 553 + host->bus_num = pdev->id; 554 + host->mode_bits = SPI_CPOL | SPI_CPHA | SPI_RX_QUAD | SPI_TX_QUAD; 555 + host->num_chipselect = ATCSPI_MAX_CS_NUM; 556 + host->mem_ops = &atcspi_mem_ops; 557 + host->max_speed_hz = spi->sclk_rate; 558 + } 559 + 560 + static int atcspi_probe(struct platform_device *pdev) 561 + { 562 + struct spi_controller *host; 563 + struct atcspi_dev *spi; 564 + struct resource *mem_res; 565 + int ret; 566 + 567 + host = spi_alloc_host(&pdev->dev, sizeof(*spi)); 568 + if (!host) 569 + return -ENOMEM; 570 + 571 + spi = spi_controller_get_devdata(host); 572 + spi->host = host; 573 + spi->dev = &pdev->dev; 574 + dev_set_drvdata(&pdev->dev, host); 575 + 576 + ret = atcspi_init_resources(pdev, spi, &mem_res); 577 + if (ret) 578 + goto free_controller; 579 + 580 + ret = atcspi_enable_clk(spi); 581 + if (ret) 582 + goto free_controller; 583 + 584 + atcspi_init_controller(pdev, spi, host, mem_res); 585 + 586 + ret = atcspi_setup(spi); 587 + if (ret) 588 + goto disable_clk; 589 + 590 + ret = devm_spi_register_controller(&pdev->dev, host); 591 + if (ret) { 592 + dev_err_probe(spi->dev, ret, 593 + "Failed to register SPI controller\n"); 594 + goto disable_clk; 595 + } 596 + 597 + spi->use_dma = false; 598 + if (ATCSPI_DMA_SUPPORT) { 599 + ret = atcspi_configure_dma(spi); 600 + if (ret) 601 + dev_info(spi->dev, 602 + "Failed to init DMA, fallback to PIO mode\n"); 603 + else 604 + spi->use_dma = true; 605 + } 606 + mutex_init(&spi->mutex_lock); 607 + 608 + return 0; 609 + 610 + disable_clk: 611 + clk_disable_unprepare(spi->clk); 612 + 613 + free_controller: 614 + spi_controller_put(host); 615 + return ret; 616 + } 617 + 618 + static int atcspi_suspend(struct device *dev) 619 + { 620 + struct spi_controller *host = dev_get_drvdata(dev); 621 + struct atcspi_dev *spi = spi_controller_get_devdata(host); 622 + 623 + spi_controller_suspend(host); 624 + 625 + clk_disable_unprepare(spi->clk); 626 + 627 + return 0; 628 + } 629 + 630 + static int atcspi_resume(struct device *dev) 631 + { 632 + struct spi_controller *host = dev_get_drvdata(dev); 633 + struct atcspi_dev *spi = spi_controller_get_devdata(host); 634 + int ret; 635 + 636 + ret = clk_prepare_enable(spi->clk); 637 + if (ret) 638 + return ret; 639 + 640 + ret = atcspi_setup(spi); 641 + if (ret) 642 + goto disable_clk; 643 + 644 + ret = spi_controller_resume(host); 645 + if (ret) 646 + goto disable_clk; 647 + 648 + return ret; 649 + 650 + disable_clk: 651 + clk_disable_unprepare(spi->clk); 652 + 653 + return ret; 654 + } 655 + 656 + static DEFINE_SIMPLE_DEV_PM_OPS(atcspi_pm_ops, atcspi_suspend, atcspi_resume); 657 + 658 + static const struct of_device_id atcspi_of_match[] = { 659 + { .compatible = "andestech,qilai-spi", }, 660 + { .compatible = "andestech,ae350-spi", }, 661 + { /* sentinel */ } 662 + }; 663 + 664 + MODULE_DEVICE_TABLE(of, atcspi_of_match); 665 + 666 + static struct platform_driver atcspi_driver = { 667 + .probe = atcspi_probe, 668 + .driver = { 669 + .name = "atcspi200", 670 + .owner = THIS_MODULE, 671 + .of_match_table = atcspi_of_match, 672 + .pm = pm_sleep_ptr(&atcspi_pm_ops) 673 + } 674 + }; 675 + module_platform_driver(atcspi_driver); 676 + 677 + MODULE_AUTHOR("CL Wang <cl634@andestech.com>"); 678 + MODULE_DESCRIPTION("Andes ATCSPI200 SPI controller driver"); 679 + MODULE_LICENSE("GPL");
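For context, the SCLK divider comment in atcspi_setup() above amounts to a round-up division so the generated clock never exceeds the requested rate (with 0xFF reserved to mean "SCLK equals the SPI input clock"). The following user-space sketch only illustrates that arithmetic; the 100 MHz input clock and 7 MHz target are hypothetical values, not taken from the driver.

/*
 * Minimal sketch of the ATCSPI200 SCLK divider arithmetic shown above.
 * The clock rates below are assumptions chosen purely for illustration.
 */
#include <stdio.h>

int main(void)
{
	unsigned int clk_rate = 100000000;	/* SPI input clock (assumed) */
	unsigned int sclk_rate = 7000000;	/* requested SCLK (assumed) */
	unsigned int div, actual;

	/*
	 * Round the divider up so that SCLK = clk_rate / ((div + 1) * 2)
	 * is never higher than the requested frequency.
	 */
	div = (clk_rate + (sclk_rate * 2 - 1)) / (sclk_rate * 2) - 1;
	actual = clk_rate / ((div + 1) * 2);

	printf("div=%u -> SCLK=%u Hz\n", div, actual);	/* div=7 -> 6250000 Hz */
	return 0;
}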
-1
drivers/spi/spi-ath79.c
··· 180 180 } 181 181 182 182 sp = spi_controller_get_devdata(host); 183 - host->dev.of_node = pdev->dev.of_node; 184 183 platform_set_drvdata(pdev, sp); 185 184 186 185 host->use_gpio_descriptors = true;
-1
drivers/spi/spi-atmel.c
··· 1536 1536 host->use_gpio_descriptors = true; 1537 1537 host->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH; 1538 1538 host->bits_per_word_mask = SPI_BPW_RANGE_MASK(8, 16); 1539 - host->dev.of_node = pdev->dev.of_node; 1540 1539 host->bus_num = pdev->id; 1541 1540 host->num_chipselect = 4; 1542 1541 host->setup = atmel_spi_setup;
+141 -5
drivers/spi/spi-axi-spi-engine.c
··· 23 23 #include <linux/spi/spi.h> 24 24 #include <trace/events/spi.h> 25 25 26 + #define SPI_ENGINE_REG_DATA_WIDTH 0x0C 27 + #define SPI_ENGINE_REG_DATA_WIDTH_NUM_OF_SDIO_MASK GENMASK(23, 16) 28 + #define SPI_ENGINE_REG_DATA_WIDTH_MASK GENMASK(15, 0) 26 29 #define SPI_ENGINE_REG_OFFLOAD_MEM_ADDR_WIDTH 0x10 27 30 #define SPI_ENGINE_REG_RESET 0x40 28 31 ··· 78 75 #define SPI_ENGINE_CMD_REG_CLK_DIV 0x0 79 76 #define SPI_ENGINE_CMD_REG_CONFIG 0x1 80 77 #define SPI_ENGINE_CMD_REG_XFER_BITS 0x2 78 + #define SPI_ENGINE_CMD_REG_SDI_MASK 0x3 79 + #define SPI_ENGINE_CMD_REG_SDO_MASK 0x4 81 80 82 81 #define SPI_ENGINE_MISC_SYNC 0x0 83 82 #define SPI_ENGINE_MISC_SLEEP 0x1 ··· 109 104 /* default sizes - can be changed when SPI Engine firmware is compiled */ 110 105 #define SPI_ENGINE_OFFLOAD_CMD_FIFO_SIZE 16 111 106 #define SPI_ENGINE_OFFLOAD_SDO_FIFO_SIZE 16 107 + 108 + /* Extending SPI_MULTI_LANE_MODE values for optimizing messages. */ 109 + #define SPI_ENGINE_MULTI_BUS_MODE_UNKNOWN -1 110 + #define SPI_ENGINE_MULTI_BUS_MODE_CONFLICTING -2 112 111 113 112 struct spi_engine_program { 114 113 unsigned int length; ··· 151 142 unsigned long flags; 152 143 unsigned int offload_num; 153 144 unsigned int spi_mode_config; 145 + unsigned int multi_lane_mode; 146 + u8 rx_primary_lane_mask; 147 + u8 tx_primary_lane_mask; 148 + u8 rx_all_lanes_mask; 149 + u8 tx_all_lanes_mask; 154 150 u8 bits_per_word; 155 151 }; 156 152 ··· 178 164 u32 offload_caps; 179 165 bool offload_requires_sync; 180 166 }; 167 + 168 + static void spi_engine_primary_lane_flag(struct spi_device *spi, 169 + u8 *rx_lane_flags, u8 *tx_lane_flags) 170 + { 171 + *rx_lane_flags = BIT(spi->rx_lane_map[0]); 172 + *tx_lane_flags = BIT(spi->tx_lane_map[0]); 173 + } 174 + 175 + static void spi_engine_all_lanes_flags(struct spi_device *spi, 176 + u8 *rx_lane_flags, u8 *tx_lane_flags) 177 + { 178 + int i; 179 + 180 + for (i = 0; i < spi->num_rx_lanes; i++) 181 + *rx_lane_flags |= BIT(spi->rx_lane_map[i]); 182 + 183 + for (i = 0; i < spi->num_tx_lanes; i++) 184 + *tx_lane_flags |= BIT(spi->tx_lane_map[i]); 185 + } 181 186 182 187 static void spi_engine_program_add_cmd(struct spi_engine_program *p, 183 188 bool dry, uint16_t cmd) ··· 226 193 } 227 194 228 195 static void spi_engine_gen_xfer(struct spi_engine_program *p, bool dry, 229 - struct spi_transfer *xfer) 196 + struct spi_transfer *xfer, u32 num_lanes) 230 197 { 231 198 unsigned int len; 232 199 ··· 236 203 len = xfer->len / 2; 237 204 else 238 205 len = xfer->len / 4; 206 + 207 + if (xfer->multi_lane_mode == SPI_MULTI_LANE_MODE_STRIPE) 208 + len /= num_lanes; 239 209 240 210 while (len) { 241 211 unsigned int n = min(len, 256U); ··· 305 269 { 306 270 unsigned int clk_div, max_hz = msg->spi->controller->max_speed_hz; 307 271 struct spi_transfer *xfer; 272 + int multi_lane_mode = SPI_ENGINE_MULTI_BUS_MODE_UNKNOWN; 308 273 u8 min_bits_per_word = U8_MAX; 309 274 u8 max_bits_per_word = 0; 310 275 ··· 321 284 min_bits_per_word = min(min_bits_per_word, xfer->bits_per_word); 322 285 max_bits_per_word = max(max_bits_per_word, xfer->bits_per_word); 323 286 } 287 + 288 + if (xfer->rx_buf || xfer->offload_flags & SPI_OFFLOAD_XFER_RX_STREAM || 289 + xfer->tx_buf || xfer->offload_flags & SPI_OFFLOAD_XFER_TX_STREAM) { 290 + switch (xfer->multi_lane_mode) { 291 + case SPI_MULTI_LANE_MODE_SINGLE: 292 + case SPI_MULTI_LANE_MODE_STRIPE: 293 + break; 294 + default: 295 + /* Other modes, like mirror not supported */ 296 + return -EINVAL; 297 + } 298 + 299 + /* If all xfers have the same multi-lane mode, we can 
optimize. */ 300 + if (multi_lane_mode == SPI_ENGINE_MULTI_BUS_MODE_UNKNOWN) 301 + multi_lane_mode = xfer->multi_lane_mode; 302 + else if (multi_lane_mode != xfer->multi_lane_mode) 303 + multi_lane_mode = SPI_ENGINE_MULTI_BUS_MODE_CONFLICTING; 304 + } 324 305 } 325 306 326 307 /* ··· 352 297 priv->bits_per_word = min_bits_per_word; 353 298 else 354 299 priv->bits_per_word = 0; 300 + 301 + priv->multi_lane_mode = multi_lane_mode; 302 + spi_engine_primary_lane_flag(msg->spi, 303 + &priv->rx_primary_lane_mask, 304 + &priv->tx_primary_lane_mask); 305 + spi_engine_all_lanes_flags(msg->spi, 306 + &priv->rx_all_lanes_mask, 307 + &priv->tx_all_lanes_mask); 355 308 } 356 309 357 310 return 0; ··· 373 310 struct spi_engine_offload *priv; 374 311 struct spi_transfer *xfer; 375 312 int clk_div, new_clk_div, inst_ns; 313 + int prev_multi_lane_mode = SPI_MULTI_LANE_MODE_SINGLE; 376 314 bool keep_cs = false; 377 315 u8 bits_per_word = 0; 378 316 ··· 398 334 * in the same way. 399 335 */ 400 336 bits_per_word = priv->bits_per_word; 337 + prev_multi_lane_mode = priv->multi_lane_mode; 401 338 } else { 402 339 spi_engine_program_add_cmd(p, dry, 403 340 SPI_ENGINE_CMD_WRITE(SPI_ENGINE_CMD_REG_CONFIG, ··· 409 344 spi_engine_gen_cs(p, dry, spi, !xfer->cs_off); 410 345 411 346 list_for_each_entry(xfer, &msg->transfers, transfer_list) { 347 + if (xfer->rx_buf || xfer->offload_flags & SPI_OFFLOAD_XFER_RX_STREAM || 348 + xfer->tx_buf || xfer->offload_flags & SPI_OFFLOAD_XFER_TX_STREAM) { 349 + if (xfer->multi_lane_mode != prev_multi_lane_mode) { 350 + u8 tx_lane_flags, rx_lane_flags; 351 + 352 + if (xfer->multi_lane_mode == SPI_MULTI_LANE_MODE_STRIPE) 353 + spi_engine_all_lanes_flags(spi, &rx_lane_flags, 354 + &tx_lane_flags); 355 + else 356 + spi_engine_primary_lane_flag(spi, &rx_lane_flags, 357 + &tx_lane_flags); 358 + 359 + spi_engine_program_add_cmd(p, dry, 360 + SPI_ENGINE_CMD_WRITE(SPI_ENGINE_CMD_REG_SDI_MASK, 361 + rx_lane_flags)); 362 + spi_engine_program_add_cmd(p, dry, 363 + SPI_ENGINE_CMD_WRITE(SPI_ENGINE_CMD_REG_SDO_MASK, 364 + tx_lane_flags)); 365 + } 366 + prev_multi_lane_mode = xfer->multi_lane_mode; 367 + } 368 + 412 369 new_clk_div = host->max_speed_hz / xfer->effective_speed_hz; 413 370 if (new_clk_div != clk_div) { 414 371 clk_div = new_clk_div; ··· 447 360 bits_per_word)); 448 361 } 449 362 450 - spi_engine_gen_xfer(p, dry, xfer); 363 + spi_engine_gen_xfer(p, dry, xfer, spi->num_rx_lanes); 451 364 spi_engine_gen_sleep(p, dry, spi_delay_to_ns(&xfer->delay, xfer), 452 365 inst_ns, xfer->effective_speed_hz); 453 366 ··· 481 394 if (clk_div != 1) 482 395 spi_engine_program_add_cmd(p, dry, 483 396 SPI_ENGINE_CMD_WRITE(SPI_ENGINE_CMD_REG_CLK_DIV, 0)); 397 + 398 + /* Restore single lane mode unless offload disable will restore it later. 
*/ 399 + if (prev_multi_lane_mode == SPI_MULTI_LANE_MODE_STRIPE && 400 + (!msg->offload || priv->multi_lane_mode != SPI_MULTI_LANE_MODE_STRIPE)) { 401 + u8 rx_lane_flags, tx_lane_flags; 402 + 403 + spi_engine_primary_lane_flag(spi, &rx_lane_flags, &tx_lane_flags); 404 + 405 + spi_engine_program_add_cmd(p, dry, 406 + SPI_ENGINE_CMD_WRITE(SPI_ENGINE_CMD_REG_SDI_MASK, rx_lane_flags)); 407 + spi_engine_program_add_cmd(p, dry, 408 + SPI_ENGINE_CMD_WRITE(SPI_ENGINE_CMD_REG_SDO_MASK, tx_lane_flags)); 409 + } 484 410 } 485 411 486 412 static void spi_engine_xfer_next(struct spi_message *msg, ··· 899 799 writel_relaxed(SPI_ENGINE_CMD_CS_INV(spi_engine->cs_inv), 900 800 spi_engine->base + SPI_ENGINE_REG_CMD_FIFO); 901 801 802 + if (host->num_data_lanes > 1) { 803 + u8 rx_lane_flags, tx_lane_flags; 804 + 805 + spi_engine_primary_lane_flag(device, &rx_lane_flags, &tx_lane_flags); 806 + 807 + writel_relaxed(SPI_ENGINE_CMD_WRITE(SPI_ENGINE_CMD_REG_SDI_MASK, 808 + rx_lane_flags), 809 + spi_engine->base + SPI_ENGINE_REG_CMD_FIFO); 810 + writel_relaxed(SPI_ENGINE_CMD_WRITE(SPI_ENGINE_CMD_REG_SDO_MASK, 811 + tx_lane_flags), 812 + spi_engine->base + SPI_ENGINE_REG_CMD_FIFO); 813 + } 814 + 902 815 /* 903 816 * In addition to setting the flags, we have to do a CS assert command 904 817 * to make the new setting actually take effect. ··· 1015 902 priv->bits_per_word), 1016 903 spi_engine->base + SPI_ENGINE_REG_CMD_FIFO); 1017 904 905 + if (priv->multi_lane_mode == SPI_MULTI_LANE_MODE_STRIPE) { 906 + writel_relaxed(SPI_ENGINE_CMD_WRITE(SPI_ENGINE_CMD_REG_SDI_MASK, 907 + priv->rx_all_lanes_mask), 908 + spi_engine->base + SPI_ENGINE_REG_CMD_FIFO); 909 + writel_relaxed(SPI_ENGINE_CMD_WRITE(SPI_ENGINE_CMD_REG_SDO_MASK, 910 + priv->tx_all_lanes_mask), 911 + spi_engine->base + SPI_ENGINE_REG_CMD_FIFO); 912 + } 913 + 1018 914 writel_relaxed(SPI_ENGINE_CMD_SYNC(1), 1019 915 spi_engine->base + SPI_ENGINE_REG_CMD_FIFO); 1020 916 ··· 1051 929 reg &= ~SPI_ENGINE_OFFLOAD_CTRL_ENABLE; 1052 930 writel_relaxed(reg, spi_engine->base + 1053 931 SPI_ENGINE_REG_OFFLOAD_CTRL(priv->offload_num)); 932 + 933 + /* Restore single-lane mode. 
*/ 934 + if (priv->multi_lane_mode == SPI_MULTI_LANE_MODE_STRIPE) { 935 + writel_relaxed(SPI_ENGINE_CMD_WRITE(SPI_ENGINE_CMD_REG_SDI_MASK, 936 + priv->rx_primary_lane_mask), 937 + spi_engine->base + SPI_ENGINE_REG_CMD_FIFO); 938 + writel_relaxed(SPI_ENGINE_CMD_WRITE(SPI_ENGINE_CMD_REG_SDO_MASK, 939 + priv->tx_primary_lane_mask), 940 + spi_engine->base + SPI_ENGINE_REG_CMD_FIFO); 941 + } 1054 942 } 1055 943 1056 944 static struct dma_chan ··· 1105 973 { 1106 974 struct spi_engine *spi_engine; 1107 975 struct spi_controller *host; 1108 - unsigned int version; 976 + unsigned int version, data_width_reg_val; 1109 977 int irq, ret; 1110 978 1111 979 irq = platform_get_irq(pdev, 0); ··· 1174 1042 return PTR_ERR(spi_engine->base); 1175 1043 1176 1044 version = readl(spi_engine->base + ADI_AXI_REG_VERSION); 1177 - if (ADI_AXI_PCORE_VER_MAJOR(version) != 1) { 1045 + if (ADI_AXI_PCORE_VER_MAJOR(version) > 2) { 1178 1046 dev_err(&pdev->dev, "Unsupported peripheral version %u.%u.%u\n", 1179 1047 ADI_AXI_PCORE_VER_MAJOR(version), 1180 1048 ADI_AXI_PCORE_VER_MINOR(version), 1181 1049 ADI_AXI_PCORE_VER_PATCH(version)); 1182 1050 return -ENODEV; 1183 1051 } 1052 + 1053 + data_width_reg_val = readl(spi_engine->base + SPI_ENGINE_REG_DATA_WIDTH); 1184 1054 1185 1055 if (adi_axi_pcore_ver_gteq(version, 1, 1)) { 1186 1056 unsigned int sizes = readl(spi_engine->base + ··· 1214 1080 if (ret) 1215 1081 return ret; 1216 1082 1217 - host->dev.of_node = pdev->dev.of_node; 1218 1083 host->mode_bits = SPI_CPOL | SPI_CPHA | SPI_3WIRE; 1219 1084 host->bits_per_word_mask = SPI_BPW_RANGE_MASK(1, 32); 1220 1085 host->max_speed_hz = clk_get_rate(spi_engine->ref_clk) / 2; ··· 1230 1097 } 1231 1098 if (adi_axi_pcore_ver_gteq(version, 1, 3)) 1232 1099 host->mode_bits |= SPI_MOSI_IDLE_LOW | SPI_MOSI_IDLE_HIGH; 1100 + if (adi_axi_pcore_ver_gteq(version, 2, 0)) 1101 + host->num_data_lanes = FIELD_GET(SPI_ENGINE_REG_DATA_WIDTH_NUM_OF_SDIO_MASK, 1102 + data_width_reg_val); 1233 1103 1234 1104 if (host->max_speed_hz == 0) 1235 1105 return dev_err_probe(&pdev->dev, -EINVAL, "spi_clk rate is 0");
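To put the SDI/SDO lane-mask programming above in context, here is a hedged sketch (not part of this patch set) of how a client driver might request a striped transfer through the new multi_lane_mode field of struct spi_transfer. The function name and buffer handling are illustrative only; len is assumed to be the total byte count across all lanes (the controller above divides the transfer word count by the lane count in stripe mode), and the lane mapping itself comes from the new spi-{rx,tx}-lane-map device tree properties rather than from the client.

/* Hypothetical client-side usage sketch; only the multi_lane_mode field and
 * SPI_MULTI_LANE_MODE_STRIPE come from this series, the rest is standard
 * spi_sync() plumbing.
 */
#include <linux/spi/spi.h>

static int example_read_striped(struct spi_device *spi, void *buf,
				unsigned int len)
{
	struct spi_transfer xfer = {
		.rx_buf = buf,
		.len = len,		/* assumed: total across all lanes */
		.multi_lane_mode = SPI_MULTI_LANE_MODE_STRIPE,
	};
	struct spi_message msg;

	spi_message_init(&msg);
	spi_message_add_tail(&xfer, &msg);

	return spi_sync(spi, &msg);
}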
+1007
drivers/spi/spi-axiado.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-or-later 2 + // 3 + // Axiado SPI controller driver (Host mode only) 4 + // 5 + // Copyright (C) 2022-2025 Axiado Corporation (or its affiliates). 6 + // 7 + 8 + #include <linux/clk.h> 9 + #include <linux/delay.h> 10 + #include <linux/gpio/consumer.h> 11 + #include <linux/interrupt.h> 12 + #include <linux/io.h> 13 + #include <linux/module.h> 14 + #include <linux/of_irq.h> 15 + #include <linux/of_address.h> 16 + #include <linux/platform_device.h> 17 + #include <linux/pm_runtime.h> 18 + #include <linux/spi/spi.h> 19 + #include <linux/spi/spi-mem.h> 20 + #include <linux/sizes.h> 21 + 22 + #include "spi-axiado.h" 23 + 24 + /** 25 + * ax_spi_read - Register Read - 32 bit per word 26 + * @xspi: Pointer to the ax_spi structure 27 + * @offset: Register offset address 28 + * 29 + * @return: Returns the value of that register 30 + */ 31 + static inline u32 ax_spi_read(struct ax_spi *xspi, u32 offset) 32 + { 33 + return readl_relaxed(xspi->regs + offset); 34 + } 35 + 36 + /** 37 + * ax_spi_write - Register write - 32 bit per word 38 + * @xspi: Pointer to the ax_spi structure 39 + * @offset: Register offset address 40 + * @val: Value to write into that register 41 + */ 42 + static inline void ax_spi_write(struct ax_spi *xspi, u32 offset, u32 val) 43 + { 44 + writel_relaxed(val, xspi->regs + offset); 45 + } 46 + 47 + /** 48 + * ax_spi_write_b - Register Read - 8 bit per word 49 + * @xspi: Pointer to the ax_spi structure 50 + * @offset: Register offset address 51 + * @val: Value to write into that register 52 + */ 53 + static inline void ax_spi_write_b(struct ax_spi *xspi, u32 offset, u8 val) 54 + { 55 + writeb_relaxed(val, xspi->regs + offset); 56 + } 57 + 58 + /** 59 + * ax_spi_init_hw - Initialize the hardware and configure the SPI controller 60 + * @xspi: Pointer to the ax_spi structure 61 + * 62 + * * On reset the SPI controller is configured to be in host mode. 63 + * In host mode baud rate divisor is set to 4, threshold value for TX FIFO 64 + * not full interrupt is set to 1 and size of the word to be transferred as 8 bit. 65 + * 66 + * This function initializes the SPI controller to disable and clear all the 67 + * interrupts, enable manual target select and manual start, deselect all the 68 + * chip select lines, and enable the SPI controller. 
69 + */ 70 + static void ax_spi_init_hw(struct ax_spi *xspi) 71 + { 72 + u32 reg_value; 73 + 74 + /* Clear CR1 */ 75 + ax_spi_write(xspi, AX_SPI_CR1, AX_SPI_CR1_CLR); 76 + 77 + /* CR1 - CPO CHP MSS SCE SCR */ 78 + reg_value = ax_spi_read(xspi, AX_SPI_CR1); 79 + reg_value |= AX_SPI_CR1_SCR | AX_SPI_CR1_SCE; 80 + 81 + ax_spi_write(xspi, AX_SPI_CR1, reg_value); 82 + 83 + /* CR2 - MTE SRD SWD SSO */ 84 + reg_value = ax_spi_read(xspi, AX_SPI_CR2); 85 + reg_value |= AX_SPI_CR2_SWD | AX_SPI_CR2_SRD; 86 + 87 + ax_spi_write(xspi, AX_SPI_CR2, reg_value); 88 + 89 + /* CR3 - Reserverd bits S3W SDL */ 90 + ax_spi_write(xspi, AX_SPI_CR3, AX_SPI_CR3_SDL); 91 + 92 + /* SCDR - Reserved bits SCS SCD */ 93 + ax_spi_write(xspi, AX_SPI_SCDR, (AX_SPI_SCDR_SCS | AX_SPI_SCD_DEFAULT)); 94 + 95 + /* IMR */ 96 + ax_spi_write(xspi, AX_SPI_IMR, AX_SPI_IMR_CLR); 97 + 98 + /* ISR - Clear all the interrupt */ 99 + ax_spi_write(xspi, AX_SPI_ISR, AX_SPI_ISR_CLR); 100 + } 101 + 102 + /** 103 + * ax_spi_chipselect - Select or deselect the chip select line 104 + * @spi: Pointer to the spi_device structure 105 + * @is_high: Select(0) or deselect (1) the chip select line 106 + */ 107 + static void ax_spi_chipselect(struct spi_device *spi, bool is_high) 108 + { 109 + struct ax_spi *xspi = spi_controller_get_devdata(spi->controller); 110 + u32 ctrl_reg; 111 + 112 + ctrl_reg = ax_spi_read(xspi, AX_SPI_CR2); 113 + /* Reset the chip select */ 114 + ctrl_reg &= ~AX_SPI_DEFAULT_TS_MASK; 115 + ctrl_reg |= spi_get_chipselect(spi, 0); 116 + 117 + ax_spi_write(xspi, AX_SPI_CR2, ctrl_reg); 118 + } 119 + 120 + /** 121 + * ax_spi_config_clock_mode - Sets clock polarity and phase 122 + * @spi: Pointer to the spi_device structure 123 + * 124 + * Sets the requested clock polarity and phase. 125 + */ 126 + static void ax_spi_config_clock_mode(struct spi_device *spi) 127 + { 128 + struct ax_spi *xspi = spi_controller_get_devdata(spi->controller); 129 + u32 ctrl_reg, new_ctrl_reg; 130 + 131 + new_ctrl_reg = ax_spi_read(xspi, AX_SPI_CR1); 132 + ctrl_reg = new_ctrl_reg; 133 + 134 + /* Set the SPI clock phase and clock polarity */ 135 + new_ctrl_reg &= ~(AX_SPI_CR1_CPHA | AX_SPI_CR1_CPOL); 136 + if (spi->mode & SPI_CPHA) 137 + new_ctrl_reg |= AX_SPI_CR1_CPHA; 138 + if (spi->mode & SPI_CPOL) 139 + new_ctrl_reg |= AX_SPI_CR1_CPOL; 140 + 141 + if (new_ctrl_reg != ctrl_reg) 142 + ax_spi_write(xspi, AX_SPI_CR1, new_ctrl_reg); 143 + ax_spi_write(xspi, AX_SPI_CR1, 0x03); 144 + } 145 + 146 + /** 147 + * ax_spi_config_clock_freq - Sets clock frequency 148 + * @spi: Pointer to the spi_device structure 149 + * @transfer: Pointer to the spi_transfer structure which provides 150 + * information about next transfer setup parameters 151 + * 152 + * Sets the requested clock frequency. 153 + * Note: If the requested frequency is not an exact match with what can be 154 + * obtained using the prescalar value the driver sets the clock frequency which 155 + * is lower than the requested frequency (maximum lower) for the transfer. If 156 + * the requested frequency is higher or lower than that is supported by the SPI 157 + * controller the driver will set the highest or lowest frequency supported by 158 + * controller. 
159 + */ 160 + static void ax_spi_config_clock_freq(struct spi_device *spi, 161 + struct spi_transfer *transfer) 162 + { 163 + struct ax_spi *xspi = spi_controller_get_devdata(spi->controller); 164 + 165 + ax_spi_write(xspi, AX_SPI_SCDR, (AX_SPI_SCDR_SCS | AX_SPI_SCD_DEFAULT)); 166 + } 167 + 168 + /** 169 + * ax_spi_setup_transfer - Configure SPI controller for specified transfer 170 + * @spi: Pointer to the spi_device structure 171 + * @transfer: Pointer to the spi_transfer structure which provides 172 + * information about next transfer setup parameters 173 + * 174 + * Sets the operational mode of SPI controller for the next SPI transfer and 175 + * sets the requested clock frequency. 176 + * 177 + */ 178 + static void ax_spi_setup_transfer(struct spi_device *spi, 179 + struct spi_transfer *transfer) 180 + { 181 + struct ax_spi *xspi = spi_controller_get_devdata(spi->controller); 182 + 183 + ax_spi_config_clock_freq(spi, transfer); 184 + 185 + dev_dbg(&spi->dev, "%s, mode %d, %u bits/w, %u clock speed\n", 186 + __func__, spi->mode, spi->bits_per_word, 187 + xspi->speed_hz); 188 + } 189 + 190 + /** 191 + * ax_spi_fill_tx_fifo - Fills the TX FIFO with as many bytes as possible 192 + * @xspi: Pointer to the ax_spi structure 193 + */ 194 + static void ax_spi_fill_tx_fifo(struct ax_spi *xspi) 195 + { 196 + unsigned long trans_cnt = 0; 197 + 198 + while ((trans_cnt < xspi->tx_fifo_depth) && 199 + (xspi->tx_bytes > 0)) { 200 + /* When xspi in busy condition, bytes may send failed, 201 + * then spi control did't work thoroughly, add one byte delay 202 + */ 203 + if (ax_spi_read(xspi, AX_SPI_IVR) & AX_SPI_IVR_TFOV) 204 + usleep_range(10, 10); 205 + if (xspi->tx_buf) 206 + ax_spi_write_b(xspi, AX_SPI_TXFIFO, *xspi->tx_buf++); 207 + else 208 + ax_spi_write_b(xspi, AX_SPI_TXFIFO, 0); 209 + 210 + xspi->tx_bytes--; 211 + trans_cnt++; 212 + } 213 + } 214 + 215 + /** 216 + * ax_spi_get_rx_byte - Gets a byte from the RX FIFO buffer 217 + * @xspi: Controller private data (struct ax_spi *) 218 + * 219 + * This function handles the logic of extracting bytes from the 32-bit RX FIFO. 220 + * It reads a new 32-bit word from AX_SPI_RXFIFO only when the current buffered 221 + * word has been fully processed (all 4 bytes extracted). It then extracts 222 + * bytes one by one, assuming the controller is little-endian. 223 + * 224 + * Returns: The next 8-bit byte read from the RX FIFO stream. 225 + */ 226 + static u8 ax_spi_get_rx_byte_for_irq(struct ax_spi *xspi) 227 + { 228 + u8 byte_val; 229 + 230 + /* If all bytes from the current 32-bit word have been extracted, 231 + * read a new word from the hardware RX FIFO. 232 + */ 233 + if (xspi->bytes_left_in_current_rx_word_for_irq == 0) { 234 + xspi->current_rx_fifo_word_for_irq = ax_spi_read(xspi, AX_SPI_RXFIFO); 235 + xspi->bytes_left_in_current_rx_word_for_irq = 4; // A new 32-bit word has 4 bytes 236 + } 237 + 238 + /* Extract the least significant byte from the current 32-bit word */ 239 + byte_val = (u8)(xspi->current_rx_fifo_word_for_irq & 0xFF); 240 + 241 + /* Shift the word right by 8 bits to prepare the next byte for extraction */ 242 + xspi->current_rx_fifo_word_for_irq >>= 8; 243 + xspi->bytes_left_in_current_rx_word_for_irq--; 244 + 245 + return byte_val; 246 + } 247 + 248 + /** 249 + * Helper function to process received bytes and check for transfer completion. 250 + * This avoids code duplication and centralizes the completion logic. 251 + * Returns true if the transfer was finalized. 
252 + */ 253 + static bool ax_spi_process_rx_and_finalize(struct spi_controller *ctlr) 254 + { 255 + struct ax_spi *xspi = spi_controller_get_devdata(ctlr); 256 + 257 + /* Process any remaining bytes in the RX FIFO */ 258 + u32 avail_bytes = ax_spi_read(xspi, AX_SPI_RX_FBCAR); 259 + 260 + /* This loop handles bytes that are already staged from a previous word read */ 261 + while (xspi->bytes_left_in_current_rx_word_for_irq && 262 + (xspi->rx_copy_remaining || xspi->rx_discard)) { 263 + u8 b = ax_spi_get_rx_byte_for_irq(xspi); 264 + 265 + if (xspi->rx_discard) { 266 + xspi->rx_discard--; 267 + } else { 268 + *xspi->rx_buf++ = b; 269 + xspi->rx_copy_remaining--; 270 + } 271 + } 272 + 273 + /* This loop processes new words directly from the FIFO */ 274 + while (avail_bytes >= 4 && (xspi->rx_copy_remaining || xspi->rx_discard)) { 275 + /* This function should handle reading from the FIFO */ 276 + u8 b = ax_spi_get_rx_byte_for_irq(xspi); 277 + 278 + if (xspi->rx_discard) { 279 + xspi->rx_discard--; 280 + } else { 281 + *xspi->rx_buf++ = b; 282 + xspi->rx_copy_remaining--; 283 + } 284 + /* ax_spi_get_rx_byte_for_irq fetches a new word when needed 285 + * and updates internal state. 286 + */ 287 + if (xspi->bytes_left_in_current_rx_word_for_irq == 3) 288 + avail_bytes -= 4; 289 + } 290 + 291 + /* Completion Check: The transfer is truly complete if all expected 292 + * RX bytes have been copied or discarded. 293 + */ 294 + if (xspi->rx_copy_remaining == 0 && xspi->rx_discard == 0) { 295 + /* Defensive drain: If for some reason there are leftover bytes 296 + * in the HW FIFO after we've logically finished, 297 + * read and discard them to prevent them from corrupting the next transfer. 298 + * This should be a bounded operation. 299 + */ 300 + int safety_words = AX_SPI_RX_FIFO_DRAIN_LIMIT; // Limit to avoid getting stuck 301 + 302 + while (ax_spi_read(xspi, AX_SPI_RX_FBCAR) > 0 && safety_words-- > 0) 303 + ax_spi_read(xspi, AX_SPI_RXFIFO); 304 + 305 + /* Disable all interrupts for this transfer and finalize. */ 306 + ax_spi_write(xspi, AX_SPI_IMR, 0x00); 307 + spi_finalize_current_transfer(ctlr); 308 + return true; 309 + } 310 + 311 + return false; 312 + } 313 + 314 + /** 315 + * ax_spi_irq - Interrupt service routine of the SPI controller 316 + * @irq: IRQ number 317 + * @dev_id: Pointer to the xspi structure 318 + * 319 + * This function handles RX FIFO almost full and Host Transfer Completed interrupts only. 320 + * On RX FIFO amlost full interrupt this function reads the received data from RX FIFO and 321 + * fills the TX FIFO if there is any data remaining to be transferred. 322 + * On Host Transfer Completed interrupt this function indicates that transfer is completed, 323 + * the SPI subsystem will clear MTC bit. 324 + * 325 + * Return: IRQ_HANDLED when handled; IRQ_NONE otherwise. 326 + */ 327 + static irqreturn_t ax_spi_irq(int irq, void *dev_id) 328 + { 329 + struct spi_controller *ctlr = dev_id; 330 + struct ax_spi *xspi = spi_controller_get_devdata(ctlr); 331 + u32 intr_status; 332 + 333 + intr_status = ax_spi_read(xspi, AX_SPI_IVR); 334 + if (!intr_status) 335 + return IRQ_NONE; 336 + 337 + /* Handle "Message Transfer Complete" interrupt. 338 + * This means all bytes have been shifted out of the TX FIFO. 339 + * It's time to harvest the final incoming bytes from the RX FIFO. 340 + */ 341 + if (intr_status & AX_SPI_IVR_MTCV) { 342 + /* Clear the MTC interrupt flag immediately. 
*/ 343 + ax_spi_write(xspi, AX_SPI_ISR, AX_SPI_ISR_MTC); 344 + 345 + /* For a TX-only transfer, rx_buf would be NULL. 346 + * In the spi-core, rx_copy_remaining would be 0. 347 + * So we can finalize immediately. 348 + */ 349 + if (!xspi->rx_buf) { 350 + ax_spi_write(xspi, AX_SPI_IMR, 0x00); 351 + spi_finalize_current_transfer(ctlr); 352 + return IRQ_HANDLED; 353 + } 354 + /* For a full-duplex transfer, process any remaining RX data. 355 + * The helper function will handle finalization if everything is received. 356 + */ 357 + ax_spi_process_rx_and_finalize(ctlr); 358 + return IRQ_HANDLED; 359 + } 360 + 361 + /* Handle "RX FIFO Full / Threshold Met" interrupt. 362 + * This means we need to make space in the RX FIFO by reading from it. 363 + */ 364 + if (intr_status & AX_SPI_IVR_RFFV) { 365 + if (ax_spi_process_rx_and_finalize(ctlr)) { 366 + /* Transfer was finalized inside the helper, we are done. */ 367 + } else { 368 + /* RX is not yet complete. If there are still TX bytes to send 369 + * (for very long transfers), we can fill the TX FIFO again. 370 + */ 371 + if (xspi->tx_bytes) 372 + ax_spi_fill_tx_fifo(xspi); 373 + } 374 + return IRQ_HANDLED; 375 + } 376 + 377 + return IRQ_NONE; 378 + } 379 + 380 + static int ax_prepare_message(struct spi_controller *ctlr, 381 + struct spi_message *msg) 382 + { 383 + ax_spi_config_clock_mode(msg->spi); 384 + return 0; 385 + } 386 + 387 + /** 388 + * ax_transfer_one - Initiates the SPI transfer 389 + * @ctlr: Pointer to spi_controller structure 390 + * @spi: Pointer to the spi_device structure 391 + * @transfer: Pointer to the spi_transfer structure which provides 392 + * information about next transfer parameters 393 + * 394 + * This function fills the TX FIFO, starts the SPI transfer and 395 + * returns a positive transfer count so that core will wait for completion. 396 + * 397 + * Return: Number of bytes transferred in the last transfer 398 + */ 399 + static int ax_transfer_one(struct spi_controller *ctlr, 400 + struct spi_device *spi, 401 + struct spi_transfer *transfer) 402 + { 403 + struct ax_spi *xspi = spi_controller_get_devdata(ctlr); 404 + int drain_limit; 405 + 406 + /* Pre-transfer cleanup:Flush the RX FIFO to discard any stale data. 407 + * This is the crucial part. Before every new transfer, we must ensure 408 + * the HW is in a clean state to avoid processing stale data 409 + * from a previous, possibly failed or interrupted, transfer. 410 + */ 411 + drain_limit = AX_SPI_RX_FIFO_DRAIN_LIMIT; // Sane limit to prevent infinite loop on HW error 412 + while (ax_spi_read(xspi, AX_SPI_RX_FBCAR) > 0 && drain_limit-- > 0) 413 + ax_spi_read(xspi, AX_SPI_RXFIFO); // Read and discard 414 + 415 + if (drain_limit <= 0) 416 + dev_warn(&ctlr->dev, "RX FIFO drain timeout before transfer\n"); 417 + 418 + /* Clear any stale interrupt flags from a previous transfer. 419 + * This prevents an immediate, false interrupt trigger. 
420 + */ 421 + ax_spi_write(xspi, AX_SPI_ISR, AX_SPI_ISR_CLR); 422 + 423 + xspi->tx_buf = transfer->tx_buf; 424 + xspi->rx_buf = transfer->rx_buf; 425 + xspi->tx_bytes = transfer->len; 426 + xspi->rx_bytes = transfer->len; 427 + 428 + /* Reset RX 32-bit to byte buffer for each new transfer */ 429 + if (transfer->tx_buf && !transfer->rx_buf) { 430 + /* TX mode: discard all received data */ 431 + xspi->rx_discard = transfer->len; 432 + xspi->rx_copy_remaining = 0; 433 + } else if ((!transfer->tx_buf && transfer->rx_buf) || 434 + (transfer->tx_buf && transfer->rx_buf)) { 435 + /* RX mode: generate clock by filling TX FIFO with dummy bytes 436 + * Full-duplex mode: generate clock by filling TX FIFO 437 + */ 438 + xspi->rx_discard = 0; 439 + xspi->rx_copy_remaining = transfer->len; 440 + } else { 441 + /* No TX and RX */ 442 + xspi->rx_discard = 0; 443 + xspi->rx_copy_remaining = transfer->len; 444 + } 445 + 446 + ax_spi_setup_transfer(spi, transfer); 447 + ax_spi_fill_tx_fifo(xspi); 448 + ax_spi_write(xspi, AX_SPI_CR2, (AX_SPI_CR2_HTE | AX_SPI_CR2_SRD | AX_SPI_CR2_SWD)); 449 + 450 + ax_spi_write(xspi, AX_SPI_IMR, (AX_SPI_IMR_MTCM | AX_SPI_IMR_RFFM)); 451 + return transfer->len; 452 + } 453 + 454 + /** 455 + * ax_prepare_transfer_hardware - Prepares hardware for transfer. 456 + * @ctlr: Pointer to the spi_controller structure which provides 457 + * information about the controller. 458 + * 459 + * This function enables SPI host controller. 460 + * 461 + * Return: 0 always 462 + */ 463 + static int ax_prepare_transfer_hardware(struct spi_controller *ctlr) 464 + { 465 + struct ax_spi *xspi = spi_controller_get_devdata(ctlr); 466 + 467 + u32 reg_value; 468 + 469 + reg_value = ax_spi_read(xspi, AX_SPI_CR1); 470 + reg_value |= AX_SPI_CR1_SCE; 471 + 472 + ax_spi_write(xspi, AX_SPI_CR1, reg_value); 473 + 474 + return 0; 475 + } 476 + 477 + /** 478 + * ax_unprepare_transfer_hardware - Relaxes hardware after transfer 479 + * @ctlr: Pointer to the spi_controller structure which provides 480 + * information about the controller. 481 + * 482 + * This function disables the SPI host controller when no target selected. 483 + * 484 + * Return: 0 always 485 + */ 486 + static int ax_unprepare_transfer_hardware(struct spi_controller *ctlr) 487 + { 488 + struct ax_spi *xspi = spi_controller_get_devdata(ctlr); 489 + 490 + u32 reg_value; 491 + 492 + /* Disable the SPI if target is deselected */ 493 + reg_value = ax_spi_read(xspi, AX_SPI_CR1); 494 + reg_value &= ~AX_SPI_CR1_SCE; 495 + 496 + ax_spi_write(xspi, AX_SPI_CR1, reg_value); 497 + 498 + return 0; 499 + } 500 + 501 + /** 502 + * ax_spi_detect_fifo_depth - Detect the FIFO depth of the hardware 503 + * @xspi: Pointer to the ax_spi structure 504 + * 505 + * The depth of the TX FIFO is a synthesis configuration parameter of the SPI 506 + * IP. The FIFO threshold register is sized so that its maximum value can be the 507 + * FIFO size - 1. This is used to detect the size of the FIFO. 
508 + */ 509 + static void ax_spi_detect_fifo_depth(struct ax_spi *xspi) 510 + { 511 + /* The MSBs will get truncated giving us the size of the FIFO */ 512 + ax_spi_write(xspi, AX_SPI_TX_FAETR, ALMOST_EMPTY_TRESHOLD); 513 + xspi->tx_fifo_depth = FIFO_DEPTH; 514 + 515 + /* Set the threshold limit */ 516 + ax_spi_write(xspi, AX_SPI_TX_FAETR, ALMOST_EMPTY_TRESHOLD); 517 + ax_spi_write(xspi, AX_SPI_RX_FAFTR, ALMOST_FULL_TRESHOLD); 518 + } 519 + 520 + /* --- Internal Helper Function for 32-bit RX FIFO Read --- */ 521 + /** 522 + * ax_spi_get_rx_byte - Gets a byte from the RX FIFO buffer 523 + * @xspi: Controller private data (struct ax_spi *) 524 + * 525 + * This function handles the logic of extracting bytes from the 32-bit RX FIFO. 526 + * It reads a new 32-bit word from AX_SPI_RXFIFO only when the current buffered 527 + * word has been fully processed (all 4 bytes extracted). It then extracts 528 + * bytes one by one, assuming the controller is little-endian. 529 + * 530 + * Returns: The next 8-bit byte read from the RX FIFO stream. 531 + */ 532 + static u8 ax_spi_get_rx_byte(struct ax_spi *xspi) 533 + { 534 + u8 byte_val; 535 + 536 + /* If all bytes from the current 32-bit word have been extracted, 537 + * read a new word from the hardware RX FIFO. 538 + */ 539 + if (xspi->bytes_left_in_current_rx_word == 0) { 540 + xspi->current_rx_fifo_word = ax_spi_read(xspi, AX_SPI_RXFIFO); 541 + xspi->bytes_left_in_current_rx_word = 4; // A new 32-bit word has 4 bytes 542 + } 543 + 544 + /* Extract the least significant byte from the current 32-bit word */ 545 + byte_val = (u8)(xspi->current_rx_fifo_word & 0xFF); 546 + 547 + /* Shift the word right by 8 bits to prepare the next byte for extraction */ 548 + xspi->current_rx_fifo_word >>= 8; 549 + xspi->bytes_left_in_current_rx_word--; 550 + 551 + return byte_val; 552 + } 553 + 554 + static int ax_spi_mem_exec_op(struct spi_mem *mem, const struct spi_mem_op *op) 555 + { 556 + struct spi_device *spi = mem->spi; 557 + struct ax_spi *xspi = spi_controller_get_devdata(spi->controller); 558 + u32 reg_val; 559 + int ret = 0; 560 + u8 cmd_buf[AX_SPI_COMMAND_BUFFER_SIZE]; 561 + int cmd_len = 0; 562 + int i = 0, timeout = AX_SPI_TRX_FIFO_TIMEOUT; 563 + int bytes_to_discard_from_rx; 564 + u8 *rx_buf_ptr = (u8 *)op->data.buf.in; 565 + u8 *tx_buf_ptr = (u8 *)op->data.buf.out; 566 + u32 rx_count_reg = 0; 567 + 568 + dev_dbg(&spi->dev, 569 + "%s: cmd:%02x mode:%d.%d.%d.%d addr:%llx len:%d\n", 570 + __func__, op->cmd.opcode, op->cmd.buswidth, op->addr.buswidth, 571 + op->dummy.buswidth, op->data.buswidth, op->addr.val, 572 + op->data.nbytes); 573 + 574 + /* Validate operation parameters: Only 1-bit bus width supported */ 575 + if (op->cmd.buswidth != 1 || 576 + (op->addr.nbytes && op->addr.buswidth != 0 && 577 + op->addr.buswidth != 1) || 578 + (op->dummy.nbytes && op->dummy.buswidth != 0 && 579 + op->dummy.buswidth != 1) || 580 + (op->data.nbytes && op->data.buswidth != 1)) { 581 + dev_err(&spi->dev, "Unsupported bus width, only 1-bit bus width supported\n"); 582 + return -EOPNOTSUPP; 583 + } 584 + 585 + /* Initialize controller hardware */ 586 + ax_spi_init_hw(xspi); 587 + 588 + /* Assert chip select (pull low) */ 589 + ax_spi_chipselect(spi, false); 590 + 591 + /* Build command phase: Copy opcode to cmd_buf */ 592 + if (op->cmd.nbytes == 2) { 593 + cmd_buf[cmd_len++] = (op->cmd.opcode >> 8) & 0xFF; 594 + cmd_buf[cmd_len++] = op->cmd.opcode & 0xFF; 595 + } else { 596 + cmd_buf[cmd_len++] = op->cmd.opcode; 597 + } 598 + 599 + /* Put address bytes to cmd_buf */ 600 + 
if (op->addr.nbytes) { 601 + for (i = op->addr.nbytes - 1; i >= 0; i--) { 602 + cmd_buf[cmd_len] = (op->addr.val >> (i * 8)) & 0xFF; 603 + cmd_len++; 604 + } 605 + } 606 + 607 + /* Configure controller for desired operation mode (write/read) */ 608 + reg_val = ax_spi_read(xspi, AX_SPI_CR2); 609 + reg_val |= AX_SPI_CR2_SWD | AX_SPI_CR2_SRI | AX_SPI_CR2_SRD; 610 + ax_spi_write(xspi, AX_SPI_CR2, reg_val); 611 + 612 + /* Write command and address bytes to TX_FIFO */ 613 + for (i = 0; i < cmd_len; i++) 614 + ax_spi_write_b(xspi, AX_SPI_TXFIFO, cmd_buf[i]); 615 + 616 + /* Add dummy bytes (for clock generation) or actual data bytes to TX_FIFO */ 617 + if (op->data.dir == SPI_MEM_DATA_IN) { 618 + for (i = 0; i < op->dummy.nbytes; i++) 619 + ax_spi_write_b(xspi, AX_SPI_TXFIFO, 0x00); 620 + for (i = 0; i < op->data.nbytes; i++) 621 + ax_spi_write_b(xspi, AX_SPI_TXFIFO, 0x00); 622 + } else { 623 + for (i = 0; i < op->data.nbytes; i++) 624 + ax_spi_write_b(xspi, AX_SPI_TXFIFO, tx_buf_ptr[i]); 625 + } 626 + 627 + /* Start the SPI transmission */ 628 + reg_val = ax_spi_read(xspi, AX_SPI_CR2); 629 + reg_val |= AX_SPI_CR2_HTE; 630 + ax_spi_write(xspi, AX_SPI_CR2, reg_val); 631 + 632 + /* Wait for TX FIFO to become empty */ 633 + while (timeout-- > 0) { 634 + u32 tx_count_reg = ax_spi_read(xspi, AX_SPI_TX_FBCAR); 635 + 636 + if (tx_count_reg == 0) { 637 + udelay(1); 638 + break; 639 + } 640 + udelay(1); 641 + } 642 + 643 + /* Handle Data Reception (for read operations) */ 644 + if (op->data.dir == SPI_MEM_DATA_IN) { 645 + /* Reset the internal RX byte buffer for this new operation. 646 + * This ensures ax_spi_get_rx_byte starts fresh for each exec_op call. 647 + */ 648 + xspi->bytes_left_in_current_rx_word = 0; 649 + xspi->current_rx_fifo_word = 0; 650 + 651 + timeout = AX_SPI_TRX_FIFO_TIMEOUT; 652 + while (timeout-- > 0) { 653 + rx_count_reg = ax_spi_read(xspi, AX_SPI_RX_FBCAR); 654 + if (rx_count_reg >= op->data.nbytes) 655 + break; 656 + udelay(1); /* Small delay to prevent aggressive busy-waiting */ 657 + } 658 + 659 + if (timeout < 0) { 660 + ret = -ETIMEDOUT; 661 + goto out_unlock; 662 + } 663 + 664 + /* Calculate how many bytes we need to discard from the RX FIFO. 665 + * Since we set SRI, we only need to discard the address bytes and 666 + * dummy bytes from the RX FIFO. 
667 + */ 668 + bytes_to_discard_from_rx = op->addr.nbytes + op->dummy.nbytes; 669 + for (i = 0; i < bytes_to_discard_from_rx; i++) 670 + ax_spi_get_rx_byte(xspi); 671 + 672 + /* Read actual data bytes into op->data.buf.in */ 673 + for (i = 0; i < op->data.nbytes; i++) { 674 + *rx_buf_ptr = ax_spi_get_rx_byte(xspi); 675 + rx_buf_ptr++; 676 + } 677 + } else if (op->data.dir == SPI_MEM_DATA_OUT) { 678 + timeout = AX_SPI_TRX_FIFO_TIMEOUT; 679 + while (timeout-- > 0) { 680 + u32 tx_fifo_level = ax_spi_read(xspi, AX_SPI_TX_FBCAR); 681 + 682 + if (tx_fifo_level == 0) 683 + break; 684 + udelay(1); 685 + } 686 + if (timeout < 0) { 687 + ret = -ETIMEDOUT; 688 + goto out_unlock; 689 + } 690 + } 691 + 692 + out_unlock: 693 + /* Deassert chip select (pull high) */ 694 + ax_spi_chipselect(spi, true); 695 + 696 + return ret; 697 + } 698 + 699 + static int ax_spi_mem_adjust_op_size(struct spi_mem *mem, struct spi_mem_op *op) 700 + { 701 + struct spi_device *spi = mem->spi; 702 + struct ax_spi *xspi = spi_controller_get_devdata(spi->controller); 703 + size_t max_transfer_payload_bytes; 704 + size_t fifo_total_bytes; 705 + size_t protocol_overhead_bytes; 706 + 707 + fifo_total_bytes = xspi->tx_fifo_depth; 708 + /* Calculate protocol overhead bytes according to the real operation each time. */ 709 + protocol_overhead_bytes = op->cmd.nbytes + op->addr.nbytes + op->dummy.nbytes; 710 + 711 + /* Calculate the maximum data payload that can fit into the FIFO. */ 712 + if (fifo_total_bytes <= protocol_overhead_bytes) { 713 + max_transfer_payload_bytes = 0; 714 + dev_warn_once(&spi->dev, "SPI FIFO (%zu bytes) is too small for protocol overhead (%zu bytes)! Max data size forced to 0.\n", 715 + fifo_total_bytes, protocol_overhead_bytes); 716 + } else { 717 + max_transfer_payload_bytes = fifo_total_bytes - protocol_overhead_bytes; 718 + } 719 + 720 + /* Limit op->data.nbytes based on the calculated max payload and SZ_64K. 721 + * This is the value that spi-mem will then use to split requests. 722 + */ 723 + if (op->data.nbytes > max_transfer_payload_bytes) { 724 + op->data.nbytes = max_transfer_payload_bytes; 725 + dev_dbg(&spi->dev, "%s %d: op->data.nbytes adjusted to %u due to FIFO overhead\n", 726 + __func__, __LINE__, op->data.nbytes); 727 + } 728 + 729 + /* Also apply the overall max transfer size */ 730 + if (op->data.nbytes > SZ_64K) { 731 + op->data.nbytes = SZ_64K; 732 + dev_dbg(&spi->dev, "%s %d: op->data.nbytes adjusted to %u due to SZ_64K limit\n", 733 + __func__, __LINE__, op->data.nbytes); 734 + } 735 + 736 + return 0; 737 + } 738 + 739 + static const struct spi_controller_mem_ops ax_spi_mem_ops = { 740 + .exec_op = ax_spi_mem_exec_op, 741 + .adjust_op_size = ax_spi_mem_adjust_op_size, 742 + }; 743 + 744 + /** 745 + * ax_spi_probe - Probe method for the SPI driver 746 + * @pdev: Pointer to the platform_device structure 747 + * 748 + * This function initializes the driver data structures and the hardware. 
749 + * 750 + * Return: 0 on success and error value on error 751 + */ 752 + static int ax_spi_probe(struct platform_device *pdev) 753 + { 754 + int ret = 0, irq; 755 + struct spi_controller *ctlr; 756 + struct ax_spi *xspi; 757 + u32 num_cs; 758 + 759 + ctlr = devm_spi_alloc_host(&pdev->dev, sizeof(*xspi)); 760 + if (!ctlr) 761 + return -ENOMEM; 762 + 763 + xspi = spi_controller_get_devdata(ctlr); 764 + ctlr->dev.of_node = pdev->dev.of_node; 765 + platform_set_drvdata(pdev, ctlr); 766 + 767 + xspi->regs = devm_platform_ioremap_resource(pdev, 0); 768 + if (IS_ERR(xspi->regs)) { 769 + ret = PTR_ERR(xspi->regs); 770 + goto remove_ctlr; 771 + } 772 + 773 + xspi->pclk = devm_clk_get(&pdev->dev, "pclk"); 774 + if (IS_ERR(xspi->pclk)) { 775 + dev_err(&pdev->dev, "pclk clock not found.\n"); 776 + ret = PTR_ERR(xspi->pclk); 777 + goto remove_ctlr; 778 + } 779 + 780 + xspi->ref_clk = devm_clk_get(&pdev->dev, "ref"); 781 + if (IS_ERR(xspi->ref_clk)) { 782 + dev_err(&pdev->dev, "ref clock not found.\n"); 783 + ret = PTR_ERR(xspi->ref_clk); 784 + goto remove_ctlr; 785 + } 786 + 787 + ret = clk_prepare_enable(xspi->pclk); 788 + if (ret) { 789 + dev_err(&pdev->dev, "Unable to enable APB clock.\n"); 790 + goto remove_ctlr; 791 + } 792 + 793 + ret = clk_prepare_enable(xspi->ref_clk); 794 + if (ret) { 795 + dev_err(&pdev->dev, "Unable to enable device clock.\n"); 796 + goto clk_dis_apb; 797 + } 798 + 799 + pm_runtime_use_autosuspend(&pdev->dev); 800 + pm_runtime_set_autosuspend_delay(&pdev->dev, SPI_AUTOSUSPEND_TIMEOUT); 801 + pm_runtime_get_noresume(&pdev->dev); 802 + pm_runtime_set_active(&pdev->dev); 803 + pm_runtime_enable(&pdev->dev); 804 + 805 + ret = of_property_read_u32(pdev->dev.of_node, "num-cs", &num_cs); 806 + if (ret < 0) 807 + ctlr->num_chipselect = AX_SPI_DEFAULT_NUM_CS; 808 + else 809 + ctlr->num_chipselect = num_cs; 810 + 811 + ax_spi_detect_fifo_depth(xspi); 812 + 813 + xspi->current_rx_fifo_word = 0; 814 + xspi->bytes_left_in_current_rx_word = 0; 815 + 816 + /* Initialize IRQ-related variables */ 817 + xspi->bytes_left_in_current_rx_word_for_irq = 0; 818 + xspi->current_rx_fifo_word_for_irq = 0; 819 + 820 + /* SPI controller initializations */ 821 + ax_spi_init_hw(xspi); 822 + 823 + irq = platform_get_irq(pdev, 0); 824 + if (irq <= 0) { 825 + ret = -ENXIO; 826 + goto clk_dis_all; 827 + } 828 + 829 + ret = devm_request_irq(&pdev->dev, irq, ax_spi_irq, 830 + 0, pdev->name, ctlr); 831 + if (ret != 0) { 832 + ret = -ENXIO; 833 + dev_err(&pdev->dev, "request_irq failed\n"); 834 + goto clk_dis_all; 835 + } 836 + 837 + ctlr->use_gpio_descriptors = true; 838 + ctlr->prepare_transfer_hardware = ax_prepare_transfer_hardware; 839 + ctlr->prepare_message = ax_prepare_message; 840 + ctlr->transfer_one = ax_transfer_one; 841 + ctlr->unprepare_transfer_hardware = ax_unprepare_transfer_hardware; 842 + ctlr->set_cs = ax_spi_chipselect; 843 + ctlr->auto_runtime_pm = true; 844 + ctlr->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH; 845 + 846 + xspi->clk_rate = clk_get_rate(xspi->ref_clk); 847 + /* Set to default valid value */ 848 + ctlr->max_speed_hz = xspi->clk_rate / 2; 849 + xspi->speed_hz = ctlr->max_speed_hz; 850 + 851 + ctlr->bits_per_word_mask = SPI_BPW_MASK(8); 852 + 853 + pm_runtime_mark_last_busy(&pdev->dev); 854 + pm_runtime_put_autosuspend(&pdev->dev); 855 + 856 + ctlr->mem_ops = &ax_spi_mem_ops; 857 + 858 + ret = spi_register_controller(ctlr); 859 + if (ret) { 860 + dev_err(&pdev->dev, "spi_register_controller failed\n"); 861 + goto clk_dis_all; 862 + } 863 + 864 + return ret; 865 + 866 + 
clk_dis_all: 867 + pm_runtime_set_suspended(&pdev->dev); 868 + pm_runtime_disable(&pdev->dev); 869 + clk_disable_unprepare(xspi->ref_clk); 870 + clk_dis_apb: 871 + clk_disable_unprepare(xspi->pclk); 872 + remove_ctlr: 873 + spi_controller_put(ctlr); 874 + return ret; 875 + } 876 + 877 + /** 878 + * ax_spi_remove - Remove method for the SPI driver 879 + * @pdev: Pointer to the platform_device structure 880 + * 881 + * This function is called if a device is physically removed from the system or 882 + * if the driver module is being unloaded. It frees all resources allocated to 883 + * the device. 884 + */ 885 + static void ax_spi_remove(struct platform_device *pdev) 886 + { 887 + struct spi_controller *ctlr = platform_get_drvdata(pdev); 888 + struct ax_spi *xspi = spi_controller_get_devdata(ctlr); 889 + 890 + spi_unregister_controller(ctlr); 891 + 892 + pm_runtime_set_suspended(&pdev->dev); 893 + pm_runtime_disable(&pdev->dev); 894 + 895 + clk_disable_unprepare(xspi->ref_clk); 896 + clk_disable_unprepare(xspi->pclk); 897 + } 898 + 899 + /** 900 + * ax_spi_suspend - Suspend method for the SPI driver 901 + * @dev: Address of the platform_device structure 902 + * 903 + * This function disables the SPI controller and 904 + * changes the driver state to "suspend" 905 + * 906 + * Return: 0 on success and error value on error 907 + */ 908 + static int __maybe_unused ax_spi_suspend(struct device *dev) 909 + { 910 + struct spi_controller *ctlr = dev_get_drvdata(dev); 911 + 912 + return spi_controller_suspend(ctlr); 913 + } 914 + 915 + /** 916 + * ax_spi_resume - Resume method for the SPI driver 917 + * @dev: Address of the platform_device structure 918 + * 919 + * This function changes the driver state to "ready" 920 + * 921 + * Return: 0 on success and error value on error 922 + */ 923 + static int __maybe_unused ax_spi_resume(struct device *dev) 924 + { 925 + struct spi_controller *ctlr = dev_get_drvdata(dev); 926 + struct ax_spi *xspi = spi_controller_get_devdata(ctlr); 927 + 928 + ax_spi_init_hw(xspi); 929 + return spi_controller_resume(ctlr); 930 + } 931 + 932 + /** 933 + * ax_spi_runtime_resume - Runtime resume method for the SPI driver 934 + * @dev: Address of the platform_device structure 935 + * 936 + * This function enables the clocks 937 + * 938 + * Return: 0 on success and error value on error 939 + */ 940 + static int __maybe_unused ax_spi_runtime_resume(struct device *dev) 941 + { 942 + struct spi_controller *ctlr = dev_get_drvdata(dev); 943 + struct ax_spi *xspi = spi_controller_get_devdata(ctlr); 944 + int ret; 945 + 946 + ret = clk_prepare_enable(xspi->pclk); 947 + if (ret) { 948 + dev_err(dev, "Cannot enable APB clock.\n"); 949 + return ret; 950 + } 951 + 952 + ret = clk_prepare_enable(xspi->ref_clk); 953 + if (ret) { 954 + dev_err(dev, "Cannot enable device clock.\n"); 955 + clk_disable_unprepare(xspi->pclk); 956 + return ret; 957 + } 958 + return 0; 959 + } 960 + 961 + /** 962 + * ax_spi_runtime_suspend - Runtime suspend method for the SPI driver 963 + * @dev: Address of the platform_device structure 964 + * 965 + * This function disables the clocks 966 + * 967 + * Return: Always 0 968 + */ 969 + static int __maybe_unused ax_spi_runtime_suspend(struct device *dev) 970 + { 971 + struct spi_controller *ctlr = dev_get_drvdata(dev); 972 + struct ax_spi *xspi = spi_controller_get_devdata(ctlr); 973 + 974 + clk_disable_unprepare(xspi->ref_clk); 975 + clk_disable_unprepare(xspi->pclk); 976 + 977 + return 0; 978 + } 979 + 980 + static const struct dev_pm_ops ax_spi_dev_pm_ops = { 981 + 
SET_RUNTIME_PM_OPS(ax_spi_runtime_suspend, 982 + ax_spi_runtime_resume, NULL) 983 + SET_SYSTEM_SLEEP_PM_OPS(ax_spi_suspend, ax_spi_resume) 984 + }; 985 + 986 + static const struct of_device_id ax_spi_of_match[] = { 987 + { .compatible = "axiado,ax3000-spi" }, 988 + { /* end of table */ } 989 + }; 990 + MODULE_DEVICE_TABLE(of, ax_spi_of_match); 991 + 992 + /* ax_spi_driver - This structure defines the SPI subsystem platform driver */ 993 + static struct platform_driver ax_spi_driver = { 994 + .probe = ax_spi_probe, 995 + .remove = ax_spi_remove, 996 + .driver = { 997 + .name = AX_SPI_NAME, 998 + .of_match_table = ax_spi_of_match, 999 + .pm = &ax_spi_dev_pm_ops, 1000 + }, 1001 + }; 1002 + 1003 + module_platform_driver(ax_spi_driver); 1004 + 1005 + MODULE_AUTHOR("Axiado Corporation"); 1006 + MODULE_DESCRIPTION("Axiado SPI Host driver"); 1007 + MODULE_LICENSE("GPL");
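ax_spi_mem_adjust_op_size() above sizes the data phase so that opcode, address, dummy and data bytes all fit in the TX FIFO, with a further 64 KiB cap. Below is a minimal user-space sketch of that arithmetic, assuming the driver's fixed 256-byte FIFO and a hypothetical 1-byte opcode / 3-byte address / 1-byte dummy operation.

/*
 * Sketch of the payload sizing done by ax_spi_mem_adjust_op_size() above;
 * the op shape in main() is an assumption for illustration only.
 */
#include <stdio.h>

#define FIFO_DEPTH	256		/* matches the driver's TX FIFO depth */
#define MAX_OP_SIZE	(64 * 1024)	/* overall 64 KiB cap */

static unsigned int max_payload(unsigned int cmd, unsigned int addr,
				unsigned int dummy)
{
	unsigned int overhead = cmd + addr + dummy;

	if (FIFO_DEPTH <= overhead)
		return 0;
	if (FIFO_DEPTH - overhead > MAX_OP_SIZE)
		return MAX_OP_SIZE;
	return FIFO_DEPTH - overhead;
}

int main(void)
{
	/* e.g. 1 opcode byte, 3 address bytes, 1 dummy byte */
	printf("max data bytes: %u\n", max_payload(1, 3, 1));	/* 251 */
	return 0;
}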
+133
drivers/spi/spi-axiado.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-or-later */ 2 + /* 3 + * Axiado SPI controller driver (Host mode only) 4 + * 5 + * Copyright (C) 2022-2025 Axiado Corporation (or its affiliates). 6 + */ 7 + 8 + #ifndef SPI_AXIADO_H 9 + #define SPI_AXIADO_H 10 + 11 + /* Name of this driver */ 12 + #define AX_SPI_NAME "axiado-db-spi" 13 + 14 + /* Axiado - SPI Digital Blocks IP design registers */ 15 + #define AX_SPI_TX_FAETR 0x18 // TX-FAETR 16 + #define ALMOST_EMPTY_TRESHOLD 0x00 // Programmed threshold value 17 + #define AX_SPI_RX_FAFTR 0x28 // RX-FAETR 18 + #define ALMOST_FULL_TRESHOLD 0x0c // Programmed threshold value 19 + #define FIFO_DEPTH 256 // 256 bytes 20 + 21 + #define AX_SPI_CR1 0x00 // CR1 22 + #define AX_SPI_CR1_CLR 0x00 // CR1 - Clear 23 + #define AX_SPI_CR1_SCR 0x01 // CR1 - controller reset 24 + #define AX_SPI_CR1_SCE 0x02 // CR1 - Controller Enable/Disable 25 + #define AX_SPI_CR1_CPHA 0x08 // CR1 - CPH 26 + #define AX_SPI_CR1_CPOL 0x10 // CR1 - CPO 27 + 28 + #define AX_SPI_CR2 0x04 // CR2 29 + #define AX_SPI_CR2_SWD 0x04 // CR2 - Write Enabel/Disable 30 + #define AX_SPI_CR2_SRD 0x08 // CR2 - Read Enable/Disable 31 + #define AX_SPI_CR2_SRI 0x10 // CR2 - Read First Byte Ignore 32 + #define AX_SPI_CR2_HTE 0x40 // CR2 - Host Transmit Enable 33 + #define AX_SPI_CR3 0x08 // CR3 34 + #define AX_SPI_CR3_SDL 0x00 // CR3 - Data lines 35 + #define AX_SPI_CR3_QUAD 0x02 // CR3 - Data lines 36 + 37 + /* As per Digital Blocks datasheet clock frequency range 38 + * Min - 244KHz 39 + * Max - 62.5MHz 40 + * SCK Clock Divider Register Values 41 + */ 42 + #define AX_SPI_RX_FBCAR 0x24 // RX_FBCAR 43 + #define AX_SPI_TX_FBCAR 0x14 // TX_FBCAR 44 + #define AX_SPI_SCDR 0x2c // SCDR 45 + #define AX_SPI_SCD_MIN 0x1fe // Valid SCD (SCK Clock Divider Register) 46 + #define AX_SPI_SCD_DEFAULT 0x06 // Default SCD (SCK Clock Divider Register) 47 + #define AX_SPI_SCD_MAX 0x00 // Valid SCD (SCK Clock Divider Register) 48 + #define AX_SPI_SCDR_SCS 0x0200 // SCDR - AMBA Bus Clock source 49 + 50 + #define AX_SPI_IMR 0x34 // IMR 51 + #define AX_SPI_IMR_CLR 0x00 // IMR - Clear 52 + #define AX_SPI_IMR_TFOM 0x02 // IMR - TFO 53 + #define AX_SPI_IMR_MTCM 0x40 // IMR - MTC 54 + #define AX_SPI_IMR_TFEM 0x10 // IMR - TFE 55 + #define AX_SPI_IMR_RFFM 0x20 // IMR - RFFM 56 + 57 + #define AX_SPI_ISR 0x30 // ISR 58 + #define AX_SPI_ISR_CLR 0xff // ISR - Clear 59 + #define AX_SPI_ISR_MTC 0x40 // ISR - MTC 60 + #define AX_SPI_ISR_TFE 0x10 // ISR - TFE 61 + #define AX_SPI_ISR_RFF 0x20 // ISR - RFF 62 + 63 + #define AX_SPI_IVR 0x38 // IVR 64 + #define AX_SPI_IVR_TFOV 0x02 // IVR - TFOV 65 + #define AX_SPI_IVR_MTCV 0x40 // IVR - MTCV 66 + #define AX_SPI_IVR_TFEV 0x10 // IVR - TFEV 67 + #define AX_SPI_IVR_RFFV 0x20 // IVR - RFFV 68 + 69 + #define AX_SPI_TXFIFO 0x0c // TX_FIFO 70 + #define AX_SPI_TX_RX_FBCR 0x10 // TX_RX_FBCR 71 + #define AX_SPI_RXFIFO 0x1c // RX_FIFO 72 + 73 + #define AX_SPI_TS0 0x00 // Target select 0 74 + #define AX_SPI_TS1 0x01 // Target select 1 75 + #define AX_SPI_TS2 0x10 // Target select 2 76 + #define AX_SPI_TS3 0x11 // Target select 3 77 + 78 + #define SPI_AUTOSUSPEND_TIMEOUT 3000 79 + 80 + /* Default number of chip select lines also used as maximum number of chip select lines */ 81 + #define AX_SPI_DEFAULT_NUM_CS 4 82 + 83 + /* Default number of command buffer size */ 84 + #define AX_SPI_COMMAND_BUFFER_SIZE 16 //Command + address bytes 85 + 86 + /* Target select mask 87 + * 00 – TS0 88 + * 01 – TS1 89 + * 10 – TS2 90 + * 11 – TS3 91 + */ 92 + #define AX_SPI_DEFAULT_TS_MASK 0x03 93 + 94 + #define 
AX_SPI_RX_FIFO_DRAIN_LIMIT 24 95 + #define AX_SPI_TRX_FIFO_TIMEOUT 1000 96 + /** 97 + * struct ax_spi - This definition defines spi driver instance 98 + * @regs: Virtual address of the SPI controller registers 99 + * @ref_clk: Pointer to the peripheral clock 100 + * @pclk: Pointer to the APB clock 101 + * @speed_hz: Current SPI bus clock speed in Hz 102 + * @txbuf: Pointer to the TX buffer 103 + * @rxbuf: Pointer to the RX buffer 104 + * @tx_bytes: Number of bytes left to transfer 105 + * @rx_bytes: Number of bytes requested 106 + * @tx_fifo_depth: Depth of the TX FIFO 107 + * @current_rx_fifo_word: Buffers the 32-bit word read from RXFIFO 108 + * @bytes_left_in_current_rx_word: Bytes to be extracted from current 32-bit word 109 + * @current_rx_fifo_word_for_irq: Buffers the 32-bit word read from RXFIFO for IRQ 110 + * @bytes_left_in_current_rx_word_for_irq: IRQ bytes to be extracted from current 32-bit word 111 + * @rx_discard: Number of bytes to discard 112 + * @rx_copy_remaining: Number of bytes to copy 113 + */ 114 + struct ax_spi { 115 + void __iomem *regs; 116 + struct clk *ref_clk; 117 + struct clk *pclk; 118 + unsigned int clk_rate; 119 + u32 speed_hz; 120 + const u8 *tx_buf; 121 + u8 *rx_buf; 122 + int tx_bytes; 123 + int rx_bytes; 124 + unsigned int tx_fifo_depth; 125 + u32 current_rx_fifo_word; 126 + int bytes_left_in_current_rx_word; 127 + u32 current_rx_fifo_word_for_irq; 128 + int bytes_left_in_current_rx_word_for_irq; 129 + int rx_discard; 130 + int rx_copy_remaining; 131 + }; 132 + 133 + #endif /* SPI_AXIADO_H */
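The current_rx_fifo_word / bytes_left_in_current_rx_word pair documented above backs ax_spi_get_rx_byte() in spi-axiado.c, which unpacks each 32-bit RX FIFO read into a little-endian byte stream. A small user-space sketch of that unpacking follows; the fifo[] array is a made-up stand-in for reads of the AX_SPI_RXFIFO register.

/*
 * Sketch of the 32-bit-word-to-byte unpacking used by ax_spi_get_rx_byte().
 * The words[] data is illustrative only.
 */
#include <stdio.h>
#include <stdint.h>

struct rx_state {
	uint32_t current_word;	/* buffered 32-bit RX FIFO word */
	int bytes_left;		/* bytes still to extract from it */
};

static uint8_t get_rx_byte(struct rx_state *st, const uint32_t **fifo)
{
	uint8_t b;

	if (st->bytes_left == 0) {
		st->current_word = *(*fifo)++;	/* models an RXFIFO read */
		st->bytes_left = 4;
	}
	b = st->current_word & 0xff;		/* least significant byte first */
	st->current_word >>= 8;
	st->bytes_left--;
	return b;
}

int main(void)
{
	static const uint32_t words[] = { 0x44332211, 0x00000055 };
	const uint32_t *fifo = words;
	struct rx_state st = { 0 };
	int i;

	for (i = 0; i < 5; i++)
		printf("%02x ", get_rx_byte(&st, &fifo));	/* 11 22 33 44 55 */
	printf("\n");
	return 0;
}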
-1
drivers/spi/spi-bcm-qspi.c
··· 1529 1529 host->transfer_one = bcm_qspi_transfer_one; 1530 1530 host->mem_ops = &bcm_qspi_mem_ops; 1531 1531 host->cleanup = bcm_qspi_cleanup; 1532 - host->dev.of_node = dev->of_node; 1533 1532 host->num_chipselect = NUM_CHIPSELECT; 1534 1533 host->use_gpio_descriptors = true; 1535 1534
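This removal of the explicit of_node assignment is the first of many: the same one-liner disappears from most drivers further down (bcm2835, bcm2835aux, bcm63xx, cadence, cavium, clps711x, davinci, falcon, the fsl drivers, geni-qcom, plus the device_set_node() variants in dw-core, dln2 and ep93xx). Presumably the SPI core now attaches the parent's firmware node to the controller device during allocation, making the per-driver assignment redundant; that core change is not visible in these hunks, but its effect would amount to something like:

	/* assumed core behaviour during controller allocation */
	device_set_node(&ctlr->dev, dev_fwnode(dev));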
-1
drivers/spi/spi-bcm2835.c
··· 1368 1368 ctlr->transfer_one = bcm2835_spi_transfer_one; 1369 1369 ctlr->handle_err = bcm2835_spi_handle_err; 1370 1370 ctlr->prepare_message = bcm2835_spi_prepare_message; 1371 - ctlr->dev.of_node = pdev->dev.of_node; 1372 1371 1373 1372 bs = spi_controller_get_devdata(ctlr); 1374 1373 bs->ctlr = ctlr;
-1
drivers/spi/spi-bcm2835aux.c
··· 502 502 host->handle_err = bcm2835aux_spi_handle_err; 503 503 host->prepare_message = bcm2835aux_spi_prepare_message; 504 504 host->unprepare_message = bcm2835aux_spi_unprepare_message; 505 - host->dev.of_node = pdev->dev.of_node; 506 505 host->use_gpio_descriptors = true; 507 506 508 507 bs = spi_controller_get_devdata(host);
+41 -24
drivers/spi/spi-bcm63xx-hsspi.c
··· 142 142 u32 wait_mode; 143 143 u32 xfer_mode; 144 144 u32 prepend_cnt; 145 + u32 md_start; 145 146 u8 *prepend_buf; 146 147 }; 147 148 ··· 269 268 { 270 269 271 270 struct bcm63xx_hsspi *bs = spi_controller_get_devdata(host); 272 - bool tx_only = false; 271 + bool tx_only = false, multidata = false; 273 272 struct spi_transfer *t; 274 273 275 274 /* 276 275 * Multiple transfers within a message may be combined into one transfer 277 276 * to the controller using its prepend feature. A SPI message is prependable 278 277 * only if the following are all true: 279 - * 1. One or more half duplex write transfer in single bit mode 280 - * 2. Optional full duplex read/write at the end 281 - * 3. No delay and cs_change between transfers 278 + * 1. One or more half duplex write transfers at the start 279 + * 2. Optional switch from single to dual bit within the write transfers 280 + * 3. Optional full duplex read/write at the end if all single bit 281 + * 4. No delay and cs_change between transfers 282 282 */ 283 283 bs->prepend_cnt = 0; 284 + bs->md_start = 0; 284 285 list_for_each_entry(t, &msg->transfers, transfer_list) { 285 286 if ((spi_delay_to_ns(&t->delay, t) > 0) || t->cs_change) { 286 287 bcm63xx_prepend_printk_on_checkfail(bs, ··· 300 297 return false; 301 298 } 302 299 303 - if (t->tx_nbits > SPI_NBITS_SINGLE && 304 - !list_is_last(&t->transfer_list, &msg->transfers)) { 300 + if (t->tx_nbits == SPI_NBITS_SINGLE && 301 + !list_is_last(&t->transfer_list, &msg->transfers) && 302 + multidata) { 305 303 bcm63xx_prepend_printk_on_checkfail(bs, 306 - "multi-bit prepend buf not supported!\n"); 304 + "single-bit after multi-bit not supported!\n"); 307 305 return false; 308 306 } 309 307 310 - if (t->tx_nbits == SPI_NBITS_SINGLE) { 311 - memcpy(bs->prepend_buf + bs->prepend_cnt, t->tx_buf, t->len); 312 - bs->prepend_cnt += t->len; 313 - } 308 + if (t->tx_nbits > SPI_NBITS_SINGLE) 309 + multidata = true; 310 + 311 + memcpy(bs->prepend_buf + bs->prepend_cnt, t->tx_buf, t->len); 312 + bs->prepend_cnt += t->len; 313 + 314 + if (t->tx_nbits == SPI_NBITS_SINGLE) 315 + bs->md_start += t->len; 316 + 314 317 } else { 315 318 if (!list_is_last(&t->transfer_list, &msg->transfers)) { 316 319 bcm63xx_prepend_printk_on_checkfail(bs, 317 320 "rx/tx_rx transfer not supported when it is not last one!\n"); 321 + return false; 322 + } 323 + 324 + if (t->rx_buf && t->rx_nbits == SPI_NBITS_SINGLE && 325 + multidata) { 326 + bcm63xx_prepend_printk_on_checkfail(bs, 327 + "single-bit after multi-bit not supported!\n"); 318 328 return false; 319 329 } 320 330 } ··· 335 319 if (list_is_last(&t->transfer_list, &msg->transfers)) { 336 320 memcpy(t_prepend, t, sizeof(struct spi_transfer)); 337 321 338 - if (tx_only && t->tx_nbits == SPI_NBITS_SINGLE) { 322 + if (tx_only) { 339 323 /* 340 - * if the last one is also a single bit tx only transfer, merge 324 + * if the last one is also a tx only transfer, merge 341 325 * all of them into one single tx transfer 342 326 */ 343 327 t_prepend->len = bs->prepend_cnt; ··· 345 329 bs->prepend_cnt = 0; 346 330 } else { 347 331 /* 348 - * if the last one is not a tx only transfer or dual tx xfer, all 332 + * if the last one is not a tx only transfer, all 349 333 * the previous transfers are sent through prepend bytes and 350 334 * make sure it does not exceed the max prepend len 351 335 */ ··· 354 338 "exceed max prepend len, abort prepending transfers!\n"); 355 339 return false; 356 340 } 341 + } 342 + /* 343 + * If switching from single-bit to multi-bit, make sure 344 + * the start 
offset does not exceed the maximum 345 + */ 346 + if (multidata && bs->md_start > HSSPI_MAX_PREPEND_LEN) { 347 + bcm63xx_prepend_printk_on_checkfail(bs, 348 + "exceed max multi-bit offset, abort prepending transfers!\n"); 349 + return false; 357 350 } 358 351 } 359 352 } ··· 406 381 407 382 if (t->rx_nbits == SPI_NBITS_DUAL) { 408 383 reg |= 1 << MODE_CTRL_MULTIDATA_RD_SIZE_SHIFT; 409 - reg |= bs->prepend_cnt << MODE_CTRL_MULTIDATA_RD_STRT_SHIFT; 384 + reg |= bs->md_start << MODE_CTRL_MULTIDATA_RD_STRT_SHIFT; 410 385 } 411 386 if (t->tx_nbits == SPI_NBITS_DUAL) { 412 387 reg |= 1 << MODE_CTRL_MULTIDATA_WR_SIZE_SHIFT; 413 - reg |= bs->prepend_cnt << MODE_CTRL_MULTIDATA_WR_STRT_SHIFT; 388 + reg |= bs->md_start << MODE_CTRL_MULTIDATA_WR_STRT_SHIFT; 414 389 } 415 390 } 416 391 ··· 717 692 if (!spi_mem_default_supports_op(mem, op)) 718 693 return false; 719 694 720 - /* Controller doesn't support spi mem dual io mode */ 721 - if ((op->cmd.opcode == SPINOR_OP_READ_1_2_2) || 722 - (op->cmd.opcode == SPINOR_OP_READ_1_2_2_4B) || 723 - (op->cmd.opcode == SPINOR_OP_READ_1_2_2_DTR) || 724 - (op->cmd.opcode == SPINOR_OP_READ_1_2_2_DTR_4B)) 725 - return false; 726 - 727 695 return true; 728 696 } 729 697 ··· 822 804 init_completion(&bs->done); 823 805 824 806 host->mem_ops = &bcm63xx_hsspi_mem_ops; 825 - host->dev.of_node = dev->of_node; 826 807 if (!dev->of_node) 827 808 host->bus_num = HSSPI_BUS_NUM; 828 809
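With the relaxed checks above, a message that starts with one or more single-bit writes and then switches to a dual-bit data phase can be collapsed into a single controller transfer, with md_start recording where the multi-bit portion begins. From a client driver's point of view such a message might be built as follows (a minimal sketch; buffer names and lengths are invented for the example):

	struct spi_transfer xfers[] = {
		{
			/* opcode + address, clocked out on one data line */
			.tx_buf = cmd_buf,
			.len = 5,
			.tx_nbits = SPI_NBITS_SINGLE,
		},
		{
			/* payload, clocked out on two data lines */
			.tx_buf = data_buf,
			.len = 256,
			.tx_nbits = SPI_NBITS_DUAL,
		},
	};
	struct spi_message msg;

	spi_message_init_with_transfers(&msg, xfers, ARRAY_SIZE(xfers));
	ret = spi_sync(spi, &msg);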
-1
drivers/spi/spi-bcm63xx.c
··· 571 571 goto out_err; 572 572 } 573 573 574 - host->dev.of_node = dev->of_node; 575 574 host->bus_num = bus_num; 576 575 host->num_chipselect = num_cs; 577 576 host->transfer_one_message = bcm63xx_spi_transfer_one;
-1
drivers/spi/spi-bcmbca-hsspi.c
··· 500 500 mutex_init(&bs->msg_mutex); 501 501 init_completion(&bs->done); 502 502 503 - host->dev.of_node = dev->of_node; 504 503 if (!dev->of_node) 505 504 host->bus_num = HSSPI_BUS_NUM; 506 505
+139 -155
drivers/spi/spi-cadence-quadspi.c
··· 40 40 #define CQSPI_DISABLE_DAC_MODE BIT(1) 41 41 #define CQSPI_SUPPORT_EXTERNAL_DMA BIT(2) 42 42 #define CQSPI_NO_SUPPORT_WR_COMPLETION BIT(3) 43 - #define CQSPI_SLOW_SRAM BIT(4) 43 + #define CQSPI_SLOW_SRAM BIT(4) 44 44 #define CQSPI_NEEDS_APB_AHB_HAZARD_WAR BIT(5) 45 45 #define CQSPI_RD_NO_IRQ BIT(6) 46 46 #define CQSPI_DMA_SET_MASK BIT(7) 47 47 #define CQSPI_SUPPORT_DEVICE_RESET BIT(8) 48 48 #define CQSPI_DISABLE_STIG_MODE BIT(9) 49 49 #define CQSPI_DISABLE_RUNTIME_PM BIT(10) 50 + #define CQSPI_NO_INDIRECT_MODE BIT(11) 51 + #define CQSPI_HAS_WR_PROTECT BIT(12) 50 52 51 53 /* Capabilities */ 52 54 #define CQSPI_SUPPORTS_OCTAL BIT(0) ··· 57 55 #define CQSPI_OP_WIDTH(part) ((part).nbytes ? ilog2((part).buswidth) : 0) 58 56 59 57 enum { 60 - CLK_QSPI_APB = 0, 58 + CLK_QSPI_REF = 0, 59 + CLK_QSPI_APB, 61 60 CLK_QSPI_AHB, 62 61 CLK_QSPI_NUM, 63 62 }; ··· 79 76 struct cqspi_st { 80 77 struct platform_device *pdev; 81 78 struct spi_controller *host; 82 - struct clk *clk; 83 - struct clk *clks[CLK_QSPI_NUM]; 79 + struct clk_bulk_data clks[CLK_QSPI_NUM]; 84 80 unsigned int sclk; 85 81 86 82 void __iomem *iobase; ··· 110 108 bool apb_ahb_hazard; 111 109 112 110 bool is_jh7110; /* Flag for StarFive JH7110 SoC */ 111 + bool is_rzn1; /* Flag for Renesas RZ/N1 SoC */ 113 112 bool disable_stig_mode; 114 113 refcount_t refcount; 115 114 refcount_t inflight_ops; ··· 124 121 int (*indirect_read_dma)(struct cqspi_flash_pdata *f_pdata, 125 122 u_char *rxbuf, loff_t from_addr, size_t n_rx); 126 123 u32 (*get_dma_status)(struct cqspi_st *cqspi); 127 - int (*jh7110_clk_init)(struct platform_device *pdev, 128 - struct cqspi_st *cqspi); 129 124 }; 130 125 131 126 /* Operation timeout value */ ··· 219 218 220 219 #define CQSPI_REG_IRQSTATUS 0x40 221 220 #define CQSPI_REG_IRQMASK 0x44 221 + 222 + #define CQSPI_REG_WR_PROT_CTRL 0x58 222 223 223 224 #define CQSPI_REG_INDIRECTRD 0x60 224 225 #define CQSPI_REG_INDIRECTRD_START_MASK BIT(0) ··· 377 374 /* Clear interrupt */ 378 375 writel(irq_status, cqspi->iobase + CQSPI_REG_IRQSTATUS); 379 376 380 - if (cqspi->use_dma_read && ddata && ddata->get_dma_status) { 381 - if (ddata->get_dma_status(cqspi)) { 382 - complete(&cqspi->transfer_complete); 383 - return IRQ_HANDLED; 384 - } 385 - } 386 - 387 - else if (!cqspi->slow_sram) 388 - irq_status &= CQSPI_IRQ_MASK_RD | CQSPI_IRQ_MASK_WR; 389 - else 377 + if (cqspi->use_dma_read && ddata && ddata->get_dma_status) 378 + irq_status = ddata->get_dma_status(cqspi); 379 + else if (cqspi->slow_sram) 390 380 irq_status &= CQSPI_IRQ_MASK_RD_SLOW_SRAM | CQSPI_IRQ_MASK_WR; 381 + else 382 + irq_status &= CQSPI_IRQ_MASK_RD | CQSPI_IRQ_MASK_WR; 391 383 392 384 if (irq_status) 393 385 complete(&cqspi->transfer_complete); ··· 1261 1263 1262 1264 reg = readl(reg_base + CQSPI_REG_CONFIG); 1263 1265 reg &= ~(CQSPI_REG_CONFIG_BAUD_MASK << CQSPI_REG_CONFIG_BAUD_LSB); 1264 - reg |= (div & CQSPI_REG_CONFIG_BAUD_MASK) << CQSPI_REG_CONFIG_BAUD_LSB; 1266 + reg |= div << CQSPI_REG_CONFIG_BAUD_LSB; 1265 1267 writel(reg, reg_base + CQSPI_REG_CONFIG); 1266 1268 } 1267 1269 ··· 1338 1340 * mode. So, we can not use direct mode when in DTR mode for writing 1339 1341 * data. 
1340 1342 */ 1341 - if (!op->cmd.dtr && cqspi->use_direct_mode && 1342 - cqspi->use_direct_mode_wr && ((to + len) <= cqspi->ahb_size)) { 1343 + if ((!op->cmd.dtr && cqspi->use_direct_mode && 1344 + cqspi->use_direct_mode_wr && ((to + len) <= cqspi->ahb_size)) || 1345 + (cqspi->ddata && cqspi->ddata->quirks & CQSPI_NO_INDIRECT_MODE)) { 1343 1346 memcpy_toio(cqspi->ahb_base + to, buf, len); 1344 1347 return cqspi_wait_idle(cqspi); 1345 1348 } ··· 1429 1430 if (ret) 1430 1431 return ret; 1431 1432 1432 - if (cqspi->use_direct_mode && ((from + len) <= cqspi->ahb_size)) 1433 + if ((cqspi->use_direct_mode && ((from + len) <= cqspi->ahb_size)) || 1434 + (cqspi->ddata && cqspi->ddata->quirks & CQSPI_NO_INDIRECT_MODE)) 1433 1435 return cqspi_direct_read_execute(f_pdata, buf, from, len); 1434 1436 1435 1437 if (cqspi->use_dma_read && ddata && ddata->indirect_read_dma && ··· 1514 1514 static bool cqspi_supports_mem_op(struct spi_mem *mem, 1515 1515 const struct spi_mem_op *op) 1516 1516 { 1517 + struct cqspi_st *cqspi = spi_controller_get_devdata(mem->spi->controller); 1517 1518 bool all_true, all_false; 1518 1519 1519 1520 /* ··· 1536 1535 if (op->addr.nbytes && op->addr.buswidth != 8) 1537 1536 return false; 1538 1537 if (op->data.nbytes && op->data.buswidth != 8) 1538 + return false; 1539 + 1540 + /* A single opcode is supported, it will be repeated */ 1541 + if ((op->cmd.opcode >> 8) != (op->cmd.opcode & 0xFF)) 1542 + return false; 1543 + 1544 + if (cqspi->is_rzn1) 1539 1545 return false; 1540 1546 } else if (!all_false) { 1541 1547 /* Mixed DTR modes are not supported. */ ··· 1597 1589 1598 1590 cqspi->is_decoded_cs = of_property_read_bool(np, "cdns,is-decoded-cs"); 1599 1591 1600 - if (of_property_read_u32(np, "cdns,fifo-depth", &cqspi->fifo_depth)) { 1601 - /* Zero signals FIFO depth should be runtime detected. */ 1602 - cqspi->fifo_depth = 0; 1603 - } 1592 + if (!(cqspi->ddata && cqspi->ddata->quirks & CQSPI_NO_INDIRECT_MODE)) { 1593 + if (of_property_read_u32(np, "cdns,fifo-depth", &cqspi->fifo_depth)) { 1594 + /* Zero signals FIFO depth should be runtime detected. */ 1595 + cqspi->fifo_depth = 0; 1596 + } 1604 1597 1605 - if (of_property_read_u32(np, "cdns,fifo-width", &cqspi->fifo_width)) { 1606 - dev_err(dev, "couldn't determine fifo-width\n"); 1607 - return -ENXIO; 1608 - } 1598 + if (of_property_read_u32(np, "cdns,fifo-width", &cqspi->fifo_width)) 1599 + cqspi->fifo_width = 4; 1609 1600 1610 - if (of_property_read_u32(np, "cdns,trigger-address", 1611 - &cqspi->trigger_address)) { 1612 - dev_err(dev, "couldn't determine trigger-address\n"); 1613 - return -ENXIO; 1601 + if (of_property_read_u32(np, "cdns,trigger-address", 1602 + &cqspi->trigger_address)) { 1603 + dev_err(dev, "couldn't determine trigger-address\n"); 1604 + return -ENXIO; 1605 + } 1614 1606 } 1615 1607 1616 1608 if (of_property_read_u32(np, "num-cs", &cqspi->num_chipselect)) ··· 1635 1627 /* Disable all interrupts. */ 1636 1628 writel(0, cqspi->iobase + CQSPI_REG_IRQMASK); 1637 1629 1638 - /* Configure the SRAM split to 1:1 . */ 1639 - writel(cqspi->fifo_depth / 2, cqspi->iobase + CQSPI_REG_SRAMPARTITION); 1630 + if (!(cqspi->ddata && cqspi->ddata->quirks & CQSPI_NO_INDIRECT_MODE)) { 1631 + /* Configure the SRAM split to 1:1 . */ 1632 + writel(cqspi->fifo_depth / 2, cqspi->iobase + CQSPI_REG_SRAMPARTITION); 1633 + /* Load indirect trigger address. */ 1634 + writel(cqspi->trigger_address, 1635 + cqspi->iobase + CQSPI_REG_INDIRECTTRIGGER); 1640 1636 1641 - /* Load indirect trigger address. 
*/ 1642 - writel(cqspi->trigger_address, 1643 - cqspi->iobase + CQSPI_REG_INDIRECTTRIGGER); 1637 + /* Program read watermark -- 1/2 of the FIFO. */ 1638 + writel(cqspi->fifo_depth * cqspi->fifo_width / 2, 1639 + cqspi->iobase + CQSPI_REG_INDIRECTRDWATERMARK); 1640 + /* Program write watermark -- 1/8 of the FIFO. */ 1641 + writel(cqspi->fifo_depth * cqspi->fifo_width / 8, 1642 + cqspi->iobase + CQSPI_REG_INDIRECTWRWATERMARK); 1643 + } 1644 1644 1645 - /* Program read watermark -- 1/2 of the FIFO. */ 1646 - writel(cqspi->fifo_depth * cqspi->fifo_width / 2, 1647 - cqspi->iobase + CQSPI_REG_INDIRECTRDWATERMARK); 1648 - /* Program write watermark -- 1/8 of the FIFO. */ 1649 - writel(cqspi->fifo_depth * cqspi->fifo_width / 8, 1650 - cqspi->iobase + CQSPI_REG_INDIRECTWRWATERMARK); 1645 + /* Disable write protection at controller level */ 1646 + if (cqspi->ddata && cqspi->ddata->quirks & CQSPI_HAS_WR_PROTECT) 1647 + writel(0, cqspi->iobase + CQSPI_REG_WR_PROT_CTRL); 1651 1648 1652 1649 /* Disable direct access controller */ 1653 1650 if (!cqspi->use_direct_mode) { ··· 1673 1660 { 1674 1661 struct device *dev = &cqspi->pdev->dev; 1675 1662 u32 reg, fifo_depth; 1663 + 1664 + if (cqspi->ddata && cqspi->ddata->quirks & CQSPI_NO_INDIRECT_MODE) 1665 + return; 1676 1666 1677 1667 /* 1678 1668 * Bits N-1:0 are writable while bits 31:N are read as zero, with 2^N ··· 1780 1764 return 0; 1781 1765 } 1782 1766 1783 - static int cqspi_jh7110_clk_init(struct platform_device *pdev, struct cqspi_st *cqspi) 1784 - { 1785 - static struct clk_bulk_data qspiclk[] = { 1786 - { .id = "apb" }, 1787 - { .id = "ahb" }, 1788 - }; 1789 - 1790 - int ret = 0; 1791 - 1792 - ret = devm_clk_bulk_get(&pdev->dev, ARRAY_SIZE(qspiclk), qspiclk); 1793 - if (ret) { 1794 - dev_err(&pdev->dev, "%s: failed to get qspi clocks\n", __func__); 1795 - return ret; 1796 - } 1797 - 1798 - cqspi->clks[CLK_QSPI_APB] = qspiclk[0].clk; 1799 - cqspi->clks[CLK_QSPI_AHB] = qspiclk[1].clk; 1800 - 1801 - ret = clk_prepare_enable(cqspi->clks[CLK_QSPI_APB]); 1802 - if (ret) { 1803 - dev_err(&pdev->dev, "%s: failed to enable CLK_QSPI_APB\n", __func__); 1804 - return ret; 1805 - } 1806 - 1807 - ret = clk_prepare_enable(cqspi->clks[CLK_QSPI_AHB]); 1808 - if (ret) { 1809 - dev_err(&pdev->dev, "%s: failed to enable CLK_QSPI_AHB\n", __func__); 1810 - goto disable_apb_clk; 1811 - } 1812 - 1813 - cqspi->is_jh7110 = true; 1814 - 1815 - return 0; 1816 - 1817 - disable_apb_clk: 1818 - clk_disable_unprepare(cqspi->clks[CLK_QSPI_APB]); 1819 - 1820 - return ret; 1821 - } 1822 - 1823 - static void cqspi_jh7110_disable_clk(struct platform_device *pdev, struct cqspi_st *cqspi) 1824 - { 1825 - clk_disable_unprepare(cqspi->clks[CLK_QSPI_AHB]); 1826 - clk_disable_unprepare(cqspi->clks[CLK_QSPI_APB]); 1827 - } 1828 1767 static int cqspi_probe(struct platform_device *pdev) 1829 1768 { 1830 1769 const struct cqspi_driver_platdata *ddata; ··· 1788 1817 struct spi_controller *host; 1789 1818 struct resource *res_ahb; 1790 1819 struct cqspi_st *cqspi; 1791 - int ret; 1792 - int irq; 1820 + int ret, irq; 1793 1821 1794 1822 host = devm_spi_alloc_host(&pdev->dev, sizeof(*cqspi)); 1795 1823 if (!host) ··· 1797 1827 host->mode_bits = SPI_RX_QUAD | SPI_RX_DUAL; 1798 1828 host->mem_ops = &cqspi_mem_ops; 1799 1829 host->mem_caps = &cqspi_mem_caps; 1800 - host->dev.of_node = pdev->dev.of_node; 1801 1830 1802 1831 cqspi = spi_controller_get_devdata(host); 1832 + if (of_device_is_compatible(pdev->dev.of_node, "starfive,jh7110-qspi")) 1833 + cqspi->is_jh7110 = true; 1834 + if 
(of_device_is_compatible(pdev->dev.of_node, "renesas,rzn1-qspi")) 1835 + cqspi->is_rzn1 = true; 1803 1836 1804 1837 cqspi->pdev = pdev; 1805 1838 cqspi->host = host; 1806 - cqspi->is_jh7110 = false; 1807 1839 cqspi->ddata = ddata = of_device_get_match_data(dev); 1808 1840 platform_set_drvdata(pdev, cqspi); 1809 1841 ··· 1816 1844 return -ENODEV; 1817 1845 } 1818 1846 1819 - /* Obtain QSPI clock. */ 1820 - cqspi->clk = devm_clk_get(dev, NULL); 1821 - if (IS_ERR(cqspi->clk)) { 1822 - dev_err(dev, "Cannot claim QSPI clock.\n"); 1823 - ret = PTR_ERR(cqspi->clk); 1847 + ret = cqspi_setup_flash(cqspi); 1848 + if (ret) { 1849 + dev_err(dev, "failed to setup flash parameters %d\n", ret); 1824 1850 return ret; 1851 + } 1852 + 1853 + /* Obtain QSPI clocks. */ 1854 + ret = devm_clk_bulk_get_optional(dev, CLK_QSPI_NUM, cqspi->clks); 1855 + if (ret) 1856 + return dev_err_probe(dev, ret, "Failed to get clocks\n"); 1857 + 1858 + if (!cqspi->clks[CLK_QSPI_REF].clk) { 1859 + dev_err(dev, "Cannot claim mandatory QSPI ref clock.\n"); 1860 + return -ENODEV; 1825 1861 } 1826 1862 1827 1863 /* Obtain and remap controller address. */ ··· 1861 1881 if (ret) 1862 1882 return ret; 1863 1883 1864 - 1865 - ret = clk_prepare_enable(cqspi->clk); 1884 + ret = clk_bulk_prepare_enable(CLK_QSPI_NUM, cqspi->clks); 1866 1885 if (ret) { 1867 - dev_err(dev, "Cannot enable QSPI clock.\n"); 1868 - goto probe_clk_failed; 1886 + dev_err(dev, "Cannot enable QSPI clocks.\n"); 1887 + goto disable_rpm; 1869 1888 } 1870 1889 1871 1890 /* Obtain QSPI reset control */ ··· 1872 1893 if (IS_ERR(rstc)) { 1873 1894 ret = PTR_ERR(rstc); 1874 1895 dev_err(dev, "Cannot get QSPI reset.\n"); 1875 - goto probe_reset_failed; 1896 + goto disable_clks; 1876 1897 } 1877 1898 1878 1899 rstc_ocp = devm_reset_control_get_optional_exclusive(dev, "qspi-ocp"); 1879 1900 if (IS_ERR(rstc_ocp)) { 1880 1901 ret = PTR_ERR(rstc_ocp); 1881 1902 dev_err(dev, "Cannot get QSPI OCP reset.\n"); 1882 - goto probe_reset_failed; 1903 + goto disable_clks; 1883 1904 } 1884 1905 1885 - if (of_device_is_compatible(pdev->dev.of_node, "starfive,jh7110-qspi")) { 1906 + if (cqspi->is_jh7110) { 1886 1907 rstc_ref = devm_reset_control_get_optional_exclusive(dev, "rstc_ref"); 1887 1908 if (IS_ERR(rstc_ref)) { 1888 1909 ret = PTR_ERR(rstc_ref); 1889 1910 dev_err(dev, "Cannot get QSPI REF reset.\n"); 1890 - goto probe_reset_failed; 1911 + goto disable_clks; 1891 1912 } 1892 1913 reset_control_assert(rstc_ref); 1893 1914 reset_control_deassert(rstc_ref); ··· 1899 1920 reset_control_assert(rstc_ocp); 1900 1921 reset_control_deassert(rstc_ocp); 1901 1922 1902 - cqspi->master_ref_clk_hz = clk_get_rate(cqspi->clk); 1903 - host->max_speed_hz = cqspi->master_ref_clk_hz; 1923 + cqspi->master_ref_clk_hz = clk_get_rate(cqspi->clks[CLK_QSPI_REF].clk); 1924 + if (!cqspi->is_rzn1) { 1925 + host->max_speed_hz = cqspi->master_ref_clk_hz; 1926 + } else { 1927 + host->max_speed_hz = cqspi->master_ref_clk_hz / 2; 1928 + host->min_speed_hz = cqspi->master_ref_clk_hz / 32; 1929 + } 1904 1930 1905 1931 /* write completion is supported by default */ 1906 1932 cqspi->wr_completion = true; ··· 1930 1946 cqspi->slow_sram = true; 1931 1947 if (ddata->quirks & CQSPI_NEEDS_APB_AHB_HAZARD_WAR) 1932 1948 cqspi->apb_ahb_hazard = true; 1933 - 1934 - if (ddata->jh7110_clk_init) { 1935 - ret = cqspi_jh7110_clk_init(pdev, cqspi); 1936 - if (ret) 1937 - goto probe_reset_failed; 1938 - } 1939 1949 if (ddata->quirks & CQSPI_DISABLE_STIG_MODE) 1940 1950 cqspi->disable_stig_mode = true; 1941 1951 1942 1952 if (ddata->quirks 
& CQSPI_DMA_SET_MASK) { 1943 1953 ret = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64)); 1944 1954 if (ret) 1945 - goto probe_reset_failed; 1955 + goto disable_clks; 1946 1956 } 1947 1957 } 1948 1958 ··· 1947 1969 pdev->name, cqspi); 1948 1970 if (ret) { 1949 1971 dev_err(dev, "Cannot request IRQ.\n"); 1950 - goto probe_reset_failed; 1972 + goto disable_clks; 1951 1973 } 1952 1974 1953 1975 cqspi_wait_idle(cqspi); ··· 1965 1987 pm_runtime_get_noresume(dev); 1966 1988 } 1967 1989 1968 - ret = cqspi_setup_flash(cqspi); 1969 - if (ret) { 1970 - dev_err(dev, "failed to setup flash parameters %d\n", ret); 1971 - goto probe_setup_failed; 1972 - } 1973 - 1974 1990 host->num_chipselect = cqspi->num_chipselect; 1975 1991 1976 1992 if (ddata && (ddata->quirks & CQSPI_SUPPORT_DEVICE_RESET)) 1977 1993 cqspi_device_reset(cqspi); 1978 1994 1979 - if (cqspi->use_direct_mode) { 1995 + if (cqspi->use_direct_mode && !cqspi->is_rzn1) { 1980 1996 ret = cqspi_request_mmap_dma(cqspi); 1981 1997 if (ret == -EPROBE_DEFER) { 1982 1998 dev_err_probe(&pdev->dev, ret, "Failed to request mmap DMA\n"); 1983 - goto probe_setup_failed; 1999 + goto disable_controller; 1984 2000 } 1985 2001 } 1986 2002 1987 2003 ret = spi_register_controller(host); 1988 2004 if (ret) { 1989 2005 dev_err(&pdev->dev, "failed to register SPI ctlr %d\n", ret); 1990 - goto probe_setup_failed; 2006 + goto release_dma_chan; 1991 2007 } 1992 2008 1993 - if (!(ddata && (ddata->quirks & CQSPI_DISABLE_RUNTIME_PM))) { 1994 - pm_runtime_mark_last_busy(dev); 2009 + if (!(ddata && (ddata->quirks & CQSPI_DISABLE_RUNTIME_PM))) 1995 2010 pm_runtime_put_autosuspend(dev); 1996 - } 1997 2011 1998 2012 return 0; 1999 - probe_setup_failed: 2013 + 2014 + release_dma_chan: 2015 + if (cqspi->rx_chan) 2016 + dma_release_channel(cqspi->rx_chan); 2017 + disable_controller: 2018 + cqspi_controller_enable(cqspi, 0); 2019 + disable_clks: 2020 + if (pm_runtime_get_sync(&pdev->dev) >= 0) 2021 + clk_bulk_disable_unprepare(CLK_QSPI_NUM, cqspi->clks); 2022 + disable_rpm: 2000 2023 if (!(ddata && (ddata->quirks & CQSPI_DISABLE_RUNTIME_PM))) 2001 2024 pm_runtime_disable(dev); 2002 - cqspi_controller_enable(cqspi, 0); 2003 - probe_reset_failed: 2004 - if (cqspi->is_jh7110) 2005 - cqspi_jh7110_disable_clk(pdev, cqspi); 2006 2025 2007 - if (pm_runtime_get_sync(&pdev->dev) >= 0) 2008 - clk_disable_unprepare(cqspi->clk); 2009 - probe_clk_failed: 2010 2026 return ret; 2011 2027 } 2012 2028 ··· 2009 2037 const struct cqspi_driver_platdata *ddata; 2010 2038 struct cqspi_st *cqspi = platform_get_drvdata(pdev); 2011 2039 struct device *dev = &pdev->dev; 2040 + int ret = 0; 2012 2041 2013 2042 ddata = of_device_get_match_data(dev); 2014 2043 ··· 2019 2046 cqspi_wait_idle(cqspi); 2020 2047 2021 2048 spi_unregister_controller(cqspi->host); 2022 - cqspi_controller_enable(cqspi, 0); 2023 2049 2024 2050 if (cqspi->rx_chan) 2025 2051 dma_release_channel(cqspi->rx_chan); 2026 2052 2027 - if (!(ddata && (ddata->quirks & CQSPI_DISABLE_RUNTIME_PM))) 2028 - if (pm_runtime_get_sync(&pdev->dev) >= 0) 2029 - clk_disable(cqspi->clk); 2053 + cqspi_controller_enable(cqspi, 0); 2030 2054 2031 - if (cqspi->is_jh7110) 2032 - cqspi_jh7110_disable_clk(pdev, cqspi); 2055 + 2056 + if (!(ddata && (ddata->quirks & CQSPI_DISABLE_RUNTIME_PM))) 2057 + ret = pm_runtime_get_sync(&pdev->dev); 2058 + 2059 + if (ret >= 0) 2060 + clk_bulk_disable_unprepare(CLK_QSPI_NUM, cqspi->clks); 2033 2061 2034 2062 if (!(ddata && (ddata->quirks & CQSPI_DISABLE_RUNTIME_PM))) { 2035 2063 pm_runtime_put_sync(&pdev->dev); ··· 2043 2069 struct 
cqspi_st *cqspi = dev_get_drvdata(dev); 2044 2070 2045 2071 cqspi_controller_enable(cqspi, 0); 2046 - clk_disable_unprepare(cqspi->clk); 2072 + clk_bulk_disable_unprepare(CLK_QSPI_NUM, cqspi->clks); 2047 2073 return 0; 2048 2074 } 2049 2075 2050 2076 static int cqspi_runtime_resume(struct device *dev) 2051 2077 { 2052 2078 struct cqspi_st *cqspi = dev_get_drvdata(dev); 2079 + int ret; 2053 2080 2054 - clk_prepare_enable(cqspi->clk); 2081 + ret = clk_bulk_prepare_enable(CLK_QSPI_NUM, cqspi->clks); 2082 + if (ret) 2083 + return ret; 2084 + 2055 2085 cqspi_wait_idle(cqspi); 2056 2086 cqspi_controller_enable(cqspi, 0); 2057 2087 cqspi_controller_init(cqspi); ··· 2115 2137 }; 2116 2138 2117 2139 static const struct cqspi_driver_platdata socfpga_qspi = { 2118 - .quirks = CQSPI_DISABLE_DAC_MODE 2119 - | CQSPI_NO_SUPPORT_WR_COMPLETION 2120 - | CQSPI_SLOW_SRAM 2121 - | CQSPI_DISABLE_STIG_MODE 2122 - | CQSPI_DISABLE_RUNTIME_PM, 2140 + .quirks = CQSPI_DISABLE_DAC_MODE | CQSPI_NO_SUPPORT_WR_COMPLETION | 2141 + CQSPI_SLOW_SRAM | CQSPI_DISABLE_STIG_MODE | 2142 + CQSPI_DISABLE_RUNTIME_PM, 2123 2143 }; 2124 2144 2125 2145 static const struct cqspi_driver_platdata versal_ospi = { 2126 2146 .hwcaps_mask = CQSPI_SUPPORTS_OCTAL, 2127 - .quirks = CQSPI_DISABLE_DAC_MODE | CQSPI_SUPPORT_EXTERNAL_DMA 2128 - | CQSPI_DMA_SET_MASK, 2147 + .quirks = CQSPI_DISABLE_DAC_MODE | CQSPI_SUPPORT_EXTERNAL_DMA | 2148 + CQSPI_DMA_SET_MASK, 2129 2149 .indirect_read_dma = cqspi_versal_indirect_read_dma, 2130 2150 .get_dma_status = cqspi_get_versal_dma_status, 2131 2151 }; 2132 2152 2133 2153 static const struct cqspi_driver_platdata versal2_ospi = { 2134 2154 .hwcaps_mask = CQSPI_SUPPORTS_OCTAL, 2135 - .quirks = CQSPI_DISABLE_DAC_MODE | CQSPI_SUPPORT_EXTERNAL_DMA 2136 - | CQSPI_DMA_SET_MASK 2137 - | CQSPI_SUPPORT_DEVICE_RESET, 2155 + .quirks = CQSPI_DISABLE_DAC_MODE | CQSPI_SUPPORT_EXTERNAL_DMA | 2156 + CQSPI_DMA_SET_MASK | CQSPI_SUPPORT_DEVICE_RESET, 2138 2157 .indirect_read_dma = cqspi_versal_indirect_read_dma, 2139 2158 .get_dma_status = cqspi_get_versal_dma_status, 2140 2159 }; 2141 2160 2142 2161 static const struct cqspi_driver_platdata jh7110_qspi = { 2143 2162 .quirks = CQSPI_DISABLE_DAC_MODE, 2144 - .jh7110_clk_init = cqspi_jh7110_clk_init, 2145 2163 }; 2146 2164 2147 2165 static const struct cqspi_driver_platdata pensando_cdns_qspi = { ··· 2147 2173 static const struct cqspi_driver_platdata mobileye_eyeq5_ospi = { 2148 2174 .hwcaps_mask = CQSPI_SUPPORTS_OCTAL, 2149 2175 .quirks = CQSPI_DISABLE_DAC_MODE | CQSPI_NO_SUPPORT_WR_COMPLETION | 2150 - CQSPI_RD_NO_IRQ, 2176 + CQSPI_RD_NO_IRQ, 2177 + }; 2178 + 2179 + static const struct cqspi_driver_platdata renesas_rzn1_qspi = { 2180 + .hwcaps_mask = CQSPI_SUPPORTS_QUAD, 2181 + .quirks = CQSPI_NO_SUPPORT_WR_COMPLETION | CQSPI_RD_NO_IRQ | 2182 + CQSPI_HAS_WR_PROTECT | CQSPI_NO_INDIRECT_MODE, 2151 2183 }; 2152 2184 2153 2185 static const struct of_device_id cqspi_dt_ids[] = { ··· 2196 2216 { 2197 2217 .compatible = "amd,versal2-ospi", 2198 2218 .data = &versal2_ospi, 2219 + }, 2220 + { 2221 + .compatible = "renesas,rzn1-qspi", 2222 + .data = &renesas_rzn1_qspi, 2199 2223 }, 2200 2224 { /* end of table */ } 2201 2225 };
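Clock handling in probe moves from a single devm_clk_get() to the clk_bulk API: the reference clock stays mandatory while the APB/AHB bus clocks become optional bulk entries, which is what allows the dedicated JH7110 clock-init helper above to be deleted. A condensed sketch of the resulting lifecycle follows; the .id strings are hypothetical, since the real names are assigned elsewhere in the file and are not visible in this hunk.

	cqspi->clks[CLK_QSPI_REF].id = "ref";	/* hypothetical names */
	cqspi->clks[CLK_QSPI_APB].id = "apb";
	cqspi->clks[CLK_QSPI_AHB].id = "ahb";

	ret = devm_clk_bulk_get_optional(dev, CLK_QSPI_NUM, cqspi->clks);
	if (ret)
		return dev_err_probe(dev, ret, "Failed to get clocks\n");
	if (!cqspi->clks[CLK_QSPI_REF].clk)
		return -ENODEV;			/* ref clock must exist */

	ret = clk_bulk_prepare_enable(CLK_QSPI_NUM, cqspi->clks);
	if (ret)
		return ret;

	cqspi->master_ref_clk_hz = clk_get_rate(cqspi->clks[CLK_QSPI_REF].clk);

	/* and symmetrically on the error path and at remove time */
	clk_bulk_disable_unprepare(CLK_QSPI_NUM, cqspi->clks);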
+43 -34
drivers/spi/spi-cadence-xspi.c
··· 2 2 // Cadence XSPI flash controller driver 3 3 // Copyright (C) 2020-21 Cadence 4 4 5 - #include <linux/acpi.h> 6 5 #include <linux/completion.h> 7 6 #include <linux/delay.h> 8 7 #include <linux/err.h> ··· 11 12 #include <linux/iopoll.h> 12 13 #include <linux/kernel.h> 13 14 #include <linux/module.h> 14 - #include <linux/of.h> 15 15 #include <linux/platform_device.h> 16 16 #include <linux/pm_runtime.h> 17 + #include <linux/property.h> 17 18 #include <linux/spi/spi.h> 18 19 #include <linux/spi/spi-mem.h> 19 20 #include <linux/bitfield.h> 20 21 #include <linux/limits.h> 21 22 #include <linux/log2.h> 22 23 #include <linux/bitrev.h> 24 + #include <linux/util_macros.h> 23 25 24 26 #define CDNS_XSPI_MAGIC_NUM_VALUE 0x6522 25 27 #define CDNS_XSPI_MAX_BANKS 8 ··· 350 350 351 351 struct cdns_xspi_dev { 352 352 struct platform_device *pdev; 353 + struct spi_controller *host; 353 354 struct device *dev; 354 355 355 356 void __iomem *iobase; ··· 775 774 return ret; 776 775 } 777 776 778 - #ifdef CONFIG_ACPI 779 777 static bool cdns_xspi_supports_op(struct spi_mem *mem, 780 778 const struct spi_mem_op *op) 781 779 { 782 780 struct spi_device *spi = mem->spi; 783 - const union acpi_object *obj; 784 - struct acpi_device *adev; 781 + struct device *dev = &spi->dev; 782 + u32 value; 785 783 786 - adev = ACPI_COMPANION(&spi->dev); 787 - 788 - if (!acpi_dev_get_property(adev, "spi-tx-bus-width", ACPI_TYPE_INTEGER, 789 - &obj)) { 790 - switch (obj->integer.value) { 784 + if (!device_property_read_u32(dev, "spi-tx-bus-width", &value)) { 785 + switch (value) { 791 786 case 1: 792 787 break; 793 788 case 2: ··· 796 799 spi->mode |= SPI_TX_OCTAL; 797 800 break; 798 801 default: 799 - dev_warn(&spi->dev, 800 - "spi-tx-bus-width %lld not supported\n", 801 - obj->integer.value); 802 + dev_warn(dev, "spi-tx-bus-width %u not supported\n", value); 802 803 break; 803 804 } 804 805 } 805 806 806 - if (!acpi_dev_get_property(adev, "spi-rx-bus-width", ACPI_TYPE_INTEGER, 807 - &obj)) { 808 - switch (obj->integer.value) { 807 + if (!device_property_read_u32(dev, "spi-rx-bus-width", &value)) { 808 + switch (value) { 809 809 case 1: 810 810 break; 811 811 case 2: ··· 815 821 spi->mode |= SPI_RX_OCTAL; 816 822 break; 817 823 default: 818 - dev_warn(&spi->dev, 819 - "spi-rx-bus-width %lld not supported\n", 820 - obj->integer.value); 824 + dev_warn(dev, "spi-rx-bus-width %u not supported\n", value); 821 825 break; 822 826 } 823 827 } ··· 825 833 826 834 return true; 827 835 } 828 - #endif 829 836 830 837 static int cdns_xspi_adjust_mem_op_size(struct spi_mem *mem, struct spi_mem_op *op) 831 838 { ··· 837 846 } 838 847 839 848 static const struct spi_controller_mem_ops cadence_xspi_mem_ops = { 840 - #ifdef CONFIG_ACPI 841 - .supports_op = cdns_xspi_supports_op, 842 - #endif 849 + .supports_op = PTR_IF(IS_ENABLED(CONFIG_ACPI), cdns_xspi_supports_op), 843 850 .exec_op = cdns_xspi_mem_op_execute, 844 851 .adjust_op_size = cdns_xspi_adjust_mem_op_size, 845 852 }; 846 853 847 854 static const struct spi_controller_mem_ops marvell_xspi_mem_ops = { 848 - #ifdef CONFIG_ACPI 849 - .supports_op = cdns_xspi_supports_op, 850 - #endif 855 + .supports_op = PTR_IF(IS_ENABLED(CONFIG_ACPI), cdns_xspi_supports_op), 851 856 .exec_op = marvell_xspi_mem_op_execute, 852 857 .adjust_op_size = cdns_xspi_adjust_mem_op_size, 853 858 }; ··· 1144 1157 SPI_MODE_0 | SPI_MODE_3; 1145 1158 1146 1159 cdns_xspi = spi_controller_get_devdata(host); 1147 - cdns_xspi->driver_data = of_device_get_match_data(dev); 1148 - if (!cdns_xspi->driver_data) { 1149 - 
cdns_xspi->driver_data = acpi_device_get_match_data(dev); 1150 - if (!cdns_xspi->driver_data) 1151 - return -ENODEV; 1152 - } 1160 + cdns_xspi->driver_data = device_get_match_data(dev); 1161 + if (!cdns_xspi->driver_data) 1162 + return -ENODEV; 1153 1163 1154 1164 if (cdns_xspi->driver_data->mrvl_hw_overlay) { 1155 1165 host->mem_ops = &marvell_xspi_mem_ops; ··· 1158 1174 cdns_xspi->sdma_handler = &cdns_xspi_sdma_handle; 1159 1175 cdns_xspi->set_interrupts_handler = &cdns_xspi_set_interrupts; 1160 1176 } 1161 - host->dev.of_node = pdev->dev.of_node; 1162 1177 host->bus_num = -1; 1163 1178 1164 - platform_set_drvdata(pdev, host); 1179 + platform_set_drvdata(pdev, cdns_xspi); 1165 1180 1166 1181 cdns_xspi->pdev = pdev; 1182 + cdns_xspi->host = host; 1167 1183 cdns_xspi->dev = &pdev->dev; 1168 1184 cdns_xspi->cur_cs = 0; 1169 1185 ··· 1252 1268 return 0; 1253 1269 } 1254 1270 1271 + static int cdns_xspi_suspend(struct device *dev) 1272 + { 1273 + struct cdns_xspi_dev *cdns_xspi = dev_get_drvdata(dev); 1274 + 1275 + return spi_controller_suspend(cdns_xspi->host); 1276 + } 1277 + 1278 + static int cdns_xspi_resume(struct device *dev) 1279 + { 1280 + struct cdns_xspi_dev *cdns_xspi = dev_get_drvdata(dev); 1281 + 1282 + if (cdns_xspi->driver_data->mrvl_hw_overlay) { 1283 + cdns_mrvl_xspi_setup_clock(cdns_xspi, MRVL_DEFAULT_CLK); 1284 + cdns_xspi_configure_phy(cdns_xspi); 1285 + } 1286 + 1287 + cdns_xspi->set_interrupts_handler(cdns_xspi, false); 1288 + 1289 + return spi_controller_resume(cdns_xspi->host); 1290 + } 1291 + 1292 + static DEFINE_SIMPLE_DEV_PM_OPS(cdns_xspi_pm_ops, 1293 + cdns_xspi_suspend, cdns_xspi_resume); 1294 + 1255 1295 static const struct of_device_id cdns_xspi_of_match[] = { 1256 1296 { 1257 1297 .compatible = "cdns,xspi-nor", ··· 1294 1286 .driver = { 1295 1287 .name = CDNS_XSPI_NAME, 1296 1288 .of_match_table = cdns_xspi_of_match, 1289 + .pm = pm_sleep_ptr(&cdns_xspi_pm_ops), 1297 1290 }, 1298 1291 }; 1299 1292
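Replacing the acpi_dev_get_property() calls with device_property_read_u32() makes the bus-width check firmware-agnostic, and the #ifdef CONFIG_ACPI guards around the mem_ops hook give way to PTR_IF(), presumably the reason for the new <linux/util_macros.h> include. A NULL .supports_op simply means the generic spi-mem capability check is used instead. For reference, PTR_IF() amounts to:

	#define PTR_IF(cond, ptr)	((cond) ? (ptr) : NULL)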
-1
drivers/spi/spi-cadence.c
··· 651 651 return -ENOMEM; 652 652 653 653 xspi = spi_controller_get_devdata(ctlr); 654 - ctlr->dev.of_node = pdev->dev.of_node; 655 654 platform_set_drvdata(pdev, ctlr); 656 655 657 656 xspi->regs = devm_platform_ioremap_resource(pdev, 0);
-1
drivers/spi/spi-cavium-octeon.c
··· 54 54 host->bits_per_word_mask = SPI_BPW_MASK(8); 55 55 host->max_speed_hz = OCTEON_SPI_MAX_CLOCK_HZ; 56 56 57 - host->dev.of_node = pdev->dev.of_node; 58 57 err = devm_spi_register_controller(&pdev->dev, host); 59 58 if (err) { 60 59 dev_err(&pdev->dev, "register host failed: %d\n", err);
-1
drivers/spi/spi-cavium-thunderx.c
··· 67 67 host->transfer_one_message = octeon_spi_transfer_one_message; 68 68 host->bits_per_word_mask = SPI_BPW_MASK(8); 69 69 host->max_speed_hz = OCTEON_SPI_MAX_CLOCK_HZ; 70 - host->dev.of_node = pdev->dev.of_node; 71 70 72 71 pci_set_drvdata(pdev, host); 73 72
-1
drivers/spi/spi-clps711x.c
··· 107 107 host->bus_num = -1; 108 108 host->mode_bits = SPI_CPHA | SPI_CS_HIGH; 109 109 host->bits_per_word_mask = SPI_BPW_RANGE_MASK(1, 8); 110 - host->dev.of_node = pdev->dev.of_node; 111 110 host->prepare_message = spi_clps711x_prepare_message; 112 111 host->transfer_one = spi_clps711x_transfer_one; 113 112
+8
drivers/spi/spi-cs42l43.c
··· 371 371 372 372 fwnode_property_read_u32(xu_fwnode, "01fa-sidecar-instances", &nsidecars); 373 373 374 + /* 375 + * Depending on the value of nsidecars we either create a software node 376 + * or assign an fwnode. We don't want software node to be attached to 377 + * the default one. That's why we need to clear the SPI controller fwnode 378 + * first. 379 + */ 380 + device_set_node(&priv->ctlr->dev, NULL); 381 + 374 382 if (nsidecars) { 375 383 struct software_node_ref_args args[] = { 376 384 SOFTWARE_NODE_REFERENCE(fwnode, 0, GPIO_ACTIVE_LOW),
-1
drivers/spi/spi-davinci.c
··· 988 988 } 989 989 990 990 host->use_gpio_descriptors = true; 991 - host->dev.of_node = pdev->dev.of_node; 992 991 host->bus_num = pdev->id; 993 992 host->num_chipselect = pdata->num_chipselect; 994 993 host->bits_per_word_mask = SPI_BPW_RANGE_MASK(2, 16);
-3
drivers/spi/spi-dln2.c
··· 682 682 struct spi_controller *host; 683 683 struct dln2_spi *dln2; 684 684 struct dln2_platform_data *pdata = dev_get_platdata(&pdev->dev); 685 - struct device *dev = &pdev->dev; 686 685 int ret; 687 686 688 687 host = spi_alloc_host(&pdev->dev, sizeof(*dln2)); 689 688 if (!host) 690 689 return -ENOMEM; 691 - 692 - device_set_node(&host->dev, dev_fwnode(dev)); 693 690 694 691 platform_set_drvdata(pdev, host); 695 692
-331
drivers/spi/spi-dw-bt1.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-only 2 - // 3 - // Copyright (C) 2020 BAIKAL ELECTRONICS, JSC 4 - // 5 - // Authors: 6 - // Ramil Zaripov <Ramil.Zaripov@baikalelectronics.ru> 7 - // Serge Semin <Sergey.Semin@baikalelectronics.ru> 8 - // 9 - // Baikal-T1 DW APB SPI and System Boot SPI driver 10 - // 11 - 12 - #include <linux/clk.h> 13 - #include <linux/cpumask.h> 14 - #include <linux/err.h> 15 - #include <linux/interrupt.h> 16 - #include <linux/module.h> 17 - #include <linux/mux/consumer.h> 18 - #include <linux/of.h> 19 - #include <linux/of_platform.h> 20 - #include <linux/platform_device.h> 21 - #include <linux/pm_runtime.h> 22 - #include <linux/property.h> 23 - #include <linux/slab.h> 24 - #include <linux/spi/spi-mem.h> 25 - #include <linux/spi/spi.h> 26 - 27 - #include "spi-dw.h" 28 - 29 - #define BT1_BOOT_DIRMAP 0 30 - #define BT1_BOOT_REGS 1 31 - 32 - struct dw_spi_bt1 { 33 - struct dw_spi dws; 34 - struct clk *clk; 35 - struct mux_control *mux; 36 - 37 - #ifdef CONFIG_SPI_DW_BT1_DIRMAP 38 - void __iomem *map; 39 - resource_size_t map_len; 40 - #endif 41 - }; 42 - #define to_dw_spi_bt1(_ctlr) \ 43 - container_of(spi_controller_get_devdata(_ctlr), struct dw_spi_bt1, dws) 44 - 45 - typedef int (*dw_spi_bt1_init_cb)(struct platform_device *pdev, 46 - struct dw_spi_bt1 *dwsbt1); 47 - 48 - #ifdef CONFIG_SPI_DW_BT1_DIRMAP 49 - 50 - static int dw_spi_bt1_dirmap_create(struct spi_mem_dirmap_desc *desc) 51 - { 52 - struct dw_spi_bt1 *dwsbt1 = to_dw_spi_bt1(desc->mem->spi->controller); 53 - 54 - if (!dwsbt1->map || 55 - !dwsbt1->dws.mem_ops.supports_op(desc->mem, &desc->info.op_tmpl)) 56 - return -EOPNOTSUPP; 57 - 58 - if (desc->info.op_tmpl.data.dir != SPI_MEM_DATA_IN) 59 - return -EOPNOTSUPP; 60 - 61 - /* 62 - * Make sure the requested region doesn't go out of the physically 63 - * mapped flash memory bounds. 64 - */ 65 - if (desc->info.offset + desc->info.length > dwsbt1->map_len) 66 - return -EINVAL; 67 - 68 - return 0; 69 - } 70 - 71 - /* 72 - * Directly mapped SPI memory region is only accessible in the dword chunks. 73 - * That's why we have to create a dedicated read-method to copy data from there 74 - * to the passed buffer. 75 - */ 76 - static void dw_spi_bt1_dirmap_copy_from_map(void *to, void __iomem *from, size_t len) 77 - { 78 - size_t shift, chunk; 79 - u32 data; 80 - 81 - /* 82 - * We split the copying up into the next three stages: unaligned head, 83 - * aligned body, unaligned tail. 84 - */ 85 - shift = (size_t)from & 0x3; 86 - if (shift) { 87 - chunk = min_t(size_t, 4 - shift, len); 88 - data = readl_relaxed(from - shift); 89 - memcpy(to, (char *)&data + shift, chunk); 90 - from += chunk; 91 - to += chunk; 92 - len -= chunk; 93 - } 94 - 95 - while (len >= 4) { 96 - data = readl_relaxed(from); 97 - memcpy(to, &data, 4); 98 - from += 4; 99 - to += 4; 100 - len -= 4; 101 - } 102 - 103 - if (len) { 104 - data = readl_relaxed(from); 105 - memcpy(to, &data, len); 106 - } 107 - } 108 - 109 - static ssize_t dw_spi_bt1_dirmap_read(struct spi_mem_dirmap_desc *desc, 110 - u64 offs, size_t len, void *buf) 111 - { 112 - struct dw_spi_bt1 *dwsbt1 = to_dw_spi_bt1(desc->mem->spi->controller); 113 - struct dw_spi *dws = &dwsbt1->dws; 114 - struct spi_mem *mem = desc->mem; 115 - struct dw_spi_cfg cfg; 116 - int ret; 117 - 118 - /* 119 - * Make sure the requested operation length is valid. Truncate the 120 - * length if it's greater than the length of the MMIO region. 
121 - */ 122 - if (offs >= dwsbt1->map_len || !len) 123 - return 0; 124 - 125 - len = min_t(size_t, len, dwsbt1->map_len - offs); 126 - 127 - /* Collect the controller configuration required by the operation */ 128 - cfg.tmode = DW_SPI_CTRLR0_TMOD_EPROMREAD; 129 - cfg.dfs = 8; 130 - cfg.ndf = 4; 131 - cfg.freq = mem->spi->max_speed_hz; 132 - 133 - /* Make sure the corresponding CS is de-asserted on transmission */ 134 - dw_spi_set_cs(mem->spi, false); 135 - 136 - dw_spi_enable_chip(dws, 0); 137 - 138 - dw_spi_update_config(dws, mem->spi, &cfg); 139 - 140 - dw_spi_umask_intr(dws, DW_SPI_INT_RXFI); 141 - 142 - dw_spi_enable_chip(dws, 1); 143 - 144 - /* 145 - * Enable the transparent mode of the System Boot Controller. 146 - * The SPI core IO should have been locked before calling this method 147 - * so noone would be touching the controller' registers during the 148 - * dirmap operation. 149 - */ 150 - ret = mux_control_select(dwsbt1->mux, BT1_BOOT_DIRMAP); 151 - if (ret) 152 - return ret; 153 - 154 - dw_spi_bt1_dirmap_copy_from_map(buf, dwsbt1->map + offs, len); 155 - 156 - mux_control_deselect(dwsbt1->mux); 157 - 158 - dw_spi_set_cs(mem->spi, true); 159 - 160 - ret = dw_spi_check_status(dws, true); 161 - 162 - return ret ?: len; 163 - } 164 - 165 - #endif /* CONFIG_SPI_DW_BT1_DIRMAP */ 166 - 167 - static int dw_spi_bt1_std_init(struct platform_device *pdev, 168 - struct dw_spi_bt1 *dwsbt1) 169 - { 170 - struct dw_spi *dws = &dwsbt1->dws; 171 - 172 - dws->irq = platform_get_irq(pdev, 0); 173 - if (dws->irq < 0) 174 - return dws->irq; 175 - 176 - dws->num_cs = 4; 177 - 178 - /* 179 - * Baikal-T1 Normal SPI Controllers don't always keep up with full SPI 180 - * bus speed especially when it comes to the concurrent access to the 181 - * APB bus resources. Thus we have no choice but to set a constraint on 182 - * the SPI bus frequency for the memory operations which require to 183 - * read/write data as fast as possible. 184 - */ 185 - dws->max_mem_freq = 20000000U; 186 - 187 - dw_spi_dma_setup_generic(dws); 188 - 189 - return 0; 190 - } 191 - 192 - static int dw_spi_bt1_sys_init(struct platform_device *pdev, 193 - struct dw_spi_bt1 *dwsbt1) 194 - { 195 - struct resource *mem __maybe_unused; 196 - struct dw_spi *dws = &dwsbt1->dws; 197 - 198 - /* 199 - * Baikal-T1 System Boot Controller is equipped with a mux, which 200 - * switches between the directly mapped SPI flash access mode and 201 - * IO access to the DW APB SSI registers. Note the mux controller 202 - * must be setup to preserve the registers being accessible by default 203 - * (on idle-state). 204 - */ 205 - dwsbt1->mux = devm_mux_control_get(&pdev->dev, NULL); 206 - if (IS_ERR(dwsbt1->mux)) 207 - return PTR_ERR(dwsbt1->mux); 208 - 209 - /* 210 - * Directly mapped SPI flash memory is a 16MB MMIO region, which can be 211 - * used to access a peripheral memory device just by reading/writing 212 - * data from/to it. Note the system APB bus will stall during each IO 213 - * from/to the dirmap region until the operation is finished. So don't 214 - * use it concurrently with time-critical tasks (like the SPI memory 215 - * operations implemented in the DW APB SSI driver). 
216 - */ 217 - #ifdef CONFIG_SPI_DW_BT1_DIRMAP 218 - mem = platform_get_resource(pdev, IORESOURCE_MEM, 1); 219 - if (mem) { 220 - dwsbt1->map = devm_ioremap_resource(&pdev->dev, mem); 221 - if (!IS_ERR(dwsbt1->map)) { 222 - dwsbt1->map_len = resource_size(mem); 223 - dws->mem_ops.dirmap_create = dw_spi_bt1_dirmap_create; 224 - dws->mem_ops.dirmap_read = dw_spi_bt1_dirmap_read; 225 - } else { 226 - dwsbt1->map = NULL; 227 - } 228 - } 229 - #endif /* CONFIG_SPI_DW_BT1_DIRMAP */ 230 - 231 - /* 232 - * There is no IRQ, no DMA and just one CS available on the System Boot 233 - * SPI controller. 234 - */ 235 - dws->irq = IRQ_NOTCONNECTED; 236 - dws->num_cs = 1; 237 - 238 - /* 239 - * Baikal-T1 System Boot SPI Controller doesn't keep up with the full 240 - * SPI bus speed due to relatively slow APB bus and races for it' 241 - * resources from different CPUs. The situation is worsen by a small 242 - * FIFOs depth (just 8 words). It works better in a single CPU mode 243 - * though, but still tends to be not fast enough at low CPU 244 - * frequencies. 245 - */ 246 - if (num_possible_cpus() > 1) 247 - dws->max_mem_freq = 10000000U; 248 - else 249 - dws->max_mem_freq = 20000000U; 250 - 251 - return 0; 252 - } 253 - 254 - static int dw_spi_bt1_probe(struct platform_device *pdev) 255 - { 256 - dw_spi_bt1_init_cb init_func; 257 - struct dw_spi_bt1 *dwsbt1; 258 - struct resource *mem; 259 - struct dw_spi *dws; 260 - int ret; 261 - 262 - dwsbt1 = devm_kzalloc(&pdev->dev, sizeof(struct dw_spi_bt1), GFP_KERNEL); 263 - if (!dwsbt1) 264 - return -ENOMEM; 265 - 266 - dws = &dwsbt1->dws; 267 - 268 - dws->regs = devm_platform_get_and_ioremap_resource(pdev, 0, &mem); 269 - if (IS_ERR(dws->regs)) 270 - return PTR_ERR(dws->regs); 271 - 272 - dws->paddr = mem->start; 273 - 274 - dwsbt1->clk = devm_clk_get_enabled(&pdev->dev, NULL); 275 - if (IS_ERR(dwsbt1->clk)) 276 - return PTR_ERR(dwsbt1->clk); 277 - 278 - dws->bus_num = pdev->id; 279 - dws->reg_io_width = 4; 280 - dws->max_freq = clk_get_rate(dwsbt1->clk); 281 - if (!dws->max_freq) 282 - return -EINVAL; 283 - 284 - init_func = device_get_match_data(&pdev->dev); 285 - ret = init_func(pdev, dwsbt1); 286 - if (ret) 287 - return ret; 288 - 289 - pm_runtime_enable(&pdev->dev); 290 - 291 - ret = dw_spi_add_controller(&pdev->dev, dws); 292 - if (ret) { 293 - pm_runtime_disable(&pdev->dev); 294 - return ret; 295 - } 296 - 297 - platform_set_drvdata(pdev, dwsbt1); 298 - 299 - return 0; 300 - } 301 - 302 - static void dw_spi_bt1_remove(struct platform_device *pdev) 303 - { 304 - struct dw_spi_bt1 *dwsbt1 = platform_get_drvdata(pdev); 305 - 306 - dw_spi_remove_controller(&dwsbt1->dws); 307 - 308 - pm_runtime_disable(&pdev->dev); 309 - } 310 - 311 - static const struct of_device_id dw_spi_bt1_of_match[] = { 312 - { .compatible = "baikal,bt1-ssi", .data = dw_spi_bt1_std_init}, 313 - { .compatible = "baikal,bt1-sys-ssi", .data = dw_spi_bt1_sys_init}, 314 - { } 315 - }; 316 - MODULE_DEVICE_TABLE(of, dw_spi_bt1_of_match); 317 - 318 - static struct platform_driver dw_spi_bt1_driver = { 319 - .probe = dw_spi_bt1_probe, 320 - .remove = dw_spi_bt1_remove, 321 - .driver = { 322 - .name = "bt1-sys-ssi", 323 - .of_match_table = dw_spi_bt1_of_match, 324 - }, 325 - }; 326 - module_platform_driver(dw_spi_bt1_driver); 327 - 328 - MODULE_AUTHOR("Serge Semin <Sergey.Semin@baikalelectronics.ru>"); 329 - MODULE_DESCRIPTION("Baikal-T1 System Boot SPI Controller driver"); 330 - MODULE_LICENSE("GPL v2"); 331 - MODULE_IMPORT_NS("SPI_DW_CORE");
-2
drivers/spi/spi-dw-core.c
··· 936 936 if (!ctlr) 937 937 return -ENOMEM; 938 938 939 - device_set_node(&ctlr->dev, dev_fwnode(dev)); 940 - 941 939 dws->ctlr = ctlr; 942 940 dws->dma_addr = (dma_addr_t)(dws->paddr + DW_SPI_DR); 943 941
+34 -3
drivers/spi/spi-dw-mmio.c
··· 104 104 return -ENOMEM; 105 105 106 106 dwsmscc->spi_mst = devm_platform_ioremap_resource(pdev, 1); 107 - if (IS_ERR(dwsmscc->spi_mst)) { 108 - dev_err(&pdev->dev, "SPI_MST region map failed\n"); 107 + if (IS_ERR(dwsmscc->spi_mst)) 109 108 return PTR_ERR(dwsmscc->spi_mst); 110 - } 111 109 112 110 dwsmscc->syscon = syscon_regmap_lookup_by_compatible(cpu_syscon); 113 111 if (IS_ERR(dwsmscc->syscon)) ··· 390 392 return ret; 391 393 } 392 394 395 + static int dw_spi_mmio_suspend(struct device *dev) 396 + { 397 + struct dw_spi_mmio *dwsmmio = dev_get_drvdata(dev); 398 + int ret; 399 + 400 + ret = dw_spi_suspend_controller(&dwsmmio->dws); 401 + if (ret) 402 + return ret; 403 + 404 + reset_control_assert(dwsmmio->rstc); 405 + 406 + clk_disable_unprepare(dwsmmio->pclk); 407 + clk_disable_unprepare(dwsmmio->clk); 408 + 409 + return 0; 410 + } 411 + 412 + static int dw_spi_mmio_resume(struct device *dev) 413 + { 414 + struct dw_spi_mmio *dwsmmio = dev_get_drvdata(dev); 415 + 416 + clk_prepare_enable(dwsmmio->clk); 417 + clk_prepare_enable(dwsmmio->pclk); 418 + 419 + reset_control_deassert(dwsmmio->rstc); 420 + 421 + return dw_spi_resume_controller(&dwsmmio->dws); 422 + } 423 + 424 + static DEFINE_SIMPLE_DEV_PM_OPS(dw_spi_mmio_pm_ops, 425 + dw_spi_mmio_suspend, dw_spi_mmio_resume); 426 + 393 427 static void dw_spi_mmio_remove(struct platform_device *pdev) 394 428 { 395 429 struct dw_spi_mmio *dwsmmio = platform_get_drvdata(pdev); ··· 465 435 .name = DRIVER_NAME, 466 436 .of_match_table = dw_spi_mmio_of_match, 467 437 .acpi_match_table = ACPI_PTR(dw_spi_mmio_acpi_match), 438 + .pm = pm_sleep_ptr(&dw_spi_mmio_pm_ops), 468 439 }, 469 440 }; 470 441 module_platform_driver(dw_spi_mmio_driver);
-1
drivers/spi/spi-ep93xx.c
··· 689 689 /* make sure that the hardware is disabled */ 690 690 writel(0, espi->mmio + SSPCR1); 691 691 692 - device_set_node(&host->dev, dev_fwnode(&pdev->dev)); 693 692 error = devm_spi_register_controller(&pdev->dev, host); 694 693 if (error) { 695 694 dev_err(&pdev->dev, "failed to register SPI host\n");
-1
drivers/spi/spi-falcon.c
··· 405 405 host->flags = SPI_CONTROLLER_HALF_DUPLEX; 406 406 host->setup = falcon_sflash_setup; 407 407 host->transfer_one_message = falcon_sflash_xfer_one; 408 - host->dev.of_node = pdev->dev.of_node; 409 408 410 409 ret = devm_spi_register_controller(&pdev->dev, host); 411 410 if (ret)
+2 -5
drivers/spi/spi-fsi.c
··· 531 531 static int fsi_spi_probe(struct device *dev) 532 532 { 533 533 int rc; 534 - struct device_node *np; 535 534 int num_controllers_registered = 0; 536 535 struct fsi2spi *bridge; 537 536 struct fsi_device *fsi = to_fsi_dev(dev); ··· 546 547 bridge->fsi = fsi; 547 548 mutex_init(&bridge->lock); 548 549 549 - for_each_available_child_of_node(dev->of_node, np) { 550 + for_each_available_child_of_node_scoped(dev->of_node, np) { 550 551 u32 base; 551 552 struct fsi_spi *ctx; 552 553 struct spi_controller *ctlr; ··· 555 556 continue; 556 557 557 558 ctlr = spi_alloc_host(dev, sizeof(*ctx)); 558 - if (!ctlr) { 559 - of_node_put(np); 559 + if (!ctlr) 560 560 break; 561 - } 562 561 563 562 ctlr->dev.of_node = np; 564 563 ctlr->num_chipselect = of_get_available_child_count(np) ?: 1;
-1
drivers/spi/spi-fsl-dspi.c
··· 1555 1555 1556 1556 ctlr->setup = dspi_setup; 1557 1557 ctlr->transfer_one_message = dspi_transfer_one_message; 1558 - ctlr->dev.of_node = pdev->dev.of_node; 1559 1558 1560 1559 ctlr->cleanup = dspi_cleanup; 1561 1560 ctlr->target_abort = dspi_target_abort;
-1
drivers/spi/spi-fsl-espi.c
··· 675 675 676 676 host->mode_bits = SPI_RX_DUAL | SPI_CPOL | SPI_CPHA | SPI_CS_HIGH | 677 677 SPI_LSB_FIRST | SPI_LOOP; 678 - host->dev.of_node = dev->of_node; 679 678 host->bits_per_word_mask = SPI_BPW_RANGE_MASK(4, 16); 680 679 host->setup = fsl_espi_setup; 681 680 host->cleanup = fsl_espi_cleanup;
-1
drivers/spi/spi-fsl-lib.c
··· 91 91 ctlr->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH 92 92 | SPI_LSB_FIRST | SPI_LOOP; 93 93 94 - ctlr->dev.of_node = dev->of_node; 95 94 96 95 mpc8xxx_spi = spi_controller_get_devdata(ctlr); 97 96 mpc8xxx_spi->dev = dev;
+51 -14
drivers/spi/spi-fsl-lpspi.c
··· 281 281 fsl_lpspi->rx(fsl_lpspi); 282 282 } 283 283 284 - static void fsl_lpspi_set_cmd(struct fsl_lpspi_data *fsl_lpspi) 284 + static void fsl_lpspi_set_cmd(struct fsl_lpspi_data *fsl_lpspi, 285 + struct spi_device *spi) 285 286 { 286 287 u32 temp = 0; 287 288 ··· 304 303 temp |= TCR_CONTC; 305 304 } 306 305 } 306 + 307 + if (spi->mode & SPI_CPOL) 308 + temp |= TCR_CPOL; 309 + 310 + if (spi->mode & SPI_CPHA) 311 + temp |= TCR_CPHA; 312 + 307 313 writel(temp, fsl_lpspi->base + IMX7ULP_TCR); 308 314 309 315 dev_dbg(fsl_lpspi->dev, "TCR=0x%x\n", temp); ··· 494 486 fsl_lpspi->tx = fsl_lpspi_buf_tx_u32; 495 487 } 496 488 497 - /* 498 - * t->len is 'unsigned' and txfifosize and watermrk is 'u8', force 499 - * type cast is inevitable. When len > 255, len will be truncated in min_t(), 500 - * it caused wrong watermark set. 'unsigned int' is as the designated type 501 - * for min_t() to avoid truncation. 502 - */ 503 - fsl_lpspi->watermark = min_t(unsigned int, 504 - fsl_lpspi->txfifosize, 505 - t->len); 489 + fsl_lpspi->watermark = min(fsl_lpspi->txfifosize, t->len); 490 + 491 + return fsl_lpspi_config(fsl_lpspi); 492 + } 493 + 494 + static int fsl_lpspi_prepare_message(struct spi_controller *controller, 495 + struct spi_message *msg) 496 + { 497 + struct fsl_lpspi_data *fsl_lpspi = 498 + spi_controller_get_devdata(controller); 499 + struct spi_device *spi = msg->spi; 500 + struct spi_transfer *t; 501 + int ret; 502 + 503 + t = list_first_entry_or_null(&msg->transfers, struct spi_transfer, 504 + transfer_list); 505 + if (!t) 506 + return 0; 507 + 508 + fsl_lpspi->is_first_byte = true; 509 + fsl_lpspi->usedma = false; 510 + ret = fsl_lpspi_setup_transfer(controller, spi, t); 506 511 507 512 if (fsl_lpspi_can_dma(controller, spi, t)) 508 513 fsl_lpspi->usedma = true; 509 514 else 510 515 fsl_lpspi->usedma = false; 511 516 512 - return fsl_lpspi_config(fsl_lpspi); 517 + if (ret < 0) 518 + return ret; 519 + 520 + fsl_lpspi_set_cmd(fsl_lpspi, spi); 521 + 522 + /* No IRQs */ 523 + writel(0, fsl_lpspi->base + IMX7ULP_IER); 524 + 525 + /* Controller disable, clear FIFOs, clear status */ 526 + writel(CR_RRF | CR_RTF, fsl_lpspi->base + IMX7ULP_CR); 527 + writel(SR_CLEAR_MASK, fsl_lpspi->base + IMX7ULP_SR); 528 + 529 + return 0; 513 530 } 514 531 515 532 static int fsl_lpspi_target_abort(struct spi_controller *controller) ··· 794 761 spi_controller_get_devdata(controller); 795 762 int ret; 796 763 797 - fsl_lpspi->is_first_byte = true; 764 + if (fsl_lpspi_can_dma(controller, spi, t)) 765 + fsl_lpspi->usedma = true; 766 + else 767 + fsl_lpspi->usedma = false; 768 + 798 769 ret = fsl_lpspi_setup_transfer(controller, spi, t); 799 770 if (ret < 0) 800 771 return ret; 801 772 802 773 t->effective_speed_hz = fsl_lpspi->config.effective_speed_hz; 803 774 804 - fsl_lpspi_set_cmd(fsl_lpspi); 775 + fsl_lpspi_set_cmd(fsl_lpspi, spi); 805 776 fsl_lpspi->is_first_byte = false; 806 777 807 778 if (fsl_lpspi->usedma) ··· 989 952 } 990 953 991 954 controller->bits_per_word_mask = SPI_BPW_RANGE_MASK(8, 32); 955 + controller->prepare_message = fsl_lpspi_prepare_message; 992 956 controller->transfer_one = fsl_lpspi_transfer_one; 993 957 controller->prepare_transfer_hardware = lpspi_prepare_xfer_hardware; 994 958 controller->unprepare_transfer_hardware = lpspi_unprepare_xfer_hardware; 995 959 controller->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH; 996 960 controller->flags = SPI_CONTROLLER_MUST_RX | SPI_CONTROLLER_MUST_TX; 997 - controller->dev.of_node = pdev->dev.of_node; 998 961 controller->bus_num = pdev->id; 999 962 
controller->num_chipselect = num_cs; 1000 963 controller->target_abort = fsl_lpspi_target_abort;
+71 -115
drivers/spi/spi-geni-qcom.c
··· 82 82 u32 fifo_width_bits; 83 83 u32 tx_wm; 84 84 u32 last_mode; 85 + u8 last_cs; 85 86 unsigned long cur_speed_hz; 86 87 unsigned long cur_sclk_hz; 87 88 unsigned int cur_bits_per_word; ··· 146 145 return ret; 147 146 } 148 147 149 - static void handle_se_timeout(struct spi_controller *spi, 150 - struct spi_message *msg) 148 + static void handle_se_timeout(struct spi_controller *spi) 151 149 { 152 150 struct spi_geni_master *mas = spi_controller_get_devdata(spi); 153 151 unsigned long time_left; ··· 160 160 xfer = mas->cur_xfer; 161 161 mas->cur_xfer = NULL; 162 162 163 - if (spi->target) { 164 - /* 165 - * skip CMD Cancel sequnece since spi target 166 - * doesn`t support CMD Cancel sequnece 167 - */ 163 + /* The controller doesn't support the Cancel commnand in target mode */ 164 + if (!spi->target) { 165 + reinit_completion(&mas->cancel_done); 166 + geni_se_cancel_m_cmd(se); 167 + 168 168 spin_unlock_irq(&mas->lock); 169 - goto reset_if_dma; 169 + 170 + time_left = wait_for_completion_timeout(&mas->cancel_done, HZ); 171 + if (time_left) 172 + goto reset_if_dma; 173 + 174 + spin_lock_irq(&mas->lock); 170 175 } 171 176 172 - reinit_completion(&mas->cancel_done); 173 - geni_se_cancel_m_cmd(se); 174 - spin_unlock_irq(&mas->lock); 175 - 176 - time_left = wait_for_completion_timeout(&mas->cancel_done, HZ); 177 - if (time_left) 178 - goto reset_if_dma; 179 - 180 - spin_lock_irq(&mas->lock); 181 177 reinit_completion(&mas->abort_done); 182 178 geni_se_abort_m_cmd(se); 183 179 spin_unlock_irq(&mas->lock); ··· 221 225 } 222 226 } 223 227 224 - static void handle_gpi_timeout(struct spi_controller *spi, struct spi_message *msg) 228 + static void handle_gpi_timeout(struct spi_controller *spi) 225 229 { 226 230 struct spi_geni_master *mas = spi_controller_get_devdata(spi); 227 231 ··· 236 240 switch (mas->cur_xfer_mode) { 237 241 case GENI_SE_FIFO: 238 242 case GENI_SE_DMA: 239 - handle_se_timeout(spi, msg); 243 + handle_se_timeout(spi); 240 244 break; 241 245 case GENI_GPI_DMA: 242 - handle_gpi_timeout(spi, msg); 246 + handle_gpi_timeout(spi); 243 247 break; 244 248 default: 245 249 dev_err(mas->dev, "Abort on Mode:%d not supported", mas->cur_xfer_mode); ··· 278 282 mas->abort_failed = false; 279 283 280 284 return false; 281 - } 282 - 283 - static void spi_geni_set_cs(struct spi_device *slv, bool set_flag) 284 - { 285 - struct spi_geni_master *mas = spi_controller_get_devdata(slv->controller); 286 - struct spi_controller *spi = dev_get_drvdata(mas->dev); 287 - struct geni_se *se = &mas->se; 288 - unsigned long time_left; 289 - 290 - if (!(slv->mode & SPI_CS_HIGH)) 291 - set_flag = !set_flag; 292 - 293 - if (set_flag == mas->cs_flag) 294 - return; 295 - 296 - pm_runtime_get_sync(mas->dev); 297 - 298 - if (spi_geni_is_abort_still_pending(mas)) { 299 - dev_err(mas->dev, "Can't set chip select\n"); 300 - goto exit; 301 - } 302 - 303 - spin_lock_irq(&mas->lock); 304 - if (mas->cur_xfer) { 305 - dev_err(mas->dev, "Can't set CS when prev xfer running\n"); 306 - spin_unlock_irq(&mas->lock); 307 - goto exit; 308 - } 309 - 310 - mas->cs_flag = set_flag; 311 - /* set xfer_mode to FIFO to complete cs_done in isr */ 312 - mas->cur_xfer_mode = GENI_SE_FIFO; 313 - geni_se_select_mode(se, mas->cur_xfer_mode); 314 - 315 - reinit_completion(&mas->cs_done); 316 - if (set_flag) 317 - geni_se_setup_m_cmd(se, SPI_CS_ASSERT, 0); 318 - else 319 - geni_se_setup_m_cmd(se, SPI_CS_DEASSERT, 0); 320 - spin_unlock_irq(&mas->lock); 321 - 322 - time_left = wait_for_completion_timeout(&mas->cs_done, HZ); 323 - if (!time_left) 
{ 324 - dev_warn(mas->dev, "Timeout setting chip select\n"); 325 - handle_se_timeout(spi, NULL); 326 - } 327 - 328 - exit: 329 - pm_runtime_put(mas->dev); 330 285 } 331 286 332 287 static void spi_setup_word_len(struct spi_geni_master *mas, u16 mode, ··· 346 399 { 347 400 struct spi_geni_master *mas = spi_controller_get_devdata(spi); 348 401 struct geni_se *se = &mas->se; 349 - u32 loopback_cfg = 0, cpol = 0, cpha = 0, demux_output_inv = 0; 350 - u32 demux_sel; 402 + u8 chipselect = spi_get_chipselect(spi_slv, 0); 403 + bool cs_changed = (mas->last_cs != chipselect); 404 + u32 mode_changed = mas->last_mode ^ spi_slv->mode; 351 405 352 - if (mas->last_mode != spi_slv->mode) { 353 - if (spi_slv->mode & SPI_LOOP) 354 - loopback_cfg = LOOPBACK_ENABLE; 406 + mas->last_cs = chipselect; 407 + mas->last_mode = spi_slv->mode; 355 408 356 - if (spi_slv->mode & SPI_CPOL) 357 - cpol = CPOL; 409 + if (mode_changed & SPI_LSB_FIRST) 410 + mas->cur_bits_per_word = 0; /* force next setup_se_xfer to call spi_setup_word_len */ 411 + if (mode_changed & SPI_LOOP) 412 + writel((spi_slv->mode & SPI_LOOP) ? LOOPBACK_ENABLE : 0, se->base + SE_SPI_LOOPBACK); 413 + if (cs_changed) 414 + writel(chipselect, se->base + SE_SPI_DEMUX_SEL); 415 + if (mode_changed & SE_SPI_CPHA) 416 + writel((spi_slv->mode & SPI_CPHA) ? CPHA : 0, se->base + SE_SPI_CPHA); 417 + if (mode_changed & SE_SPI_CPOL) 418 + writel((spi_slv->mode & SPI_CPOL) ? CPOL : 0, se->base + SE_SPI_CPOL); 419 + if ((mode_changed & SPI_CS_HIGH) || (cs_changed && (spi_slv->mode & SPI_CS_HIGH))) 420 + writel((spi_slv->mode & SPI_CS_HIGH) ? BIT(chipselect) : 0, se->base + SE_SPI_DEMUX_OUTPUT_INV); 358 421 359 - if (spi_slv->mode & SPI_CPHA) 360 - cpha = CPHA; 361 - 362 - if (spi_slv->mode & SPI_CS_HIGH) 363 - demux_output_inv = BIT(spi_get_chipselect(spi_slv, 0)); 364 - 365 - demux_sel = spi_get_chipselect(spi_slv, 0); 366 - mas->cur_bits_per_word = spi_slv->bits_per_word; 367 - 368 - spi_setup_word_len(mas, spi_slv->mode, spi_slv->bits_per_word); 369 - writel(loopback_cfg, se->base + SE_SPI_LOOPBACK); 370 - writel(demux_sel, se->base + SE_SPI_DEMUX_SEL); 371 - writel(cpha, se->base + SE_SPI_CPHA); 372 - writel(cpol, se->base + SE_SPI_CPOL); 373 - writel(demux_output_inv, se->base + SE_SPI_DEMUX_OUTPUT_INV); 374 - 375 - mas->last_mode = spi_slv->mode; 376 - } 377 - 378 - return geni_spi_set_clock_and_bw(mas, spi_slv->max_speed_hz); 422 + return 0; 379 423 } 380 424 381 425 static void ··· 486 548 { 487 549 u32 len; 488 550 489 - if (!(mas->cur_bits_per_word % MIN_WORD_LEN)) 490 - len = xfer->len * BITS_PER_BYTE / mas->cur_bits_per_word; 551 + if (!(xfer->bits_per_word % MIN_WORD_LEN)) 552 + len = xfer->len * BITS_PER_BYTE / xfer->bits_per_word; 491 553 else 492 - len = xfer->len / (mas->cur_bits_per_word / BITS_PER_BYTE + 1); 554 + len = xfer->len / (xfer->bits_per_word / BITS_PER_BYTE + 1); 493 555 len &= TRANS_LEN_MSK; 494 556 495 557 return len; ··· 509 571 return true; 510 572 511 573 len = get_xfer_len_in_words(xfer, mas); 512 - fifo_size = mas->tx_fifo_depth * mas->fifo_width_bits / mas->cur_bits_per_word; 574 + fifo_size = mas->tx_fifo_depth * mas->fifo_width_bits / xfer->bits_per_word; 513 575 514 576 if (len > fifo_size) 515 577 return true; ··· 662 724 case 0: 663 725 mas->cur_xfer_mode = GENI_SE_FIFO; 664 726 geni_se_select_mode(se, GENI_SE_FIFO); 727 + /* setup_fifo_params assumes that these registers start with a zero value */ 728 + writel(0, se->base + SE_SPI_LOOPBACK); 729 + writel(0, se->base + SE_SPI_DEMUX_SEL); 730 + writel(0, se->base + 
SE_SPI_CPHA); 731 + writel(0, se->base + SE_SPI_CPOL); 732 + writel(0, se->base + SE_SPI_DEMUX_OUTPUT_INV); 665 733 ret = 0; 666 734 break; 667 735 } 668 736 669 - /* We always control CS manually */ 737 + /* We never control CS manually */ 670 738 if (!spi->target) { 671 739 spi_tx_cfg = readl(se->base + SE_SPI_TRANS_CFG); 672 740 spi_tx_cfg &= ~CS_TOGGLE; ··· 785 841 u16 mode, struct spi_controller *spi) 786 842 { 787 843 u32 m_cmd = 0; 844 + u32 m_params = 0; 788 845 u32 len; 789 846 struct geni_se *se = &mas->se; 790 847 int ret; ··· 849 904 mas->cur_xfer_mode = GENI_SE_DMA; 850 905 geni_se_select_mode(se, mas->cur_xfer_mode); 851 906 907 + if (!xfer->cs_change) { 908 + if (!list_is_last(&xfer->transfer_list, &spi->cur_msg->transfers)) 909 + m_params = FRAGMENTATION; 910 + } 911 + 852 912 /* 853 913 * Lock around right before we start the transfer since our 854 914 * interrupt could come in at any time now. 855 915 */ 856 916 spin_lock_irq(&mas->lock); 857 - geni_se_setup_m_cmd(se, m_cmd, FRAGMENTATION); 917 + geni_se_setup_m_cmd(se, m_cmd, m_params); 858 918 859 919 if (mas->cur_xfer_mode == GENI_SE_DMA) { 860 920 if (m_cmd & SPI_RX_ONLY) ··· 1003 1053 return IRQ_HANDLED; 1004 1054 } 1005 1055 1056 + static int spi_geni_target_abort(struct spi_controller *spi) 1057 + { 1058 + if (!spi->cur_msg) 1059 + return 0; 1060 + 1061 + handle_se_timeout(spi); 1062 + spi_finalize_current_transfer(spi); 1063 + 1064 + return 0; 1065 + } 1066 + 1006 1067 static int spi_geni_probe(struct platform_device *pdev) 1007 1068 { 1008 1069 int ret, irq; ··· 1039 1078 if (IS_ERR(clk)) 1040 1079 return PTR_ERR(clk); 1041 1080 1042 - spi = devm_spi_alloc_host(dev, sizeof(*mas)); 1081 + if (device_property_read_bool(dev, "spi-slave")) 1082 + spi = devm_spi_alloc_target(dev, sizeof(*mas)); 1083 + else 1084 + spi = devm_spi_alloc_host(dev, sizeof(*mas)); 1085 + 1043 1086 if (!spi) 1044 1087 return -ENOMEM; 1045 1088 ··· 1067 1102 } 1068 1103 1069 1104 spi->bus_num = -1; 1070 - spi->dev.of_node = dev->of_node; 1071 1105 spi->mode_bits = SPI_CPOL | SPI_CPHA | SPI_LOOP | SPI_CS_HIGH; 1072 1106 spi->bits_per_word_mask = SPI_BPW_RANGE_MASK(4, 32); 1073 1107 spi->num_chipselect = 4; ··· 1087 1123 init_completion(&mas->rx_reset_done); 1088 1124 spin_lock_init(&mas->lock); 1089 1125 1126 + if (spi->target) 1127 + spi->target_abort = spi_geni_target_abort; 1128 + 1090 1129 ret = geni_icc_get(&mas->se, NULL); 1091 1130 if (ret) 1092 1131 return ret; ··· 1099 1132 ret = devm_pm_runtime_enable(dev); 1100 1133 if (ret) 1101 1134 return ret; 1102 - 1103 - if (device_property_read_bool(&pdev->dev, "spi-slave")) 1104 - spi->target = true; 1105 1135 1106 1136 /* Set the bus quota to a reasonable value for register access */ 1107 1137 mas->se.icc_paths[GENI_TO_CORE].avg_bw = Bps_to_icc(CORE_2X_50_MHZ); ··· 1111 1147 ret = spi_geni_init(mas); 1112 1148 if (ret) 1113 1149 return ret; 1114 - 1115 - /* 1116 - * check the mode supported and set_cs for fifo mode only 1117 - * for dma (gsi) mode, the gsi will set cs based on params passed in 1118 - * TRE 1119 - */ 1120 - if (!spi->target && mas->cur_xfer_mode == GENI_SE_FIFO) 1121 - spi->set_cs = spi_geni_set_cs; 1122 1150 1123 1151 /* 1124 1152 * TX is required per GSI spec, see setup_gsi_xfer().
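
The reworked setup_fifo_params() above XORs the previously applied SPI mode with the newly requested one and only writes the registers whose controlling bits actually changed, with chip-select routing keyed on a cached last_cs value. The following standalone sketch (plain userspace C with hypothetical register names, illustrative only and not part of the patch) shows that change-detection pattern:

#include <stdint.h>
#include <stdio.h>

#define MODE_CPHA  0x01u
#define MODE_CPOL  0x02u
#define MODE_LOOP  0x20u

/* Hypothetical "register write" stand-in so the sketch is runnable. */
static void reg_write(const char *reg, unsigned int val)
{
	printf("write %-8s = 0x%x\n", reg, val);
}

/* Only touch registers whose controlling mode bits differ from last time. */
static void apply_mode(uint32_t *last_mode, uint32_t new_mode)
{
	uint32_t changed = *last_mode ^ new_mode;

	if (changed & MODE_LOOP)
		reg_write("LOOPBACK", (new_mode & MODE_LOOP) ? 1 : 0);
	if (changed & MODE_CPHA)
		reg_write("CPHA", (new_mode & MODE_CPHA) ? 1 : 0);
	if (changed & MODE_CPOL)
		reg_write("CPOL", (new_mode & MODE_CPOL) ? 1 : 0);

	*last_mode = new_mode;
}

int main(void)
{
	uint32_t last_mode = 0;

	apply_mode(&last_mode, MODE_CPOL | MODE_CPHA); /* both change: two writes */
	apply_mode(&last_mode, MODE_CPOL);             /* only CPHA cleared: one write */
	return 0;
}

This is why the probe path now zeroes LOOPBACK, DEMUX_SEL, CPHA, CPOL and DEMUX_OUTPUT_INV up front: the delta logic assumes a known starting state.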
-1
drivers/spi/spi-gpio.c
··· 351 351 return -ENOMEM; 352 352 353 353 if (fwnode) { 354 - device_set_node(&host->dev, fwnode); 355 354 host->use_gpio_descriptors = true; 356 355 } else { 357 356 status = spi_gpio_probe_pdata(pdev, host);
-1
drivers/spi/spi-gxp.c
··· 284 284 ctlr->mem_ops = &gxp_spi_mem_ops; 285 285 ctlr->setup = gxp_spi_setup; 286 286 ctlr->num_chipselect = data->max_cs; 287 - ctlr->dev.of_node = dev->of_node; 288 287 289 288 ret = devm_spi_register_controller(dev, ctlr); 290 289 if (ret) {
-1
drivers/spi/spi-hisi-kunpeng.c
··· 495 495 host->cleanup = hisi_spi_cleanup; 496 496 host->transfer_one = hisi_spi_transfer_one; 497 497 host->handle_err = hisi_spi_handle_err; 498 - host->dev.fwnode = dev->fwnode; 499 498 host->min_speed_hz = DIV_ROUND_UP(host->max_speed_hz, CLK_DIV_MAX); 500 499 501 500 hisi_spi_hw_init(hs);
-1
drivers/spi/spi-img-spfi.c
··· 587 587 host->mode_bits = SPI_CPOL | SPI_CPHA | SPI_TX_DUAL | SPI_RX_DUAL; 588 588 if (of_property_read_bool(spfi->dev->of_node, "img,supports-quad-mode")) 589 589 host->mode_bits |= SPI_TX_QUAD | SPI_RX_QUAD; 590 - host->dev.of_node = pdev->dev.of_node; 591 590 host->bits_per_word_mask = SPI_BPW_MASK(32) | SPI_BPW_MASK(8); 592 591 host->max_speed_hz = clk_get_rate(spfi->spfi_clk) / 4; 593 592 host->min_speed_hz = clk_get_rate(spfi->spfi_clk) / 512;
+530 -118
drivers/spi/spi-imx.c
··· 60 60 #define MX51_ECSPI_CTRL_MAX_BURST 512 61 61 /* The maximum bytes that IMX53_ECSPI can transfer in target mode.*/ 62 62 #define MX53_MAX_TRANSFER_BYTES 512 63 + #define BYTES_PER_32BITS_WORD 4 63 64 64 65 enum spi_imx_devtype { 65 66 IMX1_CSPI, ··· 94 93 */ 95 94 bool tx_glitch_fixed; 96 95 enum spi_imx_devtype devtype; 96 + }; 97 + 98 + struct dma_data_package { 99 + u32 cmd_word; 100 + void *dma_rx_buf; 101 + void *dma_tx_buf; 102 + dma_addr_t dma_tx_addr; 103 + dma_addr_t dma_rx_addr; 104 + int dma_len; 105 + int data_len; 97 106 }; 98 107 99 108 struct spi_imx_data { ··· 141 130 u32 wml; 142 131 struct completion dma_rx_completion; 143 132 struct completion dma_tx_completion; 133 + size_t dma_package_num; 134 + struct dma_data_package *dma_data; 135 + int rx_offset; 144 136 145 137 const struct spi_imx_devtype_data *devtype_data; 146 138 }; ··· 203 189 MXC_SPI_BUF_RX(u32) 204 190 MXC_SPI_BUF_TX(u32) 205 191 192 + /* Align to cache line to avoid swiotlo bounce */ 193 + #define DMA_CACHE_ALIGNED_LEN(x) ALIGN((x), dma_get_cache_alignment()) 194 + 206 195 /* First entry is reserved, second entry is valid only if SDHC_SPIEN is set 207 196 * (which is currently not the case in this driver) 208 197 */ ··· 264 247 if (!controller->dma_rx) 265 248 return false; 266 249 267 - if (spi_imx->target_mode) 250 + /* 251 + * Due to Freescale errata ERR003775 "eCSPI: Burst completion by Chip 252 + * Select (SS) signal in Slave mode is not functional" burst size must 253 + * be set exactly to the size of the transfer. This limit SPI transaction 254 + * with maximum 2^12 bits. 255 + */ 256 + if (transfer->len > MX53_MAX_TRANSFER_BYTES && spi_imx->target_mode) 268 257 return false; 269 258 270 259 if (transfer->len < spi_imx->devtype_data->fifo_size) 260 + return false; 261 + 262 + /* DMA only can transmit data in bytes */ 263 + if (spi_imx->bits_per_word != 8 && spi_imx->bits_per_word != 16 && 264 + spi_imx->bits_per_word != 32) 265 + return false; 266 + 267 + if (transfer->len >= MAX_SDMA_BD_BYTES) 271 268 return false; 272 269 273 270 spi_imx->dynamic_burst = 0; ··· 1313 1282 return IRQ_HANDLED; 1314 1283 } 1315 1284 1316 - static int spi_imx_dma_configure(struct spi_controller *controller) 1317 - { 1318 - int ret; 1319 - enum dma_slave_buswidth buswidth; 1320 - struct dma_slave_config rx = {}, tx = {}; 1321 - struct spi_imx_data *spi_imx = spi_controller_get_devdata(controller); 1322 - 1323 - switch (spi_imx_bytes_per_word(spi_imx->bits_per_word)) { 1324 - case 4: 1325 - buswidth = DMA_SLAVE_BUSWIDTH_4_BYTES; 1326 - break; 1327 - case 2: 1328 - buswidth = DMA_SLAVE_BUSWIDTH_2_BYTES; 1329 - break; 1330 - case 1: 1331 - buswidth = DMA_SLAVE_BUSWIDTH_1_BYTE; 1332 - break; 1333 - default: 1334 - return -EINVAL; 1335 - } 1336 - 1337 - tx.direction = DMA_MEM_TO_DEV; 1338 - tx.dst_addr = spi_imx->base_phys + MXC_CSPITXDATA; 1339 - tx.dst_addr_width = buswidth; 1340 - tx.dst_maxburst = spi_imx->wml; 1341 - ret = dmaengine_slave_config(controller->dma_tx, &tx); 1342 - if (ret) { 1343 - dev_err(spi_imx->dev, "TX dma configuration failed with %d\n", ret); 1344 - return ret; 1345 - } 1346 - 1347 - rx.direction = DMA_DEV_TO_MEM; 1348 - rx.src_addr = spi_imx->base_phys + MXC_CSPIRXDATA; 1349 - rx.src_addr_width = buswidth; 1350 - rx.src_maxburst = spi_imx->wml; 1351 - ret = dmaengine_slave_config(controller->dma_rx, &rx); 1352 - if (ret) { 1353 - dev_err(spi_imx->dev, "RX dma configuration failed with %d\n", ret); 1354 - return ret; 1355 - } 1356 - 1357 - return 0; 1358 - } 1359 - 1360 1285 static int 
spi_imx_setupxfer(struct spi_device *spi, 1361 1286 struct spi_transfer *t) 1362 1287 { ··· 1429 1442 1430 1443 init_completion(&spi_imx->dma_rx_completion); 1431 1444 init_completion(&spi_imx->dma_tx_completion); 1432 - controller->can_dma = spi_imx_can_dma; 1433 - controller->max_dma_len = MAX_SDMA_BD_BYTES; 1434 1445 spi_imx->controller->flags = SPI_CONTROLLER_MUST_RX | 1435 1446 SPI_CONTROLLER_MUST_TX; 1436 1447 ··· 1466 1481 return secs_to_jiffies(2 * timeout); 1467 1482 } 1468 1483 1469 - static int spi_imx_dma_transfer(struct spi_imx_data *spi_imx, 1470 - struct spi_transfer *transfer) 1484 + static void spi_imx_dma_unmap(struct spi_imx_data *spi_imx, 1485 + struct dma_data_package *dma_data) 1471 1486 { 1487 + struct device *tx_dev = spi_imx->controller->dma_tx->device->dev; 1488 + struct device *rx_dev = spi_imx->controller->dma_rx->device->dev; 1489 + 1490 + dma_unmap_single(tx_dev, dma_data->dma_tx_addr, 1491 + DMA_CACHE_ALIGNED_LEN(dma_data->dma_len), 1492 + DMA_TO_DEVICE); 1493 + dma_unmap_single(rx_dev, dma_data->dma_rx_addr, 1494 + DMA_CACHE_ALIGNED_LEN(dma_data->dma_len), 1495 + DMA_FROM_DEVICE); 1496 + } 1497 + 1498 + static void spi_imx_dma_rx_data_handle(struct spi_imx_data *spi_imx, 1499 + struct dma_data_package *dma_data, void *rx_buf, 1500 + bool word_delay) 1501 + { 1502 + void *copy_ptr; 1503 + int unaligned; 1504 + 1505 + /* 1506 + * On little-endian CPUs, adjust byte order: 1507 + * - Swap bytes when bpw = 8 1508 + * - Swap half-words when bpw = 16 1509 + * This ensures correct data ordering for DMA transfers. 1510 + */ 1511 + #ifdef __LITTLE_ENDIAN 1512 + if (!word_delay) { 1513 + unsigned int bytes_per_word = spi_imx_bytes_per_word(spi_imx->bits_per_word); 1514 + u32 *temp = dma_data->dma_rx_buf; 1515 + 1516 + for (int i = 0; i < DIV_ROUND_UP(dma_data->dma_len, sizeof(*temp)); i++) { 1517 + if (bytes_per_word == 1) 1518 + swab32s(temp + i); 1519 + else if (bytes_per_word == 2) 1520 + swahw32s(temp + i); 1521 + } 1522 + } 1523 + #endif 1524 + 1525 + /* 1526 + * When dynamic burst enabled, DMA RX always receives 32-bit words from RXFIFO with 1527 + * buswidth = 4, but when data_len is not 4-bytes alignment, the RM shows when 1528 + * burst length = 32*n + m bits, a SPI burst contains the m LSB in first word and all 1529 + * 32 bits in other n words. So if garbage bytes in the first word, trim first word then 1530 + * copy the actual data to rx_buf. 
1531 + */ 1532 + if (dma_data->data_len % BYTES_PER_32BITS_WORD && !word_delay) { 1533 + unaligned = dma_data->data_len % BYTES_PER_32BITS_WORD; 1534 + copy_ptr = (u8 *)dma_data->dma_rx_buf + BYTES_PER_32BITS_WORD - unaligned; 1535 + } else { 1536 + copy_ptr = dma_data->dma_rx_buf; 1537 + } 1538 + 1539 + memcpy(rx_buf, copy_ptr, dma_data->data_len); 1540 + } 1541 + 1542 + static int spi_imx_dma_map(struct spi_imx_data *spi_imx, 1543 + struct dma_data_package *dma_data) 1544 + { 1545 + struct spi_controller *controller = spi_imx->controller; 1546 + struct device *tx_dev = controller->dma_tx->device->dev; 1547 + struct device *rx_dev = controller->dma_rx->device->dev; 1548 + int ret; 1549 + 1550 + dma_data->dma_tx_addr = dma_map_single(tx_dev, dma_data->dma_tx_buf, 1551 + DMA_CACHE_ALIGNED_LEN(dma_data->dma_len), 1552 + DMA_TO_DEVICE); 1553 + ret = dma_mapping_error(tx_dev, dma_data->dma_tx_addr); 1554 + if (ret < 0) { 1555 + dev_err(spi_imx->dev, "DMA TX map failed %d\n", ret); 1556 + return ret; 1557 + } 1558 + 1559 + dma_data->dma_rx_addr = dma_map_single(rx_dev, dma_data->dma_rx_buf, 1560 + DMA_CACHE_ALIGNED_LEN(dma_data->dma_len), 1561 + DMA_FROM_DEVICE); 1562 + ret = dma_mapping_error(rx_dev, dma_data->dma_rx_addr); 1563 + if (ret < 0) { 1564 + dev_err(spi_imx->dev, "DMA RX map failed %d\n", ret); 1565 + dma_unmap_single(tx_dev, dma_data->dma_tx_addr, 1566 + DMA_CACHE_ALIGNED_LEN(dma_data->dma_len), 1567 + DMA_TO_DEVICE); 1568 + return ret; 1569 + } 1570 + 1571 + return 0; 1572 + } 1573 + 1574 + static int spi_imx_dma_tx_data_handle(struct spi_imx_data *spi_imx, 1575 + struct dma_data_package *dma_data, 1576 + const void *tx_buf, 1577 + bool word_delay) 1578 + { 1579 + void *copy_ptr; 1580 + int unaligned; 1581 + 1582 + if (word_delay) { 1583 + dma_data->dma_len = dma_data->data_len; 1584 + } else { 1585 + /* 1586 + * As per the reference manual, when burst length = 32*n + m bits, ECSPI 1587 + * sends m LSB bits in the first word, followed by n full 32-bit words. 1588 + * Since actual data may not be 4-byte aligned, allocate DMA TX/RX buffers 1589 + * to ensure alignment. For TX, DMA pushes 4-byte aligned words to TXFIFO, 1590 + * while ECSPI uses BURST_LENGTH settings to maintain correct bit count. 1591 + * For RX, DMA always receives 32-bit words from RXFIFO, when data len is 1592 + * not 4-byte aligned, trim the first word to drop garbage bytes, then group 1593 + * all transfer DMA bounse buffer and copy all valid data to rx_buf. 1594 + */ 1595 + dma_data->dma_len = ALIGN(dma_data->data_len, BYTES_PER_32BITS_WORD); 1596 + } 1597 + 1598 + dma_data->dma_tx_buf = kzalloc(dma_data->dma_len, GFP_KERNEL); 1599 + if (!dma_data->dma_tx_buf) 1600 + return -ENOMEM; 1601 + 1602 + dma_data->dma_rx_buf = kzalloc(dma_data->dma_len, GFP_KERNEL); 1603 + if (!dma_data->dma_rx_buf) { 1604 + kfree(dma_data->dma_tx_buf); 1605 + return -ENOMEM; 1606 + } 1607 + 1608 + if (dma_data->data_len % BYTES_PER_32BITS_WORD && !word_delay) { 1609 + unaligned = dma_data->data_len % BYTES_PER_32BITS_WORD; 1610 + copy_ptr = (u8 *)dma_data->dma_tx_buf + BYTES_PER_32BITS_WORD - unaligned; 1611 + } else { 1612 + copy_ptr = dma_data->dma_tx_buf; 1613 + } 1614 + 1615 + memcpy(copy_ptr, tx_buf, dma_data->data_len); 1616 + 1617 + /* 1618 + * When word_delay is enabled, DMA transfers an entire word in one minor loop. 1619 + * In this case, no data requires additional handling. 
1620 + */
1621 + if (word_delay)
1622 + return 0;
1623 +
1624 + #ifdef __LITTLE_ENDIAN
1625 + /*
1626 + * On little-endian CPUs, adjust byte order:
1627 + * - Swap bytes when bpw = 8
1628 + * - Swap half-words when bpw = 16
1629 + * This ensures correct data ordering for DMA transfers.
1630 + */
1631 + unsigned int bytes_per_word = spi_imx_bytes_per_word(spi_imx->bits_per_word);
1632 + u32 *temp = dma_data->dma_tx_buf;
1633 +
1634 + for (int i = 0; i < DIV_ROUND_UP(dma_data->dma_len, sizeof(*temp)); i++) {
1635 + if (bytes_per_word == 1)
1636 + swab32s(temp + i);
1637 + else if (bytes_per_word == 2)
1638 + swahw32s(temp + i);
1639 + }
1640 + #endif
1641 +
1642 + return 0;
1643 + }
1644 +
1645 + static int spi_imx_dma_data_prepare(struct spi_imx_data *spi_imx,
1646 + struct spi_transfer *transfer,
1647 + bool word_delay)
1648 + {
1649 + u32 pre_bl, tail_bl;
1650 + u32 ctrl;
1651 + int ret;
1652 +
1653 + /*
1654 + * ECSPI supports a maximum burst of 512 bytes. When xfer->len exceeds 512
1655 + * and is not a multiple of 512, a tail transfer is required. BURST_LENGTH
1656 + * is used by the SPI hardware to maintain the correct bit count and must be
1657 + * updated to match the data length. Once a DMA request has been submitted,
1658 + * BURST_LENGTH can no longer be changed, so we must split the transfer into
1659 + * two packages and update the register before setting up the second one.
1660 + */
1661 + ctrl = readl(spi_imx->base + MX51_ECSPI_CTRL);
1662 + if (word_delay) {
1663 + /*
1664 + * When word delay is requested, the Sample Period Control Register
1665 + * (ECSPI_PERIODREG) provides software a way to insert delays (wait
1666 + * states) between consecutive SPI transfers. As a result, ECSPI can
1667 + * only transfer one word per frame, and the delay occurs between
1668 + * frames.
1669 + */ 1670 + spi_imx->dma_package_num = 1; 1671 + pre_bl = spi_imx->bits_per_word - 1; 1672 + } else if (transfer->len <= MX51_ECSPI_CTRL_MAX_BURST) { 1673 + spi_imx->dma_package_num = 1; 1674 + pre_bl = transfer->len * BITS_PER_BYTE - 1; 1675 + } else if (!(transfer->len % MX51_ECSPI_CTRL_MAX_BURST)) { 1676 + spi_imx->dma_package_num = 1; 1677 + pre_bl = MX51_ECSPI_CTRL_MAX_BURST * BITS_PER_BYTE - 1; 1678 + } else { 1679 + spi_imx->dma_package_num = 2; 1680 + pre_bl = MX51_ECSPI_CTRL_MAX_BURST * BITS_PER_BYTE - 1; 1681 + tail_bl = (transfer->len % MX51_ECSPI_CTRL_MAX_BURST) * BITS_PER_BYTE - 1; 1682 + } 1683 + 1684 + spi_imx->dma_data = kmalloc_array(spi_imx->dma_package_num, 1685 + sizeof(struct dma_data_package), 1686 + GFP_KERNEL | __GFP_ZERO); 1687 + if (!spi_imx->dma_data) { 1688 + dev_err(spi_imx->dev, "Failed to allocate DMA package buffer!\n"); 1689 + return -ENOMEM; 1690 + } 1691 + 1692 + if (spi_imx->dma_package_num == 1) { 1693 + ctrl &= ~MX51_ECSPI_CTRL_BL_MASK; 1694 + ctrl |= pre_bl << MX51_ECSPI_CTRL_BL_OFFSET; 1695 + spi_imx->dma_data[0].cmd_word = ctrl; 1696 + spi_imx->dma_data[0].data_len = transfer->len; 1697 + ret = spi_imx_dma_tx_data_handle(spi_imx, &spi_imx->dma_data[0], transfer->tx_buf, 1698 + word_delay); 1699 + if (ret) { 1700 + kfree(spi_imx->dma_data); 1701 + return ret; 1702 + } 1703 + } else { 1704 + ctrl &= ~MX51_ECSPI_CTRL_BL_MASK; 1705 + ctrl |= pre_bl << MX51_ECSPI_CTRL_BL_OFFSET; 1706 + spi_imx->dma_data[0].cmd_word = ctrl; 1707 + spi_imx->dma_data[0].data_len = round_down(transfer->len, 1708 + MX51_ECSPI_CTRL_MAX_BURST); 1709 + ret = spi_imx_dma_tx_data_handle(spi_imx, &spi_imx->dma_data[0], transfer->tx_buf, 1710 + false); 1711 + if (ret) { 1712 + kfree(spi_imx->dma_data); 1713 + return ret; 1714 + } 1715 + 1716 + ctrl &= ~MX51_ECSPI_CTRL_BL_MASK; 1717 + ctrl |= tail_bl << MX51_ECSPI_CTRL_BL_OFFSET; 1718 + spi_imx->dma_data[1].cmd_word = ctrl; 1719 + spi_imx->dma_data[1].data_len = transfer->len % MX51_ECSPI_CTRL_MAX_BURST; 1720 + ret = spi_imx_dma_tx_data_handle(spi_imx, &spi_imx->dma_data[1], 1721 + transfer->tx_buf + spi_imx->dma_data[0].data_len, 1722 + false); 1723 + if (ret) { 1724 + kfree(spi_imx->dma_data[0].dma_tx_buf); 1725 + kfree(spi_imx->dma_data[0].dma_rx_buf); 1726 + kfree(spi_imx->dma_data); 1727 + } 1728 + } 1729 + 1730 + return 0; 1731 + } 1732 + 1733 + static int spi_imx_dma_submit(struct spi_imx_data *spi_imx, 1734 + struct dma_data_package *dma_data, 1735 + struct spi_transfer *transfer) 1736 + { 1737 + struct spi_controller *controller = spi_imx->controller; 1472 1738 struct dma_async_tx_descriptor *desc_tx, *desc_rx; 1473 1739 unsigned long transfer_timeout; 1474 1740 unsigned long time_left; 1475 - struct spi_controller *controller = spi_imx->controller; 1476 - struct sg_table *tx = &transfer->tx_sg, *rx = &transfer->rx_sg; 1477 - struct scatterlist *last_sg = sg_last(rx->sgl, rx->nents); 1478 - unsigned int bytes_per_word, i; 1479 - int ret; 1741 + dma_cookie_t cookie; 1480 1742 1481 - /* Get the right burst length from the last sg to ensure no tail data */ 1482 - bytes_per_word = spi_imx_bytes_per_word(transfer->bits_per_word); 1743 + /* 1744 + * The TX DMA setup starts the transfer, so make sure RX is configured 1745 + * before TX. 
1746 + */ 1747 + desc_rx = dmaengine_prep_slave_single(controller->dma_rx, dma_data->dma_rx_addr, 1748 + dma_data->dma_len, DMA_DEV_TO_MEM, 1749 + DMA_PREP_INTERRUPT | DMA_CTRL_ACK); 1750 + if (!desc_rx) { 1751 + transfer->error |= SPI_TRANS_FAIL_NO_START; 1752 + return -EINVAL; 1753 + } 1754 + 1755 + desc_rx->callback = spi_imx_dma_rx_callback; 1756 + desc_rx->callback_param = (void *)spi_imx; 1757 + cookie = dmaengine_submit(desc_rx); 1758 + if (dma_submit_error(cookie)) { 1759 + dev_err(spi_imx->dev, "submitting DMA RX failed\n"); 1760 + transfer->error |= SPI_TRANS_FAIL_NO_START; 1761 + goto dmaengine_terminate_rx; 1762 + } 1763 + 1764 + reinit_completion(&spi_imx->dma_rx_completion); 1765 + dma_async_issue_pending(controller->dma_rx); 1766 + 1767 + desc_tx = dmaengine_prep_slave_single(controller->dma_tx, dma_data->dma_tx_addr, 1768 + dma_data->dma_len, DMA_MEM_TO_DEV, 1769 + DMA_PREP_INTERRUPT | DMA_CTRL_ACK); 1770 + if (!desc_tx) 1771 + goto dmaengine_terminate_rx; 1772 + 1773 + desc_tx->callback = spi_imx_dma_tx_callback; 1774 + desc_tx->callback_param = (void *)spi_imx; 1775 + cookie = dmaengine_submit(desc_tx); 1776 + if (dma_submit_error(cookie)) { 1777 + dev_err(spi_imx->dev, "submitting DMA TX failed\n"); 1778 + goto dmaengine_terminate_tx; 1779 + } 1780 + reinit_completion(&spi_imx->dma_tx_completion); 1781 + dma_async_issue_pending(controller->dma_tx); 1782 + 1783 + spi_imx->devtype_data->trigger(spi_imx); 1784 + 1785 + transfer_timeout = spi_imx_calculate_timeout(spi_imx, transfer->len); 1786 + 1787 + if (!spi_imx->target_mode) { 1788 + /* Wait SDMA to finish the data transfer.*/ 1789 + time_left = wait_for_completion_timeout(&spi_imx->dma_tx_completion, 1790 + transfer_timeout); 1791 + if (!time_left) { 1792 + dev_err(spi_imx->dev, "I/O Error in DMA TX\n"); 1793 + dmaengine_terminate_all(controller->dma_tx); 1794 + dmaengine_terminate_all(controller->dma_rx); 1795 + return -ETIMEDOUT; 1796 + } 1797 + 1798 + time_left = wait_for_completion_timeout(&spi_imx->dma_rx_completion, 1799 + transfer_timeout); 1800 + if (!time_left) { 1801 + dev_err(&controller->dev, "I/O Error in DMA RX\n"); 1802 + spi_imx->devtype_data->reset(spi_imx); 1803 + dmaengine_terminate_all(controller->dma_rx); 1804 + return -ETIMEDOUT; 1805 + } 1806 + } else { 1807 + spi_imx->target_aborted = false; 1808 + 1809 + if (wait_for_completion_interruptible(&spi_imx->dma_tx_completion) || 1810 + READ_ONCE(spi_imx->target_aborted)) { 1811 + dev_dbg(spi_imx->dev, "I/O Error in DMA TX interrupted\n"); 1812 + dmaengine_terminate_all(controller->dma_tx); 1813 + dmaengine_terminate_all(controller->dma_rx); 1814 + return -EINTR; 1815 + } 1816 + 1817 + if (wait_for_completion_interruptible(&spi_imx->dma_rx_completion) || 1818 + READ_ONCE(spi_imx->target_aborted)) { 1819 + dev_dbg(spi_imx->dev, "I/O Error in DMA RX interrupted\n"); 1820 + dmaengine_terminate_all(controller->dma_rx); 1821 + return -EINTR; 1822 + } 1823 + 1824 + /* 1825 + * ECSPI has a HW issue when works in Target mode, after 64 words 1826 + * writtern to TXFIFO, even TXFIFO becomes empty, ECSPI_TXDATA keeps 1827 + * shift out the last word data, so we have to disable ECSPI when in 1828 + * target mode after the transfer completes. 
1829 + */ 1830 + if (spi_imx->devtype_data->disable) 1831 + spi_imx->devtype_data->disable(spi_imx); 1832 + } 1833 + 1834 + return 0; 1835 + 1836 + dmaengine_terminate_tx: 1837 + dmaengine_terminate_all(controller->dma_tx); 1838 + dmaengine_terminate_rx: 1839 + dmaengine_terminate_all(controller->dma_rx); 1840 + 1841 + return -EINVAL; 1842 + } 1843 + 1844 + static void spi_imx_dma_max_wml_find(struct spi_imx_data *spi_imx, 1845 + struct dma_data_package *dma_data, 1846 + bool word_delay) 1847 + { 1848 + unsigned int bytes_per_word = word_delay ? 1849 + spi_imx_bytes_per_word(spi_imx->bits_per_word) : 1850 + BYTES_PER_32BITS_WORD; 1851 + unsigned int i; 1852 + 1483 1853 for (i = spi_imx->devtype_data->fifo_size / 2; i > 0; i--) { 1484 - if (!(sg_dma_len(last_sg) % (i * bytes_per_word))) 1854 + if (!dma_data->dma_len % (i * bytes_per_word)) 1485 1855 break; 1486 1856 } 1487 1857 /* Use 1 as wml in case no available burst length got */ 1488 1858 if (i == 0) 1489 1859 i = 1; 1490 1860 1491 - spi_imx->wml = i; 1861 + spi_imx->wml = i; 1862 + } 1492 1863 1493 - ret = spi_imx_dma_configure(controller); 1864 + static int spi_imx_dma_configure(struct spi_controller *controller, bool word_delay) 1865 + { 1866 + int ret; 1867 + enum dma_slave_buswidth buswidth; 1868 + struct dma_slave_config rx = {}, tx = {}; 1869 + struct spi_imx_data *spi_imx = spi_controller_get_devdata(controller); 1870 + 1871 + if (word_delay) { 1872 + switch (spi_imx_bytes_per_word(spi_imx->bits_per_word)) { 1873 + case 4: 1874 + buswidth = DMA_SLAVE_BUSWIDTH_4_BYTES; 1875 + break; 1876 + case 2: 1877 + buswidth = DMA_SLAVE_BUSWIDTH_2_BYTES; 1878 + break; 1879 + case 1: 1880 + buswidth = DMA_SLAVE_BUSWIDTH_1_BYTE; 1881 + break; 1882 + default: 1883 + return -EINVAL; 1884 + } 1885 + } else { 1886 + buswidth = DMA_SLAVE_BUSWIDTH_4_BYTES; 1887 + } 1888 + 1889 + tx.direction = DMA_MEM_TO_DEV; 1890 + tx.dst_addr = spi_imx->base_phys + MXC_CSPITXDATA; 1891 + tx.dst_addr_width = buswidth; 1892 + tx.dst_maxburst = spi_imx->wml; 1893 + ret = dmaengine_slave_config(controller->dma_tx, &tx); 1894 + if (ret) { 1895 + dev_err(spi_imx->dev, "TX dma configuration failed with %d\n", ret); 1896 + return ret; 1897 + } 1898 + 1899 + rx.direction = DMA_DEV_TO_MEM; 1900 + rx.src_addr = spi_imx->base_phys + MXC_CSPIRXDATA; 1901 + rx.src_addr_width = buswidth; 1902 + rx.src_maxburst = spi_imx->wml; 1903 + ret = dmaengine_slave_config(controller->dma_rx, &rx); 1904 + if (ret) { 1905 + dev_err(spi_imx->dev, "RX dma configuration failed with %d\n", ret); 1906 + return ret; 1907 + } 1908 + 1909 + return 0; 1910 + } 1911 + 1912 + static int spi_imx_dma_package_transfer(struct spi_imx_data *spi_imx, 1913 + struct dma_data_package *dma_data, 1914 + struct spi_transfer *transfer, 1915 + bool word_delay) 1916 + { 1917 + struct spi_controller *controller = spi_imx->controller; 1918 + int ret; 1919 + 1920 + spi_imx_dma_max_wml_find(spi_imx, dma_data, word_delay); 1921 + 1922 + ret = spi_imx_dma_configure(controller, word_delay); 1494 1923 if (ret) 1495 1924 goto dma_failure_no_start; 1496 1925 ··· 1915 1516 } 1916 1517 spi_imx->devtype_data->setup_wml(spi_imx); 1917 1518 1918 - /* 1919 - * The TX DMA setup starts the transfer, so make sure RX is configured 1920 - * before TX. 
1921 - */ 1922 - desc_rx = dmaengine_prep_slave_sg(controller->dma_rx, 1923 - rx->sgl, rx->nents, DMA_DEV_TO_MEM, 1924 - DMA_PREP_INTERRUPT | DMA_CTRL_ACK); 1925 - if (!desc_rx) { 1926 - ret = -EINVAL; 1927 - goto dma_failure_no_start; 1928 - } 1519 + ret = spi_imx_dma_submit(spi_imx, dma_data, transfer); 1520 + if (ret) 1521 + return ret; 1929 1522 1930 - desc_rx->callback = spi_imx_dma_rx_callback; 1931 - desc_rx->callback_param = (void *)spi_imx; 1932 - dmaengine_submit(desc_rx); 1933 - reinit_completion(&spi_imx->dma_rx_completion); 1934 - dma_async_issue_pending(controller->dma_rx); 1935 - 1936 - desc_tx = dmaengine_prep_slave_sg(controller->dma_tx, 1937 - tx->sgl, tx->nents, DMA_MEM_TO_DEV, 1938 - DMA_PREP_INTERRUPT | DMA_CTRL_ACK); 1939 - if (!desc_tx) { 1940 - dmaengine_terminate_all(controller->dma_tx); 1941 - dmaengine_terminate_all(controller->dma_rx); 1942 - return -EINVAL; 1943 - } 1944 - 1945 - desc_tx->callback = spi_imx_dma_tx_callback; 1946 - desc_tx->callback_param = (void *)spi_imx; 1947 - dmaengine_submit(desc_tx); 1948 - reinit_completion(&spi_imx->dma_tx_completion); 1949 - dma_async_issue_pending(controller->dma_tx); 1950 - 1951 - spi_imx->devtype_data->trigger(spi_imx); 1952 - 1953 - transfer_timeout = spi_imx_calculate_timeout(spi_imx, transfer->len); 1954 - 1955 - /* Wait SDMA to finish the data transfer.*/ 1956 - time_left = wait_for_completion_timeout(&spi_imx->dma_tx_completion, 1957 - transfer_timeout); 1958 - if (!time_left) { 1959 - dev_err(spi_imx->dev, "I/O Error in DMA TX\n"); 1960 - dmaengine_terminate_all(controller->dma_tx); 1961 - dmaengine_terminate_all(controller->dma_rx); 1962 - return -ETIMEDOUT; 1963 - } 1964 - 1965 - time_left = wait_for_completion_timeout(&spi_imx->dma_rx_completion, 1966 - transfer_timeout); 1967 - if (!time_left) { 1968 - dev_err(&controller->dev, "I/O Error in DMA RX\n"); 1969 - spi_imx->devtype_data->reset(spi_imx); 1970 - dmaengine_terminate_all(controller->dma_rx); 1971 - return -ETIMEDOUT; 1972 - } 1523 + /* Trim the DMA RX buffer and copy the actual data to rx_buf */ 1524 + dma_sync_single_for_cpu(controller->dma_rx->device->dev, dma_data->dma_rx_addr, 1525 + dma_data->dma_len, DMA_FROM_DEVICE); 1526 + spi_imx_dma_rx_data_handle(spi_imx, dma_data, transfer->rx_buf + spi_imx->rx_offset, 1527 + word_delay); 1528 + spi_imx->rx_offset += dma_data->data_len; 1973 1529 1974 1530 return 0; 1975 1531 /* fallback to pio */ 1976 1532 dma_failure_no_start: 1977 1533 transfer->error |= SPI_TRANS_FAIL_NO_START; 1534 + return ret; 1535 + } 1536 + 1537 + static int spi_imx_dma_transfer(struct spi_imx_data *spi_imx, 1538 + struct spi_transfer *transfer) 1539 + { 1540 + bool word_delay = transfer->word_delay.value != 0 && !spi_imx->target_mode; 1541 + int ret; 1542 + int i; 1543 + 1544 + ret = spi_imx_dma_data_prepare(spi_imx, transfer, word_delay); 1545 + if (ret < 0) { 1546 + transfer->error |= SPI_TRANS_FAIL_NO_START; 1547 + dev_err(spi_imx->dev, "DMA data prepare fail\n"); 1548 + goto fallback_pio; 1549 + } 1550 + 1551 + spi_imx->rx_offset = 0; 1552 + 1553 + /* Each dma_package performs a separate DMA transfer once */ 1554 + for (i = 0; i < spi_imx->dma_package_num; i++) { 1555 + ret = spi_imx_dma_map(spi_imx, &spi_imx->dma_data[i]); 1556 + if (ret < 0) { 1557 + if (i == 0) 1558 + transfer->error |= SPI_TRANS_FAIL_NO_START; 1559 + dev_err(spi_imx->dev, "DMA map fail\n"); 1560 + break; 1561 + } 1562 + 1563 + /* Update the CTRL register BL field */ 1564 + writel(spi_imx->dma_data[i].cmd_word, spi_imx->base + MX51_ECSPI_CTRL); 1565 + 
1566 + ret = spi_imx_dma_package_transfer(spi_imx, &spi_imx->dma_data[i], 1567 + transfer, word_delay); 1568 + 1569 + /* Whether the dma transmission is successful or not, dma unmap is necessary */ 1570 + spi_imx_dma_unmap(spi_imx, &spi_imx->dma_data[i]); 1571 + 1572 + if (ret < 0) { 1573 + dev_dbg(spi_imx->dev, "DMA %d transfer not really finish\n", i); 1574 + break; 1575 + } 1576 + } 1577 + 1578 + for (int j = 0; j < spi_imx->dma_package_num; j++) { 1579 + kfree(spi_imx->dma_data[j].dma_tx_buf); 1580 + kfree(spi_imx->dma_data[j].dma_rx_buf); 1581 + } 1582 + kfree(spi_imx->dma_data); 1583 + 1584 + fallback_pio: 1978 1585 return ret; 1979 1586 } 1980 1587 ··· 2142 1737 while (spi_imx->devtype_data->rx_available(spi_imx)) 2143 1738 readl(spi_imx->base + MXC_CSPIRXDATA); 2144 1739 2145 - if (spi_imx->target_mode) 1740 + if (spi_imx->target_mode && !spi_imx->usedma) 2146 1741 return spi_imx_pio_transfer_target(spi, transfer); 2147 1742 2148 1743 /* ··· 2150 1745 * transfer, the SPI transfer has already been mapped, so we 2151 1746 * have to do the DMA transfer here. 2152 1747 */ 2153 - if (spi_imx->usedma) 2154 - return spi_imx_dma_transfer(spi_imx, transfer); 2155 - 1748 + if (spi_imx->usedma) { 1749 + ret = spi_imx_dma_transfer(spi_imx, transfer); 1750 + if (transfer->error & SPI_TRANS_FAIL_NO_START) { 1751 + spi_imx->usedma = false; 1752 + if (spi_imx->target_mode) 1753 + return spi_imx_pio_transfer_target(spi, transfer); 1754 + else 1755 + return spi_imx_pio_transfer(spi, transfer); 1756 + } 1757 + return ret; 1758 + } 2156 1759 /* run in polling mode for short transfers */ 2157 1760 if (transfer->len == 1 || (polling_limit_us && 2158 1761 spi_imx_transfer_estimate_time_us(transfer) < polling_limit_us)) ··· 2368 1955 2369 1956 spi_imx->devtype_data->intctrl(spi_imx, 0); 2370 1957 2371 - controller->dev.of_node = pdev->dev.of_node; 2372 1958 ret = spi_register_controller(controller); 2373 1959 if (ret) { 2374 1960 dev_err_probe(&pdev->dev, ret, "register controller failed\n");
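
spi_imx_dma_data_prepare() above derives one or two DMA packages from the transfer length: lengths up to 512 bytes (or exact multiples of 512) fit a single package, anything else gets a 512-byte-aligned main package plus a tail, each carrying its own BURST_LENGTH value of (bytes * 8) - 1. A standalone sketch of that arithmetic (plain C, illustrative only, not the driver code):

#include <stdio.h>

#define MAX_BURST 512  /* mirrors MX51_ECSPI_CTRL_MAX_BURST */

struct pkg { unsigned int len; unsigned int burst_len; };

/* Illustrative split into at most two packages, following the rules above. */
static int split_packages(unsigned int xfer_len, struct pkg out[2])
{
	if (xfer_len <= MAX_BURST || !(xfer_len % MAX_BURST)) {
		unsigned int burst = (xfer_len <= MAX_BURST) ? xfer_len : MAX_BURST;

		out[0].len = xfer_len;
		out[0].burst_len = burst * 8 - 1;
		return 1;
	}

	out[0].len = xfer_len - (xfer_len % MAX_BURST); /* round down to 512 */
	out[0].burst_len = MAX_BURST * 8 - 1;
	out[1].len = xfer_len % MAX_BURST;              /* tail package */
	out[1].burst_len = out[1].len * 8 - 1;
	return 2;
}

int main(void)
{
	struct pkg p[2];
	int n = split_packages(1300, p); /* 1024-byte package + 276-byte tail */

	for (int i = 0; i < n; i++)
		printf("pkg %d: len=%u BURST_LENGTH=%u\n", i, p[i].len, p[i].burst_len);
	return 0;
}

Each package then runs as a separate DMA transfer, with the CTRL register's BL field rewritten in between, which is exactly what the per-package loop in spi_imx_dma_transfer() does.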
-1
drivers/spi/spi-ingenic.c
··· 442 442 ctlr->use_gpio_descriptors = true; 443 443 ctlr->max_native_cs = pdata->max_native_cs; 444 444 ctlr->num_chipselect = num_cs; 445 - ctlr->dev.of_node = pdev->dev.of_node; 446 445 447 446 if (spi_ingenic_request_dma(ctlr, dev)) 448 447 dev_warn(dev, "DMA not available.\n");
-1
drivers/spi/spi-lantiq-ssc.c
··· 962 962 spi->bits_per_word = 8; 963 963 spi->speed_hz = 0; 964 964 965 - host->dev.of_node = pdev->dev.of_node; 966 965 host->num_chipselect = num_cs; 967 966 host->use_gpio_descriptors = true; 968 967 host->setup = lantiq_ssc_setup;
-1
drivers/spi/spi-ljca.c
··· 238 238 controller->auto_runtime_pm = false; 239 239 controller->max_speed_hz = LJCA_SPI_BUS_MAX_HZ; 240 240 241 - device_set_node(&ljca_spi->controller->dev, dev_fwnode(&auxdev->dev)); 242 241 auxiliary_set_drvdata(auxdev, controller); 243 242 244 243 ret = spi_register_controller(controller);
-1
drivers/spi/spi-loongson-core.c
··· 210 210 controller->unprepare_message = loongson_spi_unprepare_message; 211 211 controller->set_cs = loongson_spi_set_cs; 212 212 controller->num_chipselect = 4; 213 - device_set_node(&controller->dev, dev_fwnode(dev)); 214 213 dev_set_drvdata(dev, controller); 215 214 216 215 spi = spi_controller_get_devdata(controller);
-1
drivers/spi/spi-lp8841-rtc.c
··· 200 200 host->transfer_one = spi_lp8841_rtc_transfer_one; 201 201 host->bits_per_word_mask = SPI_BPW_MASK(8); 202 202 #ifdef CONFIG_OF 203 - host->dev.of_node = pdev->dev.of_node; 204 203 #endif 205 204 206 205 data = spi_controller_get_devdata(host);
+23 -3
drivers/spi/spi-mem.c
··· 178 178 if (op->data.swap16 && !spi_mem_controller_is_capable(ctlr, swap16))
179 179 return false;
180 180
181 - if (op->cmd.nbytes != 2)
182 - return false;
181 + /* Extra 8D-8D-8D limitations */
182 + if (op->cmd.dtr && op->cmd.buswidth == 8) {
183 + if (op->cmd.nbytes != 2)
184 + return false;
185 +
186 + if ((op->addr.nbytes % 2) ||
187 + (op->dummy.nbytes % 2) ||
188 + (op->data.nbytes % 2)) {
189 + dev_err(&ctlr->dev,
190 + "Odd byte counts not allowed in octal DTR operations\n");
191 + return false;
192 + }
193 + }
183 194 } else {
184 195 if (op->cmd.nbytes != 1)
185 196 return false;
··· 719 708
720 709 desc->mem = mem;
721 710 desc->info = *info;
722 - if (ctlr->mem_ops && ctlr->mem_ops->dirmap_create) 711 + if (ctlr->mem_ops && ctlr->mem_ops->dirmap_create) {
712 + ret = spi_mem_access_start(mem);
713 + if (ret) {
714 + kfree(desc);
715 + return ERR_PTR(ret);
716 + }
717 +
723 718 ret = ctlr->mem_ops->dirmap_create(desc);
719 +
720 + spi_mem_access_end(mem);
721 + }
724 722
725 723 if (ret) {
726 724 desc->nodirmap = true;
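
The 8D-8D-8D check above only applies the 2-byte opcode and even-length rules when the command phase is both DTR and 8 bits wide. A minimal standalone predicate expressing the same rules (hypothetical trimmed-down struct, not the spi-mem API):

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical, trimmed-down view of an operation's phase lengths. */
struct op {
	bool cmd_dtr;
	unsigned int cmd_buswidth;
	unsigned int cmd_nbytes, addr_nbytes, dummy_nbytes, data_nbytes;
};

static bool octal_dtr_ok(const struct op *op)
{
	if (!(op->cmd_dtr && op->cmd_buswidth == 8))
		return true;                      /* rules only apply to 8D-8D-8D */

	if (op->cmd_nbytes != 2)
		return false;                     /* opcode is sent as two bytes */

	/* Every remaining phase must carry an even number of bytes. */
	return !(op->addr_nbytes % 2) && !(op->dummy_nbytes % 2) &&
	       !(op->data_nbytes % 2);
}

int main(void)
{
	struct op good = { true, 8, 2, 4, 2, 256 };
	struct op bad  = { true, 8, 2, 3, 2, 256 };  /* odd address length */

	printf("good: %d, bad: %d\n", octal_dtr_ok(&good), octal_dtr_ok(&bad));
	return 0;
}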
-1
drivers/spi/spi-meson-spicc.c
··· 1054 1054 device_reset_optional(&pdev->dev); 1055 1055 1056 1056 host->num_chipselect = 4; 1057 - host->dev.of_node = pdev->dev.of_node; 1058 1057 host->mode_bits = SPI_CPHA | SPI_CPOL | SPI_CS_HIGH | SPI_LOOP; 1059 1058 host->flags = (SPI_CONTROLLER_MUST_RX | SPI_CONTROLLER_MUST_TX); 1060 1059 host->min_speed_hz = spicc->data->min_speed_hz;
-1
drivers/spi/spi-meson-spifc.c
··· 322 322 rate = clk_get_rate(spifc->clk); 323 323 324 324 host->num_chipselect = 1; 325 - host->dev.of_node = pdev->dev.of_node; 326 325 host->bits_per_word_mask = SPI_BPW_MASK(8); 327 326 host->auto_runtime_pm = true; 328 327 host->transfer_one = meson_spifc_transfer_one;
+1 -2
drivers/spi/spi-microchip-core-spi.c
··· 161 161 return -EOPNOTSUPP; 162 162 } 163 163 164 - if (spi->mode & SPI_MODE_X_MASK & ~spi->controller->mode_bits) { 164 + if ((spi->mode ^ spi->controller->mode_bits) & SPI_MODE_X_MASK) { 165 165 dev_err(&spi->dev, "incompatible CPOL/CPHA, must match controller's Motorola mode\n"); 166 166 return -EINVAL; 167 167 } ··· 360 360 host->bits_per_word_mask = SPI_BPW_RANGE_MASK(4, 32); 361 361 host->transfer_one = mchp_corespi_transfer_one; 362 362 host->set_cs = mchp_corespi_set_cs; 363 - host->dev.of_node = dev->of_node; 364 363 365 364 ret = of_property_read_u32(dev->of_node, "fifo-depth", &spi->fifo_depth); 366 365 if (ret)
-2
drivers/spi/spi-mpc512x-psc.c
··· 480 480 host->use_gpio_descriptors = true; 481 481 host->cleanup = mpc512x_psc_spi_cleanup; 482 482 483 - device_set_node(&host->dev, dev_fwnode(dev)); 484 - 485 483 tempp = devm_platform_get_and_ioremap_resource(pdev, 0, NULL); 486 484 if (IS_ERR(tempp)) 487 485 return dev_err_probe(dev, PTR_ERR(tempp), "could not ioremap I/O port range\n");
-2
drivers/spi/spi-mpc52xx-psc.c
··· 319 319 host->transfer_one_message = mpc52xx_psc_spi_transfer_one_message; 320 320 host->cleanup = mpc52xx_psc_spi_cleanup; 321 321 322 - device_set_node(&host->dev, dev_fwnode(dev)); 323 - 324 322 mps->psc = devm_platform_get_and_ioremap_resource(pdev, 0, NULL); 325 323 if (IS_ERR(mps->psc)) 326 324 return dev_err_probe(dev, PTR_ERR(mps->psc), "could not ioremap I/O port range\n");
-1
drivers/spi/spi-mpc52xx.c
··· 430 430 host->transfer = mpc52xx_spi_transfer; 431 431 host->mode_bits = SPI_CPOL | SPI_CPHA | SPI_LSB_FIRST; 432 432 host->bits_per_word_mask = SPI_BPW_MASK(8); 433 - host->dev.of_node = op->dev.of_node; 434 433 435 434 platform_set_drvdata(op, host); 436 435
-1
drivers/spi/spi-mpfs.c
··· 550 550 host->transfer_one = mpfs_spi_transfer_one; 551 551 host->prepare_message = mpfs_spi_prepare_message; 552 552 host->set_cs = mpfs_spi_set_cs; 553 - host->dev.of_node = pdev->dev.of_node; 554 553 555 554 spi = spi_controller_get_devdata(host); 556 555
-1
drivers/spi/spi-mt65xx.c
··· 1184 1184 return -ENOMEM; 1185 1185 1186 1186 host->auto_runtime_pm = true; 1187 - host->dev.of_node = dev->of_node; 1188 1187 host->mode_bits = SPI_CPOL | SPI_CPHA | SPI_LSB_FIRST; 1189 1188 1190 1189 host->set_cs = mtk_spi_set_cs;
-1
drivers/spi/spi-mt7621.c
··· 348 348 host->set_cs = mt7621_spi_set_native_cs; 349 349 host->transfer_one = mt7621_spi_transfer_one; 350 350 host->bits_per_word_mask = SPI_BPW_MASK(8); 351 - host->dev.of_node = pdev->dev.of_node; 352 351 host->max_native_cs = MT7621_NATIVE_CS_COUNT; 353 352 host->num_chipselect = MT7621_NATIVE_CS_COUNT; 354 353 host->use_gpio_descriptors = true;
-1
drivers/spi/spi-mtk-nor.c
··· 851 851 } 852 852 853 853 ctlr->bits_per_word_mask = SPI_BPW_MASK(8); 854 - ctlr->dev.of_node = pdev->dev.of_node; 855 854 ctlr->max_message_size = mtk_max_msg_size; 856 855 ctlr->mem_ops = &mtk_nor_mem_ops; 857 856 ctlr->mode_bits = SPI_RX_DUAL | SPI_RX_QUAD | SPI_TX_DUAL | SPI_TX_QUAD;
-1
drivers/spi/spi-mtk-snfi.c
··· 1448 1448 ctlr->mem_caps = &mtk_snand_mem_caps; 1449 1449 ctlr->bits_per_word_mask = SPI_BPW_MASK(8); 1450 1450 ctlr->mode_bits = SPI_RX_DUAL | SPI_RX_QUAD | SPI_TX_DUAL | SPI_TX_QUAD; 1451 - ctlr->dev.of_node = pdev->dev.of_node; 1452 1451 ret = spi_register_controller(ctlr); 1453 1452 if (ret) { 1454 1453 dev_err(&pdev->dev, "spi_register_controller failed.\n");
-1
drivers/spi/spi-mux.c
··· 161 161 ctlr->setup = spi_mux_setup; 162 162 ctlr->num_chipselect = mux_control_states(priv->mux); 163 163 ctlr->bus_num = -1; 164 - ctlr->dev.of_node = spi->dev.of_node; 165 164 ctlr->must_async = true; 166 165 ctlr->defer_optimize_message = true; 167 166
-1
drivers/spi/spi-mxic.c
··· 768 768 mxic = spi_controller_get_devdata(host); 769 769 mxic->dev = &pdev->dev; 770 770 771 - host->dev.of_node = pdev->dev.of_node; 772 771 773 772 mxic->ps_clk = devm_clk_get(&pdev->dev, "ps_clk"); 774 773 if (IS_ERR(mxic->ps_clk))
-1
drivers/spi/spi-npcm-fiu.c
··· 746 746 ctrl->bus_num = -1; 747 747 ctrl->mem_ops = &npcm_fiu_mem_ops; 748 748 ctrl->num_chipselect = fiu->info->max_cs; 749 - ctrl->dev.of_node = dev->of_node; 750 749 751 750 return devm_spi_register_controller(dev, ctrl); 752 751 }
-1
drivers/spi/spi-npcm-pspi.c
··· 401 401 host->max_speed_hz = DIV_ROUND_UP(clk_hz, NPCM_PSPI_MIN_CLK_DIVIDER); 402 402 host->min_speed_hz = DIV_ROUND_UP(clk_hz, NPCM_PSPI_MAX_CLK_DIVIDER); 403 403 host->mode_bits = SPI_CPHA | SPI_CPOL; 404 - host->dev.of_node = pdev->dev.of_node; 405 404 host->bus_num = -1; 406 405 host->bits_per_word_mask = SPI_BPW_MASK(8) | SPI_BPW_MASK(16); 407 406 host->transfer_one = npcm_pspi_transfer_one;
-2
drivers/spi/spi-nxp-fspi.c
··· 1383 1383 else 1384 1384 ctlr->mem_caps = &nxp_fspi_mem_caps; 1385 1385 1386 - device_set_node(&ctlr->dev, fwnode); 1387 - 1388 1386 ret = devm_add_action_or_reset(dev, nxp_fspi_cleanup, f); 1389 1387 if (ret) 1390 1388 return ret;
+1384
drivers/spi/spi-nxp-xspi.c
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + 3 + /* 4 + * NXP xSPI controller driver. 5 + * 6 + * Copyright 2025 NXP 7 + * 8 + * xSPI is a flexible SPI host controller which supports single 9 + * external devices. This device can have up to eight bidirectional 10 + * data lines, this means xSPI support Single/Dual/Quad/Octal mode 11 + * data transfer (1/2/4/8 bidirectional data lines). 12 + * 13 + * xSPI controller is driven by the LUT(Look-up Table) registers 14 + * LUT registers are a look-up-table for sequences of instructions. 15 + * A valid sequence consists of five LUT registers. 16 + * Maximum 16 LUT sequences can be programmed simultaneously. 17 + * 18 + * LUTs are being created at run-time based on the commands passed 19 + * from the spi-mem framework, thus using single LUT index. 20 + * 21 + * Software triggered Flash read/write access by IP Bus. 22 + * 23 + * Memory mapped read access by AHB Bus. 24 + * 25 + * Based on SPI MEM interface and spi-nxp-fspi.c driver. 26 + * 27 + * Author: 28 + * Haibo Chen <haibo.chen@nxp.com> 29 + * Co-author: 30 + * Han Xu <han.xu@nxp.com> 31 + */ 32 + 33 + #include <linux/bitops.h> 34 + #include <linux/bitfield.h> 35 + #include <linux/clk.h> 36 + #include <linux/completion.h> 37 + #include <linux/delay.h> 38 + #include <linux/err.h> 39 + #include <linux/errno.h> 40 + #include <linux/interrupt.h> 41 + #include <linux/io.h> 42 + #include <linux/iopoll.h> 43 + #include <linux/jiffies.h> 44 + #include <linux/kernel.h> 45 + #include <linux/log2.h> 46 + #include <linux/module.h> 47 + #include <linux/mutex.h> 48 + #include <linux/of.h> 49 + #include <linux/platform_device.h> 50 + #include <linux/pinctrl/consumer.h> 51 + #include <linux/pm_runtime.h> 52 + #include <linux/spi/spi.h> 53 + #include <linux/spi/spi-mem.h> 54 + 55 + /* Runtime pm timeout */ 56 + #define XSPI_RPM_TIMEOUT_MS 50 /* 50ms */ 57 + /* 58 + * The driver only uses one single LUT entry, that is updated on 59 + * each call of exec_op(). Index 0 is preset at boot with a basic 60 + * read operation, so let's use the last entry (15). 
61 + */ 62 + #define XSPI_SEQID_LUT 15 63 + 64 + #define XSPI_MCR 0x0 65 + #define XSPI_MCR_CKN_FA_EN BIT(26) 66 + #define XSPI_MCR_DQS_FA_SEL_MASK GENMASK(25, 24) 67 + #define XSPI_MCR_ISD3FA BIT(17) 68 + #define XSPI_MCR_ISD2FA BIT(16) 69 + #define XSPI_MCR_DOZE BIT(15) 70 + #define XSPI_MCR_MDIS BIT(14) 71 + #define XSPI_MCR_DLPEN BIT(12) 72 + #define XSPI_MCR_CLR_TXF BIT(11) 73 + #define XSPI_MCR_CLR_RXF BIT(10) 74 + #define XSPI_MCR_IPS_TG_RST BIT(9) 75 + #define XSPI_MCR_VAR_LAT_EN BIT(8) 76 + #define XSPI_MCR_DDR_EN BIT(7) 77 + #define XSPI_MCR_DQS_EN BIT(6) 78 + #define XSPI_MCR_DQS_LAT_EN BIT(5) 79 + #define XSPI_MCR_DQS_OUT_EN BIT(4) 80 + #define XSPI_MCR_SWRSTHD BIT(1) 81 + #define XSPI_MCR_SWRSTSD BIT(0) 82 + 83 + #define XSPI_IPCR 0x8 84 + 85 + #define XSPI_FLSHCR 0xC 86 + #define XSPI_FLSHCR_TDH_MASK GENMASK(17, 16) 87 + #define XSPI_FLSHCR_TCSH_MASK GENMASK(11, 8) 88 + #define XSPI_FLSHCR_TCSS_MASK GENMASK(3, 0) 89 + 90 + #define XSPI_BUF0CR 0x10 91 + #define XSPI_BUF1CR 0x14 92 + #define XSPI_BUF2CR 0x18 93 + #define XSPI_BUF3CR 0x1C 94 + #define XSPI_BUF3CR_ALLMST BIT(31) 95 + #define XSPI_BUF3CR_ADATSZ_MASK GENMASK(17, 8) 96 + #define XSPI_BUF3CR_MSTRID_MASK GENMASK(3, 0) 97 + 98 + #define XSPI_BFGENCR 0x20 99 + #define XSPI_BFGENCR_SEQID_WR_MASK GENMASK(31, 28) 100 + #define XSPI_BFGENCR_ALIGN_MASK GENMASK(24, 22) 101 + #define XSPI_BFGENCR_PPWF_CLR BIT(20) 102 + #define XSPI_BFGENCR_WR_FLUSH_EN BIT(21) 103 + #define XSPI_BFGENCR_SEQID_WR_EN BIT(17) 104 + #define XSPI_BFGENCR_SEQID_MASK GENMASK(15, 12) 105 + 106 + #define XSPI_BUF0IND 0x30 107 + #define XSPI_BUF1IND 0x34 108 + #define XSPI_BUF2IND 0x38 109 + 110 + #define XSPI_DLLCRA 0x60 111 + #define XSPI_DLLCRA_DLLEN BIT(31) 112 + #define XSPI_DLLCRA_FREQEN BIT(30) 113 + #define XSPI_DLLCRA_DLL_REFCNTR_MASK GENMASK(27, 24) 114 + #define XSPI_DLLCRA_DLLRES_MASK GENMASK(23, 20) 115 + #define XSPI_DLLCRA_SLV_FINE_MASK GENMASK(19, 16) 116 + #define XSPI_DLLCRA_SLV_DLY_MASK GENMASK(14, 12) 117 + #define XSPI_DLLCRA_SLV_DLY_COARSE_MASK GENMASK(11, 8) 118 + #define XSPI_DLLCRA_SLV_DLY_FINE_MASK GENMASK(7, 5) 119 + #define XSPI_DLLCRA_DLL_CDL8 BIT(4) 120 + #define XSPI_DLLCRA_SLAVE_AUTO_UPDT BIT(3) 121 + #define XSPI_DLLCRA_SLV_EN BIT(2) 122 + #define XSPI_DLLCRA_SLV_DLL_BYPASS BIT(1) 123 + #define XSPI_DLLCRA_SLV_UPD BIT(0) 124 + 125 + #define XSPI_SFAR 0x100 126 + 127 + #define XSPI_SFACR 0x104 128 + #define XSPI_SFACR_FORCE_A10 BIT(22) 129 + #define XSPI_SFACR_WA_4B_EN BIT(21) 130 + #define XSPI_SFACR_CAS_INTRLVD BIT(20) 131 + #define XSPI_SFACR_RX_BP_EN BIT(18) 132 + #define XSPI_SFACR_BYTE_SWAP BIT(17) 133 + #define XSPI_SFACR_WA BIT(16) 134 + #define XSPI_SFACR_CAS_MASK GENMASK(3, 0) 135 + 136 + #define XSPI_SMPR 0x108 137 + #define XSPI_SMPR_DLLFSMPFA_MASK GENMASK(26, 24) 138 + #define XSPI_SMPR_FSDLY BIT(6) 139 + #define XSPI_SMPR_FSPHS BIT(5) 140 + 141 + #define XSPI_RBSR 0x10C 142 + 143 + #define XSPI_RBCT 0x110 144 + #define XSPI_RBCT_WMRK_MASK GENMASK(6, 0) 145 + 146 + #define XSPI_DLLSR 0x12C 147 + #define XSPI_DLLSR_DLLA_LOCK BIT(15) 148 + #define XSPI_DLLSR_SLVA_LOCK BIT(14) 149 + #define XSPI_DLLSR_DLLA_RANGE_ERR BIT(13) 150 + #define XSPI_DLLSR_DLLA_FINE_UNDERFLOW BIT(12) 151 + 152 + #define XSPI_TBSR 0x150 153 + 154 + #define XSPI_TBDR 0x154 155 + 156 + #define XSPI_TBCT 0x158 157 + #define XSPI_TBCT_WMRK_MASK GENMASK(7, 0) 158 + 159 + #define XSPI_SR 0x15C 160 + #define XSPI_SR_TXFULL BIT(27) 161 + #define XSPI_SR_TXDMA BIT(26) 162 + #define XSPI_SR_TXWA BIT(25) 163 + #define XSPI_SR_TXNE BIT(24) 164 + 
#define XSPI_SR_RXDMA BIT(23) 165 + #define XSPI_SR_ARB_STATE_MASK GENMASK(23, 20) 166 + #define XSPI_SR_RXFULL BIT(19) 167 + #define XSPI_SR_RXWE BIT(16) 168 + #define XSPI_SR_ARB_LCK BIT(15) 169 + #define XSPI_SR_AHBnFUL BIT(11) 170 + #define XSPI_SR_AHBnNE BIT(7) 171 + #define XSPI_SR_AHBTRN BIT(6) 172 + #define XSPI_SR_AWRACC BIT(4) 173 + #define XSPI_SR_AHB_ACC BIT(2) 174 + #define XSPI_SR_IP_ACC BIT(1) 175 + #define XSPI_SR_BUSY BIT(0) 176 + 177 + #define XSPI_FR 0x160 178 + #define XSPI_FR_DLPFF BIT(31) 179 + #define XSPI_FR_DLLABRT BIT(28) 180 + #define XSPI_FR_TBFF BIT(27) 181 + #define XSPI_FR_TBUF BIT(26) 182 + #define XSPI_FR_DLLUNLCK BIT(24) 183 + #define XSPI_FR_ILLINE BIT(23) 184 + #define XSPI_FR_RBOF BIT(17) 185 + #define XSPI_FR_RBDF BIT(16) 186 + #define XSPI_FR_AAEF BIT(15) 187 + #define XSPI_FR_AITEF BIT(14) 188 + #define XSPI_FR_AIBSEF BIT(13) 189 + #define XSPI_FR_ABOF BIT(12) 190 + #define XSPI_FR_CRCAEF BIT(10) 191 + #define XSPI_FR_PPWF BIT(8) 192 + #define XSPI_FR_IPIEF BIT(6) 193 + #define XSPI_FR_IPEDERR BIT(5) 194 + #define XSPI_FR_PERFOVF BIT(2) 195 + #define XSPI_FR_RDADDR BIT(1) 196 + #define XSPI_FR_TFF BIT(0) 197 + 198 + #define XSPI_RSER 0x164 199 + #define XSPI_RSER_TFIE BIT(0) 200 + 201 + #define XSPI_SFA1AD 0x180 202 + 203 + #define XSPI_SFA2AD 0x184 204 + 205 + #define XSPI_RBDR0 0x200 206 + 207 + #define XSPI_LUTKEY 0x300 208 + #define XSPI_LUT_KEY_VAL (0x5AF05AF0UL) 209 + 210 + #define XSPI_LCKCR 0x304 211 + #define XSPI_LOKCR_LOCK BIT(0) 212 + #define XSPI_LOKCR_UNLOCK BIT(1) 213 + 214 + #define XSPI_LUT 0x310 215 + #define XSPI_LUT_OFFSET (XSPI_SEQID_LUT * 5 * 4) 216 + #define XSPI_LUT_REG(idx) \ 217 + (XSPI_LUT + XSPI_LUT_OFFSET + (idx) * 4) 218 + 219 + #define XSPI_MCREXT 0x4FC 220 + #define XSPI_MCREXT_RST_MASK GENMASK(3, 0) 221 + 222 + 223 + #define XSPI_FRAD0_WORD2 0x808 224 + #define XSPI_FRAD0_WORD2_MD0ACP_MASK GENMASK(2, 0) 225 + 226 + #define XSPI_FRAD0_WORD3 0x80C 227 + #define XSPI_FRAD0_WORD3_VLD BIT(31) 228 + 229 + #define XSPI_TG0MDAD 0x900 230 + #define XSPI_TG0MDAD_VLD BIT(31) 231 + 232 + #define XSPI_TG1MDAD 0x910 233 + 234 + #define XSPI_MGC 0x920 235 + #define XSPI_MGC_GVLD BIT(31) 236 + #define XSPI_MGC_GVLDMDAD BIT(29) 237 + #define XSPI_MGC_GVLDFRAD BIT(27) 238 + 239 + #define XSPI_MTO 0x928 240 + 241 + #define XSPI_ERRSTAT 0x938 242 + #define XSPI_INT_EN 0x93C 243 + 244 + #define XSPI_SFP_TG_IPCR 0x958 245 + #define XSPI_SFP_TG_IPCR_SEQID_MASK GENMASK(27, 24) 246 + #define XSPI_SFP_TG_IPCR_ARB_UNLOCK BIT(23) 247 + #define XSPI_SFP_TG_IPCR_ARB_LOCK BIT(22) 248 + #define XSPI_SFP_TG_IPCR_IDATSZ_MASK GENMASK(15, 0) 249 + 250 + #define XSPI_SFP_TG_SFAR 0x95C 251 + 252 + /* Register map end */ 253 + 254 + /********* XSPI CMD definitions ***************************/ 255 + #define LUT_STOP 0x00 256 + #define LUT_CMD_SDR 0x01 257 + #define LUT_ADDR_SDR 0x02 258 + #define LUT_DUMMY 0x03 259 + #define LUT_MODE8_SDR 0x04 260 + #define LUT_MODE2_SDR 0x05 261 + #define LUT_MODE4_SDR 0x06 262 + #define LUT_READ_SDR 0x07 263 + #define LUT_WRITE_SDR 0x08 264 + #define LUT_JMP_ON_CS 0x09 265 + #define LUT_ADDR_DDR 0x0A 266 + #define LUT_MODE8_DDR 0x0B 267 + #define LUT_MODE2_DDR 0x0C 268 + #define LUT_MODE4_DDR 0x0D 269 + #define LUT_READ_DDR 0x0E 270 + #define LUT_WRITE_DDR 0x0F 271 + #define LUT_DATA_LEARN 0x10 272 + #define LUT_CMD_DDR 0x11 273 + #define LUT_CADDR_SDR 0x12 274 + #define LUT_CADDR_DDR 0x13 275 + #define JMP_TO_SEQ 0x14 276 + 277 + #define XSPI_64BIT_LE 0x3 278 + /* 279 + * Calculate number of required PAD bits for LUT 
register. 280 + * 281 + * The pad stands for the number of IO lines [0:7]. 282 + * For example, the octal read needs eight IO lines, 283 + * so you should use LUT_PAD(8). This macro 284 + * returns 3 i.e. use eight (2^3) IP lines for read. 285 + */ 286 + #define LUT_PAD(x) (fls(x) - 1) 287 + 288 + /* 289 + * Macro for constructing the LUT entries with the following 290 + * register layout: 291 + * 292 + * --------------------------------------------------- 293 + * | INSTR1 | PAD1 | OPRND1 | INSTR0 | PAD0 | OPRND0 | 294 + * --------------------------------------------------- 295 + */ 296 + #define PAD_SHIFT 8 297 + #define INSTR_SHIFT 10 298 + #define OPRND_SHIFT 16 299 + 300 + /* Macros for constructing the LUT register. */ 301 + #define LUT_DEF(idx, ins, pad, opr) \ 302 + ((((ins) << INSTR_SHIFT) | ((pad) << PAD_SHIFT) | \ 303 + (opr)) << (((idx) % 2) * OPRND_SHIFT)) 304 + 305 + #define NXP_XSPI_MIN_IOMAP SZ_4M 306 + #define NXP_XSPI_MAX_CHIPSELECT 2 307 + #define POLL_TOUT_US 5000 308 + 309 + /* Access flash memory using IP bus only */ 310 + #define XSPI_QUIRK_USE_IP_ONLY BIT(0) 311 + 312 + struct nxp_xspi_devtype_data { 313 + unsigned int rxfifo; 314 + unsigned int txfifo; 315 + unsigned int ahb_buf_size; 316 + unsigned int quirks; 317 + }; 318 + 319 + static struct nxp_xspi_devtype_data imx94_data = { 320 + .rxfifo = SZ_512, /* (128 * 4 bytes) */ 321 + .txfifo = SZ_1K, /* (256 * 4 bytes) */ 322 + .ahb_buf_size = SZ_4K, /* (1024 * 4 bytes) */ 323 + }; 324 + 325 + struct nxp_xspi { 326 + void __iomem *iobase; 327 + void __iomem *ahb_addr; 328 + u32 memmap_phy; 329 + u32 memmap_phy_size; 330 + u32 memmap_start; 331 + u32 memmap_len; 332 + struct clk *clk; 333 + struct device *dev; 334 + struct completion c; 335 + const struct nxp_xspi_devtype_data *devtype_data; 336 + /* mutex lock for each operation */ 337 + struct mutex lock; 338 + int selected; 339 + #define XSPI_DTR_PROTO BIT(0) 340 + int flags; 341 + /* Save the previous operation clock rate */ 342 + unsigned long pre_op_rate; 343 + /* The max clock rate xspi supported output to device */ 344 + unsigned long support_max_rate; 345 + }; 346 + 347 + static inline int needs_ip_only(struct nxp_xspi *xspi) 348 + { 349 + return xspi->devtype_data->quirks & XSPI_QUIRK_USE_IP_ONLY; 350 + } 351 + 352 + static irqreturn_t nxp_xspi_irq_handler(int irq, void *dev_id) 353 + { 354 + struct nxp_xspi *xspi = dev_id; 355 + u32 reg; 356 + 357 + reg = readl(xspi->iobase + XSPI_FR); 358 + if (reg & XSPI_FR_TFF) { 359 + /* Clear interrupt */ 360 + writel(XSPI_FR_TFF, xspi->iobase + XSPI_FR); 361 + complete(&xspi->c); 362 + return IRQ_HANDLED; 363 + } 364 + 365 + return IRQ_NONE; 366 + } 367 + 368 + static int nxp_xspi_check_buswidth(struct nxp_xspi *xspi, u8 width) 369 + { 370 + return (is_power_of_2(width) && width <= 8) ? 0 : -EOPNOTSUPP; 371 + } 372 + 373 + static bool nxp_xspi_supports_op(struct spi_mem *mem, 374 + const struct spi_mem_op *op) 375 + { 376 + struct nxp_xspi *xspi = spi_controller_get_devdata(mem->spi->controller); 377 + int ret; 378 + 379 + ret = nxp_xspi_check_buswidth(xspi, op->cmd.buswidth); 380 + 381 + if (op->addr.nbytes) 382 + ret |= nxp_xspi_check_buswidth(xspi, op->addr.buswidth); 383 + 384 + if (op->dummy.nbytes) 385 + ret |= nxp_xspi_check_buswidth(xspi, op->dummy.buswidth); 386 + 387 + if (op->data.nbytes) 388 + ret |= nxp_xspi_check_buswidth(xspi, op->data.buswidth); 389 + 390 + if (ret) 391 + return false; 392 + 393 + /* 394 + * The number of address bytes should be equal to or less than 4 bytes. 
395 + */ 396 + if (op->addr.nbytes > 4) 397 + return false; 398 + 399 + /* Max 32 dummy clock cycles supported */ 400 + if (op->dummy.buswidth && 401 + (op->dummy.nbytes * 8 / op->dummy.buswidth > 64)) 402 + return false; 403 + 404 + if (needs_ip_only(xspi) && op->data.dir == SPI_MEM_DATA_IN && 405 + op->data.nbytes > xspi->devtype_data->rxfifo) 406 + return false; 407 + 408 + if (op->data.dir == SPI_MEM_DATA_OUT && 409 + op->data.nbytes > xspi->devtype_data->txfifo) 410 + return false; 411 + 412 + return spi_mem_default_supports_op(mem, op); 413 + } 414 + 415 + static void nxp_xspi_prepare_lut(struct nxp_xspi *xspi, 416 + const struct spi_mem_op *op) 417 + { 418 + void __iomem *base = xspi->iobase; 419 + u32 lutval[5] = {}; 420 + int lutidx = 1, i; 421 + 422 + /* cmd */ 423 + if (op->cmd.dtr) { 424 + lutval[0] |= LUT_DEF(0, LUT_CMD_DDR, LUT_PAD(op->cmd.buswidth), 425 + op->cmd.opcode >> 8); 426 + lutval[lutidx / 2] |= LUT_DEF(lutidx, LUT_CMD_DDR, 427 + LUT_PAD(op->cmd.buswidth), 428 + op->cmd.opcode & 0x00ff); 429 + lutidx++; 430 + } else { 431 + lutval[0] |= LUT_DEF(0, LUT_CMD_SDR, LUT_PAD(op->cmd.buswidth), 432 + op->cmd.opcode); 433 + } 434 + 435 + /* Addr bytes */ 436 + if (op->addr.nbytes) { 437 + lutval[lutidx / 2] |= LUT_DEF(lutidx, op->addr.dtr ? 438 + LUT_ADDR_DDR : LUT_ADDR_SDR, 439 + LUT_PAD(op->addr.buswidth), 440 + op->addr.nbytes * 8); 441 + lutidx++; 442 + } 443 + 444 + /* Dummy bytes, if needed */ 445 + if (op->dummy.nbytes) { 446 + lutval[lutidx / 2] |= LUT_DEF(lutidx, LUT_DUMMY, 447 + LUT_PAD(op->data.buswidth), 448 + op->dummy.nbytes * 8 / 449 + /* need distinguish ddr mode */ 450 + op->dummy.buswidth / (op->dummy.dtr ? 2 : 1)); 451 + lutidx++; 452 + } 453 + 454 + /* Read/Write data bytes */ 455 + if (op->data.nbytes) { 456 + lutval[lutidx / 2] |= LUT_DEF(lutidx, 457 + op->data.dir == SPI_MEM_DATA_IN ? 458 + (op->data.dtr ? LUT_READ_DDR : LUT_READ_SDR) : 459 + (op->data.dtr ? LUT_WRITE_DDR : LUT_WRITE_SDR), 460 + LUT_PAD(op->data.buswidth), 461 + 0); 462 + lutidx++; 463 + } 464 + 465 + /* Stop condition. 
*/ 466 + lutval[lutidx / 2] |= LUT_DEF(lutidx, LUT_STOP, 0, 0); 467 + 468 + /* Unlock LUT */ 469 + writel(XSPI_LUT_KEY_VAL, xspi->iobase + XSPI_LUTKEY); 470 + writel(XSPI_LOKCR_UNLOCK, xspi->iobase + XSPI_LCKCR); 471 + 472 + /* Fill LUT */ 473 + for (i = 0; i < ARRAY_SIZE(lutval); i++) 474 + writel(lutval[i], base + XSPI_LUT_REG(i)); 475 + 476 + dev_dbg(xspi->dev, "CMD[%02x] lutval[0:%08x 1:%08x 2:%08x 3:%08x 4:%08x], size: 0x%08x\n", 477 + op->cmd.opcode, lutval[0], lutval[1], lutval[2], lutval[3], lutval[4], 478 + op->data.nbytes); 479 + 480 + /* Lock LUT */ 481 + writel(XSPI_LUT_KEY_VAL, xspi->iobase + XSPI_LUTKEY); 482 + writel(XSPI_LOKCR_LOCK, xspi->iobase + XSPI_LCKCR); 483 + } 484 + 485 + static void nxp_xspi_disable_ddr(struct nxp_xspi *xspi) 486 + { 487 + void __iomem *base = xspi->iobase; 488 + u32 reg; 489 + 490 + /* Disable module */ 491 + reg = readl(base + XSPI_MCR); 492 + reg |= XSPI_MCR_MDIS; 493 + writel(reg, base + XSPI_MCR); 494 + 495 + reg &= ~XSPI_MCR_DDR_EN; 496 + reg &= ~XSPI_MCR_DQS_FA_SEL_MASK; 497 + /* Use dummy pad loopback mode to sample data */ 498 + reg |= FIELD_PREP(XSPI_MCR_DQS_FA_SEL_MASK, 0x01); 499 + writel(reg, base + XSPI_MCR); 500 + xspi->support_max_rate = 133000000; 501 + 502 + reg = readl(base + XSPI_FLSHCR); 503 + reg &= ~XSPI_FLSHCR_TDH_MASK; 504 + writel(reg, base + XSPI_FLSHCR); 505 + 506 + /* Select sampling at inverted clock */ 507 + reg = FIELD_PREP(XSPI_SMPR_DLLFSMPFA_MASK, 0) | XSPI_SMPR_FSPHS; 508 + writel(reg, base + XSPI_SMPR); 509 + 510 + /* Enable module */ 511 + reg = readl(base + XSPI_MCR); 512 + reg &= ~XSPI_MCR_MDIS; 513 + writel(reg, base + XSPI_MCR); 514 + } 515 + 516 + static void nxp_xspi_enable_ddr(struct nxp_xspi *xspi) 517 + { 518 + void __iomem *base = xspi->iobase; 519 + u32 reg; 520 + 521 + /* Disable module */ 522 + reg = readl(base + XSPI_MCR); 523 + reg |= XSPI_MCR_MDIS; 524 + writel(reg, base + XSPI_MCR); 525 + 526 + reg |= XSPI_MCR_DDR_EN; 527 + reg &= ~XSPI_MCR_DQS_FA_SEL_MASK; 528 + /* Use external dqs to sample data */ 529 + reg |= FIELD_PREP(XSPI_MCR_DQS_FA_SEL_MASK, 0x03); 530 + writel(reg, base + XSPI_MCR); 531 + xspi->support_max_rate = 200000000; 532 + 533 + reg = readl(base + XSPI_FLSHCR); 534 + reg &= ~XSPI_FLSHCR_TDH_MASK; 535 + reg |= FIELD_PREP(XSPI_FLSHCR_TDH_MASK, 0x01); 536 + writel(reg, base + XSPI_FLSHCR); 537 + 538 + reg = FIELD_PREP(XSPI_SMPR_DLLFSMPFA_MASK, 0x04); 539 + writel(reg, base + XSPI_SMPR); 540 + 541 + /* Enable module */ 542 + reg = readl(base + XSPI_MCR); 543 + reg &= ~XSPI_MCR_MDIS; 544 + writel(reg, base + XSPI_MCR); 545 + } 546 + 547 + static void nxp_xspi_sw_reset(struct nxp_xspi *xspi) 548 + { 549 + void __iomem *base = xspi->iobase; 550 + bool mdis_flag = false; 551 + u32 reg; 552 + int ret; 553 + 554 + reg = readl(base + XSPI_MCR); 555 + 556 + /* 557 + * Per RM, when reset SWRSTSD and SWRSTHD, XSPI must be 558 + * enabled (MDIS = 0). 559 + * So if MDIS is 1, should clear it before assert SWRSTSD 560 + * and SWRSTHD. 
561 + */ 562 + if (reg & XSPI_MCR_MDIS) { 563 + reg &= ~XSPI_MCR_MDIS; 564 + writel(reg, base + XSPI_MCR); 565 + mdis_flag = true; 566 + } 567 + 568 + /* Software reset for AHB domain and Serial flash memory domain */ 569 + reg |= XSPI_MCR_SWRSTHD | XSPI_MCR_SWRSTSD; 570 + /* Software Reset for IPS Target Group Queue 0 */ 571 + reg |= XSPI_MCR_IPS_TG_RST; 572 + writel(reg, base + XSPI_MCR); 573 + 574 + /* IPS_TG_RST will self-clear to 0 once IPS_TG_RST complete */ 575 + ret = readl_poll_timeout(base + XSPI_MCR, reg, !(reg & XSPI_MCR_IPS_TG_RST), 576 + 100, 5000); 577 + if (ret == -ETIMEDOUT) 578 + dev_warn(xspi->dev, "XSPI_MCR_IPS_TG_RST do not self-clear in 5ms!"); 579 + 580 + /* 581 + * Per RM, must wait for at least three system cycles and 582 + * three flash cycles after changing the value of reset field. 583 + * delay 5us for safe. 584 + */ 585 + fsleep(5); 586 + 587 + /* 588 + * Per RM, before dessert SWRSTSD and SWRSTHD, XSPI must be 589 + * disabled (MIDS = 1). 590 + */ 591 + reg = readl(base + XSPI_MCR); 592 + reg |= XSPI_MCR_MDIS; 593 + writel(reg, base + XSPI_MCR); 594 + 595 + /* deassert software reset */ 596 + reg &= ~(XSPI_MCR_SWRSTHD | XSPI_MCR_SWRSTSD); 597 + writel(reg, base + XSPI_MCR); 598 + 599 + /* 600 + * Per RM, must wait for at least three system cycles and 601 + * three flash cycles after changing the value of reset field. 602 + * delay 5us for safe. 603 + */ 604 + fsleep(5); 605 + 606 + /* Re-enable XSPI if it is enabled at beginning */ 607 + if (!mdis_flag) { 608 + reg &= ~XSPI_MCR_MDIS; 609 + writel(reg, base + XSPI_MCR); 610 + } 611 + } 612 + 613 + static void nxp_xspi_dll_bypass(struct nxp_xspi *xspi) 614 + { 615 + void __iomem *base = xspi->iobase; 616 + int ret; 617 + u32 reg; 618 + 619 + nxp_xspi_sw_reset(xspi); 620 + 621 + writel(0, base + XSPI_DLLCRA); 622 + 623 + /* Set SLV EN first */ 624 + reg = XSPI_DLLCRA_SLV_EN; 625 + writel(reg, base + XSPI_DLLCRA); 626 + 627 + reg = XSPI_DLLCRA_FREQEN | 628 + FIELD_PREP(XSPI_DLLCRA_SLV_DLY_COARSE_MASK, 0x0) | 629 + XSPI_DLLCRA_SLV_EN | XSPI_DLLCRA_SLV_DLL_BYPASS; 630 + writel(reg, base + XSPI_DLLCRA); 631 + 632 + reg |= XSPI_DLLCRA_SLV_UPD; 633 + writel(reg, base + XSPI_DLLCRA); 634 + 635 + ret = readl_poll_timeout(base + XSPI_DLLSR, reg, 636 + reg & XSPI_DLLSR_SLVA_LOCK, 0, POLL_TOUT_US); 637 + if (ret) 638 + dev_err(xspi->dev, 639 + "DLL SLVA unlock, the DLL status is %x, need to check!\n", 640 + readl(base + XSPI_DLLSR)); 641 + } 642 + 643 + static void nxp_xspi_dll_auto(struct nxp_xspi *xspi, unsigned long rate) 644 + { 645 + void __iomem *base = xspi->iobase; 646 + int ret; 647 + u32 reg; 648 + 649 + nxp_xspi_sw_reset(xspi); 650 + 651 + writel(0, base + XSPI_DLLCRA); 652 + 653 + /* Set SLV EN first */ 654 + reg = XSPI_DLLCRA_SLV_EN; 655 + writel(reg, base + XSPI_DLLCRA); 656 + 657 + reg = FIELD_PREP(XSPI_DLLCRA_DLL_REFCNTR_MASK, 0x02) | 658 + FIELD_PREP(XSPI_DLLCRA_DLLRES_MASK, 0x08) | 659 + XSPI_DLLCRA_SLAVE_AUTO_UPDT | XSPI_DLLCRA_SLV_EN; 660 + if (rate > 133000000) 661 + reg |= XSPI_DLLCRA_FREQEN; 662 + 663 + writel(reg, base + XSPI_DLLCRA); 664 + 665 + reg |= XSPI_DLLCRA_SLV_UPD; 666 + writel(reg, base + XSPI_DLLCRA); 667 + 668 + reg |= XSPI_DLLCRA_DLLEN; 669 + writel(reg, base + XSPI_DLLCRA); 670 + 671 + ret = readl_poll_timeout(base + XSPI_DLLSR, reg, 672 + reg & XSPI_DLLSR_DLLA_LOCK, 0, POLL_TOUT_US); 673 + if (ret) 674 + dev_err(xspi->dev, 675 + "DLL unlock, the DLL status is %x, need to check!\n", 676 + readl(base + XSPI_DLLSR)); 677 + 678 + ret = readl_poll_timeout(base + XSPI_DLLSR, reg, 679 + reg 
& XSPI_DLLSR_SLVA_LOCK, 0, POLL_TOUT_US); 680 + if (ret) 681 + dev_err(xspi->dev, 682 + "DLL SLVA unlock, the DLL status is %x, need to check!\n", 683 + readl(base + XSPI_DLLSR)); 684 + } 685 + 686 + static void nxp_xspi_select_mem(struct nxp_xspi *xspi, struct spi_device *spi, 687 + const struct spi_mem_op *op) 688 + { 689 + /* xspi only support one DTR mode: 8D-8D-8D */ 690 + bool op_is_dtr = op->cmd.dtr && op->addr.dtr && op->dummy.dtr && op->data.dtr; 691 + unsigned long root_clk_rate, rate; 692 + uint64_t cs0_top_address; 693 + uint64_t cs1_top_address; 694 + u32 reg; 695 + int ret; 696 + 697 + /* 698 + * Return when following condition all meet, 699 + * 1, if previously selected target device is same as current 700 + * requested target device. 701 + * 2, the DTR or STR mode do not change. 702 + * 3, previous operation max rate equals current one. 703 + * 704 + * For other case, need to re-config. 705 + */ 706 + if (xspi->selected == spi_get_chipselect(spi, 0) && 707 + (!!(xspi->flags & XSPI_DTR_PROTO) == op_is_dtr) && 708 + (xspi->pre_op_rate == op->max_freq)) 709 + return; 710 + 711 + if (op_is_dtr) { 712 + nxp_xspi_enable_ddr(xspi); 713 + xspi->flags |= XSPI_DTR_PROTO; 714 + } else { 715 + nxp_xspi_disable_ddr(xspi); 716 + xspi->flags &= ~XSPI_DTR_PROTO; 717 + } 718 + rate = min_t(unsigned long, xspi->support_max_rate, op->max_freq); 719 + /* 720 + * There is two dividers between xspi_clk_root(from SoC CCM) and xspi_sfif. 721 + * xspi_clk_root ---->divider1 ----> ipg_clk_2xsfif 722 + * | 723 + * | 724 + * |---> divider2 ---> ipg_clk_sfif 725 + * divider1 is controlled by SOCCR, SOCCR default value is 0. 726 + * divider2 fix to divide 2. 727 + * when SOCCR = 0: 728 + * ipg_clk_2xsfif = xspi_clk_root 729 + * ipg_clk_sfif = ipg_clk_2xsfif / 2 = xspi_clk_root / 2 730 + * ipg_clk_2xsfif is used for DTR mode. 731 + * xspi_sck(output to device) is defined based on xspi_sfif clock. 732 + */ 733 + root_clk_rate = rate * 2; 734 + 735 + clk_disable_unprepare(xspi->clk); 736 + 737 + ret = clk_set_rate(xspi->clk, root_clk_rate); 738 + if (ret) 739 + return; 740 + 741 + ret = clk_prepare_enable(xspi->clk); 742 + if (ret) 743 + return; 744 + 745 + xspi->pre_op_rate = op->max_freq; 746 + xspi->selected = spi_get_chipselect(spi, 0); 747 + 748 + if (xspi->selected) { /* CS1 select */ 749 + cs0_top_address = xspi->memmap_phy; 750 + cs1_top_address = SZ_4G - 1; 751 + } else { /* CS0 select */ 752 + cs0_top_address = SZ_4G - 1; 753 + cs1_top_address = SZ_4G - 1; 754 + } 755 + writel(cs0_top_address, xspi->iobase + XSPI_SFA1AD); 756 + writel(cs1_top_address, xspi->iobase + XSPI_SFA2AD); 757 + 758 + reg = readl(xspi->iobase + XSPI_SFACR); 759 + if (op->data.swap16) 760 + reg |= XSPI_SFACR_BYTE_SWAP; 761 + else 762 + reg &= ~XSPI_SFACR_BYTE_SWAP; 763 + writel(reg, xspi->iobase + XSPI_SFACR); 764 + 765 + if (!op_is_dtr || rate < 60000000) 766 + nxp_xspi_dll_bypass(xspi); 767 + else 768 + nxp_xspi_dll_auto(xspi, rate); 769 + } 770 + 771 + static int nxp_xspi_ahb_read(struct nxp_xspi *xspi, const struct spi_mem_op *op) 772 + { 773 + u32 start = op->addr.val; 774 + u32 len = op->data.nbytes; 775 + 776 + /* If necessary, ioremap before AHB read */ 777 + if ((!xspi->ahb_addr) || start < xspi->memmap_start || 778 + start + len > xspi->memmap_start + xspi->memmap_len) { 779 + if (xspi->ahb_addr) 780 + iounmap(xspi->ahb_addr); 781 + 782 + xspi->memmap_start = start; 783 + xspi->memmap_len = len > NXP_XSPI_MIN_IOMAP ? 
784 + len : NXP_XSPI_MIN_IOMAP; 785 + 786 + xspi->ahb_addr = ioremap(xspi->memmap_phy + xspi->memmap_start, 787 + xspi->memmap_len); 788 + 789 + if (!xspi->ahb_addr) { 790 + dev_err(xspi->dev, "failed to alloc memory\n"); 791 + return -ENOMEM; 792 + } 793 + } 794 + 795 + /* Read out the data directly from the AHB buffer. */ 796 + memcpy_fromio(op->data.buf.in, 797 + xspi->ahb_addr + start - xspi->memmap_start, len); 798 + 799 + return 0; 800 + } 801 + 802 + static int nxp_xspi_fill_txfifo(struct nxp_xspi *xspi, 803 + const struct spi_mem_op *op) 804 + { 805 + void __iomem *base = xspi->iobase; 806 + u8 *buf = (u8 *)op->data.buf.out; 807 + u32 reg, left; 808 + int i; 809 + 810 + for (i = 0; i < ALIGN(op->data.nbytes, 4); i += 4) { 811 + reg = readl(base + XSPI_FR); 812 + reg |= XSPI_FR_TBFF; 813 + writel(reg, base + XSPI_FR); 814 + /* Read again to check whether the tx fifo has rom */ 815 + reg = readl(base + XSPI_FR); 816 + if (!(reg & XSPI_FR_TBFF)) { 817 + WARN_ON(1); 818 + return -EIO; 819 + } 820 + 821 + if (i == ALIGN_DOWN(op->data.nbytes, 4)) { 822 + /* Use 0xFF for extra bytes */ 823 + left = 0xFFFFFFFF; 824 + /* The last 1 to 3 bytes */ 825 + memcpy((u8 *)&left, buf + i, op->data.nbytes - i); 826 + writel(left, base + XSPI_TBDR); 827 + } else { 828 + writel(*(u32 *)(buf + i), base + XSPI_TBDR); 829 + } 830 + } 831 + 832 + return 0; 833 + } 834 + 835 + static int nxp_xspi_read_rxfifo(struct nxp_xspi *xspi, 836 + const struct spi_mem_op *op) 837 + { 838 + u32 watermark, watermark_bytes, reg; 839 + void __iomem *base = xspi->iobase; 840 + u8 *buf = (u8 *) op->data.buf.in; 841 + int i, ret, len; 842 + 843 + /* 844 + * Config the rx watermark half of the 64 memory-mapped RX data buffer RBDRn 845 + * refer to the RBCT config in nxp_xspi_do_op() 846 + */ 847 + watermark = 32; 848 + watermark_bytes = watermark * 4; 849 + 850 + len = op->data.nbytes; 851 + 852 + while (len >= watermark_bytes) { 853 + /* Make sure the RX FIFO contains valid data before read */ 854 + ret = readl_poll_timeout(base + XSPI_FR, reg, 855 + reg & XSPI_FR_RBDF, 0, POLL_TOUT_US); 856 + if (ret) { 857 + WARN_ON(1); 858 + return ret; 859 + } 860 + 861 + for (i = 0; i < watermark; i++) 862 + *(u32 *)(buf + i * 4) = readl(base + XSPI_RBDR0 + i * 4); 863 + 864 + len = len - watermark_bytes; 865 + buf = buf + watermark_bytes; 866 + /* Pop up data to RXFIFO for next read. 
*/ 867 + reg = readl(base + XSPI_FR); 868 + reg |= XSPI_FR_RBDF; 869 + writel(reg, base + XSPI_FR); 870 + } 871 + 872 + /* Wait for the total data transfer finished */ 873 + ret = readl_poll_timeout(base + XSPI_SR, reg, !(reg & XSPI_SR_BUSY), 0, POLL_TOUT_US); 874 + if (ret) { 875 + WARN_ON(1); 876 + return ret; 877 + } 878 + 879 + i = 0; 880 + while (len >= 4) { 881 + *(u32 *)(buf) = readl(base + XSPI_RBDR0 + i); 882 + i += 4; 883 + len -= 4; 884 + buf += 4; 885 + } 886 + 887 + if (len > 0) { 888 + reg = readl(base + XSPI_RBDR0 + i); 889 + memcpy(buf, (u8 *)&reg, len); 890 + } 891 + 892 + /* Invalid RXFIFO first */ 893 + reg = readl(base + XSPI_MCR); 894 + reg |= XSPI_MCR_CLR_RXF; 895 + writel(reg, base + XSPI_MCR); 896 + /* Wait for the CLR_RXF clear */ 897 + ret = readl_poll_timeout(base + XSPI_MCR, reg, 898 + !(reg & XSPI_MCR_CLR_RXF), 1, POLL_TOUT_US); 899 + WARN_ON(ret); 900 + 901 + return ret; 902 + } 903 + 904 + static int nxp_xspi_do_op(struct nxp_xspi *xspi, const struct spi_mem_op *op) 905 + { 906 + void __iomem *base = xspi->iobase; 907 + int watermark, err = 0; 908 + u32 reg, len; 909 + 910 + len = op->data.nbytes; 911 + if (op->data.nbytes && op->data.dir == SPI_MEM_DATA_OUT) { 912 + /* Clear the TX FIFO. */ 913 + reg = readl(base + XSPI_MCR); 914 + reg |= XSPI_MCR_CLR_TXF; 915 + writel(reg, base + XSPI_MCR); 916 + /* Wait for the CLR_TXF clear */ 917 + err = readl_poll_timeout(base + XSPI_MCR, reg, 918 + !(reg & XSPI_MCR_CLR_TXF), 1, POLL_TOUT_US); 919 + if (err) { 920 + WARN_ON(1); 921 + return err; 922 + } 923 + 924 + /* Cover the no 4bytes alignment data length */ 925 + watermark = (xspi->devtype_data->txfifo - ALIGN(op->data.nbytes, 4)) / 4 + 1; 926 + reg = FIELD_PREP(XSPI_TBCT_WMRK_MASK, watermark); 927 + writel(reg, base + XSPI_TBCT); 928 + /* 929 + * According to the RM, for TBDR register, a write transaction on the 930 + * flash memory with data size of less than 32 bits leads to the removal 931 + * of one data entry from the TX buffer. The valid bits are used and the 932 + * rest of the bits are discarded. 933 + * But for data size large than 32 bits, according to test, for no 4bytes 934 + * alignment data, the last 1~3 bytes will lost, because TX buffer use 935 + * 4 bytes entries. 936 + * So here adjust the transfer data length to make it 4bytes alignment. 937 + * then will meet the upper watermark setting, trigger the 4bytes entries 938 + * pop out. 939 + * Will use extra 0xff to append, refer to nxp_xspi_fill_txfifo(). 
940 + */ 941 + if (len > 4) 942 + len = ALIGN(op->data.nbytes, 4); 943 + 944 + } else if (op->data.nbytes && op->data.dir == SPI_MEM_DATA_IN) { 945 + /* Invalid RXFIFO first */ 946 + reg = readl(base + XSPI_MCR); 947 + reg |= XSPI_MCR_CLR_RXF; 948 + writel(reg, base + XSPI_MCR); 949 + /* Wait for the CLR_RXF clear */ 950 + err = readl_poll_timeout(base + XSPI_MCR, reg, 951 + !(reg & XSPI_MCR_CLR_RXF), 1, POLL_TOUT_US); 952 + if (err) { 953 + WARN_ON(1); 954 + return err; 955 + } 956 + 957 + reg = FIELD_PREP(XSPI_RBCT_WMRK_MASK, 31); 958 + writel(reg, base + XSPI_RBCT); 959 + } 960 + 961 + init_completion(&xspi->c); 962 + 963 + /* Config the data address */ 964 + writel(op->addr.val + xspi->memmap_phy, base + XSPI_SFP_TG_SFAR); 965 + 966 + /* Config the data size and lut id, trigger the transfer */ 967 + reg = FIELD_PREP(XSPI_SFP_TG_IPCR_SEQID_MASK, XSPI_SEQID_LUT) | 968 + FIELD_PREP(XSPI_SFP_TG_IPCR_IDATSZ_MASK, len); 969 + writel(reg, base + XSPI_SFP_TG_IPCR); 970 + 971 + if (op->data.nbytes && op->data.dir == SPI_MEM_DATA_OUT) { 972 + err = nxp_xspi_fill_txfifo(xspi, op); 973 + if (err) 974 + return err; 975 + } 976 + 977 + /* Wait for the interrupt. */ 978 + if (!wait_for_completion_timeout(&xspi->c, msecs_to_jiffies(1000))) 979 + err = -ETIMEDOUT; 980 + 981 + /* Invoke IP data read. */ 982 + if (!err && op->data.nbytes && op->data.dir == SPI_MEM_DATA_IN) 983 + err = nxp_xspi_read_rxfifo(xspi, op); 984 + 985 + return err; 986 + } 987 + 988 + static int nxp_xspi_exec_op(struct spi_mem *mem, const struct spi_mem_op *op) 989 + { 990 + struct nxp_xspi *xspi = spi_controller_get_devdata(mem->spi->controller); 991 + void __iomem *base = xspi->iobase; 992 + u32 reg; 993 + int err; 994 + 995 + guard(mutex)(&xspi->lock); 996 + 997 + PM_RUNTIME_ACQUIRE_AUTOSUSPEND(xspi->dev, pm); 998 + err = PM_RUNTIME_ACQUIRE_ERR(&pm); 999 + if (err) 1000 + return err; 1001 + 1002 + /* Wait for controller being ready. */ 1003 + err = readl_poll_timeout(base + XSPI_SR, reg, 1004 + !(reg & XSPI_SR_BUSY), 1, POLL_TOUT_US); 1005 + if (err) { 1006 + dev_err(xspi->dev, "SR keeps in BUSY!"); 1007 + return err; 1008 + } 1009 + 1010 + nxp_xspi_select_mem(xspi, mem->spi, op); 1011 + 1012 + nxp_xspi_prepare_lut(xspi, op); 1013 + 1014 + /* 1015 + * For read: 1016 + * the address in AHB mapped range will use AHB read. 1017 + * the address out of AHB mapped range will use IP read. 1018 + * For write: 1019 + * all use IP write. 1020 + */ 1021 + if ((op->data.dir == SPI_MEM_DATA_IN) && !needs_ip_only(xspi) 1022 + && ((op->addr.val + op->data.nbytes) <= xspi->memmap_phy_size)) 1023 + err = nxp_xspi_ahb_read(xspi, op); 1024 + else 1025 + err = nxp_xspi_do_op(xspi, op); 1026 + 1027 + nxp_xspi_sw_reset(xspi); 1028 + 1029 + return err; 1030 + } 1031 + 1032 + static int nxp_xspi_adjust_op_size(struct spi_mem *mem, struct spi_mem_op *op) 1033 + { 1034 + struct nxp_xspi *xspi = spi_controller_get_devdata(mem->spi->controller); 1035 + 1036 + if (op->data.dir == SPI_MEM_DATA_OUT) { 1037 + if (op->data.nbytes > xspi->devtype_data->txfifo) 1038 + op->data.nbytes = xspi->devtype_data->txfifo; 1039 + } else { 1040 + /* Limit data bytes to RX FIFO in case of IP read only */ 1041 + if (needs_ip_only(xspi) && (op->data.nbytes > xspi->devtype_data->rxfifo)) 1042 + op->data.nbytes = xspi->devtype_data->rxfifo; 1043 + 1044 + /* Address in AHB mapped range prefer to use AHB read. 
*/ 1045 + if (!needs_ip_only(xspi) && (op->addr.val < xspi->memmap_phy_size) 1046 + && ((op->addr.val + op->data.nbytes) > xspi->memmap_phy_size)) 1047 + op->data.nbytes = xspi->memmap_phy_size - op->addr.val; 1048 + } 1049 + 1050 + return 0; 1051 + } 1052 + 1053 + static void nxp_xspi_config_ahb_buffer(struct nxp_xspi *xspi) 1054 + { 1055 + void __iomem *base = xspi->iobase; 1056 + u32 ahb_data_trans_size; 1057 + u32 reg; 1058 + 1059 + writel(0xA, base + XSPI_BUF0CR); 1060 + writel(0x2, base + XSPI_BUF1CR); 1061 + writel(0xD, base + XSPI_BUF2CR); 1062 + 1063 + /* Configure buffer3 for All Master Access */ 1064 + reg = FIELD_PREP(XSPI_BUF3CR_MSTRID_MASK, 0x06) | 1065 + XSPI_BUF3CR_ALLMST; 1066 + 1067 + ahb_data_trans_size = xspi->devtype_data->ahb_buf_size / 8; 1068 + reg |= FIELD_PREP(XSPI_BUF3CR_ADATSZ_MASK, ahb_data_trans_size); 1069 + writel(reg, base + XSPI_BUF3CR); 1070 + 1071 + /* Only the buffer3 is used */ 1072 + writel(0, base + XSPI_BUF0IND); 1073 + writel(0, base + XSPI_BUF1IND); 1074 + writel(0, base + XSPI_BUF2IND); 1075 + 1076 + /* AHB only use ID=15 for read */ 1077 + reg = FIELD_PREP(XSPI_BFGENCR_SEQID_MASK, XSPI_SEQID_LUT); 1078 + reg |= XSPI_BFGENCR_WR_FLUSH_EN; 1079 + /* No limit for align */ 1080 + reg |= FIELD_PREP(XSPI_BFGENCR_ALIGN_MASK, 0); 1081 + writel(reg, base + XSPI_BFGENCR); 1082 + } 1083 + 1084 + static int nxp_xspi_default_setup(struct nxp_xspi *xspi) 1085 + { 1086 + void __iomem *base = xspi->iobase; 1087 + u32 reg; 1088 + 1089 + /* Bypass SFP check, clear MGC_GVLD, MGC_GVLDMDAD, MGC_GVLDFRAD */ 1090 + writel(0, base + XSPI_MGC); 1091 + 1092 + /* Enable the EENV0 SFP check */ 1093 + reg = readl(base + XSPI_TG0MDAD); 1094 + reg |= XSPI_TG0MDAD_VLD; 1095 + writel(reg, base + XSPI_TG0MDAD); 1096 + 1097 + /* Give read/write access right to EENV0 */ 1098 + reg = readl(base + XSPI_FRAD0_WORD2); 1099 + reg &= ~XSPI_FRAD0_WORD2_MD0ACP_MASK; 1100 + reg |= FIELD_PREP(XSPI_FRAD0_WORD2_MD0ACP_MASK, 0x03); 1101 + writel(reg, base + XSPI_FRAD0_WORD2); 1102 + 1103 + /* Enable the FRAD check for EENV0 */ 1104 + reg = readl(base + XSPI_FRAD0_WORD3); 1105 + reg |= XSPI_FRAD0_WORD3_VLD; 1106 + writel(reg, base + XSPI_FRAD0_WORD3); 1107 + 1108 + /* 1109 + * Config the timeout to max value, this timeout will affect the 1110 + * TBDR and RBDRn access right after IP cmd triggered. 
1111 + */ 1112 + writel(0xFFFFFFFF, base + XSPI_MTO); 1113 + 1114 + /* Disable module */ 1115 + reg = readl(base + XSPI_MCR); 1116 + reg |= XSPI_MCR_MDIS; 1117 + writel(reg, base + XSPI_MCR); 1118 + 1119 + nxp_xspi_sw_reset(xspi); 1120 + 1121 + reg = readl(base + XSPI_MCR); 1122 + reg &= ~(XSPI_MCR_CKN_FA_EN | XSPI_MCR_DQS_FA_SEL_MASK | 1123 + XSPI_MCR_DOZE | XSPI_MCR_VAR_LAT_EN | 1124 + XSPI_MCR_DDR_EN | XSPI_MCR_DQS_OUT_EN); 1125 + reg |= XSPI_MCR_DQS_EN; 1126 + reg |= XSPI_MCR_ISD3FA | XSPI_MCR_ISD2FA; 1127 + writel(reg, base + XSPI_MCR); 1128 + 1129 + reg = readl(base + XSPI_SFACR); 1130 + reg &= ~(XSPI_SFACR_FORCE_A10 | XSPI_SFACR_WA_4B_EN | 1131 + XSPI_SFACR_BYTE_SWAP | XSPI_SFACR_WA | 1132 + XSPI_SFACR_CAS_MASK); 1133 + reg |= XSPI_SFACR_FORCE_A10; 1134 + writel(reg, base + XSPI_SFACR); 1135 + 1136 + nxp_xspi_config_ahb_buffer(xspi); 1137 + 1138 + reg = FIELD_PREP(XSPI_FLSHCR_TCSH_MASK, 0x03) | 1139 + FIELD_PREP(XSPI_FLSHCR_TCSS_MASK, 0x03); 1140 + writel(reg, base + XSPI_FLSHCR); 1141 + 1142 + /* Enable module */ 1143 + reg = readl(base + XSPI_MCR); 1144 + reg &= ~XSPI_MCR_MDIS; 1145 + writel(reg, base + XSPI_MCR); 1146 + 1147 + xspi->selected = -1; 1148 + 1149 + /* Enable the interrupt */ 1150 + writel(XSPI_RSER_TFIE, base + XSPI_RSER); 1151 + 1152 + return 0; 1153 + } 1154 + 1155 + static const char *nxp_xspi_get_name(struct spi_mem *mem) 1156 + { 1157 + struct nxp_xspi *xspi = spi_controller_get_devdata(mem->spi->controller); 1158 + struct device *dev = &mem->spi->dev; 1159 + const char *name; 1160 + 1161 + /* Set custom name derived from the platform_device of the controller. */ 1162 + if (of_get_available_child_count(xspi->dev->of_node) == 1) 1163 + return dev_name(xspi->dev); 1164 + 1165 + name = devm_kasprintf(dev, GFP_KERNEL, 1166 + "%s-%d", dev_name(xspi->dev), 1167 + spi_get_chipselect(mem->spi, 0)); 1168 + 1169 + if (!name) { 1170 + dev_err(dev, "failed to get memory for custom flash name\n"); 1171 + return ERR_PTR(-ENOMEM); 1172 + } 1173 + 1174 + return name; 1175 + } 1176 + 1177 + static const struct spi_controller_mem_ops nxp_xspi_mem_ops = { 1178 + .adjust_op_size = nxp_xspi_adjust_op_size, 1179 + .supports_op = nxp_xspi_supports_op, 1180 + .exec_op = nxp_xspi_exec_op, 1181 + .get_name = nxp_xspi_get_name, 1182 + }; 1183 + 1184 + static const struct spi_controller_mem_caps nxp_xspi_mem_caps = { 1185 + .dtr = true, 1186 + .per_op_freq = true, 1187 + .swap16 = true, 1188 + }; 1189 + 1190 + static void nxp_xspi_cleanup(void *data) 1191 + { 1192 + struct nxp_xspi *xspi = data; 1193 + u32 reg; 1194 + 1195 + pm_runtime_get_sync(xspi->dev); 1196 + 1197 + /* Disable interrupt */ 1198 + writel(0, xspi->iobase + XSPI_RSER); 1199 + /* Clear all the internal logic flags */ 1200 + writel(0xFFFFFFFF, xspi->iobase + XSPI_FR); 1201 + /* Disable the hardware */ 1202 + reg = readl(xspi->iobase + XSPI_MCR); 1203 + reg |= XSPI_MCR_MDIS; 1204 + writel(reg, xspi->iobase + XSPI_MCR); 1205 + 1206 + pm_runtime_put_sync(xspi->dev); 1207 + 1208 + if (xspi->ahb_addr) 1209 + iounmap(xspi->ahb_addr); 1210 + } 1211 + 1212 + static int nxp_xspi_probe(struct platform_device *pdev) 1213 + { 1214 + struct device *dev = &pdev->dev; 1215 + struct spi_controller *ctlr; 1216 + struct nxp_xspi *xspi; 1217 + struct resource *res; 1218 + int ret, irq; 1219 + 1220 + ctlr = devm_spi_alloc_host(dev, sizeof(*xspi)); 1221 + if (!ctlr) 1222 + return -ENOMEM; 1223 + 1224 + ctlr->mode_bits = SPI_RX_DUAL | SPI_RX_QUAD | SPI_RX_OCTAL | 1225 + SPI_TX_DUAL | SPI_TX_QUAD | SPI_TX_OCTAL; 1226 + 1227 + xspi = 
spi_controller_get_devdata(ctlr); 1228 + xspi->dev = dev; 1229 + xspi->devtype_data = device_get_match_data(dev); 1230 + if (!xspi->devtype_data) 1231 + return -ENODEV; 1232 + 1233 + platform_set_drvdata(pdev, xspi); 1234 + 1235 + /* Find the resources - configuration register address space */ 1236 + xspi->iobase = devm_platform_ioremap_resource_byname(pdev, "base"); 1237 + if (IS_ERR(xspi->iobase)) 1238 + return PTR_ERR(xspi->iobase); 1239 + 1240 + /* Find the resources - controller memory mapped space */ 1241 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "mmap"); 1242 + if (!res) 1243 + return -ENODEV; 1244 + 1245 + /* Assign memory mapped starting address and mapped size. */ 1246 + xspi->memmap_phy = res->start; 1247 + xspi->memmap_phy_size = resource_size(res); 1248 + 1249 + /* Find the clocks */ 1250 + xspi->clk = devm_clk_get(dev, "per"); 1251 + if (IS_ERR(xspi->clk)) 1252 + return PTR_ERR(xspi->clk); 1253 + 1254 + /* Find the irq */ 1255 + irq = platform_get_irq(pdev, 0); 1256 + if (irq < 0) 1257 + return dev_err_probe(dev, irq, "Failed to get irq source"); 1258 + 1259 + pm_runtime_set_autosuspend_delay(dev, XSPI_RPM_TIMEOUT_MS); 1260 + pm_runtime_use_autosuspend(dev); 1261 + ret = devm_pm_runtime_enable(dev); 1262 + if (ret) 1263 + return ret; 1264 + 1265 + PM_RUNTIME_ACQUIRE_AUTOSUSPEND(dev, pm); 1266 + ret = PM_RUNTIME_ACQUIRE_ERR(&pm); 1267 + if (ret) 1268 + return dev_err_probe(dev, ret, "Failed to enable clock"); 1269 + 1270 + /* Clear potential interrupt by write xspi errstat */ 1271 + writel(0xFFFFFFFF, xspi->iobase + XSPI_ERRSTAT); 1272 + writel(0xFFFFFFFF, xspi->iobase + XSPI_FR); 1273 + 1274 + nxp_xspi_default_setup(xspi); 1275 + 1276 + ret = devm_request_irq(dev, irq, 1277 + nxp_xspi_irq_handler, 0, pdev->name, xspi); 1278 + if (ret) 1279 + return dev_err_probe(dev, ret, "failed to request irq"); 1280 + 1281 + ret = devm_mutex_init(dev, &xspi->lock); 1282 + if (ret) 1283 + return ret; 1284 + 1285 + ret = devm_add_action_or_reset(dev, nxp_xspi_cleanup, xspi); 1286 + if (ret) 1287 + return ret; 1288 + 1289 + ctlr->bus_num = -1; 1290 + ctlr->num_chipselect = NXP_XSPI_MAX_CHIPSELECT; 1291 + ctlr->mem_ops = &nxp_xspi_mem_ops; 1292 + ctlr->mem_caps = &nxp_xspi_mem_caps; 1293 + 1294 + return devm_spi_register_controller(dev, ctlr); 1295 + } 1296 + 1297 + static int nxp_xspi_runtime_suspend(struct device *dev) 1298 + { 1299 + struct nxp_xspi *xspi = dev_get_drvdata(dev); 1300 + u32 reg; 1301 + 1302 + reg = readl(xspi->iobase + XSPI_MCR); 1303 + reg |= XSPI_MCR_MDIS; 1304 + writel(reg, xspi->iobase + XSPI_MCR); 1305 + 1306 + clk_disable_unprepare(xspi->clk); 1307 + 1308 + return 0; 1309 + } 1310 + 1311 + static int nxp_xspi_runtime_resume(struct device *dev) 1312 + { 1313 + struct nxp_xspi *xspi = dev_get_drvdata(dev); 1314 + u32 reg; 1315 + int ret; 1316 + 1317 + ret = clk_prepare_enable(xspi->clk); 1318 + if (ret) 1319 + return ret; 1320 + 1321 + reg = readl(xspi->iobase + XSPI_MCR); 1322 + reg &= ~XSPI_MCR_MDIS; 1323 + writel(reg, xspi->iobase + XSPI_MCR); 1324 + 1325 + return 0; 1326 + } 1327 + 1328 + static int nxp_xspi_suspend(struct device *dev) 1329 + { 1330 + int ret; 1331 + 1332 + ret = pinctrl_pm_select_sleep_state(dev); 1333 + if (ret) { 1334 + dev_err(dev, "select flexspi sleep pinctrl failed!\n"); 1335 + return ret; 1336 + } 1337 + 1338 + return pm_runtime_force_suspend(dev); 1339 + } 1340 + 1341 + static int nxp_xspi_resume(struct device *dev) 1342 + { 1343 + struct nxp_xspi *xspi = dev_get_drvdata(dev); 1344 + int ret; 1345 + 1346 + ret = 
pm_runtime_force_resume(dev); 1347 + if (ret) 1348 + return ret; 1349 + 1350 + nxp_xspi_default_setup(xspi); 1351 + 1352 + ret = pinctrl_pm_select_default_state(dev); 1353 + if (ret) 1354 + dev_err(dev, "select flexspi default pinctrl failed!\n"); 1355 + 1356 + return ret; 1357 + } 1358 + 1359 + 1360 + static const struct dev_pm_ops nxp_xspi_pm_ops = { 1361 + RUNTIME_PM_OPS(nxp_xspi_runtime_suspend, nxp_xspi_runtime_resume, NULL) 1362 + SYSTEM_SLEEP_PM_OPS(nxp_xspi_suspend, nxp_xspi_resume) 1363 + }; 1364 + 1365 + static const struct of_device_id nxp_xspi_dt_ids[] = { 1366 + { .compatible = "nxp,imx94-xspi", .data = (void *)&imx94_data, }, 1367 + { /* sentinel */ } 1368 + }; 1369 + MODULE_DEVICE_TABLE(of, nxp_xspi_dt_ids); 1370 + 1371 + static struct platform_driver nxp_xspi_driver = { 1372 + .driver = { 1373 + .name = "nxp-xspi", 1374 + .of_match_table = nxp_xspi_dt_ids, 1375 + .pm = pm_ptr(&nxp_xspi_pm_ops), 1376 + }, 1377 + .probe = nxp_xspi_probe, 1378 + }; 1379 + module_platform_driver(nxp_xspi_driver); 1380 + 1381 + MODULE_DESCRIPTION("NXP xSPI Controller Driver"); 1382 + MODULE_AUTHOR("NXP Semiconductor"); 1383 + MODULE_AUTHOR("Haibo Chen <haibo.chen@nxp.com>"); 1384 + MODULE_LICENSE("GPL");
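For reference, a standalone sketch of how the LUT_DEF()/LUT_PAD() macros near the top of this new driver pack two 16-bit entries (OPRND in bits 7:0, PAD in bits 9:8, INSTR in bits 15:10) into each 32-bit LUT word. This is plain userspace C, and the instruction codes are placeholders rather than the driver's real LUT_* opcodes:

	#include <stdio.h>
	#include <stdint.h>

	#define PAD_SHIFT	8
	#define INSTR_SHIFT	10
	#define OPRND_SHIFT	16

	/* userspace stand-in for the kernel's fls(x) - 1 */
	#define LUT_PAD(x)	(31 - __builtin_clz((unsigned int)(x)))
	#define LUT_DEF(idx, ins, pad, opr) \
		((((ins) << INSTR_SHIFT) | ((pad) << PAD_SHIFT) | (opr)) << (((idx) % 2) * OPRND_SHIFT))

	int main(void)
	{
		/* placeholder instruction codes, for illustration only */
		enum { CMD_SDR = 0x01, ADDR_SDR = 0x02, READ_SDR = 0x07, STOP = 0x00 };
		uint32_t lut[2] = { 0, 0 };

		lut[0] |= LUT_DEF(0, CMD_SDR,  LUT_PAD(1), 0x0b);	/* opcode 0x0B on one line */
		lut[0] |= LUT_DEF(1, ADDR_SDR, LUT_PAD(1), 24);		/* 24-bit address          */
		lut[1] |= LUT_DEF(2, READ_SDR, LUT_PAD(4), 0);		/* read data on four lines */
		lut[1] |= LUT_DEF(3, STOP, 0, 0);			/* stop                    */

		/* even-indexed entries land in bits 15:0, odd-indexed ones in bits 31:16 */
		printf("lut[0]=0x%08x lut[1]=0x%08x\n", lut[0], lut[1]);
		return 0;
	}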
-1
drivers/spi/spi-oc-tiny.c
··· 192 192 193 193 if (!np) 194 194 return 0; 195 - hw->bitbang.ctlr->dev.of_node = pdev->dev.of_node; 196 195 if (!of_property_read_u32(np, "clock-frequency", &val)) 197 196 hw->freq = val; 198 197 if (!of_property_read_u32(np, "baud-width", &val))
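This hunk, like the similar one-line removals in several drivers below, drops a per-driver of_node/fwnode assignment. The working assumption is that the SPI core now propagates the parent device's fwnode to the controller when it is allocated or registered, roughly equivalent to the line each driver used to carry; a hedged core-side sketch, not part of this diff:

	/* assumed SPI-core behaviour; dev is the parent (e.g. platform) device */
	device_set_node(&ctlr->dev, dev_fwnode(dev));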
-1
drivers/spi/spi-orion.c
··· 780 780 if (status < 0) 781 781 goto out_rel_pm; 782 782 783 - host->dev.of_node = pdev->dev.of_node; 784 783 status = spi_register_controller(host); 785 784 if (status < 0) 786 785 goto out_rel_pm;
-1
drivers/spi/spi-pl022.c
··· 1893 1893 host->handle_err = pl022_handle_err; 1894 1894 host->unprepare_transfer_hardware = pl022_unprepare_transfer_hardware; 1895 1895 host->rt = platform_info->rt; 1896 - host->dev.of_node = dev->of_node; 1897 1896 host->use_gpio_descriptors = true; 1898 1897 1899 1898 /*
-2
drivers/spi/spi-pxa2xx.c
··· 1290 1290 drv_data->controller_info = platform_info; 1291 1291 drv_data->ssp = ssp; 1292 1292 1293 - device_set_node(&controller->dev, dev_fwnode(dev)); 1294 - 1295 1293 /* The spi->mode bits understood by this driver: */ 1296 1294 controller->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH | SPI_LOOP; 1297 1295
-1
drivers/spi/spi-qcom-qspi.c
··· 763 763 host->dma_alignment = QSPI_ALIGN_REQ; 764 764 host->num_chipselect = QSPI_NUM_CS; 765 765 host->bus_num = -1; 766 - host->dev.of_node = pdev->dev.of_node; 767 766 host->mode_bits = SPI_MODE_0 | 768 767 SPI_TX_DUAL | SPI_RX_DUAL | 769 768 SPI_TX_QUAD | SPI_RX_QUAD;
-5
drivers/spi/spi-qpic-snand.c
··· 850 850 snandc->regs->ecc_bch_cfg = cpu_to_le32(ecc_bch_cfg); 851 851 snandc->regs->exec = cpu_to_le32(1); 852 852 853 - qcom_spi_set_read_loc(snandc, 0, 0, 0, ecc_cfg->cw_data, 1); 854 - 855 853 qcom_clear_bam_transaction(snandc); 856 854 857 855 qcom_write_reg_dma(snandc, &snandc->regs->addr0, NAND_ADDR0, 2, 0); ··· 938 940 snandc->regs->cfg1 = cpu_to_le32(cfg1); 939 941 snandc->regs->ecc_bch_cfg = cpu_to_le32(ecc_bch_cfg); 940 942 snandc->regs->exec = cpu_to_le32(1); 941 - 942 - qcom_spi_set_read_loc(snandc, 0, 0, 0, ecc_cfg->cw_data, 1); 943 943 944 944 qcom_write_reg_dma(snandc, &snandc->regs->addr0, NAND_ADDR0, 2, 0); 945 945 qcom_write_reg_dma(snandc, &snandc->regs->cfg0, NAND_DEV0_CFG0, 3, 0); ··· 1583 1587 ctlr->num_chipselect = QPIC_QSPI_NUM_CS; 1584 1588 ctlr->mem_ops = &qcom_spi_mem_ops; 1585 1589 ctlr->mem_caps = &qcom_spi_mem_caps; 1586 - ctlr->dev.of_node = pdev->dev.of_node; 1587 1590 ctlr->mode_bits = SPI_TX_DUAL | SPI_RX_DUAL | 1588 1591 SPI_TX_QUAD | SPI_RX_QUAD; 1589 1592
-1
drivers/spi/spi-qup.c
··· 1091 1091 host->bits_per_word_mask = SPI_BPW_RANGE_MASK(4, 32); 1092 1092 host->max_speed_hz = max_freq; 1093 1093 host->transfer_one = spi_qup_transfer_one; 1094 - host->dev.of_node = pdev->dev.of_node; 1095 1094 host->auto_runtime_pm = true; 1096 1095 host->dma_alignment = dma_get_cache_alignment(); 1097 1096 host->max_dma_len = SPI_MAX_XFER;
-1
drivers/spi/spi-rb4xx.c
··· 160 160 if (IS_ERR(ahb_clk)) 161 161 return PTR_ERR(ahb_clk); 162 162 163 - host->dev.of_node = pdev->dev.of_node; 164 163 host->bus_num = 0; 165 164 host->num_chipselect = 3; 166 165 host->mode_bits = SPI_TX_DUAL;
-1
drivers/spi/spi-realtek-rtl-snand.c
··· 400 400 ctrl->mem_ops = &rtl_snand_mem_ops; 401 401 ctrl->bits_per_word_mask = SPI_BPW_MASK(8); 402 402 ctrl->mode_bits = SPI_RX_DUAL | SPI_RX_QUAD | SPI_TX_DUAL | SPI_TX_QUAD; 403 - device_set_node(&ctrl->dev, dev_fwnode(dev)); 404 403 405 404 return devm_spi_register_controller(dev, ctrl); 406 405 }
-1
drivers/spi/spi-realtek-rtl.c
··· 169 169 170 170 init_hw(rtspi); 171 171 172 - ctrl->dev.of_node = pdev->dev.of_node; 173 172 ctrl->flags = SPI_CONTROLLER_HALF_DUPLEX; 174 173 ctrl->set_cs = rt_set_cs; 175 174 ctrl->transfer_one = transfer_one;
-1
drivers/spi/spi-rockchip-sfc.c
··· 622 622 host->flags = SPI_CONTROLLER_HALF_DUPLEX; 623 623 host->mem_ops = &rockchip_sfc_mem_ops; 624 624 host->mem_caps = &rockchip_sfc_mem_caps; 625 - host->dev.of_node = pdev->dev.of_node; 626 625 host->mode_bits = SPI_TX_QUAD | SPI_TX_DUAL | SPI_RX_QUAD | SPI_RX_DUAL; 627 626 host->max_speed_hz = SFC_MAX_SPEED; 628 627 host->num_chipselect = SFC_MAX_CHIPSELECT_NUM;
+2 -3
drivers/spi/spi-rockchip.c
··· 805 805 if (ret < 0) 806 806 goto err_put_ctlr; 807 807 808 - ret = devm_request_threaded_irq(&pdev->dev, ret, rockchip_spi_isr, NULL, 809 - IRQF_ONESHOT, dev_name(&pdev->dev), ctlr); 808 + ret = devm_request_irq(&pdev->dev, ret, rockchip_spi_isr, 0, 809 + dev_name(&pdev->dev), ctlr); 810 810 if (ret) 811 811 goto err_put_ctlr; 812 812 ··· 858 858 ctlr->num_chipselect = num_cs; 859 859 ctlr->use_gpio_descriptors = true; 860 860 } 861 - ctlr->dev.of_node = pdev->dev.of_node; 862 861 ctlr->bits_per_word_mask = SPI_BPW_MASK(16) | SPI_BPW_MASK(8) | SPI_BPW_MASK(4); 863 862 ctlr->min_speed_hz = rs->freq / BAUDR_SCKDV_MAX; 864 863 ctlr->max_speed_hz = min(rs->freq / BAUDR_SCKDV_MIN, MAX_SCLK_OUT);
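Since rockchip_spi_isr() has no threaded half, the plain request does the same job here: devm_request_irq() is just the NULL-thread_fn convenience wrapper, paraphrased from include/linux/interrupt.h:

	static inline int devm_request_irq(struct device *dev, unsigned int irq,
					   irq_handler_t handler, unsigned long irqflags,
					   const char *devname, void *dev_id)
	{
		return devm_request_threaded_irq(dev, irq, handler, NULL,
						 irqflags, devname, dev_id);
	}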
-1
drivers/spi/spi-rspi.c
··· 1338 1338 ctlr->min_speed_hz = DIV_ROUND_UP(clksrc, ops->max_div); 1339 1339 ctlr->max_speed_hz = DIV_ROUND_UP(clksrc, ops->min_div); 1340 1340 ctlr->flags = ops->flags; 1341 - ctlr->dev.of_node = pdev->dev.of_node; 1342 1341 ctlr->use_gpio_descriptors = true; 1343 1342 ctlr->max_native_cs = rspi->ops->num_hw_ss; 1344 1343
+216 -63
drivers/spi/spi-rzv2h-rspi.c
··· 9 9 #include <linux/bitops.h> 10 10 #include <linux/bits.h> 11 11 #include <linux/clk.h> 12 + #include <linux/dmaengine.h> 12 13 #include <linux/interrupt.h> 13 14 #include <linux/io.h> 14 15 #include <linux/limits.h> ··· 21 20 #include <linux/reset.h> 22 21 #include <linux/spi/spi.h> 23 22 #include <linux/wait.h> 23 + 24 + #include "internals.h" 24 25 25 26 /* Registers */ 26 27 #define RSPI_SPDR 0x00 ··· 40 37 /* Register SPCR */ 41 38 #define RSPI_SPCR_BPEN BIT(31) 42 39 #define RSPI_SPCR_MSTR BIT(30) 40 + #define RSPI_SPCR_SPTIE BIT(20) 43 41 #define RSPI_SPCR_SPRIE BIT(17) 44 42 #define RSPI_SPCR_SCKASE BIT(12) 45 43 #define RSPI_SPCR_SPE BIT(0) ··· 97 93 }; 98 94 99 95 struct rzv2h_rspi_priv { 100 - struct reset_control_bulk_data resets[RSPI_RESET_NUM]; 101 96 struct spi_controller *controller; 102 97 const struct rzv2h_rspi_info *info; 98 + struct platform_device *pdev; 103 99 void __iomem *base; 104 100 struct clk *tclk; 105 101 struct clk *pclk; 106 102 wait_queue_head_t wait; 107 103 unsigned int bytes_per_word; 104 + int irq_rx; 108 105 u32 last_speed_hz; 109 106 u32 freq; 110 107 u16 status; 111 108 u8 spr; 112 109 u8 brdv; 113 110 bool use_pclk; 111 + bool dma_callbacked; 114 112 }; 115 113 116 114 #define RZV2H_RSPI_TX(func, type) \ 117 115 static inline void rzv2h_rspi_tx_##type(struct rzv2h_rspi_priv *rspi, \ 118 116 const void *txbuf, \ 119 117 unsigned int index) { \ 120 - type buf = 0; \ 121 - \ 122 - if (txbuf) \ 123 - buf = ((type *)txbuf)[index]; \ 124 - \ 118 + type buf = ((type *)txbuf)[index]; \ 125 119 func(buf, rspi->base + RSPI_SPDR); \ 126 120 } 127 121 ··· 128 126 void *rxbuf, \ 129 127 unsigned int index) { \ 130 128 type buf = func(rspi->base + RSPI_SPDR); \ 131 - \ 132 - if (rxbuf) \ 133 - ((type *)rxbuf)[index] = buf; \ 129 + ((type *)rxbuf)[index] = buf; \ 134 130 } 135 131 136 132 RZV2H_RSPI_TX(writel, u32) ··· 224 224 return 0; 225 225 } 226 226 227 - static int rzv2h_rspi_transfer_one(struct spi_controller *controller, 228 - struct spi_device *spi, 229 - struct spi_transfer *transfer) 227 + static bool rzv2h_rspi_can_dma(struct spi_controller *ctlr, struct spi_device *spi, 228 + struct spi_transfer *xfer) 230 229 { 231 - struct rzv2h_rspi_priv *rspi = spi_controller_get_devdata(controller); 232 - unsigned int words_to_transfer, i; 233 - int ret = 0; 230 + struct rzv2h_rspi_priv *rspi = spi_controller_get_devdata(ctlr); 234 231 235 - transfer->effective_speed_hz = rspi->freq; 236 - words_to_transfer = transfer->len / rspi->bytes_per_word; 232 + if (ctlr->fallback) 233 + return false; 234 + 235 + if (!ctlr->dma_tx || !ctlr->dma_rx) 236 + return false; 237 + 238 + return xfer->len > rspi->info->fifo_size; 239 + } 240 + 241 + static int rzv2h_rspi_transfer_pio(struct rzv2h_rspi_priv *rspi, 242 + struct spi_device *spi, 243 + struct spi_transfer *transfer, 244 + unsigned int words_to_transfer) 245 + { 246 + unsigned int i; 247 + int ret = 0; 237 248 238 249 for (i = 0; i < words_to_transfer; i++) { 239 250 rzv2h_rspi_clear_all_irqs(rspi); ··· 256 245 break; 257 246 } 258 247 248 + return ret; 249 + } 250 + 251 + static void rzv2h_rspi_dma_complete(void *arg) 252 + { 253 + struct rzv2h_rspi_priv *rspi = arg; 254 + 255 + rspi->dma_callbacked = 1; 256 + wake_up_interruptible(&rspi->wait); 257 + } 258 + 259 + static struct dma_async_tx_descriptor * 260 + rzv2h_rspi_setup_dma_channel(struct rzv2h_rspi_priv *rspi, 261 + struct dma_chan *chan, struct sg_table *sg, 262 + enum dma_slave_buswidth width, 263 + enum dma_transfer_direction direction) 264 + { 265 + 
struct dma_slave_config config = { 266 + .dst_addr = rspi->pdev->resource->start + RSPI_SPDR, 267 + .src_addr = rspi->pdev->resource->start + RSPI_SPDR, 268 + .dst_addr_width = width, 269 + .src_addr_width = width, 270 + .direction = direction, 271 + }; 272 + struct dma_async_tx_descriptor *desc; 273 + int ret; 274 + 275 + ret = dmaengine_slave_config(chan, &config); 276 + if (ret) 277 + return ERR_PTR(ret); 278 + 279 + desc = dmaengine_prep_slave_sg(chan, sg->sgl, sg->nents, direction, 280 + DMA_PREP_INTERRUPT | DMA_CTRL_ACK); 281 + if (!desc) 282 + return ERR_PTR(-EAGAIN); 283 + 284 + if (direction == DMA_DEV_TO_MEM) { 285 + desc->callback = rzv2h_rspi_dma_complete; 286 + desc->callback_param = rspi; 287 + } 288 + 289 + return desc; 290 + } 291 + 292 + static enum dma_slave_buswidth 293 + rzv2h_rspi_dma_width(struct rzv2h_rspi_priv *rspi) 294 + { 295 + switch (rspi->bytes_per_word) { 296 + case 4: 297 + return DMA_SLAVE_BUSWIDTH_4_BYTES; 298 + case 2: 299 + return DMA_SLAVE_BUSWIDTH_2_BYTES; 300 + case 1: 301 + return DMA_SLAVE_BUSWIDTH_1_BYTE; 302 + default: 303 + return DMA_SLAVE_BUSWIDTH_UNDEFINED; 304 + } 305 + } 306 + 307 + static int rzv2h_rspi_transfer_dma(struct rzv2h_rspi_priv *rspi, 308 + struct spi_device *spi, 309 + struct spi_transfer *transfer, 310 + unsigned int words_to_transfer) 311 + { 312 + struct dma_async_tx_descriptor *tx_desc = NULL, *rx_desc = NULL; 313 + enum dma_slave_buswidth width; 314 + dma_cookie_t cookie; 315 + int ret; 316 + 317 + width = rzv2h_rspi_dma_width(rspi); 318 + if (width == DMA_SLAVE_BUSWIDTH_UNDEFINED) 319 + return -EINVAL; 320 + 321 + rx_desc = rzv2h_rspi_setup_dma_channel(rspi, rspi->controller->dma_rx, 322 + &transfer->rx_sg, width, 323 + DMA_DEV_TO_MEM); 324 + if (IS_ERR(rx_desc)) 325 + return PTR_ERR(rx_desc); 326 + 327 + tx_desc = rzv2h_rspi_setup_dma_channel(rspi, rspi->controller->dma_tx, 328 + &transfer->tx_sg, width, 329 + DMA_MEM_TO_DEV); 330 + if (IS_ERR(tx_desc)) 331 + return PTR_ERR(tx_desc); 332 + 333 + cookie = dmaengine_submit(rx_desc); 334 + if (dma_submit_error(cookie)) 335 + return cookie; 336 + 337 + cookie = dmaengine_submit(tx_desc); 338 + if (dma_submit_error(cookie)) { 339 + dmaengine_terminate_sync(rspi->controller->dma_rx); 340 + return cookie; 341 + } 342 + 343 + /* 344 + * DMA transfer does not need IRQs to be enabled. 345 + * For PIO, we only use RX IRQ, so disable that. 
346 + */ 347 + disable_irq(rspi->irq_rx); 348 + 349 + rspi->dma_callbacked = 0; 350 + 351 + dma_async_issue_pending(rspi->controller->dma_rx); 352 + dma_async_issue_pending(rspi->controller->dma_tx); 259 353 rzv2h_rspi_clear_all_irqs(rspi); 260 354 261 - if (ret) 262 - transfer->error = SPI_TRANS_FAIL_IO; 355 + ret = wait_event_interruptible_timeout(rspi->wait, rspi->dma_callbacked, HZ); 356 + if (ret) { 357 + dmaengine_synchronize(rspi->controller->dma_tx); 358 + dmaengine_synchronize(rspi->controller->dma_rx); 359 + ret = 0; 360 + } else { 361 + dmaengine_terminate_sync(rspi->controller->dma_tx); 362 + dmaengine_terminate_sync(rspi->controller->dma_rx); 363 + ret = -ETIMEDOUT; 364 + } 263 365 264 - spi_finalize_current_transfer(controller); 366 + enable_irq(rspi->irq_rx); 367 + 368 + return ret; 369 + } 370 + 371 + static int rzv2h_rspi_transfer_one(struct spi_controller *controller, 372 + struct spi_device *spi, 373 + struct spi_transfer *transfer) 374 + { 375 + struct rzv2h_rspi_priv *rspi = spi_controller_get_devdata(controller); 376 + bool is_dma = spi_xfer_is_dma_mapped(controller, spi, transfer); 377 + unsigned int words_to_transfer; 378 + int ret; 379 + 380 + transfer->effective_speed_hz = rspi->freq; 381 + words_to_transfer = transfer->len / rspi->bytes_per_word; 382 + 383 + if (is_dma) 384 + ret = rzv2h_rspi_transfer_dma(rspi, spi, transfer, words_to_transfer); 385 + else 386 + ret = rzv2h_rspi_transfer_pio(rspi, spi, transfer, words_to_transfer); 387 + 388 + rzv2h_rspi_clear_all_irqs(rspi); 389 + 390 + if (is_dma && ret == -EAGAIN) 391 + /* Retry with PIO */ 392 + transfer->error = SPI_TRANS_FAIL_NO_START; 265 393 266 394 return ret; 267 395 } ··· 635 485 /* SPI receive buffer full interrupt enable */ 636 486 conf32 |= RSPI_SPCR_SPRIE; 637 487 488 + /* SPI transmit buffer empty interrupt enable */ 489 + conf32 |= RSPI_SPCR_SPTIE; 490 + 638 491 /* Bypass synchronization circuit */ 639 492 conf32 |= FIELD_PREP(RSPI_SPCR_BPEN, rspi->use_pclk); 640 493 ··· 665 512 writeb(0, rspi->base + RSPI_SSLP); 666 513 667 514 /* Setup FIFO thresholds */ 668 - conf16 = FIELD_PREP(RSPI_SPDCR2_TTRG, rspi->info->fifo_size - 1); 515 + conf16 = FIELD_PREP(RSPI_SPDCR2_TTRG, 0); 669 516 conf16 |= FIELD_PREP(RSPI_SPDCR2_RTRG, 0); 670 517 writew(conf16, rspi->base + RSPI_SPDCR2); 671 518 ··· 691 538 struct spi_controller *controller; 692 539 struct device *dev = &pdev->dev; 693 540 struct rzv2h_rspi_priv *rspi; 541 + struct reset_control *reset; 694 542 struct clk_bulk_data *clks; 695 - int irq_rx, ret, i; 696 543 long tclk_rate; 544 + int ret, i; 697 545 698 546 controller = devm_spi_alloc_host(dev, sizeof(*rspi)); 699 547 if (!controller) ··· 704 550 platform_set_drvdata(pdev, rspi); 705 551 706 552 rspi->controller = controller; 553 + rspi->pdev = pdev; 707 554 708 555 rspi->info = device_get_match_data(dev); 709 556 ··· 728 573 if (!rspi->tclk) 729 574 return dev_err_probe(dev, -EINVAL, "Failed to get tclk\n"); 730 575 731 - rspi->resets[0].id = "presetn"; 732 - rspi->resets[1].id = "tresetn"; 733 - ret = devm_reset_control_bulk_get_optional_exclusive(dev, RSPI_RESET_NUM, 734 - rspi->resets); 735 - if (ret) 736 - return dev_err_probe(dev, ret, "cannot get resets\n"); 576 + reset = devm_reset_control_get_optional_exclusive_deasserted(&pdev->dev, 577 + "presetn"); 578 + if (IS_ERR(reset)) 579 + return dev_err_probe(&pdev->dev, PTR_ERR(reset), 580 + "cannot get presetn reset\n"); 737 581 738 - irq_rx = platform_get_irq_byname(pdev, "rx"); 739 - if (irq_rx < 0) 740 - return dev_err_probe(dev, irq_rx, 
"cannot get IRQ 'rx'\n"); 582 + reset = devm_reset_control_get_optional_exclusive_deasserted(&pdev->dev, 583 + "tresetn"); 584 + if (IS_ERR(reset)) 585 + return dev_err_probe(&pdev->dev, PTR_ERR(reset), 586 + "cannot get tresetn reset\n"); 741 587 742 - ret = reset_control_bulk_deassert(RSPI_RESET_NUM, rspi->resets); 743 - if (ret) 744 - return dev_err_probe(dev, ret, "failed to deassert resets\n"); 588 + rspi->irq_rx = platform_get_irq_byname(pdev, "rx"); 589 + if (rspi->irq_rx < 0) 590 + return dev_err_probe(dev, rspi->irq_rx, "cannot get IRQ 'rx'\n"); 745 591 746 592 init_waitqueue_head(&rspi->wait); 747 593 748 - ret = devm_request_irq(dev, irq_rx, rzv2h_rx_irq_handler, 0, 594 + ret = devm_request_irq(dev, rspi->irq_rx, rzv2h_rx_irq_handler, 0, 749 595 dev_name(dev), rspi); 750 596 if (ret) { 751 597 dev_err(dev, "cannot request `rx` IRQ\n"); 752 - goto quit_resets; 598 + return ret; 753 599 } 754 600 755 601 controller->mode_bits = SPI_CPHA | SPI_CPOL | SPI_CS_HIGH | 756 602 SPI_LSB_FIRST | SPI_LOOP; 603 + controller->flags = SPI_CONTROLLER_MUST_RX | SPI_CONTROLLER_MUST_TX; 757 604 controller->bits_per_word_mask = SPI_BPW_RANGE_MASK(4, 32); 758 605 controller->prepare_message = rzv2h_rspi_prepare_message; 759 606 controller->unprepare_message = rzv2h_rspi_unprepare_message; 760 607 controller->num_chipselect = 4; 761 608 controller->transfer_one = rzv2h_rspi_transfer_one; 609 + controller->can_dma = rzv2h_rspi_can_dma; 762 610 763 611 tclk_rate = clk_round_rate(rspi->tclk, 0); 764 - if (tclk_rate < 0) { 765 - ret = tclk_rate; 766 - goto quit_resets; 767 - } 612 + if (tclk_rate < 0) 613 + return tclk_rate; 768 614 769 615 controller->min_speed_hz = rzv2h_rspi_calc_bitrate(tclk_rate, 770 616 RSPI_SPBR_SPR_MAX, 771 617 RSPI_SPCMD_BRDV_MAX); 772 618 773 619 tclk_rate = clk_round_rate(rspi->tclk, ULONG_MAX); 774 - if (tclk_rate < 0) { 775 - ret = tclk_rate; 776 - goto quit_resets; 777 - } 620 + if (tclk_rate < 0) 621 + return tclk_rate; 778 622 779 623 controller->max_speed_hz = rzv2h_rspi_calc_bitrate(tclk_rate, 780 624 RSPI_SPBR_SPR_MIN, 781 625 RSPI_SPCMD_BRDV_MIN); 782 626 783 - device_set_node(&controller->dev, dev_fwnode(dev)); 784 - 785 - ret = spi_register_controller(controller); 786 - if (ret) { 787 - dev_err(dev, "register controller failed\n"); 788 - goto quit_resets; 627 + controller->dma_tx = devm_dma_request_chan(dev, "tx"); 628 + if (IS_ERR(controller->dma_tx)) { 629 + ret = dev_warn_probe(dev, PTR_ERR(controller->dma_tx), 630 + "failed to request TX DMA channel\n"); 631 + if (ret == -EPROBE_DEFER) 632 + return ret; 633 + controller->dma_tx = NULL; 789 634 } 790 635 791 - return 0; 636 + controller->dma_rx = devm_dma_request_chan(dev, "rx"); 637 + if (IS_ERR(controller->dma_rx)) { 638 + ret = dev_warn_probe(dev, PTR_ERR(controller->dma_rx), 639 + "failed to request RX DMA channel\n"); 640 + if (ret == -EPROBE_DEFER) 641 + return ret; 642 + controller->dma_rx = NULL; 643 + } 792 644 793 - quit_resets: 794 - reset_control_bulk_assert(RSPI_RESET_NUM, rspi->resets); 645 + ret = devm_spi_register_controller(dev, controller); 646 + if (ret) 647 + dev_err(dev, "register controller failed\n"); 795 648 796 649 return ret; 797 - } 798 - 799 - static void rzv2h_rspi_remove(struct platform_device *pdev) 800 - { 801 - struct rzv2h_rspi_priv *rspi = platform_get_drvdata(pdev); 802 - 803 - spi_unregister_controller(rspi->controller); 804 - 805 - reset_control_bulk_assert(RSPI_RESET_NUM, rspi->resets); 806 650 } 807 651 808 652 static const struct rzv2h_rspi_info rzv2h_info = { ··· 828 674 
829 675 static struct platform_driver rzv2h_rspi_drv = { 830 676 .probe = rzv2h_rspi_probe, 831 - .remove = rzv2h_rspi_remove, 832 677 .driver = { 833 678 .name = "rzv2h_rspi", 834 679 .of_match_table = rzv2h_rspi_match,
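A hedged client-side sketch of what the new can_dma() rule means in practice: a transfer longer than the controller FIFO (whatever rzv2h_info.fifo_size is for the SoC) gets DMA-mapped by the SPI core and takes the DMA path added above, while shorter transfers stay on PIO. All names below are illustrative, not taken from an in-tree client:

	struct spi_transfer xfer = {
		.tx_buf = tx_buf,
		.rx_buf = rx_buf,
		.len = 64,		/* assumed to exceed fifo_size -> DMA path */
		.bits_per_word = 8,
	};
	struct spi_message msg;
	int ret;

	spi_message_init_with_transfers(&msg, &xfer, 1);
	ret = spi_sync(spi, &msg);	/* 'spi' is the client's struct spi_device */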
-2
drivers/spi/spi-rzv2m-csi.c
··· 634 634 controller->use_gpio_descriptors = true; 635 635 controller->target_abort = rzv2m_csi_target_abort; 636 636 637 - device_set_node(&controller->dev, dev_fwnode(dev)); 638 - 639 637 ret = devm_request_irq(dev, irq, rzv2m_csi_irq_handler, 0, 640 638 dev_name(dev), csi); 641 639 if (ret)
-1
drivers/spi/spi-s3c64xx.c
··· 1295 1295 sdd->tx_dma.direction = DMA_MEM_TO_DEV; 1296 1296 sdd->rx_dma.direction = DMA_DEV_TO_MEM; 1297 1297 1298 - host->dev.of_node = pdev->dev.of_node; 1299 1298 host->bus_num = -1; 1300 1299 host->setup = s3c64xx_spi_setup; 1301 1300 host->cleanup = s3c64xx_spi_cleanup;
-2
drivers/spi/spi-sc18is602.c
··· 251 251 if (!host) 252 252 return -ENOMEM; 253 253 254 - device_set_node(&host->dev, dev_fwnode(dev)); 255 - 256 254 hw = spi_controller_get_devdata(host); 257 255 258 256 /* assert reset and then release */
-1
drivers/spi/spi-sg2044-nor.c
··· 455 455 return PTR_ERR(spifmc->io_base); 456 456 457 457 ctrl->num_chipselect = 1; 458 - ctrl->dev.of_node = pdev->dev.of_node; 459 458 ctrl->bits_per_word_mask = SPI_BPW_MASK(8); 460 459 ctrl->auto_runtime_pm = false; 461 460 ctrl->mem_ops = &sg2044_spifmc_mem_ops;
-1
drivers/spi/spi-sh-hspi.c
··· 253 253 254 254 ctlr->bus_num = pdev->id; 255 255 ctlr->mode_bits = SPI_CPOL | SPI_CPHA; 256 - ctlr->dev.of_node = pdev->dev.of_node; 257 256 ctlr->auto_runtime_pm = true; 258 257 ctlr->transfer_one_message = hspi_transfer_one_message; 259 258 ctlr->bits_per_word_mask = SPI_BPW_MASK(8);
-1
drivers/spi/spi-sh-msiof.c
··· 1276 1276 ctlr->flags = chipdata->ctlr_flags; 1277 1277 ctlr->bus_num = pdev->id; 1278 1278 ctlr->num_chipselect = p->info->num_chipselect; 1279 - ctlr->dev.of_node = dev->of_node; 1280 1279 ctlr->setup = sh_msiof_spi_setup; 1281 1280 ctlr->prepare_message = sh_msiof_prepare_message; 1282 1281 ctlr->target_abort = sh_msiof_target_abort;
-1
drivers/spi/spi-sifive.c
··· 368 368 } 369 369 370 370 /* Define our host */ 371 - host->dev.of_node = pdev->dev.of_node; 372 371 host->bus_num = pdev->id; 373 372 host->num_chipselect = num_cs; 374 373 host->mode_bits = SPI_CPHA | SPI_CPOL
-1
drivers/spi/spi-slave-mt27xx.c
··· 395 395 } 396 396 397 397 ctlr->auto_runtime_pm = true; 398 - ctlr->dev.of_node = pdev->dev.of_node; 399 398 ctlr->mode_bits = SPI_CPOL | SPI_CPHA; 400 399 ctlr->mode_bits |= SPI_LSB_FIRST; 401 400
-1
drivers/spi/spi-sn-f-ospi.c
··· 628 628 return -ENOMEM; 629 629 } 630 630 ctlr->num_chipselect = num_cs; 631 - ctlr->dev.of_node = dev->of_node; 632 631 633 632 ospi = spi_controller_get_devdata(ctlr); 634 633 ospi->dev = dev;
-1
drivers/spi/spi-sprd-adi.c
··· 566 566 if (sadi->data->wdg_rst) 567 567 sadi->data->wdg_rst(sadi); 568 568 569 - ctlr->dev.of_node = pdev->dev.of_node; 570 569 ctlr->bus_num = pdev->id; 571 570 ctlr->num_chipselect = num_chipselect; 572 571 ctlr->flags = SPI_CONTROLLER_HALF_DUPLEX;
-1
drivers/spi/spi-sprd.c
··· 936 936 937 937 ss->phy_base = res->start; 938 938 ss->dev = &pdev->dev; 939 - sctlr->dev.of_node = pdev->dev.of_node; 940 939 sctlr->mode_bits = SPI_CPOL | SPI_CPHA | SPI_3WIRE | SPI_TX_DUAL; 941 940 sctlr->bus_num = pdev->id; 942 941 sctlr->set_cs = sprd_spi_chipselect;
+2 -2
drivers/spi/spi-st-ssc4.c
··· 403 403 return ret; 404 404 } 405 405 406 - static int __maybe_unused spi_st_suspend(struct device *dev) 406 + static int spi_st_suspend(struct device *dev) 407 407 { 408 408 struct spi_controller *host = dev_get_drvdata(dev); 409 409 int ret; ··· 415 415 return pm_runtime_force_suspend(dev); 416 416 } 417 417 418 - static int __maybe_unused spi_st_resume(struct device *dev) 418 + static int spi_st_resume(struct device *dev) 419 419 { 420 420 struct spi_controller *host = dev_get_drvdata(dev); 421 421 int ret;
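Dropping __maybe_unused from these callbacks normally pairs with the always-referencing PM-ops macros, where unused code is discarded through pm_sleep_ptr()/pm_ptr() rather than hidden behind #ifdef. The st-ssc4 side of that conversion is outside this hunk, so the sketch below uses assumed names:

	static const struct dev_pm_ops spi_st_pm_ops = {
		SYSTEM_SLEEP_PM_OPS(spi_st_suspend, spi_st_resume)
	};

	/* in the platform_driver definition */
	.pm = pm_sleep_ptr(&spi_st_pm_ops),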
+74 -49
drivers/spi/spi-stm32-ospi.c
··· 34 34 #define CR_ABORT BIT(1) 35 35 #define CR_DMAEN BIT(2) 36 36 #define CR_FTHRES_SHIFT 8 37 - #define CR_TEIE BIT(16) 38 - #define CR_TCIE BIT(17) 39 37 #define CR_SMIE BIT(19) 40 38 #define CR_APMS BIT(22) 41 39 #define CR_CSSEL BIT(24) ··· 104 106 #define STM32_ABT_TIMEOUT_US 100000 105 107 #define STM32_COMP_TIMEOUT_MS 5000 106 108 #define STM32_BUSY_TIMEOUT_US 100000 107 - 109 + #define STM32_WAIT_CMD_TIMEOUT_US 5000 108 110 109 111 #define STM32_AUTOSUSPEND_DELAY -1 110 112 ··· 114 116 struct clk *clk; 115 117 struct reset_control *rstc; 116 118 117 - struct completion data_completion; 118 119 struct completion match_completion; 119 120 120 121 struct dma_chan *dma_chtx; ··· 139 142 struct mutex lock; 140 143 }; 141 144 142 - static void stm32_ospi_read_fifo(u8 *val, void __iomem *addr) 145 + static void stm32_ospi_read_fifo(void *val, void __iomem *addr, u8 len) 143 146 { 144 - *val = readb_relaxed(addr); 147 + switch (len) { 148 + case sizeof(u32): 149 + *((u32 *)val) = readl_relaxed(addr); 150 + break; 151 + case sizeof(u16): 152 + *((u16 *)val) = readw_relaxed(addr); 153 + break; 154 + case sizeof(u8): 155 + *((u8 *)val) = readb_relaxed(addr); 156 + } 145 157 } 146 158 147 - static void stm32_ospi_write_fifo(u8 *val, void __iomem *addr) 159 + static void stm32_ospi_write_fifo(void *val, void __iomem *addr, u8 len) 148 160 { 149 - writeb_relaxed(*val, addr); 161 + switch (len) { 162 + case sizeof(u32): 163 + writel_relaxed(*((u32 *)val), addr); 164 + break; 165 + case sizeof(u16): 166 + writew_relaxed(*((u16 *)val), addr); 167 + break; 168 + case sizeof(u8): 169 + writeb_relaxed(*((u8 *)val), addr); 170 + } 150 171 } 151 172 152 173 static int stm32_ospi_abort(struct stm32_ospi *ospi) ··· 187 172 return timeout; 188 173 } 189 174 190 - static int stm32_ospi_poll(struct stm32_ospi *ospi, u8 *buf, u32 len, bool read) 175 + static int stm32_ospi_poll(struct stm32_ospi *ospi, void *buf, u32 len, bool read) 191 176 { 192 177 void __iomem *regs_base = ospi->regs_base; 193 - void (*fifo)(u8 *val, void __iomem *addr); 178 + void (*fifo)(void *val, void __iomem *addr, u8 len); 194 179 u32 sr; 195 180 int ret; 181 + u8 step; 196 182 197 183 if (read) 198 184 fifo = stm32_ospi_read_fifo; 199 185 else 200 186 fifo = stm32_ospi_write_fifo; 201 187 202 - while (len--) { 188 + while (len) { 203 189 ret = readl_relaxed_poll_timeout_atomic(regs_base + OSPI_SR, 204 190 sr, sr & SR_FTF, 1, 205 191 STM32_FIFO_TIMEOUT_US); ··· 209 193 len, sr); 210 194 return ret; 211 195 } 212 - fifo(buf++, regs_base + OSPI_DR); 196 + 197 + if (len >= sizeof(u32)) 198 + step = sizeof(u32); 199 + else if (len >= sizeof(u16)) 200 + step = sizeof(u16); 201 + else 202 + step = sizeof(u8); 203 + 204 + fifo(buf, regs_base + OSPI_DR, step); 205 + len -= step; 206 + buf += step; 213 207 } 214 208 215 209 return 0; ··· 237 211 static int stm32_ospi_wait_cmd(struct stm32_ospi *ospi) 238 212 { 239 213 void __iomem *regs_base = ospi->regs_base; 240 - u32 cr, sr; 214 + u32 sr; 241 215 int err = 0; 242 216 243 - if ((readl_relaxed(regs_base + OSPI_SR) & SR_TCF) || 244 - ospi->fmode == CR_FMODE_APM) 217 + if (ospi->fmode == CR_FMODE_APM) 245 218 goto out; 246 219 247 - reinit_completion(&ospi->data_completion); 248 - cr = readl_relaxed(regs_base + OSPI_CR); 249 - writel_relaxed(cr | CR_TCIE | CR_TEIE, regs_base + OSPI_CR); 220 + err = readl_relaxed_poll_timeout_atomic(ospi->regs_base + OSPI_SR, sr, 221 + (sr & (SR_TEF | SR_TCF)), 1, 222 + STM32_WAIT_CMD_TIMEOUT_US); 250 223 251 - if 
(!wait_for_completion_timeout(&ospi->data_completion, 252 - msecs_to_jiffies(STM32_COMP_TIMEOUT_MS))) 253 - err = -ETIMEDOUT; 254 - 255 - sr = readl_relaxed(regs_base + OSPI_SR); 256 224 if (sr & SR_TCF) 257 225 /* avoid false timeout */ 258 226 err = 0; ··· 279 259 cr = readl_relaxed(regs_base + OSPI_CR); 280 260 sr = readl_relaxed(regs_base + OSPI_SR); 281 261 282 - if (cr & CR_SMIE && sr & SR_SMF) { 262 + if (sr & SR_SMF) { 283 263 /* disable irq */ 284 264 cr &= ~CR_SMIE; 285 265 writel_relaxed(cr, regs_base + OSPI_CR); 286 266 complete(&ospi->match_completion); 287 - 288 - return IRQ_HANDLED; 289 - } 290 - 291 - if (sr & (SR_TEF | SR_TCF)) { 292 - /* disable irq */ 293 - cr &= ~CR_TCIE & ~CR_TEIE; 294 - writel_relaxed(cr, regs_base + OSPI_CR); 295 - complete(&ospi->data_completion); 296 267 } 297 268 298 269 return IRQ_HANDLED; 299 270 } 300 271 301 - static void stm32_ospi_dma_setup(struct stm32_ospi *ospi, 302 - struct dma_slave_config *dma_cfg) 272 + static int stm32_ospi_dma_setup(struct stm32_ospi *ospi, 273 + struct dma_slave_config *dma_cfg) 303 274 { 275 + struct dma_slave_caps caps; 276 + int ret = 0; 277 + 304 278 if (dma_cfg && ospi->dma_chrx) { 279 + ret = dma_get_slave_caps(ospi->dma_chrx, &caps); 280 + if (ret) 281 + return ret; 282 + 283 + dma_cfg->src_maxburst = caps.max_burst / dma_cfg->src_addr_width; 284 + 305 285 if (dmaengine_slave_config(ospi->dma_chrx, dma_cfg)) { 306 286 dev_err(ospi->dev, "dma rx config failed\n"); 307 287 dma_release_channel(ospi->dma_chrx); ··· 310 290 } 311 291 312 292 if (dma_cfg && ospi->dma_chtx) { 293 + ret = dma_get_slave_caps(ospi->dma_chtx, &caps); 294 + if (ret) 295 + return ret; 296 + 297 + dma_cfg->dst_maxburst = caps.max_burst / dma_cfg->dst_addr_width; 298 + 313 299 if (dmaengine_slave_config(ospi->dma_chtx, dma_cfg)) { 314 300 dev_err(ospi->dev, "dma tx config failed\n"); 315 301 dma_release_channel(ospi->dma_chtx); ··· 324 298 } 325 299 326 300 init_completion(&ospi->dma_completion); 301 + 302 + return ret; 327 303 } 328 304 329 305 static int stm32_ospi_tx_mm(struct stm32_ospi *ospi, ··· 419 391 if (op->data.dir == SPI_MEM_DATA_IN) 420 392 buf = op->data.buf.in; 421 393 else 422 - buf = (u8 *)op->data.buf.out; 394 + buf = (void *)op->data.buf.out; 423 395 424 396 return stm32_ospi_poll(ospi, buf, op->data.nbytes, 425 397 op->data.dir == SPI_MEM_DATA_IN); ··· 866 838 dev_info(dev, "No memory-map region found\n"); 867 839 } 868 840 869 - init_completion(&ospi->data_completion); 870 841 init_completion(&ospi->match_completion); 871 842 872 843 return 0; ··· 926 899 dma_cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE; 927 900 dma_cfg.src_addr = ospi->regs_phys_base + OSPI_DR; 928 901 dma_cfg.dst_addr = ospi->regs_phys_base + OSPI_DR; 929 - dma_cfg.src_maxburst = 4; 930 - dma_cfg.dst_maxburst = 4; 931 - stm32_ospi_dma_setup(ospi, &dma_cfg); 902 + ret = stm32_ospi_dma_setup(ospi, &dma_cfg); 903 + if (ret) 904 + return ret; 932 905 933 906 mutex_init(&ospi->lock); 934 907 ··· 942 915 ctrl->use_gpio_descriptors = true; 943 916 ctrl->transfer_one_message = stm32_ospi_transfer_one_message; 944 917 ctrl->num_chipselect = STM32_OSPI_MAX_NORCHIP; 945 - ctrl->dev.of_node = dev->of_node; 946 918 947 919 pm_runtime_enable(ospi->dev); 948 920 pm_runtime_set_autosuspend_delay(ospi->dev, STM32_AUTOSUSPEND_DELAY); ··· 1011 985 pm_runtime_force_suspend(ospi->dev); 1012 986 } 1013 987 1014 - static int __maybe_unused stm32_ospi_suspend(struct device *dev) 988 + static int stm32_ospi_suspend(struct device *dev) 1015 989 { 1016 990 struct stm32_ospi 
*ospi = dev_get_drvdata(dev); 1017 991 ··· 1022 996 return pm_runtime_force_suspend(ospi->dev); 1023 997 } 1024 998 1025 - static int __maybe_unused stm32_ospi_resume(struct device *dev) 999 + static int stm32_ospi_resume(struct device *dev) 1026 1000 { 1027 1001 struct stm32_ospi *ospi = dev_get_drvdata(dev); 1028 1002 void __iomem *regs_base = ospi->regs_base; ··· 1051 1025 return 0; 1052 1026 } 1053 1027 1054 - static int __maybe_unused stm32_ospi_runtime_suspend(struct device *dev) 1028 + static int stm32_ospi_runtime_suspend(struct device *dev) 1055 1029 { 1056 1030 struct stm32_ospi *ospi = dev_get_drvdata(dev); 1057 1031 ··· 1060 1034 return 0; 1061 1035 } 1062 1036 1063 - static int __maybe_unused stm32_ospi_runtime_resume(struct device *dev) 1037 + static int stm32_ospi_runtime_resume(struct device *dev) 1064 1038 { 1065 1039 struct stm32_ospi *ospi = dev_get_drvdata(dev); 1066 1040 ··· 1068 1042 } 1069 1043 1070 1044 static const struct dev_pm_ops stm32_ospi_pm_ops = { 1071 - SET_SYSTEM_SLEEP_PM_OPS(stm32_ospi_suspend, stm32_ospi_resume) 1072 - SET_RUNTIME_PM_OPS(stm32_ospi_runtime_suspend, 1073 - stm32_ospi_runtime_resume, NULL) 1045 + SYSTEM_SLEEP_PM_OPS(stm32_ospi_suspend, stm32_ospi_resume) 1046 + RUNTIME_PM_OPS(stm32_ospi_runtime_suspend, stm32_ospi_runtime_resume, NULL) 1074 1047 }; 1075 1048 1076 1049 static const struct of_device_id stm32_ospi_of_match[] = { ··· 1083 1058 .remove = stm32_ospi_remove, 1084 1059 .driver = { 1085 1060 .name = "stm32-ospi", 1086 - .pm = &stm32_ospi_pm_ops, 1061 + .pm = pm_ptr(&stm32_ospi_pm_ops), 1087 1062 .of_match_table = stm32_ospi_of_match, 1088 1063 }, 1089 1064 };
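The stm32-ospi changes above stop hard-coding a DMA burst of 4 beats and instead derive the burst size from what the DMA channel advertises. A minimal sketch of that technique follows; the helper name is hypothetical and it assumes the caller already filled in both address widths of the dma_slave_config:

#include <linux/dmaengine.h>

/* Hypothetical helper: size DMA bursts from what the engine reports. */
static int example_dma_setup(struct dma_chan *chan, struct dma_slave_config *cfg)
{
	struct dma_slave_caps caps;
	int ret;

	ret = dma_get_slave_caps(chan, &caps);
	if (ret)
		return ret;

	/*
	 * Scale the advertised maximum by the configured register access
	 * width, mirroring the division done in the hunk above.
	 */
	cfg->src_maxburst = caps.max_burst / cfg->src_addr_width;
	cfg->dst_maxburst = caps.max_burst / cfg->dst_addr_width;

	return dmaengine_slave_config(chan, cfg);
}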
+72 -55
drivers/spi/spi-stm32-qspi.c
··· 31 31 #define CR_DFM BIT(6) 32 32 #define CR_FSEL BIT(7) 33 33 #define CR_FTHRES_SHIFT 8 34 - #define CR_TEIE BIT(16) 35 - #define CR_TCIE BIT(17) 36 34 #define CR_FTIE BIT(18) 37 35 #define CR_SMIE BIT(19) 38 36 #define CR_TOIE BIT(20) ··· 84 86 #define STM32_QSPI_MAX_MMAP_SZ SZ_256M 85 87 #define STM32_QSPI_MAX_NORCHIP 2 86 88 87 - #define STM32_FIFO_TIMEOUT_US 30000 88 - #define STM32_BUSY_TIMEOUT_US 100000 89 - #define STM32_ABT_TIMEOUT_US 100000 90 - #define STM32_COMP_TIMEOUT_MS 1000 91 - #define STM32_AUTOSUSPEND_DELAY -1 89 + #define STM32_FIFO_TIMEOUT_US 30000 90 + #define STM32_BUSY_TIMEOUT_US 100000 91 + #define STM32_ABT_TIMEOUT_US 100000 92 + #define STM32_WAIT_CMD_TIMEOUT_US 5000 93 + #define STM32_COMP_TIMEOUT_MS 1000 94 + #define STM32_AUTOSUSPEND_DELAY -1 92 95 93 96 struct stm32_qspi_flash { 94 97 u32 cs; ··· 106 107 struct clk *clk; 107 108 u32 clk_rate; 108 109 struct stm32_qspi_flash flash[STM32_QSPI_MAX_NORCHIP]; 109 - struct completion data_completion; 110 110 struct completion match_completion; 111 111 u32 fmode; 112 112 ··· 132 134 cr = readl_relaxed(qspi->io_base + QSPI_CR); 133 135 sr = readl_relaxed(qspi->io_base + QSPI_SR); 134 136 135 - if (cr & CR_SMIE && sr & SR_SMF) { 137 + if (sr & SR_SMF) { 136 138 /* disable irq */ 137 139 cr &= ~CR_SMIE; 138 140 writel_relaxed(cr, qspi->io_base + QSPI_CR); 139 141 complete(&qspi->match_completion); 140 - 141 - return IRQ_HANDLED; 142 - } 143 - 144 - if (sr & (SR_TEF | SR_TCF)) { 145 - /* disable irq */ 146 - cr &= ~CR_TCIE & ~CR_TEIE; 147 - writel_relaxed(cr, qspi->io_base + QSPI_CR); 148 - complete(&qspi->data_completion); 149 142 } 150 143 151 144 return IRQ_HANDLED; 152 145 } 153 146 154 - static void stm32_qspi_read_fifo(u8 *val, void __iomem *addr) 147 + static void stm32_qspi_read_fifo(void *val, void __iomem *addr, u8 len) 155 148 { 156 - *val = readb_relaxed(addr); 149 + switch (len) { 150 + case sizeof(u32): 151 + *((u32 *)val) = readl_relaxed(addr); 152 + break; 153 + case sizeof(u16): 154 + *((u16 *)val) = readw_relaxed(addr); 155 + break; 156 + case sizeof(u8): 157 + *((u8 *)val) = readb_relaxed(addr); 158 + } 157 159 } 158 160 159 - static void stm32_qspi_write_fifo(u8 *val, void __iomem *addr) 161 + static void stm32_qspi_write_fifo(void *val, void __iomem *addr, u8 len) 160 162 { 161 - writeb_relaxed(*val, addr); 163 + switch (len) { 164 + case sizeof(u32): 165 + writel_relaxed(*((u32 *)val), addr); 166 + break; 167 + case sizeof(u16): 168 + writew_relaxed(*((u16 *)val), addr); 169 + break; 170 + case sizeof(u8): 171 + writeb_relaxed(*((u8 *)val), addr); 172 + } 162 173 } 163 174 164 175 static int stm32_qspi_tx_poll(struct stm32_qspi *qspi, 165 176 const struct spi_mem_op *op) 166 177 { 167 - void (*tx_fifo)(u8 *val, void __iomem *addr); 178 + void (*fifo)(void *val, void __iomem *addr, u8 len); 168 179 u32 len = op->data.nbytes, sr; 169 - u8 *buf; 180 + void *buf; 170 181 int ret; 182 + u8 step; 171 183 172 184 if (op->data.dir == SPI_MEM_DATA_IN) { 173 - tx_fifo = stm32_qspi_read_fifo; 185 + fifo = stm32_qspi_read_fifo; 174 186 buf = op->data.buf.in; 175 187 176 188 } else { 177 - tx_fifo = stm32_qspi_write_fifo; 178 - buf = (u8 *)op->data.buf.out; 189 + fifo = stm32_qspi_write_fifo; 190 + buf = (void *)op->data.buf.out; 179 191 } 180 192 181 - while (len--) { 193 + while (len) { 182 194 ret = readl_relaxed_poll_timeout_atomic(qspi->io_base + QSPI_SR, 183 195 sr, (sr & SR_FTF), 1, 184 196 STM32_FIFO_TIMEOUT_US); ··· 197 189 len, sr); 198 190 return ret; 199 191 } 200 - tx_fifo(buf++, qspi->io_base 
+ QSPI_DR); 192 + 193 + if (len >= sizeof(u32)) 194 + step = sizeof(u32); 195 + else if (len >= sizeof(u16)) 196 + step = sizeof(u16); 197 + else 198 + step = sizeof(u8); 199 + 200 + fifo(buf, qspi->io_base + QSPI_DR, step); 201 + len -= step; 202 + buf += step; 201 203 } 202 204 203 205 return 0; ··· 319 301 320 302 static int stm32_qspi_wait_cmd(struct stm32_qspi *qspi) 321 303 { 322 - u32 cr, sr; 304 + u32 sr; 323 305 int err = 0; 324 306 325 - if ((readl_relaxed(qspi->io_base + QSPI_SR) & SR_TCF) || 326 - qspi->fmode == CCR_FMODE_APM) 307 + if (qspi->fmode == CCR_FMODE_APM) 327 308 goto out; 328 309 329 - reinit_completion(&qspi->data_completion); 330 - cr = readl_relaxed(qspi->io_base + QSPI_CR); 331 - writel_relaxed(cr | CR_TCIE | CR_TEIE, qspi->io_base + QSPI_CR); 310 + err = readl_relaxed_poll_timeout_atomic(qspi->io_base + QSPI_SR, sr, 311 + (sr & (SR_TEF | SR_TCF)), 1, 312 + STM32_WAIT_CMD_TIMEOUT_US); 332 313 333 - if (!wait_for_completion_timeout(&qspi->data_completion, 334 - msecs_to_jiffies(STM32_COMP_TIMEOUT_MS))) { 335 - err = -ETIMEDOUT; 336 - } else { 337 - sr = readl_relaxed(qspi->io_base + QSPI_SR); 338 - if (sr & SR_TEF) 339 - err = -EIO; 340 - } 314 + if (sr & SR_TEF) 315 + err = -EIO; 341 316 342 317 out: 343 318 /* clear flags */ ··· 700 689 { 701 690 struct dma_slave_config dma_cfg; 702 691 struct device *dev = qspi->dev; 692 + struct dma_slave_caps caps; 703 693 int ret = 0; 704 694 705 695 memset(&dma_cfg, 0, sizeof(dma_cfg)); ··· 709 697 dma_cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE; 710 698 dma_cfg.src_addr = qspi->phys_base + QSPI_DR; 711 699 dma_cfg.dst_addr = qspi->phys_base + QSPI_DR; 712 - dma_cfg.src_maxburst = 4; 713 - dma_cfg.dst_maxburst = 4; 714 700 715 701 qspi->dma_chrx = dma_request_chan(dev, "rx"); 716 702 if (IS_ERR(qspi->dma_chrx)) { ··· 717 707 if (ret == -EPROBE_DEFER) 718 708 goto out; 719 709 } else { 710 + ret = dma_get_slave_caps(qspi->dma_chrx, &caps); 711 + if (ret) 712 + return ret; 713 + 714 + dma_cfg.src_maxburst = caps.max_burst / dma_cfg.src_addr_width; 720 715 if (dmaengine_slave_config(qspi->dma_chrx, &dma_cfg)) { 721 716 dev_err(dev, "dma rx config failed\n"); 722 717 dma_release_channel(qspi->dma_chrx); ··· 734 719 ret = PTR_ERR(qspi->dma_chtx); 735 720 qspi->dma_chtx = NULL; 736 721 } else { 722 + ret = dma_get_slave_caps(qspi->dma_chtx, &caps); 723 + if (ret) 724 + return ret; 725 + 726 + dma_cfg.dst_maxburst = caps.max_burst / dma_cfg.dst_addr_width; 737 727 if (dmaengine_slave_config(qspi->dma_chtx, &dma_cfg)) { 738 728 dev_err(dev, "dma tx config failed\n"); 739 729 dma_release_channel(qspi->dma_chtx); ··· 817 797 return ret; 818 798 } 819 799 820 - init_completion(&qspi->data_completion); 821 800 init_completion(&qspi->match_completion); 822 801 823 802 qspi->clk = devm_clk_get(dev, NULL); ··· 860 841 ctrl->use_gpio_descriptors = true; 861 842 ctrl->transfer_one_message = stm32_qspi_transfer_one_message; 862 843 ctrl->num_chipselect = STM32_QSPI_MAX_NORCHIP; 863 - ctrl->dev.of_node = dev->of_node; 864 844 865 845 pm_runtime_set_autosuspend_delay(dev, STM32_AUTOSUSPEND_DELAY); 866 846 pm_runtime_use_autosuspend(dev); ··· 909 891 clk_disable_unprepare(qspi->clk); 910 892 } 911 893 912 - static int __maybe_unused stm32_qspi_runtime_suspend(struct device *dev) 894 + static int stm32_qspi_runtime_suspend(struct device *dev) 913 895 { 914 896 struct stm32_qspi *qspi = dev_get_drvdata(dev); 915 897 ··· 918 900 return 0; 919 901 } 920 902 921 - static int __maybe_unused stm32_qspi_runtime_resume(struct device *dev) 903 + 
static int stm32_qspi_runtime_resume(struct device *dev) 922 904 { 923 905 struct stm32_qspi *qspi = dev_get_drvdata(dev); 924 906 925 907 return clk_prepare_enable(qspi->clk); 926 908 } 927 909 928 - static int __maybe_unused stm32_qspi_suspend(struct device *dev) 910 + static int stm32_qspi_suspend(struct device *dev) 929 911 { 930 912 pinctrl_pm_select_sleep_state(dev); 931 913 932 914 return pm_runtime_force_suspend(dev); 933 915 } 934 916 935 - static int __maybe_unused stm32_qspi_resume(struct device *dev) 917 + static int stm32_qspi_resume(struct device *dev) 936 918 { 937 919 struct stm32_qspi *qspi = dev_get_drvdata(dev); 938 920 int ret; ··· 956 938 } 957 939 958 940 static const struct dev_pm_ops stm32_qspi_pm_ops = { 959 - SET_RUNTIME_PM_OPS(stm32_qspi_runtime_suspend, 960 - stm32_qspi_runtime_resume, NULL) 961 - SET_SYSTEM_SLEEP_PM_OPS(stm32_qspi_suspend, stm32_qspi_resume) 941 + RUNTIME_PM_OPS(stm32_qspi_runtime_suspend, stm32_qspi_runtime_resume, NULL) 942 + SYSTEM_SLEEP_PM_OPS(stm32_qspi_suspend, stm32_qspi_resume) 962 943 }; 963 944 964 945 static const struct of_device_id stm32_qspi_match[] = { ··· 972 955 .driver = { 973 956 .name = "stm32-qspi", 974 957 .of_match_table = stm32_qspi_match, 975 - .pm = &stm32_qspi_pm_ops, 958 + .pm = pm_ptr(&stm32_qspi_pm_ops), 976 959 }, 977 960 }; 978 961 module_platform_driver(stm32_qspi_driver);
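The polling path in spi-stm32-qspi.c now drains or fills the data register with the widest access that still fits the remaining length (32-bit, then 16-bit, then single bytes) instead of one byte per FIFO poll. A condensed, self-contained sketch of the read side; the function name is made up, the buffer is assumed suitably aligned, and the FIFO-threshold polling between accesses is omitted:

#include <linux/io.h>
#include <linux/types.h>

static void example_read_fifo(void *data, void __iomem *dr, u32 len)
{
	u8 *p = data;

	while (len) {
		if (len >= sizeof(u32)) {
			/* a whole word remains: one 32-bit read */
			*(u32 *)p = readl_relaxed(dr);
			p += sizeof(u32);
			len -= sizeof(u32);
		} else if (len >= sizeof(u16)) {
			*(u16 *)p = readw_relaxed(dr);
			p += sizeof(u16);
			len -= sizeof(u16);
		} else {
			*p = readb_relaxed(dr);
			p++;
			len--;
		}
	}
}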
+101 -21
drivers/spi/spi-stm32.c
··· 202 202 #define STM32_SPI_HOST_MODE(stm32_spi) (!(stm32_spi)->device_mode) 203 203 #define STM32_SPI_DEVICE_MODE(stm32_spi) ((stm32_spi)->device_mode) 204 204 205 + static unsigned int polling_limit_us = 30; 206 + module_param(polling_limit_us, uint, 0664); 207 + MODULE_PARM_DESC(polling_limit_us, "maximum time in us to run a transfer in polling mode\n"); 208 + 205 209 /** 206 210 * struct stm32_spi_reg - stm32 SPI register & bitfield desc 207 211 * @reg: register offset ··· 270 266 * @dma_rx_cb: routine to call after DMA RX channel operation is complete 271 267 * @dma_tx_cb: routine to call after DMA TX channel operation is complete 272 268 * @transfer_one_irq: routine to configure interrupts for driver 269 + * @transfer_one_poll: routine to perform a transfer via register polling 273 270 * @irq_handler_event: Interrupt handler for SPI controller events 274 271 * @irq_handler_thread: thread of interrupt handler for SPI controller 275 272 * @baud_rate_div_min: minimum baud rate divisor ··· 296 291 void (*dma_rx_cb)(void *data); 297 292 void (*dma_tx_cb)(void *data); 298 293 int (*transfer_one_irq)(struct stm32_spi *spi); 294 + int (*transfer_one_poll)(struct stm32_spi *spi); 299 295 irqreturn_t (*irq_handler_event)(int irq, void *dev_id); 300 296 irqreturn_t (*irq_handler_thread)(int irq, void *dev_id); 301 297 unsigned int baud_rate_div_min; ··· 1362 1356 } 1363 1357 1364 1358 /** 1359 + * stm32h7_spi_transfer_one_poll - transfer a single spi_transfer by direct 1360 + * register access without interrupt usage 1361 + * @spi: pointer to the spi controller data structure 1362 + * 1363 + * It must returns 0 if the transfer is finished or 1 if the transfer is still 1364 + * in progress. 1365 + */ 1366 + static int stm32h7_spi_transfer_one_poll(struct stm32_spi *spi) 1367 + { 1368 + unsigned long flags; 1369 + u32 sr; 1370 + 1371 + spin_lock_irqsave(&spi->lock, flags); 1372 + 1373 + stm32_spi_enable(spi); 1374 + 1375 + /* Be sure to have data in fifo before starting data transfer */ 1376 + if (spi->tx_buf) 1377 + stm32h7_spi_write_txfifo(spi); 1378 + 1379 + if (STM32_SPI_HOST_MODE(spi)) 1380 + stm32_spi_set_bits(spi, STM32H7_SPI_CR1, STM32H7_SPI_CR1_CSTART); 1381 + 1382 + sr = readl_relaxed(spi->base + STM32H7_SPI_SR); 1383 + /* Keep writing / reading while waiting for the end of transfer */ 1384 + while (spi->tx_len || spi->rx_len || !(sr & STM32H7_SPI_SR_EOT)) { 1385 + if (spi->rx_len && (sr & (STM32H7_SPI_SR_RXP | STM32H7_SPI_SR_RXWNE | 1386 + STM32H7_SPI_SR_RXPLVL))) 1387 + stm32h7_spi_read_rxfifo(spi); 1388 + 1389 + if (spi->tx_len && (sr & STM32H7_SPI_SR_TXP)) 1390 + stm32h7_spi_write_txfifo(spi); 1391 + 1392 + sr = readl_relaxed(spi->base + STM32H7_SPI_SR); 1393 + 1394 + /* Clear suspension bit if necessary */ 1395 + if (sr & STM32H7_SPI_SR_SUSP) 1396 + writel_relaxed(sr & STM32H7_SPI_SR_SUSP, spi->base + STM32H7_SPI_IFCR); 1397 + } 1398 + 1399 + spin_unlock_irqrestore(&spi->lock, flags); 1400 + 1401 + stm32h7_spi_disable(spi); 1402 + spi_finalize_current_transfer(spi->ctrl); 1403 + 1404 + return 0; 1405 + } 1406 + 1407 + /** 1365 1408 * stm32h7_spi_transfer_one_irq - transfer a single spi_transfer using 1366 1409 * interrupts 1367 1410 * @spi: pointer to the spi controller data structure ··· 1961 1906 cfg2_clrb |= STM32H7_SPI_CFG2_MIDI; 1962 1907 if ((len > 1) && (spi->cur_midi > 0)) { 1963 1908 u32 sck_period_ns = DIV_ROUND_UP(NSEC_PER_SEC, spi->cur_speed); 1964 - u32 midi = min_t(u32, 1965 - DIV_ROUND_UP(spi->cur_midi, sck_period_ns), 1966 - FIELD_GET(STM32H7_SPI_CFG2_MIDI, 1967 - 
STM32H7_SPI_CFG2_MIDI)); 1909 + u32 midi = DIV_ROUND_UP(spi->cur_midi, sck_period_ns); 1968 1910 1911 + if ((spi->cur_bpw + midi) < 8) 1912 + midi = 8 - spi->cur_bpw; 1913 + 1914 + midi = min_t(u32, midi, FIELD_MAX(STM32H7_SPI_CFG2_MIDI)); 1969 1915 1970 1916 dev_dbg(spi->dev, "period=%dns, midi=%d(=%dns)\n", 1971 1917 sck_period_ns, midi, midi * sck_period_ns); ··· 2082 2026 } 2083 2027 2084 2028 /** 2029 + * stm32_spi_can_poll - detect if poll based transfer is appropriate 2030 + * @spi: pointer to the spi controller data structure 2031 + * 2032 + * Returns true is poll is more appropriate, false otherwise. 2033 + */ 2034 + static bool stm32_spi_can_poll(struct stm32_spi *spi) 2035 + { 2036 + unsigned long hz_per_byte, byte_limit; 2037 + 2038 + /* Evaluate the transfer time and use polling if applicable */ 2039 + hz_per_byte = polling_limit_us ? 2040 + DIV_ROUND_UP(8 * USEC_PER_SEC, polling_limit_us) : 0; 2041 + byte_limit = hz_per_byte ? spi->cur_speed / hz_per_byte : 1; 2042 + 2043 + return (spi->cur_xferlen < byte_limit) ? true : false; 2044 + } 2045 + 2046 + /** 2085 2047 * stm32_spi_transfer_one - transfer a single spi_transfer 2086 2048 * @ctrl: controller interface 2087 2049 * @spi_dev: pointer to the spi device ··· 2131 2057 2132 2058 if (spi->cur_usedma) 2133 2059 return stm32_spi_transfer_one_dma(spi, transfer); 2060 + else if (spi->cfg->transfer_one_poll && stm32_spi_can_poll(spi)) 2061 + return spi->cfg->transfer_one_poll(spi); 2134 2062 else 2135 2063 return spi->cfg->transfer_one_irq(spi); 2136 2064 } ··· 2291 2215 * SPI access hence handling is performed within the SPI interrupt 2292 2216 */ 2293 2217 .transfer_one_irq = stm32h7_spi_transfer_one_irq, 2218 + .transfer_one_poll = stm32h7_spi_transfer_one_poll, 2294 2219 .irq_handler_thread = stm32h7_spi_irq_thread, 2295 2220 .baud_rate_div_min = STM32H7_SPI_MBR_DIV_MIN, 2296 2221 .baud_rate_div_max = STM32H7_SPI_MBR_DIV_MAX, ··· 2321 2244 * SPI access hence handling is performed within the SPI interrupt 2322 2245 */ 2323 2246 .transfer_one_irq = stm32h7_spi_transfer_one_irq, 2247 + .transfer_one_poll = stm32h7_spi_transfer_one_poll, 2324 2248 .irq_handler_thread = stm32h7_spi_irq_thread, 2325 2249 .baud_rate_div_min = STM32H7_SPI_MBR_DIV_MIN, 2326 2250 .baud_rate_div_max = STM32H7_SPI_MBR_DIV_MAX, ··· 2464 2386 goto err_clk_disable; 2465 2387 } 2466 2388 2467 - ctrl->dev.of_node = pdev->dev.of_node; 2468 2389 ctrl->auto_runtime_pm = true; 2469 2390 ctrl->bus_num = pdev->id; 2470 2391 ctrl->mode_bits = SPI_CPHA | SPI_CPOL | SPI_CS_HIGH | SPI_LSB_FIRST | ··· 2483 2406 spi->dma_tx = dma_request_chan(spi->dev, "tx"); 2484 2407 if (IS_ERR(spi->dma_tx)) { 2485 2408 ret = PTR_ERR(spi->dma_tx); 2486 - spi->dma_tx = NULL; 2487 - if (ret == -EPROBE_DEFER) 2409 + if (ret == -ENODEV) { 2410 + dev_info(&pdev->dev, "tx dma disabled\n"); 2411 + spi->dma_tx = NULL; 2412 + } else { 2413 + dev_err_probe(&pdev->dev, ret, "failed to request tx dma channel\n"); 2488 2414 goto err_clk_disable; 2489 - 2490 - dev_warn(&pdev->dev, "failed to request tx dma channel\n"); 2415 + } 2491 2416 } else { 2492 2417 ctrl->dma_tx = spi->dma_tx; 2493 2418 } ··· 2497 2418 spi->dma_rx = dma_request_chan(spi->dev, "rx"); 2498 2419 if (IS_ERR(spi->dma_rx)) { 2499 2420 ret = PTR_ERR(spi->dma_rx); 2500 - spi->dma_rx = NULL; 2501 - if (ret == -EPROBE_DEFER) 2421 + if (ret == -ENODEV) { 2422 + dev_info(&pdev->dev, "rx dma disabled\n"); 2423 + spi->dma_rx = NULL; 2424 + } else { 2425 + dev_err_probe(&pdev->dev, ret, "failed to request rx dma channel\n"); 2502 2426 goto 
err_dma_release; 2503 - 2504 - dev_warn(&pdev->dev, "failed to request rx dma channel\n"); 2427 + } 2505 2428 } else { 2506 2429 ctrl->dma_rx = spi->dma_rx; 2507 2430 } ··· 2613 2532 pinctrl_pm_select_sleep_state(&pdev->dev); 2614 2533 } 2615 2534 2616 - static int __maybe_unused stm32_spi_runtime_suspend(struct device *dev) 2535 + static int stm32_spi_runtime_suspend(struct device *dev) 2617 2536 { 2618 2537 struct spi_controller *ctrl = dev_get_drvdata(dev); 2619 2538 struct stm32_spi *spi = spi_controller_get_devdata(ctrl); ··· 2623 2542 return pinctrl_pm_select_sleep_state(dev); 2624 2543 } 2625 2544 2626 - static int __maybe_unused stm32_spi_runtime_resume(struct device *dev) 2545 + static int stm32_spi_runtime_resume(struct device *dev) 2627 2546 { 2628 2547 struct spi_controller *ctrl = dev_get_drvdata(dev); 2629 2548 struct stm32_spi *spi = spi_controller_get_devdata(ctrl); ··· 2636 2555 return clk_prepare_enable(spi->clk); 2637 2556 } 2638 2557 2639 - static int __maybe_unused stm32_spi_suspend(struct device *dev) 2558 + static int stm32_spi_suspend(struct device *dev) 2640 2559 { 2641 2560 struct spi_controller *ctrl = dev_get_drvdata(dev); 2642 2561 int ret; ··· 2648 2567 return pm_runtime_force_suspend(dev); 2649 2568 } 2650 2569 2651 - static int __maybe_unused stm32_spi_resume(struct device *dev) 2570 + static int stm32_spi_resume(struct device *dev) 2652 2571 { 2653 2572 struct spi_controller *ctrl = dev_get_drvdata(dev); 2654 2573 struct stm32_spi *spi = spi_controller_get_devdata(ctrl); ··· 2678 2597 } 2679 2598 2680 2599 static const struct dev_pm_ops stm32_spi_pm_ops = { 2681 - SET_SYSTEM_SLEEP_PM_OPS(stm32_spi_suspend, stm32_spi_resume) 2682 - SET_RUNTIME_PM_OPS(stm32_spi_runtime_suspend, 2683 - stm32_spi_runtime_resume, NULL) 2600 + SYSTEM_SLEEP_PM_OPS(stm32_spi_suspend, stm32_spi_resume) 2601 + RUNTIME_PM_OPS(stm32_spi_runtime_suspend, stm32_spi_runtime_resume, NULL) 2684 2602 }; 2685 2603 2686 2604 static struct platform_driver stm32_spi_driver = { ··· 2687 2607 .remove = stm32_spi_remove, 2688 2608 .driver = { 2689 2609 .name = DRIVER_NAME, 2690 - .pm = &stm32_spi_pm_ops, 2610 + .pm = pm_ptr(&stm32_spi_pm_ops), 2691 2611 .of_match_table = stm32_spi_of_match, 2692 2612 }, 2693 2613 };
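The new polling_limit_us parameter and stm32_spi_can_poll() in spi-stm32.c choose between busy-waiting and interrupts by converting the allowed polling time into a byte budget at the current bus speed. With the default 30 us limit and, say, a 10 MHz clock, DIV_ROUND_UP(8 * USEC_PER_SEC, 30) gives 266667 and 10000000 / 266667 gives 37, so transfers shorter than 37 bytes are polled. A small standalone version of that calculation (the 10 MHz figure is only an example):

#include <linux/math.h>
#include <linux/time64.h>
#include <linux/types.h>

static unsigned long example_poll_byte_limit(u32 speed_hz, unsigned int limit_us)
{
	unsigned long hz_per_byte, byte_limit;

	/* How fast must the bus run to move one byte inside the time budget? */
	hz_per_byte = limit_us ? DIV_ROUND_UP(8 * USEC_PER_SEC, limit_us) : 0;

	/* Transfers shorter than this many bytes fit within the budget. */
	byte_limit = hz_per_byte ? speed_hz / hz_per_byte : 1;

	return byte_limit;	/* 37 for speed_hz = 10 MHz, limit_us = 30 */
}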
-1
drivers/spi/spi-sun4i.c
··· 471 471 host->num_chipselect = 4; 472 472 host->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH | SPI_LSB_FIRST; 473 473 host->bits_per_word_mask = SPI_BPW_MASK(8); 474 - host->dev.of_node = pdev->dev.of_node; 475 474 host->auto_runtime_pm = true; 476 475 host->max_transfer_size = sun4i_spi_max_transfer_size; 477 476
-1
drivers/spi/spi-sun6i.c
··· 673 673 host->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH | SPI_LSB_FIRST | 674 674 sspi->cfg->mode_bits; 675 675 host->bits_per_word_mask = SPI_BPW_MASK(8); 676 - host->dev.of_node = pdev->dev.of_node; 677 676 host->auto_runtime_pm = true; 678 677 host->max_transfer_size = sun6i_spi_max_transfer_size; 679 678
-1
drivers/spi/spi-sunplus-sp7021.c
··· 419 419 ctlr = devm_spi_alloc_host(dev, sizeof(*pspim)); 420 420 if (!ctlr) 421 421 return -ENOMEM; 422 - device_set_node(&ctlr->dev, dev_fwnode(dev)); 423 422 ctlr->bus_num = pdev->id; 424 423 ctlr->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH | SPI_LSB_FIRST; 425 424 ctlr->auto_runtime_pm = true;
-3
drivers/spi/spi-synquacer.c
··· 600 600 601 601 static int synquacer_spi_probe(struct platform_device *pdev) 602 602 { 603 - struct device_node *np = pdev->dev.of_node; 604 603 struct spi_controller *host; 605 604 struct synquacer_spi *sspi; 606 605 int ret; ··· 698 699 goto disable_clk; 699 700 } 700 701 701 - host->dev.of_node = np; 702 - host->dev.fwnode = pdev->dev.fwnode; 703 702 host->auto_runtime_pm = true; 704 703 host->bus_num = pdev->id; 705 704
-1
drivers/spi/spi-tegra114.c
··· 1415 1415 goto exit_pm_disable; 1416 1416 } 1417 1417 1418 - host->dev.of_node = pdev->dev.of_node; 1419 1418 ret = devm_spi_register_controller(&pdev->dev, host); 1420 1419 if (ret < 0) { 1421 1420 dev_err(&pdev->dev, "can not register to host err %d\n", ret);
-1
drivers/spi/spi-tegra20-sflash.c
··· 505 505 tegra_sflash_writel(tsd, tsd->def_command_reg, SPI_COMMAND); 506 506 pm_runtime_put(&pdev->dev); 507 507 508 - host->dev.of_node = pdev->dev.of_node; 509 508 ret = devm_spi_register_controller(&pdev->dev, host); 510 509 if (ret < 0) { 511 510 dev_err(&pdev->dev, "can not register to host err %d\n", ret);
-1
drivers/spi/spi-tegra20-slink.c
··· 1105 1105 tegra_slink_writel(tspi, tspi->def_command_reg, SLINK_COMMAND); 1106 1106 tegra_slink_writel(tspi, tspi->def_command2_reg, SLINK_COMMAND2); 1107 1107 1108 - host->dev.of_node = pdev->dev.of_node; 1109 1108 ret = spi_register_controller(host); 1110 1109 if (ret < 0) { 1111 1110 dev_err(&pdev->dev, "can not register to host err %d\n", ret);
-1
drivers/spi/spi-tegra210-quad.c
··· 1791 1791 goto exit_pm_disable; 1792 1792 } 1793 1793 1794 - host->dev.of_node = pdev->dev.of_node; 1795 1794 ret = spi_register_controller(host); 1796 1795 if (ret < 0) { 1797 1796 dev_err(&pdev->dev, "failed to register host: %d\n", ret);
-1
drivers/spi/spi-ti-qspi.c
··· 775 775 host->setup = ti_qspi_setup; 776 776 host->auto_runtime_pm = true; 777 777 host->transfer_one_message = ti_qspi_start_transfer_one; 778 - host->dev.of_node = pdev->dev.of_node; 779 778 host->bits_per_word_mask = SPI_BPW_MASK(32) | SPI_BPW_MASK(16) | 780 779 SPI_BPW_MASK(8); 781 780 host->mem_ops = &ti_qspi_mem_ops;
-1
drivers/spi/spi-uniphier.c
··· 697 697 host->max_speed_hz = DIV_ROUND_UP(clk_rate, SSI_MIN_CLK_DIVIDER); 698 698 host->min_speed_hz = DIV_ROUND_UP(clk_rate, SSI_MAX_CLK_DIVIDER); 699 699 host->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH | SPI_LSB_FIRST; 700 - host->dev.of_node = pdev->dev.of_node; 701 700 host->bus_num = pdev->id; 702 701 host->bits_per_word_mask = SPI_BPW_RANGE_MASK(1, 32); 703 702
+2 -4
drivers/spi/spi-virtio.c
··· 150 150 struct spi_transfer *xfer) 151 151 { 152 152 struct virtio_spi_priv *priv = spi_controller_get_devdata(ctrl); 153 - struct virtio_spi_req *spi_req __free(kfree) = NULL; 154 153 struct spi_transfer_head *th; 155 154 struct scatterlist sg_out_head, sg_out_payload; 156 155 struct scatterlist sg_in_result, sg_in_payload; ··· 158 159 unsigned int incnt = 0; 159 160 int ret; 160 161 161 - spi_req = kzalloc(sizeof(*spi_req), GFP_KERNEL); 162 + struct virtio_spi_req *spi_req __free(kfree) = kzalloc(sizeof(*spi_req), 163 + GFP_KERNEL); 162 164 if (!spi_req) 163 165 return -ENOMEM; 164 166 ··· 343 343 priv = spi_controller_get_devdata(ctrl); 344 344 priv->vdev = vdev; 345 345 vdev->priv = priv; 346 - 347 - device_set_node(&ctrl->dev, dev_fwnode(&vdev->dev)); 348 346 349 347 dev_set_drvdata(&vdev->dev, ctrl); 350 348
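The spi-virtio.c change above is the usual cleanup.h refinement: rather than declaring a __free(kfree) pointer as NULL and assigning it later, the allocation moves into the declaration, so the automatic kfree() is tied to an initialized value for the whole scope. Generic shape of the pattern, with illustrative names:

#include <linux/cleanup.h>
#include <linux/slab.h>

static int example_scoped_alloc(size_t size)
{
	/* Allocation and cleanup registration happen in one statement. */
	void *buf __free(kfree) = kzalloc(size, GFP_KERNEL);

	if (!buf)
		return -ENOMEM;

	/* ... use buf; kfree(buf) runs automatically on every return ... */
	return 0;
}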
-1
drivers/spi/spi-wpcm-fiu.c
··· 471 471 ctrl->bus_num = -1; 472 472 ctrl->mem_ops = &wpcm_fiu_mem_ops; 473 473 ctrl->num_chipselect = 4; 474 - ctrl->dev.of_node = dev->of_node; 475 474 476 475 /* 477 476 * The FIU doesn't include a clock divider, the clock is entirely
-1
drivers/spi/spi-xcomm.c
··· 260 260 host->bits_per_word_mask = SPI_BPW_MASK(8); 261 261 host->flags = SPI_CONTROLLER_HALF_DUPLEX; 262 262 host->transfer_one_message = spi_xcomm_transfer_one; 263 - host->dev.of_node = i2c->dev.of_node; 264 263 265 264 ret = devm_spi_register_controller(&i2c->dev, host); 266 265 if (ret < 0)
+6 -7
drivers/spi/spi-xilinx.c
··· 405 405 bits_per_word = pdata->bits_per_word; 406 406 force_irq = pdata->force_irq; 407 407 } else { 408 - of_property_read_u32(pdev->dev.of_node, "xlnx,num-ss-bits", 409 - &num_cs); 410 - ret = of_property_read_u32(pdev->dev.of_node, 411 - "xlnx,num-transfer-bits", 412 - &bits_per_word); 408 + device_property_read_u32(&pdev->dev, "xlnx,num-ss-bits", 409 + &num_cs); 410 + ret = device_property_read_u32(&pdev->dev, 411 + "xlnx,num-transfer-bits", 412 + &bits_per_word); 413 413 if (ret) 414 414 bits_per_word = 8; 415 415 } ··· 447 447 448 448 host->bus_num = pdev->id; 449 449 host->num_chipselect = num_cs; 450 - host->dev.of_node = pdev->dev.of_node; 451 450 452 451 /* 453 452 * Detect endianess on the IP via loop bit in CR. Detection ··· 470 471 xspi->bytes_per_word = bits_per_word / 8; 471 472 xspi->buffer_size = xilinx_spi_find_buffer_size(xspi); 472 473 473 - xspi->irq = platform_get_irq(pdev, 0); 474 + xspi->irq = platform_get_irq_optional(pdev, 0); 474 475 if (xspi->irq < 0 && xspi->irq != -ENXIO) { 475 476 return xspi->irq; 476 477 } else if (xspi->irq >= 0) {
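spi-xilinx.c now uses the firmware-agnostic device_property_read_u32() accessors and platform_get_irq_optional(), which stays silent when no interrupt is described and reports that case as -ENXIO so the driver can fall back to polled operation. A sketch of the optional-IRQ handling; the function name and the zero sentinel are illustrative:

#include <linux/platform_device.h>

static int example_get_optional_irq(struct platform_device *pdev)
{
	int irq = platform_get_irq_optional(pdev, 0);

	if (irq < 0 && irq != -ENXIO)
		return irq;	/* genuine failure, e.g. -EPROBE_DEFER */
	if (irq < 0)
		irq = 0;	/* no IRQ wired up: caller uses polling */

	return irq;
}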
-1
drivers/spi/spi-xlp.c
··· 409 409 host->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH; 410 410 host->setup = xlp_spi_setup; 411 411 host->transfer_one = xlp_spi_transfer_one; 412 - host->dev.of_node = pdev->dev.of_node; 413 412 414 413 init_completion(&xspi->done); 415 414 spi_controller_set_devdata(host, xspi);
-1
drivers/spi/spi-xtensa-xtfpga.c
··· 90 90 host->flags = SPI_CONTROLLER_NO_RX; 91 91 host->bits_per_word_mask = SPI_BPW_RANGE_MASK(1, 16); 92 92 host->bus_num = pdev->dev.id; 93 - host->dev.of_node = pdev->dev.of_node; 94 93 95 94 xspi = spi_controller_get_devdata(host); 96 95 xspi->bitbang.ctlr = host;
+161 -33
drivers/spi/spi.c
··· 2354 2354 static int of_spi_parse_dt(struct spi_controller *ctlr, struct spi_device *spi, 2355 2355 struct device_node *nc) 2356 2356 { 2357 - u32 value, cs[SPI_DEVICE_CS_CNT_MAX]; 2358 - int rc, idx; 2357 + u32 value, cs[SPI_DEVICE_CS_CNT_MAX], map[SPI_DEVICE_DATA_LANE_CNT_MAX]; 2358 + int rc, idx, max_num_data_lanes; 2359 2359 2360 2360 /* Mode (clock phase/polarity/etc.) */ 2361 2361 if (of_property_read_bool(nc, "spi-cpha")) ··· 2370 2370 spi->mode |= SPI_CS_HIGH; 2371 2371 2372 2372 /* Device DUAL/QUAD mode */ 2373 - if (!of_property_read_u32(nc, "spi-tx-bus-width", &value)) { 2373 + 2374 + rc = of_property_read_variable_u32_array(nc, "spi-tx-lane-map", map, 1, 2375 + ARRAY_SIZE(map)); 2376 + if (rc >= 0) { 2377 + max_num_data_lanes = rc; 2378 + for (idx = 0; idx < max_num_data_lanes; idx++) 2379 + spi->tx_lane_map[idx] = map[idx]; 2380 + } else if (rc == -EINVAL) { 2381 + /* Default lane map is identity mapping. */ 2382 + max_num_data_lanes = ARRAY_SIZE(spi->tx_lane_map); 2383 + for (idx = 0; idx < max_num_data_lanes; idx++) 2384 + spi->tx_lane_map[idx] = idx; 2385 + } else { 2386 + dev_err(&ctlr->dev, 2387 + "failed to read spi-tx-lane-map property: %d\n", rc); 2388 + return rc; 2389 + } 2390 + 2391 + rc = of_property_count_u32_elems(nc, "spi-tx-bus-width"); 2392 + if (rc < 0 && rc != -EINVAL) { 2393 + dev_err(&ctlr->dev, 2394 + "failed to read spi-tx-bus-width property: %d\n", rc); 2395 + return rc; 2396 + } 2397 + if (rc > max_num_data_lanes) { 2398 + dev_err(&ctlr->dev, 2399 + "spi-tx-bus-width has more elements (%d) than spi-tx-lane-map (%d)\n", 2400 + rc, max_num_data_lanes); 2401 + return -EINVAL; 2402 + } 2403 + 2404 + if (rc == -EINVAL) { 2405 + /* Default when property is not present. */ 2406 + spi->num_tx_lanes = 1; 2407 + } else { 2408 + u32 first_value; 2409 + 2410 + spi->num_tx_lanes = rc; 2411 + 2412 + for (idx = 0; idx < spi->num_tx_lanes; idx++) { 2413 + rc = of_property_read_u32_index(nc, "spi-tx-bus-width", 2414 + idx, &value); 2415 + if (rc) 2416 + return rc; 2417 + 2418 + /* 2419 + * For now, we only support all lanes having the same 2420 + * width so we can keep using the existing mode flags. 2421 + */ 2422 + if (!idx) 2423 + first_value = value; 2424 + else if (first_value != value) { 2425 + dev_err(&ctlr->dev, 2426 + "spi-tx-bus-width has inconsistent values: first %d vs later %d\n", 2427 + first_value, value); 2428 + return -EINVAL; 2429 + } 2430 + } 2431 + 2374 2432 switch (value) { 2375 2433 case 0: 2376 2434 spi->mode |= SPI_NO_TX; ··· 2452 2394 } 2453 2395 } 2454 2396 2455 - if (!of_property_read_u32(nc, "spi-rx-bus-width", &value)) { 2397 + for (idx = 0; idx < spi->num_tx_lanes; idx++) { 2398 + if (spi->tx_lane_map[idx] >= spi->controller->num_data_lanes) { 2399 + dev_err(&ctlr->dev, 2400 + "spi-tx-lane-map has invalid value %d (num_data_lanes=%d)\n", 2401 + spi->tx_lane_map[idx], 2402 + spi->controller->num_data_lanes); 2403 + return -EINVAL; 2404 + } 2405 + } 2406 + 2407 + rc = of_property_read_variable_u32_array(nc, "spi-rx-lane-map", map, 1, 2408 + ARRAY_SIZE(map)); 2409 + if (rc >= 0) { 2410 + max_num_data_lanes = rc; 2411 + for (idx = 0; idx < max_num_data_lanes; idx++) 2412 + spi->rx_lane_map[idx] = map[idx]; 2413 + } else if (rc == -EINVAL) { 2414 + /* Default lane map is identity mapping. 
*/ 2415 + max_num_data_lanes = ARRAY_SIZE(spi->rx_lane_map); 2416 + for (idx = 0; idx < max_num_data_lanes; idx++) 2417 + spi->rx_lane_map[idx] = idx; 2418 + } else { 2419 + dev_err(&ctlr->dev, 2420 + "failed to read spi-rx-lane-map property: %d\n", rc); 2421 + return rc; 2422 + } 2423 + 2424 + rc = of_property_count_u32_elems(nc, "spi-rx-bus-width"); 2425 + if (rc < 0 && rc != -EINVAL) { 2426 + dev_err(&ctlr->dev, 2427 + "failed to read spi-rx-bus-width property: %d\n", rc); 2428 + return rc; 2429 + } 2430 + if (rc > max_num_data_lanes) { 2431 + dev_err(&ctlr->dev, 2432 + "spi-rx-bus-width has more elements (%d) than spi-rx-lane-map (%d)\n", 2433 + rc, max_num_data_lanes); 2434 + return -EINVAL; 2435 + } 2436 + 2437 + if (rc == -EINVAL) { 2438 + /* Default when property is not present. */ 2439 + spi->num_rx_lanes = 1; 2440 + } else { 2441 + u32 first_value; 2442 + 2443 + spi->num_rx_lanes = rc; 2444 + 2445 + for (idx = 0; idx < spi->num_rx_lanes; idx++) { 2446 + rc = of_property_read_u32_index(nc, "spi-rx-bus-width", 2447 + idx, &value); 2448 + if (rc) 2449 + return rc; 2450 + 2451 + /* 2452 + * For now, we only support all lanes having the same 2453 + * width so we can keep using the existing mode flags. 2454 + */ 2455 + if (!idx) 2456 + first_value = value; 2457 + else if (first_value != value) { 2458 + dev_err(&ctlr->dev, 2459 + "spi-rx-bus-width has inconsistent values: first %d vs later %d\n", 2460 + first_value, value); 2461 + return -EINVAL; 2462 + } 2463 + } 2464 + 2456 2465 switch (value) { 2457 2466 case 0: 2458 2467 spi->mode |= SPI_NO_RX; ··· 2540 2415 "spi-rx-bus-width %d not supported\n", 2541 2416 value); 2542 2417 break; 2418 + } 2419 + } 2420 + 2421 + for (idx = 0; idx < spi->num_rx_lanes; idx++) { 2422 + if (spi->rx_lane_map[idx] >= spi->controller->num_data_lanes) { 2423 + dev_err(&ctlr->dev, 2424 + "spi-rx-lane-map has invalid value %d (num_data_lanes=%d)\n", 2425 + spi->rx_lane_map[idx], 2426 + spi->controller->num_data_lanes); 2427 + return -EINVAL; 2543 2428 } 2544 2429 } 2545 2430 ··· 3201 3066 mutex_init(&ctlr->add_lock); 3202 3067 ctlr->bus_num = -1; 3203 3068 ctlr->num_chipselect = 1; 3069 + ctlr->num_data_lanes = 1; 3204 3070 ctlr->target = target; 3205 3071 if (IS_ENABLED(CONFIG_SPI_SLAVE) && target) 3206 3072 ctlr->dev.class = &spi_target_class; 3207 3073 else 3208 3074 ctlr->dev.class = &spi_controller_class; 3209 3075 ctlr->dev.parent = dev; 3076 + 3077 + device_set_node(&ctlr->dev, dev_fwnode(dev)); 3078 + 3210 3079 pm_suspend_ignore_children(&ctlr->dev, true); 3211 3080 spi_controller_set_devdata(ctlr, (void *)ctlr + ctlr_size); 3212 3081 ··· 3218 3079 } 3219 3080 EXPORT_SYMBOL_GPL(__spi_alloc_controller); 3220 3081 3221 - static void devm_spi_release_controller(struct device *dev, void *ctlr) 3082 + static void devm_spi_release_controller(void *ctlr) 3222 3083 { 3223 - spi_controller_put(*(struct spi_controller **)ctlr); 3084 + spi_controller_put(ctlr); 3224 3085 } 3225 3086 3226 3087 /** ··· 3242 3103 unsigned int size, 3243 3104 bool target) 3244 3105 { 3245 - struct spi_controller **ptr, *ctlr; 3246 - 3247 - ptr = devres_alloc(devm_spi_release_controller, sizeof(*ptr), 3248 - GFP_KERNEL); 3249 - if (!ptr) 3250 - return NULL; 3106 + struct spi_controller *ctlr; 3107 + int ret; 3251 3108 3252 3109 ctlr = __spi_alloc_controller(dev, size, target); 3253 - if (ctlr) { 3254 - ctlr->devm_allocated = true; 3255 - *ptr = ctlr; 3256 - devres_add(dev, ptr); 3257 - } else { 3258 - devres_free(ptr); 3259 - } 3110 + if (!ctlr) 3111 + return NULL; 3112 + 3113 + ret 
= devm_add_action_or_reset(dev, devm_spi_release_controller, ctlr); 3114 + if (ret) 3115 + return NULL; 3116 + 3117 + ctlr->devm_allocated = true; 3260 3118 3261 3119 return ctlr; 3262 3120 } ··· 3514 3378 } 3515 3379 EXPORT_SYMBOL_GPL(spi_register_controller); 3516 3380 3517 - static void devm_spi_unregister(struct device *dev, void *res) 3381 + static void devm_spi_unregister_controller(void *ctlr) 3518 3382 { 3519 - spi_unregister_controller(*(struct spi_controller **)res); 3383 + spi_unregister_controller(ctlr); 3520 3384 } 3521 3385 3522 3386 /** ··· 3534 3398 int devm_spi_register_controller(struct device *dev, 3535 3399 struct spi_controller *ctlr) 3536 3400 { 3537 - struct spi_controller **ptr; 3538 3401 int ret; 3539 3402 3540 - ptr = devres_alloc(devm_spi_unregister, sizeof(*ptr), GFP_KERNEL); 3541 - if (!ptr) 3542 - return -ENOMEM; 3543 - 3544 3403 ret = spi_register_controller(ctlr); 3545 - if (!ret) { 3546 - *ptr = ctlr; 3547 - devres_add(dev, ptr); 3548 - } else { 3549 - devres_free(ptr); 3550 - } 3404 + if (ret) 3405 + return ret; 3551 3406 3552 - return ret; 3407 + return devm_add_action_or_reset(dev, devm_spi_unregister_controller, ctlr); 3408 + 3553 3409 } 3554 3410 EXPORT_SYMBOL_GPL(devm_spi_register_controller); 3555 3411
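In spi.c, the devres_alloc()/devres_add() pairs that used to carry a pointer to the controller are replaced by devm_add_action_or_reset(), which registers a plain callback plus argument and invokes it immediately if registration itself fails. The generic shape of that conversion, with a hypothetical resource:

#include <linux/device.h>

static void example_release(void *res)
{
	/* undo whatever acquiring the resource did, e.g. drop a reference */
}

static int example_acquire_managed(struct device *dev, void *res)
{
	/*
	 * The action and its argument run automatically on driver detach,
	 * or right away if this call fails, so no manual error unwinding
	 * is needed at the call site.
	 */
	return devm_add_action_or_reset(dev, example_release, res);
}

The same hunk also sets the controller's fwnode centrally via device_set_node() in __spi_alloc_controller(), which is why the per-driver ctlr->dev.of_node assignments are dropped throughout the drivers above.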
+11 -3
include/linux/spi/spi-mem.h
··· 20 20 .opcode = __opcode, \ 21 21 } 22 22 23 - #define SPI_MEM_DTR_OP_CMD(__opcode, __buswidth) \ 23 + #define SPI_MEM_DTR_OP_RPT_CMD(__opcode, __buswidth) \ 24 24 { \ 25 - .nbytes = 1, \ 26 - .opcode = __opcode, \ 25 + .nbytes = 2, \ 26 + .opcode = __opcode | __opcode << 8, \ 27 27 .buswidth = __buswidth, \ 28 28 .dtr = true, \ 29 29 } ··· 39 39 { \ 40 40 .nbytes = __nbytes, \ 41 41 .val = __val, \ 42 + .buswidth = __buswidth, \ 43 + .dtr = true, \ 44 + } 45 + 46 + #define SPI_MEM_DTR_OP_RPT_ADDR(__val, __buswidth) \ 47 + { \ 48 + .nbytes = 2, \ 49 + .val = __val | __val << 8, \ 42 50 .buswidth = __buswidth, \ 43 51 .dtr = true, \ 44 52 }
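The renamed SPI_MEM_DTR_OP_RPT_CMD() and the new SPI_MEM_DTR_OP_RPT_ADDR() pack the same opcode or address byte twice into a two-byte phase, for DTR flashes that expect the value repeated on both bytes. A hedged usage sketch; the 0x05 opcode, the bus widths and the surrounding SPI_MEM_OP()/SPI_MEM_DTR_OP_DATA_IN() helpers are illustrative assumptions, not taken from this series:

#include <linux/spi/spi-mem.h>

static int example_read_status_dtr(struct spi_mem *mem, u8 *buf)
{
	/* Command 0x05 goes out as 0x0505 over 8 data lines in DTR mode. */
	struct spi_mem_op op =
		SPI_MEM_OP(SPI_MEM_DTR_OP_RPT_CMD(0x05, 8),
			   SPI_MEM_OP_NO_ADDR,
			   SPI_MEM_OP_NO_DUMMY,
			   SPI_MEM_DTR_OP_DATA_IN(2, buf, 8));

	return spi_mem_exec_op(mem, &op);
}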
+30
include/linux/spi/spi.h
··· 23 23 /* Max no. of CS supported per spi device */ 24 24 #define SPI_DEVICE_CS_CNT_MAX 4 25 25 26 + /* Max no. of data lanes supported per spi device */ 27 + #define SPI_DEVICE_DATA_LANE_CNT_MAX 8 28 + 26 29 struct dma_chan; 27 30 struct software_node; 28 31 struct ptp_system_timestamp; ··· 177 174 * @cs_index_mask: Bit mask of the active chipselect(s) in the chipselect array 178 175 * @cs_gpiod: Array of GPIO descriptors of the corresponding chipselect lines 179 176 * (optional, NULL when not using a GPIO line) 177 + * @tx_lane_map: Map of peripheral lanes (index) to controller lanes (value). 178 + * @num_tx_lanes: Number of transmit lanes wired up. 179 + * @rx_lane_map: Map of peripheral lanes (index) to controller lanes (value). 180 + * @num_rx_lanes: Number of receive lanes wired up. 180 181 * 181 182 * A @spi_device is used to interchange data between an SPI target device 182 183 * (usually a discrete chip) and CPU memory. ··· 248 241 u32 cs_index_mask : SPI_DEVICE_CS_CNT_MAX; 249 242 250 243 struct gpio_desc *cs_gpiod[SPI_DEVICE_CS_CNT_MAX]; /* Chip select gpio desc */ 244 + 245 + /* Multi-lane SPI controller support. */ 246 + u8 tx_lane_map[SPI_DEVICE_DATA_LANE_CNT_MAX]; 247 + u8 num_tx_lanes; 248 + u8 rx_lane_map[SPI_DEVICE_DATA_LANE_CNT_MAX]; 249 + u8 num_rx_lanes; 251 250 252 251 /* 253 252 * Likely need more hooks for more protocol options affecting how ··· 414 401 * SPI targets, and are numbered from zero to num_chipselects. 415 402 * each target has a chipselect signal, but it's common that not 416 403 * every chipselect is connected to a target. 404 + * @num_data_lanes: Number of data lanes supported by this controller. Default is 1. 417 405 * @dma_alignment: SPI controller constraint on DMA buffers alignment. 418 406 * @mode_bits: flags understood by this controller driver 419 407 * @buswidth_override_bits: flags to override for this controller driver ··· 589 575 * might use board-specific GPIOs. 590 576 */ 591 577 u16 num_chipselect; 578 + 579 + /* 580 + * Some specialized SPI controllers can have more than one physical 581 + * data lane interface per controller (each having it's own serializer). 582 + * This specifies the number of data lanes in that case. Other 583 + * controllers do not need to set this (defaults to 1). 584 + */ 585 + u16 num_data_lanes; 592 586 593 587 /* Some SPI controllers pose alignment requirements on DMAable 594 588 * buffers; let protocol drivers know about these requirements. ··· 981 959 * (SPI_NBITS_SINGLE) is used. 982 960 * @rx_nbits: number of bits used for reading. If 0 the default 983 961 * (SPI_NBITS_SINGLE) is used. 962 + * @multi_lane_mode: How to serialize data on multiple lanes. One of the 963 + * SPI_MULTI_LANE_MODE_* values. 984 964 * @len: size of rx and tx buffers (in bytes) 985 965 * @speed_hz: Select a speed other than the device default for this 986 966 * transfer. If 0 the default (from @spi_device) is used. ··· 1119 1095 unsigned cs_change:1; 1120 1096 unsigned tx_nbits:4; 1121 1097 unsigned rx_nbits:4; 1098 + 1099 + #define SPI_MULTI_LANE_MODE_SINGLE 0 /* only use single lane */ 1100 + #define SPI_MULTI_LANE_MODE_STRIPE 1 /* one data word per lane */ 1101 + #define SPI_MULTI_LANE_MODE_MIRROR 2 /* same word sent on all lanes */ 1102 + unsigned multi_lane_mode: 2; 1103 + 1122 1104 unsigned timestamped:1; 1123 1105 bool dtr_mode; 1124 1106 #define SPI_NBITS_SINGLE 0x01 /* 1-bit transfer */
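With the new per-transfer multi_lane_mode field, a peripheral driver for a device wired to several independent data lanes can ask the controller to stripe received words across them. A hedged sketch of what such a request might look like; the helper name and buffer handling are assumptions, and striping only works when the controller advertises num_data_lanes greater than 1 and the lane wiring is described by the new devicetree properties:

#include <linux/spi/spi.h>

static int example_multi_lane_read(struct spi_device *spi, void *buf, size_t len)
{
	struct spi_transfer xfer = {
		.rx_buf = buf,
		.len = len,
		/* one data word captured per lane, as defined above */
		.multi_lane_mode = SPI_MULTI_LANE_MODE_STRIPE,
	};

	return spi_sync_transfer(spi, &xfer, 1);
}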
+1
tools/spi/.gitignore
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 spidev_fdx 3 3 spidev_test 4 + include/