Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'mmc-v7.1' of git://git.kernel.org/pub/scm/linux/kernel/git/ulfh/mmc

Pull MMC updates from Ulf Hansson:
"MMC core:
- Add NXP vendor and IW61x device IDs for WiFi chips over SDIO
- Add quirk for incorrect manufacturing date
- Add support for manufacturing date beyond 2025
- Optimize support for secure erase/trim for some Kingston eMMCs
- Remove support for the legacy "enable-sdio-wakeup" DT property
- Use single block writes in the retry path

MMC host:
- dw_mmc:
 - A large number of cleanups/simplifications to improve the code
- Add clk_phase_map support
- Remove mshc DT alias support
- dw_mmc-rockchip:
- Fix runtime PM support for internal phase
- Add support for the RV1103B variant
- loongson2:
- Add support for the Loongson-2K0300 SD/SDIO/eMMC controller
- mtk-sd:
- Add support for the MT8189 variant
- renesas_sdhi_core:
- Add support for selecting an optional mux
- rtsx_pci_sdmmc:
- Simplify voltage switch handling
- sdhci:
- Stop advertising the driver in dmesg
- sdhci-esdhc-imx:
- Add 1-bit bus width support
- Add support for the NXP S32N79 variant
- sdhci-msm:
- Add support for the IPQ5210 and IPQ9650 variants
- Add support for wrapped keys
- Enable ICE for CQE-capable controllers with non-CQE cards
- sdhci-of-arasan:
- Add support for the Axiado AX3000 variant
- sdhci-of-aspeed:
- Add support for the AST2700 variant
- sdhci-of-bst:
- Add driver for the Black Sesame Technologies C1200 controller
- sdhci-of-dwcmshc:
- Add support for the Canaan K230 variant
- Add support for the HPE GSC variant
- Prevent clock glitches to avoid malfunction
- sdhci-of-k1:
- Add support for the K3 variant

mux core/consumers:
- core:
- Add helper functions for getting optional and selected mux-state
- i2c-omap:
- Convert to devm_mux_state_get_optional_selected()
- phy-renesas:
- Convert to devm_mux_state_get_optional_selected()
- phy-can-transceiver:
- Convert to devm_mux_state_get_optional()"

* tag 'mmc-v7.1' of git://git.kernel.org/pub/scm/linux/kernel/git/ulfh/mmc: (131 commits)
mmc: sdhci-msm: Fix the wrapped key handling
mmc: sdhci-of-dwcmshc: Disable clock before DLL configuration
mmc: core: Simplify with scoped for each OF child loop
mmc: core: Optimize size of struct mmc_queue_req
mmc: vub300: clean up module init
mmc: vub300: rename probe error labels
mmc: dw_mmc: Remove dw_mci_start_request wrapper and rename core function
mmc: dw_mmc: Inline dw_mci_queue_request() into dw_mci_request()
mmc: block: Use MQRQ_XFER_SINGLE_BLOCK for both read and write recovery
mmc: mmc_test: Replace hard-coded values with macros and consolidate test parameters
mmc: block: Convert to use DEFINE_SIMPLE_DEV_PM_OPS()
mmc: core: Replace the hard-coded shift value 9 with SECTOR_SHIFT
mmc: sdhci-dwcmshc: Refactor Rockchip platform data for controller revisions
mmc: core: Switch to use pm_ptr() for mmc_host_class_dev_pm_ops
mmc: core: Remove legacy 'enable-sdio-wakeup' DT property support
mmc: mmc_test: use kzalloc_flex
mmc: mtk-sd: disable new_tx/rx and modify related settings for mt8189
dt-bindings: mmc: hisilicon,hi3660-dw-mshc: Convert to DT schema
dt-bindings: mmc: sdhci-msm: add IPQ9650 compatible
mmc: block: use single block write in retry
...

+2566 -1417
+4
Documentation/devicetree/bindings/mmc/amlogic,meson-gx-mmc.yaml
··· 19 19 properties: 20 20 compatible: 21 21 oneOf: 22 + - items: 23 + - enum: 24 + - amlogic,t7-mmc 25 + - const: amlogic,meson-axg-mmc 22 26 - const: amlogic,meson-axg-mmc 23 27 - items: 24 28 - const: amlogic,meson-gx-mmc
+5
Documentation/devicetree/bindings/mmc/arasan,sdhci.yaml
··· 106 106 description: 107 107 For this device it is strongly suggested to include 108 108 arasan,soc-ctl-syscon. 109 + - items: 110 + - const: axiado,ax3000-sdhci-5.1-emmc # Axiado AX3000 eMMC controller 111 + - const: arasan,sdhci-5.1 109 112 110 113 reg: 111 114 maxItems: 1 ··· 123 120 - const: clk_xin 124 121 - const: clk_ahb 125 122 - const: gate 123 + 124 + dma-coherent: true 126 125 127 126 interrupts: 128 127 minItems: 1
+1 -1
Documentation/devicetree/bindings/mmc/arm,pl18x.yaml
··· 11 11 - Ulf Hansson <ulf.hansson@linaro.org> 12 12 13 13 description: 14 - The ARM PrimeCells MMCI PL180 and PL181 provides an interface for 14 + The ARM PrimeCell MMCI PL180 and PL181 provides an interface for 15 15 reading and writing to MultiMedia and SD cards alike. Over the years 16 16 vendors have use the VHDL code from ARM to create derivative MMC/SD/SDIO 17 17 host controllers with very similar characteristics.
+33 -8
Documentation/devicetree/bindings/mmc/aspeed,sdhci.yaml
··· 22 22 23 23 properties: 24 24 compatible: 25 - enum: 26 - - aspeed,ast2400-sd-controller 27 - - aspeed,ast2500-sd-controller 28 - - aspeed,ast2600-sd-controller 25 + oneOf: 26 + - enum: 27 + - aspeed,ast2400-sd-controller 28 + - aspeed,ast2500-sd-controller 29 + - aspeed,ast2600-sd-controller 30 + - items: 31 + - const: aspeed,ast2700-sd-controller 32 + - const: aspeed,ast2600-sd-controller 33 + 29 34 reg: 30 35 maxItems: 1 31 36 description: Common configuration registers ··· 43 38 maxItems: 1 44 39 description: The SD/SDIO controller clock gate 45 40 41 + resets: 42 + maxItems: 1 43 + 46 44 patternProperties: 47 45 "^sdhci@[0-9a-f]+$": 48 46 type: object ··· 54 46 55 47 properties: 56 48 compatible: 57 - enum: 58 - - aspeed,ast2400-sdhci 59 - - aspeed,ast2500-sdhci 60 - - aspeed,ast2600-sdhci 49 + oneOf: 50 + - enum: 51 + - aspeed,ast2400-sdhci 52 + - aspeed,ast2500-sdhci 53 + - aspeed,ast2600-sdhci 54 + - items: 55 + - const: aspeed,ast2700-sdhci 56 + - const: aspeed,ast2600-sdhci 57 + 61 58 reg: 62 59 maxItems: 1 63 60 description: The SDHCI registers ··· 90 77 - "#size-cells" 91 78 - ranges 92 79 - clocks 80 + 81 + if: 82 + properties: 83 + compatible: 84 + contains: 85 + const: aspeed,ast2700-sd-controller 86 + then: 87 + required: 88 + - resets 89 + else: 90 + properties: 91 + resets: false 93 92 94 93 examples: 95 94 - |
+5
Documentation/devicetree/bindings/mmc/brcm,iproc-sdhci.yaml
··· 26 26 reg: 27 27 minItems: 1 28 28 29 + dma-coherent: true 30 + 29 31 interrupts: 32 + maxItems: 1 33 + 34 + iommus: 30 35 maxItems: 1 31 36 32 37 clocks:
+70
Documentation/devicetree/bindings/mmc/bst,c1200-sdhci.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/mmc/bst,c1200-sdhci.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Black Sesame Technologies DWCMSHC SDHCI Controller 8 + 9 + maintainers: 10 + - Ge Gordon <gordon.ge@bst.ai> 11 + 12 + allOf: 13 + - $ref: sdhci-common.yaml# 14 + 15 + properties: 16 + compatible: 17 + const: bst,c1200-sdhci 18 + 19 + reg: 20 + items: 21 + - description: Core SDHCI registers 22 + - description: CRM registers 23 + 24 + interrupts: 25 + maxItems: 1 26 + 27 + clocks: 28 + maxItems: 1 29 + 30 + clock-names: 31 + items: 32 + - const: core 33 + 34 + memory-region: 35 + maxItems: 1 36 + 37 + dma-coherent: true 38 + 39 + required: 40 + - compatible 41 + - reg 42 + - interrupts 43 + - clocks 44 + - clock-names 45 + 46 + unevaluatedProperties: false 47 + 48 + examples: 49 + - | 50 + #include <dt-bindings/interrupt-controller/arm-gic.h> 51 + #include <dt-bindings/interrupt-controller/irq.h> 52 + 53 + bus { 54 + #address-cells = <2>; 55 + #size-cells = <2>; 56 + 57 + mmc@22200000 { 58 + compatible = "bst,c1200-sdhci"; 59 + reg = <0x0 0x22200000 0x0 0x1000>, 60 + <0x0 0x23006000 0x0 0x1000>; 61 + interrupts = <GIC_SPI 144 IRQ_TYPE_LEVEL_HIGH>; 62 + clocks = <&clk_mmc>; 63 + clock-names = "core"; 64 + memory-region = <&mmc0_reserved>; 65 + max-frequency = <200000000>; 66 + bus-width = <8>; 67 + non-removable; 68 + dma-coherent; 69 + }; 70 + };
-2
Documentation/devicetree/bindings/mmc/cdns,sdhci.yaml
··· 134 134 items: 135 135 - description: Host controller registers 136 136 - description: Elba byte-lane enable register for writes 137 - required: 138 - - resets 139 137 else: 140 138 properties: 141 139 reg:
+1
Documentation/devicetree/bindings/mmc/fsl-imx-esdhc.yaml
··· 35 35 - fsl,imx8mm-usdhc 36 36 - fsl,imxrt1050-usdhc 37 37 - nxp,s32g2-usdhc 38 + - nxp,s32n79-usdhc 38 39 - items: 39 40 - const: fsl,imx50-esdhc 40 41 - const: fsl,imx53-esdhc
+117
Documentation/devicetree/bindings/mmc/hisilicon,hi3660-dw-mshc.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/mmc/hisilicon,hi3660-dw-mshc.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Hisilicon specific extensions to the Synopsys Designware Mobile Storage Host Controller 8 + 9 + maintainers: 10 + - Zhangfei Gao <zhangfei.gao@linaro.org> 11 + 12 + description: 13 + The Synopsys designware mobile storage host controller is used to interface 14 + a SoC with storage medium such as eMMC or SD/MMC cards. This file documents 15 + differences between the core Synopsys dw mshc controller properties described 16 + by synopsys-dw-mshc.txt and the properties used by the Hisilicon specific 17 + extensions to the Synopsys Designware Mobile Storage Host Controller. 18 + 19 + allOf: 20 + - $ref: /schemas/mmc/synopsys-dw-mshc-common.yaml# 21 + 22 + properties: 23 + compatible: 24 + oneOf: 25 + - enum: 26 + - hisilicon,hi3660-dw-mshc 27 + - hisilicon,hi4511-dw-mshc 28 + - hisilicon,hi6220-dw-mshc 29 + - items: 30 + - const: hisilicon,hi3670-dw-mshc 31 + - const: hisilicon,hi3660-dw-mshc 32 + 33 + reg: 34 + maxItems: 1 35 + 36 + interrupts: 37 + maxItems: 1 38 + 39 + clocks: 40 + items: 41 + - description: card interface unit clock 42 + - description: bus interface unit clock 43 + 44 + clock-names: 45 + items: 46 + - const: ciu 47 + - const: biu 48 + 49 + hisilicon,peripheral-syscon: 50 + $ref: /schemas/types.yaml#/definitions/phandle 51 + description: phandle of syscon used to control peripheral. 
52 + 53 + required: 54 + - compatible 55 + - reg 56 + - interrupts 57 + - clocks 58 + - clock-names 59 + 60 + unevaluatedProperties: false 61 + 62 + examples: 63 + - | 64 + #include <dt-bindings/clock/hi3620-clock.h> 65 + #include <dt-bindings/gpio/gpio.h> 66 + #include <dt-bindings/interrupt-controller/arm-gic.h> 67 + 68 + mmc@fcd03000 { 69 + compatible = "hisilicon,hi4511-dw-mshc"; 70 + reg = <0xfcd03000 0x1000>; 71 + interrupts = <GIC_SPI 16 IRQ_TYPE_LEVEL_HIGH>; 72 + #address-cells = <1>; 73 + #size-cells = <0>; 74 + clocks = <&mmc_clock HI3620_SD_CIUCLK>, <&clock HI3620_DDRC_PER_CLK>; 75 + clock-names = "ciu", "biu"; 76 + vmmc-supply = <&ldo12>; 77 + fifo-depth = <0x100>; 78 + pinctrl-names = "default"; 79 + pinctrl-0 = <&sd_pmx_pins &sd_cfg_func1 &sd_cfg_func2>; 80 + bus-width = <4>; 81 + disable-wp; 82 + cd-gpios = <&gpio10 3 GPIO_ACTIVE_HIGH>; 83 + cap-mmc-highspeed; 84 + cap-sd-highspeed; 85 + }; 86 + 87 + - | 88 + #include <dt-bindings/clock/hi6220-clock.h> 89 + #include <dt-bindings/gpio/gpio.h> 90 + #include <dt-bindings/interrupt-controller/arm-gic.h> 91 + 92 + soc { 93 + #address-cells = <2>; 94 + #size-cells = <2>; 95 + 96 + mmc@f723e000 { 97 + compatible = "hisilicon,hi6220-dw-mshc"; 98 + reg = <0x0 0xf723e000 0x0 0x1000>; 99 + interrupts = <GIC_SPI 73 IRQ_TYPE_LEVEL_HIGH>; 100 + clocks = <&clock_sys HI6220_MMC1_CIUCLK>, 101 + <&clock_sys HI6220_MMC1_CLK>; 102 + clock-names = "ciu", "biu"; 103 + bus-width = <4>; 104 + disable-wp; 105 + cap-sd-highspeed; 106 + sd-uhs-sdr12; 107 + sd-uhs-sdr25; 108 + card-detect-delay = <200>; 109 + hisilicon,peripheral-syscon = <&ao_ctrl>; 110 + cd-gpios = <&gpio1 0 GPIO_ACTIVE_LOW>; 111 + pinctrl-names = "default", "idle"; 112 + pinctrl-0 = <&sd_pmx_func &sd_clk_cfg_func &sd_cfg_func>; 113 + pinctrl-1 = <&sd_pmx_idle &sd_clk_cfg_idle &sd_cfg_idle>; 114 + vqmmc-supply = <&ldo7>; 115 + vmmc-supply = <&ldo10>; 116 + }; 117 + };
-73
Documentation/devicetree/bindings/mmc/k3-dw-mshc.txt
··· 1 - * Hisilicon specific extensions to the Synopsys Designware Mobile 2 - Storage Host Controller 3 - 4 - Read synopsys-dw-mshc.txt for more details 5 - 6 - The Synopsys designware mobile storage host controller is used to interface 7 - a SoC with storage medium such as eMMC or SD/MMC cards. This file documents 8 - differences between the core Synopsys dw mshc controller properties described 9 - by synopsys-dw-mshc.txt and the properties used by the Hisilicon specific 10 - extensions to the Synopsys Designware Mobile Storage Host Controller. 11 - 12 - Required Properties: 13 - 14 - * compatible: should be one of the following. 15 - - "hisilicon,hi3660-dw-mshc": for controllers with hi3660 specific extensions. 16 - - "hisilicon,hi3670-dw-mshc", "hisilicon,hi3660-dw-mshc": for controllers 17 - with hi3670 specific extensions. 18 - - "hisilicon,hi4511-dw-mshc": for controllers with hi4511 specific extensions. 19 - - "hisilicon,hi6220-dw-mshc": for controllers with hi6220 specific extensions. 20 - 21 - Optional Properties: 22 - - hisilicon,peripheral-syscon: phandle of syscon used to control peripheral. 
23 - 24 - Example: 25 - 26 - /* for Hi3620 */ 27 - 28 - /* SoC portion */ 29 - dwmmc_0: dwmmc0@fcd03000 { 30 - compatible = "hisilicon,hi4511-dw-mshc"; 31 - reg = <0xfcd03000 0x1000>; 32 - interrupts = <0 16 4>; 33 - #address-cells = <1>; 34 - #size-cells = <0>; 35 - clocks = <&mmc_clock HI3620_SD_CIUCLK>, <&clock HI3620_DDRC_PER_CLK>; 36 - clock-names = "ciu", "biu"; 37 - }; 38 - 39 - /* Board portion */ 40 - dwmmc0@fcd03000 { 41 - vmmc-supply = <&ldo12>; 42 - fifo-depth = <0x100>; 43 - pinctrl-names = "default"; 44 - pinctrl-0 = <&sd_pmx_pins &sd_cfg_func1 &sd_cfg_func2>; 45 - bus-width = <4>; 46 - disable-wp; 47 - cd-gpios = <&gpio10 3 0>; 48 - cap-mmc-highspeed; 49 - cap-sd-highspeed; 50 - }; 51 - 52 - /* for Hi6220 */ 53 - 54 - dwmmc_1: dwmmc1@f723e000 { 55 - compatible = "hisilicon,hi6220-dw-mshc"; 56 - bus-width = <0x4>; 57 - disable-wp; 58 - cap-sd-highspeed; 59 - sd-uhs-sdr12; 60 - sd-uhs-sdr25; 61 - card-detect-delay = <200>; 62 - hisilicon,peripheral-syscon = <&ao_ctrl>; 63 - reg = <0x0 0xf723e000 0x0 0x1000>; 64 - interrupts = <0x0 0x49 0x4>; 65 - clocks = <&clock_sys HI6220_MMC1_CIUCLK>, <&clock_sys HI6220_MMC1_CLK>; 66 - clock-names = "ciu", "biu"; 67 - cd-gpios = <&gpio1 0 1>; 68 - pinctrl-names = "default", "idle"; 69 - pinctrl-0 = <&sd_pmx_func &sd_clk_cfg_func &sd_cfg_func>; 70 - pinctrl-1 = <&sd_pmx_idle &sd_clk_cfg_idle &sd_cfg_idle>; 71 - vqmmc-supply = <&ldo7>; 72 - vmmc-supply = <&ldo10>; 73 - };
+1
Documentation/devicetree/bindings/mmc/loongson,ls2k0500-mmc.yaml
··· 22 22 properties: 23 23 compatible: 24 24 enum: 25 + - loongson,ls2k0300-mmc 25 26 - loongson,ls2k0500-mmc 26 27 - loongson,ls2k1000-mmc 27 28 - loongson,ls2k2000-mmc
+3
Documentation/devicetree/bindings/mmc/mtk-sd.yaml
··· 25 25 - mediatek,mt8135-mmc 26 26 - mediatek,mt8173-mmc 27 27 - mediatek,mt8183-mmc 28 + - mediatek,mt8189-mmc 28 29 - mediatek,mt8196-mmc 29 30 - mediatek,mt8516-mmc 30 31 - items: ··· 193 192 - mediatek,mt8183-mmc 194 193 - mediatek,mt8186-mmc 195 194 - mediatek,mt8188-mmc 195 + - mediatek,mt8189-mmc 196 196 - mediatek,mt8195-mmc 197 197 - mediatek,mt8196-mmc 198 198 - mediatek,mt8516-mmc ··· 242 240 - mediatek,mt7986-mmc 243 241 - mediatek,mt7988-mmc 244 242 - mediatek,mt8183-mmc 243 + - mediatek,mt8189-mmc 245 244 - mediatek,mt8196-mmc 246 245 then: 247 246 properties:
+6
Documentation/devicetree/bindings/mmc/renesas,sdhi.yaml
··· 106 106 iommus: 107 107 maxItems: 1 108 108 109 + mux-states: 110 + description: 111 + mux controller node to route the SD/SDIO/eMMC signals from SoC to cards. 112 + maxItems: 1 113 + 109 114 power-domains: 110 115 maxItems: 1 111 116 ··· 280 275 max-frequency = <195000000>; 281 276 power-domains = <&sysc R8A7790_PD_ALWAYS_ON>; 282 277 resets = <&cpg 314>; 278 + mux-states = <&mux 0>; 283 279 }; 284 280 285 281 sdhi1: mmc@ee120000 {
+4
Documentation/devicetree/bindings/mmc/rockchip-dw-mshc.yaml
··· 47 47 - rockchip,rv1126-dw-mshc 48 48 - const: rockchip,rk3288-dw-mshc 49 49 # for Rockchip RK3576 with phase tuning inside the controller 50 + - items: 51 + - enum: 52 + - rockchip,rv1103b-dw-mshc 53 + - const: rockchip,rk3576-dw-mshc 50 54 - const: rockchip,rk3576-dw-mshc 51 55 52 56 reg:
+2
Documentation/devicetree/bindings/mmc/sdhci-msm.yaml
··· 38 38 - items: 39 39 - enum: 40 40 - qcom,ipq5018-sdhci 41 + - qcom,ipq5210-sdhci 41 42 - qcom,ipq5332-sdhci 42 43 - qcom,ipq5424-sdhci 43 44 - qcom,ipq6018-sdhci 44 45 - qcom,ipq9574-sdhci 46 + - qcom,ipq9650-sdhci 45 47 - qcom,kaanapali-sdhci 46 48 - qcom,milos-sdhci 47 49 - qcom,qcm2290-sdhci
+63
Documentation/devicetree/bindings/mmc/snps,dwcmshc-sdhci.yaml
··· 23 23 - const: sophgo,sg2044-dwcmshc 24 24 - const: sophgo,sg2042-dwcmshc 25 25 - enum: 26 + - canaan,k230-emmc 27 + - canaan,k230-sdio 28 + - hpe,gsc-dwcmshc 26 29 - rockchip,rk3568-dwcmshc 27 30 - rockchip,rk3588-dwcmshc 28 31 - snps,dwcmshc-sdhci ··· 53 50 maxItems: 1 54 51 55 52 resets: 53 + minItems: 4 56 54 maxItems: 5 57 55 58 56 reset-names: 57 + minItems: 4 59 58 maxItems: 5 59 + 60 + canaan,usb-phy: 61 + $ref: /schemas/types.yaml#/definitions/phandle 62 + description: Phandle to the Canaan K230 USB PHY node required for 63 + k230-emmc/sdio. 60 64 61 65 rockchip,txclk-tapnum: 62 66 description: Specify the number of delay for tx sampling. ··· 87 77 description: Specifies the drive impedance in Ohm. 88 78 enum: [33, 40, 50, 66, 100] 89 79 80 + hpe,gxp-sysreg: 81 + $ref: /schemas/types.yaml#/definitions/phandle-array 82 + items: 83 + - items: 84 + - description: phandle to HPE GXP SoC system register block (syscon) 85 + - description: offset of the MSHCCS register within the syscon block 86 + description: 87 + Phandle to the HPE GXP SoC system register block (syscon) and 88 + offset of the MSHCCS register used to configure clock 89 + synchronisation for HS200 tuning. 
90 + 90 91 required: 91 92 - compatible 92 93 - reg ··· 107 86 108 87 allOf: 109 88 - $ref: mmc-controller.yaml# 89 + 90 + - if: 91 + properties: 92 + compatible: 93 + contains: 94 + enum: 95 + - canaan,k230-emmc 96 + - canaan,k230-sdio 97 + then: 98 + properties: 99 + clocks: 100 + minItems: 5 101 + clock-names: 102 + items: 103 + - const: core 104 + - const: bus 105 + - const: axi 106 + - const: block 107 + - const: timer 108 + required: 109 + - canaan,usb-phy 110 + 111 + - if: 112 + properties: 113 + compatible: 114 + contains: 115 + const: hpe,gsc-dwcmshc 116 + 117 + then: 118 + properties: 119 + clocks: 120 + items: 121 + - description: core clock 122 + clock-names: 123 + items: 124 + - const: core 125 + required: 126 + - hpe,gxp-sysreg 127 + else: 128 + properties: 129 + hpe,gxp-sysreg: false 110 130 111 131 - if: 112 132 properties: ··· 208 146 else: 209 147 properties: 210 148 resets: 149 + minItems: 5 211 150 maxItems: 5 212 151 reset-names: 213 152 items:
+13 -1
Documentation/devicetree/bindings/mmc/spacemit,sdhci.yaml
··· 14 14 15 15 properties: 16 16 compatible: 17 - const: spacemit,k1-sdhci 17 + enum: 18 + - spacemit,k1-sdhci 19 + - spacemit,k3-sdhci 18 20 19 21 reg: 20 22 maxItems: 1 ··· 33 31 items: 34 32 - const: core 35 33 - const: io 34 + 35 + resets: 36 + items: 37 + - description: axi reset, connect to AXI bus, shared by all controllers 38 + - description: sdh reset, connect to individual controller separately 39 + 40 + reset-names: 41 + items: 42 + - const: axi 43 + - const: sdh 36 44 37 45 required: 38 46 - compatible
+2
MAINTAINERS
··· 2640 2640 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 2641 2641 S: Supported 2642 2642 F: Documentation/devicetree/bindings/arm/bst.yaml 2643 + F: Documentation/devicetree/bindings/mmc/bst,c1200-sdhci.yaml 2643 2644 F: arch/arm64/boot/dts/bst/ 2645 + F: drivers/mmc/host/sdhci-of-bst.c 2644 2646 2645 2647 ARM/CALXEDA HIGHBANK ARCHITECTURE 2646 2648 M: Andre Przywara <andre.przywara@arm.com>
+2 -2
arch/arm64/boot/dts/hisilicon/hi3660-hikey960.dts
··· 20 20 compatible = "hisilicon,hi3660-hikey960", "hisilicon,hi3660"; 21 21 22 22 aliases { 23 - mshc1 = &dwmmc1; 24 - mshc2 = &dwmmc2; 23 + mmc1 = &dwmmc1; 24 + mmc2 = &dwmmc2; 25 25 serial0 = &uart0; 26 26 serial1 = &uart1; 27 27 serial2 = &uart2;
+2 -2
arch/arm64/boot/dts/hisilicon/hi3670-hikey970.dts
··· 19 19 compatible = "hisilicon,hi3670-hikey970", "hisilicon,hi3670"; 20 20 21 21 aliases { 22 - mshc1 = &dwmmc1; 23 - mshc2 = &dwmmc2; 22 + mmc1 = &dwmmc1; 23 + mmc2 = &dwmmc2; 24 24 serial0 = &uart0; 25 25 serial1 = &uart1; 26 26 serial2 = &uart2;
+5 -19
drivers/i2c/busses/i2c-omap.c
··· 1453 1453 (1000 * omap->speed / 8); 1454 1454 } 1455 1455 1456 - if (of_property_present(node, "mux-states")) { 1457 - struct mux_state *mux_state; 1458 - 1459 - mux_state = devm_mux_state_get(&pdev->dev, NULL); 1460 - if (IS_ERR(mux_state)) { 1461 - r = PTR_ERR(mux_state); 1462 - dev_dbg(&pdev->dev, "failed to get I2C mux: %d\n", r); 1463 - goto err_put_pm; 1464 - } 1465 - omap->mux_state = mux_state; 1466 - r = mux_state_select(omap->mux_state); 1467 - if (r) { 1468 - dev_err(&pdev->dev, "failed to select I2C mux: %d\n", r); 1469 - goto err_put_pm; 1470 - } 1456 + omap->mux_state = devm_mux_state_get_optional_selected(&pdev->dev, NULL); 1457 + if (IS_ERR(omap->mux_state)) { 1458 + r = PTR_ERR(omap->mux_state); 1459 + goto err_put_pm; 1471 1460 } 1472 1461 1473 1462 /* reset ASAP, clearing any IRQs */ 1474 1463 r = omap_i2c_init(omap); 1475 1464 if (r) 1476 - goto err_mux_state_deselect; 1465 + goto err_put_pm; 1477 1466 1478 1467 if (omap->rev < OMAP_I2C_OMAP1_REV_2) 1479 1468 r = devm_request_irq(&pdev->dev, omap->irq, omap_i2c_omap1_isr, ··· 1504 1515 1505 1516 err_unuse_clocks: 1506 1517 omap_i2c_write_reg(omap, OMAP_I2C_CON_REG, 0); 1507 - err_mux_state_deselect: 1508 - if (omap->mux_state) 1509 - mux_state_deselect(omap->mux_state); 1510 1518 err_put_pm: 1511 1519 pm_runtime_put_sync(omap->dev); 1512 1520 err_disable_pm:
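The i2c-omap hunk above replaces an open-coded sequence (an `of_property_present()` check, `devm_mux_state_get()`, `mux_state_select()`, and a dedicated unwind label) with the new combined helper. A userspace sketch of what the optional-selected pattern buys the caller; the names here (`board_has_mux`, `get_optional_selected()`, `probe()`) are mock stand-ins, not the kernel mux API:

```c
#include <assert.h>
#include <stddef.h>

/* Mock mux state; in the kernel this is struct mux_state from the mux core. */
struct mux_state { int selected; };

static struct mux_state board_mux;
static int board_has_mux;	/* stand-in for a "mux-states" DT property */

/*
 * Optional + selected getter: when no mux is described, return NULL
 * (not an error pointer), so the caller needs neither a presence check
 * up front nor a deselect label on its error path.
 */
static struct mux_state *get_optional_selected(void)
{
	if (!board_has_mux)
		return NULL;
	board_mux.selected = 1;	/* get and select in one step */
	return &board_mux;
}

/* Caller shape after the conversion: one call, one error check. */
static int probe(void)
{
	struct mux_state *st = get_optional_selected();

	/* NULL simply means "no mux on this board": carry on */
	(void)st;
	return 0;
}
```

This is why the diff can delete the `err_mux_state_deselect` label: absence and selection are both folded into the single devm-managed call.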
+13 -73
drivers/mmc/core/block.c
··· 1401 1401 rq_data_dir(req) == WRITE && 1402 1402 (md->flags & MMC_BLK_REL_WR); 1403 1403 1404 + if (mqrq->flags & MQRQ_XFER_SINGLE_BLOCK) 1405 + recovery_mode = 1; 1406 + 1404 1407 memset(brq, 0, sizeof(struct mmc_blk_request)); 1405 1408 1406 1409 mmc_crypto_prepare_req(mqrq); ··· 1458 1455 * sectors can be read successfully. 1459 1456 */ 1460 1457 if (recovery_mode) 1461 - brq->data.blocks = queue_physical_block_size(mq->queue) >> 9; 1458 + brq->data.blocks = queue_physical_block_size(mq->queue) >> SECTOR_SHIFT; 1462 1459 1463 1460 /* 1464 1461 * Some controllers have HW issues while operating ··· 1543 1540 err = 0; 1544 1541 1545 1542 if (err) { 1546 - if (mqrq->retries++ < MMC_CQE_RETRIES) 1543 + if (mqrq->retries++ < MMC_CQE_RETRIES) { 1544 + mqrq->flags |= MQRQ_XFER_SINGLE_BLOCK; 1547 1545 blk_mq_requeue_request(req, true); 1548 - else 1546 + } else { 1549 1547 blk_mq_end_request(req, BLK_STS_IOERR); 1548 + } 1550 1549 } else if (mrq->data) { 1551 1550 if (blk_update_request(req, BLK_STS_OK, mrq->data->bytes_xfered)) 1552 1551 blk_mq_requeue_request(req, true); ··· 1781 1776 return err; 1782 1777 } 1783 1778 1784 - #define MMC_READ_SINGLE_RETRIES 2 1785 - 1786 - /* Single (native) sector read during recovery */ 1787 - static void mmc_blk_read_single(struct mmc_queue *mq, struct request *req) 1788 - { 1789 - struct mmc_queue_req *mqrq = req_to_mmc_queue_req(req); 1790 - struct mmc_request *mrq = &mqrq->brq.mrq; 1791 - struct mmc_card *card = mq->card; 1792 - struct mmc_host *host = card->host; 1793 - blk_status_t error = BLK_STS_OK; 1794 - size_t bytes_per_read = queue_physical_block_size(mq->queue); 1795 - 1796 - do { 1797 - u32 status; 1798 - int err; 1799 - int retries = 0; 1800 - 1801 - while (retries++ <= MMC_READ_SINGLE_RETRIES) { 1802 - mmc_blk_rw_rq_prep(mqrq, card, 1, mq); 1803 - 1804 - mmc_wait_for_req(host, mrq); 1805 - 1806 - err = mmc_send_status(card, &status); 1807 - if (err) 1808 - goto error_exit; 1809 - 1810 - if (!mmc_host_is_spi(host) 
&& 1811 - !mmc_ready_for_data(status)) { 1812 - err = mmc_blk_fix_state(card, req); 1813 - if (err) 1814 - goto error_exit; 1815 - } 1816 - 1817 - if (!mrq->cmd->error) 1818 - break; 1819 - } 1820 - 1821 - if (mrq->cmd->error || 1822 - mrq->data->error || 1823 - (!mmc_host_is_spi(host) && 1824 - (mrq->cmd->resp[0] & CMD_ERRORS || status & CMD_ERRORS))) 1825 - error = BLK_STS_IOERR; 1826 - else 1827 - error = BLK_STS_OK; 1828 - 1829 - } while (blk_update_request(req, error, bytes_per_read)); 1830 - 1831 - return; 1832 - 1833 - error_exit: 1834 - mrq->data->bytes_xfered = 0; 1835 - blk_update_request(req, BLK_STS_IOERR, bytes_per_read); 1836 - /* Let it try the remaining request again */ 1837 - if (mqrq->retries > MMC_MAX_RETRIES - 1) 1838 - mqrq->retries = MMC_MAX_RETRIES - 1; 1839 - } 1840 - 1841 1779 static inline bool mmc_blk_oor_valid(struct mmc_blk_request *brq) 1842 1780 { 1843 1781 return !!brq->mrq.sbc; ··· 1916 1968 mqrq->retries = MMC_MAX_RETRIES - MMC_DATA_RETRIES; 1917 1969 return; 1918 1970 } 1919 - 1920 - if (rq_data_dir(req) == READ && brq->data.blocks > 1921 - queue_physical_block_size(mq->queue) >> 9) { 1922 - /* Read one (native) sector at a time */ 1923 - mmc_blk_read_single(mq, req); 1924 - return; 1925 - } 1926 1971 } 1927 1972 1928 1973 static inline bool mmc_blk_rq_error(struct mmc_blk_request *brq) ··· 2026 2085 } else if (!blk_rq_bytes(req)) { 2027 2086 __blk_mq_end_request(req, BLK_STS_IOERR); 2028 2087 } else if (mqrq->retries++ < MMC_MAX_RETRIES) { 2088 + mqrq->flags |= MQRQ_XFER_SINGLE_BLOCK; 2029 2089 blk_mq_requeue_request(req, true); 2030 2090 } else { 2031 2091 if (mmc_card_removed(mq->card)) ··· 2959 3017 */ 2960 3018 ret = mmc_blk_alloc_rpmb_part(card, md, 2961 3019 card->part[idx].part_cfg, 2962 - card->part[idx].size >> 9, 3020 + card->part[idx].size >> SECTOR_SHIFT, 2963 3021 card->part[idx].name); 2964 3022 if (ret) 2965 3023 return ret; 2966 3024 } else if (card->part[idx].size) { 2967 3025 ret = mmc_blk_alloc_part(card, md, 
2968 3026 card->part[idx].part_cfg, 2969 - card->part[idx].size >> 9, 3027 + card->part[idx].size >> SECTOR_SHIFT, 2970 3028 card->part[idx].force_ro, 2971 3029 card->part[idx].name, 2972 3030 card->part[idx].area_type); ··· 3296 3354 _mmc_blk_suspend(card); 3297 3355 } 3298 3356 3299 - #ifdef CONFIG_PM_SLEEP 3300 3357 static int mmc_blk_suspend(struct device *dev) 3301 3358 { 3302 3359 struct mmc_card *card = mmc_dev_to_card(dev); ··· 3321 3380 } 3322 3381 return 0; 3323 3382 } 3324 - #endif 3325 3383 3326 - static SIMPLE_DEV_PM_OPS(mmc_blk_pm_ops, mmc_blk_suspend, mmc_blk_resume); 3384 + static DEFINE_SIMPLE_DEV_PM_OPS(mmc_blk_pm_ops, mmc_blk_suspend, mmc_blk_resume); 3327 3385 3328 3386 static struct mmc_driver mmc_driver = { 3329 3387 .drv = { 3330 3388 .name = "mmcblk", 3331 - .pm = &mmc_blk_pm_ops, 3389 + .pm = pm_sleep_ptr(&mmc_blk_pm_ops), 3332 3390 }, 3333 3391 .probe = mmc_blk_probe, 3334 3392 .remove = mmc_blk_remove,
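Several hunks in block.c (and in core.c and mmc_test.c below) replace the magic shift `9` with `SECTOR_SHIFT`. A trivial self-contained illustration of the identity being named; `bytes_to_sectors()` is a made-up helper, not a kernel function:

```c
#include <assert.h>

#define SECTOR_SHIFT 9
#define SECTOR_SIZE  (1 << SECTOR_SHIFT)	/* 512 bytes */

/*
 * Convert a byte count to 512-byte sectors, the same arithmetic the
 * series spells out by replacing ">> 9" with ">> SECTOR_SHIFT".
 */
static unsigned long long bytes_to_sectors(unsigned long long bytes)
{
	return bytes >> SECTOR_SHIFT;
}
```

The behavior is unchanged; the named constant just makes the "bytes to sectors" intent visible at each call site.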
+11
drivers/mmc/core/card.h
··· 89 89 #define CID_MANFID_MICRON 0x13 90 90 #define CID_MANFID_SAMSUNG 0x15 91 91 #define CID_MANFID_APACER 0x27 92 + #define CID_MANFID_SANDISK_MMC 0x45 92 93 #define CID_MANFID_SWISSBIT 0x5D 93 94 #define CID_MANFID_KINGSTON 0x70 94 95 #define CID_MANFID_HYNIX 0x90 ··· 304 303 static inline int mmc_card_no_uhs_ddr50_tuning(const struct mmc_card *c) 305 304 { 306 305 return c->quirks & MMC_QUIRK_NO_UHS_DDR50_TUNING; 306 + } 307 + 308 + static inline int mmc_card_broken_mdt(const struct mmc_card *c) 309 + { 310 + return c->quirks & MMC_QUIRK_BROKEN_MDT; 311 + } 312 + 313 + static inline int mmc_card_fixed_secure_erase_trim_time(const struct mmc_card *c) 314 + { 315 + return c->quirks & MMC_QUIRK_FIXED_SECURE_ERASE_TRIM_TIME; 307 316 } 308 317 309 318 #endif
+2 -1
drivers/mmc/core/core.c
··· 97 97 return; 98 98 99 99 data->error = data_errors[get_random_u32_below(ARRAY_SIZE(data_errors))]; 100 - data->bytes_xfered = get_random_u32_below(data->bytes_xfered >> 9) << 9; 100 + data->bytes_xfered = get_random_u32_below(data->bytes_xfered >> SECTOR_SHIFT) 101 + << SECTOR_SHIFT; 101 102 } 102 103 103 104 #else /* CONFIG_FAIL_MMC_REQUEST */
+17 -12
drivers/mmc/core/host.c
··· 33 33 34 34 static DEFINE_IDA(mmc_host_ida); 35 35 36 - #ifdef CONFIG_PM_SLEEP 37 36 static int mmc_host_class_prepare(struct device *dev) 38 37 { 39 38 struct mmc_host *host = cls_dev_to_mmc_host(dev); ··· 59 60 } 60 61 61 62 static const struct dev_pm_ops mmc_host_class_dev_pm_ops = { 62 - .prepare = mmc_host_class_prepare, 63 - .complete = mmc_host_class_complete, 63 + .prepare = pm_sleep_ptr(mmc_host_class_prepare), 64 + .complete = pm_sleep_ptr(mmc_host_class_complete), 64 65 }; 65 - 66 - #define MMC_HOST_CLASS_DEV_PM_OPS (&mmc_host_class_dev_pm_ops) 67 - #else 68 - #define MMC_HOST_CLASS_DEV_PM_OPS NULL 69 - #endif 70 66 71 67 static void mmc_host_classdev_release(struct device *dev) 72 68 { ··· 84 90 .name = "mmc_host", 85 91 .dev_release = mmc_host_classdev_release, 86 92 .shutdown_pre = mmc_host_classdev_shutdown, 87 - .pm = MMC_HOST_CLASS_DEV_PM_OPS, 93 + .pm = pm_ptr(&mmc_host_class_dev_pm_ops), 88 94 }; 89 95 90 96 int mmc_register_host_class(void) ··· 373 379 host->caps2 |= MMC_CAP2_FULL_PWR_CYCLE_IN_SUSPEND; 374 380 if (device_property_read_bool(dev, "keep-power-in-suspend")) 375 381 host->pm_caps |= MMC_PM_KEEP_POWER; 376 - if (device_property_read_bool(dev, "wakeup-source") || 377 - device_property_read_bool(dev, "enable-sdio-wakeup")) /* legacy */ 382 + if (device_property_read_bool(dev, "wakeup-source")) 378 383 host->pm_caps |= MMC_PM_WAKE_SDIO_IRQ; 379 384 if (device_property_read_bool(dev, "mmc-ddr-3_3v")) 380 385 host->caps |= MMC_CAP_3_3V_DDR; ··· 617 624 return -EINVAL; 618 625 } 619 626 627 + /* UHS/DDR/HS200 modes require at least 4-bit bus */ 628 + if (!(caps & (MMC_CAP_4_BIT_DATA | MMC_CAP_8_BIT_DATA)) && 629 + ((caps & (MMC_CAP_UHS | MMC_CAP_DDR)) || (caps2 & MMC_CAP2_HS200))) { 630 + dev_warn(dev, "drop UHS/DDR/HS200 support since 1-bit bus only\n"); 631 + caps &= ~(MMC_CAP_UHS | MMC_CAP_DDR); 632 + caps2 &= ~MMC_CAP2_HS200; 633 + } 634 + 635 + /* HS400 and HS400ES modes require 8-bit bus */ 620 636 if (caps2 & (MMC_CAP2_HS400_ES | 
MMC_CAP2_HS400) && 621 637 !(caps & MMC_CAP_8_BIT_DATA) && !(caps2 & MMC_CAP2_NO_MMC)) { 622 638 dev_warn(dev, "drop HS400 support since no 8-bit bus\n"); 623 - host->caps2 = caps2 & ~MMC_CAP2_HS400_ES & ~MMC_CAP2_HS400; 639 + caps2 &= ~(MMC_CAP2_HS400_ES | MMC_CAP2_HS400); 624 640 } 641 + 642 + host->caps = caps; 643 + host->caps2 = caps2; 625 644 626 645 return 0; 627 646 }
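The host.c hunk adds a validation rule (wide-bus speed modes are dropped when neither 4-bit nor 8-bit data caps are set) and fixes the HS400 path to mask the local `caps2` before writing both masks back. A minimal model of that rule; the `CAP_*` bit values below are arbitrary placeholders, not the real `MMC_CAP_*` constants from `include/linux/mmc/host.h`:

```c
#include <assert.h>

/* Illustrative capability bits; values are placeholders. */
#define CAP_4_BIT	(1u << 0)
#define CAP_8_BIT	(1u << 1)
#define CAP_UHS		(1u << 2)
#define CAP_DDR		(1u << 3)
#define CAP2_HS200	(1u << 0)

/*
 * Mirror of the new check: UHS/DDR/HS200 all need at least a 4-bit
 * bus, so on a 1-bit-only host those modes are silently dropped.
 */
static void validate_caps(unsigned int *caps, unsigned int *caps2)
{
	if (!(*caps & (CAP_4_BIT | CAP_8_BIT)) &&
	    ((*caps & (CAP_UHS | CAP_DDR)) || (*caps2 & CAP2_HS200))) {
		*caps &= ~(CAP_UHS | CAP_DDR);
		*caps2 &= ~CAP2_HS200;
	}
}
```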
+1 -5
drivers/mmc/core/host.h
··· 56 56 57 57 static inline int mmc_host_can_uhs(struct mmc_host *host) 58 58 { 59 - return host->caps & 60 - (MMC_CAP_UHS_SDR12 | MMC_CAP_UHS_SDR25 | 61 - MMC_CAP_UHS_SDR50 | MMC_CAP_UHS_SDR104 | 62 - MMC_CAP_UHS_DDR50) && 63 - host->caps & MMC_CAP_4_BIT_DATA; 59 + return host->caps & MMC_CAP_UHS; 64 60 } 65 61 66 62 static inline bool mmc_card_hs200(struct mmc_card *card)
+12
drivers/mmc/core/mmc.c
··· 671 671 card->ext_csd.enhanced_rpmb_supported = 672 672 (card->ext_csd.rel_param & 673 673 EXT_CSD_WR_REL_PARAM_EN_RPMB_REL_WR); 674 + 675 + if (card->ext_csd.rev >= 9) { 676 + /* Adjust production date as per JEDEC JESD84-B51B September 2025 */ 677 + if (card->cid.year < 2023) 678 + card->cid.year += 16; 679 + } else { 680 + /* Handle vendors with broken MDT reporting */ 681 + if (mmc_card_broken_mdt(card) && card->cid.year >= 2010 && 682 + card->cid.year <= 2012) 683 + card->cid.year += 16; 684 + } 674 685 } 686 + 675 687 out: 676 688 return err; 677 689 }
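The mmc.c hunk implements "support for manufacturing date beyond 2025" from the changelog: the CID date field is narrow and wraps every 16 years, so new-enough cards (EXT_CSD rev >= 9) reporting a pre-2023 year, and quirked cards reporting 2010..2012, get 16 years added. A standalone sketch of that arithmetic; `adjust_mdt_year()` is a hypothetical name for illustration only:

```c
#include <assert.h>

/*
 * Fixup mirroring the hunk above: for EXT_CSD rev >= 9 a reported year
 * below 2023 must be a wrapped value (per the JESD84-B51B update), and
 * broken_mdt models the vendor quirk path for older cards.
 */
static unsigned int adjust_mdt_year(unsigned int year, int ext_csd_rev,
				    int broken_mdt)
{
	if (ext_csd_rev >= 9) {
		if (year < 2023)
			year += 16;
	} else if (broken_mdt && year >= 2010 && year <= 2012) {
		year += 16;
	}
	return year;
}
```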
+46 -65
drivers/mmc/core/mmc_test.c
··· 37 37 * Limit the test area size to the maximum MMC HC erase group size. Note that 38 38 * the maximum SD allocation unit size is just 4MiB. 39 39 */ 40 - #define TEST_AREA_MAX_SIZE (128 * 1024 * 1024) 40 + #define TEST_AREA_MAX_SIZE SZ_128M 41 41 42 42 /** 43 43 * struct mmc_test_pages - pages allocated by 'alloc_pages()'. ··· 51 51 52 52 /** 53 53 * struct mmc_test_mem - allocated memory. 54 - * @arr: array of allocations 55 54 * @cnt: number of allocations 55 + * @arr: array of allocations 56 56 */ 57 57 struct mmc_test_mem { 58 - struct mmc_test_pages *arr; 59 58 unsigned int cnt; 59 + struct mmc_test_pages arr[] __counted_by(cnt); 60 60 }; 61 61 62 62 /** ··· 135 135 * struct mmc_test_card - test information. 136 136 * @card: card under test 137 137 * @scratch: transfer buffer 138 - * @buffer: transfer buffer 139 138 * @highmem: buffer for highmem tests 140 139 * @area: information for performance tests 141 140 * @gr: pointer to results of current testcase 141 + * @buffer: transfer buffer 142 142 */ 143 143 struct mmc_test_card { 144 144 struct mmc_card *card; 145 145 146 146 u8 scratch[BUFFER_SIZE]; 147 - u8 *buffer; 148 147 #ifdef CONFIG_HIGHMEM 149 148 struct page *highmem; 150 149 #endif 151 150 struct mmc_test_area area; 152 151 struct mmc_test_general_result *gr; 152 + 153 + u8 buffer[]; 153 154 }; 154 155 155 156 enum mmc_test_prep_media { ··· 169 168 enum mmc_test_prep_media prepare; 170 169 }; 171 170 171 + static unsigned int bs[] = {1 << 12, 1 << 13, 1 << 14, 1 << 15, 1 << 16, 172 + 1 << 17, 1 << 18, 1 << 19, 1 << 20, 1 << 22}; 173 + 174 + static unsigned int sg_len[] = {1, 1 << 3, 1 << 4, 1 << 5, 1 << 6, 175 + 1 << 7, 1 << 8, 1 << 9}; 172 176 /*******************************************************************/ 173 177 /* General helper functions */ 174 178 /*******************************************************************/ ··· 321 315 while (mem->cnt--) 322 316 __free_pages(mem->arr[mem->cnt].page, 323 317 mem->arr[mem->cnt].order); 324 - 
kfree(mem->arr); 325 318 kfree(mem); 326 319 } 327 320 ··· 353 348 if (max_segs > max_page_cnt) 354 349 max_segs = max_page_cnt; 355 350 356 - mem = kzalloc_obj(*mem); 351 + mem = kzalloc_flex(*mem, arr, max_segs); 357 352 if (!mem) 358 353 return NULL; 359 - 360 - mem->arr = kzalloc_objs(*mem->arr, max_segs); 361 - if (!mem->arr) 362 - goto out_free; 363 354 364 355 while (max_page_cnt) { 365 356 struct page *page; ··· 507 506 uint64_t ns; 508 507 509 508 ns = timespec64_to_ns(ts); 510 - bytes *= 1000000000; 509 + bytes *= NSEC_PER_SEC; 511 510 512 511 while (ns > UINT_MAX) { 513 512 bytes >>= 1; ··· 553 552 static void mmc_test_print_rate(struct mmc_test_card *test, uint64_t bytes, 554 553 struct timespec64 *ts1, struct timespec64 *ts2) 555 554 { 556 - unsigned int rate, iops, sectors = bytes >> 9; 555 + unsigned int rate, iops, sectors = bytes >> SECTOR_SHIFT; 557 556 struct timespec64 ts; 558 557 559 558 ts = timespec64_sub(*ts2, *ts1); ··· 578 577 unsigned int count, struct timespec64 *ts1, 579 578 struct timespec64 *ts2) 580 579 { 581 - unsigned int rate, iops, sectors = bytes >> 9; 580 + unsigned int rate, iops, sectors = bytes >> SECTOR_SHIFT; 582 581 uint64_t tot = bytes * count; 583 582 struct timespec64 ts; 584 583 ··· 1379 1378 int err; 1380 1379 unsigned int sg_len = 0; 1381 1380 1382 - t->blocks = sz >> 9; 1381 + t->blocks = sz >> SECTOR_SHIFT; 1383 1382 1384 1383 if (max_scatter) { 1385 1384 err = mmc_test_map_sg_max_scatter(t->mem, sz, t->sg, ··· 1462 1461 else 1463 1462 for (i = 0; i < count && ret == 0; i++) { 1464 1463 ret = mmc_test_area_transfer(test, dev_addr, write); 1465 - dev_addr += sz >> 9; 1464 + dev_addr += sz >> SECTOR_SHIFT; 1466 1465 } 1467 1466 1468 1467 if (ret) ··· 1505 1504 if (!mmc_card_can_erase(test->card)) 1506 1505 return 0; 1507 1506 1508 - return mmc_erase(test->card, t->dev_addr, t->max_sz >> 9, 1507 + return mmc_erase(test->card, t->dev_addr, t->max_sz >> SECTOR_SHIFT, 1509 1508 MMC_ERASE_ARG); 1510 1509 } 1511 1510 ··· 
1533 1532 static int mmc_test_area_init(struct mmc_test_card *test, int erase, int fill) 1534 1533 { 1535 1534 struct mmc_test_area *t = &test->area; 1536 - unsigned long min_sz = 64 * 1024, sz; 1535 + unsigned long min_sz = SZ_64K, sz; 1537 1536 int ret; 1538 1537 1539 1538 ret = mmc_test_set_blksize(test, 512); ··· 1541 1540 return ret; 1542 1541 1543 1542 /* Make the test area size about 4MiB */ 1544 - sz = (unsigned long)test->card->pref_erase << 9; 1543 + sz = (unsigned long)test->card->pref_erase << SECTOR_SHIFT; 1545 1544 t->max_sz = sz; 1546 - while (t->max_sz < 4 * 1024 * 1024) 1545 + while (t->max_sz < SZ_4M) 1547 1546 t->max_sz += sz; 1548 1547 while (t->max_sz > TEST_AREA_MAX_SIZE && t->max_sz > sz) 1549 1548 t->max_sz -= sz; ··· 1553 1552 t->max_seg_sz -= t->max_seg_sz % 512; 1554 1553 1555 1554 t->max_tfr = t->max_sz; 1556 - if (t->max_tfr >> 9 > test->card->host->max_blk_count) 1557 - t->max_tfr = test->card->host->max_blk_count << 9; 1555 + if (t->max_tfr >> SECTOR_SHIFT > test->card->host->max_blk_count) 1556 + t->max_tfr = test->card->host->max_blk_count << SECTOR_SHIFT; 1558 1557 if (t->max_tfr > test->card->host->max_req_size) 1559 1558 t->max_tfr = test->card->host->max_req_size; 1560 1559 if (t->max_tfr / t->max_seg_sz > t->max_segs) ··· 1584 1583 } 1585 1584 1586 1585 t->dev_addr = mmc_test_capacity(test->card) / 2; 1587 - t->dev_addr -= t->dev_addr % (t->max_sz >> 9); 1586 + t->dev_addr -= t->dev_addr % (t->max_sz >> SECTOR_SHIFT); 1588 1587 1589 1588 if (erase) { 1590 1589 ret = mmc_test_area_erase(test); ··· 1689 1688 int ret; 1690 1689 1691 1690 for (sz = 512; sz < t->max_tfr; sz <<= 1) { 1692 - dev_addr = t->dev_addr + (sz >> 9); 1691 + dev_addr = t->dev_addr + (sz >> SECTOR_SHIFT); 1693 1692 ret = mmc_test_area_io(test, sz, dev_addr, 0, 0, 1); 1694 1693 if (ret) 1695 1694 return ret; ··· 1713 1712 if (ret) 1714 1713 return ret; 1715 1714 for (sz = 512; sz < t->max_tfr; sz <<= 1) { 1716 - dev_addr = t->dev_addr + (sz >> 9); 1715 + 
dev_addr = t->dev_addr + (sz >> SECTOR_SHIFT); 1717 1716 ret = mmc_test_area_io(test, sz, dev_addr, 1, 0, 1); 1718 1717 if (ret) 1719 1718 return ret; ··· 1744 1743 return RESULT_UNSUP_HOST; 1745 1744 1746 1745 for (sz = 512; sz < t->max_sz; sz <<= 1) { 1747 - dev_addr = t->dev_addr + (sz >> 9); 1746 + dev_addr = t->dev_addr + (sz >> SECTOR_SHIFT); 1748 1747 ktime_get_ts64(&ts1); 1749 - ret = mmc_erase(test->card, dev_addr, sz >> 9, MMC_TRIM_ARG); 1748 + ret = mmc_erase(test->card, dev_addr, sz >> SECTOR_SHIFT, MMC_TRIM_ARG); 1750 1749 if (ret) 1751 1750 return ret; 1752 1751 ktime_get_ts64(&ts2); ··· 1754 1753 } 1755 1754 dev_addr = t->dev_addr; 1756 1755 ktime_get_ts64(&ts1); 1757 - ret = mmc_erase(test->card, dev_addr, sz >> 9, MMC_TRIM_ARG); 1756 + ret = mmc_erase(test->card, dev_addr, sz >> SECTOR_SHIFT, MMC_TRIM_ARG); 1758 1757 if (ret) 1759 1758 return ret; 1760 1759 ktime_get_ts64(&ts2); ··· 1776 1775 ret = mmc_test_area_io(test, sz, dev_addr, 0, 0, 0); 1777 1776 if (ret) 1778 1777 return ret; 1779 - dev_addr += (sz >> 9); 1778 + dev_addr += (sz >> SECTOR_SHIFT); 1780 1779 } 1781 1780 ktime_get_ts64(&ts2); 1782 1781 mmc_test_print_avg_rate(test, sz, cnt, &ts1, &ts2); ··· 1818 1817 ret = mmc_test_area_io(test, sz, dev_addr, 1, 0, 0); 1819 1818 if (ret) 1820 1819 return ret; 1821 - dev_addr += (sz >> 9); 1820 + dev_addr += (sz >> SECTOR_SHIFT); 1822 1821 } 1823 1822 ktime_get_ts64(&ts2); 1824 1823 mmc_test_print_avg_rate(test, sz, cnt, &ts1, &ts2); ··· 1871 1870 dev_addr = t->dev_addr; 1872 1871 ktime_get_ts64(&ts1); 1873 1872 for (i = 0; i < cnt; i++) { 1874 - ret = mmc_erase(test->card, dev_addr, sz >> 9, 1873 + ret = mmc_erase(test->card, dev_addr, sz >> SECTOR_SHIFT, 1875 1874 MMC_TRIM_ARG); 1876 1875 if (ret) 1877 1876 return ret; 1878 - dev_addr += (sz >> 9); 1877 + dev_addr += (sz >> SECTOR_SHIFT); 1879 1878 } 1880 1879 ktime_get_ts64(&ts2); 1881 1880 mmc_test_print_avg_rate(test, sz, cnt, &ts1, &ts2); ··· 1902 1901 struct timespec64 ts1, ts2, ts; 1903 
1902 int ret; 1904 1903 1905 - ssz = sz >> 9; 1904 + ssz = sz >> SECTOR_SHIFT; 1906 1905 1907 1906 rnd_addr = mmc_test_capacity(test->card) / 4; 1908 1907 range1 = rnd_addr / test->card->pref_erase; ··· 2018 2017 sz = max_tfr; 2019 2018 } 2020 2019 2021 - ssz = sz >> 9; 2020 + ssz = sz >> SECTOR_SHIFT; 2022 2021 dev_addr = mmc_test_capacity(test->card) / 4; 2023 - if (tot_sz > dev_addr << 9) 2024 - tot_sz = dev_addr << 9; 2022 + if (tot_sz > dev_addr << SECTOR_SHIFT) 2023 + tot_sz = dev_addr << SECTOR_SHIFT; 2025 2024 cnt = tot_sz / sz; 2026 2025 dev_addr &= 0xffff0000; /* Round to 64MiB boundary */ 2027 2026 ··· 2045 2044 int ret, i; 2046 2045 2047 2046 for (i = 0; i < 10; i++) { 2048 - ret = mmc_test_seq_perf(test, write, 10 * 1024 * 1024, 1); 2047 + ret = mmc_test_seq_perf(test, write, 10 * SZ_1M, 1); 2049 2048 if (ret) 2050 2049 return ret; 2051 2050 } 2052 2051 for (i = 0; i < 5; i++) { 2053 - ret = mmc_test_seq_perf(test, write, 100 * 1024 * 1024, 1); 2052 + ret = mmc_test_seq_perf(test, write, 100 * SZ_1M, 1); 2054 2053 if (ret) 2055 2054 return ret; 2056 2055 } 2057 2056 for (i = 0; i < 3; i++) { 2058 - ret = mmc_test_seq_perf(test, write, 1000 * 1024 * 1024, 1); 2057 + ret = mmc_test_seq_perf(test, write, 1000 * SZ_1M, 1); 2059 2058 if (ret) 2060 2059 return ret; 2061 2060 } ··· 2158 2157 int i; 2159 2158 2160 2159 for (i = 0 ; i < rw->len && ret == 0; i++) { 2161 - ret = mmc_test_rw_multiple(test, rw, 512 * 1024, rw->size, 2160 + ret = mmc_test_rw_multiple(test, rw, SZ_512K, rw->size, 2162 2161 rw->sg_len[i]); 2163 2162 if (ret) 2164 2163 break; ··· 2171 2170 */ 2172 2171 static int mmc_test_profile_mult_write_blocking_perf(struct mmc_test_card *test) 2173 2172 { 2174 - unsigned int bs[] = {1 << 12, 1 << 13, 1 << 14, 1 << 15, 1 << 16, 2175 - 1 << 17, 1 << 18, 1 << 19, 1 << 20, 1 << 22}; 2176 2173 struct mmc_test_multiple_rw test_data = { 2177 2174 .bs = bs, 2178 2175 .size = TEST_AREA_MAX_SIZE, ··· 2188 2189 */ 2189 2190 static int 
mmc_test_profile_mult_write_nonblock_perf(struct mmc_test_card *test) 2190 2191 { 2191 - unsigned int bs[] = {1 << 12, 1 << 13, 1 << 14, 1 << 15, 1 << 16, 2192 - 1 << 17, 1 << 18, 1 << 19, 1 << 20, 1 << 22}; 2193 2192 struct mmc_test_multiple_rw test_data = { 2194 2193 .bs = bs, 2195 2194 .size = TEST_AREA_MAX_SIZE, ··· 2205 2208 */ 2206 2209 static int mmc_test_profile_mult_read_blocking_perf(struct mmc_test_card *test) 2207 2210 { 2208 - unsigned int bs[] = {1 << 12, 1 << 13, 1 << 14, 1 << 15, 1 << 16, 2209 - 1 << 17, 1 << 18, 1 << 19, 1 << 20, 1 << 22}; 2210 2211 struct mmc_test_multiple_rw test_data = { 2211 2212 .bs = bs, 2212 2213 .size = TEST_AREA_MAX_SIZE, ··· 2222 2227 */ 2223 2228 static int mmc_test_profile_mult_read_nonblock_perf(struct mmc_test_card *test) 2224 2229 { 2225 - unsigned int bs[] = {1 << 12, 1 << 13, 1 << 14, 1 << 15, 1 << 16, 2226 - 1 << 17, 1 << 18, 1 << 19, 1 << 20, 1 << 22}; 2227 2230 struct mmc_test_multiple_rw test_data = { 2228 2231 .bs = bs, 2229 2232 .size = TEST_AREA_MAX_SIZE, ··· 2239 2246 */ 2240 2247 static int mmc_test_profile_sglen_wr_blocking_perf(struct mmc_test_card *test) 2241 2248 { 2242 - unsigned int sg_len[] = {1, 1 << 3, 1 << 4, 1 << 5, 1 << 6, 2243 - 1 << 7, 1 << 8, 1 << 9}; 2244 2249 struct mmc_test_multiple_rw test_data = { 2245 2250 .sg_len = sg_len, 2246 2251 .size = TEST_AREA_MAX_SIZE, ··· 2256 2265 */ 2257 2266 static int mmc_test_profile_sglen_wr_nonblock_perf(struct mmc_test_card *test) 2258 2267 { 2259 - unsigned int sg_len[] = {1, 1 << 3, 1 << 4, 1 << 5, 1 << 6, 2260 - 1 << 7, 1 << 8, 1 << 9}; 2261 2268 struct mmc_test_multiple_rw test_data = { 2262 2269 .sg_len = sg_len, 2263 2270 .size = TEST_AREA_MAX_SIZE, ··· 2273 2284 */ 2274 2285 static int mmc_test_profile_sglen_r_blocking_perf(struct mmc_test_card *test) 2275 2286 { 2276 - unsigned int sg_len[] = {1, 1 << 3, 1 << 4, 1 << 5, 1 << 6, 2277 - 1 << 7, 1 << 8, 1 << 9}; 2278 2287 struct mmc_test_multiple_rw test_data = { 2279 2288 .sg_len = sg_len, 2280 
2289 .size = TEST_AREA_MAX_SIZE, ··· 2290 2303 */ 2291 2304 static int mmc_test_profile_sglen_r_nonblock_perf(struct mmc_test_card *test) 2292 2305 { 2293 - unsigned int sg_len[] = {1, 1 << 3, 1 << 4, 1 << 5, 1 << 6, 2294 - 1 << 7, 1 << 8, 1 << 9}; 2295 2306 struct mmc_test_multiple_rw test_data = { 2296 2307 .sg_len = sg_len, 2297 2308 .size = TEST_AREA_MAX_SIZE, ··· 2441 2456 if (ret) 2442 2457 goto out_free; 2443 2458 2444 - if (repeat_cmd && (t->blocks + 1) << 9 > t->max_tfr) 2459 + if (repeat_cmd && (t->blocks + 1) << SECTOR_SHIFT > t->max_tfr) 2445 2460 pr_info("%s: %d commands completed during transfer of %u blocks\n", 2446 2461 mmc_hostname(test->card->host), count, t->blocks); 2447 2462 ··· 3084 3099 if (ret) 3085 3100 return ret; 3086 3101 3087 - test = kzalloc_obj(*test); 3102 + test = kzalloc_flex(*test, buffer, BUFFER_SIZE); 3088 3103 if (!test) 3089 3104 return -ENOMEM; 3090 3105 ··· 3096 3111 3097 3112 test->card = card; 3098 3113 3099 - test->buffer = kzalloc(BUFFER_SIZE, GFP_KERNEL); 3100 3114 #ifdef CONFIG_HIGHMEM 3101 3115 test->highmem = alloc_pages(GFP_KERNEL | __GFP_HIGHMEM, BUFFER_ORDER); 3102 3116 if (!test->highmem) { ··· 3104 3120 } 3105 3121 #endif 3106 3122 3107 - if (test->buffer) { 3108 - mutex_lock(&mmc_test_lock); 3109 - mmc_test_run(test, testcase); 3110 - mutex_unlock(&mmc_test_lock); 3111 - } 3123 + mutex_lock(&mmc_test_lock); 3124 + mmc_test_run(test, testcase); 3125 + mutex_unlock(&mmc_test_lock); 3112 3126 3113 3127 #ifdef CONFIG_HIGHMEM 3114 3128 __free_pages(test->highmem, BUFFER_ORDER); 3115 3129 free_test_buffer: 3116 3130 #endif 3117 - kfree(test->buffer); 3118 3131 kfree(test); 3119 3132 3120 3133 return count;
+7 -2
drivers/mmc/core/queue.c
··· 184 184 return; 185 185 186 186 lim->max_hw_discard_sectors = max_discard; 187 - if (mmc_card_can_secure_erase_trim(card)) 188 - lim->max_secure_erase_sectors = max_discard; 187 + if (mmc_card_can_secure_erase_trim(card)) { 188 + if (mmc_card_fixed_secure_erase_trim_time(card)) 189 + lim->max_secure_erase_sectors = UINT_MAX >> card->erase_shift; 190 + else 191 + lim->max_secure_erase_sectors = max_discard; 192 + } 193 + 189 194 if (mmc_card_can_trim(card) && card->erased_byte == 0) 190 195 lim->max_write_zeroes_sectors = max_discard; 191 196
+5 -2
drivers/mmc/core/queue.h
··· 61 61 MMC_DRV_OP_GET_EXT_CSD, 62 62 }; 63 63 64 + #define MQRQ_XFER_SINGLE_BLOCK BIT(0) 65 + 64 66 struct mmc_queue_req { 65 67 struct mmc_blk_request brq; 66 68 struct scatterlist *sg; 67 69 enum mmc_drv_op drv_op; 68 70 int drv_op_result; 69 71 void *drv_op_data; 70 - unsigned int ioc_count; 71 - int retries; 72 + u8 ioc_count; 73 + u8 retries; 74 + u8 flags; 72 75 }; 73 76 74 77 struct mmc_queue {
+14 -7
drivers/mmc/core/quirks.h
··· 153 153 MMC_FIXUP("M62704", CID_MANFID_KINGSTON, 0x0100, add_quirk_mmc, 154 154 MMC_QUIRK_TRIM_BROKEN), 155 155 156 + /* 157 + * On Some Kingston eMMCs, secure erase/trim time is independent 158 + * of erase size, fixed at approximately 2 seconds. 159 + */ 160 + MMC_FIXUP("IY2964", CID_MANFID_KINGSTON, 0x0100, add_quirk_mmc, 161 + MMC_QUIRK_FIXED_SECURE_ERASE_TRIM_TIME), 162 + MMC_FIXUP("IB2932", CID_MANFID_KINGSTON, 0x0100, add_quirk_mmc, 163 + MMC_QUIRK_FIXED_SECURE_ERASE_TRIM_TIME), 164 + 156 165 END_FIXUP 157 166 }; 158 167 ··· 178 169 */ 179 170 MMC_FIXUP_EXT_CSD_REV(CID_NAME_ANY, CID_MANFID_NUMONYX, 180 171 0x014e, add_quirk, MMC_QUIRK_BROKEN_HPI, 6), 172 + 173 + MMC_FIXUP(CID_NAME_ANY, CID_MANFID_SANDISK_MMC, CID_OEMID_ANY, add_quirk_mmc, 174 + MMC_QUIRK_BROKEN_MDT), 181 175 182 176 END_FIXUP 183 177 }; ··· 225 213 static inline bool mmc_fixup_of_compatible_match(struct mmc_card *card, 226 214 const char *compatible) 227 215 { 228 - struct device_node *np; 229 - 230 - for_each_child_of_node(mmc_dev(card->host)->of_node, np) { 231 - if (of_device_is_compatible(np, compatible)) { 232 - of_node_put(np); 216 + for_each_child_of_node_scoped(mmc_dev(card->host)->of_node, np) 217 + if (of_device_is_compatible(np, compatible)) 233 218 return true; 234 - } 235 - } 236 219 237 220 return false; 238 221 }
+2 -4
drivers/mmc/core/sdio_io.c
··· 163 163 if (blksz > func->card->host->max_blk_size) 164 164 return -EINVAL; 165 165 166 - if (blksz == 0) { 167 - blksz = min(func->max_blksize, func->card->host->max_blk_size); 168 - blksz = min(blksz, 512u); 169 - } 166 + if (blksz == 0) 167 + blksz = min3(func->max_blksize, func->card->host->max_blk_size, 512u); 170 168 171 169 ret = mmc_io_rw_direct(func->card, 1, 0, 172 170 SDIO_FBR_BASE(func->num) + SDIO_FBR_BLKSIZE,
+15 -1
drivers/mmc/host/Kconfig
··· 429 429 430 430 If you have a controller with this interface, say Y or M here. 431 431 432 + config MMC_SDHCI_BST 433 + tristate "SDHCI support for Black Sesame Technologies BST C1200 controller" 434 + depends on ARCH_BST || COMPILE_TEST 435 + depends on MMC_SDHCI_PLTFM 436 + depends on OF 437 + help 438 + This selects the Secure Digital Host Controller Interface (SDHCI) 439 + for Black Sesame Technologies BST C1200 SoC. The controller is 440 + based on Synopsys DesignWare Cores Mobile Storage Controller but 441 + requires platform-specific workarounds for hardware limitations. 442 + 443 + If you have a controller with this interface, say Y or M here. 444 + If unsure, say N. 445 + 432 446 config MMC_SDHCI_F_SDH30 433 447 tristate "SDHCI support for Fujitsu Semiconductor F_SDH30" 434 448 depends on MMC_SDHCI_PLTFM ··· 1058 1044 1059 1045 config MMC_SDHCI_MICROCHIP_PIC32 1060 1046 tristate "Microchip PIC32MZDA SDHCI support" 1061 - depends on MMC_SDHCI && PIC32MZDA && MMC_SDHCI_PLTFM 1047 + depends on MMC_SDHCI && MMC_SDHCI_PLTFM && (PIC32MZDA || COMPILE_TEST) 1062 1048 help 1063 1049 This selects the Secure Digital Host Controller Interface (SDHCI) 1064 1050 for PIC32MZDA platform.
+1
drivers/mmc/host/Makefile
··· 13 13 obj-$(CONFIG_MMC_SDHCI) += sdhci.o 14 14 obj-$(CONFIG_MMC_SDHCI_UHS2) += sdhci-uhs2.o 15 15 obj-$(CONFIG_MMC_SDHCI_PCI) += sdhci-pci.o 16 + obj-$(CONFIG_MMC_SDHCI_BST) += sdhci-of-bst.o 16 17 sdhci-pci-y += sdhci-pci-core.o sdhci-pci-o2micro.o sdhci-pci-arasan.o \ 17 18 sdhci-pci-dwc-mshc.o sdhci-pci-gli.o 18 19 obj-$(CONFIG_MMC_SDHCI_ACPI) += sdhci-acpi.o
+3 -9
drivers/mmc/host/atmel-mci.c
··· 629 629 { 630 630 struct device *dev = host->dev; 631 631 struct device_node *np = dev->of_node; 632 - struct device_node *cnp; 633 632 u32 slot_id; 634 633 int err; 635 634 636 635 if (!np) 637 636 return dev_err_probe(dev, -EINVAL, "device node not found\n"); 638 637 639 - for_each_child_of_node(np, cnp) { 638 + for_each_child_of_node_scoped(np, cnp) { 640 639 if (of_property_read_u32(cnp, "reg", &slot_id)) { 641 640 dev_warn(dev, "reg property is missing for %pOF\n", cnp); 642 641 continue; ··· 644 645 if (slot_id >= ATMCI_MAX_NR_SLOTS) { 645 646 dev_warn(dev, "can't have more than %d slots\n", 646 647 ATMCI_MAX_NR_SLOTS); 647 - of_node_put(cnp); 648 648 break; 649 649 } 650 650 ··· 656 658 "cd", GPIOD_IN, "cd-gpios"); 657 659 err = PTR_ERR_OR_ZERO(host->pdata[slot_id].detect_pin); 658 660 if (err) { 659 - if (err != -ENOENT) { 660 - of_node_put(cnp); 661 + if (err != -ENOENT) 661 662 return err; 662 - } 663 663 host->pdata[slot_id].detect_pin = NULL; 664 664 } 665 665 ··· 669 673 "wp", GPIOD_IN, "wp-gpios"); 670 674 err = PTR_ERR_OR_ZERO(host->pdata[slot_id].wp_pin); 671 675 if (err) { 672 - if (err != -ENOENT) { 673 - of_node_put(cnp); 676 + if (err != -ENOENT) 674 677 return err; 675 - } 676 678 host->pdata[slot_id].wp_pin = NULL; 677 679 } 678 680 }
+2 -3
drivers/mmc/host/cavium-octeon.c
··· 148 148 149 149 static int octeon_mmc_probe(struct platform_device *pdev) 150 150 { 151 - struct device_node *cn, *node = pdev->dev.of_node; 151 + struct device_node *node = pdev->dev.of_node; 152 152 struct cvm_mmc_host *host; 153 153 void __iomem *base; 154 154 int mmc_irq[9]; ··· 268 268 platform_set_drvdata(pdev, host); 269 269 270 270 i = 0; 271 - for_each_child_of_node(node, cn) { 271 + for_each_child_of_node_scoped(node, cn) { 272 272 host->slot_pdev[i] = 273 273 of_platform_device_create(cn, NULL, &pdev->dev); 274 274 if (!host->slot_pdev[i]) { ··· 279 279 if (ret) { 280 280 dev_err(&pdev->dev, "Error populating slots\n"); 281 281 octeon_mmc_set_shared_power(host, 0); 282 - of_node_put(cn); 283 282 goto error; 284 283 } 285 284 i++;
+1 -3
drivers/mmc/host/cavium.c
··· 905 905 { 906 906 struct mmc_host *mmc = slot->mmc; 907 907 908 - clock = min(clock, mmc->f_max); 909 - clock = max(clock, mmc->f_min); 910 - slot->clock = clock; 908 + slot->clock = clamp(clock, mmc->f_min, mmc->f_max); 911 909 } 912 910 913 911 static int cvm_mmc_init_lowlevel(struct cvm_mmc_slot *slot)
+1 -1
drivers/mmc/host/dw_mmc-bluefield.c
··· 73 73 .name = "dwmmc_bluefield", 74 74 .probe_type = PROBE_PREFER_ASYNCHRONOUS, 75 75 .of_match_table = dw_mci_bluefield_match, 76 - .pm = &dw_mci_pltfm_pmops, 76 + .pm = pm_ptr(&dw_mci_pmops), 77 77 }, 78 78 }; 79 79
+4 -5
drivers/mmc/host/dw_mmc-exynos.c
··· 185 185 * HOLD register should be bypassed in case there is no phase shift 186 186 * applied on CMD/DATA that is sent to the card. 187 187 */ 188 - if (!SDMMC_CLKSEL_GET_DRV_WD3(clksel) && host->slot) 189 - set_bit(DW_MMC_CARD_NO_USE_HOLD, &host->slot->flags); 188 + if (!SDMMC_CLKSEL_GET_DRV_WD3(clksel)) 189 + set_bit(DW_MMC_CARD_NO_USE_HOLD, &host->flags); 190 190 } 191 191 192 192 static int dw_mci_exynos_runtime_resume(struct device *dev) ··· 530 530 return loc; 531 531 } 532 532 533 - static int dw_mci_exynos_execute_tuning(struct dw_mci_slot *slot, u32 opcode) 533 + static int dw_mci_exynos_execute_tuning(struct dw_mci *host, u32 opcode) 534 534 { 535 - struct dw_mci *host = slot->host; 536 535 struct dw_mci_exynos_priv_data *priv = host->priv; 537 - struct mmc_host *mmc = slot->mmc; 536 + struct mmc_host *mmc = host->mmc; 538 537 u8 start_smpl, smpl, candidates = 0; 539 538 s8 found; 540 539 int ret = 0;
+2 -4
drivers/mmc/host/dw_mmc-hi3798cv200.c
··· 57 57 clk_set_phase(priv->drive_clk, 135); 58 58 } 59 59 60 - static int dw_mci_hi3798cv200_execute_tuning(struct dw_mci_slot *slot, 61 - u32 opcode) 60 + static int dw_mci_hi3798cv200_execute_tuning(struct dw_mci *host, u32 opcode) 62 61 { 63 62 static const int degrees[] = { 0, 45, 90, 135, 180, 225, 270, 315 }; 64 - struct dw_mci *host = slot->host; 65 63 struct hi3798cv200_priv *priv = host->priv; 66 64 int raise_point = -1, fall_point = -1; 67 65 int err, prev_err = -1; ··· 70 72 clk_set_phase(priv->sample_clk, degrees[i]); 71 73 mci_writel(host, RINTSTS, ALL_INT_CLR); 72 74 73 - err = mmc_send_tuning(slot->mmc, opcode, NULL); 75 + err = mmc_send_tuning(host->mmc, opcode, NULL); 74 76 if (!err) 75 77 found = 1; 76 78
+12 -16
drivers/mmc/host/dw_mmc-hi3798mv200.c
··· 30 30 struct clk *drive_clk; 31 31 struct regmap *crg_reg; 32 32 u32 sap_dll_offset; 33 - struct mmc_clk_phase_map phase_map; 34 33 }; 35 34 36 35 static void dw_mci_hi3798mv200_set_ios(struct dw_mci *host, struct mmc_ios *ios) 37 36 { 38 37 struct dw_mci_hi3798mv200_priv *priv = host->priv; 39 - struct mmc_clk_phase phase = priv->phase_map.phase[ios->timing]; 38 + struct mmc_clk_phase phase = host->phase_map.phase[ios->timing]; 40 39 u32 val; 41 40 42 41 val = mci_readl(host, ENABLE_SHIFT); ··· 73 74 } 74 75 } 75 76 76 - static inline int dw_mci_hi3798mv200_enable_tuning(struct dw_mci_slot *slot) 77 + static inline int dw_mci_hi3798mv200_enable_tuning(struct dw_mci *host) 77 78 { 78 - struct dw_mci_hi3798mv200_priv *priv = slot->host->priv; 79 + struct dw_mci_hi3798mv200_priv *priv = host->priv; 79 80 80 81 return regmap_clear_bits(priv->crg_reg, priv->sap_dll_offset, SAP_DLL_CTRL_DLLMODE); 81 82 } 82 83 83 - static inline int dw_mci_hi3798mv200_disable_tuning(struct dw_mci_slot *slot) 84 + static inline int dw_mci_hi3798mv200_disable_tuning(struct dw_mci *host) 84 85 { 85 - struct dw_mci_hi3798mv200_priv *priv = slot->host->priv; 86 + struct dw_mci_hi3798mv200_priv *priv = host->priv; 86 87 87 88 return regmap_set_bits(priv->crg_reg, priv->sap_dll_offset, SAP_DLL_CTRL_DLLMODE); 88 89 } 89 90 90 - static int dw_mci_hi3798mv200_execute_tuning_mix_mode(struct dw_mci_slot *slot, 91 + static int dw_mci_hi3798mv200_execute_tuning_mix_mode(struct dw_mci *host, 91 92 u32 opcode) 92 93 { 93 94 static const int degrees[] = { 0, 45, 90, 135, 180, 225, 270, 315 }; 94 - struct dw_mci *host = slot->host; 95 95 struct dw_mci_hi3798mv200_priv *priv = host->priv; 96 96 int raise_point = -1, fall_point = -1, mid; 97 97 int err, prev_err = -1; ··· 99 101 int i; 100 102 int ret; 101 103 102 - ret = dw_mci_hi3798mv200_enable_tuning(slot); 104 + ret = dw_mci_hi3798mv200_enable_tuning(host); 103 105 if (ret < 0) 104 106 return ret; 105 107 ··· 113 115 * 114 116 * Treat edge(flip) 
found as an error too. 115 117 */ 116 - err = mmc_send_tuning(slot->mmc, opcode, NULL); 118 + err = mmc_send_tuning(host->mmc, opcode, NULL); 117 119 regval = mci_readl(host, TUNING_CTRL); 118 120 if (err || (regval & SDMMC_TUNING_FIND_EDGE)) 119 121 err = 1; ··· 134 136 } 135 137 136 138 tuning_out: 137 - ret = dw_mci_hi3798mv200_disable_tuning(slot); 139 + ret = dw_mci_hi3798mv200_disable_tuning(host); 138 140 if (ret < 0) 139 141 return ret; 140 142 ··· 157 159 * We don't care what timing we are tuning for, 158 160 * simply use the same phase for all timing needs tuning. 159 161 */ 160 - priv->phase_map.phase[MMC_TIMING_MMC_HS200].in_deg = degrees[mid]; 161 - priv->phase_map.phase[MMC_TIMING_MMC_HS400].in_deg = degrees[mid]; 162 - priv->phase_map.phase[MMC_TIMING_UHS_SDR104].in_deg = degrees[mid]; 162 + host->phase_map.phase[MMC_TIMING_MMC_HS200].in_deg = degrees[mid]; 163 + host->phase_map.phase[MMC_TIMING_MMC_HS400].in_deg = degrees[mid]; 164 + host->phase_map.phase[MMC_TIMING_UHS_SDR104].in_deg = degrees[mid]; 163 165 164 166 clk_set_phase(priv->sample_clk, degrees[mid]); 165 167 dev_dbg(host->dev, "Tuning clk_sample[%d, %d], set[%d]\n", ··· 183 185 priv = devm_kzalloc(host->dev, sizeof(*priv), GFP_KERNEL); 184 186 if (!priv) 185 187 return -ENOMEM; 186 - 187 - mmc_of_parse_clk_phase(host->dev, &priv->phase_map); 188 188 189 189 priv->sample_clk = devm_clk_get_enabled(host->dev, "ciu-sample"); 190 190 if (IS_ERR(priv->sample_clk))
+17 -34
drivers/mmc/host/dw_mmc-k3.c
··· 53 53 #define USE_DLY_MAX_SMPL (14) 54 54 55 55 struct k3_priv { 56 - int ctrl_id; 57 56 u32 cur_speed; 58 57 struct regmap *reg; 59 58 }; ··· 126 127 if (IS_ERR(priv->reg)) 127 128 priv->reg = NULL; 128 129 129 - priv->ctrl_id = of_alias_get_id(host->dev->of_node, "mshc"); 130 - if (priv->ctrl_id < 0) 131 - priv->ctrl_id = 0; 132 - 133 - if (priv->ctrl_id >= TIMING_MODE) 134 - return -EINVAL; 135 - 136 130 host->priv = priv; 137 131 return 0; 138 132 } 139 133 140 - static int dw_mci_hi6220_switch_voltage(struct mmc_host *mmc, struct mmc_ios *ios) 134 + static int dw_mci_hi6220_switch_voltage(struct dw_mci *host, struct mmc_ios *ios) 141 135 { 142 - struct dw_mci_slot *slot = mmc_priv(mmc); 143 136 struct k3_priv *priv; 144 - struct dw_mci *host; 137 + struct mmc_host *mmc = host->mmc; 145 138 int min_uv, max_uv; 146 139 int ret; 147 140 148 - host = slot->host; 149 141 priv = host->priv; 150 142 151 143 if (!priv || !priv->reg) ··· 189 199 host->bus_hz = clk_get_rate(host->biu_clk); 190 200 } 191 201 192 - static int dw_mci_hi6220_execute_tuning(struct dw_mci_slot *slot, u32 opcode) 202 + static int dw_mci_hi6220_execute_tuning(struct dw_mci *host, u32 opcode) 193 203 { 194 204 return 0; 195 205 } ··· 203 213 .execute_tuning = dw_mci_hi6220_execute_tuning, 204 214 }; 205 215 206 - static void dw_mci_hs_set_timing(struct dw_mci *host, int timing, 216 + static int dw_mci_hs_set_timing(struct dw_mci *host, int timing, 207 217 int smpl_phase) 208 218 { 209 219 u32 drv_phase; ··· 212 222 u32 enable_shift = 0; 213 223 u32 reg_value; 214 224 int ctrl_id; 215 - struct k3_priv *priv; 216 225 217 - priv = host->priv; 218 - ctrl_id = priv->ctrl_id; 226 + ctrl_id = host->mmc->index; 227 + if (ctrl_id >= TIMING_MODE) 228 + return -EINVAL; 219 229 220 230 drv_phase = hs_timing_cfg[ctrl_id][timing].drv_phase; 221 231 smpl_dly = hs_timing_cfg[ctrl_id][timing].smpl_dly; ··· 252 262 253 263 /* We should delay 1ms wait for timing setting finished. 
*/ 254 264 usleep_range(1000, 2000); 265 + 266 + return 0; 255 267 } 256 268 257 269 static int dw_mci_hi3660_init(struct dw_mci *host) ··· 261 269 mci_writel(host, CDTHRCTL, SDMMC_SET_THLD(SDCARD_RD_THRESHOLD, 262 270 SDMMC_CARD_RD_THR_EN)); 263 271 264 - dw_mci_hs_set_timing(host, MMC_TIMING_LEGACY, -1); 265 272 host->bus_hz /= (GENCLK_DIV + 1); 266 273 267 - return 0; 274 + return dw_mci_hs_set_timing(host, MMC_TIMING_LEGACY, -1); 268 275 } 269 276 270 277 static int dw_mci_set_sel18(struct dw_mci *host, bool set) ··· 355 364 return middle_range; 356 365 } 357 366 358 - static int dw_mci_hi3660_execute_tuning(struct dw_mci_slot *slot, u32 opcode) 367 + static int dw_mci_hi3660_execute_tuning(struct dw_mci *host, u32 opcode) 359 368 { 360 369 int i = 0; 361 - struct dw_mci *host = slot->host; 362 - struct mmc_host *mmc = slot->mmc; 370 + struct mmc_host *mmc = host->mmc; 363 371 int smpl_phase = 0; 364 372 u32 tuning_sample_flag = 0; 365 373 int best_clksmpl = 0; ··· 388 398 return 0; 389 399 } 390 400 391 - static int dw_mci_hi3660_switch_voltage(struct mmc_host *mmc, 401 + static int dw_mci_hi3660_switch_voltage(struct dw_mci *host, 392 402 struct mmc_ios *ios) 393 403 { 394 - int ret = 0; 395 - struct dw_mci_slot *slot = mmc_priv(mmc); 396 404 struct k3_priv *priv; 397 - struct dw_mci *host; 405 + struct mmc_host *mmc = host->mmc; 406 + int ret = 0; 398 407 399 - host = slot->host; 400 408 priv = host->priv; 401 409 402 410 if (!priv || !priv->reg) 403 411 return 0; 404 412 405 - if (priv->ctrl_id == DWMMC_SDIO_ID) 413 + if (mmc->index == DWMMC_SDIO_ID) 406 414 return 0; 407 415 408 416 if (ios->signal_voltage == MMC_SIGNAL_VOLTAGE_330) ··· 448 460 return dw_mci_pltfm_register(pdev, drv_data); 449 461 } 450 462 451 - static const struct dev_pm_ops dw_mci_k3_dev_pm_ops = { 452 - SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, pm_runtime_force_resume) 453 - RUNTIME_PM_OPS(dw_mci_runtime_suspend, dw_mci_runtime_resume, NULL) 454 - }; 455 - 456 463 static struct 
platform_driver dw_mci_k3_pltfm_driver = { 457 464 .probe = dw_mci_k3_probe, 458 465 .remove = dw_mci_pltfm_remove, ··· 455 472 .name = "dwmmc_k3", 456 473 .probe_type = PROBE_PREFER_ASYNCHRONOUS, 457 474 .of_match_table = dw_mci_k3_match, 458 - .pm = pm_ptr(&dw_mci_k3_dev_pm_ops), 475 + .pm = pm_ptr(&dw_mci_pmops), 459 476 }, 460 477 }; 461 478
+15 -22
drivers/mmc/host/dw_mmc-pci.c
··· 10 10 #include <linux/io.h> 11 11 #include <linux/irq.h> 12 12 #include <linux/pci.h> 13 + #include <linux/pci-epf.h> 13 14 #include <linux/pm_runtime.h> 14 15 #include <linux/slab.h> 15 16 #include <linux/mmc/host.h> 16 17 #include <linux/mmc/mmc.h> 17 18 #include "dw_mmc.h" 19 + #include "dw_mmc-pltfm.h" 18 20 19 - #define PCI_BAR_NO 2 20 21 #define SYNOPSYS_DW_MCI_VENDOR_ID 0x700 21 22 #define SYNOPSYS_DW_MCI_DEVICE_ID 0x1107 22 23 /* Defining the Capabilities */ ··· 25 24 MMC_CAP_SD_HIGHSPEED | MMC_CAP_8_BIT_DATA |\ 26 25 MMC_CAP_SDIO_IRQ) 27 26 28 - static struct dw_mci_board pci_board_data = { 29 - .caps = DW_MCI_CAPABILITIES, 30 - .bus_hz = 33 * 1000 * 1000, 31 - .detect_delay_ms = 200, 32 - .fifo_depth = 32, 27 + static const struct dw_mci_drv_data pci_drv_data = { 28 + .common_caps = DW_MCI_CAPABILITIES, 33 29 }; 34 30 35 31 static int dw_mci_pci_probe(struct pci_dev *pdev, ··· 39 41 if (ret) 40 42 return ret; 41 43 42 - host = devm_kzalloc(&pdev->dev, sizeof(struct dw_mci), GFP_KERNEL); 43 - if (!host) 44 - return -ENOMEM; 44 + host = dw_mci_alloc_host(&pdev->dev); 45 + if (IS_ERR(host)) 46 + return PTR_ERR(host); 45 47 46 48 host->irq = pdev->irq; 47 49 host->irq_flags = IRQF_SHARED; 48 - host->dev = &pdev->dev; 49 - host->pdata = &pci_board_data; 50 + host->fifo_depth = 32; 51 + host->detect_delay_ms = 200; 52 + host->bus_hz = 33 * 1000 * 1000; 53 + host->drv_data = &pci_drv_data; 50 54 51 - ret = pcim_iomap_regions(pdev, 1 << PCI_BAR_NO, pci_name(pdev)); 52 - if (ret) 53 - return ret; 54 - 55 - host->regs = pcim_iomap_table(pdev)[PCI_BAR_NO]; 55 + host->regs = pcim_iomap_region(pdev, BAR_2, pci_name(pdev)); 56 + if (IS_ERR(host->regs)) 57 + return PTR_ERR(host->regs); 56 58 57 59 pci_set_master(pdev); 58 60 ··· 72 74 dw_mci_remove(host); 73 75 } 74 76 75 - static const struct dev_pm_ops dw_mci_pci_dev_pm_ops = { 76 - SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, pm_runtime_force_resume) 77 - RUNTIME_PM_OPS(dw_mci_runtime_suspend, 
dw_mci_runtime_resume, NULL) 78 - }; 79 - 80 77 static const struct pci_device_id dw_mci_pci_id[] = { 81 78 { PCI_DEVICE(SYNOPSYS_DW_MCI_VENDOR_ID, SYNOPSYS_DW_MCI_DEVICE_ID) }, 82 79 {} ··· 84 91 .probe = dw_mci_pci_probe, 85 92 .remove = dw_mci_pci_remove, 86 93 .driver = { 87 - .pm = pm_ptr(&dw_mci_pci_dev_pm_ops), 94 + .pm = pm_ptr(&dw_mci_pmops), 88 95 }, 89 96 }; 90 97
+12 -22
drivers/mmc/host/dw_mmc-pltfm.c
··· 33 33 struct dw_mci *host; 34 34 struct resource *regs; 35 35 36 - host = devm_kzalloc(&pdev->dev, sizeof(struct dw_mci), GFP_KERNEL); 37 - if (!host) 38 - return -ENOMEM; 36 + host = dw_mci_alloc_host(&pdev->dev); 37 + if (IS_ERR(host)) 38 + return PTR_ERR(host); 39 39 40 40 host->irq = platform_get_irq(pdev, 0); 41 41 if (host->irq < 0) 42 42 return host->irq; 43 43 44 44 host->drv_data = drv_data; 45 - host->dev = &pdev->dev; 46 45 host->irq_flags = 0; 47 - host->pdata = pdev->dev.platform_data; 48 46 49 47 host->regs = devm_platform_get_and_ioremap_resource(pdev, 0, &regs); 50 48 if (IS_ERR(host->regs)) ··· 56 58 } 57 59 EXPORT_SYMBOL_GPL(dw_mci_pltfm_register); 58 60 59 - const struct dev_pm_ops dw_mci_pltfm_pmops = { 60 - SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, 61 - pm_runtime_force_resume) 62 - SET_RUNTIME_PM_OPS(dw_mci_runtime_suspend, 63 - dw_mci_runtime_resume, 64 - NULL) 65 - }; 66 - EXPORT_SYMBOL_GPL(dw_mci_pltfm_pmops); 67 - 68 61 static int dw_mci_socfpga_priv_init(struct dw_mci *host) 69 62 { 70 63 struct device_node *np = host->dev->of_node; 64 + struct mmc_clk_phase phase; 71 65 struct regmap *sys_mgr_base_addr; 72 - u32 clk_phase[2] = {0}, reg_offset, reg_shift; 73 - int i, rc, hs_timing; 66 + u32 reg_offset, reg_shift; 67 + int hs_timing; 74 68 75 - rc = of_property_read_variable_u32_array(np, "clk-phase-sd-hs", &clk_phase[0], 2, 0); 76 - if (rc < 0) 69 + phase = host->phase_map.phase[MMC_TIMING_SD_HS]; 70 + if (!phase.valid) 77 71 return 0; 78 72 79 73 sys_mgr_base_addr = altr_sysmgr_regmap_lookup_by_phandle(np, "altr,sysmgr-syscon"); ··· 77 87 of_property_read_u32_index(np, "altr,sysmgr-syscon", 1, &reg_offset); 78 88 of_property_read_u32_index(np, "altr,sysmgr-syscon", 2, &reg_shift); 79 89 80 - for (i = 0; i < ARRAY_SIZE(clk_phase); i++) 81 - clk_phase[i] /= SOCFPGA_DW_MMC_CLK_PHASE_STEP; 90 + phase.in_deg /= SOCFPGA_DW_MMC_CLK_PHASE_STEP; 91 + phase.out_deg /= SOCFPGA_DW_MMC_CLK_PHASE_STEP; 82 92 83 - hs_timing = 
SYSMGR_SDMMC_CTRL_SET(clk_phase[0], clk_phase[1], reg_shift); 93 + hs_timing = SYSMGR_SDMMC_CTRL_SET(phase.in_deg, phase.out_deg, reg_shift); 84 94 regmap_write(sys_mgr_base_addr, reg_offset, hs_timing); 85 95 86 96 return 0; ··· 126 136 .name = "dw_mmc", 127 137 .probe_type = PROBE_PREFER_ASYNCHRONOUS, 128 138 .of_match_table = dw_mci_pltfm_match, 129 - .pm = &dw_mci_pltfm_pmops, 139 + .pm = pm_ptr(&dw_mci_pmops), 130 140 }, 131 141 }; 132 142
+1 -1
drivers/mmc/host/dw_mmc-pltfm.h
··· 11 11 extern int dw_mci_pltfm_register(struct platform_device *pdev, 12 12 const struct dw_mci_drv_data *drv_data); 13 13 extern void dw_mci_pltfm_remove(struct platform_device *pdev); 14 - extern const struct dev_pm_ops dw_mci_pltfm_pmops; 14 + extern const struct dev_pm_ops dw_mci_pmops; 15 15 16 16 #endif /* _DW_MMC_PLTFM_H_ */
+22 -14
drivers/mmc/host/dw_mmc-rockchip.c
··· 179 179 static void dw_mci_rk3288_set_ios(struct dw_mci *host, struct mmc_ios *ios) 180 180 { 181 181 struct dw_mci_rockchip_priv_data *priv = host->priv; 182 - int ret; 182 + struct mmc_clk_phase phase = host->phase_map.phase[ios->timing]; 183 + int ret, sample_phase, drv_phase; 183 184 unsigned int cclkin; 184 185 u32 bus_hz; 185 186 ··· 214 213 } 215 214 216 215 /* Make sure we use phases which we can enumerate with */ 217 - if (!IS_ERR(priv->sample_clk) && ios->timing <= MMC_TIMING_SD_HS) 218 - rockchip_mmc_set_phase(host, true, priv->default_sample_phase); 216 + if (!IS_ERR(priv->sample_clk)) { 217 + /* Keep backward compatibility */ 218 + if (ios->timing <= MMC_TIMING_SD_HS) { 219 + sample_phase = phase.valid ? phase.in_deg : priv->default_sample_phase; 220 + rockchip_mmc_set_phase(host, true, sample_phase); 221 + } else if (phase.valid) { 222 + rockchip_mmc_set_phase(host, true, phase.in_deg); 223 + } 224 + } 219 225 220 226 /* 221 227 * Set the drive phase offset based on speed mode to achieve hold times. ··· 251 243 * same results, for instance). 252 244 */ 253 245 if (!IS_ERR(priv->drv_clk)) { 254 - int phase; 255 - 256 246 /* 257 247 * In almost all cases a 90 degree phase offset will provide 258 248 * sufficient hold times across all valid input clock rates 259 249 * assuming delay_o is not absurd for a given SoC. We'll use 260 250 * that as a default. 261 251 */ 262 - phase = 90; 252 + drv_phase = 90; 263 253 264 254 switch (ios->timing) { 265 255 case MMC_TIMING_MMC_DDR52: ··· 267 261 * to get the same timings. 268 262 */ 269 263 if (ios->bus_width == MMC_BUS_WIDTH_8) 270 - phase = 180; 264 + drv_phase = 180; 271 265 break; 272 266 case MMC_TIMING_UHS_SDR104: 273 267 case MMC_TIMING_MMC_HS200: ··· 279 273 * SoCs measured this seems to be OK, but it doesn't 280 274 * hurt to give margin here, so we use 180. 
281 275 */ 282 - phase = 180; 276 + drv_phase = 180; 283 277 break; 284 278 } 285 279 286 - rockchip_mmc_set_phase(host, false, phase); 280 + /* Use out phase from phase map first */ 281 + if (phase.valid) 282 + drv_phase = phase.out_deg; 283 + rockchip_mmc_set_phase(host, false, drv_phase); 287 284 } 288 285 } 289 286 290 287 #define TUNING_ITERATION_TO_PHASE(i, num_phases) \ 291 288 (DIV_ROUND_UP((i) * 360, num_phases)) 292 289 293 - static int dw_mci_rk3288_execute_tuning(struct dw_mci_slot *slot, u32 opcode) 290 + static int dw_mci_rk3288_execute_tuning(struct dw_mci *host, u32 opcode) 294 291 { 295 - struct dw_mci *host = slot->host; 296 292 struct dw_mci_rockchip_priv_data *priv = host->priv; 297 - struct mmc_host *mmc = slot->mmc; 293 + struct mmc_host *mmc = host->mmc; 298 294 int ret = 0; 299 295 int i; 300 296 bool v, prev_v = 0, first_v; ··· 484 476 struct dw_mci_rockchip_priv_data *priv = host->priv; 485 477 int ret, i; 486 478 487 - /* It is slot 8 on Rockchip SoCs */ 488 - host->sdio_id0 = 8; 479 + /* SDIO irq is the 8th on Rockchip SoCs */ 480 + host->sdio_irq = 8; 489 481 490 482 if (of_device_is_compatible(host->dev->of_node, "rockchip,rk3288-dw-mshc")) { 491 483 host->bus_hz /= RK3288_CLKGEN_DIV;
+2 -3
drivers/mmc/host/dw_mmc-starfive.c
··· 53 53 mdelay(1); 54 54 } 55 55 56 - static int dw_mci_starfive_execute_tuning(struct dw_mci_slot *slot, 56 + static int dw_mci_starfive_execute_tuning(struct dw_mci *host, 57 57 u32 opcode) 58 58 { 59 59 static const int grade = MAX_DELAY_CHAIN; 60 - struct dw_mci *host = slot->host; 61 60 int smpl_phase, smpl_raise = -1, smpl_fall = -1; 62 61 int ret; 63 62 ··· 64 65 dw_mci_starfive_set_sample_phase(host, smpl_phase); 65 66 mci_writel(host, RINTSTS, ALL_INT_CLR); 66 67 67 - ret = mmc_send_tuning(slot->mmc, opcode, NULL); 68 + ret = mmc_send_tuning(host->mmc, opcode, NULL); 68 69 69 70 if (!ret && smpl_raise < 0) { 70 71 smpl_raise = smpl_phase;
+314 -505
drivers/mmc/host/dw_mmc.c
··· 7 7 * Copyright (C) 2009, 2010 Imagination Technologies Ltd. 8 8 */ 9 9 10 - #include <linux/blkdev.h> 10 + #include <linux/bitops.h> 11 11 #include <linux/clk.h> 12 12 #include <linux/debugfs.h> 13 + #include <linux/delay.h> 13 14 #include <linux/device.h> 14 15 #include <linux/dma-mapping.h> 15 16 #include <linux/err.h> 16 - #include <linux/init.h> 17 17 #include <linux/interrupt.h> 18 18 #include <linux/iopoll.h> 19 - #include <linux/ioport.h> 20 - #include <linux/ktime.h> 21 - #include <linux/module.h> 22 - #include <linux/platform_device.h> 23 - #include <linux/pm_runtime.h> 24 - #include <linux/prandom.h> 25 - #include <linux/seq_file.h> 26 - #include <linux/slab.h> 27 - #include <linux/stat.h> 28 - #include <linux/delay.h> 29 19 #include <linux/irq.h> 20 + #include <linux/ktime.h> 30 21 #include <linux/mmc/card.h> 31 22 #include <linux/mmc/host.h> 32 23 #include <linux/mmc/mmc.h> 33 24 #include <linux/mmc/sd.h> 34 25 #include <linux/mmc/sdio.h> 35 - #include <linux/bitops.h> 36 - #include <linux/regulator/consumer.h> 37 - #include <linux/of.h> 38 26 #include <linux/mmc/slot-gpio.h> 27 + #include <linux/module.h> 28 + #include <linux/of.h> 29 + #include <linux/platform_device.h> 30 + #include <linux/pm_runtime.h> 31 + #include <linux/regulator/consumer.h> 39 32 40 33 #include "dw_mmc.h" 41 34 ··· 40 47 SDMMC_INT_RESP_ERR | SDMMC_INT_HLE) 41 48 #define DW_MCI_ERROR_FLAGS (DW_MCI_DATA_ERROR_FLAGS | \ 42 49 DW_MCI_CMD_ERROR_FLAGS) 43 - #define DW_MCI_SEND_STATUS 1 44 - #define DW_MCI_RECV_STATUS 2 45 50 #define DW_MCI_DMA_THRESHOLD 16 46 51 47 52 #define DW_MCI_FREQ_MAX 200000000 /* unit: HZ */ ··· 98 107 #if defined(CONFIG_DEBUG_FS) 99 108 static int dw_mci_req_show(struct seq_file *s, void *v) 100 109 { 101 - struct dw_mci_slot *slot = s->private; 110 + struct dw_mci *host = s->private; 102 111 struct mmc_request *mrq; 103 112 struct mmc_command *cmd; 104 113 struct mmc_command *stop; 105 114 struct mmc_data *data; 106 115 107 116 /* Make sure we get a 
consistent snapshot */ 108 - spin_lock_bh(&slot->host->lock); 109 - mrq = slot->mrq; 117 + spin_lock_bh(&host->lock); 118 + mrq = host->mrq; 110 119 111 120 if (mrq) { 112 121 cmd = mrq->cmd; ··· 131 140 stop->resp[2], stop->error); 132 141 } 133 142 134 - spin_unlock_bh(&slot->host->lock); 143 + spin_unlock_bh(&host->lock); 135 144 136 145 return 0; 137 146 } ··· 156 165 } 157 166 DEFINE_SHOW_ATTRIBUTE(dw_mci_regs); 158 167 159 - static void dw_mci_init_debugfs(struct dw_mci_slot *slot) 168 + static void dw_mci_init_debugfs(struct dw_mci *host) 160 169 { 161 - struct mmc_host *mmc = slot->mmc; 162 - struct dw_mci *host = slot->host; 170 + struct mmc_host *mmc = host->mmc; 163 171 struct dentry *root; 164 172 165 173 root = mmc->debugfs_root; ··· 166 176 return; 167 177 168 178 debugfs_create_file("regs", 0400, root, host, &dw_mci_regs_fops); 169 - debugfs_create_file("req", 0400, root, slot, &dw_mci_req_fops); 179 + debugfs_create_file("req", 0400, root, host, &dw_mci_req_fops); 170 180 debugfs_create_u32("state", 0400, root, &host->state); 171 181 debugfs_create_xul("pending_events", 0400, root, 172 182 &host->pending_events); ··· 221 231 } 222 232 } 223 233 224 - static void mci_send_cmd(struct dw_mci_slot *slot, u32 cmd, u32 arg) 234 + static void mci_send_cmd(struct dw_mci *host, u32 cmd, u32 arg) 225 235 { 226 - struct dw_mci *host = slot->host; 227 236 unsigned int cmd_status = 0; 228 237 229 238 mci_writel(host, CMDARG, arg); ··· 233 244 if (readl_poll_timeout_atomic(host->regs + SDMMC_CMD, cmd_status, 234 245 !(cmd_status & SDMMC_CMD_START), 235 246 1, 500 * USEC_PER_MSEC)) 236 - dev_err(&slot->mmc->class_dev, 247 + dev_err(&host->mmc->class_dev, 237 248 "Timeout sending command (cmd %#x arg %#x status %#x)\n", 238 249 cmd, arg, cmd_status); 239 250 } 240 251 241 252 static u32 dw_mci_prepare_command(struct mmc_host *mmc, struct mmc_command *cmd) 242 253 { 243 - struct dw_mci_slot *slot = mmc_priv(mmc); 244 - struct dw_mci *host = slot->host; 254 + struct 
dw_mci *host = mmc_priv(mmc); 245 255 u32 cmdr; 246 256 247 257 cmd->error = -EINPROGRESS; ··· 262 274 cmdr |= SDMMC_CMD_VOLT_SWITCH; 263 275 264 276 /* Change state to continue to handle CMD11 weirdness */ 265 - WARN_ON(slot->host->state != STATE_SENDING_CMD); 266 - slot->host->state = STATE_SENDING_CMD11; 277 + WARN_ON(host->state != STATE_SENDING_CMD); 278 + host->state = STATE_SENDING_CMD11; 267 279 268 280 /* 269 281 * We need to disable low power mode (automatic clock stop) ··· 277 289 * until the voltage change is all done. 278 290 */ 279 291 clk_en_a = mci_readl(host, CLKENA); 280 - clk_en_a &= ~(SDMMC_CLKEN_LOW_PWR << slot->id); 292 + clk_en_a &= ~SDMMC_CLKEN_LOW_PWR; 281 293 mci_writel(host, CLKENA, clk_en_a); 282 - mci_send_cmd(slot, SDMMC_CMD_UPD_CLK | 294 + mci_send_cmd(host, SDMMC_CMD_UPD_CLK | 283 295 SDMMC_CMD_PRV_DAT_WAIT, 0); 284 296 } 285 297 ··· 299 311 cmdr |= SDMMC_CMD_DAT_WR; 300 312 } 301 313 302 - if (!test_bit(DW_MMC_CARD_NO_USE_HOLD, &slot->flags)) 314 + if (!test_bit(DW_MMC_CARD_NO_USE_HOLD, &host->flags)) 303 315 cmdr |= SDMMC_CMD_USE_HOLD_REG; 304 316 305 317 return cmdr; ··· 338 350 cmdr = stop->opcode | SDMMC_CMD_STOP | 339 351 SDMMC_CMD_RESP_CRC | SDMMC_CMD_RESP_EXP; 340 352 341 - if (!test_bit(DW_MMC_CARD_NO_USE_HOLD, &host->slot->flags)) 353 + if (!test_bit(DW_MMC_CARD_NO_USE_HOLD, &host->flags)) 342 354 cmdr |= SDMMC_CMD_USE_HOLD_REG; 343 355 344 356 return cmdr; ··· 468 480 if ((host->use_dma == TRANS_MODE_EDMAC) && 469 481 data && (data->flags & MMC_DATA_READ)) 470 482 /* Invalidate cache after read */ 471 - dma_sync_sg_for_cpu(mmc_dev(host->slot->mmc), 483 + dma_sync_sg_for_cpu(mmc_dev(host->mmc), 472 484 data->sg, 473 485 data->sg_len, 474 486 DMA_FROM_DEVICE); ··· 563 575 return 0; 564 576 } 565 577 566 - static inline int dw_mci_prepare_desc64(struct dw_mci *host, 567 - struct mmc_data *data, 568 - unsigned int sg_len) 578 + static inline int dw_mci_prepare_desc(struct dw_mci *host, struct mmc_data *data, 579 + unsigned int 
sg_len, bool is_64bit) 569 580 { 570 581 unsigned int desc_len; 571 - struct idmac_desc_64addr *desc_first, *desc_last, *desc; 572 - u32 val; 573 - int i; 582 + struct idmac_desc *desc_first, *desc_last, *desc; 583 + struct idmac_desc_64addr *desc64_first, *desc64_last, *desc64; 584 + u32 val, des0; 585 + int i, err; 574 586 575 - desc_first = desc_last = desc = host->sg_cpu; 587 + if (is_64bit) 588 + desc64_first = desc64_last = desc64 = host->sg_cpu; 589 + else 590 + desc_first = desc_last = desc = host->sg_cpu; 576 591 577 592 for (i = 0; i < sg_len; i++) { 578 593 unsigned int length = sg_dma_len(&data->sg[i]); 579 594 580 595 u64 mem_addr = sg_dma_address(&data->sg[i]); 581 596 582 - for ( ; length ; desc++) { 597 + while (length > 0) { 583 598 desc_len = (length <= DW_MCI_DESC_DATA_LENGTH) ? 584 599 length : DW_MCI_DESC_DATA_LENGTH; 585 600 ··· 594 603 * isn't still owned by IDMAC as IDMAC's write 595 604 * ops and CPU's read ops are asynchronous. 596 605 */ 597 - if (readl_poll_timeout_atomic(&desc->des0, val, 598 - !(val & IDMAC_DES0_OWN), 599 - 10, 100 * USEC_PER_MSEC)) 606 + if (is_64bit) 607 + err = readl_poll_timeout_atomic(&desc64->des0, val, 608 + IDMAC_OWN_CLR64(val), 10, 100 * USEC_PER_MSEC); 609 + else 610 + err = readl_poll_timeout_atomic(&desc->des0, val, 611 + IDMAC_OWN_CLR64(val), 10, 100 * USEC_PER_MSEC); 612 + if (err) 600 613 goto err_own_bit; 601 614 615 + des0 = IDMAC_DES0_OWN | IDMAC_DES0_DIC | IDMAC_DES0_CH; 616 + if (is_64bit) 617 + desc64->des0 = des0; 618 + else 619 + desc->des0 = cpu_to_le32(des0); 620 + 602 621 /* 603 - * Set the OWN bit and disable interrupts 604 - * for this descriptor 622 + * 1. Set OWN bit and disable interrupts for this descriptor 623 + * 2. 
Set Buffer length 624 + * Set physical address to DMA to/from 605 625 */ 606 - desc->des0 = IDMAC_DES0_OWN | IDMAC_DES0_DIC | 607 - IDMAC_DES0_CH; 608 - 609 - /* Buffer length */ 610 - IDMAC_64ADDR_SET_BUFFER1_SIZE(desc, desc_len); 611 - 612 - /* Physical address to DMA to/from */ 613 - desc->des4 = mem_addr & 0xffffffff; 614 - desc->des5 = mem_addr >> 32; 626 + if (is_64bit) { 627 + desc64->des0 = IDMAC_DES0_OWN | IDMAC_DES0_DIC | IDMAC_DES0_CH; 628 + IDMAC_64ADDR_SET_BUFFER1_SIZE(desc64, desc_len); 629 + desc64->des4 = mem_addr & 0xffffffff; 630 + desc64->des5 = mem_addr >> 32; 631 + } else { 632 + IDMAC_SET_BUFFER1_SIZE(desc, desc_len); 633 + desc->des2 = cpu_to_le32(mem_addr); 634 + } 615 635 616 636 /* Update physical address for the next desc */ 617 637 mem_addr += desc_len; 618 638 619 639 /* Save pointer to the last descriptor */ 620 - desc_last = desc; 640 + if (is_64bit) { 641 + desc64_last = desc64; 642 + desc64++; 643 + } else { 644 + desc_last = desc; 645 + desc++; 646 + } 621 647 } 622 648 } 623 649 624 - /* Set first descriptor */ 625 - desc_first->des0 |= IDMAC_DES0_FD; 626 - 627 - /* Set last descriptor */ 628 - desc_last->des0 &= ~(IDMAC_DES0_CH | IDMAC_DES0_DIC); 629 - desc_last->des0 |= IDMAC_DES0_LD; 630 - 631 - return 0; 632 - err_own_bit: 633 - /* restore the descriptor chain as it's polluted */ 634 - dev_dbg(host->dev, "descriptor is still owned by IDMAC.\n"); 635 - memset(host->sg_cpu, 0, DESC_RING_BUF_SZ); 636 - dw_mci_idmac_init(host); 637 - return -EINVAL; 638 - } 639 - 640 - 641 - static inline int dw_mci_prepare_desc32(struct dw_mci *host, 642 - struct mmc_data *data, 643 - unsigned int sg_len) 644 - { 645 - unsigned int desc_len; 646 - struct idmac_desc *desc_first, *desc_last, *desc; 647 - u32 val; 648 - int i; 649 - 650 - desc_first = desc_last = desc = host->sg_cpu; 651 - 652 - for (i = 0; i < sg_len; i++) { 653 - unsigned int length = sg_dma_len(&data->sg[i]); 654 - 655 - u32 mem_addr = sg_dma_address(&data->sg[i]); 656 - 657 - 
for ( ; length ; desc++) { 658 - desc_len = (length <= DW_MCI_DESC_DATA_LENGTH) ? 659 - length : DW_MCI_DESC_DATA_LENGTH; 660 - 661 - length -= desc_len; 662 - 663 - /* 664 - * Wait for the former clear OWN bit operation 665 - * of IDMAC to make sure that this descriptor 666 - * isn't still owned by IDMAC as IDMAC's write 667 - * ops and CPU's read ops are asynchronous. 668 - */ 669 - if (readl_poll_timeout_atomic(&desc->des0, val, 670 - IDMAC_OWN_CLR64(val), 671 - 10, 672 - 100 * USEC_PER_MSEC)) 673 - goto err_own_bit; 674 - 675 - /* 676 - * Set the OWN bit and disable interrupts 677 - * for this descriptor 678 - */ 679 - desc->des0 = cpu_to_le32(IDMAC_DES0_OWN | 680 - IDMAC_DES0_DIC | 681 - IDMAC_DES0_CH); 682 - 683 - /* Buffer length */ 684 - IDMAC_SET_BUFFER1_SIZE(desc, desc_len); 685 - 686 - /* Physical address to DMA to/from */ 687 - desc->des2 = cpu_to_le32(mem_addr); 688 - 689 - /* Update physical address for the next desc */ 690 - mem_addr += desc_len; 691 - 692 - /* Save pointer to the last descriptor */ 693 - desc_last = desc; 694 - } 650 + /* Set the first descriptor and the last descriptor */ 651 + if (is_64bit) { 652 + desc64_first->des0 |= IDMAC_DES0_FD; 653 + desc64_last->des0 &= ~(IDMAC_DES0_CH | IDMAC_DES0_DIC); 654 + desc64_last->des0 |= IDMAC_DES0_LD; 655 + } else { 656 + desc_first->des0 |= cpu_to_le32(IDMAC_DES0_FD); 657 + desc_last->des0 &= cpu_to_le32(~(IDMAC_DES0_CH | IDMAC_DES0_DIC)); 658 + desc_last->des0 |= cpu_to_le32(IDMAC_DES0_LD); 695 659 } 696 - 697 - /* Set first descriptor */ 698 - desc_first->des0 |= cpu_to_le32(IDMAC_DES0_FD); 699 - 700 - /* Set last descriptor */ 701 - desc_last->des0 &= cpu_to_le32(~(IDMAC_DES0_CH | 702 - IDMAC_DES0_DIC)); 703 - desc_last->des0 |= cpu_to_le32(IDMAC_DES0_LD); 704 660 705 661 return 0; 706 662 err_own_bit: ··· 663 725 u32 temp; 664 726 int ret; 665 727 666 - if (host->dma_64bit_address == 1) 667 - ret = dw_mci_prepare_desc64(host, host->data, sg_len); 668 - else 669 - ret = 
dw_mci_prepare_desc32(host, host->data, sg_len); 670 - 728 + ret = dw_mci_prepare_desc(host, host->data, sg_len, host->dma_64bit_address); 671 729 if (ret) 672 730 goto out; 673 731 ··· 757 823 758 824 /* Flush cache before write */ 759 825 if (host->data->flags & MMC_DATA_WRITE) 760 - dma_sync_sg_for_device(mmc_dev(host->slot->mmc), sgl, 826 + dma_sync_sg_for_device(mmc_dev(host->mmc), sgl, 761 827 sg_elems, DMA_TO_DEVICE); 762 828 763 829 dma_async_issue_pending(host->dms->ch); ··· 847 913 static void dw_mci_pre_req(struct mmc_host *mmc, 848 914 struct mmc_request *mrq) 849 915 { 850 - struct dw_mci_slot *slot = mmc_priv(mmc); 916 + struct dw_mci *host = mmc_priv(mmc); 851 917 struct mmc_data *data = mrq->data; 852 918 853 - if (!slot->host->use_dma || !data) 919 + if (!host->use_dma || !data) 854 920 return; 855 921 856 922 /* This data might be unmapped at this time */ 857 923 data->host_cookie = COOKIE_UNMAPPED; 858 924 859 - if (dw_mci_pre_dma_transfer(slot->host, mrq->data, 925 + if (dw_mci_pre_dma_transfer(host, mrq->data, 860 926 COOKIE_PRE_MAPPED) < 0) 861 927 data->host_cookie = COOKIE_UNMAPPED; 862 928 } ··· 865 931 struct mmc_request *mrq, 866 932 int err) 867 933 { 868 - struct dw_mci_slot *slot = mmc_priv(mmc); 934 + struct dw_mci *host = mmc_priv(mmc); 869 935 struct mmc_data *data = mrq->data; 870 936 871 - if (!slot->host->use_dma || !data) 937 + if (!host->use_dma || !data) 872 938 return; 873 939 874 940 if (data->host_cookie != COOKIE_UNMAPPED) 875 - dma_unmap_sg(slot->host->dev, 941 + dma_unmap_sg(host->dev, 876 942 data->sg, 877 943 data->sg_len, 878 944 mmc_get_dma_dir(data)); ··· 881 947 882 948 static int dw_mci_get_cd(struct mmc_host *mmc) 883 949 { 884 - int present; 885 - struct dw_mci_slot *slot = mmc_priv(mmc); 886 - struct dw_mci *host = slot->host; 950 + struct dw_mci *host = mmc_priv(mmc); 887 951 int gpio_cd = mmc_gpio_get_cd(mmc); 888 952 889 - /* Use platform get_cd function, else try onboard card detect */ 890 - if (((mmc->caps 
& MMC_CAP_NEEDS_POLL) 891 - || !mmc_card_is_removable(mmc))) { 892 - present = 1; 953 + if (mmc->caps & MMC_CAP_NEEDS_POLL) 954 + return 1; 893 955 894 - if (!test_bit(DW_MMC_CARD_PRESENT, &slot->flags)) { 895 - if (mmc->caps & MMC_CAP_NEEDS_POLL) { 896 - dev_info(&mmc->class_dev, 897 - "card is polling.\n"); 898 - } else { 899 - dev_info(&mmc->class_dev, 900 - "card is non-removable.\n"); 901 - } 902 - set_bit(DW_MMC_CARD_PRESENT, &slot->flags); 903 - } 956 + if (!mmc_card_is_removable(mmc)) 957 + return 1; 904 958 905 - return present; 906 - } else if (gpio_cd >= 0) 907 - present = gpio_cd; 908 - else 909 - present = (mci_readl(slot->host, CDETECT) & (1 << slot->id)) 910 - == 0 ? 1 : 0; 959 + /* Try slot gpio detection */ 960 + if (gpio_cd >= 0) 961 + return !!gpio_cd; 911 962 912 - spin_lock_bh(&host->lock); 913 - if (present && !test_and_set_bit(DW_MMC_CARD_PRESENT, &slot->flags)) 914 - dev_dbg(&mmc->class_dev, "card is present\n"); 915 - else if (!present && 916 - !test_and_clear_bit(DW_MMC_CARD_PRESENT, &slot->flags)) 917 - dev_dbg(&mmc->class_dev, "card is not present\n"); 918 - spin_unlock_bh(&host->lock); 919 - 920 - return present; 963 + /* Host native card detect */ 964 + return !(mci_readl(host, CDETECT) & BIT(0)); 921 965 } 922 966 923 967 static void dw_mci_adjust_fifoth(struct dw_mci *host, struct mmc_data *data) ··· 1062 1150 host->data = data; 1063 1151 1064 1152 if (data->flags & MMC_DATA_READ) 1065 - host->dir_status = DW_MCI_RECV_STATUS; 1153 + host->dir_status = MMC_DATA_READ; 1066 1154 else 1067 - host->dir_status = DW_MCI_SEND_STATUS; 1155 + host->dir_status = MMC_DATA_WRITE; 1068 1156 1069 1157 dw_mci_ctrl_thld(host, data); 1070 1158 ··· 1112 1200 } 1113 1201 } 1114 1202 1115 - static void dw_mci_setup_bus(struct dw_mci_slot *slot, bool force_clkinit) 1203 + static void dw_mci_setup_bus(struct dw_mci *host, bool force_clkinit) 1116 1204 { 1117 - struct dw_mci *host = slot->host; 1118 - unsigned int clock = slot->clock; 1205 + unsigned int 
clock = host->clock; 1119 1206 u32 div; 1120 1207 u32 clk_en_a; 1121 1208 u32 sdmmc_cmd_bits = SDMMC_CMD_UPD_CLK | SDMMC_CMD_PRV_DAT_WAIT; ··· 1123 1212 if (host->state == STATE_WAITING_CMD11_DONE) 1124 1213 sdmmc_cmd_bits |= SDMMC_CMD_VOLT_SWITCH; 1125 1214 1126 - slot->mmc->actual_clock = 0; 1215 + host->mmc->actual_clock = 0; 1127 1216 1128 1217 if (!clock) { 1129 1218 mci_writel(host, CLKENA, 0); 1130 - mci_send_cmd(slot, sdmmc_cmd_bits, 0); 1219 + mci_send_cmd(host, sdmmc_cmd_bits, 0); 1131 1220 } else if (clock != host->current_speed || force_clkinit) { 1132 1221 div = host->bus_hz / clock; 1133 1222 if (host->bus_hz % clock && host->bus_hz > clock) ··· 1139 1228 1140 1229 div = (host->bus_hz != clock) ? DIV_ROUND_UP(div, 2) : 0; 1141 1230 1142 - if ((clock != slot->__clk_old && 1143 - !test_bit(DW_MMC_CARD_NEEDS_POLL, &slot->flags)) || 1231 + if ((clock != host->clk_old && 1232 + !test_bit(DW_MMC_CARD_NEEDS_POLL, &host->flags)) || 1144 1233 force_clkinit) { 1145 1234 /* Silent the verbose log if calling from PM context */ 1146 1235 if (!force_clkinit) 1147 - dev_info(&slot->mmc->class_dev, 1148 - "Bus speed (slot %d) = %dHz (slot req %dHz, actual %dHZ div = %d)\n", 1149 - slot->id, host->bus_hz, clock, 1236 + dev_info(&host->mmc->class_dev, 1237 + "Bus speed = %dHz (req %dHz, actual %dHZ div = %d)\n", 1238 + host->bus_hz, clock, 1150 1239 div ? ((host->bus_hz / div) >> 1) : 1151 1240 host->bus_hz, div); 1152 1241 ··· 1154 1243 * If card is polling, display the message only 1155 1244 * one time at boot time. 
1156 1245 */ 1157 - if (slot->mmc->caps & MMC_CAP_NEEDS_POLL && 1158 - slot->mmc->f_min == clock) 1159 - set_bit(DW_MMC_CARD_NEEDS_POLL, &slot->flags); 1246 + if (host->mmc->caps & MMC_CAP_NEEDS_POLL && 1247 + host->mmc->f_min == clock) 1248 + set_bit(DW_MMC_CARD_NEEDS_POLL, &host->flags); 1160 1249 } 1161 1250 1162 1251 /* disable clock */ ··· 1164 1253 mci_writel(host, CLKSRC, 0); 1165 1254 1166 1255 /* inform CIU */ 1167 - mci_send_cmd(slot, sdmmc_cmd_bits, 0); 1256 + mci_send_cmd(host, sdmmc_cmd_bits, 0); 1168 1257 1169 1258 /* set clock to desired speed */ 1170 1259 mci_writel(host, CLKDIV, div); 1171 1260 1172 1261 /* inform CIU */ 1173 - mci_send_cmd(slot, sdmmc_cmd_bits, 0); 1262 + mci_send_cmd(host, sdmmc_cmd_bits, 0); 1174 1263 1175 1264 /* enable clock; only low power if no SDIO */ 1176 - clk_en_a = SDMMC_CLKEN_ENABLE << slot->id; 1177 - if (!test_bit(DW_MMC_CARD_NO_LOW_PWR, &slot->flags)) 1178 - clk_en_a |= SDMMC_CLKEN_LOW_PWR << slot->id; 1265 + clk_en_a = SDMMC_CLKEN_ENABLE; 1266 + if (!test_bit(DW_MMC_CARD_NO_LOW_PWR, &host->flags)) 1267 + clk_en_a |= SDMMC_CLKEN_LOW_PWR; 1179 1268 mci_writel(host, CLKENA, clk_en_a); 1180 1269 1181 1270 /* inform CIU */ 1182 - mci_send_cmd(slot, sdmmc_cmd_bits, 0); 1271 + mci_send_cmd(host, sdmmc_cmd_bits, 0); 1183 1272 1184 1273 /* keep the last clock value that was requested from core */ 1185 - slot->__clk_old = clock; 1186 - slot->mmc->actual_clock = div ? ((host->bus_hz / div) >> 1) : 1274 + host->clk_old = clock; 1275 + host->mmc->actual_clock = div ? 
((host->bus_hz / div) >> 1) : 1187 1276 host->bus_hz; 1188 1277 } 1189 1278 1190 1279 host->current_speed = clock; 1191 1280 1192 - /* Set the current slot bus width */ 1193 - mci_writel(host, CTYPE, (slot->ctype << slot->id)); 1281 + /* Set the current bus width */ 1282 + mci_writel(host, CTYPE, host->ctype); 1194 1283 } 1195 1284 1196 1285 static void dw_mci_set_data_timeout(struct dw_mci *host, ··· 1224 1313 timeout_ns, tmout >> 8); 1225 1314 } 1226 1315 1227 - static void __dw_mci_start_request(struct dw_mci *host, 1228 - struct dw_mci_slot *slot, 1229 - struct mmc_command *cmd) 1316 + static void dw_mci_start_request(struct dw_mci *host, struct mmc_command *cmd) 1230 1317 { 1231 1318 struct mmc_request *mrq; 1232 1319 struct mmc_data *data; 1233 1320 u32 cmdflags; 1234 1321 1235 - mrq = slot->mrq; 1322 + mrq = host->mrq; 1236 1323 1237 1324 host->mrq = mrq; 1238 1325 ··· 1247 1338 mci_writel(host, BLKSIZ, data->blksz); 1248 1339 } 1249 1340 1250 - cmdflags = dw_mci_prepare_command(slot->mmc, cmd); 1341 + cmdflags = dw_mci_prepare_command(host->mmc, cmd); 1251 1342 1252 1343 /* this is the first command, send the initialization clock */ 1253 - if (test_and_clear_bit(DW_MMC_CARD_NEED_INIT, &slot->flags)) 1344 + if (test_and_clear_bit(DW_MMC_CARD_NEED_INIT, &host->flags)) 1254 1345 cmdflags |= SDMMC_CMD_INIT; 1255 1346 1256 1347 if (data) { ··· 1283 1374 host->stop_cmdr = dw_mci_prep_stop_abort(host, cmd); 1284 1375 } 1285 1376 1286 - static void dw_mci_start_request(struct dw_mci *host, 1287 - struct dw_mci_slot *slot) 1377 + static void dw_mci_request(struct mmc_host *mmc, struct mmc_request *mrq) 1288 1378 { 1289 - struct mmc_request *mrq = slot->mrq; 1379 + struct dw_mci *host = mmc_priv(mmc); 1290 1380 struct mmc_command *cmd; 1291 1381 1292 - cmd = mrq->sbc ? 
mrq->sbc : mrq->cmd; 1293 - __dw_mci_start_request(host, slot, cmd); 1294 - } 1382 + WARN_ON(host->mrq); 1295 1383 1296 - /* must be called with host->lock held */ 1297 - static void dw_mci_queue_request(struct dw_mci *host, struct dw_mci_slot *slot, 1298 - struct mmc_request *mrq) 1299 - { 1300 - dev_vdbg(&slot->mmc->class_dev, "queue request: state=%d\n", 1384 + /* 1385 + * The check for card presence and queueing of the request must be 1386 + * atomic, otherwise the card could be removed in between and the 1387 + * request wouldn't fail until another card was inserted. 1388 + */ 1389 + if (!dw_mci_get_cd(mmc)) { 1390 + mrq->cmd->error = -ENOMEDIUM; 1391 + mmc_request_done(mmc, mrq); 1392 + return; 1393 + } 1394 + 1395 + spin_lock_bh(&host->lock); 1396 + 1397 + dev_vdbg(&host->mmc->class_dev, "request: state=%d\n", 1301 1398 host->state); 1302 1399 1303 - slot->mrq = mrq; 1304 - 1400 + host->mrq = mrq; 1305 1401 if (host->state == STATE_WAITING_CMD11_DONE) { 1306 - dev_warn(&slot->mmc->class_dev, 1402 + dev_warn(&host->mmc->class_dev, 1307 1403 "Voltage change didn't complete\n"); 1308 1404 /* 1309 1405 * this case isn't expected to happen, so we can ··· 1320 1406 1321 1407 if (host->state == STATE_IDLE) { 1322 1408 host->state = STATE_SENDING_CMD; 1323 - dw_mci_start_request(host, slot); 1324 - } else { 1325 - list_add_tail(&slot->queue_node, &host->queue); 1409 + cmd = mrq->sbc ? mrq->sbc : mrq->cmd; 1410 + dw_mci_start_request(host, cmd); 1326 1411 } 1327 - } 1328 - 1329 - static void dw_mci_request(struct mmc_host *mmc, struct mmc_request *mrq) 1330 - { 1331 - struct dw_mci_slot *slot = mmc_priv(mmc); 1332 - struct dw_mci *host = slot->host; 1333 - 1334 - WARN_ON(slot->mrq); 1335 - 1336 - /* 1337 - * The check for card presence and queueing of the request must be 1338 - * atomic, otherwise the card could be removed in between and the 1339 - * request wouldn't fail until another card was inserted. 
1340 - */ 1341 - 1342 - if (!dw_mci_get_cd(mmc)) { 1343 - mrq->cmd->error = -ENOMEDIUM; 1344 - mmc_request_done(mmc, mrq); 1345 - return; 1346 - } 1347 - 1348 - spin_lock_bh(&host->lock); 1349 - 1350 - dw_mci_queue_request(host, slot, mrq); 1351 1412 1352 1413 spin_unlock_bh(&host->lock); 1353 1414 } 1354 1415 1355 1416 static void dw_mci_set_ios(struct mmc_host *mmc, struct mmc_ios *ios) 1356 1417 { 1357 - struct dw_mci_slot *slot = mmc_priv(mmc); 1358 - const struct dw_mci_drv_data *drv_data = slot->host->drv_data; 1418 + struct dw_mci *host = mmc_priv(mmc); 1419 + const struct dw_mci_drv_data *drv_data = host->drv_data; 1359 1420 u32 regs; 1360 1421 int ret; 1361 1422 1362 1423 switch (ios->bus_width) { 1363 1424 case MMC_BUS_WIDTH_4: 1364 - slot->ctype = SDMMC_CTYPE_4BIT; 1425 + host->ctype = SDMMC_CTYPE_4BIT; 1365 1426 break; 1366 1427 case MMC_BUS_WIDTH_8: 1367 - slot->ctype = SDMMC_CTYPE_8BIT; 1428 + host->ctype = SDMMC_CTYPE_8BIT; 1368 1429 break; 1369 1430 default: 1370 1431 /* set default 1 bit mode */ 1371 - slot->ctype = SDMMC_CTYPE_1BIT; 1432 + host->ctype = SDMMC_CTYPE_1BIT; 1372 1433 } 1373 1434 1374 - regs = mci_readl(slot->host, UHS_REG); 1435 + regs = mci_readl(host, UHS_REG); 1375 1436 1376 1437 /* DDR mode set */ 1377 1438 if (ios->timing == MMC_TIMING_MMC_DDR52 || 1378 1439 ios->timing == MMC_TIMING_UHS_DDR50 || 1379 1440 ios->timing == MMC_TIMING_MMC_HS400) 1380 - regs |= ((0x1 << slot->id) << 16); 1441 + regs |= BIT(16); 1381 1442 else 1382 - regs &= ~((0x1 << slot->id) << 16); 1443 + regs &= ~BIT(16); 1383 1444 1384 - mci_writel(slot->host, UHS_REG, regs); 1385 - slot->host->timing = ios->timing; 1445 + mci_writel(host, UHS_REG, regs); 1446 + host->timing = ios->timing; 1386 1447 1387 1448 /* 1388 1449 * Use mirror of ios->clock to prevent race with mmc 1389 1450 * core ios update when finding the minimum. 
 	 */
-	slot->clock = ios->clock;
+	host->clock = ios->clock;
 
 	if (drv_data && drv_data->set_ios)
-		drv_data->set_ios(slot->host, ios);
+		drv_data->set_ios(host, ios);
 
 	switch (ios->power_mode) {
 	case MMC_POWER_UP:
-		if (!IS_ERR(mmc->supply.vmmc)) {
-			ret = mmc_regulator_set_ocr(mmc, mmc->supply.vmmc,
-					ios->vdd);
-			if (ret) {
-				dev_err(slot->host->dev,
-					"failed to enable vmmc regulator\n");
-				/*return, if failed turn on vmmc*/
-				return;
-			}
+		ret = mmc_regulator_set_ocr(mmc, mmc->supply.vmmc, ios->vdd);
+		if (ret) {
+			dev_err(host->dev, "failed to enable vmmc regulator\n");
+			return;
 		}
-		set_bit(DW_MMC_CARD_NEED_INIT, &slot->flags);
-		regs = mci_readl(slot->host, PWREN);
-		regs |= (1 << slot->id);
-		mci_writel(slot->host, PWREN, regs);
+		set_bit(DW_MMC_CARD_NEED_INIT, &host->flags);
+		regs = mci_readl(host, PWREN);
+		regs |= BIT(0);
+		mci_writel(host, PWREN, regs);
 		break;
 	case MMC_POWER_ON:
-		if (!slot->host->vqmmc_enabled) {
-			if (!IS_ERR(mmc->supply.vqmmc)) {
-				ret = regulator_enable(mmc->supply.vqmmc);
-				if (ret < 0)
-					dev_err(slot->host->dev,
-						"failed to enable vqmmc\n");
-				else
-					slot->host->vqmmc_enabled = true;
-
-			} else {
-				/* Keep track so we don't reset again */
-				slot->host->vqmmc_enabled = true;
-			}
-
-			/* Reset our state machine after powering on */
-			dw_mci_ctrl_reset(slot->host,
-					  SDMMC_CTRL_ALL_RESET_FLAGS);
-		}
-
+		mmc_regulator_enable_vqmmc(mmc);
 		/* Adjust clock / bus width after power is up */
-		dw_mci_setup_bus(slot, false);
+		dw_mci_setup_bus(host, false);
 
 		break;
 	case MMC_POWER_OFF:
 		/* Turn clock off before power goes down */
-		dw_mci_setup_bus(slot, false);
+		dw_mci_setup_bus(host, false);
 
 		if (!IS_ERR(mmc->supply.vmmc))
 			mmc_regulator_set_ocr(mmc, mmc->supply.vmmc, 0);
 
-		if (!IS_ERR(mmc->supply.vqmmc) && slot->host->vqmmc_enabled)
-			regulator_disable(mmc->supply.vqmmc);
-		slot->host->vqmmc_enabled = false;
+		mmc_regulator_disable_vqmmc(mmc);
 
-		regs = mci_readl(slot->host, PWREN);
-		regs &= ~(1 << slot->id);
-		mci_writel(slot->host, PWREN, regs);
+		regs = mci_readl(host, PWREN);
+		regs &= ~BIT(0);
+		mci_writel(host, PWREN, regs);
+		/* Reset our state machine after powering off */
+		dw_mci_ctrl_reset(host, SDMMC_CTRL_ALL_RESET_FLAGS);
 		break;
 	default:
 		break;
 	}
 
-	if (slot->host->state == STATE_WAITING_CMD11_DONE && ios->clock != 0)
-		slot->host->state = STATE_IDLE;
+	if (host->state == STATE_WAITING_CMD11_DONE && ios->clock != 0)
+		host->state = STATE_IDLE;
 }
 
 static int dw_mci_card_busy(struct mmc_host *mmc)
 {
-	struct dw_mci_slot *slot = mmc_priv(mmc);
+	struct dw_mci *host = mmc_priv(mmc);
 	u32 status;
 
 	/*
 	 * Check the busy bit which is low when DAT[3:0]
 	 * (the data lines) are 0000
 	 */
-	status = mci_readl(slot->host, STATUS);
+	status = mci_readl(host, STATUS);
 
 	return !!(status & SDMMC_STATUS_BUSY);
 }
 
 static int dw_mci_switch_voltage(struct mmc_host *mmc, struct mmc_ios *ios)
 {
-	struct dw_mci_slot *slot = mmc_priv(mmc);
-	struct dw_mci *host = slot->host;
+	struct dw_mci *host = mmc_priv(mmc);
 	const struct dw_mci_drv_data *drv_data = host->drv_data;
 	u32 uhs;
-	u32 v18 = SDMMC_UHS_18V << slot->id;
+	u32 v18 = SDMMC_UHS_18V;
 	int ret;
 
 	if (drv_data && drv_data->switch_voltage)
-		return drv_data->switch_voltage(mmc, ios);
+		return drv_data->switch_voltage(host, ios);
 
 	/*
 	 * Program the voltage. Note that some instances of dw_mmc may use
···
 static int dw_mci_get_ro(struct mmc_host *mmc)
 {
 	int read_only;
-	struct dw_mci_slot *slot = mmc_priv(mmc);
+	struct dw_mci *host = mmc_priv(mmc);
 	int gpio_ro = mmc_gpio_get_ro(mmc);
 
 	/* Use platform get_ro function, else try on board write protect */
···
 		read_only = gpio_ro;
 	else
 		read_only =
-			mci_readl(slot->host, WRTPRT) & (1 << slot->id) ? 1 : 0;
+			mci_readl(host, WRTPRT) & BIT(0) ? 1 : 0;
 
 	dev_dbg(&mmc->class_dev, "card is %s\n",
 		read_only ? "read-only" : "read-write");
···
 
 static void dw_mci_hw_reset(struct mmc_host *mmc)
 {
-	struct dw_mci_slot *slot = mmc_priv(mmc);
-	struct dw_mci *host = slot->host;
+	struct dw_mci *host = mmc_priv(mmc);
 	const struct dw_mci_drv_data *drv_data = host->drv_data;
 	int reset;
 
···
 	 * tRSTH >= 1us: RST_n high period
 	 */
 	reset = mci_readl(host, RST_N);
-	reset &= ~(SDMMC_RST_HWACTIVE << slot->id);
+	reset &= ~SDMMC_RST_HWACTIVE;
 	mci_writel(host, RST_N, reset);
 	usleep_range(1, 2);
-	reset |= SDMMC_RST_HWACTIVE << slot->id;
+	reset |= SDMMC_RST_HWACTIVE;
 	mci_writel(host, RST_N, reset);
 	usleep_range(200, 300);
 }
 
-static void dw_mci_prepare_sdio_irq(struct dw_mci_slot *slot, bool prepare)
+static void dw_mci_prepare_sdio_irq(struct dw_mci *host, bool prepare)
 {
-	struct dw_mci *host = slot->host;
-	const u32 clken_low_pwr = SDMMC_CLKEN_LOW_PWR << slot->id;
+	const u32 clken_low_pwr = SDMMC_CLKEN_LOW_PWR;
 	u32 clk_en_a_old;
 	u32 clk_en_a;
 
···
 
 	clk_en_a_old = mci_readl(host, CLKENA);
 	if (prepare) {
-		set_bit(DW_MMC_CARD_NO_LOW_PWR, &slot->flags);
+		set_bit(DW_MMC_CARD_NO_LOW_PWR, &host->flags);
 		clk_en_a = clk_en_a_old & ~clken_low_pwr;
 	} else {
-		clear_bit(DW_MMC_CARD_NO_LOW_PWR, &slot->flags);
+		clear_bit(DW_MMC_CARD_NO_LOW_PWR, &host->flags);
 		clk_en_a = clk_en_a_old | clken_low_pwr;
 	}
 
 	if (clk_en_a != clk_en_a_old) {
 		mci_writel(host, CLKENA, clk_en_a);
-		mci_send_cmd(slot, SDMMC_CMD_UPD_CLK | SDMMC_CMD_PRV_DAT_WAIT,
+		mci_send_cmd(host, SDMMC_CMD_UPD_CLK | SDMMC_CMD_PRV_DAT_WAIT,
 			     0);
 	}
 }
 
-static void __dw_mci_enable_sdio_irq(struct dw_mci_slot *slot, int enb)
+static void __dw_mci_enable_sdio_irq(struct dw_mci *host, int enb)
 {
-	struct dw_mci *host = slot->host;
 	unsigned long irqflags;
 	u32 int_mask;
 
···
 	/* Enable/disable Slot Specific SDIO interrupt */
 	int_mask = mci_readl(host, INTMASK);
 	if (enb)
-		int_mask |= SDMMC_INT_SDIO(slot->sdio_id);
+		int_mask |= SDMMC_INT_SDIO(host->sdio_irq);
 	else
-		int_mask &= ~SDMMC_INT_SDIO(slot->sdio_id);
+		int_mask &= ~SDMMC_INT_SDIO(host->sdio_irq);
 	mci_writel(host, INTMASK, int_mask);
 
 	spin_unlock_irqrestore(&host->irq_lock, irqflags);
···
 
 static void dw_mci_enable_sdio_irq(struct mmc_host *mmc, int enb)
 {
-	struct dw_mci_slot *slot = mmc_priv(mmc);
-	struct dw_mci *host = slot->host;
+	struct dw_mci *host = mmc_priv(mmc);
 
-	dw_mci_prepare_sdio_irq(slot, enb);
-	__dw_mci_enable_sdio_irq(slot, enb);
+	dw_mci_prepare_sdio_irq(host, enb);
+	__dw_mci_enable_sdio_irq(host, enb);
 
 	/* Avoid runtime suspending the device when SDIO IRQ is enabled */
 	if (enb)
···
 
 static void dw_mci_ack_sdio_irq(struct mmc_host *mmc)
 {
-	struct dw_mci_slot *slot = mmc_priv(mmc);
+	struct dw_mci *host = mmc_priv(mmc);
 
-	__dw_mci_enable_sdio_irq(slot, 1);
+	__dw_mci_enable_sdio_irq(host, 1);
 }
 
 static int dw_mci_execute_tuning(struct mmc_host *mmc, u32 opcode)
 {
-	struct dw_mci_slot *slot = mmc_priv(mmc);
-	struct dw_mci *host = slot->host;
+	struct dw_mci *host = mmc_priv(mmc);
 	const struct dw_mci_drv_data *drv_data = host->drv_data;
 	int err = -EINVAL;
 
 	if (drv_data && drv_data->execute_tuning)
-		err = drv_data->execute_tuning(slot, opcode);
+		err = drv_data->execute_tuning(host, opcode);
 	return err;
 }
 
 static int dw_mci_prepare_hs400_tuning(struct mmc_host *mmc,
 				       struct mmc_ios *ios)
 {
-	struct dw_mci_slot *slot = mmc_priv(mmc);
-	struct dw_mci *host = slot->host;
+	struct dw_mci *host = mmc_priv(mmc);
 	const struct dw_mci_drv_data *drv_data = host->drv_data;
 
 	if (drv_data && drv_data->prepare_hs400_tuning)
···
 
 ciu_out:
 	/* After a CTRL reset we need to have CIU set clock registers */
-	mci_send_cmd(host->slot, SDMMC_CMD_UPD_CLK, 0);
+	mci_send_cmd(host, SDMMC_CMD_UPD_CLK, 0);
 
 	return ret;
 }
···
 	__releases(&host->lock)
 	__acquires(&host->lock)
 {
-	struct dw_mci_slot *slot;
-	struct mmc_host	*prev_mmc = host->slot->mmc;
+	struct mmc_host	*prev_mmc = host->mmc;
 
 	WARN_ON(host->cmd || host->data);
 
-	host->slot->mrq = NULL;
 	host->mrq = NULL;
-	if (!list_empty(&host->queue)) {
-		slot = list_entry(host->queue.next,
-				  struct dw_mci_slot, queue_node);
-		list_del(&slot->queue_node);
-		dev_vdbg(host->dev, "list not empty: %s is next\n",
-			 mmc_hostname(slot->mmc));
-		host->state = STATE_SENDING_CMD;
-		dw_mci_start_request(host, slot);
-	} else {
-		dev_vdbg(host->dev, "list empty\n");
 
-		if (host->state == STATE_SENDING_CMD11)
-			host->state = STATE_WAITING_CMD11_DONE;
-		else
-			host->state = STATE_IDLE;
-	}
+	if (host->state == STATE_SENDING_CMD11)
+		host->state = STATE_WAITING_CMD11_DONE;
+	else
+		host->state = STATE_IDLE;
 
 	spin_unlock(&host->lock);
 	mmc_request_done(prev_mmc, mrq);
···
 			data->error = -EILSEQ;
 		} else if (status & SDMMC_INT_EBE) {
 			if (host->dir_status ==
-			    DW_MCI_SEND_STATUS) {
+			    MMC_DATA_WRITE) {
 				/*
 				 * No data CRC status was returned.
 				 * The number of bytes transferred
···
 				data->bytes_xfered = 0;
 				data->error = -ETIMEDOUT;
 			} else if (host->dir_status ==
-				   DW_MCI_RECV_STATUS) {
+				   MMC_DATA_READ) {
 				data->error = -EILSEQ;
 			}
 		} else {
···
 			set_bit(EVENT_CMD_COMPLETE, &host->completed_events);
 			err = dw_mci_command_complete(host, cmd);
 			if (cmd == mrq->sbc && !err) {
-				__dw_mci_start_request(host, host->slot,
-						       mrq->cmd);
+				dw_mci_start_request(host, mrq->cmd);
 				goto unlock;
 			}
 
···
 			 * avoids races and keeps things simple.
 			 */
 			if (err != -ETIMEDOUT &&
-			    host->dir_status == DW_MCI_RECV_STATUS) {
+			    host->dir_status == MMC_DATA_READ) {
 				state = STATE_SENDING_DATA;
 				continue;
 			}
···
 				 * If all data-related interrupts don't come
 				 * within the given time in reading data state.
 				 */
-				if (host->dir_status == DW_MCI_RECV_STATUS)
+				if (host->dir_status == MMC_DATA_READ)
 					dw_mci_set_drto(host);
 				break;
 			}
···
 				 * interrupt doesn't come within the given time.
 				 * in reading data state.
 				 */
-				if (host->dir_status == DW_MCI_RECV_STATUS)
+				if (host->dir_status == MMC_DATA_READ)
 					dw_mci_set_drto(host);
 				break;
 			}
···
 
 static void dw_mci_handle_cd(struct dw_mci *host)
 {
-	struct dw_mci_slot *slot = host->slot;
-
-	mmc_detect_change(slot->mmc,
-			  msecs_to_jiffies(host->pdata->detect_delay_ms));
+	mmc_detect_change(host->mmc,
+			  msecs_to_jiffies(host->detect_delay_ms));
 }
 
 static irqreturn_t dw_mci_interrupt(int irq, void *dev_id)
 {
 	struct dw_mci *host = dev_id;
 	u32 pending;
-	struct dw_mci_slot *slot = host->slot;
 
 	pending = mci_readl(host, MINTSTS); /* read-only mask reg */
 
···
 			if (!host->data_status)
 				host->data_status = pending;
 			smp_wmb(); /* drain writebuffer */
-			if (host->dir_status == DW_MCI_RECV_STATUS) {
+			if (host->dir_status == MMC_DATA_READ) {
 				if (host->sg != NULL)
 					dw_mci_read_data_pio(host, true);
 			}
···
 
 		if (pending & SDMMC_INT_RXDR) {
 			mci_writel(host, RINTSTS, SDMMC_INT_RXDR);
-			if (host->dir_status == DW_MCI_RECV_STATUS && host->sg)
+			if (host->dir_status == MMC_DATA_READ && host->sg)
 				dw_mci_read_data_pio(host, false);
 		}
 
 		if (pending & SDMMC_INT_TXDR) {
 			mci_writel(host, RINTSTS, SDMMC_INT_TXDR);
-			if (host->dir_status == DW_MCI_SEND_STATUS && host->sg)
+			if (host->dir_status == MMC_DATA_WRITE && host->sg)
 				dw_mci_write_data_pio(host);
 		}
 
···
 			dw_mci_handle_cd(host);
 		}
 
-		if (pending & SDMMC_INT_SDIO(slot->sdio_id)) {
+		if (pending & SDMMC_INT_SDIO(host->sdio_irq)) {
 			mci_writel(host, RINTSTS,
-				   SDMMC_INT_SDIO(slot->sdio_id));
-			__dw_mci_enable_sdio_irq(slot, 0);
-			sdio_signal_irq(slot->mmc);
+				   SDMMC_INT_SDIO(host->sdio_irq));
+			__dw_mci_enable_sdio_irq(host, 0);
+			sdio_signal_irq(host->mmc);
 		}
 
 	}
···
 	return IRQ_HANDLED;
 }
 
-static int dw_mci_init_slot_caps(struct dw_mci_slot *slot)
+static int dw_mci_init_host_caps(struct dw_mci *host)
 {
-	struct dw_mci *host = slot->host;
 	const struct dw_mci_drv_data *drv_data = host->drv_data;
-	struct mmc_host *mmc = slot->mmc;
+	struct mmc_host *mmc = host->mmc;
 	int ctrl_id;
-
-	if (host->pdata->caps)
-		mmc->caps = host->pdata->caps;
-
-	if (host->pdata->pm_caps)
-		mmc->pm_caps = host->pdata->pm_caps;
 
 	if (drv_data)
 		mmc->caps |= drv_data->common_caps;
 
-	if (host->dev->of_node) {
-		ctrl_id = of_alias_get_id(host->dev->of_node, "mshc");
-		if (ctrl_id < 0)
-			ctrl_id = 0;
-	} else {
+	if (host->dev->of_node)
+		ctrl_id = mmc->index;
+	else
 		ctrl_id = to_platform_device(host->dev)->id;
-	}
 
 	if (drv_data && drv_data->caps) {
 		if (ctrl_id >= drv_data->num_caps) {
···
 		}
 		mmc->caps |= drv_data->caps[ctrl_id];
 	}
-
-	if (host->pdata->caps2)
-		mmc->caps2 = host->pdata->caps2;
 
 	/* if host has set a minimum_freq, we should respect it */
 	if (host->minimum_speed)
···
 	return 0;
 }
 
-static int dw_mci_init_slot(struct dw_mci *host)
+static int dw_mci_init_host(struct dw_mci *host)
 {
-	struct mmc_host *mmc;
-	struct dw_mci_slot *slot;
+	struct mmc_host *mmc = host->mmc;
 	int ret;
-
-	mmc = devm_mmc_alloc_host(host->dev, sizeof(*slot));
-	if (!mmc)
-		return -ENOMEM;
-
-	slot = mmc_priv(mmc);
-	slot->id = 0;
-	slot->sdio_id = host->sdio_id0 + slot->id;
-	slot->mmc = mmc;
-	slot->host = host;
-	host->slot = slot;
 
 	mmc->ops = &dw_mci_ops;
 
···
 	if (ret)
 		return ret;
 
-	ret = dw_mci_init_slot_caps(slot);
+	mmc_of_parse_clk_phase(host->dev, &host->phase_map);
+
+	ret = dw_mci_init_host_caps(host);
 	if (ret)
 		return ret;
 
···
 		mmc->max_seg_size = mmc->max_req_size;
 	}
 
+	if (mmc->caps & MMC_CAP_NEEDS_POLL)
+		dev_info(&mmc->class_dev, "card is polling.\n");
+	else if (!mmc_card_is_removable(mmc))
+		dev_info(&mmc->class_dev, "card is non-removable.\n");
+
 	dw_mci_get_cd(mmc);
 
 	ret = mmc_add_host(mmc);
···
 		return ret;
 
 #if defined(CONFIG_DEBUG_FS)
-	dw_mci_init_debugfs(slot);
+	dw_mci_init_debugfs(host);
 #endif
 
 	return 0;
 }
 
-static void dw_mci_cleanup_slot(struct dw_mci_slot *slot)
+static void dw_mci_cleanup_host(struct dw_mci *host)
 {
 	/* Debugfs stuff is cleaned up by mmc core */
-	mmc_remove_host(slot->mmc);
-	slot->host->slot = NULL;
+	mmc_remove_host(host->mmc);
 }
 
 static void dw_mci_init_dma(struct dw_mci *host)
···
 	spin_unlock_irqrestore(&host->irq_lock, irqflags);
 }
 
-#ifdef CONFIG_OF
-static struct dw_mci_board *dw_mci_parse_dt(struct dw_mci *host)
+static int dw_mci_parse_dt(struct dw_mci *host)
 {
-	struct dw_mci_board *pdata;
 	struct device *dev = host->dev;
 	const struct dw_mci_drv_data *drv_data = host->drv_data;
 	int ret;
 	u32 clock_frequency;
 
-	pdata = devm_kzalloc(dev, sizeof(*pdata), GFP_KERNEL);
-	if (!pdata)
-		return ERR_PTR(-ENOMEM);
-
 	/* find reset controller when exist */
-	pdata->rstc = devm_reset_control_get_optional_exclusive(dev, "reset");
-	if (IS_ERR(pdata->rstc))
-		return ERR_CAST(pdata->rstc);
+	host->rstc = devm_reset_control_get_optional_exclusive(dev, "reset");
+	if (IS_ERR(host->rstc))
+		return PTR_ERR(host->rstc);
 
-	if (device_property_read_u32(dev, "fifo-depth", &pdata->fifo_depth))
+	if (!host->fifo_depth && device_property_read_u32(dev, "fifo-depth", &host->fifo_depth))
 		dev_info(dev,
 			 "fifo-depth property not found, using value of FIFOTH register as default\n");
 
-	device_property_read_u32(dev, "card-detect-delay",
-				 &pdata->detect_delay_ms);
+	if (!host->detect_delay_ms)
+		device_property_read_u32(dev, "card-detect-delay",
+					 &host->detect_delay_ms);
 
-	device_property_read_u32(dev, "data-addr", &host->data_addr_override);
+	if (!host->data_addr_override)
+		device_property_read_u32(dev, "data-addr", &host->data_addr_override);
 
 	if (device_property_present(dev, "fifo-watermark-aligned"))
 		host->wm_aligned = true;
 
-	if (!device_property_read_u32(dev, "clock-frequency", &clock_frequency))
-		pdata->bus_hz = clock_frequency;
+	if (!host->bus_hz && !device_property_read_u32(dev, "clock-frequency", &clock_frequency))
+		host->bus_hz = clock_frequency;
 
 	if (drv_data && drv_data->parse_dt) {
 		ret = drv_data->parse_dt(host);
 		if (ret)
-			return ERR_PTR(ret);
+			return ret;
 	}
 
-	return pdata;
+	return 0;
 }
-
-#else /* CONFIG_OF */
-static struct dw_mci_board *dw_mci_parse_dt(struct dw_mci *host)
-{
-	return ERR_PTR(-EINVAL);
-}
-#endif /* CONFIG_OF */
 
 static void dw_mci_enable_cd(struct dw_mci *host)
 {
···
 	u32 temp;
 
 	/*
-	 * No need for CD if all slots have a non-error GPIO
+	 * No need for CD if host has a non-error GPIO
 	 * as well as broken card detection is found.
 	 */
-	if (host->slot->mmc->caps & MMC_CAP_NEEDS_POLL)
+	if (host->mmc->caps & MMC_CAP_NEEDS_POLL)
 		return;
 
-	if (mmc_gpio_get_cd(host->slot->mmc) < 0) {
+	if (mmc_gpio_get_cd(host->mmc) < 0) {
 		spin_lock_irqsave(&host->irq_lock, irqflags);
 		temp = mci_readl(host, INTMASK);
 		temp |= SDMMC_INT_CD;
···
 	}
 }
 
+struct dw_mci *dw_mci_alloc_host(struct device *dev)
+{
+	struct mmc_host *mmc;
+	struct dw_mci *host;
+
+	mmc = devm_mmc_alloc_host(dev, sizeof(struct dw_mci));
+	if (!mmc)
+		return ERR_PTR(-ENOMEM);
+
+	host = mmc_priv(mmc);
+	host->mmc = mmc;
+	host->dev = dev;
+
+	return host;
+}
+EXPORT_SYMBOL(dw_mci_alloc_host);
+
 int dw_mci_probe(struct dw_mci *host)
 {
 	const struct dw_mci_drv_data *drv_data = host->drv_data;
 	int width, i, ret = 0;
 	u32 fifo_size;
 
-	if (!host->pdata) {
-		host->pdata = dw_mci_parse_dt(host);
-		if (IS_ERR(host->pdata))
-			return dev_err_probe(host->dev, PTR_ERR(host->pdata),
-					     "platform data not available\n");
-	}
+	ret = dw_mci_parse_dt(host);
+	if (ret)
+		return dev_err_probe(host->dev, ret, "parse dt failed\n");
 
 	host->biu_clk = devm_clk_get(host->dev, "biu");
 	if (IS_ERR(host->biu_clk)) {
···
 		ret = PTR_ERR(host->ciu_clk);
 		if (ret == -EPROBE_DEFER)
 			goto err_clk_biu;
-
-		host->bus_hz = host->pdata->bus_hz;
 	} else {
 		ret = clk_prepare_enable(host->ciu_clk);
 		if (ret) {
···
 			goto err_clk_biu;
 		}
 
-		if (host->pdata->bus_hz) {
-			ret = clk_set_rate(host->ciu_clk, host->pdata->bus_hz);
+		if (host->bus_hz) {
+			ret = clk_set_rate(host->ciu_clk, host->bus_hz);
 			if (ret)
 				dev_warn(host->dev,
 					 "Unable to set bus rate to %uHz\n",
-					 host->pdata->bus_hz);
+					 host->bus_hz);
 		}
 		host->bus_hz = clk_get_rate(host->ciu_clk);
 	}
···
 		goto err_clk_ciu;
 	}
 
-	if (host->pdata->rstc) {
-		reset_control_assert(host->pdata->rstc);
+	if (host->rstc) {
+		reset_control_assert(host->rstc);
 		usleep_range(10, 50);
-		reset_control_deassert(host->pdata->rstc);
+		reset_control_deassert(host->rstc);
 	}
 
 	if (drv_data && drv_data->init) {
···
 
 	spin_lock_init(&host->lock);
 	spin_lock_init(&host->irq_lock);
-	INIT_LIST_HEAD(&host->queue);
 
 	dw_mci_init_fault(host);
 
···
 		goto err_clk_ciu;
 	}
 
-	host->dma_ops = host->pdata->dma_ops;
 	dw_mci_init_dma(host);
 
 	/* Clear the interrupts for the host controller */
···
 	 * FIFO threshold settings  RxMark  = fifo_size / 2 - 1,
 	 *                          Tx Mark = fifo_size / 2 DMA Size = 8
 	 */
-	if (!host->pdata->fifo_depth) {
+	if (!host->fifo_depth) {
 		/*
 		 * Power-on value of RX_WMark is FIFO_DEPTH-1, but this may
 		 * have been overwritten by the bootloader, just like we're
···
 		fifo_size = mci_readl(host, FIFOTH);
 		fifo_size = 1 + ((fifo_size >> 16) & 0xfff);
 	} else {
-		fifo_size = host->pdata->fifo_depth;
+		fifo_size = host->fifo_depth;
 	}
 	host->fifo_depth = fifo_size;
 	host->fifoth_val =
···
 		 "DW MMC controller at irq %d,%d bit host data width,%u deep fifo\n",
 		 host->irq, width, fifo_size);
 
-	/* We need at least one slot to succeed */
-	ret = dw_mci_init_slot(host);
+	ret = dw_mci_init_host(host);
 	if (ret) {
-		dev_dbg(host->dev, "slot %d init failed\n", i);
+		dev_dbg(host->dev, "host init failed\n");
 		goto err_dmaunmap;
 	}
 
-	/* Now that slots are all setup, we can enable card detect */
+	/* Now that host is setup, we can enable card detect */
 	dw_mci_enable_cd(host);
 
 	return 0;
···
 	if (host->use_dma && host->dma_ops->exit)
 		host->dma_ops->exit(host);
 
-	reset_control_assert(host->pdata->rstc);
+	reset_control_assert(host->rstc);
 
 err_clk_ciu:
 	clk_disable_unprepare(host->ciu_clk);
···
 
 void dw_mci_remove(struct dw_mci *host)
 {
-	dev_dbg(host->dev, "remove slot\n");
-	if (host->slot)
-		dw_mci_cleanup_slot(host->slot);
+	dev_dbg(host->dev, "remove host\n");
+	dw_mci_cleanup_host(host);
 
 	mci_writel(host, RINTSTS, 0xFFFFFFFF);
 	mci_writel(host, INTMASK, 0); /* disable all mmc interrupt first */
···
 	if (host->use_dma && host->dma_ops->exit)
 		host->dma_ops->exit(host);
 
-	reset_control_assert(host->pdata->rstc);
+	reset_control_assert(host->rstc);
 
 	clk_disable_unprepare(host->ciu_clk);
 	clk_disable_unprepare(host->biu_clk);
 }
 EXPORT_SYMBOL(dw_mci_remove);
 
-
-
-#ifdef CONFIG_PM
 int dw_mci_runtime_suspend(struct device *dev)
 {
 	struct dw_mci *host = dev_get_drvdata(dev);
···
 
 	clk_disable_unprepare(host->ciu_clk);
 
-	if (host->slot &&
-	    (mmc_host_can_gpio_cd(host->slot->mmc) ||
-	     !mmc_card_is_removable(host->slot->mmc)))
+	if (mmc_host_can_gpio_cd(host->mmc) ||
+	    !mmc_card_is_removable(host->mmc))
 		clk_disable_unprepare(host->biu_clk);
 
 	return 0;
···
 	int ret = 0;
 	struct dw_mci *host = dev_get_drvdata(dev);
 
-	if (host->slot &&
-	    (mmc_host_can_gpio_cd(host->slot->mmc) ||
-	     !mmc_card_is_removable(host->slot->mmc))) {
+	if (mmc_host_can_gpio_cd(host->mmc) ||
+	    !mmc_card_is_removable(host->mmc)) {
 		ret = clk_prepare_enable(host->biu_clk);
 		if (ret)
 			return ret;
···
 		goto err;
 	}
 
-	if (host->use_dma && host->dma_ops->init)
-		host->dma_ops->init(host);
+	if (host->use_dma && host->dma_ops->init) {
+		ret = host->dma_ops->init(host);
+		if (ret)
+			return ret;
+	}
 
 	/*
 	 * Restore the initial value at FIFOTH register
···
 	mci_writel(host, CTRL, SDMMC_CTRL_INT_ENABLE);
 
 
-	if (host->slot && host->slot->mmc->pm_flags & MMC_PM_KEEP_POWER)
-		dw_mci_set_ios(host->slot->mmc, &host->slot->mmc->ios);
+	if (host->mmc->pm_flags & MMC_PM_KEEP_POWER)
+		dw_mci_set_ios(host->mmc, &host->mmc->ios);
 
 	/* Force setup bus to guarantee available clock output */
-	dw_mci_setup_bus(host->slot, true);
+	dw_mci_setup_bus(host, true);
 
 	/* Re-enable SDIO interrupts. */
-	if (sdio_irq_claimed(host->slot->mmc))
-		__dw_mci_enable_sdio_irq(host->slot, 1);
+	if (sdio_irq_claimed(host->mmc))
+		__dw_mci_enable_sdio_irq(host, 1);
 
-	/* Now that slots are all setup, we can enable card detect */
+	/* Now that host is setup, we can enable card detect */
 	dw_mci_enable_cd(host);
 
 	return 0;
 
 err:
-	if (host->slot &&
-	    (mmc_host_can_gpio_cd(host->slot->mmc) ||
-	     !mmc_card_is_removable(host->slot->mmc)))
+	if (mmc_host_can_gpio_cd(host->mmc) ||
+	    !mmc_card_is_removable(host->mmc))
 		clk_disable_unprepare(host->biu_clk);
 
 	return ret;
 }
 EXPORT_SYMBOL(dw_mci_runtime_resume);
-#endif /* CONFIG_PM */
 
-static int __init dw_mci_init(void)
-{
-	pr_info("Synopsys Designware Multimedia Card Interface Driver\n");
-	return 0;
-}
-
-static void __exit dw_mci_exit(void)
-{
-}
-
-module_init(dw_mci_init);
-module_exit(dw_mci_exit);
+const struct dev_pm_ops dw_mci_pmops = {
+	SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, pm_runtime_force_resume)
+	RUNTIME_PM_OPS(dw_mci_runtime_suspend, dw_mci_runtime_resume, NULL)
+};
+EXPORT_SYMBOL_GPL(dw_mci_pmops);
 
 MODULE_DESCRIPTION("DW Multimedia Card Interface driver");
 MODULE_AUTHOR("NXP Semiconductor VietNam");
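A recurring pattern in the hunks above is that the per-slot bit shift `(1 << slot->id)` on registers such as PWREN, WRTPRT and UHS collapses to `BIT(0)` once only a single slot exists. A minimal userspace sketch of this read-modify-write pattern (the `BIT` macro and the `pwren` variable are stand-ins for the kernel definitions and the memory-mapped register):

```c
#include <stdint.h>

#define BIT(n) (1U << (n))

/* Stand-in for the controller's memory-mapped PWREN register (hypothetical). */
static uint32_t pwren;

/* was: regs |= (1 << slot->id); with a single slot this is always BIT(0) */
static inline void dw_power_up(void)  { pwren |= BIT(0); }
static inline void dw_power_off(void) { pwren &= ~BIT(0); }
static inline int  dw_powered(void)   { return !!(pwren & BIT(0)); }
```

Because `slot->id` was always 0 in practice (the driver only ever registered one slot), the shifted and unshifted forms program the same bit; the refactor just makes that explicit.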
+33 -115
drivers/mmc/host/dw_mmc.h
··· 43 43 COOKIE_MAPPED, /* mapped by prepare_data() of dwmmc */ 44 44 }; 45 45 46 - struct mmc_data; 47 - 48 46 enum { 49 47 TRANS_MODE_PIO = 0, 50 48 TRANS_MODE_IDMAC, ··· 55 57 }; 56 58 57 59 /** 58 - * struct dw_mci - MMC controller state shared between all slots 60 + * struct dw_mci - MMC controller state 59 61 * @lock: Spinlock protecting the queue and associated data. 60 62 * @irq_lock: Spinlock protecting the INTMASK setting. 61 63 * @regs: Pointer to MMIO registers. 62 64 * @fifo_reg: Pointer to MMIO registers for data FIFO 63 65 * @sg: Scatterlist entry currently being processed by PIO code, if any. 64 66 * @sg_miter: PIO mapping scatterlist iterator. 65 - * @mrq: The request currently being processed on @slot, 67 + * @mrq: The request currently being processed on @host, 66 68 * or NULL if the controller is idle. 67 69 * @cmd: The command currently being sent to the card, or NULL. 68 70 * @data: The data currently being transferred, or NULL if no data ··· 76 78 * @dma_64bit_address: Whether DMA supports 64-bit address mode or not. 77 79 * @sg_dma: Bus address of DMA buffer. 78 80 * @sg_cpu: Virtual address of DMA buffer. 79 - * @dma_ops: Pointer to platform-specific DMA callbacks. 81 + * @dma_ops: Pointer to DMA callbacks. 80 82 * @cmd_status: Snapshot of SR taken upon completion of the current 81 83 * @ring_size: Buffer size for idma descriptors. 82 84 * command. Only valid when EVENT_CMD_COMPLETE is pending. ··· 94 96 * @completed_events: Bitmask of events which the state machine has 95 97 * processed. 96 98 * @state: BH work state. 97 - * @queue: List of slots waiting for access to the controller. 98 99 * @bus_hz: The rate of @mck in Hz. This forms the basis for MMC bus 99 100 * rate and timeout calculations. 100 101 * @current_speed: Configured rate of the controller. ··· 101 104 * @fifoth_val: The value of FIFOTH register. 102 105 * @verid: Denote Version ID. 103 106 * @dev: Device associated with the MMC controller. 
104 - * @pdata: Platform data associated with the MMC controller. 105 107 * @drv_data: Driver specific data for identified variant of the controller 106 108 * @priv: Implementation defined private data. 107 109 * @biu_clk: Pointer to bus interface unit clock instance. 108 110 * @ciu_clk: Pointer to card interface unit clock instance. 109 - * @slot: Slots sharing this MMC controller. 110 111 * @fifo_depth: depth of FIFO. 111 112 * @data_addr_override: override fifo reg offset with this value. 112 113 * @wm_aligned: force fifo watermark equal with data length in PIO mode. ··· 116 121 * @push_data: Pointer to FIFO push function. 117 122 * @pull_data: Pointer to FIFO pull function. 118 123 * @quirks: Set of quirks that apply to specific versions of the IP. 119 - * @vqmmc_enabled: Status of vqmmc, should be true or false. 120 124 * @irq_flags: The flags to be passed to request_irq. 121 125 * @irq: The irq value to be passed to request_irq. 122 - * @sdio_id0: Number of slot0 in the SDIO interrupt registers. 126 + * @sdio_irq: SDIO interrupt bit in interrupt registers. 123 127 * @cmd11_timer: Timer for SD3.0 voltage switch over scheme. 124 128 * @cto_timer: Timer for broken command transfer over scheme. 125 129 * @dto_timer: Timer for broken data transfer over scheme. 130 + * @mmc: The mmc_host representing this dw_mci. 131 + * @flags: Random state bits associated with the host. 132 + * @ctype: Card type for this host. 133 + * @clock: Clock rate configured by set_ios(). Protected by host->lock. 134 + * @clk_old: The last clock value that was requested from core. 135 + * @pdev: platform_device registered 136 + * @rstc: Reset controller for this host. 137 + * @detect_delay_ms: Delay in mS before detecting cards after interrupt. 138 + * @phase_map: The map for recording in and out phases for each timing 126 139 * 127 140 * Locking 128 141 * ======= 129 142 * 130 - * @lock is a softirq-safe spinlock protecting @queue as well as 131 - * @slot, @mrq and @state. 
···
+  * @lock is a softirq-safe spinlock protecting @mrq as well as
+  * @state. These must always be updated
   * at the same time while holding @lock.
-  * The @mrq field of struct dw_mci_slot is also protected by @lock,
-  * and must always be written at the same time as the slot is added to
-  * @queue.
   *
   * @irq_lock is an irq-safe spinlock protecting the INTMASK register
   * to allow the interrupt handler to modify it directly. Held for only long
···
  	unsigned long		pending_events;
  	unsigned long		completed_events;
  	enum dw_mci_state	state;
- 	struct list_head	queue;
  
  	u32			bus_hz;
  	u32			current_speed;
···
  	u32			fifoth_val;
  	u16			verid;
  	struct device		*dev;
- 	struct dw_mci_board	*pdata;
  	const struct dw_mci_drv_data	*drv_data;
  	void			*priv;
  	struct clk		*biu_clk;
···
  	void (*pull_data)(struct dw_mci *host, void *buf, int cnt);
  
  	u32			quirks;
- 	bool			vqmmc_enabled;
  	unsigned long		irq_flags; /* IRQ flags */
  	int			irq;
  
- 	int			sdio_id0;
+ 	int			sdio_irq;
  
  	struct timer_list	cmd11_timer;
  	struct timer_list	cto_timer;
···
  	struct fault_attr	fail_data_crc;
  	struct hrtimer		fault_timer;
  #endif
+ 	struct mmc_host		*mmc;
+ 	unsigned long		flags;
+ #define DW_MMC_CARD_NEED_INIT	0
+ #define DW_MMC_CARD_NO_LOW_PWR	1
+ #define DW_MMC_CARD_NO_USE_HOLD	2
+ #define DW_MMC_CARD_NEEDS_POLL	3
+ 	u32			ctype;
+ 	unsigned int		clock;
+ 	unsigned int		clk_old;
+ 	struct platform_device	*pdev;
+ 	struct reset_control	*rstc;
+ 	u32			detect_delay_ms;
+ 	struct mmc_clk_phase_map	phase_map;
  };
  
  /* DMA ops for Internal/External DMAC interface */
···
  	void (*stop)(struct dw_mci *host);
  	void (*cleanup)(struct dw_mci *host);
  	void (*exit)(struct dw_mci *host);
- };
- 
- struct dma_pdata;
- 
- /* Board platform data */
- struct dw_mci_board {
- 	unsigned int bus_hz; /* Clock speed at the cclk_in pad */
- 
- 	u32 caps;	/* Capabilities */
- 	u32 caps2;	/* More capabilities */
- 	u32 pm_caps;	/* PM capabilities */
- 	/*
- 	 * Override fifo depth. If 0, autodetect it from the FIFOTH register,
- 	 * but note that this may not be reliable after a bootloader has used
- 	 * it.
- 	 */
- 	unsigned int fifo_depth;
- 
- 	/* delay in mS before detecting cards after interrupt */
- 	u32 detect_delay_ms;
- 
- 	struct reset_control *rstc;
- 	struct dw_mci_dma_ops *dma_ops;
- 	struct dma_pdata *data;
  };
  
  /* Support for longer data read timeout */
···
  #define SDMMC_INT_CMD_DONE	BIT(2)
  #define SDMMC_INT_RESP_ERR	BIT(1)
  #define SDMMC_INT_CD		BIT(0)
- #define SDMMC_INT_ERROR		0xbfc2
  /* Command register defines */
  #define SDMMC_CMD_START		BIT(31)
  #define SDMMC_CMD_USE_HOLD_REG	BIT(29)
···
  #define mci_writel(dev, reg, value) \
  	writel_relaxed((value), (dev)->regs + SDMMC_##reg)
  
- /* 16-bit FIFO access macros */
- #define mci_readw(dev, reg) \
- 	readw_relaxed((dev)->regs + SDMMC_##reg)
- #define mci_writew(dev, reg, value) \
- 	writew_relaxed((value), (dev)->regs + SDMMC_##reg)
- 
- /* 64-bit FIFO access macros */
- #ifdef readq
- #define mci_readq(dev, reg) \
- 	readq_relaxed((dev)->regs + SDMMC_##reg)
- #define mci_writeq(dev, reg, value) \
- 	writeq_relaxed((value), (dev)->regs + SDMMC_##reg)
- #else
- /*
-  * Dummy readq implementation for architectures that don't define it.
-  *
-  * We would assume that none of these architectures would configure
-  * the IP block with a 64bit FIFO width, so this code will never be
-  * executed on those machines. Defining these macros here keeps the
-  * rest of the code free from ifdefs.
-  */
- #define mci_readq(dev, reg) \
- 	(*(volatile u64 __force *)((dev)->regs + SDMMC_##reg))
- #define mci_writeq(dev, reg, value) \
- 	(*(volatile u64 __force *)((dev)->regs + SDMMC_##reg) = (value))
- 
+ #ifndef readq
  #define __raw_writeq(__value, __reg) \
  	(*(volatile u64 __force *)(__reg) = (__value))
  #define __raw_readq(__reg) (*(volatile u64 __force *)(__reg))
  #endif
  
+ extern struct dw_mci *dw_mci_alloc_host(struct device *device);
  extern int dw_mci_probe(struct dw_mci *host);
  extern void dw_mci_remove(struct dw_mci *host);
- #ifdef CONFIG_PM
  extern int dw_mci_runtime_suspend(struct device *device);
  extern int dw_mci_runtime_resume(struct device *device);
- #else
- static inline int dw_mci_runtime_suspend(struct device *device) { return -EOPNOTSUPP; }
- static inline int dw_mci_runtime_resume(struct device *device) { return -EOPNOTSUPP; }
- #endif
- 
- /**
-  * struct dw_mci_slot - MMC slot state
-  * @mmc: The mmc_host representing this slot.
-  * @host: The MMC controller this slot is using.
-  * @ctype: Card type for this slot.
-  * @mrq: mmc_request currently being processed or waiting to be
-  *	processed, or NULL when the slot is idle.
-  * @queue_node: List node for placing this node in the @queue list of
-  *	&struct dw_mci.
-  * @clock: Clock rate configured by set_ios(). Protected by host->lock.
-  * @__clk_old: The last clock value that was requested from core.
-  *	Keeping track of this helps us to avoid spamming the console.
-  * @flags: Random state bits associated with the slot.
-  * @id: Number of this slot.
-  * @sdio_id: Number of this slot in the SDIO interrupt registers.
-  */
- struct dw_mci_slot {
- 	struct mmc_host *mmc;
- 	struct dw_mci *host;
- 
- 	u32 ctype;
- 
- 	struct mmc_request *mrq;
- 	struct list_head queue_node;
- 
- 	unsigned int clock;
- 	unsigned int __clk_old;
- 
- 	unsigned long flags;
- #define DW_MMC_CARD_PRESENT	0
- #define DW_MMC_CARD_NEED_INIT	1
- #define DW_MMC_CARD_NO_LOW_PWR	2
- #define DW_MMC_CARD_NO_USE_HOLD	3
- #define DW_MMC_CARD_NEEDS_POLL	4
- 	int id;
- 	int sdio_id;
- };
  
  /**
   * dw_mci driver data - dw-mshc implementation specific driver data.
···
  	int (*init)(struct dw_mci *host);
  	void (*set_ios)(struct dw_mci *host, struct mmc_ios *ios);
  	int (*parse_dt)(struct dw_mci *host);
- 	int (*execute_tuning)(struct dw_mci_slot *slot, u32 opcode);
+ 	int (*execute_tuning)(struct dw_mci *host, u32 opcode);
  	int (*prepare_hs400_tuning)(struct dw_mci *host,
  				    struct mmc_ios *ios);
- 	int (*switch_voltage)(struct mmc_host *mmc,
+ 	int (*switch_voltage)(struct dw_mci *host,
  			      struct mmc_ios *ios);
  	void (*set_data_timeout)(struct dw_mci *host,
  				 unsigned int timeout_ns);
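The slot-state bits folded into struct dw_mci above are plain bit positions in an unsigned long @flags word; the driver manipulates them with the kernel's set_bit()/test_bit()/clear_bit(). A minimal user-space sketch of the same pattern (the kernel helpers are atomic; these plain-C stand-ins are not):

```c
#include <assert.h>

#define DW_MMC_CARD_NEED_INIT	0
#define DW_MMC_CARD_NO_LOW_PWR	1
#define DW_MMC_CARD_NEEDS_POLL	3

/* Non-atomic stand-ins for the kernel's set_bit()/test_bit()/clear_bit() */
static inline void flag_set(unsigned long *flags, int bit)
{
	*flags |= 1UL << bit;
}

static inline int flag_test(const unsigned long *flags, int bit)
{
	return (*flags >> bit) & 1UL;
}

static inline void flag_clear(unsigned long *flags, int bit)
{
	*flags &= ~(1UL << bit);
}
```

Note that the merge renumbered the bits (DW_MMC_CARD_PRESENT is gone), so out-of-tree users of the old values must be updated.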
+1 -1
drivers/mmc/host/jz4740_mmc.c
···
  	host = mmc_priv(mmc);
  
  	/* Default if no match is JZ4740 */
- 	host->version = (enum jz4740_mmc_version)device_get_match_data(&pdev->dev);
+ 	host->version = (unsigned long)device_get_match_data(&pdev->dev);
  
  	ret = mmc_of_parse(mmc);
  	if (ret)
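The jz4740 hunk above changes a pointer cast: device_get_match_data() returns a void pointer carrying a small integer, and casting it straight to an int-sized enum is rejected or warned about on LP64 targets. A sketch of the two-step cast (the enum values here are illustrative, not the driver's full list):

```c
#include <assert.h>

enum jz4740_mmc_version { JZ_MMC_JZ4740, JZ_MMC_JZ4725B, JZ_MMC_JZ4760 };

/* Match data is a small integer smuggled through a pointer, the way
 * of_device_id.data / device_get_match_data() carry it. */
static enum jz4740_mmc_version version_from_match_data(const void *data)
{
	/* Pointer -> integer of the same width, then narrow to the enum.
	 * A direct void* -> enum cast changes width on 64-bit builds. */
	return (enum jz4740_mmc_version)(unsigned long)data;
}
```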
+43 -17
drivers/mmc/host/loongson2-mmc.c
···
  #define LOONGSON2_MMC_DLLVAL_TIMEOUT_US	4000
  #define LOONGSON2_MMC_TXFULL_TIMEOUT_US	500
  
+ /*
+  * Due to a hardware design flaw, the Loongson-2K0300 may fail to recognize the
+  * CMD48 (SD_READ_EXTR_SINGLE) interrupt.
+  */
+ #define LOONGSON2_MMC_CMD48_QUIRK	BIT(0)
+ 
  /* Loongson-2K1000 SDIO2 DMA routing register */
  #define LS2K1000_SDIO_DMA_MASK		GENMASK(17, 15)
  #define LS2K1000_DMA0_CONF		0x0
···
  };
  
  struct loongson2_mmc_pdata {
+ 	u32 flags;
  	const struct regmap_config *regmap_config;
  	void (*reorder_cmd_data)(struct loongson2_mmc_host *host, struct mmc_command *cmd);
  	void (*fix_data_timeout)(struct loongson2_mmc_host *host, struct mmc_command *cmd);
···
  {
  	struct loongson2_mmc_host *host = mmc_priv(mmc);
  
+ 	if ((host->pdata->flags & LOONGSON2_MMC_CMD48_QUIRK) &&
+ 	    mrq->cmd->opcode == SD_READ_EXTR_SINGLE) {
+ 		mmc_request_done(mmc, mrq);
+ 		return;
+ 	}
+ 
  	host->cmd_is_stop = 0;
  	host->mrq = mrq;
  	loongson2_mmc_send_request(mmc);
···
  	return 0;
  }
  
- static struct loongson2_mmc_pdata ls2k0500_mmc_pdata = {
- 	.regmap_config = &ls2k0500_mmc_regmap_config,
- 	.reorder_cmd_data = ls2k0500_mmc_reorder_cmd_data,
- 	.setting_dma = ls2k0500_mmc_set_external_dma,
- 	.prepare_dma = loongson2_mmc_prepare_external_dma,
- 	.release_dma = loongson2_mmc_release_external_dma,
- };
- 
  static int ls2k1000_mmc_set_external_dma(struct loongson2_mmc_host *host,
  					 struct platform_device *pdev)
  {
···
  	return 0;
  }
- 
- static struct loongson2_mmc_pdata ls2k1000_mmc_pdata = {
- 	.regmap_config = &ls2k0500_mmc_regmap_config,
- 	.reorder_cmd_data = ls2k0500_mmc_reorder_cmd_data,
- 	.setting_dma = ls2k1000_mmc_set_external_dma,
- 	.prepare_dma = loongson2_mmc_prepare_external_dma,
- 	.release_dma = loongson2_mmc_release_external_dma,
- };
  
  static const struct regmap_config ls2k2000_mmc_regmap_config = {
  	.reg_bits = 32,
···
  	if (!host->sg_cpu)
  		return -ENOMEM;
  
- 	memset(host->sg_cpu, 0, PAGE_SIZE);
  	return 0;
  }
···
  	dma_free_coherent(dev, PAGE_SIZE, host->sg_cpu, host->sg_dma);
  }
  
+ static struct loongson2_mmc_pdata ls2k0300_mmc_pdata = {
+ 	.flags = LOONGSON2_MMC_CMD48_QUIRK,
+ 	.regmap_config = &ls2k2000_mmc_regmap_config,
+ 	.reorder_cmd_data = ls2k2000_mmc_reorder_cmd_data,
+ 	.fix_data_timeout = ls2k2000_mmc_fix_data_timeout,
+ 	.setting_dma = ls2k2000_mmc_set_internal_dma,
+ 	.prepare_dma = loongson2_mmc_prepare_internal_dma,
+ 	.release_dma = loongson2_mmc_release_internal_dma,
+ };
+ 
+ static struct loongson2_mmc_pdata ls2k0500_mmc_pdata = {
+ 	.flags = 0,
+ 	.regmap_config = &ls2k0500_mmc_regmap_config,
+ 	.reorder_cmd_data = ls2k0500_mmc_reorder_cmd_data,
+ 	.setting_dma = ls2k0500_mmc_set_external_dma,
+ 	.prepare_dma = loongson2_mmc_prepare_external_dma,
+ 	.release_dma = loongson2_mmc_release_external_dma,
+ };
+ 
+ static struct loongson2_mmc_pdata ls2k1000_mmc_pdata = {
+ 	.flags = 0,
+ 	.regmap_config = &ls2k0500_mmc_regmap_config,
+ 	.reorder_cmd_data = ls2k0500_mmc_reorder_cmd_data,
+ 	.setting_dma = ls2k1000_mmc_set_external_dma,
+ 	.prepare_dma = loongson2_mmc_prepare_external_dma,
+ 	.release_dma = loongson2_mmc_release_external_dma,
+ };
+ 
  static struct loongson2_mmc_pdata ls2k2000_mmc_pdata = {
+ 	.flags = 0,
  	.regmap_config = &ls2k2000_mmc_regmap_config,
  	.reorder_cmd_data = ls2k2000_mmc_reorder_cmd_data,
  	.fix_data_timeout = ls2k2000_mmc_fix_data_timeout,
···
  }
  
  static const struct of_device_id loongson2_mmc_of_ids[] = {
+ 	{ .compatible = "loongson,ls2k0300-mmc", .data = &ls2k0300_mmc_pdata },
  	{ .compatible = "loongson,ls2k0500-mmc", .data = &ls2k0500_mmc_pdata },
  	{ .compatible = "loongson,ls2k1000-mmc", .data = &ls2k1000_mmc_pdata },
  	{ .compatible = "loongson,ls2k2000-mmc", .data = &ls2k2000_mmc_pdata },
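The 2K0300 quirk above short-circuits CMD48 in the request path: when the flag is set and the opcode matches, the request is completed immediately rather than issued to the controller. The decision itself is a pure predicate and can be sketched standalone (CMD48 is SD_READ_EXTR_SINGLE per the SD spec):

```c
#include <assert.h>

#define LOONGSON2_MMC_CMD48_QUIRK	(1U << 0)
#define SD_READ_EXTR_SINGLE		48	/* CMD48 opcode */

/* Nonzero when the request must be completed immediately instead of
 * being issued to the (broken) controller. */
static int loongson2_mmc_skip_request(unsigned int pdata_flags,
				      unsigned int opcode)
{
	return (pdata_flags & LOONGSON2_MMC_CMD48_QUIRK) &&
	       opcode == SD_READ_EXTR_SINGLE;
}
```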
+29 -3
drivers/mmc/host/mtk-sd.c
···
  #define SDC_CFG_DTOC		GENMASK(31, 24)	/* RW */
  
  /* SDC_STS mask */
- #define SDC_STS_SDCBUSY		BIT(0)	/* RW */
- #define SDC_STS_CMDBUSY		BIT(1)	/* RW */
- #define SDC_STS_SWR_COMPL	BIT(31)	/* RW */
+ #define SDC_STS_SDCBUSY			BIT(0)	/* RW */
+ #define SDC_STS_CMDBUSY			BIT(1)	/* RW */
+ #define SDC_STS_SPM_RESOURCE_RELEASE	BIT(3)	/* RW */
+ #define SDC_STS_SWR_COMPL		BIT(31)	/* RW */
  
  /* SDC_ADV_CFG0 mask */
  #define SDC_DAT1_IRQ_TRIGGER	BIT(19)	/* RW */
···
  	bool use_internal_cd;
  	bool support_new_tx;
  	bool support_new_rx;
+ 	bool support_spm_res_release;
  };
  
  struct msdc_tune_para {
···
  	.stop_dly_sel = 3,
  };
  
+ static const struct mtk_mmc_compatible mt8189_compat = {
+ 	.clk_div_bits = 12,
+ 	.recheck_sdio_irq = false,
+ 	.hs400_tune = false,
+ 	.needs_top_base = true,
+ 	.pad_tune_reg = MSDC_PAD_TUNE0,
+ 	.async_fifo = true,
+ 	.data_tune = false,
+ 	.busy_check = true,
+ 	.stop_clk_fix = true,
+ 	.stop_dly_sel = 3,
+ 	.pop_en_cnt = 8,
+ 	.enhance_rx = true,
+ 	.support_64g = true,
+ 	.support_new_tx = false,
+ 	.support_new_rx = false,
+ 	.support_spm_res_release = true,
+ };
+ 
  static const struct mtk_mmc_compatible mt8196_compat = {
  	.clk_div_bits = 12,
  	.recheck_sdio_irq = false,
···
  	{ .compatible = "mediatek,mt8135-mmc", .data = &mt8135_compat},
  	{ .compatible = "mediatek,mt8173-mmc", .data = &mt8173_compat},
  	{ .compatible = "mediatek,mt8183-mmc", .data = &mt8183_compat},
+ 	{ .compatible = "mediatek,mt8189-mmc", .data = &mt8189_compat},
  	{ .compatible = "mediatek,mt8196-mmc", .data = &mt8196_compat},
  	{ .compatible = "mediatek,mt8516-mmc", .data = &mt8516_compat},
···
  		__msdc_enable_sdio_irq(host, 0);
  	}
+ 
+ 	if (host->dev_comp->support_spm_res_release)
+ 		sdr_set_bits(host->base + SDC_STS, SDC_STS_SPM_RESOURCE_RELEASE);
+ 
  	msdc_gate_clock(host);
  	return 0;
  }
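The suspend-path change above uses mtk-sd's sdr_set_bits() helper, a read-modify-write that ORs bits into a register. A plain-memory stand-in of the helper (the real one targets MMIO through readl/writel):

```c
#include <assert.h>
#include <stdint.h>

#define SDC_STS_SPM_RESOURCE_RELEASE	(1U << 3)

/* Plain-memory stand-in for mtk-sd's sdr_set_bits() read-modify-write */
static void sdr_set_bits(uint32_t *reg, uint32_t bits)
{
	*reg |= bits;	/* preserves any bits already set */
}
```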
+6
drivers/mmc/host/renesas_sdhi_core.c
···
  #include <linux/mmc/mmc.h>
  #include <linux/mmc/slot-gpio.h>
  #include <linux/module.h>
+ #include <linux/mux/consumer.h>
  #include <linux/pinctrl/consumer.h>
  #include <linux/pinctrl/pinctrl-state.h>
  #include <linux/platform_data/tmio.h>
···
  	struct regulator_dev *rdev;
  	struct renesas_sdhi_dma *dma_priv;
  	struct device *dev = &pdev->dev;
+ 	struct mux_state *mux_state;
  	struct tmio_mmc_host *host;
  	struct renesas_sdhi *priv;
  	int num_irqs, irq, ret, i;
···
  		priv->pins_uhs = pinctrl_lookup_state(priv->pinctrl,
  						      "state_uhs");
  	}
+ 
+ 	mux_state = devm_mux_state_get_optional_selected(&pdev->dev, NULL);
+ 	if (IS_ERR(mux_state))
+ 		return PTR_ERR(mux_state);
  
  	host = tmio_mmc_host_alloc(pdev, mmc_data);
  	if (IS_ERR(host))
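The new devm_mux_state_get_optional_selected() call above follows the kernel's "optional get" convention: a genuinely missing resource comes back as NULL (not an error), while real failures come back as ERR_PTR values, so the caller only has to check IS_ERR(). A user-space sketch of that convention, with simplified ERR_PTR/IS_ERR stand-ins:

```c
#include <assert.h>
#include <stddef.h>

#define ENOENT		2
#define MAX_ERRNO	4095

/* Stand-ins for the kernel's ERR_PTR()/PTR_ERR()/IS_ERR(): errors travel
 * as pointers in the top 4095 values of the address space. */
static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

/* "Optional" get: absence (-ENOENT) is not an error and becomes NULL;
 * every other error is propagated to the caller unchanged. */
static void *get_optional(void *raw)
{
	if (IS_ERR(raw) && PTR_ERR(raw) == -ENOENT)
		return NULL;
	return raw;
}
```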
+4 -8
drivers/mmc/host/renesas_sdhi_sys_dmac.c
···
  				 of_device_get_match_data(&pdev->dev), NULL);
  }
  
- static const struct dev_pm_ops renesas_sdhi_sys_dmac_dev_pm_ops = {
- 	SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
- 				pm_runtime_force_resume)
- 	SET_RUNTIME_PM_OPS(tmio_mmc_host_runtime_suspend,
- 			   tmio_mmc_host_runtime_resume,
- 			   NULL)
- };
+ static DEFINE_RUNTIME_DEV_PM_OPS(renesas_sdhi_sys_dmac_dev_pm_ops,
+ 				 tmio_mmc_host_runtime_suspend,
+ 				 tmio_mmc_host_runtime_resume, NULL);
  
  static struct platform_driver renesas_sys_dmac_sdhi_driver = {
  	.driver		= {
  		.name	= "sh_mobile_sdhi",
  		.probe_type = PROBE_PREFER_ASYNCHRONOUS,
- 		.pm	= &renesas_sdhi_sys_dmac_dev_pm_ops,
+ 		.pm	= pm_ptr(&renesas_sdhi_sys_dmac_dev_pm_ops),
  		.of_match_table = renesas_sdhi_sys_dmac_of_match,
  	},
  	.probe		= renesas_sdhi_sys_dmac_probe,
+6 -82
drivers/mmc/host/rtsx_pci_sdmmc.c
···
  	return cd;
  }
  
- static int sd_wait_voltage_stable_1(struct realtek_pci_sdmmc *host)
- {
- 	struct rtsx_pcr *pcr = host->pcr;
- 	int err;
- 	u8 stat;
- 
- 	/* Reference to Signal Voltage Switch Sequence in SD spec.
- 	 * Wait for a period of time so that the card can drive SD_CMD and
- 	 * SD_DAT[3:0] to low after sending back CMD11 response.
- 	 */
- 	mdelay(1);
- 
- 	/* SD_CMD, SD_DAT[3:0] should be driven to low by card;
- 	 * If either one of SD_CMD,SD_DAT[3:0] is not low,
- 	 * abort the voltage switch sequence;
- 	 */
- 	err = rtsx_pci_read_register(pcr, SD_BUS_STAT, &stat);
- 	if (err < 0)
- 		return err;
- 
- 	if (stat & (SD_CMD_STATUS | SD_DAT3_STATUS | SD_DAT2_STATUS |
- 				SD_DAT1_STATUS | SD_DAT0_STATUS))
- 		return -EINVAL;
- 
- 	/* Stop toggle SD clock */
- 	err = rtsx_pci_write_register(pcr, SD_BUS_STAT,
- 			0xFF, SD_CLK_FORCE_STOP);
- 	if (err < 0)
- 		return err;
- 
- 	return 0;
- }
- 
- static int sd_wait_voltage_stable_2(struct realtek_pci_sdmmc *host)
- {
- 	struct rtsx_pcr *pcr = host->pcr;
- 	int err;
- 	u8 stat, mask, val;
- 
- 	/* Wait 1.8V output of voltage regulator in card stable */
- 	msleep(50);
- 
- 	/* Toggle SD clock again */
- 	err = rtsx_pci_write_register(pcr, SD_BUS_STAT, 0xFF, SD_CLK_TOGGLE_EN);
- 	if (err < 0)
- 		return err;
- 
- 	/* Wait for a period of time so that the card can drive
- 	 * SD_DAT[3:0] to high at 1.8V
- 	 */
- 	msleep(20);
- 
- 	/* SD_CMD, SD_DAT[3:0] should be pulled high by host */
- 	err = rtsx_pci_read_register(pcr, SD_BUS_STAT, &stat);
- 	if (err < 0)
- 		return err;
- 
- 	mask = SD_CMD_STATUS | SD_DAT3_STATUS | SD_DAT2_STATUS |
- 		SD_DAT1_STATUS | SD_DAT0_STATUS;
- 	val = SD_CMD_STATUS | SD_DAT3_STATUS | SD_DAT2_STATUS |
- 		SD_DAT1_STATUS | SD_DAT0_STATUS;
- 	if ((stat & mask) != val) {
- 		dev_dbg(sdmmc_dev(host),
- 			"%s: SD_BUS_STAT = 0x%x\n", __func__, stat);
- 		rtsx_pci_write_register(pcr, SD_BUS_STAT,
- 				SD_CLK_TOGGLE_EN | SD_CLK_FORCE_STOP, 0);
- 		rtsx_pci_write_register(pcr, CARD_CLK_EN, 0xFF, 0);
- 		return -EINVAL;
- 	}
- 
- 	return 0;
- }
- 
  static int sdmmc_switch_voltage(struct mmc_host *mmc, struct mmc_ios *ios)
  {
  	struct realtek_pci_sdmmc *host = mmc_priv(mmc);
···
  		voltage = OUTPUT_1V8;
  
  	if (voltage == OUTPUT_1V8) {
- 		err = sd_wait_voltage_stable_1(host);
+ 		/* Stop toggle SD clock */
+ 		err = rtsx_pci_write_register(pcr, SD_BUS_STAT,
+ 				0xFF, SD_CLK_FORCE_STOP);
  		if (err < 0)
  			goto out;
  	}
···
  	if (err < 0)
  		goto out;
  
- 	if (voltage == OUTPUT_1V8) {
- 		err = sd_wait_voltage_stable_2(host);
- 		if (err < 0)
- 			goto out;
- 	}
- 
  out:
  	/* Stop toggle SD clock in idle */
- 	err = rtsx_pci_write_register(pcr, SD_BUS_STAT,
- 			SD_CLK_TOGGLE_EN | SD_CLK_FORCE_STOP, 0);
+ 	if (err < 0)
+ 		rtsx_pci_write_register(pcr, SD_BUS_STAT,
+ 				SD_CLK_TOGGLE_EN | SD_CLK_FORCE_STOP, 0);
  
  	mutex_unlock(&pcr->pcr_mutex);
+28 -5
drivers/mmc/host/sdhci-esdhc-imx.c
···
  #define ESDHC_FLAG_DUMMY_PAD	BIT(19)
  
  #define ESDHC_AUTO_TUNING_WINDOW	3
+ /* 100ms timeout for data inhibit */
+ #define ESDHC_DATA_INHIBIT_WAIT_US	100000
  
  enum wp_types {
  	ESDHC_WP_NONE,	/* no WP, neither controller nor gpio */
···
  	.quirks = SDHCI_QUIRK_NO_LED,
  };
  
+ static struct esdhc_soc_data usdhc_s32n79_data = {
+ 	.flags = ESDHC_FLAG_USDHC | ESDHC_FLAG_MAN_TUNING
+ 			| ESDHC_FLAG_HAVE_CAP1 | ESDHC_FLAG_HS200
+ 			| ESDHC_FLAG_HS400 | ESDHC_FLAG_HS400_ES
+ 			| ESDHC_FLAG_SKIP_ERR004536,
+ 	.quirks = SDHCI_QUIRK_NO_LED,
+ };
+ 
  static struct esdhc_soc_data usdhc_imx7ulp_data = {
  	.flags = ESDHC_FLAG_USDHC | ESDHC_FLAG_MAN_TUNING
  			| ESDHC_FLAG_HAVE_CAP1 | ESDHC_FLAG_HS200
···
  	{ .compatible = "fsl,imx95-usdhc", .data = &usdhc_imx95_data, },
  	{ .compatible = "fsl,imxrt1050-usdhc", .data = &usdhc_imxrt1050_data, },
  	{ .compatible = "nxp,s32g2-usdhc", .data = &usdhc_s32g2_data, },
+ 	{ .compatible = "nxp,s32n79-usdhc", .data = &usdhc_s32n79_data, },
  	{ /* sentinel */ }
  };
  MODULE_DEVICE_TABLE(of, imx_esdhc_dt_ids);
···
  static void esdhc_reset(struct sdhci_host *host, u8 mask)
  {
+ 	u32 present_state;
+ 	int ret;
+ 
+ 	/*
+ 	 * For data or full reset, ensure any active data transfer completes
+ 	 * before resetting to avoid system hang.
+ 	 */
+ 	if (mask & (SDHCI_RESET_DATA | SDHCI_RESET_ALL)) {
+ 		ret = readl_poll_timeout_atomic(host->ioaddr + ESDHC_PRSSTAT, present_state,
+ 						!(present_state & SDHCI_DATA_INHIBIT), 2,
+ 						ESDHC_DATA_INHIBIT_WAIT_US);
+ 		if (ret == -ETIMEDOUT)
+ 			dev_warn(mmc_dev(host->mmc),
+ 				 "timeout waiting for data transfer completion\n");
+ 	}
+ 
  	sdhci_and_cqhci_reset(host, mask);
  
  	sdhci_writel(host, host->ier, SDHCI_INT_ENABLE);
···
  	of_property_read_u32(np, "fsl,strobe-dll-delay-target",
  			     &boarddata->strobe_dll_delay_target);
- 	if (of_property_read_bool(np, "no-1-8-v"))
- 		host->quirks2 |= SDHCI_QUIRK2_NO_1_8_V;
  
  	if (of_property_read_u32(np, "fsl,delay-line", &boarddata->delay_line))
  		boarddata->delay_line = 0;
···
  	if (ret)
  		return ret;
  
- 	/* HS400/HS400ES require 8 bit bus */
- 	if (!(host->mmc->caps & MMC_CAP_8_BIT_DATA))
- 		host->mmc->caps2 &= ~(MMC_CAP2_HS400 | MMC_CAP2_HS400_ES);
+ 	sdhci_get_property(pdev);
  
  	if (mmc_gpio_get_cd(host->mmc) >= 0)
  		host->quirks &= ~SDHCI_QUIRK_BROKEN_CARD_DETECTION;
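The esdhc_reset() change above is the poll-until-clear-with-timeout pattern: keep reading a status register until DATA_INHIBIT drops, and give up after a bounded number of attempts instead of hanging. A toy model of that control flow, with a fake register that "progresses" on each poll (the real code uses readl_poll_timeout_atomic() against MMIO):

```c
#include <assert.h>
#include <stdint.h>

#define SDHCI_DATA_INHIBIT	(1U << 1)

/* Poll a fake status word until DATA_INHIBIT clears, or give up after
 * max_polls attempts. Clearing the bit ourselves stands in for the
 * hardware finishing the transfer between polls. */
static int poll_data_inhibit(uint32_t *reg, int max_polls)
{
	for (int i = 0; i < max_polls; i++) {
		if (!(*reg & SDHCI_DATA_INHIBIT))
			return 0;		/* transfer finished */
		*reg &= ~SDHCI_DATA_INHIBIT;	/* "hardware" progresses */
	}
	return -1;	/* would map to -ETIMEDOUT in the driver */
}
```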
+118 -6
drivers/mmc/host/sdhci-msm.c
···
  #define CQHCI_VENDOR_CFG1	0xA00
  #define CQHCI_VENDOR_DIS_RST_ON_CQ_EN	(0x3 << 13)
  
+ /* non command queue crypto enable register */
+ #define NONCQ_CRYPTO_PARM	0x70
+ #define NONCQ_CRYPTO_DUN	0x74
+ 
+ #define DISABLE_CRYPTO		BIT(15)
+ #define CRYPTO_GENERAL_ENABLE	BIT(1)
+ #define HC_VENDOR_SPECIFIC_FUNC4	0x260
+ 
+ #define ICE_HCI_PARAM_CCI	GENMASK(7, 0)
+ #define ICE_HCI_PARAM_CE	GENMASK(8, 8)
+ 
  struct sdhci_msm_offset {
  	u32 core_hc_mode;
  	u32 core_mci_data_cnt;
···
  	u32 dll_config;
  	u32 ddr_config;
  	bool vqmmc_enabled;
+ 	bool non_cqe_ice_init_done;
  };
  
  static const struct sdhci_msm_offset *sdhci_priv_msm_offset(struct sdhci_host *host)
···
  	if (IS_ERR_OR_NULL(ice))
  		return PTR_ERR_OR_ZERO(ice);
  
- 	if (qcom_ice_get_supported_key_type(ice) != BLK_CRYPTO_KEY_TYPE_RAW) {
- 		dev_warn(dev, "Wrapped keys not supported. Disabling inline encryption support.\n");
- 		return 0;
- 	}
- 
  	msm_host->ice = ice;
  
  	/* Initialize the blk_crypto_profile */
···
  	profile->ll_ops = sdhci_msm_crypto_ops;
  	profile->max_dun_bytes_supported = 4;
- 	profile->key_types_supported = BLK_CRYPTO_KEY_TYPE_RAW;
+ 	profile->key_types_supported = qcom_ice_get_supported_key_type(ice);
  	profile->dev = dev;
  
  	/*
···
  	return qcom_ice_evict_key(msm_host->ice, slot);
  }
  
+ static int sdhci_msm_ice_derive_sw_secret(struct blk_crypto_profile *profile,
+ 					  const u8 *eph_key, size_t eph_key_size,
+ 					  u8 sw_secret[BLK_CRYPTO_SW_SECRET_SIZE])
+ {
+ 	struct sdhci_msm_host *msm_host = sdhci_msm_host_from_crypto_profile(profile);
+ 
+ 	return qcom_ice_derive_sw_secret(msm_host->ice, eph_key, eph_key_size,
+ 					 sw_secret);
+ }
+ 
+ static int sdhci_msm_ice_import_key(struct blk_crypto_profile *profile,
+ 				    const u8 *raw_key, size_t raw_key_size,
+ 				    u8 lt_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE])
+ {
+ 	struct sdhci_msm_host *msm_host = sdhci_msm_host_from_crypto_profile(profile);
+ 
+ 	return qcom_ice_import_key(msm_host->ice, raw_key, raw_key_size, lt_key);
+ }
+ 
+ static int sdhci_msm_ice_generate_key(struct blk_crypto_profile *profile,
+ 				      u8 lt_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE])
+ {
+ 	struct sdhci_msm_host *msm_host = sdhci_msm_host_from_crypto_profile(profile);
+ 
+ 	return qcom_ice_generate_key(msm_host->ice, lt_key);
+ }
+ 
+ static int sdhci_msm_ice_prepare_key(struct blk_crypto_profile *profile,
+ 				     const u8 *lt_key, size_t lt_key_size,
+ 				     u8 eph_key[BLK_CRYPTO_MAX_HW_WRAPPED_KEY_SIZE])
+ {
+ 	struct sdhci_msm_host *msm_host = sdhci_msm_host_from_crypto_profile(profile);
+ 
+ 	return qcom_ice_prepare_key(msm_host->ice, lt_key, lt_key_size, eph_key);
+ }
+ 
+ static void sdhci_msm_non_cqe_ice_init(struct sdhci_host *host)
+ {
+ 	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+ 	struct sdhci_msm_host *msm_host = sdhci_pltfm_priv(pltfm_host);
+ 	struct mmc_host *mmc = msm_host->mmc;
+ 	struct cqhci_host *cq_host = mmc->cqe_private;
+ 	u32 config;
+ 
+ 	config = sdhci_readl(host, HC_VENDOR_SPECIFIC_FUNC4);
+ 	config &= ~DISABLE_CRYPTO;
+ 	sdhci_writel(host, config, HC_VENDOR_SPECIFIC_FUNC4);
+ 	config = cqhci_readl(cq_host, CQHCI_CFG);
+ 	config |= CRYPTO_GENERAL_ENABLE;
+ 	cqhci_writel(cq_host, config, CQHCI_CFG);
+ }
+ 
+ static void sdhci_msm_ice_cfg(struct sdhci_host *host, struct mmc_request *mrq)
+ {
+ 	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+ 	struct sdhci_msm_host *msm_host = sdhci_pltfm_priv(pltfm_host);
+ 	struct mmc_host *mmc = msm_host->mmc;
+ 	struct cqhci_host *cq_host = mmc->cqe_private;
+ 	unsigned int crypto_params = 0;
+ 	int key_index;
+ 
+ 	if (mrq->crypto_ctx) {
+ 		if (!msm_host->non_cqe_ice_init_done) {
+ 			sdhci_msm_non_cqe_ice_init(host);
+ 			msm_host->non_cqe_ice_init_done = true;
+ 		}
+ 
+ 		key_index = mrq->crypto_key_slot;
+ 		crypto_params = FIELD_PREP(ICE_HCI_PARAM_CE, 1) |
+ 				FIELD_PREP(ICE_HCI_PARAM_CCI, key_index);
+ 
+ 		cqhci_writel(cq_host, crypto_params, NONCQ_CRYPTO_PARM);
+ 		cqhci_writel(cq_host, lower_32_bits(mrq->crypto_ctx->bc_dun[0]),
+ 			     NONCQ_CRYPTO_DUN);
+ 	} else {
+ 		cqhci_writel(cq_host, crypto_params, NONCQ_CRYPTO_PARM);
+ 	}
+ 
+ 	/* Ensure crypto configuration is written before proceeding */
+ 	wmb();
+ }
+ 
+ /*
+  * Handle non-CQE MMC requests with ICE crypto support.
+  * Configures ICE registers before passing the request to
+  * the standard SDHCI handler.
+  */
+ static void sdhci_msm_request(struct mmc_host *mmc, struct mmc_request *mrq)
+ {
+ 	struct sdhci_host *host = mmc_priv(mmc);
+ 
+ 	/* Only need to handle non-CQE crypto requests in this path */
+ 	if (mmc->caps2 & MMC_CAP2_CRYPTO)
+ 		sdhci_msm_ice_cfg(host, mrq);
+ 
+ 	sdhci_request(mmc, mrq);
+ }
+ 
  static const struct blk_crypto_ll_ops sdhci_msm_crypto_ops = {
  	.keyslot_program = sdhci_msm_ice_keyslot_program,
  	.keyslot_evict = sdhci_msm_ice_keyslot_evict,
+ 	.derive_sw_secret = sdhci_msm_ice_derive_sw_secret,
+ 	.import_key = sdhci_msm_ice_import_key,
+ 	.generate_key = sdhci_msm_ice_generate_key,
+ 	.prepare_key = sdhci_msm_ice_prepare_key,
  };
  
  #else /* CONFIG_MMC_CRYPTO */
···
  	msm_host->mmc->caps |= MMC_CAP_WAIT_WHILE_BUSY | MMC_CAP_NEED_RSP_BUSY;
  
+ #ifdef CONFIG_MMC_CRYPTO
+ 	host->mmc_host_ops.request = sdhci_msm_request;
+ #endif
  	/* Set the timeout value to max possible */
  	host->max_timeout_count = 0xF;
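sdhci_msm_ice_cfg() above packs the key slot and crypto-enable bit into one register word with FIELD_PREP() against GENMASK() field definitions. A self-contained sketch of that packing, using simplified GENMASK/FIELD_PREP stand-ins (the kernel versions live in linux/bits.h and linux/bitfield.h):

```c
#include <assert.h>
#include <stdint.h>

#define GENMASK(h, l)	(((~0U) << (l)) & (~0U >> (31 - (h))))
#define ICE_HCI_PARAM_CCI	GENMASK(7, 0)	/* crypto config index */
#define ICE_HCI_PARAM_CE	GENMASK(8, 8)	/* crypto enable */

/* Minimal FIELD_PREP: shift the value up to the mask's lowest set bit,
 * then clamp it to the mask. */
#define FIELD_PREP(mask, val) \
	(((uint32_t)(val) << __builtin_ctz(mask)) & (mask))

static uint32_t ice_crypto_params(int key_index)
{
	return FIELD_PREP(ICE_HCI_PARAM_CE, 1) |
	       FIELD_PREP(ICE_HCI_PARAM_CCI, key_index);
}
```

So key slot 3 with crypto enabled packs to bit 8 set plus 3 in the low byte.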
+27 -61
drivers/mmc/host/sdhci-of-arasan.c
···
   * @sdcardclk:		Pointer to normal 'struct clock' for sdcardclk_hw.
   * @sampleclk_hw:	Struct for the clock we might provide to a PHY.
   * @sampleclk:		Pointer to normal 'struct clock' for sampleclk_hw.
-  * @clk_phase_in:	Array of Input Clock Phase Delays for all speed modes
-  * @clk_phase_out:	Array of Output Clock Phase Delays for all speed modes
+  * @phase_map:		Struct for mmc_clk_phase_map provided.
   * @set_clk_delays:	Function pointer for setting Clock Delays
   * @clk_of_data:	Platform specific runtime clock data storage pointer
···
  	struct clk *sdcardclk;
  	struct clk_hw sampleclk_hw;
  	struct clk *sampleclk;
- 	int clk_phase_in[MMC_TIMING_MMC_HS400 + 1];
- 	int clk_phase_out[MMC_TIMING_MMC_HS400 + 1];
+ 	struct mmc_clk_phase_map phase_map;
  	void (*set_clk_delays)(struct sdhci_host *host);
  	void *clk_of_data;
  };
···
  	struct sdhci_arasan_clk_data *clk_data = &sdhci_arasan->clk_data;
  
  	clk_set_phase(clk_data->sampleclk,
- 		      clk_data->clk_phase_in[host->timing]);
+ 		      clk_data->phase_map.phase[host->timing].in_deg);
  	clk_set_phase(clk_data->sdcardclk,
- 		      clk_data->clk_phase_out[host->timing]);
- }
- 
- static void arasan_dt_read_clk_phase(struct device *dev,
- 				     struct sdhci_arasan_clk_data *clk_data,
- 				     unsigned int timing, const char *prop)
- {
- 	struct device_node *np = dev->of_node;
- 
- 	u32 clk_phase[2] = {0};
- 	int ret;
- 
- 	/*
- 	 * Read Tap Delay values from DT, if the DT does not contain the
- 	 * Tap Values then use the pre-defined values.
- 	 */
- 	ret = of_property_read_variable_u32_array(np, prop, &clk_phase[0],
- 						  2, 0);
- 	if (ret < 0) {
- 		dev_dbg(dev, "Using predefined clock phase for %s = %d %d\n",
- 			prop, clk_data->clk_phase_in[timing],
- 			clk_data->clk_phase_out[timing]);
- 		return;
- 	}
- 
- 	/* The values read are Input and Output Clock Delays in order */
- 	clk_data->clk_phase_in[timing] = clk_phase[0];
- 	clk_data->clk_phase_out[timing] = clk_phase[1];
+ 		      clk_data->phase_map.phase[host->timing].out_deg);
  }
  
  /**
···
  	}
  
  	for (i = 0; i <= MMC_TIMING_MMC_HS400; i++) {
- 		clk_data->clk_phase_in[i] = zynqmp_iclk_phase[i];
- 		clk_data->clk_phase_out[i] = zynqmp_oclk_phase[i];
+ 		clk_data->phase_map.phase[i].in_deg = zynqmp_iclk_phase[i];
+ 		clk_data->phase_map.phase[i].out_deg = zynqmp_oclk_phase[i];
  	}
  }
···
  			VERSAL_OCLK_PHASE;
  
  	for (i = 0; i <= MMC_TIMING_MMC_HS400; i++) {
- 		clk_data->clk_phase_in[i] = versal_iclk_phase[i];
- 		clk_data->clk_phase_out[i] = versal_oclk_phase[i];
+ 		clk_data->phase_map.phase[i].in_deg = versal_iclk_phase[i];
+ 		clk_data->phase_map.phase[i].out_deg = versal_oclk_phase[i];
  	}
  }
  if (of_device_is_compatible(dev->of_node, "xlnx,versal-net-emmc")) {
···
  			VERSAL_NET_EMMC_OCLK_PHASE;
  
  	for (i = 0; i <= MMC_TIMING_MMC_HS400; i++) {
- 		clk_data->clk_phase_in[i] = versal_net_iclk_phase[i];
- 		clk_data->clk_phase_out[i] = versal_net_oclk_phase[i];
+ 		clk_data->phase_map.phase[i].in_deg = versal_net_iclk_phase[i];
+ 		clk_data->phase_map.phase[i].out_deg = versal_net_oclk_phase[i];
  	}
  }
- 	arasan_dt_read_clk_phase(dev, clk_data, MMC_TIMING_LEGACY,
- 				 "clk-phase-legacy");
- 	arasan_dt_read_clk_phase(dev, clk_data, MMC_TIMING_MMC_HS,
- 				 "clk-phase-mmc-hs");
- 	arasan_dt_read_clk_phase(dev, clk_data, MMC_TIMING_SD_HS,
- 				 "clk-phase-sd-hs");
- 	arasan_dt_read_clk_phase(dev, clk_data, MMC_TIMING_UHS_SDR12,
- 				 "clk-phase-uhs-sdr12");
- 	arasan_dt_read_clk_phase(dev, clk_data, MMC_TIMING_UHS_SDR25,
- 				 "clk-phase-uhs-sdr25");
- 	arasan_dt_read_clk_phase(dev, clk_data, MMC_TIMING_UHS_SDR50,
- 				 "clk-phase-uhs-sdr50");
- 	arasan_dt_read_clk_phase(dev, clk_data, MMC_TIMING_UHS_SDR104,
- 				 "clk-phase-uhs-sdr104");
- 	arasan_dt_read_clk_phase(dev, clk_data, MMC_TIMING_UHS_DDR50,
- 				 "clk-phase-uhs-ddr50");
- 	arasan_dt_read_clk_phase(dev, clk_data, MMC_TIMING_MMC_DDR52,
- 				 "clk-phase-mmc-ddr52");
- 	arasan_dt_read_clk_phase(dev, clk_data, MMC_TIMING_MMC_HS200,
- 				 "clk-phase-mmc-hs200");
- 	arasan_dt_read_clk_phase(dev, clk_data, MMC_TIMING_MMC_HS400,
- 				 "clk-phase-mmc-hs400");
+ 
+ 	mmc_of_parse_clk_phase(dev, &clk_data->phase_map);
  }
  
  static const struct sdhci_pltfm_data sdhci_arasan_pdata = {
···
  	.clk_ops = &arasan_clk_ops,
  };
  
+ static const struct sdhci_pltfm_data sdhci_arasan_axiado_pdata = {
+ 	.ops = &sdhci_arasan_ops,
+ 	.quirks = SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN |
+ 		  SDHCI_QUIRK_BROKEN_CQE,
+ };
+ 
+ static struct sdhci_arasan_of_data sdhci_arasan_axiado_data = {
+ 	.pdata = &sdhci_arasan_axiado_pdata,
+ 	.clk_ops = &arasan_clk_ops,
+ };
+ 
  static const struct of_device_id sdhci_arasan_of_match[] = {
  	/* SoC-specific compatible strings w/ soc_ctl_map */
  	{
···
  	{
  		.compatible = "intel,keembay-sdhci-5.1-sdio",
  		.data = &intel_keembay_sdio_data,
  	},
+ 	{
+ 		.compatible = "axiado,ax3000-sdhci-5.1-emmc",
+ 		.data = &sdhci_arasan_axiado_data,
+ 	},
  	/* Generic compatible below here */
  	{
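The arasan conversion above replaces two parallel int arrays (one input phase, one output phase per speed mode) with the MMC core's shared mmc_clk_phase_map, indexed by timing. A simplified sketch of that data-structure shape (the real struct in the MMC core carries an extra validity flag per entry; timing values below follow the usual MMC numbering but are repeated here only for illustration):

```c
#include <assert.h>

#define MMC_TIMING_LEGACY	0
#define MMC_TIMING_MMC_HS200	9
#define MMC_TIMING_MMC_HS400	10
#define MMC_NUM_TIMINGS		(MMC_TIMING_MMC_HS400 + 1)

/* One {input, output} phase pair per speed mode, replacing the old
 * clk_phase_in[] / clk_phase_out[] parallel arrays. */
struct mmc_clk_phase {
	int in_deg;
	int out_deg;
};

struct mmc_clk_phase_map {
	struct mmc_clk_phase phase[MMC_NUM_TIMINGS];
};

static int phase_in_for_timing(const struct mmc_clk_phase_map *map, int timing)
{
	return map->phase[timing].in_deg;
}
```

Keeping both delays in one entry means a single DT parse helper (mmc_of_parse_clk_phase) can fill the whole table instead of eleven per-property calls.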
+8 -3
drivers/mmc/host/sdhci-of-aspeed.c
···
  #include <linux/of.h>
  #include <linux/of_platform.h>
  #include <linux/platform_device.h>
+ #include <linux/reset.h>
  #include <linux/spinlock.h>
  
  #include "sdhci-pltfm.h"
···
  static int aspeed_sdc_probe(struct platform_device *pdev)
  
  {
- 	struct device_node *parent, *child;
+ 	struct reset_control *reset;
+ 	struct device_node *parent;
  	struct aspeed_sdc *sdc;
  	int ret;
···
  		return -ENOMEM;
  
  	spin_lock_init(&sdc->lock);
+ 
+ 	reset = devm_reset_control_get_optional_exclusive_deasserted(&pdev->dev, NULL);
+ 	if (IS_ERR(reset))
+ 		return dev_err_probe(&pdev->dev, PTR_ERR(reset), "unable to acquire reset\n");
  
  	sdc->clk = devm_clk_get(&pdev->dev, NULL);
  	if (IS_ERR(sdc->clk))
···
  	dev_set_drvdata(&pdev->dev, sdc);
  
  	parent = pdev->dev.of_node;
- 	for_each_available_child_of_node(parent, child) {
+ 	for_each_available_child_of_node_scoped(parent, child) {
  		struct platform_device *cpdev;
  
  		cpdev = of_platform_device_create(child, NULL, &pdev->dev);
  		if (!cpdev) {
- 			of_node_put(child);
  			ret = -ENODEV;
  			goto err_clk;
  		}
+523
drivers/mmc/host/sdhci-of-bst.c
// SPDX-License-Identifier: GPL-2.0+
/*
 * SDHCI driver for Black Sesame Technologies C1200 controller
 *
 * Copyright (c) 2025 Black Sesame Technologies
 */

#include <linux/bits.h>
#include <linux/bitfield.h>
#include <linux/delay.h>
#include <linux/dma-mapping.h>
#include <linux/iopoll.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_reserved_mem.h>
#include <linux/platform_device.h>
#include "sdhci.h"
#include "sdhci-pltfm.h"

/* SDHCI register extensions */
#define SDHCI_CLOCK_PLL_EN	0x0008
#define SDHCI_VENDOR_PTR_R	0xE8

/* BST-specific tuning parameters */
#define BST_TUNING_COUNT	0x20

/* Synopsys vendor specific registers */
#define SDHC_EMMC_CTRL_R_OFFSET	0x2C
#define MBIU_CTRL		0x510

/* MBIU burst control bits */
#define BURST_INCR16_EN		BIT(3)
#define BURST_INCR8_EN		BIT(2)
#define BURST_INCR4_EN		BIT(1)
#define BURST_EN		(BURST_INCR16_EN | BURST_INCR8_EN | BURST_INCR4_EN)
#define MBIU_BURST_MASK		GENMASK(3, 0)

/* CRM (Clock/Reset/Management) register offsets */
#define SDEMMC_CRM_BCLK_DIV_CTRL	0x08
#define SDEMMC_CRM_TIMER_DIV_CTRL	0x0C
#define SDEMMC_CRM_RX_CLK_CTRL		0x14
#define SDEMMC_CRM_VOL_CTRL		0x1C
#define REG_WR_PROTECT			0x88
#define DELAY_CHAIN_SEL			0x94

/* CRM register values and bit definitions */
#define REG_WR_PROTECT_KEY	0x1234abcd
#define BST_VOL_STABLE_ON	BIT(7)
#define BST_TIMER_DIV_MASK	GENMASK(7, 0)
#define BST_TIMER_DIV_VAL	0x20
#define BST_TIMER_LOAD_BIT	BIT(8)
#define BST_BCLK_EN_BIT		BIT(10)
#define BST_RX_UPDATE_BIT	BIT(11)
#define BST_EMMC_CTRL_RST_N	BIT(2)	/* eMMC card reset control */

/* Clock frequency limits */
#define BST_DEFAULT_MAX_FREQ	200000000UL	/* 200 MHz */
#define BST_DEFAULT_MIN_FREQ	400000UL	/* 400 kHz */

/* Clock control bit definitions */
#define BST_CLOCK_DIV_MASK	GENMASK(7, 0)
#define BST_CLOCK_DIV_SHIFT	8
#define BST_BCLK_DIV_MASK	GENMASK(9, 0)

/* Clock frequency thresholds */
#define BST_CLOCK_THRESHOLD_LOW	1500

/* Clock stability polling parameters */
#define BST_CLK_STABLE_POLL_US		1000	/* Poll interval in microseconds */
#define BST_CLK_STABLE_TIMEOUT_US	20000	/* Timeout for internal clock stabilization (us) */

struct sdhci_bst_priv {
	void __iomem *crm_reg_base;
};

union sdhci_bst_rx_ctrl {
	struct {
		u32 rx_revert:1,
		    rx_clk_sel_sec:1,
		    rx_clk_div:4,
		    rx_clk_phase_inner:2,
		    rx_clk_sel_first:1,
		    rx_clk_phase_out:2,
		    rx_clk_en:1,
		    res0:20;
	};
	u32 reg;
};

static u32 sdhci_bst_crm_read(struct sdhci_pltfm_host *pltfm_host, u32 offset)
{
	struct sdhci_bst_priv *priv = sdhci_pltfm_priv(pltfm_host);

	return readl(priv->crm_reg_base + offset);
}

static void sdhci_bst_crm_write(struct sdhci_pltfm_host *pltfm_host, u32 offset, u32 value)
{
	struct sdhci_bst_priv *priv = sdhci_pltfm_priv(pltfm_host);

	writel(value, priv->crm_reg_base + offset);
}

static int sdhci_bst_wait_int_clk(struct sdhci_host *host)
{
	u16 clk;

	if (read_poll_timeout(sdhci_readw, clk, (clk & SDHCI_CLOCK_INT_STABLE),
			      BST_CLK_STABLE_POLL_US, BST_CLK_STABLE_TIMEOUT_US, false,
			      host, SDHCI_CLOCK_CONTROL))
		return -EBUSY;
	return 0;
}

static unsigned int sdhci_bst_get_max_clock(struct sdhci_host *host)
{
	return BST_DEFAULT_MAX_FREQ;
}

static unsigned int sdhci_bst_get_min_clock(struct sdhci_host *host)
{
	return BST_DEFAULT_MIN_FREQ;
}

static void sdhci_bst_enable_clk(struct sdhci_host *host, unsigned int clk)
{
	struct sdhci_pltfm_host *pltfm_host;
	unsigned int div;
	u32 val;
	union sdhci_bst_rx_ctrl rx_reg;

	pltfm_host = sdhci_priv(host);

	/* Calculate clock divider based on target frequency */
	if (clk == 0) {
		div = 0;
	} else if (clk < BST_DEFAULT_MIN_FREQ) {
		/* Below minimum: use max divider to get closest to min freq */
		div = BST_DEFAULT_MAX_FREQ / BST_DEFAULT_MIN_FREQ;
	} else if (clk <= BST_DEFAULT_MAX_FREQ) {
		/* Normal range: calculate divider directly */
		div = BST_DEFAULT_MAX_FREQ / clk;
	} else {
		/* Above maximum: no division needed */
		div = 1;
	}

	clk = sdhci_readw(host, SDHCI_CLOCK_CONTROL);
	clk &= ~SDHCI_CLOCK_CARD_EN;
	sdhci_writew(host, clk, SDHCI_CLOCK_CONTROL);

	clk &= ~SDHCI_CLOCK_PLL_EN;
	sdhci_writew(host, clk, SDHCI_CLOCK_CONTROL);

	val = sdhci_bst_crm_read(pltfm_host, SDEMMC_CRM_TIMER_DIV_CTRL);
	val &= ~BST_TIMER_LOAD_BIT;
	sdhci_bst_crm_write(pltfm_host, SDEMMC_CRM_TIMER_DIV_CTRL, val);

	val = sdhci_bst_crm_read(pltfm_host, SDEMMC_CRM_TIMER_DIV_CTRL);
	val &= ~BST_TIMER_DIV_MASK;
	val |= BST_TIMER_DIV_VAL;
	sdhci_bst_crm_write(pltfm_host, SDEMMC_CRM_TIMER_DIV_CTRL, val);

	val = sdhci_bst_crm_read(pltfm_host, SDEMMC_CRM_TIMER_DIV_CTRL);
	val |= BST_TIMER_LOAD_BIT;
	sdhci_bst_crm_write(pltfm_host, SDEMMC_CRM_TIMER_DIV_CTRL, val);

	val = sdhci_bst_crm_read(pltfm_host, SDEMMC_CRM_RX_CLK_CTRL);
	val &= ~BST_RX_UPDATE_BIT;
	sdhci_bst_crm_write(pltfm_host, SDEMMC_CRM_RX_CLK_CTRL, val);

	rx_reg.reg = sdhci_bst_crm_read(pltfm_host, SDEMMC_CRM_RX_CLK_CTRL);

	rx_reg.rx_revert = 0;
	rx_reg.rx_clk_sel_sec = 1;
	rx_reg.rx_clk_div = 4;
	rx_reg.rx_clk_phase_inner = 2;
	rx_reg.rx_clk_sel_first = 0;
	rx_reg.rx_clk_phase_out = 2;

	sdhci_bst_crm_write(pltfm_host, SDEMMC_CRM_RX_CLK_CTRL, rx_reg.reg);

	val = sdhci_bst_crm_read(pltfm_host, SDEMMC_CRM_RX_CLK_CTRL);
	val |= BST_RX_UPDATE_BIT;
	sdhci_bst_crm_write(pltfm_host, SDEMMC_CRM_RX_CLK_CTRL, val);

	/* Disable clock first */
	val = sdhci_bst_crm_read(pltfm_host, SDEMMC_CRM_BCLK_DIV_CTRL);
	val &= ~BST_BCLK_EN_BIT;
	sdhci_bst_crm_write(pltfm_host, SDEMMC_CRM_BCLK_DIV_CTRL, val);

	/* Setup clock divider */
	val = sdhci_bst_crm_read(pltfm_host, SDEMMC_CRM_BCLK_DIV_CTRL);
	val &= ~BST_BCLK_DIV_MASK;
	val |= div;
	sdhci_bst_crm_write(pltfm_host, SDEMMC_CRM_BCLK_DIV_CTRL, val);

	/* Enable clock */
	val = sdhci_bst_crm_read(pltfm_host, SDEMMC_CRM_BCLK_DIV_CTRL);
	val |= BST_BCLK_EN_BIT;
	sdhci_bst_crm_write(pltfm_host, SDEMMC_CRM_BCLK_DIV_CTRL, val);

	/* RMW the clock divider bits to avoid clobbering other fields */
	clk = sdhci_readw(host, SDHCI_CLOCK_CONTROL);
	clk &= ~(BST_CLOCK_DIV_MASK << BST_CLOCK_DIV_SHIFT);
	clk |= (div & BST_CLOCK_DIV_MASK) << BST_CLOCK_DIV_SHIFT;
	sdhci_writew(host, clk, SDHCI_CLOCK_CONTROL);

	clk = sdhci_readw(host, SDHCI_CLOCK_CONTROL);
	clk |= SDHCI_CLOCK_PLL_EN;
	sdhci_writew(host, clk, SDHCI_CLOCK_CONTROL);

	clk |= SDHCI_CLOCK_CARD_EN;
	sdhci_writew(host, clk, SDHCI_CLOCK_CONTROL);

	clk |= SDHCI_CLOCK_INT_EN;
	sdhci_writew(host, clk, SDHCI_CLOCK_CONTROL);
}

static void sdhci_bst_set_clock(struct sdhci_host *host, unsigned int clock)
{
	/* Turn off card/internal/PLL clocks when clock==0 to avoid idle power */
	u32 clk_reg = sdhci_readw(host, SDHCI_CLOCK_CONTROL);

	if (!clock) {
		clk_reg &= ~(SDHCI_CLOCK_CARD_EN | SDHCI_CLOCK_INT_EN | SDHCI_CLOCK_PLL_EN);
		sdhci_writew(host, clk_reg, SDHCI_CLOCK_CONTROL);
		return;
	}
	sdhci_bst_enable_clk(host, clock);
}

/*
 * sdhci_bst_reset - Reset the SDHCI host controller with special
 * handling for eMMC card reset control.
 */
static void sdhci_bst_reset(struct sdhci_host *host, u8 mask)
{
	u16 vendor_ptr, emmc_ctrl_reg;
	u32 reg;

	if (host->mmc->caps2 & MMC_CAP2_NO_SD) {
		vendor_ptr = sdhci_readw(host, SDHCI_VENDOR_PTR_R);
		emmc_ctrl_reg = vendor_ptr + SDHC_EMMC_CTRL_R_OFFSET;

		reg = sdhci_readw(host, emmc_ctrl_reg);
		reg &= ~BST_EMMC_CTRL_RST_N;
		sdhci_writew(host, reg, emmc_ctrl_reg);
		sdhci_reset(host, mask);
		usleep_range(10, 20);
		reg = sdhci_readw(host, emmc_ctrl_reg);
		reg |= BST_EMMC_CTRL_RST_N;
		sdhci_writew(host, reg, emmc_ctrl_reg);
	} else {
		sdhci_reset(host, mask);
	}
}

/* Set timeout control register to maximum value (0xE) */
static void sdhci_bst_set_timeout(struct sdhci_host *host, struct mmc_command *cmd)
{
	sdhci_writeb(host, 0xE, SDHCI_TIMEOUT_CONTROL);
}

/*
 * sdhci_bst_set_power - Set power mode and voltage, also configures
 * MBIU burst mode control based on power state.
 */
static void sdhci_bst_set_power(struct sdhci_host *host, unsigned char mode,
				unsigned short vdd)
{
	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
	u32 reg;
	u32 val;

	sdhci_set_power(host, mode, vdd);

	if (mode == MMC_POWER_OFF) {
		/* Disable MBIU burst mode */
		reg = sdhci_readw(host, MBIU_CTRL);
		reg &= ~BURST_EN; /* Clear all burst enable bits */
		sdhci_writew(host, reg, MBIU_CTRL);

		/* Disable CRM BCLK */
		val = sdhci_bst_crm_read(pltfm_host, SDEMMC_CRM_BCLK_DIV_CTRL);
		val &= ~BST_BCLK_EN_BIT;
		sdhci_bst_crm_write(pltfm_host, SDEMMC_CRM_BCLK_DIV_CTRL, val);

		/* Disable RX clock */
		val = sdhci_bst_crm_read(pltfm_host, SDEMMC_CRM_RX_CLK_CTRL);
		val &= ~BST_RX_UPDATE_BIT;
		sdhci_bst_crm_write(pltfm_host, SDEMMC_CRM_RX_CLK_CTRL, val);

		/* Turn off voltage stable power */
		val = sdhci_bst_crm_read(pltfm_host, SDEMMC_CRM_VOL_CTRL);
		val &= ~BST_VOL_STABLE_ON;
		sdhci_bst_crm_write(pltfm_host, SDEMMC_CRM_VOL_CTRL, val);
	} else {
		/* Configure burst mode only when powered on */
		reg = sdhci_readw(host, MBIU_CTRL);
		reg &= ~MBIU_BURST_MASK; /* Clear burst related bits */
		reg |= BURST_EN; /* Enable burst mode for better bandwidth */
		sdhci_writew(host, reg, MBIU_CTRL);
	}
}

/*
 * sdhci_bst_execute_tuning - Execute tuning procedure by trying different
 * delay chain values and selecting the optimal one.
 */
static int sdhci_bst_execute_tuning(struct sdhci_host *host, u32 opcode)
{
	struct sdhci_pltfm_host *pltfm_host;
	int ret = 0, error;
	int first_start = -1, first_end = -1, best = 0;
	int second_start = -1, second_end = -1, has_failure = 0;
	int i;

	pltfm_host = sdhci_priv(host);

	for (i = 0; i < BST_TUNING_COUNT; i++) {
		/* Protected write */
		sdhci_bst_crm_write(pltfm_host, REG_WR_PROTECT, REG_WR_PROTECT_KEY);
		/* Write tuning value */
		sdhci_bst_crm_write(pltfm_host, DELAY_CHAIN_SEL, (1ul << i) - 1);

		/* Wait for internal clock stable before tuning */
		if (sdhci_bst_wait_int_clk(host)) {
			dev_err(mmc_dev(host->mmc), "Internal clock never stabilised\n");
			return -EBUSY;
		}

		ret = mmc_send_tuning(host->mmc, opcode, &error);
		if (ret != 0) {
			has_failure = 1;
		} else {
			if (has_failure == 0) {
				if (first_start == -1)
					first_start = i;
				first_end = i;
			} else {
				if (second_start == -1)
					second_start = i;
				second_end = i;
			}
		}
	}

	/* Calculate best tuning value */
	if (first_end - first_start >= second_end - second_start)
		best = ((first_end - first_start) >> 1) + first_start;
	else
		best = ((second_end - second_start) >> 1) + second_start;

	if (best < 0)
		best = 0;

	sdhci_bst_crm_write(pltfm_host, DELAY_CHAIN_SEL, (1ul << best) - 1);
	/* Confirm internal clock stable after setting best tuning value */
	if (sdhci_bst_wait_int_clk(host)) {
		dev_err(mmc_dev(host->mmc), "Internal clock never stabilised\n");
		return -EBUSY;
	}

	return 0;
}

/* Enable voltage stable power for voltage switch */
static void sdhci_bst_voltage_switch(struct sdhci_host *host)
{
	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);

	/* Enable voltage stable power */
	sdhci_bst_crm_write(pltfm_host, SDEMMC_CRM_VOL_CTRL, BST_VOL_STABLE_ON);
}

static const struct sdhci_ops sdhci_bst_ops = {
	.set_clock = sdhci_bst_set_clock,
	.set_bus_width = sdhci_set_bus_width,
	.set_uhs_signaling = sdhci_set_uhs_signaling,
	.get_min_clock = sdhci_bst_get_min_clock,
	.get_max_clock = sdhci_bst_get_max_clock,
	.reset = sdhci_bst_reset,
	.set_power = sdhci_bst_set_power,
	.set_timeout = sdhci_bst_set_timeout,
	.platform_execute_tuning = sdhci_bst_execute_tuning,
	.voltage_switch = sdhci_bst_voltage_switch,
};

static const struct sdhci_pltfm_data sdhci_bst_pdata = {
	.ops = &sdhci_bst_ops,
	.quirks = SDHCI_QUIRK_BROKEN_ADMA |
		  SDHCI_QUIRK_DELAY_AFTER_POWER |
		  SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN |
		  SDHCI_QUIRK_INVERTED_WRITE_PROTECT,
	.quirks2 = SDHCI_QUIRK2_BROKEN_DDR50 |
		   SDHCI_QUIRK2_TUNING_WORK_AROUND |
		   SDHCI_QUIRK2_ACMD23_BROKEN,
};

static void sdhci_bst_free_bounce_buffer(struct sdhci_host *host)
{
	if (host->bounce_buffer) {
		dma_free_coherent(mmc_dev(host->mmc), host->bounce_buffer_size,
				  host->bounce_buffer, host->bounce_addr);
		host->bounce_buffer = NULL;
	}
	of_reserved_mem_device_release(mmc_dev(host->mmc));
}

static int sdhci_bst_alloc_bounce_buffer(struct sdhci_host *host)
{
	struct mmc_host *mmc = host->mmc;
	unsigned int bounce_size;
	int ret;

	/* Fixed SRAM bounce size to 32KB: verified config under 32-bit DMA addressing limit */
	bounce_size = SZ_32K;

	ret = of_reserved_mem_device_init_by_idx(mmc_dev(mmc), mmc_dev(mmc)->of_node, 0);
	if (ret) {
		dev_err(mmc_dev(mmc), "Failed to initialize reserved memory\n");
		return ret;
	}

	host->bounce_buffer = dma_alloc_coherent(mmc_dev(mmc), bounce_size,
						 &host->bounce_addr, GFP_KERNEL);
	if (!host->bounce_buffer) {
		of_reserved_mem_device_release(mmc_dev(mmc));
		return -ENOMEM;
	}

	host->bounce_buffer_size = bounce_size;

	return 0;
}

static int sdhci_bst_probe(struct platform_device *pdev)
{
	struct sdhci_pltfm_host *pltfm_host;
	struct sdhci_host *host;
	struct sdhci_bst_priv *priv;
	int err;

	host = sdhci_pltfm_init(pdev, &sdhci_bst_pdata, sizeof(struct sdhci_bst_priv));
	if (IS_ERR(host))
		return PTR_ERR(host);

	pltfm_host = sdhci_priv(host);
	priv = sdhci_pltfm_priv(pltfm_host); /* Get platform private data */

	err = mmc_of_parse(host->mmc);
	if (err)
		return err;

	sdhci_get_of_property(pdev);

	/* Get CRM registers from the second reg entry */
	priv->crm_reg_base = devm_platform_ioremap_resource(pdev, 1);
	if (IS_ERR(priv->crm_reg_base)) {
		err = PTR_ERR(priv->crm_reg_base);
		return err;
	}

	/*
	 * Silicon constraints for BST C1200:
	 * - System RAM base is 0x800000000 (above 32-bit addressable range)
	 * - The eMMC controller DMA engine is limited to 32-bit addressing
	 * - SMMU cannot be used on this path due to hardware design flaws
	 * - These are fixed in silicon and cannot be changed in software
	 *
	 * Bus/controller mapping:
	 * - No registers are available to reprogram the address mapping
	 * - The 32-bit DMA limit is a hard constraint of the controller IP
	 *
	 * Given these constraints, an SRAM-based bounce buffer in the 32-bit
	 * address space is required to enable eMMC DMA on this platform.
	 */
	err = sdhci_bst_alloc_bounce_buffer(host);
	if (err) {
		dev_err(&pdev->dev, "Failed to allocate bounce buffer: %d\n", err);
		return err;
	}

	err = sdhci_add_host(host);
	if (err)
		goto err_free_bounce_buffer;

	return 0;

err_free_bounce_buffer:
	sdhci_bst_free_bounce_buffer(host);

	return err;
}

static void sdhci_bst_remove(struct platform_device *pdev)
{
	struct sdhci_host *host = platform_get_drvdata(pdev);

	sdhci_bst_free_bounce_buffer(host);
	sdhci_pltfm_remove(pdev);
}

static const struct of_device_id sdhci_bst_ids[] = {
	{ .compatible = "bst,c1200-sdhci" },
	{}
};
MODULE_DEVICE_TABLE(of, sdhci_bst_ids);

static struct platform_driver sdhci_bst_driver = {
	.driver = {
		.name = "sdhci-bst",
		.of_match_table = sdhci_bst_ids,
	},
	.probe = sdhci_bst_probe,
	.remove = sdhci_bst_remove,
};
module_platform_driver(sdhci_bst_driver);

MODULE_DESCRIPTION("Black Sesame Technologies SDHCI driver (BST)");
MODULE_AUTHOR("Black Sesame Technologies Co., Ltd.");
MODULE_LICENSE("GPL");
+479 -38
drivers/mmc/host/sdhci-of-dwcmshc.c
···
 #define DWCMSHC_AREA1_MASK		GENMASK(11, 0)
 /* Offset inside the vendor area 1 */
 #define DWCMSHC_HOST_CTRL3		0x8
+#define DWCMSHC_HOST_CTRL3_CMD_CONFLICT	BIT(0)
 #define DWCMSHC_EMMC_CONTROL		0x2c
+/* HPE GSC SoC MSHCCS register */
+#define HPE_GSC_MSHCCS_SCGSYNCDIS	BIT(18)
 #define DWCMSHC_CARD_IS_EMMC		BIT(0)
 #define DWCMSHC_ENHANCED_STROBE		BIT(8)
 #define DWCMSHC_EMMC_ATCTRL		0x40
···
 #define PHY_CNFG_PHY_PWRGOOD_MASK	BIT_MASK(1)	/* bit [1] */
 #define PHY_CNFG_PAD_SP_MASK		GENMASK(19, 16)	/* bits [19:16] */
 #define PHY_CNFG_PAD_SP		0x0c /* PMOS TX drive strength */
+#define PHY_CNFG_PAD_SP_k230		0x09 /* PMOS TX drive strength for k230 */
 #define PHY_CNFG_PAD_SP_SG2042		0x09 /* PMOS TX drive strength for SG2042 */
 #define PHY_CNFG_PAD_SN_MASK		GENMASK(23, 20)	/* bits [23:20] */
 #define PHY_CNFG_PAD_SN		0x0c /* NMOS TX drive strength */
+#define PHY_CNFG_PAD_SN_k230		0x08 /* NMOS TX drive strength for k230 */
 #define PHY_CNFG_PAD_SN_SG2042		0x08 /* NMOS TX drive strength for SG2042 */

 /* PHY command/response pad settings */
···
 #define PHY_PAD_RXSEL_3V3	0x2 /* Receiver type select for 3.3V */

 #define PHY_PAD_WEAKPULL_MASK		GENMASK(4, 3) /* bits [4:3] */
+#define PHY_PAD_WEAKPULL_DISABLED	0x0 /* Weak pull up and pull down disabled */
 #define PHY_PAD_WEAKPULL_PULLUP		0x1 /* Weak pull up enabled */
 #define PHY_PAD_WEAKPULL_PULLDOWN	0x2 /* Weak pull down enabled */

 #define PHY_PAD_TXSLEW_CTRL_P_MASK	GENMASK(8, 5) /* bits [8:5] */
 #define PHY_PAD_TXSLEW_CTRL_P		0x3 /* Slew control for P-Type pad TX */
+#define PHY_PAD_TXSLEW_CTRL_P_k230	0x2 /* Slew control for P-Type pad TX for k230 */
 #define PHY_PAD_TXSLEW_CTRL_N_MASK	GENMASK(12, 9) /* bits [12:9] */
 #define PHY_PAD_TXSLEW_CTRL_N		0x3 /* Slew control for N-Type pad TX */
 #define PHY_PAD_TXSLEW_CTRL_N_SG2042	0x2 /* Slew control for N-Type pad TX for SG2042 */
+#define PHY_PAD_TXSLEW_CTRL_N_k230	0x2 /* Slew control for N-Type pad TX for k230 */
+
+/* PHY Common DelayLine config settings */
+#define PHY_COMMDL_CNFG			(DWC_MSHC_PTR_PHY_R + 0x1c)
+#define PHY_COMMDL_CNFG_DLSTEP_SEL	BIT(0) /* DelayLine outputs on PAD enabled */

 /* PHY CLK delay line settings */
 #define PHY_SDCLKDL_CNFG_R	(DWC_MSHC_PTR_PHY_R + 0x1d)
···
 #define PHY_SDCLKDL_DC_HS400	0x18 /* delay code for HS400 mode */

 #define PHY_SMPLDL_CNFG_R	(DWC_MSHC_PTR_PHY_R + 0x20)
+#define PHY_SMPLDL_CNFG_EXTDLY_EN	BIT(0)
 #define PHY_SMPLDL_CNFG_BYPASS_EN	BIT(1)
+#define PHY_SMPLDL_CNFG_INPSEL_MASK	GENMASK(3, 2) /* bits [3:2] */
+#define PHY_SMPLDL_CNFG_INPSEL		0x3 /* delay line input source */

 /* PHY drift_cclk_rx delay line configuration setting */
 #define PHY_ATDL_CNFG_R	(DWC_MSHC_PTR_PHY_R + 0x21)
···
 	SDHCI_TRNS_BLK_CNT_EN | \
 	SDHCI_TRNS_DMA)

+#define to_pltfm_data(priv, name) \
+	container_of((priv)->dwcmshc_pdata, struct name##_pltfm_data, dwcmshc_pdata)
+
 /* SMC call for BlueField-3 eMMC RST_N */
 #define BLUEFIELD_SMC_SET_EMMC_RST_N	0x82000007
+
+/* Canaan specific Registers */
+#define SD0_CTRL		0x00
+#define SD0_HOST_REG_VOL_STABLE	BIT(4)
+#define SD0_CARD_WRITE_PROT	BIT(6)
+#define SD1_CTRL		0x08
+#define SD1_HOST_REG_VOL_STABLE	BIT(0)
+#define SD1_CARD_WRITE_PROT	BIT(2)

 /* Eswin specific Registers */
 #define EIC7700_CARD_CLK_STABLE	BIT(28)
···
 #define PHY_DELAY_CODE_EMMC	0x17
 #define PHY_DELAY_CODE_SD	0x55

-enum dwcmshc_rk_type {
-	DWCMSHC_RK3568,
-	DWCMSHC_RK3588,
-};
-
 struct rk35xx_priv {
 	struct reset_control *reset;
-	enum dwcmshc_rk_type devtype;
 	u8 txclk_tapnum;
 };

 struct eic7700_priv {
 	struct reset_control *reset;
 	unsigned int drive_impedance;
+};
+
+struct k230_priv {
+	/* Canaan k230 specific */
+	struct regmap *hi_sys_regmap;
 };

 #define DWCMSHC_MAX_OTHER_CLKS 3
···
 	int num_other_clks;
 	struct clk_bulk_data other_clks[DWCMSHC_MAX_OTHER_CLKS];

+	const struct dwcmshc_pltfm_data *dwcmshc_pdata;
 	void *priv; /* pointer to SoC private stuff */
 	u16 delay_line;
 	u16 flags;
···
 	const struct cqhci_host_ops *cqhci_host_ops;
 	int (*init)(struct device *dev, struct sdhci_host *host, struct dwcmshc_priv *dwc_priv);
 	void (*postinit)(struct sdhci_host *host, struct dwcmshc_priv *dwc_priv);
+};
+
+struct k230_pltfm_data {
+	struct dwcmshc_pltfm_data dwcmshc_pdata;
+	bool is_emmc;
+	u32 ctrl_reg;
+	u32 vol_stable_bit;
+	u32 write_prot_bit;
+};
+
+struct rockchip_pltfm_data {
+	struct dwcmshc_pltfm_data dwcmshc_pdata;
+	/*
+	 * The controller hardware has two known revisions documented internally:
+	 * - Revision 0: Exclusively used by RK3566 and RK3568 SoCs.
+	 * - Revision 1: Implemented in all other Rockchip SoCs, including RK3576, RK3588, etc.
+	 */
+	int revision;
 };

 static void dwcmshc_enable_card_clk(struct sdhci_host *host)
···
 	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
 	struct dwcmshc_priv *dwc_priv = sdhci_pltfm_priv(pltfm_host);
 	struct rk35xx_priv *priv = dwc_priv->priv;
+	const struct rockchip_pltfm_data *rockchip_pdata = to_pltfm_data(dwc_priv, rockchip);
 	u8 txclk_tapnum = DLL_TXCLK_TAPNUM_DEFAULT;
 	u32 extra, reg;
 	int err;
···
 	extra |= BIT(4);
 	sdhci_writel(host, extra, reg);

+	/* Disable clock while config DLL */
+	sdhci_writew(host, 0, SDHCI_CLOCK_CONTROL);
+
 	if (clock <= 52000000) {
 		if (host->mmc->ios.timing == MMC_TIMING_MMC_HS200 ||
 		    host->mmc->ios.timing == MMC_TIMING_MMC_HS400) {
 			dev_err(mmc_dev(host->mmc),
 				"Can't reduce the clock below 52MHz in HS200/HS400 mode");
-			return;
+			goto enable_clk;
 		}

 		/*
···
 			DLL_STRBIN_DELAY_NUM_SEL |
 			DLL_STRBIN_DELAY_NUM_DEFAULT << DLL_STRBIN_DELAY_NUM_OFFSET;
 		sdhci_writel(host, extra, DWCMSHC_EMMC_DLL_STRBIN);
-		return;
+		goto enable_clk;
 	}

 	/* Reset DLL */
···
 	 * we must set it in higher speed mode.
 	 */
 	extra = DWCMSHC_EMMC_DLL_DLYENA;
-	if (priv->devtype == DWCMSHC_RK3568)
+	if (rockchip_pdata->revision == 0)
 		extra |= DLL_RXCLK_NO_INVERTER << DWCMSHC_EMMC_DLL_RXCLK_SRCSEL;
 	sdhci_writel(host, extra, DWCMSHC_EMMC_DLL_RXCLK);
···
 				 500 * USEC_PER_MSEC);
 	if (err) {
 		dev_err(mmc_dev(host->mmc), "DLL lock timeout!\n");
-		return;
+		goto enable_clk;
 	}

 	extra = 0x1 << 16 | /* tune clock stop en */
···
 	    host->mmc->ios.timing == MMC_TIMING_MMC_HS400)
 		txclk_tapnum = priv->txclk_tapnum;

-	if ((priv->devtype == DWCMSHC_RK3588) && host->mmc->ios.timing == MMC_TIMING_MMC_HS400) {
+	if (rockchip_pdata->revision == 1 && host->mmc->ios.timing == MMC_TIMING_MMC_HS400) {
 		txclk_tapnum = DLL_TXCLK_TAPNUM_90_DEGREES;

 		extra = DLL_CMDOUT_SRC_CLK_NEG |
···
 		DLL_STRBIN_TAPNUM_DEFAULT |
 		DLL_STRBIN_TAPNUM_FROM_SW;
 	sdhci_writel(host, extra, DWCMSHC_EMMC_DLL_STRBIN);
+
+enable_clk:
+	/*
+	 * The sdclk frequency select bits in SDHCI_CLOCK_CONTROL are not functional
+	 * on Rockchip's SDHCI implementation. Instead, the clock frequency is fully
+	 * controlled via external clk provider by calling clk_set_rate(). Consequently,
+	 * passing 0 to sdhci_enable_clk() only re-enables the already-configured clock,
+	 * which matches the hardware's actual behavior.
+	 */
+	sdhci_enable_clk(host, 0);
 }

 static void rk35xx_sdhci_reset(struct sdhci_host *host, u8 mask)
···
 	priv = devm_kzalloc(dev, sizeof(struct rk35xx_priv), GFP_KERNEL);
 	if (!priv)
 		return -ENOMEM;
-
-	if (of_device_is_compatible(dev->of_node, "rockchip,rk3588-dwcmshc"))
-		priv->devtype = DWCMSHC_RK3588;
-	else
-		priv->devtype = DWCMSHC_RK3568;

 	priv->reset = devm_reset_control_array_get_optional_exclusive(mmc_dev(host->mmc));
 	if (IS_ERR(priv->reset)) {
···
 	return dwcmshc_get_enable_other_clks(mmc_dev(host->mmc), dwc_priv,
 					     ARRAY_SIZE(clk_ids), clk_ids);
+}
+
+/*
+ * HPE GSC-specific vendor configuration: disable command conflict check
+ * and program Auto-Tuning Control register.
+ */
+static void dwcmshc_hpe_vendor_specific(struct sdhci_host *host)
+{
+	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+	struct dwcmshc_priv *dwc_priv = sdhci_pltfm_priv(pltfm_host);
+	u32 atctrl;
+	u8 extra;
+
+	extra = sdhci_readb(host, dwc_priv->vendor_specific_area1 + DWCMSHC_HOST_CTRL3);
+	extra &= ~DWCMSHC_HOST_CTRL3_CMD_CONFLICT;
+	sdhci_writeb(host, extra, dwc_priv->vendor_specific_area1 + DWCMSHC_HOST_CTRL3);
+
+	atctrl = AT_CTRL_AT_EN | AT_CTRL_SWIN_TH_EN | AT_CTRL_TUNE_CLK_STOP_EN |
+		 FIELD_PREP(AT_CTRL_PRE_CHANGE_DLY_MASK, 3) |
+		 FIELD_PREP(AT_CTRL_POST_CHANGE_DLY_MASK, AT_CTRL_POST_CHANGE_DLY) |
+		 FIELD_PREP(AT_CTRL_SWIN_TH_VAL_MASK, 2);
+	sdhci_writel(host, atctrl, dwc_priv->vendor_specific_area1 + DWCMSHC_EMMC_ATCTRL);
+}
+
+static void dwcmshc_hpe_set_emmc(struct sdhci_host *host)
+{
+	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+	struct dwcmshc_priv *dwc_priv = sdhci_pltfm_priv(pltfm_host);
+	u16 ctrl;
+
+	ctrl = sdhci_readw(host, dwc_priv->vendor_specific_area1 + DWCMSHC_EMMC_CONTROL);
+	ctrl |= DWCMSHC_CARD_IS_EMMC;
+	sdhci_writew(host, ctrl, dwc_priv->vendor_specific_area1 + DWCMSHC_EMMC_CONTROL);
+}
+
+static void dwcmshc_hpe_reset(struct sdhci_host *host, u8 mask)
+{
+	dwcmshc_reset(host, mask);
+	dwcmshc_hpe_vendor_specific(host);
+	dwcmshc_hpe_set_emmc(host);
+}
+
+static void dwcmshc_hpe_set_uhs_signaling(struct sdhci_host *host, unsigned int timing)
+{
+	dwcmshc_set_uhs_signaling(host, timing);
+	dwcmshc_hpe_set_emmc(host);
+}
+
+/*
+ * HPE GSC eMMC controller clock setup.
+ *
+ * The GSC SoC wires the freq_sel field of SDHCI_CLOCK_CONTROL directly to a
+ * clock mux rather than a divider. Force freq_sel = 1 when running at
+ * 200 MHz (HS200) so the mux selects the correct clock source.
+ */
+static void dwcmshc_hpe_set_clock(struct sdhci_host *host, unsigned int clock)
+{
+	u16 clk;
+
+	host->mmc->actual_clock = 0;
+
+	sdhci_writew(host, 0, SDHCI_CLOCK_CONTROL);
+
+	if (clock == 0)
+		return;
+
+	clk = sdhci_calc_clk(host, clock, &host->mmc->actual_clock);
+
+	if (host->mmc->actual_clock == 200000000)
+		clk |= (1 << SDHCI_DIVIDER_SHIFT);
+
+	sdhci_enable_clk(host, clk);
+}
+
+/*
+ * HPE GSC eMMC controller init.
+ *
+ * The GSC SoC requires configuring MSHCCS. Bit 18 (SCGSyncDis) disables clock
+ * synchronisation for phase-select values going to the HS200 RX delay lines,
+ * allowing the card clock to be stopped while the delay selection settles and
+ * the phase shift is applied. This must be used together with the ATCTRL
+ * settings programmed in dwcmshc_hpe_vendor_specific():
+ *   AT_CTRL_R.TUNE_CLK_STOP_EN = 0x1
+ *   AT_CTRL_R.POST_CHANGE_DLY = 0x3
+ *   AT_CTRL_R.PRE_CHANGE_DLY = 0x3
+ *
+ * The DTS node provides a syscon phandle ('hpe,gxp-sysreg') with the
+ * MSHCCS register offset as an argument.
+ */
+static int dwcmshc_hpe_gsc_init(struct device *dev, struct sdhci_host *host,
+				struct dwcmshc_priv *dwc_priv)
+{
+	unsigned int reg_offset;
+	struct regmap *soc_ctrl;
+	int ret;
+
+	/* Disable cmd conflict check and configure auto-tuning */
+	dwcmshc_hpe_vendor_specific(host);
+
+	/* Look up the GXP sysreg syscon and MSHCCS offset */
+	soc_ctrl = syscon_regmap_lookup_by_phandle_args(dev->of_node,
+							"hpe,gxp-sysreg",
+							1, &reg_offset);
+	if (IS_ERR(soc_ctrl)) {
+		dev_err(dev, "failed to get hpe,gxp-sysreg syscon\n");
+		return PTR_ERR(soc_ctrl);
+	}
+
+	/* Set SCGSyncDis (bit 18) to disable sync on HS200 RX delay lines */
+	ret = regmap_update_bits(soc_ctrl, reg_offset,
+				 HPE_GSC_MSHCCS_SCGSYNCDIS,
+				 HPE_GSC_MSHCCS_SCGSYNCDIS);
+	if (ret) {
+		dev_err(dev, "failed to set SCGSyncDis in MSHCCS\n");
+		return ret;
+	}
+
+	sdhci_enable_v4_mode(host);
+
+	return 0;
 }

 static void sdhci_eic7700_set_clock(struct sdhci_host *host, unsigned int clock)
···
 	return 0;
 }

+static void dwcmshc_k230_sdhci_set_clock(struct sdhci_host *host, unsigned int clock)
+{
+	u16 clk;
+
+	sdhci_set_clock(host, clock);
+
+	clk = sdhci_readw(host, SDHCI_CLOCK_CONTROL);
+	/*
+	 * It is necessary to enable SDHCI_PROG_CLOCK_MODE. This is a
+	 * vendor-specific quirk. If this is not done, the eMMC will be
+	 * unable to read or write.
+	 */
+	clk |= SDHCI_PROG_CLOCK_MODE;
+	sdhci_writew(host, clk, SDHCI_CLOCK_CONTROL);
+}
+
+static void sdhci_k230_config_phy_delay(struct sdhci_host *host)
+{
+	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+	struct dwcmshc_priv *dwc_priv = sdhci_pltfm_priv(pltfm_host);
+	u32 val;
+
+	sdhci_writeb(host, PHY_COMMDL_CNFG_DLSTEP_SEL, PHY_COMMDL_CNFG);
+	sdhci_writeb(host, 0x0, PHY_SDCLKDL_CNFG_R);
+	sdhci_writeb(host, PHY_SDCLKDL_DC_INITIAL, PHY_SDCLKDL_DC_R);
+
+	val = PHY_SMPLDL_CNFG_EXTDLY_EN;
+	val |= FIELD_PREP(PHY_SMPLDL_CNFG_INPSEL_MASK, PHY_SMPLDL_CNFG_INPSEL);
+	sdhci_writeb(host, val, PHY_SMPLDL_CNFG_R);
+
+	sdhci_writeb(host, FIELD_PREP(PHY_ATDL_CNFG_INPSEL_MASK, PHY_ATDL_CNFG_INPSEL),
+		     PHY_ATDL_CNFG_R);
+
+	val = sdhci_readl(host, dwc_priv->vendor_specific_area1 + DWCMSHC_EMMC_ATCTRL);
+	val |= AT_CTRL_TUNE_CLK_STOP_EN;
+	val |= FIELD_PREP(AT_CTRL_PRE_CHANGE_DLY_MASK, AT_CTRL_PRE_CHANGE_DLY);
+	val |= FIELD_PREP(AT_CTRL_POST_CHANGE_DLY_MASK, AT_CTRL_POST_CHANGE_DLY);
+	sdhci_writel(host, val, dwc_priv->vendor_specific_area1 + DWCMSHC_EMMC_ATCTRL);
+	sdhci_writel(host, 0x0, dwc_priv->vendor_specific_area1 + DWCMSHC_AT_STAT);
+}
+
+static int dwcmshc_k230_phy_init(struct sdhci_host *host)
+{
+	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+	struct dwcmshc_priv *dwc_priv = sdhci_pltfm_priv(pltfm_host);
+	u32 rxsel;
+	u32 val;
+	u32 reg;
+	int ret;
+
+	/* reset phy */
+	sdhci_writew(host, 0, PHY_CNFG_R);
+
+	/* Disable the clock */
+	sdhci_writew(host, 0, SDHCI_CLOCK_CONTROL);
+
+	rxsel = dwc_priv->flags & FLAG_IO_FIXED_1V8 ? PHY_PAD_RXSEL_1V8 : PHY_PAD_RXSEL_3V3;
+
+	val = rxsel;
+	val |= FIELD_PREP(PHY_PAD_TXSLEW_CTRL_P_MASK, PHY_PAD_TXSLEW_CTRL_P_k230);
+	val |= FIELD_PREP(PHY_PAD_TXSLEW_CTRL_N_MASK, PHY_PAD_TXSLEW_CTRL_N_k230);
+	val |= FIELD_PREP(PHY_PAD_WEAKPULL_MASK, PHY_PAD_WEAKPULL_PULLUP);
+
+	sdhci_writew(host, val, PHY_CMDPAD_CNFG_R);
+	sdhci_writew(host, val, PHY_DATAPAD_CNFG_R);
+	sdhci_writew(host, val, PHY_RSTNPAD_CNFG_R);
+
+	val = rxsel;
+	val |= FIELD_PREP(PHY_PAD_TXSLEW_CTRL_P_MASK, PHY_PAD_TXSLEW_CTRL_P_k230);
+	val |= FIELD_PREP(PHY_PAD_TXSLEW_CTRL_N_MASK, PHY_PAD_TXSLEW_CTRL_N_k230);
+	sdhci_writew(host, val, PHY_CLKPAD_CNFG_R);
+
+	val = rxsel;
+	val |= FIELD_PREP(PHY_PAD_WEAKPULL_MASK, PHY_PAD_WEAKPULL_PULLDOWN);
+	val |= FIELD_PREP(PHY_PAD_TXSLEW_CTRL_P_MASK, PHY_PAD_TXSLEW_CTRL_P_k230);
+	val |= FIELD_PREP(PHY_PAD_TXSLEW_CTRL_N_MASK, PHY_PAD_TXSLEW_CTRL_N_k230);
+	sdhci_writew(host, val, PHY_STBPAD_CNFG_R);
+
+	sdhci_k230_config_phy_delay(host);
+
+	/* Wait max 150 ms */
+	ret = read_poll_timeout(sdhci_readl, reg,
+				(reg & FIELD_PREP(PHY_CNFG_PHY_PWRGOOD_MASK, 1)),
+				10, 150000, false, host, PHY_CNFG_R);
+	if (ret) {
+		dev_err(mmc_dev(host->mmc), "READ PHY PWRGOOD timeout!\n");
+		return -ETIMEDOUT;
+	}
+
+	reg = FIELD_PREP(PHY_CNFG_PAD_SN_MASK, PHY_CNFG_PAD_SN_k230) |
+	      FIELD_PREP(PHY_CNFG_PAD_SP_MASK, PHY_CNFG_PAD_SP_k230);
+	sdhci_writel(host, reg, PHY_CNFG_R);
+
+	/* de-assert the phy */
+	reg |= PHY_CNFG_RSTN_DEASSERT;
+	sdhci_writel(host, reg, PHY_CNFG_R);
+
+	return 0;
+}
+
+static void dwcmshc_k230_sdhci_reset(struct sdhci_host *host, u8 mask)
+{
+	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+	struct dwcmshc_priv *dwc_priv = sdhci_pltfm_priv(pltfm_host);
+	const struct k230_pltfm_data *k230_pdata = to_pltfm_data(dwc_priv, k230);
+	u8 emmc_ctrl;
+
+	dwcmshc_reset(host, mask);
+
+	if (mask != SDHCI_RESET_ALL)
+		return;
+
+	emmc_ctrl = sdhci_readw(host, dwc_priv->vendor_specific_area1 + DWCMSHC_EMMC_CONTROL);
+	sdhci_writeb(host, emmc_ctrl, dwc_priv->vendor_specific_area1 + DWCMSHC_EMMC_CONTROL);
+
+	if (k230_pdata->is_emmc)
+		dwcmshc_k230_phy_init(host);
+	else
+		sdhci_writel(host, 0x0, dwc_priv->vendor_specific_area1 + DWCMSHC_HOST_CTRL3);
+}
+
+static int dwcmshc_k230_init(struct device *dev, struct sdhci_host *host,
+			     struct dwcmshc_priv *dwc_priv)
+{
+	const struct k230_pltfm_data *k230_pdata = to_pltfm_data(dwc_priv, k230);
+	static const char * const clk_ids[] = {"block", "timer", "axi"};
+	struct device_node *usb_phy_node;
+	struct k230_priv *k230_priv;
+	u32 data;
+	int ret;
+
+	k230_priv = devm_kzalloc(dev, sizeof(struct k230_priv), GFP_KERNEL);
+	if (!k230_priv)
+		return -ENOMEM;
+
+	dwc_priv->priv = k230_priv;
+
+	usb_phy_node = of_parse_phandle(dev->of_node, "canaan,usb-phy", 0);
+	if (!usb_phy_node)
+		return dev_err_probe(dev, -ENODEV, "Failed to find canaan,usb-phy phandle\n");
+
+	k230_priv->hi_sys_regmap = device_node_to_regmap(usb_phy_node);
+	of_node_put(usb_phy_node);
+
+	if (IS_ERR(k230_priv->hi_sys_regmap))
+		return dev_err_probe(dev, PTR_ERR(k230_priv->hi_sys_regmap),
+				     "Failed to get k230-usb-phy regmap\n");
+
+	ret = dwcmshc_get_enable_other_clks(mmc_dev(host->mmc), dwc_priv,
+					    ARRAY_SIZE(clk_ids), clk_ids);
+	if (ret)
+		return dev_err_probe(dev, ret, "Failed to get/enable k230 mmc other clocks\n");
+
+	if (k230_pdata->is_emmc) {
+		host->flags &= ~SDHCI_SIGNALING_330;
+		dwc_priv->flags |= FLAG_IO_FIXED_1V8;
+	} else {
+		host->mmc->caps |= MMC_CAP_SD_HIGHSPEED;
+		host->quirks2 |= SDHCI_QUIRK2_NO_1_8_V;
+	}
+
+	ret = regmap_read(k230_priv->hi_sys_regmap, k230_pdata->ctrl_reg, &data);
+	if (ret)
+		return dev_err_probe(dev, ret, "Failed to read control reg 0x%x\n",
+				     k230_pdata->ctrl_reg);
+
+	data |= k230_pdata->write_prot_bit | k230_pdata->vol_stable_bit;
+	ret = regmap_write(k230_priv->hi_sys_regmap, k230_pdata->ctrl_reg, data);
+	if (ret)
+		return dev_err_probe(dev, ret, "Failed to write control reg 0x%x\n",
+				     k230_pdata->ctrl_reg);
+
+	return 0;
+}
+
 static const struct sdhci_ops sdhci_dwcmshc_ops = {
 	.set_clock = sdhci_set_clock,
 	.set_bus_width = sdhci_set_bus_width,
···
 	.platform_execute_tuning = sdhci_eic7700_executing_tuning,
 };

+static const struct sdhci_ops sdhci_dwcmshc_k230_ops = {
+	.set_clock = dwcmshc_k230_sdhci_set_clock,
+	.set_bus_width = sdhci_set_bus_width,
+	.set_uhs_signaling = dwcmshc_set_uhs_signaling,
+	.get_max_clock = sdhci_pltfm_clk_get_max_clock,
+	.reset = dwcmshc_k230_sdhci_reset,
+	.adma_write_desc = dwcmshc_adma_write_desc,
+};
+
 static const struct dwcmshc_pltfm_data sdhci_dwcmshc_pdata = {
 	.pdata = {
 		.ops = &sdhci_dwcmshc_ops,
···
 	.set_tran_desc = dwcmshc_set_tran_desc,
 };

-static const struct dwcmshc_pltfm_data sdhci_dwcmshc_rk35xx_pdata = {
-	.pdata = {
-		.ops = &sdhci_dwcmshc_rk35xx_ops,
-		.quirks = SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN |
-			  SDHCI_QUIRK_BROKEN_TIMEOUT_VAL,
-		.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN |
-			   SDHCI_QUIRK2_CLOCK_DIV_ZERO_BROKEN,
+static const struct rockchip_pltfm_data sdhci_dwcmshc_rk3568_pdata = {
+	.dwcmshc_pdata = {
+		.pdata = {
+			.ops = &sdhci_dwcmshc_rk35xx_ops,
+			.quirks = SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN |
+				  SDHCI_QUIRK_BROKEN_TIMEOUT_VAL,
+
.quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN | 1781 + SDHCI_QUIRK2_CLOCK_DIV_ZERO_BROKEN, 1782 + }, 1783 + .cqhci_host_ops = &rk35xx_cqhci_ops, 1784 + .init = dwcmshc_rk35xx_init, 1785 + .postinit = dwcmshc_rk35xx_postinit, 2138 1786 }, 2139 - .cqhci_host_ops = &rk35xx_cqhci_ops, 2140 - .init = dwcmshc_rk35xx_init, 2141 - .postinit = dwcmshc_rk35xx_postinit, 1787 + .revision = 0, 2142 1788 }; 2143 1789 2144 - static const struct dwcmshc_pltfm_data sdhci_dwcmshc_rk3576_pdata = { 2145 - .pdata = { 2146 - .ops = &sdhci_dwcmshc_rk35xx_ops, 2147 - .quirks = SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN | 2148 - SDHCI_QUIRK_BROKEN_TIMEOUT_VAL, 2149 - .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN | 2150 - SDHCI_QUIRK2_CLOCK_DIV_ZERO_BROKEN, 1790 + static const struct rockchip_pltfm_data sdhci_dwcmshc_rk3576_pdata = { 1791 + .dwcmshc_pdata = { 1792 + .pdata = { 1793 + .ops = &sdhci_dwcmshc_rk35xx_ops, 1794 + .quirks = SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN | 1795 + SDHCI_QUIRK_BROKEN_TIMEOUT_VAL, 1796 + .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN | 1797 + SDHCI_QUIRK2_CLOCK_DIV_ZERO_BROKEN, 1798 + }, 1799 + .cqhci_host_ops = &rk35xx_cqhci_ops, 1800 + .init = dwcmshc_rk35xx_init, 1801 + .postinit = dwcmshc_rk3576_postinit, 2151 1802 }, 2152 - .cqhci_host_ops = &rk35xx_cqhci_ops, 2153 - .init = dwcmshc_rk35xx_init, 2154 - .postinit = dwcmshc_rk3576_postinit, 1803 + .revision = 1, 1804 + }; 1805 + 1806 + static const struct rockchip_pltfm_data sdhci_dwcmshc_rk3588_pdata = { 1807 + .dwcmshc_pdata = { 1808 + .pdata = { 1809 + .ops = &sdhci_dwcmshc_rk35xx_ops, 1810 + .quirks = SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN | 1811 + SDHCI_QUIRK_BROKEN_TIMEOUT_VAL, 1812 + .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN | 1813 + SDHCI_QUIRK2_CLOCK_DIV_ZERO_BROKEN, 1814 + }, 1815 + .cqhci_host_ops = &rk35xx_cqhci_ops, 1816 + .init = dwcmshc_rk35xx_init, 1817 + .postinit = dwcmshc_rk35xx_postinit, 1818 + }, 1819 + .revision = 1, 2155 1820 }; 2156 1821 2157 1822 static const struct dwcmshc_pltfm_data 
sdhci_dwcmshc_th1520_pdata = { ··· 2211 1832 SDHCI_QUIRK2_CLOCK_DIV_ZERO_BROKEN, 2212 1833 }, 2213 1834 .init = eic7700_init, 1835 + }; 1836 + 1837 + static const struct k230_pltfm_data k230_emmc_data = { 1838 + .dwcmshc_pdata = { 1839 + .pdata = { 1840 + .ops = &sdhci_dwcmshc_k230_ops, 1841 + .quirks = SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN | 1842 + SDHCI_QUIRK_MULTIBLOCK_READ_ACMD12, 1843 + }, 1844 + .init = dwcmshc_k230_init, 1845 + }, 1846 + .is_emmc = true, 1847 + .ctrl_reg = SD0_CTRL, 1848 + .vol_stable_bit = SD0_HOST_REG_VOL_STABLE, 1849 + .write_prot_bit = SD0_CARD_WRITE_PROT, 1850 + }; 1851 + 1852 + static const struct k230_pltfm_data k230_sdio_data = { 1853 + .dwcmshc_pdata = { 1854 + .pdata = { 1855 + .ops = &sdhci_dwcmshc_k230_ops, 1856 + .quirks = SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN | 1857 + SDHCI_QUIRK_MULTIBLOCK_READ_ACMD12, 1858 + }, 1859 + .init = dwcmshc_k230_init, 1860 + }, 1861 + .is_emmc = false, 1862 + .ctrl_reg = SD1_CTRL, 1863 + .vol_stable_bit = SD1_HOST_REG_VOL_STABLE, 1864 + .write_prot_bit = SD1_CARD_WRITE_PROT, 1865 + }; 1866 + 1867 + static const struct sdhci_ops sdhci_dwcmshc_hpe_ops = { 1868 + .set_clock = dwcmshc_hpe_set_clock, 1869 + .set_bus_width = sdhci_set_bus_width, 1870 + .set_uhs_signaling = dwcmshc_hpe_set_uhs_signaling, 1871 + .get_max_clock = dwcmshc_get_max_clock, 1872 + .reset = dwcmshc_hpe_reset, 1873 + .adma_write_desc = dwcmshc_adma_write_desc, 1874 + .irq = dwcmshc_cqe_irq_handler, 1875 + }; 1876 + 1877 + static const struct dwcmshc_pltfm_data sdhci_dwcmshc_hpe_gsc_pdata = { 1878 + .pdata = { 1879 + .ops = &sdhci_dwcmshc_hpe_ops, 1880 + .quirks = SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN, 1881 + .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN, 1882 + }, 1883 + .init = dwcmshc_hpe_gsc_init, 2214 1884 }; 2215 1885 2216 1886 static const struct cqhci_host_ops dwcmshc_cqhci_ops = { ··· 2335 1907 2336 1908 static const struct of_device_id sdhci_dwcmshc_dt_ids[] = { 2337 1909 { 1910 + .compatible = "canaan,k230-emmc", 1911 + .data = 
&k230_emmc_data.dwcmshc_pdata, 1912 + }, 1913 + { 1914 + .compatible = "canaan,k230-sdio", 1915 + .data = &k230_sdio_data.dwcmshc_pdata, 1916 + }, 1917 + { 2338 1918 .compatible = "rockchip,rk3588-dwcmshc", 2339 - .data = &sdhci_dwcmshc_rk35xx_pdata, 1919 + .data = &sdhci_dwcmshc_rk3588_pdata, 2340 1920 }, 2341 1921 { 2342 1922 .compatible = "rockchip,rk3576-dwcmshc", ··· 2352 1916 }, 2353 1917 { 2354 1918 .compatible = "rockchip,rk3568-dwcmshc", 2355 - .data = &sdhci_dwcmshc_rk35xx_pdata, 1919 + .data = &sdhci_dwcmshc_rk3568_pdata, 2356 1920 }, 2357 1921 { 2358 1922 .compatible = "snps,dwcmshc-sdhci", ··· 2377 1941 { 2378 1942 .compatible = "eswin,eic7700-dwcmshc", 2379 1943 .data = &sdhci_dwcmshc_eic7700_pdata, 1944 + }, 1945 + { 1946 + .compatible = "hpe,gsc-dwcmshc", 1947 + .data = &sdhci_dwcmshc_hpe_gsc_pdata, 2380 1948 }, 2381 1949 {}, 2382 1950 }; ··· 2428 1988 2429 1989 pltfm_host = sdhci_priv(host); 2430 1990 priv = sdhci_pltfm_priv(pltfm_host); 1991 + priv->dwcmshc_pdata = pltfm_data; 2431 1992 2432 1993 if (dev->of_node) { 2433 1994 pltfm_host->clk = devm_clk_get(dev, "core");
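The K230 PHY setup above composes pad-configuration registers by OR-ing together `FIELD_PREP()` results for each bitfield. A minimal userspace sketch of that packing pattern follows; the masks and the `pack_pad_cnfg()` helper are illustrative stand-ins, not the real K230 PHY register layout, and the real kernel macro derives the shift from the mask at compile time via `__bf_shf()`:

```c
#include <assert.h>
#include <stdint.h>

/* Userspace stand-in for the kernel's FIELD_PREP(): shift a value into
 * the position of a contiguous bitmask. __builtin_ctz() finds the
 * mask's lowest set bit, i.e. the field's shift. */
#define MASK_SHIFT(mask)     (__builtin_ctz(mask))
#define FIELD_PREP(mask, v)  (((uint32_t)(v) << MASK_SHIFT(mask)) & (mask))

/* Hypothetical field layout, mirroring how dwcmshc_k230_phy_init()
 * builds one pad-config word from rxsel, weak-pull and slew fields. */
#define PAD_RXSEL_MASK      0x0007u  /* bits 2:0 */
#define PAD_WEAKPULL_MASK   0x0018u  /* bits 4:3 */
#define PAD_TXSLEW_P_MASK   0x01e0u  /* bits 8:5 */

static uint32_t pack_pad_cnfg(uint32_t rxsel, uint32_t pull, uint32_t slew_p)
{
	uint32_t val = FIELD_PREP(PAD_RXSEL_MASK, rxsel);

	val |= FIELD_PREP(PAD_WEAKPULL_MASK, pull);
	val |= FIELD_PREP(PAD_TXSLEW_P_MASK, slew_p);
	return val;
}
```

Each field value lands in its own bit range, so the same word can be written to several pad registers with only the differing fields changed, as the driver does for CMD, DATA, CLK and STB pads.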
+37 -2
drivers/mmc/host/sdhci-of-k1.c
··· 15 15 #include <linux/module.h> 16 16 #include <linux/of.h> 17 17 #include <linux/of_device.h> 18 + #include <linux/reset.h> 18 19 #include <linux/platform_device.h> 19 20 20 21 #include "sdhci.h" ··· 224 223 return 0; 225 224 } 226 225 226 + static inline int spacemit_sdhci_get_resets(struct device *dev) 227 + { 228 + struct reset_control *rst; 229 + 230 + rst = devm_reset_control_get_optional_shared_deasserted(dev, "axi"); 231 + if (IS_ERR(rst)) 232 + return PTR_ERR(rst); 233 + 234 + rst = devm_reset_control_get_optional_exclusive_deasserted(dev, "sdh"); 235 + if (IS_ERR(rst)) 236 + return PTR_ERR(rst); 237 + 238 + return 0; 239 + } 240 + 227 241 static const struct sdhci_ops spacemit_sdhci_ops = { 228 242 .get_max_clock = spacemit_sdhci_clk_get_max_clock, 229 243 .reset = spacemit_sdhci_reset, ··· 259 243 SDHCI_QUIRK2_PRESET_VALUE_BROKEN, 260 244 }; 261 245 246 + static const struct sdhci_pltfm_data spacemit_sdhci_k3_pdata = { 247 + .ops = &spacemit_sdhci_ops, 248 + .quirks = SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK | 249 + SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC | 250 + SDHCI_QUIRK_32BIT_ADMA_SIZE | 251 + SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN | 252 + SDHCI_QUIRK_BROKEN_CARD_DETECTION | 253 + SDHCI_QUIRK_BROKEN_TIMEOUT_VAL, 254 + .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN, 255 + }; 256 + 262 257 static const struct of_device_id spacemit_sdhci_of_match[] = { 263 - { .compatible = "spacemit,k1-sdhci" }, 258 + { .compatible = "spacemit,k1-sdhci", .data = &spacemit_sdhci_k1_pdata }, 259 + { .compatible = "spacemit,k3-sdhci", .data = &spacemit_sdhci_k3_pdata }, 264 260 { /* sentinel */ } 265 261 }; 266 262 MODULE_DEVICE_TABLE(of, spacemit_sdhci_of_match); ··· 283 255 struct spacemit_sdhci_host *sdhst; 284 256 struct sdhci_pltfm_host *pltfm_host; 285 257 struct sdhci_host *host; 258 + const struct sdhci_pltfm_data *data; 286 259 struct mmc_host_ops *mops; 287 260 int ret; 288 261 289 - host = sdhci_pltfm_init(pdev, &spacemit_sdhci_k1_pdata, sizeof(*sdhst)); 262 + data = 
of_device_get_match_data(&pdev->dev); 263 + 264 + host = sdhci_pltfm_init(pdev, data, sizeof(*sdhst)); 290 265 if (IS_ERR(host)) 291 266 return PTR_ERR(host); 292 267 ··· 312 281 host->mmc->caps |= MMC_CAP_NEED_RSP_BUSY; 313 282 314 283 ret = spacemit_sdhci_get_clocks(dev, pltfm_host); 284 + if (ret) 285 + goto err_pltfm; 286 + 287 + ret = spacemit_sdhci_get_resets(dev); 315 288 if (ret) 316 289 goto err_pltfm; 317 290
-1
drivers/mmc/host/sdhci-pci-core.c
··· 20 20 #include <linux/scatterlist.h> 21 21 #include <linux/io.h> 22 22 #include <linux/iopoll.h> 23 - #include <linux/gpio.h> 24 23 #include <linux/gpio/machine.h> 25 24 #include <linux/pm_runtime.h> 26 25 #include <linux/pm_qos.h>
+1 -4
drivers/mmc/host/sdhci-pic32.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 1 2 /* 2 3 * Support of SDHCI platform devices for Microchip PIC32. 3 4 * ··· 6 5 * Andrei Pistirica, Paul Thacker 7 6 * 8 7 * Inspired by sdhci-pltfm.c 9 - * 10 - * This file is licensed under the terms of the GNU General Public 11 - * License version 2. This program is licensed "as is" without any 12 - * warranty of any kind, whether express or implied. 13 8 */ 14 9 15 10 #include <linux/clk.h>
-20
drivers/mmc/host/sdhci-pltfm.c
··· 95 95 sdhci_get_compatibility(pdev); 96 96 97 97 device_property_read_u32(dev, "clock-frequency", &pltfm_host->clock); 98 - 99 - if (device_property_present(dev, "keep-power-in-suspend")) 100 - host->mmc->pm_caps |= MMC_PM_KEEP_POWER; 101 - 102 - if (device_property_read_bool(dev, "wakeup-source") || 103 - device_property_read_bool(dev, "enable-sdio-wakeup")) /* legacy */ 104 - host->mmc->pm_caps |= MMC_PM_WAKE_SDIO_IRQ; 105 98 } 106 99 EXPORT_SYMBOL_GPL(sdhci_get_property); 107 100 ··· 207 214 SET_SYSTEM_SLEEP_PM_OPS(sdhci_pltfm_suspend, sdhci_pltfm_resume) 208 215 }; 209 216 EXPORT_SYMBOL_GPL(sdhci_pltfm_pmops); 210 - 211 - static int __init sdhci_pltfm_drv_init(void) 212 - { 213 - pr_info("sdhci-pltfm: SDHCI platform and OF driver helper\n"); 214 - 215 - return 0; 216 - } 217 - module_init(sdhci_pltfm_drv_init); 218 - 219 - static void __exit sdhci_pltfm_drv_exit(void) 220 - { 221 - } 222 - module_exit(sdhci_pltfm_drv_exit); 223 217 224 218 MODULE_DESCRIPTION("SDHCI platform and OF driver helper"); 225 219 MODULE_AUTHOR("Intel Corporation");
+1 -12
drivers/mmc/host/sdhci-uhs2.c
··· 1126 1126 1127 1127 /*****************************************************************************\ 1128 1128 * * 1129 - * Driver init/exit * 1129 + * Driver init * 1130 1130 * * 1131 1131 \*****************************************************************************/ 1132 1132 ··· 1137 1137 1138 1138 return 0; 1139 1139 } 1140 - 1141 - static int __init sdhci_uhs2_mod_init(void) 1142 - { 1143 - return 0; 1144 - } 1145 - module_init(sdhci_uhs2_mod_init); 1146 - 1147 - static void __exit sdhci_uhs2_mod_exit(void) 1148 - { 1149 - } 1150 - module_exit(sdhci_uhs2_mod_exit); 1151 1140 1152 1141 /*****************************************************************************\ 1153 1142 *
+7 -16
drivers/mmc/host/sdhci.c
··· 4193 4193 unsigned int bounce_size; 4194 4194 int ret; 4195 4195 4196 + /* Drivers may have already allocated the buffer */ 4197 + if (host->bounce_buffer) { 4198 + bounce_size = host->bounce_buffer_size; 4199 + max_blocks = bounce_size / 512; 4200 + goto out; 4201 + } 4196 4202 /* 4197 4203 * Cap the bounce buffer at 64KB. Using a bigger bounce buffer 4198 4204 * has diminishing returns, this is probably because SD/MMC ··· 4247 4241 4248 4242 host->bounce_buffer_size = bounce_size; 4249 4243 4244 + out: 4250 4245 /* Lie about this since we're bouncing */ 4251 4246 mmc->max_segs = max_blocks; 4252 4247 mmc->max_seg_size = bounce_size; ··· 5010 5003 * Driver init/exit * 5011 5004 * * 5012 5005 \*****************************************************************************/ 5013 - 5014 - static int __init sdhci_drv_init(void) 5015 - { 5016 - pr_info(DRIVER_NAME 5017 - ": Secure Digital Host Controller Interface driver\n"); 5018 - pr_info(DRIVER_NAME ": Copyright(c) Pierre Ossman\n"); 5019 - 5020 - return 0; 5021 - } 5022 - 5023 - static void __exit sdhci_drv_exit(void) 5024 - { 5025 - } 5026 - 5027 - module_init(sdhci_drv_init); 5028 - module_exit(sdhci_drv_exit); 5029 5006 5030 5007 module_param(debug_quirks, uint, 0444); 5031 5008 module_param(debug_quirks2, uint, 0444);
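The sdhci.c hunk above adds an early-out so a driver-preallocated bounce buffer is reused instead of allocated twice. A userspace sketch of that "check, reuse, else allocate" shape (struct and names are illustrative, and error handling is elided):

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified stand-in for struct sdhci_host's bounce-buffer fields. */
struct host {
	char *bounce_buffer;
	unsigned int bounce_buffer_size;
};

static unsigned int setup_bounce(struct host *host)
{
	unsigned int bounce_size, max_blocks;

	/* Driver may have already allocated the buffer: just derive
	 * the block limit from its size, as the new early-out does. */
	if (host->bounce_buffer) {
		bounce_size = host->bounce_buffer_size;
		max_blocks = bounce_size / 512;
		goto out;
	}

	bounce_size = 64 * 1024;	/* default cap, as in sdhci.c */
	max_blocks = bounce_size / 512;
	host->bounce_buffer = malloc(bounce_size);
	host->bounce_buffer_size = bounce_size;
out:
	return max_blocks;
}
```

The `goto out` keeps the shared "lie about max_segs/max_seg_size" tail in one place for both paths, which is why the kernel change adds an `out:` label rather than duplicating that code.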
+1 -3
drivers/mmc/host/tifm_sd.c
··· 193 193 194 194 pg = sg_page(&sg[host->sg_pos]) + (off >> PAGE_SHIFT); 195 195 p_off = offset_in_page(off); 196 - p_cnt = PAGE_SIZE - p_off; 197 - p_cnt = min(p_cnt, cnt); 198 - p_cnt = min(p_cnt, t_size); 196 + p_cnt = min3(PAGE_SIZE - p_off, cnt, t_size); 199 197 200 198 if (r_data->flags & MMC_DATA_READ) 201 199 tifm_sd_read_fifo(host, pg, p_off, p_cnt);
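The tifm_sd change collapses two chained `min()` calls into one `min3()`. A userspace equivalent of that macro (the real kernel version additionally enforces matching operand types, and like any such macro it evaluates its arguments more than once):

```c
#include <assert.h>

/* min3(a, b, c) is just min(min(a, b), c), as in the kernel. */
#define min_u(a, b)      ((a) < (b) ? (a) : (b))
#define min3_u(a, b, c)  min_u(min_u((a), (b)), (c))
```

In the driver this clamps the per-page copy length to three limits at once: the bytes left in the page, the bytes left in the scatterlist entry, and the transfer size.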
+26 -25
drivers/mmc/host/vub300.c
··· 2107 2107 command_out_urb = usb_alloc_urb(0, GFP_KERNEL); 2108 2108 if (!command_out_urb) { 2109 2109 retval = -ENOMEM; 2110 - goto error0; 2110 + goto err_put_udev; 2111 2111 } 2112 2112 command_res_urb = usb_alloc_urb(0, GFP_KERNEL); 2113 2113 if (!command_res_urb) { 2114 2114 retval = -ENOMEM; 2115 - goto error1; 2115 + goto err_free_out_urb; 2116 2116 } 2117 2117 /* this also allocates memory for our VUB300 mmc host device */ 2118 2118 mmc = mmc_alloc_host(sizeof(*vub300), &udev->dev); 2119 2119 if (!mmc) { 2120 2120 retval = -ENOMEM; 2121 2121 dev_err(&udev->dev, "not enough memory for the mmc_host\n"); 2122 - goto error4; 2122 + goto err_free_res_urb; 2123 2123 } 2124 2124 /* MMC core transfer sizes tunable parameters */ 2125 2125 mmc->caps = 0; ··· 2336 2336 interface_to_InterfaceNumber(interface)); 2337 2337 retval = mmc_add_host(mmc); 2338 2338 if (retval) 2339 - goto error6; 2339 + goto err_delete_timer; 2340 2340 2341 2341 return 0; 2342 - error6: 2342 + 2343 + err_delete_timer: 2343 2344 timer_delete_sync(&vub300->inactivity_timer); 2344 2345 err_free_host: 2345 2346 mmc_free_host(mmc); ··· 2348 2347 * and hence also frees vub300 2349 2348 * which is contained at the end of struct mmc 2350 2349 */ 2351 - error4: 2350 + err_free_res_urb: 2352 2351 usb_free_urb(command_res_urb); 2353 - error1: 2352 + err_free_out_urb: 2354 2353 usb_free_urb(command_out_urb); 2355 - error0: 2354 + err_put_udev: 2356 2355 usb_put_dev(udev); 2356 + 2357 2357 return retval; 2358 2358 } 2359 2359 ··· 2429 2427 2430 2428 pr_info("VUB300 Driver rom wait states = %02X irqpoll timeout = %04X", 2431 2429 firmware_rom_wait_states, 0x0FFFF & firmware_irqpoll_timeout); 2430 + 2432 2431 cmndworkqueue = create_singlethread_workqueue("kvub300c"); 2433 - if (!cmndworkqueue) { 2434 - pr_err("not enough memory for the REQUEST workqueue"); 2435 - result = -ENOMEM; 2436 - goto out1; 2437 - } 2432 + if (!cmndworkqueue) 2433 + return -ENOMEM; 2434 + 2438 2435 pollworkqueue = 
create_singlethread_workqueue("kvub300p"); 2439 2436 if (!pollworkqueue) { 2440 - pr_err("not enough memory for the IRQPOLL workqueue"); 2441 2437 result = -ENOMEM; 2442 - goto out2; 2438 + goto err_destroy_cmdwq; 2443 2439 } 2440 + 2444 2441 deadworkqueue = create_singlethread_workqueue("kvub300d"); 2445 2442 if (!deadworkqueue) { 2446 - pr_err("not enough memory for the EXPIRED workqueue"); 2447 2443 result = -ENOMEM; 2448 - goto out3; 2444 + goto err_destroy_pollwq; 2449 2445 } 2446 + 2450 2447 result = usb_register(&vub300_driver); 2451 - if (result) { 2452 - pr_err("usb_register failed. Error number %d", result); 2453 - goto out4; 2454 - } 2448 + if (result) 2449 + goto err_destroy_deadwq; 2450 + 2455 2451 return 0; 2456 - out4: 2452 + 2453 + err_destroy_deadwq: 2457 2454 destroy_workqueue(deadworkqueue); 2458 - out3: 2455 + err_destroy_pollwq: 2459 2456 destroy_workqueue(pollworkqueue); 2460 - out2: 2457 + err_destroy_cmdwq: 2461 2458 destroy_workqueue(cmndworkqueue); 2462 - out1: 2459 + 2463 2460 return result; 2464 2461 } 2465 2462
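The vub300 cleanup above renames opaque labels (`out1`..`out4`, `error0`..`error6`) to descriptive ones (`err_destroy_cmdwq`, `err_free_out_urb`, ...). The underlying pattern is the kernel's standard goto unwinding: each failure jumps to a label that undoes exactly the steps that succeeded, in reverse order. A self-contained userspace sketch, with plain allocations standing in for workqueues and URBs:

```c
#include <stdlib.h>

/* Helper that "acquires" a resource, or fails on demand so the
 * unwind paths can be exercised. Purely for demonstration. */
static void *step(int ok)
{
	return ok ? malloc(16) : NULL;
}

/* Acquire three resources; on failure, free only what was acquired. */
static int init_three(int fail_at)
{
	void *a, *b, *c;

	a = step(fail_at != 1);
	if (!a)
		goto out;
	b = step(fail_at != 2);
	if (!b)
		goto err_free_a;
	c = step(fail_at != 3);
	if (!c)
		goto err_free_b;

	/* Success: for the demo, release everything and report 0. */
	free(c);
	free(b);
	free(a);
	return 0;

err_free_b:
	free(b);
err_free_a:
	free(a);
out:
	return -1;
}
```

Naming each label after the cleanup it performs (rather than its position) means inserting a new setup step only requires adding one label, not renumbering the rest, which is exactly what the vub300 rename buys.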
+15 -2
drivers/mux/Kconfig
··· 4 4 # 5 5 6 6 config MULTIPLEXER 7 - tristate 7 + bool 8 + 9 + config MUX_CORE 10 + bool "Generic Multiplexer Support" 11 + select MULTIPLEXER 12 + help 13 + This framework is designed to abstract multiplexer handling for 14 + devices via various GPIO-, MMIO/Regmap or specific multiplexer 15 + controller chips. 16 + 17 + If unsure, say no. 18 + 19 + if MULTIPLEXER 8 20 9 21 menu "Multiplexer drivers" 10 - depends on MULTIPLEXER 11 22 12 23 config MUX_ADG792A 13 24 tristate "Analog Devices ADG792A/ADG792G Multiplexers" ··· 71 60 be called mux-mmio. 72 61 73 62 endmenu 63 + 64 + endif # MULTIPLEXER
+169 -29
drivers/mux/core.c
··· 46 46 .name = "mux", 47 47 }; 48 48 49 + /** 50 + * struct devm_mux_state_state - Tracks managed resources for mux-state objects. 51 + * @mstate: Pointer to a mux state. 52 + * @exit: An optional callback to execute before free. 53 + */ 54 + struct devm_mux_state_state { 55 + struct mux_state *mstate; 56 + int (*exit)(struct mux_state *mstate); 57 + }; 58 + 49 59 static DEFINE_IDA(mux_ida); 50 60 51 61 static int __init mux_init(void) ··· 526 516 return dev ? to_mux_chip(dev) : NULL; 527 517 } 528 518 529 - /* 519 + /** 530 520 * mux_get() - Get the mux-control for a device. 531 521 * @dev: The device that needs a mux-control. 532 522 * @mux_name: The name identifying the mux-control. 533 523 * @state: Pointer to where the requested state is returned, or NULL when 534 524 * the required multiplexer states are handled by other means. 525 + * @optional: Whether to return NULL and silence errors when mux doesn't exist. 535 526 * 536 - * Return: A pointer to the mux-control, or an ERR_PTR with a negative errno. 527 + * Return: Pointer to the mux-control on success, an ERR_PTR with a negative 528 + * errno on error, or NULL if optional is true and mux doesn't exist. 537 529 */ 538 530 static struct mux_control *mux_get(struct device *dev, const char *mux_name, 539 - unsigned int *state) 531 + unsigned int *state, bool optional) 540 532 { 541 533 struct device_node *np = dev->of_node; 542 534 struct of_phandle_args args; ··· 554 542 else 555 543 index = of_property_match_string(np, "mux-control-names", 556 544 mux_name); 557 - if (index < 0) { 545 + if (index < 0 && optional) { 546 + return NULL; 547 + } else if (index < 0) { 558 548 dev_err(dev, "mux controller '%s' not found\n", 559 549 mux_name); 560 550 return ERR_PTR(index); ··· 572 558 "mux-controls", "#mux-control-cells", 573 559 index, &args); 574 560 if (ret) { 561 + if (optional && ret == -ENOENT) 562 + return NULL; 563 + 575 564 dev_err(dev, "%pOF: failed to get mux-%s %s(%i)\n", 576 - np, state ? 
"state" : "control", mux_name ?: "", index); 565 + np, state ? "state" : "control", 566 + mux_name ?: "", index); 577 567 return ERR_PTR(ret); 578 568 } 579 569 ··· 635 617 */ 636 618 struct mux_control *mux_control_get(struct device *dev, const char *mux_name) 637 619 { 638 - return mux_get(dev, mux_name, NULL); 620 + struct mux_control *mux = mux_get(dev, mux_name, NULL, false); 621 + 622 + if (!mux) 623 + return ERR_PTR(-ENOENT); 624 + 625 + return mux; 639 626 } 640 627 EXPORT_SYMBOL_GPL(mux_control_get); 628 + 629 + /** 630 + * mux_control_get_optional() - Get the optional mux-control for a device. 631 + * @dev: The device that needs a mux-control. 632 + * @mux_name: The name identifying the mux-control. 633 + * 634 + * Return: Pointer to the mux-control on success, an ERR_PTR with a negative 635 + * errno on error, or NULL if mux doesn't exist. 636 + */ 637 + struct mux_control *mux_control_get_optional(struct device *dev, const char *mux_name) 638 + { 639 + return mux_get(dev, mux_name, NULL, true); 640 + } 641 + EXPORT_SYMBOL_GPL(mux_control_get_optional); 641 642 642 643 /** 643 644 * mux_control_put() - Put away the mux-control for good. ··· 707 670 } 708 671 EXPORT_SYMBOL_GPL(devm_mux_control_get); 709 672 710 - /* 673 + /** 711 674 * mux_state_get() - Get the mux-state for a device. 712 675 * @dev: The device that needs a mux-state. 713 676 * @mux_name: The name identifying the mux-state. 677 + * @optional: Whether to return NULL and silence errors when mux doesn't exist. 714 678 * 715 - * Return: A pointer to the mux-state, or an ERR_PTR with a negative errno. 679 + * Return: Pointer to the mux-state on success, an ERR_PTR with a negative 680 + * errno on error, or NULL if optional is true and mux doesn't exist. 
716 681 */ 717 - static struct mux_state *mux_state_get(struct device *dev, const char *mux_name) 682 + static struct mux_state *mux_state_get(struct device *dev, const char *mux_name, bool optional) 718 683 { 719 684 struct mux_state *mstate; 720 685 ··· 724 685 if (!mstate) 725 686 return ERR_PTR(-ENOMEM); 726 687 727 - mstate->mux = mux_get(dev, mux_name, &mstate->state); 688 + mstate->mux = mux_get(dev, mux_name, &mstate->state, optional); 728 689 if (IS_ERR(mstate->mux)) { 729 690 int err = PTR_ERR(mstate->mux); 730 691 731 692 kfree(mstate); 732 693 return ERR_PTR(err); 694 + } else if (!mstate->mux) { 695 + kfree(mstate); 696 + return optional ? NULL : ERR_PTR(-ENOENT); 733 697 } 734 698 735 699 return mstate; ··· 752 710 753 711 static void devm_mux_state_release(struct device *dev, void *res) 754 712 { 755 - struct mux_state *mstate = *(struct mux_state **)res; 713 + struct devm_mux_state_state *devm_state = res; 756 714 715 + if (devm_state->exit) 716 + devm_state->exit(devm_state->mstate); 717 + 718 + mux_state_put(devm_state->mstate); 719 + } 720 + 721 + /** 722 + * __devm_mux_state_get() - Get the optional mux-state for a device, 723 + * with resource management. 724 + * @dev: The device that needs a mux-state. 725 + * @mux_name: The name identifying the mux-state. 726 + * @optional: Whether to return NULL and silence errors when mux doesn't exist. 727 + * @init: Optional function pointer for mux-state object initialisation. 728 + * @exit: Optional function pointer for mux-state object cleanup on release. 729 + * 730 + * Return: Pointer to the mux-state on success, an ERR_PTR with a negative 731 + * errno on error, or NULL if optional is true and mux doesn't exist. 
732 + */ 733 + static struct mux_state *__devm_mux_state_get(struct device *dev, const char *mux_name, 734 + bool optional, 735 + int (*init)(struct mux_state *mstate), 736 + int (*exit)(struct mux_state *mstate)) 737 + { 738 + struct devm_mux_state_state *devm_state; 739 + struct mux_state *mstate; 740 + int ret; 741 + 742 + mstate = mux_state_get(dev, mux_name, optional); 743 + if (IS_ERR(mstate)) 744 + return ERR_CAST(mstate); 745 + else if (optional && !mstate) 746 + return NULL; 747 + else if (!mstate) 748 + return ERR_PTR(-ENOENT); 749 + 750 + devm_state = devres_alloc(devm_mux_state_release, sizeof(*devm_state), GFP_KERNEL); 751 + if (!devm_state) { 752 + ret = -ENOMEM; 753 + goto err_devres_alloc; 754 + } 755 + 756 + if (init) { 757 + ret = init(mstate); 758 + if (ret) 759 + goto err_mux_state_init; 760 + } 761 + 762 + devm_state->mstate = mstate; 763 + devm_state->exit = exit; 764 + devres_add(dev, devm_state); 765 + 766 + return mstate; 767 + 768 + err_mux_state_init: 769 + devres_free(devm_state); 770 + err_devres_alloc: 757 771 mux_state_put(mstate); 772 + return ERR_PTR(ret); 758 773 } 759 774 760 775 /** ··· 821 722 * @mux_name: The name identifying the mux-control. 822 723 * 823 724 * Return: Pointer to the mux-state, or an ERR_PTR with a negative errno. 725 + * 726 + * The mux-state will automatically be freed on release. 
824 727 */ 825 - struct mux_state *devm_mux_state_get(struct device *dev, 826 - const char *mux_name) 728 + struct mux_state *devm_mux_state_get(struct device *dev, const char *mux_name) 827 729 { 828 - struct mux_state **ptr, *mstate; 829 - 830 - ptr = devres_alloc(devm_mux_state_release, sizeof(*ptr), GFP_KERNEL); 831 - if (!ptr) 832 - return ERR_PTR(-ENOMEM); 833 - 834 - mstate = mux_state_get(dev, mux_name); 835 - if (IS_ERR(mstate)) { 836 - devres_free(ptr); 837 - return mstate; 838 - } 839 - 840 - *ptr = mstate; 841 - devres_add(dev, ptr); 842 - 843 - return mstate; 730 + return __devm_mux_state_get(dev, mux_name, false, NULL, NULL); 844 731 } 845 732 EXPORT_SYMBOL_GPL(devm_mux_state_get); 733 + 734 + /** 735 + * devm_mux_state_get_optional() - Get the optional mux-state for a device, 736 + * with resource management. 737 + * @dev: The device that needs a mux-state. 738 + * @mux_name: The name identifying the mux-state. 739 + * 740 + * Return: Pointer to the mux-state on success, an ERR_PTR with a negative 741 + * errno on error, or NULL if mux doesn't exist. 742 + * 743 + * The mux-state will automatically be freed on release. 744 + */ 745 + struct mux_state *devm_mux_state_get_optional(struct device *dev, const char *mux_name) 746 + { 747 + return __devm_mux_state_get(dev, mux_name, true, NULL, NULL); 748 + } 749 + EXPORT_SYMBOL_GPL(devm_mux_state_get_optional); 750 + 751 + /** 752 + * devm_mux_state_get_selected() - Get the mux-state for a device, with 753 + * resource management. 754 + * @dev: The device that needs a mux-state. 755 + * @mux_name: The name identifying the mux-state. 756 + * 757 + * Return: Pointer to the mux-state, or an ERR_PTR with a negative errno. 758 + * 759 + * The returned mux-state (if valid) is already selected. 760 + * 761 + * The mux-state will automatically be deselected and freed on release. 
762 + */ 763 + struct mux_state *devm_mux_state_get_selected(struct device *dev, const char *mux_name) 764 + { 765 + return __devm_mux_state_get(dev, mux_name, false, mux_state_select, mux_state_deselect); 766 + } 767 + EXPORT_SYMBOL_GPL(devm_mux_state_get_selected); 768 + 769 + /** 770 + * devm_mux_state_get_optional_selected() - Get the optional mux-state for 771 + * a device, with resource management. 772 + * @dev: The device that needs a mux-state. 773 + * @mux_name: The name identifying the mux-state. 774 + * 775 + * Return: Pointer to the mux-state on success, an ERR_PTR with a negative 776 + * errno on error, or NULL if mux doesn't exist. 777 + * 778 + * The returned mux-state (if valid) is already selected. 779 + * 780 + * The mux-state will automatically be deselected and freed on release. 781 + */ 782 + struct mux_state *devm_mux_state_get_optional_selected(struct device *dev, 783 + const char *mux_name) 784 + { 785 + return __devm_mux_state_get(dev, mux_name, true, mux_state_select, mux_state_deselect); 786 + } 787 + EXPORT_SYMBOL_GPL(devm_mux_state_get_optional_selected); 846 788 847 789 /* 848 790 * Using subsys_initcall instead of module_init here to try to ensure - for
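The new `__devm_mux_state_get()` above threads optional `init`/`exit` callbacks through a devres record, so a selected mux-state is automatically deselected before it is put when the device goes away. A userspace sketch of that release-record idea; all types and the `release()` driver are illustrative stand-ins, not the kernel devres API:

```c
#include <assert.h>

/* Stand-in for struct mux_state: just tracks selection. */
struct mstate {
	int selected;
};

/* Mirrors struct devm_mux_state_state: the object plus an optional
 * exit() callback to run before the final put. */
struct devm_rec {
	struct mstate *m;
	int (*exit)(struct mstate *m);
};

static int sel(struct mstate *m)   { m->selected = 1; return 0; }
static int desel(struct mstate *m) { m->selected = 0; return 0; }

/* What devm_mux_state_release() does on device teardown: run the
 * recorded exit() (e.g. mux_state_deselect), then put the object. */
static void release(struct devm_rec *r)
{
	if (r->exit)
		r->exit(r->m);
	/* mux_state_put() / free would follow here */
}
```

With `exit == NULL` this degenerates to the plain `devm_mux_state_get()` behavior, which is why one `__devm_mux_state_get()` can back all four public variants.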
-10
drivers/phy/phy-can-transceiver.c
··· 126 126 }; 127 127 MODULE_DEVICE_TABLE(of, can_transceiver_phy_ids); 128 128 129 - /* Temporary wrapper until the multiplexer subsystem supports optional muxes */ 130 - static inline struct mux_state * 131 - devm_mux_state_get_optional(struct device *dev, const char *mux_name) 132 - { 133 - if (!of_property_present(dev->of_node, "mux-states")) 134 - return NULL; 135 - 136 - return devm_mux_state_get(dev, mux_name); 137 - } 138 - 139 129 static struct phy *can_transceiver_phy_xlate(struct device *dev, 140 130 const struct of_phandle_args *args) 141 131 {
+2 -28
drivers/phy/renesas/phy-rcar-gen3-usb2.c
··· 939 939 return rcar_gen3_phy_usb2_vbus_regulator_get_exclusive_enable(channel, enable); 940 940 } 941 941 942 - /* Temporary wrapper until the multiplexer subsystem supports optional muxes */ 943 - static inline struct mux_state * 944 - devm_mux_state_get_optional(struct device *dev, const char *mux_name) 945 - { 946 - if (!of_property_present(dev->of_node, "mux-states")) 947 - return NULL; 948 - 949 - return devm_mux_state_get(dev, mux_name); 950 - } 951 - 952 - static void rcar_gen3_phy_mux_state_deselect(void *data) 953 - { 954 - mux_state_deselect(data); 955 - } 956 - 957 942 static int rcar_gen3_phy_usb2_probe(struct platform_device *pdev) 958 943 { 959 944 struct device *dev = &pdev->dev; ··· 1021 1036 phy_set_drvdata(channel->rphys[i].phy, &channel->rphys[i]); 1022 1037 } 1023 1038 1024 - mux_state = devm_mux_state_get_optional(dev, NULL); 1039 + mux_state = devm_mux_state_get_optional_selected(dev, NULL); 1025 1040 if (IS_ERR(mux_state)) 1026 - return PTR_ERR(mux_state); 1027 - if (mux_state) { 1028 - ret = mux_state_select(mux_state); 1029 - if (ret) 1030 - return dev_err_probe(dev, ret, "Failed to select USB mux\n"); 1031 - 1032 - ret = devm_add_action_or_reset(dev, rcar_gen3_phy_mux_state_deselect, 1033 - mux_state); 1034 - if (ret) 1035 - return dev_err_probe(dev, ret, 1036 - "Failed to register USB mux state deselect\n"); 1037 - } 1041 + return dev_err_probe(dev, PTR_ERR(mux_state), "Failed to get USB mux\n"); 1038 1042 1039 1043 if (channel->phy_data->no_adp_ctrl && channel->is_otg_channel) { 1040 1044 ret = rcar_gen3_phy_usb2_vbus_regulator_register(channel);
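The rcar-gen3 conversion above relies on the three-way contract of the optional getters: an `ERR_PTR` means a real failure, `NULL` means "no mux described, carry on", and a valid pointer means the mux exists (and, for the `_selected` variants, is already selected). A userspace sketch of that contract; the `IS_ERR`/`ERR_PTR` helpers here mimic the kernel ones, and `probe_with_mux()` is a hypothetical consumer:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal userspace mimics of the kernel's ERR_PTR machinery:
 * errno values are encoded in the top 4095 pointer values. */
#define MAX_ERRNO 4095
static inline void *ERR_PTR(long err)      { return (void *)err; }
static inline long  PTR_ERR(const void *p) { return (long)p; }
static inline int   IS_ERR(const void *p)
{
	return (unsigned long)p >= (unsigned long)-MAX_ERRNO;
}

/* Hypothetical consumer, shaped like the rcar-gen3 probe path. */
static int probe_with_mux(void *mux_state)
{
	if (IS_ERR(mux_state))
		return (int)PTR_ERR(mux_state);	/* propagate the error */
	if (!mux_state)
		return 0;	/* optional mux absent: nothing to do */
	return 1;		/* mux present and already selected */
}
```

This is what lets the driver drop its local `devm_mux_state_get_optional()` wrapper and the manual select/deselect action: one `IS_ERR` check covers the failure case, and `NULL` silently skips the mux.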
+2
include/linux/mmc/card.h
··· 329 329 #define MMC_QUIRK_BROKEN_CACHE_FLUSH (1<<16) /* Don't flush cache until the write has occurred */ 330 330 #define MMC_QUIRK_BROKEN_SD_POWEROFF_NOTIFY (1<<17) /* Disable broken SD poweroff notify support */ 331 331 #define MMC_QUIRK_NO_UHS_DDR50_TUNING (1<<18) /* Disable DDR50 tuning */ 332 + #define MMC_QUIRK_BROKEN_MDT (1<<19) /* Wrong manufacturing year */ 333 + #define MMC_QUIRK_FIXED_SECURE_ERASE_TRIM_TIME (1<<20) /* Secure erase/trim time is fixed regardless of size */ 332 334 333 335 bool written_flag; /* Indicates eMMC has been written since power on */ 334 336 bool reenable_cmdq; /* Re-enable Command Queue */
include/linux/mmc/sdio_ids.h (+3)
···
 #define SDIO_VENDOR_ID_MICROCHIP_WILC		0x0296
 #define SDIO_DEVICE_ID_MICROCHIP_WILC1000	0x5347
 
+#define SDIO_VENDOR_ID_NXP			0x0471
+#define SDIO_DEVICE_ID_NXP_IW61X		0x0205
 
 #define SDIO_VENDOR_ID_REALTEK			0x024c
 #define SDIO_DEVICE_ID_REALTEK_RTW8723BS	0xb723
 #define SDIO_DEVICE_ID_REALTEK_RTW8821BS	0xb821
include/linux/mux/consumer.h (+104 -4)
···
 struct mux_control;
 struct mux_state;
 
+#if IS_ENABLED(CONFIG_MULTIPLEXER)
+
 unsigned int mux_control_states(struct mux_control *mux);
 int __must_check mux_control_select_delay(struct mux_control *mux,
 					  unsigned int state,
···
 int mux_state_deselect(struct mux_state *mstate);
 
 struct mux_control *mux_control_get(struct device *dev, const char *mux_name);
+struct mux_control *mux_control_get_optional(struct device *dev, const char *mux_name);
 void mux_control_put(struct mux_control *mux);
 
-struct mux_control *devm_mux_control_get(struct device *dev,
-					 const char *mux_name);
-struct mux_state *devm_mux_state_get(struct device *dev,
-				     const char *mux_name);
+struct mux_control *devm_mux_control_get(struct device *dev, const char *mux_name);
+struct mux_state *devm_mux_state_get(struct device *dev, const char *mux_name);
+struct mux_state *devm_mux_state_get_optional(struct device *dev, const char *mux_name);
+struct mux_state *devm_mux_state_get_selected(struct device *dev, const char *mux_name);
+struct mux_state *devm_mux_state_get_optional_selected(struct device *dev, const char *mux_name);
+
+#else
+
+static inline unsigned int mux_control_states(struct mux_control *mux)
+{
+	return 0;
+}
+static inline int __must_check mux_control_select_delay(struct mux_control *mux,
+							unsigned int state, unsigned int delay_us)
+{
+	return -EOPNOTSUPP;
+}
+static inline int __must_check mux_state_select_delay(struct mux_state *mstate,
+						      unsigned int delay_us)
+{
+	return -EOPNOTSUPP;
+}
+static inline int __must_check mux_control_try_select_delay(struct mux_control *mux,
+							    unsigned int state,
+							    unsigned int delay_us)
+{
+	return -EOPNOTSUPP;
+}
+static inline int __must_check mux_state_try_select_delay(struct mux_state *mstate,
+							  unsigned int delay_us)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline int __must_check mux_control_select(struct mux_control *mux,
+						  unsigned int state)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline int __must_check mux_state_select(struct mux_state *mstate)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline int __must_check mux_control_try_select(struct mux_control *mux,
+						      unsigned int state)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline int __must_check mux_state_try_select(struct mux_state *mstate)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline int mux_control_deselect(struct mux_control *mux)
+{
+	return -EOPNOTSUPP;
+}
+static inline int mux_state_deselect(struct mux_state *mstate)
+{
+	return -EOPNOTSUPP;
+}
+
+static inline struct mux_control *mux_control_get(struct device *dev, const char *mux_name)
+{
+	return ERR_PTR(-EOPNOTSUPP);
+}
+static inline struct mux_control *mux_control_get_optional(struct device *dev,
+							   const char *mux_name)
+{
+	return NULL;
+}
+static inline void mux_control_put(struct mux_control *mux) {}
+
+static inline struct mux_control *devm_mux_control_get(struct device *dev, const char *mux_name)
+{
+	return ERR_PTR(-EOPNOTSUPP);
+}
+static inline struct mux_state *devm_mux_state_get(struct device *dev, const char *mux_name)
+{
+	return ERR_PTR(-EOPNOTSUPP);
+}
+static inline struct mux_state *devm_mux_state_get_optional(struct device *dev,
+							    const char *mux_name)
+{
+	return NULL;
+}
+static inline struct mux_state *devm_mux_state_get_selected(struct device *dev,
+							    const char *mux_name)
+{
+	return ERR_PTR(-EOPNOTSUPP);
+}
+static inline struct mux_state *devm_mux_state_get_optional_selected(struct device *dev,
+								     const char *mux_name)
+{
+	return NULL;
+}
+
+#endif /* CONFIG_MULTIPLEXER */
 
 #endif /* _LINUX_MUX_CONSUMER_H */