Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'soc-drivers-7.0' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc

Pull SoC driver updates from Arnd Bergmann:
"There are a number of updates to firmware drivers, in particular the TEE
subsystem:

- a bus callback for TEE firmware that device drivers can register to

- sysfs support for tee firmware information

- minor updates to platform specific TEE drivers for AMD, NXP,
Qualcomm and the generic optee driver

- ARM SCMI firmware refactoring to improve protocol discovery,
among other fixes and cleanups

- ARM FF-A firmware interoperability improvements

The reset controller and memory controller subsystems gain support for
additional hardware platforms from Mediatek, Renesas, NXP, Canaan and
SpacemiT.

Most of the other changes are for assorted drivers/soc code, with a
number of cleanups and newly added hardware support, including:

- Mediatek MT8196 DVFS power management and mailbox support

- Qualcomm SCM firmware and MDT loader refactoring, as part of the
new Glymur platform support.

- NXP i.MX9 System Manager firmware support for accessing the syslog

- Minor updates for TI, Renesas, Samsung, Apple, Marvell and AMD
SoCs"

* tag 'soc-drivers-7.0' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc: (171 commits)
bus: fsl-mc: fix an error handling in fsl_mc_device_add()
reset: spacemit: Add SpacemiT K3 reset driver
reset: spacemit: Extract common K1 reset code
reset: Create subdirectory for SpacemiT drivers
dt-bindings: soc: spacemit: Add K3 reset support and IDs
reset: canaan: k230: drop OF dependency and enable by default
reset: rzg2l-usbphy-ctrl: Add suspend/resume support
reset: rzg2l-usbphy-ctrl: Propagate the return value of regmap_field_update_bits()
reset: gpio: check the return value of gpiod_set_value_cansleep()
reset: imx8mp-audiomix: Support i.MX8ULP SIM LPAV
reset: imx8mp-audiomix: Extend the driver usage
reset: imx8mp-audiomix: Switch to using regmap API
reset: imx8mp-audiomix: Drop unneeded macros
soc: fsl: qe: qe_ports_ic: Consolidate chained IRQ handler install/remove
soc: mediatek: mtk-cmdq: Add mminfra_offset adjustment for DRAM addresses
soc: mediatek: mtk-cmdq: Extend cmdq_pkt_write API for SoCs without subsys ID
soc: mediatek: mtk-cmdq: Add pa_base parsing for hardware without subsys ID support
soc: mediatek: mtk-cmdq: Add cmdq_get_mbox_priv() in cmdq_pkt_create()
mailbox: mtk-cmdq: Add driver data to support for MT8196
mailbox: mtk-cmdq: Add mminfra_offset configuration for DRAM transaction
...

+5125 -1463
+10
Documentation/ABI/testing/sysfs-class-tee
··· 13 13 space if the variable is absent. The primary purpose 14 14 of this variable is to let systemd know whether 15 15 tee-supplicant is needed in the early boot with initramfs. 16 + 17 + What: /sys/class/tee/tee{,priv}X/revision 18 + Date: Jan 2026 19 + KernelVersion: 6.19 20 + Contact: op-tee@lists.trustedfirmware.org 21 + Description: 22 + Read-only revision string reported by the TEE driver. This is 23 + for diagnostics only and must not be used to infer feature 24 + support. Use TEE_IOC_VERSION for capability and compatibility 25 + checks.
+44 -2
Documentation/devicetree/bindings/cache/qcom,llcc.yaml
··· 20 20 properties: 21 21 compatible: 22 22 enum: 23 + - qcom,glymur-llcc 23 24 - qcom,ipq5424-llcc 24 25 - qcom,kaanapali-llcc 25 26 - qcom,qcs615-llcc ··· 47 46 48 47 reg: 49 48 minItems: 1 50 - maxItems: 10 49 + maxItems: 14 51 50 52 51 reg-names: 53 52 minItems: 1 54 - maxItems: 10 53 + maxItems: 14 55 54 56 55 interrupts: 57 56 maxItems: 1 ··· 84 83 reg-names: 85 84 items: 86 85 - const: llcc0_base 86 + 87 + - if: 88 + properties: 89 + compatible: 90 + contains: 91 + enum: 92 + - qcom,glymur-llcc 93 + then: 94 + properties: 95 + reg: 96 + items: 97 + - description: LLCC0 base register region 98 + - description: LLCC1 base register region 99 + - description: LLCC2 base register region 100 + - description: LLCC3 base register region 101 + - description: LLCC4 base register region 102 + - description: LLCC5 base register region 103 + - description: LLCC6 base register region 104 + - description: LLCC7 base register region 105 + - description: LLCC8 base register region 106 + - description: LLCC9 base register region 107 + - description: LLCC10 base register region 108 + - description: LLCC11 base register region 109 + - description: LLCC broadcast base register region 110 + - description: LLCC broadcast AND register region 111 + reg-names: 112 + items: 113 + - const: llcc0_base 114 + - const: llcc1_base 115 + - const: llcc2_base 116 + - const: llcc3_base 117 + - const: llcc4_base 118 + - const: llcc5_base 119 + - const: llcc6_base 120 + - const: llcc7_base 121 + - const: llcc8_base 122 + - const: llcc9_base 123 + - const: llcc10_base 124 + - const: llcc11_base 125 + - const: llcc_broadcast_base 126 + - const: llcc_broadcast_and_base 87 127 88 128 - if: 89 129 properties:
+1
Documentation/devicetree/bindings/crypto/qcom,prng.yaml
··· 21 21 - qcom,ipq5424-trng 22 22 - qcom,ipq9574-trng 23 23 - qcom,kaanapali-trng 24 + - qcom,milos-trng 24 25 - qcom,qcs615-trng 25 26 - qcom,qcs8300-trng 26 27 - qcom,sa8255p-trng
+51
Documentation/devicetree/bindings/interrupt-controller/fsl,qe-ports-ic.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/interrupt-controller/fsl,qe-ports-ic.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Freescale QUICC Engine I/O Ports Interrupt Controller 8 + 9 + maintainers: 10 + - Christophe Leroy (CS GROUP) <chleroy@kernel.org> 11 + 12 + properties: 13 + compatible: 14 + enum: 15 + - fsl,mpc8323-qe-ports-ic 16 + 17 + reg: 18 + maxItems: 1 19 + 20 + interrupt-controller: true 21 + 22 + '#address-cells': 23 + const: 0 24 + 25 + '#interrupt-cells': 26 + const: 1 27 + 28 + interrupts: 29 + maxItems: 1 30 + 31 + required: 32 + - compatible 33 + - reg 34 + - interrupt-controller 35 + - '#address-cells' 36 + - '#interrupt-cells' 37 + - interrupts 38 + 39 + additionalProperties: false 40 + 41 + examples: 42 + - | 43 + interrupt-controller@c00 { 44 + compatible = "fsl,mpc8323-qe-ports-ic"; 45 + reg = <0xc00 0x18>; 46 + interrupt-controller; 47 + #address-cells = <0>; 48 + #interrupt-cells = <1>; 49 + interrupts = <74 0x8>; 50 + interrupt-parent = <&ipic>; 51 + };
+2
Documentation/devicetree/bindings/interrupt-controller/qcom,pdc.yaml
··· 27 27 items: 28 28 - enum: 29 29 - qcom,glymur-pdc 30 + - qcom,kaanapali-pdc 31 + - qcom,milos-pdc 30 32 - qcom,qcs615-pdc 31 33 - qcom,qcs8300-pdc 32 34 - qcom,qdu1000-pdc
+34
Documentation/devicetree/bindings/memory-controllers/ddr/jedec,ddr4.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/memory-controllers/ddr/jedec,ddr4.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: DDR4 SDRAM compliant to JEDEC JESD79-4D 8 + 9 + maintainers: 10 + - Krzysztof Kozlowski <krzk@kernel.org> 11 + 12 + allOf: 13 + - $ref: jedec,sdram-props.yaml# 14 + 15 + properties: 16 + compatible: 17 + items: 18 + - pattern: "^ddr4-[0-9a-f]{4},[a-z]{1,20}-[0-9a-f]{2}$" 19 + - const: jedec,ddr4 20 + 21 + required: 22 + - compatible 23 + - density 24 + - io-width 25 + 26 + unevaluatedProperties: false 27 + 28 + examples: 29 + - | 30 + ddr { 31 + compatible = "ddr4-00ff,azaz-ff", "jedec,ddr4"; 32 + density = <8192>; 33 + io-width = <8>; 34 + };
+27 -13
Documentation/devicetree/bindings/memory-controllers/ddr/jedec,lpddr-channel.yaml Documentation/devicetree/bindings/memory-controllers/ddr/jedec,sdram-channel.yaml
··· 1 1 # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 2 %YAML 1.2 3 3 --- 4 - $id: http://devicetree.org/schemas/memory-controllers/ddr/jedec,lpddr-channel.yaml# 4 + $id: http://devicetree.org/schemas/memory-controllers/ddr/jedec,sdram-channel.yaml# 5 5 $schema: http://devicetree.org/meta-schemas/core.yaml# 6 6 7 - title: LPDDR channel with chip/rank topology description 7 + title: SDRAM channel with chip/rank topology description 8 8 9 9 description: 10 - An LPDDR channel is a completely independent set of LPDDR pins (DQ, CA, CS, 11 - CK, etc.) that connect one or more LPDDR chips to a host system. The main 12 - purpose of this node is to overall LPDDR topology of the system, including the 13 - amount of individual LPDDR chips and the ranks per chip. 10 + A memory channel of SDRAM memory like DDR SDRAM or LPDDR SDRAM is a completely 11 + independent set of pins (DQ, CA, CS, CK, etc.) that connect one or more memory 12 + chips to a host system. The main purpose of this node is to overall memory 13 + topology of the system, including the amount of individual memory chips and 14 + the ranks per chip. 14 15 15 16 maintainers: 16 17 - Julius Werner <jwerner@chromium.org> 17 18 18 19 properties: 20 + $nodename: 21 + pattern: "sdram-channel-[0-9]+$" 22 + 19 23 compatible: 20 24 enum: 25 + - jedec,ddr4-channel 21 26 - jedec,lpddr2-channel 22 27 - jedec,lpddr3-channel 23 28 - jedec,lpddr4-channel ··· 31 26 io-width: 32 27 description: 33 28 The number of DQ pins in the channel. If this number is different 34 - from (a multiple of) the io-width of the LPDDR chip, that means that 29 + from (a multiple of) the io-width of the SDRAM chip, that means that 35 30 multiple instances of that type of chip are wired in parallel on this 36 31 channel (with the channel's DQ pins split up between the different 37 32 chips, and the CA, CS, etc. pins of the different chips all shorted 38 33 together). 
This means that the total physical memory controlled by a 39 34 channel is equal to the sum of the densities of each rank on the 40 - connected LPDDR chip, times the io-width of the channel divided by 41 - the io-width of the LPDDR chip. 35 + connected SDRAM chip, times the io-width of the channel divided by 36 + the io-width of the SDRAM chip. 42 37 enum: 43 38 - 8 44 39 - 16 ··· 56 51 "^rank@[0-9]+$": 57 52 type: object 58 53 description: 59 - Each physical LPDDR chip may have one or more ranks. Ranks are 60 - internal but fully independent sub-units of the chip. Each LPDDR bus 54 + Each physical SDRAM chip may have one or more ranks. Ranks are 55 + internal but fully independent sub-units of the chip. Each SDRAM bus 61 56 transaction on the channel targets exactly one rank, based on the 62 57 state of the CS pins. Different ranks may have different densities and 63 58 timing requirements. ··· 65 60 - reg 66 61 67 62 allOf: 63 + - if: 64 + properties: 65 + compatible: 66 + contains: 67 + const: jedec,ddr4-channel 68 + then: 69 + patternProperties: 70 + "^rank@[0-9]+$": 71 + $ref: /schemas/memory-controllers/ddr/jedec,ddr4.yaml# 68 72 - if: 69 73 properties: 70 74 compatible: ··· 121 107 122 108 examples: 123 109 - | 124 - lpddr-channel0 { 110 + sdram-channel-0 { 125 111 #address-cells = <1>; 126 112 #size-cells = <0>; 127 113 compatible = "jedec,lpddr3-channel"; ··· 136 122 }; 137 123 }; 138 124 139 - lpddr-channel1 { 125 + sdram-channel-1 { 140 126 #address-cells = <1>; 141 127 #size-cells = <0>; 142 128 compatible = "jedec,lpddr4-channel";
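The io-width description in the new jedec,sdram-channel binding states a simple capacity rule: the total memory behind a channel is the sum of the rank densities of one chip, scaled by the number of chips wired in parallel (channel io-width divided by chip io-width). A minimal sketch of that arithmetic, with an illustrative helper name that is not part of any kernel API:

```c
#include <stdint.h>

/*
 * Capacity rule from the jedec,sdram-channel binding: a 32-bit channel
 * built from x16 chips has two chips in parallel, so the per-chip rank
 * densities are summed and then multiplied by (channel width / chip width).
 * channel_capacity_mbits() is an illustrative name, not a kernel function.
 */
static uint64_t channel_capacity_mbits(const uint32_t *rank_density_mbits,
				       unsigned int nr_ranks,
				       uint32_t channel_io_width,
				       uint32_t chip_io_width)
{
	uint64_t per_chip = 0;
	unsigned int i;

	for (i = 0; i < nr_ranks; i++)
		per_chip += rank_density_mbits[i];

	return per_chip * (channel_io_width / chip_io_width);
}
```

For example, two 4096 Mbit ranks per chip on x16 parts behind a 32-bit channel gives 2 × (4096 + 4096) = 16384 Mbit.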
-74
Documentation/devicetree/bindings/memory-controllers/ddr/jedec,lpddr-props.yaml
··· 1 - # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 - %YAML 1.2 3 - --- 4 - $id: http://devicetree.org/schemas/memory-controllers/ddr/jedec,lpddr-props.yaml# 5 - $schema: http://devicetree.org/meta-schemas/core.yaml# 6 - 7 - title: Common properties for LPDDR types 8 - 9 - description: 10 - Different LPDDR types generally use the same properties and only differ in the 11 - range of legal values for each. This file defines the common parts that can be 12 - reused for each type. Nodes using this schema should generally be nested under 13 - an LPDDR channel node. 14 - 15 - maintainers: 16 - - Krzysztof Kozlowski <krzk@kernel.org> 17 - 18 - properties: 19 - compatible: 20 - description: 21 - Compatible strings can be either explicit vendor names and part numbers 22 - (e.g. elpida,ECB240ABACN), or generated strings of the form 23 - lpddrX-YY,ZZZZ where X is the LPDDR version, YY is the manufacturer ID 24 - (from MR5) and ZZZZ is the revision ID (from MR6 and MR7). Both IDs are 25 - formatted in lower case hexadecimal representation with leading zeroes. 26 - The latter form can be useful when LPDDR nodes are created at runtime by 27 - boot firmware that doesn't have access to static part number information. 28 - 29 - reg: 30 - description: 31 - The rank number of this LPDDR rank when used as a subnode to an LPDDR 32 - channel. 33 - minimum: 0 34 - maximum: 3 35 - 36 - revision-id: 37 - $ref: /schemas/types.yaml#/definitions/uint32-array 38 - description: 39 - Revision IDs read from Mode Register 6 and 7. One byte per uint32 cell (i.e. <MR6 MR7>). 40 - maxItems: 2 41 - items: 42 - minimum: 0 43 - maximum: 255 44 - 45 - density: 46 - $ref: /schemas/types.yaml#/definitions/uint32 47 - description: 48 - Density in megabits of SDRAM chip. Decoded from Mode Register 8. 
49 - enum: 50 - - 64 51 - - 128 52 - - 256 53 - - 512 54 - - 1024 55 - - 2048 56 - - 3072 57 - - 4096 58 - - 6144 59 - - 8192 60 - - 12288 61 - - 16384 62 - - 24576 63 - - 32768 64 - 65 - io-width: 66 - $ref: /schemas/types.yaml#/definitions/uint32 67 - description: 68 - IO bus width in bits of SDRAM chip. Decoded from Mode Register 8. 69 - enum: 70 - - 8 71 - - 16 72 - - 32 73 - 74 - additionalProperties: true
+1 -1
Documentation/devicetree/bindings/memory-controllers/ddr/jedec,lpddr2.yaml
··· 10 10 - Krzysztof Kozlowski <krzk@kernel.org> 11 11 12 12 allOf: 13 - - $ref: jedec,lpddr-props.yaml# 13 + - $ref: jedec,sdram-props.yaml# 14 14 15 15 properties: 16 16 compatible:
+1 -1
Documentation/devicetree/bindings/memory-controllers/ddr/jedec,lpddr3.yaml
··· 10 10 - Krzysztof Kozlowski <krzk@kernel.org> 11 11 12 12 allOf: 13 - - $ref: jedec,lpddr-props.yaml# 13 + - $ref: jedec,sdram-props.yaml# 14 14 15 15 properties: 16 16 compatible:
+1 -1
Documentation/devicetree/bindings/memory-controllers/ddr/jedec,lpddr4.yaml
··· 10 10 - Krzysztof Kozlowski <krzk@kernel.org> 11 11 12 12 allOf: 13 - - $ref: jedec,lpddr-props.yaml# 13 + - $ref: jedec,sdram-props.yaml# 14 14 15 15 properties: 16 16 compatible:
+1 -1
Documentation/devicetree/bindings/memory-controllers/ddr/jedec,lpddr5.yaml
··· 10 10 - Krzysztof Kozlowski <krzk@kernel.org> 11 11 12 12 allOf: 13 - - $ref: jedec,lpddr-props.yaml# 13 + - $ref: jedec,sdram-props.yaml# 14 14 15 15 properties: 16 16 compatible:
+94
Documentation/devicetree/bindings/memory-controllers/ddr/jedec,sdram-props.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/memory-controllers/ddr/jedec,sdram-props.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Common properties for SDRAM types 8 + 9 + description: 10 + Different SDRAM types generally use the same properties and only differ in the 11 + range of legal values for each. This file defines the common parts that can be 12 + reused for each type. Nodes using this schema should generally be nested under 13 + a SDRAM channel node. 14 + 15 + maintainers: 16 + - Krzysztof Kozlowski <krzk@kernel.org> 17 + 18 + properties: 19 + compatible: 20 + description: | 21 + Compatible strings can be either explicit vendor names and part numbers 22 + (e.g. elpida,ECB240ABACN), or generated strings of the form 23 + lpddrX-YY,ZZZZ or ddrX-YYYY,AAAA...-ZZ where X, Y, and Z are lowercase 24 + hexadecimal with leading zeroes, and A is lowercase ASCII. 25 + For LPDDR and DDR SDRAM, X is the SDRAM version (2, 3, 4, etc.). 26 + For LPDDR SDRAM: 27 + - YY is the manufacturer ID (from MR5), 1 byte 28 + - ZZZZ is the revision ID (from MR6 and MR7), 2 bytes 29 + For DDR4 SDRAM with SPD, according to JEDEC SPD4.1.2.L-6: 30 + - YYYY is the manufacturer ID, 2 bytes, from bytes 320 and 321 31 + - AAAA... is the part number, 20 bytes (20 chars) from bytes 329 to 348 32 + without trailing spaces 33 + - ZZ is the revision ID, 1 byte, from byte 349 34 + The former form is useful when the SDRAM vendor and part number are 35 + known, for example, when memory is soldered on the board. The latter 36 + form is useful when SDRAM nodes are created at runtime by boot firmware 37 + that doesn't have access to static part number information. 38 + 39 + reg: 40 + description: 41 + The rank number of this memory rank when used as a subnode to an memory 42 + channel. 
43 + minimum: 0 44 + maximum: 3 45 + 46 + revision-id: 47 + $ref: /schemas/types.yaml#/definitions/uint32-array 48 + description: | 49 + SDRAM revision ID: 50 + - LPDDR SDRAM, decoded from Mode Registers 6 and 7, always 2 bytes. 51 + - DDR4 SDRAM, decoded from the SPD from byte 349 according to 52 + JEDEC SPD4.1.2.L-6, always 1 byte. 53 + One byte per uint32 cell (e.g., <MR6 MR7>). 54 + maxItems: 2 55 + items: 56 + minimum: 0 57 + maximum: 255 58 + 59 + density: 60 + $ref: /schemas/types.yaml#/definitions/uint32 61 + description: | 62 + Density of the SDRAM chip in megabits: 63 + - LPDDR SDRAM, decoded from Mode Register 8. 64 + - DDR4 SDRAM, decoded from the SPD from bits 3-0 of byte 4 according to 65 + JEDEC SPD4.1.2.L-6. 66 + enum: 67 + - 64 68 + - 128 69 + - 256 70 + - 512 71 + - 1024 72 + - 2048 73 + - 3072 74 + - 4096 75 + - 6144 76 + - 8192 77 + - 12288 78 + - 16384 79 + - 24576 80 + - 32768 81 + 82 + io-width: 83 + $ref: /schemas/types.yaml#/definitions/uint32 84 + description: | 85 + I/O bus width in bits of the SDRAM chip: 86 + - LPDDR SDRAM, decoded from Mode Register 8. 87 + - DDR4 SDRAM, decoded from the SPD from bits 2-0 of byte 12 according to 88 + JEDEC SPD4.1.2.L-6. 89 + enum: 90 + - 8 91 + - 16 92 + - 32 93 + 94 + additionalProperties: true
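The compatible-string description in jedec,sdram-props.yaml specifies a generated form like lpddrX-YY,ZZZZ, built from the MR5 manufacturer ID and the MR6/MR7 revision bytes in lowercase hex with leading zeroes. A hedged sketch of how boot firmware might format that string (the function name is mine, not from the binding or the kernel):

```c
#include <stdio.h>

/*
 * Format the generated LPDDR compatible string described in
 * jedec,sdram-props.yaml: "lpddrX-YY,ZZZZ", where YY is the MR5
 * manufacturer ID and ZZZZ the MR6/MR7 revision ID, lowercase hex
 * with leading zeroes. Illustrative helper, not a kernel API.
 */
static int lpddr_compatible(char *buf, size_t len, unsigned int version,
			    unsigned int mr5, unsigned int mr6,
			    unsigned int mr7)
{
	return snprintf(buf, len, "lpddr%u-%02x,%02x%02x",
			version, mr5 & 0xff, mr6 & 0xff, mr7 & 0xff);
}
```

So an LPDDR4 part with MR5 = 0x01, MR6 = 0x10, MR7 = 0x00 yields "lpddr4-01,1000".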
+61
Documentation/devicetree/bindings/nvmem/google,gs101-otp.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/nvmem/google,gs101-otp.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Google GS101 OTP Controller 8 + 9 + maintainers: 10 + - Tudor Ambarus <tudor.ambarus@linaro.org> 11 + 12 + description: | 13 + OTP controller drives a NVMEM memory where system or user specific data 14 + can be stored. The OTP controller register space is of interest as well 15 + because it contains dedicated registers where it stores the Product ID 16 + and the Chip ID (apart other things like TMU or ASV info). 17 + 18 + allOf: 19 + - $ref: nvmem.yaml# 20 + 21 + properties: 22 + compatible: 23 + items: 24 + - const: google,gs101-otp 25 + 26 + clocks: 27 + maxItems: 1 28 + 29 + clock-names: 30 + const: pclk 31 + 32 + interrupts: 33 + maxItems: 1 34 + 35 + reg: 36 + maxItems: 1 37 + 38 + power-domains: 39 + maxItems: 1 40 + 41 + required: 42 + - compatible 43 + - reg 44 + - clocks 45 + - clock-names 46 + - interrupts 47 + 48 + unevaluatedProperties: false 49 + 50 + examples: 51 + - | 52 + #include <dt-bindings/clock/google,gs101.h> 53 + #include <dt-bindings/interrupt-controller/arm-gic.h> 54 + 55 + efuse@10000000 { 56 + compatible = "google,gs101-otp"; 57 + reg = <0x10000000 0xf084>; 58 + clocks = <&cmu_misc CLK_GOUT_MISC_OTP_CON_TOP_PCLK>; 59 + clock-names = "pclk"; 60 + interrupts = <GIC_SPI 752 IRQ_TYPE_LEVEL_HIGH>; 61 + };
+3
Documentation/devicetree/bindings/remoteproc/qcom,pas-common.yaml
··· 44 44 - const: stop-ack 45 45 - const: shutdown-ack 46 46 47 + iommus: 48 + maxItems: 1 49 + 47 50 power-domains: 48 51 minItems: 1 49 52 maxItems: 3
+6
Documentation/devicetree/bindings/soc/mediatek/mediatek,mt8183-dvfsrc.yaml
··· 34 34 maxItems: 1 35 35 description: DVFSRC common register address and length. 36 36 37 + clocks: 38 + items: 39 + - description: Clock that drives the DVFSRC MCU 40 + 37 41 regulators: 38 42 type: object 39 43 $ref: /schemas/regulator/mediatek,mt6873-dvfsrc-regulator.yaml# ··· 54 50 55 51 examples: 56 52 - | 53 + #include <dt-bindings/clock/mt8195-clk.h> 57 54 soc { 58 55 #address-cells = <2>; 59 56 #size-cells = <2>; ··· 62 57 system-controller@10012000 { 63 58 compatible = "mediatek,mt8195-dvfsrc"; 64 59 reg = <0 0x10012000 0 0x1000>; 60 + clocks = <&topckgen CLK_TOP_DVFSRC>; 65 61 66 62 regulators { 67 63 compatible = "mediatek,mt8195-dvfsrc-regulator";
+1 -22
Documentation/devicetree/bindings/soc/samsung/exynos-pmu.yaml
··· 9 9 maintainers: 10 10 - Krzysztof Kozlowski <krzk@kernel.org> 11 11 12 - # Custom select to avoid matching all nodes with 'syscon' 13 - select: 14 - properties: 15 - compatible: 16 - contains: 17 - enum: 18 - - google,gs101-pmu 19 - - samsung,exynos3250-pmu 20 - - samsung,exynos4210-pmu 21 - - samsung,exynos4212-pmu 22 - - samsung,exynos4412-pmu 23 - - samsung,exynos5250-pmu 24 - - samsung,exynos5260-pmu 25 - - samsung,exynos5410-pmu 26 - - samsung,exynos5420-pmu 27 - - samsung,exynos5433-pmu 28 - - samsung,exynos7-pmu 29 - - samsung,exynos850-pmu 30 - - samsung-s5pv210-pmu 31 - required: 32 - - compatible 33 - 34 12 properties: 35 13 compatible: 36 14 oneOf: ··· 30 52 - const: syscon 31 53 - items: 32 54 - enum: 55 + - axis,artpec9-pmu 33 56 - samsung,exynos2200-pmu 34 57 - samsung,exynos7870-pmu 35 58 - samsung,exynos7885-pmu
+7 -1
Documentation/devicetree/bindings/soc/spacemit/spacemit,k1-syscon.yaml
··· 10 10 - Haylen Chu <heylenay@4d2.org> 11 11 12 12 description: 13 - System controllers found on SpacemiT K1 SoC, which are capable of 13 + System controllers found on SpacemiT K1/K3 SoC, which are capable of 14 14 clock, reset and power-management functions. 15 15 16 16 properties: ··· 46 46 47 47 "#reset-cells": 48 48 const: 1 49 + description: | 50 + ID of the reset controller line. Valid IDs are defined in corresponding 51 + files: 52 + 53 + For SpacemiT K1, see include/dt-bindings/clock/spacemit,k1-syscon.h 54 + For SpacemiT K3, see include/dt-bindings/reset/spacemit,k3-resets.h 49 55 50 56 required: 51 57 - compatible
+2
Documentation/devicetree/bindings/sram/sram.yaml
··· 34 34 - nvidia,tegra186-sysram 35 35 - nvidia,tegra194-sysram 36 36 - nvidia,tegra234-sysram 37 + - qcom,kaanapali-imem 37 38 - qcom,rpm-msg-ram 38 39 - rockchip,rk3288-pmu-sram 39 40 ··· 90 89 - arm,juno-scp-shmem 91 90 - arm,scmi-shmem 92 91 - arm,scp-shmem 92 + - qcom,pil-reloc-info 93 93 - renesas,smp-sram 94 94 - rockchip,rk3066-smp-sram 95 95 - samsung,exynos4210-sysram
+3 -15
Documentation/driver-api/tee.rst
··· 43 43 MODULE_DEVICE_TABLE(tee, client_id_table); 44 44 45 45 static struct tee_client_driver client_driver = { 46 + .probe = client_probe, 47 + .remove = client_remove, 46 48 .id_table = client_id_table, 47 49 .driver = { 48 50 .name = DRIVER_NAME, 49 - .bus = &tee_bus_type, 50 - .probe = client_probe, 51 - .remove = client_remove, 52 51 }, 53 52 }; 54 53 55 - static int __init client_init(void) 56 - { 57 - return driver_register(&client_driver.driver); 58 - } 59 - 60 - static void __exit client_exit(void) 61 - { 62 - driver_unregister(&client_driver.driver); 63 - } 64 - 65 - module_init(client_init); 66 - module_exit(client_exit); 54 + module_tee_client_driver(client_driver);
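The tee.rst diff above replaces each driver's hand-written init/exit pair with a single module_tee_client_driver() invocation, in the spirit of the kernel's module_driver() family. A userspace sketch of the boilerplate-folding idea behind such a macro — every name here (the struct, the register/unregister helpers, the macro) is illustrative, not the kernel's actual definition:

```c
#include <stdio.h>

/* Stand-in for a client driver; illustrative only. */
struct client_driver {
	const char *name;
	int registered;
};

static int client_driver_register(struct client_driver *drv)
{
	drv->registered = 1;
	return 0;
}

static void client_driver_unregister(struct client_driver *drv)
{
	drv->registered = 0;
}

/*
 * Analogue of module_driver(): expand to the init/exit pair so each
 * driver no longer spells out its own registration boilerplate.
 */
#define module_client_driver(drv)			\
	int drv##_init(void)				\
	{						\
		return client_driver_register(&drv);	\
	}						\
	void drv##_exit(void)				\
	{						\
		client_driver_unregister(&drv);		\
	}

static struct client_driver demo_driver = { .name = "demo" };
module_client_driver(demo_driver)
```

The macro removes two functions and two module_init/module_exit lines per driver, which is exactly the shape of the deletions in the tee.rst, optee-rng and fTPM diffs in this pull.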
+3 -3
MAINTAINERS
··· 19545 19545 19546 19546 OP-TEE DRIVER 19547 19547 M: Jens Wiklander <jens.wiklander@linaro.org> 19548 - L: op-tee@lists.trustedfirmware.org 19548 + L: op-tee@lists.trustedfirmware.org (moderated for non-subscribers) 19549 19549 S: Maintained 19550 19550 F: Documentation/ABI/testing/sysfs-bus-optee-devices 19551 19551 F: drivers/tee/optee/ 19552 19552 19553 19553 OP-TEE RANDOM NUMBER GENERATOR (RNG) DRIVER 19554 19554 M: Sumit Garg <sumit.garg@kernel.org> 19555 - L: op-tee@lists.trustedfirmware.org 19555 + L: op-tee@lists.trustedfirmware.org (moderated for non-subscribers) 19556 19556 S: Maintained 19557 19557 F: drivers/char/hw_random/optee-rng.c 19558 19558 ··· 25677 25677 TEE SUBSYSTEM 25678 25678 M: Jens Wiklander <jens.wiklander@linaro.org> 25679 25679 R: Sumit Garg <sumit.garg@kernel.org> 25680 - L: op-tee@lists.trustedfirmware.org 25680 + L: op-tee@lists.trustedfirmware.org (moderated for non-subscribers) 25681 25681 S: Maintained 25682 25682 F: Documentation/ABI/testing/sysfs-class-tee 25683 25683 F: Documentation/driver-api/tee.rst
+1 -1
drivers/bus/Kconfig
··· 141 141 142 142 config OMAP_OCP2SCP 143 143 tristate "OMAP OCP2SCP DRIVER" 144 - depends on ARCH_OMAP2PLUS 144 + depends on ARCH_OMAP2PLUS || COMPILE_TEST 145 145 help 146 146 Driver to enable ocp2scp module which transforms ocp interface 147 147 protocol to scp protocol. In OMAP4, USB PHY is connected via
+38 -51
drivers/bus/fsl-mc/fsl-mc-bus.c
··· 137 137 return 0; 138 138 } 139 139 140 + static int fsl_mc_probe(struct device *dev) 141 + { 142 + struct fsl_mc_driver *mc_drv = to_fsl_mc_driver(dev->driver); 143 + struct fsl_mc_device *mc_dev = to_fsl_mc_device(dev); 144 + 145 + if (mc_drv->probe) 146 + return mc_drv->probe(mc_dev); 147 + 148 + return 0; 149 + } 150 + 151 + static void fsl_mc_remove(struct device *dev) 152 + { 153 + struct fsl_mc_driver *mc_drv = to_fsl_mc_driver(dev->driver); 154 + struct fsl_mc_device *mc_dev = to_fsl_mc_device(dev); 155 + 156 + if (mc_drv->remove) 157 + mc_drv->remove(mc_dev); 158 + } 159 + 160 + static void fsl_mc_shutdown(struct device *dev) 161 + { 162 + struct fsl_mc_driver *mc_drv = to_fsl_mc_driver(dev->driver); 163 + struct fsl_mc_device *mc_dev = to_fsl_mc_device(dev); 164 + 165 + if (dev->driver && mc_drv->shutdown) 166 + mc_drv->shutdown(mc_dev); 167 + } 168 + 140 169 static int fsl_mc_dma_configure(struct device *dev) 141 170 { 142 171 const struct device_driver *drv = READ_ONCE(dev->driver); ··· 231 202 struct device_attribute *attr, char *buf) 232 203 { 233 204 struct fsl_mc_device *mc_dev = to_fsl_mc_device(dev); 205 + ssize_t len; 234 206 235 - return sysfs_emit(buf, "%s\n", mc_dev->driver_override); 207 + device_lock(dev); 208 + len = sysfs_emit(buf, "%s\n", mc_dev->driver_override); 209 + device_unlock(dev); 210 + return len; 236 211 } 237 212 static DEVICE_ATTR_RW(driver_override); 238 213 ··· 347 314 .name = "fsl-mc", 348 315 .match = fsl_mc_bus_match, 349 316 .uevent = fsl_mc_bus_uevent, 317 + .probe = fsl_mc_probe, 318 + .remove = fsl_mc_remove, 319 + .shutdown = fsl_mc_shutdown, 350 320 .dma_configure = fsl_mc_dma_configure, 351 321 .dma_cleanup = fsl_mc_dma_cleanup, 352 322 .dev_groups = fsl_mc_dev_groups, ··· 470 434 return NULL; 471 435 } 472 436 473 - static int fsl_mc_driver_probe(struct device *dev) 474 - { 475 - struct fsl_mc_driver *mc_drv; 476 - struct fsl_mc_device *mc_dev = to_fsl_mc_device(dev); 477 - int error; 478 - 479 - mc_drv = 
to_fsl_mc_driver(dev->driver); 480 - 481 - error = mc_drv->probe(mc_dev); 482 - if (error < 0) { 483 - if (error != -EPROBE_DEFER) 484 - dev_err(dev, "%s failed: %d\n", __func__, error); 485 - return error; 486 - } 487 - 488 - return 0; 489 - } 490 - 491 - static int fsl_mc_driver_remove(struct device *dev) 492 - { 493 - struct fsl_mc_driver *mc_drv = to_fsl_mc_driver(dev->driver); 494 - struct fsl_mc_device *mc_dev = to_fsl_mc_device(dev); 495 - 496 - mc_drv->remove(mc_dev); 497 - 498 - return 0; 499 - } 500 - 501 - static void fsl_mc_driver_shutdown(struct device *dev) 502 - { 503 - struct fsl_mc_driver *mc_drv = to_fsl_mc_driver(dev->driver); 504 - struct fsl_mc_device *mc_dev = to_fsl_mc_device(dev); 505 - 506 - mc_drv->shutdown(mc_dev); 507 - } 508 - 509 437 /* 510 438 * __fsl_mc_driver_register - registers a child device driver with the 511 439 * MC bus ··· 485 485 486 486 mc_driver->driver.owner = owner; 487 487 mc_driver->driver.bus = &fsl_mc_bus_type; 488 - 489 - if (mc_driver->probe) 490 - mc_driver->driver.probe = fsl_mc_driver_probe; 491 - 492 - if (mc_driver->remove) 493 - mc_driver->driver.remove = fsl_mc_driver_remove; 494 - 495 - if (mc_driver->shutdown) 496 - mc_driver->driver.shutdown = fsl_mc_driver_shutdown; 497 488 498 489 error = driver_register(&mc_driver->driver); 499 490 if (error < 0) { ··· 896 905 return 0; 897 906 898 907 error_cleanup_dev: 899 - kfree(mc_dev->regions); 900 - if (mc_bus) 901 - kfree(mc_bus); 902 - else 903 - kfree(mc_dev); 908 + put_device(&mc_dev->dev); 904 909 905 910 return error; 906 911 }
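The fsl-mc-bus.c diff moves probe/remove/shutdown from per-driver conditional wiring at registration time to bus-level callbacks that tolerate a missing method. A minimal sketch of that dispatch pattern with toy types (none of these names are the kernel's):

```c
#include <stddef.h>

struct toy_device;

/* Driver methods are optional, as in struct fsl_mc_driver. */
struct toy_driver {
	int (*probe)(struct toy_device *);
	void (*remove)(struct toy_device *);
};

struct toy_device {
	struct toy_driver *driver;
	int bound;
};

/*
 * Bus-level probe: check for a NULL method here, instead of deciding
 * at driver-registration time whether to install a wrapper at all.
 */
static int toy_bus_probe(struct toy_device *dev)
{
	if (dev->driver && dev->driver->probe)
		return dev->driver->probe(dev);
	return 0;
}

static void toy_bus_remove(struct toy_device *dev)
{
	if (dev->driver && dev->driver->remove)
		dev->driver->remove(dev);
}

static int mark_bound(struct toy_device *dev)
{
	dev->bound = 1;
	return 0;
}
```

The payoff is one code path for all drivers: a driver with no probe simply binds successfully, and the three `if (mc_driver->...)` branches in __fsl_mc_driver_register() disappear.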
+2 -11
drivers/bus/omap-ocp2scp.c
··· 17 17 #define OCP2SCP_TIMING 0x18 18 18 #define SYNC2_MASK 0xf 19 19 20 - static int ocp2scp_remove_devices(struct device *dev, void *c) 21 - { 22 - struct platform_device *pdev = to_platform_device(dev); 23 - 24 - platform_device_unregister(pdev); 25 - 26 - return 0; 27 - } 28 - 29 20 static int omap_ocp2scp_probe(struct platform_device *pdev) 30 21 { 31 22 int ret; ··· 70 79 pm_runtime_disable(&pdev->dev); 71 80 72 81 err0: 73 - device_for_each_child(&pdev->dev, NULL, ocp2scp_remove_devices); 82 + of_platform_depopulate(&pdev->dev); 74 83 75 84 return ret; 76 85 } ··· 78 87 static void omap_ocp2scp_remove(struct platform_device *pdev) 79 88 { 80 89 pm_runtime_disable(&pdev->dev); 81 - device_for_each_child(&pdev->dev, NULL, ocp2scp_remove_devices); 90 + of_platform_depopulate(&pdev->dev); 82 91 } 83 92 84 93 #ifdef CONFIG_OF
+2 -5
drivers/bus/qcom-ebi2.c
··· 292 292 static int qcom_ebi2_probe(struct platform_device *pdev) 293 293 { 294 294 struct device_node *np = pdev->dev.of_node; 295 - struct device_node *child; 296 295 struct device *dev = &pdev->dev; 297 296 struct resource *res; 298 297 void __iomem *ebi2_base; ··· 347 348 writel(val, ebi2_base); 348 349 349 350 /* Walk over the child nodes and see what chipselects we use */ 350 - for_each_available_child_of_node(np, child) { 351 + for_each_available_child_of_node_scoped(np, child) { 351 352 u32 csindex; 352 353 353 354 /* Figure out the chipselect */ 354 355 ret = of_property_read_u32(child, "reg", &csindex); 355 - if (ret) { 356 - of_node_put(child); 356 + if (ret) 357 357 return ret; 358 - } 359 358 360 359 if (csindex > 5) { 361 360 dev_err(dev,
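The qcom-ebi2 change switches to for_each_available_child_of_node_scoped(), which drops the explicit of_node_put() on the early-return path by tying the node reference to block scope. A sketch of the underlying cleanup-attribute idiom (GCC/Clang extension); the refcounted type here is a stand-in, not the kernel's struct device_node:

```c
/* Tracks outstanding references so a leak is observable. */
static int live_refs;

struct toy_node { int id; };

static struct toy_node *toy_node_get(struct toy_node *n)
{
	live_refs++;
	return n;
}

/* Cleanup handler: called automatically when the variable leaves scope. */
static void toy_node_put(struct toy_node **np)
{
	if (*np)
		live_refs--;
}

#define scoped_ref __attribute__((cleanup(toy_node_put)))

/*
 * Early return from inside the loop: the cleanup handler still drops
 * the reference, which is the point of the _scoped iterator variants.
 */
static int find_node(struct toy_node *nodes, int count, int want)
{
	for (int i = 0; i < count; i++) {
		scoped_ref struct toy_node *child = toy_node_get(&nodes[i]);

		if (child->id == want)
			return i;	/* no explicit put needed */
	}
	return -1;
}
```

This is why the diff can delete the `of_node_put(child); return ret;` pair: the reference is released by scope exit, not by hand.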
+6 -20
drivers/char/hw_random/optee-rng.c
··· 208 208 return (ver->impl_id == TEE_IMPL_ID_OPTEE); 209 209 } 210 210 211 - static int optee_rng_probe(struct device *dev) 211 + static int optee_rng_probe(struct tee_client_device *rng_device) 212 212 { 213 - struct tee_client_device *rng_device = to_tee_client_device(dev); 213 + struct device *dev = &rng_device->dev; 214 214 int ret = 0, err = -ENODEV; 215 215 struct tee_ioctl_open_session_arg sess_arg; 216 216 ··· 258 258 return err; 259 259 } 260 260 261 - static int optee_rng_remove(struct device *dev) 261 + static void optee_rng_remove(struct tee_client_device *tee_dev) 262 262 { 263 263 tee_client_close_session(pvt_data.ctx, pvt_data.session_id); 264 264 tee_client_close_context(pvt_data.ctx); 265 - 266 - return 0; 267 265 } 268 266 269 267 static const struct tee_client_device_id optee_rng_id_table[] = { ··· 273 275 MODULE_DEVICE_TABLE(tee, optee_rng_id_table); 274 276 275 277 static struct tee_client_driver optee_rng_driver = { 278 + .probe = optee_rng_probe, 279 + .remove = optee_rng_remove, 276 280 .id_table = optee_rng_id_table, 277 281 .driver = { 278 282 .name = DRIVER_NAME, 279 - .bus = &tee_bus_type, 280 - .probe = optee_rng_probe, 281 - .remove = optee_rng_remove, 282 283 }, 283 284 }; 284 285 285 - static int __init optee_rng_mod_init(void) 286 - { 287 - return driver_register(&optee_rng_driver.driver); 288 - } 289 - 290 - static void __exit optee_rng_mod_exit(void) 291 - { 292 - driver_unregister(&optee_rng_driver.driver); 293 - } 294 - 295 - module_init(optee_rng_mod_init); 296 - module_exit(optee_rng_mod_exit); 286 + module_tee_client_driver(optee_rng_driver); 297 287 298 288 MODULE_LICENSE("GPL v2"); 299 289 MODULE_AUTHOR("Sumit Garg <sumit.garg@linaro.org>");
+23 -12
drivers/char/tpm/tpm_ftpm_tee.c
···
 }
 
 /**
- * ftpm_tee_probe() - initialize the fTPM
+ * ftpm_tee_probe_generic() - initialize the fTPM
  * @dev: the device description.
  *
  * Return:
  *	On success, 0. On failure, -errno.
  */
-static int ftpm_tee_probe(struct device *dev)
+static int ftpm_tee_probe_generic(struct device *dev)
 {
 	int rc;
 	struct tpm_chip *chip;
···
 	return rc;
 }
 
+static int ftpm_tee_probe(struct tee_client_device *tcdev)
+{
+	struct device *dev = &tcdev->dev;
+
+	return ftpm_tee_probe_generic(dev);
+}
+
 static int ftpm_plat_tee_probe(struct platform_device *pdev)
 {
 	struct device *dev = &pdev->dev;
 
-	return ftpm_tee_probe(dev);
+	return ftpm_tee_probe_generic(dev);
 }
 
 /**
- * ftpm_tee_remove() - remove the TPM device
+ * ftpm_tee_remove_generic() - remove the TPM device
  * @dev: the device description.
  *
  * Return:
  *	0 always.
  */
-static int ftpm_tee_remove(struct device *dev)
+static void ftpm_tee_remove_generic(struct device *dev)
 {
 	struct ftpm_tee_private *pvt_data = dev_get_drvdata(dev);
 
···
 	tee_client_close_context(pvt_data->ctx);
 
 	/* memory allocated with devm_kzalloc() is freed automatically */
+}
 
-	return 0;
+static void ftpm_tee_remove(struct tee_client_device *tcdev)
+{
+	struct device *dev = &tcdev->dev;
+
+	ftpm_tee_remove_generic(dev);
 }
 
 static void ftpm_plat_tee_remove(struct platform_device *pdev)
 {
 	struct device *dev = &pdev->dev;
 
-	ftpm_tee_remove(dev);
+	ftpm_tee_remove_generic(dev);
 }
 
 /**
···
 MODULE_DEVICE_TABLE(tee, optee_ftpm_id_table);
 
 static struct tee_client_driver ftpm_tee_driver = {
+	.probe = ftpm_tee_probe,
+	.remove = ftpm_tee_remove,
 	.id_table = optee_ftpm_id_table,
 	.driver = {
 		.name = "optee-ftpm",
-		.bus = &tee_bus_type,
-		.probe = ftpm_tee_probe,
-		.remove = ftpm_tee_remove,
 	},
 };
···
 	if (rc)
 		return rc;
 
-	rc = driver_register(&ftpm_tee_driver.driver);
+	rc = tee_client_driver_register(&ftpm_tee_driver);
 	if (rc) {
 		platform_driver_unregister(&ftpm_tee_plat_driver);
 		return rc;
···
 static void __exit ftpm_mod_exit(void)
 {
 	platform_driver_unregister(&ftpm_tee_plat_driver);
-	driver_unregister(&ftpm_tee_driver.driver);
+	tee_client_driver_unregister(&ftpm_tee_driver);
 }
 
 module_init(ftpm_mod_init);
+1 -1
drivers/clk/qcom/common.c
···
 
 	base = devm_platform_ioremap_resource(pdev, index);
 	if (IS_ERR(base))
-		return -ENOMEM;
+		return PTR_ERR(base);
 
 	regmap = devm_regmap_init_mmio(&pdev->dev, base, desc->config);
 	if (IS_ERR(regmap))
+1 -1
drivers/cpuidle/cpuidle-zynq.c
···
  * #1 wait-for-interrupt
  * #2 wait-for-interrupt and RAM self refresh
  *
- * Maintainer: Michal Simek <michal.simek@xilinx.com>
+ * Maintainer: Michal Simek <michal.simek@amd.com>
  */
 
 #include <linux/init.h>
+39 -9
drivers/firmware/arm_ffa/driver.c
···
 }
 
 #define PARTITION_INFO_GET_RETURN_COUNT_ONLY	BIT(0)
+#define FFA_SUPPORTS_GET_COUNT_ONLY(version)	((version) > FFA_VERSION_1_0)
+#define FFA_PART_INFO_HAS_SIZE_IN_RESP(version)	((version) > FFA_VERSION_1_0)
+#define FFA_PART_INFO_HAS_UUID_IN_RESP(version)	((version) > FFA_VERSION_1_0)
+#define FFA_PART_INFO_HAS_EXEC_STATE_IN_RESP(version)	\
+	((version) > FFA_VERSION_1_0)
 
 /* buffer must be sizeof(struct ffa_partition_info) * num_partitions */
 static int
···
 	int idx, count, flags = 0, sz, buf_sz;
 	ffa_value_t partition_info;
 
-	if (drv_info->version > FFA_VERSION_1_0 &&
+	if (FFA_SUPPORTS_GET_COUNT_ONLY(drv_info->version) &&
 	    (!buffer || !num_partitions)) /* Just get the count for now */
 		flags = PARTITION_INFO_GET_RETURN_COUNT_ONLY;
 
···
 
 	count = partition_info.a2;
 
-	if (drv_info->version > FFA_VERSION_1_0) {
+	if (FFA_PART_INFO_HAS_SIZE_IN_RESP(drv_info->version)) {
 		buf_sz = sz = partition_info.a3;
 		if (sz > sizeof(*buffer))
 			buf_sz = sizeof(*buffer);
 	} else {
-		/* FFA_VERSION_1_0 lacks size in the response */
 		buf_sz = sz = 8;
 	}
 
···
 	}
 }
 
+/*
+ * Map logical ID index to the u16 index within the packed ID list.
+ *
+ * For native responses (FF-A width == kernel word size), IDs are
+ * tightly packed: idx -> idx.
+ *
+ * For 32-bit responses on a 64-bit kernel, each 64-bit register
+ * contributes 4 x u16 values but only the lower 2 are defined; the
+ * upper 2 are garbage. This mapping skips those upper halves:
+ *	0,1,2,3,4,5,... -> 0,1,4,5,8,9,...
+ */
+static int list_idx_to_u16_idx(int idx, bool is_native_resp)
+{
+	return is_native_resp ? idx : idx + 2 * (idx >> 1);
+}
+
 static void ffa_notification_info_get(void)
 {
-	int idx, list, max_ids, lists_cnt, ids_processed, ids_count[MAX_IDS_64];
-	bool is_64b_resp;
+	int ids_processed, ids_count[MAX_IDS_64];
+	int idx, list, max_ids, lists_cnt;
+	bool is_64b_resp, is_native_resp;
 	ffa_value_t ret;
 	u64 id_list;
 
···
 	}
 
 	is_64b_resp = (ret.a0 == FFA_FN64_SUCCESS);
+	is_native_resp = (ret.a0 == FFA_FN_NATIVE(SUCCESS));
 
 	ids_processed = 0;
 	lists_cnt = FIELD_GET(NOTIFICATION_INFO_GET_ID_COUNT, ret.a2);
···
 
 	/* Process IDs */
 	for (list = 0; list < lists_cnt; list++) {
+		int u16_idx;
 		u16 vcpu_id, part_id, *packed_id_list = (u16 *)&ret.a3;
 
 		if (ids_processed >= max_ids - 1)
 			break;
 
-		part_id = packed_id_list[ids_processed++];
+		u16_idx = list_idx_to_u16_idx(ids_processed,
+					      is_native_resp);
+		part_id = packed_id_list[u16_idx];
+		ids_processed++;
 
 		if (ids_count[list] == 1) { /* Global Notification */
 			__do_sched_recv_cb(part_id, 0, false);
···
 			if (ids_processed >= max_ids - 1)
 				break;
 
-			vcpu_id = packed_id_list[ids_processed++];
+			u16_idx = list_idx_to_u16_idx(ids_processed,
+						      is_native_resp);
+			vcpu_id = packed_id_list[u16_idx];
+			ids_processed++;
 
 			__do_sched_recv_cb(part_id, vcpu_id, true);
 		}
···
 	struct ffa_device *ffa_dev;
 	struct ffa_partition_info *pbuf, *tpbuf;
 
-	if (drv_info->version == FFA_VERSION_1_0) {
+	if (!FFA_PART_INFO_HAS_UUID_IN_RESP(drv_info->version)) {
 		ret = bus_register_notifier(&ffa_bus_type, &ffa_bus_nb);
 		if (ret)
 			pr_err("Failed to register FF-A bus notifiers\n");
···
 			continue;
 		}
 
-		if (drv_info->version > FFA_VERSION_1_0 &&
+		if (FFA_PART_INFO_HAS_EXEC_STATE_IN_RESP(drv_info->version) &&
 		    !(tpbuf->properties & FFA_PARTITION_AARCH64_EXEC))
 			ffa_mode_32bit_set(ffa_dev);
 
···
 
 	pr_err("failed to setup partitions\n");
 	ffa_notifications_cleanup();
+	ffa_rxtx_unmap(drv_info->vm_id);
 free_pages:
 	if (drv_info->tx_buffer)
 		free_pages_exact(drv_info->tx_buffer, rxtx_bufsz);
+3 -8
drivers/firmware/arm_scmi/base.c
···
 {
 	int id, ret;
 	u8 *prot_imp;
-	u32 version;
 	char name[SCMI_SHORT_NAME_MAX_SIZE];
 	struct device *dev = ph->dev;
 	struct scmi_revision_info *rev = scmi_revision_area_get(ph);
 
-	ret = ph->xops->version_get(ph, &version);
-	if (ret)
-		return ret;
-
-	rev->major_ver = PROTOCOL_REV_MAJOR(version);
-	rev->minor_ver = PROTOCOL_REV_MINOR(version);
-	ph->set_priv(ph, rev, version);
+	rev->major_ver = PROTOCOL_REV_MAJOR(ph->version);
+	rev->minor_ver = PROTOCOL_REV_MINOR(ph->version);
+	ph->set_priv(ph, rev);
 
 	ret = scmi_base_attributes_get(ph);
 	if (ret)
+8 -16
drivers/firmware/arm_scmi/clock.c
···
 };
 
 struct clock_info {
-	u32 version;
 	int num_clocks;
 	int max_async_req;
 	bool notify_rate_changed_cmd;
···
 }
 
 static int scmi_clock_attributes_get(const struct scmi_protocol_handle *ph,
-				     u32 clk_id, struct clock_info *cinfo,
-				     u32 version)
+				     u32 clk_id, struct clock_info *cinfo)
 {
 	int ret;
 	u32 attributes;
···
 		attributes = le32_to_cpu(attr->attributes);
 		strscpy(clk->name, attr->name, SCMI_SHORT_NAME_MAX_SIZE);
 		/* clock_enable_latency field is present only since SCMI v3.1 */
-		if (PROTOCOL_REV_MAJOR(version) >= 0x2)
+		if (PROTOCOL_REV_MAJOR(ph->version) >= 0x2)
 			latency = le32_to_cpu(attr->clock_enable_latency);
 		clk->enable_latency = latency ? : U32_MAX;
 	}
···
 	 * If supported overwrite short name with the extended one;
 	 * on error just carry on and use already provided short name.
 	 */
-	if (!ret && PROTOCOL_REV_MAJOR(version) >= 0x2) {
+	if (!ret && PROTOCOL_REV_MAJOR(ph->version) >= 0x2) {
 		if (SUPPORTS_EXTENDED_NAMES(attributes))
 			ph->hops->extended_name_get(ph, CLOCK_NAME_GET, clk_id,
 						    NULL, clk->name,
···
 	if (cinfo->notify_rate_change_requested_cmd &&
 	    SUPPORTS_RATE_CHANGE_REQUESTED_NOTIF(attributes))
 		clk->rate_change_requested_notifications = true;
-	if (PROTOCOL_REV_MAJOR(version) >= 0x3) {
+	if (PROTOCOL_REV_MAJOR(ph->version) >= 0x3) {
 		if (SUPPORTS_PARENT_CLOCK(attributes))
 			scmi_clock_possible_parents(ph, clk_id, clk);
 		if (SUPPORTS_GET_PERMISSIONS(attributes))
···
 
 static int scmi_clock_protocol_init(const struct scmi_protocol_handle *ph)
 {
-	u32 version;
 	int clkid, ret;
 	struct clock_info *cinfo;
 
-	ret = ph->xops->version_get(ph, &version);
-	if (ret)
-		return ret;
-
 	dev_dbg(ph->dev, "Clock Version %d.%d\n",
-		PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version));
+		PROTOCOL_REV_MAJOR(ph->version), PROTOCOL_REV_MINOR(ph->version));
 
 	cinfo = devm_kzalloc(ph->dev, sizeof(*cinfo), GFP_KERNEL);
 	if (!cinfo)
···
 	for (clkid = 0; clkid < cinfo->num_clocks; clkid++) {
 		struct scmi_clock_info *clk = cinfo->clk + clkid;
 
-		ret = scmi_clock_attributes_get(ph, clkid, cinfo, version);
+		ret = scmi_clock_attributes_get(ph, clkid, cinfo);
 		if (!ret)
 			scmi_clock_describe_rates_get(ph, clkid, clk);
 	}
 
-	if (PROTOCOL_REV_MAJOR(version) >= 0x3) {
+	if (PROTOCOL_REV_MAJOR(ph->version) >= 0x3) {
 		cinfo->clock_config_set = scmi_clock_config_set_v2;
 		cinfo->clock_config_get = scmi_clock_config_get_v2;
 	} else {
···
 		cinfo->clock_config_set = scmi_clock_config_set;
 		cinfo->clock_config_get = scmi_clock_config_get;
 	}
 
-	cinfo->version = version;
-	return ph->set_priv(ph, cinfo, version);
+	return ph->set_priv(ph, cinfo);
 }
 
 static const struct scmi_protocol scmi_clock = {
+78 -20
drivers/firmware/arm_scmi/driver.c
···
  *
  * @ph: A reference to the protocol handle.
  * @priv: The private data to set.
- * @version: The detected protocol version for the core to register.
  *
  * Return: 0 on Success
  */
 static int scmi_set_protocol_priv(const struct scmi_protocol_handle *ph,
-				  void *priv, u32 version)
+				  void *priv)
 {
 	struct scmi_protocol_instance *pi = ph_to_pi(ph);
 
 	pi->priv = priv;
-	pi->version = version;
 
 	return 0;
 }
···
 }
 
 static const struct scmi_xfer_ops xfer_ops = {
-	.version_get = version_get,
 	.xfer_get_init = xfer_get_init,
 	.reset_rx_to_maxsz = reset_rx_to_maxsz,
 	.do_xfer = do_xfer,
···
 }
 
 /**
+ * scmi_protocol_version_initialize - Initialize protocol version
+ * @dev: A device reference.
+ * @pi: A reference to the protocol instance being initialized
+ *
+ * At first retrieve the newest protocol version supported by the platform for
+ * this specific protocol.
+ *
+ * Negotiation is attempted only when the platform advertised a protocol
+ * version newer than the most recent version known to this agent, since
+ * backward compatibility is NOT assured in general between versions.
+ *
+ * Failing to negotiate a fallback version, or to query the supported version
+ * at all, will result in an attempt to use the newest version known to this
+ * agent, even though compatibility is NOT assured.
+ *
+ * Versions are defined as:
+ *
+ * pi->version: the version supported by the platform as returned by the query.
+ * pi->proto->supported_version: the newest version supported by this agent
+ * for this protocol.
+ * pi->negotiated_version: the version successfully negotiated with the platform.
+ * ph->version: the final version effectively chosen for this session.
+ */
+static void scmi_protocol_version_initialize(struct device *dev,
+					     struct scmi_protocol_instance *pi)
+{
+	struct scmi_protocol_handle *ph = &pi->ph;
+	int ret;
+
+	/*
+	 * Query and store platform supported protocol version: this is usually
+	 * the newest version the platform can support.
+	 */
+	ret = version_get(ph, &pi->version);
+	if (ret) {
+		dev_warn(dev,
+			 "Failed to query supported version for protocol 0x%X.\n",
+			 pi->proto->id);
+		goto best_effort;
+	}
+
+	/* Need to negotiate at all ? */
+	if (pi->version <= pi->proto->supported_version) {
+		ph->version = pi->version;
+		return;
+	}
+
+	/* Attempt negotiation */
+	ret = scmi_protocol_version_negotiate(ph);
+	if (!ret) {
+		ph->version = pi->negotiated_version;
+		dev_info(dev,
+			 "Protocol 0x%X successfully negotiated version 0x%X\n",
+			 pi->proto->id, ph->version);
+		return;
+	}
+
+	dev_warn(dev,
+		 "Detected UNSUPPORTED higher version 0x%X for protocol 0x%X.\n",
+		 pi->version, pi->proto->id);
+
+best_effort:
+	/* Fallback to use newest version known to this agent */
+	ph->version = pi->proto->supported_version;
+	dev_warn(dev,
+		 "Trying version 0x%X. Backward compatibility is NOT assured.\n",
+		 ph->version);
+}
+
+/**
  * scmi_alloc_init_protocol_instance - Allocate and initialize a protocol
  * instance descriptor.
  * @info: The reference to the related SCMI instance.
···
 	pi->ph.set_priv = scmi_set_protocol_priv;
 	pi->ph.get_priv = scmi_get_protocol_priv;
 	refcount_set(&pi->users, 1);
+
+	/*
+	 * Initialize effectively used protocol version performing any
+	 * possibly needed negotiations.
+	 */
+	scmi_protocol_version_initialize(handle->dev, pi);
+
 	/* proto->init is assured NON NULL by scmi_protocol_register */
 	ret = pi->proto->instance_init(&pi->ph);
 	if (ret)
···
 
 	devres_close_group(handle->dev, pi->gid);
 	dev_dbg(handle->dev, "Initialized protocol: 0x%X\n", pi->proto->id);
-
-	if (pi->version > proto->supported_version) {
-		ret = scmi_protocol_version_negotiate(&pi->ph);
-		if (!ret) {
-			dev_info(handle->dev,
-				 "Protocol 0x%X successfully negotiated version 0x%X\n",
-				 proto->id, pi->negotiated_version);
-		} else {
-			dev_warn(handle->dev,
-				 "Detected UNSUPPORTED higher version 0x%X for protocol 0x%X.\n",
-				 pi->version, pi->proto->id);
-			dev_warn(handle->dev,
-				 "Trying version 0x%X. Backward compatibility is NOT assured.\n",
-				 pi->proto->supported_version);
-		}
-	}
 
 	return pi;
+20 -39
drivers/firmware/arm_scmi/perf.c
···
 /* Updated only after ALL the mandatory features for that version are merged */
 #define SCMI_PROTOCOL_SUPPORTED_VERSION		0x40000
 
-#define MAX_OPPS		32
+#define MAX_OPPS		64
 
 enum scmi_performance_protocol_cmd {
 	PERF_DOMAIN_ATTRIBUTES = 0x3,
···
 })
 
 struct scmi_perf_info {
-	u32 version;
 	u16 num_domains;
 	enum scmi_power_scale power_scale;
 	u64 stats_addr;
···
 
 	if (POWER_SCALE_IN_MILLIWATT(flags))
 		pi->power_scale = SCMI_POWER_MILLIWATTS;
-	if (PROTOCOL_REV_MAJOR(pi->version) >= 0x3)
+	if (PROTOCOL_REV_MAJOR(ph->version) >= 0x3)
 		if (POWER_SCALE_IN_MICROWATT(flags))
 			pi->power_scale = SCMI_POWER_MICROWATTS;
 
···
 static int
 scmi_perf_domain_attributes_get(const struct scmi_protocol_handle *ph,
 				struct perf_dom_info *dom_info,
-				bool notify_lim_cmd, bool notify_lvl_cmd,
-				u32 version)
+				bool notify_lim_cmd, bool notify_lvl_cmd)
 {
 	int ret;
 	u32 flags;
···
 		dom_info->perf_level_notify =
 			SUPPORTS_PERF_LEVEL_NOTIFY(flags);
 		dom_info->perf_fastchannels = SUPPORTS_PERF_FASTCHANNELS(flags);
-		if (PROTOCOL_REV_MAJOR(version) >= 0x4)
+		if (PROTOCOL_REV_MAJOR(ph->version) >= 0x4)
 			dom_info->level_indexing_mode =
 				SUPPORTS_LEVEL_INDEXING(flags);
 		dom_info->rate_limit_us = le32_to_cpu(attr->rate_limit_us) &
···
 	 * If supported overwrite short name with the extended one;
 	 * on error just carry on and use already provided short name.
 	 */
-	if (!ret && PROTOCOL_REV_MAJOR(version) >= 0x3 &&
+	if (!ret && PROTOCOL_REV_MAJOR(ph->version) >= 0x3 &&
 	    SUPPORTS_EXTENDED_NAMES(flags))
 		ph->hops->extended_name_get(ph, PERF_DOMAIN_NAME_GET,
 					    dom_info->id, NULL, dom_info->info.name,
···
 	return t1->perf - t2->perf;
 }
 
-struct scmi_perf_ipriv {
-	u32 version;
-	struct perf_dom_info *perf_dom;
-};
-
 static void iter_perf_levels_prepare_message(void *message,
 					     unsigned int desc_index,
 					     const void *priv)
 {
 	struct scmi_msg_perf_describe_levels *msg = message;
-	const struct scmi_perf_ipriv *p = priv;
+	const struct perf_dom_info *perf_dom = priv;
 
-	msg->domain = cpu_to_le32(p->perf_dom->id);
+	msg->domain = cpu_to_le32(perf_dom->id);
 	/* Set the number of OPPs to be skipped/already read */
 	msg->level_index = cpu_to_le32(desc_index);
 }
···
 {
 	int ret;
 	struct scmi_opp *opp;
-	struct scmi_perf_ipriv *p = priv;
+	struct perf_dom_info *perf_dom = priv;
 
-	opp = &p->perf_dom->opp[p->perf_dom->opp_count];
-	if (PROTOCOL_REV_MAJOR(p->version) <= 0x3)
-		ret = process_response_opp(ph->dev, p->perf_dom, opp,
+	opp = &perf_dom->opp[perf_dom->opp_count];
+	if (PROTOCOL_REV_MAJOR(ph->version) <= 0x3)
+		ret = process_response_opp(ph->dev, perf_dom, opp,
 					   st->loop_idx, response);
 	else
-		ret = process_response_opp_v4(ph->dev, p->perf_dom, opp,
+		ret = process_response_opp_v4(ph->dev, perf_dom, opp,
 					      st->loop_idx, response);
 
 	/* Skip BAD duplicates received from firmware */
 	if (ret)
 		return ret == -EBUSY ? 0 : ret;
 
-	p->perf_dom->opp_count++;
+	perf_dom->opp_count++;
 
 	dev_dbg(ph->dev, "Level %d Power %d Latency %dus Ifreq %d Index %d\n",
 		opp->perf, opp->power, opp->trans_latency_us,
···
 
 static int
 scmi_perf_describe_levels_get(const struct scmi_protocol_handle *ph,
-			      struct perf_dom_info *perf_dom, u32 version)
+			      struct perf_dom_info *perf_dom)
 {
 	int ret;
 	void *iter;
···
 		.update_state = iter_perf_levels_update_state,
 		.process_response = iter_perf_levels_process_response,
 	};
-	struct scmi_perf_ipriv ppriv = {
-		.version = version,
-		.perf_dom = perf_dom,
-	};
 
 	iter = ph->hops->iter_response_init(ph, &ops, MAX_OPPS,
 					    PERF_DESCRIBE_LEVELS,
 					    sizeof(struct scmi_msg_perf_describe_levels),
-					    &ppriv);
+					    perf_dom);
 	if (IS_ERR(iter))
 		return PTR_ERR(iter);
 
···
 static int scmi_perf_limits_set(const struct scmi_protocol_handle *ph,
 				u32 domain, u32 max_perf, u32 min_perf)
 {
-	struct scmi_perf_info *pi = ph->get_priv(ph);
 	struct perf_dom_info *dom;
 
 	dom = scmi_perf_domain_lookup(ph, domain);
···
 	if (!dom->set_limits)
 		return -EOPNOTSUPP;
 
-	if (PROTOCOL_REV_MAJOR(pi->version) >= 0x3 && !max_perf && !min_perf)
+	if (PROTOCOL_REV_MAJOR(ph->version) >= 0x3 && !max_perf && !min_perf)
 		return -EINVAL;
 
 	if (dom->level_indexing_mode) {
···
 static int scmi_perf_protocol_init(const struct scmi_protocol_handle *ph)
 {
 	int domain, ret;
-	u32 version;
 	struct scmi_perf_info *pinfo;
 
-	ret = ph->xops->version_get(ph, &version);
-	if (ret)
-		return ret;
-
 	dev_dbg(ph->dev, "Performance Version %d.%d\n",
-		PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version));
+		PROTOCOL_REV_MAJOR(ph->version), PROTOCOL_REV_MINOR(ph->version));
 
 	pinfo = devm_kzalloc(ph->dev, sizeof(*pinfo), GFP_KERNEL);
 	if (!pinfo)
 		return -ENOMEM;
-
-	pinfo->version = version;
 
 	ret = scmi_perf_attributes_get(ph, pinfo);
 	if (ret)
···
 
 		dom->id = domain;
 		scmi_perf_domain_attributes_get(ph, dom, pinfo->notify_lim_cmd,
-						pinfo->notify_lvl_cmd, version);
-		scmi_perf_describe_levels_get(ph, dom, version);
+						pinfo->notify_lvl_cmd);
+		scmi_perf_describe_levels_get(ph, dom);
 
 		if (dom->perf_fastchannels)
 			scmi_perf_domain_init_fc(ph, dom);
···
 	if (ret)
 		return ret;
 
-	return ph->set_priv(ph, pinfo, version);
+	return ph->set_priv(ph, pinfo);
 }
 
 static const struct scmi_protocol scmi_perf = {
+50 -70
drivers/firmware/arm_scmi/pinctrl.c
···
 };
 
 struct scmi_pinctrl_info {
-	u32 version;
 	int nr_groups;
 	int nr_functions;
 	int nr_pins;
···
 }
 
 static int scmi_pinctrl_get_group_info(const struct scmi_protocol_handle *ph,
-				       u32 selector,
-				       struct scmi_group_info *group)
+				       u32 selector)
 {
+	struct scmi_pinctrl_info *pi = ph->get_priv(ph);
+	struct scmi_group_info *group;
 	int ret;
+
+	if (selector >= pi->nr_groups)
+		return -EINVAL;
+
+	group = &pi->groups[selector];
+	if (group->present)
+		return 0;
 
 	ret = scmi_pinctrl_attributes(ph, GROUP_TYPE, selector, group->name,
 				      &group->nr_pins);
···
 				      u32 selector, const char **name)
 {
 	struct scmi_pinctrl_info *pi = ph->get_priv(ph);
+	int ret;
 
 	if (!name)
 		return -EINVAL;
 
-	if (selector >= pi->nr_groups || pi->nr_groups == 0)
-		return -EINVAL;
-
-	if (!pi->groups[selector].present) {
-		int ret;
-
-		ret = scmi_pinctrl_get_group_info(ph, selector,
-						  &pi->groups[selector]);
-		if (ret)
-			return ret;
-	}
+	ret = scmi_pinctrl_get_group_info(ph, selector);
+	if (ret)
+		return ret;
 
 	*name = pi->groups[selector].name;
 
···
 				  u32 *nr_pins)
 {
 	struct scmi_pinctrl_info *pi = ph->get_priv(ph);
+	int ret;
 
 	if (!pins || !nr_pins)
 		return -EINVAL;
 
-	if (selector >= pi->nr_groups || pi->nr_groups == 0)
-		return -EINVAL;
-
-	if (!pi->groups[selector].present) {
-		int ret;
-
-		ret = scmi_pinctrl_get_group_info(ph, selector,
-						  &pi->groups[selector]);
-		if (ret)
-			return ret;
-	}
+	ret = scmi_pinctrl_get_group_info(ph, selector);
+	if (ret)
+		return ret;
 
 	*pins = pi->groups[selector].group_pins;
 	*nr_pins = pi->groups[selector].nr_pins;
···
 }
 
 static int scmi_pinctrl_get_function_info(const struct scmi_protocol_handle *ph,
-					  u32 selector,
-					  struct scmi_function_info *func)
+					  u32 selector)
 {
+	struct scmi_pinctrl_info *pi = ph->get_priv(ph);
+	struct scmi_function_info *func;
 	int ret;
+
+	if (selector >= pi->nr_functions)
+		return -EINVAL;
+
+	func = &pi->functions[selector];
+	if (func->present)
+		return 0;
 
 	ret = scmi_pinctrl_attributes(ph, FUNCTION_TYPE, selector, func->name,
 				      &func->nr_groups);
···
 					 u32 selector, const char **name)
 {
 	struct scmi_pinctrl_info *pi = ph->get_priv(ph);
+	int ret;
 
 	if (!name)
 		return -EINVAL;
 
-	if (selector >= pi->nr_functions || pi->nr_functions == 0)
-		return -EINVAL;
-
-	if (!pi->functions[selector].present) {
-		int ret;
-
-		ret = scmi_pinctrl_get_function_info(ph, selector,
-						     &pi->functions[selector]);
-		if (ret)
-			return ret;
-	}
+	ret = scmi_pinctrl_get_function_info(ph, selector);
+	if (ret)
+		return ret;
 
 	*name = pi->functions[selector].name;
 	return 0;
···
 					   const u32 **groups)
 {
 	struct scmi_pinctrl_info *pi = ph->get_priv(ph);
+	int ret;
 
 	if (!groups || !nr_groups)
 		return -EINVAL;
 
-	if (selector >= pi->nr_functions || pi->nr_functions == 0)
-		return -EINVAL;
-
-	if (!pi->functions[selector].present) {
-		int ret;
-
-		ret = scmi_pinctrl_get_function_info(ph, selector,
-						     &pi->functions[selector]);
-		if (ret)
-			return ret;
-	}
+	ret = scmi_pinctrl_get_function_info(ph, selector);
+	if (ret)
+		return ret;
 
 	*groups = pi->functions[selector].groups;
 	*nr_groups = pi->functions[selector].nr_groups;
···
 }
 
 static int scmi_pinctrl_get_pin_info(const struct scmi_protocol_handle *ph,
-				     u32 selector, struct scmi_pin_info *pin)
+				     u32 selector)
 {
+	struct scmi_pinctrl_info *pi = ph->get_priv(ph);
+	struct scmi_pin_info *pin;
 	int ret;
 
-	if (!pin)
+	if (selector >= pi->nr_pins)
 		return -EINVAL;
+
+	pin = &pi->pins[selector];
+	if (pin->present)
+		return 0;
 
 	ret = scmi_pinctrl_attributes(ph, PIN_TYPE, selector, pin->name, NULL);
 	if (ret)
···
 				    u32 selector, const char **name)
 {
 	struct scmi_pinctrl_info *pi = ph->get_priv(ph);
+	int ret;
 
 	if (!name)
 		return -EINVAL;
 
-	if (selector >= pi->nr_pins)
-		return -EINVAL;
-
-	if (!pi->pins[selector].present) {
-		int ret;
-
-		ret = scmi_pinctrl_get_pin_info(ph, selector, &pi->pins[selector]);
-		if (ret)
-			return ret;
-	}
+	ret = scmi_pinctrl_get_pin_info(ph, selector);
+	if (ret)
+		return ret;
 
 	*name = pi->pins[selector].name;
 
···
 static int scmi_pinctrl_protocol_init(const struct scmi_protocol_handle *ph)
 {
 	int ret;
-	u32 version;
 	struct scmi_pinctrl_info *pinfo;
 
-	ret = ph->xops->version_get(ph, &version);
-	if (ret)
-		return ret;
-
 	dev_dbg(ph->dev, "Pinctrl Version %d.%d\n",
-		PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version));
+		PROTOCOL_REV_MAJOR(ph->version), PROTOCOL_REV_MINOR(ph->version));
 
 	pinfo = devm_kzalloc(ph->dev, sizeof(*pinfo), GFP_KERNEL);
 	if (!pinfo)
···
 	if (!pinfo->functions)
 		return -ENOMEM;
 
-	pinfo->version = version;
-
-	return ph->set_priv(ph, pinfo, version);
+	return ph->set_priv(ph, pinfo);
 }
 
 static int scmi_pinctrl_protocol_deinit(const struct scmi_protocol_handle *ph)
+5 -13
drivers/firmware/arm_scmi/power.c
···
 };
 
 struct scmi_power_info {
-	u32 version;
 	bool notify_state_change_cmd;
 	int num_domains;
 	u64 stats_addr;
···
 static int
 scmi_power_domain_attributes_get(const struct scmi_protocol_handle *ph,
 				 u32 domain, struct power_dom_info *dom_info,
-				 u32 version, bool notify_state_change_cmd)
+				 bool notify_state_change_cmd)
 {
 	int ret;
 	u32 flags;
···
 	 * If supported overwrite short name with the extended one;
 	 * on error just carry on and use already provided short name.
 	 */
-	if (!ret && PROTOCOL_REV_MAJOR(version) >= 0x3 &&
+	if (!ret && PROTOCOL_REV_MAJOR(ph->version) >= 0x3 &&
 	    SUPPORTS_EXTENDED_NAMES(flags)) {
 		ph->hops->extended_name_get(ph, POWER_DOMAIN_NAME_GET,
 					    domain, NULL, dom_info->name,
···
 static int scmi_power_protocol_init(const struct scmi_protocol_handle *ph)
 {
 	int domain, ret;
-	u32 version;
 	struct scmi_power_info *pinfo;
 
-	ret = ph->xops->version_get(ph, &version);
-	if (ret)
-		return ret;
-
 	dev_dbg(ph->dev, "Power Version %d.%d\n",
-		PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version));
+		PROTOCOL_REV_MAJOR(ph->version), PROTOCOL_REV_MINOR(ph->version));
 
 	pinfo = devm_kzalloc(ph->dev, sizeof(*pinfo), GFP_KERNEL);
 	if (!pinfo)
···
 	for (domain = 0; domain < pinfo->num_domains; domain++) {
 		struct power_dom_info *dom = pinfo->dom_info + domain;
 
-		scmi_power_domain_attributes_get(ph, domain, dom, version,
+		scmi_power_domain_attributes_get(ph, domain, dom,
 						 pinfo->notify_state_change_cmd);
 	}
 
-	pinfo->version = version;
-
-	return ph->set_priv(ph, pinfo, version);
+	return ph->set_priv(ph, pinfo);
 }
 
 static const struct scmi_protocol scmi_power = {
+7 -14
drivers/firmware/arm_scmi/powercap.c
···
 };
 
 struct powercap_info {
-	u32 version;
 	int num_domains;
 	bool notify_cap_cmd;
 	bool notify_measurements_cmd;
···
 	}
 
 	/* Save the last explicitly set non-zero powercap value */
-	if (PROTOCOL_REV_MAJOR(pi->version) >= 0x2 && !ret && power_cap)
+	if (PROTOCOL_REV_MAJOR(ph->version) >= 0x2 && !ret && power_cap)
 		pi->states[domain_id].last_pcap = power_cap;
 
 	return ret;
···
 		return -EINVAL;
 
 	/* Just log the last set request if acting on a disabled domain */
-	if (PROTOCOL_REV_MAJOR(pi->version) >= 0x2 &&
+	if (PROTOCOL_REV_MAJOR(ph->version) >= 0x2 &&
 	    !pi->states[domain_id].enabled) {
 		pi->states[domain_id].last_pcap = power_cap;
 		return 0;
···
 	u32 power_cap;
 	struct powercap_info *pi = ph->get_priv(ph);
 
-	if (PROTOCOL_REV_MAJOR(pi->version) < 0x2)
+	if (PROTOCOL_REV_MAJOR(ph->version) < 0x2)
 		return -EINVAL;
 
 	if (enable == pi->states[domain_id].enabled)
···
 	struct powercap_info *pi = ph->get_priv(ph);
 
 	*enable = true;
-	if (PROTOCOL_REV_MAJOR(pi->version) < 0x2)
+	if (PROTOCOL_REV_MAJOR(ph->version) < 0x2)
 		return 0;
 
 	/*
···
 scmi_powercap_protocol_init(const struct scmi_protocol_handle *ph)
 {
 	int domain, ret;
-	u32 version;
 	struct powercap_info *pinfo;
 
-	ret = ph->xops->version_get(ph, &version);
-	if (ret)
-		return ret;
-
 	dev_dbg(ph->dev, "Powercap Version %d.%d\n",
-		PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version));
+		PROTOCOL_REV_MAJOR(ph->version), PROTOCOL_REV_MINOR(ph->version));
 
 	pinfo = devm_kzalloc(ph->dev, sizeof(*pinfo), GFP_KERNEL);
 	if (!pinfo)
···
 						    &pinfo->powercaps[domain].fc_info);
 
 		/* Grab initial state when disable is supported. */
-		if (PROTOCOL_REV_MAJOR(version) >= 0x2) {
+		if (PROTOCOL_REV_MAJOR(ph->version) >= 0x2) {
 			ret = __scmi_powercap_cap_get(ph,
 						      &pinfo->powercaps[domain],
 						      &pinfo->states[domain].last_pcap);
···
 		}
 	}
 
-	pinfo->version = version;
-	return ph->set_priv(ph, pinfo, version);
+	return ph->set_priv(ph, pinfo);
 }
 
 static const struct scmi_protocol scmi_powercap = {
+5 -4
drivers/firmware/arm_scmi/protocols.h
··· 159 159 * struct scmi_protocol_handle - Reference to an initialized protocol instance 160 160 * 161 161 * @dev: A reference to the associated SCMI instance device (handle->dev). 162 + * @version: The protocol version currently effectively in use by this 163 + * initialized instance of the protocol as determined at the end of 164 + * any possibly needed negotiations performed by the core. 162 165 * @xops: A reference to a struct holding refs to the core xfer operations that 163 166 * can be used by the protocol implementation to generate SCMI messages. 164 167 * @set_priv: A method to set protocol private data for this instance. ··· 180 177 */ 181 178 struct scmi_protocol_handle { 182 179 struct device *dev; 180 + unsigned int version; 183 181 const struct scmi_xfer_ops *xops; 184 182 const struct scmi_proto_helpers_ops *hops; 185 - int (*set_priv)(const struct scmi_protocol_handle *ph, void *priv, 186 - u32 version); 183 + int (*set_priv)(const struct scmi_protocol_handle *ph, void *priv); 187 184 void *(*get_priv)(const struct scmi_protocol_handle *ph); 188 185 }; 189 186 ··· 290 287 291 288 /** 292 289 * struct scmi_xfer_ops - References to the core SCMI xfer operations. 293 - * @version_get: Get this version protocol. 294 290 * @xfer_get_init: Initialize one struct xfer if any xfer slot is free. 295 291 * @reset_rx_to_maxsz: Reset rx size to max transport size. 296 292 * @do_xfer: Do the SCMI transfer. ··· 302 300 * another protocol. 303 301 */ 304 302 struct scmi_xfer_ops { 305 - int (*version_get)(const struct scmi_protocol_handle *ph, u32 *version); 306 303 int (*xfer_get_init)(const struct scmi_protocol_handle *ph, u8 msg_id, 307 304 size_t tx_size, size_t rx_size, 308 305 struct scmi_xfer **p);
+38 -30
drivers/firmware/arm_scmi/reset.c
··· 65 65 }; 66 66 67 67 struct scmi_reset_info { 68 - u32 version; 69 68 int num_domains; 70 69 bool notify_reset_cmd; 71 70 struct reset_dom_info *dom_info; ··· 97 98 return ret; 98 99 } 99 100 101 + static struct reset_dom_info * 102 + scmi_reset_domain_lookup(const struct scmi_protocol_handle *ph, u32 domain) 103 + { 104 + struct scmi_reset_info *pi = ph->get_priv(ph); 105 + 106 + if (domain >= pi->num_domains) 107 + return ERR_PTR(-EINVAL); 108 + 109 + return pi->dom_info + domain; 110 + } 111 + 100 112 static int 101 113 scmi_reset_domain_attributes_get(const struct scmi_protocol_handle *ph, 102 - struct scmi_reset_info *pinfo, 103 - u32 domain, u32 version) 114 + struct scmi_reset_info *pinfo, u32 domain) 104 115 { 105 116 int ret; 106 117 u32 attributes; ··· 146 137 * If supported overwrite short name with the extended one; 147 138 * on error just carry on and use already provided short name. 148 139 */ 149 - if (!ret && PROTOCOL_REV_MAJOR(version) >= 0x3 && 140 + if (!ret && PROTOCOL_REV_MAJOR(ph->version) >= 0x3 && 150 141 SUPPORTS_EXTENDED_NAMES(attributes)) 151 142 ph->hops->extended_name_get(ph, RESET_DOMAIN_NAME_GET, domain, 152 143 NULL, dom_info->name, ··· 165 156 static const char * 166 157 scmi_reset_name_get(const struct scmi_protocol_handle *ph, u32 domain) 167 158 { 168 - struct scmi_reset_info *pi = ph->get_priv(ph); 159 + struct reset_dom_info *dom_info; 169 160 170 - struct reset_dom_info *dom = pi->dom_info + domain; 161 + dom_info = scmi_reset_domain_lookup(ph, domain); 162 + if (IS_ERR(dom_info)) 163 + return "unknown"; 171 164 172 - return dom->name; 165 + return dom_info->name; 173 166 } 174 167 175 168 static int scmi_reset_latency_get(const struct scmi_protocol_handle *ph, 176 169 u32 domain) 177 170 { 178 - struct scmi_reset_info *pi = ph->get_priv(ph); 179 - struct reset_dom_info *dom = pi->dom_info + domain; 171 + struct reset_dom_info *dom_info; 180 172 181 - return dom->latency_us; 173 + dom_info = scmi_reset_domain_lookup(ph, 
domain); 174 + if (IS_ERR(dom_info)) 175 + return PTR_ERR(dom_info); 176 + 177 + return dom_info->latency_us; 182 178 } 183 179 184 180 static int scmi_domain_reset(const struct scmi_protocol_handle *ph, u32 domain, ··· 192 178 int ret; 193 179 struct scmi_xfer *t; 194 180 struct scmi_msg_reset_domain_reset *dom; 195 - struct scmi_reset_info *pi = ph->get_priv(ph); 196 - struct reset_dom_info *rdom; 181 + struct reset_dom_info *dom_info; 197 182 198 - if (domain >= pi->num_domains) 199 - return -EINVAL; 183 + dom_info = scmi_reset_domain_lookup(ph, domain); 184 + if (IS_ERR(dom_info)) 185 + return PTR_ERR(dom_info); 200 186 201 - rdom = pi->dom_info + domain; 202 - if (rdom->async_reset && flags & AUTONOMOUS_RESET) 187 + if (dom_info->async_reset && flags & AUTONOMOUS_RESET) 203 188 flags |= ASYNCHRONOUS_RESET; 204 189 205 190 ret = ph->xops->xfer_get_init(ph, RESET, sizeof(*dom), 0, &t); ··· 251 238 static bool scmi_reset_notify_supported(const struct scmi_protocol_handle *ph, 252 239 u8 evt_id, u32 src_id) 253 240 { 254 - struct reset_dom_info *dom; 255 - struct scmi_reset_info *pi = ph->get_priv(ph); 241 + struct reset_dom_info *dom_info; 256 242 257 - if (evt_id != SCMI_EVENT_RESET_ISSUED || src_id >= pi->num_domains) 243 + if (evt_id != SCMI_EVENT_RESET_ISSUED) 258 244 return false; 259 245 260 - dom = pi->dom_info + src_id; 246 + dom_info = scmi_reset_domain_lookup(ph, src_id); 247 + if (IS_ERR(dom_info)) 248 + return false; 261 249 262 - return dom->reset_notify; 250 + return dom_info->reset_notify; 263 251 } 264 252 265 253 static int scmi_reset_notify(const struct scmi_protocol_handle *ph, ··· 354 340 static int scmi_reset_protocol_init(const struct scmi_protocol_handle *ph) 355 341 { 356 342 int domain, ret; 357 - u32 version; 358 343 struct scmi_reset_info *pinfo; 359 344 360 - ret = ph->xops->version_get(ph, &version); 361 - if (ret) 362 - return ret; 363 - 364 345 dev_dbg(ph->dev, "Reset Version %d.%d\n", 365 - PROTOCOL_REV_MAJOR(version), 
PROTOCOL_REV_MINOR(version)); 346 + PROTOCOL_REV_MAJOR(ph->version), PROTOCOL_REV_MINOR(ph->version)); 366 347 367 348 pinfo = devm_kzalloc(ph->dev, sizeof(*pinfo), GFP_KERNEL); 368 349 if (!pinfo) ··· 373 364 return -ENOMEM; 374 365 375 366 for (domain = 0; domain < pinfo->num_domains; domain++) 376 - scmi_reset_domain_attributes_get(ph, pinfo, domain, version); 367 + scmi_reset_domain_attributes_get(ph, pinfo, domain); 377 368 378 - pinfo->version = version; 379 - return ph->set_priv(ph, pinfo, version); 369 + return ph->set_priv(ph, pinfo); 380 370 } 381 371 382 372 static const struct scmi_protocol scmi_reset = {
+7 -15
drivers/firmware/arm_scmi/sensors.c
··· 214 214 }; 215 215 216 216 struct sensors_info { 217 - u32 version; 218 217 bool notify_trip_point_cmd; 219 218 bool notify_continuos_update_cmd; 220 219 int num_sensors; ··· 523 524 } 524 525 525 526 static int scmi_sensor_axis_description(const struct scmi_protocol_handle *ph, 526 - struct scmi_sensor_info *s, 527 - u32 version) 527 + struct scmi_sensor_info *s) 528 528 { 529 529 int ret; 530 530 void *iter; ··· 553 555 if (ret) 554 556 return ret; 555 557 556 - if (PROTOCOL_REV_MAJOR(version) >= 0x3 && 558 + if (PROTOCOL_REV_MAJOR(ph->version) >= 0x3 && 557 559 apriv.any_axes_support_extended_names) 558 560 ret = scmi_sensor_axis_extended_names_get(ph, s); 559 561 ··· 619 621 s->type = SENSOR_TYPE(attrh); 620 622 /* Use pre-allocated pool wherever possible */ 621 623 s->intervals.desc = s->intervals.prealloc_pool; 622 - if (si->version == SCMIv2_SENSOR_PROTOCOL) { 624 + if (ph->version == SCMIv2_SENSOR_PROTOCOL) { 623 625 s->intervals.segmented = false; 624 626 s->intervals.count = 1; 625 627 /* ··· 657 659 * one; on error just carry on and use already provided 658 660 * short name. 
659 661 */ 660 - if (PROTOCOL_REV_MAJOR(si->version) >= 0x3 && 662 + if (PROTOCOL_REV_MAJOR(ph->version) >= 0x3 && 661 663 SUPPORTS_EXTENDED_NAMES(attrl)) 662 664 ph->hops->extended_name_get(ph, SENSOR_NAME_GET, s->id, 663 665 NULL, s->name, SCMI_MAX_STR_SIZE); ··· 681 683 } 682 684 683 685 if (s->num_axis > 0) 684 - ret = scmi_sensor_axis_description(ph, s, si->version); 686 + ret = scmi_sensor_axis_description(ph, s); 685 687 686 688 st->priv = ((u8 *)sdesc + dsize); 687 689 ··· 1146 1148 1147 1149 static int scmi_sensors_protocol_init(const struct scmi_protocol_handle *ph) 1148 1150 { 1149 - u32 version; 1150 1151 int ret; 1151 1152 struct sensors_info *sinfo; 1152 1153 1153 - ret = ph->xops->version_get(ph, &version); 1154 - if (ret) 1155 - return ret; 1156 - 1157 1154 dev_dbg(ph->dev, "Sensor Version %d.%d\n", 1158 - PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version)); 1155 + PROTOCOL_REV_MAJOR(ph->version), PROTOCOL_REV_MINOR(ph->version)); 1159 1156 1160 1157 sinfo = devm_kzalloc(ph->dev, sizeof(*sinfo), GFP_KERNEL); 1161 1158 if (!sinfo) 1162 1159 return -ENOMEM; 1163 - sinfo->version = version; 1164 1160 1165 1161 ret = scmi_sensor_attributes_get(ph, sinfo); 1166 1162 if (ret) ··· 1168 1176 if (ret) 1169 1177 return ret; 1170 1178 1171 - return ph->set_priv(ph, sinfo, version); 1179 + return ph->set_priv(ph, sinfo); 1172 1180 } 1173 1181 1174 1182 static const struct scmi_protocol scmi_sensors = {
+3 -2
drivers/firmware/arm_scmi/shmem.c
··· 196 196 struct resource *res, 197 197 struct scmi_shmem_io_ops **ops) 198 198 { 199 - struct device_node *shmem __free(device_node); 200 199 const char *desc = tx ? "Tx" : "Rx"; 201 200 int ret, idx = tx ? 0 : 1; 202 201 struct device *cdev = cinfo->dev; ··· 204 205 void __iomem *addr; 205 206 u32 reg_io_width; 206 207 207 - shmem = of_parse_phandle(cdev->of_node, "shmem", idx); 208 + struct device_node *shmem __free(device_node) = of_parse_phandle(cdev->of_node, 209 + "shmem", idx); 210 + 208 211 if (!shmem) 209 212 return IOMEM_ERR_PTR(-ENODEV); 210 213
+3 -11
drivers/firmware/arm_scmi/system.c
··· 34 34 }; 35 35 36 36 struct scmi_system_info { 37 - u32 version; 38 37 bool graceful_timeout_supported; 39 38 bool power_state_notify_cmd; 40 39 }; ··· 140 141 141 142 static int scmi_system_protocol_init(const struct scmi_protocol_handle *ph) 142 143 { 143 - int ret; 144 - u32 version; 145 144 struct scmi_system_info *pinfo; 146 145 147 - ret = ph->xops->version_get(ph, &version); 148 - if (ret) 149 - return ret; 150 - 151 146 dev_dbg(ph->dev, "System Power Version %d.%d\n", 152 - PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version)); 147 + PROTOCOL_REV_MAJOR(ph->version), PROTOCOL_REV_MINOR(ph->version)); 153 148 154 149 pinfo = devm_kzalloc(ph->dev, sizeof(*pinfo), GFP_KERNEL); 155 150 if (!pinfo) 156 151 return -ENOMEM; 157 152 158 - pinfo->version = version; 159 - if (PROTOCOL_REV_MAJOR(pinfo->version) >= 0x2) 153 + if (PROTOCOL_REV_MAJOR(ph->version) >= 0x2) 160 154 pinfo->graceful_timeout_supported = true; 161 155 162 156 if (!ph->hops->protocol_msg_check(ph, SYSTEM_POWER_STATE_NOTIFY, NULL)) 163 157 pinfo->power_state_notify_cmd = true; 164 158 165 - return ph->set_priv(ph, pinfo, version); 159 + return ph->set_priv(ph, pinfo); 166 160 } 167 161 168 162 static const struct scmi_protocol scmi_system = {
+10 -22
drivers/firmware/arm_scmi/transports/optee.c
··· 529 529 DEFINE_SCMI_TRANSPORT_DRIVER(scmi_optee, scmi_optee_driver, scmi_optee_desc, 530 530 scmi_of_match, core); 531 531 532 - static int scmi_optee_service_probe(struct device *dev) 532 + static int scmi_optee_service_probe(struct tee_client_device *scmi_pta) 533 533 { 534 + struct device *dev = &scmi_pta->dev; 534 535 struct scmi_optee_agent *agent; 535 536 struct tee_context *tee_ctx; 536 537 int ret; ··· 579 578 return ret; 580 579 } 581 580 582 - static int scmi_optee_service_remove(struct device *dev) 581 + static void scmi_optee_service_remove(struct tee_client_device *scmi_pta) 583 582 { 584 583 struct scmi_optee_agent *agent = scmi_optee_private; 585 584 586 585 if (!scmi_optee_private) 587 - return -EINVAL; 586 + return; 588 587 589 588 platform_driver_unregister(&scmi_optee_driver); 590 589 591 590 if (!list_empty(&scmi_optee_private->channel_list)) 592 - return -EBUSY; 591 + return; 593 592 594 593 /* Ensure cleared reference is visible before resources are released */ 595 594 smp_store_mb(scmi_optee_private, NULL); 596 595 597 596 tee_client_close_context(agent->tee_ctx); 598 - 599 - return 0; 600 597 } 601 598 602 599 static const struct tee_client_device_id scmi_optee_service_id[] = { ··· 608 609 MODULE_DEVICE_TABLE(tee, scmi_optee_service_id); 609 610 610 611 static struct tee_client_driver scmi_optee_service_driver = { 611 - .id_table = scmi_optee_service_id, 612 - .driver = { 612 + .probe = scmi_optee_service_probe, 613 + .remove = scmi_optee_service_remove, 614 + .id_table = scmi_optee_service_id, 615 + .driver = { 613 616 .name = "scmi-optee", 614 - .bus = &tee_bus_type, 615 - .probe = scmi_optee_service_probe, 616 - .remove = scmi_optee_service_remove, 617 617 }, 618 618 }; 619 619 620 - static int __init scmi_transport_optee_init(void) 621 - { 622 - return driver_register(&scmi_optee_service_driver.driver); 623 - } 624 - module_init(scmi_transport_optee_init); 625 - 626 - static void __exit scmi_transport_optee_exit(void) 627 - { 628 - 
driver_unregister(&scmi_optee_service_driver.driver); 629 - } 630 - module_exit(scmi_transport_optee_exit); 620 + module_tee_client_driver(scmi_optee_service_driver); 631 621 632 622 MODULE_AUTHOR("Etienne Carriere <etienne.carriere@foss.st.com>"); 633 623 MODULE_DESCRIPTION("SCMI OPTEE Transport driver");
+2 -8
drivers/firmware/arm_scmi/vendors/imx/imx-sm-bbm.c
··· 48 48 #define SCMI_IMX_BBM_EVENT_RTC_MASK GENMASK(31, 24) 49 49 50 50 struct scmi_imx_bbm_info { 51 - u32 version; 52 51 int nr_rtc; 53 52 int nr_gpr; 54 53 }; ··· 344 345 345 346 static int scmi_imx_bbm_protocol_init(const struct scmi_protocol_handle *ph) 346 347 { 347 - u32 version; 348 348 int ret; 349 349 struct scmi_imx_bbm_info *binfo; 350 350 351 - ret = ph->xops->version_get(ph, &version); 352 - if (ret) 353 - return ret; 354 - 355 351 dev_info(ph->dev, "NXP SM BBM Version %d.%d\n", 356 - PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version)); 352 + PROTOCOL_REV_MAJOR(ph->version), PROTOCOL_REV_MINOR(ph->version)); 357 353 358 354 binfo = devm_kzalloc(ph->dev, sizeof(*binfo), GFP_KERNEL); 359 355 if (!binfo) ··· 358 364 if (ret) 359 365 return ret; 360 366 361 - return ph->set_priv(ph, binfo, version); 367 + return ph->set_priv(ph, binfo); 362 368 } 363 369 364 370 static const struct scmi_protocol scmi_imx_bbm = {
+2 -7
drivers/firmware/arm_scmi/vendors/imx/imx-sm-cpu.c
··· 233 233 static int scmi_imx_cpu_protocol_init(const struct scmi_protocol_handle *ph) 234 234 { 235 235 struct scmi_imx_cpu_info *info; 236 - u32 version; 237 236 int ret, i; 238 237 239 - ret = ph->xops->version_get(ph, &version); 240 - if (ret) 241 - return ret; 242 - 243 238 dev_info(ph->dev, "NXP SM CPU Protocol Version %d.%d\n", 244 - PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version)); 239 + PROTOCOL_REV_MAJOR(ph->version), PROTOCOL_REV_MINOR(ph->version)); 245 240 246 241 info = devm_kzalloc(ph->dev, sizeof(*info), GFP_KERNEL); 247 242 if (!info) ··· 252 257 return ret; 253 258 } 254 259 255 - return ph->set_priv(ph, info, version); 260 + return ph->set_priv(ph, info); 256 261 } 257 262 258 263 static const struct scmi_protocol scmi_imx_cpu = {
+2 -7
drivers/firmware/arm_scmi/vendors/imx/imx-sm-lmm.c
··· 226 226 static int scmi_imx_lmm_protocol_init(const struct scmi_protocol_handle *ph) 227 227 { 228 228 struct scmi_imx_lmm_priv *info; 229 - u32 version; 230 229 int ret; 231 230 232 - ret = ph->xops->version_get(ph, &version); 233 - if (ret) 234 - return ret; 235 - 236 231 dev_info(ph->dev, "NXP SM LMM Version %d.%d\n", 237 - PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version)); 232 + PROTOCOL_REV_MAJOR(ph->version), PROTOCOL_REV_MINOR(ph->version)); 238 233 239 234 info = devm_kzalloc(ph->dev, sizeof(*info), GFP_KERNEL); 240 235 if (!info) ··· 239 244 if (ret) 240 245 return ret; 241 246 242 - return ph->set_priv(ph, info, version); 247 + return ph->set_priv(ph, info); 243 248 } 244 249 245 250 static const struct scmi_protocol scmi_imx_lmm = {
+85 -8
drivers/firmware/arm_scmi/vendors/imx/imx-sm-misc.c
··· 28 28 SCMI_IMX_MISC_DISCOVER_BUILD_INFO = 0x6, 29 29 SCMI_IMX_MISC_CTRL_NOTIFY = 0x8, 30 30 SCMI_IMX_MISC_CFG_INFO_GET = 0xC, 31 + SCMI_IMX_MISC_SYSLOG_GET = 0xD, 31 32 SCMI_IMX_MISC_BOARD_INFO = 0xE, 32 33 }; 33 34 34 35 struct scmi_imx_misc_info { 35 - u32 version; 36 36 u32 nr_dev_ctrl; 37 37 u32 nr_brd_ctrl; 38 38 u32 nr_reason; ··· 87 87 __le32 msel; 88 88 #define MISC_MAX_CFGNAME 16 89 89 u8 cfgname[MISC_MAX_CFGNAME]; 90 + }; 91 + 92 + struct scmi_imx_misc_syslog_in { 93 + __le32 flags; 94 + __le32 index; 95 + }; 96 + 97 + #define REMAINING(x) le32_get_bits((x), GENMASK(31, 20)) 98 + #define RETURNED(x) le32_get_bits((x), GENMASK(11, 0)) 99 + 100 + struct scmi_imx_misc_syslog_out { 101 + __le32 numlogflags; 102 + __le32 syslog[]; 90 103 }; 91 104 92 105 static int scmi_imx_misc_attributes_get(const struct scmi_protocol_handle *ph, ··· 384 371 return ret; 385 372 } 386 373 374 + struct scmi_imx_misc_syslog_ipriv { 375 + u32 *array; 376 + u16 *size; 377 + }; 378 + 379 + static void iter_misc_syslog_prepare_message(void *message, u32 desc_index, 380 + const void *priv) 381 + { 382 + struct scmi_imx_misc_syslog_in *msg = message; 383 + 384 + msg->flags = cpu_to_le32(0); 385 + msg->index = cpu_to_le32(desc_index); 386 + } 387 + 388 + static int iter_misc_syslog_update_state(struct scmi_iterator_state *st, 389 + const void *response, void *priv) 390 + { 391 + const struct scmi_imx_misc_syslog_out *r = response; 392 + struct scmi_imx_misc_syslog_ipriv *p = priv; 393 + 394 + st->num_returned = RETURNED(r->numlogflags); 395 + st->num_remaining = REMAINING(r->numlogflags); 396 + *p->size = st->num_returned + st->num_remaining; 397 + 398 + return 0; 399 + } 400 + 401 + static int 402 + iter_misc_syslog_process_response(const struct scmi_protocol_handle *ph, 403 + const void *response, 404 + struct scmi_iterator_state *st, void *priv) 405 + { 406 + const struct scmi_imx_misc_syslog_out *r = response; 407 + struct scmi_imx_misc_syslog_ipriv *p = priv; 408 + 409 + 
p->array[st->desc_index + st->loop_idx] = 410 + le32_to_cpu(r->syslog[st->loop_idx]); 411 + 412 + return 0; 413 + } 414 + 415 + static int scmi_imx_misc_syslog_get(const struct scmi_protocol_handle *ph, u16 *size, 416 + void *array) 417 + { 418 + struct scmi_iterator_ops ops = { 419 + .prepare_message = iter_misc_syslog_prepare_message, 420 + .update_state = iter_misc_syslog_update_state, 421 + .process_response = iter_misc_syslog_process_response, 422 + }; 423 + struct scmi_imx_misc_syslog_ipriv ipriv = { 424 + .array = array, 425 + .size = size, 426 + }; 427 + void *iter; 428 + 429 + if (!array || !size || !*size) 430 + return -EINVAL; 431 + 432 + iter = ph->hops->iter_response_init(ph, &ops, *size, SCMI_IMX_MISC_SYSLOG_GET, 433 + sizeof(struct scmi_imx_misc_syslog_in), 434 + &ipriv); 435 + if (IS_ERR(iter)) 436 + return PTR_ERR(iter); 437 + 438 + /* If firmware return NOT SUPPORTED, propagate value to caller */ 439 + return ph->hops->iter_response_run(iter); 440 + } 441 + 387 442 static const struct scmi_imx_misc_proto_ops scmi_imx_misc_proto_ops = { 388 443 .misc_ctrl_set = scmi_imx_misc_ctrl_set, 389 444 .misc_ctrl_get = scmi_imx_misc_ctrl_get, 390 445 .misc_ctrl_req_notify = scmi_imx_misc_ctrl_notify, 446 + .misc_syslog = scmi_imx_misc_syslog_get, 391 447 }; 392 448 393 449 static int scmi_imx_misc_protocol_init(const struct scmi_protocol_handle *ph) 394 450 { 395 451 struct scmi_imx_misc_info *minfo; 396 - u32 version; 397 452 int ret; 398 453 399 - ret = ph->xops->version_get(ph, &version); 400 - if (ret) 401 - return ret; 402 - 403 454 dev_info(ph->dev, "NXP SM MISC Version %d.%d\n", 404 - PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version)); 455 + PROTOCOL_REV_MAJOR(ph->version), PROTOCOL_REV_MINOR(ph->version)); 405 456 406 457 minfo = devm_kzalloc(ph->dev, sizeof(*minfo), GFP_KERNEL); 407 458 if (!minfo) ··· 487 410 if (ret && ret != -EOPNOTSUPP) 488 411 return ret; 489 412 490 - return ph->set_priv(ph, minfo, version); 413 + return 
ph->set_priv(ph, minfo); 491 414 } 492 415 493 416 static const struct scmi_protocol scmi_imx_misc = {
+3 -10
drivers/firmware/arm_scmi/voltage.c
··· 66 66 }; 67 67 68 68 struct voltage_info { 69 - unsigned int version; 70 69 unsigned int num_domains; 71 70 struct scmi_voltage_info *domains; 72 71 }; ··· 242 243 * If supported overwrite short name with the extended one; 243 244 * on error just carry on and use already provided short name. 244 245 */ 245 - if (PROTOCOL_REV_MAJOR(vinfo->version) >= 0x2) { 246 + if (PROTOCOL_REV_MAJOR(ph->version) >= 0x2) { 246 247 if (SUPPORTS_EXTENDED_NAMES(attributes)) 247 248 ph->hops->extended_name_get(ph, 248 249 VOLTAGE_DOMAIN_NAME_GET, ··· 404 405 static int scmi_voltage_protocol_init(const struct scmi_protocol_handle *ph) 405 406 { 406 407 int ret; 407 - u32 version; 408 408 struct voltage_info *vinfo; 409 409 410 - ret = ph->xops->version_get(ph, &version); 411 - if (ret) 412 - return ret; 413 - 414 410 dev_dbg(ph->dev, "Voltage Version %d.%d\n", 415 - PROTOCOL_REV_MAJOR(version), PROTOCOL_REV_MINOR(version)); 411 + PROTOCOL_REV_MAJOR(ph->version), PROTOCOL_REV_MINOR(ph->version)); 416 412 417 413 vinfo = devm_kzalloc(ph->dev, sizeof(*vinfo), GFP_KERNEL); 418 414 if (!vinfo) 419 415 return -ENOMEM; 420 - vinfo->version = version; 421 416 422 417 ret = scmi_protocol_attributes_get(ph, vinfo); 423 418 if (ret) ··· 430 437 dev_warn(ph->dev, "No Voltage domains found.\n"); 431 438 } 432 439 433 - return ph->set_priv(ph, vinfo, version); 440 + return ph->set_priv(ph, vinfo); 434 441 } 435 442 436 443 static const struct scmi_protocol scmi_voltage = {
+8 -22
drivers/firmware/broadcom/tee_bnxt_fw.c
··· 181 181 return (ver->impl_id == TEE_IMPL_ID_OPTEE); 182 182 } 183 183 184 - static int tee_bnxt_fw_probe(struct device *dev) 184 + static int tee_bnxt_fw_probe(struct tee_client_device *bnxt_device) 185 185 { 186 - struct tee_client_device *bnxt_device = to_tee_client_device(dev); 186 + struct device *dev = &bnxt_device->dev; 187 187 int ret, err = -ENODEV; 188 188 struct tee_ioctl_open_session_arg sess_arg; 189 189 struct tee_shm *fw_shm_pool; ··· 231 231 return err; 232 232 } 233 233 234 - static int tee_bnxt_fw_remove(struct device *dev) 234 + static void tee_bnxt_fw_remove(struct tee_client_device *bnxt_device) 235 235 { 236 236 tee_shm_free(pvt_data.fw_shm_pool); 237 237 tee_client_close_session(pvt_data.ctx, pvt_data.session_id); 238 238 tee_client_close_context(pvt_data.ctx); 239 239 pvt_data.ctx = NULL; 240 - 241 - return 0; 242 240 } 243 241 244 - static void tee_bnxt_fw_shutdown(struct device *dev) 242 + static void tee_bnxt_fw_shutdown(struct tee_client_device *bnxt_device) 245 243 { 246 244 tee_shm_free(pvt_data.fw_shm_pool); 247 245 tee_client_close_session(pvt_data.ctx, pvt_data.session_id); ··· 256 258 MODULE_DEVICE_TABLE(tee, tee_bnxt_fw_id_table); 257 259 258 260 static struct tee_client_driver tee_bnxt_fw_driver = { 261 + .probe = tee_bnxt_fw_probe, 262 + .remove = tee_bnxt_fw_remove, 263 + .shutdown = tee_bnxt_fw_shutdown, 259 264 .id_table = tee_bnxt_fw_id_table, 260 265 .driver = { 261 266 .name = KBUILD_MODNAME, 262 - .bus = &tee_bus_type, 263 - .probe = tee_bnxt_fw_probe, 264 - .remove = tee_bnxt_fw_remove, 265 - .shutdown = tee_bnxt_fw_shutdown, 266 267 }, 267 268 }; 268 269 269 - static int __init tee_bnxt_fw_mod_init(void) 270 - { 271 - return driver_register(&tee_bnxt_fw_driver.driver); 272 - } 273 - 274 - static void __exit tee_bnxt_fw_mod_exit(void) 275 - { 276 - driver_unregister(&tee_bnxt_fw_driver.driver); 277 - } 278 - 279 - module_init(tee_bnxt_fw_mod_init); 280 - module_exit(tee_bnxt_fw_mod_exit); 270 + 
module_tee_client_driver(tee_bnxt_fw_driver); 281 271 282 272 MODULE_AUTHOR("Vikas Gupta <vikas.gupta@broadcom.com>"); 283 273 MODULE_DESCRIPTION("Broadcom bnxt firmware manager");
+6 -19
drivers/firmware/efi/stmm/tee_stmm_efi.c
··· 520 520 efivars_generic_ops_register(); 521 521 } 522 522 523 - static int tee_stmm_efi_probe(struct device *dev) 523 + static int tee_stmm_efi_probe(struct tee_client_device *tee_dev) 524 524 { 525 + struct device *dev = &tee_dev->dev; 525 526 struct tee_ioctl_open_session_arg sess_arg; 526 527 efi_status_t ret; 527 528 int rc; ··· 572 571 return 0; 573 572 } 574 573 575 - static int tee_stmm_efi_remove(struct device *dev) 574 + static void tee_stmm_efi_remove(struct tee_client_device *dev) 576 575 { 577 576 tee_stmm_restore_efivars_generic_ops(); 578 - 579 - return 0; 580 577 } 581 578 582 579 MODULE_DEVICE_TABLE(tee, tee_stmm_efi_id_table); 583 580 584 581 static struct tee_client_driver tee_stmm_efi_driver = { 585 582 .id_table = tee_stmm_efi_id_table, 583 + .probe = tee_stmm_efi_probe, 584 + .remove = tee_stmm_efi_remove, 586 585 .driver = { 587 586 .name = "tee-stmm-efi", 588 - .bus = &tee_bus_type, 589 - .probe = tee_stmm_efi_probe, 590 - .remove = tee_stmm_efi_remove, 591 587 }, 592 588 }; 593 589 594 - static int __init tee_stmm_efi_mod_init(void) 595 - { 596 - return driver_register(&tee_stmm_efi_driver.driver); 597 - } 598 - 599 - static void __exit tee_stmm_efi_mod_exit(void) 600 - { 601 - driver_unregister(&tee_stmm_efi_driver.driver); 602 - } 603 - 604 - module_init(tee_stmm_efi_mod_init); 605 - module_exit(tee_stmm_efi_mod_exit); 590 + module_tee_client_driver(tee_stmm_efi_driver); 606 591 607 592 MODULE_LICENSE("GPL"); 608 593 MODULE_AUTHOR("Ilias Apalodimas <ilias.apalodimas@linaro.org>");
+36 -1
drivers/firmware/imx/sm-misc.c
··· 3 3 * Copyright 2024 NXP 4 4 */ 5 5 6 + #include <linux/debugfs.h> 7 + #include <linux/device/devres.h> 6 8 #include <linux/firmware/imx/sm.h> 7 9 #include <linux/module.h> 8 10 #include <linux/of.h> 9 11 #include <linux/platform_device.h> 10 12 #include <linux/scmi_protocol.h> 11 13 #include <linux/scmi_imx_protocol.h> 14 + #include <linux/seq_file.h> 15 + #include <linux/sizes.h> 12 16 13 17 static const struct scmi_imx_misc_proto_ops *imx_misc_ctrl_ops; 14 18 static struct scmi_protocol_handle *ph; ··· 48 44 return 0; 49 45 } 50 46 47 + static int syslog_show(struct seq_file *file, void *priv) 48 + { 49 + /* 4KB is large enough for syslog */ 50 + void *syslog __free(kfree) = kmalloc(SZ_4K, GFP_KERNEL); 51 + /* syslog API use num words, not num bytes */ 52 + u16 size = SZ_4K / 4; 53 + int ret; 54 + 55 + if (!ph) 56 + return -ENODEV; 57 + 58 + ret = imx_misc_ctrl_ops->misc_syslog(ph, &size, syslog); 59 + if (ret) 60 + return ret; 61 + 62 + seq_hex_dump(file, " ", DUMP_PREFIX_NONE, 16, sizeof(u32), syslog, size * 4, false); 63 + seq_putc(file, '\n'); 64 + 65 + return 0; 66 + } 67 + DEFINE_SHOW_ATTRIBUTE(syslog); 68 + 69 + static void scmi_imx_misc_put(void *p) 70 + { 71 + debugfs_remove((struct dentry *)p); 72 + } 73 + 51 74 static int scmi_imx_misc_ctrl_probe(struct scmi_device *sdev) 52 75 { 53 76 const struct scmi_handle *handle = sdev->handle; 54 77 struct device_node *np = sdev->dev.of_node; 78 + struct dentry *scmi_imx_dentry; 55 79 u32 src_id, flags; 56 80 int ret, i, num; 57 81 ··· 130 98 } 131 99 } 132 100 133 - return 0; 101 + scmi_imx_dentry = debugfs_create_dir("scmi_imx", NULL); 102 + debugfs_create_file("syslog", 0444, scmi_imx_dentry, &sdev->dev, &syslog_fops); 103 + 104 + return devm_add_action_or_reset(&sdev->dev, scmi_imx_misc_put, scmi_imx_dentry); 134 105 } 135 106 136 107 static const struct scmi_device_id scmi_id_table[] = {
+442 -65
drivers/firmware/qcom/qcom_scm.c
··· 27 27 #include <linux/of_reserved_mem.h> 28 28 #include <linux/platform_device.h> 29 29 #include <linux/reset-controller.h> 30 + #include <linux/remoteproc.h> 30 31 #include <linux/sizes.h> 31 32 #include <linux/types.h> 33 + 34 + #include <dt-bindings/interrupt-controller/arm-gic.h> 32 35 33 36 #include "qcom_scm.h" 34 37 #include "qcom_tzmem.h" 35 38 36 39 static u32 download_mode; 40 + 41 + #define GIC_SPI_BASE 32 42 + #define GIC_MAX_SPI 1019 // SPIs in GICv3 spec range from 32..1019 43 + #define GIC_ESPI_BASE 4096 44 + #define GIC_MAX_ESPI 5119 // ESPIs in GICv3 spec range from 4096..5119 37 45 38 46 struct qcom_scm { 39 47 struct device *dev; ··· 49 41 struct clk *iface_clk; 50 42 struct clk *bus_clk; 51 43 struct icc_path *path; 52 - struct completion waitq_comp; 44 + struct completion *waitq_comps; 53 45 struct reset_controller_dev reset; 54 46 55 47 /* control access to the interconnect path */ ··· 59 51 u64 dload_mode_addr; 60 52 61 53 struct qcom_tzmem_pool *mempool; 54 + unsigned int wq_cnt; 62 55 }; 63 56 64 57 struct qcom_scm_current_perm_info { ··· 120 111 QSEECOM_TZ_CMD_INFO_VERSION = 3, 121 112 }; 122 113 114 + #define RSCTABLE_BUFFER_NOT_SUFFICIENT 20 115 + 123 116 #define QSEECOM_MAX_APP_NAME_SIZE 64 124 117 #define SHMBRIDGE_RESULT_NOTSUPP 4 125 118 ··· 140 129 #define QCOM_DLOAD_FULLDUMP 1 141 130 #define QCOM_DLOAD_MINIDUMP 2 142 131 #define QCOM_DLOAD_BOTHDUMP 3 132 + 133 + #define QCOM_SCM_DEFAULT_WAITQ_COUNT 1 143 134 144 135 static const char * const qcom_scm_convention_names[] = { 145 136 [SMC_CONVENTION_UNKNOWN] = "unknown", ··· 572 559 } 573 560 574 561 /** 562 + * devm_qcom_scm_pas_context_alloc() - Allocate peripheral authentication service 563 + * context for a given peripheral 564 + * 565 + * PAS context is device-resource managed, so the caller does not need 566 + * to worry about freeing the context memory. 
567 + * 568 + * @dev: PAS firmware device 569 + * @pas_id: peripheral authentication service id 570 + * @mem_phys: Subsystem reserve memory start address 571 + * @mem_size: Subsystem reserve memory size 572 + * 573 + * Returns: The new PAS context, or ERR_PTR() on failure. 574 + */ 575 + struct qcom_scm_pas_context *devm_qcom_scm_pas_context_alloc(struct device *dev, 576 + u32 pas_id, 577 + phys_addr_t mem_phys, 578 + size_t mem_size) 579 + { 580 + struct qcom_scm_pas_context *ctx; 581 + 582 + ctx = devm_kzalloc(dev, sizeof(*ctx), GFP_KERNEL); 583 + if (!ctx) 584 + return ERR_PTR(-ENOMEM); 585 + 586 + ctx->dev = dev; 587 + ctx->pas_id = pas_id; 588 + ctx->mem_phys = mem_phys; 589 + ctx->mem_size = mem_size; 590 + 591 + return ctx; 592 + } 593 + EXPORT_SYMBOL_GPL(devm_qcom_scm_pas_context_alloc); 594 + 595 + static int __qcom_scm_pas_init_image(u32 pas_id, dma_addr_t mdata_phys, 596 + struct qcom_scm_res *res) 597 + { 598 + struct qcom_scm_desc desc = { 599 + .svc = QCOM_SCM_SVC_PIL, 600 + .cmd = QCOM_SCM_PIL_PAS_INIT_IMAGE, 601 + .arginfo = QCOM_SCM_ARGS(2, QCOM_SCM_VAL, QCOM_SCM_RW), 602 + .args[0] = pas_id, 603 + .owner = ARM_SMCCC_OWNER_SIP, 604 + }; 605 + int ret; 606 + 607 + ret = qcom_scm_clk_enable(); 608 + if (ret) 609 + return ret; 610 + 611 + ret = qcom_scm_bw_enable(); 612 + if (ret) 613 + goto disable_clk; 614 + 615 + desc.args[1] = mdata_phys; 616 + 617 + ret = qcom_scm_call(__scm->dev, &desc, res); 618 + qcom_scm_bw_disable(); 619 + 620 + disable_clk: 621 + qcom_scm_clk_disable(); 622 + 623 + return ret; 624 + } 625 + 626 + static int qcom_scm_pas_prep_and_init_image(struct qcom_scm_pas_context *ctx, 627 + const void *metadata, size_t size) 628 + { 629 + struct qcom_scm_res res; 630 + phys_addr_t mdata_phys; 631 + void *mdata_buf; 632 + int ret; 633 + 634 + mdata_buf = qcom_tzmem_alloc(__scm->mempool, size, GFP_KERNEL); 635 + if (!mdata_buf) 636 + return -ENOMEM; 637 + 638 + memcpy(mdata_buf, metadata, size); 639 + mdata_phys = 
qcom_tzmem_to_phys(mdata_buf); 640 + 641 + ret = __qcom_scm_pas_init_image(ctx->pas_id, mdata_phys, &res); 642 + if (ret < 0) 643 + qcom_tzmem_free(mdata_buf); 644 + else 645 + ctx->ptr = mdata_buf; 646 + 647 + return ret ? : res.result[0]; 648 + } 649 + 650 + /** 575 651 * qcom_scm_pas_init_image() - Initialize peripheral authentication service 576 652 * state machine for a given peripheral, using the 577 653 * metadata 578 - * @peripheral: peripheral id 654 + * @pas_id: peripheral authentication service id 579 655 * @metadata: pointer to memory containing ELF header, program header table 580 656 * and optional blob of data used for authenticating the metadata 581 657 * and the rest of the firmware 582 658 * @size: size of the metadata 583 - * @ctx: optional metadata context 659 + * @ctx: optional pas context 584 660 * 585 661 * Return: 0 on success. 586 662 * ··· 677 575 * track the metadata allocation, this needs to be released by invoking 678 576 * qcom_scm_pas_metadata_release() by the caller. 
679 577 */ 680 - int qcom_scm_pas_init_image(u32 peripheral, const void *metadata, size_t size, 681 - struct qcom_scm_pas_metadata *ctx) 578 + int qcom_scm_pas_init_image(u32 pas_id, const void *metadata, size_t size, 579 + struct qcom_scm_pas_context *ctx) 682 580 { 581 + struct qcom_scm_res res; 683 582 dma_addr_t mdata_phys; 684 583 void *mdata_buf; 685 584 int ret; 686 - struct qcom_scm_desc desc = { 687 - .svc = QCOM_SCM_SVC_PIL, 688 - .cmd = QCOM_SCM_PIL_PAS_INIT_IMAGE, 689 - .arginfo = QCOM_SCM_ARGS(2, QCOM_SCM_VAL, QCOM_SCM_RW), 690 - .args[0] = peripheral, 691 - .owner = ARM_SMCCC_OWNER_SIP, 692 - }; 693 - struct qcom_scm_res res; 585 + 586 + if (ctx && ctx->use_tzmem) 587 + return qcom_scm_pas_prep_and_init_image(ctx, metadata, size); 694 588 695 589 /* 696 590 * During the scm call memory protection will be enabled for the meta ··· 707 609 708 610 memcpy(mdata_buf, metadata, size); 709 611 710 - ret = qcom_scm_clk_enable(); 711 - if (ret) 712 - goto out; 713 - 714 - ret = qcom_scm_bw_enable(); 715 - if (ret) 716 - goto disable_clk; 717 - 718 - desc.args[1] = mdata_phys; 719 - 720 - ret = qcom_scm_call(__scm->dev, &desc, &res); 721 - qcom_scm_bw_disable(); 722 - 723 - disable_clk: 724 - qcom_scm_clk_disable(); 725 - 726 - out: 612 + ret = __qcom_scm_pas_init_image(pas_id, mdata_phys, &res); 727 613 if (ret < 0 || !ctx) { 728 614 dma_free_coherent(__scm->dev, size, mdata_buf, mdata_phys); 729 615 } else if (ctx) { ··· 722 640 723 641 /** 724 642 * qcom_scm_pas_metadata_release() - release metadata context 725 - * @ctx: metadata context 643 + * @ctx: pas context 726 644 */ 727 - void qcom_scm_pas_metadata_release(struct qcom_scm_pas_metadata *ctx) 645 + void qcom_scm_pas_metadata_release(struct qcom_scm_pas_context *ctx) 728 646 { 729 647 if (!ctx->ptr) 730 648 return; 731 649 732 - dma_free_coherent(__scm->dev, ctx->size, ctx->ptr, ctx->phys); 650 + if (ctx->use_tzmem) 651 + qcom_tzmem_free(ctx->ptr); 652 + else 653 + dma_free_coherent(__scm->dev, 
ctx->size, ctx->ptr, ctx->phys); 733 654 734 655 ctx->ptr = NULL; 735 - ctx->phys = 0; 736 - ctx->size = 0; 737 656 } 738 657 EXPORT_SYMBOL_GPL(qcom_scm_pas_metadata_release); 739 658 740 659 /** 741 660 * qcom_scm_pas_mem_setup() - Prepare the memory related to a given peripheral 742 661 * for firmware loading 743 - * @peripheral: peripheral id 662 + * @pas_id: peripheral authentication service id 744 663 * @addr: start address of memory area to prepare 745 664 * @size: size of the memory area to prepare 746 665 * 747 666 * Returns 0 on success. 748 667 */ 749 - int qcom_scm_pas_mem_setup(u32 peripheral, phys_addr_t addr, phys_addr_t size) 668 + int qcom_scm_pas_mem_setup(u32 pas_id, phys_addr_t addr, phys_addr_t size) 750 669 { 751 670 int ret; 752 671 struct qcom_scm_desc desc = { 753 672 .svc = QCOM_SCM_SVC_PIL, 754 673 .cmd = QCOM_SCM_PIL_PAS_MEM_SETUP, 755 674 .arginfo = QCOM_SCM_ARGS(3), 756 - .args[0] = peripheral, 675 + .args[0] = pas_id, 757 676 .args[1] = addr, 758 677 .args[2] = size, 759 678 .owner = ARM_SMCCC_OWNER_SIP, ··· 779 696 } 780 697 EXPORT_SYMBOL_GPL(qcom_scm_pas_mem_setup); 781 698 699 + static void *__qcom_scm_pas_get_rsc_table(u32 pas_id, void *input_rt_tzm, 700 + size_t input_rt_size, 701 + size_t *output_rt_size) 702 + { 703 + struct qcom_scm_desc desc = { 704 + .svc = QCOM_SCM_SVC_PIL, 705 + .cmd = QCOM_SCM_PIL_PAS_GET_RSCTABLE, 706 + .arginfo = QCOM_SCM_ARGS(5, QCOM_SCM_VAL, QCOM_SCM_RO, QCOM_SCM_VAL, 707 + QCOM_SCM_RW, QCOM_SCM_VAL), 708 + .args[0] = pas_id, 709 + .owner = ARM_SMCCC_OWNER_SIP, 710 + }; 711 + struct qcom_scm_res res; 712 + void *output_rt_tzm; 713 + int ret; 714 + 715 + output_rt_tzm = qcom_tzmem_alloc(__scm->mempool, *output_rt_size, GFP_KERNEL); 716 + if (!output_rt_tzm) 717 + return ERR_PTR(-ENOMEM); 718 + 719 + desc.args[1] = qcom_tzmem_to_phys(input_rt_tzm); 720 + desc.args[2] = input_rt_size; 721 + desc.args[3] = qcom_tzmem_to_phys(output_rt_tzm); 722 + desc.args[4] = *output_rt_size; 723 + 724 + /* 725 + * 
Whether the SMC call fails or passes, res.result[2] will hold the actual resource table 726 + * size. 727 + * 728 + * If the passed 'output_rt_size' buffer is not sufficient to hold the 729 + * resource table TrustZone sends, the response code in res.result[1] is 730 + * RSCTABLE_BUFFER_NOT_SUFFICIENT so that the caller can retry this SMC call 731 + * with an output_rt_tzm buffer of res.result[2] size; however, it should not 732 + * be of unreasonable size. 733 + */ 734 + ret = qcom_scm_call(__scm->dev, &desc, &res); 735 + if (!ret && res.result[2] > SZ_1G) { 736 + ret = -E2BIG; 737 + goto free_output_rt; 738 + } 739 + 740 + *output_rt_size = res.result[2]; 741 + if (ret && res.result[1] == RSCTABLE_BUFFER_NOT_SUFFICIENT) 742 + ret = -EOVERFLOW; 743 + 744 + free_output_rt: 745 + if (ret) 746 + qcom_tzmem_free(output_rt_tzm); 747 + 748 + return ret ? ERR_PTR(ret) : output_rt_tzm; 749 + } 750 + 751 + /** 752 + * qcom_scm_pas_get_rsc_table() - Retrieve the resource table in the passed output buffer 753 + * for a given peripheral. 754 + * 755 + * A Qualcomm remote processor may rely on both static and dynamic resources for 756 + * its functionality. Static resources typically refer to memory-mapped addresses 757 + * required by the subsystem and are often embedded within the firmware binary, 758 + * while dynamic resources, such as shared memory in DDR etc., are determined at 759 + * runtime during the boot process. 760 + * 761 + * On Qualcomm Technologies devices, it's possible that static resources are not 762 + * embedded in the firmware binary and instead are provided by TrustZone. However, 763 + * dynamic resources are always expected to come from TrustZone. This indicates 764 + * that for Qualcomm devices, all resources (static and dynamic) will be provided 765 + * by TrustZone via the SMC call. 766 + * 767 + * If the remote processor firmware binary does contain static resources, they 768 + * should be passed in input_rt. These will be forwarded to TrustZone for 769 + * authentication.
TrustZone will then append the dynamic resources and return 770 + * the complete resource table in output_rt_tzm. 771 + * 772 + * If the remote processor firmware binary does not include a resource table, 773 + * the caller of this function should set input_rt to NULL and input_rt_size 774 + * to zero. 775 + * 776 + * More documentation on the resource table data structures can be found in 777 + * include/linux/remoteproc.h 778 + * 779 + * @ctx: PAS context 780 + * @pas_id: peripheral authentication service id 781 + * @input_rt: resource table buffer present in the firmware binary 782 + * @input_rt_size: size of the resource table present in the firmware binary 783 + * @output_rt_size: TrustZone expects the caller to pass the worst-case size for 784 + * the output_rt_tzm buffer. 785 + * 786 + * Return: 787 + * On success, returns a pointer to the allocated buffer containing the final 788 + * resource table, and output_rt_size will have the actual resource table size from 789 + * TrustZone. The caller is responsible for freeing the buffer. On failure, 790 + * returns ERR_PTR(-errno). 791 + */ 792 + struct resource_table *qcom_scm_pas_get_rsc_table(struct qcom_scm_pas_context *ctx, 793 + void *input_rt, 794 + size_t input_rt_size, 795 + size_t *output_rt_size) 796 + { 797 + struct resource_table empty_rsc = {}; 798 + size_t size = SZ_16K; 799 + void *output_rt_tzm; 800 + void *input_rt_tzm; 801 + void *tbl_ptr; 802 + int ret; 803 + 804 + ret = qcom_scm_clk_enable(); 805 + if (ret) 806 + return ERR_PTR(ret); 807 + 808 + ret = qcom_scm_bw_enable(); 809 + if (ret) 810 + goto disable_clk; 811 + 812 + /* 813 + * TrustZone cannot accept a NULL buffer as an argument; hence, we 814 + * need to pass an input buffer indicating that the subsystem firmware 815 + * does not have a resource table, by filling in an empty resource table structure.
816 + */ 817 + if (!input_rt) { 818 + input_rt = &empty_rsc; 819 + input_rt_size = sizeof(empty_rsc); 820 + } 821 + 822 + input_rt_tzm = qcom_tzmem_alloc(__scm->mempool, input_rt_size, GFP_KERNEL); 823 + if (!input_rt_tzm) { 824 + ret = -ENOMEM; 825 + goto disable_scm_bw; 826 + } 827 + 828 + memcpy(input_rt_tzm, input_rt, input_rt_size); 829 + 830 + output_rt_tzm = __qcom_scm_pas_get_rsc_table(ctx->pas_id, input_rt_tzm, 831 + input_rt_size, &size); 832 + if (PTR_ERR(output_rt_tzm) == -EOVERFLOW) 833 + /* Try again with the size requested by the TZ */ 834 + output_rt_tzm = __qcom_scm_pas_get_rsc_table(ctx->pas_id, 835 + input_rt_tzm, 836 + input_rt_size, 837 + &size); 838 + if (IS_ERR(output_rt_tzm)) { 839 + ret = PTR_ERR(output_rt_tzm); 840 + goto free_input_rt; 841 + } 842 + 843 + tbl_ptr = kzalloc(size, GFP_KERNEL); 844 + if (!tbl_ptr) { 845 + qcom_tzmem_free(output_rt_tzm); 846 + ret = -ENOMEM; 847 + goto free_input_rt; 848 + } 849 + 850 + memcpy(tbl_ptr, output_rt_tzm, size); 851 + *output_rt_size = size; 852 + qcom_tzmem_free(output_rt_tzm); 853 + 854 + free_input_rt: 855 + qcom_tzmem_free(input_rt_tzm); 856 + 857 + disable_scm_bw: 858 + qcom_scm_bw_disable(); 859 + 860 + disable_clk: 861 + qcom_scm_clk_disable(); 862 + 863 + return ret ? ERR_PTR(ret) : tbl_ptr; 864 + } 865 + EXPORT_SYMBOL_GPL(qcom_scm_pas_get_rsc_table); 866 + 782 867 /** 783 868 * qcom_scm_pas_auth_and_reset() - Authenticate the given peripheral firmware 784 869 * and reset the remote processor 785 - * @peripheral: peripheral id 870 + * @pas_id: peripheral authentication service id 786 871 * 787 872 * Return 0 on success. 
788 873 */ 789 - int qcom_scm_pas_auth_and_reset(u32 peripheral) 874 + int qcom_scm_pas_auth_and_reset(u32 pas_id) 790 875 { 791 876 int ret; 792 877 struct qcom_scm_desc desc = { 793 878 .svc = QCOM_SCM_SVC_PIL, 794 879 .cmd = QCOM_SCM_PIL_PAS_AUTH_AND_RESET, 795 880 .arginfo = QCOM_SCM_ARGS(1), 796 - .args[0] = peripheral, 881 + .args[0] = pas_id, 797 882 .owner = ARM_SMCCC_OWNER_SIP, 798 883 }; 799 884 struct qcom_scm_res res; ··· 985 734 EXPORT_SYMBOL_GPL(qcom_scm_pas_auth_and_reset); 986 735 987 736 /** 737 + * qcom_scm_pas_prepare_and_auth_reset() - Prepare, authenticate, and reset the 738 + * remote processor 739 + * 740 + * @ctx: Context saved during call to qcom_scm_pas_context_init() 741 + * 742 + * This function performs the necessary steps to prepare a PAS subsystem, 743 + * authenticate it using the provided metadata, and initiate a reset sequence. 744 + * 745 + * It should be used when Linux is in control of setting up the IOMMU hardware 746 + * for the remote subsystem during secure firmware loading. The preparation 747 + * step sets up a shmbridge over the firmware memory before TrustZone accesses the 748 + * firmware memory region for authentication. The authentication step verifies 749 + * the integrity and authenticity of the firmware or configuration using secure 750 + * metadata. Finally, the reset step ensures the subsystem starts in a clean and 751 + * sane state. 752 + * 753 + * Return: 0 on success, negative errno on failure. 754 + */ 755 + int qcom_scm_pas_prepare_and_auth_reset(struct qcom_scm_pas_context *ctx) 756 + { 757 + u64 handle; 758 + int ret; 759 + 760 + /* 761 + * When Linux is running @ EL1, the Gunyah hypervisor running @ EL2 traps the 762 + * auth_and_reset call, creates a shmbridge on the remote subsystem 763 + * memory region and then invokes a call to TrustZone to authenticate.
764 + */ 765 + if (!ctx->use_tzmem) 766 + return qcom_scm_pas_auth_and_reset(ctx->pas_id); 767 + 768 + /* 769 + * When Linux runs @ EL2, it must create the shmbridge itself and then 770 + * subsequently call TrustZone to authenticate and reset. 771 + */ 772 + ret = qcom_tzmem_shm_bridge_create(ctx->mem_phys, ctx->mem_size, &handle); 773 + if (ret) 774 + return ret; 775 + 776 + ret = qcom_scm_pas_auth_and_reset(ctx->pas_id); 777 + qcom_tzmem_shm_bridge_delete(handle); 778 + 779 + return ret; 780 + } 781 + EXPORT_SYMBOL_GPL(qcom_scm_pas_prepare_and_auth_reset); 782 + 783 + /** 988 784 * qcom_scm_pas_shutdown() - Shut down the remote processor 989 - * @peripheral: peripheral id 785 + * @pas_id: peripheral authentication service id 990 786 * 991 787 * Returns 0 on success. 992 788 */ 993 - int qcom_scm_pas_shutdown(u32 peripheral) 789 + int qcom_scm_pas_shutdown(u32 pas_id) 994 790 { 995 791 int ret; 996 792 struct qcom_scm_desc desc = { 997 793 .svc = QCOM_SCM_SVC_PIL, 998 794 .cmd = QCOM_SCM_PIL_PAS_SHUTDOWN, 999 795 .arginfo = QCOM_SCM_ARGS(1), 1000 - .args[0] = pas_id, 796 + .args[0] = pas_id, 1001 797 .owner = ARM_SMCCC_OWNER_SIP, 1002 798 }; 1003 799 struct qcom_scm_res res; ··· 1070 772 /** 1071 773 * qcom_scm_pas_supported() - Check if the peripheral authentication service is 1072 774 * available for the given peripheral 1073 - * @peripheral: peripheral id 775 + * @pas_id: peripheral authentication service id 1074 776 * 1075 777 * Returns true if PAS is supported for this peripheral, otherwise false.
1076 778 */ 1077 - bool qcom_scm_pas_supported(u32 peripheral) 779 + bool qcom_scm_pas_supported(u32 pas_id) 1078 780 { 1079 781 int ret; 1080 782 struct qcom_scm_desc desc = { 1081 783 .svc = QCOM_SCM_SVC_PIL, 1082 784 .cmd = QCOM_SCM_PIL_PAS_IS_SUPPORTED, 1083 785 .arginfo = QCOM_SCM_ARGS(1), 1084 - .args[0] = peripheral, 786 + .args[0] = pas_id, 1085 787 .owner = ARM_SMCCC_OWNER_SIP, 1086 788 }; 1087 789 struct qcom_scm_res res; ··· 2305 2007 { .compatible = "lenovo,yoga-slim7x" }, 2306 2008 { .compatible = "microsoft,arcata", }, 2307 2009 { .compatible = "microsoft,blackrock" }, 2010 + { .compatible = "microsoft,denali", }, 2308 2011 { .compatible = "microsoft,romulus13", }, 2309 2012 { .compatible = "microsoft,romulus15", }, 2310 2013 { .compatible = "qcom,hamoa-iot-evk" }, ··· 2507 2208 } 2508 2209 EXPORT_SYMBOL_GPL(qcom_scm_is_available); 2509 2210 2510 - static int qcom_scm_assert_valid_wq_ctx(u32 wq_ctx) 2211 + static int qcom_scm_fill_irq_fwspec_params(struct irq_fwspec *fwspec, u32 hwirq) 2511 2212 { 2512 - /* FW currently only supports a single wq_ctx (zero). 2513 - * TODO: Update this logic to include dynamic allocation and lookup of 2514 - * completion structs when FW supports more wq_ctx values. 
2515 - */ 2516 - if (wq_ctx != 0) { 2517 - dev_err(__scm->dev, "Firmware unexpectedly passed non-zero wq_ctx\n"); 2518 - return -EINVAL; 2213 + if (hwirq >= GIC_SPI_BASE && hwirq <= GIC_MAX_SPI) { 2214 + fwspec->param[0] = GIC_SPI; 2215 + fwspec->param[1] = hwirq - GIC_SPI_BASE; 2216 + } else if (hwirq >= GIC_ESPI_BASE && hwirq <= GIC_MAX_ESPI) { 2217 + fwspec->param[0] = GIC_ESPI; 2218 + fwspec->param[1] = hwirq - GIC_ESPI_BASE; 2219 + } else { 2220 + WARN(1, "Unexpected hwirq: %d\n", hwirq); 2221 + return -ENXIO; 2519 2222 } 2223 + 2224 + fwspec->param[2] = IRQ_TYPE_EDGE_RISING; 2225 + fwspec->param_count = 3; 2520 2226 2521 2227 return 0; 2522 2228 } 2523 2229 2524 - int qcom_scm_wait_for_wq_completion(u32 wq_ctx) 2230 + static int qcom_scm_query_waitq_count(struct qcom_scm *scm) 2525 2231 { 2232 + struct qcom_scm_desc desc = { 2233 + .svc = QCOM_SCM_SVC_WAITQ, 2234 + .cmd = QCOM_SCM_WAITQ_GET_INFO, 2235 + .owner = ARM_SMCCC_OWNER_SIP 2236 + }; 2237 + struct qcom_scm_res res; 2526 2238 int ret; 2527 2239 2528 - ret = qcom_scm_assert_valid_wq_ctx(wq_ctx); 2240 + ret = qcom_scm_call_atomic(scm->dev, &desc, &res); 2529 2241 if (ret) 2530 2242 return ret; 2531 2243 2532 - wait_for_completion(&__scm->waitq_comp); 2244 + return res.result[0] & GENMASK(7, 0); 2245 + } 2246 + 2247 + static int qcom_scm_get_waitq_irq(struct qcom_scm *scm) 2248 + { 2249 + struct qcom_scm_desc desc = { 2250 + .svc = QCOM_SCM_SVC_WAITQ, 2251 + .cmd = QCOM_SCM_WAITQ_GET_INFO, 2252 + .owner = ARM_SMCCC_OWNER_SIP 2253 + }; 2254 + struct device_node *parent_irq_node; 2255 + struct irq_fwspec fwspec; 2256 + struct qcom_scm_res res; 2257 + u32 hwirq; 2258 + int ret; 2259 + 2260 + ret = qcom_scm_call_atomic(scm->dev, &desc, &res); 2261 + if (ret) 2262 + return ret; 2263 + 2264 + hwirq = res.result[1] & GENMASK(15, 0); 2265 + ret = qcom_scm_fill_irq_fwspec_params(&fwspec, hwirq); 2266 + if (ret) 2267 + return ret; 2268 + 2269 + parent_irq_node = of_irq_find_parent(scm->dev->of_node); 2270 + if 
(!parent_irq_node) 2271 + return -ENODEV; 2272 + 2273 + fwspec.fwnode = of_fwnode_handle(parent_irq_node); 2274 + 2275 + return irq_create_fwspec_mapping(&fwspec); 2276 + } 2277 + 2278 + static struct completion *qcom_scm_get_completion(u32 wq_ctx) 2279 + { 2280 + struct completion *wq; 2281 + 2282 + if (WARN_ON_ONCE(wq_ctx >= __scm->wq_cnt)) 2283 + return ERR_PTR(-EINVAL); 2284 + 2285 + wq = &__scm->waitq_comps[wq_ctx]; 2286 + 2287 + return wq; 2288 + } 2289 + 2290 + int qcom_scm_wait_for_wq_completion(u32 wq_ctx) 2291 + { 2292 + struct completion *wq; 2293 + 2294 + wq = qcom_scm_get_completion(wq_ctx); 2295 + if (IS_ERR(wq)) 2296 + return PTR_ERR(wq); 2297 + 2298 + wait_for_completion_state(wq, TASK_IDLE); 2533 2299 2534 2300 return 0; 2535 2301 } 2536 2302 2537 2303 static int qcom_scm_waitq_wakeup(unsigned int wq_ctx) 2538 2304 { 2539 - int ret; 2305 + struct completion *wq; 2540 2306 2541 - ret = qcom_scm_assert_valid_wq_ctx(wq_ctx); 2542 - if (ret) 2543 - return ret; 2307 + wq = qcom_scm_get_completion(wq_ctx); 2308 + if (IS_ERR(wq)) 2309 + return PTR_ERR(wq); 2544 2310 2545 - complete(&__scm->waitq_comp); 2311 + complete(wq); 2546 2312 2547 2313 return 0; 2548 2314 } ··· 2683 2319 struct qcom_tzmem_pool_config pool_config; 2684 2320 struct qcom_scm *scm; 2685 2321 int irq, ret; 2322 + int i; 2686 2323 2687 2324 scm = devm_kzalloc(&pdev->dev, sizeof(*scm), GFP_KERNEL); 2688 2325 if (!scm) ··· 2694 2329 if (ret < 0) 2695 2330 return ret; 2696 2331 2697 - init_completion(&scm->waitq_comp); 2698 2332 mutex_init(&scm->scm_bw_lock); 2699 2333 2700 2334 scm->path = devm_of_icc_get(&pdev->dev, NULL); ··· 2745 2381 return dev_err_probe(scm->dev, PTR_ERR(scm->mempool), 2746 2382 "Failed to create the SCM memory pool\n"); 2747 2383 2748 - irq = platform_get_irq_optional(pdev, 0); 2384 + ret = qcom_scm_query_waitq_count(scm); 2385 + scm->wq_cnt = ret < 0 ? 
QCOM_SCM_DEFAULT_WAITQ_COUNT : ret; 2386 + scm->waitq_comps = devm_kcalloc(&pdev->dev, scm->wq_cnt, sizeof(*scm->waitq_comps), 2387 + GFP_KERNEL); 2388 + if (!scm->waitq_comps) 2389 + return -ENOMEM; 2390 + 2391 + for (i = 0; i < scm->wq_cnt; i++) 2392 + init_completion(&scm->waitq_comps[i]); 2393 + 2394 + irq = qcom_scm_get_waitq_irq(scm); 2395 + if (irq < 0) 2396 + irq = platform_get_irq_optional(pdev, 0); 2397 + 2749 2398 if (irq < 0) { 2750 2399 if (irq != -ENXIO) 2751 2400 return irq;
+2
drivers/firmware/qcom/qcom_scm.h
··· 105 105 #define QCOM_SCM_PIL_PAS_SHUTDOWN 0x06 106 106 #define QCOM_SCM_PIL_PAS_IS_SUPPORTED 0x07 107 107 #define QCOM_SCM_PIL_PAS_MSS_RESET 0x0a 108 + #define QCOM_SCM_PIL_PAS_GET_RSCTABLE 0x21 108 109 109 110 #define QCOM_SCM_SVC_IO 0x05 110 111 #define QCOM_SCM_IO_READ 0x01 ··· 153 152 #define QCOM_SCM_SVC_WAITQ 0x24 154 153 #define QCOM_SCM_WAITQ_RESUME 0x02 155 154 #define QCOM_SCM_WAITQ_GET_WQ_CTX 0x03 155 + #define QCOM_SCM_WAITQ_GET_INFO 0x04 156 156 157 157 #define QCOM_SCM_SVC_GPU 0x28 158 158 #define QCOM_SCM_SVC_GPU_INIT_REGS 0x01
+16 -13
drivers/firmware/ti_sci.h
··· 580 580 } __packed; 581 581 582 582 /** 583 - * struct tisci_msg_req_prepare_sleep - Request for TISCI_MSG_PREPARE_SLEEP. 583 + * struct ti_sci_msg_req_prepare_sleep - Request for TISCI_MSG_PREPARE_SLEEP. 584 584 * 585 - * @hdr TISCI header to provide ACK/NAK flags to the host. 586 - * @mode Low power mode to enter. 587 - * @ctx_lo Low 32-bits of physical pointer to address to use for context save. 588 - * @ctx_hi High 32-bits of physical pointer to address to use for context save. 589 - * @debug_flags Flags that can be set to halt the sequence during suspend or 585 + * @hdr: TISCI header to provide ACK/NAK flags to the host. 586 + * @mode: Low power mode to enter. 587 + * @ctx_lo: Low 32-bits of physical pointer to address to use for context save. 588 + * @ctx_hi: High 32-bits of physical pointer to address to use for context save. 589 + * @debug_flags: Flags that can be set to halt the sequence during suspend or 590 590 * resume to allow JTAG connection and debug. 591 591 * 592 592 * This message is used as the first step of entering a low power mode. It ··· 610 610 } __packed; 611 611 612 612 /** 613 - * struct tisci_msg_set_io_isolation_req - Request for TI_SCI_MSG_SET_IO_ISOLATION. 613 + * struct ti_sci_msg_req_set_io_isolation - Request for TI_SCI_MSG_SET_IO_ISOLATION. 614 614 * 615 615 * @hdr: Generic header 616 616 * @state: The deseared state of the IO isolation. ··· 676 676 * TISCI_MSG_LPM_SET_LATENCY_CONSTRAINT. 677 677 * 678 678 * @hdr: TISCI header to provide ACK/NAK flags to the host. 679 - * @wkup_latency: The maximum acceptable latency to wake up from low power mode 679 + * @latency: The maximum acceptable latency to wake up from low power mode 680 680 * in milliseconds. The deeper the state, the higher the latency. 681 681 * @state: The desired state of wakeup latency constraint: set or clear. 682 682 * @rsvd: Reserved for future use. 
··· 855 855 * UDMAP transmit channels mapped to source threads will have their 856 856 * TCHAN_THRD_ID register programmed with the destination thread if the pairing 857 857 * is successful. 858 - 858 + * 859 859 * @dst_thread: PSI-L destination thread ID within the PSI-L System thread map. 860 860 * PSI-L destination threads start at index 0x8000. The request is NACK'd if 861 861 * the destination thread is not greater than or equal to 0x8000. ··· 1000 1000 } __packed; 1001 1001 1002 1002 /** 1003 - * Configures a Navigator Subsystem UDMAP transmit channel 1003 + * struct ti_sci_msg_rm_udmap_tx_ch_cfg_req - Configures a 1004 + * Navigator Subsystem UDMAP transmit channel 1004 1005 * 1005 1006 * Configures the non-real-time registers of a Navigator Subsystem UDMAP 1006 1007 * transmit channel. The channel index must be assigned to the host defined ··· 1129 1128 } __packed; 1130 1129 1131 1130 /** 1132 - * Configures a Navigator Subsystem UDMAP receive channel 1131 + * struct ti_sci_msg_rm_udmap_rx_ch_cfg_req - Configures a 1132 + * Navigator Subsystem UDMAP receive channel 1133 1133 * 1134 1134 * Configures the non-real-time registers of a Navigator Subsystem UDMAP 1135 1135 * receive channel. The channel index must be assigned to the host defined ··· 1249 1247 } __packed; 1250 1248 1251 1249 /** 1252 - * Configures a Navigator Subsystem UDMAP receive flow 1250 + * struct ti_sci_msg_rm_udmap_flow_cfg_req - Configures a 1251 + * Navigator Subsystem UDMAP receive flow 1253 1252 * 1254 1253 * Configures a Navigator Subsystem UDMAP receive flow's registers. 1255 1254 * Configuration does not include the flow registers which handle size-based ··· 1261 1258 * 1262 1259 * @hdr: Standard TISCI header 1263 1260 * 1264 - * @valid_params 1261 + * @valid_params: 1265 1262 * Bitfield defining validity of rx flow configuration parameters. 
The 1266 1263 * rx flow configuration fields are not valid, and will not be used for flow 1267 1264 * configuration, if their corresponding valid bit is zero. Valid bit usage:
+3 -1
drivers/hwspinlock/omap_hwspinlock.c
··· 88 88 * make sure the module is enabled and clocked before reading 89 89 * the module SYSSTATUS register 90 90 */ 91 - devm_pm_runtime_enable(&pdev->dev); 91 + ret = devm_pm_runtime_enable(&pdev->dev); 92 + if (ret) 93 + return ret; 92 94 ret = pm_runtime_resume_and_get(&pdev->dev); 93 95 if (ret < 0) 94 96 return ret;
+17 -30
drivers/irqchip/irq-ls-extirq.c
··· 125 125 static int 126 126 ls_extirq_parse_map(struct ls_extirq_data *priv, struct device_node *node) 127 127 { 128 - const __be32 *map; 129 - u32 mapsize; 128 + struct of_imap_parser imap_parser; 129 + struct of_imap_item imap_item; 130 130 int ret; 131 131 132 - map = of_get_property(node, "interrupt-map", &mapsize); 133 - if (!map) 134 - return -ENOENT; 135 - if (mapsize % sizeof(*map)) 136 - return -EINVAL; 137 - mapsize /= sizeof(*map); 132 + ret = of_imap_parser_init(&imap_parser, node, &imap_item); 133 + if (ret) 134 + return ret; 138 135 139 - while (mapsize) { 136 + for_each_of_imap_item(&imap_parser, &imap_item) { 140 137 struct device_node *ipar; 141 - u32 hwirq, intsize, j; 138 + u32 hwirq; 139 + int i; 142 140 143 - if (mapsize < 3) 141 + hwirq = imap_item.child_imap[0]; 142 + if (hwirq >= MAXIRQ) { 143 + of_node_put(imap_item.parent_args.np); 144 144 return -EINVAL; 145 - hwirq = be32_to_cpup(map); 146 - if (hwirq >= MAXIRQ) 147 - return -EINVAL; 145 + } 148 146 priv->nirq = max(priv->nirq, hwirq + 1); 149 147 150 - ipar = of_find_node_by_phandle(be32_to_cpup(map + 2)); 151 - map += 3; 152 - mapsize -= 3; 153 - if (!ipar) 154 - return -EINVAL; 155 - priv->map[hwirq].fwnode = &ipar->fwnode; 156 - ret = of_property_read_u32(ipar, "#interrupt-cells", &intsize); 157 - if (ret) 158 - return ret; 148 + ipar = of_node_get(imap_item.parent_args.np); 149 + priv->map[hwirq].fwnode = of_fwnode_handle(ipar); 159 150 160 - if (intsize > mapsize) 161 - return -EINVAL; 162 - 163 - priv->map[hwirq].param_count = intsize; 164 - for (j = 0; j < intsize; ++j) 165 - priv->map[hwirq].param[j] = be32_to_cpup(map++); 166 - mapsize -= intsize; 151 + priv->map[hwirq].param_count = imap_item.parent_args.args_count; 152 + for (i = 0; i < priv->map[hwirq].param_count; i++) 153 + priv->map[hwirq].param[i] = imap_item.parent_args.args[i]; 167 154 } 168 155 return 0; 169 156 }
+16 -27
drivers/irqchip/irq-renesas-rza1.c
··· 142 142 static int rza1_irqc_parse_map(struct rza1_irqc_priv *priv, 143 143 struct device_node *gic_node) 144 144 { 145 + struct of_imap_parser imap_parser; 145 146 struct device *dev = priv->dev; 146 - unsigned int imaplen, i, j; 147 + struct of_imap_item imap_item; 147 148 struct device_node *ipar; 148 - const __be32 *imap; 149 - u32 intsize; 149 + unsigned int j; 150 + u32 i = 0; 150 151 int ret; 151 152 152 - imap = of_get_property(dev->of_node, "interrupt-map", &imaplen); 153 - if (!imap) 154 - return -EINVAL; 153 + ret = of_imap_parser_init(&imap_parser, dev->of_node, &imap_item); 154 + if (ret) 155 + return ret; 155 156 156 - for (i = 0; i < IRQC_NUM_IRQ; i++) { 157 - if (imaplen < 3) 158 - return -EINVAL; 159 - 157 + for_each_of_imap_item(&imap_parser, &imap_item) { 160 158 /* Check interrupt number, ignore sense */ 161 - if (be32_to_cpup(imap) != i) 159 + if (imap_item.child_imap[0] != i) { 160 + of_node_put(imap_item.parent_args.np); 162 161 return -EINVAL; 162 + } 163 163 164 - ipar = of_find_node_by_phandle(be32_to_cpup(imap + 2)); 164 + ipar = imap_item.parent_args.np; 165 165 if (ipar != gic_node) { 166 166 of_node_put(ipar); 167 167 return -EINVAL; 168 168 } 169 169 170 - imap += 3; 171 - imaplen -= 3; 170 + priv->map[i].args_count = imap_item.parent_args.args_count; 171 + for (j = 0; j < priv->map[i].args_count; j++) 172 + priv->map[i].args[j] = imap_item.parent_args.args[j]; 172 173 173 - ret = of_property_read_u32(ipar, "#interrupt-cells", &intsize); 174 - of_node_put(ipar); 175 - if (ret) 176 - return ret; 177 - 178 - if (imaplen < intsize) 179 - return -EINVAL; 180 - 181 - priv->map[i].args_count = intsize; 182 - for (j = 0; j < intsize; j++) 183 - priv->map[i].args[j] = be32_to_cpup(imap++); 184 - 185 - imaplen -= intsize; 174 + i++; 186 175 } 187 176 188 177 return 0;
+72 -2
drivers/mailbox/mtk-cmdq-mailbox.c
··· 14 14 #include <linux/module.h> 15 15 #include <linux/platform_device.h> 16 16 #include <linux/pm_runtime.h> 17 + #include <linux/sizes.h> 17 18 #include <linux/mailbox_controller.h> 18 19 #include <linux/mailbox/mtk-cmdq-mailbox.h> 19 20 #include <linux/of.h> ··· 43 42 #define GCE_GCTL_VALUE 0x48 44 43 #define GCE_CTRL_BY_SW GENMASK(2, 0) 45 44 #define GCE_DDR_EN GENMASK(18, 16) 45 + 46 + #define GCE_VM_ID_MAP(n) (0x5018 + (n) / 10 * 4) 47 + #define GCE_VM_ID_MAP_THR_FLD_SHIFT(n) ((n) % 10 * 3) 48 + #define GCE_VM_ID_MAP_HOST_VM GENMASK(2, 0) 49 + #define GCE_VM_CPR_GSIZE 0x50c4 50 + #define GCE_VM_CPR_GSIZE_FLD_SHIFT(vm_id) ((vm_id) * 4) 51 + #define GCE_VM_CPR_GSIZE_MAX GENMASK(3, 0) 46 52 47 53 #define CMDQ_THR_ACTIVE_SLOT_CYCLES 0x3200 48 54 #define CMDQ_THR_ENABLED 0x1 ··· 95 87 struct gce_plat { 96 88 u32 thread_nr; 97 89 u8 shift; 90 + dma_addr_t mminfra_offset; 98 91 bool control_by_sw; 99 92 bool sw_ddr_en; 93 + bool gce_vm; 100 94 u32 gce_num; 101 95 }; 102 96 103 97 static inline u32 cmdq_convert_gce_addr(dma_addr_t addr, const struct gce_plat *pdata) 104 98 { 105 99 /* Convert DMA addr (PA or IOVA) to GCE readable addr */ 106 - return addr >> pdata->shift; 100 + return (addr + pdata->mminfra_offset) >> pdata->shift; 107 101 } 108 102 109 103 static inline dma_addr_t cmdq_revert_gce_addr(u32 addr, const struct gce_plat *pdata) 110 104 { 111 105 /* Revert GCE readable addr to DMA addr (PA or IOVA) */ 112 - return (dma_addr_t)addr << pdata->shift; 106 + return ((dma_addr_t)addr << pdata->shift) - pdata->mminfra_offset; 113 107 } 108 + 109 + void cmdq_get_mbox_priv(struct mbox_chan *chan, struct cmdq_mbox_priv *priv) 110 + { 111 + struct cmdq *cmdq = container_of(chan->mbox, struct cmdq, mbox); 112 + 113 + priv->shift_pa = cmdq->pdata->shift; 114 + priv->mminfra_offset = cmdq->pdata->mminfra_offset; 115 + } 116 + EXPORT_SYMBOL(cmdq_get_mbox_priv); 114 117 115 118 u8 cmdq_get_shift_pa(struct mbox_chan *chan) 116 119 { ··· 130 111 return 
cmdq->pdata->shift; 131 112 } 132 113 EXPORT_SYMBOL(cmdq_get_shift_pa); 114 + 115 + static void cmdq_vm_init(struct cmdq *cmdq) 116 + { 117 + int i; 118 + u32 vm_cpr_gsize = 0, vm_id_map = 0; 119 + u32 *vm_map = NULL; 120 + 121 + if (!cmdq->pdata->gce_vm) 122 + return; 123 + 124 + vm_map = kcalloc(cmdq->pdata->thread_nr, sizeof(*vm_map), GFP_KERNEL); 125 + if (!vm_map) 126 + return; 127 + 128 + /* only configure the max CPR SRAM size to host vm (vm_id = 0) currently */ 129 + vm_cpr_gsize = GCE_VM_CPR_GSIZE_MAX << GCE_VM_CPR_GSIZE_FLD_SHIFT(0); 130 + 131 + /* set all thread mapping to host vm currently */ 132 + for (i = 0; i < cmdq->pdata->thread_nr; i++) 133 + vm_map[i] = GCE_VM_ID_MAP_HOST_VM << GCE_VM_ID_MAP_THR_FLD_SHIFT(i); 134 + 135 + /* set the amount of CPR SRAM to allocate to each VM */ 136 + writel(vm_cpr_gsize, cmdq->base + GCE_VM_CPR_GSIZE); 137 + 138 + /* config CPR_GSIZE before setting VM_ID_MAP to avoid data leakage */ 139 + for (i = 0; i < cmdq->pdata->thread_nr; i++) { 140 + vm_id_map |= vm_map[i]; 141 + /* config every 10 threads, e.g., thread id=0~9, 10~19, ..., into one register */ 142 + if ((i + 1) % 10 == 0) { 143 + writel(vm_id_map, cmdq->base + GCE_VM_ID_MAP(i)); 144 + vm_id_map = 0; 145 + } 146 + } 147 + /* config remaining threads settings */ 148 + if (cmdq->pdata->thread_nr % 10 != 0) 149 + writel(vm_id_map, cmdq->base + GCE_VM_ID_MAP(cmdq->pdata->thread_nr - 1)); 150 + 151 + kfree(vm_map); 152 + } 133 153 134 154 static void cmdq_gctl_value_toggle(struct cmdq *cmdq, bool ddr_enable) 135 155 { ··· 214 156 215 157 WARN_ON(clk_bulk_enable(cmdq->pdata->gce_num, cmdq->clocks)); 216 158 159 + cmdq_vm_init(cmdq); 217 160 cmdq_gctl_value_toggle(cmdq, true); 218 161 219 162 writel(CMDQ_THR_ACTIVE_SLOT_CYCLES, cmdq->base + CMDQ_THR_SLOT_CYCLES); ··· 841 782 .gce_num = 2 842 783 }; 843 784 785 + static const struct gce_plat gce_plat_mt8196 = { 786 + .thread_nr = 32, 787 + .shift = 3, 788 + .mminfra_offset = SZ_2G, 789 + .control_by_sw = true, 790 + 
.sw_ddr_en = true, 791 + .gce_vm = true, 792 + .gce_num = 2 793 + }; 794 + 844 795 static const struct of_device_id cmdq_of_ids[] = { 845 796 {.compatible = "mediatek,mt6779-gce", .data = (void *)&gce_plat_mt6779}, 846 797 {.compatible = "mediatek,mt8173-gce", .data = (void *)&gce_plat_mt8173}, ··· 859 790 {.compatible = "mediatek,mt8188-gce", .data = (void *)&gce_plat_mt8188}, 860 791 {.compatible = "mediatek,mt8192-gce", .data = (void *)&gce_plat_mt8192}, 861 792 {.compatible = "mediatek,mt8195-gce", .data = (void *)&gce_plat_mt8195}, 793 + {.compatible = "mediatek,mt8196-gce", .data = (void *)&gce_plat_mt8196}, 862 794 {} 863 795 }; 864 796 MODULE_DEVICE_TABLE(of, cmdq_of_ids);
+22 -16
drivers/memory/mtk-smi.c
···
 	smi_com_pdev = of_find_device_by_node(smi_com_node);
 	of_node_put(smi_com_node);
-	if (smi_com_pdev) {
-		/* smi common is the supplier, Make sure it is ready before */
-		if (!platform_get_drvdata(smi_com_pdev)) {
-			put_device(&smi_com_pdev->dev);
-			return -EPROBE_DEFER;
-		}
-		smi_com_dev = &smi_com_pdev->dev;
-		link = device_link_add(dev, smi_com_dev,
-				       DL_FLAG_PM_RUNTIME | DL_FLAG_STATELESS);
-		if (!link) {
-			dev_err(dev, "Unable to link smi-common dev\n");
-			put_device(&smi_com_pdev->dev);
-			return -ENODEV;
-		}
-		*com_dev = smi_com_dev;
-	} else {
+	if (!smi_com_pdev) {
 		dev_err(dev, "Failed to get the smi_common device\n");
 		return -EINVAL;
 	}
+
+	/* smi common is the supplier, Make sure it is ready before */
+	if (!platform_get_drvdata(smi_com_pdev)) {
+		put_device(&smi_com_pdev->dev);
+		return -EPROBE_DEFER;
+	}
+
+	smi_com_dev = &smi_com_pdev->dev;
+	link = device_link_add(dev, smi_com_dev,
+			       DL_FLAG_PM_RUNTIME | DL_FLAG_STATELESS);
+	if (!link) {
+		dev_err(dev, "Unable to link smi-common dev\n");
+		put_device(&smi_com_pdev->dev);
+		return -ENODEV;
+	}
+
+	*com_dev = smi_com_dev;
+
 	return 0;
 }
···
 err_pm_disable:
 	pm_runtime_disable(dev);
 	device_link_remove(dev, larb->smi_common_dev);
+	put_device(larb->smi_common_dev);
 	return ret;
 }
···
 	device_link_remove(&pdev->dev, larb->smi_common_dev);
 	pm_runtime_disable(&pdev->dev);
 	component_del(&pdev->dev, &mtk_smi_larb_component_ops);
+	put_device(larb->smi_common_dev);
 }

 static int __maybe_unused mtk_smi_larb_resume(struct device *dev)
···
 	if (common->plat->type == MTK_SMI_GEN2_SUB_COMM)
 		device_link_remove(&pdev->dev, common->smi_common_dev);
 	pm_runtime_disable(&pdev->dev);
+	put_device(common->smi_common_dev);
 }

 static int __maybe_unused mtk_smi_common_resume(struct device *dev)
+70
drivers/of/irq.c
···
 	return imap;
 }

+int of_imap_parser_init(struct of_imap_parser *parser, struct device_node *node,
+			struct of_imap_item *item)
+{
+	int imaplen;
+	u32 tmp;
+	int ret;
+
+	/*
+	 * parent_offset is the offset where the parent part is starting.
+	 * In other words, the offset where the parent interrupt controller
+	 * phandle is present.
+	 *
+	 * Compute this offset (child #interrupt-cells + child #address-cells)
+	 */
+	parser->parent_offset = of_bus_n_addr_cells(node);
+
+	ret = of_property_read_u32(node, "#interrupt-cells", &tmp);
+	if (ret)
+		return ret;
+
+	parser->parent_offset += tmp;
+
+	if (WARN(parser->parent_offset > ARRAY_SIZE(item->child_imap),
+		 "child part size = %u, cannot fit in array of %zu items",
+		 parser->parent_offset, ARRAY_SIZE(item->child_imap)))
+		return -EINVAL;
+
+	parser->imap = of_get_property(node, "interrupt-map", &imaplen);
+	if (!parser->imap)
+		return -ENOENT;
+
+	imaplen /= sizeof(*parser->imap);
+	parser->imap_end = parser->imap + imaplen;
+
+	memset(item, 0, sizeof(*item));
+	item->child_imap_count = parser->parent_offset;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(of_imap_parser_init);
+
+struct of_imap_item *of_imap_parser_one(struct of_imap_parser *parser,
+					struct of_imap_item *item)
+{
+	const __be32 *imap_parent, *imap_next;
+	int i;
+
+	/* Release previously get parent node */
+	of_node_put(item->parent_args.np);
+
+	if (parser->imap + parser->parent_offset + 1 >= parser->imap_end)
+		return NULL;
+
+	imap_parent = parser->imap + parser->parent_offset;
+
+	imap_next = of_irq_parse_imap_parent(imap_parent,
+					     parser->imap_end - imap_parent,
+					     &item->parent_args);
+	if (!imap_next)
+		return NULL;
+
+	for (i = 0; i < parser->parent_offset; i++)
+		item->child_imap[i] = be32_to_cpu(*(parser->imap + i));
+
+	parser->imap = imap_next;
+
+	return item;
+}
+EXPORT_SYMBOL_GPL(of_imap_parser_one);
+
 /**
  * of_irq_parse_raw - Low level interrupt tree parsing
  * @addr: address specifier (start of "reg" property of the device) in be32 format
+9
drivers/of/unittest-data/tests-interrupts.dtsi
···
 		interrupt-map = <0x5000 1 2 &test_intc0 15>;
 	};

+	intmap2 {
+		#interrupt-cells = <2>;
+		#address-cells = <0>;
+		interrupt-map = <1 11 &test_intc0 100>,
+				<2 22 &test_intc1 200 201 202>,
+				<3 33 &test_intc2 300 301>,
+				<4 44 &test_intc2 400 401>;
+	};
+
 	test_intc_intmap0: intc-intmap0 {
 		#interrupt-cells = <1>;
 		#address-cells = <1>;
+116
drivers/of/unittest.c
···
 	of_node_put(np);
 }

+struct of_unittest_expected_imap_item {
+	u32 child_imap_count;
+	u32 child_imap[2];
+	const char *parent_path;
+	int parent_args_count;
+	u32 parent_args[3];
+};
+
+static const struct of_unittest_expected_imap_item of_unittest_expected_imap_items[] = {
+	{
+		.child_imap_count = 2,
+		.child_imap = {1, 11},
+		.parent_path = "/testcase-data/interrupts/intc0",
+		.parent_args_count = 1,
+		.parent_args = {100},
+	}, {
+		.child_imap_count = 2,
+		.child_imap = {2, 22},
+		.parent_path = "/testcase-data/interrupts/intc1",
+		.parent_args_count = 3,
+		.parent_args = {200, 201, 202},
+	}, {
+		.child_imap_count = 2,
+		.child_imap = {3, 33},
+		.parent_path = "/testcase-data/interrupts/intc2",
+		.parent_args_count = 2,
+		.parent_args = {300, 301},
+	}, {
+		.child_imap_count = 2,
+		.child_imap = {4, 44},
+		.parent_path = "/testcase-data/interrupts/intc2",
+		.parent_args_count = 2,
+		.parent_args = {400, 401},
+	}
+};
+
+static void __init of_unittest_parse_interrupt_map(void)
+{
+	const struct of_unittest_expected_imap_item *expected_item;
+	struct device_node *imap_np, *expected_parent_np;
+	struct of_imap_parser imap_parser;
+	struct of_imap_item imap_item;
+	int count, ret, i;
+
+	if (of_irq_workarounds & (OF_IMAP_NO_PHANDLE | OF_IMAP_OLDWORLD_MAC))
+		return;
+
+	imap_np = of_find_node_by_path("/testcase-data/interrupts/intmap2");
+	if (!imap_np) {
+		pr_err("missing testcase data\n");
+		return;
+	}
+
+	ret = of_imap_parser_init(&imap_parser, imap_np, &imap_item);
+	if (unittest(!ret, "of_imap_parser_init(%pOF) returned error %d\n",
+		     imap_np, ret))
+		goto end;
+
+	expected_item = of_unittest_expected_imap_items;
+	count = 0;
+
+	for_each_of_imap_item(&imap_parser, &imap_item) {
+		if (unittest(count < ARRAY_SIZE(of_unittest_expected_imap_items),
+			     "imap item number %d not expected. Max number %zu\n",
+			     count, ARRAY_SIZE(of_unittest_expected_imap_items) - 1)) {
+			of_node_put(imap_item.parent_args.np);
+			goto end;
+		}
+
+		expected_parent_np = of_find_node_by_path(expected_item->parent_path);
+		if (unittest(expected_parent_np,
+			     "missing dependent testcase data (%s)\n",
+			     expected_item->parent_path)) {
+			of_node_put(imap_item.parent_args.np);
+			goto end;
+		}
+
+		unittest(imap_item.child_imap_count == expected_item->child_imap_count,
+			 "imap[%d] child_imap_count = %u, expected %u\n",
+			 count, imap_item.child_imap_count,
+			 expected_item->child_imap_count);
+
+		for (i = 0; i < expected_item->child_imap_count; i++)
+			unittest(imap_item.child_imap[i] == expected_item->child_imap[i],
+				 "imap[%d] child_imap[%d] = %u, expected %u\n",
+				 count, i, imap_item.child_imap[i],
+				 expected_item->child_imap[i]);
+
+		unittest(imap_item.parent_args.np == expected_parent_np,
+			 "imap[%d] parent np = %pOF, expected %pOF\n",
+			 count, imap_item.parent_args.np, expected_parent_np);
+
+		unittest(imap_item.parent_args.args_count == expected_item->parent_args_count,
+			 "imap[%d] parent param_count = %d, expected %d\n",
+			 count, imap_item.parent_args.args_count,
+			 expected_item->parent_args_count);
+
+		for (i = 0; i < expected_item->parent_args_count; i++)
+			unittest(imap_item.parent_args.args[i] == expected_item->parent_args[i],
+				 "imap[%d] parent param[%d] = %u, expected %u\n",
+				 count, i, imap_item.parent_args.args[i],
+				 expected_item->parent_args[i]);
+
+		of_node_put(expected_parent_np);
+		count++;
+		expected_item++;
+	}
+
+	unittest(count == ARRAY_SIZE(of_unittest_expected_imap_items),
+		 "Missing items. %d parsed, expected %zu\n",
+		 count, ARRAY_SIZE(of_unittest_expected_imap_items));
+end:
+	of_node_put(imap_np);
+}
+
 #if IS_ENABLED(CONFIG_OF_DYNAMIC)
 static void __init of_unittest_irq_refcount(void)
 {
···
 	of_unittest_changeset_prop();
 	of_unittest_parse_interrupts();
 	of_unittest_parse_interrupts_extended();
+	of_unittest_parse_interrupt_map();
 	of_unittest_irq_refcount();
 	of_unittest_dma_get_max_cpu_address();
 	of_unittest_parse_dma_ranges();
+130 -35
drivers/remoteproc/qcom_q6v5_pas.c
···
 #include <linux/delay.h>
 #include <linux/firmware.h>
 #include <linux/interrupt.h>
+#include <linux/iommu.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/of.h>
···
 	struct qcom_rproc_ssr ssr_subdev;
 	struct qcom_sysmon *sysmon;

-	struct qcom_scm_pas_metadata pas_metadata;
-	struct qcom_scm_pas_metadata dtb_pas_metadata;
+	struct qcom_scm_pas_context *pas_ctx;
+	struct qcom_scm_pas_context *dtb_pas_ctx;
 };

 static void qcom_pas_segment_dump(struct rproc *rproc,
···
 	 * auth_and_reset() was successful, but in other cases clean it up
 	 * here.
 	 */
-	qcom_scm_pas_metadata_release(&pas->pas_metadata);
+	qcom_scm_pas_metadata_release(pas->pas_ctx);
 	if (pas->dtb_pas_id)
-		qcom_scm_pas_metadata_release(&pas->dtb_pas_metadata);
+		qcom_scm_pas_metadata_release(pas->dtb_pas_ctx);

 	return 0;
 }
···
 			return ret;
 		}

-		ret = qcom_mdt_pas_init(pas->dev, pas->dtb_firmware, pas->dtb_firmware_name,
-					pas->dtb_pas_id, pas->dtb_mem_phys,
-					&pas->dtb_pas_metadata);
-		if (ret)
-			goto release_dtb_firmware;
-
-		ret = qcom_mdt_load_no_init(pas->dev, pas->dtb_firmware, pas->dtb_firmware_name,
-					    pas->dtb_mem_region, pas->dtb_mem_phys,
-					    pas->dtb_mem_size, &pas->dtb_mem_reloc);
+		ret = qcom_mdt_pas_load(pas->dtb_pas_ctx, pas->dtb_firmware,
+					pas->dtb_firmware_name, pas->dtb_mem_region,
+					&pas->dtb_mem_reloc);
 		if (ret)
 			goto release_dtb_metadata;
 	}
···
 	return 0;

 release_dtb_metadata:
-	qcom_scm_pas_metadata_release(&pas->dtb_pas_metadata);
-
-release_dtb_firmware:
+	qcom_scm_pas_metadata_release(pas->dtb_pas_ctx);
 	release_firmware(pas->dtb_firmware);

+	return ret;
+}
+
+static void qcom_pas_unmap_carveout(struct rproc *rproc, phys_addr_t mem_phys, size_t size)
+{
+	if (rproc->has_iommu)
+		iommu_unmap(rproc->domain, mem_phys, size);
+}
+
+static int qcom_pas_map_carveout(struct rproc *rproc, phys_addr_t mem_phys, size_t size)
+{
+	int ret = 0;
+
+	if (rproc->has_iommu)
+		ret = iommu_map(rproc->domain, mem_phys, mem_phys, size,
+				IOMMU_READ | IOMMU_WRITE, GFP_KERNEL);
 	return ret;
 }
···
 	}

 	if (pas->dtb_pas_id) {
-		ret = qcom_scm_pas_auth_and_reset(pas->dtb_pas_id);
+		ret = qcom_pas_map_carveout(rproc, pas->dtb_mem_phys, pas->dtb_mem_size);
+		if (ret)
+			goto disable_px_supply;
+
+		ret = qcom_scm_pas_prepare_and_auth_reset(pas->dtb_pas_ctx);
 		if (ret) {
 			dev_err(pas->dev,
 				"failed to authenticate dtb image and release reset\n");
-			goto disable_px_supply;
+			goto unmap_dtb_carveout;
 		}
 	}

-	ret = qcom_mdt_pas_init(pas->dev, pas->firmware, rproc->firmware, pas->pas_id,
-				pas->mem_phys, &pas->pas_metadata);
-	if (ret)
-		goto disable_px_supply;
-
-	ret = qcom_mdt_load_no_init(pas->dev, pas->firmware, rproc->firmware,
-				    pas->mem_region, pas->mem_phys, pas->mem_size,
-				    &pas->mem_reloc);
+	ret = qcom_mdt_pas_load(pas->pas_ctx, pas->firmware, rproc->firmware,
+				pas->mem_region, &pas->mem_reloc);
 	if (ret)
 		goto release_pas_metadata;

 	qcom_pil_info_store(pas->info_name, pas->mem_phys, pas->mem_size);

-	ret = qcom_scm_pas_auth_and_reset(pas->pas_id);
+	ret = qcom_pas_map_carveout(rproc, pas->mem_phys, pas->mem_size);
+	if (ret)
+		goto release_pas_metadata;
+
+	ret = qcom_scm_pas_prepare_and_auth_reset(pas->pas_ctx);
 	if (ret) {
 		dev_err(pas->dev,
 			"failed to authenticate image and release reset\n");
-		goto release_pas_metadata;
+		goto unmap_carveout;
 	}

 	ret = qcom_q6v5_wait_for_start(&pas->q6v5, msecs_to_jiffies(5000));
 	if (ret == -ETIMEDOUT) {
 		dev_err(pas->dev, "start timed out\n");
 		qcom_scm_pas_shutdown(pas->pas_id);
-		goto release_pas_metadata;
+		goto unmap_carveout;
 	}

-	qcom_scm_pas_metadata_release(&pas->pas_metadata);
+	qcom_scm_pas_metadata_release(pas->pas_ctx);
 	if (pas->dtb_pas_id)
-		qcom_scm_pas_metadata_release(&pas->dtb_pas_metadata);
+		qcom_scm_pas_metadata_release(pas->dtb_pas_ctx);

 	/* firmware is used to pass reference from qcom_pas_start(), drop it now */
 	pas->firmware = NULL;

 	return 0;

+unmap_carveout:
+	qcom_pas_unmap_carveout(rproc, pas->mem_phys, pas->mem_size);
 release_pas_metadata:
-	qcom_scm_pas_metadata_release(&pas->pas_metadata);
+	qcom_scm_pas_metadata_release(pas->pas_ctx);
 	if (pas->dtb_pas_id)
-		qcom_scm_pas_metadata_release(&pas->dtb_pas_metadata);
+		qcom_scm_pas_metadata_release(pas->dtb_pas_ctx);
+
+unmap_dtb_carveout:
+	if (pas->dtb_pas_id)
+		qcom_pas_unmap_carveout(rproc, pas->dtb_mem_phys, pas->dtb_mem_size);
 disable_px_supply:
 	if (pas->px_supply)
 		regulator_disable(pas->px_supply);
···
 		ret = qcom_scm_pas_shutdown(pas->dtb_pas_id);
 		if (ret)
 			dev_err(pas->dev, "failed to shutdown dtb: %d\n", ret);
+
+		qcom_pas_unmap_carveout(rproc, pas->dtb_mem_phys, pas->dtb_mem_size);
 	}
+
+	qcom_pas_unmap_carveout(rproc, pas->mem_phys, pas->mem_size);

 	handover = qcom_q6v5_unprepare(&pas->q6v5);
 	if (handover)
···
 	return pas->mem_region + offset;
 }

+static int qcom_pas_parse_firmware(struct rproc *rproc, const struct firmware *fw)
+{
+	struct qcom_pas *pas = rproc->priv;
+	struct resource_table *table = NULL;
+	size_t output_rt_size;
+	void *output_rt;
+	size_t table_sz;
+	int ret;
+
+	ret = qcom_register_dump_segments(rproc, fw);
+	if (ret) {
+		dev_err(pas->dev, "Error in registering dump segments\n");
+		return ret;
+	}
+
+	if (!rproc->has_iommu)
+		return 0;
+
+	ret = rproc_elf_load_rsc_table(rproc, fw);
+	if (ret)
+		dev_dbg(&rproc->dev, "Failed to load resource table from firmware\n");
+
+	table = rproc->table_ptr;
+	table_sz = rproc->table_sz;
+
+	/*
+	 * The resources consumed by Qualcomm remote processors fall into two categories:
+	 * static (such as the memory carveouts for the rproc firmware) and dynamic (like
+	 * shared memory pools). Both are managed by a Qualcomm hypervisor (such as QHEE
+	 * or Gunyah), if one is present. Otherwise, a resource table must be retrieved
+	 * via an SCM call. That table will list all dynamic resources (if any) and possibly
+	 * the static ones. The static resources may also come from a resource table embedded
+	 * in the rproc firmware instead.
+	 *
+	 * Here, we call rproc_elf_load_rsc_table() to check firmware binary has resources
+	 * or not and if it is not having then we pass NULL and zero as input resource
+	 * table pointer and size respectively to the argument of qcom_scm_pas_get_rsc_table()
+	 * and this is even true for Qualcomm remote processor who does follow remoteproc
+	 * framework.
+	 */
+	output_rt = qcom_scm_pas_get_rsc_table(pas->pas_ctx, table, table_sz, &output_rt_size);
+	ret = IS_ERR(output_rt) ? PTR_ERR(output_rt) : 0;
+	if (ret) {
+		dev_err(pas->dev, "Error in getting resource table: %d\n", ret);
+		return ret;
+	}
+
+	kfree(rproc->cached_table);
+	rproc->cached_table = output_rt;
+	rproc->table_ptr = rproc->cached_table;
+	rproc->table_sz = output_rt_size;
+
+	return ret;
+}
+
 static unsigned long qcom_pas_panic(struct rproc *rproc)
 {
 	struct qcom_pas *pas = rproc->priv;
···
 	.start = qcom_pas_start,
 	.stop = qcom_pas_stop,
 	.da_to_va = qcom_pas_da_to_va,
-	.parse_fw = qcom_register_dump_segments,
+	.parse_fw = qcom_pas_parse_firmware,
 	.load = qcom_pas_load,
 	.panic = qcom_pas_panic,
 };
···
 	.start = qcom_pas_start,
 	.stop = qcom_pas_stop,
 	.da_to_va = qcom_pas_da_to_va,
-	.parse_fw = qcom_register_dump_segments,
+	.parse_fw = qcom_pas_parse_firmware,
 	.load = qcom_pas_load,
 	.panic = qcom_pas_panic,
 	.coredump = qcom_pas_minidump,
···
 		return -ENOMEM;
 	}

+	rproc->has_iommu = of_property_present(pdev->dev.of_node, "iommus");
 	rproc->auto_boot = desc->auto_boot;
 	rproc_coredump_set_elf_info(rproc, ELFCLASS32, EM_NONE);
···
 	}

 	qcom_add_ssr_subdev(rproc, &pas->ssr_subdev, desc->ssr_name);
+
+	pas->pas_ctx = devm_qcom_scm_pas_context_alloc(pas->dev, pas->pas_id,
+						       pas->mem_phys, pas->mem_size);
+	if (IS_ERR(pas->pas_ctx)) {
+		ret = PTR_ERR(pas->pas_ctx);
+		goto remove_ssr_sysmon;
+	}
+
+	pas->dtb_pas_ctx = devm_qcom_scm_pas_context_alloc(pas->dev, pas->dtb_pas_id,
+							   pas->dtb_mem_phys,
+							   pas->dtb_mem_size);
+	if (IS_ERR(pas->dtb_pas_ctx)) {
+		ret = PTR_ERR(pas->dtb_pas_ctx);
+		goto remove_ssr_sysmon;
+	}
+
+	pas->pas_ctx->use_tzmem = rproc->has_iommu;
+	pas->dtb_pas_ctx->use_tzmem = rproc->has_iommu;
 	ret = rproc_add(rproc);
 	if (ret)
 		goto remove_ssr_sysmon;
+3 -11
drivers/reset/Kconfig
···
 config RESET_K230
 	tristate "Reset controller driver for Canaan Kendryte K230 SoC"
 	depends on ARCH_CANAAN || COMPILE_TEST
-	depends on OF
+	default ARCH_CANAAN
 	help
 	  Support for the Canaan Kendryte K230 RISC-V SoC reset controller.
 	  Say Y if you want to control reset signals provided by this
···
 	  This enables the reset driver for the SoCFPGA ARMv7 platforms. This
 	  driver gets initialized early during platform init calls.

-config RESET_SPACEMIT
-	tristate "SpacemiT reset driver"
-	depends on ARCH_SPACEMIT || COMPILE_TEST
-	select AUXILIARY_BUS
-	default ARCH_SPACEMIT
-	help
-	  This enables the reset controller driver for SpacemiT SoCs,
-	  including the K1.
-
 config RESET_SUNPLUS
 	bool "Sunplus SoCs Reset Driver" if COMPILE_TEST
 	default ARCH_SUNPLUS
···
 	  This enables the reset controller driver for Xilinx ZynqMP SoCs.

 source "drivers/reset/amlogic/Kconfig"
+source "drivers/reset/hisilicon/Kconfig"
+source "drivers/reset/spacemit/Kconfig"
 source "drivers/reset/starfive/Kconfig"
 source "drivers/reset/sti/Kconfig"
-source "drivers/reset/hisilicon/Kconfig"
 source "drivers/reset/tegra/Kconfig"

 endif
+1 -1
drivers/reset/Makefile
···
 obj-y += core.o
 obj-y += amlogic/
 obj-y += hisilicon/
+obj-y += spacemit/
 obj-y += starfive/
 obj-y += sti/
 obj-y += tegra/
···
 obj-$(CONFIG_RESET_SCMI) += reset-scmi.o
 obj-$(CONFIG_RESET_SIMPLE) += reset-simple.o
 obj-$(CONFIG_RESET_SOCFPGA) += reset-socfpga.o
-obj-$(CONFIG_RESET_SPACEMIT) += reset-spacemit.o
 obj-$(CONFIG_RESET_SUNPLUS) += reset-sunplus.o
 obj-$(CONFIG_RESET_SUNXI) += reset-sunxi.o
 obj-$(CONFIG_RESET_TH1520) += reset-th1520.o
+4 -3
drivers/reset/core.c
···
  */
 static int __reset_add_reset_gpio_device(const struct of_phandle_args *args)
 {
-	struct property_entry properties[2] = { };
+	struct property_entry properties[3] = { };
 	unsigned int offset, of_flags, lflags;
 	struct reset_gpio_lookup *rgpio_dev;
 	struct device *parent;
-	int id, ret;
+	int id, ret, prop = 0;

 	/*
 	 * Currently only #gpio-cells=2 is supported with the meaning of:
···
 	lflags = GPIO_PERSISTENT | (of_flags & GPIO_ACTIVE_LOW);
 	parent = gpio_device_to_device(gdev);
-	properties[0] = PROPERTY_ENTRY_GPIO("reset-gpios", parent->fwnode, offset, lflags);
+	properties[prop++] = PROPERTY_ENTRY_STRING("compatible", "reset-gpio");
+	properties[prop++] = PROPERTY_ENTRY_GPIO("reset-gpios", parent->fwnode, offset, lflags);

 	id = ida_alloc(&reset_gpio_ida, GFP_KERNEL);
 	if (id < 0)
+3 -6
drivers/reset/reset-gpio.c
···
 {
 	struct reset_gpio_priv *priv = rc_to_reset_gpio(rc);

-	gpiod_set_value_cansleep(priv->reset, 1);
-
-	return 0;
+	return gpiod_set_value_cansleep(priv->reset, 1);
 }

 static int reset_gpio_deassert(struct reset_controller_dev *rc,
···
 {
 	struct reset_gpio_priv *priv = rc_to_reset_gpio(rc);

-	gpiod_set_value_cansleep(priv->reset, 0);
-
-	return 0;
+	return gpiod_set_value_cansleep(priv->reset, 0);
 }

 static int reset_gpio_status(struct reset_controller_dev *rc, unsigned long id)
···
 	.id_table = reset_gpio_ids,
 	.driver = {
 		.name = "reset-gpio",
+		.suppress_bind_attrs = true,
 	},
 };
 module_auxiliary_driver(reset_gpio_driver);
+125 -44
drivers/reset/reset-imx8mp-audiomix.c
···
  * Copyright 2024 NXP
  */

+#include <dt-bindings/reset/fsl,imx8ulp-sim-lpav.h>
 #include <dt-bindings/reset/imx8mp-reset-audiomix.h>

 #include <linux/auxiliary_bus.h>
+#include <linux/bits.h>
 #include <linux/device.h>
 #include <linux/io.h>
 #include <linux/module.h>
 #include <linux/of.h>
 #include <linux/of_address.h>
+#include <linux/regmap.h>
 #include <linux/reset-controller.h>

 #define IMX8MP_AUDIOMIX_EARC_RESET_OFFSET	0x200
-#define IMX8MP_AUDIOMIX_EARC_RESET_MASK		BIT(0)
-#define IMX8MP_AUDIOMIX_EARC_PHY_RESET_MASK	BIT(1)
-
 #define IMX8MP_AUDIOMIX_DSP_RUNSTALL_OFFSET	0x108
+
+#define IMX8ULP_SIM_LPAV_SYSCTRL0_OFFSET	0x8

 struct imx8mp_reset_map {
 	unsigned int offset;
···
 	bool active_low;
 };

-static const struct imx8mp_reset_map reset_map[] = {
+struct imx8mp_reset_info {
+	const struct imx8mp_reset_map *map;
+	int num_lines;
+};
+
+static const struct imx8mp_reset_map imx8mp_reset_map[] = {
 	[IMX8MP_AUDIOMIX_EARC_RESET] = {
 		.offset = IMX8MP_AUDIOMIX_EARC_RESET_OFFSET,
-		.mask = IMX8MP_AUDIOMIX_EARC_RESET_MASK,
+		.mask = BIT(0),
 		.active_low = true,
 	},
 	[IMX8MP_AUDIOMIX_EARC_PHY_RESET] = {
 		.offset = IMX8MP_AUDIOMIX_EARC_RESET_OFFSET,
-		.mask = IMX8MP_AUDIOMIX_EARC_PHY_RESET_MASK,
+		.mask = BIT(1),
 		.active_low = true,
 	},
 	[IMX8MP_AUDIOMIX_DSP_RUNSTALL] = {
 		.offset = IMX8MP_AUDIOMIX_DSP_RUNSTALL_OFFSET,
-		.mask = IMX8MP_AUDIOMIX_DSP_RUNSTALL_MASK,
+		.mask = BIT(5),
 		.active_low = false,
 	},
 };

+static const struct imx8mp_reset_info imx8mp_reset_info = {
+	.map = imx8mp_reset_map,
+	.num_lines = ARRAY_SIZE(imx8mp_reset_map),
+};
+
+static const struct imx8mp_reset_map imx8ulp_reset_map[] = {
+	[IMX8ULP_SIM_LPAV_HIFI4_DSP_DBG_RST] = {
+		.offset = IMX8ULP_SIM_LPAV_SYSCTRL0_OFFSET,
+		.mask = BIT(25),
+		.active_low = false,
+	},
+	[IMX8ULP_SIM_LPAV_HIFI4_DSP_RST] = {
+		.offset = IMX8ULP_SIM_LPAV_SYSCTRL0_OFFSET,
+		.mask = BIT(16),
+		.active_low = false,
+	},
+	[IMX8ULP_SIM_LPAV_HIFI4_DSP_STALL] = {
+		.offset = IMX8ULP_SIM_LPAV_SYSCTRL0_OFFSET,
+		.mask = BIT(13),
+		.active_low = false,
+	},
+	[IMX8ULP_SIM_LPAV_DSI_RST_BYTE_N] = {
+		.offset = IMX8ULP_SIM_LPAV_SYSCTRL0_OFFSET,
+		.mask = BIT(5),
+		.active_low = true,
+	},
+	[IMX8ULP_SIM_LPAV_DSI_RST_ESC_N] = {
+		.offset = IMX8ULP_SIM_LPAV_SYSCTRL0_OFFSET,
+		.mask = BIT(4),
+		.active_low = true,
+	},
+	[IMX8ULP_SIM_LPAV_DSI_RST_DPI_N] = {
+		.offset = IMX8ULP_SIM_LPAV_SYSCTRL0_OFFSET,
+		.mask = BIT(3),
+		.active_low = true,
+	},
+};
+
+static const struct imx8mp_reset_info imx8ulp_reset_info = {
+	.map = imx8ulp_reset_map,
+	.num_lines = ARRAY_SIZE(imx8ulp_reset_map),
+};
+
 struct imx8mp_audiomix_reset {
 	struct reset_controller_dev rcdev;
-	spinlock_t lock; /* protect register read-modify-write cycle */
-	void __iomem *base;
+	struct regmap *regmap;
+	const struct imx8mp_reset_map *map;
 };

 static struct imx8mp_audiomix_reset *to_imx8mp_audiomix_reset(struct reset_controller_dev *rcdev)
···
 					   unsigned long id, bool assert)
 {
 	struct imx8mp_audiomix_reset *priv = to_imx8mp_audiomix_reset(rcdev);
-	void __iomem *reg_addr = priv->base;
-	unsigned int mask, offset, active_low;
-	unsigned long reg, flags;
+	const struct imx8mp_reset_map *reset_map = priv->map;
+	unsigned int mask, offset, active_low, val;

 	mask = reset_map[id].mask;
 	offset = reset_map[id].offset;
 	active_low = reset_map[id].active_low;
+	val = (active_low ^ assert) ? mask : ~mask;

-	spin_lock_irqsave(&priv->lock, flags);
-
-	reg = readl(reg_addr + offset);
-	if (active_low ^ assert)
-		reg |= mask;
-	else
-		reg &= ~mask;
-	writel(reg, reg_addr + offset);
-
-	spin_unlock_irqrestore(&priv->lock, flags);
-
-	return 0;
+	return regmap_update_bits(priv->regmap, offset, mask, val);
 }

 static int imx8mp_audiomix_reset_assert(struct reset_controller_dev *rcdev,
···
 	.deassert = imx8mp_audiomix_reset_deassert,
 };

+static const struct regmap_config regmap_config = {
+	.reg_bits = 32,
+	.val_bits = 32,
+	.reg_stride = 4,
+};
+
+/* assumption: registered only if not using parent regmap */
+static void imx8mp_audiomix_reset_iounmap(void *data)
+{
+	void __iomem *base = (void __iomem *)data;
+
+	iounmap(base);
+}
+
+static int imx8mp_audiomix_reset_get_regmap(struct imx8mp_audiomix_reset *priv)
+{
+	void __iomem *base;
+	struct device *dev;
+	int ret;
+
+	dev = priv->rcdev.dev;
+
+	/* try to use the parent's regmap */
+	priv->regmap = dev_get_regmap(dev->parent, NULL);
+	if (priv->regmap)
+		return 0;
+
+	/* ... if that's not possible then initialize the regmap right now */
+	base = of_iomap(dev->parent->of_node, 0);
+	if (!base)
+		return dev_err_probe(dev, -ENOMEM, "failed to iomap address space\n");
+
+	ret = devm_add_action_or_reset(dev,
+				       imx8mp_audiomix_reset_iounmap,
+				       (void __force *)base);
+	if (ret)
+		return dev_err_probe(dev, ret, "failed to register action\n");
+
+	priv->regmap = devm_regmap_init_mmio(dev, base, &regmap_config);
+	if (IS_ERR(priv->regmap))
+		return dev_err_probe(dev, PTR_ERR(priv->regmap),
+				     "failed to initialize regmap\n");
+
+	return 0;
+}
+
 static int imx8mp_audiomix_reset_probe(struct auxiliary_device *adev,
 				       const struct auxiliary_device_id *id)
 {
+	const struct imx8mp_reset_info *rinfo;
 	struct imx8mp_audiomix_reset *priv;
 	struct device *dev = &adev->dev;
 	int ret;
+
+	rinfo = (void *)id->driver_data;

 	priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
 	if (!priv)
 		return -ENOMEM;

-	spin_lock_init(&priv->lock);
-
 	priv->rcdev.owner = THIS_MODULE;
-	priv->rcdev.nr_resets = ARRAY_SIZE(reset_map);
+	priv->map = rinfo->map;
+	priv->rcdev.nr_resets = rinfo->num_lines;
 	priv->rcdev.ops = &imx8mp_audiomix_reset_ops;
 	priv->rcdev.of_node = dev->parent->of_node;
 	priv->rcdev.dev = dev;
 	priv->rcdev.of_reset_n_cells = 1;
-	priv->base = of_iomap(dev->parent->of_node, 0);
-	if (!priv->base)
-		return -ENOMEM;

 	dev_set_drvdata(dev, priv);

+	ret = imx8mp_audiomix_reset_get_regmap(priv);
+	if (ret)
+		return dev_err_probe(dev, ret, "failed to get regmap\n");
+
 	ret = devm_reset_controller_register(dev, &priv->rcdev);
 	if (ret)
-		goto out_unmap;
+		return dev_err_probe(dev, ret,
+				     "failed to register reset controller\n");

 	return 0;
-
-out_unmap:
-	iounmap(priv->base);
-	return ret;
-}
-
-static void imx8mp_audiomix_reset_remove(struct auxiliary_device *adev)
-{
-	struct imx8mp_audiomix_reset *priv = dev_get_drvdata(&adev->dev);
-
-	iounmap(priv->base);
 }

 static const struct auxiliary_device_id imx8mp_audiomix_reset_ids[] = {
 	{
 		.name = "clk_imx8mp_audiomix.reset",
+		.driver_data = (kernel_ulong_t)&imx8mp_reset_info,
+	},
+	{
+		.name = "clk_imx8ulp_sim_lpav.reset",
+		.driver_data = (kernel_ulong_t)&imx8ulp_reset_info,
 	},
 	{ }
 };
···
 static struct auxiliary_driver imx8mp_audiomix_reset_driver = {
 	.probe = imx8mp_audiomix_reset_probe,
-	.remove = imx8mp_audiomix_reset_remove,
 	.id_table = imx8mp_audiomix_reset_ids,
 };
+91 -19
drivers/reset/reset-rzg2l-usbphy-ctrl.c
···
 	struct reset_control *rstc;
 	void __iomem *base;
 	struct platform_device *vdev;
+	struct regmap_field *pwrrdy;

 	spinlock_t lock;
 };
···
 	return !!(readl(priv->base + RESET) & port_mask);
 }

+/* put pll and phy into reset state */
+static void rzg2l_usbphy_ctrl_init(struct rzg2l_usbphy_ctrl_priv *priv)
+{
+	unsigned long flags;
+	u32 val;
+
+	spin_lock_irqsave(&priv->lock, flags);
+	val = readl(priv->base + RESET);
+	val |= RESET_SEL_PLLRESET | RESET_PLLRESET | PHY_RESET_PORT2 | PHY_RESET_PORT1;
+	writel(val, priv->base + RESET);
+	spin_unlock_irqrestore(&priv->lock, flags);
+}
+
 #define RZG2L_USBPHY_CTRL_PWRRDY	1

 static const struct of_device_id rzg2l_usbphy_ctrl_match_table[] = {
···
 	.max_register = 1,
 };

-static void rzg2l_usbphy_ctrl_set_pwrrdy(struct regmap_field *pwrrdy,
-					 bool power_on)
+static int rzg2l_usbphy_ctrl_set_pwrrdy(struct regmap_field *pwrrdy,
+					bool power_on)
 {
 	u32 val = power_on ? 0 : 1;

 	/* The initialization path guarantees that the mask is 1 bit long. */
-	regmap_field_update_bits(pwrrdy, 1, val);
+	return regmap_field_update_bits(pwrrdy, 1, val);
 }

 static void rzg2l_usbphy_ctrl_pwrrdy_off(void *data)
···
 	rzg2l_usbphy_ctrl_set_pwrrdy(data, false);
 }

-static int rzg2l_usbphy_ctrl_pwrrdy_init(struct device *dev)
+static int rzg2l_usbphy_ctrl_pwrrdy_init(struct device *dev,
+					 struct rzg2l_usbphy_ctrl_priv *priv)
 {
-	struct regmap_field *pwrrdy;
 	struct reg_field field;
 	struct regmap *regmap;
 	const int *data;
 	u32 args[2];
+	int ret;

 	data = device_get_match_data(dev);
 	if ((uintptr_t)data != RZG2L_USBPHY_CTRL_PWRRDY)
···
 	field.lsb = __ffs(args[1]);
 	field.msb = __fls(args[1]);

-	pwrrdy = devm_regmap_field_alloc(dev, regmap, field);
-	if (IS_ERR(pwrrdy))
-		return PTR_ERR(pwrrdy);
+	priv->pwrrdy = devm_regmap_field_alloc(dev, regmap, field);
+	if (IS_ERR(priv->pwrrdy))
+		return PTR_ERR(priv->pwrrdy);

-	rzg2l_usbphy_ctrl_set_pwrrdy(pwrrdy, true);
+	ret = rzg2l_usbphy_ctrl_set_pwrrdy(priv->pwrrdy, true);
+	if (ret)
+		return ret;

-	return devm_add_action_or_reset(dev, rzg2l_usbphy_ctrl_pwrrdy_off, pwrrdy);
+	return devm_add_action_or_reset(dev, rzg2l_usbphy_ctrl_pwrrdy_off, priv->pwrrdy);
 }

 static int rzg2l_usbphy_ctrl_probe(struct platform_device *pdev)
···
 	struct rzg2l_usbphy_ctrl_priv *priv;
 	struct platform_device *vdev;
 	struct regmap *regmap;
-	unsigned long flags;
 	int error;
-	u32 val;

 	priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
 	if (!priv)
···
 	if (IS_ERR(regmap))
 		return PTR_ERR(regmap);

-	error = rzg2l_usbphy_ctrl_pwrrdy_init(dev);
+	error = rzg2l_usbphy_ctrl_pwrrdy_init(dev, priv);
 	if (error)
 		return error;
···
 		goto err_pm_disable_reset_deassert;
 	}

-	/* put pll and phy into reset state */
-	spin_lock_irqsave(&priv->lock, flags);
-	val = readl(priv->base + RESET);
-	val |= RESET_SEL_PLLRESET | RESET_PLLRESET | PHY_RESET_PORT2 | PHY_RESET_PORT1;
-	writel(val, priv->base + RESET);
-	spin_unlock_irqrestore(&priv->lock, flags);
+	rzg2l_usbphy_ctrl_init(priv);

 	priv->rcdev.ops = &rzg2l_usbphy_ctrl_reset_ops;
 	priv->rcdev.of_reset_n_cells = 1;
···
 	reset_control_assert(priv->rstc);
 }

+static int rzg2l_usbphy_ctrl_suspend(struct device *dev)
+{
+	struct rzg2l_usbphy_ctrl_priv *priv = dev_get_drvdata(dev);
+	u32 val;
+	int ret;
+
+	val = readl(priv->base + RESET);
+	if (!(val & PHY_RESET_PORT2) || !(val & PHY_RESET_PORT1))
+		WARN(1, "Suspend with resets de-asserted\n");
+
+	pm_runtime_put_sync(dev);
+
+	ret = reset_control_assert(priv->rstc);
+	if (ret)
+		goto rpm_resume;
+
+	ret = rzg2l_usbphy_ctrl_set_pwrrdy(priv->pwrrdy, false);
+	if (ret)
+		goto reset_deassert;
+
+	return 0;
+
+reset_deassert:
+	reset_control_deassert(priv->rstc);
+rpm_resume:
+	pm_runtime_resume_and_get(dev);
+	return ret;
+}
+
+static int rzg2l_usbphy_ctrl_resume(struct device *dev)
+{
+	struct rzg2l_usbphy_ctrl_priv *priv = dev_get_drvdata(dev);
+	int ret;
+
+	ret = rzg2l_usbphy_ctrl_set_pwrrdy(priv->pwrrdy, true);
+	if (ret)
+		return ret;
+
+	ret = reset_control_deassert(priv->rstc);
+	if (ret)
+		goto pwrrdy_off;
+
+	ret = pm_runtime_resume_and_get(dev);
+	if (ret)
+		goto reset_assert;
+
+	rzg2l_usbphy_ctrl_init(priv);
+
+	return 0;
+
+reset_assert:
+	reset_control_assert(priv->rstc);
+pwrrdy_off:
+	rzg2l_usbphy_ctrl_set_pwrrdy(priv->pwrrdy, false);
+	return ret;
+}
+
+static DEFINE_SIMPLE_DEV_PM_OPS(rzg2l_usbphy_ctrl_pm_ops,
+				rzg2l_usbphy_ctrl_suspend,
+				rzg2l_usbphy_ctrl_resume);
+
 static struct platform_driver rzg2l_usbphy_ctrl_driver = {
 	.driver = {
 		.name = "rzg2l_usbphy_ctrl",
 		.of_match_table = rzg2l_usbphy_ctrl_match_table,
+		.pm = pm_ptr(&rzg2l_usbphy_ctrl_pm_ops),
 	},
 	.probe = rzg2l_usbphy_ctrl_probe,
 	.remove = rzg2l_usbphy_ctrl_remove,
+10 -99
drivers/reset/reset-spacemit.c → drivers/reset/spacemit/reset-spacemit-k1.c (renamed)
···
 // SPDX-License-Identifier: GPL-2.0-only

-/* SpacemiT reset controller driver */
+/* SpacemiT K1 reset controller driver */

-#include <linux/auxiliary_bus.h>
-#include <linux/container_of.h>
-#include <linux/device.h>
 #include <linux/module.h>
-#include <linux/regmap.h>
-#include <linux/reset-controller.h>
-#include <linux/types.h>

-#include <soc/spacemit/k1-syscon.h>
 #include <dt-bindings/clock/spacemit,k1-syscon.h>
+#include <soc/spacemit/k1-syscon.h>

-struct ccu_reset_data {
-	u32 offset;
-	u32 assert_mask;
-	u32 deassert_mask;
-};
-
-struct ccu_reset_controller_data {
-	const struct ccu_reset_data *reset_data;	/* array */
-	size_t count;
-};
-
-struct ccu_reset_controller {
-	struct reset_controller_dev rcdev;
-	const struct ccu_reset_controller_data *data;
-	struct regmap *regmap;
-};
-
-#define RESET_DATA(_offset, _assert_mask, _deassert_mask)	\
-	{							\
-		.offset = (_offset),				\
-		.assert_mask = (_assert_mask),			\
-		.deassert_mask = (_deassert_mask),		\
-	}
+#include "reset-spacemit-common.h"

 static const struct ccu_reset_data k1_mpmu_resets[] = {
 	[RESET_WDT] = RESET_DATA(MPMU_WDTPCR, BIT(2), 0),
···
 	.count = ARRAY_SIZE(k1_apbc2_resets),
 };

-static int spacemit_reset_update(struct reset_controller_dev *rcdev,
-				 unsigned long id, bool assert)
-{
-	struct ccu_reset_controller *controller;
-	const struct ccu_reset_data *data;
-	u32 mask;
-	u32 val;
-
-	controller = container_of(rcdev, struct ccu_reset_controller, rcdev);
-	data = &controller->data->reset_data[id];
-	mask = data->assert_mask | data->deassert_mask;
-	val = assert ? data->assert_mask : data->deassert_mask;
-
-	return regmap_update_bits(controller->regmap, data->offset, mask, val);
-}
-
-static int spacemit_reset_assert(struct reset_controller_dev *rcdev,
-				 unsigned long id)
-{
-	return spacemit_reset_update(rcdev, id, true);
-}
-
-static int spacemit_reset_deassert(struct reset_controller_dev *rcdev,
-				   unsigned long id)
-{
-	return spacemit_reset_update(rcdev, id, false);
-}
-
-static const struct reset_control_ops spacemit_reset_control_ops = {
-	.assert = spacemit_reset_assert,
-	.deassert = spacemit_reset_deassert,
-};
-
-static int spacemit_reset_controller_register(struct device *dev,
-					      struct ccu_reset_controller *controller)
-{
-	struct reset_controller_dev *rcdev = &controller->rcdev;
-
-	rcdev->ops = &spacemit_reset_control_ops;
-	rcdev->owner = THIS_MODULE;
-	rcdev->of_node = dev->of_node;
-	rcdev->nr_resets = controller->data->count;
-
-	return devm_reset_controller_register(dev, &controller->rcdev);
-}
-
-static int spacemit_reset_probe(struct auxiliary_device *adev,
-				const struct auxiliary_device_id *id)
-{
-	struct spacemit_ccu_adev *rdev = to_spacemit_ccu_adev(adev);
-	struct ccu_reset_controller *controller;
-	struct device *dev = &adev->dev;
-
-	controller = devm_kzalloc(dev, sizeof(*controller), GFP_KERNEL);
-	if (!controller)
-		return -ENOMEM;
-	controller->data = (const struct ccu_reset_controller_data *)id->driver_data;
-	controller->regmap = rdev->regmap;
-
-	return spacemit_reset_controller_register(dev, controller);
-}
-
 #define K1_AUX_DEV_ID(_unit)						\
 	{								\
-		.name = "spacemit_ccu_k1." #_unit "-reset",		\
+		.name = "spacemit_ccu.k1-" #_unit "-reset",		\
 		.driver_data = (kernel_ulong_t)&k1_ ## _unit ## _reset_data,	\
 	}

-static const struct auxiliary_device_id spacemit_reset_ids[] = {
+static const struct auxiliary_device_id spacemit_k1_reset_ids[] = {
 	K1_AUX_DEV_ID(mpmu),
 	K1_AUX_DEV_ID(apbc),
 	K1_AUX_DEV_ID(apmu),
 	K1_AUX_DEV_ID(rcpu),
 	K1_AUX_DEV_ID(rcpu2),
 	K1_AUX_DEV_ID(apbc2),
-	{ },
+	{ /* sentinel */ }
 };
-MODULE_DEVICE_TABLE(auxiliary, spacemit_reset_ids);
+MODULE_DEVICE_TABLE(auxiliary, spacemit_k1_reset_ids);

 static struct auxiliary_driver spacemit_k1_reset_driver = {
 	.probe = spacemit_reset_probe,
-	.id_table = spacemit_reset_ids,
+	.id_table = spacemit_k1_reset_ids,
 };
 module_auxiliary_driver(spacemit_k1_reset_driver);

+MODULE_IMPORT_NS("RESET_SPACEMIT");
 MODULE_AUTHOR("Alex Elder <elder@kernel.org>");
-MODULE_DESCRIPTION("SpacemiT reset controller driver");
+MODULE_DESCRIPTION("SpacemiT K1 reset controller driver");
 MODULE_LICENSE("GPL");
+36
drivers/reset/spacemit/Kconfig
···
+# SPDX-License-Identifier: GPL-2.0-only
+
+menu "Reset support for SpacemiT platforms"
+	depends on ARCH_SPACEMIT || COMPILE_TEST
+
+config RESET_SPACEMIT_COMMON
+	tristate
+	select AUXILIARY_BUS
+	help
+	  Common reset controller infrastructure for SpacemiT SoCs.
+	  This provides shared code and helper functions used by
+	  reset drivers for various SpacemiT SoC families.
+
+config RESET_SPACEMIT_K1
+	tristate "Support for SpacemiT K1 SoC"
+	depends on SPACEMIT_K1_CCU
+	select RESET_SPACEMIT_COMMON
+	default SPACEMIT_K1_CCU
+	help
+	  Support for reset controller in SpacemiT K1 SoC.
+	  This driver works with the SpacemiT K1 clock controller
+	  unit (CCU) driver to provide reset control functionality
+	  for various peripherals and subsystems in the SoC.
+
+config RESET_SPACEMIT_K3
+	tristate "Support for SpacemiT K3 SoC"
+	depends on SPACEMIT_K3_CCU
+	select RESET_SPACEMIT_COMMON
+	default SPACEMIT_K3_CCU
+	help
+	  Support for reset controller in SpacemiT K3 SoC.
+	  This driver works with the SpacemiT K3 clock controller
+	  unit (CCU) driver to provide reset control functionality
+	  for various peripherals and subsystems in the SoC.
+
+endmenu
+5
drivers/reset/spacemit/Makefile
···
+# SPDX-License-Identifier: GPL-2.0
+obj-$(CONFIG_RESET_SPACEMIT_COMMON)	+= reset-spacemit-common.o
+
+obj-$(CONFIG_RESET_SPACEMIT_K1)		+= reset-spacemit-k1.o
+obj-$(CONFIG_RESET_SPACEMIT_K3)		+= reset-spacemit-k3.o
+77
drivers/reset/spacemit/reset-spacemit-common.c
···
+// SPDX-License-Identifier: GPL-2.0-only
+
+/* SpacemiT reset controller driver - common implementation */
+
+#include <linux/container_of.h>
+#include <linux/device.h>
+#include <linux/module.h>
+
+#include <soc/spacemit/ccu.h>
+
+#include "reset-spacemit-common.h"
+
+static int spacemit_reset_update(struct reset_controller_dev *rcdev,
+				 unsigned long id, bool assert)
+{
+	struct ccu_reset_controller *controller;
+	const struct ccu_reset_data *data;
+	u32 mask;
+	u32 val;
+
+	controller = container_of(rcdev, struct ccu_reset_controller, rcdev);
+	data = &controller->data->reset_data[id];
+	mask = data->assert_mask | data->deassert_mask;
+	val = assert ? data->assert_mask : data->deassert_mask;
+
+	return regmap_update_bits(controller->regmap, data->offset, mask, val);
+}
+
+static int spacemit_reset_assert(struct reset_controller_dev *rcdev,
+				 unsigned long id)
+{
+	return spacemit_reset_update(rcdev, id, true);
+}
+
+static int spacemit_reset_deassert(struct reset_controller_dev *rcdev,
+				   unsigned long id)
+{
+	return spacemit_reset_update(rcdev, id, false);
+}
+
+static const struct reset_control_ops spacemit_reset_control_ops = {
+	.assert = spacemit_reset_assert,
+	.deassert = spacemit_reset_deassert,
+};
+
+static int spacemit_reset_controller_register(struct device *dev,
+					      struct ccu_reset_controller *controller)
+{
+	struct reset_controller_dev *rcdev = &controller->rcdev;
+
+	rcdev->ops = &spacemit_reset_control_ops;
+	rcdev->owner = dev->driver->owner;
+	rcdev->of_node = dev->of_node;
+	rcdev->nr_resets = controller->data->count;
+
+	return devm_reset_controller_register(dev, &controller->rcdev);
+}
+
+int spacemit_reset_probe(struct auxiliary_device *adev,
+			 const struct auxiliary_device_id *id)
+{
+	struct spacemit_ccu_adev *rdev = to_spacemit_ccu_adev(adev);
+	struct ccu_reset_controller *controller;
+	struct device *dev = &adev->dev;
+
+	controller = devm_kzalloc(dev, sizeof(*controller), GFP_KERNEL);
+	if (!controller)
+		return -ENOMEM;
+	controller->data = (const struct ccu_reset_controller_data *)id->driver_data;
+	controller->regmap = rdev->regmap;
+
+	return spacemit_reset_controller_register(dev, controller);
+}
+EXPORT_SYMBOL_NS_GPL(spacemit_reset_probe, "RESET_SPACEMIT");
+
+MODULE_DESCRIPTION("SpacemiT reset controller driver - common code");
+MODULE_LICENSE("GPL");
+42
drivers/reset/spacemit/reset-spacemit-common.h
···
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * SpacemiT reset controller driver - common definitions
+ */
+
+#ifndef _RESET_SPACEMIT_COMMON_H_
+#define _RESET_SPACEMIT_COMMON_H_
+
+#include <linux/auxiliary_bus.h>
+#include <linux/regmap.h>
+#include <linux/reset-controller.h>
+#include <linux/types.h>
+
+struct ccu_reset_data {
+	u32 offset;
+	u32 assert_mask;
+	u32 deassert_mask;
+};
+
+struct ccu_reset_controller_data {
+	const struct ccu_reset_data *reset_data;	/* array */
+	size_t count;
+};
+
+struct ccu_reset_controller {
+	struct reset_controller_dev rcdev;
+	const struct ccu_reset_controller_data *data;
+	struct regmap *regmap;
+};
+
+#define RESET_DATA(_offset, _assert_mask, _deassert_mask)	\
+	{							\
+		.offset = (_offset),				\
+		.assert_mask = (_assert_mask),			\
+		.deassert_mask = (_deassert_mask),		\
+	}
+
+/* Common probe function */
+int spacemit_reset_probe(struct auxiliary_device *adev,
+			 const struct auxiliary_device_id *id);
+
+#endif /* _RESET_SPACEMIT_COMMON_H_ */
+233
drivers/reset/spacemit/reset-spacemit-k3.c
···
+// SPDX-License-Identifier: GPL-2.0-only
+
+/* SpacemiT K3 reset controller driver */
+
+#include <linux/module.h>
+
+#include <dt-bindings/reset/spacemit,k3-resets.h>
+#include <soc/spacemit/k3-syscon.h>
+
+#include "reset-spacemit-common.h"
+
+static const struct ccu_reset_data k3_mpmu_resets[] = {
+	[RESET_MPMU_WDT]	= RESET_DATA(MPMU_WDTPCR, BIT(2), 0),
+	[RESET_MPMU_RIPC]	= RESET_DATA(MPMU_RIPCCR, BIT(2), 0),
+};
+
+static const struct ccu_reset_controller_data k3_mpmu_reset_data = {
+	.reset_data	= k3_mpmu_resets,
+	.count		= ARRAY_SIZE(k3_mpmu_resets),
+};
+
+static const struct ccu_reset_data k3_apbc_resets[] = {
+	[RESET_APBC_UART0]	= RESET_DATA(APBC_UART0_CLK_RST, BIT(2), 0),
+	[RESET_APBC_UART2]	= RESET_DATA(APBC_UART2_CLK_RST, BIT(2), 0),
+	[RESET_APBC_UART3]	= RESET_DATA(APBC_UART3_CLK_RST, BIT(2), 0),
+	[RESET_APBC_UART4]	= RESET_DATA(APBC_UART4_CLK_RST, BIT(2), 0),
+	[RESET_APBC_UART5]	= RESET_DATA(APBC_UART5_CLK_RST, BIT(2), 0),
+	[RESET_APBC_UART6]	= RESET_DATA(APBC_UART6_CLK_RST, BIT(2), 0),
+	[RESET_APBC_UART7]	= RESET_DATA(APBC_UART7_CLK_RST, BIT(2), 0),
+	[RESET_APBC_UART8]	= RESET_DATA(APBC_UART8_CLK_RST, BIT(2), 0),
+	[RESET_APBC_UART9]	= RESET_DATA(APBC_UART9_CLK_RST, BIT(2), 0),
+	[RESET_APBC_UART10]	= RESET_DATA(APBC_UART10_CLK_RST, BIT(2), 0),
+	[RESET_APBC_GPIO]	= RESET_DATA(APBC_GPIO_CLK_RST, BIT(2), 0),
+	[RESET_APBC_PWM0]	= RESET_DATA(APBC_PWM0_CLK_RST, BIT(2), 0),
+	[RESET_APBC_PWM1]	= RESET_DATA(APBC_PWM1_CLK_RST, BIT(2), 0),
+	[RESET_APBC_PWM2]	= RESET_DATA(APBC_PWM2_CLK_RST, BIT(2), 0),
+	[RESET_APBC_PWM3]	= RESET_DATA(APBC_PWM3_CLK_RST, BIT(2), 0),
+	[RESET_APBC_PWM4]	= RESET_DATA(APBC_PWM4_CLK_RST, BIT(2), 0),
+	[RESET_APBC_PWM5]	= RESET_DATA(APBC_PWM5_CLK_RST, BIT(2), 0),
+	[RESET_APBC_PWM6]	= RESET_DATA(APBC_PWM6_CLK_RST, BIT(2), 0),
+	[RESET_APBC_PWM7]	= RESET_DATA(APBC_PWM7_CLK_RST, BIT(2), 0),
+	[RESET_APBC_PWM8]	= RESET_DATA(APBC_PWM8_CLK_RST, BIT(2), 0),
+	[RESET_APBC_PWM9]	= RESET_DATA(APBC_PWM9_CLK_RST, BIT(2), 0),
+	[RESET_APBC_PWM10]	= RESET_DATA(APBC_PWM10_CLK_RST, BIT(2), 0),
+	[RESET_APBC_PWM11]	= RESET_DATA(APBC_PWM11_CLK_RST, BIT(2), 0),
+	[RESET_APBC_PWM12]	= RESET_DATA(APBC_PWM12_CLK_RST, BIT(2), 0),
+	[RESET_APBC_PWM13]	= RESET_DATA(APBC_PWM13_CLK_RST, BIT(2), 0),
+	[RESET_APBC_PWM14]	= RESET_DATA(APBC_PWM14_CLK_RST, BIT(2), 0),
+	[RESET_APBC_PWM15]	= RESET_DATA(APBC_PWM15_CLK_RST, BIT(2), 0),
+	[RESET_APBC_PWM16]	= RESET_DATA(APBC_PWM16_CLK_RST, BIT(2), 0),
+	[RESET_APBC_PWM17]	= RESET_DATA(APBC_PWM17_CLK_RST, BIT(2), 0),
+	[RESET_APBC_PWM18]	= RESET_DATA(APBC_PWM18_CLK_RST, BIT(2), 0),
+	[RESET_APBC_PWM19]	= RESET_DATA(APBC_PWM19_CLK_RST, BIT(2), 0),
+	[RESET_APBC_SPI0]	= RESET_DATA(APBC_SSP0_CLK_RST, BIT(2), 0),
+	[RESET_APBC_SPI1]	= RESET_DATA(APBC_SSP1_CLK_RST, BIT(2), 0),
+	[RESET_APBC_SPI3]	= RESET_DATA(APBC_SSP3_CLK_RST, BIT(2), 0),
+	[RESET_APBC_RTC]	= RESET_DATA(APBC_RTC_CLK_RST, BIT(2), 0),
+	[RESET_APBC_TWSI0]	= RESET_DATA(APBC_TWSI0_CLK_RST, BIT(2), 0),
+	[RESET_APBC_TWSI1]	= RESET_DATA(APBC_TWSI1_CLK_RST, BIT(2), 0),
+	[RESET_APBC_TWSI2]	= RESET_DATA(APBC_TWSI2_CLK_RST, BIT(2), 0),
+	[RESET_APBC_TWSI4]	= RESET_DATA(APBC_TWSI4_CLK_RST, BIT(2), 0),
+	[RESET_APBC_TWSI5]	= RESET_DATA(APBC_TWSI5_CLK_RST, BIT(2), 0),
+	[RESET_APBC_TWSI6]	= RESET_DATA(APBC_TWSI6_CLK_RST, BIT(2), 0),
+	[RESET_APBC_TWSI8]	= RESET_DATA(APBC_TWSI8_CLK_RST, BIT(2), 0),
+	[RESET_APBC_TIMERS0]	= RESET_DATA(APBC_TIMERS0_CLK_RST, BIT(2), 0),
+	[RESET_APBC_TIMERS1]	= RESET_DATA(APBC_TIMERS1_CLK_RST, BIT(2), 0),
+	[RESET_APBC_TIMERS2]	= RESET_DATA(APBC_TIMERS2_CLK_RST, BIT(2), 0),
+	[RESET_APBC_TIMERS3]	= RESET_DATA(APBC_TIMERS3_CLK_RST, BIT(2), 0),
+	[RESET_APBC_TIMERS4]	= RESET_DATA(APBC_TIMERS4_CLK_RST, BIT(2), 0),
+	[RESET_APBC_TIMERS5]	= RESET_DATA(APBC_TIMERS5_CLK_RST, BIT(2), 0),
+	[RESET_APBC_TIMERS6]	= RESET_DATA(APBC_TIMERS6_CLK_RST, BIT(2), 0),
+	[RESET_APBC_TIMERS7]	= RESET_DATA(APBC_TIMERS7_CLK_RST, BIT(2), 0),
+	[RESET_APBC_AIB]	= RESET_DATA(APBC_AIB_CLK_RST, BIT(2), 0),
+	[RESET_APBC_ONEWIRE]	= RESET_DATA(APBC_ONEWIRE_CLK_RST, BIT(2), 0),
+	[RESET_APBC_I2S0]	= RESET_DATA(APBC_SSPA0_CLK_RST, BIT(2), 0),
+	[RESET_APBC_I2S1]	= RESET_DATA(APBC_SSPA1_CLK_RST, BIT(2), 0),
+	[RESET_APBC_I2S2]	= RESET_DATA(APBC_SSPA2_CLK_RST, BIT(2), 0),
+	[RESET_APBC_I2S3]	= RESET_DATA(APBC_SSPA3_CLK_RST, BIT(2), 0),
+	[RESET_APBC_I2S4]	= RESET_DATA(APBC_SSPA4_CLK_RST, BIT(2), 0),
+	[RESET_APBC_I2S5]	= RESET_DATA(APBC_SSPA5_CLK_RST, BIT(2), 0),
+	[RESET_APBC_DRO]	= RESET_DATA(APBC_DRO_CLK_RST, BIT(2), 0),
+	[RESET_APBC_IR0]	= RESET_DATA(APBC_IR0_CLK_RST, BIT(2), 0),
+	[RESET_APBC_IR1]	= RESET_DATA(APBC_IR1_CLK_RST, BIT(2), 0),
+	[RESET_APBC_TSEN]	= RESET_DATA(APBC_TSEN_CLK_RST, BIT(2), 0),
+	[RESET_IPC_AP2AUD]	= RESET_DATA(APBC_IPC_AP2AUD_CLK_RST, BIT(2), 0),
+	[RESET_APBC_CAN0]	= RESET_DATA(APBC_CAN0_CLK_RST, BIT(2), 0),
+	[RESET_APBC_CAN1]	= RESET_DATA(APBC_CAN1_CLK_RST, BIT(2), 0),
+	[RESET_APBC_CAN2]	= RESET_DATA(APBC_CAN2_CLK_RST, BIT(2), 0),
+	[RESET_APBC_CAN3]	= RESET_DATA(APBC_CAN3_CLK_RST, BIT(2), 0),
+	[RESET_APBC_CAN4]	= RESET_DATA(APBC_CAN4_CLK_RST, BIT(2), 0),
+};
+
+static const struct ccu_reset_controller_data k3_apbc_reset_data = {
+	.reset_data	= k3_apbc_resets,
+	.count		= ARRAY_SIZE(k3_apbc_resets),
+};
+
+static const struct ccu_reset_data k3_apmu_resets[] = {
+	[RESET_APMU_CSI]	= RESET_DATA(APMU_CSI_CCIC2_CLK_RES_CTRL, 0, BIT(1)),
+	[RESET_APMU_CCIC2PHY]	= RESET_DATA(APMU_CSI_CCIC2_CLK_RES_CTRL, 0, BIT(2)),
+	[RESET_APMU_CCIC3PHY]	= RESET_DATA(APMU_CSI_CCIC2_CLK_RES_CTRL, 0, BIT(29)),
+	[RESET_APMU_ISP_CIBUS]	= RESET_DATA(APMU_ISP_CLK_RES_CTRL, 0, BIT(16)),
+	[RESET_APMU_DSI_ESC]	= RESET_DATA(APMU_LCD_CLK_RES_CTRL1, 0, BIT(3)),
+	[RESET_APMU_LCD]	= RESET_DATA(APMU_LCD_CLK_RES_CTRL1, 0, BIT(4)),
+	[RESET_APMU_V2D]	= RESET_DATA(APMU_LCD_CLK_RES_CTRL1, 0, BIT(27)),
+	[RESET_APMU_LCD_MCLK]	= RESET_DATA(APMU_LCD_CLK_RES_CTRL2, 0, BIT(9)),
+	[RESET_APMU_LCD_DSCCLK]	= RESET_DATA(APMU_LCD_CLK_RES_CTRL2, 0, BIT(15)),
+	[RESET_APMU_SC2_HCLK]	= RESET_DATA(APMU_CCIC_CLK_RES_CTRL, 0, BIT(0)),
+	[RESET_APMU_CCIC_4X]	= RESET_DATA(APMU_CCIC_CLK_RES_CTRL, 0, BIT(1)),
+	[RESET_APMU_CCIC1_PHY]	= RESET_DATA(APMU_CCIC_CLK_RES_CTRL, 0, BIT(2)),
+	[RESET_APMU_SDH_AXI]	= RESET_DATA(APMU_SDH0_CLK_RES_CTRL, 0, BIT(0)),
+	[RESET_APMU_SDH0]	= RESET_DATA(APMU_SDH0_CLK_RES_CTRL, 0, BIT(1)),
+	[RESET_APMU_SDH1]	= RESET_DATA(APMU_SDH1_CLK_RES_CTRL, 0, BIT(1)),
+	[RESET_APMU_SDH2]	= RESET_DATA(APMU_SDH2_CLK_RES_CTRL, 0, BIT(1)),
+	[RESET_APMU_USB2]	= RESET_DATA(APMU_USB_CLK_RES_CTRL, 0,
+					     BIT(1)|BIT(2)|BIT(3)),
+	[RESET_APMU_USB3_PORTA]	= RESET_DATA(APMU_USB_CLK_RES_CTRL, 0,
+					     BIT(5)|BIT(6)|BIT(7)),
+	[RESET_APMU_USB3_PORTB]	= RESET_DATA(APMU_USB_CLK_RES_CTRL, 0,
+					     BIT(9)|BIT(10)|BIT(11)),
+	[RESET_APMU_USB3_PORTC]	= RESET_DATA(APMU_USB_CLK_RES_CTRL, 0,
+					     BIT(13)|BIT(14)|BIT(15)),
+	[RESET_APMU_USB3_PORTD]	= RESET_DATA(APMU_USB_CLK_RES_CTRL, 0,
+					     BIT(17)|BIT(18)|BIT(19)),
+	[RESET_APMU_QSPI]	= RESET_DATA(APMU_QSPI_CLK_RES_CTRL, 0, BIT(1)),
+	[RESET_APMU_QSPI_BUS]	= RESET_DATA(APMU_QSPI_CLK_RES_CTRL, 0, BIT(0)),
+	[RESET_APMU_DMA]	= RESET_DATA(APMU_DMA_CLK_RES_CTRL, 0, BIT(0)),
+	[RESET_APMU_AES_WTM]	= RESET_DATA(APMU_AES_CLK_RES_CTRL, 0, BIT(4)),
+	[RESET_APMU_MCB_DCLK]	= RESET_DATA(APMU_MCB_CLK_RES_CTRL, 0, BIT(0)),
+	[RESET_APMU_MCB_ACLK]	= RESET_DATA(APMU_MCB_CLK_RES_CTRL, 0, BIT(1)),
+	[RESET_APMU_VPU]	= RESET_DATA(APMU_VPU_CLK_RES_CTRL, 0, BIT(0)),
+	[RESET_APMU_DTC]	= RESET_DATA(APMU_DTC_CLK_RES_CTRL, 0, BIT(0)),
+	[RESET_APMU_GPU]	= RESET_DATA(APMU_GPU_CLK_RES_CTRL, 0, BIT(1)),
+	[RESET_APMU_MC]		= RESET_DATA(APMU_PMUA_MC_CTRL, 0, BIT(0)),
+	[RESET_APMU_CPU0_POP]	= RESET_DATA(APMU_PMU_CC2_AP, BIT(0), 0),
+	[RESET_APMU_CPU0_SW]	= RESET_DATA(APMU_PMU_CC2_AP, BIT(1), 0),
+	[RESET_APMU_CPU1_POP]	= RESET_DATA(APMU_PMU_CC2_AP, BIT(3), 0),
+	[RESET_APMU_CPU1_SW]	= RESET_DATA(APMU_PMU_CC2_AP, BIT(4), 0),
+	[RESET_APMU_CPU2_POP]	= RESET_DATA(APMU_PMU_CC2_AP, BIT(6), 0),
+	[RESET_APMU_CPU2_SW]	= RESET_DATA(APMU_PMU_CC2_AP, BIT(7), 0),
+	[RESET_APMU_CPU3_POP]	= RESET_DATA(APMU_PMU_CC2_AP, BIT(9), 0),
+	[RESET_APMU_CPU3_SW]	= RESET_DATA(APMU_PMU_CC2_AP, BIT(10), 0),
+	[RESET_APMU_C0_MPSUB_SW] = RESET_DATA(APMU_PMU_CC2_AP, BIT(12), 0),
+	[RESET_APMU_CPU4_POP]	= RESET_DATA(APMU_PMU_CC2_AP, BIT(16), 0),
+	[RESET_APMU_CPU4_SW]	= RESET_DATA(APMU_PMU_CC2_AP, BIT(17), 0),
+	[RESET_APMU_CPU5_POP]	= RESET_DATA(APMU_PMU_CC2_AP, BIT(19), 0),
+	[RESET_APMU_CPU5_SW]	= RESET_DATA(APMU_PMU_CC2_AP, BIT(20), 0),
+	[RESET_APMU_CPU6_POP]	= RESET_DATA(APMU_PMU_CC2_AP, BIT(22), 0),
+	[RESET_APMU_CPU6_SW]	= RESET_DATA(APMU_PMU_CC2_AP, BIT(23), 0),
+	[RESET_APMU_CPU7_POP]	= RESET_DATA(APMU_PMU_CC2_AP, BIT(25), 0),
+	[RESET_APMU_CPU7_SW]	= RESET_DATA(APMU_PMU_CC2_AP, BIT(26), 0),
+	[RESET_APMU_C1_MPSUB_SW] = RESET_DATA(APMU_PMU_CC2_AP, BIT(28), 0),
+	[RESET_APMU_MPSUB_DBG]	= RESET_DATA(APMU_PMU_CC2_AP, BIT(29), 0),
+	[RESET_APMU_UCIE]	= RESET_DATA(APMU_UCIE_CTRL,
+					     BIT(1) | BIT(2) | BIT(3), 0),
+	[RESET_APMU_RCPU]	= RESET_DATA(APMU_RCPU_CLK_RES_CTRL, 0,
+					     BIT(3) | BIT(2) | BIT(0)),
+	[RESET_APMU_DSI4LN2_ESCCLK]	= RESET_DATA(APMU_LCD_CLK_RES_CTRL3, 0, BIT(3)),
+	[RESET_APMU_DSI4LN2_LCD_SW]	= RESET_DATA(APMU_LCD_CLK_RES_CTRL3, 0, BIT(4)),
+	[RESET_APMU_DSI4LN2_LCD_MCLK]	= RESET_DATA(APMU_LCD_CLK_RES_CTRL4, 0, BIT(9)),
+	[RESET_APMU_DSI4LN2_LCD_DSCCLK]	= RESET_DATA(APMU_LCD_CLK_RES_CTRL4, 0, BIT(15)),
+	[RESET_APMU_DSI4LN2_DPU_ACLK]	= RESET_DATA(APMU_LCD_CLK_RES_CTRL5, 0, BIT(0)),
+	[RESET_APMU_DPU_ACLK]	= RESET_DATA(APMU_LCD_CLK_RES_CTRL5, 0, BIT(15)),
+	[RESET_APMU_UFS_ACLK]	= RESET_DATA(APMU_UFS_CLK_RES_CTRL, 0, BIT(0)),
+	[RESET_APMU_EDP0]	= RESET_DATA(APMU_LCD_EDP_CTRL, 0, BIT(0)),
+	[RESET_APMU_EDP1]	= RESET_DATA(APMU_LCD_EDP_CTRL, 0, BIT(16)),
+	[RESET_APMU_PCIE_PORTA]	= RESET_DATA(APMU_PCIE_CLK_RES_CTRL_A, 0,
+					     BIT(5) | BIT(4) | BIT(3)),
+	[RESET_APMU_PCIE_PORTB]	= RESET_DATA(APMU_PCIE_CLK_RES_CTRL_B, 0,
+					     BIT(5) | BIT(4) | BIT(3)),
+	[RESET_APMU_PCIE_PORTC]	= RESET_DATA(APMU_PCIE_CLK_RES_CTRL_C, 0,
+					     BIT(5) | BIT(4) | BIT(3)),
+	[RESET_APMU_PCIE_PORTD]	= RESET_DATA(APMU_PCIE_CLK_RES_CTRL_D, 0,
+					     BIT(5) | BIT(4) | BIT(3)),
+	[RESET_APMU_PCIE_PORTE]	= RESET_DATA(APMU_PCIE_CLK_RES_CTRL_E, 0,
+					     BIT(5) | BIT(4) | BIT(3)),
+	[RESET_APMU_EMAC0]	= RESET_DATA(APMU_EMAC0_CLK_RES_CTRL, 0, BIT(1)),
+	[RESET_APMU_EMAC1]	= RESET_DATA(APMU_EMAC1_CLK_RES_CTRL, 0, BIT(1)),
+	[RESET_APMU_EMAC2]	= RESET_DATA(APMU_EMAC2_CLK_RES_CTRL, 0, BIT(1)),
+	[RESET_APMU_ESPI_MCLK]	= RESET_DATA(APMU_ESPI_CLK_RES_CTRL, 0, BIT(0)),
+	[RESET_APMU_ESPI_SCLK]	= RESET_DATA(APMU_ESPI_CLK_RES_CTRL, 0, BIT(2)),
+};
+
+static const struct ccu_reset_controller_data k3_apmu_reset_data = {
+	.reset_data	= k3_apmu_resets,
+	.count		= ARRAY_SIZE(k3_apmu_resets),
+};
+
+static const struct ccu_reset_data k3_dciu_resets[] = {
+	[RESET_DCIU_HDMA]	= RESET_DATA(DCIU_DMASYS_RSTN, 0, BIT(0)),
+	[RESET_DCIU_DMA350]	= RESET_DATA(DCIU_DMASYS_SDMA_RSTN, 0, BIT(0)),
+	[RESET_DCIU_DMA350_0]	= RESET_DATA(DCIU_DMASYS_S0_RSTN, 0, BIT(0)),
+	[RESET_DCIU_DMA350_1]	= RESET_DATA(DCIU_DMASYS_S1_RSTN, 0, BIT(0)),
+	[RESET_DCIU_AXIDMA0]	= RESET_DATA(DCIU_DMASYS_A0_RSTN, 0, BIT(0)),
+	[RESET_DCIU_AXIDMA1]	= RESET_DATA(DCIU_DMASYS_A1_RSTN, 0, BIT(0)),
+	[RESET_DCIU_AXIDMA2]	= RESET_DATA(DCIU_DMASYS_A2_RSTN, 0, BIT(0)),
+	[RESET_DCIU_AXIDMA3]	= RESET_DATA(DCIU_DMASYS_A3_RSTN, 0, BIT(0)),
+	[RESET_DCIU_AXIDMA4]	= RESET_DATA(DCIU_DMASYS_A4_RSTN, 0, BIT(0)),
+	[RESET_DCIU_AXIDMA5]	= RESET_DATA(DCIU_DMASYS_A5_RSTN, 0, BIT(0)),
+	[RESET_DCIU_AXIDMA6]	= RESET_DATA(DCIU_DMASYS_A6_RSTN, 0, BIT(0)),
+	[RESET_DCIU_AXIDMA7]	= RESET_DATA(DCIU_DMASYS_A7_RSTN, 0, BIT(0)),
+};
+
+static const struct ccu_reset_controller_data k3_dciu_reset_data = {
+	.reset_data	= k3_dciu_resets,
+	.count		= ARRAY_SIZE(k3_dciu_resets),
+};
+
+#define K3_AUX_DEV_ID(_unit)						\
+	{								\
+		.name = "spacemit_ccu.k3-" #_unit "-reset",		\
+		.driver_data = (kernel_ulong_t)&k3_ ## _unit ## _reset_data,	\
+	}
+
+static const struct auxiliary_device_id spacemit_k3_reset_ids[] = {
+	K3_AUX_DEV_ID(mpmu),
+	K3_AUX_DEV_ID(apbc),
+	K3_AUX_DEV_ID(apmu),
+	K3_AUX_DEV_ID(dciu),
+	{ /* sentinel */ }
+};
+MODULE_DEVICE_TABLE(auxiliary, spacemit_k3_reset_ids);
+
+static struct auxiliary_driver spacemit_k3_reset_driver = {
+	.probe = spacemit_reset_probe,
+	.id_table = spacemit_k3_reset_ids,
+};
+module_auxiliary_driver(spacemit_k3_reset_driver);
+
+MODULE_IMPORT_NS("RESET_SPACEMIT");
+MODULE_AUTHOR("Guodong Xu <guodong@riscstar.com>");
+MODULE_DESCRIPTION("SpacemiT K3 reset controller driver");
+MODULE_LICENSE("GPL");
+7 -20
drivers/rtc/rtc-optee.c
···
 	return 0;
 }

-static int optee_rtc_probe(struct device *dev)
+static int optee_rtc_probe(struct tee_client_device *rtc_device)
 {
-	struct tee_client_device *rtc_device = to_tee_client_device(dev);
+	struct device *dev = &rtc_device->dev;
 	struct tee_ioctl_open_session_arg sess2_arg = {0};
 	struct tee_ioctl_open_session_arg sess_arg = {0};
 	struct optee_rtc *priv;
···
 	return err;
 }

-static int optee_rtc_remove(struct device *dev)
+static void optee_rtc_remove(struct tee_client_device *rtc_device)
 {
+	struct device *dev = &rtc_device->dev;
 	struct optee_rtc *priv = dev_get_drvdata(dev);

 	if (priv->features & TA_RTC_FEATURE_ALARM) {
···
 	tee_shm_free(priv->shm);
 	tee_client_close_session(priv->ctx, priv->session_id);
 	tee_client_close_context(priv->ctx);
-
-	return 0;
 }

 static int optee_rtc_suspend(struct device *dev)
···
 static struct tee_client_driver optee_rtc_driver = {
 	.id_table = optee_rtc_id_table,
+	.probe = optee_rtc_probe,
+	.remove = optee_rtc_remove,
 	.driver = {
 		.name = "optee_rtc",
-		.bus = &tee_bus_type,
-		.probe = optee_rtc_probe,
-		.remove = optee_rtc_remove,
 		.pm = pm_sleep_ptr(&optee_rtc_pm_ops),
 	},
 };

-static int __init optee_rtc_mod_init(void)
-{
-	return driver_register(&optee_rtc_driver.driver);
-}
-
-static void __exit optee_rtc_mod_exit(void)
-{
-	driver_unregister(&optee_rtc_driver.driver);
-}
-
-module_init(optee_rtc_mod_init);
-module_exit(optee_rtc_mod_exit);
+module_tee_client_driver(optee_rtc_driver);

 MODULE_LICENSE("GPL v2");
 MODULE_AUTHOR("Clément Léger <clement.leger@bootlin.com>");
+1
drivers/soc/amlogic/meson-gx-socinfo.c
···
 	{ "S905D3", 0x2b, 0x30, 0x3f },
 	{ "A113L", 0x2c, 0x0, 0xf8 },
 	{ "S805X2", 0x37, 0x2, 0xf },
+	{ "S905Y4", 0x37, 0x3, 0xf },
 	{ "C308L", 0x3d, 0x1, 0xf },
 	{ "A311D2", 0x36, 0x1, 0xf },
 	{ "A113X2", 0x3c, 0x1, 0xf },
+16
drivers/soc/apple/rtkit.c
···
 }
 EXPORT_SYMBOL_GPL(apple_rtkit_shutdown);

+int apple_rtkit_poweroff(struct apple_rtkit *rtk)
+{
+	int ret;
+
+	ret = apple_rtkit_set_ap_power_state(rtk, APPLE_RTKIT_PWR_STATE_OFF);
+	if (ret)
+		return ret;
+
+	ret = apple_rtkit_set_iop_power_state(rtk, APPLE_RTKIT_PWR_STATE_OFF);
+	if (ret)
+		return ret;
+
+	return apple_rtkit_reinit(rtk);
+}
+EXPORT_SYMBOL_GPL(apple_rtkit_poweroff);
+
 int apple_rtkit_idle(struct apple_rtkit *rtk)
 {
 	int ret;
+3 -6
drivers/soc/dove/pmu.c
···
  */
 int __init dove_init_pmu(void)
 {
-	struct device_node *np_pmu, *domains_node, *np;
+	struct device_node *np_pmu, *domains_node;
 	struct pmu_data *pmu;
 	int ret, parent_irq;
···
 	pmu_reset_init(pmu);

-	for_each_available_child_of_node(domains_node, np) {
+	for_each_available_child_of_node_scoped(domains_node, np) {
 		struct of_phandle_args args;
 		struct pmu_domain *domain;

 		domain = kzalloc(sizeof(*domain), GFP_KERNEL);
-		if (!domain) {
-			of_node_put(np);
+		if (!domain)
 			break;
-		}

 		domain->pmu = pmu;
 		domain->base.name = kasprintf(GFP_KERNEL, "%pOFn", np);
 		if (!domain->base.name) {
 			kfree(domain);
-			of_node_put(np);
 			break;
 		}
+1 -1
drivers/soc/fsl/qe/Makefile
···
 obj-$(CONFIG_UCC_FAST)	+= ucc_fast.o
 obj-$(CONFIG_QE_TDM)	+= qe_tdm.o
 obj-$(CONFIG_QE_USB)	+= usb.o
-obj-$(CONFIG_QE_GPIO)	+= gpio.o
+obj-$(CONFIG_QE_GPIO)	+= gpio.o qe_ports_ic.o
+141
drivers/soc/fsl/qe/qe_ports_ic.c
···
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * QUICC ENGINE I/O Ports Interrupt Controller
+ *
+ * Copyright (c) 2025 Christophe Leroy CS GROUP France (christophe.leroy@csgroup.eu)
+ */
+
+#include <linux/irq.h>
+#include <linux/irqdomain.h>
+#include <linux/platform_device.h>
+
+/* QE IC registers offset */
+#define CEPIER	0x0c
+#define CEPIMR	0x10
+#define CEPICR	0x14
+
+struct qepic_data {
+	void __iomem *reg;
+	struct irq_domain *host;
+};
+
+static void qepic_mask(struct irq_data *d)
+{
+	struct qepic_data *data = irq_data_get_irq_chip_data(d);
+
+	clrbits32(data->reg + CEPIMR, 1 << (31 - irqd_to_hwirq(d)));
+}
+
+static void qepic_unmask(struct irq_data *d)
+{
+	struct qepic_data *data = irq_data_get_irq_chip_data(d);
+
+	setbits32(data->reg + CEPIMR, 1 << (31 - irqd_to_hwirq(d)));
+}
+
+static void qepic_end(struct irq_data *d)
+{
+	struct qepic_data *data = irq_data_get_irq_chip_data(d);
+
+	out_be32(data->reg + CEPIER, 1 << (31 - irqd_to_hwirq(d)));
+}
+
+static int qepic_set_type(struct irq_data *d, unsigned int flow_type)
+{
+	struct qepic_data *data = irq_data_get_irq_chip_data(d);
+	unsigned int vec = (unsigned int)irqd_to_hwirq(d);
+
+	switch (flow_type & IRQ_TYPE_SENSE_MASK) {
+	case IRQ_TYPE_EDGE_FALLING:
+		setbits32(data->reg + CEPICR, 1 << (31 - vec));
+		return 0;
+	case IRQ_TYPE_EDGE_BOTH:
+	case IRQ_TYPE_NONE:
+		clrbits32(data->reg + CEPICR, 1 << (31 - vec));
+		return 0;
+	}
+	return -EINVAL;
+}
+
+static struct irq_chip qepic = {
+	.name = "QEPIC",
+	.irq_mask = qepic_mask,
+	.irq_unmask = qepic_unmask,
+	.irq_eoi = qepic_end,
+	.irq_set_type = qepic_set_type,
+};
+
+static int qepic_get_irq(struct irq_desc *desc)
+{
+	struct qepic_data *data = irq_desc_get_handler_data(desc);
+	u32 event = in_be32(data->reg + CEPIER);
+
+	if (!event)
+		return -1;
+
+	return irq_find_mapping(data->host, 32 - ffs(event));
+}
+
+static void qepic_cascade(struct irq_desc *desc)
+{
+	generic_handle_irq(qepic_get_irq(desc));
+}
+
+static int qepic_host_map(struct irq_domain *h, unsigned int virq, irq_hw_number_t hw)
+{
+	irq_set_chip_data(virq, h->host_data);
+	irq_set_chip_and_handler(virq, &qepic, handle_fasteoi_irq);
+	return 0;
+}
+
+static const struct irq_domain_ops qepic_host_ops = {
+	.map = qepic_host_map,
+};
+
+static int qepic_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct qepic_data *data;
+	int irq;
+
+	data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
+	if (!data)
+		return -ENOMEM;
+
+	data->reg = devm_platform_ioremap_resource(pdev, 0);
+	if (IS_ERR(data->reg))
+		return PTR_ERR(data->reg);
+
+	irq = platform_get_irq(pdev, 0);
+	if (irq < 0)
+		return irq;
+
+	data->host = irq_domain_add_linear(dev->of_node, 32, &qepic_host_ops, data);
+	if (!data->host)
+		return -ENODEV;
+
+	irq_set_chained_handler_and_data(irq, qepic_cascade, data);
+
+	return 0;
+}
+
+static const struct of_device_id qepic_match[] = {
+	{
+		.compatible = "fsl,mpc8323-qe-ports-ic",
+	},
+	{},
+};
+
+static struct platform_driver qepic_driver = {
+	.driver = {
+		.name = "qe_ports_ic",
+		.of_match_table = qepic_match,
+	},
+	.probe = qepic_probe,
+};
+
+static int __init qepic_init(void)
+{
+	return platform_driver_register(&qepic_driver);
+}
+arch_initcall(qepic_init);
drivers/soc/fsl/qe/qmc.c (+2 -11)
```diff
 static int qmc_of_parse_chans(struct qmc *qmc, struct device_node *np)
 {
-	struct device_node *chan_np;
 	struct qmc_chan *chan;
 	const char *mode;
 	u32 chan_id;
 	u64 ts_mask;
 	int ret;

-	for_each_available_child_of_node(np, chan_np) {
+	for_each_available_child_of_node_scoped(np, chan_np) {
 		ret = of_property_read_u32(chan_np, "reg", &chan_id);
 		if (ret) {
 			dev_err(qmc->dev, "%pOF: failed to read reg\n", chan_np);
-			of_node_put(chan_np);
 			return ret;
 		}
 		if (chan_id > 63) {
 			dev_err(qmc->dev, "%pOF: Invalid chan_id\n", chan_np);
-			of_node_put(chan_np);
 			return -EINVAL;
 		}

 		chan = devm_kzalloc(qmc->dev, sizeof(*chan), GFP_KERNEL);
-		if (!chan) {
-			of_node_put(chan_np);
+		if (!chan)
 			return -ENOMEM;
-		}

 		chan->id = chan_id;
 		spin_lock_init(&chan->ts_lock);
···
 		if (ret) {
 			dev_err(qmc->dev, "%pOF: failed to read fsl,tx-ts-mask\n",
 				chan_np);
-			of_node_put(chan_np);
 			return ret;
 		}
 		chan->tx_ts_mask_avail = ts_mask;
···
 		if (ret) {
 			dev_err(qmc->dev, "%pOF: failed to read fsl,rx-ts-mask\n",
 				chan_np);
-			of_node_put(chan_np);
 			return ret;
 		}
 		chan->rx_ts_mask_avail = ts_mask;
···
 		if (ret && ret != -EINVAL) {
 			dev_err(qmc->dev, "%pOF: failed to read fsl,operational-mode\n",
 				chan_np);
-			of_node_put(chan_np);
 			return ret;
 		}
 		if (!strcmp(mode, "transparent")) {
···
 		} else {
 			dev_err(qmc->dev, "%pOF: Invalid fsl,operational-mode (%s)\n",
 				chan_np, mode);
-			of_node_put(chan_np);
 			return -EINVAL;
 		}
```
drivers/soc/imx/soc-imx8m.c (+5 -1)
```diff
 		goto err_clk;
 	}

-	return clk_prepare_enable(drvdata->clk);
+	ret = clk_prepare_enable(drvdata->clk);
+	if (ret)
+		goto err_clk;
+
+	return 0;

 err_clk:
 	iounmap(drvdata->ocotp_base);
```
drivers/soc/imx/soc-imx9.c (+16 -30)
```diff
 #include <linux/sys_soc.h>

 #define IMX_SIP_GET_SOC_INFO	0xc2000006
-#define SOC_ID(x)		(((x) & 0xFFFF) >> 8)
+#define SOC_ID(x)		(((x) & 0xFF) ? ((x) & 0xFFFF) >> 4 : ((x) & 0xFFFF) >> 8)
 #define SOC_REV_MAJOR(x)	((((x) >> 28) & 0xF) - 0x9)
 #define SOC_REV_MINOR(x)	(((x) >> 24) & 0xF)

 static int imx9_soc_probe(struct platform_device *pdev)
 {
+	struct device *dev = &pdev->dev;
 	struct soc_device_attribute *attr;
 	struct arm_smccc_res res;
 	struct soc_device *sdev;
···
 	u64 uid127_64, uid63_0;
 	int err;

-	attr = kzalloc(sizeof(*attr), GFP_KERNEL);
+	attr = devm_kzalloc(dev, sizeof(*attr), GFP_KERNEL);
 	if (!attr)
 		return -ENOMEM;

 	err = of_property_read_string(of_root, "model", &attr->machine);
-	if (err) {
-		pr_err("%s: missing model property: %d\n", __func__, err);
-		goto attr;
-	}
+	if (err)
+		return dev_err_probe(dev, err, "%s: missing model property\n", __func__);

-	attr->family = kasprintf(GFP_KERNEL, "Freescale i.MX");
+	attr->family = devm_kasprintf(dev, GFP_KERNEL, "Freescale i.MX");

 	/*
 	 * Retrieve the soc id, rev & uid info:
···
 	 * res.a3: uid[63:0];
 	 */
 	arm_smccc_smc(IMX_SIP_GET_SOC_INFO, 0, 0, 0, 0, 0, 0, 0, &res);
-	if (res.a0 != SMCCC_RET_SUCCESS) {
-		pr_err("%s: SMC failed: 0x%lx\n", __func__, res.a0);
-		err = -EINVAL;
-		goto family;
-	}
+	if (res.a0 != SMCCC_RET_SUCCESS)
+		return dev_err_probe(dev, -EINVAL, "%s: SMC failed: 0x%lx\n", __func__, res.a0);

 	soc_id = SOC_ID(res.a1);
 	rev_major = SOC_REV_MAJOR(res.a1);
 	rev_minor = SOC_REV_MINOR(res.a1);

-	attr->soc_id = kasprintf(GFP_KERNEL, "i.MX%2x", soc_id);
-	attr->revision = kasprintf(GFP_KERNEL, "%d.%d", rev_major, rev_minor);
+	attr->soc_id = devm_kasprintf(dev, GFP_KERNEL, "i.MX%2x", soc_id);
+	attr->revision = devm_kasprintf(dev, GFP_KERNEL, "%d.%d", rev_major, rev_minor);

 	uid127_64 = res.a2;
 	uid63_0 = res.a3;
-	attr->serial_number = kasprintf(GFP_KERNEL, "%016llx%016llx", uid127_64, uid63_0);
+	attr->serial_number = devm_kasprintf(dev, GFP_KERNEL, "%016llx%016llx", uid127_64, uid63_0);

 	sdev = soc_device_register(attr);
-	if (IS_ERR(sdev)) {
-		err = PTR_ERR(sdev);
-		pr_err("%s failed to register SoC as a device: %d\n", __func__, err);
-		goto serial_number;
-	}
+	if (IS_ERR(sdev))
+		return dev_err_probe(dev, PTR_ERR(sdev),
+				     "%s failed to register SoC as a device\n", __func__);

 	return 0;
-
-serial_number:
-	kfree(attr->serial_number);
-	kfree(attr->revision);
-	kfree(attr->soc_id);
-family:
-	kfree(attr->family);
-attr:
-	kfree(attr);
-	return err;
 }

 static __maybe_unused const struct of_device_id imx9_soc_match[] = {
 	{ .compatible = "fsl,imx93", },
+	{ .compatible = "fsl,imx94", },
 	{ .compatible = "fsl,imx95", },
+	{ .compatible = "fsl,imx952", },
 	{ }
 };
```
drivers/soc/mediatek/mtk-cmdq-helper.c (+73 -4)
```diff
 #include <linux/module.h>
 #include <linux/mailbox_controller.h>
 #include <linux/of.h>
+#include <linux/of_address.h>
 #include <linux/soc/mediatek/mtk-cmdq.h>

 #define CMDQ_WRITE_ENABLE_MASK	BIT(0)
···
 			       struct cmdq_client_reg *client_reg, int idx)
 {
 	struct of_phandle_args spec;
+	struct resource res;
 	int err;

 	if (!client_reg)
 		return -ENOENT;

+	err = of_address_to_resource(dev->of_node, 0, &res);
+	if (err) {
+		dev_err(dev, "Missing reg in %s node\n", dev->of_node->full_name);
+		return -EINVAL;
+	}
+	client_reg->pa_base = res.start;
+
 	err = of_parse_phandle_with_fixed_args(dev->of_node,
 					       "mediatek,gce-client-reg",
 					       3, idx, &spec);
 	if (err < 0) {
-		dev_warn(dev,
+		dev_dbg(dev,
 			"error %d can't parse gce-client-reg property (%d)",
 			err, idx);

-		return err;
+		/* make subsys invalid */
+		client_reg->subsys = CMDQ_SUBSYS_INVALID;
+
+		/*
+		 * All GCEs support writing register PA with mask without subsys,
+		 * but this requires extra GCE instructions to convert the PA into
+		 * a format that GCE can handle, which is less performance than
+		 * directly using subsys. Therefore, when subsys is available,
+		 * we prefer to use subsys for writing register PA.
+		 */
+		client_reg->pkt_write = cmdq_pkt_write_pa;
+		client_reg->pkt_write_mask = cmdq_pkt_write_mask_pa;
+
+		return 0;
 	}

 	client_reg->subsys = (u8)spec.args[0];
 	client_reg->offset = (u16)spec.args[1];
 	client_reg->size = (u16)spec.args[2];
 	of_node_put(spec.np);
+
+	client_reg->pkt_write = cmdq_pkt_write_subsys;
+	client_reg->pkt_write_mask = cmdq_pkt_write_mask_subsys;

 	return 0;
 }
···
 	}

 	pkt->pa_base = dma_addr;
+	cmdq_get_mbox_priv(client->chan, &pkt->priv);

 	return 0;
 }
···
 }
 EXPORT_SYMBOL(cmdq_pkt_write);

+int cmdq_pkt_write_pa(struct cmdq_pkt *pkt, u8 subsys /*unused*/, u32 pa_base,
+		      u16 offset, u32 value)
+{
+	int err;
+
+	err = cmdq_pkt_assign(pkt, CMDQ_THR_SPR_IDX0, CMDQ_ADDR_HIGH(pa_base));
+	if (err < 0)
+		return err;
+
+	return cmdq_pkt_write_s_value(pkt, CMDQ_THR_SPR_IDX0, CMDQ_ADDR_LOW(offset), value);
+}
+EXPORT_SYMBOL(cmdq_pkt_write_pa);
+
+int cmdq_pkt_write_subsys(struct cmdq_pkt *pkt, u8 subsys, u32 pa_base /*unused*/,
+			  u16 offset, u32 value)
+{
+	return cmdq_pkt_write(pkt, subsys, offset, value);
+}
+EXPORT_SYMBOL(cmdq_pkt_write_subsys);
+
 int cmdq_pkt_write_mask(struct cmdq_pkt *pkt, u8 subsys,
 			u16 offset, u32 value, u32 mask)
 {
···
 	return cmdq_pkt_write(pkt, subsys, offset_mask, value);
 }
 EXPORT_SYMBOL(cmdq_pkt_write_mask);
+
+int cmdq_pkt_write_mask_pa(struct cmdq_pkt *pkt, u8 subsys /*unused*/, u32 pa_base,
+			   u16 offset, u32 value, u32 mask)
+{
+	int err;
+
+	err = cmdq_pkt_assign(pkt, CMDQ_THR_SPR_IDX0, CMDQ_ADDR_HIGH(pa_base));
+	if (err < 0)
+		return err;
+
+	return cmdq_pkt_write_s_mask_value(pkt, CMDQ_THR_SPR_IDX0,
+					   CMDQ_ADDR_LOW(offset), value, mask);
+}
+EXPORT_SYMBOL(cmdq_pkt_write_mask_pa);
+
+int cmdq_pkt_write_mask_subsys(struct cmdq_pkt *pkt, u8 subsys, u32 pa_base /*unused*/,
+			       u16 offset, u32 value, u32 mask)
+{
+	return cmdq_pkt_write_mask(pkt, subsys, offset, value, mask);
+}
+EXPORT_SYMBOL(cmdq_pkt_write_mask_subsys);

 int cmdq_pkt_read_s(struct cmdq_pkt *pkt, u16 high_addr_reg_idx, u16 addr_low,
 		    u16 reg_idx)
···
 	int ret;

 	/* read the value of src_addr into high_addr_reg_idx */
+	src_addr += pkt->priv.mminfra_offset;
 	ret = cmdq_pkt_assign(pkt, high_addr_reg_idx, CMDQ_ADDR_HIGH(src_addr));
 	if (ret < 0)
 		return ret;
···
 		return ret;

 	/* write the value of value_reg_idx into dst_addr */
+	dst_addr += pkt->priv.mminfra_offset;
 	ret = cmdq_pkt_assign(pkt, high_addr_reg_idx, CMDQ_ADDR_HIGH(dst_addr));
 	if (ret < 0)
 		return ret;
···
 	inst.op = CMDQ_CODE_MASK;
 	inst.dst_t = CMDQ_REG_TYPE;
 	inst.sop = CMDQ_POLL_ADDR_GPR;
-	inst.value = addr;
+	inst.value = addr + pkt->priv.mminfra_offset;
 	ret = cmdq_pkt_append_command(pkt, inst);
 	if (ret < 0)
 		return ret;
···
 	struct cmdq_instruction inst = {
 		.op = CMDQ_CODE_JUMP,
 		.offset = CMDQ_JUMP_ABSOLUTE,
-		.value = addr >> shift_pa
+		.value = (addr + pkt->priv.mminfra_offset) >> pkt->priv.shift_pa
 	};
 	return cmdq_pkt_append_command(pkt, inst);
 }
```
drivers/soc/mediatek/mtk-dvfsrc.c (+333 -33)
```diff
 #include <linux/arm-smccc.h>
 #include <linux/bitfield.h>
+#include <linux/clk.h>
 #include <linux/iopoll.h>
 #include <linux/module.h>
 #include <linux/of.h>
···
 #include <linux/soc/mediatek/dvfsrc.h>
 #include <linux/soc/mediatek/mtk_sip_svc.h>

+/* DVFSRC_BASIC_CONTROL */
+#define DVFSRC_V4_BASIC_CTRL_OPP_COUNT	GENMASK(26, 20)
+
 /* DVFSRC_LEVEL */
 #define DVFSRC_V1_LEVEL_TARGET_LEVEL	GENMASK(15, 0)
 #define DVFSRC_TGT_LEVEL_IDLE		0x00
 #define DVFSRC_V1_LEVEL_CURRENT_LEVEL	GENMASK(31, 16)
+
+#define DVFSRC_V4_LEVEL_TARGET_LEVEL	GENMASK(15, 8)
+#define DVFSRC_V4_LEVEL_TARGET_PRESENT	BIT(16)

 /* DVFSRC_SW_REQ, DVFSRC_SW_REQ2 */
 #define DVFSRC_V1_SW_REQ2_DRAM_LEVEL	GENMASK(1, 0)
···
 #define DVFSRC_V2_SW_REQ_DRAM_LEVEL	GENMASK(3, 0)
 #define DVFSRC_V2_SW_REQ_VCORE_LEVEL	GENMASK(6, 4)

+#define DVFSRC_V4_SW_REQ_EMI_LEVEL	GENMASK(3, 0)
+#define DVFSRC_V4_SW_REQ_DRAM_LEVEL	GENMASK(15, 12)
+
 /* DVFSRC_VCORE */
 #define DVFSRC_V2_VCORE_REQ_VSCP_LEVEL	GENMASK(14, 12)
+
+/* DVFSRC_TARGET_GEAR */
+#define DVFSRC_V4_GEAR_TARGET_DRAM	GENMASK(7, 0)
+#define DVFSRC_V4_GEAR_TARGET_VCORE	GENMASK(15, 8)
+
+/* DVFSRC_GEAR_INFO */
+#define DVFSRC_V4_GEAR_INFO_REG_WIDTH	0x4
+#define DVFSRC_V4_GEAR_INFO_REG_LEVELS	64
+#define DVFSRC_V4_GEAR_INFO_VCORE	GENMASK(3, 0)
+#define DVFSRC_V4_GEAR_INFO_EMI		GENMASK(7, 4)
+#define DVFSRC_V4_GEAR_INFO_DRAM	GENMASK(15, 12)

 #define DVFSRC_POLL_TIMEOUT_US	1000
 #define STARTUP_TIME_US		1
···
 #define MTK_SIP_DVFSRC_INIT		0x0
 #define MTK_SIP_DVFSRC_START		0x1

-struct dvfsrc_bw_constraints {
-	u16 max_dram_nom_bw;
-	u16 max_dram_peak_bw;
-	u16 max_dram_hrt_bw;
+enum mtk_dvfsrc_bw_type {
+	DVFSRC_BW_AVG,
+	DVFSRC_BW_PEAK,
+	DVFSRC_BW_HRT,
+	DVFSRC_BW_MAX,
 };

 struct dvfsrc_opp {
 	u32 vcore_opp;
 	u32 dram_opp;
+	u32 emi_opp;
 };

 struct dvfsrc_opp_desc {
···
 struct dvfsrc_soc_data;
 struct mtk_dvfsrc {
 	struct device *dev;
+	struct clk *clk;
 	struct platform_device *icc;
 	struct platform_device *regulator;
 	const struct dvfsrc_soc_data *dvd;
···

 struct dvfsrc_soc_data {
 	const int *regs;
+	const u8 *bw_units;
+	const bool has_emi_ddr;
 	const struct dvfsrc_opp_desc *opps_desc;
+	u32 (*calc_dram_bw)(struct mtk_dvfsrc *dvfsrc, enum mtk_dvfsrc_bw_type type, u64 bw);
 	u32 (*get_target_level)(struct mtk_dvfsrc *dvfsrc);
 	u32 (*get_current_level)(struct mtk_dvfsrc *dvfsrc);
 	u32 (*get_vcore_level)(struct mtk_dvfsrc *dvfsrc);
 	u32 (*get_vscp_level)(struct mtk_dvfsrc *dvfsrc);
+	u32 (*get_opp_count)(struct mtk_dvfsrc *dvfsrc);
+	int (*get_hw_opps)(struct mtk_dvfsrc *dvfsrc);
 	void (*set_dram_bw)(struct mtk_dvfsrc *dvfsrc, u64 bw);
 	void (*set_dram_peak_bw)(struct mtk_dvfsrc *dvfsrc, u64 bw);
 	void (*set_dram_hrt_bw)(struct mtk_dvfsrc *dvfsrc, u64 bw);
···
 	void (*set_vscp_level)(struct mtk_dvfsrc *dvfsrc, u32 level);
 	int (*wait_for_opp_level)(struct mtk_dvfsrc *dvfsrc, u32 level);
 	int (*wait_for_vcore_level)(struct mtk_dvfsrc *dvfsrc, u32 level);
-	const struct dvfsrc_bw_constraints *bw_constraints;
+
+	/**
+	 * @bw_max_constraints - array of maximum bandwidth for this hardware
+	 *
+	 * indexed by &enum mtk_dvfsrc_bw_type, storing the maximum permissible
+	 * hardware value for each bandwidth type.
+	 */
+	const u32 *const bw_max_constraints;
+
+	/**
+	 * @bw_min_constraints - array of minimum bandwidth for this hardware
+	 *
+	 * indexed by &enum mtk_dvfsrc_bw_type, storing the minimum permissible
+	 * hardware value for each bandwidth type.
+	 */
+	const u32 *const bw_min_constraints;
 };

 static u32 dvfsrc_readl(struct mtk_dvfsrc *dvfs, u32 offset)
···
 }

 enum dvfsrc_regs {
+	DVFSRC_BASIC_CONTROL,
 	DVFSRC_SW_REQ,
 	DVFSRC_SW_REQ2,
 	DVFSRC_LEVEL,
···
 	DVFSRC_SW_BW,
 	DVFSRC_SW_PEAK_BW,
 	DVFSRC_SW_HRT_BW,
+	DVFSRC_SW_EMI_BW,
 	DVFSRC_VCORE,
+	DVFSRC_TARGET_GEAR,
+	DVFSRC_GEAR_INFO_L,
+	DVFSRC_GEAR_INFO_H,
 	DVFSRC_REGS_MAX,
 };
···
 	[DVFSRC_TARGET_LEVEL] = 0xd48,
 };

+static const int dvfsrc_mt8196_regs[] = {
+	[DVFSRC_BASIC_CONTROL] = 0x0,
+	[DVFSRC_SW_REQ] = 0x18,
+	[DVFSRC_VCORE] = 0x80,
+	[DVFSRC_GEAR_INFO_L] = 0xfc,
+	[DVFSRC_SW_BW] = 0x1e8,
+	[DVFSRC_SW_PEAK_BW] = 0x1f4,
+	[DVFSRC_SW_HRT_BW] = 0x20c,
+	[DVFSRC_LEVEL] = 0x5f0,
+	[DVFSRC_TARGET_LEVEL] = 0x5f0,
+	[DVFSRC_SW_REQ2] = 0x604,
+	[DVFSRC_SW_EMI_BW] = 0x60c,
+	[DVFSRC_TARGET_GEAR] = 0x6ac,
+	[DVFSRC_GEAR_INFO_H] = 0x6b0,
+};
+
 static const struct dvfsrc_opp *dvfsrc_get_current_opp(struct mtk_dvfsrc *dvfsrc)
 {
 	u32 level = dvfsrc->dvd->get_current_level(dvfsrc);

 	return &dvfsrc->curr_opps->opps[level];
 }

+static u32 dvfsrc_get_current_target_vcore_gear(struct mtk_dvfsrc *dvfsrc)
+{
+	u32 val = dvfsrc_readl(dvfsrc, DVFSRC_TARGET_GEAR);
+
+	return FIELD_GET(DVFSRC_V4_GEAR_TARGET_VCORE, val);
+}
+
+static u32 dvfsrc_get_current_target_dram_gear(struct mtk_dvfsrc *dvfsrc)
+{
+	u32 val = dvfsrc_readl(dvfsrc, DVFSRC_TARGET_GEAR);
+
+	return FIELD_GET(DVFSRC_V4_GEAR_TARGET_DRAM, val);
+}
+
 static bool dvfsrc_is_idle(struct mtk_dvfsrc *dvfsrc)
···
 	return 0;
 }

+static int dvfsrc_wait_for_vcore_level_v4(struct mtk_dvfsrc *dvfsrc, u32 level)
+{
+	u32 val;
+
+	return readx_poll_timeout_atomic(dvfsrc_get_current_target_vcore_gear,
+					 dvfsrc, val, val >= level,
+					 STARTUP_TIME_US, DVFSRC_POLL_TIMEOUT_US);
+}
+
+static int dvfsrc_wait_for_opp_level_v4(struct mtk_dvfsrc *dvfsrc, u32 level)
+{
+	u32 val;
+
+	return readx_poll_timeout_atomic(dvfsrc_get_current_target_dram_gear,
+					 dvfsrc, val, val >= level,
+					 STARTUP_TIME_US, DVFSRC_POLL_TIMEOUT_US);
+}
+
 static u32 dvfsrc_get_target_level_v1(struct mtk_dvfsrc *dvfsrc)
 {
 	u32 val = dvfsrc_readl(dvfsrc, DVFSRC_LEVEL);
···
 {
 	u32 val = dvfsrc_readl(dvfsrc, DVFSRC_LEVEL);
 	u32 level = ffs(val);
+
+	/* Valid levels */
+	if (level < dvfsrc->curr_opps->num_opp)
+		return dvfsrc->curr_opps->num_opp - level;
+
+	/* Zero for level 0 or invalid level */
+	return 0;
+}
+
+static u32 dvfsrc_get_target_level_v4(struct mtk_dvfsrc *dvfsrc)
+{
+	u32 val = dvfsrc_readl(dvfsrc, DVFSRC_TARGET_LEVEL);
+
+	if (val & DVFSRC_V4_LEVEL_TARGET_PRESENT)
+		return FIELD_GET(DVFSRC_V4_LEVEL_TARGET_LEVEL, val) + 1;
+	return 0;
+}
+
+static u32 dvfsrc_get_current_level_v4(struct mtk_dvfsrc *dvfsrc)
+{
+	u32 level = dvfsrc_readl(dvfsrc, DVFSRC_LEVEL) + 1;

 	/* Valid levels */
 	if (level < dvfsrc->curr_opps->num_opp)
···
 	dvfsrc_writel(dvfsrc, DVFSRC_VCORE, val);
 }

-static void __dvfsrc_set_dram_bw_v1(struct mtk_dvfsrc *dvfsrc, u32 reg,
-				    u16 max_bw, u16 min_bw, u64 bw)
+static u32 dvfsrc_get_opp_count_v4(struct mtk_dvfsrc *dvfsrc)
 {
-	u32 new_bw = (u32)div_u64(bw, 100 * 1000);
+	u32 val = dvfsrc_readl(dvfsrc, DVFSRC_BASIC_CONTROL);

-	/* If bw constraints (in mbps) are defined make sure to respect them */
-	if (max_bw)
-		new_bw = min(new_bw, max_bw);
-	if (min_bw && new_bw > 0)
-		new_bw = max(new_bw, min_bw);
+	return FIELD_GET(DVFSRC_V4_BASIC_CTRL_OPP_COUNT, val) + 1;
+}

-	dvfsrc_writel(dvfsrc, reg, new_bw);
+static u32
+dvfsrc_calc_dram_bw_v1(struct mtk_dvfsrc *dvfsrc, enum mtk_dvfsrc_bw_type type, u64 bw)
+{
+	return clamp_val(div_u64(bw, 100 * 1000), dvfsrc->dvd->bw_min_constraints[type],
+			 dvfsrc->dvd->bw_max_constraints[type]);
+}
+
+/**
+ * dvfsrc_calc_dram_bw_v4 - convert kbps to hardware register bandwidth value
+ * @dvfsrc: pointer to the &struct mtk_dvfsrc of this driver instance
+ * @type: one of %DVFSRC_BW_AVG, %DVFSRC_BW_PEAK, or %DVFSRC_BW_HRT
+ * @bw: the bandwidth in kilobits per second
+ *
+ * Returns the hardware register value appropriate for expressing @bw, clamped
+ * to hardware limits.
+ */
+static u32
+dvfsrc_calc_dram_bw_v4(struct mtk_dvfsrc *dvfsrc, enum mtk_dvfsrc_bw_type type, u64 bw)
+{
+	u8 bw_unit = dvfsrc->dvd->bw_units[type];
+	u64 bw_mbps;
+	u32 bw_hw;
+
+	if (type < DVFSRC_BW_AVG || type >= DVFSRC_BW_MAX)
+		return 0;
+
+	bw_mbps = div_u64(bw, 1000);
+	bw_hw = div_u64((bw_mbps + bw_unit - 1), bw_unit);
+	return clamp_val(bw_hw, dvfsrc->dvd->bw_min_constraints[type],
+			 dvfsrc->dvd->bw_max_constraints[type]);
+}
+
+static void __dvfsrc_set_dram_bw_v1(struct mtk_dvfsrc *dvfsrc, u32 reg,
+				    enum mtk_dvfsrc_bw_type type, u64 bw)
+{
+	u32 bw_hw = dvfsrc->dvd->calc_dram_bw(dvfsrc, type, bw);
+
+	dvfsrc_writel(dvfsrc, reg, bw_hw);
+
+	if (type == DVFSRC_BW_AVG && dvfsrc->dvd->has_emi_ddr)
+		dvfsrc_writel(dvfsrc, DVFSRC_SW_EMI_BW, bw_hw);
 }

 static void dvfsrc_set_dram_bw_v1(struct mtk_dvfsrc *dvfsrc, u64 bw)
 {
-	u64 max_bw = dvfsrc->dvd->bw_constraints->max_dram_nom_bw;
-
-	__dvfsrc_set_dram_bw_v1(dvfsrc, DVFSRC_SW_BW, max_bw, 0, bw);
+	__dvfsrc_set_dram_bw_v1(dvfsrc, DVFSRC_SW_BW, DVFSRC_BW_AVG, bw);
 };

 static void dvfsrc_set_dram_peak_bw_v1(struct mtk_dvfsrc *dvfsrc, u64 bw)
 {
-	u64 max_bw = dvfsrc->dvd->bw_constraints->max_dram_peak_bw;
-
-	__dvfsrc_set_dram_bw_v1(dvfsrc, DVFSRC_SW_PEAK_BW, max_bw, 0, bw);
+	__dvfsrc_set_dram_bw_v1(dvfsrc, DVFSRC_SW_PEAK_BW, DVFSRC_BW_PEAK, bw);
 }

 static void dvfsrc_set_dram_hrt_bw_v1(struct mtk_dvfsrc *dvfsrc, u64 bw)
 {
-	u64 max_bw = dvfsrc->dvd->bw_constraints->max_dram_hrt_bw;
-
-	__dvfsrc_set_dram_bw_v1(dvfsrc, DVFSRC_SW_HRT_BW, max_bw, 0, bw);
+	__dvfsrc_set_dram_bw_v1(dvfsrc, DVFSRC_SW_HRT_BW, DVFSRC_BW_HRT, bw);
 }

 static void dvfsrc_set_opp_level_v1(struct mtk_dvfsrc *dvfsrc, u32 level)
···
 	val |= FIELD_PREP(DVFSRC_V1_SW_REQ2_VCORE_LEVEL, opp->vcore_opp);

 	dev_dbg(dvfsrc->dev, "vcore_opp: %d, dram_opp: %d\n", opp->vcore_opp, opp->dram_opp);
+	dvfsrc_writel(dvfsrc, DVFSRC_SW_REQ, val);
+}
+
+static u32 dvfsrc_get_opp_gear(struct mtk_dvfsrc *dvfsrc, u8 level)
+{
+	u32 reg_ofst, val;
+	u8 idx;
+
+	/* Calculate register offset and index for requested gear */
+	if (level < DVFSRC_V4_GEAR_INFO_REG_LEVELS) {
+		reg_ofst = dvfsrc->dvd->regs[DVFSRC_GEAR_INFO_L];
+		idx = level;
+	} else {
+		reg_ofst = dvfsrc->dvd->regs[DVFSRC_GEAR_INFO_H];
+		idx = level - DVFSRC_V4_GEAR_INFO_REG_LEVELS;
+	}
+	reg_ofst += DVFSRC_V4_GEAR_INFO_REG_WIDTH * (level / 2);
+
+	/* Read the corresponding gear register */
+	val = readl(dvfsrc->regs + reg_ofst);
+
+	/* Each register contains two sets of data, 16 bits per gear */
+	val >>= 16 * (idx % 2);
+
+	return val;
+}
+
+static int dvfsrc_get_hw_opps_v4(struct mtk_dvfsrc *dvfsrc)
+{
+	struct dvfsrc_opp *dvfsrc_opps;
+	struct dvfsrc_opp_desc *desc;
+	u32 num_opps, gear_info;
+	u8 num_vcore, num_dram;
+	u8 num_emi;
+	int i;
+
+	num_opps = dvfsrc_get_opp_count_v4(dvfsrc);
+	if (num_opps == 0) {
+		dev_err(dvfsrc->dev, "No OPPs programmed in DVFSRC MCU.\n");
+		return -EINVAL;
+	}
+
+	/*
+	 * The first 16 bits set in the gear info table says how many OPPs
+	 * and how many vcore, dram and emi table entries are available.
+	 */
+	gear_info = dvfsrc_readl(dvfsrc, DVFSRC_GEAR_INFO_L);
+	if (gear_info == 0) {
+		dev_err(dvfsrc->dev, "No gear info in DVFSRC MCU.\n");
+		return -EINVAL;
+	}
+
+	num_vcore = FIELD_GET(DVFSRC_V4_GEAR_INFO_VCORE, gear_info) + 1;
+	num_dram = FIELD_GET(DVFSRC_V4_GEAR_INFO_DRAM, gear_info) + 1;
+	num_emi = FIELD_GET(DVFSRC_V4_GEAR_INFO_EMI, gear_info) + 1;
+	dev_info(dvfsrc->dev,
+		 "Discovered %u gears and %u vcore, %u dram, %u emi table entries.\n",
+		 num_opps, num_vcore, num_dram, num_emi);
+
+	/* Allocate everything now as anything else after that cannot fail */
+	desc = devm_kzalloc(dvfsrc->dev, sizeof(*desc), GFP_KERNEL);
+	if (!desc)
+		return -ENOMEM;
+
+	dvfsrc_opps = devm_kcalloc(dvfsrc->dev, num_opps + 1,
+				   sizeof(*dvfsrc_opps), GFP_KERNEL);
+	if (!dvfsrc_opps)
+		return -ENOMEM;
+
+	/* Read the OPP table gear indices */
+	for (i = 0; i <= num_opps; i++) {
+		gear_info = dvfsrc_get_opp_gear(dvfsrc, num_opps - i);
+		dvfsrc_opps[i].vcore_opp = FIELD_GET(DVFSRC_V4_GEAR_INFO_VCORE, gear_info);
+		dvfsrc_opps[i].dram_opp = FIELD_GET(DVFSRC_V4_GEAR_INFO_DRAM, gear_info);
+		dvfsrc_opps[i].emi_opp = FIELD_GET(DVFSRC_V4_GEAR_INFO_EMI, gear_info);
+	};
+	desc->num_opp = num_opps + 1;
+	desc->opps = dvfsrc_opps;
+
+	/* Assign to main structure now that everything is done! */
+	dvfsrc->curr_opps = desc;
+
+	return 0;
+}
+
+static void dvfsrc_set_dram_level_v4(struct mtk_dvfsrc *dvfsrc, u32 level)
+{
+	u32 val = dvfsrc_readl(dvfsrc, DVFSRC_SW_REQ);
+
+	val &= ~DVFSRC_V4_SW_REQ_DRAM_LEVEL;
+	val |= FIELD_PREP(DVFSRC_V4_SW_REQ_DRAM_LEVEL, level);
+
+	dev_dbg(dvfsrc->dev, "%s level=%u\n", __func__, level);
+
 	dvfsrc_writel(dvfsrc, DVFSRC_SW_REQ, val);
 }
···
 	if (IS_ERR(dvfsrc->regs))
 		return PTR_ERR(dvfsrc->regs);

+	dvfsrc->clk = devm_clk_get_enabled(&pdev->dev, NULL);
+	if (IS_ERR(dvfsrc->clk))
+		return dev_err_probe(&pdev->dev, PTR_ERR(dvfsrc->clk),
+				     "Couldn't get and enable DVFSRC clock\n");
+
 	arm_smccc_smc(MTK_SIP_DVFSRC_VCOREFS_CONTROL, MTK_SIP_DVFSRC_INIT,
 		      0, 0, 0, 0, 0, 0, &ares);
 	if (ares.a0)
···
 	dvfsrc->dram_type = ares.a1;
 	dev_dbg(&pdev->dev, "DRAM Type: %d\n", dvfsrc->dram_type);

-	dvfsrc->curr_opps = &dvfsrc->dvd->opps_desc[dvfsrc->dram_type];
+	/* Newer versions of the DVFSRC MCU have pre-programmed gear tables */
+	if (dvfsrc->dvd->get_hw_opps) {
+		ret = dvfsrc->dvd->get_hw_opps(dvfsrc);
+		if (ret)
+			return ret;
+	} else {
+		dvfsrc->curr_opps = &dvfsrc->dvd->opps_desc[dvfsrc->dram_type];
+	}
 	platform_set_drvdata(pdev, dvfsrc);

 	ret = devm_of_platform_populate(&pdev->dev);
···
 	/* Everything is set up - make it run! */
 	arm_smccc_smc(MTK_SIP_DVFSRC_VCOREFS_CONTROL, MTK_SIP_DVFSRC_START,
 		      0, 0, 0, 0, 0, 0, &ares);
-	if (ares.a0)
+	if (ares.a0 & BIT(0))
 		return dev_err_probe(&pdev->dev, -EINVAL, "Cannot start DVFSRC: %lu\n", ares.a0);

 	return 0;
 }

-static const struct dvfsrc_bw_constraints dvfsrc_bw_constr_v1 = { 0, 0, 0 };
-static const struct dvfsrc_bw_constraints dvfsrc_bw_constr_v2 = {
-	.max_dram_nom_bw = 255,
-	.max_dram_peak_bw = 255,
-	.max_dram_hrt_bw = 1023,
+static const u32 dvfsrc_bw_min_constr_none[DVFSRC_BW_MAX] = {
+	[DVFSRC_BW_AVG] = 0,
+	[DVFSRC_BW_PEAK] = 0,
+	[DVFSRC_BW_HRT] = 0,
+};
+
+static const u32 dvfsrc_bw_max_constr_v1[DVFSRC_BW_MAX] = {
+	[DVFSRC_BW_AVG] = U32_MAX,
+	[DVFSRC_BW_PEAK] = U32_MAX,
+	[DVFSRC_BW_HRT] = U32_MAX,
+};
+
+static const u32 dvfsrc_bw_max_constr_v2[DVFSRC_BW_MAX] = {
+	[DVFSRC_BW_AVG] = 65535,
+	[DVFSRC_BW_PEAK] = 65535,
+	[DVFSRC_BW_HRT] = 1023,
 };

 static const struct dvfsrc_opp dvfsrc_opp_mt6893_lp4[] = {
···
 	.set_vscp_level = dvfsrc_set_vscp_level_v2,
 	.wait_for_opp_level = dvfsrc_wait_for_opp_level_v2,
 	.wait_for_vcore_level = dvfsrc_wait_for_vcore_level_v1,
-	.bw_constraints = &dvfsrc_bw_constr_v2,
+	.bw_max_constraints = dvfsrc_bw_max_constr_v2,
+	.bw_min_constraints = dvfsrc_bw_min_constr_none,
 };

 static const struct dvfsrc_opp dvfsrc_opp_mt8183_lp4[] = {
···
 static const struct dvfsrc_soc_data mt8183_data = {
 	.opps_desc = dvfsrc_opp_mt8183_desc,
 	.regs = dvfsrc_mt8183_regs,
+	.calc_dram_bw = dvfsrc_calc_dram_bw_v1,
 	.get_target_level = dvfsrc_get_target_level_v1,
 	.get_current_level = dvfsrc_get_current_level_v1,
 	.get_vcore_level = dvfsrc_get_vcore_level_v1,
···
 	.set_vcore_level = dvfsrc_set_vcore_level_v1,
 	.wait_for_opp_level = dvfsrc_wait_for_opp_level_v1,
 	.wait_for_vcore_level = dvfsrc_wait_for_vcore_level_v1,
-	.bw_constraints = &dvfsrc_bw_constr_v1,
+	.bw_max_constraints = dvfsrc_bw_max_constr_v1,
+	.bw_min_constraints = dvfsrc_bw_min_constr_none,
 };

 static const struct dvfsrc_opp dvfsrc_opp_mt8195_lp4[] = {
···
 static const struct dvfsrc_soc_data mt8195_data = {
 	.opps_desc = dvfsrc_opp_mt8195_desc,
 	.regs = dvfsrc_mt8195_regs,
+	.calc_dram_bw = dvfsrc_calc_dram_bw_v1,
 	.get_target_level = dvfsrc_get_target_level_v2,
 	.get_current_level = dvfsrc_get_current_level_v2,
 	.get_vcore_level = dvfsrc_get_vcore_level_v2,
···
 	.set_vscp_level = dvfsrc_set_vscp_level_v2,
 	.wait_for_opp_level = dvfsrc_wait_for_opp_level_v2,
 	.wait_for_vcore_level = dvfsrc_wait_for_vcore_level_v1,
-	.bw_constraints = &dvfsrc_bw_constr_v2,
+	.bw_max_constraints = dvfsrc_bw_max_constr_v2,
+	.bw_min_constraints = dvfsrc_bw_min_constr_none,
+};
+
+static const u8 mt8196_bw_units[] = {
+	[DVFSRC_BW_AVG] = 64,
+	[DVFSRC_BW_PEAK] = 64,
+	[DVFSRC_BW_HRT] = 30,
+};
+
+static const struct dvfsrc_soc_data mt8196_data = {
+	.regs = dvfsrc_mt8196_regs,
+	.bw_units = mt8196_bw_units,
+	.has_emi_ddr = true,
+	.get_target_level = dvfsrc_get_target_level_v4,
+	.get_current_level = dvfsrc_get_current_level_v4,
+	.get_vcore_level = dvfsrc_get_vcore_level_v2,
+	.get_vscp_level = dvfsrc_get_vscp_level_v2,
+	.get_opp_count = dvfsrc_get_opp_count_v4,
+	.get_hw_opps = dvfsrc_get_hw_opps_v4,
+	.calc_dram_bw = dvfsrc_calc_dram_bw_v4,
+	.set_dram_bw = dvfsrc_set_dram_bw_v1,
+	.set_dram_peak_bw = dvfsrc_set_dram_peak_bw_v1,
+	.set_dram_hrt_bw = dvfsrc_set_dram_hrt_bw_v1,
+	.set_opp_level = dvfsrc_set_dram_level_v4,
+	.set_vcore_level = dvfsrc_set_vcore_level_v2,
+	.set_vscp_level = dvfsrc_set_vscp_level_v2,
+	.wait_for_opp_level = dvfsrc_wait_for_opp_level_v4,
+	.wait_for_vcore_level = dvfsrc_wait_for_vcore_level_v4,
+	.bw_max_constraints = dvfsrc_bw_max_constr_v2,
+	.bw_min_constraints = dvfsrc_bw_min_constr_none,
 };

 static const struct of_device_id mtk_dvfsrc_of_match[] = {
 	{ .compatible = "mediatek,mt6893-dvfsrc", .data = &mt6893_data },
 	{ .compatible = "mediatek,mt8183-dvfsrc", .data = &mt8183_data },
 	{ .compatible = "mediatek,mt8195-dvfsrc", .data = &mt8195_data },
+	{ .compatible = "mediatek,mt8196-dvfsrc", .data = &mt8196_data },
 	{ /* sentinel */ }
 };
```
drivers/soc/mediatek/mtk-socinfo.c (+1)
```diff
 	MTK_SOCINFO_ENTRY("MT8195", "MT8195TV/EZA", "Kompanio 1380", 0x81950400, CELL_NOT_USED),
 	MTK_SOCINFO_ENTRY("MT8195", "MT8195TV/EHZA", "Kompanio 1380", 0x81950404, CELL_NOT_USED),
 	MTK_SOCINFO_ENTRY("MT8370", "MT8370AV/AZA", "Genio 510", 0x83700000, 0x00000081),
+	MTK_SOCINFO_ENTRY("MT8371", "MT8371AV/AZA", "Genio 520", 0x83710000, 0x00000081),
 	MTK_SOCINFO_ENTRY("MT8390", "MT8390AV/AZA", "Genio 700", 0x83900000, 0x00000080),
 	MTK_SOCINFO_ENTRY("MT8391", "MT8391AV/AZA", "Genio 720", 0x83910000, 0x00000080),
 	MTK_SOCINFO_ENTRY("MT8395", "MT8395AV/ZA", "Genio 1200", 0x83950100, CELL_NOT_USED),
```
+2 -3
drivers/soc/mediatek/mtk-svs.c
···
 #include <linux/bits.h>
 #include <linux/clk.h>
 #include <linux/completion.h>
+#include <linux/cleanup.h>
 #include <linux/cpu.h>
 #include <linux/cpuidle.h>
 #include <linux/debugfs.h>
···
 	struct svs_bank *svsb = file_inode(filp)->i_private;
 	struct svs_platform *svsp = dev_get_drvdata(svsb->dev);
 	int enabled, ret;
-	char *buf = NULL;
+	char *buf __free(kfree) = NULL;
 
 	if (count >= PAGE_SIZE)
 		return -EINVAL;
···
 		svs_bank_disable_and_restore_default_volts(svsp, svsb);
 		svsb->mode_support = SVSB_MODE_ALL_DISABLE;
 	}
-
-	kfree(buf);
 
 	return count;
 }
+4 -3
drivers/soc/qcom/cmd-db.c
···
 		return -EINVAL;
 	}
 
-	cmd_db_header = memremap(rmem->base, rmem->size, MEMREMAP_WC);
-	if (!cmd_db_header) {
-		ret = -ENOMEM;
+	cmd_db_header = devm_memremap(&pdev->dev, rmem->base, rmem->size, MEMREMAP_WC);
+	if (IS_ERR(cmd_db_header)) {
+		ret = PTR_ERR(cmd_db_header);
 		cmd_db_header = NULL;
 		return ret;
 	}
 
 	if (!cmd_db_magic_matches(cmd_db_header)) {
 		dev_err(&pdev->dev, "Invalid Command DB Magic\n");
+		cmd_db_header = NULL;
 		return -EINVAL;
 	}
+207
drivers/soc/qcom/llcc-qcom.c
···
 	LLCC_TRP_WRS_CACHEABLE_EN,
 };
 
+static const struct llcc_slice_config glymur_data[] = {
+	{
+		.usecase_id = LLCC_CPUSS,
+		.slice_id = 1,
+		.max_cap = 7680,
+		.priority = 1,
+		.bonus_ways = 0xFFF,
+		.res_ways = 0x0,
+		.vict_prio = true,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_VIDSC0,
+		.slice_id = 2,
+		.max_cap = 512,
+		.priority = 3,
+		.fixed_size = true,
+		.bonus_ways = 0xFFF,
+		.res_ways = 0x0,
+		.vict_prio = true,
+	}, {
+		.usecase_id = LLCC_AUDIO,
+		.slice_id = 6,
+		.max_cap = 1024,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xFFF,
+		.res_ways = 0x0,
+		.vict_prio = true,
+	}, {
+		.usecase_id = LLCC_VIDSC1,
+		.slice_id = 4,
+		.max_cap = 512,
+		.priority = 3,
+		.fixed_size = true,
+		.bonus_ways = 0xFFF,
+		.res_ways = 0x0,
+		.vict_prio = true,
+	}, {
+		.usecase_id = LLCC_CMPT,
+		.slice_id = 10,
+		.max_cap = 7680,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xFFF,
+		.res_ways = 0x0,
+		.vict_prio = true,
+	}, {
+		.usecase_id = LLCC_GPUHTW,
+		.slice_id = 11,
+		.max_cap = 512,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xFFF,
+		.res_ways = 0x0,
+		.vict_prio = true,
+	}, {
+		.usecase_id = LLCC_GPU,
+		.slice_id = 9,
+		.max_cap = 7680,
+		.priority = 1,
+		.bonus_ways = 0xFFF,
+		.res_ways = 0x0,
+		.write_scid_en = true,
+		.write_scid_cacheable_en = true,
+		.stale_en = true,
+		.vict_prio = true,
+	}, {
+		.usecase_id = LLCC_MMUHWT,
+		.slice_id = 18,
+		.max_cap = 768,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xFFF,
+		.res_ways = 0x0,
+		.vict_prio = true,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_AUDHW,
+		.slice_id = 22,
+		.max_cap = 1024,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xFFF,
+		.res_ways = 0x0,
+		.vict_prio = true,
+	}, {
+		.usecase_id = LLCC_CVP,
+		.slice_id = 8,
+		.max_cap = 64,
+		.priority = 3,
+		.fixed_size = true,
+		.bonus_ways = 0xFFF,
+		.res_ways = 0x0,
+		.vict_prio = true,
+	}, {
+		.usecase_id = LLCC_WRCACHE,
+		.slice_id = 31,
+		.max_cap = 1536,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xFFF,
+		.res_ways = 0x0,
+		.vict_prio = true,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_CMPTHCP,
+		.slice_id = 17,
+		.max_cap = 256,
+		.priority = 3,
+		.fixed_size = true,
+		.bonus_ways = 0xFFF,
+		.res_ways = 0x0,
+		.vict_prio = true,
+	}, {
+		.usecase_id = LLCC_LCPDARE,
+		.slice_id = 30,
+		.max_cap = 768,
+		.priority = 3,
+		.fixed_size = true,
+		.bonus_ways = 0xFFF,
+		.res_ways = 0x0,
+		.alloc_oneway_en = true,
+		.vict_prio = true,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_AENPU,
+		.slice_id = 3,
+		.max_cap = 3072,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xFFF,
+		.res_ways = 0x0,
+		.cache_mode = 2,
+		.vict_prio = true,
+	}, {
+		.usecase_id = LLCC_ISLAND1,
+		.slice_id = 12,
+		.max_cap = 5632,
+		.priority = 7,
+		.fixed_size = true,
+		.bonus_ways = 0x0,
+		.res_ways = 0x7FF,
+		.vict_prio = true,
+	}, {
+		.usecase_id = LLCC_VIDVSP,
+		.slice_id = 28,
+		.max_cap = 256,
+		.priority = 3,
+		.fixed_size = true,
+		.bonus_ways = 0xFFF,
+		.res_ways = 0x0,
+		.vict_prio = true,
+	}, {
+		.usecase_id = LLCC_OOBM_NS,
+		.slice_id = 5,
+		.max_cap = 512,
+		.priority = 1,
+		.bonus_ways = 0xFFF,
+		.res_ways = 0x0,
+		.vict_prio = true,
+	}, {
+		.usecase_id = LLCC_CPUSS_OPP,
+		.slice_id = 32,
+		.max_cap = 0,
+		.fixed_size = true,
+		.bonus_ways = 0x0,
+		.res_ways = 0x0,
+		.vict_prio = true,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_PCIE_TCU,
+		.slice_id = 19,
+		.max_cap = 256,
+		.priority = 1,
+		.fixed_size = true,
+		.bonus_ways = 0xFFF,
+		.res_ways = 0x0,
+		.vict_prio = true,
+		.activate_on_init = true,
+	}, {
+		.usecase_id = LLCC_VIDSC_VSP1,
+		.slice_id = 29,
+		.max_cap = 256,
+		.priority = 3,
+		.fixed_size = true,
+		.bonus_ways = 0xFFF,
+		.res_ways = 0x0,
+		.vict_prio = true,
+	}
+};
+
 static const struct llcc_slice_config ipq5424_data[] = {
 	{
 		.usecase_id = LLCC_CPUSS,
···
 	},
 };
 
+static const struct qcom_llcc_config glymur_cfg[] = {
+	{
+		.sct_data = glymur_data,
+		.size = ARRAY_SIZE(glymur_data),
+		.reg_offset = llcc_v6_reg_offset,
+		.edac_reg_offset = &llcc_v2_1_edac_reg_offset,
+		.no_edac = true,
+	},
+};
+
 static const struct qcom_llcc_config qcs615_cfg[] = {
 	{
 		.sct_data = qcs615_data,
···
 static const struct qcom_sct_config kaanapali_cfgs = {
 	.llcc_config = kaanapali_cfg,
 	.num_config = ARRAY_SIZE(kaanapali_cfg),
+};
+
+static const struct qcom_sct_config glymur_cfgs = {
+	.llcc_config = glymur_cfg,
+	.num_config = ARRAY_SIZE(glymur_cfg),
 };
 
 static const struct qcom_sct_config qcs615_cfgs = {
···
 }
 
 static const struct of_device_id qcom_llcc_of_match[] = {
+	{ .compatible = "qcom,glymur-llcc", .data = &glymur_cfgs },
 	{ .compatible = "qcom,ipq5424-llcc", .data = &ipq5424_cfgs},
 	{ .compatible = "qcom,kaanapali-llcc", .data = &kaanapali_cfgs},
 	{ .compatible = "qcom,qcs615-llcc", .data = &qcs615_cfgs},
+35 -16
drivers/soc/qcom/mdt_loader.c
···
 }
 EXPORT_SYMBOL_GPL(qcom_mdt_read_metadata);
 
-/**
- * qcom_mdt_pas_init() - initialize PAS region for firmware loading
- * @dev: device handle to associate resources with
- * @fw: firmware object for the mdt file
- * @fw_name: name of the firmware, for construction of segment file names
- * @pas_id: PAS identifier
- * @mem_phys: physical address of allocated memory region
- * @ctx: PAS metadata context, to be released by caller
- *
- * Returns 0 on success, negative errno otherwise.
- */
-int qcom_mdt_pas_init(struct device *dev, const struct firmware *fw,
-		      const char *fw_name, int pas_id, phys_addr_t mem_phys,
-		      struct qcom_scm_pas_metadata *ctx)
+static int __qcom_mdt_pas_init(struct device *dev, const struct firmware *fw,
+			       const char *fw_name, int pas_id, phys_addr_t mem_phys,
+			       struct qcom_scm_pas_context *ctx)
 {
 	const struct elf32_phdr *phdrs;
 	const struct elf32_phdr *phdr;
···
 out:
 	return ret;
 }
-EXPORT_SYMBOL_GPL(qcom_mdt_pas_init);
 
 static bool qcom_mdt_bins_are_split(const struct firmware *fw)
 {
···
 {
 	int ret;
 
-	ret = qcom_mdt_pas_init(dev, fw, fw_name, pas_id, mem_phys, NULL);
+	ret = __qcom_mdt_pas_init(dev, fw, fw_name, pas_id, mem_phys, NULL);
 	if (ret)
 		return ret;
 
···
 			mem_size, reloc_base);
 }
 EXPORT_SYMBOL_GPL(qcom_mdt_load);
+
+/**
+ * qcom_mdt_pas_load - Loads and authenticates the metadata of the firmware
+ * (typically contained in the .mdt file), followed by loading the actual
+ * firmware segments (e.g., .bXX files). Authentication of the segments is
+ * done by a separate call.
+ *
+ * The PAS context must be initialized using qcom_scm_pas_context_init()
+ * prior to invoking this function.
+ *
+ * @ctx: Pointer to the PAS (Peripheral Authentication Service) context
+ * @fw: Firmware object representing the .mdt file
+ * @firmware: Name of the firmware used to construct segment file names
+ * @mem_region: Memory region allocated for loading the firmware
+ * @reloc_base: Physical address adjusted after relocation
+ *
+ * Return: 0 on success or a negative error code on failure.
+ */
+int qcom_mdt_pas_load(struct qcom_scm_pas_context *ctx, const struct firmware *fw,
+		      const char *firmware, void *mem_region, phys_addr_t *reloc_base)
+{
+	int ret;
+
+	ret = __qcom_mdt_pas_init(ctx->dev, fw, firmware, ctx->pas_id, ctx->mem_phys, ctx);
+	if (ret)
+		return ret;
+
+	return qcom_mdt_load_no_init(ctx->dev, fw, firmware, mem_region, ctx->mem_phys,
+				     ctx->mem_size, reloc_base);
+}
+EXPORT_SYMBOL_GPL(qcom_mdt_pas_load);
 
 MODULE_DESCRIPTION("Firmware parser for Qualcomm MDT format");
 MODULE_LICENSE("GPL v2");
+118 -19
drivers/soc/qcom/qmi_encdec.c
···
 		*p_length |= ((u8)*p_src) << 8; \
 	} while (0)
 
-#define QMI_ENCDEC_ENCODE_N_BYTES(p_dst, p_src, size) \
+#define QMI_ENCDEC_ENCODE_U8(p_dst, p_src) \
 do { \
-	memcpy(p_dst, p_src, size); \
-	p_dst = (u8 *)p_dst + size; \
-	p_src = (u8 *)p_src + size; \
+	memcpy(p_dst, p_src, sizeof(u8)); \
+	p_dst = (u8 *)p_dst + sizeof(u8); \
+	p_src = (u8 *)p_src + sizeof(u8); \
 } while (0)
 
-#define QMI_ENCDEC_DECODE_N_BYTES(p_dst, p_src, size) \
+#define QMI_ENCDEC_ENCODE_U16(p_dst, p_src) \
 do { \
-	memcpy(p_dst, p_src, size); \
-	p_dst = (u8 *)p_dst + size; \
-	p_src = (u8 *)p_src + size; \
+	*(__le16 *)p_dst = __cpu_to_le16(*(u16 *)p_src); \
+	p_dst = (u8 *)p_dst + sizeof(u16); \
+	p_src = (u8 *)p_src + sizeof(u16); \
+} while (0)
+
+#define QMI_ENCDEC_ENCODE_U32(p_dst, p_src) \
+do { \
+	*(__le32 *)p_dst = __cpu_to_le32(*(u32 *)p_src); \
+	p_dst = (u8 *)p_dst + sizeof(u32); \
+	p_src = (u8 *)p_src + sizeof(u32); \
+} while (0)
+
+#define QMI_ENCDEC_ENCODE_U64(p_dst, p_src) \
+do { \
+	*(__le64 *)p_dst = __cpu_to_le64(*(u64 *)p_src); \
+	p_dst = (u8 *)p_dst + sizeof(u64); \
+	p_src = (u8 *)p_src + sizeof(u64); \
+} while (0)
+
+#define QMI_ENCDEC_DECODE_U8(p_dst, p_src) \
+do { \
+	memcpy(p_dst, p_src, sizeof(u8)); \
+	p_dst = (u8 *)p_dst + sizeof(u8); \
+	p_src = (u8 *)p_src + sizeof(u8); \
+} while (0)
+
+#define QMI_ENCDEC_DECODE_U16(p_dst, p_src) \
+do { \
+	*(u16 *)p_dst = __le16_to_cpu(*(__le16 *)p_src); \
+	p_dst = (u8 *)p_dst + sizeof(u16); \
+	p_src = (u8 *)p_src + sizeof(u16); \
+} while (0)
+
+#define QMI_ENCDEC_DECODE_U32(p_dst, p_src) \
+do { \
+	*(u32 *)p_dst = __le32_to_cpu(*(__le32 *)p_src); \
+	p_dst = (u8 *)p_dst + sizeof(u32); \
+	p_src = (u8 *)p_src + sizeof(u32); \
+} while (0)
+
+#define QMI_ENCDEC_DECODE_U64(p_dst, p_src) \
+do { \
+	*(u64 *)p_dst = __le64_to_cpu(*(__le64 *)p_src); \
+	p_dst = (u8 *)p_dst + sizeof(u64); \
+	p_src = (u8 *)p_src + sizeof(u64); \
 } while (0)
 
 #define UPDATE_ENCODE_VARIABLES(temp_si, buf_dst, \
···
 * of primary data type which include u8 - u64 or similar. This
 * function returns the number of bytes of encoded information.
 *
- * Return: The number of bytes of encoded information.
+ * Return: The number of bytes of encoded information on success or negative
+ * errno on error.
 */
 static int qmi_encode_basic_elem(void *buf_dst, const void *buf_src,
				 u32 elem_len, u32 elem_size)
···
 	u32 i, rc = 0;
 
 	for (i = 0; i < elem_len; i++) {
-		QMI_ENCDEC_ENCODE_N_BYTES(buf_dst, buf_src, elem_size);
+		switch (elem_size) {
+		case sizeof(u8):
+			QMI_ENCDEC_ENCODE_U8(buf_dst, buf_src);
+			break;
+		case sizeof(u16):
+			QMI_ENCDEC_ENCODE_U16(buf_dst, buf_src);
+			break;
+		case sizeof(u32):
+			QMI_ENCDEC_ENCODE_U32(buf_dst, buf_src);
+			break;
+		case sizeof(u64):
+			QMI_ENCDEC_ENCODE_U64(buf_dst, buf_src);
+			break;
+		default:
+			pr_err("%s: Unrecognized element size\n", __func__);
+			return -EINVAL;
+		}
+
 		rc += elem_size;
 	}
···
 		}
 		rc = qmi_encode_basic_elem(buf_dst, &string_len,
					   1, string_len_sz);
+		if (rc < 0)
+			return rc;
 		encoded_bytes += rc;
 	}
 
 	rc = qmi_encode_basic_elem(buf_dst + encoded_bytes, buf_src,
				   string_len, temp_ei->elem_size);
+	if (rc < 0)
+		return rc;
 	encoded_bytes += rc;
 
 	return encoded_bytes;
···
 	case QMI_OPT_FLAG:
 		rc = qmi_encode_basic_elem(&opt_flag_value, buf_src,
					   1, sizeof(u8));
+		if (rc < 0)
+			return rc;
 		if (opt_flag_value)
 			temp_ei = temp_ei + 1;
 		else
···
 		break;
 
 	case QMI_DATA_LEN:
+		memcpy(&data_len_value, buf_src, sizeof(u32));
 		data_len_sz = temp_ei->elem_size == sizeof(u8) ?
			      sizeof(u8) : sizeof(u16);
 		/* Check to avoid out of range buffer access */
···
 			return -ETOOSMALL;
 		}
 		if (data_len_sz == sizeof(u8)) {
-			val8 = *(u8 *)buf_src;
-			data_len_value = (u32)val8;
+			val8 = data_len_value;
 			rc = qmi_encode_basic_elem(buf_dst, &val8,
						   1, data_len_sz);
+			if (rc < 0)
+				return rc;
 		} else {
-			val16 = *(u16 *)buf_src;
-			data_len_value = (u32)le16_to_cpu(val16);
+			val16 = data_len_value;
 			rc = qmi_encode_basic_elem(buf_dst, &val16,
						   1, data_len_sz);
+			if (rc < 0)
+				return rc;
 		}
 		UPDATE_ENCODE_VARIABLES(temp_ei, buf_dst,
					encoded_bytes, tlv_len,
···
 		rc = qmi_encode_basic_elem(buf_dst, buf_src,
					   data_len_value,
					   temp_ei->elem_size);
+		if (rc < 0)
+			return rc;
 		UPDATE_ENCODE_VARIABLES(temp_ei, buf_dst,
					encoded_bytes, tlv_len,
					encode_tlv, rc);
···
 * of primary data type which include u8 - u64 or similar. This
 * function returns the number of bytes of decoded information.
 *
- * Return: The total size of the decoded data elements, in bytes.
+ * Return: The total size of the decoded data elements, in bytes, on success or
+ * negative errno on error.
 */
 static int qmi_decode_basic_elem(void *buf_dst, const void *buf_src,
				 u32 elem_len, u32 elem_size)
···
 	u32 i, rc = 0;
 
 	for (i = 0; i < elem_len; i++) {
-		QMI_ENCDEC_DECODE_N_BYTES(buf_dst, buf_src, elem_size);
+		switch (elem_size) {
+		case sizeof(u8):
+			QMI_ENCDEC_DECODE_U8(buf_dst, buf_src);
+			break;
+		case sizeof(u16):
+			QMI_ENCDEC_DECODE_U16(buf_dst, buf_src);
+			break;
+		case sizeof(u32):
+			QMI_ENCDEC_DECODE_U32(buf_dst, buf_src);
+			break;
+		case sizeof(u64):
+			QMI_ENCDEC_DECODE_U64(buf_dst, buf_src);
+			break;
+		default:
+			pr_err("%s: Unrecognized element size\n", __func__);
+			return -EINVAL;
+		}
+
 		rc += elem_size;
 	}
···
 	if (string_len_sz == sizeof(u8)) {
 		rc = qmi_decode_basic_elem(&val8, buf_src,
					   1, string_len_sz);
+		if (rc < 0)
+			return rc;
 		string_len = (u32)val8;
 	} else {
 		rc = qmi_decode_basic_elem(&val16, buf_src,
					   1, string_len_sz);
+		if (rc < 0)
+			return rc;
 		string_len = (u32)val16;
 	}
 	decoded_bytes += rc;
···
 
 	rc = qmi_decode_basic_elem(buf_dst, buf_src + decoded_bytes,
				   string_len, temp_ei->elem_size);
+	if (rc < 0)
+		return rc;
 	*((char *)buf_dst + string_len) = '\0';
 	decoded_bytes += rc;
···
 	int rc;
 	u8 val8;
 	u16 val16;
-	u32 val32;
 
 	while (decoded_bytes < in_buf_len) {
 		if (dec_level >= 2 && temp_ei->data_type == QMI_EOTI)
···
 		if (data_len_sz == sizeof(u8)) {
 			rc = qmi_decode_basic_elem(&val8, buf_src,
						   1, data_len_sz);
+			if (rc < 0)
+				return rc;
 			data_len_value = (u32)val8;
 		} else {
 			rc = qmi_decode_basic_elem(&val16, buf_src,
						   1, data_len_sz);
+			if (rc < 0)
+				return rc;
 			data_len_value = (u32)val16;
 		}
-		val32 = cpu_to_le32(data_len_value);
-		memcpy(buf_dst, &val32, sizeof(u32));
+		memcpy(buf_dst, &data_len_value, sizeof(u32));
 		temp_ei = temp_ei + 1;
 		buf_dst = out_c_struct + temp_ei->offset;
 		tlv_len -= data_len_sz;
···
 		rc = qmi_decode_basic_elem(buf_dst, buf_src,
					   data_len_value,
					   temp_ei->elem_size);
+		if (rc < 0)
+			return rc;
 		UPDATE_DECODE_VARIABLES(buf_src, decoded_bytes, rc);
 		break;
+3 -1
drivers/soc/qcom/smem.c
···
 		smem->item_count = qcom_smem_get_item_count(smem);
 		break;
 	case SMEM_GLOBAL_HEAP_VERSION:
-		qcom_smem_map_global(smem, size);
+		ret = qcom_smem_map_global(smem, size);
+		if (ret < 0)
+			return ret;
 		smem->item_count = SMEM_ITEM_COUNT;
 		break;
 	default:
+5
drivers/soc/renesas/Kconfig
···
 	select PM
 	select PM_GENERIC_DOMAINS
 	select ARM_AMBA
+	select RZN1_IRQMUX if GPIO_DWAPB
 
 if ARM && ARCH_RENESAS
···
 config ARCH_R9A09G087
 	bool "ARM64 Platform support for R9A09G087 (RZ/N2H)"
 	default y if ARCH_RENESAS
+	select RENESAS_RZT2H_ICU
 	help
 	  This enables support for the Renesas RZ/N2H SoC variants.
···
 config RST_RCAR
 	bool "Reset Controller support for R-Car" if COMPILE_TEST
+
+config RZN1_IRQMUX
+	bool "Renesas RZ/N1 GPIO IRQ multiplexer support" if COMPILE_TEST
 
 config SYSC_RZ
 	bool "System controller for RZ SoCs" if COMPILE_TEST
+1
drivers/soc/renesas/Makefile
··· 14 14 # Family 15 15 obj-$(CONFIG_PWC_RZV2M) += pwc-rzv2m.o 16 16 obj-$(CONFIG_RST_RCAR) += rcar-rst.o 17 + obj-$(CONFIG_RZN1_IRQMUX) += rzn1_irqmux.o 17 18 obj-$(CONFIG_SYSC_RZ) += rz-sysc.o
+127
drivers/soc/renesas/rzn1_irqmux.c
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * RZ/N1 GPIO Interrupt Multiplexer
+ *
+ * Copyright 2025 Schneider Electric
+ * Author: Herve Codina <herve.codina@bootlin.com>
+ */
+
+#include <linux/bitmap.h>
+#include <linux/bitops.h>
+#include <linux/mod_devicetable.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_irq.h>
+#include <linux/platform_device.h>
+#include <dt-bindings/interrupt-controller/arm-gic.h>
+
+/*
+ * Up to 8 output lines are connected to GIC SPI interrupt controller
+ * starting at IRQ 103.
+ */
+#define RZN1_IRQMUX_GIC_SPI_BASE	103
+#define RZN1_IRQMUX_NUM_OUTPUTS		8
+
+static int rzn1_irqmux_parent_args_to_line_index(struct device *dev,
+						 const struct of_phandle_args *parent_args)
+{
+	/*
+	 * The parent interrupt should be one of the GIC controller.
+	 * Three arguments must be provided.
+	 *  - args[0]: GIC_SPI
+	 *  - args[1]: The GIC interrupt number
+	 *  - args[2]: The interrupt flags
+	 *
+	 * We retrieve the line index based on the GIC interrupt number
+	 * provided.
+	 */
+
+	if (parent_args->args_count != 3 || parent_args->args[0] != GIC_SPI) {
+		dev_err(dev, "Invalid interrupt-map item\n");
+		return -EINVAL;
+	}
+
+	if (parent_args->args[1] < RZN1_IRQMUX_GIC_SPI_BASE ||
+	    parent_args->args[1] >= RZN1_IRQMUX_GIC_SPI_BASE + RZN1_IRQMUX_NUM_OUTPUTS) {
+		dev_err(dev, "Invalid GIC interrupt %u\n", parent_args->args[1]);
+		return -EINVAL;
+	}
+
+	return parent_args->args[1] - RZN1_IRQMUX_GIC_SPI_BASE;
+}
+
+static int rzn1_irqmux_probe(struct platform_device *pdev)
+{
+	DECLARE_BITMAP(index_done, RZN1_IRQMUX_NUM_OUTPUTS) = {};
+	struct device *dev = &pdev->dev;
+	struct device_node *np = dev->of_node;
+	struct of_imap_parser imap_parser;
+	struct of_imap_item imap_item;
+	u32 __iomem *regs;
+	int index;
+	int ret;
+	u32 tmp;
+
+	regs = devm_platform_ioremap_resource(pdev, 0);
+	if (IS_ERR(regs))
+		return PTR_ERR(regs);
+
+	/* We support only #interrupt-cells = <1> and #address-cells = <0> */
+	ret = of_property_read_u32(np, "#interrupt-cells", &tmp);
+	if (ret)
+		return ret;
+	if (tmp != 1)
+		return -EINVAL;
+
+	ret = of_property_read_u32(np, "#address-cells", &tmp);
+	if (ret)
+		return ret;
+	if (tmp != 0)
+		return -EINVAL;
+
+	ret = of_imap_parser_init(&imap_parser, np, &imap_item);
+	if (ret)
+		return ret;
+
+	for_each_of_imap_item(&imap_parser, &imap_item) {
+		index = rzn1_irqmux_parent_args_to_line_index(dev, &imap_item.parent_args);
+		if (index < 0) {
+			of_node_put(imap_item.parent_args.np);
+			return index;
+		}
+
+		if (test_and_set_bit(index, index_done)) {
+			of_node_put(imap_item.parent_args.np);
+			dev_err(dev, "Mux output line %d already defined in interrupt-map\n",
+				index);
+			return -EINVAL;
+		}
+
+		/*
+		 * The child #address-cells is 0 (already checked). The first
+		 * value in imap item is the src hwirq.
+		 */
+		writel(imap_item.child_imap[0], regs + index);
+	}
+
+	return 0;
+}
+
+static const struct of_device_id rzn1_irqmux_of_match[] = {
+	{ .compatible = "renesas,rzn1-gpioirqmux", },
+	{ /* sentinel */ }
+};
+MODULE_DEVICE_TABLE(of, rzn1_irqmux_of_match);
+
+static struct platform_driver rzn1_irqmux_driver = {
+	.probe = rzn1_irqmux_probe,
+	.driver = {
+		.name = "rzn1_irqmux",
+		.of_match_table = rzn1_irqmux_of_match,
+	},
+};
+module_platform_driver(rzn1_irqmux_driver);
+
+MODULE_AUTHOR("Herve Codina <herve.codina@bootlin.com>");
+MODULE_DESCRIPTION("Renesas RZ/N1 GPIO IRQ Multiplexer Driver");
+MODULE_LICENSE("GPL");
+24 -25
drivers/soc/rockchip/grf.c
···
 	.num_values = ARRAY_SIZE(rk3576_defaults_sys_grf),
 };
 
-#define RK3576_IOCGRF_MISC_CON		0x04F0
+#define RK3576_IOCGRF_MISC_CON		0x40F0
 
 static const struct rockchip_grf_value rk3576_defaults_ioc_grf[] __initconst = {
 	{ "jtag switching", RK3576_IOCGRF_MISC_CON, FIELD_PREP_WM16_CONST(BIT(1), 0) },
···
 	struct regmap *grf;
 	int ret, i;
 
-	np = of_find_matching_node_and_match(NULL, rockchip_grf_dt_match,
-					     &match);
-	if (!np)
-		return -ENODEV;
-	if (!match || !match->data) {
-		pr_err("%s: missing grf data\n", __func__);
-		of_node_put(np);
-		return -EINVAL;
-	}
+	for_each_matching_node_and_match(np, rockchip_grf_dt_match, &match) {
+		if (!of_device_is_available(np))
+			continue;
+		if (!match || !match->data) {
+			pr_err("%s: missing grf data\n", __func__);
+			of_node_put(np);
+			return -EINVAL;
+		}
 
-	grf_info = match->data;
+		grf_info = match->data;
 
-	grf = syscon_node_to_regmap(np);
-	of_node_put(np);
-	if (IS_ERR(grf)) {
-		pr_err("%s: could not get grf syscon\n", __func__);
-		return PTR_ERR(grf);
-	}
+		grf = syscon_node_to_regmap(np);
+		if (IS_ERR(grf)) {
+			pr_err("%s: could not get grf syscon\n", __func__);
+			return PTR_ERR(grf);
+		}
 
-	for (i = 0; i < grf_info->num_values; i++) {
-		const struct rockchip_grf_value *val = &grf_info->values[i];
+		for (i = 0; i < grf_info->num_values; i++) {
+			const struct rockchip_grf_value *val = &grf_info->values[i];
 
-		pr_debug("%s: adjusting %s in %#6x to %#10x\n", __func__,
-			 val->desc, val->reg, val->val);
-		ret = regmap_write(grf, val->reg, val->val);
-		if (ret < 0)
-			pr_err("%s: write to %#6x failed with %d\n",
-			       __func__, val->reg, ret);
+			pr_debug("%s: adjusting %s in %#6x to %#10x\n", __func__,
+				 val->desc, val->reg, val->val);
+			ret = regmap_write(grf, val->reg, val->val);
+			if (ret < 0)
+				pr_err("%s: write to %#6x failed with %d\n",
+				       __func__, val->reg, ret);
+		}
 	}
 
 	return 0;
+92 -41
drivers/soc/samsung/exynos-chipid.c
···
 #include <linux/array_size.h>
 #include <linux/device.h>
-#include <linux/errno.h>
+#include <linux/device/devres.h>
+#include <linux/err.h>
+#include <linux/ioport.h>
 #include <linux/mfd/syscon.h>
 #include <linux/module.h>
 #include <linux/of.h>
···
 #include "exynos-asv.h"
 
 struct exynos_chipid_variant {
-	unsigned int rev_reg;		/* revision register offset */
+	unsigned int main_rev_reg;	/* main revision register offset */
+	unsigned int sub_rev_reg;	/* sub revision register offset */
 	unsigned int main_rev_shift;	/* main revision offset in rev_reg */
 	unsigned int sub_rev_shift;	/* sub revision offset in rev_reg */
+	bool efuse;
 };
 
 struct exynos_chipid_info {
···
 	{ "EXYNOS990", 0xE9830000 },
 	{ "EXYNOSAUTOV9", 0xAAA80000 },
 	{ "EXYNOSAUTOV920", 0x0A920000 },
+	/* Compatible with: google,gs101-otp */
+	{ "GS101", 0x9845000 },
 };
 
-static const char *product_id_to_soc_id(unsigned int product_id)
+static const char *exynos_product_id_to_name(unsigned int product_id)
 {
 	int i;
···
 	return NULL;
 }
 
-static int exynos_chipid_get_chipid_info(struct regmap *regmap,
-		const struct exynos_chipid_variant *data,
+static int exynos_chipid_get_chipid_info(struct device *dev,
+		struct regmap *regmap, const struct exynos_chipid_variant *data,
 		struct exynos_chipid_info *soc_info)
 {
 	int ret;
···
 	ret = regmap_read(regmap, EXYNOS_CHIPID_REG_PRO_ID, &val);
 	if (ret < 0)
-		return ret;
+		return dev_err_probe(dev, ret, "failed to read Product ID\n");
 	soc_info->product_id = val & EXYNOS_MASK;
 
-	if (data->rev_reg != EXYNOS_CHIPID_REG_PRO_ID) {
-		ret = regmap_read(regmap, data->rev_reg, &val);
+	if (data->sub_rev_reg == EXYNOS_CHIPID_REG_PRO_ID) {
+		/* exynos4210 case */
+		main_rev = (val >> data->main_rev_shift) & EXYNOS_REV_PART_MASK;
+		sub_rev = (val >> data->sub_rev_shift) & EXYNOS_REV_PART_MASK;
+	} else {
+		unsigned int val2;
+
+		ret = regmap_read(regmap, data->sub_rev_reg, &val2);
 		if (ret < 0)
-			return ret;
+			return dev_err_probe(dev, ret,
+					     "failed to read revision\n");
+
+		if (data->main_rev_reg == EXYNOS_CHIPID_REG_PRO_ID)
+			/* gs101 case */
+			main_rev = (val >> data->main_rev_shift) & EXYNOS_REV_PART_MASK;
+		else
+			/* exynos850 case */
+			main_rev = (val2 >> data->main_rev_shift) & EXYNOS_REV_PART_MASK;
+
+		sub_rev = (val2 >> data->sub_rev_shift) & EXYNOS_REV_PART_MASK;
 	}
-	main_rev = (val >> data->main_rev_shift) & EXYNOS_REV_PART_MASK;
-	sub_rev = (val >> data->sub_rev_shift) & EXYNOS_REV_PART_MASK;
+
 	soc_info->revision = (main_rev << EXYNOS_REV_PART_SHIFT) | sub_rev;
 
 	return 0;
+}
+
+static struct regmap *exynos_chipid_get_efuse_regmap(struct platform_device *pdev)
+{
+	struct resource *res;
+	void __iomem *base;
+
+	base = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
+	if (IS_ERR(base))
+		return ERR_CAST(base);
+
+	const struct regmap_config reg_config = {
+		.reg_bits = 32,
+		.reg_stride = 4,
+		.val_bits = 32,
+		.use_relaxed_mmio = true,
+		.max_register = (resource_size(res) - reg_config.reg_stride),
+	};
+
+	return devm_regmap_init_mmio_clk(&pdev->dev, "pclk", base, &reg_config);
+}
+
+static void exynos_chipid_unregister_soc(void *data)
+{
+	soc_device_unregister(data);
 }
 
 static int exynos_chipid_probe(struct platform_device *pdev)
···
 	drv_data = of_device_get_match_data(dev);
 	if (!drv_data)
-		return -EINVAL;
+		return dev_err_probe(dev, -EINVAL,
+				     "failed to get match data\n");
 
-	regmap = device_node_to_regmap(dev->of_node);
+	if (drv_data->efuse)
+		regmap = exynos_chipid_get_efuse_regmap(pdev);
+	else
+		regmap = device_node_to_regmap(dev->of_node);
+
 	if (IS_ERR(regmap))
-		return PTR_ERR(regmap);
+		return dev_err_probe(dev, PTR_ERR(regmap),
+				     "failed to get regmap\n");
 
-	ret = exynos_chipid_get_chipid_info(regmap, drv_data, &soc_info);
+	ret = exynos_chipid_get_chipid_info(dev, regmap, drv_data, &soc_info);
 	if (ret < 0)
 		return ret;
···
 					       soc_info.revision);
 	if (!soc_dev_attr->revision)
 		return -ENOMEM;
-	soc_dev_attr->soc_id = product_id_to_soc_id(soc_info.product_id);
-	if (!soc_dev_attr->soc_id) {
-		pr_err("Unknown SoC\n");
-		return -ENODEV;
-	}
+
+	soc_dev_attr->soc_id = exynos_product_id_to_name(soc_info.product_id);
+	if (!soc_dev_attr->soc_id)
+		return dev_err_probe(dev, -ENODEV, "Unknown SoC\n");
 
 	/* please note that the actual registration will be deferred */
 	soc_dev = soc_device_register(soc_dev_attr);
 	if (IS_ERR(soc_dev))
-		return PTR_ERR(soc_dev);
+		return dev_err_probe(dev, PTR_ERR(soc_dev),
+				     "failed to register to the soc interface\n");
+
+	ret = devm_add_action_or_reset(dev, exynos_chipid_unregister_soc,
+				       soc_dev);
+	if (ret)
+		return dev_err_probe(dev, ret, "failed to add devm action\n");
 
 	ret = exynos_asv_init(dev, regmap);
 	if (ret)
-		goto err;
+		return ret;
 
-	platform_set_drvdata(pdev, soc_dev);
-
-	dev_info(dev, "Exynos: CPU[%s] PRO_ID[0x%x] REV[0x%x] Detected\n",
-		 soc_dev_attr->soc_id, soc_info.product_id, soc_info.revision);
+	dev_dbg(dev, "Exynos: CPU[%s] PRO_ID[0x%x] REV[0x%x] Detected\n",
+		soc_dev_attr->soc_id, soc_info.product_id, soc_info.revision);
 
 	return 0;
-
-err:
-	soc_device_unregister(soc_dev);
-
-	return ret;
-}
-
-static void exynos_chipid_remove(struct platform_device *pdev)
-{
-	struct soc_device *soc_dev = platform_get_drvdata(pdev);
-
-	soc_device_unregister(soc_dev);
 }
 
 static const struct exynos_chipid_variant exynos4210_chipid_drv_data = {
-	.rev_reg	= 0x0,
 	.main_rev_shift	= 4,
 	.sub_rev_shift	= 0,
 };
 
 static const struct exynos_chipid_variant exynos850_chipid_drv_data = {
-	.rev_reg	= 0x10,
+	.main_rev_reg	= 0x10,
+	.sub_rev_reg	= 0x10,
 	.main_rev_shift	= 20,
 	.sub_rev_shift	= 16,
 };
 
+static const struct exynos_chipid_variant gs101_chipid_drv_data = {
+	.sub_rev_reg	= 0x10,
+	.sub_rev_shift	= 16,
+	.efuse		= true,
+};
+
 static const struct of_device_id exynos_chipid_of_device_ids[] = {
 	{
+		.compatible	= "google,gs101-otp",
+		.data		= &gs101_chipid_drv_data,
+	}, {
 		.compatible	= "samsung,exynos4210-chipid",
 		.data		= &exynos4210_chipid_drv_data,
 	}, {
···
 		.of_match_table = exynos_chipid_of_device_ids,
 	},
 	.probe	= exynos_chipid_probe,
-	.remove	= exynos_chipid_remove,
 };
 module_platform_driver(exynos_chipid_driver);
+329 -104
drivers/soc/tegra/pmc.c
···
 #include <linux/iopoll.h>
 #include <linux/irqdomain.h>
 #include <linux/irq.h>
+#include <linux/irq_work.h>
 #include <linux/kernel.h>
 #include <linux/of_address.h>
 #include <linux/of_clk.h>
···
 #define TEGRA_SMC_PMC_WRITE	0xbb
 
 struct pmc_clk {
-	struct clk_hw hw;
-	unsigned long offs;
-	u32 mux_shift;
-	u32 force_en_shift;
+	struct clk_hw hw;
+	struct tegra_pmc *pmc;
+	unsigned long offs;
+	u32 mux_shift;
+	u32 force_en_shift;
 };
 
 #define to_pmc_clk(_hw) container_of(_hw, struct pmc_clk, hw)
 
 struct pmc_clk_gate {
-	struct clk_hw hw;
-	unsigned long offs;
-	u32 shift;
+	struct clk_hw hw;
+	struct tegra_pmc *pmc;
+	unsigned long offs;
+	u32 shift;
 };
 
 #define to_pmc_clk_gate(_hw) container_of(_hw, struct pmc_clk_gate, hw)
···
 		.force_en_shift = 18,
 	},
 };
+
+struct tegra_pmc_core_pd {
+	struct generic_pm_domain genpd;
+	struct tegra_pmc *pmc;
+};
+
+static inline struct tegra_pmc_core_pd *
+to_core_pd(struct generic_pm_domain *genpd)
+{
+	return container_of(genpd, struct tegra_pmc_core_pd, genpd);
+}
 
 struct tegra_powergate {
 	struct generic_pm_domain genpd;
···
 	unsigned long *wake_type_dual_edge_map;
 	unsigned long *wake_sw_status_map;
 	unsigned long *wake_cntrl_level_map;
+
+	struct notifier_block reboot_notifier;
 	struct syscore syscore;
+
+	/* Pending wake IRQ processing */
+	struct irq_work wake_work;
+	u32 *wake_status;
 };
 
 static struct tegra_pmc *pmc = &(struct tegra_pmc) {
···
 	writel(value, pmc->scratch + offset);
 }
 
-/*
- * TODO Figure out a way to call this with the struct tegra_pmc * passed in.
- * This currently doesn't work because readx_poll_timeout() can only operate
- * on functions that take a single argument.
- */
-static inline bool tegra_powergate_state(int id)
+static inline bool tegra_powergate_state(struct tegra_pmc *pmc, int id)
 {
 	if (id == TEGRA_POWERGATE_3D && pmc->soc->has_gpu_clamps)
 		return (tegra_pmc_readl(pmc, GPU_RG_CNTRL) & 0x1) == 0;
···
 	tegra_pmc_writel(pmc, PWRGATE_TOGGLE_START | id, PWRGATE_TOGGLE);
 
 	/* wait for PMC to execute the command */
-	ret = readx_poll_timeout(tegra_powergate_state, id, status,
-				 status == new_state, 1, 10);
+	ret = read_poll_timeout(tegra_powergate_state, status,
+				status == new_state, 1, 10, false,
+				pmc, id);
 	} while (ret == -ETIMEDOUT && retries--);
 
 	return ret;
···
 		return err;
 
 	/* wait for PMC to execute the command */
-	err = readx_poll_timeout(tegra_powergate_state, id, status,
-				 status == new_state, 10, 100000);
+	err = read_poll_timeout(tegra_powergate_state, status,
+				status == new_state, 10, 100000, false,
+				pmc, id);
 	if (err)
 		return err;
···
 
 	mutex_lock(&pmc->powergates_lock);
 
-	if (tegra_powergate_state(id) == new_state) {
+	if (tegra_powergate_state(pmc, id) == new_state) {
 		mutex_unlock(&pmc->powergates_lock);
 		return 0;
 	}
···
 	return err;
 }
 
+static void tegra_pmc_put_device(void *data)
+{
+	struct tegra_pmc *pmc = data;
+
+	put_device(pmc->dev);
+}
+
+static const struct of_device_id tegra_pmc_match[];
+
+static struct tegra_pmc *tegra_pmc_get(struct device *dev)
+{
+	struct platform_device *pdev;
+	struct device_node *np;
+	struct tegra_pmc *pmc;
+
+	np = of_parse_phandle(dev->of_node, "nvidia,pmc", 0);
+	if (!np) {
+		struct device_node *parent = of_node_get(dev->of_node);
+
+		while ((parent = of_get_next_parent(parent)) != NULL) {
+			np = of_find_matching_node(parent, tegra_pmc_match);
+			if (np)
+				break;
+		}
+
+		of_node_put(parent);
+
+		if (!np)
+			return ERR_PTR(-ENODEV);
+	}
+
+	pdev = of_find_device_by_node(np);
+	of_node_put(np);
+
+	if (!pdev)
+		return ERR_PTR(-ENODEV);
+
+	pmc = platform_get_drvdata(pdev);
+	if (!pmc) {
+		put_device(&pdev->dev);
+		return ERR_PTR(-EPROBE_DEFER);
+	}
+
+	return pmc;
+}
+
 /**
- * tegra_powergate_power_on() - power on partition
+ * tegra_pmc_get() - find the PMC for a given device
+ * @dev: device for which to find the PMC
+ *
+ * Returns a pointer to the PMC on success or an ERR_PTR()-encoded error code
+ * otherwise.
+ */
+struct tegra_pmc *devm_tegra_pmc_get(struct device *dev)
+{
+	struct tegra_pmc *pmc;
+	int err;
+
+	pmc = tegra_pmc_get(dev);
+	if (IS_ERR(pmc))
+		return pmc;
+
+	err = devm_add_action_or_reset(dev, tegra_pmc_put_device, pmc);
+	if (err < 0)
+		return ERR_PTR(err);
+
+	return pmc;
+}
+EXPORT_SYMBOL(devm_tegra_pmc_get);
+
+/**
+ * tegra_pmc_powergate_power_on() - power on partition
+ * @pmc: power management controller
  * @id: partition ID
  */
-int tegra_powergate_power_on(unsigned int id)
+int tegra_pmc_powergate_power_on(struct tegra_pmc *pmc, unsigned int id)
 {
 	if (!tegra_powergate_is_available(pmc, id))
 		return -EINVAL;
 
 	return tegra_powergate_set(pmc, id, true);
 }
+EXPORT_SYMBOL(tegra_pmc_powergate_power_on);
+
+/**
+ * tegra_powergate_power_on() - power on partition
+ * @id: partition ID
+ */
+int tegra_powergate_power_on(unsigned int id)
+{
+	return tegra_pmc_powergate_power_on(pmc, id);
+}
 EXPORT_SYMBOL(tegra_powergate_power_on);
+
+/**
+ * tegra_pmc_powergate_power_off() - power off partition
+ * @pmc: power management controller
+ * @id: partition ID
+ */
+int tegra_pmc_powergate_power_off(struct tegra_pmc *pmc, unsigned int id)
+{
+	if (!tegra_powergate_is_available(pmc, id))
+		return -EINVAL;
+
+	return tegra_powergate_set(pmc, id, false);
+}
+EXPORT_SYMBOL(tegra_pmc_powergate_power_off);
 
 /**
  * tegra_powergate_power_off() - power off partition
···
  */
 int tegra_powergate_power_off(unsigned int id)
 {
-	if (!tegra_powergate_is_available(pmc, id))
-		return -EINVAL;
-
-	return tegra_powergate_set(pmc, id, false);
+	return tegra_pmc_powergate_power_off(pmc, id);
 }
 EXPORT_SYMBOL(tegra_powergate_power_off);
···
 	if (!tegra_powergate_is_valid(pmc, id))
 		return -EINVAL;
 
-	return tegra_powergate_state(id);
+	return tegra_powergate_state(pmc, id);
 }
+
+/**
+ * tegra_pmc_powergate_remove_clamping() - remove power clamps for partition
+ * @pmc: power management controller
+ * @id: partition ID
+ */
+int tegra_pmc_powergate_remove_clamping(struct tegra_pmc *pmc, unsigned int id)
+{
+	if (!tegra_powergate_is_available(pmc, id))
+		return -EINVAL;
+
+	return __tegra_powergate_remove_clamping(pmc, id);
+}
+EXPORT_SYMBOL(tegra_pmc_powergate_remove_clamping);
 
 /**
  * tegra_powergate_remove_clamping() - remove power clamps for partition
···
  */
 int tegra_powergate_remove_clamping(unsigned int id)
 {
-	if (!tegra_powergate_is_available(pmc, id))
-		return -EINVAL;
-
-	return __tegra_powergate_remove_clamping(pmc, id);
+	return tegra_pmc_powergate_remove_clamping(pmc, id);
 }
 EXPORT_SYMBOL(tegra_powergate_remove_clamping);
 
 /**
- * tegra_powergate_sequence_power_up() - power up partition
+ * tegra_pmc_powergate_sequence_power_up() - power up partition
+ * @pmc: power management controller
  * @id: partition ID
  * @clk: clock for partition
  * @rst: reset for partition
  *
  * Must be called with clk disabled, and returns with clk enabled.
  */
-int tegra_powergate_sequence_power_up(unsigned int id, struct clk *clk,
-				      struct reset_control *rst)
+int tegra_pmc_powergate_sequence_power_up(struct tegra_pmc *pmc,
+					  unsigned int id, struct clk *clk,
+					  struct reset_control *rst)
 {
 	struct tegra_powergate *pg;
 	int err;
···
 	kfree(pg);
 
 	return err;
+}
+EXPORT_SYMBOL(tegra_pmc_powergate_sequence_power_up);
+
+/**
+ * tegra_powergate_sequence_power_up() - power up partition
+ * @id: partition ID
+ * @clk: clock for partition
+ * @rst: reset for partition
+ *
+ * Must be called with clk disabled, and returns with clk enabled.
+ */
+int tegra_powergate_sequence_power_up(unsigned int id, struct clk *clk,
+				      struct reset_control *rst)
+{
+	return tegra_pmc_powergate_sequence_power_up(pmc, id, clk, rst);
 }
 EXPORT_SYMBOL(tegra_powergate_sequence_power_up);
···
 	return tegra_powergate_remove_clamping(id);
 }
 
-static void tegra_pmc_program_reboot_reason(const char *cmd)
+static void tegra_pmc_program_reboot_reason(struct tegra_pmc *pmc,
+					    const char *cmd)
 {
 	u32 value;
···
 static int tegra_pmc_reboot_notify(struct notifier_block *this,
 				   unsigned long action, void *data)
 {
+	struct tegra_pmc *pmc = container_of(this, struct tegra_pmc,
+					     reboot_notifier);
 	if (action == SYS_RESTART)
-		tegra_pmc_program_reboot_reason(data);
+		tegra_pmc_program_reboot_reason(pmc, data);
 
 	return NOTIFY_DONE;
 }
 
-static struct notifier_block tegra_pmc_reboot_notifier = {
-	.notifier_call = tegra_pmc_reboot_notify,
-};
-
-static void tegra_pmc_restart(void)
+static void tegra_pmc_restart(struct tegra_pmc *pmc)
 {
 	u32 value;
···
 
 static int tegra_pmc_restart_handler(struct sys_off_data *data)
 {
-	tegra_pmc_restart();
+	struct tegra_pmc *pmc = data->cb_data;
+
+	tegra_pmc_restart(pmc);
 
 	return NOTIFY_DONE;
 }
 
 static int tegra_pmc_power_off_handler(struct sys_off_data *data)
 {
+	struct tegra_pmc *pmc = data->cb_data;
+
 	/*
 	 * Reboot Nexus 7 into special bootloader mode if USB cable is
 	 * connected in order to display battery status and power off.
···
 		const u32 go_to_charger_mode = 0xa5a55a5a;
 
 		tegra_pmc_writel(pmc, go_to_charger_mode, PMC_SCRATCH37);
-		tegra_pmc_restart();
+		tegra_pmc_restart(pmc);
 	}
 
 	return NOTIFY_DONE;
···
 
 static int powergate_show(struct seq_file *s, void *data)
 {
+	struct tegra_pmc *pmc = data;
 	unsigned int i;
 	int status;
···
 tegra_pmc_core_pd_set_performance_state(struct generic_pm_domain *genpd,
 					unsigned int level)
 {
+	struct tegra_pmc_core_pd *pd = to_core_pd(genpd);
+	struct tegra_pmc *pmc = pd->pmc;
 	struct dev_pm_opp *opp;
 	int err;
···
 
 static int tegra_pmc_core_pd_add(struct tegra_pmc *pmc, struct device_node *np)
 {
-	struct generic_pm_domain *genpd;
 	const char *rname[] = { "core", NULL};
+	struct tegra_pmc_core_pd *pd;
 	int err;
 
-	genpd = devm_kzalloc(pmc->dev, sizeof(*genpd), GFP_KERNEL);
-	if (!genpd)
+	pd = devm_kzalloc(pmc->dev, sizeof(*pd), GFP_KERNEL);
+	if (!pd)
 		return -ENOMEM;
 
-	genpd->name = "core";
-	genpd->flags = GENPD_FLAG_NO_SYNC_STATE;
-	genpd->set_performance_state = tegra_pmc_core_pd_set_performance_state;
+	pd->genpd.name = "core";
+	pd->genpd.flags = GENPD_FLAG_NO_SYNC_STATE;
+	pd->genpd.set_performance_state = tegra_pmc_core_pd_set_performance_state;
+	pd->pmc = pmc;
 
 	err = devm_pm_opp_set_regulators(pmc->dev, rname);
 	if (err)
 		return dev_err_probe(pmc->dev, err,
 				     "failed to set core OPP regulator\n");
 
-	err = pm_genpd_init(genpd, NULL, false);
+	err = pm_genpd_init(&pd->genpd, NULL, false);
 	if (err) {
 		dev_err(pmc->dev, "failed to init core genpd: %d\n", err);
 		return err;
 	}
 
-	err = of_genpd_add_provider_simple(np, genpd);
+	err = of_genpd_add_provider_simple(np, &pd->genpd);
 	if (err) {
 		dev_err(pmc->dev, "failed to add core genpd: %d\n", err);
 		goto remove_genpd;
···
 	return 0;
 
 remove_genpd:
-	pm_genpd_remove(genpd);
+	pm_genpd_remove(&pd->genpd);
 
 	return err;
 }
···
 
 	kfree(pg->clks);
 
-	set_bit(pg->id, pmc->powergates_available);
+	set_bit(pg->id, pg->pmc->powergates_available);
 
 	kfree(pg);
 }
···
 
 /**
  * tegra_io_pad_power_enable() - enable power to I/O pad
+ * @pmc: power management controller
  * @id: Tegra I/O pad ID for which to enable power
  *
  * Returns: 0 on success or a negative error code on failure.
  */
-int tegra_io_pad_power_enable(enum tegra_io_pad id)
+int tegra_pmc_io_pad_power_enable(struct tegra_pmc *pmc, enum tegra_io_pad id)
 {
 	const struct tegra_io_pad_soc *pad;
 	unsigned long request, status;
···
 	mutex_unlock(&pmc->powergates_lock);
 	return err;
 }
+EXPORT_SYMBOL(tegra_pmc_io_pad_power_enable);
+
+/**
+ * tegra_io_pad_power_enable() - enable power to I/O pad
+ * @id: Tegra I/O pad ID for which to enable power
+ *
+ * Returns: 0 on success or a negative error code on failure.
+ */
+int tegra_io_pad_power_enable(enum tegra_io_pad id)
+{
+	return tegra_pmc_io_pad_power_enable(pmc, id);
+}
 EXPORT_SYMBOL(tegra_io_pad_power_enable);
 
 /**
- * tegra_io_pad_power_disable() - disable power to I/O pad
+ * tegra_pmc_io_pad_power_disable() - disable power to I/O pad
+ * @pmc: power management controller
  * @id: Tegra I/O pad ID for which to disable power
  *
  * Returns: 0 on success or a negative error code on failure.
  */
-int tegra_io_pad_power_disable(enum tegra_io_pad id)
+int tegra_pmc_io_pad_power_disable(struct tegra_pmc *pmc, enum tegra_io_pad id)
 {
 	const struct tegra_io_pad_soc *pad;
 	unsigned long request, status;
···
 unlock:
 	mutex_unlock(&pmc->powergates_lock);
 	return err;
 }
+EXPORT_SYMBOL(tegra_pmc_io_pad_power_disable);
+
+/**
+ * tegra_io_pad_power_disable() - disable power to I/O pad
+ * @id: Tegra I/O pad ID for which to disable power
+ *
+ * Returns: 0 on success or a negative error code on failure.
+ */
+int tegra_io_pad_power_disable(enum tegra_io_pad id)
+{
+	return tegra_pmc_io_pad_power_disable(pmc, id);
+}
 EXPORT_SYMBOL(tegra_io_pad_power_disable);
···
 	return 0;
 }
 
+/* translate sc7 wake sources back into IRQs to catch edge triggered wakeups */
+static void tegra186_pmc_wake_handler(struct irq_work *work)
+{
+	struct tegra_pmc *pmc = container_of(work, struct tegra_pmc, wake_work);
+	unsigned int i, wake;
+
+	for (i = 0; i < pmc->soc->max_wake_vectors; i++) {
+		unsigned long status = pmc->wake_status[i];
+
+		for_each_set_bit(wake, &status, 32) {
+			irq_hw_number_t hwirq = wake + (i * 32);
+			struct irq_desc *desc;
+			unsigned int irq;
+
+			irq = irq_find_mapping(pmc->domain, hwirq);
+			if (!irq) {
+				dev_warn(pmc->dev,
+					 "No IRQ found for WAKE#%lu!\n",
+					 hwirq);
+				continue;
+			}
+
+			dev_dbg(pmc->dev,
+				"Resume caused by WAKE#%lu mapped to IRQ#%u\n",
+				hwirq, irq);
+
+			desc = irq_to_desc(irq);
+			if (!desc) {
+				dev_warn(pmc->dev,
+					 "No descriptor found for IRQ#%u\n",
+					 irq);
+				continue;
+			}
+
+			if (!desc->action || !desc->action->name)
+				continue;
+
+			generic_handle_irq(irq);
+		}
+
+		pmc->wake_status[i] = 0;
+	}
+}
+
 static int tegra_pmc_init(struct tegra_pmc *pmc)
 {
 	if (pmc->soc->max_wake_events > 0) {
···
 		pmc->wake_cntrl_level_map = bitmap_zalloc(pmc->soc->max_wake_events, GFP_KERNEL);
 		if (!pmc->wake_cntrl_level_map)
 			return -ENOMEM;
+
+		pmc->wake_status = kcalloc(pmc->soc->max_wake_vectors, sizeof(u32), GFP_KERNEL);
+		if (!pmc->wake_status)
+			return -ENOMEM;
+
+		/*
+		 * Initialize IRQ work for processing wake IRQs. Must use
+		 * HARD_IRQ variant to run in hard IRQ context on PREEMPT_RT
+		 * because we call generic_handle_irq() which requires hard
+		 * IRQ context.
+		 */
+		pmc->wake_work = IRQ_WORK_INIT_HARD(tegra186_pmc_wake_handler);
 	}
 
 	if (pmc->soc->init)
···
 	switch (param) {
 	case PIN_CONFIG_MODE_LOW_POWER:
 		if (arg)
-			err = tegra_io_pad_power_disable(pad->id);
+			err = tegra_pmc_io_pad_power_disable(pmc, pad->id);
 		else
-			err = tegra_io_pad_power_enable(pad->id);
+			err = tegra_pmc_io_pad_power_enable(pmc, pad->id);
 		if (err)
 			return err;
 		break;
···
 static ssize_t reset_reason_show(struct device *dev,
 				 struct device_attribute *attr, char *buf)
 {
+	struct tegra_pmc *pmc = dev_get_drvdata(dev);
 	u32 value;
 
 	value = tegra_pmc_readl(pmc, pmc->soc->regs->rst_status);
···
 static ssize_t reset_level_show(struct device *dev,
 				struct device_attribute *attr, char *buf)
 {
+	struct tegra_pmc *pmc = dev_get_drvdata(dev);
 	u32 value;
 
 	value = tegra_pmc_readl(pmc, pmc->soc->regs->rst_status);
···
 	return NOTIFY_OK;
 }
 
-static void pmc_clk_fence_udelay(u32 offset)
+static void pmc_clk_fence_udelay(struct tegra_pmc *pmc, u32 offset)
 {
 	tegra_pmc_readl(pmc, offset);
 	/* pmc clk propagation delay 2 us */
···
 	struct pmc_clk *clk = to_pmc_clk(hw);
 	u32 val;
 
-	val = tegra_pmc_readl(pmc, clk->offs) >> clk->mux_shift;
+	val = tegra_pmc_readl(clk->pmc, clk->offs) >> clk->mux_shift;
 	val &= PMC_CLK_OUT_MUX_MASK;
 
 	return val;
···
 	struct pmc_clk *clk = to_pmc_clk(hw);
 	u32 val;
 
-	val = tegra_pmc_readl(pmc, clk->offs);
+	val = tegra_pmc_readl(clk->pmc, clk->offs);
 	val &= ~(PMC_CLK_OUT_MUX_MASK << clk->mux_shift);
 	val |= index << clk->mux_shift;
-	tegra_pmc_writel(pmc, val, clk->offs);
-	pmc_clk_fence_udelay(clk->offs);
+	tegra_pmc_writel(clk->pmc, val, clk->offs);
+	pmc_clk_fence_udelay(clk->pmc, clk->offs);
 
 	return 0;
 }
···
 	struct pmc_clk *clk = to_pmc_clk(hw);
 	u32 val;
 
-	val = tegra_pmc_readl(pmc, clk->offs) & BIT(clk->force_en_shift);
+	val = tegra_pmc_readl(clk->pmc, clk->offs) & BIT(clk->force_en_shift);
 
 	return val ? 1 : 0;
 }
 
-static void pmc_clk_set_state(unsigned long offs, u32 shift, int state)
+static void pmc_clk_set_state(struct tegra_pmc *pmc, unsigned long offs,
+			      u32 shift, int state)
 {
 	u32 val;
 
 	val = tegra_pmc_readl(pmc, offs);
 	val = state ? (val | BIT(shift)) : (val & ~BIT(shift));
 	tegra_pmc_writel(pmc, val, offs);
-	pmc_clk_fence_udelay(offs);
+	pmc_clk_fence_udelay(pmc, offs);
 }
 
 static int pmc_clk_enable(struct clk_hw *hw)
 {
 	struct pmc_clk *clk = to_pmc_clk(hw);
 
-	pmc_clk_set_state(clk->offs, clk->force_en_shift, 1);
+	pmc_clk_set_state(clk->pmc, clk->offs, clk->force_en_shift, 1);
 
 	return 0;
 }
···
 {
 	struct pmc_clk *clk = to_pmc_clk(hw);
 
-	pmc_clk_set_state(clk->offs, clk->force_en_shift, 0);
+	pmc_clk_set_state(clk->pmc, clk->offs, clk->force_en_shift, 0);
 }
 
 static const struct clk_ops pmc_clk_ops = {
···
 		     CLK_SET_PARENT_GATE;
 
 	pmc_clk->hw.init = &init;
+	pmc_clk->pmc = pmc;
 	pmc_clk->offs = offset;
 	pmc_clk->mux_shift = data->mux_shift;
 	pmc_clk->force_en_shift = data->force_en_shift;
···
 static int pmc_clk_gate_is_enabled(struct clk_hw *hw)
 {
 	struct pmc_clk_gate *gate = to_pmc_clk_gate(hw);
+	u32 value = tegra_pmc_readl(gate->pmc, gate->offs);
 
-	return tegra_pmc_readl(pmc, gate->offs) & BIT(gate->shift) ? 1 : 0;
+	return value & BIT(gate->shift) ? 1 : 0;
 }
 
 static int pmc_clk_gate_enable(struct clk_hw *hw)
 {
 	struct pmc_clk_gate *gate = to_pmc_clk_gate(hw);
 
-	pmc_clk_set_state(gate->offs, gate->shift, 1);
+	pmc_clk_set_state(gate->pmc, gate->offs, gate->shift, 1);
 
 	return 0;
 }
···
 {
 	struct pmc_clk_gate *gate = to_pmc_clk_gate(hw);
 
-	pmc_clk_set_state(gate->offs, gate->shift, 0);
+	pmc_clk_set_state(gate->pmc, gate->offs, gate->shift, 0);
 }
 
 static const struct clk_ops pmc_clk_gate_ops = {
···
 	init.flags = 0;
 
 	gate->hw.init = &init;
+	gate->pmc = pmc;
 	gate->offs = offset;
 	gate->shift = shift;
···
 
 static void tegra_pmc_reset_suspend_mode(void *data)
 {
+	struct tegra_pmc *pmc = data;
+
 	pmc->suspend_mode = TEGRA_SUSPEND_NOT_READY;
 }
···
 		return err;
 
 	err = devm_add_action_or_reset(&pdev->dev, tegra_pmc_reset_suspend_mode,
-				       NULL);
+				       pmc);
 	if (err)
 		return err;
···
 	 * CPU without resetting everything else.
 	 */
 	if (pmc->scratch) {
+		pmc->reboot_notifier.notifier_call = tegra_pmc_reboot_notify;
+
 		err = devm_register_reboot_notifier(&pdev->dev,
-						    &tegra_pmc_reboot_notifier);
+						    &pmc->reboot_notifier);
 		if (err) {
 			dev_err(&pdev->dev,
 				"unable to register reboot notifier, %d\n",
···
 	err = devm_register_sys_off_handler(&pdev->dev,
 					    SYS_OFF_MODE_RESTART,
 					    SYS_OFF_PRIO_LOW,
-					    tegra_pmc_restart_handler, NULL);
+					    tegra_pmc_restart_handler,
+					    pmc);
 	if (err) {
 		dev_err(&pdev->dev, "failed to register sys-off handler: %d\n",
 			err);
···
 	err = devm_register_sys_off_handler(&pdev->dev,
 					    SYS_OFF_MODE_POWER_OFF,
 					    SYS_OFF_PRIO_FIRMWARE,
-					    tegra_pmc_power_off_handler, NULL);
+					    tegra_pmc_power_off_handler,
+					    pmc);
 	if (err) {
 		dev_err(&pdev->dev, "failed to register sys-off handler: %d\n",
 			err);
···
 	if (pmc->soc->set_wake_filters)
 		pmc->soc->set_wake_filters(pmc);
 
-	debugfs_create_file("powergate", 0444, NULL, NULL, &powergate_fops);
+	debugfs_create_file("powergate", 0444, NULL, pmc, &powergate_fops);
 
 	return 0;
···
 	}
 }
 
-/* translate sc7 wake sources back into IRQs to catch edge triggered wakeups */
-static void tegra186_pmc_process_wake_events(struct tegra_pmc *pmc, unsigned int index,
-					     unsigned long status)
-{
-	unsigned int wake;
-
-	dev_dbg(pmc->dev, "Wake[%d:%d] status=%#lx\n", (index * 32) + 31, index * 32, status);
-
-	for_each_set_bit(wake, &status, 32) {
-		irq_hw_number_t hwirq = wake + 32 * index;
-		struct irq_desc *desc;
-		unsigned int irq;
-
-		irq = irq_find_mapping(pmc->domain, hwirq);
-
-		desc = irq_to_desc(irq);
-		if (!desc || !desc->action || !desc->action->name) {
-			dev_dbg(pmc->dev, "Resume caused by WAKE%ld, IRQ %d\n", hwirq, irq);
-			continue;
-		}
-
-		dev_dbg(pmc->dev, "Resume caused by WAKE%ld, %s\n", hwirq, desc->action->name);
-		generic_handle_irq(irq);
-	}
-}
-
 static void tegra186_pmc_wake_syscore_resume(void *data)
 {
-	u32 status, mask;
+	struct tegra_pmc *pmc = data;
 	unsigned int i;
+	u32 mask;
 
 	for (i = 0; i < pmc->soc->max_wake_vectors; i++) {
 		mask = readl(pmc->wake + WAKE_AOWAKE_TIER2_ROUTING(i));
-		status = readl(pmc->wake + WAKE_AOWAKE_STATUS_R(i)) & mask;
-
-		tegra186_pmc_process_wake_events(pmc, i, status);
+		pmc->wake_status[i] = readl(pmc->wake + WAKE_AOWAKE_STATUS_R(i)) & mask;
 	}
+
+	/* Schedule IRQ work to process wake IRQs (if any) */
+	irq_work_queue(&pmc->wake_work);
 }
 
 static int tegra186_pmc_wake_syscore_suspend(void *data)
 {
+	struct tegra_pmc *pmc = data;
+	unsigned int i;
+
+	/* Check if there are unhandled wake IRQs */
+	for (i = 0; i < pmc->soc->max_wake_vectors; i++)
+		if (pmc->wake_status[i])
+			dev_warn(pmc->dev,
+				 "Unhandled wake IRQs pending vector[%u]: 0x%x\n",
+				 i, pmc->wake_status[i]);
+
 	wke_read_sw_wake_status(pmc);
 
 	/* flip the wakeup trigger for dual-edge triggered pads
···
 static void tegra186_pmc_init(struct tegra_pmc *pmc)
 {
 	pmc->syscore.ops = &tegra186_pmc_wake_syscore_ops;
+	pmc->syscore.data = pmc;
 	register_syscore(&pmc->syscore);
 }
+1 -1
drivers/soc/ti/Kconfig
···
 	  If unsure, say N.
 
 config TI_K3_SOCINFO
-	bool
+	bool "K3 SoC Information driver" if COMPILE_TEST
 	depends on ARCH_K3 || COMPILE_TEST
 	select SOC_BUS
 	select MFD_SYSCON
+1 -1
drivers/soc/ti/k3-socinfo.c
···
 	if (IS_ERR(base))
 		return PTR_ERR(base);
 
-	regmap = regmap_init_mmio(dev, base, &k3_chipinfo_regmap_cfg);
+	regmap = devm_regmap_init_mmio(dev, base, &k3_chipinfo_regmap_cfg);
 	if (IS_ERR(regmap))
 		return PTR_ERR(regmap);
+7 -14
drivers/soc/ti/knav_dma.c
···
 {
 	struct device *dev = &pdev->dev;
 	struct device_node *node = pdev->dev.of_node;
-	struct device_node *child;
 	int ret = 0;
 
-	if (!node) {
-		dev_err(&pdev->dev, "could not find device info\n");
-		return -EINVAL;
-	}
+	if (!node)
+		return dev_err_probe(dev, -EINVAL, "could not find device info\n");
 
 	kdev = devm_kzalloc(dev,
 			    sizeof(struct knav_dma_pool_device), GFP_KERNEL);
-	if (!kdev) {
-		dev_err(dev, "could not allocate driver mem\n");
+	if (!kdev)
 		return -ENOMEM;
-	}
 
 	kdev->dev = dev;
 	INIT_LIST_HEAD(&kdev->list);
···
 	pm_runtime_enable(kdev->dev);
 	ret = pm_runtime_resume_and_get(kdev->dev);
 	if (ret < 0) {
-		dev_err(kdev->dev, "unable to enable pktdma, err %d\n", ret);
+		dev_err(dev, "unable to enable pktdma, err %d\n", ret);
 		goto err_pm_disable;
 	}
 
 	/* Initialise all packet dmas */
-	for_each_child_of_node(node, child) {
+	for_each_child_of_node_scoped(node, child) {
 		ret = dma_init(node, child);
 		if (ret) {
-			of_node_put(child);
-			dev_err(&pdev->dev, "init failed with %d\n", ret);
+			dev_err(dev, "init failed with %d\n", ret);
 			break;
 		}
 	}
 
 	if (list_empty(&kdev->list)) {
-		dev_err(dev, "no valid dma instance\n");
-		ret = -ENODEV;
+		ret = dev_err_probe(dev, -ENODEV, "no valid dma instance\n");
 		goto err_put_sync;
 	}
+7 -18
drivers/soc/ti/knav_qmss_queue.c
···
 	struct device_node *regions __free(device_node) =
 			of_get_child_by_name(node, "descriptor-regions");
 	struct knav_region *region;
-	struct device_node *child;
 	u32 temp[2];
 	int ret;
···
 		return dev_err_probe(dev, -ENODEV,
 				     "descriptor-regions not specified\n");
 
-	for_each_child_of_node(regions, child) {
+	for_each_child_of_node_scoped(regions, child) {
 		region = devm_kzalloc(dev, sizeof(*region), GFP_KERNEL);
-		if (!region) {
-			of_node_put(child);
-			dev_err(dev, "out of memory allocating region\n");
+		if (!region)
 			return -ENOMEM;
-		}
 
 		region->name = knav_queue_find_name(child);
 		of_property_read_u32(child, "id", &region->id);
···
 	struct device_node *qmgrs __free(device_node) =
 			of_get_child_by_name(node, "qmgrs");
 	struct knav_qmgr_info *qmgr;
-	struct device_node *child;
 	u32 temp[2];
 	int ret;
···
 		return dev_err_probe(dev, -ENODEV,
 				     "queue manager info not specified\n");
 
-	for_each_child_of_node(qmgrs, child) {
+	for_each_child_of_node_scoped(qmgrs, child) {
 		qmgr = devm_kzalloc(dev, sizeof(*qmgr), GFP_KERNEL);
-		if (!qmgr) {
-			of_node_put(child);
-			dev_err(dev, "out of memory allocating qmgr\n");
+		if (!qmgr)
 			return -ENOMEM;
-		}
 
 		ret = of_property_read_u32_array(child, "managed-queues",
 						 temp, 2);
···
 {
 	struct device *dev = kdev->dev;
 	struct knav_pdsp_info *pdsp;
-	struct device_node *child;
 
-	for_each_child_of_node(pdsps, child) {
+	for_each_child_of_node_scoped(pdsps, child) {
 		pdsp = devm_kzalloc(dev, sizeof(*pdsp), GFP_KERNEL);
-		if (!pdsp) {
-			of_node_put(child);
-			dev_err(dev, "out of memory allocating pdsp\n");
+		if (!pdsp)
 			return -ENOMEM;
-		}
+
 		pdsp->name = knav_queue_find_name(child);
 		pdsp->iram =
 			knav_queue_map_reg(kdev, child,
+2 -4
drivers/soc/ti/pruss.c
···
 
 	ret = devm_add_action_or_reset(dev, pruss_of_free_clk_provider,
 				       clk_mux_np);
-	if (ret) {
+	if (ret)
 		dev_err(dev, "failed to add clkmux free action %d", ret);
-		goto put_clk_mux_np;
-	}
 
-	return 0;
+	return ret;
 
 put_clk_mux_np:
 	of_node_put(clk_mux_np);
+3 -3
drivers/soc/xilinx/zynqmp_power.c
···
 	memcpy(zynqmp_pm_init_restart_work->args, &payload[0],
 	       sizeof(zynqmp_pm_init_restart_work->args));
 
-	queue_work(system_unbound_wq, &zynqmp_pm_init_restart_work->callback_work);
+	queue_work(system_dfl_wq, &zynqmp_pm_init_restart_work->callback_work);
 }
 
 static void suspend_event_callback(const u32 *payload, void *data)
···
 	memcpy(zynqmp_pm_init_suspend_work->args, &payload[1],
 	       sizeof(zynqmp_pm_init_suspend_work->args));
 
-	queue_work(system_unbound_wq, &zynqmp_pm_init_suspend_work->callback_work);
+	queue_work(system_dfl_wq, &zynqmp_pm_init_suspend_work->callback_work);
 }
 
 static irqreturn_t zynqmp_pm_isr(int irq, void *data)
···
 		memcpy(zynqmp_pm_init_suspend_work->args, &payload[1],
 		       sizeof(zynqmp_pm_init_suspend_work->args));
 
-		queue_work(system_unbound_wq,
+		queue_work(system_dfl_wq,
 			   &zynqmp_pm_init_suspend_work->callback_work);
 
 		/* Send NULL message to mbox controller to ack the message */
+4 -4
drivers/tee/amdtee/call.c
···
 static int tee_params_to_amd_params(struct tee_param *tee, u32 count,
 				    struct tee_operation *amd)
 {
-	int i, ret = 0;
+	int i;
 	u32 type;
 
 	if (!count)
···
 			  i, amd->params[i].val.b);
 		}
 	}
-	return ret;
+	return 0;
 }
 
 static int amd_params_to_tee_params(struct tee_param *tee, u32 count,
 				    struct tee_operation *amd)
 {
-	int i, ret = 0;
+	int i;
 	u32 type;
 
 	if (!count)
···
 			  i, amd->params[i].val.b);
 		}
 	}
-	return ret;
+	return 0;
 }
 
 static DEFINE_MUTEX(ta_refcount_mutex);
+23
drivers/tee/optee/core.c
···
 	return dma_coerce_mask_and_coherent(&optee->teedev->dev, mask);
 }
 
+int optee_get_revision(struct tee_device *teedev, char *buf, size_t len)
+{
+	struct optee *optee = tee_get_drvdata(teedev);
+	u64 build_id;
+
+	if (!optee)
+		return -ENODEV;
+	if (!buf || !len)
+		return -EINVAL;
+
+	build_id = optee->revision.os_build_id;
+	if (build_id)
+		scnprintf(buf, len, "%u.%u (%016llx)",
+			  optee->revision.os_major,
+			  optee->revision.os_minor,
+			  (unsigned long long)build_id);
+	else
+		scnprintf(buf, len, "%u.%u", optee->revision.os_major,
+			  optee->revision.os_minor);
+
+	return 0;
+}
+
 static void optee_bus_scan(struct work_struct *work)
 {
 	WARN_ON(optee_enumerate_devices(PTA_CMD_GET_DEVICES_SUPP));
+40 -14
drivers/tee/optee/ffa_abi.c
···
  * with a matching configuration.
  */
 
+static bool optee_ffa_get_os_revision(struct ffa_device *ffa_dev,
+				      const struct ffa_ops *ops,
+				      struct optee_revision *revision)
+{
+	const struct ffa_msg_ops *msg_ops = ops->msg_ops;
+	struct ffa_send_direct_data data = {
+		.data0 = OPTEE_FFA_GET_OS_VERSION,
+	};
+	int rc;
+
+	msg_ops->mode_32bit_set(ffa_dev);
+
+	rc = msg_ops->sync_send_receive(ffa_dev, &data);
+	if (rc) {
+		pr_err("Unexpected error %d\n", rc);
+		return false;
+	}
+
+	if (revision) {
+		revision->os_major = data.data0;
+		revision->os_minor = data.data1;
+		revision->os_build_id = data.data2;
+	}
+
+	if (data.data2)
+		pr_info("revision %lu.%lu (%08lx)",
+			data.data0, data.data1, data.data2);
+	else
+		pr_info("revision %lu.%lu", data.data0, data.data1);
+
+	return true;
+}
+
 static bool optee_ffa_api_is_compatible(struct ffa_device *ffa_dev,
 					const struct ffa_ops *ops)
 {
···
 			data.data0, data.data1);
 		return false;
 	}
-
-	data = (struct ffa_send_direct_data){
-		.data0 = OPTEE_FFA_GET_OS_VERSION,
-	};
-	rc = msg_ops->sync_send_receive(ffa_dev, &data);
-	if (rc) {
-		pr_err("Unexpected error %d\n", rc);
-		return false;
-	}
-	if (data.data2)
-		pr_info("revision %lu.%lu (%08lx)",
-			data.data0, data.data1, data.data2);
-	else
-		pr_info("revision %lu.%lu", data.data0, data.data1);
 
 	return true;
 }
···
 
 static const struct tee_driver_ops optee_ffa_clnt_ops = {
 	.get_version = optee_ffa_get_version,
+	.get_tee_revision = optee_get_revision,
 	.open = optee_ffa_open,
 	.release = optee_release,
 	.open_session = optee_open_session,
···
 
 static const struct tee_driver_ops optee_ffa_supp_ops = {
 	.get_version = optee_ffa_get_version,
+	.get_tee_revision = optee_get_revision,
 	.open = optee_ffa_open,
 	.release = optee_release_supp,
 	.supp_recv = optee_supp_recv,
···
 	optee = kzalloc(sizeof(*optee), GFP_KERNEL);
 	if (!optee)
 		return -ENOMEM;
+
+	if (!optee_ffa_get_os_revision(ffa_dev, ffa_ops, &optee->revision)) {
+		rc = -EINVAL;
+		goto err_free_optee;
+	}
 
 	pool = optee_ffa_shm_pool_alloc_pages();
 	if (IS_ERR(pool)) {
+19
drivers/tee/optee/optee_private.h
···
 struct optee;
 
 /**
+ * struct optee_revision - OP-TEE OS revision reported by secure world
+ * @os_major:    OP-TEE OS major version
+ * @os_minor:    OP-TEE OS minor version
+ * @os_build_id: OP-TEE OS build identifier (0 if unspecified)
+ *
+ * Values come from OPTEE_SMC_CALL_GET_OS_REVISION (SMC ABI) or
+ * OPTEE_FFA_GET_OS_VERSION (FF-A ABI); this is the trusted OS revision, not an
+ * FF-A ABI version.
+ */
+struct optee_revision {
+	u32 os_major;
+	u32 os_minor;
+	u64 os_build_id;
+};
+
+int optee_get_revision(struct tee_device *teedev, char *buf, size_t len);
+
+/**
  * struct optee_ops - OP-TEE driver internal operations
  * @do_call_with_arg:	enters OP-TEE in secure world
  * @to_msg_param:	converts from struct tee_param to OPTEE_MSG parameters
···
 	bool in_kernel_rpmb_routing;
 	struct work_struct scan_bus_work;
 	struct work_struct rpmb_scan_bus_work;
+	struct optee_revision revision;
 };
 
 struct optee_session {
+3 -3
drivers/tee/optee/rpc.c
···
 	struct i2c_msg msg = { };
 	size_t i;
 	int ret = -EOPNOTSUPP;
-	u8 attr[] = {
+	static const u8 attr[] = {
 		TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT,
 		TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT,
 		TEE_IOCTL_PARAM_ATTR_TYPE_MEMREF_INOUT,
···
 	param.u.value.c = 0;
 
 	/*
-	 * Match the tee_shm_get_from_id() in cmd_alloc_suppl() as secure
-	 * world has released its reference.
+	 * Match the tee_shm_get_from_id() in optee_rpc_cmd_alloc_suppl()
+	 * as secure world has released its reference.
 	 *
 	 * It's better to do this before sending the request to supplicant
 	 * as we'd like to let the process doing the initial allocation to
+12 -3
drivers/tee/optee/smc_abi.c
···
 
 static const struct tee_driver_ops optee_clnt_ops = {
 	.get_version = optee_get_version,
+	.get_tee_revision = optee_get_revision,
 	.open = optee_smc_open,
 	.release = optee_release,
 	.open_session = optee_open_session,
···
 
 static const struct tee_driver_ops optee_supp_ops = {
 	.get_version = optee_get_version,
+	.get_tee_revision = optee_get_revision,
 	.open = optee_smc_open,
 	.release = optee_release_supp,
 	.supp_recv = optee_supp_recv,
···
 }
 #endif
 
-static void optee_msg_get_os_revision(optee_invoke_fn *invoke_fn)
+static void optee_msg_get_os_revision(optee_invoke_fn *invoke_fn,
+				      struct optee_revision *revision)
 {
 	union {
 		struct arm_smccc_res smccc;
···
 
 	invoke_fn(OPTEE_SMC_CALL_GET_OS_REVISION, 0, 0, 0, 0, 0, 0, 0,
 		  &res.smccc);
+
+	if (revision) {
+		revision->os_major = res.result.major;
+		revision->os_minor = res.result.minor;
+		revision->os_build_id = res.result.build_id;
+	}
 
 	if (res.result.build_id)
 		pr_info("revision %lu.%lu (%0*lx)", res.result.major,
···
 		return -EINVAL;
 	}
 
-	optee_msg_get_os_revision(invoke_fn);
-
 	if (!optee_msg_api_revision_is_compatible(invoke_fn)) {
 		pr_warn("api revision mismatch\n");
 		return -EINVAL;
···
 		rc = -ENOMEM;
 		goto err_free_shm_pool;
 	}
+
+	optee_msg_get_os_revision(invoke_fn, &optee->revision);
 
 	optee->ops = &optee_ops;
 	optee->smc.invoke_fn = invoke_fn;
+8 -9
drivers/tee/qcomtee/call.c
···
 				     struct tee_ioctl_object_invoke_arg *arg,
 				     struct tee_param *params)
 {
-	struct qcomtee_object_invoke_ctx *oic __free(kfree) = NULL;
 	struct qcomtee_context_data *ctxdata = ctx->data;
-	struct qcomtee_arg *u __free(kfree) = NULL;
 	struct qcomtee_object *object;
 	int i, ret, result;
···
 	}
 
 	/* Otherwise, invoke a QTEE object: */
-	oic = qcomtee_object_invoke_ctx_alloc(ctx);
+	struct qcomtee_object_invoke_ctx *oic __free(kfree) =
+		qcomtee_object_invoke_ctx_alloc(ctx);
 	if (!oic)
 		return -ENOMEM;
 
 	/* +1 for ending QCOMTEE_ARG_TYPE_INV. */
-	u = kcalloc(arg->num_params + 1, sizeof(*u), GFP_KERNEL);
+	struct qcomtee_arg *u __free(kfree) = kcalloc(arg->num_params + 1, sizeof(*u),
+						      GFP_KERNEL);
 	if (!u)
 		return -ENOMEM;
···
 
 static int qcomtee_open(struct tee_context *ctx)
 {
-	struct qcomtee_context_data *ctxdata __free(kfree) = NULL;
-
-	ctxdata = kzalloc(sizeof(*ctxdata), GFP_KERNEL);
+	struct qcomtee_context_data *ctxdata __free(kfree) = kzalloc(sizeof(*ctxdata),
+								     GFP_KERNEL);
 	if (!ctxdata)
 		return -ENOMEM;
···
 static void qcomtee_get_qtee_feature_list(struct tee_context *ctx, u32 id,
 					  u32 *version)
 {
-	struct qcomtee_object_invoke_ctx *oic __free(kfree) = NULL;
 	struct qcomtee_object *client_env, *service;
 	struct qcomtee_arg u[3] = { 0 };
 	int result;
 
-	oic = qcomtee_object_invoke_ctx_alloc(ctx);
+	struct qcomtee_object_invoke_ctx *oic __free(kfree) =
+		qcomtee_object_invoke_ctx_alloc(ctx);
 	if (!oic)
 		return;
+2 -2
drivers/tee/qcomtee/mem_obj.c
···
 				      struct tee_param *param,
 				      struct tee_context *ctx)
 {
-	struct qcomtee_mem_object *mem_object __free(kfree) = NULL;
 	struct tee_shm *shm;
 	int err;
 
-	mem_object = kzalloc(sizeof(*mem_object), GFP_KERNEL);
+	struct qcomtee_mem_object *mem_object __free(kfree) = kzalloc(sizeof(*mem_object),
+								      GFP_KERNEL);
 	if (!mem_object)
 		return -ENOMEM;
+4 -4
drivers/tee/qcomtee/user_obj.c
···
 {
 	struct qcomtee_user_object *uo = to_qcomtee_user_object(object);
 	struct qcomtee_context_data *ctxdata = uo->ctx->data;
-	struct qcomtee_ureq *ureq __free(kfree) = NULL;
 	int errno;
 
-	ureq = kzalloc(sizeof(*ureq), GFP_KERNEL);
+	struct qcomtee_ureq *ureq __free(kfree) = kzalloc(sizeof(*ureq),
+							  GFP_KERNEL);
 	if (!ureq)
 		return -ENOMEM;
···
 				       struct tee_param *param,
 				       struct tee_context *ctx)
 {
-	struct qcomtee_user_object *user_object __free(kfree) = NULL;
 	int err;
 
-	user_object = kzalloc(sizeof(*user_object), GFP_KERNEL);
+	struct qcomtee_user_object *user_object __free(kfree) =
+		kzalloc(sizeof(*user_object), GFP_KERNEL);
 	if (!user_object)
 		return -ENOMEM;
+134 -1
drivers/tee/tee_core.c
···
 	NULL
 };
 
-ATTRIBUTE_GROUPS(tee_dev);
+static const struct attribute_group tee_dev_group = {
+	.attrs = tee_dev_attrs,
+};
+
+static ssize_t revision_show(struct device *dev,
+			     struct device_attribute *attr, char *buf)
+{
+	struct tee_device *teedev = container_of(dev, struct tee_device, dev);
+	char version[TEE_REVISION_STR_SIZE];
+	int ret;
+
+	if (!teedev->desc->ops->get_tee_revision)
+		return -ENODEV;
+
+	ret = teedev->desc->ops->get_tee_revision(teedev, version,
+						  sizeof(version));
+	if (ret)
+		return ret;
+
+	return sysfs_emit(buf, "%s\n", version);
+}
+static DEVICE_ATTR_RO(revision);
+
+static struct attribute *tee_revision_attrs[] = {
+	&dev_attr_revision.attr,
+	NULL
+};
+
+static umode_t tee_revision_attr_is_visible(struct kobject *kobj,
+					    struct attribute *attr, int n)
+{
+	struct device *dev = kobj_to_dev(kobj);
+	struct tee_device *teedev = container_of(dev, struct tee_device, dev);
+
+	if (teedev->desc->ops->get_tee_revision)
+		return attr->mode;
+
+	return 0;
+}
+
+static const struct attribute_group tee_revision_group = {
+	.attrs = tee_revision_attrs,
+	.is_visible = tee_revision_attr_is_visible,
+};
+
+static const struct attribute_group *tee_dev_groups[] = {
+	&tee_dev_group,
+	&tee_revision_group,
+	NULL
+};
 
 static const struct class tee_class = {
 	.name = "tee",
···
 	return add_uevent_var(env, "MODALIAS=tee:%pUb", dev_id);
 }
 
+static int tee_client_device_probe(struct device *dev)
+{
+	struct tee_client_device *tcdev = to_tee_client_device(dev);
+	struct tee_client_driver *drv = to_tee_client_driver(dev->driver);
+
+	if (drv->probe)
+		return drv->probe(tcdev);
+	else
+		return 0;
+}
+
+static void tee_client_device_remove(struct device *dev)
+{
+	struct tee_client_device *tcdev = to_tee_client_device(dev);
+	struct tee_client_driver *drv = to_tee_client_driver(dev->driver);
+
+	if (drv->remove)
+		drv->remove(tcdev);
+}
+
+static void tee_client_device_shutdown(struct device *dev)
+{
+	struct tee_client_device *tcdev = to_tee_client_device(dev);
+	struct tee_client_driver *drv = to_tee_client_driver(dev->driver);
+
+	if (dev->driver && drv->shutdown)
+		drv->shutdown(tcdev);
+}
+
 const struct bus_type tee_bus_type = {
 	.name = "tee",
 	.match = tee_client_device_match,
 	.uevent = tee_client_device_uevent,
+	.probe = tee_client_device_probe,
+	.remove = tee_client_device_remove,
+	.shutdown = tee_client_device_shutdown,
 };
 EXPORT_SYMBOL_GPL(tee_bus_type);
+
+static int tee_client_device_probe_legacy(struct tee_client_device *tcdev)
+{
+	struct device *dev = &tcdev->dev;
+	struct device_driver *driver = dev->driver;
+
+	return driver->probe(dev);
+}
+
+static void tee_client_device_remove_legacy(struct tee_client_device *tcdev)
+{
+	struct device *dev = &tcdev->dev;
+	struct device_driver *driver = dev->driver;
+
+	driver->remove(dev);
+}
+
+static void tee_client_device_shutdown_legacy(struct tee_client_device *tcdev)
+{
+	struct device *dev = &tcdev->dev;
+	struct device_driver *driver = dev->driver;
+
+	driver->shutdown(dev);
+}
+
+int __tee_client_driver_register(struct tee_client_driver *tee_driver,
+				 struct module *owner)
+{
+	tee_driver->driver.owner = owner;
+	tee_driver->driver.bus = &tee_bus_type;
+
+	/*
+	 * Drivers that have callbacks set for tee_driver->driver need updating
+	 * to use the callbacks in tee_driver instead. driver_register() warns
+	 * about that, so no need to warn here, too.
+	 */
+	if (!tee_driver->probe && tee_driver->driver.probe)
+		tee_driver->probe = tee_client_device_probe_legacy;
+	if (!tee_driver->remove && tee_driver->driver.remove)
+		tee_driver->remove = tee_client_device_remove_legacy;
+	if (!tee_driver->shutdown && tee_driver->driver.probe)
+		tee_driver->shutdown = tee_client_device_shutdown_legacy;
+
+	return driver_register(&tee_driver->driver);
+}
+EXPORT_SYMBOL_GPL(__tee_client_driver_register);
+
+void tee_client_driver_unregister(struct tee_client_driver *tee_driver)
+{
+	driver_unregister(&tee_driver->driver);
+}
+EXPORT_SYMBOL_GPL(tee_client_driver_unregister);
 
 static int __init tee_init(void)
 {
+171
include/dt-bindings/reset/spacemit,k3-resets.h
···
+/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) */
+/*
+ * Copyright (c) 2025 SpacemiT Technology Co. Ltd
+ */
+
+#ifndef _DT_BINDINGS_RESET_SPACEMIT_K3_RESETS_H_
+#define _DT_BINDINGS_RESET_SPACEMIT_K3_RESETS_H_
+
+/* MPMU resets */
+#define RESET_MPMU_WDT		0
+#define RESET_MPMU_RIPC		1
+
+/* APBC resets */
+#define RESET_APBC_UART0	0
+#define RESET_APBC_UART2	1
+#define RESET_APBC_UART3	2
+#define RESET_APBC_UART4	3
+#define RESET_APBC_UART5	4
+#define RESET_APBC_UART6	5
+#define RESET_APBC_UART7	6
+#define RESET_APBC_UART8	7
+#define RESET_APBC_UART9	8
+#define RESET_APBC_UART10	9
+#define RESET_APBC_GPIO		10
+#define RESET_APBC_PWM0		11
+#define RESET_APBC_PWM1		12
+#define RESET_APBC_PWM2		13
+#define RESET_APBC_PWM3		14
+#define RESET_APBC_PWM4		15
+#define RESET_APBC_PWM5		16
+#define RESET_APBC_PWM6		17
+#define RESET_APBC_PWM7		18
+#define RESET_APBC_PWM8		19
+#define RESET_APBC_PWM9		20
+#define RESET_APBC_PWM10	21
+#define RESET_APBC_PWM11	22
+#define RESET_APBC_PWM12	23
+#define RESET_APBC_PWM13	24
+#define RESET_APBC_PWM14	25
+#define RESET_APBC_PWM15	26
+#define RESET_APBC_PWM16	27
+#define RESET_APBC_PWM17	28
+#define RESET_APBC_PWM18	29
+#define RESET_APBC_PWM19	30
+#define RESET_APBC_SPI0		31
+#define RESET_APBC_SPI1		32
+#define RESET_APBC_SPI3		33
+#define RESET_APBC_RTC		34
+#define RESET_APBC_TWSI0	35
+#define RESET_APBC_TWSI1	36
+#define RESET_APBC_TWSI2	37
+#define RESET_APBC_TWSI4	38
+#define RESET_APBC_TWSI5	39
+#define RESET_APBC_TWSI6	40
+#define RESET_APBC_TWSI8	41
+#define RESET_APBC_TIMERS0	42
+#define RESET_APBC_TIMERS1	43
+#define RESET_APBC_TIMERS2	44
+#define RESET_APBC_TIMERS3	45
+#define RESET_APBC_TIMERS4	46
+#define RESET_APBC_TIMERS5	47
+#define RESET_APBC_TIMERS6	48
+#define RESET_APBC_TIMERS7	49
+#define RESET_APBC_AIB		50
+#define RESET_APBC_ONEWIRE	51
+#define RESET_APBC_I2S0		52
+#define RESET_APBC_I2S1		53
+#define RESET_APBC_I2S2		54
+#define RESET_APBC_I2S3		55
+#define RESET_APBC_I2S4		56
+#define RESET_APBC_I2S5		57
+#define RESET_APBC_DRO		58
+#define RESET_APBC_IR0		59
+#define RESET_APBC_IR1		60
+#define RESET_APBC_TSEN		61
+#define RESET_IPC_AP2AUD	62
+#define RESET_APBC_CAN0		63
+#define RESET_APBC_CAN1		64
+#define RESET_APBC_CAN2		65
+#define RESET_APBC_CAN3		66
+#define RESET_APBC_CAN4		67
+
+/* APMU resets */
+#define RESET_APMU_CSI		0
+#define RESET_APMU_CCIC2PHY	1
+#define RESET_APMU_CCIC3PHY	2
+#define RESET_APMU_ISP_CIBUS	3
+#define RESET_APMU_DSI_ESC	4
+#define RESET_APMU_LCD		5
+#define RESET_APMU_V2D		6
+#define RESET_APMU_LCD_MCLK	7
+#define RESET_APMU_LCD_DSCCLK	8
+#define RESET_APMU_SC2_HCLK	9
+#define RESET_APMU_CCIC_4X	10
+#define RESET_APMU_CCIC1_PHY	11
+#define RESET_APMU_SDH_AXI	12
+#define RESET_APMU_SDH0		13
+#define RESET_APMU_SDH1		14
+#define RESET_APMU_SDH2		15
+#define RESET_APMU_USB2		16
+#define RESET_APMU_USB3_PORTA	17
+#define RESET_APMU_USB3_PORTB	18
+#define RESET_APMU_USB3_PORTC	19
+#define RESET_APMU_USB3_PORTD	20
+#define RESET_APMU_QSPI		21
+#define RESET_APMU_QSPI_BUS	22
+#define RESET_APMU_DMA		23
+#define RESET_APMU_AES_WTM	24
+#define RESET_APMU_MCB_DCLK	25
+#define RESET_APMU_MCB_ACLK	26
+#define RESET_APMU_VPU		27
+#define RESET_APMU_DTC		28
+#define RESET_APMU_GPU		29
+#define RESET_APMU_ALZO		30
+#define RESET_APMU_MC		31
+#define RESET_APMU_CPU0_POP	32
+#define RESET_APMU_CPU0_SW	33
+#define RESET_APMU_CPU1_POP	34
+#define RESET_APMU_CPU1_SW	35
+#define RESET_APMU_CPU2_POP	36
+#define RESET_APMU_CPU2_SW	37
+#define RESET_APMU_CPU3_POP	38
+#define RESET_APMU_CPU3_SW	39
+#define RESET_APMU_C0_MPSUB_SW	40
+#define RESET_APMU_CPU4_POP	41
+#define RESET_APMU_CPU4_SW	42
+#define RESET_APMU_CPU5_POP	43
+#define RESET_APMU_CPU5_SW	44
+#define RESET_APMU_CPU6_POP	45
+#define RESET_APMU_CPU6_SW	46
+#define RESET_APMU_CPU7_POP	47
+#define RESET_APMU_CPU7_SW	48
+#define RESET_APMU_C1_MPSUB_SW	49
+#define RESET_APMU_MPSUB_DBG	50
+#define RESET_APMU_UCIE		51
+#define RESET_APMU_RCPU		52
+#define RESET_APMU_DSI4LN2_ESCCLK	53
+#define RESET_APMU_DSI4LN2_LCD_SW	54
+#define RESET_APMU_DSI4LN2_LCD_MCLK	55
+#define RESET_APMU_DSI4LN2_LCD_DSCCLK	56
+#define RESET_APMU_DSI4LN2_DPU_ACLK	57
+#define RESET_APMU_DPU_ACLK	58
+#define RESET_APMU_UFS_ACLK	59
+#define RESET_APMU_EDP0		60
+#define RESET_APMU_EDP1		61
+#define RESET_APMU_PCIE_PORTA	62
+#define RESET_APMU_PCIE_PORTB	63
+#define RESET_APMU_PCIE_PORTC	64
+#define RESET_APMU_PCIE_PORTD	65
+#define RESET_APMU_PCIE_PORTE	66
+#define RESET_APMU_EMAC0	67
+#define RESET_APMU_EMAC1	68
+#define RESET_APMU_EMAC2	69
+#define RESET_APMU_ESPI_MCLK	70
+#define RESET_APMU_ESPI_SCLK	71
+
+/* DCIU resets */
+#define RESET_DCIU_HDMA		0
+#define RESET_DCIU_DMA350	1
+#define RESET_DCIU_DMA350_0	2
+#define RESET_DCIU_DMA350_1	3
+#define RESET_DCIU_AXIDMA0	4
+#define RESET_DCIU_AXIDMA1	5
+#define RESET_DCIU_AXIDMA2	6
+#define RESET_DCIU_AXIDMA3	7
+#define RESET_DCIU_AXIDMA4	8
+#define RESET_DCIU_AXIDMA5	9
+#define RESET_DCIU_AXIDMA6	10
+#define RESET_DCIU_AXIDMA7	11
+
+#endif /* _DT_BINDINGS_RESET_SPACEMIT_K3_RESETS_H_ */
+22 -8
include/linux/firmware/qcom/qcom_scm.h
···
 void qcom_scm_cpu_power_down(u32 flags);
 int qcom_scm_set_remote_state(u32 state, u32 id);
 
-struct qcom_scm_pas_metadata {
+struct qcom_scm_pas_context {
+	struct device *dev;
+	u32 pas_id;
+	phys_addr_t mem_phys;
+	size_t mem_size;
 	void *ptr;
 	dma_addr_t phys;
 	ssize_t size;
+	bool use_tzmem;
 };
 
-int qcom_scm_pas_init_image(u32 peripheral, const void *metadata, size_t size,
-			    struct qcom_scm_pas_metadata *ctx);
-void qcom_scm_pas_metadata_release(struct qcom_scm_pas_metadata *ctx);
-int qcom_scm_pas_mem_setup(u32 peripheral, phys_addr_t addr, phys_addr_t size);
-int qcom_scm_pas_auth_and_reset(u32 peripheral);
-int qcom_scm_pas_shutdown(u32 peripheral);
-bool qcom_scm_pas_supported(u32 peripheral);
+struct qcom_scm_pas_context *devm_qcom_scm_pas_context_alloc(struct device *dev,
+							     u32 pas_id,
+							     phys_addr_t mem_phys,
+							     size_t mem_size);
+int qcom_scm_pas_init_image(u32 pas_id, const void *metadata, size_t size,
+			    struct qcom_scm_pas_context *ctx);
+void qcom_scm_pas_metadata_release(struct qcom_scm_pas_context *ctx);
+int qcom_scm_pas_mem_setup(u32 pas_id, phys_addr_t addr, phys_addr_t size);
+int qcom_scm_pas_auth_and_reset(u32 pas_id);
+int qcom_scm_pas_shutdown(u32 pas_id);
+bool qcom_scm_pas_supported(u32 pas_id);
+struct resource_table *qcom_scm_pas_get_rsc_table(struct qcom_scm_pas_context *ctx,
+						  void *input_rt, size_t input_rt_size,
+						  size_t *output_rt_size);
+
+int qcom_scm_pas_prepare_and_auth_reset(struct qcom_scm_pas_context *ctx);
 
 int qcom_scm_io_readl(phys_addr_t addr, unsigned int *val);
 int qcom_scm_io_writel(phys_addr_t addr, unsigned int val);
+19
include/linux/mailbox/mtk-cmdq-mailbox.h
···
 	struct cmdq_pkt *pkt;
 };
 
+struct cmdq_mbox_priv {
+	u8 shift_pa;
+	dma_addr_t mminfra_offset;
+};
+
 struct cmdq_pkt {
 	void *va_base;
 	dma_addr_t pa_base;
 	size_t cmd_buf_size; /* command occupied size */
 	size_t buf_size; /* real buffer size */
+	struct cmdq_mbox_priv priv; /* for generating instruction */
 };
+
+/**
+ * cmdq_get_mbox_priv() - get the private data of mailbox channel
+ * @chan:	mailbox channel
+ * @priv:	pointer to store the private data of mailbox channel
+ *
+ * While generating the GCE instruction to command buffer, the private data
+ * of GCE hardware may need to be referenced, such as the shift bits of
+ * physical address.
+ *
+ * This function should be called before generating the GCE instruction.
+ */
+void cmdq_get_mbox_priv(struct mbox_chan *chan, struct cmdq_mbox_priv *priv);
 
 /**
  * cmdq_get_shift_pa() - get the shift bits of physical address
+40 -1
include/linux/of_irq.h
···
 
 typedef int (*of_irq_init_cb_t)(struct device_node *, struct device_node *);
 
+struct of_imap_parser {
+	struct device_node *node;
+	const __be32 *imap;
+	const __be32 *imap_end;
+	u32 parent_offset;
+};
+
+struct of_imap_item {
+	struct of_phandle_args parent_args;
+	u32 child_imap_count;
+	u32 child_imap[16];	/* Arbitrary size.
+				 * Should be #address-cells + #interrupt-cells but
+				 * avoid using allocation and so, expect that 16
+				 * should be enough
+				 */
+};
+
+/*
+ * If the iterator is exited prematurely (break, goto, return) of_node_put() has
+ * to be called on item.parent_args.np
+ */
+#define for_each_of_imap_item(parser, item) \
+	for (; of_imap_parser_one(parser, item);)
+
 /*
  * Workarounds only applied to 32bit powermac machines
  */
···
 extern int of_irq_to_resource_table(struct device_node *dev,
 		struct resource *res, int nr_irqs);
 extern struct device_node *of_irq_find_parent(struct device_node *child);
+extern int of_imap_parser_init(struct of_imap_parser *parser,
+			       struct device_node *node,
+			       struct of_imap_item *item);
+extern struct of_imap_item *of_imap_parser_one(struct of_imap_parser *parser,
+					       struct of_imap_item *item);
 extern struct irq_domain *of_msi_get_domain(struct device *dev,
 					    const struct device_node *np,
 					    enum irq_domain_bus_token token);
···
 {
 	return NULL;
 }
-
+static inline int of_imap_parser_init(struct of_imap_parser *parser,
+				      struct device_node *node,
+				      struct of_imap_item *item)
+{
+	return -ENOSYS;
+}
+static inline struct of_imap_item *of_imap_parser_one(struct of_imap_parser *parser,
+						      struct of_imap_item *item)
+{
+	return NULL;
+}
 static inline struct irq_domain *of_msi_get_domain(struct device *dev,
 						   struct device_node *np,
 						   enum irq_domain_bus_token token)
-36
include/linux/platform_data/hwmon-s3c.h
···
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * Copyright 2005 Simtec Electronics
- *	Ben Dooks <ben@simtec.co.uk>
- *	http://armlinux.simtec.co.uk/
- *
- * S3C - HWMon interface for ADC
- */
-
-#ifndef __HWMON_S3C_H__
-#define __HWMON_S3C_H__
-
-/**
- * s3c_hwmon_chcfg - channel configuration
- * @name: The name to give this channel.
- * @mult: Multiply the ADC value read by this.
- * @div: Divide the value from the ADC by this.
- *
- * The value read from the ADC is converted to a value that
- * hwmon expects (mV) by result = (value_read * @mult) / @div.
- */
-struct s3c_hwmon_chcfg {
-	const char	*name;
-	unsigned int	mult;
-	unsigned int	div;
-};
-
-/**
- * s3c_hwmon_pdata - HWMON platform data
- * @in: One configuration for each possible channel used.
- */
-struct s3c_hwmon_pdata {
-	struct s3c_hwmon_chcfg	*in[8];
-};
-
-#endif /* __HWMON_S3C_H__ */
+2
include/linux/scmi_imx_protocol.h
···
 			     u32 *num, u32 *val);
 	int (*misc_ctrl_req_notify)(const struct scmi_protocol_handle *ph,
 				    u32 ctrl_id, u32 evt_id, u32 flags);
+	int (*misc_syslog)(const struct scmi_protocol_handle *ph, u16 *size,
+			   void *array);
 };
 
 /* See LMM_ATTRIBUTES in imx95.rst */
+7
include/linux/soc/apple/rtkit.h
···
 int apple_rtkit_shutdown(struct apple_rtkit *rtk);
 
 /*
+ * Put the co-processor into the lowest power state. Note that it usually
+ * is not possible to recover from this state without a full SoC reset.
+ */
+int apple_rtkit_poweroff(struct apple_rtkit *rtk);
+
+/*
  * Put the co-processor into idle mode
  */
 int apple_rtkit_idle(struct apple_rtkit *rtk);
+93
include/linux/soc/mediatek/mtk-cmdq.h
···
 #define CMDQ_THR_SPR_IDX2	(2)
 #define CMDQ_THR_SPR_IDX3	(3)
 
+#define CMDQ_SUBSYS_INVALID	(U8_MAX)
+
 struct cmdq_pkt;
 
 enum cmdq_logic_op {
···
 
 struct cmdq_client_reg {
 	u8 subsys;
+	phys_addr_t pa_base;
 	u16 offset;
 	u16 size;
+
+	/*
+	 * Client only uses these functions for MMIO access,
+	 * so doesn't need to handle the mminfra_offset.
+	 * The mminfra_offset is used for DRAM access and
+	 * is handled internally by CMDQ APIs.
+	 */
+	int (*pkt_write)(struct cmdq_pkt *pkt, u8 subsys, u32 pa_base,
+			 u16 offset, u32 value);
+	int (*pkt_write_mask)(struct cmdq_pkt *pkt, u8 subsys, u32 pa_base,
+			      u16 offset, u32 value, u32 mask);
 };
 
 struct cmdq_client {
···
 int cmdq_pkt_write(struct cmdq_pkt *pkt, u8 subsys, u16 offset, u32 value);
 
 /**
+ * cmdq_pkt_write_pa() - append write command to the CMDQ packet with pa_base
+ * @pkt:	the CMDQ packet
+ * @subsys:	unused parameter
+ * @pa_base:	the physical address base of the hardware register
+ * @offset:	register offset from CMDQ sub system
+ * @value:	the specified target register value
+ *
+ * Return: 0 for success; else the error code is returned
+ */
+int cmdq_pkt_write_pa(struct cmdq_pkt *pkt, u8 subsys /*unused*/,
+		      u32 pa_base, u16 offset, u32 value);
+
+/**
+ * cmdq_pkt_write_subsys() - append write command to the CMDQ packet with subsys
+ * @pkt:	the CMDQ packet
+ * @subsys:	the CMDQ sub system code
+ * @pa_base:	unused parameter
+ * @offset:	register offset from CMDQ sub system
+ * @value:	the specified target register value
+ *
+ * Return: 0 for success; else the error code is returned
+ */
+int cmdq_pkt_write_subsys(struct cmdq_pkt *pkt, u8 subsys,
+			  u32 pa_base /*unused*/, u16 offset, u32 value);
+
+/**
  * cmdq_pkt_write_mask() - append write command with mask to the CMDQ packet
  * @pkt:	the CMDQ packet
  * @subsys:	the CMDQ sub system code
···
  */
 int cmdq_pkt_write_mask(struct cmdq_pkt *pkt, u8 subsys,
 			u16 offset, u32 value, u32 mask);
+
+/**
+ * cmdq_pkt_write_mask_pa() - append write command with mask to the CMDQ packet with pa
+ * @pkt:	the CMDQ packet
+ * @subsys:	unused parameter
+ * @pa_base:	the physical address base of the hardware register
+ * @offset:	register offset from CMDQ sub system
+ * @value:	the specified target register value
+ * @mask:	the specified target register mask
+ *
+ * Return: 0 for success; else the error code is returned
+ */
+int cmdq_pkt_write_mask_pa(struct cmdq_pkt *pkt, u8 subsys /*unused*/,
+			   u32 pa_base, u16 offset, u32 value, u32 mask);
+
+/**
+ * cmdq_pkt_write_mask_subsys() - append write command with mask to the CMDQ packet with subsys
+ * @pkt:	the CMDQ packet
+ * @subsys:	the CMDQ sub system code
+ * @pa_base:	unused parameter
+ * @offset:	register offset from CMDQ sub system
+ * @value:	the specified target register value
+ * @mask:	the specified target register mask
+ *
+ * Return: 0 for success; else the error code is returned
+ */
+int cmdq_pkt_write_mask_subsys(struct cmdq_pkt *pkt, u8 subsys,
+			       u32 pa_base /*unused*/, u16 offset, u32 value, u32 mask);
 
 /*
  * cmdq_pkt_read_s() - append read_s command to the CMDQ packet
···
 	return -ENOENT;
 }
 
+static inline int cmdq_pkt_write_pa(struct cmdq_pkt *pkt, u8 subsys /*unused*/,
+				    u32 pa_base, u16 offset, u32 value)
+{
+	return -ENOENT;
+}
+
+static inline int cmdq_pkt_write_subsys(struct cmdq_pkt *pkt, u8 subsys,
+					u32 pa_base /*unused*/, u16 offset, u32 value)
+{
+	return -ENOENT;
+}
+
 static inline int cmdq_pkt_write_mask(struct cmdq_pkt *pkt, u8 subsys,
 				      u16 offset, u32 value, u32 mask)
+{
+	return -ENOENT;
+}
+
+static inline int cmdq_pkt_write_mask_pa(struct cmdq_pkt *pkt, u8 subsys /*unused*/,
+					 u32 pa_base, u16 offset, u32 value, u32 mask)
+{
+	return -ENOENT;
+}
+
+static inline int cmdq_pkt_write_mask_subsys(struct cmdq_pkt *pkt, u8 subsys,
+					     u32 pa_base /*unused*/, u16 offset,
+					     u32 value, u32 mask)
 {
 	return -ENOENT;
 }
+4
include/linux/soc/qcom/llcc-qcom.h
···
#define LLCC_CAMSRTIP		73
#define LLCC_CAMRTRF		74
#define LLCC_CAMSRTRF		75
+#define LLCC_OOBM_NS		81
+#define LLCC_OOBM_S		82
#define LLCC_VIDEO_APV		83
#define LLCC_COMPUTE1		87
#define LLCC_CPUSS_OPP		88
#define LLCC_CPUSSMPAM		89
+#define LLCC_VIDSC_VSP1		91
#define LLCC_CAM_IPE_STROV	92
#define LLCC_CAM_OFE_STROV	93
#define LLCC_CPUSS_HEU		94
+#define LLCC_PCIE_TCU		97
#define LLCC_MDM_PNG_FIXED	100

/**
+11 -11
include/linux/soc/qcom/mdt_loader.h
···
struct device;
struct firmware;
-struct qcom_scm_pas_metadata;
+struct qcom_scm_pas_context;

#if IS_ENABLED(CONFIG_QCOM_MDT_LOADER)

ssize_t qcom_mdt_get_size(const struct firmware *fw);
-int qcom_mdt_pas_init(struct device *dev, const struct firmware *fw,
-		      const char *fw_name, int pas_id, phys_addr_t mem_phys,
-		      struct qcom_scm_pas_metadata *pas_metadata_ctx);
int qcom_mdt_load(struct device *dev, const struct firmware *fw,
		  const char *fw_name, int pas_id, void *mem_region,
		  phys_addr_t mem_phys, size_t mem_size,
		  phys_addr_t *reloc_base);
+
+int qcom_mdt_pas_load(struct qcom_scm_pas_context *ctx, const struct firmware *fw,
+		      const char *firmware, void *mem_region, phys_addr_t *reloc_base);

int qcom_mdt_load_no_init(struct device *dev, const struct firmware *fw,
			  const char *fw_name, void *mem_region,
···
	return -ENODEV;
}

-static inline int qcom_mdt_pas_init(struct device *dev, const struct firmware *fw,
-				    const char *fw_name, int pas_id, phys_addr_t mem_phys,
-				    struct qcom_scm_pas_metadata *pas_metadata_ctx)
-{
-	return -ENODEV;
-}
-
static inline int qcom_mdt_load(struct device *dev, const struct firmware *fw,
				const char *fw_name, int pas_id,
				void *mem_region, phys_addr_t mem_phys,
				size_t mem_size, phys_addr_t *reloc_base)
+{
+	return -ENODEV;
+}
+
+static inline int qcom_mdt_pas_load(struct qcom_scm_pas_context *ctx,
+				    const struct firmware *fw, const char *firmware,
+				    void *mem_region, phys_addr_t *reloc_base)
{
	return -ENODEV;
}
+1
include/linux/soc/qcom/ubwc.h
···
#define __QCOM_UBWC_H__

#include <linux/bits.h>
+#include <linux/printk.h>
#include <linux/types.h>

struct qcom_ubwc_cfg_data {
+9
include/linux/tee_core.h
···
/**
 * struct tee_driver_ops - driver operations vtable
 * @get_version:	returns version of driver
+ * @get_tee_revision:	returns revision string (diagnostic only);
+ *			do not infer feature support from this, use
+ *			TEE_IOC_VERSION instead
 * @open:		called for a context when the device file is opened
 * @close_context:	called when the device file is closed
 * @release:		called to release the context
···
 * client closes the device file, even if there are existing references to the
 * context. The TEE driver can use @close_context to start cleaning up.
 */
+
struct tee_driver_ops {
	void (*get_version)(struct tee_device *teedev,
			    struct tee_ioctl_version_data *vers);
+	int (*get_tee_revision)(struct tee_device *teedev,
+				char *buf, size_t len);
	int (*open)(struct tee_context *ctx);
	void (*close_context)(struct tee_context *ctx);
	void (*release)(struct tee_context *ctx);
···
			    unsigned long start);
	int (*shm_unregister)(struct tee_context *ctx, struct tee_shm *shm);
};
+
+/* Size for TEE revision string buffer used by get_tee_revision(). */
+#define TEE_REVISION_STR_SIZE	128

/**
 * struct tee_desc - Describes the TEE driver to the subsystem
+12
include/linux/tee_drv.h
···
 * @driver:	driver structure
 */
struct tee_client_driver {
+	int (*probe)(struct tee_client_device *);
+	void (*remove)(struct tee_client_device *);
+	void (*shutdown)(struct tee_client_device *);
	const struct tee_client_device_id *id_table;
	struct device_driver driver;
};

#define to_tee_client_driver(d) \
	container_of_const(d, struct tee_client_driver, driver)
+
+#define tee_client_driver_register(drv) \
+	__tee_client_driver_register(drv, THIS_MODULE)
+int __tee_client_driver_register(struct tee_client_driver *, struct module *);
+void tee_client_driver_unregister(struct tee_client_driver *);
+
+#define module_tee_client_driver(__tee_client_driver) \
+	module_driver(__tee_client_driver, tee_client_driver_register, \
+		      tee_client_driver_unregister)

#endif /*__TEE_DRV_H*/
+21
include/soc/spacemit/ccu.h
···
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#ifndef __SOC_SPACEMIT_CCU_H__
+#define __SOC_SPACEMIT_CCU_H__
+
+#include <linux/auxiliary_bus.h>
+#include <linux/regmap.h>
+
+/* Auxiliary device used to represent a CCU reset controller */
+struct spacemit_ccu_adev {
+	struct auxiliary_device adev;
+	struct regmap *regmap;
+};
+
+static inline struct spacemit_ccu_adev *
+to_spacemit_ccu_adev(struct auxiliary_device *adev)
+{
+	return container_of(adev, struct spacemit_ccu_adev, adev);
+}
+
+#endif /* __SOC_SPACEMIT_CCU_H__ */
+1 -11
include/soc/spacemit/k1-syscon.h
···
#ifndef __SOC_K1_SYSCON_H__
#define __SOC_K1_SYSCON_H__

-/* Auxiliary device used to represent a CCU reset controller */
-struct spacemit_ccu_adev {
-	struct auxiliary_device adev;
-	struct regmap *regmap;
-};
-
-static inline struct spacemit_ccu_adev *
-to_spacemit_ccu_adev(struct auxiliary_device *adev)
-{
-	return container_of(adev, struct spacemit_ccu_adev, adev);
-}
+#include "ccu.h"

/* APBS register offset */
#define APBS_PLL1_SWCR1		0x100
+273
include/soc/spacemit/k3-syscon.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + 3 + /* SpacemiT clock and reset driver definitions for the K3 SoC */ 4 + 5 + #ifndef __SOC_K3_SYSCON_H__ 6 + #define __SOC_K3_SYSCON_H__ 7 + 8 + #include "ccu.h" 9 + 10 + /* APBS register offset */ 11 + #define APBS_PLL1_SWCR1 0x100 12 + #define APBS_PLL1_SWCR2 0x104 13 + #define APBS_PLL1_SWCR3 0x108 14 + #define APBS_PLL2_SWCR1 0x118 15 + #define APBS_PLL2_SWCR2 0x11c 16 + #define APBS_PLL2_SWCR3 0x120 17 + #define APBS_PLL3_SWCR1 0x124 18 + #define APBS_PLL3_SWCR2 0x128 19 + #define APBS_PLL3_SWCR3 0x12c 20 + #define APBS_PLL4_SWCR1 0x130 21 + #define APBS_PLL4_SWCR2 0x134 22 + #define APBS_PLL4_SWCR3 0x138 23 + #define APBS_PLL5_SWCR1 0x13c 24 + #define APBS_PLL5_SWCR2 0x140 25 + #define APBS_PLL5_SWCR3 0x144 26 + #define APBS_PLL6_SWCR1 0x148 27 + #define APBS_PLL6_SWCR2 0x14c 28 + #define APBS_PLL6_SWCR3 0x150 29 + #define APBS_PLL7_SWCR1 0x158 30 + #define APBS_PLL7_SWCR2 0x15c 31 + #define APBS_PLL7_SWCR3 0x160 32 + #define APBS_PLL8_SWCR1 0x180 33 + #define APBS_PLL8_SWCR2 0x184 34 + #define APBS_PLL8_SWCR3 0x188 35 + 36 + /* MPMU register offset */ 37 + #define MPMU_FCCR 0x0008 38 + #define MPMU_POSR 0x0010 39 + #define POSR_PLL1_LOCK BIT(24) 40 + #define POSR_PLL2_LOCK BIT(25) 41 + #define POSR_PLL3_LOCK BIT(26) 42 + #define POSR_PLL4_LOCK BIT(27) 43 + #define POSR_PLL5_LOCK BIT(28) 44 + #define POSR_PLL6_LOCK BIT(29) 45 + #define POSR_PLL7_LOCK BIT(30) 46 + #define POSR_PLL8_LOCK BIT(31) 47 + #define MPMU_SUCCR 0x0014 48 + #define MPMU_ISCCR 0x0044 49 + #define MPMU_WDTPCR 0x0200 50 + #define MPMU_RIPCCR 0x0210 51 + #define MPMU_ACGR 0x1024 52 + #define MPMU_APBCSCR 0x1050 53 + #define MPMU_SUCCR_1 0x10b0 54 + 55 + #define MPMU_I2S0_SYSCLK 0x1100 56 + #define MPMU_I2S2_SYSCLK 0x1104 57 + #define MPMU_I2S3_SYSCLK 0x1108 58 + #define MPMU_I2S4_SYSCLK 0x110c 59 + #define MPMU_I2S5_SYSCLK 0x1110 60 + #define MPMU_I2S_SYSCLK_CTRL 0x1114 61 + 62 + /* APBC register offset */ 63 + #define 
APBC_UART0_CLK_RST 0x00 64 + #define APBC_UART2_CLK_RST 0x04 65 + #define APBC_GPIO_CLK_RST 0x08 66 + #define APBC_PWM0_CLK_RST 0x0c 67 + #define APBC_PWM1_CLK_RST 0x10 68 + #define APBC_PWM2_CLK_RST 0x14 69 + #define APBC_PWM3_CLK_RST 0x18 70 + #define APBC_TWSI8_CLK_RST 0x20 71 + #define APBC_UART3_CLK_RST 0x24 72 + #define APBC_RTC_CLK_RST 0x28 73 + #define APBC_TWSI0_CLK_RST 0x2c 74 + #define APBC_TWSI1_CLK_RST 0x30 75 + #define APBC_TIMERS0_CLK_RST 0x34 76 + #define APBC_TWSI2_CLK_RST 0x38 77 + #define APBC_AIB_CLK_RST 0x3c 78 + #define APBC_TWSI4_CLK_RST 0x40 79 + #define APBC_TIMERS1_CLK_RST 0x44 80 + #define APBC_ONEWIRE_CLK_RST 0x48 81 + #define APBC_TWSI5_CLK_RST 0x4c 82 + #define APBC_DRO_CLK_RST 0x58 83 + #define APBC_IR0_CLK_RST 0x5c 84 + #define APBC_IR1_CLK_RST 0x1c 85 + #define APBC_TWSI6_CLK_RST 0x60 86 + #define APBC_COUNTER_CLK_SEL 0x64 87 + #define APBC_TSEN_CLK_RST 0x6c 88 + #define APBC_UART4_CLK_RST 0x70 89 + #define APBC_UART5_CLK_RST 0x74 90 + #define APBC_UART6_CLK_RST 0x78 91 + #define APBC_SSP3_CLK_RST 0x7c 92 + #define APBC_SSPA0_CLK_RST 0x80 93 + #define APBC_SSPA1_CLK_RST 0x84 94 + #define APBC_SSPA2_CLK_RST 0x88 95 + #define APBC_SSPA3_CLK_RST 0x8c 96 + #define APBC_IPC_AP2AUD_CLK_RST 0x90 97 + #define APBC_UART7_CLK_RST 0x94 98 + #define APBC_UART8_CLK_RST 0x98 99 + #define APBC_UART9_CLK_RST 0x9c 100 + #define APBC_CAN0_CLK_RST 0xa0 101 + #define APBC_CAN1_CLK_RST 0xa4 102 + #define APBC_PWM4_CLK_RST 0xa8 103 + #define APBC_PWM5_CLK_RST 0xac 104 + #define APBC_PWM6_CLK_RST 0xb0 105 + #define APBC_PWM7_CLK_RST 0xb4 106 + #define APBC_PWM8_CLK_RST 0xb8 107 + #define APBC_PWM9_CLK_RST 0xbc 108 + #define APBC_PWM10_CLK_RST 0xc0 109 + #define APBC_PWM11_CLK_RST 0xc4 110 + #define APBC_PWM12_CLK_RST 0xc8 111 + #define APBC_PWM13_CLK_RST 0xcc 112 + #define APBC_PWM14_CLK_RST 0xd0 113 + #define APBC_PWM15_CLK_RST 0xd4 114 + #define APBC_PWM16_CLK_RST 0xd8 115 + #define APBC_PWM17_CLK_RST 0xdc 116 + #define APBC_PWM18_CLK_RST 0xe0 117 + 
#define APBC_PWM19_CLK_RST 0xe4 118 + #define APBC_TIMERS2_CLK_RST 0x11c 119 + #define APBC_TIMERS3_CLK_RST 0x120 120 + #define APBC_TIMERS4_CLK_RST 0x124 121 + #define APBC_TIMERS5_CLK_RST 0x128 122 + #define APBC_TIMERS6_CLK_RST 0x12c 123 + #define APBC_TIMERS7_CLK_RST 0x130 124 + 125 + #define APBC_CAN2_CLK_RST 0x148 126 + #define APBC_CAN3_CLK_RST 0x14c 127 + #define APBC_CAN4_CLK_RST 0x150 128 + #define APBC_UART10_CLK_RST 0x154 129 + #define APBC_SSP0_CLK_RST 0x158 130 + #define APBC_SSP1_CLK_RST 0x15c 131 + #define APBC_SSPA4_CLK_RST 0x160 132 + #define APBC_SSPA5_CLK_RST 0x164 133 + 134 + /* APMU register offset */ 135 + #define APMU_CSI_CCIC2_CLK_RES_CTRL 0x024 136 + #define APMU_ISP_CLK_RES_CTRL 0x038 137 + #define APMU_PMU_CLK_GATE_CTRL 0x040 138 + #define APMU_LCD_CLK_RES_CTRL1 0x044 139 + #define APMU_LCD_SPI_CLK_RES_CTRL 0x048 140 + #define APMU_LCD_CLK_RES_CTRL2 0x04c 141 + #define APMU_CCIC_CLK_RES_CTRL 0x050 142 + #define APMU_SDH0_CLK_RES_CTRL 0x054 143 + #define APMU_SDH1_CLK_RES_CTRL 0x058 144 + #define APMU_USB_CLK_RES_CTRL 0x05c 145 + #define APMU_QSPI_CLK_RES_CTRL 0x060 146 + #define APMU_DMA_CLK_RES_CTRL 0x064 147 + #define APMU_AES_CLK_RES_CTRL 0x068 148 + #define APMU_MCB_CLK_RES_CTRL 0x06c 149 + #define APMU_VPU_CLK_RES_CTRL 0x0a4 150 + #define APMU_DTC_CLK_RES_CTRL 0x0ac 151 + #define APMU_GPU_CLK_RES_CTRL 0x0cc 152 + #define APMU_SDH2_CLK_RES_CTRL 0x0e0 153 + #define APMU_PMUA_MC_CTRL 0x0e8 154 + #define APMU_PMU_CC2_AP 0x100 155 + #define APMU_PMUA_EM_CLK_RES_CTRL 0x104 156 + #define APMU_UCIE_CTRL 0x11c 157 + #define APMU_RCPU_CLK_RES_CTRL 0x14c 158 + #define APMU_TOP_DCLK_CTRL 0x158 159 + #define APMU_LCD_EDP_CTRL 0x23c 160 + #define APMU_UFS_CLK_RES_CTRL 0x268 161 + #define APMU_LCD_CLK_RES_CTRL3 0x26c 162 + #define APMU_LCD_CLK_RES_CTRL4 0x270 163 + #define APMU_LCD_CLK_RES_CTRL5 0x274 164 + #define APMU_CCI550_CLK_CTRL 0x300 165 + #define APMU_ACLK_CLK_CTRL 0x388 166 + #define APMU_CPU_C0_CLK_CTRL 0x38C 167 + #define 
APMU_CPU_C1_CLK_CTRL 0x390 168 + #define APMU_CPU_C2_CLK_CTRL 0x394 169 + #define APMU_CPU_C3_CLK_CTRL 0x208 170 + #define APMU_PCIE_CLK_RES_CTRL_A 0x1f0 171 + #define APMU_PCIE_CLK_RES_CTRL_B 0x1c8 172 + #define APMU_PCIE_CLK_RES_CTRL_C 0x1d0 173 + #define APMU_PCIE_CLK_RES_CTRL_D 0x1e0 174 + #define APMU_PCIE_CLK_RES_CTRL_E 0x1e8 175 + #define APMU_EMAC0_CLK_RES_CTRL 0x3e4 176 + #define APMU_EMAC1_CLK_RES_CTRL 0x3ec 177 + #define APMU_EMAC2_CLK_RES_CTRL 0x248 178 + #define APMU_ESPI_CLK_RES_CTRL 0x240 179 + #define APMU_SNR_ISIM_VCLK_CTRL 0x3f8 180 + 181 + /* DCIU register offsets */ 182 + #define DCIU_DMASYS_CLK_EN 0x234 183 + #define DCIU_DMASYS_SDMA_CLK_EN 0x238 184 + #define DCIU_C2_TCM_PIPE_CLK 0x244 185 + #define DCIU_C3_TCM_PIPE_CLK 0x248 186 + 187 + #define DCIU_DMASYS_S0_RSTN 0x204 188 + #define DCIU_DMASYS_S1_RSTN 0x208 189 + #define DCIU_DMASYS_A0_RSTN 0x20C 190 + #define DCIU_DMASYS_A1_RSTN 0x210 191 + #define DCIU_DMASYS_A2_RSTN 0x214 192 + #define DCIU_DMASYS_A3_RSTN 0x218 193 + #define DCIU_DMASYS_A4_RSTN 0x21C 194 + #define DCIU_DMASYS_A5_RSTN 0x220 195 + #define DCIU_DMASYS_A6_RSTN 0x224 196 + #define DCIU_DMASYS_A7_RSTN 0x228 197 + #define DCIU_DMASYS_RSTN 0x22C 198 + #define DCIU_DMASYS_SDMA_RSTN 0x230 199 + 200 + /* RCPU SYSCTRL register offsets */ 201 + #define RCPU_CAN_CLK_RST 0x4c 202 + #define RCPU_CAN1_CLK_RST 0xF0 203 + #define RCPU_CAN2_CLK_RST 0xF4 204 + #define RCPU_CAN3_CLK_RST 0xF8 205 + #define RCPU_CAN4_CLK_RST 0xFC 206 + #define RCPU_IRC_CLK_RST 0x48 207 + #define RCPU_IRC1_CLK_RST 0xEC 208 + #define RCPU_GMAC_CLK_RST 0xE4 209 + #define RCPU_ESPI_CLK_RST 0xDC 210 + #define RCPU_AUDIO_I2S0_SYS_CLK_CTRL 0x70 211 + #define RCPU_AUDIO_I2S1_SYS_CLK_CTRL 0x44 212 + 213 + /* RCPU UARTCTRL register offsets */ 214 + #define RCPU1_UART0_CLK_RST 0x00 215 + #define RCPU1_UART1_CLK_RST 0x04 216 + #define RCPU1_UART2_CLK_RST 0x08 217 + #define RCPU1_UART3_CLK_RST 0x0c 218 + #define RCPU1_UART4_CLK_RST 0x10 219 + #define RCPU1_UART5_CLK_RST 
0x14 220 + 221 + /* RCPU I2SCTRL register offsets */ 222 + #define RCPU2_AUDIO_I2S0_TX_RX_CLK_CTRL 0x60 223 + #define RCPU2_AUDIO_I2S1_TX_RX_CLK_CTRL 0x64 224 + #define RCPU2_AUDIO_I2S2_TX_RX_CLK_CTRL 0x68 225 + #define RCPU2_AUDIO_I2S3_TX_RX_CLK_CTRL 0x6C 226 + 227 + #define RCPU2_AUDIO_I2S2_SYS_CLK_CTRL 0x44 228 + #define RCPU2_AUDIO_I2S3_SYS_CLK_CTRL 0x54 229 + 230 + /* RCPU SPICTRL register offsets */ 231 + #define RCPU3_SSP0_CLK_RST 0x00 232 + #define RCPU3_SSP1_CLK_RST 0x04 233 + #define RCPU3_PWR_SSP_CLK_RST 0x08 234 + 235 + /* RCPU I2CCTRL register offsets */ 236 + #define RCPU4_I2C0_CLK_RST 0x00 237 + #define RCPU4_I2C1_CLK_RST 0x04 238 + #define RCPU4_PWR_I2C_CLK_RST 0x08 239 + 240 + /* RPMU register offsets */ 241 + #define RCPU5_AON_PER_CLK_RST_CTRL 0x2C 242 + #define RCPU5_TIMER1_CLK_RST 0x4C 243 + #define RCPU5_TIMER2_CLK_RST 0x70 244 + #define RCPU5_TIMER3_CLK_RST 0x78 245 + #define RCPU5_TIMER4_CLK_RST 0x7C 246 + #define RCPU5_GPIO_AND_EDGE_CLK_RST 0x74 247 + #define RCPU5_RCPU_BUS_CLK_CTRL 0xC0 248 + #define RCPU5_RT24_CORE0_CLK_CTRL 0xC4 249 + #define RCPU5_RT24_CORE1_CLK_CTRL 0xC8 250 + #define RCPU5_RT24_CORE0_SW_RESET 0xCC 251 + #define RCPU5_RT24_CORE1_SW_RESET 0xD0 252 + 253 + /* RCPU PWMCTRL register offsets */ 254 + #define RCPU6_PWM0_CLK_RST 0x00 255 + #define RCPU6_PWM1_CLK_RST 0x04 256 + #define RCPU6_PWM2_CLK_RST 0x08 257 + #define RCPU6_PWM3_CLK_RST 0x0c 258 + #define RCPU6_PWM4_CLK_RST 0x10 259 + #define RCPU6_PWM5_CLK_RST 0x14 260 + #define RCPU6_PWM6_CLK_RST 0x18 261 + #define RCPU6_PWM7_CLK_RST 0x1c 262 + #define RCPU6_PWM8_CLK_RST 0x20 263 + #define RCPU6_PWM9_CLK_RST 0x24 264 + 265 + /* APBC2 SEC register offsets */ 266 + #define APBC2_UART1_CLK_RST 0x00 267 + #define APBC2_SSP2_CLK_RST 0x04 268 + #define APBC2_TWSI3_CLK_RST 0x08 269 + #define APBC2_RTC_CLK_RST 0x0c 270 + #define APBC2_TIMERS_CLK_RST 0x10 271 + #define APBC2_GPIO_CLK_RST 0x1c 272 + 273 + #endif /* __SOC_K3_SYSCON_H__ */
+59 -1
include/soc/tegra/pmc.h
···
struct clk;
struct reset_control;
+struct tegra_pmc;

bool tegra_pmc_cpu_is_powered(unsigned int cpuid);
int tegra_pmc_cpu_power_on(unsigned int cpuid);
···
};

#ifdef CONFIG_SOC_TEGRA_PMC
+struct tegra_pmc *devm_tegra_pmc_get(struct device *dev);
+
+int tegra_pmc_powergate_power_on(struct tegra_pmc *pmc, unsigned int id);
+int tegra_pmc_powergate_power_off(struct tegra_pmc *pmc, unsigned int id);
+int tegra_pmc_powergate_remove_clamping(struct tegra_pmc *pmc, unsigned int id);
+
+/* Must be called with clk disabled, and returns with clk enabled */
+int tegra_pmc_powergate_sequence_power_up(struct tegra_pmc *pmc,
+					  unsigned int id, struct clk *clk,
+					  struct reset_control *rst);
+int tegra_pmc_io_pad_power_enable(struct tegra_pmc *pmc, enum tegra_io_pad id);
+int tegra_pmc_io_pad_power_disable(struct tegra_pmc *pmc, enum tegra_io_pad id);
+
+/* legacy */
int tegra_powergate_power_on(unsigned int id);
int tegra_powergate_power_off(unsigned int id);
int tegra_powergate_remove_clamping(unsigned int id);

-/* Must be called with clk disabled, and returns with clk enabled */
int tegra_powergate_sequence_power_up(unsigned int id, struct clk *clk,
				      struct reset_control *rst);
···
bool tegra_pmc_core_domain_state_synced(void);

#else
+static inline struct tegra_pmc *devm_tegra_pmc_get(struct device *dev)
+{
+	return ERR_PTR(-ENOSYS);
+}
+
+static inline int
+tegra_pmc_powergate_power_on(struct tegra_pmc *pmc, unsigned int id)
+{
+	return -ENOSYS;
+}
+
+static inline int
+tegra_pmc_powergate_power_off(struct tegra_pmc *pmc, unsigned int id)
+{
+	return -ENOSYS;
+}
+
+static inline int
+tegra_pmc_powergate_remove_clamping(struct tegra_pmc *pmc, unsigned int id)
+{
+	return -ENOSYS;
+}
+
+/* Must be called with clk disabled, and returns with clk enabled */
+static inline int
+tegra_pmc_powergate_sequence_power_up(struct tegra_pmc *pmc, unsigned int id,
+				      struct clk *clk,
+				      struct reset_control *rst)
+{
+	return -ENOSYS;
+}
+
+static inline int
+tegra_pmc_io_pad_power_enable(struct tegra_pmc *pmc, enum tegra_io_pad id)
+{
+	return -ENOSYS;
+}
+
+static inline int
+tegra_pmc_io_pad_power_disable(struct tegra_pmc *pmc, enum tegra_io_pad id)
+{
+	return -ENOSYS;
+}
+
static inline int tegra_powergate_power_on(unsigned int id)
{
	return -ENOSYS;
}
+7 -10
security/keys/trusted-keys/trusted_tee.c
···
	return 0;
}

-static int trusted_key_probe(struct device *dev)
+static int trusted_key_probe(struct tee_client_device *rng_device)
{
-	struct tee_client_device *rng_device = to_tee_client_device(dev);
+	struct device *dev = &rng_device->dev;
	int ret;
	struct tee_ioctl_open_session_arg sess_arg;
···
	return ret;
}

-static int trusted_key_remove(struct device *dev)
+static void trusted_key_remove(struct tee_client_device *dev)
{
	unregister_key_type(&key_type_trusted);
	tee_client_close_session(pvt_data.ctx, pvt_data.session_id);
	tee_client_close_context(pvt_data.ctx);
-
-	return 0;
}

static const struct tee_client_device_id trusted_key_id_table[] = {
···
MODULE_DEVICE_TABLE(tee, trusted_key_id_table);

static struct tee_client_driver trusted_key_driver = {
+	.probe = trusted_key_probe,
+	.remove = trusted_key_remove,
	.id_table = trusted_key_id_table,
	.driver = {
		.name = DRIVER_NAME,
-		.bus = &tee_bus_type,
-		.probe = trusted_key_probe,
-		.remove = trusted_key_remove,
	},
};

static int trusted_tee_init(void)
{
-	return driver_register(&trusted_key_driver.driver);
+	return tee_client_driver_register(&trusted_key_driver);
}

static void trusted_tee_exit(void)
{
-	driver_unregister(&trusted_key_driver.driver);
+	tee_client_driver_unregister(&trusted_key_driver);
}

struct trusted_key_ops trusted_key_tee_ops = {