Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'drm-msm-next-2026-04-02' of https://gitlab.freedesktop.org/drm/msm into drm-next

Changes for v7.1

CI:
- Uprev mesa
- Restore CI jobs for Qualcomm APQ8016 and APQ8096 devices

Core:
- Switched to of_get_available_child_by_name()

DPU:
- Fixes for DSC panels
- Fixed brownout because of the frequency / OPP mismatch
- Quad pipe preparation (not enabled yet)
- Switched to virtual planes by default
- Dropped VBIF_NRT support
- Added support for Eliza platform
- Reworked alpha handling
- Switched to correct CWB definitions on Eliza
- Dropped dummy INTF_0 on MSM8953
- Corrected INTFs related to DP-MST

DP:
- Removed debug prints looking into PHY internals

DSI:
- Fixes for DSC panels
- RGB101010 support
- Support for SC8280XP
- Moved PHY bindings from display/ to phy/

GPU:
- Preemption support for x2-85 and a840
- IFPC support for a840
- SKU detection support for x2-85 and a840
- Expose AQE support (VK ray-pipeline)
- Avoid locking in VM_BIND fence signaling path
- Fix to avoid reclaim in GPU snapshot path
- Disallow foreign mapping of _NO_SHARE BOs
- A couple of a6xx GPU snapshot fixes
- Various other fixes

HDMI:
- Fixed infoframes programming

MDP5:
- Dropped support for MSM8974v1
- Dropped now unused code for MSM8974 v1 and SDM660 / MSM8998

Also misc small fixes

Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Rob Clark <rob.clark@oss.qualcomm.com>
Link: https://patch.msgid.link/CACSVV012vn73BaUfk=Hw4WkQHZNPHiqfifWEunAqMc2EGOWUEQ@mail.gmail.com

+3213 -1491
+1
Documentation/devicetree/bindings/display/msm/dp-controller.yaml
···
      - items:
          - enum:
+             - qcom,eliza-dp
              - qcom,sm8750-dp
          - const: qcom,sm8650-dp
+5
Documentation/devicetree/bindings/display/msm/dsi-controller-main.yaml
···
      - items:
          - enum:
              - qcom,qcs8300-dsi-ctrl
+             - qcom,sc8280xp-dsi-ctrl
          - const: qcom,sa8775p-dsi-ctrl
+         - const: qcom,mdss-dsi-ctrl
+     - items:
+         - const: qcom,eliza-dsi-ctrl
+         - const: qcom,sm8750-dsi-ctrl
          - const: qcom,mdss-dsi-ctrl
      - enum:
          - qcom,dsi-ctrl-6g-qcm2290
+2 -2
Documentation/devicetree/bindings/display/msm/dsi-phy-10nm.yaml → Documentation/devicetree/bindings/phy/qcom,dsi-phy-10nm.yaml
···
  # SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
  %YAML 1.2
  ---
- $id: http://devicetree.org/schemas/display/msm/dsi-phy-10nm.yaml#
+ $id: http://devicetree.org/schemas/phy/qcom,dsi-phy-10nm.yaml#
  $schema: http://devicetree.org/meta-schemas/core.yaml#

  title: Qualcomm Display DSI 10nm PHY
···
    - Krishna Manikandan <quic_mkrishn@quicinc.com>

  allOf:
-   - $ref: dsi-phy-common.yaml#
+   - $ref: qcom,dsi-phy-common.yaml#

  properties:
    compatible:
+2 -2
Documentation/devicetree/bindings/display/msm/dsi-phy-14nm.yaml → Documentation/devicetree/bindings/phy/qcom,dsi-phy-14nm.yaml
···
  # SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
  %YAML 1.2
  ---
- $id: http://devicetree.org/schemas/display/msm/dsi-phy-14nm.yaml#
+ $id: http://devicetree.org/schemas/phy/qcom,dsi-phy-14nm.yaml#
  $schema: http://devicetree.org/meta-schemas/core.yaml#

  title: Qualcomm Display DSI 14nm PHY
···
    - Krishna Manikandan <quic_mkrishn@quicinc.com>

  allOf:
-   - $ref: dsi-phy-common.yaml#
+   - $ref: qcom,dsi-phy-common.yaml#

  properties:
    compatible:
+15 -10
Documentation/devicetree/bindings/display/msm/dsi-phy-20nm.yaml → Documentation/devicetree/bindings/phy/qcom,dsi-phy-28nm.yaml
···
  # SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
  %YAML 1.2
  ---
- $id: http://devicetree.org/schemas/display/msm/dsi-phy-20nm.yaml#
+ $id: http://devicetree.org/schemas/phy/qcom,dsi-phy-28nm.yaml#
  $schema: http://devicetree.org/meta-schemas/core.yaml#

- title: Qualcomm Display DSI 20nm PHY
+ title: Qualcomm Display DSI 28nm PHY

  maintainers:
    - Krishna Manikandan <quic_mkrishn@quicinc.com>

  allOf:
-   - $ref: dsi-phy-common.yaml#
+   - $ref: qcom,dsi-phy-common.yaml#

  properties:
    compatible:
-     const: qcom,dsi-phy-20nm
+     enum:
+       - qcom,dsi-phy-28nm-8226
+       - qcom,dsi-phy-28nm-8937
+       - qcom,dsi-phy-28nm-8960
+       - qcom,dsi-phy-28nm-hpm
+       - qcom,dsi-phy-28nm-hpm-fam-b
+       - qcom,dsi-phy-28nm-lp

    reg:
      items:
···
        - const: dsi_phy
        - const: dsi_phy_regulator

-   vcca-supply:
-     description: Phandle to vcca regulator device node.
-
    vddio-supply:
      description: Phandle to vdd-io regulator device node.
+
+   qcom,dsi-phy-regulator-ldo-mode:
+     type: boolean
+     description: Indicates if the LDO mode PHY regulator is wanted.

  required:
    - compatible
    - reg
    - reg-names
    - vddio-supply
-   - vcca-supply

  unevaluatedProperties: false
···
      #include <dt-bindings/clock/qcom,rpmh.h>

      dsi-phy@fd922a00 {
-         compatible = "qcom,dsi-phy-20nm";
+         compatible = "qcom,dsi-phy-28nm-lp";
          reg = <0xfd922a00 0xd4>,
                <0xfd922b00 0x2b0>,
                <0xfd922d80 0x7b>;
···
          #clock-cells = <1>;
          #phy-cells = <0>;

-         vcca-supply = <&vcca_reg>;
          vddio-supply = <&vddio_reg>;

          clocks = <&dispcc DISP_CC_MDSS_AHB_CLK>,
+10 -15
Documentation/devicetree/bindings/display/msm/dsi-phy-28nm.yaml → Documentation/devicetree/bindings/phy/qcom,dsi-phy-20nm.yaml
···
  # SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
  %YAML 1.2
  ---
- $id: http://devicetree.org/schemas/display/msm/dsi-phy-28nm.yaml#
+ $id: http://devicetree.org/schemas/phy/qcom,dsi-phy-20nm.yaml#
  $schema: http://devicetree.org/meta-schemas/core.yaml#

- title: Qualcomm Display DSI 28nm PHY
+ title: Qualcomm Display DSI 20nm PHY

  maintainers:
    - Krishna Manikandan <quic_mkrishn@quicinc.com>

  allOf:
-   - $ref: dsi-phy-common.yaml#
+   - $ref: qcom,dsi-phy-common.yaml#

  properties:
    compatible:
-     enum:
-       - qcom,dsi-phy-28nm-8226
-       - qcom,dsi-phy-28nm-8937
-       - qcom,dsi-phy-28nm-8960
-       - qcom,dsi-phy-28nm-hpm
-       - qcom,dsi-phy-28nm-hpm-fam-b
-       - qcom,dsi-phy-28nm-lp
+     const: qcom,dsi-phy-20nm

    reg:
      items:
···
        - const: dsi_phy
        - const: dsi_phy_regulator

+   vcca-supply:
+     description: Phandle to vcca regulator device node.
+
    vddio-supply:
      description: Phandle to vdd-io regulator device node.
-
-   qcom,dsi-phy-regulator-ldo-mode:
-     type: boolean
-     description: Indicates if the LDO mode PHY regulator is wanted.

  required:
    - compatible
    - reg
    - reg-names
    - vddio-supply
+   - vcca-supply

  unevaluatedProperties: false
···
      #include <dt-bindings/clock/qcom,rpmh.h>

      dsi-phy@fd922a00 {
-         compatible = "qcom,dsi-phy-28nm-lp";
+         compatible = "qcom,dsi-phy-20nm";
          reg = <0xfd922a00 0xd4>,
                <0xfd922b00 0x2b0>,
                <0xfd922d80 0x7b>;
···
          #clock-cells = <1>;
          #phy-cells = <0>;

+         vcca-supply = <&vcca_reg>;
          vddio-supply = <&vddio_reg>;

          clocks = <&dispcc DISP_CC_MDSS_AHB_CLK>,
+7 -2
Documentation/devicetree/bindings/display/msm/dsi-phy-7nm.yaml → Documentation/devicetree/bindings/phy/qcom,dsi-phy-7nm.yaml
···
  # SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
  %YAML 1.2
  ---
- $id: http://devicetree.org/schemas/display/msm/dsi-phy-7nm.yaml#
+ $id: http://devicetree.org/schemas/phy/qcom,dsi-phy-7nm.yaml#
  $schema: http://devicetree.org/meta-schemas/core.yaml#

  title: Qualcomm Display DSI 7nm PHY
···
    - Jonathan Marek <jonathan@marek.ca>

  allOf:
-   - $ref: dsi-phy-common.yaml#
+   - $ref: qcom,dsi-phy-common.yaml#

  properties:
    compatible:
···
          - qcom,sm8750-dsi-phy-3nm
      - items:
          - enum:
+             - qcom,eliza-dsi-phy-4nm
+         - const: qcom,sm8650-dsi-phy-4nm
+     - items:
+         - enum:
              - qcom,qcs8300-dsi-phy-5nm
+             - qcom,sc8280xp-dsi-phy-5nm
          - const: qcom,sa8775p-dsi-phy-5nm

    reg:
+1 -1
Documentation/devicetree/bindings/display/msm/dsi-phy-common.yaml → Documentation/devicetree/bindings/phy/qcom,dsi-phy-common.yaml
···
  # SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
  %YAML 1.2
  ---
- $id: http://devicetree.org/schemas/display/msm/dsi-phy-common.yaml#
+ $id: http://devicetree.org/schemas/phy/qcom,dsi-phy-common.yaml#
  $schema: http://devicetree.org/meta-schemas/core.yaml#

  title: Qualcomm Display DSI PHY Common Properties
+1
Documentation/devicetree/bindings/display/msm/gmu.yaml
···
        compatible:
          contains:
            enum:
+             - qcom,adreno-gmu-615.0
              - qcom,adreno-gmu-618.0
              - qcom,adreno-gmu-630.2
      then:
-7
Documentation/devicetree/bindings/display/msm/gpu.yaml
···
      clocks: false
      clock-names: false

-     reg-names:
-       minItems: 1
-       items:
-         - const: kgsl_3d0_reg_memory
-         - const: cx_mem
-         - const: cx_dbgc
-
  examples:
    - |
+494
Documentation/devicetree/bindings/display/msm/qcom,eliza-mdss.yaml
# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/msm/qcom,eliza-mdss.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Qualcomm Eliza SoC Display MDSS

maintainers:
  - Krzysztof Kozlowski <krzk@kernel.org>

description:
  Eliza SoC Mobile Display Subsystem (MDSS) encapsulates sub-blocks like DPU
  display controller, DSI and DP interfaces etc.

$ref: /schemas/display/msm/mdss-common.yaml#

properties:
  compatible:
    const: qcom,eliza-mdss

  clocks:
    items:
      - description: Display AHB
      - description: Display hf AXI
      - description: Display core

  iommus:
    maxItems: 1

  interconnects:
    items:
      - description: Interconnect path from mdp0 port to the data bus
      - description: Interconnect path from CPU to the reg bus

  interconnect-names:
    items:
      - const: mdp0-mem
      - const: cpu-cfg

patternProperties:
  "^display-controller@[0-9a-f]+$":
    type: object
    additionalProperties: true
    properties:
      compatible:
        contains:
          const: qcom,eliza-dpu

  "^displayport-controller@[0-9a-f]+$":
    type: object
    additionalProperties: true
    properties:
      compatible:
        contains:
          const: qcom,eliza-dp

  "^dsi@[0-9a-f]+$":
    type: object
    additionalProperties: true
    properties:
      compatible:
        contains:
          const: qcom,eliza-dsi-ctrl

  "^phy@[0-9a-f]+$":
    type: object
    additionalProperties: true
    properties:
      compatible:
        contains:
          const: qcom,eliza-dsi-phy-4nm

required:
  - compatible

unevaluatedProperties: false

examples:
  - |
    #include <dt-bindings/clock/qcom,dsi-phy-28nm.h>
    #include <dt-bindings/clock/qcom,rpmh.h>
    #include <dt-bindings/interconnect/qcom,icc.h>
    #include <dt-bindings/interrupt-controller/arm-gic.h>
    #include <dt-bindings/phy/phy-qcom-qmp.h>
    #include <dt-bindings/power/qcom,rpmhpd.h>

    display-subsystem@ae00000 {
        compatible = "qcom,eliza-mdss";
        reg = <0x0ae00000 0x1000>;
        reg-names = "mdss";
        ranges;

        interrupts = <GIC_SPI 83 IRQ_TYPE_LEVEL_HIGH>;

        clocks = <&disp_cc_mdss_ahb_clk>,
                 <&gcc_disp_hf_axi_clk>,
                 <&disp_cc_mdss_mdp_clk>;

        resets = <&disp_cc_mdss_core_bcr>;

        interconnects = <&mmss_noc_master_mdp QCOM_ICC_TAG_ALWAYS
                         &mc_virt_slave_ebi1 QCOM_ICC_TAG_ALWAYS>,
                        <&gem_noc_master_appss_proc QCOM_ICC_TAG_ACTIVE_ONLY
                         &config_noc_slave_display_cfg QCOM_ICC_TAG_ACTIVE_ONLY>;
        interconnect-names = "mdp0-mem",
                             "cpu-cfg";

        power-domains = <&mdss_gdsc>;

        iommus = <&apps_smmu 0x800 0x2>;

        interrupt-controller;
        #interrupt-cells = <1>;

        #address-cells = <1>;
        #size-cells = <1>;

        mdss_mdp: display-controller@ae01000 {
            compatible = "qcom,eliza-dpu";
            reg = <0x0ae01000 0x93000>,
                  <0x0aeb0000 0x2008>;
            reg-names = "mdp",
                        "vbif";

            interrupts-extended = <&mdss 0>;

            clocks = <&gcc_disp_hf_axi_clk>,
                     <&disp_cc_mdss_ahb_clk>,
                     <&disp_cc_mdss_mdp_lut_clk>,
                     <&disp_cc_mdss_mdp_clk>,
                     <&disp_cc_mdss_vsync_clk>;
            clock-names = "nrt_bus",
                          "iface",
                          "lut",
                          "core",
                          "vsync";

            assigned-clocks = <&disp_cc_mdss_vsync_clk>;
            assigned-clock-rates = <19200000>;

            operating-points-v2 = <&mdp_opp_table>;

            power-domains = <&rpmhpd RPMHPD_MMCX>;

            ports {
                #address-cells = <1>;
                #size-cells = <0>;

                port@0 {
                    reg = <0>;

                    dpu_intf1_out: endpoint {
                        remote-endpoint = <&mdss_dsi0_in>;
                    };
                };

                port@1 {
                    reg = <1>;

                    dpu_intf2_out: endpoint {
                        remote-endpoint = <&mdss_dsi1_in>;
                    };
                };

                port@2 {
                    reg = <2>;

                    dpu_intf0_out: endpoint {
                        remote-endpoint = <&mdss_dp0_in>;
                    };
                };
            };

            mdp_opp_table: opp-table {
                compatible = "operating-points-v2";

                opp-150000000 {
                    opp-hz = /bits/ 64 <150000000>;
                    required-opps = <&rpmhpd_opp_low_svs_d1>;
                };

                opp-207000000 {
                    opp-hz = /bits/ 64 <207000000>;
                    required-opps = <&rpmhpd_opp_low_svs>;
                };

                opp-342000000 {
                    opp-hz = /bits/ 64 <342000000>;
                    required-opps = <&rpmhpd_opp_svs>;
                };

                opp-417000000 {
                    opp-hz = /bits/ 64 <417000000>;
                    required-opps = <&rpmhpd_opp_svs_l1>;
                };

                opp-532000000 {
                    opp-hz = /bits/ 64 <532000000>;
                    required-opps = <&rpmhpd_opp_nom>;
                };

                opp-600000000 {
                    opp-hz = /bits/ 64 <600000000>;
                    required-opps = <&rpmhpd_opp_nom_l1>;
                };

                opp-660000000 {
                    opp-hz = /bits/ 64 <660000000>;
                    required-opps = <&rpmhpd_opp_turbo>;
                };
            };
        };

        dsi@ae94000 {
            compatible = "qcom,eliza-dsi-ctrl", "qcom,sm8750-dsi-ctrl", "qcom,mdss-dsi-ctrl";
            reg = <0x0ae94000 0x400>;
            reg-names = "dsi_ctrl";

            interrupts-extended = <&mdss 4>;

            clocks = <&disp_cc_mdss_byte0_clk>,
                     <&disp_cc_mdss_byte0_intf_clk>,
                     <&disp_cc_mdss_pclk0_clk>,
                     <&disp_cc_mdss_esc0_clk>,
                     <&disp_cc_mdss_ahb_clk>,
                     <&gcc_disp_hf_axi_clk>,
                     <&mdss_dsi0_phy DSI_PIXEL_PLL_CLK>,
                     <&mdss_dsi0_phy DSI_BYTE_PLL_CLK>,
                     <&disp_cc_esync0_clk>,
                     <&disp_cc_osc_clk>,
                     <&disp_cc_mdss_byte0_clk_src>,
                     <&disp_cc_mdss_pclk0_clk_src>;
            clock-names = "byte",
                          "byte_intf",
                          "pixel",
                          "core",
                          "iface",
                          "bus",
                          "dsi_pll_pixel",
                          "dsi_pll_byte",
                          "esync",
                          "osc",
                          "byte_src",
                          "pixel_src";

            operating-points-v2 = <&mdss_dsi_opp_table>;

            power-domains = <&rpmhpd RPMHPD_MMCX>;

            phys = <&mdss_dsi0_phy>;
            phy-names = "dsi";

            #address-cells = <1>;
            #size-cells = <0>;

            ports {
                #address-cells = <1>;
                #size-cells = <0>;

                port@0 {
                    reg = <0>;

                    mdss_dsi0_in: endpoint {
                        remote-endpoint = <&dpu_intf1_out>;
                    };
                };

                port@1 {
                    reg = <1>;

                    mdss_dsi0_out: endpoint {
                        remote-endpoint = <&panel0_in>;
                        data-lanes = <0 1 2 3>;
                    };
                };
            };

            mdss_dsi_opp_table: opp-table {
                compatible = "operating-points-v2";

                opp-140630000 {
                    opp-hz = /bits/ 64 <140630000>;
                    required-opps = <&rpmhpd_opp_low_svs_d1>;
                };

                opp-187500000 {
                    opp-hz = /bits/ 64 <187500000>;
                    required-opps = <&rpmhpd_opp_low_svs>;
                };

                opp-300000000 {
                    opp-hz = /bits/ 64 <300000000>;
                    required-opps = <&rpmhpd_opp_svs>;
                };

                opp-358000000 {
                    opp-hz = /bits/ 64 <358000000>;
                    required-opps = <&rpmhpd_opp_svs_l1>;
                };
            };
        };

        mdss_dsi0_phy: phy@ae95000 {
            compatible = "qcom,eliza-dsi-phy-4nm", "qcom,sm8650-dsi-phy-4nm";
            reg = <0x0ae95000 0x200>,
                  <0x0ae95200 0x280>,
                  <0x0ae95500 0x400>;
            reg-names = "dsi_phy",
                        "dsi_phy_lane",
                        "dsi_pll";

            clocks = <&disp_cc_mdss_ahb_clk>,
                     <&bi_tcxo_div2>;
            clock-names = "iface",
                          "ref";

            #clock-cells = <1>;
            #phy-cells = <0>;

            vdds-supply = <&vreg_l2b>;
        };

        dsi@ae96000 {
            compatible = "qcom,eliza-dsi-ctrl", "qcom,sm8750-dsi-ctrl", "qcom,mdss-dsi-ctrl";
            reg = <0x0ae96000 0x400>;
            reg-names = "dsi_ctrl";

            interrupts-extended = <&mdss 5>;

            clocks = <&disp_cc_mdss_byte1_clk>,
                     <&disp_cc_mdss_byte1_intf_clk>,
                     <&disp_cc_mdss_pclk1_clk>,
                     <&disp_cc_mdss_esc1_clk>,
                     <&disp_cc_mdss_ahb_clk>,
                     <&gcc_disp_hf_axi_clk>,
                     <&mdss_dsi1_phy DSI_PIXEL_PLL_CLK>,
                     <&mdss_dsi1_phy DSI_BYTE_PLL_CLK>,
                     <&disp_cc_esync1_clk>,
                     <&disp_cc_osc_clk>,
                     <&disp_cc_mdss_byte1_clk_src>,
                     <&disp_cc_mdss_pclk1_clk_src>;
            clock-names = "byte",
                          "byte_intf",
                          "pixel",
                          "core",
                          "iface",
                          "bus",
                          "dsi_pll_pixel",
                          "dsi_pll_byte",
                          "esync",
                          "osc",
                          "byte_src",
                          "pixel_src";

            operating-points-v2 = <&mdss_dsi_opp_table>;

            power-domains = <&rpmhpd RPMHPD_MMCX>;

            phys = <&mdss_dsi1_phy>;
            phy-names = "dsi";

            vdda-supply = <&vreg_l4b>;

            ports {
                #address-cells = <1>;
                #size-cells = <0>;

                port@0 {
                    reg = <0>;

                    mdss_dsi1_in: endpoint {
                        remote-endpoint = <&dpu_intf2_out>;
                    };
                };

                port@1 {
                    reg = <1>;

                    mdss_dsi1_out: endpoint {
                        remote-endpoint = <&panel1_in>;
                        data-lanes = <0 1 2 3>;
                    };
                };
            };
        };

        mdss_dsi1_phy: phy@ae97000 {
            compatible = "qcom,eliza-dsi-phy-4nm", "qcom,sm8650-dsi-phy-4nm";
            reg = <0x0ae97000 0x200>,
                  <0x0ae97200 0x280>,
                  <0x0ae97500 0x400>;
            reg-names = "dsi_phy",
                        "dsi_phy_lane",
                        "dsi_pll";

            clocks = <&disp_cc_mdss_ahb_clk>,
                     <&rpmhcc RPMH_CXO_CLK>;
            clock-names = "iface",
                          "ref";

            #clock-cells = <1>;
            #phy-cells = <0>;

            vdds-supply = <&vreg_l2b>;
        };

        displayport-controller@af54000 {
            compatible = "qcom,eliza-dp", "qcom,sm8650-dp";
            reg = <0xaf54000 0x104>,
                  <0xaf54200 0xc0>,
                  <0xaf55000 0x770>,
                  <0xaf56000 0x9c>,
                  <0xaf57000 0x9c>;

            interrupts-extended = <&mdss 12>;

            clocks = <&disp_cc_mdss_ahb_clk>,
                     <&disp_cc_mdss_dptx0_aux_clk>,
                     <&disp_cc_mdss_dptx0_link_clk>,
                     <&disp_cc_mdss_dptx0_link_intf_clk>,
                     <&disp_cc_mdss_dptx0_pixel0_clk>,
                     <&disp_cc_mdss_dptx0_pixel1_clk>;
            clock-names = "core_iface",
                          "core_aux",
                          "ctrl_link",
                          "ctrl_link_iface",
                          "stream_pixel",
                          "stream_1_pixel";

            assigned-clocks = <&disp_cc_mdss_dptx0_link_clk_src>,
                              <&disp_cc_mdss_dptx0_pixel0_clk_src>,
                              <&disp_cc_mdss_dptx0_pixel1_clk_src>;
            assigned-clock-parents = <&usb_dp_qmpphy QMP_USB43DP_DP_LINK_CLK>,
                                     <&usb_dp_qmpphy QMP_USB43DP_DP_VCO_DIV_CLK>,
                                     <&usb_dp_qmpphy QMP_USB43DP_DP_VCO_DIV_CLK>;

            operating-points-v2 = <&dp_opp_table>;

            power-domains = <&rpmhpd RPMHPD_MMCX>;

            phys = <&usb_dp_qmpphy QMP_USB43DP_DP_PHY>;
            phy-names = "dp";

            #sound-dai-cells = <0>;

            dp_opp_table: opp-table {
                compatible = "operating-points-v2";

                opp-192000000 {
                    opp-hz = /bits/ 64 <192000000>;
                    required-opps = <&rpmhpd_opp_low_svs_d1>;
                };

                opp-270000000 {
                    opp-hz = /bits/ 64 <270000000>;
                    required-opps = <&rpmhpd_opp_low_svs>;
                };

                opp-540000000 {
                    opp-hz = /bits/ 64 <540000000>;
                    required-opps = <&rpmhpd_opp_svs_l1>;
                };

                opp-810000000 {
                    opp-hz = /bits/ 64 <810000000>;
                    required-opps = <&rpmhpd_opp_nom>;
                };
            };

            ports {
                #address-cells = <1>;
                #size-cells = <0>;

                port@0 {
                    reg = <0>;

                    mdss_dp0_in: endpoint {
                        remote-endpoint = <&dpu_intf0_out>;
                    };
                };

                port@1 {
                    reg = <1>;

                    mdss_dp0_out: endpoint {
                        data-lanes = <0 1 2 3>;
                        remote-endpoint = <&usb_dp_qmpphy_dp_in>;
                        link-frequencies = /bits/ 64 <1620000000 2700000000 5400000000 8100000000>;
                    };
                };
            };
        };
    };
+30
Documentation/devicetree/bindings/display/msm/qcom,sc8280xp-mdss.yaml
···
            - qcom,sc8280xp-dp
            - qcom,sc8280xp-edp

+   "^dsi@[0-9a-f]+$":
+     type: object
+     additionalProperties: true
+     properties:
+       compatible:
+         contains:
+           const: qcom,sc8280xp-dsi-ctrl
+
+   "^phy@[0-9a-f]+$":
+     type: object
+     additionalProperties: true
+     properties:
+       compatible:
+         contains:
+           const: qcom,sc8280xp-dsi-phy-5nm
+
  unevaluatedProperties: false

  examples:
···
              port@0 {
                  reg = <0>;
                  endpoint {
                      remote-endpoint = <&mdss0_dp0_in>;
+                 };
+             };
+
+             port@1 {
+                 reg = <1>;
+                 dpu_intf1_out: endpoint {
+                     remote-endpoint = <&mdss0_dsi0_in>;
+                 };
+             };
+
+             port@2 {
+                 reg = <2>;
+                 dpu_intf2_out: endpoint {
+                     remote-endpoint = <&mdss0_dsi1_in>;
                  };
              };
+1
Documentation/devicetree/bindings/display/msm/qcom,sm8650-dpu.yaml
···
    compatible:
      oneOf:
        - enum:
+           - qcom,eliza-dpu
            - qcom,glymur-dpu
            - qcom,kaanapali-dpu
            - qcom,sa8775p-dpu
+1
MAINTAINERS
···
  S:	Supported
  T:	git https://gitlab.freedesktop.org/drm/misc/kernel.git
  F:	Documentation/devicetree/bindings/gpu/arm,mali-valhall-csf.yaml
+ F:	drivers/gpu/drm/ci/xfails/panthor*
  F:	drivers/gpu/drm/panthor/
  F:	include/uapi/drm/panthor_drm.h
+6
drivers/gpu/drm/ci/arm64.config
···
  CONFIG_SC_GPUCC_7180=y
  CONFIG_SM_GPUCC_8350=y
  CONFIG_QCOM_SPMI_ADC5=y
+ CONFIG_QCOM_SPMI_VADC=y
  CONFIG_DRM_PARADE_PS8640=y
  CONFIG_DRM_LONTIUM_LT9611UXC=y
  CONFIG_PHY_QCOM_USB_HS=y
···
  CONFIG_TEGRA_SOCTHERM=y
  CONFIG_DRM_TEGRA_DEBUG=y
  CONFIG_PWM_TEGRA=y
+
+ # For Rockchip rk3588
+ CONFIG_DRM_PANTHOR=m
+ CONFIG_PHY_ROCKCHIP_NANENG_COMBO_PHY=y
+ CONFIG_PHY_ROCKCHIP_SAMSUNG_HDPTX=y
+5 -8
drivers/gpu/drm/ci/build.sh
···
  set -ex

- # Clean up stale rebases that GitLab might not have removed when reusing a checkout dir
- rm -rf .git/rebase-apply
-
  . .gitlab-ci/container/container_pre_build.sh

  # libssl-dev was uninstalled because it was considered an ephemeral package
···
      GCC_ARCH="aarch64-linux-gnu"
      DEBIAN_ARCH="arm64"
      DEVICE_TREES="arch/arm64/boot/dts/rockchip/rk3399-gru-kevin.dtb"
+     DEVICE_TREES+=" arch/arm64/boot/dts/rockchip/rk3588-rock-5b.dtb"
      DEVICE_TREES+=" arch/arm64/boot/dts/amlogic/meson-gxl-s805x-libretech-ac.dtb"
      DEVICE_TREES+=" arch/arm64/boot/dts/allwinner/sun50i-h6-pine-h64.dtb"
      DEVICE_TREES+=" arch/arm64/boot/dts/amlogic/meson-gxm-khadas-vim2.dtb"
···
  git config --global user.email "fdo@example.com"
  git config --global user.name "freedesktop.org CI"
- git config --global pull.rebase true

  # cleanup git state on the worker
- rm -rf .git/rebase-merge
+ rm -rf .git/rebase-merge .git/rebase-apply

  # Try to merge fixes from target repo
  if [ "$(git ls-remote --exit-code --heads ${UPSTREAM_REPO} ${TARGET_BRANCH}-external-fixes)" ]; then
-     git pull ${UPSTREAM_REPO} ${TARGET_BRANCH}-external-fixes
+     git pull --no-rebase ${UPSTREAM_REPO} ${TARGET_BRANCH}-external-fixes
  fi

  # Try to merge fixes from local repo if this isn't a merge request
  # otherwise try merging the fixes from the merge target
  if [ -z "$CI_MERGE_REQUEST_PROJECT_PATH" ]; then
      if [ "$(git ls-remote --exit-code --heads origin ${TARGET_BRANCH}-external-fixes)" ]; then
-         git pull origin ${TARGET_BRANCH}-external-fixes
+         git pull --no-rebase origin ${TARGET_BRANCH}-external-fixes
      fi
  else
      if [ "$(git ls-remote --exit-code --heads ${CI_MERGE_REQUEST_PROJECT_URL} ${CI_MERGE_REQUEST_TARGET_BRANCH_NAME}-external-fixes)" ]; then
-         git pull --no-rebase ${CI_MERGE_REQUEST_PROJECT_URL} ${CI_MERGE_REQUEST_TARGET_BRANCH_NAME}-external-fixes
      fi
  fi
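The build.sh change drops the global `pull.rebase true` and passes `--no-rebase` explicitly, so merging the external-fixes branch records a merge commit instead of rewriting local history. A minimal sketch of the difference, using a hypothetical throwaway repository (not part of the CI scripts):

```shell
#!/bin/sh
# Sketch only: shows that `git pull --no-rebase` overrides a pull.rebase
# config and produces a merge commit when branches have diverged.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b main repo          # requires git >= 2.28 for -b
cd repo
git config user.email "ci@example.com"
git config user.name "ci"
git commit -q --allow-empty -m base
git branch fixes
git commit -q --allow-empty -m local-work     # main diverges
git checkout -q fixes
git commit -q --allow-empty -m external-fix   # fixes diverges
git checkout -q main
# Even with pull.rebase=true configured, --no-rebase forces a merge
git -c pull.rebase=true pull -q --no-rebase . fixes
git log --merges --oneline        # exactly one merge commit
```

With a rebase-style pull, `local-work` would have been replayed on top of `external-fix` and its commit hash rewritten; the merge keeps both histories intact, which matters when the checkout is reused across CI jobs.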
+22 -18
drivers/gpu/drm/ci/build.yml
···
  .build:
    extends:
-     - .container+build-rules
+     - .build-rules
    stage: build-only
    artifacts:
      paths:
···
    rules:
      - when: never

+ debian-x86_64-msan:
+   rules:
+     - when: never
+
  debian-arm64:
    rules:
      - when: never
···
    rules:
      - when: never

- debian-build-testing:
+ debian-build-x86_64:
    rules:
      - when: never
···
    rules:
      - when: never

+ debian-riscv64:
+   rules:
+     - when: never
+
  debian-s390x:
-   rules:
-     - when: never
-
- debian-testing:
-   rules:
-     - when: never
-
- debian-testing-asan:
-   rules:
-     - when: never
-
- debian-testing-msan:
-   rules:
-     - when: never
-
- debian-testing-ubsan:
    rules:
      - when: never
···
      - when: never

  debian-x86_32:
+   rules:
+     - when: never
+
+ debian-x86_64:
+   rules:
+     - when: never
+
+ debian-x86_64-asan:
+   rules:
+     - when: never
+
+ debian-x86_64-ubsan:
    rules:
      - when: never
+24 -8
drivers/gpu/drm/ci/container.yml
···
  debian/x86_64_build-base:
    variables:
-     EXTRA_LOCAL_PACKAGES: "libcairo-dev libdw-dev libjson-c-dev libkmod2 libkmod-dev libpciaccess-dev libproc2-dev libudev-dev libunwind-dev python3-docutils bc python3-ply libssl-dev bc"
-
- debian/x86_64_test-gl:
-   variables:
-     EXTRA_LOCAL_PACKAGES: "jq libasound2 libcairo2 libdw1 libglib2.0-0 libjson-c5 libkmod-dev libkmod2 libgles2 libproc2-dev"
+     EXTRA_LOCAL_PACKAGES: "libcairo-dev libdw-dev libjson-c-dev libkmod-dev libpciaccess-dev libproc2-dev libudev-dev libunwind-dev python3-docutils bc python3-ply libssl-dev bc"

  debian/arm64_build:
    variables:
-     EXTRA_LOCAL_PACKAGES: "libcairo-dev libdw-dev libjson-c-dev libproc2-dev libkmod2 libkmod-dev libpciaccess-dev libudev-dev libunwind-dev python3-docutils libssl-dev crossbuild-essential-armhf libkmod-dev:armhf libproc2-dev:armhf libunwind-dev:armhf libdw-dev:armhf libpixman-1-dev:armhf libcairo-dev:armhf libudev-dev:armhf libjson-c-dev:armhf"
+     EXTRA_LOCAL_PACKAGES: "libcairo-dev libdw-dev libjson-c-dev libproc2-dev libkmod-dev libpciaccess-dev libudev-dev libunwind-dev python3-docutils libssl-dev crossbuild-essential-armhf libkmod-dev:armhf libproc2-dev:armhf libunwind-dev:armhf libdw-dev:armhf libpixman-1-dev:armhf libcairo-dev:armhf libudev-dev:armhf libjson-c-dev:armhf"

- .kernel+rootfs:
+ debian/x86_64_test-gl:
    variables:
-     EXTRA_LOCAL_PACKAGES: "jq libasound2 libcairo2 libdw1 libglib2.0-0 libjson-c5"
+     EXTRA_LOCAL_PACKAGES: "jq libasound2t64 libcairo2 libdw1t64 libglib2.0-0t64 libjson-c5 libkmod2 libgles2 libdrm-nouveau2 libdrm-amdgpu1"
+
+ debian/arm64_test-gl:
+   variables:
+     EXTRA_LOCAL_PACKAGES: "jq libasound2t64 libcairo2 libdw1t64 libglib2.0-0t64 libjson-c5 libkmod2 libgles2 libdrm-nouveau2 libdrm-amdgpu1"
+
+ debian/arm32_test-gl:
+   variables:
+     EXTRA_LOCAL_PACKAGES: "jq libasound2t64 libcairo2 libdw1t64 libglib2.0-0t64 libjson-c5 libkmod2 libgles2 libdrm-nouveau2 libdrm-amdgpu1 libunwind8"

  # Disable container jobs that we won't use
+ alpine/x86_64_build:
+   rules:
+     - when: never
+
  debian/arm64_test-vk:
    rules:
      - when: never
···
    rules:
      - when: never

+ debian/baremetal_arm64_test-gl:
+   rules:
+     - when: never
+
  debian/baremetal_arm64_test-vk:
    rules:
      - when: never

  debian/ppc64el_build:
+   rules:
+     - when: never
+
+ debian/riscv64_build:
    rules:
      - when: never
+77 -19
drivers/gpu/drm/ci/gitlab-ci.yml
···
  variables:
    DRM_CI_PROJECT_PATH: &drm-ci-project-path mesa/mesa
-   DRM_CI_COMMIT_SHA: &drm-ci-commit-sha 02337aec715c25dae7ff2479d986f831c77fe536
+   DRM_CI_COMMIT_SHA: &drm-ci-commit-sha 25881c701a56233dd8fc7f92db6884a73949d63d

    UPSTREAM_REPO: https://gitlab.freedesktop.org/drm/kernel.git
    TARGET_BRANCH: drm-next
···
    DEQP_RUNNER_GIT_TAG: v0.20.0

    FDO_UPSTREAM_REPO: helen.fornazier/linux # The repo where the git-archive daily runs
-   MESA_TEMPLATES_COMMIT: &ci-templates-commit c6aeb16f86e32525fa630fb99c66c4f3e62fc3cb
+   MESA_TEMPLATES_COMMIT: &ci-templates-commit aec7a6ce7bb38902c70641526f6611e27141784a
    DRM_CI_PROJECT_URL: https://gitlab.freedesktop.org/${DRM_CI_PROJECT_PATH}
    CI_PRE_CLONE_SCRIPT: |-
      set -o xtrace
···
    S3_GITCACHE_BUCKET: git-cache
    # Bucket for the pipeline artifacts pushed to S3
    S3_ARTIFACTS_BUCKET: artifacts
+   # Base path used for various artifacts
+   S3_BASE_PATH: "${S3_HOST}/${S3_KERNEL_BUCKET}"
    # per-pipeline artifact storage on MinIO
    PIPELINE_ARTIFACTS_BASE: ${S3_HOST}/${S3_ARTIFACTS_BUCKET}/${CI_PROJECT_PATH}/${CI_PIPELINE_ID}
    # per-job artifact storage on MinIO
···
    ARTIFACTS_BASE_URL: https://${CI_PROJECT_ROOT_NAMESPACE}.${CI_PAGES_DOMAIN}/-/${CI_PROJECT_NAME}/-/jobs/${CI_JOB_ID}/artifacts
    # Python scripts for structured logger
    PYTHONPATH: "$PYTHONPATH:$CI_PROJECT_DIR/install"
+   # Mesa-specific variables that shouldn't be forwarded to DUTs and crosvm
+   CI_EXCLUDE_ENV_VAR_REGEX: 'SCRIPTS_DIR|RESULTS_DIR'


  default:
···
  - project: *drm-ci-project-path
    ref: *drm-ci-commit-sha
    file:
+     - '/.gitlab-ci/bare-metal/gitlab-ci.yml'
      - '/.gitlab-ci/build/gitlab-ci.yml'
      - '/.gitlab-ci/container/gitlab-ci.yml'
      - '/.gitlab-ci/farm-rules.yml'
-     - '/.gitlab-ci/lava/lava-gitlab-ci.yml'
+     - '/.gitlab-ci/lava/gitlab-ci.yml'
      - '/.gitlab-ci/test-source-dep.yml'
      - '/.gitlab-ci/test/gitlab-ci.yml'
      - '/src/amd/ci/gitlab-ci-inc.yml'
···
    - meson
    - msm
    - panfrost
+   - panthor
    - powervr
    - rockchip
    - software-driver
···
    - if: &is-merge-attempt $GITLAB_USER_LOGIN == "marge-bot" && $CI_PIPELINE_SOURCE == "merge_request_event"
    # post-merge pipeline
    - if: &is-post-merge $GITLAB_USER_LOGIN == "marge-bot" && $CI_PIPELINE_SOURCE == "push"
-   # Pre-merge pipeline
-   - if: &is-pre-merge $CI_PIPELINE_SOURCE == "merge_request_event"
+   # Pre-merge pipeline (because merge pipelines are already caught above)
+   - if: &is-merge-request $CI_PIPELINE_SOURCE == "merge_request_event"
    # Push to a branch on a fork
-   - if: &is-fork-push $CI_PIPELINE_SOURCE == "push"
+   - if: &is-push-to-fork $CI_PIPELINE_SOURCE == "push"
    # nightly pipeline
    - if: &is-scheduled-pipeline $CI_PIPELINE_SOURCE == "schedule"
    # pipeline for direct pushes that bypassed the CI
···
  # Rules applied to every job in the pipeline
  .common-rules:
    rules:
-     - if: *is-fork-push
+     - if: *is-push-to-fork
        when: manual
-

  .never-post-merge-rules:
    rules:
···
      when: never


- .container+build-rules:
+ # Note: make sure the branches in this list are the same as in
+ # `.build-only-delayed-rules` below.
+ .container-rules:
+   rules:
+     - !reference [.common-rules, rules]
+     # Run when re-enabling a disabled farm, but not when disabling it
+     - !reference [.disable-farm-mr-rules, rules]
+     # Never run immediately after merging, as we just ran everything
+     - !reference [.never-post-merge-rules, rules]
+     # Only rebuild containers in merge pipelines if any tags have been
+     # changed, else we'll just use the already-built containers
+     - if: *is-merge-attempt
+       changes: &image_tags_path
+         - drivers/gpu/drm/ci/image-tags.yml
+       when: on_success
+     # Skip everything for pre-merge and merge pipelines which don't change
+     # anything in the build; we only do this for marge-bot and not user
+     # pipelines in a MR, because we might still need to run it to copy the
+     # container into the user's namespace.
+     - if: *is-merge-attempt
+       when: never
+     # Any MR pipeline which changes image-tags.yml needs to be able to
+     # rebuild the containers
+     - if: *is-merge-request
+       changes: *image_tags_path
+       when: manual
+     # ... however for MRs running inside the user namespace, we may need to
+     # run these jobs to copy the container images from upstream
+     - if: *is-merge-request
+       when: manual
+     # Build everything after someone bypassed the CI
+     - if: *is-direct-push
+       when: manual
+     # Scheduled pipelines reuse already-built containers
+     - if: *is-scheduled-pipeline
+       when: never
+     # Allow building everything in fork pipelines, but build nothing unless
+     # manually triggered
+     - when: manual
+
+
+ # Note: make sure the branches in this list are the same as in
+ # `.build-only-delayed-rules` below.
+ .build-rules:
    rules:
      - !reference [.common-rules, rules]
      # Run when re-enabling a disabled farm, but not when disabling it
···
      - if: *is-merge-attempt
        when: on_success
      # Same as above, but for pre-merge pipelines
-     - if: *is-pre-merge
+     - if: *is-merge-request
        when: manual
      # Build everything after someone bypassed the CI
      - if: *is-direct-push
···
  # Repeat of the above but with `when: on_success` replaced with
  # `when: delayed` + `start_in:`, for build-only jobs.
  # Note: make sure the branches in this list are the same as in
- # `.container+build-rules` above.
+ # `.build-rules` above.
  .build-only-delayed-rules:
    rules:
      - !reference [.common-rules, rules]
···
        when: delayed
        start_in: &build-delay 5 minutes
      # Same as above, but for pre-merge pipelines
-     - if: *is-pre-merge
+     - if: *is-merge-request
        when: manual
      # Build everything after someone bypassed the CI
      - if: *is-direct-push
···
    - artifacts
    - _build/meson-logs/*.txt
    - _build/meson-logs/strace
-
-
- python-artifacts:
-   variables:
-     GIT_DEPTH: 10


  # Git archive
···
    tags:
      - $FDO_RUNNER_JOB_PRIORITY_TAG_X86_64
    rules:
-     - if: *is-pre-merge
+     - if: *is-merge-request
        when: on_success
      - when: never
    variables:
···
    - |
      set -eu
      image_tags=(
-       ALPINE_X86_64_LAVA_SSH_TAG
        CONTAINER_TAG
        DEBIAN_BASE_TAG
        DEBIAN_BUILD_TAG
···
    - when: never

  test-docs:
+   rules:
+     - when: never
+
+ .ci-tron-x86_64-test-vk:
+   rules:
+     - when: never
+
+ .ci-tron-x86_64-test-gl-manual:
+   rules:
+     - when: never
+
+ .ci-tron-arm64-test-gl:
+   rules:
+     - when: never
+
+ .ci-tron-x86_64-test-gl:
rules: 391 365 - when: never
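The rule renames in this hunk (`is-pre-merge` → `is-merge-request`, `is-fork-push` → `is-push-to-fork`) all lean on YAML anchors: the condition string is defined once with `&name` and reused by alias elsewhere in the file. A minimal sketch of the pattern, with a hypothetical job name (not part of the patch):

```yaml
# Define the condition once, anchoring the scalar value ...
.pipeline-conditions:
  rules:
    - if: &is-merge-request $CI_PIPELINE_SOURCE == "merge_request_event"

# ... then reuse it by alias anywhere later in the same file.
example-build-job:
  rules:
    - if: *is-merge-request
      when: manual
```

Renaming the anchor therefore renames every use site at once, which is why the hunks below only touch `*is-merge-request` / `*is-push-to-fork` aliases.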
+4 -2
drivers/gpu/drm/ci/igt_runner.sh
··· 1 - #!/bin/sh 1 + #!/usr/bin/env bash 2 2 # SPDX-License-Identifier: MIT 3 + 4 + . "${SCRIPTS_DIR}/setup-test-env.sh" 3 5 4 6 set -ex 5 7 ··· 23 21 24 22 mkdir -p /lib/modules 25 23 case "$DRIVER_NAME" in 26 - amdgpu|vkms) 24 + amdgpu|vkms|panthor) 27 25 # Cannot use HWCI_KERNEL_MODULES as at that point we don't have the module in /lib 28 26 mv /install/modules/lib/modules/* /lib/modules/. || true 29 27 modprobe --first-time $DRIVER_NAME
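The `amdgpu|vkms|panthor)` change above extends a `case` pattern with `|` alternation, so panthor now takes the same module-install path as amdgpu and vkms. A minimal sketch of that dispatch, using a hypothetical helper name (the real script inlines this logic):

```bash
#!/usr/bin/env bash
# Sketch of the driver dispatch in igt_runner.sh: drivers whose test
# modules are shipped in /install/modules (amdgpu, vkms, panthor) need
# /lib/modules populated before modprobe; everything else does not.
needs_module_install() {
    case "$1" in
        amdgpu|vkms|panthor)
            echo yes
            ;;
        *)
            echo no
            ;;
    esac
}

needs_module_install panthor   # prints "yes"
needs_module_install msm       # prints "no"
```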
+13 -9
drivers/gpu/drm/ci/image-tags.yml
··· 1 1 variables: 2 - CONTAINER_TAG: "20250502-mesa-uprev" 3 - DEBIAN_BASE_TAG: "${CONTAINER_TAG}" 2 + CONTAINER_TAG: "20260108-mesa-igt" 3 + 4 + DEBIAN_BUILD_BASE_TAG: "${CONTAINER_TAG}" 4 5 DEBIAN_BUILD_TAG: "${CONTAINER_TAG}" 6 + DEBIAN_TEST_BASE_TAG: "${CONTAINER_TAG}" 5 7 6 8 DEBIAN_TEST_GL_TAG: "${CONTAINER_TAG}" 7 9 # default kernel for rootfs before injecting the current kernel tree 8 - KERNEL_TAG: "v6.14-mesa-0bdd" 10 + KERNEL_TAG: "v6.16-mesa-9d85" 9 11 KERNEL_REPO: "gfx-ci/linux" 10 - PKG_REPO_REV: "95bf62c" 11 - 12 - DEBIAN_PYUTILS_TAG: "${CONTAINER_TAG}" 12 + PKG_REPO_REV: "0d2527f6" 13 + FIRMWARE_TAG: "8fc31b97" 14 + FIRMWARE_REPO: "gfx-ci/firmware" 13 15 14 16 ALPINE_X86_64_BUILD_TAG: "${CONTAINER_TAG}" 15 - ALPINE_X86_64_LAVA_SSH_TAG: "${CONTAINER_TAG}" 16 17 17 - CONDITIONAL_BUILD_ANGLE_TAG: 384145a4023315dae658259bee07c43a 18 - CONDITIONAL_BUILD_PIGLIT_TAG: a19e424b8a3f020dbf1b9dd29f220a4f 18 + CONDITIONAL_BUILD_ANGLE_TAG: efd57e99d51361944f87b9466356b0ce 19 + CONDITIONAL_BUILD_CROSVM_TAG: 4079babd375b09761d59eacb25a0598a 20 + CONDITIONAL_BUILD_PIGLIT_TAG: 21ab2c66f54777163dd038dc4cfcfde6 21 + 22 + CROSVM_TAG: ${CONDITIONAL_BUILD_CROSVM_TAG}
+47 -54
drivers/gpu/drm/ci/lava-submit.sh
··· 3 3 # shellcheck disable=SC2086 # we want word splitting 4 4 # shellcheck disable=SC1091 # paths only become valid at runtime 5 5 6 - # If we run in the fork (not from mesa or Marge-bot), reuse mainline kernel and rootfs, if exist. 7 - _check_artifact_path() { 8 - _url="https://${1}/${2}" 9 - if curl -s -o /dev/null -I -L -f --retry 4 --retry-delay 15 "${_url}"; then 10 - echo -n "${_url}" 11 - fi 12 - } 6 + # shellcheck disable=SC1090 7 + source "${FDO_CI_BASH_HELPERS}" 13 8 14 - get_path_to_artifact() { 15 - _mainline_artifact="$(_check_artifact_path ${BASE_SYSTEM_MAINLINE_HOST_PATH} ${1})" 16 - if [ -n "${_mainline_artifact}" ]; then 17 - echo -n "${_mainline_artifact}" 18 - return 19 - fi 20 - _fork_artifact="$(_check_artifact_path ${BASE_SYSTEM_FORK_HOST_PATH} ${1})" 21 - if [ -n "${_fork_artifact}" ]; then 22 - echo -n "${_fork_artifact}" 23 - return 24 - fi 9 + fdo_log_section_start_collapsed prepare_rootfs "Preparing root filesystem" 10 + 11 + set -ex 12 + 13 + # If we run in the fork (not from mesa or Marge-bot), reuse mainline kernel and rootfs, if exist. 14 + ROOTFS_URL="$(fdo_find_s3_path "$LAVA_ROOTFS_PATH")" || 15 + { 25 16 set +x 26 - error "Sorry, I couldn't find a viable built path for ${1} in either mainline or a fork." >&2 17 + fdo_log_section_error "Sorry, I couldn't find a viable built path for ${LAVA_ROOTFS_PATH} in either mainline or a fork." >&2 27 18 echo "" >&2 28 19 echo "If you're working on CI, this probably means that you're missing a dependency:" >&2 29 20 echo "this job ran ahead of the job which was supposed to upload that artifact." >&2 ··· 26 35 exit 1 27 36 } 28 37 29 - . "${SCRIPTS_DIR}/setup-test-env.sh" 30 - 31 - section_start prepare_rootfs "Preparing root filesystem" 32 - 33 - set -ex 34 - 35 - ROOTFS_URL="$(get_path_to_artifact lava-rootfs.tar.zst)" 36 - [ $? 
!= 1 ] || exit 1 37 - 38 38 rm -rf results 39 - mkdir -p results/job-rootfs-overlay/ 39 + mkdir results 40 40 41 - artifacts/ci-common/export-gitlab-job-env-for-dut.sh \ 42 - > results/job-rootfs-overlay/set-job-env-vars.sh 43 - cp artifacts/ci-common/init-*.sh results/job-rootfs-overlay/ 44 - cp "$SCRIPTS_DIR"/setup-test-env.sh results/job-rootfs-overlay/ 41 + fdo_filter_env_vars > dut-env-vars.sh 42 + # Set SCRIPTS_DIR to point to the Mesa install we download for the DUT 43 + echo "export SCRIPTS_DIR='$CI_PROJECT_DIR/install'" >> dut-env-vars.sh 45 44 46 - tar zcf job-rootfs-overlay.tar.gz -C results/job-rootfs-overlay/ . 47 - ci-fairy s3cp --token-file "${S3_JWT_FILE}" job-rootfs-overlay.tar.gz "https://${JOB_ROOTFS_OVERLAY_PATH}" 45 + fdo_log_section_end prepare_rootfs 48 46 49 47 # Prepare env vars for upload. 50 - section_switch variables "Environment variables passed through to device:" 51 - cat results/job-rootfs-overlay/set-job-env-vars.sh 48 + fdo_log_section_start_collapsed variables "Environment variables passed through to device:" 49 + cat dut-env-vars.sh 50 + fdo_log_section_end variables 52 51 53 - section_switch lava_submit "Submitting job for scheduling" 52 + fdo_log_section_start_collapsed lava_submit "Submitting job for scheduling" 54 53 55 54 touch results/lava.log 56 55 tail -f results/lava.log & 57 56 # Ensure that we are printing the commands that are being executed, 58 57 # making it easier to debug the job in case it fails. 
59 58 set -x 60 - PYTHONPATH=artifacts/ artifacts/lava/lava_job_submitter.py \ 59 + 60 + # List of optional overlays 61 + LAVA_EXTRA_OVERLAYS=() 62 + if [ -n "${LAVA_FIRMWARE:-}" ]; then 63 + for fw in $LAVA_FIRMWARE; do 64 + LAVA_EXTRA_OVERLAYS+=( 65 + - append-overlay 66 + --name=linux-firmware 67 + --url="https://${S3_BASE_PATH}/${FIRMWARE_REPO}/${fw}-${FIRMWARE_TAG}.tar" 68 + --path="/" 69 + --format=tar 70 + ) 71 + done 72 + fi 73 + LAVA_EXTRA_OVERLAYS+=( 74 + - append-overlay \ 75 + --name=kernel-build \ 76 + --url="${FDO_HTTP_CACHE_URI:-}https://${PIPELINE_ARTIFACTS_BASE}/${DEBIAN_ARCH}/kernel-files.tar.zst" \ 77 + --compression=zstd \ 78 + --path="${CI_PROJECT_DIR}" \ 79 + --format=tar \ 80 + ) 81 + 82 + lava-job-submitter \ 61 83 --farm "${FARM}" \ 62 84 --device-type "${DEVICE_TYPE}" \ 63 85 --boot-method "${BOOT_METHOD}" \ ··· 79 75 --pipeline-info "$CI_JOB_NAME: $CI_PIPELINE_URL on $CI_COMMIT_REF_NAME ${CI_NODE_INDEX}/${CI_NODE_TOTAL}" \ 80 76 --rootfs-url "${ROOTFS_URL}" \ 81 77 --kernel-url-prefix "https://${PIPELINE_ARTIFACTS_BASE}/${DEBIAN_ARCH}" \ 82 - --kernel-external "${EXTERNAL_KERNEL_TAG}" \ 83 - --first-stage-init artifacts/ci-common/init-stage1.sh \ 84 78 --dtb-filename "${DTB}" \ 79 + --env-file dut-env-vars.sh \ 85 80 --jwt-file "${S3_JWT_FILE}" \ 86 81 --kernel-image-name "${KERNEL_IMAGE_NAME}" \ 87 82 --kernel-image-type "${KERNEL_IMAGE_TYPE}" \ ··· 89 86 --mesa-job-name "$CI_JOB_NAME" \ 90 87 --structured-log-file "results/lava_job_detail.json" \ 91 88 --ssh-client-image "${LAVA_SSH_CLIENT_IMAGE}" \ 89 + --project-dir "${CI_PROJECT_DIR}" \ 92 90 --project-name "${CI_PROJECT_NAME}" \ 93 - --starting-section "${CURRENT_SECTION}" \ 91 + --starting-section lava_submit \ 94 92 --job-submitted-at "${CI_JOB_STARTED_AT}" \ 95 - - append-overlay \ 96 - --name=kernel-build \ 97 - --url="${FDO_HTTP_CACHE_URI:-}https://${PIPELINE_ARTIFACTS_BASE}/${DEBIAN_ARCH}/kernel-files.tar.zst" \ 98 - --compression=zstd \ 99 - --path="${CI_PROJECT_DIR}" \ 100 - 
--format=tar \ 101 - - append-overlay \ 102 - --name=job-overlay \ 103 - --url="https://${JOB_ROOTFS_OVERLAY_PATH}" \ 104 - --compression=gz \ 105 - --path="/" \ 106 - --format=tar \ 93 + "${LAVA_EXTRA_OVERLAYS[@]}" \ 107 94 - submit \ 108 95 >> results/lava.log
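The `LAVA_EXTRA_OVERLAYS` rework above relies on a bash idiom: accumulate optional argument groups in an array, then expand it with `"${arr[@]}"` so every element stays a separate word and an empty array expands to nothing. A minimal sketch of the pattern, with illustrative firmware names and an `echo` standing in for the real submitter:

```bash
#!/usr/bin/env bash
# Sketch of the optional-overlay pattern from lava-submit.sh: each
# firmware bundle contributes its own "--overlay <tarball>" pair, and
# the final command expands the whole array in one go.
EXTRA_ARGS=()
for fw in qcom-lava arm; do
    EXTRA_ARGS+=( --overlay "firmware-${fw}.tar" )
done

# Expands to: submit --overlay firmware-qcom-lava.tar --overlay firmware-arm.tar
echo submit "${EXTRA_ARGS[@]}"
```

If no firmware is requested, the array stays empty and `"${EXTRA_ARGS[@]}"` vanishes from the command line, which is why the script can pass it unconditionally.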
+1
drivers/gpu/drm/ci/static-checks.yml
··· 1 1 check-patch: 2 + stage: static-checks 2 3 extends: 3 4 - .build 4 5 - .use-debian/x86_64_build
+65 -54
drivers/gpu/drm/ci/test.yml
··· 7 7 8 8 .lava-test: 9 9 extends: 10 - - .container+build-rules 10 + - .build-rules 11 11 - .allow_failure_lockdep 12 12 timeout: "1h30m" 13 13 rules: 14 14 - !reference [.scheduled_pipeline-rules, rules] 15 15 - !reference [.collabora-farm-rules, rules] 16 16 - when: on_success 17 + before_script: 18 + # lava-submit.sh is a part of the archive, unlike Mesa CI 19 + - eval "$S3_JWT_FILE_SCRIPT" 17 20 script: 18 21 # Note: Build dir (and thus install) may be dirty due to GIT_STRATEGY 19 22 - rm -rf install ··· 35 32 dependencies: 36 33 - testing:arm32 37 34 needs: 38 - - alpine/x86_64_lava_ssh_client 39 35 - debian/arm32_test-gl 40 - - python-artifacts 41 36 - testing:arm32 42 37 - igt:arm32 43 38 ··· 49 48 dependencies: 50 49 - testing:arm64 51 50 needs: 52 - - alpine/x86_64_lava_ssh_client 53 51 - debian/arm64_test-gl 54 - - python-artifacts 55 52 - testing:arm64 56 53 - igt:arm64 57 54 ··· 63 64 dependencies: 64 65 - testing:x86_64 65 66 needs: 66 - - alpine/x86_64_lava_ssh_client 67 67 - debian/x86_64_test-gl 68 - - python-artifacts 69 68 - testing:x86_64 70 69 - igt:x86_64 71 - 72 - .baremetal-igt-arm64: 73 - extends: 74 - - .baremetal-test-arm64-gl 75 - - .use-debian/baremetal_arm64_test-gl 76 - - .allow_failure_lockdep 77 - timeout: "1h30m" 78 - rules: 79 - - !reference [.scheduled_pipeline-rules, rules] 80 - - !reference [.google-freedreno-farm-rules, rules] 81 - - when: on_success 82 - variables: 83 - FDO_CI_CONCURRENT: 10 84 - HWCI_TEST_SCRIPT: "/install/igt_runner.sh" 85 - S3_ARTIFACT_NAME: "arm64/kernel-files" 86 - BM_KERNEL: https://${PIPELINE_ARTIFACTS_BASE}/arm64/Image.gz 87 - BM_CMDLINE: "ip=dhcp console=ttyMSM0,115200n8 $BM_KERNEL_EXTRA_ARGS root=/dev/nfs rw nfsrootdebug nfsroot=,tcp,nfsvers=4.2 init=/init $BM_KERNELARGS" 88 - FARM: google 89 - needs: 90 - - debian/baremetal_arm64_test-gl 91 - - job: testing:arm64 92 - artifacts: false 93 - - igt:arm64 94 - tags: 95 - - $RUNNER_TAG 96 70 97 71 .software-driver: 98 72 stage: software-driver ··· 82 
110 - !reference [default, before_script] 83 111 - rm -rf install 84 112 - tar -xf artifacts/install.tar 113 + - mkdir -p /kernel 85 114 script: 86 115 - ln -sf $CI_PROJECT_DIR/install /install 87 116 - mv install/bzImage /kernel/bzImage ··· 100 127 DRIVER_NAME: msm 101 128 BOOT_METHOD: depthcharge 102 129 KERNEL_IMAGE_TYPE: "" 130 + LAVA_FIRMWARE: qcom-lava 103 131 104 132 msm:sc7180-trogdor-lazor-limozeen: 105 133 extends: ··· 124 150 125 151 msm:apq8016: 126 152 extends: 127 - - .baremetal-igt-arm64 153 + - .lava-igt:arm64 128 154 stage: msm 155 + parallel: 3 129 156 variables: 130 - DEVICE_TYPE: apq8016-sbc-usb-host 157 + BOOT_METHOD: fastboot 158 + DEVICE_TYPE: dragonboard-410c 131 159 DRIVER_NAME: msm 132 - BM_DTB: https://${PIPELINE_ARTIFACTS_BASE}/arm64/${DEVICE_TYPE}.dtb 160 + DTB: apq8016-sbc-usb-host 161 + FARM: collabora 133 162 GPU_VERSION: apq8016 134 - # disabling unused clocks congests with the MDSS runtime PM trying to 135 - # disable those clocks and causes boot to fail. 136 - # Reproducer: DRM_MSM=y, DRM_I2C_ADV7511=m 137 - BM_KERNEL_EXTRA_ARGS: clk_ignore_unused 138 - RUNNER_TAG: google-freedreno-db410c 139 - script: 140 - - ./install/bare-metal/fastboot.sh || exit $? 163 + KERNEL_IMAGE_NAME: "Image.gz" 164 + KERNEL_IMAGE_TYPE: "" 165 + RUNNER_TAG: mesa-ci-x86-64-lava-dragonboard-410c 166 + LAVA_FIRMWARE: qcom-lava 141 167 142 168 msm:apq8096: 143 169 extends: 144 - - .baremetal-igt-arm64 170 + - .lava-igt:arm64 145 171 stage: msm 172 + parallel: 3 146 173 variables: 147 - DEVICE_TYPE: apq8096-db820c 174 + BOOT_METHOD: fastboot 175 + DEVICE_TYPE: dragonboard-820c 148 176 DRIVER_NAME: msm 149 - BM_KERNEL_EXTRA_ARGS: maxcpus=2 150 - BM_DTB: https://${PIPELINE_ARTIFACTS_BASE}/arm64/${DEVICE_TYPE}.dtb 177 + DTB: apq8096-db820c 178 + FARM: collabora 151 179 GPU_VERSION: apq8096 152 - RUNNER_TAG: google-freedreno-db820c 153 - script: 154 - - ./install/bare-metal/fastboot.sh || exit $? 
180 + KERNEL_IMAGE_NAME: "Image.gz" 181 + KERNEL_IMAGE_TYPE: "" 182 + RUNNER_TAG: mesa-ci-x86-64-lava-dragonboard-820c 183 + LAVA_FIRMWARE: qcom-lava 155 184 156 185 msm:sm8350-hdk: 157 186 extends: 158 187 - .lava-igt:arm64 159 188 stage: msm 160 - parallel: 4 189 + parallel: 2 161 190 variables: 162 191 BOOT_METHOD: fastboot 163 192 DEVICE_TYPE: sm8350-hdk ··· 171 194 KERNEL_IMAGE_NAME: "Image.gz" 172 195 KERNEL_IMAGE_TYPE: "" 173 196 RUNNER_TAG: mesa-ci-x86-64-lava-sm8350-hdk 197 + LAVA_FIRMWARE: qcom-lava 198 + LAVA_FASTBOOT_CMD: "set_active a" 174 199 175 200 .rockchip-device: 176 201 variables: 177 202 DTB: ${DEVICE_TYPE} 178 203 BOOT_METHOD: depthcharge 204 + LAVA_FIRMWARE: arm 179 205 180 206 .rockchip-display: 181 207 stage: rockchip ··· 206 226 KERNEL_IMAGE_TYPE: "" 207 227 RUNNER_TAG: mesa-ci-x86-64-lava-rk3399-gru-kevin 208 228 229 + .rk3588: 230 + extends: 231 + - .lava-igt:arm64 232 + - .rockchip-device 233 + parallel: 2 234 + variables: 235 + DEVICE_TYPE: rk3588-rock-5b 236 + GPU_VERSION: rk3588 237 + BOOT_METHOD: u-boot 238 + KERNEL_IMAGE_NAME: Image 239 + KERNEL_IMAGE_TYPE: "image" 240 + RUNNER_TAG: mesa-ci-x86-64-lava-rk3588-rock-5b 241 + 209 242 rockchip:rk3288: 210 243 extends: 211 244 - .rk3288 ··· 239 246 - .rk3399 240 247 - .panfrost-gpu 241 248 249 + rockchip:rk3588: 250 + extends: 251 + - .rk3588 252 + - .rockchip-display 253 + 254 + panthor:rk3588: 255 + extends: 256 + - .rk3588 257 + - .panthor-gpu 258 + 242 259 .i915: 243 260 extends: 244 261 - .lava-igt:x86_64 ··· 258 255 DTB: "" 259 256 BOOT_METHOD: depthcharge 260 257 KERNEL_IMAGE_TYPE: "" 258 + LAVA_FIRMWARE: i915 261 259 262 260 i915:apl: 263 261 extends: ··· 281 277 i915:amly: 282 278 extends: 283 279 - .i915 284 - parallel: 2 280 + parallel: 3 285 281 variables: 286 282 DEVICE_TYPE: asus-C433TA-AJ0005-rammus 287 283 GPU_VERSION: amly ··· 308 304 i915:cml: 309 305 extends: 310 306 - .i915 311 - parallel: 2 307 + parallel: 5 312 308 variables: 313 - DEVICE_TYPE: 
asus-C436FA-Flip-hatch 309 + DEVICE_TYPE: acer-chromebox-cxi4-puff 314 310 GPU_VERSION: cml 315 - RUNNER_TAG: mesa-ci-x86-64-lava-asus-C436FA-Flip-hatch 311 + RUNNER_TAG: mesa-ci-x86-64-lava-acer-chromebox-cxi4-puff 316 312 317 313 i915:tgl: 318 314 extends: ··· 341 337 DTB: "" 342 338 BOOT_METHOD: depthcharge 343 339 KERNEL_IMAGE_TYPE: "" 340 + LAVA_FIRMWARE: amdgpu-lava 344 341 345 342 amdgpu:stoney: 346 343 extends: ··· 360 355 DTB: ${DEVICE_TYPE} 361 356 BOOT_METHOD: depthcharge 362 357 KERNEL_IMAGE_TYPE: "" 358 + LAVA_FIRMWARE: arm 363 359 364 360 .mediatek-display: 365 361 stage: mediatek ··· 376 370 stage: panfrost 377 371 variables: 378 372 DRIVER_NAME: panfrost 373 + 374 + .panthor-gpu: 375 + stage: panthor 376 + variables: 377 + DRIVER_NAME: panthor 379 378 380 379 .mt8173: 381 380 extends:
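The converted msm and rockchip jobs above compose behaviour through multi-parent `extends`, which is what lets `msm:apq8016` drop its bare-metal boilerplate and inherit the LAVA plumbing instead. A minimal sketch of how GitLab CI merges the listed templates (job and variable names hypothetical):

```yaml
.lava-base:
  timeout: "1h30m"
  variables:
    BOOT_METHOD: fastboot

.qcom-device:
  variables:
    LAVA_FIRMWARE: qcom-lava

# Templates are merged in order, later entries and the job's own keys
# winning on conflicts; the `variables` maps are merged key-by-key.
example-job:
  extends:
    - .lava-base
    - .qcom-device
  variables:
    DEVICE_TYPE: dragonboard-410c
```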
+9 -4
drivers/gpu/drm/ci/xfails/amdgpu-stoney-fails.txt
··· 3 3 amdgpu/amd_abm@backlight_monotonic_abm,Fail 4 4 amdgpu/amd_abm@backlight_monotonic_basic,Fail 5 5 amdgpu/amd_abm@dpms_cycle,Fail 6 - amdgpu/amd_assr@assr-links,Fail 7 6 amdgpu/amd_assr@assr-links-dpms,Fail 8 - amdgpu/amd_mall@static-screen,Crash 7 + amdgpu/amd_assr@assr-links,Fail 8 + amdgpu/amd_basic@cs-gfx-with-IP-GFX,Fail 9 + amdgpu/amd_basic@cs-multi-fence-with-IP-GFX,Fail 9 10 amdgpu/amd_mode_switch@mode-switch-first-last-pipe-2,Crash 10 11 amdgpu/amd_plane@mpo-pan-nv12,Fail 11 12 amdgpu/amd_plane@mpo-pan-p010,Fail ··· 14 13 amdgpu/amd_plane@mpo-scale-nv12,Fail 15 14 amdgpu/amd_plane@mpo-scale-p010,Fail 16 15 amdgpu/amd_plane@mpo-scale-rgb,Crash 17 - amdgpu/amd_plane@mpo-swizzle-toggle,Fail 16 + amdgpu/amd_plane@mpo-swizzle-toggle,Crash 18 17 amdgpu/amd_uvd_dec@amdgpu_uvd_decode,Fail 18 + core_setmaster@master-drop-set-user,Fail 19 19 kms_addfb_basic@bad-pitch-65536,Fail 20 20 kms_addfb_basic@bo-too-small,Fail 21 21 kms_addfb_basic@too-high,Fail 22 + kms_async_flips@basic-modeset-with-all-modifiers-formats,Crash 22 23 kms_atomic_transition@plane-all-modeset-transition-internal-panels,Fail 23 24 kms_atomic_transition@plane-all-transition,Fail 24 25 kms_atomic_transition@plane-all-transition-nonblocking,Fail ··· 36 33 kms_cursor_edge_walk@64x64-left-edge,Fail 37 34 kms_flip@flip-vs-modeset-vs-hang,Fail 38 35 kms_flip@flip-vs-panning-vs-hang,Fail 36 + kms_invalid_mode@int-max-clock,Fail 37 + kms_invalid_mode@overflow-vrefresh,Fail 39 38 kms_lease@lease-uevent,Fail 40 - kms_plane@pixel-format,Fail 41 39 kms_plane_cursor@primary,Fail 40 + kms_plane@pixel-format,Fail 42 41 kms_rotation_crc@primary-rotation-180,Fail 43 42 perf@i915-ref-count,Fail
+7
drivers/gpu/drm/ci/xfails/amdgpu-stoney-flakes.txt
··· 32 32 # IGT Version: 1.29-g33adea9eb 33 33 # Linux Version: 6.13.0-rc2 34 34 kms_async_flips@crc-atomic 35 + 36 + # Board Name: hp-11A-G6-EE-grunt 37 + # Bug Report: https://gitlab.freedesktop.org/drm/amd/-/issues/4406 38 + # Failure Rate: 20 39 + # IGT Version: 2.1-g26ddb59c1 40 + # Linux Version: 6.16.0-rc2 41 + kms_async_flips@alternate-sync-async-flip
+1 -26
drivers/gpu/drm/ci/xfails/i915-amly-fails.txt
··· 1 - core_setmaster_vs_auth,Fail 2 1 i915_module_load@load,Fail 3 2 i915_module_load@reload,Fail 4 3 i915_module_load@reload-no-display,Fail 5 4 i915_module_load@resize-bar,Fail 6 5 i915_pm_rpm@gem-execbuf-stress,Timeout 7 6 i915_pm_rpm@module-reload,Fail 8 - kms_ccs@ccs-on-another-bo-y-tiled-gen12-rc-ccs-cc,Timeout 9 - kms_fb_coherency@memset-crc,Crash 10 - kms_flip_scaled_crc@flip-32bpp-linear-to-64bpp-linear-downscaling,Fail 11 - kms_flip_scaled_crc@flip-32bpp-linear-to-64bpp-linear-upscaling,Fail 12 - kms_flip_scaled_crc@flip-32bpp-xtile-to-64bpp-xtile-upscaling,Fail 13 - kms_flip_scaled_crc@flip-32bpp-ytile-to-64bpp-ytile-downscaling,Fail 14 7 kms_flip_scaled_crc@flip-32bpp-ytile-to-64bpp-ytile-upscaling,Fail 15 - kms_flip_scaled_crc@flip-32bpp-ytileccs-to-64bpp-ytile-downscaling,Fail 16 - kms_flip_scaled_crc@flip-64bpp-linear-to-16bpp-linear-downscaling,Fail 17 - kms_flip_scaled_crc@flip-64bpp-linear-to-16bpp-linear-upscaling,Fail 18 - kms_flip_scaled_crc@flip-64bpp-linear-to-32bpp-linear-downscaling,Fail 19 - kms_flip_scaled_crc@flip-64bpp-linear-to-32bpp-linear-upscaling,Fail 20 - kms_flip_scaled_crc@flip-64bpp-xtile-to-16bpp-xtile-downscaling,Fail 21 8 kms_flip_scaled_crc@flip-64bpp-xtile-to-16bpp-xtile-upscaling,Fail 22 - kms_flip_scaled_crc@flip-64bpp-xtile-to-32bpp-xtile-downscaling,Fail 23 - kms_flip_scaled_crc@flip-64bpp-xtile-to-32bpp-xtile-upscaling,Fail 24 - kms_flip_scaled_crc@flip-64bpp-ytile-to-16bpp-ytile-downscaling,Fail 25 - kms_flip_scaled_crc@flip-64bpp-ytile-to-16bpp-ytile-upscaling,Fail 26 9 kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-downscaling,Fail 27 - kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-upscaling,Fail 28 - kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling,Fail 29 - kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilercccs-downscaling,Fail 30 - kms_frontbuffer_tracking@fbc-rgb101010-draw-mmap-cpu,Timeout 31 10 kms_lease@lease-uevent,Fail 32 11 kms_plane_alpha_blend@alpha-basic,Fail 33 12 
kms_plane_alpha_blend@alpha-opaque-fb,Fail 34 13 kms_plane_alpha_blend@alpha-transparent-fb,Fail 35 14 kms_plane_alpha_blend@constant-alpha-max,Fail 36 - kms_plane_scaling@planes-upscale-factor-0-25,Timeout 37 - kms_pm_backlight@brightness-with-dpms,Crash 38 - kms_pm_backlight@fade,Crash 39 - kms_prop_blob@invalid-set-prop-any,Fail 40 - kms_properties@connector-properties-legacy,Timeout 15 + kms_pm_rpm@modeset-stress-extra-wait,Timeout 41 16 kms_universal_plane@disable-primary-vs-flip,Timeout 42 17 perf@i915-ref-count,Fail 43 18 perf_pmu@module-unload,Fail
+2 -22
drivers/gpu/drm/ci/xfails/i915-apl-fails.txt
··· 1 + core_setmaster@master-drop-set-user,Fail 1 2 i915_module_load@load,Fail 2 3 i915_module_load@reload,Fail 3 4 i915_module_load@reload-no-display,Fail 4 5 i915_module_load@resize-bar,Fail 5 - kms_flip_scaled_crc@flip-32bpp-linear-to-64bpp-linear-downscaling,Fail 6 - kms_flip_scaled_crc@flip-32bpp-linear-to-64bpp-linear-upscaling,Fail 7 - kms_flip_scaled_crc@flip-32bpp-xtile-to-64bpp-xtile-downscaling,Fail 8 - kms_flip_scaled_crc@flip-32bpp-xtile-to-64bpp-xtile-upscaling,Fail 9 - kms_flip_scaled_crc@flip-32bpp-ytile-to-64bpp-ytile-downscaling,Fail 10 - kms_flip_scaled_crc@flip-32bpp-ytile-to-64bpp-ytile-upscaling,Fail 11 - kms_flip_scaled_crc@flip-32bpp-ytileccs-to-64bpp-ytile-downscaling,Fail 12 - kms_flip_scaled_crc@flip-32bpp-ytileccs-to-64bpp-ytile-upscaling,Fail 13 - kms_flip_scaled_crc@flip-64bpp-linear-to-16bpp-linear-downscaling,Fail 14 - kms_flip_scaled_crc@flip-64bpp-linear-to-16bpp-linear-upscaling,Fail 15 - kms_flip_scaled_crc@flip-64bpp-linear-to-32bpp-linear-downscaling,Fail 16 - kms_flip_scaled_crc@flip-64bpp-linear-to-32bpp-linear-upscaling,Fail 17 - kms_flip_scaled_crc@flip-64bpp-xtile-to-16bpp-xtile-downscaling,Fail 18 - kms_flip_scaled_crc@flip-64bpp-xtile-to-16bpp-xtile-upscaling,Fail 19 - kms_flip_scaled_crc@flip-64bpp-xtile-to-32bpp-xtile-downscaling,Fail 20 - kms_flip_scaled_crc@flip-64bpp-xtile-to-32bpp-xtile-upscaling,Fail 21 - kms_flip_scaled_crc@flip-64bpp-ytile-to-16bpp-ytile-downscaling,Fail 22 - kms_flip_scaled_crc@flip-64bpp-ytile-to-16bpp-ytile-upscaling,Fail 23 - kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-downscaling,Fail 24 - kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-upscaling,Fail 25 - kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling,Fail 26 - kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilercccs-downscaling,Fail 6 + kms_flip@flip-vs-wf_vblank-interruptible,Fail 27 7 kms_lease@lease-uevent,Fail 28 8 kms_plane_alpha_blend@alpha-basic,Fail 29 9 
kms_plane_alpha_blend@alpha-opaque-fb,Fail
+7 -30
drivers/gpu/drm/ci/xfails/i915-cml-fails.txt
··· 1 - core_setmaster_vs_auth,Fail 1 + api_intel_bb@intel-bb-blit-none,Timeout 2 + core_setmaster@master-drop-set-user,Fail 2 3 i915_module_load@load,Fail 3 4 i915_module_load@reload,Fail 4 5 i915_module_load@reload-no-display,Fail ··· 9 8 i915_pm_rpm@gem-execbuf-stress,Timeout 10 9 i915_pm_rpm@module-reload,Fail 11 10 i915_pm_rpm@system-suspend-execbuf,Timeout 12 - kms_ccs@ccs-on-another-bo-y-tiled-gen12-rc-ccs-cc,Timeout 13 - kms_cursor_crc@cursor-suspend,Timeout 14 - kms_fb_coherency@memset-crc,Crash 15 11 kms_flip@busy-flip,Timeout 16 12 kms_flip_scaled_crc@flip-32bpp-linear-to-64bpp-linear-downscaling,Fail 17 13 kms_flip_scaled_crc@flip-32bpp-linear-to-64bpp-linear-upscaling,Fail 18 14 kms_flip_scaled_crc@flip-32bpp-xtile-to-64bpp-xtile-downscaling,Fail 19 15 kms_flip_scaled_crc@flip-32bpp-xtile-to-64bpp-xtile-upscaling,Fail 20 - kms_flip_scaled_crc@flip-32bpp-ytile-to-64bpp-ytile-downscaling,Fail 21 - kms_flip_scaled_crc@flip-32bpp-ytile-to-64bpp-ytile-upscaling,Fail 22 16 kms_flip_scaled_crc@flip-32bpp-ytileccs-to-64bpp-ytile-downscaling,Fail 23 17 kms_flip_scaled_crc@flip-32bpp-ytileccs-to-64bpp-ytile-upscaling,Fail 18 + kms_flip_scaled_crc@flip-32bpp-ytile-to-64bpp-ytile-downscaling,Fail 19 + kms_flip_scaled_crc@flip-32bpp-ytile-to-64bpp-ytile-upscaling,Fail 24 20 kms_flip_scaled_crc@flip-64bpp-linear-to-16bpp-linear-downscaling,Fail 25 21 kms_flip_scaled_crc@flip-64bpp-linear-to-16bpp-linear-upscaling,Fail 26 22 kms_flip_scaled_crc@flip-64bpp-linear-to-32bpp-linear-downscaling,Fail ··· 29 31 kms_flip_scaled_crc@flip-64bpp-ytile-to-16bpp-ytile-downscaling,Fail 30 32 kms_flip_scaled_crc@flip-64bpp-ytile-to-16bpp-ytile-upscaling,Fail 31 33 kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-downscaling,Fail 32 - kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-upscaling,Fail 33 34 kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling,Fail 34 35 kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilercccs-downscaling,Fail 36 + 
kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-upscaling,Fail 35 37 kms_lease@lease-uevent,Fail 36 - kms_pipe_stress@stress-xrgb8888-untiled,Fail 37 - kms_pipe_stress@stress-xrgb8888-ytiled,Fail 38 - kms_plane_alpha_blend@alpha-basic,Fail 39 - kms_plane_alpha_blend@alpha-opaque-fb,Fail 40 - kms_plane_alpha_blend@alpha-transparent-fb,Fail 41 - kms_plane_alpha_blend@constant-alpha-max,Fail 42 - kms_plane_scaling@planes-upscale-factor-0-25,Timeout 43 - kms_pm_backlight@brightness-with-dpms,Crash 44 - kms_pm_backlight@fade,Crash 45 - kms_prop_blob@invalid-set-prop-any,Fail 46 - kms_properties@connector-properties-legacy,Timeout 38 + kms_pm_rpm@basic-rte,Fail 47 39 kms_psr2_sf@cursor-plane-update-sf,Fail 48 40 kms_psr2_sf@overlay-plane-update-continuous-sf,Fail 49 41 kms_psr2_sf@overlay-plane-update-sf-dmg-area,Fail 50 42 kms_psr2_sf@overlay-primary-update-sf-dmg-area,Fail 51 43 kms_psr2_sf@plane-move-sf-dmg-area,Fail 52 - kms_psr2_sf@primary-plane-update-sf-dmg-area,Fail 53 44 kms_psr2_sf@primary-plane-update-sf-dmg-area-big-fb,Fail 54 - kms_psr2_sf@psr2-cursor-plane-update-sf,Fail 55 - kms_psr2_sf@psr2-overlay-plane-update-continuous-sf,Fail 56 - kms_psr2_sf@psr2-overlay-plane-update-sf-dmg-area,Fail 57 - kms_psr2_sf@psr2-overlay-primary-update-sf-dmg-area,Fail 58 - kms_psr2_sf@psr2-plane-move-sf-dmg-area,Fail 59 - kms_psr2_sf@psr2-primary-plane-update-sf-dmg-area,Fail 60 - kms_psr2_sf@psr2-primary-plane-update-sf-dmg-area-big-fb,Fail 61 - kms_psr2_su@page_flip-NV12,Fail 62 - kms_psr2_su@page_flip-P010,Fail 63 - kms_setmode@basic,Fail 64 - kms_universal_plane@disable-primary-vs-flip,Timeout 45 + kms_psr2_sf@primary-plane-update-sf-dmg-area,Fail 65 46 perf@i915-ref-count,Fail 66 47 perf_pmu@module-unload,Fail 67 48 perf_pmu@rc6,Crash
+7
drivers/gpu/drm/ci/xfails/i915-cml-flakes.txt
··· 32 32 # IGT Version: 1.29-g33adea9eb 33 33 # Linux Version: 6.13.0-rc2 34 34 gen9_exec_parse@unaligned-access 35 + 36 + # Board Name: asus-C436FA-Flip-hatch 37 + # Bug Report: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/14625 38 + # Failure Rate: 100 39 + # IGT Version: 2.1-g26ddb59c1 40 + # Linux Version: 6.16.0-rc2 41 + perf_pmu@most-busy-check-all
+6 -16
drivers/gpu/drm/ci/xfails/i915-glk-fails.txt
··· 4 4 i915_module_load@reload,Fail 5 5 i915_module_load@reload-no-display,Fail 6 6 i915_module_load@resize-bar,Fail 7 + kms_dirtyfb@default-dirtyfb-ioctl,Fail 7 8 kms_dirtyfb@drrs-dirtyfb-ioctl,Fail 9 + kms_dirtyfb@fbc-dirtyfb-ioctl,Fail 8 10 kms_flip@blocking-wf_vblank,Fail 9 - kms_flip@wf_vblank-ts-check,Fail 10 - kms_flip@wf_vblank-ts-check-interruptible,Fail 11 - kms_flip_scaled_crc@flip-32bpp-linear-to-64bpp-linear-downscaling,Fail 12 11 kms_flip_scaled_crc@flip-32bpp-linear-to-64bpp-linear-upscaling,Fail 13 12 kms_flip_scaled_crc@flip-32bpp-xtile-to-64bpp-xtile-upscaling,Fail 14 - kms_flip_scaled_crc@flip-32bpp-ytile-to-64bpp-ytile-downscaling,Fail 15 - kms_flip_scaled_crc@flip-32bpp-ytile-to-64bpp-ytile-upscaling,Fail 16 13 kms_flip_scaled_crc@flip-32bpp-ytileccs-to-64bpp-ytile-downscaling,Fail 17 - kms_flip_scaled_crc@flip-64bpp-linear-to-16bpp-linear-downscaling,Fail 14 + kms_flip_scaled_crc@flip-32bpp-ytile-to-64bpp-ytile-upscaling,Fail 18 15 kms_flip_scaled_crc@flip-64bpp-linear-to-16bpp-linear-upscaling,Fail 19 16 kms_flip_scaled_crc@flip-64bpp-linear-to-32bpp-linear-downscaling,Fail 20 - kms_flip_scaled_crc@flip-64bpp-linear-to-32bpp-linear-upscaling,Fail 21 - kms_flip_scaled_crc@flip-64bpp-xtile-to-16bpp-xtile-downscaling,Fail 22 17 kms_flip_scaled_crc@flip-64bpp-xtile-to-16bpp-xtile-upscaling,Fail 23 18 kms_flip_scaled_crc@flip-64bpp-xtile-to-32bpp-xtile-downscaling,Fail 24 - kms_flip_scaled_crc@flip-64bpp-xtile-to-32bpp-xtile-upscaling,Fail 25 - kms_flip_scaled_crc@flip-64bpp-ytile-to-16bpp-ytile-downscaling,Fail 26 19 kms_flip_scaled_crc@flip-64bpp-ytile-to-16bpp-ytile-upscaling,Fail 27 20 kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-downscaling,Fail 28 - kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-upscaling,Fail 29 - kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling,Fail 30 21 kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilercccs-downscaling,Fail 31 - 
kms_frontbuffer_tracking@fbc-rgb101010-draw-mmap-cpu,Timeout 22 + kms_flip@wf_vblank-ts-check,Fail 23 + kms_flip@wf_vblank-ts-check-interruptible,Fail 24 + kms_frontbuffer_tracking@fbcdrrs-tiling-linear,Fail 32 25 kms_frontbuffer_tracking@fbc-tiling-linear,Fail 33 26 kms_lease@lease-uevent,Fail 34 27 kms_plane_alpha_blend@alpha-opaque-fb,Fail 35 28 kms_plane_scaling@planes-upscale-factor-0-25,Timeout 36 - kms_pm_backlight@brightness-with-dpms,Crash 37 - kms_pm_backlight@fade,Crash 38 29 kms_prop_blob@invalid-set-prop-any,Fail 39 30 kms_properties@connector-properties-legacy,Timeout 40 31 kms_rotation_crc@multiplane-rotation,Fail 41 - kms_rotation_crc@multiplane-rotation-cropping-top,Fail 42 32 kms_universal_plane@disable-primary-vs-flip,Timeout 43 33 perf@non-zero-reason,Timeout 44 34 sysfs_heartbeat_interval@long,Timeout
+3 -24
drivers/gpu/drm/ci/xfails/i915-jsl-fails.txt
···
+core_setmaster@master-drop-set-root,Fail
 drm_fdinfo@busy-check-all,Fail
 i915_module_load@load,Fail
 i915_module_load@reload,Fail
 i915_module_load@reload-no-display,Fail
 i915_module_load@resize-bar,Fail
 i915_pm_rpm@gem-execbuf-stress,Timeout
+i915_pm_rpm@module-reload,Fail
 kms_flip@dpms-off-confusion,Fail
-kms_flip@nonexisting-fb,Fail
-kms_flip@single-buffer-flip-vs-dpms-off-vs-modeset,Fail
-kms_flip_scaled_crc@flip-32bpp-linear-to-64bpp-linear-downscaling,Fail
 kms_flip_scaled_crc@flip-32bpp-linear-to-64bpp-linear-upscaling,Fail
 kms_flip_scaled_crc@flip-32bpp-xtile-to-64bpp-xtile-downscaling,UnexpectedImprovement(Skip)
-kms_flip_scaled_crc@flip-32bpp-xtile-to-64bpp-xtile-upscaling,Fail
-kms_flip_scaled_crc@flip-32bpp-ytile-to-64bpp-ytile-downscaling,Fail
-kms_flip_scaled_crc@flip-32bpp-ytile-to-64bpp-ytile-upscaling,Fail
 kms_flip_scaled_crc@flip-32bpp-ytileccs-to-64bpp-ytile-downscaling,Fail
-kms_flip_scaled_crc@flip-32bpp-ytileccs-to-64bpp-ytile-upscaling,Fail
-kms_flip_scaled_crc@flip-64bpp-linear-to-16bpp-linear-downscaling,Fail
-kms_flip_scaled_crc@flip-64bpp-linear-to-16bpp-linear-upscaling,Fail
-kms_flip_scaled_crc@flip-64bpp-linear-to-32bpp-linear-downscaling,Fail
-kms_flip_scaled_crc@flip-64bpp-linear-to-32bpp-linear-upscaling,Fail
-kms_flip_scaled_crc@flip-64bpp-xtile-to-16bpp-xtile-downscaling,Fail
+kms_flip_scaled_crc@flip-32bpp-ytile-to-64bpp-ytile-upscaling,Fail
 kms_flip_scaled_crc@flip-64bpp-xtile-to-16bpp-xtile-upscaling,Fail
 kms_flip_scaled_crc@flip-64bpp-xtile-to-32bpp-xtile-downscaling,Fail
-kms_flip_scaled_crc@flip-64bpp-xtile-to-32bpp-xtile-upscaling,Fail
-kms_flip_scaled_crc@flip-64bpp-ytile-to-16bpp-ytile-downscaling,Fail
-kms_flip_scaled_crc@flip-64bpp-ytile-to-16bpp-ytile-upscaling,Fail
-kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-downscaling,Fail
-kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-upscaling,Fail
-kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling,Fail
-kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilercccs-downscaling,Fail
 kms_lease@lease-uevent,Fail
 kms_pm_rpm@modeset-stress-extra-wait,Timeout
-kms_rotation_crc@bad-pixel-format,Fail
 kms_rotation_crc@multiplane-rotation,Fail
-kms_rotation_crc@multiplane-rotation-cropping-bottom,Fail
-kms_rotation_crc@multiplane-rotation-cropping-top,Fail
 perf@i915-ref-count,Fail
 perf_pmu@module-unload,Fail
-perf_pmu@most-busy-idle-check-all,Fail
 perf_pmu@rc6,Crash
-prime_busy@before-wait,Fail
 sysfs_heartbeat_interval@long,Timeout
 sysfs_heartbeat_interval@off,Timeout
 sysfs_preempt_timeout@off,Timeout
+4 -1
drivers/gpu/drm/ci/xfails/i915-kbl-fails.txt
···
+core_setmaster@master-drop-set-user,Fail
 i915_module_load@load,Fail
 i915_module_load@reload,Fail
 i915_module_load@reload-no-display,Fail
 i915_module_load@resize-bar,Fail
 i915_pm_rpm@gem-execbuf-stress,Timeout
 kms_flip_scaled_crc@flip-32bpp-xtile-to-64bpp-xtile-downscaling,Fail
-kms_flip_scaled_crc@flip-32bpp-ytile-to-64bpp-ytile-upscaling,Fail
 kms_flip_scaled_crc@flip-32bpp-ytileccs-to-64bpp-ytile-upscaling,Fail
+kms_flip_scaled_crc@flip-32bpp-ytile-to-64bpp-ytile-upscaling,Fail
 kms_flip_scaled_crc@flip-64bpp-linear-to-16bpp-linear-downscaling,Fail
 kms_flip_scaled_crc@flip-64bpp-linear-to-32bpp-linear-upscaling,Fail
 kms_flip_scaled_crc@flip-64bpp-xtile-to-16bpp-xtile-upscaling,Fail
···
 perf_pmu@busy-accuracy-50,Fail
 perf_pmu@module-unload,Fail
 perf_pmu@rc6,Crash
+prime_busy@after-wait,Fail
+prime_busy@before,Fail
 sysfs_heartbeat_interval@long,Timeout
 sysfs_heartbeat_interval@off,Timeout
 sysfs_preempt_timeout@off,Timeout
+2 -3
drivers/gpu/drm/ci/xfails/i915-tgl-fails.txt
···
 api_intel_allocator@reopen,Timeout
 api_intel_bb@destroy-bb,Timeout
 core_hotunplug@hotrebind-lateclose,Timeout
+core_setmaster@master-drop-set-user,Fail
+drm_read@short-buffer-block,Timeout
 dumb_buffer@map-valid,Timeout
 i915_module_load@load,Fail
 i915_module_load@reload,Fail
 i915_module_load@reload-no-display,Fail
 i915_module_load@resize-bar,Fail
-i915_pm_rpm@gem-execbuf-stress,Timeout
 i915_pm_rps@engine-order,Timeout
-i915_pm_rps@waitboost,Fail
 kms_lease@lease-uevent,Fail
 kms_rotation_crc@multiplane-rotation,Fail
 perf@i915-ref-count,Fail
···
 perf_pmu@module-unload,Fail
 perf_pmu@rc6,Crash
 perf_pmu@semaphore-wait-idle,Timeout
-prime_busy@before,Fail
 prime_mmap@test_refcounting,Timeout
 sriov_basic@enable-vfs-bind-unbind-each-numvfs-all,Timeout
 syncobj_basic@illegal-fd-to-handle,Timeout
+6
drivers/gpu/drm/ci/xfails/i915-tgl-flakes.txt
···
+# Board Name: acer-cp514-2h-1130g7-volteer
+# Bug Report: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/14624
+# Failure Rate: 100
+# IGT Version: 2.1-g26ddb59c1
+# Linux Version: 6.16.0-rc2
+perf@gen12-unprivileged-single-ctx-counters
+5 -8
drivers/gpu/drm/ci/xfails/i915-whl-fails.txt
···
 i915_pm_rpm@gem-execbuf-stress,Timeout
 i915_pm_rpm@module-reload,Fail
 i915_pm_rpm@system-suspend-execbuf,Timeout
-kms_ccs@ccs-on-another-bo-y-tiled-gen12-rc-ccs-cc,Timeout
-kms_cursor_crc@cursor-suspend,Timeout
+kms_dirtyfb@default-dirtyfb-ioctl,Fail
+kms_dirtyfb@fbc-dirtyfb-ioctl,Fail
 kms_fb_coherency@memset-crc,Crash
 kms_flip_scaled_crc@flip-32bpp-linear-to-64bpp-linear-downscaling,Fail
 kms_flip_scaled_crc@flip-32bpp-linear-to-64bpp-linear-upscaling,Fail
 kms_flip_scaled_crc@flip-32bpp-xtile-to-64bpp-xtile-downscaling,Fail
 kms_flip_scaled_crc@flip-32bpp-xtile-to-64bpp-xtile-upscaling,Fail
-kms_flip_scaled_crc@flip-32bpp-ytile-to-64bpp-ytile-downscaling,Fail
-kms_flip_scaled_crc@flip-32bpp-ytile-to-64bpp-ytile-upscaling,Fail
 kms_flip_scaled_crc@flip-32bpp-ytileccs-to-64bpp-ytile-downscaling,Fail
 kms_flip_scaled_crc@flip-32bpp-ytileccs-to-64bpp-ytile-upscaling,Fail
+kms_flip_scaled_crc@flip-32bpp-ytile-to-64bpp-ytile-downscaling,Fail
+kms_flip_scaled_crc@flip-32bpp-ytile-to-64bpp-ytile-upscaling,Fail
 kms_flip_scaled_crc@flip-64bpp-linear-to-16bpp-linear-downscaling,Fail
 kms_flip_scaled_crc@flip-64bpp-linear-to-32bpp-linear-upscaling,Fail
 kms_flip_scaled_crc@flip-64bpp-xtile-to-16bpp-xtile-downscaling,Fail
···
 kms_flip_scaled_crc@flip-64bpp-ytile-to-16bpp-ytile-downscaling,Fail
 kms_flip_scaled_crc@flip-64bpp-ytile-to-16bpp-ytile-upscaling,Fail
 kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-downscaling,Fail
-kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-upscaling,Fail
 kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling,Fail
 kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilercccs-downscaling,Fail
-kms_frontbuffer_tracking@fbc-rgb101010-draw-mmap-cpu,Timeout
+kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-upscaling,Fail
 kms_frontbuffer_tracking@fbc-tiling-linear,Fail
 kms_lease@lease-uevent,Fail
 kms_plane_alpha_blend@alpha-basic,Fail
···
 kms_plane_alpha_blend@alpha-transparent-fb,Fail
 kms_plane_alpha_blend@constant-alpha-max,Fail
 kms_plane_scaling@planes-upscale-factor-0-25,Timeout
-kms_pm_backlight@brightness-with-dpms,Crash
-kms_pm_backlight@fade,Crash
 kms_prop_blob@invalid-set-prop-any,Fail
 kms_properties@connector-properties-legacy,Timeout
 kms_universal_plane@disable-primary-vs-flip,Timeout
+5 -7
drivers/gpu/drm/ci/xfails/mediatek-mt8173-fails.txt
···
+core_setmaster@master-drop-set-root,Fail
+core_setmaster@master-drop-set-shared-fd,Fail
+core_setmaster@master-drop-set-user,Fail
 kms_3d,Fail
-kms_bw@connected-linear-tiling-1-displays-1920x1080p,Fail
 kms_bw@connected-linear-tiling-1-displays-2560x1440p,Fail
-kms_bw@connected-linear-tiling-1-displays-3840x2160p,Fail
 kms_bw@connected-linear-tiling-2-displays-1920x1080p,Fail
 kms_bw@connected-linear-tiling-2-displays-2160x1440p,Fail
 kms_bw@connected-linear-tiling-2-displays-2560x1440p,Fail
···
 kms_bw@linear-tiling-2-displays-2160x1440p,Fail
 kms_bw@linear-tiling-2-displays-2560x1440p,Fail
 kms_bw@linear-tiling-2-displays-3840x2160p,Fail
-kms_color@invalid-gamma-lut-sizes,Fail
 kms_cursor_legacy@cursor-vs-flip-atomic,Fail
-kms_cursor_legacy@cursor-vs-flip-legacy,Fail
 kms_cursor_legacy@flip-vs-cursor-atomic,Fail
 kms_cursor_legacy@flip-vs-cursor-legacy,Fail
 kms_cursor_legacy@flip-vs-cursor-toggle,Fail
···
 kms_flip@basic-plain-flip,Fail
 kms_flip@dpms-off-confusion,Fail
 kms_flip@dpms-off-confusion-interruptible,Fail
-kms_flip@flip-vs-absolute-wf_vblank,Fail
-kms_flip@flip-vs-absolute-wf_vblank-interruptible,Fail
 kms_flip@flip-vs-blocking-wf-vblank,Fail
+kms_flip@flip-vs-dpms-on-nop,Fail
+kms_flip@flip-vs-dpms-on-nop-interruptible,Fail
 kms_flip@flip-vs-expired-vblank,Fail
 kms_flip@flip-vs-expired-vblank-interruptible,Fail
 kms_flip@flip-vs-modeset-vs-hang,Fail
···
 kms_flip@plain-flip-interruptible,Fail
 kms_flip@plain-flip-ts-check,Fail
 kms_flip@plain-flip-ts-check-interruptible,Fail
-kms_invalid_mode@overflow-vrefresh,Fail
 kms_lease@lease-uevent,Fail
+35
drivers/gpu/drm/ci/xfails/mediatek-mt8173-flakes.txt
···
 # IGT Version: 1.30-g04bedb923
 # Linux Version: 6.14.0-rc4
 kms_flip@flip-vs-wf_vblank-interruptible
+
+# Board Name: mt8173-elm-hana
+# Bug Report: https://lore.kernel.org/dri-devel/7559dd68-c9dd-410f-880f-201679e2dd54@collabora.com/T/#u
+# Failure Rate: 20
+# IGT Version: 2.1-g26ddb59c1
+# Linux Version: 6.16.0-rc2
+kms_flip@blocking-wf_vblank
+
+# Board Name: mt8173-elm-hana
+# Bug Report: https://lore.kernel.org/dri-devel/953ab66e-9dda-4003-9b98-9e0d81e18a1f@collabora.com/T/#u
+# Failure Rate: 40
+# IGT Version: 2.1-g26ddb59c1
+# Linux Version: 6.16.0-rc2
+kms_flip@busy-flip
+
+# Board Name: mt8173-elm-hana
+# Bug Report: https://lore.kernel.org/dri-devel/6ab7f59c-042e-4c7a-baaa-86c7d47ab308@collabora.com/
+# Failure Rate: 40
+# IGT Version: 2.1-g26ddb59c1
+# Linux Version: 6.16.0-rc2
+kms_flip@flip-vs-rmfb
+
+# Board Name: mt8173-elm-hana
+# Bug Report: https://lore.kernel.org/dri-devel/30b3f8b0-3409-4329-bb60-b6287e1a439d@collabora.com/
+# Failure Rate: 60
+# IGT Version: 2.1-g26ddb59c1
+# Linux Version: 6.16.0-rc2
+kms_atomic_transition@plane-all-modeset-transition-internal-panels
+
+# Board Name: mt8173-elm-hana
+# Bug Report: https://lore.kernel.org/dri-devel/4c9e1501-52cd-4659-a894-8a2ac58c3996@collabora.com/
+# Failure Rate: 40
+# IGT Version: 2.1-g26ddb59c1
+# Linux Version: 6.16.0-rc2
+kms_flip@absolute-wf_vblank
+4
drivers/gpu/drm/ci/xfails/msm-apq8016-fails.txt
···
+core_setmaster@master-drop-set-user,Fail
 kms_3d,Fail
+kms_cursor_legacy@forked-move,Fail
+kms_cursor_legacy@single-bo,Fail
 kms_force_connector_basic@force-edid,Fail
 kms_hdmi_inject@inject-4k,Fail
 kms_lease@lease-uevent,Fail
+msm/msm_mapping@memptrs,Fail
 msm/msm_mapping@ring,Fail
+2
drivers/gpu/drm/ci/xfails/msm-apq8096-fails.txt
···
+core_setmaster@master-drop-set-user,Fail
 kms_3d,Fail
 kms_lease@lease-uevent,Fail
+msm/msm_mapping@memptrs,Fail
+3 -2
drivers/gpu/drm/ci/xfails/msm-sc7180-trogdor-kingoftown-fails.txt
···
+core_setmaster@master-drop-set-user,Fail
 kms_color@ctm-0-25,Fail
 kms_color@ctm-0-50,Fail
 kms_color@ctm-0-75,Fail
···
 kms_flip@flip-vs-panning-vs-hang,Fail
 kms_lease@lease-uevent,Fail
 kms_pipe_crc_basic@compare-crc-sanitycheck-nv12,Fail
-kms_plane@pixel-format,Fail
-kms_plane@pixel-format-source-clamping,Fail
 kms_plane_alpha_blend@alpha-7efc,Fail
 kms_plane_alpha_blend@coverage-7efc,Fail
 kms_plane_alpha_blend@coverage-vs-premult-vs-constant,Fail
+kms_plane@pixel-format,Fail
+kms_plane@pixel-format-source-clamping,Fail
+3 -2
drivers/gpu/drm/ci/xfails/msm-sc7180-trogdor-lazor-limozeen-fails.txt
···
+core_setmaster@master-drop-set-user,Fail
 kms_color@ctm-0-25,Fail
 kms_color@ctm-0-50,Fail
 kms_color@ctm-0-75,Fail
···
 kms_flip@flip-vs-panning-vs-hang,Fail
 kms_lease@lease-uevent,Fail
 kms_pipe_crc_basic@compare-crc-sanitycheck-nv12,Fail
-kms_plane@pixel-format,Fail
-kms_plane@pixel-format-source-clamping,Fail
 kms_plane_alpha_blend@alpha-7efc,Fail
 kms_plane_alpha_blend@coverage-7efc,Fail
 kms_plane_alpha_blend@coverage-vs-premult-vs-constant,Fail
+kms_plane@pixel-format,Fail
+kms_plane@pixel-format-source-clamping,Fail
+1
drivers/gpu/drm/ci/xfails/msm-sm8350-hdk-fails.txt
···
+core_setmaster@master-drop-set-user,Fail
 kms_3d,Fail
 kms_cursor_legacy@forked-bo,Fail
 kms_cursor_legacy@forked-move,Fail
+1
drivers/gpu/drm/ci/xfails/panfrost-mt8183-fails.txt
···
+core_setmaster@master-drop-set-user,Fail
 panfrost/panfrost_prime@gem-prime-import,Fail
 panfrost/panfrost_submit@pan-submit-error-bad-requirements,Fail
+1
drivers/gpu/drm/ci/xfails/panfrost-rk3288-fails.txt
···
+core_setmaster@master-drop-set-user,Crash
 panfrost/panfrost_prime@gem-prime-import,Crash
 panfrost/panfrost_submit@pan-submit-error-bad-requirements,Crash
+1
drivers/gpu/drm/ci/xfails/panfrost-rk3399-fails.txt
···
+core_setmaster@master-drop-set-user,Fail
 panfrost/panfrost_prime@gem-prime-import,Fail
 panfrost/panfrost_submit@pan-submit-error-bad-requirements,Fail
+5
drivers/gpu/drm/ci/xfails/panthor-rk3588-fails.txt
···
+core_hotunplug@hotreplug,Fail
+core_hotunplug@hotreplug-lateclose,Fail
+core_hotunplug@hotunplug-rescan,Fail
+core_hotunplug@unplug-rescan,Fail
+core_setmaster@master-drop-set-user,Fail
+20
drivers/gpu/drm/ci/xfails/panthor-rk3588-skips.txt
···
+# Skip driver specific tests
+^amdgpu.*
+^msm.*
+nouveau_.*
+^v3d.*
+^vc4.*
+^vmwgfx*
+
+# Skip intel specific tests
+gem_.*
+i915_.*
+tools_test.*
+kms_dp_link_training.*
+
+# Panfrost is not a KMS driver, so skip the KMS tests
+kms_.*
+
+# Skip display functionality tests for GPU-only drivers
+dumb_buffer.*
+fbdev.*
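Each line in a skips file like the one above is a regular expression matched against IGT test names; any match excludes the test from the run. A minimal sketch of that filtering in Python (the actual CI runner's matching semantics may differ, and `is_skipped` is an illustrative name, not a real helper):

```python
import re

# A few of the skip patterns from the file above. Anchored patterns
# (^amdgpu.*) only match at the start of the test name; unanchored ones
# (gem_.*) match anywhere. This mirrors, but does not reproduce, the runner.
SKIP_PATTERNS = [r"^amdgpu.*", r"^msm.*", r"gem_.*", r"i915_.*", r"kms_.*"]

def is_skipped(test_name, patterns=SKIP_PATTERNS):
    """Return True when any skip pattern matches the test name."""
    return any(re.search(p, test_name) for p in patterns)
```

With this sketch, `is_skipped("kms_flip@busy-flip")` is True while a panthor-specific test name matches none of the patterns and runs.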
+12 -3
drivers/gpu/drm/ci/xfails/rockchip-rk3288-fails.txt
···
 core_setmaster@master-drop-set-shared-fd,Crash
 core_setmaster@master-drop-set-user,Crash
 core_setmaster_vs_auth,Crash
-dumb_buffer@create-clear,Crash
 fbdev@pan,Crash
-kms_cursor_legacy@basic-flip-before-cursor-legacy,Fail
-kms_prop_blob@invalid-set-prop,Crash
+kms_cursor_crc@cursor-dpms,Crash
+kms_cursor_crc@cursor-sliding-32x32,Crash
+kms_cursor_legacy@basic-flip-before-cursor-atomic,Crash
+kms_cursor_legacy@cursor-vs-flip-atomic,Crash
+kms_flip@basic-flip-vs-wf_vblank,Crash
+kms_flip@flip-vs-panning-vs-hang,Crash
+kms_flip@plain-flip-fb-recreate-interruptible,Crash
+kms_pipe_crc_basic@read-crc-frame-sequence,Crash
+kms_plane_cursor@overlay,Crash
+kms_plane_cursor@viewport,Crash
 kms_prop_blob@invalid-set-prop-any,Crash
+kms_prop_blob@invalid-set-prop,Crash
+kms_properties@get_properties-sanity-non-atomic,Fail
+21
drivers/gpu/drm/ci/xfails/rockchip-rk3288-flakes.txt
···
 # IGT Version: 1.28-ga73311079
 # Linux Version: 6.11.0-rc2
 kms_cursor_crc@cursor-alpha-opaque
+
+# Board Name: rk3288-veyron-jaq
+# Bug Report: https://lore.kernel.org/dri-devel/acfd5838-d861-4dd9-97c3-99fffc9bfa04@collabora.com/T/#u
+# Failure Rate: 40
+# IGT Version: 2.1-g26ddb59c1
+# Linux Version: 6.16.0-rc2
+kms_flip@flip-vs-absolute-wf_vblank
+
+# Board Name: rk3288-veyron-jaq
+# Bug Report: https://lore.kernel.org/dri-devel/81e13fcc-d916-4eb8-91cd-f74f64f53f72@collabora.com/T/#u
+# Failure Rate: 40
+# IGT Version: 2.1-g26ddb59c1
+# Linux Version: 6.16.0-rc2
+kms_flip@flip-vs-dpms-on-nop-interruptible
+
+# Board Name: rk3288-veyron-jaq
+# Bug Report: https://lore.kernel.org/dri-devel/10c5abab-c8fe-4eff-8eed-009038436b49@collabora.com/T/#u
+# Failure Rate: 20
+# IGT Version: 2.1-g26ddb59c1
+# Linux Version: 6.16.0-rc2
+kms_flip@plain-flip-fb-recreate
+7 -5
drivers/gpu/drm/ci/xfails/rockchip-rk3399-fails.txt
···
-dumb_buffer@create-clear,Crash
+core_setmaster@master-drop-set-user,Fail
 kms_atomic_transition@modeset-transition,Fail
 kms_atomic_transition@modeset-transition-fencing,Fail
 kms_atomic_transition@plane-toggle-modeset-transition,Fail
-kms_color@gamma,Fail
-kms_color@legacy-gamma,Fail
+kms_cursor_crc@async-cursor-crc-framebuffer-change,Fail
+kms_cursor_crc@async-cursor-crc-position-change,Fail
 kms_cursor_crc@cursor-alpha-opaque,Fail
 kms_cursor_crc@cursor-alpha-transparent,Fail
 kms_cursor_crc@cursor-dpms,Fail
···
 kms_cursor_legacy@flip-vs-cursor-crc-legacy,Fail
 kms_cursor_legacy@flip-vs-cursor-legacy,Fail
 kms_cursor_legacy@long-nonblocking-modeset-vs-cursor-atomic,Fail
+kms_flip@basic-flip-vs-dpms,Fail
 kms_flip@basic-flip-vs-wf_vblank,Fail
 kms_flip@blocking-wf_vblank,Fail
+kms_flip@flip-vs-dpms-on-nop,Fail
+kms_flip@flip-vs-dpms-on-nop-interruptible,Fail
 kms_flip@flip-vs-modeset-vs-hang,Fail
 kms_flip@flip-vs-panning,Fail
 kms_flip@flip-vs-panning-interruptible,Fail
···
 kms_flip@plain-flip-fb-recreate,Fail
 kms_flip@plain-flip-fb-recreate-interruptible,Fail
 kms_flip@plain-flip-ts-check,Fail
-kms_flip@plain-flip-ts-check-interruptible,Fail
 kms_flip@wf_vblank-ts-check-interruptible,Fail
 kms_invalid_mode@int-max-clock,Fail
 kms_invalid_mode@overflow-vrefresh,Fail
···
 kms_pipe_crc_basic@nonblocking-crc-frame-sequence,Fail
 kms_pipe_crc_basic@read-crc,Fail
 kms_pipe_crc_basic@read-crc-frame-sequence,Fail
+kms_plane_cursor@primary,Fail
 kms_plane@pixel-format,Fail
 kms_plane@pixel-format-source-clamping,Fail
 kms_plane@plane-panning-bottom-right,Fail
 kms_plane@plane-panning-top-left,Fail
 kms_plane@plane-position-covered,Fail
 kms_plane@plane-position-hole,Fail
-kms_plane_cursor@primary,Fail
 kms_universal_plane@universal-plane-functional,Fail
+35
drivers/gpu/drm/ci/xfails/rockchip-rk3399-flakes.txt
···
 # IGT Version: 1.30-g04bedb923
 # Linux Version: 6.14.0-rc4
 kms_bw@linear-tiling-1-displays-3840x2160p
+
+# Board Name: rk3399-gru-kevin
+# Bug Report: https://lore.kernel.org/dri-devel/7b6e2e3b-2ea2-4cd7-92a5-68d23a63e426@collabora.com/T/#u
+# Failure Rate: 60
+# IGT Version: 2.1-g26ddb59c1
+# Linux Version: 6.16.0-rc2
+kms_color@gamma
+
+# Board Name: rk3399-gru-kevin
+# Bug Report: https://lore.kernel.org/dri-devel/e29c2892-08f2-423f-af72-e4d8b207fd1c@collabora.com/T/#u
+# Failure Rate: 60
+# IGT Version: 2.1-g26ddb59c1
+# Linux Version: 6.16.0-rc2
+kms_bw@connected-linear-tiling-1-displays-3840x2160p
+
+# Board Name: rk3399-gru-kevin
+# Bug Report: https://lore.kernel.org/dri-devel/ad9ce463-c803-4502-ae89-381a6b6eb19f@collabora.com/T/#u
+# Failure Rate: 40
+# IGT Version: 2.1-g26ddb59c1
+# Linux Version: 6.16.0-rc2
+kms_color@legacy-gamma
+
+# Board Name: rk3399-gru-kevin
+# Bug Report: https://lore.kernel.org/dri-devel/59724e10-12ca-4481-b0e4-72d7b6e4dae0@collabora.com/T/#u
+# Failure Rate: 40
+# IGT Version: 2.1-g26ddb59c1
+# Linux Version: 6.16.0-rc2
+kms_flip@plain-flip-ts-check-interruptible
+
+# Board Name: rk3399-gru-kevin
+# Bug Report: https://lore.kernel.org/dri-devel/d790db5f-a1ba-47f9-9af0-d3287ef3274c@collabora.com/T/#u
+# Failure Rate: 20
+# IGT Version: 2.1-g26ddb59c1
+# Linux Version: 6.16.0-rc2
+kms_bw@linear-tiling-2-displays-3840x2160p
+9
drivers/gpu/drm/ci/xfails/rockchip-rk3588-fails.txt
···
+core_setmaster@master-drop-set-user,Fail
+kms_3d,Fail
+kms_cursor_legacy@forked-bo,Fail
+kms_cursor_legacy@forked-move,Fail
+kms_cursor_legacy@single-bo,Fail
+kms_cursor_legacy@single-move,Fail
+kms_cursor_legacy@torture-bo,Fail
+kms_cursor_legacy@torture-move,Fail
+kms_lease@lease-uevent,Fail
+14
drivers/gpu/drm/ci/xfails/rockchip-rk3588-skips.txt
···
+# Skip driver specific tests
+^amdgpu.*
+^msm.*
+nouveau_.*
+^panfrost.*
+^v3d.*
+^vc4.*
+^vmwgfx*
+
+# Skip intel specific tests
+gem_.*
+i915_.*
+tools_test.*
+kms_dp_link_training.*
+8 -58
drivers/gpu/drm/ci/xfails/virtio_gpu-none-fails.txt
···
 kms_addfb_basic@bo-too-small,Fail
 kms_addfb_basic@size-max,Fail
 kms_addfb_basic@too-high,Fail
-kms_atomic_transition@plane-primary-toggle-with-vblank-wait,Fail
-kms_bw@connected-linear-tiling-1-displays-1920x1080p,Fail
-kms_bw@connected-linear-tiling-1-displays-2160x1440p,Fail
-kms_bw@connected-linear-tiling-1-displays-2560x1440p,Fail
-kms_bw@connected-linear-tiling-1-displays-3840x2160p,Fail
 kms_bw@connected-linear-tiling-10-displays-1920x1080p,Fail
 kms_bw@connected-linear-tiling-10-displays-2160x1440p,Fail
 kms_bw@connected-linear-tiling-10-displays-2560x1440p,Fail
···
 kms_bw@connected-linear-tiling-16-displays-2160x1440p,Fail
 kms_bw@connected-linear-tiling-16-displays-2560x1440p,Fail
 kms_bw@connected-linear-tiling-16-displays-3840x2160p,Fail
+kms_bw@connected-linear-tiling-1-displays-1920x1080p,Fail
+kms_bw@connected-linear-tiling-1-displays-2160x1440p,Fail
+kms_bw@connected-linear-tiling-1-displays-2560x1440p,Fail
+kms_bw@connected-linear-tiling-1-displays-3840x2160p,Fail
 kms_bw@connected-linear-tiling-2-displays-1920x1080p,Fail
 kms_bw@connected-linear-tiling-2-displays-2160x1440p,Fail
 kms_bw@connected-linear-tiling-2-displays-2560x1440p,Fail
···
 kms_bw@connected-linear-tiling-9-displays-2160x1440p,Fail
 kms_bw@connected-linear-tiling-9-displays-2560x1440p,Fail
 kms_bw@connected-linear-tiling-9-displays-3840x2160p,Fail
-kms_bw@linear-tiling-1-displays-1920x1080p,Fail
-kms_bw@linear-tiling-1-displays-2160x1440p,Fail
-kms_bw@linear-tiling-1-displays-2560x1440p,Fail
-kms_bw@linear-tiling-1-displays-3840x2160p,Fail
 kms_bw@linear-tiling-10-displays-1920x1080p,Fail
 kms_bw@linear-tiling-10-displays-2160x1440p,Fail
 kms_bw@linear-tiling-10-displays-2560x1440p,Fail
···
 kms_bw@linear-tiling-16-displays-2160x1440p,Fail
 kms_bw@linear-tiling-16-displays-2560x1440p,Fail
 kms_bw@linear-tiling-16-displays-3840x2160p,Fail
+kms_bw@linear-tiling-1-displays-1920x1080p,Fail
+kms_bw@linear-tiling-1-displays-2160x1440p,Fail
+kms_bw@linear-tiling-1-displays-2560x1440p,Fail
+kms_bw@linear-tiling-1-displays-3840x2160p,Fail
 kms_bw@linear-tiling-2-displays-1920x1080p,Fail
 kms_bw@linear-tiling-2-displays-2160x1440p,Fail
 kms_bw@linear-tiling-2-displays-2560x1440p,Fail
···
 kms_bw@linear-tiling-9-displays-2160x1440p,Fail
 kms_bw@linear-tiling-9-displays-2560x1440p,Fail
 kms_bw@linear-tiling-9-displays-3840x2160p,Fail
-kms_flip@absolute-wf_vblank,Fail
-kms_flip@absolute-wf_vblank-interruptible,Fail
-kms_flip@basic-flip-vs-wf_vblank,Fail
-kms_flip@blocking-absolute-wf_vblank,Fail
-kms_flip@blocking-absolute-wf_vblank-interruptible,Fail
-kms_flip@blocking-wf_vblank,Fail
-kms_flip@busy-flip,Fail
-kms_flip@dpms-vs-vblank-race,Fail
-kms_flip@dpms-vs-vblank-race-interruptible,Fail
-kms_flip@flip-vs-absolute-wf_vblank,Fail
-kms_flip@flip-vs-absolute-wf_vblank-interruptible,Fail
-kms_flip@flip-vs-blocking-wf-vblank,Fail
-kms_flip@flip-vs-expired-vblank,Fail
-kms_flip@flip-vs-expired-vblank-interruptible,Fail
 kms_flip@flip-vs-modeset-vs-hang,Fail
 kms_flip@flip-vs-panning-vs-hang,Fail
-kms_flip@flip-vs-wf_vblank-interruptible,Fail
-kms_flip@modeset-vs-vblank-race,Fail
-kms_flip@modeset-vs-vblank-race-interruptible,Fail
-kms_flip@plain-flip-fb-recreate,Fail
-kms_flip@plain-flip-fb-recreate-interruptible,Fail
-kms_flip@plain-flip-ts-check,Fail
-kms_flip@plain-flip-ts-check-interruptible,Fail
-kms_flip@wf_vblank-ts-check,Fail
-kms_flip@wf_vblank-ts-check-interruptible,Fail
 kms_invalid_mode@int-max-clock,Fail
 kms_invalid_mode@overflow-vrefresh,Fail
-kms_lease@cursor-implicit-plane,Fail
 kms_lease@lease-uevent,Fail
-kms_lease@page-flip-implicit-plane,Fail
-kms_lease@setcrtc-implicit-plane,Fail
-kms_lease@simple-lease,Fail
-kms_sequence@get-busy,Fail
-kms_sequence@get-forked,Fail
-kms_sequence@get-forked-busy,Fail
-kms_sequence@get-idle,Fail
-kms_sequence@queue-busy,Fail
-kms_sequence@queue-idle,Fail
-kms_setmode@basic,Fail
-kms_vblank@accuracy-idle,Fail
-kms_vblank@crtc-id,Fail
-kms_vblank@invalid,Fail
-kms_vblank@query-busy,Fail
-kms_vblank@query-forked,Fail
-kms_vblank@query-forked-busy,Fail
-kms_vblank@query-idle,Fail
-kms_vblank@ts-continuation-dpms-rpm,Fail
 kms_vblank@ts-continuation-dpms-suspend,Fail
-kms_vblank@ts-continuation-idle,Fail
-kms_vblank@ts-continuation-modeset,Fail
-kms_vblank@ts-continuation-modeset-rpm,Fail
 kms_vblank@ts-continuation-suspend,Fail
-kms_vblank@wait-busy,Fail
-kms_vblank@wait-forked,Fail
-kms_vblank@wait-forked-busy,Fail
-kms_vblank@wait-idle,Fail
 perf@i915-ref-count,Fail
+2
drivers/gpu/drm/ci/xfails/vkms-none-fails.txt
···
 kms_flip@flip-vs-suspend,Fail
 kms_flip@flip-vs-suspend-interruptible,Fail
 kms_lease@lease-uevent,Fail
+kms_plane@pixel-format-source-clamping,Timeout
+kms_plane@pixel-format,Timeout
 kms_writeback@writeback-check-output,Fail
 kms_writeback@writeback-check-output-XRGB2101010,Fail
 kms_writeback@writeback-fb-id,Fail
+2 -4
drivers/gpu/drm/meson/meson_dw_mipi_dsi.c
···
 		dpi_data_format = DPI_COLOR_18BIT_CFG_2;
 		venc_data_width = VENC_IN_COLOR_18B;
 		break;
-	case MIPI_DSI_FMT_RGB666_PACKED:
-	case MIPI_DSI_FMT_RGB565:
+	default:
 		return -EINVAL;
 	}
 
···
 		break;
 	case MIPI_DSI_FMT_RGB666:
 		break;
-	case MIPI_DSI_FMT_RGB666_PACKED:
-	case MIPI_DSI_FMT_RGB565:
+	default:
 		dev_err(mipi_dsi->dev, "invalid pixel format %d\n", device->format);
 		return -EINVAL;
 	}
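The meson change above swaps the explicit `case` labels for unsupported formats with a `default:` arm, so any format value added to the enum later is rejected instead of falling through unhandled. A toy sketch of the same pattern (names and constants are illustrative stand-ins, not the kernel's):

```python
# Map only the formats the hardware path supports; everything else,
# including values that do not exist yet, takes the default error path.
# EINVAL stands in for the kernel errno 22; DPI_COLOR_* are placeholders.
EINVAL = 22

SUPPORTED_DPI_FORMATS = {
    "RGB888": "DPI_COLOR_24BIT",
    "RGB666": "DPI_COLOR_18BIT_CFG_2",
}

def dpi_data_format(fmt):
    try:
        return SUPPORTED_DPI_FORMATS[fmt]
    except KeyError:
        # default: reject RGB565, RGB666_PACKED, and any future format
        return -EINVAL
```

The design choice is the same in both directions: an allow-list plus a default error keeps new enum members from silently programming the hardware with stale settings.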
+1
drivers/gpu/drm/msm/Makefile
···
 	adreno/a6xx_hfi.o \
 	adreno/a6xx_preempt.o \
 	adreno/a8xx_gpu.o \
+	adreno/a8xx_preempt.o \
 
 adreno-$(CONFIG_DEBUG_FS) += adreno/a5xx_debugfs.o \
 
+2 -4
drivers/gpu/drm/msm/adreno/a4xx_gpu.c
···
 	return 0;
 }
 
-static int a4xx_get_timestamp(struct msm_gpu *gpu, uint64_t *value)
+static u64 a4xx_get_timestamp(struct msm_gpu *gpu)
 {
-	*value = gpu_read64(gpu, REG_A4XX_RBBM_PERFCTR_CP_0_LO);
-
-	return 0;
+	return gpu_read64(gpu, REG_A4XX_RBBM_PERFCTR_CP_0_LO);
 }
 
 static u64 a4xx_gpu_busy(struct msm_gpu *gpu, unsigned long *out_sample_rate)
+8 -4
drivers/gpu/drm/msm/adreno/a5xx_gpu.c
···
 	return 0;
 }
 
-static int a5xx_get_timestamp(struct msm_gpu *gpu, uint64_t *value)
+static u64 a5xx_get_timestamp(struct msm_gpu *gpu)
 {
-	*value = gpu_read64(gpu, REG_A5XX_RBBM_ALWAYSON_COUNTER_LO);
-
-	return 0;
+	return gpu_read64(gpu, REG_A5XX_RBBM_ALWAYSON_COUNTER_LO);
 }
 
 struct a5xx_crashdumper {
···
 	struct adreno_gpu *adreno_gpu;
 	struct msm_gpu *gpu;
 	unsigned int nr_rings;
+	u32 speedbin;
 	int ret;
 
 	a5xx_gpu = kzalloc_obj(*a5xx_gpu);
···
 		a5xx_destroy(&(a5xx_gpu->base.base));
 		return ERR_PTR(ret);
 	}
+
+	/* Set the speedbin value that is passed to userspace */
+	if (adreno_read_speedbin(&pdev->dev, &speedbin) || !speedbin)
+		speedbin = 0xffff;
+	adreno_gpu->speedbin = (uint16_t) (0xffff & speedbin);
 
 	msm_mmu_set_fault_handler(to_msm_vm(gpu->vm)->mmu, gpu,
 				  a5xx_fault_handler);
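The a5xx speedbin hunk above falls back to the sentinel 0xffff when the fuse cannot be read or reads as zero, then truncates the value to 16 bits before exposing it to userspace. The same logic as a small sketch (the status convention mimics adreno_read_speedbin returning 0 on success; `userspace_speedbin` is an illustrative name, not a kernel helper):

```python
SPEEDBIN_UNKNOWN = 0xffff  # sentinel reported when no valid fuse value exists

def userspace_speedbin(read_err, raw):
    """read_err: nonzero means the fuse read failed (kernel-style status).
    raw: the fuse value as read. Returns the 16-bit value userspace sees."""
    if read_err or raw == 0:
        raw = SPEEDBIN_UNKNOWN
    return raw & 0xffff
```

Treating 0 the same as a failed read avoids reporting an unfused part as speedbin 0, which userspace could mistake for a real bin.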
+241 -2
drivers/gpu/drm/msm/adreno/a6xx_catalog.c
···
 
 DECLARE_ADRENO_PROTECT(x285_protect, 15);
 
+static const struct adreno_reglist_pipe x285_dyn_pwrup_reglist_regs[] = {
+	{ REG_A8XX_GRAS_TSEFE_DBG_ECO_CNTL, 0, BIT(PIPE_BV) | BIT(PIPE_BR) },
+	{ REG_A8XX_GRAS_NC_MODE_CNTL, 0, BIT(PIPE_BV) | BIT(PIPE_BR) },
+	{ REG_A8XX_GRAS_DBG_ECO_CNTL, 0, BIT(PIPE_BV) | BIT(PIPE_BR) },
+	{ REG_A6XX_PC_AUTO_VERTEX_STRIDE, 0, BIT(PIPE_BV) | BIT(PIPE_BR) },
+	{ REG_A8XX_PC_CHICKEN_BITS_1, 0, BIT(PIPE_BV) | BIT(PIPE_BR) },
+	{ REG_A8XX_PC_CHICKEN_BITS_2, 0, BIT(PIPE_BV) | BIT(PIPE_BR) },
+	{ REG_A8XX_PC_CHICKEN_BITS_3, 0, BIT(PIPE_BV) | BIT(PIPE_BR) },
+	{ REG_A8XX_PC_CHICKEN_BITS_4, 0, BIT(PIPE_BV) | BIT(PIPE_BR) },
+	{ REG_A8XX_PC_CONTEXT_SWITCH_STABILIZE_CNTL_1, 0, BIT(PIPE_BV) | BIT(PIPE_BR) },
+	{ REG_A8XX_PC_VIS_STREAM_CNTL, 0, BIT(PIPE_BV) | BIT(PIPE_BR) },
+	{ REG_A7XX_RB_CCU_CNTL, 0, BIT(PIPE_BR) },
+	{ REG_A7XX_RB_CCU_DBG_ECO_CNTL, 0, BIT(PIPE_BR)},
+	{ REG_A8XX_RB_CCU_NC_MODE_CNTL, 0, BIT(PIPE_BR) },
+	{ REG_A8XX_RB_CMP_NC_MODE_CNTL, 0, BIT(PIPE_BR) },
+	{ REG_A6XX_RB_RBP_CNTL, 0, BIT(PIPE_BR) },
+	{ REG_A8XX_RB_RESOLVE_PREFETCH_CNTL, 0, BIT(PIPE_BR) },
+	{ REG_A8XX_RB_CMP_DBG_ECO_CNTL, 0, BIT(PIPE_BR) },
+	{ REG_A7XX_VFD_DBG_ECO_CNTL, 0, BIT(PIPE_BV) | BIT(PIPE_BR) },
+	{ REG_A8XX_VFD_CB_BV_THRESHOLD, 0, BIT(PIPE_BV) | BIT(PIPE_BR) },
+	{ REG_A8XX_VFD_CB_BR_THRESHOLD, 0, BIT(PIPE_BV) | BIT(PIPE_BR) },
+	{ REG_A8XX_VFD_CB_BUSY_REQ_CNT, 0, BIT(PIPE_BV) | BIT(PIPE_BR) },
+	{ REG_A8XX_VFD_CB_LP_REQ_CNT, 0, BIT(PIPE_BV) | BIT(PIPE_BR) },
+	{ REG_A8XX_VPC_FLATSHADE_MODE_CNTL, 0, BIT(PIPE_BV) | BIT(PIPE_BR) },
+	{ REG_A8XX_CP_HW_FAULT_STATUS_MASK_PIPE, 0, BIT(PIPE_BR) |
+	  BIT(PIPE_BV) | BIT(PIPE_LPAC) | BIT(PIPE_AQE0) |
+	  BIT(PIPE_AQE1) | BIT(PIPE_DDE_BR) | BIT(PIPE_DDE_BV) },
+	{ REG_A8XX_CP_INTERRUPT_STATUS_MASK_PIPE, 0, BIT(PIPE_BR) |
+	  BIT(PIPE_BV) | BIT(PIPE_LPAC) | BIT(PIPE_AQE0) |
+	  BIT(PIPE_AQE1) | BIT(PIPE_DDE_BR) | BIT(PIPE_DDE_BV) },
+	{ REG_A8XX_CP_PROTECT_CNTL_PIPE, 0, BIT(PIPE_BR) | BIT(PIPE_BV) | BIT(PIPE_LPAC)},
+	{ REG_A8XX_CP_PROTECT_PIPE(15), 0, BIT(PIPE_BR) | BIT(PIPE_BV) | BIT(PIPE_LPAC) },
+	{ REG_A8XX_RB_GC_GMEM_PROTECT, 0, BIT(PIPE_BR) },
+	{ REG_A8XX_RB_LPAC_GMEM_PROTECT, 0, BIT(PIPE_BR) },
+	{ REG_A6XX_RB_CONTEXT_SWITCH_GMEM_SAVE_RESTORE_ENABLE, 0, BIT(PIPE_BR) },
+};
+DECLARE_ADRENO_REGLIST_PIPE_LIST(x285_dyn_pwrup_reglist);
+
 static const struct adreno_reglist_pipe a840_nonctxt_regs[] = {
 	{ REG_A8XX_CP_SMMU_STREAM_ID_LPAC, 0x00000101, BIT(PIPE_NONE) },
 	{ REG_A8XX_GRAS_DBG_ECO_CNTL, 0x00000800, BIT(PIPE_BV) | BIT(PIPE_BR) },
···
 	{ },
 };
 
+static const uint32_t a840_pwrup_reglist_regs[] = {
+	REG_A7XX_SP_HLSQ_TIMEOUT_THRESHOLD_DP,
+	REG_A7XX_SP_READ_SEL,
+	REG_A6XX_UCHE_MODE_CNTL,
+	REG_A8XX_UCHE_VARB_IDLE_TIMEOUT,
+	REG_A8XX_UCHE_GBIF_GX_CONFIG,
+	REG_A8XX_UCHE_CCHE_MODE_CNTL,
+	REG_A8XX_UCHE_CCHE_CACHE_WAYS,
+	REG_A8XX_UCHE_CACHE_WAYS,
+	REG_A8XX_UCHE_CCHE_GC_GMEM_RANGE_MIN,
+	REG_A8XX_UCHE_CCHE_GC_GMEM_RANGE_MIN + 1,
+	REG_A8XX_UCHE_CCHE_LPAC_GMEM_RANGE_MIN,
+	REG_A8XX_UCHE_CCHE_LPAC_GMEM_RANGE_MIN + 1,
+	REG_A8XX_UCHE_CCHE_TRAP_BASE,
+	REG_A8XX_UCHE_CCHE_TRAP_BASE + 1,
+	REG_A8XX_UCHE_CCHE_WRITE_THRU_BASE,
+	REG_A8XX_UCHE_CCHE_WRITE_THRU_BASE + 1,
+	REG_A8XX_UCHE_HW_DBG_CNTL,
+	REG_A8XX_UCHE_WRITE_THRU_BASE,
+	REG_A8XX_UCHE_WRITE_THRU_BASE + 1,
+	REG_A8XX_UCHE_TRAP_BASE,
+	REG_A8XX_UCHE_TRAP_BASE + 1,
+	REG_A8XX_UCHE_CLIENT_PF,
+	REG_A8XX_RB_CMP_NC_MODE_CNTL,
+	REG_A8XX_SP_HLSQ_GC_GMEM_RANGE_MIN,
+	REG_A8XX_SP_HLSQ_GC_GMEM_RANGE_MIN + 1,
+	REG_A6XX_TPL1_NC_MODE_CNTL,
+	REG_A6XX_TPL1_DBG_ECO_CNTL,
+	REG_A6XX_TPL1_DBG_ECO_CNTL1,
+	REG_A8XX_TPL1_BICUBIC_WEIGHTS_TABLE(0),
+	REG_A8XX_TPL1_BICUBIC_WEIGHTS_TABLE(1),
+	REG_A8XX_TPL1_BICUBIC_WEIGHTS_TABLE(2),
+	REG_A8XX_TPL1_BICUBIC_WEIGHTS_TABLE(3),
+	REG_A8XX_TPL1_BICUBIC_WEIGHTS_TABLE(4),
+	REG_A8XX_TPL1_BICUBIC_WEIGHTS_TABLE(5),
+	REG_A8XX_TPL1_BICUBIC_WEIGHTS_TABLE(6),
+	REG_A8XX_TPL1_BICUBIC_WEIGHTS_TABLE(7),
+	REG_A8XX_TPL1_BICUBIC_WEIGHTS_TABLE(8),
+	REG_A8XX_TPL1_BICUBIC_WEIGHTS_TABLE(9),
+	REG_A8XX_TPL1_BICUBIC_WEIGHTS_TABLE(10),
+	REG_A8XX_TPL1_BICUBIC_WEIGHTS_TABLE(11),
+	REG_A8XX_TPL1_BICUBIC_WEIGHTS_TABLE(12),
+	REG_A8XX_TPL1_BICUBIC_WEIGHTS_TABLE(13),
+	REG_A8XX_TPL1_BICUBIC_WEIGHTS_TABLE(14),
+	REG_A8XX_TPL1_BICUBIC_WEIGHTS_TABLE(15),
+	REG_A8XX_TPL1_BICUBIC_WEIGHTS_TABLE(16),
+	REG_A8XX_TPL1_BICUBIC_WEIGHTS_TABLE(17),
+	REG_A8XX_TPL1_BICUBIC_WEIGHTS_TABLE(18),
+	REG_A8XX_TPL1_BICUBIC_WEIGHTS_TABLE(19),
+};
+DECLARE_ADRENO_REGLIST_LIST(a840_pwrup_reglist);
+
+static const u32 a840_ifpc_reglist_regs[] = {
+	REG_A8XX_RBBM_NC_MODE_CNTL,
+	REG_A8XX_RBBM_SLICE_NC_MODE_CNTL,
+	REG_A6XX_SP_NC_MODE_CNTL,
+	REG_A6XX_SP_CHICKEN_BITS,
+	REG_A8XX_SP_SS_CHICKEN_BITS_0,
+	REG_A7XX_SP_CHICKEN_BITS_1,
+	REG_A7XX_SP_CHICKEN_BITS_2,
+	REG_A7XX_SP_CHICKEN_BITS_3,
+	REG_A8XX_SP_CHICKEN_BITS_4,
+	REG_A6XX_SP_PERFCTR_SHADER_MASK,
+	REG_A8XX_RBBM_SLICE_PERFCTR_CNTL,
+	REG_A8XX_RBBM_SLICE_INTERFACE_HANG_INT_CNTL,
+	REG_A7XX_SP_HLSQ_DBG_ECO_CNTL,
+	REG_A7XX_SP_HLSQ_DBG_ECO_CNTL_1,
+	REG_A7XX_SP_HLSQ_DBG_ECO_CNTL_2,
+	REG_A8XX_SP_HLSQ_DBG_ECO_CNTL_3,
+	REG_A8XX_SP_HLSQ_LPAC_GMEM_RANGE_MIN,
+	REG_A8XX_SP_HLSQ_LPAC_GMEM_RANGE_MIN + 1,
+	REG_A8XX_CP_INTERRUPT_STATUS_MASK_GLOBAL,
+	REG_A8XX_RBBM_PERFCTR_CNTL,
+	REG_A8XX_CP_PROTECT_GLOBAL(0),
+	REG_A8XX_CP_PROTECT_GLOBAL(1),
+	REG_A8XX_CP_PROTECT_GLOBAL(2),
+	REG_A8XX_CP_PROTECT_GLOBAL(3),
+	REG_A8XX_CP_PROTECT_GLOBAL(4),
+	REG_A8XX_CP_PROTECT_GLOBAL(5),
+	REG_A8XX_CP_PROTECT_GLOBAL(6),
+	REG_A8XX_CP_PROTECT_GLOBAL(7),
+	REG_A8XX_CP_PROTECT_GLOBAL(8),
+	REG_A8XX_CP_PROTECT_GLOBAL(9),
+	REG_A8XX_CP_PROTECT_GLOBAL(10),
+	REG_A8XX_CP_PROTECT_GLOBAL(11),
+	REG_A8XX_CP_PROTECT_GLOBAL(12),
+	REG_A8XX_CP_PROTECT_GLOBAL(13),
+	REG_A8XX_CP_PROTECT_GLOBAL(14),
+	REG_A8XX_CP_PROTECT_GLOBAL(15),
+	REG_A8XX_CP_PROTECT_GLOBAL(16),
+	REG_A8XX_CP_PROTECT_GLOBAL(17),
+	REG_A8XX_CP_PROTECT_GLOBAL(18),
+	REG_A8XX_CP_PROTECT_GLOBAL(19),
+	REG_A8XX_CP_PROTECT_GLOBAL(20),
+	REG_A8XX_CP_PROTECT_GLOBAL(21),
+	REG_A8XX_CP_PROTECT_GLOBAL(22),
+	REG_A8XX_CP_PROTECT_GLOBAL(23),
+	REG_A8XX_CP_PROTECT_GLOBAL(24),
+	REG_A8XX_CP_PROTECT_GLOBAL(25),
+	REG_A8XX_CP_PROTECT_GLOBAL(26),
+	REG_A8XX_CP_PROTECT_GLOBAL(27),
+	REG_A8XX_CP_PROTECT_GLOBAL(28),
+	REG_A8XX_CP_PROTECT_GLOBAL(29),
+	REG_A8XX_CP_PROTECT_GLOBAL(30),
+	REG_A8XX_CP_PROTECT_GLOBAL(31),
+	REG_A8XX_CP_PROTECT_GLOBAL(32),
+	REG_A8XX_CP_PROTECT_GLOBAL(33),
+	REG_A8XX_CP_PROTECT_GLOBAL(34),
+	REG_A8XX_CP_PROTECT_GLOBAL(35),
+	REG_A8XX_CP_PROTECT_GLOBAL(36),
+	REG_A8XX_CP_PROTECT_GLOBAL(37),
+	REG_A8XX_CP_PROTECT_GLOBAL(38),
+	REG_A8XX_CP_PROTECT_GLOBAL(39),
+	REG_A8XX_CP_PROTECT_GLOBAL(40),
+	REG_A8XX_CP_PROTECT_GLOBAL(41),
+	REG_A8XX_CP_PROTECT_GLOBAL(42),
+	REG_A8XX_CP_PROTECT_GLOBAL(43),
+	REG_A8XX_CP_PROTECT_GLOBAL(44),
+	REG_A8XX_CP_PROTECT_GLOBAL(45),
+	REG_A8XX_CP_PROTECT_GLOBAL(46),
+	REG_A8XX_CP_PROTECT_GLOBAL(47),
+	REG_A8XX_CP_PROTECT_GLOBAL(48),
+	REG_A8XX_CP_PROTECT_GLOBAL(49),
+	REG_A8XX_CP_PROTECT_GLOBAL(50),
+	REG_A8XX_CP_PROTECT_GLOBAL(51),
+	REG_A8XX_CP_PROTECT_GLOBAL(52),
+	REG_A8XX_CP_PROTECT_GLOBAL(53),
+	REG_A8XX_CP_PROTECT_GLOBAL(54),
+	REG_A8XX_CP_PROTECT_GLOBAL(55),
+	REG_A8XX_CP_PROTECT_GLOBAL(56),
+	REG_A8XX_CP_PROTECT_GLOBAL(57),
+	REG_A8XX_CP_PROTECT_GLOBAL(58),
+	REG_A8XX_CP_PROTECT_GLOBAL(59),
+	REG_A8XX_CP_PROTECT_GLOBAL(60),
+	REG_A8XX_CP_PROTECT_GLOBAL(61),
+	REG_A8XX_CP_PROTECT_GLOBAL(62),
+	REG_A8XX_CP_PROTECT_GLOBAL(63),
+};
+DECLARE_ADRENO_REGLIST_LIST(a840_ifpc_reglist);
+
+static const struct adreno_reglist_pipe a840_dyn_pwrup_reglist_regs[] = {
+	{ REG_A8XX_GRAS_TSEFE_DBG_ECO_CNTL, 0, BIT(PIPE_BV) | BIT(PIPE_BR) },
+	{ REG_A8XX_GRAS_NC_MODE_CNTL, 0, BIT(PIPE_BV) | BIT(PIPE_BR) },
+	{ REG_A8XX_GRAS_DBG_ECO_CNTL, 0, BIT(PIPE_BV) | BIT(PIPE_BR) },
+	{ REG_A6XX_PC_AUTO_VERTEX_STRIDE, 0, BIT(PIPE_BV) | BIT(PIPE_BR) },
+	{ REG_A8XX_PC_CHICKEN_BITS_1, 0, BIT(PIPE_BV) | BIT(PIPE_BR) },
+	{ REG_A8XX_PC_CHICKEN_BITS_2, 0, BIT(PIPE_BV) | BIT(PIPE_BR) },
+	{ REG_A8XX_PC_CHICKEN_BITS_3, 0, BIT(PIPE_BV) | BIT(PIPE_BR) },
+	{ REG_A8XX_PC_CHICKEN_BITS_4, 0, BIT(PIPE_BV) | BIT(PIPE_BR) },
+	{ REG_A8XX_PC_CONTEXT_SWITCH_STABILIZE_CNTL_1, 0, BIT(PIPE_BV) | BIT(PIPE_BR) },
+	{ REG_A8XX_PC_VIS_STREAM_CNTL, 0, BIT(PIPE_BV) | BIT(PIPE_BR) },
+	{ REG_A7XX_RB_CCU_CNTL, 0, BIT(PIPE_BR) },
+	{ REG_A7XX_RB_CCU_DBG_ECO_CNTL, 0, BIT(PIPE_BR)},
+	{ REG_A8XX_RB_CCU_NC_MODE_CNTL, 0, BIT(PIPE_BR) },
+	{ REG_A8XX_RB_CMP_NC_MODE_CNTL, 0, BIT(PIPE_BR) },
+	{ REG_A6XX_RB_RBP_CNTL, 0, BIT(PIPE_BV) | BIT(PIPE_BR) },
+	{ REG_A8XX_RB_RESOLVE_PREFETCH_CNTL, 0, BIT(PIPE_BR) },
+	{ REG_A6XX_RB_DBG_ECO_CNTL, 0, BIT(PIPE_BV) | BIT(PIPE_BR) },
+	{ REG_A8XX_RB_CMP_DBG_ECO_CNTL, 0, BIT(PIPE_BR) },
+	{ REG_A7XX_VFD_DBG_ECO_CNTL, 0, BIT(PIPE_BV) | BIT(PIPE_BR) },
+	{ REG_A8XX_VFD_CB_BV_THRESHOLD, 0, BIT(PIPE_BV) | BIT(PIPE_BR) },
+	{ REG_A8XX_VFD_CB_BR_THRESHOLD, 0, BIT(PIPE_BV) | BIT(PIPE_BR) },
+	{
REG_A8XX_VFD_CB_BUSY_REQ_CNT, 0, BIT(PIPE_BV) | BIT(PIPE_BR) }, 2057 + { REG_A8XX_VFD_CB_LP_REQ_CNT, 0, BIT(PIPE_BV) | BIT(PIPE_BR) }, 2058 + { REG_A8XX_VPC_FLATSHADE_MODE_CNTL, 0, BIT(PIPE_BV) | BIT(PIPE_BR) }, 2059 + { REG_A8XX_CP_HW_FAULT_STATUS_MASK_PIPE, 0, BIT(PIPE_BR) | 2060 + BIT(PIPE_BV) | BIT(PIPE_LPAC) | BIT(PIPE_AQE0) | 2061 + BIT(PIPE_AQE1) | BIT(PIPE_DDE_BR) | BIT(PIPE_DDE_BV) }, 2062 + { REG_A8XX_CP_INTERRUPT_STATUS_MASK_PIPE, 0, BIT(PIPE_BR) | 2063 + BIT(PIPE_BV) | BIT(PIPE_LPAC) | BIT(PIPE_AQE0) | 2064 + BIT(PIPE_AQE1) | BIT(PIPE_DDE_BR) | BIT(PIPE_DDE_BV) }, 2065 + { REG_A8XX_CP_PROTECT_CNTL_PIPE, 0, BIT(PIPE_BR) | BIT(PIPE_BV) | BIT(PIPE_LPAC)}, 2066 + { REG_A8XX_CP_PROTECT_PIPE(15), 0, BIT(PIPE_BR) | BIT(PIPE_BV) | BIT(PIPE_LPAC) }, 2067 + { REG_A8XX_RB_GC_GMEM_PROTECT, 0, BIT(PIPE_BR) }, 2068 + { REG_A8XX_RB_LPAC_GMEM_PROTECT, 0, BIT(PIPE_BR) }, 2069 + { REG_A6XX_RB_CONTEXT_SWITCH_GMEM_SAVE_RESTORE_ENABLE, 0, BIT(PIPE_BR) }, 2070 + }; 2071 + DECLARE_ADRENO_REGLIST_PIPE_LIST(a840_dyn_pwrup_reglist); 2072 + 1932 2073 static const struct adreno_info a8xx_gpus[] = { 1933 2074 { 1934 2075 .chip_ids = ADRENO_CHIP_IDS(0x44070001), ··· 2119 1902 .gmem = 21 * SZ_1M, 2120 1903 .inactive_period = DRM_MSM_INACTIVE_PERIOD, 2121 1904 .quirks = ADRENO_QUIRK_HAS_CACHED_COHERENT | 2122 - ADRENO_QUIRK_HAS_HW_APRIV, 1905 + ADRENO_QUIRK_HAS_HW_APRIV | 1906 + ADRENO_QUIRK_PREEMPTION | 1907 + ADRENO_QUIRK_SOFTFUSE, 2123 1908 .funcs = &a8xx_gpu_funcs, 2124 1909 .a6xx = &(const struct a6xx_info) { 2125 1910 .protect = &x285_protect, 2126 1911 .nonctxt_reglist = x285_nonctxt_regs, 1912 + .pwrup_reglist = &a840_pwrup_reglist, 1913 + .dyn_pwrup_reglist = &x285_dyn_pwrup_reglist, 1914 + .ifpc_reglist = &a840_ifpc_reglist, 2127 1915 .gbif_cx = a840_gbif, 2128 1916 .max_slices = 4, 2129 1917 .gmu_chipid = 0x8010100, ··· 2144 1922 { /* sentinel */ }, 2145 1923 }, 2146 1924 }, 1925 + .speedbins = ADRENO_SPEEDBINS( 1926 + { 0, 0 }, 1927 + { 388, 1 }, 1928 + { 357, 2 }, 1929 + 
{ 284, 3 }, 1930 + ), 2147 1931 }, { 2148 1932 .chip_ids = ADRENO_CHIP_IDS(0x44050a01), 2149 1933 .family = ADRENO_8XX_GEN2, ··· 2161 1933 .gmem = 18 * SZ_1M, 2162 1934 .inactive_period = DRM_MSM_INACTIVE_PERIOD, 2163 1935 .quirks = ADRENO_QUIRK_HAS_CACHED_COHERENT | 2164 - ADRENO_QUIRK_HAS_HW_APRIV, 1936 + ADRENO_QUIRK_HAS_HW_APRIV | 1937 + ADRENO_QUIRK_PREEMPTION | 1938 + ADRENO_QUIRK_IFPC, 2165 1939 .funcs = &a8xx_gpu_funcs, 2166 1940 .a6xx = &(const struct a6xx_info) { 2167 1941 .protect = &a840_protect, 2168 1942 .nonctxt_reglist = a840_nonctxt_regs, 1943 + .pwrup_reglist = &a840_pwrup_reglist, 1944 + .dyn_pwrup_reglist = &a840_dyn_pwrup_reglist, 1945 + .ifpc_reglist = &a840_ifpc_reglist, 2169 1946 .gbif_cx = a840_gbif, 2170 1947 .max_slices = 3, 2171 1948 .gmu_chipid = 0x8020100, ··· 2187 1954 }, 2188 1955 }, 2189 1956 .preempt_record_size = 19708 * SZ_1K, 1957 + .speedbins = ADRENO_SPEEDBINS( 1958 + { 0, 0 }, 1959 + { 273, 1 }, 1960 + { 252, 2 }, 1961 + { 221, 3 }, 1962 + ), 2190 1963 } 2191 1964 }; 2192 1965
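The `ADRENO_SPEEDBINS` tables added above pair a raw fuse value with a bin index; the driver turns the matched bin into a supported-hardware bitmask for the OPP framework. A minimal sketch of that lookup, with illustrative names (the table below reuses the a840 values from the hunk, but the function and struct names are not the kernel's):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for the driver's speedbin table: each entry pairs a
 * raw fuse value with the bin index used to build the supported-hw bitmask. */
struct speedbin { uint32_t fuse; uint32_t bin; };

static const struct speedbin a840_speedbins[] = {
	{ 0, 0 }, { 273, 1 }, { 252, 2 }, { 221, 3 },
};

/* Return BIT(bin) for a matching fuse value, or UINT32_MAX when the fuse is
 * unknown, so the caller can fall back to a conservative frequency set. */
static uint32_t fuse_to_supp_hw(uint32_t fuse)
{
	for (unsigned int i = 0; i < sizeof(a840_speedbins) / sizeof(a840_speedbins[0]); i++)
		if (a840_speedbins[i].fuse == fuse)
			return 1u << a840_speedbins[i].bin;
	return UINT32_MAX;
}
```

An unrecognized fuse deliberately maps to "all bins", matching the defensive pattern of not rejecting hardware the table simply has not catalogued yet.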
+116 -24
drivers/gpu/drm/msm/adreno/a6xx_gmu.c
··· 3 3 4 4 #include <linux/bitfield.h> 5 5 #include <linux/clk.h> 6 + #include <linux/firmware/qcom/qcom_scm.h> 6 7 #include <linux/interconnect.h> 7 8 #include <linux/of_platform.h> 8 9 #include <linux/platform_device.h> ··· 92 91 } 93 92 94 93 /* Check to see if the GX rail is still powered */ 95 - bool a6xx_gmu_gx_is_on(struct a6xx_gmu *gmu) 94 + bool a6xx_gmu_gx_is_on(struct adreno_gpu *adreno_gpu) 96 95 { 97 - struct a6xx_gpu *a6xx_gpu = container_of(gmu, struct a6xx_gpu, gmu); 98 - struct adreno_gpu *adreno_gpu = &a6xx_gpu->base; 96 + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); 97 + struct a6xx_gmu *gmu = &a6xx_gpu->gmu; 99 98 u32 val; 100 99 101 100 /* This can be called from gpu state code so make sure GMU is valid */ ··· 116 115 return !(val & 117 116 (A6XX_GMU_SPTPRAC_PWR_CLK_STATUS_GX_HM_GDSC_POWER_OFF | 118 117 A6XX_GMU_SPTPRAC_PWR_CLK_STATUS_GX_HM_CLK_OFF)); 118 + } 119 + 120 + bool a7xx_gmu_gx_is_on(struct adreno_gpu *adreno_gpu) 121 + { 122 + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); 123 + struct a6xx_gmu *gmu = &a6xx_gpu->gmu; 124 + u32 val; 125 + 126 + /* This can be called from gpu state code so make sure GMU is valid */ 127 + if (!gmu->initialized) 128 + return false; 129 + 130 + val = gmu_read(gmu, REG_A6XX_GMU_SPTPRAC_PWR_CLK_STATUS); 131 + 132 + return !(val & 133 + (A7XX_GMU_SPTPRAC_PWR_CLK_STATUS_GX_HM_GDSC_POWER_OFF | 134 + A7XX_GMU_SPTPRAC_PWR_CLK_STATUS_GX_HM_CLK_OFF)); 135 + } 136 + 137 + bool a8xx_gmu_gx_is_on(struct adreno_gpu *adreno_gpu) 138 + { 139 + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); 140 + struct a6xx_gmu *gmu = &a6xx_gpu->gmu; 141 + u32 val; 142 + 143 + /* This can be called from gpu state code so make sure GMU is valid */ 144 + if (!gmu->initialized) 145 + return false; 146 + 147 + val = gmu_read(gmu, REG_A8XX_GMU_PWR_CLK_STATUS); 148 + 149 + return !(val & 150 + (A8XX_GMU_PWR_CLK_STATUS_GX_HM_GDSC_POWER_OFF | 151 + A8XX_GMU_PWR_CLK_STATUS_GX_HM_CLK_OFF)); 119 152 } 120 153 121 154 void 
a6xx_gmu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp, ··· 275 240 276 241 if (val == local) { 277 242 if (gmu->idle_level != GMU_IDLE_STATE_IFPC || 278 - !a6xx_gmu_gx_is_on(gmu)) 243 + !adreno_gpu->funcs->gx_is_on(adreno_gpu)) 279 244 return true; 280 245 } 281 246 ··· 1192 1157 dev_pm_opp_put(gpu_opp); 1193 1158 } 1194 1159 1160 + static int a6xx_gmu_secure_init(struct a6xx_gpu *a6xx_gpu) 1161 + { 1162 + struct adreno_gpu *adreno_gpu = &a6xx_gpu->base; 1163 + struct msm_gpu *gpu = &adreno_gpu->base; 1164 + struct a6xx_gmu *gmu = &a6xx_gpu->gmu; 1165 + u32 fuse_val; 1166 + int ret; 1167 + 1168 + if (test_bit(GMU_STATUS_SECURE_INIT, &gmu->status)) 1169 + return 0; 1170 + 1171 + if (adreno_is_a750(adreno_gpu) || adreno_is_a8xx(adreno_gpu)) { 1172 + /* 1173 + * Assume that if qcom scm isn't available, that whatever 1174 + * replacement allows writing the fuse register ourselves. 1175 + * Users of alternative firmware need to make sure this 1176 + * register is writeable or indicate that it's not somehow. 1177 + * Print a warning because if you mess this up you're about to 1178 + * crash horribly. 1179 + */ 1180 + if (!qcom_scm_is_available()) { 1181 + dev_warn_once(gpu->dev->dev, 1182 + "SCM is not available, poking fuse register\n"); 1183 + a6xx_llc_write(a6xx_gpu, REG_A7XX_CX_MISC_SW_FUSE_VALUE, 1184 + A7XX_CX_MISC_SW_FUSE_VALUE_RAYTRACING | 1185 + A7XX_CX_MISC_SW_FUSE_VALUE_FASTBLEND | 1186 + A7XX_CX_MISC_SW_FUSE_VALUE_LPAC); 1187 + adreno_gpu->has_ray_tracing = true; 1188 + goto done; 1189 + } 1190 + 1191 + ret = qcom_scm_gpu_init_regs(QCOM_SCM_GPU_ALWAYS_EN_REQ | 1192 + QCOM_SCM_GPU_TSENSE_EN_REQ); 1193 + if (ret) { 1194 + dev_warn_once(gpu->dev->dev, 1195 + "SCM call failed\n"); 1196 + return ret; 1197 + } 1198 + 1199 + /* 1200 + * On A7XX_GEN3 and newer, raytracing may be disabled by the 1201 + * firmware, find out whether that's the case. The scm call 1202 + * above sets the fuse register. 
1203 + */ 1204 + fuse_val = a6xx_llc_read(a6xx_gpu, 1205 + REG_A7XX_CX_MISC_SW_FUSE_VALUE); 1206 + adreno_gpu->has_ray_tracing = 1207 + !!(fuse_val & A7XX_CX_MISC_SW_FUSE_VALUE_RAYTRACING); 1208 + } else if (adreno_is_a740(adreno_gpu)) { 1209 + /* Raytracing is always enabled on a740 */ 1210 + adreno_gpu->has_ray_tracing = true; 1211 + } 1212 + 1213 + done: 1214 + set_bit(GMU_STATUS_SECURE_INIT, &gmu->status); 1215 + return 0; 1216 + } 1217 + 1218 + 1195 1219 int a6xx_gmu_resume(struct a6xx_gpu *a6xx_gpu) 1196 1220 { 1197 1221 struct adreno_gpu *adreno_gpu = &a6xx_gpu->base; ··· 1279 1185 clk_set_rate(gmu->hub_clk, adreno_is_a740_family(adreno_gpu) ? 1280 1186 200000000 : 150000000); 1281 1187 ret = clk_bulk_prepare_enable(gmu->nr_clocks, gmu->clocks); 1282 - if (ret) { 1283 - pm_runtime_put(gmu->gxpd); 1284 - pm_runtime_put(gmu->dev); 1285 - return ret; 1286 - } 1188 + if (ret) 1189 + goto rpm_put; 1190 + 1191 + ret = a6xx_gmu_secure_init(a6xx_gpu); 1192 + if (ret) 1193 + goto disable_clk; 1287 1194 1288 1195 /* Read the slice info on A8x GPUs */ 1289 1196 a8xx_gpu_get_slice_info(gpu); ··· 1314 1219 1315 1220 ret = a6xx_gmu_fw_start(gmu, status); 1316 1221 if (ret) 1317 - goto out; 1222 + goto disable_irq; 1318 1223 1319 1224 ret = a6xx_hfi_start(gmu, status); 1320 1225 if (ret) 1321 - goto out; 1226 + goto disable_irq; 1322 1227 1323 1228 /* 1324 1229 * Turn on the GMU firmware fault interrupt after we know the boot ··· 1331 1236 /* Set the GPU to the current freq */ 1332 1237 a6xx_gmu_set_initial_freq(gpu, gmu); 1333 1238 1334 - if (refcount_read(&gpu->sysprof_active) > 1) { 1335 - ret = a6xx_gmu_set_oob(gmu, GMU_OOB_PERFCOUNTER_SET); 1336 - if (!ret) 1337 - set_bit(GMU_STATUS_OOB_PERF_SET, &gmu->status); 1338 - } 1339 - out: 1340 - /* On failure, shut down the GMU to leave it in a good state */ 1341 - if (ret) { 1342 - disable_irq(gmu->gmu_irq); 1343 - a6xx_rpmh_stop(gmu); 1344 - pm_runtime_put(gmu->gxpd); 1345 - pm_runtime_put(gmu->dev); 1346 - } 1239 + return 
0; 1240 + 1241 + disable_irq: 1242 + disable_irq(gmu->gmu_irq); 1243 + a6xx_rpmh_stop(gmu); 1244 + disable_clk: 1245 + clk_bulk_disable_unprepare(gmu->nr_clocks, gmu->clocks); 1246 + rpm_put: 1247 + pm_runtime_put(gmu->gxpd); 1248 + pm_runtime_put(gmu->dev); 1347 1249 1348 1250 return ret; 1349 1251 }
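The `a6xx_gmu_resume()` change above replaces ad-hoc cleanup on failure with the kernel's goto-unwind ladder, where each label releases exactly what was acquired before the failing step (the new `disable_clk` label fixes clocks being left enabled on a secure-init failure). A toy sketch of the shape, with made-up resource names:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy resources standing in for power domains / clocks / IRQ + firmware;
 * only the unwind structure mirrors the resume path above. */
static int acquired[3];

static int acquire(int idx, bool fail) { if (fail) return -1; acquired[idx] = 1; return 0; }
static void release(int idx) { acquired[idx] = 0; }

static int resume(bool fail_a, bool fail_b, bool fail_c)
{
	int ret;

	ret = acquire(0, fail_a);        /* e.g. pm_runtime_get */
	if (ret)
		return ret;
	ret = acquire(1, fail_b);        /* e.g. clk_bulk_prepare_enable */
	if (ret)
		goto put_rpm;
	ret = acquire(2, fail_c);        /* e.g. fw start / enable IRQ */
	if (ret)
		goto disable_clk;
	return 0;

disable_clk:
	release(1);                      /* undo step 2 */
put_rpm:
	release(0);                      /* undo step 1 */
	return ret;
}
```

Because the labels are ordered inversely to acquisition, a failure at any step falls through every release it needs and none it does not.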
+7 -2
drivers/gpu/drm/msm/adreno/a6xx_gmu.h
··· 10 10 #include <linux/notifier.h> 11 11 #include <linux/soc/qcom/qcom_aoss.h> 12 12 #include "msm_drv.h" 13 + #include "adreno_gpu.h" 13 14 #include "a6xx_hfi.h" 14 15 15 16 struct a6xx_gmu_bo { ··· 111 110 112 111 unsigned long freq; 113 112 114 - struct a6xx_hfi_queue queues[2]; 113 + struct a6xx_hfi_queue queues[HFI_MAX_QUEUES]; 115 114 116 115 bool initialized; 117 116 bool hung; ··· 130 129 #define GMU_STATUS_PDC_SLEEP 1 131 130 /* To track Perfcounter OOB set status */ 132 131 #define GMU_STATUS_OOB_PERF_SET 2 132 + /* To track whether secure world init was done */ 133 + #define GMU_STATUS_SECURE_INIT 3 133 134 unsigned long status; 134 135 }; 135 136 ··· 234 231 int a6xx_hfi_send_prep_slumber(struct a6xx_gmu *gmu); 235 232 int a6xx_hfi_set_freq(struct a6xx_gmu *gmu, u32 perf_index, u32 bw_index); 236 233 237 - bool a6xx_gmu_gx_is_on(struct a6xx_gmu *gmu); 234 + bool a6xx_gmu_gx_is_on(struct adreno_gpu *adreno_gpu); 235 + bool a7xx_gmu_gx_is_on(struct adreno_gpu *adreno_gpu); 236 + bool a8xx_gmu_gx_is_on(struct adreno_gpu *adreno_gpu); 238 237 bool a6xx_gmu_sptprac_is_on(struct a6xx_gmu *gmu); 239 238 void a6xx_sptprac_disable(struct a6xx_gmu *gmu); 240 239 int a6xx_sptprac_enable(struct a6xx_gmu *gmu);
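The new `GMU_STATUS_SECURE_INIT` bit above lets `a6xx_gmu_secure_init()` run its SCM call once per GMU lifetime and become a cheap no-op on every later resume. A minimal userspace sketch of that test-then-set guard (the kernel uses `test_bit`/`set_bit` on `gmu->status`; names below are illustrative):

```c
#include <assert.h>

static unsigned long status;
static int init_calls;                 /* counts the expensive work */

#define SECURE_INIT_BIT (1ul << 3)     /* analogous to GMU_STATUS_SECURE_INIT */

static int secure_init(void)
{
	if (status & SECURE_INIT_BIT)
		return 0;              /* already done, skip */
	init_calls++;                  /* one-time secure-world setup here */
	status |= SECURE_INIT_BIT;
	return 0;
}

/* Helper for demonstration: invoke twice, report how often work ran. */
static int run_twice(void)
{
	secure_init();
	secure_init();
	return init_calls;
}
```

In the driver the guard is what makes it safe to call the init from the resume path, which runs on every runtime-PM cycle.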
+78 -92
drivers/gpu/drm/msm/adreno/a6xx_gpu.c
··· 10 10 11 11 #include <linux/bitfield.h> 12 12 #include <linux/devfreq.h> 13 - #include <linux/firmware/qcom/qcom_scm.h> 14 13 #include <linux/pm_domain.h> 15 14 #include <linux/soc/qcom/llcc-qcom.h> 16 15 17 16 #define GPU_PAS_ID 13 18 17 19 - static u64 read_gmu_ao_counter(struct a6xx_gpu *a6xx_gpu) 18 + static u64 a6xx_gmu_get_timestamp(struct msm_gpu *gpu) 20 19 { 20 + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); 21 + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); 21 22 u64 count_hi, count_lo, temp; 22 23 23 24 do { ··· 346 345 * GPU registers so we need to add 0x1a800 to the register value on A630 347 346 * to get the right value from PM4. 348 347 */ 349 - get_stats_counter(ring, REG_A6XX_CP_ALWAYS_ON_COUNTER, 348 + get_stats_counter(ring, REG_A6XX_CP_ALWAYS_ON_CONTEXT, 350 349 rbmemptr_stats(ring, index, alwayson_start)); 351 350 352 351 /* Invalidate CCU depth and color */ ··· 387 386 388 387 get_stats_counter(ring, REG_A6XX_RBBM_PERFCTR_CP(0), 389 388 rbmemptr_stats(ring, index, cpcycles_end)); 390 - get_stats_counter(ring, REG_A6XX_CP_ALWAYS_ON_COUNTER, 389 + get_stats_counter(ring, REG_A6XX_CP_ALWAYS_ON_CONTEXT, 391 390 rbmemptr_stats(ring, index, alwayson_end)); 392 391 393 392 /* Write the fence to the scratch register */ ··· 405 404 OUT_RING(ring, upper_32_bits(rbmemptr(ring, fence))); 406 405 OUT_RING(ring, submit->seqno); 407 406 408 - trace_msm_gpu_submit_flush(submit, read_gmu_ao_counter(a6xx_gpu)); 407 + trace_msm_gpu_submit_flush(submit, adreno_gpu->funcs->get_timestamp(gpu)); 409 408 410 409 a6xx_flush(gpu, ring); 411 410 } 412 411 413 - static void a6xx_emit_set_pseudo_reg(struct msm_ringbuffer *ring, 412 + void a6xx_emit_set_pseudo_reg(struct msm_ringbuffer *ring, 414 413 struct a6xx_gpu *a6xx_gpu, struct msm_gpu_submitqueue *queue) 415 414 { 416 415 u64 preempt_postamble; ··· 456 455 struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); 457 456 struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); 458 457 struct msm_ringbuffer 
*ring = submit->ring; 459 - u32 rbbm_perfctr_cp0, cp_always_on_counter; 458 + u32 rbbm_perfctr_cp0, cp_always_on_context; 460 459 unsigned int i, ibs = 0; 461 460 462 461 adreno_check_and_reenable_stall(adreno_gpu); ··· 479 478 480 479 if (adreno_is_a8xx(adreno_gpu)) { 481 480 rbbm_perfctr_cp0 = REG_A8XX_RBBM_PERFCTR_CP(0); 482 - cp_always_on_counter = REG_A8XX_CP_ALWAYS_ON_COUNTER; 481 + cp_always_on_context = REG_A8XX_CP_ALWAYS_ON_CONTEXT; 483 482 } else { 484 483 rbbm_perfctr_cp0 = REG_A7XX_RBBM_PERFCTR_CP(0); 485 - cp_always_on_counter = REG_A6XX_CP_ALWAYS_ON_COUNTER; 484 + cp_always_on_context = REG_A6XX_CP_ALWAYS_ON_CONTEXT; 486 485 } 487 486 488 487 get_stats_counter(ring, rbbm_perfctr_cp0, rbmemptr_stats(ring, index, cpcycles_start)); 489 - get_stats_counter(ring, cp_always_on_counter, rbmemptr_stats(ring, index, alwayson_start)); 488 + get_stats_counter(ring, cp_always_on_context, rbmemptr_stats(ring, index, alwayson_start)); 490 489 491 490 OUT_PKT7(ring, CP_THREAD_CONTROL, 1); 492 491 OUT_RING(ring, CP_SET_THREAD_BOTH); ··· 534 533 } 535 534 536 535 get_stats_counter(ring, rbbm_perfctr_cp0, rbmemptr_stats(ring, index, cpcycles_end)); 537 - get_stats_counter(ring, cp_always_on_counter, rbmemptr_stats(ring, index, alwayson_end)); 536 + get_stats_counter(ring, cp_always_on_context, rbmemptr_stats(ring, index, alwayson_end)); 538 537 539 538 /* Write the fence to the scratch register */ 540 539 if (adreno_is_a8xx(adreno_gpu)) { ··· 615 614 } 616 615 617 616 618 - trace_msm_gpu_submit_flush(submit, read_gmu_ao_counter(a6xx_gpu)); 617 + trace_msm_gpu_submit_flush(submit, adreno_gpu->funcs->get_timestamp(gpu)); 619 618 620 619 a6xx_flush(gpu, ring); 621 620 622 621 /* Check to see if we need to start preemption */ 623 - a6xx_preempt_trigger(gpu); 622 + if (adreno_is_a8xx(adreno_gpu)) 623 + a8xx_preempt_trigger(gpu); 624 + else 625 + a6xx_preempt_trigger(gpu); 624 626 } 625 627 626 628 static void a6xx_set_hwcg(struct msm_gpu *gpu, bool state) ··· 1607 1603 
a6xx_gmu_clear_oob(&a6xx_gpu->gmu, GMU_OOB_BOOT_SLUMBER); 1608 1604 } 1609 1605 1606 + if (!ret && (refcount_read(&gpu->sysprof_active) > 1)) { 1607 + ret = a6xx_gmu_set_oob(gmu, GMU_OOB_PERFCOUNTER_SET); 1608 + if (!ret) 1609 + set_bit(GMU_STATUS_OOB_PERF_SET, &gmu->status); 1610 + } 1611 + 1610 1612 return ret; 1611 1613 } 1612 1614 ··· 1645 1635 1646 1636 adreno_dump_info(gpu); 1647 1637 1648 - if (a6xx_gmu_gx_is_on(&a6xx_gpu->gmu)) { 1638 + if (adreno_gpu->funcs->gx_is_on(adreno_gpu)) { 1649 1639 /* Sometimes crashstate capture is skipped, so SQE should be halted here again */ 1650 1640 gpu_write(gpu, REG_A6XX_CP_SQE_CNTL, 3); 1651 1641 ··· 2162 2152 a6xx_gpu->llc_mmio = ERR_PTR(-EINVAL); 2163 2153 } 2164 2154 2165 - static int a7xx_cx_mem_init(struct a6xx_gpu *a6xx_gpu) 2166 - { 2167 - struct adreno_gpu *adreno_gpu = &a6xx_gpu->base; 2168 - struct msm_gpu *gpu = &adreno_gpu->base; 2169 - u32 fuse_val; 2170 - int ret; 2171 - 2172 - if (adreno_is_a750(adreno_gpu) || adreno_is_a8xx(adreno_gpu)) { 2173 - /* 2174 - * Assume that if qcom scm isn't available, that whatever 2175 - * replacement allows writing the fuse register ourselves. 2176 - * Users of alternative firmware need to make sure this 2177 - * register is writeable or indicate that it's not somehow. 2178 - * Print a warning because if you mess this up you're about to 2179 - * crash horribly. 
2180 - */ 2181 - if (!qcom_scm_is_available()) { 2182 - dev_warn_once(gpu->dev->dev, 2183 - "SCM is not available, poking fuse register\n"); 2184 - a6xx_llc_write(a6xx_gpu, REG_A7XX_CX_MISC_SW_FUSE_VALUE, 2185 - A7XX_CX_MISC_SW_FUSE_VALUE_RAYTRACING | 2186 - A7XX_CX_MISC_SW_FUSE_VALUE_FASTBLEND | 2187 - A7XX_CX_MISC_SW_FUSE_VALUE_LPAC); 2188 - adreno_gpu->has_ray_tracing = true; 2189 - return 0; 2190 - } 2191 - 2192 - ret = qcom_scm_gpu_init_regs(QCOM_SCM_GPU_ALWAYS_EN_REQ | 2193 - QCOM_SCM_GPU_TSENSE_EN_REQ); 2194 - if (ret) 2195 - return ret; 2196 - 2197 - /* 2198 - * On A7XX_GEN3 and newer, raytracing may be disabled by the 2199 - * firmware, find out whether that's the case. The scm call 2200 - * above sets the fuse register. 2201 - */ 2202 - fuse_val = a6xx_llc_read(a6xx_gpu, 2203 - REG_A7XX_CX_MISC_SW_FUSE_VALUE); 2204 - adreno_gpu->has_ray_tracing = 2205 - !!(fuse_val & A7XX_CX_MISC_SW_FUSE_VALUE_RAYTRACING); 2206 - } else if (adreno_is_a740(adreno_gpu)) { 2207 - /* Raytracing is always enabled on a740 */ 2208 - adreno_gpu->has_ray_tracing = true; 2209 - } 2210 - 2211 - return 0; 2212 - } 2213 - 2214 - 2215 2155 #define GBIF_CLIENT_HALT_MASK BIT(0) 2216 2156 #define GBIF_ARB_HALT_MASK BIT(1) 2217 2157 #define VBIF_XIN_HALT_CTRL0_MASK GENMASK(3, 0) ··· 2374 2414 return 0; 2375 2415 } 2376 2416 2377 - static int a6xx_gmu_get_timestamp(struct msm_gpu *gpu, uint64_t *value) 2417 + static u64 a6xx_get_timestamp(struct msm_gpu *gpu) 2378 2418 { 2379 - struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); 2380 - struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); 2381 - 2382 - *value = read_gmu_ao_counter(a6xx_gpu); 2383 - 2384 - return 0; 2385 - } 2386 - 2387 - static int a6xx_get_timestamp(struct msm_gpu *gpu, uint64_t *value) 2388 - { 2389 - *value = gpu_read64(gpu, REG_A6XX_CP_ALWAYS_ON_COUNTER); 2390 - return 0; 2419 + return gpu_read64(gpu, REG_A6XX_CP_ALWAYS_ON_COUNTER); 2391 2420 } 2392 2421 2393 2422 static struct msm_ringbuffer *a6xx_active_ring(struct 
msm_gpu *gpu) ··· 2549 2600 return UINT_MAX; 2550 2601 } 2551 2602 2552 - static int a6xx_set_supported_hw(struct device *dev, const struct adreno_info *info) 2603 + static int a6xx_read_speedbin(struct device *dev, struct a6xx_gpu *a6xx_gpu, 2604 + const struct adreno_info *info, u32 *speedbin) 2605 + { 2606 + int ret; 2607 + 2608 + /* Use speedbin fuse if present. Otherwise, fallback to softfuse */ 2609 + ret = adreno_read_speedbin(dev, speedbin); 2610 + if (ret != -ENOENT) 2611 + return ret; 2612 + 2613 + if (info->quirks & ADRENO_QUIRK_SOFTFUSE) { 2614 + *speedbin = a6xx_llc_read(a6xx_gpu, REG_A8XX_CX_MISC_SW_FUSE_FREQ_LIMIT_STATUS); 2615 + *speedbin = A8XX_CX_MISC_SW_FUSE_FREQ_LIMIT_STATUS_FINALFREQLIMIT(*speedbin); 2616 + return 0; 2617 + } 2618 + 2619 + return -ENOENT; 2620 + } 2621 + 2622 + static int a6xx_set_supported_hw(struct device *dev, struct a6xx_gpu *a6xx_gpu, 2623 + const struct adreno_info *info) 2553 2624 { 2554 2625 u32 supp_hw; 2555 2626 u32 speedbin; 2556 2627 int ret; 2557 2628 2558 - ret = adreno_read_speedbin(dev, &speedbin); 2629 + ret = a6xx_read_speedbin(dev, a6xx_gpu, info, &speedbin); 2559 2630 /* 2560 2631 * -ENOENT means that the platform doesn't support speedbin which is 2561 2632 * fine ··· 2604 2635 return 0; 2605 2636 } 2606 2637 2638 + static bool a6xx_aqe_is_enabled(struct adreno_gpu *adreno_gpu) 2639 + { 2640 + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); 2641 + 2642 + /* 2643 + * AQE uses preemption context record as scratch pad, so check if 2644 + * preemption is enabled 2645 + */ 2646 + return (adreno_gpu->base.nr_rings > 1) && !!a6xx_gpu->aqe_bo; 2647 + } 2648 + 2607 2649 static struct msm_gpu *a6xx_gpu_init(struct drm_device *dev) 2608 2650 { 2609 2651 struct msm_drm_private *priv = dev->dev_private; 2610 2652 struct platform_device *pdev = priv->gpu_pdev; 2611 2653 struct adreno_platform_config *config = pdev->dev.platform_data; 2654 + const struct adreno_info *info = config->info; 2612 2655 struct device_node 
*node; 2613 2656 struct a6xx_gpu *a6xx_gpu; 2614 2657 struct adreno_gpu *adreno_gpu; 2615 2658 struct msm_gpu *gpu; 2616 2659 extern int enable_preemption; 2660 + u32 speedbin; 2617 2661 bool is_a7xx; 2618 2662 int ret, nr_rings = 1; 2619 2663 ··· 2638 2656 gpu = &adreno_gpu->base; 2639 2657 2640 2658 mutex_init(&a6xx_gpu->gmu.lock); 2659 + spin_lock_init(&a6xx_gpu->aperture_lock); 2641 2660 2642 2661 adreno_gpu->registers = NULL; 2643 2662 ··· 2650 2667 adreno_gpu->gmu_is_wrapper = of_device_is_compatible(node, "qcom,adreno-gmu-wrapper"); 2651 2668 2652 2669 adreno_gpu->base.hw_apriv = 2653 - !!(config->info->quirks & ADRENO_QUIRK_HAS_HW_APRIV); 2670 + !!(info->quirks & ADRENO_QUIRK_HAS_HW_APRIV); 2654 2671 2655 2672 /* gpu->info only gets assigned in adreno_gpu_init(). A8x is included intentionally */ 2656 - is_a7xx = config->info->family >= ADRENO_7XX_GEN1; 2673 + is_a7xx = info->family >= ADRENO_7XX_GEN1; 2657 2674 2658 2675 a6xx_llc_slices_init(pdev, a6xx_gpu, is_a7xx); 2659 2676 2660 - ret = a6xx_set_supported_hw(&pdev->dev, config->info); 2677 + ret = a6xx_set_supported_hw(&pdev->dev, a6xx_gpu, info); 2661 2678 if (ret) { 2662 2679 a6xx_llc_slices_destroy(a6xx_gpu); 2663 2680 kfree(a6xx_gpu); ··· 2665 2682 } 2666 2683 2667 2684 if ((enable_preemption == 1) || (enable_preemption == -1 && 2668 - (config->info->quirks & ADRENO_QUIRK_PREEMPTION))) 2685 + (info->quirks & ADRENO_QUIRK_PREEMPTION))) 2669 2686 nr_rings = 4; 2670 2687 2671 - ret = adreno_gpu_init(dev, pdev, adreno_gpu, config->info->funcs, nr_rings); 2688 + ret = adreno_gpu_init(dev, pdev, adreno_gpu, info->funcs, nr_rings); 2672 2689 if (ret) { 2673 2690 a6xx_destroy(&(a6xx_gpu->base.base)); 2674 2691 return ERR_PTR(ret); 2675 2692 } 2693 + 2694 + /* Set the speedbin value that is passed to userspace */ 2695 + if (a6xx_read_speedbin(&pdev->dev, a6xx_gpu, info, &speedbin) || !speedbin) 2696 + speedbin = 0xffff; 2697 + adreno_gpu->speedbin = (uint16_t) (0xffff & speedbin); 2676 2698 2677 2699 /* 2678 
2700 * For now only clamp to idle freq for devices where this is known not ··· 2694 2706 if (ret) { 2695 2707 a6xx_destroy(&(a6xx_gpu->base.base)); 2696 2708 return ERR_PTR(ret); 2697 - } 2698 - 2699 - if (adreno_is_a7xx(adreno_gpu) || adreno_is_a8xx(adreno_gpu)) { 2700 - ret = a7xx_cx_mem_init(a6xx_gpu); 2701 - if (ret) { 2702 - a6xx_destroy(&(a6xx_gpu->base.base)); 2703 - return ERR_PTR(ret); 2704 - } 2705 2709 } 2706 2710 2707 2711 adreno_gpu->uche_trap_base = 0x1fffffffff000ull; ··· 2745 2765 .get_timestamp = a6xx_gmu_get_timestamp, 2746 2766 .bus_halt = a6xx_bus_clear_pending_transactions, 2747 2767 .mmu_fault_handler = a6xx_fault_handler, 2768 + .gx_is_on = a6xx_gmu_gx_is_on, 2748 2769 }; 2749 2770 2750 2771 const struct adreno_gpu_funcs a6xx_gmuwrapper_funcs = { ··· 2778 2797 .get_timestamp = a6xx_get_timestamp, 2779 2798 .bus_halt = a6xx_bus_clear_pending_transactions, 2780 2799 .mmu_fault_handler = a6xx_fault_handler, 2800 + .gx_is_on = a6xx_gmu_gx_is_on, 2781 2801 }; 2782 2802 2783 2803 const struct adreno_gpu_funcs a7xx_gpu_funcs = { ··· 2813 2831 .get_timestamp = a6xx_gmu_get_timestamp, 2814 2832 .bus_halt = a6xx_bus_clear_pending_transactions, 2815 2833 .mmu_fault_handler = a6xx_fault_handler, 2834 + .gx_is_on = a7xx_gmu_gx_is_on, 2835 + .aqe_is_enabled = a6xx_aqe_is_enabled, 2816 2836 }; 2817 2837 2818 2838 const struct adreno_gpu_funcs a8xx_gpu_funcs = { ··· 2842 2858 .get_timestamp = a8xx_gmu_get_timestamp, 2843 2859 .bus_halt = a8xx_bus_clear_pending_transactions, 2844 2860 .mmu_fault_handler = a8xx_fault_handler, 2861 + .gx_is_on = a8xx_gmu_gx_is_on, 2862 + .aqe_is_enabled = a6xx_aqe_is_enabled, 2845 2863 };
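Several hunks above replace direct calls like `a6xx_gmu_gx_is_on(&a6xx_gpu->gmu)` with `adreno_gpu->funcs->gx_is_on(adreno_gpu)`, so shared code no longer hard-codes one generation's register layout and each family (`a6xx`/`a7xx`/`a8xx`) supplies its own implementation. A compact sketch of that ops-table dispatch; register bits and names here are invented for illustration:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct gpu;
struct gpu_funcs { bool (*gx_is_on)(struct gpu *); };
struct gpu { const struct gpu_funcs *funcs; uint32_t pwr_status; };

/* Made-up status bits; each real family reads a different register. */
#define GX_GDSC_OFF (1u << 0)
#define GX_CLK_OFF  (1u << 1)

static bool generic_gx_is_on(struct gpu *gpu)
{
	/* Rail is "on" only when neither the GDSC nor the clock is off. */
	return !(gpu->pwr_status & (GX_GDSC_OFF | GX_CLK_OFF));
}

static const struct gpu_funcs example_funcs = { .gx_is_on = generic_gx_is_on };

/* Shared code calls through the table, never a family-specific symbol. */
static bool gx_is_on(struct gpu *gpu) { return gpu->funcs->gx_is_on(gpu); }

static bool check(uint32_t pwr_status)
{
	struct gpu g = { &example_funcs, pwr_status };
	return gx_is_on(&g);
}
```

The same indirection is why `a6xx_gmu_gx_is_on()` in the diff now takes `struct adreno_gpu *` rather than the GMU pointer: the signature has to match the shared function-pointer slot.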
+6 -1
drivers/gpu/drm/msm/adreno/a6xx_gpu.h
··· 278 278 void a6xx_preempt_trigger(struct msm_gpu *gpu); 279 279 void a6xx_preempt_irq(struct msm_gpu *gpu); 280 280 void a6xx_preempt_fini(struct msm_gpu *gpu); 281 + void a6xx_emit_set_pseudo_reg(struct msm_ringbuffer *ring, 282 + struct a6xx_gpu *a6xx_gpu, struct msm_gpu_submitqueue *queue); 281 283 int a6xx_preempt_submitqueue_setup(struct msm_gpu *gpu, 282 284 struct msm_gpu_submitqueue *queue); 283 285 void a6xx_preempt_submitqueue_close(struct msm_gpu *gpu, ··· 322 320 void a8xx_bus_clear_pending_transactions(struct adreno_gpu *adreno_gpu, bool gx_off); 323 321 int a8xx_fault_handler(void *arg, unsigned long iova, int flags, void *data); 324 322 void a8xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring); 325 - int a8xx_gmu_get_timestamp(struct msm_gpu *gpu, uint64_t *value); 323 + u64 a8xx_gmu_get_timestamp(struct msm_gpu *gpu); 326 324 u64 a8xx_gpu_busy(struct msm_gpu *gpu, unsigned long *out_sample_rate); 327 325 int a8xx_gpu_feature_probe(struct msm_gpu *gpu); 328 326 void a8xx_gpu_get_slice_info(struct msm_gpu *gpu); 329 327 int a8xx_hw_init(struct msm_gpu *gpu); 330 328 irqreturn_t a8xx_irq(struct msm_gpu *gpu); 331 329 void a8xx_llc_activate(struct a6xx_gpu *a6xx_gpu); 330 + void a8xx_preempt_hw_init(struct msm_gpu *gpu); 331 + void a8xx_preempt_trigger(struct msm_gpu *gpu); 332 + void a8xx_preempt_irq(struct msm_gpu *gpu); 332 333 bool a8xx_progress(struct msm_gpu *gpu, struct msm_ringbuffer *ring); 333 334 void a8xx_recover(struct msm_gpu *gpu); 334 335 #endif /* __A6XX_GPU_H__ */
+10 -10
drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
··· 57 57 struct msm_gpu_state_bo *gmu_hfi; 58 58 struct msm_gpu_state_bo *gmu_debug; 59 59 60 - s32 hfi_queue_history[2][HFI_HISTORY_SZ]; 60 + s32 hfi_queue_history[HFI_MAX_QUEUES][HFI_HISTORY_SZ]; 61 61 62 62 struct list_head objs; 63 63 ··· 361 361 sizeof(*a6xx_state->debugbus)); 362 362 363 363 if (a6xx_state->debugbus) { 364 - int i; 364 + int i, j; 365 365 366 366 for (i = 0; i < ARRAY_SIZE(a6xx_debugbus_blocks); i++) 367 367 a6xx_get_debugbus_block(gpu, 368 368 a6xx_state, 369 369 &a6xx_debugbus_blocks[i], 370 370 &a6xx_state->debugbus[i]); 371 - 372 - a6xx_state->nr_debugbus = ARRAY_SIZE(a6xx_debugbus_blocks); 373 371 374 372 /* 375 373 * GBIF has same debugbus as of other GPU blocks, fall back to ··· 379 381 &a6xx_gbif_debugbus_block, 380 382 &a6xx_state->debugbus[i]); 381 383 382 - a6xx_state->nr_debugbus += 1; 384 + i++; 383 385 } 384 386 385 387 386 388 if (adreno_is_a650_family(to_adreno_gpu(gpu))) { 387 - for (i = 0; i < ARRAY_SIZE(a650_debugbus_blocks); i++) 389 + for (j = 0; j < ARRAY_SIZE(a650_debugbus_blocks); i++, j++) 388 390 a6xx_get_debugbus_block(gpu, 389 391 a6xx_state, 390 - &a650_debugbus_blocks[i], 392 + &a650_debugbus_blocks[j], 391 393 &a6xx_state->debugbus[i]); 392 394 } 395 + 396 + a6xx_state->nr_debugbus = i; 393 397 } 394 398 } 395 399 ··· 1013 1013 u64 out = dumper->iova + A6XX_CD_DATA_OFFSET; 1014 1014 int i, regcount = 0; 1015 1015 1016 - in += CRASHDUMP_WRITE(in, REG_A6XX_HLSQ_DBG_READ_SEL, regs->val1); 1016 + in += CRASHDUMP_WRITE(in, REG_A6XX_HLSQ_DBG_READ_SEL, (regs->val1 & 0xff) << 8); 1017 1017 1018 1018 for (i = 0; i < regs->count; i += 2) { 1019 1019 u32 count = RANGE(regs->registers, i); ··· 1251 1251 _a6xx_get_gmu_registers(gpu, a6xx_state, &a6xx_gpucc_reg, 1252 1252 &a6xx_state->gmu_registers[2], false); 1253 1253 1254 - if (!a6xx_gmu_gx_is_on(&a6xx_gpu->gmu)) 1254 + if (!adreno_gpu->funcs->gx_is_on(adreno_gpu)) 1255 1255 return; 1256 1256 1257 1257 /* Set the fence to ALLOW mode so we can access the registers */ ··· 
1607 1607 } 1608 1608 1609 1609 /* If GX isn't on the rest of the data isn't going to be accessible */ 1610 - if (!a6xx_gmu_gx_is_on(&a6xx_gpu->gmu)) 1610 + if (!adreno_gpu->funcs->gx_is_on(adreno_gpu)) 1611 1611 return &a6xx_state->base; 1612 1612 1613 1613 /* Halt SQE first */
+23 -10
drivers/gpu/drm/msm/adreno/a6xx_hfi.c
··· 34 34 struct a6xx_hfi_queue_header *header = queue->header; 35 35 u32 i, hdr, index = header->read_index; 36 36 37 - if (header->read_index == header->write_index) { 37 + if (header->read_index == READ_ONCE(header->write_index)) { 38 38 header->rx_request = 1; 39 39 return 0; 40 40 } ··· 62 62 if (!gmu->legacy) 63 63 index = ALIGN(index, 4) % header->size; 64 64 65 - header->read_index = index; 65 + /* Ensure all memory operations are complete before updating the read index */ 66 + dma_mb(); 67 + 68 + WRITE_ONCE(header->read_index, index); 66 69 return HFI_HEADER_SIZE(hdr); 67 70 } 68 71 ··· 77 74 78 75 spin_lock(&queue->lock); 79 76 80 - space = CIRC_SPACE(header->write_index, header->read_index, 77 + space = CIRC_SPACE(header->write_index, READ_ONCE(header->read_index), 81 78 header->size); 82 79 if (space < dwords) { 83 80 header->dropped++; ··· 98 95 queue->data[index] = 0xfafafafa; 99 96 } 100 97 101 - header->write_index = index; 98 + /* Ensure all memory operations are complete before updating the write index */ 99 + dma_mb(); 100 + 101 + WRITE_ONCE(header->write_index, index); 102 102 spin_unlock(&queue->lock); 103 103 104 104 gmu_write(gmu, REG_A6XX_GMU_HOST2GMU_INTR_SET, 0x01); ··· 851 845 return a6xx_hfi_send_msg(gmu, HFI_H2F_FEATURE_CTRL, &msg, sizeof(msg), NULL, 0); 852 846 } 853 847 854 - #define HFI_FEATURE_IFPC 9 855 848 #define IFPC_LONG_HYST 0x1680 856 849 857 850 static int a6xx_hfi_enable_ifpc(struct a6xx_gmu *gmu) ··· 860 855 861 856 return a6xx_hfi_feature_ctrl_msg(gmu, HFI_FEATURE_IFPC, 1, IFPC_LONG_HYST); 862 857 } 863 - 864 - #define HFI_FEATURE_ACD 12 865 858 866 859 static int a6xx_hfi_enable_acd(struct a6xx_gmu *gmu) 867 860 { ··· 1059 1056 struct a6xx_gmu_bo *hfi = &gmu->hfi; 1060 1057 struct a6xx_hfi_queue_table_header *table = hfi->virt; 1061 1058 struct a6xx_hfi_queue_header *headers = hfi->virt + sizeof(*table); 1059 + int table_size, idx; 1062 1060 u64 offset; 1063 - int table_size; 1064 1061 1065 1062 /* 1066 1063 * The table 
size is the size of the table header plus all of the queue ··· 1079 1076 table->active_queues = ARRAY_SIZE(gmu->queues); 1080 1077 1081 1078 /* Command queue */ 1079 + idx = 0; 1082 1080 offset = SZ_4K; 1083 - a6xx_hfi_queue_init(&gmu->queues[0], &headers[0], hfi->virt + offset, 1081 + a6xx_hfi_queue_init(&gmu->queues[idx], &headers[idx], hfi->virt + offset, 1084 1082 hfi->iova + offset, 0); 1085 1083 1086 1084 /* GMU response queue */ 1085 + idx++; 1087 1086 offset += SZ_4K; 1088 - a6xx_hfi_queue_init(&gmu->queues[1], &headers[1], hfi->virt + offset, 1087 + a6xx_hfi_queue_init(&gmu->queues[idx], &headers[idx], hfi->virt + offset, 1089 1088 hfi->iova + offset, gmu->legacy ? 4 : 1); 1089 + 1090 + /* GMU Debug queue */ 1091 + idx++; 1092 + offset += SZ_4K; 1093 + a6xx_hfi_queue_init(&gmu->queues[idx], &headers[idx], hfi->virt + offset, 1094 + hfi->iova + offset, gmu->legacy ? 5 : 2); 1095 + 1096 + WARN_ON(idx >= HFI_MAX_QUEUES); 1090 1097 }
+133 -22
drivers/gpu/drm/msm/adreno/a6xx_hfi.h
··· 4 4 #ifndef _A6XX_HFI_H_ 5 5 #define _A6XX_HFI_H_ 6 6 7 + #define HFI_MAX_QUEUES 3 8 + 7 9 struct a6xx_hfi_queue_table_header { 8 10 u32 version; 9 11 u32 size; /* Size of the queue table in dwords */ ··· 13 11 u32 qhdr_size; /* Size of the queue headers */ 14 12 u32 num_queues; /* Number of total queues */ 15 13 u32 active_queues; /* Number of active queues */ 16 - }; 14 + } __packed; 17 15 18 16 struct a6xx_hfi_queue_header { 19 17 u32 status; ··· 28 26 u32 tx_request; 29 27 u32 read_index; 30 28 u32 write_index; 31 - }; 29 + } __packed; 32 30 33 31 struct a6xx_hfi_queue { 34 32 struct a6xx_hfi_queue_header *header; ··· 74 72 u32 ret_header; 75 73 u32 error; 76 74 u32 payload[HFI_RESPONSE_PAYLOAD_SIZE]; 77 - }; 75 + } __packed; 78 76 79 77 #define HFI_F2H_MSG_ERROR 100 80 78 ··· 82 80 u32 header; 83 81 u32 code; 84 82 u32 payload[2]; 85 - }; 83 + } __packed; 86 84 87 85 #define HFI_H2F_MSG_INIT 0 88 86 ··· 92 90 u32 dbg_buffer_addr; 93 91 u32 dbg_buffer_size; 94 92 u32 boot_state; 95 - }; 93 + } __packed; 96 94 97 95 #define HFI_H2F_MSG_FW_VERSION 1 98 96 99 97 struct a6xx_hfi_msg_fw_version { 100 98 u32 header; 101 99 u32 supported_version; 102 - }; 100 + } __packed; 103 101 104 102 #define HFI_H2F_MSG_PERF_TABLE 4 105 103 106 104 struct perf_level { 107 105 u32 vote; 108 106 u32 freq; 109 - }; 107 + } __packed; 110 108 111 109 struct perf_gx_level { 112 110 u32 vote; 113 111 u32 acd; 114 112 u32 freq; 115 - }; 113 + } __packed; 116 114 117 115 struct a6xx_hfi_msg_perf_table_v1 { 118 116 u32 header; ··· 121 119 122 120 struct perf_level gx_votes[16]; 123 121 struct perf_level cx_votes[4]; 124 - }; 122 + } __packed; 125 123 126 124 struct a6xx_hfi_msg_perf_table { 127 125 u32 header; ··· 130 128 131 129 struct perf_gx_level gx_votes[16]; 132 130 struct perf_level cx_votes[4]; 133 - }; 131 + } __packed; 134 132 135 133 #define HFI_H2F_MSG_BW_TABLE 3 136 134 ··· 145 143 u32 cnoc_cmds_data[2][6]; 146 144 u32 ddr_cmds_addrs[8]; 147 145 u32 ddr_cmds_data[16][8]; 
148 - }; 146 + } __packed; 149 147 150 148 #define HFI_H2F_MSG_TEST 5 151 149 152 150 struct a6xx_hfi_msg_test { 153 151 u32 header; 154 - }; 152 + } __packed; 155 153 156 154 #define HFI_H2F_MSG_ACD 7 157 155 #define MAX_ACD_STRIDE 2 ··· 163 161 u32 stride; 164 162 u32 num_levels; 165 163 u32 data[16 * MAX_ACD_STRIDE]; 166 - }; 164 + } __packed; 165 + 166 + #define CLX_DATA(irated, num_phases, clx_path, extd_intf) \ 167 + ((extd_intf << 29) | \ 168 + (clx_path << 28) | \ 169 + (num_phases << 22) | \ 170 + (irated << 16)) 171 + 172 + struct a6xx_hfi_clx_domain_v2 { 173 + /** 174 + * @data: BITS[0:15] Migration time 175 + * BITS[16:21] Current rating 176 + * BITS[22:27] Phases for domain 177 + * BITS[28:28] Path notification 178 + * BITS[29:31] Extra features 179 + */ 180 + u32 data; 181 + /** @clxt: CLX time in microseconds */ 182 + u32 clxt; 183 + /** @clxh: CLH time in microseconds */ 184 + u32 clxh; 185 + /** @urg_mode: Urgent HW throttle mode of operation */ 186 + u32 urg_mode; 187 + /** @lkg_en: Enable leakage current estimate */ 188 + u32 lkg_en; 189 + /** @curr_budget: Current budget */ 190 + u32 curr_budget; 191 + } __packed; 192 + 193 + #define HFI_H2F_MSG_CLX_TBL 8 194 + 195 + #define MAX_CLX_DOMAINS 2 196 + struct a6xx_hfi_clx_table_v2_cmd { 197 + u32 hdr; 198 + u32 version; 199 + struct a6xx_hfi_clx_domain_v2 domain[MAX_CLX_DOMAINS]; 200 + } __packed; 167 201 168 202 #define HFI_H2F_MSG_START 10 169 203 170 204 struct a6xx_hfi_msg_start { 171 205 u32 header; 172 - }; 206 + } __packed; 173 207 174 208 #define HFI_H2F_FEATURE_CTRL 11 175 209 176 210 struct a6xx_hfi_msg_feature_ctrl { 177 211 u32 header; 178 212 u32 feature; 213 + #define HFI_FEATURE_DCVS 0 214 + #define HFI_FEATURE_HWSCHED 1 215 + #define HFI_FEATURE_PREEMPTION 2 216 + #define HFI_FEATURE_CLOCKS_ON 3 217 + #define HFI_FEATURE_BUS_ON 4 218 + #define HFI_FEATURE_RAIL_ON 5 219 + #define HFI_FEATURE_HWCG 6 220 + #define HFI_FEATURE_LM 7 221 + #define HFI_FEATURE_THROTTLE 8 222 + #define 
HFI_FEATURE_IFPC 9 223 + #define HFI_FEATURE_NAP 10 224 + #define HFI_FEATURE_BCL 11 225 + #define HFI_FEATURE_ACD 12 226 + #define HFI_FEATURE_DIDT 13 227 + #define HFI_FEATURE_DEPRECATED 14 228 + #define HFI_FEATURE_CB 15 229 + #define HFI_FEATURE_KPROF 16 230 + #define HFI_FEATURE_BAIL_OUT_TIMER 17 231 + #define HFI_FEATURE_GMU_STATS 18 232 + #define HFI_FEATURE_DBQ 19 233 + #define HFI_FEATURE_MINBW 20 234 + #define HFI_FEATURE_CLX 21 235 + #define HFI_FEATURE_LSR 23 236 + #define HFI_FEATURE_LPAC 24 237 + #define HFI_FEATURE_HW_FENCE 25 238 + #define HFI_FEATURE_PERF_NORETAIN 26 239 + #define HFI_FEATURE_DMS 27 240 + #define HFI_FEATURE_THERMAL 28 241 + #define HFI_FEATURE_AQE 29 242 + #define HFI_FEATURE_TDCVS 30 243 + #define HFI_FEATURE_DCE 31 244 + #define HFI_FEATURE_IFF_PCLX 32 245 + #define HFI_FEATURE_SOFT_RESET 0x10000001 246 + #define HFI_FEATURE_DCVS_PROFILE 0x10000002 247 + #define HFI_FEATURE_FAST_CTX_DESTROY 0x10000003 179 248 u32 enable; 180 249 u32 data; 181 - }; 250 + } __packed; 182 251 183 252 #define HFI_H2F_MSG_CORE_FW_START 14 184 253 185 254 struct a6xx_hfi_msg_core_fw_start { 186 255 u32 header; 187 256 u32 handle; 188 - }; 257 + } __packed; 189 258 190 259 #define HFI_H2F_MSG_TABLE 15 191 260 ··· 264 191 u32 count; 265 192 u32 stride; 266 193 u32 data[]; 267 - }; 194 + } __packed; 268 195 269 196 struct a6xx_hfi_table { 270 197 u32 header; 271 198 u32 version; 272 199 u32 type; 273 - #define HFI_TABLE_BW_VOTE 0 274 - #define HFI_TABLE_GPU_PERF 1 200 + #define HFI_TABLE_BW_VOTE 0 201 + #define HFI_TABLE_GPU_PERF 1 202 + #define HFI_TABLE_DIDT 2 203 + #define HFI_TABLE_ACD 3 204 + #define HFI_TABLE_CLX_V1 4 /* Unused */ 205 + #define HFI_TABLE_CLX_V2 5 206 + #define HFI_TABLE_THERM 6 207 + #define HFI_TABLE_DCVS 7 208 + #define HFI_TABLE_SYS_TIME 8 209 + #define HFI_TABLE_GMU_DCVS 9 210 + #define HFI_TABLE_LIMITS_MIT 10 275 211 struct a6xx_hfi_table_entry entry[]; 276 - }; 212 + } __packed; 277 213 278 214 #define 
HFI_H2F_MSG_GX_BW_PERF_VOTE 30 279 215 ··· 291 209 u32 ack_type; 292 210 u32 freq; 293 211 u32 bw; 294 - }; 212 + } __packed; 295 213 296 214 #define AB_VOTE_MASK GENMASK(31, 16) 297 215 #define MAX_AB_VOTE (FIELD_MAX(AB_VOTE_MASK) - 1) ··· 304 222 u32 header; 305 223 u32 bw; 306 224 u32 freq; 307 - }; 225 + } __packed; 226 + 227 + struct a6xx_hfi_limits_cfg { 228 + u32 enable; 229 + u32 msg_path; 230 + u32 lkg_en; 231 + /* 232 + * BIT[0]: 0 = (static) throttle to fixed sid level 233 + * 1 = (dynamic) throttle to sid level calculated by HW 234 + * BIT[1]: 0 = Mx 235 + * 1 = Bx 236 + */ 237 + u32 mode; 238 + u32 sid; 239 + /* Mitigation time in microseconds */ 240 + u32 mit_time; 241 + /* Max current in mA during mitigation */ 242 + u32 curr_limit; 243 + } __packed; 244 + 245 + struct a6xx_hfi_limits_tbl { 246 + u8 feature_id; 247 + #define GMU_MIT_IFF 0 248 + #define GMU_MIT_PCLX 1 249 + u8 domain; 250 + #define GMU_GX_DOMAIN 0 251 + #define GMU_MX_DOMAIN 1 252 + u16 feature_rev; 253 + struct a6xx_hfi_limits_cfg cfg; 254 + } __packed; 308 255 309 256 #endif
+1 -76
drivers/gpu/drm/msm/adreno/a6xx_preempt.c
··· 6 6 #include "msm_gem.h" 7 7 #include "a6xx_gpu.h" 8 8 #include "a6xx_gmu.xml.h" 9 + #include "a6xx_preempt.h" 9 10 #include "msm_mmu.h" 10 11 #include "msm_gpu_trace.h" 11 - 12 - /* 13 - * Try to transition the preemption state from old to new. Return 14 - * true on success or false if the original state wasn't 'old' 15 - */ 16 - static inline bool try_preempt_state(struct a6xx_gpu *a6xx_gpu, 17 - enum a6xx_preempt_state old, enum a6xx_preempt_state new) 18 - { 19 - enum a6xx_preempt_state cur = atomic_cmpxchg(&a6xx_gpu->preempt_state, 20 - old, new); 21 - 22 - return (cur == old); 23 - } 24 - 25 - /* 26 - * Force the preemption state to the specified state. This is used in cases 27 - * where the current state is known and won't change 28 - */ 29 - static inline void set_preempt_state(struct a6xx_gpu *gpu, 30 - enum a6xx_preempt_state new) 31 - { 32 - /* 33 - * preempt_state may be read by other cores trying to trigger a 34 - * preemption or in the interrupt handler so barriers are needed 35 - * before... 36 - */ 37 - smp_mb__before_atomic(); 38 - atomic_set(&gpu->preempt_state, new); 39 - /* ... 
and after*/ 40 - smp_mb__after_atomic(); 41 - } 42 - 43 - /* Write the most recent wptr for the given ring into the hardware */ 44 - static inline void update_wptr(struct a6xx_gpu *a6xx_gpu, struct msm_ringbuffer *ring) 45 - { 46 - unsigned long flags; 47 - uint32_t wptr; 48 - 49 - spin_lock_irqsave(&ring->preempt_lock, flags); 50 - 51 - if (ring->restore_wptr) { 52 - wptr = get_wptr(ring); 53 - 54 - a6xx_fenced_write(a6xx_gpu, REG_A6XX_CP_RB_WPTR, wptr, BIT(0), false); 55 - 56 - ring->restore_wptr = false; 57 - } 58 - 59 - spin_unlock_irqrestore(&ring->preempt_lock, flags); 60 - } 61 - 62 - /* Return the highest priority ringbuffer with something in it */ 63 - static struct msm_ringbuffer *get_next_ring(struct msm_gpu *gpu) 64 - { 65 - struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); 66 - struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); 67 - 68 - unsigned long flags; 69 - int i; 70 - 71 - for (i = 0; i < gpu->nr_rings; i++) { 72 - bool empty; 73 - struct msm_ringbuffer *ring = gpu->rb[i]; 74 - 75 - spin_lock_irqsave(&ring->preempt_lock, flags); 76 - empty = (get_wptr(ring) == gpu->funcs->get_rptr(gpu, ring)); 77 - if (!empty && ring == a6xx_gpu->cur_ring) 78 - empty = ring->memptrs->fence == a6xx_gpu->last_seqno[i]; 79 - spin_unlock_irqrestore(&ring->preempt_lock, flags); 80 - 81 - if (!empty) 82 - return ring; 83 - } 84 - 85 - return NULL; 86 - } 87 12 88 13 static void a6xx_preempt_timer(struct timer_list *t) 89 14 {
+82
drivers/gpu/drm/msm/adreno/a6xx_preempt.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* Copyright (c) 2018, The Linux Foundation. All rights reserved. */ 3 + /* Copyright (c) 2023 Collabora, Ltd. */ 4 + /* Copyright (c) 2024 Valve Corporation */ 5 + /* Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries. */ 6 + 7 + /* 8 + * Try to transition the preemption state from old to new. Return 9 + * true on success or false if the original state wasn't 'old' 10 + */ 11 + static inline bool try_preempt_state(struct a6xx_gpu *a6xx_gpu, 12 + enum a6xx_preempt_state old, enum a6xx_preempt_state new) 13 + { 14 + enum a6xx_preempt_state cur = atomic_cmpxchg(&a6xx_gpu->preempt_state, 15 + old, new); 16 + 17 + return (cur == old); 18 + } 19 + 20 + /* 21 + * Force the preemption state to the specified state. This is used in cases 22 + * where the current state is known and won't change 23 + */ 24 + static inline void set_preempt_state(struct a6xx_gpu *gpu, 25 + enum a6xx_preempt_state new) 26 + { 27 + /* 28 + * preempt_state may be read by other cores trying to trigger a 29 + * preemption or in the interrupt handler so barriers are needed 30 + * before... 31 + */ 32 + smp_mb__before_atomic(); 33 + atomic_set(&gpu->preempt_state, new); 34 + /* ... 
and after */ 35 + smp_mb__after_atomic(); 36 + } 37 + 38 + /* Write the most recent wptr for the given ring into the hardware */ 39 + static inline void update_wptr(struct a6xx_gpu *a6xx_gpu, struct msm_ringbuffer *ring) 40 + { 41 + unsigned long flags; 42 + uint32_t wptr; 43 + 44 + spin_lock_irqsave(&ring->preempt_lock, flags); 45 + 46 + if (ring->restore_wptr) { 47 + wptr = get_wptr(ring); 48 + 49 + a6xx_fenced_write(a6xx_gpu, REG_A6XX_CP_RB_WPTR, wptr, BIT(0), false); 50 + 51 + ring->restore_wptr = false; 52 + } 53 + 54 + spin_unlock_irqrestore(&ring->preempt_lock, flags); 55 + } 56 + 57 + /* Return the highest priority ringbuffer with something in it */ 58 + static inline struct msm_ringbuffer *get_next_ring(struct msm_gpu *gpu) 59 + { 60 + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); 61 + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); 62 + 63 + unsigned long flags; 64 + int i; 65 + 66 + for (i = 0; i < gpu->nr_rings; i++) { 67 + bool empty; 68 + struct msm_ringbuffer *ring = gpu->rb[i]; 69 + 70 + spin_lock_irqsave(&ring->preempt_lock, flags); 71 + empty = (get_wptr(ring) == gpu->funcs->get_rptr(gpu, ring)); 72 + if (!empty && ring == a6xx_gpu->cur_ring) 73 + empty = ring->memptrs->fence == a6xx_gpu->last_seqno[i]; 74 + spin_unlock_irqrestore(&ring->preempt_lock, flags); 75 + 76 + if (!empty) 77 + return ring; 78 + } 79 + 80 + return NULL; 81 + } 82 +
+156 -16
drivers/gpu/drm/msm/adreno/a8xx_gpu.c
··· 87 87 struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); 88 88 struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); 89 89 const struct a6xx_info *info = adreno_gpu->info->a6xx; 90 + struct device *dev = &gpu->pdev->dev; 90 91 u32 slice_mask; 91 92 92 93 if (adreno_gpu->info->family < ADRENO_8XX_GEN1) ··· 111 110 112 111 /* Chip ID depends on the number of slices available. So update it */ 113 112 adreno_gpu->chip_id |= FIELD_PREP(GENMASK(7, 4), hweight32(slice_mask)); 113 + 114 + /* Update the gpu-name to reflect the slice config: */ 115 + const char *name = devm_kasprintf(dev, GFP_KERNEL, 116 + "%"ADRENO_CHIPID_FMT, 117 + ADRENO_CHIPID_ARGS(adreno_gpu->chip_id)); 118 + if (name) { 119 + devm_kfree(dev, adreno_gpu->base.name); 120 + adreno_gpu->base.name = name; 121 + } 114 122 } 115 123 116 124 static u32 a8xx_get_first_slice(struct a6xx_gpu *a6xx_gpu) ··· 183 173 /* Update HW if this is the current ring and we are not in preempt*/ 184 174 if (!a6xx_in_preempt(a6xx_gpu)) { 185 175 if (a6xx_gpu->cur_ring == ring) 186 - gpu_write(gpu, REG_A6XX_CP_RB_WPTR, wptr); 176 + a6xx_fenced_write(a6xx_gpu, REG_A6XX_CP_RB_WPTR, wptr, BIT(0), false); 187 177 else 188 178 ring->restore_wptr = true; 189 179 } else { ··· 396 386 a8xx_aperture_clear(gpu); 397 387 } 398 388 389 + static void a8xx_patch_pwrup_reglist(struct msm_gpu *gpu) 390 + { 391 + const struct adreno_reglist_pipe_list *dyn_pwrup_reglist; 392 + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); 393 + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); 394 + const struct adreno_reglist_list *reglist; 395 + void *ptr = a6xx_gpu->pwrup_reglist_ptr; 396 + struct cpu_gpu_lock *lock = ptr; 397 + u32 *dest = (u32 *)&lock->regs[0]; 398 + u32 dyn_pwrup_reglist_count = 0; 399 + int i; 400 + 401 + lock->gpu_req = lock->cpu_req = lock->turn = 0; 402 + 403 + reglist = adreno_gpu->info->a6xx->ifpc_reglist; 404 + if (reglist) { 405 + lock->ifpc_list_len = reglist->count; 406 + 407 + /* 408 + * For each entry in each of the 
lists, write the offset and the current 409 + * register value into the GPU buffer 410 + */ 411 + for (i = 0; i < reglist->count; i++) { 412 + *dest++ = reglist->regs[i]; 413 + *dest++ = gpu_read(gpu, reglist->regs[i]); 414 + } 415 + } 416 + 417 + reglist = adreno_gpu->info->a6xx->pwrup_reglist; 418 + if (reglist) { 419 + lock->preemption_list_len = reglist->count; 420 + 421 + for (i = 0; i < reglist->count; i++) { 422 + *dest++ = reglist->regs[i]; 423 + *dest++ = gpu_read(gpu, reglist->regs[i]); 424 + } 425 + } 426 + 427 + /* 428 + * The overall register list is composed of 429 + * 1. Static IFPC-only registers 430 + * 2. Static IFPC + preemption registers 431 + * 3. Dynamic IFPC + preemption registers (ex: perfcounter selects) 432 + * 433 + * The first two lists are static. The sizes of these lists are 434 + * stored as the number of pairs in ifpc_list_len and 435 + * preemption_list_len respectively. With concurrent binning, some of 436 + * the perfcounter registers are virtualized, so the CP needs to know 437 + * the pipe id to program the aperture in order to restore them. Thus, 438 + * the third list is a dynamic list of triplets 439 + * (<aperture, shifted 12 bits> <address> <data>), and its length is 440 + * stored as the number of triplets in dynamic_list_len. 
441 + */ 442 + dyn_pwrup_reglist = adreno_gpu->info->a6xx->dyn_pwrup_reglist; 443 + if (!dyn_pwrup_reglist) 444 + goto done; 445 + 446 + for (u32 pipe_id = PIPE_BR; pipe_id <= PIPE_DDE_BV; pipe_id++) { 447 + for (i = 0; i < dyn_pwrup_reglist->count; i++) { 448 + if (!(dyn_pwrup_reglist->regs[i].pipe & BIT(pipe_id))) 449 + continue; 450 + *dest++ = A8XX_CP_APERTURE_CNTL_HOST_PIPEID(pipe_id); 451 + *dest++ = dyn_pwrup_reglist->regs[i].offset; 452 + *dest++ = a8xx_read_pipe_slice(gpu, 453 + pipe_id, 454 + a8xx_get_first_slice(a6xx_gpu), 455 + dyn_pwrup_reglist->regs[i].offset); 456 + dyn_pwrup_reglist_count++; 457 + } 458 + } 459 + 460 + lock->dynamic_list_len = dyn_pwrup_reglist_count; 461 + 462 + done: 463 + a8xx_aperture_clear(gpu); 464 + } 465 + 466 + static int a8xx_preempt_start(struct msm_gpu *gpu) 467 + { 468 + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); 469 + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); 470 + struct msm_ringbuffer *ring = gpu->rb[0]; 471 + 472 + if (gpu->nr_rings <= 1) 473 + return 0; 474 + 475 + /* Turn CP protection off */ 476 + OUT_PKT7(ring, CP_SET_PROTECTED_MODE, 1); 477 + OUT_RING(ring, 0); 478 + 479 + a6xx_emit_set_pseudo_reg(ring, a6xx_gpu, NULL); 480 + 481 + /* Yield the floor on command completion */ 482 + OUT_PKT7(ring, CP_CONTEXT_SWITCH_YIELD, 4); 483 + OUT_RING(ring, 0x00); 484 + OUT_RING(ring, 0x00); 485 + OUT_RING(ring, 0x00); 486 + /* Generate interrupt on preemption completion */ 487 + OUT_RING(ring, 0x00); 488 + 489 + a6xx_flush(gpu, ring); 490 + 491 + return a8xx_idle(gpu, ring) ? 
0 : -EINVAL; 492 + } 493 + 399 494 static int a8xx_cp_init(struct msm_gpu *gpu) 400 495 { 496 + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); 497 + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); 401 498 struct msm_ringbuffer *ring = gpu->rb[0]; 402 499 u32 mask; 403 500 ··· 512 395 OUT_PKT7(ring, CP_THREAD_CONTROL, 1); 513 396 OUT_RING(ring, BIT(27)); 514 397 515 - OUT_PKT7(ring, CP_ME_INIT, 4); 398 + OUT_PKT7(ring, CP_ME_INIT, 7); 516 399 517 400 /* Use multiple HW contexts */ 518 401 mask = BIT(0); ··· 526 409 /* Disable save/restore of performance counters across preemption */ 527 410 mask |= BIT(6); 528 411 412 + /* Enable the register init list with the spinlock */ 413 + mask |= BIT(8); 414 + 529 415 OUT_RING(ring, mask); 530 416 531 417 /* Enable multiple hardware contexts */ ··· 539 419 540 420 /* Operation mode mask */ 541 421 OUT_RING(ring, 0x00000002); 422 + 423 + /* Lo address */ 424 + OUT_RING(ring, lower_32_bits(a6xx_gpu->pwrup_reglist_iova)); 425 + /* Hi address */ 426 + OUT_RING(ring, upper_32_bits(a6xx_gpu->pwrup_reglist_iova)); 427 + 428 + /* Enable dyn pwrup list with triplets (offset, value, pipe) */ 429 + OUT_RING(ring, BIT(31)); 542 430 543 431 a6xx_flush(gpu, ring); 544 432 return a8xx_idle(gpu, ring) ? 
0 : -EINVAL; ··· 776 648 gpu_write64(gpu, REG_A6XX_CP_RB_RPTR_ADDR, shadowptr(a6xx_gpu, gpu->rb[0])); 777 649 gpu_write64(gpu, REG_A8XX_CP_RB_RPTR_ADDR_BV, rbmemptr(gpu->rb[0], bv_rptr)); 778 650 651 + a8xx_preempt_hw_init(gpu); 652 + 779 653 for (i = 0; i < gpu->nr_rings; i++) 780 654 a6xx_gpu->shadow[i] = 0; 781 655 ··· 832 702 WARN_ON(!gmem_protect); 833 703 a8xx_aperture_clear(gpu); 834 704 705 + if (!a6xx_gpu->pwrup_reglist_emitted) { 706 + a8xx_patch_pwrup_reglist(gpu); 707 + a6xx_gpu->pwrup_reglist_emitted = true; 708 + } 709 + 835 710 /* Enable hardware clockgating */ 836 711 a8xx_set_hwcg(gpu, true); 837 712 out: 713 + /* Last step - yield the ringbuffer */ 714 + a8xx_preempt_start(gpu); 715 + 838 716 /* 839 717 * Tell the GMU that we are done touching the GPU and it can start power 840 718 * management 841 719 */ 842 720 a6xx_gmu_clear_oob(&a6xx_gpu->gmu, GMU_OOB_GPU_SET); 721 + 722 + if (!ret && (refcount_read(&gpu->sysprof_active) > 1)) { 723 + ret = a6xx_gmu_set_oob(gmu, GMU_OOB_PERFCOUNTER_SET); 724 + if (!ret) 725 + set_bit(GMU_STATUS_OOB_PERF_SET, &gmu->status); 726 + } 843 727 844 728 return ret; 845 729 } ··· 1252 1108 1253 1109 if (status & A6XX_RBBM_INT_0_MASK_CP_CACHE_FLUSH_TS) { 1254 1110 msm_gpu_retire(gpu); 1255 - a6xx_preempt_trigger(gpu); 1111 + a8xx_preempt_trigger(gpu); 1256 1112 } 1257 1113 1258 1114 if (status & A6XX_RBBM_INT_0_MASK_CP_SW) 1259 - a6xx_preempt_irq(gpu); 1115 + a8xx_preempt_irq(gpu); 1260 1116 1261 1117 return IRQ_HANDLED; 1262 1118 } ··· 1318 1174 gpu_write(gpu, REG_A6XX_GBIF_HALT, 0x0); 1319 1175 } 1320 1176 1321 - int a8xx_gmu_get_timestamp(struct msm_gpu *gpu, uint64_t *value) 1177 + u64 a8xx_gmu_get_timestamp(struct msm_gpu *gpu) 1322 1178 { 1323 1179 struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); 1324 1180 struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); 1181 + u64 count_hi, count_lo, temp; 1325 1182 1326 - mutex_lock(&a6xx_gpu->gmu.lock); 1183 + do { 1184 + count_hi = gmu_read(&a6xx_gpu->gmu, 
REG_A8XX_GMU_ALWAYS_ON_COUNTER_H); 1185 + count_lo = gmu_read(&a6xx_gpu->gmu, REG_A8XX_GMU_ALWAYS_ON_COUNTER_L); 1186 + temp = gmu_read(&a6xx_gpu->gmu, REG_A8XX_GMU_ALWAYS_ON_COUNTER_H); 1187 + } while (unlikely(count_hi != temp)); 1327 1188 1328 - /* Force the GPU power on so we can read this register */ 1329 - a6xx_gmu_set_oob(&a6xx_gpu->gmu, GMU_OOB_PERFCOUNTER_SET); 1330 - 1331 - *value = gpu_read64(gpu, REG_A8XX_CP_ALWAYS_ON_COUNTER); 1332 - 1333 - a6xx_gmu_clear_oob(&a6xx_gpu->gmu, GMU_OOB_PERFCOUNTER_SET); 1334 - 1335 - mutex_unlock(&a6xx_gpu->gmu.lock); 1336 - 1337 - return 0; 1189 + return (count_hi << 32) | count_lo; 1338 1190 } 1339 1191 1340 1192 u64 a8xx_gpu_busy(struct msm_gpu *gpu, unsigned long *out_sample_rate)
+259
drivers/gpu/drm/msm/adreno/a8xx_preempt.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries. */ 3 + 4 + #include "msm_gem.h" 5 + #include "a6xx_gpu.h" 6 + #include "a6xx_gmu.xml.h" 7 + #include "a6xx_preempt.h" 8 + #include "msm_mmu.h" 9 + #include "msm_gpu_trace.h" 10 + 11 + static void preempt_prepare_postamble(struct a6xx_gpu *a6xx_gpu) 12 + { 13 + u32 *postamble = a6xx_gpu->preempt_postamble_ptr; 14 + u32 count = 0; 15 + 16 + postamble[count++] = PKT7(CP_REG_RMW, 3); 17 + postamble[count++] = REG_A8XX_RBBM_PERFCTR_SRAM_INIT_CMD; 18 + postamble[count++] = 0; 19 + postamble[count++] = 1; 20 + 21 + postamble[count++] = PKT7(CP_WAIT_REG_MEM, 6); 22 + postamble[count++] = CP_WAIT_REG_MEM_0_FUNCTION(WRITE_EQ); 23 + postamble[count++] = CP_WAIT_REG_MEM_POLL_ADDR_LO( 24 + REG_A8XX_RBBM_PERFCTR_SRAM_INIT_STATUS); 25 + postamble[count++] = CP_WAIT_REG_MEM_POLL_ADDR_HI(0); 26 + postamble[count++] = CP_WAIT_REG_MEM_3_REF(0x1); 27 + postamble[count++] = CP_WAIT_REG_MEM_4_MASK(0x1); 28 + postamble[count++] = CP_WAIT_REG_MEM_5_DELAY_LOOP_CYCLES(0); 29 + 30 + a6xx_gpu->preempt_postamble_len = count; 31 + 32 + a6xx_gpu->postamble_enabled = true; 33 + } 34 + 35 + static void preempt_disable_postamble(struct a6xx_gpu *a6xx_gpu) 36 + { 37 + u32 *postamble = a6xx_gpu->preempt_postamble_ptr; 38 + 39 + /* 40 + * Disable the postamble by replacing the first packet header with a NOP 41 + * that covers the whole buffer. 42 + */ 43 + *postamble = PKT7(CP_NOP, (a6xx_gpu->preempt_postamble_len - 1)); 44 + 45 + a6xx_gpu->postamble_enabled = false; 46 + } 47 + 48 + /* 49 + * Set preemption keepalive vote. 
Please note that this vote is different from the one used in 50 + * a8xx_irq() 51 + */ 52 + static void a8xx_preempt_keepalive_vote(struct msm_gpu *gpu, bool on) 53 + { 54 + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); 55 + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); 56 + 57 + gmu_write(&a6xx_gpu->gmu, REG_A8XX_GMU_PWR_COL_PREEMPT_KEEPALIVE, on); 58 + } 59 + 60 + void a8xx_preempt_irq(struct msm_gpu *gpu) 61 + { 62 + uint32_t status; 63 + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); 64 + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); 65 + struct drm_device *dev = gpu->dev; 66 + 67 + if (!try_preempt_state(a6xx_gpu, PREEMPT_TRIGGERED, PREEMPT_PENDING)) 68 + return; 69 + 70 + /* Delete the preemption watchdog timer */ 71 + timer_delete(&a6xx_gpu->preempt_timer); 72 + 73 + /* 74 + * The hardware should be setting the stop bit of CP_CONTEXT_SWITCH_CNTL 75 + * to zero before firing the interrupt, but there is a non zero chance 76 + * of a hardware condition or a software race that could set it again 77 + * before we have a chance to finish. If that happens, log and go for 78 + * recovery 79 + */ 80 + status = gpu_read(gpu, REG_A8XX_CP_CONTEXT_SWITCH_CNTL); 81 + if (unlikely(status & A8XX_CP_CONTEXT_SWITCH_CNTL_STOP)) { 82 + DRM_DEV_ERROR(&gpu->pdev->dev, 83 + "!!!!!!!!!!!!!!!! preemption faulted !!!!!!!!!!!!!! 
irq\n"); 84 + set_preempt_state(a6xx_gpu, PREEMPT_FAULTED); 85 + dev_err(dev->dev, "%s: Preemption failed to complete\n", 86 + gpu->name); 87 + kthread_queue_work(gpu->worker, &gpu->recover_work); 88 + return; 89 + } 90 + 91 + a6xx_gpu->cur_ring = a6xx_gpu->next_ring; 92 + a6xx_gpu->next_ring = NULL; 93 + 94 + set_preempt_state(a6xx_gpu, PREEMPT_FINISH); 95 + 96 + update_wptr(a6xx_gpu, a6xx_gpu->cur_ring); 97 + 98 + set_preempt_state(a6xx_gpu, PREEMPT_NONE); 99 + 100 + a8xx_preempt_keepalive_vote(gpu, false); 101 + 102 + trace_msm_gpu_preemption_irq(a6xx_gpu->cur_ring->id); 103 + 104 + /* 105 + * Retrigger preemption to avoid a deadlock that might occur when preemption 106 + * is skipped due to it being already in flight when requested. 107 + */ 108 + a8xx_preempt_trigger(gpu); 109 + } 110 + 111 + void a8xx_preempt_hw_init(struct msm_gpu *gpu) 112 + { 113 + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); 114 + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); 115 + int i; 116 + 117 + /* No preemption if we only have one ring */ 118 + if (gpu->nr_rings == 1) 119 + return; 120 + 121 + for (i = 0; i < gpu->nr_rings; i++) { 122 + struct a6xx_preempt_record *record_ptr = a6xx_gpu->preempt[i]; 123 + 124 + record_ptr->wptr = 0; 125 + record_ptr->rptr = 0; 126 + record_ptr->rptr_addr = shadowptr(a6xx_gpu, gpu->rb[i]); 127 + record_ptr->info = 0; 128 + record_ptr->data = 0; 129 + record_ptr->rbase = gpu->rb[i]->iova; 130 + } 131 + 132 + /* Write a 0 to signal that we aren't switching pagetables */ 133 + gpu_write64(gpu, REG_A8XX_CP_CONTEXT_SWITCH_SMMU_INFO, 0); 134 + 135 + /* Enable the GMEM save/restore feature for preemption */ 136 + gpu_write(gpu, REG_A6XX_RB_CONTEXT_SWITCH_GMEM_SAVE_RESTORE_ENABLE, 0x1); 137 + 138 + /* Reset the preemption state */ 139 + set_preempt_state(a6xx_gpu, PREEMPT_NONE); 140 + 141 + spin_lock_init(&a6xx_gpu->eval_lock); 142 + 143 + /* Always come up on rb 0 */ 144 + a6xx_gpu->cur_ring = gpu->rb[0]; 145 + } 146 + 147 + void 
a8xx_preempt_trigger(struct msm_gpu *gpu) 148 + { 149 + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); 150 + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); 151 + unsigned long flags; 152 + struct msm_ringbuffer *ring; 153 + unsigned int cntl; 154 + bool sysprof; 155 + 156 + if (gpu->nr_rings == 1) 157 + return; 158 + 159 + /* 160 + * Lock to make sure another thread attempting preemption doesn't skip it 161 + * while we are still evaluating the next ring. This makes sure the other 162 + * thread does start preemption if we abort it and avoids a soft lock. 163 + */ 164 + spin_lock_irqsave(&a6xx_gpu->eval_lock, flags); 165 + 166 + /* 167 + * Try to start preemption by moving from NONE to START. If 168 + * unsuccessful, a preemption is already in flight 169 + */ 170 + if (!try_preempt_state(a6xx_gpu, PREEMPT_NONE, PREEMPT_START)) { 171 + spin_unlock_irqrestore(&a6xx_gpu->eval_lock, flags); 172 + return; 173 + } 174 + 175 + cntl = A8XX_CP_CONTEXT_SWITCH_CNTL_LEVEL(a6xx_gpu->preempt_level); 176 + 177 + if (a6xx_gpu->skip_save_restore) 178 + cntl |= A8XX_CP_CONTEXT_SWITCH_CNTL_SKIP_SAVE_RESTORE; 179 + 180 + if (a6xx_gpu->uses_gmem) 181 + cntl |= A8XX_CP_CONTEXT_SWITCH_CNTL_USES_GMEM; 182 + 183 + cntl |= A8XX_CP_CONTEXT_SWITCH_CNTL_STOP; 184 + 185 + /* Get the next ring to preempt to */ 186 + ring = get_next_ring(gpu); 187 + 188 + /* 189 + * If no ring is populated or the highest priority ring is the current 190 + * one do nothing except to update the wptr to the latest and greatest 191 + */ 192 + if (!ring || (a6xx_gpu->cur_ring == ring)) { 193 + set_preempt_state(a6xx_gpu, PREEMPT_FINISH); 194 + update_wptr(a6xx_gpu, a6xx_gpu->cur_ring); 195 + set_preempt_state(a6xx_gpu, PREEMPT_NONE); 196 + spin_unlock_irqrestore(&a6xx_gpu->eval_lock, flags); 197 + return; 198 + } 199 + 200 + spin_unlock_irqrestore(&a6xx_gpu->eval_lock, flags); 201 + 202 + spin_lock_irqsave(&ring->preempt_lock, flags); 203 + 204 + struct a7xx_cp_smmu_info *smmu_info_ptr = 205 + 
a6xx_gpu->preempt_smmu[ring->id]; 206 + struct a6xx_preempt_record *record_ptr = a6xx_gpu->preempt[ring->id]; 207 + u64 ttbr0 = ring->memptrs->ttbr0; 208 + u32 context_idr = ring->memptrs->context_idr; 209 + 210 + smmu_info_ptr->ttbr0 = ttbr0; 211 + smmu_info_ptr->context_idr = context_idr; 212 + record_ptr->wptr = get_wptr(ring); 213 + 214 + /* 215 + * The GPU will write the wptr we set above when we preempt. Reset 216 + * restore_wptr to make sure that we don't write WPTR to the same 217 + * thing twice. It's still possible subsequent submissions will update 218 + * wptr again, in which case they will set the flag to true. This has 219 + * to be protected by the lock for setting the flag and updating wptr 220 + * to be atomic. 221 + */ 222 + ring->restore_wptr = false; 223 + 224 + trace_msm_gpu_preemption_trigger(a6xx_gpu->cur_ring->id, ring->id); 225 + 226 + spin_unlock_irqrestore(&ring->preempt_lock, flags); 227 + 228 + /* Set the keepalive bit to keep the GPU ON until preemption is complete */ 229 + a8xx_preempt_keepalive_vote(gpu, true); 230 + 231 + a6xx_fenced_write(a6xx_gpu, 232 + REG_A8XX_CP_CONTEXT_SWITCH_SMMU_INFO, a6xx_gpu->preempt_smmu_iova[ring->id], 233 + BIT(1), true); 234 + 235 + a6xx_fenced_write(a6xx_gpu, 236 + REG_A8XX_CP_CONTEXT_SWITCH_PRIV_NON_SECURE_RESTORE_ADDR, 237 + a6xx_gpu->preempt_iova[ring->id], BIT(1), true); 238 + 239 + a6xx_gpu->next_ring = ring; 240 + 241 + /* Start a timer to catch a stuck preemption */ 242 + mod_timer(&a6xx_gpu->preempt_timer, jiffies + msecs_to_jiffies(10000)); 243 + 244 + /* Enable or disable postamble as needed */ 245 + sysprof = refcount_read(&a6xx_gpu->base.base.sysprof_active) > 1; 246 + 247 + if (!sysprof && !a6xx_gpu->postamble_enabled) 248 + preempt_prepare_postamble(a6xx_gpu); 249 + 250 + if (sysprof && a6xx_gpu->postamble_enabled) 251 + preempt_disable_postamble(a6xx_gpu); 252 + 253 + /* Set the preemption state to triggered */ 254 + set_preempt_state(a6xx_gpu, PREEMPT_TRIGGERED); 255 + 256 + /* 
Trigger the preemption */ 257 + a6xx_fenced_write(a6xx_gpu, REG_A8XX_CP_CONTEXT_SWITCH_CNTL, cntl, BIT(1), false); 258 + } 259 +
+8 -11
drivers/gpu/drm/msm/adreno/adreno_gpu.c
··· 45 45 return -EINVAL; 46 46 } 47 47 48 - np = of_get_child_by_name(dev->of_node, "zap-shader"); 49 - if (!of_device_is_available(np)) { 48 + np = of_get_available_child_by_name(dev->of_node, "zap-shader"); 49 + if (!np) { 50 50 zap_available = false; 51 51 return -ENODEV; 52 52 } ··· 391 391 return 0; 392 392 case MSM_PARAM_TIMESTAMP: 393 393 if (adreno_gpu->funcs->get_timestamp) { 394 - int ret; 395 - 396 394 pm_runtime_get_sync(&gpu->pdev->dev); 397 - ret = adreno_gpu->funcs->get_timestamp(gpu, value); 395 + *value = adreno_gpu->funcs->get_timestamp(gpu); 398 396 pm_runtime_put_autosuspend(&gpu->pdev->dev); 399 397 400 - return ret; 398 + return 0; 401 399 } 402 400 return -EINVAL; 403 401 case MSM_PARAM_PRIORITIES: ··· 440 442 return 0; 441 443 case MSM_PARAM_HAS_PRR: 442 444 *value = adreno_smmu_has_prr(gpu); 445 + return 0; 446 + case MSM_PARAM_AQE: 447 + *value = !!(adreno_gpu->funcs->aqe_is_enabled && 448 + adreno_gpu->funcs->aqe_is_enabled(adreno_gpu)); 443 449 return 0; 444 450 default: 445 451 return UERR(EINVAL, drm, "%s: invalid param: %u", gpu->name, param); ··· 1186 1184 struct msm_gpu_config adreno_gpu_config = { 0 }; 1187 1185 struct msm_gpu *gpu = &adreno_gpu->base; 1188 1186 const char *gpu_name; 1189 - u32 speedbin; 1190 1187 int ret; 1191 1188 1192 1189 adreno_gpu->funcs = funcs; ··· 1213 1212 } else 1214 1213 devm_pm_opp_set_clkname(dev, "core"); 1215 1214 } 1216 - 1217 - if (adreno_read_speedbin(dev, &speedbin) || !speedbin) 1218 - speedbin = 0xffff; 1219 - adreno_gpu->speedbin = (uint16_t) (0xffff & speedbin); 1220 1215 1221 1216 gpu_name = devm_kasprintf(dev, GFP_KERNEL, "%"ADRENO_CHIPID_FMT, 1222 1217 ADRENO_CHIPID_ARGS(config->chip_id));
+6 -7
drivers/gpu/drm/msm/adreno/adreno_gpu.h
··· 63 63 #define ADRENO_QUIRK_PREEMPTION BIT(5) 64 64 #define ADRENO_QUIRK_4GB_VA BIT(6) 65 65 #define ADRENO_QUIRK_IFPC BIT(7) 66 + #define ADRENO_QUIRK_SOFTFUSE BIT(8) 66 67 67 68 /* Helper for formating the chip_id in the way that userspace tools like 68 69 * crashdec expect. 69 70 */ 70 - #define ADRENO_CHIPID_FMT "u.%u.%u.%u" 71 - #define ADRENO_CHIPID_ARGS(_c) \ 72 - (((_c) >> 24) & 0xff), \ 73 - (((_c) >> 16) & 0xff), \ 74 - (((_c) >> 8) & 0xff), \ 75 - ((_c) & 0xff) 71 + #define ADRENO_CHIPID_FMT "08x" 72 + #define ADRENO_CHIPID_ARGS(_c) (_c) 76 73 77 74 struct adreno_gpu; 78 75 79 76 struct adreno_gpu_funcs { 80 77 struct msm_gpu_funcs base; 81 78 struct msm_gpu *(*init)(struct drm_device *dev); 82 - int (*get_timestamp)(struct msm_gpu *gpu, uint64_t *value); 79 + u64 (*get_timestamp)(struct msm_gpu *gpu); 83 80 void (*bus_halt)(struct adreno_gpu *adreno_gpu, bool gx_off); 84 81 int (*mmu_fault_handler)(void *arg, unsigned long iova, int flags, void *data); 82 + bool (*gx_is_on)(struct adreno_gpu *adreno_gpu); 83 + bool (*aqe_is_enabled)(struct adreno_gpu *adreno_gpu); 85 84 }; 86 85 87 86 struct adreno_reglist {
+2 -4
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_10_0_sm8650.h
··· 322 322 .format_list = wb2_formats_rgb_yuv, 323 323 .num_formats = ARRAY_SIZE(wb2_formats_rgb_yuv), 324 324 .xin_id = 6, 325 - .vbif_idx = VBIF_RT, 326 325 .maxlinewidth = 4096, 327 326 .intr_wb_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 4), 328 327 }, ··· 377 378 .name = "intf_3", .id = INTF_3, 378 379 .base = 0x37000, .len = 0x280, 379 380 .type = INTF_DP, 380 - .controller_id = MSM_DP_CONTROLLER_1, 381 + .controller_id = MSM_DP_CONTROLLER_0, /* pair with intf_0 for DP MST */ 381 382 .prog_fetch_lines_worst_case = 24, 382 383 .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 30), 383 384 .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 31), ··· 444 445 .cwb = sm8650_cwb, 445 446 .intf_count = ARRAY_SIZE(sm8650_intf), 446 447 .intf = sm8650_intf, 447 - .vbif_count = ARRAY_SIZE(sm8650_vbif), 448 - .vbif = sm8650_vbif, 448 + .vbif = &sm8650_vbif, 449 449 .perf = &sm8650_perf_data, 450 450 }; 451 451
+2 -4
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_12_0_sm8750.h
··· 364 364 .format_list = wb2_formats_rgb_yuv, 365 365 .num_formats = ARRAY_SIZE(wb2_formats_rgb_yuv), 366 366 .xin_id = 6, 367 - .vbif_idx = VBIF_RT, 368 367 .maxlinewidth = 4096, 369 368 .intr_wb_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 4), 370 369 }, ··· 419 420 .name = "intf_3", .id = INTF_3, 420 421 .base = 0x37000, .len = 0x4bc, 421 422 .type = INTF_DP, 422 - .controller_id = MSM_DP_CONTROLLER_1, 423 + .controller_id = MSM_DP_CONTROLLER_0, /* pair with intf_0 for DP MST */ 423 424 .prog_fetch_lines_worst_case = 24, 424 425 .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 30), 425 426 .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 31), ··· 485 486 .cwb = sm8650_cwb, 486 487 .intf_count = ARRAY_SIZE(sm8750_intf), 487 488 .intf = sm8750_intf, 488 - .vbif_count = ARRAY_SIZE(sm8650_vbif), 489 - .vbif = sm8650_vbif, 489 + .vbif = &sm8650_vbif, 490 490 .perf = &sm8750_perf_data, 491 491 }; 492 492
+4 -6
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_12_2_glymur.h
··· 371 371 .format_list = wb2_formats_rgb_yuv, 372 372 .num_formats = ARRAY_SIZE(wb2_formats_rgb_yuv), 373 373 .xin_id = 6, 374 - .vbif_idx = VBIF_RT, 375 374 .maxlinewidth = 4096, 376 375 .intr_wb_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 4), 377 376 }, ··· 425 426 }, { 426 427 .name = "intf_3", .id = INTF_3, 427 428 .base = 0x37000, .len = 0x400, 428 - .type = INTF_NONE, 429 + .type = INTF_DP, 429 430 .controller_id = MSM_DP_CONTROLLER_0, /* pair with intf_0 for DP MST */ 430 431 .prog_fetch_lines_worst_case = 24, 431 432 .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 30), ··· 457 458 }, { 458 459 .name = "intf_7", .id = INTF_7, 459 460 .base = 0x3b000, .len = 0x400, 460 - .type = INTF_NONE, 461 + .type = INTF_DP, 461 462 .controller_id = MSM_DP_CONTROLLER_2, /* pair with intf_6 for DP MST */ 462 463 .prog_fetch_lines_worst_case = 24, 463 464 .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 18), ··· 465 466 }, { 466 467 .name = "intf_8", .id = INTF_8, 467 468 .base = 0x3c000, .len = 0x400, 468 - .type = INTF_NONE, 469 + .type = INTF_DP, 469 470 .controller_id = MSM_DP_CONTROLLER_1, /* pair with intf_4 for DP MST */ 470 471 .prog_fetch_lines_worst_case = 24, 471 472 .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 12), ··· 532 533 .cwb = sm8650_cwb, 533 534 .intf_count = ARRAY_SIZE(glymur_intf), 534 535 .intf = glymur_intf, 535 - .vbif_count = ARRAY_SIZE(sm8650_vbif), 536 - .vbif = sm8650_vbif, 536 + .vbif = &sm8650_vbif, 537 537 .perf = &glymur_perf_data, 538 538 }; 539 539
+363
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_12_4_eliza.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + 3 + #ifndef _DPU_12_4_ELIZA_H 4 + #define _DPU_12_4_ELIZA_H 5 + 6 + static const struct dpu_caps eliza_dpu_caps = { 7 + .max_mixer_width = DEFAULT_DPU_OUTPUT_LINE_WIDTH, 8 + .max_mixer_blendstages = 0xb, 9 + .has_src_split = true, 10 + .has_dim_layer = true, 11 + .has_idle_pc = true, 12 + .has_3d_merge = true, 13 + .max_linewidth = 8192, 14 + .pixel_ram_size = DEFAULT_PIXEL_RAM_SIZE, 15 + }; 16 + 17 + static const struct dpu_mdp_cfg eliza_mdp = { 18 + .name = "top_0", 19 + .base = 0, .len = 0x494, 20 + .clk_ctrls = { 21 + [DPU_CLK_CTRL_REG_DMA] = { .reg_off = 0x2bc, .bit_off = 20 }, 22 + }, 23 + }; 24 + 25 + static const struct dpu_ctl_cfg eliza_ctl[] = { 26 + { 27 + .name = "ctl_0", .id = CTL_0, 28 + .base = 0x15000, .len = 0x1000, 29 + .intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 9), 30 + }, { 31 + .name = "ctl_1", .id = CTL_1, 32 + .base = 0x16000, .len = 0x1000, 33 + .intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 10), 34 + }, { 35 + .name = "ctl_2", .id = CTL_2, 36 + .base = 0x17000, .len = 0x1000, 37 + .intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 11), 38 + }, { 39 + .name = "ctl_3", .id = CTL_3, 40 + .base = 0x18000, .len = 0x1000, 41 + .intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 12), 42 + }, 43 + }; 44 + 45 + static const struct dpu_sspp_cfg eliza_sspp[] = { 46 + { 47 + .name = "sspp_0", .id = SSPP_VIG0, 48 + .base = 0x4000, .len = 0x344, 49 + .features = VIG_SDM845_MASK_SDMA, 50 + .sblk = &dpu_vig_sblk_qseed3_3_4, 51 + .xin_id = 0, 52 + .type = SSPP_TYPE_VIG, 53 + }, { 54 + .name = "sspp_1", .id = SSPP_VIG1, 55 + .base = 0x6000, .len = 0x344, 56 + .features = VIG_SDM845_MASK_SDMA, 57 + .sblk = &dpu_vig_sblk_qseed3_3_4, 58 + .xin_id = 4, 59 + .type = SSPP_TYPE_VIG, 60 + }, { 61 + .name = "sspp_8", .id = SSPP_DMA0, 62 + .base = 0x24000, .len = 0x344, 63 + .features = DMA_SDM845_MASK_SDMA, 64 + .sblk = &dpu_dma_sblk, 65 + .xin_id = 1, 66 + .type = SSPP_TYPE_DMA, 67 + }, {
68 + .name = "sspp_9", .id = SSPP_DMA1, 69 + .base = 0x26000, .len = 0x344, 70 + .features = DMA_SDM845_MASK_SDMA, 71 + .sblk = &dpu_dma_sblk, 72 + .xin_id = 5, 73 + .type = SSPP_TYPE_DMA, 74 + }, { 75 + .name = "sspp_10", .id = SSPP_DMA2, 76 + .base = 0x28000, .len = 0x344, 77 + .features = DMA_SDM845_MASK_SDMA, 78 + .sblk = &dpu_dma_sblk, 79 + .xin_id = 9, 80 + .type = SSPP_TYPE_DMA, 81 + }, { 82 + .name = "sspp_11", .id = SSPP_DMA3, 83 + .base = 0x2a000, .len = 0x344, 84 + .features = DMA_SDM845_MASK_SDMA, 85 + .sblk = &dpu_dma_sblk, 86 + .xin_id = 13, 87 + .type = SSPP_TYPE_DMA, 88 + }, 89 + }; 90 + 91 + static const struct dpu_lm_cfg eliza_lm[] = { 92 + { 93 + .name = "lm_0", .id = LM_0, 94 + .base = 0x44000, .len = 0x400, 95 + .features = MIXER_MSM8998_MASK, 96 + .sblk = &sm8750_lm_sblk, 97 + .lm_pair = LM_1, 98 + .pingpong = PINGPONG_0, 99 + .dspp = DSPP_0, 100 + }, { 101 + .name = "lm_1", .id = LM_1, 102 + .base = 0x45000, .len = 0x400, 103 + .features = MIXER_MSM8998_MASK, 104 + .sblk = &sm8750_lm_sblk, 105 + .lm_pair = LM_0, 106 + .pingpong = PINGPONG_1, 107 + .dspp = DSPP_1, 108 + }, { 109 + .name = "lm_2", .id = LM_2, 110 + .base = 0x46000, .len = 0x400, 111 + .features = MIXER_MSM8998_MASK, 112 + .sblk = &sm8750_lm_sblk, 113 + .lm_pair = LM_3, 114 + .pingpong = PINGPONG_2, 115 + .dspp = DSPP_2, 116 + }, { 117 + .name = "lm_3", .id = LM_3, 118 + .base = 0x47000, .len = 0x400, 119 + .features = MIXER_MSM8998_MASK, 120 + .sblk = &sm8750_lm_sblk, 121 + .lm_pair = LM_2, 122 + .pingpong = PINGPONG_3, 123 + }, 124 + }; 125 + 126 + static const struct dpu_dspp_cfg eliza_dspp[] = { 127 + { 128 + .name = "dspp_0", .id = DSPP_0, 129 + .base = 0x54000, .len = 0x1800, 130 + .sblk = &sm8750_dspp_sblk, 131 + }, { 132 + .name = "dspp_1", .id = DSPP_1, 133 + .base = 0x56000, .len = 0x1800, 134 + .sblk = &sm8750_dspp_sblk, 135 + }, { 136 + .name = "dspp_2", .id = DSPP_2, 137 + .base = 0x58000, .len = 0x1800, 138 + .sblk = &sm8750_dspp_sblk, 139 + }, 140 + }; 141 +
142 + static const struct dpu_pingpong_cfg eliza_pp[] = { 143 + { 144 + .name = "pingpong_0", .id = PINGPONG_0, 145 + .base = 0x69000, .len = 0, 146 + .sblk = &sc7280_pp_sblk, 147 + .merge_3d = MERGE_3D_0, 148 + .intr_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 8), 149 + }, { 150 + .name = "pingpong_1", .id = PINGPONG_1, 151 + .base = 0x6a000, .len = 0, 152 + .sblk = &sc7280_pp_sblk, 153 + .merge_3d = MERGE_3D_0, 154 + .intr_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 9), 155 + }, { 156 + .name = "pingpong_2", .id = PINGPONG_2, 157 + .base = 0x6b000, .len = 0, 158 + .sblk = &sc7280_pp_sblk, 159 + .merge_3d = MERGE_3D_1, 160 + .intr_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 10), 161 + }, { 162 + .name = "pingpong_3", .id = PINGPONG_3, 163 + .base = 0x6c000, .len = 0, 164 + .sblk = &sc7280_pp_sblk, 165 + .merge_3d = MERGE_3D_1, 166 + .intr_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 11), 167 + }, { 168 + .name = "pingpong_cwb_0", .id = PINGPONG_CWB_0, 169 + .base = 0x66000, .len = 0, 170 + .sblk = &sc7280_pp_sblk, 171 + .merge_3d = MERGE_3D_2, 172 + }, { 173 + .name = "pingpong_cwb_1", .id = PINGPONG_CWB_1, 174 + .base = 0x66400, .len = 0, 175 + .sblk = &sc7280_pp_sblk, 176 + .merge_3d = MERGE_3D_2, 177 + }, { 178 + .name = "pingpong_cwb_2", .id = PINGPONG_CWB_2, 179 + .base = 0x7e000, .len = 0, 180 + .sblk = &sc7280_pp_sblk, 181 + .merge_3d = MERGE_3D_3, 182 + }, { 183 + .name = "pingpong_cwb_3", .id = PINGPONG_CWB_3, 184 + .base = 0x7e400, .len = 0, 185 + .sblk = &sc7280_pp_sblk, 186 + .merge_3d = MERGE_3D_3, 187 + }, 188 + }; 189 + 190 + static const struct dpu_merge_3d_cfg eliza_merge_3d[] = { 191 + { 192 + .name = "merge_3d_0", .id = MERGE_3D_0, 193 + .base = 0x4e000, .len = 0x1c, 194 + }, { 195 + .name = "merge_3d_1", .id = MERGE_3D_1, 196 + .base = 0x4f000, .len = 0x1c, 197 + }, { 198 + .name = "merge_3d_2", .id = MERGE_3D_2, 199 + .base = 0x66700, .len = 0x1c, 200 + }, { 201 + .name = "merge_3d_3", .id = MERGE_3D_3, 202 + .base = 0x7e700, .len = 0x1c, 203 + }, 204 + }; 205 +
206 + /* 207 + * NOTE: Each display compression engine (DCE) contains dual hard 208 + * slice DSC encoders so both share same base address but with 209 + * its own different sub block address. 210 + */ 211 + static const struct dpu_dsc_cfg eliza_dsc[] = { 212 + { 213 + .name = "dce_0_0", .id = DSC_0, 214 + .base = 0x80000, .len = 0x8, 215 + .features = BIT(DPU_DSC_NATIVE_42x_EN), 216 + .sblk = &sm8750_dsc_sblk_0, 217 + }, { 218 + .name = "dce_0_1", .id = DSC_1, 219 + .base = 0x80000, .len = 0x8, 220 + .features = BIT(DPU_DSC_NATIVE_42x_EN), 221 + .sblk = &sm8750_dsc_sblk_1, 222 + }, { 223 + .name = "dce_1_0", .id = DSC_2, 224 + .base = 0x81000, .len = 0x8, 225 + .features = BIT(DPU_DSC_NATIVE_42x_EN), 226 + .sblk = &sm8750_dsc_sblk_0, 227 + }, 228 + }; 229 + 230 + static const struct dpu_wb_cfg eliza_wb[] = { 231 + { 232 + .name = "wb_2", .id = WB_2, 233 + .base = 0x65000, .len = 0x2c8, 234 + .features = WB_SDM845_MASK, 235 + .format_list = wb2_formats_rgb_yuv, 236 + .num_formats = ARRAY_SIZE(wb2_formats_rgb_yuv), 237 + .xin_id = 6, 238 + .maxlinewidth = 4096, 239 + .intr_wb_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 4), 240 + }, 241 + }; 242 + 243 + static const struct dpu_cwb_cfg eliza_cwb[] = { 244 + { 245 + .name = "cwb_0", .id = CWB_0, 246 + .base = 0x66200, .len = 0x20, 247 + }, 248 + { 249 + .name = "cwb_1", .id = CWB_1, 250 + .base = 0x66600, .len = 0x20, 251 + }, 252 + { 253 + .name = "cwb_2", .id = CWB_2, 254 + .base = 0x7e200, .len = 0x20, 255 + }, 256 + { 257 + .name = "cwb_3", .id = CWB_3, 258 + .base = 0x7e600, .len = 0x20, 259 + }, 260 + }; 261 + 262 + static const struct dpu_intf_cfg eliza_intf[] = { 263 + { 264 + .name = "intf_0", .id = INTF_0, 265 + .base = 0x34000, .len = 0x4bc, 266 + .type = INTF_DP, 267 + .controller_id = MSM_DP_CONTROLLER_0, 268 + .prog_fetch_lines_worst_case = 24, 269 + .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 24), 270 + .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 25), 271 + }, {
272 + .name = "intf_1", .id = INTF_1, 273 + .base = 0x35000, .len = 0x4bc, 274 + .type = INTF_DSI, 275 + .controller_id = MSM_DSI_CONTROLLER_0, 276 + .prog_fetch_lines_worst_case = 24, 277 + .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 26), 278 + .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 27), 279 + .intr_tear_rd_ptr = DPU_IRQ_IDX(MDP_INTF1_TEAR_INTR, 2), 280 + }, { 281 + .name = "intf_2", .id = INTF_2, 282 + .base = 0x36000, .len = 0x4bc, 283 + .type = INTF_DSI, 284 + .controller_id = MSM_DSI_CONTROLLER_1, 285 + .prog_fetch_lines_worst_case = 24, 286 + .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 28), 287 + .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 29), 288 + .intr_tear_rd_ptr = DPU_IRQ_IDX(MDP_INTF2_TEAR_INTR, 2), 289 + }, { 290 + .name = "intf_3", .id = INTF_3, 291 + .base = 0x37000, .len = 0x4bc, 292 + .type = INTF_DP, 293 + .controller_id = MSM_DP_CONTROLLER_0, /* pair with intf_0 for DP MST */ 294 + .prog_fetch_lines_worst_case = 24, 295 + .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 30), 296 + .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 31), 297 + } 298 + }; 299 + 300 + static const struct dpu_perf_cfg eliza_perf_data = { 301 + .max_bw_low = 6800000, 302 + .max_bw_high = 14200000, 303 + .min_core_ib = 2500000, 304 + .min_llcc_ib = 0, 305 + .min_dram_ib = 1600000, 306 + .min_prefill_lines = 35, 307 + .danger_lut_tbl = {0x3ffff, 0x3ffff, 0x0}, 308 + .safe_lut_tbl = {0xfe00, 0xfe00, 0xffff}, 309 + .qos_lut_tbl = { 310 + {.nentry = ARRAY_SIZE(sc7180_qos_linear), 311 + .entries = sc7180_qos_linear 312 + }, 313 + {.nentry = ARRAY_SIZE(sc7180_qos_macrotile), 314 + .entries = sc7180_qos_macrotile 315 + }, 316 + {.nentry = ARRAY_SIZE(sc7180_qos_nrt), 317 + .entries = sc7180_qos_nrt 318 + }, 319 + /* TODO: macrotile-qseed is different from macrotile */ 320 + }, 321 + .cdp_cfg = { 322 + {.rd_enable = 1, .wr_enable = 1}, 323 + {.rd_enable = 1, .wr_enable = 0} 324 + }, 325 + .clk_inefficiency_factor = 105, 326 + .bw_inefficiency_factor = 120, 327 + }; 328 +
329 + static const struct dpu_mdss_version eliza_mdss_ver = { 330 + .core_major_ver = 12, 331 + .core_minor_ver = 4, 332 + }; 333 + 334 + const struct dpu_mdss_cfg dpu_eliza_cfg = { 335 + .mdss_ver = &eliza_mdss_ver, 336 + .caps = &eliza_dpu_caps, 337 + .mdp = &eliza_mdp, 338 + .cdm = &dpu_cdm_5_x, 339 + .ctl_count = ARRAY_SIZE(eliza_ctl), 340 + .ctl = eliza_ctl, 341 + .sspp_count = ARRAY_SIZE(eliza_sspp), 342 + .sspp = eliza_sspp, 343 + .mixer_count = ARRAY_SIZE(eliza_lm), 344 + .mixer = eliza_lm, 345 + .dspp_count = ARRAY_SIZE(eliza_dspp), 346 + .dspp = eliza_dspp, 347 + .pingpong_count = ARRAY_SIZE(eliza_pp), 348 + .pingpong = eliza_pp, 349 + .dsc_count = ARRAY_SIZE(eliza_dsc), 350 + .dsc = eliza_dsc, 351 + .merge_3d_count = ARRAY_SIZE(eliza_merge_3d), 352 + .merge_3d = eliza_merge_3d, 353 + .wb_count = ARRAY_SIZE(eliza_wb), 354 + .wb = eliza_wb, 355 + .cwb_count = ARRAY_SIZE(eliza_cwb), 356 + .cwb = eliza_cwb, 357 + .intf_count = ARRAY_SIZE(eliza_intf), 358 + .intf = eliza_intf, 359 + .vbif = &sm8650_vbif, 360 + .perf = &eliza_perf_data, 361 + }; 362 + 363 + #endif
+2 -4
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_13_0_kaanapali.h
··· 362 362 .format_list = wb2_formats_rgb_yuv, 363 363 .num_formats = ARRAY_SIZE(wb2_formats_rgb_yuv), 364 364 .xin_id = 6, 365 - .vbif_idx = VBIF_RT, 366 365 .maxlinewidth = 4096, 367 366 .intr_wb_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 4), 368 367 }, ··· 417 418 .name = "intf_3", .id = INTF_3, 418 419 .base = 0x190000, .len = 0x4bc, 419 420 .type = INTF_DP, 420 - .controller_id = MSM_DP_CONTROLLER_1, 421 + .controller_id = MSM_DP_CONTROLLER_0, /* pair with intf_0 for DP MST */ 421 422 .prog_fetch_lines_worst_case = 24, 422 423 .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 30), 423 424 .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 31), ··· 483 484 .cwb = sm8650_cwb, 484 485 .intf_count = ARRAY_SIZE(kaanapali_intf), 485 486 .intf = kaanapali_intf, 486 - .vbif_count = ARRAY_SIZE(sm8650_vbif), 487 - .vbif = sm8650_vbif, 487 + .vbif = &sm8650_vbif, 488 488 .perf = &kaanapali_perf_data, 489 489 }; 490 490
+1 -2
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_14_msm8937.h
··· 197 197 .pingpong = msm8937_pp, 198 198 .intf_count = ARRAY_SIZE(msm8937_intf), 199 199 .intf = msm8937_intf, 200 - .vbif_count = ARRAY_SIZE(msm8996_vbif), 201 - .vbif = msm8996_vbif, 200 + .vbif = &msm8996_vbif, 202 201 .perf = &msm8937_perf_data, 203 202 }; 204 203
+1 -2
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_15_msm8917.h
··· 176 176 .pingpong = msm8917_pp, 177 177 .intf_count = ARRAY_SIZE(msm8917_intf), 178 178 .intf = msm8917_intf, 179 - .vbif_count = ARRAY_SIZE(msm8996_vbif), 180 - .vbif = msm8996_vbif, 179 + .vbif = &msm8996_vbif, 181 180 .perf = &msm8917_perf_data, 182 181 }; 183 182
+1 -9
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_16_msm8953.h
··· 121 121 122 122 static const struct dpu_intf_cfg msm8953_intf[] = { 123 123 { 124 - .name = "intf_0", .id = INTF_0, 125 - .base = 0x6a000, .len = 0x268, 126 - .type = INTF_NONE, 127 - .prog_fetch_lines_worst_case = 14, 128 - .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 24), 129 - .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 25), 130 - }, { 131 124 .name = "intf_1", .id = INTF_1, 132 125 .base = 0x6a800, .len = 0x268, 133 126 .type = INTF_DSI, ··· 197 204 .pingpong = msm8953_pp, 198 205 .intf_count = ARRAY_SIZE(msm8953_intf), 199 206 .intf = msm8953_intf, 200 - .vbif_count = ARRAY_SIZE(msm8996_vbif), 201 - .vbif = msm8996_vbif, 207 + .vbif = &msm8996_vbif, 202 208 .perf = &msm8953_perf_data, 203 209 }; 204 210
+1 -2
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_1_7_msm8996.h
··· 320 320 .dsc = msm8996_dsc, 321 321 .intf_count = ARRAY_SIZE(msm8996_intf), 322 322 .intf = msm8996_intf, 323 - .vbif_count = ARRAY_SIZE(msm8996_vbif), 324 - .vbif = msm8996_vbif, 323 + .vbif = &msm8996_vbif, 325 324 .perf = &msm8996_perf_data, 326 325 }; 327 326
+1 -2
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
··· 305 305 .dsc = msm8998_dsc, 306 306 .intf_count = ARRAY_SIZE(msm8998_intf), 307 307 .intf = msm8998_intf, 308 - .vbif_count = ARRAY_SIZE(msm8998_vbif), 309 - .vbif = msm8998_vbif, 308 + .vbif = &msm8998_vbif, 310 309 .perf = &msm8998_perf_data, 311 310 }; 312 311
+1 -2
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_2_sdm660.h
··· 269 269 .dsc = sdm660_dsc, 270 270 .intf_count = ARRAY_SIZE(sdm660_intf), 271 271 .intf = sdm660_intf, 272 - .vbif_count = ARRAY_SIZE(msm8998_vbif), 273 - .vbif = msm8998_vbif, 272 + .vbif = &msm8998_vbif, 274 273 .perf = &sdm660_perf_data, 275 274 }; 276 275
+1 -2
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_3_sdm630.h
··· 207 207 .pingpong = sdm630_pp, 208 208 .intf_count = ARRAY_SIZE(sdm630_intf), 209 209 .intf = sdm630_intf, 210 - .vbif_count = ARRAY_SIZE(msm8998_vbif), 211 - .vbif = msm8998_vbif, 210 + .vbif = &msm8998_vbif, 212 211 .perf = &sdm630_perf_data, 213 212 }; 214 213
+2 -3
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
··· 258 258 .name = "intf_3", .id = INTF_3, 259 259 .base = 0x6b800, .len = 0x280, 260 260 .type = INTF_DP, 261 - .controller_id = MSM_DP_CONTROLLER_1, 261 + .controller_id = MSM_DP_CONTROLLER_0, /* pair with intf_0 for DP MST */ 262 262 .prog_fetch_lines_worst_case = 24, 263 263 .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 30), 264 264 .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 31), ··· 325 325 .dsc = sdm845_dsc, 326 326 .intf_count = ARRAY_SIZE(sdm845_intf), 327 327 .intf = sdm845_intf, 328 - .vbif_count = ARRAY_SIZE(sdm845_vbif), 329 - .vbif = sdm845_vbif, 328 + .vbif = &sdm845_vbif, 330 329 .perf = &sdm845_perf_data, 331 330 }; 332 331
+1 -2
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_1_sdm670.h
··· 144 144 .dsc = sdm670_dsc, 145 145 .intf_count = ARRAY_SIZE(sdm845_intf), 146 146 .intf = sdm845_intf, 147 - .vbif_count = ARRAY_SIZE(sdm845_vbif), 148 - .vbif = sdm845_vbif, 147 + .vbif = &sdm845_vbif, 149 148 .perf = &sdm845_perf_data, 150 149 }; 151 150
+2 -4
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
··· 280 280 .num_formats = ARRAY_SIZE(wb2_formats_rgb_yuv), 281 281 .clk_ctrl = DPU_CLK_CTRL_WB2, 282 282 .xin_id = 6, 283 - .vbif_idx = VBIF_RT, 284 283 .maxlinewidth = 4096, 285 284 .intr_wb_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 4), 286 285 }, ··· 316 317 .name = "intf_3", .id = INTF_3, 317 318 .base = 0x6b800, .len = 0x280, 318 319 .type = INTF_DP, 319 - .controller_id = MSM_DP_CONTROLLER_1, 320 + .controller_id = MSM_DP_CONTROLLER_0, /* pair with intf_0 for DP MST */ 320 321 .prog_fetch_lines_worst_case = 24, 321 322 .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 30), 322 323 .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 31), ··· 380 381 .wb = sm8150_wb, 381 382 .intf_count = ARRAY_SIZE(sm8150_intf), 382 383 .intf = sm8150_intf, 383 - .vbif_count = ARRAY_SIZE(sdm845_vbif), 384 - .vbif = sdm845_vbif, 384 + .vbif = &sdm845_vbif, 385 385 .perf = &sm8150_perf_data, 386 386 }; 387 387
+1 -3
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
··· 286 286 .num_formats = ARRAY_SIZE(wb2_formats_rgb_yuv), 287 287 .clk_ctrl = DPU_CLK_CTRL_WB2, 288 288 .xin_id = 6, 289 - .vbif_idx = VBIF_RT, 290 289 .maxlinewidth = 4096, 291 290 .intr_wb_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 4), 292 291 }, ··· 404 405 .wb = sc8180x_wb, 405 406 .intf_count = ARRAY_SIZE(sc8180x_intf), 406 407 .intf = sc8180x_intf, 407 - .vbif_count = ARRAY_SIZE(sdm845_vbif), 408 - .vbif = sdm845_vbif, 408 + .vbif = &sdm845_vbif, 409 409 .perf = &sc8180x_perf_data, 410 410 }; 411 411
+2 -4
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_2_sm7150.h
··· 230 230 .name = "intf_3", .id = INTF_3, 231 231 .base = 0x6b800, .len = 0x280, 232 232 .type = INTF_DP, 233 - .controller_id = MSM_DP_CONTROLLER_1, 233 + .controller_id = MSM_DP_CONTROLLER_0, /* pair with intf_0 for DP MST */ 234 234 .prog_fetch_lines_worst_case = 24, 235 235 .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 30), 236 236 .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 31), ··· 246 246 .num_formats = ARRAY_SIZE(wb2_formats_rgb_yuv), 247 247 .clk_ctrl = DPU_CLK_CTRL_WB2, 248 248 .xin_id = 6, 249 - .vbif_idx = VBIF_RT, 250 249 .maxlinewidth = 4096, 251 250 .intr_wb_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 4), 252 251 }, ··· 308 309 .intf = sm7150_intf, 309 310 .wb_count = ARRAY_SIZE(sm7150_wb), 310 311 .wb = sm7150_wb, 311 - .vbif_count = ARRAY_SIZE(sdm845_vbif), 312 - .vbif = sdm845_vbif, 312 + .vbif = &sdm845_vbif, 313 313 .perf = &sm7150_perf_data, 314 314 }; 315 315
+2 -4
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_3_sm6150.h
··· 158 158 .num_formats = ARRAY_SIZE(wb2_formats_rgb_yuv), 159 159 .clk_ctrl = DPU_CLK_CTRL_WB2, 160 160 .xin_id = 6, 161 - .vbif_idx = VBIF_RT, 162 161 .maxlinewidth = 2160, 163 162 .intr_wb_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 4), 164 163 }, ··· 185 186 .name = "intf_3", .id = INTF_3, 186 187 .base = 0x6b800, .len = 0x280, 187 188 .type = INTF_DP, 188 - .controller_id = MSM_DP_CONTROLLER_1, 189 + .controller_id = MSM_DP_CONTROLLER_0, /* pair with intf_0 for DP MST */ 189 190 .prog_fetch_lines_worst_case = 24, 190 191 .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 30), 191 192 .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 31), ··· 245 246 .wb = sm6150_wb, 246 247 .intf_count = ARRAY_SIZE(sm6150_intf), 247 248 .intf = sm6150_intf, 248 - .vbif_count = ARRAY_SIZE(sdm845_vbif), 249 - .vbif = sdm845_vbif, 249 + .vbif = &sdm845_vbif, 250 250 .perf = &sm6150_perf_data, 251 251 }; 252 252
+1 -3
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_4_sm6125.h
··· 137 137 .num_formats = ARRAY_SIZE(wb2_formats_rgb_yuv), 138 138 .clk_ctrl = DPU_CLK_CTRL_WB2, 139 139 .xin_id = 6, 140 - .vbif_idx = VBIF_RT, 141 140 .maxlinewidth = 2160, 142 141 .intr_wb_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 4), 143 142 }, ··· 216 217 .wb = sm6125_wb, 217 218 .intf_count = ARRAY_SIZE(sm6125_intf), 218 219 .intf = sm6125_intf, 219 - .vbif_count = ARRAY_SIZE(sdm845_vbif), 220 - .vbif = sdm845_vbif, 220 + .vbif = &sdm845_vbif, 221 221 .perf = &sm6125_perf_data, 222 222 }; 223 223
+2 -4
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h
··· 301 301 .name = "intf_3", .id = INTF_3, 302 302 .base = 0x6b800, .len = 0x280, 303 303 .type = INTF_DP, 304 - .controller_id = MSM_DP_CONTROLLER_1, 304 + .controller_id = MSM_DP_CONTROLLER_0, /* pair with intf_0 for DP MST */ 305 305 .prog_fetch_lines_worst_case = 24, 306 306 .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 30), 307 307 .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 31), ··· 317 317 .num_formats = ARRAY_SIZE(wb2_formats_rgb_yuv), 318 318 .clk_ctrl = DPU_CLK_CTRL_WB2, 319 319 .xin_id = 6, 320 - .vbif_idx = VBIF_RT, 321 320 .maxlinewidth = 4096, 322 321 .intr_wb_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 4), 323 322 }, ··· 377 378 .merge_3d = sm8250_merge_3d, 378 379 .intf_count = ARRAY_SIZE(sm8250_intf), 379 380 .intf = sm8250_intf, 380 - .vbif_count = ARRAY_SIZE(sdm845_vbif), 381 - .vbif = sdm845_vbif, 381 + .vbif = &sdm845_vbif, 382 382 .wb_count = ARRAY_SIZE(sm8250_wb), 383 383 .wb = sm8250_wb, 384 384 .perf = &sm8250_perf_data,
+1 -3
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_2_sc7180.h
··· 153 153 .num_formats = ARRAY_SIZE(wb2_formats_rgb_yuv), 154 154 .clk_ctrl = DPU_CLK_CTRL_WB2, 155 155 .xin_id = 6, 156 - .vbif_idx = VBIF_RT, 157 156 .maxlinewidth = 4096, 158 157 .intr_wb_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 4), 159 158 }, ··· 210 211 .intf = sc7180_intf, 211 212 .wb_count = ARRAY_SIZE(sc7180_wb), 212 213 .wb = sc7180_wb, 213 - .vbif_count = ARRAY_SIZE(sdm845_vbif), 214 - .vbif = sdm845_vbif, 214 + .vbif = &sdm845_vbif, 215 215 .perf = &sc7180_perf_data, 216 216 }; 217 217
+1 -2
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_3_sm6115.h
··· 144 144 .pingpong = sm6115_pp, 145 145 .intf_count = ARRAY_SIZE(sm6115_intf), 146 146 .intf = sm6115_intf, 147 - .vbif_count = ARRAY_SIZE(sdm845_vbif), 148 - .vbif = sdm845_vbif, 147 + .vbif = &sdm845_vbif, 149 148 .perf = &sm6115_perf_data, 150 149 }; 151 150
+1 -3
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_4_sm6350.h
··· 147 147 .num_formats = ARRAY_SIZE(wb2_formats_rgb_yuv), 148 148 .clk_ctrl = DPU_CLK_CTRL_WB2, 149 149 .xin_id = 6, 150 - .vbif_idx = VBIF_RT, 151 150 .maxlinewidth = 1920, 152 151 .intr_wb_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 4), 153 152 }, ··· 228 229 .wb = sm6350_wb, 229 230 .intf_count = ARRAY_SIZE(sm6350_intf), 230 231 .intf = sm6350_intf, 231 - .vbif_count = ARRAY_SIZE(sdm845_vbif), 232 - .vbif = sdm845_vbif, 232 + .vbif = &sdm845_vbif, 233 233 .perf = &sm6350_perf_data, 234 234 }; 235 235
+1 -2
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_5_qcm2290.h
··· 137 137 .pingpong = qcm2290_pp, 138 138 .intf_count = ARRAY_SIZE(qcm2290_intf), 139 139 .intf = qcm2290_intf, 140 - .vbif_count = ARRAY_SIZE(sdm845_vbif), 141 - .vbif = sdm845_vbif, 140 + .vbif = &sdm845_vbif, 142 141 .perf = &qcm2290_perf_data, 143 142 }; 144 143
+1 -2
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_9_sm6375.h
··· 155 155 .pingpong = sm6375_pp, 156 156 .intf_count = ARRAY_SIZE(sm6375_intf), 157 157 .intf = sm6375_intf, 158 - .vbif_count = ARRAY_SIZE(sdm845_vbif), 159 - .vbif = sdm845_vbif, 158 + .vbif = &sdm845_vbif, 160 159 .perf = &sm6375_perf_data, 161 160 }; 162 161
+2 -4
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_0_sm8350.h
··· 290 290 .num_formats = ARRAY_SIZE(wb2_formats_rgb_yuv), 291 291 .clk_ctrl = DPU_CLK_CTRL_WB2, 292 292 .xin_id = 6, 293 - .vbif_idx = VBIF_RT, 294 293 .maxlinewidth = 4096, 295 294 .intr_wb_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 4), 296 295 }, ··· 326 327 .name = "intf_3", .id = INTF_3, 327 328 .base = 0x37000, .len = 0x280, 328 329 .type = INTF_DP, 329 - .controller_id = MSM_DP_CONTROLLER_1, 330 + .controller_id = MSM_DP_CONTROLLER_0, /* pair with intf_0 for DP MST */ 330 331 .prog_fetch_lines_worst_case = 24, 331 332 .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 30), 332 333 .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 31), ··· 391 392 .wb = sm8350_wb, 392 393 .intf_count = ARRAY_SIZE(sm8350_intf), 393 394 .intf = sm8350_intf, 394 - .vbif_count = ARRAY_SIZE(sdm845_vbif), 395 - .vbif = sdm845_vbif, 395 + .vbif = &sdm845_vbif, 396 396 .perf = &sm8350_perf_data, 397 397 }; 398 398
+1 -3
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_2_sc7280.h
··· 172 172 .num_formats = ARRAY_SIZE(wb2_formats_rgb_yuv), 173 173 .clk_ctrl = DPU_CLK_CTRL_WB2, 174 174 .xin_id = 6, 175 - .vbif_idx = VBIF_RT, 176 175 .maxlinewidth = 4096, 177 176 .intr_wb_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 4), 178 177 }, ··· 262 263 .wb = sc7280_wb, 263 264 .intf_count = ARRAY_SIZE(sc7280_intf), 264 265 .intf = sc7280_intf, 265 - .vbif_count = ARRAY_SIZE(sdm845_vbif), 266 - .vbif = sdm845_vbif, 266 + .vbif = &sdm845_vbif, 267 267 .perf = &sc7280_perf_data, 268 268 }; 269 269
+7 -9
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_8_0_sc8280xp.h
··· 288 288 }, 289 289 }; 290 290 291 - /* TODO: INTF 3, 8 and 7 are used for MST, marked as INTF_NONE for now */ 292 291 static const struct dpu_intf_cfg sc8280xp_intf[] = { 293 292 { 294 293 .name = "intf_0", .id = INTF_0, ··· 318 319 }, { 319 320 .name = "intf_3", .id = INTF_3, 320 321 .base = 0x37000, .len = 0x280, 321 - .type = INTF_NONE, 322 - .controller_id = MSM_DP_CONTROLLER_0, 322 + .type = INTF_DP, 323 + .controller_id = MSM_DP_CONTROLLER_0, /* pair with intf_0 for DP MST */ 323 324 .prog_fetch_lines_worst_case = 24, 324 325 .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 30), 325 326 .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 31), ··· 350 351 }, { 351 352 .name = "intf_7", .id = INTF_7, 352 353 .base = 0x3b000, .len = 0x280, 353 - .type = INTF_NONE, 354 - .controller_id = MSM_DP_CONTROLLER_2, 354 + .type = INTF_DP, 355 + .controller_id = MSM_DP_CONTROLLER_2, /* pair with intf_6 for DP MST */ 355 356 .prog_fetch_lines_worst_case = 24, 356 357 .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 18), 357 358 .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 19), 358 359 }, { 359 360 .name = "intf_8", .id = INTF_8, 360 361 .base = 0x3c000, .len = 0x280, 361 - .type = INTF_NONE, 362 - .controller_id = MSM_DP_CONTROLLER_1, 362 + .type = INTF_DP, 363 + .controller_id = MSM_DP_CONTROLLER_1, /* pair with intf_4 for DP MST */ 363 364 .prog_fetch_lines_worst_case = 24, 364 365 .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 12), 365 366 .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 13), ··· 420 421 .merge_3d = sc8280xp_merge_3d, 421 422 .intf_count = ARRAY_SIZE(sc8280xp_intf), 422 423 .intf = sc8280xp_intf, 423 - .vbif_count = ARRAY_SIZE(sdm845_vbif), 424 - .vbif = sdm845_vbif, 424 + .vbif = &sdm845_vbif, 425 425 .perf = &sc8280xp_perf_data, 426 426 }; 427 427
+2 -4
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_8_1_sm8450.h
··· 303 303 .num_formats = ARRAY_SIZE(wb2_formats_rgb_yuv), 304 304 .clk_ctrl = DPU_CLK_CTRL_WB2, 305 305 .xin_id = 6, 306 - .vbif_idx = VBIF_RT, 307 306 .maxlinewidth = 4096, 308 307 .intr_wb_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 4), 309 308 }, ··· 339 340 .name = "intf_3", .id = INTF_3, 340 341 .base = 0x37000, .len = 0x280, 341 342 .type = INTF_DP, 342 - .controller_id = MSM_DP_CONTROLLER_1, 343 + .controller_id = MSM_DP_CONTROLLER_0, /* pair with intf_0 for DP MST */ 343 344 .prog_fetch_lines_worst_case = 24, 344 345 .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 30), 345 346 .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 31), ··· 404 405 .wb = sm8450_wb, 405 406 .intf_count = ARRAY_SIZE(sm8450_intf), 406 407 .intf = sm8450_intf, 407 - .vbif_count = ARRAY_SIZE(sdm845_vbif), 408 - .vbif = sdm845_vbif, 408 + .vbif = &sdm845_vbif, 409 409 .perf = &sm8450_perf_data, 410 410 }; 411 411
+5 -8
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_8_4_sa8775p.h
··· 310 310 .num_formats = ARRAY_SIZE(wb2_formats_rgb_yuv), 311 311 .clk_ctrl = DPU_CLK_CTRL_WB2, 312 312 .xin_id = 6, 313 - .vbif_idx = VBIF_RT, 314 313 .maxlinewidth = 4096, 315 314 .intr_wb_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 4), 316 315 }, 317 316 }; 318 317 319 - /* TODO: INTF 3, 6, 7 and 8 are used for MST, marked as INTF_NONE for now */ 320 318 static const struct dpu_intf_cfg sa8775p_intf[] = { 321 319 { 322 320 .name = "intf_0", .id = INTF_0, ··· 345 347 }, { 346 348 .name = "intf_3", .id = INTF_3, 347 349 .base = 0x37000, .len = 0x280, 348 - .type = INTF_NONE, 350 + .type = INTF_DP, 349 351 .controller_id = MSM_DP_CONTROLLER_0, /* pair with intf_0 for DP MST */ 350 352 .prog_fetch_lines_worst_case = 24, 351 353 .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 30), ··· 361 363 }, { 362 364 .name = "intf_6", .id = INTF_6, 363 365 .base = 0x3A000, .len = 0x280, 364 - .type = INTF_NONE, 366 + .type = INTF_DP, 365 367 .controller_id = MSM_DP_CONTROLLER_0, /* pair with intf_0 for DP MST */ 366 368 .prog_fetch_lines_worst_case = 24, 367 369 .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 16), ··· 369 371 }, { 370 372 .name = "intf_7", .id = INTF_7, 371 373 .base = 0x3b000, .len = 0x280, 372 - .type = INTF_NONE, 374 + .type = INTF_DP, 373 375 .controller_id = MSM_DP_CONTROLLER_0, /* pair with intf_0 for DP MST */ 374 376 .prog_fetch_lines_worst_case = 24, 375 377 .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 18), ··· 377 379 }, { 378 380 .name = "intf_8", .id = INTF_8, 379 381 .base = 0x3c000, .len = 0x280, 380 - .type = INTF_NONE, 382 + .type = INTF_DP, 381 383 .controller_id = MSM_DP_CONTROLLER_1, /* pair with intf_4 for DP MST */ 382 384 .prog_fetch_lines_worst_case = 24, 383 385 .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 12), ··· 443 445 .wb = sa8775p_wb, 444 446 .intf_count = ARRAY_SIZE(sa8775p_intf), 445 447 .intf = sa8775p_intf, 446 - .vbif_count = ARRAY_SIZE(sdm845_vbif), 447 - .vbif = sdm845_vbif, 448 + .vbif = &sdm845_vbif, 448 449 .perf = 
&sa8775p_perf_data, 449 450 }; 450 451
+2 -4
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_9_0_sm8550.h
··· 298 298 .format_list = wb2_formats_rgb_yuv, 299 299 .num_formats = ARRAY_SIZE(wb2_formats_rgb_yuv), 300 300 .xin_id = 6, 301 - .vbif_idx = VBIF_RT, 302 301 .maxlinewidth = 4096, 303 302 .intr_wb_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 4), 304 303 }, ··· 334 335 .name = "intf_3", .id = INTF_3, 335 336 .base = 0x37000, .len = 0x280, 336 337 .type = INTF_DP, 337 - .controller_id = MSM_DP_CONTROLLER_1, 338 + .controller_id = MSM_DP_CONTROLLER_0, /* pair with intf_0 for DP MST */ 338 339 .prog_fetch_lines_worst_case = 24, 339 340 .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 30), 340 341 .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 31), ··· 399 400 .wb = sm8550_wb, 400 401 .intf_count = ARRAY_SIZE(sm8550_intf), 401 402 .intf = sm8550_intf, 402 - .vbif_count = ARRAY_SIZE(sm8550_vbif), 403 - .vbif = sm8550_vbif, 403 + .vbif = &sm8550_vbif, 404 404 .perf = &sm8550_perf_data, 405 405 }; 406 406
+2 -4
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_9_1_sar2130p.h
··· 298 298 .format_list = wb2_formats_rgb_yuv, 299 299 .num_formats = ARRAY_SIZE(wb2_formats_rgb_yuv), 300 300 .xin_id = 6, 301 - .vbif_idx = VBIF_RT, 302 301 .maxlinewidth = 4096, 303 302 .intr_wb_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 4), 304 303 }, ··· 334 335 .name = "intf_3", .id = INTF_3, 335 336 .base = 0x37000, .len = 0x280, 336 337 .type = INTF_DP, 337 - .controller_id = MSM_DP_CONTROLLER_1, 338 + .controller_id = MSM_DP_CONTROLLER_0, /* pair with intf_0 for DP MST */ 338 339 .prog_fetch_lines_worst_case = 24, 339 340 .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 30), 340 341 .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 31), ··· 399 400 .wb = sar2130p_wb, 400 401 .intf_count = ARRAY_SIZE(sar2130p_intf), 401 402 .intf = sar2130p_intf, 402 - .vbif_count = ARRAY_SIZE(sm8550_vbif), 403 - .vbif = sm8550_vbif, 403 + .vbif = &sm8550_vbif, 404 404 .perf = &sar2130p_perf_data, 405 405 }; 406 406
+4 -7
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_9_2_x1e80100.h
··· 298 298 .format_list = wb2_formats_rgb_yuv, 299 299 .num_formats = ARRAY_SIZE(wb2_formats_rgb_yuv), 300 300 .xin_id = 6, 301 - .vbif_idx = VBIF_RT, 302 301 .maxlinewidth = 4096, 303 302 .intr_wb_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 4), 304 303 }, 305 304 }; 306 305 307 - /* TODO: INTF 3, 8 and 7 are used for MST, marked as INTF_NONE for now */ 308 306 static const struct dpu_intf_cfg x1e80100_intf[] = { 309 307 { 310 308 .name = "intf_0", .id = INTF_0, ··· 333 335 }, { 334 336 .name = "intf_3", .id = INTF_3, 335 337 .base = 0x37000, .len = 0x280, 336 - .type = INTF_NONE, 338 + .type = INTF_DP, 337 339 .controller_id = MSM_DP_CONTROLLER_0, /* pair with intf_0 for DP MST */ 338 340 .prog_fetch_lines_worst_case = 24, 339 341 .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 30), ··· 365 367 }, { 366 368 .name = "intf_7", .id = INTF_7, 367 369 .base = 0x3b000, .len = 0x280, 368 - .type = INTF_NONE, 370 + .type = INTF_DP, 369 371 .controller_id = MSM_DP_CONTROLLER_2, /* pair with intf_6 for DP MST */ 370 372 .prog_fetch_lines_worst_case = 24, 371 373 .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 18), ··· 373 375 }, { 374 376 .name = "intf_8", .id = INTF_8, 375 377 .base = 0x3c000, .len = 0x280, 376 - .type = INTF_NONE, 378 + .type = INTF_DP, 377 379 .controller_id = MSM_DP_CONTROLLER_1, /* pair with intf_4 for DP MST */ 378 380 .prog_fetch_lines_worst_case = 24, 379 381 .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 12), ··· 439 441 .wb = x1e80100_wb, 440 442 .intf_count = ARRAY_SIZE(x1e80100_intf), 441 443 .intf = x1e80100_intf, 442 - .vbif_count = ARRAY_SIZE(sm8550_vbif), 443 - .vbif = sm8550_vbif, 444 + .vbif = &sm8550_vbif, 444 445 .perf = &x1e80100_perf_data, 445 446 }; 446 447
+39 -18
drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
··· 326 326 { 327 327 struct dpu_hw_mixer *lm = mixer->hw_lm; 328 328 u32 blend_op; 329 - u32 fg_alpha, bg_alpha, max_alpha; 329 + u32 fg_alpha, bg_alpha; 330 330 331 - if (mdss_ver->core_major_ver < 12) { 332 - max_alpha = 0xff; 333 - fg_alpha = pstate->base.alpha >> 8; 334 - } else { 335 - max_alpha = 0x3ff; 336 - fg_alpha = pstate->base.alpha >> 6; 337 - } 338 - bg_alpha = max_alpha - fg_alpha; 331 + fg_alpha = pstate->base.alpha; 339 332 340 333 /* default to opaque blending */ 341 334 if (pstate->base.pixel_blend_mode == DRM_MODE_BLEND_PIXEL_NONE || 342 335 !format->alpha_enable) { 343 336 blend_op = DPU_BLEND_FG_ALPHA_FG_CONST | 344 337 DPU_BLEND_BG_ALPHA_BG_CONST; 338 + bg_alpha = DRM_BLEND_ALPHA_OPAQUE - fg_alpha; 345 339 } else if (pstate->base.pixel_blend_mode == DRM_MODE_BLEND_PREMULTI) { 346 340 blend_op = DPU_BLEND_FG_ALPHA_FG_CONST | 347 341 DPU_BLEND_BG_ALPHA_FG_PIXEL; 348 - if (fg_alpha != max_alpha) { 342 + if (fg_alpha != DRM_BLEND_ALPHA_OPAQUE) { 349 343 bg_alpha = fg_alpha; 350 344 blend_op |= DPU_BLEND_BG_MOD_ALPHA | 351 345 DPU_BLEND_BG_INV_MOD_ALPHA; 352 346 } else { 347 + bg_alpha = 0; 353 348 blend_op |= DPU_BLEND_BG_INV_ALPHA; 354 349 } 355 350 } else { 356 351 /* coverage blending */ 357 352 blend_op = DPU_BLEND_FG_ALPHA_FG_PIXEL | 358 353 DPU_BLEND_BG_ALPHA_FG_PIXEL; 359 - if (fg_alpha != max_alpha) { 354 + if (fg_alpha != DRM_BLEND_ALPHA_OPAQUE) { 360 355 bg_alpha = fg_alpha; 361 356 blend_op |= DPU_BLEND_FG_MOD_ALPHA | 362 357 DPU_BLEND_FG_INV_MOD_ALPHA | 363 358 DPU_BLEND_BG_MOD_ALPHA | 364 359 DPU_BLEND_BG_INV_MOD_ALPHA; 365 360 } else { 361 + bg_alpha = 0; 366 362 blend_op |= DPU_BLEND_BG_INV_ALPHA; 367 363 } 368 364 } ··· 1321 1325 return false; 1322 1326 } 1323 1327 1324 - static int dpu_crtc_reassign_planes(struct drm_crtc *crtc, struct drm_crtc_state *crtc_state) 1328 + static int dpu_crtc_assign_planes(struct drm_crtc *crtc, struct drm_crtc_state *crtc_state) 1325 1329 { 1326 1330 int total_planes = 
crtc->dev->mode_config.num_total_plane; 1327 1331 struct drm_atomic_state *state = crtc_state->state; ··· 1333 1337 global_state = dpu_kms_get_global_state(crtc_state->state); 1334 1338 if (IS_ERR(global_state)) 1335 1339 return PTR_ERR(global_state); 1336 - 1337 - dpu_rm_release_all_sspp(global_state, crtc); 1338 1340 1339 1341 if (!crtc_state->enable) 1340 1342 return 0; ··· 1358 1364 done: 1359 1365 kfree(states); 1360 1366 return ret; 1367 + } 1368 + 1369 + static int dpu_crtc_reassign_planes(struct drm_crtc *crtc, struct drm_crtc_state *crtc_state) 1370 + { 1371 + struct dpu_global_state *global_state; 1372 + 1373 + global_state = dpu_kms_get_global_state(crtc_state->state); 1374 + if (IS_ERR(global_state)) 1375 + return PTR_ERR(global_state); 1376 + 1377 + dpu_rm_release_all_sspp(global_state, crtc); 1378 + 1379 + return dpu_crtc_assign_planes(crtc, crtc_state); 1361 1380 } 1362 1381 1363 1382 #define MAX_CHANNELS_PER_CRTC PIPES_PER_PLANE ··· 1417 1410 topology.num_lm = 2; 1418 1411 else if (topology.num_dsc == 2) 1419 1412 topology.num_lm = 2; 1420 - else if (dpu_kms->catalog->caps->has_3d_merge) 1413 + else if (dpu_kms->catalog->caps->has_3d_merge && 1414 + topology.num_dsc == 0) 1421 1415 topology.num_lm = (mode->hdisplay > MAX_HDISPLAY_SPLIT) ? 
2 : 1; 1422 1416 else 1423 1417 topology.num_lm = 1; ··· 1542 1534 return rc; 1543 1535 } 1544 1536 1545 - if (dpu_use_virtual_planes && 1546 - (crtc_state->planes_changed || crtc_state->zpos_changed)) { 1547 - rc = dpu_crtc_reassign_planes(crtc, crtc_state); 1537 + if (crtc_state->planes_changed || crtc_state->zpos_changed) { 1538 + if (dpu_use_virtual_planes) 1539 + rc = dpu_crtc_reassign_planes(crtc, crtc_state); 1540 + else 1541 + rc = dpu_crtc_assign_planes(crtc, crtc_state); 1548 1542 if (rc < 0) 1549 1543 return rc; 1550 1544 } ··· 1665 1655 } 1666 1656 1667 1657 return 0; 1658 + } 1659 + 1660 + /** 1661 + * dpu_crtc_get_num_lm - Get mixer number in this CRTC pipeline 1662 + * @state: Pointer to drm crtc state object 1663 + */ 1664 + unsigned int dpu_crtc_get_num_lm(const struct drm_crtc_state *state) 1665 + { 1666 + struct dpu_crtc_state *cstate = to_dpu_crtc_state(state); 1667 + 1668 + return cstate->num_mixers; 1668 1669 } 1669 1670 1670 1671 #ifdef CONFIG_DEBUG_FS
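The alpha rework in the dpu_crtc.c hunk above keeps the plane alpha in the full 16-bit DRM range and defers truncation to the mixer layer; only the choice of background constant alpha is made here. A minimal standalone sketch of that selection (hypothetical `pick_bg_alpha` helper, not a kernel function; `ALPHA_OPAQUE` stands in for `DRM_BLEND_ALPHA_OPAQUE`):

```c
#include <assert.h>
#include <stdint.h>

#define ALPHA_OPAQUE 0xffff /* mirrors DRM_BLEND_ALPHA_OPAQUE (16-bit) */

/* Hypothetical helper: choose the background constant alpha for one
 * blend stage, following the rules in the dpu_crtc.c hunk above. */
uint16_t pick_bg_alpha(uint16_t fg_alpha, int pixel_alpha_enabled)
{
	if (!pixel_alpha_enabled)
		/* opaque blending: bg gets the remainder of fg */
		return ALPHA_OPAQUE - fg_alpha;

	/* premultiplied / coverage: bg tracks fg only when the plane
	 * alpha is not fully opaque; otherwise the bg constant is
	 * unused and left at 0 */
	return (fg_alpha != ALPHA_OPAQUE) ? fg_alpha : 0;
}
```
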
+2
drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.h
··· 267 267 268 268 void dpu_crtc_frame_event_cb(struct drm_crtc *crtc, u32 event); 269 269 270 + unsigned int dpu_crtc_get_num_lm(const struct drm_crtc_state *state); 271 + 270 272 #endif /* _DPU_CRTC_H_ */
+6
drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c
··· 257 257 if (!dpu_encoder_phys_cmd_is_master(phys_enc)) 258 258 goto end; 259 259 260 + /* IRQ not yet initialized */ 261 + if (!phys_enc->irq[INTR_IDX_RDPTR]) { 262 + ret = -EINVAL; 263 + goto end; 264 + } 265 + 260 266 /* protect against negative */ 261 267 if (!enable && refcount == 0) { 262 268 ret = -EINVAL;
+2
drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c
··· 10 10 #include "dpu_formats.h" 11 11 #include "dpu_trace.h" 12 12 #include "disp/msm_disp_snapshot.h" 13 + #include "msm_dsc_helper.h" 13 14 14 15 #include <drm/display/drm_dsc_helper.h> 15 16 #include <drm/drm_managed.h> ··· 137 136 timing->width = timing->width * drm_dsc_get_bpp_int(dsc) / 138 137 (dsc->bits_per_component * 3); 139 138 timing->xres = timing->width; 139 + timing->dce_bytes_per_line = msm_dsc_get_bytes_per_line(dsc); 140 140 } 141 141 } 142 142
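The new `dce_bytes_per_line` value sits next to the pre-existing DSC width adjustment visible in this hunk, which scales the timing width by the output bpp over the uncompressed bpp (3 components times `bits_per_component`). A hedged standalone sketch of that arithmetic (hypothetical helper name, not the kernel API):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical standalone form of the DSC width scaling in
 * dpu_encoder_phys_vid.c above: the compressed horizontal width is the
 * uncompressed width scaled by bpp_int / (3 * bits_per_component). */
uint32_t dsc_adjusted_width(uint32_t width, uint32_t bpp_int,
			    uint32_t bits_per_component)
{
	return width * bpp_int / (bits_per_component * 3);
}
```
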
+3 -4
drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
··· 70 70 ot_params.height = phys_enc->cached_mode.vdisplay; 71 71 ot_params.is_wfd = !dpu_encoder_helper_get_cwb_mask(phys_enc); 72 72 ot_params.frame_rate = drm_mode_vrefresh(&phys_enc->cached_mode); 73 - ot_params.vbif_idx = hw_wb->caps->vbif_idx; 73 + /* XXX: WB on MSM8996 should use VBIF_NRT */ 74 74 ot_params.rd = false; 75 75 76 76 if (!_dpu_encoder_phys_wb_clk_force_ctrl(hw_wb, phys_enc->dpu_kms->hw_mdp, ··· 108 108 hw_wb = phys_enc->hw_wb; 109 109 110 110 memset(&qos_params, 0, sizeof(qos_params)); 111 - qos_params.vbif_idx = hw_wb->caps->vbif_idx; 111 + /* XXX: WB on MSM8996 should use VBIF_NRT */ 112 112 qos_params.xin_id = hw_wb->caps->xin_id; 113 113 qos_params.num = hw_wb->idx - WB_0; 114 114 qos_params.is_rt = dpu_encoder_helper_get_cwb_mask(phys_enc); 115 115 116 - DPU_DEBUG("[qos_remap] wb:%d vbif:%d xin:%d is_rt:%d\n", 116 + DPU_DEBUG("[qos_remap] wb:%d xin:%d is_rt:%d\n", 117 117 qos_params.num, 118 - qos_params.vbif_idx, 119 118 qos_params.xin_id, qos_params.is_rt); 120 119 121 120 if (!_dpu_encoder_phys_wb_clk_force_ctrl(hw_wb, phys_enc->dpu_kms->hw_mdp,
+11 -25
drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
··· 513 513 }, 514 514 }; 515 515 516 - static const struct dpu_vbif_cfg msm8996_vbif[] = { 517 - { 518 - .name = "vbif_rt", .id = VBIF_RT, 519 - .base = 0, .len = 0x1040, 516 + static const struct dpu_vbif_cfg msm8996_vbif = { 517 + .len = 0x1040, 520 518 .default_ot_rd_limit = 32, 521 519 .default_ot_wr_limit = 16, 522 520 .features = BIT(DPU_VBIF_QOS_REMAP) | BIT(DPU_VBIF_QOS_OTLIM), ··· 536 538 .npriority_lvl = ARRAY_SIZE(msm8998_nrt_pri_lvl), 537 539 .priority_lvl = msm8998_nrt_pri_lvl, 538 540 }, 539 - }, 540 541 }; 541 542 542 - static const struct dpu_vbif_cfg msm8998_vbif[] = { 543 - { 544 - .name = "vbif_rt", .id = VBIF_RT, 545 - .base = 0, .len = 0x1040, 543 + static const struct dpu_vbif_cfg msm8998_vbif = { 544 + .len = 0x1040, 546 545 .default_ot_rd_limit = 32, 547 546 .default_ot_wr_limit = 32, 548 547 .features = BIT(DPU_VBIF_QOS_REMAP) | BIT(DPU_VBIF_QOS_OTLIM), ··· 563 568 }, 564 569 .memtype_count = 14, 565 570 .memtype = {2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2}, 566 - }, 567 571 }; 568 572 569 - static const struct dpu_vbif_cfg sdm845_vbif[] = { 570 - { 571 - .name = "vbif_rt", .id = VBIF_RT, 572 - .base = 0, .len = 0x1040, 573 + static const struct dpu_vbif_cfg sdm845_vbif = { 574 + .len = 0x1040, 573 575 .features = BIT(DPU_VBIF_QOS_REMAP), 574 576 .xin_halt_timeout = 0x4000, 575 577 .qos_rp_remap_size = 0x40, ··· 580 588 }, 581 589 .memtype_count = 14, 582 590 .memtype = {3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3}, 583 - }, 584 591 }; 585 592 586 - static const struct dpu_vbif_cfg sm8550_vbif[] = { 587 - { 588 - .name = "vbif_rt", .id = VBIF_RT, 589 - .base = 0, .len = 0x1040, 593 + static const struct dpu_vbif_cfg sm8550_vbif = { 594 + .len = 0x1040, 590 595 .features = BIT(DPU_VBIF_QOS_REMAP), 591 596 .xin_halt_timeout = 0x4000, 592 597 .qos_rp_remap_size = 0x40, ··· 597 608 }, 598 609 .memtype_count = 16, 599 610 .memtype = {3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3}, 600 - }, 601 611 }; 602 612 603 - static const struct dpu_vbif_cfg 
sm8650_vbif[] = { 604 - { 605 - .name = "vbif_rt", .id = VBIF_RT, 606 - .base = 0, .len = 0x1074, 613 + static const struct dpu_vbif_cfg sm8650_vbif = { 614 + .len = 0x1074, 607 615 .features = BIT(DPU_VBIF_QOS_REMAP), 608 616 .xin_halt_timeout = 0x4000, 609 617 .qos_rp_remap_size = 0x40, ··· 614 628 }, 615 629 .memtype_count = 16, 616 630 .memtype = {3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3}, 617 - }, 618 631 }; 619 632 620 633 /************************************************************* ··· 756 771 #include "catalog/dpu_10_0_sm8650.h" 757 772 #include "catalog/dpu_12_0_sm8750.h" 758 773 #include "catalog/dpu_12_2_glymur.h" 774 + #include "catalog/dpu_12_4_eliza.h" 759 775 #include "catalog/dpu_13_0_kaanapali.h"
+3 -6
drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.h
··· 524 524 /** 525 525 * struct dpu_wb_cfg - information of writeback blocks 526 526 * @DPU_HW_BLK_INFO: refer to the description above for DPU_HW_BLK_INFO 527 - * @vbif_idx: vbif client index 528 527 * @maxlinewidth: max line width supported by writeback block 529 528 * @xin_id: bus client identifier 530 529 * @intr_wb_done: interrupt index for WB_DONE ··· 534 535 struct dpu_wb_cfg { 535 536 DPU_HW_BLK_INFO; 536 537 unsigned long features; 537 - u8 vbif_idx; 538 538 u32 maxlinewidth; 539 539 u32 xin_id; 540 540 unsigned int intr_wb_done; ··· 585 587 586 588 /** 587 589 * struct dpu_vbif_cfg - information of VBIF blocks 588 - * @id enum identifying this block 589 - * @base register offset of this block 590 + * @len: length of hardware block 590 591 * @features bit mask identifying sub-blocks/features 591 592 * @ot_rd_limit default OT read limit 592 593 * @ot_wr_limit default OT write limit ··· 599 602 * @memtype array of xin memtype definitions 600 603 */ 601 604 struct dpu_vbif_cfg { 602 - DPU_HW_BLK_INFO; 605 + u32 len; 603 606 unsigned long features; 604 607 u32 default_ot_rd_limit; 605 608 u32 default_ot_wr_limit; ··· 740 743 u32 intf_count; 741 744 const struct dpu_intf_cfg *intf; 742 745 743 - u32 vbif_count; 744 746 const struct dpu_vbif_cfg *vbif; 745 747 746 748 u32 wb_count; ··· 763 767 const struct dpu_format_extended *vig_formats; 764 768 }; 765 769 770 + extern const struct dpu_mdss_cfg dpu_eliza_cfg; 766 771 extern const struct dpu_mdss_cfg dpu_glymur_cfg; 767 772 extern const struct dpu_mdss_cfg dpu_kaanapali_cfg; 768 773 extern const struct dpu_mdss_cfg dpu_msm8917_cfg;
+21 -5
drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
··· 173 173 data_width = p->width; 174 174 175 175 /* 176 - * If widebus is enabled, data is valid for only half the active window 177 - * since the data rate is doubled in this mode. But for the compression 178 - * mode in DP case, the p->width is already adjusted in 179 - * drm_mode_to_intf_timing_params() 176 + * If widebus is disabled: 177 + * For uncompressed stream, the data is valid for the entire active 178 + * window period. 179 + * For compressed stream, data is valid for a shorter time period 180 + * inside the active window depending on the compression ratio. 181 + * 182 + * If widebus is enabled: 183 + * For uncompressed stream, data is valid for only half the active 184 + * window, since the data rate is doubled in this mode. 185 + * For compressed stream, data validity window needs to be adjusted for 186 + * compression ratio and then further halved. 187 + * 188 + * For the compression mode in DP case, the p->width is already 189 + * adjusted in drm_mode_to_intf_timing_params(). 180 190 */ 181 - if (p->wide_bus_en && !dp_intf) 191 + if (p->compression_en && !dp_intf) { 192 + if (p->wide_bus_en) 193 + data_width = DIV_ROUND_UP(p->dce_bytes_per_line, 6); 194 + else 195 + data_width = DIV_ROUND_UP(p->dce_bytes_per_line, 3); 196 + } else if (p->wide_bus_en && !dp_intf) { 182 197 data_width = p->width >> 1; 198 + } 183 199 184 200 /* TODO: handle DSC+DP case, we only handle DSC+DSI case so far */ 185 201 if (p->compression_en && !dp_intf &&
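The four cases enumerated in the new comment reduce to a small selection function. A sketch under the same assumptions (hypothetical `intf_data_width` helper, not the kernel function; `DIV_ROUND_UP` as in the kernel):

```c
#include <assert.h>
#include <stdint.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Hypothetical standalone form of the data-width selection in
 * dpu_hw_intf.c above: compressed (DSC) bytes per line move over
 * 3 bytes/pclk, or 6 bytes/pclk with widebus; an uncompressed widebus
 * stream halves the active width; otherwise the full width is valid. */
uint32_t intf_data_width(uint32_t width, uint32_t dce_bytes_per_line,
			 int compression_en, int wide_bus_en, int dp_intf)
{
	if (compression_en && !dp_intf)
		return DIV_ROUND_UP(dce_bytes_per_line, wide_bus_en ? 6 : 3);
	if (wide_bus_en && !dp_intf)
		return width >> 1;
	return width;
}
```
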
+1
drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.h
··· 35 35 36 36 bool wide_bus_en; 37 37 bool compression_en; 38 + u32 dce_bytes_per_line; 38 39 }; 39 40 40 41 struct dpu_hw_intf_prog_fetch {
+13 -8
drivers/gpu/drm/msm/disp/dpu1/dpu_hw_lm.c
··· 126 126 } 127 127 128 128 static void dpu_hw_lm_setup_blend_config_combined_alpha(struct dpu_hw_mixer *ctx, 129 - u32 stage, u32 fg_alpha, u32 bg_alpha, u32 blend_op) 129 + u32 stage, 130 + u16 fg_alpha, u16 bg_alpha, 131 + u32 blend_op) 130 132 { 131 133 struct dpu_hw_blk_reg_map *c = &ctx->hw; 132 134 int stage_off; ··· 141 139 if (WARN_ON(stage_off < 0)) 142 140 return; 143 141 144 - const_alpha = (bg_alpha & 0xFF) | ((fg_alpha & 0xFF) << 16); 142 + const_alpha = (bg_alpha >> 8) | ((fg_alpha >> 8) << 16); 145 143 DPU_REG_WRITE(c, LM_BLEND0_CONST_ALPHA + stage_off, const_alpha); 146 144 DPU_REG_WRITE(c, LM_BLEND0_OP + stage_off, blend_op); 147 145 } 148 146 149 147 static void 150 148 dpu_hw_lm_setup_blend_config_combined_alpha_v12(struct dpu_hw_mixer *ctx, 151 - u32 stage, u32 fg_alpha, 152 - u32 bg_alpha, u32 blend_op) 149 + u32 stage, 150 + u16 fg_alpha, u16 bg_alpha, 151 + u32 blend_op) 153 152 { 154 153 struct dpu_hw_blk_reg_map *c = &ctx->hw; 155 154 int stage_off; ··· 163 160 if (WARN_ON(stage_off < 0)) 164 161 return; 165 162 166 - const_alpha = (bg_alpha & 0x3ff) | ((fg_alpha & 0x3ff) << 16); 163 + const_alpha = (bg_alpha >> 6) | ((fg_alpha >> 6) << 16); 167 164 DPU_REG_WRITE(c, LM_BLEND0_CONST_ALPHA_V12 + stage_off, const_alpha); 168 165 DPU_REG_WRITE(c, LM_BLEND0_OP + stage_off, blend_op); 169 166 } 170 167 171 168 static void dpu_hw_lm_setup_blend_config(struct dpu_hw_mixer *ctx, 172 - u32 stage, u32 fg_alpha, u32 bg_alpha, u32 blend_op) 169 + u32 stage, 170 + u16 fg_alpha, u16 bg_alpha, 171 + u32 blend_op) 173 172 { 174 173 struct dpu_hw_blk_reg_map *c = &ctx->hw; 175 174 int stage_off; ··· 183 178 if (WARN_ON(stage_off < 0)) 184 179 return; 185 180 186 - DPU_REG_WRITE(c, LM_BLEND0_FG_ALPHA + stage_off, fg_alpha); 187 - DPU_REG_WRITE(c, LM_BLEND0_BG_ALPHA + stage_off, bg_alpha); 181 + DPU_REG_WRITE(c, LM_BLEND0_FG_ALPHA + stage_off, fg_alpha >> 8); 182 + DPU_REG_WRITE(c, LM_BLEND0_BG_ALPHA + stage_off, bg_alpha >> 8); 188 183 DPU_REG_WRITE(c, 
LM_BLEND0_OP + stage_off, blend_op); 189 184 } 190 185
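With the signature change to `u16`, both mixer paths above now truncate the 16-bit DRM alpha at register-write time, with the v12 variant keeping 10 bits instead of 8. A minimal sketch of the CONST_ALPHA packing (hypothetical helper, register layout as in the hunk above):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch of the LM_BLEND0_CONST_ALPHA packing in
 * dpu_hw_lm.c above: bg alpha in the low bits, fg alpha at bit 16,
 * each truncated from 16 bits down to the mixer's native width. */
uint32_t pack_const_alpha(uint16_t fg, uint16_t bg, int v12)
{
	unsigned int shift = v12 ? 6 : 8; /* 10-bit alpha on v12+, else 8-bit */

	return (uint32_t)(bg >> shift) | ((uint32_t)(fg >> shift) << 16);
}
```
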
+1 -1
drivers/gpu/drm/msm/disp/dpu1/dpu_hw_lm.h
··· 41 41 * for the specified stage 42 42 */ 43 43 void (*setup_blend_config)(struct dpu_hw_mixer *ctx, uint32_t stage, 44 - uint32_t fg_alpha, uint32_t bg_alpha, uint32_t blend_op); 44 + u16 fg_alpha, u16 bg_alpha, uint32_t blend_op); 45 45 46 46 /** 47 47 * @setup_alpha_out: Alpha color component selection from either fg or bg
-6
drivers/gpu/drm/msm/disp/dpu1/dpu_hw_mdss.h
··· 284 284 WD_TIMER_MAX 285 285 }; 286 286 287 - enum dpu_vbif { 288 - VBIF_RT, 289 - VBIF_NRT, 290 - VBIF_MAX, 291 - }; 292 - 293 287 /** 294 288 * enum dpu_3d_blend_mode 295 289 * Desribes how the 3d data is blended
+1 -2
drivers/gpu/drm/msm/disp/dpu1/dpu_hw_vbif.c
··· 230 230 if (!c) 231 231 return ERR_PTR(-ENOMEM); 232 232 233 - c->hw.blk_addr = addr + cfg->base; 233 + c->hw.blk_addr = addr; 234 234 c->hw.log_mask = DPU_DBG_MASK_VBIF; 235 235 236 236 /* 237 237 * Assign ops 238 238 */ 239 - c->idx = cfg->id; 240 239 c->cap = cfg; 241 240 _setup_vbif_ops(&c->ops, c->cap->features); 242 241
-1
drivers/gpu/drm/msm/disp/dpu1/dpu_hw_vbif.h
··· 98 98 struct dpu_hw_blk_reg_map hw; 99 99 100 100 /* vbif */ 101 - enum dpu_vbif idx; 102 101 const struct dpu_vbif_cfg *cap; 103 102 104 103 /* ops */
+25 -50
drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
··· 52 52 #define DPU_DEBUGFS_DIR "msm_dpu" 53 53 #define DPU_DEBUGFS_HWMASKNAME "hw_log_mask" 54 54 55 - bool dpu_use_virtual_planes; 55 + bool dpu_use_virtual_planes = true; 56 56 module_param(dpu_use_virtual_planes, bool, 0); 57 57 58 58 static int dpu_kms_hw_init(struct msm_kms *kms); ··· 888 888 889 889 static void _dpu_kms_hw_destroy(struct dpu_kms *dpu_kms) 890 890 { 891 - int i; 892 - 893 891 dpu_kms->hw_intr = NULL; 894 892 895 893 /* safe to call these more than once during shutdown */ 896 894 _dpu_kms_mmu_destroy(dpu_kms); 897 895 898 - for (i = 0; i < ARRAY_SIZE(dpu_kms->hw_vbif); i++) { 899 - dpu_kms->hw_vbif[i] = NULL; 900 - } 896 + dpu_kms->hw_vbif = NULL; 901 897 902 898 dpu_kms_global_obj_fini(dpu_kms); 903 899 ··· 1057 1061 dpu_kms->mmio + cat->cdm->base, 1058 1062 "%s", cat->cdm->name); 1059 1063 1060 - for (i = 0; i < dpu_kms->catalog->vbif_count; i++) { 1061 - const struct dpu_vbif_cfg *vbif = &dpu_kms->catalog->vbif[i]; 1064 + const struct dpu_vbif_cfg *vbif = dpu_kms->catalog->vbif; 1062 1065 1063 - msm_disp_snapshot_add_block(disp_state, vbif->len, 1064 - dpu_kms->vbif[vbif->id] + vbif->base, 1065 - "%s", vbif->name); 1066 - } 1066 + msm_disp_snapshot_add_block(disp_state, vbif->len, 1067 + dpu_kms->vbif, 1068 + "vbif"); 1067 1069 1068 1070 pm_runtime_put_sync(&dpu_kms->pdev->dev); 1069 1071 } ··· 1139 1145 { 1140 1146 struct dpu_kms *dpu_kms; 1141 1147 struct drm_device *dev; 1142 - int i, rc = -EINVAL; 1148 + int rc = -EINVAL; 1143 1149 unsigned long max_core_clk_rate; 1144 1150 u32 core_rev; 1145 1151 ··· 1214 1220 goto err_pm_put; 1215 1221 } 1216 1222 1217 - for (i = 0; i < dpu_kms->catalog->vbif_count; i++) { 1218 - struct dpu_hw_vbif *hw; 1219 - const struct dpu_vbif_cfg *vbif = &dpu_kms->catalog->vbif[i]; 1223 + struct dpu_hw_vbif *hw; 1224 + const struct dpu_vbif_cfg *vbif = dpu_kms->catalog->vbif; 1220 1225 1221 - hw = dpu_hw_vbif_init(dev, vbif, dpu_kms->vbif[vbif->id]); 1222 - if (IS_ERR(hw)) { 1223 - rc = PTR_ERR(hw); 1224 - 
DPU_ERROR("failed to init vbif %d: %d\n", vbif->id, rc); 1225 - goto err_pm_put; 1226 - } 1227 - 1228 - dpu_kms->hw_vbif[vbif->id] = hw; 1226 + hw = dpu_hw_vbif_init(dev, vbif, dpu_kms->vbif); 1227 + if (IS_ERR(hw)) { 1228 + rc = PTR_ERR(hw); 1229 + DPU_ERROR("failed to init vbif: %d\n", rc); 1230 + goto err_pm_put; 1229 1231 } 1232 + 1233 + dpu_kms->hw_vbif = hw; 1230 1234 1231 1235 /* TODO: use the same max_freq as in dpu_kms_hw_init */ 1232 1236 max_core_clk_rate = dpu_kms_get_clk_rate(dpu_kms, "core"); ··· 1340 1348 } 1341 1349 DRM_DEBUG("mapped dpu address space @%p\n", dpu_kms->mmio); 1342 1350 1343 - dpu_kms->vbif[VBIF_RT] = msm_ioremap_mdss(mdss_dev, 1344 - dpu_kms->pdev, 1345 - "vbif_phys"); 1346 - if (IS_ERR(dpu_kms->vbif[VBIF_RT])) { 1347 - ret = PTR_ERR(dpu_kms->vbif[VBIF_RT]); 1351 + dpu_kms->vbif = msm_ioremap_mdss(mdss_dev, dpu_kms->pdev, "vbif_phys"); 1352 + if (IS_ERR(dpu_kms->vbif)) { 1353 + ret = PTR_ERR(dpu_kms->vbif); 1348 1354 DPU_ERROR("vbif register memory map failed: %d\n", ret); 1349 - dpu_kms->vbif[VBIF_RT] = NULL; 1355 + dpu_kms->vbif = NULL; 1350 1356 return ret; 1351 - } 1352 - 1353 - dpu_kms->vbif[VBIF_NRT] = msm_ioremap_mdss(mdss_dev, 1354 - dpu_kms->pdev, 1355 - "vbif_nrt_phys"); 1356 - if (IS_ERR(dpu_kms->vbif[VBIF_NRT])) { 1357 - dpu_kms->vbif[VBIF_NRT] = NULL; 1358 - DPU_DEBUG("VBIF NRT is not defined"); 1359 1357 } 1360 1358 1361 1359 return 0; ··· 1365 1383 } 1366 1384 DRM_DEBUG("mapped dpu address space @%p\n", dpu_kms->mmio); 1367 1385 1368 - dpu_kms->vbif[VBIF_RT] = msm_ioremap(pdev, "vbif"); 1369 - if (IS_ERR(dpu_kms->vbif[VBIF_RT])) { 1370 - ret = PTR_ERR(dpu_kms->vbif[VBIF_RT]); 1386 + dpu_kms->vbif = msm_ioremap(pdev, "vbif"); 1387 + if (IS_ERR(dpu_kms->vbif)) { 1388 + ret = PTR_ERR(dpu_kms->vbif); 1371 1389 DPU_ERROR("vbif register memory map failed: %d\n", ret); 1372 - dpu_kms->vbif[VBIF_RT] = NULL; 1390 + dpu_kms->vbif = NULL; 1373 1391 return ret; 1374 - } 1375 - 1376 - dpu_kms->vbif[VBIF_NRT] = 
msm_ioremap_quiet(pdev, "vbif_nrt"); 1377 - if (IS_ERR(dpu_kms->vbif[VBIF_NRT])) { 1378 - dpu_kms->vbif[VBIF_NRT] = NULL; 1379 - DPU_DEBUG("VBIF NRT is not defined"); 1380 1392 } 1381 1393 1382 1394 return 0; ··· 1438 1462 struct msm_drm_private *priv = platform_get_drvdata(pdev); 1439 1463 struct dpu_kms *dpu_kms = to_dpu_kms(priv->kms); 1440 1464 1441 - /* Drop the performance state vote */ 1442 - dev_pm_opp_set_rate(dev, 0); 1443 1465 clk_bulk_disable_unprepare(dpu_kms->num_clocks, dpu_kms->clocks); 1444 1466 1445 1467 for (i = 0; i < dpu_kms->num_paths; i++) ··· 1480 1506 }; 1481 1507 1482 1508 static const struct of_device_id dpu_dt_match[] = { 1509 + { .compatible = "qcom,eliza-dpu", .data = &dpu_eliza_cfg, }, 1483 1510 { .compatible = "qcom,glymur-dpu", .data = &dpu_glymur_cfg, }, 1484 1511 { .compatible = "qcom,kaanapali-dpu", .data = &dpu_kaanapali_cfg, }, 1485 1512 { .compatible = "qcom,msm8917-mdp5", .data = &dpu_msm8917_cfg, },
+2 -2
drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h
··· 63 63 const struct qcom_ubwc_cfg_data *mdss; 64 64 65 65 /* io/register spaces: */ 66 - void __iomem *mmio, *vbif[VBIF_MAX]; 66 + void __iomem *mmio, *vbif; 67 67 68 68 struct regulator *vdd; 69 69 struct regulator *mmagic; ··· 81 81 82 82 struct dpu_rm rm; 83 83 84 - struct dpu_hw_vbif *hw_vbif[VBIF_MAX]; 84 + struct dpu_hw_vbif *hw_vbif; 85 85 struct dpu_hw_mdp *hw_mdp; 86 86 87 87 bool has_danger_ctrl;
+183 -120
drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
··· 374 374 ot_params.height = drm_rect_height(&pipe_cfg->src_rect); 375 375 ot_params.is_wfd = !pdpu->is_rt_pipe; 376 376 ot_params.frame_rate = frame_rate; 377 - ot_params.vbif_idx = VBIF_RT; 378 377 ot_params.rd = true; 379 378 380 379 if (!_dpu_plane_sspp_clk_force_ctrl(pipe->sspp, dpu_kms->hw_mdp, ··· 401 402 bool forced_on = false; 402 403 403 404 memset(&qos_params, 0, sizeof(qos_params)); 404 - qos_params.vbif_idx = VBIF_RT; 405 405 qos_params.xin_id = pipe->sspp->cap->xin_id; 406 406 qos_params.num = pipe->sspp->idx - SSPP_VIG0; 407 407 qos_params.is_rt = pdpu->is_rt_pipe; 408 408 409 - DPU_DEBUG_PLANE(pdpu, "pipe:%d vbif:%d xin:%d rt:%d\n", 409 + DPU_DEBUG_PLANE(pdpu, "pipe:%d xin:%d rt:%d\n", 410 410 qos_params.num, 411 - qos_params.vbif_idx, 412 411 qos_params.xin_id, qos_params.is_rt); 413 412 414 413 if (!_dpu_plane_sspp_clk_force_ctrl(pipe->sspp, dpu_kms->hw_mdp, ··· 818 821 { 819 822 int i, ret = 0, min_scale, max_scale; 820 823 struct dpu_plane *pdpu = to_dpu_plane(plane); 821 - struct dpu_kms *kms = _dpu_plane_get_kms(&pdpu->base); 822 - u64 max_mdp_clk_rate = kms->perf.max_core_clk_rate; 823 824 struct dpu_plane_state *pstate = to_dpu_plane_state(new_plane_state); 824 - struct dpu_sw_pipe_cfg *pipe_cfg; 825 - struct dpu_sw_pipe_cfg *r_pipe_cfg; 826 825 struct drm_rect fb_rect = { 0 }; 827 - uint32_t max_linewidth; 828 826 829 827 min_scale = FRAC_16_16(1, MAX_UPSCALE_RATIO); 830 828 max_scale = MAX_DOWNSCALE_RATIO << 16; ··· 842 850 return -EINVAL; 843 851 } 844 852 845 - /* move the assignment here, to ease handling to another pairs later */ 846 - pipe_cfg = &pstate->pipe_cfg[0]; 847 - r_pipe_cfg = &pstate->pipe_cfg[1]; 848 - /* state->src is 16.16, src_rect is not */ 849 - drm_rect_fp_to_int(&pipe_cfg->src_rect, &new_plane_state->src); 850 - 851 - pipe_cfg->dst_rect = new_plane_state->dst; 852 - 853 853 fb_rect.x2 = new_plane_state->fb->width; 854 854 fb_rect.y2 = new_plane_state->fb->height; 855 855 ··· 863 879 if 
(pstate->layout.plane_pitch[i] > DPU_SSPP_MAX_PITCH_SIZE) 864 880 return -E2BIG; 865 881 882 + pstate->needs_qos_remap = drm_atomic_crtc_needs_modeset(crtc_state); 883 + 884 + return 0; 885 + } 886 + 887 + static int dpu_plane_split(struct drm_plane *plane, 888 + struct drm_plane_state *new_plane_state, 889 + const struct drm_crtc_state *crtc_state) 890 + { 891 + struct dpu_plane *pdpu = to_dpu_plane(plane); 892 + struct dpu_kms *kms = _dpu_plane_get_kms(&pdpu->base); 893 + u64 max_mdp_clk_rate = kms->perf.max_core_clk_rate; 894 + struct dpu_plane_state *pstate = to_dpu_plane_state(new_plane_state); 895 + struct dpu_sw_pipe_cfg *pipe_cfg; 896 + struct dpu_sw_pipe_cfg *r_pipe_cfg; 897 + const struct drm_display_mode *mode = &crtc_state->adjusted_mode; 898 + uint32_t max_linewidth; 899 + u32 num_lm; 900 + int stage_id, num_stages; 901 + 866 902 max_linewidth = pdpu->catalog->caps->max_linewidth; 867 903 868 - drm_rect_rotate(&pipe_cfg->src_rect, 869 - new_plane_state->fb->width, new_plane_state->fb->height, 870 - new_plane_state->rotation); 904 + /* In non-virtual plane case, one mixer pair is always needed. */ 905 + num_lm = dpu_crtc_get_num_lm(crtc_state); 906 + if (dpu_use_virtual_planes) 907 + num_stages = (num_lm + 1) / 2; 908 + else 909 + num_stages = 1; 871 910 872 - if ((drm_rect_width(&pipe_cfg->src_rect) > max_linewidth) || 873 - _dpu_plane_calc_clk(&crtc_state->adjusted_mode, pipe_cfg) > max_mdp_clk_rate) { 874 - if (drm_rect_width(&pipe_cfg->src_rect) > 2 * max_linewidth) { 875 - DPU_DEBUG_PLANE(pdpu, "invalid src " DRM_RECT_FMT " line:%u\n", 876 - DRM_RECT_ARG(&pipe_cfg->src_rect), max_linewidth); 877 - return -E2BIG; 911 + /* 912 + * For wide plane that exceeds SSPP rectangle constrain, it needed to 913 + * be split and mapped to 2 rectangles with 1 config for 2:2:1. 914 + * For 2 interfaces cases, such as dual DSI, 2:2:2 topology is needed. 
915 + * If the width or clock exceeds hardware limitation in every half of 916 + * screen, 4:4:2 topology is needed and virtual plane feature should 917 + * be enabled to map plane to more than 1 SSPP. 2 stage configs are 918 + * needed to serve 2 mixer pairs in this 4:4:2 case. So both left/right 919 + * half of plane splitting, and splitting within the half of screen is 920 + * needed in quad-pipe case. Check dest rectangle left/right clipping 921 + * and iterate mixer configs for this plane first, then check wide 922 + * rectangle splitting in every half next. 923 + */ 924 + for (stage_id = 0; stage_id < num_stages; stage_id++) { 925 + struct drm_rect mixer_rect = { 926 + .x1 = stage_id * mode->hdisplay / num_stages, 927 + .y1 = 0, 928 + .x2 = (stage_id + 1) * mode->hdisplay / num_stages, 929 + .y2 = mode->vdisplay 930 + }; 931 + int cfg_idx = stage_id * PIPES_PER_STAGE; 932 + 933 + pipe_cfg = &pstate->pipe_cfg[cfg_idx]; 934 + r_pipe_cfg = &pstate->pipe_cfg[cfg_idx + 1]; 935 + 936 + drm_rect_fp_to_int(&pipe_cfg->src_rect, &new_plane_state->src); 937 + 938 + drm_rect_rotate(&pipe_cfg->src_rect, 939 + new_plane_state->fb->width, new_plane_state->fb->height, 940 + new_plane_state->rotation); 941 + 942 + pipe_cfg->dst_rect = new_plane_state->dst; 943 + 944 + DPU_DEBUG_PLANE(pdpu, "checking src " DRM_RECT_FMT 945 + " vs clip window " DRM_RECT_FMT "\n", 946 + DRM_RECT_ARG(&pipe_cfg->src_rect), 947 + DRM_RECT_ARG(&mixer_rect)); 948 + 949 + /* 950 + * If this plane does not fall into mixer rect, check next 951 + * mixer rect. 
952 + */ 953 + if (!drm_rect_clip_scaled(&pipe_cfg->src_rect, 954 + &pipe_cfg->dst_rect, 955 + &mixer_rect)) { 956 + memset(pipe_cfg, 0, 2 * sizeof(struct dpu_sw_pipe_cfg)); 957 + 958 + continue; 878 959 } 879 960 880 - *r_pipe_cfg = *pipe_cfg; 881 - pipe_cfg->src_rect.x2 = (pipe_cfg->src_rect.x1 + pipe_cfg->src_rect.x2) >> 1; 882 - pipe_cfg->dst_rect.x2 = (pipe_cfg->dst_rect.x1 + pipe_cfg->dst_rect.x2) >> 1; 883 - r_pipe_cfg->src_rect.x1 = pipe_cfg->src_rect.x2; 884 - r_pipe_cfg->dst_rect.x1 = pipe_cfg->dst_rect.x2; 885 - } else { 886 - memset(r_pipe_cfg, 0, sizeof(*r_pipe_cfg)); 887 - } 961 + pipe_cfg->dst_rect.x1 -= mixer_rect.x1; 962 + pipe_cfg->dst_rect.x2 -= mixer_rect.x1; 888 963 889 - drm_rect_rotate_inv(&pipe_cfg->src_rect, 890 - new_plane_state->fb->width, new_plane_state->fb->height, 891 - new_plane_state->rotation); 892 - if (drm_rect_width(&r_pipe_cfg->src_rect) != 0) 893 - drm_rect_rotate_inv(&r_pipe_cfg->src_rect, 894 - new_plane_state->fb->width, new_plane_state->fb->height, 964 + DPU_DEBUG_PLANE(pdpu, "Got clip src:" DRM_RECT_FMT " dst: " DRM_RECT_FMT "\n", 965 + DRM_RECT_ARG(&pipe_cfg->src_rect), DRM_RECT_ARG(&pipe_cfg->dst_rect)); 966 + 967 + /* Split a wide rect into 2 rects */ 968 + if ((drm_rect_width(&pipe_cfg->src_rect) > max_linewidth) || 969 + _dpu_plane_calc_clk(mode, pipe_cfg) > max_mdp_clk_rate) { 970 + 971 + if (drm_rect_width(&pipe_cfg->src_rect) > 2 * max_linewidth) { 972 + DPU_DEBUG_PLANE(pdpu, "invalid src " DRM_RECT_FMT " line:%u\n", 973 + DRM_RECT_ARG(&pipe_cfg->src_rect), max_linewidth); 974 + return -E2BIG; 975 + } 976 + 977 + memcpy(r_pipe_cfg, pipe_cfg, sizeof(struct dpu_sw_pipe_cfg)); 978 + pipe_cfg->src_rect.x2 = (pipe_cfg->src_rect.x1 + pipe_cfg->src_rect.x2) >> 1; 979 + pipe_cfg->dst_rect.x2 = (pipe_cfg->dst_rect.x1 + pipe_cfg->dst_rect.x2) >> 1; 980 + r_pipe_cfg->src_rect.x1 = pipe_cfg->src_rect.x2; 981 + r_pipe_cfg->dst_rect.x1 = pipe_cfg->dst_rect.x2; 982 + DPU_DEBUG_PLANE(pdpu, "Split wide plane into:" 983 + DRM_RECT_FMT 
" and " DRM_RECT_FMT "\n", 984 + DRM_RECT_ARG(&pipe_cfg->src_rect), 985 + DRM_RECT_ARG(&r_pipe_cfg->src_rect)); 986 + } else { 987 + memset(r_pipe_cfg, 0, sizeof(struct dpu_sw_pipe_cfg)); 988 + } 989 + 990 + drm_rect_rotate_inv(&pipe_cfg->src_rect, 991 + new_plane_state->fb->width, 992 + new_plane_state->fb->height, 895 993 new_plane_state->rotation); 896 994 897 - pstate->needs_qos_remap = drm_atomic_crtc_needs_modeset(crtc_state); 995 + if (drm_rect_width(&r_pipe_cfg->src_rect) != 0) 996 + drm_rect_rotate_inv(&r_pipe_cfg->src_rect, 997 + new_plane_state->fb->width, 998 + new_plane_state->fb->height, 999 + new_plane_state->rotation); 1000 + } 898 1001 899 1002 return 0; 900 1003 } ··· 1056 985 drm_atomic_get_new_plane_state(state, plane); 1057 986 struct dpu_plane *pdpu = to_dpu_plane(plane); 1058 987 struct dpu_plane_state *pstate = to_dpu_plane_state(new_plane_state); 1059 - struct dpu_sw_pipe *pipe = &pstate->pipe[0]; 1060 - struct dpu_sw_pipe *r_pipe = &pstate->pipe[1]; 1061 - struct dpu_sw_pipe_cfg *pipe_cfg = &pstate->pipe_cfg[0]; 1062 - struct dpu_sw_pipe_cfg *r_pipe_cfg = &pstate->pipe_cfg[1]; 1063 - int ret = 0; 1064 988 1065 - ret = dpu_plane_atomic_check_pipe(pdpu, pipe, pipe_cfg, 1066 - &crtc_state->adjusted_mode, 1067 - new_plane_state); 1068 - if (ret) 1069 - return ret; 989 + struct dpu_sw_pipe *pipe; 990 + struct dpu_sw_pipe_cfg *pipe_cfg; 991 + int ret = 0, i; 1070 992 1071 - if (drm_rect_width(&r_pipe_cfg->src_rect) != 0) { 1072 - ret = dpu_plane_atomic_check_pipe(pdpu, r_pipe, r_pipe_cfg, 993 + for (i = 0; i < PIPES_PER_PLANE; i++) { 994 + pipe = &pstate->pipe[i]; 995 + pipe_cfg = &pstate->pipe_cfg[i]; 996 + if (!drm_rect_width(&pipe_cfg->src_rect)) 997 + continue; 998 + DPU_DEBUG_PLANE(pdpu, "pipe %d is in use, validate it\n", i); 999 + ret = dpu_plane_atomic_check_pipe(pdpu, pipe, pipe_cfg, 1073 1000 &crtc_state->adjusted_mode, 1074 1001 new_plane_state); 1075 1002 if (ret) ··· 1172 1103 static int dpu_plane_atomic_check(struct drm_plane 
*plane, 1173 1104 struct drm_atomic_state *state) 1174 1105 { 1175 - struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state, 1176 - plane); 1177 - int ret = 0; 1178 - struct dpu_plane *pdpu = to_dpu_plane(plane); 1179 - struct dpu_plane_state *pstate = to_dpu_plane_state(new_plane_state); 1180 - struct dpu_kms *dpu_kms = _dpu_plane_get_kms(plane); 1181 - struct dpu_sw_pipe *pipe = &pstate->pipe[0]; 1182 - struct dpu_sw_pipe *r_pipe = &pstate->pipe[1]; 1183 - struct dpu_sw_pipe_cfg *pipe_cfg = &pstate->pipe_cfg[0]; 1184 - struct dpu_sw_pipe_cfg *r_pipe_cfg = &pstate->pipe_cfg[1]; 1185 - const struct drm_crtc_state *crtc_state = NULL; 1186 - uint32_t max_linewidth = dpu_kms->catalog->caps->max_linewidth; 1187 - 1188 - if (new_plane_state->crtc) 1189 - crtc_state = drm_atomic_get_new_crtc_state(state, 1190 - new_plane_state->crtc); 1191 - 1192 - pipe->sspp = dpu_rm_get_sspp(&dpu_kms->rm, pdpu->pipe); 1193 - 1194 - if (!pipe->sspp) 1195 - return -EINVAL; 1196 - 1197 - ret = dpu_plane_atomic_check_nosspp(plane, new_plane_state, crtc_state); 1198 - if (ret) 1199 - return ret; 1200 - 1201 - if (!new_plane_state->visible) 1202 - return 0; 1203 - 1204 - if (!dpu_plane_try_multirect_parallel(pipe, pipe_cfg, r_pipe, r_pipe_cfg, 1205 - pipe->sspp, 1206 - msm_framebuffer_format(new_plane_state->fb), 1207 - max_linewidth)) { 1208 - DPU_DEBUG_PLANE(pdpu, "invalid " DRM_RECT_FMT " /" DRM_RECT_FMT 1209 - " max_line:%u, can't use split source\n", 1210 - DRM_RECT_ARG(&pipe_cfg->src_rect), 1211 - DRM_RECT_ARG(&r_pipe_cfg->src_rect), 1212 - max_linewidth); 1213 - return -E2BIG; 1214 - } 1215 - 1216 - return dpu_plane_atomic_check_sspp(plane, state, crtc_state); 1217 - } 1218 - 1219 - static int dpu_plane_virtual_atomic_check(struct drm_plane *plane, 1220 - struct drm_atomic_state *state) 1221 - { 1222 1106 struct drm_plane_state *plane_state = 1223 1107 drm_atomic_get_plane_state(state, plane); 1224 1108 struct drm_plane_state *old_plane_state = 1225 1109 
drm_atomic_get_old_plane_state(state, plane); 1226 - struct dpu_plane_state *pstate = to_dpu_plane_state(plane_state); 1110 + int ret = 0; 1227 1111 struct drm_crtc_state *crtc_state = NULL; 1228 - int ret, i; 1229 1112 1230 1113 if (IS_ERR(plane_state)) 1231 1114 return PTR_ERR(plane_state); ··· 1190 1169 if (ret) 1191 1170 return ret; 1192 1171 1193 - if (!plane_state->visible) { 1194 - /* 1195 - * resources are freed by dpu_crtc_assign_plane_resources(), 1196 - * but clean them here. 1197 - */ 1198 - for (i = 0; i < PIPES_PER_PLANE; i++) 1199 - pstate->pipe[i].sspp = NULL; 1200 - 1172 + if (!plane_state->visible) 1201 1173 return 0; 1202 - } 1203 1174 1204 1175 /* 1205 1176 * Force resource reallocation if the format of FB or src/dst have ··· 1206 1193 msm_framebuffer_format(old_plane_state->fb) != 1207 1194 msm_framebuffer_format(plane_state->fb)) 1208 1195 crtc_state->planes_changed = true; 1209 - 1210 1196 return 0; 1211 1197 } 1212 1198 ··· 1252 1240 struct dpu_global_state *global_state, 1253 1241 struct drm_atomic_state *state, 1254 1242 struct drm_plane_state *plane_state, 1243 + const struct drm_crtc_state *crtc_state, 1255 1244 struct drm_plane_state **prev_adjacent_plane_state) 1256 1245 { 1257 - const struct drm_crtc_state *crtc_state = NULL; 1258 1246 struct drm_plane *plane = plane_state->plane; 1259 1247 struct dpu_kms *dpu_kms = _dpu_plane_get_kms(plane); 1260 1248 struct dpu_rm_sspp_requirements reqs; ··· 1263 1251 struct dpu_sw_pipe_cfg *pipe_cfg; 1264 1252 const struct msm_format *fmt; 1265 1253 int i, ret; 1266 - 1267 - if (plane_state->crtc) 1268 - crtc_state = drm_atomic_get_new_crtc_state(state, 1269 - plane_state->crtc); 1270 1254 1271 1255 pstate = to_dpu_plane_state(plane_state); 1272 1256 for (i = 0; i < STAGES_PER_PLANE; i++) ··· 1274 1266 1275 1267 if (!plane_state->fb) 1276 1268 return -EINVAL; 1269 + 1270 + ret = dpu_plane_split(plane, plane_state, crtc_state); 1271 + if (ret) 1272 + return ret; 1277 1273 1278 1274 fmt = 
msm_framebuffer_format(plane_state->fb); 1279 1275 reqs.yuv = MSM_FORMAT_IS_YUV(fmt); ··· 1309 1297 return dpu_plane_atomic_check_sspp(plane, state, crtc_state); 1310 1298 } 1311 1299 1300 + static int dpu_plane_assign_resources(struct drm_crtc *crtc, 1301 + struct dpu_global_state *global_state, 1302 + struct drm_atomic_state *state, 1303 + struct drm_plane_state *plane_state, 1304 + const struct drm_crtc_state *crtc_state) 1305 + { 1306 + struct drm_plane *plane = plane_state->plane; 1307 + struct dpu_kms *dpu_kms = _dpu_plane_get_kms(plane); 1308 + struct dpu_plane_state *pstate = to_dpu_plane_state(plane_state); 1309 + struct dpu_sw_pipe *pipe = &pstate->pipe[0]; 1310 + struct dpu_sw_pipe *r_pipe = &pstate->pipe[1]; 1311 + struct dpu_sw_pipe_cfg *pipe_cfg = &pstate->pipe_cfg[0]; 1312 + struct dpu_sw_pipe_cfg *r_pipe_cfg = &pstate->pipe_cfg[1]; 1313 + struct dpu_plane *pdpu = to_dpu_plane(plane); 1314 + int ret; 1315 + 1316 + pipe->sspp = dpu_rm_get_sspp(&dpu_kms->rm, pdpu->pipe); 1317 + if (!pipe->sspp) 1318 + return -EINVAL; 1319 + 1320 + ret = dpu_plane_split(plane, plane_state, crtc_state); 1321 + if (ret) 1322 + return ret; 1323 + 1324 + if (!dpu_plane_try_multirect_parallel(pipe, pipe_cfg, r_pipe, r_pipe_cfg, 1325 + pipe->sspp, 1326 + msm_framebuffer_format(plane_state->fb), 1327 + dpu_kms->catalog->caps->max_linewidth)) { 1328 + DPU_DEBUG_PLANE(pdpu, "invalid " DRM_RECT_FMT " /" DRM_RECT_FMT 1329 + " max_line:%u, can't use split source\n", 1330 + DRM_RECT_ARG(&pipe_cfg->src_rect), 1331 + DRM_RECT_ARG(&r_pipe_cfg->src_rect), 1332 + dpu_kms->catalog->caps->max_linewidth); 1333 + return -E2BIG; 1334 + } 1335 + 1336 + return dpu_plane_atomic_check_sspp(plane, state, crtc_state); 1337 + } 1338 + 1312 1339 int dpu_assign_plane_resources(struct dpu_global_state *global_state, 1313 1340 struct drm_atomic_state *state, 1314 1341 struct drm_crtc *crtc, 1315 1342 struct drm_plane_state **states, 1316 1343 unsigned int num_planes) 1317 1344 { 1318 - unsigned int i; 
1319 1345 struct drm_plane_state *prev_adjacent_plane_state[STAGES_PER_PLANE] = { NULL }; 1346 + const struct drm_crtc_state *crtc_state = NULL; 1347 + unsigned int i; 1348 + int ret; 1320 1349 1321 1350 for (i = 0; i < num_planes; i++) { 1322 1351 struct drm_plane_state *plane_state = states[i]; ··· 1366 1313 !plane_state->visible) 1367 1314 continue; 1368 1315 1369 - int ret = dpu_plane_virtual_assign_resources(crtc, global_state, 1316 + if (plane_state->crtc) 1317 + crtc_state = drm_atomic_get_new_crtc_state(state, 1318 + plane_state->crtc); 1319 + 1320 + if (!dpu_use_virtual_planes) 1321 + ret = dpu_plane_assign_resources(crtc, global_state, 1322 + state, plane_state, 1323 + crtc_state); 1324 + else 1325 + ret = dpu_plane_virtual_assign_resources(crtc, global_state, 1370 1326 state, plane_state, 1327 + crtc_state, 1371 1328 prev_adjacent_plane_state); 1372 1329 if (ret) 1373 1330 return ret; ··· 1814 1751 static const struct drm_plane_helper_funcs dpu_plane_virtual_helper_funcs = { 1815 1752 .prepare_fb = dpu_plane_prepare_fb, 1816 1753 .cleanup_fb = dpu_plane_cleanup_fb, 1817 - .atomic_check = dpu_plane_virtual_atomic_check, 1754 + .atomic_check = dpu_plane_atomic_check, 1818 1755 .atomic_update = dpu_plane_atomic_update, 1819 1756 }; 1820 1757
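The wide-plane split in the dpu_plane.c hunks above cuts a too-wide source/dest rect at the midpoint so each half fits one SSPP rectangle. The same arithmetic can be sketched outside the driver; `struct rect` and the limits below are illustrative stand-ins, not the DRM types:

```c
/* Minimal stand-in for struct drm_rect (x2 is the exclusive right edge). */
struct rect {
	int x1, x2;
};

static int rect_width(const struct rect *r)
{
	return r->x2 - r->x1;
}

/*
 * Mirror of the wide-rect split in dpu_plane_split(): if the rect
 * exceeds max_linewidth, copy the config to the right-hand rect and
 * cut both at the midpoint. Returns -1 (the driver returns -E2BIG)
 * when even two SSPP rectangles cannot cover the width.
 */
static int split_wide(struct rect *l, struct rect *r, int max_linewidth)
{
	if (rect_width(l) <= max_linewidth) {
		r->x1 = r->x2 = 0;	/* right rect unused */
		return 0;
	}
	if (rect_width(l) > 2 * max_linewidth)
		return -1;

	*r = *l;
	l->x2 = (l->x1 + l->x2) >> 1;	/* same midpoint cut as the driver */
	r->x1 = l->x2;
	return 0;
}
```

For a 5120-pixel-wide rect against a 2560-pixel line limit this yields two adjacent 2560-pixel halves, which is exactly the 2:2:1 case the comment describes.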
+7 -12
drivers/gpu/drm/msm/disp/dpu1/dpu_trace.h
··· 72 72 ); 73 73 74 74 TRACE_EVENT(dpu_perf_set_ot, 75 - TP_PROTO(u32 pnum, u32 xin_id, u32 rd_lim, u32 vbif_idx), 76 - TP_ARGS(pnum, xin_id, rd_lim, vbif_idx), 75 + TP_PROTO(u32 pnum, u32 xin_id, u32 rd_lim), 76 + TP_ARGS(pnum, xin_id, rd_lim), 77 77 TP_STRUCT__entry( 78 78 __field(u32, pnum) 79 79 __field(u32, xin_id) 80 80 __field(u32, rd_lim) 81 - __field(u32, vbif_idx) 82 81 ), 83 82 TP_fast_assign( 84 83 __entry->pnum = pnum; 85 84 __entry->xin_id = xin_id; 86 85 __entry->rd_lim = rd_lim; 87 - __entry->vbif_idx = vbif_idx; 88 86 ), 89 - TP_printk("pnum:%d xin_id:%d ot:%d vbif:%d", 90 - __entry->pnum, __entry->xin_id, __entry->rd_lim, 91 - __entry->vbif_idx) 87 + TP_printk("pnum:%d xin_id:%d ot:%d", 88 + __entry->pnum, __entry->xin_id, __entry->rd_lim) 92 89 ) 93 90 94 91 TRACE_EVENT(dpu_cmd_release_bw, ··· 858 861 ); 859 862 860 863 TRACE_EVENT(dpu_vbif_wait_xin_halt_fail, 861 - TP_PROTO(enum dpu_vbif index, u32 xin_id), 862 - TP_ARGS(index, xin_id), 864 + TP_PROTO(u32 xin_id), 865 + TP_ARGS(xin_id), 863 866 TP_STRUCT__entry( 864 - __field( enum dpu_vbif, index ) 865 867 __field( u32, xin_id ) 866 868 ), 867 869 TP_fast_assign( 868 - __entry->index = index; 869 870 __entry->xin_id = xin_id; 870 871 ), 871 - TP_printk("index:%d xin_id:%u", __entry->index, __entry->xin_id) 872 + TP_printk("xin_id:%u", __entry->xin_id) 872 873 ); 873 874 874 875 TRACE_EVENT(dpu_pp_connect_ext_te,
+60 -97
drivers/gpu/drm/msm/disp/dpu1/dpu_vbif.c
··· 11 11 #include "dpu_hw_vbif.h" 12 12 #include "dpu_trace.h" 13 13 14 - static struct dpu_hw_vbif *dpu_get_vbif(struct dpu_kms *dpu_kms, enum dpu_vbif vbif_idx) 15 - { 16 - if (vbif_idx < ARRAY_SIZE(dpu_kms->hw_vbif)) 17 - return dpu_kms->hw_vbif[vbif_idx]; 18 - 19 - return NULL; 20 - } 21 - 22 - static const char *dpu_vbif_name(enum dpu_vbif idx) 23 - { 24 - switch (idx) { 25 - case VBIF_RT: 26 - return "VBIF_RT"; 27 - case VBIF_NRT: 28 - return "VBIF_NRT"; 29 - default: 30 - return "??"; 31 - } 32 - } 33 - 34 14 /** 35 15 * _dpu_vbif_wait_for_xin_halt - wait for the xin to halt 36 16 * @vbif: Pointer to hardware vbif driver ··· 42 62 43 63 if (!status) { 44 64 rc = -ETIMEDOUT; 45 - DPU_ERROR("%s client %d not halting. TIMEDOUT.\n", 46 - dpu_vbif_name(vbif->idx), xin_id); 65 + DPU_ERROR("VBIF client %d not halting. TIMEDOUT.\n", xin_id); 47 66 } else { 48 67 rc = 0; 49 - DRM_DEBUG_ATOMIC("%s client %d is halted\n", 50 - dpu_vbif_name(vbif->idx), xin_id); 68 + DRM_DEBUG_ATOMIC("VBIF client %d is halted\n", xin_id); 51 69 } 52 70 53 71 return rc; ··· 85 107 } 86 108 } 87 109 88 - DRM_DEBUG_ATOMIC("%s xin:%d w:%d h:%d fps:%d pps:%llu ot:%u\n", 89 - dpu_vbif_name(vbif->idx), params->xin_id, 90 - params->width, params->height, params->frame_rate, 91 - pps, *ot_lim); 110 + DRM_DEBUG_ATOMIC("VBIF xin:%d w:%d h:%d fps:%d pps:%llu ot:%u\n", 111 + params->xin_id, 112 + params->width, params->height, params->frame_rate, 113 + pps, *ot_lim); 92 114 } 93 115 94 116 /** ··· 131 153 } 132 154 133 155 exit: 134 - DRM_DEBUG_ATOMIC("%s xin:%d ot_lim:%d\n", 135 - dpu_vbif_name(vbif->idx), params->xin_id, ot_lim); 156 + DRM_DEBUG_ATOMIC("VBIF xin:%d ot_lim:%d\n", params->xin_id, ot_lim); 136 157 return ot_lim; 137 158 } 138 159 ··· 149 172 u32 ot_lim; 150 173 int ret; 151 174 152 - vbif = dpu_get_vbif(dpu_kms, params->vbif_idx); 175 + vbif = dpu_kms->hw_vbif; 153 176 if (!vbif) { 154 177 DRM_DEBUG_ATOMIC("invalid arguments vbif %d\n", vbif != NULL); 155 178 return; ··· 167 190 if 
(ot_lim == 0) 168 191 return; 169 192 170 - trace_dpu_perf_set_ot(params->num, params->xin_id, ot_lim, 171 - params->vbif_idx); 193 + trace_dpu_perf_set_ot(params->num, params->xin_id, ot_lim); 172 194 173 195 vbif->ops.set_limit_conf(vbif, params->xin_id, params->rd, ot_lim); 174 196 ··· 175 199 176 200 ret = _dpu_vbif_wait_for_xin_halt(vbif, params->xin_id); 177 201 if (ret) 178 - trace_dpu_vbif_wait_xin_halt_fail(vbif->idx, params->xin_id); 202 + trace_dpu_vbif_wait_xin_halt_fail(params->xin_id); 179 203 180 204 vbif->ops.set_halt_ctrl(vbif, params->xin_id, false); 181 205 } ··· 197 221 return; 198 222 } 199 223 200 - vbif = dpu_get_vbif(dpu_kms, params->vbif_idx); 224 + vbif = dpu_kms->hw_vbif; 201 225 202 226 if (!vbif || !vbif->cap) { 203 - DPU_ERROR("invalid vbif %d\n", params->vbif_idx); 227 + DPU_ERROR("invalid vbif\n"); 204 228 return; 205 229 } 206 230 ··· 218 242 } 219 243 220 244 for (i = 0; i < qos_tbl->npriority_lvl; i++) { 221 - DRM_DEBUG_ATOMIC("%s xin:%d lvl:%d/%d\n", 222 - dpu_vbif_name(params->vbif_idx), params->xin_id, i, 245 + DRM_DEBUG_ATOMIC("VBIF xin:%d lvl:%d/%d\n", 246 + params->xin_id, i, 223 247 qos_tbl->priority_lvl[i]); 224 248 vbif->ops.set_qos_remap(vbif, params->xin_id, i, 225 249 qos_tbl->priority_lvl[i]); ··· 233 257 void dpu_vbif_clear_errors(struct dpu_kms *dpu_kms) 234 258 { 235 259 struct dpu_hw_vbif *vbif; 236 - u32 i, pnd, src; 260 + u32 pnd, src; 237 261 238 - for (i = 0; i < ARRAY_SIZE(dpu_kms->hw_vbif); i++) { 239 - vbif = dpu_kms->hw_vbif[i]; 240 - if (vbif && vbif->ops.clear_errors) { 241 - vbif->ops.clear_errors(vbif, &pnd, &src); 242 - if (pnd || src) { 243 - DRM_DEBUG_KMS("%s: pnd 0x%X, src 0x%X\n", 244 - dpu_vbif_name(vbif->idx), pnd, src); 245 - } 262 + vbif = dpu_kms->hw_vbif; 263 + if (vbif && vbif->ops.clear_errors) { 264 + vbif->ops.clear_errors(vbif, &pnd, &src); 265 + if (pnd || src) { 266 + DRM_DEBUG_KMS("VBIF: pnd 0x%X, src 0x%X\n", pnd, src); 246 267 } 247 268 } 248 269 } ··· 251 278 void 
dpu_vbif_init_memtypes(struct dpu_kms *dpu_kms) 252 279 { 253 280 struct dpu_hw_vbif *vbif; 254 - int i, j; 281 + int j; 255 282 256 - for (i = 0; i < ARRAY_SIZE(dpu_kms->hw_vbif); i++) { 257 - vbif = dpu_kms->hw_vbif[i]; 258 - if (vbif && vbif->cap && vbif->ops.set_mem_type) { 259 - for (j = 0; j < vbif->cap->memtype_count; j++) 260 - vbif->ops.set_mem_type( 261 - vbif, j, vbif->cap->memtype[j]); 262 - } 283 + vbif = dpu_kms->hw_vbif; 284 + if (vbif && vbif->cap && vbif->ops.set_mem_type) { 285 + for (j = 0; j < vbif->cap->memtype_count; j++) 286 + vbif->ops.set_mem_type(vbif, j, vbif->cap->memtype[j]); 263 287 } 264 288 } 265 289 ··· 264 294 265 295 void dpu_debugfs_vbif_init(struct dpu_kms *dpu_kms, struct dentry *debugfs_root) 266 296 { 297 + const struct dpu_vbif_cfg *vbif = dpu_kms->catalog->vbif; 267 298 char vbif_name[32]; 268 - struct dentry *entry, *debugfs_vbif; 269 - int i, j; 299 + struct dentry *debugfs_vbif; 300 + int j; 270 301 271 - entry = debugfs_create_dir("vbif", debugfs_root); 302 + debugfs_vbif = debugfs_create_dir("vbif", debugfs_root); 272 303 273 - for (i = 0; i < dpu_kms->catalog->vbif_count; i++) { 274 - const struct dpu_vbif_cfg *vbif = &dpu_kms->catalog->vbif[i]; 304 + debugfs_create_u32("features", 0600, debugfs_vbif, 305 + (u32 *)&vbif->features); 275 306 276 - snprintf(vbif_name, sizeof(vbif_name), "%d", vbif->id); 307 + debugfs_create_u32("xin_halt_timeout", 0400, debugfs_vbif, 308 + (u32 *)&vbif->xin_halt_timeout); 277 309 278 - debugfs_vbif = debugfs_create_dir(vbif_name, entry); 310 + debugfs_create_u32("default_rd_ot_limit", 0400, debugfs_vbif, 311 + (u32 *)&vbif->default_ot_rd_limit); 279 312 280 - debugfs_create_u32("features", 0600, debugfs_vbif, 281 - (u32 *)&vbif->features); 313 + debugfs_create_u32("default_wr_ot_limit", 0400, debugfs_vbif, 314 + (u32 *)&vbif->default_ot_wr_limit); 282 315 283 - debugfs_create_u32("xin_halt_timeout", 0400, debugfs_vbif, 284 - (u32 *)&vbif->xin_halt_timeout); 316 + for (j = 0; j < 
vbif->dynamic_ot_rd_tbl.count; j++) { 317 + const struct dpu_vbif_dynamic_ot_cfg *cfg = 318 + &vbif->dynamic_ot_rd_tbl.cfg[j]; 285 319 286 - debugfs_create_u32("default_rd_ot_limit", 0400, debugfs_vbif, 287 - (u32 *)&vbif->default_ot_rd_limit); 320 + snprintf(vbif_name, sizeof(vbif_name), 321 + "dynamic_ot_rd_%d_pps", j); 322 + debugfs_create_u64(vbif_name, 0400, debugfs_vbif, 323 + (u64 *)&cfg->pps); 324 + snprintf(vbif_name, sizeof(vbif_name), 325 + "dynamic_ot_rd_%d_ot_limit", j); 326 + debugfs_create_u32(vbif_name, 0400, debugfs_vbif, 327 + (u32 *)&cfg->ot_limit); 328 + } 288 329 289 - debugfs_create_u32("default_wr_ot_limit", 0400, debugfs_vbif, 290 - (u32 *)&vbif->default_ot_wr_limit); 330 + for (j = 0; j < vbif->dynamic_ot_wr_tbl.count; j++) { 331 + const struct dpu_vbif_dynamic_ot_cfg *cfg = 332 + &vbif->dynamic_ot_wr_tbl.cfg[j]; 291 333 292 - for (j = 0; j < vbif->dynamic_ot_rd_tbl.count; j++) { 293 - const struct dpu_vbif_dynamic_ot_cfg *cfg = 294 - &vbif->dynamic_ot_rd_tbl.cfg[j]; 295 - 296 - snprintf(vbif_name, sizeof(vbif_name), 297 - "dynamic_ot_rd_%d_pps", j); 298 - debugfs_create_u64(vbif_name, 0400, debugfs_vbif, 299 - (u64 *)&cfg->pps); 300 - snprintf(vbif_name, sizeof(vbif_name), 301 - "dynamic_ot_rd_%d_ot_limit", j); 302 - debugfs_create_u32(vbif_name, 0400, debugfs_vbif, 303 - (u32 *)&cfg->ot_limit); 304 - } 305 - 306 - for (j = 0; j < vbif->dynamic_ot_wr_tbl.count; j++) { 307 - const struct dpu_vbif_dynamic_ot_cfg *cfg = 308 - &vbif->dynamic_ot_wr_tbl.cfg[j]; 309 - 310 - snprintf(vbif_name, sizeof(vbif_name), 311 - "dynamic_ot_wr_%d_pps", j); 312 - debugfs_create_u64(vbif_name, 0400, debugfs_vbif, 313 - (u64 *)&cfg->pps); 314 - snprintf(vbif_name, sizeof(vbif_name), 315 - "dynamic_ot_wr_%d_ot_limit", j); 316 - debugfs_create_u32(vbif_name, 0400, debugfs_vbif, 317 - (u32 *)&cfg->ot_limit); 318 - } 334 + snprintf(vbif_name, sizeof(vbif_name), 335 + "dynamic_ot_wr_%d_pps", j); 336 + debugfs_create_u64(vbif_name, 0400, debugfs_vbif, 337 + (u64 
*)&cfg->pps); 338 + snprintf(vbif_name, sizeof(vbif_name), 339 + "dynamic_ot_wr_%d_ot_limit", j); 340 + debugfs_create_u32(vbif_name, 0400, debugfs_vbif, 341 + (u32 *)&cfg->ot_limit); 319 342 } 320 343 } 321 344 #endif
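The dpu_vbif.c changes above collapse the per-index lookup to the single remaining `dpu_kms->hw_vbif` instance; the dynamic OT tables still exposed in debugfs pair a pps (pixels per second) threshold with an OT limit. A hedged sketch of such a threshold lookup — the flattened table layout and function name are assumptions for illustration, not the driver's exact `_dpu_vbif_get_ot_limit()` logic:

```c
#include <stdint.h>

/* Hypothetical flattened form of dpu_vbif_dynamic_ot_cfg entries. */
struct ot_cfg {
	uint64_t pps;		/* workload threshold, pixels/sec */
	uint32_t ot_limit;	/* outstanding-transaction limit  */
};

/*
 * Pick an OT limit for a workload: the table is assumed sorted by
 * ascending pps, the first entry whose threshold covers the workload
 * wins, and anything past the last threshold falls back to the
 * default limit.
 */
static uint32_t ot_limit_lookup(const struct ot_cfg *tbl, int count,
				uint64_t pps, uint32_t dflt)
{
	int i;

	for (i = 0; i < count; i++)
		if (pps <= tbl[i].pps)
			return tbl[i].ot_limit;

	return dflt;
}
```

With pps computed as width * height * frame_rate (as in the debug print above), a 1080p@60 workload lands in whichever bucket its ~124 Mpps falls into.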
-4
drivers/gpu/drm/msm/disp/dpu1/dpu_vbif.h
··· 15 15 u32 frame_rate; 16 16 bool rd; 17 17 bool is_wfd; 18 - u32 vbif_idx; 19 18 }; 20 19 21 20 struct dpu_vbif_set_memtype_params { 22 21 u32 xin_id; 23 - u32 vbif_idx; 24 22 bool is_cacheable; 25 23 }; 26 24 27 25 /** 28 26 * struct dpu_vbif_set_qos_params - QoS remapper parameter 29 - * @vbif_idx: vbif identifier 30 27 * @xin_id: client interface identifier 31 28 * @num: pipe identifier (debug only) 32 29 * @is_rt: true if pipe is used in real-time use case 33 30 */ 34 31 struct dpu_vbif_set_qos_params { 35 - u32 vbif_idx; 36 32 u32 xin_id; 37 33 u32 num; 38 34 bool is_rt;
+2 -92
drivers/gpu/drm/msm/disp/mdp5/mdp5_cfg.c
··· 14 14 /* mdp5_cfg must be exposed (used in mdp5.xml.h) */ 15 15 const struct mdp5_cfg_hw *mdp5_cfg = NULL; 16 16 17 - static const struct mdp5_cfg_hw msm8x74v1_config = { 18 - .name = "msm8x74v1", 19 - .mdp = { 20 - .count = 1, 21 - .caps = MDP_CAP_SMP | 22 - 0, 23 - }, 24 - .smp = { 25 - .mmb_count = 22, 26 - .mmb_size = 4096, 27 - .clients = { 28 - [SSPP_VIG0] = 1, [SSPP_VIG1] = 4, [SSPP_VIG2] = 7, 29 - [SSPP_DMA0] = 10, [SSPP_DMA1] = 13, 30 - [SSPP_RGB0] = 16, [SSPP_RGB1] = 17, [SSPP_RGB2] = 18, 31 - }, 32 - }, 33 - .ctl = { 34 - .count = 5, 35 - .base = { 0x00500, 0x00600, 0x00700, 0x00800, 0x00900 }, 36 - .flush_hw_mask = 0x0003ffff, 37 - }, 38 - .pipe_vig = { 39 - .count = 3, 40 - .base = { 0x01100, 0x01500, 0x01900 }, 41 - .caps = MDP_PIPE_CAP_HFLIP | 42 - MDP_PIPE_CAP_VFLIP | 43 - MDP_PIPE_CAP_SCALE | 44 - MDP_PIPE_CAP_CSC | 45 - 0, 46 - }, 47 - .pipe_rgb = { 48 - .count = 3, 49 - .base = { 0x01d00, 0x02100, 0x02500 }, 50 - .caps = MDP_PIPE_CAP_HFLIP | 51 - MDP_PIPE_CAP_VFLIP | 52 - MDP_PIPE_CAP_SCALE | 53 - 0, 54 - }, 55 - .pipe_dma = { 56 - .count = 2, 57 - .base = { 0x02900, 0x02d00 }, 58 - .caps = MDP_PIPE_CAP_HFLIP | 59 - MDP_PIPE_CAP_VFLIP | 60 - 0, 61 - }, 62 - .lm = { 63 - .count = 5, 64 - .base = { 0x03100, 0x03500, 0x03900, 0x03d00, 0x04100 }, 65 - .instances = { 66 - { .id = 0, .pp = 0, .dspp = 0, 67 - .caps = MDP_LM_CAP_DISPLAY, }, 68 - { .id = 1, .pp = 1, .dspp = 1, 69 - .caps = MDP_LM_CAP_DISPLAY, }, 70 - { .id = 2, .pp = 2, .dspp = 2, 71 - .caps = MDP_LM_CAP_DISPLAY, }, 72 - { .id = 3, .pp = -1, .dspp = -1, 73 - .caps = MDP_LM_CAP_WB }, 74 - { .id = 4, .pp = -1, .dspp = -1, 75 - .caps = MDP_LM_CAP_WB }, 76 - }, 77 - .nb_stages = 5, 78 - .max_width = 2048, 79 - .max_height = 0xFFFF, 80 - }, 81 - .dspp = { 82 - .count = 3, 83 - .base = { 0x04500, 0x04900, 0x04d00 }, 84 - }, 85 - .pp = { 86 - .count = 3, 87 - .base = { 0x21a00, 0x21b00, 0x21c00 }, 88 - }, 89 - .intf = { 90 - .base = { 0x21000, 0x21200, 0x21400, 0x21600 }, 91 - .connect = { 
92 - [0] = INTF_eDP, 93 - [1] = INTF_DSI, 94 - [2] = INTF_DSI, 95 - [3] = INTF_HDMI, 96 - }, 97 - }, 98 - .perf = { 99 - .ab_inefficiency = 200, 100 - .ib_inefficiency = 120, 101 - .clk_inefficiency = 125 102 - }, 103 - .max_clk = 200000000, 104 - }; 105 - 106 17 static const struct mdp5_cfg_hw msm8x26_config = { 107 18 .name = "msm8x26", 108 19 .mdp = { ··· 95 184 .max_clk = 200000000, 96 185 }; 97 186 98 - static const struct mdp5_cfg_hw msm8x74v2_config = { 187 + static const struct mdp5_cfg_hw msm8x74_config = { 99 188 .name = "msm8x74", 100 189 .mdp = { 101 190 .count = 1, ··· 1009 1098 }; 1010 1099 1011 1100 static const struct mdp5_cfg_handler cfg_handlers_v1[] = { 1012 - { .revision = 0, .config = { .hw = &msm8x74v1_config } }, 1013 1101 { .revision = 1, .config = { .hw = &msm8x26_config } }, 1014 - { .revision = 2, .config = { .hw = &msm8x74v2_config } }, 1102 + { .revision = 2, .config = { .hw = &msm8x74_config } }, 1015 1103 { .revision = 3, .config = { .hw = &apq8084_config } }, 1016 1104 { .revision = 6, .config = { .hw = &msm8x16_config } }, 1017 1105 { .revision = 8, .config = { .hw = &msm8x36_config } },
-90
drivers/gpu/drm/msm/disp/mdp5/mdp5_ctl.c
··· 17 17 * a specific data path ID - REG_MDP5_CTL_*(<id>, ...) 18 18 * 19 19 * Hardware capabilities determine the number of concurrent data paths 20 - * 21 - * In certain use cases (high-resolution dual pipe), one single CTL can be 22 - * shared across multiple CRTCs. 23 20 */ 24 21 25 22 #define CTL_STAT_BUSY 0x1 ··· 43 46 u32 pending_ctl_trigger; 44 47 45 48 bool cursor_on; 46 - 47 - /* True if the current CTL has FLUSH bits pending for single FLUSH. */ 48 - bool flush_pending; 49 - 50 - struct mdp5_ctl *pair; /* Paired CTL to be flushed together */ 51 49 }; 52 50 53 51 struct mdp5_ctl_manager { ··· 54 62 55 63 /* to filter out non-present bits in the current hardware config */ 56 64 u32 flush_hw_mask; 57 - 58 - /* status for single FLUSH */ 59 - bool single_flush_supported; 60 - u32 single_flush_pending_mask; 61 65 62 66 /* pool of CTLs + lock to protect resource allocation (ctls[i].busy) */ 63 67 spinlock_t pool_lock; ··· 473 485 return sw_mask; 474 486 } 475 487 476 - static void fix_for_single_flush(struct mdp5_ctl *ctl, u32 *flush_mask, 477 - u32 *flush_id) 478 - { 479 - struct mdp5_ctl_manager *ctl_mgr = ctl->ctlm; 480 - 481 - if (ctl->pair) { 482 - DBG("CTL %d FLUSH pending mask %x", ctl->id, *flush_mask); 483 - ctl->flush_pending = true; 484 - ctl_mgr->single_flush_pending_mask |= (*flush_mask); 485 - *flush_mask = 0; 486 - 487 - if (ctl->pair->flush_pending) { 488 - *flush_id = min_t(u32, ctl->id, ctl->pair->id); 489 - *flush_mask = ctl_mgr->single_flush_pending_mask; 490 - 491 - ctl->flush_pending = false; 492 - ctl->pair->flush_pending = false; 493 - ctl_mgr->single_flush_pending_mask = 0; 494 - 495 - DBG("Single FLUSH mask %x,ID %d", *flush_mask, 496 - *flush_id); 497 - } 498 - } 499 - } 500 - 501 488 /** 502 489 * mdp5_ctl_commit() - Register Flush 503 490 * ··· 518 555 519 556 curr_ctl_flush_mask = flush_mask; 520 557 521 - fix_for_single_flush(ctl, &flush_mask, &flush_id); 522 - 523 558 if (!start) { 524 559 ctl->flush_mask |= flush_mask; 525 560 
return curr_ctl_flush_mask; ··· 547 586 int mdp5_ctl_get_ctl_id(struct mdp5_ctl *ctl) 548 587 { 549 588 return WARN_ON(!ctl) ? -EINVAL : ctl->id; 550 - } 551 - 552 - /* 553 - * mdp5_ctl_pair() - Associate 2 booked CTLs for single FLUSH 554 - */ 555 - int mdp5_ctl_pair(struct mdp5_ctl *ctlx, struct mdp5_ctl *ctly, bool enable) 556 - { 557 - struct mdp5_ctl_manager *ctl_mgr = ctlx->ctlm; 558 - struct mdp5_kms *mdp5_kms = get_kms(ctl_mgr); 559 - 560 - /* do nothing silently if hw doesn't support */ 561 - if (!ctl_mgr->single_flush_supported) 562 - return 0; 563 - 564 - if (!enable) { 565 - ctlx->pair = NULL; 566 - ctly->pair = NULL; 567 - mdp5_write(mdp5_kms, REG_MDP5_SPARE_0, 0); 568 - return 0; 569 - } else if ((ctlx->pair != NULL) || (ctly->pair != NULL)) { 570 - DRM_DEV_ERROR(ctl_mgr->dev->dev, "CTLs already paired\n"); 571 - return -EINVAL; 572 - } else if (!(ctlx->status & ctly->status & CTL_STAT_BOOKED)) { 573 - DRM_DEV_ERROR(ctl_mgr->dev->dev, "Only pair booked CTLs\n"); 574 - return -EINVAL; 575 - } 576 - 577 - ctlx->pair = ctly; 578 - ctly->pair = ctlx; 579 - 580 - mdp5_write(mdp5_kms, REG_MDP5_SPARE_0, 581 - MDP5_SPARE_0_SPLIT_DPL_SINGLE_FLUSH_EN); 582 - 583 - return 0; 584 589 } 585 590 586 591 /* ··· 614 687 { 615 688 struct mdp5_ctl_manager *ctl_mgr; 616 689 const struct mdp5_cfg_hw *hw_cfg = mdp5_cfg_get_hw_config(cfg_hnd); 617 - int rev = mdp5_cfg_get_hw_rev(cfg_hnd); 618 - unsigned dsi_cnt = 0; 619 690 const struct mdp5_ctl_block *ctl_cfg = &hw_cfg->ctl; 620 691 unsigned long flags; 621 692 int c, ret; ··· 655 730 spin_lock_init(&ctl->hw_lock); 656 731 } 657 732 658 - /* 659 - * In bonded DSI case, CTL0 and CTL1 are always assigned to two DSI 660 - * interfaces to support single FLUSH feature (Flush CTL0 and CTL1 when 661 - * only write into CTL0's FLUSH register) to keep two DSI pipes in sync. 662 - * Single FLUSH is supported from hw rev v3.0. 
663 - */ 664 - for (c = 0; c < ARRAY_SIZE(hw_cfg->intf.connect); c++) 665 - if (hw_cfg->intf.connect[c] == INTF_DSI) 666 - dsi_cnt++; 667 - if ((rev >= 3) && (dsi_cnt > 1)) { 668 - ctl_mgr->single_flush_supported = true; 669 - /* Reserve CTL0/1 for INTF1/2 */ 670 - ctl_mgr->ctls[0].status |= CTL_STAT_BOOKED; 671 - ctl_mgr->ctls[1].status |= CTL_STAT_BOOKED; 672 - } 673 733 spin_unlock_irqrestore(&ctl_mgr->pool_lock, flags); 674 734 DBG("Pool of %d CTLs created.", ctl_mgr->nctl); 675 735
-1
drivers/gpu/drm/msm/disp/mdp5/mdp5_ctl.h
··· 35 35 36 36 int mdp5_ctl_set_cursor(struct mdp5_ctl *ctl, struct mdp5_pipeline *pipeline, 37 37 int cursor_id, bool enable); 38 - int mdp5_ctl_pair(struct mdp5_ctl *ctlx, struct mdp5_ctl *ctly, bool enable); 39 38 40 39 #define MAX_PIPE_STAGE 2 41 40
+1 -7
drivers/gpu/drm/msm/disp/mdp5/mdp5_smp.c
··· 118 118 u32 width, bool hdecim) 119 119 { 120 120 const struct drm_format_info *info = drm_format_info(format->pixel_format); 121 - struct mdp5_kms *mdp5_kms = get_kms(smp); 122 - int rev = mdp5_cfg_get_hw_rev(mdp5_kms->cfg); 123 121 int i, hsub, nplanes, nlines; 124 122 uint32_t blkcfg = 0; 125 123 ··· 131 133 * U and V components (splits them from Y if necessary) and packs 132 134 * them together, writes to SMP using a single client. 133 135 */ 134 - if ((rev > 0) && (format->chroma_sample > CHROMA_FULL)) { 136 + if (format->chroma_sample > CHROMA_FULL) { 135 137 nplanes = 2; 136 138 137 139 /* if decimation is enabled, HW decimates less on the ··· 148 150 fetch_stride = width * cpp / (i ? hsub : 1); 149 151 150 152 n = DIV_ROUND_UP(fetch_stride * nlines, smp->blk_size); 151 - 152 - /* for hw rev v1.00 */ 153 - if (rev == 0) 154 - n = roundup_pow_of_two(n); 155 153 156 154 blkcfg |= (n << (8 * i)); 157 155 }
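With the rev-0 `roundup_pow_of_two()` quirk dropped above, the SMP request is just a ceiling division of the fetch footprint by the block size, packed one byte per plane into `blkcfg`. A standalone sketch of that arithmetic (helper names are illustrative):

```c
#include <stdint.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

/*
 * Blocks needed for one plane: enough SMP blocks to hold nlines
 * lines of fetch_stride bytes each, in blk_size-byte blocks.
 */
static uint32_t smp_blocks(uint32_t fetch_stride, uint32_t nlines,
			   uint32_t blk_size)
{
	return DIV_ROUND_UP(fetch_stride * nlines, blk_size);
}

/* Pack per-plane block counts into blkcfg, one byte per plane. */
static uint32_t smp_blkcfg(const uint32_t *n, int nplanes)
{
	uint32_t blkcfg = 0;
	int i;

	for (i = 0; i < nplanes; i++)
		blkcfg |= n[i] << (8 * i);
	return blkcfg;
}
```

For example, a 1920-wide RGBA plane (stride 7680 bytes) fetching 2 lines from 4096-byte blocks needs 4 blocks, and a two-plane request of 4 and 2 blocks packs to 0x0204.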
-18
drivers/gpu/drm/msm/dp/dp_ctrl.c
··· 1928 1928 1929 1929 msm_dp_ctrl_phy_reset(ctrl); 1930 1930 phy_init(phy); 1931 - 1932 - drm_dbg_dp(ctrl->drm_dev, "phy=%p init=%d power_on=%d\n", 1933 - phy, phy->init_count, phy->power_count); 1934 1931 } 1935 1932 1936 1933 void msm_dp_ctrl_phy_exit(struct msm_dp_ctrl *msm_dp_ctrl) ··· 1940 1943 1941 1944 msm_dp_ctrl_phy_reset(ctrl); 1942 1945 phy_exit(phy); 1943 - drm_dbg_dp(ctrl->drm_dev, "phy=%p init=%d power_on=%d\n", 1944 - phy, phy->init_count, phy->power_count); 1945 1946 } 1946 1947 1947 1948 static int msm_dp_ctrl_reinitialize_mainlink(struct msm_dp_ctrl_private *ctrl) ··· 1991 1996 phy_exit(phy); 1992 1997 phy_init(phy); 1993 1998 1994 - drm_dbg_dp(ctrl->drm_dev, "phy=%p init=%d power_on=%d\n", 1995 - phy, phy->init_count, phy->power_count); 1996 1999 return 0; 1997 2000 } 1998 2001 ··· 2581 2588 /* aux channel down, reinit phy */ 2582 2589 phy_exit(phy); 2583 2590 phy_init(phy); 2584 - 2585 - drm_dbg_dp(ctrl->drm_dev, "phy=%p init=%d power_on=%d\n", 2586 - phy, phy->init_count, phy->power_count); 2587 2591 } 2588 2592 2589 2593 void msm_dp_ctrl_off_link(struct msm_dp_ctrl *msm_dp_ctrl) ··· 2596 2606 dev_pm_opp_set_rate(ctrl->dev, 0); 2597 2607 msm_dp_ctrl_link_clk_disable(&ctrl->msm_dp_ctrl); 2598 2608 2599 - DRM_DEBUG_DP("Before, phy=%p init_count=%d power_on=%d\n", 2600 - phy, phy->init_count, phy->power_count); 2601 - 2602 2609 phy_power_off(phy); 2603 - 2604 - DRM_DEBUG_DP("After, phy=%p init_count=%d power_on=%d\n", 2605 - phy, phy->init_count, phy->power_count); 2606 2610 } 2607 2611 2608 2612 void msm_dp_ctrl_off(struct msm_dp_ctrl *msm_dp_ctrl) ··· 2622 2638 msm_dp_ctrl_link_clk_disable(&ctrl->msm_dp_ctrl); 2623 2639 2624 2640 phy_power_off(phy); 2625 - drm_dbg_dp(ctrl->drm_dev, "phy=%p init=%d power_on=%d\n", 2626 - phy, phy->init_count, phy->power_count); 2627 2641 } 2628 2642 2629 2643 irqreturn_t msm_dp_ctrl_isr(struct msm_dp_ctrl *msm_dp_ctrl)
+1
drivers/gpu/drm/msm/dp/dp_display.c
··· 210 210 { .compatible = "qcom,x1e80100-dp", .data = &msm_dp_desc_x1e80100 }, 211 211 {} 212 212 }; 213 + MODULE_DEVICE_TABLE(of, msm_dp_dt_match); 213 214 214 215 static struct msm_dp_display_private *dev_get_dp_display_private(struct device *dev) 215 216 {
+1
drivers/gpu/drm/msm/dsi/dsi.c
··· 198 198 { .compatible = "qcom,dsi-ctrl-6g-qcm2290" }, 199 199 {} 200 200 }; 201 + MODULE_DEVICE_TABLE(of, dt_match); 201 202 202 203 static const struct dev_pm_ops dsi_pm_ops = { 203 204 SET_RUNTIME_PM_OPS(msm_dsi_runtime_suspend, msm_dsi_runtime_resume, NULL)
+2 -2
drivers/gpu/drm/msm/dsi/dsi_cfg.c
··· 317 317 &msm8996_dsi_cfg, &msm_dsi_6g_host_ops}, 318 318 {MSM_DSI_VER_MAJOR_6G, MSM_DSI_6G_VER_MINOR_V1_4_2, 319 319 &msm8976_dsi_cfg, &msm_dsi_6g_host_ops}, 320 + {MSM_DSI_VER_MAJOR_6G, MSM_DSI_6G_VER_MINOR_V2_0_0, 321 + &msm8998_dsi_cfg, &msm_dsi_6g_v2_host_ops}, 320 322 {MSM_DSI_VER_MAJOR_6G, MSM_DSI_6G_VER_MINOR_V2_1_0, 321 323 &sdm660_dsi_cfg, &msm_dsi_6g_v2_host_ops}, 322 - {MSM_DSI_VER_MAJOR_6G, MSM_DSI_6G_VER_MINOR_V2_2_0, 323 - &msm8998_dsi_cfg, &msm_dsi_6g_v2_host_ops}, 324 324 {MSM_DSI_VER_MAJOR_6G, MSM_DSI_6G_VER_MINOR_V2_2_1, 325 325 &sdm845_dsi_cfg, &msm_dsi_6g_v2_host_ops}, 326 326 {MSM_DSI_VER_MAJOR_6G, MSM_DSI_6G_VER_MINOR_V2_3_0,
+1 -1
drivers/gpu/drm/msm/dsi/dsi_cfg.h
··· 19 19 #define MSM_DSI_6G_VER_MINOR_V1_3_1 0x10030001 20 20 #define MSM_DSI_6G_VER_MINOR_V1_4_1 0x10040001 21 21 #define MSM_DSI_6G_VER_MINOR_V1_4_2 0x10040002 22 + #define MSM_DSI_6G_VER_MINOR_V2_0_0 0x20000000 22 23 #define MSM_DSI_6G_VER_MINOR_V2_1_0 0x20010000 23 - #define MSM_DSI_6G_VER_MINOR_V2_2_0 0x20000000 24 24 #define MSM_DSI_6G_VER_MINOR_V2_2_1 0x20020001 25 25 #define MSM_DSI_6G_VER_MINOR_V2_3_0 0x20030000 26 26 #define MSM_DSI_6G_VER_MINOR_V2_3_1 0x20030001
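The rename works because these constants appear to pack the version as `(major << 28) | (minor << 16) | patch`, so `0x20000000` is v2.0.0, not v2.2.0. A small sketch of that encoding (the packing layout is inferred from the constant values, not stated in the driver):

```c
#include <assert.h>
#include <stdint.h>

/* Inferred packing of the MSM_DSI_6G_VER_MINOR_* constants. */
static uint32_t dsi_6g_ver(uint32_t major, uint32_t minor, uint32_t patch)
{
	return (major << 28) | (minor << 16) | patch;
}
```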
+43 -7
drivers/gpu/drm/msm/dsi/dsi_host.c
··· 569 569 * dsi_adjust_pclk_for_compression() - Adjust the pclk rate for compression case 570 570 * @mode: The selected mode for the DSI output 571 571 * @dsc: DRM DSC configuration for this DSI output 572 + * @is_bonded_dsi: True if two DSI controllers are bonded 572 573 * 573 574 * Adjust the pclk rate by calculating a new hdisplay proportional to 574 575 * the compression ratio such that: ··· 758 757 dsi_get_vid_fmt(const enum mipi_dsi_pixel_format mipi_fmt) 759 758 { 760 759 switch (mipi_fmt) { 760 + case MIPI_DSI_FMT_RGB101010: return VID_DST_FORMAT_RGB101010; 761 761 case MIPI_DSI_FMT_RGB888: return VID_DST_FORMAT_RGB888; 762 762 case MIPI_DSI_FMT_RGB666: return VID_DST_FORMAT_RGB666_LOOSE; 763 763 case MIPI_DSI_FMT_RGB666_PACKED: return VID_DST_FORMAT_RGB666; ··· 771 769 dsi_get_cmd_fmt(const enum mipi_dsi_pixel_format mipi_fmt) 772 770 { 773 771 switch (mipi_fmt) { 772 + case MIPI_DSI_FMT_RGB101010: return CMD_DST_FORMAT_RGB101010; 774 773 case MIPI_DSI_FMT_RGB888: return CMD_DST_FORMAT_RGB888; 775 774 case MIPI_DSI_FMT_RGB666_PACKED: 776 775 case MIPI_DSI_FMT_RGB666: return CMD_DST_FORMAT_RGB666; ··· 785 782 dsi_write(msm_host, REG_DSI_CTRL, 0); 786 783 } 787 784 785 + static bool msm_dsi_host_version_geq(struct msm_dsi_host *msm_host, 786 + u32 major, u32 minor) 787 + { 788 + return msm_host->cfg_hnd->major > major || 789 + (msm_host->cfg_hnd->major == major && 790 + msm_host->cfg_hnd->minor >= minor); 791 + } 792 + 788 793 bool msm_dsi_host_is_wide_bus_enabled(struct mipi_dsi_host *host) 789 794 { 790 795 struct msm_dsi_host *msm_host = to_msm_dsi_host(host); 791 796 792 797 return msm_host->dsc && 793 - (msm_host->cfg_hnd->major == MSM_DSI_VER_MAJOR_6G && 794 - msm_host->cfg_hnd->minor >= MSM_DSI_6G_VER_MINOR_V2_5_0); 798 + msm_dsi_host_version_geq(msm_host, MSM_DSI_VER_MAJOR_6G, 799 + MSM_DSI_6G_VER_MINOR_V2_5_0); 795 800 } 796 801 797 802 static void dsi_ctrl_enable(struct msm_dsi_host *msm_host, ··· 1044 1033 /* 1045 1034 * DPU sends 3 bytes per 
pclk cycle to DSI. If widebus is 1046 1035 * enabled, MDP always sends out 48-bit compressed data per 1047 - * pclk and on average, DSI consumes an amount of compressed 1048 - * data equivalent to the uncompressed pixel depth per pclk. 1036 + * pclk and on average, for video mode, DSI consumes only an 1037 + * amount of compressed data equivalent to the uncompressed 1038 + * pixel depth per pclk. 1049 1039 * 1050 1040 * Calculate the number of pclks needed to transmit one line of 1051 1041 * the compressed data. ··· 1058 1046 * unused anyway. 1059 1047 */ 1060 1048 h_total -= hdisplay; 1061 - if (wide_bus_enabled) 1062 - bits_per_pclk = mipi_dsi_pixel_format_to_bpp(msm_host->format); 1063 - else 1049 + if (wide_bus_enabled) { 1050 + if (msm_host->mode_flags & MIPI_DSI_MODE_VIDEO) 1051 + bits_per_pclk = dsc->bits_per_component * 3; 1052 + else 1053 + bits_per_pclk = 48; 1054 + } else { 1064 1055 bits_per_pclk = 24; 1056 + } 1065 1057 1066 1058 hdisplay = DIV_ROUND_UP(msm_dsc_get_bytes_per_line(msm_host->dsc) * 8, bits_per_pclk); 1067 1059 ··· 1720 1704 msm_host->mode_flags = dsi->mode_flags; 1721 1705 if (dsi->dsc) 1722 1706 msm_host->dsc = dsi->dsc; 1707 + 1708 + if (msm_host->format == MIPI_DSI_FMT_RGB101010) { 1709 + if (!msm_dsi_host_version_geq(msm_host, MSM_DSI_VER_MAJOR_6G, 1710 + MSM_DSI_6G_VER_MINOR_V2_1_0)) { 1711 + DRM_DEV_ERROR(&msm_host->pdev->dev, 1712 + "RGB101010 not supported on this DSI controller\n"); 1713 + return -EINVAL; 1714 + } 1715 + 1716 + /* 1717 + * Downstream overrides RGB101010 back to RGB888 when DSC is enabled 1718 + * but widebus is not. Using RGB101010 in this case may require some 1719 + * extra changes. 1720 + */ 1721 + if (msm_host->dsc && 1722 + !msm_dsi_host_is_wide_bus_enabled(&msm_host->base)) { 1723 + dev_warn(&msm_host->pdev->dev, 1724 + "RGB101010 with DSC but without widebus, may need extra changes\n"); 1725 + } 1726 + } 1723 1727 1724 1728 ret = dsi_dev_attach(msm_host->pdev); 1725 1729 if (ret)
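The new `msm_dsi_host_version_geq()` helper is the usual lexicographic "greater or equal" on a (major, minor) pair, replacing the open-coded exact-major check. Restated standalone (the struct and names here are illustrative):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct dsi_ver {
	uint32_t major;
	uint32_t minor;
};

/* True iff the host version is >= (major, minor), compared lexicographically:
 * a strictly newer major always wins; within the same major, compare minors. */
static bool version_geq(struct dsi_ver v, uint32_t major, uint32_t minor)
{
	return v.major > major ||
	       (v.major == major && v.minor >= minor);
}
```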
+1
drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
··· 582 582 #endif 583 583 {} 584 584 }; 585 + MODULE_DEVICE_TABLE(of, dsi_phy_dt_match); 585 586 586 587 /* 587 588 * Currently, we only support one SoC for each PHY type. When we have multiple
+8 -8
drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c
··· 41 41 #define VCO_REF_CLK_RATE 19200000 42 42 #define FRAC_BITS 18 43 43 44 - /* Hardware is pre V4.1 */ 45 - #define DSI_PHY_7NM_QUIRK_PRE_V4_1 BIT(0) 44 + /* Hardware is V4.0 */ 45 + #define DSI_PHY_7NM_QUIRK_V4_0 BIT(0) 46 46 /* Hardware is V4.1 */ 47 47 #define DSI_PHY_7NM_QUIRK_V4_1 BIT(1) 48 48 /* Hardware is V4.2 */ ··· 141 141 dec_multiple = div_u64(pll_freq * multiplier, divider); 142 142 dec = div_u64_rem(dec_multiple, multiplier, &frac); 143 143 144 - if (pll->phy->cfg->quirks & DSI_PHY_7NM_QUIRK_PRE_V4_1) { 144 + if (pll->phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V4_0) { 145 145 config->pll_clock_inverters = 0x28; 146 146 } else if ((pll->phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_2)) { 147 147 if (pll_freq < 163000000ULL) ··· 264 264 void __iomem *base = pll->phy->pll_base; 265 265 u8 analog_controls_five_1 = 0x01, vco_config_1 = 0x00; 266 266 267 - if (!(pll->phy->cfg->quirks & DSI_PHY_7NM_QUIRK_PRE_V4_1)) 267 + if (!(pll->phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V4_0)) 268 268 if (pll->vco_current_rate >= 3100000000ULL) 269 269 analog_controls_five_1 = 0x03; 270 270 ··· 313 313 writel(0x29, base + REG_DSI_7nm_PHY_PLL_PFILT); 314 314 writel(0x2f, base + REG_DSI_7nm_PHY_PLL_PFILT); 315 315 writel(0x2a, base + REG_DSI_7nm_PHY_PLL_IFILT); 316 - writel(!(pll->phy->cfg->quirks & DSI_PHY_7NM_QUIRK_PRE_V4_1) ? 0x3f : 0x22, 316 + writel(!(pll->phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V4_0) ? 
0x3f : 0x22, 317 317 base + REG_DSI_7nm_PHY_PLL_IFILT); 318 318 319 - if (!(pll->phy->cfg->quirks & DSI_PHY_7NM_QUIRK_PRE_V4_1)) { 319 + if (!(pll->phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V4_0)) { 320 320 writel(0x22, base + REG_DSI_7nm_PHY_PLL_PERF_OPTIMIZE); 321 321 if (pll->slave) 322 322 writel(0x22, pll->slave->phy->pll_base + REG_DSI_7nm_PHY_PLL_PERF_OPTIMIZE); ··· 928 928 const u8 *tx_dctrl = tx_dctrl_0; 929 929 void __iomem *lane_base = phy->lane_base; 930 930 931 - if (!(phy->cfg->quirks & DSI_PHY_7NM_QUIRK_PRE_V4_1)) 931 + if (!(phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V4_0)) 932 932 tx_dctrl = tx_dctrl_1; 933 933 934 934 /* Strength ctrl settings */ ··· 1319 1319 .max_pll_rate = 3500000000UL, 1320 1320 .io_start = { 0xae94400, 0xae96400 }, 1321 1321 .num_dsi_phy = 2, 1322 - .quirks = DSI_PHY_7NM_QUIRK_PRE_V4_1, 1322 + .quirks = DSI_PHY_7NM_QUIRK_V4_0, 1323 1323 }; 1324 1324 1325 1325 const struct msm_dsi_phy_cfg dsi_phy_7nm_7280_cfgs = {
+16 -19
drivers/gpu/drm/msm/hdmi/hdmi.c
··· 20 20 21 21 void msm_hdmi_set_mode(struct hdmi *hdmi, bool power_on) 22 22 { 23 - uint32_t ctrl = 0; 23 + u32 ctrl = 0; 24 24 unsigned long flags; 25 25 26 26 spin_lock_irqsave(&hdmi->reg_lock, flags); ··· 91 91 struct platform_device *phy_pdev; 92 92 struct device_node *phy_node; 93 93 94 - phy_node = of_parse_phandle(pdev->dev.of_node, "phys", 0); 94 + phy_node = of_parse_phandle(dev_of_node(&pdev->dev), "phys", 0); 95 95 if (!phy_node) { 96 96 DRM_DEV_ERROR(&pdev->dev, "cannot find phy device\n"); 97 97 return -ENXIO; ··· 278 278 if (!config) 279 279 return -EINVAL; 280 280 281 - hdmi = devm_kzalloc(&pdev->dev, sizeof(*hdmi), GFP_KERNEL); 281 + hdmi = devm_kzalloc(dev, sizeof(*hdmi), GFP_KERNEL); 282 282 if (!hdmi) 283 283 return -ENOMEM; 284 284 ··· 287 287 spin_lock_init(&hdmi->reg_lock); 288 288 mutex_init(&hdmi->state_mutex); 289 289 290 - ret = drm_of_find_panel_or_bridge(pdev->dev.of_node, 1, 0, NULL, &hdmi->next_bridge); 290 + ret = drm_of_find_panel_or_bridge(dev_of_node(dev), 1, 0, NULL, &hdmi->next_bridge); 291 291 if (ret && ret != -ENODEV) 292 292 return ret; 293 293 ··· 304 304 305 305 hdmi->qfprom_mmio = msm_ioremap(pdev, "qfprom_physical"); 306 306 if (IS_ERR(hdmi->qfprom_mmio)) { 307 - DRM_DEV_INFO(&pdev->dev, "can't find qfprom resource\n"); 307 + DRM_DEV_INFO(dev, "can't find qfprom resource\n"); 308 308 hdmi->qfprom_mmio = NULL; 309 309 } 310 310 ··· 312 312 if (hdmi->irq < 0) 313 313 return hdmi->irq; 314 314 315 - hdmi->pwr_regs = devm_kcalloc(&pdev->dev, 316 - config->pwr_reg_cnt, 315 + hdmi->pwr_regs = devm_kcalloc(dev, config->pwr_reg_cnt, 317 316 sizeof(hdmi->pwr_regs[0]), 318 317 GFP_KERNEL); 319 318 if (!hdmi->pwr_regs) ··· 321 322 for (i = 0; i < config->pwr_reg_cnt; i++) 322 323 hdmi->pwr_regs[i].supply = config->pwr_reg_names[i]; 323 324 324 - ret = devm_regulator_bulk_get(&pdev->dev, config->pwr_reg_cnt, hdmi->pwr_regs); 325 + ret = devm_regulator_bulk_get(dev, config->pwr_reg_cnt, hdmi->pwr_regs); 325 326 if (ret) 326 327 
return dev_err_probe(dev, ret, "failed to get pwr regulators\n"); 327 328 328 - hdmi->pwr_clks = devm_kcalloc(&pdev->dev, 329 - config->pwr_clk_cnt, 329 + hdmi->pwr_clks = devm_kcalloc(dev, config->pwr_clk_cnt, 330 330 sizeof(hdmi->pwr_clks[0]), 331 331 GFP_KERNEL); 332 332 if (!hdmi->pwr_clks) ··· 334 336 for (i = 0; i < config->pwr_clk_cnt; i++) 335 337 hdmi->pwr_clks[i].id = config->pwr_clk_names[i]; 336 338 337 - ret = devm_clk_bulk_get(&pdev->dev, config->pwr_clk_cnt, hdmi->pwr_clks); 339 + ret = devm_clk_bulk_get(dev, config->pwr_clk_cnt, hdmi->pwr_clks); 338 340 if (ret) 339 341 return ret; 340 342 341 - hdmi->extp_clk = devm_clk_get_optional(&pdev->dev, "extp"); 343 + hdmi->extp_clk = devm_clk_get_optional(dev, "extp"); 342 344 if (IS_ERR(hdmi->extp_clk)) 343 345 return dev_err_probe(dev, PTR_ERR(hdmi->extp_clk), 344 346 "failed to get extp clock\n"); 345 347 346 - hdmi->hpd_gpiod = devm_gpiod_get_optional(&pdev->dev, "hpd", GPIOD_IN); 348 + hdmi->hpd_gpiod = devm_gpiod_get_optional(dev, "hpd", GPIOD_IN); 347 349 /* This will catch e.g. 
-EPROBE_DEFER */ 348 350 if (IS_ERR(hdmi->hpd_gpiod)) 349 351 return dev_err_probe(dev, PTR_ERR(hdmi->hpd_gpiod), ··· 356 358 gpiod_set_consumer_name(hdmi->hpd_gpiod, "HDMI_HPD"); 357 359 358 360 ret = msm_hdmi_get_phy(hdmi); 359 - if (ret) { 360 - DRM_DEV_ERROR(&pdev->dev, "failed to get phy\n"); 361 + if (ret) 361 362 return ret; 362 - } 363 363 364 - ret = devm_pm_runtime_enable(&pdev->dev); 364 + ret = devm_pm_runtime_enable(dev); 365 365 if (ret) 366 366 goto err_put_phy; 367 367 368 368 platform_set_drvdata(pdev, hdmi); 369 369 370 - ret = component_add(&pdev->dev, &msm_hdmi_ops); 370 + ret = component_add(dev, &msm_hdmi_ops); 371 371 if (ret) 372 372 goto err_put_phy; 373 373 ··· 425 429 return ret; 426 430 } 427 431 428 - DEFINE_RUNTIME_DEV_PM_OPS(msm_hdmi_pm_ops, msm_hdmi_runtime_suspend, msm_hdmi_runtime_resume, NULL); 432 + static DEFINE_RUNTIME_DEV_PM_OPS(msm_hdmi_pm_ops, msm_hdmi_runtime_suspend, msm_hdmi_runtime_resume, NULL); 429 433 430 434 static const struct of_device_id msm_hdmi_dt_match[] = { 431 435 { .compatible = "qcom,hdmi-tx-8998", .data = &hdmi_tx_8974_config }, ··· 437 441 { .compatible = "qcom,hdmi-tx-8660", .data = &hdmi_tx_8960_config }, 438 442 {} 439 443 }; 444 + MODULE_DEVICE_TABLE(of, msm_hdmi_dt_match); 440 445 441 446 static struct platform_driver msm_hdmi_driver = { 442 447 .probe = msm_hdmi_dev_probe,
+3 -3
drivers/gpu/drm/msm/hdmi/hdmi.h
··· 43 43 bool power_on; 44 44 bool hpd_enabled; 45 45 struct mutex state_mutex; /* protects two booleans */ 46 - unsigned long int pixclock; 46 + unsigned long pixclock; 47 47 48 48 void __iomem *mmio; 49 49 void __iomem *qfprom_mmio; ··· 132 132 133 133 struct hdmi_phy_cfg { 134 134 enum hdmi_phy_type type; 135 - void (*powerup)(struct hdmi_phy *phy, unsigned long int pixclock); 135 + void (*powerup)(struct hdmi_phy *phy, unsigned long pixclock); 136 136 void (*powerdown)(struct hdmi_phy *phy); 137 137 const char * const *reg_names; 138 138 int num_regs; ··· 167 167 168 168 int msm_hdmi_phy_resource_enable(struct hdmi_phy *phy); 169 169 void msm_hdmi_phy_resource_disable(struct hdmi_phy *phy); 170 - void msm_hdmi_phy_powerup(struct hdmi_phy *phy, unsigned long int pixclock); 170 + void msm_hdmi_phy_powerup(struct hdmi_phy *phy, unsigned long pixclock); 171 171 void msm_hdmi_phy_powerdown(struct hdmi_phy *phy); 172 172 void __init msm_hdmi_phy_driver_register(void); 173 173 void __exit msm_hdmi_phy_driver_unregister(void);
+2 -3
drivers/gpu/drm/msm/hdmi/hdmi_audio.c
··· 17 17 { 18 18 struct hdmi_audio *audio = &hdmi->audio; 19 19 bool enabled = audio->enabled; 20 - uint32_t acr_pkt_ctrl, vbi_pkt_ctrl, aud_pkt_ctrl; 21 - uint32_t audio_config; 20 + u32 acr_pkt_ctrl, vbi_pkt_ctrl, aud_pkt_ctrl, audio_config; 22 21 23 22 if (!hdmi->connector->display_info.is_hdmi) 24 23 return -EINVAL; ··· 42 43 acr_pkt_ctrl &= ~HDMI_ACR_PKT_CTRL_SELECT__MASK; 43 44 44 45 if (enabled) { 45 - uint32_t n, cts, multiplier; 46 + u32 n, cts, multiplier; 46 47 enum hdmi_acr_cts select; 47 48 48 49 drm_hdmi_acr_get_n_cts(hdmi->pixclock, audio->rate, &n, &cts);
+4 -4
drivers/gpu/drm/msm/hdmi/hdmi_bridge.c
··· 153 153 for (i = 0; i < ARRAY_SIZE(buf); i++) 154 154 hdmi_write(hdmi, REG_HDMI_AVI_INFO(i), buf[i]); 155 155 156 - val = hdmi_read(hdmi, REG_HDMI_INFOFRAME_CTRL1); 156 + val = hdmi_read(hdmi, REG_HDMI_INFOFRAME_CTRL0); 157 157 val |= HDMI_INFOFRAME_CTRL0_AVI_SEND | 158 158 HDMI_INFOFRAME_CTRL0_AVI_CONT; 159 159 hdmi_write(hdmi, REG_HDMI_INFOFRAME_CTRL0, val); ··· 193 193 buffer[9] << 16 | 194 194 buffer[10] << 24); 195 195 196 - val = hdmi_read(hdmi, REG_HDMI_INFOFRAME_CTRL1); 196 + val = hdmi_read(hdmi, REG_HDMI_INFOFRAME_CTRL0); 197 197 val |= HDMI_INFOFRAME_CTRL0_AUDIO_INFO_SEND | 198 198 HDMI_INFOFRAME_CTRL0_AUDIO_INFO_CONT | 199 199 HDMI_INFOFRAME_CTRL0_AUDIO_INFO_SOURCE | ··· 356 356 const struct drm_display_mode *mode) 357 357 { 358 358 int hstart, hend, vstart, vend; 359 - uint32_t frame_ctrl; 359 + u32 frame_ctrl; 360 360 361 361 hstart = mode->htotal - mode->hsync_start; 362 362 hend = mode->htotal - mode->hsync_start + mode->hdisplay; ··· 409 409 struct hdmi_bridge *hdmi_bridge = to_hdmi_bridge(bridge); 410 410 struct hdmi *hdmi = hdmi_bridge->hdmi; 411 411 const struct drm_edid *drm_edid; 412 - uint32_t hdmi_ctrl; 412 + u32 hdmi_ctrl; 413 413 414 414 hdmi_ctrl = hdmi_read(hdmi, REG_HDMI_CTRL); 415 415 hdmi_write(hdmi, REG_HDMI_CTRL, hdmi_ctrl | HDMI_CTRL_ENABLE);
+2 -2
drivers/gpu/drm/msm/hdmi/hdmi_hpd.c
··· 65 65 struct hdmi_bridge *hdmi_bridge = to_hdmi_bridge(bridge); 66 66 struct hdmi *hdmi = hdmi_bridge->hdmi; 67 67 struct device *dev = &hdmi->pdev->dev; 68 - uint32_t hpd_ctrl; 68 + u32 hpd_ctrl; 69 69 int ret; 70 70 unsigned long flags; 71 71 ··· 125 125 { 126 126 struct hdmi_bridge *hdmi_bridge = to_hdmi_bridge(bridge); 127 127 struct hdmi *hdmi = hdmi_bridge->hdmi; 128 - uint32_t hpd_int_status, hpd_int_ctrl; 128 + u32 hpd_int_status, hpd_int_ctrl; 129 129 130 130 /* Process HPD: */ 131 131 hpd_int_status = hdmi_read(hdmi, REG_HDMI_HPD_INT_STATUS);
+6 -6
drivers/gpu/drm/msm/hdmi/hdmi_i2c.c
··· 40 40 { 41 41 struct hdmi *hdmi = hdmi_i2c->hdmi; 42 42 struct drm_device *dev = hdmi->dev; 43 - uint32_t retry = 0xffff; 44 - uint32_t ddc_int_ctrl; 43 + u32 retry = 0xffff; 44 + u32 ddc_int_ctrl; 45 45 46 46 do { 47 47 --retry; ··· 71 71 struct hdmi *hdmi = hdmi_i2c->hdmi; 72 72 73 73 if (!hdmi_i2c->sw_done) { 74 - uint32_t ddc_int_ctrl; 74 + u32 ddc_int_ctrl; 75 75 76 76 ddc_int_ctrl = hdmi_read(hdmi, REG_HDMI_DDC_INT_CTRL); 77 77 ··· 92 92 struct hdmi_i2c_adapter *hdmi_i2c = to_hdmi_i2c_adapter(i2c); 93 93 struct hdmi *hdmi = hdmi_i2c->hdmi; 94 94 struct drm_device *dev = hdmi->dev; 95 - static const uint32_t nack[] = { 95 + static const u32 nack[] = { 96 96 HDMI_DDC_SW_STATUS_NACK0, HDMI_DDC_SW_STATUS_NACK1, 97 97 HDMI_DDC_SW_STATUS_NACK2, HDMI_DDC_SW_STATUS_NACK3, 98 98 }; 99 99 int indices[MAX_TRANSACTIONS]; 100 100 int ret, i, j, index = 0; 101 - uint32_t ddc_status, ddc_data, i2c_trans; 101 + u32 ddc_status, ddc_data, i2c_trans; 102 102 103 103 num = min(num, MAX_TRANSACTIONS); 104 104 ··· 119 119 120 120 for (i = 0; i < num; i++) { 121 121 struct i2c_msg *p = &msgs[i]; 122 - uint32_t raw_addr = p->addr << 1; 122 + u32 raw_addr = p->addr << 1; 123 123 124 124 if (p->flags & I2C_M_RD) 125 125 raw_addr |= 1;
+2 -1
drivers/gpu/drm/msm/hdmi/hdmi_phy.c
··· 94 94 pm_runtime_put_sync(dev); 95 95 } 96 96 97 - void msm_hdmi_phy_powerup(struct hdmi_phy *phy, unsigned long int pixclock) 97 + void msm_hdmi_phy_powerup(struct hdmi_phy *phy, unsigned long pixclock) 98 98 { 99 99 if (!phy || !phy->cfg->powerup) 100 100 return; ··· 204 204 .data = &msm_hdmi_phy_8998_cfg }, 205 205 {} 206 206 }; 207 + MODULE_DEVICE_TABLE(of, msm_hdmi_phy_dt_match); 207 208 208 209 static struct platform_driver msm_hdmi_phy_platform_driver = { 209 210 .probe = msm_hdmi_phy_probe,
+1 -1
drivers/gpu/drm/msm/hdmi/hdmi_phy_8960.c
··· 7 7 #include "hdmi.h" 8 8 9 9 static void hdmi_phy_8960_powerup(struct hdmi_phy *phy, 10 - unsigned long int pixclock) 10 + unsigned long pixclock) 11 11 { 12 12 DBG("pixclock: %lu", pixclock); 13 13
+1 -1
drivers/gpu/drm/msm/hdmi/hdmi_phy_8x60.c
··· 9 9 #include "hdmi.h" 10 10 11 11 static void hdmi_phy_8x60_powerup(struct hdmi_phy *phy, 12 - unsigned long int pixclock) 12 + unsigned long pixclock) 13 13 { 14 14 /* De-serializer delay D/C for non-lbk mode: */ 15 15 hdmi_phy_write(phy, REG_HDMI_8x60_PHY_REG0,
+1 -1
drivers/gpu/drm/msm/hdmi/hdmi_phy_8x74.c
··· 7 7 #include "hdmi.h" 8 8 9 9 static void hdmi_phy_8x74_powerup(struct hdmi_phy *phy, 10 - unsigned long int pixclock) 10 + unsigned long pixclock) 11 11 { 12 12 hdmi_phy_write(phy, REG_HDMI_8x74_ANA_CFG0, 0x1b); 13 13 hdmi_phy_write(phy, REG_HDMI_8x74_ANA_CFG1, 0xf2);
+6 -1
drivers/gpu/drm/msm/msm_drv.c
··· 536 536 len = msm_obj->metadata_size; 537 537 buf = kmemdup(msm_obj->metadata, len, GFP_KERNEL); 538 538 539 + if (!buf) { 540 + msm_gem_unlock(obj); 541 + return -ENOMEM; 542 + } 543 + 539 544 msm_gem_unlock(obj); 540 545 541 546 if (*metadata_size < len) { ··· 553 548 554 549 kfree(buf); 555 550 556 - return 0; 551 + return ret; 557 552 } 558 553 559 554 static int msm_ioctl_gem_info(struct drm_device *dev, void *data,
+6 -1
drivers/gpu/drm/msm/msm_fb.c
··· 219 219 + mode_cmd->offsets[i]; 220 220 221 221 if (bos[i]->size < min_size) { 222 - ret = -EINVAL; 222 + ret = UERR(EINVAL, dev, "plane %d too small", i); 223 + goto fail; 224 + } 225 + 226 + if (to_msm_bo(bos[i])->flags & MSM_BO_NO_SHARE) { 227 + ret = UERR(EINVAL, dev, "Cannot map _NO_SHARE to kms vm"); 223 228 goto fail; 224 229 } 225 230
+3
drivers/gpu/drm/msm/msm_gem.c
··· 507 507 */ 508 508 void msm_gem_unpin_active(struct drm_gem_object *obj) 509 509 { 510 + struct msm_drm_private *priv = obj->dev->dev_private; 510 511 struct msm_gem_object *msm_obj = to_msm_bo(obj); 512 + 513 + GEM_WARN_ON(!mutex_is_locked(&priv->lru.lock)); 511 514 512 515 msm_obj->pin_count--; 513 516 GEM_WARN_ON(msm_obj->pin_count < 0);
+1
drivers/gpu/drm/msm/msm_gem.h
··· 452 452 bool bos_pinned : 1; 453 453 bool fault_dumped:1;/* Limit devcoredump dumping to one per submit */ 454 454 bool in_rb : 1; /* "sudo" mode, copy cmds into RB */ 455 + bool has_exec : 1; /* @exec is initialized. */ 455 456 struct msm_ringbuffer *ring; 456 457 unsigned int nr_cmds; 457 458 unsigned int nr_bos;
+2 -3
drivers/gpu/drm/msm/msm_gem_shrinker.c
··· 26 26 27 27 static bool can_block(struct shrink_control *sc) 28 28 { 29 - if (!(sc->gfp_mask & __GFP_DIRECT_RECLAIM)) 30 - return false; 31 - return current_is_kswapd() || (sc->gfp_mask & __GFP_RECLAIM); 29 + return (sc->gfp_mask & __GFP_DIRECT_RECLAIM) || 30 + (current_is_kswapd() && (sc->gfp_mask & __GFP_KSWAPD_RECLAIM)); 32 31 } 33 32 34 33 static unsigned long
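The reworked predicate reads: blocking is allowed when the allocation itself permits direct reclaim, or when we are running as kswapd and kswapd reclaim is permitted. A truth-table sketch (the flag values and the `is_kswapd` parameter are stand-ins for the kernel's gfp bits and `current_is_kswapd()`; the real bit layout lives elsewhere):

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in flag values, NOT the kernel's actual __GFP_* bit layout. */
#define GFP_DIRECT_RECLAIM 0x1u
#define GFP_KSWAPD_RECLAIM 0x2u

/* Sketch of the reworked can_block(): direct reclaim allowed, or
 * kswapd context with kswapd reclaim allowed. */
static bool can_block(unsigned int gfp_mask, bool is_kswapd)
{
	return (gfp_mask & GFP_DIRECT_RECLAIM) ||
	       (is_kswapd && (gfp_mask & GFP_KSWAPD_RECLAIM));
}
```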
+3 -1
drivers/gpu/drm/msm/msm_gem_submit.c
··· 278 278 int ret = 0; 279 279 280 280 drm_exec_init(&submit->exec, flags, submit->nr_bos); 281 + submit->has_exec = true; 281 282 282 283 drm_exec_until_all_locked (&submit->exec) { 283 284 ret = drm_gpuvm_prepare_vm(submit->vm, exec, 1); ··· 305 304 return submit_lock_objects_vmbind(submit); 306 305 307 306 drm_exec_init(&submit->exec, flags, submit->nr_bos); 307 + submit->has_exec = true; 308 308 309 309 drm_exec_until_all_locked (&submit->exec) { 310 310 ret = drm_exec_lock_obj(&submit->exec, ··· 525 523 if (error) 526 524 submit_unpin_objects(submit); 527 525 528 - if (submit->exec.objects) 526 + if (submit->has_exec) 529 527 drm_exec_fini(&submit->exec); 530 528 531 529 /* if job wasn't enqueued to scheduler, early retirement: */
+13 -4
drivers/gpu/drm/msm/msm_gem_vma.c
··· 373 373 struct msm_gem_vma *vma; 374 374 int ret; 375 375 376 + /* _NO_SHARE objs cannot be mapped outside of their "host" vm: */ 377 + if (obj && (to_msm_bo(obj)->flags & MSM_BO_NO_SHARE) && 378 + GEM_WARN_ON(obj->resv != drm_gpuvm_resv(gpuvm))) { 379 + return ERR_PTR(-EINVAL); 380 + } 381 + 376 382 drm_gpuvm_resv_assert_held(&vm->base); 377 383 378 384 vma = kzalloc_obj(*vma); ··· 702 696 msm_vma_job_run(struct drm_sched_job *_job) 703 697 { 704 698 struct msm_vm_bind_job *job = to_msm_vm_bind_job(_job); 699 + struct msm_drm_private *priv = job->vm->drm->dev_private; 705 700 struct msm_gem_vm *vm = to_msm_vm(job->vm); 706 701 struct drm_gem_object *obj; 707 702 int ret = vm->unusable ? -EINVAL : 0; ··· 745 738 if (ret) 746 739 msm_gem_vm_unusable(job->vm); 747 740 741 + mutex_lock(&priv->lru.lock); 742 + 748 743 job_foreach_bo (obj, job) { 749 - msm_gem_lock(obj); 750 - msm_gem_unpin_locked(obj); 751 - msm_gem_unlock(obj); 744 + msm_gem_unpin_active(obj); 752 745 } 746 + 747 + mutex_unlock(&priv->lru.lock); 753 748 754 749 /* VM_BIND ops are synchronous, so no fence to wait on: */ 755 750 return NULL; ··· 1251 1242 case MSM_VM_BIND_OP_UNMAP: 1252 1243 ret = drm_gpuvm_sm_unmap_exec_lock(job->vm, exec, 1253 1244 op->iova, 1254 - op->obj_offset); 1245 + op->range); 1255 1246 break; 1256 1247 case MSM_VM_BIND_OP_MAP: 1257 1248 case MSM_VM_BIND_OP_MAP_NULL: {
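The new guard in the vma-create path enforces one rule: a `_NO_SHARE` BO borrows its "host" VM's reservation object, so it may only be mapped where `obj->resv == drm_gpuvm_resv(gpuvm)`. As a standalone predicate (hypothetical names; the `void *` pointers stand in for `struct dma_resv *`):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Sketch of the _NO_SHARE mapping rule: a no-share BO shares its VM's
 * reservation lock, so mapping it into a foreign VM (one with a
 * different resv) must be rejected.
 */
static bool mapping_allowed(bool bo_no_share, const void *bo_resv,
			    const void *vm_resv)
{
	return !bo_no_share || bo_resv == vm_resv;
}
```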
+31 -22
drivers/gpu/drm/msm/msm_gpu.c
··· 17 17 #include <linux/string_helpers.h> 18 18 #include <linux/devcoredump.h> 19 19 #include <linux/sched/task.h> 20 + #include <linux/sched/mm.h> 20 21 21 22 /* 22 23 * Power Management: ··· 469 468 struct msm_gem_submit *submit; 470 469 struct msm_ringbuffer *cur_ring = gpu->funcs->active_ring(gpu); 471 470 char *comm = NULL, *cmd = NULL; 471 + unsigned int noreclaim_flag; 472 472 struct task_struct *task; 473 473 int i; 474 474 ··· 507 505 msm_gem_vm_unusable(submit->vm); 508 506 } 509 507 508 + noreclaim_flag = memalloc_noreclaim_save(); 509 + 510 510 get_comm_cmdline(submit, &comm, &cmd); 511 511 512 512 if (comm && cmd) { ··· 526 522 /* Record the crash state */ 527 523 pm_runtime_get_sync(&gpu->pdev->dev); 528 524 msm_gpu_crashstate_capture(gpu, submit, NULL, comm, cmd); 525 + 526 + memalloc_noreclaim_restore(noreclaim_flag); 529 527 530 528 kfree(cmd); 531 529 kfree(comm); ··· 552 546 msm_update_fence(ring->fctx, fence); 553 547 } 554 548 555 - if (msm_gpu_active(gpu)) { 556 - /* retire completed submits, plus the one that hung: */ 557 - retire_submits(gpu); 549 + /* retire completed submits, plus the one that hung: */ 550 + retire_submits(gpu); 558 551 559 - gpu->funcs->recover(gpu); 552 + gpu->funcs->recover(gpu); 560 553 561 - /* 562 - * Replay all remaining submits starting with highest priority 563 - * ring 564 - */ 565 - for (i = 0; i < gpu->nr_rings; i++) { 566 - struct msm_ringbuffer *ring = gpu->rb[i]; 567 - unsigned long flags; 554 + /* 555 + * Replay all remaining submits starting with highest priority 556 + * ring 557 + */ 558 + for (i = 0; i < gpu->nr_rings; i++) { 559 + struct msm_ringbuffer *ring = gpu->rb[i]; 560 + unsigned long flags; 568 561 569 - spin_lock_irqsave(&ring->submit_lock, flags); 570 - list_for_each_entry(submit, &ring->submits, node) { 571 - /* 572 - * If the submit uses an unusable vm make sure 573 - * we don't actually run it 574 - */ 575 - if (to_msm_vm(submit->vm)->unusable) 576 - submit->nr_cmds = 0; 577 - 
gpu->funcs->submit(gpu, submit); 578 - } 579 - spin_unlock_irqrestore(&ring->submit_lock, flags); 562 + spin_lock_irqsave(&ring->submit_lock, flags); 563 + list_for_each_entry(submit, &ring->submits, node) { 564 + /* 565 + * If the submit uses an unusable vm make sure 566 + * we don't actually run it 567 + */ 568 + if (to_msm_vm(submit->vm)->unusable) 569 + submit->nr_cmds = 0; 570 + gpu->funcs->submit(gpu, submit); 580 571 } 572 + spin_unlock_irqrestore(&ring->submit_lock, flags); 581 573 } 582 574 583 575 pm_runtime_put(&gpu->pdev->dev); ··· 591 587 struct msm_gem_submit *submit; 592 588 struct msm_ringbuffer *cur_ring = gpu->funcs->active_ring(gpu); 593 589 char *comm = NULL, *cmd = NULL; 590 + unsigned int noreclaim_flag; 594 591 595 592 mutex_lock(&gpu->lock); 596 593 597 594 submit = find_submit(cur_ring, cur_ring->memptrs->fence + 1); 598 595 if (submit && submit->fault_dumped) 599 596 goto resume_smmu; 597 + 598 + noreclaim_flag = memalloc_noreclaim_save(); 600 599 601 600 if (submit) { 602 601 get_comm_cmdline(submit, &comm, &cmd); ··· 615 608 pm_runtime_get_sync(&gpu->pdev->dev); 616 609 msm_gpu_crashstate_capture(gpu, submit, fault_info, comm, cmd); 617 610 pm_runtime_put_sync(&gpu->pdev->dev); 611 + 612 + memalloc_noreclaim_restore(noreclaim_flag); 618 613 619 614 kfree(cmd); 620 615 kfree(comm);
+9
drivers/gpu/drm/msm/msm_mdss.c
··· 262 262 icc_set_bw(msm_mdss->reg_bus_path, 0, 263 263 msm_mdss->reg_bus_bw); 264 264 265 + /* 266 + * TODO: 267 + * Previous users (e.g. the bootloader) may have left this clock at a high rate, which 268 + * would remain set, as prepare_enable() doesn't reprogram it. This theoretically poses a 269 + * risk of brownout, but realistically this path is almost exclusively exercised after the 270 + * correct OPP has been set in one of the MDPn or DPU drivers, or during initial probe, 271 + * before the RPM(H)PD sync_state is done. 272 + */ 265 273 ret = clk_bulk_prepare_enable(msm_mdss->num_clocks, msm_mdss->clocks); 266 274 if (ret) { 267 275 dev_err(msm_mdss->dev, "clock enable failed, ret:%d\n", ret); ··· 568 560 569 561 static const struct of_device_id mdss_dt_match[] = { 570 562 { .compatible = "qcom,mdss", .data = &data_153k6 }, 563 + { .compatible = "qcom,eliza-mdss", .data = &data_57k }, 571 564 { .compatible = "qcom,glymur-mdss", .data = &data_57k }, 572 565 { .compatible = "qcom,kaanapali-mdss", .data = &data_57k }, 573 566 { .compatible = "qcom,msm8998-mdss", .data = &data_76k8 },
+4
drivers/gpu/drm/msm/registers/adreno/a6xx.xml
··· 5016 5016 <bitfield pos="1" name="LPAC" type="boolean"/> 5017 5017 <bitfield pos="2" name="RAYTRACING" type="boolean"/> 5018 5018 </reg32> 5019 + <reg32 offset="0x0405" name="CX_MISC_SW_FUSE_FREQ_LIMIT_STATUS" variants="A8XX-"> 5020 + <bitfield high="8" low="0" name="FINALFREQLIMIT"/> 5021 + <bitfield pos="24" name="SOFTSKUDISABLED" type="boolean"/> 5022 + </reg32> 5019 5023 </domain> 5020 5024 5021 5025 </database>
+4 -2
drivers/gpu/drm/msm/registers/adreno/a6xx_gmu.xml
··· 141 141 <reg32 offset="0x1f9f0" name="GMU_BOOT_KMD_LM_HANDSHAKE"/> 142 142 <reg32 offset="0x1f957" name="GMU_LLM_GLM_SLEEP_CTRL"/> 143 143 <reg32 offset="0x1f958" name="GMU_LLM_GLM_SLEEP_STATUS"/> 144 - <reg32 offset="0x1f888" name="GMU_ALWAYS_ON_COUNTER_L"/> 145 - <reg32 offset="0x1f889" name="GMU_ALWAYS_ON_COUNTER_H"/> 144 + <reg32 offset="0x1f888" name="GMU_ALWAYS_ON_COUNTER_L" variants="A6XX-A7XX"/> 145 + <reg32 offset="0x1f840" name="GMU_ALWAYS_ON_COUNTER_L" variants="A8XX-"/> 146 + <reg32 offset="0x1f889" name="GMU_ALWAYS_ON_COUNTER_H" variants="A6XX-A7XX"/> 147 + <reg32 offset="0x1f841" name="GMU_ALWAYS_ON_COUNTER_H" variants="A8XX-"/> 146 148 <reg32 offset="0x1f8c3" name="GMU_GMU_PWR_COL_KEEPALIVE" variants="A6XX-A7XX"/> 147 149 <reg32 offset="0x1f7e4" name="GMU_GMU_PWR_COL_KEEPALIVE" variants="A8XX-"/> 148 150 <reg32 offset="0x1f8c4" name="GMU_PWR_COL_PREEMPT_KEEPALIVE" variants="A6XX-A7XX"/>
+4 -1
drivers/gpu/drm/msm/registers/display/dsi.xml
··· 15 15 <value name="VID_DST_FORMAT_RGB666" value="1"/> 16 16 <value name="VID_DST_FORMAT_RGB666_LOOSE" value="2"/> 17 17 <value name="VID_DST_FORMAT_RGB888" value="3"/> 18 + <value name="VID_DST_FORMAT_RGB101010" value="4"/> 18 19 </enum> 19 20 <enum name="dsi_rgb_swap"> 20 21 <value name="SWAP_RGB" value="0"/> ··· 40 39 <value name="CMD_DST_FORMAT_RGB565" value="6"/> 41 40 <value name="CMD_DST_FORMAT_RGB666" value="7"/> 42 41 <value name="CMD_DST_FORMAT_RGB888" value="8"/> 42 + <value name="CMD_DST_FORMAT_RGB101010" value="9"/> 43 43 </enum> 44 44 <enum name="dsi_lane_swap"> 45 45 <value name="LANE_SWAP_0123" value="0"/> ··· 144 142 </reg32> 145 143 <reg32 offset="0x0000c" name="VID_CFG0"> 146 144 <bitfield name="VIRT_CHANNEL" low="0" high="1" type="uint"/> <!-- always zero? --> 147 - <bitfield name="DST_FORMAT" low="4" high="5" type="dsi_vid_dst_format"/> 145 + <!-- high was 5 before DSI 6G 2.1.0 --> 146 + <bitfield name="DST_FORMAT" low="4" high="6" type="dsi_vid_dst_format"/> 148 147 <bitfield name="TRAFFIC_MODE" low="8" high="9" type="dsi_traffic_mode"/> 149 148 <bitfield name="BLLP_POWER_STOP" pos="12" type="boolean"/> 150 149 <bitfield name="EOF_BLLP_POWER_STOP" pos="15" type="boolean"/>
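Widening DST_FORMAT from bits [5:4] to [6:4] is what makes `VID_DST_FORMAT_RGB101010` (value 4) representable at all: 4 needs three bits, and a two-bit field silently truncates it. A small demonstration with a generic field helper (illustrative, not the kernel's FIELD_PREP):

```c
#include <assert.h>
#include <stdint.h>

/* Place val into the register bitfield [hi:lo], masking off any bits
 * that overflow the field width. */
static uint32_t field_prep(uint32_t val, unsigned int lo, unsigned int hi)
{
	uint32_t mask = ((1u << (hi - lo + 1)) - 1) << lo;

	return (val << lo) & mask;
}
```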
+4
include/drm/drm_mipi_dsi.h
··· 144 144 MIPI_DSI_FMT_RGB666, 145 145 MIPI_DSI_FMT_RGB666_PACKED, 146 146 MIPI_DSI_FMT_RGB565, 147 + MIPI_DSI_FMT_RGB101010, 147 148 }; 148 149 149 150 #define DSI_DEV_NAME_SIZE 20 ··· 236 235 static inline int mipi_dsi_pixel_format_to_bpp(enum mipi_dsi_pixel_format fmt) 237 236 { 238 237 switch (fmt) { 238 + case MIPI_DSI_FMT_RGB101010: 239 + return 30; 240 + 239 241 case MIPI_DSI_FMT_RGB888: 240 242 case MIPI_DSI_FMT_RGB666: 241 243 return 24;
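With RGB101010 at 30 bpp, pixel depths stop being byte multiples, so per-line sizes have to round up over bits rather than multiply by bytes-per-pixel. A sketch of the bpp mapping from this hunk plus a bytes-per-line helper (the helper name and enum are illustrative):

```c
#include <assert.h>
#include <stdint.h>

enum dsi_fmt {
	FMT_RGB888,
	FMT_RGB666,		/* loose-packed, 24 bits on the wire */
	FMT_RGB666_PACKED,
	FMT_RGB565,
	FMT_RGB101010,
};

/* Same mapping as mipi_dsi_pixel_format_to_bpp() above. */
static int fmt_to_bpp(enum dsi_fmt fmt)
{
	switch (fmt) {
	case FMT_RGB101010:
		return 30;
	case FMT_RGB888:
	case FMT_RGB666:
		return 24;
	case FMT_RGB666_PACKED:
		return 18;
	case FMT_RGB565:
		return 16;
	}
	return -1;
}

/* Bytes needed for one line of hdisplay pixels, rounding up partial bytes. */
static uint32_t bytes_per_line(uint32_t hdisplay, enum dsi_fmt fmt)
{
	return (hdisplay * (uint32_t)fmt_to_bpp(fmt) + 7) / 8;
}
```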
+1
include/uapi/drm/msm_drm.h
··· 117 117 * ioctl will throw -EPIPE. 118 118 */ 119 119 #define MSM_PARAM_EN_VM_BIND 0x16 /* WO, once */ 120 + #define MSM_PARAM_AQE 0x17 /* RO */ 120 121 121 122 /* For backwards compat. The original support for preemption was based on 122 123 * a single ring per priority level so # of priority levels equals the #