Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
kernel os linux

Merge tag 'drm-fixes-2026-03-14' of https://gitlab.freedesktop.org/drm/kernel

Pull drm fixes from Dave Airlie:
"The weekly drm fixes. This is mostly msm fixes across the functions,
with amdgpu and i915. It also has a core rust fix and changes in
nova-core to take advantage of it, and otherwise just has some minor
driver fixes, and marks loongsoon as orphaned.

rust:
- Fix safety issue in dma_read! and dma_write!

nova-core:
- Fix UB in DmaGspMem pointer accessors
- Fix stack overflow in GSP memory allocation

loongson:
- mark drm driver as orphaned

msm:
- Core:
- Adjusted msm_iommu_pagetable_prealloc_allocate() allocation type
- DPU:
- Fixed blue screens on Hamoa laptops by reverting the LM
reservation
- Fixed the size of the LM block on several platforms
- Dropped usage of %pK (again)
- Fixed smatch warning on SSPP v13+ code
- Fixed INTF_6 interrupts on Lemans
- DSI:
- Fixed DSI PHY revision on Kaanapali
- Fixed pixel clock calculation for the bonded DSI mode panels
with compression enabled
- DT bindings:
- Fixed DisplayPort description on Glymur
- Fixed model name in SM8750 MDSS schema
- GPU:
- Added MODULE_DEVICE_TABLE to the GPU driver
- Fixed bogus protect error on X2-85
- Fixed dma_free_attrs() buffer size
- Fixed Gen8 UBWC programming for Glymur

i915:
- Avoid hang when configuring VRR [icl]
- Fix sg_table overflow with >4GB folios
- Fix PSR Selective Update handling
- Fix eDP ALPM read-out sequence

amdgpu:
- SMU13 fix
- SMU14 fix
- Fixes for bringup hw testing
- Kerneldoc fix
- GC12 idle power fix for compute workloads
- DCCG fixes

amdkfd:
- Fix missing BO unreserve in an error path

ivpu:
- drop unnecessary bootparams register setting

amdxdna:
- fix runtime suspend/resume deadlock

bridge:
- ti-sn65dsi83: fix DSI rounding and dual LVDS

gud:
- fix NULL crtc dereference on display disable"

* tag 'drm-fixes-2026-03-14' of https://gitlab.freedesktop.org/drm/kernel: (44 commits)
drm/amd: Set num IP blocks to 0 if discovery fails
drm/amdkfd: Unreserve bo if queue update failed
drm/amd/display: Check for S0i3 to be done before DCCG init on DCN21
drm/amd/display: Add missing DCCG register entries for DCN20-DCN316
gpu: nova-core: gsp: fix UB in DmaGspMem pointer accessors
drm/loongson: Mark driver as orphaned
accel/amdxdna: Fix runtime suspend deadlock when there is pending job
gpu: nova-core: fix stack overflow in GSP memory allocation
accel/ivpu: Remove boot params address setting via MMIO register
drm/i915/dp: Read ALPM caps after DPCD init
drm/i915/psr: Write DSC parameters on Selective Update in ET mode
drm/i915/dsc: Add helper for writing DSC Selective Update ET parameters
drm/i915/dsc: Add Selective Update register definitions
drm/i915/psr: Repeat Selective Update area alignment
drm/i915: Fix potential overflow of shmem scatterlist length
drm/i915/vrr: Configure VRR timings after enabling TRANS_DDI_FUNC_CTL
drm/bridge: ti-sn65dsi83: halve horizontal syncs for dual LVDS output
drm/bridge: ti-sn65dsi83: fix CHA_DSI_CLK_RANGE rounding
drm/gud: fix NULL crtc dereference on display disable
drm/sitronix/st7586: fix bad pixel data due to byte swap
...

+935 -399
+20 -1
Documentation/devicetree/bindings/display/msm/dp-controller.yaml
··· 253 253 enum: 254 254 # these platforms support 2 streams MST on some interfaces, 255 255 # others are SST only 256 - - qcom,glymur-dp 257 256 - qcom,sc8280xp-dp 258 257 - qcom,x1e80100-dp 259 258 then: ··· 308 309 clocks-names: 309 310 minItems: 6 310 311 maxItems: 8 312 + 313 + - if: 314 + properties: 315 + compatible: 316 + contains: 317 + enum: 318 + # these platforms support 2 streams MST on some interfaces, 319 + # others are SST only, but all controllers have 4 ports 320 + - qcom,glymur-dp 321 + then: 322 + properties: 323 + reg: 324 + minItems: 9 325 + maxItems: 9 326 + clocks: 327 + minItems: 5 328 + maxItems: 6 329 + clocks-names: 330 + minItems: 5 331 + maxItems: 6 311 332 312 333 unevaluatedProperties: false 313 334
+10 -6
Documentation/devicetree/bindings/display/msm/qcom,glymur-mdss.yaml
··· 176 176 }; 177 177 }; 178 178 179 - displayport-controller@ae90000 { 179 + displayport-controller@af54000 { 180 180 compatible = "qcom,glymur-dp"; 181 - reg = <0xae90000 0x200>, 182 - <0xae90200 0x200>, 183 - <0xae90400 0x600>, 184 - <0xae91000 0x400>, 185 - <0xae91400 0x400>; 181 + reg = <0xaf54000 0x200>, 182 + <0xaf54200 0x200>, 183 + <0xaf55000 0xc00>, 184 + <0xaf56000 0x400>, 185 + <0xaf57000 0x400>, 186 + <0xaf58000 0x400>, 187 + <0xaf59000 0x400>, 188 + <0xaf5a000 0x600>, 189 + <0xaf5b000 0x600>; 186 190 187 191 interrupt-parent = <&mdss>; 188 192 interrupts = <12>;
+1 -1
Documentation/devicetree/bindings/display/msm/qcom,sm8750-mdss.yaml
··· 10 10 - Krzysztof Kozlowski <krzk@kernel.org> 11 11 12 12 description: 13 - SM8650 MSM Mobile Display Subsystem(MDSS), which encapsulates sub-blocks like 13 + SM8750 MSM Mobile Display Subsystem(MDSS), which encapsulates sub-blocks like 14 14 DPU display controller, DSI and DP interfaces etc. 15 15 16 16 $ref: /schemas/display/msm/mdss-common.yaml#
+1 -2
MAINTAINERS
··· 8626 8626 F: include/uapi/drm/lima_drm.h 8627 8627 8628 8628 DRM DRIVERS FOR LOONGSON 8629 - M: Sui Jingfeng <suijingfeng@loongson.cn> 8630 8629 L: dri-devel@lists.freedesktop.org 8631 - S: Supported 8630 + S: Orphan 8632 8631 T: git https://gitlab.freedesktop.org/drm/misc/kernel.git 8633 8632 F: drivers/gpu/drm/loongson/ 8634 8633
+2 -12
drivers/accel/amdxdna/aie2_ctx.c
··· 165 165 166 166 trace_xdna_job(&job->base, job->hwctx->name, "signaled fence", job->seq); 167 167 168 - amdxdna_pm_suspend_put(job->hwctx->client->xdna); 169 168 job->hwctx->priv->completed++; 170 169 dma_fence_signal(fence); 171 170 ··· 289 290 struct dma_fence *fence; 290 291 int ret; 291 292 292 - ret = amdxdna_pm_resume_get(hwctx->client->xdna); 293 - if (ret) 293 + if (!hwctx->priv->mbox_chann) 294 294 return NULL; 295 295 296 - if (!hwctx->priv->mbox_chann) { 297 - amdxdna_pm_suspend_put(hwctx->client->xdna); 298 - return NULL; 299 - } 300 - 301 - if (!mmget_not_zero(job->mm)) { 302 - amdxdna_pm_suspend_put(hwctx->client->xdna); 296 + if (!mmget_not_zero(job->mm)) 303 297 return ERR_PTR(-ESRCH); 304 - } 305 298 306 299 kref_get(&job->refcnt); 307 300 fence = dma_fence_get(job->fence); ··· 324 333 325 334 out: 326 335 if (ret) { 327 - amdxdna_pm_suspend_put(hwctx->client->xdna); 328 336 dma_fence_put(job->fence); 329 337 aie2_job_put(job); 330 338 mmput(job->mm);
+10
drivers/accel/amdxdna/amdxdna_ctx.c
··· 17 17 #include "amdxdna_ctx.h" 18 18 #include "amdxdna_gem.h" 19 19 #include "amdxdna_pci_drv.h" 20 + #include "amdxdna_pm.h" 20 21 21 22 #define MAX_HWCTX_ID 255 22 23 #define MAX_ARG_COUNT 4095 ··· 446 445 void amdxdna_sched_job_cleanup(struct amdxdna_sched_job *job) 447 446 { 448 447 trace_amdxdna_debug_point(job->hwctx->name, job->seq, "job release"); 448 + amdxdna_pm_suspend_put(job->hwctx->client->xdna); 449 449 amdxdna_arg_bos_put(job); 450 450 amdxdna_gem_put_obj(job->cmd_bo); 451 451 dma_fence_put(job->fence); ··· 482 480 if (ret) { 483 481 XDNA_ERR(xdna, "Argument BOs lookup failed, ret %d", ret); 484 482 goto cmd_put; 483 + } 484 + 485 + ret = amdxdna_pm_resume_get(xdna); 486 + if (ret) { 487 + XDNA_ERR(xdna, "Resume failed, ret %d", ret); 488 + goto put_bos; 485 489 } 486 490 487 491 idx = srcu_read_lock(&client->hwctx_srcu); ··· 530 522 dma_fence_put(job->fence); 531 523 unlock_srcu: 532 524 srcu_read_unlock(&client->hwctx_srcu, idx); 525 + amdxdna_pm_suspend_put(xdna); 526 + put_bos: 533 527 amdxdna_arg_bos_put(job); 534 528 cmd_put: 535 529 amdxdna_gem_put_obj(job->cmd_bo);
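
Taken together, the two amdxdna hunks above rebalance the runtime-PM reference: it is now acquired once at job submission and released in amdxdna_sched_job_cleanup(), instead of being taken and dropped around each run, which could deadlock against a pending suspend. A minimal sketch of the resulting ownership, with hypothetical queue_job()/free_job() stand-ins for the driver's real steps:

    /* Sketch only: the PM reference follows the job's lifetime. */
    static int submit_job(struct amdxdna_dev *xdna, struct amdxdna_sched_job *job)
    {
    	int ret;

    	ret = amdxdna_pm_resume_get(xdna);	/* taken while submit can still fail cleanly */
    	if (ret)
    		return ret;

    	ret = queue_job(job);			/* hypothetical enqueue step */
    	if (ret)
    		amdxdna_pm_suspend_put(xdna);	/* balanced on the error path */
    	return ret;
    }

    static void job_cleanup(struct amdxdna_dev *xdna, struct amdxdna_sched_job *job)
    {
    	/* Dropped exactly once when the job is released, never from the
    	 * run/signal path that a suspend may be waiting on. */
    	amdxdna_pm_suspend_put(xdna);
    	free_job(job);				/* hypothetical release step */
    }
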
-6
drivers/accel/ivpu/ivpu_hw_40xx_reg.h
··· 121 121 #define VPU_50XX_HOST_SS_AON_PWR_ISLAND_STATUS_DLY 0x0003006cu 122 122 #define VPU_50XX_HOST_SS_AON_PWR_ISLAND_STATUS_DLY_STATUS_DLY_MASK GENMASK(7, 0) 123 123 124 - #define VPU_40XX_HOST_SS_AON_RETENTION0 0x0003000cu 125 - #define VPU_40XX_HOST_SS_AON_RETENTION1 0x00030010u 126 - #define VPU_40XX_HOST_SS_AON_RETENTION2 0x00030014u 127 - #define VPU_40XX_HOST_SS_AON_RETENTION3 0x00030018u 128 - #define VPU_40XX_HOST_SS_AON_RETENTION4 0x0003001cu 129 - 130 124 #define VPU_40XX_HOST_SS_AON_IDLE_GEN 0x00030200u 131 125 #define VPU_40XX_HOST_SS_AON_IDLE_GEN_EN_MASK BIT_MASK(0) 132 126 #define VPU_40XX_HOST_SS_AON_IDLE_GEN_HW_PG_EN_MASK BIT_MASK(1)
-1
drivers/accel/ivpu/ivpu_hw_ip.c
··· 931 931 932 932 static int soc_cpu_boot_60xx(struct ivpu_device *vdev) 933 933 { 934 - REGV_WR64(VPU_40XX_HOST_SS_AON_RETENTION1, vdev->fw->mem_bp->vpu_addr); 935 934 soc_cpu_set_entry_point_40xx(vdev, vdev->fw->cold_boot_entry_point); 936 935 937 936 return 0;
+13 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 2690 2690 break; 2691 2691 default: 2692 2692 r = amdgpu_discovery_set_ip_blocks(adev); 2693 - if (r) 2693 + if (r) { 2694 + adev->num_ip_blocks = 0; 2694 2695 return r; 2696 + } 2695 2697 break; 2696 2698 } 2697 2699 ··· 3249 3247 i = state == AMD_CG_STATE_GATE ? j : adev->num_ip_blocks - j - 1; 3250 3248 if (!adev->ip_blocks[i].status.late_initialized) 3251 3249 continue; 3250 + if (!adev->ip_blocks[i].version) 3251 + continue; 3252 3252 /* skip CG for GFX, SDMA on S0ix */ 3253 3253 if (adev->in_s0ix && 3254 3254 (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_GFX || ··· 3289 3285 for (j = 0; j < adev->num_ip_blocks; j++) { 3290 3286 i = state == AMD_PG_STATE_GATE ? j : adev->num_ip_blocks - j - 1; 3291 3287 if (!adev->ip_blocks[i].status.late_initialized) 3288 + continue; 3289 + if (!adev->ip_blocks[i].version) 3292 3290 continue; 3293 3291 /* skip PG for GFX, SDMA on S0ix */ 3294 3292 if (adev->in_s0ix && ··· 3499 3493 int i, r; 3500 3494 3501 3495 for (i = 0; i < adev->num_ip_blocks; i++) { 3496 + if (!adev->ip_blocks[i].version) 3497 + continue; 3502 3498 if (!adev->ip_blocks[i].version->funcs->early_fini) 3503 3499 continue; 3504 3500 ··· 3578 3570 if (!adev->ip_blocks[i].status.sw) 3579 3571 continue; 3580 3572 3573 + if (!adev->ip_blocks[i].version) 3574 + continue; 3581 3575 if (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_GMC) { 3582 3576 amdgpu_ucode_free_bo(adev); 3583 3577 amdgpu_free_static_csa(&adev->virt.csa_obj); ··· 3605 3595 3606 3596 for (i = adev->num_ip_blocks - 1; i >= 0; i--) { 3607 3597 if (!adev->ip_blocks[i].status.late_initialized) 3598 + continue; 3599 + if (!adev->ip_blocks[i].version) 3608 3600 continue; 3609 3601 if (adev->ip_blocks[i].version->funcs->late_fini) 3610 3602 adev->ip_blocks[i].version->funcs->late_fini(&adev->ip_blocks[i]);
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
··· 83 83 { 84 84 struct amdgpu_device *adev = drm_to_adev(dev); 85 85 86 - if (adev == NULL) 86 + if (adev == NULL || !adev->num_ip_blocks) 87 87 return; 88 88 89 89 amdgpu_unregister_gpu_instance(adev);
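
The discovery-failure fix has two halves: amdgpu_device_ip_early_init() now zeroes adev->num_ip_blocks when amdgpu_discovery_set_ip_blocks() fails, and every later walk over adev->ip_blocks[] skips entries whose ->version was never populated. A condensed sketch of the defensive loop shape used throughout the hunks above:

    /* Sketch: each teardown/gating walk now guards both the init status
     * and the version pointer before dereferencing version->funcs. */
    for (i = 0; i < adev->num_ip_blocks; i++) {
    	if (!adev->ip_blocks[i].version)		/* never initialized */
    		continue;
    	if (!adev->ip_blocks[i].status.late_initialized)
    		continue;
    	/* ... adev->ip_blocks[i].version->funcs is now safe to use ... */
    }
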
+8 -8
drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h
··· 368 368 369 369 struct drm_property *plane_ctm_property; 370 370 /** 371 - * @shaper_lut_property: Plane property to set pre-blending shaper LUT 372 - * that converts color content before 3D LUT. If 373 - * plane_shaper_tf_property != Identity TF, AMD color module will 371 + * @plane_shaper_lut_property: Plane property to set pre-blending 372 + * shaper LUT that converts color content before 3D LUT. 373 + * If plane_shaper_tf_property != Identity TF, AMD color module will 374 374 * combine the user LUT values with pre-defined TF into the LUT 375 375 * parameters to be programmed. 376 376 */ 377 377 struct drm_property *plane_shaper_lut_property; 378 378 /** 379 - * @shaper_lut_size_property: Plane property for the size of 379 + * @plane_shaper_lut_size_property: Plane property for the size of 380 380 * pre-blending shaper LUT as supported by the driver (read-only). 381 381 */ 382 382 struct drm_property *plane_shaper_lut_size_property; ··· 400 400 */ 401 401 struct drm_property *plane_lut3d_property; 402 402 /** 403 - * @plane_degamma_lut_size_property: Plane property to define the max 404 - * size of 3D LUT as supported by the driver (read-only). The max size 405 - * is the max size of one dimension and, therefore, the max number of 406 - * entries for 3D LUT array is the 3D LUT size cubed; 403 + * @plane_lut3d_size_property: Plane property to define the max size 404 + * of 3D LUT as supported by the driver (read-only). The max size is 405 + * the max size of one dimension and, therefore, the max number of 406 + * entries for 3D LUT array is the 3D LUT size cubed. 407 407 */ 408 408 struct drm_property *plane_lut3d_size_property; 409 409 /**
+4 -1
drivers/gpu/drm/amd/amdgpu/mes_v12_0.c
··· 731 731 int i; 732 732 struct amdgpu_device *adev = mes->adev; 733 733 union MESAPI_SET_HW_RESOURCES mes_set_hw_res_pkt; 734 + uint32_t mes_rev = (pipe == AMDGPU_MES_SCHED_PIPE) ? 735 + (mes->sched_version & AMDGPU_MES_VERSION_MASK) : 736 + (mes->kiq_version & AMDGPU_MES_VERSION_MASK); 734 737 735 738 memset(&mes_set_hw_res_pkt, 0, sizeof(mes_set_hw_res_pkt)); 736 739 ··· 788 785 * handling support, other queue will not use the oversubscribe timer. 789 786 * handling mode - 0: disabled; 1: basic version; 2: basic+ version 790 787 */ 791 - mes_set_hw_res_pkt.oversubscription_timer = 50; 788 + mes_set_hw_res_pkt.oversubscription_timer = mes_rev < 0x8b ? 0 : 50; 792 789 mes_set_hw_res_pkt.unmapped_doorbell_handling = 1; 793 790 794 791 if (amdgpu_mes_log_enable) {
+1
drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
··· 593 593 p->queue_size)) { 594 594 pr_debug("ring buf 0x%llx size 0x%llx not mapped on GPU\n", 595 595 p->queue_address, p->queue_size); 596 + amdgpu_bo_unreserve(vm->root.bo); 596 597 return -EFAULT; 597 598 } 598 599
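
The amdkfd fix adds the amdgpu_bo_unreserve() that was missing on the -EFAULT path. The usual kernel idiom keeps a single unlock site so every exit stays balanced; a minimal sketch, with validate_ring() as a hypothetical stand-in for the mapping check:

    ret = amdgpu_bo_reserve(vm->root.bo, false);
    if (ret)
    	return ret;

    if (!validate_ring(p->queue_address, p->queue_size)) {	/* hypothetical check */
    	ret = -EFAULT;
    	goto out_unreserve;	/* this unlock was the missing piece */
    }

    /* ... update the queue ... */

    out_unreserve:
    	amdgpu_bo_unreserve(vm->root.bo);
    	return ret;
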
+5 -1
drivers/gpu/drm/amd/display/dc/dccg/dcn20/dcn20_dccg.h
··· 38 38 DCCG_SRII(PIXEL_RATE_CNTL, OTG, 0),\ 39 39 DCCG_SRII(PIXEL_RATE_CNTL, OTG, 1),\ 40 40 SR(DISPCLK_FREQ_CHANGE_CNTL),\ 41 - SR(DC_MEM_GLOBAL_PWR_REQ_CNTL) 41 + SR(DC_MEM_GLOBAL_PWR_REQ_CNTL),\ 42 + SR(MICROSECOND_TIME_BASE_DIV),\ 43 + SR(MILLISECOND_TIME_BASE_DIV),\ 44 + SR(DCCG_GATE_DISABLE_CNTL),\ 45 + SR(DCCG_GATE_DISABLE_CNTL2) 42 46 43 47 #define DCCG_REG_LIST_DCN2() \ 44 48 DCCG_COMMON_REG_LIST_DCN_BASE(),\
+20 -1
drivers/gpu/drm/amd/display/dc/dccg/dcn21/dcn21_dccg.c
··· 96 96 dccg->pipe_dppclk_khz[dpp_inst] = req_dppclk; 97 97 } 98 98 99 + /* 100 + * On DCN21 S0i3 resume, BIOS programs MICROSECOND_TIME_BASE_DIV to 101 + * 0x00120464 as a marker that golden init has already been done. 102 + * dcn21_s0i3_golden_init_wa() reads this marker later in bios_golden_init() 103 + * to decide whether to skip golden init. 104 + * 105 + * dccg2_init() unconditionally overwrites MICROSECOND_TIME_BASE_DIV to 106 + * 0x00120264, destroying the marker before it can be read. 107 + * 108 + * Guard the call: if the S0i3 marker is present, skip dccg2_init() so the 109 + * WA can function correctly. bios_golden_init() will handle init in that case. 110 + */ 111 + static void dccg21_init(struct dccg *dccg) 112 + { 113 + if (dccg2_is_s0i3_golden_init_wa_done(dccg)) 114 + return; 115 + 116 + dccg2_init(dccg); 117 + } 99 118 100 119 static const struct dccg_funcs dccg21_funcs = { 101 120 .update_dpp_dto = dccg21_update_dpp_dto, ··· 122 103 .set_fifo_errdet_ovr_en = dccg2_set_fifo_errdet_ovr_en, 123 104 .otg_add_pixel = dccg2_otg_add_pixel, 124 105 .otg_drop_pixel = dccg2_otg_drop_pixel, 125 - .dccg_init = dccg2_init, 106 + .dccg_init = dccg21_init, 126 107 .refclk_setup = dccg2_refclk_setup, /* Deprecated - for backward compatibility only */ 127 108 .allow_clock_gating = dccg2_allow_clock_gating, 128 109 .enable_memory_low_power = dccg2_enable_memory_low_power,
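
The guard relies on dccg2_is_s0i3_golden_init_wa_done(), which this pull does not show. A hedged sketch of what such a helper plausibly looks like, assuming the DCCG register-access macros used elsewhere in DC (the real implementation may differ):

    /* Hypothetical sketch: BIOS writes 0x00120464 into
     * MICROSECOND_TIME_BASE_DIV on S0i3 resume as the golden-init marker,
     * while dccg2_init() would program 0x00120264. Reading it back tells
     * us whether golden init already ran. */
    static bool dccg2_is_s0i3_golden_init_wa_done(struct dccg *dccg)
    {
    	struct dcn_dccg *dccg_dcn = TO_DCN_DCCG(dccg);

    	return REG_READ(MICROSECOND_TIME_BASE_DIV) == 0x00120464;
    }
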
+7 -1
drivers/gpu/drm/amd/display/dc/dccg/dcn301/dcn301_dccg.h
··· 34 34 DCCG_SRII(DTO_PARAM, DPPCLK, 1),\ 35 35 DCCG_SRII(DTO_PARAM, DPPCLK, 2),\ 36 36 DCCG_SRII(DTO_PARAM, DPPCLK, 3),\ 37 - SR(REFCLK_CNTL) 37 + SR(REFCLK_CNTL),\ 38 + SR(DISPCLK_FREQ_CHANGE_CNTL),\ 39 + SR(DC_MEM_GLOBAL_PWR_REQ_CNTL),\ 40 + SR(MICROSECOND_TIME_BASE_DIV),\ 41 + SR(MILLISECOND_TIME_BASE_DIV),\ 42 + SR(DCCG_GATE_DISABLE_CNTL),\ 43 + SR(DCCG_GATE_DISABLE_CNTL2) 38 44 39 45 #define DCCG_MASK_SH_LIST_DCN301(mask_sh) \ 40 46 DCCG_SFI(DPPCLK_DTO_CTRL, DTO_ENABLE, DPPCLK, 0, mask_sh),\
+4 -1
drivers/gpu/drm/amd/display/dc/dccg/dcn31/dcn31_dccg.h
··· 64 64 SR(DSCCLK1_DTO_PARAM),\ 65 65 SR(DSCCLK2_DTO_PARAM),\ 66 66 SR(DSCCLK_DTO_CTRL),\ 67 + SR(DCCG_GATE_DISABLE_CNTL),\ 67 68 SR(DCCG_GATE_DISABLE_CNTL2),\ 68 69 SR(DCCG_GATE_DISABLE_CNTL3),\ 69 - SR(HDMISTREAMCLK0_DTO_PARAM) 70 + SR(HDMISTREAMCLK0_DTO_PARAM),\ 71 + SR(DC_MEM_GLOBAL_PWR_REQ_CNTL),\ 72 + SR(MICROSECOND_TIME_BASE_DIV) 70 73 71 74 72 75 #define DCCG_MASK_SH_LIST_DCN31(mask_sh) \
+4 -1
drivers/gpu/drm/amd/display/dc/dccg/dcn314/dcn314_dccg.h
··· 70 70 SR(DSCCLK2_DTO_PARAM),\ 71 71 SR(DSCCLK3_DTO_PARAM),\ 72 72 SR(DSCCLK_DTO_CTRL),\ 73 + SR(DCCG_GATE_DISABLE_CNTL),\ 73 74 SR(DCCG_GATE_DISABLE_CNTL2),\ 74 75 SR(DCCG_GATE_DISABLE_CNTL3),\ 75 76 SR(HDMISTREAMCLK0_DTO_PARAM),\ 76 77 SR(OTG_PIXEL_RATE_DIV),\ 77 - SR(DTBCLK_P_CNTL) 78 + SR(DTBCLK_P_CNTL),\ 79 + SR(DC_MEM_GLOBAL_PWR_REQ_CNTL),\ 80 + SR(MICROSECOND_TIME_BASE_DIV) 78 81 79 82 #define DCCG_MASK_SH_LIST_DCN314_COMMON(mask_sh) \ 80 83 DCCG_SFI(DPPCLK_DTO_CTRL, DTO_DB_EN, DPPCLK, 0, mask_sh),\
+2 -1
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
··· 2222 2222 user_od_table->OverDriveTable.FeatureCtrlMask = BIT(PP_OD_FEATURE_GFXCLK_BIT) | 2223 2223 BIT(PP_OD_FEATURE_UCLK_BIT) | 2224 2224 BIT(PP_OD_FEATURE_GFX_VF_CURVE_BIT) | 2225 - BIT(PP_OD_FEATURE_FAN_CURVE_BIT); 2225 + BIT(PP_OD_FEATURE_FAN_CURVE_BIT) | 2226 + BIT(PP_OD_FEATURE_ZERO_FAN_BIT); 2226 2227 res = smu_v13_0_0_upload_overdrive_table(smu, user_od_table); 2227 2228 user_od_table->OverDriveTable.FeatureCtrlMask = 0; 2228 2229 if (res == 0)
+2 -1
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
··· 2224 2224 user_od_table->OverDriveTable.FeatureCtrlMask = BIT(PP_OD_FEATURE_GFXCLK_BIT) | 2225 2225 BIT(PP_OD_FEATURE_UCLK_BIT) | 2226 2226 BIT(PP_OD_FEATURE_GFX_VF_CURVE_BIT) | 2227 - BIT(PP_OD_FEATURE_FAN_CURVE_BIT); 2227 + BIT(PP_OD_FEATURE_FAN_CURVE_BIT) | 2228 + BIT(PP_OD_FEATURE_ZERO_FAN_BIT); 2228 2229 res = smu_v13_0_7_upload_overdrive_table(smu, user_od_table); 2229 2230 user_od_table->OverDriveTable.FeatureCtrlMask = 0; 2230 2231 if (res == 0)
+2 -1
drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
··· 2311 2311 user_od_table->OverDriveTable.FeatureCtrlMask = BIT(PP_OD_FEATURE_GFXCLK_BIT) | 2312 2312 BIT(PP_OD_FEATURE_UCLK_BIT) | 2313 2313 BIT(PP_OD_FEATURE_GFX_VF_CURVE_BIT) | 2314 - BIT(PP_OD_FEATURE_FAN_CURVE_BIT); 2314 + BIT(PP_OD_FEATURE_FAN_CURVE_BIT) | 2315 + BIT(PP_OD_FEATURE_ZERO_FAN_BIT); 2315 2316 res = smu_v14_0_2_upload_overdrive_table(smu, user_od_table); 2316 2317 user_od_table->OverDriveTable.FeatureCtrlMask = 0; 2317 2318 if (res == 0)
+7 -6
drivers/gpu/drm/bridge/ti-sn65dsi83.c
··· 351 351 * DSI_CLK = mode clock * bpp / dsi_data_lanes / 2 352 352 * the 2 is there because the bus is DDR. 353 353 */ 354 - return DIV_ROUND_UP(clamp((unsigned int)mode->clock * 355 - mipi_dsi_pixel_format_to_bpp(ctx->dsi->format) / 356 - ctx->dsi->lanes / 2, 40000U, 500000U), 5000U); 354 + return clamp((unsigned int)mode->clock * 355 + mipi_dsi_pixel_format_to_bpp(ctx->dsi->format) / 356 + ctx->dsi->lanes / 2, 40000U, 500000U) / 5000U; 357 357 } 358 358 359 359 static u8 sn65dsi83_get_dsi_div(struct sn65dsi83 *ctx) ··· 517 517 struct drm_atomic_state *state) 518 518 { 519 519 struct sn65dsi83 *ctx = bridge_to_sn65dsi83(bridge); 520 + const unsigned int dual_factor = ctx->lvds_dual_link ? 2 : 1; 520 521 const struct drm_bridge_state *bridge_state; 521 522 const struct drm_crtc_state *crtc_state; 522 523 const struct drm_display_mode *mode; ··· 654 653 /* 32 + 1 pixel clock to ensure proper operation */ 655 654 le16val = cpu_to_le16(32 + 1); 656 655 regmap_bulk_write(ctx->regmap, REG_VID_CHA_SYNC_DELAY_LOW, &le16val, 2); 657 - le16val = cpu_to_le16(mode->hsync_end - mode->hsync_start); 656 + le16val = cpu_to_le16((mode->hsync_end - mode->hsync_start) / dual_factor); 658 657 regmap_bulk_write(ctx->regmap, REG_VID_CHA_HSYNC_PULSE_WIDTH_LOW, 659 658 &le16val, 2); 660 659 le16val = cpu_to_le16(mode->vsync_end - mode->vsync_start); 661 660 regmap_bulk_write(ctx->regmap, REG_VID_CHA_VSYNC_PULSE_WIDTH_LOW, 662 661 &le16val, 2); 663 662 regmap_write(ctx->regmap, REG_VID_CHA_HORIZONTAL_BACK_PORCH, 664 - mode->htotal - mode->hsync_end); 663 + (mode->htotal - mode->hsync_end) / dual_factor); 665 664 regmap_write(ctx->regmap, REG_VID_CHA_VERTICAL_BACK_PORCH, 666 665 mode->vtotal - mode->vsync_end); 667 666 regmap_write(ctx->regmap, REG_VID_CHA_HORIZONTAL_FRONT_PORCH, 668 - mode->hsync_start - mode->hdisplay); 667 + (mode->hsync_start - mode->hdisplay) / dual_factor); 669 668 regmap_write(ctx->regmap, REG_VID_CHA_VERTICAL_FRONT_PORCH, 670 669 mode->vsync_start - mode->vdisplay); 671 670 regmap_write(ctx->regmap, REG_VID_CHA_TEST_PATTERN, 0x00);
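
Two independent fixes in this bridge driver. CHA_DSI_CLK_RANGE encodes the DSI clock in 5 MHz steps and should truncate rather than round up, so DIV_ROUND_UP() becomes a plain division; and in dual-link LVDS each channel carries half of every horizontal timing, so the sync width and porches are divided by dual_factor. A small worked illustration with made-up mode values:

    /* CHA_DSI_CLK_RANGE rounding, for an illustrative 462.5 MHz DSI clock:
     *   DIV_ROUND_UP(462500, 5000) = 93   (rounds up past the real clock)
     *   462500 / 5000              = 92   (truncates, as the fix does)
     *
     * Dual-LVDS halving for hdisplay=1920, hsync_start=1968,
     * hsync_end=2000, htotal=2080: */
    unsigned int hsw = (2000 - 1968) / 2;	/* 16 px sync width per channel  */
    unsigned int hfp = (1968 - 1920) / 2;	/* 24 px front porch per channel */
    unsigned int hbp = (2080 - 2000) / 2;	/* 40 px back porch per channel  */
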
+8 -1
drivers/gpu/drm/gud/gud_drv.c
··· 339 339 } 340 340 341 341 static const struct drm_crtc_helper_funcs gud_crtc_helper_funcs = { 342 - .atomic_check = drm_crtc_helper_atomic_check 342 + .atomic_check = drm_crtc_helper_atomic_check, 343 + .atomic_enable = gud_crtc_atomic_enable, 344 + .atomic_disable = gud_crtc_atomic_disable, 343 345 }; 344 346 345 347 static const struct drm_crtc_funcs gud_crtc_funcs = { ··· 364 362 .disable_plane = drm_atomic_helper_disable_plane, 365 363 .destroy = drm_plane_cleanup, 366 364 DRM_GEM_SHADOW_PLANE_FUNCS, 365 + }; 366 + 367 + static const struct drm_mode_config_helper_funcs gud_mode_config_helpers = { 368 + .atomic_commit_tail = drm_atomic_helper_commit_tail_rpm, 367 369 }; 368 370 369 371 static const struct drm_mode_config_funcs gud_mode_config_funcs = { ··· 505 499 drm->mode_config.min_height = le32_to_cpu(desc.min_height); 506 500 drm->mode_config.max_height = le32_to_cpu(desc.max_height); 507 501 drm->mode_config.funcs = &gud_mode_config_funcs; 502 + drm->mode_config.helper_private = &gud_mode_config_helpers; 508 503 509 504 /* Format init */ 510 505 formats_dev = devm_kmalloc(dev, GUD_FORMATS_MAX_NUM, GFP_KERNEL);
+4
drivers/gpu/drm/gud/gud_internal.h
··· 62 62 63 63 void gud_clear_damage(struct gud_device *gdrm); 64 64 void gud_flush_work(struct work_struct *work); 65 + void gud_crtc_atomic_enable(struct drm_crtc *crtc, 66 + struct drm_atomic_state *state); 67 + void gud_crtc_atomic_disable(struct drm_crtc *crtc, 68 + struct drm_atomic_state *state); 65 69 int gud_plane_atomic_check(struct drm_plane *plane, 66 70 struct drm_atomic_state *state); 67 71 void gud_plane_atomic_update(struct drm_plane *plane,
+36 -18
drivers/gpu/drm/gud/gud_pipe.c
··· 580 580 return ret; 581 581 } 582 582 583 + void gud_crtc_atomic_enable(struct drm_crtc *crtc, 584 + struct drm_atomic_state *state) 585 + { 586 + struct drm_device *drm = crtc->dev; 587 + struct gud_device *gdrm = to_gud_device(drm); 588 + int idx; 589 + 590 + if (!drm_dev_enter(drm, &idx)) 591 + return; 592 + 593 + gud_usb_set_u8(gdrm, GUD_REQ_SET_CONTROLLER_ENABLE, 1); 594 + gud_usb_set(gdrm, GUD_REQ_SET_STATE_COMMIT, 0, NULL, 0); 595 + gud_usb_set_u8(gdrm, GUD_REQ_SET_DISPLAY_ENABLE, 1); 596 + 597 + drm_dev_exit(idx); 598 + } 599 + 600 + void gud_crtc_atomic_disable(struct drm_crtc *crtc, 601 + struct drm_atomic_state *state) 602 + { 603 + struct drm_device *drm = crtc->dev; 604 + struct gud_device *gdrm = to_gud_device(drm); 605 + int idx; 606 + 607 + if (!drm_dev_enter(drm, &idx)) 608 + return; 609 + 610 + gud_usb_set_u8(gdrm, GUD_REQ_SET_DISPLAY_ENABLE, 0); 611 + gud_usb_set_u8(gdrm, GUD_REQ_SET_CONTROLLER_ENABLE, 0); 612 + 613 + drm_dev_exit(idx); 614 + } 615 + 583 616 void gud_plane_atomic_update(struct drm_plane *plane, 584 617 struct drm_atomic_state *atomic_state) 585 618 { ··· 640 607 mutex_unlock(&gdrm->damage_lock); 641 608 } 642 609 643 - if (!drm_dev_enter(drm, &idx)) 610 + if (!crtc || !drm_dev_enter(drm, &idx)) 644 611 return; 645 - 646 - if (!old_state->fb) 647 - gud_usb_set_u8(gdrm, GUD_REQ_SET_CONTROLLER_ENABLE, 1); 648 - 649 - if (fb && (crtc->state->mode_changed || crtc->state->connectors_changed)) 650 - gud_usb_set(gdrm, GUD_REQ_SET_STATE_COMMIT, 0, NULL, 0); 651 - 652 - if (crtc->state->active_changed) 653 - gud_usb_set_u8(gdrm, GUD_REQ_SET_DISPLAY_ENABLE, crtc->state->active); 654 - 655 - if (!fb) 656 - goto ctrl_disable; 657 612 658 613 ret = drm_gem_fb_begin_cpu_access(fb, DMA_FROM_DEVICE); 659 614 if (ret) 660 - goto ctrl_disable; 615 + goto out; 661 616 662 617 drm_atomic_helper_damage_iter_init(&iter, old_state, new_state); 663 618 drm_atomic_for_each_plane_damage(&iter, &damage) ··· 653 632 654 633 drm_gem_fb_end_cpu_access(fb, DMA_FROM_DEVICE); 655 634 656 - ctrl_disable: 657 - if (!crtc->state->enable) 658 - gud_usb_set_u8(gdrm, GUD_REQ_SET_CONTROLLER_ENABLE, 0); 659 - 635 + out: 660 636 drm_dev_exit(idx); 661 637 }
-6
drivers/gpu/drm/i915/display/intel_alpm.c
··· 43 43 44 44 void intel_alpm_init(struct intel_dp *intel_dp) 45 45 { 46 - u8 dpcd; 47 - 48 - if (drm_dp_dpcd_readb(&intel_dp->aux, DP_RECEIVER_ALPM_CAP, &dpcd) < 0) 49 - return; 50 - 51 - intel_dp->alpm_dpcd = dpcd; 52 46 mutex_init(&intel_dp->alpm.lock); 53 47 } 54 48
-1
drivers/gpu/drm/i915/display/intel_display.c
··· 1614 1614 } 1615 1615 1616 1616 intel_set_transcoder_timings(crtc_state); 1617 - intel_vrr_set_transcoder_timings(crtc_state); 1618 1617 1619 1618 if (cpu_transcoder != TRANSCODER_EDP) 1620 1619 intel_de_write(display, TRANS_MULT(display, cpu_transcoder),
+7
drivers/gpu/drm/i915/display/intel_dp.c
··· 4577 4577 intel_edp_init_dpcd(struct intel_dp *intel_dp, struct intel_connector *connector) 4578 4578 { 4579 4579 struct intel_display *display = to_intel_display(intel_dp); 4580 + int ret; 4580 4581 4581 4582 /* this function is meant to be called only once */ 4582 4583 drm_WARN_ON(display->drm, intel_dp->dpcd[DP_DPCD_REV] != 0); ··· 4616 4615 * available (such as HDR backlight controls) 4617 4616 */ 4618 4617 intel_dp_init_source_oui(intel_dp); 4618 + 4619 + /* Read the ALPM DPCD caps */ 4620 + ret = drm_dp_dpcd_read_byte(&intel_dp->aux, DP_RECEIVER_ALPM_CAP, 4621 + &intel_dp->alpm_dpcd); 4622 + if (ret < 0) 4623 + return false; 4619 4624 4620 4625 /* 4621 4626 * This has to be called after intel_dp->edp_dpcd is filled, PSR checks
+48 -12
drivers/gpu/drm/i915/display/intel_psr.c
··· 2619 2619 2620 2620 intel_de_write_dsb(display, dsb, PIPE_SRCSZ_ERLY_TPT(crtc->pipe), 2621 2621 crtc_state->pipe_srcsz_early_tpt); 2622 + 2623 + if (!crtc_state->dsc.compression_enable) 2624 + return; 2625 + 2626 + intel_dsc_su_et_parameters_configure(dsb, encoder, crtc_state, 2627 + drm_rect_height(&crtc_state->psr2_su_area)); 2622 2628 } 2623 2629 2624 2630 static void psr2_man_trk_ctl_calc(struct intel_crtc_state *crtc_state, ··· 2695 2689 overlap_damage_area->y2 = damage_area->y2; 2696 2690 } 2697 2691 2698 - static void intel_psr2_sel_fetch_pipe_alignment(struct intel_crtc_state *crtc_state) 2692 + static bool intel_psr2_sel_fetch_pipe_alignment(struct intel_crtc_state *crtc_state) 2699 2693 { 2700 2694 struct intel_display *display = to_intel_display(crtc_state); 2701 2695 const struct drm_dsc_config *vdsc_cfg = &crtc_state->dsc.config; 2702 2696 u16 y_alignment; 2697 + bool su_area_changed = false; 2703 2698 2704 2699 /* ADLP aligns the SU region to vdsc slice height in case dsc is enabled */ 2705 2700 if (crtc_state->dsc.compression_enable && ··· 2709 2702 else 2710 2703 y_alignment = crtc_state->su_y_granularity; 2711 2704 2712 - crtc_state->psr2_su_area.y1 -= crtc_state->psr2_su_area.y1 % y_alignment; 2713 - if (crtc_state->psr2_su_area.y2 % y_alignment) 2705 + if (crtc_state->psr2_su_area.y1 % y_alignment) { 2706 + crtc_state->psr2_su_area.y1 -= crtc_state->psr2_su_area.y1 % y_alignment; 2707 + su_area_changed = true; 2708 + } 2709 + 2710 + if (crtc_state->psr2_su_area.y2 % y_alignment) { 2714 2711 crtc_state->psr2_su_area.y2 = ((crtc_state->psr2_su_area.y2 / 2715 2712 y_alignment) + 1) * y_alignment; 2713 + su_area_changed = true; 2714 + } 2715 + 2716 + return su_area_changed; 2716 2717 } 2717 2718 2718 2719 /* ··· 2854 2839 struct intel_crtc_state *crtc_state = intel_atomic_get_new_crtc_state(state, crtc); 2855 2840 struct intel_plane_state *new_plane_state, *old_plane_state; 2856 2841 struct intel_plane *plane; 2857 - bool full_update = false, cursor_in_su_area = false; 2842 + bool full_update = false, su_area_changed; 2858 2843 int i, ret; 2859 2844 2860 2845 if (!crtc_state->enable_psr2_sel_fetch) ··· 2961 2946 if (ret) 2962 2947 return ret; 2963 2948 2964 - /* 2965 - * Adjust su area to cover cursor fully as necessary (early 2966 - * transport). This needs to be done after 2967 - * drm_atomic_add_affected_planes to ensure visible cursor is added into 2968 - * affected planes even when cursor is not updated by itself. 2969 - */ 2970 - intel_psr2_sel_fetch_et_alignment(state, crtc, &cursor_in_su_area); 2949 + do { 2950 + bool cursor_in_su_area; 2971 2951 2972 - intel_psr2_sel_fetch_pipe_alignment(crtc_state); 2952 + /* 2953 + * Adjust su area to cover cursor fully as necessary 2954 + * (early transport). This needs to be done after 2955 + * drm_atomic_add_affected_planes to ensure visible 2956 + * cursor is added into affected planes even when 2957 + * cursor is not updated by itself. 2958 + */ 2959 + intel_psr2_sel_fetch_et_alignment(state, crtc, &cursor_in_su_area); 2960 + 2961 + su_area_changed = intel_psr2_sel_fetch_pipe_alignment(crtc_state); 2962 + 2963 + /* 2964 + * If the cursor was outside the SU area before 2965 + * alignment, the alignment step (which only expands 2966 + * SU) may pull the cursor partially inside, so we 2967 + * must run ET alignment again to fully cover it. But 2968 + * if the cursor was already fully inside before 2969 + * alignment, expanding the SU area won't change that, 2970 + * so no further work is needed. 
2971 + */ 2972 + if (cursor_in_su_area) 2973 + break; 2974 + } while (su_area_changed); 2973 2975 2974 2976 /* 2975 2977 * Now that we have the pipe damaged area check if it intersect with ··· 3046 3014 } 3047 3015 3048 3016 skip_sel_fetch_set_loop: 3017 + if (full_update) 3018 + clip_area_update(&crtc_state->psr2_su_area, &crtc_state->pipe_src, 3019 + &crtc_state->pipe_src); 3020 + 3049 3021 psr2_man_trk_ctl_calc(crtc_state, full_update); 3050 3022 crtc_state->pipe_srcsz_early_tpt = 3051 3023 psr2_pipe_srcsz_early_tpt_calc(crtc_state, full_update);
+23
drivers/gpu/drm/i915/display/intel_vdsc.c
··· 767 767 sizeof(dp_dsc_pps_sdp)); 768 768 } 769 769 770 + void intel_dsc_su_et_parameters_configure(struct intel_dsb *dsb, struct intel_encoder *encoder, 771 + const struct intel_crtc_state *crtc_state, int su_lines) 772 + { 773 + struct intel_display *display = to_intel_display(crtc_state); 774 + struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc); 775 + const struct drm_dsc_config *vdsc_cfg = &crtc_state->dsc.config; 776 + enum pipe pipe = crtc->pipe; 777 + int vdsc_instances_per_pipe = intel_dsc_get_vdsc_per_pipe(crtc_state); 778 + int slice_row_per_frame = su_lines / vdsc_cfg->slice_height; 779 + u32 val; 780 + 781 + drm_WARN_ON_ONCE(display->drm, su_lines % vdsc_cfg->slice_height); 782 + drm_WARN_ON_ONCE(display->drm, vdsc_instances_per_pipe > 2); 783 + 784 + val = DSC_SUPS0_SU_SLICE_ROW_PER_FRAME(slice_row_per_frame); 785 + val |= DSC_SUPS0_SU_PIC_HEIGHT(su_lines); 786 + 787 + intel_de_write_dsb(display, dsb, LNL_DSC0_SU_PARAMETER_SET_0(pipe), val); 788 + 789 + if (vdsc_instances_per_pipe == 2) 790 + intel_de_write_dsb(display, dsb, LNL_DSC1_SU_PARAMETER_SET_0(pipe), val); 791 + } 792 + 770 793 static i915_reg_t dss_ctl1_reg(struct intel_crtc *crtc, enum transcoder cpu_transcoder) 771 794 { 772 795 return is_pipe_dsc(crtc, cpu_transcoder) ?
+3
drivers/gpu/drm/i915/display/intel_vdsc.h
··· 13 13 enum transcoder; 14 14 struct intel_crtc; 15 15 struct intel_crtc_state; 16 + struct intel_dsb; 16 17 struct intel_encoder; 17 18 18 19 bool intel_dsc_source_support(const struct intel_crtc_state *crtc_state); ··· 32 31 const struct intel_crtc_state *crtc_state); 33 32 void intel_dsc_dp_pps_write(struct intel_encoder *encoder, 34 33 const struct intel_crtc_state *crtc_state); 34 + void intel_dsc_su_et_parameters_configure(struct intel_dsb *dsb, struct intel_encoder *encoder, 35 + const struct intel_crtc_state *crtc_state, int su_lines); 35 36 void intel_vdsc_state_dump(struct drm_printer *p, int indent, 36 37 const struct intel_crtc_state *crtc_state); 37 38 int intel_vdsc_min_cdclk(const struct intel_crtc_state *crtc_state);
+12
drivers/gpu/drm/i915/display/intel_vdsc_regs.h
··· 196 196 #define DSC_PPS18_NSL_BPG_OFFSET(offset) REG_FIELD_PREP(DSC_PPS18_NSL_BPG_OFFSET_MASK, offset) 197 197 #define DSC_PPS18_SL_OFFSET_ADJ(offset) REG_FIELD_PREP(DSC_PPS18_SL_OFFSET_ADJ_MASK, offset) 198 198 199 + #define _LNL_DSC0_SU_PARAMETER_SET_0_PA 0x78064 200 + #define _LNL_DSC1_SU_PARAMETER_SET_0_PA 0x78164 201 + #define _LNL_DSC0_SU_PARAMETER_SET_0_PB 0x78264 202 + #define _LNL_DSC1_SU_PARAMETER_SET_0_PB 0x78364 203 + #define LNL_DSC0_SU_PARAMETER_SET_0(pipe) _MMIO_PIPE((pipe), _LNL_DSC0_SU_PARAMETER_SET_0_PA, _LNL_DSC0_SU_PARAMETER_SET_0_PB) 204 + #define LNL_DSC1_SU_PARAMETER_SET_0(pipe) _MMIO_PIPE((pipe), _LNL_DSC1_SU_PARAMETER_SET_0_PA, _LNL_DSC1_SU_PARAMETER_SET_0_PB) 205 + 206 + #define DSC_SUPS0_SU_SLICE_ROW_PER_FRAME_MASK REG_GENMASK(31, 20) 207 + #define DSC_SUPS0_SU_SLICE_ROW_PER_FRAME(rows) REG_FIELD_PREP(DSC_SUPS0_SU_SLICE_ROW_PER_FRAME_MASK, (rows)) 208 + #define DSC_SUPS0_SU_PIC_HEIGHT_MASK REG_GENMASK(15, 0) 209 + #define DSC_SUPS0_SU_PIC_HEIGHT(h) REG_FIELD_PREP(DSC_SUPS0_SU_PIC_HEIGHT_MASK, (h)) 210 + 199 211 /* Icelake Rate Control Buffer Threshold Registers */ 200 212 #define DSCA_RC_BUF_THRESH_0 _MMIO(0x6B230) 201 213 #define DSCA_RC_BUF_THRESH_0_UDW _MMIO(0x6B230 + 4)
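
These follow the standard i915 register pattern: REG_GENMASK() names the field and REG_FIELD_PREP() shifts a value into it with compile-time bounds checks. For reference, the same packing expressed with the generic <linux/bitfield.h> helpers (illustration only):

    #include <linux/bits.h>
    #include <linux/bitfield.h>

    /* Slice-row count in bits 31:20, SU picture height in bits 15:0,
     * matching the masks defined above. */
    u32 val = FIELD_PREP(GENMASK(31, 20), slice_row_per_frame) |
    	  FIELD_PREP(GENMASK(15, 0), su_lines);
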
+14
drivers/gpu/drm/i915/display/intel_vrr.c
··· 598 598 return; 599 599 600 600 /* 601 + * Bspec says: 602 + * "(note: VRR needs to be programmed after 603 + * TRANS_DDI_FUNC_CTL and before TRANS_CONF)." 604 + * 605 + * In practice it turns out that ICL can hang if 606 + * TRANS_VRR_VMAX/FLIPLINE are written before 607 + * enabling TRANS_DDI_FUNC_CTL. 608 + */ 609 + drm_WARN_ON(display->drm, 610 + !(intel_de_read(display, TRANS_DDI_FUNC_CTL(display, cpu_transcoder)) & TRANS_DDI_FUNC_ENABLE)); 611 + 612 + /* 601 613 * This bit seems to have two meanings depending on the platform: 602 614 * TGL: generate VRR "safe window" for DSB vblank waits 603 615 * ADL/DG2: make TRANS_SET_CONTEXT_LATENCY effective with VRR ··· 950 938 void intel_vrr_transcoder_enable(const struct intel_crtc_state *crtc_state) 951 939 { 952 940 struct intel_display *display = to_intel_display(crtc_state); 941 + 942 + intel_vrr_set_transcoder_timings(crtc_state); 953 943 954 944 if (!intel_vrr_possible(crtc_state)) 955 945 return;
+9 -3
drivers/gpu/drm/i915/gem/i915_gem_shmem.c
··· 153 153 } 154 154 } while (1); 155 155 156 - nr_pages = min_t(unsigned long, 157 - folio_nr_pages(folio), page_count - i); 156 + nr_pages = min_array(((unsigned long[]) { 157 + folio_nr_pages(folio), 158 + page_count - i, 159 + max_segment / PAGE_SIZE, 160 + }), 3); 161 + 158 162 if (!i || 159 163 sg->length >= max_segment || 160 164 folio_pfn(folio) != next_pfn) { ··· 168 164 st->nents++; 169 165 sg_set_folio(sg, folio, nr_pages * PAGE_SIZE, 0); 170 166 } else { 171 - /* XXX: could overflow? */ 167 + nr_pages = min_t(unsigned long, nr_pages, 168 + (max_segment - sg->length) / PAGE_SIZE); 169 + 172 170 sg->length += nr_pages * PAGE_SIZE; 173 171 } 174 172 next_pfn = folio_pfn(folio) + nr_pages;
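
sg->length is an unsigned int, so coalescing pages from a folio larger than 4 GiB could wrap it. The fix clamps nr_pages against max_segment both when opening a new scatterlist entry (via min_array()) and when extending an existing one. An illustration of the failure mode:

    /* Without the clamp, a 4 GiB folio (1 << 20 pages at 4 KiB) coalesced
     * into one entry wraps the 32-bit length:
     *
     *   nr_pages = folio_nr_pages(folio);    // 1048576
     *   sg->length += nr_pages * PAGE_SIZE;  // += 4 GiB, wraps to 0 in u32
     *
     * With the fix, each entry is capped so it never exceeds max_segment: */
    nr_pages = min_t(unsigned long, nr_pages,
    		 (max_segment - sg->length) / PAGE_SIZE);
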
+1 -1
drivers/gpu/drm/msm/adreno/a2xx_gpummu.c
··· 78 78 { 79 79 struct a2xx_gpummu *gpummu = to_a2xx_gpummu(mmu); 80 80 81 - dma_free_attrs(mmu->dev, TABLE_SIZE, gpummu->table, gpummu->pt_base, 81 + dma_free_attrs(mmu->dev, TABLE_SIZE + 32, gpummu->table, gpummu->pt_base, 82 82 DMA_ATTR_FORCE_CONTIGUOUS); 83 83 84 84 kfree(gpummu);
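
dma_free_attrs() must be passed the same size as the matching dma_alloc_attrs(); the allocation side of this driver (not shown in the diff) evidently adds 32 bytes of slack, so the free has to match. The restored pairing, sketched:

    /* Allocation (inferred from the fix) and the now-matching free: */
    gpummu->table = dma_alloc_attrs(mmu->dev, TABLE_SIZE + 32, &gpummu->pt_base,
    				GFP_KERNEL, DMA_ATTR_FORCE_CONTIGUOUS);
    /* ... */
    dma_free_attrs(mmu->dev, TABLE_SIZE + 32, gpummu->table, gpummu->pt_base,
    	       DMA_ATTR_FORCE_CONTIGUOUS);
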
+1 -2
drivers/gpu/drm/msm/adreno/a6xx_catalog.c
··· 1759 1759 A6XX_PROTECT_NORDWR(0x27c06, 0x0000), 1760 1760 }; 1761 1761 1762 - DECLARE_ADRENO_PROTECT(x285_protect, 64); 1762 + DECLARE_ADRENO_PROTECT(x285_protect, 15); 1763 1763 1764 1764 static const struct adreno_reglist_pipe a840_nonctxt_regs[] = { 1765 1765 { REG_A8XX_CP_SMMU_STREAM_ID_LPAC, 0x00000101, BIT(PIPE_NONE) }, ··· 1966 1966 BUILD_BUG_ON(a660_protect.count > a660_protect.count_max); 1967 1967 BUILD_BUG_ON(a690_protect.count > a690_protect.count_max); 1968 1968 BUILD_BUG_ON(a730_protect.count > a730_protect.count_max); 1969 - BUILD_BUG_ON(a840_protect.count > a840_protect.count_max); 1970 1969 }
+12 -2
drivers/gpu/drm/msm/adreno/a8xx_gpu.c
··· 310 310 hbb = cfg->highest_bank_bit - 13; 311 311 hbb_hi = hbb >> 2; 312 312 hbb_lo = hbb & 3; 313 - a8xx_write_pipe(gpu, PIPE_BV, REG_A8XX_GRAS_NC_MODE_CNTL, hbb << 5); 314 - a8xx_write_pipe(gpu, PIPE_BR, REG_A8XX_GRAS_NC_MODE_CNTL, hbb << 5); 313 + 314 + a8xx_write_pipe(gpu, PIPE_BV, REG_A8XX_GRAS_NC_MODE_CNTL, 315 + hbb << 5 | 316 + level3_swizzling_dis << 4 | 317 + level2_swizzling_dis << 3); 318 + 319 + a8xx_write_pipe(gpu, PIPE_BR, REG_A8XX_GRAS_NC_MODE_CNTL, 320 + hbb << 5 | 321 + level3_swizzling_dis << 4 | 322 + level2_swizzling_dis << 3); 315 323 316 324 a8xx_write_pipe(gpu, PIPE_BR, REG_A8XX_RB_CCU_NC_MODE_CNTL, 317 325 yuvnotcomptofc << 6 | 326 + level3_swizzling_dis << 5 | 327 + level2_swizzling_dis << 4 | 318 328 hbb_hi << 3 | 319 329 hbb_lo << 1); 320 330
+1
drivers/gpu/drm/msm/adreno/adreno_device.c
··· 302 302 { .compatible = "qcom,kgsl-3d0" }, 303 303 {} 304 304 }; 305 + MODULE_DEVICE_TABLE(of, dt_match); 305 306 306 307 static int adreno_runtime_resume(struct device *dev) 307 308 {
+6 -6
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_8_0_sc8280xp.h
··· 133 133 static const struct dpu_lm_cfg sc8280xp_lm[] = { 134 134 { 135 135 .name = "lm_0", .id = LM_0, 136 - .base = 0x44000, .len = 0x320, 136 + .base = 0x44000, .len = 0x400, 137 137 .features = MIXER_MSM8998_MASK, 138 138 .sblk = &sdm845_lm_sblk, 139 139 .lm_pair = LM_1, ··· 141 141 .dspp = DSPP_0, 142 142 }, { 143 143 .name = "lm_1", .id = LM_1, 144 - .base = 0x45000, .len = 0x320, 144 + .base = 0x45000, .len = 0x400, 145 145 .features = MIXER_MSM8998_MASK, 146 146 .sblk = &sdm845_lm_sblk, 147 147 .lm_pair = LM_0, ··· 149 149 .dspp = DSPP_1, 150 150 }, { 151 151 .name = "lm_2", .id = LM_2, 152 - .base = 0x46000, .len = 0x320, 152 + .base = 0x46000, .len = 0x400, 153 153 .features = MIXER_MSM8998_MASK, 154 154 .sblk = &sdm845_lm_sblk, 155 155 .lm_pair = LM_3, ··· 157 157 .dspp = DSPP_2, 158 158 }, { 159 159 .name = "lm_3", .id = LM_3, 160 - .base = 0x47000, .len = 0x320, 160 + .base = 0x47000, .len = 0x400, 161 161 .features = MIXER_MSM8998_MASK, 162 162 .sblk = &sdm845_lm_sblk, 163 163 .lm_pair = LM_2, ··· 165 165 .dspp = DSPP_3, 166 166 }, { 167 167 .name = "lm_4", .id = LM_4, 168 - .base = 0x48000, .len = 0x320, 168 + .base = 0x48000, .len = 0x400, 169 169 .features = MIXER_MSM8998_MASK, 170 170 .sblk = &sdm845_lm_sblk, 171 171 .lm_pair = LM_5, 172 172 .pingpong = PINGPONG_4, 173 173 }, { 174 174 .name = "lm_5", .id = LM_5, 175 - .base = 0x49000, .len = 0x320, 175 + .base = 0x49000, .len = 0x400, 176 176 .features = MIXER_MSM8998_MASK, 177 177 .sblk = &sdm845_lm_sblk, 178 178 .lm_pair = LM_4,
+6 -6
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_8_1_sm8450.h
··· 134 134 static const struct dpu_lm_cfg sm8450_lm[] = { 135 135 { 136 136 .name = "lm_0", .id = LM_0, 137 - .base = 0x44000, .len = 0x320, 137 + .base = 0x44000, .len = 0x400, 138 138 .features = MIXER_MSM8998_MASK, 139 139 .sblk = &sdm845_lm_sblk, 140 140 .lm_pair = LM_1, ··· 142 142 .dspp = DSPP_0, 143 143 }, { 144 144 .name = "lm_1", .id = LM_1, 145 - .base = 0x45000, .len = 0x320, 145 + .base = 0x45000, .len = 0x400, 146 146 .features = MIXER_MSM8998_MASK, 147 147 .sblk = &sdm845_lm_sblk, 148 148 .lm_pair = LM_0, ··· 150 150 .dspp = DSPP_1, 151 151 }, { 152 152 .name = "lm_2", .id = LM_2, 153 - .base = 0x46000, .len = 0x320, 153 + .base = 0x46000, .len = 0x400, 154 154 .features = MIXER_MSM8998_MASK, 155 155 .sblk = &sdm845_lm_sblk, 156 156 .lm_pair = LM_3, ··· 158 158 .dspp = DSPP_2, 159 159 }, { 160 160 .name = "lm_3", .id = LM_3, 161 - .base = 0x47000, .len = 0x320, 161 + .base = 0x47000, .len = 0x400, 162 162 .features = MIXER_MSM8998_MASK, 163 163 .sblk = &sdm845_lm_sblk, 164 164 .lm_pair = LM_2, ··· 166 166 .dspp = DSPP_3, 167 167 }, { 168 168 .name = "lm_4", .id = LM_4, 169 - .base = 0x48000, .len = 0x320, 169 + .base = 0x48000, .len = 0x400, 170 170 .features = MIXER_MSM8998_MASK, 171 171 .sblk = &sdm845_lm_sblk, 172 172 .lm_pair = LM_5, 173 173 .pingpong = PINGPONG_4, 174 174 }, { 175 175 .name = "lm_5", .id = LM_5, 176 - .base = 0x49000, .len = 0x320, 176 + .base = 0x49000, .len = 0x400, 177 177 .features = MIXER_MSM8998_MASK, 178 178 .sblk = &sdm845_lm_sblk, 179 179 .lm_pair = LM_4,
+2 -2
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_8_4_sa8775p.h
··· 366 366 .type = INTF_NONE, 367 367 .controller_id = MSM_DP_CONTROLLER_0, /* pair with intf_0 for DP MST */ 368 368 .prog_fetch_lines_worst_case = 24, 369 - .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 17), 370 - .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 16), 369 + .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 16), 370 + .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 17), 371 371 }, { 372 372 .name = "intf_7", .id = INTF_7, 373 373 .base = 0x3b000, .len = 0x280,
+6 -6
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_9_0_sm8550.h
··· 131 131 static const struct dpu_lm_cfg sm8550_lm[] = { 132 132 { 133 133 .name = "lm_0", .id = LM_0, 134 - .base = 0x44000, .len = 0x320, 134 + .base = 0x44000, .len = 0x400, 135 135 .features = MIXER_MSM8998_MASK, 136 136 .sblk = &sdm845_lm_sblk, 137 137 .lm_pair = LM_1, ··· 139 139 .dspp = DSPP_0, 140 140 }, { 141 141 .name = "lm_1", .id = LM_1, 142 - .base = 0x45000, .len = 0x320, 142 + .base = 0x45000, .len = 0x400, 143 143 .features = MIXER_MSM8998_MASK, 144 144 .sblk = &sdm845_lm_sblk, 145 145 .lm_pair = LM_0, ··· 147 147 .dspp = DSPP_1, 148 148 }, { 149 149 .name = "lm_2", .id = LM_2, 150 - .base = 0x46000, .len = 0x320, 150 + .base = 0x46000, .len = 0x400, 151 151 .features = MIXER_MSM8998_MASK, 152 152 .sblk = &sdm845_lm_sblk, 153 153 .lm_pair = LM_3, ··· 155 155 .dspp = DSPP_2, 156 156 }, { 157 157 .name = "lm_3", .id = LM_3, 158 - .base = 0x47000, .len = 0x320, 158 + .base = 0x47000, .len = 0x400, 159 159 .features = MIXER_MSM8998_MASK, 160 160 .sblk = &sdm845_lm_sblk, 161 161 .lm_pair = LM_2, ··· 163 163 .dspp = DSPP_3, 164 164 }, { 165 165 .name = "lm_4", .id = LM_4, 166 - .base = 0x48000, .len = 0x320, 166 + .base = 0x48000, .len = 0x400, 167 167 .features = MIXER_MSM8998_MASK, 168 168 .sblk = &sdm845_lm_sblk, 169 169 .lm_pair = LM_5, 170 170 .pingpong = PINGPONG_4, 171 171 }, { 172 172 .name = "lm_5", .id = LM_5, 173 - .base = 0x49000, .len = 0x320, 173 + .base = 0x49000, .len = 0x400, 174 174 .features = MIXER_MSM8998_MASK, 175 175 .sblk = &sdm845_lm_sblk, 176 176 .lm_pair = LM_4,
+6 -6
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_9_1_sar2130p.h
··· 131 131 static const struct dpu_lm_cfg sar2130p_lm[] = { 132 132 { 133 133 .name = "lm_0", .id = LM_0, 134 - .base = 0x44000, .len = 0x320, 134 + .base = 0x44000, .len = 0x400, 135 135 .features = MIXER_MSM8998_MASK, 136 136 .sblk = &sdm845_lm_sblk, 137 137 .lm_pair = LM_1, ··· 139 139 .dspp = DSPP_0, 140 140 }, { 141 141 .name = "lm_1", .id = LM_1, 142 - .base = 0x45000, .len = 0x320, 142 + .base = 0x45000, .len = 0x400, 143 143 .features = MIXER_MSM8998_MASK, 144 144 .sblk = &sdm845_lm_sblk, 145 145 .lm_pair = LM_0, ··· 147 147 .dspp = DSPP_1, 148 148 }, { 149 149 .name = "lm_2", .id = LM_2, 150 - .base = 0x46000, .len = 0x320, 150 + .base = 0x46000, .len = 0x400, 151 151 .features = MIXER_MSM8998_MASK, 152 152 .sblk = &sdm845_lm_sblk, 153 153 .lm_pair = LM_3, ··· 155 155 .dspp = DSPP_2, 156 156 }, { 157 157 .name = "lm_3", .id = LM_3, 158 - .base = 0x47000, .len = 0x320, 158 + .base = 0x47000, .len = 0x400, 159 159 .features = MIXER_MSM8998_MASK, 160 160 .sblk = &sdm845_lm_sblk, 161 161 .lm_pair = LM_2, ··· 163 163 .dspp = DSPP_3, 164 164 }, { 165 165 .name = "lm_4", .id = LM_4, 166 - .base = 0x48000, .len = 0x320, 166 + .base = 0x48000, .len = 0x400, 167 167 .features = MIXER_MSM8998_MASK, 168 168 .sblk = &sdm845_lm_sblk, 169 169 .lm_pair = LM_5, 170 170 .pingpong = PINGPONG_4, 171 171 }, { 172 172 .name = "lm_5", .id = LM_5, 173 - .base = 0x49000, .len = 0x320, 173 + .base = 0x49000, .len = 0x400, 174 174 .features = MIXER_MSM8998_MASK, 175 175 .sblk = &sdm845_lm_sblk, 176 176 .lm_pair = LM_4,
+6 -6
drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_9_2_x1e80100.h
··· 130 130 static const struct dpu_lm_cfg x1e80100_lm[] = { 131 131 { 132 132 .name = "lm_0", .id = LM_0, 133 - .base = 0x44000, .len = 0x320, 133 + .base = 0x44000, .len = 0x400, 134 134 .features = MIXER_MSM8998_MASK, 135 135 .sblk = &sdm845_lm_sblk, 136 136 .lm_pair = LM_1, ··· 138 138 .dspp = DSPP_0, 139 139 }, { 140 140 .name = "lm_1", .id = LM_1, 141 - .base = 0x45000, .len = 0x320, 141 + .base = 0x45000, .len = 0x400, 142 142 .features = MIXER_MSM8998_MASK, 143 143 .sblk = &sdm845_lm_sblk, 144 144 .lm_pair = LM_0, ··· 146 146 .dspp = DSPP_1, 147 147 }, { 148 148 .name = "lm_2", .id = LM_2, 149 - .base = 0x46000, .len = 0x320, 149 + .base = 0x46000, .len = 0x400, 150 150 .features = MIXER_MSM8998_MASK, 151 151 .sblk = &sdm845_lm_sblk, 152 152 .lm_pair = LM_3, ··· 154 154 .dspp = DSPP_2, 155 155 }, { 156 156 .name = "lm_3", .id = LM_3, 157 - .base = 0x47000, .len = 0x320, 157 + .base = 0x47000, .len = 0x400, 158 158 .features = MIXER_MSM8998_MASK, 159 159 .sblk = &sdm845_lm_sblk, 160 160 .lm_pair = LM_2, ··· 162 162 .dspp = DSPP_3, 163 163 }, { 164 164 .name = "lm_4", .id = LM_4, 165 - .base = 0x48000, .len = 0x320, 165 + .base = 0x48000, .len = 0x400, 166 166 .features = MIXER_MSM8998_MASK, 167 167 .sblk = &sdm845_lm_sblk, 168 168 .lm_pair = LM_5, 169 169 .pingpong = PINGPONG_4, 170 170 }, { 171 171 .name = "lm_5", .id = LM_5, 172 - .base = 0x49000, .len = 0x320, 172 + .base = 0x49000, .len = 0x400, 173 173 .features = MIXER_MSM8998_MASK, 174 174 .sblk = &sdm845_lm_sblk, 175 175 .lm_pair = LM_4,
+1 -1
drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dspp.c
··· 89 89 base = ctx->cap->sblk->gc.base; 90 90 91 91 if (!base) { 92 - DRM_ERROR("invalid ctx %pK gc base\n", ctx); 92 + DRM_ERROR("invalid ctx %p gc base\n", ctx); 93 93 return; 94 94 } 95 95
+3 -1
drivers/gpu/drm/msm/disp/dpu1/dpu_hw_sspp_v13.c
··· 156 156 u8 color; 157 157 u32 lr_pe[4], tb_pe[4]; 158 158 const u32 bytemask = 0xff; 159 - u32 offset = ctx->cap->sblk->sspp_rec0_blk.base; 159 + u32 offset; 160 160 161 161 if (!ctx || !pe_ext) 162 162 return; 163 + 164 + offset = ctx->cap->sblk->sspp_rec0_blk.base; 163 165 164 166 c = &ctx->hw; 165 167 /* program SW pixel extension override for all pipes*/
+14 -38
drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c
··· 350 350 return true; 351 351 } 352 352 353 - static bool dpu_rm_find_lms(struct dpu_rm *rm, 354 - struct dpu_global_state *global_state, 355 - uint32_t crtc_id, bool skip_dspp, 356 - struct msm_display_topology *topology, 357 - int *lm_idx, int *pp_idx, int *dspp_idx) 353 + static int _dpu_rm_reserve_lms(struct dpu_rm *rm, 354 + struct dpu_global_state *global_state, 355 + uint32_t crtc_id, 356 + struct msm_display_topology *topology) 358 357 359 358 { 359 + int lm_idx[MAX_BLOCKS]; 360 + int pp_idx[MAX_BLOCKS]; 361 + int dspp_idx[MAX_BLOCKS] = {0}; 360 362 int i, lm_count = 0; 363 + 364 + if (!topology->num_lm) { 365 + DPU_ERROR("zero LMs in topology\n"); 366 + return -EINVAL; 367 + } 361 368 362 369 /* Find a primary mixer */ 363 370 for (i = 0; i < ARRAY_SIZE(rm->mixer_blks) && 364 371 lm_count < topology->num_lm; i++) { 365 372 if (!rm->mixer_blks[i]) 366 373 continue; 367 - 368 - if (skip_dspp && to_dpu_hw_mixer(rm->mixer_blks[i])->cap->dspp) { 369 - DPU_DEBUG("Skipping LM_%d, skipping LMs with DSPPs\n", i); 370 - continue; 371 - } 372 374 373 375 /* 374 376 * Reset lm_count to an even index. This will drop the previous ··· 410 408 } 411 409 } 412 410 413 - return lm_count == topology->num_lm; 414 - } 415 - 416 - static int _dpu_rm_reserve_lms(struct dpu_rm *rm, 417 - struct dpu_global_state *global_state, 418 - uint32_t crtc_id, 419 - struct msm_display_topology *topology) 420 - 421 - { 422 - int lm_idx[MAX_BLOCKS]; 423 - int pp_idx[MAX_BLOCKS]; 424 - int dspp_idx[MAX_BLOCKS] = {0}; 425 - int i; 426 - bool found; 427 - 428 - if (!topology->num_lm) { 429 - DPU_ERROR("zero LMs in topology\n"); 430 - return -EINVAL; 431 - } 432 - 433 - /* Try using non-DSPP LM blocks first */ 434 - found = dpu_rm_find_lms(rm, global_state, crtc_id, !topology->num_dspp, 435 - topology, lm_idx, pp_idx, dspp_idx); 436 - if (!found && !topology->num_dspp) 437 - found = dpu_rm_find_lms(rm, global_state, crtc_id, false, 438 - topology, lm_idx, pp_idx, dspp_idx); 439 - if (!found) { 411 + if (lm_count != topology->num_lm) { 440 412 DPU_DEBUG("unable to find appropriate mixers\n"); 441 413 return -ENAVAIL; 442 414 } 443 415 444 - for (i = 0; i < topology->num_lm; i++) { 416 + for (i = 0; i < lm_count; i++) { 445 417 global_state->mixer_to_crtc_id[lm_idx[i]] = crtc_id; 446 418 global_state->pingpong_to_crtc_id[pp_idx[i]] = crtc_id; 447 419 global_state->dspp_to_crtc_id[dspp_idx[i]] =
+31 -12
drivers/gpu/drm/msm/dsi/dsi_host.c
··· 584 584 * FIXME: Reconsider this if/when CMD mode handling is rewritten to use 585 585 * transfer time and data overhead as a starting point of the calculations. 586 586 */ 587 - static unsigned long dsi_adjust_pclk_for_compression(const struct drm_display_mode *mode, 588 - const struct drm_dsc_config *dsc) 587 + static unsigned long 588 + dsi_adjust_pclk_for_compression(const struct drm_display_mode *mode, 589 + const struct drm_dsc_config *dsc, 590 + bool is_bonded_dsi) 589 591 { 590 - int new_hdisplay = DIV_ROUND_UP(mode->hdisplay * drm_dsc_get_bpp_int(dsc), 591 - dsc->bits_per_component * 3); 592 + int hdisplay, new_hdisplay, new_htotal; 592 593 593 - int new_htotal = mode->htotal - mode->hdisplay + new_hdisplay; 594 + /* 595 + * For bonded DSI, split hdisplay across two links and round up each 596 + * half separately, passing the full hdisplay would only round up once. 597 + * This also aligns with the hdisplay we program later in 598 + * dsi_timing_setup() 599 + */ 600 + hdisplay = mode->hdisplay; 601 + if (is_bonded_dsi) 602 + hdisplay /= 2; 603 + 604 + new_hdisplay = DIV_ROUND_UP(hdisplay * drm_dsc_get_bpp_int(dsc), 605 + dsc->bits_per_component * 3); 606 + 607 + if (is_bonded_dsi) 608 + new_hdisplay *= 2; 609 + 610 + new_htotal = mode->htotal - mode->hdisplay + new_hdisplay; 594 611 595 612 return mult_frac(mode->clock * 1000u, new_htotal, mode->htotal); 596 613 } ··· 620 603 pclk_rate = mode->clock * 1000u; 621 604 622 605 if (dsc) 623 - pclk_rate = dsi_adjust_pclk_for_compression(mode, dsc); 606 + pclk_rate = dsi_adjust_pclk_for_compression(mode, dsc, is_bonded_dsi); 624 607 625 608 /* 626 609 * For bonded DSI mode, the current DRM mode has the complete width of the ··· 1010 993 1011 994 if (msm_host->dsc) { 1012 995 struct drm_dsc_config *dsc = msm_host->dsc; 1013 - u32 bytes_per_pclk; 996 + u32 bits_per_pclk; 1014 997 1015 998 /* update dsc params with timing params */ 1016 999 if (!dsc || !mode->hdisplay || !mode->vdisplay) { ··· 1032 1015 1033 1016 /* 1034 1017 * DPU sends 3 bytes per pclk cycle to DSI. If widebus is 1035 - * enabled, bus width is extended to 6 bytes. 1018 + * enabled, MDP always sends out 48-bit compressed data per 1019 + * pclk and on average, DSI consumes an amount of compressed 1020 + * data equivalent to the uncompressed pixel depth per pclk. 1036 1021 * 1037 1022 * Calculate the number of pclks needed to transmit one line of 1038 1023 * the compressed data. ··· 1046 1027 * unused anyway. 1047 1028 */ 1048 1029 h_total -= hdisplay; 1049 - if (wide_bus_enabled && !(msm_host->mode_flags & MIPI_DSI_MODE_VIDEO)) 1050 - bytes_per_pclk = 6; 1030 + if (wide_bus_enabled) 1031 + bits_per_pclk = mipi_dsi_pixel_format_to_bpp(msm_host->format); 1051 1032 else 1052 - bytes_per_pclk = 3; 1033 + bits_per_pclk = 24; 1053 1034 1054 - hdisplay = DIV_ROUND_UP(msm_dsc_get_bytes_per_line(msm_host->dsc), bytes_per_pclk); 1035 + hdisplay = DIV_ROUND_UP(msm_dsc_get_bytes_per_line(msm_host->dsc) * 8, bits_per_pclk); 1055 1036 1056 1037 h_total += hdisplay; 1057 1038 ha_end = ha_start + hdisplay;
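
Worked numbers for the bonded-DSI rounding above: with 8 bits per component, three components and a compressed bpp of 8, new_hdisplay is DIV_ROUND_UP(h * 8, 24). Rounding each half-link separately can differ from rounding the whole width once, which is exactly the mismatch with dsi_timing_setup() this fixes:

    #define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))	/* as in <linux/math.h> */

    /* Illustrative: hdisplay = 1082, so each bonded link carries 541 px. */
    int whole  = DIV_ROUND_UP(1082 * 8, 24);	/* 361 pclk cycles      */
    int bonded = DIV_ROUND_UP(541 * 8, 24) * 2;	/* 181 * 2 = 362 cycles */
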
+11 -11
drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c
··· 51 51 #define DSI_PHY_7NM_QUIRK_V4_3 BIT(3) 52 52 /* Hardware is V5.2 */ 53 53 #define DSI_PHY_7NM_QUIRK_V5_2 BIT(4) 54 - /* Hardware is V7.0 */ 55 - #define DSI_PHY_7NM_QUIRK_V7_0 BIT(5) 54 + /* Hardware is V7.2 */ 55 + #define DSI_PHY_7NM_QUIRK_V7_2 BIT(5) 56 56 57 57 struct dsi_pll_config { 58 58 bool enable_ssc; ··· 143 143 144 144 if (pll->phy->cfg->quirks & DSI_PHY_7NM_QUIRK_PRE_V4_1) { 145 145 config->pll_clock_inverters = 0x28; 146 - } else if ((pll->phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_0)) { 146 + } else if ((pll->phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_2)) { 147 147 if (pll_freq < 163000000ULL) 148 148 config->pll_clock_inverters = 0xa0; 149 149 else if (pll_freq < 175000000ULL) ··· 284 284 } 285 285 286 286 if ((pll->phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V5_2) || 287 - (pll->phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_0)) { 287 + (pll->phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_2)) { 288 288 if (pll->vco_current_rate < 1557000000ULL) 289 289 vco_config_1 = 0x08; 290 290 else ··· 699 699 case MSM_DSI_PHY_MASTER: 700 700 pll_7nm->slave = pll_7nm_list[(pll_7nm->phy->id + 1) % DSI_MAX]; 701 701 /* v7.0: Enable ATB_EN0 and alternate clock output to external phy */ 702 - if (phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_0) 702 + if (phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_2) 703 703 writel(0x07, base + REG_DSI_7nm_PHY_CMN_CTRL_5); 704 704 break; 705 705 case MSM_DSI_PHY_SLAVE: ··· 987 987 /* Request for REFGEN READY */ 988 988 if ((phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V4_3) || 989 989 (phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V5_2) || 990 - (phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_0)) { 990 + (phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_2)) { 991 991 writel(0x1, phy->base + REG_DSI_7nm_PHY_CMN_GLBL_DIGTOP_SPARE10); 992 992 udelay(500); 993 993 } ··· 1021 1021 lane_ctrl0 = 0x1f; 1022 1022 } 1023 1023 1024 - if ((phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_0)) { 1024 + if ((phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_2)) { 1025 1025 if (phy->cphy_mode) { 1026 1026 /* TODO: different for second phy */ 1027 1027 vreg_ctrl_0 = 0x57; ··· 1097 1097 1098 1098 /* program CMN_CTRL_4 for minor_ver 2 chipsets*/ 1099 1099 if ((phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V5_2) || 1100 - (phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_0) || 1100 + (phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_2) || 1101 1101 (readl(base + REG_DSI_7nm_PHY_CMN_REVISION_ID0) & (0xf0)) == 0x20) 1102 1102 writel(0x04, base + REG_DSI_7nm_PHY_CMN_CTRL_4); 1103 1103 ··· 1213 1213 /* Turn off REFGEN Vote */ 1214 1214 if ((phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V4_3) || 1215 1215 (phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V5_2) || 1216 - (phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_0)) { 1216 + (phy->cfg->quirks & DSI_PHY_7NM_QUIRK_V7_2)) { 1217 1217 writel(0x0, base + REG_DSI_7nm_PHY_CMN_GLBL_DIGTOP_SPARE10); 1218 1218 wmb(); 1219 1219 /* Delay to ensure HW removes vote before PHY shut down */ ··· 1502 1502 #endif 1503 1503 .io_start = { 0xae95000, 0xae97000 }, 1504 1504 .num_dsi_phy = 2, 1505 - .quirks = DSI_PHY_7NM_QUIRK_V7_0, 1505 + .quirks = DSI_PHY_7NM_QUIRK_V7_2, 1506 1506 }; 1507 1507 1508 1508 const struct msm_dsi_phy_cfg dsi_phy_3nm_kaanapali_cfgs = { ··· 1525 1525 #endif 1526 1526 .io_start = { 0x9ac1000, 0x9ac4000 }, 1527 1527 .num_dsi_phy = 2, 1528 - .quirks = DSI_PHY_7NM_QUIRK_V7_0, 1528 + .quirks = DSI_PHY_7NM_QUIRK_V7_2, 1529 1529 };
+6 -9
drivers/gpu/drm/sitronix/st7586.c
··· 347 347 if (ret) 348 348 return ret; 349 349 350 + /* 351 + * Override value set by mipi_dbi_spi_init(). This driver is a bit 352 + * non-standard, so best to set it explicitly here. 353 + */ 354 + dbi->write_memory_bpw = 8; 355 + 350 356 /* Cannot read from this controller via SPI */ 351 357 dbi->read_commands = NULL; 352 358 ··· 361 355 &st7586_mode, rotation, bufsize); 362 356 if (ret) 363 357 return ret; 364 - 365 - /* 366 - * we are using 8-bit data, so we are not actually swapping anything, 367 - * but setting mipi->swap_bytes makes mipi_dbi_typec3_command() do the 368 - * right thing and not use 16-bit transfers (which results in swapped 369 - * bytes on little-endian systems and causes out of order data to be 370 - * sent to the display). 371 - */ 372 - dbi->swap_bytes = true; 373 358 374 359 drm_mode_config_reset(drm); 375 360
+24 -22
drivers/gpu/nova-core/gsp.rs
··· 47 47 unsafe impl<const NUM_ENTRIES: usize> AsBytes for PteArray<NUM_ENTRIES> {} 48 48 49 49 impl<const NUM_PAGES: usize> PteArray<NUM_PAGES> { 50 - /// Creates a new page table array mapping `NUM_PAGES` GSP pages starting at address `start`. 51 - fn new(start: DmaAddress) -> Result<Self> { 52 - let mut ptes = [0u64; NUM_PAGES]; 53 - for (i, pte) in ptes.iter_mut().enumerate() { 54 - *pte = start 55 - .checked_add(num::usize_as_u64(i) << GSP_PAGE_SHIFT) 56 - .ok_or(EOVERFLOW)?; 57 - } 58 - 59 - Ok(Self(ptes)) 50 + /// Returns the page table entry for `index`, for a mapping starting at `start`. 51 + // TODO: Replace with `IoView` projection once available. 52 + fn entry(start: DmaAddress, index: usize) -> Result<u64> { 53 + start 54 + .checked_add(num::usize_as_u64(index) << GSP_PAGE_SHIFT) 55 + .ok_or(EOVERFLOW) 60 56 } 61 57 } 62 58 ··· 82 86 NUM_PAGES * GSP_PAGE_SIZE, 83 87 GFP_KERNEL | __GFP_ZERO, 84 88 )?); 85 - let ptes = PteArray::<NUM_PAGES>::new(obj.0.dma_handle())?; 89 + 90 + let start_addr = obj.0.dma_handle(); 86 91 87 92 // SAFETY: `obj` has just been created and we are its sole user. 88 - unsafe { 89 - // Copy the self-mapping PTE at the expected location. 93 + let pte_region = unsafe { 90 94 obj.0 91 - .as_slice_mut(size_of::<u64>(), size_of_val(&ptes))? 92 - .copy_from_slice(ptes.as_bytes()) 95 + .as_slice_mut(size_of::<u64>(), NUM_PAGES * size_of::<u64>())? 93 96 }; 97 + 98 + // Write values one by one to avoid an on-stack instance of `PteArray`. 99 + for (i, chunk) in pte_region.chunks_exact_mut(size_of::<u64>()).enumerate() { 100 + let pte_value = PteArray::<0>::entry(start_addr, i)?; 101 + 102 + chunk.copy_from_slice(&pte_value.to_ne_bytes()); 103 + } 94 104 95 105 Ok(obj) 96 106 } ··· 145 143 // _kgspInitLibosLoggingStructures (allocates memory for buffers) 146 144 // kgspSetupLibosInitArgs_IMPL (creates pLibosInitArgs[] array) 147 145 dma_write!( 148 - libos[0] = LibosMemoryRegionInitArgument::new("LOGINIT", &loginit.0) 149 - )?; 146 + libos, [0]?, LibosMemoryRegionInitArgument::new("LOGINIT", &loginit.0) 147 + ); 150 148 dma_write!( 151 - libos[1] = LibosMemoryRegionInitArgument::new("LOGINTR", &logintr.0) 152 - )?; 153 - dma_write!(libos[2] = LibosMemoryRegionInitArgument::new("LOGRM", &logrm.0))?; 154 - dma_write!(rmargs[0].inner = fw::GspArgumentsCached::new(cmdq))?; 155 - dma_write!(libos[3] = LibosMemoryRegionInitArgument::new("RMARGS", rmargs))?; 149 + libos, [1]?, LibosMemoryRegionInitArgument::new("LOGINTR", &logintr.0) 150 + ); 151 + dma_write!(libos, [2]?, LibosMemoryRegionInitArgument::new("LOGRM", &logrm.0)); 152 + dma_write!(rmargs, [0]?.inner, fw::GspArgumentsCached::new(cmdq)); 153 + dma_write!(libos, [3]?, LibosMemoryRegionInitArgument::new("RMARGS", rmargs)); 156 154 }, 157 155 })) 158 156 })
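
The stack-overflow fix stops materializing a whole PteArray (a full GSP page of u64 entries, 4 KiB) on the stack and instead writes each PTE straight into the coherent buffer. The same idea rendered in C, as a hedged sketch:

    #include <linux/types.h>

    #define GSP_PAGE_SHIFT	12
    #define NUM_PTES	(4096 / sizeof(u64))

    /* 'ptes' points into the DMA-coherent buffer, so no page-sized
     * temporary ever lives on the kernel stack. */
    static void write_ptes(u64 *ptes, u64 start)
    {
    	size_t i;

    	for (i = 0; i < NUM_PTES; i++)
    		ptes[i] = start + ((u64)i << GSP_PAGE_SHIFT);
    }
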
+1 -1
drivers/gpu/nova-core/gsp/boot.rs
··· 157 157 158 158 let wpr_meta = 159 159 CoherentAllocation::<GspFwWprMeta>::alloc_coherent(dev, 1, GFP_KERNEL | __GFP_ZERO)?; 160 - dma_write!(wpr_meta[0] = GspFwWprMeta::new(&gsp_fw, &fb_layout))?; 160 + dma_write!(wpr_meta, [0]?, GspFwWprMeta::new(&gsp_fw, &fb_layout)); 161 161 162 162 self.cmdq 163 163 .send_command(bar, commands::SetSystemInfo::new(pdev))?;
+33 -60
drivers/gpu/nova-core/gsp/cmdq.rs
··· 2 2 3 3 use core::{ 4 4 cmp, 5 - mem, 6 - sync::atomic::{ 7 - fence, 8 - Ordering, // 9 - }, // 5 + mem, // 10 6 }; 11 7 12 8 use kernel::{ ··· 142 146 #[repr(C)] 143 147 // There is no struct defined for this in the open-gpu-kernel-source headers. 144 148 // Instead it is defined by code in `GspMsgQueuesInit()`. 145 - struct Msgq { 149 + // TODO: Revert to private once `IoView` projections replace the `gsp_mem` module. 150 + pub(super) struct Msgq { 146 151 /// Header for sending messages, including the write pointer. 147 - tx: MsgqTxHeader, 152 + pub(super) tx: MsgqTxHeader, 148 153 /// Header for receiving messages, including the read pointer. 149 - rx: MsgqRxHeader, 154 + pub(super) rx: MsgqRxHeader, 150 155 /// The message queue proper. 151 156 msgq: MsgqData, 152 157 } 153 158 154 159 /// Structure shared between the driver and the GSP and containing the command and message queues. 155 160 #[repr(C)] 156 - struct GspMem { 161 + // TODO: Revert to private once `IoView` projections replace the `gsp_mem` module. 162 + pub(super) struct GspMem { 157 163 /// Self-mapping page table entries. 158 - ptes: PteArray<{ GSP_PAGE_SIZE / size_of::<u64>() }>, 164 + ptes: PteArray<{ Self::PTE_ARRAY_SIZE }>, 159 165 /// CPU queue: the driver writes commands here, and the GSP reads them. It also contains the 160 166 /// write and read pointers that the CPU updates. 161 167 /// 162 168 /// This member is read-only for the GSP. 163 - cpuq: Msgq, 169 + pub(super) cpuq: Msgq, 164 170 /// GSP queue: the GSP writes messages here, and the driver reads them. It also contains the 165 171 /// write and read pointers that the GSP updates. 166 172 /// 167 173 /// This member is read-only for the driver. 168 - gspq: Msgq, 174 + pub(super) gspq: Msgq, 175 + } 176 + 177 + impl GspMem { 178 + const PTE_ARRAY_SIZE: usize = GSP_PAGE_SIZE / size_of::<u64>(); 169 179 } 170 180 171 181 // SAFETY: These structs don't meet the no-padding requirements of AsBytes but ··· 203 201 204 202 let gsp_mem = 205 203 CoherentAllocation::<GspMem>::alloc_coherent(dev, 1, GFP_KERNEL | __GFP_ZERO)?; 206 - dma_write!(gsp_mem[0].ptes = PteArray::new(gsp_mem.dma_handle())?)?; 207 - dma_write!(gsp_mem[0].cpuq.tx = MsgqTxHeader::new(MSGQ_SIZE, RX_HDR_OFF, MSGQ_NUM_PAGES))?; 208 - dma_write!(gsp_mem[0].cpuq.rx = MsgqRxHeader::new())?; 204 + 205 + let start = gsp_mem.dma_handle(); 206 + // Write values one by one to avoid an on-stack instance of `PteArray`. 207 + for i in 0..GspMem::PTE_ARRAY_SIZE { 208 + dma_write!(gsp_mem, [0]?.ptes.0[i], PteArray::<0>::entry(start, i)?); 209 + } 210 + 211 + dma_write!( 212 + gsp_mem, 213 + [0]?.cpuq.tx, 214 + MsgqTxHeader::new(MSGQ_SIZE, RX_HDR_OFF, MSGQ_NUM_PAGES) 215 + ); 216 + dma_write!(gsp_mem, [0]?.cpuq.rx, MsgqRxHeader::new()); 209 217 210 218 Ok(Self(gsp_mem)) 211 219 } ··· 329 317 // 330 318 // - The returned value is between `0` and `MSGQ_NUM_PAGES`. 331 319 fn gsp_write_ptr(&self) -> u32 { 332 - let gsp_mem = self.0.start_ptr(); 333 - 334 - // SAFETY: 335 - // - The 'CoherentAllocation' contains at least one object. 336 - // - By the invariants of `CoherentAllocation` the pointer is valid. 337 - (unsafe { (*gsp_mem).gspq.tx.write_ptr() } % MSGQ_NUM_PAGES) 320 + super::fw::gsp_mem::gsp_write_ptr(&self.0) 338 321 } 339 322 340 323 // Returns the index of the memory page the GSP will read the next command from. ··· 338 331 // 339 332 // - The returned value is between `0` and `MSGQ_NUM_PAGES`. 
340 333 fn gsp_read_ptr(&self) -> u32 {
341 - let gsp_mem = self.0.start_ptr();
342 -
343 - // SAFETY:
344 - // - The 'CoherentAllocation' contains at least one object.
345 - // - By the invariants of `CoherentAllocation` the pointer is valid.
346 - (unsafe { (*gsp_mem).gspq.rx.read_ptr() } % MSGQ_NUM_PAGES)
334 + super::fw::gsp_mem::gsp_read_ptr(&self.0)
347 335 }
348 336
349 337 // Returns the index of the memory page the CPU can read the next message from.
··· 347 345 //
348 346 // - The returned value is between `0` and `MSGQ_NUM_PAGES`.
349 347 fn cpu_read_ptr(&self) -> u32 {
350 - let gsp_mem = self.0.start_ptr();
351 -
352 - // SAFETY:
353 - // - The ['CoherentAllocation'] contains at least one object.
354 - // - By the invariants of CoherentAllocation the pointer is valid.
355 - (unsafe { (*gsp_mem).cpuq.rx.read_ptr() } % MSGQ_NUM_PAGES)
348 + super::fw::gsp_mem::cpu_read_ptr(&self.0)
356 349 }
357 350
358 351 // Informs the GSP that it can send `elem_count` new pages into the message queue.
359 352 fn advance_cpu_read_ptr(&mut self, elem_count: u32) {
360 - let rptr = self.cpu_read_ptr().wrapping_add(elem_count) % MSGQ_NUM_PAGES;
361 -
362 - // Ensure read pointer is properly ordered.
363 - fence(Ordering::SeqCst);
364 -
365 - let gsp_mem = self.0.start_ptr_mut();
366 -
367 - // SAFETY:
368 - // - The 'CoherentAllocation' contains at least one object.
369 - // - By the invariants of `CoherentAllocation` the pointer is valid.
370 - unsafe { (*gsp_mem).cpuq.rx.set_read_ptr(rptr) };
353 + super::fw::gsp_mem::advance_cpu_read_ptr(&self.0, elem_count)
371 354 }
372 355
373 356 // Returns the index of the memory page the CPU can write the next command to.
··· 361 374 //
362 375 // - The returned value is between `0` and `MSGQ_NUM_PAGES`.
363 376 fn cpu_write_ptr(&self) -> u32 {
364 - let gsp_mem = self.0.start_ptr();
365 -
366 - // SAFETY:
367 - // - The 'CoherentAllocation' contains at least one object.
368 - // - By the invariants of `CoherentAllocation` the pointer is valid.
369 - (unsafe { (*gsp_mem).cpuq.tx.write_ptr() } % MSGQ_NUM_PAGES)
377 + super::fw::gsp_mem::cpu_write_ptr(&self.0)
370 378 }
371 379
372 380 // Informs the GSP that it can process `elem_count` new pages from the command queue.
373 381 fn advance_cpu_write_ptr(&mut self, elem_count: u32) {
374 - let wptr = self.cpu_write_ptr().wrapping_add(elem_count) % MSGQ_NUM_PAGES;
375 - let gsp_mem = self.0.start_ptr_mut();
376 -
377 - // SAFETY:
378 - // - The 'CoherentAllocation' contains at least one object.
379 - // - By the invariants of `CoherentAllocation` the pointer is valid.
380 - unsafe { (*gsp_mem).cpuq.tx.set_write_ptr(wptr) };
381 -
382 - // Ensure all command data is visible before triggering the GSP read.
383 - fence(Ordering::SeqCst);
382 + super::fw::gsp_mem::advance_cpu_write_ptr(&self.0, elem_count)
384 383 }
385 384 }
386 385
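The helpers these methods now delegate to keep the usual page-granular ring-buffer discipline: pointers grow with wrapping addition and every consumer reduces them modulo the page count. A standalone sketch of that arithmetic, with `NUM_PAGES` as an assumed stand-in for `MSGQ_NUM_PAGES`:

```rust
const NUM_PAGES: u32 = 128; // illustrative ring size

struct Ring {
    write_ptr: u32,
    read_ptr: u32,
}

impl Ring {
    // Pages available to read: distance from read to write, mod ring size.
    fn readable(&self) -> u32 {
        self.write_ptr.wrapping_sub(self.read_ptr) % NUM_PAGES
    }

    // Mirrors `advance_cpu_read_ptr`: wrapping add, then reduce into range.
    fn advance_read(&mut self, pages: u32) {
        self.read_ptr = self.read_ptr.wrapping_add(pages) % NUM_PAGES;
    }
}

fn main() {
    let mut r = Ring { write_ptr: 3, read_ptr: NUM_PAGES - 2 };
    assert_eq!(r.readable(), 5); // wraps across the end of the ring
    r.advance_read(5);
    assert_eq!(r.read_ptr, 3);
}
```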
+69 -32
drivers/gpu/nova-core/gsp/fw.rs
··· 40 40 }, 41 41 }; 42 42 43 + // TODO: Replace with `IoView` projections once available; the `unwrap()` calls go away once we 44 + // switch to the new `dma::Coherent` API. 45 + pub(super) mod gsp_mem { 46 + use core::sync::atomic::{ 47 + fence, 48 + Ordering, // 49 + }; 50 + 51 + use kernel::{ 52 + dma::CoherentAllocation, 53 + dma_read, 54 + dma_write, 55 + prelude::*, // 56 + }; 57 + 58 + use crate::gsp::cmdq::{ 59 + GspMem, 60 + MSGQ_NUM_PAGES, // 61 + }; 62 + 63 + pub(in crate::gsp) fn gsp_write_ptr(qs: &CoherentAllocation<GspMem>) -> u32 { 64 + // PANIC: A `dma::CoherentAllocation` always contains at least one element. 65 + || -> Result<u32> { Ok(dma_read!(qs, [0]?.gspq.tx.0.writePtr) % MSGQ_NUM_PAGES) }().unwrap() 66 + } 67 + 68 + pub(in crate::gsp) fn gsp_read_ptr(qs: &CoherentAllocation<GspMem>) -> u32 { 69 + // PANIC: A `dma::CoherentAllocation` always contains at least one element. 70 + || -> Result<u32> { Ok(dma_read!(qs, [0]?.gspq.rx.0.readPtr) % MSGQ_NUM_PAGES) }().unwrap() 71 + } 72 + 73 + pub(in crate::gsp) fn cpu_read_ptr(qs: &CoherentAllocation<GspMem>) -> u32 { 74 + // PANIC: A `dma::CoherentAllocation` always contains at least one element. 75 + || -> Result<u32> { Ok(dma_read!(qs, [0]?.cpuq.rx.0.readPtr) % MSGQ_NUM_PAGES) }().unwrap() 76 + } 77 + 78 + pub(in crate::gsp) fn advance_cpu_read_ptr(qs: &CoherentAllocation<GspMem>, count: u32) { 79 + let rptr = cpu_read_ptr(qs).wrapping_add(count) % MSGQ_NUM_PAGES; 80 + 81 + // Ensure read pointer is properly ordered. 82 + fence(Ordering::SeqCst); 83 + 84 + // PANIC: A `dma::CoherentAllocation` always contains at least one element. 85 + || -> Result { 86 + dma_write!(qs, [0]?.cpuq.rx.0.readPtr, rptr); 87 + Ok(()) 88 + }() 89 + .unwrap() 90 + } 91 + 92 + pub(in crate::gsp) fn cpu_write_ptr(qs: &CoherentAllocation<GspMem>) -> u32 { 93 + // PANIC: A `dma::CoherentAllocation` always contains at least one element. 94 + || -> Result<u32> { Ok(dma_read!(qs, [0]?.cpuq.tx.0.writePtr) % MSGQ_NUM_PAGES) }().unwrap() 95 + } 96 + 97 + pub(in crate::gsp) fn advance_cpu_write_ptr(qs: &CoherentAllocation<GspMem>, count: u32) { 98 + let wptr = cpu_write_ptr(qs).wrapping_add(count) % MSGQ_NUM_PAGES; 99 + 100 + // PANIC: A `dma::CoherentAllocation` always contains at least one element. 101 + || -> Result { 102 + dma_write!(qs, [0]?.cpuq.tx.0.writePtr, wptr); 103 + Ok(()) 104 + }() 105 + .unwrap(); 106 + 107 + // Ensure all command data is visible before triggering the GSP read. 108 + fence(Ordering::SeqCst); 109 + } 110 + } 111 + 43 112 /// Empty type to group methods related to heap parameters for running the GSP firmware. 44 113 enum GspFwHeapParams {} 45 114 ··· 777 708 entryOff: num::usize_into_u32::<GSP_PAGE_SIZE>(), 778 709 }) 779 710 } 780 - 781 - /// Returns the value of the write pointer for this queue. 782 - pub(crate) fn write_ptr(&self) -> u32 { 783 - let ptr = core::ptr::from_ref(&self.0.writePtr); 784 - 785 - // SAFETY: `ptr` is a valid pointer to a `u32`. 786 - unsafe { ptr.read_volatile() } 787 - } 788 - 789 - /// Sets the value of the write pointer for this queue. 790 - pub(crate) fn set_write_ptr(&mut self, val: u32) { 791 - let ptr = core::ptr::from_mut(&mut self.0.writePtr); 792 - 793 - // SAFETY: `ptr` is a valid pointer to a `u32`. 794 - unsafe { ptr.write_volatile(val) } 795 - } 796 711 } 797 712 798 713 // SAFETY: Padding is explicit and does not contain uninitialized data. ··· 791 738 /// Creates a new RX queue header. 
792 739 pub(crate) fn new() -> Self { 793 740 Self(Default::default()) 794 - } 795 - 796 - /// Returns the value of the read pointer for this queue. 797 - pub(crate) fn read_ptr(&self) -> u32 { 798 - let ptr = core::ptr::from_ref(&self.0.readPtr); 799 - 800 - // SAFETY: `ptr` is a valid pointer to a `u32`. 801 - unsafe { ptr.read_volatile() } 802 - } 803 - 804 - /// Sets the value of the read pointer for this queue. 805 - pub(crate) fn set_read_ptr(&mut self, val: u32) { 806 - let ptr = core::ptr::from_mut(&mut self.0.readPtr); 807 - 808 - // SAFETY: `ptr` is a valid pointer to a `u32`. 809 - unsafe { ptr.write_volatile(val) } 810 741 } 811 742 } 812 743
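The fence placement in the `gsp_mem` module follows the usual producer/consumer split: the consumer fences before publishing its read pointer, while the producer fences after storing its write pointer and before notifying the peer. A simplified standalone model using `std` atomics (the doorbell closure is a stand-in for the MMIO notification performed elsewhere in the driver, not something from this patch):

```rust
use std::sync::atomic::{fence, AtomicU32, Ordering};

static READ_PTR: AtomicU32 = AtomicU32::new(0);
static WRITE_PTR: AtomicU32 = AtomicU32::new(0);

// Consumer side: release the pages only after they have been consumed.
fn advance_read(pages: u32, num_pages: u32) {
    let rptr = READ_PTR.load(Ordering::Relaxed).wrapping_add(pages) % num_pages;
    fence(Ordering::SeqCst); // order consumption before publishing rptr
    READ_PTR.store(rptr, Ordering::Relaxed);
}

// Producer side: make command data and pointer visible, then notify.
fn advance_write(pages: u32, num_pages: u32, ring_doorbell: impl Fn()) {
    let wptr = WRITE_PTR.load(Ordering::Relaxed).wrapping_add(pages) % num_pages;
    WRITE_PTR.store(wptr, Ordering::Relaxed);
    fence(Ordering::SeqCst); // everything above is visible before the kick
    ring_doorbell(); // stand-in for the MMIO doorbell
}

fn main() {
    advance_write(2, 128, || println!("doorbell"));
    advance_read(2, 128);
    assert_eq!(WRITE_PTR.load(Ordering::Relaxed), 2);
    assert_eq!(READ_PTR.load(Ordering::Relaxed), 2);
}
```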
+50 -64
rust/kernel/dma.rs
··· 461 461 self.count * core::mem::size_of::<T>() 462 462 } 463 463 464 + /// Returns the raw pointer to the allocated region in the CPU's virtual address space. 465 + #[inline] 466 + pub fn as_ptr(&self) -> *const [T] { 467 + core::ptr::slice_from_raw_parts(self.cpu_addr.as_ptr(), self.count) 468 + } 469 + 470 + /// Returns the raw pointer to the allocated region in the CPU's virtual address space as 471 + /// a mutable pointer. 472 + #[inline] 473 + pub fn as_mut_ptr(&self) -> *mut [T] { 474 + core::ptr::slice_from_raw_parts_mut(self.cpu_addr.as_ptr(), self.count) 475 + } 476 + 464 477 /// Returns the base address to the allocated region in the CPU's virtual address space. 465 478 pub fn start_ptr(&self) -> *const T { 466 479 self.cpu_addr.as_ptr() ··· 594 581 Ok(()) 595 582 } 596 583 597 - /// Returns a pointer to an element from the region with bounds checking. `offset` is in 598 - /// units of `T`, not the number of bytes. 599 - /// 600 - /// Public but hidden since it should only be used from [`dma_read`] and [`dma_write`] macros. 601 - #[doc(hidden)] 602 - pub fn item_from_index(&self, offset: usize) -> Result<*mut T> { 603 - if offset >= self.count { 604 - return Err(EINVAL); 605 - } 606 - // SAFETY: 607 - // - The pointer is valid due to type invariant on `CoherentAllocation` 608 - // and we've just checked that the range and index is within bounds. 609 - // - `offset` can't overflow since it is smaller than `self.count` and we've checked 610 - // that `self.count` won't overflow early in the constructor. 611 - Ok(unsafe { self.cpu_addr.as_ptr().add(offset) }) 612 - } 613 - 614 584 /// Reads the value of `field` and ensures that its type is [`FromBytes`]. 615 585 /// 616 586 /// # Safety ··· 666 670 667 671 /// Reads a field of an item from an allocated region of structs. 668 672 /// 673 + /// The syntax is of the form `kernel::dma_read!(dma, proj)` where `dma` is an expression evaluating 674 + /// to a [`CoherentAllocation`] and `proj` is a [projection specification](kernel::ptr::project!). 675 + /// 669 676 /// # Examples 670 677 /// 671 678 /// ``` ··· 683 684 /// unsafe impl kernel::transmute::AsBytes for MyStruct{}; 684 685 /// 685 686 /// # fn test(alloc: &kernel::dma::CoherentAllocation<MyStruct>) -> Result { 686 - /// let whole = kernel::dma_read!(alloc[2]); 687 - /// let field = kernel::dma_read!(alloc[1].field); 687 + /// let whole = kernel::dma_read!(alloc, [2]?); 688 + /// let field = kernel::dma_read!(alloc, [1]?.field); 688 689 /// # Ok::<(), Error>(()) } 689 690 /// ``` 690 691 #[macro_export] 691 692 macro_rules! dma_read { 692 - ($dma:expr, $idx: expr, $($field:tt)*) => {{ 693 - (|| -> ::core::result::Result<_, $crate::error::Error> { 694 - let item = $crate::dma::CoherentAllocation::item_from_index(&$dma, $idx)?; 695 - // SAFETY: `item_from_index` ensures that `item` is always a valid pointer and can be 696 - // dereferenced. The compiler also further validates the expression on whether `field` 697 - // is a member of `item` when expanded by the macro. 698 - unsafe { 699 - let ptr_field = ::core::ptr::addr_of!((*item) $($field)*); 700 - ::core::result::Result::Ok( 701 - $crate::dma::CoherentAllocation::field_read(&$dma, ptr_field) 702 - ) 703 - } 704 - })() 693 + ($dma:expr, $($proj:tt)*) => {{ 694 + let dma = &$dma; 695 + let ptr = $crate::ptr::project!( 696 + $crate::dma::CoherentAllocation::as_ptr(dma), $($proj)* 697 + ); 698 + // SAFETY: The pointer created by the projection is within the DMA region. 
699 + unsafe { $crate::dma::CoherentAllocation::field_read(dma, ptr) } 705 700 }}; 706 - ($dma:ident [ $idx:expr ] $($field:tt)* ) => { 707 - $crate::dma_read!($dma, $idx, $($field)*) 708 - }; 709 - ($($dma:ident).* [ $idx:expr ] $($field:tt)* ) => { 710 - $crate::dma_read!($($dma).*, $idx, $($field)*) 711 - }; 712 701 } 713 702 714 703 /// Writes to a field of an item from an allocated region of structs. 704 + /// 705 + /// The syntax is of the form `kernel::dma_write!(dma, proj, val)` where `dma` is an expression 706 + /// evaluating to a [`CoherentAllocation`], `proj` is a 707 + /// [projection specification](kernel::ptr::project!), and `val` is the value to be written to the 708 + /// projected location. 715 709 /// 716 710 /// # Examples 717 711 /// ··· 720 728 /// unsafe impl kernel::transmute::AsBytes for MyStruct{}; 721 729 /// 722 730 /// # fn test(alloc: &kernel::dma::CoherentAllocation<MyStruct>) -> Result { 723 - /// kernel::dma_write!(alloc[2].member = 0xf); 724 - /// kernel::dma_write!(alloc[1] = MyStruct { member: 0xf }); 731 + /// kernel::dma_write!(alloc, [2]?.member, 0xf); 732 + /// kernel::dma_write!(alloc, [1]?, MyStruct { member: 0xf }); 725 733 /// # Ok::<(), Error>(()) } 726 734 /// ``` 727 735 #[macro_export] 728 736 macro_rules! dma_write { 729 - ($dma:ident [ $idx:expr ] $($field:tt)*) => {{ 730 - $crate::dma_write!($dma, $idx, $($field)*) 737 + (@parse [$dma:expr] [$($proj:tt)*] [, $val:expr]) => {{ 738 + let dma = &$dma; 739 + let ptr = $crate::ptr::project!( 740 + mut $crate::dma::CoherentAllocation::as_mut_ptr(dma), $($proj)* 741 + ); 742 + let val = $val; 743 + // SAFETY: The pointer created by the projection is within the DMA region. 744 + unsafe { $crate::dma::CoherentAllocation::field_write(dma, ptr, val) } 731 745 }}; 732 - ($($dma:ident).* [ $idx:expr ] $($field:tt)* ) => {{ 733 - $crate::dma_write!($($dma).*, $idx, $($field)*) 734 - }}; 735 - ($dma:expr, $idx: expr, = $val:expr) => { 736 - (|| -> ::core::result::Result<_, $crate::error::Error> { 737 - let item = $crate::dma::CoherentAllocation::item_from_index(&$dma, $idx)?; 738 - // SAFETY: `item_from_index` ensures that `item` is always a valid item. 739 - unsafe { $crate::dma::CoherentAllocation::field_write(&$dma, item, $val) } 740 - ::core::result::Result::Ok(()) 741 - })() 746 + (@parse [$dma:expr] [$($proj:tt)*] [.$field:tt $($rest:tt)*]) => { 747 + $crate::dma_write!(@parse [$dma] [$($proj)* .$field] [$($rest)*]) 742 748 }; 743 - ($dma:expr, $idx: expr, $(.$field:ident)* = $val:expr) => { 744 - (|| -> ::core::result::Result<_, $crate::error::Error> { 745 - let item = $crate::dma::CoherentAllocation::item_from_index(&$dma, $idx)?; 746 - // SAFETY: `item_from_index` ensures that `item` is always a valid pointer and can be 747 - // dereferenced. The compiler also further validates the expression on whether `field` 748 - // is a member of `item` when expanded by the macro. 749 - unsafe { 750 - let ptr_field = ::core::ptr::addr_of_mut!((*item) $(.$field)*); 751 - $crate::dma::CoherentAllocation::field_write(&$dma, ptr_field, $val) 752 - } 753 - ::core::result::Result::Ok(()) 754 - })() 749 + (@parse [$dma:expr] [$($proj:tt)*] [[$index:expr]? $($rest:tt)*]) => { 750 + $crate::dma_write!(@parse [$dma] [$($proj)* [$index]?] 
[$($rest)*]) 751 + }; 752 + (@parse [$dma:expr] [$($proj:tt)*] [[$index:expr] $($rest:tt)*]) => { 753 + $crate::dma_write!(@parse [$dma] [$($proj)* [$index]] [$($rest)*]) 754 + }; 755 + ($dma:expr, $($rest:tt)*) => { 756 + $crate::dma_write!(@parse [$dma] [] [$($rest)*]) 755 757 }; 756 758 }
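The rewritten `dma_write!` uses the token-munching style: an internal `@parse` rule peels one projection token at a time into an accumulator until only the trailing `, value` remains. A minimal standalone sketch of that technique, which merely counts projection steps instead of building a pointer:

```rust
// A toy macro in the same `@parse` token-munching style as `dma_write!`:
// projection tokens are consumed one at a time until the trailing `, value`.
macro_rules! count_proj {
    (@parse [$n:expr] [, $val:expr]) => {
        ($n, $val)
    };
    (@parse [$n:expr] [.$field:tt $($rest:tt)*]) => {
        count_proj!(@parse [$n + 1] [$($rest)*])
    };
    (@parse [$n:expr] [[$index:expr] $($rest:tt)*]) => {
        count_proj!(@parse [$n + 1] [$($rest)*])
    };
    // Entry point: start with an empty accumulator.
    ($($tokens:tt)*) => {
        count_proj!(@parse [0] [$($tokens)*])
    };
}

fn main() {
    // Three projection steps: `[2]`, `.field`, `.inner`, then the value.
    let (steps, value) = count_proj!([2].field.inner, 0xf);
    assert_eq!(steps, 3);
    assert_eq!(value, 0xf);
}
```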
+4
rust/kernel/lib.rs
··· 20 20 #![feature(generic_nonzero)] 21 21 #![feature(inline_const)] 22 22 #![feature(pointer_is_aligned)] 23 + #![feature(slice_ptr_len)] 23 24 // 24 25 // Stable since Rust 1.80.0. 25 26 #![feature(slice_flatten)] ··· 37 36 #![feature(const_option)] 38 37 #![feature(const_ptr_write)] 39 38 #![feature(const_refs_to_cell)] 39 + // 40 + // Stable since Rust 1.84.0. 41 + #![feature(strict_provenance)] 40 42 // 41 43 // Expected to become stable. 42 44 #![feature(arbitrary_self_types)]
+29 -1
rust/kernel/ptr.rs
··· 2 2
3 3 //! Types and functions to work with pointers and addresses.
4 4
5 - use core::mem::align_of;
5 + pub mod projection;
6 + pub use crate::project_pointer as project;
7 +
8 + use core::mem::{
9 + align_of,
10 + size_of, //
11 + };
6 12 use core::num::NonZero;
7 13
8 14 /// Type representing an alignment, which is always a power of two.
··· 231 225 }
232 226
233 227 impl_alignable_uint!(u8, u16, u32, u64, usize);
228 +
229 + /// Trait representing compile-time known size information.
230 + ///
231 + /// This is a generalization of [`size_of`] that works for dynamically sized types.
232 + pub trait KnownSize {
233 + /// Returns the size in bytes of an object of this type, using the metadata of the given pointer.
234 + fn size(p: *const Self) -> usize;
235 + }
236 +
237 + impl<T> KnownSize for T {
238 + #[inline(always)]
239 + fn size(_: *const Self) -> usize {
240 + size_of::<T>()
241 + }
242 + }
243 +
244 + impl<T> KnownSize for [T] {
245 + #[inline(always)]
246 + fn size(p: *const Self) -> usize {
247 + p.len() * size_of::<T>()
248 + }
249 + }
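`KnownSize` generalizes `size_of` to slices by reading the length out of the fat pointer's metadata via `<*const [T]>::len()`, which is why the `slice_ptr_len` gate (stable since Rust 1.79) is enabled above. A standalone sketch of the same trait on stable Rust:

```rust
use std::mem::size_of;

// Recovers an object's byte size from a raw pointer, covering both sized
// types and slices, mirroring the `KnownSize` trait above.
trait KnownSize {
    fn size(p: *const Self) -> usize;
}

// Sized types: the pointer carries no metadata, size is a constant.
impl<T> KnownSize for T {
    fn size(_: *const Self) -> usize {
        size_of::<T>()
    }
}

// Slices: `<*const [T]>::len` reads the fat pointer's metadata; nothing
// is dereferenced.
impl<T> KnownSize for [T] {
    fn size(p: *const Self) -> usize {
        p.len() * size_of::<T>()
    }
}

fn main() {
    let xs = [1u32, 2, 3];
    assert_eq!(<[u32] as KnownSize>::size(&xs[..]), 12);
    assert_eq!(<u32 as KnownSize>::size(&xs[0]), 4);
}
```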
+305
rust/kernel/ptr/projection.rs
··· 1 + // SPDX-License-Identifier: GPL-2.0
2 +
3 + //! Infrastructure for handling projections.
4 +
5 + use core::{
6 + mem::MaybeUninit,
7 + ops::Deref, //
8 + };
9 +
10 + use crate::prelude::*;
11 +
12 + /// Error raised when a projection is attempted on an array or slice out of bounds.
13 + pub struct OutOfBound;
14 +
15 + impl From<OutOfBound> for Error {
16 + #[inline(always)]
17 + fn from(_: OutOfBound) -> Self {
18 + ERANGE
19 + }
20 + }
21 +
22 + /// A helper trait to perform index projection.
23 + ///
24 + /// This is similar to [`core::slice::SliceIndex`], but operates on raw pointers safely and
25 + /// fallibly.
26 + ///
27 + /// # Safety
28 + ///
29 + /// The implementation of `index` and `get` (if [`Some`] is returned) must ensure that, given the
30 + /// input pointer `slice` and the returned pointer `output`:
31 + /// - `output` has the same provenance as `slice`;
32 + /// - `output.byte_offset_from(slice)` is between 0 and
33 + /// `KnownSize::size(slice) - KnownSize::size(output)`.
34 + ///
35 + /// This means that if the input pointer is valid, then the pointer returned by `get` or `index` is
36 + /// also valid.
37 + #[diagnostic::on_unimplemented(message = "`{Self}` cannot be used to index `{T}`")]
38 + #[doc(hidden)]
39 + pub unsafe trait ProjectIndex<T: ?Sized>: Sized {
40 + type Output: ?Sized;
41 +
42 + /// Returns an index-projected pointer, if in bounds.
43 + fn get(self, slice: *mut T) -> Option<*mut Self::Output>;
44 +
45 + /// Returns an index-projected pointer, failing the build if it cannot be proved to be in bounds.
46 + #[inline(always)]
47 + fn index(self, slice: *mut T) -> *mut Self::Output {
48 + Self::get(self, slice).unwrap_or_else(|| build_error!())
49 + }
50 + }
51 +
52 + // Forward array impl to slice impl.
53 + //
54 + // SAFETY: Safety requirement guaranteed by the forwarded impl.
55 + unsafe impl<T, I, const N: usize> ProjectIndex<[T; N]> for I
56 + where
57 + I: ProjectIndex<[T]>,
58 + {
59 + type Output = <I as ProjectIndex<[T]>>::Output;
60 +
61 + #[inline(always)]
62 + fn get(self, slice: *mut [T; N]) -> Option<*mut Self::Output> {
63 + <I as ProjectIndex<[T]>>::get(self, slice)
64 + }
65 +
66 + #[inline(always)]
67 + fn index(self, slice: *mut [T; N]) -> *mut Self::Output {
68 + <I as ProjectIndex<[T]>>::index(self, slice)
69 + }
70 + }
71 +
72 + // SAFETY: `get`-returned pointer has the same provenance as `slice` and the offset is checked to
73 + // not exceed the required bound.
74 + unsafe impl<T> ProjectIndex<[T]> for usize {
75 + type Output = T;
76 +
77 + #[inline(always)]
78 + fn get(self, slice: *mut [T]) -> Option<*mut T> {
79 + if self >= slice.len() {
80 + None
81 + } else {
82 + Some(slice.cast::<T>().wrapping_add(self))
83 + }
84 + }
85 + }
86 +
87 + // SAFETY: `get`-returned pointer has the same provenance as `slice` and the offset is checked to
88 + // not exceed the required bound.
89 + unsafe impl<T> ProjectIndex<[T]> for core::ops::Range<usize> {
90 + type Output = [T];
91 +
92 + #[inline(always)]
93 + fn get(self, slice: *mut [T]) -> Option<*mut [T]> {
94 + let new_len = self.end.checked_sub(self.start)?;
95 + if self.end > slice.len() {
96 + return None;
97 + }
98 + Some(core::ptr::slice_from_raw_parts_mut(
99 + slice.cast::<T>().wrapping_add(self.start),
100 + new_len,
101 + ))
102 + }
103 + }
104 +
105 + // SAFETY: Safety requirement guaranteed by the forwarded impl.
106 + unsafe impl<T> ProjectIndex<[T]> for core::ops::RangeTo<usize> {
107 + type Output = [T];
108 +
109 + #[inline(always)]
110 + fn get(self, slice: *mut [T]) -> Option<*mut [T]> {
111 + (0..self.end).get(slice)
112 + }
113 + }
114 +
115 + // SAFETY: Safety requirement guaranteed by the forwarded impl.
116 + unsafe impl<T> ProjectIndex<[T]> for core::ops::RangeFrom<usize> {
117 + type Output = [T];
118 +
119 + #[inline(always)]
120 + fn get(self, slice: *mut [T]) -> Option<*mut [T]> {
121 + (self.start..slice.len()).get(slice)
122 + }
123 + }
124 +
125 + // SAFETY: `get` returned the pointer as is, so it always has the same provenance and offset of 0.
126 + unsafe impl<T> ProjectIndex<[T]> for core::ops::RangeFull {
127 + type Output = [T];
128 +
129 + #[inline(always)]
130 + fn get(self, slice: *mut [T]) -> Option<*mut [T]> {
131 + Some(slice)
132 + }
133 + }
134 +
135 + /// A helper trait to perform field projection.
136 + ///
137 + /// This trait has a `DEREF` generic parameter so it can be implemented twice for types that
138 + /// implement [`Deref`]. This causes an ambiguity error and thus blocks [`Deref`] types from being
139 + /// used as the base of a projection, as they could inject unsoundness. Users therefore must not
140 + /// specify `DEREF` and should always leave it to be inferred.
141 + ///
142 + /// # Safety
143 + ///
144 + /// `proj` may only invoke `f` with a valid allocation, as the documentation of [`Self::proj`]
145 + /// describes.
146 + #[doc(hidden)]
147 + pub unsafe trait ProjectField<const DEREF: bool> {
148 + /// Projects a pointer to a type into a pointer to one of its fields.
149 + ///
150 + /// `f` may only be invoked with a valid allocation so it can safely obtain raw pointers to
151 + /// fields using `&raw mut`.
152 + ///
153 + /// This is needed because `base` might not point to a valid allocation, while `&raw mut`
154 + /// requires pointers to be in bounds of a valid allocation.
155 + ///
156 + /// # Safety
157 + ///
158 + /// `f` must return a pointer in bounds of the provided pointer.
159 + unsafe fn proj<F>(base: *mut Self, f: impl FnOnce(*mut Self) -> *mut F) -> *mut F;
160 + }
161 +
162 + // NOTE: in theory, this API should work for `T: ?Sized` and `F: ?Sized`, too. However, we cannot
163 + // currently support that as we need to obtain a valid allocation that `&raw const` can operate on.
164 + //
165 + // SAFETY: `proj` invokes `f` with a valid allocation.
166 + unsafe impl<T> ProjectField<false> for T {
167 + #[inline(always)]
168 + unsafe fn proj<F>(base: *mut Self, f: impl FnOnce(*mut Self) -> *mut F) -> *mut F {
169 + // Create a valid allocation to start the projection from, as `base` is not necessarily
170 + // valid. The memory is never actually used and is optimized out, so this works even for
171 + // very large `T` (the `memoffset` crate relies on the same trick). To be extra certain, we
172 + // also annotate the `f` closure with `#[inline(always)]` in the macro.
173 + let mut place = MaybeUninit::uninit();
174 + let place_base = place.as_mut_ptr();
175 + let field = f(place_base);
176 + // SAFETY: `field` is in bounds of `place_base` per the safety requirement on `f`.
177 + let offset = unsafe { field.byte_offset_from(place_base) };
178 + // Use `wrapping_byte_offset` as `base` does not need to point to a valid allocation.
179 + base.wrapping_byte_offset(offset).cast()
180 + }
181 + }
182 +
183 + // SAFETY: Vacuously satisfied.
184 + unsafe impl<T: Deref> ProjectField<true> for T {
185 + #[inline(always)]
186 + unsafe fn proj<F>(_: *mut Self, _: impl FnOnce(*mut Self) -> *mut F) -> *mut F {
187 + build_error!("this function is a guard against `Deref` impl and is never invoked");
188 + }
189 + }
190 +
191 + /// Creates a projection from a raw pointer.
192 + ///
193 + /// The projected pointer stays within the memory region delimited by the input pointer. The input
194 + /// raw pointer is not required to be valid, so this macro may be used to project pointers outside
195 + /// the normal address space, e.g. I/O pointers. However, if the input pointer is valid, the
196 + /// projected pointer is also valid.
197 + ///
198 + /// Supported projections include field projections and index projections.
199 + /// Projecting into types that implement a custom [`Deref`] or [`Index`](core::ops::Index) is not
200 + /// allowed.
201 + ///
202 + /// The basic syntax is `kernel::ptr::project!(ptr, projection)`, where `ptr` is an expression
203 + /// that evaluates to a raw pointer serving as the base of the projection. `projection` is a
204 + /// projection expression of the form `.field` (normally an identifier, or a numeral for tuple
205 + /// structs) or of the form `[index]`.
206 + ///
207 + /// If a mutable pointer is needed, the macro input can be prefixed with the `mut` keyword, i.e.
208 + /// `kernel::ptr::project!(mut ptr, projection)`. By default, a const pointer is created.
209 + ///
210 + /// The `ptr::project!` macro can perform both build-time checked and fallible indexing. The
211 + /// `[index]` form performs build-time bounds checking; if the compiler fails to prove that
212 + /// `[index]` is in bounds, compilation fails. The `[index]?` form performs runtime bounds
213 + /// checking; an `OutOfBound` error is raised via `?` if the index is out of bounds.
214 + ///
215 + /// # Examples
216 + ///
217 + /// Field projections are performed with `.field_name`:
218 + ///
219 + /// ```
220 + /// struct MyStruct { field: u32, }
221 + /// let ptr: *const MyStruct = core::ptr::dangling();
222 + /// let field_ptr: *const u32 = kernel::ptr::project!(ptr, .field);
223 + ///
224 + /// struct MyTupleStruct(u32, u32);
225 + ///
226 + /// fn proj(ptr: *const MyTupleStruct) {
227 + /// let field_ptr: *const u32 = kernel::ptr::project!(ptr, .1);
228 + /// }
229 + /// ```
230 + ///
231 + /// Index projections are performed with `[index]`:
232 + ///
233 + /// ```
234 + /// fn proj(ptr: *const [u8; 32]) -> Result {
235 + /// let field_ptr: *const u8 = kernel::ptr::project!(ptr, [1]);
236 + /// // The following invocation, if uncommented, would fail the build.
237 + /// //
238 + /// // kernel::ptr::project!(ptr, [128]);
239 + ///
240 + /// // This will raise an `OutOfBound` error (which is convertible to `ERANGE`).
241 + /// kernel::ptr::project!(ptr, [128]?);
242 + /// Ok(())
243 + /// }
244 + /// ```
245 + ///
246 + /// To match on the error instead of propagating it, put the invocation inside a closure:
247 + ///
248 + /// ```
249 + /// let ptr: *const [u8; 32] = core::ptr::dangling();
250 + /// let field_ptr: Result<*const u8> = (|| -> Result<_> {
251 + /// Ok(kernel::ptr::project!(ptr, [128]?))
252 + /// })();
253 + /// assert!(field_ptr.is_err());
254 + /// ```
255 + ///
256 + /// For mutable pointers, put `mut` as the first token in the macro invocation.
257 + /// 258 + /// ``` 259 + /// let ptr: *mut [(u8, u16); 32] = core::ptr::dangling_mut(); 260 + /// let field_ptr: *mut u16 = kernel::ptr::project!(mut ptr, [1].1); 261 + /// ``` 262 + #[macro_export] 263 + macro_rules! project_pointer { 264 + (@gen $ptr:ident, ) => {}; 265 + // Field projection. `$field` needs to be `tt` to support tuple index like `.0`. 266 + (@gen $ptr:ident, .$field:tt $($rest:tt)*) => { 267 + // SAFETY: The provided closure always returns an in-bounds pointer. 268 + let $ptr = unsafe { 269 + $crate::ptr::projection::ProjectField::proj($ptr, #[inline(always)] |ptr| { 270 + // Check unaligned field. Not all users (e.g. DMA) can handle unaligned 271 + // projections. 272 + if false { 273 + let _ = &(*ptr).$field; 274 + } 275 + // SAFETY: `$field` is in bounds, and no implicit `Deref` is possible (if the 276 + // type implements `Deref`, Rust cannot infer the generic parameter `DEREF`). 277 + &raw mut (*ptr).$field 278 + }) 279 + }; 280 + $crate::ptr::project!(@gen $ptr, $($rest)*) 281 + }; 282 + // Fallible index projection. 283 + (@gen $ptr:ident, [$index:expr]? $($rest:tt)*) => { 284 + let $ptr = $crate::ptr::projection::ProjectIndex::get($index, $ptr) 285 + .ok_or($crate::ptr::projection::OutOfBound)?; 286 + $crate::ptr::project!(@gen $ptr, $($rest)*) 287 + }; 288 + // Build-time checked index projection. 289 + (@gen $ptr:ident, [$index:expr] $($rest:tt)*) => { 290 + let $ptr = $crate::ptr::projection::ProjectIndex::index($index, $ptr); 291 + $crate::ptr::project!(@gen $ptr, $($rest)*) 292 + }; 293 + (mut $ptr:expr, $($proj:tt)*) => {{ 294 + let ptr: *mut _ = $ptr; 295 + $crate::ptr::project!(@gen ptr, $($proj)*); 296 + ptr 297 + }}; 298 + ($ptr:expr, $($proj:tt)*) => {{ 299 + let ptr = <*const _>::cast_mut($ptr); 300 + // We currently always project using mutable pointer, as it is not decided whether `&raw 301 + // const` allows the resulting pointer to be mutated (see documentation of `addr_of!`). 302 + $crate::ptr::project!(@gen ptr, $($proj)*); 303 + ptr.cast_const() 304 + }}; 305 + }
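The core of `ProjectField::proj` is the `memoffset`-style trick: take the field's address inside an uninitialized scratch place, derive the byte offset, and re-apply it to the possibly dangling base with `wrapping_byte_offset`. A standalone sketch of the same trick with an illustrative `Msg` type:

```rust
use std::mem::MaybeUninit;

// Illustrative type; any struct works.
struct Msg {
    _hdr: u64,
    payload: u32,
}

// Project `base` to its `payload` field without requiring `base` to point
// at a valid allocation, in the spirit of `ProjectField::proj`.
fn project_payload(base: *mut Msg) -> *mut u32 {
    // A scratch place gives `&raw mut` a real allocation to work in.
    let mut place = MaybeUninit::<Msg>::uninit();
    let place_base = place.as_mut_ptr();
    // `&raw mut` takes the field address without creating a reference, so
    // the uninitialized contents are never read.
    let field = unsafe { &raw mut (*place_base).payload };
    // Both pointers belong to the scratch place, so the offset is valid.
    let offset = unsafe { field.byte_offset_from(place_base) };
    // `wrapping_byte_offset` tolerates a dangling `base`.
    base.wrapping_byte_offset(offset).cast()
}

fn main() {
    let mut m = Msg { _hdr: 0, payload: 7 };
    let p = project_payload(&mut m);
    unsafe { *p = 42 };
    assert_eq!(m.payload, 42);
}
```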
+16 -14
samples/rust/rust_dma.rs
··· 68 68 CoherentAllocation::alloc_coherent(pdev.as_ref(), TEST_VALUES.len(), GFP_KERNEL)?; 69 69 70 70 for (i, value) in TEST_VALUES.into_iter().enumerate() { 71 - kernel::dma_write!(ca[i] = MyStruct::new(value.0, value.1))?; 71 + kernel::dma_write!(ca, [i]?, MyStruct::new(value.0, value.1)); 72 72 } 73 73 74 74 let size = 4 * page::PAGE_SIZE; ··· 85 85 } 86 86 } 87 87 88 + impl DmaSampleDriver { 89 + fn check_dma(&self) -> Result { 90 + for (i, value) in TEST_VALUES.into_iter().enumerate() { 91 + let val0 = kernel::dma_read!(self.ca, [i]?.h); 92 + let val1 = kernel::dma_read!(self.ca, [i]?.b); 93 + 94 + assert_eq!(val0, value.0); 95 + assert_eq!(val1, value.1); 96 + } 97 + 98 + Ok(()) 99 + } 100 + } 101 + 88 102 #[pinned_drop] 89 103 impl PinnedDrop for DmaSampleDriver { 90 104 fn drop(self: Pin<&mut Self>) { 91 105 dev_info!(self.pdev, "Unload DMA test driver.\n"); 92 106 93 - for (i, value) in TEST_VALUES.into_iter().enumerate() { 94 - let val0 = kernel::dma_read!(self.ca[i].h); 95 - let val1 = kernel::dma_read!(self.ca[i].b); 96 - assert!(val0.is_ok()); 97 - assert!(val1.is_ok()); 98 - 99 - if let Ok(val0) = val0 { 100 - assert_eq!(val0, value.0); 101 - } 102 - if let Ok(val1) = val1 { 103 - assert_eq!(val1, value.1); 104 - } 105 - } 107 + assert!(self.check_dma().is_ok()); 106 108 107 109 for (i, entry) in self.sgt.iter().enumerate() { 108 110 dev_info!(
+3 -1
scripts/Makefile.build
··· 310 310 311 311 # The features in this list are the ones allowed for non-`rust/` code. 312 312 # 313 + # - Stable since Rust 1.79.0: `feature(slice_ptr_len)`. 313 314 # - Stable since Rust 1.81.0: `feature(lint_reasons)`. 314 315 # - Stable since Rust 1.82.0: `feature(asm_const)`, 315 316 # `feature(offset_of_nested)`, `feature(raw_ref_op)`. 317 + # - Stable since Rust 1.84.0: `feature(strict_provenance)`. 316 318 # - Stable since Rust 1.87.0: `feature(asm_goto)`. 317 319 # - Expected to become stable: `feature(arbitrary_self_types)`. 318 320 # - To be determined: `feature(used_with_arg)`. 319 321 # 320 322 # Please see https://github.com/Rust-for-Linux/linux/issues/2 for details on 321 323 # the unstable features in use. 322 - rust_allowed_features := asm_const,asm_goto,arbitrary_self_types,lint_reasons,offset_of_nested,raw_ref_op,used_with_arg 324 + rust_allowed_features := asm_const,asm_goto,arbitrary_self_types,lint_reasons,offset_of_nested,raw_ref_op,slice_ptr_len,strict_provenance,used_with_arg 323 325 324 326 # `--out-dir` is required to avoid temporaries being created by `rustc` in the 325 327 # current working directory, which may be not accessible in the out-of-tree