Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR.

Conflicts:

include/linux/filter.h
kernel/bpf/core.c
  66e13b615a0c ("bpf: verifier: prevent userspace memory access")
  d503a04f8bc0 ("bpf: Add support for certain atomics in bpf_arena to x86 JIT")
https://lore.kernel.org/all/20240429114939.210328b0@canb.auug.org.au/

No adjacent changes.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+2458 -1536
+1
.mailmap
··· 512 512 Pradeep Kumar Chitrapu <quic_pradeepc@quicinc.com> <pradeepc@codeaurora.org> 513 513 Prasad Sodagudi <quic_psodagud@quicinc.com> <psodagud@codeaurora.org> 514 514 Punit Agrawal <punitagrawal@gmail.com> <punit.agrawal@arm.com> 515 + Puranjay Mohan <puranjay@kernel.org> <puranjay12@gmail.com> 515 516 Qais Yousef <qyousef@layalina.io> <qais.yousef@imgtec.com> 516 517 Qais Yousef <qyousef@layalina.io> <qais.yousef@arm.com> 517 518 Quentin Monnet <qmo@kernel.org> <quentin.monnet@netronome.com>
+3
Documentation/admin-guide/kernel-parameters.txt
··· 3423 3423 arch-independent options, each of which is an 3424 3424 aggregation of existing arch-specific options. 3425 3425 3426 + Note, "mitigations" is supported if and only if the 3427 + kernel was built with CPU_MITIGATIONS=y. 3428 + 3426 3429 off 3427 3430 Disable all optional CPU mitigations. This 3428 3431 improves system performance, but it may also
+3 -3
Documentation/core-api/workqueue.rst
··· 671 671 events_unbound unbound 9 9 10 10 8 672 672 events_freezable percpu 0 2 4 6 673 673 events_power_efficient percpu 0 2 4 6 674 - events_freezable_power_ percpu 0 2 4 6 674 + events_freezable_pwr_ef percpu 0 2 4 6 675 675 rcu_gp percpu 0 2 4 6 676 676 rcu_par_gp percpu 0 2 4 6 677 677 slub_flushwq percpu 0 2 4 6 ··· 694 694 events_unbound 38306 0 0.1 - 7 - - 695 695 events_freezable 0 0 0.0 0 0 - - 696 696 events_power_efficient 29598 0 0.2 0 0 - - 697 - events_freezable_power_ 10 0 0.0 0 0 - - 697 + events_freezable_pwr_ef 10 0 0.0 0 0 - - 698 698 sock_diag_events 0 0 0.0 0 0 - - 699 699 700 700 total infl CPUtime CPUhog CMW/RPR mayday rescued ··· 704 704 events_unbound 38322 0 0.1 - 7 - - 705 705 events_freezable 0 0 0.0 0 0 - - 706 706 events_power_efficient 29603 0 0.2 0 0 - - 707 - events_freezable_power_ 10 0 0.0 0 0 - - 707 + events_freezable_pwr_ef 10 0 0.0 0 0 - - 708 708 sock_diag_events 0 0 0.0 0 0 - - 709 709 710 710 ...
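
The events_freezable_pwr_ef entries above follow a rename of the workqueue itself, shortening it so the full name fits the width the monitoring tools can display (the old output showed it truncated as "events_freezable_power_", trailing underscore included). A minimal C sketch of the corresponding creation call, with the variable and function names freezable_pwr_ef_wq / freezable_pwr_ef_init assumed purely for illustration, not taken from this hunk:

    #include <linux/errno.h>
    #include <linux/init.h>
    #include <linux/workqueue.h>

    /* Sketch (assumption): allocating the freezable, power-efficient
     * workqueue under the shortened name so wq_dump.py and wq_monitor.py
     * can show it untruncated. */
    static struct workqueue_struct *freezable_pwr_ef_wq;

    static int __init freezable_pwr_ef_init(void)
    {
            freezable_pwr_ef_wq = alloc_workqueue("events_freezable_pwr_ef",
                                                  WQ_FREEZABLE | WQ_POWER_EFFICIENT,
                                                  0);
            return freezable_pwr_ef_wq ? 0 : -ENOMEM;
    }
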
+1 -4
Documentation/devicetree/bindings/eeprom/at24.yaml
··· 69 69 - items: 70 70 pattern: c32$ 71 71 - items: 72 - pattern: c32d-wl$ 73 - - items: 74 72 pattern: cs32$ 75 73 - items: 76 74 pattern: c64$ 77 - - items: 78 - pattern: c64d-wl$ 79 75 - items: 80 76 pattern: cs64$ 81 77 - items: ··· 132 136 - renesas,r1ex24128 133 137 - samsung,s524ad0xd1 134 138 - const: atmel,24c128 139 + - pattern: '^atmel,24c(32|64)d-wl$' # Actual vendor is st 135 140 136 141 label: 137 142 description: Descriptive name of the EEPROM.
+2
Documentation/devicetree/bindings/pinctrl/renesas,rzg2l-pinctrl.yaml
··· 120 120 slew-rate: true 121 121 gpio-hog: true 122 122 gpios: true 123 + input: true 123 124 input-enable: true 125 + output-enable: true 124 126 output-high: true 125 127 output-low: true 126 128 line-name: true
+1
Documentation/devicetree/bindings/soc/rockchip/grf.yaml
··· 171 171 unevaluatedProperties: false 172 172 173 173 pcie-phy: 174 + type: object 174 175 description: 175 176 Documentation/devicetree/bindings/phy/rockchip-pcie-phy.txt 176 177
+1 -1
Documentation/rust/arch-support.rst
··· 16 16 Architecture Level of support Constraints 17 17 ============= ================ ============================================== 18 18 ``arm64`` Maintained Little Endian only. 19 - ``loongarch`` Maintained - 19 + ``loongarch`` Maintained \- 20 20 ``um`` Maintained ``x86_64`` only. 21 21 ``x86`` Maintained ``x86_64`` only. 22 22 ============= ================ ==============================================
+2 -5
Documentation/timers/no_hz.rst
··· 129 129 online to handle timekeeping tasks in order to ensure that system 130 130 calls like gettimeofday() returns accurate values on adaptive-tick CPUs. 131 131 (This is not an issue for CONFIG_NO_HZ_IDLE=y because there are no running 132 - user processes to observe slight drifts in clock rate.) Therefore, the 133 - boot CPU is prohibited from entering adaptive-ticks mode. Specifying a 134 - "nohz_full=" mask that includes the boot CPU will result in a boot-time 135 - error message, and the boot CPU will be removed from the mask. Note that 136 - this means that your system must have at least two CPUs in order for 132 + user processes to observe slight drifts in clock rate.) Note that this 133 + means that your system must have at least two CPUs in order for 137 134 CONFIG_NO_HZ_FULL=y to do anything for you. 138 135 139 136 Finally, adaptive-ticks CPUs must have their RCU callbacks offloaded.
+371 -27
Documentation/translations/zh_CN/core-api/workqueue.rst
··· 7 7 8 8 司延腾 Yanteng Si <siyanteng@loongson.cn> 9 9 周彬彬 Binbin Zhou <zhoubinbin@loongson.cn> 10 + 陈兴友 Xingyou Chen <rockrush@rockwork.org> 10 11 11 12 .. _cn_workqueue.rst: 12 13 13 - ========================= 14 - 并发管理的工作队列 (cmwq) 15 - ========================= 14 + ======== 15 + 工作队列 16 + ======== 16 17 17 18 :日期: September, 2010 18 19 :作者: Tejun Heo <tj@kernel.org> ··· 23 22 简介 24 23 ==== 25 24 26 - 在很多情况下,需要一个异步进程的执行环境,工作队列(wq)API是这种情况下 25 + 在很多情况下,需要一个异步的程序执行环境,工作队列(wq)API是这种情况下 27 26 最常用的机制。 28 27 29 28 当需要这样一个异步执行上下文时,一个描述将要执行的函数的工作项(work, ··· 35 34 队列时,工作者又开始执行。 36 35 37 36 38 - 为什么要cmwq? 39 - ============= 37 + 为什么要有并发管理工作队列? 38 + =========================== 40 39 41 40 在最初的wq实现中,多线程(MT)wq在每个CPU上有一个工作者线程,而单线程 42 41 (ST)wq在全系统有一个工作者线程。一个MT wq需要保持与CPU数量相同的工 ··· 74 73 向该函数的工作项,并在工作队列中排队等待该工作项。(就是挂到workqueue 75 74 队列里面去) 76 75 77 - 特定目的线程,称为工作线程(工作者),一个接一个地执行队列中的功能。 78 - 如果没有工作项排队,工作者线程就会闲置。这些工作者线程被管理在所谓 79 - 的工作者池中。 76 + 工作项可以在线程或BH(软中断)上下文中执行。 77 + 78 + 对于由线程执行的工作队列,被称为(内核)工作者([k]worker)的特殊 79 + 线程会依次执行其中的函数。如果没有工作项排队,工作者线程就会闲置。 80 + 这些工作者线程被管理在所谓的工作者池中。 80 81 81 82 cmwq设计区分了面向用户的工作队列,子系统和驱动程序在上面排队工作, 82 83 以及管理工作者池和处理排队工作项的后端机制。 ··· 86 83 每个可能的CPU都有两个工作者池,一个用于正常的工作项,另一个用于高 87 84 优先级的工作项,还有一些额外的工作者池,用于服务未绑定工作队列的工 88 85 作项目——这些后备池的数量是动态的。 86 + 87 + BH工作队列使用相同的结构。然而,由于同一时间只可能有一个执行上下文, 88 + 不需要担心并发问题。每个CPU上的BH工作者池只包含一个用于表示BH执行 89 + 上下文的虚拟工作者。BH工作队列可以被看作软中断的便捷接口。 89 90 90 91 当他们认为合适的时候,子系统和驱动程序可以通过特殊的 91 92 ``workqueue API`` 函数创建和排队工作项。他们可以通过在工作队列上 ··· 102 95 否则一个绑定的工作队列的工作项将被排在与发起线程运行的CPU相关的普 103 96 通或高级工作工作者池的工作项列表中。 104 97 105 - 对于任何工作者池的实施,管理并发水平(有多少执行上下文处于活动状 106 - 态)是一个重要问题。最低水平是为了节省资源,而饱和水平是指系统被 107 - 充分使用。 98 + 对于任何线程池的实施,管理并发水平(有多少执行上下文处于活动状 99 + 态)是一个重要问题。cmwq试图将并发保持在一个尽可能低且充足的 100 + 水平。最低水平是为了节省资源,而充足是为了使系统能被充分使用。 108 101 109 102 每个与实际CPU绑定的worker-pool通过钩住调度器来实现并发管理。每当 110 103 一个活动的工作者被唤醒或睡眠时,工作者池就会得到通知,并跟踪当前可 ··· 146 139 147 140 ``flags`` 148 141 --------- 142 + 143 + ``WQ_BH`` 144 + BH工作队列可以被看作软中断的便捷接口。它总是每个CPU一份, 145 + 其中的各个工作项也会按在队列中的顺序,被所属CPU在软中断 146 + 上下文中执行。 147 + 148 + BH工作队列的 ``max_active`` 值必须为0,且只能单独或和 149 + ``WQ_HIGHPRI`` 标志组合使用。 150 + 151 + BH工作项不可以睡眠。像延迟排队、冲洗、取消等所有其他特性 152 + 都是支持的。 149 153 150 154 ``WQ_UNBOUND`` 151 155 排队到非绑定wq的工作项由特殊的工作者池提供服务,这些工作者不 ··· 202 184 -------------- 203 185 204 186 ``@max_active`` 决定了每个CPU可以分配给wq的工作项的最大执行上 205 - 下文数量。例如,如果 ``@max_active为16`` ,每个CPU最多可以同 206 - 时执行16个wq的工作项。 187 + 下文数量。例如,如果 ``@max_active`` 为16 ,每个CPU最多可以同 188 + 时执行16个wq的工作项。它总是每CPU属性,即便对于未绑定 wq。 207 189 208 - 目前,对于一个绑定的wq, ``@max_active`` 的最大限制是512,当指 209 - 定为0时使用的默认值是256。对于非绑定的wq,其限制是512和 210 - 4 * ``num_possible_cpus()`` 中的较高值。这些值被选得足够高,所 211 - 以它们不是限制性因素,同时会在失控情况下提供保护。 190 + ``@max_active`` 的最大限制是512,当指定为0时使用的默认值是256。 191 + 这些值被选得足够高,所以它们不是限制性因素,同时会在失控情况下提供 192 + 保护。 212 193 213 194 一个wq的活动工作项的数量通常由wq的用户来调节,更具体地说,是由用 214 195 户在同一时间可以排列多少个工作项来调节。除非有特定的需求来控制活动 215 196 工作项的数量,否则建议指定 为"0"。 216 197 217 - 一些用户依赖于ST wq的严格执行顺序。 ``@max_active`` 为1和 ``WQ_UNBOUND`` 218 - 的组合用来实现这种行为。这种wq上的工作项目总是被排到未绑定的工作池 219 - 中,并且在任何时候都只有一个工作项目处于活动状态,从而实现与ST wq相 220 - 同的排序属性。 221 - 222 - 在目前的实现中,上述配置只保证了特定NUMA节点内的ST行为。相反, 223 - ``alloc_ordered_workqueue()`` 应该被用来实现全系统的ST行为。 198 + 一些用户依赖于任意时刻最多只有一个工作项被执行,且各工作项被按队列中 199 + 顺序处理带来的严格执行顺序。``@max_active`` 为1和 ``WQ_UNBOUND`` 200 + 的组合曾被用来实现这种行为,现在不用了。请使用 201 + ``alloc_ordered_workqueue()`` 。 224 202 225 203 226 204 执行场景示例 ··· 299 285 * 除非有特殊需要,建议使用0作为@max_active。在大多数使用情 300 286 况下,并发水平通常保持在默认限制之下。 301 287 302 - * 一个wq作为前进进度保证(WQ_MEM_RECLAIM,冲洗(flush)和工 288 + * 一个wq作为前进进度保证,``WQ_MEM_RECLAIM`` ,冲洗(flush)和工 303 289 作项属性的域。不涉及内存回收的工作项,不需要作为工作项组的一 
304 290 部分被刷新,也不需要任何特殊属性,可以使用系统中的一个wq。使 305 291 用专用wq和系统wq在执行特性上没有区别。 306 292 307 293 * 除非工作项预计会消耗大量的CPU周期,否则使用绑定的wq通常是有 308 294 益的,因为wq操作和工作项执行中的定位水平提高了。 295 + 296 + 297 + 亲和性作用域 298 + ============ 299 + 300 + 一个非绑定工作队列根据其亲和性作用域来对CPU进行分组以提高缓存 301 + 局部性。比如如果一个工作队列使用默认的“cache”亲和性作用域, 302 + 它将根据最后一级缓存的边界来分组处理器。这个工作队列上的工作项 303 + 将被分配给一个与发起CPU共用最后级缓存的处理器上的工作者。根据 304 + ``affinity_strict`` 的设置,工作者在启动后可能被允许移出 305 + 所在作用域,也可能不被允许。 306 + 307 + 工作队列目前支持以下亲和性作用域。 308 + 309 + ``default`` 310 + 使用模块参数 ``workqueue.default_affinity_scope`` 指定 311 + 的作用域,该参数总是会被设为以下作用域中的一个。 312 + 313 + ``cpu`` 314 + CPU不被分组。一个CPU上发起的工作项会被同一CPU上的工作者执行。 315 + 这使非绑定工作队列表现得像是不含并发管理的每CPU工作队列。 316 + 317 + ``smt`` 318 + CPU被按SMT边界分组。这通常意味着每个物理CPU核上的各逻辑CPU会 319 + 被分进同一组。 320 + 321 + ``cache`` 322 + CPU被按缓存边界分组。采用哪个缓存边界由架构代码决定。很多情况 323 + 下会使用L3。这是默认的亲和性作用域。 324 + 325 + ``numa`` 326 + CPU被按NUMA边界分组。 327 + 328 + ``system`` 329 + 所有CPU被放在同一组。工作队列不尝试在临近发起CPU的CPU上运行 330 + 工作项。 331 + 332 + 默认的亲和性作用域可以被模块参数 ``workqueue.default_affinity_scope`` 333 + 修改,特定工作队列的亲和性作用域可以通过 ``apply_workqueue_attrs()`` 334 + 被更改。 335 + 336 + 如果设置了 ``WQ_SYSFS`` ,工作队列会在它的 ``/sys/devices/virtual/workqueue/WQ_NAME/`` 337 + 目录中有以下亲和性作用域相关的接口文件。 338 + 339 + ``affinity_scope`` 340 + 读操作以查看当前的亲和性作用域。写操作用于更改设置。 341 + 342 + 当前作用域是默认值时,当前生效的作用域也可以被从这个文件中 343 + 读到(小括号内),例如 ``default (cache)`` 。 344 + 345 + ``affinity_strict`` 346 + 默认值0表明亲和性作用域不是严格的。当一个工作项开始执行时, 347 + 工作队列尽量尝试使工作者处于亲和性作用域内,称为遣返。启动后, 348 + 调度器可以自由地将工作者调度到系统中任意它认为合适的地方去。 349 + 这使得在保留使用其他CPU(如果必需且有可用)能力的同时, 350 + 还能从作用域局部性上获益。 351 + 352 + 如果设置为1,作用域内的所有工作者将被保证总是处于作用域内。 353 + 这在跨亲和性作用域会导致如功耗、负载隔离等方面的潜在影响时 354 + 会有用。严格的NUMA作用域也可用于和旧版内核中工作队列的行为 355 + 保持一致。 356 + 357 + 358 + 亲和性作用域与性能 359 + ================== 360 + 361 + 如果非绑定工作队列的行为对绝大多数使用场景来说都是最优的, 362 + 不需要更多调节,就完美了。很不幸,在当前内核中,重度使用 363 + 工作队列时,需要在局部性和利用率间显式地作一个明显的权衡。 364 + 365 + 更高的局部性带来更高效率,也就是相同数量的CPU周期内可以做 366 + 更多工作。然而,如果发起者没能将工作项充分地分散在亲和性 367 + 作用域间,更高的局部性也可能带来更低的整体系统利用率。以下 368 + dm-crypt 的性能测试清楚地阐明了这一取舍。 369 + 370 + 测试运行在一个12核24线程、4个L3缓存的处理器(AMD Ryzen 371 + 9 3900x)上。为保持一致性,关闭CPU超频。 ``/dev/dm-0`` 372 + 是NVME SSD(三星 990 PRO)上创建,用 ``cryptsetup`` 373 + 以默认配置打开的一个 dm-crypt 设备。 374 + 375 + 376 + 场景 1: 机器上遍布着有充足的发起者和工作量 377 + ------------------------------------------ 378 + 379 + 使用命令::: 380 + 381 + $ fio --filename=/dev/dm-0 --direct=1 --rw=randrw --bs=32k --ioengine=libaio \ 382 + --iodepth=64 --runtime=60 --numjobs=24 --time_based --group_reporting \ 383 + --name=iops-test-job --verify=sha512 384 + 385 + 这里有24个发起者,每个同时发起64个IO。 ``--verify=sha512`` 386 + 使得 ``fio`` 每次生成和读回内容受发起者和 ``kcryptd`` 387 + 间的执行局部性影响。下面是基于不同 ``kcryptd`` 的亲和性 388 + 作用域设置,各经过五次测试得到的读取带宽和CPU利用率数据。 389 + 390 + .. list-table:: 391 + :widths: 16 20 20 392 + :header-rows: 1 393 + 394 + * - 亲和性 395 + - 带宽 (MiBps) 396 + - CPU利用率(%) 397 + 398 + * - system 399 + - 1159.40 ±1.34 400 + - 99.31 ±0.02 401 + 402 + * - cache 403 + - 1166.40 ±0.89 404 + - 99.34 ±0.01 405 + 406 + * - cache (strict) 407 + - 1166.00 ±0.71 408 + - 99.35 ±0.01 409 + 410 + 在系统中分布着足够多发起者的情况下,不论严格与否,“cache” 411 + 没有表现得更差。三种配置均使整个机器达到饱和,但由于提高了 412 + 局部性,缓存相关的两种有0.6%的(带宽)提升。 413 + 414 + 415 + 场景 2: 更少发起者,足以达到饱和的工作量 416 + ---------------------------------------- 417 + 418 + 使用命令::: 419 + 420 + $ fio --filename=/dev/dm-0 --direct=1 --rw=randrw --bs=32k \ 421 + --ioengine=libaio --iodepth=64 --runtime=60 --numjobs=8 \ 422 + --time_based --group_reporting --name=iops-test-job --verify=sha512 423 + 424 + 与上一个场景唯一的区别是 ``--numjobs=8``。 发起者数量 425 + 减少为三分之一,但仍然有足以使系统达到饱和的工作总量。 426 + 427 + .. 
list-table:: 428 + :widths: 16 20 20 429 + :header-rows: 1 430 + 431 + * - 亲和性 432 + - 带宽 (MiBps) 433 + - CPU利用率(%) 434 + 435 + * - system 436 + - 1155.40 ±0.89 437 + - 97.41 ±0.05 438 + 439 + * - cache 440 + - 1154.40 ±1.14 441 + - 96.15 ±0.09 442 + 443 + * - cache (strict) 444 + - 1112.00 ±4.64 445 + - 93.26 ±0.35 446 + 447 + 这里有超过使系统达到饱和所需的工作量。“system”和“cache” 448 + 都接近但并未使机器完全饱和。“cache”消耗更少的CPU但更高的 449 + 效率使其得到和“system”相同的带宽。 450 + 451 + 八个发起者盘桓在四个L3缓存作用域间仍然允许“cache (strict)” 452 + 几乎使机器饱和,但缺少对工作的保持(不移到空闲处理器上) 453 + 开始带来3.7%的带宽损失。 454 + 455 + 456 + 场景 3: 更少发起者,不充足的工作量 457 + ---------------------------------- 458 + 459 + 使用命令::: 460 + 461 + $ fio --filename=/dev/dm-0 --direct=1 --rw=randrw --bs=32k \ 462 + --ioengine=libaio --iodepth=64 --runtime=60 --numjobs=4 \ 463 + --time_based --group_reporting --name=iops-test-job --verify=sha512 464 + 465 + 再次,唯一的区别是 ``--numjobs=4``。由于发起者减少到四个, 466 + 现在没有足以使系统饱和的工作量,带宽变得依赖于完成时延。 467 + 468 + .. list-table:: 469 + :widths: 16 20 20 470 + :header-rows: 1 471 + 472 + * - 亲和性 473 + - 带宽 (MiBps) 474 + - CPU利用率(%) 475 + 476 + * - system 477 + - 993.60 ±1.82 478 + - 75.49 ±0.06 479 + 480 + * - cache 481 + - 973.40 ±1.52 482 + - 74.90 ±0.07 483 + 484 + * - cache (strict) 485 + - 828.20 ±4.49 486 + - 66.84 ±0.29 487 + 488 + 现在,局部性和利用率间的权衡更清晰了。“cache”展示出相比 489 + “system”2%的带宽损失,而“cache (strict)”跌到20%。 490 + 491 + 492 + 结论和建议 493 + ---------- 494 + 495 + 在以上试验中,虽然一致并且也明显,但“cache”亲和性作用域 496 + 相比“system”的性能优势并不大。然而,这影响是依赖于作用域 497 + 间距离的,在更复杂的处理器拓扑下可能有更明显的影响。 498 + 499 + 虽然这些情形下缺少工作保持是有坏处的,但比“cache (strict)” 500 + 好多了,而且最大化工作队列利用率的需求也并不常见。因此, 501 + “cache”是非绑定池的默认亲和性作用域。 502 + 503 + * 由于不存在一个适用于大多数场景的选择,对于可能需要消耗 504 + 大量CPU的工作队列,建议通过 ``apply_workqueue_attrs()`` 505 + 进行(专门)配置,并考虑是否启用 ``WQ_SYSFS``。 506 + 507 + * 设置了严格“cpu”亲和性作用域的非绑定工作队列,它的行为与 508 + ``WQ_CPU_INTENSIVE`` 每CPU工作队列一样。后者没有真正 509 + 优势,而前者提供了大幅度的灵活性。 510 + 511 + * 亲和性作用域是从Linux v6.5起引入的。为了模拟旧版行为, 512 + 可以使用严格的“numa”亲和性作用域。 513 + 514 + * 不严格的亲和性作用域中,缺少工作保持大概缘于调度器。内核 515 + 为什么没能维护好大多数场景下的工作保持,把事情作对,还没有 516 + 理论上的解释。因此,未来调度器的改进可能会使我们不再需要 517 + 这些调节项。 518 + 519 + 520 + 检查配置 521 + ======== 522 + 523 + 使用 tools/workqueue/wq_dump.py(drgn脚本) 来检查未 524 + 绑定CPU的亲和性配置,工作者池,以及工作队列如何映射到池上: :: 525 + 526 + $ tools/workqueue/wq_dump.py 527 + Affinity Scopes 528 + =============== 529 + wq_unbound_cpumask=0000000f 530 + 531 + CPU 532 + nr_pods 4 533 + pod_cpus [0]=00000001 [1]=00000002 [2]=00000004 [3]=00000008 534 + pod_node [0]=0 [1]=0 [2]=1 [3]=1 535 + cpu_pod [0]=0 [1]=1 [2]=2 [3]=3 536 + 537 + SMT 538 + nr_pods 4 539 + pod_cpus [0]=00000001 [1]=00000002 [2]=00000004 [3]=00000008 540 + pod_node [0]=0 [1]=0 [2]=1 [3]=1 541 + cpu_pod [0]=0 [1]=1 [2]=2 [3]=3 542 + 543 + CACHE (default) 544 + nr_pods 2 545 + pod_cpus [0]=00000003 [1]=0000000c 546 + pod_node [0]=0 [1]=1 547 + cpu_pod [0]=0 [1]=0 [2]=1 [3]=1 548 + 549 + NUMA 550 + nr_pods 2 551 + pod_cpus [0]=00000003 [1]=0000000c 552 + pod_node [0]=0 [1]=1 553 + cpu_pod [0]=0 [1]=0 [2]=1 [3]=1 554 + 555 + SYSTEM 556 + nr_pods 1 557 + pod_cpus [0]=0000000f 558 + pod_node [0]=-1 559 + cpu_pod [0]=0 [1]=0 [2]=0 [3]=0 560 + 561 + Worker Pools 562 + ============ 563 + pool[00] ref= 1 nice= 0 idle/workers= 4/ 4 cpu= 0 564 + pool[01] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 0 565 + pool[02] ref= 1 nice= 0 idle/workers= 4/ 4 cpu= 1 566 + pool[03] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 1 567 + pool[04] ref= 1 nice= 0 idle/workers= 4/ 4 cpu= 2 568 + pool[05] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 2 569 + pool[06] ref= 1 nice= 0 idle/workers= 3/ 3 cpu= 3 570 + pool[07] ref= 1 
nice=-20 idle/workers= 2/ 2 cpu= 3 571 + pool[08] ref=42 nice= 0 idle/workers= 6/ 6 cpus=0000000f 572 + pool[09] ref=28 nice= 0 idle/workers= 3/ 3 cpus=00000003 573 + pool[10] ref=28 nice= 0 idle/workers= 17/ 17 cpus=0000000c 574 + pool[11] ref= 1 nice=-20 idle/workers= 1/ 1 cpus=0000000f 575 + pool[12] ref= 2 nice=-20 idle/workers= 1/ 1 cpus=00000003 576 + pool[13] ref= 2 nice=-20 idle/workers= 1/ 1 cpus=0000000c 577 + 578 + Workqueue CPU -> pool 579 + ===================== 580 + [ workqueue \ CPU 0 1 2 3 dfl] 581 + events percpu 0 2 4 6 582 + events_highpri percpu 1 3 5 7 583 + events_long percpu 0 2 4 6 584 + events_unbound unbound 9 9 10 10 8 585 + events_freezable percpu 0 2 4 6 586 + events_power_efficient percpu 0 2 4 6 587 + events_freezable_power_ percpu 0 2 4 6 588 + rcu_gp percpu 0 2 4 6 589 + rcu_par_gp percpu 0 2 4 6 590 + slub_flushwq percpu 0 2 4 6 591 + netns ordered 8 8 8 8 8 592 + ... 593 + 594 + 参见命令的帮助消息以获取更多信息。 595 + 596 + 597 + 监视 598 + ==== 599 + 600 + 使用 tools/workqueue/wq_monitor.py 来监视工作队列的运行: :: 601 + 602 + $ tools/workqueue/wq_monitor.py events 603 + total infl CPUtime CPUhog CMW/RPR mayday rescued 604 + events 18545 0 6.1 0 5 - - 605 + events_highpri 8 0 0.0 0 0 - - 606 + events_long 3 0 0.0 0 0 - - 607 + events_unbound 38306 0 0.1 - 7 - - 608 + events_freezable 0 0 0.0 0 0 - - 609 + events_power_efficient 29598 0 0.2 0 0 - - 610 + events_freezable_power_ 10 0 0.0 0 0 - - 611 + sock_diag_events 0 0 0.0 0 0 - - 612 + 613 + total infl CPUtime CPUhog CMW/RPR mayday rescued 614 + events 18548 0 6.1 0 5 - - 615 + events_highpri 8 0 0.0 0 0 - - 616 + events_long 3 0 0.0 0 0 - - 617 + events_unbound 38322 0 0.1 - 7 - - 618 + events_freezable 0 0 0.0 0 0 - - 619 + events_power_efficient 29603 0 0.2 0 0 - - 620 + events_freezable_power_ 10 0 0.0 0 0 - - 621 + sock_diag_events 0 0 0.0 0 0 - - 622 + 623 + ... 624 + 625 + 参见命令的帮助消息以获取更多信息。 309 626 310 627 311 628 调试 ··· 674 329 ============ 675 330 676 331 工作队列保证,如果在工作项排队后满足以下条件,则工作项不能重入: 677 - 678 332 679 333 1. 工作函数没有被改变。 680 334 2. 没有人将该工作项排到另一个工作队列中。
+15 -16
MAINTAINERS
··· 553 553 F: drivers/input/misc/adxl34x.c 554 554 555 555 ADXL355 THREE-AXIS DIGITAL ACCELEROMETER DRIVER 556 - M: Puranjay Mohan <puranjay12@gmail.com> 556 + M: Puranjay Mohan <puranjay@kernel.org> 557 557 L: linux-iio@vger.kernel.org 558 558 S: Supported 559 559 F: Documentation/devicetree/bindings/iio/accel/adi,adxl355.yaml ··· 3714 3714 3715 3715 BPF JIT for ARM 3716 3716 M: Russell King <linux@armlinux.org.uk> 3717 - M: Puranjay Mohan <puranjay12@gmail.com> 3717 + M: Puranjay Mohan <puranjay@kernel.org> 3718 3718 L: bpf@vger.kernel.org 3719 3719 S: Maintained 3720 3720 F: arch/arm/net/ ··· 3764 3764 3765 3765 BPF JIT for RISC-V (64-bit) 3766 3766 M: Björn Töpel <bjorn@kernel.org> 3767 + R: Pu Lehui <pulehui@huawei.com> 3768 + R: Puranjay Mohan <puranjay@kernel.org> 3767 3769 L: bpf@vger.kernel.org 3768 3770 S: Maintained 3769 3771 F: arch/riscv/net/ ··· 4201 4199 F: drivers/scsi/bnx2i/ 4202 4200 4203 4201 BROADCOM BNX2X 10 GIGABIT ETHERNET DRIVER 4204 - M: Ariel Elior <aelior@marvell.com> 4205 4202 M: Sudarsana Kalluru <skalluru@marvell.com> 4206 4203 M: Manish Chopra <manishc@marvell.com> 4207 4204 L: netdev@vger.kernel.org ··· 15191 15190 F: drivers/scsi/myrs.* 15192 15191 15193 15192 MYRICOM MYRI-10G 10GbE DRIVER (MYRI10GE) 15194 - M: Chris Lee <christopher.lee@cspi.com> 15195 15193 L: netdev@vger.kernel.org 15196 - S: Supported 15194 + S: Orphan 15197 15195 W: https://www.cspi.com/ethernet-products/support/downloads/ 15198 15196 F: drivers/net/ethernet/myricom/myri10ge/ 15199 15197 ··· 16829 16829 F: drivers/leds/leds-pca9532.c 16830 16830 F: include/linux/leds-pca9532.h 16831 16831 16832 - PCA9541 I2C BUS MASTER SELECTOR DRIVER 16833 - M: Guenter Roeck <linux@roeck-us.net> 16834 - L: linux-i2c@vger.kernel.org 16835 - S: Maintained 16836 - F: drivers/i2c/muxes/i2c-mux-pca9541.c 16837 - 16838 16832 PCI DRIVER FOR AARDVARK (Marvell Armada 3700) 16839 16833 M: Thomas Petazzoni <thomas.petazzoni@bootlin.com> 16840 16834 M: Pali Rohár <pali@kernel.org> ··· 17905 17911 F: drivers/media/rc/pwm-ir-tx.c 17906 17912 17907 17913 PWM SUBSYSTEM 17908 - M: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> 17914 + M: Uwe Kleine-König <ukleinek@kernel.org> 17909 17915 L: linux-pwm@vger.kernel.org 17910 17916 S: Maintained 17911 17917 Q: https://patchwork.ozlabs.org/project/linux-pwm/list/ ··· 18029 18035 F: drivers/scsi/qedi/ 18030 18036 18031 18037 QLOGIC QL4xxx ETHERNET DRIVER 18032 - M: Ariel Elior <aelior@marvell.com> 18033 18038 M: Manish Chopra <manishc@marvell.com> 18034 18039 L: netdev@vger.kernel.org 18035 18040 S: Supported ··· 18038 18045 18039 18046 QLOGIC QL4xxx RDMA DRIVER 18040 18047 M: Michal Kalderon <mkalderon@marvell.com> 18041 - M: Ariel Elior <aelior@marvell.com> 18042 18048 L: linux-rdma@vger.kernel.org 18043 18049 S: Supported 18044 18050 F: drivers/infiniband/hw/qedr/ ··· 20207 20215 20208 20216 SIOX 20209 20217 M: Thorsten Scherer <t.scherer@eckelmann.de> 20210 - M: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> 20211 20218 R: Pengutronix Kernel Team <kernel@pengutronix.de> 20212 20219 S: Supported 20213 20220 F: drivers/gpio/gpio-siox.c ··· 21956 21965 F: include/linux/soc/ti/ti_sci_protocol.h 21957 21966 21958 21967 TEXAS INSTRUMENTS' TMP117 TEMPERATURE SENSOR DRIVER 21959 - M: Puranjay Mohan <puranjay12@gmail.com> 21968 + M: Puranjay Mohan <puranjay@kernel.org> 21960 21969 L: linux-iio@vger.kernel.org 21961 21970 S: Supported 21962 21971 F: Documentation/devicetree/bindings/iio/temperature/ti,tmp117.yaml ··· 24497 24506 T: git 
git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git for-next/hardening 24498 24507 F: Documentation/admin-guide/LSM/Yama.rst 24499 24508 F: security/yama/ 24509 + 24510 + YAML NETLINK (YNL) 24511 + M: Donald Hunter <donald.hunter@gmail.com> 24512 + M: Jakub Kicinski <kuba@kernel.org> 24513 + F: Documentation/netlink/ 24514 + F: Documentation/userspace-api/netlink/intro-specs.rst 24515 + F: Documentation/userspace-api/netlink/specs.rst 24516 + F: tools/net/ynl/ 24500 24517 24501 24518 YEALINK PHONE DRIVER 24502 24519 M: Henk Vergonet <Henk.Vergonet@gmail.com>
+1 -1
Makefile
··· 2 2 VERSION = 6 3 3 PATCHLEVEL = 9 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc5 5 + EXTRAVERSION = -rc6 6 6 NAME = Hurr durr I'ma ninja sloth 7 7 8 8 # *DOCUMENTATION*
+8
arch/Kconfig
··· 9 9 # 10 10 source "arch/$(SRCARCH)/Kconfig" 11 11 12 + config ARCH_CONFIGURES_CPU_MITIGATIONS 13 + bool 14 + 15 + if !ARCH_CONFIGURES_CPU_MITIGATIONS 16 + config CPU_MITIGATIONS 17 + def_bool y 18 + endif 19 + 12 20 menu "General architecture-dependent options" 13 21 14 22 config ARCH_HAS_SUBPAGE_FAULTS
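
With this hunk, every architecture that does not select ARCH_CONFIGURES_CPU_MITIGATIONS gets CPU_MITIGATIONS=y by default, and generic code can key off the new symbol at build time. A minimal sketch of such a consumer, with the helper name assumed for illustration only:

    #include <linux/kconfig.h>
    #include <linux/types.h>

    /* Sketch (assumption): generic code asking whether this kernel was
     * built with CPU mitigations enabled, per the CPU_MITIGATIONS symbol
     * introduced above. */
    static inline bool cpu_mitigations_built_in(void)
    {
            return IS_ENABLED(CONFIG_CPU_MITIGATIONS);
    }
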
-1
arch/arc/Kconfig
··· 6 6 config ARC 7 7 def_bool y 8 8 select ARC_TIMERS 9 - select ARCH_HAS_CPU_CACHE_ALIASING 10 9 select ARCH_HAS_CACHE_LINE_SIZE 11 10 select ARCH_HAS_DEBUG_VM_PGTABLE 12 11 select ARCH_HAS_DMA_PREP_COHERENT
+2 -2
arch/arc/boot/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 3 - # uImage build relies on mkimage being availble on your host for ARC target 3 + # uImage build relies on mkimage being available on your host for ARC target 4 4 # You will need to build u-boot for ARC, rename mkimage to arc-elf32-mkimage 5 - # and make sure it's reacable from your PATH 5 + # and make sure it's reachable from your PATH 6 6 7 7 OBJCOPYFLAGS= -O binary -R .note -R .note.gnu.build-id -R .comment -S 8 8
+2 -2
arch/arc/boot/dts/axc003.dtsi
··· 119 119 /* 120 120 * The DW APB ICTL intc on MB is connected to CPU intc via a 121 121 * DT "invisible" DW APB GPIO block, configured to simply pass thru 122 - * interrupts - setup accordinly in platform init (plat-axs10x/ax10x.c) 122 + * interrupts - setup accordingly in platform init (plat-axs10x/ax10x.c) 123 123 * 124 - * So here we mimic a direct connection betwen them, ignoring the 124 + * So here we mimic a direct connection between them, ignoring the 125 125 * ABPG GPIO. Thus set "interrupts = <24>" (DW APB GPIO to core) 126 126 * instead of "interrupts = <12>" (DW APB ICTL to DW APB GPIO) 127 127 *
-1
arch/arc/boot/dts/hsdk.dts
··· 205 205 }; 206 206 207 207 gmac: ethernet@8000 { 208 - #interrupt-cells = <1>; 209 208 compatible = "snps,dwmac"; 210 209 reg = <0x8000 0x2000>; 211 210 interrupts = <10>;
+1 -1
arch/arc/boot/dts/vdk_axs10x_mb.dtsi
··· 113 113 /* 114 114 * Embedded Vision subsystem UIO mappings; only relevant for EV VDK 115 115 * 116 - * This node is intentionally put outside of MB above becase 116 + * This node is intentionally put outside of MB above because 117 117 * it maps areas outside of MB's 0xez-0xfz. 118 118 */ 119 119 uio_ev: uio@d0000000 {
-9
arch/arc/include/asm/cachetype.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - #ifndef __ASM_ARC_CACHETYPE_H 3 - #define __ASM_ARC_CACHETYPE_H 4 - 5 - #include <linux/types.h> 6 - 7 - #define cpu_dcache_is_aliasing() true 8 - 9 - #endif
+1 -1
arch/arc/include/asm/dsp.h
··· 12 12 /* 13 13 * DSP-related saved registers - need to be saved only when you are 14 14 * scheduled out. 15 - * structure fields name must correspond to aux register defenitions for 15 + * structure fields name must correspond to aux register definitions for 16 16 * automatic offset calculation in DSP_AUX_SAVE_RESTORE macros 17 17 */ 18 18 struct dsp_callee_regs {
+5 -5
arch/arc/include/asm/entry-compact.h
··· 7 7 * Stack switching code can no longer reliably rely on the fact that 8 8 * if we are NOT in user mode, stack is switched to kernel mode. 9 9 * e.g. L2 IRQ interrupted a L1 ISR which had not yet completed 10 - * it's prologue including stack switching from user mode 10 + * its prologue including stack switching from user mode 11 11 * 12 12 * Vineetg: Aug 28th 2008: Bug #94984 13 13 * -Zero Overhead Loop Context shd be cleared when entering IRQ/EXcp/Trap ··· 143 143 * 2. L1 IRQ taken, ISR starts (CPU auto-switched to KERNEL mode) 144 144 * 3. But before it could switch SP from USER to KERNEL stack 145 145 * a L2 IRQ "Interrupts" L1 146 - * Thay way although L2 IRQ happened in Kernel mode, stack is still 146 + * That way although L2 IRQ happened in Kernel mode, stack is still 147 147 * not switched. 148 148 * To handle this, we may need to switch stack even if in kernel mode 149 149 * provided SP has values in range of USER mode stack ( < 0x7000_0000 ) ··· 173 173 174 174 GET_CURR_TASK_ON_CPU r9 175 175 176 - /* With current tsk in r9, get it's kernel mode stack base */ 176 + /* With current tsk in r9, get its kernel mode stack base */ 177 177 GET_TSK_STACK_BASE r9, r9 178 178 179 179 /* save U mode SP @ pt_regs->sp */ ··· 282 282 * NOTE: 283 283 * 284 284 * It is recommended that lp_count/ilink1/ilink2 not be used as a dest reg 285 - * for memory load operations. If used in that way interrupts are deffered 285 + * for memory load operations. If used in that way interrupts are deferred 286 286 * by hardware and that is not good. 287 287 *-------------------------------------------------------------*/ 288 288 .macro EXCEPTION_EPILOGUE ··· 350 350 * NOTE: 351 351 * 352 352 * It is recommended that lp_count/ilink1/ilink2 not be used as a dest reg 353 - * for memory load operations. If used in that way interrupts are deffered 353 + * for memory load operations. If used in that way interrupts are deferred 354 354 * by hardware and that is not good. 355 355 *-------------------------------------------------------------*/ 356 356 .macro INTERRUPT_EPILOGUE LVL
+2 -2
arch/arc/include/asm/entry.h
··· 7 7 #ifndef __ASM_ARC_ENTRY_H 8 8 #define __ASM_ARC_ENTRY_H 9 9 10 - #include <asm/unistd.h> /* For NR_syscalls defination */ 10 + #include <asm/unistd.h> /* For NR_syscalls definition */ 11 11 #include <asm/arcregs.h> 12 12 #include <asm/ptrace.h> 13 13 #include <asm/processor.h> /* For VMALLOC_START */ ··· 56 56 .endm 57 57 58 58 /*------------------------------------------------------------- 59 - * given a tsk struct, get to the base of it's kernel mode stack 59 + * given a tsk struct, get to the base of its kernel mode stack 60 60 * tsk->thread_info is really a PAGE, whose bottom hoists stack 61 61 * which grows upwards towards thread_info 62 62 *------------------------------------------------------------*/
+1 -1
arch/arc/include/asm/irq.h
··· 10 10 * ARCv2 can support 240 interrupts in the core interrupts controllers and 11 11 * 128 interrupts in IDU. Thus 512 virtual IRQs must be enough for most 12 12 * configurations of boards. 13 - * This doesnt affect ARCompact, but we change it to same value 13 + * This doesn't affect ARCompact, but we change it to same value 14 14 */ 15 15 #define NR_IRQS 512 16 16
+1 -1
arch/arc/include/asm/irqflags-compact.h
··· 46 46 * IRQ Control Macros 47 47 * 48 48 * All of them have "memory" clobber (compiler barrier) which is needed to 49 - * ensure that LD/ST requiring irq safetly (R-M-W when LLSC is not available) 49 + * ensure that LD/ST requiring irq safety (R-M-W when LLSC is not available) 50 50 * are redone after IRQs are re-enabled (and gcc doesn't reuse stale register) 51 51 * 52 52 * Noted at the time of Abilis Timer List corruption
+1 -1
arch/arc/include/asm/mmu_context.h
··· 165 165 * for retiring-mm. However destroy_context( ) still needs to do that because 166 166 * between mm_release( ) = >deactive_mm( ) and 167 167 * mmput => .. => __mmdrop( ) => destroy_context( ) 168 - * there is a good chance that task gets sched-out/in, making it's ASID valid 168 + * there is a good chance that task gets sched-out/in, making its ASID valid 169 169 * again (this teased me for a whole day). 170 170 */ 171 171
+1 -1
arch/arc/include/asm/pgtable-bits-arcv2.h
··· 66 66 * Other rules which cause the divergence from 1:1 mapping 67 67 * 68 68 * 1. Although ARC700 can do exclusive execute/write protection (meaning R 69 - * can be tracked independet of X/W unlike some other CPUs), still to 69 + * can be tracked independently of X/W unlike some other CPUs), still to 70 70 * keep things consistent with other archs: 71 71 * -Write implies Read: W => R 72 72 * -Execute implies Read: X => R
+1 -1
arch/arc/include/asm/ptrace.h
··· 169 169 return *(unsigned long *)((unsigned long)regs + offset); 170 170 } 171 171 172 - extern int syscall_trace_entry(struct pt_regs *); 172 + extern int syscall_trace_enter(struct pt_regs *); 173 173 extern void syscall_trace_exit(struct pt_regs *); 174 174 175 175 #endif /* !__ASSEMBLY__ */
+1 -1
arch/arc/include/asm/shmparam.h
··· 6 6 #ifndef __ARC_ASM_SHMPARAM_H 7 7 #define __ARC_ASM_SHMPARAM_H 8 8 9 - /* Handle upto 2 cache bins */ 9 + /* Handle up to 2 cache bins */ 10 10 #define SHMLBA (2 * PAGE_SIZE) 11 11 12 12 /* Enforce SHMLBA in shmat */
+2 -2
arch/arc/include/asm/smp.h
··· 77 77 78 78 /* 79 79 * ARC700 doesn't support atomic Read-Modify-Write ops. 80 - * Originally Interrupts had to be disabled around code to gaurantee atomicity. 80 + * Originally Interrupts had to be disabled around code to guarantee atomicity. 81 81 * The LLOCK/SCOND insns allow writing interrupt-hassle-free based atomic ops 82 82 * based on retry-if-irq-in-atomic (with hardware assist). 83 83 * However despite these, we provide the IRQ disabling variant ··· 86 86 * support needed. 87 87 * 88 88 * (2) In a SMP setup, the LLOCK/SCOND atomicity across CPUs needs to be 89 - * gaurantted by the platform (not something which core handles). 89 + * guaranteed by the platform (not something which core handles). 90 90 * Assuming a platform won't, SMP Linux needs to use spinlocks + local IRQ 91 91 * disabling for atomicity. 92 92 *
+1 -1
arch/arc/include/asm/thread_info.h
··· 38 38 struct thread_info { 39 39 unsigned long flags; /* low level flags */ 40 40 unsigned long ksp; /* kernel mode stack top in __switch_to */ 41 - int preempt_count; /* 0 => preemptable, <0 => BUG */ 41 + int preempt_count; /* 0 => preemptible, <0 => BUG */ 42 42 int cpu; /* current CPU */ 43 43 unsigned long thr_ptr; /* TLS ptr */ 44 44 struct task_struct *task; /* main task structure */
+1 -1
arch/arc/include/uapi/asm/swab.h
··· 62 62 * 8051fdc4: st r2,[r1,20] ; Mem op : save result back to mem 63 63 * 64 64 * Joern suggested a better "C" algorithm which is great since 65 - * (1) It is portable to any architecure 65 + * (1) It is portable to any architecture 66 66 * (2) At the same time it takes advantage of ARC ISA (rotate intrns) 67 67 */ 68 68
+4 -4
arch/arc/kernel/entry-arcv2.S
··· 5 5 * Copyright (C) 2013 Synopsys, Inc. (www.synopsys.com) 6 6 */ 7 7 8 - #include <linux/linkage.h> /* ARC_{EXTRY,EXIT} */ 8 + #include <linux/linkage.h> /* ARC_{ENTRY,EXIT} */ 9 9 #include <asm/entry.h> /* SAVE_ALL_{INT1,INT2,TRAP...} */ 10 10 #include <asm/errno.h> 11 11 #include <asm/arcregs.h> ··· 31 31 VECTOR mem_service ; Mem exception 32 32 VECTOR instr_service ; Instrn Error 33 33 VECTOR EV_MachineCheck ; Fatal Machine check 34 - VECTOR EV_TLBMissI ; Intruction TLB miss 34 + VECTOR EV_TLBMissI ; Instruction TLB miss 35 35 VECTOR EV_TLBMissD ; Data TLB miss 36 36 VECTOR EV_TLBProtV ; Protection Violation 37 37 VECTOR EV_PrivilegeV ; Privilege Violation ··· 76 76 # query in hard ISR path would return false (since .IE is set) which would 77 77 # trips genirq interrupt handling asserts. 78 78 # 79 - # So do a "soft" disable of interrutps here. 79 + # So do a "soft" disable of interrupts here. 80 80 # 81 81 # Note this disable is only for consistent book-keeping as further interrupts 82 82 # will be disabled anyways even w/o this. Hardware tracks active interrupts 83 - # seperately in AUX_IRQ_ACT.active and will not take new interrupts 83 + # separately in AUX_IRQ_ACT.active and will not take new interrupts 84 84 # unless this one returns (or higher prio becomes pending in 2-prio scheme) 85 85 86 86 IRQ_DISABLE
+2 -2
arch/arc/kernel/entry.S
··· 95 95 lr r0, [efa] 96 96 mov r1, sp 97 97 98 - ; MC excpetions disable MMU 98 + ; MC exceptions disable MMU 99 99 ARC_MMU_REENABLE r3 100 100 101 101 lsr r3, r10, 8 ··· 209 209 210 210 ; --------------------------------------------- 211 211 ; syscall TRAP 212 - ; ABI: (r0-r7) upto 8 args, (r8) syscall number 212 + ; ABI: (r0-r7) up to 8 args, (r8) syscall number 213 213 ; --------------------------------------------- 214 214 215 215 ENTRY(EV_Trap)
+1 -1
arch/arc/kernel/head.S
··· 165 165 ; setup stack (fp, sp) 166 166 mov fp, 0 167 167 168 - ; set it's stack base to tsk->thread_info bottom 168 + ; set its stack base to tsk->thread_info bottom 169 169 GET_TSK_STACK_BASE r0, sp 170 170 171 171 j start_kernel_secondary
+1 -1
arch/arc/kernel/intc-arcv2.c
··· 56 56 WRITE_AUX(AUX_IRQ_CTRL, ictrl); 57 57 58 58 /* 59 - * ARCv2 core intc provides multiple interrupt priorities (upto 16). 59 + * ARCv2 core intc provides multiple interrupt priorities (up to 16). 60 60 * Typical builds though have only two levels (0-high, 1-low) 61 61 * Linux by default uses lower prio 1 for most irqs, reserving 0 for 62 62 * NMI style interrupts in future (say perf)
+4 -3
arch/arc/kernel/kprobes.c
··· 190 190 } 191 191 } 192 192 193 - int __kprobes arc_kprobe_handler(unsigned long addr, struct pt_regs *regs) 193 + static int 194 + __kprobes arc_kprobe_handler(unsigned long addr, struct pt_regs *regs) 194 195 { 195 196 struct kprobe *p; 196 197 struct kprobe_ctlblk *kcb; ··· 242 241 return 0; 243 242 } 244 243 245 - static int __kprobes arc_post_kprobe_handler(unsigned long addr, 246 - struct pt_regs *regs) 244 + static int 245 + __kprobes arc_post_kprobe_handler(unsigned long addr, struct pt_regs *regs) 247 246 { 248 247 struct kprobe *cur = kprobe_running(); 249 248 struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+1 -1
arch/arc/kernel/perf_event.c
··· 38 38 * (based on a specific RTL build) 39 39 * Below is the static map between perf generic/arc specific event_id and 40 40 * h/w condition names. 41 - * At the time of probe, we loop thru each index and find it's name to 41 + * At the time of probe, we loop thru each index and find its name to 42 42 * complete the mapping of perf event_id to h/w index as latter is needed 43 43 * to program the counter really 44 44 */
+1 -1
arch/arc/kernel/setup.c
··· 390 390 #ifdef CONFIG_ARC_HAS_DCCM 391 391 /* 392 392 * DCCM can be arbit placed in hardware. 393 - * Make sure it's placement/sz matches what Linux is built with 393 + * Make sure its placement/sz matches what Linux is built with 394 394 */ 395 395 if ((unsigned int)__arc_dccm_base != info->dccm.base) 396 396 panic("Linux built with incorrect DCCM Base address\n");
+4 -3
arch/arc/kernel/signal.c
··· 8 8 * 9 9 * vineetg: Nov 2009 (Everything needed for TIF_RESTORE_SIGMASK) 10 10 * -do_signal() supports TIF_RESTORE_SIGMASK 11 - * -do_signal() no loner needs oldset, required by OLD sys_sigsuspend 12 - * -sys_rt_sigsuspend() now comes from generic code, so discard arch implemen 11 + * -do_signal() no longer needs oldset, required by OLD sys_sigsuspend 12 + * -sys_rt_sigsuspend() now comes from generic code, so discard arch 13 + * implementation 13 14 * -sys_sigsuspend() no longer needs to fudge ptregs, hence that arg removed 14 15 * -sys_sigsuspend() no longer loops for do_signal(), sets TIF_xxx and leaves 15 16 * the job to do_signal() 16 17 * 17 18 * vineetg: July 2009 18 19 * -Modified Code to support the uClibc provided userland sigreturn stub 19 - * to avoid kernel synthesing it on user stack at runtime, costing TLB 20 + * to avoid kernel synthesizing it on user stack at runtime, costing TLB 20 21 * probes and Cache line flushes. 21 22 * 22 23 * vineetg: July 2009
+1 -1
arch/arc/kernel/traps.c
··· 89 89 90 90 /* 91 91 * Entry point for miscll errors such as Nested Exceptions 92 - * -Duplicate TLB entry is handled seperately though 92 + * -Duplicate TLB entry is handled separately though 93 93 */ 94 94 void do_machine_check_fault(unsigned long address, struct pt_regs *regs) 95 95 {
+2 -2
arch/arc/kernel/vmlinux.lds.S
··· 41 41 #endif 42 42 43 43 /* 44 - * The reason for having a seperate subsection .init.ramfs is to 45 - * prevent objump from including it in kernel dumps 44 + * The reason for having a separate subsection .init.ramfs is to 45 + * prevent objdump from including it in kernel dumps 46 46 * 47 47 * Reason for having .init.ramfs above .init is to make sure that the 48 48 * binary blob is tucked away to one side, reducing the displacement
+2 -2
arch/arc/mm/tlb.c
··· 212 212 unsigned long flags; 213 213 214 214 /* If range @start to @end is more than 32 TLB entries deep, 215 - * its better to move to a new ASID rather than searching for 215 + * it's better to move to a new ASID rather than searching for 216 216 * individual entries and then shooting them down 217 217 * 218 218 * The calc above is rough, doesn't account for unaligned parts, ··· 408 408 * -More importantly it makes this handler inconsistent with fast-path 409 409 * TLB Refill handler which always deals with "current" 410 410 * 411 - * Lets see the use cases when current->mm != vma->mm and we land here 411 + * Let's see the use cases when current->mm != vma->mm and we land here 412 412 * 1. execve->copy_strings()->__get_user_pages->handle_mm_fault 413 413 * Here VM wants to pre-install a TLB entry for user stack while 414 414 * current->mm still points to pre-execve mm (hence the condition).
+4 -4
arch/arc/mm/tlbex.S
··· 5 5 * Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com) 6 6 * 7 7 * Vineetg: April 2011 : 8 - * -MMU v1: moved out legacy code into a seperate file 8 + * -MMU v1: moved out legacy code into a separate file 9 9 * -MMU v3: PD{0,1} bits layout changed: They don't overlap anymore, 10 10 * helps avoid a shift when preparing PD0 from PTE 11 11 * 12 12 * Vineetg: July 2009 13 - * -For MMU V2, we need not do heuristics at the time of commiting a D-TLB 14 - * entry, so that it doesn't knock out it's I-TLB entry 13 + * -For MMU V2, we need not do heuristics at the time of committing a D-TLB 14 + * entry, so that it doesn't knock out its I-TLB entry 15 15 * -Some more fine tuning: 16 16 * bmsk instead of add, asl.cc instead of branch, delay slot utilise etc 17 17 * 18 18 * Vineetg: July 2009 19 19 * -Practically rewrote the I/D TLB Miss handlers 20 - * Now 40 and 135 instructions a peice as compared to 131 and 449 resp. 20 + * Now 40 and 135 instructions apiece as compared to 131 and 449 resp. 21 21 * Hence Leaner by 1.5 K 22 22 * Used Conditional arithmetic to replace excessive branching 23 23 * Also used short instructions wherever possible
+4 -4
arch/arm/boot/dts/microchip/at91-sama7g54_curiosity.dts
··· 242 242 243 243 regulator-state-standby { 244 244 regulator-on-in-suspend; 245 - regulator-suspend-voltage = <1150000>; 245 + regulator-suspend-microvolt = <1150000>; 246 246 regulator-mode = <4>; 247 247 }; 248 248 ··· 263 263 264 264 regulator-state-standby { 265 265 regulator-on-in-suspend; 266 - regulator-suspend-voltage = <1050000>; 266 + regulator-suspend-microvolt = <1050000>; 267 267 regulator-mode = <4>; 268 268 }; 269 269 ··· 280 280 regulator-always-on; 281 281 282 282 regulator-state-standby { 283 - regulator-suspend-voltage = <1800000>; 283 + regulator-suspend-microvolt = <1800000>; 284 284 regulator-on-in-suspend; 285 285 }; 286 286 ··· 296 296 regulator-always-on; 297 297 298 298 regulator-state-standby { 299 - regulator-suspend-voltage = <3300000>; 299 + regulator-suspend-microvolt = <3300000>; 300 300 regulator-on-in-suspend; 301 301 }; 302 302
+4 -4
arch/arm/boot/dts/microchip/at91-sama7g5ek.dts
··· 293 293 294 294 regulator-state-standby { 295 295 regulator-on-in-suspend; 296 - regulator-suspend-voltage = <1150000>; 296 + regulator-suspend-microvolt = <1150000>; 297 297 regulator-mode = <4>; 298 298 }; 299 299 ··· 314 314 315 315 regulator-state-standby { 316 316 regulator-on-in-suspend; 317 - regulator-suspend-voltage = <1050000>; 317 + regulator-suspend-microvolt = <1050000>; 318 318 regulator-mode = <4>; 319 319 }; 320 320 ··· 331 331 regulator-always-on; 332 332 333 333 regulator-state-standby { 334 - regulator-suspend-voltage = <1800000>; 334 + regulator-suspend-microvolt = <1800000>; 335 335 regulator-on-in-suspend; 336 336 }; 337 337 ··· 346 346 regulator-max-microvolt = <3700000>; 347 347 348 348 regulator-state-standby { 349 - regulator-suspend-voltage = <1800000>; 349 + regulator-suspend-microvolt = <1800000>; 350 350 regulator-on-in-suspend; 351 351 }; 352 352
+1
arch/arm/boot/dts/nxp/imx/imx6ull-tarragon-common.dtsi
··· 805 805 &pinctrl_usb_pwr>; 806 806 dr_mode = "host"; 807 807 power-active-high; 808 + over-current-active-low; 808 809 disable-over-current; 809 810 status = "okay"; 810 811 };
+43 -13
arch/arm/net/bpf_jit_32.c
··· 871 871 } 872 872 873 873 /* dst = src (4 bytes)*/ 874 - static inline void emit_a32_mov_r(const s8 dst, const s8 src, const u8 off, 875 - struct jit_ctx *ctx) { 874 + static inline void emit_a32_mov_r(const s8 dst, const s8 src, struct jit_ctx *ctx) { 876 875 const s8 *tmp = bpf2a32[TMP_REG_1]; 877 876 s8 rt; 878 877 879 878 rt = arm_bpf_get_reg32(src, tmp[0], ctx); 880 - if (off && off != 32) { 881 - emit(ARM_LSL_I(rt, rt, 32 - off), ctx); 882 - emit(ARM_ASR_I(rt, rt, 32 - off), ctx); 883 - } 884 879 arm_bpf_put_reg32(dst, rt, ctx); 885 880 } 886 881 ··· 884 889 const s8 src[], 885 890 struct jit_ctx *ctx) { 886 891 if (!is64) { 887 - emit_a32_mov_r(dst_lo, src_lo, 0, ctx); 892 + emit_a32_mov_r(dst_lo, src_lo, ctx); 888 893 if (!ctx->prog->aux->verifier_zext) 889 894 /* Zero out high 4 bytes */ 890 895 emit_a32_mov_i(dst_hi, 0, ctx); 891 896 } else if (__LINUX_ARM_ARCH__ < 6 && 892 897 ctx->cpu_architecture < CPU_ARCH_ARMv5TE) { 893 898 /* complete 8 byte move */ 894 - emit_a32_mov_r(dst_lo, src_lo, 0, ctx); 895 - emit_a32_mov_r(dst_hi, src_hi, 0, ctx); 899 + emit_a32_mov_r(dst_lo, src_lo, ctx); 900 + emit_a32_mov_r(dst_hi, src_hi, ctx); 896 901 } else if (is_stacked(src_lo) && is_stacked(dst_lo)) { 897 902 const u8 *tmp = bpf2a32[TMP_REG_1]; 898 903 ··· 912 917 static inline void emit_a32_movsx_r64(const bool is64, const u8 off, const s8 dst[], const s8 src[], 913 918 struct jit_ctx *ctx) { 914 919 const s8 *tmp = bpf2a32[TMP_REG_1]; 915 - const s8 *rt; 920 + s8 rs; 921 + s8 rd; 916 922 917 - rt = arm_bpf_get_reg64(dst, tmp, ctx); 923 + if (is_stacked(dst_lo)) 924 + rd = tmp[1]; 925 + else 926 + rd = dst_lo; 927 + rs = arm_bpf_get_reg32(src_lo, rd, ctx); 928 + /* rs may be one of src[1], dst[1], or tmp[1] */ 918 929 919 - emit_a32_mov_r(dst_lo, src_lo, off, ctx); 930 + /* Sign extend rs if needed. If off == 32, lower 32-bits of src are moved to dst and sign 931 + * extension only happens in the upper 64 bits. 932 + */ 933 + if (off != 32) { 934 + /* Sign extend rs into rd */ 935 + emit(ARM_LSL_I(rd, rs, 32 - off), ctx); 936 + emit(ARM_ASR_I(rd, rd, 32 - off), ctx); 937 + } else { 938 + rd = rs; 939 + } 940 + 941 + /* Write rd to dst_lo 942 + * 943 + * Optimization: 944 + * Assume: 945 + * 1. dst == src and stacked. 946 + * 2. off == 32 947 + * 948 + * In this case src_lo was loaded into rd(tmp[1]) but rd was not sign extended as off==32. 949 + * So, we don't need to write rd back to dst_lo as they have the same value. 950 + * This saves us one str instruction. 951 + */ 952 + if (dst_lo != src_lo || off != 32) 953 + arm_bpf_put_reg32(dst_lo, rd, ctx); 954 + 920 955 if (!is64) { 921 956 if (!ctx->prog->aux->verifier_zext) 922 957 /* Zero out high 4 bytes */ 923 958 emit_a32_mov_i(dst_hi, 0, ctx); 924 959 } else { 925 - emit(ARM_ASR_I(rt[0], rt[1], 31), ctx); 960 + if (is_stacked(dst_hi)) { 961 + emit(ARM_ASR_I(tmp[0], rd, 31), ctx); 962 + arm_bpf_put_reg32(dst_hi, tmp[0], ctx); 963 + } else { 964 + emit(ARM_ASR_I(dst_hi, rd, 31), ctx); 965 + } 926 966 } 927 967 } 928 968
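
The rewritten emit_a32_movsx_r64() above implements BPF MOVSX on a 64-bit register held as two 32-bit halves: for off of 8 or 16 the low half is sign-extended in place, while for off == 32 the low half passes through unchanged and only the high half is derived from the sign bit, which is why the shift pair is skipped in that case. A plain C model of these semantics, written as a sketch for illustration only:

    #include <stdint.h>

    /* Sketch (assumption): C model of the BPF MOVSX semantics the ARM32
     * JIT emits. For off == 32 the low 32 bits are copied unchanged;
     * only the upper 32 bits are produced by sign extension. */
    static int64_t movsx64(uint64_t src, int off)
    {
            switch (off) {
            case 8:
                    return (int8_t)src;     /* sign-extend low 8 bits */
            case 16:
                    return (int16_t)src;    /* sign-extend low 16 bits */
            case 32:
                    return (int32_t)src;    /* sign-extend low 32 bits */
            default:
                    return (int64_t)src;    /* plain 64-bit move */
            }
    }
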
+1 -1
arch/arm64/boot/dts/freescale/imx8mp.dtsi
··· 1672 1672 <&clk IMX8MP_CLK_MEDIA_MIPI_PHY1_REF_ROOT>, 1673 1673 <&clk IMX8MP_CLK_MEDIA_AXI_ROOT>; 1674 1674 clock-names = "pclk", "wrap", "phy", "axi"; 1675 - assigned-clocks = <&clk IMX8MP_CLK_MEDIA_CAM1_PIX>, 1675 + assigned-clocks = <&clk IMX8MP_CLK_MEDIA_CAM2_PIX>, 1676 1676 <&clk IMX8MP_CLK_MEDIA_MIPI_PHY1_REF>; 1677 1677 assigned-clock-parents = <&clk IMX8MP_SYS_PLL2_1000M>, 1678 1678 <&clk IMX8MP_CLK_24M>;
+4 -4
arch/arm64/boot/dts/mediatek/mt2712-evb.dts
··· 129 129 }; 130 130 131 131 &pio { 132 - eth_default: eth_default { 132 + eth_default: eth-default-pins { 133 133 tx_pins { 134 134 pinmux = <MT2712_PIN_71_GBE_TXD3__FUNC_GBE_TXD3>, 135 135 <MT2712_PIN_72_GBE_TXD2__FUNC_GBE_TXD2>, ··· 156 156 }; 157 157 }; 158 158 159 - eth_sleep: eth_sleep { 159 + eth_sleep: eth-sleep-pins { 160 160 tx_pins { 161 161 pinmux = <MT2712_PIN_71_GBE_TXD3__FUNC_GPIO71>, 162 162 <MT2712_PIN_72_GBE_TXD2__FUNC_GPIO72>, ··· 182 182 }; 183 183 }; 184 184 185 - usb0_id_pins_float: usb0_iddig { 185 + usb0_id_pins_float: usb0-iddig-pins { 186 186 pins_iddig { 187 187 pinmux = <MT2712_PIN_12_IDDIG_P0__FUNC_IDDIG_A>; 188 188 bias-pull-up; 189 189 }; 190 190 }; 191 191 192 - usb1_id_pins_float: usb1_iddig { 192 + usb1_id_pins_float: usb1-iddig-pins { 193 193 pins_iddig { 194 194 pinmux = <MT2712_PIN_14_IDDIG_P1__FUNC_IDDIG_B>; 195 195 bias-pull-up;
+2 -1
arch/arm64/boot/dts/mediatek/mt2712e.dtsi
··· 249 249 #clock-cells = <1>; 250 250 }; 251 251 252 - infracfg: syscon@10001000 { 252 + infracfg: clock-controller@10001000 { 253 253 compatible = "mediatek,mt2712-infracfg", "syscon"; 254 254 reg = <0 0x10001000 0 0x1000>; 255 255 #clock-cells = <1>; 256 + #reset-cells = <1>; 256 257 }; 257 258 258 259 pericfg: syscon@10003000 {
+14 -20
arch/arm64/boot/dts/mediatek/mt7622.dtsi
··· 252 252 clock-names = "hif_sel"; 253 253 }; 254 254 255 - cir: cir@10009000 { 255 + cir: ir-receiver@10009000 { 256 256 compatible = "mediatek,mt7622-cir"; 257 257 reg = <0 0x10009000 0 0x1000>; 258 258 interrupts = <GIC_SPI 175 IRQ_TYPE_LEVEL_LOW>; ··· 283 283 }; 284 284 }; 285 285 286 - apmixedsys: apmixedsys@10209000 { 287 - compatible = "mediatek,mt7622-apmixedsys", 288 - "syscon"; 286 + apmixedsys: clock-controller@10209000 { 287 + compatible = "mediatek,mt7622-apmixedsys"; 289 288 reg = <0 0x10209000 0 0x1000>; 290 289 #clock-cells = <1>; 291 290 }; 292 291 293 - topckgen: topckgen@10210000 { 294 - compatible = "mediatek,mt7622-topckgen", 295 - "syscon"; 292 + topckgen: clock-controller@10210000 { 293 + compatible = "mediatek,mt7622-topckgen"; 296 294 reg = <0 0x10210000 0 0x1000>; 297 295 #clock-cells = <1>; 298 296 }; ··· 513 515 <&pericfg CLK_PERI_AUXADC_PD>; 514 516 clock-names = "therm", "auxadc"; 515 517 resets = <&pericfg MT7622_PERI_THERM_SW_RST>; 516 - reset-names = "therm"; 517 518 mediatek,auxadc = <&auxadc>; 518 519 mediatek,apmixedsys = <&apmixedsys>; 519 520 nvmem-cells = <&thermal_calibration>; ··· 731 734 power-domains = <&scpsys MT7622_POWER_DOMAIN_WB>; 732 735 }; 733 736 734 - ssusbsys: ssusbsys@1a000000 { 735 - compatible = "mediatek,mt7622-ssusbsys", 736 - "syscon"; 737 + ssusbsys: clock-controller@1a000000 { 738 + compatible = "mediatek,mt7622-ssusbsys"; 737 739 reg = <0 0x1a000000 0 0x1000>; 738 740 #clock-cells = <1>; 739 741 #reset-cells = <1>; ··· 789 793 }; 790 794 }; 791 795 792 - pciesys: pciesys@1a100800 { 793 - compatible = "mediatek,mt7622-pciesys", 794 - "syscon"; 796 + pciesys: clock-controller@1a100800 { 797 + compatible = "mediatek,mt7622-pciesys"; 795 798 reg = <0 0x1a100800 0 0x1000>; 796 799 #clock-cells = <1>; 797 800 #reset-cells = <1>; ··· 916 921 }; 917 922 }; 918 923 919 - hifsys: syscon@1af00000 { 920 - compatible = "mediatek,mt7622-hifsys", "syscon"; 924 + hifsys: clock-controller@1af00000 { 925 + compatible = "mediatek,mt7622-hifsys"; 921 926 reg = <0 0x1af00000 0 0x70>; 927 + #clock-cells = <1>; 922 928 }; 923 929 924 - ethsys: syscon@1b000000 { 930 + ethsys: clock-controller@1b000000 { 925 931 compatible = "mediatek,mt7622-ethsys", 926 932 "syscon"; 927 933 reg = <0 0x1b000000 0 0x1000>; ··· 962 966 }; 963 967 964 968 eth: ethernet@1b100000 { 965 - compatible = "mediatek,mt7622-eth", 966 - "mediatek,mt2701-eth", 967 - "syscon"; 969 + compatible = "mediatek,mt7622-eth"; 968 970 reg = <0 0x1b100000 0 0x20000>; 969 971 interrupts = <GIC_SPI 223 IRQ_TYPE_LEVEL_LOW>, 970 972 <GIC_SPI 224 IRQ_TYPE_LEVEL_LOW>,
+3 -3
arch/arm64/boot/dts/mediatek/mt7986a-bananapi-bpi-r3.dts
··· 146 146 147 147 &cpu_thermal { 148 148 cooling-maps { 149 - cpu-active-high { 149 + map-cpu-active-high { 150 150 /* active: set fan to cooling level 2 */ 151 151 cooling-device = <&fan 2 2>; 152 152 trip = <&cpu_trip_active_high>; 153 153 }; 154 154 155 - cpu-active-med { 155 + map-cpu-active-med { 156 156 /* active: set fan to cooling level 1 */ 157 157 cooling-device = <&fan 1 1>; 158 158 trip = <&cpu_trip_active_med>; 159 159 }; 160 160 161 - cpu-active-low { 161 + map-cpu-active-low { 162 162 /* active: set fan to cooling level 0 */ 163 163 cooling-device = <&fan 0 0>; 164 164 trip = <&cpu_trip_active_low>;
+2 -6
arch/arm64/boot/dts/mediatek/mt7986a.dtsi
··· 332 332 reg = <0 0x1100c800 0 0x800>; 333 333 interrupts = <GIC_SPI 138 IRQ_TYPE_LEVEL_HIGH>; 334 334 clocks = <&infracfg CLK_INFRA_THERM_CK>, 335 - <&infracfg CLK_INFRA_ADC_26M_CK>, 336 - <&infracfg CLK_INFRA_ADC_FRC_CK>; 337 - clock-names = "therm", "auxadc", "adc_32k"; 335 + <&infracfg CLK_INFRA_ADC_26M_CK>; 336 + clock-names = "therm", "auxadc"; 338 337 nvmem-cells = <&thermal_calibration>; 339 338 nvmem-cell-names = "calibration-data"; 340 339 #thermal-sensor-cells = <1>; ··· 491 492 compatible = "mediatek,mt7986-ethsys", 492 493 "syscon"; 493 494 reg = <0 0x15000000 0 0x1000>; 494 - #address-cells = <1>; 495 - #size-cells = <1>; 496 495 #clock-cells = <1>; 497 496 #reset-cells = <1>; 498 497 }; ··· 553 556 <&topckgen CLK_TOP_SGM_325M_SEL>; 554 557 assigned-clock-parents = <&apmixedsys CLK_APMIXED_NET2PLL>, 555 558 <&apmixedsys CLK_APMIXED_SGMPLL>; 556 - #reset-cells = <1>; 557 559 #address-cells = <1>; 558 560 #size-cells = <0>; 559 561 mediatek,ethsys = <&ethsys>;
-1
arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi
··· 433 433 }; 434 434 435 435 &mt6358_vgpu_reg { 436 - regulator-min-microvolt = <625000>; 437 436 regulator-max-microvolt = <900000>; 438 437 439 438 regulator-coupled-with = <&mt6358_vsram_gpu_reg>;
+1
arch/arm64/boot/dts/mediatek/mt8183.dtsi
··· 1637 1637 compatible = "mediatek,mt8183-mfgcfg", "syscon"; 1638 1638 reg = <0 0x13000000 0 0x1000>; 1639 1639 #clock-cells = <1>; 1640 + power-domains = <&spm MT8183_POWER_DOMAIN_MFG_ASYNC>; 1640 1641 }; 1641 1642 1642 1643 gpu: gpu@13040000 {
+1 -1
arch/arm64/boot/dts/mediatek/mt8186-corsola.dtsi
··· 1296 1296 * regulator coupling requirements. 1297 1297 */ 1298 1298 regulator-name = "ppvar_dvdd_vgpu"; 1299 - regulator-min-microvolt = <600000>; 1299 + regulator-min-microvolt = <500000>; 1300 1300 regulator-max-microvolt = <950000>; 1301 1301 regulator-ramp-delay = <6250>; 1302 1302 regulator-enable-ramp-delay = <200>;
+3 -3
arch/arm64/boot/dts/mediatek/mt8192-asurada.dtsi
··· 1421 1421 mt6315_6_vbuck1: vbuck1 { 1422 1422 regulator-compatible = "vbuck1"; 1423 1423 regulator-name = "Vbcpu"; 1424 - regulator-min-microvolt = <300000>; 1424 + regulator-min-microvolt = <400000>; 1425 1425 regulator-max-microvolt = <1193750>; 1426 1426 regulator-enable-ramp-delay = <256>; 1427 1427 regulator-allowed-modes = <0 1 2>; ··· 1431 1431 mt6315_6_vbuck3: vbuck3 { 1432 1432 regulator-compatible = "vbuck3"; 1433 1433 regulator-name = "Vlcpu"; 1434 - regulator-min-microvolt = <300000>; 1434 + regulator-min-microvolt = <400000>; 1435 1435 regulator-max-microvolt = <1193750>; 1436 1436 regulator-enable-ramp-delay = <256>; 1437 1437 regulator-allowed-modes = <0 1 2>; ··· 1448 1448 mt6315_7_vbuck1: vbuck1 { 1449 1449 regulator-compatible = "vbuck1"; 1450 1450 regulator-name = "Vgpu"; 1451 - regulator-min-microvolt = <606250>; 1451 + regulator-min-microvolt = <400000>; 1452 1452 regulator-max-microvolt = <800000>; 1453 1453 regulator-enable-ramp-delay = <256>; 1454 1454 regulator-allowed-modes = <0 1 2>;
+1
arch/arm64/boot/dts/mediatek/mt8192.dtsi
··· 1464 1464 reg = <0 0x14001000 0 0x1000>; 1465 1465 interrupts = <GIC_SPI 252 IRQ_TYPE_LEVEL_HIGH 0>; 1466 1466 clocks = <&mmsys CLK_MM_DISP_MUTEX0>; 1467 + mediatek,gce-client-reg = <&gce SUBSYS_1400XXXX 0x1000 0x1000>; 1467 1468 mediatek,gce-events = <CMDQ_EVENT_DISP_STREAM_DONE_ENG_EVENT_0>, 1468 1469 <CMDQ_EVENT_DISP_STREAM_DONE_ENG_EVENT_1>; 1469 1470 power-domains = <&spm MT8192_POWER_DOMAIN_DISP>;
+34 -2
arch/arm64/boot/dts/mediatek/mt8195-cherry.dtsi
··· 264 264 status = "okay"; 265 265 }; 266 266 267 + &cpu0 { 268 + cpu-supply = <&mt6359_vcore_buck_reg>; 269 + }; 270 + 271 + &cpu1 { 272 + cpu-supply = <&mt6359_vcore_buck_reg>; 273 + }; 274 + 275 + &cpu2 { 276 + cpu-supply = <&mt6359_vcore_buck_reg>; 277 + }; 278 + 279 + &cpu3 { 280 + cpu-supply = <&mt6359_vcore_buck_reg>; 281 + }; 282 + 283 + &cpu4 { 284 + cpu-supply = <&mt6315_6_vbuck1>; 285 + }; 286 + 287 + &cpu5 { 288 + cpu-supply = <&mt6315_6_vbuck1>; 289 + }; 290 + 291 + &cpu6 { 292 + cpu-supply = <&mt6315_6_vbuck1>; 293 + }; 294 + 295 + &cpu7 { 296 + cpu-supply = <&mt6315_6_vbuck1>; 297 + }; 298 + 267 299 &dp_intf0 { 268 300 status = "okay"; 269 301 ··· 1246 1214 mt6315_6_vbuck1: vbuck1 { 1247 1215 regulator-compatible = "vbuck1"; 1248 1216 regulator-name = "Vbcpu"; 1249 - regulator-min-microvolt = <300000>; 1217 + regulator-min-microvolt = <400000>; 1250 1218 regulator-max-microvolt = <1193750>; 1251 1219 regulator-enable-ramp-delay = <256>; 1252 1220 regulator-ramp-delay = <6250>; ··· 1264 1232 mt6315_7_vbuck1: vbuck1 { 1265 1233 regulator-compatible = "vbuck1"; 1266 1234 regulator-name = "Vgpu"; 1267 - regulator-min-microvolt = <625000>; 1235 + regulator-min-microvolt = <400000>; 1268 1236 regulator-max-microvolt = <1193750>; 1269 1237 regulator-enable-ramp-delay = <256>; 1270 1238 regulator-ramp-delay = <6250>;
+5
arch/arm64/boot/dts/mediatek/mt8195.dtsi
··· 2028 2028 compatible = "mediatek,mt8195-vppsys0", "syscon"; 2029 2029 reg = <0 0x14000000 0 0x1000>; 2030 2030 #clock-cells = <1>; 2031 + mediatek,gce-client-reg = <&gce1 SUBSYS_1400XXXX 0 0x1000>; 2031 2032 }; 2032 2033 2033 2034 dma-controller@14001000 { ··· 2252 2251 compatible = "mediatek,mt8195-vppsys1", "syscon"; 2253 2252 reg = <0 0x14f00000 0 0x1000>; 2254 2253 #clock-cells = <1>; 2254 + mediatek,gce-client-reg = <&gce1 SUBSYS_14f0XXXX 0 0x1000>; 2255 2255 }; 2256 2256 2257 2257 mutex@14f01000 { ··· 3082 3080 reg = <0 0x1c01a000 0 0x1000>; 3083 3081 mboxes = <&gce0 0 CMDQ_THR_PRIO_4>; 3084 3082 #clock-cells = <1>; 3083 + mediatek,gce-client-reg = <&gce0 SUBSYS_1c01XXXX 0xa000 0x1000>; 3085 3084 }; 3086 3085 3087 3086 ··· 3264 3261 interrupts = <GIC_SPI 658 IRQ_TYPE_LEVEL_HIGH 0>; 3265 3262 power-domains = <&spm MT8195_POWER_DOMAIN_VDOSYS0>; 3266 3263 clocks = <&vdosys0 CLK_VDO0_DISP_MUTEX0>; 3264 + mediatek,gce-client-reg = <&gce0 SUBSYS_1c01XXXX 0x6000 0x1000>; 3267 3265 mediatek,gce-events = <CMDQ_EVENT_VDO0_DISP_STREAM_DONE_0>; 3268 3266 }; 3269 3267 ··· 3335 3331 power-domains = <&spm MT8195_POWER_DOMAIN_VDOSYS1>; 3336 3332 clocks = <&vdosys1 CLK_VDO1_DISP_MUTEX>; 3337 3333 clock-names = "vdo1_mutex"; 3334 + mediatek,gce-client-reg = <&gce0 SUBSYS_1c10XXXX 0x1000 0x1000>; 3338 3335 mediatek,gce-events = <CMDQ_EVENT_VDO1_STREAM_DONE_ENG_0>; 3339 3336 }; 3340 3337
+2 -2
arch/arm64/boot/dts/qcom/sc7280.dtsi
··· 3707 3707 compatible = "qcom,sc7280-adsp-pas"; 3708 3708 reg = <0 0x03700000 0 0x100>; 3709 3709 3710 - interrupts-extended = <&pdc 6 IRQ_TYPE_LEVEL_HIGH>, 3710 + interrupts-extended = <&pdc 6 IRQ_TYPE_EDGE_RISING>, 3711 3711 <&adsp_smp2p_in 0 IRQ_TYPE_EDGE_RISING>, 3712 3712 <&adsp_smp2p_in 1 IRQ_TYPE_EDGE_RISING>, 3713 3713 <&adsp_smp2p_in 2 IRQ_TYPE_EDGE_RISING>, ··· 3944 3944 compatible = "qcom,sc7280-cdsp-pas"; 3945 3945 reg = <0 0x0a300000 0 0x10000>; 3946 3946 3947 - interrupts-extended = <&intc GIC_SPI 578 IRQ_TYPE_LEVEL_HIGH>, 3947 + interrupts-extended = <&intc GIC_SPI 578 IRQ_TYPE_EDGE_RISING>, 3948 3948 <&cdsp_smp2p_in 0 IRQ_TYPE_EDGE_RISING>, 3949 3949 <&cdsp_smp2p_in 1 IRQ_TYPE_EDGE_RISING>, 3950 3950 <&cdsp_smp2p_in 2 IRQ_TYPE_EDGE_RISING>,
+1 -1
arch/arm64/boot/dts/qcom/sc8180x.dtsi
··· 2701 2701 resets = <&gcc GCC_USB30_SEC_BCR>; 2702 2702 power-domains = <&gcc USB30_SEC_GDSC>; 2703 2703 interrupts-extended = <&intc GIC_SPI 136 IRQ_TYPE_LEVEL_HIGH>, 2704 - <&pdc 7 IRQ_TYPE_LEVEL_HIGH>, 2704 + <&pdc 40 IRQ_TYPE_LEVEL_HIGH>, 2705 2705 <&pdc 10 IRQ_TYPE_EDGE_BOTH>, 2706 2706 <&pdc 11 IRQ_TYPE_EDGE_BOTH>; 2707 2707 interrupt-names = "hs_phy_irq", "ss_phy_irq",
+8 -3
arch/arm64/boot/dts/qcom/sc8280xp.dtsi
··· 1774 1774 reset-names = "pci"; 1775 1775 1776 1776 power-domains = <&gcc PCIE_4_GDSC>; 1777 + required-opps = <&rpmhpd_opp_nom>; 1777 1778 1778 1779 phys = <&pcie4_phy>; 1779 1780 phy-names = "pciephy"; ··· 1873 1872 reset-names = "pci"; 1874 1873 1875 1874 power-domains = <&gcc PCIE_3B_GDSC>; 1875 + required-opps = <&rpmhpd_opp_nom>; 1876 1876 1877 1877 phys = <&pcie3b_phy>; 1878 1878 phy-names = "pciephy"; ··· 1972 1970 reset-names = "pci"; 1973 1971 1974 1972 power-domains = <&gcc PCIE_3A_GDSC>; 1973 + required-opps = <&rpmhpd_opp_nom>; 1975 1974 1976 1975 phys = <&pcie3a_phy>; 1977 1976 phy-names = "pciephy"; ··· 2074 2071 reset-names = "pci"; 2075 2072 2076 2073 power-domains = <&gcc PCIE_2B_GDSC>; 2074 + required-opps = <&rpmhpd_opp_nom>; 2077 2075 2078 2076 phys = <&pcie2b_phy>; 2079 2077 phy-names = "pciephy"; ··· 2173 2169 reset-names = "pci"; 2174 2170 2175 2171 power-domains = <&gcc PCIE_2A_GDSC>; 2172 + required-opps = <&rpmhpd_opp_nom>; 2176 2173 2177 2174 phys = <&pcie2a_phy>; 2178 2175 phy-names = "pciephy"; ··· 2646 2641 compatible = "qcom,sc8280xp-adsp-pas"; 2647 2642 reg = <0 0x03000000 0 0x100>; 2648 2643 2649 - interrupts-extended = <&intc GIC_SPI 162 IRQ_TYPE_LEVEL_HIGH>, 2644 + interrupts-extended = <&intc GIC_SPI 162 IRQ_TYPE_EDGE_RISING>, 2650 2645 <&smp2p_adsp_in 0 IRQ_TYPE_EDGE_RISING>, 2651 2646 <&smp2p_adsp_in 1 IRQ_TYPE_EDGE_RISING>, 2652 2647 <&smp2p_adsp_in 2 IRQ_TYPE_EDGE_RISING>, ··· 4982 4977 compatible = "qcom,sc8280xp-nsp0-pas"; 4983 4978 reg = <0 0x1b300000 0 0x100>; 4984 4979 4985 - interrupts-extended = <&intc GIC_SPI 578 IRQ_TYPE_LEVEL_HIGH>, 4980 + interrupts-extended = <&intc GIC_SPI 578 IRQ_TYPE_EDGE_RISING>, 4986 4981 <&smp2p_nsp0_in 0 IRQ_TYPE_EDGE_RISING>, 4987 4982 <&smp2p_nsp0_in 1 IRQ_TYPE_EDGE_RISING>, 4988 4983 <&smp2p_nsp0_in 2 IRQ_TYPE_EDGE_RISING>, ··· 5113 5108 compatible = "qcom,sc8280xp-nsp1-pas"; 5114 5109 reg = <0 0x21300000 0 0x100>; 5115 5110 5116 - interrupts-extended = <&intc GIC_SPI 887 IRQ_TYPE_LEVEL_HIGH>, 5111 + interrupts-extended = <&intc GIC_SPI 887 IRQ_TYPE_EDGE_RISING>, 5117 5112 <&smp2p_nsp1_in 0 IRQ_TYPE_EDGE_RISING>, 5118 5113 <&smp2p_nsp1_in 1 IRQ_TYPE_EDGE_RISING>, 5119 5114 <&smp2p_nsp1_in 2 IRQ_TYPE_EDGE_RISING>,
+2 -2
arch/arm64/boot/dts/qcom/sm6350.dtsi
··· 1252 1252 compatible = "qcom,sm6350-adsp-pas"; 1253 1253 reg = <0 0x03000000 0 0x100>; 1254 1254 1255 - interrupts-extended = <&pdc 6 IRQ_TYPE_LEVEL_HIGH>, 1255 + interrupts-extended = <&pdc 6 IRQ_TYPE_EDGE_RISING>, 1256 1256 <&smp2p_adsp_in 0 IRQ_TYPE_EDGE_RISING>, 1257 1257 <&smp2p_adsp_in 1 IRQ_TYPE_EDGE_RISING>, 1258 1258 <&smp2p_adsp_in 2 IRQ_TYPE_EDGE_RISING>, ··· 1511 1511 compatible = "qcom,sm6350-cdsp-pas"; 1512 1512 reg = <0 0x08300000 0 0x10000>; 1513 1513 1514 - interrupts-extended = <&intc GIC_SPI 578 IRQ_TYPE_LEVEL_HIGH>, 1514 + interrupts-extended = <&intc GIC_SPI 578 IRQ_TYPE_EDGE_RISING>, 1515 1515 <&smp2p_cdsp_in 0 IRQ_TYPE_EDGE_RISING>, 1516 1516 <&smp2p_cdsp_in 1 IRQ_TYPE_EDGE_RISING>, 1517 1517 <&smp2p_cdsp_in 2 IRQ_TYPE_EDGE_RISING>,
+1 -1
arch/arm64/boot/dts/qcom/sm6375.dtsi
··· 1561 1561 compatible = "qcom,sm6375-adsp-pas"; 1562 1562 reg = <0 0x0a400000 0 0x100>; 1563 1563 1564 - interrupts-extended = <&intc GIC_SPI 282 IRQ_TYPE_LEVEL_HIGH>, 1564 + interrupts-extended = <&intc GIC_SPI 282 IRQ_TYPE_EDGE_RISING>, 1565 1565 <&smp2p_adsp_in 0 IRQ_TYPE_EDGE_RISING>, 1566 1566 <&smp2p_adsp_in 1 IRQ_TYPE_EDGE_RISING>, 1567 1567 <&smp2p_adsp_in 2 IRQ_TYPE_EDGE_RISING>,
+3 -3
arch/arm64/boot/dts/qcom/sm8250.dtsi
··· 3062 3062 compatible = "qcom,sm8250-slpi-pas"; 3063 3063 reg = <0 0x05c00000 0 0x4000>; 3064 3064 3065 - interrupts-extended = <&pdc 9 IRQ_TYPE_LEVEL_HIGH>, 3065 + interrupts-extended = <&pdc 9 IRQ_TYPE_EDGE_RISING>, 3066 3066 <&smp2p_slpi_in 0 IRQ_TYPE_EDGE_RISING>, 3067 3067 <&smp2p_slpi_in 1 IRQ_TYPE_EDGE_RISING>, 3068 3068 <&smp2p_slpi_in 2 IRQ_TYPE_EDGE_RISING>, ··· 3766 3766 compatible = "qcom,sm8250-cdsp-pas"; 3767 3767 reg = <0 0x08300000 0 0x10000>; 3768 3768 3769 - interrupts-extended = <&intc GIC_SPI 578 IRQ_TYPE_LEVEL_HIGH>, 3769 + interrupts-extended = <&intc GIC_SPI 578 IRQ_TYPE_EDGE_RISING>, 3770 3770 <&smp2p_cdsp_in 0 IRQ_TYPE_EDGE_RISING>, 3771 3771 <&smp2p_cdsp_in 1 IRQ_TYPE_EDGE_RISING>, 3772 3772 <&smp2p_cdsp_in 2 IRQ_TYPE_EDGE_RISING>, ··· 5928 5928 compatible = "qcom,sm8250-adsp-pas"; 5929 5929 reg = <0 0x17300000 0 0x100>; 5930 5930 5931 - interrupts-extended = <&pdc 6 IRQ_TYPE_LEVEL_HIGH>, 5931 + interrupts-extended = <&pdc 6 IRQ_TYPE_EDGE_RISING>, 5932 5932 <&smp2p_adsp_in 0 IRQ_TYPE_EDGE_RISING>, 5933 5933 <&smp2p_adsp_in 1 IRQ_TYPE_EDGE_RISING>, 5934 5934 <&smp2p_adsp_in 2 IRQ_TYPE_EDGE_RISING>,
+4 -12
arch/arm64/boot/dts/qcom/sm8450.dtsi
··· 1777 1777 ranges = <0x01000000 0x0 0x00000000 0x0 0x60200000 0x0 0x100000>, 1778 1778 <0x02000000 0x0 0x60300000 0x0 0x60300000 0x0 0x3d00000>; 1779 1779 1780 - /* 1781 - * MSIs for BDF (1:0.0) only works with Device ID 0x5980. 1782 - * Hence, the IDs are swapped. 1783 - */ 1784 - msi-map = <0x0 &gic_its 0x5981 0x1>, 1785 - <0x100 &gic_its 0x5980 0x1>; 1780 + msi-map = <0x0 &gic_its 0x5980 0x1>, 1781 + <0x100 &gic_its 0x5981 0x1>; 1786 1782 msi-map-mask = <0xff00>; 1787 1783 interrupts = <GIC_SPI 141 IRQ_TYPE_LEVEL_HIGH>, 1788 1784 <GIC_SPI 142 IRQ_TYPE_LEVEL_HIGH>, ··· 1896 1900 ranges = <0x01000000 0x0 0x00000000 0x0 0x40200000 0x0 0x100000>, 1897 1901 <0x02000000 0x0 0x40300000 0x0 0x40300000 0x0 0x1fd00000>; 1898 1902 1899 - /* 1900 - * MSIs for BDF (1:0.0) only works with Device ID 0x5a00. 1901 - * Hence, the IDs are swapped. 1902 - */ 1903 - msi-map = <0x0 &gic_its 0x5a01 0x1>, 1904 - <0x100 &gic_its 0x5a00 0x1>; 1903 + msi-map = <0x0 &gic_its 0x5a00 0x1>, 1904 + <0x100 &gic_its 0x5a01 0x1>; 1905 1905 msi-map-mask = <0xff00>; 1906 1906 interrupts = <GIC_SPI 307 IRQ_TYPE_LEVEL_HIGH>, 1907 1907 <GIC_SPI 308 IRQ_TYPE_LEVEL_HIGH>,
+4 -6
arch/arm64/boot/dts/qcom/sm8550.dtsi
··· 1755 1755 <&gem_noc MASTER_APPSS_PROC 0 &cnoc_main SLAVE_PCIE_0 0>; 1756 1756 interconnect-names = "pcie-mem", "cpu-pcie"; 1757 1757 1758 - /* Entries are reversed due to the unusual ITS DeviceID encoding */ 1759 - msi-map = <0x0 &gic_its 0x1401 0x1>, 1760 - <0x100 &gic_its 0x1400 0x1>; 1758 + msi-map = <0x0 &gic_its 0x1400 0x1>, 1759 + <0x100 &gic_its 0x1401 0x1>; 1761 1760 iommu-map = <0x0 &apps_smmu 0x1400 0x1>, 1762 1761 <0x100 &apps_smmu 0x1401 0x1>; 1763 1762 ··· 1866 1867 <&gem_noc MASTER_APPSS_PROC 0 &cnoc_main SLAVE_PCIE_1 0>; 1867 1868 interconnect-names = "pcie-mem", "cpu-pcie"; 1868 1869 1869 - /* Entries are reversed due to the unusual ITS DeviceID encoding */ 1870 - msi-map = <0x0 &gic_its 0x1481 0x1>, 1871 - <0x100 &gic_its 0x1480 0x1>; 1870 + msi-map = <0x0 &gic_its 0x1480 0x1>, 1871 + <0x100 &gic_its 0x1481 0x1>; 1872 1872 iommu-map = <0x0 &apps_smmu 0x1480 0x1>, 1873 1873 <0x100 &apps_smmu 0x1481 0x1>; 1874 1874
+4 -6
arch/arm64/boot/dts/qcom/sm8650.dtsi
··· 2274 2274 interrupt-map-mask = <0 0 0 0x7>; 2275 2275 #interrupt-cells = <1>; 2276 2276 2277 - /* Entries are reversed due to the unusual ITS DeviceID encoding */ 2278 - msi-map = <0x0 &gic_its 0x1401 0x1>, 2279 - <0x100 &gic_its 0x1400 0x1>; 2277 + msi-map = <0x0 &gic_its 0x1400 0x1>, 2278 + <0x100 &gic_its 0x1401 0x1>; 2280 2279 msi-map-mask = <0xff00>; 2281 2280 2282 2281 linux,pci-domain = <0>; ··· 2401 2402 interrupt-map-mask = <0 0 0 0x7>; 2402 2403 #interrupt-cells = <1>; 2403 2404 2404 - /* Entries are reversed due to the unusual ITS DeviceID encoding */ 2405 - msi-map = <0x0 &gic_its 0x1481 0x1>, 2406 - <0x100 &gic_its 0x1480 0x1>; 2405 + msi-map = <0x0 &gic_its 0x1480 0x1>, 2406 + <0x100 &gic_its 0x1481 0x1>; 2407 2407 msi-map-mask = <0xff00>; 2408 2408 2409 2409 linux,pci-domain = <1>;
+2 -2
arch/arm64/boot/dts/qcom/x1e80100.dtsi
··· 284 284 285 285 domain-idle-states { 286 286 CLUSTER_CL4: cluster-sleep-0 { 287 - compatible = "arm,idle-state"; 287 + compatible = "domain-idle-state"; 288 288 idle-state-name = "l2-ret"; 289 289 arm,psci-suspend-param = <0x01000044>; 290 290 entry-latency-us = <350>; ··· 293 293 }; 294 294 295 295 CLUSTER_CL5: cluster-sleep-1 { 296 - compatible = "arm,idle-state"; 296 + compatible = "domain-idle-state"; 297 297 idle-state-name = "ret-pll-off"; 298 298 arm,psci-suspend-param = <0x01000054>; 299 299 entry-latency-us = <2200>;
+1 -2
arch/arm64/boot/dts/rockchip/rk3399-gru-scarlet.dtsi
··· 663 663 port@1 { 664 664 reg = <1>; 665 665 666 - mipi1_in_panel: endpoint@1 { 666 + mipi1_in_panel: endpoint { 667 667 remote-endpoint = <&mipi1_out_panel>; 668 668 }; 669 669 }; ··· 689 689 ep-gpios = <&gpio0 3 GPIO_ACTIVE_HIGH>; 690 690 691 691 /* PERST# asserted in S3 */ 692 - pcie-reset-suspend = <1>; 693 692 694 693 vpcie3v3-supply = <&wlan_3v3>; 695 694 vpcie1v8-supply = <&pp1800_pcie>;
+1 -1
arch/arm64/boot/dts/rockchip/rk3399-kobol-helios64.dts
··· 611 611 #size-cells = <0>; 612 612 613 613 interface@0 { /* interface 0 of configuration 1 */ 614 - compatible = "usbbda,8156.config1.0"; 614 + compatible = "usbifbda,8156.config1.0"; 615 615 reg = <0 1>; 616 616 }; 617 617 };
-1
arch/arm64/boot/dts/rockchip/rk3399-pinebook-pro.dts
··· 779 779 }; 780 780 781 781 &pcie0 { 782 - bus-scan-delay-ms = <1000>; 783 782 ep-gpios = <&gpio2 RK_PD4 GPIO_ACTIVE_HIGH>; 784 783 num-lanes = <4>; 785 784 pinctrl-names = "default";
+2
arch/arm64/boot/dts/rockchip/rk3399-puma-haikou.dts
··· 194 194 num-lanes = <4>; 195 195 pinctrl-names = "default"; 196 196 pinctrl-0 = <&pcie_clkreqn_cpm>; 197 + vpcie3v3-supply = <&vcc3v3_baseboard>; 198 + vpcie12v-supply = <&dc_12v>; 197 199 status = "okay"; 198 200 }; 199 201
+47 -6
arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
··· 79 79 regulator-max-microvolt = <5000000>; 80 80 }; 81 81 82 + vcca_0v9: vcca-0v9-regulator { 83 + compatible = "regulator-fixed"; 84 + regulator-name = "vcca_0v9"; 85 + regulator-always-on; 86 + regulator-boot-on; 87 + regulator-min-microvolt = <900000>; 88 + regulator-max-microvolt = <900000>; 89 + vin-supply = <&vcc_1v8>; 90 + }; 91 + 92 + vcca_1v8: vcca-1v8-regulator { 93 + compatible = "regulator-fixed"; 94 + regulator-name = "vcca_1v8"; 95 + regulator-always-on; 96 + regulator-boot-on; 97 + regulator-min-microvolt = <1800000>; 98 + regulator-max-microvolt = <1800000>; 99 + vin-supply = <&vcc3v3_sys>; 100 + }; 101 + 82 102 vdd_log: vdd-log { 83 103 compatible = "pwm-regulator"; 84 104 pwms = <&pwm2 0 25000 1>; ··· 436 416 gpio1830-supply = <&vcc_1v8>; 437 417 }; 438 418 439 - &pmu_io_domains { 440 - status = "okay"; 441 - pmu1830-supply = <&vcc_1v8>; 419 + &pcie0 { 420 + /* PCIe PHY supplies */ 421 + vpcie0v9-supply = <&vcca_0v9>; 422 + vpcie1v8-supply = <&vcca_1v8>; 442 423 }; 443 424 444 - &pwm2 { 445 - status = "okay"; 425 + &pcie_clkreqn_cpm { 426 + rockchip,pins = 427 + <2 RK_PD2 RK_FUNC_GPIO &pcfg_pull_up>; 446 428 }; 447 429 448 430 &pinctrl { 431 + pinctrl-names = "default"; 432 + pinctrl-0 = <&q7_thermal_pin>; 433 + 434 + gpios { 435 + q7_thermal_pin: q7-thermal-pin { 436 + rockchip,pins = 437 + <0 RK_PA3 RK_FUNC_GPIO &pcfg_pull_up>; 438 + }; 439 + }; 440 + 449 441 i2c8 { 450 442 i2c8_xfer_a: i2c8-xfer { 451 443 rockchip,pins = ··· 490 458 usb3 { 491 459 usb3_id: usb3-id { 492 460 rockchip,pins = 493 - <1 RK_PC2 RK_FUNC_GPIO &pcfg_pull_none>; 461 + <1 RK_PC2 RK_FUNC_GPIO &pcfg_pull_up>; 494 462 }; 495 463 }; 464 + }; 465 + 466 + &pmu_io_domains { 467 + status = "okay"; 468 + pmu1830-supply = <&vcc_1v8>; 469 + }; 470 + 471 + &pwm2 { 472 + status = "okay"; 496 473 }; 497 474 498 475 &sdhci {
-1
arch/arm64/boot/dts/rockchip/rk3566-lubancat-1.dts
··· 447 447 448 448 &pcie2x1 { 449 449 reset-gpios = <&gpio0 RK_PB6 GPIO_ACTIVE_HIGH>; 450 - disable-gpios = <&gpio0 RK_PA6 GPIO_ACTIVE_HIGH>; 451 450 vpcie3v3-supply = <&vcc3v3_pcie>; 452 451 status = "okay"; 453 452 };
+4 -2
arch/arm64/boot/dts/rockchip/rk3568-bpi-r2-pro.dts
··· 416 416 417 417 vccio_sd: LDO_REG5 { 418 418 regulator-name = "vccio_sd"; 419 + regulator-always-on; 420 + regulator-boot-on; 419 421 regulator-min-microvolt = <1800000>; 420 422 regulator-max-microvolt = <3300000>; 421 423 ··· 527 525 #address-cells = <1>; 528 526 #size-cells = <0>; 529 527 530 - switch@0 { 528 + switch@1f { 531 529 compatible = "mediatek,mt7531"; 532 - reg = <0>; 530 + reg = <0x1f>; 533 531 534 532 ports { 535 533 #address-cells = <1>;
-1
arch/arm64/boot/dts/rockchip/rk3568-lubancat-2.dts
··· 523 523 524 524 &pcie2x1 { 525 525 reset-gpios = <&gpio3 RK_PC1 GPIO_ACTIVE_HIGH>; 526 - disable-gpios = <&gpio3 RK_PC2 GPIO_ACTIVE_HIGH>; 527 526 vpcie3v3-supply = <&vcc3v3_mini_pcie>; 528 527 status = "okay"; 529 528 };
+2 -2
arch/arm64/boot/dts/rockchip/rk3588-coolpi-cm5.dtsi
··· 216 216 pinctrl-0 = <&i2c7m0_xfer>; 217 217 status = "okay"; 218 218 219 - es8316: audio-codec@11 { 219 + es8316: audio-codec@10 { 220 220 compatible = "everest,es8316"; 221 - reg = <0x11>; 221 + reg = <0x10>; 222 222 assigned-clocks = <&cru I2S0_8CH_MCLKOUT>; 223 223 assigned-clock-rates = <12288000>; 224 224 clocks = <&cru I2S0_8CH_MCLKOUT>;
+2 -1
arch/arm64/boot/dts/rockchip/rk3588-orangepi-5-plus.dts
··· 485 485 pinctrl-0 = <&pmic_pins>, <&rk806_dvs1_null>, 486 486 <&rk806_dvs2_null>, <&rk806_dvs3_null>; 487 487 spi-max-frequency = <1000000>; 488 + system-power-controller; 488 489 489 490 vcc1-supply = <&vcc5v0_sys>; 490 491 vcc2-supply = <&vcc5v0_sys>; ··· 507 506 #gpio-cells = <2>; 508 507 509 508 rk806_dvs1_null: dvs1-null-pins { 510 - pins = "gpio_pwrctrl2"; 509 + pins = "gpio_pwrctrl1"; 511 510 function = "pin_fun0"; 512 511 }; 513 512
+1
arch/arm64/boot/dts/rockchip/rk3588-quartzpro64.dts
··· 456 456 <&rk806_dvs2_null>, <&rk806_dvs3_null>; 457 457 pinctrl-names = "default"; 458 458 spi-max-frequency = <1000000>; 459 + system-power-controller; 459 460 460 461 vcc1-supply = <&vcc4v0_sys>; 461 462 vcc2-supply = <&vcc4v0_sys>;
+4 -4
arch/arm64/kvm/vgic/vgic-kvm-device.c
··· 338 338 int vgic_v2_parse_attr(struct kvm_device *dev, struct kvm_device_attr *attr, 339 339 struct vgic_reg_attr *reg_attr) 340 340 { 341 - int cpuid; 341 + int cpuid = FIELD_GET(KVM_DEV_ARM_VGIC_CPUID_MASK, attr->attr); 342 342 343 - cpuid = FIELD_GET(KVM_DEV_ARM_VGIC_CPUID_MASK, attr->attr); 344 - 345 - reg_attr->vcpu = kvm_get_vcpu_by_id(dev->kvm, cpuid); 346 343 reg_attr->addr = attr->attr & KVM_DEV_ARM_VGIC_OFFSET_MASK; 344 + reg_attr->vcpu = kvm_get_vcpu_by_id(dev->kvm, cpuid); 345 + if (!reg_attr->vcpu) 346 + return -EINVAL; 347 347 348 348 return 0; 349 349 }
+3 -3
arch/arm64/net/bpf_jit_comp.c
··· 1905 1905 1906 1906 emit_call(enter_prog, ctx); 1907 1907 1908 + /* save return value to callee saved register x20 */ 1909 + emit(A64_MOV(1, A64_R(20), A64_R(0)), ctx); 1910 + 1908 1911 /* if (__bpf_prog_enter(prog) == 0) 1909 1912 * goto skip_exec_of_prog; 1910 1913 */ 1911 1914 branch = ctx->image + ctx->idx; 1912 1915 emit(A64_NOP, ctx); 1913 - 1914 - /* save return value to callee saved register x20 */ 1915 - emit(A64_MOV(1, A64_R(20), A64_R(0)), ctx); 1916 1916 1917 1917 emit(A64_ADD_I(1, A64_R(0), A64_SP, args_off), ctx); 1918 1918 if (!p->jited)
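The reorder above matters because the NOP is later patched into a branch that jumps straight to the __bpf_prog_exit() call when __bpf_prog_enter() returns 0: with the old layout that skip path also bypassed the save into x20, so the exit handler computed runtime stats from a stale start time. A C-level sketch of the intended flow, assuming the same trampoline structure as the generic x86 one (names are illustrative):

	start = __bpf_prog_enter(prog);		/* returns start timestamp, or 0 */
	saved_start = start;			/* must happen before the branch */
	if (start == 0)
		goto skip_exec_of_prog;		/* patched over the NOP above */
	ret = bpf_prog_run(prog, args);
skip_exec_of_prog:
	__bpf_prog_exit(prog, saved_start, &run_ctx);	/* stats use saved_start */

The RISC-V JIT hunk further down applies the identical reorder to its S1 register.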
+1 -1
arch/loongarch/Kconfig
··· 595 595 select RELOCATABLE 596 596 597 597 config ARCH_HAS_GENERIC_CRASHKERNEL_RESERVATION 598 - def_bool CRASH_CORE 598 + def_bool CRASH_RESERVE 599 599 600 600 config RELOCATABLE 601 601 bool "Relocatable kernel"
+2 -2
arch/loongarch/include/asm/crash_core.h arch/loongarch/include/asm/crash_reserve.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0-only */ 2 - #ifndef _LOONGARCH_CRASH_CORE_H 3 - #define _LOONGARCH_CRASH_CORE_H 2 + #ifndef _LOONGARCH_CRASH_RESERVE_H 3 + #define _LOONGARCH_CRASH_RESERVE_H 4 4 5 5 #define CRASH_ALIGN SZ_2M 6 6
+8
arch/loongarch/include/asm/perf_event.h
··· 7 7 #ifndef __LOONGARCH_PERF_EVENT_H__ 8 8 #define __LOONGARCH_PERF_EVENT_H__ 9 9 10 + #include <asm/ptrace.h> 11 + 10 12 #define perf_arch_bpf_user_pt_regs(regs) (struct user_pt_regs *)regs 13 + 14 + #define perf_arch_fetch_caller_regs(regs, __ip) { \ 15 + (regs)->csr_era = (__ip); \ 16 + (regs)->regs[3] = current_stack_pointer; \ 17 + (regs)->regs[22] = (unsigned long) __builtin_frame_address(0); \ 18 + } 11 19 12 20 #endif /* __LOONGARCH_PERF_EVENT_H__ */
-2
arch/loongarch/include/asm/tlb.h
··· 132 132 ); 133 133 } 134 134 135 - #define __tlb_remove_tlb_entry(tlb, ptep, address) do { } while (0) 136 - 137 135 static void tlb_flush(struct mmu_gather *tlb); 138 136 139 137 #define tlb_flush tlb_flush
+1 -1
arch/loongarch/kernel/perf_event.c
··· 884 884 885 885 return 0; 886 886 } 887 - early_initcall(init_hw_perf_events); 887 + pure_initcall(init_hw_perf_events);
+2 -2
arch/loongarch/mm/fault.c
··· 202 202 if (!(vma->vm_flags & VM_WRITE)) 203 203 goto bad_area; 204 204 } else { 205 - if (!(vma->vm_flags & VM_READ) && address != exception_era(regs)) 206 - goto bad_area; 207 205 if (!(vma->vm_flags & VM_EXEC) && address == exception_era(regs)) 206 + goto bad_area; 207 + if (!(vma->vm_flags & (VM_READ | VM_WRITE)) && address != exception_era(regs)) 208 208 goto bad_area; 209 209 } 210 210
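The rework allows read faults on write-only VMAs: the page tables have no write-only encoding, so a PROT_WRITE mapping is readable at the hardware level, and the old VM_READ test wrongly delivered a fatal signal for such reads. A hypothetical userspace reproducer (not part of the patch) that should run to completion once this is applied:

#include <stddef.h>
#include <sys/mman.h>

int main(void)
{
	volatile char *p = mmap(NULL, 4096, PROT_WRITE,
				MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return 1;
	p[0] = 42;		/* write fault on a VM_WRITE VMA: allowed */
	return p[0] != 42;	/* read fault on the same VMA: must not SIGSEGV */
}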
+4 -4
arch/riscv/Kconfig.errata
··· 82 82 83 83 Otherwise, please say "N" here to avoid unnecessary overhead. 84 84 85 - config ERRATA_THEAD_PBMT 86 - bool "Apply T-Head memory type errata" 85 + config ERRATA_THEAD_MAE 86 + bool "Apply T-Head's memory attribute extension (XTheadMae) errata" 87 87 depends on ERRATA_THEAD && 64BIT && MMU 88 88 select RISCV_ALTERNATIVE_EARLY 89 89 default y 90 90 help 91 - This will apply the memory type errata to handle the non-standard 92 - memory type bits in page-table-entries on T-Head SoCs. 91 + This will apply the memory attribute extension errata to handle the 92 + non-standard PTE utilization on T-Head SoCs (XTheadMae). 93 93 94 94 If you don't know what to do here, say "Y". 95 95
+15 -9
arch/riscv/errata/thead/errata.c
··· 19 19 #include <asm/patch.h> 20 20 #include <asm/vendorid_list.h> 21 21 22 - static bool errata_probe_pbmt(unsigned int stage, 23 - unsigned long arch_id, unsigned long impid) 22 + #define CSR_TH_SXSTATUS 0x5c0 23 + #define SXSTATUS_MAEE _AC(0x200000, UL) 24 + 25 + static bool errata_probe_mae(unsigned int stage, 26 + unsigned long arch_id, unsigned long impid) 24 27 { 25 - if (!IS_ENABLED(CONFIG_ERRATA_THEAD_PBMT)) 28 + if (!IS_ENABLED(CONFIG_ERRATA_THEAD_MAE)) 26 29 return false; 27 30 28 31 if (arch_id != 0 || impid != 0) 29 32 return false; 30 33 31 - if (stage == RISCV_ALTERNATIVES_EARLY_BOOT || 32 - stage == RISCV_ALTERNATIVES_MODULE) 33 - return true; 34 + if (stage != RISCV_ALTERNATIVES_EARLY_BOOT && 35 + stage != RISCV_ALTERNATIVES_MODULE) 36 + return false; 34 37 35 - return false; 38 + if (!(csr_read(CSR_TH_SXSTATUS) & SXSTATUS_MAEE)) 39 + return false; 40 + 41 + return true; 36 42 } 37 43 38 44 /* ··· 146 140 { 147 141 u32 cpu_req_errata = 0; 148 142 149 - if (errata_probe_pbmt(stage, archid, impid)) 150 - cpu_req_errata |= BIT(ERRATA_THEAD_PBMT); 143 + if (errata_probe_mae(stage, archid, impid)) 144 + cpu_req_errata |= BIT(ERRATA_THEAD_MAE); 151 145 152 146 errata_probe_cmo(stage, archid, impid); 153 147
+10 -10
arch/riscv/include/asm/errata_list.h
··· 23 23 #endif 24 24 25 25 #ifdef CONFIG_ERRATA_THEAD 26 - #define ERRATA_THEAD_PBMT 0 26 + #define ERRATA_THEAD_MAE 0 27 27 #define ERRATA_THEAD_PMU 1 28 28 #define ERRATA_THEAD_NUMBER 2 29 29 #endif ··· 53 53 * in the default case. 54 54 */ 55 55 #define ALT_SVPBMT_SHIFT 61 56 - #define ALT_THEAD_PBMT_SHIFT 59 56 + #define ALT_THEAD_MAE_SHIFT 59 57 57 #define ALT_SVPBMT(_val, prot) \ 58 58 asm(ALTERNATIVE_2("li %0, 0\t\nnop", \ 59 59 "li %0, %1\t\nslli %0,%0,%3", 0, \ 60 60 RISCV_ISA_EXT_SVPBMT, CONFIG_RISCV_ISA_SVPBMT, \ 61 61 "li %0, %2\t\nslli %0,%0,%4", THEAD_VENDOR_ID, \ 62 - ERRATA_THEAD_PBMT, CONFIG_ERRATA_THEAD_PBMT) \ 62 + ERRATA_THEAD_MAE, CONFIG_ERRATA_THEAD_MAE) \ 63 63 : "=r"(_val) \ 64 64 : "I"(prot##_SVPBMT >> ALT_SVPBMT_SHIFT), \ 65 - "I"(prot##_THEAD >> ALT_THEAD_PBMT_SHIFT), \ 65 + "I"(prot##_THEAD >> ALT_THEAD_MAE_SHIFT), \ 66 66 "I"(ALT_SVPBMT_SHIFT), \ 67 - "I"(ALT_THEAD_PBMT_SHIFT)) 67 + "I"(ALT_THEAD_MAE_SHIFT)) 68 68 69 - #ifdef CONFIG_ERRATA_THEAD_PBMT 69 + #ifdef CONFIG_ERRATA_THEAD_MAE 70 70 /* 71 71 * IO/NOCACHE memory types are handled together with svpbmt, 72 72 * so on T-Head chips, check if no other memory type is set, ··· 83 83 "slli t3, t3, %3\n\t" \ 84 84 "or %0, %0, t3\n\t" \ 85 85 "2:", THEAD_VENDOR_ID, \ 86 - ERRATA_THEAD_PBMT, CONFIG_ERRATA_THEAD_PBMT) \ 86 + ERRATA_THEAD_MAE, CONFIG_ERRATA_THEAD_MAE) \ 87 87 : "+r"(_val) \ 88 - : "I"(_PAGE_MTMASK_THEAD >> ALT_THEAD_PBMT_SHIFT), \ 89 - "I"(_PAGE_PMA_THEAD >> ALT_THEAD_PBMT_SHIFT), \ 90 - "I"(ALT_THEAD_PBMT_SHIFT) \ 88 + : "I"(_PAGE_MTMASK_THEAD >> ALT_THEAD_MAE_SHIFT), \ 89 + "I"(_PAGE_PMA_THEAD >> ALT_THEAD_MAE_SHIFT), \ 90 + "I"(ALT_THEAD_MAE_SHIFT) \ 91 91 : "t3") 92 92 #else 93 93 #define ALT_THEAD_PMA(_val)
+1 -1
arch/riscv/include/asm/page.h
··· 89 89 #define PTE_FMT "%08lx" 90 90 #endif 91 91 92 - #ifdef CONFIG_64BIT 92 + #if defined(CONFIG_64BIT) && defined(CONFIG_MMU) 93 93 /* 94 94 * We override this value as its generic definition uses __pa too early in 95 95 * the boot process (before kernel_map.va_pa_offset is set).
+1 -1
arch/riscv/include/asm/pgtable.h
··· 896 896 #define PAGE_SHARED __pgprot(0) 897 897 #define PAGE_KERNEL __pgprot(0) 898 898 #define swapper_pg_dir NULL 899 - #define TASK_SIZE 0xffffffffUL 899 + #define TASK_SIZE _AC(-1, UL) 900 900 #define VMALLOC_START _AC(0, UL) 901 901 #define VMALLOC_END TASK_SIZE 902 902
+1 -1
arch/riscv/include/uapi/asm/hwprobe.h
··· 54 54 #define RISCV_HWPROBE_EXT_ZFHMIN (1 << 28) 55 55 #define RISCV_HWPROBE_EXT_ZIHINTNTL (1 << 29) 56 56 #define RISCV_HWPROBE_EXT_ZVFH (1 << 30) 57 - #define RISCV_HWPROBE_EXT_ZVFHMIN (1 << 31) 57 + #define RISCV_HWPROBE_EXT_ZVFHMIN (1ULL << 31) 58 58 #define RISCV_HWPROBE_EXT_ZFA (1ULL << 32) 59 59 #define RISCV_HWPROBE_EXT_ZTSO (1ULL << 33) 60 60 #define RISCV_HWPROBE_EXT_ZACAS (1ULL << 34)
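This is the classic shift-width bug: 1 << 31 is evaluated as a signed int, and converting the negative result to the u64 hwprobe bitmask sign-extends it across the upper 32 bits, clobbering the extension bits that live there. A standalone demonstration:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t bad  = 1 << 31;	/* signed overflow; typically 0xffffffff80000000 */
	uint64_t good = 1ULL << 31;	/* 0x0000000080000000, the intended bit */

	printf("%016llx\n%016llx\n", (unsigned long long)bad,
	       (unsigned long long)good);
	return 0;
}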
+1 -1
arch/riscv/mm/init.c
··· 231 231 * In 64-bit, any use of __va/__pa before this point is wrong as we 232 232 * did not know the start of DRAM before. 233 233 */ 234 - if (IS_ENABLED(CONFIG_64BIT)) 234 + if (IS_ENABLED(CONFIG_64BIT) && IS_ENABLED(CONFIG_MMU)) 235 235 kernel_map.va_pa_offset = PAGE_OFFSET - phys_ram_base; 236 236 237 237 /*
+3 -3
arch/riscv/net/bpf_jit_comp64.c
··· 730 730 if (ret) 731 731 return ret; 732 732 733 + /* store prog start time */ 734 + emit_mv(RV_REG_S1, RV_REG_A0, ctx); 735 + 733 736 /* if (__bpf_prog_enter(prog) == 0) 734 737 * goto skip_exec_of_prog; 735 738 */ 736 739 branch_off = ctx->ninsns; 737 740 /* nop reserved for conditional jump */ 738 741 emit(rv_nop(), ctx); 739 - 740 - /* store prog start time */ 741 - emit_mv(RV_REG_S1, RV_REG_A0, ctx); 742 742 743 743 /* arg1: &args_off */ 744 744 emit_addi(RV_REG_A0, RV_REG_FP, -args_off, ctx);
+12 -7
arch/x86/Kconfig
··· 62 62 select ACPI_HOTPLUG_CPU if ACPI_PROCESSOR && HOTPLUG_CPU 63 63 select ARCH_32BIT_OFF_T if X86_32 64 64 select ARCH_CLOCKSOURCE_INIT 65 + select ARCH_CONFIGURES_CPU_MITIGATIONS 65 66 select ARCH_CORRECT_STACKTRACE_ON_KRETPROBE 66 67 select ARCH_ENABLE_HUGEPAGE_MIGRATION if X86_64 && HUGETLB_PAGE && MIGRATION 67 68 select ARCH_ENABLE_MEMORY_HOTPLUG if X86_64 ··· 2489 2488 def_bool y 2490 2489 depends on CALL_PADDING && !CFI_CLANG 2491 2490 2492 - menuconfig SPECULATION_MITIGATIONS 2493 - bool "Mitigations for speculative execution vulnerabilities" 2491 + menuconfig CPU_MITIGATIONS 2492 + bool "Mitigations for CPU vulnerabilities" 2494 2493 default y 2495 2494 help 2496 - Say Y here to enable options which enable mitigations for 2497 - speculative execution hardware vulnerabilities. 2495 + Say Y here to enable options which enable mitigations for hardware 2496 + vulnerabilities (usually related to speculative execution). 2497 + Mitigations can be disabled or restricted to SMT systems at runtime 2498 + via the "mitigations" kernel parameter. 2498 2499 2499 - If you say N, all mitigations will be disabled. You really 2500 - should know what you are doing to say so. 2500 + If you say N, all mitigations will be disabled. This CANNOT be 2501 + overridden at runtime. 2501 2502 2502 - if SPECULATION_MITIGATIONS 2503 + Say 'Y', unless you really know what you are doing. 2504 + 2505 + if CPU_MITIGATIONS 2503 2506 2504 2507 config MITIGATION_PAGE_TABLE_ISOLATION 2505 2508 bool "Remove the kernel mapping in user mode"
+1
arch/x86/include/asm/coco.h
··· 25 25 void cc_random_init(void); 26 26 #else 27 27 #define cc_vendor (CC_VENDOR_NONE) 28 + static const u64 cc_mask = 0; 28 29 29 30 static inline u64 cc_mkenc(u64 val) 30 31 {
+2 -1
arch/x86/include/asm/pgtable_types.h
··· 148 148 #define _COMMON_PAGE_CHG_MASK (PTE_PFN_MASK | _PAGE_PCD | _PAGE_PWT | \ 149 149 _PAGE_SPECIAL | _PAGE_ACCESSED | \ 150 150 _PAGE_DIRTY_BITS | _PAGE_SOFT_DIRTY | \ 151 - _PAGE_DEVMAP | _PAGE_ENC | _PAGE_UFFD_WP) 151 + _PAGE_DEVMAP | _PAGE_CC | _PAGE_UFFD_WP) 152 152 #define _PAGE_CHG_MASK (_COMMON_PAGE_CHG_MASK | _PAGE_PAT) 153 153 #define _HPAGE_CHG_MASK (_COMMON_PAGE_CHG_MASK | _PAGE_PSE | _PAGE_PAT_LARGE) 154 154 ··· 173 173 }; 174 174 #endif 175 175 176 + #define _PAGE_CC (_AT(pteval_t, cc_mask)) 176 177 #define _PAGE_ENC (_AT(pteval_t, sme_me_mask)) 177 178 178 179 #define _PAGE_CACHE_MASK (_PAGE_PWT | _PAGE_PCD | _PAGE_PAT)
+1 -2
arch/x86/kernel/cpu/amd.c
··· 459 459 460 460 case 0x1a: 461 461 switch (c->x86_model) { 462 - case 0x00 ... 0x0f: 463 - case 0x20 ... 0x2f: 462 + case 0x00 ... 0x2f: 464 463 case 0x40 ... 0x4f: 465 464 case 0x70 ... 0x7f: 466 465 setup_force_cpu_cap(X86_FEATURE_ZEN5);
+1 -1
arch/x86/kernel/process_64.c
··· 139 139 log_lvl, d3, d6, d7); 140 140 } 141 141 142 - if (cpu_feature_enabled(X86_FEATURE_OSPKE)) 142 + if (cr4 & X86_CR4_PKE) 143 143 printk("%sPKRU: %08x\n", log_lvl, read_pkru()); 144 144 } 145 145
+4 -2
arch/x86/kernel/sev-shared.c
··· 1203 1203 break; 1204 1204 1205 1205 case SVM_EXIT_MONITOR: 1206 - if (opcode == 0x010f && modrm == 0xc8) 1206 + /* MONITOR and MONITORX instructions generate the same error code */ 1207 + if (opcode == 0x010f && (modrm == 0xc8 || modrm == 0xfa)) 1207 1208 return ES_OK; 1208 1209 break; 1209 1210 1210 1211 case SVM_EXIT_MWAIT: 1211 - if (opcode == 0x010f && modrm == 0xc9) 1212 + /* MWAIT and MWAITX instructions generate the same error code */ 1213 + if (opcode == 0x010f && (modrm == 0xc9 || modrm == 0xfb)) 1212 1214 return ES_OK; 1213 1215 break; 1214 1216
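The added modrm values come straight from the instruction encodings, since each pair shares a single #VC error code: MONITOR is 0f 01 c8 and MONITORX is 0f 01 fa, while MWAIT is 0f 01 c9 and MWAITX is 0f 01 fb. The check above, condensed into one helper (sketch; opcode holds the two opcode bytes as a little-endian u16, so 0f 01 reads as 0x010f):

static bool insn_matches_exit_code(u16 opcode, u8 modrm, u64 exit_code)
{
	if (opcode != 0x010f)
		return false;
	if (exit_code == SVM_EXIT_MONITOR)	/* MONITOR or MONITORX */
		return modrm == 0xc8 || modrm == 0xfa;
	if (exit_code == SVM_EXIT_MWAIT)	/* MWAIT or MWAITX */
		return modrm == 0xc9 || modrm == 0xfb;
	return false;
}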
+31 -32
arch/x86/net/bpf_jit_comp.c
··· 1867 1867 if (BPF_MODE(insn->code) == BPF_PROBE_MEM || 1868 1868 BPF_MODE(insn->code) == BPF_PROBE_MEMSX) { 1869 1869 /* Conservatively check that src_reg + insn->off is a kernel address: 1870 - * src_reg + insn->off >= TASK_SIZE_MAX + PAGE_SIZE 1871 - * src_reg is used as scratch for src_reg += insn->off and restored 1872 - * after emit_ldx if necessary 1870 + * src_reg + insn->off > TASK_SIZE_MAX + PAGE_SIZE 1871 + * and 1872 + * src_reg + insn->off < VSYSCALL_ADDR 1873 1873 */ 1874 1874 1875 - u64 limit = TASK_SIZE_MAX + PAGE_SIZE; 1875 + u64 limit = TASK_SIZE_MAX + PAGE_SIZE - VSYSCALL_ADDR; 1876 1876 u8 *end_of_jmp; 1877 1877 1878 - /* At end of these emitted checks, insn->off will have been added 1879 - * to src_reg, so no need to do relative load with insn->off offset 1880 - */ 1881 - insn_off = 0; 1878 + /* movabsq r10, VSYSCALL_ADDR */ 1879 + emit_mov_imm64(&prog, BPF_REG_AX, (long)VSYSCALL_ADDR >> 32, 1880 + (u32)(long)VSYSCALL_ADDR); 1882 1881 1883 - /* movabsq r11, limit */ 1884 - EMIT2(add_1mod(0x48, AUX_REG), add_1reg(0xB8, AUX_REG)); 1885 - EMIT((u32)limit, 4); 1886 - EMIT(limit >> 32, 4); 1882 + /* mov src_reg, r11 */ 1883 + EMIT_mov(AUX_REG, src_reg); 1887 1884 1888 1885 if (insn->off) { 1889 - /* add src_reg, insn->off */ 1890 - maybe_emit_1mod(&prog, src_reg, true); 1891 - EMIT2_off32(0x81, add_1reg(0xC0, src_reg), insn->off); 1886 + /* add r11, insn->off */ 1887 + maybe_emit_1mod(&prog, AUX_REG, true); 1888 + EMIT2_off32(0x81, add_1reg(0xC0, AUX_REG), insn->off); 1892 1889 } 1893 1890 1894 - /* cmp src_reg, r11 */ 1895 - maybe_emit_mod(&prog, src_reg, AUX_REG, true); 1896 - EMIT2(0x39, add_2reg(0xC0, src_reg, AUX_REG)); 1891 + /* sub r11, r10 */ 1892 + maybe_emit_mod(&prog, AUX_REG, BPF_REG_AX, true); 1893 + EMIT2(0x29, add_2reg(0xC0, AUX_REG, BPF_REG_AX)); 1897 1894 1898 - /* if unsigned '>=', goto load */ 1899 - EMIT2(X86_JAE, 0); 1895 + /* movabsq r10, limit */ 1896 + emit_mov_imm64(&prog, BPF_REG_AX, (long)limit >> 32, 1897 + (u32)(long)limit); 1898 + 1899 + /* cmp r10, r11 */ 1900 + maybe_emit_mod(&prog, AUX_REG, BPF_REG_AX, true); 1901 + EMIT2(0x39, add_2reg(0xC0, AUX_REG, BPF_REG_AX)); 1902 + 1903 + /* if unsigned '>', goto load */ 1904 + EMIT2(X86_JA, 0); 1900 1905 end_of_jmp = prog; 1901 1906 1902 1907 /* xor dst_reg, dst_reg */ ··· 1926 1921 1927 1922 /* populate jmp_offset for JMP above */ 1928 1923 start_of_ldx[-1] = prog - start_of_ldx; 1929 - 1930 - if (insn->off && src_reg != dst_reg) { 1931 - /* sub src_reg, insn->off 1932 - * Restore src_reg after "add src_reg, insn->off" in prev 1933 - * if statement. But if src_reg == dst_reg, emit_ldx 1934 - * above already clobbered src_reg, so no need to restore. 1935 - * If add src_reg, insn->off was unnecessary, no need to 1936 - * restore either. 1937 - */ 1938 - maybe_emit_1mod(&prog, src_reg, true); 1939 - EMIT2_off32(0x81, add_1reg(0xE8, src_reg), insn->off); 1940 - } 1941 1924 1942 1925 if (!bpf_prog->aux->extable) 1943 1926 break; ··· 3551 3558 bool bpf_jit_supports_ptr_xchg(void) 3552 3559 { 3553 3560 return true; 3561 + } 3562 + 3563 + /* x86-64 JIT emits its own code to filter user addresses so return 0 here */ 3564 + u64 bpf_arch_uaddress_limit(void) 3565 + { 3566 + return 0; 3554 3567 }
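The rewritten guard folds two range checks into one unsigned comparison: subtracting VSYSCALL_ADDR wraps around for any address below it, so a single compare rejects user pointers and the vsyscall page alike, and the old restore of src_reg becomes unnecessary because the sum now lives in a scratch register. A C model of the emitted check (sketch; constants assume x86-64 with 4-level paging):

typedef unsigned long long u64;

#define VSYSCALL_ADDR	0xffffffffff600000ULL
#define USER_END	0x0000800000000000ULL	/* TASK_SIZE_MAX + PAGE_SIZE */

/* Non-zero when the JITed load may proceed; otherwise dst_reg is zeroed. */
static int probe_load_allowed(u64 addr)	/* addr = src_reg + insn->off */
{
	return (addr - VSYSCALL_ADDR) > (USER_END - VSYSCALL_ADDR);
}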
+1 -1
block/bdev.c
··· 882 882 goto abort_claiming; 883 883 ret = -EBUSY; 884 884 if (!bdev_may_open(bdev, mode)) 885 - goto abort_claiming; 885 + goto put_module; 886 886 if (bdev_is_partition(bdev)) 887 887 ret = blkdev_get_part(bdev, mode); 888 888 else
+39 -18
drivers/acpi/cppc_acpi.c
··· 170 170 #define GET_BIT_WIDTH(reg) ((reg)->access_width ? (8 << ((reg)->access_width - 1)) : (reg)->bit_width) 171 171 172 172 /* Shift and apply the mask for CPC reads/writes */ 173 - #define MASK_VAL(reg, val) ((val) >> ((reg)->bit_offset & \ 174 - GENMASK(((reg)->bit_width), 0))) 173 + #define MASK_VAL(reg, val) (((val) >> (reg)->bit_offset) & \ 174 + GENMASK(((reg)->bit_width) - 1, 0)) 175 175 176 176 static ssize_t show_feedback_ctrs(struct kobject *kobj, 177 177 struct kobj_attribute *attr, char *buf) ··· 1002 1002 } 1003 1003 1004 1004 *val = 0; 1005 + size = GET_BIT_WIDTH(reg); 1005 1006 1006 1007 if (reg->space_id == ACPI_ADR_SPACE_SYSTEM_IO) { 1007 - u32 width = GET_BIT_WIDTH(reg); 1008 1008 u32 val_u32; 1009 1009 acpi_status status; 1010 1010 1011 1011 status = acpi_os_read_port((acpi_io_address)reg->address, 1012 - &val_u32, width); 1012 + &val_u32, size); 1013 1013 if (ACPI_FAILURE(status)) { 1014 1014 pr_debug("Error: Failed to read SystemIO port %llx\n", 1015 1015 reg->address); ··· 1018 1018 1019 1019 *val = val_u32; 1020 1020 return 0; 1021 - } else if (reg->space_id == ACPI_ADR_SPACE_PLATFORM_COMM && pcc_ss_id >= 0) 1021 + } else if (reg->space_id == ACPI_ADR_SPACE_PLATFORM_COMM && pcc_ss_id >= 0) { 1022 + /* 1023 + * For registers in PCC space, the register size is determined 1024 + * by the bit width field; the access size is used to indicate 1025 + * the PCC subspace id. 1026 + */ 1027 + size = reg->bit_width; 1022 1028 vaddr = GET_PCC_VADDR(reg->address, pcc_ss_id); 1029 + } 1023 1030 else if (reg->space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY) 1024 1031 vaddr = reg_res->sys_mem_vaddr; 1025 1032 else if (reg->space_id == ACPI_ADR_SPACE_FIXED_HARDWARE) 1026 1033 return cpc_read_ffh(cpu, reg, val); 1027 1034 else 1028 1035 return acpi_os_read_memory((acpi_physical_address)reg->address, 1029 - val, reg->bit_width); 1030 - 1031 - size = GET_BIT_WIDTH(reg); 1036 + val, size); 1032 1037 1033 1038 switch (size) { 1034 1039 case 8: ··· 1049 1044 *val = readq_relaxed(vaddr); 1050 1045 break; 1051 1046 default: 1052 - pr_debug("Error: Cannot read %u bit width from PCC for ss: %d\n", 1053 - reg->bit_width, pcc_ss_id); 1047 + if (reg->space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY) { 1048 + pr_debug("Error: Cannot read %u bit width from system memory: 0x%llx\n", 1049 + size, reg->address); 1050 + } else if (reg->space_id == ACPI_ADR_SPACE_PLATFORM_COMM) { 1051 + pr_debug("Error: Cannot read %u bit width from PCC for ss: %d\n", 1052 + size, pcc_ss_id); 1053 + } 1054 1054 return -EFAULT; 1055 1055 } 1056 1056 ··· 1073 1063 int pcc_ss_id = per_cpu(cpu_pcc_subspace_idx, cpu); 1074 1064 struct cpc_reg *reg = &reg_res->cpc_entry.reg; 1075 1065 1066 + size = GET_BIT_WIDTH(reg); 1067 + 1076 1068 if (reg->space_id == ACPI_ADR_SPACE_SYSTEM_IO) { 1077 - u32 width = GET_BIT_WIDTH(reg); 1078 1069 acpi_status status; 1079 1070 1080 1071 status = acpi_os_write_port((acpi_io_address)reg->address, 1081 - (u32)val, width); 1072 + (u32)val, size); 1082 1073 if (ACPI_FAILURE(status)) { 1083 1074 pr_debug("Error: Failed to write SystemIO port %llx\n", 1084 1075 reg->address); ··· 1087 1076 } 1088 1077 1089 1078 return 0; 1090 - } else if (reg->space_id == ACPI_ADR_SPACE_PLATFORM_COMM && pcc_ss_id >= 0) 1079 + } else if (reg->space_id == ACPI_ADR_SPACE_PLATFORM_COMM && pcc_ss_id >= 0) { 1080 + /* 1081 + * For registers in PCC space, the register size is determined 1082 + * by the bit width field; the access size is used to indicate 1083 + * the PCC subspace id. 1084 + */ 1085 + size = reg->bit_width; 1091 1086 vaddr = GET_PCC_VADDR(reg->address, pcc_ss_id); 1087 + } 1092 1088 else if (reg->space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY) 1093 1089 vaddr = reg_res->sys_mem_vaddr; 1094 1090 else if (reg->space_id == ACPI_ADR_SPACE_FIXED_HARDWARE) 1095 1091 return cpc_write_ffh(cpu, reg, val); 1096 1092 else 1097 1093 return acpi_os_write_memory((acpi_physical_address)reg->address, 1098 - val, reg->bit_width); 1099 - 1100 - size = GET_BIT_WIDTH(reg); 1094 + val, size); 1101 1095 1102 1096 if (reg->space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY) 1103 1097 val = MASK_VAL(reg, val); ··· 1121 1105 writeq_relaxed(val, vaddr); 1122 1106 break; 1123 1107 default: 1124 - pr_debug("Error: Cannot write %u bit width to PCC for ss: %d\n", 1125 - reg->bit_width, pcc_ss_id); 1108 + if (reg->space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY) { 1109 + pr_debug("Error: Cannot write %u bit width to system memory: 0x%llx\n", 1110 + size, reg->address); 1111 + } else if (reg->space_id == ACPI_ADR_SPACE_PLATFORM_COMM) { 1112 + pr_debug("Error: Cannot write %u bit width to PCC for ss: %d\n", 1113 + size, pcc_ss_id); 1114 + } 1126 1115 ret_val = -EFAULT; 1127 1116 break; 1128 1117 }
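The MASK_VAL rewrite at the top of this hunk is the subtle part: the old macro applied GENMASK() to the shift amount instead of to the shifted value, so the field was never masked at all (and GENMASK(bit_width, 0) would have been one bit too wide anyway). A worked example for a field with bit_offset = 8 and bit_width = 8, raw register value 0x12abcd:

/* old: 0x12abcd >> (8 & GENMASK(8, 0)) = 0x12abcd >> 8 = 0x12ab
 *      -> the neighbouring field (0x12) leaks into the result
 * new: (0x12abcd >> 8) & GENMASK(7, 0) = 0x12ab & 0xff  = 0xab
 *      -> exactly the 8-bit field at offset 8
 */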
+3 -5
drivers/acpi/x86/s2idle.c
··· 492 492 unsigned int func_mask; 493 493 494 494 /* 495 - * Avoid evaluating the same _DSM function for two 496 - * different UUIDs and prioritize the MSFT one. 495 + * Log a message if the _DSM function sets for two 496 + * different UUIDs overlap. 497 497 */ 498 498 func_mask = lps0_dsm_func_mask & lps0_dsm_func_mask_microsoft; 499 - if (func_mask) { 499 + if (func_mask) 500 500 acpi_handle_info(adev->handle, 501 501 "Duplicate LPS0 _DSM functions (mask: 0x%x)\n", 502 502 func_mask); 503 - lps0_dsm_func_mask &= ~func_mask; 504 - } 505 503 } 506 504 } 507 505
+17 -21
drivers/cxl/core/mbox.c
··· 946 946 struct cxl_memdev *cxlmd = mds->cxlds.cxlmd; 947 947 struct device *dev = mds->cxlds.dev; 948 948 struct cxl_get_event_payload *payload; 949 - struct cxl_mbox_cmd mbox_cmd; 950 949 u8 log_type = type; 951 950 u16 nr_rec; 952 951 953 952 mutex_lock(&mds->event.log_lock); 954 953 payload = mds->event.buf; 955 954 956 - mbox_cmd = (struct cxl_mbox_cmd) { 957 - .opcode = CXL_MBOX_OP_GET_EVENT_RECORD, 958 - .payload_in = &log_type, 959 - .size_in = sizeof(log_type), 960 - .payload_out = payload, 961 - .min_out = struct_size(payload, records, 0), 962 - }; 963 - 964 955 do { 965 956 int rc, i; 966 - 967 - mbox_cmd.size_out = mds->payload_size; 957 + struct cxl_mbox_cmd mbox_cmd = (struct cxl_mbox_cmd) { 958 + .opcode = CXL_MBOX_OP_GET_EVENT_RECORD, 959 + .payload_in = &log_type, 960 + .size_in = sizeof(log_type), 961 + .payload_out = payload, 962 + .size_out = mds->payload_size, 963 + .min_out = struct_size(payload, records, 0), 964 + }; 968 965 969 966 rc = cxl_internal_send_cmd(mds, &mbox_cmd); 970 967 if (rc) { ··· 1294 1297 struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); 1295 1298 struct cxl_mbox_poison_out *po; 1296 1299 struct cxl_mbox_poison_in pi; 1297 - struct cxl_mbox_cmd mbox_cmd; 1298 1300 int nr_records = 0; 1299 1301 int rc; 1300 1302 ··· 1305 1309 pi.offset = cpu_to_le64(offset); 1306 1310 pi.length = cpu_to_le64(len / CXL_POISON_LEN_MULT); 1307 1311 1308 - mbox_cmd = (struct cxl_mbox_cmd) { 1309 - .opcode = CXL_MBOX_OP_GET_POISON, 1310 - .size_in = sizeof(pi), 1311 - .payload_in = &pi, 1312 - .size_out = mds->payload_size, 1313 - .payload_out = po, 1314 - .min_out = struct_size(po, record, 0), 1315 - }; 1316 - 1317 1312 do { 1313 + struct cxl_mbox_cmd mbox_cmd = (struct cxl_mbox_cmd){ 1314 + .opcode = CXL_MBOX_OP_GET_POISON, 1315 + .size_in = sizeof(pi), 1316 + .payload_in = &pi, 1317 + .size_out = mds->payload_size, 1318 + .payload_out = po, 1319 + .min_out = struct_size(po, record, 0), 1320 + }; 1321 + 1318 1322 rc = cxl_internal_send_cmd(mds, &mbox_cmd); 1319 1323 if (rc) 1320 1324 break;
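Moving the mbox_cmd initialization inside the loops reads as a fix for descriptor reuse: the mailbox transport writes the actual returned payload length back into size_out, so a command reused across iterations keeps the shrunken value and later passes can fail output-size validation (a hedged reading; 'more' below stands in for the device's more-records indication). The pattern in condensed form:

do {
	struct cxl_mbox_cmd cmd = {
		.opcode   = CXL_MBOX_OP_GET_POISON,
		.size_out = mds->payload_size,	/* restored on every pass */
	};

	rc = cxl_internal_send_cmd(mds, &cmd);	/* may rewrite cmd.size_out */
} while (rc == 0 && more);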
+4
drivers/dma/idma64.c
··· 171 171 u32 status_err; 172 172 unsigned short i; 173 173 174 + /* Since IRQ may be shared, check if DMA controller is powered on */ 175 + if (status == GENMASK(31, 0)) 176 + return IRQ_NONE; 177 + 174 178 dev_vdbg(idma64->dma.dev, "%s: status=%#x\n", __func__, status); 175 179 176 180 /* Check if we have any interrupt from the DMA controller */
+2 -3
drivers/dma/idxd/cdev.c
··· 342 342 if (!evl) 343 343 return; 344 344 345 - spin_lock(&evl->lock); 345 + mutex_lock(&evl->lock); 346 346 status.bits = ioread64(idxd->reg_base + IDXD_EVLSTATUS_OFFSET); 347 347 t = status.tail; 348 348 h = status.head; ··· 354 354 set_bit(h, evl->bmap); 355 355 h = (h + 1) % size; 356 356 } 357 - spin_unlock(&evl->lock); 358 - 359 357 drain_workqueue(wq->wq); 358 + mutex_unlock(&evl->lock); 360 359 } 361 360 362 361 static int idxd_cdev_release(struct inode *node, struct file *filep)
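This spinlock-to-mutex conversion (continued in the idxd hunks below) is about sleeping while holding the lock: drain_workqueue() waits for all queued work to complete and may block, which is illegal in the atomic context a spinlock creates. The contrast in miniature (sketch):

spin_lock(&evl->lock);
drain_workqueue(wq->wq);	/* BUG: sleeping call under a spinlock */
spin_unlock(&evl->lock);

mutex_lock(&evl->lock);
drain_workqueue(wq->wq);	/* fine: mutex holders may sleep */
mutex_unlock(&evl->lock);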
+2 -2
drivers/dma/idxd/debugfs.c
··· 66 66 if (!evl || !evl->log) 67 67 return 0; 68 68 69 - spin_lock(&evl->lock); 69 + mutex_lock(&evl->lock); 70 70 71 71 evl_status.bits = ioread64(idxd->reg_base + IDXD_EVLSTATUS_OFFSET); 72 72 t = evl_status.tail; ··· 87 87 dump_event_entry(idxd, s, i, &count, processed); 88 88 } 89 89 90 - spin_unlock(&evl->lock); 90 + mutex_unlock(&evl->lock); 91 91 return 0; 92 92 } 93 93
+4 -4
drivers/dma/idxd/device.c
··· 775 775 goto err_alloc; 776 776 } 777 777 778 - spin_lock(&evl->lock); 778 + mutex_lock(&evl->lock); 779 779 evl->log = addr; 780 780 evl->dma = dma_addr; 781 781 evl->log_size = size; ··· 796 796 gencfg.evl_en = 1; 797 797 iowrite32(gencfg.bits, idxd->reg_base + IDXD_GENCFG_OFFSET); 798 798 799 - spin_unlock(&evl->lock); 799 + mutex_unlock(&evl->lock); 800 800 return 0; 801 801 802 802 err_alloc: ··· 819 819 if (!gencfg.evl_en) 820 820 return; 821 821 822 - spin_lock(&evl->lock); 822 + mutex_lock(&evl->lock); 823 823 gencfg.evl_en = 0; 824 824 iowrite32(gencfg.bits, idxd->reg_base + IDXD_GENCFG_OFFSET); 825 825 ··· 836 836 evl_dma = evl->dma; 837 837 evl->log = NULL; 838 838 evl->size = IDXD_EVL_SIZE_MIN; 839 - spin_unlock(&evl->lock); 839 + mutex_unlock(&evl->lock); 840 840 841 841 dma_free_coherent(dev, evl_log_size, evl_log, evl_dma); 842 842 }
+1 -1
drivers/dma/idxd/idxd.h
··· 293 293 294 294 struct idxd_evl { 295 295 /* Lock to protect event log access. */ 296 - spinlock_t lock; 296 + struct mutex lock; 297 297 void *log; 298 298 dma_addr_t dma; 299 299 /* Total size of event log = number of entries * entry size. */
+1 -1
drivers/dma/idxd/init.c
··· 354 354 if (!evl) 355 355 return -ENOMEM; 356 356 357 - spin_lock_init(&evl->lock); 357 + mutex_init(&evl->lock); 358 358 evl->size = IDXD_EVL_SIZE_MIN; 359 359 360 360 idxd_name = dev_name(idxd_confdev(idxd));
+2 -2
drivers/dma/idxd/irq.c
··· 363 363 evl_status.bits = 0; 364 364 evl_status.int_pending = 1; 365 365 366 - spin_lock(&evl->lock); 366 + mutex_lock(&evl->lock); 367 367 /* Clear interrupt pending bit */ 368 368 iowrite32(evl_status.bits_upper32, 369 369 idxd->reg_base + IDXD_EVLSTATUS_OFFSET + sizeof(u32)); ··· 380 380 381 381 evl_status.head = h; 382 382 iowrite32(evl_status.bits_lower32, idxd->reg_base + IDXD_EVLSTATUS_OFFSET); 383 - spin_unlock(&evl->lock); 383 + mutex_unlock(&evl->lock); 384 384 } 385 385 386 386 irqreturn_t idxd_misc_thread(int vec, void *data)
+3 -6
drivers/dma/idxd/perfmon.c
··· 528 528 return 0; 529 529 530 530 target = cpumask_any_but(cpu_online_mask, cpu); 531 - 532 531 /* migrate events if there is a valid target */ 533 - if (target < nr_cpu_ids) 532 + if (target < nr_cpu_ids) { 534 533 cpumask_set_cpu(target, &perfmon_dsa_cpu_mask); 535 - else 536 - target = -1; 537 - 538 - perf_pmu_migrate_context(&idxd_pmu->pmu, cpu, target); 534 + perf_pmu_migrate_context(&idxd_pmu->pmu, cpu, target); 535 + } 539 536 540 537 return 0; 541 538 }
+2 -2
drivers/dma/owl-dma.c
··· 250 250 else 251 251 regval &= ~val; 252 252 253 - writel(val, pchan->base + reg); 253 + writel(regval, pchan->base + reg); 254 254 } 255 255 256 256 static void pchan_writel(struct owl_dma_pchan *pchan, u32 reg, u32 data) ··· 274 274 else 275 275 regval &= ~val; 276 276 277 - writel(val, od->base + reg); 277 + writel(regval, od->base + reg); 278 278 } 279 279 280 280 static void dma_writel(struct owl_dma *od, u32 reg, u32 data)
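Both helpers are read-modify-write updates: writing val (the bit being set or cleared) instead of regval (the merged value) discarded every other bit in the register. Worked example for setting val = 0b0001 with the register currently reading 0b1010:

/* regval = 0b1010 | 0b0001 = 0b1011
 *   buggy:  writel(val, ...)    -> register becomes 0b0001 (bits lost)
 *   fixed:  writel(regval, ...) -> register becomes 0b1011
 */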
-3
drivers/dma/pl330.c
··· 1053 1053 1054 1054 thrd->req_running = idx; 1055 1055 1056 - if (desc->rqtype == DMA_MEM_TO_DEV || desc->rqtype == DMA_DEV_TO_MEM) 1057 - UNTIL(thrd, PL330_STATE_WFP); 1058 - 1059 1056 return true; 1060 1057 } 1061 1058
+3
drivers/dma/tegra186-gpc-dma.c
··· 746 746 bytes_xfer = dma_desc->bytes_xfer + 747 747 sg_req[dma_desc->sg_idx].len - (wcount * 4); 748 748 749 + if (dma_desc->bytes_req == bytes_xfer) 750 + return 0; 751 + 749 752 residual = dma_desc->bytes_req - (bytes_xfer % dma_desc->bytes_req); 750 753 751 754 return residual;
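The early return covers the completed-transfer case, where the modulo arithmetic gives the wrong answer. With bytes_req = 1024:

/* bytes_xfer = 768  -> 1024 - (768  % 1024) = 256   (correct residue)
 * bytes_xfer = 1024 -> 1024 - (1024 % 1024) = 1024  (wrong: the transfer
 *                      is complete, so the new check returns 0 instead)
 */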
+3
drivers/dma/xilinx/xdma-regs.h
··· 117 117 CHAN_CTRL_IE_WRITE_ERROR | \ 118 118 CHAN_CTRL_IE_DESC_ERROR) 119 119 120 + /* bits of the channel status register */ 121 + #define XDMA_CHAN_STATUS_BUSY BIT(0) 122 + 120 123 #define XDMA_CHAN_STATUS_MASK CHAN_CTRL_START 121 124 122 125 #define XDMA_CHAN_ERROR_MASK (CHAN_CTRL_IE_DESC_ALIGN_MISMATCH | \
+27 -15
drivers/dma/xilinx/xdma.c
··· 71 71 enum dma_transfer_direction dir; 72 72 struct dma_slave_config cfg; 73 73 u32 irq; 74 + struct completion last_interrupt; 75 + bool stop_requested; 74 76 }; 75 77 76 78 /** ··· 378 376 return ret; 379 377 380 378 xchan->busy = true; 379 + xchan->stop_requested = false; 380 + reinit_completion(&xchan->last_interrupt); 381 381 382 382 return 0; 383 383 } ··· 391 387 static int xdma_xfer_stop(struct xdma_chan *xchan) 392 388 { 393 389 int ret; 394 - u32 val; 395 390 struct xdma_device *xdev = xchan->xdev_hdl; 396 391 397 392 /* clear run stop bit to prevent any further auto-triggering */ ··· 398 395 CHAN_CTRL_RUN_STOP); 399 396 if (ret) 400 397 return ret; 401 - 402 - /* Clear the channel status register */ 403 - ret = regmap_read(xdev->rmap, xchan->base + XDMA_CHAN_STATUS_RC, &val); 404 - if (ret) 405 - return ret; 406 - 407 - return 0; 398 + return ret; 408 399 } 409 400 410 401 /** ··· 471 474 xchan->xdev_hdl = xdev; 472 475 xchan->base = base + i * XDMA_CHAN_STRIDE; 473 476 xchan->dir = dir; 477 + xchan->stop_requested = false; 478 + init_completion(&xchan->last_interrupt); 474 479 475 480 ret = xdma_channel_init(xchan); 476 481 if (ret) ··· 520 521 spin_lock_irqsave(&xdma_chan->vchan.lock, flags); 521 522 522 523 xdma_chan->busy = false; 524 + xdma_chan->stop_requested = true; 523 525 vd = vchan_next_desc(&xdma_chan->vchan); 524 526 if (vd) { 525 527 list_del(&vd->node); ··· 542 542 static void xdma_synchronize(struct dma_chan *chan) 543 543 { 544 544 struct xdma_chan *xdma_chan = to_xdma_chan(chan); 545 + struct xdma_device *xdev = xdma_chan->xdev_hdl; 546 + int st = 0; 547 + 548 + /* If the engine continues running, wait for the last interrupt */ 549 + regmap_read(xdev->rmap, xdma_chan->base + XDMA_CHAN_STATUS, &st); 550 + if (st & XDMA_CHAN_STATUS_BUSY) 551 + wait_for_completion_timeout(&xdma_chan->last_interrupt, msecs_to_jiffies(1000)); 545 552 546 553 vchan_synchronize(&xdma_chan->vchan); 547 554 } 548 555 549 556 /** 550 - * xdma_fill_descs - Fill hardware descriptors with contiguous memory block addresses 551 - * @sw_desc: tx descriptor state container 552 - * @src_addr: Value for a ->src_addr field of a first descriptor 553 - * @dst_addr: Value for a ->dst_addr field of a first descriptor 554 - * @size: Total size of a contiguous memory block 555 - * @filled_descs_num: Number of filled hardware descriptors for corresponding sw_desc 557 + * xdma_fill_descs() - Fill hardware descriptors for one contiguous memory chunk. 558 + * More than one descriptor will be used if the size is bigger 559 + * than XDMA_DESC_BLEN_MAX. 560 + * @sw_desc: Descriptor container 561 + * @src_addr: First value for the ->src_addr field 562 + * @dst_addr: First value for the ->dst_addr field 563 + * @size: Size of the contiguous memory block 564 + * @filled_descs_num: Index of the first descriptor to take care of in @sw_desc 556 565 */ 557 566 static inline u32 xdma_fill_descs(struct xdma_desc *sw_desc, u64 src_addr, 558 567 u64 dst_addr, u32 size, u32 filled_descs_num) ··· 713 704 desc_num = 0; 714 705 for (i = 0; i < periods; i++) { 715 706 desc_num += xdma_fill_descs(sw_desc, *src, *dst, period_size, desc_num); 716 - addr += i * period_size; 707 + addr += period_size; 717 708 } 718 709 719 710 tx_desc = vchan_tx_prep(&xdma_chan->vchan, &sw_desc->vdesc, flags); ··· 884 875 int ret; 885 876 u32 st; 886 877 bool repeat_tx; 878 + 879 + if (xchan->stop_requested) 880 + complete(&xchan->last_interrupt); 887 881 888 882 spin_lock(&xchan->vchan.lock); 889 883
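Besides the new last-interrupt handshake used by xdma_synchronize(), this hunk fixes the cyclic address stepping: accumulating i * period_size advances the position triangularly rather than linearly. Offsets programmed for four periods of size P, assuming *src/*dst track addr as in the surrounding code:

/* old (addr += i * P): 0, 0, P, 3P   (i = 0 adds nothing, later steps
 *                      jump by ever larger multiples of P)
 * new (addr += P):     0, P, 2P, 3P  (one period per iteration)
 */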
+10 -3
drivers/dma/xilinx/xilinx_dpdma.c
··· 214 214 * @running: true if the channel is running 215 215 * @first_frame: flag for the first frame of stream 216 216 * @video_group: flag if multi-channel operation is needed for video channels 217 - * @lock: lock to access struct xilinx_dpdma_chan 217 + * @lock: lock to access struct xilinx_dpdma_chan. Must be taken before 218 + * @vchan.lock, if both are to be held. 218 219 * @desc_pool: descriptor allocation pool 219 220 * @err_task: error IRQ bottom half handler 220 221 * @desc: References to descriptors being processed ··· 1098 1097 * Complete the active descriptor, if any, promote the pending 1099 1098 * descriptor to active, and queue the next transfer, if any. 1100 1099 */ 1100 + spin_lock(&chan->vchan.lock); 1101 1101 if (chan->desc.active) 1102 1102 vchan_cookie_complete(&chan->desc.active->vdesc); 1103 1103 chan->desc.active = pending; 1104 1104 chan->desc.pending = NULL; 1105 1105 1106 1106 xilinx_dpdma_chan_queue_transfer(chan); 1107 + spin_unlock(&chan->vchan.lock); 1107 1108 1108 1109 out: 1109 1110 spin_unlock_irqrestore(&chan->lock, flags); ··· 1267 1264 struct xilinx_dpdma_chan *chan = to_xilinx_chan(dchan); 1268 1265 unsigned long flags; 1269 1266 1270 - spin_lock_irqsave(&chan->vchan.lock, flags); 1267 + spin_lock_irqsave(&chan->lock, flags); 1268 + spin_lock(&chan->vchan.lock); 1271 1269 if (vchan_issue_pending(&chan->vchan)) 1272 1270 xilinx_dpdma_chan_queue_transfer(chan); 1273 - spin_unlock_irqrestore(&chan->vchan.lock, flags); 1271 + spin_unlock(&chan->vchan.lock); 1272 + spin_unlock_irqrestore(&chan->lock, flags); 1274 1273 } 1275 1274 1276 1275 static int xilinx_dpdma_config(struct dma_chan *dchan, ··· 1500 1495 XILINX_DPDMA_EINTR_CHAN_ERR_MASK << chan->id); 1501 1496 1502 1497 spin_lock_irqsave(&chan->lock, flags); 1498 + spin_lock(&chan->vchan.lock); 1503 1499 xilinx_dpdma_chan_queue_transfer(chan); 1500 + spin_unlock(&chan->vchan.lock); 1504 1501 spin_unlock_irqrestore(&chan->lock, flags); 1505 1502 } 1506 1503
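The updated kerneldoc pins down a lock hierarchy, and every caller of xilinx_dpdma_chan_queue_transfer() is reworked to honor it; taking the two locks in the opposite order anywhere else would now be a lock-order inversion. The invariant in condensed form (sketch):

spin_lock_irqsave(&chan->lock, flags);	/* outer lock, taken first */
spin_lock(&chan->vchan.lock);		/* inner lock, nests inside */
xilinx_dpdma_chan_queue_transfer(chan);	/* expects both to be held */
spin_unlock(&chan->vchan.lock);
spin_unlock_irqrestore(&chan->lock, flags);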
+91 -46
drivers/firmware/qcom/qcom_qseecom_uefisecapp.c
··· 221 221 * alignment of 8 bytes (64 bits) for GUIDs. Our definition of efi_guid_t, 222 222 * however, has an alignment of 4 byte (32 bits). So far, this seems to work 223 223 * fine here. See also the comment on the typedef of efi_guid_t. 224 + * 225 + * Note: It looks like uefisecapp is quite picky about how the memory passed to 226 + * it is structured and aligned. In particular the request/response setup used 227 + * for QSEE_CMD_UEFI_GET_VARIABLE. While qcom_qseecom_app_send(), in theory, 228 + * accepts separate buffers/addresses for the request and response parts, in 229 + * practice, however, it seems to expect them to be both part of a larger 230 + * contiguous block. We initially allocated separate buffers for the request 231 + * and response but this caused the QSEE_CMD_UEFI_GET_VARIABLE command to 232 + * either not write any response to the response buffer or outright crash the 233 + * device. Therefore, we now allocate a single contiguous block of DMA memory 234 + * for both and properly align the data using the macros below. In particular, 235 + * request and response structs are aligned at 8 byte (via __reqdata_offs()), 236 + * following the driver that this has been reverse-engineered from. 224 237 */ 225 238 #define qcuefi_buf_align_fields(fields...) \ 226 239 ({ \ ··· 256 243 257 244 #define __array_offs(type, count, offset) \ 258 245 __field_impl(sizeof(type) * (count), __alignof__(type), offset) 246 + 247 + #define __array_offs_aligned(type, count, align, offset) \ 248 + __field_impl(sizeof(type) * (count), align, offset) 249 + 250 + #define __reqdata_offs(size, offset) \ 251 + __array_offs_aligned(u8, size, 8, offset) 259 252 260 253 #define __array(type, count) __array_offs(type, count, NULL) 261 254 #define __field_offs(type, offset) __array_offs(type, 1, offset) ··· 296 277 unsigned long buffer_size = *data_size; 297 278 efi_status_t efi_status = EFI_SUCCESS; 298 279 unsigned long name_length; 280 + dma_addr_t cmd_buf_dma; 281 + size_t cmd_buf_size; 282 + void *cmd_buf; 299 283 size_t guid_offs; 300 284 size_t name_offs; 301 285 size_t req_size; 302 286 size_t rsp_size; 287 + size_t req_offs; 288 + size_t rsp_offs; 303 289 ssize_t status; 304 290 305 291 if (!name || !guid) ··· 328 304 __array(u8, buffer_size) 329 305 ); 330 306 331 - req_data = kzalloc(req_size, GFP_KERNEL); 332 - if (!req_data) { 307 + cmd_buf_size = qcuefi_buf_align_fields( 308 + __reqdata_offs(req_size, &req_offs) 309 + __reqdata_offs(rsp_size, &rsp_offs) 310 + ); 311 + 312 + cmd_buf = qseecom_dma_alloc(qcuefi->client, cmd_buf_size, &cmd_buf_dma, GFP_KERNEL); 313 + if (!cmd_buf) { 333 314 efi_status = EFI_OUT_OF_RESOURCES; 334 315 goto out; 335 316 } 336 317 337 - rsp_data = kzalloc(rsp_size, GFP_KERNEL); 338 - if (!rsp_data) { 339 - efi_status = EFI_OUT_OF_RESOURCES; 340 - goto out_free_req; 341 - } 318 + req_data = cmd_buf + req_offs; 319 + rsp_data = cmd_buf + rsp_offs; 342 320 343 321 req_data->command_id = QSEE_CMD_UEFI_GET_VARIABLE; 344 322 req_data->data_size = buffer_size; ··· 358 332 359 333 memcpy(((void *)req_data) + req_data->guid_offset, guid, req_data->guid_size); 360 334 361 - status = qcom_qseecom_app_send(qcuefi->client, req_data, req_size, rsp_data, rsp_size); 335 + status = qcom_qseecom_app_send(qcuefi->client, 336 + cmd_buf_dma + req_offs, req_size, 337 + cmd_buf_dma + rsp_offs, rsp_size); 362 338 if (status) { 363 339 efi_status = EFI_DEVICE_ERROR; 364 340 goto out_free; ··· 435 407 memcpy(data, ((void *)rsp_data) + rsp_data->data_offset, rsp_data->data_size); 436 408 437 409 out_free: 438 - kfree(rsp_data); 439 - out_free_req: 440 - kfree(req_data); 410 + qseecom_dma_free(qcuefi->client, cmd_buf_size, cmd_buf, cmd_buf_dma); 441 411 out: 442 412 return efi_status; 443 413 } ··· 448 422 struct qsee_rsp_uefi_set_variable *rsp_data; 449 423 efi_status_t efi_status = EFI_SUCCESS; 450 424 unsigned long name_length; 425 + dma_addr_t cmd_buf_dma; 426 + size_t cmd_buf_size; 427 + void *cmd_buf; 451 428 size_t name_offs; 452 429 size_t guid_offs; 453 430 size_t data_offs; 454 431 size_t req_size; 432 + size_t req_offs; 433 + size_t rsp_offs; 455 434 ssize_t status; 456 435 457 436 if (!name || !guid) ··· 481 450 __array_offs(u8, data_size, &data_offs) 482 451 ); 483 452 484 - req_data = kzalloc(req_size, GFP_KERNEL); 485 - if (!req_data) { 453 + cmd_buf_size = qcuefi_buf_align_fields( 454 + __reqdata_offs(req_size, &req_offs) 455 + __reqdata_offs(sizeof(*rsp_data), &rsp_offs) 456 + ); 457 + 458 + cmd_buf = qseecom_dma_alloc(qcuefi->client, cmd_buf_size, &cmd_buf_dma, GFP_KERNEL); 459 + if (!cmd_buf) { 486 460 efi_status = EFI_OUT_OF_RESOURCES; 487 461 goto out; 488 462 } 489 463 490 - rsp_data = kzalloc(sizeof(*rsp_data), GFP_KERNEL); 491 - if (!rsp_data) { 492 - efi_status = EFI_OUT_OF_RESOURCES; 493 - goto out_free_req; 494 - } 464 + req_data = cmd_buf + req_offs; 465 + rsp_data = cmd_buf + rsp_offs; 495 466 496 467 req_data->command_id = QSEE_CMD_UEFI_SET_VARIABLE; 497 468 req_data->attributes = attributes; ··· 516 483 if (data_size) 517 484 memcpy(((void *)req_data) + req_data->data_offset, data, req_data->data_size); 518 485 519 - status = qcom_qseecom_app_send(qcuefi->client, req_data, req_size, rsp_data, 520 - sizeof(*rsp_data)); 486 + status = qcom_qseecom_app_send(qcuefi->client, 487 + cmd_buf_dma + req_offs, req_size, 488 + cmd_buf_dma + rsp_offs, sizeof(*rsp_data)); 521 489 if (status) { 522 490 efi_status = EFI_DEVICE_ERROR; 523 491 goto out_free; ··· 541 507 } 542 508 543 509 out_free: 544 - kfree(rsp_data); 545 - out_free_req: 546 - kfree(req_data); 510 + qseecom_dma_free(qcuefi->client, cmd_buf_size, cmd_buf, cmd_buf_dma); 547 511 out: 548 512 return efi_status; 549 513 } ··· 553 521 struct qsee_req_uefi_get_next_variable *req_data; 554 522 struct qsee_rsp_uefi_get_next_variable *rsp_data; 555 523 efi_status_t efi_status = EFI_SUCCESS; 524 + dma_addr_t cmd_buf_dma; 525 + size_t cmd_buf_size; 526 + void *cmd_buf; 556 527 size_t guid_offs; 557 528 size_t name_offs; 558 529 size_t req_size; 559 530 size_t rsp_size; 531 + size_t req_offs; 532 + size_t rsp_offs; 560 533 ssize_t status; 561 534 562 535 if (!name_size || !name || !guid) ··· 582 545 __array(*name, *name_size / sizeof(*name)) 583 546 ); 584 547 585 - req_data = kzalloc(req_size, GFP_KERNEL); 586 - if (!req_data) { 548 + cmd_buf_size = qcuefi_buf_align_fields( 549 + __reqdata_offs(req_size, &req_offs) 550 + __reqdata_offs(rsp_size, &rsp_offs) 551 + ); 552 + 553 + cmd_buf = qseecom_dma_alloc(qcuefi->client, cmd_buf_size, &cmd_buf_dma, GFP_KERNEL); 554 + if (!cmd_buf) { 587 555 efi_status = EFI_OUT_OF_RESOURCES; 588 556 goto out; 589 557 } 590 558 591 - rsp_data = kzalloc(rsp_size, GFP_KERNEL); 592 - if (!rsp_data) { 593 - efi_status = EFI_OUT_OF_RESOURCES; 594 - goto out_free_req; 595 - } 559 + req_data = cmd_buf + req_offs; 560 + rsp_data = cmd_buf + rsp_offs; 596 561 597 562 req_data->command_id = QSEE_CMD_UEFI_GET_NEXT_VARIABLE; 598 563 req_data->guid_offset = guid_offs; ··· 611 572 goto out_free; 612 573 } 613 574 614 - status = qcom_qseecom_app_send(qcuefi->client, req_data, req_size, rsp_data, rsp_size); 575 + status = qcom_qseecom_app_send(qcuefi->client, 576 + cmd_buf_dma + req_offs, req_size, 577 + cmd_buf_dma + rsp_offs, rsp_size); 615 578 if (status) { 616 579 efi_status = EFI_DEVICE_ERROR; 617 580 goto out_free; ··· 686 645 } 687 646 688 647 out_free: 689 - kfree(rsp_data); 690 - out_free_req: 691 - kfree(req_data); 648 + qseecom_dma_free(qcuefi->client, cmd_buf_size, cmd_buf, cmd_buf_dma); 692 649 out: 693 650 return efi_status; 694 651 } ··· 698 659 struct qsee_req_uefi_query_variable_info *req_data; 699 660 struct qsee_rsp_uefi_query_variable_info *rsp_data; 700 661 efi_status_t efi_status = EFI_SUCCESS; 662 + dma_addr_t cmd_buf_dma; 663 + size_t cmd_buf_size; 664 + void *cmd_buf; 665 + size_t req_offs; 666 + size_t rsp_offs; 701 667 int status; 702 668 703 - req_data = kzalloc(sizeof(*req_data), GFP_KERNEL); 704 - if (!req_data) { 669 + cmd_buf_size = qcuefi_buf_align_fields( 670 + __reqdata_offs(sizeof(*req_data), &req_offs) 671 + __reqdata_offs(sizeof(*rsp_data), &rsp_offs) 672 + ); 673 + 674 + cmd_buf = qseecom_dma_alloc(qcuefi->client, cmd_buf_size, &cmd_buf_dma, GFP_KERNEL); 675 + if (!cmd_buf) { 705 676 efi_status = EFI_OUT_OF_RESOURCES; 706 677 goto out; 707 678 } 708 679 709 - rsp_data = kzalloc(sizeof(*rsp_data), GFP_KERNEL); 710 - if (!rsp_data) { 711 - efi_status = EFI_OUT_OF_RESOURCES; 712 - goto out_free_req; 713 - } 680 + req_data = cmd_buf + req_offs; 681 + rsp_data = cmd_buf + rsp_offs; 714 682 715 683 req_data->command_id = QSEE_CMD_UEFI_QUERY_VARIABLE_INFO; 716 684 req_data->attributes = attr; 717 685 req_data->length = sizeof(*req_data); 718 686 719 - status = qcom_qseecom_app_send(qcuefi->client, req_data, sizeof(*req_data), rsp_data, 720 - sizeof(*rsp_data)); 687 + status = qcom_qseecom_app_send(qcuefi->client, 688 + cmd_buf_dma + req_offs, sizeof(*req_data), 689 + cmd_buf_dma + rsp_offs, sizeof(*rsp_data)); 721 690 if (status) { 722 691 efi_status = EFI_DEVICE_ERROR; 723 692 goto out_free; ··· 758 711 *max_variable_size = rsp_data->max_variable_size; 759 712 760 713 out_free: 761 - kfree(rsp_data); 762 - out_free_req: 763 - kfree(req_data); 714 + qseecom_dma_free(qcuefi->client, cmd_buf_size, cmd_buf, cmd_buf_dma); 764 715 out: 765 716 return efi_status; 766 717 }
+6 -31
drivers/firmware/qcom/qcom_scm.c
··· 1576 1576 /** 1577 1577 * qcom_scm_qseecom_app_send() - Send to and receive data from a given QSEE app. 1578 1578 * @app_id: The ID of the target app. 1579 - * @req: Request buffer sent to the app (must be DMA-mappable). 1579 + * @req: DMA address of the request buffer sent to the app. 1580 1580 * @req_size: Size of the request buffer. 1581 - * @rsp: Response buffer, written to by the app (must be DMA-mappable). 1581 + * @rsp: DMA address of the response buffer, written to by the app. 1582 1582 * @rsp_size: Size of the response buffer. 1583 1583 * 1584 1584 * Sends a request to the QSEE app associated with the given ID and read back ··· 1589 1589 * 1590 1590 * Return: Zero on success, nonzero on failure. 1591 1591 */ 1592 - int qcom_scm_qseecom_app_send(u32 app_id, void *req, size_t req_size, void *rsp, 1593 - size_t rsp_size) 1592 + int qcom_scm_qseecom_app_send(u32 app_id, dma_addr_t req, size_t req_size, 1593 + dma_addr_t rsp, size_t rsp_size) 1594 1594 { 1595 1595 struct qcom_scm_qseecom_resp res = {}; 1596 1596 struct qcom_scm_desc desc = {}; 1597 - dma_addr_t req_phys; 1598 - dma_addr_t rsp_phys; 1599 1597 int status; 1600 1598 1601 - /* Map request buffer */ 1602 - req_phys = dma_map_single(__scm->dev, req, req_size, DMA_TO_DEVICE); 1603 - status = dma_mapping_error(__scm->dev, req_phys); 1604 - if (status) { 1605 - dev_err(__scm->dev, "qseecom: failed to map request buffer\n"); 1606 - return status; 1607 - } 1608 - 1609 - /* Map response buffer */ 1610 - rsp_phys = dma_map_single(__scm->dev, rsp, rsp_size, DMA_FROM_DEVICE); 1611 - status = dma_mapping_error(__scm->dev, rsp_phys); 1612 - if (status) { 1613 - dma_unmap_single(__scm->dev, req_phys, req_size, DMA_TO_DEVICE); 1614 - dev_err(__scm->dev, "qseecom: failed to map response buffer\n"); 1615 - return status; 1616 - } 1617 - 1618 - /* Set up SCM call data */ 1619 1599 desc.owner = QSEECOM_TZ_OWNER_TZ_APPS; 1620 1600 desc.svc = QSEECOM_TZ_SVC_APP_ID_PLACEHOLDER; 1621 1601 desc.cmd = QSEECOM_TZ_CMD_APP_SEND; ··· 1603 1623 QCOM_SCM_RW, QCOM_SCM_VAL, 1604 1624 QCOM_SCM_RW, QCOM_SCM_VAL); 1605 1625 desc.args[0] = app_id; 1606 - desc.args[1] = req_phys; 1626 + desc.args[1] = req; 1607 1627 desc.args[2] = req_size; 1608 - desc.args[3] = rsp_phys; 1628 + desc.args[3] = rsp; 1609 1629 desc.args[4] = rsp_size; 1610 1630 1611 - /* Perform call */ 1612 1631 status = qcom_scm_qseecom_call(&desc, &res); 1613 - 1614 - /* Unmap buffers */ 1615 - dma_unmap_single(__scm->dev, rsp_phys, rsp_size, DMA_FROM_DEVICE); 1616 - dma_unmap_single(__scm->dev, req_phys, req_size, DMA_TO_DEVICE); 1617 1632 1618 1633 if (status) 1619 1634 return status;
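With the map/unmap calls dropped here, ownership of the DMA mapping moves to the caller, which can map once and reuse the addresses across calls. A hedged sketch of what a caller using streaming mappings might now look like; send_mapped() is a hypothetical wrapper, not a function in this series.

#include <linux/dma-mapping.h>
#include <linux/types.h>

static int send_mapped(struct device *dev, u32 app_id,
                       void *req, size_t req_size, void *rsp, size_t rsp_size)
{
        dma_addr_t req_dma, rsp_dma;
        int ret;

        req_dma = dma_map_single(dev, req, req_size, DMA_TO_DEVICE);
        if (dma_mapping_error(dev, req_dma))
                return -ENOMEM;

        rsp_dma = dma_map_single(dev, rsp, rsp_size, DMA_FROM_DEVICE);
        if (dma_mapping_error(dev, rsp_dma)) {
                dma_unmap_single(dev, req_dma, req_size, DMA_TO_DEVICE);
                return -ENOMEM;
        }

        /* the new signature takes DMA addresses, not kernel pointers */
        ret = qcom_scm_qseecom_app_send(app_id, req_dma, req_size,
                                        rsp_dma, rsp_size);

        dma_unmap_single(dev, rsp_dma, rsp_size, DMA_FROM_DEVICE);
        dma_unmap_single(dev, req_dma, req_size, DMA_TO_DEVICE);
        return ret;
}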
+6 -3
drivers/gpio/gpio-tangier.c
··· 195 195 196 196 static void tng_irq_ack(struct irq_data *d) 197 197 { 198 - struct tng_gpio *priv = irq_data_get_irq_chip_data(d); 198 + struct gpio_chip *gc = irq_data_get_irq_chip_data(d); 199 + struct tng_gpio *priv = gpiochip_get_data(gc); 199 200 irq_hw_number_t gpio = irqd_to_hwirq(d); 200 201 void __iomem *gisr; 201 202 u8 shift; ··· 228 227 229 228 static void tng_irq_mask(struct irq_data *d) 230 229 { 231 - struct tng_gpio *priv = irq_data_get_irq_chip_data(d); 230 + struct gpio_chip *gc = irq_data_get_irq_chip_data(d); 231 + struct tng_gpio *priv = gpiochip_get_data(gc); 232 232 irq_hw_number_t gpio = irqd_to_hwirq(d); 233 233 234 234 tng_irq_unmask_mask(priv, gpio, false); ··· 238 236 239 237 static void tng_irq_unmask(struct irq_data *d) 240 238 { 241 - struct tng_gpio *priv = irq_data_get_irq_chip_data(d); 239 + struct gpio_chip *gc = irq_data_get_irq_chip_data(d); 240 + struct tng_gpio *priv = gpiochip_get_data(gc); 242 241 irq_hw_number_t gpio = irqd_to_hwirq(d); 243 242 244 243 gpiochip_enable_irq(&priv->chip, gpio);
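The bug these three hunks fix is a type confusion: the irq chip data here is the struct gpio_chip itself, so the driver state must be recovered through gpiochip_get_data() rather than cast directly. A sketch of the corrected lookup chain; tng_from_irq_data() is a hypothetical helper for illustration.

static struct tng_gpio *tng_from_irq_data(struct irq_data *d)
{
        /* irq chip data is the gpio_chip, not the driver's private state */
        struct gpio_chip *gc = irq_data_get_irq_chip_data(d);

        /* recover the data registered with [devm_]gpiochip_add_data() */
        return gpiochip_get_data(gc);
}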
+11 -9
drivers/gpio/gpio-tegra186.c
··· 36 36 #define TEGRA186_GPIO_SCR_SEC_REN BIT(27) 37 37 #define TEGRA186_GPIO_SCR_SEC_G1W BIT(9) 38 38 #define TEGRA186_GPIO_SCR_SEC_G1R BIT(1) 39 - #define TEGRA186_GPIO_FULL_ACCESS (TEGRA186_GPIO_SCR_SEC_WEN | \ 40 - TEGRA186_GPIO_SCR_SEC_REN | \ 41 - TEGRA186_GPIO_SCR_SEC_G1R | \ 42 - TEGRA186_GPIO_SCR_SEC_G1W) 43 - #define TEGRA186_GPIO_SCR_SEC_ENABLE (TEGRA186_GPIO_SCR_SEC_WEN | \ 44 - TEGRA186_GPIO_SCR_SEC_REN) 45 39 46 40 /* control registers */ 47 41 #define TEGRA186_GPIO_ENABLE_CONFIG 0x00 ··· 171 177 172 178 value = __raw_readl(secure + TEGRA186_GPIO_SCR); 173 179 174 - if ((value & TEGRA186_GPIO_SCR_SEC_ENABLE) == 0) 175 - return true; 180 + /* 181 + * When SCR_SEC_[R|W]EN is unset, we have full read/write access to all the 182 + * registers for a given GPIO pin. 183 + * When SCR_SEC_[R|W]EN is set, there is a need to further check the accompanying 184 + * SCR_SEC_G1[R|W] bit to determine read/write access to all the registers for a given 185 + * GPIO pin. 186 + */ 176 187 177 - if ((value & TEGRA186_GPIO_FULL_ACCESS) == TEGRA186_GPIO_FULL_ACCESS) 188 + if (((value & TEGRA186_GPIO_SCR_SEC_REN) == 0 || 189 + ((value & TEGRA186_GPIO_SCR_SEC_REN) && (value & TEGRA186_GPIO_SCR_SEC_G1R))) && 190 + ((value & TEGRA186_GPIO_SCR_SEC_WEN) == 0 || 191 + ((value & TEGRA186_GPIO_SCR_SEC_WEN) && (value & TEGRA186_GPIO_SCR_SEC_G1W)))) 178 192 return true; 179 193 180 194 return false;
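The replacement condition is easier to read once the redundant re-tests are folded away; since !A || (A && B) is equivalent to !A || B, the whole check reduces to the sketch below. This is a logically equivalent refactoring for illustration, not the driver's code.

static bool scr_allows_full_access(u32 value)
{
        /* reads allowed: guard bit clear, or group-1 read bit set */
        bool can_read  = !(value & TEGRA186_GPIO_SCR_SEC_REN) ||
                          (value & TEGRA186_GPIO_SCR_SEC_G1R);
        /* writes allowed: guard bit clear, or group-1 write bit set */
        bool can_write = !(value & TEGRA186_GPIO_SCR_SEC_WEN) ||
                          (value & TEGRA186_GPIO_SCR_SEC_G1W);

        return can_read && can_write;
}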
+21 -14
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
··· 1854 1854 err_bo_create: 1855 1855 amdgpu_amdkfd_unreserve_mem_limit(adev, aligned_size, flags, xcp_id); 1856 1856 err_reserve_limit: 1857 + amdgpu_sync_free(&(*mem)->sync); 1857 1858 mutex_destroy(&(*mem)->lock); 1858 1859 if (gobj) 1859 1860 drm_gem_object_put(gobj); ··· 2901 2900 2902 2901 amdgpu_sync_create(&sync_obj); 2903 2902 2904 - /* Validate BOs and map them to GPUVM (update VM page tables). */ 2903 + /* Validate BOs managed by KFD */ 2905 2904 list_for_each_entry(mem, &process_info->kfd_bo_list, 2906 2905 validate_list) { 2907 2906 2908 2907 struct amdgpu_bo *bo = mem->bo; 2909 2908 uint32_t domain = mem->domain; 2910 - struct kfd_mem_attachment *attachment; 2911 2909 struct dma_resv_iter cursor; 2912 2910 struct dma_fence *fence; 2913 2911 ··· 2931 2931 goto validate_map_fail; 2932 2932 } 2933 2933 } 2934 + } 2935 + 2936 + if (failed_size) 2937 + pr_debug("0x%lx/0x%lx in system\n", failed_size, total_size); 2938 + 2939 + /* Validate PDs, PTs and evicted DMABuf imports last. Otherwise BO 2940 + * validations above would invalidate DMABuf imports again. 2941 + */ 2942 + ret = process_validate_vms(process_info, &exec.ticket); 2943 + if (ret) { 2944 + pr_debug("Validating VMs failed, ret: %d\n", ret); 2945 + goto validate_map_fail; 2946 + } 2947 + 2948 + /* Update mappings managed by KFD. */ 2949 + list_for_each_entry(mem, &process_info->kfd_bo_list, 2950 + validate_list) { 2951 + struct kfd_mem_attachment *attachment; 2952 + 2934 2953 list_for_each_entry(attachment, &mem->attachments, list) { 2935 2954 if (!attachment->is_mapped) 2936 2955 continue; ··· 2964 2945 goto validate_map_fail; 2965 2946 } 2966 2947 } 2967 - } 2968 - 2969 - if (failed_size) 2970 - pr_debug("0x%lx/0x%lx in system\n", failed_size, total_size); 2971 - 2972 - /* Validate PDs, PTs and evicted DMABuf imports last. Otherwise BO 2973 - * validations above would invalidate DMABuf imports again. 2974 - */ 2975 - ret = process_validate_vms(process_info, &exec.ticket); 2976 - if (ret) { 2977 - pr_debug("Validating VMs failed, ret: %d\n", ret); 2978 - goto validate_map_fail; 2979 2948 } 2980 2949 2981 2950 /* Update mappings not managed by KFD */
+1
drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c
··· 1132 1132 return; 1133 1133 1134 1134 amdgpu_mes_remove_hw_queue(adev, ring->hw_queue_id); 1135 + del_timer_sync(&ring->fence_drv.fallback_timer); 1135 1136 amdgpu_ring_fini(ring); 1136 1137 kfree(ring); 1137 1138 }
+2
drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
··· 605 605 else 606 606 amdgpu_bo_placement_from_domain(bo, bp->domain); 607 607 if (bp->type == ttm_bo_type_kernel) 608 + bo->tbo.priority = 2; 609 + else if (!(bp->flags & AMDGPU_GEM_CREATE_DISCARDABLE)) 608 610 bo->tbo.priority = 1; 609 611 610 612 if (!bp->destroy)
+3
drivers/gpu/drm/amd/amdgpu/amdgpu_umsch_mm.c
··· 774 774 { 775 775 struct amdgpu_device *adev = (struct amdgpu_device *)handle; 776 776 777 + if (amdgpu_in_reset(adev) || adev->in_s0ix || adev->in_suspend) 778 + return 0; 779 + 777 780 return umsch_mm_test(adev); 778 781 } 779 782
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_vpe.c
··· 205 205 dpm_ctl &= 0xfffffffe; /* Disable DPM */ 206 206 WREG32(vpe_get_reg_offset(vpe, 0, vpe->regs.dpm_enable), dpm_ctl); 207 207 dev_dbg(adev->dev, "%s: disable vpe dpm\n", __func__); 208 - return 0; 208 + return -EINVAL; 209 209 } 210 210 211 211 int amdgpu_vpe_psp_update_sram(struct amdgpu_device *adev)
+1 -2
drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
··· 9186 9186 7 + /* PIPELINE_SYNC */ 9187 9187 SOC15_FLUSH_GPU_TLB_NUM_WREG * 5 + 9188 9188 SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 7 + 9189 - 2 + /* VM_FLUSH */ 9189 + 4 + /* VM_FLUSH */ 9190 9190 8 + /* FENCE for VM_FLUSH */ 9191 9191 20 + /* GDS switch */ 9192 9192 4 + /* double SWITCH_BUFFER, ··· 9276 9276 7 + /* gfx_v10_0_ring_emit_pipeline_sync */ 9277 9277 SOC15_FLUSH_GPU_TLB_NUM_WREG * 5 + 9278 9278 SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 7 + 9279 - 2 + /* gfx_v10_0_ring_emit_vm_flush */ 9280 9279 8 + 8 + 8, /* gfx_v10_0_ring_emit_fence_kiq x3 for user fence, vm fence */ 9281 9280 .emit_ib_size = 7, /* gfx_v10_0_ring_emit_ib_compute */ 9282 9281 .emit_ib = gfx_v10_0_ring_emit_ib_compute,
+1 -2
drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
··· 6192 6192 7 + /* PIPELINE_SYNC */ 6193 6193 SOC15_FLUSH_GPU_TLB_NUM_WREG * 5 + 6194 6194 SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 7 + 6195 - 2 + /* VM_FLUSH */ 6195 + 4 + /* VM_FLUSH */ 6196 6196 8 + /* FENCE for VM_FLUSH */ 6197 6197 20 + /* GDS switch */ 6198 6198 5 + /* COND_EXEC */ ··· 6278 6278 7 + /* gfx_v11_0_ring_emit_pipeline_sync */ 6279 6279 SOC15_FLUSH_GPU_TLB_NUM_WREG * 5 + 6280 6280 SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 7 + 6281 - 2 + /* gfx_v11_0_ring_emit_vm_flush */ 6282 6281 8 + 8 + 8, /* gfx_v11_0_ring_emit_fence_kiq x3 for user fence, vm fence */ 6283 6282 .emit_ib_size = 7, /* gfx_v11_0_ring_emit_ib_compute */ 6284 6283 .emit_ib = gfx_v11_0_ring_emit_ib_compute,
-2
drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
··· 6981 6981 7 + /* gfx_v9_0_ring_emit_pipeline_sync */ 6982 6982 SOC15_FLUSH_GPU_TLB_NUM_WREG * 5 + 6983 6983 SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 7 + 6984 - 2 + /* gfx_v9_0_ring_emit_vm_flush */ 6985 6984 8 + 8 + 8 + /* gfx_v9_0_ring_emit_fence x3 for user fence, vm fence */ 6986 6985 7 + /* gfx_v9_0_emit_mem_sync */ 6987 6986 5 + /* gfx_v9_0_emit_wave_limit for updating mmSPI_WCL_PIPE_PERCENT_GFX register */ ··· 7018 7019 7 + /* gfx_v9_0_ring_emit_pipeline_sync */ 7019 7020 SOC15_FLUSH_GPU_TLB_NUM_WREG * 5 + 7020 7021 SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 7 + 7021 - 2 + /* gfx_v9_0_ring_emit_vm_flush */ 7022 7022 8 + 8 + 8, /* gfx_v9_0_ring_emit_fence_kiq x3 for user fence, vm fence */ 7023 7023 .emit_ib_size = 7, /* gfx_v9_0_ring_emit_ib_compute */ 7024 7024 .emit_fence = gfx_v9_0_ring_emit_fence_kiq,
+2 -1
drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
··· 368 368 u32 ref_and_mask = 0; 369 369 const struct nbio_hdp_flush_reg *nbio_hf_reg = adev->nbio.hdp_flush_reg; 370 370 371 - ref_and_mask = nbio_hf_reg->ref_and_mask_sdma0 << ring->me; 371 + ref_and_mask = nbio_hf_reg->ref_and_mask_sdma0 372 + << (ring->me % adev->sdma.num_inst_per_aid); 372 373 373 374 sdma_v4_4_2_wait_reg_mem(ring, 0, 1, 374 375 adev->nbio.funcs->get_hdp_flush_done_offset(adev),
+14 -10
drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
··· 280 280 u32 ref_and_mask = 0; 281 281 const struct nbio_hdp_flush_reg *nbio_hf_reg = adev->nbio.hdp_flush_reg; 282 282 283 - ref_and_mask = nbio_hf_reg->ref_and_mask_sdma0 << ring->me; 283 + if (ring->me > 1) { 284 + amdgpu_asic_flush_hdp(adev, ring); 285 + } else { 286 + ref_and_mask = nbio_hf_reg->ref_and_mask_sdma0 << ring->me; 284 287 285 - amdgpu_ring_write(ring, SDMA_PKT_HEADER_OP(SDMA_OP_POLL_REGMEM) | 286 - SDMA_PKT_POLL_REGMEM_HEADER_HDP_FLUSH(1) | 287 - SDMA_PKT_POLL_REGMEM_HEADER_FUNC(3)); /* == */ 288 - amdgpu_ring_write(ring, (adev->nbio.funcs->get_hdp_flush_done_offset(adev)) << 2); 289 - amdgpu_ring_write(ring, (adev->nbio.funcs->get_hdp_flush_req_offset(adev)) << 2); 290 - amdgpu_ring_write(ring, ref_and_mask); /* reference */ 291 - amdgpu_ring_write(ring, ref_and_mask); /* mask */ 292 - amdgpu_ring_write(ring, SDMA_PKT_POLL_REGMEM_DW5_RETRY_COUNT(0xfff) | 293 - SDMA_PKT_POLL_REGMEM_DW5_INTERVAL(10)); /* retry count, poll interval */ 288 + amdgpu_ring_write(ring, SDMA_PKT_HEADER_OP(SDMA_OP_POLL_REGMEM) | 289 + SDMA_PKT_POLL_REGMEM_HEADER_HDP_FLUSH(1) | 290 + SDMA_PKT_POLL_REGMEM_HEADER_FUNC(3)); /* == */ 291 + amdgpu_ring_write(ring, (adev->nbio.funcs->get_hdp_flush_done_offset(adev)) << 2); 292 + amdgpu_ring_write(ring, (adev->nbio.funcs->get_hdp_flush_req_offset(adev)) << 2); 293 + amdgpu_ring_write(ring, ref_and_mask); /* reference */ 294 + amdgpu_ring_write(ring, ref_and_mask); /* mask */ 295 + amdgpu_ring_write(ring, SDMA_PKT_POLL_REGMEM_DW5_RETRY_COUNT(0xfff) | 296 + SDMA_PKT_POLL_REGMEM_DW5_INTERVAL(10)); /* retry count, poll interval */ 297 + } 294 298 } 295 299 296 300 /**
+7 -7
drivers/gpu/drm/amd/amdgpu/vpe_v6_1.c
··· 144 144 WREG32(vpe_get_reg_offset(vpe, j, regVPEC_CNTL), ret); 145 145 } 146 146 147 + /* setup collaborate mode */ 148 + vpe_v6_1_set_collaborate_mode(vpe, true); 149 + /* setup DPM */ 150 + if (amdgpu_vpe_configure_dpm(vpe)) 151 + dev_warn(adev->dev, "VPE failed to enable DPM\n"); 152 + 147 153 /* 148 154 * For VPE 6.1.1, still only need to add master's offset, and psp will apply it to slave as well. 149 155 * Here use instance 0 as master. ··· 165 159 adev->vpe.cmdbuf_cpu_addr[0] = f32_offset; 166 160 adev->vpe.cmdbuf_cpu_addr[1] = f32_cntl; 167 161 168 - amdgpu_vpe_psp_update_sram(adev); 169 - vpe_v6_1_set_collaborate_mode(vpe, true); 170 - amdgpu_vpe_configure_dpm(vpe); 171 - 172 - return 0; 162 + return amdgpu_vpe_psp_update_sram(adev); 173 163 } 174 164 175 165 vpe_hdr = (const struct vpe_firmware_header_v1_0 *)adev->vpe.fw->data; ··· 198 196 } 199 197 200 198 vpe_v6_1_halt(vpe, false); 201 - vpe_v6_1_set_collaborate_mode(vpe, true); 202 - amdgpu_vpe_configure_dpm(vpe); 203 199 204 200 return 0; 205 201 }
+15 -1
drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
··· 509 509 start = start_mgr << PAGE_SHIFT; 510 510 end = (last_mgr + 1) << PAGE_SHIFT; 511 511 512 + r = amdgpu_amdkfd_reserve_mem_limit(node->adev, 513 + prange->npages * PAGE_SIZE, 514 + KFD_IOC_ALLOC_MEM_FLAGS_VRAM, 515 + node->xcp ? node->xcp->id : 0); 516 + if (r) { 517 + dev_dbg(node->adev->dev, "failed to reserve VRAM, r: %ld\n", r); 518 + return -ENOSPC; 519 + } 520 + 512 521 r = svm_range_vram_node_new(node, prange, true); 513 522 if (r) { 514 523 dev_dbg(node->adev->dev, "fail %ld to alloc vram\n", r); 515 - return r; 524 + goto out; 516 525 } 517 526 ttm_res_offset = (start_mgr - prange->start + prange->offset) << PAGE_SHIFT; 518 527 ··· 554 545 svm_range_vram_node_free(prange); 555 546 } 556 547 548 + out: 549 + amdgpu_amdkfd_unreserve_mem_limit(node->adev, 550 + prange->npages * PAGE_SIZE, 551 + KFD_IOC_ALLOC_MEM_FLAGS_VRAM, 552 + node->xcp ? node->xcp->id : 0); 557 553 return r < 0 ? r : 0; 558 554 } 559 555
+8 -7
drivers/gpu/drm/amd/amdkfd/kfd_process.c
··· 1922 1922 rcu_read_lock(); 1923 1923 ef = dma_fence_get_rcu_safe(&p->ef); 1924 1924 rcu_read_unlock(); 1925 + if (!ef) 1926 + return -EINVAL; 1925 1927 1926 1928 ret = dma_fence_signal(ef); 1927 1929 dma_fence_put(ef); ··· 1951 1949 * they are responsible stopping the queues and scheduling 1952 1950 * the restore work. 1953 1951 */ 1954 - if (!signal_eviction_fence(p)) 1955 - queue_delayed_work(kfd_restore_wq, &p->restore_work, 1956 - msecs_to_jiffies(PROCESS_RESTORE_TIME_MS)); 1957 - else 1952 + if (signal_eviction_fence(p) || 1953 + mod_delayed_work(kfd_restore_wq, &p->restore_work, 1954 + msecs_to_jiffies(PROCESS_RESTORE_TIME_MS))) 1958 1955 kfd_process_restore_queues(p); 1959 1956 1960 1957 pr_debug("Finished evicting pasid 0x%x\n", p->pasid); ··· 2012 2011 if (ret) { 2013 2012 pr_debug("Failed to restore BOs of pasid 0x%x, retry after %d ms\n", 2014 2013 p->pasid, PROCESS_BACK_OFF_TIME_MS); 2015 - ret = queue_delayed_work(kfd_restore_wq, &p->restore_work, 2016 - msecs_to_jiffies(PROCESS_BACK_OFF_TIME_MS)); 2017 - WARN(!ret, "reschedule restore work failed\n"); 2014 + if (mod_delayed_work(kfd_restore_wq, &p->restore_work, 2015 + msecs_to_jiffies(PROCESS_RESTORE_TIME_MS))) 2016 + kfd_process_restore_queues(p); 2018 2017 } 2019 2018 } 2020 2019
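The switch from queue_delayed_work() to mod_delayed_work() matters for its return value: it returns false when it queued idle work and true when the work was already pending and only its timer was adjusted, which is how the code above detects a restore that is already in flight. A small sketch of that idiom; restore_wq and restore_work stand in for the driver's objects.

if (mod_delayed_work(restore_wq, &restore_work, msecs_to_jiffies(2000)))
        pr_debug("restore was already pending; timer pushed out\n");
else
        pr_debug("restore newly queued\n");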
+1 -1
drivers/gpu/drm/amd/amdkfd/kfd_svm.c
··· 3426 3426 mm, KFD_MIGRATE_TRIGGER_PREFETCH); 3427 3427 *migrated = !r; 3428 3428 3429 - return r; 3429 + return 0; 3430 3430 } 3431 3431 3432 3432 int svm_range_schedule_evict_svm_bo(struct amdgpu_amdkfd_fence *fence)
+1
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 3029 3029 dc_stream_release(dm_new_crtc_state->stream); 3030 3030 dm_new_crtc_state->stream = NULL; 3031 3031 } 3032 + dm_new_crtc_state->base.color_mgmt_changed = true; 3032 3033 } 3033 3034 3034 3035 for_each_new_plane_in_state(dm->cached_state, plane, new_plane_state, i) {
+7
drivers/gpu/drm/amd/pm/amdgpu_pm.c
··· 4261 4261 } 4262 4262 } 4263 4263 4264 + /* 4265 + * If gpu_od is the only member in the list, that means gpu_od is an 4266 + * empty directory, so remove it. 4267 + */ 4268 + if (list_is_singular(&adev->pm.od_kobj_list)) 4269 + goto err_out; 4270 + 4264 4271 return 0; 4265 4272 4266 4273 err_out:
+25
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c
··· 2294 2294 return sizeof(*gpu_metrics); 2295 2295 } 2296 2296 2297 + static void smu_v13_0_6_restore_pci_config(struct smu_context *smu) 2298 + { 2299 + struct amdgpu_device *adev = smu->adev; 2300 + int i; 2301 + 2302 + for (i = 0; i < 16; i++) 2303 + pci_write_config_dword(adev->pdev, i * 4, 2304 + adev->pdev->saved_config_space[i]); 2305 + pci_restore_msi_state(adev->pdev); 2306 + } 2307 + 2297 2308 static int smu_v13_0_6_mode2_reset(struct smu_context *smu) 2298 2309 { 2299 2310 int ret = 0, index; ··· 2325 2314 dev_dbg(smu->adev->dev, "restore config space...\n"); 2326 2315 /* Restore the config space saved during init */ 2327 2316 amdgpu_device_load_pci_state(adev->pdev); 2317 + 2318 + /* Certain platforms have switches which assign virtual BAR values to 2319 + * devices. OS uses the virtual BAR values and device behind the switch 2320 + * is assigned another BAR value. When device's config space registers 2321 + * are queried, switch returns the virtual BAR values. When mode-2 reset 2322 + * is performed, switch is unaware of it, and will continue to return 2323 + * the same virtual values to the OS. This affects 2324 + * pci_restore_config_space() API as it doesn't write the value saved if 2325 + * the current value read from config space is the same as what is 2326 + * saved. As a workaround, make sure the config space is restored 2327 + * always. 2328 + */ 2329 + if (!(adev->flags & AMD_IS_APU)) 2330 + smu_v13_0_6_restore_pci_config(smu); 2328 2331 2329 2332 dev_dbg(smu->adev->dev, "wait for reset ack\n"); 2330 2333 do {
+2 -2
drivers/gpu/drm/drm_gem_atomic_helper.c
··· 224 224 225 225 __drm_atomic_helper_plane_duplicate_state(plane, &new_shadow_plane_state->base); 226 226 227 - drm_format_conv_state_copy(&shadow_plane_state->fmtcnv_state, 228 - &new_shadow_plane_state->fmtcnv_state); 227 + drm_format_conv_state_copy(&new_shadow_plane_state->fmtcnv_state, 228 + &shadow_plane_state->fmtcnv_state); 229 229 } 230 230 EXPORT_SYMBOL(__drm_gem_duplicate_shadow_plane_state); 231 231
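The two arguments were simply swapped: like memcpy(), drm_format_conv_state_copy() takes the destination first, so the duplicated state must receive the contents of the state being copied. Annotated form of the fixed call, for illustration:

drm_format_conv_state_copy(&new_shadow_plane_state->fmtcnv_state,  /* dst */
                           &shadow_plane_state->fmtcnv_state);     /* src */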
+2 -22
drivers/gpu/drm/etnaviv/etnaviv_gpu.c
··· 164 164 *value = gpu->identity.eco_id; 165 165 break; 166 166 167 - case ETNAVIV_PARAM_GPU_NN_CORE_COUNT: 168 - *value = gpu->identity.nn_core_count; 169 - break; 170 - 171 - case ETNAVIV_PARAM_GPU_NN_MAD_PER_CORE: 172 - *value = gpu->identity.nn_mad_per_core; 173 - break; 174 - 175 - case ETNAVIV_PARAM_GPU_TP_CORE_COUNT: 176 - *value = gpu->identity.tp_core_count; 177 - break; 178 - 179 - case ETNAVIV_PARAM_GPU_ON_CHIP_SRAM_SIZE: 180 - *value = gpu->identity.on_chip_sram_size; 181 - break; 182 - 183 - case ETNAVIV_PARAM_GPU_AXI_SRAM_SIZE: 184 - *value = gpu->identity.axi_sram_size; 185 - break; 186 - 187 167 default: 188 168 DBG("%s: invalid param: %u", dev_name(gpu->dev), param); 189 169 return -EINVAL; ··· 643 663 /* Disable TX clock gating on affected core revisions. */ 644 664 if (etnaviv_is_model_rev(gpu, GC4000, 0x5222) || 645 665 etnaviv_is_model_rev(gpu, GC2000, 0x5108) || 646 - etnaviv_is_model_rev(gpu, GC2000, 0x6202) || 647 - etnaviv_is_model_rev(gpu, GC2000, 0x6203)) 666 + etnaviv_is_model_rev(gpu, GC7000, 0x6202) || 667 + etnaviv_is_model_rev(gpu, GC7000, 0x6203)) 648 668 pmc |= VIVS_PM_MODULE_CONTROLS_DISABLE_MODULE_CLOCK_GATING_TX; 649 669 650 670 /* Disable SE and RA clock gating on affected core revisions. */
-12
drivers/gpu/drm/etnaviv/etnaviv_gpu.h
··· 54 54 /* Number of Neural Network cores. */ 55 55 u32 nn_core_count; 56 56 57 - /* Number of MAD units per Neural Network core. */ 58 - u32 nn_mad_per_core; 59 - 60 - /* Number of Tensor Processing cores. */ 61 - u32 tp_core_count; 62 - 63 - /* Size in bytes of the SRAM inside the NPU. */ 64 - u32 on_chip_sram_size; 65 - 66 - /* Size in bytes of the SRAM across the AXI bus. */ 67 - u32 axi_sram_size; 68 - 69 57 /* Size of the vertex cache. */ 70 58 u32 vertex_cache_size; 71 59
-34
drivers/gpu/drm/etnaviv/etnaviv_hwdb.c
··· 17 17 .thread_count = 128, 18 18 .shader_core_count = 1, 19 19 .nn_core_count = 0, 20 - .nn_mad_per_core = 0, 21 - .tp_core_count = 0, 22 - .on_chip_sram_size = 0, 23 - .axi_sram_size = 0, 24 20 .vertex_cache_size = 8, 25 21 .vertex_output_buffer_size = 1024, 26 22 .pixel_pipes = 1, ··· 48 52 .register_max = 64, 49 53 .thread_count = 256, 50 54 .shader_core_count = 1, 51 - .nn_core_count = 0, 52 - .nn_mad_per_core = 0, 53 - .tp_core_count = 0, 54 - .on_chip_sram_size = 0, 55 - .axi_sram_size = 0, 56 55 .vertex_cache_size = 8, 57 56 .vertex_output_buffer_size = 512, 58 57 .pixel_pipes = 1, ··· 80 89 .thread_count = 512, 81 90 .shader_core_count = 2, 82 91 .nn_core_count = 0, 83 - .nn_mad_per_core = 0, 84 - .tp_core_count = 0, 85 - .on_chip_sram_size = 0, 86 - .axi_sram_size = 0, 87 92 .vertex_cache_size = 16, 88 93 .vertex_output_buffer_size = 1024, 89 94 .pixel_pipes = 1, ··· 112 125 .thread_count = 512, 113 126 .shader_core_count = 2, 114 127 .nn_core_count = 0, 115 - .nn_mad_per_core = 0, 116 - .tp_core_count = 0, 117 - .on_chip_sram_size = 0, 118 - .axi_sram_size = 0, 119 128 .vertex_cache_size = 16, 120 129 .vertex_output_buffer_size = 1024, 121 130 .pixel_pipes = 1, ··· 143 160 .register_max = 64, 144 161 .thread_count = 512, 145 162 .shader_core_count = 2, 146 - .nn_core_count = 0, 147 - .nn_mad_per_core = 0, 148 - .tp_core_count = 0, 149 - .on_chip_sram_size = 0, 150 - .axi_sram_size = 0, 151 163 .vertex_cache_size = 16, 152 164 .vertex_output_buffer_size = 1024, 153 165 .pixel_pipes = 1, ··· 175 197 .thread_count = 1024, 176 198 .shader_core_count = 4, 177 199 .nn_core_count = 0, 178 - .nn_mad_per_core = 0, 179 - .tp_core_count = 0, 180 - .on_chip_sram_size = 0, 181 - .axi_sram_size = 0, 182 200 .vertex_cache_size = 16, 183 201 .vertex_output_buffer_size = 1024, 184 202 .pixel_pipes = 2, ··· 207 233 .thread_count = 256, 208 234 .shader_core_count = 1, 209 235 .nn_core_count = 8, 210 - .nn_mad_per_core = 64, 211 - .tp_core_count = 4, 212 - .on_chip_sram_size = 524288, 213 - .axi_sram_size = 1048576, 214 236 .vertex_cache_size = 16, 215 237 .vertex_output_buffer_size = 1024, 216 238 .pixel_pipes = 1, ··· 239 269 .thread_count = 256, 240 270 .shader_core_count = 1, 241 271 .nn_core_count = 6, 242 - .nn_mad_per_core = 64, 243 - .tp_core_count = 3, 244 - .on_chip_sram_size = 262144, 245 - .axi_sram_size = 0, 246 272 .vertex_cache_size = 16, 247 273 .vertex_output_buffer_size = 1024, 248 274 .pixel_pipes = 1,
-1
drivers/gpu/drm/gma500/Makefile
··· 34 34 psb_intel_lvds.o \ 35 35 psb_intel_modes.o \ 36 36 psb_intel_sdvo.o \ 37 - psb_lid.o \ 38 37 psb_irq.o 39 38 40 39 gma500_gfx-$(CONFIG_ACPI) += opregion.o
+1 -4
drivers/gpu/drm/gma500/psb_device.c
··· 73 73 } 74 74 75 75 psb_intel_lvds_set_brightness(dev, PSB_MAX_BRIGHTNESS); 76 - /* This must occur after the backlight is properly initialised */ 77 - psb_lid_timer_init(dev_priv); 76 + 78 77 return 0; 79 78 } 80 79 ··· 258 259 259 260 static void psb_chip_teardown(struct drm_device *dev) 260 261 { 261 - struct drm_psb_private *dev_priv = to_drm_psb_private(dev); 262 - psb_lid_timer_takedown(dev_priv); 263 262 gma_intel_teardown_gmbus(dev); 264 263 } 265 264
-9
drivers/gpu/drm/gma500/psb_drv.h
··· 162 162 #define PSB_NUM_VBLANKS 2 163 163 164 164 #define PSB_WATCHDOG_DELAY (HZ * 2) 165 - #define PSB_LID_DELAY (HZ / 10) 166 165 167 166 #define PSB_MAX_BRIGHTNESS 100 168 167 ··· 490 491 /* Hotplug handling */ 491 492 struct work_struct hotplug_work; 492 493 493 - /* LID-Switch */ 494 - spinlock_t lid_lock; 495 - struct timer_list lid_timer; 496 494 struct psb_intel_opregion opregion; 497 - u32 lid_last_state; 498 495 499 496 /* Watchdog */ 500 497 uint32_t apm_reg; ··· 585 590 586 591 int i2c_bus; /* I2C bus identifier for Moorestown */ 587 592 }; 588 - 589 - /* psb_lid.c */ 590 - extern void psb_lid_timer_init(struct drm_psb_private *dev_priv); 591 - extern void psb_lid_timer_takedown(struct drm_psb_private *dev_priv); 592 593 593 594 /* modesetting */ 594 595 extern void psb_modeset_init(struct drm_device *dev);
-80
drivers/gpu/drm/gma500/psb_lid.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-only 2 - /************************************************************************** 3 - * Copyright (c) 2007, Intel Corporation. 4 - * 5 - * Authors: Thomas Hellstrom <thomas-at-tungstengraphics-dot-com> 6 - **************************************************************************/ 7 - 8 - #include <linux/spinlock.h> 9 - 10 - #include "psb_drv.h" 11 - #include "psb_intel_reg.h" 12 - #include "psb_reg.h" 13 - 14 - static void psb_lid_timer_func(struct timer_list *t) 15 - { 16 - struct drm_psb_private *dev_priv = from_timer(dev_priv, t, lid_timer); 17 - struct drm_device *dev = (struct drm_device *)&dev_priv->dev; 18 - struct timer_list *lid_timer = &dev_priv->lid_timer; 19 - unsigned long irq_flags; 20 - u32 __iomem *lid_state = dev_priv->opregion.lid_state; 21 - u32 pp_status; 22 - 23 - if (readl(lid_state) == dev_priv->lid_last_state) 24 - goto lid_timer_schedule; 25 - 26 - if ((readl(lid_state)) & 0x01) { 27 - /*lid state is open*/ 28 - REG_WRITE(PP_CONTROL, REG_READ(PP_CONTROL) | POWER_TARGET_ON); 29 - do { 30 - pp_status = REG_READ(PP_STATUS); 31 - } while ((pp_status & PP_ON) == 0 && 32 - (pp_status & PP_SEQUENCE_MASK) != 0); 33 - 34 - if (REG_READ(PP_STATUS) & PP_ON) { 35 - /*FIXME: should be backlight level before*/ 36 - psb_intel_lvds_set_brightness(dev, 100); 37 - } else { 38 - DRM_DEBUG("LVDS panel never powered up"); 39 - return; 40 - } 41 - } else { 42 - psb_intel_lvds_set_brightness(dev, 0); 43 - 44 - REG_WRITE(PP_CONTROL, REG_READ(PP_CONTROL) & ~POWER_TARGET_ON); 45 - do { 46 - pp_status = REG_READ(PP_STATUS); 47 - } while ((pp_status & PP_ON) == 0); 48 - } 49 - dev_priv->lid_last_state = readl(lid_state); 50 - 51 - lid_timer_schedule: 52 - spin_lock_irqsave(&dev_priv->lid_lock, irq_flags); 53 - if (!timer_pending(lid_timer)) { 54 - lid_timer->expires = jiffies + PSB_LID_DELAY; 55 - add_timer(lid_timer); 56 - } 57 - spin_unlock_irqrestore(&dev_priv->lid_lock, irq_flags); 58 - } 59 - 60 - void psb_lid_timer_init(struct drm_psb_private *dev_priv) 61 - { 62 - struct timer_list *lid_timer = &dev_priv->lid_timer; 63 - unsigned long irq_flags; 64 - 65 - spin_lock_init(&dev_priv->lid_lock); 66 - spin_lock_irqsave(&dev_priv->lid_lock, irq_flags); 67 - 68 - timer_setup(lid_timer, psb_lid_timer_func, 0); 69 - 70 - lid_timer->expires = jiffies + PSB_LID_DELAY; 71 - 72 - add_timer(lid_timer); 73 - spin_unlock_irqrestore(&dev_priv->lid_lock, irq_flags); 74 - } 75 - 76 - void psb_lid_timer_takedown(struct drm_psb_private *dev_priv) 77 - { 78 - del_timer_sync(&dev_priv->lid_timer); 79 - } 80 -
+3 -1
drivers/gpu/drm/xe/xe_gt.c
··· 378 378 err); 379 379 380 380 /* Initialize CCS mode sysfs after early initialization of HW engines */ 381 - xe_gt_ccs_mode_sysfs_init(gt); 381 + err = xe_gt_ccs_mode_sysfs_init(gt); 382 + if (err) 383 + goto err_force_wake; 382 384 383 385 /* 384 386 * Stash hardware-reported version. Since this register does not exist
+7 -12
drivers/gpu/drm/xe/xe_gt_ccs_mode.c
··· 167 167 * and it is expected that there are no open drm clients while doing so. 168 168 * The number of available compute slices is exposed to user through a per-gt 169 169 * 'num_cslices' sysfs interface. 170 + * 171 + * Returns: Returns error value for failure and 0 for success. 170 172 */ 171 - void xe_gt_ccs_mode_sysfs_init(struct xe_gt *gt) 173 + int xe_gt_ccs_mode_sysfs_init(struct xe_gt *gt) 172 174 { 173 175 struct xe_device *xe = gt_to_xe(gt); 174 176 int err; 175 177 176 178 if (!xe_gt_ccs_mode_enabled(gt)) 177 - return; 179 + return 0; 178 180 179 181 err = sysfs_create_files(gt->sysfs, gt_ccs_mode_attrs); 180 - if (err) { 181 - drm_warn(&xe->drm, "Sysfs creation for ccs_mode failed err: %d\n", err); 182 - return; 183 - } 182 + if (err) 183 + return err; 184 184 185 - err = drmm_add_action_or_reset(&xe->drm, xe_gt_ccs_mode_sysfs_fini, gt); 186 - if (err) { 187 - sysfs_remove_files(gt->sysfs, gt_ccs_mode_attrs); 188 - drm_warn(&xe->drm, "%s: drmm_add_action_or_reset failed, err: %d\n", 189 - __func__, err); 190 - } 185 + return drmm_add_action_or_reset(&xe->drm, xe_gt_ccs_mode_sysfs_fini, gt); 191 186 }
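Both this hunk and the xe_huc.c one below lean on the same property: drmm_add_action_or_reset() invokes the release action itself when registration fails, so the caller can return its error without manual unwinding. Sketch of the resulting shape; attrs stands in for the attribute array.

err = sysfs_create_files(gt->sysfs, attrs);
if (err)
        return err;

/* on failure this runs the fini action itself, removing the files */
return drmm_add_action_or_reset(&xe->drm, xe_gt_ccs_mode_sysfs_fini, gt);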
+1 -1
drivers/gpu/drm/xe/xe_gt_ccs_mode.h
··· 12 12 #include "xe_platform_types.h" 13 13 14 14 void xe_gt_apply_ccs_mode(struct xe_gt *gt); 15 - void xe_gt_ccs_mode_sysfs_init(struct xe_gt *gt); 15 + int xe_gt_ccs_mode_sysfs_init(struct xe_gt *gt); 16 16 17 17 static inline bool xe_gt_ccs_mode_enabled(const struct xe_gt *gt) 18 18 {
+2 -2
drivers/gpu/drm/xe/xe_guc_ct.c
··· 1054 1054 adj_len); 1055 1055 break; 1056 1056 case XE_GUC_ACTION_GUC2PF_RELAY_FROM_VF: 1057 - ret = xe_guc_relay_process_guc2pf(&guc->relay, payload, adj_len); 1057 + ret = xe_guc_relay_process_guc2pf(&guc->relay, hxg, hxg_len); 1058 1058 break; 1059 1059 case XE_GUC_ACTION_GUC2VF_RELAY_FROM_PF: 1060 - ret = xe_guc_relay_process_guc2vf(&guc->relay, payload, adj_len); 1060 + ret = xe_guc_relay_process_guc2vf(&guc->relay, hxg, hxg_len); 1061 1061 break; 1062 1062 default: 1063 1063 drm_err(&xe->drm, "unexpected action 0x%04x\n", action);
+1 -8
drivers/gpu/drm/xe/xe_huc.c
··· 53 53 struct xe_gt *gt = huc_to_gt(huc); 54 54 struct xe_device *xe = gt_to_xe(gt); 55 55 struct xe_bo *bo; 56 - int err; 57 56 58 57 /* we use a single object for both input and output */ 59 58 bo = xe_bo_create_pin_map(xe, gt_to_tile(gt), NULL, ··· 65 66 66 67 huc->gsc_pkt = bo; 67 68 68 - err = drmm_add_action_or_reset(&xe->drm, free_gsc_pkt, huc); 69 - if (err) { 70 - free_gsc_pkt(&xe->drm, huc); 71 - return err; 72 - } 73 - 74 - return 0; 69 + return drmm_add_action_or_reset(&xe->drm, free_gsc_pkt, huc); 75 70 } 76 71 77 72 int xe_huc_init(struct xe_huc *huc)
+6 -6
drivers/i2c/i2c-core-base.c
··· 2200 2200 * Returns negative errno, else the number of messages executed. 2201 2201 * 2202 2202 * Adapter lock must be held when calling this function. No debug logging 2203 - * takes place. adap->algo->master_xfer existence isn't checked. 2203 + * takes place. 2204 2204 */ 2205 2205 int __i2c_transfer(struct i2c_adapter *adap, struct i2c_msg *msgs, int num) 2206 2206 { 2207 2207 unsigned long orig_jiffies; 2208 2208 int ret, try; 2209 + 2210 + if (!adap->algo->master_xfer) { 2211 + dev_dbg(&adap->dev, "I2C level transfers not supported\n"); 2212 + return -EOPNOTSUPP; 2213 + } 2209 2214 2210 2215 if (WARN_ON(!msgs || num < 1)) 2211 2216 return -EINVAL; ··· 2277 2272 int i2c_transfer(struct i2c_adapter *adap, struct i2c_msg *msgs, int num) 2278 2273 { 2279 2274 int ret; 2280 - 2281 - if (!adap->algo->master_xfer) { 2282 - dev_dbg(&adap->dev, "I2C level transfers not supported\n"); 2283 - return -EOPNOTSUPP; 2284 - } 2285 2275 2286 2276 /* REVISIT the fault reporting model here is weak: 2287 2277 *
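Moving the master_xfer check into __i2c_transfer() means every path gets it, including drivers that take the adapter lock themselves and call the unlocked variant directly. A hedged sketch of that calling pattern:

i2c_lock_bus(adap, I2C_LOCK_SEGMENT);
ret = __i2c_transfer(adap, msgs, num);   /* now returns -EOPNOTSUPP itself */
i2c_unlock_bus(adap, I2C_LOCK_SEGMENT);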
+2 -7
drivers/irqchip/irq-gic-v3-its.c
··· 4567 4567 irqd_set_resend_when_in_progress(irq_get_irq_data(virq + i)); 4568 4568 } 4569 4569 4570 - if (err) { 4571 - if (i > 0) 4572 - its_vpe_irq_domain_free(domain, virq, i); 4573 - 4574 - its_lpi_free(bitmap, base, nr_ids); 4575 - its_free_prop_table(vprop_page); 4576 - } 4570 + if (err) 4571 + its_vpe_irq_domain_free(domain, virq, i); 4577 4572 4578 4573 return err; 4579 4574 }
+1 -1
drivers/md/dm-vdo/murmurhash3.c
··· 137 137 break; 138 138 default: 139 139 break; 140 - }; 140 + } 141 141 } 142 142 /* finalization */ 143 143
+8 -2
drivers/md/dm.c
··· 765 765 return td; 766 766 767 767 out_blkdev_put: 768 - fput(bdev_file); 768 + __fput_sync(bdev_file); 769 769 out_free_td: 770 770 kfree(td); 771 771 return ERR_PTR(r); ··· 778 778 { 779 779 if (md->disk->slave_dir) 780 780 bd_unlink_disk_holder(td->dm_dev.bdev, md->disk); 781 - fput(td->dm_dev.bdev_file); 781 + 782 + /* Leverage async fput() if DMF_DEFERRED_REMOVE set */ 783 + if (unlikely(test_bit(DMF_DEFERRED_REMOVE, &md->flags))) 784 + fput(td->dm_dev.bdev_file); 785 + else 786 + __fput_sync(td->dm_dev.bdev_file); 787 + 782 788 put_dax(td->dm_dev.dax_dev); 783 789 list_del(&td->list); 784 790 kfree(td);
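The distinction being exploited here: fput() defers the final release of the file to task work, while __fput_sync() completes it before returning, which is what a non-deferred device teardown wants. Illustrative shape only; the condition is hypothetical.

if (teardown_may_defer)         /* e.g. DMF_DEFERRED_REMOVE above */
        fput(bdev_file);        /* final release runs later, asynchronously */
else
        __fput_sync(bdev_file); /* ->release() completes before we return */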
+9 -9
drivers/misc/eeprom/at24.c
··· 758 758 } 759 759 pm_runtime_enable(dev); 760 760 761 - at24->nvmem = devm_nvmem_register(dev, &nvmem_config); 762 - if (IS_ERR(at24->nvmem)) { 763 - pm_runtime_disable(dev); 764 - if (!pm_runtime_status_suspended(dev)) 765 - regulator_disable(at24->vcc_reg); 766 - return dev_err_probe(dev, PTR_ERR(at24->nvmem), 767 - "failed to register nvmem\n"); 768 - } 769 - 770 761 /* 771 762 * Perform a one-byte test read to verify that the chip is functional, 772 763 * unless powering on the device is to be avoided during probe (i.e. ··· 771 780 regulator_disable(at24->vcc_reg); 772 781 return -ENODEV; 773 782 } 783 + } 784 + 785 + at24->nvmem = devm_nvmem_register(dev, &nvmem_config); 786 + if (IS_ERR(at24->nvmem)) { 787 + pm_runtime_disable(dev); 788 + if (!pm_runtime_status_suspended(dev)) 789 + regulator_disable(at24->vcc_reg); 790 + return dev_err_probe(dev, PTR_ERR(at24->nvmem), 791 + "failed to register nvmem\n"); 774 792 } 775 793 776 794 /* If this a SPD EEPROM, probe for DDR3 thermal sensor */
+1
drivers/mmc/host/moxart-mmc.c
··· 300 300 remain = sgm->length; 301 301 if (remain > host->data_len) 302 302 remain = host->data_len; 303 + sgm->consumed = 0; 303 304 304 305 if (data->flags & MMC_DATA_WRITE) { 305 306 while (remain > 0) {
+15 -1
drivers/mmc/host/sdhci-msm.c
··· 2694 2694 struct sdhci_host *host = dev_get_drvdata(dev); 2695 2695 struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); 2696 2696 struct sdhci_msm_host *msm_host = sdhci_pltfm_priv(pltfm_host); 2697 + unsigned long flags; 2698 + 2699 + spin_lock_irqsave(&host->lock, flags); 2700 + host->runtime_suspended = true; 2701 + spin_unlock_irqrestore(&host->lock, flags); 2697 2702 2698 2703 /* Drop the performance vote */ 2699 2704 dev_pm_opp_set_rate(dev, 0); ··· 2713 2708 struct sdhci_host *host = dev_get_drvdata(dev); 2714 2709 struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); 2715 2710 struct sdhci_msm_host *msm_host = sdhci_pltfm_priv(pltfm_host); 2711 + unsigned long flags; 2716 2712 int ret; 2717 2713 2718 2714 ret = clk_bulk_prepare_enable(ARRAY_SIZE(msm_host->bulk_clks), ··· 2732 2726 2733 2727 dev_pm_opp_set_rate(dev, msm_host->clk_rate); 2734 2728 2735 - return sdhci_msm_ice_resume(msm_host); 2729 + ret = sdhci_msm_ice_resume(msm_host); 2730 + if (ret) 2731 + return ret; 2732 + 2733 + spin_lock_irqsave(&host->lock, flags); 2734 + host->runtime_suspended = false; 2735 + spin_unlock_irqrestore(&host->lock, flags); 2736 + 2737 + return ret; 2736 2738 } 2737 2739 2738 2740 static const struct dev_pm_ops sdhci_msm_pm_ops = {
+1
drivers/mmc/host/sdhci-of-dwcmshc.c
··· 626 626 627 627 /* perform tuning */ 628 628 sdhci_start_tuning(host); 629 + host->tuning_loop_count = 128; 629 630 host->tuning_err = __sdhci_execute_tuning(host, opcode); 630 631 if (host->tuning_err) { 631 632 /* disable auto-tuning upon tuning error */
+1 -1
drivers/mtd/mtdcore.c
··· 900 900 config.name = compatible; 901 901 config.id = NVMEM_DEVID_AUTO; 902 902 config.owner = THIS_MODULE; 903 - config.add_legacy_fixed_of_cells = true; 903 + config.add_legacy_fixed_of_cells = !mtd_type_is_nand(mtd); 904 904 config.type = NVMEM_TYPE_OTP; 905 905 config.root_only = true; 906 906 config.ignore_wp = true;
+1 -1
drivers/mtd/nand/raw/brcmnand/brcmnand.c
··· 857 857 struct brcmnand_soc *soc = ctrl->soc; 858 858 int i; 859 859 860 - if (soc->read_data_bus) { 860 + if (soc && soc->read_data_bus) { 861 861 soc->read_data_bus(soc, flash_cache, buffer, fc_words); 862 862 } else { 863 863 for (i = 0; i < fc_words; i++)
+2 -2
drivers/mtd/nand/raw/diskonchip.c
··· 53 53 0xe8000, 0xea000, 0xec000, 0xee000, 54 54 #endif 55 55 #endif 56 - 0xffffffff }; 56 + }; 57 57 58 58 static struct mtd_info *doclist = NULL; 59 59 ··· 1554 1554 if (ret < 0) 1555 1555 return ret; 1556 1556 } else { 1557 - for (i = 0; (doc_locations[i] != 0xffffffff); i++) { 1557 + for (i = 0; i < ARRAY_SIZE(doc_locations); i++) { 1558 1558 doc_probe(doc_locations[i]); 1559 1559 } 1560 1560 }
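Dropping the 0xffffffff terminator in favour of ARRAY_SIZE() removes both the sentinel slot and any risk of walking past an unterminated array. The idiom in a standalone, runnable form; the probe addresses are example values.

#include <stdio.h>

/* ARRAY_SIZE as used in the kernel; valid only on true arrays. */
#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

static const unsigned long doc_locations_demo[] = {
        0xc8000, 0xca000, 0xcc000,      /* no 0xffffffff sentinel needed */
};

int main(void)
{
        for (size_t i = 0; i < ARRAY_SIZE(doc_locations_demo); i++)
                printf("probe 0x%lx\n", doc_locations_demo[i]);
        return 0;
}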
+3 -4
drivers/mtd/nand/raw/qcom_nandc.c
··· 2815 2815 host->cfg0_raw & ~(7 << CW_PER_PAGE)); 2816 2816 nandc_set_reg(chip, NAND_DEV0_CFG1, host->cfg1_raw); 2817 2817 instrs = 3; 2818 - } else { 2818 + } else if (q_op.cmd_reg != OP_RESET_DEVICE) { 2819 2819 return 0; 2820 2820 } 2821 2821 ··· 2830 2830 nandc_set_reg(chip, NAND_EXEC_CMD, 1); 2831 2831 2832 2832 write_reg_dma(nandc, NAND_FLASH_CMD, instrs, NAND_BAM_NEXT_SGL); 2833 - (q_op.cmd_reg == OP_BLOCK_ERASE) ? write_reg_dma(nandc, NAND_DEV0_CFG0, 2834 - 2, NAND_BAM_NEXT_SGL) : read_reg_dma(nandc, 2835 - NAND_FLASH_STATUS, 1, NAND_BAM_NEXT_SGL); 2833 + if (q_op.cmd_reg == OP_BLOCK_ERASE) 2834 + write_reg_dma(nandc, NAND_DEV0_CFG0, 2, NAND_BAM_NEXT_SGL); 2836 2835 2837 2836 write_reg_dma(nandc, NAND_EXEC_CMD, 1, NAND_BAM_NEXT_SGL); 2838 2837 read_reg_dma(nandc, NAND_FLASH_STATUS, 1, NAND_BAM_NEXT_SGL);
+2 -2
drivers/net/dsa/mv88e6xxx/chip.c
··· 5758 5758 .prod_num = MV88E6XXX_PORT_SWITCH_ID_PROD_6141, 5759 5759 .family = MV88E6XXX_FAMILY_6341, 5760 5760 .name = "Marvell 88E6141", 5761 - .num_databases = 4096, 5761 + .num_databases = 256, 5762 5762 .num_macs = 2048, 5763 5763 .num_ports = 6, 5764 5764 .num_internal_phys = 5, ··· 6217 6217 .prod_num = MV88E6XXX_PORT_SWITCH_ID_PROD_6341, 6218 6218 .family = MV88E6XXX_FAMILY_6341, 6219 6219 .name = "Marvell 88E6341", 6220 - .num_databases = 4096, 6220 + .num_databases = 256, 6221 6221 .num_macs = 2048, 6222 6222 .num_internal_phys = 5, 6223 6223 .num_ports = 6,
+14 -2
drivers/net/ethernet/broadcom/genet/bcmgenet.c
··· 2 2 /* 3 3 * Broadcom GENET (Gigabit Ethernet) controller driver 4 4 * 5 - * Copyright (c) 2014-2020 Broadcom 5 + * Copyright (c) 2014-2024 Broadcom 6 6 */ 7 7 8 8 #define pr_fmt(fmt) "bcmgenet: " fmt ··· 2467 2467 { 2468 2468 u32 reg; 2469 2469 2470 + spin_lock_bh(&priv->reg_lock); 2470 2471 reg = bcmgenet_umac_readl(priv, UMAC_CMD); 2471 - if (reg & CMD_SW_RESET) 2472 + if (reg & CMD_SW_RESET) { 2473 + spin_unlock_bh(&priv->reg_lock); 2472 2474 return; 2475 + } 2473 2476 if (enable) 2474 2477 reg |= mask; 2475 2478 else 2476 2479 reg &= ~mask; 2477 2480 bcmgenet_umac_writel(priv, reg, UMAC_CMD); 2481 + spin_unlock_bh(&priv->reg_lock); 2478 2482 2479 2483 /* UniMAC stops on a packet boundary, wait for a full-size packet 2480 2484 * to be processed ··· 2494 2490 udelay(10); 2495 2491 2496 2492 /* issue soft reset and disable MAC while updating its registers */ 2493 + spin_lock_bh(&priv->reg_lock); 2497 2494 bcmgenet_umac_writel(priv, CMD_SW_RESET, UMAC_CMD); 2498 2495 udelay(2); 2496 + spin_unlock_bh(&priv->reg_lock); 2499 2497 } 2500 2498 2501 2499 static void bcmgenet_intr_disable(struct bcmgenet_priv *priv) ··· 3340 3334 struct bcmgenet_priv *priv = netdev_priv(dev); 3341 3335 3342 3336 /* Start the network engine */ 3337 + netif_addr_lock_bh(dev); 3343 3338 bcmgenet_set_rx_mode(dev); 3339 + netif_addr_unlock_bh(dev); 3344 3340 bcmgenet_enable_rx_napi(priv); 3345 3341 3346 3342 umac_enable_set(priv, CMD_TX_EN | CMD_RX_EN, true); ··· 3603 3595 * 3. The number of filters needed exceeds the number filters 3604 3596 * supported by the hardware. 3605 3597 */ 3598 + spin_lock(&priv->reg_lock); 3606 3599 reg = bcmgenet_umac_readl(priv, UMAC_CMD); 3607 3600 if ((dev->flags & (IFF_PROMISC | IFF_ALLMULTI)) || 3608 3601 (nfilter > MAX_MDF_FILTER)) { 3609 3602 reg |= CMD_PROMISC; 3610 3603 bcmgenet_umac_writel(priv, reg, UMAC_CMD); 3604 + spin_unlock(&priv->reg_lock); 3611 3605 bcmgenet_umac_writel(priv, 0, UMAC_MDF_CTRL); 3612 3606 return; 3613 3607 } else { 3614 3608 reg &= ~CMD_PROMISC; 3615 3609 bcmgenet_umac_writel(priv, reg, UMAC_CMD); 3610 + spin_unlock(&priv->reg_lock); 3616 3611 } 3617 3612 3618 3613 /* update MDF filter */ ··· 4014 4003 goto err; 4015 4004 } 4016 4005 4006 + spin_lock_init(&priv->reg_lock); 4017 4007 spin_lock_init(&priv->lock); 4018 4008 4019 4009 /* Set default pause parameters */
+3 -1
drivers/net/ethernet/broadcom/genet/bcmgenet.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0-only */ 2 2 /* 3 - * Copyright (c) 2014-2020 Broadcom 3 + * Copyright (c) 2014-2024 Broadcom 4 4 */ 5 5 6 6 #ifndef __BCMGENET_H__ ··· 573 573 /* device context */ 574 574 struct bcmgenet_priv { 575 575 void __iomem *base; 576 + /* reg_lock: lock to serialize access to shared registers */ 577 + spinlock_t reg_lock; 576 578 enum bcmgenet_version version; 577 579 struct net_device *dev; 578 580
+7 -1
drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c
··· 2 2 /* 3 3 * Broadcom GENET (Gigabit Ethernet) Wake-on-LAN support 4 4 * 5 - * Copyright (c) 2014-2020 Broadcom 5 + * Copyright (c) 2014-2024 Broadcom 6 6 */ 7 7 8 8 #define pr_fmt(fmt) "bcmgenet_wol: " fmt ··· 151 151 } 152 152 153 153 /* Can't suspend with WoL if MAC is still in reset */ 154 + spin_lock_bh(&priv->reg_lock); 154 155 reg = bcmgenet_umac_readl(priv, UMAC_CMD); 155 156 if (reg & CMD_SW_RESET) 156 157 reg &= ~CMD_SW_RESET; ··· 159 158 /* disable RX */ 160 159 reg &= ~CMD_RX_EN; 161 160 bcmgenet_umac_writel(priv, reg, UMAC_CMD); 161 + spin_unlock_bh(&priv->reg_lock); 162 162 mdelay(10); 163 163 164 164 if (priv->wolopts & (WAKE_MAGIC | WAKE_MAGICSECURE)) { ··· 205 203 } 206 204 207 205 /* Enable CRC forward */ 206 + spin_lock_bh(&priv->reg_lock); 208 207 reg = bcmgenet_umac_readl(priv, UMAC_CMD); 209 208 priv->crc_fwd_en = 1; 210 209 reg |= CMD_CRC_FWD; ··· 213 210 /* Receiver must be enabled for WOL MP detection */ 214 211 reg |= CMD_RX_EN; 215 212 bcmgenet_umac_writel(priv, reg, UMAC_CMD); 213 + spin_unlock_bh(&priv->reg_lock); 216 214 217 215 reg = UMAC_IRQ_MPD_R; 218 216 if (hfb_enable) ··· 260 256 } 261 257 262 258 /* Disable CRC Forward */ 259 + spin_lock_bh(&priv->reg_lock); 263 260 reg = bcmgenet_umac_readl(priv, UMAC_CMD); 264 261 reg &= ~CMD_CRC_FWD; 265 262 bcmgenet_umac_writel(priv, reg, UMAC_CMD); 263 + spin_unlock_bh(&priv->reg_lock); 266 264 }
+5 -1
drivers/net/ethernet/broadcom/genet/bcmmii.c
··· 2 2 /* 3 3 * Broadcom GENET MDIO routines 4 4 * 5 - * Copyright (c) 2014-2017 Broadcom 5 + * Copyright (c) 2014-2024 Broadcom 6 6 */ 7 7 8 8 #include <linux/acpi.h> ··· 76 76 reg |= RGMII_LINK; 77 77 bcmgenet_ext_writel(priv, reg, EXT_RGMII_OOB_CTRL); 78 78 79 + spin_lock_bh(&priv->reg_lock); 79 80 reg = bcmgenet_umac_readl(priv, UMAC_CMD); 80 81 reg &= ~((CMD_SPEED_MASK << CMD_SPEED_SHIFT) | 81 82 CMD_HD_EN | ··· 89 88 reg |= CMD_TX_EN | CMD_RX_EN; 90 89 } 91 90 bcmgenet_umac_writel(priv, reg, UMAC_CMD); 91 + spin_unlock_bh(&priv->reg_lock); 92 92 93 93 active = phy_init_eee(phydev, 0) >= 0; 94 94 bcmgenet_eee_enable_set(dev, ··· 277 275 * block for the interface to work, unconditionally clear the 278 276 * Out-of-band disable since we do not need it. 279 277 */ 278 + mutex_lock(&phydev->lock); 280 279 reg = bcmgenet_ext_readl(priv, EXT_RGMII_OOB_CTRL); 281 280 reg &= ~OOB_DISABLE; 282 281 if (priv->ext_phy) { ··· 289 286 reg |= RGMII_MODE_EN; 290 287 } 291 288 bcmgenet_ext_writel(priv, reg, EXT_RGMII_OOB_CTRL); 289 + mutex_unlock(&phydev->lock); 292 290 293 291 if (init) 294 292 dev_info(kdev, "configuring instance for %s\n", phy_name);
+2 -2
drivers/net/ethernet/brocade/bna/bnad_debugfs.c
··· 312 312 void *kern_buf; 313 313 314 314 /* Copy the user space buf */ 315 - kern_buf = memdup_user(buf, nbytes); 315 + kern_buf = memdup_user_nul(buf, nbytes); 316 316 if (IS_ERR(kern_buf)) 317 317 return PTR_ERR(kern_buf); 318 318 ··· 372 372 void *kern_buf; 373 373 374 374 /* Copy the user space buf */ 375 - kern_buf = memdup_user(buf, nbytes); 375 + kern_buf = memdup_user_nul(buf, nbytes); 376 376 if (IS_ERR(kern_buf)) 377 377 return PTR_ERR(kern_buf); 378 378
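This hunk and the ice_debugfs.c and rvu_debugfs.c hunks below fix the same class of bug: memdup_user() copies exactly nbytes with no terminator, so running string parsers over the buffer can read past the end; memdup_user_nul() allocates one extra byte and NUL-terminates. A sketch of the safe handler shape; demo_write() is illustrative and would need <linux/fs.h>, <linux/slab.h> and <linux/string.h>.

static ssize_t demo_write(struct file *file, const char __user *buf,
                          size_t count, loff_t *ppos)
{
        char *kbuf = memdup_user_nul(buf, count);  /* kbuf[count] == '\0' */

        if (IS_ERR(kbuf))
                return PTR_ERR(kbuf);
        /* sscanf()/strchr() over kbuf are now bounded */
        kfree(kbuf);
        return count;
}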
+3 -3
drivers/net/ethernet/chelsio/cxgb4/sge.c
··· 2670 2670 lb->loopback = 1; 2671 2671 2672 2672 q = &adap->sge.ethtxq[pi->first_qset]; 2673 - __netif_tx_lock(q->txq, smp_processor_id()); 2673 + __netif_tx_lock_bh(q->txq); 2674 2674 2675 2675 reclaim_completed_tx(adap, &q->q, -1, true); 2676 2676 credits = txq_avail(&q->q) - ndesc; 2677 2677 if (unlikely(credits < 0)) { 2678 - __netif_tx_unlock(q->txq); 2678 + __netif_tx_unlock_bh(q->txq); 2679 2679 return -ENOMEM; 2680 2680 } 2681 2681 ··· 2710 2710 init_completion(&lb->completion); 2711 2711 txq_advance(&q->q, ndesc); 2712 2712 cxgb4_ring_tx_db(adap, &q->q, ndesc); 2713 - __netif_tx_unlock(q->txq); 2713 + __netif_tx_unlock_bh(q->txq); 2714 2714 2715 2715 /* wait for the pkt to return */ 2716 2716 ret = wait_for_completion_timeout(&lb->completion, 10 * HZ);
+4 -4
drivers/net/ethernet/intel/e1000e/phy.c
··· 157 157 * the lower time out 158 158 */ 159 159 for (i = 0; i < (E1000_GEN_POLL_TIMEOUT * 3); i++) { 160 - usleep_range(50, 60); 160 + udelay(50); 161 161 mdic = er32(MDIC); 162 162 if (mdic & E1000_MDIC_READY) 163 163 break; ··· 181 181 * reading duplicate data in the next MDIC transaction. 182 182 */ 183 183 if (hw->mac.type == e1000_pch2lan) 184 - usleep_range(100, 150); 184 + udelay(100); 185 185 186 186 if (success) { 187 187 *data = (u16)mdic; ··· 237 237 * the lower time out 238 238 */ 239 239 for (i = 0; i < (E1000_GEN_POLL_TIMEOUT * 3); i++) { 240 - usleep_range(50, 60); 240 + udelay(50); 241 241 mdic = er32(MDIC); 242 242 if (mdic & E1000_MDIC_READY) 243 243 break; ··· 261 261 * reading duplicate data in the next MDIC transaction. 262 262 */ 263 263 if (hw->mac.type == e1000_pch2lan) 264 - usleep_range(100, 150); 264 + udelay(100); 265 265 266 266 if (success) 267 267 return 0;
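For context on the swap: usleep_range() puts the caller to sleep, with the actual wakeup anywhere in the range plus timer slack, while udelay() busy-waits for a tightly bounded interval and is also safe where sleeping is not allowed. Side-by-side, as a reminder of the semantics rather than the driver's code:

udelay(50);             /* busy-waits ~50 µs; no scheduling involved */
usleep_range(50, 60);   /* sleeps; process context only, wakeup may be late */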
+4 -4
drivers/net/ethernet/intel/ice/ice_debugfs.c
··· 171 171 if (*ppos != 0 || count > 8) 172 172 return -EINVAL; 173 173 174 - cmd_buf = memdup_user(buf, count); 174 + cmd_buf = memdup_user_nul(buf, count); 175 175 if (IS_ERR(cmd_buf)) 176 176 return PTR_ERR(cmd_buf); 177 177 ··· 257 257 if (*ppos != 0 || count > 4) 258 258 return -EINVAL; 259 259 260 - cmd_buf = memdup_user(buf, count); 260 + cmd_buf = memdup_user_nul(buf, count); 261 261 if (IS_ERR(cmd_buf)) 262 262 return PTR_ERR(cmd_buf); 263 263 ··· 332 332 if (*ppos != 0 || count > 2) 333 333 return -EINVAL; 334 334 335 - cmd_buf = memdup_user(buf, count); 335 + cmd_buf = memdup_user_nul(buf, count); 336 336 if (IS_ERR(cmd_buf)) 337 337 return PTR_ERR(cmd_buf); 338 338 ··· 428 428 if (*ppos != 0 || count > 5) 429 429 return -EINVAL; 430 430 431 - cmd_buf = memdup_user(buf, count); 431 + cmd_buf = memdup_user_nul(buf, count); 432 432 if (IS_ERR(cmd_buf)) 433 433 return PTR_ERR(cmd_buf); 434 434
+1 -3
drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
··· 999 999 u16 pcifunc; 1000 1000 int ret, lf; 1001 1001 1002 - cmd_buf = memdup_user(buffer, count + 1); 1002 + cmd_buf = memdup_user_nul(buffer, count); 1003 1003 if (IS_ERR(cmd_buf)) 1004 1004 return -ENOMEM; 1005 - 1006 - cmd_buf[count] = '\0'; 1007 1005 1008 1006 cmd_buf_tmp = strchr(cmd_buf, '\n'); 1009 1007 if (cmd_buf_tmp) {
+8 -6
drivers/net/ethernet/qlogic/qede/qede_filter.c
··· 1868 1868 struct flow_cls_offload *f) 1869 1869 { 1870 1870 struct qede_arfs_fltr_node *n; 1871 - int min_hlen, rc = -EINVAL; 1872 1871 struct qede_arfs_tuple t; 1872 + int min_hlen, rc; 1873 1873 1874 1874 __qede_lock(edev); 1875 1875 ··· 1879 1879 } 1880 1880 1881 1881 /* parse flower attribute and prepare filter */ 1882 - if (qede_parse_flow_attr(edev, proto, f->rule, &t)) 1882 + rc = qede_parse_flow_attr(edev, proto, f->rule, &t); 1883 + if (rc) 1883 1884 goto unlock; 1884 1885 1885 1886 /* Validate profile mode and number of filters */ ··· 1889 1888 DP_NOTICE(edev, 1890 1889 "Filter configuration invalidated, filter mode=0x%x, configured mode=0x%x, filter count=0x%x\n", 1891 1890 t.mode, edev->arfs->mode, edev->arfs->filter_count); 1891 + rc = -EINVAL; 1892 1892 goto unlock; 1893 1893 } 1894 1894 1895 1895 /* parse tc actions and get the vf_id */ 1896 - if (qede_parse_actions(edev, &f->rule->action, f->common.extack)) 1896 + rc = qede_parse_actions(edev, &f->rule->action, f->common.extack); 1897 + if (rc) 1897 1898 goto unlock; 1898 1899 1899 1900 if (qede_flow_find_fltr(edev, &t)) { ··· 2001 1998 if (IS_ERR(flow)) 2002 1999 return PTR_ERR(flow); 2003 2000 2004 - if (qede_parse_flow_attr(edev, proto, flow->rule, t)) { 2005 - err = -EINVAL; 2001 + err = qede_parse_flow_attr(edev, proto, flow->rule, t); 2002 + if (err) 2006 2003 goto err_out; 2007 - } 2008 2004 2009 2005 /* Make sure location is valid and filter isn't already set */ 2010 2006 err = qede_flow_spec_validate(edev, &flow->rule->action, t,
+34 -15
drivers/net/vxlan/vxlan_core.c
··· 1675 1675 bool raw_proto = false; 1676 1676 void *oiph; 1677 1677 __be32 vni = 0; 1678 + int nh; 1678 1679 1679 1680 /* Need UDP and VXLAN header to be present */ 1680 1681 if (!pskb_may_pull(skb, VXLAN_HLEN)) ··· 1766 1765 skb->pkt_type = PACKET_HOST; 1767 1766 } 1768 1767 1769 - oiph = skb_network_header(skb); 1768 + /* Save offset of outer header relative to skb->head, 1769 + * because we are going to reset the network header to the inner header 1770 + * and might change skb->head. 1771 + */ 1772 + nh = skb_network_header(skb) - skb->head; 1773 + 1770 1774 skb_reset_network_header(skb); 1771 1775 1776 + if (!pskb_inet_may_pull(skb)) { 1777 + DEV_STATS_INC(vxlan->dev, rx_length_errors); 1778 + DEV_STATS_INC(vxlan->dev, rx_errors); 1779 + vxlan_vnifilter_count(vxlan, vni, vninode, 1780 + VXLAN_VNI_STATS_RX_ERRORS, 0); 1781 + goto drop; 1782 + } 1783 + 1784 + /* Get the outer header. */ 1785 + oiph = skb->head + nh; 1786 + 1772 1787 if (!vxlan_ecn_decapsulate(vs, oiph, skb)) { 1773 - ++vxlan->dev->stats.rx_frame_errors; 1774 - ++vxlan->dev->stats.rx_errors; 1788 + DEV_STATS_INC(vxlan->dev, rx_frame_errors); 1789 + DEV_STATS_INC(vxlan->dev, rx_errors); 1775 1790 vxlan_vnifilter_count(vxlan, vni, vninode, 1776 1791 VXLAN_VNI_STATS_RX_ERRORS, 0); 1777 1792 goto drop; ··· 1857 1840 goto out; 1858 1841 1859 1842 if (!pskb_may_pull(skb, arp_hdr_len(dev))) { 1860 - dev->stats.tx_dropped++; 1843 + dev_core_stats_tx_dropped_inc(dev); 1844 + vxlan_vnifilter_count(vxlan, vni, NULL, 1845 + VXLAN_VNI_STATS_TX_DROPS, 0); 1861 1846 goto out; 1862 1847 } 1863 1848 parp = arp_hdr(skb); ··· 1915 1896 reply->pkt_type = PACKET_HOST; 1916 1897 1917 1898 if (netif_rx(reply) == NET_RX_DROP) { 1918 - dev->stats.rx_dropped++; 1899 + dev_core_stats_rx_dropped_inc(dev); 1919 1900 vxlan_vnifilter_count(vxlan, vni, NULL, 1920 1901 VXLAN_VNI_STATS_RX_DROPS, 0); 1921 1902 } ··· 2074 2055 goto out; 2075 2056 2076 2057 if (netif_rx(reply) == NET_RX_DROP) { 2077 - dev->stats.rx_dropped++; 2058 + dev_core_stats_rx_dropped_inc(dev); 2078 2059 vxlan_vnifilter_count(vxlan, vni, NULL, 2079 2060 VXLAN_VNI_STATS_RX_DROPS, 0); 2080 2061 } ··· 2285 2266 len); 2286 2267 } else { 2287 2268 drop: 2288 - dev->stats.rx_dropped++; 2269 + dev_core_stats_rx_dropped_inc(dev); 2289 2270 vxlan_vnifilter_count(dst_vxlan, vni, NULL, 2290 2271 VXLAN_VNI_STATS_RX_DROPS, 0); 2291 2272 } ··· 2317 2298 addr_family, dst_port, 2318 2299 vxlan->cfg.flags); 2319 2300 if (!dst_vxlan) { 2320 - dev->stats.tx_errors++; 2301 + DEV_STATS_INC(dev, tx_errors); 2321 2302 vxlan_vnifilter_count(vxlan, vni, NULL, 2322 2303 VXLAN_VNI_STATS_TX_ERRORS, 0); 2323 2304 kfree_skb(skb); ··· 2582 2563 return; 2583 2564 2584 2565 drop: 2585 - dev->stats.tx_dropped++; 2566 + dev_core_stats_tx_dropped_inc(dev); 2586 2567 vxlan_vnifilter_count(vxlan, vni, NULL, VXLAN_VNI_STATS_TX_DROPS, 0); 2587 2568 dev_kfree_skb(skb); 2588 2569 return; ··· 2590 2571 tx_error: 2591 2572 rcu_read_unlock(); 2592 2573 if (err == -ELOOP) 2593 - dev->stats.collisions++; 2574 + DEV_STATS_INC(dev, collisions); 2594 2575 else if (err == -ENETUNREACH) 2595 - dev->stats.tx_carrier_errors++; 2576 + DEV_STATS_INC(dev, tx_carrier_errors); 2596 2577 dst_release(ndst); 2597 - dev->stats.tx_errors++; 2578 + DEV_STATS_INC(dev, tx_errors); 2598 2579 vxlan_vnifilter_count(vxlan, vni, NULL, VXLAN_VNI_STATS_TX_ERRORS, 0); 2599 2580 kfree_skb(skb); 2600 2581 } ··· 2627 2608 return; 2628 2609 2629 2610 drop: 2630 - dev->stats.tx_dropped++; 2611 + dev_core_stats_tx_dropped_inc(dev); 2631 2612 
vxlan_vnifilter_count(netdev_priv(dev), vni, NULL, 2632 2613 VXLAN_VNI_STATS_TX_DROPS, 0); 2633 2614 dev_kfree_skb(skb); ··· 2665 2646 return NETDEV_TX_OK; 2666 2647 2667 2648 drop: 2668 - dev->stats.tx_dropped++; 2649 + dev_core_stats_tx_dropped_inc(dev); 2669 2650 vxlan_vnifilter_count(netdev_priv(dev), vni, NULL, 2670 2651 VXLAN_VNI_STATS_TX_DROPS, 0); 2671 2652 dev_kfree_skb(skb); ··· 2762 2743 !is_multicast_ether_addr(eth->h_dest)) 2763 2744 vxlan_fdb_miss(vxlan, eth->h_dest); 2764 2745 2765 - dev->stats.tx_dropped++; 2746 + dev_core_stats_tx_dropped_inc(dev); 2766 2747 vxlan_vnifilter_count(vxlan, vni, NULL, 2767 2748 VXLAN_VNI_STATS_TX_DROPS, 0); 2768 2749 kfree_skb(skb);
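Two things are going on in this vxlan diff: per-cpu core stats (dev_core_stats_*_inc()/DEV_STATS_INC()) replace unsynchronized increments of dev->stats, and the outer-header pointer is reconstituted from an offset, because any pskb_may_pull() variant may reallocate header memory and move skb->head. The second pattern in isolation, with the drop path elided:

void *oiph;
int nh;

nh = skb_network_header(skb) - skb->head;   /* offset survives a pull */

if (!pskb_inet_may_pull(skb))               /* may reallocate skb->head */
        goto drop;

oiph = skb->head + nh;                      /* recompute the pointer */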
+4 -2
drivers/phy/freescale/phy-fsl-imx8m-pcie.c
··· 110 110 /* Source clock from SoC internal PLL */ 111 111 writel(ANA_PLL_CLK_OUT_TO_EXT_IO_SEL, 112 112 imx8_phy->base + IMX8MM_PCIE_PHY_CMN_REG062); 113 - writel(AUX_PLL_REFCLK_SEL_SYS_PLL, 114 - imx8_phy->base + IMX8MM_PCIE_PHY_CMN_REG063); 113 + if (imx8_phy->drvdata->variant != IMX8MM) { 114 + writel(AUX_PLL_REFCLK_SEL_SYS_PLL, 115 + imx8_phy->base + IMX8MM_PCIE_PHY_CMN_REG063); 116 + } 115 117 val = ANA_AUX_RX_TX_SEL_TX | ANA_AUX_TX_TERM; 116 118 writel(val | ANA_AUX_RX_TERM_GND_EN, 117 119 imx8_phy->base + IMX8MM_PCIE_PHY_CMN_REG064);
+5 -4
drivers/phy/marvell/phy-mvebu-a3700-comphy.c
··· 603 603 u16 val; 604 604 605 605 fix_idx = 0; 606 - for (addr = 0; addr < 512; addr++) { 606 + for (addr = 0; addr < ARRAY_SIZE(gbe_phy_init); addr++) { 607 607 /* 608 608 * All PHY register values are defined in full for 3.125Gbps 609 609 * SERDES speed. The values required for 1.25 Gbps are almost ··· 611 611 * comparison to 3.125 Gbps values. These register values are 612 612 * stored in "gbe_phy_init_fix" array. 613 613 */ 614 - if (!is_1gbps && gbe_phy_init_fix[fix_idx].addr == addr) { 614 + if (!is_1gbps && 615 + fix_idx < ARRAY_SIZE(gbe_phy_init_fix) && 616 + gbe_phy_init_fix[fix_idx].addr == addr) { 615 617 /* Use new value */ 616 618 val = gbe_phy_init_fix[fix_idx].value; 617 - if (fix_idx < ARRAY_SIZE(gbe_phy_init_fix)) 618 - fix_idx++; 619 + fix_idx++; 619 620 } else { 620 621 val = gbe_phy_init[addr]; 621 622 }
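The original loop could read gbe_phy_init_fix[fix_idx] one slot past the end before noticing fix_idx was out of range; the fix relies on && short-circuiting so the index is validated before the array is touched. The shape of the corrected loop body; fix_table and base_table are stand-in names.

if (fix_idx < ARRAY_SIZE(fix_table) && fix_table[fix_idx].addr == addr)
        val = fix_table[fix_idx++].value;   /* advance only on a match */
else
        val = base_table[addr];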
+1 -1
drivers/phy/qualcomm/phy-qcom-m31.c
··· 297 297 return dev_err_probe(dev, PTR_ERR(qphy->phy), 298 298 "failed to create phy\n"); 299 299 300 - qphy->vreg = devm_regulator_get(dev, "vdda-phy"); 300 + qphy->vreg = devm_regulator_get(dev, "vdd"); 301 301 if (IS_ERR(qphy->vreg)) 302 302 return dev_err_probe(dev, PTR_ERR(qphy->vreg), 303 303 "failed to get vreg\n");
+9 -3
drivers/phy/qualcomm/phy-qcom-qmp-combo.c
··· 77 77 QPHY_COM_BIAS_EN_CLKBUFLR_EN, 78 78 79 79 QPHY_DP_PHY_STATUS, 80 + QPHY_DP_PHY_VCO_DIV, 80 81 81 82 QPHY_TX_TX_POL_INV, 82 83 QPHY_TX_TX_DRV_LVL, ··· 103 102 [QPHY_COM_BIAS_EN_CLKBUFLR_EN] = QSERDES_V3_COM_BIAS_EN_CLKBUFLR_EN, 104 103 105 104 [QPHY_DP_PHY_STATUS] = QSERDES_V3_DP_PHY_STATUS, 105 + [QPHY_DP_PHY_VCO_DIV] = QSERDES_V3_DP_PHY_VCO_DIV, 106 106 107 107 [QPHY_TX_TX_POL_INV] = QSERDES_V3_TX_TX_POL_INV, 108 108 [QPHY_TX_TX_DRV_LVL] = QSERDES_V3_TX_TX_DRV_LVL, ··· 128 126 [QPHY_COM_BIAS_EN_CLKBUFLR_EN] = QSERDES_V4_COM_BIAS_EN_CLKBUFLR_EN, 129 127 130 128 [QPHY_DP_PHY_STATUS] = QSERDES_V4_DP_PHY_STATUS, 129 + [QPHY_DP_PHY_VCO_DIV] = QSERDES_V4_DP_PHY_VCO_DIV, 131 130 132 131 [QPHY_TX_TX_POL_INV] = QSERDES_V4_TX_TX_POL_INV, 133 132 [QPHY_TX_TX_DRV_LVL] = QSERDES_V4_TX_TX_DRV_LVL, ··· 153 150 [QPHY_COM_BIAS_EN_CLKBUFLR_EN] = QSERDES_V5_COM_BIAS_EN_CLKBUFLR_EN, 154 151 155 152 [QPHY_DP_PHY_STATUS] = QSERDES_V5_DP_PHY_STATUS, 153 + [QPHY_DP_PHY_VCO_DIV] = QSERDES_V5_DP_PHY_VCO_DIV, 156 154 157 155 [QPHY_TX_TX_POL_INV] = QSERDES_V5_5NM_TX_TX_POL_INV, 158 156 [QPHY_TX_TX_DRV_LVL] = QSERDES_V5_5NM_TX_TX_DRV_LVL, ··· 178 174 [QPHY_COM_BIAS_EN_CLKBUFLR_EN] = QSERDES_V6_COM_PLL_BIAS_EN_CLK_BUFLR_EN, 179 175 180 176 [QPHY_DP_PHY_STATUS] = QSERDES_V6_DP_PHY_STATUS, 177 + [QPHY_DP_PHY_VCO_DIV] = QSERDES_V6_DP_PHY_VCO_DIV, 181 178 182 179 [QPHY_TX_TX_POL_INV] = QSERDES_V6_TX_TX_POL_INV, 183 180 [QPHY_TX_TX_DRV_LVL] = QSERDES_V6_TX_TX_DRV_LVL, ··· 2155 2150 writel(val, qmp->dp_dp_phy + QSERDES_DP_PHY_PD_CTL); 2156 2151 2157 2152 if (reverse) 2158 - writel(0x4c, qmp->pcs + QSERDES_DP_PHY_MODE); 2153 + writel(0x4c, qmp->dp_dp_phy + QSERDES_DP_PHY_MODE); 2159 2154 else 2160 - writel(0x5c, qmp->pcs + QSERDES_DP_PHY_MODE); 2155 + writel(0x5c, qmp->dp_dp_phy + QSERDES_DP_PHY_MODE); 2161 2156 2162 2157 return reverse; 2163 2158 } ··· 2167 2162 const struct phy_configure_opts_dp *dp_opts = &qmp->dp_opts; 2168 2163 u32 phy_vco_div; 2169 2164 unsigned long pixel_freq; 2165 + const struct qmp_phy_cfg *cfg = qmp->cfg; 2170 2166 2171 2167 switch (dp_opts->link_rate) { 2172 2168 case 1620: ··· 2190 2184 /* Other link rates aren't supported */ 2191 2185 return -EINVAL; 2192 2186 } 2193 - writel(phy_vco_div, qmp->dp_dp_phy + QSERDES_V4_DP_PHY_VCO_DIV); 2187 + writel(phy_vco_div, qmp->dp_dp_phy + cfg->regs[QPHY_DP_PHY_VCO_DIV]); 2194 2188 2195 2189 clk_set_rate(qmp->dp_link_hw.clk, dp_opts->link_rate * 100000); 2196 2190 clk_set_rate(qmp->dp_pixel_hw.clk, pixel_freq);
+1
drivers/phy/qualcomm/phy-qcom-qmp-dp-phy-v5.h
··· 7 7 #define QCOM_PHY_QMP_DP_PHY_V5_H_ 8 8 9 9 /* Only for QMP V5 PHY - DP PHY registers */ 10 + #define QSERDES_V5_DP_PHY_VCO_DIV 0x070 10 11 #define QSERDES_V5_DP_PHY_AUX_INTERRUPT_STATUS 0x0d8 11 12 #define QSERDES_V5_DP_PHY_STATUS 0x0dc 12 13
+1
drivers/phy/qualcomm/phy-qcom-qmp-dp-phy-v6.h
··· 7 7 #define QCOM_PHY_QMP_DP_PHY_V6_H_ 8 8 9 9 /* Only for QMP V6 PHY - DP PHY registers */ 10 + #define QSERDES_V6_DP_PHY_VCO_DIV 0x070 10 11 #define QSERDES_V6_DP_PHY_AUX_INTERRUPT_STATUS 0x0e0 11 12 #define QSERDES_V6_DP_PHY_STATUS 0x0e4 12 13
+1
drivers/phy/rockchip/Kconfig
··· 87 87 tristate "Rockchip Samsung HDMI/eDP Combo PHY driver" 88 88 depends on (ARCH_ROCKCHIP || COMPILE_TEST) && OF 89 89 select GENERIC_PHY 90 + select RATIONAL 90 91 help 91 92 Enable this to support the Rockchip HDMI/eDP Combo PHY 92 93 with Samsung IP block.
+33 -3
drivers/phy/rockchip/phy-rockchip-naneng-combphy.c
··· 125 125 }; 126 126 127 127 struct rockchip_combphy_cfg { 128 + unsigned int num_phys; 129 + unsigned int phy_ids[3]; 128 130 const struct rockchip_combphy_grfcfg *grfcfg; 129 131 int (*combphy_cfg)(struct rockchip_combphy_priv *priv); 130 132 }; 131 133 132 134 struct rockchip_combphy_priv { 133 135 u8 type; 136 + int id; 134 137 void __iomem *mmio; 135 138 int num_clks; 136 139 struct clk_bulk_data *clks; ··· 323 320 struct rockchip_combphy_priv *priv; 324 321 const struct rockchip_combphy_cfg *phy_cfg; 325 322 struct resource *res; 326 - int ret; 323 + int ret, id; 327 324 328 325 phy_cfg = of_device_get_match_data(dev); 329 326 if (!phy_cfg) { ··· 339 336 if (IS_ERR(priv->mmio)) { 340 337 ret = PTR_ERR(priv->mmio); 341 338 return ret; 339 + } 340 + 341 + /* find the phy-id from the io address */ 342 + priv->id = -ENODEV; 343 + for (id = 0; id < phy_cfg->num_phys; id++) { 344 + if (res->start == phy_cfg->phy_ids[id]) { 345 + priv->id = id; 346 + break; 347 + } 342 348 } 343 349 344 350 priv->dev = dev; ··· 574 562 }; 575 563 576 564 static const struct rockchip_combphy_cfg rk3568_combphy_cfgs = { 565 + .num_phys = 3, 566 + .phy_ids = { 567 + 0xfe820000, 568 + 0xfe830000, 569 + 0xfe840000, 570 + }, 577 571 .grfcfg = &rk3568_combphy_grfcfgs, 578 572 .combphy_cfg = rk3568_combphy_cfg, 579 573 }; ··· 596 578 rockchip_combphy_param_write(priv->phy_grf, &cfg->con1_for_pcie, true); 597 579 rockchip_combphy_param_write(priv->phy_grf, &cfg->con2_for_pcie, true); 598 580 rockchip_combphy_param_write(priv->phy_grf, &cfg->con3_for_pcie, true); 599 - rockchip_combphy_param_write(priv->pipe_grf, &cfg->pipe_pcie1l0_sel, true); 600 - rockchip_combphy_param_write(priv->pipe_grf, &cfg->pipe_pcie1l1_sel, true); 581 + switch (priv->id) { 582 + case 1: 583 + rockchip_combphy_param_write(priv->pipe_grf, &cfg->pipe_pcie1l0_sel, true); 584 + break; 585 + case 2: 586 + rockchip_combphy_param_write(priv->pipe_grf, &cfg->pipe_pcie1l1_sel, true); 587 + break; 588 + } 601 589 break; 602 590 case PHY_TYPE_USB3: 603 591 /* Set SSC downward spread spectrum */ ··· 760 736 }; 761 737 762 738 static const struct rockchip_combphy_cfg rk3588_combphy_cfgs = { 739 + .num_phys = 3, 740 + .phy_ids = { 741 + 0xfee00000, 742 + 0xfee10000, 743 + 0xfee20000, 744 + }, 763 745 .grfcfg = &rk3588_combphy_grfcfgs, 764 746 .combphy_cfg = rk3588_combphy_cfg, 765 747 };
+13 -18
drivers/phy/rockchip/phy-rockchip-snps-pcie3.c
··· 40 40 #define RK3588_BIFURCATION_LANE_0_1 BIT(0) 41 41 #define RK3588_BIFURCATION_LANE_2_3 BIT(1) 42 42 #define RK3588_LANE_AGGREGATION BIT(2) 43 + #define RK3588_PCIE1LN_SEL_EN (GENMASK(1, 0) << 16) 44 + #define RK3588_PCIE30_PHY_MODE_EN (GENMASK(2, 0) << 16) 43 45 44 46 struct rockchip_p3phy_ops; 45 47 ··· 134 132 static int rockchip_p3phy_rk3588_init(struct rockchip_p3phy_priv *priv) 135 133 { 136 134 u32 reg = 0; 137 - u8 mode = 0; 135 + u8 mode = RK3588_LANE_AGGREGATION; /* default */ 138 136 int ret; 139 137 140 138 /* Deassert PCIe PMA output clamp mode */ ··· 142 140 143 141 /* Set bifurcation if needed */ 144 142 for (int i = 0; i < priv->num_lanes; i++) { 145 - if (!priv->lanes[i]) 146 - mode |= (BIT(i) << 3); 147 - 148 143 if (priv->lanes[i] > 1) 149 - mode |= (BIT(i) >> 1); 144 + mode &= ~RK3588_LANE_AGGREGATION; 145 + if (priv->lanes[i] == 3) 146 + mode |= RK3588_BIFURCATION_LANE_0_1; 147 + if (priv->lanes[i] == 4) 148 + mode |= RK3588_BIFURCATION_LANE_2_3; 150 149 } 151 150 152 - if (!mode) 153 - reg = RK3588_LANE_AGGREGATION; 154 - else { 155 - if (mode & (BIT(0) | BIT(1))) 156 - reg |= RK3588_BIFURCATION_LANE_0_1; 157 - 158 - if (mode & (BIT(2) | BIT(3))) 159 - reg |= RK3588_BIFURCATION_LANE_2_3; 160 - } 161 - 162 - regmap_write(priv->phy_grf, RK3588_PCIE3PHY_GRF_CMN_CON0, (0x7<<16) | reg); 151 + reg = mode; 152 + regmap_write(priv->phy_grf, RK3588_PCIE3PHY_GRF_CMN_CON0, 153 + RK3588_PCIE30_PHY_MODE_EN | reg); 163 154 164 155 /* Set pcie1ln_sel in PHP_GRF_PCIESEL_CON */ 165 156 if (!IS_ERR(priv->pipe_grf)) { 166 - reg = (mode & (BIT(6) | BIT(7))) >> 6; 157 + reg = mode & (RK3588_BIFURCATION_LANE_0_1 | RK3588_BIFURCATION_LANE_2_3); 167 158 if (reg) 168 159 regmap_write(priv->pipe_grf, PHP_GRF_PCIESEL_CON, 169 - (reg << 16) | reg); 160 + RK3588_PCIE1LN_SEL_EN | reg); 170 161 } 171 162 172 163 reset_control_deassert(priv->p30phy);
+12 -11
drivers/phy/ti/phy-tusb1210.c
··· 69 69 struct delayed_work chg_det_work; 70 70 struct notifier_block psy_nb; 71 71 struct power_supply *psy; 72 - struct power_supply *charger; 73 72 #endif 74 73 }; 75 74 ··· 235 236 236 237 static bool tusb1210_get_online(struct tusb1210 *tusb) 237 238 { 239 + struct power_supply *charger = NULL; 238 240 union power_supply_propval val; 239 - int i; 241 + bool online = false; 242 + int i, ret; 240 243 241 - for (i = 0; i < ARRAY_SIZE(tusb1210_chargers) && !tusb->charger; i++) 242 - tusb->charger = power_supply_get_by_name(tusb1210_chargers[i]); 244 + for (i = 0; i < ARRAY_SIZE(tusb1210_chargers) && !charger; i++) 245 + charger = power_supply_get_by_name(tusb1210_chargers[i]); 243 246 244 - if (!tusb->charger) 247 + if (!charger) 245 248 return false; 246 249 247 - if (power_supply_get_property(tusb->charger, POWER_SUPPLY_PROP_ONLINE, &val)) 248 - return false; 250 + ret = power_supply_get_property(charger, POWER_SUPPLY_PROP_ONLINE, &val); 251 + if (ret == 0) 252 + online = val.intval; 249 253 250 - return val.intval; 254 + power_supply_put(charger); 255 + 256 + return online; 251 257 } 252 258 253 259 static void tusb1210_chg_det_work(struct work_struct *work) ··· 477 473 cancel_delayed_work_sync(&tusb->chg_det_work); 478 474 power_supply_unregister(tusb->psy); 479 475 } 480 - 481 - if (tusb->charger) 482 - power_supply_put(tusb->charger); 483 476 } 484 477 #else 485 478 static void tusb1210_probe_charger_detect(struct tusb1210 *tusb) { }
+17 -17
drivers/pinctrl/aspeed/pinctrl-aspeed-g6.c
··· 43 43 #define SCU614 0x614 /* Disable GPIO Internal Pull-Down #1 */ 44 44 #define SCU618 0x618 /* Disable GPIO Internal Pull-Down #2 */ 45 45 #define SCU61C 0x61c /* Disable GPIO Internal Pull-Down #3 */ 46 - #define SCU620 0x620 /* Disable GPIO Internal Pull-Down #4 */ 46 + #define SCU630 0x630 /* Disable GPIO Internal Pull-Down #4 */ 47 47 #define SCU634 0x634 /* Disable GPIO Internal Pull-Down #5 */ 48 48 #define SCU638 0x638 /* Disable GPIO Internal Pull-Down #6 */ 49 49 #define SCU690 0x690 /* Multi-function Pin Control #24 */ ··· 2495 2495 ASPEED_PULL_DOWN_PINCONF(D14, SCU61C, 0), 2496 2496 2497 2497 /* GPIOS7 */ 2498 - ASPEED_PULL_DOWN_PINCONF(T24, SCU620, 23), 2498 + ASPEED_PULL_DOWN_PINCONF(T24, SCU630, 23), 2499 2499 /* GPIOS6 */ 2500 - ASPEED_PULL_DOWN_PINCONF(P23, SCU620, 22), 2500 + ASPEED_PULL_DOWN_PINCONF(P23, SCU630, 22), 2501 2501 /* GPIOS5 */ 2502 - ASPEED_PULL_DOWN_PINCONF(P24, SCU620, 21), 2502 + ASPEED_PULL_DOWN_PINCONF(P24, SCU630, 21), 2503 2503 /* GPIOS4 */ 2504 - ASPEED_PULL_DOWN_PINCONF(R26, SCU620, 20), 2504 + ASPEED_PULL_DOWN_PINCONF(R26, SCU630, 20), 2505 2505 /* GPIOS3*/ 2506 - ASPEED_PULL_DOWN_PINCONF(R24, SCU620, 19), 2506 + ASPEED_PULL_DOWN_PINCONF(R24, SCU630, 19), 2507 2507 /* GPIOS2 */ 2508 - ASPEED_PULL_DOWN_PINCONF(T26, SCU620, 18), 2508 + ASPEED_PULL_DOWN_PINCONF(T26, SCU630, 18), 2509 2509 /* GPIOS1 */ 2510 - ASPEED_PULL_DOWN_PINCONF(T25, SCU620, 17), 2510 + ASPEED_PULL_DOWN_PINCONF(T25, SCU630, 17), 2511 2511 /* GPIOS0 */ 2512 - ASPEED_PULL_DOWN_PINCONF(R23, SCU620, 16), 2512 + ASPEED_PULL_DOWN_PINCONF(R23, SCU630, 16), 2513 2513 2514 2514 /* GPIOR7 */ 2515 - ASPEED_PULL_DOWN_PINCONF(U26, SCU620, 15), 2515 + ASPEED_PULL_DOWN_PINCONF(U26, SCU630, 15), 2516 2516 /* GPIOR6 */ 2517 - ASPEED_PULL_DOWN_PINCONF(W26, SCU620, 14), 2517 + ASPEED_PULL_DOWN_PINCONF(W26, SCU630, 14), 2518 2518 /* GPIOR5 */ 2519 - ASPEED_PULL_DOWN_PINCONF(T23, SCU620, 13), 2519 + ASPEED_PULL_DOWN_PINCONF(T23, SCU630, 13), 2520 2520 /* GPIOR4 */ 2521 - ASPEED_PULL_DOWN_PINCONF(U25, SCU620, 12), 2521 + ASPEED_PULL_DOWN_PINCONF(U25, SCU630, 12), 2522 2522 /* GPIOR3*/ 2523 - ASPEED_PULL_DOWN_PINCONF(V26, SCU620, 11), 2523 + ASPEED_PULL_DOWN_PINCONF(V26, SCU630, 11), 2524 2524 /* GPIOR2 */ 2525 - ASPEED_PULL_DOWN_PINCONF(V24, SCU620, 10), 2525 + ASPEED_PULL_DOWN_PINCONF(V24, SCU630, 10), 2526 2526 /* GPIOR1 */ 2527 - ASPEED_PULL_DOWN_PINCONF(U24, SCU620, 9), 2527 + ASPEED_PULL_DOWN_PINCONF(U24, SCU630, 9), 2528 2528 /* GPIOR0 */ 2529 - ASPEED_PULL_DOWN_PINCONF(V25, SCU620, 8), 2529 + ASPEED_PULL_DOWN_PINCONF(V25, SCU630, 8), 2530 2530 2531 2531 /* GPIOX7 */ 2532 2532 ASPEED_PULL_DOWN_PINCONF(AB10, SCU634, 31),
+1 -7
drivers/pinctrl/core.c
··· 2124 2124 2125 2125 error = pinctrl_claim_hogs(pctldev); 2126 2126 if (error) { 2127 - dev_err(pctldev->dev, "could not claim hogs: %i\n", 2128 - error); 2129 - pinctrl_free_pindescs(pctldev, pctldev->desc->pins, 2130 - pctldev->desc->npins); 2131 - mutex_destroy(&pctldev->mutex); 2132 - kfree(pctldev); 2133 - 2127 + dev_err(pctldev->dev, "could not claim hogs: %i\n", error); 2134 2128 return error; 2135 2129 } 2136 2130
+6 -4
drivers/pinctrl/devicetree.c
··· 220 220 for (state = 0; ; state++) { 221 221 /* Retrieve the pinctrl-* property */ 222 222 propname = kasprintf(GFP_KERNEL, "pinctrl-%d", state); 223 - if (!propname) 224 - return -ENOMEM; 223 + if (!propname) { 224 + ret = -ENOMEM; 225 + goto err; 226 + } 225 227 prop = of_find_property(np, propname, &size); 226 228 kfree(propname); 227 229 if (!prop) { 228 230 if (state == 0) { 229 - of_node_put(np); 230 - return -ENODEV; 231 + ret = -ENODEV; 232 + goto err; 231 233 } 232 234 break; 233 235 }
+41 -37
drivers/pinctrl/intel/pinctrl-baytrail.c
··· 231 231 /* SCORE groups */ 232 232 static const unsigned int byt_score_uart1_pins[] = { 70, 71, 72, 73 }; 233 233 static const unsigned int byt_score_uart2_pins[] = { 74, 75, 76, 77 }; 234 + static const unsigned int byt_score_uart3_pins[] = { 57, 61 }; 234 235 235 236 static const unsigned int byt_score_pwm0_pins[] = { 94 }; 236 237 static const unsigned int byt_score_pwm1_pins[] = { 95 }; ··· 279 278 static const unsigned int byt_score_smbus_pins[] = { 51, 52, 53 }; 280 279 281 280 static const struct intel_pingroup byt_score_groups[] = { 282 - PIN_GROUP("uart1_grp", byt_score_uart1_pins, 1), 283 - PIN_GROUP("uart2_grp", byt_score_uart2_pins, 1), 284 - PIN_GROUP("pwm0_grp", byt_score_pwm0_pins, 1), 285 - PIN_GROUP("pwm1_grp", byt_score_pwm1_pins, 1), 286 - PIN_GROUP("ssp2_grp", byt_score_ssp2_pins, 1), 287 - PIN_GROUP("sio_spi_grp", byt_score_sio_spi_pins, 1), 288 - PIN_GROUP("i2c5_grp", byt_score_i2c5_pins, 1), 289 - PIN_GROUP("i2c6_grp", byt_score_i2c6_pins, 1), 290 - PIN_GROUP("i2c4_grp", byt_score_i2c4_pins, 1), 291 - PIN_GROUP("i2c3_grp", byt_score_i2c3_pins, 1), 292 - PIN_GROUP("i2c2_grp", byt_score_i2c2_pins, 1), 293 - PIN_GROUP("i2c1_grp", byt_score_i2c1_pins, 1), 294 - PIN_GROUP("i2c0_grp", byt_score_i2c0_pins, 1), 295 - PIN_GROUP("ssp0_grp", byt_score_ssp0_pins, 1), 296 - PIN_GROUP("ssp1_grp", byt_score_ssp1_pins, 1), 297 - PIN_GROUP("sdcard_grp", byt_score_sdcard_pins, byt_score_sdcard_mux_values), 298 - PIN_GROUP("sdio_grp", byt_score_sdio_pins, 1), 299 - PIN_GROUP("emmc_grp", byt_score_emmc_pins, 1), 300 - PIN_GROUP("lpc_grp", byt_score_ilb_lpc_pins, 1), 301 - PIN_GROUP("sata_grp", byt_score_sata_pins, 1), 302 - PIN_GROUP("plt_clk0_grp", byt_score_plt_clk0_pins, 1), 303 - PIN_GROUP("plt_clk1_grp", byt_score_plt_clk1_pins, 1), 304 - PIN_GROUP("plt_clk2_grp", byt_score_plt_clk2_pins, 1), 305 - PIN_GROUP("plt_clk3_grp", byt_score_plt_clk3_pins, 1), 306 - PIN_GROUP("plt_clk4_grp", byt_score_plt_clk4_pins, 1), 307 - PIN_GROUP("plt_clk5_grp", byt_score_plt_clk5_pins, 1), 308 - PIN_GROUP("smbus_grp", byt_score_smbus_pins, 1), 281 + PIN_GROUP_GPIO("uart1_grp", byt_score_uart1_pins, 1), 282 + PIN_GROUP_GPIO("uart2_grp", byt_score_uart2_pins, 1), 283 + PIN_GROUP_GPIO("uart3_grp", byt_score_uart3_pins, 1), 284 + PIN_GROUP_GPIO("pwm0_grp", byt_score_pwm0_pins, 1), 285 + PIN_GROUP_GPIO("pwm1_grp", byt_score_pwm1_pins, 1), 286 + PIN_GROUP_GPIO("ssp2_grp", byt_score_ssp2_pins, 1), 287 + PIN_GROUP_GPIO("sio_spi_grp", byt_score_sio_spi_pins, 1), 288 + PIN_GROUP_GPIO("i2c5_grp", byt_score_i2c5_pins, 1), 289 + PIN_GROUP_GPIO("i2c6_grp", byt_score_i2c6_pins, 1), 290 + PIN_GROUP_GPIO("i2c4_grp", byt_score_i2c4_pins, 1), 291 + PIN_GROUP_GPIO("i2c3_grp", byt_score_i2c3_pins, 1), 292 + PIN_GROUP_GPIO("i2c2_grp", byt_score_i2c2_pins, 1), 293 + PIN_GROUP_GPIO("i2c1_grp", byt_score_i2c1_pins, 1), 294 + PIN_GROUP_GPIO("i2c0_grp", byt_score_i2c0_pins, 1), 295 + PIN_GROUP_GPIO("ssp0_grp", byt_score_ssp0_pins, 1), 296 + PIN_GROUP_GPIO("ssp1_grp", byt_score_ssp1_pins, 1), 297 + PIN_GROUP_GPIO("sdcard_grp", byt_score_sdcard_pins, byt_score_sdcard_mux_values), 298 + PIN_GROUP_GPIO("sdio_grp", byt_score_sdio_pins, 1), 299 + PIN_GROUP_GPIO("emmc_grp", byt_score_emmc_pins, 1), 300 + PIN_GROUP_GPIO("lpc_grp", byt_score_ilb_lpc_pins, 1), 301 + PIN_GROUP_GPIO("sata_grp", byt_score_sata_pins, 1), 302 + PIN_GROUP_GPIO("plt_clk0_grp", byt_score_plt_clk0_pins, 1), 303 + PIN_GROUP_GPIO("plt_clk1_grp", byt_score_plt_clk1_pins, 1), 304 + PIN_GROUP_GPIO("plt_clk2_grp", byt_score_plt_clk2_pins, 1), 305 + PIN_GROUP_GPIO("plt_clk3_grp", byt_score_plt_clk3_pins, 1), 306 + PIN_GROUP_GPIO("plt_clk4_grp", byt_score_plt_clk4_pins, 1), 307 + PIN_GROUP_GPIO("plt_clk5_grp", byt_score_plt_clk5_pins, 1), 308 + PIN_GROUP_GPIO("smbus_grp", byt_score_smbus_pins, 1), 309 309 }; 310 310 311 311 static const char * const byt_score_uart_groups[] = { 312 - "uart1_grp", "uart2_grp", 312 + "uart1_grp", "uart2_grp", "uart3_grp", 313 313 }; 314 314 static const char * const byt_score_pwm_groups[] = { 315 315 "pwm0_grp", "pwm1_grp", ··· 334 332 }; 335 333 static const char * const byt_score_smbus_groups[] = { "smbus_grp" }; 336 334 static const char * const byt_score_gpio_groups[] = { 337 - "uart1_grp", "uart2_grp", "pwm0_grp", "pwm1_grp", "ssp0_grp", 338 - "ssp1_grp", "ssp2_grp", "sio_spi_grp", "i2c0_grp", "i2c1_grp", 339 - "i2c2_grp", "i2c3_grp", "i2c4_grp", "i2c5_grp", "i2c6_grp", 340 - "sdcard_grp", "sdio_grp", "emmc_grp", "lpc_grp", "sata_grp", 341 - "plt_clk0_grp", "plt_clk1_grp", "plt_clk2_grp", "plt_clk3_grp", 342 - "plt_clk4_grp", "plt_clk5_grp", "smbus_grp", 335 + "uart1_grp_gpio", "uart2_grp_gpio", "uart3_grp_gpio", "pwm0_grp_gpio", 336 + "pwm1_grp_gpio", "ssp0_grp_gpio", "ssp1_grp_gpio", "ssp2_grp_gpio", 337 + "sio_spi_grp_gpio", "i2c0_grp_gpio", "i2c1_grp_gpio", "i2c2_grp_gpio", 338 + "i2c3_grp_gpio", "i2c4_grp_gpio", "i2c5_grp_gpio", "i2c6_grp_gpio", 339 + "sdcard_grp_gpio", "sdio_grp_gpio", "emmc_grp_gpio", "lpc_grp_gpio", 340 + "sata_grp_gpio", "plt_clk0_grp_gpio", "plt_clk1_grp_gpio", 341 + "plt_clk2_grp_gpio", "plt_clk3_grp_gpio", "plt_clk4_grp_gpio", 342 + "plt_clk5_grp_gpio", "smbus_grp_gpio", 343 343 }; 344 344 345 345 static const struct intel_function byt_score_functions[] = { ··· 460 456 PIN_GROUP("usb_oc_grp_gpio", byt_sus_usb_over_current_pins, byt_sus_usb_over_current_gpio_mode_values), 461 457 PIN_GROUP("usb_ulpi_grp_gpio", byt_sus_usb_ulpi_pins, byt_sus_usb_ulpi_gpio_mode_values), 462 458 PIN_GROUP("pcu_spi_grp_gpio", byt_sus_pcu_spi_pins, byt_sus_pcu_spi_gpio_mode_values), 463 - PIN_GROUP("pmu_clk1_grp", byt_sus_pmu_clk1_pins, 1), 464 - PIN_GROUP("pmu_clk2_grp", byt_sus_pmu_clk2_pins, 1), 459 + PIN_GROUP_GPIO("pmu_clk1_grp", byt_sus_pmu_clk1_pins, 1), 460 + PIN_GROUP_GPIO("pmu_clk2_grp", byt_sus_pmu_clk2_pins, 1), 465 461 }; 466 462 467 463 static const char * const byt_sus_usb_groups[] = { ··· 473 469 }; 474 470 static const char * const byt_sus_gpio_groups[] = { 475 471 "usb_oc_grp_gpio", "usb_ulpi_grp_gpio", "pcu_spi_grp_gpio", 476 - "pmu_clk1_grp", "pmu_clk2_grp", 472 + "pmu_clk1_grp_gpio", "pmu_clk2_grp_gpio", 477 473 }; 478 474 479 475 static const struct intel_function byt_sus_functions[] = {
+4
drivers/pinctrl/intel/pinctrl-intel.h
··· 179 179 .modes = __builtin_choose_expr(__builtin_constant_p((m)), NULL, (m)), \ 180 180 } 181 181 182 + #define PIN_GROUP_GPIO(n, p, m) \ 183 + PIN_GROUP(n, p, m), \ 184 + PIN_GROUP(n "_gpio", p, 0) 185 + 182 186 #define FUNCTION(n, g) \ 183 187 { \ 184 188 .func = PINCTRL_PINFUNCTION((n), (g), ARRAY_SIZE(g)), \
+13 -27
drivers/pinctrl/mediatek/pinctrl-paris.c
··· 165 165 err = mtk_hw_get_value(hw, desc, PINCTRL_PIN_REG_SR, &ret); 166 166 break; 167 167 case PIN_CONFIG_INPUT_ENABLE: 168 - case PIN_CONFIG_OUTPUT_ENABLE: 168 + err = mtk_hw_get_value(hw, desc, PINCTRL_PIN_REG_IES, &ret); 169 + if (!ret) 170 + err = -EINVAL; 171 + break; 172 + case PIN_CONFIG_OUTPUT: 169 173 err = mtk_hw_get_value(hw, desc, PINCTRL_PIN_REG_DIR, &ret); 170 174 if (err) 171 175 break; 172 - /* CONFIG Current direction return value 173 - * ------------- ----------------- ---------------------- 174 - * OUTPUT_ENABLE output 1 (= HW value) 175 - * input 0 (= HW value) 176 - * INPUT_ENABLE output 0 (= reverse HW value) 177 - * input 1 (= reverse HW value) 178 - */ 179 - if (param == PIN_CONFIG_INPUT_ENABLE) 180 - ret = !ret; 181 176 177 + if (!ret) { 178 + err = -EINVAL; 179 + break; 180 + } 181 + 182 + err = mtk_hw_get_value(hw, desc, PINCTRL_PIN_REG_DO, &ret); 182 183 break; 183 184 case PIN_CONFIG_INPUT_SCHMITT_ENABLE: 184 185 err = mtk_hw_get_value(hw, desc, PINCTRL_PIN_REG_DIR, &ret); ··· 194 193 } 195 194 196 195 err = mtk_hw_get_value(hw, desc, PINCTRL_PIN_REG_SMT, &ret); 196 + if (!ret) 197 + err = -EINVAL; 197 198 break; 198 199 case PIN_CONFIG_DRIVE_STRENGTH: 199 200 if (!hw->soc->drive_get) ··· 284 281 break; 285 282 err = hw->soc->bias_set_combo(hw, desc, 0, arg); 286 283 break; 287 - case PIN_CONFIG_OUTPUT_ENABLE: 288 - err = mtk_hw_set_value(hw, desc, PINCTRL_PIN_REG_SMT, 289 - MTK_DISABLE); 290 - /* Keep set direction to consider the case that a GPIO pin 291 - * does not have SMT control 292 - */ 293 - if (err != -ENOTSUPP) 294 - break; 295 - 296 - err = mtk_hw_set_value(hw, desc, PINCTRL_PIN_REG_DIR, 297 - MTK_OUTPUT); 298 - break; 299 284 case PIN_CONFIG_INPUT_ENABLE: 300 285 /* regard all non-zero value as enable */ 301 286 err = mtk_hw_set_value(hw, desc, PINCTRL_PIN_REG_IES, !!arg); 302 - if (err) 303 - break; 304 - 305 - err = mtk_hw_set_value(hw, desc, PINCTRL_PIN_REG_DIR, 306 - MTK_INPUT); 307 287 break; 308 288 case PIN_CONFIG_SLEW_RATE: 309 289 /* regard all non-zero value as enable */
+3 -3
drivers/pinctrl/meson/pinctrl-meson-a1.c
··· 250 250 static const unsigned int pdm_din2_a_pins[] = { GPIOA_6 }; 251 251 static const unsigned int pdm_din1_a_pins[] = { GPIOA_7 }; 252 252 static const unsigned int pdm_din0_a_pins[] = { GPIOA_8 }; 253 - static const unsigned int pdm_dclk_pins[] = { GPIOA_9 }; 253 + static const unsigned int pdm_dclk_a_pins[] = { GPIOA_9 }; 254 254 255 255 /* gen_clk */ 256 256 static const unsigned int gen_clk_x_pins[] = { GPIOX_7 }; ··· 591 591 GROUP(pdm_din2_a, 3), 592 592 GROUP(pdm_din1_a, 3), 593 593 GROUP(pdm_din0_a, 3), 594 - GROUP(pdm_dclk, 3), 594 + GROUP(pdm_dclk_a, 3), 595 595 GROUP(pwm_c_a, 3), 596 596 GROUP(pwm_b_a, 3), 597 597 ··· 755 755 756 756 static const char * const pdm_groups[] = { 757 757 "pdm_din0_x", "pdm_din1_x", "pdm_din2_x", "pdm_dclk_x", "pdm_din2_a", 758 - "pdm_din1_a", "pdm_din0_a", "pdm_dclk", 758 + "pdm_din1_a", "pdm_din0_a", "pdm_dclk_a", 759 759 }; 760 760 761 761 static const char * const gen_clk_groups[] = {
+13 -1
drivers/pinctrl/renesas/pinctrl-rzg2l.c
··· 2045 2045 2046 2046 for (unsigned int i = 0; i < RZG2L_TINT_MAX_INTERRUPT; i++) { 2047 2047 struct irq_data *data; 2048 + unsigned long flags; 2048 2049 unsigned int virq; 2050 + int ret; 2049 2051 2050 2052 if (!pctrl->hwirq[i]) 2051 2053 continue; ··· 2065 2063 continue; 2066 2064 } 2067 2065 2068 - if (!irqd_irq_disabled(data)) 2066 + /* 2067 + * This has to be atomically executed to protect against a concurrent 2068 + * interrupt. 2069 + */ 2070 + raw_spin_lock_irqsave(&pctrl->lock.rlock, flags); 2071 + ret = rzg2l_gpio_irq_set_type(data, irqd_get_trigger_type(data)); 2072 + if (!ret && !irqd_irq_disabled(data)) 2069 2073 rzg2l_gpio_irq_enable(data); 2074 + raw_spin_unlock_irqrestore(&pctrl->lock.rlock, flags); 2075 + 2076 + if (ret) 2077 + dev_crit(pctrl->dev, "Failed to set IRQ type for virq=%u\n", virq); 2070 2078 } 2071 2079 } 2072 2080
+1
drivers/platform/x86/intel/speed_select_if/isst_if_common.c
··· 721 721 static const struct x86_cpu_id hpm_cpu_ids[] = { 722 722 X86_MATCH_INTEL_FAM6_MODEL(GRANITERAPIDS_D, NULL), 723 723 X86_MATCH_INTEL_FAM6_MODEL(GRANITERAPIDS_X, NULL), 724 + X86_MATCH_INTEL_FAM6_MODEL(ATOM_CRESTMONT, NULL), 724 725 X86_MATCH_INTEL_FAM6_MODEL(ATOM_CRESTMONT_X, NULL), 725 726 {} 726 727 };
+1 -1
drivers/power/supply/mt6360_charger.c
··· 588 588 }; 589 589 590 590 static const struct regulator_desc mt6360_otg_rdesc = { 591 - .of_match = "usb-otg-vbus", 591 + .of_match = "usb-otg-vbus-regulator", 592 592 .name = "usb-otg-vbus", 593 593 .ops = &mt6360_chg_otg_ops, 594 594 .owner = THIS_MODULE,
+2
drivers/power/supply/rt9455_charger.c
··· 192 192 4450000, 4450000, 4450000, 4450000, 4450000, 4450000, 4450000, 4450000 193 193 }; 194 194 195 + #if IS_ENABLED(CONFIG_USB_PHY) 195 196 /* 196 197 * When the charger is in boost mode, REG02[7:2] represent boost output 197 198 * voltage. ··· 208 207 5600000, 5600000, 5600000, 5600000, 5600000, 5600000, 5600000, 5600000, 209 208 5600000, 5600000, 5600000, 5600000, 5600000, 5600000, 5600000, 5600000, 210 209 }; 210 + #endif 211 211 212 212 /* REG07[3:0] (VMREG) in uV */ 213 213 static const int rt9455_vmreg_values[] = {
+3
drivers/regulator/irq_helpers.c
··· 352 352 353 353 h->irq = irq; 354 354 h->desc = *d; 355 + h->desc.name = devm_kstrdup(dev, d->name, GFP_KERNEL); 356 + if (!h->desc.name) 357 + return ERR_PTR(-ENOMEM); 355 358 356 359 ret = init_rdev_state(dev, h, rdev, common_errs, per_rdev_errs, 357 360 rdev_amount);
+20 -12
drivers/regulator/mt6360-regulator.c
··· 319 319 } 320 320 } 321 321 322 - #define MT6360_REGULATOR_DESC(_name, _sname, ereg, emask, vreg, vmask, \ 323 - mreg, mmask, streg, stmask, vranges, \ 324 - vcnts, offon_delay, irq_tbls) \ 322 + #define MT6360_REGULATOR_DESC(match, _name, _sname, ereg, emask, vreg, \ 323 + vmask, mreg, mmask, streg, stmask, \ 324 + vranges, vcnts, offon_delay, irq_tbls) \ 325 325 { \ 326 326 .desc = { \ 327 327 .name = #_name, \ 328 328 .supply_name = #_sname, \ 329 329 .id = MT6360_REGULATOR_##_name, \ 330 - .of_match = of_match_ptr(#_name), \ 330 + .of_match = of_match_ptr(match), \ 331 331 .regulators_node = of_match_ptr("regulator"), \ 332 332 .of_map_mode = mt6360_regulator_of_map_mode, \ 333 333 .owner = THIS_MODULE, \ ··· 351 351 } 352 352 353 353 static const struct mt6360_regulator_desc mt6360_regulator_descs[] = { 354 - MT6360_REGULATOR_DESC(BUCK1, BUCK1_VIN, 0x117, 0x40, 0x110, 0xff, 0x117, 0x30, 0x117, 0x04, 354 + MT6360_REGULATOR_DESC("buck1", BUCK1, BUCK1_VIN, 355 + 0x117, 0x40, 0x110, 0xff, 0x117, 0x30, 0x117, 0x04, 355 356 buck_vout_ranges, 256, 0, buck1_irq_tbls), 356 - MT6360_REGULATOR_DESC(BUCK2, BUCK2_VIN, 0x127, 0x40, 0x120, 0xff, 0x127, 0x30, 0x127, 0x04, 357 + MT6360_REGULATOR_DESC("buck2", BUCK2, BUCK2_VIN, 358 + 0x127, 0x40, 0x120, 0xff, 0x127, 0x30, 0x127, 0x04, 357 359 buck_vout_ranges, 256, 0, buck2_irq_tbls), 358 - MT6360_REGULATOR_DESC(LDO6, LDO_VIN3, 0x137, 0x40, 0x13B, 0xff, 0x137, 0x30, 0x137, 0x04, 360 + MT6360_REGULATOR_DESC("ldo6", LDO6, LDO_VIN3, 361 + 0x137, 0x40, 0x13B, 0xff, 0x137, 0x30, 0x137, 0x04, 359 362 ldo_vout_ranges1, 256, 0, ldo6_irq_tbls), 360 - MT6360_REGULATOR_DESC(LDO7, LDO_VIN3, 0x131, 0x40, 0x135, 0xff, 0x131, 0x30, 0x131, 0x04, 363 + MT6360_REGULATOR_DESC("ldo7", LDO7, LDO_VIN3, 364 + 0x131, 0x40, 0x135, 0xff, 0x131, 0x30, 0x131, 0x04, 361 365 ldo_vout_ranges1, 256, 0, ldo7_irq_tbls), 362 - MT6360_REGULATOR_DESC(LDO1, LDO_VIN1, 0x217, 0x40, 0x21B, 0xff, 0x217, 0x30, 0x217, 0x04, 366 + MT6360_REGULATOR_DESC("ldo1", LDO1, LDO_VIN1, 367 + 0x217, 0x40, 0x21B, 0xff, 0x217, 0x30, 0x217, 0x04, 363 368 ldo_vout_ranges2, 256, 0, ldo1_irq_tbls), 364 - MT6360_REGULATOR_DESC(LDO2, LDO_VIN1, 0x211, 0x40, 0x215, 0xff, 0x211, 0x30, 0x211, 0x04, 369 + MT6360_REGULATOR_DESC("ldo2", LDO2, LDO_VIN1, 370 + 0x211, 0x40, 0x215, 0xff, 0x211, 0x30, 0x211, 0x04, 365 371 ldo_vout_ranges2, 256, 0, ldo2_irq_tbls), 366 - MT6360_REGULATOR_DESC(LDO3, LDO_VIN1, 0x205, 0x40, 0x209, 0xff, 0x205, 0x30, 0x205, 0x04, 372 + MT6360_REGULATOR_DESC("ldo3", LDO3, LDO_VIN1, 373 + 0x205, 0x40, 0x209, 0xff, 0x205, 0x30, 0x205, 0x04, 367 374 ldo_vout_ranges2, 256, 100, ldo3_irq_tbls), 368 - MT6360_REGULATOR_DESC(LDO5, LDO_VIN2, 0x20B, 0x40, 0x20F, 0x7f, 0x20B, 0x30, 0x20B, 0x04, 375 + MT6360_REGULATOR_DESC("ldo5", LDO5, LDO_VIN2, 376 + 0x20B, 0x40, 0x20F, 0x7f, 0x20B, 0x30, 0x20B, 0x04, 369 377 ldo_vout_ranges3, 128, 100, ldo5_irq_tbls), 370 378 }; 371 379
+1
drivers/regulator/qcom-refgen-regulator.c
··· 140 140 { .compatible = "qcom,sm8250-refgen-regulator", .data = &sm8250_refgen_desc }, 141 141 { } 142 142 }; 143 + MODULE_DEVICE_TABLE(of, qcom_refgen_match_table); 143 144 144 145 static struct platform_driver qcom_refgen_driver = { 145 146 .probe = qcom_refgen_probe,
+1
drivers/regulator/vqmmc-ipq4019-regulator.c
··· 84 84 { .compatible = "qcom,vqmmc-ipq4019-regulator", }, 85 85 {}, 86 86 }; 87 + MODULE_DEVICE_TABLE(of, regulator_ipq4019_of_match); 87 88 88 89 static struct platform_driver ipq4019_regulator_driver = { 89 90 .probe = ipq4019_regulator_probe,
+31 -38
drivers/s390/net/qeth_core_main.c
··· 364 364 return rc; 365 365 } 366 366 367 - static int qeth_alloc_cq(struct qeth_card *card) 368 - { 369 - if (card->options.cq == QETH_CQ_ENABLED) { 370 - QETH_CARD_TEXT(card, 2, "cqon"); 371 - card->qdio.c_q = qeth_alloc_qdio_queue(); 372 - if (!card->qdio.c_q) { 373 - dev_err(&card->gdev->dev, "Failed to create completion queue\n"); 374 - return -ENOMEM; 375 - } 376 - } else { 377 - QETH_CARD_TEXT(card, 2, "nocq"); 378 - card->qdio.c_q = NULL; 379 - } 380 - return 0; 381 - } 382 - 383 367 static void qeth_free_cq(struct qeth_card *card) 384 368 { 385 369 if (card->qdio.c_q) { 386 370 qeth_free_qdio_queue(card->qdio.c_q); 387 371 card->qdio.c_q = NULL; 388 372 } 373 + } 374 + 375 + static int qeth_alloc_cq(struct qeth_card *card) 376 + { 377 + if (card->options.cq == QETH_CQ_ENABLED) { 378 + QETH_CARD_TEXT(card, 2, "cqon"); 379 + if (!card->qdio.c_q) { 380 + card->qdio.c_q = qeth_alloc_qdio_queue(); 381 + if (!card->qdio.c_q) { 382 + dev_err(&card->gdev->dev, 383 + "Failed to create completion queue\n"); 384 + return -ENOMEM; 385 + } 386 + } 387 + } else { 388 + QETH_CARD_TEXT(card, 2, "nocq"); 389 + qeth_free_cq(card); 390 + } 391 + return 0; 389 392 } 390 393 391 394 static enum iucv_tx_notify qeth_compute_cq_notification(int sbalf15, ··· 2631 2628 2632 2629 QETH_CARD_TEXT(card, 2, "allcqdbf"); 2633 2630 2631 + /* completion */ 2632 + if (qeth_alloc_cq(card)) 2633 + goto out_err; 2634 + 2634 2635 if (atomic_cmpxchg(&card->qdio.state, QETH_QDIO_UNINITIALIZED, 2635 2636 QETH_QDIO_ALLOCATED) != QETH_QDIO_UNINITIALIZED) 2636 2637 return 0; ··· 2670 2663 queue->priority = QETH_QIB_PQUE_PRIO_DEFAULT; 2671 2664 } 2672 2665 2673 - /* completion */ 2674 - if (qeth_alloc_cq(card)) 2675 - goto out_freeoutq; 2676 - 2677 2666 return 0; 2678 2667 2679 2668 out_freeoutq: ··· 2680 2677 qeth_free_buffer_pool(card); 2681 2678 out_buffer_pool: 2682 2679 atomic_set(&card->qdio.state, QETH_QDIO_UNINITIALIZED); 2680 + qeth_free_cq(card); 2681 + out_err: 2683 2682 return -ENOMEM; 2684 2683 } 2685 2684 ··· 2689 2684 { 2690 2685 int i, j; 2691 2686 2687 + qeth_free_cq(card); 2688 + 2692 2689 if (atomic_xchg(&card->qdio.state, QETH_QDIO_UNINITIALIZED) == 2693 2690 QETH_QDIO_UNINITIALIZED) 2694 2691 return; 2695 2692 2696 - qeth_free_cq(card); 2697 2693 for (j = 0; j < QDIO_MAX_BUFFERS_PER_Q; ++j) { 2698 2694 if (card->qdio.in_q->bufs[j].rx_skb) { 2699 2695 consume_skb(card->qdio.in_q->bufs[j].rx_skb); ··· 3748 3742 3749 3743 int qeth_configure_cq(struct qeth_card *card, enum qeth_cq cq) 3750 3744 { 3751 - int rc; 3745 + if (card->options.cq == QETH_CQ_NOTAVAILABLE) 3746 + return -1; 3752 3747 3753 - if (card->options.cq == QETH_CQ_NOTAVAILABLE) { 3754 - rc = -1; 3755 - goto out; 3756 - } else { 3757 - if (card->options.cq == cq) { 3758 - rc = 0; 3759 - goto out; 3760 - } 3761 - 3762 - qeth_free_qdio_queues(card); 3763 - card->options.cq = cq; 3764 - rc = 0; 3765 - } 3766 - out: 3767 - return rc; 3768 - 3748 + card->options.cq = cq; 3749 + return 0; 3769 3750 } 3770 3751 EXPORT_SYMBOL_GPL(qeth_configure_cq); 3771 3752
+3 -1
drivers/scsi/sd.c
··· 3120 3120 { 3121 3121 struct scsi_device *sdp = sdkp->device; 3122 3122 const struct scsi_io_group_descriptor *desc, *start, *end; 3123 + u16 permanent_stream_count_old; 3123 3124 struct scsi_sense_hdr sshdr; 3124 3125 struct scsi_mode_data data; 3125 3126 int res; ··· 3141 3140 for (desc = start; desc < end; desc++) 3142 3141 if (!desc->st_enble || !sd_is_perm_stream(sdkp, desc - start)) 3143 3142 break; 3143 + permanent_stream_count_old = sdkp->permanent_stream_count; 3144 3144 sdkp->permanent_stream_count = desc - start; 3145 3145 if (sdkp->rscs && sdkp->permanent_stream_count < 2) 3146 3146 sd_printk(KERN_INFO, sdkp, 3147 3147 "Unexpected: RSCS has been set and the permanent stream count is %u\n", 3148 3148 sdkp->permanent_stream_count); 3149 - else if (sdkp->permanent_stream_count) 3149 + else if (sdkp->permanent_stream_count != permanent_stream_count_old) 3150 3150 sd_printk(KERN_INFO, sdkp, "permanent stream count = %d\n", 3151 3151 sdkp->permanent_stream_count); 3152 3152 }
+1
drivers/soc/mediatek/Kconfig
··· 72 72 tristate "MediaTek SoC Information" 73 73 default y 74 74 depends on NVMEM_MTK_EFUSE 75 + select SOC_BUS 75 76 help 76 77 The MediaTek SoC Information (mtk-socinfo) driver provides 77 78 information about the SoC to the userspace including the
+5 -2
drivers/soc/mediatek/mtk-svs.c
··· 1768 1768 const struct svs_bank_pdata *bdata; 1769 1769 struct svs_bank *svsb; 1770 1770 struct dev_pm_opp *opp; 1771 + char tz_name_buf[20]; 1771 1772 unsigned long freq; 1772 1773 int count, ret; 1773 1774 u32 idx, i; ··· 1820 1819 } 1821 1820 1822 1821 if (!IS_ERR_OR_NULL(bdata->tzone_name)) { 1823 - svsb->tzd = thermal_zone_get_zone_by_name(bdata->tzone_name); 1822 + snprintf(tz_name_buf, ARRAY_SIZE(tz_name_buf), 1823 + "%s-thermal", bdata->tzone_name); 1824 + svsb->tzd = thermal_zone_get_zone_by_name(tz_name_buf); 1824 1825 if (IS_ERR(svsb->tzd)) { 1825 1826 dev_err(svsb->dev, "cannot get \"%s\" thermal zone\n", 1826 - bdata->tzone_name); 1827 + tz_name_buf); 1827 1828 return PTR_ERR(svsb->tzd); 1828 1829 } 1829 1830 }
+15
drivers/soundwire/amd_manager.c
··· 130 130 writel(frame_size, amd_manager->mmio + ACP_SW_FRAMESIZE); 131 131 } 132 132 133 + static void amd_sdw_wake_enable(struct amd_sdw_manager *amd_manager, bool enable) 134 + { 135 + u32 wake_ctrl; 136 + 137 + wake_ctrl = readl(amd_manager->mmio + ACP_SW_STATE_CHANGE_STATUS_MASK_8TO11); 138 + if (enable) 139 + wake_ctrl |= AMD_SDW_WAKE_INTR_MASK; 140 + else 141 + wake_ctrl &= ~AMD_SDW_WAKE_INTR_MASK; 142 + 143 + writel(wake_ctrl, amd_manager->mmio + ACP_SW_STATE_CHANGE_STATUS_MASK_8TO11); 144 + } 145 + 133 146 static void amd_sdw_ctl_word_prep(u32 *lower_word, u32 *upper_word, struct sdw_msg *msg, 134 147 int cmd_offset) 135 148 { ··· 1108 1095 } 1109 1096 1110 1097 if (amd_manager->power_mode_mask & AMD_SDW_CLK_STOP_MODE) { 1098 + amd_sdw_wake_enable(amd_manager, false); 1111 1099 return amd_sdw_clock_stop(amd_manager); 1112 1100 } else if (amd_manager->power_mode_mask & AMD_SDW_POWER_OFF_MODE) { 1113 1101 /* ··· 1135 1121 return 0; 1136 1122 } 1137 1123 if (amd_manager->power_mode_mask & AMD_SDW_CLK_STOP_MODE) { 1124 + amd_sdw_wake_enable(amd_manager, true); 1138 1125 return amd_sdw_clock_stop(amd_manager); 1139 1126 } else if (amd_manager->power_mode_mask & AMD_SDW_POWER_OFF_MODE) { 1140 1127 ret = amd_sdw_clock_stop(amd_manager);
+2 -1
drivers/soundwire/amd_manager.h
··· 152 152 #define AMD_SDW0_EXT_INTR_MASK 0x200000 153 153 #define AMD_SDW1_EXT_INTR_MASK 4 154 154 #define AMD_SDW_IRQ_MASK_0TO7 0x77777777 155 - #define AMD_SDW_IRQ_MASK_8TO11 0x000d7777 155 + #define AMD_SDW_IRQ_MASK_8TO11 0x000c7777 156 156 #define AMD_SDW_IRQ_ERROR_MASK 0xff 157 157 #define AMD_SDW_MAX_FREQ_NUM 1 158 158 #define AMD_SDW0_MAX_TX_PORTS 3 ··· 190 190 #define AMD_SDW_CLK_RESUME_REQ 2 191 191 #define AMD_SDW_CLK_RESUME_DONE 3 192 192 #define AMD_SDW_WAKE_STAT_MASK BIT(16) 193 + #define AMD_SDW_WAKE_INTR_MASK BIT(16) 193 194 194 195 static u32 amd_sdw_freq_tbl[AMD_SDW_MAX_FREQ_NUM] = { 195 196 AMD_SDW_DEFAULT_CLK_FREQ,
+3 -3
drivers/vdpa/vdpa.c
··· 967 967 968 968 val_u32 = __virtio32_to_cpu(true, config->size_max); 969 969 970 - return nla_put_u32(msg, VDPA_ATTR_DEV_BLK_CFG_SEG_SIZE, val_u32); 970 + return nla_put_u32(msg, VDPA_ATTR_DEV_BLK_CFG_SIZE_MAX, val_u32); 971 971 } 972 972 973 973 /* fill the block size*/ ··· 1089 1089 u8 ro; 1090 1090 1091 1091 ro = ((features & BIT_ULL(VIRTIO_BLK_F_RO)) == 0) ? 0 : 1; 1092 - if (nla_put_u8(msg, VDPA_ATTR_DEV_BLK_CFG_READ_ONLY, ro)) 1092 + if (nla_put_u8(msg, VDPA_ATTR_DEV_BLK_READ_ONLY, ro)) 1093 1093 return -EMSGSIZE; 1094 1094 1095 1095 return 0; ··· 1100 1100 u8 flush; 1101 1101 1102 1102 flush = ((features & BIT_ULL(VIRTIO_BLK_F_FLUSH)) == 0) ? 0 : 1; 1103 - if (nla_put_u8(msg, VDPA_ATTR_DEV_BLK_CFG_FLUSH, flush)) 1103 + if (nla_put_u8(msg, VDPA_ATTR_DEV_BLK_FLUSH, flush)) 1104 1104 return -EMSGSIZE; 1105 1105 1106 1106 return 0;
+1 -1
drivers/video/fbdev/core/fb_defio.c
··· 196 196 */ 197 197 static vm_fault_t fb_deferred_io_page_mkwrite(struct fb_info *info, struct vm_fault *vmf) 198 198 { 199 - unsigned long offset = vmf->address - vmf->vma->vm_start; 199 + unsigned long offset = vmf->pgoff << PAGE_SHIFT; 200 200 struct page *page = vmf->page; 201 201 202 202 file_update_time(vmf->vma->vm_file);
+6 -5
fs/9p/v9fs.h
··· 179 179 struct inode *old_dir, struct dentry *old_dentry, 180 180 struct inode *new_dir, struct dentry *new_dentry, 181 181 unsigned int flags); 182 - extern struct inode *v9fs_fid_iget(struct super_block *sb, struct p9_fid *fid); 182 + extern struct inode *v9fs_fid_iget(struct super_block *sb, struct p9_fid *fid, 183 + bool new); 183 184 extern const struct inode_operations v9fs_dir_inode_operations_dotl; 184 185 extern const struct inode_operations v9fs_file_inode_operations_dotl; 185 186 extern const struct inode_operations v9fs_symlink_inode_operations_dotl; 186 187 extern const struct netfs_request_ops v9fs_req_ops; 187 188 extern struct inode *v9fs_fid_iget_dotl(struct super_block *sb, 188 - struct p9_fid *fid); 189 + struct p9_fid *fid, bool new); 189 190 190 191 /* other default globals */ 191 192 #define V9FS_PORT 564 ··· 225 224 */ 226 225 static inline struct inode * 227 226 v9fs_get_inode_from_fid(struct v9fs_session_info *v9ses, struct p9_fid *fid, 228 - struct super_block *sb) 227 + struct super_block *sb, bool new) 229 228 { 230 229 if (v9fs_proto_dotl(v9ses)) 231 - return v9fs_fid_iget_dotl(sb, fid); 230 + return v9fs_fid_iget_dotl(sb, fid, new); 232 231 else 233 - return v9fs_fid_iget(sb, fid); 232 + return v9fs_fid_iget(sb, fid, new); 234 233 } 235 234 236 235 #endif
+29 -8
fs/9p/vfs_inode.c
··· 364 364 clear_inode(inode); 365 365 } 366 366 367 - struct inode *v9fs_fid_iget(struct super_block *sb, struct p9_fid *fid) 367 + struct inode * 368 + v9fs_fid_iget(struct super_block *sb, struct p9_fid *fid, bool new) 368 369 { 369 370 dev_t rdev; 370 371 int retval; ··· 377 376 inode = iget_locked(sb, QID2INO(&fid->qid)); 378 377 if (unlikely(!inode)) 379 378 return ERR_PTR(-ENOMEM); 380 - if (!(inode->i_state & I_NEW)) 381 - return inode; 379 + if (!(inode->i_state & I_NEW)) { 380 + if (!new) { 381 + goto done; 382 + } else { 383 + p9_debug(P9_DEBUG_VFS, "WARNING: Inode collision %ld\n", 384 + inode->i_ino); 385 + iput(inode); 386 + remove_inode_hash(inode); 387 + inode = iget_locked(sb, QID2INO(&fid->qid)); 388 + WARN_ON(!(inode->i_state & I_NEW)); 389 + } 390 + } 382 391 383 392 /* 384 393 * initialize the inode with the stat info ··· 412 401 v9fs_set_netfs_context(inode); 413 402 v9fs_cache_inode_get_cookie(inode); 414 403 unlock_new_inode(inode); 404 + done: 415 405 return inode; 416 406 error: 417 407 iget_failed(inode); 418 408 return ERR_PTR(retval); 419 - 420 409 } 421 410 422 411 /** ··· 448 437 */ 449 438 static void v9fs_dec_count(struct inode *inode) 450 439 { 451 - if (!S_ISDIR(inode->i_mode) || inode->i_nlink > 2) 452 - drop_nlink(inode); 440 + if (!S_ISDIR(inode->i_mode) || inode->i_nlink > 2) { 441 + if (inode->i_nlink) { 442 + drop_nlink(inode); 443 + } else { 444 + p9_debug(P9_DEBUG_VFS, 445 + "WARNING: unexpected i_nlink zero %d inode %ld\n", 446 + inode->i_nlink, inode->i_ino); 447 + } 448 + } 453 449 } 454 450 455 451 /** ··· 506 488 v9fs_dec_count(dir); 507 489 } else 508 490 v9fs_dec_count(inode); 491 + 492 + if (inode->i_nlink <= 0) /* no more refs unhash it */ 493 + remove_inode_hash(inode); 509 494 510 495 v9fs_invalidate_inode_attr(inode); 511 496 v9fs_invalidate_inode_attr(dir); ··· 575 554 /* 576 555 * instantiate inode and assign the unopened fid to the dentry 577 556 */ 578 - inode = v9fs_get_inode_from_fid(v9ses, fid, dir->i_sb); 557 + inode = v9fs_get_inode_from_fid(v9ses, fid, dir->i_sb, true); 579 558 if (IS_ERR(inode)) { 580 559 err = PTR_ERR(inode); 581 560 p9_debug(P9_DEBUG_VFS, ··· 704 683 else if (IS_ERR(fid)) 705 684 inode = ERR_CAST(fid); 706 685 else 707 - inode = v9fs_get_inode_from_fid(v9ses, fid, dir->i_sb); 686 + inode = v9fs_get_inode_from_fid(v9ses, fid, dir->i_sb, false); 708 687 /* 709 688 * If we had a rename on the server and a parallel lookup 710 689 * for the new name, then make sure we instantiate with
+20 -8
fs/9p/vfs_inode_dotl.c
··· 52 52 return current_fsgid(); 53 53 } 54 54 55 - struct inode *v9fs_fid_iget_dotl(struct super_block *sb, struct p9_fid *fid) 55 + 56 + 57 + struct inode * 58 + v9fs_fid_iget_dotl(struct super_block *sb, struct p9_fid *fid, bool new) 56 59 { 57 60 int retval; 58 61 struct inode *inode; ··· 65 62 inode = iget_locked(sb, QID2INO(&fid->qid)); 66 63 if (unlikely(!inode)) 67 64 return ERR_PTR(-ENOMEM); 68 - if (!(inode->i_state & I_NEW)) 69 - return inode; 65 + if (!(inode->i_state & I_NEW)) { 66 + if (!new) { 67 + goto done; 68 + } else { /* deal with race condition in inode number reuse */ 69 + p9_debug(P9_DEBUG_ERROR, "WARNING: Inode collision %lx\n", 70 + inode->i_ino); 71 + iput(inode); 72 + remove_inode_hash(inode); 73 + inode = iget_locked(sb, QID2INO(&fid->qid)); 74 + WARN_ON(!(inode->i_state & I_NEW)); 75 + } 76 + } 70 77 71 78 /* 72 79 * initialize the inode with the stat info ··· 103 90 goto error; 104 91 105 92 unlock_new_inode(inode); 106 - 93 + done: 107 94 return inode; 108 95 error: 109 96 iget_failed(inode); 110 97 return ERR_PTR(retval); 111 - 112 98 } 113 99 114 100 struct dotl_openflag_map { ··· 259 247 p9_debug(P9_DEBUG_VFS, "p9_client_walk failed %d\n", err); 260 248 goto out; 261 249 } 262 - inode = v9fs_fid_iget_dotl(dir->i_sb, fid); 250 + inode = v9fs_fid_iget_dotl(dir->i_sb, fid, true); 263 251 if (IS_ERR(inode)) { 264 252 err = PTR_ERR(inode); 265 253 p9_debug(P9_DEBUG_VFS, "inode creation failed %d\n", err); ··· 352 340 } 353 341 354 342 /* instantiate inode and assign the unopened fid to the dentry */ 355 - inode = v9fs_fid_iget_dotl(dir->i_sb, fid); 343 + inode = v9fs_fid_iget_dotl(dir->i_sb, fid, true); 356 344 if (IS_ERR(inode)) { 357 345 err = PTR_ERR(inode); 358 346 p9_debug(P9_DEBUG_VFS, "inode creation failed %d\n", ··· 788 776 err); 789 777 goto error; 790 778 } 791 - inode = v9fs_fid_iget_dotl(dir->i_sb, fid); 779 + inode = v9fs_fid_iget_dotl(dir->i_sb, fid, true); 792 780 if (IS_ERR(inode)) { 793 781 err = PTR_ERR(inode); 794 782 p9_debug(P9_DEBUG_VFS, "inode creation failed %d\n",
+1 -1
fs/9p/vfs_super.c
··· 139 139 else 140 140 sb->s_d_op = &v9fs_dentry_operations; 141 141 142 - inode = v9fs_get_inode_from_fid(v9ses, fid, sb); 142 + inode = v9fs_get_inode_from_fid(v9ses, fid, sb, true); 143 143 if (IS_ERR(inode)) { 144 144 retval = PTR_ERR(inode); 145 145 goto release_sb;
+5 -2
fs/bcachefs/btree_node_scan.c
··· 57 57 bp->v.seq = cpu_to_le64(f->cookie); 58 58 bp->v.sectors_written = 0; 59 59 bp->v.flags = 0; 60 + bp->v.sectors_written = cpu_to_le16(f->sectors_written); 60 61 bp->v.min_key = f->min_key; 61 62 SET_BTREE_PTR_RANGE_UPDATED(&bp->v, f->range_updated); 62 63 memcpy(bp->v.start, f->ptrs, sizeof(struct bch_extent_ptr) * f->nr_ptrs); 63 64 } 64 65 65 66 static bool found_btree_node_is_readable(struct btree_trans *trans, 66 - const struct found_btree_node *f) 67 + struct found_btree_node *f) 67 68 { 68 69 struct { __BKEY_PADDED(k, BKEY_BTREE_PTR_VAL_U64s_MAX); } k; 69 70 ··· 72 71 73 72 struct btree *b = bch2_btree_node_get_noiter(trans, &k.k, f->btree_id, f->level, false); 74 73 bool ret = !IS_ERR_OR_NULL(b); 75 - if (ret) 74 + if (ret) { 75 + f->sectors_written = b->written; 76 76 six_unlock_read(&b->c.lock); 77 + } 77 78 78 79 /* 79 80 * We might update this node's range; if that happens, we need the node
+1
fs/bcachefs/btree_node_scan_types.h
··· 9 9 bool overwritten:1; 10 10 u8 btree_id; 11 11 u8 level; 12 + unsigned sectors_written; 12 13 u32 seq; 13 14 u64 cookie; 14 15
-2
fs/bcachefs/buckets.c
··· 525 525 "different types of data in same bucket: %s, %s", 526 526 bch2_data_type_str(g->data_type), 527 527 bch2_data_type_str(data_type))) { 528 - BUG(); 529 528 ret = -EIO; 530 529 goto err; 531 530 } ··· 628 629 bch2_data_type_str(ptr_data_type), 629 630 (printbuf_reset(&buf), 630 631 bch2_bkey_val_to_text(&buf, c, k), buf.buf)); 631 - BUG(); 632 632 ret = -EIO; 633 633 goto err; 634 634 }
+1 -1
fs/bcachefs/inode.c
··· 606 606 struct bkey_s new, 607 607 unsigned flags) 608 608 { 609 - s64 nr = bkey_is_inode(new.k) - bkey_is_inode(old.k); 609 + s64 nr = (s64) bkey_is_inode(new.k) - (s64) bkey_is_inode(old.k); 610 610 611 611 if (flags & BTREE_TRIGGER_TRANSACTIONAL) { 612 612 if (nr) {
+1 -1
fs/erofs/fscache.c
··· 151 151 if (WARN_ON(len == 0)) 152 152 source = NETFS_INVALID_READ; 153 153 if (source != NETFS_READ_FROM_CACHE) { 154 - erofs_err(NULL, "prepare_read failed (source %d)", source); 154 + erofs_err(NULL, "prepare_ondemand_read failed (source %d)", source); 155 155 return -EIO; 156 156 } 157 157
-7
fs/erofs/internal.h
··· 84 84 bool flatdev; 85 85 }; 86 86 87 - struct erofs_fs_context { 88 - struct erofs_mount_opts opt; 89 - struct erofs_dev_context *devs; 90 - char *fsid; 91 - char *domain_id; 92 - }; 93 - 94 87 /* all filesystem-wide lz4 configurations */ 95 88 struct erofs_sb_lz4_info { 96 89 /* # of pages needed for EROFS lz4 rolling decompression */
+55 -69
fs/erofs/super.c
··· 370 370 return ret; 371 371 } 372 372 373 - static void erofs_default_options(struct erofs_fs_context *ctx) 373 + static void erofs_default_options(struct erofs_sb_info *sbi) 374 374 { 375 375 #ifdef CONFIG_EROFS_FS_ZIP 376 - ctx->opt.cache_strategy = EROFS_ZIP_CACHE_READAROUND; 377 - ctx->opt.max_sync_decompress_pages = 3; 378 - ctx->opt.sync_decompress = EROFS_SYNC_DECOMPRESS_AUTO; 376 + sbi->opt.cache_strategy = EROFS_ZIP_CACHE_READAROUND; 377 + sbi->opt.max_sync_decompress_pages = 3; 378 + sbi->opt.sync_decompress = EROFS_SYNC_DECOMPRESS_AUTO; 379 379 #endif 380 380 #ifdef CONFIG_EROFS_FS_XATTR 381 - set_opt(&ctx->opt, XATTR_USER); 381 + set_opt(&sbi->opt, XATTR_USER); 382 382 #endif 383 383 #ifdef CONFIG_EROFS_FS_POSIX_ACL 384 - set_opt(&ctx->opt, POSIX_ACL); 384 + set_opt(&sbi->opt, POSIX_ACL); 385 385 #endif 386 386 } 387 387 ··· 426 426 static bool erofs_fc_set_dax_mode(struct fs_context *fc, unsigned int mode) 427 427 { 428 428 #ifdef CONFIG_FS_DAX 429 - struct erofs_fs_context *ctx = fc->fs_private; 429 + struct erofs_sb_info *sbi = fc->s_fs_info; 430 430 431 431 switch (mode) { 432 432 case EROFS_MOUNT_DAX_ALWAYS: 433 - set_opt(&ctx->opt, DAX_ALWAYS); 434 - clear_opt(&ctx->opt, DAX_NEVER); 433 + set_opt(&sbi->opt, DAX_ALWAYS); 434 + clear_opt(&sbi->opt, DAX_NEVER); 435 435 return true; 436 436 case EROFS_MOUNT_DAX_NEVER: 437 - set_opt(&ctx->opt, DAX_NEVER); 438 - clear_opt(&ctx->opt, DAX_ALWAYS); 437 + set_opt(&sbi->opt, DAX_NEVER); 438 + clear_opt(&sbi->opt, DAX_ALWAYS); 439 439 return true; 440 440 default: 441 441 DBG_BUGON(1); ··· 450 450 static int erofs_fc_parse_param(struct fs_context *fc, 451 451 struct fs_parameter *param) 452 452 { 453 - struct erofs_fs_context *ctx = fc->fs_private; 453 + struct erofs_sb_info *sbi = fc->s_fs_info; 454 454 struct fs_parse_result result; 455 455 struct erofs_device_info *dif; 456 456 int opt, ret; ··· 463 463 case Opt_user_xattr: 464 464 #ifdef CONFIG_EROFS_FS_XATTR 465 465 if (result.boolean) 466 - set_opt(&ctx->opt, XATTR_USER); 466 + set_opt(&sbi->opt, XATTR_USER); 467 467 else 468 - clear_opt(&ctx->opt, XATTR_USER); 468 + clear_opt(&sbi->opt, XATTR_USER); 469 469 #else 470 470 errorfc(fc, "{,no}user_xattr options not supported"); 471 471 #endif ··· 473 473 case Opt_acl: 474 474 #ifdef CONFIG_EROFS_FS_POSIX_ACL 475 475 if (result.boolean) 476 - set_opt(&ctx->opt, POSIX_ACL); 476 + set_opt(&sbi->opt, POSIX_ACL); 477 477 else 478 - clear_opt(&ctx->opt, POSIX_ACL); 478 + clear_opt(&sbi->opt, POSIX_ACL); 479 479 #else 480 480 errorfc(fc, "{,no}acl options not supported"); 481 481 #endif 482 482 break; 483 483 case Opt_cache_strategy: 484 484 #ifdef CONFIG_EROFS_FS_ZIP 485 - ctx->opt.cache_strategy = result.uint_32; 485 + sbi->opt.cache_strategy = result.uint_32; 486 486 #else 487 487 errorfc(fc, "compression not supported, cache_strategy ignored"); 488 488 #endif ··· 504 504 kfree(dif); 505 505 return -ENOMEM; 506 506 } 507 - down_write(&ctx->devs->rwsem); 508 - ret = idr_alloc(&ctx->devs->tree, dif, 0, 0, GFP_KERNEL); 509 - up_write(&ctx->devs->rwsem); 507 + down_write(&sbi->devs->rwsem); 508 + ret = idr_alloc(&sbi->devs->tree, dif, 0, 0, GFP_KERNEL); 509 + up_write(&sbi->devs->rwsem); 510 510 if (ret < 0) { 511 511 kfree(dif->path); 512 512 kfree(dif); 513 513 return ret; 514 514 } 515 - ++ctx->devs->extra_devices; 515 + ++sbi->devs->extra_devices; 516 516 break; 517 517 #ifdef CONFIG_EROFS_FS_ONDEMAND 518 518 case Opt_fsid: 519 - kfree(ctx->fsid); 520 - ctx->fsid = kstrdup(param->string, GFP_KERNEL); 521 - if (!ctx->fsid) 519 + kfree(sbi->fsid); 520 + sbi->fsid = kstrdup(param->string, GFP_KERNEL); 521 + if (!sbi->fsid) 522 522 return -ENOMEM; 523 523 break; 524 524 case Opt_domain_id: 525 - kfree(ctx->domain_id); 526 - ctx->domain_id = kstrdup(param->string, GFP_KERNEL); 527 - if (!ctx->domain_id) 525 + kfree(sbi->domain_id); 526 + sbi->domain_id = kstrdup(param->string, GFP_KERNEL); 527 + if (!sbi->domain_id) 528 528 return -ENOMEM; 529 529 break; 530 530 #else ··· 581 581 static int erofs_fc_fill_super(struct super_block *sb, struct fs_context *fc) 582 582 { 583 583 struct inode *inode; 584 - struct erofs_sb_info *sbi; 585 - struct erofs_fs_context *ctx = fc->fs_private; 584 + struct erofs_sb_info *sbi = EROFS_SB(sb); 586 585 int err; 587 586 588 587 sb->s_magic = EROFS_SUPER_MAGIC; 589 588 sb->s_flags |= SB_RDONLY | SB_NOATIME; 590 589 sb->s_maxbytes = MAX_LFS_FILESIZE; 591 590 sb->s_op = &erofs_sops; 592 - 593 - sbi = kzalloc(sizeof(*sbi), GFP_KERNEL); 594 - if (!sbi) 595 - return -ENOMEM; 596 - 597 - sb->s_fs_info = sbi; 598 - sbi->opt = ctx->opt; 599 - sbi->devs = ctx->devs; 600 - ctx->devs = NULL; 601 - sbi->fsid = ctx->fsid; 602 - ctx->fsid = NULL; 603 - sbi->domain_id = ctx->domain_id; 604 - ctx->domain_id = NULL; 605 591 606 592 sbi->blkszbits = PAGE_SHIFT; 607 593 if (erofs_is_fscache_mode(sb)) { ··· 692 706 693 707 static int erofs_fc_get_tree(struct fs_context *fc) 694 708 { 695 - struct erofs_fs_context *ctx = fc->fs_private; 709 + struct erofs_sb_info *sbi = fc->s_fs_info; 696 710 697 - if (IS_ENABLED(CONFIG_EROFS_FS_ONDEMAND) && ctx->fsid) 711 + if (IS_ENABLED(CONFIG_EROFS_FS_ONDEMAND) && sbi->fsid) 698 712 return get_tree_nodev(fc, erofs_fc_fill_super); 699 713 700 714 return get_tree_bdev(fc, erofs_fc_fill_super); ··· 704 718 { 705 719 struct super_block *sb = fc->root->d_sb; 706 720 struct erofs_sb_info *sbi = EROFS_SB(sb); 707 - struct erofs_fs_context *ctx = fc->fs_private; 721 + struct erofs_sb_info *new_sbi = fc->s_fs_info; 708 722 709 723 DBG_BUGON(!sb_rdonly(sb)); 710 724 711 - if (ctx->fsid || ctx->domain_id) 725 + if (new_sbi->fsid || new_sbi->domain_id) 712 726 erofs_info(sb, "ignoring reconfiguration for fsid|domain_id."); 713 727 714 - if (test_opt(&ctx->opt, POSIX_ACL)) 728 + if (test_opt(&new_sbi->opt, POSIX_ACL)) 715 729 fc->sb_flags |= SB_POSIXACL; 716 730 else 717 731 fc->sb_flags &= ~SB_POSIXACL; 718 732 719 - sbi->opt = ctx->opt; 733 + sbi->opt = new_sbi->opt; 720 734 721 735 fc->sb_flags |= SB_RDONLY; 722 736 return 0; ··· 747 761 748 762 static void erofs_fc_free(struct fs_context *fc) 749 763 { 750 - struct erofs_fs_context *ctx = fc->fs_private; 764 + struct erofs_sb_info *sbi = fc->s_fs_info; 751 765 752 - erofs_free_dev_context(ctx->devs); 753 - kfree(ctx->fsid); 754 - kfree(ctx->domain_id); 755 - kfree(ctx); 766 + if (!sbi) 767 + return; 768 + 769 + erofs_free_dev_context(sbi->devs); 770 + kfree(sbi->fsid); 771 + kfree(sbi->domain_id); 772 + kfree(sbi); 756 773 } 757 774 758 775 static const struct fs_context_operations erofs_context_ops = { ··· 767 778 768 779 static int erofs_init_fs_context(struct fs_context *fc) 769 780 { 770 - struct erofs_fs_context *ctx; 781 + struct erofs_sb_info *sbi; 771 782 772 - ctx = kzalloc(sizeof(*ctx), GFP_KERNEL); 773 - if (!ctx) 783 + sbi = kzalloc(sizeof(*sbi), GFP_KERNEL); 784 + if (!sbi) 774 785 return -ENOMEM; 775 - ctx->devs = kzalloc(sizeof(struct erofs_dev_context), GFP_KERNEL); 776 - if (!ctx->devs) { 777 - kfree(ctx); 786 + 787 + sbi->devs = kzalloc(sizeof(struct erofs_dev_context), GFP_KERNEL); 788 + if (!sbi->devs) { 789 + kfree(sbi); 778 790 + return -ENOMEM; 779 791 } 780 - fc->fs_private = ctx; 792 + fc->s_fs_info = sbi; 781 793 782 - idr_init(&ctx->devs->tree); 783 - init_rwsem(&ctx->devs->rwsem); 784 - erofs_default_options(ctx); 794 + idr_init(&sbi->devs->tree); 795 + init_rwsem(&sbi->devs->rwsem); 796 + erofs_default_options(sbi); 785 797 fc->ops = &erofs_context_ops; 786 798 return 0; 787 799 } 788 800 789 801 static void erofs_kill_sb(struct super_block *sb) 790 802 { 791 - struct erofs_sb_info *sbi; 803 + struct erofs_sb_info *sbi = EROFS_SB(sb); 792 804 793 - if (erofs_is_fscache_mode(sb)) 805 + if (IS_ENABLED(CONFIG_EROFS_FS_ONDEMAND) && sbi->fsid) 794 806 kill_anon_super(sb); 795 807 else 796 808 kill_block_super(sb); 797 - 798 - sbi = EROFS_SB(sb); 799 - if (!sbi) 800 - return; 801 809 802 810 erofs_free_dev_context(sbi->devs); 803 811 fs_put_dax(sbi->dax_dev, NULL);
+2 -2
fs/ioctl.c
··· 769 769 struct fsuuid2 u = { .len = sb->s_uuid_len, }; 770 770 771 771 if (!sb->s_uuid_len) 772 - return -ENOIOCTLCMD; 772 + return -ENOTTY; 773 773 774 774 memcpy(&u.uuid[0], &sb->s_uuid, sb->s_uuid_len); 775 775 ··· 781 781 struct super_block *sb = file_inode(file)->i_sb; 782 782 783 783 if (!strlen(sb->s_sysfs_name)) 784 - return -ENOIOCTLCMD; 784 + return -ENOTTY; 785 785 786 786 struct fs_sysfs_path u = {}; 787 787
+12 -11
fs/netfs/buffered_write.c
··· 164 164 enum netfs_how_to_modify howto; 165 165 enum netfs_folio_trace trace; 166 166 unsigned int bdp_flags = (iocb->ki_flags & IOCB_SYNC) ? 0: BDP_ASYNC; 167 - ssize_t written = 0, ret; 167 + ssize_t written = 0, ret, ret2; 168 168 loff_t i_size, pos = iocb->ki_pos, from, to; 169 169 size_t max_chunk = PAGE_SIZE << MAX_PAGECACHE_ORDER; 170 170 bool maybe_trouble = false; ··· 172 172 if (unlikely(test_bit(NETFS_ICTX_WRITETHROUGH, &ctx->flags) || 173 173 iocb->ki_flags & (IOCB_DSYNC | IOCB_SYNC)) 174 174 ) { 175 - if (pos < i_size_read(inode)) { 176 - ret = filemap_write_and_wait_range(mapping, pos, pos + iter->count); 177 - if (ret < 0) { 178 - goto out; 179 - } 180 - } 181 - 182 175 wbc_attach_fdatawrite_inode(&wbc, mapping->host); 176 + 177 + ret = filemap_write_and_wait_range(mapping, pos, pos + iter->count); 178 + if (ret < 0) { 179 + wbc_detach_inode(&wbc); 180 + goto out; 181 + } 183 182 184 183 wreq = netfs_begin_writethrough(iocb, iter->count); 185 184 if (IS_ERR(wreq)) { ··· 394 395 395 396 out: 396 397 if (unlikely(wreq)) { 397 - ret = netfs_end_writethrough(wreq, iocb); 398 + ret2 = netfs_end_writethrough(wreq, iocb); 398 399 wbc_detach_inode(&wbc); 399 - if (ret == -EIOCBQUEUED) 400 - return ret; 400 + if (ret2 == -EIOCBQUEUED) 401 + return ret2; 402 + if (ret == 0) 403 + ret = ret2; 401 404 } 402 405 403 406 iocb->ki_pos += written;
+6 -1
fs/nfs/inode.c
··· 2429 2429 struct nfs_net *nn = net_generic(net, nfs_net_id); 2430 2430 2431 2431 nfs_clients_init(net); 2432 - rpc_proc_register(net, &nn->rpcstats); 2432 + 2433 + if (!rpc_proc_register(net, &nn->rpcstats)) { 2434 + nfs_clients_exit(net); 2435 + return -ENOMEM; 2436 + } 2437 + 2433 2438 return nfs_fs_proc_net_init(net); 2434 2439 } 2435 2440
+1 -1
fs/nfsd/nfs4xdr.c
··· 3515 3515 args.exp = exp; 3516 3516 args.dentry = dentry; 3517 3517 args.ignore_crossmnt = (ignore_crossmnt != 0); 3518 + args.acl = NULL; 3518 3519 3519 3520 /* 3520 3521 * Make a local copy of the attribute bitmap that can be modified. ··· 3574 3573 } else 3575 3574 args.fhp = fhp; 3576 3575 3577 - args.acl = NULL; 3578 3576 if (attrmask[0] & FATTR4_WORD0_ACL) { 3579 3577 err = nfsd4_get_nfs4_acl(rqstp, dentry, &args.acl); 3580 3578 if (err == -EOPNOTSUPP)
+9
fs/ntfs3/Kconfig
··· 46 46 NOTE: this is linux only feature. Windows will ignore these ACLs. 47 47 48 48 If you don't know what Access Control Lists are, say N. 49 + 50 + config NTFS_FS 51 + tristate "NTFS file system support" 52 + select NTFS3_FS 53 + select BUFFER_HEAD 54 + select NLS 55 + help 56 + This config option is here only for backward compatibility. NTFS 57 + filesystem is now handled by the NTFS3 driver.
+7
fs/ntfs3/dir.c
··· 616 616 .compat_ioctl = ntfs_compat_ioctl, 617 617 #endif 618 618 }; 619 + 620 + const struct file_operations ntfs_legacy_dir_operations = { 621 + .llseek = generic_file_llseek, 622 + .read = generic_read_dir, 623 + .iterate_shared = ntfs_readdir, 624 + .open = ntfs_file_open, 625 + }; 619 626 // clang-format on
+8
fs/ntfs3/file.c
··· 1236 1236 .fallocate = ntfs_fallocate, 1237 1237 .release = ntfs_file_release, 1238 1238 }; 1239 + 1240 + const struct file_operations ntfs_legacy_file_operations = { 1241 + .llseek = generic_file_llseek, 1242 + .read_iter = ntfs_file_read_iter, 1243 + .splice_read = ntfs_file_splice_read, 1244 + .open = ntfs_file_open, 1245 + .release = ntfs_file_release, 1246 + }; 1239 1247 // clang-format on
+16 -4
fs/ntfs3/inode.c
··· 440 440 * Usually a hard links to directories are disabled. 441 441 */ 442 442 inode->i_op = &ntfs_dir_inode_operations; 443 - inode->i_fop = &ntfs_dir_operations; 443 + if (is_legacy_ntfs(inode->i_sb)) 444 + inode->i_fop = &ntfs_legacy_dir_operations; 445 + else 446 + inode->i_fop = &ntfs_dir_operations; 444 447 ni->i_valid = 0; 445 448 } else if (S_ISLNK(mode)) { 446 449 ni->std_fa &= ~FILE_ATTRIBUTE_DIRECTORY; ··· 453 450 } else if (S_ISREG(mode)) { 454 451 ni->std_fa &= ~FILE_ATTRIBUTE_DIRECTORY; 455 452 inode->i_op = &ntfs_file_inode_operations; 456 - inode->i_fop = &ntfs_file_operations; 453 + if (is_legacy_ntfs(inode->i_sb)) 454 + inode->i_fop = &ntfs_legacy_file_operations; 455 + else 456 + inode->i_fop = &ntfs_file_operations; 457 457 inode->i_mapping->a_ops = is_compressed(ni) ? &ntfs_aops_cmpr : 458 458 &ntfs_aops; 459 459 if (ino != MFT_REC_MFT) ··· 1620 1614 1621 1615 if (S_ISDIR(mode)) { 1622 1616 inode->i_op = &ntfs_dir_inode_operations; 1623 - inode->i_fop = &ntfs_dir_operations; 1617 + if (is_legacy_ntfs(inode->i_sb)) 1618 + inode->i_fop = &ntfs_legacy_dir_operations; 1619 + else 1620 + inode->i_fop = &ntfs_dir_operations; 1624 1621 } else if (S_ISLNK(mode)) { 1625 1622 inode->i_op = &ntfs_link_inode_operations; 1626 1623 inode->i_fop = NULL; ··· 1632 1623 inode_nohighmem(inode); 1633 1624 } else if (S_ISREG(mode)) { 1634 1625 inode->i_op = &ntfs_file_inode_operations; 1635 - inode->i_fop = &ntfs_file_operations; 1626 + if (is_legacy_ntfs(inode->i_sb)) 1627 + inode->i_fop = &ntfs_legacy_file_operations; 1628 + else 1629 + inode->i_fop = &ntfs_file_operations; 1636 1630 inode->i_mapping->a_ops = is_compressed(ni) ? &ntfs_aops_cmpr : 1637 1631 &ntfs_aops; 1638 1632 init_rwsem(&ni->file.run_lock);
+4
fs/ntfs3/ntfs_fs.h
··· 493 493 struct ntfs_fnd *fnd); 494 494 bool dir_is_empty(struct inode *dir); 495 495 extern const struct file_operations ntfs_dir_operations; 496 + extern const struct file_operations ntfs_legacy_dir_operations; 496 497 497 498 /* Globals from file.c */ 498 499 int ntfs_getattr(struct mnt_idmap *idmap, const struct path *path, ··· 508 507 extern const struct inode_operations ntfs_special_inode_operations; 509 508 extern const struct inode_operations ntfs_file_inode_operations; 510 509 extern const struct file_operations ntfs_file_operations; 510 + extern const struct file_operations ntfs_legacy_file_operations; 511 511 512 512 /* Globals from frecord.c */ 513 513 void ni_remove_mi(struct ntfs_inode *ni, struct mft_inode *mi); ··· 1155 1153 { 1156 1154 *var = cpu_to_le64(le64_to_cpu(*var) - val); 1157 1155 } 1156 + 1157 + bool is_legacy_ntfs(struct super_block *sb); 1158 1158 1159 1159 #endif /* _LINUX_NTFS3_NTFS_FS_H */
+62 -3
fs/ntfs3/super.c
··· 408 408 struct ntfs_mount_options *new_opts = fc->fs_private; 409 409 int ro_rw; 410 410 411 + /* If ntfs3 is used as legacy ntfs enforce read-only mode. */ 412 + if (is_legacy_ntfs(sb)) { 413 + fc->sb_flags |= SB_RDONLY; 414 + goto out; 415 + } 416 + 411 417 ro_rw = sb_rdonly(sb) && !(fc->sb_flags & SB_RDONLY); 412 418 if (ro_rw && (sbi->flags & NTFS_FLAGS_NEED_REPLAY)) { 413 419 errorf(fc, ··· 433 427 fc, 434 428 "ntfs3: Cannot use different iocharset when remounting!"); 435 429 436 - sync_filesystem(sb); 437 - 438 430 if (ro_rw && (sbi->volume.flags & VOLUME_FLAG_DIRTY) && 439 431 !new_opts->force) { 440 432 errorf(fc, ··· 440 436 return -EINVAL; 441 437 } 442 438 439 + out: 440 + sync_filesystem(sb); 443 441 swap(sbi->options, fc->fs_private); 444 442 445 443 return 0; ··· 1619 1613 } 1620 1614 #endif 1621 1615 1616 + if (is_legacy_ntfs(sb)) 1617 + sb->s_flags |= SB_RDONLY; 1622 1618 return 0; 1623 1619 1624 1620 put_inode_out: ··· 1738 1730 * This will called when mount/remount. We will first initialize 1739 1731 * options so that if remount we can use just that. 1740 1732 */ 1741 - static int ntfs_init_fs_context(struct fs_context *fc) 1733 + static int __ntfs_init_fs_context(struct fs_context *fc) 1742 1734 { 1743 1735 struct ntfs_mount_options *opts; 1744 1736 struct ntfs_sb_info *sbi; ··· 1786 1778 return -ENOMEM; 1787 1779 } 1788 1780 1781 + static int ntfs_init_fs_context(struct fs_context *fc) 1782 + { 1783 + return __ntfs_init_fs_context(fc); 1784 + } 1785 + 1789 1786 static void ntfs3_kill_sb(struct super_block *sb) 1790 1787 { 1791 1788 struct ntfs_sb_info *sbi = sb->s_fs_info; ··· 1811 1798 .kill_sb = ntfs3_kill_sb, 1812 1799 .fs_flags = FS_REQUIRES_DEV | FS_ALLOW_IDMAP, 1813 1800 }; 1801 + 1802 + #if IS_ENABLED(CONFIG_NTFS_FS) 1803 + static int ntfs_legacy_init_fs_context(struct fs_context *fc) 1804 + { 1805 + int ret; 1806 + 1807 + ret = __ntfs_init_fs_context(fc); 1808 + /* If ntfs3 is used as legacy ntfs enforce read-only mode. */ 1809 + fc->sb_flags |= SB_RDONLY; 1810 + return ret; 1811 + } 1812 + 1813 + static struct file_system_type ntfs_legacy_fs_type = { 1814 + .owner = THIS_MODULE, 1815 + .name = "ntfs", 1816 + .init_fs_context = ntfs_legacy_init_fs_context, 1817 + .parameters = ntfs_fs_parameters, 1818 + .kill_sb = ntfs3_kill_sb, 1819 + .fs_flags = FS_REQUIRES_DEV | FS_ALLOW_IDMAP, 1820 + }; 1821 + MODULE_ALIAS_FS("ntfs"); 1822 + 1823 + static inline void register_as_ntfs_legacy(void) 1824 + { 1825 + int err = register_filesystem(&ntfs_legacy_fs_type); 1826 + if (err) 1827 + pr_warn("ntfs3: Failed to register legacy ntfs filesystem driver: %d\n", err); 1828 + } 1829 + 1830 + static inline void unregister_as_ntfs_legacy(void) 1831 + { 1832 + unregister_filesystem(&ntfs_legacy_fs_type); 1833 + } 1834 + bool is_legacy_ntfs(struct super_block *sb) 1835 + { 1836 + return sb->s_type == &ntfs_legacy_fs_type; 1837 + } 1838 + #else 1839 + static inline void register_as_ntfs_legacy(void) {} 1840 + static inline void unregister_as_ntfs_legacy(void) {} 1841 + bool is_legacy_ntfs(struct super_block *sb) { return false; } 1842 + #endif 1843 + 1844 + 1814 1845 // clang-format on 1815 1846 1816 1847 static int __init init_ntfs_fs(void) ··· 1889 1832 goto out1; 1890 1833 } 1891 1834 1835 + register_as_ntfs_legacy(); 1892 1836 err = register_filesystem(&ntfs_fs_type); 1893 1837 if (err) 1894 1838 goto out; ··· 1907 1849 rcu_barrier(); 1908 1850 kmem_cache_destroy(ntfs_inode_cachep); 1909 1851 unregister_filesystem(&ntfs_fs_type); 1852 + unregister_as_ntfs_legacy(); 1910 1853 ntfs3_exit_bitmap(); 1911 1854 1912 1855 #ifdef CONFIG_PROC_FS
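The driver now registers twice: once as "ntfs3" and, when CONFIG_NTFS_FS is enabled, once under the legacy "ntfs" name, with MODULE_ALIAS_FS so mount -t ntfs autoloads it; both reconfigure and fill_super then force SB_RDONLY for the legacy type. The type test itself is just a pointer comparison against the second file_system_type. A minimal, runnable userspace model of that dispatch (the structs are radically simplified stand-ins, only the names are kept for orientation):

#include <stdbool.h>
#include <stdio.h>

struct file_system_type { const char *name; };
struct super_block { const struct file_system_type *s_type; };

static const struct file_system_type ntfs_fs_type = { "ntfs3" };
static const struct file_system_type ntfs_legacy_fs_type = { "ntfs" };

static bool is_legacy_ntfs(const struct super_block *sb)
{
        /* One driver, two registered types: compare the type pointer. */
        return sb->s_type == &ntfs_legacy_fs_type;
}

int main(void)
{
        struct super_block a = { &ntfs_fs_type };
        struct super_block b = { &ntfs_legacy_fs_type };

        printf("%s: legacy=%d\n", a.s_type->name, is_legacy_ntfs(&a));
        printf("%s: legacy=%d\n", b.s_type->name, is_legacy_ntfs(&b));
        return 0;
}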
+2 -5
fs/proc/page.c
··· 67 67 */ 68 68 ppage = pfn_to_online_page(pfn); 69 69 70 - if (!ppage || PageSlab(ppage) || page_has_type(ppage)) 70 + if (!ppage) 71 71 pcount = 0; 72 72 else 73 73 pcount = page_mapcount(ppage); ··· 124 124 125 125 /* 126 126 * pseudo flags for the well known (anonymous) memory mapped pages 127 - * 128 - * Note that page->_mapcount is overloaded in SLAB, so the 129 - * simple test in page_mapped() is not enough. 130 127 */ 131 - if (!PageSlab(page) && page_mapped(page)) 128 + if (page_mapped(page)) 132 129 u |= 1 << KPF_MMAP; 133 130 if (PageAnon(page)) 134 131 u |= 1 << KPF_ANON;
+2 -2
fs/smb/client/cifspdu.h
··· 882 882 __u8 OplockLevel; 883 883 __u16 Fid; 884 884 __le32 CreateAction; 885 - struct_group(common_attributes, 885 + struct_group_attr(common_attributes, __packed, 886 886 __le64 CreationTime; 887 887 __le64 LastAccessTime; 888 888 __le64 LastWriteTime; ··· 2266 2266 /* QueryFileInfo/QueryPathinfo (also for SetPath/SetFile) data buffer formats */ 2267 2267 /******************************************************************************/ 2268 2268 typedef struct { /* data block encoding of response to level 263 QPathInfo */ 2269 - struct_group(common_attributes, 2269 + struct_group_attr(common_attributes, __packed, 2270 2270 __le64 CreationTime; 2271 2271 __le64 LastAccessTime; 2272 2272 __le64 LastWriteTime;
+1 -1
fs/smb/client/smb2pdu.h
··· 320 320 } __packed; 321 321 322 322 struct smb2_file_network_open_info { 323 - struct_group(network_open_info, 323 + struct_group_attr(network_open_info, __packed, 324 324 __le64 CreationTime; 325 325 __le64 LastAccessTime; 326 326 __le64 LastWriteTime;
+6 -1
fs/smb/client/transport.c
··· 909 909 list_del_init(&mid->qhead); 910 910 mid->mid_flags |= MID_DELETED; 911 911 } 912 + spin_unlock(&server->mid_lock); 912 913 cifs_server_dbg(VFS, "%s: invalid mid state mid=%llu state=%d\n", 913 914 __func__, mid->mid, mid->mid_state); 914 915 rc = -EIO; 916 + goto sync_mid_done; 915 917 } 916 918 spin_unlock(&server->mid_lock); 917 919 920 + sync_mid_done: 918 921 release_mid(mid); 919 922 return rc; 920 923 } ··· 1060 1057 index = (uint)atomic_inc_return(&ses->chan_seq); 1061 1058 index %= ses->chan_count; 1062 1059 } 1060 + 1061 + server = ses->chans[index].server; 1063 1062 spin_unlock(&ses->chan_lock); 1064 1063 1065 - return ses->chans[index].server; 1064 + return server; 1066 1065 } 1067 1066 1068 1067 int
+11
include/linux/cpu.h
··· 221 221 static inline void cpuhp_report_idle_dead(void) { } 222 222 #endif /* #ifdef CONFIG_HOTPLUG_CPU */ 223 223 224 + #ifdef CONFIG_CPU_MITIGATIONS 224 225 extern bool cpu_mitigations_off(void); 225 226 extern bool cpu_mitigations_auto_nosmt(void); 227 + #else 228 + static inline bool cpu_mitigations_off(void) 229 + { 230 + return true; 231 + } 232 + static inline bool cpu_mitigations_auto_nosmt(void) 233 + { 234 + return false; 235 + } 236 + #endif 226 237 227 238 #endif /* _LINUX_CPU_H_ */
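Note the polarity of the new stubs: without CONFIG_CPU_MITIGATIONS there are no mitigations built in at all, so cpu_mitigations_off() reports true and cpu_mitigations_auto_nosmt() false. A small runnable userspace model of this config-gated stub pattern (compile with and without -DCPU_MITIGATIONS to see both sides; the macro name mirrors the Kconfig symbol, the rest is illustrative):

#include <stdbool.h>
#include <stdio.h>

#ifdef CPU_MITIGATIONS
/* "Built with mitigations": the real kernel consults runtime state
 * set from the mitigations= command line here. */
static bool cpu_mitigations_off(void) { return false; }
#else
/* Built without mitigations: everything is unconditionally off. */
static bool cpu_mitigations_off(void) { return true; }
#endif

int main(void)
{
        printf("cpu_mitigations_off() = %d\n", cpu_mitigations_off());
        return 0;
}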
+1
include/linux/filter.h
··· 1001 1001 bool bpf_jit_supports_ptr_xchg(void); 1002 1002 bool bpf_jit_supports_arena(void); 1003 1003 bool bpf_jit_supports_insn(struct bpf_insn *insn, bool in_arena); 1004 + u64 bpf_arch_uaddress_limit(void); 1004 1005 void arch_bpf_stack_walk(bool (*consume_fn)(void *cookie, u64 ip, u64 sp, u64 bp), void *cookie); 1005 1006 bool bpf_helper_changes_pkt_data(void *func); 1006 1007
+51 -4
include/linux/firmware/qcom/qcom_qseecom.h
··· 10 10 #define __QCOM_QSEECOM_H 11 11 12 12 #include <linux/auxiliary_bus.h> 13 + #include <linux/dma-mapping.h> 13 14 #include <linux/types.h> 14 15 15 16 #include <linux/firmware/qcom/qcom_scm.h> ··· 26 25 }; 27 26 28 27 /** 28 + * qseecom_scm_dev() - Get the SCM device associated with the QSEECOM client. 29 + * @client: The QSEECOM client device. 30 + * 31 + * Returns the SCM device under which the provided QSEECOM client device 32 + * operates. This function is intended to be used for DMA allocations. 33 + */ 34 + static inline struct device *qseecom_scm_dev(struct qseecom_client *client) 35 + { 36 + return client->aux_dev.dev.parent->parent; 37 + } 38 + 39 + /** 40 + * qseecom_dma_alloc() - Allocate DMA memory for a QSEECOM client. 41 + * @client: The QSEECOM client to allocate the memory for. 42 + * @size: The number of bytes to allocate. 43 + * @dma_handle: Pointer to where the DMA address should be stored. 44 + * @gfp: Allocation flags. 45 + * 46 + * Wrapper function for dma_alloc_coherent(), allocating DMA memory usable for 47 + * TZ/QSEECOM communication. Refer to dma_alloc_coherent() for details. 48 + */ 49 + static inline void *qseecom_dma_alloc(struct qseecom_client *client, size_t size, 50 + dma_addr_t *dma_handle, gfp_t gfp) 51 + { 52 + return dma_alloc_coherent(qseecom_scm_dev(client), size, dma_handle, gfp); 53 + } 54 + 55 + /** 56 + * dma_free_coherent() - Free QSEECOM DMA memory. 57 + * @client: The QSEECOM client for which the memory has been allocated. 58 + * @size: The number of bytes allocated. 59 + * @cpu_addr: Virtual memory address to free. 60 + * @dma_handle: DMA memory address to free. 61 + * 62 + * Wrapper function for dma_free_coherent(), freeing memory previously 63 + * allocated with qseecom_dma_alloc(). Refer to dma_free_coherent() for 64 + * details. 65 + */ 66 + static inline void qseecom_dma_free(struct qseecom_client *client, size_t size, 67 + void *cpu_addr, dma_addr_t dma_handle) 68 + { 69 + return dma_free_coherent(qseecom_scm_dev(client), size, cpu_addr, dma_handle); 70 + } 71 + 72 + /** 29 73 * qcom_qseecom_app_send() - Send to and receive data from a given QSEE app. 30 74 * @client: The QSEECOM client associated with the target app. 31 - * @req: Request buffer sent to the app (must be DMA-mappable). 75 + * @req: DMA address of the request buffer sent to the app. 32 76 * @req_size: Size of the request buffer. 33 - * @rsp: Response buffer, written to by the app (must be DMA-mappable). 77 + * @rsp: DMA address of the response buffer, written to by the app. 34 78 * @rsp_size: Size of the response buffer. 35 79 * 36 80 * Sends a request to the QSEE app associated with the given client and read ··· 89 43 * 90 44 * Return: Zero on success, nonzero on failure. 91 45 */ 92 - static inline int qcom_qseecom_app_send(struct qseecom_client *client, void *req, size_t req_size, 93 - void *rsp, size_t rsp_size) 46 + static inline int qcom_qseecom_app_send(struct qseecom_client *client, 47 + dma_addr_t req, size_t req_size, 48 + dma_addr_t rsp, size_t rsp_size) 94 49 { 95 50 return qcom_scm_qseecom_app_send(client->app_id, req, req_size, rsp, rsp_size); 96 51 }
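Callers now pass DMA addresses rather than merely DMA-mappable kernel buffers. A kernel-context sketch of the new calling convention (not compilable standalone; the function name, buffer sizes, and request layout are illustrative, while the four calls follow the signatures added above):

static int example_qseecom_call(struct qseecom_client *client,
                                size_t req_len, size_t rsp_len)
{
        dma_addr_t req_dma, rsp_dma;
        void *req, *rsp;
        int ret;

        req = qseecom_dma_alloc(client, req_len, &req_dma, GFP_KERNEL);
        if (!req)
                return -ENOMEM;
        rsp = qseecom_dma_alloc(client, rsp_len, &rsp_dma, GFP_KERNEL);
        if (!rsp) {
                ret = -ENOMEM;
                goto free_req;
        }

        /* ... fill *req with the app-specific request ... */

        ret = qcom_qseecom_app_send(client, req_dma, req_len, rsp_dma, rsp_len);

        /* ... on success, parse *rsp ... */

        qseecom_dma_free(client, rsp_len, rsp, rsp_dma);
free_req:
        qseecom_dma_free(client, req_len, req, req_dma);
        return ret;
}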
+5 -5
include/linux/firmware/qcom/qcom_scm.h
··· 118 118 #ifdef CONFIG_QCOM_QSEECOM 119 119 120 120 int qcom_scm_qseecom_app_get_id(const char *app_name, u32 *app_id); 121 - int qcom_scm_qseecom_app_send(u32 app_id, void *req, size_t req_size, void *rsp, 122 - size_t rsp_size); 121 + int qcom_scm_qseecom_app_send(u32 app_id, dma_addr_t req, size_t req_size, 122 + dma_addr_t rsp, size_t rsp_size); 123 123 124 124 #else /* CONFIG_QCOM_QSEECOM */ 125 125 ··· 128 128 return -EINVAL; 129 129 } 130 130 131 - static inline int qcom_scm_qseecom_app_send(u32 app_id, void *req, 132 - size_t req_size, void *rsp, 133 - size_t rsp_size) 131 + static inline int qcom_scm_qseecom_app_send(u32 app_id, 132 + dma_addr_t req, size_t req_size, 133 + dma_addr_t rsp, size_t rsp_size) 134 134 { 135 135 return -EINVAL; 136 136 }
+5 -3
include/linux/mm.h
··· 1223 1223 * a large folio, it includes the number of times this page is mapped 1224 1224 * as part of that folio. 1225 1225 * 1226 - * The result is undefined for pages which cannot be mapped into userspace. 1227 - * For example SLAB or special types of pages. See function page_has_type(). 1228 - * They use this field in struct page differently. 1226 + * Will report 0 for pages which cannot be mapped into userspace, eg 1227 + * slab, page tables and similar. 1229 1228 */ 1230 1229 static inline int page_mapcount(struct page *page) 1231 1230 { 1232 1231 int mapcount = atomic_read(&page->_mapcount) + 1; 1233 1232 1233 + /* Handle page_has_type() pages */ 1234 + if (mapcount < 0) 1235 + mapcount = 0; 1234 1236 if (unlikely(PageCompound(page))) 1235 1237 mapcount += folio_entire_mapcount(page_folio(page)); 1236 1238
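The new clamp works because _mapcount is stored biased by -1, and page_type overlays the same word with values whose top nibble is 0xf, so any typed page reads back as a large negative mapcount. A runnable userspace model (raw values illustrative, two's-complement assumed):

#include <stdio.h>

/* Model of page_mapcount(): _mapcount is biased by -1; page_has_type()
 * pages keep 0xf-prefixed values there, which go negative after the bias
 * and are clamped to 0. */
static int page_mapcount_model(int raw_mapcount)
{
        int mapcount = raw_mapcount + 1;

        if (mapcount < 0)       /* page_has_type() page */
                mapcount = 0;
        return mapcount;
}

int main(void)
{
        printf("%d\n", page_mapcount_model(-1));               /* unmapped: 0 */
        printf("%d\n", page_mapcount_model(2));                /* mapped 3 times */
        printf("%d\n", page_mapcount_model((int)0xfffff7ffu)); /* hugetlb page_type overlay: 0 */
        return 0;
}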
+83 -63
include/linux/page-flags.h
··· 190 190 191 191 /* At least one page in this folio has the hwpoison flag set */ 192 192 PG_has_hwpoisoned = PG_error, 193 - PG_hugetlb = PG_active, 194 193 PG_large_rmappable = PG_workingset, /* anon or file-backed */ 195 194 }; 196 195 ··· 457 458 TESTSETFLAG(uname, lname, policy) \ 458 459 TESTCLEARFLAG(uname, lname, policy) 459 460 461 + #define FOLIO_TEST_FLAG_FALSE(name) \ 462 + static inline bool folio_test_##name(const struct folio *folio) \ 463 + { return false; } 464 + #define FOLIO_SET_FLAG_NOOP(name) \ 465 + static inline void folio_set_##name(struct folio *folio) { } 466 + #define FOLIO_CLEAR_FLAG_NOOP(name) \ 467 + static inline void folio_clear_##name(struct folio *folio) { } 468 + #define __FOLIO_SET_FLAG_NOOP(name) \ 469 + static inline void __folio_set_##name(struct folio *folio) { } 470 + #define __FOLIO_CLEAR_FLAG_NOOP(name) \ 471 + static inline void __folio_clear_##name(struct folio *folio) { } 472 + #define FOLIO_TEST_SET_FLAG_FALSE(name) \ 473 + static inline bool folio_test_set_##name(struct folio *folio) \ 474 + { return false; } 475 + #define FOLIO_TEST_CLEAR_FLAG_FALSE(name) \ 476 + static inline bool folio_test_clear_##name(struct folio *folio) \ 477 + { return false; } 478 + 479 + #define FOLIO_FLAG_FALSE(name) \ 480 + FOLIO_TEST_FLAG_FALSE(name) \ 481 + FOLIO_SET_FLAG_NOOP(name) \ 482 + FOLIO_CLEAR_FLAG_NOOP(name) 483 + 460 484 #define TESTPAGEFLAG_FALSE(uname, lname) \ 461 - static inline bool folio_test_##lname(const struct folio *folio) { return false; } \ 485 + FOLIO_TEST_FLAG_FALSE(lname) \ 462 486 static inline int Page##uname(const struct page *page) { return 0; } 463 487 464 488 #define SETPAGEFLAG_NOOP(uname, lname) \ 465 - static inline void folio_set_##lname(struct folio *folio) { } \ 489 + FOLIO_SET_FLAG_NOOP(lname) \ 466 490 static inline void SetPage##uname(struct page *page) { } 467 491 468 492 #define CLEARPAGEFLAG_NOOP(uname, lname) \ 469 - static inline void folio_clear_##lname(struct folio *folio) { } \ 493 + FOLIO_CLEAR_FLAG_NOOP(lname) \ 470 494 static inline void ClearPage##uname(struct page *page) { } 471 495 472 496 #define __CLEARPAGEFLAG_NOOP(uname, lname) \ 473 - static inline void __folio_clear_##lname(struct folio *folio) { } \ 497 + __FOLIO_CLEAR_FLAG_NOOP(lname) \ 474 498 static inline void __ClearPage##uname(struct page *page) { } 475 499 476 500 #define TESTSETFLAG_FALSE(uname, lname) \ 477 - static inline bool folio_test_set_##lname(struct folio *folio) \ 478 - { return 0; } \ 501 + FOLIO_TEST_SET_FLAG_FALSE(lname) \ 479 502 static inline int TestSetPage##uname(struct page *page) { return 0; } 480 503 481 504 #define TESTCLEARFLAG_FALSE(uname, lname) \ 482 - static inline bool folio_test_clear_##lname(struct folio *folio) \ 483 - { return 0; } \ 505 + FOLIO_TEST_CLEAR_FLAG_FALSE(lname) \ 484 506 static inline int TestClearPage##uname(struct page *page) { return 0; } 485 507 486 508 #define PAGEFLAG_FALSE(uname, lname) TESTPAGEFLAG_FALSE(uname, lname) \ ··· 875 855 876 856 #define PG_head_mask ((1UL << PG_head)) 877 857 878 - #ifdef CONFIG_HUGETLB_PAGE 879 - int PageHuge(const struct page *page); 880 - SETPAGEFLAG(HugeTLB, hugetlb, PF_SECOND) 881 - CLEARPAGEFLAG(HugeTLB, hugetlb, PF_SECOND) 882 - 883 - /** 884 - * folio_test_hugetlb - Determine if the folio belongs to hugetlbfs 885 - * @folio: The folio to test. 886 - * 887 - * Context: Any context. Caller should have a reference on the folio to 888 - * prevent it from being turned into a tail page. 
889 - * Return: True for hugetlbfs folios, false for anon folios or folios 890 - * belonging to other filesystems. 891 - */ 892 - static inline bool folio_test_hugetlb(const struct folio *folio) 893 - { 894 - return folio_test_large(folio) && 895 - test_bit(PG_hugetlb, const_folio_flags(folio, 1)); 896 - } 897 - #else 898 - TESTPAGEFLAG_FALSE(Huge, hugetlb) 899 - #endif 900 - 901 858 #ifdef CONFIG_TRANSPARENT_HUGEPAGE 902 859 /* 903 860 * PageHuge() only returns true for hugetlbfs pages, but not for ··· 931 934 #endif 932 935 933 936 /* 934 - * Check if a page is currently marked HWPoisoned. Note that this check is 935 - * best effort only and inherently racy: there is no way to synchronize with 936 - * failing hardware. 937 - */ 938 - static inline bool is_page_hwpoison(struct page *page) 939 - { 940 - if (PageHWPoison(page)) 941 - return true; 942 - return PageHuge(page) && PageHWPoison(compound_head(page)); 943 - } 944 - 945 - /* 946 937 * For pages that are never mapped to userspace (and aren't PageSlab), 947 938 * page_type may be used. Because it is initialised to -1, we invert the 948 939 * sense of the bit, so __SetPageFoo *clears* the bit used for PageFoo, and 949 940 * __ClearPageFoo *sets* the bit used for PageFoo. We reserve a few high and 950 - * low bits so that an underflow or overflow of page_mapcount() won't be 941 + * low bits so that an underflow or overflow of _mapcount won't be 951 942 * mistaken for a page type value. 952 943 */ 953 944 954 945 #define PAGE_TYPE_BASE 0xf0000000 955 - /* Reserve 0x0000007f to catch underflows of page_mapcount */ 946 + /* Reserve 0x0000007f to catch underflows of _mapcount */ 956 947 #define PAGE_MAPCOUNT_RESERVE -128 957 948 #define PG_buddy 0x00000080 958 949 #define PG_offline 0x00000100 959 950 #define PG_table 0x00000200 960 951 #define PG_guard 0x00000400 952 + #define PG_hugetlb 0x00000800 961 953 962 954 #define PageType(page, flag) \ 963 955 ((page->page_type & (PAGE_TYPE_BASE | flag)) == PAGE_TYPE_BASE) ··· 963 977 return page_type_has_type(page->page_type); 964 978 } 965 979 966 - #define PAGE_TYPE_OPS(uname, lname, fname) \ 967 - static __always_inline int Page##uname(const struct page *page) \ 968 - { \ 969 - return PageType(page, PG_##lname); \ 970 - } \ 971 - static __always_inline int folio_test_##fname(const struct folio *folio)\ 980 + #define FOLIO_TYPE_OPS(lname, fname) \ 981 + static __always_inline bool folio_test_##fname(const struct folio *folio)\ 972 982 { \ 973 983 return folio_test_type(folio, PG_##lname); \ 974 - } \ 975 - static __always_inline void __SetPage##uname(struct page *page) \ 976 - { \ 977 - VM_BUG_ON_PAGE(!PageType(page, 0), page); \ 978 - page->page_type &= ~PG_##lname; \ 979 984 } \ 980 985 static __always_inline void __folio_set_##fname(struct folio *folio) \ 981 986 { \ 982 987 VM_BUG_ON_FOLIO(!folio_test_type(folio, 0), folio); \ 983 988 folio->page.page_type &= ~PG_##lname; \ 984 989 } \ 985 - static __always_inline void __ClearPage##uname(struct page *page) \ 986 - { \ 987 - VM_BUG_ON_PAGE(!Page##uname(page), page); \ 988 - page->page_type |= PG_##lname; \ 989 - } \ 990 990 static __always_inline void __folio_clear_##fname(struct folio *folio) \ 991 991 { \ 992 992 VM_BUG_ON_FOLIO(!folio_test_##fname(folio), folio); \ 993 993 folio->page.page_type |= PG_##lname; \ 994 + } 995 + 996 + #define PAGE_TYPE_OPS(uname, lname, fname) \ 997 + FOLIO_TYPE_OPS(lname, fname) \ 998 + static __always_inline int Page##uname(const struct page *page) \ 999 + { \ 1000 + return PageType(page, PG_##lname); \ 
994 1001 } \ 1002 + static __always_inline void __SetPage##uname(struct page *page) \ 1003 + { \ 1004 + VM_BUG_ON_PAGE(!PageType(page, 0), page); \ 1005 + page->page_type &= ~PG_##lname; \ 1006 + } \ 1007 + static __always_inline void __ClearPage##uname(struct page *page) \ 1008 + { \ 1009 + VM_BUG_ON_PAGE(!Page##uname(page), page); \ 1010 + page->page_type |= PG_##lname; \ 1011 + } 995 1012 996 1013 /* 997 1014 * PageBuddy() indicates that the page is free and in the buddy system ··· 1040 1051 * Marks guardpages used with debug_pagealloc. 1041 1052 */ 1042 1053 PAGE_TYPE_OPS(Guard, guard, guard) 1054 + 1055 + #ifdef CONFIG_HUGETLB_PAGE 1056 + FOLIO_TYPE_OPS(hugetlb, hugetlb) 1057 + #else 1058 + FOLIO_TEST_FLAG_FALSE(hugetlb) 1059 + #endif 1060 + 1061 + /** 1062 + * PageHuge - Determine if the page belongs to hugetlbfs 1063 + * @page: The page to test. 1064 + * 1065 + * Context: Any context. 1066 + * Return: True for hugetlbfs pages, false for anon pages or pages 1067 + * belonging to other filesystems. 1068 + */ 1069 + static inline bool PageHuge(const struct page *page) 1070 + { 1071 + return folio_test_hugetlb(page_folio(page)); 1072 + } 1073 + 1074 + /* 1075 + * Check if a page is currently marked HWPoisoned. Note that this check is 1076 + * best effort only and inherently racy: there is no way to synchronize with 1077 + * failing hardware. 1078 + */ 1079 + static inline bool is_page_hwpoison(struct page *page) 1080 + { 1081 + if (PageHWPoison(page)) 1082 + return true; 1083 + return PageHuge(page) && PageHWPoison(compound_head(page)); 1084 + } 1043 1085 1044 1086 extern bool is_free_buddy_page(struct page *page); 1045 1087 ··· 1138 1118 */ 1139 1119 #define PAGE_FLAGS_SECOND \ 1140 1120 (0xffUL /* order */ | 1UL << PG_has_hwpoisoned | \ 1141 - 1UL << PG_hugetlb | 1UL << PG_large_rmappable) 1121 + 1UL << PG_large_rmappable) 1142 1122 1143 1123 #define PAGE_FLAGS_PRIVATE \ 1144 1124 (1UL << PG_private | 1UL << PG_private_2)
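Because page_type starts life as -1 (all bits set), the type bits carry inverted sense: __folio_set_hugetlb() clears PG_hugetlb and __folio_clear_hugetlb() sets it back. A runnable userspace model of the encoding, using the constants from the hunk above:

#include <stdbool.h>
#include <stdio.h>

#define PAGE_TYPE_BASE 0xf0000000u
#define PG_hugetlb     0x00000800u

/* Model of the PageType() test: the flag bit must be CLEAR and the
 * 0xf-prefix intact for the type to be considered set. */
static bool PageType(unsigned int page_type, unsigned int flag)
{
        return (page_type & (PAGE_TYPE_BASE | flag)) == PAGE_TYPE_BASE;
}

int main(void)
{
        unsigned int page_type = 0xffffffffu;   /* initial -1: no type */

        printf("hugetlb? %d\n", PageType(page_type, PG_hugetlb));
        page_type &= ~PG_hugetlb;               /* __folio_set_hugetlb() clears the bit */
        printf("hugetlb? %d\n", PageType(page_type, PG_hugetlb));
        page_type |= PG_hugetlb;                /* __folio_clear_hugetlb() sets it back */
        printf("hugetlb? %d\n", PageType(page_type, PG_hugetlb));
        return 0;
}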
-5
include/linux/profile.h
··· 18 18 struct notifier_block; 19 19 20 20 #if defined(CONFIG_PROFILING) && defined(CONFIG_PROC_FS) 21 - void create_prof_cpu_mask(void); 22 21 int create_proc_profile(void); 23 22 #else 24 - static inline void create_prof_cpu_mask(void) 25 - { 26 - } 27 - 28 23 static inline int create_proc_profile(void) 29 24 { 30 25 return 0;
+2 -2
include/linux/regulator/consumer.h
··· 320 320 321 321 static inline int devm_regulator_get_enable(struct device *dev, const char *id) 322 322 { 323 - return -ENODEV; 323 + return 0; 324 324 } 325 325 326 326 static inline int devm_regulator_get_enable_optional(struct device *dev, 327 327 const char *id) 328 328 { 329 - return -ENODEV; 329 + return 0; 330 330 } 331 331 332 332 static inline struct regulator *__must_check
+2
include/linux/skmsg.h
··· 465 465 466 466 static inline void sk_psock_data_ready(struct sock *sk, struct sk_psock *psock) 467 467 { 468 + read_lock_bh(&sk->sk_callback_lock); 468 469 if (psock->saved_data_ready) 469 470 psock->saved_data_ready(sk); 470 471 else 471 472 sk->sk_data_ready(sk); 473 + read_unlock_bh(&sk->sk_callback_lock); 472 474 } 473 475 474 476 static inline void psock_set_prog(struct bpf_prog **pprog,
+9
include/net/gro.h
··· 87 87 88 88 /* used to support CHECKSUM_COMPLETE for tunneling protocols */ 89 89 __wsum csum; 90 + 91 + /* L3 offsets */ 92 + union { 93 + struct { 94 + u16 network_offset; 95 + u16 inner_network_offset; 96 + }; 97 + u16 network_offsets[2]; 98 + }; 90 99 }; 91 100 92 101 #define NAPI_GRO_CB(skb) ((struct napi_gro_cb *)(skb)->cb)
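The anonymous union gives two views of the same four bytes: named fields for writers (see the af_inet.c and ip6_offload.c hunks below) and an array for readers, which index it with skb->encapsulation as in the udp4/udp6 lookup hunks below. A runnable userspace model:

#include <stdint.h>
#include <stdio.h>

/* Model of the napi_gro_cb offsets: named fields and an array view of
 * the same storage, indexable by 0 (outer) or 1 (inner). */
struct gro_cb_model {
        union {
                struct {
                        uint16_t network_offset;
                        uint16_t inner_network_offset;
                };
                uint16_t network_offsets[2];
        };
};

int main(void)
{
        struct gro_cb_model cb = { .network_offset = 14, .inner_network_offset = 34 };
        int encapsulation = 1;  /* stand-in for skb->encapsulation */

        printf("outer=%u inner=%u picked=%u\n",
               cb.network_offsets[0], cb.network_offsets[1],
               cb.network_offsets[encapsulation]);
        return 0;
}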
+1
include/trace/events/mmflags.h
··· 135 135 #define DEF_PAGETYPE_NAME(_name) { PG_##_name, __stringify(_name) } 136 136 137 137 #define __def_pagetype_names \ 138 + DEF_PAGETYPE_NAME(hugetlb), \ 138 139 DEF_PAGETYPE_NAME(offline), \ 139 140 DEF_PAGETYPE_NAME(guard), \ 140 141 DEF_PAGETYPE_NAME(table), \
-5
include/uapi/drm/etnaviv_drm.h
··· 77 77 #define ETNAVIV_PARAM_GPU_PRODUCT_ID 0x1c 78 78 #define ETNAVIV_PARAM_GPU_CUSTOMER_ID 0x1d 79 79 #define ETNAVIV_PARAM_GPU_ECO_ID 0x1e 80 - #define ETNAVIV_PARAM_GPU_NN_CORE_COUNT 0x1f 81 - #define ETNAVIV_PARAM_GPU_NN_MAD_PER_CORE 0x20 82 - #define ETNAVIV_PARAM_GPU_TP_CORE_COUNT 0x21 83 - #define ETNAVIV_PARAM_GPU_ON_CHIP_SRAM_SIZE 0x22 84 - #define ETNAVIV_PARAM_GPU_AXI_SRAM_SIZE 0x23 85 80 86 81 #define ETNA_MAX_PIPES 4 87 82
+3 -3
include/uapi/linux/vdpa.h
··· 57 57 VDPA_ATTR_DEV_FEATURES, /* u64 */ 58 58 59 59 VDPA_ATTR_DEV_BLK_CFG_CAPACITY, /* u64 */ 60 - VDPA_ATTR_DEV_BLK_CFG_SEG_SIZE, /* u32 */ 60 + VDPA_ATTR_DEV_BLK_CFG_SIZE_MAX, /* u32 */ 61 61 VDPA_ATTR_DEV_BLK_CFG_BLK_SIZE, /* u32 */ 62 62 VDPA_ATTR_DEV_BLK_CFG_SEG_MAX, /* u32 */ 63 63 VDPA_ATTR_DEV_BLK_CFG_NUM_QUEUES, /* u16 */ ··· 70 70 VDPA_ATTR_DEV_BLK_CFG_DISCARD_SEC_ALIGN,/* u32 */ 71 71 VDPA_ATTR_DEV_BLK_CFG_MAX_WRITE_ZEROES_SEC, /* u32 */ 72 72 VDPA_ATTR_DEV_BLK_CFG_MAX_WRITE_ZEROES_SEG, /* u32 */ 73 - VDPA_ATTR_DEV_BLK_CFG_READ_ONLY, /* u8 */ 74 - VDPA_ATTR_DEV_BLK_CFG_FLUSH, /* u8 */ 73 + VDPA_ATTR_DEV_BLK_READ_ONLY, /* u8 */ 74 + VDPA_ATTR_DEV_BLK_FLUSH, /* u8 */ 75 75 76 76 /* new attributes must be added above here */ 77 77 VDPA_ATTR_MAX,
+1 -1
init/Kconfig
··· 1899 1899 bool "Rust support" 1900 1900 depends on HAVE_RUST 1901 1901 depends on RUST_IS_AVAILABLE 1902 + depends on !CFI_CLANG 1902 1903 depends on !MODVERSIONS 1903 1904 depends on !GCC_PLUGINS 1904 1905 depends on !RANDSTRUCT 1905 1906 depends on !DEBUG_INFO_BTF || PAHOLE_HAS_LANG_EXCLUDE 1906 - select CONSTRUCTORS 1907 1907 help 1908 1908 Enables Rust support in the kernel. 1909 1909
+1 -1
kernel/bounds.c
··· 19 19 DEFINE(NR_PAGEFLAGS, __NR_PAGEFLAGS); 20 20 DEFINE(MAX_NR_ZONES, __MAX_NR_ZONES); 21 21 #ifdef CONFIG_SMP 22 - DEFINE(NR_CPUS_BITS, bits_per(CONFIG_NR_CPUS)); 22 + DEFINE(NR_CPUS_BITS, order_base_2(CONFIG_NR_CPUS)); 23 23 #endif 24 24 DEFINE(SPINLOCK_SIZE, sizeof(spinlock_t)); 25 25 #ifdef CONFIG_LRU_GEN
+9
kernel/bpf/core.c
··· 2970 2970 return false; 2971 2971 } 2972 2972 2973 + u64 __weak bpf_arch_uaddress_limit(void) 2974 + { 2975 + #if defined(CONFIG_64BIT) && defined(CONFIG_ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE) 2976 + return TASK_SIZE; 2977 + #else 2978 + return 0; 2979 + #endif 2980 + } 2981 + 2973 2982 /* Return TRUE if the JIT backend satisfies the following two conditions: 2974 2983 * 1) JIT backend supports atomic_xchg() on pointer-sized words. 2975 2984 * 2) Under the specific arch, the implementation of xchg() is the same
+31 -2
kernel/bpf/verifier.c
··· 18469 18469 f = fdget(fd); 18470 18470 map = __bpf_map_get(f); 18471 18471 if (IS_ERR(map)) { 18472 - verbose(env, "fd %d is not pointing to valid bpf_map\n", 18473 - insn[0].imm); 18472 + verbose(env, "fd %d is not pointing to valid bpf_map\n", fd); 18474 18473 return PTR_ERR(map); 18475 18474 } 18476 18475 ··· 19876 19877 ARRAY_SIZE(chk_and_mod) - (is64 ? 2 : 0); 19877 19878 19878 19879 new_prog = bpf_patch_insn_data(env, i + delta, patchlet, cnt); 19880 + if (!new_prog) 19881 + return -ENOMEM; 19882 + 19883 + delta += cnt - 1; 19884 + env->prog = prog = new_prog; 19885 + insn = new_prog->insnsi + i + delta; 19886 + goto next_insn; 19887 + } 19888 + 19889 + /* Make it impossible to de-reference a userspace address */ 19890 + if (BPF_CLASS(insn->code) == BPF_LDX && 19891 + (BPF_MODE(insn->code) == BPF_PROBE_MEM || 19892 + BPF_MODE(insn->code) == BPF_PROBE_MEMSX)) { 19893 + struct bpf_insn *patch = &insn_buf[0]; 19894 + u64 uaddress_limit = bpf_arch_uaddress_limit(); 19895 + 19896 + if (!uaddress_limit) 19897 + goto next_insn; 19898 + 19899 + *patch++ = BPF_MOV64_REG(BPF_REG_AX, insn->src_reg); 19900 + if (insn->off) 19901 + *patch++ = BPF_ALU64_IMM(BPF_ADD, BPF_REG_AX, insn->off); 19902 + *patch++ = BPF_ALU64_IMM(BPF_RSH, BPF_REG_AX, 32); 19903 + *patch++ = BPF_JMP_IMM(BPF_JLE, BPF_REG_AX, uaddress_limit >> 32, 2); 19904 + *patch++ = *insn; 19905 + *patch++ = BPF_JMP_IMM(BPF_JA, 0, 0, 1); 19906 + *patch++ = BPF_MOV64_IMM(insn->dst_reg, 0); 19907 + 19908 + cnt = patch - insn_buf; 19909 + new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt); 19879 19910 if (!new_prog) 19880 19911 return -ENOMEM; 19881 19912
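The patched-in sequence copies the address, adds the insn offset, shifts right by 32, and compares against the upper half of bpf_arch_uaddress_limit() (added in kernel/bpf/core.c above); addresses at or below the limit skip the load and produce 0 in the destination register. A runnable userspace model of that guard (the limit value and fake_load are illustrative):

#include <stdint.h>
#include <stdio.h>

/* Model of the guard patched in front of BPF_PROBE_MEM loads: compare
 * only the upper 32 bits of the address against the upper 32 bits of the
 * user/kernel split, and yield 0 instead of accessing userspace. */
static uint64_t guarded_load(uint64_t addr, uint64_t uaddress_limit,
                             uint64_t (*do_load)(uint64_t))
{
        if ((addr >> 32) > (uaddress_limit >> 32))
                return do_load(addr);   /* plausibly kernel memory: do the access */
        return 0;                       /* userspace range: skip the load */
}

static uint64_t fake_load(uint64_t addr)
{
        (void)addr;
        return 0xdeadbeef;      /* stand-in for a successful kernel read */
}

int main(void)
{
        uint64_t task_size = 0x7ffffffff000ull; /* illustrative x86-64 TASK_SIZE */

        printf("%llx\n", (unsigned long long)guarded_load(0x1000ull, task_size, fake_load));
        printf("%llx\n", (unsigned long long)guarded_load(0xffff888000000000ull, task_size, fake_load));
        return 0;
}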
+10 -4
kernel/cpu.c
··· 3196 3196 this_cpu_write(cpuhp_state.target, CPUHP_ONLINE); 3197 3197 } 3198 3198 3199 + #ifdef CONFIG_CPU_MITIGATIONS 3199 3200 /* 3200 3201 * These are used for a global "mitigations=" cmdline option for toggling 3201 3202 * optional CPU mitigations. ··· 3207 3206 CPU_MITIGATIONS_AUTO_NOSMT, 3208 3207 }; 3209 3208 3210 - static enum cpu_mitigations cpu_mitigations __ro_after_init = 3211 - IS_ENABLED(CONFIG_SPECULATION_MITIGATIONS) ? CPU_MITIGATIONS_AUTO : 3212 - CPU_MITIGATIONS_OFF; 3209 + static enum cpu_mitigations cpu_mitigations __ro_after_init = CPU_MITIGATIONS_AUTO; 3213 3210 3214 3211 static int __init mitigations_parse_cmdline(char *arg) 3215 3212 { ··· 3223 3224 3224 3225 return 0; 3225 3226 } 3226 - early_param("mitigations", mitigations_parse_cmdline); 3227 3227 3228 3228 /* mitigations=off */ 3229 3229 bool cpu_mitigations_off(void) ··· 3237 3239 return cpu_mitigations == CPU_MITIGATIONS_AUTO_NOSMT; 3238 3240 } 3239 3241 EXPORT_SYMBOL_GPL(cpu_mitigations_auto_nosmt); 3242 + #else 3243 + static int __init mitigations_parse_cmdline(char *arg) 3244 + { 3245 + pr_crit("Kernel compiled without mitigations, ignoring 'mitigations'; system may still be vulnerable\n"); 3246 + return 0; 3247 + } 3248 + #endif 3249 + early_param("mitigations", mitigations_parse_cmdline);
-43
kernel/profile.c
··· 344 344 #include <linux/seq_file.h> 345 345 #include <linux/uaccess.h> 346 346 347 - static int prof_cpu_mask_proc_show(struct seq_file *m, void *v) 348 - { 349 - seq_printf(m, "%*pb\n", cpumask_pr_args(prof_cpu_mask)); 350 - return 0; 351 - } 352 - 353 - static int prof_cpu_mask_proc_open(struct inode *inode, struct file *file) 354 - { 355 - return single_open(file, prof_cpu_mask_proc_show, NULL); 356 - } 357 - 358 - static ssize_t prof_cpu_mask_proc_write(struct file *file, 359 - const char __user *buffer, size_t count, loff_t *pos) 360 - { 361 - cpumask_var_t new_value; 362 - int err; 363 - 364 - if (!zalloc_cpumask_var(&new_value, GFP_KERNEL)) 365 - return -ENOMEM; 366 - 367 - err = cpumask_parse_user(buffer, count, new_value); 368 - if (!err) { 369 - cpumask_copy(prof_cpu_mask, new_value); 370 - err = count; 371 - } 372 - free_cpumask_var(new_value); 373 - return err; 374 - } 375 - 376 - static const struct proc_ops prof_cpu_mask_proc_ops = { 377 - .proc_open = prof_cpu_mask_proc_open, 378 - .proc_read = seq_read, 379 - .proc_lseek = seq_lseek, 380 - .proc_release = single_release, 381 - .proc_write = prof_cpu_mask_proc_write, 382 - }; 383 - 384 - void create_prof_cpu_mask(void) 385 - { 386 - /* create /proc/irq/prof_cpu_mask */ 387 - proc_create("irq/prof_cpu_mask", 0600, NULL, &prof_cpu_mask_proc_ops); 388 - } 389 - 390 347 /* 391 348 * This function accesses profiling information. The returned data is 392 349 * binary: the sampling step and the actual contents of the profile
+20 -14
kernel/sched/fair.c
··· 696 696 * 697 697 * XXX could add max_slice to the augmented data to track this. 698 698 */ 699 + static s64 entity_lag(u64 avruntime, struct sched_entity *se) 700 + { 701 + s64 vlag, limit; 702 + 703 + vlag = avruntime - se->vruntime; 704 + limit = calc_delta_fair(max_t(u64, 2*se->slice, TICK_NSEC), se); 705 + 706 + return clamp(vlag, -limit, limit); 707 + } 708 + 699 709 static void update_entity_lag(struct cfs_rq *cfs_rq, struct sched_entity *se) 700 710 { 701 - s64 lag, limit; 702 - 703 711 SCHED_WARN_ON(!se->on_rq); 704 - lag = avg_vruntime(cfs_rq) - se->vruntime; 705 712 706 - limit = calc_delta_fair(max_t(u64, 2*se->slice, TICK_NSEC), se); 707 - se->vlag = clamp(lag, -limit, limit); 713 + se->vlag = entity_lag(avg_vruntime(cfs_rq), se); 708 714 } 709 715 710 716 /* ··· 3682 3676 dequeue_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se) { } 3683 3677 #endif 3684 3678 3685 - static void reweight_eevdf(struct cfs_rq *cfs_rq, struct sched_entity *se, 3679 + static void reweight_eevdf(struct sched_entity *se, u64 avruntime, 3686 3680 unsigned long weight) 3687 3681 { 3688 3682 unsigned long old_weight = se->load.weight; 3689 - u64 avruntime = avg_vruntime(cfs_rq); 3690 3683 s64 vlag, vslice; 3691 3684 3692 3685 /* ··· 3766 3761 * = V - vl' 3767 3762 */ 3768 3763 if (avruntime != se->vruntime) { 3769 - vlag = (s64)(avruntime - se->vruntime); 3764 + vlag = entity_lag(avruntime, se); 3770 3765 vlag = div_s64(vlag * old_weight, weight); 3771 3766 se->vruntime = avruntime - vlag; 3772 3767 } ··· 3792 3787 unsigned long weight) 3793 3788 { 3794 3789 bool curr = cfs_rq->curr == se; 3790 + u64 avruntime; 3795 3791 3796 3792 if (se->on_rq) { 3797 3793 /* commit outstanding execution time */ 3798 - if (curr) 3799 - update_curr(cfs_rq); 3800 - else 3794 + update_curr(cfs_rq); 3795 + avruntime = avg_vruntime(cfs_rq); 3796 + if (!curr) 3801 3797 __dequeue_entity(cfs_rq, se); 3802 3798 update_load_sub(&cfs_rq->load, se->load.weight); 3803 3799 } 3804 3800 dequeue_load_avg(cfs_rq, se); 3805 3801 3806 - if (!se->on_rq) { 3802 + if (se->on_rq) { 3803 + reweight_eevdf(se, avruntime, weight); 3804 + } else { 3807 3805 /* 3808 3806 * Because we keep se->vlag = V - v_i, while: lag_i = w_i*(V - v_i), 3809 3807 * we need to scale se->vlag when w_i changes. 3810 3808 */ 3811 3809 se->vlag = div_s64(se->vlag * se->load.weight, weight); 3812 - } else { 3813 - reweight_eevdf(cfs_rq, se, weight); 3814 3810 } 3815 3811 3816 3812 update_load_set(&se->load, weight);
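Factoring entity_lag() out lets reweight_eevdf() clamp the lag with the same bound of roughly two slices that update_entity_lag() already applied, instead of using the raw vruntime difference. A runnable userspace model of the clamp (plain numbers stand in for calc_delta_fair() output):

#include <stdint.h>
#include <stdio.h>

/* Model of entity_lag(): distance from the average vruntime, clamped
 * symmetrically to the slice-derived limit. */
static int64_t entity_lag_model(int64_t avruntime, int64_t vruntime, int64_t limit)
{
        int64_t vlag = avruntime - vruntime;

        if (vlag < -limit)
                vlag = -limit;
        else if (vlag > limit)
                vlag = limit;
        return vlag;
}

int main(void)
{
        printf("%lld\n", (long long)entity_lag_model(1000, 400, 250));  /* clamped to 250 */
        printf("%lld\n", (long long)entity_lag_model(1000, 900, 250));  /* within bound: 100 */
        printf("%lld\n", (long long)entity_lag_model(1000, 1600, 250)); /* clamped to -250 */
        return 0;
}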
+16 -2
kernel/sched/isolation.c
··· 46 46 if (cpu < nr_cpu_ids) 47 47 return cpu; 48 48 49 - return cpumask_any_and(housekeeping.cpumasks[type], cpu_online_mask); 49 + cpu = cpumask_any_and(housekeeping.cpumasks[type], cpu_online_mask); 50 + if (likely(cpu < nr_cpu_ids)) 51 + return cpu; 52 + /* 53 + * Unless we have another problem this can only happen 54 + * at boot time before start_secondary() brings the 1st 55 + * housekeeping CPU up. 56 + */ 57 + WARN_ON_ONCE(system_state == SYSTEM_RUNNING || 58 + type != HK_TYPE_TIMER); 50 59 } 51 60 } 52 61 return smp_processor_id(); ··· 118 109 static int __init housekeeping_setup(char *str, unsigned long flags) 119 110 { 120 111 cpumask_var_t non_housekeeping_mask, housekeeping_staging; 112 + unsigned int first_cpu; 121 113 int err = 0; 122 114 123 115 if ((flags & HK_FLAG_TICK) && !(housekeeping.flags & HK_FLAG_TICK)) { ··· 139 129 cpumask_andnot(housekeeping_staging, 140 130 cpu_possible_mask, non_housekeeping_mask); 141 131 142 - if (!cpumask_intersects(cpu_present_mask, housekeeping_staging)) { 132 + first_cpu = cpumask_first_and(cpu_present_mask, housekeeping_staging); 133 + if (first_cpu >= nr_cpu_ids || first_cpu >= setup_max_cpus) { 143 134 __cpumask_set_cpu(smp_processor_id(), housekeeping_staging); 144 135 __cpumask_clear_cpu(smp_processor_id(), non_housekeeping_mask); 145 136 if (!housekeeping.flags) { ··· 148 137 "using boot CPU:%d\n", smp_processor_id()); 149 138 } 150 139 } 140 + 141 + if (cpumask_empty(non_housekeeping_mask)) 142 + goto free_housekeeping_staging; 151 143 152 144 if (!housekeeping.flags) { 153 145 /* First setup call ("nohz_full=" or "isolcpus=") */
+2 -3
kernel/vmcore_info.c
··· 205 205 VMCOREINFO_NUMBER(PG_head_mask); 206 206 #define PAGE_BUDDY_MAPCOUNT_VALUE (~PG_buddy) 207 207 VMCOREINFO_NUMBER(PAGE_BUDDY_MAPCOUNT_VALUE); 208 - #ifdef CONFIG_HUGETLB_PAGE 209 - VMCOREINFO_NUMBER(PG_hugetlb); 208 + #define PAGE_HUGETLB_MAPCOUNT_VALUE (~PG_hugetlb) 209 + VMCOREINFO_NUMBER(PAGE_HUGETLB_MAPCOUNT_VALUE); 210 210 #define PAGE_OFFLINE_MAPCOUNT_VALUE (~PG_offline) 211 211 VMCOREINFO_NUMBER(PAGE_OFFLINE_MAPCOUNT_VALUE); 212 - #endif 213 212 214 213 #ifdef CONFIG_KALLSYMS 215 214 VMCOREINFO_SYMBOL(kallsyms_names);
+16 -3
kernel/workqueue.c
··· 1277 1277 !cpumask_test_cpu(p->wake_cpu, pool->attrs->__pod_cpumask)) { 1278 1278 struct work_struct *work = list_first_entry(&pool->worklist, 1279 1279 struct work_struct, entry); 1280 - p->wake_cpu = cpumask_any_distribute(pool->attrs->__pod_cpumask); 1281 - get_work_pwq(work)->stats[PWQ_STAT_REPATRIATED]++; 1280 + int wake_cpu = cpumask_any_and_distribute(pool->attrs->__pod_cpumask, 1281 + cpu_online_mask); 1282 + if (wake_cpu < nr_cpu_ids) { 1283 + p->wake_cpu = wake_cpu; 1284 + get_work_pwq(work)->stats[PWQ_STAT_REPATRIATED]++; 1285 + } 1282 1286 } 1283 1287 #endif 1284 1288 wake_up_process(p); ··· 1598 1594 if (off_cpu >= 0) 1599 1595 total_cpus--; 1600 1596 1597 + /* If all CPUs of the wq get offline, use the default values */ 1598 + if (unlikely(!total_cpus)) { 1599 + for_each_node(node) 1600 + wq_node_nr_active(wq, node)->max = min_active; 1601 + 1602 + wq_node_nr_active(wq, NUMA_NO_NODE)->max = max_active; 1603 + return; 1604 + } 1605 + 1601 1606 for_each_node(node) { 1602 1607 int node_cpus; 1603 1608 ··· 1619 1606 min_active, max_active); 1620 1607 } 1621 1608 1622 - wq_node_nr_active(wq, NUMA_NO_NODE)->max = min_active; 1609 + wq_node_nr_active(wq, NUMA_NO_NODE)->max = max_active; 1623 1610 } 1624 1611 1625 1612 /**
+3 -2
lib/Kconfig.debug
··· 375 375 Incompatible with older versions of ccache. 376 376 377 377 config DEBUG_INFO_BTF 378 - bool "Generate BTF typeinfo" 378 + bool "Generate BTF type information" 379 379 depends on !DEBUG_INFO_SPLIT && !DEBUG_INFO_REDUCED 380 380 depends on !GCC_PLUGIN_RANDSTRUCT || COMPILE_TEST 381 381 depends on BPF_SYSCALL ··· 408 408 using DEBUG_INFO_BTF_MODULES. 409 409 410 410 config DEBUG_INFO_BTF_MODULES 411 - def_bool y 411 + bool "Generate BTF type information for kernel modules" 412 + default y 412 413 depends on DEBUG_INFO_BTF && MODULES && PAHOLE_HAS_SPLIT_BTF 413 414 help 414 415 Generate compact split BTF type information for kernel modules.
+1 -1
lib/scatterlist.c
··· 1124 1124 do { 1125 1125 res = iov_iter_extract_pages(iter, &pages, maxsize, sg_max, 1126 1126 extraction_flags, &off); 1127 - if (res < 0) 1127 + if (res <= 0) 1128 1128 goto failed; 1129 1129 1130 1130 len = res;
+2 -2
lib/stackdepot.c
··· 627 627 /* 628 628 * Zero out zone modifiers, as we don't have specific zone 629 629 * requirements. Keep the flags related to allocation in atomic 630 - * contexts and I/O. 630 + * contexts, I/O, nolockdep. 631 631 */ 632 632 alloc_flags &= ~GFP_ZONEMASK; 633 - alloc_flags &= (GFP_ATOMIC | GFP_KERNEL); 633 + alloc_flags &= (GFP_ATOMIC | GFP_KERNEL | __GFP_NOLOCKDEP); 634 634 alloc_flags |= __GFP_NOWARN; 635 635 page = alloc_pages(alloc_flags, DEPOT_POOL_ORDER); 636 636 if (page)
+15 -25
mm/hugetlb.c
··· 1624 1624 { 1625 1625 lockdep_assert_held(&hugetlb_lock); 1626 1626 1627 - folio_clear_hugetlb(folio); 1627 + __folio_clear_hugetlb(folio); 1628 1628 } 1629 1629 1630 1630 /* ··· 1711 1711 h->surplus_huge_pages_node[nid]++; 1712 1712 } 1713 1713 1714 - folio_set_hugetlb(folio); 1714 + __folio_set_hugetlb(folio); 1715 1715 folio_change_private(folio, NULL); 1716 1716 /* 1717 1717 * We have to set hugetlb_vmemmap_optimized again as above ··· 1781 1781 * If vmemmap pages were allocated above, then we need to clear the 1782 1782 * hugetlb destructor under the hugetlb lock. 1783 1783 */ 1784 - if (clear_dtor) { 1784 + if (folio_test_hugetlb(folio)) { 1785 1785 spin_lock_irq(&hugetlb_lock); 1786 1786 __clear_hugetlb_destructor(h, folio); 1787 1787 spin_unlock_irq(&hugetlb_lock); ··· 2049 2049 2050 2050 static void init_new_hugetlb_folio(struct hstate *h, struct folio *folio) 2051 2051 { 2052 - folio_set_hugetlb(folio); 2052 + __folio_set_hugetlb(folio); 2053 2053 INIT_LIST_HEAD(&folio->lru); 2054 2054 hugetlb_set_folio_subpool(folio, NULL); 2055 2055 set_hugetlb_cgroup(folio, NULL); ··· 2158 2158 { 2159 2159 return __prep_compound_gigantic_folio(folio, order, true); 2160 2160 } 2161 - 2162 - /* 2163 - * PageHuge() only returns true for hugetlbfs pages, but not for normal or 2164 - * transparent huge pages. See the PageTransHuge() documentation for more 2165 - * details. 2166 - */ 2167 - int PageHuge(const struct page *page) 2168 - { 2169 - const struct folio *folio; 2170 - 2171 - if (!PageCompound(page)) 2172 - return 0; 2173 - folio = page_folio(page); 2174 - return folio_test_hugetlb(folio); 2175 - } 2176 - EXPORT_SYMBOL_GPL(PageHuge); 2177 2161 2178 2162 /* 2179 2163 * Find and lock address space (mapping) in write mode. ··· 3252 3268 3253 3269 rsv_adjust = hugepage_subpool_put_pages(spool, 1); 3254 3270 hugetlb_acct_memory(h, -rsv_adjust); 3255 - if (deferred_reserve) 3271 + if (deferred_reserve) { 3272 + spin_lock_irq(&hugetlb_lock); 3256 3273 hugetlb_cgroup_uncharge_folio_rsvd(hstate_index(h), 3257 3274 pages_per_huge_page(h), folio); 3275 + spin_unlock_irq(&hugetlb_lock); 3276 + } 3258 3277 } 3259 3278 3260 3279 if (!memcg_charge_ret) ··· 6261 6274 VM_UFFD_MISSING); 6262 6275 } 6263 6276 6277 + if (!(vma->vm_flags & VM_MAYSHARE)) { 6278 + ret = vmf_anon_prepare(vmf); 6279 + if (unlikely(ret)) 6280 + goto out; 6281 + } 6282 + 6264 6283 folio = alloc_hugetlb_folio(vma, haddr, 0); 6265 6284 if (IS_ERR(folio)) { 6266 6285 /* ··· 6303 6310 */ 6304 6311 restore_reserve_on_error(h, vma, haddr, folio); 6305 6312 folio_put(folio); 6313 + ret = VM_FAULT_SIGBUS; 6306 6314 goto out; 6307 6315 } 6308 6316 new_pagecache_folio = true; 6309 6317 } else { 6310 6318 folio_lock(folio); 6311 - 6312 - ret = vmf_anon_prepare(vmf); 6313 - if (unlikely(ret)) 6314 - goto backout_unlocked; 6315 6319 anon_rmap = 1; 6316 6320 } 6317 6321 } else {
+16 -9
mm/zswap.c
··· 1331 1331 if (!gfp_has_io_fs(sc->gfp_mask)) 1332 1332 return 0; 1333 1333 1334 - #ifdef CONFIG_MEMCG_KMEM 1335 - mem_cgroup_flush_stats(memcg); 1336 - nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT; 1337 - nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED); 1338 - #else 1339 - /* use pool stats instead of memcg stats */ 1340 - nr_backing = zswap_pool_total_size >> PAGE_SHIFT; 1341 - nr_stored = atomic_read(&zswap_nr_stored); 1342 - #endif 1334 + /* 1335 + * For memcg, use the cgroup-wide ZSWAP stats since we don't 1336 + * have them per-node and thus per-lruvec. Careful if memcg is 1337 + * runtime-disabled: we can get sc->memcg == NULL, which is ok 1338 + * for the lruvec, but not for memcg_page_state(). 1339 + * 1340 + * Without memcg, use the zswap pool-wide metrics. 1341 + */ 1342 + if (!mem_cgroup_disabled()) { 1343 + mem_cgroup_flush_stats(memcg); 1344 + nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT; 1345 + nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED); 1346 + } else { 1347 + nr_backing = zswap_pool_total_size >> PAGE_SHIFT; 1348 + nr_stored = atomic_read(&zswap_nr_stored); 1349 + } 1343 1350 1344 1351 if (!nr_stored) 1345 1352 return 0;
+2
net/8021q/vlan_core.c
··· 478 478 if (unlikely(!vhdr)) 479 479 goto out; 480 480 481 + NAPI_GRO_CB(skb)->network_offsets[NAPI_GRO_CB(skb)->encap_mark] = hlen; 482 + 481 483 type = vhdr->h_vlan_encapsulated_proto; 482 484 483 485 ptype = gro_find_receive_by_type(type);
+1 -1
net/bridge/br_forward.c
··· 266 266 if (skb->dev == p->dev && ether_addr_equal(src, addr)) 267 267 return; 268 268 269 - skb = skb_copy(skb, GFP_ATOMIC); 269 + skb = pskb_copy(skb, GFP_ATOMIC); 270 270 if (!skb) { 271 271 DEV_STATS_INC(dev, tx_dropped); 272 272 return;
+32 -10
net/core/filter.c
··· 4362 4362 enum bpf_map_type map_type = ri->map_type; 4363 4363 void *fwd = ri->tgt_value; 4364 4364 u32 map_id = ri->map_id; 4365 + u32 flags = ri->flags; 4365 4366 struct bpf_map *map; 4366 4367 int err; 4367 4368 4368 4369 ri->map_id = 0; /* Valid map id idr range: [1,INT_MAX[ */ 4370 + ri->flags = 0; 4369 4371 ri->map_type = BPF_MAP_TYPE_UNSPEC; 4370 4372 4371 4373 if (unlikely(!xdpf)) { ··· 4379 4377 case BPF_MAP_TYPE_DEVMAP: 4380 4378 fallthrough; 4381 4379 case BPF_MAP_TYPE_DEVMAP_HASH: 4382 - map = READ_ONCE(ri->map); 4383 - if (unlikely(map)) { 4380 + if (unlikely(flags & BPF_F_BROADCAST)) { 4381 + map = READ_ONCE(ri->map); 4382 + 4383 + /* The map pointer is cleared when the map is being torn 4384 + * down by bpf_clear_redirect_map() 4385 + */ 4386 + if (unlikely(!map)) { 4387 + err = -ENOENT; 4388 + break; 4389 + } 4390 + 4384 4391 WRITE_ONCE(ri->map, NULL); 4385 4392 err = dev_map_enqueue_multi(xdpf, dev, map, 4386 - ri->flags & BPF_F_EXCLUDE_INGRESS); 4393 + flags & BPF_F_EXCLUDE_INGRESS); 4387 4394 } else { 4388 4395 err = dev_map_enqueue(fwd, xdpf, dev); 4389 4396 } ··· 4455 4444 static int xdp_do_generic_redirect_map(struct net_device *dev, 4456 4445 struct sk_buff *skb, 4457 4446 struct xdp_buff *xdp, 4458 - struct bpf_prog *xdp_prog, 4459 - void *fwd, 4460 - enum bpf_map_type map_type, u32 map_id) 4447 + struct bpf_prog *xdp_prog, void *fwd, 4448 + enum bpf_map_type map_type, u32 map_id, 4449 + u32 flags) 4461 4450 { 4462 4451 struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info); 4463 4452 struct bpf_map *map; ··· 4467 4456 case BPF_MAP_TYPE_DEVMAP: 4468 4457 fallthrough; 4469 4458 case BPF_MAP_TYPE_DEVMAP_HASH: 4470 - map = READ_ONCE(ri->map); 4471 - if (unlikely(map)) { 4459 + if (unlikely(flags & BPF_F_BROADCAST)) { 4460 + map = READ_ONCE(ri->map); 4461 + 4462 + /* The map pointer is cleared when the map is being torn 4463 + * down by bpf_clear_redirect_map() 4464 + */ 4465 + if (unlikely(!map)) { 4466 + err = -ENOENT; 4467 + break; 4468 + } 4469 + 4472 4470 WRITE_ONCE(ri->map, NULL); 4473 4471 err = dev_map_redirect_multi(dev, skb, xdp_prog, map, 4474 - ri->flags & BPF_F_EXCLUDE_INGRESS); 4472 + flags & BPF_F_EXCLUDE_INGRESS); 4475 4473 } else { 4476 4474 err = dev_map_generic_redirect(fwd, skb, xdp_prog); 4477 4475 } ··· 4517 4497 enum bpf_map_type map_type = ri->map_type; 4518 4498 void *fwd = ri->tgt_value; 4519 4499 u32 map_id = ri->map_id; 4500 + u32 flags = ri->flags; 4520 4501 int err; 4521 4502 4522 4503 ri->map_id = 0; /* Valid map id idr range: [1,INT_MAX[ */ 4504 + ri->flags = 0; 4523 4505 ri->map_type = BPF_MAP_TYPE_UNSPEC; 4524 4506 4525 4507 if (map_type == BPF_MAP_TYPE_UNSPEC && map_id == INT_MAX) { ··· 4541 4519 return 0; 4542 4520 } 4543 4521 4544 - return xdp_do_generic_redirect_map(dev, skb, xdp, xdp_prog, fwd, map_type, map_id); 4522 + return xdp_do_generic_redirect_map(dev, skb, xdp, xdp_prog, fwd, map_type, map_id, flags); 4545 4523 err: 4546 4524 _trace_xdp_redirect_err(dev, xdp_prog, ri->tgt_index, err); 4547 4525 return err;
+1
net/core/gro.c
··· 372 372 const skb_frag_t *frag0; 373 373 unsigned int headlen; 374 374 375 + NAPI_GRO_CB(skb)->network_offset = 0; 375 376 NAPI_GRO_CB(skb)->data_offset = 0; 376 377 headlen = skb_headlen(skb); 377 378 NAPI_GRO_CB(skb)->frag0 = skb->data;
+19 -8
net/core/skbuff.c
··· 2076 2076 2077 2077 struct sk_buff *skb_copy(const struct sk_buff *skb, gfp_t gfp_mask) 2078 2078 { 2079 - int headerlen = skb_headroom(skb); 2080 - unsigned int size = skb_end_offset(skb) + skb->data_len; 2081 - struct sk_buff *n = __alloc_skb(size, gfp_mask, 2082 - skb_alloc_rx_flag(skb), NUMA_NO_NODE); 2079 + struct sk_buff *n; 2080 + unsigned int size; 2081 + int headerlen; 2083 2082 2083 + if (WARN_ON_ONCE(skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST)) 2084 + return NULL; 2085 + 2086 + headerlen = skb_headroom(skb); 2087 + size = skb_end_offset(skb) + skb->data_len; 2088 + n = __alloc_skb(size, gfp_mask, 2089 + skb_alloc_rx_flag(skb), NUMA_NO_NODE); 2084 2090 if (!n) 2085 2091 return NULL; 2086 2092 ··· 2414 2408 /* 2415 2409 * Allocate the copy buffer 2416 2410 */ 2417 - struct sk_buff *n = __alloc_skb(newheadroom + skb->len + newtailroom, 2418 - gfp_mask, skb_alloc_rx_flag(skb), 2419 - NUMA_NO_NODE); 2420 - int oldheadroom = skb_headroom(skb); 2421 2411 int head_copy_len, head_copy_off; 2412 + struct sk_buff *n; 2413 + int oldheadroom; 2422 2414 2415 + if (WARN_ON_ONCE(skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST)) 2416 + return NULL; 2417 + 2418 + oldheadroom = skb_headroom(skb); 2419 + n = __alloc_skb(newheadroom + skb->len + newtailroom, 2420 + gfp_mask, skb_alloc_rx_flag(skb), 2421 + NUMA_NO_NODE); 2423 2422 if (!n) 2424 2423 return NULL; 2425 2424
+1 -4
net/core/skmsg.c
··· 1226 1226 1227 1227 rcu_read_lock(); 1228 1228 psock = sk_psock(sk); 1229 - if (psock) { 1230 - read_lock_bh(&sk->sk_callback_lock); 1229 + if (psock) 1231 1230 sk_psock_data_ready(sk, psock); 1232 - read_unlock_bh(&sk->sk_callback_lock); 1233 - } 1234 1231 rcu_read_unlock(); 1235 1232 } 1236 1233 }
+1
net/ipv4/af_inet.c
··· 1573 1573 /* The above will be needed by the transport layer if there is one 1574 1574 * immediately following this IP hdr. 1575 1575 */ 1576 + NAPI_GRO_CB(skb)->inner_network_offset = off; 1576 1577 1577 1578 /* Note : No need to call skb_gro_postpull_rcsum() here, 1578 1579 * as we already checked checksum over ipv4 header was 0
+1 -1
net/ipv4/ip_output.c
··· 1473 1473 * by icmp_hdr(skb)->type. 1474 1474 */ 1475 1475 if (sk->sk_type == SOCK_RAW && 1476 - !inet_test_bit(HDRINCL, sk)) 1476 + !(fl4->flowi4_flags & FLOWI_FLAG_KNOWN_NH)) 1477 1477 icmp_type = fl4->fl4_icmp_type; 1478 1478 else 1479 1479 icmp_type = icmp_hdr(skb)->type;
+3
net/ipv4/raw.c
··· 612 612 (hdrincl ? FLOWI_FLAG_KNOWN_NH : 0), 613 613 daddr, saddr, 0, 0, sk->sk_uid); 614 614 615 + fl4.fl4_icmp_type = 0; 616 + fl4.fl4_icmp_code = 0; 617 + 615 618 if (!hdrincl) { 616 619 rfv.msg = msg; 617 620 rfv.hlen = 0;
+2 -1
net/ipv4/udp.c
··· 543 543 struct sock *udp4_lib_lookup_skb(const struct sk_buff *skb, 544 544 __be16 sport, __be16 dport) 545 545 { 546 - const struct iphdr *iph = ip_hdr(skb); 546 + const u16 offset = NAPI_GRO_CB(skb)->network_offsets[skb->encapsulation]; 547 + const struct iphdr *iph = (struct iphdr *)(skb->data + offset); 547 548 struct net *net = dev_net(skb->dev); 548 549 int iif, sdif; 549 550
+13 -2
net/ipv4/udp_offload.c
··· 471 471 struct sk_buff *p; 472 472 unsigned int ulen; 473 473 int ret = 0; 474 + int flush; 474 475 475 476 /* requires non zero csum, for symmetry with GSO */ 476 477 if (!uh->check) { ··· 505 504 return p; 506 505 } 507 506 507 + flush = NAPI_GRO_CB(p)->flush; 508 + 509 + if (NAPI_GRO_CB(p)->flush_id != 1 || 510 + NAPI_GRO_CB(p)->count != 1 || 511 + !NAPI_GRO_CB(p)->is_atomic) 512 + flush |= NAPI_GRO_CB(p)->flush_id; 513 + else 514 + NAPI_GRO_CB(p)->is_atomic = false; 515 + 508 516 /* Terminate the flow on len mismatch or if it grow "too much". 509 517 * Under small packet flood GRO count could elsewhere grow a lot 510 518 * leading to excessive truesize values. 511 519 * On len mismatch merge the first packet shorter than gso_size, 512 520 * otherwise complete the GRO packet. 513 521 */ 514 - if (ulen > ntohs(uh2->len)) { 522 + if (ulen > ntohs(uh2->len) || flush) { 515 523 pp = p; 516 524 } else { 517 525 if (NAPI_GRO_CB(skb)->is_flist) { ··· 728 718 729 719 INDIRECT_CALLABLE_SCOPE int udp4_gro_complete(struct sk_buff *skb, int nhoff) 730 720 { 731 - const struct iphdr *iph = ip_hdr(skb); 721 + const u16 offset = NAPI_GRO_CB(skb)->network_offsets[skb->encapsulation]; 722 + const struct iphdr *iph = (struct iphdr *)(skb->data + offset); 732 723 struct udphdr *uh = (struct udphdr *)(skb->data + nhoff); 733 724 734 725 /* do fraglist only if there is no outer UDP encap (or we already processed it) */
+1
net/ipv6/ip6_offload.c
··· 237 237 goto out; 238 238 239 239 skb_set_network_header(skb, off); 240 + NAPI_GRO_CB(skb)->inner_network_offset = off; 240 241 241 242 flush += ntohs(iph->payload_len) != skb->len - hlen; 242 243
+2 -1
net/ipv6/udp.c
··· 285 285 struct sock *udp6_lib_lookup_skb(const struct sk_buff *skb, 286 286 __be16 sport, __be16 dport) 287 287 { 288 - const struct ipv6hdr *iph = ipv6_hdr(skb); 288 + const u16 offset = NAPI_GRO_CB(skb)->network_offsets[skb->encapsulation]; 289 + const struct ipv6hdr *iph = (struct ipv6hdr *)(skb->data + offset); 289 290 struct net *net = dev_net(skb->dev); 290 291 int iif, sdif; 291 292
+2 -1
net/ipv6/udp_offload.c
··· 164 164 165 165 INDIRECT_CALLABLE_SCOPE int udp6_gro_complete(struct sk_buff *skb, int nhoff) 166 166 { 167 - const struct ipv6hdr *ipv6h = ipv6_hdr(skb); 167 + const u16 offset = NAPI_GRO_CB(skb)->network_offsets[skb->encapsulation]; 168 + const struct ipv6hdr *ipv6h = (struct ipv6hdr *)(skb->data + offset); 168 169 struct udphdr *uh = (struct udphdr *)(skb->data + nhoff); 169 170 170 171 /* do fraglist only if there is no outer UDP encap (or we already processed it) */
+3
net/l2tp/l2tp_eth.c
··· 127 127 /* checksums verified by L2TP */ 128 128 skb->ip_summed = CHECKSUM_NONE; 129 129 130 + /* drop outer flow-hash */ 131 + skb_clear_hash(skb); 132 + 130 133 skb_dst_drop(skb); 131 134 nf_reset_ct(skb); 132 135
+3
net/mptcp/protocol.c
··· 3731 3731 MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_TOKENFALLBACKINIT); 3732 3732 mptcp_subflow_early_fallback(msk, subflow); 3733 3733 } 3734 + 3735 + WRITE_ONCE(msk->write_seq, subflow->idsn); 3736 + WRITE_ONCE(msk->snd_nxt, subflow->idsn); 3734 3737 if (likely(!__mptcp_check_fallback(msk))) 3735 3738 MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_MPCAPABLEACTIVE); 3736 3739
+8 -6
net/nsh/nsh.c
··· 77 77 static struct sk_buff *nsh_gso_segment(struct sk_buff *skb, 78 78 netdev_features_t features) 79 79 { 80 + unsigned int outer_hlen, mac_len, nsh_len; 80 81 struct sk_buff *segs = ERR_PTR(-EINVAL); 81 82 u16 mac_offset = skb->mac_header; 82 - unsigned int nsh_len, mac_len; 83 - __be16 proto; 83 + __be16 outer_proto, proto; 84 84 85 85 skb_reset_network_header(skb); 86 86 87 + outer_proto = skb->protocol; 88 + outer_hlen = skb_mac_header_len(skb); 87 89 mac_len = skb->mac_len; 88 90 89 91 if (unlikely(!pskb_may_pull(skb, NSH_BASE_HDR_LEN))) ··· 115 113 } 116 114 117 115 for (skb = segs; skb; skb = skb->next) { 118 - skb->protocol = htons(ETH_P_NSH); 119 - __skb_push(skb, nsh_len); 120 - skb->mac_header = mac_offset; 121 - skb->network_header = skb->mac_header + mac_len; 116 + skb->protocol = outer_proto; 117 + __skb_push(skb, nsh_len + outer_hlen); 118 + skb_reset_mac_header(skb); 119 + skb_set_network_header(skb, outer_hlen); 122 120 skb->mac_len = mac_len; 123 121 } 124 122
+2 -7
net/rxrpc/conn_object.c
··· 119 119 switch (srx->transport.family) { 120 120 case AF_INET: 121 121 if (peer->srx.transport.sin.sin_port != 122 - srx->transport.sin.sin_port || 123 - peer->srx.transport.sin.sin_addr.s_addr != 124 - srx->transport.sin.sin_addr.s_addr) 122 + srx->transport.sin.sin_port) 125 123 goto not_found; 126 124 break; 127 125 #ifdef CONFIG_AF_RXRPC_IPV6 128 126 case AF_INET6: 129 127 if (peer->srx.transport.sin6.sin6_port != 130 - srx->transport.sin6.sin6_port || 131 - memcmp(&peer->srx.transport.sin6.sin6_addr, 132 - &srx->transport.sin6.sin6_addr, 133 - sizeof(struct in6_addr)) != 0) 128 + srx->transport.sin6.sin6_port) 134 129 goto not_found; 135 130 break; 136 131 #endif
+1 -1
net/rxrpc/insecure.c
··· 19 19 */ 20 20 static struct rxrpc_txbuf *none_alloc_txbuf(struct rxrpc_call *call, size_t remain, gfp_t gfp) 21 21 { 22 - return rxrpc_alloc_data_txbuf(call, min_t(size_t, remain, RXRPC_JUMBO_DATALEN), 0, gfp); 22 + return rxrpc_alloc_data_txbuf(call, min_t(size_t, remain, RXRPC_JUMBO_DATALEN), 1, gfp); 23 23 } 24 24 25 25 static int none_secure_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb)
+1 -1
net/rxrpc/rxkad.c
··· 155 155 switch (call->conn->security_level) { 156 156 default: 157 157 space = min_t(size_t, remain, RXRPC_JUMBO_DATALEN); 158 - return rxrpc_alloc_data_txbuf(call, space, 0, gfp); 158 + return rxrpc_alloc_data_txbuf(call, space, 1, gfp); 159 159 case RXRPC_SECURITY_AUTH: 160 160 shdr = sizeof(struct rxkad_level1_hdr); 161 161 break;
+5 -5
net/rxrpc/txbuf.c
··· 21 21 { 22 22 struct rxrpc_wire_header *whdr; 23 23 struct rxrpc_txbuf *txb; 24 - size_t total, hoff = 0; 24 + size_t total, hoff; 25 25 void *buf; 26 26 27 27 txb = kmalloc(sizeof(*txb), gfp); 28 28 if (!txb) 29 29 return NULL; 30 30 31 - if (data_align) 32 - hoff = round_up(sizeof(*whdr), data_align) - sizeof(*whdr); 31 + hoff = round_up(sizeof(*whdr), data_align) - sizeof(*whdr); 33 32 total = hoff + sizeof(*whdr) + data_size; 34 33 34 + data_align = umax(data_align, L1_CACHE_BYTES); 35 35 mutex_lock(&call->conn->tx_data_alloc_lock); 36 - buf = __page_frag_alloc_align(&call->conn->tx_data_alloc, total, gfp, 37 - ~(data_align - 1) & ~(L1_CACHE_BYTES - 1)); 36 + buf = page_frag_alloc_align(&call->conn->tx_data_alloc, total, gfp, 37 + data_align); 38 38 mutex_unlock(&call->conn->tx_data_alloc_lock); 39 39 if (!buf) { 40 40 kfree(txb);
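With insecure.c and rxkad.c above now passing a minimum alignment of 1 instead of 0, the round_up() can run unconditionally: the header offset pads the wire header so the payload lands on data_align, while the allocator request is separately raised to at least L1_CACHE_BYTES. A runnable userspace model of the offset math (the 28-byte header size is illustrative):

#include <stddef.h>
#include <stdio.h>

/* Model of the hoff computation in rxrpc_alloc_data_txbuf(). */
#define ROUND_UP(x, a)  (((x) + (a) - 1) / (a) * (a))

int main(void)
{
        size_t whdr = 28;       /* stand-in for sizeof(struct rxrpc_wire_header) */

        for (size_t align = 1; align <= 64; align *= 8) {
                size_t hoff = ROUND_UP(whdr, align) - whdr;

                printf("align=%zu hoff=%zu payload at %zu\n",
                       align, hoff, hoff + whdr);
        }
        return 0;
}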
+1
net/sunrpc/xprtsock.c
··· 2664 2664 .xprtsec = { 2665 2665 .policy = RPC_XPRTSEC_NONE, 2666 2666 }, 2667 + .stats = upper_clnt->cl_stats, 2667 2668 }; 2668 2669 unsigned int pflags = current->flags; 2669 2670 struct rpc_clnt *lower_clnt;
+6 -2
net/tipc/msg.c
··· 142 142 if (fragid == FIRST_FRAGMENT) { 143 143 if (unlikely(head)) 144 144 goto err; 145 - *buf = NULL; 146 145 if (skb_has_frag_list(frag) && __skb_linearize(frag)) 147 146 goto err; 147 + *buf = NULL; 148 148 frag = skb_unshare(frag, GFP_ATOMIC); 149 149 if (unlikely(!frag)) 150 150 goto err; ··· 156 156 if (!head) 157 157 goto err; 158 158 159 + /* Either the input skb ownership is transferred to headskb 160 + * or the input skb is freed, clear the reference to avoid 161 + * bad access on error path. 162 + */ 163 + *buf = NULL; 159 164 if (skb_try_coalesce(head, frag, &headstolen, &delta)) { 160 165 kfree_skb_partial(frag, headstolen); 161 166 } else { ··· 184 179 *headbuf = NULL; 185 180 return 1; 186 181 } 187 - *buf = NULL; 188 182 return 0; 189 183 err: 190 184 kfree_skb(*buf);
-1
rust/Makefile
··· 175 175 mkdir -p $(objtree)/$(obj)/test/doctests/kernel; \ 176 176 OBJTREE=$(abspath $(objtree)) \ 177 177 $(RUSTDOC) --test $(rust_flags) \ 178 - @$(objtree)/include/generated/rustc_cfg \ 179 178 -L$(objtree)/$(obj) --extern alloc --extern kernel \ 180 179 --extern build_error --extern macros \ 181 180 --extern bindings --extern uapi \
+9 -2
rust/kernel/init.rs
··· 1292 1292 i8, i16, i32, i64, i128, isize, 1293 1293 f32, f64, 1294 1294 1295 - // SAFETY: These are ZSTs, there is nothing to zero. 1296 - {<T: ?Sized>} PhantomData<T>, core::marker::PhantomPinned, Infallible, (), 1295 + // Note: do not add uninhabited types (such as `!` or `core::convert::Infallible`) to this list; 1296 + // creating an instance of an uninhabited type is immediate undefined behavior. For more on 1297 + // uninhabited/empty types, consult The Rustonomicon: 1298 + // <https://doc.rust-lang.org/stable/nomicon/exotic-sizes.html#empty-types>. The Rust Reference 1299 + // also has information on undefined behavior: 1300 + // <https://doc.rust-lang.org/stable/reference/behavior-considered-undefined.html>. 1301 + // 1302 + // SAFETY: These are inhabited ZSTs; there is nothing to zero and a valid value exists. 1303 + {<T: ?Sized>} PhantomData<T>, core::marker::PhantomPinned, (), 1297 1304 1298 1305 // SAFETY: Type is allowed to take any value, including all zeros. 1299 1306 {<T>} MaybeUninit<T>,
+1 -1
rust/kernel/lib.rs
··· 65 65 /// The top level entrypoint to implementing a kernel module. 66 66 /// 67 67 /// For any teardown or cleanup operations, your type may implement [`Drop`]. 68 - pub trait Module: Sized + Sync { 68 + pub trait Module: Sized + Sync + Send { 69 69 /// Called at module initialization time. 70 70 /// 71 71 /// Use this method to perform whatever setup or registration your module
+4
rust/kernel/net/phy.rs
··· 640 640 drivers: Pin<&'static mut [DriverVTable]>, 641 641 } 642 642 643 + // SAFETY: The only action allowed in a `Registration` instance is dropping it, which is safe to do 644 + // from any thread because `phy_drivers_unregister` can be called from any thread context. 645 + unsafe impl Send for Registration {} 646 + 643 647 impl Registration { 644 648 /// Registers a PHY driver. 645 649 pub fn register(
-12
rust/macros/lib.rs
··· 35 35 /// author: "Rust for Linux Contributors", 36 36 /// description: "My very own kernel module!", 37 37 /// license: "GPL", 38 - /// params: { 39 - /// my_i32: i32 { 40 - /// default: 42, 41 - /// permissions: 0o000, 42 - /// description: "Example of i32", 43 - /// }, 44 - /// writeable_i32: i32 { 45 - /// default: 42, 46 - /// permissions: 0o644, 47 - /// description: "Example of i32", 48 - /// }, 49 - /// }, 50 38 /// } 51 39 /// 52 40 /// struct MyModule;
+115 -75
rust/macros/module.rs
··· 199 199 /// Used by the printing macros, e.g. [`info!`].
200 200 const __LOG_PREFIX: &[u8] = b\"{name}\\0\";
201 201
202 - /// The \"Rust loadable module\" mark.
203 - //
204 - // This may be best done another way later on, e.g. as a new modinfo
205 - // key or a new section. For the moment, keep it simple.
206 - #[cfg(MODULE)]
207 - #[doc(hidden)]
208 - #[used]
209 - static __IS_RUST_MODULE: () = ();
210 -
211 - static mut __MOD: Option<{type_}> = None;
212 -
213 202 // SAFETY: `__this_module` is constructed by the kernel at load time and will not be
214 203 // freed until the module is unloaded.
215 204 #[cfg(MODULE)]
··· 210 221 kernel::ThisModule::from_ptr(core::ptr::null_mut())
211 222 }};
212 223
213 - // Loadable modules need to export the `{{init,cleanup}}_module` identifiers.
214 - /// # Safety
215 - ///
216 - /// This function must not be called after module initialization, because it may be
217 - /// freed after that completes.
218 - #[cfg(MODULE)]
219 - #[doc(hidden)]
220 - #[no_mangle]
221 - #[link_section = \".init.text\"]
222 - pub unsafe extern \"C\" fn init_module() -> core::ffi::c_int {{
223 - __init()
224 - }}
224 + // Double nested modules, since then nobody can access the public items inside.
225 + mod __module_init {{
226 + mod __module_init {{
227 + use super::super::{type_};
225 228
226 - #[cfg(MODULE)]
227 - #[doc(hidden)]
228 - #[no_mangle]
229 - pub extern \"C\" fn cleanup_module() {{
230 - __exit()
231 - }}
229 + /// The \"Rust loadable module\" mark.
230 + //
231 + // This may be best done another way later on, e.g. as a new modinfo
232 + // key or a new section. For the moment, keep it simple.
233 + #[cfg(MODULE)]
234 + #[doc(hidden)]
235 + #[used]
236 + static __IS_RUST_MODULE: () = ();
232 237
233 - // Built-in modules are initialized through an initcall pointer
234 - // and the identifiers need to be unique.
235 - #[cfg(not(MODULE))]
236 - #[cfg(not(CONFIG_HAVE_ARCH_PREL32_RELOCATIONS))]
237 - #[doc(hidden)]
238 - #[link_section = \"{initcall_section}\"]
239 - #[used]
240 - pub static __{name}_initcall: extern \"C\" fn() -> core::ffi::c_int = __{name}_init;
238 + static mut __MOD: Option<{type_}> = None;
241 239
242 - #[cfg(not(MODULE))]
243 - #[cfg(CONFIG_HAVE_ARCH_PREL32_RELOCATIONS)]
244 - core::arch::global_asm!(
245 - r#\".section \"{initcall_section}\", \"a\"
246 - __{name}_initcall:
247 - .long __{name}_init - .
248 - .previous
249 - \"#
250 - );
240 + // Loadable modules need to export the `{{init,cleanup}}_module` identifiers.
241 + /// # Safety
242 + ///
243 + /// This function must not be called after module initialization, because it may be
244 + /// freed after that completes.
245 + #[cfg(MODULE)]
246 + #[doc(hidden)]
247 + #[no_mangle]
248 + #[link_section = \".init.text\"]
249 + pub unsafe extern \"C\" fn init_module() -> core::ffi::c_int {{
250 + // SAFETY: This function is inaccessible to the outside due to the double
251 + // module wrapping it. It is called exactly once by the C side via its
252 + // unique name.
253 + unsafe {{ __init() }}
254 + }}
251 255
252 - #[cfg(not(MODULE))]
253 - #[doc(hidden)]
254 - #[no_mangle]
255 - pub extern \"C\" fn __{name}_init() -> core::ffi::c_int {{
256 - __init()
257 - }}
256 + #[cfg(MODULE)]
257 + #[doc(hidden)]
258 + #[no_mangle]
259 + pub extern \"C\" fn cleanup_module() {{
260 + // SAFETY:
261 + // - This function is inaccessible to the outside due to the double
262 + // module wrapping it. It is called exactly once by the C side via its
263 + // unique name,
264 + // - furthermore it is only called after `init_module` has returned `0`
265 + // (which delegates to `__init`).
266 + unsafe {{ __exit() }}
267 + }}
258 268
259 - #[cfg(not(MODULE))]
260 - #[doc(hidden)]
261 - #[no_mangle]
262 - pub extern \"C\" fn __{name}_exit() {{
263 - __exit()
264 - }}
269 + // Built-in modules are initialized through an initcall pointer
270 + // and the identifiers need to be unique.
271 + #[cfg(not(MODULE))]
272 + #[cfg(not(CONFIG_HAVE_ARCH_PREL32_RELOCATIONS))]
273 + #[doc(hidden)]
274 + #[link_section = \"{initcall_section}\"]
275 + #[used]
276 + pub static __{name}_initcall: extern \"C\" fn() -> core::ffi::c_int = __{name}_init;
265 277
266 - fn __init() -> core::ffi::c_int {{
267 - match <{type_} as kernel::Module>::init(&THIS_MODULE) {{
268 - Ok(m) => {{
269 - unsafe {{
270 - __MOD = Some(m);
278 + #[cfg(not(MODULE))]
279 + #[cfg(CONFIG_HAVE_ARCH_PREL32_RELOCATIONS)]
280 + core::arch::global_asm!(
281 + r#\".section \"{initcall_section}\", \"a\"
282 + __{name}_initcall:
283 + .long __{name}_init - .
284 + .previous
285 + \"#
286 + );
287 +
288 + #[cfg(not(MODULE))]
289 + #[doc(hidden)]
290 + #[no_mangle]
291 + pub extern \"C\" fn __{name}_init() -> core::ffi::c_int {{
292 + // SAFETY: This function is inaccessible to the outside due to the double
293 + // module wrapping it. It is called exactly once by the C side via its
294 + // placement above in the initcall section.
295 + unsafe {{ __init() }}
296 + }}
297 +
298 + #[cfg(not(MODULE))]
299 + #[doc(hidden)]
300 + #[no_mangle]
301 + pub extern \"C\" fn __{name}_exit() {{
302 + // SAFETY:
303 + // - This function is inaccessible to the outside due to the double
304 + // module wrapping it. It is called exactly once by the C side via its
305 + // unique name,
306 + // - furthermore it is only called after `__{name}_init` has returned `0`
307 + // (which delegates to `__init`).
308 + unsafe {{ __exit() }}
309 + }}
310 +
311 + /// # Safety
312 + ///
313 + /// This function must only be called once.
314 + unsafe fn __init() -> core::ffi::c_int {{
315 + match <{type_} as kernel::Module>::init(&super::super::THIS_MODULE) {{
316 + Ok(m) => {{
317 + // SAFETY: No data race, since `__MOD` can only be accessed by this
318 + // module and there only `__init` and `__exit` access it. These
319 + // functions are only called once and `__exit` cannot be called
320 + // before or during `__init`.
321 + unsafe {{
322 + __MOD = Some(m);
323 + }}
324 + return 0;
325 + }}
326 + Err(e) => {{
327 + return e.to_errno();
328 + }}
271 329 }}
272 - return 0;
273 330 }}
274 - Err(e) => {{
275 - return e.to_errno();
331 +
332 + /// # Safety
333 + ///
334 + /// This function must
335 + /// - only be called once,
336 + /// - be called after `__init` has been called and returned `0`.
337 + unsafe fn __exit() {{
338 + // SAFETY: No data race, since `__MOD` can only be accessed by this module
339 + // and there only `__init` and `__exit` access it. These functions are only
340 + // called once and `__init` was already called.
341 + unsafe {{
342 + // Invokes `drop()` on `__MOD`, which should be used for cleanup.
343 + __MOD = None;
344 + }}
276 345 }}
346 +
347 + {modinfo}
277 348 }}
278 349 }}
279 -
280 - fn __exit() {{
281 - unsafe {{
282 - // Invokes `drop()` on `__MOD`, which should be used for cleanup.
283 - __MOD = None;
284 - }}
285 - }}
286 -
287 - {modinfo}
288 350 ",
289 351 type_ = info.type_,
290 352 name = info.name,
+1 -1
scripts/Makefile.build
··· 273 273 -Zallow-features=$(rust_allowed_features) \ 274 274 -Zcrate-attr=no_std \ 275 275 -Zcrate-attr='feature($(rust_allowed_features))' \ 276 - --extern alloc --extern kernel \ 276 + -Zunstable-options --extern force:alloc --extern kernel \ 277 277 --crate-type rlib -L $(objtree)/rust/ \ 278 278 --crate-name $(basename $(notdir $@)) \ 279 279 --sysroot=/dev/null \
+1 -1
tools/perf/arch/riscv/util/header.c
··· 41 41 char *mimpid = NULL; 42 42 char *cpuid = NULL; 43 43 int read; 44 - unsigned long line_sz; 44 + size_t line_sz; 45 45 FILE *cpuinfo; 46 46 47 47 cpuinfo = fopen(CPUINFO, "r");
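For context, getline(3) is declared as ssize_t getline(char **lineptr, size_t *n, FILE *stream); the size argument must be a size_t *, so an unsigned long * is a type mismatch even on ABIs where the two have the same width. A minimal correct-usage sketch:

#define _GNU_SOURCE	/* getline() visibility on older toolchains */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	FILE *f = fopen("/proc/cpuinfo", "r");
	char *line = NULL;
	size_t line_sz = 0;	/* size_t, matching getline's prototype */
	ssize_t read;

	if (!f)
		return 1;
	/* getline() grows 'line' as needed and updates line_sz. */
	while ((read = getline(&line, &line_sz, f)) != -1)
		fputs(line, stdout);
	free(line);
	fclose(f);
	return 0;
}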
+3
tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
··· 205 205 case 5: return (void *)~(1ull << 30); /* trigger extable */ 206 206 case 6: return &f; /* valid addr */ 207 207 case 7: return (void *)((long)&f | 1); /* kernel tricks */ 208 + #ifdef CONFIG_X86_64 209 + case 8: return (void *)VSYSCALL_ADDR; /* vsyscall page address */ 210 + #endif 208 211 default: return NULL; 209 212 } 210 213 }
+8 -4
tools/testing/selftests/kselftest_harness.h
··· 56 56 #include <asm/types.h> 57 57 #include <ctype.h> 58 58 #include <errno.h> 59 - #include <limits.h> 60 59 #include <stdbool.h> 61 60 #include <stdint.h> 62 61 #include <stdio.h> ··· 1158 1159 struct __test_metadata *t) 1159 1160 { 1160 1161 struct __test_xfail *xfail; 1161 - char test_name[LINE_MAX]; 1162 + char *test_name; 1162 1163 const char *diagnostic; 1163 1164 1164 1165 /* reset test struct */ ··· 1166 1167 t->trigger = 0; 1167 1168 memset(t->results->reason, 0, sizeof(t->results->reason)); 1168 1169 1169 - snprintf(test_name, sizeof(test_name), "%s%s%s.%s", 1170 - f->name, variant->name[0] ? "." : "", variant->name, t->name); 1170 + if (asprintf(&test_name, "%s%s%s.%s", f->name, 1171 + variant->name[0] ? "." : "", variant->name, t->name) == -1) { 1172 + ksft_print_msg("ERROR ALLOCATING MEMORY\n"); 1173 + t->exit_code = KSFT_FAIL; 1174 + _exit(t->exit_code); 1175 + } 1171 1176 1172 1177 ksft_print_msg(" RUN %s ...\n", test_name); 1173 1178 ··· 1209 1206 1210 1207 ksft_test_result_code(t->exit_code, test_name, 1211 1208 diagnostic ? "%s" : NULL, diagnostic); 1209 + free(test_name); 1212 1210 } 1213 1211 1214 1212 static int test_harness_run(int argc, char **argv)
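Unlike a fixed LINE_MAX buffer, asprintf() sizes the allocation to fit the formatted string and hands ownership to the caller, who must free() it on success (and must not on failure, when the pointer is left undefined). A standalone sketch of the same pattern, with invented fixture/variant/test names:

#define _GNU_SOURCE	/* asprintf() is a GNU extension */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	const char *fixture = "fixture", *variant = "v1", *test = "case";
	char *test_name;

	/* asprintf() allocates exactly enough; -1 signals allocation failure. */
	if (asprintf(&test_name, "%s%s%s.%s", fixture,
		     variant[0] ? "." : "", variant, test) == -1) {
		fprintf(stderr, "ERROR ALLOCATING MEMORY\n");
		return 1;
	}
	printf("RUN %s\n", test_name);
	free(test_name);
	return 0;
}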
+49
tools/testing/selftests/kvm/aarch64/vgic_init.c
··· 84 84 return v; 85 85 } 86 86 87 + static struct vm_gic vm_gic_create_barebones(uint32_t gic_dev_type) 88 + { 89 + struct vm_gic v; 90 + 91 + v.gic_dev_type = gic_dev_type; 92 + v.vm = vm_create_barebones(); 93 + v.gic_fd = kvm_create_device(v.vm, gic_dev_type); 94 + 95 + return v; 96 + } 97 + 98 + 87 99 static void vm_gic_destroy(struct vm_gic *v) 88 100 { 89 101 close(v->gic_fd); ··· 365 353 366 354 ret = run_vcpu(vcpus[3]); 367 355 TEST_ASSERT(ret == -EINVAL, "dist/rdist overlap detected on 1st vcpu run"); 356 + 357 + vm_gic_destroy(&v); 358 + } 359 + 360 + #define KVM_VGIC_V2_ATTR(offset, cpu) \ 361 + (FIELD_PREP(KVM_DEV_ARM_VGIC_OFFSET_MASK, offset) | \ 362 + FIELD_PREP(KVM_DEV_ARM_VGIC_CPUID_MASK, cpu)) 363 + 364 + #define GIC_CPU_CTRL 0x00 365 + 366 + static void test_v2_uaccess_cpuif_no_vcpus(void) 367 + { 368 + struct vm_gic v; 369 + u64 val = 0; 370 + int ret; 371 + 372 + v = vm_gic_create_barebones(KVM_DEV_TYPE_ARM_VGIC_V2); 373 + subtest_dist_rdist(&v); 374 + 375 + ret = __kvm_has_device_attr(v.gic_fd, KVM_DEV_ARM_VGIC_GRP_CPU_REGS, 376 + KVM_VGIC_V2_ATTR(GIC_CPU_CTRL, 0)); 377 + TEST_ASSERT(ret && errno == EINVAL, 378 + "accessed non-existent CPU interface, want errno: %i", 379 + EINVAL); 380 + ret = __kvm_device_attr_get(v.gic_fd, KVM_DEV_ARM_VGIC_GRP_CPU_REGS, 381 + KVM_VGIC_V2_ATTR(GIC_CPU_CTRL, 0), &val); 382 + TEST_ASSERT(ret && errno == EINVAL, 383 + "accessed non-existent CPU interface, want errno: %i", 384 + EINVAL); 385 + ret = __kvm_device_attr_set(v.gic_fd, KVM_DEV_ARM_VGIC_GRP_CPU_REGS, 386 + KVM_VGIC_V2_ATTR(GIC_CPU_CTRL, 0), &val); 387 + TEST_ASSERT(ret && errno == EINVAL, 388 + "accessed non-existent CPU interface, want errno: %i", 389 + EINVAL); 368 390 369 391 vm_gic_destroy(&v); 370 392 } ··· 720 674 { 721 675 test_vcpus_then_vgic(gic_dev_type); 722 676 test_vgic_then_vcpus(gic_dev_type); 677 + 678 + if (VGIC_DEV_IS_V2(gic_dev_type)) 679 + test_v2_uaccess_cpuif_no_vcpus(); 723 680 724 681 if (VGIC_DEV_IS_V3(gic_dev_type)) { 725 682 test_v3_new_redist_regions();
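KVM_VGIC_V2_ATTR above packs two fields into one attribute word with FIELD_PREP(), which shifts a value up to a contiguous bitmask and masks the result. A userspace approximation, assuming GCC/Clang builtins and using illustrative stand-in masks rather than the kernel's KVM_DEV_ARM_VGIC_* definitions:

#include <stdint.h>
#include <stdio.h>

/* Shift of a contiguous mask = index of its lowest set bit. */
#define BF_SHF(mask)          (__builtin_ctzll(mask))
#define FIELD_PREP(mask, val) (((uint64_t)(val) << BF_SHF(mask)) & (mask))

#define OFFSET_MASK 0x00000000ffffffffULL /* hypothetical offset field */
#define CPUID_MASK  0x000000ff00000000ULL /* hypothetical cpu id field */

static uint64_t vgic_v2_attr(uint32_t offset, uint32_t cpu)
{
	return FIELD_PREP(OFFSET_MASK, offset) | FIELD_PREP(CPUID_MASK, cpu);
}

int main(void)
{
	/* GIC_CPU_CTRL (0x00) on CPU 0 packs to zero in both fields. */
	printf("attr=%#llx\n", (unsigned long long)vgic_v2_attr(0x00, 0));
	/* Offset 0x04 on CPU 2: low word 0x4, cpu id in bits 39:32. */
	printf("attr=%#llx\n", (unsigned long long)vgic_v2_attr(0x04, 2));
	return 0;
}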
+1
tools/testing/selftests/mm/mdwe_test.c
··· 7 7 #include <linux/mman.h> 8 8 #include <linux/prctl.h> 9 9 10 + #define _GNU_SOURCE 10 11 #include <stdio.h> 11 12 #include <stdlib.h> 12 13 #include <sys/auxv.h>
-38
tools/testing/selftests/mm/protection_keys.c
··· 54 54 u64 shadow_pkey_reg; 55 55 int dprint_in_signal; 56 56 char dprint_in_signal_buffer[DPRINT_IN_SIGNAL_BUF_SIZE]; 57 - char buf[256]; 58 57 59 58 void cat_into_file(char *str, char *file) 60 59 { ··· 1744 1745 shadow_pkey_reg = __read_pkey_reg(); 1745 1746 } 1746 1747 1747 - pid_t parent_pid; 1748 - 1749 - void restore_settings_atexit(void) 1750 - { 1751 - if (parent_pid == getpid()) 1752 - cat_into_file(buf, "/proc/sys/vm/nr_hugepages"); 1753 - } 1754 - 1755 - void save_settings(void) 1756 - { 1757 - int fd; 1758 - int err; 1759 - 1760 - if (geteuid()) 1761 - return; 1762 - 1763 - fd = open("/proc/sys/vm/nr_hugepages", O_RDONLY); 1764 - if (fd < 0) { 1765 - fprintf(stderr, "error opening\n"); 1766 - perror("error: "); 1767 - exit(__LINE__); 1768 - } 1769 - 1770 - /* -1 to guarantee leaving the trailing \0 */ 1771 - err = read(fd, buf, sizeof(buf)-1); 1772 - if (err < 0) { 1773 - fprintf(stderr, "error reading\n"); 1774 - perror("error: "); 1775 - exit(__LINE__); 1776 - } 1777 - 1778 - parent_pid = getpid(); 1779 - atexit(restore_settings_atexit); 1780 - close(fd); 1781 - } 1782 - 1783 1748 int main(void) 1784 1749 { 1785 1750 int nr_iterations = 22; ··· 1751 1788 1752 1789 srand((unsigned int)time(NULL)); 1753 1790 1754 - save_settings(); 1755 1791 setup_handlers(); 1756 1792 1757 1793 printf("has pkeys: %d\n", pkeys_supported);
+2
tools/testing/selftests/mm/run_vmtests.sh
··· 385 385 CATEGORY="ksm" run_test ./ksm_functional_tests 386 386 387 387 # protection_keys tests 388 + nr_hugepgs=$(cat /proc/sys/vm/nr_hugepages) 388 389 if [ -x ./protection_keys_32 ] 389 390 then 390 391 CATEGORY="pkey" run_test ./protection_keys_32 ··· 395 394 then 396 395 CATEGORY="pkey" run_test ./protection_keys_64 397 396 fi 397 + echo "$nr_hugepgs" > /proc/sys/vm/nr_hugepages 398 398 399 399 if [ -x ./soft-dirty ] 400 400 then
+1 -1
tools/testing/selftests/mm/split_huge_page_test.c
··· 300 300 char **addr) 301 301 { 302 302 size_t i; 303 - int dummy; 303 + int __attribute__((unused)) dummy = 0; 304 304 305 305 srand(time(NULL)); 306 306
+1 -1
tools/testing/selftests/riscv/hwprobe/cbo.c
··· 19 19 #include "hwprobe.h" 20 20 #include "../../kselftest.h" 21 21 22 - #define MK_CBO(fn) cpu_to_le32((fn) << 20 | 10 << 15 | 2 << 12 | 0 << 7 | 15) 22 + #define MK_CBO(fn) le32_bswap((uint32_t)(fn) << 20 | 10 << 15 | 2 << 12 | 0 << 7 | 15) 23 23 24 24 static char mem[4096] __aligned(4096) = { [0 ... 4095] = 0xa5 }; 25 25
+10
tools/testing/selftests/riscv/hwprobe/hwprobe.h
··· 4 4 #include <stddef.h> 5 5 #include <asm/hwprobe.h> 6 6 7 + #if __BYTE_ORDER == __BIG_ENDIAN 8 + # define le32_bswap(_x) \ 9 + ((((_x) & 0x000000ffU) << 24) | \ 10 + (((_x) & 0x0000ff00U) << 8) | \ 11 + (((_x) & 0x00ff0000U) >> 8) | \ 12 + (((_x) & 0xff000000U) >> 24)) 13 + #else 14 + # define le32_bswap(_x) (_x) 15 + #endif 16 + 7 17 /* 8 18 * Rather than relying on having a new enough libc to define this, just do it 9 19 * ourselves. This way we don't need to be coupled to a new-enough libc to
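On a little-endian host le32_bswap() is the identity; on a big-endian host it swaps bytes, so the value lands in memory little-endian either way, which is what MK_CBO needs when storing an instruction encoding. A quick host-side check, with a self-contained copy of the macro for illustration only:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
# define le32_bswap(_x)				\
	((((_x) & 0x000000ffU) << 24) |		\
	 (((_x) & 0x0000ff00U) << 8)  |		\
	 (((_x) & 0x00ff0000U) >> 8)  |		\
	 (((_x) & 0xff000000U) >> 24))
#else
# define le32_bswap(_x) (_x)
#endif

int main(void)
{
	uint32_t insn = le32_bswap(0x12345678U);
	unsigned char b[4];

	memcpy(b, &insn, sizeof(b));
	/* Regardless of host endianness, memory holds 78 56 34 12. */
	printf("%02x %02x %02x %02x\n", b[0], b[1], b[2], b[3]);
	return 0;
}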
+14
tools/testing/selftests/syscall_user_dispatch/sud_test.c
··· 158 158 159 159 /* In preparation for sigreturn. */ 160 160 SYSCALL_DISPATCH_OFF(glob_sel); 161 + 162 + /* 163 + * The tests for argument handling assume that `syscall(x) == x`. This 164 + * is a NOP on x86 because the syscall number is passed in %rax, which 165 + * happens to also be the function ABI return register. Other 166 + * architectures may need to swizzle the arguments around. 167 + */ 168 + #if defined(__riscv) 169 + /* REG_A7 is not defined in libc headers */ 170 + # define REG_A7 (REG_A0 + 7) 171 + 172 + ((ucontext_t *)ucontext)->uc_mcontext.__gregs[REG_A0] = 173 + ((ucontext_t *)ucontext)->uc_mcontext.__gregs[REG_A7]; 174 + #endif 161 175 } 162 176 163 177 TEST(dispatch_and_return)