···
 What:		/sys/devices/virtual/memory_tiering/memory_tierN/
-		/sys/devices/virtual/memory_tiering/memory_tierN/nodes
+		/sys/devices/virtual/memory_tiering/memory_tierN/nodelist
 Date:		August 2022
 Contact:	Linux memory management mailing list <linux-mm@kvack.org>
 Description:	Directory with details of a specific memory tier
···
 		A smaller value of N implies a higher (faster) memory tier in the
 		hierarchy.

-		nodes: NUMA nodes that are part of this memory tier.
+		nodelist: NUMA nodes that are part of this memory tier.
Documentation/kernel-hacking/hacking.rst (+1, -1)

···
 .. warning::

    Beware that this will return a false positive if a
-   :ref:`botton half lock <local_bh_disable>` is held.
+   :ref:`bottom half lock <local_bh_disable>` is held.

 Some Basic Rules
 ================
Documentation/process/2.Process.rst (+3, -10)

···
 5.2.21 was the final stable update of the 5.2 release.

 Some kernels are designated "long term" kernels; they will receive support
-for a longer period.  As of this writing, the current long term kernels
-and their maintainers are:
+for a longer period.  Please refer to the following link for the list of active
+long term kernel versions and their maintainers:

-	====== ================================ =======================
-	3.16   Ben Hutchings                    (very long-term kernel)
-	4.4    Greg Kroah-Hartman & Sasha Levin (very long-term kernel)
-	4.9    Greg Kroah-Hartman & Sasha Levin
-	4.14   Greg Kroah-Hartman & Sasha Levin
-	4.19   Greg Kroah-Hartman & Sasha Levin
-	5.4    Greg Kroah-Hartman & Sasha Levin
-	====== ================================ =======================
+	https://www.kernel.org/category/releases.html

 The selection of a kernel for long-term support is purely a matter of a
 maintainer having the need and the time to maintain that release.  There
Documentation/process/howto.rst (+1, -1)

···
  - "C: A Reference Manual" by Harbison and Steele [Prentice Hall]

 The kernel is written using GNU C and the GNU toolchain.  While it
-adheres to the ISO C89 standard, it uses a number of extensions that are
+adheres to the ISO C11 standard, it uses a number of extensions that are
 not featured in the standard.  The kernel is a freestanding C
 environment, with no reliance on the standard C library, so some
 portions of the C standard are not supported.  Arbitrary long long
Documentation/trace/histogram.rst (+1, -1)

···
    will use the event's kernel stacktrace as the key.  The keywords
    'keys' or 'key' can be used to specify keys, and the keywords
    'values', 'vals', or 'val' can be used to specify values.  Compound
-   keys consisting of up to two fields can be specified by the 'keys'
+   keys consisting of up to three fields can be specified by the 'keys'
    keyword.  Hashing a compound key produces a unique entry in the
    table for each unique combination of component keys, and can be
    useful for providing more fine-grained summaries of event data.
···
 	- "C: A Reference Manual" di Harbison and Steele [Prentice Hall]

 Il kernel è stato scritto usando GNU C e la toolchain GNU.
-Sebbene si attenga allo standard ISO C89, esso utilizza una serie di
+Sebbene si attenga allo standard ISO C11, esso utilizza una serie di
 estensioni che non sono previste in questo standard. Il kernel è un
 ambiente C indipendente, che non ha alcuna dipendenza dalle librerie
 C standard, così alcune parti del C standard non sono supportate.
Documentation/translations/ja_JP/howto.rst (+1, -1)

···
  - 『新・詳説 C 言語 H&S リファレンス』 (サミュエル P ハービソン/ガイ L スティール共著 斉藤 信男監訳)[ソフトバンク]

 カーネルは GNU C と GNU ツールチェインを使って書かれています。カーネル
-は ISO C89 仕様に準拠して書く一方で、標準には無い言語拡張を多く使って
+は ISO C11 仕様に準拠して書く一方で、標準には無い言語拡張を多く使って
 います。カーネルは標準 C ライブラリに依存しない、C 言語非依存環境です。
 そのため、C の標準の中で使えないものもあります。特に任意の long long
 の除算や浮動小数点は使えません。カーネルがツールチェインや C 言語拡張
Documentation/translations/ko_KR/howto.rst (+1, -1)

···
  - "Practical C Programming" by Steve Oualline [O'Reilly]
  - "C: A Reference Manual" by Harbison and Steele [Prentice Hall]

-커널은 GNU C와 GNU 툴체인을 사용하여 작성되었다. 이 툴들은 ISO C89 표준을
+커널은 GNU C와 GNU 툴체인을 사용하여 작성되었다. 이 툴들은 ISO C11 표준을
 따르는 반면 표준에 있지 않은 많은 확장기능도 가지고 있다. 커널은 표준 C
 라이브러리와는 관계없이 freestanding C 환경이어서 C 표준의 일부는
 지원되지 않는다. 임의의 long long 나누기나 floating point는 지원되지 않는다.
···
 #define SVERSION_ANY_ID		PA_SVERSION_ANY_ID

 struct hp_hardware {
-	unsigned short	hw_type:5;	/* HPHW_xxx */
-	unsigned short	hversion;
-	unsigned long	sversion:28;
-	unsigned short	opt;
-	const char	name[80];	/* The hardware description */
-};
+	unsigned int	hw_type:8;	/* HPHW_xxx */
+	unsigned int	hversion:12;
+	unsigned int	sversion:12;
+	unsigned char	opt;
+	unsigned char	name[59];	/* The hardware description */
+} __packed;

 struct parisc_device;
arch/parisc/include/uapi/asm/pdc.h (+13, -23)

···
 #if !defined(__ASSEMBLY__)

-/* flags of the device_path */
+/* flags for hardware_path */
 #define	PF_AUTOBOOT	0x80
 #define	PF_AUTOSEARCH	0x40
 #define	PF_TIMER	0x0F

-struct device_path {		/* page 1-69 */
-	unsigned char flags;	/* flags see above! */
-	unsigned char bc[6];	/* bus converter routing info */
-	unsigned char mod;
-	unsigned int  layers[6];/* device-specific layer-info */
-} __attribute__((aligned(8))) ;
+struct hardware_path {
+	unsigned char flags;	/* see bit definitions below */
+	signed char   bc[6];	/* Bus Converter routing info to a specific */
+				/* I/O adaptor (< 0 means none, > 63 resvd) */
+	signed char   mod;	/* fixed field of specified module */
+};
+
+struct pdc_module_path {	/* page 1-69 */
+	struct hardware_path path;
+	unsigned int layers[6];	/* device-specific info (ctlr #, unit # ...) */
+} __attribute__((aligned(8)));

 struct pz_device {
-	struct	device_path dp;	/* see above */
+	struct pdc_module_path dp;	/* see above */
 	/* struct	iomod *hpa; */
 	unsigned int hpa;	/* HPA base address */
 	/* char *spa; */
···
 	int	factor;
 	int	width;
 	int	mode;
-};
-
-struct hardware_path {
-	char  flags;	/* see bit definitions below */
-	char  bc[6];	/* Bus Converter routing info to a specific */
-			/* I/O adaptor (< 0 means none, > 63 resvd) */
-	char  mod;	/* fixed field of specified module */
-};
-
-/*
- * Device path specifications used by PDC.
- */
-struct pdc_module_path {
-	struct hardware_path path;
-	unsigned int layers[6];	/* device-specific info (ctlr #, unit # ...) */
 };

 /* Only used on some pre-PA2.0 boxes */
···
 	if (radix_enabled())
 		return;
+	/*
+	 * apply_to_page_range can call us this preempt enabled when
+	 * operating on kernel page tables.
+	 */
+	preempt_disable();
 	batch = this_cpu_ptr(&ppc64_tlb_batch);
 	batch->active = 1;
 }
···
 	if (batch->index)
 		__flush_tlb_pending(batch);
 	batch->active = 0;
+	preempt_enable();
 }

 #define arch_flush_lazy_mmu_mode()      do {} while (0)
arch/powerpc/include/asm/syscalls.h (+7)

···
 		unsigned long len1, unsigned long len2);
 long sys_ppc32_fadvise64(int fd, u32 unused, u32 offset1, u32 offset2,
 			 size_t len, int advice);
+long sys_ppc_sync_file_range2(int fd, unsigned int flags,
+			      unsigned int offset1,
+			      unsigned int offset2,
+			      unsigned int nbytes1,
+			      unsigned int nbytes2);
+long sys_ppc_fallocate(int fd, int mode, u32 offset1, u32 offset2,
+		       u32 len1, u32 len2);
 #endif
 #ifdef CONFIG_COMPAT
 long compat_sys_mmap2(unsigned long addr, size_t len,
arch/powerpc/kernel/exceptions-64e.S (+7)

···
 	EXCEPTION_COMMON(0x260)
 	CHECK_NAPPING()
 	addi	r3,r1,STACK_FRAME_OVERHEAD
+	/*
+	 * XXX: Returning from performance_monitor_exception taken as a
+	 * soft-NMI (Linux irqs disabled) may be risky to use interrupt_return
+	 * and could cause bugs in return or elsewhere. That case should just
+	 * restore registers and return. There is a workaround for one known
+	 * problem in interrupt_exit_kernel_prepare().
+	 */
 	bl	performance_monitor_exception
 	b	interrupt_return
···
 	if (regs_is_unrecoverable(regs))
 		unrecoverable_exception(regs);
 	/*
-	 * CT_WARN_ON comes here via program_check_exception,
-	 * so avoid recursion.
+	 * CT_WARN_ON comes here via program_check_exception, so avoid
+	 * recursion.
+	 *
+	 * Skip the assertion on PMIs on 64e to work around a problem caused
+	 * by NMI PMIs incorrectly taking this interrupt return path, it's
+	 * possible for this to hit after interrupt exit to user switches
+	 * context to user. See also the comment in the performance monitor
+	 * handler in exceptions-64e.S
 	 */
-	if (TRAP(regs) != INTERRUPT_PROGRAM)
+	if (!IS_ENABLED(CONFIG_PPC_BOOK3E_64) &&
+	    TRAP(regs) != INTERRUPT_PROGRAM &&
+	    TRAP(regs) != INTERRUPT_PERFMON)
 		CT_WARN_ON(ct_state() == CONTEXT_USER);

 	kuap = kuap_get_and_assert_locked();
arch/powerpc/kernel/interrupt_64.S (+11, -2)

···
 	 * Returning to soft-disabled context.
 	 * Check if a MUST_HARD_MASK interrupt has become pending, in which
 	 * case we need to disable MSR[EE] in the return context.
+	 *
+	 * The MSR[EE] check catches among other things the short incoherency
+	 * in hard_irq_disable() between clearing MSR[EE] and setting
+	 * PACA_IRQ_HARD_DIS.
 	 */
 	ld	r12,_MSR(r1)
 	andi.	r10,r12,MSR_EE
 	beq	.Lfast_kernel_interrupt_return_\srr\() // EE already disabled
 	lbz	r11,PACAIRQHAPPENED(r13)
 	andi.	r10,r11,PACA_IRQ_MUST_HARD_MASK
-	beq	.Lfast_kernel_interrupt_return_\srr\() // No HARD_MASK pending
+	bne	1f // HARD_MASK is pending
+	// No HARD_MASK pending, clear possible HARD_DIS set by interrupt
+	andi.	r11,r11,(~PACA_IRQ_HARD_DIS)@l
+	stb	r11,PACAIRQHAPPENED(r13)
+	b	.Lfast_kernel_interrupt_return_\srr\()

-	/* Must clear MSR_EE from _MSR */
+
+1:	/* Must clear MSR_EE from _MSR */
 #ifdef CONFIG_PPC_BOOK3S
 	li	r10,0
 	/* Clear valid before changing _MSR */
···
 config KVM_BOOK3S_32
 	tristate "KVM support for PowerPC book3s_32 processors"
 	depends on PPC_BOOK3S_32 && !SMP && !PTE_64BIT
+	depends on !CONTEXT_TRACKING_USER
 	select KVM
 	select KVM_BOOK3S_32_HANDLER
 	select KVM_BOOK3S_PR_POSSIBLE
···
 config KVM_BOOK3S_64_PR
 	tristate "KVM support without using hypervisor mode in host"
 	depends on KVM_BOOK3S_64
+	depends on !CONTEXT_TRACKING_USER
 	select KVM_BOOK3S_PR_POSSIBLE
 	help
 	  Support running guest kernels in virtual machines on processors
···
 config KVM_E500V2
 	bool "KVM support for PowerPC E500v2 processors"
 	depends on PPC_E500 && !PPC_E500MC
+	depends on !CONTEXT_TRACKING_USER
 	select KVM
 	select KVM_MMIO
 	select MMU_NOTIFIER
···
 config KVM_E500MC
 	bool "KVM support for PowerPC E500MC/E5500/E6500 processors"
 	depends on PPC_E500MC
+	depends on !CONTEXT_TRACKING_USER
 	select KVM
 	select KVM_MMIO
 	select KVM_BOOKE_HV
arch/powerpc/lib/vmx-helper.c (+11, -1)

···
 {
 	disable_kernel_altivec();
 	pagefault_enable();
-	preempt_enable();
+	preempt_enable_no_resched();
+	/*
+	 * Must never explicitly call schedule (including preempt_enable())
+	 * while in a kuap-unlocked user copy, because the AMR register will
+	 * not be saved and restored across context switch. However preempt
+	 * kernels need to be preempted as soon as possible if need_resched is
+	 * set and we are preemptible. The hack here is to schedule a
+	 * decrementer to fire here and reschedule for us if necessary.
+	 */
+	if (IS_ENABLED(CONFIG_PREEMPT) && need_resched())
+		set_dec(1);
 	return 0;
 }
arch/powerpc/mm/book3s64/hash_native.c (+59, -8)

···
 static DEFINE_RAW_SPINLOCK(native_tlbie_lock);

+#ifdef CONFIG_LOCKDEP
+static struct lockdep_map hpte_lock_map =
+	STATIC_LOCKDEP_MAP_INIT("hpte_lock", &hpte_lock_map);
+
+static void acquire_hpte_lock(void)
+{
+	lock_map_acquire(&hpte_lock_map);
+}
+
+static void release_hpte_lock(void)
+{
+	lock_map_release(&hpte_lock_map);
+}
+#else
+static void acquire_hpte_lock(void)
+{
+}
+
+static void release_hpte_lock(void)
+{
+}
+#endif
+
 static inline unsigned long ___tlbie(unsigned long vpn, int psize,
 				     int apsize, int ssize)
 {
···
 {
 	unsigned long *word = (unsigned long *)&hptep->v;

+	acquire_hpte_lock();
 	while (1) {
 		if (!test_and_set_bit_lock(HPTE_LOCK_BIT, word))
 			break;
···
 {
 	unsigned long *word = (unsigned long *)&hptep->v;

+	release_hpte_lock();
 	clear_bit_unlock(HPTE_LOCK_BIT, word);
 }
···
 {
 	struct hash_pte *hptep = htab_address + hpte_group;
 	unsigned long hpte_v, hpte_r;
+	unsigned long flags;
 	int i;
+
+	local_irq_save(flags);

 	if (!(vflags & HPTE_V_BOLTED)) {
 		DBG_LOW("    insert(group=%lx, vpn=%016lx, pa=%016lx,"
···
 		hptep++;
 	}

-	if (i == HPTES_PER_GROUP)
+	if (i == HPTES_PER_GROUP) {
+		local_irq_restore(flags);
 		return -1;
+	}

 	hpte_v = hpte_encode_v(vpn, psize, apsize, ssize) | vflags | HPTE_V_VALID;
 	hpte_r = hpte_encode_r(pa, psize, apsize) | rflags;
···
 	 * Now set the first dword including the valid bit
 	 * NOTE: this also unlocks the hpte
 	 */
+	release_hpte_lock();
 	hptep->v = cpu_to_be64(hpte_v);

 	__asm__ __volatile__ ("ptesync" : : : "memory");
+
+	local_irq_restore(flags);

 	return i | (!!(vflags & HPTE_V_SECONDARY) << 3);
 }
···
 		return -1;

 	/* Invalidate the hpte. NOTE: this also unlocks it */
+	release_hpte_lock();
 	hptep->v = 0;

 	return i;
···
 	struct hash_pte *hptep = htab_address + slot;
 	unsigned long hpte_v, want_v;
 	int ret = 0, local = 0;
+	unsigned long irqflags;
+
+	local_irq_save(irqflags);

 	want_v = hpte_encode_avpn(vpn, bpsize, ssize);
···
 	 */
 	if (!(flags & HPTE_NOHPTE_UPDATE))
 		tlbie(vpn, bpsize, apsize, ssize, local);
+
+	local_irq_restore(irqflags);

 	return ret;
 }
···
 	unsigned long vsid;
 	long slot;
 	struct hash_pte *hptep;
+	unsigned long flags;
+
+	local_irq_save(flags);

 	vsid = get_kernel_vsid(ea, ssize);
 	vpn = hpt_vpn(ea, vsid, ssize);
···
 	 * actual page size will be same.
 	 */
 	tlbie(vpn, psize, psize, ssize, 0);
+
+	local_irq_restore(flags);
 }

 /*
···
 	unsigned long vsid;
 	long slot;
 	struct hash_pte *hptep;
+	unsigned long flags;
+
+	local_irq_save(flags);

 	vsid = get_kernel_vsid(ea, ssize);
 	vpn = hpt_vpn(ea, vsid, ssize);
···
 	/* Invalidate the TLB */
 	tlbie(vpn, psize, psize, ssize, 0);
+
+	local_irq_restore(flags);
+
 	return 0;
 }
···
 		/* recheck with locks held */
 		hpte_v = hpte_get_old_v(hptep);

-		if (HPTE_V_COMPARE(hpte_v, want_v) && (hpte_v & HPTE_V_VALID))
+		if (HPTE_V_COMPARE(hpte_v, want_v) && (hpte_v & HPTE_V_VALID)) {
 			/* Invalidate the hpte. NOTE: this also unlocks it */
+			release_hpte_lock();
 			hptep->v = 0;
-		else
+		} else
 			native_unlock_hpte(hptep);
 	}
 	/*
···
 		hpte_v = hpte_get_old_v(hptep);

 		if (HPTE_V_COMPARE(hpte_v, want_v) && (hpte_v & HPTE_V_VALID)) {
-			/*
-			 * Invalidate the hpte. NOTE: this also unlocks it
-			 */
-
+			/* Invalidate the hpte. NOTE: this also unlocks it */
+			release_hpte_lock();
 			hptep->v = 0;
 		} else
 			native_unlock_hpte(hptep);
···
 			if (!HPTE_V_COMPARE(hpte_v, want_v) || !(hpte_v & HPTE_V_VALID))
 				native_unlock_hpte(hptep);
-			else
+			else {
+				release_hpte_lock();
 				hptep->v = 0;
+			}

 		} pte_iterate_hashed_end();
 	}
arch/powerpc/mm/book3s64/hash_pgtable.c (+5, -3)

···
 struct change_memory_parms {
 	unsigned long start, end, newpp;
-	unsigned int step, nr_cpus, master_cpu;
+	unsigned int step, nr_cpus;
+	atomic_t master_cpu;
 	atomic_t cpu_counter;
 };
···
 {
 	struct change_memory_parms *parms = data;

-	if (parms->master_cpu != smp_processor_id())
+	// First CPU goes through, all others wait.
+	if (atomic_xchg(&parms->master_cpu, 1) == 1)
 		return chmem_secondary_loop(parms);

 	// Wait for all but one CPU (this one) to call-in
···
 	chmem_parms.end = end;
 	chmem_parms.step = step;
 	chmem_parms.newpp = newpp;
-	chmem_parms.master_cpu = smp_processor_id();
+	atomic_set(&chmem_parms.master_cpu, 0);

 	cpus_read_lock();
···
 #include <asm/drmem.h>

 #include "pseries.h"
+#include "vas.h"	/* pseries_vas_dlpar_cpu() */

 /*
  * This isn't a module but we expose that to userspace
···
 			return -EINVAL;

 		retval = update_ppp(new_entitled_ptr, NULL);
+
+		if (retval == H_SUCCESS || retval == H_CONSTRAINED) {
+			/*
+			 * The hypervisor assigns VAS resources based
+			 * on entitled capacity for shared mode.
+			 * Reconfig VAS windows based on DLPAR CPU events.
+			 */
+			if (pseries_vas_dlpar_cpu() != 0)
+				retval = H_HARDWARE;
+		}
 	} else if (!strcmp(kbuf, "capacity_weight")) {
 		char *endp;
 		*new_weight_ptr = (u8) simple_strtoul(tmp, &endp, 10);
arch/powerpc/platforms/pseries/vas.c (+62, -21)

···
 	struct vas_user_win_ref *tsk_ref;
 	int rc;

-	rc = h_get_nx_fault(txwin->vas_win.winid, (u64)virt_to_phys(&crb));
-	if (!rc) {
-		tsk_ref = &txwin->vas_win.task_ref;
-		vas_dump_crb(&crb);
-		vas_update_csb(&crb, tsk_ref);
+	while (atomic_read(&txwin->pending_faults)) {
+		rc = h_get_nx_fault(txwin->vas_win.winid, (u64)virt_to_phys(&crb));
+		if (!rc) {
+			tsk_ref = &txwin->vas_win.task_ref;
+			vas_dump_crb(&crb);
+			vas_update_csb(&crb, tsk_ref);
+		}
+		atomic_dec(&txwin->pending_faults);
 	}

 	return IRQ_HANDLED;
+}
+
+/*
+ * irq_default_primary_handler() can be used only with IRQF_ONESHOT
+ * which disables IRQ before executing the thread handler and enables
+ * it after. But this disabling interrupt sets the VAS IRQ OFF
+ * state in the hypervisor. If the NX generates fault interrupt
+ * during this window, the hypervisor will not deliver this
+ * interrupt to the LPAR. So use VAS specific IRQ handler instead
+ * of calling the default primary handler.
+ */
+static irqreturn_t pseries_vas_irq_handler(int irq, void *data)
+{
+	struct pseries_vas_window *txwin = data;
+
+	/*
+	 * The thread hanlder will process this interrupt if it is
+	 * already running.
+	 */
+	atomic_inc(&txwin->pending_faults);
+
+	return IRQ_WAKE_THREAD;
 }

 /*
···
 		goto out_irq;
 	}

-	rc = request_threaded_irq(txwin->fault_virq, NULL,
-			pseries_vas_fault_thread_fn, IRQF_ONESHOT,
+	rc = request_threaded_irq(txwin->fault_virq,
+			pseries_vas_irq_handler,
+			pseries_vas_fault_thread_fn, 0,
 			txwin->name, txwin);
 	if (rc) {
 		pr_err("VAS-Window[%d]: Request IRQ(%u) failed with %d\n",
···
 	mutex_unlock(&vas_pseries_mutex);
 	return rc;
 }
+
+int pseries_vas_dlpar_cpu(void)
+{
+	int new_nr_creds, rc;
+
+	rc = h_query_vas_capabilities(H_QUERY_VAS_CAPABILITIES,
+				      vascaps[VAS_GZIP_DEF_FEAT_TYPE].feat,
+				      (u64)virt_to_phys(&hv_cop_caps));
+	if (!rc) {
+		new_nr_creds = be16_to_cpu(hv_cop_caps.target_lpar_creds);
+		rc = vas_reconfig_capabilties(VAS_GZIP_DEF_FEAT_TYPE, new_nr_creds);
+	}
+
+	if (rc)
+		pr_err("Failed reconfig VAS capabilities with DLPAR\n");
+
+	return rc;
+}
+
 /*
  * Total number of default credits available (target_credits)
  * in LPAR depends on number of cores configured. It varies based on
···
 	struct of_reconfig_data *rd = data;
 	struct device_node *dn = rd->dn;
 	const __be32 *intserv = NULL;
-	int new_nr_creds, len, rc = 0;
+	int len;
+
+	/*
+	 * For shared CPU partition, the hypervisor assigns total credits
+	 * based on entitled core capacity. So updating VAS windows will
+	 * be called from lparcfg_write().
+	 */
+	if (is_shared_processor())
+		return NOTIFY_OK;

 	if ((action == OF_RECONFIG_ATTACH_NODE) ||
 	    (action == OF_RECONFIG_DETACH_NODE))
···
 	if (!intserv)
 		return NOTIFY_OK;

-	rc = h_query_vas_capabilities(H_QUERY_VAS_CAPABILITIES,
-				      vascaps[VAS_GZIP_DEF_FEAT_TYPE].feat,
-				      (u64)virt_to_phys(&hv_cop_caps));
-	if (!rc) {
-		new_nr_creds = be16_to_cpu(hv_cop_caps.target_lpar_creds);
-		rc = vas_reconfig_capabilties(VAS_GZIP_DEF_FEAT_TYPE,
-					      new_nr_creds);
-	}
-
-	if (rc)
-		pr_err("Failed reconfig VAS capabilities with DLPAR\n");
-
-	return rc;
+	return pseries_vas_dlpar_cpu();
 }

 static struct notifier_block pseries_vas_nb = {
arch/powerpc/platforms/pseries/vas.h (+6)

···
 	u64 flags;
 	char *name;
 	int fault_virq;
+	atomic_t pending_faults;	/* Number of pending faults */
 };

 int sysfs_add_vas_caps(struct vas_cop_feat_caps *caps);
···
 #ifdef CONFIG_PPC_VAS
 int vas_migration_handler(int action);
+int pseries_vas_dlpar_cpu(void);
 #else
 static inline int vas_migration_handler(int action)
+{
+	return 0;
+}
+static inline int pseries_vas_dlpar_cpu(void)
 {
 	return 0;
 }
arch/riscv/Kconfig (+13, -4)

···
 	  If you don't know what to do here, say Y.

-config CC_HAS_ZICBOM
+config TOOLCHAIN_HAS_ZICBOM
 	bool
-	default y if 64BIT && $(cc-option,-mabi=lp64 -march=rv64ima_zicbom)
-	default y if 32BIT && $(cc-option,-mabi=ilp32 -march=rv32ima_zicbom)
+	default y
+	depends on !64BIT || $(cc-option,-mabi=lp64 -march=rv64ima_zicbom)
+	depends on !32BIT || $(cc-option,-mabi=ilp32 -march=rv32ima_zicbom)
+	depends on LLD_VERSION >= 150000 || LD_VERSION >= 23800

 config RISCV_ISA_ZICBOM
 	bool "Zicbom extension support for non-coherent DMA operation"
-	depends on CC_HAS_ZICBOM
+	depends on TOOLCHAIN_HAS_ZICBOM
 	depends on !XIP_KERNEL && MMU
 	select RISCV_DMA_NONCOHERENT
 	select RISCV_ALTERNATIVE
···
 	  non-coherent DMA support on devices that need it.

 	  If you don't know what to do here, say Y.
+
+config TOOLCHAIN_HAS_ZIHINTPAUSE
+	bool
+	default y
+	depends on !64BIT || $(cc-option,-mabi=lp64 -march=rv64ima_zihintpause)
+	depends on !32BIT || $(cc-option,-mabi=ilp32 -march=rv32ima_zihintpause)
+	depends on LLD_VERSION >= 150000 || LD_VERSION >= 23600

 config FPU
 	bool "FPU support"
arch/riscv/Makefile (+2, -4)

···
 riscv-march-$(toolchain-need-zicsr-zifencei) := $(riscv-march-y)_zicsr_zifencei

 # Check if the toolchain supports Zicbom extension
-toolchain-supports-zicbom := $(call cc-option-yn, -march=$(riscv-march-y)_zicbom)
-riscv-march-$(toolchain-supports-zicbom) := $(riscv-march-y)_zicbom
+riscv-march-$(CONFIG_TOOLCHAIN_HAS_ZICBOM) := $(riscv-march-y)_zicbom

 # Check if the toolchain supports Zihintpause extension
-toolchain-supports-zihintpause := $(call cc-option-yn, -march=$(riscv-march-y)_zihintpause)
-riscv-march-$(toolchain-supports-zihintpause) := $(riscv-march-y)_zihintpause
+riscv-march-$(CONFIG_TOOLCHAIN_HAS_ZIHINTPAUSE) := $(riscv-march-y)_zihintpause

 KBUILD_CFLAGS += -march=$(subst fd,,$(riscv-march-y))
 KBUILD_AFLAGS += -march=$(riscv-march-y)
···
 	 * Reduce instruction retirement.
 	 * This assumes the PC changes.
 	 */
-#ifdef __riscv_zihintpause
+#ifdef CONFIG_TOOLCHAIN_HAS_ZIHINTPAUSE
 	__asm__ __volatile__ ("pause");
 #else
 	/* Encoding of the pause instruction */
···
 		entry->eax = max(entry->eax, 0x80000021);
 		break;
 	case 0x80000001:
+		entry->ebx &= ~GENMASK(27, 16);
 		cpuid_entry_override(entry, CPUID_8000_0001_EDX);
 		cpuid_entry_override(entry, CPUID_8000_0001_ECX);
 		break;
 	case 0x80000006:
-		/* L2 cache and TLB: pass through host info. */
+		/* Drop reserved bits, pass host L2 cache and TLB info. */
+		entry->edx &= ~GENMASK(17, 16);
 		break;
 	case 0x80000007: /* Advanced power management */
 		/* invariant TSC is CPUID.80000007H:EDX[8] */
···
 			g_phys_as = phys_as;

 		entry->eax = g_phys_as | (virt_as << 8);
+		entry->ecx &= ~(GENMASK(31, 16) | GENMASK(11, 8));
 		entry->edx = 0;
 		cpuid_entry_override(entry, CPUID_8000_0008_EBX);
 		break;
···
 		entry->ecx = entry->edx = 0;
 		break;
 	case 0x8000001a:
+		entry->eax &= GENMASK(2, 0);
+		entry->ebx = entry->ecx = entry->edx = 0;
+		break;
 	case 0x8000001e:
 		break;
 	case 0x8000001F:
···
 			entry->eax = entry->ebx = entry->ecx = entry->edx = 0;
 		} else {
 			cpuid_entry_override(entry, CPUID_8000_001F_EAX);
-
+			/* Clear NumVMPL since KVM does not support VMPL. */
+			entry->ebx &= ~GENMASK(31, 12);
 			/*
 			 * Enumerate '0' for "PA bits reduction", the adjusted
 			 * MAXPHYADDR is enumerated directly (see 0x80000008).
arch/x86/kvm/debugfs.c (+6, -1)

···
 static int kvm_mmu_rmaps_stat_open(struct inode *inode, struct file *file)
 {
 	struct kvm *kvm = inode->i_private;
+	int r;

 	if (!kvm_get_kvm_safe(kvm))
 		return -ENOENT;

-	return single_open(file, kvm_mmu_rmaps_stat_show, kvm);
+	r = single_open(file, kvm_mmu_rmaps_stat_show, kvm);
+	if (r < 0)
+		kvm_put_kvm(kvm);
+
+	return r;
 }

 static int kvm_mmu_rmaps_stat_release(struct inode *inode, struct file *file)
arch/x86/kvm/emulate.c (+77, -33)

···
 			     ctxt->mode, linear);
 }

-static inline int assign_eip(struct x86_emulate_ctxt *ctxt, ulong dst,
-			     enum x86emul_mode mode)
+static inline int assign_eip(struct x86_emulate_ctxt *ctxt, ulong dst)
 {
 	ulong linear;
 	int rc;
···
 	if (ctxt->op_bytes != sizeof(unsigned long))
 		addr.ea = dst & ((1UL << (ctxt->op_bytes << 3)) - 1);
-	rc = __linearize(ctxt, addr, &max_size, 1, false, true, mode, &linear);
+	rc = __linearize(ctxt, addr, &max_size, 1, false, true, ctxt->mode, &linear);
 	if (rc == X86EMUL_CONTINUE)
 		ctxt->_eip = addr.ea;
 	return rc;
 }

-static inline int assign_eip_near(struct x86_emulate_ctxt *ctxt, ulong dst)
+static inline int emulator_recalc_and_set_mode(struct x86_emulate_ctxt *ctxt)
 {
-	return assign_eip(ctxt, dst, ctxt->mode);
+	u64 efer;
+	struct desc_struct cs;
+	u16 selector;
+	u32 base3;
+
+	ctxt->ops->get_msr(ctxt, MSR_EFER, &efer);
+
+	if (!(ctxt->ops->get_cr(ctxt, 0) & X86_CR0_PE)) {
+		/* Real mode. cpu must not have long mode active */
+		if (efer & EFER_LMA)
+			return X86EMUL_UNHANDLEABLE;
+		ctxt->mode = X86EMUL_MODE_REAL;
+		return X86EMUL_CONTINUE;
+	}
+
+	if (ctxt->eflags & X86_EFLAGS_VM) {
+		/* Protected/VM86 mode. cpu must not have long mode active */
+		if (efer & EFER_LMA)
+			return X86EMUL_UNHANDLEABLE;
+		ctxt->mode = X86EMUL_MODE_VM86;
+		return X86EMUL_CONTINUE;
+	}
+
+	if (!ctxt->ops->get_segment(ctxt, &selector, &cs, &base3, VCPU_SREG_CS))
+		return X86EMUL_UNHANDLEABLE;
+
+	if (efer & EFER_LMA) {
+		if (cs.l) {
+			/* Proper long mode */
+			ctxt->mode = X86EMUL_MODE_PROT64;
+		} else if (cs.d) {
+			/* 32 bit compatibility mode*/
+			ctxt->mode = X86EMUL_MODE_PROT32;
+		} else {
+			ctxt->mode = X86EMUL_MODE_PROT16;
+		}
+	} else {
+		/* Legacy 32 bit / 16 bit mode */
+		ctxt->mode = cs.d ? X86EMUL_MODE_PROT32 : X86EMUL_MODE_PROT16;
+	}
+
+	return X86EMUL_CONTINUE;
 }

-static int assign_eip_far(struct x86_emulate_ctxt *ctxt, ulong dst,
-			  const struct desc_struct *cs_desc)
+static inline int assign_eip_near(struct x86_emulate_ctxt *ctxt, ulong dst)
 {
-	enum x86emul_mode mode = ctxt->mode;
-	int rc;
+	return assign_eip(ctxt, dst);
+}

-#ifdef CONFIG_X86_64
-	if (ctxt->mode >= X86EMUL_MODE_PROT16) {
-		if (cs_desc->l) {
-			u64 efer = 0;
+static int assign_eip_far(struct x86_emulate_ctxt *ctxt, ulong dst)
+{
+	int rc = emulator_recalc_and_set_mode(ctxt);

-			ctxt->ops->get_msr(ctxt, MSR_EFER, &efer);
-			if (efer & EFER_LMA)
-				mode = X86EMUL_MODE_PROT64;
-		} else
-			mode = X86EMUL_MODE_PROT32; /* temporary value */
-	}
-#endif
-	if (mode == X86EMUL_MODE_PROT16 || mode == X86EMUL_MODE_PROT32)
-		mode = cs_desc->d ? X86EMUL_MODE_PROT32 : X86EMUL_MODE_PROT16;
-	rc = assign_eip(ctxt, dst, mode);
-	if (rc == X86EMUL_CONTINUE)
-		ctxt->mode = mode;
-	return rc;
+	if (rc != X86EMUL_CONTINUE)
+		return rc;
+
+	return assign_eip(ctxt, dst);
 }

 static inline int jmp_rel(struct x86_emulate_ctxt *ctxt, int rel)
···
 	if (rc != X86EMUL_CONTINUE)
 		return rc;

-	rc = assign_eip_far(ctxt, ctxt->src.val, &new_desc);
+	rc = assign_eip_far(ctxt, ctxt->src.val);
 	/* Error handling is not implemented. */
 	if (rc != X86EMUL_CONTINUE)
 		return X86EMUL_UNHANDLEABLE;
···
 				       &new_desc);
 	if (rc != X86EMUL_CONTINUE)
 		return rc;
-	rc = assign_eip_far(ctxt, eip, &new_desc);
+	rc = assign_eip_far(ctxt, eip);
 	/* Error handling is not implemented. */
 	if (rc != X86EMUL_CONTINUE)
 		return X86EMUL_UNHANDLEABLE;
···
 	ctxt->eflags = GET_SMSTATE(u32, smstate, 0x7ff4) | X86_EFLAGS_FIXED;
 	ctxt->_eip = GET_SMSTATE(u32, smstate, 0x7ff0);

-	for (i = 0; i < NR_EMULATOR_GPRS; i++)
+	for (i = 0; i < 8; i++)
 		*reg_write(ctxt, i) = GET_SMSTATE(u32, smstate, 0x7fd0 + i * 4);

 	val = GET_SMSTATE(u32, smstate, 0x7fcc);
···
 	u16 selector;
 	int i, r;

-	for (i = 0; i < NR_EMULATOR_GPRS; i++)
+	for (i = 0; i < 16; i++)
 		*reg_write(ctxt, i) = GET_SMSTATE(u64, smstate, 0x7ff8 - i * 8);

 	ctxt->_eip = GET_SMSTATE(u64, smstate, 0x7f78);
···
 	 * those side effects need to be explicitly handled for both success
 	 * and shutdown.
 	 */
-	return X86EMUL_CONTINUE;
+	return emulator_recalc_and_set_mode(ctxt);

 emulate_shutdown:
 	ctxt->ops->triple_fault(ctxt);
···
 	ops->set_segment(ctxt, ss_sel, &ss, 0, VCPU_SREG_SS);

 	ctxt->_eip = rdx;
+	ctxt->mode = usermode;
 	*reg_write(ctxt, VCPU_REGS_RSP) = rcx;

 	return X86EMUL_CONTINUE;
···
 	if (rc != X86EMUL_CONTINUE)
 		return rc;

-	rc = assign_eip_far(ctxt, ctxt->src.val, &new_desc);
+	rc = assign_eip_far(ctxt, ctxt->src.val);
 	if (rc != X86EMUL_CONTINUE)
 		goto fail;
···

 static int em_cr_write(struct x86_emulate_ctxt *ctxt)
 {
-	if (ctxt->ops->set_cr(ctxt, ctxt->modrm_reg, ctxt->src.val))
+	int cr_num = ctxt->modrm_reg;
+	int r;
+
+	if (ctxt->ops->set_cr(ctxt, cr_num, ctxt->src.val))
 		return emulate_gp(ctxt, 0);

 	/* Disable writeback. */
 	ctxt->dst.type = OP_NONE;
+
+	if (cr_num == 0) {
+		/*
+		 * CR0 write might have updated CR0.PE and/or CR0.PG
+		 * which can affect the cpu's execution mode.
+		 */
+		r = emulator_recalc_and_set_mode(ctxt);
+		if (r != X86EMUL_CONTINUE)
+			return r;
+	}
+
 	return X86EMUL_CONTINUE;
 }
+5
arch/x86/kvm/vmx/vmx.c
···
 	if (!cpu_has_virtual_nmis())
 		enable_vnmi = 0;
 
+#ifdef CONFIG_X86_SGX_KVM
+	if (!cpu_has_vmx_encls_vmexit())
+		enable_sgx = false;
+#endif
+
 	/*
 	 * set_apic_access_page_addr() is used to reload apic access
 	 * page upon invalidation. No need to do anything if not
+21-6
arch/x86/kvm/x86.c
···
 
 	/* we verify if the enable bit is set... */
 	if (system_time & 1) {
-		kvm_gfn_to_pfn_cache_init(vcpu->kvm, &vcpu->arch.pv_time, vcpu,
-					  KVM_HOST_USES_PFN, system_time & ~1ULL,
-					  sizeof(struct pvclock_vcpu_time_info));
+		kvm_gpc_activate(vcpu->kvm, &vcpu->arch.pv_time, vcpu,
+				 KVM_HOST_USES_PFN, system_time & ~1ULL,
+				 sizeof(struct pvclock_vcpu_time_info));
 	} else {
-		kvm_gfn_to_pfn_cache_destroy(vcpu->kvm, &vcpu->arch.pv_time);
+		kvm_gpc_deactivate(vcpu->kvm, &vcpu->arch.pv_time);
 	}
 
 	return;
···
 
 static void kvmclock_reset(struct kvm_vcpu *vcpu)
 {
-	kvm_gfn_to_pfn_cache_destroy(vcpu->kvm, &vcpu->arch.pv_time);
+	kvm_gpc_deactivate(vcpu->kvm, &vcpu->arch.pv_time);
 	vcpu->arch.time = 0;
 }
···
 	    kvm_x86_ops.nested_ops->has_events(vcpu))
 		*req_immediate_exit = true;
 
-	WARN_ON(kvm_is_exception_pending(vcpu));
+	/*
+	 * KVM must never queue a new exception while injecting an event; KVM
+	 * is done emulating and should only propagate the to-be-injected event
+	 * to the VMCS/VMCB.  Queueing a new exception can put the vCPU into an
+	 * infinite loop as KVM will bail from VM-Enter to inject the pending
+	 * exception and start the cycle all over.
+	 *
+	 * Exempt triple faults as they have special handling and won't put the
+	 * vCPU into an infinite loop.  Triple fault can be queued when running
+	 * VMX without unrestricted guest, as that requires KVM to emulate Real
+	 * Mode events (see kvm_inject_realmode_interrupt()).
+	 */
+	WARN_ON_ONCE(vcpu->arch.exception.pending ||
+		     vcpu->arch.exception_vmexit.pending);
 	return 0;
 
 out:
···
 	vcpu->arch.last_vmentry_cpu = -1;
 	vcpu->arch.regs_avail = ~0;
 	vcpu->arch.regs_dirty = ~0;
+
+	kvm_gpc_init(&vcpu->arch.pv_time);
 
 	if (!irqchip_in_kernel(vcpu->kvm) || kvm_vcpu_is_reset_bsp(vcpu))
 		vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
+34-30
arch/x86/kvm/xen.c
···
 	int idx = srcu_read_lock(&kvm->srcu);
 
 	if (gfn == GPA_INVALID) {
-		kvm_gfn_to_pfn_cache_destroy(kvm, gpc);
+		kvm_gpc_deactivate(kvm, gpc);
 		goto out;
 	}
 
 	do {
-		ret = kvm_gfn_to_pfn_cache_init(kvm, gpc, NULL, KVM_HOST_USES_PFN,
-						gpa, PAGE_SIZE);
+		ret = kvm_gpc_activate(kvm, gpc, NULL, KVM_HOST_USES_PFN, gpa,
+				       PAGE_SIZE);
 		if (ret)
 			goto out;
···
 			     offsetof(struct compat_vcpu_info, time));
 
 		if (data->u.gpa == GPA_INVALID) {
-			kvm_gfn_to_pfn_cache_destroy(vcpu->kvm, &vcpu->arch.xen.vcpu_info_cache);
+			kvm_gpc_deactivate(vcpu->kvm, &vcpu->arch.xen.vcpu_info_cache);
 			r = 0;
 			break;
 		}
 
-		r = kvm_gfn_to_pfn_cache_init(vcpu->kvm,
-					      &vcpu->arch.xen.vcpu_info_cache,
-					      NULL, KVM_HOST_USES_PFN, data->u.gpa,
-					      sizeof(struct vcpu_info));
+		r = kvm_gpc_activate(vcpu->kvm,
+				     &vcpu->arch.xen.vcpu_info_cache, NULL,
+				     KVM_HOST_USES_PFN, data->u.gpa,
+				     sizeof(struct vcpu_info));
 		if (!r)
 			kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);
···
 
 	case KVM_XEN_VCPU_ATTR_TYPE_VCPU_TIME_INFO:
 		if (data->u.gpa == GPA_INVALID) {
-			kvm_gfn_to_pfn_cache_destroy(vcpu->kvm,
-						     &vcpu->arch.xen.vcpu_time_info_cache);
+			kvm_gpc_deactivate(vcpu->kvm,
+					   &vcpu->arch.xen.vcpu_time_info_cache);
 			r = 0;
 			break;
 		}
 
-		r = kvm_gfn_to_pfn_cache_init(vcpu->kvm,
-					      &vcpu->arch.xen.vcpu_time_info_cache,
-					      NULL, KVM_HOST_USES_PFN, data->u.gpa,
-					      sizeof(struct pvclock_vcpu_time_info));
+		r = kvm_gpc_activate(vcpu->kvm,
+				     &vcpu->arch.xen.vcpu_time_info_cache,
+				     NULL, KVM_HOST_USES_PFN, data->u.gpa,
+				     sizeof(struct pvclock_vcpu_time_info));
 		if (!r)
 			kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);
 		break;
···
 			break;
 		}
 		if (data->u.gpa == GPA_INVALID) {
-			kvm_gfn_to_pfn_cache_destroy(vcpu->kvm,
-						     &vcpu->arch.xen.runstate_cache);
+			kvm_gpc_deactivate(vcpu->kvm,
+					   &vcpu->arch.xen.runstate_cache);
 			r = 0;
 			break;
 		}
 
-		r = kvm_gfn_to_pfn_cache_init(vcpu->kvm,
-					      &vcpu->arch.xen.runstate_cache,
-					      NULL, KVM_HOST_USES_PFN, data->u.gpa,
-					      sizeof(struct vcpu_runstate_info));
+		r = kvm_gpc_activate(vcpu->kvm, &vcpu->arch.xen.runstate_cache,
+				     NULL, KVM_HOST_USES_PFN, data->u.gpa,
+				     sizeof(struct vcpu_runstate_info));
 		break;
 
 	case KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_CURRENT:
···
 	case EVTCHNSTAT_ipi:
 		/* IPI  must map back to the same port# */
 		if (data->u.evtchn.deliver.port.port != data->u.evtchn.send_port)
-			goto out; /* -EINVAL */
+			goto out_noeventfd; /* -EINVAL */
 		break;
 
 	case EVTCHNSTAT_interdomain:
 		if (data->u.evtchn.deliver.port.port) {
 			if (data->u.evtchn.deliver.port.port >= max_evtchn_port(kvm))
-				goto out; /* -EINVAL */
+				goto out_noeventfd; /* -EINVAL */
 		} else {
 			eventfd = eventfd_ctx_fdget(data->u.evtchn.deliver.eventfd.fd);
 			if (IS_ERR(eventfd)) {
 				ret = PTR_ERR(eventfd);
-				goto out;
+				goto out_noeventfd;
 			}
 		}
 		break;
···
 out:
 	if (eventfd)
 		eventfd_ctx_put(eventfd);
+out_noeventfd:
 	kfree(evtchnfd);
 	return ret;
 }
···
 {
 	vcpu->arch.xen.vcpu_id = vcpu->vcpu_idx;
 	vcpu->arch.xen.poll_evtchn = 0;
+
 	timer_setup(&vcpu->arch.xen.poll_timer, cancel_evtchn_poll, 0);
+
+	kvm_gpc_init(&vcpu->arch.xen.runstate_cache);
+	kvm_gpc_init(&vcpu->arch.xen.vcpu_info_cache);
+	kvm_gpc_init(&vcpu->arch.xen.vcpu_time_info_cache);
 }
 
 void kvm_xen_destroy_vcpu(struct kvm_vcpu *vcpu)
···
 	if (kvm_xen_timer_enabled(vcpu))
 		kvm_xen_stop_timer(vcpu);
 
-	kvm_gfn_to_pfn_cache_destroy(vcpu->kvm,
-				     &vcpu->arch.xen.runstate_cache);
-	kvm_gfn_to_pfn_cache_destroy(vcpu->kvm,
-				     &vcpu->arch.xen.vcpu_info_cache);
-	kvm_gfn_to_pfn_cache_destroy(vcpu->kvm,
-				     &vcpu->arch.xen.vcpu_time_info_cache);
+	kvm_gpc_deactivate(vcpu->kvm, &vcpu->arch.xen.runstate_cache);
+	kvm_gpc_deactivate(vcpu->kvm, &vcpu->arch.xen.vcpu_info_cache);
+	kvm_gpc_deactivate(vcpu->kvm, &vcpu->arch.xen.vcpu_time_info_cache);
+
 	del_timer_sync(&vcpu->arch.xen.poll_timer);
 }
 
 void kvm_xen_init_vm(struct kvm *kvm)
 {
 	idr_init(&kvm->arch.xen.evtchn_ports);
+	kvm_gpc_init(&kvm->arch.xen.shinfo_cache);
 }
 
 void kvm_xen_destroy_vm(struct kvm *kvm)
···
 	struct evtchnfd *evtchnfd;
 	int i;
 
-	kvm_gfn_to_pfn_cache_destroy(kvm, &kvm->arch.xen.shinfo_cache);
+	kvm_gpc_deactivate(kvm, &kvm->arch.xen.shinfo_cache);
 
 	idr_for_each_entry(&kvm->arch.xen.evtchn_ports, evtchnfd, i) {
 		if (!evtchnfd->deliver.port.port)
+1
arch/x86/purgatory/Makefile
···
 KASAN_SANITIZE	:= n
 UBSAN_SANITIZE	:= n
 KCSAN_SANITIZE	:= n
+KMSAN_SANITIZE	:= n
 KCOV_INSTRUMENT	:= n
 
 # These are adjustments to the compiler flags used for objects that
···
 	 * Otherwise just allocate the device numbers for both the whole device
 	 * and all partitions from the extended dev_t space.
 	 */
+	ret = -EINVAL;
 	if (disk->major) {
 		if (WARN_ON(!disk->minors))
-			return -EINVAL;
+			goto out_exit_elevator;
 
 		if (disk->minors > DISK_MAX_PARTS) {
 			pr_err("block: can't allocate more than %d partitions\n",
···
 			disk->minors = DISK_MAX_PARTS;
 		}
 		if (disk->first_minor + disk->minors > MINORMASK + 1)
-			return -EINVAL;
+			goto out_exit_elevator;
 	} else {
 		if (WARN_ON(disk->minors))
-			return -EINVAL;
+			goto out_exit_elevator;
 
 		ret = blk_alloc_ext_minor();
 		if (ret < 0)
-			return ret;
+			goto out_exit_elevator;
 		disk->major = BLOCK_EXT_MAJOR;
 		disk->first_minor = ret;
 	}
···
 out_free_ext_minor:
 	if (disk->major == BLOCK_EXT_MAJOR)
 		blk_free_ext_minor(disk->first_minor);
+out_exit_elevator:
+	if (disk->queue->elevator)
+		elevator_exit(disk->queue);
 	return ret;
 }
 EXPORT_SYMBOL(device_add_disk);
+1-1
drivers/acpi/acpi_pcc.c
···
  * Arbitrary retries in case the remote processor is slow to respond
  * to PCC commands
  */
-#define PCC_CMD_WAIT_RETRIES_NUM	500
+#define PCC_CMD_WAIT_RETRIES_NUM	500ULL
 
 struct pcc_data {
 	struct pcc_mbox_chan *pcc_chan;
···
 		np = it.node;
 		if (!of_match_node(idle_state_match, np))
 			continue;
+
+		if (!of_device_is_available(np))
+			continue;
+
 		if (states) {
 			ret = genpd_parse_state(&states[i], np);
 			if (ret) {
+2-2
drivers/base/property.c
···
  * Find a given string in a string array and if it is found return the
  * index back.
  *
- * Return: %0 if the property was found (success),
+ * Return: index, starting from %0, if the property was found (success),
  *	   %-EINVAL if given arguments are not valid,
  *	   %-ENODATA if the property does not have a value,
  *	   %-EPROTO if the property is not an array of strings,
···
  * Find a given string in a string array and if it is found return the
  * index back.
  *
- * Return: %0 if the property was found (success),
+ * Return: index, starting from %0, if the property was found (success),
  *	   %-EINVAL if given arguments are not valid,
  *	   %-ENODATA if the property does not have a value,
  *	   %-EPROTO if the property is not an array of strings,
···
 	struct ttm_tt *ttm = bo->tbo.ttm;
 	int ret;
 
+	if (WARN_ON(ttm->num_pages != src_ttm->num_pages))
+		return -EINVAL;
+
 	ttm->sg = kmalloc(sizeof(*ttm->sg), GFP_KERNEL);
 	if (unlikely(!ttm->sg))
 		return -ENOMEM;
-
-	if (WARN_ON(ttm->num_pages != src_ttm->num_pages))
-		return -EINVAL;
 
 	/* Same sequence as in amdgpu_ttm_tt_pin_userptr */
 	ret = sg_alloc_table_from_pages(ttm->sg, src_ttm->pages,
+4-1
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
···
 	if (r)
 		return r;
 
-	ctx->stable_pstate = current_stable_pstate;
+	if (mgr->adev->pm.stable_pstate_ctx)
+		ctx->stable_pstate = mgr->adev->pm.stable_pstate_ctx->stable_pstate;
+	else
+		ctx->stable_pstate = current_stable_pstate;
 
 	return 0;
 }
+17-1
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
···
 			return r;
 		}
 		adev->ip_blocks[i].status.hw = true;
+
+		if (adev->in_s0ix && adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_SMC) {
+			/* disable gfxoff for IP resume. The gfxoff will be re-enabled in
+			 * amdgpu_device_resume() after IP resume.
+			 */
+			amdgpu_gfx_off_ctrl(adev, false);
+			DRM_DEBUG("will disable gfxoff for re-initializing other blocks\n");
+		}
+
 	}
 
 	return 0;
···
 	/* Make sure IB tests flushed */
 	flush_delayed_work(&adev->delayed_init_work);
 
+	if (adev->in_s0ix) {
+		/* re-enable gfxoff after IP resume. This re-enables gfxoff after
+		 * it was disabled for IP resume in amdgpu_device_ip_resume_phase2().
+		 */
+		amdgpu_gfx_off_ctrl(adev, true);
+		DRM_DEBUG("will enable gfxoff for the mission mode\n");
+	}
 	if (fbcon)
 		drm_fb_helper_set_suspend_unlocked(adev_to_drm(adev)->fb_helper, false);
···
 		drm_sched_start(&ring->sched, !tmp_adev->asic_reset_res);
 	}
 
-	if (adev->enable_mes)
+	if (adev->enable_mes && adev->ip_versions[GC_HWIP][0] != IP_VERSION(11, 0, 3))
 		amdgpu_mes_self_test(tmp_adev);
 
 	if (!drm_drv_uses_atomic_modeset(adev_to_drm(tmp_adev)) && !job_signaled) {
···
 {
 	struct amdgpu_device *adev = drm_to_adev(plane->dev);
 	const struct drm_format_info *info = drm_format_info(format);
-	struct hw_asic_id asic_id = adev->dm.dc->ctx->asic_id;
+	int i;
 
 	enum dm_micro_swizzle microtile = modifier_gfx9_swizzle_mode(modifier) & 3;
···
 		return true;
 	}
 
-	/* check if swizzle mode is supported by this version of DCN */
-	switch (asic_id.chip_family) {
-	case FAMILY_SI:
-	case FAMILY_CI:
-	case FAMILY_KV:
-	case FAMILY_CZ:
-	case FAMILY_VI:
-		/* asics before AI does not have modifier support */
-		return false;
-	case FAMILY_AI:
-	case FAMILY_RV:
-	case FAMILY_NV:
-	case FAMILY_VGH:
-	case FAMILY_YELLOW_CARP:
-	case AMDGPU_FAMILY_GC_10_3_6:
-	case AMDGPU_FAMILY_GC_10_3_7:
-		switch (AMD_FMT_MOD_GET(TILE, modifier)) {
-		case AMD_FMT_MOD_TILE_GFX9_64K_R_X:
-		case AMD_FMT_MOD_TILE_GFX9_64K_D_X:
-		case AMD_FMT_MOD_TILE_GFX9_64K_S_X:
-		case AMD_FMT_MOD_TILE_GFX9_64K_D:
-			return true;
-		default:
-			return false;
-		}
-		break;
-	case AMDGPU_FAMILY_GC_11_0_0:
-	case AMDGPU_FAMILY_GC_11_0_1:
-		switch (AMD_FMT_MOD_GET(TILE, modifier)) {
-		case AMD_FMT_MOD_TILE_GFX11_256K_R_X:
-		case AMD_FMT_MOD_TILE_GFX9_64K_R_X:
-		case AMD_FMT_MOD_TILE_GFX9_64K_D_X:
-		case AMD_FMT_MOD_TILE_GFX9_64K_S_X:
-		case AMD_FMT_MOD_TILE_GFX9_64K_D:
-			return true;
-		default:
-			return false;
-		}
-		break;
-	default:
-		ASSERT(0); /* Unknown asic */
-		break;
+	/* Check that the modifier is on the list of the plane's supported modifiers. */
+	for (i = 0; i < plane->modifier_count; i++) {
+		if (modifier == plane->modifiers[i])
+			break;
 	}
+	if (i == plane->modifier_count)
+		return false;
 
 	/*
 	 * For D swizzle the canonical modifier depends on the bpp, so check
···
 		smu->smc_driver_if_version = SMU13_DRIVER_IF_VERSION_ALDE;
 		break;
 	case IP_VERSION(13, 0, 0):
-		smu->smc_driver_if_version = SMU13_DRIVER_IF_VERSION_SMU_V13_0_0;
+	case IP_VERSION(13, 0, 10):
+		smu->smc_driver_if_version = SMU13_DRIVER_IF_VERSION_SMU_V13_0_0_10;
 		break;
 	case IP_VERSION(13, 0, 7):
 		smu->smc_driver_if_version = SMU13_DRIVER_IF_VERSION_SMU_V13_0_7;
···
 		break;
 	case IP_VERSION(13, 0, 5):
 		smu->smc_driver_if_version = SMU13_DRIVER_IF_VERSION_SMU_V13_0_5;
-		break;
-	case IP_VERSION(13, 0, 10):
-		smu->smc_driver_if_version = SMU13_DRIVER_IF_VERSION_SMU_V13_0_10;
 		break;
 	default:
 		dev_err(adev->dev, "smu unsupported IP version: 0x%x.\n",
···
 	case IP_VERSION(13, 0, 5):
 	case IP_VERSION(13, 0, 7):
 	case IP_VERSION(13, 0, 8):
+	case IP_VERSION(13, 0, 10):
 		if (!(adev->pm.pp_feature & PP_GFXOFF_MASK))
 			return 0;
 		if (enable)
+23-2
drivers/gpu/drm/bridge/parade-ps8640.c
···
 	struct gpio_desc *gpio_powerdown;
 	struct device_link *link;
 	bool pre_enabled;
+	bool need_post_hpd_delay;
 };
 
 static const struct regmap_config ps8640_regmap_config[] = {
···
 {
 	struct regmap *map = ps_bridge->regmap[PAGE2_TOP_CNTL];
 	int status;
+	int ret;
 
 	/*
 	 * Apparently something about the firmware in the chip signals that
 	 * HPD goes high by reporting GPIO9 as high (even though HPD isn't
 	 * actually connected to GPIO9).
 	 */
-	return regmap_read_poll_timeout(map, PAGE2_GPIO_H, status,
-					status & PS_GPIO9, wait_us / 10, wait_us);
+	ret = regmap_read_poll_timeout(map, PAGE2_GPIO_H, status,
+				       status & PS_GPIO9, wait_us / 10, wait_us);
+
+	/*
+	 * The first time we see HPD go high after a reset we delay an extra
+	 * 50 ms. The best guess is that the MCU is doing "stuff" during this
+	 * time (maybe talking to the panel) and we don't want to interrupt it.
+	 *
+	 * No locking is done around "need_post_hpd_delay". If we're here we
+	 * know we're holding a PM Runtime reference and the only other place
+	 * that touches this is PM Runtime resume.
+	 */
+	if (!ret && ps_bridge->need_post_hpd_delay) {
+		ps_bridge->need_post_hpd_delay = false;
+		msleep(50);
+	}
+
+	return ret;
 }
 
 static int ps8640_wait_hpd_asserted(struct drm_dp_aux *aux, unsigned long wait_us)
···
 	gpiod_set_value(ps_bridge->gpio_reset, 1);
 	msleep(50);
 	gpiod_set_value(ps_bridge->gpio_reset, 0);
+
+	/* We just reset things, so we need a delay after the first HPD */
+	ps_bridge->need_post_hpd_delay = true;
 
 	/*
 	 * Mystery 200 ms delay for the "MCU to be ready". It's unclear if
+2
drivers/gpu/drm/i915/display/intel_dp.c
···
 
 		drm_dp_pcon_hdmi_frl_link_error_count(&intel_dp->aux, &intel_dp->attached_connector->base);
 
+		intel_dp->frl.is_trained = false;
+
 		/* Restart FRL training or fall back to TMDS mode */
 		intel_dp_check_frl_training(intel_dp);
 	}
+2-2
drivers/gpu/drm/i915/gt/intel_workarounds.c
···
 	}
 
 	if (IS_DG1_GRAPHICS_STEP(i915, STEP_A0, STEP_B0) ||
-	    IS_ROCKETLAKE(i915) || IS_TIGERLAKE(i915)) {
+	    IS_ROCKETLAKE(i915) || IS_TIGERLAKE(i915) || IS_ALDERLAKE_P(i915)) {
 		/*
 		 * Wa_1607030317:tgl
 		 * Wa_1607186500:tgl
-		 * Wa_1607297627:tgl,rkl,dg1[a0]
+		 * Wa_1607297627:tgl,rkl,dg1[a0],adlp
 		 *
 		 * On TGL and RKL there are multiple entries for this WA in the
 		 * BSpec; some indicate this is an A0-only WA, others indicate
+9-2
drivers/gpu/drm/i915/intel_runtime_pm.c
···
 		pm_runtime_use_autosuspend(kdev);
 	}
 
-	/* Enable by default */
-	pm_runtime_allow(kdev);
+	/*
+	 * FIXME: Temp hammer to keep autosupend disable on lmem supported platforms.
+	 * As per PCIe specs 5.3.1.4.1, all iomem read write request over a PCIe
+	 * function will be unsupported in case PCIe endpoint function is in D3.
+	 * Let's keep i915 autosuspend control 'on' till we fix all known issue
+	 * with lmem access in D3.
+	 */
+	if (!IS_DGFX(i915))
+		pm_runtime_allow(kdev);
 
 	/*
 	 * The core calls the driver load handler with an RPM reference held.
+1-1
drivers/gpu/drm/msm/Kconfig
···
 	  Compile in support for the HDMI output MSM DRM driver. It can
 	  be a primary or a secondary display on device. Note that this is used
 	  only for the direct HDMI output. If the device outputs HDMI data
-	  throught some kind of DSI-to-HDMI bridge, this option can be disabled.
+	  through some kind of DSI-to-HDMI bridge, this option can be disabled.
 
 config DRM_MSM_HDMI_HDCP
 	bool "Enable HDMI HDCP support in MSM DRM driver"
···
 	struct msm_gpu *gpu = dev_to_gpu(dev);
 	int remaining, ret;
 
+	if (!gpu)
+		return 0;
+
 	suspend_scheduler(gpu);
 
 	remaining = wait_event_timeout(gpu->retire_event,
···
 
 static int adreno_system_resume(struct device *dev)
 {
-	resume_scheduler(dev_to_gpu(dev));
+	struct msm_gpu *gpu = dev_to_gpu(dev);
+
+	if (!gpu)
+		return 0;
+
+	resume_scheduler(gpu);
 	return pm_runtime_force_resume(dev);
 }
+6-1
drivers/gpu/drm/msm/adreno/adreno_gpu.c
···
 	return buf;
 }
 
-/* len is expected to be in bytes */
+/* len is expected to be in bytes
+ *
+ * WARNING: *ptr should be allocated with kvmalloc or friends.  It can be free'd
+ * with kvfree() and replaced with a newly kvmalloc'd buffer on the first call
+ * when the unencoded raw data is encoded
+ */
 void adreno_show_object(struct drm_printer *p, void **ptr, int len,
 		bool *encoded)
 {
···
 		return -EINVAL;
 	}
 
-	rc = devm_request_irq(&dp->pdev->dev, dp->irq,
+	rc = devm_request_irq(dp_display->drm_dev->dev, dp->irq,
 			dp_display_irq_handler,
 			IRQF_TRIGGER_HIGH, "dp_display_isr", dp);
 	if (rc < 0) {
···
 	}
 }
 
+static void of_dp_aux_depopulate_bus_void(void *data)
+{
+	of_dp_aux_depopulate_bus(data);
+}
+
 static int dp_display_get_next_bridge(struct msm_dp *dp)
 {
 	int rc;
···
 		 * panel driver is probed asynchronously but is the best we
 		 * can do without a bigger driver reorganization.
 		 */
-		rc = devm_of_dp_aux_populate_ep_devices(dp_priv->aux);
+		rc = of_dp_aux_populate_bus(dp_priv->aux, NULL);
 		of_node_put(aux_bus);
+		if (rc)
+			goto error;
+
+		rc = devm_add_action_or_reset(dp->drm_dev->dev,
+					      of_dp_aux_depopulate_bus_void,
+					      dp_priv->aux);
 		if (rc)
 			goto error;
 	} else if (dp->is_edp) {
···
 	 * For DisplayPort interfaces external bridges are optional, so
 	 * silently ignore an error if one is not present (-ENODEV).
 	 */
-	rc = dp_parser_find_next_bridge(dp_priv->parser);
+	rc = devm_dp_parser_find_next_bridge(dp->drm_dev->dev, dp_priv->parser);
 	if (!dp->is_edp && rc == -ENODEV)
 		return 0;
···
 		return -EINVAL;
 
 	priv = dev->dev_private;
+
+	if (priv->num_bridges == ARRAY_SIZE(priv->bridges)) {
+		DRM_DEV_ERROR(dev->dev, "too many bridges\n");
+		return -ENOSPC;
+	}
+
 	dp_display->drm_dev = dev;
 
 	dp_priv = container_of(dp_display, struct dp_display_private, dp_display);
+34
drivers/gpu/drm/msm/dp/dp_drm.c
···
 		connector_status_disconnected;
 }
 
+static int dp_bridge_atomic_check(struct drm_bridge *bridge,
+			    struct drm_bridge_state *bridge_state,
+			    struct drm_crtc_state *crtc_state,
+			    struct drm_connector_state *conn_state)
+{
+	struct msm_dp *dp;
+
+	dp = to_dp_bridge(bridge)->dp_display;
+
+	drm_dbg_dp(dp->drm_dev, "is_connected = %s\n",
+		(dp->is_connected) ? "true" : "false");
+
+	/*
+	 * There is no protection in the DRM framework to check if the display
+	 * pipeline has been already disabled before trying to disable it again.
+	 * Hence if the sink is unplugged, the pipeline gets disabled, but the
+	 * crtc->active is still true. Any attempt to set the mode or manually
+	 * disable this encoder will result in the crash.
+	 *
+	 * TODO: add support for telling the DRM subsystem that the pipeline is
+	 * disabled by the hardware and thus all access to it should be forbidden.
+	 * After that this piece of code can be removed.
+	 */
+	if (bridge->ops & DRM_BRIDGE_OP_HPD)
+		return (dp->is_connected) ? 0 : -ENOTCONN;
+
+	return 0;
+}
+
+
 /**
  * dp_bridge_get_modes - callback to add drm modes via drm_mode_probed_add()
  * @bridge: Poiner to drm bridge
···
 }
 
 static const struct drm_bridge_funcs dp_bridge_ops = {
+	.atomic_duplicate_state = drm_atomic_helper_bridge_duplicate_state,
+	.atomic_destroy_state = drm_atomic_helper_bridge_destroy_state,
+	.atomic_reset = drm_atomic_helper_bridge_reset,
 	.enable = dp_bridge_enable,
 	.disable = dp_bridge_disable,
 	.post_disable = dp_bridge_post_disable,
···
 	.mode_valid = dp_bridge_mode_valid,
 	.get_modes = dp_bridge_get_modes,
 	.detect = dp_bridge_detect,
+	.atomic_check = dp_bridge_atomic_check,
 };
 
 struct drm_bridge *dp_bridge_init(struct msm_dp *dp_display, struct drm_device *dev,
···
 struct dp_parser *dp_parser_get(struct platform_device *pdev);
 
 /**
- * dp_parser_find_next_bridge() - find an additional bridge to DP
+ * devm_dp_parser_find_next_bridge() - find an additional bridge to DP
  *
+ * @dev: device to tie bridge lifetime to
  * @parser: dp_parser data from client
  *
  * This function is used to find any additional bridge attached to
···
  *
  * Return: 0 if able to get the bridge, otherwise negative errno for failure.
  */
-int dp_parser_find_next_bridge(struct dp_parser *parser);
+int devm_dp_parser_find_next_bridge(struct device *dev, struct dp_parser *parser);
 
 #endif
+6
drivers/gpu/drm/msm/dsi/dsi.c
···
 		return -EINVAL;
 
 	priv = dev->dev_private;
+
+	if (priv->num_bridges == ARRAY_SIZE(priv->bridges)) {
+		DRM_DEV_ERROR(dev->dev, "too many bridges\n");
+		return -ENOSPC;
+	}
+
 	msm_dsi->dev = dev;
 
 	ret = msm_dsi_host_modeset_init(msm_dsi->host, dev);
+6-1
drivers/gpu/drm/msm/hdmi/hdmi.c
···
 	struct platform_device *pdev = hdmi->pdev;
 	int ret;
 
+	if (priv->num_bridges == ARRAY_SIZE(priv->bridges)) {
+		DRM_DEV_ERROR(dev->dev, "too many bridges\n");
+		return -ENOSPC;
+	}
+
 	hdmi->dev = dev;
 	hdmi->encoder = encoder;
···
 		goto fail;
 	}
 
-	ret = devm_request_irq(&pdev->dev, hdmi->irq,
+	ret = devm_request_irq(dev->dev, hdmi->irq,
 			msm_hdmi_irq, IRQF_TRIGGER_HIGH,
 			"hdmi_isr", hdmi);
 	if (ret < 0) {
+1
drivers/gpu/drm/msm/msm_drv.c
···
 
 	for (i = 0; i < priv->num_bridges; i++)
 		drm_bridge_remove(priv->bridges[i]);
+	priv->num_bridges = 0;
 
 	pm_runtime_get_sync(dev);
 	msm_irq_uninstall(ddev);
···
 	struct drm_sched_job *job = container_of(cb, struct drm_sched_job,
 						 finish_cb);
 
+	dma_fence_put(f);
 	INIT_WORK(&job->work, drm_sched_entity_kill_jobs_work);
 	schedule_work(&job->work);
 }
···
 		struct drm_sched_fence *s_fence = job->s_fence;
 
 		/* Wait for all dependencies to avoid data corruptions */
-		while ((f = drm_sched_job_dependency(job, entity)))
+		while ((f = drm_sched_job_dependency(job, entity))) {
 			dma_fence_wait(f, false);
+			dma_fence_put(f);
+		}
 
 		drm_sched_fence_scheduled(s_fence);
 		dma_fence_set_error(&s_fence->finished, -ESRCH);
···
 			continue;
 		}
 
+		dma_fence_get(entity->last_scheduled);
 		r = dma_fence_add_callback(entity->last_scheduled,
 					   &job->finish_cb,
 					   drm_sched_entity_kill_jobs_cb);
+4-3
drivers/hwtracing/coresight/coresight-core.c
···
 	ret = coresight_fixup_device_conns(csdev);
 	if (!ret)
 		ret = coresight_fixup_orphan_conns(csdev);
-	if (!ret && cti_assoc_ops && cti_assoc_ops->add)
-		cti_assoc_ops->add(csdev);
 
 out_unlock:
 	mutex_unlock(&coresight_mutex);
 	/* Success */
-	if (!ret)
+	if (!ret) {
+		if (cti_assoc_ops && cti_assoc_ops->add)
+			cti_assoc_ops->add(csdev);
 		return csdev;
+	}
 
 	/* Unregister the device if needed */
 	if (registered) {
+3-7
drivers/hwtracing/coresight/coresight-cti-core.c
···
 static int cti_enable_hw(struct cti_drvdata *drvdata)
 {
 	struct cti_config *config = &drvdata->config;
-	struct device *dev = &drvdata->csdev->dev;
 	unsigned long flags;
 	int rc = 0;
 
-	pm_runtime_get_sync(dev->parent);
 	spin_lock_irqsave(&drvdata->spinlock, flags);
 
 	/* no need to do anything if enabled or unpowered*/
···
 /* cannot enable due to error */
 cti_err_not_enabled:
 	spin_unlock_irqrestore(&drvdata->spinlock, flags);
-	pm_runtime_put(dev->parent);
 	return rc;
 }
···
 static int cti_disable_hw(struct cti_drvdata *drvdata)
 {
 	struct cti_config *config = &drvdata->config;
-	struct device *dev = &drvdata->csdev->dev;
 	struct coresight_device *csdev = drvdata->csdev;
 
 	spin_lock(&drvdata->spinlock);
···
 	coresight_disclaim_device_unlocked(csdev);
 	CS_LOCK(drvdata->base);
 	spin_unlock(&drvdata->spinlock);
-	pm_runtime_put(dev->parent);
 	return 0;
 
 /* not disabled this call */
···
 /*
  * Search the cti list to add an associated CTI into the supplied CS device
  * This will set the association if CTI declared before the CS device.
- * (called from coresight_register() with coresight_mutex locked).
+ * (called from coresight_register() without coresight_mutex locked).
  */
 static void cti_add_assoc_to_csdev(struct coresight_device *csdev)
 {
···
 		 * if we found a matching csdev then update the ECT
 		 * association pointer for the device with this CTI.
 		 */
-		csdev->ect_dev = ect_item->csdev;
+		coresight_set_assoc_ectdev_mutex(csdev->ect_dev,
+						 ect_item->csdev);
 		break;
 	}
 }
···
 // SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
 /*
- * Copyright 2018-2021 Amazon.com, Inc. or its affiliates. All rights reserved.
+ * Copyright 2018-2022 Amazon.com, Inc. or its affiliates. All rights reserved.
  */
 
 #include <linux/module.h>
···
 #define PCI_DEV_ID_EFA0_VF 0xefa0
 #define PCI_DEV_ID_EFA1_VF 0xefa1
+#define PCI_DEV_ID_EFA2_VF 0xefa2
 
 static const struct pci_device_id efa_pci_tbl[] = {
 	{ PCI_VDEVICE(AMAZON, PCI_DEV_ID_EFA0_VF) },
 	{ PCI_VDEVICE(AMAZON, PCI_DEV_ID_EFA1_VF) },
+	{ PCI_VDEVICE(AMAZON, PCI_DEV_ID_EFA2_VF) },
 	{ }
 };
+1-2
drivers/infiniband/hw/hfi1/pio.c
···
 	spin_unlock(&sc->release_lock);
 
 	write_seqlock(&sc->waitlock);
-	if (!list_empty(&sc->piowait))
-		list_move(&sc->piowait, &wake_list);
+	list_splice_init(&sc->piowait, &wake_list);
 	write_sequnlock(&sc->waitlock);
 	while (!list_empty(&wake_list)) {
 		struct iowait *wait;
+4-11
drivers/infiniband/hw/hns/hns_roce_hw_v2.c
···
 	HR_OPC_MAP(ATOMIC_CMP_AND_SWP,		ATOM_CMP_AND_SWAP),
 	HR_OPC_MAP(ATOMIC_FETCH_AND_ADD,	ATOM_FETCH_AND_ADD),
 	HR_OPC_MAP(SEND_WITH_INV,		SEND_WITH_INV),
-	HR_OPC_MAP(LOCAL_INV,			LOCAL_INV),
 	HR_OPC_MAP(MASKED_ATOMIC_CMP_AND_SWP,	ATOM_MSK_CMP_AND_SWAP),
 	HR_OPC_MAP(MASKED_ATOMIC_FETCH_AND_ADD,	ATOM_MSK_FETCH_AND_ADD),
 	HR_OPC_MAP(REG_MR,			FAST_REG_PMR),
···
 		else
 			ret = -EOPNOTSUPP;
 		break;
-	case IB_WR_LOCAL_INV:
-		hr_reg_enable(rc_sq_wqe, RC_SEND_WQE_SO);
-		fallthrough;
 	case IB_WR_SEND_WITH_INV:
 		rc_sq_wqe->inv_key = cpu_to_le32(wr->ex.invalidate_rkey);
 		break;
···
 
 static int free_mr_init(struct hns_roce_dev *hr_dev)
 {
+	struct hns_roce_v2_priv *priv = hr_dev->priv;
+	struct hns_roce_v2_free_mr *free_mr = &priv->free_mr;
 	int ret;
+
+	mutex_init(&free_mr->mutex);
 
 	ret = free_mr_alloc_res(hr_dev);
 	if (ret)
···
 
 	hr_reg_write(mpt_entry, MPT_ST, V2_MPT_ST_VALID);
 	hr_reg_write(mpt_entry, MPT_PD, mr->pd);
-	hr_reg_enable(mpt_entry, MPT_L_INV_EN);
 
 	hr_reg_write_bool(mpt_entry, MPT_BIND_EN,
 			  mr->access & IB_ACCESS_MW_BIND);
···
 
 	hr_reg_enable(mpt_entry, MPT_RA_EN);
 	hr_reg_enable(mpt_entry, MPT_R_INV_EN);
-	hr_reg_enable(mpt_entry, MPT_L_INV_EN);
 
 	hr_reg_enable(mpt_entry, MPT_FRE);
 	hr_reg_clear(mpt_entry, MPT_MR_MW);
···
 	hr_reg_write(mpt_entry, MPT_PD, mw->pdn);
 
 	hr_reg_enable(mpt_entry, MPT_R_INV_EN);
-	hr_reg_enable(mpt_entry, MPT_L_INV_EN);
 	hr_reg_enable(mpt_entry, MPT_LW_EN);
 
 	hr_reg_enable(mpt_entry, MPT_MR_MW);
···
 	HR_WC_OP_MAP(RDMA_READ,			RDMA_READ),
 	HR_WC_OP_MAP(RDMA_WRITE,		RDMA_WRITE),
 	HR_WC_OP_MAP(RDMA_WRITE_WITH_IMM,	RDMA_WRITE),
-	HR_WC_OP_MAP(LOCAL_INV,			LOCAL_INV),
 	HR_WC_OP_MAP(ATOM_CMP_AND_SWAP,		COMP_SWAP),
 	HR_WC_OP_MAP(ATOM_FETCH_AND_ADD,	FETCH_ADD),
 	HR_WC_OP_MAP(ATOM_MSK_CMP_AND_SWAP,	MASKED_COMP_SWAP),
···
 	case HNS_ROCE_V2_WQE_OP_SEND_WITH_IMM:
 	case HNS_ROCE_V2_WQE_OP_RDMA_WRITE_WITH_IMM:
 		wc->wc_flags |= IB_WC_WITH_IMM;
-		break;
-	case HNS_ROCE_V2_WQE_OP_LOCAL_INV:
-		wc->wc_flags |= IB_WC_WITH_INVALIDATE;
 		break;
 	case HNS_ROCE_V2_WQE_OP_ATOM_CMP_AND_SWAP:
 	case HNS_ROCE_V2_WQE_OP_ATOM_FETCH_AND_ADD:
···956956 }957957 if (card->irq > 0)958958 free_irq(card->irq, card);959959- if (card->isac.dch.dev.dev.class)959959+ if (device_is_registered(&card->isac.dch.dev.dev))960960 mISDN_unregister_device(&card->isac.dch.dev);961961962962 for (i = 0; i < 2; i++) {
···152152 * Optionally, build an array of chars that contain the bit numbers allocated.153153 */154154static unsigned long reserve_resources(unsigned long *p, int n, int mmax,155155- char *idx)155155+ signed char *idx)156156{157157 unsigned long bits = 0;158158 int i;···170170}171171172172unsigned long gru_reserve_cb_resources(struct gru_state *gru, int cbr_au_count,173173- char *cbmap)173173+ signed char *cbmap)174174{175175 return reserve_resources(&gru->gs_cbr_map, cbr_au_count, GRU_CBR_AU,176176 cbmap);177177}178178179179unsigned long gru_reserve_ds_resources(struct gru_state *gru, int dsr_au_count,180180- char *dsmap)180180+ signed char *dsmap)181181{182182 return reserve_resources(&gru->gs_dsr_map, dsr_au_count, GRU_DSR_AU,183183 dsmap);
+7-7
drivers/misc/sgi-gru/grutables.h
···351351 pid_t ts_tgid_owner; /* task that is using the352352 context - for migration */353353 short ts_user_blade_id;/* user selected blade */354354- char ts_user_chiplet_id;/* user selected chiplet */354354+ signed char ts_user_chiplet_id;/* user selected chiplet */355355 unsigned short ts_sizeavail; /* Pagesizes in use */356356 int ts_tsid; /* thread that owns the357357 structure */···364364 required for contest */365365 unsigned char ts_cbr_au_count;/* Number of CBR resources366366 required for contest */367367- char ts_cch_req_slice;/* CCH packet slice */368368- char ts_blade; /* If >= 0, migrate context if367367+ signed char ts_cch_req_slice;/* CCH packet slice */368368+ signed char ts_blade; /* If >= 0, migrate context if369369 ref from different blade */370370- char ts_force_cch_reload;371371- char ts_cbr_idx[GRU_CBR_AU];/* CBR numbers of each370370+ signed char ts_force_cch_reload;371371+ signed char ts_cbr_idx[GRU_CBR_AU];/* CBR numbers of each372372 allocated CB */373373 int ts_data_valid; /* Indicates if ts_gdata has374374 valid data */···643643 int cbr_au_count, int dsr_au_count,644644 unsigned char tlb_preload_count, int options, int tsid);645645extern unsigned long gru_reserve_cb_resources(struct gru_state *gru,646646- int cbr_au_count, char *cbmap);646646+ int cbr_au_count, signed char *cbmap);647647extern unsigned long gru_reserve_ds_resources(struct gru_state *gru,648648- int dsr_au_count, char *dsmap);648648+ int dsr_au_count, signed char *dsmap);649649extern vm_fault_t gru_fault(struct vm_fault *vmf);650650extern struct gru_mm_struct *gru_register_mmu_notifier(void);651651extern void gru_drop_mmu_notifier(struct gru_mm_struct *gms);
+26-18
drivers/mmc/core/block.c
···134134 * track of the current selected device partition.135135 */136136 unsigned int part_curr;137137+#define MMC_BLK_PART_INVALID UINT_MAX /* Unknown partition active */137138 int area_type;138139139140 /* debugfs files (only in main mmc_blk_data) */···988987 return ms;989988}990989990990+/*991991+ * Attempts to reset the card and get back to the requested partition.992992+ * Therefore any error here must result in cancelling the block layer993993+ * request, it must not be reattempted without going through the mmc_blk994994+ * partition sanity checks.995995+ */991996static int mmc_blk_reset(struct mmc_blk_data *md, struct mmc_host *host,992997 int type)993998{994999 int err;10001000+ struct mmc_blk_data *main_md = dev_get_drvdata(&host->card->dev);99510019961002 if (md->reset_done & type)9971003 return -EEXIST;99810049991005 md->reset_done |= type;10001006 err = mmc_hw_reset(host->card);10071007+ /*10081008+ * A successful reset will leave the card in the main partition, but10091009+ * upon failure it might not be, so set it to MMC_BLK_PART_INVALID10101010+ * in that case.10111011+ */10121012+ main_md->part_curr = err ? MMC_BLK_PART_INVALID : main_md->part_type;10131013+ if (err)10141014+ return err;10011015 /* Ensure we switch back to the correct partition */10021002- if (err) {10031003- struct mmc_blk_data *main_md =10041004- dev_get_drvdata(&host->card->dev);10051005- int part_err;10061006-10071007- main_md->part_curr = main_md->part_type;10081008- part_err = mmc_blk_part_switch(host->card, md->part_type);10091009- if (part_err) {10101010- /*10111011- * We have failed to get back into the correct10121012- * partition, so we need to abort the whole request.10131013- */10141014- return -ENODEV;10151015- }10161016- }10171017- return err;10161016+ if (mmc_blk_part_switch(host->card, md->part_type))10171017+ /*10181018+ * We have failed to get back into the correct10191019+ * partition, so we need to abort the whole request.10201020+ */10211021+ return -ENODEV;10221022+ return 0;10181023}1019102410201025static inline void mmc_blk_reset_success(struct mmc_blk_data *md, int type)···18781871 return;1879187218801873 /* Reset before last retry */18811881- if (mqrq->retries + 1 == MMC_MAX_RETRIES)18821882- mmc_blk_reset(md, card->host, type);18741874+ if (mqrq->retries + 1 == MMC_MAX_RETRIES &&18751875+ mmc_blk_reset(md, card->host, type))18761876+ return;1883187718841878 /* Command errors fail fast, so use all MMC_MAX_RETRIES */18851879 if (brq->sbc.error || brq->cmd.error)
+8
drivers/mmc/core/queue.c
···4848 case REQ_OP_DRV_OUT:4949 case REQ_OP_DISCARD:5050 case REQ_OP_SECURE_ERASE:5151+ case REQ_OP_WRITE_ZEROES:5152 return MMC_ISSUE_SYNC;5253 case REQ_OP_FLUSH:5354 return mmc_cqe_can_dcmd(host) ? MMC_ISSUE_DCMD : MMC_ISSUE_SYNC;···493492 */494493 if (blk_queue_quiesced(q))495494 blk_mq_unquiesce_queue(q);495495+496496+ /*497497+ * If the recovery completes the last (and only remaining) request in498498+ * the queue, and the card has been removed, we could end up here with499499+ * the recovery not quite finished yet, so cancel it.500500+ */501501+ cancel_work_sync(&mq->recovery_work);496502497503 blk_mq_free_tag_set(&mq->tag_set);498504
···1075107510761076config MMC_SDHCI_AM65410771077 tristate "Support for the SDHCI Controller in TI's AM654 SOCs"10781078- depends on MMC_SDHCI_PLTFM && OF && REGMAP_MMIO10781078+ depends on MMC_SDHCI_PLTFM && OF10791079 select MMC_SDHCI_IO_ACCESSORS10801080 select MMC_CQHCI10811081+ select REGMAP_MMIO10811082 help10821083 This selects the Secure Digital Host Controller Interface (SDHCI)10831084 support present in TI's AM654 SOCs. The controller supports
+8-6
drivers/mmc/host/sdhci-esdhc-imx.c
···16601660 host->mmc_host_ops.execute_tuning = usdhc_execute_tuning;16611661 }1662166216631663+ err = sdhci_esdhc_imx_probe_dt(pdev, host, imx_data);16641664+ if (err)16651665+ goto disable_ahb_clk;16661666+16631667 if (imx_data->socdata->flags & ESDHC_FLAG_MAN_TUNING)16641668 sdhci_esdhc_ops.platform_execute_tuning =16651669 esdhc_executing_tuning;···16711667 if (imx_data->socdata->flags & ESDHC_FLAG_ERR004536)16721668 host->quirks |= SDHCI_QUIRK_BROKEN_ADMA;1673166916741674- if (imx_data->socdata->flags & ESDHC_FLAG_HS400)16701670+ if (host->caps & MMC_CAP_8_BIT_DATA &&16711671+ imx_data->socdata->flags & ESDHC_FLAG_HS400)16751672 host->mmc->caps2 |= MMC_CAP2_HS400;1676167316771674 if (imx_data->socdata->flags & ESDHC_FLAG_BROKEN_AUTO_CMD23)16781675 host->quirks2 |= SDHCI_QUIRK2_ACMD23_BROKEN;1679167616801680- if (imx_data->socdata->flags & ESDHC_FLAG_HS400_ES) {16771677+ if (host->caps & MMC_CAP_8_BIT_DATA &&16781678+ imx_data->socdata->flags & ESDHC_FLAG_HS400_ES) {16811679 host->mmc->caps2 |= MMC_CAP2_HS400_ES;16821680 host->mmc_host_ops.hs400_enhanced_strobe =16831681 esdhc_hs400_enhanced_strobe;···17001694 if (err)17011695 goto disable_ahb_clk;17021696 }17031703-17041704- err = sdhci_esdhc_imx_probe_dt(pdev, host, imx_data);17051705- if (err)17061706- goto disable_ahb_clk;1707169717081698 sdhci_esdhc_imx_hwinit(host);17091699
···27242724 */27252725 WARN_ONCE(nor->flags & SNOR_F_BROKEN_RESET,27262726 "enabling reset hack; may not recover from unexpected reboots\n");27272727- return nor->params->set_4byte_addr_mode(nor, true);27272727+ err = nor->params->set_4byte_addr_mode(nor, true);27282728+ if (err && err != -ENOTSUPP)27292729+ return err;27282730 }2729273127302732 return 0;
+18-7
drivers/net/dsa/dsa_loop.c
···376376377377#define NUM_FIXED_PHYS (DSA_LOOP_NUM_PORTS - 2)378378379379+static void dsa_loop_phydevs_unregister(void)380380+{381381+ unsigned int i;382382+383383+ for (i = 0; i < NUM_FIXED_PHYS; i++)384384+ if (!IS_ERR(phydevs[i])) {385385+ fixed_phy_unregister(phydevs[i]);386386+ phy_device_free(phydevs[i]);387387+ }388388+}389389+379390static int __init dsa_loop_init(void)380391{381392 struct fixed_phy_status status = {···394383 .speed = SPEED_100,395384 .duplex = DUPLEX_FULL,396385 };397397- unsigned int i;386386+ unsigned int i, ret;398387399388 for (i = 0; i < NUM_FIXED_PHYS; i++)400389 phydevs[i] = fixed_phy_register(PHY_POLL, &status, NULL);401390402402- return mdio_driver_register(&dsa_loop_drv);391391+ ret = mdio_driver_register(&dsa_loop_drv);392392+ if (ret)393393+ dsa_loop_phydevs_unregister();394394+395395+ return ret;403396}404397module_init(dsa_loop_init);405398406399static void __exit dsa_loop_exit(void)407400{408408- unsigned int i;409409-410401 mdio_driver_unregister(&dsa_loop_drv);411411- for (i = 0; i < NUM_FIXED_PHYS; i++)412412- if (!IS_ERR(phydevs[i]))413413- fixed_phy_unregister(phydevs[i]);402402+ dsa_loop_phydevs_unregister();414403}415404module_exit(dsa_loop_exit);416405
+29-9
drivers/net/ethernet/adi/adin1110.c
···15281528 .notifier_call = adin1110_switchdev_event,15291529};1530153015311531-static void adin1110_unregister_notifiers(void *data)15311531+static void adin1110_unregister_notifiers(void)15321532{15331533 unregister_switchdev_blocking_notifier(&adin1110_switchdev_blocking_notifier);15341534 unregister_switchdev_notifier(&adin1110_switchdev_notifier);15351535 unregister_netdevice_notifier(&adin1110_netdevice_nb);15361536}1537153715381538-static int adin1110_setup_notifiers(struct adin1110_priv *priv)15381538+static int adin1110_setup_notifiers(void)15391539{15401540- struct device *dev = &priv->spidev->dev;15411540 int ret;1542154115431542 ret = register_netdevice_notifier(&adin1110_netdevice_nb);···15511552 if (ret < 0)15521553 goto err_sdev;1553155415541554- return devm_add_action_or_reset(dev, adin1110_unregister_notifiers, NULL);15551555+ return 0;1555155615561557err_sdev:15571558 unregister_switchdev_notifier(&adin1110_switchdev_notifier);1558155915591560err_netdev:15601561 unregister_netdevice_notifier(&adin1110_netdevice_nb);15621562+15611563 return ret;15621564}15631565···16261626 adin1110_irq,16271627 IRQF_TRIGGER_LOW | IRQF_ONESHOT,16281628 dev_name(dev), priv);16291629- if (ret < 0)16301630- return ret;16311631-16321632- ret = adin1110_setup_notifiers(priv);16331629 if (ret < 0)16341630 return ret;16351631···17051709 .probe = adin1110_probe,17061710 .id_table = adin1110_spi_id,17071711};17081708-module_spi_driver(adin1110_driver);17121712+17131713+static int __init adin1110_driver_init(void)17141714+{17151715+ int ret;17161716+17171717+ ret = adin1110_setup_notifiers();17181718+ if (ret < 0)17191719+ return ret;17201720+17211721+ ret = spi_register_driver(&adin1110_driver);17221722+ if (ret < 0) {17231723+ adin1110_unregister_notifiers();17241724+ return ret;17251725+ }17261726+17271727+ return 0;17281728+}17291729+17301730+static void __exit adin1110_exit(void)17311731+{17321732+ adin1110_unregister_notifiers();17331733+ spi_unregister_driver(&adin1110_driver);17341734+}17351735+module_init(adin1110_driver_init);17361736+module_exit(adin1110_exit);1709173717101738MODULE_DESCRIPTION("ADIN1110 Network driver");17111739MODULE_AUTHOR("Alexandru Tachici <alexandru.tachici@analog.com>");
···30073007 rwi = get_next_rwi(adapter);3008300830093009 /*30103010- * If there is another reset queued, free the previous rwi30113011- * and process the new reset even if previous reset failed30123012- * (the previous reset could have failed because of a fail30133013- * over for instance, so process the fail over).30143014- *30153010 * If there are no resets queued and the previous reset failed,30163011 * the adapter would be in an undefined state. So retry the30173012 * previous reset as a hard reset.30133013+ *30143014+ * Else, free the previous rwi and, if there is another reset30153015+ * queued, process the new reset even if previous reset failed30163016+ * (the previous reset could have failed because of a fail30173017+ * over for instance, so process the fail over).30183018 */30193019- if (rwi)30203020- kfree(tmprwi);30213021- else if (rc)30193019+ if (!rwi && rc)30223020 rwi = tmprwi;30213021+ else30223022+ kfree(tmprwi);3023302330243024 if (rwi && (rwi->reset_reason == VNIC_RESET_FAILOVER ||30253025 rwi->reset_reason == VNIC_RESET_MOBILITY || rc))
···5151 struct stmmac_resources res;5252 struct device_node *np;5353 int ret, i, phy_mode;5454- bool mdio = false;55545655 np = dev_of_node(&pdev->dev);5756···6869 if (!plat)6970 return -ENOMEM;70717272+ plat->mdio_node = of_get_child_by_name(np, "mdio");7173 if (plat->mdio_node) {7272- dev_err(&pdev->dev, "Found MDIO subnode\n");7373- mdio = true;7474- }7474+ dev_info(&pdev->dev, "Found MDIO subnode\n");75757676- if (mdio) {7776 plat->mdio_bus_data = devm_kzalloc(&pdev->dev,7877 sizeof(*plat->mdio_bus_data),7978 GFP_KERNEL);
+1-1
drivers/net/ethernet/xilinx/xilinx_emaclite.c
···108108 * @next_tx_buf_to_use: next Tx buffer to write to109109 * @next_rx_buf_to_use: next Rx buffer to read from110110 * @base_addr: base address of the Emaclite device111111- * @reset_lock: lock used for synchronization111111+ * @reset_lock: lock to serialize xmit and tx_timeout execution112112 * @deferred_skb: holds an skb (for transmission at a later time) when the113113 * Tx buffer is not free114114 * @phy_dev: pointer to the PHY device
+1-1
drivers/net/phy/mdio_bus.c
···583583 }584584585585 for (i = 0; i < PHY_MAX_ADDR; i++) {586586- if ((bus->phy_mask & (1 << i)) == 0) {586586+ if ((bus->phy_mask & BIT(i)) == 0) {587587 struct phy_device *phydev;588588589589 phydev = mdiobus_scan(bus, i);
+2-1
drivers/net/tun.c
···14591459 int err;14601460 int i;1461146114621462- if (it->nr_segs > MAX_SKB_FRAGS + 1)14621462+ if (it->nr_segs > MAX_SKB_FRAGS + 1 ||14631463+ len > (ETH_MAX_MTU - NET_SKB_PAD - NET_IP_ALIGN))14631464 return ERR_PTR(-EMSGSIZE);1464146514651466 local_bh_disable();
+9-1
drivers/nfc/fdp/fdp.c
···249249static int fdp_nci_send(struct nci_dev *ndev, struct sk_buff *skb)250250{251251 struct fdp_nci_info *info = nci_get_drvdata(ndev);252252+ int ret;252253253254 if (atomic_dec_and_test(&info->data_pkt_counter))254255 info->data_pkt_counter_cb(ndev);255256256256- return info->phy_ops->write(info->phy, skb);257257+ ret = info->phy_ops->write(info->phy, skb);258258+ if (ret < 0) {259259+ kfree_skb(skb);260260+ return ret;261261+ }262262+263263+ consume_skb(skb);264264+ return 0;257265}258266259267static int fdp_nci_request_firmware(struct nci_dev *ndev)
+7-2
drivers/nfc/nfcmrvl/i2c.c
···132132 ret = -EREMOTEIO;133133 } else134134 ret = 0;135135- kfree_skb(skb);136135 }137136138138- return ret;137137+ if (ret) {138138+ kfree_skb(skb);139139+ return ret;140140+ }141141+142142+ consume_skb(skb);143143+ return 0;139144}140145141146static void nfcmrvl_i2c_nci_update_config(struct nfcmrvl_private *priv,
+5-2
drivers/nfc/nxp-nci/core.c
···8080 return -EINVAL;81818282 r = info->phy_ops->write(info->phy_id, skb);8383- if (r < 0)8383+ if (r < 0) {8484 kfree_skb(skb);8585+ return r;8686+ }85878686- return r;8888+ consume_skb(skb);8989+ return 0;8790}88918992static int nxp_nci_rf_pll_unlocked_ntf(struct nci_dev *ndev,
+6-2
drivers/nfc/s3fwrn5/core.c
···110110 }111111112112 ret = s3fwrn5_write(info, skb);113113- if (ret < 0)113113+ if (ret < 0) {114114 kfree_skb(skb);115115+ mutex_unlock(&info->mutex);116116+ return ret;117117+ }115118119119+ consume_skb(skb);116120 mutex_unlock(&info->mutex);117117- return ret;121121+ return 0;118122}119123120124static int s3fwrn5_nci_post_setup(struct nci_dev *ndev)
+1
drivers/nvme/host/multipath.c
···516516 /* set to a default value of 512 until the disk is validated */517517 blk_queue_logical_block_size(head->disk->queue, 512);518518 blk_set_stacking_limits(&head->disk->queue->limits);519519+ blk_queue_dma_alignment(head->disk->queue, 3);519520520521 /* we need to propagate up the VMC settings */521522 if (ctrl->vwc & NVME_CTRL_VWC_PRESENT)
+11-2
drivers/nvme/host/tcp.c
···387387{388388 struct scatterlist sg;389389390390- sg_init_marker(&sg, 1);390390+ sg_init_table(&sg, 1);391391 sg_set_page(&sg, page, len, off);392392 ahash_request_set_crypt(hash, &sg, NULL, len);393393 crypto_ahash_update(hash);···11411141static int nvme_tcp_try_send(struct nvme_tcp_queue *queue)11421142{11431143 struct nvme_tcp_request *req;11441144+ unsigned int noreclaim_flag;11441145 int ret = 1;1145114611461147 if (!queue->request) {···11511150 }11521151 req = queue->request;1153115211531153+ noreclaim_flag = memalloc_noreclaim_save();11541154 if (req->state == NVME_TCP_SEND_CMD_PDU) {11551155 ret = nvme_tcp_try_send_cmd_pdu(req);11561156 if (ret <= 0)11571157 goto done;11581158 if (!nvme_tcp_has_inline_data(req))11591159- return ret;11591159+ goto out;11601160 }1161116111621162 if (req->state == NVME_TCP_SEND_H2C_PDU) {···11831181 nvme_tcp_fail_request(queue->request);11841182 nvme_tcp_done_send_req(queue);11851183 }11841184+out:11851185+ memalloc_noreclaim_restore(noreclaim_flag);11861186 return ret;11871187}11881188···13001296 struct page *page;13011297 struct nvme_tcp_ctrl *ctrl = to_tcp_ctrl(nctrl);13021298 struct nvme_tcp_queue *queue = &ctrl->queues[qid];12991299+ unsigned int noreclaim_flag;1303130013041301 if (!test_and_clear_bit(NVME_TCP_Q_ALLOCATED, &queue->flags))13051302 return;···13131308 __page_frag_cache_drain(page, queue->pf_cache.pagecnt_bias);13141309 queue->pf_cache.va = NULL;13151310 }13111311+13121312+ noreclaim_flag = memalloc_noreclaim_save();13161313 sock_release(queue->sock);13141314+ memalloc_noreclaim_restore(noreclaim_flag);13151315+13171316 kfree(queue->pdu);13181317 mutex_destroy(&queue->send_mutex);13191318 mutex_destroy(&queue->queue_lock);
···1414 * all) PA-RISC machines should have them. Anyway, for safety reasons, the1515 * following code can deal with just 96 bytes of Stable Storage, and all1616 * sizes between 96 and 192 bytes (provided they are multiple of struct1717- * device_path size, eg: 128, 160 and 192) to provide full information.1717+ * pdc_module_path size, eg: 128, 160 and 192) to provide full information.1818 * One last word: there's one path we can always count on: the primary path.1919 * Anything above 224 bytes is used for 'osdep2' OS-dependent storage area.2020 *···8888 short ready; /* entry record is valid if != 0 */8989 unsigned long addr; /* entry address in stable storage */9090 char *name; /* entry name */9191- struct device_path devpath; /* device path in parisc representation */9191+ struct pdc_module_path devpath; /* device path in parisc representation */9292 struct device *dev; /* corresponding device */9393 struct kobject kobj;9494};···138138static int139139pdcspath_fetch(struct pdcspath_entry *entry)140140{141141- struct device_path *devpath;141141+ struct pdc_module_path *devpath;142142143143 if (!entry)144144 return -EINVAL;···153153 return -EIO;154154155155 /* Find the matching device.156156- NOTE: hardware_path overlays with device_path, so the nice cast can156156+ NOTE: hardware_path overlays with pdc_module_path, so the nice cast can157157 be used */158158 entry->dev = hwpath_to_device((struct hardware_path *)devpath);159159···179179static void180180pdcspath_store(struct pdcspath_entry *entry)181181{182182- struct device_path *devpath;182182+ struct pdc_module_path *devpath;183183184184 BUG_ON(!entry);185185···221221pdcspath_hwpath_read(struct pdcspath_entry *entry, char *buf)222222{223223 char *out = buf;224224- struct device_path *devpath;224224+ struct pdc_module_path *devpath;225225 short i;226226227227 if (!entry || !buf)···236236 return -ENODATA;237237238238 for (i = 0; i < 6; i++) {239239- if (devpath->bc[i] >= 128)239239+ if (devpath->path.bc[i] < 0)240240 continue;241241- out += sprintf(out, "%u/", (unsigned char)devpath->bc[i]);241241+ out += sprintf(out, "%d/", devpath->path.bc[i]);242242 }243243- out += sprintf(out, "%u\n", (unsigned char)devpath->mod);243243+ out += sprintf(out, "%u\n", (unsigned char)devpath->path.mod);244244245245 return out - buf;246246}···296296 for (i=5; ((temp = strrchr(in, '/'))) && (temp-in > 0) && (likely(i)); i--) {297297 hwpath.bc[i] = simple_strtoul(temp+1, NULL, 10);298298 in[temp-in] = '\0';299299- DPRINTK("%s: bc[%d]: %d\n", __func__, i, hwpath.bc[i]);299299+ DPRINTK("%s: bc[%d]: %d\n", __func__, i, hwpath.path.bc[i]);300300 }301301302302 /* Store the final field */ 303303 hwpath.bc[i] = simple_strtoul(in, NULL, 10);304304- DPRINTK("%s: bc[%d]: %d\n", __func__, i, hwpath.bc[i]);304304+ DPRINTK("%s: bc[%d]: %d\n", __func__, i, hwpath.path.bc[i]);305305306306 /* Now we check that the user isn't trying to lure us */307307 if (!(dev = hwpath_to_device((struct hardware_path *)&hwpath))) {···342342pdcspath_layer_read(struct pdcspath_entry *entry, char *buf)343343{344344 char *out = buf;345345- struct device_path *devpath;345345+ struct pdc_module_path *devpath;346346 short i;347347348348 if (!entry || !buf)···547547 pathentry = &pdcspath_entry_primary;548548549549 read_lock(&pathentry->rw_lock);550550- out += sprintf(out, "%s\n", (pathentry->devpath.flags & knob) ?550550+ out += sprintf(out, "%s\n", (pathentry->devpath.path.flags & knob) ?551551 "On" : "Off");552552 read_unlock(&pathentry->rw_lock);553553···594594595595 /* print the timer value in seconds */596596 read_lock(&pathentry->rw_lock);597597- out += sprintf(out, "%u\n", (pathentry->devpath.flags & PF_TIMER) ?598598- (1 << (pathentry->devpath.flags & PF_TIMER)) : 0);597597+ out += sprintf(out, "%u\n", (pathentry->devpath.path.flags & PF_TIMER) ?598598+ (1 << (pathentry->devpath.path.flags & PF_TIMER)) : 0);599599 read_unlock(&pathentry->rw_lock);600600601601 return out - buf;···764764765765 /* Be nice to the existing flag record */766766 read_lock(&pathentry->rw_lock);767767- flags = pathentry->devpath.flags;767767+ flags = pathentry->devpath.path.flags;768768 read_unlock(&pathentry->rw_lock);769769770770 DPRINTK("%s: flags before: 0x%X\n", __func__, flags);···785785 write_lock(&pathentry->rw_lock);786786787787 /* Change the path entry flags first */788788- pathentry->devpath.flags = flags;788788+ pathentry->devpath.path.flags = flags;789789790790 /* Now, dive in. Write back to the hardware */791791 pdcspath_store(pathentry);
+14-10
drivers/platform/loongarch/loongson-laptop.c
···199199 struct key_entry ke;200200 struct backlight_device *bd;201201202202+ bd = backlight_device_get_by_type(BACKLIGHT_PLATFORM);203203+ if (bd) {204204+ loongson_laptop_backlight_update(bd) ?205205+ pr_warn("Loongson_backlight: resume brightness failed") :206206+ pr_info("Loongson_backlight: resume brightness %d\n", bd->props.brightness);207207+ }208208+202209 /*203210 * Only if the firmware supports SW_LID event model, we can handle the204211 * event. This is for the consideration of development board without EC.···233226 ke.sw.code = SW_LID;234227 sparse_keymap_report_entry(generic_inputdev, &ke, 1, true);235228 }236236- }237237-238238- bd = backlight_device_get_by_type(BACKLIGHT_PLATFORM);239239- if (bd) {240240- loongson_laptop_backlight_update(bd) ?241241- pr_warn("Loongson_backlight: resume brightness failed") :242242- pr_info("Loongson_backlight: resume brightness %d\n", bd->props.brightness);243229 }244230245231 return 0;···448448 if (ret < 0) {449449 pr_err("Failed to setup input device keymap\n");450450 input_free_device(generic_inputdev);451451+ generic_inputdev = NULL;451452452453 return ret;453454 }···503502 if (ret)504503 return -EINVAL;505504506506- if (sub_driver->init)507507- sub_driver->init(sub_driver);505505+ if (sub_driver->init) {506506+ ret = sub_driver->init(sub_driver);507507+ if (ret)508508+ goto err_out;509509+ }508510509511 if (sub_driver->notify) {510512 ret = setup_acpi_notify(sub_driver);···523519524520err_out:525521 generic_subdriver_exit(sub_driver);526526- return (ret < 0) ? ret : 0;522522+ return ret;527523}528524529525static void generic_subdriver_exit(struct generic_sub_driver *sub_driver)
···25822582 *25832583 * This function obtains the transmit and receive ids required to send25842584 * an unsolicited ct command with a payload. A special lpfc FsType and CmdRsp25852585- * flags are used to the unsolicted response handler is able to process25852585+ * flags are used to the unsolicited response handler is able to process25862586 * the ct command sent on the same port.25872587 **/25882588static int lpfcdiag_loop_get_xri(struct lpfc_hba *phba, uint16_t rpi,···28742874 * @len: Number of data bytes28752875 *28762876 * This function allocates and posts a data buffer of sufficient size to receive28772877- * an unsolicted CT command.28772877+ * an unsolicited CT command.28782878 **/28792879static int lpfcdiag_sli3_loop_post_rxbufs(struct lpfc_hba *phba, uint16_t rxxri,28802880 size_t len)
···29562956 __core_scsi3_complete_pro_preempt(dev, pr_reg_n,29572957 (preempt_type == PREEMPT_AND_ABORT) ? &preempt_and_abort_list : NULL,29582958 type, scope, preempt_type);29592959-29602960- if (preempt_type == PREEMPT_AND_ABORT)29612961- core_scsi3_release_preempt_and_abort(29622962- &preempt_and_abort_list, pr_reg_n);29632959 }29602960+29642961 spin_unlock(&dev->dev_reservation_lock);29622962+29632963+ /*29642964+ * SPC-4 5.12.11.2.6 Preempting and aborting29652965+ * The actions described in this subclause shall be performed29662966+ * for all I_T nexuses that are registered with the non-zero29672967+ * SERVICE ACTION RESERVATION KEY value, without regard for29682968+ * whether the preempted I_T nexuses hold the persistent29692969+ * reservation. If the SERVICE ACTION RESERVATION KEY field is29702970+ * set to zero and an all registrants persistent reservation is29712971+ * present, the device server shall abort all commands for all29722972+ * registered I_T nexuses.29732973+ */29742974+ if (preempt_type == PREEMPT_AND_ABORT) {29752975+ core_tmr_lun_reset(dev, NULL, &preempt_and_abort_list,29762976+ cmd);29772977+ core_scsi3_release_preempt_and_abort(29782978+ &preempt_and_abort_list, pr_reg_n);29792979+ }2965298029662981 if (pr_tmpl->pr_aptpl_active)29672982 core_scsi3_update_and_write_aptpl(cmd->se_dev, true);···30373022 if (calling_it_nexus)30383023 continue;3039302430403040- if (pr_reg->pr_res_key != sa_res_key)30253025+ if (sa_res_key && pr_reg->pr_res_key != sa_res_key)30413026 continue;3042302730433028 pr_reg_nacl = pr_reg->pr_reg_nacl;···34403425 * transport protocols where port names are not required;34413426 * d) Register the reservation key specified in the SERVICE ACTION34423427 * RESERVATION KEY field;34433443- * e) Retain the reservation key specified in the SERVICE ACTION34443444- * RESERVATION KEY field and associated information;34453428 *34463429 * Also, It is not an error for a REGISTER AND MOVE service action to34473430 * register an I_T nexus that is already registered with the same···34613448 dest_pr_reg = __core_scsi3_locate_pr_reg(dev, dest_node_acl,34623449 iport_ptr);34633450 new_reg = 1;34513451+ } else {34523452+ /*34533453+ * e) Retain the reservation key specified in the SERVICE ACTION34543454+ * RESERVATION KEY field and associated information;34553455+ */34563456+ dest_pr_reg->pr_res_key = sa_res_key;34643457 }34653458 /*34663459 * f) Release the persistent reservation for the persistent reservation
···772772}773773774774/**775775- * ufshcd_utmrl_clear - Clear a bit in UTRMLCLR register775775+ * ufshcd_utmrl_clear - Clear a bit in UTMRLCLR register776776 * @hba: per adapter instance777777 * @pos: position of the bit to be cleared778778 */···3098309830993099 if (ret)31003100 dev_err(hba->dev,31013101- "%s: query attribute, opcode %d, idn %d, failed with error %d after %d retries\n",31013101+ "%s: query flag, opcode %d, idn %d, failed with error %d after %d retries\n",31023102 __func__, opcode, idn, ret, retries);31033103 return ret;31043104}
+3-3
drivers/ufs/core/ufshpb.c
···383383 rgn = hpb->rgn_tbl + rgn_idx;384384 srgn = rgn->srgn_tbl + srgn_idx;385385386386- /* If command type is WRITE or DISCARD, set bitmap as drity */386386+ /* If command type is WRITE or DISCARD, set bitmap as dirty */387387 if (ufshpb_is_write_or_discard(cmd)) {388388 ufshpb_iterate_rgn(hpb, rgn_idx, srgn_idx, srgn_offset,389389 transfer_len, true);···616616static enum rq_end_io_ret ufshpb_umap_req_compl_fn(struct request *req,617617 blk_status_t error)618618{619619- struct ufshpb_req *umap_req = (struct ufshpb_req *)req->end_io_data;619619+ struct ufshpb_req *umap_req = req->end_io_data;620620621621 ufshpb_put_req(umap_req->hpb, umap_req);622622 return RQ_END_IO_NONE;···625625static enum rq_end_io_ret ufshpb_map_req_compl_fn(struct request *req,626626 blk_status_t error)627627{628628- struct ufshpb_req *map_req = (struct ufshpb_req *) req->end_io_data;628628+ struct ufshpb_req *map_req = req->end_io_data;629629 struct ufshpb_lu *hpb = map_req->hpb;630630 struct ufshpb_subregion *srgn;631631 unsigned long flags;
···2323#include <linux/delay.h>2424#include <linux/dma-mapping.h>2525#include <linux/of.h>2626+#include <linux/of_graph.h>2627#include <linux/acpi.h>2728#include <linux/pinctrl/consumer.h>2829#include <linux/reset.h>···8685 * mode. If the controller supports DRD but the dr_mode is not8786 * specified or set to OTG, then set the mode to peripheral.8887 */8989- if (mode == USB_DR_MODE_OTG &&8888+ if (mode == USB_DR_MODE_OTG && !dwc->edev &&9089 (!IS_ENABLED(CONFIG_USB_ROLE_SWITCH) ||9190 !device_property_read_bool(dwc->dev, "usb-role-switch")) &&9291 !DWC3_VER_IS_PRIOR(DWC3, 330A))···16911690 }16921691}1693169216931693+static struct extcon_dev *dwc3_get_extcon(struct dwc3 *dwc)16941694+{16951695+ struct device *dev = dwc->dev;16961696+ struct device_node *np_phy;16971697+ struct extcon_dev *edev = NULL;16981698+ const char *name;16991699+17001700+ if (device_property_read_bool(dev, "extcon"))17011701+ return extcon_get_edev_by_phandle(dev, 0);17021702+17031703+ /*17041704+ * Device tree platforms should get extcon via phandle.17051705+ * On ACPI platforms, we get the name from a device property.17061706+ * This device property is for kernel internal use only and17071707+ * is expected to be set by the glue code.17081708+ */17091709+ if (device_property_read_string(dev, "linux,extcon-name", &name) == 0)17101710+ return extcon_get_extcon_dev(name);17111711+17121712+ /*17131713+ * Try to get an extcon device from the USB PHY controller's "port"17141714+ * node. Check if it has the "port" node first, to avoid printing the17151715+ * error message from underlying code, as it's a valid case: extcon17161716+ * device (and "port" node) may be missing in case of "usb-role-switch"17171717+ * or OTG mode.17181718+ */17191719+ np_phy = of_parse_phandle(dev->of_node, "phys", 0);17201720+ if (of_graph_is_present(np_phy)) {17211721+ struct device_node *np_conn;17221722+17231723+ np_conn = of_graph_get_remote_node(np_phy, -1, -1);17241724+ if (np_conn)17251725+ edev = extcon_find_edev_by_node(np_conn);17261726+ of_node_put(np_conn);17271727+ }17281728+ of_node_put(np_phy);17291729+17301730+ return edev;17311731+}17321732+16941733static int dwc3_probe(struct platform_device *pdev)16951734{16961735 struct device *dev = &pdev->dev;···18791838 dev_err(dwc->dev, "failed to allocate event buffers\n");18801839 ret = -ENOMEM;18811840 goto err2;18411841+ }18421842+18431843+ dwc->edev = dwc3_get_extcon(dwc);18441844+ if (IS_ERR(dwc->edev)) {18451845+ ret = dev_err_probe(dwc->dev, PTR_ERR(dwc->edev), "failed to get extcon\n");18461846+ goto err3;18821847 }1883184818841849 ret = dwc3_get_dr_mode(dwc);
-50
drivers/usb/dwc3/drd.c
···
  */

 #include <linux/extcon.h>
-#include <linux/of_graph.h>
 #include <linux/of_platform.h>
 #include <linux/platform_device.h>
 #include <linux/property.h>
···
	return NOTIFY_DONE;
 }

-static struct extcon_dev *dwc3_get_extcon(struct dwc3 *dwc)
-{
-	struct device *dev = dwc->dev;
-	struct device_node *np_phy;
-	struct extcon_dev *edev = NULL;
-	const char *name;
-
-	if (device_property_read_bool(dev, "extcon"))
-		return extcon_get_edev_by_phandle(dev, 0);
-
-	/*
-	 * Device tree platforms should get extcon via phandle.
-	 * On ACPI platforms, we get the name from a device property.
-	 * This device property is for kernel internal use only and
-	 * is expected to be set by the glue code.
-	 */
-	if (device_property_read_string(dev, "linux,extcon-name", &name) == 0) {
-		edev = extcon_get_extcon_dev(name);
-		if (!edev)
-			return ERR_PTR(-EPROBE_DEFER);
-
-		return edev;
-	}
-
-	/*
-	 * Try to get an extcon device from the USB PHY controller's "port"
-	 * node. Check if it has the "port" node first, to avoid printing the
-	 * error message from underlying code, as it's a valid case: extcon
-	 * device (and "port" node) may be missing in case of "usb-role-switch"
-	 * or OTG mode.
-	 */
-	np_phy = of_parse_phandle(dev->of_node, "phys", 0);
-	if (of_graph_is_present(np_phy)) {
-		struct device_node *np_conn;
-
-		np_conn = of_graph_get_remote_node(np_phy, -1, -1);
-		if (np_conn)
-			edev = extcon_find_edev_by_node(np_conn);
-		of_node_put(np_conn);
-	}
-	of_node_put(np_phy);
-
-	return edev;
-}
-
 #if IS_ENABLED(CONFIG_USB_ROLE_SWITCH)
 #define ROLE_SWITCH 1
 static int dwc3_usb_role_switch_set(struct usb_role_switch *sw,
···
	if (ROLE_SWITCH &&
	    device_property_read_bool(dwc->dev, "usb-role-switch"))
		return dwc3_setup_role_switch(dwc);
-
-	dwc->edev = dwc3_get_extcon(dwc);
-	if (IS_ERR(dwc->edev))
-		return PTR_ERR(dwc->edev);

	if (dwc->edev) {
		dwc->edev_nb.notifier_call = dwc3_drd_notifier;
+1-1
drivers/usb/dwc3/dwc3-st.c
···
	/* Manage SoftReset */
	reset_control_deassert(dwc3_data->rstc_rst);

-	child = of_get_child_by_name(node, "usb");
+	child = of_get_compatible_child(node, "snps,dwc3");
	if (!child) {
		dev_err(&pdev->dev, "failed to find dwc3 core node\n");
		ret = -ENODEV;
+17-3
drivers/usb/dwc3/gadget.c
···
		trb->ctrl = DWC3_TRBCTL_ISOCHRONOUS;
	}

-	/* always enable Interrupt on Missed ISOC */
-	trb->ctrl |= DWC3_TRB_CTRL_ISP_IMI;
+	if (!no_interrupt && !chain)
+		trb->ctrl |= DWC3_TRB_CTRL_ISP_IMI;
	break;

 case USB_ENDPOINT_XFER_BULK:
···
	cmd |= DWC3_DEPCMD_PARAM(dep->resource_index);
	memset(&params, 0, sizeof(params));
	ret = dwc3_send_gadget_ep_cmd(dep, cmd, &params);
+	/*
+	 * If the End Transfer command was timed out while the device is
+	 * not in SETUP phase, it's possible that an incoming Setup packet
+	 * may prevent the command's completion. Let's retry when the
+	 * ep0state returns to EP0_SETUP_PHASE.
+	 */
+	if (ret == -ETIMEDOUT && dep->dwc->ep0state != EP0_SETUP_PHASE) {
+		dep->flags |= DWC3_EP_DELAY_STOP;
+		return 0;
+	}
	WARN_ON_ONCE(ret);
	dep->resource_index = 0;

···
	if (event->status & DEPEVT_STATUS_SHORT && !chain)
		return 1;

+	if ((trb->ctrl & DWC3_TRB_CTRL_ISP_IMI) &&
+	    DWC3_TRB_SIZE_TRBSTS(trb->size) == DWC3_TRBSTS_MISSED_ISOC)
+		return 1;
+
	if ((trb->ctrl & DWC3_TRB_CTRL_IOC) ||
	    (trb->ctrl & DWC3_TRB_CTRL_LST))
		return 1;
···
	 * timeout. Delay issuing the End Transfer command until the Setup TRB is
	 * prepared.
	 */
-	if (dwc->ep0state != EP0_SETUP_PHASE) {
+	if (dwc->ep0state != EP0_SETUP_PHASE && !dwc->delayed_status) {
		dep->flags |= DWC3_EP_DELAY_STOP;
		return;
	}
drivers/usb/host/xhci-mem.c
···
	if (dev->eps[i].stream_info)
		xhci_free_stream_info(xhci,
				dev->eps[i].stream_info);
-	/* Endpoints on the TT/root port lists should have been removed
-	 * when usb_disable_device() was called for the device.
-	 * We can't drop them anyway, because the udev might have gone
-	 * away by this point, and we can't tell what speed it was.
+	/*
+	 * Endpoints are normally deleted from the bandwidth list when
+	 * endpoints are dropped, before device is freed.
+	 * If host is dying or being removed then endpoints aren't
+	 * dropped cleanly, so delete the endpoint from list here.
+	 * Only applicable for hosts with software bandwidth checking.
	 */
-	if (!list_empty(&dev->eps[i].bw_endpoint_list))
-		xhci_warn(xhci, "Slot %u endpoint %u "
-				"not removed from BW list!\n",
-				slot_id, i);
+
+	if (!list_empty(&dev->eps[i].bw_endpoint_list)) {
+		list_del_init(&dev->eps[i].bw_endpoint_list);
+		xhci_dbg(xhci, "Slot %u endpoint %u not removed from BW list!\n",
+			 slot_id, i);
+	}
 }
 /* If this is a hub, free the TT(s) from the TT list */
 xhci_free_tt_info(xhci, dev, slot_id);
drivers/usb/host/xhci.c
···

	spin_lock_irq(&xhci->lock);
	xhci_halt(xhci);
-	/* Workaround for spurious wakeups at shutdown with HSW */
-	if (xhci->quirks & XHCI_SPURIOUS_WAKEUP)
+
+	/*
+	 * Workaround for spurious wakeups at shutdown with HSW, and for boot
+	 * firmware delay in ADL-P PCH if ports are left in U3 at shutdown
+	 */
+	if (xhci->quirks & XHCI_SPURIOUS_WAKEUP ||
+	    xhci->quirks & XHCI_RESET_TO_DEFAULT)
		xhci_reset(xhci, XHCI_RESET_SHORT_USEC);
+
	spin_unlock_irq(&xhci->lock);

	xhci_cleanup_msix(xhci);
+1
drivers/usb/host/xhci.h
···
 #define XHCI_BROKEN_D3COLD	BIT_ULL(41)
 #define XHCI_EP_CTX_BROKEN_DCS	BIT_ULL(42)
 #define XHCI_SUSPEND_RESUME_CLKS	BIT_ULL(43)
+#define XHCI_RESET_TO_DEFAULT	BIT_ULL(44)

	unsigned int		num_active_eps;
	unsigned int		limit_active_eps;
drivers/video/fbdev/da8xx-fb.c
···
	if (par->lcd_supply) {
		ret = regulator_disable(par->lcd_supply);
		if (ret)
-			return ret;
+			dev_warn(&dev->dev, "Failed to disable regulator (%pe)\n",
+				 ERR_PTR(ret));
	}

	lcd_disable_raster(DA8XX_FRAME_WAIT);
···
	 * and destination blitting areas overlap and
	 * adapt the bitmap addresses synchronously
	 * if the coordinates exceed the valid range.
-	 * The the areas do not overlap, we do our
+	 * The areas do not overlap, we do our
	 * normal check.
	 */
	if((mymax - mymin) < height) {
drivers/video/fbdev/smscufx.c
···
	struct kref kref;
	int fb_count;
	bool virtualized; /* true when physical usb device not present */
-	struct delayed_work free_framebuffer_work;
	atomic_t usb_active; /* 0 = update virtual buffer, but no usb traffic */
	atomic_t lost_pixels; /* 1 = a render op failed. Need screen refresh */
	u8 *edid; /* null until we read edid from hw or get from sysfs */
···
 {
	struct ufx_data *dev = container_of(kref, struct ufx_data, kref);

-	/* this function will wait for all in-flight urbs to complete */
-	if (dev->urbs.count > 0)
-		ufx_free_urb_list(dev);
-
-	pr_debug("freeing ufx_data %p", dev);
-
	kfree(dev);
 }
+
+static void ufx_ops_destory(struct fb_info *info)
+{
+	struct ufx_data *dev = info->par;
+	int node = info->node;
+
+	/* Assume info structure is freed after this point */
+	framebuffer_release(info);
+
+	pr_debug("fb_info for /dev/fb%d has been freed", node);
+
+	/* release reference taken by kref_init in probe() */
+	kref_put(&dev->kref, ufx_free);
+}
+

 static void ufx_release_urb_work(struct work_struct *work)
 {
···
	up(&unode->dev->urbs.limit_sem);
 }

-static void ufx_free_framebuffer_work(struct work_struct *work)
+static void ufx_free_framebuffer(struct ufx_data *dev)
 {
-	struct ufx_data *dev = container_of(work, struct ufx_data,
-					    free_framebuffer_work.work);
	struct fb_info *info = dev->info;
-	int node = info->node;
-
-	unregister_framebuffer(info);

	if (info->cmap.len != 0)
		fb_dealloc_cmap(&info->cmap);
···
	fb_destroy_modelist(&info->modelist);

	dev->info = NULL;
-
-	/* Assume info structure is freed after this point */
-	framebuffer_release(info);
-
-	pr_debug("fb_info for /dev/fb%d has been freed", node);

	/* ref taken in probe() as part of registering framebfufer */
	kref_put(&dev->kref, ufx_free);
···
 {
	struct ufx_data *dev = info->par;

+	mutex_lock(&disconnect_mutex);
+
	dev->fb_count--;

	/* We can't free fb_info here - fbmem will touch it when we return */
	if (dev->virtualized && (dev->fb_count == 0))
-		schedule_delayed_work(&dev->free_framebuffer_work, HZ);
+		ufx_free_framebuffer(dev);

	if ((dev->fb_count == 0) && (info->fbdefio)) {
		fb_deferred_io_cleanup(info);
···
		info->node, user, dev->fb_count);

	kref_put(&dev->kref, ufx_free);
+
+	mutex_unlock(&disconnect_mutex);

	return 0;
 }
···
	.fb_blank = ufx_ops_blank,
	.fb_check_var = ufx_ops_check_var,
	.fb_set_par = ufx_ops_set_par,
+	.fb_destroy = ufx_ops_destory,
 };

 /* Assumes &info->lock held by caller
···
		goto destroy_modedb;
	}

-	INIT_DELAYED_WORK(&dev->free_framebuffer_work,
-			  ufx_free_framebuffer_work);
-
	retval = ufx_reg_read(dev, 0x3000, &id_rev);
	check_warn_goto_error(retval, "error %d reading 0x3000 register from device", retval);
	dev_dbg(dev->gdev, "ID_REV register value 0x%08x", id_rev);
···
 static void ufx_usb_disconnect(struct usb_interface *interface)
 {
	struct ufx_data *dev;
+	struct fb_info *info;

	mutex_lock(&disconnect_mutex);

	dev = usb_get_intfdata(interface);
+	info = dev->info;

	pr_debug("USB disconnect starting\n");
···

	/* if clients still have us open, will be freed on last close */
	if (dev->fb_count == 0)
-		schedule_delayed_work(&dev->free_framebuffer_work, 0);
+		ufx_free_framebuffer(dev);

-	/* release reference taken by kref_init in probe() */
-	kref_put(&dev->kref, ufx_free);
+	/* this function will wait for all in-flight urbs to complete */
+	if (dev->urbs.count > 0)
+		ufx_free_urb_list(dev);

-	/* consider ufx_data freed */
+	pr_debug("freeing ufx_data %p", dev);
+
+	unregister_framebuffer(info);

	mutex_unlock(&disconnect_mutex);
 }
drivers/video/fbdev/xilinxfb.c
···
	return rc;
 }

-static int xilinxfb_release(struct device *dev)
+static void xilinxfb_release(struct device *dev)
 {
	struct xilinxfb_drvdata *drvdata = dev_get_drvdata(dev);
···
	if (!(drvdata->flags & BUS_ACCESS_FLAG))
		dcr_unmap(drvdata->dcr_host, drvdata->dcr_len);
 #endif
-
-	return 0;
 }

 /* ---------------------------------------------------------------------
···

 static int xilinxfb_of_remove(struct platform_device *op)
 {
-	return xilinxfb_release(&op->dev);
+	xilinxfb_release(&op->dev);
+
+	return 0;
 }

 /* Match table for of_platform binding */
+3-1
drivers/watchdog/exar_wdt.c
···
					     &priv->wdt_res, 1,
					     priv, sizeof(*priv));
	if (IS_ERR(n->pdev)) {
+		int err = PTR_ERR(n->pdev);
+
		kfree(n);
-		return PTR_ERR(n->pdev);
+		return err;
	}

	list_add_tail(&n->list, &pdev_list);
+1-1
drivers/watchdog/sp805_wdt.c
···
	return (wdtcontrol & ENABLE_MASK) == ENABLE_MASK;
 }

-/* This routine finds load value that will reset system in required timout */
+/* This routine finds load value that will reset system in required timeout */
 static int wdt_setload(struct watchdog_device *wdd, unsigned int timeout)
 {
	struct sp805_wdt *wdt = watchdog_get_drvdata(wdd);
+4-6
fs/btrfs/disk-io.c
···
 * Return 0 if the superblock checksum type matches the checksum value of that
 * algorithm. Pass the raw disk superblock data.
 */
-static int btrfs_check_super_csum(struct btrfs_fs_info *fs_info,
-				  char *raw_disk_sb)
+int btrfs_check_super_csum(struct btrfs_fs_info *fs_info,
+			   const struct btrfs_super_block *disk_sb)
 {
-	struct btrfs_super_block *disk_sb =
-		(struct btrfs_super_block *)raw_disk_sb;
	char result[BTRFS_CSUM_SIZE];
	SHASH_DESC_ON_STACK(shash, fs_info->csum_shash);
···
	 * BTRFS_SUPER_INFO_SIZE range, we expect that the unused space is
	 * filled with zeros and is included in the checksum.
	 */
-	crypto_shash_digest(shash, raw_disk_sb + BTRFS_CSUM_SIZE,
+	crypto_shash_digest(shash, (const u8 *)disk_sb + BTRFS_CSUM_SIZE,
			    BTRFS_SUPER_INFO_SIZE - BTRFS_CSUM_SIZE, result);

	if (memcmp(disk_sb->csum, result, fs_info->csum_size))
···
	 * We want to check superblock checksum, the type is stored inside.
	 * Pass the whole disk block of size BTRFS_SUPER_INFO_SIZE (4k).
	 */
-	if (btrfs_check_super_csum(fs_info, (u8 *)disk_super)) {
+	if (btrfs_check_super_csum(fs_info, disk_super)) {
		btrfs_err(fs_info, "superblock checksum mismatch");
		err = -EINVAL;
		btrfs_release_disk_super(disk_super);
fs/btrfs/extent-tree.c
···
	}

	/*
-	 * If this is a leaf and there are tree mod log users, we may
-	 * have recorded mod log operations that point to this leaf.
-	 * So we must make sure no one reuses this leaf's extent before
-	 * mod log operations are applied to a node, otherwise after
-	 * rewinding a node using the mod log operations we get an
-	 * inconsistent btree, as the leaf's extent may now be used as
-	 * a node or leaf for another different btree.
+	 * If there are tree mod log users we may have recorded mod log
+	 * operations for this node. If we re-allocate this node we
+	 * could replay operations on this node that happened when it
+	 * existed in a completely different root. For example if it
+	 * was part of root A, then was reallocated to root B, and we
+	 * are doing a btrfs_old_search_slot(root b), we could replay
+	 * operations that happened when the block was part of root A,
+	 * giving us an inconsistent view of the btree.
+	 *
	 * We are safe from races here because at this point no other
	 * node or root points to this extent buffer, so if after this
-	 * check a new tree mod log user joins, it will not be able to
-	 * find a node pointing to this leaf and record operations that
-	 * point to this leaf.
+	 * check a new tree mod log user joins we will not have an
+	 * existing log of operations on this node that we have to
+	 * contend with.
	 */
-	if (btrfs_header_level(buf) == 0 &&
-	    test_bit(BTRFS_FS_TREE_MOD_LOG_USERS, &fs_info->flags))
+	if (test_bit(BTRFS_FS_TREE_MOD_LOG_USERS, &fs_info->flags))
		must_pin = true;

	if (must_pin || btrfs_is_zoned(fs_info)) {
fs/btrfs/send.c
···
		/*
		 * First, process the inode as if it was deleted.
		 */
-		sctx->cur_inode_gen = right_gen;
-		sctx->cur_inode_new = false;
-		sctx->cur_inode_deleted = true;
-		sctx->cur_inode_size = btrfs_inode_size(
-				sctx->right_path->nodes[0], right_ii);
-		sctx->cur_inode_mode = btrfs_inode_mode(
-				sctx->right_path->nodes[0], right_ii);
-		ret = process_all_refs(sctx,
-				BTRFS_COMPARE_TREE_DELETED);
-		if (ret < 0)
-			goto out;
+		if (old_nlinks > 0) {
+			sctx->cur_inode_gen = right_gen;
+			sctx->cur_inode_new = false;
+			sctx->cur_inode_deleted = true;
+			sctx->cur_inode_size = btrfs_inode_size(
+					sctx->right_path->nodes[0], right_ii);
+			sctx->cur_inode_mode = btrfs_inode_mode(
+					sctx->right_path->nodes[0], right_ii);
+			ret = process_all_refs(sctx,
+					BTRFS_COMPARE_TREE_DELETED);
+			if (ret < 0)
+				goto out;
+		}

		/*
		 * Now process the inode as if it was new.
+16
fs/btrfs/super.c
···
 {
	struct btrfs_fs_info *fs_info = dev->fs_info;
	struct btrfs_super_block *sb;
+	u16 csum_type;
	int ret = 0;

	/* This should be called with fs still frozen. */
···
	sb = btrfs_read_dev_one_super(dev->bdev, 0, true);
	if (IS_ERR(sb))
		return PTR_ERR(sb);
+
+	/* Verify the checksum. */
+	csum_type = btrfs_super_csum_type(sb);
+	if (csum_type != btrfs_super_csum_type(fs_info->super_copy)) {
+		btrfs_err(fs_info, "csum type changed, has %u expect %u",
+			  csum_type, btrfs_super_csum_type(fs_info->super_copy));
+		ret = -EUCLEAN;
+		goto out;
+	}
+
+	if (btrfs_check_super_csum(fs_info, sb)) {
+		btrfs_err(fs_info, "csum for on-disk super block no longer matches");
+		ret = -EUCLEAN;
+		goto out;
+	}

	/* Btrfs_validate_super() includes fsid check against super->fsid. */
	ret = btrfs_validate_super(fs_info, sb, 0);
+11-1
fs/btrfs/volumes.c
···
	u64 devid;
	u64 type;
	u8 uuid[BTRFS_UUID_SIZE];
+	int index;
	int num_stripes;
	int ret;
	int i;
···
	logical = key->offset;
	length = btrfs_chunk_length(leaf, chunk);
	type = btrfs_chunk_type(leaf, chunk);
+	index = btrfs_bg_flags_to_raid_index(type);
	num_stripes = btrfs_chunk_num_stripes(leaf, chunk);

 #if BITS_PER_LONG == 32
···
	map->io_align = btrfs_chunk_io_align(leaf, chunk);
	map->stripe_len = btrfs_chunk_stripe_len(leaf, chunk);
	map->type = type;
-	map->sub_stripes = btrfs_chunk_sub_stripes(leaf, chunk);
+	/*
+	 * We can't use the sub_stripes value, as for profiles other than
+	 * RAID10, they may have 0 as sub_stripes for filesystems created by
+	 * older mkfs (<v5.4).
+	 * In that case, it can cause divide-by-zero errors later.
+	 * Since currently sub_stripes is fixed for each profile, let's
+	 * use the trusted value instead.
+	 */
+	map->sub_stripes = btrfs_raid_array[index].sub_stripes;
	map->verified_stripes = 0;
	em->orig_block_len = btrfs_calc_stripe_length(em);
	for (i = 0; i < num_stripes; i++) {
+1-1
fs/btrfs/volumes.h
···
 */
 struct btrfs_bio {
	unsigned int mirror_num;
+	struct bvec_iter iter;

	/* for direct I/O */
	u64 file_offset;
···
	struct btrfs_device *device;
	u8 *csum;
	u8 csum_inline[BTRFS_BIO_INLINE_CSUM_SIZE];
-	struct bvec_iter iter;

	/* End I/O information supplied to btrfs_bio_alloc */
	btrfs_bio_end_io_t end_io;
···
 struct cifs_writedata *
 cifs_writedata_alloc(unsigned int nr_pages, work_func_t complete)
 {
+	struct cifs_writedata *writedata = NULL;
	struct page **pages =
		kcalloc(nr_pages, sizeof(struct page *), GFP_NOFS);
-	if (pages)
-		return cifs_writedata_direct_alloc(pages, complete);
+	if (pages) {
+		writedata = cifs_writedata_direct_alloc(pages, complete);
+		if (!writedata)
+			kvfree(pages);
+	}

-	return NULL;
+	return writedata;
 }

 struct cifs_writedata *
···
				  cifs_uncached_writev_complete);
		if (!wdata) {
			rc = -ENOMEM;
+			for (i = 0; i < nr_pages; i++)
+				put_page(pagevec[i]);
+			kvfree(pagevec);
			add_credits_and_wake_if(server, credits, 0);
			break;
		}
+1-1
fs/exec.c
···
	active_mm = tsk->active_mm;
	tsk->active_mm = mm;
	tsk->mm = mm;
-	lru_gen_add_mm(mm);
	/*
	 * This prevents preemption while active_mm is being loaded and
	 * it and mm are being updated, which could cause problems for
···
	activate_mm(active_mm, mm);
	if (IS_ENABLED(CONFIG_ARCH_WANT_IRQS_OFF_ACTIVATE_MM))
		local_irq_enable();
+	lru_gen_add_mm(mm);
	task_unlock(tsk);
	lru_gen_use_mm(mm);
	if (old_mm) {
-4
fs/ext4/super.c
···

 #define DEFAULT_JOURNAL_IOPRIO (IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, 3))

-static const char deprecated_msg[] =
-	"Mount option \"%s\" will be removed by %s\n"
-	"Contact linux-ext4@vger.kernel.org if you think we should keep it.\n";
-
 #define MOPT_SET	0x0001
 #define MOPT_CLEAR	0x0002
 #define MOPT_NOSUPPORT	0x0004
include/linux/kmsan_string.h
···
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * KMSAN string functions API used in other headers.
+ *
+ * Copyright (C) 2022 Google LLC
+ * Author: Alexander Potapenko <glider@google.com>
+ *
+ */
+#ifndef _LINUX_KMSAN_STRING_H
+#define _LINUX_KMSAN_STRING_H
+
+/*
+ * KMSAN overrides the default memcpy/memset/memmove implementations in the
+ * kernel, which requires having __msan_XXX function prototypes in several other
+ * headers. Keep them in one place instead of open-coding.
+ */
+void *__msan_memcpy(void *dst, const void *src, size_t size);
+void *__msan_memset(void *s, int c, size_t n);
+void *__msan_memmove(void *dest, const void *src, size_t len);
+
+#endif /* _LINUX_KMSAN_STRING_H */
+17-7
include/linux/kvm_host.h
···
 void kvm_vcpu_mark_page_dirty(struct kvm_vcpu *vcpu, gfn_t gfn);

 /**
- * kvm_gfn_to_pfn_cache_init - prepare a cached kernel mapping and HPA for a
- *                             given guest physical address.
+ * kvm_gpc_init - initialize gfn_to_pfn_cache.
+ *
+ * @gpc:	   struct gfn_to_pfn_cache object.
+ *
+ * This sets up a gfn_to_pfn_cache by initializing locks. Note, the cache must
+ * be zero-allocated (or zeroed by the caller before init).
+ */
+void kvm_gpc_init(struct gfn_to_pfn_cache *gpc);
+
+/**
+ * kvm_gpc_activate - prepare a cached kernel mapping and HPA for a given guest
+ *                    physical address.
 *
 * @kvm:	   pointer to kvm instance.
 * @gpc:	   struct gfn_to_pfn_cache object.
···
 * kvm_gfn_to_pfn_cache_check() to ensure that the cache is valid before
 * accessing the target page.
 */
-int kvm_gfn_to_pfn_cache_init(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
-			      struct kvm_vcpu *vcpu, enum pfn_cache_usage usage,
-			      gpa_t gpa, unsigned long len);
+int kvm_gpc_activate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc,
+		     struct kvm_vcpu *vcpu, enum pfn_cache_usage usage,
+		     gpa_t gpa, unsigned long len);

 /**
 * kvm_gfn_to_pfn_cache_check - check validity of a gfn_to_pfn_cache.
···
 void kvm_gfn_to_pfn_cache_unmap(struct kvm *kvm, struct gfn_to_pfn_cache *gpc);

 /**
- * kvm_gfn_to_pfn_cache_destroy - destroy and unlink a gfn_to_pfn_cache.
+ * kvm_gpc_deactivate - deactivate and unlink a gfn_to_pfn_cache.
 *
 * @kvm:	   pointer to kvm instance.
 * @gpc:	   struct gfn_to_pfn_cache object.
···
 * This removes a cache from the @kvm's list to be processed on MMU notifier
 * invocation.
 */
-void kvm_gfn_to_pfn_cache_destroy(struct kvm *kvm, struct gfn_to_pfn_cache *gpc);
+void kvm_gpc_deactivate(struct kvm *kvm, struct gfn_to_pfn_cache *gpc);

 void kvm_sigset_activate(struct kvm_vcpu *vcpu);
 void kvm_sigset_deactivate(struct kvm_vcpu *vcpu);
+3-3
include/linux/userfaultfd_k.h
···
 static inline bool vma_can_userfault(struct vm_area_struct *vma,
				     unsigned long vm_flags)
 {
-	if (vm_flags & VM_UFFD_MINOR)
-		return is_vm_hugetlb_page(vma) || vma_is_shmem(vma);
-
+	if ((vm_flags & VM_UFFD_MINOR) &&
+	    (!is_vm_hugetlb_page(vma) && !vma_is_shmem(vma)))
+		return false;
 #ifndef CONFIG_PTE_MARKER_UFFD_WP
	/*
	 * If user requested uffd-wp but not enabled pte markers for
+27-21
include/net/netlink.h
···
	NLA_S64,
	NLA_BITFIELD32,
	NLA_REJECT,
+	NLA_BE16,
+	NLA_BE32,
	__NLA_TYPE_MAX,
 };

···
 *    NLA_U32, NLA_U64,
 *    NLA_S8, NLA_S16,
 *    NLA_S32, NLA_S64,
+ *    NLA_BE16, NLA_BE32,
 *    NLA_MSECS            Leaving the length field zero will verify the
 *                         given type fits, using it verifies minimum length
 *                         just like "All other"
···
 *    NLA_U16,
 *    NLA_U32,
 *    NLA_U64,
+ *    NLA_BE16,
+ *    NLA_BE32,
 *    NLA_S8,
 *    NLA_S16,
 *    NLA_S32,
···
	u8		validation_type;
	u16		len;
	union {
-		const u32 bitfield32_valid;
-		const u32 mask;
-		const char *reject_message;
-		const struct nla_policy *nested_policy;
-		struct netlink_range_validation *range;
-		struct netlink_range_validation_signed *range_signed;
-		struct {
-			s16 min, max;
-			u8 network_byte_order:1;
-		};
-		int (*validate)(const struct nlattr *attr,
-				struct netlink_ext_ack *extack);
-		/* This entry is special, and used for the attribute at index 0
+		/**
+		 * @strict_start_type: first attribute to validate strictly
+		 *
+		 * This entry is special, and used for the attribute at index 0
		 * only, and specifies special data about the policy, namely it
		 * specifies the "boundary type" where strict length validation
		 * starts for any attribute types >= this value, also, strict
···
		 * was added to enforce strict validation from thereon.
		 */
		u16 strict_start_type;
+
+		/* private: use NLA_POLICY_*() to set */
+		const u32 bitfield32_valid;
+		const u32 mask;
+		const char *reject_message;
+		const struct nla_policy *nested_policy;
+		struct netlink_range_validation *range;
+		struct netlink_range_validation_signed *range_signed;
+		struct {
+			s16 min, max;
+		};
+		int (*validate)(const struct nlattr *attr,
+				struct netlink_ext_ack *extack);
	};
 };

···
	(tp == NLA_U8 || tp == NLA_U16 || tp == NLA_U32 || tp == NLA_U64)
 #define __NLA_IS_SINT_TYPE(tp)						\
	(tp == NLA_S8 || tp == NLA_S16 || tp == NLA_S32 || tp == NLA_S64)
+#define __NLA_IS_BEINT_TYPE(tp)						\
+	(tp == NLA_BE16 || tp == NLA_BE32)

 #define __NLA_ENSURE(condition) BUILD_BUG_ON_ZERO(!(condition))
 #define NLA_ENSURE_UINT_TYPE(tp)			\
···
 #define NLA_ENSURE_INT_OR_BINARY_TYPE(tp)		\
	(__NLA_ENSURE(__NLA_IS_UINT_TYPE(tp) ||		\
		      __NLA_IS_SINT_TYPE(tp) ||		\
+		      __NLA_IS_BEINT_TYPE(tp) ||	\
		      tp == NLA_MSECS ||		\
		      tp == NLA_BINARY) + tp)
 #define NLA_ENSURE_NO_VALIDATION_PTR(tp)		\
···
		      tp != NLA_REJECT &&		\
		      tp != NLA_NESTED &&		\
		      tp != NLA_NESTED_ARRAY) + tp)
+#define NLA_ENSURE_BEINT_TYPE(tp)			\
+	(__NLA_ENSURE(__NLA_IS_BEINT_TYPE(tp)) + tp)

 #define NLA_POLICY_RANGE(tp, _min, _max) {		\
	.type = NLA_ENSURE_INT_OR_BINARY_TYPE(tp),	\
···
	.type = NLA_ENSURE_INT_OR_BINARY_TYPE(tp),	\
	.validation_type = NLA_VALIDATE_MAX,		\
	.max = _max,					\
-	.network_byte_order = 0,			\
-}
-
-#define NLA_POLICY_MAX_BE(tp, _max) {			\
-	.type = NLA_ENSURE_UINT_TYPE(tp),		\
-	.validation_type = NLA_VALIDATE_MAX,		\
-	.max = _max,					\
-	.network_byte_order = 1,			\
 }

 #define NLA_POLICY_MASK(tp, _mask) {			\
kernel/power/hibernate.c
···
	int error;

	if (hibernation_mode == HIBERNATION_SUSPEND) {
-		error = suspend_devices_and_enter(PM_SUSPEND_MEM);
+		error = suspend_devices_and_enter(mem_sleep_current);
		if (error) {
			hibernation_mode = hibernation_ops ?
						HIBERNATION_PLATFORM :
+2-1
lib/Kconfig.debug
···
	default 1536 if (!64BIT && XTENSA)
	default 1024 if !64BIT
	default 2048 if 64BIT
+	default 0 if KMSAN
	help
-	  Tell gcc to warn at build time for stack frames larger than this.
+	  Tell the compiler to warn at build time for stack frames larger than this.
	  Setting this too low will cause a lot of warnings.
	  Setting it to 0 disables the warning.
+2-2
lib/maple_tree.c
···
	unsigned long max, min;
	unsigned long prev_max, prev_min;

-	last = next = mas->node;
-	prev_min = min = mas->min;
+	next = mas->node;
+	min = mas->min;
	max = mas->max;
	do {
		offset = 0;
+15-26
lib/nlattr.c
···
		range->max = U8_MAX;
		break;
	case NLA_U16:
+	case NLA_BE16:
	case NLA_BINARY:
		range->max = U16_MAX;
		break;
	case NLA_U32:
+	case NLA_BE32:
		range->max = U32_MAX;
		break;
	case NLA_U64:
···
	}
 }

-static u64 nla_get_attr_bo(const struct nla_policy *pt,
-			   const struct nlattr *nla)
-{
-	switch (pt->type) {
-	case NLA_U16:
-		if (pt->network_byte_order)
-			return ntohs(nla_get_be16(nla));
-
-		return nla_get_u16(nla);
-	case NLA_U32:
-		if (pt->network_byte_order)
-			return ntohl(nla_get_be32(nla));
-
-		return nla_get_u32(nla);
-	case NLA_U64:
-		if (pt->network_byte_order)
-			return be64_to_cpu(nla_get_be64(nla));
-
-		return nla_get_u64(nla);
-	}
-
-	WARN_ON_ONCE(1);
-	return 0;
-}
-
 static int nla_validate_range_unsigned(const struct nla_policy *pt,
				       const struct nlattr *nla,
				       struct netlink_ext_ack *extack,
···
		value = nla_get_u8(nla);
		break;
	case NLA_U16:
+		value = nla_get_u16(nla);
+		break;
	case NLA_U32:
+		value = nla_get_u32(nla);
+		break;
	case NLA_U64:
-		value = nla_get_attr_bo(pt, nla);
+		value = nla_get_u64(nla);
		break;
	case NLA_MSECS:
		value = nla_get_u64(nla);
		break;
	case NLA_BINARY:
		value = nla_len(nla);
+		break;
+	case NLA_BE16:
+		value = ntohs(nla_get_be16(nla));
+		break;
+	case NLA_BE32:
+		value = ntohl(nla_get_be32(nla));
		break;
	default:
		return -EINVAL;
···
	case NLA_U64:
	case NLA_MSECS:
	case NLA_BINARY:
+	case NLA_BE16:
+	case NLA_BE32:
		return nla_validate_range_unsigned(pt, nla, extack, validate);
	case NLA_S8:
	case NLA_S16:
+1-1
mm/huge_memory.c
···
	 * Fix up and warn once if private is unexpectedly set.
	 */
	if (!folio_test_swapcache(page_folio(head))) {
-		VM_WARN_ON_ONCE_PAGE(page_tail->private != 0, head);
+		VM_WARN_ON_ONCE_PAGE(page_tail->private != 0, page_tail);
		page_tail->private = 0;
	}

+42-19
mm/kmemleak.c
@@ -1461 +1461 @@
 }
 
 /*
+ * Conditionally call resched() in a object iteration loop while making sure
+ * that the given object won't go away without RCU read lock by performing a
+ * get_object() if !pinned.
+ *
+ * Return: false if can't do a cond_resched() due to get_object() failure
+ *	   true otherwise
+ */
+static bool kmemleak_cond_resched(struct kmemleak_object *object, bool pinned)
+{
+	if (!pinned && !get_object(object))
+		return false;
+
+	rcu_read_unlock();
+	cond_resched();
+	rcu_read_lock();
+	if (!pinned)
+		put_object(object);
+	return true;
+}
+
+/*
  * Scan data sections and all the referenced memory blocks allocated via the
  * kernel's standard allocators. This function must be called with the
  * scan_mutex held.
@@ -1492 +1471 @@
 	struct zone *zone;
 	int __maybe_unused i;
 	int new_leaks = 0;
-	int loop1_cnt = 0;
+	int loop_cnt = 0;
 
 	jiffies_last_scan = jiffies;
 
@@ -1501 +1480 @@
 	list_for_each_entry_rcu(object, &object_list, object_list) {
 		bool obj_pinned = false;
 
-		loop1_cnt++;
 		raw_spin_lock_irq(&object->lock);
 #ifdef DEBUG
 		/*
@@ -1534 +1514 @@
 		raw_spin_unlock_irq(&object->lock);
 
 		/*
-		 * Do a cond_resched() to avoid soft lockup every 64k objects.
-		 * Make sure a reference has been taken so that the object
-		 * won't go away without RCU read lock.
+		 * Do a cond_resched() every 64k objects to avoid soft lockup.
		 */
-		if (!(loop1_cnt & 0xffff)) {
-			if (!obj_pinned && !get_object(object)) {
-				/* Try the next object instead */
-				loop1_cnt--;
-				continue;
-			}
-
-			rcu_read_unlock();
-			cond_resched();
-			rcu_read_lock();
-
-			if (!obj_pinned)
-				put_object(object);
-		}
+		if (!(++loop_cnt & 0xffff) &&
+		    !kmemleak_cond_resched(object, obj_pinned))
+			loop_cnt--;	/* Try again on next object */
 	}
 	rcu_read_unlock();
@@ -1605 +1598 @@
 	 * scan and color them gray until the next scan.
 	 */
 	rcu_read_lock();
+	loop_cnt = 0;
 	list_for_each_entry_rcu(object, &object_list, object_list) {
+		/*
+		 * Do a cond_resched() every 64k objects to avoid soft lockup.
+		 */
+		if (!(++loop_cnt & 0xffff) &&
+		    !kmemleak_cond_resched(object, false))
+			loop_cnt--;	/* Try again on next object */
+
 		/*
 		 * This is racy but we can save the overhead of lock/unlock
 		 * calls. The missed objects, if any, should be caught in
@@ -1647 +1632 @@
 	 * Scanning result reporting.
 	 */
 	rcu_read_lock();
+	loop_cnt = 0;
 	list_for_each_entry_rcu(object, &object_list, object_list) {
+		/*
+		 * Do a cond_resched() every 64k objects to avoid soft lockup.
+		 */
+		if (!(++loop_cnt & 0xffff) &&
+		    !kmemleak_cond_resched(object, false))
+			loop_cnt--;	/* Try again on next object */
+
 		/*
 		 * This is racy but we can save the overhead of lock/unlock
 		 * calls. The missed objects, if any, should be caught in
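Aside: the `!(++loop_cnt & 0xffff)` cadence with the `loop_cnt--` retry is easy to get off by one. A minimal userspace C sketch (with `try_pin()` as a hypothetical stand-in for `get_object()`, not a kernel API) shows the trigger and the retry behavior:

```c
#include <assert.h>
#include <stdbool.h>

/* try_pin() is a hypothetical stand-in for kmemleak's get_object():
 * pretend every 3rd object cannot be pinned. */
static bool try_pin(int obj)
{
	return obj % 3 != 0;
}

/* Walk n objects and count how many cond_resched() points would fire,
 * mirroring the !(++loop_cnt & 0xffff) cadence and the loop_cnt-- retry. */
static int walk_objects(int n)
{
	int loop_cnt = 0, rescheds = 0;

	for (int obj = 0; obj < n; obj++) {
		if (!(++loop_cnt & 0xffff)) {
			if (!try_pin(obj))
				loop_cnt--;	/* pin failed: retry on the next object */
			else
				rescheds++;	/* cond_resched() would run here */
		}
	}
	return rescheds;
}
```

The decrement re-arms the counter, so a failed pin delays the resched point by one object rather than by another 64k iterations.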
mm/madvise.c

@@ -813 +813 @@
 	if (start & ~huge_page_mask(hstate_vma(vma)))
 		return false;
 
-	*end = ALIGN(*end, huge_page_size(hstate_vma(vma)));
+	/*
+	 * Madvise callers expect the length to be rounded up to PAGE_SIZE
+	 * boundaries, and may be unaware that this VMA uses huge pages.
+	 * Avoid unexpected data loss by rounding down the number of
+	 * huge pages freed.
+	 */
+	*end = ALIGN_DOWN(*end, huge_page_size(hstate_vma(vma)));
+
 	return true;
 }
@@ -834 +827 @@
 	*prev = vma;
 	if (!madvise_dontneed_free_valid_vma(vma, start, &end, behavior))
 		return -EINVAL;
+
+	if (start == end)
+		return 0;
 
 	if (!userfaultfd_remove(vma, start, end)) {
 		*prev = NULL; /* mmap_lock has been dropped, prev is stale */
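Aside: the `ALIGN` → `ALIGN_DOWN` swap can be demonstrated in userspace. A small sketch (assuming 2 MiB huge pages, with userspace copies of the kernel alignment macros) shows why rounding the range end up could free a huge page the caller never asked to discard:

```c
#include <assert.h>

/* Userspace copies of the kernel's alignment helpers (a must be a power of two). */
#define ALIGN_UP(x, a)   (((x) + ((a) - 1)) & ~((a) - 1))
#define ALIGN_DOWN(x, a) ((x) & ~((a) - 1))

#define HPAGE_SIZE (2UL << 20)	/* assume 2 MiB huge pages for illustration */

/* Number of whole huge pages a DONTNEED range [start, end) would free,
 * depending on whether `end` is rounded up (old behaviour) or down (the fix). */
static unsigned long hpages_freed(unsigned long start, unsigned long end,
				  int round_down)
{
	unsigned long e = round_down ? ALIGN_DOWN(end, HPAGE_SIZE)
				     : ALIGN_UP(end, HPAGE_SIZE);

	return e > start ? (e - start) / HPAGE_SIZE : 0;
}
```

For a 3 MiB range, rounding up discards two huge pages (data loss in the fourth MiB); rounding down discards only the one fully covered page, which together with the new `start == end` early return gives the conservative behaviour.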
mm/page_isolation.c

@@ -330 +330 @@
 			zone->zone_start_pfn);
 
 	if (skip_isolation) {
-		int mt = get_pageblock_migratetype(pfn_to_page(isolate_pageblock));
+		int mt __maybe_unused = get_pageblock_migratetype(pfn_to_page(isolate_pageblock));
 
 		VM_BUG_ON(!is_migrate_isolate(mt));
 	} else {
+17
mm/shmem.c
@@ -2424 +2424 @@
 
 	if (!zeropage) {	/* COPY */
 		page_kaddr = kmap_local_folio(folio, 0);
+		/*
+		 * The read mmap_lock is held here. Despite the
+		 * mmap_lock being read recursive a deadlock is still
+		 * possible if a writer has taken a lock. For example:
+		 *
+		 * process A thread 1 takes read lock on own mmap_lock
+		 * process A thread 2 calls mmap, blocks taking write lock
+		 * process B thread 1 takes page fault, read lock on own mmap lock
+		 * process B thread 2 calls mmap, blocks taking write lock
+		 * process A thread 1 blocks taking read lock on process B
+		 * process B thread 1 blocks taking read lock on process A
+		 *
+		 * Disable page faults to prevent potential deadlock
+		 * and retry the copy outside the mmap_lock.
+		 */
+		pagefault_disable();
 		ret = copy_from_user(page_kaddr,
 				     (const void __user *)src_addr,
 				     PAGE_SIZE);
+		pagefault_enable();
 		kunmap_local(page_kaddr);
 
 		/* fallback to copy_from_user outside mmap_lock */
+21-4
mm/userfaultfd.c
@@ -157 +157 @@
 		if (!page)
 			goto out;
 
-		page_kaddr = kmap_atomic(page);
+		page_kaddr = kmap_local_page(page);
+		/*
+		 * The read mmap_lock is held here. Despite the
+		 * mmap_lock being read recursive a deadlock is still
+		 * possible if a writer has taken a lock. For example:
+		 *
+		 * process A thread 1 takes read lock on own mmap_lock
+		 * process A thread 2 calls mmap, blocks taking write lock
+		 * process B thread 1 takes page fault, read lock on own mmap lock
+		 * process B thread 2 calls mmap, blocks taking write lock
+		 * process A thread 1 blocks taking read lock on process B
+		 * process B thread 1 blocks taking read lock on process A
+		 *
+		 * Disable page faults to prevent potential deadlock
+		 * and retry the copy outside the mmap_lock.
+		 */
+		pagefault_disable();
 		ret = copy_from_user(page_kaddr,
 				     (const void __user *) src_addr,
 				     PAGE_SIZE);
-		kunmap_atomic(page_kaddr);
+		pagefault_enable();
+		kunmap_local(page_kaddr);
 
 		/* fallback to copy_from_user outside mmap_lock */
 		if (unlikely(ret)) {
@@ -663 +646 @@
 		mmap_read_unlock(dst_mm);
 		BUG_ON(!page);
 
-		page_kaddr = kmap(page);
+		page_kaddr = kmap_local_page(page);
 		err = copy_from_user(page_kaddr,
 				     (const void __user *) src_addr,
 				     PAGE_SIZE);
-		kunmap(page);
+		kunmap_local(page_kaddr);
 		if (unlikely(err)) {
 			err = -EFAULT;
 			goto out;
+12-6
net/bluetooth/hci_conn.c
@@ -1067 +1067 @@
 		hdev->acl_cnt += conn->sent;
 	} else {
 		struct hci_conn *acl = conn->link;
+
 		if (acl) {
 			acl->link = NULL;
 			hci_conn_drop(acl);
+		}
+
+		/* Unacked ISO frames */
+		if (conn->type == ISO_LINK) {
+			if (hdev->iso_pkts)
+				hdev->iso_cnt += conn->sent;
+			else if (hdev->le_pkts)
+				hdev->le_cnt += conn->sent;
+			else
+				hdev->acl_cnt += conn->sent;
 		}
 	}
@@ -1772 +1761 @@
 		if (!cis)
 			return ERR_PTR(-ENOMEM);
 		cis->cleanup = cis_cleanup;
+		cis->dst_type = dst_type;
 	}
 
 	if (cis->state == BT_CONNECTED)
@@ -2151 +2139 @@
 {
 	struct hci_conn *le;
 	struct hci_conn *cis;
-
-	/* Convert from ISO socket address type to HCI address type */
-	if (dst_type == BDADDR_LE_PUBLIC)
-		dst_type = ADDR_LE_DEV_PUBLIC;
-	else
-		dst_type = ADDR_LE_DEV_RANDOM;
 
 	if (hci_dev_test_flag(hdev, HCI_ADVERTISING))
 		le = hci_connect_le(hdev, dst, dst_type, false,
+12-2
net/bluetooth/iso.c
@@ -235 +235 @@
 	return err;
 }
 
+static inline u8 le_addr_type(u8 bdaddr_type)
+{
+	if (bdaddr_type == BDADDR_LE_PUBLIC)
+		return ADDR_LE_DEV_PUBLIC;
+	else
+		return ADDR_LE_DEV_RANDOM;
+}
+
 static int iso_connect_bis(struct sock *sk)
 {
 	struct iso_conn *conn;
@@ -336 +328 @@
 	/* Just bind if DEFER_SETUP has been set */
 	if (test_bit(BT_SK_DEFER_SETUP, &bt_sk(sk)->flags)) {
 		hcon = hci_bind_cis(hdev, &iso_pi(sk)->dst,
-				    iso_pi(sk)->dst_type, &iso_pi(sk)->qos);
+				    le_addr_type(iso_pi(sk)->dst_type),
+				    &iso_pi(sk)->qos);
 		if (IS_ERR(hcon)) {
 			err = PTR_ERR(hcon);
 			goto done;
 		}
 	} else {
 		hcon = hci_connect_cis(hdev, &iso_pi(sk)->dst,
-				       iso_pi(sk)->dst_type, &iso_pi(sk)->qos);
+				       le_addr_type(iso_pi(sk)->dst_type),
+				       &iso_pi(sk)->qos);
 		if (IS_ERR(hcon)) {
 			err = PTR_ERR(hcon);
 			goto done;
+73-13
net/bluetooth/l2cap_core.c
@@ -1990 +1990 @@
 		if (link_type == LE_LINK && c->src_type == BDADDR_BREDR)
 			continue;
 
-		if (c->psm == psm) {
+		if (c->chan_type != L2CAP_CHAN_FIXED && c->psm == psm) {
 			int src_match, dst_match;
 			int src_any, dst_any;
@@ -3764 +3764 @@
 			l2cap_add_conf_opt(&ptr, L2CAP_CONF_RFC,
 					   sizeof(rfc), (unsigned long) &rfc, endptr - ptr);
 
-			if (test_bit(FLAG_EFS_ENABLE, &chan->flags)) {
+			if (remote_efs &&
+			    test_bit(FLAG_EFS_ENABLE, &chan->flags)) {
 				chan->remote_id = efs.id;
 				chan->remote_stype = efs.stype;
 				chan->remote_msdu = le16_to_cpu(efs.msdu);
@@ -5814 +5813 @@
 	BT_DBG("psm 0x%2.2x scid 0x%4.4x mtu %u mps %u", __le16_to_cpu(psm),
 	       scid, mtu, mps);
 
+	/* BLUETOOTH CORE SPECIFICATION Version 5.3 | Vol 3, Part A
+	 * page 1059:
+	 *
+	 *   Valid range: 0x0001-0x00ff
+	 *
+	 * Table 4.15: L2CAP_LE_CREDIT_BASED_CONNECTION_REQ SPSM ranges
+	 */
+	if (!psm || __le16_to_cpu(psm) > L2CAP_PSM_LE_DYN_END) {
+		result = L2CAP_CR_LE_BAD_PSM;
+		chan = NULL;
+		goto response;
+	}
+
 	/* Check if we have socket listening on psm */
 	pchan = l2cap_global_chan_by_psm(BT_LISTEN, psm, &conn->hcon->src,
 					 &conn->hcon->dst, LE_LINK);
@@ -6014 +6000 @@
 	}
 
 	psm  = req->psm;
+
+	/* BLUETOOTH CORE SPECIFICATION Version 5.3 | Vol 3, Part A
+	 * page 1059:
+	 *
+	 *   Valid range: 0x0001-0x00ff
+	 *
+	 * Table 4.15: L2CAP_LE_CREDIT_BASED_CONNECTION_REQ SPSM ranges
+	 */
+	if (!psm || __le16_to_cpu(psm) > L2CAP_PSM_LE_DYN_END) {
+		result = L2CAP_CR_LE_BAD_PSM;
+		goto response;
+	}
 
 	BT_DBG("psm 0x%2.2x mtu %u mps %u", __le16_to_cpu(psm), mtu, mps);
@@ -6911 +6885 @@
 			       struct l2cap_ctrl *control,
 			       struct sk_buff *skb, u8 event)
 {
+	struct l2cap_ctrl local_control;
 	int err = 0;
 	bool skb_in_use = false;
@@ -6936 +6909 @@
 			chan->buffer_seq = chan->expected_tx_seq;
 			skb_in_use = true;
 
+			/* l2cap_reassemble_sdu may free skb, hence invalidate
+			 * control, so make a copy in advance to use it after
+			 * l2cap_reassemble_sdu returns and to avoid the race
+			 * condition, for example:
+			 *
+			 * The current thread calls:
+			 *   l2cap_reassemble_sdu
+			 *     chan->ops->recv == l2cap_sock_recv_cb
+			 *       __sock_queue_rcv_skb
+			 * Another thread calls:
+			 *   bt_sock_recvmsg
+			 *     skb_recv_datagram
+			 *     skb_free_datagram
+			 * Then the current thread tries to access control, but
+			 * it was freed by skb_free_datagram.
+			 */
+			local_control = *control;
 			err = l2cap_reassemble_sdu(chan, skb, control);
 			if (err)
 				break;
 
-			if (control->final) {
+			if (local_control.final) {
 				if (!test_and_clear_bit(CONN_REJ_ACT,
 							&chan->conn_state)) {
-					control->final = 0;
-					l2cap_retransmit_all(chan, control);
+					local_control.final = 0;
+					l2cap_retransmit_all(chan, &local_control);
 					l2cap_ertm_send(chan);
 				}
 			}
@@ -7341 +7297 @@
 static int l2cap_stream_rx(struct l2cap_chan *chan, struct l2cap_ctrl *control,
 			   struct sk_buff *skb)
 {
+	/* l2cap_reassemble_sdu may free skb, hence invalidate control, so store
+	 * the txseq field in advance to use it after l2cap_reassemble_sdu
+	 * returns and to avoid the race condition, for example:
+	 *
+	 * The current thread calls:
+	 *   l2cap_reassemble_sdu
+	 *     chan->ops->recv == l2cap_sock_recv_cb
+	 *       __sock_queue_rcv_skb
+	 * Another thread calls:
+	 *   bt_sock_recvmsg
+	 *     skb_recv_datagram
+	 *     skb_free_datagram
+	 * Then the current thread tries to access control, but it was freed by
+	 * skb_free_datagram.
+	 */
+	u16 txseq = control->txseq;
+
 	BT_DBG("chan %p, control %p, skb %p, state %d", chan, control, skb,
 	       chan->rx_state);
 
-	if (l2cap_classify_txseq(chan, control->txseq) ==
-	    L2CAP_TXSEQ_EXPECTED) {
+	if (l2cap_classify_txseq(chan, txseq) == L2CAP_TXSEQ_EXPECTED) {
 		l2cap_pass_to_tx(chan, control);
 
 		BT_DBG("buffer_seq %u->%u", chan->buffer_seq,
@@ -7384 +7324 @@
 		}
 	}
 
-	chan->last_acked_seq = control->txseq;
-	chan->expected_tx_seq = __next_seq(chan, control->txseq);
+	chan->last_acked_seq = txseq;
+	chan->expected_tx_seq = __next_seq(chan, txseq);
 
 	return 0;
 }
@@ -7641 +7581 @@
 			return;
 		}
 
+		l2cap_chan_hold(chan);
 		l2cap_chan_lock(chan);
 	} else {
 		BT_DBG("unknown cid 0x%4.4x", cid);
@@ -8487 +8426 @@
 		 * expected length.
 		 */
 		if (skb->len < L2CAP_LEN_SIZE) {
-			if (l2cap_recv_frag(conn, skb, conn->mtu) < 0)
-				goto drop;
-			return;
+			l2cap_recv_frag(conn, skb, conn->mtu);
+			break;
 		}
 
 		len = get_unaligned_le16(skb->data) + L2CAP_HDR_SIZE;
@@ -8532 +8472 @@
 
 		/* Header still could not be read just continue */
 		if (conn->rx_skb->len < L2CAP_LEN_SIZE)
-			return;
+			break;
 	}
 
 	if (skb->len > conn->rx_len) {
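Aside: both l2cap hunks apply the same snapshot-before-call pattern. A userspace sketch (with `struct ctrl`, `consume()` and `process()` as hypothetical stand-ins, not Bluetooth APIs) shows why copying the needed fields first stays safe even when the callee frees the backing buffer:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* A stand-in for struct l2cap_ctrl with just the fields the pattern needs. */
struct ctrl {
	int final;
	int txseq;
};

/* Hypothetical consumer that frees (and poisons) the buffer holding `c`,
 * the way l2cap_reassemble_sdu can hand the skb to another thread that
 * frees it before the caller looks at `control` again. */
static void consume(struct ctrl *c)
{
	memset(c, 0xff, sizeof(*c));	/* simulate reuse before the free */
	free(c);
}

/* The safe pattern from the patch: snapshot the fields first. */
static int process(struct ctrl *c)
{
	struct ctrl local = *c;	/* copy before the buffer can go away */

	consume(c);
	return local.txseq;	/* reads the stack copy, not freed memory */
}
```

Reading `local.txseq` after `consume()` is well defined because the copy lives on the caller's stack, exactly as `local_control` and `txseq` do in the patched functions.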
net/netfilter/ipset/ip_set_hash_gen.h

@@ -42 +42 @@
 #define AHASH_MAX_SIZE			(6 * AHASH_INIT_SIZE)
 /* Max muber of elements in the array block when tuned */
 #define AHASH_MAX_TUNED			64
-
 #define AHASH_MAX(h)			((h)->bucketsize)
-
-/* Max number of elements can be tuned */
-#ifdef IP_SET_HASH_WITH_MULTI
-static u8
-tune_bucketsize(u8 curr, u32 multi)
-{
-	u32 n;
-
-	if (multi < curr)
-		return curr;
-
-	n = curr + AHASH_INIT_SIZE;
-	/* Currently, at listing one hash bucket must fit into a message.
-	 * Therefore we have a hard limit here.
-	 */
-	return n > curr && n <= AHASH_MAX_TUNED ? n : curr;
-}
-#define TUNE_BUCKETSIZE(h, multi) \
-	((h)->bucketsize = tune_bucketsize((h)->bucketsize, multi))
-#else
-#define TUNE_BUCKETSIZE(h, multi)
-#endif
 
 /* A hash bucket */
 struct hbucket {
@@ -913 +936 @@
 			goto set_full;
 		/* Create a new slot */
 		if (n->pos >= n->size) {
-			TUNE_BUCKETSIZE(h, multi);
+#ifdef IP_SET_HASH_WITH_MULTI
+			if (h->bucketsize >= AHASH_MAX_TUNED)
+				goto set_full;
+			else if (h->bucketsize < multi)
+				h->bucketsize += AHASH_INIT_SIZE;
+#endif
 			if (n->size >= AHASH_MAX(h)) {
 				/* Trigger rehashing */
 				mtype_data_next(&h->next, d);
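Aside: the inlined tuning logic now reports the set full instead of silently capping growth. A userspace sketch of the decision (the `AHASH_INIT_SIZE` value of 4 here is an assumption for illustration; only `AHASH_MAX_TUNED` is visible in the hunk):

```c
#include <assert.h>

#define AHASH_INIT_SIZE 4	/* assumed value, for illustration only */
#define AHASH_MAX_TUNED 64

/* Mirror of the inlined tuning logic: returns the new bucketsize, or -1
 * when the set must be reported full instead of growing further. */
static int tune_bucketsize(int bucketsize, unsigned int multi)
{
	if (bucketsize >= AHASH_MAX_TUNED)
		return -1;		/* the kernel code does goto set_full */
	if ((unsigned int)bucketsize < multi)
		bucketsize += AHASH_INIT_SIZE;
	return bucketsize;
}
```

The hard cap exists because, at listing time, one hash bucket must still fit into a single netlink message.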
sound/aoa/soundbus/i2sbus/core.c

@@ -147 +147 @@
 		return rc;
 }
 
+/* Returns 1 if added, 0 for otherwise; don't return a negative value! */
 /* FIXME: look at device node refcounting */
 static int i2sbus_add_dev(struct macio_dev *macio,
 			  struct i2sbus_control *control,
@@ -214 +213 @@
 	 * either as the second one in that case is just a modem. */
 	if (!ok) {
 		kfree(dev);
-		return -ENODEV;
+		return 0;
 	}
 
 	mutex_init(&dev->lock);
@@ -303 +302 @@
 
 	if (soundbus_add_one(&dev->sound)) {
 		printk(KERN_DEBUG "i2sbus: device registration error!\n");
+		if (dev->sound.ofdev.dev.kobj.state_initialized) {
+			soundbus_dev_put(&dev->sound);
+			return 0;
+		}
 		goto err;
 	}
+23
sound/core/control.c
@@ -753 +753 @@
 }
 EXPORT_SYMBOL(snd_ctl_rename_id);
 
+/**
+ * snd_ctl_rename - rename the control on the card
+ * @card: the card instance
+ * @kctl: the control to rename
+ * @name: the new name
+ *
+ * Renames the specified control on the card to the new name.
+ *
+ * Make sure to take the control write lock - down_write(&card->controls_rwsem).
+ */
+void snd_ctl_rename(struct snd_card *card, struct snd_kcontrol *kctl,
+		    const char *name)
+{
+	remove_hash_entries(card, kctl);
+
+	if (strscpy(kctl->id.name, name, sizeof(kctl->id.name)) < 0)
+		pr_warn("ALSA: Renamed control new name '%s' truncated to '%s'\n",
+			name, kctl->id.name);
+
+	add_hash_entries(card, kctl);
+}
+EXPORT_SYMBOL(snd_ctl_rename);
+
 #ifndef CONFIG_SND_CTL_FAST_LOOKUP
 static struct snd_kcontrol *
 snd_ctl_find_numid_slow(struct snd_card *card, unsigned int numid)
+25-8
sound/pci/ac97/ac97_codec.c
@@ -2009 +2009 @@
 	err = device_register(&ac97->dev);
 	if (err < 0) {
 		ac97_err(ac97, "Can't register ac97 bus\n");
+		put_device(&ac97->dev);
 		ac97->dev.bus = NULL;
 		return err;
 	}
@@ -2656 +2655 @@
  */
 static void set_ctl_name(char *dst, const char *src, const char *suffix)
 {
-	if (suffix)
-		sprintf(dst, "%s %s", src, suffix);
-	else
-		strcpy(dst, src);
-}
+	const size_t msize = SNDRV_CTL_ELEM_ID_NAME_MAXLEN;
+
+	if (suffix) {
+		if (snprintf(dst, msize, "%s %s", src, suffix) >= msize)
+			pr_warn("ALSA: AC97 control name '%s %s' truncated to '%s'\n",
+				src, suffix, dst);
+	} else {
+		if (strscpy(dst, src, msize) < 0)
+			pr_warn("ALSA: AC97 control name '%s' truncated to '%s'\n",
+				src, dst);
+	}
+}
 
 /* remove the control with the given name and optional suffix */
 static int snd_ac97_remove_ctl(struct snd_ac97 *ac97, const char *name,
@@ -2694 +2686 @@
 			      const char *dst, const char *suffix)
 {
 	struct snd_kcontrol *kctl = ctl_find(ac97, src, suffix);
+	char name[SNDRV_CTL_ELEM_ID_NAME_MAXLEN];
+
 	if (kctl) {
-		set_ctl_name(kctl->id.name, dst, suffix);
+		set_ctl_name(name, dst, suffix);
+		snd_ctl_rename(ac97->bus->card, kctl, name);
 		return 0;
 	}
 	return -ENOENT;
@@ -2717 +2706 @@
 			    const char *s2, const char *suffix)
 {
 	struct snd_kcontrol *kctl1, *kctl2;
+	char name[SNDRV_CTL_ELEM_ID_NAME_MAXLEN];
+
 	kctl1 = ctl_find(ac97, s1, suffix);
 	kctl2 = ctl_find(ac97, s2, suffix);
 	if (kctl1 && kctl2) {
-		set_ctl_name(kctl1->id.name, s2, suffix);
-		set_ctl_name(kctl2->id.name, s1, suffix);
+		set_ctl_name(name, s2, suffix);
+		snd_ctl_rename(ac97->bus->card, kctl1, name);
+
+		set_ctl_name(name, s1, suffix);
+		snd_ctl_rename(ac97->bus->card, kctl2, name);
+
 		return 0;
 	}
 	return -ENOENT;
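Aside: `set_ctl_name()` now detects truncation through `snprintf`'s return value, which reports the length the string *would* have had. A userspace sketch of the same idiom (`build_name()` is illustrative; `SNDRV_CTL_ELEM_ID_NAME_MAXLEN` is 44 in ALSA):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define NAME_MAX_LEN 44		/* SNDRV_CTL_ELEM_ID_NAME_MAXLEN */

/* Build "src suffix" (or just src) into dst, returning 1 on truncation.
 * Uses the same snprintf-return >= size idiom as set_ctl_name(). */
static int build_name(char *dst, const char *src, const char *suffix)
{
	if (suffix)
		return snprintf(dst, NAME_MAX_LEN, "%s %s", src, suffix)
			>= NAME_MAX_LEN;
	return snprintf(dst, NAME_MAX_LEN, "%s", src) >= NAME_MAX_LEN;
}
```

On truncation the buffer still holds a NUL-terminated 43-character prefix, so the patched code can both warn and keep a usable (if shortened) control name.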
+3-3
sound/pci/au88x0/au88x0.h
@@ -141 +141 @@
 #ifndef CHIP_AU8810
 	stream_t dma_wt[NR_WT];
 	wt_voice_t wt_voice[NR_WT];	/* WT register cache. */
-	char mixwt[(NR_WT / NR_WTPB) * 6];	/* WT mixin objects */
+	s8 mixwt[(NR_WT / NR_WTPB) * 6];	/* WT mixin objects */
 #endif
 
 	/* Global resources */
@@ -235 +235 @@
 static void vortex_connect_default(vortex_t * vortex, int en);
 static int vortex_adb_allocroute(vortex_t * vortex, int dma, int nr_ch,
 				 int dir, int type, int subdev);
-static char vortex_adb_checkinout(vortex_t * vortex, int resmap[], int out,
-				  int restype);
+static int vortex_adb_checkinout(vortex_t * vortex, int resmap[], int out,
+				 int restype);
 #ifndef CHIP_AU8810
 static int vortex_wt_allocroute(vortex_t * vortex, int dma, int nr_ch);
 static void vortex_wt_connect(vortex_t * vortex, int en);
+1-1
sound/pci/au88x0/au88x0_core.c
@@ -1998 +1998 @@
     out: Mean checkout if != 0. Else mean Checkin resource.
     restype: Indicates type of resource to be checked in or out.
 */
-static char
+static int
 vortex_adb_checkinout(vortex_t * vortex, int resmap[], int out, int restype)
 {
 	int i, qty = resnum[restype], resinuse = 0;
sound/soc/codecs/Kconfig

@@ -1629 +1629 @@
 config SND_SOC_TLV320ADC3XXX
 	tristate "Texas Instruments TLV320ADC3001/3101 audio ADC"
 	depends on I2C
+	depends on GPIOLIB
 	help
 	  Enable support for Texas Instruments TLV320ADC3001 and TLV320ADC3101
 	  ADCs.
+1-1
sound/soc/codecs/cx2072x.h
@@ -177 +177 @@
 #define CX2072X_PLBK_DRC_PARM_LEN	9
 #define CX2072X_CLASSD_AMP_LEN		6
 
-/* DAI interfae type */
+/* DAI interface type */
 #define CX2072X_DAI_HIFI	1
 #define CX2072X_DAI_DSP		2
 #define CX2072X_DAI_DSP_PWM	3 /* 4 ch, including mic and AEC */
@@ -417 +417 @@
 	 * or has convert-xxx property
 	 */
 	if ((of_get_child_count(codec_port) > 1) ||
-	    (adata->convert_rate || adata->convert_channels))
+	    asoc_simple_is_convert_required(adata))
 		return true;
 
 	return false;
 }
+15
sound/soc/generic/simple-card-utils.c
@@ -85 +85 @@
 }
 EXPORT_SYMBOL_GPL(asoc_simple_parse_convert);
 
+/**
+ * asoc_simple_is_convert_required() - Query if HW param conversion was requested
+ * @data: Link data.
+ *
+ * Returns true if any HW param conversion was requested for this DAI link with
+ * any "convert-xxx" properties.
+ */
+bool asoc_simple_is_convert_required(const struct asoc_simple_data *data)
+{
+	return data->convert_rate ||
+	       data->convert_channels ||
+	       data->convert_sample_format;
+}
+EXPORT_SYMBOL_GPL(asoc_simple_is_convert_required);
+
 int asoc_simple_parse_daifmt(struct device *dev,
 			     struct device_node *node,
 			     struct device_node *codec,
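Aside: the new helper is a plain predicate over the three convert-* fields, which also quietly fixes the callers that previously forgot `convert_sample_format`. A userspace reduction (with a pared-down stand-in for `struct asoc_simple_data`):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Minimal stand-in for struct asoc_simple_data: only the convert-* fields. */
struct asoc_simple_data {
	unsigned int convert_rate;
	unsigned int convert_channels;
	const char *convert_sample_format;
};

/* Same predicate as asoc_simple_is_convert_required(). */
static bool is_convert_required(const struct asoc_simple_data *d)
{
	return d->convert_rate || d->convert_channels ||
	       d->convert_sample_format;
}
```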
+1-2
sound/soc/generic/simple-card.c
@@ -393 +393 @@
 	 * or has convert-xxx property
 	 */
 	if (dpcm_selectable &&
-	    (num > 2 ||
-	     adata.convert_rate || adata.convert_channels)) {
+	    (num > 2 || asoc_simple_is_convert_required(&adata))) {
 		/*
 		 * np
 		 *	 |1(CPU)|0(Codec)  li->cpu
sound/soc/qcom/Kconfig

@@ -187 +187 @@
 config SND_SOC_SC7180
 	tristate "SoC Machine driver for SC7180 boards"
 	depends on I2C && GPIOLIB
+	depends on SOUNDWIRE || SOUNDWIRE=n
 	select SND_SOC_QCOM_COMMON
 	select SND_SOC_LPASS_SC7180
 	select SND_SOC_MAX98357A
+10
sound/soc/qcom/lpass-cpu.c
@@ -782 +782 @@
 		return true;
 	if (reg == LPASS_HDMI_TX_LEGACY_ADDR(v))
 		return true;
+	if (reg == LPASS_HDMI_TX_VBIT_CTL_ADDR(v))
+		return true;
+	if (reg == LPASS_HDMI_TX_PARITY_ADDR(v))
+		return true;
 
 	for (i = 0; i < v->hdmi_rdma_channels; ++i) {
 		if (reg == LPAIF_HDMI_RDMACURR_REG(v, i))
+			return true;
+		if (reg == LPASS_HDMI_TX_DMA_ADDR(v, i))
+			return true;
+		if (reg == LPASS_HDMI_TX_CH_LSB_ADDR(v, i))
+			return true;
+		if (reg == LPASS_HDMI_TX_CH_MSB_ADDR(v, i))
 			return true;
 	}
 	return false;
+4-2
sound/soc/soc-component.c
@@ -1213 +1213 @@
 	int i;
 
 	for_each_rtd_components(rtd, i, component) {
-		int ret = pm_runtime_resume_and_get(component->dev);
-		if (ret < 0 && ret != -EACCES)
+		int ret = pm_runtime_get_sync(component->dev);
+		if (ret < 0 && ret != -EACCES) {
+			pm_runtime_put_noidle(component->dev);
 			return soc_component_ret(component, ret);
+		}
 		/* mark stream if succeeded */
 		soc_component_mark_push(component, stream, pm);
 	}
+1-7
sound/soc/sof/intel/hda-codec.c
@@ -109 +109 @@
 #define is_generic_config(x)	0
 #endif
 
-static void hda_codec_device_exit(struct device *dev)
-{
-	snd_hdac_device_exit(dev_to_hdac_dev(dev));
-}
-
 static struct hda_codec *hda_codec_device_init(struct hdac_bus *bus, int addr, int type)
 {
 	struct hda_codec *codec;
@@ -121 +126 @@
 	}
 
 	codec->core.type = type;
-	codec->core.dev.release = hda_codec_device_exit;
 
 	ret = snd_hdac_device_register(&codec->core);
 	if (ret) {
 		dev_err(bus->dev, "failed to register hdac device\n");
-		snd_hdac_device_exit(&codec->core);
+		put_device(&codec->core.dev);
 		return ERR_PTR(ret);
 	}
 
tools/include/nolibc/string.h

@@ -19 +19 @@
 int memcmp(const void *s1, const void *s2, size_t n)
 {
 	size_t ofs = 0;
-	char c1 = 0;
+	int c1 = 0;
 
-	while (ofs < n && !(c1 = ((char *)s1)[ofs] - ((char *)s2)[ofs])) {
+	while (ofs < n && !(c1 = ((unsigned char *)s1)[ofs] - ((unsigned char *)s2)[ofs])) {
 		ofs++;
 	}
 	return c1;
@@ -125 +125 @@
 }
 
 /* this function is only used with arguments that are not constants or when
- * it's not known because optimizations are disabled.
+ * it's not known because optimizations are disabled. Note that gcc 12
+ * recognizes an strlen() pattern and replaces it with a jump to strlen(),
+ * thus itself, hence the asm() statement below that's meant to disable this
+ * confusing practice.
  */
 static __attribute__((unused))
-size_t nolibc_strlen(const char *str)
+size_t strlen(const char *str)
 {
 	size_t len;
 
-	for (len = 0; str[len]; len++);
+	for (len = 0; str[len]; len++)
+		asm("");
 	return len;
 }
@@ -144 +140 @@
  * the two branches, then will rely on an external definition of strlen().
  */
 #if defined(__OPTIMIZE__)
+#define nolibc_strlen(x) strlen(x)
 #define strlen(str) ({                          \
 	__builtin_constant_p((str)) ?           \
 	__builtin_strlen((str)) :               \
 	nolibc_strlen((str));                   \
 })
-#else
-#define strlen(str) nolibc_strlen((str))
 #endif
 
 static __attribute__((unused))
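Aside: the memcmp() fix matters because plain `char` is signed on most Linux ABIs, so bytes >= 0x80 compare as negative and sort before ASCII. A userspace copy of both variants makes the difference testable:

```c
#include <assert.h>
#include <stddef.h>

/* The buggy variant: plain char may be signed, so 0x80 becomes -128 and
 * high bytes compare lower than ASCII. */
static int memcmp_signed(const void *s1, const void *s2, size_t n)
{
	size_t ofs = 0;
	int c1 = 0;

	while (ofs < n && !(c1 = ((char *)s1)[ofs] - ((char *)s2)[ofs]))
		ofs++;
	return c1;
}

/* The fixed variant from the patch: compare as unsigned char, matching
 * the C standard's required memcmp() semantics. */
static int memcmp_fixed(const void *s1, const void *s2, size_t n)
{
	size_t ofs = 0;
	int c1 = 0;

	while (ofs < n &&
	       !(c1 = ((unsigned char *)s1)[ofs] - ((unsigned char *)s2)[ofs]))
		ofs++;
	return c1;
}
```

With the fix, `memcmp("\x80", "\x10", 1)` is positive as required; the signed variant yields a negative result on platforms where `char` is signed.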
+6-6
tools/power/pm-graph/README
@@ -6 +6 @@
 	  |_|            |___/          |_|
 
    pm-graph: suspend/resume/boot timing analysis tools
-      Version: 5.9
+      Version: 5.10
       Author: Todd Brandt <todd.e.brandt@intel.com>
-    Home Page: https://01.org/pm-graph
+    Home Page: https://www.intel.com/content/www/us/en/developer/topic-technology/open/pm-graph/overview.html
 
  Report bugs/issues at bugzilla.kernel.org Tools/pm-graph
  - https://bugzilla.kernel.org/buglist.cgi?component=pm-graph&product=Tools
 
  Full documentation available online & in man pages
  - Getting Started:
-   https://01.org/pm-graph/documentation/getting-started
+   https://www.intel.com/content/www/us/en/developer/articles/technical/usage.html
 
- - Config File Format:
-   https://01.org/pm-graph/documentation/3-config-file-format
+ - Feature Summary:
+   https://www.intel.com/content/www/us/en/developer/topic-technology/open/pm-graph/features.html
 
  - upstream version in git:
-   https://github.com/intel/pm-graph/
+   git clone https://github.com/intel/pm-graph/
 
  Table of Contents
  - Overview
+3
tools/power/pm-graph/sleepgraph.8
@@ -78 +78 @@
 If a wifi connection is available, check that it reconnects after resume. Include
 the reconnect time in the total resume time calculation and treat wifi timeouts
 as resume failures.
+.TP
+\fB-wifitrace\fR
+Trace through the wifi reconnect time and include it in the timeline.
 .SS "advanced"
 .TP
+109-116
tools/power/pm-graph/sleepgraph.py
@@ -86 +86 @@
 # store system values and test parameters
 class SystemValues:
 	title = 'SleepGraph'
-	version = '5.9'
+	version = '5.10'
 	ansi = False
 	rs = 0
 	display = ''
@@ -100 +100 @@
 	ftracelog = False
 	acpidebug = True
 	tstat = True
+	wifitrace = False
 	mindevlen = 0.0001
 	mincglen = 0.0
 	cgphase = ''
@@ -125 +124 @@
 	epath = '/sys/kernel/debug/tracing/events/power/'
 	pmdpath = '/sys/power/pm_debug_messages'
 	s0ixpath = '/sys/module/intel_pmc_core/parameters/warn_on_s0ix_failures'
+	s0ixres = '/sys/devices/system/cpu/cpuidle/low_power_idle_system_residency_us'
 	acpipath='/sys/module/acpi/parameters/debug_level'
 	traceevents = [
 		'suspend_resume',
@@ -182 +180 @@
 	tmstart = 'SUSPEND START %Y%m%d-%H:%M:%S.%f'
 	tmend = 'RESUME COMPLETE %Y%m%d-%H:%M:%S.%f'
 	tracefuncs = {
+		'async_synchronize_full': {},
 		'sys_sync': {},
 		'ksys_sync': {},
 		'__pm_notifier_call_chain': {},
@@ -307 +304 @@
 		[2, 'suspendstats', 'sh', '-c', 'grep -v invalid /sys/power/suspend_stats/*'],
 		[2, 'cpuidle', 'sh', '-c', 'grep -v invalid /sys/devices/system/cpu/cpu*/cpuidle/state*/s2idle/*'],
 		[2, 'battery', 'sh', '-c', 'grep -v invalid /sys/class/power_supply/*/*'],
+		[2, 'thermal', 'sh', '-c', 'grep . /sys/class/thermal/thermal_zone*/temp'],
 	]
 	cgblacklist = []
 	kprobes = dict()
@@ -781 +777 @@
 			return
 		if not quiet:
 			sysvals.printSystemInfo(False)
-		pprint('INITIALIZING FTRACE...')
+		pprint('INITIALIZING FTRACE')
 		# turn trace off
 		self.fsetVal('0', 'tracing_on')
 		self.cleanupFtrace()
@@ -845 +841 @@
 			for name in self.dev_tracefuncs:
 				self.defaultKprobe(name, self.dev_tracefuncs[name])
 			if not quiet:
-				pprint('INITIALIZING KPROBES...')
+				pprint('INITIALIZING KPROBES')
 			self.addKprobes(self.verbose)
 		if(self.usetraceevents):
 			# turn trace events on
@@ -1137 +1133 @@
 				self.cfgdef[file] = fp.read().strip()
 			fp.write(value)
 			fp.close()
+	def s0ixSupport(self):
+		if not os.path.exists(self.s0ixres) or not os.path.exists(self.mempowerfile):
+			return False
+		fp = open(sysvals.mempowerfile, 'r')
+		data = fp.read().strip()
+		fp.close()
+		if '[s2idle]' in data:
+			return True
+		return False
 	def haveTurbostat(self):
 		if not self.tstat:
 			return False
@@ -1159 +1146 @@
 			self.vprint(out)
 			return True
 		return False
-	def turbostat(self):
+	def turbostat(self, s0ixready):
 		cmd = self.getExec('turbostat')
 		rawout = keyline = valline = ''
 		fullcmd = '%s -q -S echo freeze > %s' % (cmd, self.powerfile)
@@ -1186 +1173 @@
 		for key in keyline:
 			idx = keyline.index(key)
 			val = valline[idx]
+			if key == 'SYS%LPI' and not s0ixready and re.match('^[0\.]*$', val):
+				continue
 			out.append('%s=%s' % (key, val))
 		return '|'.join(out)
 	def netfixon(self, net='both'):
@@ -1198 +1183 @@
 		out = ascii(fp.read()).strip()
 		fp.close()
 		return out
-	def wifiRepair(self):
-		out = self.netfixon('wifi')
-		if not out or 'error' in out.lower():
-			return ''
-		m = re.match('WIFI \S* ONLINE (?P<action>\S*)', out)
-		if not m:
-			return 'dead'
-		return m.group('action')
 	def wifiDetails(self, dev):
 		try:
 			info = open('/sys/class/net/%s/device/uevent' % dev, 'r').read().strip()
@@ -1227 +1220 @@
 				return '%s reconnected %.2f' % \
 					(self.wifiDetails(dev), max(0, time.time() - start))
 			time.sleep(0.01)
-		if self.netfix:
-			res = self.wifiRepair()
-			if res:
-				timeout = max(0, time.time() - start)
-				return '%s %s %d' % (self.wifiDetails(dev), res, timeout)
 		return '%s timeout %d' % (self.wifiDetails(dev), timeout)
 	def errorSummary(self, errinfo, msg):
 		found = False
@@ -1348 +1346 @@
 		for i in self.rslist:
 			self.setVal(self.rstgt, i)
 		pprint('runtime suspend settings restored on %d devices' % len(self.rslist))
+	def start(self, pm):
+		if self.useftrace:
+			self.dlog('start ftrace tracing')
+			self.fsetVal('1', 'tracing_on')
+			if self.useprocmon:
+				self.dlog('start the process monitor')
+				pm.start()
+	def stop(self, pm):
+		if self.useftrace:
+			if self.useprocmon:
+				self.dlog('stop the process monitor')
+				pm.stop()
+			self.dlog('stop ftrace tracing')
+			self.fsetVal('0', 'tracing_on')
 
 sysvals = SystemValues()
 switchvalues = ['enable', 'disable', 'on', 'off', 'true', 'false', '1', '0']
@@ -1659 +1643 @@
 		ubiquitous = False
 		if kprobename in dtf and 'ub' in dtf[kprobename]:
 			ubiquitous = True
-		title = cdata+' '+rdata
-		mstr = '\(.*\) *(?P<args>.*) *\((?P<caller>.*)\+.* arg1=(?P<ret>.*)'
-		m = re.match(mstr, title)
-		if m:
-			c = m.group('caller')
-			a = m.group('args').strip()
-			r = m.group('ret')
+		mc = re.match('\(.*\) *(?P<args>.*)', cdata)
+		mr = re.match('\((?P<caller>\S*).* arg1=(?P<ret>.*)', rdata)
+		if mc and mr:
+			c = mr.group('caller').split('+')[0]
+			a = mc.group('args').strip()
+			r = mr.group('ret')
 			if len(r) > 6:
 				r = ''
 			else:
 				r = 'ret=%s ' % r
 			if ubiquitous and c in dtf and 'ub' in dtf[c]:
 				return False
+		else:
+			return False
 		color = sysvals.kprobeColor(kprobename)
 		e = DevFunction(displayname, a, c, r, start, end, ubiquitous, proc, pid, color)
 		tgtdev['src'].append(e)
@@ -1789 +1772 @@
 					e.time = self.trimTimeVal(e.time, t0, dT, left)
 					e.end = self.trimTimeVal(e.end, t0, dT, left)
 					e.length = e.end - e.time
+				if('cpuexec' in d):
+					cpuexec = dict()
+					for e in d['cpuexec']:
+						c0, cN = e
+						c0 = self.trimTimeVal(c0, t0, dT, left)
+						cN = self.trimTimeVal(cN, t0, dT, left)
+						cpuexec[(c0, cN)] = d['cpuexec'][e]
+					d['cpuexec'] = cpuexec
 		for dir in ['suspend', 'resume']:
 			list = []
 			for e in self.errorinfo[dir]:
@@ -2111 +2086 @@
 		return d
 	def addProcessUsageEvent(self, name, times):
 		# get the start and end times for this process
-		maxC = 0
-		tlast = 0
-		start = -1
-		end = -1
+		cpuexec = dict()
+		tlast = start = end = -1
 		for t in sorted(times):
-			if tlast == 0:
+			if tlast < 0:
 				tlast = t
 				continue
-			if name in self.pstl[t]:
-				if start == -1 or tlast < start:
+			if name in self.pstl[t] and self.pstl[t][name] > 0:
+				if start < 0:
 					start = tlast
-				if end == -1 or t > end:
-					end = t
+				end, key = t, (tlast, t)
+				maxj = (t - tlast) * 1024.0
+				cpuexec[key] = min(1.0, float(self.pstl[t][name]) / maxj)
 			tlast = t
-		if start == -1 or end == -1:
-			return 0
+		if start < 0 or end < 0:
+			return
 		# add a new action for this process and get the object
 		out = self.newActionGlobal(name, start, end, -3)
-		if not out:
-			return 0
-		phase, devname = out
-		dev = self.dmesg[phase]['list'][devname]
-		# get the cpu exec data
-		tlast = 0
-		clast = 0
-		cpuexec = dict()
-		for t in sorted(times):
-			if tlast == 0 or t <= start or t > end:
-				tlast = t
-				continue
-			list = self.pstl[t]
-			c = 0
-			if name in list:
-				c = list[name]
-			if c > maxC:
-				maxC = c
-			if c != clast:
-				key = (tlast, t)
-				cpuexec[key] = c
-			tlast = t
-			clast = c
-		dev['cpuexec'] = cpuexec
-		return maxC
+		if out:
+			phase, devname = out
+			dev = self.dmesg[phase]['list'][devname]
+			dev['cpuexec'] = cpuexec
 	def createProcessUsageEvents(self):
-		# get an array of process names
-		proclist = []
+		# get an array of process names and times
+		proclist = {'sus': dict(), 'res': dict()}
+		tdata = {'sus': [], 'res': []}
 		for t in sorted(self.pstl):
-			pslist = self.pstl[t]
-			for ps in sorted(pslist):
-				if ps not in proclist:
-					proclist.append(ps)
-		# get a list of data points for suspend and resume
-		tsus = []
-		tres = []
-		for t in sorted(self.pstl):
-			if t < self.tSuspended:
-				tsus.append(t)
-			else:
-				tres.append(t)
+			dir = 'sus' if t < self.tSuspended else 'res'
+			for ps in sorted(self.pstl[t]):
+				if ps not in proclist[dir]:
+					proclist[dir][ps] = 0
+			tdata[dir].append(t)
 		# process the events for suspend and resume
-		if len(proclist) > 0:
+		if len(proclist['sus']) > 0 or len(proclist['res']) > 0:
 			sysvals.vprint('Process Execution:')
-		for ps in proclist:
-			c = self.addProcessUsageEvent(ps, tsus)
-			if c > 0:
-				sysvals.vprint('%25s (sus): %d' % (ps, c))
-			c = self.addProcessUsageEvent(ps, tres)
-			if c > 0:
-				sysvals.vprint('%25s (res): %d' % (ps, c))
+		for dir in ['sus', 'res']:
+			for ps in sorted(proclist[dir]):
+				self.addProcessUsageEvent(ps, tdata[dir])
 	def handleEndMarker(self, time, msg=''):
 		dm = self.dmesg
 		self.setEnd(time, msg)
@@ -3211 +3218 @@
 # markers, and/or kprobes required for primary parsing.
 def doesTraceLogHaveTraceEvents():
 	kpcheck = ['_cal: (', '_ret: (']
-	techeck = ['suspend_resume', 'device_pm_callback']
+	techeck = ['suspend_resume', 'device_pm_callback', 'tracing_mark_write']
 	tmcheck = ['SUSPEND START', 'RESUME COMPLETE']
 	sysvals.usekprobes = False
 	fp = sysvals.openlog(sysvals.ftracefile, 'r')
@@ -3234 +3241 @@
 			check.remove(i)
 		tmcheck = check
 	fp.close()
-	sysvals.usetraceevents = True if len(techeck) < 2 else False
+	sysvals.usetraceevents = True if len(techeck) < 3 else False
 	sysvals.usetracemarkers = True if len(tmcheck) == 0 else False
@@ -3449 +3456 @@
 				continue
 			# process cpu exec line
 			if t.type == 'tracing_mark_write':
+				if t.name == 'CMD COMPLETE' and data.tKernRes == 0:
+					data.tKernRes = t.time
 				m = re.match(tp.procexecfmt, t.name)
 				if(m):
 					parts, msg = 1, m.group('ps')
@@ -3668 +3673 @@
 					continue
 				e = next((x for x in reversed(tp.ktemp[key]) if x['end'] < 0), 0)
 				if not e:
+					continue
+				if (t.time - e['begin']) * 1000 < sysvals.mindevlen:
+					tp.ktemp[key].pop()
 					continue
 				e['end'] = t.time
 				e['rdata'] = kprobedata
@@ -4211 +4213 @@
 			fmt = '<n>(%.3f ms @ '+sv.timeformat+')</n>'
 			flen = fmt % (line.length*1000, line.time)
 			if line.isLeaf():
+				if line.length * 1000 < sv.mincglen:
+					continue
 				hf.write(html_func_leaf.format(line.name, flen))
 			elif line.freturn:
 				hf.write(html_func_end)
@@ -4827 +4827 @@
 		if('cpuexec' in dev):
 			for t in sorted(dev['cpuexec']):
 				start, end =
t48304830- j = float(dev['cpuexec'][t]) / 548314831- if j > 1.0:48324832- j = 1.048334830 height = '%.3f' % (rowheight/3)48344831 top = '%.3f' % (rowtop + devtl.scaleH + 2*rowheight/3)48354832 left = '%f' % (((start-m0)*100)/mTotal)48364833 width = '%f' % ((end-start)*100/mTotal)48374837- color = 'rgba(255, 0, 0, %f)' % j48344834+ color = 'rgba(255, 0, 0, %f)' % dev['cpuexec'][t]48384835 devtl.html += \48394836 html_cpuexec.format(left, top, height, width, color)48404837 if('src' not in dev):···54505453 call('sync', shell=True)54515454 sv.dlog('read dmesg')54525455 sv.initdmesg()54535453- # start ftrace54545454- if sv.useftrace:54555455- if not quiet:54565456- pprint('START TRACING')54575457- sv.dlog('start ftrace tracing')54585458- sv.fsetVal('1', 'tracing_on')54595459- if sv.useprocmon:54605460- sv.dlog('start the process monitor')54615461- pm.start()54625462- sv.dlog('run the cmdinfo list before')54565456+ sv.dlog('cmdinfo before')54635457 sv.cmdinfo(True)54585458+ sv.start(pm)54645459 # execute however many s/r runs requested54655460 for count in range(1,sv.execcount+1):54665461 # x2delay in between test runs···54895500 if res != 0:54905501 tdata['error'] = 'cmd returned %d' % res54915502 else:55035503+ s0ixready = sv.s0ixSupport()54925504 mode = sv.suspendmode54935505 if sv.memmode and os.path.exists(sv.mempowerfile):54945506 mode = 'mem'···54995509 sv.testVal(sv.diskpowerfile, 'radio', sv.diskmode)55005510 if sv.acpidebug:55015511 sv.testVal(sv.acpipath, 'acpi', '0xe')55025502- if mode == 'freeze' and sv.haveTurbostat():55125512+ if ((mode == 'freeze') or (sv.memmode == 's2idle')) \55135513+ and sv.haveTurbostat():55035514 # execution will pause here55045504- turbo = sv.turbostat()55155515+ turbo = sv.turbostat(s0ixready)55055516 if turbo:55065517 tdata['turbo'] = turbo55075518 else:···55135522 pf.close()55145523 except Exception as e:55155524 tdata['error'] = str(e)55165516- sv.dlog('system returned from resume')55255525+ sv.fsetVal('CMD COMPLETE', 
'trace_marker')55265526+ sv.dlog('system returned')55175527 # reset everything55185528 sv.testVal('restoreall')55195529 if(sv.rtcwake):···55275535 sv.fsetVal('WAIT END', 'trace_marker')55285536 # return from suspend55295537 pprint('RESUME COMPLETE')55305530- sv.fsetVal(datetime.now().strftime(sv.tmend), 'trace_marker')55385538+ if(count < sv.execcount):55395539+ sv.fsetVal(datetime.now().strftime(sv.tmend), 'trace_marker')55405540+ elif(not sv.wifitrace):55415541+ sv.fsetVal(datetime.now().strftime(sv.tmend), 'trace_marker')55425542+ sv.stop(pm)55315543 if sv.wifi and wifi:55325544 tdata['wifi'] = sv.pollWifi(wifi)55335545 sv.dlog('wifi check, %s' % tdata['wifi'])55345534- if sv.netfix:55355535- netfixout = sv.netfixon('wired')55365536- elif sv.netfix:55375537- netfixout = sv.netfixon()55385538- if sv.netfix and netfixout:55395539- tdata['netfix'] = netfixout55465546+ if(count == sv.execcount and sv.wifitrace):55475547+ sv.fsetVal(datetime.now().strftime(sv.tmend), 'trace_marker')55485548+ sv.stop(pm)55495549+ if sv.netfix:55505550+ tdata['netfix'] = sv.netfixon()55405551 sv.dlog('netfix, %s' % tdata['netfix'])55415552 if(sv.suspendmode == 'mem' or sv.suspendmode == 'command'):55425553 sv.dlog('read the ACPI FPDT')55435554 tdata['fw'] = getFPDT(False)55445555 testdata.append(tdata)55455545- sv.dlog('run the cmdinfo list after')55565556+ sv.dlog('cmdinfo after')55465557 cmdafter = sv.cmdinfo(False)55475547- # stop ftrace55485548- if sv.useftrace:55495549- if sv.useprocmon:55505550- sv.dlog('stop the process monitor')55515551- pm.stop()55525552- sv.fsetVal('0', 'tracing_on')55535558 # grab a copy of the dmesg output55545559 if not quiet:55555560 pprint('CAPTURING DMESG')55565556- sysvals.dlog('EXECUTION TRACE END')55575561 sv.getdmesg(testdata)55585562 # grab a copy of the ftrace output55595563 if sv.useftrace:···63386350 if not m:63396351 continue63406352 name, time, phase = m.group('n'), m.group('t'), m.group('p')63536353+ if name == 
'async_synchronize_full':63546354+ continue63416355 if ' async' in name or ' sync' in name:63426356 name = ' '.join(name.split(' ')[:-1])63436357 if phase.startswith('suspend'):···66916701 ' -skiphtml Run the test and capture the trace logs, but skip the timeline (default: disabled)\n'\66926702 ' -result fn Export a results table to a text file for parsing.\n'\66936703 ' -wifi If a wifi connection is available, check that it reconnects after resume.\n'\67046704+ ' -wifitrace Trace kernel execution through wifi reconnect.\n'\66946705 ' -netfix Use netfix to reset the network in the event it fails to resume.\n'\66956706 ' [testprep]\n'\66966707 ' -sync Sync the filesystems before starting the test\n'\···68196828 sysvals.sync = True68206829 elif(arg == '-wifi'):68216830 sysvals.wifi = True68316831+ elif(arg == '-wifitrace'):68326832+ sysvals.wifitrace = True68226833 elif(arg == '-netfix'):68236834 sysvals.netfix = True68246835 elif(arg == '-gzip'):