···
 Tx
 --
 
-end_start_xmit() is called by the stack. This function does the following:
+ena_start_xmit() is called by the stack. This function does the following:
 
 - Maps data buffers (skb->data and frags).
 - Populates ena_buf for the push buffer (if the driver and device are
···
 ``devlink-dpipe`` should change according to the changes done by the
 standard configuration tools.
 
-For example, it’s quiet common to implement Access Control Lists (ACL)
+For example, it’s quite common to implement Access Control Lists (ACL)
 using Ternary Content Addressable Memory (TCAM). The TCAM memory can be
 divided into TCAM regions. Complex TC filters can have multiple rules with
 different priorities and different lookup keys. On the other hand hardware
+2-2
Documentation/networking/devlink/devlink-port.rst
···
 -------------
 A subfunction devlink port is created but it is not active yet. That means the
 entities are created on devlink side, the e-switch port representor is created,
-but the subfunction device itself it not created. A user might use e-switch port
+but the subfunction device itself is not created. A user might use e-switch port
 representor to do settings, putting it into bridge, adding TC rules, etc. A user
 might as well configure the hardware address (such as MAC address) of the
 subfunction while subfunction is inactive.
···
    * - Term
      - Definitions
    * - ``PCI device``
-     - A physical PCI device having one or more PCI bus consists of one or
+     - A physical PCI device having one or more PCI buses consists of one or
       more PCI controllers.
    * - ``PCI controller``
      - A controller consists of potentially multiple physical functions,
+1-1
Documentation/networking/xfrm_device.rst
···
 
 The NIC driver offering ipsec offload will need to implement these
 callbacks to make the offload available to the network stack's
-XFRM subsytem. Additionally, the feature bits NETIF_F_HW_ESP and
+XFRM subsystem. Additionally, the feature bits NETIF_F_HW_ESP and
 NETIF_F_HW_ESP_TX_CSUM will signal the availability of the offload.
 
+28-31
MAINTAINERS
···
 N: sc2731
 
 ARM/STI ARCHITECTURE
-M: Patrice Chotard <patrice.chotard@st.com>
+M: Patrice Chotard <patrice.chotard@foss.st.com>
 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Maintained
 W: http://www.stlinux.com
···
 
 ARM/STM32 ARCHITECTURE
 M: Maxime Coquelin <mcoquelin.stm32@gmail.com>
-M: Alexandre Torgue <alexandre.torgue@st.com>
+M: Alexandre Torgue <alexandre.torgue@foss.st.com>
 L: linux-stm32@st-md-mailman.stormreply.com (moderated for non-subscribers)
 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Maintained
···
 F: drivers/md/bcache/
 
 BDISP ST MEDIA DRIVER
-M: Fabien Dessenne <fabien.dessenne@st.com>
+M: Fabien Dessenne <fabien.dessenne@foss.st.com>
 L: linux-media@vger.kernel.org
 S: Supported
 W: https://linuxtv.org
···
 L: linux-pm@vger.kernel.org
 S: Maintained
 T: git git://github.com/broadcom/stblinux.git
-F: drivers/soc/bcm/bcm-pmb.c
+F: drivers/soc/bcm/bcm63xx/bcm-pmb.c
 F: include/dt-bindings/soc/bcm-pmb.h
 
 BROADCOM SPECIFIC AMBA DRIVER (BCMA)
···
 F: drivers/platform/x86/dell/dell-wmi.c
 
 DELTA ST MEDIA DRIVER
-M: Hugues Fruchet <hugues.fruchet@st.com>
+M: Hugues Fruchet <hugues.fruchet@foss.st.com>
 L: linux-media@vger.kernel.org
 S: Supported
 W: https://linuxtv.org
···
 
 DRM DRIVERS FOR STI
 M: Benjamin Gaignard <benjamin.gaignard@linaro.org>
-M: Vincent Abriou <vincent.abriou@st.com>
 L: dri-devel@lists.freedesktop.org
 S: Maintained
 T: git git://anongit.freedesktop.org/drm/drm-misc
···
 F: drivers/gpu/drm/sti
 
 DRM DRIVERS FOR STM
-M: Yannick Fertre <yannick.fertre@st.com>
-M: Philippe Cornu <philippe.cornu@st.com>
+M: Yannick Fertre <yannick.fertre@foss.st.com>
+M: Philippe Cornu <philippe.cornu@foss.st.com>
 M: Benjamin Gaignard <benjamin.gaignard@linaro.org>
-M: Vincent Abriou <vincent.abriou@st.com>
 L: dri-devel@lists.freedesktop.org
 S: Maintained
 T: git git://anongit.freedesktop.org/drm/drm-misc
···
 GENERIC PHY FRAMEWORK
 M: Kishon Vijay Abraham I <kishon@ti.com>
 M: Vinod Koul <vkoul@kernel.org>
-L: linux-kernel@vger.kernel.org
+L: linux-phy@lists.infradead.org
 S: Supported
+Q: https://patchwork.kernel.org/project/linux-phy/list/
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/phy/linux-phy.git
 F: Documentation/devicetree/bindings/phy/
 F: drivers/phy/
···
 F: mm/hugetlb.c
 
 HVA ST MEDIA DRIVER
-M: Jean-Christophe Trotin <jean-christophe.trotin@st.com>
+M: Jean-Christophe Trotin <jean-christophe.trotin@foss.st.com>
 L: linux-media@vger.kernel.org
 S: Supported
 W: https://linuxtv.org
···
 M: Dany Madden <drt@linux.ibm.com>
 M: Lijun Pan <ljp@linux.ibm.com>
 M: Sukadev Bhattiprolu <sukadev@linux.ibm.com>
+R: Thomas Falcon <tlfalcon@linux.ibm.com>
 L: netdev@vger.kernel.org
 S: Supported
 F: drivers/net/ethernet/ibm/ibmvnic.*
···
 
 LED SUBSYSTEM
 M: Pavel Machek <pavel@ucw.cz>
-R: Dan Murphy <dmurphy@ti.com>
 L: linux-leds@vger.kernel.org
 S: Maintained
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/pavel/linux-leds.git
···
 F: drivers/media/radio/radio-maxiradio*
 
 MCAN MMIO DEVICE DRIVER
-M: Dan Murphy <dmurphy@ti.com>
 M: Pankaj Sharma <pankj.sharma@samsung.com>
 L: linux-can@vger.kernel.org
 S: Maintained
···
 F: drivers/media/dvb-frontends/stv6111*
 
 MEDIA DRIVERS FOR STM32 - DCMI
-M: Hugues Fruchet <hugues.fruchet@st.com>
+M: Hugues Fruchet <hugues.fruchet@foss.st.com>
 L: linux-media@vger.kernel.org
 S: Supported
 T: git git://linuxtv.org/media_tree.git
···
 M: Mat Martineau <mathew.j.martineau@linux.intel.com>
 M: Matthieu Baerts <matthieu.baerts@tessares.net>
 L: netdev@vger.kernel.org
-L: mptcp@lists.01.org
+L: mptcp@lists.linux.dev
 S: Maintained
 W: https://github.com/multipath-tcp/mptcp_net-next/wiki
 B: https://github.com/multipath-tcp/mptcp_net-next/issues
···
 QLOGIC QLGE 10Gb ETHERNET DRIVER
 M: Manish Chopra <manishc@marvell.com>
 M: GR-Linux-NIC-Dev@marvell.com
-L: netdev@vger.kernel.org
-S: Supported
-F: drivers/staging/qlge/
-
-QLOGIC QLGE 10Gb ETHERNET DRIVER
 M: Coiby Xu <coiby.xu@gmail.com>
 L: netdev@vger.kernel.org
-S: Maintained
+S: Supported
 F: Documentation/networking/device_drivers/qlogic/qlge.rst
+F: drivers/staging/qlge/
 
 QM1D1B0004 MEDIA DRIVER
 M: Akihiro Tsukada <tskd08@gmail.com>
···
 
 S390 VFIO AP DRIVER
 M: Tony Krowiak <akrowiak@linux.ibm.com>
-M: Pierre Morel <pmorel@linux.ibm.com>
 M: Halil Pasic <pasic@linux.ibm.com>
+M: Jason Herne <jjherne@linux.ibm.com>
 L: linux-s390@vger.kernel.org
 S: Supported
 W: http://www.ibm.com/developerworks/linux/linux390/
···
 S390 VFIO-CCW DRIVER
 M: Cornelia Huck <cohuck@redhat.com>
 M: Eric Farman <farman@linux.ibm.com>
+M: Matthew Rosato <mjrosato@linux.ibm.com>
 R: Halil Pasic <pasic@linux.ibm.com>
 L: linux-s390@vger.kernel.org
 L: kvm@vger.kernel.org
···
 
 S390 VFIO-PCI DRIVER
 M: Matthew Rosato <mjrosato@linux.ibm.com>
+M: Eric Farman <farman@linux.ibm.com>
 L: linux-s390@vger.kernel.org
 L: kvm@vger.kernel.org
 S: Supported
···
 
 SPIDERNET NETWORK DRIVER for CELL
 M: Ishizaki Kou <kou.ishizaki@toshiba.co.jp>
+M: Geoff Levand <geoff@infradead.org>
 L: netdev@vger.kernel.org
-S: Supported
+L: linuxppc-dev@lists.ozlabs.org
+S: Maintained
 F: Documentation/networking/device_drivers/ethernet/toshiba/spider_net.rst
 F: drivers/net/ethernet/toshiba/spider_net*
 
···
 F: drivers/media/i2c/st-mipid02.c
 
 ST STM32 I2C/SMBUS DRIVER
-M: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
+M: Pierre-Yves MORDRET <pierre-yves.mordret@foss.st.com>
+M: Alain Volmat <alain.volmat@foss.st.com>
 L: linux-i2c@vger.kernel.org
 S: Maintained
 F: drivers/i2c/busses/i2c-stm32*
···
 F: kernel/static_call.c
 
 STI AUDIO (ASoC) DRIVERS
-M: Arnaud Pouliquen <arnaud.pouliquen@st.com>
+M: Arnaud Pouliquen <arnaud.pouliquen@foss.st.com>
 L: alsa-devel@alsa-project.org (moderated for non-subscribers)
 S: Maintained
 F: Documentation/devicetree/bindings/sound/st,sti-asoc-card.txt
···
 F: drivers/media/usb/stk1160/
 
 STM32 AUDIO (ASoC) DRIVERS
-M: Olivier Moysan <olivier.moysan@st.com>
-M: Arnaud Pouliquen <arnaud.pouliquen@st.com>
+M: Olivier Moysan <olivier.moysan@foss.st.com>
+M: Arnaud Pouliquen <arnaud.pouliquen@foss.st.com>
 L: alsa-devel@alsa-project.org (moderated for non-subscribers)
 S: Maintained
 F: Documentation/devicetree/bindings/iio/adc/st,stm32-*.yaml
 F: sound/soc/stm/
 
 STM32 TIMER/LPTIMER DRIVERS
-M: Fabrice Gasnier <fabrice.gasnier@st.com>
+M: Fabrice Gasnier <fabrice.gasnier@foss.st.com>
 S: Maintained
 F: Documentation/ABI/testing/*timer-stm32
 F: Documentation/devicetree/bindings/*/*stm32-*timer*
···
 
 STMMAC ETHERNET DRIVER
 M: Giuseppe Cavallaro <peppe.cavallaro@st.com>
-M: Alexandre Torgue <alexandre.torgue@st.com>
+M: Alexandre Torgue <alexandre.torgue@foss.st.com>
 M: Jose Abreu <joabreu@synopsys.com>
 L: netdev@vger.kernel.org
 S: Supported
···
 F: drivers/thermal/ti-soc-thermal/
 
 TI BQ27XXX POWER SUPPLY DRIVER
-R: Dan Murphy <dmurphy@ti.com>
 F: drivers/power/supply/bq27xxx_battery.c
 F: drivers/power/supply/bq27xxx_battery_i2c.c
 F: include/linux/power/bq27xxx_battery.h
···
 F: sound/soc/codecs/tas571x*
 
 TI TCAN4X5X DEVICE DRIVER
-M: Dan Murphy <dmurphy@ti.com>
 L: linux-can@vger.kernel.org
 S: Maintained
 F: Documentation/devicetree/bindings/net/can/tcan4x5x.txt
···
 
 extern struct omap_sr_data omap_sr_pdata[];
 
-static int __init sr_dev_init(struct omap_hwmod *oh, void *user)
+static int __init sr_init_by_name(const char *name, const char *voltdm)
 {
 	struct omap_sr_data *sr_data = NULL;
 	struct omap_volt_data *volt_data;
-	struct omap_smartreflex_dev_attr *sr_dev_attr;
 	static int i;
 
-	if (!strncmp(oh->name, "smartreflex_mpu_iva", 20) ||
-	    !strncmp(oh->name, "smartreflex_mpu", 16))
+	if (!strncmp(name, "smartreflex_mpu_iva", 20) ||
+	    !strncmp(name, "smartreflex_mpu", 16))
 		sr_data = &omap_sr_pdata[OMAP_SR_MPU];
-	else if (!strncmp(oh->name, "smartreflex_core", 17))
+	else if (!strncmp(name, "smartreflex_core", 17))
 		sr_data = &omap_sr_pdata[OMAP_SR_CORE];
-	else if (!strncmp(oh->name, "smartreflex_iva", 16))
+	else if (!strncmp(name, "smartreflex_iva", 16))
 		sr_data = &omap_sr_pdata[OMAP_SR_IVA];
 
 	if (!sr_data) {
-		pr_err("%s: Unknown instance %s\n", __func__, oh->name);
+		pr_err("%s: Unknown instance %s\n", __func__, name);
 		return -EINVAL;
 	}
 
-	sr_dev_attr = (struct omap_smartreflex_dev_attr *)oh->dev_attr;
-	if (!sr_dev_attr || !sr_dev_attr->sensor_voltdm_name) {
-		pr_err("%s: No voltage domain specified for %s. Cannot initialize\n",
-		       __func__, oh->name);
-		goto exit;
-	}
-
-	sr_data->name = oh->name;
+	sr_data->name = name;
 	if (cpu_is_omap343x())
 		sr_data->ip_type = 1;
 	else
···
 	}
 	}
 
-	sr_data->voltdm = voltdm_lookup(sr_dev_attr->sensor_voltdm_name);
+	sr_data->voltdm = voltdm_lookup(voltdm);
 	if (!sr_data->voltdm) {
 		pr_err("%s: Unable to get voltage domain pointer for VDD %s\n",
-		       __func__, sr_dev_attr->sensor_voltdm_name);
+		       __func__, voltdm);
 		goto exit;
 	}
 
···
 	return 0;
 }
 
+static int __init sr_dev_init(struct omap_hwmod *oh, void *user)
+{
+	struct omap_smartreflex_dev_attr *sr_dev_attr;
+
+	sr_dev_attr = (struct omap_smartreflex_dev_attr *)oh->dev_attr;
+	if (!sr_dev_attr || !sr_dev_attr->sensor_voltdm_name) {
+		pr_err("%s: No voltage domain specified for %s. Cannot initialize\n",
+		       __func__, oh->name);
+		return 0;
+	}
+
+	return sr_init_by_name(oh->name, sr_dev_attr->sensor_voltdm_name);
+}
+
 /*
  * API to be called from board files to enable smartreflex
  * autocompensation at init.
···
 	sr_enable_on_init = true;
 }
 
+static const char * const omap4_sr_instances[] = {
+	"mpu",
+	"iva",
+	"core",
+};
+
+static const char * const dra7_sr_instances[] = {
+	"mpu",
+	"core",
+};
+
 int __init omap_devinit_smartreflex(void)
 {
+	const char * const *sr_inst;
+	int i, nr_sr = 0;
+
+	if (soc_is_omap44xx()) {
+		sr_inst = omap4_sr_instances;
+		nr_sr = ARRAY_SIZE(omap4_sr_instances);
+
+	} else if (soc_is_dra7xx()) {
+		sr_inst = dra7_sr_instances;
+		nr_sr = ARRAY_SIZE(dra7_sr_instances);
+	}
+
+	if (nr_sr) {
+		const char *name, *voltdm;
+
+		for (i = 0; i < nr_sr; i++) {
+			name = kasprintf(GFP_KERNEL, "smartreflex_%s", sr_inst[i]);
+			voltdm = sr_inst[i];
+			sr_init_by_name(name, voltdm);
+		}
+
+		return 0;
+	}
+
 	return omap_hwmod_for_each_by_class("smartreflex", sr_dev_init, NULL);
 }
+10
arch/arm64/Kconfig
···
 
 	  If unsure, say Y.
 
+config NVIDIA_CARMEL_CNP_ERRATUM
+	bool "NVIDIA Carmel CNP: CNP on Carmel semantically different than ARM cores"
+	default y
+	help
+	  If CNP is enabled on Carmel cores, non-sharable TLBIs on a core will not
+	  invalidate shared TLB entries installed by a different core, as it would
+	  on standard ARM cores.
+
+	  If unsure, say Y.
+
 config SOCIONEXT_SYNQUACER_PREITS
 	bool "Socionext Synquacer: Workaround for GICv3 pre-ITS"
 	default y
···
 	 * of support.
 	 */
 	S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_AA64DFR0_PMUVER_SHIFT, 4, 0),
-	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64DFR0_TRACEVER_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_EXACT, ID_AA64DFR0_DEBUGVER_SHIFT, 4, 0x6),
 	ARM64_FTR_END,
 };
···
 	 * may share TLB entries with a CPU stuck in the crashed
 	 * kernel.
 	 */
-	if (is_kdump_kernel())
+	if (is_kdump_kernel())
+		return false;
+
+	if (cpus_have_const_cap(ARM64_WORKAROUND_NVIDIA_CARMEL_CNP))
 		return false;
 
 	return has_cpuid_feature(entry, scope);
+1-1
arch/arm64/kernel/cpuinfo.c
···
 	 * with the CLIDR_EL1 fields to avoid triggering false warnings
 	 * when there is a mismatch across the CPUs. Keep track of the
 	 * effective value of the CTR_EL0 in our internal records for
-	 * acurate sanity check and feature enablement.
+	 * accurate sanity check and feature enablement.
 	 */
 	info->reg_ctr = read_cpuid_effective_cachetype();
 	info->reg_dczid = read_cpuid(DCZID_EL0);
···
  * - Debug ROM Address (MDCR_EL2_TDRA)
  * - OS related registers (MDCR_EL2_TDOSA)
  * - Statistical profiler (MDCR_EL2_TPMS/MDCR_EL2_E2PB)
+ * - Self-hosted Trace Filter controls (MDCR_EL2_TTRF)
  *
  * Additionally, KVM only traps guest accesses to the debug registers if
  * the guest is not actively using them (see the KVM_ARM64_DEBUG_DIRTY
···
 	vcpu->arch.mdcr_el2 = __this_cpu_read(mdcr_el2) & MDCR_EL2_HPMN_MASK;
 	vcpu->arch.mdcr_el2 |= (MDCR_EL2_TPM |
 				MDCR_EL2_TPMS |
+				MDCR_EL2_TTRF |
 				MDCR_EL2_TPMCR |
 				MDCR_EL2_TDRA |
 				MDCR_EL2_TDOSA);
+9
arch/arm64/kvm/hyp/vgic-v3-sr.c
···
 	if (has_vhe())
 		flags = local_daif_save();
 
+	/*
+	 * Table 11-2 "Permitted ICC_SRE_ELx.SRE settings" indicates
+	 * that to be able to set ICC_SRE_EL1.SRE to 0, all the
+	 * interrupt overrides must be set. You've got to love this.
+	 */
+	sysreg_clear_set(hcr_el2, 0, HCR_AMO | HCR_FMO | HCR_IMO);
+	isb();
 	write_gicreg(0, ICC_SRE_EL1);
 	isb();
 
 	val = read_gicreg(ICC_SRE_EL1);
 
 	write_gicreg(sre, ICC_SRE_EL1);
+	isb();
+	sysreg_clear_set(hcr_el2, HCR_AMO | HCR_FMO | HCR_IMO, 0);
 	isb();
 
 	if (has_vhe())
+19-2
arch/arm64/mm/mmu.c
···
 struct range arch_get_mappable_range(void)
 {
 	struct range mhp_range;
+	u64 start_linear_pa = __pa(_PAGE_OFFSET(vabits_actual));
+	u64 end_linear_pa = __pa(PAGE_END - 1);
+
+	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
+		/*
+		 * Check for a wrap, it is possible because of randomized linear
+		 * mapping the start physical address is actually bigger than
+		 * the end physical address. In this case set start to zero
+		 * because [0, end_linear_pa] range must still be able to cover
+		 * all addressable physical addresses.
+		 */
+		if (start_linear_pa > end_linear_pa)
+			start_linear_pa = 0;
+	}
+
+	WARN_ON(start_linear_pa > end_linear_pa);
 
 	/*
 	 * Linear mapping region is the range [PAGE_OFFSET..(PAGE_END - 1)]
···
 	 * range which can be mapped inside this linear mapping range, must
 	 * also be derived from its end points.
 	 */
-	mhp_range.start = __pa(_PAGE_OFFSET(vabits_actual));
-	mhp_range.end = __pa(PAGE_END - 1);
+	mhp_range.start = start_linear_pa;
+	mhp_range.end = end_linear_pa;
+
 	return mhp_range;
 }
···
 		data = mca_bootmem();
 		first_time = 0;
 	} else
-		data = (void *)__get_free_pages(GFP_KERNEL,
+		data = (void *)__get_free_pages(GFP_ATOMIC,
 						get_order(sz));
 	if (!data)
 		panic("Could not allocate MCA memory for cpu %d\n",
···
 	return ret;
 }
 
+/**
+ * struct pseries_suspend_info - State shared between CPUs for join/suspend.
+ * @counter: Threads are to increment this upon resuming from suspend
+ *           or if an error is received from H_JOIN. The thread which performs
+ *           the first increment (i.e. sets it to 1) is responsible for
+ *           waking the other threads.
+ * @done: False if join/suspend is in progress. True if the operation is
+ *        complete (successful or not).
+ */
+struct pseries_suspend_info {
+	atomic_t counter;
+	bool done;
+};
+
 static int do_join(void *arg)
 {
-	atomic_t *counter = arg;
+	struct pseries_suspend_info *info = arg;
+	atomic_t *counter = &info->counter;
 	long hvrc;
 	int ret;
 
+retry:
 	/* Must ensure MSR.EE off for H_JOIN. */
 	hard_irq_disable();
 	hvrc = plpar_hcall_norets(H_JOIN);
···
 	case H_SUCCESS:
 		/*
 		 * The suspend is complete and this cpu has received a
-		 * prod.
+		 * prod, or we've received a stray prod from unrelated
+		 * code (e.g. paravirt spinlocks) and we need to join
+		 * again.
+		 *
+		 * This barrier orders the return from H_JOIN above vs
+		 * the load of info->done. It pairs with the barrier
+		 * in the wakeup/prod path below.
 		 */
+		smp_mb();
+		if (READ_ONCE(info->done) == false) {
+			pr_info_ratelimited("premature return from H_JOIN on CPU %i, retrying",
+					    smp_processor_id());
+			goto retry;
+		}
 		ret = 0;
 		break;
 	case H_BAD_MODE:
···
 
 	if (atomic_inc_return(counter) == 1) {
 		pr_info("CPU %u waking all threads\n", smp_processor_id());
+		WRITE_ONCE(info->done, true);
+		/*
+		 * This barrier orders the store to info->done vs subsequent
+		 * H_PRODs to wake the other CPUs. It pairs with the barrier
+		 * in the H_SUCCESS case above.
+		 */
+		smp_mb();
 		prod_others();
 	}
 	/*
···
 	int ret;
 
 	while (true) {
-		atomic_t counter = ATOMIC_INIT(0);
+		struct pseries_suspend_info info;
 		unsigned long vasi_state;
 		int vasi_err;
 
-		ret = stop_machine(do_join, &counter, cpu_online_mask);
+		info = (struct pseries_suspend_info) {
+			.counter = ATOMIC_INIT(0),
+			.done = false,
+		};
+
+		ret = stop_machine(do_join, &info, cpu_online_mask);
 		if (ret == 0)
 			break;
 		/*
+1-1
arch/riscv/Kconfig
···
 # Common NUMA Features
 config NUMA
 	bool "NUMA Memory Allocation and Scheduler Support"
-	depends on SMP
+	depends on SMP && MMU
 	select GENERIC_ARCH_NUMA
 	select OF_NUMA
 	select ARCH_SUPPORTS_NUMA_BALANCING
+5-2
arch/riscv/include/asm/uaccess.h
···
  * data types like structures or arrays.
  *
  * @ptr must have pointer-to-simple-variable type, and @x must be assignable
- * to the result of dereferencing @ptr.
+ * to the result of dereferencing @ptr. The value of @x is copied to avoid
+ * re-ordering where @x is evaluated inside the block that enables user-space
+ * access (thus bypassing user space protection if @x is a function).
  *
  * Caller must check the pointer with access_ok() before calling this
  * function.
···
 #define __put_user(x, ptr)					\
 ({								\
 	__typeof__(*(ptr)) __user *__gu_ptr = (ptr);		\
+	__typeof__(*__gu_ptr) __val = (x);			\
 	long __pu_err = 0;					\
 								\
 	__chk_user_ptr(__gu_ptr);				\
 								\
 	__enable_user_access();					\
-	__put_user_nocheck(x, __gu_ptr, __pu_err);		\
+	__put_user_nocheck(__val, __gu_ptr, __pu_err);		\
 	__disable_user_access();				\
 								\
 	__pu_err;						\
···
 
 #include <asm/stacktrace.h>
 
-register const unsigned long sp_in_global __asm__("sp");
+register unsigned long sp_in_global __asm__("sp");
 
 #ifdef CONFIG_FRAME_POINTER
 
+1-1
arch/riscv/mm/kasan_init.c
···
 			break;
 
 		kasan_populate(kasan_mem_to_shadow(start), kasan_mem_to_shadow(end));
-	};
+	}
 
 	for (i = 0; i < PTRS_PER_PTE; i++)
 		set_pte(&kasan_early_shadow_pte[i],
···
 #endif
 
 /*
- * The maximum amount of extra memory compared to the base size. The
- * main scaling factor is the size of struct page. At extreme ratios
- * of base:extra, all the base memory can be filled with page
- * structures for the extra memory, leaving no space for anything
- * else.
- *
- * 10x seems like a reasonable balance between scaling flexibility and
- * leaving a practically usable system.
- */
-#define XEN_EXTRA_MEM_RATIO (10)
-
-/*
  * Helper functions to write or read unsigned long values to/from
  * memory, when the access may fault.
  */
+12-13
arch/x86/kernel/acpi/boot.c
···
 	/*
 	 * Initialize the ACPI boot-time table parser.
 	 */
-	if (acpi_table_init()) {
+	if (acpi_locate_initial_tables())
 		disable_acpi();
-		return;
-	}
+	else
+		acpi_reserve_initial_tables();
+}
+
+int __init early_acpi_boot_init(void)
+{
+	if (acpi_disabled)
+		return 1;
+
+	acpi_table_init_complete();
 
 	acpi_table_parse(ACPI_SIG_BOOT, acpi_parse_sbf);
 
···
 		} else {
 			printk(KERN_WARNING PREFIX "Disabling ACPI support\n");
 			disable_acpi();
-			return;
+			return 1;
 		}
 	}
-}
-
-int __init early_acpi_boot_init(void)
-{
-	/*
-	 * If acpi_disabled, bail out
-	 */
-	if (acpi_disabled)
-		return 1;
 
 	/*
 	 * Process the Multiple APIC Description Table (MADT), if present
+3-5
arch/x86/kernel/setup.c
···
 
 	cleanup_highmap();
 
+	/* Look for ACPI tables and reserve memory occupied by them. */
+	acpi_boot_table_init();
+
 	memblock_set_current_limit(ISA_END_ADDRESS);
 	e820__memblock_setup();
 
···
 	io_delay_init();
 
 	early_platform_quirks();
-
-	/*
-	 * Parse the ACPI tables for possible boot-time SMP configuration.
-	 */
-	acpi_boot_table_init();
 
 	early_acpi_boot_init();
 
···
 	list_for_each_entry(_root, &_kvm->arch.tdp_mmu_roots, link)
 
 static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
-			  gfn_t start, gfn_t end, bool can_yield);
+			  gfn_t start, gfn_t end, bool can_yield, bool flush);
 
 void kvm_tdp_mmu_free_root(struct kvm *kvm, struct kvm_mmu_page *root)
 {
···
 
 	list_del(&root->link);
 
-	zap_gfn_range(kvm, root, 0, max_gfn, false);
+	zap_gfn_range(kvm, root, 0, max_gfn, false, false);
 
 	free_page((unsigned long)root->spt);
 	kmem_cache_free(mmu_page_header_cache, root);
···
  * scheduler needs the CPU or there is contention on the MMU lock. If this
  * function cannot yield, it will not release the MMU lock or reschedule and
  * the caller must ensure it does not supply too large a GFN range, or the
- * operation can cause a soft lockup.
+ * operation can cause a soft lockup. Note, in some use cases a flush may be
+ * required by prior actions. Ensure the pending flush is performed prior to
+ * yielding.
  */
 static bool zap_gfn_range(struct kvm *kvm, struct kvm_mmu_page *root,
-			  gfn_t start, gfn_t end, bool can_yield)
+			  gfn_t start, gfn_t end, bool can_yield, bool flush)
 {
 	struct tdp_iter iter;
-	bool flush_needed = false;
 
 	rcu_read_lock();
 
 	tdp_root_for_each_pte(iter, root, start, end) {
 		if (can_yield &&
-		    tdp_mmu_iter_cond_resched(kvm, &iter, flush_needed)) {
-			flush_needed = false;
+		    tdp_mmu_iter_cond_resched(kvm, &iter, flush)) {
+			flush = false;
 			continue;
 		}
 
···
 			continue;
 
 		tdp_mmu_set_spte(kvm, &iter, 0);
-		flush_needed = true;
+		flush = true;
 	}
 
 	rcu_read_unlock();
-	return flush_needed;
+	return flush;
 }
 
 /*
···
  * SPTEs have been cleared and a TLB flush is needed before releasing the
  * MMU lock.
  */
-bool kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end)
+bool __kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end,
+				 bool can_yield)
 {
 	struct kvm_mmu_page *root;
 	bool flush = false;
 
 	for_each_tdp_mmu_root_yield_safe(kvm, root)
-		flush |= zap_gfn_range(kvm, root, start, end, true);
+		flush = zap_gfn_range(kvm, root, start, end, can_yield, flush);
 
 	return flush;
 }
···
 			      struct kvm_mmu_page *root, gfn_t start,
 			      gfn_t end, unsigned long unused)
 {
-	return zap_gfn_range(kvm, root, start, end, false);
+	return zap_gfn_range(kvm, root, start, end, false, false);
 }
 
 int kvm_tdp_mmu_zap_hva_range(struct kvm *kvm, unsigned long start,
+23-1
arch/x86/kvm/mmu/tdp_mmu.h
···
 hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu);
 void kvm_tdp_mmu_free_root(struct kvm *kvm, struct kvm_mmu_page *root);
 
-bool kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end);
+bool __kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start, gfn_t end,
+				 bool can_yield);
+static inline bool kvm_tdp_mmu_zap_gfn_range(struct kvm *kvm, gfn_t start,
+					     gfn_t end)
+{
+	return __kvm_tdp_mmu_zap_gfn_range(kvm, start, end, true);
+}
+static inline bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
+{
+	gfn_t end = sp->gfn + KVM_PAGES_PER_HPAGE(sp->role.level);
+
+	/*
+	 * Don't allow yielding, as the caller may have a flush pending. Note,
+	 * if mmu_lock is held for write, zapping will never yield in this case,
+	 * but explicitly disallow it for safety. The TDP MMU does not yield
+	 * until it has made forward progress (steps sideways), and when zapping
+	 * a single shadow page that it's guaranteed to see (thus the mmu_lock
+	 * requirement), its "step sideways" will always step beyond the bounds
+	 * of the shadow page's gfn range and stop iterating before yielding.
+	 */
+	lockdep_assert_held_write(&kvm->mmu_lock);
+	return __kvm_tdp_mmu_zap_gfn_range(kvm, sp->gfn, end, false);
+}
 void kvm_tdp_mmu_zap_all(struct kvm *kvm);
 
 int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
+23-5
arch/x86/kvm/svm/nested.c
···
 	return true;
 }
 
-static bool nested_vmcb_checks(struct vcpu_svm *svm, struct vmcb *vmcb12)
+static bool nested_vmcb_check_save(struct vcpu_svm *svm, struct vmcb *vmcb12)
 {
 	struct kvm_vcpu *vcpu = &svm->vcpu;
 	bool vmcb12_lma;
 
+	/*
+	 * FIXME: these should be done after copying the fields,
+	 * to avoid TOC/TOU races. For these save area checks
+	 * the possible damage is limited since kvm_set_cr0 and
+	 * kvm_set_cr4 handle failure; EFER_SVME is an exception
+	 * so it is force-set later in nested_prepare_vmcb_save.
+	 */
 	if ((vmcb12->save.efer & EFER_SVME) == 0)
 		return false;
 
···
 	if (!kvm_is_valid_cr4(&svm->vcpu, vmcb12->save.cr4))
 		return false;
 
-	return nested_vmcb_check_controls(&vmcb12->control);
+	return true;
 }
 
 static void load_nested_vmcb_control(struct vcpu_svm *svm,
···
 	svm->vmcb->save.gdtr = vmcb12->save.gdtr;
 	svm->vmcb->save.idtr = vmcb12->save.idtr;
 	kvm_set_rflags(&svm->vcpu, vmcb12->save.rflags | X86_EFLAGS_FIXED);
-	svm_set_efer(&svm->vcpu, vmcb12->save.efer);
+
+	/*
+	 * Force-set EFER_SVME even though it is checked earlier on the
+	 * VMCB12, because the guest can flip the bit between the check
+	 * and now. Clearing EFER_SVME would call svm_free_nested.
+	 */
+	svm_set_efer(&svm->vcpu, vmcb12->save.efer | EFER_SVME);
+
 	svm_set_cr0(&svm->vcpu, vmcb12->save.cr0);
 	svm_set_cr4(&svm->vcpu, vmcb12->save.cr4);
 	svm->vmcb->save.cr2 = svm->vcpu.arch.cr2 = vmcb12->save.cr2;
···
 
 
 	svm->nested.vmcb12_gpa = vmcb12_gpa;
-	load_nested_vmcb_control(svm, &vmcb12->control);
 	nested_prepare_vmcb_control(svm);
 	nested_prepare_vmcb_save(svm, vmcb12);
 
···
 	if (WARN_ON_ONCE(!svm->nested.initialized))
 		return -EINVAL;
 
-	if (!nested_vmcb_checks(svm, vmcb12)) {
+	load_nested_vmcb_control(svm, &vmcb12->control);
+
+	if (!nested_vmcb_check_save(svm, vmcb12) ||
+	    !nested_vmcb_check_controls(&svm->nested.ctl)) {
 		vmcb12->control.exit_code = SVM_EXIT_ERR;
 		vmcb12->control.exit_code_hi = 0;
 		vmcb12->control.exit_info_1 = 0;
···
 	 * TODO: validate reserved bits for all saved state.
 	 */
 	if (!(save->cr0 & X86_CR0_PG))
+		goto out_free;
+	if (!(save->efer & EFER_SVME))
 		goto out_free;
 
 	/*
+8
arch/x86/kvm/svm/pmu.c
···9898static inline struct kvm_pmc *get_gp_pmc_amd(struct kvm_pmu *pmu, u32 msr,9999 enum pmu_type type)100100{101101+ struct kvm_vcpu *vcpu = pmu_to_vcpu(pmu);102102+101103 switch (msr) {102104 case MSR_F15H_PERF_CTL0:103105 case MSR_F15H_PERF_CTL1:···107105 case MSR_F15H_PERF_CTL3:108106 case MSR_F15H_PERF_CTL4:109107 case MSR_F15H_PERF_CTL5:108108+ if (!guest_cpuid_has(vcpu, X86_FEATURE_PERFCTR_CORE))109109+ return NULL;110110+ fallthrough;110111 case MSR_K7_EVNTSEL0 ... MSR_K7_EVNTSEL3:111112 if (type != PMU_TYPE_EVNTSEL)112113 return NULL;···120115 case MSR_F15H_PERF_CTR3:121116 case MSR_F15H_PERF_CTR4:122117 case MSR_F15H_PERF_CTR5:118118+ if (!guest_cpuid_has(vcpu, X86_FEATURE_PERFCTR_CORE))119119+ return NULL;120120+ fallthrough;123121 case MSR_K7_PERFCTR0 ... MSR_K7_PERFCTR3:124122 if (type != PMU_TYPE_COUNTER)125123 return NULL;
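The pmu.c change gates the extended `MSR_F15H_PERF_CTLn`/`PERF_CTRn` cases on the guest's PERFCTR_CORE CPUID bit and then falls through to the shared legacy handling. A condensed sketch of that feature-gated switch, with hypothetical stand-ins for the KVM helpers (only one MSR per group, and a toy slot table):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

enum pmu_type { PMU_TYPE_COUNTER, PMU_TYPE_EVNTSEL };

#define MSR_K7_EVNTSEL0    0xc0010000u
#define MSR_F15H_PERF_CTL0 0xc0010200u

/* Stand-in for guest_cpuid_has(vcpu, X86_FEATURE_PERFCTR_CORE). */
static bool guest_has_perfctr_core;

static int pmc_slot[8];

/*
 * Return a counter slot for an MSR, or NULL when the MSR does not exist
 * for this guest: the extended F15H MSR falls through to the legacy K7
 * handling only when PERFCTR_CORE is exposed (both map to slot 0 here,
 * which is a simplification of this sketch).
 */
static int *get_gp_pmc(uint32_t msr, enum pmu_type type)
{
    switch (msr) {
    case MSR_F15H_PERF_CTL0:
        if (!guest_has_perfctr_core)
            return NULL;
        /* fallthrough */
    case MSR_K7_EVNTSEL0:
        if (type != PMU_TYPE_EVNTSEL)
            return NULL;
        return &pmc_slot[msr & 7];
    default:
        return NULL;
    }
}
```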
+36-21
arch/x86/kvm/x86.c
···271271 * When called, it means the previous get/set msr reached an invalid msr.272272 * Return true if we want to ignore/silent this failed msr access.273273 */274274-static bool kvm_msr_ignored_check(struct kvm_vcpu *vcpu, u32 msr,275275- u64 data, bool write)274274+static bool kvm_msr_ignored_check(u32 msr, u64 data, bool write)276275{277276 const char *op = write ? "wrmsr" : "rdmsr";278277···14441445 if (r == KVM_MSR_RET_INVALID) {14451446 /* Unconditionally clear the output for simplicity */14461447 *data = 0;14471447- if (kvm_msr_ignored_check(vcpu, index, 0, false))14481448+ if (kvm_msr_ignored_check(index, 0, false))14481449 r = 0;14491450 }14501451···16191620 int ret = __kvm_set_msr(vcpu, index, data, host_initiated);1620162116211622 if (ret == KVM_MSR_RET_INVALID)16221622- if (kvm_msr_ignored_check(vcpu, index, data, true))16231623+ if (kvm_msr_ignored_check(index, data, true))16231624 ret = 0;1624162516251626 return ret;···16571658 if (ret == KVM_MSR_RET_INVALID) {16581659 /* Unconditionally clear *data for simplicity */16591660 *data = 0;16601660- if (kvm_msr_ignored_check(vcpu, index, 0, false))16611661+ if (kvm_msr_ignored_check(index, 0, false))16611662 ret = 0;16621663 }16631664···23282329 kvm_vcpu_write_tsc_offset(vcpu, offset);23292330 raw_spin_unlock_irqrestore(&kvm->arch.tsc_write_lock, flags);2330233123312331- spin_lock(&kvm->arch.pvclock_gtod_sync_lock);23322332+ spin_lock_irqsave(&kvm->arch.pvclock_gtod_sync_lock, flags);23322333 if (!matched) {23332334 kvm->arch.nr_vcpus_matched_tsc = 0;23342335 } else if (!already_matched) {···23362337 }2337233823382339 kvm_track_tsc_matching(vcpu);23392339- spin_unlock(&kvm->arch.pvclock_gtod_sync_lock);23402340+ spin_unlock_irqrestore(&kvm->arch.pvclock_gtod_sync_lock, flags);23402341}2341234223422343static inline void adjust_tsc_offset_guest(struct kvm_vcpu *vcpu,···25582559 int i;25592560 struct kvm_vcpu *vcpu;25602561 struct kvm_arch *ka = &kvm->arch;25622562+ unsigned long flags;2561256325622564 
kvm_hv_invalidate_tsc_page(kvm);2563256525642564- spin_lock(&ka->pvclock_gtod_sync_lock);25652566 kvm_make_mclock_inprogress_request(kvm);25672567+25662568 /* no guest entries from this point */25692569+ spin_lock_irqsave(&ka->pvclock_gtod_sync_lock, flags);25672570 pvclock_update_vm_gtod_copy(kvm);25712571+ spin_unlock_irqrestore(&ka->pvclock_gtod_sync_lock, flags);2568257225692573 kvm_for_each_vcpu(i, vcpu, kvm)25702574 kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);···25752573 /* guest entries allowed */25762574 kvm_for_each_vcpu(i, vcpu, kvm)25772575 kvm_clear_request(KVM_REQ_MCLOCK_INPROGRESS, vcpu);25782578-25792579- spin_unlock(&ka->pvclock_gtod_sync_lock);25802576#endif25812577}25822578···25822582{25832583 struct kvm_arch *ka = &kvm->arch;25842584 struct pvclock_vcpu_time_info hv_clock;25852585+ unsigned long flags;25852586 u64 ret;2586258725872587- spin_lock(&ka->pvclock_gtod_sync_lock);25882588+ spin_lock_irqsave(&ka->pvclock_gtod_sync_lock, flags);25882589 if (!ka->use_master_clock) {25892589- spin_unlock(&ka->pvclock_gtod_sync_lock);25902590+ spin_unlock_irqrestore(&ka->pvclock_gtod_sync_lock, flags);25902591 return get_kvmclock_base_ns() + ka->kvmclock_offset;25912592 }2592259325932594 hv_clock.tsc_timestamp = ka->master_cycle_now;25942595 hv_clock.system_time = ka->master_kernel_ns + ka->kvmclock_offset;25952595- spin_unlock(&ka->pvclock_gtod_sync_lock);25962596+ spin_unlock_irqrestore(&ka->pvclock_gtod_sync_lock, flags);2596259725972598 /* both __this_cpu_read() and rdtsc() should be on the same cpu */25982599 get_cpu();···26872686 * If the host uses TSC clock, then passthrough TSC as stable26882687 * to the guest.26892688 */26902690- spin_lock(&ka->pvclock_gtod_sync_lock);26892689+ spin_lock_irqsave(&ka->pvclock_gtod_sync_lock, flags);26912690 use_master_clock = ka->use_master_clock;26922691 if (use_master_clock) {26932692 host_tsc = ka->master_cycle_now;26942693 kernel_ns = ka->master_kernel_ns;26952694 }26962696- 
spin_unlock(&ka->pvclock_gtod_sync_lock);26952695+ spin_unlock_irqrestore(&ka->pvclock_gtod_sync_lock, flags);2697269626982697 /* Keep irq disabled to prevent changes to the clock */26992698 local_irq_save(flags);···57275726 }57285727#endif57295728 case KVM_SET_CLOCK: {57295729+ struct kvm_arch *ka = &kvm->arch;57305730 struct kvm_clock_data user_ns;57315731 u64 now_ns;57325732···57465744 * pvclock_update_vm_gtod_copy().57475745 */57485746 kvm_gen_update_masterclock(kvm);57495749- now_ns = get_kvmclock_ns(kvm);57505750- kvm->arch.kvmclock_offset += user_ns.clock - now_ns;57475747+57485748+ /*57495749+ * This pairs with kvm_guest_time_update(): when masterclock is57505750+ * in use, we use master_kernel_ns + kvmclock_offset to set57515751+ * unsigned 'system_time' so if we use get_kvmclock_ns() (which57525752+ * is slightly ahead) here we risk going negative on unsigned57535753+ * 'system_time' when 'user_ns.clock' is very small.57545754+ */57555755+ spin_lock_irq(&ka->pvclock_gtod_sync_lock);57565756+ if (kvm->arch.use_master_clock)57575757+ now_ns = ka->master_kernel_ns;57585758+ else57595759+ now_ns = get_kvmclock_base_ns();57605760+ ka->kvmclock_offset = user_ns.clock - now_ns;57615761+ spin_unlock_irq(&ka->pvclock_gtod_sync_lock);57625762+57515763 kvm_make_all_cpus_request(kvm, KVM_REQ_CLOCK_UPDATE);57525764 break;57535765 }···77407724 struct kvm *kvm;77417725 struct kvm_vcpu *vcpu;77427726 int cpu;77277727+ unsigned long flags;7743772877447729 mutex_lock(&kvm_lock);77457730 list_for_each_entry(kvm, &vm_list, vm_list)···77567739 list_for_each_entry(kvm, &vm_list, vm_list) {77577740 struct kvm_arch *ka = &kvm->arch;7758774177597759- spin_lock(&ka->pvclock_gtod_sync_lock);77607760-77427742+ spin_lock_irqsave(&ka->pvclock_gtod_sync_lock, flags);77617743 pvclock_update_vm_gtod_copy(kvm);77447744+ spin_unlock_irqrestore(&ka->pvclock_gtod_sync_lock, flags);7762774577637746 kvm_for_each_vcpu(cpu, vcpu, kvm)77647747 kvm_make_request(KVM_REQ_CLOCK_UPDATE, 
vcpu);7765774877667749 kvm_for_each_vcpu(cpu, vcpu, kvm)77677750 kvm_clear_request(KVM_REQ_MCLOCK_INPROGRESS, vcpu);77687768-77697769- spin_unlock(&ka->pvclock_gtod_sync_lock);77707751 }77717752 mutex_unlock(&kvm_lock);77727753}
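The KVM_SET_CLOCK hunk replaces `kvmclock_offset += user_ns.clock - get_kvmclock_ns(kvm)` with a direct `user_ns.clock - master_kernel_ns` computation under the lock, because `get_kvmclock_ns()` is slightly ahead of `master_kernel_ns` and the difference can push the unsigned guest `system_time` negative. A sketch of the offset arithmetic (names are illustrative; the point is keeping the offset signed and anchored to the same timestamp that later reconstructs `system_time`):

```c
#include <stdint.h>

/*
 * Compute the clock offset so that master_kernel_ns + offset ==
 * user_clock. The offset is signed: with a tiny user_clock it is
 * negative, and it must be applied to the *same* base timestamp it
 * was derived from, or an unsigned sum against a later (larger)
 * timestamp can wrap below zero.
 */
static int64_t clock_offset(uint64_t master_kernel_ns, uint64_t user_clock)
{
    return (int64_t)(user_clock - master_kernel_ns);
}

/* Reconstruct the guest-visible time from the same base. */
static uint64_t guest_system_time(uint64_t master_kernel_ns, int64_t offset)
{
    return master_kernel_ns + (uint64_t)offset;
}
```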
-1
arch/x86/kvm/x86.h
···250250void kvm_write_wall_clock(struct kvm *kvm, gpa_t wall_clock, int sec_hi_ofs);251251void kvm_inject_realmode_interrupt(struct kvm_vcpu *vcpu, int irq, int inc_eip);252252253253-void kvm_write_tsc(struct kvm_vcpu *vcpu, struct msr_data *msr);254253u64 get_kvmclock_ns(struct kvm *kvm);255254256255int kvm_read_guest_virt(struct kvm_vcpu *vcpu,
+1-1
arch/x86/mm/mem_encrypt.c
···262262 if (pgprot_val(old_prot) == pgprot_val(new_prot))263263 return;264264265265- pa = pfn << page_level_shift(level);265265+ pa = pfn << PAGE_SHIFT;266266 size = page_level_size(level);267267268268 /*
+25-6
arch/x86/net/bpf_jit_comp.c
···19361936 * add rsp, 8 // skip eth_type_trans's frame19371937 * ret // return to its caller19381938 */19391939-int arch_prepare_bpf_trampoline(void *image, void *image_end,19391939+int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *image_end,19401940 const struct btf_func_model *m, u32 flags,19411941 struct bpf_tramp_progs *tprogs,19421942 void *orig_call)···1975197519761976 save_regs(m, &prog, nr_args, stack_size);1977197719781978+ if (flags & BPF_TRAMP_F_CALL_ORIG) {19791979+ /* arg1: mov rdi, im */19801980+ emit_mov_imm64(&prog, BPF_REG_1, (long) im >> 32, (u32) (long) im);19811981+ if (emit_call(&prog, __bpf_tramp_enter, prog)) {19821982+ ret = -EINVAL;19831983+ goto cleanup;19841984+ }19851985+ }19861986+19781987 if (fentry->nr_progs)19791988 if (invoke_bpf(m, &prog, fentry, stack_size))19801989 return -EINVAL;···20021993 }2003199420041995 if (flags & BPF_TRAMP_F_CALL_ORIG) {20052005- if (fentry->nr_progs || fmod_ret->nr_progs)20062006- restore_regs(m, &prog, nr_args, stack_size);19961996+ restore_regs(m, &prog, nr_args, stack_size);2007199720081998 /* call original function */20091999 if (emit_call(&prog, orig_call, prog)) {···20112003 }20122004 /* remember return value in a stack for bpf prog to access */20132005 emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -8);20062006+ im->ip_after_call = prog;20072007+ memcpy(prog, ideal_nops[NOP_ATOMIC5], X86_PATCH_SIZE);20082008+ prog += X86_PATCH_SIZE;20142009 }2015201020162011 if (fmod_ret->nr_progs) {···20442033 * the return value is only updated on the stack and still needs to be20452034 * restored to R0.20462035 */20472047- if (flags & BPF_TRAMP_F_CALL_ORIG)20362036+ if (flags & BPF_TRAMP_F_CALL_ORIG) {20372037+ im->ip_epilogue = prog;20382038+ /* arg1: mov rdi, im */20392039+ emit_mov_imm64(&prog, BPF_REG_1, (long) im >> 32, (u32) (long) im);20402040+ if (emit_call(&prog, __bpf_tramp_exit, prog)) {20412041+ ret = -EINVAL;20422042+ goto cleanup;20432043+ }20482044 /* restore original 
return value back into RAX */20492045 emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, -8);20462046+ }2050204720512048 EMIT1(0x5B); /* pop rbx */20522049 EMIT1(0xC9); /* leave */···22442225 padding = true;22452226 goto skip_init_addrs;22462227 }22472247- addrs = kmalloc_array(prog->len + 1, sizeof(*addrs), GFP_KERNEL);22282228+ addrs = kvmalloc_array(prog->len + 1, sizeof(*addrs), GFP_KERNEL);22482229 if (!addrs) {22492230 prog = orig_prog;22502231 goto out_addrs;···23362317 if (image)23372318 bpf_prog_fill_jited_linfo(prog, addrs + 1);23382319out_addrs:23392339- kfree(addrs);23202320+ kvfree(addrs);23402321 kfree(jit_data);23412322 prog->aux->jit_data = NULL;23422323 }
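The trampoline loads the image pointer with `emit_mov_imm64(&prog, BPF_REG_1, (long) im >> 32, (u32) (long) im)`, i.e. the 64-bit pointer split into a high and a low 32-bit half for the two-part immediate encoding. A small sketch of that split-and-reassemble, matching the argument order the JIT uses:

```c
#include <stdint.h>

/*
 * Split a 64-bit immediate the way the x86 JIT passes it to
 * emit_mov_imm64(): a high half and a truncated low half.
 */
static void split_imm64(uint64_t imm, uint32_t *hi, uint32_t *lo)
{
    *hi = (uint32_t)(imm >> 32);
    *lo = (uint32_t)imm;
}

/* Reassemble the value as the emitted mov sequence effectively does. */
static uint64_t join_imm64(uint32_t hi, uint32_t lo)
{
    return ((uint64_t)hi << 32) | lo;
}
```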
···5959} xen_remap_buf __initdata __aligned(PAGE_SIZE);6060static unsigned long xen_remap_mfn __initdata = INVALID_P2M_ENTRY;61616262+/*6363+ * The maximum amount of extra memory compared to the base size. The6464+ * main scaling factor is the size of struct page. At extreme ratios6565+ * of base:extra, all the base memory can be filled with page6666+ * structures for the extra memory, leaving no space for anything6767+ * else.6868+ *6969+ * 10x seems like a reasonable balance between scaling flexibility and7070+ * leaving a practically usable system.7171+ */7272+#define EXTRA_MEM_RATIO (10)7373+6274static bool xen_512gb_limit __initdata = IS_ENABLED(CONFIG_XEN_512GB);63756476static void __init xen_parse_512gb(void)···790778 extra_pages += max_pages - max_pfn;791779792780 /*793793- * Clamp the amount of extra memory to a XEN_EXTRA_MEM_RATIO781781+ * Clamp the amount of extra memory to a EXTRA_MEM_RATIO794782 * factor the base size.795783 *796784 * Make sure we have no memory above max_pages, as this area797785 * isn't handled by the p2m management.798786 */799799- extra_pages = min3(XEN_EXTRA_MEM_RATIO * min(max_pfn, PFN_DOWN(MAXMEM)),787787+ extra_pages = min3(EXTRA_MEM_RATIO * min(max_pfn, PFN_DOWN(MAXMEM)),800788 extra_pages, max_pages - max_pfn);801789 i = 0;802790 addr = xen_e820_table.entries[0].addr;
+33-31
arch/xtensa/kernel/coprocessor.S
···100100 LOAD_CP_REGS_TAB(7)101101102102/*103103- * coprocessor_flush(struct thread_info*, index)104104- * a2 a3105105- *106106- * Save coprocessor registers for coprocessor 'index'.107107- * The register values are saved to or loaded from the coprocessor area 108108- * inside the task_info structure.109109- *110110- * Note that this function doesn't update the coprocessor_owner information!111111- *112112- */113113-114114-ENTRY(coprocessor_flush)115115-116116- /* reserve 4 bytes on stack to save a0 */117117- abi_entry(4)118118-119119- s32i a0, a1, 0120120- movi a0, .Lsave_cp_regs_jump_table121121- addx8 a3, a3, a0122122- l32i a4, a3, 4123123- l32i a3, a3, 0124124- add a2, a2, a4125125- beqz a3, 1f126126- callx0 a3127127-1: l32i a0, a1, 0128128-129129- abi_ret(4)130130-131131-ENDPROC(coprocessor_flush)132132-133133-/*134103 * Entry condition:135104 *136105 * a0: trashed, original value saved on stack (PT_AREG0)···213244 rfe214245215246ENDPROC(fast_coprocessor)247247+248248+ .text249249+250250+/*251251+ * coprocessor_flush(struct thread_info*, index)252252+ * a2 a3253253+ *254254+ * Save coprocessor registers for coprocessor 'index'.255255+ * The register values are saved to or loaded from the coprocessor area256256+ * inside the task_info structure.257257+ *258258+ * Note that this function doesn't update the coprocessor_owner information!259259+ *260260+ */261261+262262+ENTRY(coprocessor_flush)263263+264264+ /* reserve 4 bytes on stack to save a0 */265265+ abi_entry(4)266266+267267+ s32i a0, a1, 0268268+ movi a0, .Lsave_cp_regs_jump_table269269+ addx8 a3, a3, a0270270+ l32i a4, a3, 4271271+ l32i a3, a3, 0272272+ add a2, a2, a4273273+ beqz a3, 1f274274+ callx0 a3275275+1: l32i a0, a1, 0276276+277277+ abi_ret(4)278278+279279+ENDPROC(coprocessor_flush)216280217281 .data218282
+4-1
arch/xtensa/mm/fault.c
···112112 */113113 fault = handle_mm_fault(vma, address, flags, regs);114114115115- if (fault_signal_pending(fault, regs))115115+ if (fault_signal_pending(fault, regs)) {116116+ if (!user_mode(regs))117117+ goto bad_page_fault;116118 return;119119+ }117120118121 if (unlikely(fault & VM_FAULT_ERROR)) {119122 if (fault & VM_FAULT_OOM)
+19-4
block/bio.c
···277277{278278 struct bio *parent = bio->bi_private;279279280280- if (!parent->bi_status)280280+ if (bio->bi_status && !parent->bi_status)281281 parent->bi_status = bio->bi_status;282282 bio_put(bio);283283 return parent;···949949}950950EXPORT_SYMBOL_GPL(bio_release_pages);951951952952-static int bio_iov_bvec_set(struct bio *bio, struct iov_iter *iter)952952+static void __bio_iov_bvec_set(struct bio *bio, struct iov_iter *iter)953953{954954 WARN_ON_ONCE(bio->bi_max_vecs);955955···959959 bio->bi_iter.bi_size = iter->count;960960 bio_set_flag(bio, BIO_NO_PAGE_REF);961961 bio_set_flag(bio, BIO_CLONED);962962+}962963964964+static int bio_iov_bvec_set(struct bio *bio, struct iov_iter *iter)965965+{966966+ __bio_iov_bvec_set(bio, iter);963967 iov_iter_advance(iter, iter->count);968968+ return 0;969969+}970970+971971+static int bio_iov_bvec_set_append(struct bio *bio, struct iov_iter *iter)972972+{973973+ struct request_queue *q = bio->bi_bdev->bd_disk->queue;974974+ struct iov_iter i = *iter;975975+976976+ iov_iter_truncate(&i, queue_max_zone_append_sectors(q) << 9);977977+ __bio_iov_bvec_set(bio, &i);978978+ iov_iter_advance(iter, i.count);964979 return 0;965980}966981···11091094 int ret = 0;1110109511111096 if (iov_iter_is_bvec(iter)) {11121112- if (WARN_ON_ONCE(bio_op(bio) == REQ_OP_ZONE_APPEND))11131113- return -EINVAL;10971097+ if (bio_op(bio) == REQ_OP_ZONE_APPEND)10981098+ return bio_iov_bvec_set_append(bio, iter);11141099 return bio_iov_bvec_set(bio, iter);11151100 }11161101
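`bio_iov_bvec_set_append()` above works on a *copy* of the iterator, truncates the copy to the zone-append limit, and then advances the caller's iterator by exactly what the copy covered, so the caller can resubmit the remainder. A sketch of that truncate-copy-advance pattern with a hypothetical byte-count iterator (not the real `iov_iter`):

```c
#include <stddef.h>

/* Minimal stand-in for an iov_iter: just a remaining-byte count. */
struct byte_iter {
    size_t count;
};

static void iter_truncate(struct byte_iter *i, size_t max)
{
    if (i->count > max)
        i->count = max;
}

static void iter_advance(struct byte_iter *i, size_t n)
{
    i->count -= n;
}

/*
 * Consume at most 'limit' bytes: operate on a local copy, then advance
 * the caller's iterator by what the copy covered. Returns bytes consumed.
 */
static size_t consume_up_to(struct byte_iter *iter, size_t limit)
{
    struct byte_iter i = *iter;

    iter_truncate(&i, limit);
    iter_advance(iter, i.count);
    return i.count;
}
```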
+8
block/blk-merge.c
···382382 switch (bio_op(rq->bio)) {383383 case REQ_OP_DISCARD:384384 case REQ_OP_SECURE_ERASE:385385+ if (queue_max_discard_segments(rq->q) > 1) {386386+ struct bio *bio = rq->bio;387387+388388+ for_each_bio(bio)389389+ nr_phys_segs++;390390+ return nr_phys_segs;391391+ }392392+ return 1;385393 case REQ_OP_WRITE_ZEROES:386394 return 0;387395 case REQ_OP_WRITE_SAME:
···323323 int err;324324325325 /*326326+ * disk_max_parts() won't be zero, either GENHD_FL_EXT_DEVT is set327327+ * or 'minors' is passed to alloc_disk().328328+ */329329+ if (partno >= disk_max_parts(disk))330330+ return ERR_PTR(-EINVAL);331331+332332+ /*326333 * Partitions are not supported on zoned block devices that are used as327334 * such.328335 */
+1-2
drivers/acpi/acpica/nsaccess.c
···9999 * just create and link the new node(s) here.100100 */101101 new_node =102102- ACPI_ALLOCATE_ZEROED(sizeof(struct acpi_namespace_node));102102+ acpi_ns_create_node(*ACPI_CAST_PTR(u32, init_val->name));103103 if (!new_node) {104104 status = AE_NO_MEMORY;105105 goto unlock_and_exit;106106 }107107108108- ACPI_COPY_NAMESEG(new_node->name.ascii, init_val->name);109108 new_node->descriptor_type = ACPI_DESC_TYPE_NAMED;110109 new_node->type = init_val->type;111110
···470470 char c;471471472472 for (; count-- > 0; (*ppos)++, tmp++) {473473- if (!in_interrupt() && (((count + 1) & 0x1f) == 0))473473+ if (((count + 1) & 0x1f) == 0) {474474 /*475475- * let's be a little nice with other processes476476- * that need some CPU475475+ * charlcd_write() is invoked as a VFS->write() callback476476+ * and as such it is always invoked from preemptible477477+ * context and may sleep.477478 */478478- schedule();479479+ cond_resched();480480+ }479481480482 if (get_user(c, tmp))481483 return -EFAULT;···539537 int count = strlen(s);540538541539 for (; count-- > 0; tmp++) {542542- if (!in_interrupt() && (((count + 1) & 0x1f) == 0))543543- /*544544- * let's be a little nice with other processes545545- * that need some CPU546546- */547547- schedule();540540+ if (((count + 1) & 0x1f) == 0)541541+ cond_resched();548542549543 charlcd_write_char(lcd, *tmp);550544 }
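The charlcd loops yield the CPU once every 32 characters via `((count + 1) & 0x1f) == 0`; the patch only swaps the unconditional `schedule()` (wrongly guarded by `in_interrupt()`) for `cond_resched()`. A sketch that counts how often that cadence fires over a countdown loop:

```c
/*
 * Count how often a countdown loop would call cond_resched() when it
 * yields each time the remaining count plus one is a multiple of 32,
 * mirroring the ((count + 1) & 0x1f) == 0 test in charlcd_write().
 */
static int count_yields(int count)
{
    int yields = 0;

    while (count-- > 0) {
        if (((count + 1) & 0x1f) == 0)
            yields++;
    }
    return yields;
}
```

So a 64-byte write reschedules twice, and writes shorter than 32 bytes never do, keeping the fast path cheap.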
+3
drivers/base/dd.c
···96969797 get_device(dev);98989999+ kfree(dev->p->deferred_probe_reason);100100+ dev->p->deferred_probe_reason = NULL;101101+99102 /*100103 * Drop the mutex while probing each device; the probe path may101104 * manipulate the deferred list
+47-8
drivers/base/power/runtime.c
···305305 return 0;306306}307307308308-static void rpm_put_suppliers(struct device *dev)308308+static void __rpm_put_suppliers(struct device *dev, bool try_to_suspend)309309{310310 struct device_link *link;311311···313313 device_links_read_lock_held()) {314314315315 while (refcount_dec_not_one(&link->rpm_active))316316- pm_runtime_put(link->supplier);316316+ pm_runtime_put_noidle(link->supplier);317317+318318+ if (try_to_suspend)319319+ pm_request_idle(link->supplier);317320 }321321+}322322+323323+static void rpm_put_suppliers(struct device *dev)324324+{325325+ __rpm_put_suppliers(dev, true);326326+}327327+328328+static void rpm_suspend_suppliers(struct device *dev)329329+{330330+ struct device_link *link;331331+ int idx = device_links_read_lock();332332+333333+ list_for_each_entry_rcu(link, &dev->links.suppliers, c_node,334334+ device_links_read_lock_held())335335+ pm_request_idle(link->supplier);336336+337337+ device_links_read_unlock(idx);318338}319339320340/**···364344 idx = device_links_read_lock();365345366346 retval = rpm_get_suppliers(dev);367367- if (retval)347347+ if (retval) {348348+ rpm_put_suppliers(dev);368349 goto fail;350350+ }369351370352 device_links_read_unlock(idx);371353 }···390368 || (dev->power.runtime_status == RPM_RESUMING && retval))) {391369 idx = device_links_read_lock();392370393393- fail:394394- rpm_put_suppliers(dev);371371+ __rpm_put_suppliers(dev, false);395372373373+fail:396374 device_links_read_unlock(idx);397375 }398376···664642 goto out;665643 }666644645645+ if (dev->power.irq_safe)646646+ goto out;647647+667648 /* Maybe the parent is now able to suspend. */668668- if (parent && !parent->power.ignore_children && !dev->power.irq_safe) {649649+ if (parent && !parent->power.ignore_children) {669650 spin_unlock(&dev->power.lock);670651671652 spin_lock(&parent->power.lock);···676651 spin_unlock(&parent->power.lock);677652678653 spin_lock(&dev->power.lock);654654+ }655655+ /* Maybe the suppliers are now able to suspend. 
*/656656+ if (dev->power.links_count > 0) {657657+ spin_unlock_irq(&dev->power.lock);658658+659659+ rpm_suspend_suppliers(dev);660660+661661+ spin_lock_irq(&dev->power.lock);679662 }680663681664 out:···16901657 device_links_read_lock_held())16911658 if (link->flags & DL_FLAG_PM_RUNTIME) {16921659 link->supplier_preactivated = true;16931693- refcount_inc(&link->rpm_active);16941660 pm_runtime_get_sync(link->supplier);16611661+ refcount_inc(&link->rpm_active);16951662 }1696166316971664 device_links_read_unlock(idx);···17041671void pm_runtime_put_suppliers(struct device *dev)17051672{17061673 struct device_link *link;16741674+ unsigned long flags;16751675+ bool put;17071676 int idx;1708167717091678 idx = device_links_read_lock();···17141679 device_links_read_lock_held())17151680 if (link->supplier_preactivated) {17161681 link->supplier_preactivated = false;17171717- if (refcount_dec_not_one(&link->rpm_active))16821682+ spin_lock_irqsave(&dev->power.lock, flags);16831683+ put = pm_runtime_status_suspended(dev) &&16841684+ refcount_dec_not_one(&link->rpm_active);16851685+ spin_unlock_irqrestore(&dev->power.lock, flags);16861686+ if (put)17181687 pm_runtime_put(link->supplier);17191688 }17201689
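The `rpm_put_suppliers()` path drains every extra `rpm_active` reference on a supplier with `while (refcount_dec_not_one(...))`, leaving the biased initial reference in place. A toy model of that drain loop (plain `int` instead of the kernel's saturating `refcount_t`, and no concurrency):

```c
#include <stdbool.h>

/*
 * Model of refcount_dec_not_one(): decrement and return true unless the
 * count is exactly 1, which is left alone (the "biased" reference that
 * this code path never drops). The real refcount_t also saturates and
 * is atomic; this sketch is single-threaded.
 */
static bool refcount_dec_not_one(int *r)
{
    if (*r == 1)
        return false;
    (*r)--;
    return true;
}

/* Drain a supplier's extra runtime-PM references down to the bias. */
static int drop_rpm_active(int *rpm_active)
{
    int drops = 0;

    while (refcount_dec_not_one(rpm_active))
        drops++;
    return drops;
}
```

Each successful decrement pairs with one `pm_runtime_put_noidle()` in the patch; the final idle notification is issued once, afterwards, rather than per reference.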
+21-5
drivers/block/null_blk/main.c
···13691369 }1370137013711371 if (dev->zoned)13721372- cmd->error = null_process_zoned_cmd(cmd, op,13731373- sector, nr_sectors);13721372+ sts = null_process_zoned_cmd(cmd, op, sector, nr_sectors);13741373 else13751375- cmd->error = null_process_cmd(cmd, op, sector, nr_sectors);13741374+ sts = null_process_cmd(cmd, op, sector, nr_sectors);13751375+13761376+ /* Do not overwrite errors (e.g. timeout errors) */13771377+ if (cmd->error == BLK_STS_OK)13781378+ cmd->error = sts;1376137913771380out:13781381 nullb_complete_cmd(cmd);···1454145114551452static enum blk_eh_timer_return null_timeout_rq(struct request *rq, bool res)14561453{14541454+ struct nullb_cmd *cmd = blk_mq_rq_to_pdu(rq);14551455+14571456 pr_info("rq %p timed out\n", rq);14581458- blk_mq_complete_request(rq);14571457+14581458+ /*14591459+ * If the device is marked as blocking (i.e. memory backed or zoned14601460+ * device), the submission path may be blocked waiting for resources14611461+ * and cause real timeouts. For these real timeouts, the submission14621462+ * path will complete the request using blk_mq_complete_request().14631463+ * Only fake timeouts need to execute blk_mq_complete_request() here.14641464+ */14651465+ cmd->error = BLK_STS_TIMEOUT;14661466+ if (cmd->fake_timeout)14671467+ blk_mq_complete_request(rq);14591468 return BLK_EH_DONE;14601469}14611470···14881473 cmd->rq = bd->rq;14891474 cmd->error = BLK_STS_OK;14901475 cmd->nq = nq;14761476+ cmd->fake_timeout = should_timeout_request(bd->rq);1491147714921478 blk_mq_start_request(bd->rq);14931479···15051489 return BLK_STS_OK;15061490 }15071491 }15081508- if (should_timeout_request(bd->rq))14921492+ if (cmd->fake_timeout)15091493 return BLK_STS_OK;1510149415111495 return null_handle_cmd(cmd, sector, nr_sectors, req_op(bd->rq));
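The null_blk change records the processing status in a local `sts` and copies it into `cmd->error` only when no earlier error is pending, so a timeout noted by the timeout handler is not clobbered by a later successful completion. The guard in isolation (enum values abbreviated from the real `blk_status_t` set):

```c
enum blk_status { BLK_STS_OK = 0, BLK_STS_IOERR, BLK_STS_TIMEOUT };

struct cmd {
    enum blk_status error;
};

/*
 * Record a completion status without clobbering an earlier error: a
 * BLK_STS_TIMEOUT set by the timeout handler must survive a later
 * BLK_STS_OK from the normal completion path.
 */
static void record_status(struct cmd *cmd, enum blk_status sts)
{
    if (cmd->error == BLK_STS_OK)
        cmd->error = sts;
}
```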
···267267__ATTR_RO(_name##_frequencies)268268269269/*270270- * show_scaling_available_frequencies - show available normal frequencies for270270+ * scaling_available_frequencies_show - show available normal frequencies for271271 * the specified CPU272272 */273273static ssize_t scaling_available_frequencies_show(struct cpufreq_policy *policy,···279279EXPORT_SYMBOL_GPL(cpufreq_freq_attr_scaling_available_freqs);280280281281/*282282- * show_available_boost_freqs - show available boost frequencies for282282+ * scaling_boost_frequencies_show - show available boost frequencies for283283 * the specified CPU284284 */285285static ssize_t scaling_boost_frequencies_show(struct cpufreq_policy *policy,
···10281028{10291029 struct ttm_resource_manager *man;1030103010311031- /* late 2.6.33 fix IGP hibernate - we need pm ops to do this correct */10321032-#ifndef CONFIG_HIBERNATION10331033- if (adev->flags & AMD_IS_APU) {10341034- /* Useless to evict on IGP chips */10311031+ if (adev->in_s3 && (adev->flags & AMD_IS_APU)) {10321032+ /* No need to evict vram on APUs for suspend to ram */10351033 return 0;10361034 }10371037-#endif1038103510391036 man = ttm_manager_type(&adev->mman.bdev, TTM_PL_VRAM);10401037 return ttm_resource_manager_evict_all(&adev->mman.bdev, man);
+5-5
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
···21972197 uint64_t eaddr;2198219821992199 /* validate the parameters */22002200- if (saddr & AMDGPU_GPU_PAGE_MASK || offset & AMDGPU_GPU_PAGE_MASK ||22012201- size == 0 || size & AMDGPU_GPU_PAGE_MASK)22002200+ if (saddr & ~PAGE_MASK || offset & ~PAGE_MASK ||22012201+ size == 0 || size & ~PAGE_MASK)22022202 return -EINVAL;2203220322042204 /* make sure object fit at this offset */···22632263 int r;2264226422652265 /* validate the parameters */22662266- if (saddr & AMDGPU_GPU_PAGE_MASK || offset & AMDGPU_GPU_PAGE_MASK ||22672267- size == 0 || size & AMDGPU_GPU_PAGE_MASK)22662266+ if (saddr & ~PAGE_MASK || offset & ~PAGE_MASK ||22672267+ size == 0 || size & ~PAGE_MASK)22682268 return -EINVAL;2269226922702270 /* make sure object fit at this offset */···24092409 after->start = eaddr + 1;24102410 after->last = tmp->last;24112411 after->offset = tmp->offset;24122412- after->offset += after->start - tmp->start;24122412+ after->offset += (after->start - tmp->start) << PAGE_SHIFT;24132413 after->flags = tmp->flags;24142414 after->bo_va = tmp->bo_va;24152415 list_add(&after->list, &tmp->bo_va->invalids);
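The amdgpu_vm hunks switch the validity checks from `AMDGPU_GPU_PAGE_MASK` to `~PAGE_MASK`: `addr & ~PAGE_MASK` keeps only the offset bits inside a CPU page, so it is non-zero exactly when the address or size is misaligned. A sketch of that idiom with the usual 4 KiB constants (values assumed for illustration):

```c
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))

/*
 * An address is page aligned iff none of its low offset bits are set;
 * ~PAGE_MASK selects exactly those bits (0xfff for 4 KiB pages).
 */
static bool page_aligned(uint64_t addr)
{
    return (addr & ~PAGE_MASK) == 0;
}
```

Note the related fix in the same hunk: once offsets are tracked in bytes against CPU pages, the split-mapping adjustment must scale by `PAGE_SHIFT` (`(after->start - tmp->start) << PAGE_SHIFT`) rather than adding a page count to a byte offset.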
+8-1
drivers/gpu/drm/amd/amdgpu/dce_v10_0.c
···28972897static int dce_v10_0_suspend(void *handle)28982898{28992899 struct amdgpu_device *adev = (struct amdgpu_device *)handle;29002900+ int r;29012901+29022902+ r = amdgpu_display_suspend_helper(adev);29032903+ if (r)29042904+ return r;2900290529012906 adev->mode_info.bl_level =29022907 amdgpu_atombios_encoder_get_backlight_level_from_reg(adev);···29262921 amdgpu_display_backlight_set_level(adev, adev->mode_info.bl_encoder,29272922 bl_level);29282923 }29242924+ if (ret)29252925+ return ret;2929292629302930- return ret;29272927+ return amdgpu_display_resume_helper(adev);29312928}2932292929332930static bool dce_v10_0_is_idle(void *handle)
+8-1
drivers/gpu/drm/amd/amdgpu/dce_v11_0.c
···30273027static int dce_v11_0_suspend(void *handle)30283028{30293029 struct amdgpu_device *adev = (struct amdgpu_device *)handle;30303030+ int r;30313031+30323032+ r = amdgpu_display_suspend_helper(adev);30333033+ if (r)30343034+ return r;3030303530313036 adev->mode_info.bl_level =30323037 amdgpu_atombios_encoder_get_backlight_level_from_reg(adev);···30563051 amdgpu_display_backlight_set_level(adev, adev->mode_info.bl_encoder,30573052 bl_level);30583053 }30543054+ if (ret)30553055+ return ret;3059305630603060- return ret;30573057+ return amdgpu_display_resume_helper(adev);30613058}3062305930633060static bool dce_v11_0_is_idle(void *handle)
+7-1
drivers/gpu/drm/amd/amdgpu/dce_v6_0.c
···27702770static int dce_v6_0_suspend(void *handle)27712771{27722772 struct amdgpu_device *adev = (struct amdgpu_device *)handle;27732773+ int r;2773277427752775+ r = amdgpu_display_suspend_helper(adev);27762776+ if (r)27772777+ return r;27742778 adev->mode_info.bl_level =27752779 amdgpu_atombios_encoder_get_backlight_level_from_reg(adev);27762780···27982794 amdgpu_display_backlight_set_level(adev, adev->mode_info.bl_encoder,27992795 bl_level);28002796 }27972797+ if (ret)27982798+ return ret;2801279928022802- return ret;28002800+ return amdgpu_display_resume_helper(adev);28032801}2804280228052803static bool dce_v6_0_is_idle(void *handle)
+8-1
drivers/gpu/drm/amd/amdgpu/dce_v8_0.c
···27962796static int dce_v8_0_suspend(void *handle)27972797{27982798 struct amdgpu_device *adev = (struct amdgpu_device *)handle;27992799+ int r;28002800+28012801+ r = amdgpu_display_suspend_helper(adev);28022802+ if (r)28032803+ return r;2799280428002805 adev->mode_info.bl_level =28012806 amdgpu_atombios_encoder_get_backlight_level_from_reg(adev);···28252820 amdgpu_display_backlight_set_level(adev, adev->mode_info.bl_encoder,28262821 bl_level);28272822 }28232823+ if (ret)28242824+ return ret;2828282528292829- return ret;28262826+ return amdgpu_display_resume_helper(adev);28302827}2831282828322829static bool dce_v8_0_is_idle(void *handle)
+14-1
drivers/gpu/drm/amd/amdgpu/dce_virtual.c
···3939#include "dce_v11_0.h"4040#include "dce_virtual.h"4141#include "ivsrcid/ivsrcid_vislands30.h"4242+#include "amdgpu_display.h"42434344#define DCE_VIRTUAL_VBLANK_PERIOD 166666664445···492491493492static int dce_virtual_suspend(void *handle)494493{494494+ struct amdgpu_device *adev = (struct amdgpu_device *)handle;495495+ int r;496496+497497+ r = amdgpu_display_suspend_helper(adev);498498+ if (r)499499+ return r;495500 return dce_virtual_hw_fini(handle);496501}497502498503static int dce_virtual_resume(void *handle)499504{500500- return dce_virtual_hw_init(handle);505505+ struct amdgpu_device *adev = (struct amdgpu_device *)handle;506506+ int r;507507+508508+ r = dce_virtual_hw_init(handle);509509+ if (r)510510+ return r;511511+ return amdgpu_display_resume_helper(adev);501512}502513503514static bool dce_virtual_is_idle(void *handle)
+1-1
drivers/gpu/drm/amd/amdkfd/kfd_dbgdev.c
···155155156156 /* Wait till CP writes sync code: */157157 status = amdkfd_fence_wait_timeout(158158- (unsigned int *) rm_state,158158+ rm_state,159159 QUEUESTATE__ACTIVE, 1500);160160161161 kfd_gtt_sa_free(dbgdev->dev, mem_obj);
···171171 data->registry_data.gfxoff_controlled_by_driver = 1;172172 data->gfxoff_allowed = false;173173 data->counter_gfxoff = 0;174174+ data->registry_data.pcie_dpm_key_disabled = !(hwmgr->feature_mask & PP_PCIE_DPM_MASK);174175}175176176177static int vega20_set_features_platform_caps(struct pp_hwmgr *hwmgr)···883882 /* update the pptable */884883 pp_table->PcieGenSpeed[i] = pcie_gen_arg;885884 pp_table->PcieLaneCount[i] = pcie_width_arg;885885+ }886886+887887+ /* override to the highest if it's disabled from ppfeaturmask */888888+ if (data->registry_data.pcie_dpm_key_disabled) {889889+ for (i = 0; i < NUM_LINK_LEVELS; i++) {890890+ smu_pcie_arg = (i << 16) | (pcie_gen << 8) | pcie_width;891891+ ret = smum_send_msg_to_smc_with_parameter(hwmgr,892892+ PPSMC_MSG_OverridePcieParameters, smu_pcie_arg,893893+ NULL);894894+ PP_ASSERT_WITH_CODE(!ret,895895+ "[OverridePcieParameters] Attempt to override pcie params failed!",896896+ return ret);897897+898898+ pp_table->PcieGenSpeed[i] = pcie_gen;899899+ pp_table->PcieLaneCount[i] = pcie_width;900900+ }901901+ ret = vega20_enable_smc_features(hwmgr,902902+ false,903903+ data->smu_features[GNLD_DPM_LINK].smu_feature_bitmap);904904+ PP_ASSERT_WITH_CODE(!ret,905905+ "Attempt to Disable DPM LINK Failed!",906906+ return ret);907907+ data->smu_features[GNLD_DPM_LINK].enabled = false;908908+ data->smu_features[GNLD_DPM_LINK].supported = false;886909 }887910888911 return 0;
+3-2
drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
···12941294 bool use_baco = !smu->is_apu &&12951295 ((amdgpu_in_reset(adev) &&12961296 (amdgpu_asic_reset_method(adev) == AMD_RESET_METHOD_BACO)) ||12971297- ((adev->in_runpm || adev->in_hibernate) && amdgpu_asic_supports_baco(adev)));12971297+ ((adev->in_runpm || adev->in_s4) && amdgpu_asic_supports_baco(adev)));1298129812991299 /*13001300 * For custom pptable uploading, skip the DPM features···1431143114321432 smu->watermarks_bitmap &= ~(WATERMARKS_LOADED);1433143314341434- if (smu->is_apu)14341434+ /* skip CGPG when in S0ix */14351435+ if (smu->is_apu && !adev->in_s0ix)14351436 smu_set_gfx_cgpg(&adev->smu, false);1436143714371438 return 0;
+5
drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
···384384385385static bool vangogh_is_dpm_running(struct smu_context *smu)386386{387387+ struct amdgpu_device *adev = smu->adev;387388 int ret = 0;388389 uint32_t feature_mask[2];389390 uint64_t feature_enabled;391391+392392+ /* we need to re-init after suspend so return false */393393+ if (adev->in_suspend)394394+ return false;390395391396 ret = smu_cmn_get_enabled_32_bits_mask(smu, feature_mask, 2);392397
···317317 if (!new_plane_state->hw.crtc && !old_plane_state->hw.crtc)318318 return 0;319319320320- new_crtc_state->enabled_planes |= BIT(plane->id);321321-322320 ret = plane->check_plane(new_crtc_state, new_plane_state);323321 if (ret)324322 return ret;323323+324324+ if (fb)325325+ new_crtc_state->enabled_planes |= BIT(plane->id);325326326327 /* FIXME pre-g4x don't work like this */327328 if (new_plane_state->uapi.visible)
+1-3
drivers/gpu/drm/i915/display/intel_dp.c
···36193619{36203620 int ret;3621362136223622- intel_dp_lttpr_init(intel_dp);36233623-36243624- if (drm_dp_read_dpcd_caps(&intel_dp->aux, intel_dp->dpcd))36223622+ if (intel_dp_init_lttpr_and_dprx_caps(intel_dp) < 0)36253623 return false;3626362436273625 /*
+7
drivers/gpu/drm/i915/display/intel_dp_aux.c
···133133 else134134 precharge = 5;135135136136+ /* Max timeout value on G4x-BDW: 1.6ms */136137 if (IS_BROADWELL(dev_priv))137138 timeout = DP_AUX_CH_CTL_TIME_OUT_600us;138139 else···160159 enum phy phy = intel_port_to_phy(i915, dig_port->base.port);161160 u32 ret;162161162162+ /*163163+ * Max timeout values:164164+ * SKL-GLK: 1.6ms165165+ * CNL: 3.2ms166166+ * ICL+: 4ms167167+ */163168 ret = DP_AUX_CH_CTL_SEND_BUSY |164169 DP_AUX_CH_CTL_DONE |165170 DP_AUX_CH_CTL_INTERRUPT |
···3434 link_status[3], link_status[4], link_status[5]);3535}36363737+static void intel_dp_reset_lttpr_common_caps(struct intel_dp *intel_dp)3838+{3939+ memset(&intel_dp->lttpr_common_caps, 0, sizeof(intel_dp->lttpr_common_caps));4040+}4141+3742static void intel_dp_reset_lttpr_count(struct intel_dp *intel_dp)3843{3944 intel_dp->lttpr_common_caps[DP_PHY_REPEATER_CNT -···86818782static bool intel_dp_read_lttpr_common_caps(struct intel_dp *intel_dp)8883{8989- if (drm_dp_read_lttpr_common_caps(&intel_dp->aux,9090- intel_dp->lttpr_common_caps) < 0) {9191- memset(intel_dp->lttpr_common_caps, 0,9292- sizeof(intel_dp->lttpr_common_caps));8484+ struct drm_i915_private *i915 = dp_to_i915(intel_dp);8585+8686+ if (intel_dp_is_edp(intel_dp))9387 return false;9494- }8888+8989+ /*9090+ * Detecting LTTPRs must be avoided on platforms with an AUX timeout9191+ * period < 3.2ms. (see DP Standard v2.0, 2.11.2, 3.6.6.1).9292+ */9393+ if (INTEL_GEN(i915) < 10)9494+ return false;9595+9696+ if (drm_dp_read_lttpr_common_caps(&intel_dp->aux,9797+ intel_dp->lttpr_common_caps) < 0)9898+ goto reset_caps;959996100 drm_dbg_kms(&dp_to_i915(intel_dp)->drm,97101 "LTTPR common capabilities: %*ph\n",98102 (int)sizeof(intel_dp->lttpr_common_caps),99103 intel_dp->lttpr_common_caps);100104105105+ /* The minimum value of LT_TUNABLE_PHY_REPEATER_FIELD_DATA_STRUCTURE_REV is 1.4 */106106+ if (intel_dp->lttpr_common_caps[0] < 0x14)107107+ goto reset_caps;108108+101109 return true;110110+111111+reset_caps:112112+ intel_dp_reset_lttpr_common_caps(intel_dp);113113+ return false;102114}103115104116static bool···128106}129107130108/**131131- * intel_dp_lttpr_init - detect LTTPRs and init the LTTPR link training mode109109+ * intel_dp_init_lttpr_and_dprx_caps - detect LTTPR and DPRX caps, init the LTTPR link training mode132110 * @intel_dp: Intel DP struct133111 *134134- * Read the LTTPR common capabilities, switch to non-transparent link training135135- * mode if any is detected and read the PHY capabilities for 
all detected136136- * LTTPRs. In case of an LTTPR detection error or if the number of112112+ * Read the LTTPR common and DPRX capabilities and switch to non-transparent113113+ * link training mode if any is detected and read the PHY capabilities for all114114+ * detected LTTPRs. In case of an LTTPR detection error or if the number of137115 * LTTPRs is more than is supported (8), fall back to the no-LTTPR,138116 * transparent mode link training mode.139117 *140118 * Returns:141141- * >0 if LTTPRs were detected and the non-transparent LT mode was set119119+ * >0 if LTTPRs were detected and the non-transparent LT mode was set. The120120+ * DPRX capabilities are read out.142121 * 0 if no LTTPRs or more than 8 LTTPRs were detected or in case of a143143- * detection failure and the transparent LT mode was set122122+ * detection failure and the transparent LT mode was set. The DPRX123123+ * capabilities are read out.124124+ * <0 Reading out the DPRX capabilities failed.144125 */145145-int intel_dp_lttpr_init(struct intel_dp *intel_dp)126126+int intel_dp_init_lttpr_and_dprx_caps(struct intel_dp *intel_dp)146127{147128 int lttpr_count;148129 bool ret;149130 int i;150131151151- if (intel_dp_is_edp(intel_dp))152152- return 0;153153-154132 ret = intel_dp_read_lttpr_common_caps(intel_dp);133133+134134+ /* The DPTX shall read the DPRX caps after LTTPR detection. 
*/135135+ if (drm_dp_read_dpcd_caps(&intel_dp->aux, intel_dp->dpcd)) {136136+ intel_dp_reset_lttpr_common_caps(intel_dp);137137+ return -EIO;138138+ }139139+155140 if (!ret)156141 return 0;142142+143143+ /*144144+ * The 0xF0000-0xF02FF range is only valid if the DPCD revision is145145+ * at least 1.4.146146+ */147147+ if (intel_dp->dpcd[DP_DPCD_REV] < 0x14) {148148+ intel_dp_reset_lttpr_common_caps(intel_dp);149149+ return 0;150150+ }157151158152 lttpr_count = drm_dp_lttpr_count(intel_dp->lttpr_common_caps);159153 /*···210172211173 return lttpr_count;212174}213213-EXPORT_SYMBOL(intel_dp_lttpr_init);175175+EXPORT_SYMBOL(intel_dp_init_lttpr_and_dprx_caps);214176215177static u8 dp_voltage_max(u8 preemph)216178{···845807 * TODO: Reiniting LTTPRs here won't be needed once proper connector846808 * HW state readout is added.847809 */848848- int lttpr_count = intel_dp_lttpr_init(intel_dp);810810+ int lttpr_count = intel_dp_init_lttpr_and_dprx_caps(intel_dp);811811+812812+ if (lttpr_count < 0)813813+ return;849814850815 if (!intel_dp_link_train_all_phys(intel_dp, crtc_state, lttpr_count))851816 intel_dp_schedule_fallback_link_training(intel_dp, crtc_state);
···316316 WRITE_ONCE(fence->vma, NULL);317317 vma->fence = NULL;318318319319- with_intel_runtime_pm_if_in_use(fence_to_uncore(fence)->rpm, wakeref)319319+ /*320320+ * Skip the write to HW if and only if the device is currently321321+ * suspended.322322+ *323323+ * If the driver does not currently hold a wakeref (if_in_use == 0),324324+ * the device may currently be runtime suspended, or it may be woken325325+ * up before the suspend takes place. If the device is not suspended326326+ * (powered down) and we skip clearing the fence register, the HW is327327+ * left in an undefined state where we may end up with multiple328328+ * registers overlapping.329329+ */330330+ with_intel_runtime_pm_if_active(fence_to_uncore(fence)->rpm, wakeref)320331 fence_write(fence);321332}322333
+24-5
drivers/gpu/drm/i915/intel_runtime_pm.c
···412412}413413414414/**415415- * intel_runtime_pm_get_if_in_use - grab a runtime pm reference if device in use415415+ * __intel_runtime_pm_get_if_active - grab a runtime pm reference if device is active416416 * @rpm: the intel_runtime_pm structure417417+ * @ignore_usecount: get a ref even if dev->power.usage_count is 0417418 *418419 * This function grabs a device-level runtime pm reference if the device is419419- * already in use and ensures that it is powered up. It is illegal to try420420- * and access the HW should intel_runtime_pm_get_if_in_use() report failure.420420+ * already active and ensures that it is powered up. It is illegal to try421421+ * and access the HW should intel_runtime_pm_get_if_active() report failure.422422+ *423423+ * If @ignore_usecount=true, a reference will be acquired even if there is no424424+ * user requiring the device to be powered up (dev->power.usage_count == 0).425425+ * If the function returns false in this case then it's guaranteed that the426426+ * device's runtime suspend hook has been called already or that it will be427427+ * called (and hence it's also guaranteed that the device's runtime resume428428+ * hook will be called eventually).421429 *422430 * Any runtime pm reference obtained by this function must have a symmetric423431 * call to intel_runtime_pm_put() to release the reference again.···433425 * Returns: the wakeref cookie to pass to intel_runtime_pm_put(), evaluates434426 * as True if the wakeref was acquired, or False otherwise.435427 */436436-intel_wakeref_t intel_runtime_pm_get_if_in_use(struct intel_runtime_pm *rpm)428428+static intel_wakeref_t __intel_runtime_pm_get_if_active(struct intel_runtime_pm *rpm,429429+ bool ignore_usecount)437430{438431 if (IS_ENABLED(CONFIG_PM)) {439432 /*···443434 * function, since the power state is undefined. 
This applies444435 * atm to the late/early system suspend/resume handlers.445436 */446446- if (pm_runtime_get_if_in_use(rpm->kdev) <= 0)437437+ if (pm_runtime_get_if_active(rpm->kdev, ignore_usecount) <= 0)447438 return 0;448439 }449440450441 intel_runtime_pm_acquire(rpm, true);451442452443 return track_intel_runtime_pm_wakeref(rpm);444444+}445445+446446+intel_wakeref_t intel_runtime_pm_get_if_in_use(struct intel_runtime_pm *rpm)447447+{448448+ return __intel_runtime_pm_get_if_active(rpm, false);449449+}450450+451451+intel_wakeref_t intel_runtime_pm_get_if_active(struct intel_runtime_pm *rpm)452452+{453453+ return __intel_runtime_pm_get_if_active(rpm, true);453454}454455455456/**
···215215216216 ret = drmm_mode_config_init(drm);217217 if (ret)218218- return ret;218218+ goto err_kms;219219220220 ret = drm_vblank_init(drm, MAX_CRTC);221221 if (ret)
+11-1
drivers/gpu/drm/imx/imx-ldb.c
···197197 int dual = ldb->ldb_ctrl & LDB_SPLIT_MODE_EN;198198 int mux = drm_of_encoder_active_port_id(imx_ldb_ch->child, encoder);199199200200+ if (mux < 0 || mux >= ARRAY_SIZE(ldb->clk_sel)) {201201+ dev_warn(ldb->dev, "%s: invalid mux %d\n", __func__, mux);202202+ return;203203+ }204204+200205 drm_panel_prepare(imx_ldb_ch->panel);201206202207 if (dual) {···259254 unsigned long di_clk = mode->clock * 1000;260255 int mux = drm_of_encoder_active_port_id(imx_ldb_ch->child, encoder);261256 u32 bus_format = imx_ldb_ch->bus_format;257257+258258+ if (mux < 0 || mux >= ARRAY_SIZE(ldb->clk_sel)) {259259+ dev_warn(ldb->dev, "%s: invalid mux %d\n", __func__, mux);260260+ return;261261+ }262262263263 if (mode->clock > 170000) {264264 dev_warn(ldb->dev,···593583 struct imx_ldb_channel *channel = &imx_ldb->channel[i];594584595585 if (!channel->ldb)596596- break;586586+ continue;597587598588 ret = imx_ldb_register(drm, channel);599589 if (ret)
+1-1
drivers/gpu/drm/msm/adreno/a5xx_power.c
···304304 /* Set up the limits management */305305 if (adreno_is_a530(adreno_gpu))306306 a530_lm_setup(gpu);307307- else307307+ else if (adreno_is_a540(adreno_gpu))308308 a540_lm_setup(gpu);309309310310 /* Set up SP/TP power collapse */
+1-1
drivers/gpu/drm/msm/adreno/a6xx_gmu.c
···339339 else340340 bit = a6xx_gmu_oob_bits[state].ack_new;341341342342- gmu_write(gmu, REG_A6XX_GMU_HOST2GMU_INTR_SET, bit);342342+ gmu_write(gmu, REG_A6XX_GMU_HOST2GMU_INTR_SET, 1 << bit);343343}344344345345/* Enable CPU control of SPTP power collapse */
+72-36
drivers/gpu/drm/msm/adreno/a6xx_gpu.c
···522522 return a6xx_idle(gpu, ring) ? 0 : -EINVAL;523523}524524525525-static void a6xx_ucode_check_version(struct a6xx_gpu *a6xx_gpu,525525+/*526526+ * Check that the microcode version is new enough to include several key527527+ * security fixes. Return true if the ucode is safe.528528+ */529529+static bool a6xx_ucode_check_version(struct a6xx_gpu *a6xx_gpu,526530 struct drm_gem_object *obj)527531{532532+ struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;533533+ struct msm_gpu *gpu = &adreno_gpu->base;528534 u32 *buf = msm_gem_get_vaddr(obj);535535+ bool ret = false;529536530537 if (IS_ERR(buf))531531- return;538538+ return false;532539533540 /*534534- * If the lowest nibble is 0xa that is an indication that this microcode535535- * has been patched. The actual version is in dword [3] but we only care536536- * about the patchlevel which is the lowest nibble of dword [3]537537- *538538- * Otherwise check that the firmware is greater than or equal to 1.90539539- * which was the first version that had this fix built in541541+ * Targets up to a640 (a618, a630 and a640) need to check for a542542+ * microcode version that is patched to support the whereami opcode or543543+ * one that is new enough to include it by default.540544 */541541- if (((buf[0] & 0xf) == 0xa) && (buf[2] & 0xf) >= 1)542542- a6xx_gpu->has_whereami = true;543543- else if ((buf[0] & 0xfff) > 0x190)544544- a6xx_gpu->has_whereami = true;545545+ if (adreno_is_a618(adreno_gpu) || adreno_is_a630(adreno_gpu) ||546546+ adreno_is_a640(adreno_gpu)) {547547+ /*548548+ * If the lowest nibble is 0xa that is an indication that this549549+ * microcode has been patched. 
The actual version is in dword550550+ * [3] but we only care about the patchlevel which is the lowest551551+ * nibble of dword [3]552552+ *553553+ * Otherwise check that the firmware is greater than or equal554554+ * to 1.90 which was the first version that had this fix built555555+ * in556556+ */557557+ if ((((buf[0] & 0xf) == 0xa) && (buf[2] & 0xf) >= 1) ||558558+ (buf[0] & 0xfff) >= 0x190) {559559+ a6xx_gpu->has_whereami = true;560560+ ret = true;561561+ goto out;562562+ }545563564564+ DRM_DEV_ERROR(&gpu->pdev->dev,565565+ "a630 SQE ucode is too old. Have version %x need at least %x\n",566566+ buf[0] & 0xfff, 0x190);567567+ } else {568568+ /*569569+ * a650 tier targets don't need whereami but still need to be570570+ * equal to or newer than 1.95 for other security fixes571571+ */572572+ if (adreno_is_a650(adreno_gpu)) {573573+ if ((buf[0] & 0xfff) >= 0x195) {574574+ ret = true;575575+ goto out;576576+ }577577+578578+ DRM_DEV_ERROR(&gpu->pdev->dev,579579+ "a650 SQE ucode is too old. Have version %x need at least %x\n",580580+ buf[0] & 0xfff, 0x195);581581+ }582582+583583+ /*584584+ * When a660 is added those targets should return true here585585+ * since those have all the critical security fixes built in586586+ * from the start587587+ */588588+ }589589+out:546590 msm_gem_put_vaddr(obj);591591+ return ret;547592}548593549594static int a6xx_ucode_init(struct msm_gpu *gpu)···611566 }612567613568 msm_gem_object_set_name(a6xx_gpu->sqe_bo, "sqefw");614614- a6xx_ucode_check_version(a6xx_gpu, a6xx_gpu->sqe_bo);569569+ if (!a6xx_ucode_check_version(a6xx_gpu, a6xx_gpu->sqe_bo)) {570570+ msm_gem_unpin_iova(a6xx_gpu->sqe_bo, gpu->aspace);571571+ drm_gem_object_put(a6xx_gpu->sqe_bo);572572+573573+ a6xx_gpu->sqe_bo = NULL;574574+ return -EPERM;575575+ }615576 }616577617578 gpu_write64(gpu, REG_A6XX_CP_SQE_INSTR_BASE_LO,···14011350 u32 revn)14021351{14031352 struct opp_table *opp_table;14041404- struct nvmem_cell *cell;14051353 u32 supp_hw = UINT_MAX;14061406- void 
*buf;13541354+ u16 speedbin;13551355+ int ret;1407135614081408- cell = nvmem_cell_get(dev, "speed_bin");14091409- /*14101410- * -ENOENT means that the platform doesn't support speedbin which is14111411- * fine14121412- */14131413- if (PTR_ERR(cell) == -ENOENT)14141414- return 0;14151415- else if (IS_ERR(cell)) {13571357+ ret = nvmem_cell_read_u16(dev, "speed_bin", &speedbin);13581358+ if (ret) {14161359 DRM_DEV_ERROR(dev,14171417- "failed to read speed-bin. Some OPPs may not be supported by hardware");13601360+ "failed to read speed-bin (%d). Some OPPs may not be supported by hardware",13611361+ ret);14181362 goto done;14191363 }13641364+ speedbin = le16_to_cpu(speedbin);1420136514211421- buf = nvmem_cell_read(cell, NULL);14221422- if (IS_ERR(buf)) {14231423- nvmem_cell_put(cell);14241424- DRM_DEV_ERROR(dev,14251425- "failed to read speed-bin. Some OPPs may not be supported by hardware");14261426- goto done;14271427- }14281428-14291429- supp_hw = fuse_to_supp_hw(dev, revn, *((u32 *) buf));14301430-14311431- kfree(buf);14321432- nvmem_cell_put(cell);13661366+ supp_hw = fuse_to_supp_hw(dev, revn, speedbin);1433136714341368done:14351369 opp_table = dev_pm_opp_set_supported_hw(dev, &supp_hw, 1);
+7-5
drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
···4343#define DPU_DEBUGFS_DIR "msm_dpu"4444#define DPU_DEBUGFS_HWMASKNAME "hw_log_mask"45454646+#define MIN_IB_BW 400000000ULL /* Min ib vote 400MB */4747+4648static int dpu_kms_hw_init(struct msm_kms *kms);4749static void _dpu_kms_mmu_destroy(struct dpu_kms *dpu_kms);4850···933931 DPU_DEBUG("REG_DMA is not defined");934932 }935933934934+ if (of_device_is_compatible(dev->dev->of_node, "qcom,sc7180-mdss"))935935+ dpu_kms_parse_data_bus_icc_path(dpu_kms);936936+936937 pm_runtime_get_sync(&dpu_kms->pdev->dev);937938938939 dpu_kms->core_rev = readl_relaxed(dpu_kms->mmio + 0x0);···10361031 }1037103210381033 dpu_vbif_init_memtypes(dpu_kms);10391039-10401040- if (of_device_is_compatible(dev->dev->of_node, "qcom,sc7180-mdss"))10411041- dpu_kms_parse_data_bus_icc_path(dpu_kms);1042103410431035 pm_runtime_put_sync(&dpu_kms->pdev->dev);10441036···1193119111941192 ddev = dpu_kms->dev;1195119311941194+ WARN_ON(!(dpu_kms->num_paths));11961195 /* Min vote of BW is required before turning on AXI clk */11971196 for (i = 0; i < dpu_kms->num_paths; i++)11981198- icc_set_bw(dpu_kms->path[i], 0,11991199- dpu_kms->catalog->perf.min_dram_ib);11971197+ icc_set_bw(dpu_kms->path[i], 0, Bps_to_icc(MIN_IB_BW));1200119812011199 rc = msm_dss_enable_clk(mp->clk_config, mp->num_clk, true);12021200 if (rc) {
+7
drivers/gpu/drm/msm/dp/dp_aux.c
···3232 struct drm_dp_aux dp_aux;3333};34343535+#define MAX_AUX_RETRIES 53636+3537static const char *dp_aux_get_error(u32 aux_error)3638{3739 switch (aux_error) {···379377 ret = dp_aux_cmd_fifo_tx(aux, msg);380378381379 if (ret < 0) {380380+ if (aux->native) {381381+ aux->retry_cnt++;382382+ if (!(aux->retry_cnt % MAX_AUX_RETRIES))383383+ dp_catalog_aux_update_cfg(aux->catalog);384384+ }382385 usleep_range(400, 500); /* at least 400us to next try */383386 goto unlock_exit;384387 }
+1-1
drivers/gpu/drm/msm/dsi/pll/dsi_pll.c
···163163 break;164164 case MSM_DSI_PHY_7NM:165165 case MSM_DSI_PHY_7NM_V4_1:166166- pll = msm_dsi_pll_7nm_init(pdev, id);166166+ pll = msm_dsi_pll_7nm_init(pdev, type, id);167167 break;168168 default:169169 pll = ERR_PTR(-ENXIO);
+4-2
drivers/gpu/drm/msm/dsi/pll/dsi_pll.h
···117117}118118#endif119119#ifdef CONFIG_DRM_MSM_DSI_7NM_PHY120120-struct msm_dsi_pll *msm_dsi_pll_7nm_init(struct platform_device *pdev, int id);120120+struct msm_dsi_pll *msm_dsi_pll_7nm_init(struct platform_device *pdev,121121+ enum msm_dsi_phy_type type, int id);121122#else122123static inline struct msm_dsi_pll *123123-msm_dsi_pll_7nm_init(struct platform_device *pdev, int id)124124+msm_dsi_pll_7nm_init(struct platform_device *pdev,125125+ enum msm_dsi_phy_type type, int id)124126{125127 return ERR_PTR(-ENODEV);126128}
···4545 int ret;46464747 if (fence > fctx->last_fence) {4848- DRM_ERROR("%s: waiting on invalid fence: %u (of %u)\n",4848+ DRM_ERROR_RATELIMITED("%s: waiting on invalid fence: %u (of %u)\n",4949 fctx->name, fence, fctx->last_fence);5050 return -EINVAL;5151 }
+2-6
drivers/gpu/drm/msm/msm_kms.h
···157157 * from the crtc's pending_timer close to end of the frame:158158 */159159 struct mutex commit_lock[MAX_CRTCS];160160- struct lock_class_key commit_lock_keys[MAX_CRTCS];161160 unsigned pending_crtc_mask;162161 struct msm_pending_timer pending_timers[MAX_CRTCS];163162};···166167{167168 unsigned i, ret;168169169169- for (i = 0; i < ARRAY_SIZE(kms->commit_lock); i++) {170170- lockdep_register_key(&kms->commit_lock_keys[i]);171171- __mutex_init(&kms->commit_lock[i], "&kms->commit_lock[i]",172172- &kms->commit_lock_keys[i]);173173- }170170+ for (i = 0; i < ARRAY_SIZE(kms->commit_lock); i++)171171+ mutex_init(&kms->commit_lock[i]);174172175173 kms->funcs = funcs;176174
+12-1
drivers/gpu/drm/nouveau/dispnv50/disp.c
···26932693 else26942694 nouveau_display(dev)->format_modifiers = disp50xx_modifiers;2695269526962696- if (disp->disp->object.oclass >= GK104_DISP) {26962696+ /* FIXME: 256x256 cursors are supported on Kepler, however unlike Maxwell and later26972697+ * generations Kepler requires that we use small pages (4K) for cursor scanout surfaces. The26982698+ * proper fix for this is to teach nouveau to migrate fbs being used for the cursor plane to26992699+ * small page allocations in prepare_fb(). When this is implemented, we should also force27002700+ * large pages (128K) for ovly fbs in order to fix Kepler ovlys.27012701+ * But until then, just limit cursors to 128x128 - which is small enough to avoid ever using27022702+ * large pages.27032703+ */27042704+ if (disp->disp->object.oclass >= GM107_DISP) {26972705 dev->mode_config.cursor_width = 256;26982706 dev->mode_config.cursor_height = 256;27072707+ } else if (disp->disp->object.oclass >= GK104_DISP) {27082708+ dev->mode_config.cursor_width = 128;27092709+ dev->mode_config.cursor_height = 128;26992710 } else {27002711 dev->mode_config.cursor_width = 64;27012712 dev->mode_config.cursor_height = 64;
+6-25
drivers/gpu/drm/rcar-du/rcar_du_encoder.c
···4848static const struct drm_encoder_funcs rcar_du_encoder_funcs = {4949};50505151-static void rcar_du_encoder_release(struct drm_device *dev, void *res)5252-{5353- struct rcar_du_encoder *renc = res;5454-5555- drm_encoder_cleanup(&renc->base);5656- kfree(renc);5757-}5858-5951int rcar_du_encoder_init(struct rcar_du_device *rcdu,6052 enum rcar_du_output output,6153 struct device_node *enc_node)6254{6355 struct rcar_du_encoder *renc;6456 struct drm_bridge *bridge;6565- int ret;66576758 /*6859 * Locate the DRM bridge from the DT node. For the DPAD outputs, if the···92101 return -ENOLINK;93102 }941039595- renc = kzalloc(sizeof(*renc), GFP_KERNEL);9696- if (renc == NULL)9797- return -ENOMEM;9898-9999- renc->output = output;100100-101104 dev_dbg(rcdu->dev, "initializing encoder %pOF for output %u\n",102105 enc_node, output);103106104104- ret = drm_encoder_init(&rcdu->ddev, &renc->base, &rcar_du_encoder_funcs,105105- DRM_MODE_ENCODER_NONE, NULL);106106- if (ret < 0) {107107- kfree(renc);108108- return ret;109109- }107107+ renc = drmm_encoder_alloc(&rcdu->ddev, struct rcar_du_encoder, base,108108+ &rcar_du_encoder_funcs, DRM_MODE_ENCODER_NONE,109109+ NULL);110110+ if (!renc)111111+ return -ENOMEM;110112111111- ret = drmm_add_action_or_reset(&rcdu->ddev, rcar_du_encoder_release,112112- renc);113113- if (ret)114114- return ret;113113+ renc->output = output;115114116115 /*117116 * Attach the bridge to the encoder. The bridge will create the
+13-17
drivers/gpu/drm/tegra/dc.c
···16881688 dev_err(dc->dev,16891689 "failed to set clock rate to %lu Hz\n",16901690 state->pclk);16911691+16921692+ err = clk_set_rate(dc->clk, state->pclk);16931693+ if (err < 0)16941694+ dev_err(dc->dev, "failed to set clock %pC to %lu Hz: %d\n",16951695+ dc->clk, state->pclk, err);16911696 }1692169716931698 DRM_DEBUG_KMS("rate: %lu, div: %u\n", clk_get_rate(dc->clk),···17031698 value = SHIFT_CLK_DIVIDER(state->div) | PIXEL_CLK_DIVIDER_PCD1;17041699 tegra_dc_writel(dc, value, DC_DISP_DISP_CLOCK_CONTROL);17051700 }17061706-17071707- err = clk_set_rate(dc->clk, state->pclk);17081708- if (err < 0)17091709- dev_err(dc->dev, "failed to set clock %pC to %lu Hz: %d\n",17101710- dc->clk, state->pclk, err);17111701}1712170217131703static void tegra_dc_stop(struct tegra_dc *dc)···25012501 * POWER_CONTROL registers during CRTC enabling.25022502 */25032503 if (dc->soc->coupled_pm && dc->pipe == 1) {25042504- u32 flags = DL_FLAG_PM_RUNTIME | DL_FLAG_AUTOREMOVE_CONSUMER;25052505- struct device_link *link;25062506- struct device *partner;25042504+ struct device *companion;25052505+ struct tegra_dc *parent;2507250625082508- partner = driver_find_device(dc->dev->driver, NULL, NULL,25092509- tegra_dc_match_by_pipe);25102510- if (!partner)25072507+ companion = driver_find_device(dc->dev->driver, NULL, (const void *)0,25082508+ tegra_dc_match_by_pipe);25092509+ if (!companion)25112510 return -EPROBE_DEFER;2512251125132513- link = device_link_add(dc->dev, partner, flags);25142514- if (!link) {25152515- dev_err(dc->dev, "failed to link controllers\n");25162516- return -EINVAL;25172517- }25122512+ parent = dev_get_drvdata(companion);25132513+ dc->client.parent = &parent->client;2518251425192519- dev_dbg(dc->dev, "coupled to %s\n", dev_name(partner));25152515+ dev_dbg(dc->dev, "coupled to %s\n", dev_name(companion));25202516 }2521251725222518 return 0;
+7
drivers/gpu/drm/tegra/sor.c
···31153115 * kernel is possible.31163116 */31173117 if (sor->rst) {31183118+ err = pm_runtime_resume_and_get(sor->dev);31193119+ if (err < 0) {31203120+ dev_err(sor->dev, "failed to get runtime PM: %d\n", err);31213121+ return err;31223122+ }31233123+31183124 err = reset_control_acquire(sor->rst);31193125 if (err < 0) {31203126 dev_err(sor->dev, "failed to acquire SOR reset: %d\n",···31543148 }3155314931563150 reset_control_release(sor->rst);31513151+ pm_runtime_put(sor->dev);31573152 }3158315331593154 err = clk_prepare_enable(sor->clk_safe);
+6-4
drivers/gpu/host1x/bus.c
···705705EXPORT_SYMBOL(host1x_driver_unregister);706706707707/**708708- * host1x_client_register() - register a host1x client708708+ * __host1x_client_register() - register a host1x client709709 * @client: host1x client710710+ * @key: lock class key for the client-specific mutex710711 *711712 * Registers a host1x client with each host1x controller instance. Note that712713 * each client will only match their parent host1x controller and will only be···716715 * device and call host1x_device_init(), which will in turn call each client's717716 * &host1x_client_ops.init implementation.718717 */719719-int host1x_client_register(struct host1x_client *client)718718+int __host1x_client_register(struct host1x_client *client,719719+ struct lock_class_key *key)720720{721721 struct host1x *host1x;722722 int err;723723724724 INIT_LIST_HEAD(&client->list);725725- mutex_init(&client->lock);725725+ __mutex_init(&client->lock, "host1x client lock", key);726726 client->usecount = 0;727727728728 mutex_lock(&devices_lock);···744742745743 return 0;746744}747747-EXPORT_SYMBOL(host1x_client_register);745745+EXPORT_SYMBOL(__host1x_client_register);748746749747/**750748 * host1x_client_unregister() - unregister a host1x client
···5353EXPORT_SYMBOL_GPL(icc_bulk_put);54545555/**5656- * icc_bulk_set() - set bandwidth to a set of paths5656+ * icc_bulk_set_bw() - set bandwidth to a set of paths5757 * @num_paths: the number of icc_bulk_data5858 * @paths: the icc_bulk_data table containing the paths and bandwidth5959 *
···15941594 return blk_queue_zoned_model(q) != *zoned_model;15951595}1596159615971597+/*15981598+ * Check the device zoned model based on the target feature flag. If the target15991599+ * has the DM_TARGET_ZONED_HM feature flag set, host-managed zoned devices are16001600+ * also accepted but all devices must have the same zoned model. If the target16011601+ * has the DM_TARGET_MIXED_ZONED_MODEL feature set, the devices can have any16021602+ * zoned model with all zoned devices having the same zone size.16031603+ */15971604static bool dm_table_supports_zoned_model(struct dm_table *t,15981605 enum blk_zoned_model zoned_model)15991606{···16101603 for (i = 0; i < dm_table_get_num_targets(t); i++) {16111604 ti = dm_table_get_target(t, i);1612160516131613- if (zoned_model == BLK_ZONED_HM &&16141614- !dm_target_supports_zoned_hm(ti->type))16151615- return false;16161616-16171617- if (!ti->type->iterate_devices ||16181618- ti->type->iterate_devices(ti, device_not_zoned_model, &zoned_model))16191619- return false;16061606+ if (dm_target_supports_zoned_hm(ti->type)) {16071607+ if (!ti->type->iterate_devices ||16081608+ ti->type->iterate_devices(ti, device_not_zoned_model,16091609+ &zoned_model))16101610+ return false;16111611+ } else if (!dm_target_supports_mixed_zoned_model(ti->type)) {16121612+ if (zoned_model == BLK_ZONED_HM)16131613+ return false;16141614+ }16201615 }1621161616221617 return true;···16301621 struct request_queue *q = bdev_get_queue(dev->bdev);16311622 unsigned int *zone_sectors = data;1632162316241624+ if (!blk_queue_is_zoned(q))16251625+ return 0;16261626+16331627 return blk_queue_zone_sectors(q) != *zone_sectors;16341628}1635162916301630+/*16311631+ * Check consistency of zoned model and zone sectors across all targets. 
For16321632+ * zone sectors, if the destination device is a zoned block device, it shall16331633+ * have the specified zone_sectors.16341634+ */16361635static int validate_hardware_zoned_model(struct dm_table *table,16371636 enum blk_zoned_model zoned_model,16381637 unsigned int zone_sectors)···16591642 return -EINVAL;1660164316611644 if (dm_table_any_dev_attr(table, device_not_matches_zone_sectors, &zone_sectors)) {16621662- DMERR("%s: zone sectors is not consistent across all devices",16451645+ DMERR("%s: zone sectors is not consistent across all zoned devices",16631646 dm_device_name(table->md));16641647 return -EINVAL;16651648 }
···20362036 if (size != dm_get_size(md))20372037 memset(&md->geometry, 0, sizeof(md->geometry));2038203820392039- set_capacity_and_notify(md->disk, size);20392039+ if (!get_capacity(md->disk))20402040+ set_capacity(md->disk, size);20412041+ else20422042+ set_capacity_and_notify(md->disk, size);2040204320412044 dm_table_event_callback(t, event_callback, md);20422045
+4-2
drivers/mfd/intel_quark_i2c_gpio.c
···7272 {}7373};74747575-static const struct resource intel_quark_i2c_res[] = {7575+/* This is used as a placeholder and will be modified at run-time */7676+static struct resource intel_quark_i2c_res[] = {7677 [INTEL_QUARK_IORES_MEM] = {7778 .flags = IORESOURCE_MEM,7879 },···8685 .adr = MFD_ACPI_MATCH_I2C,8786};88878989-static const struct resource intel_quark_gpio_res[] = {8888+/* This is used as a placeholder and will be modified at run-time */8989+static struct resource intel_quark_gpio_res[] = {9090 [INTEL_QUARK_IORES_MEM] = {9191 .flags = IORESOURCE_MEM,9292 },
+7-10
drivers/misc/mei/client.c
···22862286 if (buffer_id == 0)22872287 return -EINVAL;2288228822892289- if (!mei_cl_is_connected(cl))22902290- return -ENODEV;22892289+ if (mei_cl_is_connected(cl))22902290+ return -EPROTO;2291229122922292 if (cl->dma_mapped)22932293 return -EPROTO;···2327232723282328 mutex_unlock(&dev->device_lock);23292329 wait_event_timeout(cl->wait,23302330- cl->dma_mapped ||23312331- cl->status ||23322332- !mei_cl_is_connected(cl),23302330+ cl->dma_mapped || cl->status,23332331 mei_secs_to_jiffies(MEI_CL_CONNECT_TIMEOUT));23342332 mutex_lock(&dev->device_lock);23352333···23742376 return -EOPNOTSUPP;23752377 }2376237823772377- if (!mei_cl_is_connected(cl))23782378- return -ENODEV;23792379+ /* do not allow unmap for connected client */23802380+ if (mei_cl_is_connected(cl))23812381+ return -EPROTO;2379238223802383 if (!cl->dma_mapped)23812384 return -EPROTO;···2404240524052406 mutex_unlock(&dev->device_lock);24062407 wait_event_timeout(cl->wait,24072407- !cl->dma_mapped ||24082408- cl->status ||24092409- !mei_cl_is_connected(cl),24082408+ !cl->dma_mapped || cl->status,24102409 mei_secs_to_jiffies(MEI_CL_CONNECT_TIMEOUT));24112410 mutex_lock(&dev->device_lock);24122411
+19-15
drivers/net/arcnet/com20020-pci.c
···127127 int i, ioaddr, ret;128128 struct resource *r;129129130130+ ret = 0;131131+130132 if (pci_enable_device(pdev))131133 return -EIO;132134···140138 ci = (struct com20020_pci_card_info *)id->driver_data;141139 priv->ci = ci;142140 mm = &ci->misc_map;141141+142142+ pci_set_drvdata(pdev, priv);143143144144 INIT_LIST_HEAD(&priv->list_dev);145145···165161 dev = alloc_arcdev(device);166162 if (!dev) {167163 ret = -ENOMEM;168168- goto out_port;164164+ break;169165 }170166 dev->dev_port = i;171167···182178 pr_err("IO region %xh-%xh already allocated\n",183179 ioaddr, ioaddr + cm->size - 1);184180 ret = -EBUSY;185185- goto out_port;181181+ goto err_free_arcdev;186182 }187183188184 /* Dummy access after Reset···220216 if (arcnet_inb(ioaddr, COM20020_REG_R_STATUS) == 0xFF) {221217 pr_err("IO address %Xh is empty!\n", ioaddr);222218 ret = -EIO;223223- goto out_port;219219+ goto err_free_arcdev;224220 }225221 if (com20020_check(dev)) {226222 ret = -EIO;227227- goto out_port;223223+ goto err_free_arcdev;228224 }229225230226 card = devm_kzalloc(&pdev->dev, sizeof(struct com20020_dev),231227 GFP_KERNEL);232228 if (!card) {233229 ret = -ENOMEM;234234- goto out_port;230230+ goto err_free_arcdev;235231 }236232237233 card->index = i;···257253258254 ret = devm_led_classdev_register(&pdev->dev, &card->tx_led);259255 if (ret)260260- goto out_port;256256+ goto err_free_arcdev;261257262258 ret = devm_led_classdev_register(&pdev->dev, &card->recon_led);263259 if (ret)264264- goto out_port;260260+ goto err_free_arcdev;265261266262 dev_set_drvdata(&dev->dev, card);267263268264 ret = com20020_found(dev, IRQF_SHARED);269265 if (ret)270270- goto out_port;266266+ goto err_free_arcdev;271267272268 devm_arcnet_led_init(dev, dev->dev_id, i);273269274270 list_add(&card->list, &priv->list_dev);271271+ continue;272272+273273+err_free_arcdev:274274+ free_arcdev(dev);275275+ break;275276 }276276-277277- pci_set_drvdata(pdev, priv);278278-279279- return 0;280280-281281-out_port:282282- 
com20020pci_remove(pdev);277277+ if (ret)278278+ com20020pci_remove(pdev);283279 return ret;284280}285281
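The hunk above converts the com20020-pci probe loop to a per-iteration unwind: on failure it frees only the half-built card, breaks out, and lets a single teardown call undo the iterations that did succeed. A minimal userspace sketch of the same shape (all names are illustrative, not from the driver):

```c
#include <assert.h>
#include <stdlib.h>

struct item { int id; struct item *next; };
struct priv { struct item *list; };

static struct item *alloc_item(int id, int fail_at)
{
	if (id == fail_at)	/* simulate alloc_arcdev() failing */
		return NULL;
	struct item *it = malloc(sizeof(*it));
	it->id = id;
	it->next = NULL;
	return it;
}

static void destroy_all(struct priv *p)	/* plays com20020pci_remove() */
{
	while (p->list) {
		struct item *it = p->list;
		p->list = it->next;
		free(it);
	}
}

int probe_items(struct priv *p, int count, int fail_at)
{
	int ret = 0;

	for (int i = 0; i < count; i++) {
		struct item *it = alloc_item(i, fail_at);
		if (!it) {
			ret = -1;	/* -ENOMEM in the driver */
			break;		/* free only this iteration's work, then stop */
		}
		it->next = p->list;
		p->list = it;
	}
	if (ret)
		destroy_all(p);	/* one teardown path unwinds the successes */
	return ret;
}
```

The point of the `break`/`continue` rewrite in the driver is exactly this: the in-loop error label frees only the current device, and the shared remove path handles everything already on the list.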
+2-6
drivers/net/bonding/bond_main.c
···3978397839793979 rcu_read_lock();39803980 slave = bond_first_slave_rcu(bond);39813981- if (!slave) {39823982- ret = -EINVAL;39813981+ if (!slave)39833982 goto out;39843984- }39853983 slave_ops = slave->dev->netdev_ops;39863986- if (!slave_ops->ndo_neigh_setup) {39873987- ret = -EINVAL;39843984+ if (!slave_ops->ndo_neigh_setup)39883985 goto out;39893989- }3990398639913987 /* TODO: find another way [1] to implement this.39923988 * Passing a zeroed structure is fragile,
+1-23
drivers/net/can/c_can/c_can.c
···212212 .brp_inc = 1,213213};214214215215-static inline void c_can_pm_runtime_enable(const struct c_can_priv *priv)216216-{217217- if (priv->device)218218- pm_runtime_enable(priv->device);219219-}220220-221221-static inline void c_can_pm_runtime_disable(const struct c_can_priv *priv)222222-{223223- if (priv->device)224224- pm_runtime_disable(priv->device);225225-}226226-227215static inline void c_can_pm_runtime_get_sync(const struct c_can_priv *priv)228216{229217 if (priv->device)···1323133513241336int register_c_can_dev(struct net_device *dev)13251337{13261326- struct c_can_priv *priv = netdev_priv(dev);13271338 int err;1328133913291340 /* Deactivate pins to prevent DRA7 DCAN IP from being···13321345 */13331346 pinctrl_pm_select_sleep_state(dev->dev.parent);1334134713351335- c_can_pm_runtime_enable(priv);13361336-13371348 dev->flags |= IFF_ECHO; /* we support local echo */13381349 dev->netdev_ops = &c_can_netdev_ops;1339135013401351 err = register_candev(dev);13411341- if (err)13421342- c_can_pm_runtime_disable(priv);13431343- else13521352+ if (!err)13441353 devm_can_led_init(dev);13451345-13461354 return err;13471355}13481356EXPORT_SYMBOL_GPL(register_c_can_dev);1349135713501358void unregister_c_can_dev(struct net_device *dev)13511359{13521352- struct c_can_priv *priv = netdev_priv(dev);13531353-13541360 unregister_candev(dev);13551355-13561356- c_can_pm_runtime_disable(priv);13571361}13581362EXPORT_SYMBOL_GPL(unregister_c_can_dev);13591363
···11051105 b53_disable_port(ds, port);11061106 }1107110711081108- /* Let DSA handle the case where multiple bridges span the same switch11091109- * device and different VLAN awareness settings are requested, which11101110- * would be breaking filtering semantics for any of the other bridge11111111- * devices. (not hardware supported)11121112- */11131113- ds->vlan_filtering_is_global = true;11141114-11151108 return b53_setup_devlink_resources(ds);11161109}11171110···26572664 ds->ops = &b53_switch_ops;26582665 ds->untag_bridge_pvid = true;26592666 dev->vlan_enabled = true;26672667+ /* Let DSA handle the case where multiple bridges span the same switch26682668+ * device and different VLAN awareness settings are requested, which26692669+ * would be breaking filtering semantics for any of the other bridge26702670+ * devices. (not hardware supported)26712671+ */26722672+ ds->vlan_filtering_is_global = true;26732673+26602674 mutex_init(&dev->reg_mutex);26612675 mutex_init(&dev->stats_mutex);26622676
+8-3
drivers/net/dsa/bcm_sf2.c
···114114 /* Force link status for IMP port */115115 reg = core_readl(priv, offset);116116 reg |= (MII_SW_OR | LINK_STS);117117- reg &= ~GMII_SPEED_UP_2G;117117+ if (priv->type == BCM4908_DEVICE_ID)118118+ reg |= GMII_SPEED_UP_2G;119119+ else120120+ reg &= ~GMII_SPEED_UP_2G;118121 core_writel(priv, reg, offset);119122120123 /* Enable Broadcast, Multicast, Unicast forwarding to IMP port */···588585 * in bits 15:8 and the patch level in bits 7:0 which is exactly what589586 * the REG_PHY_REVISION register layout is.590587 */591591-592592- return priv->hw_params.gphy_rev;588588+ if (priv->int_phy_mask & BIT(port))589589+ return priv->hw_params.gphy_rev;590590+ else591591+ return 0;593592}594593595594static void bcm_sf2_sw_validate(struct dsa_switch *ds, int port,
+22-24
drivers/net/dsa/mt7530.c
···436436 TD_DM_DRVP(8) | TD_DM_DRVN(8));437437438438 /* Setup core clock for MT7530 */439439- if (!trgint) {440440- /* Disable MT7530 core clock */441441- core_clear(priv, CORE_TRGMII_GSW_CLK_CG, REG_GSWCK_EN);439439+ /* Disable MT7530 core clock */440440+ core_clear(priv, CORE_TRGMII_GSW_CLK_CG, REG_GSWCK_EN);442441443443- /* Disable PLL, since phy_device has not yet been created444444- * provided for phy_[read,write]_mmd_indirect is called, we445445- * provide our own core_write_mmd_indirect to complete this446446- * function.447447- */448448- core_write_mmd_indirect(priv,449449- CORE_GSWPLL_GRP1,450450- MDIO_MMD_VEND2,451451- 0);442442+ /* Disable PLL, since phy_device has not yet been created443443+ * provided for phy_[read,write]_mmd_indirect is called, we444444+ * provide our own core_write_mmd_indirect to complete this445445+ * function.446446+ */447447+ core_write_mmd_indirect(priv,448448+ CORE_GSWPLL_GRP1,449449+ MDIO_MMD_VEND2,450450+ 0);452451453453- /* Set core clock into 500Mhz */454454- core_write(priv, CORE_GSWPLL_GRP2,455455- RG_GSWPLL_POSDIV_500M(1) |456456- RG_GSWPLL_FBKDIV_500M(25));452452+ /* Set core clock into 500Mhz */453453+ core_write(priv, CORE_GSWPLL_GRP2,454454+ RG_GSWPLL_POSDIV_500M(1) |455455+ RG_GSWPLL_FBKDIV_500M(25));457456458458- /* Enable PLL */459459- core_write(priv, CORE_GSWPLL_GRP1,460460- RG_GSWPLL_EN_PRE |461461- RG_GSWPLL_POSDIV_200M(2) |462462- RG_GSWPLL_FBKDIV_200M(32));457457+ /* Enable PLL */458458+ core_write(priv, CORE_GSWPLL_GRP1,459459+ RG_GSWPLL_EN_PRE |460460+ RG_GSWPLL_POSDIV_200M(2) |461461+ RG_GSWPLL_FBKDIV_200M(32));463462464464- /* Enable MT7530 core clock */465465- core_set(priv, CORE_TRGMII_GSW_CLK_CG, REG_GSWCK_EN);466466- }463463+ /* Enable MT7530 core clock */464464+ core_set(priv, CORE_TRGMII_GSW_CLK_CG, REG_GSWCK_EN);467465468466 /* Setup the MT7530 TRGMII Tx Clock */469467 core_set(priv, CORE_TRGMII_GSW_CLK_CG, REG_GSWCK_EN);
+1-1
drivers/net/ethernet/broadcom/Kconfig
···5454config BCM4908_ENET5555 tristate "Broadcom BCM4908 internal mac support"5656 depends on ARCH_BCM4908 || COMPILE_TEST5757- default y5757+ default y if ARCH_BCM49085858 help5959 This driver supports Ethernet controller integrated into Broadcom6060 BCM4908 family SoCs.
···899899 } else {900900 data &= ~IGP02E1000_PM_D0_LPLU;901901 ret_val = e1e_wphy(hw, IGP02E1000_PHY_POWER_MGMT, data);902902+ if (ret_val)903903+ return ret_val;902904 /* LPLU and SmartSpeed are mutually exclusive. LPLU is used903905 * during Dx states where the power conservation is most904906 * important. During driver activity we should enable
···59745974 struct e1000_adapter *adapter;59755975 adapter = container_of(work, struct e1000_adapter, reset_task);5976597659775977+ rtnl_lock();59775978 /* don't run the task if already down */59785978- if (test_bit(__E1000_DOWN, &adapter->state))59795979+ if (test_bit(__E1000_DOWN, &adapter->state)) {59805980+ rtnl_unlock();59795981 return;59825982+ }5980598359815984 if (!(adapter->flags & FLAG_RESTART_NOW)) {59825985 e1000e_dump(adapter);59835986 e_err("Reset adapter unexpectedly\n");59845987 }59855988 e1000e_reinit_locked(adapter);59895989+ rtnl_unlock();59865990}5987599159885992/**
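The e1000e hunk wraps the reset task in `rtnl_lock()`/`rtnl_unlock()` and re-checks the DOWN bit under the lock, taking care to unlock on the early-return path too. A toy sketch of that pattern, with the rtnl lock modeled as a depth counter so balance can be checked (names are illustrative):

```c
#include <assert.h>
#include <stdbool.h>

static int rtnl_depth;			/* toy stand-in for rtnl_lock() state */
static void fake_rtnl_lock(void)   { rtnl_depth++; }
static void fake_rtnl_unlock(void) { rtnl_depth--; }

struct adapter {
	bool down;	/* plays test_bit(__E1000_DOWN, &adapter->state) */
	int resets;
};

void reset_task(struct adapter *a)
{
	fake_rtnl_lock();
	if (a->down) {			/* don't run the task if already down */
		fake_rtnl_unlock();	/* the unlock this path must not skip */
		return;
	}
	a->resets++;			/* plays e1000e_reinit_locked() */
	fake_rtnl_unlock();
}
```

The bug class the hunk guards against is the early `return` leaking the lock, which is why the DOWN check grows a braced block with its own unlock.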
+13
drivers/net/ethernet/intel/i40e/i40e_main.c
···32593259}3260326032613261/**32623262+ * i40e_rx_offset - Return expected offset into page to access data32633263+ * @rx_ring: Ring we are requesting offset of32643264+ *32653265+ * Returns the offset value for ring into the data buffer.32663266+ */32673267+static unsigned int i40e_rx_offset(struct i40e_ring *rx_ring)32683268+{32693269+ return ring_uses_build_skb(rx_ring) ? I40E_SKB_PAD : 0;32703270+}32713271+32723272+/**32623273 * i40e_configure_rx_ring - Configure a receive ring context32633274 * @ring: The Rx ring to configure32643275 *···33793368 clear_ring_build_skb_enabled(ring);33803369 else33813370 set_ring_build_skb_enabled(ring);33713371+33723372+ ring->rx_offset = i40e_rx_offset(ring);3382337333833374 /* cache tail for quicker writes, and clear the reg before use */33843375 ring->tail = hw->hw_addr + I40E_QRX_TAIL(pf_q);
-12
drivers/net/ethernet/intel/i40e/i40e_txrx.c
···15701570}1571157115721572/**15731573- * i40e_rx_offset - Return expected offset into page to access data15741574- * @rx_ring: Ring we are requesting offset of15751575- *15761576- * Returns the offset value for ring into the data buffer.15771577- */15781578-static unsigned int i40e_rx_offset(struct i40e_ring *rx_ring)15791579-{15801580- return ring_uses_build_skb(rx_ring) ? I40E_SKB_PAD : 0;15811581-}15821582-15831583-/**15841573 * i40e_setup_rx_descriptors - Allocate Rx descriptors15851574 * @rx_ring: Rx descriptor ring (for a specific queue) to setup15861575 *···15971608 rx_ring->next_to_alloc = 0;15981609 rx_ring->next_to_clean = 0;15991610 rx_ring->next_to_use = 0;16001600- rx_ring->rx_offset = i40e_rx_offset(rx_ring);1601161116021612 /* XDP RX-queue info only needed for RX rings exposed to XDP */16031613 if (rx_ring->vsi->type == I40E_VSI_MAIN) {
+22-2
drivers/net/ethernet/intel/ice/ice_base.c
···275275}276276277277/**278278+ * ice_rx_offset - Return expected offset into page to access data279279+ * @rx_ring: Ring we are requesting offset of280280+ *281281+ * Returns the offset value for ring into the data buffer.282282+ */283283+static unsigned int ice_rx_offset(struct ice_ring *rx_ring)284284+{285285+ if (ice_ring_uses_build_skb(rx_ring))286286+ return ICE_SKB_PAD;287287+ else if (ice_is_xdp_ena_vsi(rx_ring->vsi))288288+ return XDP_PACKET_HEADROOM;289289+290290+ return 0;291291+}292292+293293+/**278294 * ice_setup_rx_ctx - Configure a receive ring context279295 * @ring: The Rx ring to configure280296 *···429413 else430414 ice_set_ring_build_skb_ena(ring);431415416416+ ring->rx_offset = ice_rx_offset(ring);417417+432418 /* init queue specific tail register */433419 ring->tail = hw->hw_addr + QRX_TAIL(pf_q);434420 writel(0, ring->tail);435421436422 if (ring->xsk_pool) {423423+ bool ok;424424+437425 if (!xsk_buff_can_alloc(ring->xsk_pool, num_bufs)) {438426 dev_warn(dev, "XSK buffer pool does not provide enough addresses to fill %d buffers on Rx ring %d\n",439427 num_bufs, ring->q_index);···446426 return 0;447427 }448428449449- err = ice_alloc_rx_bufs_zc(ring, num_bufs);450450- if (err)429429+ ok = ice_alloc_rx_bufs_zc(ring, num_bufs);430430+ if (!ok)451431 dev_info(dev, "Failed to allocate some buffers on XSK buffer pool enabled Rx ring %d (pf_q %d)\n",452432 ring->q_index, pf_q);453433 return 0;
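The `ice_rx_offset()` helper moved into ice_base.c encodes a simple headroom policy: build_skb rings reserve `ICE_SKB_PAD`, XDP-enabled rings reserve `XDP_PACKET_HEADROOM`, all others start at offset 0. A standalone sketch with stand-in constants (the real values depend on kernel config):

```c
#include <assert.h>
#include <stdbool.h>

#define SKB_PAD		64	/* stand-in for ICE_SKB_PAD */
#define XDP_HEADROOM	256	/* stand-in for XDP_PACKET_HEADROOM */

/* headroom reserved at the start of each Rx buffer */
unsigned int rx_offset(bool build_skb, bool xdp_enabled)
{
	if (build_skb)
		return SKB_PAD;
	else if (xdp_enabled)
		return XDP_HEADROOM;

	return 0;
}
```

Moving the helper means the offset is decided once, at ring configure time, rather than recomputed when descriptors are set up.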
-17
drivers/net/ethernet/intel/ice/ice_txrx.c
···444444}445445446446/**447447- * ice_rx_offset - Return expected offset into page to access data448448- * @rx_ring: Ring we are requesting offset of449449- *450450- * Returns the offset value for ring into the data buffer.451451- */452452-static unsigned int ice_rx_offset(struct ice_ring *rx_ring)453453-{454454- if (ice_ring_uses_build_skb(rx_ring))455455- return ICE_SKB_PAD;456456- else if (ice_is_xdp_ena_vsi(rx_ring->vsi))457457- return XDP_PACKET_HEADROOM;458458-459459- return 0;460460-}461461-462462-/**463447 * ice_setup_rx_ring - Allocate the Rx descriptors464448 * @rx_ring: the Rx ring to set up465449 *···477493478494 rx_ring->next_to_use = 0;479495 rx_ring->next_to_clean = 0;480480- rx_ring->rx_offset = ice_rx_offset(rx_ring);481496482497 if (ice_is_xdp_ena_vsi(rx_ring->vsi))483498 WRITE_ONCE(rx_ring->xdp_prog, rx_ring->vsi->xdp_prog);
+5-5
drivers/net/ethernet/intel/ice/ice_xsk.c
···358358 * This function allocates a number of Rx buffers from the fill ring359359 * or the internal recycle mechanism and places them on the Rx ring.360360 *361361- * Returns false if all allocations were successful, true if any fail.361361+ * Returns true if all allocations were successful, false if any fail.362362 */363363bool ice_alloc_rx_bufs_zc(struct ice_ring *rx_ring, u16 count)364364{365365 union ice_32b_rx_flex_desc *rx_desc;366366 u16 ntu = rx_ring->next_to_use;367367 struct ice_rx_buf *rx_buf;368368- bool ret = false;368368+ bool ok = true;369369 dma_addr_t dma;370370371371 if (!count)372372- return false;372372+ return true;373373374374 rx_desc = ICE_RX_DESC(rx_ring, ntu);375375 rx_buf = &rx_ring->rx_buf[ntu];···377377 do {378378 rx_buf->xdp = xsk_buff_alloc(rx_ring->xsk_pool);379379 if (!rx_buf->xdp) {380380- ret = true;380380+ ok = false;381381 break;382382 }383383···402402 ice_release_rx_desc(rx_ring, ntu);403403 }404404405405- return ret;405405+ return ok;406406}407407408408/**
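The ice_xsk.c hunk inverts the return convention of the zero-copy fill routine: it now answers "did everything succeed?" (true/false) instead of "did anything fail?". A minimal sketch of the corrected convention, with `alloc_buf()` standing in for `xsk_buff_alloc()`:

```c
#include <assert.h>
#include <stdbool.h>

static int budget;			/* how many allocations may succeed */
static bool alloc_buf(void) { return budget-- > 0; }

/* Returns true if all allocations were successful, false if any fail */
bool alloc_rx_bufs(int count)
{
	bool ok = true;

	if (!count)
		return true;		/* nothing to do counts as success */

	while (count--) {
		if (!alloc_buf()) {
			ok = false;	/* ran out part way through */
			break;
		}
	}
	return ok;
}
```

This matches the ice_base.c caller, which now tests `if (!ok)` rather than `if (err)`, so the "partial fill" warning fires on the right condition.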
···82148214 new_buff->pagecnt_bias = old_buff->pagecnt_bias;82158215}8216821682178217-static bool igb_can_reuse_rx_page(struct igb_rx_buffer *rx_buffer)82178217+static bool igb_can_reuse_rx_page(struct igb_rx_buffer *rx_buffer,82188218+ int rx_buf_pgcnt)82188219{82198220 unsigned int pagecnt_bias = rx_buffer->pagecnt_bias;82208221 struct page *page = rx_buffer->page;···8226822582278226#if (PAGE_SIZE < 8192)82288227 /* if we are only owner of page we can reuse it */82298229- if (unlikely((page_ref_count(page) - pagecnt_bias) > 1))82288228+ if (unlikely((rx_buf_pgcnt - pagecnt_bias) > 1))82308229 return false;82318230#else82328231#define IGB_LAST_OFFSET \···83028301 return NULL;8303830283048303 if (unlikely(igb_test_staterr(rx_desc, E1000_RXDADV_STAT_TSIP))) {83058305- igb_ptp_rx_pktstamp(rx_ring->q_vector, xdp->data, skb);83068306- xdp->data += IGB_TS_HDR_LEN;83078307- size -= IGB_TS_HDR_LEN;83048304+ if (!igb_ptp_rx_pktstamp(rx_ring->q_vector, xdp->data, skb)) {83058305+ xdp->data += IGB_TS_HDR_LEN;83068306+ size -= IGB_TS_HDR_LEN;83078307+ }83088308 }8309830983108310 /* Determine available headroom for copy */···8366836483678365 /* pull timestamp out of packet data */83688366 if (igb_test_staterr(rx_desc, E1000_RXDADV_STAT_TSIP)) {83698369- igb_ptp_rx_pktstamp(rx_ring->q_vector, skb->data, skb);83708370- __skb_pull(skb, IGB_TS_HDR_LEN);83678367+ if (!igb_ptp_rx_pktstamp(rx_ring->q_vector, skb->data, skb))83688368+ __skb_pull(skb, IGB_TS_HDR_LEN);83718369 }8372837083738371 /* update buffer offset */···86168614}8617861586188616static struct igb_rx_buffer *igb_get_rx_buffer(struct igb_ring *rx_ring,86198619- const unsigned int size)86178617+ const unsigned int size, int *rx_buf_pgcnt)86208618{86218619 struct igb_rx_buffer *rx_buffer;8622862086238621 rx_buffer = &rx_ring->rx_buffer_info[rx_ring->next_to_clean];86228622+ *rx_buf_pgcnt =86238623+#if (PAGE_SIZE < 8192)86248624+ page_count(rx_buffer->page);86258625+#else86268626+ 0;86278627+#endif86248628 
prefetchw(rx_buffer->page);8625862986268630 /* we are reusing so sync this buffer for CPU use */···86428634}8643863586448636static void igb_put_rx_buffer(struct igb_ring *rx_ring,86458645- struct igb_rx_buffer *rx_buffer)86378637+ struct igb_rx_buffer *rx_buffer, int rx_buf_pgcnt)86468638{86478647- if (igb_can_reuse_rx_page(rx_buffer)) {86398639+ if (igb_can_reuse_rx_page(rx_buffer, rx_buf_pgcnt)) {86488640 /* hand second half of page back to the ring */86498641 igb_reuse_rx_page(rx_ring, rx_buffer);86508642 } else {···86728664 unsigned int xdp_xmit = 0;86738665 struct xdp_buff xdp;86748666 u32 frame_sz = 0;86678667+ int rx_buf_pgcnt;8675866886768669 /* Frame size depend on rx_ring setup when PAGE_SIZE=4K */86778670#if (PAGE_SIZE < 8192)···87028693 */87038694 dma_rmb();8704869587058705- rx_buffer = igb_get_rx_buffer(rx_ring, size);86968696+ rx_buffer = igb_get_rx_buffer(rx_ring, size, &rx_buf_pgcnt);8706869787078698 /* retrieve a buffer from the ring */87088699 if (!skb) {···87458736 break;87468737 }8747873887488748- igb_put_rx_buffer(rx_ring, rx_buffer);87398739+ igb_put_rx_buffer(rx_ring, rx_buffer, rx_buf_pgcnt);87498740 cleaned_count++;8750874187518742 /* fetch next buffer in frame if non-eop */
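The igb hunk samples `page_count()` once, early in the receive path, and passes that snapshot (`rx_buf_pgcnt`) into the reuse check, so the refcount and the driver's `pagecnt_bias` are compared from the same point in time. A sketch of the race this avoids (fields loosely mirror `struct igb_rx_buffer`):

```c
#include <assert.h>
#include <stdbool.h>

struct rx_buffer {
	int page_refs;		/* stand-in for page_ref_count(page) */
	int pagecnt_bias;
};

/* reuse only if the driver was the sole owner at snapshot time */
bool can_reuse_rx_page(const struct rx_buffer *b, int rx_buf_pgcnt)
{
	return (rx_buf_pgcnt - b->pagecnt_bias) <= 1;
}
```

If the count were read again at decision time, a reference taken by the stack in between (e.g. once the skb is handed up) could make the page look shared and needlessly defeat recycling, or worse, race the comparison.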
+24-7
drivers/net/ethernet/intel/igb/igb_ptp.c
···856856 dev_kfree_skb_any(skb);857857}858858859859+#define IGB_RET_PTP_DISABLED 1860860+#define IGB_RET_PTP_INVALID 2861861+859862/**860863 * igb_ptp_rx_pktstamp - retrieve Rx per packet timestamp861864 * @q_vector: Pointer to interrupt specific structure···867864 *868865 * This function is meant to retrieve a timestamp from the first buffer of an869866 * incoming frame. The value is stored in little endian format starting on870870- * byte 8.867867+ * byte 8868868+ *869869+ * Returns: 0 if success, nonzero if failure871870 **/872872-void igb_ptp_rx_pktstamp(struct igb_q_vector *q_vector, void *va,873873- struct sk_buff *skb)871871+int igb_ptp_rx_pktstamp(struct igb_q_vector *q_vector, void *va,872872+ struct sk_buff *skb)874873{875875- __le64 *regval = (__le64 *)va;876874 struct igb_adapter *adapter = q_vector->adapter;875875+ __le64 *regval = (__le64 *)va;877876 int adjust = 0;877877+878878+ if (!(adapter->ptp_flags & IGB_PTP_ENABLED))879879+ return IGB_RET_PTP_DISABLED;878880879881 /* The timestamp is recorded in little endian format.880882 * DWORD: 0 1 2 3881883 * Field: Reserved Reserved SYSTIML SYSTIMH882884 */885885+886886+ /* check reserved dwords are zero, be/le doesn't matter for zero */887887+ if (regval[0])888888+ return IGB_RET_PTP_INVALID;889889+883890 igb_ptp_systim_to_hwtstamp(adapter, skb_hwtstamps(skb),884891 le64_to_cpu(regval[1]));885892···909896 }910897 skb_hwtstamps(skb)->hwtstamp =911898 ktime_sub_ns(skb_hwtstamps(skb)->hwtstamp, adjust);899899+900900+ return 0;912901}913902914903/**···921906 * This function is meant to retrieve a timestamp from the internal registers922907 * of the adapter and store it in the skb.923908 **/924924-void igb_ptp_rx_rgtstamp(struct igb_q_vector *q_vector,925925- struct sk_buff *skb)909909+void igb_ptp_rx_rgtstamp(struct igb_q_vector *q_vector, struct sk_buff *skb)926910{927911 struct igb_adapter *adapter = q_vector->adapter;928912 struct e1000_hw *hw = &adapter->hw;929929- u64 regval;930913 int adjust = 
0;914914+ u64 regval;915915+916916+ if (!(adapter->ptp_flags & IGB_PTP_ENABLED))917917+ return;931918932919 /* If this bit is set, then the RX registers contain the time stamp. No933920 * other packet will be time stamped until we read these registers, so
···17111711 Autoneg);17121712 }1713171317141714+ /* Set pause flow control settings */17151715+ ethtool_link_ksettings_add_link_mode(cmd, supported, Pause);17161716+17141717 switch (hw->fc.requested_mode) {17151718 case igc_fc_full:17161719 ethtool_link_ksettings_add_link_mode(cmd, advertising, Pause);···17281725 Asym_Pause);17291726 break;17301727 default:17311731- ethtool_link_ksettings_add_link_mode(cmd, advertising, Pause);17321732- ethtool_link_ksettings_add_link_mode(cmd, advertising,17331733- Asym_Pause);17281728+ break;17341729 }1735173017361731 status = pm_runtime_suspended(&adapter->pdev->dev) ?
+9
drivers/net/ethernet/intel/igc/igc_main.c
···3831383138323832 adapter = container_of(work, struct igc_adapter, reset_task);3833383338343834+ rtnl_lock();38353835+ /* If we're already down or resetting, just bail */38363836+ if (test_bit(__IGC_DOWN, &adapter->state) ||38373837+ test_bit(__IGC_RESETTING, &adapter->state)) {38383838+ rtnl_unlock();38393839+ return;38403840+ }38413841+38343842 igc_rings_dump(adapter);38353843 igc_regs_dump(adapter);38363844 netdev_err(adapter->netdev, "Reset adapter\n");38373845 igc_reinit_locked(adapter);38463846+ rtnl_unlock();38383847}3839384838403849/**
+38-30
drivers/net/ethernet/intel/igc/igc_ptp.c
···152152}153153154154/**155155- * igc_ptp_rx_pktstamp - retrieve Rx per packet timestamp155155+ * igc_ptp_rx_pktstamp - Retrieve timestamp from Rx packet buffer156156 * @q_vector: Pointer to interrupt specific structure157157 * @va: Pointer to address containing Rx buffer158158 * @skb: Buffer containing timestamp and packet159159 *160160- * This function is meant to retrieve the first timestamp from the161161- * first buffer of an incoming frame. The value is stored in little162162- * endian format starting on byte 0. There's a second timestamp163163- * starting on byte 8.164164- **/165165-void igc_ptp_rx_pktstamp(struct igc_q_vector *q_vector, void *va,160160+ * This function retrieves the timestamp saved in the beginning of packet161161+ * buffer. While two timestamps are available, one in timer0 reference and the162162+ * other in timer1 reference, this function considers only the timestamp in163163+ * timer0 reference.164164+ */165165+void igc_ptp_rx_pktstamp(struct igc_q_vector *q_vector, __le32 *va,166166 struct sk_buff *skb)167167{168168 struct igc_adapter *adapter = q_vector->adapter;169169- __le64 *regval = (__le64 *)va;170170- int adjust = 0;169169+ u64 regval;170170+ int adjust;171171172172- /* The timestamp is recorded in little endian format.173173- * DWORD: | 0 | 1 | 2 | 3174174- * Field: | Timer0 Low | Timer0 High | Timer1 Low | Timer1 High172172+ /* Timestamps are saved in little endian at the beginning of the packet173173+ * buffer following the layout:174174+ *175175+ * DWORD: | 0 | 1 | 2 | 3 |176176+ * Field: | Timer1 SYSTIML | Timer1 SYSTIMH | Timer0 SYSTIML | Timer0 SYSTIMH |177177+ *178178+ * SYSTIML holds the nanoseconds part while SYSTIMH holds the seconds179179+ * part of the timestamp.175180 */176176- igc_ptp_systim_to_hwtstamp(adapter, skb_hwtstamps(skb),177177- le64_to_cpu(regval[0]));181181+ regval = le32_to_cpu(va[2]);182182+ regval |= (u64)le32_to_cpu(va[3]) << 32;183183+ igc_ptp_systim_to_hwtstamp(adapter, skb_hwtstamps(skb), 
regval);178184179179- /* adjust timestamp for the RX latency based on link speed */180180- if (adapter->hw.mac.type == igc_i225) {181181- switch (adapter->link_speed) {182182- case SPEED_10:183183- adjust = IGC_I225_RX_LATENCY_10;184184- break;185185- case SPEED_100:186186- adjust = IGC_I225_RX_LATENCY_100;187187- break;188188- case SPEED_1000:189189- adjust = IGC_I225_RX_LATENCY_1000;190190- break;191191- case SPEED_2500:192192- adjust = IGC_I225_RX_LATENCY_2500;193193- break;194194- }185185+ /* Adjust timestamp for the RX latency based on link speed */186186+ switch (adapter->link_speed) {187187+ case SPEED_10:188188+ adjust = IGC_I225_RX_LATENCY_10;189189+ break;190190+ case SPEED_100:191191+ adjust = IGC_I225_RX_LATENCY_100;192192+ break;193193+ case SPEED_1000:194194+ adjust = IGC_I225_RX_LATENCY_1000;195195+ break;196196+ case SPEED_2500:197197+ adjust = IGC_I225_RX_LATENCY_2500;198198+ break;199199+ default:200200+ adjust = 0;201201+ netdev_warn_once(adapter->netdev, "Imprecise timestamp\n");202202+ break;195203 }196204 skb_hwtstamps(skb)->hwtstamp =197205 ktime_sub_ns(skb_hwtstamps(skb)->hwtstamp, adjust);
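The new igc code assembles the Timer0 timestamp from two little-endian 32-bit words at dwords 2 and 3 of the packet buffer, instead of misreading dword 0/1 as a 64-bit value. A portable sketch of that assembly, with `get_le32()` standing in for `le32_to_cpu()` on a `__le32` array:

```c
#include <assert.h>
#include <stdint.h>

static uint32_t get_le32(const uint8_t *p)
{
	return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
	       ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}

/* Timer0 value: SYSTIML in dword 2, SYSTIMH in dword 3 */
uint64_t timer0_stamp(const uint8_t *pkt)
{
	uint64_t regval = get_le32(pkt + 8);		/* dword 2: low half */
	regval |= (uint64_t)get_le32(pkt + 12) << 32;	/* dword 3: high half */
	return regval;
}
```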
···22962296 *match_level = MLX5_MATCH_L4;22972297 }2298229822992299+ /* Currently supported only for MPLS over UDP */23002300+ if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_MPLS) &&23012301+ !netif_is_bareudp(filter_dev)) {23022302+ NL_SET_ERR_MSG_MOD(extack,23032303+ "Matching on MPLS is supported only for MPLS over UDP");23042304+ netdev_err(priv->netdev,23052305+ "Matching on MPLS is supported only for MPLS over UDP\n");23062306+ return -EOPNOTSUPP;23072307+ }23082308+22992309 return 0;23002310}23012311···29092899 return 0;29102900}2911290129022902+static bool modify_tuple_supported(bool modify_tuple, bool ct_clear,29032903+ bool ct_flow, struct netlink_ext_ack *extack,29042904+ struct mlx5e_priv *priv,29052905+ struct mlx5_flow_spec *spec)29062906+{29072907+ if (!modify_tuple || ct_clear)29082908+ return true;29092909+29102910+ if (ct_flow) {29112911+ NL_SET_ERR_MSG_MOD(extack,29122912+ "can't offload tuple modification with non-clear ct()");29132913+ netdev_info(priv->netdev,29142914+ "can't offload tuple modification with non-clear ct()");29152915+ return false;29162916+ }29172917+29182918+ /* Add ct_state=-trk match so it will be offloaded for non ct flows29192919+ * (or after clear action), as otherwise, since the tuple is changed,29202920+ * we can't restore ct state29212921+ */29222922+ if (mlx5_tc_ct_add_no_trk_match(spec)) {29232923+ NL_SET_ERR_MSG_MOD(extack,29242924+ "can't offload tuple modification with ct matches and no ct(clear) action");29252925+ netdev_info(priv->netdev,29262926+ "can't offload tuple modification with ct matches and no ct(clear) action");29272927+ return false;29282928+ }29292929+29302930+ return true;29312931+}29322932+29122933static bool modify_header_match_supported(struct mlx5e_priv *priv,29132934 struct mlx5_flow_spec *spec,29142935 struct flow_action *flow_action,···29782937 return err;29792938 }2980293929812981- /* Add ct_state=-trk match so it will be offloaded for non ct flows29822982- * (or after clear action), as 
otherwise, since the tuple is changed,29832983- * we can't restore ct state29842984- */29852985- if (!ct_clear && modify_tuple &&29862986- mlx5_tc_ct_add_no_trk_match(spec)) {29872987- NL_SET_ERR_MSG_MOD(extack,29882988- "can't offload tuple modify header with ct matches");29892989- netdev_info(priv->netdev,29902990- "can't offload tuple modify header with ct matches");29402940+ if (!modify_tuple_supported(modify_tuple, ct_clear, ct_flow, extack,29412941+ priv, spec))29912942 return false;29922992- }2993294329942944 ip_proto = MLX5_GET(fte_match_set_lyr_2_4, headers_v, ip_protocol);29952945 if (modify_ip_header && ip_proto != IPPROTO_TCP &&···44774445 */44784446 if (rate) {44794447 rate = (rate * BITS_PER_BYTE) + 500000;44804480- rate_mbps = max_t(u64, do_div(rate, 1000000), 1);44484448+ do_div(rate, 1000000);44494449+ rate_mbps = max_t(u32, rate, 1);44814450 }4482445144834452 err = mlx5_esw_modify_vport_rate(esw, vport_num, rate_mbps);
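The last mlx5 hunk fixes a classic `do_div()` pitfall: the kernel macro divides its first argument in place and *returns the remainder*, so the removed line was feeding the remainder, not the quotient, into `max_t()`. A userspace sketch of the corrected sequence, with `do_div_like()` modeling the kernel macro's contract:

```c
#include <assert.h>
#include <stdint.h>

/* same contract as the kernel's do_div(): quotient in *n, remainder returned */
static uint32_t do_div_like(uint64_t *n, uint32_t base)
{
	uint32_t rem = (uint32_t)(*n % base);
	*n /= base;
	return rem;
}

/* bytes/sec -> Mbit/sec, rounded to the nearest Mbit, clamped to >= 1 */
uint32_t rate_bytes_to_mbps(uint64_t rate_bytes)
{
	uint64_t rate = rate_bytes * 8 + 500000;	/* bits per second */

	do_div_like(&rate, 1000000);	/* quotient now lives in rate */
	return rate > 1 ? (uint32_t)rate : 1;
}
```

The fixed hunk does the same two steps explicitly: call `do_div()` for its in-place effect, then clamp the quotient with `max_t(u32, rate, 1)`.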
···327327 goto err_free_ctx_entry;328328 }329329330330+ /* Do not allocate a mask-id for pre_tun_rules. These flows are used to331331+ * configure the pre_tun table and are never actually sent to the332332+ * firmware as an add-flow message. This causes the mask-id allocation333333+ * on the firmware to get out of sync if allocated here.334334+ */330335 new_mask_id = 0;331331- if (!nfp_check_mask_add(app, nfp_flow->mask_data,336336+ if (!nfp_flow->pre_tun_rule.dev &&337337+ !nfp_check_mask_add(app, nfp_flow->mask_data,332338 nfp_flow->meta.mask_len,333339 &nfp_flow->meta.flags, &new_mask_id)) {334340 NL_SET_ERR_MSG_MOD(extack, "invalid entry: cannot allocate a new mask id");···365359 goto err_remove_mask;366360 }367361368368- if (!nfp_check_mask_remove(app, nfp_flow->mask_data,362362+ if (!nfp_flow->pre_tun_rule.dev &&363363+ !nfp_check_mask_remove(app, nfp_flow->mask_data,369364 nfp_flow->meta.mask_len,370365 NULL, &new_mask_id)) {371366 NL_SET_ERR_MSG_MOD(extack, "invalid entry: cannot release mask id");···381374 return 0;382375383376err_remove_mask:384384- nfp_check_mask_remove(app, nfp_flow->mask_data, nfp_flow->meta.mask_len,385385- NULL, &new_mask_id);377377+ if (!nfp_flow->pre_tun_rule.dev)378378+ nfp_check_mask_remove(app, nfp_flow->mask_data,379379+ nfp_flow->meta.mask_len,380380+ NULL, &new_mask_id);386381err_remove_rhash:387382 WARN_ON_ONCE(rhashtable_remove_fast(&priv->stats_ctx_table,388383 &ctx_entry->ht_node,···415406416407 __nfp_modify_flow_metadata(priv, nfp_flow);417408418418- nfp_check_mask_remove(app, nfp_flow->mask_data,419419- nfp_flow->meta.mask_len, &nfp_flow->meta.flags,420420- &new_mask_id);409409+ if (!nfp_flow->pre_tun_rule.dev)410410+ nfp_check_mask_remove(app, nfp_flow->mask_data,411411+ nfp_flow->meta.mask_len, &nfp_flow->meta.flags,412412+ &new_mask_id);421413422414 /* Update flow payload with mask ids. */423415 nfp_flow->unmasked_data[NFP_FL_MASK_ID_LOCATION] = new_mask_id;

···11421142 return -EOPNOTSUPP;11431143 }1144114411451145+ if (!(key_layer & NFP_FLOWER_LAYER_IPV4) &&11461146+ !(key_layer & NFP_FLOWER_LAYER_IPV6)) {11471147+ NL_SET_ERR_MSG_MOD(extack, "unsupported pre-tunnel rule: match on ipv4/ipv6 eth_type must be present");11481148+ return -EOPNOTSUPP;11491149+ }11501150+11451151 /* Skip fields known to exist. */11461152 mask += sizeof(struct nfp_flower_meta_tci);11471153 ext += sizeof(struct nfp_flower_meta_tci);···11581152 mask += sizeof(struct nfp_flower_in_port);11591153 ext += sizeof(struct nfp_flower_in_port);1160115411551155+ /* Ensure destination MAC address matches pre_tun_dev. */11561156+ mac = (struct nfp_flower_mac_mpls *)ext;11571157+ if (memcmp(&mac->mac_dst[0], flow->pre_tun_rule.dev->dev_addr, 6)) {11581158+ NL_SET_ERR_MSG_MOD(extack, "unsupported pre-tunnel rule: dest MAC must match output dev MAC");11591159+ return -EOPNOTSUPP;11601160+ }11611161+11611162 /* Ensure destination MAC address is fully matched. */11621163 mac = (struct nfp_flower_mac_mpls *)mask;11631164 if (!is_broadcast_ether_addr(&mac->mac_dst[0])) {11641165 NL_SET_ERR_MSG_MOD(extack, "unsupported pre-tunnel rule: dest MAC field must not be masked");11661166+ return -EOPNOTSUPP;11671167+ }11681168+11691169+ if (mac->mpls_lse) {11701170+ NL_SET_ERR_MSG_MOD(extack, "unsupported pre-tunnel rule: MPLS not supported");11651171 return -EOPNOTSUPP;11661172 }11671173
···10791079{10801080 int sg_elems = q->lif->qtype_info[IONIC_QTYPE_TXQ].max_sg_elems;10811081 struct ionic_tx_stats *stats = q_to_tx_stats(q);10821082+ int ndescs;10821083 int err;1083108410841084- /* If TSO, need roundup(skb->len/mss) descs */10851085+ /* Each desc is mss long max, so a descriptor for each gso_seg */10851086 if (skb_is_gso(skb))10861086- return (skb->len / skb_shinfo(skb)->gso_size) + 1;10871087+ ndescs = skb_shinfo(skb)->gso_segs;10881088+ else10891089+ ndescs = 1;1087109010881088- /* If non-TSO, just need 1 desc and nr_frags sg elems */10891091 if (skb_shinfo(skb)->nr_frags <= sg_elems)10901090- return 1;10921092+ return ndescs;1091109310921094 /* Too many frags, so linearize */10931095 err = skb_linearize(skb);···1098109610991097 stats->linearize++;1100109811011101- /* Need 1 desc and zero sg elems */11021102- return 1;10991099+ return ndescs;11031100}1104110111051102static int ionic_maybe_stop_tx(struct ionic_queue *q, int ndescs)
···18801880 if (IS_ERR(lp->regs)) {18811881 dev_err(&pdev->dev, "could not map Axi Ethernet regs.\n");18821882 ret = PTR_ERR(lp->regs);18831883- goto free_netdev;18831883+ goto cleanup_clk;18841884 }18851885 lp->regs_start = ethres->start;18861886···19581958 break;19591959 default:19601960 ret = -EINVAL;19611961- goto free_netdev;19611961+ goto cleanup_clk;19621962 }19631963 } else {19641964 ret = of_get_phy_mode(pdev->dev.of_node, &lp->phy_mode);19651965 if (ret)19661966- goto free_netdev;19661966+ goto cleanup_clk;19671967 }19681968 if (lp->switch_x_sgmii && lp->phy_mode != PHY_INTERFACE_MODE_SGMII &&19691969 lp->phy_mode != PHY_INTERFACE_MODE_1000BASEX) {19701970 dev_err(&pdev->dev, "xlnx,switch-x-sgmii only supported with SGMII or 1000BaseX\n");19711971 ret = -EINVAL;19721972- goto free_netdev;19721972+ goto cleanup_clk;19731973 }1974197419751975 /* Find the DMA node, map the DMA registers, and decode the DMA IRQs */···19821982 dev_err(&pdev->dev,19831983 "unable to get DMA resource\n");19841984 of_node_put(np);19851985- goto free_netdev;19851985+ goto cleanup_clk;19861986 }19871987 lp->dma_regs = devm_ioremap_resource(&pdev->dev,19881988 &dmares);···20022002 if (IS_ERR(lp->dma_regs)) {20032003 dev_err(&pdev->dev, "could not map DMA regs\n");20042004 ret = PTR_ERR(lp->dma_regs);20052005- goto free_netdev;20052005+ goto cleanup_clk;20062006 }20072007 if ((lp->rx_irq <= 0) || (lp->tx_irq <= 0)) {20082008 dev_err(&pdev->dev, "could not determine irqs\n");20092009 ret = -ENOMEM;20102010- goto free_netdev;20102010+ goto cleanup_clk;20112011 }2012201220132013 /* Autodetect the need for 64-bit DMA pointers.···20372037 ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(addr_width));20382038 if (ret) {20392039 dev_err(&pdev->dev, "No suitable DMA available\n");20402040- goto free_netdev;20402040+ goto cleanup_clk;20412041 }2042204220432043 /* Check for Ethernet core IRQ (optional) */···20682068 if (!lp->phy_node) {20692069 dev_err(&pdev->dev, "phy-handle required 
for 1000BaseX/SGMII\n");20702070 ret = -EINVAL;20712071- goto free_netdev;20712071+ goto cleanup_mdio;20722072 }20732073 lp->pcs_phy = of_mdio_find_device(lp->phy_node);20742074 if (!lp->pcs_phy) {20752075 ret = -EPROBE_DEFER;20762076- goto free_netdev;20762076+ goto cleanup_mdio;20772077 }20782078 lp->phylink_config.pcs_poll = true;20792079 }···20872087 if (IS_ERR(lp->phylink)) {20882088 ret = PTR_ERR(lp->phylink);20892089 dev_err(&pdev->dev, "phylink_create error (%i)\n", ret);20902090- goto free_netdev;20902090+ goto cleanup_mdio;20912091 }2092209220932093 ret = register_netdev(lp->ndev);20942094 if (ret) {20952095 dev_err(lp->dev, "register_netdev() error (%i)\n", ret);20962096- goto free_netdev;20962096+ goto cleanup_phylink;20972097 }2098209820992099 return 0;21002100+21012101+cleanup_phylink:21022102+ phylink_destroy(lp->phylink);21032103+21042104+cleanup_mdio:21052105+ if (lp->pcs_phy)21062106+ put_device(&lp->pcs_phy->dev);21072107+ if (lp->mii_bus)21082108+ axienet_mdio_teardown(lp);21092109+ of_node_put(lp->phy_node);21102110+21112111+cleanup_clk:21122112+ clk_disable_unprepare(lp->clk);2100211321012114free_netdev:21022115 free_netdev(ndev);
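The axienet hunk replaces a catch-all `free_netdev` target with an ordered ladder of cleanup labels, each undoing everything acquired before the failure point and falling through into the next. A toy probe with resources modeled as flags (step numbers and names are illustrative):

```c
#include <assert.h>
#include <stdbool.h>

struct dev { bool clk, mdio, phylink; };

int probe(struct dev *d, int fail_at)
{
	d->clk = true;			/* clk acquired first */

	if (fail_at == 2)		/* mdio setup failed */
		goto cleanup_clk;
	d->mdio = true;

	if (fail_at == 3)		/* phylink_create() failed */
		goto cleanup_mdio;
	d->phylink = true;

	if (fail_at == 4)		/* register_netdev() failed */
		goto cleanup_phylink;
	return 0;

cleanup_phylink:
	d->phylink = false;		/* phylink_destroy() */
cleanup_mdio:
	d->mdio = false;		/* mdio teardown */
cleanup_clk:
	d->clk = false;			/* clk_disable_unprepare() */
	return -1;
}
```

Jumping to the label that matches the failure point is what the renamed `cleanup_clk`/`cleanup_mdio`/`cleanup_phylink` targets achieve: each error path releases exactly the resources held so far, in reverse acquisition order.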
+33-17
drivers/net/ipa/ipa_cmd.c
···175175 : field_max(IP_FLTRT_FLAGS_NHASH_ADDR_FMASK);176176 if (mem->offset > offset_max ||177177 ipa->mem_offset > offset_max - mem->offset) {178178- dev_err(dev, "IPv%c %s%s table region offset too large "179179- "(0x%04x + 0x%04x > 0x%04x)\n",180180- ipv6 ? '6' : '4', hashed ? "hashed " : "",181181- route ? "route" : "filter",182182- ipa->mem_offset, mem->offset, offset_max);178178+ dev_err(dev, "IPv%c %s%s table region offset too large\n",179179+ ipv6 ? '6' : '4', hashed ? "hashed " : "",180180+ route ? "route" : "filter");181181+ dev_err(dev, " (0x%04x + 0x%04x > 0x%04x)\n",182182+ ipa->mem_offset, mem->offset, offset_max);183183+183184 return false;184185 }185186186187 if (mem->offset > ipa->mem_size ||187188 mem->size > ipa->mem_size - mem->offset) {188188- dev_err(dev, "IPv%c %s%s table region out of range "189189- "(0x%04x + 0x%04x > 0x%04x)\n",190190- ipv6 ? '6' : '4', hashed ? "hashed " : "",191191- route ? "route" : "filter",192192- mem->offset, mem->size, ipa->mem_size);189189+ dev_err(dev, "IPv%c %s%s table region out of range\n",190190+ ipv6 ? '6' : '4', hashed ? "hashed " : "",191191+ route ? "route" : "filter");192192+ dev_err(dev, " (0x%04x + 0x%04x > 0x%04x)\n",193193+ mem->offset, mem->size, ipa->mem_size);194194+193195 return false;194196 }195197···207205 u32 size_max;208206 u32 size;209207208208+ /* In ipa_cmd_hdr_init_local_add() we record the offset and size209209+ * of the header table memory area. 
Make sure the offset and size210210+ * fit in the fields that need to hold them, and that the entire211211+ * range is within the overall IPA memory range.212212+ */210213 offset_max = field_max(HDR_INIT_LOCAL_FLAGS_HDR_ADDR_FMASK);211214 if (mem->offset > offset_max ||212215 ipa->mem_offset > offset_max - mem->offset) {213213- dev_err(dev, "header table region offset too large "214214- "(0x%04x + 0x%04x > 0x%04x)\n",215215- ipa->mem_offset + mem->offset, offset_max);216216+ dev_err(dev, "header table region offset too large\n");217217+ dev_err(dev, " (0x%04x + 0x%04x > 0x%04x)\n",218218+ ipa->mem_offset, mem->offset, offset_max);219219+216220 return false;217221 }218222219223 size_max = field_max(HDR_INIT_LOCAL_FLAGS_TABLE_SIZE_FMASK);220224 size = ipa->mem[IPA_MEM_MODEM_HEADER].size;221225 size += ipa->mem[IPA_MEM_AP_HEADER].size;222222- if (mem->offset > ipa->mem_size || size > ipa->mem_size - mem->offset) {223223- dev_err(dev, "header table region out of range "224224- "(0x%04x + 0x%04x > 0x%04x)\n",225225- mem->offset, size, ipa->mem_size);226226+227227+ if (size > size_max) {228228+ dev_err(dev, "header table region size too large\n");229229+ dev_err(dev, " (0x%04x > 0x%08x)\n", size, size_max);230230+231231+ return false;232232+ }233233+ if (size > ipa->mem_size || mem->offset > ipa->mem_size - size) {234234+ dev_err(dev, "header table region out of range\n");235235+ dev_err(dev, " (0x%04x + 0x%04x > 0x%04x)\n",236236+ mem->offset, size, ipa->mem_size);237237+226238 return false;227239 }228240
+2
drivers/net/ipa/ipa_qmi.c
···249249 .decoded_size = IPA_QMI_DRIVER_INIT_COMPLETE_REQ_SZ,250250 .fn = ipa_server_driver_init_complete,251251 },252252+ { },252253};253254254255/* Handle an INIT_DRIVER response message from the modem. */···270269 .decoded_size = IPA_QMI_INIT_DRIVER_RSP_SZ,271270 .fn = ipa_client_init_driver,272271 },272272+ { },273273};274274275275/* Return a pointer to an init modem driver request structure, which contains
+9
drivers/net/phy/broadcom.c
···342342 bcm54xx_adjust_rxrefclk(phydev);343343344344 switch (BRCM_PHY_MODEL(phydev)) {345345+ case PHY_ID_BCM50610:346346+ case PHY_ID_BCM50610M:347347+ err = bcm54xx_config_clock_delay(phydev);348348+ break;345349 case PHY_ID_BCM54210E:346350 err = bcm54210e_config_init(phydev);347351 break;···402398 ret = genphy_resume(phydev);403399 if (ret < 0)404400 return ret;401401+402402+ /* Upon exiting power down, the PHY remains in an internal reset state403403+ * for 40us404404+ */405405+ fsleep(40);405406406407 return bcm54xx_config_init(phydev);407408}
···387387388388 err = register_netdev(dev);389389 if (err) {390390+ /* Set disconnected flag so that disconnect() returns early. */391391+ pnd->disconnected = 1;390392 usb_driver_release_interface(&usbpn_driver, data_intf);391393 goto out;392394 }
···302302 if (rxq < rcv->real_num_rx_queues) {303303 rq = &rcv_priv->rq[rxq];304304 rcv_xdp = rcu_access_pointer(rq->xdp_prog);305305- if (rcv_xdp)306306- skb_record_rx_queue(skb, rxq);305305+ skb_record_rx_queue(skb, rxq);307306 }308307309308 skb_tx_timestamp(skb);
+42-2
drivers/net/wan/hdlc_x25.c
···23232424struct x25_state {2525 x25_hdlc_proto settings;2626+ bool up;2727+ spinlock_t up_lock; /* Protects "up" */2628};27292830static int x25_ioctl(struct net_device *dev, struct ifreq *ifr);···106104107105static netdev_tx_t x25_xmit(struct sk_buff *skb, struct net_device *dev)108106{107107+ hdlc_device *hdlc = dev_to_hdlc(dev);108108+ struct x25_state *x25st = state(hdlc);109109 int result;110110111111 /* There should be a pseudo header of 1 byte added by upper layers.···118114 return NETDEV_TX_OK;119115 }120116117117+ spin_lock_bh(&x25st->up_lock);118118+ if (!x25st->up) {119119+ spin_unlock_bh(&x25st->up_lock);120120+ kfree_skb(skb);121121+ return NETDEV_TX_OK;122122+ }123123+121124 switch (skb->data[0]) {122125 case X25_IFACE_DATA: /* Data to be transmitted */123126 skb_pull(skb, 1);124127 if ((result = lapb_data_request(dev, skb)) != LAPB_OK)125128 dev_kfree_skb(skb);129129+ spin_unlock_bh(&x25st->up_lock);126130 return NETDEV_TX_OK;127131128132 case X25_IFACE_CONNECT:···159147 break;160148 }161149150150+ spin_unlock_bh(&x25st->up_lock);162151 dev_kfree_skb(skb);163152 return NETDEV_TX_OK;164153}···177164 .data_transmit = x25_data_transmit,178165 };179166 hdlc_device *hdlc = dev_to_hdlc(dev);167167+ struct x25_state *x25st = state(hdlc);180168 struct lapb_parms_struct params;181169 int result;182170···204190 if (result != LAPB_OK)205191 return -EINVAL;206192193193+ spin_lock_bh(&x25st->up_lock);194194+ x25st->up = true;195195+ spin_unlock_bh(&x25st->up_lock);196196+207197 return 0;208198}209199···215197216198static void x25_close(struct net_device *dev)217199{200200+ hdlc_device *hdlc = dev_to_hdlc(dev);201201+ struct x25_state *x25st = state(hdlc);202202+203203+ spin_lock_bh(&x25st->up_lock);204204+ x25st->up = false;205205+ spin_unlock_bh(&x25st->up_lock);206206+218207 lapb_unregister(dev);219208}220209···230205static int x25_rx(struct sk_buff *skb)231206{232207 struct net_device *dev = skb->dev;208208+ hdlc_device *hdlc = dev_to_hdlc(dev);209209+ struct 
x25_state *x25st = state(hdlc);233210234211 if ((skb = skb_share_check(skb, GFP_ATOMIC)) == NULL) {235212 dev->stats.rx_dropped++;236213 return NET_RX_DROP;237214 }238215239239- if (lapb_data_received(dev, skb) == LAPB_OK)240240- return NET_RX_SUCCESS;216216+ spin_lock_bh(&x25st->up_lock);217217+ if (!x25st->up) {218218+ spin_unlock_bh(&x25st->up_lock);219219+ kfree_skb(skb);220220+ dev->stats.rx_dropped++;221221+ return NET_RX_DROP;222222+ }241223224224+ if (lapb_data_received(dev, skb) == LAPB_OK) {225225+ spin_unlock_bh(&x25st->up_lock);226226+ return NET_RX_SUCCESS;227227+ }228228+229229+ spin_unlock_bh(&x25st->up_lock);242230 dev->stats.rx_errors++;243231 dev_kfree_skb_any(skb);244232 return NET_RX_DROP;···336298 return result;337299338300 memcpy(&state(hdlc)->settings, &new_settings, size);301301+ state(hdlc)->up = false;302302+ spin_lock_init(&state(hdlc)->up_lock);339303340304 /* There's no header_ops so hard_header_len should be 0. */341305 dev->hard_header_len = 0;
+8-1
drivers/pinctrl/intel/pinctrl-intel.c
···13571357 gpps[i].gpio_base = 0;13581358 break;13591359 case INTEL_GPIO_BASE_NOMAP:13601360+ break;13601361 default:13611362 break;13621363 }···13941393 gpps[i].size = min(gpp_size, npins);13951394 npins -= gpps[i].size;1396139513961396+ gpps[i].gpio_base = gpps[i].base;13971397 gpps[i].padown_num = padown_num;1398139813991399 /*···14931491 if (IS_ERR(regs))14941492 return PTR_ERR(regs);1495149314961496- /* Determine community features based on the revision */14941494+ /*14951495+ * Determine community features based on the revision.14961496+ * A value of all ones means the device is not present.14971497+ */14971498 value = readl(regs + REVID);14991499+ if (value == ~0u)15001500+ return -ENODEV;14981501 if (((value & REVID_MASK) >> REVID_SHIFT) >= 0x94) {14991502 community->features |= PINCTRL_FEATURE_DEBOUNCE;15001503 community->features |= PINCTRL_FEATURE_1K_PD;
+1-1
drivers/pinctrl/pinctrl-microchip-sgpio.c
···572572 /* Type value spread over 2 registers sets: low, high bit */573573 sgpio_clrsetbits(bank->priv, REG_INT_TRIGGER, addr.bit,574574 BIT(addr.port), (!!(type & 0x1)) << addr.port);575575- sgpio_clrsetbits(bank->priv, REG_INT_TRIGGER + SGPIO_MAX_BITS, addr.bit,575575+ sgpio_clrsetbits(bank->priv, REG_INT_TRIGGER, SGPIO_MAX_BITS + addr.bit,576576 BIT(addr.port), (!!(type & 0x2)) << addr.port);577577578578 if (type == SGPIO_INT_TRG_LEVEL)
+8-5
drivers/pinctrl/pinctrl-rockchip.c
···37273727static int __maybe_unused rockchip_pinctrl_resume(struct device *dev)37283728{37293729 struct rockchip_pinctrl *info = dev_get_drvdata(dev);37303730- int ret = regmap_write(info->regmap_base, RK3288_GRF_GPIO6C_IOMUX,37313731- rk3288_grf_gpio6c_iomux |37323732- GPIO6C6_SEL_WRITE_ENABLE);37303730+ int ret;3733373137343734- if (ret)37353735- return ret;37323732+ if (info->ctrl->type == RK3288) {37333733+ ret = regmap_write(info->regmap_base, RK3288_GRF_GPIO6C_IOMUX,37343734+ rk3288_grf_gpio6c_iomux |37353735+ GPIO6C6_SEL_WRITE_ENABLE);37363736+ if (ret)37373737+ return ret;37383738+ }3736373937373740 return pinctrl_force_default(info->pctl_dev);37383741}
+1-1
drivers/pinctrl/qcom/pinctrl-lpass-lpi.c
···392392 unsigned long *configs, unsigned int nconfs)393393{394394 struct lpi_pinctrl *pctrl = dev_get_drvdata(pctldev->dev);395395- unsigned int param, arg, pullup, strength;395395+ unsigned int param, arg, pullup = LPI_GPIO_BIAS_DISABLE, strength = 2;396396 bool value, output_enabled = false;397397 const struct lpi_pingroup *g;398398 unsigned long sval;
···11731173 depends on PCI11741174 help11751175 The Intel Platform Controller Hub for Intel Core SoCs provides access11761176- to Power Management Controller registers via a PCI interface. This11761176+ to Power Management Controller registers via various interfaces. This11771177 driver can utilize debugging capabilities and supported features as11781178- exposed by the Power Management Controller.11781178+ exposed by the Power Management Controller. It also may perform some11791179+ tasks in the PMC in order to enable transition into the SLPS0 state.11801180+ It should be selected on all Intel platforms supported by the driver.1179118111801182 Supported features:11811183 - SLP_S0_RESIDENCY counter11821184 - PCH IP Power Gating status11831183- - LTR Ignore11851185+ - LTR Ignore / LTR Show11841186 - MPHY/PLL gating status (Sunrisepoint PCH only)11871187+ - SLPS0 Debug registers (Cannonlake/Icelake PCH)11881188+ - Low Power Mode registers (Tigerlake and beyond)11891189+ - PMC quirks as needed to enable SLPS0/S0ix1185119011861191config INTEL_PMT_CLASS11871192 tristate
···173173 struct intel_pmt_namespace *ns,174174 struct device *parent)175175{176176- struct resource res;176176+ struct resource res = {0};177177 struct device *dev;178178 int ret;179179
+6-7
drivers/platform/x86/intel_pmt_crashlog.c
···2323#define CRASH_TYPE_OOBMSM 124242525/* Control Flags */2626-#define CRASHLOG_FLAG_DISABLE BIT(27)2626+#define CRASHLOG_FLAG_DISABLE BIT(28)27272828/*2929- * Bits 28 and 29 control the state of bit 31.2929+ * Bits 29 and 30 control the state of bit 31.3030 *3131- * Bit 28 will clear bit 31, if set, allowing a new crashlog to be captured.3232- * Bit 29 will immediately trigger a crashlog to be generated, setting bit 31.3333- * Bit 30 is read-only and reserved as 0.3131+ * Bit 29 will clear bit 31, if set, allowing a new crashlog to be captured.3232+ * Bit 30 will immediately trigger a crashlog to be generated, setting bit 31.3433 * Bit 31 is the read-only status with a 1 indicating log is complete.3534 */3636-#define CRASHLOG_FLAG_TRIGGER_CLEAR BIT(28)3737-#define CRASHLOG_FLAG_TRIGGER_EXECUTE BIT(29)3535+#define CRASHLOG_FLAG_TRIGGER_CLEAR BIT(29)3636+#define CRASHLOG_FLAG_TRIGGER_EXECUTE BIT(30)3837#define CRASHLOG_FLAG_TRIGGER_COMPLETE BIT(31)3938#define CRASHLOG_FLAG_TRIGGER_MASK GENMASK(31, 28)4039
+79-29
drivers/platform/x86/thinkpad_acpi.c
···4081408140824082 case TP_HKEY_EV_KEY_NUMLOCK:40834083 case TP_HKEY_EV_KEY_FN:40844084- case TP_HKEY_EV_KEY_FN_ESC:40854084 /* key press events, we just ignore them as long as the EC40864085 * is still reporting them in the normal keyboard stream */40864086+ *send_acpi_ev = false;40874087+ *ignore_acpi_ev = true;40884088+ return true;40894089+40904090+ case TP_HKEY_EV_KEY_FN_ESC:40914091+ /* Get the media key status to force the status LED to update */40924092+ acpi_evalf(hkey_handle, NULL, "GMKS", "v");40874093 *send_acpi_ev = false;40884094 *ignore_acpi_ev = true;40894095 return true;···98519845 * Thinkpad sensor interfaces98529846 */9853984798489848+#define DYTC_CMD_QUERY 0 /* To get DYTC status - enable/revision */98499849+#define DYTC_QUERY_ENABLE_BIT 8 /* Bit 8 - 0 = disabled, 1 = enabled */98509850+#define DYTC_QUERY_SUBREV_BIT 16 /* Bits 16 - 27 - sub revision */98519851+#define DYTC_QUERY_REV_BIT 28 /* Bits 28 - 31 - revision */98529852+98549853#define DYTC_CMD_GET 2 /* To get current IC function and mode */98559854#define DYTC_GET_LAPMODE_BIT 17 /* Set when in lapmode */98569855···98669855static bool has_lapsensor;98679856static bool palm_state;98689857static bool lap_state;98589858+static int dytc_version;9869985998709860static int dytc_command(int command, int *output)98719861{···98789866 }98799867 if (!acpi_evalf(dytc_handle, output, NULL, "dd", command))98809868 return -EIO;98699869+ return 0;98709870+}98719871+98729872+static int dytc_get_version(void)98739873+{98749874+ int err, output;98759875+98769876+ /* Check if we've been called before - and just return cached value */98779877+ if (dytc_version)98789878+ return dytc_version;98799879+98809880+ /* Otherwise query DYTC and extract version information */98819881+ err = dytc_command(DYTC_CMD_QUERY, &output);98829882+ /*98839883+ * If support isn't available (ENODEV) then don't return an error98849884+ * and don't create the sysfs group98859885+ */98869886+ if (err == -ENODEV)98879887+ return 
0;98889888+ /* For all other errors we can flag the failure */98899889+ if (err)98909890+ return err;98919891+98929892+ /* Check DYTC is enabled and supports mode setting */98939893+ if (output & BIT(DYTC_QUERY_ENABLE_BIT))98949894+ dytc_version = (output >> DYTC_QUERY_REV_BIT) & 0xF;98959895+98819896 return 0;98829897}98839898···100139974 if (err)100149975 return err;100159976 }1001610016- if (has_lapsensor) {99779977+99789978+ /* Check if we know the DYTC version, if we don't then get it */99799979+ if (!dytc_version) {99809980+ err = dytc_get_version();99819981+ if (err)99829982+ return err;99839983+ }99849984+ /*99859985+ * Platforms before DYTC version 5 claim to have a lap sensor, but it doesn't work, so we99869986+ * ignore them99879987+ */99889988+ if (has_lapsensor && (dytc_version >= 5)) {100179989 err = sysfs_create_file(&tpacpi_pdev->dev.kobj, &dev_attr_dytc_lapmode.attr);100189990 if (err)100199991 return err;···100499999 * DYTC Platform Profile interface1005010000 */10051100011005210052-#define DYTC_CMD_QUERY 0 /* To get DYTC status - enable/revision */1005310002#define DYTC_CMD_SET 1 /* To enable/disable IC function mode */1005410003#define DYTC_CMD_RESET 0x1ff /* To reset back to default */1005510055-1005610056-#define DYTC_QUERY_ENABLE_BIT 8 /* Bit 8 - 0 = disabled, 1 = enabled */1005710057-#define DYTC_QUERY_SUBREV_BIT 16 /* Bits 16 - 27 - sub revision */1005810058-#define DYTC_QUERY_REV_BIT 28 /* Bits 28 - 31 - revision */10059100041006010005#define DYTC_GET_FUNCTION_BIT 8 /* Bits 8-11 - function setting */1006110006#define DYTC_GET_MODE_BIT 12 /* Bits 12-15 - mode setting */···1018710142 return err;10188101431018910144 if (profile == PLATFORM_PROFILE_BALANCED) {1019010190- /* To get back to balanced mode we just issue a reset command */1019110191- err = dytc_command(DYTC_CMD_RESET, &output);1014510145+ /*1014610146+ * To get back to balanced mode we need to issue a reset command.1014710147+ * Note we still need to disable CQL mode before hand 
and re-enable1014810148+ * it afterwards, otherwise dytc_lapmode gets reset to 0 and stays1014910149+ * stuck at 0 for aprox. 30 minutes.1015010150+ */1015110151+ err = dytc_cql_command(DYTC_CMD_RESET, &output);1019210152 if (err)1019310153 goto unlock;1019410154 } else {···1026110211 if (err)1026210212 return err;10263102131021410214+ /* Check if we know the DYTC version, if we don't then get it */1021510215+ if (!dytc_version) {1021610216+ err = dytc_get_version();1021710217+ if (err)1021810218+ return err;1021910219+ }1026410220 /* Check DYTC is enabled and supports mode setting */1026510265- if (output & BIT(DYTC_QUERY_ENABLE_BIT)) {1026610266- /* Only DYTC v5.0 and later has this feature. */1026710267- int dytc_version;1022110221+ if (dytc_version >= 5) {1022210222+ dbg_printk(TPACPI_DBG_INIT,1022310223+ "DYTC version %d: thermal mode available\n", dytc_version);1022410224+ /* Create platform_profile structure and register */1022510225+ err = platform_profile_register(&dytc_profile);1022610226+ /*1022710227+ * If for some reason platform_profiles aren't enabled1022810228+ * don't quit terminally.1022910229+ */1023010230+ if (err)1023110231+ return 0;10268102321026910269- dytc_version = (output >> DYTC_QUERY_REV_BIT) & 0xF;1027010270- if (dytc_version >= 5) {1027110271- dbg_printk(TPACPI_DBG_INIT,1027210272- "DYTC version %d: thermal mode available\n", dytc_version);1027310273- /* Create platform_profile structure and register */1027410274- err = platform_profile_register(&dytc_profile);1027510275- /*1027610276- * If for some reason platform_profiles aren't enabled1027710277- * don't quit terminally.1027810278- */1027910279- if (err)1028010280- return 0;1028110281-1028210282- dytc_profile_available = true;1028310283- /* Ensure initial values are correct */1028410284- dytc_profile_refresh();1028510285- }1023310233+ dytc_profile_available = true;1023410234+ /* Ensure initial values are correct */1023510235+ dytc_profile_refresh();1028610236 }1028710237 return 
0;1028810238}
···43224322 if (hsotg->op_state == OTG_STATE_B_PERIPHERAL)43234323 goto unlock;4324432443254325- if (hsotg->params.power_down > DWC2_POWER_DOWN_PARAM_PARTIAL)43254325+ if (hsotg->params.power_down != DWC2_POWER_DOWN_PARAM_PARTIAL ||43264326+ hsotg->flags.b.port_connect_status == 0)43264327 goto skip_power_saving;4327432843284329 /*···53995398 dwc2_writel(hsotg, hprt0, HPRT0);5400539954015400 /* Wait for the HPRT0.PrtSusp register field to be set */54025402- if (dwc2_hsotg_wait_bit_set(hsotg, HPRT0, HPRT0_SUSP, 3000))54015401+ if (dwc2_hsotg_wait_bit_set(hsotg, HPRT0, HPRT0_SUSP, 5000))54035402 dev_warn(hsotg->dev, "Suspend wasn't generated\n");5404540354055404 /*
···397397 xhci->quirks |= XHCI_SPURIOUS_SUCCESS;398398 if (mtk->lpm_support)399399 xhci->quirks |= XHCI_LPM_SUPPORT;400400+401401+ /*402402+ * MTK xHCI 0.96: PSA is 1 by default even if the controller doesn't403403+ * support streams, and it's 3 when it does.404404+ */405405+ if (xhci->hci_version < 0x100 && HCC_MAX_PSA(xhci->hcc_params) == 4)406406+ xhci->quirks |= XHCI_BROKEN_STREAMS;400407}401408402409/* called during probe() after chip reset completes */···555548 if (ret)556549 goto put_usb3_hcd;557550558558- if (HCC_MAX_PSA(xhci->hcc_params) >= 4)551551+ if (HCC_MAX_PSA(xhci->hcc_params) >= 4 &&552552+ !(xhci->quirks & XHCI_BROKEN_STREAMS))559553 xhci->shared_hcd->can_do_streams = 1;560554561555 ret = usb_add_hcd(xhci->shared_hcd, irq, IRQF_SHARED);
+8-4
drivers/usb/musb/musb_core.c
···20042004 MUSB_DEVCTL_HR;20052005 switch (devctl & ~s) {20062006 case MUSB_QUIRK_B_DISCONNECT_99:20072007- musb_dbg(musb, "Poll devctl in case of suspend after disconnect\n");20082008- schedule_delayed_work(&musb->irq_work,20092009- msecs_to_jiffies(1000));20102010- break;20072007+ if (musb->quirk_retries && !musb->flush_irq_work) {20082008+ musb_dbg(musb, "Poll devctl in case of suspend after disconnect\n");20092009+ schedule_delayed_work(&musb->irq_work,20102010+ msecs_to_jiffies(1000));20112011+ musb->quirk_retries--;20122012+ break;20132013+ }20142014+ fallthrough;20112015 case MUSB_QUIRK_B_INVALID_VBUS_91:20122016 if (musb->quirk_retries && !musb->flush_irq_work) {20132017 musb_dbg(musb,
+2
drivers/usb/usbip/vhci_hcd.c
···594594 pr_err("invalid port number %d\n", wIndex);595595 goto error;596596 }597597+ if (wValue >= 32)598598+ goto error;597599 if (hcd->speed == HCD_USB3) {598600 if ((vhci_hcd->port_status[rhport] &599601 USB_SS_PORT_STAT_POWER) != 0) {
+1-1
drivers/vfio/pci/Kconfig
···42424343config VFIO_PCI_NVLINK24444 def_bool y4545- depends on VFIO_PCI && PPC_POWERNV4545+ depends on VFIO_PCI && PPC_POWERNV && SPAPR_TCE_IOMMU4646 help4747 VFIO PCI support for P9 Witherspoon machine with NVIDIA V100 GPUs
+6
drivers/vfio/vfio_iommu_type1.c
···739739 ret = vfio_lock_acct(dma, lock_acct, false);740740741741unpin_out:742742+ if (batch->size == 1 && !batch->offset) {743743+ /* May be a VM_PFNMAP pfn, which the batch can't remember. */744744+ put_pfn(pfn, dma->prot);745745+ batch->size = 0;746746+ }747747+742748 if (ret < 0) {743749 if (pinned && !rsvd) {744750 for (pfn = *pfn_base ; pinned ; pfn++, pinned--)
···50505151 SUBSYSTEM=="memory", ACTION=="add", RUN+="/bin/sh -c '[ -f /sys$devpath/state ] && echo online > /sys$devpath/state'"52525353-config XEN_BALLOON_MEMORY_HOTPLUG_LIMIT5353+config XEN_MEMORY_HOTPLUG_LIMIT5454 int "Hotplugged memory limit (in GiB) for a PV guest"5555 default 5125656 depends on XEN_HAVE_PVMMU5757- depends on XEN_BALLOON_MEMORY_HOTPLUG5757+ depends on MEMORY_HOTPLUG5858 help5959 Maximum amount of memory (in GiB) that a PV guest can be6060 expanded to when using memory hotplug.
+1-2
fs/afs/write.c
···851851 fscache_wait_on_page_write(vnode->cache, vmf->page);852852#endif853853854854- if (PageWriteback(vmf->page) &&855855- wait_on_page_bit_killable(vmf->page, PG_writeback) < 0)854854+ if (wait_on_page_writeback_killable(vmf->page))856855 return VM_FAULT_RETRY;857856858857 if (lock_page_killable(vmf->page) < 0)
+6-2
fs/block_dev.c
···275275 bio.bi_opf = dio_bio_write_op(iocb);276276 task_io_account_write(ret);277277 }278278+ if (iocb->ki_flags & IOCB_NOWAIT)279279+ bio.bi_opf |= REQ_NOWAIT;278280 if (iocb->ki_flags & IOCB_HIPRI)279281 bio_set_polled(&bio, iocb);280282···430428 bio->bi_opf = dio_bio_write_op(iocb);431429 task_io_account_write(bio->bi_iter.bi_size);432430 }431431+ if (iocb->ki_flags & IOCB_NOWAIT)432432+ bio->bi_opf |= REQ_NOWAIT;433433434434 dio->size += bio->bi_iter.bi_size;435435 pos += bio->bi_iter.bi_size;···1244124012451241 lockdep_assert_held(&bdev->bd_mutex);1246124212471247- clear_bit(GD_NEED_PART_SCAN, &bdev->bd_disk->state);12481248-12491243rescan:12501244 ret = blk_drop_partitions(bdev);12511245 if (ret)12521246 return ret;12471247+12481248+ clear_bit(GD_NEED_PART_SCAN, &disk->state);1253124912541250 /*12551251 * Historically we only set the capacity to zero for devices that
···8181 struct btrfs_dev_replace_item *ptr;8282 u64 src_devid;83838484+ if (!dev_root)8585+ return 0;8686+8487 path = btrfs_alloc_path();8588 if (!path) {8689 ret = -ENOMEM;
+17-2
fs/btrfs/disk-io.c
···23872387 } else {23882388 set_bit(BTRFS_ROOT_TRACK_DIRTY, &root->state);23892389 fs_info->dev_root = root;23902390- btrfs_init_devices_late(fs_info);23912390 }23912391+ /* Initialize fs_info for all devices in any case */23922392+ btrfs_init_devices_late(fs_info);2392239323932394 /* If IGNOREDATACSUMS is set don't bother reading the csum root. */23942395 if (!btrfs_test_opt(fs_info, IGNOREDATACSUMS)) {···30103009 }30113010 }3012301130123012+ /*30133013+ * btrfs_find_orphan_roots() is responsible for finding all the dead30143014+ * roots (with 0 refs), flag them with BTRFS_ROOT_DEAD_TREE and load30153015+ * them into the fs_info->fs_roots_radix tree. This must be done before30163016+ * calling btrfs_orphan_cleanup() on the tree root. If we don't do it30173017+ * first, then btrfs_orphan_cleanup() will delete a dead root's orphan30183018+ * item before the root's tree is deleted - this means that if we unmount30193019+ * or crash before the deletion completes, on the next mount we will not30203020+ * delete what remains of the tree because the orphan item does not30213021+ * exist anymore, which is what tells us we have a pending deletion.30223022+ */30233023+ ret = btrfs_find_orphan_roots(fs_info);30243024+ if (ret)30253025+ goto out;30263026+30133027 ret = btrfs_cleanup_fs_roots(fs_info);30143028 if (ret)30153029 goto out;···30843068 }30853069 }3086307030873087- ret = btrfs_find_orphan_roots(fs_info);30883071out:30893072 return ret;30903073}
+9-9
fs/btrfs/inode.c
···30993099 * @bio_offset: offset to the beginning of the bio (in bytes)31003100 * @page: page where is the data to be verified31013101 * @pgoff: offset inside the page31023102+ * @start: logical offset in the file31023103 *31033104 * The length of such check is always one sector size.31043105 */31053106static int check_data_csum(struct inode *inode, struct btrfs_io_bio *io_bio,31063106- u32 bio_offset, struct page *page, u32 pgoff)31073107+ u32 bio_offset, struct page *page, u32 pgoff,31083108+ u64 start)31073109{31083110 struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);31093111 SHASH_DESC_ON_STACK(shash, fs_info->csum_shash);···31323130 kunmap_atomic(kaddr);31333131 return 0;31343132zeroit:31353135- btrfs_print_data_csum_error(BTRFS_I(inode), page_offset(page) + pgoff,31363136- csum, csum_expected, io_bio->mirror_num);31333133+ btrfs_print_data_csum_error(BTRFS_I(inode), start, csum, csum_expected,31343134+ io_bio->mirror_num);31373135 if (io_bio->device)31383136 btrfs_dev_stat_inc_and_print(io_bio->device,31393137 BTRFS_DEV_STAT_CORRUPTION_ERRS);···31863184 pg_off += sectorsize, bio_offset += sectorsize) {31873185 int ret;3188318631893189- ret = check_data_csum(inode, io_bio, bio_offset, page, pg_off);31873187+ ret = check_data_csum(inode, io_bio, bio_offset, page, pg_off,31883188+ page_offset(page) + pg_off);31903189 if (ret < 0)31913190 return -EIO;31923191 }···79137910 ASSERT(pgoff < PAGE_SIZE);79147911 if (uptodate &&79157912 (!csum || !check_data_csum(inode, io_bio,79167916- bio_offset, bvec.bv_page, pgoff))) {79137913+ bio_offset, bvec.bv_page,79147914+ pgoff, start))) {79177915 clean_io_failure(fs_info, failure_tree, io_tree,79187916 start, bvec.bv_page,79197917 btrfs_ino(BTRFS_I(inode)),···81728168 bio->bi_private = dip;81738169 bio->bi_end_io = btrfs_end_dio_bio;81748170 btrfs_io_bio(bio)->logical = file_offset;81758175-81768176- WARN_ON_ONCE(write && btrfs_is_zoned(fs_info) &&81778177- fs_info->max_zone_append_size &&81788178- bio_op(bio) != 
REQ_OP_ZONE_APPEND);8179817181808172 if (bio_op(bio) == REQ_OP_ZONE_APPEND) {81818173 status = extract_ordered_extent(BTRFS_I(inode), bio,
+10-2
fs/btrfs/qgroup.c
···226226{227227 struct btrfs_qgroup_list *list;228228229229- btrfs_sysfs_del_one_qgroup(fs_info, qgroup);230229 list_del(&qgroup->dirty);231230 while (!list_empty(&qgroup->groups)) {232231 list = list_first_entry(&qgroup->groups,···242243 list_del(&list->next_member);243244 kfree(list);244245 }245245- kfree(qgroup);246246}247247248248/* must be called with qgroup_lock held */···567569 qgroup = rb_entry(n, struct btrfs_qgroup, node);568570 rb_erase(n, &fs_info->qgroup_tree);569571 __del_qgroup_rb(fs_info, qgroup);572572+ btrfs_sysfs_del_one_qgroup(fs_info, qgroup);573573+ kfree(qgroup);570574 }571575 /*572576 * We call btrfs_free_qgroup_config() when unmounting···15781578 spin_lock(&fs_info->qgroup_lock);15791579 del_qgroup_rb(fs_info, qgroupid);15801580 spin_unlock(&fs_info->qgroup_lock);15811581+15821582+ /*15831583+ * Remove the qgroup from sysfs now without holding the qgroup_lock15841584+ * spinlock, since the sysfs_remove_group() function needs to take15851585+ * the mutex kernfs_mutex through kernfs_remove_by_name_ns().15861586+ */15871587+ btrfs_sysfs_del_one_qgroup(fs_info, qgroup);15881588+ kfree(qgroup);15811589out:15821590 mutex_unlock(&fs_info->qgroup_ioctl_lock);15831591 return ret;
+3
fs/btrfs/volumes.c
···74487448 int item_size;74497449 int i, ret, slot;7450745074517451+ if (!device->fs_info->dev_root)74527452+ return 0;74537453+74517454 key.objectid = BTRFS_DEV_STATS_OBJECTID;74527455 key.type = BTRFS_PERSISTENT_ITEM_KEY;74537456 key.offset = device->devid;
+6
fs/cachefiles/bind.c
···118118 cache->mnt = path.mnt;119119 root = path.dentry;120120121121+ ret = -EINVAL;122122+ if (mnt_user_ns(path.mnt) != &init_user_ns) {123123+ pr_warn("File cache on idmapped mounts not supported");124124+ goto error_unsupported;125125+ }126126+121127 /* check parameters */122128 ret = -EOPNOTSUPP;123129 if (d_is_negative(root) ||
···5858#define SMB2_HMACSHA256_SIZE (32)5959#define SMB2_CMACAES_SIZE (16)6060#define SMB3_SIGNKEY_SIZE (16)6161+#define SMB3_GCM128_CRYPTKEY_SIZE (16)6162#define SMB3_GCM256_CRYPTKEY_SIZE (32)62636364/* Maximum buffer size value we can send with 1 credit */
+2-2
fs/cifs/smb2misc.c
···754754 }755755 }756756 spin_unlock(&cifs_tcp_ses_lock);757757- cifs_dbg(FYI, "Can not process oplock break for non-existent connection\n");758758- return false;757757+ cifs_dbg(FYI, "No file id matched, oplock break ignored\n");758758+ return true;759759}760760761761void
+20-7
fs/cifs/smb2ops.c
···20382038{20392039 int rc;20402040 unsigned int ret_data_len;20412041+ struct inode *inode;20412042 struct duplicate_extents_to_file dup_ext_buf;20422043 struct cifs_tcon *tcon = tlink_tcon(trgtfile->tlink);20432044···20552054 cifs_dbg(FYI, "Duplicate extents: src off %lld dst off %lld len %lld\n",20562055 src_off, dest_off, len);2057205620582058- rc = smb2_set_file_size(xid, tcon, trgtfile, dest_off + len, false);20592059- if (rc)20602060- goto duplicate_extents_out;20572057+ inode = d_inode(trgtfile->dentry);20582058+ if (inode->i_size < dest_off + len) {20592059+ rc = smb2_set_file_size(xid, tcon, trgtfile, dest_off + len, false);20602060+ if (rc)20612061+ goto duplicate_extents_out;2061206220632063+ /*20642064+ * Although also could set plausible allocation size (i_blocks)20652065+ * here in addition to setting the file size, in reflink20662066+ * it is likely that the target file is sparse. Its allocation20672067+ * size will be queried on next revalidate, but it is important20682068+ * to make sure that file's cached size is updated immediately20692069+ */20702070+ cifs_setsize(inode, dest_off + len);20712071+ }20622072 rc = SMB2_ioctl(xid, tcon, trgtfile->fid.persistent_fid,20632073 trgtfile->fid.volatile_fid,20642074 FSCTL_DUPLICATE_EXTENTS_TO_FILE,···41704158 if (ses->Suid == ses_id) {41714159 ses_enc_key = enc ? ses->smb3encryptionkey :41724160 ses->smb3decryptionkey;41734173- memcpy(key, ses_enc_key, SMB3_SIGN_KEY_SIZE);41614161+ memcpy(key, ses_enc_key, SMB3_ENC_DEC_KEY_SIZE);41744162 spin_unlock(&cifs_tcp_ses_lock);41754163 return 0;41764164 }···41974185 int rc = 0;41984186 struct scatterlist *sg;41994187 u8 sign[SMB2_SIGNATURE_SIZE] = {};42004200- u8 key[SMB3_SIGN_KEY_SIZE];41884188+ u8 key[SMB3_ENC_DEC_KEY_SIZE];42014189 struct aead_request *req;42024190 char *iv;42034191 unsigned int iv_len;···42214209 tfm = enc ? 
server->secmech.ccmaesencrypt :42224210 server->secmech.ccmaesdecrypt;4223421142244224- if (server->cipher_type == SMB2_ENCRYPTION_AES256_GCM)42124212+ if ((server->cipher_type == SMB2_ENCRYPTION_AES256_CCM) ||42134213+ (server->cipher_type == SMB2_ENCRYPTION_AES256_GCM))42254214 rc = crypto_aead_setkey(tfm, key, SMB3_GCM256_CRYPTKEY_SIZE);42264215 else42274227- rc = crypto_aead_setkey(tfm, key, SMB3_SIGN_KEY_SIZE);42164216+ rc = crypto_aead_setkey(tfm, key, SMB3_GCM128_CRYPTKEY_SIZE);4228421742294218 if (rc) {42304219 cifs_server_dbg(VFS, "%s: Failed to set aead key %d\n", __func__, rc);
···233233234234struct acpi_device_pnp {235235 acpi_bus_id bus_id; /* Object name */236236+ int instance_no; /* Instance number of this object */236237 struct acpi_pnp_type type; /* ID type */237238 acpi_bus_address bus_address; /* _ADR */238239 char *unique_id; /* _UID */
···5656 * COMMAND_RECONFIG_FLAG_PARTIAL:5757 * Set to FPGA configuration type (full or partial).5858 */5959-#define COMMAND_RECONFIG_FLAG_PARTIAL 15959+#define COMMAND_RECONFIG_FLAG_PARTIAL 060606161/*6262 * Timeout settings for service clients:
···460460/*461461 * Set the allocation direction to bottom-up or top-down.462462 */463463-static inline __init void memblock_set_bottom_up(bool enable)463463+static inline __init_memblock void memblock_set_bottom_up(bool enable)464464{465465 memblock.bottom_up = enable;466466}···470470 * if this is true, that said, memblock will allocate memory471471 * in bottom-up direction.472472 */473473-static inline __init bool memblock_bottom_up(void)473473+static inline __init_memblock bool memblock_bottom_up(void)474474{475475 return memblock.bottom_up;476476}
···1461146114621462#if defined(CONFIG_KASAN_SW_TAGS) || defined(CONFIG_KASAN_HW_TAGS)1463146314641464+/*14651465+ * KASAN per-page tags are stored xor'ed with 0xff. This allows to avoid14661466+ * setting tags for all pages to native kernel tag value 0xff, as the default14671467+ * value 0x00 maps to 0xff.14681468+ */14691469+14641470static inline u8 page_kasan_tag(const struct page *page)14651471{14661466- if (kasan_enabled())14671467- return (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;14681468- return 0xff;14721472+ u8 tag = 0xff;14731473+14741474+ if (kasan_enabled()) {14751475+ tag = (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;14761476+ tag ^= 0xff;14771477+ }14781478+14791479+ return tag;14691480}1470148114711482static inline void page_kasan_tag_set(struct page *page, u8 tag)14721483{14731484 if (kasan_enabled()) {14851485+ tag ^= 0xff;14741486 page->flags &= ~(KASAN_TAG_MASK << KASAN_TAG_PGSHIFT);14751487 page->flags |= (tag & KASAN_TAG_MASK) << KASAN_TAG_PGSHIFT;14761488 }
+5-5
include/linux/mmu_notifier.h
···169169 * the last refcount is dropped.170170 *171171 * If blockable argument is set to false then the callback cannot172172- * sleep and has to return with -EAGAIN. 0 should be returned173173- * otherwise. Please note that if invalidate_range_start approves174174- * a non-blocking behavior then the same applies to175175- * invalidate_range_end.176176- *172172+ * sleep and has to return with -EAGAIN if sleeping would be required.173173+ * 0 should be returned otherwise. Please note that notifiers that can174174+ * fail invalidate_range_start are not allowed to implement175175+ * invalidate_range_end, as there is no mechanism for informing the176176+ * notifier that its start failed.177177 */178178 int (*invalidate_range_start)(struct mmu_notifier *subscription,179179 const struct mmu_notifier_range *range);
···360360 NAPI_STATE_IN_BUSY_POLL, /* sk_busy_loop() owns this NAPI */361361 NAPI_STATE_PREFER_BUSY_POLL, /* prefer busy-polling over softirq processing*/362362 NAPI_STATE_THREADED, /* The poll is performed inside its own thread*/363363+ NAPI_STATE_SCHED_THREADED, /* Napi is currently scheduled in threaded mode */363364};364365365366enum {···373372 NAPIF_STATE_IN_BUSY_POLL = BIT(NAPI_STATE_IN_BUSY_POLL),374373 NAPIF_STATE_PREFER_BUSY_POLL = BIT(NAPI_STATE_PREFER_BUSY_POLL),375374 NAPIF_STATE_THREADED = BIT(NAPI_STATE_THREADED),375375+ NAPIF_STATE_SCHED_THREADED = BIT(NAPI_STATE_SCHED_THREADED),376376};377377378378enum gro_result {
+2-5
include/linux/netfilter/x_tables.h
···227227 unsigned int valid_hooks;228228229229 /* Man behind the curtain... */230230- struct xt_table_info __rcu *private;230230+ struct xt_table_info *private;231231232232 /* Set this to THIS_MODULE if you are a module, otherwise NULL */233233 struct module *me;···376376 * since addend is most likely 1377377 */378378 __this_cpu_add(xt_recseq.sequence, addend);379379- smp_wmb();379379+ smp_mb();380380381381 return addend;382382}···447447}448448449449struct nf_hook_ops *xt_hook_ops_alloc(const struct xt_table *, nf_hookfn *);450450-451451-struct xt_table_info452452-*xt_table_get_private_protected(const struct xt_table *table);453450454451#ifdef CONFIG_COMPAT455452#include <net/compat.h>
+1-1
include/linux/pagemap.h
···559559 return pgoff;560560}561561562562-/* This has the same layout as wait_bit_key - see fs/cachefiles/rdwr.c */563562struct wait_page_key {564563 struct page *page;565564 int bit_nr;···682683683684int put_and_wait_on_page_locked(struct page *page, int state);684685void wait_on_page_writeback(struct page *page);686686+int wait_on_page_writeback_killable(struct page *page);685687extern void end_page_writeback(struct page *page);686688void wait_for_stable_page(struct page *page);687689
···229229 *230230 * This structure is used either directly or via the XA_LIMIT() macro231231 * to communicate the range of IDs that are valid for allocation.232232- * Two common ranges are predefined for you:232232+ * Three common ranges are predefined for you:233233 * * xa_limit_32b - [0 - UINT_MAX]234234 * * xa_limit_31b - [0 - INT_MAX]235235+ * * xa_limit_16b - [0 - USHRT_MAX]235236 */236237struct xa_limit {237238 u32 max;···243242244243#define xa_limit_32b XA_LIMIT(0, UINT_MAX)245244#define xa_limit_31b XA_LIMIT(0, INT_MAX)245245+#define xa_limit_16b XA_LIMIT(0, USHRT_MAX)246246247247typedef unsigned __bitwise xa_mark_t;248248#define XA_MARK_0 ((__force xa_mark_t)0U)
···410410int fib6_check_nexthop(struct nexthop *nh, struct fib6_config *cfg,411411 struct netlink_ext_ack *extack);412412413413+/* Caller should either hold rcu_read_lock(), or RTNL. */413414static inline struct fib6_nh *nexthop_fib6_nh(struct nexthop *nh)414415{415416 struct nh_info *nhi;···425424 }426425427426 nhi = rcu_dereference_rtnl(nh->nh_info);427427+ if (nhi->family == AF_INET6)428428+ return &nhi->fib6_nh;429429+430430+ return NULL;431431+}432432+433433+/* Variant of nexthop_fib6_nh().434434+ * Caller should either hold rcu_read_lock_bh(), or RTNL.435435+ */436436+static inline struct fib6_nh *nexthop_fib6_nh_bh(struct nexthop *nh)437437+{438438+ struct nh_info *nhi;439439+440440+ if (nh->is_group) {441441+ struct nh_group *nh_grp;442442+443443+ nh_grp = rcu_dereference_bh_rtnl(nh->nh_grp);444444+ nh = nexthop_mpath_select(nh_grp, 0);445445+ if (!nh)446446+ return NULL;447447+ }448448+449449+ nhi = rcu_dereference_bh_rtnl(nh->nh_info);428450 if (nhi->family == AF_INET6)429451 return &nhi->fib6_nh;430452
+10-2
include/net/red.h
···168168 v->qcount = -1;169169}170170171171-static inline bool red_check_params(u32 qth_min, u32 qth_max, u8 Wlog, u8 Scell_log)171171+static inline bool red_check_params(u32 qth_min, u32 qth_max, u8 Wlog,172172+ u8 Scell_log, u8 *stab)172173{173174 if (fls(qth_min) + Wlog > 32)174175 return false;···179178 return false;180179 if (qth_max < qth_min)181180 return false;181181+ if (stab) {182182+ int i;183183+184184+ for (i = 0; i < RED_STAB_SIZE; i++)185185+ if (stab[i] >= 32)186186+ return false;187187+ }182188 return true;183189}184190···295287 int shift;296288297289 /*298298- * The problem: ideally, average length queue recalcultion should290290+ * The problem: ideally, average length queue recalculation should299291 * be done over constant clock intervals. This is too expensive, so300292 * that the calculation is driven by outgoing packets.301293 * When the queue is idle we have to model this clock by hand.
+2
include/net/rtnetlink.h
···3333 *3434 * @list: Used internally3535 * @kind: Identifier3636+ * @netns_refund: Physical device, move to init_net on netns exit3637 * @maxtype: Highest device specific netlink attribute number3738 * @policy: Netlink policy for device specific attribute validation3839 * @validate: Optional validation function for netlink/changelink parameters···6564 size_t priv_size;6665 void (*setup)(struct net_device *dev);67666767+ bool netns_refund;6868 unsigned int maxtype;6969 const struct nla_policy *policy;7070 int (*validate)(struct nlattr *tb[],
···22#ifndef _UAPI__LINUX_BLKPG_H33#define _UAPI__LINUX_BLKPG_H4455-/*66- * Partition table and disk geometry handling77- *88- * A single ioctl with lots of subfunctions:99- *1010- * Device number stuff:1111- * get_whole_disk() (given the device number of a partition,1212- * find the device number of the encompassing disk)1313- * get_all_partitions() (given the device number of a disk, return the1414- * device numbers of all its known partitions)1515- *1616- * Partition stuff:1717- * add_partition()1818- * delete_partition()1919- * test_partition_in_use() (also for test_disk_in_use)2020- *2121- * Geometry stuff:2222- * get_geometry()2323- * set_geometry()2424- * get_bios_drivedata()2525- *2626- * For today, only the partition stuff - aeb, 9905152727- */285#include <linux/compiler.h>296#include <linux/ioctl.h>307···2952 long long start; /* starting offset in bytes */3053 long long length; /* length in bytes */3154 int pno; /* partition number */3232- char devname[BLKPG_DEVNAMELTH]; /* partition name, like sda5 or c0d1p2,3333- to be used in kernel messages */3434- char volname[BLKPG_VOLNAMELTH]; /* volume label */5555+ char devname[BLKPG_DEVNAMELTH]; /* unused / ignored */5656+ char volname[BLKPG_VOLNAMELTH]; /* unused / ignore */3557};36583759#endif /* _UAPI__LINUX_BLKPG_H */
+11-5
include/uapi/linux/bpf.h
···38503850 *38513851 * long bpf_check_mtu(void *ctx, u32 ifindex, u32 *mtu_len, s32 len_diff, u64 flags)38523852 * Description38533853- * Check ctx packet size against exceeding MTU of net device (based38533853+ * Check packet size against exceeding MTU of net device (based38543854 * on *ifindex*). This helper will likely be used in combination38553855 * with helpers that adjust/change the packet size.38563856 *···38663866 * Specifying *ifindex* zero means the MTU check is performed38673867 * against the current net device. This is practical if this isn't38683868 * used prior to redirect.38693869+ *38703870+ * On input *mtu_len* must be a valid pointer, else verifier will38713871+ * reject BPF program. If the value *mtu_len* is initialized to38723872+ * zero then the ctx packet size is use. When value *mtu_len* is38733873+ * provided as input this specify the L3 length that the MTU check38743874+ * is done against. Remember XDP and TC length operate at L2, but38753875+ * this value is L3 as this correlate to MTU and IP-header tot_len38763876+ * values which are L3 (similar behavior as bpf_fib_lookup).38693877 *38703878 * The Linux kernel route table can configure MTUs on a more38713879 * specific per route level, which is not provided by this helper.···38993891 *39003892 * On return *mtu_len* pointer contains the MTU value of the net39013893 * device. Remember the net device configured MTU is the L3 size,39023902- * which is returned here and XDP and TX length operate at L2.38943894+ * which is returned here and XDP and TC length operate at L2.39033895 * Helper take this into account for you, but remember when using39043904- * MTU value in your BPF-code. On input *mtu_len* must be a valid39053905- * pointer and be initialized (to zero), else verifier will reject39063906- * BPF program.38963896+ * MTU value in your BPF-code.39073897 *39083898 * Return39093899 * * 0 on success, and populate MTU value in *mtu_len* pointer.
···5757 PAGE_SIZE, true, ksym->name);5858}59596060-static void bpf_trampoline_ksym_add(struct bpf_trampoline *tr)6161-{6262- struct bpf_ksym *ksym = &tr->ksym;6363-6464- snprintf(ksym->name, KSYM_NAME_LEN, "bpf_trampoline_%llu", tr->key);6565- bpf_image_ksym_add(tr->image, ksym);6666-}6767-6860static struct bpf_trampoline *bpf_trampoline_lookup(u64 key)6961{7062 struct bpf_trampoline *tr;7163 struct hlist_head *head;7272- void *image;7364 int i;74657566 mutex_lock(&trampoline_mutex);···7584 if (!tr)7685 goto out;77867878- /* is_root was checked earlier. No need for bpf_jit_charge_modmem() */7979- image = bpf_jit_alloc_exec_page();8080- if (!image) {8181- kfree(tr);8282- tr = NULL;8383- goto out;8484- }8585-8687 tr->key = key;8788 INIT_HLIST_NODE(&tr->hlist);8889 hlist_add_head(&tr->hlist, head);···8299 mutex_init(&tr->mutex);83100 for (i = 0; i < BPF_TRAMP_MAX; i++)84101 INIT_HLIST_HEAD(&tr->progs_hlist[i]);8585- tr->image = image;8686- INIT_LIST_HEAD_RCU(&tr->ksym.lnode);8787- bpf_trampoline_ksym_add(tr);88102out:89103 mutex_unlock(&trampoline_mutex);90104 return tr;···165185 return tprogs;166186}167187188188+static void __bpf_tramp_image_put_deferred(struct work_struct *work)189189+{190190+ struct bpf_tramp_image *im;191191+192192+ im = container_of(work, struct bpf_tramp_image, work);193193+ bpf_image_ksym_del(&im->ksym);194194+ bpf_jit_free_exec(im->image);195195+ bpf_jit_uncharge_modmem(1);196196+ percpu_ref_exit(&im->pcref);197197+ kfree_rcu(im, rcu);198198+}199199+200200+/* callback, fexit step 3 or fentry step 2 */201201+static void __bpf_tramp_image_put_rcu(struct rcu_head *rcu)202202+{203203+ struct bpf_tramp_image *im;204204+205205+ im = container_of(rcu, struct bpf_tramp_image, rcu);206206+ INIT_WORK(&im->work, __bpf_tramp_image_put_deferred);207207+ schedule_work(&im->work);208208+}209209+210210+/* callback, fexit step 2. Called after percpu_ref_kill confirms. 
*/211211+static void __bpf_tramp_image_release(struct percpu_ref *pcref)212212+{213213+ struct bpf_tramp_image *im;214214+215215+ im = container_of(pcref, struct bpf_tramp_image, pcref);216216+ call_rcu_tasks(&im->rcu, __bpf_tramp_image_put_rcu);217217+}218218+219219+/* callback, fexit or fentry step 1 */220220+static void __bpf_tramp_image_put_rcu_tasks(struct rcu_head *rcu)221221+{222222+ struct bpf_tramp_image *im;223223+224224+ im = container_of(rcu, struct bpf_tramp_image, rcu);225225+ if (im->ip_after_call)226226+ /* the case of fmod_ret/fexit trampoline and CONFIG_PREEMPTION=y */227227+ percpu_ref_kill(&im->pcref);228228+ else229229+ /* the case of fentry trampoline */230230+ call_rcu_tasks(&im->rcu, __bpf_tramp_image_put_rcu);231231+}232232+233233+static void bpf_tramp_image_put(struct bpf_tramp_image *im)234234+{235235+ /* The trampoline image that calls original function is using:236236+ * rcu_read_lock_trace to protect sleepable bpf progs237237+ * rcu_read_lock to protect normal bpf progs238238+ * percpu_ref to protect trampoline itself239239+ * rcu tasks to protect trampoline asm not covered by percpu_ref240240+ * (which are few asm insns before __bpf_tramp_enter and241241+ * after __bpf_tramp_exit)242242+ *243243+ * The trampoline is unreachable before bpf_tramp_image_put().244244+ *245245+ * First, patch the trampoline to avoid calling into fexit progs.246246+ * The progs will be freed even if the original function is still247247+ * executing or sleeping.248248+ * In case of CONFIG_PREEMPT=y use call_rcu_tasks() to wait on249249+ * first few asm instructions to execute and call into250250+ * __bpf_tramp_enter->percpu_ref_get.251251+ * Then use percpu_ref_kill to wait for the trampoline and the original252252+ * function to finish.253253+ * Then use call_rcu_tasks() to make sure few asm insns in254254+ * the trampoline epilogue are done as well.255255+ *256256+ * In !PREEMPT case the task that got interrupted in the first asm257257+ * insns won't go 
through an RCU quiescent state which the258258+ * percpu_ref_kill will be waiting for. Hence the first259259+ * call_rcu_tasks() is not necessary.260260+ */261261+ if (im->ip_after_call) {262262+ int err = bpf_arch_text_poke(im->ip_after_call, BPF_MOD_JUMP,263263+ NULL, im->ip_epilogue);264264+ WARN_ON(err);265265+ if (IS_ENABLED(CONFIG_PREEMPTION))266266+ call_rcu_tasks(&im->rcu, __bpf_tramp_image_put_rcu_tasks);267267+ else268268+ percpu_ref_kill(&im->pcref);269269+ return;270270+ }271271+272272+ /* The trampoline without fexit and fmod_ret progs doesn't call original273273+ * function and doesn't use percpu_ref.274274+ * Use call_rcu_tasks_trace() to wait for sleepable progs to finish.275275+ * Then use call_rcu_tasks() to wait for the rest of trampoline asm276276+ * and normal progs.277277+ */278278+ call_rcu_tasks_trace(&im->rcu, __bpf_tramp_image_put_rcu_tasks);279279+}280280+281281+static struct bpf_tramp_image *bpf_tramp_image_alloc(u64 key, u32 idx)282282+{283283+ struct bpf_tramp_image *im;284284+ struct bpf_ksym *ksym;285285+ void *image;286286+ int err = -ENOMEM;287287+288288+ im = kzalloc(sizeof(*im), GFP_KERNEL);289289+ if (!im)290290+ goto out;291291+292292+ err = bpf_jit_charge_modmem(1);293293+ if (err)294294+ goto out_free_im;295295+296296+ err = -ENOMEM;297297+ im->image = image = bpf_jit_alloc_exec_page();298298+ if (!image)299299+ goto out_uncharge;300300+301301+ err = percpu_ref_init(&im->pcref, __bpf_tramp_image_release, 0, GFP_KERNEL);302302+ if (err)303303+ goto out_free_image;304304+305305+ ksym = &im->ksym;306306+ INIT_LIST_HEAD_RCU(&ksym->lnode);307307+ snprintf(ksym->name, KSYM_NAME_LEN, "bpf_trampoline_%llu_%u", key, idx);308308+ bpf_image_ksym_add(image, ksym);309309+ return im;310310+311311+out_free_image:312312+ bpf_jit_free_exec(im->image);313313+out_uncharge:314314+ bpf_jit_uncharge_modmem(1);315315+out_free_im:316316+ kfree(im);317317+out:318318+ return ERR_PTR(err);319319+}320320+168321static int bpf_trampoline_update(struct 
bpf_trampoline *tr)169322{170170- void *old_image = tr->image + ((tr->selector + 1) & 1) * PAGE_SIZE/2;171171- void *new_image = tr->image + (tr->selector & 1) * PAGE_SIZE/2;323323+ struct bpf_tramp_image *im;172324 struct bpf_tramp_progs *tprogs;173325 u32 flags = BPF_TRAMP_F_RESTORE_REGS;174326 int err, total;···310198 return PTR_ERR(tprogs);311199312200 if (total == 0) {313313- err = unregister_fentry(tr, old_image);201201+ err = unregister_fentry(tr, tr->cur_image->image);202202+ bpf_tramp_image_put(tr->cur_image);203203+ tr->cur_image = NULL;314204 tr->selector = 0;205205+ goto out;206206+ }207207+208208+ im = bpf_tramp_image_alloc(tr->key, tr->selector);209209+ if (IS_ERR(im)) {210210+ err = PTR_ERR(im);315211 goto out;316212 }317213···327207 tprogs[BPF_TRAMP_MODIFY_RETURN].nr_progs)328208 flags = BPF_TRAMP_F_CALL_ORIG | BPF_TRAMP_F_SKIP_FRAME;329209330330- /* Though the second half of trampoline page is unused a task could be331331- * preempted in the middle of the first half of trampoline and two332332- * updates to trampoline would change the code from underneath the333333- * preempted task. 
Hence wait for tasks to voluntarily schedule or go334334- * to userspace.335335- * The same trampoline can hold both sleepable and non-sleepable progs.336336- * synchronize_rcu_tasks_trace() is needed to make sure all sleepable337337- * programs finish executing.338338- * Wait for these two grace periods together.339339- */340340- synchronize_rcu_mult(call_rcu_tasks, call_rcu_tasks_trace);341341-342342- err = arch_prepare_bpf_trampoline(new_image, new_image + PAGE_SIZE / 2,210210+ err = arch_prepare_bpf_trampoline(im, im->image, im->image + PAGE_SIZE,343211 &tr->func.model, flags, tprogs,344212 tr->func.addr);345213 if (err < 0)346214 goto out;347215348348- if (tr->selector)216216+ WARN_ON(tr->cur_image && tr->selector == 0);217217+ WARN_ON(!tr->cur_image && tr->selector);218218+ if (tr->cur_image)349219 /* progs already running at this address */350350- err = modify_fentry(tr, old_image, new_image);220220+ err = modify_fentry(tr, tr->cur_image->image, im->image);351221 else352222 /* first time registering */353353- err = register_fentry(tr, new_image);223223+ err = register_fentry(tr, im->image);354224 if (err)355225 goto out;226226+ if (tr->cur_image)227227+ bpf_tramp_image_put(tr->cur_image);228228+ tr->cur_image = im;356229 tr->selector++;357230out:358231 kfree(tprogs);···477364 goto out;478365 if (WARN_ON_ONCE(!hlist_empty(&tr->progs_hlist[BPF_TRAMP_FEXIT])))479366 goto out;480480- bpf_image_ksym_del(&tr->ksym);481481- /* This code will be executed when all bpf progs (both sleepable and482482- * non-sleepable) went through483483- * bpf_prog_put()->call_rcu[_tasks_trace]()->bpf_prog_free_deferred().484484- * Hence no need for another synchronize_rcu_tasks_trace() here,485485- * but synchronize_rcu_tasks() is still needed, since trampoline486486- * may not have had any sleepable programs and we need to wait487487- * for tasks to get out of trampoline code before freeing it.367367+ /* This code will be executed even when the last bpf_tramp_image368368+ * is 
alive. All progs are detached from the trampoline and the369369+ * trampoline image is patched with jmp into epilogue to skip370370+ * fexit progs. The fentry-only trampoline will be freed via371371+ * multiple rcu callbacks.488372 */489489- synchronize_rcu_tasks();490490- bpf_jit_free_exec(tr->image);491373 hlist_del(&tr->hlist);492374 kfree(tr);493375out:···586478 rcu_read_unlock_trace();587479}588480481481+void notrace __bpf_tramp_enter(struct bpf_tramp_image *tr)482482+{483483+ percpu_ref_get(&tr->pcref);484484+}485485+486486+void notrace __bpf_tramp_exit(struct bpf_tramp_image *tr)487487+{488488+ percpu_ref_put(&tr->pcref);489489+}490490+589491int __weak590590-arch_prepare_bpf_trampoline(void *image, void *image_end,492492+arch_prepare_bpf_trampoline(struct bpf_tramp_image *tr, void *image, void *image_end,591493 const struct btf_func_model *m, u32 flags,592494 struct bpf_tramp_progs *tprogs,593495 void *orig_call)
+25-12
kernel/bpf/verifier.c
···58615861{58625862 bool mask_to_left = (opcode == BPF_ADD && off_is_neg) ||58635863 (opcode == BPF_SUB && !off_is_neg);58645864- u32 off;58645864+ u32 off, max;5865586558665866 switch (ptr_reg->type) {58675867 case PTR_TO_STACK:58685868+ /* Offset 0 is out-of-bounds, but acceptable start for the58695869+ * left direction, see BPF_REG_FP.58705870+ */58715871+ max = MAX_BPF_STACK + mask_to_left;58685872 /* Indirect variable offset stack access is prohibited in58695873 * unprivileged mode so it's not handled here.58705874 */···58765872 if (mask_to_left)58775873 *ptr_limit = MAX_BPF_STACK + off;58785874 else58795879- *ptr_limit = -off;58805880- return 0;58755875+ *ptr_limit = -off - 1;58765876+ return *ptr_limit >= max ? -ERANGE : 0;58815877 case PTR_TO_MAP_VALUE:58785878+ max = ptr_reg->map_ptr->value_size;58825879 if (mask_to_left) {58835880 *ptr_limit = ptr_reg->umax_value + ptr_reg->off;58845881 } else {58855882 off = ptr_reg->smin_value + ptr_reg->off;58865886- *ptr_limit = ptr_reg->map_ptr->value_size - off;58835883+ *ptr_limit = ptr_reg->map_ptr->value_size - off - 1;58875884 }58885888- return 0;58855885+ return *ptr_limit >= max ? 
-ERANGE : 0;58895886 default:58905887 return -EINVAL;58915888 }···59395934 u32 alu_state, alu_limit;59405935 struct bpf_reg_state tmp;59415936 bool ret;59375937+ int err;5942593859435939 if (can_skip_alu_sanitation(env, insn))59445940 return 0;···59555949 alu_state |= ptr_is_dst_reg ?59565950 BPF_ALU_SANITIZE_SRC : BPF_ALU_SANITIZE_DST;5957595159585958- if (retrieve_ptr_limit(ptr_reg, &alu_limit, opcode, off_is_neg))59595959- return 0;59605960- if (update_alu_sanitation_state(aux, alu_state, alu_limit))59615961- return -EACCES;59525952+ err = retrieve_ptr_limit(ptr_reg, &alu_limit, opcode, off_is_neg);59535953+ if (err < 0)59545954+ return err;59555955+59565956+ err = update_alu_sanitation_state(aux, alu_state, alu_limit);59575957+ if (err < 0)59585958+ return err;59625959do_sim:59635960 /* Simulate and find potential out-of-bounds access under59645961 * speculative execution from truncation as a result of···61126103 case BPF_ADD:61136104 ret = sanitize_ptr_alu(env, insn, ptr_reg, dst_reg, smin_val < 0);61146105 if (ret < 0) {61156115- verbose(env, "R%d tried to add from different maps or paths\n", dst);61066106+ verbose(env, "R%d tried to add from different maps, paths, or prohibited types\n", dst);61166107 return ret;61176108 }61186109 /* We can take a fixed offset as long as it doesn't overflow···61676158 case BPF_SUB:61686159 ret = sanitize_ptr_alu(env, insn, ptr_reg, dst_reg, smin_val < 0);61696160 if (ret < 0) {61706170- verbose(env, "R%d tried to sub from different maps or paths\n", dst);61616161+ verbose(env, "R%d tried to sub from different maps, paths, or prohibited types\n", dst);61716162 return ret;61726163 }61736164 if (dst_reg == off_reg) {···90659056 btf = btf_get_by_fd(attr->prog_btf_fd);90669057 if (IS_ERR(btf))90679058 return PTR_ERR(btf);90599059+ if (btf_is_kernel(btf)) {90609060+ btf_put(btf);90619061+ return -EACCES;90629062+ }90689063 env->prog->aux->btf = btf;9069906490709065 err = check_btf_func(env, attr, uattr);···1167311660 off_reg = 
issrc ? insn->src_reg : insn->dst_reg;1167411661 if (isneg)1167511662 *patch++ = BPF_ALU64_IMM(BPF_MUL, off_reg, -1);1167611676- *patch++ = BPF_MOV32_IMM(BPF_REG_AX, aux->alu_limit - 1);1166311663+ *patch++ = BPF_MOV32_IMM(BPF_REG_AX, aux->alu_limit);1167711664 *patch++ = BPF_ALU64_REG(BPF_SUB, BPF_REG_AX, off_reg);1167811665 *patch++ = BPF_ALU64_REG(BPF_OR, BPF_REG_AX, off_reg);1167911666 *patch++ = BPF_ALU64_IMM(BPF_NEG, BPF_REG_AX, 0);
+8-8
kernel/fork.c
···19481948 p = dup_task_struct(current, node);19491949 if (!p)19501950 goto fork_out;19511951- if (args->io_thread)19511951+ if (args->io_thread) {19521952+ /*19531953+ * Mark us an IO worker, and block any signal that isn't19541954+ * fatal or STOP19551955+ */19521956 p->flags |= PF_IO_WORKER;19571957+ siginitsetinv(&p->blocked, sigmask(SIGKILL)|sigmask(SIGSTOP));19581958+ }1953195919541960 /*19551961 * This _must_ happen before we call free_task(), i.e. before we jump···24442438 .stack_size = (unsigned long)arg,24452439 .io_thread = 1,24462440 };24472447- struct task_struct *tsk;2448244124492449- tsk = copy_process(NULL, 0, node, &args);24502450- if (!IS_ERR(tsk)) {24512451- sigfillset(&tsk->blocked);24522452- sigdelsetmask(&tsk->blocked, sigmask(SIGKILL));24532453- }24542454- return tsk;24422442+ return copy_process(NULL, 0, node, &args);24552443}2456244424572445/*
+1-1
kernel/freezer.c
···134134 return false;135135 }136136137137- if (!(p->flags & (PF_KTHREAD | PF_IO_WORKER)))137137+ if (!(p->flags & PF_KTHREAD))138138 fake_signal_wake_up(p);139139 else140140 wake_up_state(p, TASK_INTERRUPTIBLE);
···139139 struct umd_info *umd_info = info->data;140140141141 /* cleanup if umh_setup() was successful but exec failed */142142- if (info->retval) {143143- fput(umd_info->pipe_to_umh);144144- fput(umd_info->pipe_from_umh);145145- put_pid(umd_info->tgid);146146- umd_info->tgid = NULL;147147- }142142+ if (info->retval)143143+ umd_cleanup_helper(umd_info);148144}145145+146146+/**147147+ * umd_cleanup_helper - release the resources which were allocated in umd_setup148148+ * @info: information about usermode driver149149+ */150150+void umd_cleanup_helper(struct umd_info *info)151151+{152152+ fput(info->pipe_to_umh);153153+ fput(info->pipe_from_umh);154154+ put_pid(info->tgid);155155+ info->tgid = NULL;156156+}157157+EXPORT_SYMBOL_GPL(umd_cleanup_helper);149158150159/**151160 * fork_usermode_driver - fork a usermode driver
+1
lib/math/div64.c
···232232233233 return res + div64_u64(a * b, c);234234}235235+EXPORT_SYMBOL(mul_u64_u64_div_u64);235236#endif
+14-12
lib/test_xarray.c
···1530153015311531#ifdef CONFIG_XARRAY_MULTI15321532static void check_split_1(struct xarray *xa, unsigned long index,15331533- unsigned int order)15331533+ unsigned int order, unsigned int new_order)15341534{15351535- XA_STATE(xas, xa, index);15361536- void *entry;15371537- unsigned int i = 0;15351535+ XA_STATE_ORDER(xas, xa, index, new_order);15361536+ unsigned int i;1538153715391538 xa_store_order(xa, index, order, xa, GFP_KERNEL);1540153915411540 xas_split_alloc(&xas, xa, order, GFP_KERNEL);15421541 xas_lock(&xas);15431542 xas_split(&xas, xa, order);15431543+ for (i = 0; i < (1 << order); i += (1 << new_order))15441544+ __xa_store(xa, index + i, xa_mk_index(index + i), 0);15441545 xas_unlock(&xas);1545154615461546- xa_for_each(xa, index, entry) {15471547- XA_BUG_ON(xa, entry != xa);15481548- i++;15471547+ for (i = 0; i < (1 << order); i++) {15481548+ unsigned int val = index + (i & ~((1 << new_order) - 1));15491549+ XA_BUG_ON(xa, xa_load(xa, index + i) != xa_mk_index(val));15491550 }15501550- XA_BUG_ON(xa, i != 1 << order);1551155115521552 xa_set_mark(xa, index, XA_MARK_0);15531553 XA_BUG_ON(xa, !xa_get_mark(xa, index, XA_MARK_0));···1557155715581558static noinline void check_split(struct xarray *xa)15591559{15601560- unsigned int order;15601560+ unsigned int order, new_order;1561156115621562 XA_BUG_ON(xa, !xa_empty(xa));1563156315641564 for (order = 1; order < 2 * XA_CHUNK_SHIFT; order++) {15651565- check_split_1(xa, 0, order);15661566- check_split_1(xa, 1UL << order, order);15671567- check_split_1(xa, 3UL << order, order);15651565+ for (new_order = 0; new_order < order; new_order++) {15661566+ check_split_1(xa, 0, order, new_order);15671567+ check_split_1(xa, 1UL << order, order, new_order);15681568+ check_split_1(xa, 3UL << order, order, new_order);15691569+ }15681570 }15691571}15701572#else
+6-5
lib/xarray.c
···987987 * xas_split_alloc() - Allocate memory for splitting an entry.988988 * @xas: XArray operation state.989989 * @entry: New entry which will be stored in the array.990990- * @order: New entry order.990990+ * @order: Current entry order.991991 * @gfp: Memory allocation flags.992992 *993993 * This function should be called before calling xas_split().···1011101110121012 do {10131013 unsigned int i;10141014- void *sibling;10141014+ void *sibling = NULL;10151015 struct xa_node *node;1016101610171017 node = kmem_cache_alloc(radix_tree_node_cachep, gfp);···10211021 for (i = 0; i < XA_CHUNK_SIZE; i++) {10221022 if ((i & mask) == 0) {10231023 RCU_INIT_POINTER(node->slots[i], entry);10241024- sibling = xa_mk_sibling(0);10241024+ sibling = xa_mk_sibling(i);10251025 } else {10261026 RCU_INIT_POINTER(node->slots[i], sibling);10271027 }···10411041 * xas_split() - Split a multi-index entry into smaller entries.10421042 * @xas: XArray operation state.10431043 * @entry: New entry to store in the array.10441044- * @order: New entry order.10441044+ * @order: Current entry order.10451045 *10461046- * The value in the entry is copied to all the replacement entries.10461046+ * The size of the new entries is set in @xas. The value in @entry is10471047+ * copied to all the replacement entries.10471048 *10481049 * Context: Any context. The caller should hold the xa_lock.10491050 */
+2-2
mm/highmem.c
···618618 int idx;619619620620 /* With debug all even slots are unmapped and act as guard */621621- if (IS_ENABLED(CONFIG_DEBUG_HIGHMEM) && !(i & 0x01)) {621621+ if (IS_ENABLED(CONFIG_DEBUG_KMAP_LOCAL) && !(i & 0x01)) {622622 WARN_ON_ONCE(!pte_none(pteval));623623 continue;624624 }···654654 int idx;655655656656 /* With debug all even slots are unmapped and act as guard */657657- if (IS_ENABLED(CONFIG_DEBUG_HIGHMEM) && !(i & 0x01)) {657657+ if (IS_ENABLED(CONFIG_DEBUG_KMAP_LOCAL) && !(i & 0x01)) {658658 WARN_ON_ONCE(!pte_none(pteval));659659 continue;660660 }
+37-4
mm/hugetlb.c
···280280 nrg->reservation_counter =281281 &h_cg->rsvd_hugepage[hstate_index(h)];282282 nrg->css = &h_cg->css;283283+ /*284284+ * The caller will hold exactly one h_cg->css reference for the285285+ * whole contiguous reservation region. But this area might be286286+ * scattered when there are already some file_regions reside in287287+ * it. As a result, many file_regions may share only one css288288+ * reference. In order to ensure that one file_region must hold289289+ * exactly one h_cg->css reference, we should do css_get for290290+ * each file_region and leave the reference held by caller291291+ * untouched.292292+ */293293+ css_get(&h_cg->css);283294 if (!resv->pages_per_hpage)284295 resv->pages_per_hpage = pages_per_huge_page(h);285296 /* pages_per_hpage should be the same for all entries in···301290 nrg->reservation_counter = NULL;302291 nrg->css = NULL;303292 }293293+#endif294294+}295295+296296+static void put_uncharge_info(struct file_region *rg)297297+{298298+#ifdef CONFIG_CGROUP_HUGETLB299299+ if (rg->css)300300+ css_put(rg->css);304301#endif305302}306303···335316 prg->to = rg->to;336317337318 list_del(&rg->link);319319+ put_uncharge_info(rg);338320 kfree(rg);339321340322 rg = prg;···347327 nrg->from = rg->from;348328349329 list_del(&rg->link);330330+ put_uncharge_info(rg);350331 kfree(rg);351332 }352333}···683662684663 del += t - f;685664 hugetlb_cgroup_uncharge_file_region(686686- resv, rg, t - f);665665+ resv, rg, t - f, false);687666688667 /* New entry for end of split region */689668 nrg->from = t;···704683 if (f <= rg->from && t >= rg->to) { /* Remove entire region */705684 del += rg->to - rg->from;706685 hugetlb_cgroup_uncharge_file_region(resv, rg,707707- rg->to - rg->from);686686+ rg->to - rg->from, true);708687 list_del(&rg->link);709688 kfree(rg);710689 continue;···712691713692 if (f <= rg->from) { /* Trim beginning of region */714693 hugetlb_cgroup_uncharge_file_region(resv, rg,715715- t - rg->from);694694+ t - rg->from, false);716695717696 
del += t - rg->from;718697 rg->from = t;719698 } else { /* Trim end of region */720699 hugetlb_cgroup_uncharge_file_region(resv, rg,721721- rg->to - f);700700+ rg->to - f, false);722701723702 del += rg->to - f;724703 rg->to = f;···52085187 */52095188 long rsv_adjust;5210518951905190+ /*51915191+ * hugetlb_cgroup_uncharge_cgroup_rsvd() will put the51925192+ * reference to h_cg->css. See comment below for detail.51935193+ */52115194 hugetlb_cgroup_uncharge_cgroup_rsvd(52125195 hstate_index(h),52135196 (chg - add) * pages_per_huge_page(h), h_cg);···52195194 rsv_adjust = hugepage_subpool_put_pages(spool,52205195 chg - add);52215196 hugetlb_acct_memory(h, -rsv_adjust);51975197+ } else if (h_cg) {51985198+ /*51995199+ * The file_regions will hold their own reference to52005200+ * h_cg->css. So we should release the reference held52015201+ * via hugetlb_cgroup_charge_cgroup_rsvd() when we are52025202+ * done.52035203+ */52045204+ hugetlb_cgroup_put_rsvd_cgroup(h_cg);52225205 }52235206 }52245207 return true;
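The invariant the new comments describe — every charged file_region holds exactly one css reference, so splitting a region takes an extra reference and deleting one drops exactly one — can be sketched in userspace with stand-in types (none of these are the kernel's):

```c
#include <assert.h>
#include <stdlib.h>

/* Userspace sketch, NOT the kernel code: "css" and "region" are
 * stand-ins illustrating the one-reference-per-region rule. */
struct css { int refcnt; };

struct region {
	long from, to;
	struct css *css;	/* may be NULL for uncharged regions */
};

static void css_get(struct css *c) { c->refcnt++; }
static void css_put(struct css *c) { c->refcnt--; }

/* Split r at 'at'; the returned tail takes its own reference so both
 * halves still hold exactly one each. */
static struct region *region_split(struct region *r, long at)
{
	struct region *tail = malloc(sizeof(*tail));

	if (!tail)
		abort();
	tail->from = at;
	tail->to = r->to;
	tail->css = r->css;
	if (tail->css)
		css_get(tail->css);
	r->to = at;
	return tail;
}

static void region_del(struct region *r)
{
	if (r->css)
		css_put(r->css);	/* drop the region's own reference */
	free(r);
}
```

With this shape, deleting any one region can safely drop a reference without caring how the original reservation was fragmented.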
+8-2
mm/hugetlb_cgroup.c
···391391392392void hugetlb_cgroup_uncharge_file_region(struct resv_map *resv,393393 struct file_region *rg,394394- unsigned long nr_pages)394394+ unsigned long nr_pages,395395+ bool region_del)395396{396397 if (hugetlb_cgroup_disabled() || !resv || !rg || !nr_pages)397398 return;···401400 !resv->reservation_counter) {402401 page_counter_uncharge(rg->reservation_counter,403402 nr_pages * resv->pages_per_hpage);404404- css_put(rg->css);403403+ /*404404+ * Only do css_put(rg->css) when we delete the entire region405405+ * because one file_region must hold exactly one css reference.406406+ */407407+ if (region_del)408408+ css_put(rg->css);405409 }406410}407411
+9
mm/kfence/core.c
···1212#include <linux/debugfs.h>1313#include <linux/kcsan-checks.h>1414#include <linux/kfence.h>1515+#include <linux/kmemleak.h>1516#include <linux/list.h>1617#include <linux/lockdep.h>1718#include <linux/memblock.h>···480479481480 addr += 2 * PAGE_SIZE;482481 }482482+483483+ /*484484+ * The pool is live and will never be deallocated from this point on.485485+ * Remove the pool object from the kmemleak object tree, as it would486486+ * otherwise overlap with allocations returned by kfence_alloc(), which487487+ * are registered with kmemleak through the slab post-alloc hook.488488+ */489489+ kmemleak_free(__kfence_pool);483490484491 return true;485492
mm/memory.c
···166166 zero_pfn = page_to_pfn(ZERO_PAGE(0));167167 return 0;168168}169169-core_initcall(init_zero_pfn);169169+early_initcall(init_zero_pfn);170170171171void mm_trace_rss_stat(struct mm_struct *mm, int member, long count)172172{
+23
mm/mmu_notifier.c
···501501 "");502502 WARN_ON(mmu_notifier_range_blockable(range) ||503503 _ret != -EAGAIN);504504+ /*505505+ * We call all the notifiers on any EAGAIN,506506+ * there is no way for a notifier to know if507507+ * its start method failed, thus a start that508508+ * does EAGAIN can't also do end.509509+ */510510+ WARN_ON(ops->invalidate_range_end);504511 ret = _ret;505512 }513513+ }514514+ }515515+516516+ if (ret) {517517+ /*518518+ * Must be non-blocking to get here. If there are multiple519519+ * notifiers and one or more failed start, any that succeeded520520+ * start are expecting their end to be called. Do so now.521521+ */522522+ hlist_for_each_entry_rcu(subscription, &subscriptions->list,523523+ hlist, srcu_read_lock_held(&srcu)) {524524+ if (!subscription->ops->invalidate_range_end)525525+ continue;526526+527527+ subscription->ops->invalidate_range_end(subscription,528528+ range);506529 }507530 }508531 srcu_read_unlock(&srcu, id);
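The rule the hunk above enforces — a subscriber whose start() succeeded cannot learn that a later one failed, so on any failure every subscriber that provides an end() must still get it — can be sketched with stand-in types (this is not the mmu_notifier API):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch of the unwind rule, not the kernel API. */
struct sub {
	int (*start)(void *priv);
	void (*end)(void *priv);
	void *priv;
};

static int range_start(struct sub *subs, size_t n)
{
	int ret = 0;
	size_t i;

	for (i = 0; i < n; i++) {
		if (subs[i].start) {
			int r = subs[i].start(subs[i].priv);

			if (r)
				ret = r;
		}
	}
	if (ret) {
		/* unwind: every subscriber with an end() gets it called */
		for (i = 0; i < n; i++)
			if (subs[i].end)
				subs[i].end(subs[i].priv);
	}
	return ret;
}

/* helpers for the usage example */
static int ok_start(void *p)   { (*(int *)p)++; return 0; }
static int fail_start(void *p) { (void)p; return -11; }
static void count_end(void *p) { (*(int *)p) += 100; }
```

This also shows why a start() that can fail must not register an end(): the failing subscriber would otherwise see an end() for a start that never took effect.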
+16
mm/page-writeback.c
···28332833}28342834EXPORT_SYMBOL_GPL(wait_on_page_writeback);2835283528362836+/*28372837+ * Wait for a page to complete writeback. Returns -EINTR if we get a28382838+ * fatal signal while waiting.28392839+ */28402840+int wait_on_page_writeback_killable(struct page *page)28412841+{28422842+ while (PageWriteback(page)) {28432843+ trace_wait_on_page_writeback(page, page_mapping(page));28442844+ if (wait_on_page_bit_killable(page, PG_writeback))28452845+ return -EINTR;28462846+ }28472847+28482848+ return 0;28492849+}28502850+EXPORT_SYMBOL_GPL(wait_on_page_writeback_killable);28512851+28362852/**28372853 * wait_for_stable_page() - wait for writeback to finish, if necessary.28382854 * @page: The page to wait on.
+15-1
mm/z3fold.c
···13461346 page = list_entry(pos, struct page, lru);1347134713481348 zhdr = page_address(page);13491349- if (test_bit(PAGE_HEADLESS, &page->private))13491349+ if (test_bit(PAGE_HEADLESS, &page->private)) {13501350+ /*13511351+ * For non-headless pages, we wait to do this13521352+ * until we have the page lock to avoid racing13531353+ * with __z3fold_alloc(). Headless pages don't13541354+ * have a lock (and __z3fold_alloc() will never13551355+ * see them), but we still need to test and set13561356+ * PAGE_CLAIMED to avoid racing with13571357+ * z3fold_free(), so just do it now before13581358+ * leaving the loop.13591359+ */13601360+ if (test_and_set_bit(PAGE_CLAIMED, &page->private))13611361+ continue;13621362+13501363 break;13641364+ }1351136513521366 if (kref_get_unless_zero(&zhdr->refcount) == 0) {13531367 zhdr = NULL;
net/can/isotp.c
···196196 nskb->dev = dev;197197 can_skb_set_owner(nskb, sk);198198 ncf = (struct canfd_frame *)nskb->data;199199- skb_put(nskb, so->ll.mtu);199199+ skb_put_zero(nskb, so->ll.mtu);200200201201 /* create & send flow control reply */202202 ncf->can_id = so->txid;···215215 if (ae)216216 ncf->data[0] = so->opt.ext_address;217217218218- if (so->ll.mtu == CANFD_MTU)219219- ncf->flags = so->ll.tx_flags;218218+ ncf->flags = so->ll.tx_flags;220219221220 can_send_ret = can_send(nskb, 1);222221 if (can_send_ret)···779780 can_skb_prv(skb)->skbcnt = 0;780781781782 cf = (struct canfd_frame *)skb->data;782782- skb_put(skb, so->ll.mtu);783783+ skb_put_zero(skb, so->ll.mtu);783784784785 /* create consecutive frame */785786 isotp_fill_dataframe(cf, so, ae, 0);···789790 so->tx.sn %= 16;790791 so->tx.bs++;791792792792- if (so->ll.mtu == CANFD_MTU)793793- cf->flags = so->ll.tx_flags;793793+ cf->flags = so->ll.tx_flags;794794795795 skb->dev = dev;796796 can_skb_set_owner(skb, sk);···895897 so->tx.idx = 0;896898897899 cf = (struct canfd_frame *)skb->data;898898- skb_put(skb, so->ll.mtu);900900+ skb_put_zero(skb, so->ll.mtu);899901900902 /* check for single frame transmission depending on TX_DL */901903 if (size <= so->tx.ll_dl - SF_PCI_SZ4 - ae - off) {···937939 }938940939941 /* send the first or only CAN frame */940940- if (so->ll.mtu == CANFD_MTU)941941- cf->flags = so->ll.tx_flags;942942+ cf->flags = so->ll.tx_flags;942943943944 skb->dev = dev;944945 skb->sk = sk;···12251228 if (ll.mtu != CAN_MTU && ll.mtu != CANFD_MTU)12261229 return -EINVAL;1227123012281228- if (ll.mtu == CAN_MTU && ll.tx_dl > CAN_MAX_DLEN)12311231+ if (ll.mtu == CAN_MTU &&12321232+ (ll.tx_dl > CAN_MAX_DLEN || ll.tx_flags != 0))12291233 return -EINVAL;1230123412311235 memcpy(&so->ll, &ll, sizeof(ll));
+31-2
net/core/dev.c
···11841184 return -ENOMEM;1185118511861186 for_each_netdev(net, d) {11871187+ struct netdev_name_node *name_node;11881188+ list_for_each_entry(name_node, &d->name_node->list, list) {11891189+ if (!sscanf(name_node->name, name, &i))11901190+ continue;11911191+ if (i < 0 || i >= max_netdevices)11921192+ continue;11931193+11941194+ /* avoid cases where sscanf is not exact inverse of printf */11951195+ snprintf(buf, IFNAMSIZ, name, i);11961196+ if (!strncmp(buf, name_node->name, IFNAMSIZ))11971197+ set_bit(i, inuse);11981198+ }11871199 if (!sscanf(d->name, name, &i))11881200 continue;11891201 if (i < 0 || i >= max_netdevices)···43064294 */43074295 thread = READ_ONCE(napi->thread);43084296 if (thread) {42974297+ /* Avoid doing set_bit() if the thread is in42984298+ * INTERRUPTIBLE state, cause napi_thread_wait()42994299+ * makes sure to proceed with napi polling43004300+ * if the thread is explicitly woken from here.43014301+ */43024302+ if (READ_ONCE(thread->state) != TASK_INTERRUPTIBLE)43034303+ set_bit(NAPI_STATE_SCHED_THREADED, &napi->state);43094304 wake_up_process(thread);43104305 return;43114306 }···65056486 WARN_ON_ONCE(!(val & NAPIF_STATE_SCHED));6506648765076488 new = val & ~(NAPIF_STATE_MISSED | NAPIF_STATE_SCHED |64896489+ NAPIF_STATE_SCHED_THREADED |65086490 NAPIF_STATE_PREFER_BUSY_POLL);6509649165106492 /* If STATE_MISSED was set, leave STATE_SCHED set,···6988696869896969static int napi_thread_wait(struct napi_struct *napi)69906970{69716971+ bool woken = false;69726972+69916973 set_current_state(TASK_INTERRUPTIBLE);6992697469936975 while (!kthread_should_stop() && !napi_disable_pending(napi)) {69946994- if (test_bit(NAPI_STATE_SCHED, &napi->state)) {69766976+ /* Testing SCHED_THREADED bit here to make sure the current69776977+ * kthread owns this napi and could poll on this napi.69786978+ * Testing SCHED bit is not enough because SCHED bit might be69796979+ * set by some other busy poll thread or by napi_disable().69806980+ */69816981+ if 
(test_bit(NAPI_STATE_SCHED_THREADED, &napi->state) || woken) {69956982 WARN_ON(!list_empty(&napi->poll_list));69966983 __set_current_state(TASK_RUNNING);69976984 return 0;69986985 }6999698670006987 schedule();69886988+ /* woken being true indicates this thread owns this napi. */69896989+ woken = true;70016990 set_current_state(TASK_INTERRUPTIBLE);70026991 }70036992 __set_current_state(TASK_RUNNING);···1137511346 continue;11376113471137711348 /* Leave virtual devices for the generic cleanup */1137811378- if (dev->rtnl_link_ops)1134911349+ if (dev->rtnl_link_ops && !dev->rtnl_link_ops->netns_refund)1137911350 continue;11380113511138111352 /* Push remaining network devices to init_net */
net/core/filter.c
···56585658 if (unlikely(flags & ~(BPF_MTU_CHK_SEGS)))56595659 return -EINVAL;5660566056615661- if (unlikely(flags & BPF_MTU_CHK_SEGS && len_diff))56615661+ if (unlikely(flags & BPF_MTU_CHK_SEGS && (len_diff || *mtu_len)))56625662 return -EINVAL;5663566356645664 dev = __dev_via_ifindex(dev, ifindex);···56685668 mtu = READ_ONCE(dev->mtu);5669566956705670 dev_len = mtu + dev->hard_header_len;56715671- skb_len = skb->len + len_diff; /* minus result pass check */56715671+56725672+ /* If set use *mtu_len as input, L3 as iph->tot_len (like fib_lookup) */56735673+ skb_len = *mtu_len ? *mtu_len + dev->hard_header_len : skb->len;56745674+56755675+ skb_len += len_diff; /* minus result pass check */56725676 if (skb_len <= dev_len) {56735677 ret = BPF_MTU_CHK_RET_SUCCESS;56745678 goto out;···5716571257175713 /* Add L2-header as dev MTU is L3 size */57185714 dev_len = mtu + dev->hard_header_len;57155715+57165716+ /* Use *mtu_len as input, L3 as iph->tot_len (like fib_lookup) */57175717+ if (*mtu_len)57185718+ xdp_len = *mtu_len + dev->hard_header_len;5719571957205720 xdp_len += len_diff; /* minus result pass check */57215721 if (xdp_len > dev_len)
+1-1
net/core/flow_dissector.c
···176176 * avoid confusion with packets without such field177177 */178178 if (icmp_has_id(ih->type))179179- key_icmp->id = ih->un.echo.id ? : 1;179179+ key_icmp->id = ih->un.echo.id ? ntohs(ih->un.echo.id) : 1;180180 else181181 key_icmp->id = 0;182182}
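The one-character-looking fix above matters because the old expression keyed on the raw on-wire (big-endian) echo id, so the same flow produced an endianness-dependent key. A userspace mirror of the fixed logic:

```c
#include <arpa/inet.h>
#include <assert.h>
#include <stdint.h>

/* Sketch of the fixed flow-key logic: convert the network-order echo
 * id to host order, substituting 1 for a zero id so packets that
 * carry an id are distinguishable from those that do not. */
static uint16_t icmp_flow_id(uint16_t echo_id_net)
{
	return echo_id_net ? ntohs(echo_id_net) : 1;
}
```

On a big-endian host the old and new forms agree, which is why the bug only showed up on little-endian machines.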
+28-16
net/core/sock.c
···34403440 twsk_prot->twsk_slab = NULL;34413441}3442344234433443+static int tw_prot_init(const struct proto *prot)34443444+{34453445+ struct timewait_sock_ops *twsk_prot = prot->twsk_prot;34463446+34473447+ if (!twsk_prot)34483448+ return 0;34493449+34503450+ twsk_prot->twsk_slab_name = kasprintf(GFP_KERNEL, "tw_sock_%s",34513451+ prot->name);34523452+ if (!twsk_prot->twsk_slab_name)34533453+ return -ENOMEM;34543454+34553455+ twsk_prot->twsk_slab =34563456+ kmem_cache_create(twsk_prot->twsk_slab_name,34573457+ twsk_prot->twsk_obj_size, 0,34583458+ SLAB_ACCOUNT | prot->slab_flags,34593459+ NULL);34603460+ if (!twsk_prot->twsk_slab) {34613461+ pr_crit("%s: Can't create timewait sock SLAB cache!\n",34623462+ prot->name);34633463+ return -ENOMEM;34643464+ }34653465+34663466+ return 0;34673467+}34683468+34433469static void req_prot_cleanup(struct request_sock_ops *rsk_prot)34443470{34453471 if (!rsk_prot)···35223496 if (req_prot_init(prot))35233497 goto out_free_request_sock_slab;3524349835253525- if (prot->twsk_prot != NULL) {35263526- prot->twsk_prot->twsk_slab_name = kasprintf(GFP_KERNEL, "tw_sock_%s", prot->name);35273527-35283528- if (prot->twsk_prot->twsk_slab_name == NULL)35293529- goto out_free_request_sock_slab;35303530-35313531- prot->twsk_prot->twsk_slab =35323532- kmem_cache_create(prot->twsk_prot->twsk_slab_name,35333533- prot->twsk_prot->twsk_obj_size,35343534- 0,35353535- SLAB_ACCOUNT |35363536- prot->slab_flags,35373537- NULL);35383538- if (prot->twsk_prot->twsk_slab == NULL)35393539- goto out_free_timewait_sock_slab;35403540- }34993499+ if (tw_prot_init(prot))35003500+ goto out_free_timewait_sock_slab;35413501 }3542350235433503 mutex_lock(&proto_list_mutex);
+5
net/dccp/ipv6.c
···319319 if (!ipv6_unicast_destination(skb))320320 return 0; /* discard, don't send a reset here */321321322322+ if (ipv6_addr_v4mapped(&ipv6_hdr(skb)->saddr)) {323323+ __IP6_INC_STATS(sock_net(sk), NULL, IPSTATS_MIB_INHDRERRORS);324324+ return 0;325325+ }326326+322327 if (dccp_bad_service_code(sk, service)) {323328 dcb->dccpd_reset_code = DCCP_RESET_CODE_BAD_SERVICE_CODE;324329 goto drop;
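The hunk above rejects IPv4-mapped source addresses (`::ffff:a.b.c.d`, RFC 4291 §2.5.5.2) on the IPv6 request path, since a v4 peer reaching an IPv6-only socket through one confuses the dual-stack model. A userspace check for the mapped prefix, assuming the standard address layout, looks like:

```c
#include <arpa/inet.h>
#include <assert.h>
#include <string.h>

/* Return nonzero if 'a' is an IPv4-mapped IPv6 address, i.e. its
 * first 12 bytes are 0,...,0,0xff,0xff per RFC 4291. */
static int addr_is_v4mapped(const struct in6_addr *a)
{
	static const unsigned char prefix[12] = {
		0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0xff, 0xff,
	};

	return memcmp(a, prefix, sizeof(prefix)) == 0;
}
```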
+7-4
net/dsa/dsa2.c
···10661066{10671067 struct dsa_switch *ds = dp->ds;10681068 struct dsa_switch_tree *dst = ds->dst;10691069+ const struct dsa_device_ops *tag_ops;10691070 enum dsa_tag_protocol tag_protocol;1070107110711072 tag_protocol = dsa_get_tag_protocol(dp, master);···10811080 * nothing to do here.10821081 */10831082 } else {10841084- dst->tag_ops = dsa_tag_driver_get(tag_protocol);10851085- if (IS_ERR(dst->tag_ops)) {10861086- if (PTR_ERR(dst->tag_ops) == -ENOPROTOOPT)10831083+ tag_ops = dsa_tag_driver_get(tag_protocol);10841084+ if (IS_ERR(tag_ops)) {10851085+ if (PTR_ERR(tag_ops) == -ENOPROTOOPT)10871086 return -EPROBE_DEFER;10881087 dev_warn(ds->dev, "No tagger for this switch\n");10891088 dp->master = NULL;10901090- return PTR_ERR(dst->tag_ops);10891089+ return PTR_ERR(tag_ops);10911090 }10911091+10921092+ dst->tag_ops = tag_ops;10921093 }1093109410941095 dp->master = master;
net/ipv6/ip6_input.c
···245245 if (ipv6_addr_is_multicast(&hdr->saddr))246246 goto err;247247248248- /* While RFC4291 is not explicit about v4mapped addresses249249- * in IPv6 headers, it seems clear linux dual-stack250250- * model can not deal properly with these.251251- * Security models could be fooled by ::ffff:127.0.0.1 for example.252252- *253253- * https://tools.ietf.org/html/draft-itojun-v6ops-v4mapped-harmful-02254254- */255255- if (ipv6_addr_v4mapped(&hdr->saddr))256256- goto err;257257-258248 skb->transport_header = skb->network_header + sizeof(*hdr);259249 IP6CB(skb)->nhoff = offsetof(struct ipv6hdr, nexthdr);260250
+8-8
net/ipv6/netfilter/ip6_tables.c
···280280281281 local_bh_disable();282282 addend = xt_write_recseq_begin();283283- private = rcu_access_pointer(table->private);283283+ private = READ_ONCE(table->private); /* Address dependency. */284284 cpu = smp_processor_id();285285 table_base = private->entries;286286 jumpstack = (struct ip6t_entry **)private->jumpstack[cpu];···807807{808808 unsigned int countersize;809809 struct xt_counters *counters;810810- const struct xt_table_info *private = xt_table_get_private_protected(table);810810+ const struct xt_table_info *private = table->private;811811812812 /* We need atomic snapshot of counters: rest doesn't change813813 (other than comefrom, which userspace doesn't care···831831 unsigned int off, num;832832 const struct ip6t_entry *e;833833 struct xt_counters *counters;834834- const struct xt_table_info *private = xt_table_get_private_protected(table);834834+ const struct xt_table_info *private = table->private;835835 int ret = 0;836836 const void *loc_cpu_entry;837837···980980 t = xt_request_find_table_lock(net, AF_INET6, name);981981 if (!IS_ERR(t)) {982982 struct ip6t_getinfo info;983983- const struct xt_table_info *private = xt_table_get_private_protected(t);983983+ const struct xt_table_info *private = t->private;984984#ifdef CONFIG_COMPAT985985 struct xt_table_info tmp;986986···1035103510361036 t = xt_find_table_lock(net, AF_INET6, get.name);10371037 if (!IS_ERR(t)) {10381038- struct xt_table_info *private = xt_table_get_private_protected(t);10381038+ struct xt_table_info *private = t->private;10391039 if (get.size == private->size)10401040 ret = copy_entries_to_user(private->size,10411041 t, uptr->entrytable);···11891189 }1190119011911191 local_bh_disable();11921192- private = xt_table_get_private_protected(t);11921192+ private = t->private;11931193 if (private->number != tmp.num_counters) {11941194 ret = -EINVAL;11951195 goto unlock_up_free;···15521552 void __user *userptr)15531553{15541554 struct xt_counters *counters;15551555- const struct 
xt_table_info *private = xt_table_get_private_protected(table);15551555+ const struct xt_table_info *private = table->private;15561556 void __user *pos;15571557 unsigned int size;15581558 int ret = 0;···15981598 xt_compat_lock(AF_INET6);15991599 t = xt_find_table_lock(net, AF_INET6, get.name);16001600 if (!IS_ERR(t)) {16011601- const struct xt_table_info *private = xt_table_get_private_protected(t);16011601+ const struct xt_table_info *private = t->private;16021602 struct xt_table_info info;16031603 ret = compat_table_info(private, &info);16041604 if (!ret && get.size == info.size)
net/mac80211/cfg.c
···29502950 continue;2951295129522952 for (j = 0; j < IEEE80211_HT_MCS_MASK_LEN; j++) {29532953- if (~sdata->rc_rateidx_mcs_mask[i][j]) {29532953+ if (sdata->rc_rateidx_mcs_mask[i][j] != 0xff) {29542954 sdata->rc_has_mcs_mask[i] = true;29552955 break;29562956 }29572957 }2958295829592959 for (j = 0; j < NL80211_VHT_NSS_MAX; j++) {29602960- if (~sdata->rc_rateidx_vht_mcs_mask[i][j]) {29602960+ if (sdata->rc_rateidx_vht_mcs_mask[i][j] != 0xffff) {29612961 sdata->rc_has_vht_mcs_mask[i] = true;29622962 break;29632963 }
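The mask fix above is an integer-promotion trap: `~` promotes its `u8` operand to `int` before complementing, so `~0xff` is `0xffffff00` (non-zero) and the old "mask differs from the default" test fired for every value. A minimal demonstration:

```c
#include <assert.h>
#include <stdint.h>

/* Old form: ~mask promotes to int, so the result is never zero for a
 * u8 — this reports "modified" even for the all-ones default. */
static int mask_modified_broken(uint8_t mask)
{
	return ~mask != 0;
}

/* Fixed form: compare against the all-ones constant of the field's
 * actual width. */
static int mask_modified(uint8_t mask)
{
	return mask != 0xff;
}
```

The same reasoning applies to the 16-bit VHT mask, where the correct all-ones constant is `0xffff`.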
+2
net/mac80211/ibss.c
···1874187418751875 /* remove beacon */18761876 kfree(sdata->u.ibss.ie);18771877+ sdata->u.ibss.ie = NULL;18781878+ sdata->u.ibss.ie_len = 0;1877187918781880 /* on the next join, re-program HT parameters */18791881 memset(&ifibss->ht_capa, 0, sizeof(ifibss->ht_capa));
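The two added lines above follow the standard teardown rule: after freeing a buffer that outlives the function, clear the pointer and its companion length together so a repeated leave/join cycle cannot read or double-free the dead buffer. A sketch with stand-in types:

```c
#include <assert.h>
#include <stdlib.h>

/* Stand-in for the IBSS beacon IE state, not the mac80211 struct. */
struct beacon_state {
	char *ie;
	unsigned long ie_len;
};

/* Idempotent teardown: free(NULL) is a no-op, so calling this twice
 * is safe once the pointer is cleared. */
static void beacon_ie_free(struct beacon_state *b)
{
	free(b->ie);
	b->ie = NULL;
	b->ie_len = 0;
}
```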
+12-1
net/mac80211/main.c
···973973 continue;974974975975 if (!dflt_chandef.chan) {976976+ /*977977+ * Assign the first enabled channel to dflt_chandef978978+ * from the list of channels979979+ */980980+ for (i = 0; i < sband->n_channels; i++)981981+ if (!(sband->channels[i].flags &982982+ IEEE80211_CHAN_DISABLED))983983+ break;984984+ /* if none found then use the first anyway */985985+ if (i == sband->n_channels)986986+ i = 0;976987 cfg80211_chandef_create(&dflt_chandef,977977- &sband->channels[0],988988+ &sband->channels[i],978989 NL80211_CHAN_NO_HT);979990 /* init channel we're on */980991 if (!local->use_chanctx && !local->_oper_chandef.chan) {
net/mac80211/rc80211_minstrel_ht.c
···805805static u16806806minstrel_ht_next_inc_rate(struct minstrel_ht_sta *mi, u32 fast_rate_dur)807807{808808- struct minstrel_mcs_group_data *mg;809808 u8 type = MINSTREL_SAMPLE_TYPE_INC;810809 int i, index = 0;811810 u8 group;···812813 group = mi->sample[type].sample_group;813814 for (i = 0; i < ARRAY_SIZE(minstrel_mcs_groups); i++) {814815 group = (group + 1) % ARRAY_SIZE(minstrel_mcs_groups);815815- mg = &mi->groups[group];816816817817 index = minstrel_ht_group_min_rate_offset(mi, group,818818 fast_rate_dur);
+1-1
net/mac80211/util.c
···968968 break;969969 case WLAN_EID_EXT_HE_OPERATION:970970 if (len >= sizeof(*elems->he_operation) &&971971- len == ieee80211_he_oper_size(data) - 1) {971971+ len >= ieee80211_he_oper_size(data) - 1) {972972 if (crc)973973 *crc = crc32_be(*crc, (void *)elem,974974 elem->datalen + 2);
net/openvswitch/flow.c
···857857#if IS_ENABLED(CONFIG_NET_TC_SKB_EXT)858858 struct tc_skb_ext *tc_ext;859859#endif860860+ bool post_ct = false;860861 int res, err;861862862863 /* Extract metadata from packet. */···896895 tc_ext = skb_ext_find(skb, TC_SKB_EXT);897896 key->recirc_id = tc_ext ? tc_ext->chain : 0;898897 OVS_CB(skb)->mru = tc_ext ? tc_ext->mru : 0;898898+ post_ct = tc_ext ? tc_ext->post_ct : false;899899 } else {900900 key->recirc_id = 0;901901 }···906904907905 err = key_extract(skb, key);908906 if (!err)909909- ovs_ct_fill_key(skb, key); /* Must be after key_extract(). */907907+ ovs_ct_fill_key(skb, key, post_ct); /* Must be after key_extract(). */910908 return err;911909}912910
+5
net/qrtr/qrtr.c
···10581058 rc = copied;1059105910601060 if (addr) {10611061+ /* There is an anonymous 2-byte hole after sq_family,10621062+ * make sure to clear it.10631063+ */10641064+ memset(addr, 0, sizeof(*addr));10651065+10611066 addr->sq_family = AF_QIPCRTR;10621067 addr->sq_node = cb->src_node;10631068 addr->sq_port = cb->src_port;
net/wireless/nl80211.c
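The memset above fixes a classic kernel infoleak: assigning the named members of a struct with internal padding leaves the padding holding stack garbage, which is then copied to userspace. A stand-in struct with the same 2-byte hole (assuming 2-byte shorts and 4-byte int alignment, as on common ABIs):

```c
#include <assert.h>
#include <string.h>

/* Shaped like sockaddr_qipcrtr for illustration only: a 16-bit field
 * followed by 32-bit fields leaves a 2-byte padding hole at offset 2. */
struct addr_like {
	unsigned short family;	/* 2 bytes, then a 2-byte hole */
	unsigned int node;
	unsigned int port;
};

static void fill_addr(struct addr_like *a)
{
	memset(a, 0, sizeof(*a));	/* clears the hole too */
	a->family = 42;
	a->node = 1;
	a->port = 2;
}
```

Without the memset, whatever bytes happened to be at offsets 2–3 would survive into the copy handed back to the caller.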
···7070 struct wireless_dev *result = NULL;7171 bool have_ifidx = attrs[NL80211_ATTR_IFINDEX];7272 bool have_wdev_id = attrs[NL80211_ATTR_WDEV];7373- u64 wdev_id;7373+ u64 wdev_id = 0;7474 int wiphy_idx = -1;7575 int ifidx = -1;7676···1478914789#define NL80211_FLAG_NEED_WDEV_UP (NL80211_FLAG_NEED_WDEV |\1479014790 NL80211_FLAG_CHECK_NETDEV_UP)1479114791#define NL80211_FLAG_CLEAR_SKB 0x201479214792+#define NL80211_FLAG_NO_WIPHY_MTX 0x4014792147931479314794static int nl80211_pre_doit(const struct genl_ops *ops, struct sk_buff *skb,1479414795 struct genl_info *info)···1484114840 info->user_ptr[0] = rdev;1484214841 }14843148421484414844- if (rdev) {1484314843+ if (rdev && !(ops->internal_flags & NL80211_FLAG_NO_WIPHY_MTX)) {1484514844 wiphy_lock(&rdev->wiphy);1484614845 /* we keep the mutex locked until post_doit */1484714846 __release(&rdev->wiphy.mtx);···1486614865 }1486714866 }14868148671486914869- if (info->user_ptr[0]) {1486814868+ if (info->user_ptr[0] &&1486914869+ !(ops->internal_flags & NL80211_FLAG_NO_WIPHY_MTX)) {1487014870 struct cfg80211_registered_device *rdev = info->user_ptr[0];14871148711487214872 /* we kept the mutex locked since pre_doit */···1533115329 .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,1533215330 .doit = nl80211_wiphy_netns,1533315331 .flags = GENL_UNS_ADMIN_PERM,1533415334- .internal_flags = NL80211_FLAG_NEED_WIPHY,1533215332+ .internal_flags = NL80211_FLAG_NEED_WIPHY |1533315333+ NL80211_FLAG_NEED_RTNL |1533415334+ NL80211_FLAG_NO_WIPHY_MTX,1533515335 },1533615336 {1533715337 .cmd = NL80211_CMD_GET_SURVEY,
+2
scripts/module.lds.S
···20202121 __patchable_function_entries : { *(__patchable_function_entries) }22222323+#ifdef CONFIG_LTO_CLANG2324 /*2425 * With CONFIG_LTO_CLANG, LLD always enables -fdata-sections and2526 * -ffunction-sections, which increases the size of the final module.···4241 }43424443 .text : { *(.text .text.[0-9a-zA-Z_]*) }4444+#endif4545}46464747/* bring in arch-specific sections */
+8
security/integrity/iint.c
···9898 struct rb_node *node, *parent = NULL;9999 struct integrity_iint_cache *iint, *test_iint;100100101101+ /*102102+ * The integrity's "iint_cache" is initialized at security_init(),103103+ * unless it is not included in the ordered list of LSMs enabled104104+ * on the boot command line.105105+ */106106+ if (!iint_cache)107107+ panic("%s: lsm=integrity required.\n", __func__);108108+101109 iint = integrity_iint_find(inode);102110 if (iint)103111 return iint;
security/selinux/ss/services.c
···6767#include "policycap_names.h"6868#include "ima.h"69697070+struct convert_context_args {7171+ struct selinux_state *state;7272+ struct policydb *oldp;7373+ struct policydb *newp;7474+};7575+7676+struct selinux_policy_convert_data {7777+ struct convert_context_args args;7878+ struct sidtab_convert_params sidtab_params;7979+};8080+7081/* Forward declaration. */7182static int context_struct_to_string(struct policydb *policydb,7283 struct context *context,···19851974 return 0;19861975}1987197619881988-struct convert_context_args {19891989- struct selinux_state *state;19901990- struct policydb *oldp;19911991- struct policydb *newp;19921992-};19931993-19941977/*19951978 * Convert the values in the security context19961979 * structure `oldc' from the values specified···21642159}2165216021662161void selinux_policy_cancel(struct selinux_state *state,21672167- struct selinux_policy *policy)21622162+ struct selinux_load_state *load_state)21682163{21692164 struct selinux_policy *oldpolicy;21702165···21722167 lockdep_is_held(&state->policy_mutex));2173216821742169 sidtab_cancel_convert(oldpolicy->sidtab);21752175- selinux_policy_free(policy);21702170+ selinux_policy_free(load_state->policy);21712171+ kfree(load_state->convert_data);21762172}2177217321782174static void selinux_notify_policy_change(struct selinux_state *state,···21892183}2190218421912185void selinux_policy_commit(struct selinux_state *state,21922192- struct selinux_policy *newpolicy)21862186+ struct selinux_load_state *load_state)21932187{21942194- struct selinux_policy *oldpolicy;21882188+ struct selinux_policy *oldpolicy, *newpolicy = load_state->policy;21952189 u32 seqno;2196219021972191 oldpolicy = rcu_dereference_protected(state->policy,···22312225 /* Free the old policy */22322226 synchronize_rcu();22332227 selinux_policy_free(oldpolicy);22282228+ kfree(load_state->convert_data);2234222922352230 /* Notify others of the policy change */22362231 selinux_notify_policy_change(state, seqno);···22482241 * 
loading the new policy.22492242 */22502243int security_load_policy(struct selinux_state *state, void *data, size_t len,22512251- struct selinux_policy **newpolicyp)22442244+ struct selinux_load_state *load_state)22522245{22532246 struct selinux_policy *newpolicy, *oldpolicy;22542254- struct sidtab_convert_params convert_params;22552255- struct convert_context_args args;22472247+ struct selinux_policy_convert_data *convert_data;22562248 int rc = 0;22572249 struct policy_file file = { data, len }, *fp = &file;22582250···22812275 goto err_mapping;22822276 }2283227722842284-22852278 if (!selinux_initialized(state)) {22862279 /* First policy load, so no need to preserve state from old policy */22872287- *newpolicyp = newpolicy;22802280+ load_state->policy = newpolicy;22812281+ load_state->convert_data = NULL;22882282 return 0;22892283 }22902284···22982292 goto err_free_isids;22992293 }2300229422952295+ convert_data = kmalloc(sizeof(*convert_data), GFP_KERNEL);22962296+ if (!convert_data) {22972297+ rc = -ENOMEM;22982298+ goto err_free_isids;22992299+ }23002300+23012301 /*23022302 * Convert the internal representations of contexts23032303 * in the new SID table.23042304 */23052305- args.state = state;23062306- args.oldp = &oldpolicy->policydb;23072307- args.newp = &newpolicy->policydb;23052305+ convert_data->args.state = state;23062306+ convert_data->args.oldp = &oldpolicy->policydb;23072307+ convert_data->args.newp = &newpolicy->policydb;2308230823092309- convert_params.func = convert_context;23102310- convert_params.args = &args;23112311- convert_params.target = newpolicy->sidtab;23092309+ convert_data->sidtab_params.func = convert_context;23102310+ convert_data->sidtab_params.args = &convert_data->args;23112311+ convert_data->sidtab_params.target = newpolicy->sidtab;2312231223132313- rc = sidtab_convert(oldpolicy->sidtab, &convert_params);23132313+ rc = sidtab_convert(oldpolicy->sidtab, &convert_data->sidtab_params);23142314 if (rc) {23152315 pr_err("SELinux: unable 
to convert the internal"23162316 " representation of contexts in the new SID"23172317 " table\n");23182318- goto err_free_isids;23182318+ goto err_free_convert_data;23192319 }2320232023212321- *newpolicyp = newpolicy;23212321+ load_state->policy = newpolicy;23222322+ load_state->convert_data = convert_data;23222323 return 0;2323232423252325+err_free_convert_data:23262326+ kfree(convert_data);23242327err_free_isids:23252328 sidtab_destroy(newpolicy->sidtab);23262329err_mapping:
+1-1
security/tomoyo/network.c
···613613static bool tomoyo_kernel_service(void)614614{615615 /* Nothing to do if I am a kernel service. */616616- return (current->flags & (PF_KTHREAD | PF_IO_WORKER)) == PF_KTHREAD;616616+ return current->flags & PF_KTHREAD;617617}618618619619/**
tools/lib/bpf/btf_dump.c
···462462 return err;463463464464 case BTF_KIND_ARRAY:465465- return btf_dump_order_type(d, btf_array(t)->type, through_ptr);465465+ return btf_dump_order_type(d, btf_array(t)->type, false);466466467467 case BTF_KIND_STRUCT:468468 case BTF_KIND_UNION: {
+2-1
tools/lib/bpf/libbpf.c
···11811181 if (!elf_rawdata(elf_getscn(obj->efile.elf, obj->efile.shstrndx), NULL)) {11821182 pr_warn("elf: failed to get section names strings from %s: %s\n",11831183 obj->path, elf_errmsg(-1));11841184- return -LIBBPF_ERRNO__FORMAT;11841184+ err = -LIBBPF_ERRNO__FORMAT;11851185+ goto errout;11851186 }1186118711871188 /* Old LLVM set e_machine to EM_NONE */
tools/perf/util/auxtrace.c
···298298 queue->set = true;299299 queue->tid = buffer->tid;300300 queue->cpu = buffer->cpu;301301- } else if (buffer->cpu != queue->cpu || buffer->tid != queue->tid) {302302- pr_err("auxtrace queue conflict: cpu %d, tid %d vs cpu %d, tid %d\n",303303- queue->cpu, queue->tid, buffer->cpu, buffer->tid);304304- return -EINVAL;305301 }306302307303 buffer->buffer_nr = queues->next_buffer_nr++;
+10-3
tools/perf/util/bpf-event.c
···196196 }197197198198 if (info_linear->info_len < offsetof(struct bpf_prog_info, prog_tags)) {199199+ free(info_linear);199200 pr_debug("%s: the kernel is too old, aborting\n", __func__);200201 return -2;201202 }202203203204 info = &info_linear->info;205205+ if (!info->jited_ksyms) {206206+ free(info_linear);207207+ return -1;208208+ }204209205210 /* number of ksyms, func_lengths, and tags should match */206211 sub_prog_cnt = info->nr_jited_ksyms;207212 if (sub_prog_cnt != info->nr_prog_tags ||208208- sub_prog_cnt != info->nr_jited_func_lens)213213+ sub_prog_cnt != info->nr_jited_func_lens) {214214+ free(info_linear);209215 return -1;216216+ }210217211218 /* check BTF func info support */212219 if (info->btf_id && info->nr_func_info && info->func_info_rec_size) {213220 /* btf func info number should be same as sub_prog_cnt */214221 if (sub_prog_cnt != info->nr_func_info) {215222 pr_debug("%s: mismatch in BPF sub program count and BTF function info count, aborting\n", __func__);216216- err = -1;217217- goto out;223223+ free(info_linear);224224+ return -1;218225 }219226 if (btf__get_from_id(info->btf_id, &btf)) {220227 pr_debug("%s: failed to get BTF of id %u, aborting\n", __func__, info->btf_id);
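The perf fix above plugs leaks by freeing `info_linear` on each early return. An equivalent shape that avoids repeating the free is the single-exit-label idiom common elsewhere in the kernel; a generic sketch with hypothetical names:

```c
#include <assert.h>
#include <stdlib.h>

/* Single-exit-label error handling: every path after the allocation
 * funnels through 'out', so the buffer is released exactly once. */
static int process_buffer(int fail_early)
{
	char *buf = malloc(64);
	int err = 0;

	if (!buf)
		return -1;	/* nothing allocated yet: plain return */

	if (fail_early) {
		err = -2;
		goto out;	/* still releases buf below */
	}

	buf[0] = 0;	/* ... real work would go here ... */
out:
	free(buf);
	return err;
}
```

Either style is correct; the per-return frees used in the diff keep the change minimal, while the label keeps future early returns from reintroducing the leak.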
···133133 if (dso != NULL) {134134 __dsos__add(&machine->dsos, dso);135135 dso__set_long_name(dso, long_name, false);136136+ /* Put dso here because __dsos__add already got it */137137+ dso__put(dso);136138 }137139 138140 return dso;
+2
tools/testing/kunit/configs/broken_on_uml.config
···4040# CONFIG_RESET_BRCMSTB_RESCAL is not set4141# CONFIG_RESET_INTEL_GW is not set4242# CONFIG_ADI_AXI_ADC is not set4343+# CONFIG_DEBUG_PAGEALLOC is not set4444+# CONFIG_PAGE_POISONING is not set
+1-1
tools/testing/kunit/kunit_config.py
···1313CONFIG_IS_NOT_SET_PATTERN = r'^# CONFIG_(\w+) is not set$'1414CONFIG_PATTERN = r'^CONFIG_(\w+)=(\S+|".*")$'15151616-KconfigEntryBase = collections.namedtuple('KconfigEntry', ['name', 'value'])1616+KconfigEntryBase = collections.namedtuple('KconfigEntryBase', ['name', 'value'])17171818class KconfigEntry(KconfigEntryBase):1919
+18-3
tools/testing/radix-tree/idr-test.c
···296296 return NULL;297297}298298299299+/*300300+ * There are always either 1 or 2 objects in the IDR. If we find nothing,301301+ * or we find something at an ID we didn't expect, that's a bug.302302+ */299303void idr_find_test_1(int anchor_id, int throbber_id)300304{301305 pthread_t throbber;302306 time_t start = time(NULL);303307304304- pthread_create(&throbber, NULL, idr_throbber, &throbber_id);305305-306308 BUG_ON(idr_alloc(&find_idr, xa_mk_value(anchor_id), anchor_id,307309 anchor_id + 1, GFP_KERNEL) != anchor_id);308310311311+ pthread_create(&throbber, NULL, idr_throbber, &throbber_id);312312+313313+ rcu_read_lock();309314 do {310315 int id = 0;311316 void *entry = idr_get_next(&find_idr, &id);312312- BUG_ON(entry != xa_mk_value(id));317317+ rcu_read_unlock();318318+ if ((id != anchor_id && id != throbber_id) ||319319+ entry != xa_mk_value(id)) {320320+ printf("%s(%d, %d): %p at %d\n", __func__, anchor_id,321321+ throbber_id, entry, id);322322+ abort();323323+ }324324+ rcu_read_lock();313325 } while (time(NULL) < start + 11);326326+ rcu_read_unlock();314327315328 pthread_join(throbber, NULL);316329···590577591578int __weak main(void)592579{580580+ rcu_register_thread();593581 radix_tree_init();594582 idr_checks();595583 ida_tests();···598584 rcu_barrier();599585 if (nr_allocated)600586 printf("nr_allocated = %d\n", nr_allocated);587587+ rcu_unregister_thread();601588 return 0;602589}
tools/testing/selftests/arm64/fp/sve-test.S
···284284// Set up test pattern in the FFR285285// x0: pid286286// x2: generation287287+//288288+// We need to generate a canonical FFR value, which consists of a number of289289+// low "1" bits, followed by a number of zeros. This gives us 17 unique values290290+// per 16 bits of FFR, so we create a 4 bit signature out of the PID and291291+// generation, and use that as the initial number of ones in the pattern.292292+// We fill the upper lanes of FFR with zeros.287293// Beware: corrupts P0.288294function setup_ffr289295 mov x4, x30290296291291- bl pattern297297+ and w0, w0, #0x3298298+ bfi w0, w2, #2, #2299299+ mov w1, #1300300+ lsl w1, w1, w0301301+ sub w1, w1, #1302302+292303 ldr x0, =ffrref293293- ldr x1, =scratch294294- rdvl x2, #1295295- lsr x2, x2, #3296296- bl memcpy304304+ strh w1, [x0], 2305305+ rdvl x1, #1306306+ lsr x1, x1, #3307307+ sub x1, x1, #2308308+ bl memclr297309298310 mov x0, #0299311 ldr x1, =ffrref
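For readers not fluent in SVE assembly, the signature construction above (the `and`/`bfi`/`lsl`/`sub` sequence) can be modelled in a few lines of Python. This is an illustrative sketch of the arithmetic only; `ffr_pattern` is a made-up name, and the real code stores the result as the first halfword of `ffrref` and clears the rest of the vector with `memclr`:

```python
def ffr_pattern(pid: int, generation: int) -> int:
    # 4-bit signature: low two bits of the PID in bits [1:0],
    # low two bits of the generation in bits [3:2] (the bfi step).
    sig = (pid & 0x3) | ((generation & 0x3) << 2)
    # Canonical FFR chunk: `sig` low one-bits followed by zeros,
    # computed as (1 << sig) - 1 (the lsl/sub steps).
    return (1 << sig) - 1
```

For example, pid=3, generation=3 gives 0x7fff (15 ones), while pid=0, generation=0 gives the all-zeros pattern.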
···11+// SPDX-License-Identifier: GPL-2.022+/* Copyright (c) 2021 Facebook */33+#define _GNU_SOURCE44+#include <sched.h>55+#include <test_progs.h>66+#include <time.h>77+#include <sys/mman.h>88+#include <sys/syscall.h>99+#include "fexit_sleep.skel.h"1010+1111+static int do_sleep(void *skel)1212+{1313+ struct fexit_sleep *fexit_skel = skel;1414+ struct timespec ts1 = { .tv_nsec = 1 };1515+ struct timespec ts2 = { .tv_sec = 10 };1616+1717+ fexit_skel->bss->pid = getpid();1818+ (void)syscall(__NR_nanosleep, &ts1, NULL);1919+ (void)syscall(__NR_nanosleep, &ts2, NULL);2020+ return 0;2121+}2222+2323+#define STACK_SIZE (1024 * 1024)2424+static char child_stack[STACK_SIZE];2525+2626+void test_fexit_sleep(void)2727+{2828+ struct fexit_sleep *fexit_skel = NULL;2929+ int wstatus, duration = 0;3030+ pid_t cpid;3131+ int err, fexit_cnt;3232+3333+ fexit_skel = fexit_sleep__open_and_load();3434+ if (CHECK(!fexit_skel, "fexit_skel_load", "fexit skeleton failed\n"))3535+ goto cleanup;3636+3737+ err = fexit_sleep__attach(fexit_skel);3838+ if (CHECK(err, "fexit_attach", "fexit attach failed: %d\n", err))3939+ goto cleanup;4040+4141+ cpid = clone(do_sleep, child_stack + STACK_SIZE, CLONE_FILES | SIGCHLD, fexit_skel);4242+ if (CHECK(cpid == -1, "clone", strerror(errno)))4343+ goto cleanup;4444+4545+ /* wait until first sys_nanosleep ends and second sys_nanosleep starts */4646+ while (READ_ONCE(fexit_skel->bss->fentry_cnt) != 2);4747+ fexit_cnt = READ_ONCE(fexit_skel->bss->fexit_cnt);4848+ if (CHECK(fexit_cnt != 1, "fexit_cnt", "%d", fexit_cnt))4949+ goto cleanup;5050+5151+ /* close progs and detach them. That will trigger two nop5->jmp5 rewrites5252+ * in the trampolines to skip nanosleep_fexit prog.5353+ * The nanosleep_fentry prog will get detached first.5454+ * The nanosleep_fexit prog will get detached second.5555+ * Detaching will trigger freeing of both progs JITed images.5656+ * There will be two dying bpf_tramp_image-s, but only the initial5757+ * bpf_tramp_image (with both _fentry and _fexit progs) will be stuck5858+ * waiting for percpu_ref_kill to confirm. The other one5959+ * will be freed quickly.6060+ */6161+ close(bpf_program__fd(fexit_skel->progs.nanosleep_fentry));6262+ close(bpf_program__fd(fexit_skel->progs.nanosleep_fexit));6363+ fexit_sleep__detach(fexit_skel);6464+6565+ /* kill the child to unwind sys_nanosleep stack through the trampoline */6666+ kill(cpid, 9);6767+6868+ if (CHECK(waitpid(cpid, &wstatus, 0) == -1, "waitpid", strerror(errno)))6969+ goto cleanup;7070+ if (CHECK(WEXITSTATUS(wstatus) != 0, "exitstatus", "failed"))7171+ goto cleanup;7272+7373+ /* The bypassed nanosleep_fexit prog shouldn't have executed.7474+ * Unlike the progs, the maps were not freed and are directly accessible.7575+ */7676+ fexit_cnt = READ_ONCE(fexit_skel->bss->fexit_cnt);7777+ if (CHECK(fexit_cnt != 1, "fexit_cnt", "%d", fexit_cnt))7878+ goto cleanup;7979+8080+cleanup:8181+ fexit_sleep__destroy(fexit_skel);8282+}
···105105 return retval;106106}107107108108+SEC("xdp")109109+int xdp_input_len(struct xdp_md *ctx)110110+{111111+ int retval = XDP_PASS; /* Expected retval on successful test */112112+ void *data_end = (void *)(long)ctx->data_end;113113+ void *data = (void *)(long)ctx->data;114114+ __u32 ifindex = GLOBAL_USER_IFINDEX;115115+ __u32 data_len = data_end - data;116116+117117+ /* The API lets the user give a length to check as input via the mtu_len118118+ * param; the resulting MTU value is still output in mtu_len after the call.119119+ *120120+ * The input len is L3, like MTU and iph->tot_len.121121+ * Remember XDP data_len is L2.122122+ */123123+ __u32 mtu_len = data_len - ETH_HLEN;124124+125125+ if (bpf_check_mtu(ctx, ifindex, &mtu_len, 0, 0))126126+ retval = XDP_ABORTED;127127+128128+ global_bpf_mtu_xdp = mtu_len;129129+ return retval;130130+}131131+132132+SEC("xdp")133133+int xdp_input_len_exceed(struct xdp_md *ctx)134134+{135135+ int retval = XDP_ABORTED; /* Fail */136136+ __u32 ifindex = GLOBAL_USER_IFINDEX;137137+ int err;138138+139139+ /* The API lets the user give a length to check as input via the mtu_len140140+ * param; the resulting MTU value is still output in mtu_len after the call.141141+ *142142+ * The input length value is an L3 size, like the MTU.143143+ */144144+ __u32 mtu_len = GLOBAL_USER_MTU;145145+146146+ mtu_len += 1; /* Exceed by 1 */147147+148148+ err = bpf_check_mtu(ctx, ifindex, &mtu_len, 0, 0);149149+ if (err == BPF_MTU_CHK_RET_FRAG_NEEDED)150150+ retval = XDP_PASS; /* Success in exceeding MTU check */151151+152152+ global_bpf_mtu_xdp = mtu_len;153153+ return retval;154154+}155155+108156SEC("classifier")109157int tc_use_helper(struct __sk_buff *ctx)110158{···240192 */241193 if (bpf_check_mtu(ctx, ifindex, &mtu_len, delta, 0))242194 retval = BPF_DROP;195195+196196+ global_bpf_mtu_xdp = mtu_len;197197+ return retval;198198+}199199+200200+SEC("classifier")201201+int tc_input_len(struct __sk_buff *ctx)202202+{203203+ int retval = BPF_OK; /* Expected retval on successful test */204204+ __u32 ifindex = GLOBAL_USER_IFINDEX;205205+206206+ /* The API lets the user give a length to check as input via the mtu_len207207+ * param; the resulting MTU value is still output in mtu_len after the call.208208+ *209209+ * The input length value is an L3 size.210210+ */211211+ __u32 mtu_len = GLOBAL_USER_MTU;212212+213213+ if (bpf_check_mtu(ctx, ifindex, &mtu_len, 0, 0))214214+ retval = BPF_DROP;215215+216216+ global_bpf_mtu_xdp = mtu_len;217217+ return retval;218218+}219219+220220+SEC("classifier")221221+int tc_input_len_exceed(struct __sk_buff *ctx)222222+{223223+ int retval = BPF_DROP; /* Fail */224224+ __u32 ifindex = GLOBAL_USER_IFINDEX;225225+ int err;226226+227227+ /* The API lets the user give a length to check as input via the mtu_len228228+ * param; the resulting MTU value is still output in mtu_len after the call.229229+ *230230+ * The input length value is an L3 size, like the MTU.231231+ */232232+ __u32 mtu_len = GLOBAL_USER_MTU;233233+234234+ mtu_len += 1; /* Exceed by 1 */235235+236236+ err = bpf_check_mtu(ctx, ifindex, &mtu_len, 0, 0);237237+ if (err == BPF_MTU_CHK_RET_FRAG_NEEDED)238238+ retval = BPF_OK; /* Success in exceeding MTU check */243239244240 global_bpf_mtu_xdp = mtu_len;245241 return retval;
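The recurring comment about L2 vs L3 lengths is the easy-to-miss part of this API: XDP's `data_end - data` is an L2 frame length, while the helper compares L3 sizes. A toy Python model of that bookkeeping — `check_mtu` here is a stand-in for the in-kernel comparison, not the real `bpf_check_mtu()` helper:

```python
ETH_HLEN = 14  # Ethernet header length; XDP data_end - data is an L2 size

def check_mtu(mtu_len: int, mtu: int) -> bool:
    """Stand-in for the helper's input-length check: mtu_len is an L3
    size (like iph->tot_len), so it passes when it fits within the MTU."""
    return mtu_len <= mtu

# xdp_input_len: convert the L2 frame length to L3 before checking.
xdp_data_len = 1514
assert check_mtu(xdp_data_len - ETH_HLEN, 1500)

# *_input_len_exceed: one byte over the MTU must fail (FRAG_NEEDED).
assert not check_mtu(1500 + 1, 1500)
```

The same arithmetic explains why the TC variants pass `GLOBAL_USER_MTU` directly: at that layer the test already holds an L3 length.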