Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'net-6.12-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Paolo Abeni:
"Including fixes from netfilter, xfrm and bluetooth.

Oddly this includes a fix for a posix clock regression; in our
previous PR we included a change there as a prerequisite for a
networking one. That fix proved to be buggy and requires the
follow-up included here. Thomas suggested we should send it, given we
sent the buggy patch.

Current release - regressions:

- posix-clock: Fix unbalanced locking in pc_clock_settime()

- netfilter: fix typo causing some targets not to load on IPv6

Current release - new code bugs:

- xfrm: policy: remove last remnants of pernet inexact list

Previous releases - regressions:

- core: fix races in netdev_tx_sent_queue()/dev_watchdog()

- bluetooth: fix UAF on sco_sock_timeout

- eth: hv_netvsc: fix VF namespace also in synthetic NIC
NETDEV_REGISTER event

- eth: usbnet: fix name regression

- eth: be2net: fix potential memory leak in be_xmit()

- eth: plip: fix transmit path breakage

Previous releases - always broken:

- sched: deny mismatched skip_sw/skip_hw flags for actions created by
classifiers

- netfilter: bpf: must hold reference on net namespace

- eth: virtio_net: fix integer overflow in stats

- eth: bnxt_en: replace ptp_lock with irqsave variant

- eth: octeon_ep: add SKB allocation failures handling in
__octep_oq_process_rx()

Misc:

- MAINTAINERS: add Simon as an official reviewer"

* tag 'net-6.12-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (40 commits)
net: dsa: mv88e6xxx: support 4000ps cycle counter period
net: dsa: mv88e6xxx: read cycle counter period from hardware
net: dsa: mv88e6xxx: group cycle counter coefficients
net: usb: qmi_wwan: add Fibocom FG132 0x0112 composition
hv_netvsc: Fix VF namespace also in synthetic NIC NETDEV_REGISTER event
net: dsa: microchip: disable EEE for KSZ879x/KSZ877x/KSZ876x
Bluetooth: ISO: Fix UAF on iso_sock_timeout
Bluetooth: SCO: Fix UAF on sco_sock_timeout
Bluetooth: hci_core: Disable works on hci_unregister_dev
posix-clock: posix-clock: Fix unbalanced locking in pc_clock_settime()
r8169: avoid unsolicited interrupts
net: sched: use RCU read-side critical section in taprio_dump()
net: sched: fix use-after-free in taprio_change()
net/sched: act_api: deny mismatched skip_sw/skip_hw flags for actions created by classifiers
net: usb: usbnet: fix name regression
mlxsw: spectrum_router: fix xa_store() error checking
virtio_net: fix integer overflow in stats
net: fix races in netdev_tx_sent_queue()/dev_watchdog()
net: wwan: fix global oob in wwan_rtnl_policy
netfilter: xtables: fix typo causing some targets not to load on IPv6
...

+564 -257
+5
.mailmap
···
 Jens Axboe <axboe@kernel.dk> <axboe@meta.com>
 Jens Osterkamp <Jens.Osterkamp@de.ibm.com>
 Jernej Skrabec <jernej.skrabec@gmail.com> <jernej.skrabec@siol.net>
+Jesper Dangaard Brouer <hawk@kernel.org> <brouer@redhat.com>
+Jesper Dangaard Brouer <hawk@kernel.org> <hawk@comx.dk>
+Jesper Dangaard Brouer <hawk@kernel.org> <jbrouer@redhat.com>
+Jesper Dangaard Brouer <hawk@kernel.org> <jdb@comx.dk>
+Jesper Dangaard Brouer <hawk@kernel.org> <netoptimizer@brouer.com>
 Jessica Zhang <quic_jesszhan@quicinc.com> <jesszhan@codeaurora.org>
 Jilai Wang <quic_jilaiw@quicinc.com> <jilaiw@codeaurora.org>
 Jiri Kosina <jikos@kernel.org> <jikos@jikos.cz>
+2
MAINTAINERS
···
 M:	Eric Dumazet <edumazet@google.com>
 M:	Jakub Kicinski <kuba@kernel.org>
 M:	Paolo Abeni <pabeni@redhat.com>
+R:	Simon Horman <horms@kernel.org>
 L:	netdev@vger.kernel.org
 S:	Maintained
 P:	Documentation/process/maintainer-netdev.rst
···
 F:	lib/net_utils.c
 F:	lib/random32.c
 F:	net/
+F:	samples/pktgen/
 F:	tools/net/
 F:	tools/testing/selftests/net/
 X:	Documentation/networking/mac80211-injection.rst
+11 -10
drivers/net/dsa/microchip/ksz_common.c
···
 			return MICREL_KSZ8_P1_ERRATA;
 		break;
 	case KSZ8567_CHIP_ID:
+		/* KSZ8567R Errata DS80000752C Module 4 */
+	case KSZ8765_CHIP_ID:
+	case KSZ8794_CHIP_ID:
+	case KSZ8795_CHIP_ID:
+		/* KSZ879x/KSZ877x/KSZ876x Errata DS80000687C Module 2 */
 	case KSZ9477_CHIP_ID:
+		/* KSZ9477S Errata DS80000754A Module 4 */
 	case KSZ9567_CHIP_ID:
+		/* KSZ9567S Errata DS80000756A Module 4 */
 	case KSZ9896_CHIP_ID:
+		/* KSZ9896C Errata DS80000757A Module 3 */
 	case KSZ9897_CHIP_ID:
-		/* KSZ9477 Errata DS80000754C
-		 *
-		 * Module 4: Energy Efficient Ethernet (EEE) feature select must
-		 * be manually disabled
+		/* KSZ9897R Errata DS80000758C Module 4 */
+		/* Energy Efficient Ethernet (EEE) feature select must be manually disabled
 		 * The EEE feature is enabled by default, but it is not fully
 		 * operational. It must be manually disabled through register
 		 * controls. If not disabled, the PHY ports can auto-negotiate
 		 * to enable EEE, and this feature can cause link drops when
 		 * linked to another device supporting EEE.
 		 *
-		 * The same item appears in the errata for the KSZ9567, KSZ9896,
-		 * and KSZ9897.
-		 *
-		 * A similar item appears in the errata for the KSZ8567, but
-		 * provides an alternative workaround. For now, use the simple
-		 * workaround of disabling the EEE feature for this device too.
+		 * The same item appears in the errata for all switches above.
 		 */
 		return MICREL_NO_EEE;
 	}
+2 -4
drivers/net/dsa/mv88e6xxx/chip.h
···
 struct mv88e6xxx_avb_ops;
 struct mv88e6xxx_ptp_ops;
 struct mv88e6xxx_pcs_ops;
+struct mv88e6xxx_cc_coeffs;
 
 struct mv88e6xxx_irq {
 	u16 masked;
···
 	struct cyclecounter tstamp_cc;
 	struct timecounter tstamp_tc;
 	struct delayed_work overflow_work;
+	const struct mv88e6xxx_cc_coeffs *cc_coeffs;
 
 	struct ptp_clock *ptp_clock;
 	struct ptp_clock_info ptp_clock_info;
···
 	int arr1_sts_reg;
 	int dep_sts_reg;
 	u32 rx_filters;
-	u32 cc_shift;
-	u32 cc_mult;
-	u32 cc_mult_num;
-	u32 cc_mult_dem;
 };
 
 struct mv88e6xxx_pcs_ops {
+1
drivers/net/dsa/mv88e6xxx/port.c
···
 	ptr = shift / 8;
 	shift %= 8;
 	mask >>= ptr * 8;
+	ptr <<= 8;
 
 	err = mv88e6393x_port_policy_read(chip, port, ptr, &reg);
 	if (err)
+75 -33
drivers/net/dsa/mv88e6xxx/ptp.c
··· 18 18 19 19 #define MV88E6XXX_MAX_ADJ_PPB 1000000 20 20 21 + struct mv88e6xxx_cc_coeffs { 22 + u32 cc_shift; 23 + u32 cc_mult; 24 + u32 cc_mult_num; 25 + u32 cc_mult_dem; 26 + }; 27 + 21 28 /* Family MV88E6250: 22 29 * Raw timestamps are in units of 10-ns clock periods. 23 30 * ··· 32 25 * simplifies to 33 26 * clkadj = scaled_ppm * 2^7 / 5^5 34 27 */ 35 - #define MV88E6250_CC_SHIFT 28 36 - #define MV88E6250_CC_MULT (10 << MV88E6250_CC_SHIFT) 37 - #define MV88E6250_CC_MULT_NUM (1 << 7) 38 - #define MV88E6250_CC_MULT_DEM 3125ULL 28 + #define MV88E6XXX_CC_10NS_SHIFT 28 29 + static const struct mv88e6xxx_cc_coeffs mv88e6xxx_cc_10ns_coeffs = { 30 + .cc_shift = MV88E6XXX_CC_10NS_SHIFT, 31 + .cc_mult = 10 << MV88E6XXX_CC_10NS_SHIFT, 32 + .cc_mult_num = 1 << 7, 33 + .cc_mult_dem = 3125ULL, 34 + }; 39 35 40 - /* Other families: 36 + /* Other families except MV88E6393X in internal clock mode: 41 37 * Raw timestamps are in units of 8-ns clock periods. 42 38 * 43 39 * clkadj = scaled_ppm * 8*2^28 / (10^6 * 2^16) 44 40 * simplifies to 45 41 * clkadj = scaled_ppm * 2^9 / 5^6 46 42 */ 47 - #define MV88E6XXX_CC_SHIFT 28 48 - #define MV88E6XXX_CC_MULT (8 << MV88E6XXX_CC_SHIFT) 49 - #define MV88E6XXX_CC_MULT_NUM (1 << 9) 50 - #define MV88E6XXX_CC_MULT_DEM 15625ULL 43 + #define MV88E6XXX_CC_8NS_SHIFT 28 44 + static const struct mv88e6xxx_cc_coeffs mv88e6xxx_cc_8ns_coeffs = { 45 + .cc_shift = MV88E6XXX_CC_8NS_SHIFT, 46 + .cc_mult = 8 << MV88E6XXX_CC_8NS_SHIFT, 47 + .cc_mult_num = 1 << 9, 48 + .cc_mult_dem = 15625ULL 49 + }; 50 + 51 + /* Family MV88E6393X using internal clock: 52 + * Raw timestamps are in units of 4-ns clock periods. 
53 + * 54 + * clkadj = scaled_ppm * 4*2^28 / (10^6 * 2^16) 55 + * simplifies to 56 + * clkadj = scaled_ppm * 2^8 / 5^6 57 + */ 58 + #define MV88E6XXX_CC_4NS_SHIFT 28 59 + static const struct mv88e6xxx_cc_coeffs mv88e6xxx_cc_4ns_coeffs = { 60 + .cc_shift = MV88E6XXX_CC_4NS_SHIFT, 61 + .cc_mult = 4 << MV88E6XXX_CC_4NS_SHIFT, 62 + .cc_mult_num = 1 << 8, 63 + .cc_mult_dem = 15625ULL 64 + }; 51 65 52 66 #define TAI_EVENT_WORK_INTERVAL msecs_to_jiffies(100) 53 67 ··· 109 81 return err; 110 82 111 83 return chip->info->ops->gpio_ops->set_pctl(chip, pin, func); 84 + } 85 + 86 + static const struct mv88e6xxx_cc_coeffs * 87 + mv88e6xxx_cc_coeff_get(struct mv88e6xxx_chip *chip) 88 + { 89 + u16 period_ps; 90 + int err; 91 + 92 + err = mv88e6xxx_tai_read(chip, MV88E6XXX_TAI_CLOCK_PERIOD, &period_ps, 1); 93 + if (err) { 94 + dev_err(chip->dev, "failed to read cycle counter period: %d\n", 95 + err); 96 + return ERR_PTR(err); 97 + } 98 + 99 + switch (period_ps) { 100 + case 4000: 101 + return &mv88e6xxx_cc_4ns_coeffs; 102 + case 8000: 103 + return &mv88e6xxx_cc_8ns_coeffs; 104 + case 10000: 105 + return &mv88e6xxx_cc_10ns_coeffs; 106 + default: 107 + dev_err(chip->dev, "unexpected cycle counter period of %u ps\n", 108 + period_ps); 109 + return ERR_PTR(-ENODEV); 110 + } 112 111 } 113 112 114 113 static u64 mv88e6352_ptp_clock_read(const struct cyclecounter *cc) ··· 259 204 static int mv88e6xxx_ptp_adjfine(struct ptp_clock_info *ptp, long scaled_ppm) 260 205 { 261 206 struct mv88e6xxx_chip *chip = ptp_to_chip(ptp); 262 - const struct mv88e6xxx_ptp_ops *ptp_ops = chip->info->ops->ptp_ops; 263 207 int neg_adj = 0; 264 208 u32 diff, mult; 265 209 u64 adj; ··· 268 214 scaled_ppm = -scaled_ppm; 269 215 } 270 216 271 - mult = ptp_ops->cc_mult; 272 - adj = ptp_ops->cc_mult_num; 217 + mult = chip->cc_coeffs->cc_mult; 218 + adj = chip->cc_coeffs->cc_mult_num; 273 219 adj *= scaled_ppm; 274 - diff = div_u64(adj, ptp_ops->cc_mult_dem); 220 + diff = div_u64(adj, chip->cc_coeffs->cc_mult_dem); 
275 221 276 222 mv88e6xxx_reg_lock(chip); 277 223 ··· 418 364 (1 << HWTSTAMP_FILTER_PTP_V2_EVENT) | 419 365 (1 << HWTSTAMP_FILTER_PTP_V2_SYNC) | 420 366 (1 << HWTSTAMP_FILTER_PTP_V2_DELAY_REQ), 421 - .cc_shift = MV88E6XXX_CC_SHIFT, 422 - .cc_mult = MV88E6XXX_CC_MULT, 423 - .cc_mult_num = MV88E6XXX_CC_MULT_NUM, 424 - .cc_mult_dem = MV88E6XXX_CC_MULT_DEM, 425 367 }; 426 368 427 369 const struct mv88e6xxx_ptp_ops mv88e6250_ptp_ops = { ··· 441 391 (1 << HWTSTAMP_FILTER_PTP_V2_EVENT) | 442 392 (1 << HWTSTAMP_FILTER_PTP_V2_SYNC) | 443 393 (1 << HWTSTAMP_FILTER_PTP_V2_DELAY_REQ), 444 - .cc_shift = MV88E6250_CC_SHIFT, 445 - .cc_mult = MV88E6250_CC_MULT, 446 - .cc_mult_num = MV88E6250_CC_MULT_NUM, 447 - .cc_mult_dem = MV88E6250_CC_MULT_DEM, 448 394 }; 449 395 450 396 const struct mv88e6xxx_ptp_ops mv88e6352_ptp_ops = { ··· 464 418 (1 << HWTSTAMP_FILTER_PTP_V2_EVENT) | 465 419 (1 << HWTSTAMP_FILTER_PTP_V2_SYNC) | 466 420 (1 << HWTSTAMP_FILTER_PTP_V2_DELAY_REQ), 467 - .cc_shift = MV88E6XXX_CC_SHIFT, 468 - .cc_mult = MV88E6XXX_CC_MULT, 469 - .cc_mult_num = MV88E6XXX_CC_MULT_NUM, 470 - .cc_mult_dem = MV88E6XXX_CC_MULT_DEM, 471 421 }; 472 422 473 423 const struct mv88e6xxx_ptp_ops mv88e6390_ptp_ops = { ··· 488 446 (1 << HWTSTAMP_FILTER_PTP_V2_EVENT) | 489 447 (1 << HWTSTAMP_FILTER_PTP_V2_SYNC) | 490 448 (1 << HWTSTAMP_FILTER_PTP_V2_DELAY_REQ), 491 - .cc_shift = MV88E6XXX_CC_SHIFT, 492 - .cc_mult = MV88E6XXX_CC_MULT, 493 - .cc_mult_num = MV88E6XXX_CC_MULT_NUM, 494 - .cc_mult_dem = MV88E6XXX_CC_MULT_DEM, 495 449 }; 496 450 497 451 static u64 mv88e6xxx_ptp_clock_read(const struct cyclecounter *cc) ··· 500 462 return 0; 501 463 } 502 464 503 - /* With a 125MHz input clock, the 32-bit timestamp counter overflows in ~34.3 465 + /* With a 250MHz input clock, the 32-bit timestamp counter overflows in ~17.2 504 466 * seconds; this task forces periodic reads so that we don't miss any. 
505 467 */ 506 - #define MV88E6XXX_TAI_OVERFLOW_PERIOD (HZ * 16) 468 + #define MV88E6XXX_TAI_OVERFLOW_PERIOD (HZ * 8) 507 469 static void mv88e6xxx_ptp_overflow_check(struct work_struct *work) 508 470 { 509 471 struct delayed_work *dw = to_delayed_work(work); ··· 522 484 int i; 523 485 524 486 /* Set up the cycle counter */ 487 + chip->cc_coeffs = mv88e6xxx_cc_coeff_get(chip); 488 + if (IS_ERR(chip->cc_coeffs)) 489 + return PTR_ERR(chip->cc_coeffs); 490 + 525 491 memset(&chip->tstamp_cc, 0, sizeof(chip->tstamp_cc)); 526 492 chip->tstamp_cc.read = mv88e6xxx_ptp_clock_read; 527 493 chip->tstamp_cc.mask = CYCLECOUNTER_MASK(32); 528 - chip->tstamp_cc.mult = ptp_ops->cc_mult; 529 - chip->tstamp_cc.shift = ptp_ops->cc_shift; 494 + chip->tstamp_cc.mult = chip->cc_coeffs->cc_mult; 495 + chip->tstamp_cc.shift = chip->cc_coeffs->cc_shift; 530 496 531 497 timecounter_init(&chip->tstamp_tc, &chip->tstamp_cc, 532 498 ktime_to_ns(ktime_get_real()));
+14 -8
drivers/net/ethernet/broadcom/bnxt/bnxt.c
···
 
 	if (!bnxt_get_rx_ts_p5(bp, &ts, cmpl_ts)) {
 		struct bnxt_ptp_cfg *ptp = bp->ptp_cfg;
+		unsigned long flags;
 
-		spin_lock_bh(&ptp->ptp_lock);
+		spin_lock_irqsave(&ptp->ptp_lock, flags);
 		ns = timecounter_cyc2time(&ptp->tc, ts);
-		spin_unlock_bh(&ptp->ptp_lock);
+		spin_unlock_irqrestore(&ptp->ptp_lock, flags);
 		memset(skb_hwtstamps(skb), 0,
 		       sizeof(*skb_hwtstamps(skb)));
 		skb_hwtstamps(skb)->hwtstamp = ns_to_ktime(ns);
···
 	case ASYNC_EVENT_CMPL_PHC_UPDATE_EVENT_DATA1_FLAGS_PHC_RTC_UPDATE:
 		if (BNXT_PTP_USE_RTC(bp)) {
 			struct bnxt_ptp_cfg *ptp = bp->ptp_cfg;
+			unsigned long flags;
 			u64 ns;
 
 			if (!ptp)
 				goto async_event_process_exit;
 
-			spin_lock_bh(&ptp->ptp_lock);
+			spin_lock_irqsave(&ptp->ptp_lock, flags);
 			bnxt_ptp_update_current_time(bp);
 			ns = (((u64)BNXT_EVENT_PHC_RTC_UPDATE(data1) <<
 			       BNXT_PHC_BITS) | ptp->current_time);
 			bnxt_ptp_rtc_timecounter_init(ptp, ns);
-			spin_unlock_bh(&ptp->ptp_lock);
+			spin_unlock_irqrestore(&ptp->ptp_lock, flags);
 		}
 		break;
 	}
···
 		return;
 
 	if (ptp) {
-		spin_lock_bh(&ptp->ptp_lock);
+		unsigned long flags;
+
+		spin_lock_irqsave(&ptp->ptp_lock, flags);
 		set_bit(BNXT_STATE_IN_FW_RESET, &bp->state);
-		spin_unlock_bh(&ptp->ptp_lock);
+		spin_unlock_irqrestore(&ptp->ptp_lock, flags);
 	} else {
 		set_bit(BNXT_STATE_IN_FW_RESET, &bp->state);
 	}
···
 	int n = 0, tmo;
 
 	if (ptp) {
-		spin_lock_bh(&ptp->ptp_lock);
+		unsigned long flags;
+
+		spin_lock_irqsave(&ptp->ptp_lock, flags);
 		set_bit(BNXT_STATE_IN_FW_RESET, &bp->state);
-		spin_unlock_bh(&ptp->ptp_lock);
+		spin_unlock_irqrestore(&ptp->ptp_lock, flags);
 	} else {
 		set_bit(BNXT_STATE_IN_FW_RESET, &bp->state);
 	}
+42 -28
drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
··· 62 62 struct bnxt_ptp_cfg *ptp = container_of(ptp_info, struct bnxt_ptp_cfg, 63 63 ptp_info); 64 64 u64 ns = timespec64_to_ns(ts); 65 + unsigned long flags; 65 66 66 67 if (BNXT_PTP_USE_RTC(ptp->bp)) 67 68 return bnxt_ptp_cfg_settime(ptp->bp, ns); 68 69 69 - spin_lock_bh(&ptp->ptp_lock); 70 + spin_lock_irqsave(&ptp->ptp_lock, flags); 70 71 timecounter_init(&ptp->tc, &ptp->cc, ns); 71 - spin_unlock_bh(&ptp->ptp_lock); 72 + spin_unlock_irqrestore(&ptp->ptp_lock, flags); 72 73 return 0; 73 74 } 74 75 ··· 101 100 static void bnxt_ptp_get_current_time(struct bnxt *bp) 102 101 { 103 102 struct bnxt_ptp_cfg *ptp = bp->ptp_cfg; 103 + unsigned long flags; 104 104 105 105 if (!ptp) 106 106 return; 107 - spin_lock_bh(&ptp->ptp_lock); 107 + spin_lock_irqsave(&ptp->ptp_lock, flags); 108 108 WRITE_ONCE(ptp->old_time, ptp->current_time); 109 109 bnxt_refclk_read(bp, NULL, &ptp->current_time); 110 - spin_unlock_bh(&ptp->ptp_lock); 110 + spin_unlock_irqrestore(&ptp->ptp_lock, flags); 111 111 } 112 112 113 113 static int bnxt_hwrm_port_ts_query(struct bnxt *bp, u32 flags, u64 *ts, ··· 151 149 { 152 150 struct bnxt_ptp_cfg *ptp = container_of(ptp_info, struct bnxt_ptp_cfg, 153 151 ptp_info); 152 + unsigned long flags; 154 153 u64 ns, cycles; 155 154 int rc; 156 155 157 - spin_lock_bh(&ptp->ptp_lock); 156 + spin_lock_irqsave(&ptp->ptp_lock, flags); 158 157 rc = bnxt_refclk_read(ptp->bp, sts, &cycles); 159 158 if (rc) { 160 - spin_unlock_bh(&ptp->ptp_lock); 159 + spin_unlock_irqrestore(&ptp->ptp_lock, flags); 161 160 return rc; 162 161 } 163 162 ns = timecounter_cyc2time(&ptp->tc, cycles); 164 - spin_unlock_bh(&ptp->ptp_lock); 163 + spin_unlock_irqrestore(&ptp->ptp_lock, flags); 165 164 *ts = ns_to_timespec64(ns); 166 165 167 166 return 0; ··· 180 177 static int bnxt_ptp_adjphc(struct bnxt_ptp_cfg *ptp, s64 delta) 181 178 { 182 179 struct hwrm_port_mac_cfg_input *req; 180 + unsigned long flags; 183 181 int rc; 184 182 185 183 rc = hwrm_req_init(ptp->bp, req, HWRM_PORT_MAC_CFG); ··· 
194 190 if (rc) { 195 191 netdev_err(ptp->bp->dev, "ptp adjphc failed. rc = %x\n", rc); 196 192 } else { 197 - spin_lock_bh(&ptp->ptp_lock); 193 + spin_lock_irqsave(&ptp->ptp_lock, flags); 198 194 bnxt_ptp_update_current_time(ptp->bp); 199 - spin_unlock_bh(&ptp->ptp_lock); 195 + spin_unlock_irqrestore(&ptp->ptp_lock, flags); 200 196 } 201 197 202 198 return rc; ··· 206 202 { 207 203 struct bnxt_ptp_cfg *ptp = container_of(ptp_info, struct bnxt_ptp_cfg, 208 204 ptp_info); 205 + unsigned long flags; 209 206 210 207 if (BNXT_PTP_USE_RTC(ptp->bp)) 211 208 return bnxt_ptp_adjphc(ptp, delta); 212 209 213 - spin_lock_bh(&ptp->ptp_lock); 210 + spin_lock_irqsave(&ptp->ptp_lock, flags); 214 211 timecounter_adjtime(&ptp->tc, delta); 215 - spin_unlock_bh(&ptp->ptp_lock); 212 + spin_unlock_irqrestore(&ptp->ptp_lock, flags); 216 213 return 0; 217 214 } 218 215 ··· 241 236 struct bnxt_ptp_cfg *ptp = container_of(ptp_info, struct bnxt_ptp_cfg, 242 237 ptp_info); 243 238 struct bnxt *bp = ptp->bp; 239 + unsigned long flags; 244 240 245 241 if (!BNXT_MH(bp)) 246 242 return bnxt_ptp_adjfine_rtc(bp, scaled_ppm); 247 243 248 - spin_lock_bh(&ptp->ptp_lock); 244 + spin_lock_irqsave(&ptp->ptp_lock, flags); 249 245 timecounter_read(&ptp->tc); 250 246 ptp->cc.mult = adjust_by_scaled_ppm(ptp->cmult, scaled_ppm); 251 - spin_unlock_bh(&ptp->ptp_lock); 247 + spin_unlock_irqrestore(&ptp->ptp_lock, flags); 252 248 return 0; 253 249 } 254 250 ··· 257 251 { 258 252 struct bnxt_ptp_cfg *ptp = bp->ptp_cfg; 259 253 struct ptp_clock_event event; 254 + unsigned long flags; 260 255 u64 ns, pps_ts; 261 256 262 257 pps_ts = EVENT_PPS_TS(data2, data1); 263 - spin_lock_bh(&ptp->ptp_lock); 258 + spin_lock_irqsave(&ptp->ptp_lock, flags); 264 259 ns = timecounter_cyc2time(&ptp->tc, pps_ts); 265 - spin_unlock_bh(&ptp->ptp_lock); 260 + spin_unlock_irqrestore(&ptp->ptp_lock, flags); 266 261 267 262 switch (EVENT_DATA2_PPS_EVENT_TYPE(data2)) { 268 263 case 
ASYNC_EVENT_CMPL_PPS_TIMESTAMP_EVENT_DATA2_EVENT_TYPE_INTERNAL: ··· 400 393 { 401 394 u64 cycles_now; 402 395 u64 nsec_now, nsec_delta; 396 + unsigned long flags; 403 397 int rc; 404 398 405 - spin_lock_bh(&ptp->ptp_lock); 399 + spin_lock_irqsave(&ptp->ptp_lock, flags); 406 400 rc = bnxt_refclk_read(ptp->bp, NULL, &cycles_now); 407 401 if (rc) { 408 - spin_unlock_bh(&ptp->ptp_lock); 402 + spin_unlock_irqrestore(&ptp->ptp_lock, flags); 409 403 return rc; 410 404 } 411 405 nsec_now = timecounter_cyc2time(&ptp->tc, cycles_now); 412 - spin_unlock_bh(&ptp->ptp_lock); 406 + spin_unlock_irqrestore(&ptp->ptp_lock, flags); 413 407 414 408 nsec_delta = target_ns - nsec_now; 415 409 *cycles_delta = div64_u64(nsec_delta << ptp->cc.shift, ptp->cc.mult); ··· 697 689 struct skb_shared_hwtstamps timestamp; 698 690 struct bnxt_ptp_tx_req *txts_req; 699 691 unsigned long now = jiffies; 692 + unsigned long flags; 700 693 u64 ts = 0, ns = 0; 701 694 u32 tmo = 0; 702 695 int rc; ··· 711 702 tmo, slot); 712 703 if (!rc) { 713 704 memset(&timestamp, 0, sizeof(timestamp)); 714 - spin_lock_bh(&ptp->ptp_lock); 705 + spin_lock_irqsave(&ptp->ptp_lock, flags); 715 706 ns = timecounter_cyc2time(&ptp->tc, ts); 716 - spin_unlock_bh(&ptp->ptp_lock); 707 + spin_unlock_irqrestore(&ptp->ptp_lock, flags); 717 708 timestamp.hwtstamp = ns_to_ktime(ns); 718 709 skb_tstamp_tx(txts_req->tx_skb, &timestamp); 719 710 ptp->stats.ts_pkts++; ··· 739 730 unsigned long now = jiffies; 740 731 struct bnxt *bp = ptp->bp; 741 732 u16 cons = ptp->txts_cons; 733 + unsigned long flags; 742 734 u32 num_requests; 743 735 int rc = 0; 744 736 ··· 767 757 bnxt_ptp_get_current_time(bp); 768 758 ptp->next_period = now + HZ; 769 759 if (time_after_eq(now, ptp->next_overflow_check)) { 770 - spin_lock_bh(&ptp->ptp_lock); 760 + spin_lock_irqsave(&ptp->ptp_lock, flags); 771 761 timecounter_read(&ptp->tc); 772 - spin_unlock_bh(&ptp->ptp_lock); 762 + spin_unlock_irqrestore(&ptp->ptp_lock, flags); 773 763 ptp->next_overflow_check = 
now + BNXT_PHC_OVERFLOW_PERIOD; 774 764 } 775 765 if (rc == -EAGAIN) ··· 829 819 u32 opaque = tscmp->tx_ts_cmp_opaque; 830 820 struct bnxt_tx_ring_info *txr; 831 821 struct bnxt_sw_tx_bd *tx_buf; 822 + unsigned long flags; 832 823 u64 ts, ns; 833 824 u16 cons; 834 825 ··· 844 833 le32_to_cpu(tscmp->tx_ts_cmp_flags_type), 845 834 le32_to_cpu(tscmp->tx_ts_cmp_errors_v)); 846 835 } else { 847 - spin_lock_bh(&ptp->ptp_lock); 836 + spin_lock_irqsave(&ptp->ptp_lock, flags); 848 837 ns = timecounter_cyc2time(&ptp->tc, ts); 849 - spin_unlock_bh(&ptp->ptp_lock); 838 + spin_unlock_irqrestore(&ptp->ptp_lock, flags); 850 839 timestamp.hwtstamp = ns_to_ktime(ns); 851 840 skb_tstamp_tx(tx_buf->skb, &timestamp); 852 841 } ··· 986 975 int bnxt_ptp_init_rtc(struct bnxt *bp, bool phc_cfg) 987 976 { 988 977 struct timespec64 tsp; 978 + unsigned long flags; 989 979 u64 ns; 990 980 int rc; 991 981 ··· 1005 993 if (rc) 1006 994 return rc; 1007 995 } 1008 - spin_lock_bh(&bp->ptp_cfg->ptp_lock); 996 + spin_lock_irqsave(&bp->ptp_cfg->ptp_lock, flags); 1009 997 bnxt_ptp_rtc_timecounter_init(bp->ptp_cfg, ns); 1010 - spin_unlock_bh(&bp->ptp_cfg->ptp_lock); 998 + spin_unlock_irqrestore(&bp->ptp_cfg->ptp_lock, flags); 1011 999 1012 1000 return 0; 1013 1001 } ··· 1075 1063 atomic64_set(&ptp->stats.ts_err, 0); 1076 1064 1077 1065 if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) { 1078 - spin_lock_bh(&ptp->ptp_lock); 1066 + unsigned long flags; 1067 + 1068 + spin_lock_irqsave(&ptp->ptp_lock, flags); 1079 1069 bnxt_refclk_read(bp, NULL, &ptp->current_time); 1080 1070 WRITE_ONCE(ptp->old_time, ptp->current_time); 1081 - spin_unlock_bh(&ptp->ptp_lock); 1071 + spin_unlock_irqrestore(&ptp->ptp_lock, flags); 1082 1072 ptp_schedule_worker(ptp->ptp_clock, 0); 1083 1073 } 1084 1074 ptp->txts_tmo = BNXT_PTP_DFLT_TX_TMO;
+7 -5
drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h
···
 };
 
 #if BITS_PER_LONG == 32
-#define BNXT_READ_TIME64(ptp, dst, src)		\
-do {						\
-	spin_lock_bh(&(ptp)->ptp_lock);		\
-	(dst) = (src);				\
-	spin_unlock_bh(&(ptp)->ptp_lock);	\
+#define BNXT_READ_TIME64(ptp, dst, src)			\
+do {							\
+	unsigned long flags;				\
+							\
+	spin_lock_irqsave(&(ptp)->ptp_lock, flags);	\
+	(dst) = (src);					\
+	spin_unlock_irqrestore(&(ptp)->ptp_lock, flags);\
 } while (0)
 #else
 #define BNXT_READ_TIME64(ptp, dst, src)		\
+5 -5
drivers/net/ethernet/emulex/benet/be_main.c
···
 	be_get_wrb_params_from_skb(adapter, skb, &wrb_params);
 
 	wrb_cnt = be_xmit_enqueue(adapter, txo, skb, &wrb_params);
-	if (unlikely(!wrb_cnt)) {
-		dev_kfree_skb_any(skb);
-		goto drop;
-	}
+	if (unlikely(!wrb_cnt))
+		goto drop_skb;
 
 	/* if os2bmc is enabled and if the pkt is destined to bmc,
 	 * enqueue the pkt a 2nd time with mgmt bit set.
···
 		BE_WRB_F_SET(wrb_params.features, OS2BMC, 1);
 		wrb_cnt = be_xmit_enqueue(adapter, txo, skb, &wrb_params);
 		if (unlikely(!wrb_cnt))
-			goto drop;
+			goto drop_skb;
 		else
 			skb_get(skb);
 	}
···
 		be_xmit_flush(adapter, txo);
 
 	return NETDEV_TX_OK;
+drop_skb:
+	dev_kfree_skb_any(skb);
 drop:
 	tx_stats(txo)->tx_drv_drops++;
 	/* Flush the already enqueued tx requests */
+51 -17
drivers/net/ethernet/freescale/fman/mac.c
··· 197 197 err = -EINVAL; 198 198 goto _return_of_node_put; 199 199 } 200 + mac_dev->fman_dev = &of_dev->dev; 200 201 201 202 /* Get the FMan cell-index */ 202 203 err = of_property_read_u32(dev_node, "cell-index", &val); 203 204 if (err) { 204 205 dev_err(dev, "failed to read cell-index for %pOF\n", dev_node); 205 206 err = -EINVAL; 206 - goto _return_of_node_put; 207 + goto _return_dev_put; 207 208 } 208 209 /* cell-index 0 => FMan id 1 */ 209 210 fman_id = (u8)(val + 1); 210 211 211 - priv->fman = fman_bind(&of_dev->dev); 212 + priv->fman = fman_bind(mac_dev->fman_dev); 212 213 if (!priv->fman) { 213 214 dev_err(dev, "fman_bind(%pOF) failed\n", dev_node); 214 215 err = -ENODEV; 215 - goto _return_of_node_put; 216 + goto _return_dev_put; 216 217 } 217 218 219 + /* Two references have been taken in of_find_device_by_node() 220 + * and fman_bind(). Release one of them here. The second one 221 + * will be released in mac_remove(). 222 + */ 223 + put_device(mac_dev->fman_dev); 218 224 of_node_put(dev_node); 225 + dev_node = NULL; 219 226 220 227 /* Get the address of the memory mapped registers */ 221 228 mac_dev->res = platform_get_mem_or_io(_of_dev, 0); 222 229 if (!mac_dev->res) { 223 230 dev_err(dev, "could not get registers\n"); 224 - return -EINVAL; 231 + err = -EINVAL; 232 + goto _return_dev_put; 225 233 } 226 234 227 235 err = devm_request_resource(dev, fman_get_mem_region(priv->fman), 228 236 mac_dev->res); 229 237 if (err) { 230 238 dev_err_probe(dev, err, "could not request resource\n"); 231 - return err; 239 + goto _return_dev_put; 232 240 } 233 241 234 242 mac_dev->vaddr = devm_ioremap(dev, mac_dev->res->start, 235 243 resource_size(mac_dev->res)); 236 244 if (!mac_dev->vaddr) { 237 245 dev_err(dev, "devm_ioremap() failed\n"); 238 - return -EIO; 246 + err = -EIO; 247 + goto _return_dev_put; 239 248 } 240 249 241 - if (!of_device_is_available(mac_node)) 242 - return -ENODEV; 250 + if (!of_device_is_available(mac_node)) { 251 + err = -ENODEV; 252 + goto 
_return_dev_put; 253 + } 243 254 244 255 /* Get the cell-index */ 245 256 err = of_property_read_u32(mac_node, "cell-index", &val); 246 257 if (err) { 247 258 dev_err(dev, "failed to read cell-index for %pOF\n", mac_node); 248 - return -EINVAL; 259 + err = -EINVAL; 260 + goto _return_dev_put; 249 261 } 250 262 priv->cell_index = (u8)val; 251 263 ··· 271 259 if (unlikely(nph < 0)) { 272 260 dev_err(dev, "of_count_phandle_with_args(%pOF, fsl,fman-ports) failed\n", 273 261 mac_node); 274 - return nph; 262 + err = nph; 263 + goto _return_dev_put; 275 264 } 276 265 277 266 if (nph != ARRAY_SIZE(mac_dev->port)) { 278 267 dev_err(dev, "Not supported number of fman-ports handles of mac node %pOF from device tree\n", 279 268 mac_node); 280 - return -EINVAL; 269 + err = -EINVAL; 270 + goto _return_dev_put; 281 271 } 282 272 283 - for (i = 0; i < ARRAY_SIZE(mac_dev->port); i++) { 273 + /* PORT_NUM determines the size of the port array */ 274 + for (i = 0; i < PORT_NUM; i++) { 284 275 /* Find the port node */ 285 276 dev_node = of_parse_phandle(mac_node, "fsl,fman-ports", i); 286 277 if (!dev_node) { 287 278 dev_err(dev, "of_parse_phandle(%pOF, fsl,fman-ports) failed\n", 288 279 mac_node); 289 - return -EINVAL; 280 + err = -EINVAL; 281 + goto _return_dev_arr_put; 290 282 } 291 283 292 284 of_dev = of_find_device_by_node(dev_node); ··· 298 282 dev_err(dev, "of_find_device_by_node(%pOF) failed\n", 299 283 dev_node); 300 284 err = -EINVAL; 301 - goto _return_of_node_put; 285 + goto _return_dev_arr_put; 302 286 } 287 + mac_dev->fman_port_devs[i] = &of_dev->dev; 303 288 304 - mac_dev->port[i] = fman_port_bind(&of_dev->dev); 289 + mac_dev->port[i] = fman_port_bind(mac_dev->fman_port_devs[i]); 305 290 if (!mac_dev->port[i]) { 306 291 dev_err(dev, "dev_get_drvdata(%pOF) failed\n", 307 292 dev_node); 308 293 err = -EINVAL; 309 - goto _return_of_node_put; 294 + goto _return_dev_arr_put; 310 295 } 296 + /* Two references have been taken in of_find_device_by_node() 297 + * and 
fman_port_bind(). Release one of them here. The second 298 + * one will be released in mac_remove(). 299 + */ 300 + put_device(mac_dev->fman_port_devs[i]); 311 301 of_node_put(dev_node); 302 + dev_node = NULL; 312 303 } 313 304 314 305 /* Get the PHY connection type */ ··· 335 312 336 313 err = init(mac_dev, mac_node, &params); 337 314 if (err < 0) 338 - return err; 315 + goto _return_dev_arr_put; 339 316 340 317 if (!is_zero_ether_addr(mac_dev->addr)) 341 318 dev_info(dev, "FMan MAC address: %pM\n", mac_dev->addr); ··· 350 327 351 328 return err; 352 329 330 + _return_dev_arr_put: 331 + /* mac_dev is kzalloc'ed */ 332 + for (i = 0; i < PORT_NUM; i++) 333 + put_device(mac_dev->fman_port_devs[i]); 334 + _return_dev_put: 335 + put_device(mac_dev->fman_dev); 353 336 _return_of_node_put: 354 337 of_node_put(dev_node); 355 338 return err; ··· 364 335 static void mac_remove(struct platform_device *pdev) 365 336 { 366 337 struct mac_device *mac_dev = platform_get_drvdata(pdev); 338 + int i; 339 + 340 + for (i = 0; i < PORT_NUM; i++) 341 + put_device(mac_dev->fman_port_devs[i]); 342 + put_device(mac_dev->fman_dev); 367 343 368 344 platform_device_unregister(mac_dev->priv->eth_dev); 369 345 }
+5 -1
drivers/net/ethernet/freescale/fman/mac.h
···
 struct fman_mac;
 struct mac_priv_s;
 
+#define PORT_NUM 2
 struct mac_device {
 	void __iomem *vaddr;
 	struct device *dev;
 	struct resource *res;
 	u8 addr[ETH_ALEN];
-	struct fman_port *port[2];
+	struct fman_port *port[PORT_NUM];
 	struct phylink *phylink;
 	struct phylink_config phylink_config;
 	phy_interface_t phy_if;
···
 
 	struct fman_mac *fman_mac;
 	struct mac_priv_s *priv;
+
+	struct device *fman_dev;
+	struct device *fman_port_devs[PORT_NUM];
 };
 
 static inline struct mac_device
+1
drivers/net/ethernet/i825xx/sun3_82586.c
···
 	if(skb->len > XMIT_BUFF_SIZE)
 	{
 		printk("%s: Sorry, max. framelength is %d bytes. The length of your frame is %d bytes.\n",dev->name,XMIT_BUFF_SIZE,skb->len);
+		dev_kfree_skb(skb);
 		return NETDEV_TX_OK;
 	}
 
+59 -23
drivers/net/ethernet/marvell/octeon_ep/octep_rx.c
··· 337 337 } 338 338 339 339 /** 340 + * octep_oq_next_pkt() - Move to the next packet in Rx queue. 341 + * 342 + * @oq: Octeon Rx queue data structure. 343 + * @buff_info: Current packet buffer info. 344 + * @read_idx: Current packet index in the ring. 345 + * @desc_used: Current packet descriptor number. 346 + * 347 + * Free the resources associated with a packet. 348 + * Increment packet index in the ring and packet descriptor number. 349 + */ 350 + static void octep_oq_next_pkt(struct octep_oq *oq, 351 + struct octep_rx_buffer *buff_info, 352 + u32 *read_idx, u32 *desc_used) 353 + { 354 + dma_unmap_page(oq->dev, oq->desc_ring[*read_idx].buffer_ptr, 355 + PAGE_SIZE, DMA_FROM_DEVICE); 356 + buff_info->page = NULL; 357 + (*read_idx)++; 358 + (*desc_used)++; 359 + if (*read_idx == oq->max_count) 360 + *read_idx = 0; 361 + } 362 + 363 + /** 364 + * octep_oq_drop_rx() - Free the resources associated with a packet. 365 + * 366 + * @oq: Octeon Rx queue data structure. 367 + * @buff_info: Current packet buffer info. 368 + * @read_idx: Current packet index in the ring. 369 + * @desc_used: Current packet descriptor number. 370 + * 371 + */ 372 + static void octep_oq_drop_rx(struct octep_oq *oq, 373 + struct octep_rx_buffer *buff_info, 374 + u32 *read_idx, u32 *desc_used) 375 + { 376 + int data_len = buff_info->len - oq->max_single_buffer_size; 377 + 378 + while (data_len > 0) { 379 + octep_oq_next_pkt(oq, buff_info, read_idx, desc_used); 380 + data_len -= oq->buffer_size; 381 + }; 382 + } 383 + 384 + /** 340 385 * __octep_oq_process_rx() - Process hardware Rx queue and push to stack. 341 386 * 342 387 * @oct: Octeon device private data structure. 
··· 412 367 desc_used = 0; 413 368 for (pkt = 0; pkt < pkts_to_process; pkt++) { 414 369 buff_info = (struct octep_rx_buffer *)&oq->buff_info[read_idx]; 415 - dma_unmap_page(oq->dev, oq->desc_ring[read_idx].buffer_ptr, 416 - PAGE_SIZE, DMA_FROM_DEVICE); 417 370 resp_hw = page_address(buff_info->page); 418 - buff_info->page = NULL; 419 371 420 372 /* Swap the length field that is in Big-Endian to CPU */ 421 373 buff_info->len = be64_to_cpu(resp_hw->length); ··· 436 394 data_offset = OCTEP_OQ_RESP_HW_SIZE; 437 395 rx_ol_flags = 0; 438 396 } 397 + 398 + octep_oq_next_pkt(oq, buff_info, &read_idx, &desc_used); 399 + 400 + skb = build_skb((void *)resp_hw, PAGE_SIZE); 401 + if (!skb) { 402 + octep_oq_drop_rx(oq, buff_info, 403 + &read_idx, &desc_used); 404 + oq->stats.alloc_failures++; 405 + continue; 406 + } 407 + skb_reserve(skb, data_offset); 408 + 439 409 rx_bytes += buff_info->len; 440 410 441 411 if (buff_info->len <= oq->max_single_buffer_size) { 442 - skb = build_skb((void *)resp_hw, PAGE_SIZE); 443 - skb_reserve(skb, data_offset); 444 412 skb_put(skb, buff_info->len); 445 - read_idx++; 446 - desc_used++; 447 - if (read_idx == oq->max_count) 448 - read_idx = 0; 449 413 } else { 450 414 struct skb_shared_info *shinfo; 451 415 u16 data_len; 452 416 453 - skb = build_skb((void *)resp_hw, PAGE_SIZE); 454 - skb_reserve(skb, data_offset); 455 417 /* Head fragment includes response header(s); 456 418 * subsequent fragments contains only data. 
457 419 */ 458 420 skb_put(skb, oq->max_single_buffer_size); 459 - read_idx++; 460 - desc_used++; 461 - if (read_idx == oq->max_count) 462 - read_idx = 0; 463 - 464 421 shinfo = skb_shinfo(skb); 465 422 data_len = buff_info->len - oq->max_single_buffer_size; 466 423 while (data_len) { 467 - dma_unmap_page(oq->dev, oq->desc_ring[read_idx].buffer_ptr, 468 - PAGE_SIZE, DMA_FROM_DEVICE); 469 424 buff_info = (struct octep_rx_buffer *) 470 425 &oq->buff_info[read_idx]; 471 426 if (data_len < oq->buffer_size) { ··· 477 438 buff_info->page, 0, 478 439 buff_info->len, 479 440 buff_info->len); 480 - buff_info->page = NULL; 481 - read_idx++; 482 - desc_used++; 483 - if (read_idx == oq->max_count) 484 - read_idx = 0; 441 + 442 + octep_oq_next_pkt(oq, buff_info, &read_idx, &desc_used); 485 443 } 486 444 } 487 445
+3 -6
drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
··· 3197 3197 { 3198 3198 struct mlxsw_sp_nexthop_group *nh_grp = nh->nhgi->nh_grp; 3199 3199 struct mlxsw_sp_nexthop_counter *nhct; 3200 - void *ptr; 3201 3200 int err; 3202 3201 3203 3202 nhct = xa_load(&nh_grp->nhgi->nexthop_counters, nh->id); ··· 3209 3210 if (IS_ERR(nhct)) 3210 3211 return nhct; 3211 3212 3212 - ptr = xa_store(&nh_grp->nhgi->nexthop_counters, nh->id, nhct, 3213 - GFP_KERNEL); 3214 - if (IS_ERR(ptr)) { 3215 - err = PTR_ERR(ptr); 3213 + err = xa_err(xa_store(&nh_grp->nhgi->nexthop_counters, nh->id, nhct, 3214 + GFP_KERNEL)); 3215 + if (err) 3216 3216 goto err_store; 3217 - } 3218 3217 3219 3218 return nhct; 3220 3219
+3 -1
drivers/net/ethernet/realtek/r8169_main.c
··· 4682 4682 if ((status & 0xffff) == 0xffff || !(status & tp->irq_mask)) 4683 4683 return IRQ_NONE; 4684 4684 4685 - if (unlikely(status & SYSErr)) { 4685 + /* At least RTL8168fp may unexpectedly set the SYSErr bit */ 4686 + if (unlikely(status & SYSErr && 4687 + tp->mac_version <= RTL_GIGA_MAC_VER_06)) { 4686 4688 rtl8169_pcierr_interrupt(tp->dev); 4687 4689 goto out; 4688 4690 }
+30
drivers/net/hyperv/netvsc_drv.c
··· 2798 2798 }, 2799 2799 }; 2800 2800 2801 + /* Set VF's namespace same as the synthetic NIC */ 2802 + static void netvsc_event_set_vf_ns(struct net_device *ndev) 2803 + { 2804 + struct net_device_context *ndev_ctx = netdev_priv(ndev); 2805 + struct net_device *vf_netdev; 2806 + int ret; 2807 + 2808 + vf_netdev = rtnl_dereference(ndev_ctx->vf_netdev); 2809 + if (!vf_netdev) 2810 + return; 2811 + 2812 + if (!net_eq(dev_net(ndev), dev_net(vf_netdev))) { 2813 + ret = dev_change_net_namespace(vf_netdev, dev_net(ndev), 2814 + "eth%d"); 2815 + if (ret) 2816 + netdev_err(vf_netdev, 2817 + "Cannot move to same namespace as %s: %d\n", 2818 + ndev->name, ret); 2819 + else 2820 + netdev_info(vf_netdev, 2821 + "Moved VF to namespace with: %s\n", 2822 + ndev->name); 2823 + } 2824 + } 2825 + 2801 2826 /* 2802 2827 * On Hyper-V, every VF interface is matched with a corresponding 2803 2828 * synthetic interface. The synthetic interface is presented first ··· 2834 2809 { 2835 2810 struct net_device *event_dev = netdev_notifier_info_to_dev(ptr); 2836 2811 int ret = 0; 2812 + 2813 + if (event_dev->netdev_ops == &device_ops && event == NETDEV_REGISTER) { 2814 + netvsc_event_set_vf_ns(event_dev); 2815 + return NOTIFY_DONE; 2816 + } 2837 2817 2838 2818 ret = check_dev_is_matching_vf(event_dev); 2839 2819 if (ret != 0)
+2 -2
drivers/net/phy/dp83822.c
··· 45 45 /* Control Register 2 bits */ 46 46 #define DP83822_FX_ENABLE BIT(14) 47 47 48 - #define DP83822_HW_RESET BIT(15) 49 - #define DP83822_SW_RESET BIT(14) 48 + #define DP83822_SW_RESET BIT(15) 49 + #define DP83822_DIG_RESTART BIT(14) 50 50 51 51 /* PHY STS bits */ 52 52 #define DP83822_PHYSTS_DUPLEX BIT(2)
+1 -1
drivers/net/plip/plip.c
··· 815 815 return HS_TIMEOUT; 816 816 } 817 817 } 818 - break; 818 + fallthrough; 819 819 820 820 case PLIP_PK_LENGTH_LSB: 821 821 if (plip_send(nibble_timeout, dev,
+2 -2
drivers/net/pse-pd/pse_core.c
··· 113 113 { 114 114 int i; 115 115 116 - for (i = 0; i <= pcdev->nr_lines; i++) { 116 + for (i = 0; i < pcdev->nr_lines; i++) { 117 117 of_node_put(pcdev->pi[i].pairset[0].np); 118 118 of_node_put(pcdev->pi[i].pairset[1].np); 119 119 of_node_put(pcdev->pi[i].np); ··· 647 647 { 648 648 int i; 649 649 650 - for (i = 0; i <= pcdev->nr_lines; i++) { 650 + for (i = 0; i < pcdev->nr_lines; i++) { 651 651 if (pcdev->pi[i].np == np) 652 652 return i; 653 653 }
+1
drivers/net/usb/qmi_wwan.c
··· 1426 1426 {QMI_FIXED_INTF(0x2c7c, 0x0296, 4)}, /* Quectel BG96 */ 1427 1427 {QMI_QUIRK_SET_DTR(0x2c7c, 0x030e, 4)}, /* Quectel EM05GV2 */ 1428 1428 {QMI_QUIRK_SET_DTR(0x2cb7, 0x0104, 4)}, /* Fibocom NL678 series */ 1429 + {QMI_QUIRK_SET_DTR(0x2cb7, 0x0112, 0)}, /* Fibocom FG132 */ 1429 1430 {QMI_FIXED_INTF(0x0489, 0xe0b4, 0)}, /* Foxconn T77W968 LTE */ 1430 1431 {QMI_FIXED_INTF(0x0489, 0xe0b5, 0)}, /* Foxconn T77W968 LTE with eSIM support*/ 1431 1432 {QMI_FIXED_INTF(0x2692, 0x9025, 4)}, /* Cellient MPL200 (rebranded Qualcomm 05c6:9025) */
+2 -1
drivers/net/usb/usbnet.c
··· 1767 1767 // can rename the link if it knows better. 1768 1768 if ((dev->driver_info->flags & FLAG_ETHER) != 0 && 1769 1769 ((dev->driver_info->flags & FLAG_POINTTOPOINT) == 0 || 1770 - (net->dev_addr [0] & 0x02) == 0)) 1770 + /* somebody touched it*/ 1771 + !is_zero_ether_addr(net->dev_addr))) 1771 1772 strscpy(net->name, "eth%d", sizeof(net->name)); 1772 1773 /* WLAN devices should always be named "wlan%d" */ 1773 1774 if ((dev->driver_info->flags & FLAG_WLAN) != 0)
+1 -1
drivers/net/virtio_net.c
··· 4155 4155 u32 desc_num[3]; 4156 4156 4157 4157 /* The actual supported stat types. */ 4158 - u32 bitmap[3]; 4158 + u64 bitmap[3]; 4159 4159 4160 4160 /* Used to calculate the reply buffer size. */ 4161 4161 u32 size[3];
+1 -1
drivers/net/wwan/wwan_core.c
··· 1038 1038 1039 1039 static struct rtnl_link_ops wwan_rtnl_link_ops __read_mostly = { 1040 1040 .kind = "wwan", 1041 - .maxtype = __IFLA_WWAN_MAX, 1041 + .maxtype = IFLA_WWAN_MAX, 1042 1042 .alloc = wwan_rtnl_alloc, 1043 1043 .validate = wwan_rtnl_validate, 1044 1044 .newlink = wwan_rtnl_newlink,
+12
include/linux/netdevice.h
··· 3325 3325 3326 3326 static __always_inline void netif_tx_stop_queue(struct netdev_queue *dev_queue) 3327 3327 { 3328 + /* Paired with READ_ONCE() from dev_watchdog() */ 3329 + WRITE_ONCE(dev_queue->trans_start, jiffies); 3330 + 3331 + /* This barrier is paired with smp_mb() from dev_watchdog() */ 3332 + smp_mb__before_atomic(); 3333 + 3328 3334 /* Must be an atomic op see netif_txq_try_stop() */ 3329 3335 set_bit(__QUEUE_STATE_DRV_XOFF, &dev_queue->state); 3330 3336 } ··· 3456 3450 3457 3451 if (likely(dql_avail(&dev_queue->dql) >= 0)) 3458 3452 return; 3453 + 3454 + /* Paired with READ_ONCE() from dev_watchdog() */ 3455 + WRITE_ONCE(dev_queue->trans_start, jiffies); 3456 + 3457 + /* This barrier is paired with smp_mb() from dev_watchdog() */ 3458 + smp_mb__before_atomic(); 3459 3459 3460 3460 set_bit(__QUEUE_STATE_STACK_XOFF, &dev_queue->state); 3461 3461
+1
include/net/bluetooth/bluetooth.h
··· 403 403 void bt_sock_unregister(int proto); 404 404 void bt_sock_link(struct bt_sock_list *l, struct sock *s); 405 405 void bt_sock_unlink(struct bt_sock_list *l, struct sock *s); 406 + bool bt_sock_linked(struct bt_sock_list *l, struct sock *s); 406 407 struct sock *bt_sock_alloc(struct net *net, struct socket *sock, 407 408 struct proto *prot, int proto, gfp_t prio, int kern); 408 409 int bt_sock_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
-1
include/net/netns/xfrm.h
··· 51 51 struct hlist_head *policy_byidx; 52 52 unsigned int policy_idx_hmask; 53 53 unsigned int idx_generator; 54 - struct hlist_head policy_inexact[XFRM_POLICY_MAX]; 55 54 struct xfrm_policy_hash policy_bydst[XFRM_POLICY_MAX]; 56 55 unsigned int policy_count[XFRM_POLICY_MAX * 2]; 57 56 struct work_struct policy_hash_work;
+15 -13
include/net/xfrm.h
··· 349 349 void xfrm_if_register_cb(const struct xfrm_if_cb *ifcb); 350 350 void xfrm_if_unregister_cb(void); 351 351 352 + struct xfrm_dst_lookup_params { 353 + struct net *net; 354 + int tos; 355 + int oif; 356 + xfrm_address_t *saddr; 357 + xfrm_address_t *daddr; 358 + u32 mark; 359 + __u8 ipproto; 360 + union flowi_uli uli; 361 + }; 362 + 352 363 struct net_device; 353 364 struct xfrm_type; 354 365 struct xfrm_dst; 355 366 struct xfrm_policy_afinfo { 356 367 struct dst_ops *dst_ops; 357 - struct dst_entry *(*dst_lookup)(struct net *net, 358 - int tos, int oif, 359 - const xfrm_address_t *saddr, 360 - const xfrm_address_t *daddr, 361 - u32 mark); 362 - int (*get_saddr)(struct net *net, int oif, 363 - xfrm_address_t *saddr, 364 - xfrm_address_t *daddr, 365 - u32 mark); 368 + struct dst_entry *(*dst_lookup)(const struct xfrm_dst_lookup_params *params); 369 + int (*get_saddr)(xfrm_address_t *saddr, 370 + const struct xfrm_dst_lookup_params *params); 366 371 int (*fill_dst)(struct xfrm_dst *xdst, 367 372 struct net_device *dev, 368 373 const struct flowi *fl); ··· 1769 1764 } 1770 1765 #endif 1771 1766 1772 - struct dst_entry *__xfrm_dst_lookup(struct net *net, int tos, int oif, 1773 - const xfrm_address_t *saddr, 1774 - const xfrm_address_t *daddr, 1775 - int family, u32 mark); 1767 + struct dst_entry *__xfrm_dst_lookup(int family, const struct xfrm_dst_lookup_params *params); 1776 1768 1777 1769 struct xfrm_policy *xfrm_policy_alloc(struct net *net, gfp_t gfp); 1778 1770
+3 -3
kernel/time/posix-clock.c
··· 309 309 struct posix_clock_desc cd; 310 310 int err; 311 311 312 + if (!timespec64_valid_strict(ts)) 313 + return -EINVAL; 314 + 312 315 err = get_clock_desc(id, &cd); 313 316 if (err) 314 317 return err; ··· 320 317 err = -EACCES; 321 318 goto out; 322 319 } 323 - 324 - if (!timespec64_valid_strict(ts)) 325 - return -EINVAL; 326 320 327 321 if (cd.clk->ops.clock_settime) 328 322 err = cd.clk->ops.clock_settime(cd.clk, ts);
+22
net/bluetooth/af_bluetooth.c
··· 185 185 } 186 186 EXPORT_SYMBOL(bt_sock_unlink); 187 187 188 + bool bt_sock_linked(struct bt_sock_list *l, struct sock *s) 189 + { 190 + struct sock *sk; 191 + 192 + if (!l || !s) 193 + return false; 194 + 195 + read_lock(&l->lock); 196 + 197 + sk_for_each(sk, &l->head) { 198 + if (s == sk) { 199 + read_unlock(&l->lock); 200 + return true; 201 + } 202 + } 203 + 204 + read_unlock(&l->lock); 205 + 206 + return false; 207 + } 208 + EXPORT_SYMBOL(bt_sock_linked); 209 + 188 210 void bt_accept_enqueue(struct sock *parent, struct sock *sk, bool bh) 189 211 { 190 212 const struct cred *old_cred;
+15 -9
net/bluetooth/hci_core.c
··· 1644 1644 struct adv_info *adv_instance, *n; 1645 1645 1646 1646 if (hdev->adv_instance_timeout) { 1647 - cancel_delayed_work(&hdev->adv_instance_expire); 1647 + disable_delayed_work(&hdev->adv_instance_expire); 1648 1648 hdev->adv_instance_timeout = 0; 1649 1649 } 1650 1650 1651 1651 list_for_each_entry_safe(adv_instance, n, &hdev->adv_instances, list) { 1652 - cancel_delayed_work_sync(&adv_instance->rpa_expired_cb); 1652 + disable_delayed_work_sync(&adv_instance->rpa_expired_cb); 1653 1653 list_del(&adv_instance->list); 1654 1654 kfree(adv_instance); 1655 1655 } ··· 2685 2685 list_del(&hdev->list); 2686 2686 write_unlock(&hci_dev_list_lock); 2687 2687 2688 - cancel_work_sync(&hdev->rx_work); 2689 - cancel_work_sync(&hdev->cmd_work); 2690 - cancel_work_sync(&hdev->tx_work); 2691 - cancel_work_sync(&hdev->power_on); 2692 - cancel_work_sync(&hdev->error_reset); 2688 + disable_work_sync(&hdev->rx_work); 2689 + disable_work_sync(&hdev->cmd_work); 2690 + disable_work_sync(&hdev->tx_work); 2691 + disable_work_sync(&hdev->power_on); 2692 + disable_work_sync(&hdev->error_reset); 2693 2693 2694 2694 hci_cmd_sync_clear(hdev); 2695 2695 ··· 2796 2796 { 2797 2797 bt_dev_dbg(hdev, "err 0x%2.2x", err); 2798 2798 2799 - cancel_delayed_work_sync(&hdev->cmd_timer); 2800 - cancel_delayed_work_sync(&hdev->ncmd_timer); 2799 + if (hci_dev_test_flag(hdev, HCI_UNREGISTER)) { 2800 + disable_delayed_work_sync(&hdev->cmd_timer); 2801 + disable_delayed_work_sync(&hdev->ncmd_timer); 2802 + } else { 2803 + cancel_delayed_work_sync(&hdev->cmd_timer); 2804 + cancel_delayed_work_sync(&hdev->ncmd_timer); 2805 + } 2806 + 2801 2807 atomic_set(&hdev->cmd_cnt, 1); 2802 2808 2803 2809 hci_cmd_sync_cancel_sync(hdev, err);
+9 -3
net/bluetooth/hci_sync.c
··· 5131 5131 5132 5132 bt_dev_dbg(hdev, ""); 5133 5133 5134 - cancel_delayed_work(&hdev->power_off); 5135 - cancel_delayed_work(&hdev->ncmd_timer); 5136 - cancel_delayed_work(&hdev->le_scan_disable); 5134 + if (hci_dev_test_flag(hdev, HCI_UNREGISTER)) { 5135 + disable_delayed_work(&hdev->power_off); 5136 + disable_delayed_work(&hdev->ncmd_timer); 5137 + disable_delayed_work(&hdev->le_scan_disable); 5138 + } else { 5139 + cancel_delayed_work(&hdev->power_off); 5140 + cancel_delayed_work(&hdev->ncmd_timer); 5141 + cancel_delayed_work(&hdev->le_scan_disable); 5142 + } 5137 5143 5138 5144 hci_cmd_sync_cancel_sync(hdev, ENODEV); 5139 5145
+12 -6
net/bluetooth/iso.c
··· 93 93 #define ISO_CONN_TIMEOUT (HZ * 40) 94 94 #define ISO_DISCONN_TIMEOUT (HZ * 2) 95 95 96 + static struct sock *iso_sock_hold(struct iso_conn *conn) 97 + { 98 + if (!conn || !bt_sock_linked(&iso_sk_list, conn->sk)) 99 + return NULL; 100 + 101 + sock_hold(conn->sk); 102 + 103 + return conn->sk; 104 + } 105 + 96 106 static void iso_sock_timeout(struct work_struct *work) 97 107 { 98 108 struct iso_conn *conn = container_of(work, struct iso_conn, ··· 110 100 struct sock *sk; 111 101 112 102 iso_conn_lock(conn); 113 - sk = conn->sk; 114 - if (sk) 115 - sock_hold(sk); 103 + sk = iso_sock_hold(conn); 116 104 iso_conn_unlock(conn); 117 105 118 106 if (!sk) ··· 217 209 218 210 /* Kill socket */ 219 211 iso_conn_lock(conn); 220 - sk = conn->sk; 221 - if (sk) 222 - sock_hold(sk); 212 + sk = iso_sock_hold(conn); 223 213 iso_conn_unlock(conn); 224 214 225 215 if (sk) {
+12 -6
net/bluetooth/sco.c
··· 76 76 #define SCO_CONN_TIMEOUT (HZ * 40) 77 77 #define SCO_DISCONN_TIMEOUT (HZ * 2) 78 78 79 + static struct sock *sco_sock_hold(struct sco_conn *conn) 80 + { 81 + if (!conn || !bt_sock_linked(&sco_sk_list, conn->sk)) 82 + return NULL; 83 + 84 + sock_hold(conn->sk); 85 + 86 + return conn->sk; 87 + } 88 + 79 89 static void sco_sock_timeout(struct work_struct *work) 80 90 { 81 91 struct sco_conn *conn = container_of(work, struct sco_conn, ··· 97 87 sco_conn_unlock(conn); 98 88 return; 99 89 } 100 - sk = conn->sk; 101 - if (sk) 102 - sock_hold(sk); 90 + sk = sco_sock_hold(conn); 103 91 sco_conn_unlock(conn); 104 92 105 93 if (!sk) ··· 202 194 203 195 /* Kill socket */ 204 196 sco_conn_lock(conn); 205 - sk = conn->sk; 206 - if (sk) 207 - sock_hold(sk); 197 + sk = sco_sock_hold(conn); 208 198 sco_conn_unlock(conn); 209 199 210 200 if (sk) {
+17 -21
net/ipv4/xfrm4_policy.c
··· 17 17 #include <net/ip.h> 18 18 #include <net/l3mdev.h> 19 19 20 - static struct dst_entry *__xfrm4_dst_lookup(struct net *net, struct flowi4 *fl4, 21 - int tos, int oif, 22 - const xfrm_address_t *saddr, 23 - const xfrm_address_t *daddr, 24 - u32 mark) 20 + static struct dst_entry *__xfrm4_dst_lookup(struct flowi4 *fl4, 21 + const struct xfrm_dst_lookup_params *params) 25 22 { 26 23 struct rtable *rt; 27 24 28 25 memset(fl4, 0, sizeof(*fl4)); 29 - fl4->daddr = daddr->a4; 30 - fl4->flowi4_tos = tos; 31 - fl4->flowi4_l3mdev = l3mdev_master_ifindex_by_index(net, oif); 32 - fl4->flowi4_mark = mark; 33 - if (saddr) 34 - fl4->saddr = saddr->a4; 26 + fl4->daddr = params->daddr->a4; 27 + fl4->flowi4_tos = params->tos; 28 + fl4->flowi4_l3mdev = l3mdev_master_ifindex_by_index(params->net, 29 + params->oif); 30 + fl4->flowi4_mark = params->mark; 31 + if (params->saddr) 32 + fl4->saddr = params->saddr->a4; 33 + fl4->flowi4_proto = params->ipproto; 34 + fl4->uli = params->uli; 35 35 36 - rt = __ip_route_output_key(net, fl4); 36 + rt = __ip_route_output_key(params->net, fl4); 37 37 if (!IS_ERR(rt)) 38 38 return &rt->dst; 39 39 40 40 return ERR_CAST(rt); 41 41 } 42 42 43 - static struct dst_entry *xfrm4_dst_lookup(struct net *net, int tos, int oif, 44 - const xfrm_address_t *saddr, 45 - const xfrm_address_t *daddr, 46 - u32 mark) 43 + static struct dst_entry *xfrm4_dst_lookup(const struct xfrm_dst_lookup_params *params) 47 44 { 48 45 struct flowi4 fl4; 49 46 50 47 return __xfrm4_dst_lookup(&fl4, params); 51 48 } 52 49 53 - static int xfrm4_get_saddr(struct net *net, int oif, 54 - xfrm_address_t *saddr, xfrm_address_t *daddr, 55 - u32 mark) 50 + static int xfrm4_get_saddr(xfrm_address_t *saddr, 51 + const struct xfrm_dst_lookup_params *params) 56 52 { 57 53 struct dst_entry *dst; 58 54 struct flowi4 fl4; 59 55
60 56 dst = __xfrm4_dst_lookup(&fl4, params); 61 57 if (IS_ERR(dst)) 62 58 return -EHOSTUNREACH; 63 59
+16 -15
net/ipv6/xfrm6_policy.c
··· 23 23 #include <net/ip6_route.h> 24 24 #include <net/l3mdev.h> 25 25 26 - static struct dst_entry *xfrm6_dst_lookup(struct net *net, int tos, int oif, 27 - const xfrm_address_t *saddr, 28 - const xfrm_address_t *daddr, 29 - u32 mark) 26 + static struct dst_entry *xfrm6_dst_lookup(const struct xfrm_dst_lookup_params *params) 30 27 { 31 28 struct flowi6 fl6; 32 29 struct dst_entry *dst; 33 30 int err; 34 31 35 32 memset(&fl6, 0, sizeof(fl6)); 36 - fl6.flowi6_l3mdev = l3mdev_master_ifindex_by_index(net, oif); 37 - fl6.flowi6_mark = mark; 38 - memcpy(&fl6.daddr, daddr, sizeof(fl6.daddr)); 39 - if (saddr) 40 - memcpy(&fl6.saddr, saddr, sizeof(fl6.saddr)); 33 + fl6.flowi6_l3mdev = l3mdev_master_ifindex_by_index(params->net, 34 + params->oif); 35 + fl6.flowi6_mark = params->mark; 36 + memcpy(&fl6.daddr, params->daddr, sizeof(fl6.daddr)); 37 + if (params->saddr) 38 + memcpy(&fl6.saddr, params->saddr, sizeof(fl6.saddr)); 41 39 42 - dst = ip6_route_output(net, NULL, &fl6); 40 + fl6.flowi4_proto = params->ipproto; 41 + fl6.uli = params->uli; 42 + 43 + dst = ip6_route_output(params->net, NULL, &fl6); 43 44 44 45 err = dst->error; 45 46 if (dst->error) { ··· 51 50 return dst; 52 51 } 53 52 54 - static int xfrm6_get_saddr(struct net *net, int oif, 55 - xfrm_address_t *saddr, xfrm_address_t *daddr, 56 - u32 mark) 53 + static int xfrm6_get_saddr(xfrm_address_t *saddr, 54 + const struct xfrm_dst_lookup_params *params) 57 55 { 58 56 struct dst_entry *dst; 59 57 struct net_device *dev; 60 58 struct inet6_dev *idev; 61 59 62 - dst = xfrm6_dst_lookup(net, 0, oif, NULL, daddr, mark); 60 + dst = xfrm6_dst_lookup(params); 63 61 if (IS_ERR(dst)) 64 62 return -EHOSTUNREACH; 65 63 ··· 68 68 return -EHOSTUNREACH; 69 69 } 70 70 dev = idev->dev; 71 - ipv6_dev_get_saddr(dev_net(dev), dev, &daddr->in6, 0, &saddr->in6); 71 + ipv6_dev_get_saddr(dev_net(dev), dev, &params->daddr->in6, 0, 72 + &saddr->in6); 72 73 dst_release(dst); 73 74 return 0; 74 75 }
+4
net/netfilter/nf_bpf_link.c
··· 23 23 struct bpf_nf_link { 24 24 struct bpf_link link; 25 25 struct nf_hook_ops hook_ops; 26 + netns_tracker ns_tracker; 26 27 struct net *net; 27 28 u32 dead; 28 29 const struct nf_defrag_hook *defrag_hook; ··· 121 120 if (!cmpxchg(&nf_link->dead, 0, 1)) { 122 121 nf_unregister_net_hook(nf_link->net, &nf_link->hook_ops); 123 122 bpf_nf_disable_defrag(nf_link); 123 + put_net_track(nf_link->net, &nf_link->ns_tracker); 124 124 } 125 125 } 126 126 ··· 259 257 bpf_link_cleanup(&link_primer); 260 258 return err; 261 259 } 260 + 261 + get_net_track(net, &link->ns_tracker, GFP_KERNEL); 262 262 263 263 return bpf_link_settle(&link_primer); 264 264 }
+1 -1
net/netfilter/xt_NFLOG.c
··· 79 79 { 80 80 .name = "NFLOG", 81 81 .revision = 0, 82 - .family = NFPROTO_IPV4, 82 + .family = NFPROTO_IPV6, 83 83 .checkentry = nflog_tg_check, 84 84 .destroy = nflog_tg_destroy, 85 85 .target = nflog_tg,
+1
net/netfilter/xt_TRACE.c
··· 49 49 .target = trace_tg, 50 50 .checkentry = trace_tg_check, 51 51 .destroy = trace_tg_destroy, 52 + .me = THIS_MODULE, 52 53 }, 53 54 #endif 54 55 };
+1 -1
net/netfilter/xt_mark.c
··· 62 62 { 63 63 .name = "MARK", 64 64 .revision = 2, 65 - .family = NFPROTO_IPV4, 65 + .family = NFPROTO_IPV6, 66 66 .target = mark_tg, 67 67 .targetsize = sizeof(struct xt_mark_tginfo2), 68 68 .me = THIS_MODULE,
+22 -1
net/sched/act_api.c
··· 1498 1498 bool skip_sw = tc_skip_sw(fl_flags); 1499 1499 bool skip_hw = tc_skip_hw(fl_flags); 1500 1500 1501 - if (tc_act_bind(act->tcfa_flags)) 1501 + if (tc_act_bind(act->tcfa_flags)) { 1502 + /* Action is created by classifier and is not 1503 + * standalone. Check that the user did not set 1504 + * any action flags different than the 1505 + * classifier flags, and inherit the flags from 1506 + * the classifier for the compatibility case 1507 + * where no flags were specified at all. 1508 + */ 1509 + if ((tc_act_skip_sw(act->tcfa_flags) && !skip_sw) || 1510 + (tc_act_skip_hw(act->tcfa_flags) && !skip_hw)) { 1511 + NL_SET_ERR_MSG(extack, 1512 + "Mismatch between action and filter offload flags"); 1513 + err = -EINVAL; 1514 + goto err; 1515 + } 1516 + if (skip_sw) 1517 + act->tcfa_flags |= TCA_ACT_FLAGS_SKIP_SW; 1518 + if (skip_hw) 1519 + act->tcfa_flags |= TCA_ACT_FLAGS_SKIP_HW; 1502 1520 continue; 1521 + } 1522 + 1523 + /* Action is standalone */ 1503 1524 if (skip_sw != tc_act_skip_sw(act->tcfa_flags) || 1504 1525 skip_hw != tc_act_skip_hw(act->tcfa_flags)) { 1505 1526 NL_SET_ERR_MSG(extack,
+7 -1
net/sched/sch_generic.c
··· 512 512 struct netdev_queue *txq; 513 513 514 514 txq = netdev_get_tx_queue(dev, i); 515 - trans_start = READ_ONCE(txq->trans_start); 516 515 if (!netif_xmit_stopped(txq)) 517 516 continue; 517 + 518 + /* Paired with WRITE_ONCE() + smp_mb...() in 519 + * netdev_tx_sent_queue() and netif_tx_stop_queue(). 520 + */ 521 + smp_mb(); 522 + trans_start = READ_ONCE(txq->trans_start); 523 + 518 524 if (time_after(jiffies, trans_start + dev->watchdog_timeo)) { 519 525 timedout_ms = jiffies_to_msecs(jiffies - trans_start); 520 526 atomic_long_inc(&txq->trans_timeout);
+14 -7
net/sched/sch_taprio.c
··· 1965 1965 1966 1966 taprio_start_sched(sch, start, new_admin); 1967 1967 1968 - rcu_assign_pointer(q->admin_sched, new_admin); 1968 + admin = rcu_replace_pointer(q->admin_sched, new_admin, 1969 + lockdep_rtnl_is_held()); 1969 1970 if (admin) 1970 1971 call_rcu(&admin->rcu, taprio_free_sched_cb); 1971 1972 ··· 2374 2373 struct tc_mqprio_qopt opt = { 0 }; 2375 2374 struct nlattr *nest, *sched_nest; 2376 2375 2377 - oper = rtnl_dereference(q->oper_sched); 2378 - admin = rtnl_dereference(q->admin_sched); 2379 - 2380 2376 mqprio_qopt_reconstruct(dev, &opt); 2381 2377 2382 2378 nest = nla_nest_start_noflag(skb, TCA_OPTIONS); ··· 2394 2396 nla_put_u32(skb, TCA_TAPRIO_ATTR_TXTIME_DELAY, q->txtime_delay)) 2395 2397 goto options_error; 2396 2398 2399 + rcu_read_lock(); 2400 + 2401 + oper = rtnl_dereference(q->oper_sched); 2402 + admin = rtnl_dereference(q->admin_sched); 2403 + 2397 2404 if (oper && taprio_dump_tc_entries(skb, q, oper)) 2398 - goto options_error; 2405 + goto options_error_rcu; 2399 2406 2400 2407 if (oper && dump_schedule(skb, oper)) 2401 - goto options_error; 2408 + goto options_error_rcu; 2402 2409 2403 2410 if (!admin) 2404 2411 goto done; 2405 2412 2406 2413 sched_nest = nla_nest_start_noflag(skb, TCA_TAPRIO_ATTR_ADMIN_SCHED); 2407 2414 if (!sched_nest) 2408 - goto options_error; 2415 + goto options_error_rcu; 2409 2416 2410 2417 if (dump_schedule(skb, admin)) 2411 2418 goto admin_error; ··· 2418 2415 nla_nest_end(skb, sched_nest); 2419 2416 2420 2417 done: 2418 + rcu_read_unlock(); 2421 2419 return nla_nest_end(skb, nest); 2422 2420 2423 2421 admin_error: 2424 2422 nla_nest_cancel(skb, sched_nest); 2423 + 2424 + options_error_rcu: 2425 + rcu_read_unlock(); 2425 2426 2426 2427 options_error: 2427 2428 nla_nest_cancel(skb, nest);
+8 -3
net/xfrm/xfrm_device.c
··· 269 269 270 270 dev = dev_get_by_index(net, xuo->ifindex); 271 271 if (!dev) { 272 + struct xfrm_dst_lookup_params params; 273 + 272 274 if (!(xuo->flags & XFRM_OFFLOAD_INBOUND)) { 273 275 saddr = &x->props.saddr; 274 276 daddr = &x->id.daddr; ··· 279 277 daddr = &x->props.saddr; 280 278 } 281 279 282 - dst = __xfrm_dst_lookup(net, 0, 0, saddr, daddr, 283 - x->props.family, 284 - xfrm_smark_get(0, x)); 280 + memset(&params, 0, sizeof(params)); 281 + params.net = net; 282 + params.saddr = saddr; 283 + params.daddr = daddr; 284 + params.mark = xfrm_smark_get(0, x); 285 + dst = __xfrm_dst_lookup(x->props.family, &params); 285 286 if (IS_ERR(dst)) 286 287 return (is_packet_offload) ? -EINVAL : 0; 287 288
+38 -15
net/xfrm/xfrm_policy.c
··· 270 270 return rcu_dereference(xfrm_if_cb); 271 271 } 272 272 273 - struct dst_entry *__xfrm_dst_lookup(struct net *net, int tos, int oif, 274 - const xfrm_address_t *saddr, 275 - const xfrm_address_t *daddr, 276 - int family, u32 mark) 273 + struct dst_entry *__xfrm_dst_lookup(int family, 274 + const struct xfrm_dst_lookup_params *params) 277 275 { 278 276 const struct xfrm_policy_afinfo *afinfo; 279 277 struct dst_entry *dst; ··· 280 282 if (unlikely(afinfo == NULL)) 281 283 return ERR_PTR(-EAFNOSUPPORT); 282 284 283 - dst = afinfo->dst_lookup(net, tos, oif, saddr, daddr, mark); 285 + dst = afinfo->dst_lookup(params); 284 286 285 287 rcu_read_unlock(); 286 288 ··· 294 296 xfrm_address_t *prev_daddr, 295 297 int family, u32 mark) 296 298 { 299 + struct xfrm_dst_lookup_params params; 297 300 struct net *net = xs_net(x); 298 301 xfrm_address_t *saddr = &x->props.saddr; 299 302 xfrm_address_t *daddr = &x->id.daddr; ··· 309 310 daddr = x->coaddr; 310 311 } 311 312 312 - dst = __xfrm_dst_lookup(net, tos, oif, saddr, daddr, family, mark); 313 + params.net = net; 314 + params.saddr = saddr; 315 + params.daddr = daddr; 316 + params.tos = tos; 317 + params.oif = oif; 318 + params.mark = mark; 319 + params.ipproto = x->id.proto; 320 + if (x->encap) { 321 + switch (x->encap->encap_type) { 322 + case UDP_ENCAP_ESPINUDP: 323 + params.ipproto = IPPROTO_UDP; 324 + params.uli.ports.sport = x->encap->encap_sport; 325 + params.uli.ports.dport = x->encap->encap_dport; 326 + break; 327 + case TCP_ENCAP_ESPINTCP: 328 + params.ipproto = IPPROTO_TCP; 329 + params.uli.ports.sport = x->encap->encap_sport; 330 + params.uli.ports.dport = x->encap->encap_dport; 331 + break; 332 + } 333 + } 334 + 335 + dst = __xfrm_dst_lookup(family, &params); 313 336 314 337 if (!IS_ERR(dst)) { 315 338 if (prev_saddr != saddr) ··· 2453 2432 } 2454 2433 2455 2434 static int
2456 - xfrm_get_saddr(struct net *net, int oif, xfrm_address_t *local, 2457 - xfrm_address_t *remote, unsigned short family, u32 mark) 2435 + xfrm_get_saddr(unsigned short family, xfrm_address_t *saddr, 2436 + const struct xfrm_dst_lookup_params *params) 2458 2437 { 2459 2438 int err; 2460 2439 const struct xfrm_policy_afinfo *afinfo = xfrm_policy_get_afinfo(family); 2461 2440 2462 2441 if (unlikely(afinfo == NULL)) 2463 2442 return -EINVAL; 2464 - err = afinfo->get_saddr(net, oif, local, remote, mark); 2443 + err = afinfo->get_saddr(saddr, params); 2465 2444 rcu_read_unlock(); 2466 2445 return err; 2467 2446 } ··· 2490 2469 remote = &tmpl->id.daddr; 2491 2470 local = &tmpl->saddr; 2492 2471 if (xfrm_addr_any(local, tmpl->encap_family)) { 2493 - error = xfrm_get_saddr(net, fl->flowi_oif, 2494 - &tmp, remote, 2495 - tmpl->encap_family, 0); 2472 + struct xfrm_dst_lookup_params params; 2473 + 2474 + memset(&params, 0, sizeof(params)); 2475 + params.net = net; 2476 + params.oif = fl->flowi_oif; 2477 + params.daddr = remote; 2478 + error = xfrm_get_saddr(tmpl->encap_family, &tmp, 2479 + &params); 2496 2480 if (error) 2497 2481 goto fail; 2498 2482 local = &tmp; ··· 4206 4180 4207 4181 net->xfrm.policy_count[dir] = 0; 4208 4182 net->xfrm.policy_count[XFRM_POLICY_MAX + dir] = 0; 4209 - INIT_HLIST_HEAD(&net->xfrm.policy_inexact[dir]); 4210 4183 4211 4184 htab = &net->xfrm.policy_bydst[dir]; 4212 4185 htab->table = xfrm_hash_alloc(sz); ··· 4258 4233 4259 4234 for (dir = 0; dir < XFRM_POLICY_MAX; dir++) { 4260 4235 struct xfrm_policy_hash *htab; 4261 - 4262 - WARN_ON(!hlist_empty(&net->xfrm.policy_inexact[dir])); 4263 4236 4264 4237 htab = &net->xfrm.policy_bydst[dir]; 4265 4238 sz = (htab->hmask + 1) * sizeof(struct hlist_head);
+8 -2
net/xfrm/xfrm_user.c
··· 201 201 { 202 202 int err; 203 203 u8 sa_dir = attrs[XFRMA_SA_DIR] ? nla_get_u8(attrs[XFRMA_SA_DIR]) : 0; 204 + u16 family = p->sel.family; 204 205 205 206 err = -EINVAL; 206 207 switch (p->family) { ··· 222 221 goto out; 223 222 } 224 223 225 - switch (p->sel.family) { 224 + if (!family && !(p->flags & XFRM_STATE_AF_UNSPEC)) 225 + family = p->family; 226 + 227 + switch (family) { 226 228 case AF_UNSPEC: 227 229 break; 228 230 ··· 1102 1098 if (!nla) 1103 1099 return -EMSGSIZE; 1104 1100 ap = nla_data(nla); 1105 - memcpy(ap, auth, sizeof(struct xfrm_algo_auth)); 1101 + strscpy_pad(ap->alg_name, auth->alg_name, sizeof(ap->alg_name)); 1102 + ap->alg_key_len = auth->alg_key_len; 1103 + ap->alg_trunc_len = auth->alg_trunc_len; 1106 1104 if (redact_secret && auth->alg_key_len) 1107 1105 memset(ap->alg_key, 0, (auth->alg_key_len + 7) / 8); 1108 1106 else