Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from David Miller:

1) Don't corrupt xfrm_interface parms before validation, from Nicolas
Dichtel.

2) Revert use of usb-wakeup in btusb, from Mario Limonciello.

3) Block ipv6 packets in bridge netfilter if ipv6 is disabled, from
Leonardo Bras.

4) IPS_OFFLOAD not honored in ctnetlink, from Pablo Neira Ayuso.

5) Missing ULP check in sock_map, from John Fastabend.

6) Fix receive statistic handling in forcedeth, from Zhu Yanjun.

7) Fix length of SKB allocated in 6pack driver, from Christophe
JAILLET.

8) ip6_route_info_create() returns an error pointer, not NULL. From
Maciej Żenczykowski.

9) Only add RDS sock to the hashes after rs_transport is set, from
Ka-Cheong Poon.

10) Don't double clean TX descriptors in ixgbe, from Ilya Maximets.

11) Presence of transmit IPSEC offload in an SKB is not tested
correctly in ixgbe and ixgbevf. From Steffen Klassert and Jeff
Kirsher.

12) Need rcu_barrier() when register_netdevice() takes one of the
notifier-based failure paths, from Subash Abhinov Kasiviswanathan.

13) Fix leak in sctp_do_bind(), from Mao Wenan.
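Fix 8 above turns on a common kernel pitfall: ip6_route_info_create() reports failure through an encoded error pointer, so a NULL check never fires. A minimal userspace sketch of the ERR_PTR/IS_ERR convention (the real macros live in include/linux/err.h; the err_ptr/is_err/route_info_create names here are illustrative stand-ins, not kernel API):

```c
#include <stddef.h>
#include <stdint.h>

#define MAX_ERRNO 4095

/* Encode a negative errno into the top page of the address space,
 * mirroring the kernel's ERR_PTR()/IS_ERR() idiom: the pointer value
 * itself carries the error, so it is never NULL on failure. */
static inline void *err_ptr(long error) { return (void *)error; }
static inline long ptr_err(const void *ptr) { return (long)ptr; }
static inline int is_err(const void *ptr)
{
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}

/* A lookup that fails with -ENOENT (-2) instead of NULL. */
static void *route_info_create(int valid)
{
	static int route = 42;
	return valid ? (void *)&route : err_ptr(-2);
}
```

Callers that test `if (!rt)` on such a function silently accept the error pointer as success, which is exactly the bug class being fixed.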

* git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (72 commits)
cdc_ether: fix rndis support for Mediatek based smartphones
sctp: destroy bucket if failed to bind addr
sctp: remove redundant assignment when call sctp_get_port_local
sctp: change return type of sctp_get_port_local
ixgbevf: Fix secpath usage for IPsec Tx offload
sctp: Fix the link time qualifier of 'sctp_ctrlsock_exit()'
ixgbe: Fix secpath usage for IPsec TX offload.
net: qrtr: fix memort leak in qrtr_tun_write_iter
net: Fix null de-reference of device refcount
ipv6: Fix the link time qualifier of 'ping_v6_proc_exit_net()'
tun: fix use-after-free when register netdev failed
tcp: fix tcp_ecn_withdraw_cwr() to clear TCP_ECN_QUEUE_CWR
ixgbe: fix double clean of Tx descriptors with xdp
ixgbe: Prevent u8 wrapping of ITR value to something less than 10us
mlx4: fix spelling mistake "veify" -> "verify"
net: hns3: fix spelling mistake "undeflow" -> "underflow"
net: lmc: fix spelling mistake "runnin" -> "running"
NFC: st95hf: fix spelling mistake "receieve" -> "receive"
net/rds: An rds_sock is added too early to the hash table
mac80211: Do not send Layer 2 Update frame before authorization
...

+443 -289
MAINTAINERS (+1 -2)
···
 F:	include/uapi/linux/fsmap.h

 XILINX AXI ETHERNET DRIVER
-M:	Anirudha Sarangi <anirudh@xilinx.com>
-M:	John Linn <John.Linn@xilinx.com>
+M:	Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
 S:	Maintained
 F:	drivers/net/ethernet/xilinx/xilinx_axienet*
drivers/bluetooth/bpa10x.c (+1 -1)
···
 	usb_free_urb(urb);

-	return 0;
+	return err;
 }

 static int bpa10x_set_diag(struct hci_dev *hdev, bool enable)
drivers/bluetooth/btusb.c (+3 -5)
···
 	{ USB_DEVICE(0x13d3, 0x3526), .driver_info = BTUSB_REALTEK },
 	{ USB_DEVICE(0x0b05, 0x185c), .driver_info = BTUSB_REALTEK },

+	/* Additional Realtek 8822CE Bluetooth devices */
+	{ USB_DEVICE(0x04ca, 0x4005), .driver_info = BTUSB_REALTEK },
+
 	/* Silicon Wave based devices */
 	{ USB_DEVICE(0x0c10, 0x0000), .driver_info = BTUSB_SWAVE },
···
 	}

 	data->intf->needs_remote_wakeup = 1;
-	/* device specific wakeup source enabled and required for USB
-	 * remote wakeup while host is suspended
-	 */
-	device_wakeup_enable(&data->udev->dev);

 	if (test_and_set_bit(BTUSB_INTR_RUNNING, &data->flags))
 		goto done;
···
 		goto failed;

 	data->intf->needs_remote_wakeup = 0;
-	device_wakeup_disable(&data->udev->dev);
 	usb_autopm_put_interface(data->intf);

 failed:
drivers/bluetooth/hci_qca.c (+6 -4)
···
 					   ws_awake_device);
 	struct hci_uart *hu = qca->hu;
 	unsigned long retrans_delay;
+	unsigned long flags;

 	BT_DBG("hu %p wq awake device", hu);

 	/* Vote for serial clock */
 	serial_clock_vote(HCI_IBS_TX_VOTE_CLOCK_ON, hu);

-	spin_lock(&qca->hci_ibs_lock);
+	spin_lock_irqsave(&qca->hci_ibs_lock, flags);

 	/* Send wake indication to device */
 	if (send_hci_ibs_cmd(HCI_IBS_WAKE_IND, hu) < 0)
···
 	retrans_delay = msecs_to_jiffies(qca->wake_retrans);
 	mod_timer(&qca->wake_retrans_timer, jiffies + retrans_delay);

-	spin_unlock(&qca->hci_ibs_lock);
+	spin_unlock_irqrestore(&qca->hci_ibs_lock, flags);

 	/* Actually send the packets */
 	hci_uart_tx_wakeup(hu);
···
 	struct qca_data *qca = container_of(work, struct qca_data,
 					    ws_awake_rx);
 	struct hci_uart *hu = qca->hu;
+	unsigned long flags;

 	BT_DBG("hu %p wq awake rx", hu);

 	serial_clock_vote(HCI_IBS_RX_VOTE_CLOCK_ON, hu);

-	spin_lock(&qca->hci_ibs_lock);
+	spin_lock_irqsave(&qca->hci_ibs_lock, flags);
 	qca->rx_ibs_state = HCI_IBS_RX_AWAKE;

 	/* Always acknowledge device wake up,
···
 	qca->ibs_sent_wacks++;

-	spin_unlock(&qca->hci_ibs_lock);
+	spin_unlock_irqrestore(&qca->hci_ibs_lock, flags);

 	/* Actually send the packets */
 	hci_uart_tx_wakeup(hu);
drivers/isdn/capi/capi.c (+9 -1)
···
 	if (!cdev->ap.applid)
 		return -ENODEV;

+	if (count < CAPIMSG_BASELEN)
+		return -EINVAL;
+
 	skb = alloc_skb(count, GFP_USER);
 	if (!skb)
 		return -ENOMEM;
···
 	}
 	mlen = CAPIMSG_LEN(skb->data);
 	if (CAPIMSG_CMD(skb->data) == CAPI_DATA_B3_REQ) {
-		if ((size_t)(mlen + CAPIMSG_DATALEN(skb->data)) != count) {
+		if (count < CAPI_DATA_B3_REQ_LEN ||
+		    (size_t)(mlen + CAPIMSG_DATALEN(skb->data)) != count) {
 			kfree_skb(skb);
 			return -EINVAL;
 		}
···
 	CAPIMSG_SETAPPID(skb->data, cdev->ap.applid);

 	if (CAPIMSG_CMD(skb->data) == CAPI_DISCONNECT_B3_RESP) {
+		if (count < CAPI_DISCONNECT_B3_RESP_LEN) {
+			kfree_skb(skb);
+			return -EINVAL;
+		}
 		mutex_lock(&cdev->lock);
 		capincci_free(cdev, CAPIMSG_NCCI(skb->data));
 		mutex_unlock(&cdev->lock);
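The capi.c change above adds a length check before every field access on a user-supplied message. The underlying discipline: verify the buffer actually covers a field before reading it, and verify any embedded length against the real buffer size. A hedged sketch on a toy message format (MSG_BASELEN and the field offsets are invented for illustration, not the real CAPI layout):

```c
#include <stddef.h>
#include <stdint.h>

/* Toy message layout: [u16 little-endian length][u8 command][payload]. */
#define MSG_BASELEN 3

static int parse_msg(const uint8_t *buf, size_t count, uint8_t *cmd_out)
{
	uint16_t mlen;

	if (count < MSG_BASELEN)	/* header must fit before any field read */
		return -1;
	mlen = (uint16_t)(buf[0] | (buf[1] << 8));
	if (mlen > count)		/* claimed length must fit the buffer too */
		return -1;
	*cmd_out = buf[2];
	return 0;
}
```

Without the first check, reading `buf[2]` on a 1-byte write would run off the end of the allocation, which is the class of bug the patch closes.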
drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c (+1 -1)
···
 	  .reset_level = HNAE3_GLOBAL_RESET },
 	{ .int_msk = BIT(1), .msg = "rx_stp_fifo_overflow",
 	  .reset_level = HNAE3_GLOBAL_RESET },
-	{ .int_msk = BIT(2), .msg = "rx_stp_fifo_undeflow",
+	{ .int_msk = BIT(2), .msg = "rx_stp_fifo_underflow",
 	  .reset_level = HNAE3_GLOBAL_RESET },
 	{ .int_msk = BIT(3), .msg = "tx_buf_overflow",
 	  .reset_level = HNAE3_GLOBAL_RESET },
drivers/net/ethernet/ibm/ibmvnic.c (+6 -3)
···
 	rwi = get_next_rwi(adapter);
 	while (rwi) {
 		if (adapter->state == VNIC_REMOVING ||
-		    adapter->state == VNIC_REMOVED)
-			goto out;
+		    adapter->state == VNIC_REMOVED) {
+			kfree(rwi);
+			rc = EBUSY;
+			break;
+		}

 		if (adapter->force_reset_recovery) {
 			adapter->force_reset_recovery = false;
···
 		netdev_dbg(adapter->netdev, "Reset failed\n");
 		free_all_rwi(adapter);
 	}
-out:
+
 	adapter->resetting = false;
 	if (we_lock_rtnl)
 		rtnl_unlock();
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c (+5 -2)
···
 #include <net/vxlan.h>
 #include <net/mpls.h>
 #include <net/xdp_sock.h>
+#include <net/xfrm.h>

 #include "ixgbe.h"
 #include "ixgbe_common.h"
···
 		/* 16K ints/sec to 9.2K ints/sec */
 		avg_wire_size *= 15;
 		avg_wire_size += 11452;
-	} else if (avg_wire_size <= 1980) {
+	} else if (avg_wire_size < 1968) {
 		/* 9.2K ints/sec to 8K ints/sec */
 		avg_wire_size *= 5;
 		avg_wire_size += 22420;
···
 	case IXGBE_LINK_SPEED_2_5GB_FULL:
 	case IXGBE_LINK_SPEED_1GB_FULL:
 	case IXGBE_LINK_SPEED_10_FULL:
+		if (avg_wire_size > 8064)
+			avg_wire_size = 8064;
 		itr += DIV_ROUND_UP(avg_wire_size,
 				    IXGBE_ITR_ADAPTIVE_MIN_INC * 64) *
 		       IXGBE_ITR_ADAPTIVE_MIN_INC;
···
 #endif /* IXGBE_FCOE */

 #ifdef CONFIG_IXGBE_IPSEC
-	if (secpath_exists(skb) &&
+	if (xfrm_offload(skb) &&
 	    !ixgbe_ipsec_tx(tx_ring, first, &ipsec_tx))
 		goto out_drop;
 #endif
drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c (+11 -18)
···
 bool ixgbe_clean_xdp_tx_irq(struct ixgbe_q_vector *q_vector,
 			    struct ixgbe_ring *tx_ring, int napi_budget)
 {
+	u16 ntc = tx_ring->next_to_clean, ntu = tx_ring->next_to_use;
 	unsigned int total_packets = 0, total_bytes = 0;
-	u32 i = tx_ring->next_to_clean, xsk_frames = 0;
-	unsigned int budget = q_vector->tx.work_limit;
 	struct xdp_umem *umem = tx_ring->xsk_umem;
 	union ixgbe_adv_tx_desc *tx_desc;
 	struct ixgbe_tx_buffer *tx_bi;
-	bool xmit_done;
+	u32 xsk_frames = 0;

-	tx_bi = &tx_ring->tx_buffer_info[i];
-	tx_desc = IXGBE_TX_DESC(tx_ring, i);
-	i -= tx_ring->count;
+	tx_bi = &tx_ring->tx_buffer_info[ntc];
+	tx_desc = IXGBE_TX_DESC(tx_ring, ntc);

-	do {
+	while (ntc != ntu) {
 		if (!(tx_desc->wb.status & cpu_to_le32(IXGBE_TXD_STAT_DD)))
 			break;
···
 		tx_bi++;
 		tx_desc++;
-		i++;
-		if (unlikely(!i)) {
-			i -= tx_ring->count;
+		ntc++;
+		if (unlikely(ntc == tx_ring->count)) {
+			ntc = 0;
 			tx_bi = tx_ring->tx_buffer_info;
 			tx_desc = IXGBE_TX_DESC(tx_ring, 0);
 		}

 		/* issue prefetch for next Tx descriptor */
 		prefetch(tx_desc);
+	}

-		/* update budget accounting */
-		budget--;
-	} while (likely(budget));
-
-	i += tx_ring->count;
-	tx_ring->next_to_clean = i;
+	tx_ring->next_to_clean = ntc;

 	u64_stats_update_begin(&tx_ring->syncp);
 	tx_ring->stats.bytes += total_bytes;
···
 	if (xsk_frames)
 		xsk_umem_complete_tx(umem, xsk_frames);

-	xmit_done = ixgbe_xmit_zc(tx_ring, q_vector->tx.work_limit);
-	return budget > 0 && xmit_done;
+	return ixgbe_xmit_zc(tx_ring, q_vector->tx.work_limit);
 }

 int ixgbe_xsk_async_xmit(struct net_device *dev, u32 qid)
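The ixgbe_xsk rewrite above replaces budget-based cleaning with a walk from next_to_clean (ntc) toward next_to_use (ntu), so the cleaner can never pass descriptors the hardware has not written back (the double-clean bug in fix 10). A userspace sketch of that loop shape (the ring struct, STAT_DD flag, and clean_ring name are simplified stand-ins for the driver's descriptor machinery):

```c
#include <stdint.h>

#define RING_SIZE 8
#define STAT_DD 0x1	/* "descriptor done" write-back flag */

struct ring {
	uint32_t status[RING_SIZE];
	uint16_t next_to_clean;
	uint16_t next_to_use;
};

/* Walk ntc toward ntu, stopping at the first descriptor the
 * "hardware" has not completed; returns how many were reclaimed.
 * Because the walk is bounded by ntu, no in-flight descriptor can
 * be cleaned twice. */
static unsigned int clean_ring(struct ring *r)
{
	uint16_t ntc = r->next_to_clean, ntu = r->next_to_use;
	unsigned int cleaned = 0;

	while (ntc != ntu) {
		if (!(r->status[ntc] & STAT_DD))
			break;
		r->status[ntc] = 0;
		cleaned++;
		if (++ntc == RING_SIZE)
			ntc = 0;	/* wrap, like the driver's ntc reset */
	}
	r->next_to_clean = ntc;
	return cleaned;
}
```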
drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c (+2 -1)
···
 #include <linux/bpf.h>
 #include <linux/bpf_trace.h>
 #include <linux/atomic.h>
+#include <net/xfrm.h>

 #include "ixgbevf.h"
···
 	first->protocol = vlan_get_protocol(skb);

 #ifdef CONFIG_IXGBEVF_IPSEC
-	if (secpath_exists(skb) && !ixgbevf_ipsec_tx(tx_ring, first, &ipsec_tx))
+	if (xfrm_offload(skb) && !ixgbevf_ipsec_tx(tx_ring, first, &ipsec_tx))
 		goto out_drop;
 #endif
 	tso = ixgbevf_tso(tx_ring, first, &hdr_len, &ipsec_tx);
drivers/net/ethernet/mellanox/mlx4/main.c (+1 -1)
···
 	for (i = 1; i <= dev->caps.num_ports; i++) {
 		if (mlx4_dev_port(dev, i, &port_cap)) {
 			mlx4_err(dev,
-				 "QUERY_DEV_CAP command failed, can't veify DMFS high rate steering.\n");
+				 "QUERY_DEV_CAP command failed, can't verify DMFS high rate steering.\n");
 		} else if ((dev->caps.dmfs_high_steer_mode !=
 			    MLX4_STEERING_DMFS_A0_DEFAULT) &&
 			   (port_cap.dmfs_optimized_state ==
drivers/net/ethernet/natsemi/sonic.c (+3 -3)
···
 	laddr = dma_map_single(lp->device, skb->data, length, DMA_TO_DEVICE);
 	if (!laddr) {
-		printk(KERN_ERR "%s: failed to map tx DMA buffer.\n", dev->name);
-		dev_kfree_skb(skb);
-		return NETDEV_TX_BUSY;
+		pr_err_ratelimited("%s: failed to map tx DMA buffer.\n", dev->name);
+		dev_kfree_skb_any(skb);
+		return NETDEV_TX_OK;
 	}

 	sonic_tda_put(dev, entry, SONIC_TD_STATUS, 0);	/* clear status */
drivers/net/ethernet/netronome/nfp/flower/cmsg.c (+5 -5)
···
 	type = cmsg_hdr->type;
 	switch (type) {
-	case NFP_FLOWER_CMSG_TYPE_PORT_REIFY:
-		nfp_flower_cmsg_portreify_rx(app, skb);
-		break;
 	case NFP_FLOWER_CMSG_TYPE_PORT_MOD:
 		nfp_flower_cmsg_portmod_rx(app, skb);
 		break;
···
 	struct nfp_flower_priv *priv = app->priv;
 	struct sk_buff_head *skb_head;

-	if (type == NFP_FLOWER_CMSG_TYPE_PORT_REIFY ||
-	    type == NFP_FLOWER_CMSG_TYPE_PORT_MOD)
+	if (type == NFP_FLOWER_CMSG_TYPE_PORT_MOD)
 		skb_head = &priv->cmsg_skbs_high;
 	else
 		skb_head = &priv->cmsg_skbs_low;
···
 		dev_consume_skb_any(skb);
 	} else if (cmsg_hdr->type == NFP_FLOWER_CMSG_TYPE_TUN_NEIGH) {
 		/* Acks from the NFP that the route is added - ignore. */
+		dev_consume_skb_any(skb);
+	} else if (cmsg_hdr->type == NFP_FLOWER_CMSG_TYPE_PORT_REIFY) {
+		/* Handle REIFY acks outside wq to prevent RTNL conflict. */
+		nfp_flower_cmsg_portreify_rx(app, skb);
 		dev_consume_skb_any(skb);
 	} else {
 		nfp_flower_queue_ctl_msg(app, skb, cmsg_hdr->type);
drivers/net/ethernet/nvidia/forcedeth.c (+99 -44)
···
 	struct nv_skb_map *next_tx_ctx;
 };

+struct nv_txrx_stats {
+	u64 stat_rx_packets;
+	u64 stat_rx_bytes; /* not always available in HW */
+	u64 stat_rx_missed_errors;
+	u64 stat_rx_dropped;
+	u64 stat_tx_packets; /* not always available in HW */
+	u64 stat_tx_bytes;
+	u64 stat_tx_dropped;
+};
+
+#define nv_txrx_stats_inc(member) \
+	__this_cpu_inc(np->txrx_stats->member)
+#define nv_txrx_stats_add(member, count) \
+	__this_cpu_add(np->txrx_stats->member, (count))
+
 /*
  * SMP locking:
  * All hardware access under netdev_priv(dev)->lock, except the performance
···
 	/* RX software stats */
 	struct u64_stats_sync swstats_rx_syncp;
-	u64 stat_rx_packets;
-	u64 stat_rx_bytes; /* not always available in HW */
-	u64 stat_rx_missed_errors;
-	u64 stat_rx_dropped;
+	struct nv_txrx_stats __percpu *txrx_stats;

 	/* media detection workaround.
 	 * Locking: Within irq hander or disable_irq+spin_lock(&np->lock);
···
 	/* TX software stats */
 	struct u64_stats_sync swstats_tx_syncp;
-	u64 stat_tx_packets; /* not always available in HW */
-	u64 stat_tx_bytes;
-	u64 stat_tx_dropped;

 	/* msi/msi-x fields */
 	u32 msi_flags;
···
 	}
 }

+static void nv_get_stats(int cpu, struct fe_priv *np,
+			 struct rtnl_link_stats64 *storage)
+{
+	struct nv_txrx_stats *src = per_cpu_ptr(np->txrx_stats, cpu);
+	unsigned int syncp_start;
+	u64 rx_packets, rx_bytes, rx_dropped, rx_missed_errors;
+	u64 tx_packets, tx_bytes, tx_dropped;
+
+	do {
+		syncp_start = u64_stats_fetch_begin_irq(&np->swstats_rx_syncp);
+		rx_packets = src->stat_rx_packets;
+		rx_bytes = src->stat_rx_bytes;
+		rx_dropped = src->stat_rx_dropped;
+		rx_missed_errors = src->stat_rx_missed_errors;
+	} while (u64_stats_fetch_retry_irq(&np->swstats_rx_syncp, syncp_start));
+
+	storage->rx_packets += rx_packets;
+	storage->rx_bytes += rx_bytes;
+	storage->rx_dropped += rx_dropped;
+	storage->rx_missed_errors += rx_missed_errors;
+
+	do {
+		syncp_start = u64_stats_fetch_begin_irq(&np->swstats_tx_syncp);
+		tx_packets = src->stat_tx_packets;
+		tx_bytes = src->stat_tx_bytes;
+		tx_dropped = src->stat_tx_dropped;
+	} while (u64_stats_fetch_retry_irq(&np->swstats_tx_syncp, syncp_start));
+
+	storage->tx_packets += tx_packets;
+	storage->tx_bytes += tx_bytes;
+	storage->tx_dropped += tx_dropped;
+}
+
 /*
  * nv_get_stats64: dev->ndo_get_stats64 function
  * Get latest stats value from the nic.
···
 	__releases(&netdev_priv(dev)->hwstats_lock)
 {
 	struct fe_priv *np = netdev_priv(dev);
-	unsigned int syncp_start;
+	int cpu;

 	/*
 	 * Note: because HW stats are not always available and for
···
 	 */

 	/* software stats */
-	do {
-		syncp_start = u64_stats_fetch_begin_irq(&np->swstats_rx_syncp);
-		storage->rx_packets = np->stat_rx_packets;
-		storage->rx_bytes = np->stat_rx_bytes;
-		storage->rx_dropped = np->stat_rx_dropped;
-		storage->rx_missed_errors = np->stat_rx_missed_errors;
-	} while (u64_stats_fetch_retry_irq(&np->swstats_rx_syncp, syncp_start));
-
-	do {
-		syncp_start = u64_stats_fetch_begin_irq(&np->swstats_tx_syncp);
-		storage->tx_packets = np->stat_tx_packets;
-		storage->tx_bytes = np->stat_tx_bytes;
-		storage->tx_dropped = np->stat_tx_dropped;
-	} while (u64_stats_fetch_retry_irq(&np->swstats_tx_syncp, syncp_start));
+	for_each_online_cpu(cpu)
+		nv_get_stats(cpu, np, storage);

 	/* If the nic supports hw counters then retrieve latest values */
 	if (np->driver_data & DEV_HAS_STATISTICS_V123) {
···
 	} else {
 packet_dropped:
 		u64_stats_update_begin(&np->swstats_rx_syncp);
-		np->stat_rx_dropped++;
+		nv_txrx_stats_inc(stat_rx_dropped);
 		u64_stats_update_end(&np->swstats_rx_syncp);
 		return 1;
 	}
···
 	} else {
 packet_dropped:
 		u64_stats_update_begin(&np->swstats_rx_syncp);
-		np->stat_rx_dropped++;
+		nv_txrx_stats_inc(stat_rx_dropped);
 		u64_stats_update_end(&np->swstats_rx_syncp);
 		return 1;
 	}
···
 	}
 	if (nv_release_txskb(np, &np->tx_skb[i])) {
 		u64_stats_update_begin(&np->swstats_tx_syncp);
-		np->stat_tx_dropped++;
+		nv_txrx_stats_inc(stat_tx_dropped);
 		u64_stats_update_end(&np->swstats_tx_syncp);
 	}
 	np->tx_skb[i].dma = 0;
···
 		/* on DMA mapping error - drop the packet */
 		dev_kfree_skb_any(skb);
 		u64_stats_update_begin(&np->swstats_tx_syncp);
-		np->stat_tx_dropped++;
+		nv_txrx_stats_inc(stat_tx_dropped);
 		u64_stats_update_end(&np->swstats_tx_syncp);
 		return NETDEV_TX_OK;
 	}
···
 			dev_kfree_skb_any(skb);
 			np->put_tx_ctx = start_tx_ctx;
 			u64_stats_update_begin(&np->swstats_tx_syncp);
-			np->stat_tx_dropped++;
+			nv_txrx_stats_inc(stat_tx_dropped);
 			u64_stats_update_end(&np->swstats_tx_syncp);
 			return NETDEV_TX_OK;
 		}
···
 		/* on DMA mapping error - drop the packet */
 		dev_kfree_skb_any(skb);
 		u64_stats_update_begin(&np->swstats_tx_syncp);
-		np->stat_tx_dropped++;
+		nv_txrx_stats_inc(stat_tx_dropped);
 		u64_stats_update_end(&np->swstats_tx_syncp);
 		return NETDEV_TX_OK;
 	}
···
 			dev_kfree_skb_any(skb);
 			np->put_tx_ctx = start_tx_ctx;
 			u64_stats_update_begin(&np->swstats_tx_syncp);
-			np->stat_tx_dropped++;
+			nv_txrx_stats_inc(stat_tx_dropped);
 			u64_stats_update_end(&np->swstats_tx_syncp);
 			return NETDEV_TX_OK;
 		}
···
 		    && !(flags & NV_TX_RETRYCOUNT_MASK))
 				nv_legacybackoff_reseed(dev);
 		} else {
+			unsigned int len;
+
 			u64_stats_update_begin(&np->swstats_tx_syncp);
-			np->stat_tx_packets++;
-			np->stat_tx_bytes += np->get_tx_ctx->skb->len;
+			nv_txrx_stats_inc(stat_tx_packets);
+			len = np->get_tx_ctx->skb->len;
+			nv_txrx_stats_add(stat_tx_bytes, len);
 			u64_stats_update_end(&np->swstats_tx_syncp);
 		}
 		bytes_compl += np->get_tx_ctx->skb->len;
···
 		    && !(flags & NV_TX2_RETRYCOUNT_MASK))
 				nv_legacybackoff_reseed(dev);
 		} else {
+			unsigned int len;
+
 			u64_stats_update_begin(&np->swstats_tx_syncp);
-			np->stat_tx_packets++;
-			np->stat_tx_bytes += np->get_tx_ctx->skb->len;
+			nv_txrx_stats_inc(stat_tx_packets);
+			len = np->get_tx_ctx->skb->len;
+			nv_txrx_stats_add(stat_tx_bytes, len);
 			u64_stats_update_end(&np->swstats_tx_syncp);
 		}
 		bytes_compl += np->get_tx_ctx->skb->len;
···
 				nv_legacybackoff_reseed(dev);
 			}
 		} else {
+			unsigned int len;
+
 			u64_stats_update_begin(&np->swstats_tx_syncp);
-			np->stat_tx_packets++;
-			np->stat_tx_bytes += np->get_tx_ctx->skb->len;
+			nv_txrx_stats_inc(stat_tx_packets);
+			len = np->get_tx_ctx->skb->len;
+			nv_txrx_stats_add(stat_tx_bytes, len);
 			u64_stats_update_end(&np->swstats_tx_syncp);
 		}
···
 	}
 }

+static void rx_missing_handler(u32 flags, struct fe_priv *np)
+{
+	if (flags & NV_RX_MISSEDFRAME) {
+		u64_stats_update_begin(&np->swstats_rx_syncp);
+		nv_txrx_stats_inc(stat_rx_missed_errors);
+		u64_stats_update_end(&np->swstats_rx_syncp);
+	}
+}
+
 static int nv_rx_process(struct net_device *dev, int limit)
 {
 	struct fe_priv *np = netdev_priv(dev);
···
 		}
 		/* the rest are hard errors */
 		else {
-			if (flags & NV_RX_MISSEDFRAME) {
-				u64_stats_update_begin(&np->swstats_rx_syncp);
-				np->stat_rx_missed_errors++;
-				u64_stats_update_end(&np->swstats_rx_syncp);
-			}
+			rx_missing_handler(flags, np);
 			dev_kfree_skb(skb);
 			goto next_pkt;
 		}
···
 		skb->protocol = eth_type_trans(skb, dev);
 		napi_gro_receive(&np->napi, skb);
 		u64_stats_update_begin(&np->swstats_rx_syncp);
-		np->stat_rx_packets++;
-		np->stat_rx_bytes += len;
+		nv_txrx_stats_inc(stat_rx_packets);
+		nv_txrx_stats_add(stat_rx_bytes, len);
 		u64_stats_update_end(&np->swstats_rx_syncp);
 next_pkt:
 		if (unlikely(np->get_rx.orig++ == np->last_rx.orig))
···
 		}
 		napi_gro_receive(&np->napi, skb);
 		u64_stats_update_begin(&np->swstats_rx_syncp);
-		np->stat_rx_packets++;
-		np->stat_rx_bytes += len;
+		nv_txrx_stats_inc(stat_rx_packets);
+		nv_txrx_stats_add(stat_rx_bytes, len);
 		u64_stats_update_end(&np->swstats_rx_syncp);
 	} else {
 		dev_kfree_skb(skb);
···
 	SET_NETDEV_DEV(dev, &pci_dev->dev);
 	u64_stats_init(&np->swstats_rx_syncp);
 	u64_stats_init(&np->swstats_tx_syncp);
+	np->txrx_stats = alloc_percpu(struct nv_txrx_stats);
+	if (!np->txrx_stats) {
+		pr_err("np->txrx_stats, alloc memory error.\n");
+		err = -ENOMEM;
+		goto out_alloc_percpu;
+	}

 	timer_setup(&np->oom_kick, nv_do_rx_refill, 0);
 	timer_setup(&np->nic_poll, nv_do_nic_poll, 0);
···
 out_disable:
 	pci_disable_device(pci_dev);
 out_free:
+	free_percpu(np->txrx_stats);
+out_alloc_percpu:
 	free_netdev(dev);
 out:
 	return err;
···
 static void nv_remove(struct pci_dev *pci_dev)
 {
 	struct net_device *dev = pci_get_drvdata(pci_dev);
+	struct fe_priv *np = netdev_priv(dev);
+
+	free_percpu(np->txrx_stats);

 	unregister_netdev(dev);
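The forcedeth rework above moves the software counters into per-CPU storage so concurrent updaters never contend, and the read side folds every CPU's slot into one total. A sketch of that shape with a plain array standing in for alloc_percpu() (real per-CPU data also needs the preemption and seqcount discipline the driver uses; NR_CPUS, stats_inc_rx, and stats_sum are illustrative names):

```c
#include <stdint.h>

#define NR_CPUS 4

struct txrx_stats {
	uint64_t rx_packets;
	uint64_t rx_bytes;
};

static struct txrx_stats per_cpu_stats[NR_CPUS];

/* Hot path: bump only this CPU's slot (the cpu id is passed
 * explicitly here; the kernel resolves it via __this_cpu_inc). */
static void stats_inc_rx(int cpu, uint64_t bytes)
{
	per_cpu_stats[cpu].rx_packets++;
	per_cpu_stats[cpu].rx_bytes += bytes;
}

/* Read side: fold every CPU's slot into one total, as the new
 * nv_get_stats64() does with for_each_online_cpu(). */
static struct txrx_stats stats_sum(void)
{
	struct txrx_stats total = {0, 0};
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		total.rx_packets += per_cpu_stats[cpu].rx_packets;
		total.rx_bytes += per_cpu_stats[cpu].rx_bytes;
	}
	return total;
}
```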
drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c (+6 -1)
···
 	int ret;
 	u32 reg, val;

-	regmap_field_read(gmac->regmap_field, &val);
+	ret = regmap_field_read(gmac->regmap_field, &val);
+	if (ret) {
+		dev_err(priv->device, "Fail to read from regmap field.\n");
+		return ret;
+	}
+
 	reg = gmac->variant->default_syscon_value;
 	if (reg != val)
 		dev_warn(priv->device,
drivers/net/hamradio/6pack.c (+2 -2)
···
 	sp->dev->stats.rx_bytes += count;

-	if ((skb = dev_alloc_skb(count)) == NULL)
+	if ((skb = dev_alloc_skb(count + 1)) == NULL)
 		goto out_mem;

-	ptr = skb_put(skb, count);
+	ptr = skb_put(skb, count + 1);
 	*ptr++ = cmd;	/* KISS command */

 	memcpy(ptr, sp->cooked_buf + 1, count);
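The 6pack change above (fix 7) is a classic allocation off-by-one: the buffer must hold the one-byte KISS command plus `count` payload bytes, so `count` alone is one byte short. The pattern in isolation (build_kiss_frame is an illustrative helper, not the driver's code):

```c
#include <stdlib.h>
#include <string.h>

/* Build a KISS-style frame: one command byte followed by `count`
 * payload bytes, so the allocation must be count + 1 — allocating
 * only `count` (the original bug) overflows by one byte. */
static unsigned char *build_kiss_frame(unsigned char cmd,
				       const unsigned char *payload,
				       size_t count, size_t *out_len)
{
	unsigned char *buf = malloc(count + 1);

	if (!buf)
		return NULL;
	buf[0] = cmd;
	memcpy(buf + 1, payload, count);
	*out_len = count + 1;
	return buf;
}
```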
drivers/net/phy/phylink.c (+3 -3)
···
  *	Local device	 Link partner
  *	Pause AsymDir	 Pause AsymDir	 Result
  *	  1	X	   1	 X	 TX+RX
- *	  0	1	   1	 1	 RX
- *	  1	1	   0	 1	 TX
+ *	  0	1	   1	 1	 TX
+ *	  1	1	   0	 1	 RX
  */
 static void phylink_resolve_flow(struct phylink *pl,
 				 struct phylink_link_state *state)
···
 			new_pause = MLO_PAUSE_TX | MLO_PAUSE_RX;
 		else if (pause & MLO_PAUSE_ASYM)
 			new_pause = state->pause & MLO_PAUSE_SYM ?
-				MLO_PAUSE_RX : MLO_PAUSE_TX;
+				MLO_PAUSE_TX : MLO_PAUSE_RX;
 	} else {
 		new_pause = pl->link_config.pause & MLO_PAUSE_TXRX_MASK;
 	}
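The corrected truth table above resolves asymmetric flow control from the local and partner pause advertisements (per IEEE 802.3's pause resolution rules). A direct encoding of the fixed table, with simplified bit names standing in for the MLO_PAUSE_* flags:

```c
#define PAUSE_SYM  0x1	/* Pause bit */
#define PAUSE_ASYM 0x2	/* AsymDir bit */
#define PAUSE_TX   0x4
#define PAUSE_RX   0x8

/* Resolve flow control as in the corrected phylink_resolve_flow()
 * table: both sides advertising Pause -> symmetric TX+RX; otherwise
 * the two asymmetric rows. */
static int resolve_pause(int local, int partner)
{
	if ((local & PAUSE_SYM) && (partner & PAUSE_SYM))
		return PAUSE_TX | PAUSE_RX;
	if ((local & PAUSE_ASYM) && (partner & PAUSE_ASYM)) {
		/* Local Pause=0 AsymDir=1, partner Pause=1 AsymDir=1:
		 * we may send pause frames toward the partner -> TX
		 * (this is the row the fix swaps). */
		if (!(local & PAUSE_SYM) && (partner & PAUSE_SYM))
			return PAUSE_TX;
		/* Mirror case: we honor the partner's pause -> RX. */
		if ((local & PAUSE_SYM) && !(partner & PAUSE_SYM))
			return PAUSE_RX;
	}
	return 0;
}
```

The original code returned these two asymmetric results swapped, which is exactly what the two-line diff corrects.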
drivers/net/tun.c (+11 -5)
···
 }

 static int tun_attach(struct tun_struct *tun, struct file *file,
-		      bool skip_filter, bool napi, bool napi_frags)
+		      bool skip_filter, bool napi, bool napi_frags,
+		      bool publish_tun)
 {
 	struct tun_file *tfile = file->private_data;
 	struct net_device *dev = tun->dev;
···
 	 * initialized tfile; otherwise we risk using half-initialized
 	 * object.
 	 */
-	rcu_assign_pointer(tfile->tun, tun);
+	if (publish_tun)
+		rcu_assign_pointer(tfile->tun, tun);
 	rcu_assign_pointer(tun->tfiles[tun->numqueues], tfile);
 	tun->numqueues++;
 	tun_set_real_num_queues(tun);
···
 		err = tun_attach(tun, file, ifr->ifr_flags & IFF_NOFILTER,
 				 ifr->ifr_flags & IFF_NAPI,
-				 ifr->ifr_flags & IFF_NAPI_FRAGS);
+				 ifr->ifr_flags & IFF_NAPI_FRAGS, true);
 		if (err < 0)
 			return err;
···
 		INIT_LIST_HEAD(&tun->disabled);
 		err = tun_attach(tun, file, false, ifr->ifr_flags & IFF_NAPI,
-				 ifr->ifr_flags & IFF_NAPI_FRAGS);
+				 ifr->ifr_flags & IFF_NAPI_FRAGS, false);
 		if (err < 0)
 			goto err_free_flow;

 		err = register_netdevice(tun->dev);
 		if (err < 0)
 			goto err_detach;
+		/* free_netdev() won't check refcnt, to aovid race
+		 * with dev_put() we need publish tun after registration.
+		 */
+		rcu_assign_pointer(tfile->tun, tun);
 	}

 	netif_carrier_on(tun->dev);
···
 		if (ret < 0)
 			goto unlock;
 		ret = tun_attach(tun, file, false, tun->flags & IFF_NAPI,
-				 tun->flags & IFF_NAPI_FRAGS);
+				 tun->flags & IFF_NAPI_FRAGS, true);
 	} else if (ifr->ifr_flags & IFF_DETACH_QUEUE) {
 		tun = rtnl_dereference(tfile->tun);
 		if (!tun || !(tun->flags & IFF_MULTI_QUEUE) || tfile->detached)
drivers/net/usb/cdc_ether.c (+9 -1)
···
 		goto bad_desc;
 	}
 skip:
-	if (rndis && header.usb_cdc_acm_descriptor &&
+	/* Communcation class functions with bmCapabilities are not
+	 * RNDIS.  But some Wireless class RNDIS functions use
+	 * bmCapabilities for their own purpose.  The failsafe is
+	 * therefore applied only to Communication class RNDIS
+	 * functions.  The rndis test is redundant, but a cheap
+	 * optimization.
+	 */
+	if (rndis && is_rndis(&intf->cur_altsetting->desc) &&
+	    header.usb_cdc_acm_descriptor &&
 	    header.usb_cdc_acm_descriptor->bmCapabilities) {
 		dev_dbg(&intf->dev,
 			"ACM capabilities %02x, not really RNDIS?\n",
drivers/net/wan/lmc/lmc_main.c (+1 -1)
···
 	sc->lmc_cmdmode |= (TULIP_CMD_TXRUN | TULIP_CMD_RXRUN);
 	LMC_CSR_WRITE(sc, csr_command, sc->lmc_cmdmode);

-	lmc_trace(dev, "lmc_runnin_reset_out");
+	lmc_trace(dev, "lmc_running_reset_out");
 }
drivers/net/wimax/i2400m/op-rfkill.c (+1)
···
 		 "%d\n", result);
 	result = 0;
 error_cmd:
+	kfree(cmd);
 	kfree_skb(ack_skb);
 error_msg_to_dev:
 error_alloc:
drivers/net/wireless/intel/iwlwifi/pcie/drv.c (+12 -12)
···
 	/* same thing for QuZ... */
 	if (iwl_trans->hw_rev == CSR_HW_REV_TYPE_QUZ) {
-		if (cfg == &iwl_ax101_cfg_qu_hr)
-			cfg = &iwl_ax101_cfg_quz_hr;
-		else if (cfg == &iwl_ax201_cfg_qu_hr)
-			cfg = &iwl_ax201_cfg_quz_hr;
-		else if (cfg == &iwl9461_2ac_cfg_qu_b0_jf_b0)
-			cfg = &iwl9461_2ac_cfg_quz_a0_jf_b0_soc;
-		else if (cfg == &iwl9462_2ac_cfg_qu_b0_jf_b0)
-			cfg = &iwl9462_2ac_cfg_quz_a0_jf_b0_soc;
-		else if (cfg == &iwl9560_2ac_cfg_qu_b0_jf_b0)
-			cfg = &iwl9560_2ac_cfg_quz_a0_jf_b0_soc;
-		else if (cfg == &iwl9560_2ac_160_cfg_qu_b0_jf_b0)
-			cfg = &iwl9560_2ac_160_cfg_quz_a0_jf_b0_soc;
+		if (iwl_trans->cfg == &iwl_ax101_cfg_qu_hr)
+			iwl_trans->cfg = &iwl_ax101_cfg_quz_hr;
+		else if (iwl_trans->cfg == &iwl_ax201_cfg_qu_hr)
+			iwl_trans->cfg = &iwl_ax201_cfg_quz_hr;
+		else if (iwl_trans->cfg == &iwl9461_2ac_cfg_qu_b0_jf_b0)
+			iwl_trans->cfg = &iwl9461_2ac_cfg_quz_a0_jf_b0_soc;
+		else if (iwl_trans->cfg == &iwl9462_2ac_cfg_qu_b0_jf_b0)
+			iwl_trans->cfg = &iwl9462_2ac_cfg_quz_a0_jf_b0_soc;
+		else if (iwl_trans->cfg == &iwl9560_2ac_cfg_qu_b0_jf_b0)
+			iwl_trans->cfg = &iwl9560_2ac_cfg_quz_a0_jf_b0_soc;
+		else if (iwl_trans->cfg == &iwl9560_2ac_160_cfg_qu_b0_jf_b0)
+			iwl_trans->cfg = &iwl9560_2ac_160_cfg_quz_a0_jf_b0_soc;
 	}

 #endif
drivers/net/wireless/marvell/mwifiex/ie.c (+3)
···
 	}

 	vs_ie = (struct ieee_types_header *)vendor_ie;
+	if (le16_to_cpu(ie->ie_length) + vs_ie->len + 2 >
+	    IEEE_MAX_IE_SIZE)
+		return -EINVAL;
 	memcpy(ie->ie_buffer + le16_to_cpu(ie->ie_length),
 	       vs_ie, vs_ie->len + 2);
 	le16_unaligned_add_cpu(&ie->ie_length, vs_ie->len + 2);
drivers/net/wireless/marvell/mwifiex/uap_cmd.c (+8 -1)
···
 	rate_ie = (void *)cfg80211_find_ie(WLAN_EID_SUPP_RATES, var_pos, len);
 	if (rate_ie) {
+		if (rate_ie->len > MWIFIEX_SUPPORTED_RATES)
+			return;
 		memcpy(bss_cfg->rates, rate_ie + 1, rate_ie->len);
 		rate_len = rate_ie->len;
 	}
···
 	rate_ie = (void *)cfg80211_find_ie(WLAN_EID_EXT_SUPP_RATES,
 					   params->beacon.tail,
 					   params->beacon.tail_len);
-	if (rate_ie)
+	if (rate_ie) {
+		if (rate_ie->len > MWIFIEX_SUPPORTED_RATES - rate_len)
+			return;
 		memcpy(bss_cfg->rates + rate_len, rate_ie + 1, rate_ie->len);
+	}

 	return;
 }
···
 					    params->beacon.tail_len);
 	if (vendor_ie) {
 		wmm_ie = vendor_ie;
+		if (*(wmm_ie + 1) > sizeof(struct mwifiex_types_wmm_info))
+			return;
 		memcpy(&bss_cfg->wmm_info, wmm_ie +
 		       sizeof(struct ieee_types_header), *(wmm_ie + 1));
 		priv->wmm_enabled = 1;
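Both mwifiex hunks guard a memcpy with an explicit check that the attacker-controlled IE length fits what remains of the fixed destination buffer. The same pattern in isolation (MAX_RATES and copy_rates_ie are illustrative stand-ins for MWIFIEX_SUPPORTED_RATES and the driver code):

```c
#include <stddef.h>
#include <string.h>

/* Illustrative cap standing in for MWIFIEX_SUPPORTED_RATES. */
#define MAX_RATES 14

/* Append an IE payload only if it fits the space left in the fixed
 * destination buffer; oversized (attacker-controlled) lengths from a
 * beacon are rejected instead of overflowing the array. */
static int copy_rates_ie(unsigned char *dst, size_t used,
			 const unsigned char *ie_payload, size_t ie_len)
{
	if (used > MAX_RATES || ie_len > MAX_RATES - used)
		return -1;
	memcpy(dst + used, ie_payload, ie_len);
	return 0;
}
```

Note the check subtracts `used` before comparing, mirroring the second hunk's `MWIFIEX_SUPPORTED_RATES - rate_len` bound for the extended-rates IE.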
drivers/net/wireless/mediatek/mt76/mt76x0/eeprom.c (+5)
···
 		dev_dbg(dev->mt76.dev, "mask out 2GHz support\n");
 	}

+	if (is_mt7630(dev)) {
+		dev->mt76.cap.has_5ghz = false;
+		dev_dbg(dev->mt76.dev, "mask out 5GHz support\n");
+	}
+
 	if (!mt76x02_field_valid(nic_conf1 & 0xff))
 		nic_conf1 &= 0xff00;
drivers/net/wireless/mediatek/mt76/mt76x0/pci.c (+14 -1)
···
 	mt76x0e_stop_hw(dev);
 }

+static int
+mt76x0e_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+		struct ieee80211_vif *vif, struct ieee80211_sta *sta,
+		struct ieee80211_key_conf *key)
+{
+	struct mt76x02_dev *dev = hw->priv;
+
+	if (is_mt7630(dev))
+		return -EOPNOTSUPP;
+
+	return mt76x02_set_key(hw, cmd, vif, sta, key);
+}
+
 static void
 mt76x0e_flush(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
 	      u32 queues, bool drop)
···
 	.configure_filter = mt76x02_configure_filter,
 	.bss_info_changed = mt76x02_bss_info_changed,
 	.sta_state = mt76_sta_state,
-	.set_key = mt76x02_set_key,
+	.set_key = mt76x0e_set_key,
 	.conf_tx = mt76x02_conf_tx,
 	.sw_scan_start = mt76x02_sw_scan,
 	.sw_scan_complete = mt76x02_sw_scan_complete,
+18 -19
drivers/net/wireless/ralink/rt2x00/rt2800lib.c
··· 1654 1654 1655 1655 offset = MAC_IVEIV_ENTRY(key->hw_key_idx); 1656 1656 1657 - rt2800_register_multiread(rt2x00dev, offset, 1658 - &iveiv_entry, sizeof(iveiv_entry)); 1659 - if ((crypto->cipher == CIPHER_TKIP) || 1660 - (crypto->cipher == CIPHER_TKIP_NO_MIC) || 1661 - (crypto->cipher == CIPHER_AES)) 1662 - iveiv_entry.iv[3] |= 0x20; 1663 - iveiv_entry.iv[3] |= key->keyidx << 6; 1657 + if (crypto->cmd == SET_KEY) { 1658 + rt2800_register_multiread(rt2x00dev, offset, 1659 + &iveiv_entry, sizeof(iveiv_entry)); 1660 + if ((crypto->cipher == CIPHER_TKIP) || 1661 + (crypto->cipher == CIPHER_TKIP_NO_MIC) || 1662 + (crypto->cipher == CIPHER_AES)) 1663 + iveiv_entry.iv[3] |= 0x20; 1664 + iveiv_entry.iv[3] |= key->keyidx << 6; 1665 + } else { 1666 + memset(&iveiv_entry, 0, sizeof(iveiv_entry)); 1667 + } 1668 + 1664 1669 rt2800_register_multiwrite(rt2x00dev, offset, 1665 1670 &iveiv_entry, sizeof(iveiv_entry)); 1666 1671 } ··· 4242 4237 switch (rt2x00dev->default_ant.rx_chain_num) { 4243 4238 case 3: 4244 4239 /* Turn on tertiary LNAs */ 4245 - rt2x00_set_field32(&tx_pin, TX_PIN_CFG_LNA_PE_A2_EN, 4246 - rf->channel > 14); 4247 - rt2x00_set_field32(&tx_pin, TX_PIN_CFG_LNA_PE_G2_EN, 4248 - rf->channel <= 14); 4240 + rt2x00_set_field32(&tx_pin, TX_PIN_CFG_LNA_PE_A2_EN, 1); 4241 + rt2x00_set_field32(&tx_pin, TX_PIN_CFG_LNA_PE_G2_EN, 1); 4249 4242 /* fall-through */ 4250 4243 case 2: 4251 4244 /* Turn on secondary LNAs */ 4252 - rt2x00_set_field32(&tx_pin, TX_PIN_CFG_LNA_PE_A1_EN, 4253 - rf->channel > 14); 4254 - rt2x00_set_field32(&tx_pin, TX_PIN_CFG_LNA_PE_G1_EN, 4255 - rf->channel <= 14); 4245 + rt2x00_set_field32(&tx_pin, TX_PIN_CFG_LNA_PE_A1_EN, 1); 4246 + rt2x00_set_field32(&tx_pin, TX_PIN_CFG_LNA_PE_G1_EN, 1); 4256 4247 /* fall-through */ 4257 4248 case 1: 4258 4249 /* Turn on primary LNAs */ 4259 - rt2x00_set_field32(&tx_pin, TX_PIN_CFG_LNA_PE_A0_EN, 4260 - rf->channel > 14); 4261 - rt2x00_set_field32(&tx_pin, TX_PIN_CFG_LNA_PE_G0_EN, 4262 - rf->channel <= 14); 4250 + rt2x00_set_field32(&tx_pin, TX_PIN_CFG_LNA_PE_A0_EN, 1); 4251 + rt2x00_set_field32(&tx_pin, TX_PIN_CFG_LNA_PE_G0_EN, 1); 4263 4252 break; 4264 4253 } 4265 4254
-1
drivers/net/wireless/rsi/rsi_91x_usb.c
··· 645 645 kfree(rsi_dev->tx_buffer); 646 646 647 647 fail_eps: 648 - kfree(rsi_dev); 649 648 650 649 return status; 651 650 }
+1 -1
drivers/nfc/st95hf/core.c
··· 316 316 &echo_response); 317 317 if (result) { 318 318 dev_err(&st95context->spicontext.spidev->dev, 319 - "err: echo response receieve error = 0x%x\n", result); 319 + "err: echo response receive error = 0x%x\n", result); 320 320 return result; 321 321 } 322 322
+1
include/linux/phy_fixed.h
··· 11 11 }; 12 12 13 13 struct device_node; 14 + struct gpio_desc; 14 15 15 16 #if IS_ENABLED(CONFIG_FIXED_PHY) 16 17 extern int fixed_phy_change_carrier(struct net_device *dev, bool new_carrier);
+2 -2
include/net/ip_fib.h
··· 513 513 struct netlink_callback *cb); 514 514 515 515 int fib_nexthop_info(struct sk_buff *skb, const struct fib_nh_common *nh, 516 - unsigned char *flags, bool skip_oif); 516 + u8 rt_family, unsigned char *flags, bool skip_oif); 517 517 int fib_add_nexthop(struct sk_buff *skb, const struct fib_nh_common *nh, 518 - int nh_weight); 518 + int nh_weight, u8 rt_family); 519 519 #endif /* _NET_FIB_H */
+3 -2
include/net/nexthop.h
··· 161 161 } 162 162 163 163 static inline 164 - int nexthop_mpath_fill_node(struct sk_buff *skb, struct nexthop *nh) 164 + int nexthop_mpath_fill_node(struct sk_buff *skb, struct nexthop *nh, 165 + u8 rt_family) 165 166 { 166 167 struct nh_group *nhg = rtnl_dereference(nh->nh_grp); 167 168 int i; ··· 173 172 struct fib_nh_common *nhc = &nhi->fib_nhc; 174 173 int weight = nhg->nh_entries[i].weight; 175 174 176 - if (fib_add_nexthop(skb, nhc, weight) < 0) 175 + if (fib_add_nexthop(skb, nhc, weight, rt_family) < 0) 177 176 return -EMSGSIZE; 178 177 } 179 178
-2
include/net/xfrm.h
··· 983 983 void xfrm_dst_ifdown(struct dst_entry *dst, struct net_device *dev); 984 984 985 985 struct xfrm_if_parms { 986 - char name[IFNAMSIZ]; /* name of XFRM device */ 987 986 int link; /* ifindex of underlying L2 interface */ 988 987 u32 if_id; /* interface identifyer */ 989 988 }; ··· 990 991 struct xfrm_if { 991 992 struct xfrm_if __rcu *next; /* next interface in list */ 992 993 struct net_device *dev; /* virtual device associated with interface */ 993 - struct net_device *phydev; /* physical device */ 994 994 struct net *net; /* netns for packet i/o */ 995 995 struct xfrm_if_parms p; /* interface parms */ 996 996
+1
include/uapi/linux/isdn/capicmd.h
··· 16 16 #define CAPI_MSG_BASELEN 8 17 17 #define CAPI_DATA_B3_REQ_LEN (CAPI_MSG_BASELEN+4+4+2+2+2) 18 18 #define CAPI_DATA_B3_RESP_LEN (CAPI_MSG_BASELEN+4+2) 19 + #define CAPI_DISCONNECT_B3_RESP_LEN (CAPI_MSG_BASELEN+4) 19 20 20 21 /*----- CAPI commands -----*/ 21 22 #define CAPI_ALERT 0x01
+14 -9
kernel/bpf/verifier.c
··· 1772 1772 bitmap_from_u64(mask, stack_mask); 1773 1773 for_each_set_bit(i, mask, 64) { 1774 1774 if (i >= func->allocated_stack / BPF_REG_SIZE) { 1775 - /* This can happen if backtracking 1776 - * is propagating stack precision where 1777 - * caller has larger stack frame 1778 - * than callee, but backtrack_insn() should 1779 - * have returned -ENOTSUPP. 1775 + /* the sequence of instructions: 1776 + * 2: (bf) r3 = r10 1777 + * 3: (7b) *(u64 *)(r3 -8) = r0 1778 + * 4: (79) r4 = *(u64 *)(r10 -8) 1779 + * doesn't contain jmps. It's backtracked 1780 + * as a single block. 1781 + * During backtracking insn 3 is not recognized as 1782 + * stack access, so at the end of backtracking 1783 + * stack slot fp-8 is still marked in stack_mask. 1784 + * However the parent state may not have accessed 1785 + * fp-8 and it's "unallocated" stack space. 1786 + * In such case fallback to conservative. 1780 1787 */ 1781 - verbose(env, "BUG spi %d stack_size %d\n", 1782 - i, func->allocated_stack); 1783 - WARN_ONCE(1, "verifier backtracking bug"); 1784 - return -EFAULT; 1788 + mark_all_scalars_precise(env, st); 1789 + return 0; 1785 1790 } 1786 1791 1787 1792 if (func->stack[i].slot_type[0] != STACK_SPILL) {
+3 -3
lib/Kconfig
··· 631 631 config PARMAN 632 632 tristate "parman" if COMPILE_TEST 633 633 634 + config OBJAGG 635 + tristate "objagg" if COMPILE_TEST 636 + 634 637 config STRING_SELFTEST 635 638 tristate "Test string functions" 636 639 ··· 656 653 657 654 config GENERIC_LIB_UCMPDI2 658 655 bool 659 - 660 - config OBJAGG 661 - tristate "objagg" if COMPILE_TEST
-5
net/bluetooth/hci_event.c
··· 5660 5660 return send_conn_param_neg_reply(hdev, handle, 5661 5661 HCI_ERROR_UNKNOWN_CONN_ID); 5662 5662 5663 - if (min < hcon->le_conn_min_interval || 5664 - max > hcon->le_conn_max_interval) 5665 - return send_conn_param_neg_reply(hdev, handle, 5666 - HCI_ERROR_INVALID_LL_PARAMS); 5667 - 5668 5663 if (hci_check_conn_params(min, max, latency, timeout)) 5669 5664 return send_conn_param_neg_reply(hdev, handle, 5670 5665 HCI_ERROR_INVALID_LL_PARAMS);
+1 -8
net/bluetooth/l2cap_core.c
··· 5305 5305 5306 5306 memset(&rsp, 0, sizeof(rsp)); 5307 5307 5308 - if (min < hcon->le_conn_min_interval || 5309 - max > hcon->le_conn_max_interval) { 5310 - BT_DBG("requested connection interval exceeds current bounds."); 5311 - err = -EINVAL; 5312 - } else { 5313 - err = hci_check_conn_params(min, max, latency, to_multiplier); 5314 - } 5315 - 5308 + err = hci_check_conn_params(min, max, latency, to_multiplier); 5316 5309 if (err) 5317 5310 rsp.result = cpu_to_le16(L2CAP_CONN_PARAM_REJECTED); 5318 5311 else
+1 -1
net/bridge/br_mdb.c
··· 437 437 struct nlmsghdr *nlh; 438 438 struct nlattr *nest; 439 439 440 - nlh = nlmsg_put(skb, pid, seq, type, sizeof(*bpm), NLM_F_MULTI); 440 + nlh = nlmsg_put(skb, pid, seq, type, sizeof(*bpm), 0); 441 441 if (!nlh) 442 442 return -EMSGSIZE; 443 443
+4
net/bridge/br_netfilter_hooks.c
··· 496 496 if (!brnet->call_ip6tables && 497 497 !br_opt_get(br, BROPT_NF_CALL_IP6TABLES)) 498 498 return NF_ACCEPT; 499 + if (!ipv6_mod_enabled()) { 500 + pr_warn_once("Module ipv6 is disabled, so call_ip6tables is not supported."); 501 + return NF_DROP; 502 + } 499 503 500 504 nf_bridge_pull_encap_header_rcsum(skb); 501 505 return br_nf_pre_routing_ipv6(priv, skb, state);
+2
net/core/dev.c
··· 8758 8758 ret = notifier_to_errno(ret); 8759 8759 if (ret) { 8760 8760 rollback_registered(dev); 8761 + rcu_barrier(); 8762 + 8761 8763 dev->reg_state = NETREG_UNREGISTERED; 8762 8764 } 8763 8765 /*
+19
net/core/skbuff.c
··· 3664 3664 int pos; 3665 3665 int dummy; 3666 3666 3667 + if (list_skb && !list_skb->head_frag && skb_headlen(list_skb) && 3668 + (skb_shinfo(head_skb)->gso_type & SKB_GSO_DODGY)) { 3669 + /* gso_size is untrusted, and we have a frag_list with a linear 3670 + * non head_frag head. 3671 + * 3672 + * (we assume checking the first list_skb member suffices; 3673 + * i.e if either of the list_skb members have non head_frag 3674 + * head, then the first one has too). 3675 + * 3676 + * If head_skb's headlen does not fit requested gso_size, it 3677 + * means that the frag_list members do NOT terminate on exact 3678 + * gso_size boundaries. Hence we cannot perform skb_frag_t page 3679 + * sharing. Therefore we must fallback to copying the frag_list 3680 + * skbs; we do so by disabling SG. 3681 + */ 3682 + if (mss != GSO_BY_FRAGS && mss != skb_headlen(head_skb)) 3683 + features &= ~NETIF_F_SG; 3684 + } 3685 + 3667 3686 __skb_push(head_skb, doffset); 3668 3687 proto = skb_network_protocol(head_skb, &dummy); 3669 3688 if (unlikely(!proto))
+3
net/core/sock_map.c
··· 656 656 struct sock *sk, u64 flags) 657 657 { 658 658 struct bpf_htab *htab = container_of(map, struct bpf_htab, map); 659 + struct inet_connection_sock *icsk = inet_csk(sk); 659 660 u32 key_size = map->key_size, hash; 660 661 struct bpf_htab_elem *elem, *elem_new; 661 662 struct bpf_htab_bucket *bucket; ··· 666 665 667 666 WARN_ON_ONCE(!rcu_read_lock_held()); 668 667 if (unlikely(flags > BPF_EXIST)) 668 + return -EINVAL; 669 + if (unlikely(icsk->icsk_ulp_data)) 669 670 return -EINVAL; 670 671 671 672 link = sk_psock_init_link();
+8 -7
net/ipv4/fib_semantics.c
··· 1582 1582 } 1583 1583 1584 1584 int fib_nexthop_info(struct sk_buff *skb, const struct fib_nh_common *nhc, 1585 - unsigned char *flags, bool skip_oif) 1585 + u8 rt_family, unsigned char *flags, bool skip_oif) 1586 1586 { 1587 1587 if (nhc->nhc_flags & RTNH_F_DEAD) 1588 1588 *flags |= RTNH_F_DEAD; ··· 1613 1613 /* if gateway family does not match nexthop family 1614 1614 * gateway is encoded as RTA_VIA 1615 1615 */ 1616 - if (nhc->nhc_gw_family != nhc->nhc_family) { 1616 + if (rt_family != nhc->nhc_gw_family) { 1617 1617 int alen = sizeof(struct in6_addr); 1618 1618 struct nlattr *nla; 1619 1619 struct rtvia *via; ··· 1654 1654 1655 1655 #if IS_ENABLED(CONFIG_IP_ROUTE_MULTIPATH) || IS_ENABLED(CONFIG_IPV6) 1656 1656 int fib_add_nexthop(struct sk_buff *skb, const struct fib_nh_common *nhc, 1657 - int nh_weight) 1657 + int nh_weight, u8 rt_family) 1658 1658 { 1659 1659 const struct net_device *dev = nhc->nhc_dev; 1660 1660 struct rtnexthop *rtnh; ··· 1667 1667 rtnh->rtnh_hops = nh_weight - 1; 1668 1668 rtnh->rtnh_ifindex = dev ? dev->ifindex : 0; 1669 1669 1670 - if (fib_nexthop_info(skb, nhc, &flags, true) < 0) 1670 + if (fib_nexthop_info(skb, nhc, rt_family, &flags, true) < 0) 1671 1671 goto nla_put_failure; 1672 1672 1673 1673 rtnh->rtnh_flags = flags; ··· 1693 1693 goto nla_put_failure; 1694 1694 1695 1695 if (unlikely(fi->nh)) { 1696 - if (nexthop_mpath_fill_node(skb, fi->nh) < 0) 1696 + if (nexthop_mpath_fill_node(skb, fi->nh, AF_INET) < 0) 1697 1697 goto nla_put_failure; 1698 1698 goto mp_end; 1699 1699 } 1700 1700 1701 1701 for_nexthops(fi) { 1702 - if (fib_add_nexthop(skb, &nh->nh_common, nh->fib_nh_weight) < 0) 1702 + if (fib_add_nexthop(skb, &nh->nh_common, nh->fib_nh_weight, 1703 + AF_INET) < 0) 1703 1704 goto nla_put_failure; 1704 1705 #ifdef CONFIG_IP_ROUTE_CLASSID 1705 1706 if (nh->nh_tclassid && ··· 1776 1775 const struct fib_nh_common *nhc = fib_info_nhc(fi, 0); 1777 1776 unsigned char flags = 0; 1778 1777 1779 - if (fib_nexthop_info(skb, nhc, &flags, false) < 0) 1778 + if (fib_nexthop_info(skb, nhc, AF_INET, &flags, false) < 0) 1780 1779 goto nla_put_failure; 1781 1780 1782 1781 rtm->rtm_flags = flags;
+1 -1
net/ipv4/tcp_input.c
··· 266 266 267 267 static void tcp_ecn_withdraw_cwr(struct tcp_sock *tp) 268 268 { 269 - tp->ecn_flags &= ~TCP_ECN_DEMAND_CWR; 269 + tp->ecn_flags &= ~TCP_ECN_QUEUE_CWR; 270 270 } 271 271 272 272 static void __tcp_ecn_check_ce(struct sock *sk, const struct sk_buff *skb)
+1 -1
net/ipv6/ping.c
··· 223 223 return 0; 224 224 } 225 225 226 - static void __net_init ping_v6_proc_exit_net(struct net *net) 226 + static void __net_exit ping_v6_proc_exit_net(struct net *net) 227 227 { 228 228 remove_proc_entry("icmp6", net->proc_net); 229 229 }
+13 -8
net/ipv6/route.c
··· 4388 4388 struct fib6_config cfg = { 4389 4389 .fc_table = l3mdev_fib_table(idev->dev) ? : RT6_TABLE_LOCAL, 4390 4390 .fc_ifindex = idev->dev->ifindex, 4391 - .fc_flags = RTF_UP | RTF_ADDRCONF | RTF_NONEXTHOP, 4391 + .fc_flags = RTF_UP | RTF_NONEXTHOP, 4392 4392 .fc_dst = *addr, 4393 4393 .fc_dst_len = 128, 4394 4394 .fc_protocol = RTPROT_KERNEL, 4395 4395 .fc_nlinfo.nl_net = net, 4396 4396 .fc_ignore_dev_down = true, 4397 4397 }; 4398 + struct fib6_info *f6i; 4398 4399 4399 4400 if (anycast) { 4400 4401 cfg.fc_type = RTN_ANYCAST; ··· 4405 4404 cfg.fc_flags |= RTF_LOCAL; 4406 4405 } 4407 4406 4408 - return ip6_route_info_create(&cfg, gfp_flags, NULL); 4407 + f6i = ip6_route_info_create(&cfg, gfp_flags, NULL); 4408 + if (!IS_ERR(f6i)) 4409 + f6i->dst_nocount = true; 4410 + return f6i; 4409 4411 } 4410 4412 4411 4413 /* remove deleted ip from prefsrc entries */ ··· 5329 5325 if (nexthop_is_multipath(nh)) { 5330 5326 struct nlattr *mp; 5331 5327 5332 - mp = nla_nest_start(skb, RTA_MULTIPATH); 5328 + mp = nla_nest_start_noflag(skb, RTA_MULTIPATH); 5333 5329 if (!mp) 5334 5330 goto nla_put_failure; 5335 5331 5336 - if (nexthop_mpath_fill_node(skb, nh)) 5332 + if (nexthop_mpath_fill_node(skb, nh, AF_INET6)) 5337 5333 goto nla_put_failure; 5338 5334 5339 5335 nla_nest_end(skb, mp); ··· 5341 5337 struct fib6_nh *fib6_nh; 5342 5338 5343 5339 fib6_nh = nexthop_fib6_nh(nh); 5344 - if (fib_nexthop_info(skb, &fib6_nh->nh_common, 5340 + if (fib_nexthop_info(skb, &fib6_nh->nh_common, AF_INET6, 5345 5341 flags, false) < 0) 5346 5342 goto nla_put_failure; 5347 5343 } ··· 5470 5466 goto nla_put_failure; 5471 5467 5472 5468 if (fib_add_nexthop(skb, &rt->fib6_nh->nh_common, 5473 - rt->fib6_nh->fib_nh_weight) < 0) 5469 + rt->fib6_nh->fib_nh_weight, AF_INET6) < 0) 5474 5470 goto nla_put_failure; 5475 5471 5476 5472 list_for_each_entry_safe(sibling, next_sibling, 5477 5473 &rt->fib6_siblings, fib6_siblings) { 5478 5474 if (fib_add_nexthop(skb, &sibling->fib6_nh->nh_common, 5479 - sibling->fib6_nh->fib_nh_weight) < 0) 5475 + sibling->fib6_nh->fib_nh_weight, 5476 + AF_INET6) < 0) 5480 5477 goto nla_put_failure; 5481 5478 } 5482 5479 ··· 5494 5489 5495 5490 rtm->rtm_flags |= nh_flags; 5496 5491 } else { 5497 - if (fib_nexthop_info(skb, &rt->fib6_nh->nh_common, 5492 + if (fib_nexthop_info(skb, &rt->fib6_nh->nh_common, AF_INET6, 5498 5493 &nh_flags, false) < 0) 5499 5494 goto nla_put_failure; 5500 5495
+4 -10
net/mac80211/cfg.c
··· 1529 1529 struct sta_info *sta; 1530 1530 struct ieee80211_sub_if_data *sdata; 1531 1531 int err; 1532 - int layer2_update; 1533 1532 1534 1533 if (params->vlan) { 1535 1534 sdata = IEEE80211_DEV_TO_SUB_IF(params->vlan); ··· 1572 1573 test_sta_flag(sta, WLAN_STA_ASSOC)) 1573 1574 rate_control_rate_init(sta); 1574 1575 1575 - layer2_update = sdata->vif.type == NL80211_IFTYPE_AP_VLAN || 1576 - sdata->vif.type == NL80211_IFTYPE_AP; 1577 - 1578 1576 err = sta_info_insert_rcu(sta); 1579 1577 if (err) { 1580 1578 rcu_read_unlock(); 1581 1579 return err; 1582 1580 } 1583 - 1584 - if (layer2_update) 1585 - cfg80211_send_layer2_update(sta->sdata->dev, sta->sta.addr); 1586 1581 1587 1582 rcu_read_unlock(); 1588 1583 ··· 1675 1682 sta->sdata = vlansdata; 1676 1683 ieee80211_check_fast_xmit(sta); 1677 1684 1678 - if (test_sta_flag(sta, WLAN_STA_AUTHORIZED)) 1685 + if (test_sta_flag(sta, WLAN_STA_AUTHORIZED)) { 1679 1686 ieee80211_vif_inc_num_mcast(sta->sdata); 1680 - 1681 - cfg80211_send_layer2_update(sta->sdata->dev, sta->sta.addr); 1687 + cfg80211_send_layer2_update(sta->sdata->dev, 1688 + sta->sta.addr); 1689 + } 1682 1690 } 1683 1691 1684 1692 err = sta_apply_parameters(local, sta, params);
+4
net/mac80211/sta_info.c
··· 1979 1979 ieee80211_check_fast_xmit(sta); 1980 1980 ieee80211_check_fast_rx(sta); 1981 1981 } 1982 + if (sta->sdata->vif.type == NL80211_IFTYPE_AP_VLAN || 1983 + sta->sdata->vif.type == NL80211_IFTYPE_AP) 1984 + cfg80211_send_layer2_update(sta->sdata->dev, 1985 + sta->sta.addr); 1982 1986 break; 1983 1987 default: 1984 1988 break;
+5 -2
net/netfilter/nf_conntrack_netlink.c
··· 553 553 goto nla_put_failure; 554 554 555 555 if (ctnetlink_dump_status(skb, ct) < 0 || 556 - ctnetlink_dump_timeout(skb, ct) < 0 || 557 556 ctnetlink_dump_acct(skb, ct, type) < 0 || 558 557 ctnetlink_dump_timestamp(skb, ct) < 0 || 559 - ctnetlink_dump_protoinfo(skb, ct) < 0 || 560 558 ctnetlink_dump_helpinfo(skb, ct) < 0 || 561 559 ctnetlink_dump_mark(skb, ct) < 0 || 562 560 ctnetlink_dump_secctx(skb, ct) < 0 || ··· 564 566 ctnetlink_dump_master(skb, ct) < 0 || 565 567 ctnetlink_dump_ct_seq_adj(skb, ct) < 0 || 566 568 ctnetlink_dump_ct_synproxy(skb, ct) < 0) 569 + goto nla_put_failure; 570 + 571 + if (!test_bit(IPS_OFFLOAD_BIT, &ct->status) && 572 + (ctnetlink_dump_timeout(skb, ct) < 0 || 573 + ctnetlink_dump_protoinfo(skb, ct) < 0)) 567 574 goto nla_put_failure; 568 575 569 576 nlmsg_end(skb, nlh);
+1 -1
net/netfilter/nf_flow_table_core.c
··· 217 217 return err; 218 218 } 219 219 220 - flow->timeout = (u32)jiffies; 220 + flow->timeout = (u32)jiffies + NF_FLOW_TIMEOUT; 221 221 return 0; 222 222 } 223 223 EXPORT_SYMBOL_GPL(flow_offload_add);
+3
net/netfilter/nft_fib_netdev.c
··· 14 14 #include <linux/netfilter/nf_tables.h> 15 15 #include <net/netfilter/nf_tables_core.h> 16 16 #include <net/netfilter/nf_tables.h> 17 + #include <net/ipv6.h> 17 18 18 19 #include <net/netfilter/nft_fib.h> 19 20 ··· 35 34 } 36 35 break; 37 36 case ETH_P_IPV6: 37 + if (!ipv6_mod_enabled()) 38 + break; 38 39 switch (priv->result) { 39 40 case NFT_FIB_RESULT_OIF: 40 41 case NFT_FIB_RESULT_OIFNAME:
+3 -3
net/netfilter/nft_socket.c
··· 47 47 return; 48 48 } 49 49 50 - /* So that subsequent socket matching not to require other lookups. */ 51 - skb->sk = sk; 52 - 53 50 switch(priv->key) { 54 51 case NFT_SOCKET_TRANSPARENT: 55 52 nft_reg_store8(dest, inet_sk_transparent(sk)); ··· 63 66 WARN_ON(1); 64 67 regs->verdict.code = NFT_BREAK; 65 68 } 69 + 70 + if (sk != skb->sk) 71 + sock_gen_put(sk); 66 72 } 67 73 68 74 static const struct nla_policy nft_socket_policy[NFTA_SOCKET_MAX + 1] = {
+4 -1
net/qrtr/tun.c
··· 84 84 if (!kbuf) 85 85 return -ENOMEM; 86 86 87 - if (!copy_from_iter_full(kbuf, len, from)) 87 + if (!copy_from_iter_full(kbuf, len, from)) { 88 + kfree(kbuf); 88 89 return -EFAULT; 90 + } 89 91 90 92 ret = qrtr_endpoint_post(&tun->ep, kbuf, len); 91 93 94 + kfree(kbuf); 92 95 return ret < 0 ? ret : len; 93 96 } 94 97
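The qrtr change above plugs two leaks in one function: `kbuf` was never freed when copy_from_iter_full() failed, nor after qrtr_endpoint_post() had consumed the data. The ownership rule in miniature, as a hedged sketch with hypothetical helpers (post_data() stands in for qrtr_endpoint_post() and, like it, does not take ownership of the buffer):

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical consumer: returns bytes consumed or a negative error.
 * It does NOT take ownership of buf. */
static int post_data(const char *buf, size_t len)
{
	(void)buf;
	return len ? (int)len : -22; /* -EINVAL-style sketch */
}

/* Every exit path frees kbuf: the copy-failure path (the first leak
 * fixed above) and the normal path after posting (the second). */
static int write_iter_sketch(const char *src, size_t len, int copy_fails)
{
	char *kbuf = malloc(len ? len : 1);
	int ret;

	if (!kbuf)
		return -12;		/* -ENOMEM-style sketch */

	if (copy_fails) {		/* models copy_from_iter_full() failing */
		free(kbuf);
		return -14;		/* -EFAULT-style sketch */
	}
	memcpy(kbuf, src, len);

	ret = post_data(kbuf, len);
	free(kbuf);			/* freed on the success path too */
	return ret < 0 ? ret : (int)len;
}
```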
+18 -22
net/rds/bind.c
··· 1 1 /* 2 - * Copyright (c) 2006, 2018 Oracle and/or its affiliates. All rights reserved. 2 + * Copyright (c) 2006, 2019 Oracle and/or its affiliates. All rights reserved. 3 3 * 4 4 * This software is available to you under a choice of one of two 5 5 * licenses. You may choose to be licensed under the terms of the GNU ··· 239 239 goto out; 240 240 } 241 241 242 - sock_set_flag(sk, SOCK_RCU_FREE); 243 - ret = rds_add_bound(rs, binding_addr, &port, scope_id); 244 - if (ret) 245 - goto out; 246 - 247 - if (rs->rs_transport) { /* previously bound */ 242 + /* The transport can be set using SO_RDS_TRANSPORT option before the 243 + * socket is bound. 244 + */ 245 + if (rs->rs_transport) { 248 246 trans = rs->rs_transport; 249 247 if (trans->laddr_check(sock_net(sock->sk), 250 248 binding_addr, scope_id) != 0) { 251 249 ret = -ENOPROTOOPT; 252 - rds_remove_bound(rs); 253 - } else { 254 - ret = 0; 250 + goto out; 255 251 } 256 - goto out; 257 - } 258 - trans = rds_trans_get_preferred(sock_net(sock->sk), binding_addr, 259 - scope_id); 260 - if (!trans) { 261 - ret = -EADDRNOTAVAIL; 262 - rds_remove_bound(rs); 263 - pr_info_ratelimited("RDS: %s could not find a transport for %pI6c, load rds_tcp or rds_rdma?\n", 264 - __func__, binding_addr); 265 - goto out; 252 + } else { 253 + trans = rds_trans_get_preferred(sock_net(sock->sk), 254 + binding_addr, scope_id); 255 + if (!trans) { 256 + ret = -EADDRNOTAVAIL; 257 + pr_info_ratelimited("RDS: %s could not find a transport for %pI6c, load rds_tcp or rds_rdma?\n", 258 + __func__, binding_addr); 259 + goto out; 260 + } 261 + rs->rs_transport = trans; 266 262 } 267 263 268 - rs->rs_transport = trans; 269 - ret = 0; 264 + sock_set_flag(sk, SOCK_RCU_FREE); 265 + ret = rds_add_bound(rs, binding_addr, &port, scope_id); 270 266 271 267 out: 272 268 release_sock(sk);
+1 -1
net/rxrpc/input.c
··· 1262 1262 1263 1263 if (nskb != skb) { 1264 1264 rxrpc_eaten_skb(skb, rxrpc_skb_received); 1265 - rxrpc_new_skb(skb, rxrpc_skb_unshared); 1266 1265 skb = nskb; 1266 + rxrpc_new_skb(skb, rxrpc_skb_unshared); 1267 1267 sp = rxrpc_skb(skb); 1268 1268 } 1269 1269 }
+2
net/sched/sch_api.c
··· 1920 1920 cl = cops->find(q, portid); 1921 1921 if (!cl) 1922 1922 return; 1923 + if (!cops->tcf_block) 1924 + return; 1923 1925 block = cops->tcf_block(q, cl, NULL); 1924 1926 if (!block) 1925 1927 return;
+7 -2
net/sched/sch_generic.c
··· 46 46 * - updates to tree and tree walking are only done under the rtnl mutex. 47 47 */ 48 48 49 + #define SKB_XOFF_MAGIC ((struct sk_buff *)1UL) 50 + 49 51 static inline struct sk_buff *__skb_dequeue_bad_txq(struct Qdisc *q) 50 52 { 51 53 const struct netdev_queue *txq = q->dev_queue; ··· 73 71 q->q.qlen--; 74 72 } 75 73 } else { 76 - skb = NULL; 74 + skb = SKB_XOFF_MAGIC; 77 75 } 78 76 } 79 77 ··· 255 253 return skb; 256 254 257 255 skb = qdisc_dequeue_skb_bad_txq(q); 258 - if (unlikely(skb)) 256 + if (unlikely(skb)) { 257 + if (skb == SKB_XOFF_MAGIC) 258 + return NULL; 259 259 goto bulk; 260 + } 260 261 skb = q->dequeue(q); 261 262 if (skb) { 262 263 bulk:
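The sch_generic fix reserves a magic pointer value (SKB_XOFF_MAGIC) so the caller can distinguish "the bad-txq is genuinely empty" (NULL, fall through and dequeue normally) from "the queue is frozen, stop". The sentinel idiom in isolation, with illustrative types rather than the kernel's:

```c
#include <stddef.h>

struct pkt { int id; };

/* A reserved non-NULL, never-dereferenced sentinel, as with
 * SKB_XOFF_MAGIC above. NULL keeps meaning "nothing queued"; the
 * sentinel means "stop, the queue is frozen". */
#define PKT_XOFF ((struct pkt *)1UL)

static struct pkt *dequeue_sketch(struct pkt *head, int frozen)
{
	if (frozen)
		return PKT_XOFF;	/* caller must not try another queue */
	return head;			/* may be NULL: genuinely empty */
}
```

The value `1` is safe as a sentinel here because no valid object pointer can equal it; the only rule is that callers compare against it and never dereference it.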
+1 -1
net/sched/sch_hhf.c
··· 531 531 new_hhf_non_hh_weight = nla_get_u32(tb[TCA_HHF_NON_HH_WEIGHT]); 532 532 533 533 non_hh_quantum = (u64)new_quantum * new_hhf_non_hh_weight; 534 - if (non_hh_quantum > INT_MAX) 534 + if (non_hh_quantum == 0 || non_hh_quantum > INT_MAX) 535 535 return -EINVAL; 536 536 537 537 sch_tree_lock(sch);
+1 -1
net/sctp/protocol.c
··· 1336 1336 return status; 1337 1337 } 1338 1338 1339 - static void __net_init sctp_ctrlsock_exit(struct net *net) 1339 + static void __net_exit sctp_ctrlsock_exit(struct net *net) 1340 1340 { 1341 1341 /* Free the control endpoint. */ 1342 1342 inet_ctl_sock_destroy(net->sctp.ctl_sock);
+1 -1
net/sctp/sm_sideeffect.c
··· 547 547 if (net->sctp.pf_enable && 548 548 (transport->state == SCTP_ACTIVE) && 549 549 (transport->error_count < transport->pathmaxrxt) && 550 - (transport->error_count > asoc->pf_retrans)) { 550 + (transport->error_count > transport->pf_retrans)) { 551 551 552 552 sctp_assoc_control_transport(asoc, transport, 553 553 SCTP_TRANSPORT_PF,
+13 -11
net/sctp/socket.c
··· 309 309 return retval; 310 310 } 311 311 312 - static long sctp_get_port_local(struct sock *, union sctp_addr *); 312 + static int sctp_get_port_local(struct sock *, union sctp_addr *); 313 313 314 314 /* Verify this is a valid sockaddr. */ 315 315 static struct sctp_af *sctp_sockaddr_af(struct sctp_sock *opt, ··· 399 399 * detection. 400 400 */ 401 401 addr->v4.sin_port = htons(snum); 402 - if ((ret = sctp_get_port_local(sk, addr))) { 402 + if (sctp_get_port_local(sk, addr)) 403 403 return -EADDRINUSE; 404 - } 405 404 406 405 /* Refresh ephemeral port. */ 407 406 if (!bp->port) ··· 412 413 ret = sctp_add_bind_addr(bp, addr, af->sockaddr_len, 413 414 SCTP_ADDR_SRC, GFP_ATOMIC); 414 415 415 - /* Copy back into socket for getsockname() use. */ 416 - if (!ret) { 417 - inet_sk(sk)->inet_sport = htons(inet_sk(sk)->inet_num); 418 - sp->pf->to_sk_saddr(addr, sk); 416 + if (ret) { 417 + sctp_put_port(sk); 418 + return ret; 419 419 } 420 + /* Copy back into socket for getsockname() use. */ 421 + inet_sk(sk)->inet_sport = htons(inet_sk(sk)->inet_num); 422 + sp->pf->to_sk_saddr(addr, sk); 420 423 421 424 return ret; 422 425 } ··· 7174 7173 val.spt_pathmaxrxt = trans->pathmaxrxt; 7175 7174 val.spt_pathpfthld = trans->pf_retrans; 7176 7175 7177 - return 0; 7176 + goto out; 7178 7177 } 7179 7178 7180 7179 asoc = sctp_id2assoc(sk, val.spt_assoc_id); ··· 7192 7191 val.spt_pathmaxrxt = sp->pathmaxrxt; 7193 7192 } 7194 7193 7194 + out: 7195 7195 if (put_user(len, optlen) || copy_to_user(optval, &val, len)) 7196 7196 return -EFAULT; 7197 7197 ··· 8000 7998 static struct sctp_bind_bucket *sctp_bucket_create( 8001 7999 struct sctp_bind_hashbucket *head, struct net *, unsigned short snum); 8002 8000 8003 - static long sctp_get_port_local(struct sock *sk, union sctp_addr *addr) 8001 + static int sctp_get_port_local(struct sock *sk, union sctp_addr *addr) 8004 8002 { 8005 8003 struct sctp_sock *sp = sctp_sk(sk); 8006 8004 bool reuse = (sk->sk_reuse || sp->reuse); ··· 8110 8108 8111 8109 if (sctp_bind_addr_conflict(&ep2->base.bind_addr, 8112 8110 addr, sp2, sp)) { 8113 - ret = (long)sk2; 8111 + ret = 1; 8114 8112 goto fail_unlock; 8115 8113 } 8116 8114 } ··· 8182 8180 addr.v4.sin_port = htons(snum); 8183 8181 8184 8182 /* Note: sk->sk_num gets filled in if ephemeral port request. */ 8185 - return !!sctp_get_port_local(sk, &addr); 8183 + return sctp_get_port_local(sk, &addr); 8186 8184 8187 8185 8188 8186 /*
+2 -1
net/tipc/name_distr.c
··· 223 223 publ->key); 224 224 } 225 225 226 - kfree_rcu(p, rcu); 226 + if (p) 227 + kfree_rcu(p, rcu); 227 228 } 228 229 229 230 /**
+25 -31
net/xfrm/xfrm_interface.c
··· 145 145 if (err < 0) 146 146 goto out; 147 147 148 - strcpy(xi->p.name, dev->name); 149 - 150 148 dev_hold(dev); 151 149 xfrmi_link(xfrmn, xi); 152 150 ··· 175 177 struct xfrmi_net *xfrmn = net_generic(xi->net, xfrmi_net_id); 176 178 177 179 xfrmi_unlink(xfrmn, xi); 178 - dev_put(xi->phydev); 179 180 dev_put(dev); 180 181 } 181 182 ··· 291 294 if (tdev == dev) { 292 295 stats->collisions++; 293 296 net_warn_ratelimited("%s: Local routing loop detected!\n", 294 - xi->p.name); 297 + dev->name); 295 298 goto tx_err_dst_release; 296 299 } 297 300 ··· 361 364 goto tx_err; 362 365 } 363 366 364 - fl.flowi_oif = xi->phydev->ifindex; 367 + fl.flowi_oif = xi->p.link; 365 368 366 369 ret = xfrmi_xmit2(skb, dev, &fl); 367 370 if (ret < 0) ··· 502 505 503 506 static int xfrmi_update(struct xfrm_if *xi, struct xfrm_if_parms *p) 504 507 { 505 - struct net *net = dev_net(xi->dev); 508 + struct net *net = xi->net; 506 509 struct xfrmi_net *xfrmn = net_generic(net, xfrmi_net_id); 507 510 int err; 508 511 ··· 547 550 { 548 551 struct xfrm_if *xi = netdev_priv(dev); 549 552 550 - return xi->phydev->ifindex; 553 + return xi->p.link; 551 554 } 552 555 ··· 573 576 dev->needs_free_netdev = true; 574 577 dev->priv_destructor = xfrmi_dev_free; 575 578 netif_keep_dst(dev); 579 + 580 + eth_broadcast_addr(dev->broadcast); 576 581 } 577 582 578 583 static int xfrmi_dev_init(struct net_device *dev) 579 584 { 580 585 struct xfrm_if *xi = netdev_priv(dev); 581 - struct net_device *phydev = xi->phydev; 586 + struct net_device *phydev = __dev_get_by_index(xi->net, xi->p.link); 582 587 int err; 583 588 584 589 dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats); ··· 595 596 596 597 dev->features |= NETIF_F_LLTX; 597 598 598 - dev->needed_headroom = phydev->needed_headroom; 599 - dev->needed_tailroom = phydev->needed_tailroom; 599 + if (phydev) { 600 + dev->needed_headroom = phydev->needed_headroom; 601 + dev->needed_tailroom = phydev->needed_tailroom; 600 602 601 - if (is_zero_ether_addr(dev->dev_addr)) 602 - eth_hw_addr_inherit(dev, phydev); 603 - if (is_zero_ether_addr(dev->broadcast)) 604 - memcpy(dev->broadcast, phydev->broadcast, dev->addr_len); 603 + if (is_zero_ether_addr(dev->dev_addr)) 604 + eth_hw_addr_inherit(dev, phydev); 605 + if (is_zero_ether_addr(dev->broadcast)) 606 + memcpy(dev->broadcast, phydev->broadcast, 607 + dev->addr_len); 608 + } else { 609 + eth_hw_addr_random(dev); 610 + eth_broadcast_addr(dev->broadcast); 611 + } 605 612 606 613 return 0; 607 614 } ··· 643 638 int err; 644 639 645 640 xfrmi_netlink_parms(data, &p); 646 - 647 - if (!tb[IFLA_IFNAME]) 648 - return -EINVAL; 649 - 650 - nla_strlcpy(p.name, tb[IFLA_IFNAME], IFNAMSIZ); 651 - 652 641 xi = xfrmi_locate(net, &p); 653 642 if (xi) 654 643 return -EEXIST; ··· 651 652 xi->p = p; 652 653 xi->net = net; 653 654 xi->dev = dev; 654 - xi->phydev = dev_get_by_index(net, p.link); 655 - if (!xi->phydev) 656 - return -ENODEV; 657 655 658 656 err = xfrmi_create(dev); 659 - if (err < 0) 660 - dev_put(xi->phydev); 661 657 return err; 662 658 } ··· 666 672 struct netlink_ext_ack *extack) 667 673 { 668 674 struct xfrm_if *xi = netdev_priv(dev); 669 - struct net *net = dev_net(dev); 675 + struct net *net = xi->net; 676 + struct xfrm_if_parms p; 670 677 671 - xfrmi_netlink_parms(data, &xi->p); 672 - 673 - xi = xfrmi_locate(net, &xi->p); 678 + xfrmi_netlink_parms(data, &p); 679 + xi = xfrmi_locate(net, &p); 674 680 if (!xi) { 675 681 xi = netdev_priv(dev); 676 682 } else { ··· 678 684 return -EEXIST; 679 685 } 680 686 681 - return xfrmi_update(xi, &xi->p); 687 + return xfrmi_update(xi, &p); 682 688 } 683 689 684 690 static size_t xfrmi_get_size(const struct net_device *dev) ··· 709 715 { 710 716 struct xfrm_if *xi = netdev_priv(dev); 711 717 712 - return dev_net(xi->phydev); 718 + return xi->net; 713 719 } 714 720 715 721 static const struct nla_policy xfrmi_policy[IFLA_XFRM_MAX + 1] = {
+4 -2
net/xfrm/xfrm_policy.c
··· 912 912 } else if (delta > 0) { 913 913 p = &parent->rb_right; 914 914 } else { 915 + bool same_prefixlen = node->prefixlen == n->prefixlen; 915 916 struct xfrm_policy *tmp; 916 917 917 918 hlist_for_each_entry(tmp, &n->hhead, bydst) { ··· 920 919 hlist_del_rcu(&tmp->bydst); 921 920 } 922 921 922 + node->prefixlen = prefixlen; 923 + 923 924 xfrm_policy_inexact_list_reinsert(net, node, family); 924 925 925 - if (node->prefixlen == n->prefixlen) { 926 + if (same_prefixlen) { 926 927 kfree_rcu(n, rcu); 927 928 return; 928 929 } ··· 932 929 rb_erase(*p, new); 933 930 kfree_rcu(n, rcu); 934 931 n = node; 935 - n->prefixlen = prefixlen; 936 932 goto restart; 937 933 } 938 934 }
+13 -11
tools/testing/selftests/net/fib_nexthops.sh
···
 			printf "    ${out}\n"
 			printf "    Expected:\n"
 			printf "    ${expected}\n\n"
+		else
+			echo "    WARNING: Unexpected route entry"
 		fi
 	fi
···

 	run_cmd "$IP nexthop get id 52"
 	log_test $? 0 "Get nexthop by id"
-	check_nexthop "id 52" "id 52 via 2001:db8:91::2 dev veth1"
+	check_nexthop "id 52" "id 52 via 2001:db8:91::2 dev veth1 scope link"

 	run_cmd "$IP nexthop del id 52"
 	log_test $? 0 "Delete nexthop by id"
···
 	run_cmd "$IP -6 nexthop add id 85 dev veth1"
 	run_cmd "$IP ro replace 2001:db8:101::1/128 nhid 85"
 	log_test $? 0 "IPv6 route with device only nexthop"
-	check_route6 "2001:db8:101::1" "2001:db8:101::1 nhid 85 dev veth1"
+	check_route6 "2001:db8:101::1" "2001:db8:101::1 nhid 85 dev veth1 metric 1024 pref medium"

 	run_cmd "$IP nexthop add id 123 group 81/85"
 	run_cmd "$IP ro replace 2001:db8:101::1/128 nhid 123"
 	log_test $? 0 "IPv6 multipath route with nexthop mix - dev only + gw"
-	check_route6 "2001:db8:101::1" "2001:db8:101::1 nhid 85 nexthop via 2001:db8:91::2 dev veth1 nexthop dev veth1"
+	check_route6 "2001:db8:101::1" "2001:db8:101::1 nhid 123 metric 1024 nexthop via 2001:db8:91::2 dev veth1 weight 1 nexthop dev veth1 weight 1 pref medium"

 	#
 	# IPv6 route with v4 nexthop - not allowed
···

 	run_cmd "$IP nexthop get id 12"
 	log_test $? 0 "Get nexthop by id"
-	check_nexthop "id 12" "id 12 via 172.16.1.2 src 172.16.1.1 dev veth1 scope link"
+	check_nexthop "id 12" "id 12 via 172.16.1.2 dev veth1 scope link"

 	run_cmd "$IP nexthop del id 12"
 	log_test $? 0 "Delete nexthop by id"
···
 	set +e
 	run_cmd "$IP ro add 172.16.101.1/32 nhid 11"
 	log_test $? 0 "IPv6 nexthop with IPv4 route"
-	check_route "172.16.101.1" "172.16.101.1 nhid 11 via ${lladdr} dev veth1"
+	check_route "172.16.101.1" "172.16.101.1 nhid 11 via inet6 ${lladdr} dev veth1"

 	set -e
 	run_cmd "$IP nexthop add id 12 via 172.16.1.2 dev veth1"
···
 	run_cmd "$IP ro replace 172.16.101.1/32 nhid 101"
 	log_test $? 0 "IPv6 nexthop with IPv4 route"

-	check_route "172.16.101.1" "172.16.101.1 nhid 101 nexthop via ${lladdr} dev veth1 weight 1 nexthop via 172.16.1.2 dev veth1 weight 1"
+	check_route "172.16.101.1" "172.16.101.1 nhid 101 nexthop via inet6 ${lladdr} dev veth1 weight 1 nexthop via 172.16.1.2 dev veth1 weight 1"

 	run_cmd "$IP ro replace 172.16.101.1/32 via inet6 ${lladdr} dev veth1"
 	log_test $? 0 "IPv4 route with IPv6 gateway"
-	check_route "172.16.101.1" "172.16.101.1 via ${lladdr} dev veth1"
+	check_route "172.16.101.1" "172.16.101.1 via inet6 ${lladdr} dev veth1"

 	run_cmd "$IP ro replace 172.16.101.1/32 via inet6 2001:db8:50::1 dev veth1"
 	log_test $? 2 "IPv4 route with invalid IPv6 gateway"
···
 	log_test $? 0 "IPv4 route with device only nexthop"
 	check_route "172.16.101.1" "172.16.101.1 nhid 85 dev veth1"

-	run_cmd "$IP nexthop add id 122 group 21/85"
-	run_cmd "$IP ro replace 172.16.101.1/32 nhid 122"
+	run_cmd "$IP nexthop add id 123 group 21/85"
+	run_cmd "$IP ro replace 172.16.101.1/32 nhid 123"
 	log_test $? 0 "IPv4 multipath route with nexthop mix - dev only + gw"
-	check_route "172.16.101.1" "172.16.101.1 nhid 85 nexthop via 172.16.1.2 dev veth1 nexthop dev veth1"
+	check_route "172.16.101.1" "172.16.101.1 nhid 123 nexthop via 172.16.1.2 dev veth1 weight 1 nexthop dev veth1 weight 1"

 	#
 	# IPv4 with IPv6
···
 	run_cmd "$IP ro replace 172.16.101.1/32 nhid 101"
 	log_test $? 0 "IPv4 route with mixed v4-v6 multipath route"

-	check_route "172.16.101.1" "172.16.101.1 nhid 101 nexthop via ${lladdr} dev veth1 weight 1 nexthop via 172.16.1.2 dev veth1 weight 1"
+	check_route "172.16.101.1" "172.16.101.1 nhid 101 nexthop via inet6 ${lladdr} dev veth1 weight 1 nexthop via 172.16.1.2 dev veth1 weight 1"

 	run_cmd "ip netns exec me ping -c1 -w1 172.16.101.1"
 	log_test $? 0 "IPv6 nexthop with IPv4 route"
tools/testing/selftests/net/xfrm_policy.sh (+7)
···
 	#
 	# 10.0.0.0/24 and 10.0.1.0/24 nodes have been merged as 10.0.0.0/23.
 	ip -net $ns xfrm policy add src 10.1.0.0/24 dst 10.0.0.0/23 dir fwd priority 200 action block
+
+	# similar to above: add policies (with partially random address), with shrinking prefixes.
+	for p in 29 28 27;do
+	 for k in $(seq 1 32); do
+	  ip -net $ns xfrm policy add src 10.253.1.$((RANDOM%255))/$p dst 10.254.1.$((RANDOM%255))/$p dir fwd priority $((200+k)) action block 2>/dev/null
+	 done
+	done
 }

 do_esp_policy_get_check() {