Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Pull networking fixes from David Miller:

1) Revert CHECKSUM_COMPLETE optimization in pskb_trim_rcsum(), I can't
figure out why it breaks things.

2) Fix comparison in netfilter ipset's hash_netnet4_data_equal(), it
was basically doing "x == x", from Dave Jones.

3) Freescale FEC driver was DMA mapping the wrong number of bytes, from
Sebastian Siewior.

4) Blackhole and prohibit routes in ipv6 were not doing the right thing
because their ->input and ->output methods were not being assigned
correctly. Now they behave properly like their ipv4 counterparts.
From Kamala R.

5) Several drivers advertise the NETIF_F_FRAGLIST capability, but
really do not support this feature and will send garbage packets if
fed fraglist SKBs. From Eric Dumazet.

6) Fix a long-standing, user-triggerable BUG_ON over loopback in the RDS
protocol stack, from Venkat Venkatsubra.

7) Several not-so-common code paths can potentially try to invoke
packet scheduler actions that might be NULL without checking. Shore
things up by either 1) defining a method as mandatory and erroring
out on registration if that method is NULL, or 2) defining a method
as optional and having the registration function hook up a default
implementation when NULL is seen (a minimal sketch of this
registration pattern follows the list). From Jamal Hadi Salim.

8) Fix fragment detection in xen-netback driver, from Paul Durrant.

9) Kill dangling enter_memory_pressure method in cg_proto ops, from
Eric W Biederman.

10) SKBs that traverse namespaces should have their local_df cleared,
from Hannes Frederic Sowa.

11) IOCB file position is not being updated by macvtap_aio_read() and
tun_chr_aio_read(). From Zhi Yong Wu.

12) Don't free virtio_net netdev before releasing all of the NAPI
instances. From Andrey Vagin.

13) Procfs entry leak in xt_hashlimit, from Sergey Popovich.

14) IPv6 routes that are not cached routes should not count against the
garbage collection limits. We had this almost right, but were not
handling addrconf-generated routes properly. From Hannes Frederic
Sowa.

15) fib{4,6}_rule_suppress() have to consider potentially seeing NULL
route info when they are called, from Stefan Tomanek.

16) TUN and MACVTAP have had broken truncated-packet signalling for some
time; fix from Jason Wang.

17) Fix use-after-free in __udp4_lib_rcv(), from Eric Dumazet.

18) xen-netback does not interpret the NAPI budget properly for TX work,
fix from Paul Durrant.
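
As a rough illustration of the two registration strategies described in
item 7, here is a minimal, self-contained C sketch. It is not the actual
net/sched code; the structure and every identifier in it (my_ops,
my_register_ops, my_default_walk) are hypothetical.

    /* Illustrative sketch only -- not the real packet scheduler code.
     * Strategy 1: a mandatory method makes registration fail when it is NULL.
     * Strategy 2: an optional method gets a default implementation when NULL.
     */
    #include <errno.h>

    struct my_ops {
            int (*act)(void *ctx);   /* mandatory: must be provided */
            int (*walk)(void *ctx);  /* optional: may be NULL */
    };

    static int my_default_walk(void *ctx)
    {
            (void)ctx;
            return 0;                /* harmless default */
    }

    static int my_register_ops(struct my_ops *ops)
    {
            if (!ops || !ops->act)
                    return -EINVAL;              /* error on registration */
            if (!ops->walk)
                    ops->walk = my_default_walk; /* hook up a default */
            return 0;
    }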

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (132 commits)
igb: Fix for issue where values could be too high for udelay function.
i40e: fix null dereference
xen-netback: fix gso_prefix check
net: make neigh_priv_len in struct net_device 16bit instead of 8bit
drivers: net: cpsw: fix for cpsw crash when build as modules
xen-netback: napi: don't prematurely request a tx event
xen-netback: napi: fix abuse of budget
sch_tbf: use do_div() for 64-bit divide
udp: ipv4: must add synchronization in udp_sk_rx_dst_set()
net:fec: remove duplicate lines in comment about errata ERR006358
Revert "8390 : Replace ei_debug with msg_enable/NETIF_MSG_* feature"
8390 : Replace ei_debug with msg_enable/NETIF_MSG_* feature
xen-netback: make sure skb linear area covers checksum field
net: smc91x: Fix device tree based configuration so it's usable
udp: ipv4: fix potential use after free in udp_v4_early_demux()
macvtap: signal truncated packets
tun: unbreak truncated packet signalling
net: sched: htb: fix the calculation of quantum
net: sched: tbf: fix the calculation of max_size
micrel: add support for KSZ8041RNLI
...

+1552 -917
+1 -1
Documentation/devicetree/bindings/net/davinci_emac.txt
··· 4 4 for the davinci_emac interface contains. 5 5 6 6 Required properties: 7 - - compatible: "ti,davinci-dm6467-emac"; 7 + - compatible: "ti,davinci-dm6467-emac" or "ti,am3517-emac" 8 8 - reg: Offset and length of the register set for the device 9 9 - ti,davinci-ctrl-reg-offset: offset to control register 10 10 - ti,davinci-ctrl-mod-reg-offset: offset to control module register
+4
Documentation/devicetree/bindings/net/smsc-lan91c111.txt
··· 8 8 Optional properties: 9 9 - phy-device : phandle to Ethernet phy 10 10 - local-mac-address : Ethernet mac address to use 11 + - reg-io-width : Mask of sizes (in bytes) of the IO accesses that 12 + are supported on the device. Valid value for SMSC LAN91c111 are 13 + 1, 2 or 4. If it's omitted or invalid, the size would be 2 meaning 14 + 16-bit access only.
+10
Documentation/networking/packet_mmap.txt
··· 123 123 [shutdown] close() --------> destruction of the transmission socket and 124 124 deallocation of all associated resources. 125 125 126 + Socket creation and destruction is also straight forward, and is done 127 + the same way as in capturing described in the previous paragraph: 128 + 129 + int fd = socket(PF_PACKET, mode, 0); 130 + 131 + The protocol can optionally be 0 in case we only want to transmit 132 + via this socket, which avoids an expensive call to packet_rcv(). 133 + In this case, you also need to bind(2) the TX_RING with sll_protocol = 0 134 + set. Otherwise, htons(ETH_P_ALL) or any other protocol, for example. 135 + 126 136 Binding the socket to your network interface is mandatory (with zero copy) to 127 137 know the header size of frames used in the circular buffer. 128 138
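
To make the new paragraph about TX-only sockets concrete, here is a minimal
userspace sketch. It assumes SOCK_RAW as the socket mode and takes an
interface name such as "eth0" as a placeholder; PACKET_TX_RING setup and
error cleanup are omitted.

    /* Minimal sketch of the TX-only PF_PACKET setup described above:
     * protocol 0 at socket() time (so packet_rcv() is never involved),
     * then bind() with sll_protocol = 0 on the chosen interface.
     */
    #include <sys/socket.h>
    #include <linux/if_packet.h>
    #include <net/if.h>
    #include <string.h>

    static int open_tx_only_packet_socket(const char *ifname)
    {
            struct sockaddr_ll ll;
            int fd = socket(PF_PACKET, SOCK_RAW, 0);  /* TX only: protocol 0 */

            if (fd < 0)
                    return -1;

            memset(&ll, 0, sizeof(ll));
            ll.sll_family   = AF_PACKET;
            ll.sll_protocol = 0;                      /* keep the RX path off */
            ll.sll_ifindex  = if_nametoindex(ifname);

            if (bind(fd, (struct sockaddr *)&ll, sizeof(ll)) < 0)
                    return -1;
            return fd;
    }
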
-2
MAINTAINERS
··· 4466 4466 M: Carolyn Wyborny <carolyn.wyborny@intel.com> 4467 4467 M: Don Skidmore <donald.c.skidmore@intel.com> 4468 4468 M: Greg Rose <gregory.v.rose@intel.com> 4469 - M: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com> 4470 4469 M: Alex Duyck <alexander.h.duyck@intel.com> 4471 4470 M: John Ronciak <john.ronciak@intel.com> 4472 - M: Tushar Dave <tushar.n.dave@intel.com> 4473 4471 L: e1000-devel@lists.sourceforge.net 4474 4472 W: http://www.intel.com/support/feedback.htm 4475 4473 W: http://e1000.sourceforge.net/
+3 -3
drivers/net/bonding/bond_main.c
··· 4199 4199 (arp_ip_count < BOND_MAX_ARP_TARGETS) && arp_ip_target[i]; i++) { 4200 4200 /* not complete check, but should be good enough to 4201 4201 catch mistakes */ 4202 - __be32 ip = in_aton(arp_ip_target[i]); 4203 - if (!isdigit(arp_ip_target[i][0]) || ip == 0 || 4204 - ip == htonl(INADDR_BROADCAST)) { 4202 + __be32 ip; 4203 + if (!in4_pton(arp_ip_target[i], -1, (u8 *)&ip, -1, NULL) || 4204 + IS_IP_TARGET_UNUSABLE_ADDRESS(ip)) { 4205 4205 pr_warning("Warning: bad arp_ip_target module parameter (%s), ARP monitoring will not be performed\n", 4206 4206 arp_ip_target[i]); 4207 4207 arp_interval = 0;
+2 -2
drivers/net/bonding/bond_sysfs.c
··· 1635 1635 char *buf) 1636 1636 { 1637 1637 struct bonding *bond = to_bond(d); 1638 - int packets_per_slave = bond->params.packets_per_slave; 1638 + unsigned int packets_per_slave = bond->params.packets_per_slave; 1639 1639 1640 1640 if (packets_per_slave > 1) 1641 1641 packets_per_slave = reciprocal_value(packets_per_slave); 1642 1642 1643 - return sprintf(buf, "%d\n", packets_per_slave); 1643 + return sprintf(buf, "%u\n", packets_per_slave); 1644 1644 } 1645 1645 1646 1646 static ssize_t bonding_store_packets_per_slave(struct device *d,
+3 -2
drivers/net/ethernet/allwinner/sun4i-emac.c
··· 717 717 if (netif_msg_ifup(db)) 718 718 dev_dbg(db->dev, "enabling %s\n", dev->name); 719 719 720 - if (devm_request_irq(db->dev, dev->irq, &emac_interrupt, 721 - 0, dev->name, dev)) 720 + if (request_irq(dev->irq, &emac_interrupt, 0, dev->name, dev)) 722 721 return -EAGAIN; 723 722 724 723 /* Initialize EMAC board */ ··· 772 773 emac_mdio_remove(ndev); 773 774 774 775 emac_shutdown(ndev); 776 + 777 + free_irq(ndev->irq, ndev); 775 778 776 779 return 0; 777 780 }
+5
drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
··· 3114 3114 { 3115 3115 struct bnx2x *bp = netdev_priv(pci_get_drvdata(dev)); 3116 3116 3117 + if (!IS_SRIOV(bp)) { 3118 + BNX2X_ERR("failed to configure SR-IOV since vfdb was not allocated. Check dmesg for errors in probe stage\n"); 3119 + return -EINVAL; 3120 + } 3121 + 3117 3122 DP(BNX2X_MSG_IOV, "bnx2x_sriov_configure called with %d, BNX2X_NR_VIRTFN(bp) was %d\n", 3118 3123 num_vfs_param, BNX2X_NR_VIRTFN(bp)); 3119 3124
+22 -7
drivers/net/ethernet/broadcom/tg3.c
··· 8932 8932 void (*write_op)(struct tg3 *, u32, u32); 8933 8933 int i, err; 8934 8934 8935 + if (!pci_device_is_present(tp->pdev)) 8936 + return -ENODEV; 8937 + 8935 8938 tg3_nvram_lock(tp); 8936 8939 8937 8940 tg3_ape_lock(tp, TG3_APE_LOCK_GRC); ··· 11584 11581 memset(&tp->net_stats_prev, 0, sizeof(tp->net_stats_prev)); 11585 11582 memset(&tp->estats_prev, 0, sizeof(tp->estats_prev)); 11586 11583 11587 - tg3_power_down_prepare(tp); 11584 + if (pci_device_is_present(tp->pdev)) { 11585 + tg3_power_down_prepare(tp); 11588 11586 11589 - tg3_carrier_off(tp); 11590 - 11587 + tg3_carrier_off(tp); 11588 + } 11591 11589 return 0; 11592 11590 } 11593 11591 ··· 16503 16499 /* Clear this out for sanity. */ 16504 16500 tw32(TG3PCI_MEM_WIN_BASE_ADDR, 0); 16505 16501 16502 + /* Clear TG3PCI_REG_BASE_ADDR to prevent hangs. */ 16503 + tw32(TG3PCI_REG_BASE_ADDR, 0); 16504 + 16506 16505 pci_read_config_dword(tp->pdev, TG3PCI_PCISTATE, 16507 16506 &pci_state_reg); 16508 16507 if ((pci_state_reg & PCISTATE_CONV_PCI_MODE) == 0 && ··· 17733 17726 struct pci_dev *pdev = to_pci_dev(device); 17734 17727 struct net_device *dev = pci_get_drvdata(pdev); 17735 17728 struct tg3 *tp = netdev_priv(dev); 17736 - int err; 17729 + int err = 0; 17730 + 17731 + rtnl_lock(); 17737 17732 17738 17733 if (!netif_running(dev)) 17739 - return 0; 17734 + goto unlock; 17740 17735 17741 17736 tg3_reset_task_cancel(tp); 17742 17737 tg3_phy_stop(tp); ··· 17780 17771 tg3_phy_start(tp); 17781 17772 } 17782 17773 17774 + unlock: 17775 + rtnl_unlock(); 17783 17776 return err; 17784 17777 } 17785 17778 ··· 17790 17779 struct pci_dev *pdev = to_pci_dev(device); 17791 17780 struct net_device *dev = pci_get_drvdata(pdev); 17792 17781 struct tg3 *tp = netdev_priv(dev); 17793 - int err; 17782 + int err = 0; 17783 + 17784 + rtnl_lock(); 17794 17785 17795 17786 if (!netif_running(dev)) 17796 - return 0; 17787 + goto unlock; 17797 17788 17798 17789 netif_device_attach(dev); 17799 17790 ··· 17819 17806 if (!err) 17820 17807 tg3_phy_start(tp); 17821 17808 17809 + unlock: 17810 + rtnl_unlock(); 17822 17811 return err; 17823 17812 } 17824 17813 #endif /* CONFIG_PM_SLEEP */
+53 -29
drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
··· 49 49 #include <asm/io.h> 50 50 #include "cxgb4_uld.h" 51 51 52 - #define FW_VERSION_MAJOR 1 53 - #define FW_VERSION_MINOR 4 54 - #define FW_VERSION_MICRO 0 52 + #define T4FW_VERSION_MAJOR 0x01 53 + #define T4FW_VERSION_MINOR 0x06 54 + #define T4FW_VERSION_MICRO 0x18 55 + #define T4FW_VERSION_BUILD 0x00 55 56 56 - #define FW_VERSION_MAJOR_T5 0 57 - #define FW_VERSION_MINOR_T5 0 58 - #define FW_VERSION_MICRO_T5 0 57 + #define T5FW_VERSION_MAJOR 0x01 58 + #define T5FW_VERSION_MINOR 0x08 59 + #define T5FW_VERSION_MICRO 0x1C 60 + #define T5FW_VERSION_BUILD 0x00 59 61 60 62 #define CH_WARN(adap, fmt, ...) dev_warn(adap->pdev_dev, fmt, ## __VA_ARGS__) 61 63 ··· 242 240 unsigned char width; 243 241 }; 244 242 243 + #define CHELSIO_CHIP_CODE(version, revision) (((version) << 4) | (revision)) 244 + #define CHELSIO_CHIP_FPGA 0x100 245 + #define CHELSIO_CHIP_VERSION(code) (((code) >> 4) & 0xf) 246 + #define CHELSIO_CHIP_RELEASE(code) ((code) & 0xf) 247 + 248 + #define CHELSIO_T4 0x4 249 + #define CHELSIO_T5 0x5 250 + 251 + enum chip_type { 252 + T4_A1 = CHELSIO_CHIP_CODE(CHELSIO_T4, 1), 253 + T4_A2 = CHELSIO_CHIP_CODE(CHELSIO_T4, 2), 254 + T4_FIRST_REV = T4_A1, 255 + T4_LAST_REV = T4_A2, 256 + 257 + T5_A0 = CHELSIO_CHIP_CODE(CHELSIO_T5, 0), 258 + T5_A1 = CHELSIO_CHIP_CODE(CHELSIO_T5, 1), 259 + T5_FIRST_REV = T5_A0, 260 + T5_LAST_REV = T5_A1, 261 + }; 262 + 245 263 struct adapter_params { 246 264 struct tp_params tp; 247 265 struct vpd_params vpd; ··· 281 259 282 260 unsigned char nports; /* # of ethernet ports */ 283 261 unsigned char portvec; 284 - unsigned char rev; /* chip revision */ 262 + enum chip_type chip; /* chip code */ 285 263 unsigned char offload; 286 264 287 265 unsigned char bypass; 288 266 289 267 unsigned int ofldq_wr_cred; 290 268 }; 269 + 270 + #include "t4fw_api.h" 271 + 272 + #define FW_VERSION(chip) ( \ 273 + FW_HDR_FW_VER_MAJOR_GET(chip##FW_VERSION_MAJOR) | \ 274 + FW_HDR_FW_VER_MINOR_GET(chip##FW_VERSION_MINOR) | \ 275 + FW_HDR_FW_VER_MICRO_GET(chip##FW_VERSION_MICRO) | \ 276 + FW_HDR_FW_VER_BUILD_GET(chip##FW_VERSION_BUILD)) 277 + #define FW_INTFVER(chip, intf) (FW_HDR_INTFVER_##intf) 278 + 279 + struct fw_info { 280 + u8 chip; 281 + char *fs_name; 282 + char *fw_mod_name; 283 + struct fw_hdr fw_hdr; 284 + }; 285 + 291 286 292 287 struct trace_params { 293 288 u32 data[TRACE_LEN / 4]; ··· 551 512 552 513 struct l2t_data; 553 514 554 - #define CHELSIO_CHIP_CODE(version, revision) (((version) << 4) | (revision)) 555 - #define CHELSIO_CHIP_VERSION(code) ((code) >> 4) 556 - #define CHELSIO_CHIP_RELEASE(code) ((code) & 0xf) 557 - 558 - #define CHELSIO_T4 0x4 559 - #define CHELSIO_T5 0x5 560 - 561 - enum chip_type { 562 - T4_A1 = CHELSIO_CHIP_CODE(CHELSIO_T4, 0), 563 - T4_A2 = CHELSIO_CHIP_CODE(CHELSIO_T4, 1), 564 - T4_A3 = CHELSIO_CHIP_CODE(CHELSIO_T4, 2), 565 - T4_FIRST_REV = T4_A1, 566 - T4_LAST_REV = T4_A3, 567 - 568 - T5_A1 = CHELSIO_CHIP_CODE(CHELSIO_T5, 0), 569 - T5_FIRST_REV = T5_A1, 570 - T5_LAST_REV = T5_A1, 571 - }; 572 - 573 515 #ifdef CONFIG_PCI_IOV 574 516 575 517 /* T4 supports SRIOV on PF0-3 and T5 on PF0-7. 
However, the Serial ··· 735 715 736 716 static inline int is_t5(enum chip_type chip) 737 717 { 738 - return (chip >= T5_FIRST_REV && chip <= T5_LAST_REV); 718 + return CHELSIO_CHIP_VERSION(chip) == CHELSIO_T5; 739 719 } 740 720 741 721 static inline int is_t4(enum chip_type chip) 742 722 { 743 - return (chip >= T4_FIRST_REV && chip <= T4_LAST_REV); 723 + return CHELSIO_CHIP_VERSION(chip) == CHELSIO_T4; 744 724 } 745 725 746 726 static inline u32 t4_read_reg(struct adapter *adap, u32 reg_addr) ··· 920 900 int t4_load_fw(struct adapter *adapter, const u8 *fw_data, unsigned int size); 921 901 unsigned int t4_flash_cfg_addr(struct adapter *adapter); 922 902 int t4_load_cfg(struct adapter *adapter, const u8 *cfg_data, unsigned int size); 923 - int t4_check_fw_version(struct adapter *adapter); 903 + int t4_get_fw_version(struct adapter *adapter, u32 *vers); 904 + int t4_get_tp_version(struct adapter *adapter, u32 *vers); 905 + int t4_prep_fw(struct adapter *adap, struct fw_info *fw_info, 906 + const u8 *fw_data, unsigned int fw_size, 907 + struct fw_hdr *card_fw, enum dev_state state, int *reset); 924 908 int t4_prep_adapter(struct adapter *adapter); 925 909 int t4_port_init(struct adapter *adap, int mbox, int pf, int vf); 926 910 void t4_fatal_err(struct adapter *adapter);
+142 -136
drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
··· 276 276 { 0, } 277 277 }; 278 278 279 - #define FW_FNAME "cxgb4/t4fw.bin" 279 + #define FW4_FNAME "cxgb4/t4fw.bin" 280 280 #define FW5_FNAME "cxgb4/t5fw.bin" 281 - #define FW_CFNAME "cxgb4/t4-config.txt" 281 + #define FW4_CFNAME "cxgb4/t4-config.txt" 282 282 #define FW5_CFNAME "cxgb4/t5-config.txt" 283 283 284 284 MODULE_DESCRIPTION(DRV_DESC); ··· 286 286 MODULE_LICENSE("Dual BSD/GPL"); 287 287 MODULE_VERSION(DRV_VERSION); 288 288 MODULE_DEVICE_TABLE(pci, cxgb4_pci_tbl); 289 - MODULE_FIRMWARE(FW_FNAME); 289 + MODULE_FIRMWARE(FW4_FNAME); 290 290 MODULE_FIRMWARE(FW5_FNAME); 291 291 292 292 /* ··· 1071 1071 } 1072 1072 1073 1073 /* 1074 - * Returns 0 if new FW was successfully loaded, a positive errno if a load was 1075 - * started but failed, and a negative errno if flash load couldn't start. 1076 - */ 1077 - static int upgrade_fw(struct adapter *adap) 1078 - { 1079 - int ret; 1080 - u32 vers, exp_major; 1081 - const struct fw_hdr *hdr; 1082 - const struct firmware *fw; 1083 - struct device *dev = adap->pdev_dev; 1084 - char *fw_file_name; 1085 - 1086 - switch (CHELSIO_CHIP_VERSION(adap->chip)) { 1087 - case CHELSIO_T4: 1088 - fw_file_name = FW_FNAME; 1089 - exp_major = FW_VERSION_MAJOR; 1090 - break; 1091 - case CHELSIO_T5: 1092 - fw_file_name = FW5_FNAME; 1093 - exp_major = FW_VERSION_MAJOR_T5; 1094 - break; 1095 - default: 1096 - dev_err(dev, "Unsupported chip type, %x\n", adap->chip); 1097 - return -EINVAL; 1098 - } 1099 - 1100 - ret = request_firmware(&fw, fw_file_name, dev); 1101 - if (ret < 0) { 1102 - dev_err(dev, "unable to load firmware image %s, error %d\n", 1103 - fw_file_name, ret); 1104 - return ret; 1105 - } 1106 - 1107 - hdr = (const struct fw_hdr *)fw->data; 1108 - vers = ntohl(hdr->fw_ver); 1109 - if (FW_HDR_FW_VER_MAJOR_GET(vers) != exp_major) { 1110 - ret = -EINVAL; /* wrong major version, won't do */ 1111 - goto out; 1112 - } 1113 - 1114 - /* 1115 - * If the flash FW is unusable or we found something newer, load it. 1116 - */ 1117 - if (FW_HDR_FW_VER_MAJOR_GET(adap->params.fw_vers) != exp_major || 1118 - vers > adap->params.fw_vers) { 1119 - dev_info(dev, "upgrading firmware ...\n"); 1120 - ret = t4_fw_upgrade(adap, adap->mbox, fw->data, fw->size, 1121 - /*force=*/false); 1122 - if (!ret) 1123 - dev_info(dev, 1124 - "firmware upgraded to version %pI4 from %s\n", 1125 - &hdr->fw_ver, fw_file_name); 1126 - else 1127 - dev_err(dev, "firmware upgrade failed! err=%d\n", -ret); 1128 - } else { 1129 - /* 1130 - * Tell our caller that we didn't upgrade the firmware. 1131 - */ 1132 - ret = -EINVAL; 1133 - } 1134 - 1135 - out: release_firmware(fw); 1136 - return ret; 1137 - } 1138 - 1139 - /* 1140 1074 * Allocate a chunk of memory using kmalloc or, if that fails, vmalloc. 1141 1075 * The allocated memory is cleared. 
1142 1076 */ ··· 1349 1415 static int get_regs_len(struct net_device *dev) 1350 1416 { 1351 1417 struct adapter *adap = netdev2adap(dev); 1352 - if (is_t4(adap->chip)) 1418 + if (is_t4(adap->params.chip)) 1353 1419 return T4_REGMAP_SIZE; 1354 1420 else 1355 1421 return T5_REGMAP_SIZE; ··· 1433 1499 data += sizeof(struct port_stats) / sizeof(u64); 1434 1500 collect_sge_port_stats(adapter, pi, (struct queue_port_stats *)data); 1435 1501 data += sizeof(struct queue_port_stats) / sizeof(u64); 1436 - if (!is_t4(adapter->chip)) { 1502 + if (!is_t4(adapter->params.chip)) { 1437 1503 t4_write_reg(adapter, SGE_STAT_CFG, STATSOURCE_T5(7)); 1438 1504 val1 = t4_read_reg(adapter, SGE_STAT_TOTAL); 1439 1505 val2 = t4_read_reg(adapter, SGE_STAT_MATCH); ··· 1455 1521 */ 1456 1522 static inline unsigned int mk_adap_vers(const struct adapter *ap) 1457 1523 { 1458 - return CHELSIO_CHIP_VERSION(ap->chip) | 1459 - (CHELSIO_CHIP_RELEASE(ap->chip) << 10) | (1 << 16); 1524 + return CHELSIO_CHIP_VERSION(ap->params.chip) | 1525 + (CHELSIO_CHIP_RELEASE(ap->params.chip) << 10) | (1 << 16); 1460 1526 } 1461 1527 1462 1528 static void reg_block_dump(struct adapter *ap, void *buf, unsigned int start, ··· 2123 2189 static const unsigned int *reg_ranges; 2124 2190 int arr_size = 0, buf_size = 0; 2125 2191 2126 - if (is_t4(ap->chip)) { 2192 + if (is_t4(ap->params.chip)) { 2127 2193 reg_ranges = &t4_reg_ranges[0]; 2128 2194 arr_size = ARRAY_SIZE(t4_reg_ranges); 2129 2195 buf_size = T4_REGMAP_SIZE; ··· 2901 2967 size = t4_read_reg(adap, MA_EDRAM1_BAR); 2902 2968 add_debugfs_mem(adap, "edc1", MEM_EDC1, EDRAM_SIZE_GET(size)); 2903 2969 } 2904 - if (is_t4(adap->chip)) { 2970 + if (is_t4(adap->params.chip)) { 2905 2971 size = t4_read_reg(adap, MA_EXT_MEMORY_BAR); 2906 2972 if (i & EXT_MEM_ENABLE) 2907 2973 add_debugfs_mem(adap, "mc", MEM_MC, ··· 3353 3419 3354 3420 v1 = t4_read_reg(adap, A_SGE_DBFIFO_STATUS); 3355 3421 v2 = t4_read_reg(adap, SGE_DBFIFO_STATUS2); 3356 - if (is_t4(adap->chip)) { 3422 + if (is_t4(adap->params.chip)) { 3357 3423 lp_count = G_LP_COUNT(v1); 3358 3424 hp_count = G_HP_COUNT(v1); 3359 3425 } else { ··· 3522 3588 do { 3523 3589 v1 = t4_read_reg(adap, A_SGE_DBFIFO_STATUS); 3524 3590 v2 = t4_read_reg(adap, SGE_DBFIFO_STATUS2); 3525 - if (is_t4(adap->chip)) { 3591 + if (is_t4(adap->params.chip)) { 3526 3592 lp_count = G_LP_COUNT(v1); 3527 3593 hp_count = G_HP_COUNT(v1); 3528 3594 } else { ··· 3642 3708 3643 3709 adap = container_of(work, struct adapter, db_drop_task); 3644 3710 3645 - if (is_t4(adap->chip)) { 3711 + if (is_t4(adap->params.chip)) { 3646 3712 disable_dbs(adap); 3647 3713 notify_rdma_uld(adap, CXGB4_CONTROL_DB_DROP); 3648 3714 drain_db_fifo(adap, 1); ··· 3687 3753 3688 3754 void t4_db_full(struct adapter *adap) 3689 3755 { 3690 - if (is_t4(adap->chip)) { 3756 + if (is_t4(adap->params.chip)) { 3691 3757 t4_set_reg_field(adap, SGE_INT_ENABLE3, 3692 3758 DBFIFO_HP_INT | DBFIFO_LP_INT, 0); 3693 3759 queue_work(workq, &adap->db_full_task); ··· 3696 3762 3697 3763 void t4_db_dropped(struct adapter *adap) 3698 3764 { 3699 - if (is_t4(adap->chip)) 3765 + if (is_t4(adap->params.chip)) 3700 3766 queue_work(workq, &adap->db_drop_task); 3701 3767 } 3702 3768 ··· 3723 3789 lli.nchan = adap->params.nports; 3724 3790 lli.nports = adap->params.nports; 3725 3791 lli.wr_cred = adap->params.ofldq_wr_cred; 3726 - lli.adapter_type = adap->params.rev; 3792 + lli.adapter_type = adap->params.chip; 3727 3793 lli.iscsi_iolen = MAXRXDATA_GET(t4_read_reg(adap, TP_PARA_REG2)); 3728 3794 lli.udb_density = 1 << 
QUEUESPERPAGEPF0_GET( 3729 3795 t4_read_reg(adap, SGE_EGRESS_QUEUES_PER_PAGE_PF) >> ··· 4417 4483 u32 bar0, mem_win0_base, mem_win1_base, mem_win2_base; 4418 4484 4419 4485 bar0 = pci_resource_start(adap->pdev, 0); /* truncation intentional */ 4420 - if (is_t4(adap->chip)) { 4486 + if (is_t4(adap->params.chip)) { 4421 4487 mem_win0_base = bar0 + MEMWIN0_BASE; 4422 4488 mem_win1_base = bar0 + MEMWIN1_BASE; 4423 4489 mem_win2_base = bar0 + MEMWIN2_BASE; ··· 4602 4668 const struct firmware *cf; 4603 4669 unsigned long mtype = 0, maddr = 0; 4604 4670 u32 finiver, finicsum, cfcsum; 4605 - int ret, using_flash; 4671 + int ret; 4672 + int config_issued = 0; 4606 4673 char *fw_config_file, fw_config_file_path[256]; 4674 + char *config_name = NULL; 4607 4675 4608 4676 /* 4609 4677 * Reset device if necessary. ··· 4622 4686 * then use that. Otherwise, use the configuration file stored 4623 4687 * in the adapter flash ... 4624 4688 */ 4625 - switch (CHELSIO_CHIP_VERSION(adapter->chip)) { 4689 + switch (CHELSIO_CHIP_VERSION(adapter->params.chip)) { 4626 4690 case CHELSIO_T4: 4627 - fw_config_file = FW_CFNAME; 4691 + fw_config_file = FW4_CFNAME; 4628 4692 break; 4629 4693 case CHELSIO_T5: 4630 4694 fw_config_file = FW5_CFNAME; ··· 4638 4702 4639 4703 ret = request_firmware(&cf, fw_config_file, adapter->pdev_dev); 4640 4704 if (ret < 0) { 4641 - using_flash = 1; 4705 + config_name = "On FLASH"; 4642 4706 mtype = FW_MEMTYPE_CF_FLASH; 4643 4707 maddr = t4_flash_cfg_addr(adapter); 4644 4708 } else { 4645 4709 u32 params[7], val[7]; 4646 4710 4647 - using_flash = 0; 4711 + sprintf(fw_config_file_path, 4712 + "/lib/firmware/%s", fw_config_file); 4713 + config_name = fw_config_file_path; 4714 + 4648 4715 if (cf->size >= FLASH_CFG_MAX_SIZE) 4649 4716 ret = -ENOMEM; 4650 4717 else { ··· 4715 4776 FW_LEN16(caps_cmd)); 4716 4777 ret = t4_wr_mbox(adapter, adapter->mbox, &caps_cmd, sizeof(caps_cmd), 4717 4778 &caps_cmd); 4779 + 4780 + /* If the CAPS_CONFIG failed with an ENOENT (for a Firmware 4781 + * Configuration File in FLASH), our last gasp effort is to use the 4782 + * Firmware Configuration File which is embedded in the firmware. A 4783 + * very few early versions of the firmware didn't have one embedded 4784 + * but we can ignore those. 4785 + */ 4786 + if (ret == -ENOENT) { 4787 + memset(&caps_cmd, 0, sizeof(caps_cmd)); 4788 + caps_cmd.op_to_write = 4789 + htonl(FW_CMD_OP(FW_CAPS_CONFIG_CMD) | 4790 + FW_CMD_REQUEST | 4791 + FW_CMD_READ); 4792 + caps_cmd.cfvalid_to_len16 = htonl(FW_LEN16(caps_cmd)); 4793 + ret = t4_wr_mbox(adapter, adapter->mbox, &caps_cmd, 4794 + sizeof(caps_cmd), &caps_cmd); 4795 + config_name = "Firmware Default"; 4796 + } 4797 + 4798 + config_issued = 1; 4718 4799 if (ret < 0) 4719 4800 goto bye; 4720 4801 ··· 4775 4816 if (ret < 0) 4776 4817 goto bye; 4777 4818 4778 - sprintf(fw_config_file_path, "/lib/firmware/%s", fw_config_file); 4779 4819 /* 4780 4820 * Return successfully and note that we're operating with parameters 4781 4821 * not supplied by the driver, rather than from hard-wired ··· 4782 4824 */ 4783 4825 adapter->flags |= USING_SOFT_PARAMS; 4784 4826 dev_info(adapter->pdev_dev, "Successfully configured using Firmware "\ 4785 - "Configuration File %s, version %#x, computed checksum %#x\n", 4786 - (using_flash 4787 - ? 
"in device FLASH" 4788 - : fw_config_file_path), 4789 - finiver, cfcsum); 4827 + "Configuration File \"%s\", version %#x, computed checksum %#x\n", 4828 + config_name, finiver, cfcsum); 4790 4829 return 0; 4791 4830 4792 4831 /* ··· 4792 4837 * want to issue a warning since this is fairly common.) 4793 4838 */ 4794 4839 bye: 4795 - if (ret != -ENOENT) 4796 - dev_warn(adapter->pdev_dev, "Configuration file error %d\n", 4797 - -ret); 4840 + if (config_issued && ret != -ENOENT) 4841 + dev_warn(adapter->pdev_dev, "\"%s\" configuration file error %d\n", 4842 + config_name, -ret); 4798 4843 return ret; 4799 4844 } 4800 4845 ··· 5041 5086 return ret; 5042 5087 } 5043 5088 5089 + static struct fw_info fw_info_array[] = { 5090 + { 5091 + .chip = CHELSIO_T4, 5092 + .fs_name = FW4_CFNAME, 5093 + .fw_mod_name = FW4_FNAME, 5094 + .fw_hdr = { 5095 + .chip = FW_HDR_CHIP_T4, 5096 + .fw_ver = __cpu_to_be32(FW_VERSION(T4)), 5097 + .intfver_nic = FW_INTFVER(T4, NIC), 5098 + .intfver_vnic = FW_INTFVER(T4, VNIC), 5099 + .intfver_ri = FW_INTFVER(T4, RI), 5100 + .intfver_iscsi = FW_INTFVER(T4, ISCSI), 5101 + .intfver_fcoe = FW_INTFVER(T4, FCOE), 5102 + }, 5103 + }, { 5104 + .chip = CHELSIO_T5, 5105 + .fs_name = FW5_CFNAME, 5106 + .fw_mod_name = FW5_FNAME, 5107 + .fw_hdr = { 5108 + .chip = FW_HDR_CHIP_T5, 5109 + .fw_ver = __cpu_to_be32(FW_VERSION(T5)), 5110 + .intfver_nic = FW_INTFVER(T5, NIC), 5111 + .intfver_vnic = FW_INTFVER(T5, VNIC), 5112 + .intfver_ri = FW_INTFVER(T5, RI), 5113 + .intfver_iscsi = FW_INTFVER(T5, ISCSI), 5114 + .intfver_fcoe = FW_INTFVER(T5, FCOE), 5115 + }, 5116 + } 5117 + }; 5118 + 5119 + static struct fw_info *find_fw_info(int chip) 5120 + { 5121 + int i; 5122 + 5123 + for (i = 0; i < ARRAY_SIZE(fw_info_array); i++) { 5124 + if (fw_info_array[i].chip == chip) 5125 + return &fw_info_array[i]; 5126 + } 5127 + return NULL; 5128 + } 5129 + 5044 5130 /* 5045 5131 * Phase 0 of initialization: contact FW, obtain config, perform basic init. 5046 5132 */ ··· 5119 5123 * later reporting and B. to warn if the currently loaded firmware 5120 5124 * is excessively mismatched relative to the driver.) 5121 5125 */ 5122 - ret = t4_check_fw_version(adap); 5123 - 5124 - /* The error code -EFAULT is returned by t4_check_fw_version() if 5125 - * firmware on adapter < supported firmware. If firmware on adapter 5126 - * is too old (not supported by driver) and we're the MASTER_PF set 5127 - * adapter state to DEV_STATE_UNINIT to force firmware upgrade 5128 - * and reinitialization. 5129 - */ 5130 - if ((adap->flags & MASTER_PF) && ret == -EFAULT) 5131 - state = DEV_STATE_UNINIT; 5126 + t4_get_fw_version(adap, &adap->params.fw_vers); 5127 + t4_get_tp_version(adap, &adap->params.tp_vers); 5132 5128 if ((adap->flags & MASTER_PF) && state != DEV_STATE_INIT) { 5133 - if (ret == -EINVAL || ret == -EFAULT || ret > 0) { 5134 - if (upgrade_fw(adap) >= 0) { 5135 - /* 5136 - * Note that the chip was reset as part of the 5137 - * firmware upgrade so we don't reset it again 5138 - * below and grab the new firmware version. 5139 - */ 5140 - reset = 0; 5141 - ret = t4_check_fw_version(adap); 5142 - } else 5143 - if (ret == -EFAULT) { 5144 - /* 5145 - * Firmware is old but still might 5146 - * work if we force reinitialization 5147 - * of the adapter. Ignoring FW upgrade 5148 - * failure. 
5149 - */ 5150 - dev_warn(adap->pdev_dev, 5151 - "Ignoring firmware upgrade " 5152 - "failure, and forcing driver " 5153 - "to reinitialize the " 5154 - "adapter.\n"); 5155 - ret = 0; 5156 - } 5129 + struct fw_info *fw_info; 5130 + struct fw_hdr *card_fw; 5131 + const struct firmware *fw; 5132 + const u8 *fw_data = NULL; 5133 + unsigned int fw_size = 0; 5134 + 5135 + /* This is the firmware whose headers the driver was compiled 5136 + * against 5137 + */ 5138 + fw_info = find_fw_info(CHELSIO_CHIP_VERSION(adap->params.chip)); 5139 + if (fw_info == NULL) { 5140 + dev_err(adap->pdev_dev, 5141 + "unable to get firmware info for chip %d.\n", 5142 + CHELSIO_CHIP_VERSION(adap->params.chip)); 5143 + return -EINVAL; 5157 5144 } 5145 + 5146 + /* allocate memory to read the header of the firmware on the 5147 + * card 5148 + */ 5149 + card_fw = t4_alloc_mem(sizeof(*card_fw)); 5150 + 5151 + /* Get FW from from /lib/firmware/ */ 5152 + ret = request_firmware(&fw, fw_info->fw_mod_name, 5153 + adap->pdev_dev); 5154 + if (ret < 0) { 5155 + dev_err(adap->pdev_dev, 5156 + "unable to load firmware image %s, error %d\n", 5157 + fw_info->fw_mod_name, ret); 5158 + } else { 5159 + fw_data = fw->data; 5160 + fw_size = fw->size; 5161 + } 5162 + 5163 + /* upgrade FW logic */ 5164 + ret = t4_prep_fw(adap, fw_info, fw_data, fw_size, card_fw, 5165 + state, &reset); 5166 + 5167 + /* Cleaning up */ 5168 + if (fw != NULL) 5169 + release_firmware(fw); 5170 + t4_free_mem(card_fw); 5171 + 5158 5172 if (ret < 0) 5159 - return ret; 5173 + goto bye; 5160 5174 } 5161 5175 5162 5176 /* ··· 5251 5245 if (ret == -ENOENT) { 5252 5246 dev_info(adap->pdev_dev, 5253 5247 "No Configuration File present " 5254 - "on adapter. Using hard-wired " 5248 + "on adapter. Using hard-wired " 5255 5249 "configuration parameters.\n"); 5256 5250 ret = adap_init0_no_config(adap, reset); 5257 5251 } ··· 5793 5787 5794 5788 netdev_info(dev, "Chelsio %s rev %d %s %sNIC PCIe x%d%s%s\n", 5795 5789 adap->params.vpd.id, 5796 - CHELSIO_CHIP_RELEASE(adap->params.rev), buf, 5790 + CHELSIO_CHIP_RELEASE(adap->params.chip), buf, 5797 5791 is_offload(adap) ? "R" : "", adap->params.pci.width, spd, 5798 5792 (adap->flags & USING_MSIX) ? " MSI-X" : 5799 5793 (adap->flags & USING_MSI) ? " MSI" : ""); ··· 5916 5910 if (err) 5917 5911 goto out_unmap_bar0; 5918 5912 5919 - if (!is_t4(adapter->chip)) { 5913 + if (!is_t4(adapter->params.chip)) { 5920 5914 s_qpp = QUEUESPERPAGEPF1 * adapter->fn; 5921 5915 qpp = 1 << QUEUESPERPAGEPF0_GET(t4_read_reg(adapter, 5922 5916 SGE_EGRESS_QUEUES_PER_PAGE_PF) >> s_qpp); ··· 6070 6064 out_free_dev: 6071 6065 free_some_resources(adapter); 6072 6066 out_unmap_bar: 6073 - if (!is_t4(adapter->chip)) 6067 + if (!is_t4(adapter->params.chip)) 6074 6068 iounmap(adapter->bar2); 6075 6069 out_unmap_bar0: 6076 6070 iounmap(adapter->regs); ··· 6122 6116 6123 6117 free_some_resources(adapter); 6124 6118 iounmap(adapter->regs); 6125 - if (!is_t4(adapter->chip)) 6119 + if (!is_t4(adapter->params.chip)) 6126 6120 iounmap(adapter->bar2); 6127 6121 kfree(adapter); 6128 6122 pci_disable_pcie_error_reporting(pdev);
+6 -6
drivers/net/ethernet/chelsio/cxgb4/sge.c
··· 509 509 u32 val; 510 510 if (q->pend_cred >= 8) { 511 511 val = PIDX(q->pend_cred / 8); 512 - if (!is_t4(adap->chip)) 512 + if (!is_t4(adap->params.chip)) 513 513 val |= DBTYPE(1); 514 514 wmb(); 515 515 t4_write_reg(adap, MYPF_REG(SGE_PF_KDOORBELL), DBPRIO(1) | ··· 847 847 wmb(); /* write descriptors before telling HW */ 848 848 spin_lock(&q->db_lock); 849 849 if (!q->db_disabled) { 850 - if (is_t4(adap->chip)) { 850 + if (is_t4(adap->params.chip)) { 851 851 t4_write_reg(adap, MYPF_REG(SGE_PF_KDOORBELL), 852 852 QID(q->cntxt_id) | PIDX(n)); 853 853 } else { ··· 1596 1596 return 0; 1597 1597 } 1598 1598 1599 - if (is_t4(adap->chip)) 1599 + if (is_t4(adap->params.chip)) 1600 1600 __skb_pull(skb, sizeof(struct cpl_trace_pkt)); 1601 1601 else 1602 1602 __skb_pull(skb, sizeof(struct cpl_t5_trace_pkt)); ··· 1661 1661 const struct cpl_rx_pkt *pkt; 1662 1662 struct sge_eth_rxq *rxq = container_of(q, struct sge_eth_rxq, rspq); 1663 1663 struct sge *s = &q->adap->sge; 1664 - int cpl_trace_pkt = is_t4(q->adap->chip) ? 1664 + int cpl_trace_pkt = is_t4(q->adap->params.chip) ? 1665 1665 CPL_TRACE_PKT : CPL_TRACE_PKT_T5; 1666 1666 1667 1667 if (unlikely(*(u8 *)rsp == cpl_trace_pkt)) ··· 2182 2182 static void init_txq(struct adapter *adap, struct sge_txq *q, unsigned int id) 2183 2183 { 2184 2184 q->cntxt_id = id; 2185 - if (!is_t4(adap->chip)) { 2185 + if (!is_t4(adap->params.chip)) { 2186 2186 unsigned int s_qpp; 2187 2187 unsigned short udb_density; 2188 2188 unsigned long qpshift; ··· 2641 2641 * Set up to drop DOORBELL writes when the DOORBELL FIFO overflows 2642 2642 * and generate an interrupt when this occurs so we can recover. 2643 2643 */ 2644 - if (is_t4(adap->chip)) { 2644 + if (is_t4(adap->params.chip)) { 2645 2645 t4_set_reg_field(adap, A_SGE_DBFIFO_STATUS, 2646 2646 V_HP_INT_THRESH(M_HP_INT_THRESH) | 2647 2647 V_LP_INT_THRESH(M_LP_INT_THRESH),
+147 -85
drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
··· 296 296 u32 mc_bist_cmd, mc_bist_cmd_addr, mc_bist_cmd_len; 297 297 u32 mc_bist_status_rdata, mc_bist_data_pattern; 298 298 299 - if (is_t4(adap->chip)) { 299 + if (is_t4(adap->params.chip)) { 300 300 mc_bist_cmd = MC_BIST_CMD; 301 301 mc_bist_cmd_addr = MC_BIST_CMD_ADDR; 302 302 mc_bist_cmd_len = MC_BIST_CMD_LEN; ··· 349 349 u32 edc_bist_cmd, edc_bist_cmd_addr, edc_bist_cmd_len; 350 350 u32 edc_bist_cmd_data_pattern, edc_bist_status_rdata; 351 351 352 - if (is_t4(adap->chip)) { 352 + if (is_t4(adap->params.chip)) { 353 353 edc_bist_cmd = EDC_REG(EDC_BIST_CMD, idx); 354 354 edc_bist_cmd_addr = EDC_REG(EDC_BIST_CMD_ADDR, idx); 355 355 edc_bist_cmd_len = EDC_REG(EDC_BIST_CMD_LEN, idx); ··· 402 402 static int t4_mem_win_rw(struct adapter *adap, u32 addr, __be32 *data, int dir) 403 403 { 404 404 int i; 405 - u32 win_pf = is_t4(adap->chip) ? 0 : V_PFNUM(adap->fn); 405 + u32 win_pf = is_t4(adap->params.chip) ? 0 : V_PFNUM(adap->fn); 406 406 407 407 /* 408 408 * Setup offset into PCIE memory window. Address must be a ··· 863 863 } 864 864 865 865 /** 866 - * get_fw_version - read the firmware version 866 + * t4_get_fw_version - read the firmware version 867 867 * @adapter: the adapter 868 868 * @vers: where to place the version 869 869 * 870 870 * Reads the FW version from flash. 871 871 */ 872 - static int get_fw_version(struct adapter *adapter, u32 *vers) 872 + int t4_get_fw_version(struct adapter *adapter, u32 *vers) 873 873 { 874 - return t4_read_flash(adapter, adapter->params.sf_fw_start + 875 - offsetof(struct fw_hdr, fw_ver), 1, vers, 0); 874 + return t4_read_flash(adapter, FLASH_FW_START + 875 + offsetof(struct fw_hdr, fw_ver), 1, 876 + vers, 0); 876 877 } 877 878 878 879 /** 879 - * get_tp_version - read the TP microcode version 880 + * t4_get_tp_version - read the TP microcode version 880 881 * @adapter: the adapter 881 882 * @vers: where to place the version 882 883 * 883 884 * Reads the TP microcode version from flash. 884 885 */ 885 - static int get_tp_version(struct adapter *adapter, u32 *vers) 886 + int t4_get_tp_version(struct adapter *adapter, u32 *vers) 886 887 { 887 - return t4_read_flash(adapter, adapter->params.sf_fw_start + 888 + return t4_read_flash(adapter, FLASH_FW_START + 888 889 offsetof(struct fw_hdr, tp_microcode_ver), 889 890 1, vers, 0); 890 891 } 891 892 892 - /** 893 - * t4_check_fw_version - check if the FW is compatible with this driver 894 - * @adapter: the adapter 895 - * 896 - * Checks if an adapter's FW is compatible with the driver. Returns 0 897 - * if there's exact match, a negative error if the version could not be 898 - * read or there's a major version mismatch, and a positive value if the 899 - * expected major version is found but there's a minor version mismatch. 893 + /* Is the given firmware API compatible with the one the driver was compiled 894 + * with? 
900 895 */ 901 - int t4_check_fw_version(struct adapter *adapter) 896 + static int fw_compatible(const struct fw_hdr *hdr1, const struct fw_hdr *hdr2) 902 897 { 903 - u32 api_vers[2]; 904 - int ret, major, minor, micro; 905 - int exp_major, exp_minor, exp_micro; 906 898 907 - ret = get_fw_version(adapter, &adapter->params.fw_vers); 908 - if (!ret) 909 - ret = get_tp_version(adapter, &adapter->params.tp_vers); 910 - if (!ret) 911 - ret = t4_read_flash(adapter, adapter->params.sf_fw_start + 912 - offsetof(struct fw_hdr, intfver_nic), 913 - 2, api_vers, 1); 914 - if (ret) 915 - return ret; 899 + /* short circuit if it's the exact same firmware version */ 900 + if (hdr1->chip == hdr2->chip && hdr1->fw_ver == hdr2->fw_ver) 901 + return 1; 916 902 917 - major = FW_HDR_FW_VER_MAJOR_GET(adapter->params.fw_vers); 918 - minor = FW_HDR_FW_VER_MINOR_GET(adapter->params.fw_vers); 919 - micro = FW_HDR_FW_VER_MICRO_GET(adapter->params.fw_vers); 903 + #define SAME_INTF(x) (hdr1->intfver_##x == hdr2->intfver_##x) 904 + if (hdr1->chip == hdr2->chip && SAME_INTF(nic) && SAME_INTF(vnic) && 905 + SAME_INTF(ri) && SAME_INTF(iscsi) && SAME_INTF(fcoe)) 906 + return 1; 907 + #undef SAME_INTF 920 908 921 - switch (CHELSIO_CHIP_VERSION(adapter->chip)) { 922 - case CHELSIO_T4: 923 - exp_major = FW_VERSION_MAJOR; 924 - exp_minor = FW_VERSION_MINOR; 925 - exp_micro = FW_VERSION_MICRO; 926 - break; 927 - case CHELSIO_T5: 928 - exp_major = FW_VERSION_MAJOR_T5; 929 - exp_minor = FW_VERSION_MINOR_T5; 930 - exp_micro = FW_VERSION_MICRO_T5; 931 - break; 932 - default: 933 - dev_err(adapter->pdev_dev, "Unsupported chip type, %x\n", 934 - adapter->chip); 935 - return -EINVAL; 909 + return 0; 910 + } 911 + 912 + /* The firmware in the filesystem is usable, but should it be installed? 913 + * This routine explains itself in detail if it indicates the filesystem 914 + * firmware should be installed. 915 + */ 916 + static int should_install_fs_fw(struct adapter *adap, int card_fw_usable, 917 + int k, int c) 918 + { 919 + const char *reason; 920 + 921 + if (!card_fw_usable) { 922 + reason = "incompatible or unusable"; 923 + goto install; 936 924 } 937 925 938 - memcpy(adapter->params.api_vers, api_vers, 939 - sizeof(adapter->params.api_vers)); 940 - 941 - if (major < exp_major || (major == exp_major && minor < exp_minor) || 942 - (major == exp_major && minor == exp_minor && micro < exp_micro)) { 943 - dev_err(adapter->pdev_dev, 944 - "Card has firmware version %u.%u.%u, minimum " 945 - "supported firmware is %u.%u.%u.\n", major, minor, 946 - micro, exp_major, exp_minor, exp_micro); 947 - return -EFAULT; 926 + if (k > c) { 927 + reason = "older than the version supported with this driver"; 928 + goto install; 948 929 } 949 930 950 - if (major != exp_major) { /* major mismatch - fail */ 951 - dev_err(adapter->pdev_dev, 952 - "card FW has major version %u, driver wants %u\n", 953 - major, exp_major); 954 - return -EINVAL; 955 - } 931 + return 0; 956 932 957 - if (minor == exp_minor && micro == exp_micro) 958 - return 0; /* perfect match */ 933 + install: 934 + dev_err(adap->pdev_dev, "firmware on card (%u.%u.%u.%u) is %s, " 935 + "installing firmware %u.%u.%u.%u on card.\n", 936 + FW_HDR_FW_VER_MAJOR_GET(c), FW_HDR_FW_VER_MINOR_GET(c), 937 + FW_HDR_FW_VER_MICRO_GET(c), FW_HDR_FW_VER_BUILD_GET(c), reason, 938 + FW_HDR_FW_VER_MAJOR_GET(k), FW_HDR_FW_VER_MINOR_GET(k), 939 + FW_HDR_FW_VER_MICRO_GET(k), FW_HDR_FW_VER_BUILD_GET(k)); 959 940 960 - /* Minor/micro version mismatch. Report it but often it's OK. 
*/ 961 941 return 1; 942 + } 943 + 944 + int t4_prep_fw(struct adapter *adap, struct fw_info *fw_info, 945 + const u8 *fw_data, unsigned int fw_size, 946 + struct fw_hdr *card_fw, enum dev_state state, 947 + int *reset) 948 + { 949 + int ret, card_fw_usable, fs_fw_usable; 950 + const struct fw_hdr *fs_fw; 951 + const struct fw_hdr *drv_fw; 952 + 953 + drv_fw = &fw_info->fw_hdr; 954 + 955 + /* Read the header of the firmware on the card */ 956 + ret = -t4_read_flash(adap, FLASH_FW_START, 957 + sizeof(*card_fw) / sizeof(uint32_t), 958 + (uint32_t *)card_fw, 1); 959 + if (ret == 0) { 960 + card_fw_usable = fw_compatible(drv_fw, (const void *)card_fw); 961 + } else { 962 + dev_err(adap->pdev_dev, 963 + "Unable to read card's firmware header: %d\n", ret); 964 + card_fw_usable = 0; 965 + } 966 + 967 + if (fw_data != NULL) { 968 + fs_fw = (const void *)fw_data; 969 + fs_fw_usable = fw_compatible(drv_fw, fs_fw); 970 + } else { 971 + fs_fw = NULL; 972 + fs_fw_usable = 0; 973 + } 974 + 975 + if (card_fw_usable && card_fw->fw_ver == drv_fw->fw_ver && 976 + (!fs_fw_usable || fs_fw->fw_ver == drv_fw->fw_ver)) { 977 + /* Common case: the firmware on the card is an exact match and 978 + * the filesystem one is an exact match too, or the filesystem 979 + * one is absent/incompatible. 980 + */ 981 + } else if (fs_fw_usable && state == DEV_STATE_UNINIT && 982 + should_install_fs_fw(adap, card_fw_usable, 983 + be32_to_cpu(fs_fw->fw_ver), 984 + be32_to_cpu(card_fw->fw_ver))) { 985 + ret = -t4_fw_upgrade(adap, adap->mbox, fw_data, 986 + fw_size, 0); 987 + if (ret != 0) { 988 + dev_err(adap->pdev_dev, 989 + "failed to install firmware: %d\n", ret); 990 + goto bye; 991 + } 992 + 993 + /* Installed successfully, update the cached header too. */ 994 + memcpy(card_fw, fs_fw, sizeof(*card_fw)); 995 + card_fw_usable = 1; 996 + *reset = 0; /* already reset as part of load_fw */ 997 + } 998 + 999 + if (!card_fw_usable) { 1000 + uint32_t d, c, k; 1001 + 1002 + d = be32_to_cpu(drv_fw->fw_ver); 1003 + c = be32_to_cpu(card_fw->fw_ver); 1004 + k = fs_fw ? be32_to_cpu(fs_fw->fw_ver) : 0; 1005 + 1006 + dev_err(adap->pdev_dev, "Cannot find a usable firmware: " 1007 + "chip state %d, " 1008 + "driver compiled with %d.%d.%d.%d, " 1009 + "card has %d.%d.%d.%d, filesystem has %d.%d.%d.%d\n", 1010 + state, 1011 + FW_HDR_FW_VER_MAJOR_GET(d), FW_HDR_FW_VER_MINOR_GET(d), 1012 + FW_HDR_FW_VER_MICRO_GET(d), FW_HDR_FW_VER_BUILD_GET(d), 1013 + FW_HDR_FW_VER_MAJOR_GET(c), FW_HDR_FW_VER_MINOR_GET(c), 1014 + FW_HDR_FW_VER_MICRO_GET(c), FW_HDR_FW_VER_BUILD_GET(c), 1015 + FW_HDR_FW_VER_MAJOR_GET(k), FW_HDR_FW_VER_MINOR_GET(k), 1016 + FW_HDR_FW_VER_MICRO_GET(k), FW_HDR_FW_VER_BUILD_GET(k)); 1017 + ret = EINVAL; 1018 + goto bye; 1019 + } 1020 + 1021 + /* We're using whatever's on the card and it's known to be good. */ 1022 + adap->params.fw_vers = be32_to_cpu(card_fw->fw_ver); 1023 + adap->params.tp_vers = be32_to_cpu(card_fw->tp_microcode_ver); 1024 + 1025 + bye: 1026 + return ret; 962 1027 } 963 1028 964 1029 /** ··· 1433 1368 PCIE_CORE_UTL_PCI_EXPRESS_PORT_STATUS, 1434 1369 pcie_port_intr_info) + 1435 1370 t4_handle_intr_status(adapter, PCIE_INT_CAUSE, 1436 - is_t4(adapter->chip) ? 1371 + is_t4(adapter->params.chip) ? 
1437 1372 pcie_intr_info : t5_pcie_intr_info); 1438 1373 1439 1374 if (fat) ··· 1847 1782 { 1848 1783 u32 v, int_cause_reg; 1849 1784 1850 - if (is_t4(adap->chip)) 1785 + if (is_t4(adap->params.chip)) 1851 1786 int_cause_reg = PORT_REG(port, XGMAC_PORT_INT_CAUSE); 1852 1787 else 1853 1788 int_cause_reg = T5_PORT_REG(port, MAC_PORT_INT_CAUSE); ··· 2315 2250 2316 2251 #define GET_STAT(name) \ 2317 2252 t4_read_reg64(adap, \ 2318 - (is_t4(adap->chip) ? PORT_REG(idx, MPS_PORT_STAT_##name##_L) : \ 2253 + (is_t4(adap->params.chip) ? PORT_REG(idx, MPS_PORT_STAT_##name##_L) : \ 2319 2254 T5_PORT_REG(idx, MPS_PORT_STAT_##name##_L))) 2320 2255 #define GET_STAT_COM(name) t4_read_reg64(adap, MPS_STAT_##name##_L) 2321 2256 ··· 2397 2332 { 2398 2333 u32 mag_id_reg_l, mag_id_reg_h, port_cfg_reg; 2399 2334 2400 - if (is_t4(adap->chip)) { 2335 + if (is_t4(adap->params.chip)) { 2401 2336 mag_id_reg_l = PORT_REG(port, XGMAC_PORT_MAGIC_MACID_LO); 2402 2337 mag_id_reg_h = PORT_REG(port, XGMAC_PORT_MAGIC_MACID_HI); 2403 2338 port_cfg_reg = PORT_REG(port, XGMAC_PORT_CFG2); ··· 2439 2374 int i; 2440 2375 u32 port_cfg_reg; 2441 2376 2442 - if (is_t4(adap->chip)) 2377 + if (is_t4(adap->params.chip)) 2443 2378 port_cfg_reg = PORT_REG(port, XGMAC_PORT_CFG2); 2444 2379 else 2445 2380 port_cfg_reg = T5_PORT_REG(port, MAC_PORT_CFG2); ··· 2452 2387 return -EINVAL; 2453 2388 2454 2389 #define EPIO_REG(name) \ 2455 - (is_t4(adap->chip) ? PORT_REG(port, XGMAC_PORT_EPIO_##name) : \ 2390 + (is_t4(adap->params.chip) ? PORT_REG(port, XGMAC_PORT_EPIO_##name) : \ 2456 2391 T5_PORT_REG(port, MAC_PORT_EPIO_##name)) 2457 2392 2458 2393 t4_write_reg(adap, EPIO_REG(DATA1), mask0 >> 32); ··· 2539 2474 int t4_mem_win_read_len(struct adapter *adap, u32 addr, __be32 *data, int len) 2540 2475 { 2541 2476 int i, off; 2542 - u32 win_pf = is_t4(adap->chip) ? 0 : V_PFNUM(adap->fn); 2477 + u32 win_pf = is_t4(adap->params.chip) ? 0 : V_PFNUM(adap->fn); 2543 2478 2544 2479 /* Align on a 2KB boundary. 2545 2480 */ ··· 3371 3306 int i, ret; 3372 3307 struct fw_vi_mac_cmd c; 3373 3308 struct fw_vi_mac_exact *p; 3374 - unsigned int max_naddr = is_t4(adap->chip) ? 3309 + unsigned int max_naddr = is_t4(adap->params.chip) ? 3375 3310 NUM_MPS_CLS_SRAM_L_INSTANCES : 3376 3311 NUM_MPS_T5_CLS_SRAM_L_INSTANCES; 3377 3312 ··· 3433 3368 int ret, mode; 3434 3369 struct fw_vi_mac_cmd c; 3435 3370 struct fw_vi_mac_exact *p = c.u.exact; 3436 - unsigned int max_mac_addr = is_t4(adap->chip) ? 3371 + unsigned int max_mac_addr = is_t4(adap->params.chip) ? 
3437 3372 NUM_MPS_CLS_SRAM_L_INSTANCES : 3438 3373 NUM_MPS_T5_CLS_SRAM_L_INSTANCES; 3439 3374 ··· 3764 3699 { 3765 3700 int ret, ver; 3766 3701 uint16_t device_id; 3702 + u32 pl_rev; 3767 3703 3768 3704 ret = t4_wait_dev_ready(adapter); 3769 3705 if (ret < 0) 3770 3706 return ret; 3771 3707 3772 3708 get_pci_mode(adapter, &adapter->params.pci); 3773 - adapter->params.rev = t4_read_reg(adapter, PL_REV); 3709 + pl_rev = G_REV(t4_read_reg(adapter, PL_REV)); 3774 3710 3775 3711 ret = get_flash_params(adapter); 3776 3712 if (ret < 0) { ··· 3783 3717 */ 3784 3718 pci_read_config_word(adapter->pdev, PCI_DEVICE_ID, &device_id); 3785 3719 ver = device_id >> 12; 3720 + adapter->params.chip = 0; 3786 3721 switch (ver) { 3787 3722 case CHELSIO_T4: 3788 - adapter->chip = CHELSIO_CHIP_CODE(CHELSIO_T4, 3789 - adapter->params.rev); 3723 + adapter->params.chip |= CHELSIO_CHIP_CODE(CHELSIO_T4, pl_rev); 3790 3724 break; 3791 3725 case CHELSIO_T5: 3792 - adapter->chip = CHELSIO_CHIP_CODE(CHELSIO_T5, 3793 - adapter->params.rev); 3726 + adapter->params.chip |= CHELSIO_CHIP_CODE(CHELSIO_T5, pl_rev); 3794 3727 break; 3795 3728 default: 3796 3729 dev_err(adapter->pdev_dev, "Device %d is not supported\n", 3797 3730 device_id); 3798 3731 return -EINVAL; 3799 3732 } 3800 - 3801 - /* Reassign the updated revision field */ 3802 - adapter->params.rev = adapter->chip; 3803 3733 3804 3734 init_cong_ctrl(adapter->params.a_wnd, adapter->params.b_wnd); 3805 3735
+14
drivers/net/ethernet/chelsio/cxgb4/t4_regs.h
··· 1092 1092 1093 1093 #define PL_REV 0x1943c 1094 1094 1095 + #define S_REV 0 1096 + #define M_REV 0xfU 1097 + #define V_REV(x) ((x) << S_REV) 1098 + #define G_REV(x) (((x) >> S_REV) & M_REV) 1099 + 1095 1100 #define LE_DB_CONFIG 0x19c04 1096 1101 #define HASHEN 0x00100000U 1097 1102 ··· 1203 1198 #define EDC_T51_BASE_ADDR 0x50800 1204 1199 #define EDC_STRIDE_T5 (EDC_T51_BASE_ADDR - EDC_T50_BASE_ADDR) 1205 1200 #define EDC_REG_T5(reg, idx) (reg + EDC_STRIDE_T5 * idx) 1201 + 1202 + #define A_PL_VF_REV 0x4 1203 + #define A_PL_VF_WHOAMI 0x0 1204 + #define A_PL_VF_REVISION 0x8 1205 + 1206 + #define S_CHIPID 4 1207 + #define M_CHIPID 0xfU 1208 + #define V_CHIPID(x) ((x) << S_CHIPID) 1209 + #define G_CHIPID(x) (((x) >> S_CHIPID) & M_CHIPID) 1206 1210 1207 1211 #endif /* __T4_REGS_H */
+6 -1
drivers/net/ethernet/chelsio/cxgb4/t4fw_api.h
··· 2157 2157 2158 2158 struct fw_hdr { 2159 2159 u8 ver; 2160 - u8 reserved1; 2160 + u8 chip; /* terminator chip type */ 2161 2161 __be16 len512; /* bin length in units of 512-bytes */ 2162 2162 __be32 fw_ver; /* firmware version */ 2163 2163 __be32 tp_microcode_ver; ··· 2174 2174 __u32 reserved4; 2175 2175 __be32 flags; 2176 2176 __be32 reserved6[23]; 2177 + }; 2178 + 2179 + enum fw_hdr_chip { 2180 + FW_HDR_CHIP_T4, 2181 + FW_HDR_CHIP_T5 2177 2182 }; 2178 2183 2179 2184 #define FW_HDR_FW_VER_MAJOR_GET(x) (((x) >> 24) & 0xff)
-1
drivers/net/ethernet/chelsio/cxgb4vf/adapter.h
··· 344 344 unsigned long registered_device_map; 345 345 unsigned long open_device_map; 346 346 unsigned long flags; 347 - enum chip_type chip; 348 347 struct adapter_params params; 349 348 350 349 /* queue and interrupt resources */
+11 -4
drivers/net/ethernet/chelsio/cxgb4vf/cxgb4vf_main.c
··· 1064 1064 /* 1065 1065 * Chip version 4, revision 0x3f (cxgb4vf). 1066 1066 */ 1067 - return CHELSIO_CHIP_VERSION(adapter->chip) | (0x3f << 10); 1067 + return CHELSIO_CHIP_VERSION(adapter->params.chip) | (0x3f << 10); 1068 1068 } 1069 1069 1070 1070 /* ··· 1551 1551 reg_block_dump(adapter, regbuf, 1552 1552 T4VF_MPS_BASE_ADDR + T4VF_MOD_MAP_MPS_FIRST, 1553 1553 T4VF_MPS_BASE_ADDR + T4VF_MOD_MAP_MPS_LAST); 1554 + 1555 + /* T5 adds new registers in the PL Register map. 1556 + */ 1554 1557 reg_block_dump(adapter, regbuf, 1555 1558 T4VF_PL_BASE_ADDR + T4VF_MOD_MAP_PL_FIRST, 1556 - T4VF_PL_BASE_ADDR + T4VF_MOD_MAP_PL_LAST); 1559 + T4VF_PL_BASE_ADDR + (is_t4(adapter->params.chip) 1560 + ? A_PL_VF_WHOAMI : A_PL_VF_REVISION)); 1557 1561 reg_block_dump(adapter, regbuf, 1558 1562 T4VF_CIM_BASE_ADDR + T4VF_MOD_MAP_CIM_FIRST, 1559 1563 T4VF_CIM_BASE_ADDR + T4VF_MOD_MAP_CIM_LAST); ··· 2091 2087 unsigned int ethqsets; 2092 2088 int err; 2093 2089 u32 param, val = 0; 2090 + unsigned int chipid; 2094 2091 2095 2092 /* 2096 2093 * Wait for the device to become ready before proceeding ... ··· 2119 2114 return err; 2120 2115 } 2121 2116 2117 + adapter->params.chip = 0; 2122 2118 switch (adapter->pdev->device >> 12) { 2123 2119 case CHELSIO_T4: 2124 - adapter->chip = CHELSIO_CHIP_CODE(CHELSIO_T4, 0); 2120 + adapter->params.chip = CHELSIO_CHIP_CODE(CHELSIO_T4, 0); 2125 2121 break; 2126 2122 case CHELSIO_T5: 2127 - adapter->chip = CHELSIO_CHIP_CODE(CHELSIO_T5, 0); 2123 + chipid = G_REV(t4_read_reg(adapter, A_PL_VF_REV)); 2124 + adapter->params.chip |= CHELSIO_CHIP_CODE(CHELSIO_T5, chipid); 2128 2125 break; 2129 2126 } 2130 2127
+1 -1
drivers/net/ethernet/chelsio/cxgb4vf/sge.c
··· 537 537 */ 538 538 if (fl->pend_cred >= FL_PER_EQ_UNIT) { 539 539 val = PIDX(fl->pend_cred / FL_PER_EQ_UNIT); 540 - if (!is_t4(adapter->chip)) 540 + if (!is_t4(adapter->params.chip)) 541 541 val |= DBTYPE(1); 542 542 wmb(); 543 543 t4_write_reg(adapter, T4VF_SGE_BASE_ADDR + SGE_VF_KDOORBELL,
+16 -8
drivers/net/ethernet/chelsio/cxgb4vf/t4vf_common.h
··· 39 39 #include "../cxgb4/t4fw_api.h" 40 40 41 41 #define CHELSIO_CHIP_CODE(version, revision) (((version) << 4) | (revision)) 42 - #define CHELSIO_CHIP_VERSION(code) ((code) >> 4) 42 + #define CHELSIO_CHIP_VERSION(code) (((code) >> 4) & 0xf) 43 43 #define CHELSIO_CHIP_RELEASE(code) ((code) & 0xf) 44 44 45 + /* All T4 and later chips have their PCI-E Device IDs encoded as 0xVFPP where: 46 + * 47 + * V = "4" for T4; "5" for T5, etc. or 48 + * = "a" for T4 FPGA; "b" for T4 FPGA, etc. 49 + * F = "0" for PF 0..3; "4".."7" for PF4..7; and "8" for VFs 50 + * PP = adapter product designation 51 + */ 45 52 #define CHELSIO_T4 0x4 46 53 #define CHELSIO_T5 0x5 47 54 48 55 enum chip_type { 49 - T4_A1 = CHELSIO_CHIP_CODE(CHELSIO_T4, 0), 50 - T4_A2 = CHELSIO_CHIP_CODE(CHELSIO_T4, 1), 51 - T4_A3 = CHELSIO_CHIP_CODE(CHELSIO_T4, 2), 56 + T4_A1 = CHELSIO_CHIP_CODE(CHELSIO_T4, 1), 57 + T4_A2 = CHELSIO_CHIP_CODE(CHELSIO_T4, 2), 52 58 T4_FIRST_REV = T4_A1, 53 - T4_LAST_REV = T4_A3, 59 + T4_LAST_REV = T4_A2, 54 60 55 - T5_A1 = CHELSIO_CHIP_CODE(CHELSIO_T5, 0), 56 - T5_FIRST_REV = T5_A1, 61 + T5_A0 = CHELSIO_CHIP_CODE(CHELSIO_T5, 0), 62 + T5_A1 = CHELSIO_CHIP_CODE(CHELSIO_T5, 1), 63 + T5_FIRST_REV = T5_A0, 57 64 T5_LAST_REV = T5_A1, 58 65 }; 59 66 ··· 210 203 struct vpd_params vpd; /* Vital Product Data */ 211 204 struct rss_params rss; /* Receive Side Scaling */ 212 205 struct vf_resources vfres; /* Virtual Function Resource limits */ 206 + enum chip_type chip; /* chip code */ 213 207 u8 nports; /* # of Ethernet "ports" */ 214 208 }; 215 209 ··· 261 253 262 254 static inline int is_t4(enum chip_type chip) 263 255 { 264 - return (chip >= T4_FIRST_REV && chip <= T4_LAST_REV); 256 + return CHELSIO_CHIP_VERSION(chip) == CHELSIO_T4; 265 257 } 266 258 267 259 int t4vf_wait_dev_ready(struct adapter *);
+2 -2
drivers/net/ethernet/chelsio/cxgb4vf/t4vf_hw.c
··· 1027 1027 unsigned nfilters = 0; 1028 1028 unsigned int rem = naddr; 1029 1029 struct fw_vi_mac_cmd cmd, rpl; 1030 - unsigned int max_naddr = is_t4(adapter->chip) ? 1030 + unsigned int max_naddr = is_t4(adapter->params.chip) ? 1031 1031 NUM_MPS_CLS_SRAM_L_INSTANCES : 1032 1032 NUM_MPS_T5_CLS_SRAM_L_INSTANCES; 1033 1033 ··· 1121 1121 struct fw_vi_mac_exact *p = &cmd.u.exact[0]; 1122 1122 size_t len16 = DIV_ROUND_UP(offsetof(struct fw_vi_mac_cmd, 1123 1123 u.exact[1]), 16); 1124 - unsigned int max_naddr = is_t4(adapter->chip) ? 1124 + unsigned int max_naddr = is_t4(adapter->params.chip) ? 1125 1125 NUM_MPS_CLS_SRAM_L_INSTANCES : 1126 1126 NUM_MPS_T5_CLS_SRAM_L_INSTANCES; 1127 1127
+3
drivers/net/ethernet/emulex/benet/be_hw.h
··· 64 64 #define SLIPORT_ERROR_NO_RESOURCE1 0x2 65 65 #define SLIPORT_ERROR_NO_RESOURCE2 0x9 66 66 67 + #define SLIPORT_ERROR_FW_RESET1 0x2 68 + #define SLIPORT_ERROR_FW_RESET2 0x0 69 + 67 70 /********* Memory BAR register ************/ 68 71 #define PCICFG_MEMBAR_CTRL_INT_CTRL_OFFSET 0xfc 69 72 /* Host Interrupt Enable, if set interrupts are enabled although "PCI Interrupt
+29 -12
drivers/net/ethernet/emulex/benet/be_main.c
··· 2464 2464 */ 2465 2465 if (sliport_status & SLIPORT_STATUS_ERR_MASK) { 2466 2466 adapter->hw_error = true; 2467 - dev_err(&adapter->pdev->dev, 2468 - "Error detected in the card\n"); 2467 + /* Do not log error messages if its a FW reset */ 2468 + if (sliport_err1 == SLIPORT_ERROR_FW_RESET1 && 2469 + sliport_err2 == SLIPORT_ERROR_FW_RESET2) { 2470 + dev_info(&adapter->pdev->dev, 2471 + "Firmware update in progress\n"); 2472 + return; 2473 + } else { 2474 + dev_err(&adapter->pdev->dev, 2475 + "Error detected in the card\n"); 2476 + } 2469 2477 } 2470 2478 2471 2479 if (sliport_status & SLIPORT_STATUS_ERR_MASK) { ··· 2940 2932 } 2941 2933 } 2942 2934 2943 - static int be_clear(struct be_adapter *adapter) 2935 + static void be_mac_clear(struct be_adapter *adapter) 2944 2936 { 2945 2937 int i; 2946 2938 2939 + if (adapter->pmac_id) { 2940 + for (i = 0; i < (adapter->uc_macs + 1); i++) 2941 + be_cmd_pmac_del(adapter, adapter->if_handle, 2942 + adapter->pmac_id[i], 0); 2943 + adapter->uc_macs = 0; 2944 + 2945 + kfree(adapter->pmac_id); 2946 + adapter->pmac_id = NULL; 2947 + } 2948 + } 2949 + 2950 + static int be_clear(struct be_adapter *adapter) 2951 + { 2947 2952 be_cancel_worker(adapter); 2948 2953 2949 2954 if (sriov_enabled(adapter)) 2950 2955 be_vf_clear(adapter); 2951 2956 2952 2957 /* delete the primary mac along with the uc-mac list */ 2953 - for (i = 0; i < (adapter->uc_macs + 1); i++) 2954 - be_cmd_pmac_del(adapter, adapter->if_handle, 2955 - adapter->pmac_id[i], 0); 2956 - adapter->uc_macs = 0; 2958 + be_mac_clear(adapter); 2957 2959 2958 2960 be_cmd_if_destroy(adapter, adapter->if_handle, 0); 2959 2961 2960 2962 be_clear_queues(adapter); 2961 - 2962 - kfree(adapter->pmac_id); 2963 - adapter->pmac_id = NULL; 2964 2963 2965 2964 be_msix_disable(adapter); 2966 2965 return 0; ··· 3827 3812 } 3828 3813 3829 3814 if (change_status == LANCER_FW_RESET_NEEDED) { 3815 + dev_info(&adapter->pdev->dev, 3816 + "Resetting adapter to activate new FW\n"); 3830 3817 status = lancer_physdev_ctrl(adapter, 3831 3818 PHYSDEV_CONTROL_FW_RESET_MASK); 3832 3819 if (status) { ··· 4380 4363 goto err; 4381 4364 } 4382 4365 4383 - dev_err(dev, "Error recovery successful\n"); 4366 + dev_err(dev, "Adapter recovery successful\n"); 4384 4367 return 0; 4385 4368 err: 4386 4369 if (status == -EAGAIN) 4387 4370 dev_err(dev, "Waiting for resource provisioning\n"); 4388 4371 else 4389 - dev_err(dev, "Error recovery failed\n"); 4372 + dev_err(dev, "Adapter recovery failed\n"); 4390 4373 4391 4374 return status; 4392 4375 }
+4 -9
drivers/net/ethernet/freescale/fec_main.c
··· 98 98 * detected as not set during a prior frame transmission, then the 99 99 * ENET_TDAR[TDAR] bit is cleared at a later time, even if additional TxBDs 100 100 * were added to the ring and the ENET_TDAR[TDAR] bit is set. This results in 101 - * If the ready bit in the transmit buffer descriptor (TxBD[R]) is previously 102 - * detected as not set during a prior frame transmission, then the 103 - * ENET_TDAR[TDAR] bit is cleared at a later time, even if additional TxBDs 104 - * were added to the ring and the ENET_TDAR[TDAR] bit is set. This results in 105 101 * frames not being transmitted until there is a 0-to-1 transition on 106 102 * ENET_TDAR[TDAR]. 107 103 */ ··· 381 385 * data. 382 386 */ 383 387 bdp->cbd_bufaddr = dma_map_single(&fep->pdev->dev, bufaddr, 384 - FEC_ENET_TX_FRSIZE, DMA_TO_DEVICE); 388 + skb->len, DMA_TO_DEVICE); 385 389 if (dma_mapping_error(&fep->pdev->dev, bdp->cbd_bufaddr)) { 386 390 bdp->cbd_bufaddr = 0; 387 391 fep->tx_skbuff[index] = NULL; ··· 775 779 else 776 780 index = bdp - fep->tx_bd_base; 777 781 778 - dma_unmap_single(&fep->pdev->dev, bdp->cbd_bufaddr, 779 - FEC_ENET_TX_FRSIZE, DMA_TO_DEVICE); 780 - bdp->cbd_bufaddr = 0; 781 - 782 782 skb = fep->tx_skbuff[index]; 783 + dma_unmap_single(&fep->pdev->dev, bdp->cbd_bufaddr, skb->len, 784 + DMA_TO_DEVICE); 785 + bdp->cbd_bufaddr = 0; 783 786 784 787 /* Check for errors. */ 785 788 if (status & (BD_ENET_TX_HB | BD_ENET_TX_LC |
+1 -1
drivers/net/ethernet/ibm/ehea/ehea_main.c
··· 3033 3033 3034 3034 dev->hw_features = NETIF_F_SG | NETIF_F_TSO | 3035 3035 NETIF_F_IP_CSUM | NETIF_F_HW_VLAN_CTAG_TX; 3036 - dev->features = NETIF_F_SG | NETIF_F_FRAGLIST | NETIF_F_TSO | 3036 + dev->features = NETIF_F_SG | NETIF_F_TSO | 3037 3037 NETIF_F_HIGHDMA | NETIF_F_IP_CSUM | 3038 3038 NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_CTAG_RX | 3039 3039 NETIF_F_HW_VLAN_CTAG_FILTER | NETIF_F_RXCSUM;
+3
drivers/net/ethernet/intel/i40e/i40e_main.c
··· 354 354 struct rtnl_link_stats64 *vsi_stats = i40e_get_vsi_stats_struct(vsi); 355 355 int i; 356 356 357 + if (!vsi->tx_rings) 358 + return stats; 359 + 357 360 rcu_read_lock(); 358 361 for (i = 0; i < vsi->num_queue_pairs; i++) { 359 362 struct i40e_ring *tx_ring, *rx_ring;
+4 -1
drivers/net/ethernet/intel/igb/e1000_phy.c
··· 1728 1728 * ownership of the resources, wait and try again to 1729 1729 * see if they have relinquished the resources yet. 1730 1730 */ 1731 - udelay(usec_interval); 1731 + if (usec_interval >= 1000) 1732 + mdelay(usec_interval/1000); 1733 + else 1734 + udelay(usec_interval); 1732 1735 } 1733 1736 ret_val = hw->phy.ops.read_reg(hw, PHY_STATUS, &phy_status); 1734 1737 if (ret_val)
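The igb hunk above stops feeding millisecond-scale values to udelay() by routing anything of 1000 us or more through mdelay(). A minimal userspace sketch of the same branching, using nanosleep() as a stand-in for the kernel delay primitives (the helper names and threshold here are illustrative, not the driver's):

#include <stdio.h>
#include <time.h>

/* Stand-ins for the kernel's mdelay()/udelay(). */
static void msleep_approx(unsigned int ms)
{
        struct timespec ts = { .tv_sec = ms / 1000,
                               .tv_nsec = (long)(ms % 1000) * 1000000L };
        nanosleep(&ts, NULL);
}

static void usleep_approx(unsigned int us)
{
        struct timespec ts = { .tv_sec = 0, .tv_nsec = (long)us * 1000L };
        nanosleep(&ts, NULL);
}

/* Same shape as the igb fix: long waits go through the millisecond helper
 * so the microsecond helper never sees an oversized argument. */
static void phy_poll_delay(unsigned int usec_interval)
{
        if (usec_interval >= 1000)
                msleep_approx(usec_interval / 1000);
        else
                usleep_approx(usec_interval);
}

int main(void)
{
        phy_poll_delay(10000);  /* 10 ms, routed through msleep_approx() */
        phy_poll_delay(100);    /* 100 us, short enough for usleep_approx() */
        printf("done\n");
        return 0;
}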
+2 -2
drivers/net/ethernet/marvell/mvneta.c
··· 1378 1378 1379 1379 dev_kfree_skb_any(skb); 1380 1380 dma_unmap_single(pp->dev->dev.parent, rx_desc->buf_phys_addr, 1381 - rx_desc->data_size, DMA_FROM_DEVICE); 1381 + MVNETA_RX_BUF_SIZE(pp->pkt_size), DMA_FROM_DEVICE); 1382 1382 } 1383 1383 1384 1384 if (rx_done) ··· 1424 1424 } 1425 1425 1426 1426 dma_unmap_single(pp->dev->dev.parent, rx_desc->buf_phys_addr, 1427 - rx_desc->data_size, DMA_FROM_DEVICE); 1427 + MVNETA_RX_BUF_SIZE(pp->pkt_size), DMA_FROM_DEVICE); 1428 1428 1429 1429 rx_bytes = rx_desc->data_size - 1430 1430 (ETH_FCS_LEN + MVNETA_MH_SIZE);
+2
drivers/net/ethernet/mellanox/mlx4/main.c
··· 2635 2635 return -ENOMEM; 2636 2636 2637 2637 ret = pci_register_driver(&mlx4_driver); 2638 + if (ret < 0) 2639 + destroy_workqueue(mlx4_wq); 2638 2640 return ret < 0 ? ret : 0; 2639 2641 } 2640 2642
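The mlx4 hunk above plugs a leak: the workqueue created a few lines earlier is now destroyed when pci_register_driver() fails. A generic sketch of that unwind-on-failure shape, with stand-in functions rather than the real mlx4/PCI API:

#include <stdio.h>
#include <stdlib.h>

struct workqueue { int unused; };

static struct workqueue *wq_create(void)     { return calloc(1, sizeof(struct workqueue)); }
static void wq_destroy(struct workqueue *wq) { free(wq); }
static int driver_register(void)             { return -1; /* force the error path */ }

static int module_init_example(void)
{
        struct workqueue *wq = wq_create();
        int ret;

        if (!wq)
                return -1;

        ret = driver_register();
        if (ret < 0)
                wq_destroy(wq);         /* undo the earlier allocation on failure */
        return ret;
}

int main(void)
{
        printf("init returned %d\n", module_init_example());
        return 0;
}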
+5 -3
drivers/net/ethernet/nvidia/forcedeth.c
··· 5150 5150 { 5151 5151 struct fe_priv *np = netdev_priv(dev); 5152 5152 u8 __iomem *base = get_hwbase(dev); 5153 - int result; 5154 - memset(buffer, 0, nv_get_sset_count(dev, ETH_SS_TEST)*sizeof(u64)); 5153 + int result, count; 5154 + 5155 + count = nv_get_sset_count(dev, ETH_SS_TEST); 5156 + memset(buffer, 0, count * sizeof(u64)); 5155 5157 5156 5158 if (!nv_link_test(dev)) { 5157 5159 test->flags |= ETH_TEST_FL_FAILED; ··· 5197 5195 return; 5198 5196 } 5199 5197 5200 - if (!nv_loopback_test(dev)) { 5198 + if (count > NV_TEST_COUNT_BASE && !nv_loopback_test(dev)) { 5201 5199 test->flags |= ETH_TEST_FL_FAILED; 5202 5200 buffer[3] = 1; 5203 5201 }
+1 -1
drivers/net/ethernet/qlogic/qlge/qlge.h
··· 18 18 */ 19 19 #define DRV_NAME "qlge" 20 20 #define DRV_STRING "QLogic 10 Gigabit PCI-E Ethernet Driver " 21 - #define DRV_VERSION "1.00.00.33" 21 + #define DRV_VERSION "1.00.00.34" 22 22 23 23 #define WQ_ADDR_ALIGN 0x3 /* 4 byte alignment */ 24 24
+4
drivers/net/ethernet/qlogic/qlge/qlge_ethtool.c
··· 181 181 }; 182 182 #define QLGE_TEST_LEN (sizeof(ql_gstrings_test) / ETH_GSTRING_LEN) 183 183 #define QLGE_STATS_LEN ARRAY_SIZE(ql_gstrings_stats) 184 + #define QLGE_RCV_MAC_ERR_STATS 7 184 185 185 186 static int ql_update_ring_coalescing(struct ql_adapter *qdev) 186 187 { ··· 280 279 *iter = data; 281 280 iter++; 282 281 } 282 + 283 + /* Update receive mac error statistics */ 284 + iter += QLGE_RCV_MAC_ERR_STATS; 283 285 284 286 /* 285 287 * Get Per-priority TX pause frame counter statistics.
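The qlge_ethtool hunk above keeps the exported statistics aligned with the firmware dump by stepping the output iterator over a block of receive-MAC error counters that the driver does not report. A tiny sketch of that fixed-offset skip, with made-up counter values:

#include <stdio.h>

#define RCV_MAC_ERR_STATS 7     /* counters present in the dump but not exported */

int main(void)
{
        /* pretend dump layout: 3 exported stats, 7 reserved, 2 more exported */
        unsigned long long raw[3 + RCV_MAC_ERR_STATS + 2] = {
                1, 2, 3,  0, 0, 0, 0, 0, 0, 0,  4, 5,
        };
        unsigned long long out[5];
        const unsigned long long *iter = raw;
        int i;

        for (i = 0; i < 3; i++)
                out[i] = *iter++;

        iter += RCV_MAC_ERR_STATS;      /* skip the error-counter block */

        for (; i < 5; i++)
                out[i] = *iter++;

        for (i = 0; i < 5; i++)
                printf("%llu ", out[i]);
        printf("\n");
        return 0;
}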
-8
drivers/net/ethernet/qlogic/qlge/qlge_main.c
··· 2376 2376 netdev_features_t features) 2377 2377 { 2378 2378 int err; 2379 - /* 2380 - * Since there is no support for separate rx/tx vlan accel 2381 - * enable/disable make sure tx flag is always in same state as rx. 2382 - */ 2383 - if (features & NETIF_F_HW_VLAN_CTAG_RX) 2384 - features |= NETIF_F_HW_VLAN_CTAG_TX; 2385 - else 2386 - features &= ~NETIF_F_HW_VLAN_CTAG_TX; 2387 2379 2388 2380 /* Update the behavior of vlan accel in the adapter */ 2389 2381 err = qlge_update_hw_vlan_features(ndev, features);
+7 -1
drivers/net/ethernet/sfc/efx.c
··· 585 585 EFX_MAX_FRAME_LEN(efx->net_dev->mtu) + 586 586 efx->type->rx_buffer_padding); 587 587 rx_buf_len = (sizeof(struct efx_rx_page_state) + 588 - NET_IP_ALIGN + efx->rx_dma_len); 588 + efx->rx_ip_align + efx->rx_dma_len); 589 589 if (rx_buf_len <= PAGE_SIZE) { 590 590 efx->rx_scatter = efx->type->always_rx_scatter; 591 591 efx->rx_buffer_order = 0; ··· 645 645 WARN_ON(channel->rx_pkt_n_frags); 646 646 } 647 647 648 + efx_ptp_start_datapath(efx); 649 + 648 650 if (netif_device_present(efx->net_dev)) 649 651 netif_tx_wake_all_queues(efx->net_dev); 650 652 } ··· 660 658 661 659 EFX_ASSERT_RESET_SERIALISED(efx); 662 660 BUG_ON(efx->port_enabled); 661 + 662 + efx_ptp_stop_datapath(efx); 663 663 664 664 /* Stop RX refill */ 665 665 efx_for_each_channel(channel, efx) { ··· 2544 2540 2545 2541 efx->net_dev = net_dev; 2546 2542 efx->rx_prefix_size = efx->type->rx_prefix_size; 2543 + efx->rx_ip_align = 2544 + NET_IP_ALIGN ? (efx->rx_prefix_size + NET_IP_ALIGN) % 4 : 0; 2547 2545 efx->rx_packet_hash_offset = 2548 2546 efx->type->rx_hash_offset - efx->type->rx_prefix_size; 2549 2547 spin_lock_init(&efx->stats_lock);
+29 -10
drivers/net/ethernet/sfc/mcdi.c
··· 50 50 static void efx_mcdi_timeout_async(unsigned long context); 51 51 static int efx_mcdi_drv_attach(struct efx_nic *efx, bool driver_operating, 52 52 bool *was_attached_out); 53 + static bool efx_mcdi_poll_once(struct efx_nic *efx); 53 54 54 55 static inline struct efx_mcdi_iface *efx_mcdi(struct efx_nic *efx) 55 56 { ··· 238 237 } 239 238 } 240 239 240 + static bool efx_mcdi_poll_once(struct efx_nic *efx) 241 + { 242 + struct efx_mcdi_iface *mcdi = efx_mcdi(efx); 243 + 244 + rmb(); 245 + if (!efx->type->mcdi_poll_response(efx)) 246 + return false; 247 + 248 + spin_lock_bh(&mcdi->iface_lock); 249 + efx_mcdi_read_response_header(efx); 250 + spin_unlock_bh(&mcdi->iface_lock); 251 + 252 + return true; 253 + } 254 + 241 255 static int efx_mcdi_poll(struct efx_nic *efx) 242 256 { 243 257 struct efx_mcdi_iface *mcdi = efx_mcdi(efx); ··· 288 272 289 273 time = jiffies; 290 274 291 - rmb(); 292 - if (efx->type->mcdi_poll_response(efx)) 275 + if (efx_mcdi_poll_once(efx)) 293 276 break; 294 277 295 278 if (time_after(time, finish)) 296 279 return -ETIMEDOUT; 297 280 } 298 - 299 - spin_lock_bh(&mcdi->iface_lock); 300 - efx_mcdi_read_response_header(efx); 301 - spin_unlock_bh(&mcdi->iface_lock); 302 281 303 282 /* Return rc=0 like wait_event_timeout() */ 304 283 return 0; ··· 630 619 rc = efx_mcdi_await_completion(efx); 631 620 632 621 if (rc != 0) { 622 + netif_err(efx, hw, efx->net_dev, 623 + "MC command 0x%x inlen %d mode %d timed out\n", 624 + cmd, (int)inlen, mcdi->mode); 625 + 626 + if (mcdi->mode == MCDI_MODE_EVENTS && efx_mcdi_poll_once(efx)) { 627 + netif_err(efx, hw, efx->net_dev, 628 + "MCDI request was completed without an event\n"); 629 + rc = 0; 630 + } 631 + 633 632 /* Close the race with efx_mcdi_ev_cpl() executing just too late 634 633 * and completing a request we've just cancelled, by ensuring 635 634 * that the seqno check therein fails. ··· 648 627 ++mcdi->seqno; 649 628 ++mcdi->credits; 650 629 spin_unlock_bh(&mcdi->iface_lock); 630 + } 651 631 652 - netif_err(efx, hw, efx->net_dev, 653 - "MC command 0x%x inlen %d mode %d timed out\n", 654 - cmd, (int)inlen, mcdi->mode); 655 - } else { 632 + if (rc == 0) { 656 633 size_t hdr_len, data_len; 657 634 658 635 /* At the very least we need a memory barrier here to ensure
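The mcdi.c hunk above factors the response check into efx_mcdi_poll_once() so that an event-mode timeout can be followed by one last poll: if the MC finished the request but the completion event went missing, the command is treated as successful instead of erroring out. A standalone sketch of that poll-after-timeout recovery (the flag and function names are illustrative):

#include <stdio.h>
#include <stdbool.h>

struct cmd_state {
        volatile bool response_ready;   /* normally set from an interrupt/event path */
};

static bool poll_once(const struct cmd_state *s)
{
        return s->response_ready;
}

static int wait_for_completion_event(const struct cmd_state *s)
{
        (void)s;
        return -1;      /* pretend the event never arrives */
}

static int run_command(const struct cmd_state *s)
{
        int rc = wait_for_completion_event(s);

        if (rc != 0) {
                fprintf(stderr, "command timed out\n");
                /* The hardware may have completed without signalling an
                 * event; one final poll avoids a spurious failure. */
                if (poll_once(s)) {
                        fprintf(stderr, "completed without an event\n");
                        rc = 0;
                }
        }
        return rc;
}

int main(void)
{
        struct cmd_state s = { .response_ready = true };

        return run_command(&s) ? 1 : 0;
}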
+3
drivers/net/ethernet/sfc/net_driver.h
··· 683 683 * @n_channels: Number of channels in use 684 684 * @n_rx_channels: Number of channels used for RX (= number of RX queues) 685 685 * @n_tx_channels: Number of channels used for TX 686 + * @rx_ip_align: RX DMA address offset to have IP header aligned in 687 + * in accordance with NET_IP_ALIGN 686 688 * @rx_dma_len: Current maximum RX DMA length 687 689 * @rx_buffer_order: Order (log2) of number of pages for each RX buffer 688 690 * @rx_buffer_truesize: Amortised allocation size of an RX buffer, ··· 818 816 unsigned rss_spread; 819 817 unsigned tx_channel_offset; 820 818 unsigned n_tx_channels; 819 + unsigned int rx_ip_align; 821 820 unsigned int rx_dma_len; 822 821 unsigned int rx_buffer_order; 823 822 unsigned int rx_buffer_truesize;
+2
drivers/net/ethernet/sfc/nic.h
··· 560 560 bool efx_ptp_is_ptp_tx(struct efx_nic *efx, struct sk_buff *skb); 561 561 int efx_ptp_tx(struct efx_nic *efx, struct sk_buff *skb); 562 562 void efx_ptp_event(struct efx_nic *efx, efx_qword_t *ev); 563 + void efx_ptp_start_datapath(struct efx_nic *efx); 564 + void efx_ptp_stop_datapath(struct efx_nic *efx); 563 565 564 566 extern const struct efx_nic_type falcon_a1_nic_type; 565 567 extern const struct efx_nic_type falcon_b0_nic_type;
+57 -9
drivers/net/ethernet/sfc/ptp.c
··· 220 220 * @evt_list: List of MC receive events awaiting packets 221 221 * @evt_free_list: List of free events 222 222 * @evt_lock: Lock for manipulating evt_list and evt_free_list 223 + * @evt_overflow: Boolean indicating that event list has overflowed 223 224 * @rx_evts: Instantiated events (on evt_list and evt_free_list) 224 225 * @workwq: Work queue for processing pending PTP operations 225 226 * @work: Work task ··· 271 270 struct list_head evt_list; 272 271 struct list_head evt_free_list; 273 272 spinlock_t evt_lock; 273 + bool evt_overflow; 274 274 struct efx_ptp_event_rx rx_evts[MAX_RECEIVE_EVENTS]; 275 275 struct workqueue_struct *workwq; 276 276 struct work_struct work; ··· 637 635 } 638 636 } 639 637 } 638 + /* If the event overflow flag is set and the event list is now empty 639 + * clear the flag to re-enable the overflow warning message. 640 + */ 641 + if (ptp->evt_overflow && list_empty(&ptp->evt_list)) 642 + ptp->evt_overflow = false; 640 643 spin_unlock_bh(&ptp->evt_lock); 641 644 } 642 645 ··· 683 676 break; 684 677 } 685 678 } 679 + /* If the event overflow flag is set and the event list is now empty 680 + * clear the flag to re-enable the overflow warning message. 681 + */ 682 + if (ptp->evt_overflow && list_empty(&ptp->evt_list)) 683 + ptp->evt_overflow = false; 686 684 spin_unlock_bh(&ptp->evt_lock); 687 685 688 686 return rc; ··· 717 705 __skb_queue_tail(q, skb); 718 706 } else if (time_after(jiffies, match->expiry)) { 719 707 match->state = PTP_PACKET_STATE_TIMED_OUT; 720 - netif_warn(efx, rx_err, efx->net_dev, 721 - "PTP packet - no timestamp seen\n"); 708 + if (net_ratelimit()) 709 + netif_warn(efx, rx_err, efx->net_dev, 710 + "PTP packet - no timestamp seen\n"); 722 711 __skb_queue_tail(q, skb); 723 712 } else { 724 713 /* Replace unprocessed entry and stop */ ··· 801 788 static int efx_ptp_stop(struct efx_nic *efx) 802 789 { 803 790 struct efx_ptp_data *ptp = efx->ptp_data; 804 - int rc = efx_ptp_disable(efx); 805 791 struct list_head *cursor; 806 792 struct list_head *next; 793 + int rc; 794 + 795 + if (ptp == NULL) 796 + return 0; 797 + 798 + rc = efx_ptp_disable(efx); 807 799 808 800 if (ptp->rxfilter_installed) { 809 801 efx_filter_remove_id_safe(efx, EFX_FILTER_PRI_REQUIRED, ··· 827 809 list_for_each_safe(cursor, next, &efx->ptp_data->evt_list) { 828 810 list_move(cursor, &efx->ptp_data->evt_free_list); 829 811 } 812 + ptp->evt_overflow = false; 830 813 spin_unlock_bh(&efx->ptp_data->evt_lock); 831 814 832 815 return rc; 816 + } 817 + 818 + static int efx_ptp_restart(struct efx_nic *efx) 819 + { 820 + if (efx->ptp_data && efx->ptp_data->enabled) 821 + return efx_ptp_start(efx); 822 + return 0; 833 823 } 834 824 835 825 static void efx_ptp_pps_worker(struct work_struct *work) ··· 927 901 spin_lock_init(&ptp->evt_lock); 928 902 for (pos = 0; pos < MAX_RECEIVE_EVENTS; pos++) 929 903 list_add(&ptp->rx_evts[pos].link, &ptp->evt_free_list); 904 + ptp->evt_overflow = false; 930 905 931 906 ptp->phc_clock_info.owner = THIS_MODULE; 932 907 snprintf(ptp->phc_clock_info.name, ··· 1016 989 skb->len >= PTP_MIN_LENGTH && 1017 990 skb->len <= MC_CMD_PTP_IN_TRANSMIT_PACKET_MAXNUM && 1018 991 likely(skb->protocol == htons(ETH_P_IP)) && 992 + skb_transport_header_was_set(skb) && 993 + skb_network_header_len(skb) >= sizeof(struct iphdr) && 1019 994 ip_hdr(skb)->protocol == IPPROTO_UDP && 995 + skb_headlen(skb) >= 996 + skb_transport_offset(skb) + sizeof(struct udphdr) && 1020 997 udp_hdr(skb)->dest == htons(PTP_EVENT_PORT); 1021 998 } 1022 999 ··· 1137 1106 { 1138 1107 if 
((enable_wanted != efx->ptp_data->enabled) || 1139 1108 (enable_wanted && (efx->ptp_data->mode != new_mode))) { 1140 - int rc; 1109 + int rc = 0; 1141 1110 1142 1111 if (enable_wanted) { 1143 1112 /* Change of mode requires disable */ ··· 1154 1123 * succeed. 1155 1124 */ 1156 1125 efx->ptp_data->mode = new_mode; 1157 - rc = efx_ptp_start(efx); 1126 + if (netif_running(efx->net_dev)) 1127 + rc = efx_ptp_start(efx); 1158 1128 if (rc == 0) { 1159 1129 rc = efx_ptp_synchronize(efx, 1160 1130 PTP_SYNC_ATTEMPTS * 2); ··· 1327 1295 list_add_tail(&evt->link, &ptp->evt_list); 1328 1296 1329 1297 queue_work(ptp->workwq, &ptp->work); 1330 - } else { 1331 - netif_err(efx, rx_err, efx->net_dev, "No free PTP event"); 1298 + } else if (!ptp->evt_overflow) { 1299 + /* Log a warning message and set the event overflow flag. 1300 + * The message won't be logged again until the event queue 1301 + * becomes empty. 1302 + */ 1303 + netif_err(efx, rx_err, efx->net_dev, "PTP event queue overflow\n"); 1304 + ptp->evt_overflow = true; 1332 1305 } 1333 1306 spin_unlock_bh(&ptp->evt_lock); 1334 1307 } ··· 1426 1389 if (rc != 0) 1427 1390 return rc; 1428 1391 1429 - ptp_data->current_adjfreq = delta; 1392 + ptp_data->current_adjfreq = adjustment_ns; 1430 1393 return 0; 1431 1394 } 1432 1395 ··· 1441 1404 1442 1405 MCDI_SET_DWORD(inbuf, PTP_IN_OP, MC_CMD_PTP_OP_ADJUST); 1443 1406 MCDI_SET_DWORD(inbuf, PTP_IN_PERIPH_ID, 0); 1444 - MCDI_SET_QWORD(inbuf, PTP_IN_ADJUST_FREQ, 0); 1407 + MCDI_SET_QWORD(inbuf, PTP_IN_ADJUST_FREQ, ptp_data->current_adjfreq); 1445 1408 MCDI_SET_DWORD(inbuf, PTP_IN_ADJUST_SECONDS, (u32)delta_ts.tv_sec); 1446 1409 MCDI_SET_DWORD(inbuf, PTP_IN_ADJUST_NANOSECONDS, (u32)delta_ts.tv_nsec); 1447 1410 return efx_mcdi_rpc(efx, MC_CMD_PTP, inbuf, sizeof(inbuf), ··· 1527 1490 if (efx_ptp_disable(efx) == 0) 1528 1491 efx->extra_channel_type[EFX_EXTRA_CHANNEL_PTP] = 1529 1492 &efx_ptp_channel_type; 1493 + } 1494 + 1495 + void efx_ptp_start_datapath(struct efx_nic *efx) 1496 + { 1497 + if (efx_ptp_restart(efx)) 1498 + netif_err(efx, drv, efx->net_dev, "Failed to restart PTP.\n"); 1499 + } 1500 + 1501 + void efx_ptp_stop_datapath(struct efx_nic *efx) 1502 + { 1503 + efx_ptp_stop(efx); 1530 1504 }
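The ptp.c changes above replace the per-event "No free PTP event" error with an evt_overflow flag: the warning is emitted once when the receive-event list fills up and is only re-armed after the list drains. A self-contained sketch of that log-once-until-empty pattern:

#include <stdio.h>
#include <stdbool.h>

struct evt_queue {
        int used;
        int capacity;
        bool overflow;  /* set on the first dropped event, cleared when empty */
};

static void evt_enqueue(struct evt_queue *q)
{
        if (q->used < q->capacity) {
                q->used++;
        } else if (!q->overflow) {
                fprintf(stderr, "event queue overflow\n");      /* logged once */
                q->overflow = true;
        }
}

static void evt_drain(struct evt_queue *q)
{
        q->used = 0;
        if (q->overflow)
                q->overflow = false;    /* re-arm the warning */
}

int main(void)
{
        struct evt_queue q = { .capacity = 2 };
        int i;

        for (i = 0; i < 5; i++)         /* overflows, but warns only once */
                evt_enqueue(&q);
        evt_drain(&q);
        evt_enqueue(&q);                /* queue has space again, no warning */
        return 0;
}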
+3 -3
drivers/net/ethernet/sfc/rx.c
··· 94 94 95 95 void efx_rx_config_page_split(struct efx_nic *efx) 96 96 { 97 - efx->rx_page_buf_step = ALIGN(efx->rx_dma_len + NET_IP_ALIGN, 97 + efx->rx_page_buf_step = ALIGN(efx->rx_dma_len + efx->rx_ip_align, 98 98 EFX_RX_BUF_ALIGNMENT); 99 99 efx->rx_bufs_per_page = efx->rx_buffer_order ? 1 : 100 100 ((PAGE_SIZE - sizeof(struct efx_rx_page_state)) / ··· 189 189 do { 190 190 index = rx_queue->added_count & rx_queue->ptr_mask; 191 191 rx_buf = efx_rx_buffer(rx_queue, index); 192 - rx_buf->dma_addr = dma_addr + NET_IP_ALIGN; 192 + rx_buf->dma_addr = dma_addr + efx->rx_ip_align; 193 193 rx_buf->page = page; 194 - rx_buf->page_offset = page_offset + NET_IP_ALIGN; 194 + rx_buf->page_offset = page_offset + efx->rx_ip_align; 195 195 rx_buf->len = efx->rx_dma_len; 196 196 rx_buf->flags = 0; 197 197 ++rx_queue->added_count;
+35 -10
drivers/net/ethernet/smsc/smc91x.c
··· 82 82 #include <linux/mii.h> 83 83 #include <linux/workqueue.h> 84 84 #include <linux/of.h> 85 + #include <linux/of_device.h> 85 86 86 87 #include <linux/netdevice.h> 87 88 #include <linux/etherdevice.h> ··· 2185 2184 } 2186 2185 } 2187 2186 2187 + #if IS_BUILTIN(CONFIG_OF) 2188 + static const struct of_device_id smc91x_match[] = { 2189 + { .compatible = "smsc,lan91c94", }, 2190 + { .compatible = "smsc,lan91c111", }, 2191 + {}, 2192 + }; 2193 + MODULE_DEVICE_TABLE(of, smc91x_match); 2194 + #endif 2195 + 2188 2196 /* 2189 2197 * smc_init(void) 2190 2198 * Input parameters: ··· 2208 2198 static int smc_drv_probe(struct platform_device *pdev) 2209 2199 { 2210 2200 struct smc91x_platdata *pd = dev_get_platdata(&pdev->dev); 2201 + const struct of_device_id *match = NULL; 2211 2202 struct smc_local *lp; 2212 2203 struct net_device *ndev; 2213 2204 struct resource *res, *ires; ··· 2228 2217 */ 2229 2218 2230 2219 lp = netdev_priv(ndev); 2220 + lp->cfg.flags = 0; 2231 2221 2232 2222 if (pd) { 2233 2223 memcpy(&lp->cfg, pd, sizeof(lp->cfg)); 2234 2224 lp->io_shift = SMC91X_IO_SHIFT(lp->cfg.flags); 2235 - } else { 2225 + } 2226 + 2227 + #if IS_BUILTIN(CONFIG_OF) 2228 + match = of_match_device(of_match_ptr(smc91x_match), &pdev->dev); 2229 + if (match) { 2230 + struct device_node *np = pdev->dev.of_node; 2231 + u32 val; 2232 + 2233 + /* Combination of IO widths supported, default to 16-bit */ 2234 + if (!of_property_read_u32(np, "reg-io-width", &val)) { 2235 + if (val & 1) 2236 + lp->cfg.flags |= SMC91X_USE_8BIT; 2237 + if ((val == 0) || (val & 2)) 2238 + lp->cfg.flags |= SMC91X_USE_16BIT; 2239 + if (val & 4) 2240 + lp->cfg.flags |= SMC91X_USE_32BIT; 2241 + } else { 2242 + lp->cfg.flags |= SMC91X_USE_16BIT; 2243 + } 2244 + } 2245 + #endif 2246 + 2247 + if (!pd && !match) { 2236 2248 lp->cfg.flags |= (SMC_CAN_USE_8BIT) ? SMC91X_USE_8BIT : 0; 2237 2249 lp->cfg.flags |= (SMC_CAN_USE_16BIT) ? SMC91X_USE_16BIT : 0; 2238 2250 lp->cfg.flags |= (SMC_CAN_USE_32BIT) ? SMC91X_USE_32BIT : 0; ··· 2403 2369 } 2404 2370 return 0; 2405 2371 } 2406 - 2407 - #ifdef CONFIG_OF 2408 - static const struct of_device_id smc91x_match[] = { 2409 - { .compatible = "smsc,lan91c94", }, 2410 - { .compatible = "smsc,lan91c111", }, 2411 - {}, 2412 - }; 2413 - MODULE_DEVICE_TABLE(of, smc91x_match); 2414 - #endif 2415 2372 2416 2373 static struct dev_pm_ops smc_drv_pm_ops = { 2417 2374 .suspend = smc_drv_suspend,
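The smc91x probe rework above derives the usable bus widths from the device tree's "reg-io-width" bitmask, with 16-bit access as the default. The decoding rule in isolation, as a small sketch (the flag names are illustrative):

#include <stdio.h>

#define USE_8BIT        (1 << 0)
#define USE_16BIT       (1 << 1)
#define USE_32BIT       (1 << 2)

/* Mirror of the smc91x rule: bit 0 selects 8-bit, bit 1 (or a value of 0)
 * selects 16-bit, bit 2 selects 32-bit access. */
static unsigned int decode_io_width(unsigned int val)
{
        unsigned int flags = 0;

        if (val & 1)
                flags |= USE_8BIT;
        if ((val == 0) || (val & 2))
                flags |= USE_16BIT;
        if (val & 4)
                flags |= USE_32BIT;
        return flags;
}

int main(void)
{
        printf("reg-io-width=0 -> %#x\n", decode_io_width(0));  /* 16-bit default */
        printf("reg-io-width=5 -> %#x\n", decode_io_width(5));  /* 8-bit + 32-bit */
        return 0;
}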
-1
drivers/net/ethernet/tehuti/tehuti.c
··· 2019 2019 ndev->features = NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO 2020 2020 | NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_CTAG_RX | 2021 2021 NETIF_F_HW_VLAN_CTAG_FILTER | NETIF_F_RXCSUM 2022 - /*| NETIF_F_FRAGLIST */ 2023 2022 ; 2024 2023 ndev->hw_features = NETIF_F_IP_CSUM | NETIF_F_SG | 2025 2024 NETIF_F_TSO | NETIF_F_HW_VLAN_CTAG_TX;
+16 -3
drivers/net/ethernet/ti/cpsw.c
··· 1151 1151 * receive descs 1152 1152 */ 1153 1153 cpsw_info(priv, ifup, "submitted %d rx descriptors\n", i); 1154 + 1155 + if (cpts_register(&priv->pdev->dev, priv->cpts, 1156 + priv->data.cpts_clock_mult, 1157 + priv->data.cpts_clock_shift)) 1158 + dev_err(priv->dev, "error registering cpts device\n"); 1159 + 1154 1160 } 1155 1161 1156 1162 /* Enable Interrupt pacing if configured */ ··· 1203 1197 netif_carrier_off(priv->ndev); 1204 1198 1205 1199 if (cpsw_common_res_usage_state(priv) <= 1) { 1200 + cpts_unregister(priv->cpts); 1206 1201 cpsw_intr_disable(priv); 1207 1202 cpdma_ctlr_int_ctrl(priv->dma, false); 1208 1203 cpdma_ctlr_stop(priv->dma); ··· 1823 1816 } 1824 1817 1825 1818 i++; 1819 + if (i == data->slaves) 1820 + break; 1826 1821 } 1827 1822 1828 1823 return 0; ··· 1992 1983 goto clean_runtime_disable_ret; 1993 1984 } 1994 1985 priv->regs = ss_regs; 1995 - priv->version = __raw_readl(&priv->regs->id_ver); 1996 1986 priv->host_port = HOST_PORT_NUM; 1987 + 1988 + /* Need to enable clocks with runtime PM api to access module 1989 + * registers 1990 + */ 1991 + pm_runtime_get_sync(&pdev->dev); 1992 + priv->version = readl(&priv->regs->id_ver); 1993 + pm_runtime_put_sync(&pdev->dev); 1997 1994 1998 1995 res = platform_get_resource(pdev, IORESOURCE_MEM, 1); 1999 1996 priv->wr_regs = devm_ioremap_resource(&pdev->dev, res); ··· 2169 2154 if (priv->data.dual_emac) 2170 2155 unregister_netdev(cpsw_get_slave_ndev(priv, 1)); 2171 2156 unregister_netdev(ndev); 2172 - 2173 - cpts_unregister(priv->cpts); 2174 2157 2175 2158 cpsw_ale_destroy(priv->ale); 2176 2159 cpdma_chan_destroy(priv->txch);
+25 -1
drivers/net/ethernet/ti/davinci_emac.c
··· 61 61 #include <linux/davinci_emac.h> 62 62 #include <linux/of.h> 63 63 #include <linux/of_address.h> 64 + #include <linux/of_device.h> 64 65 #include <linux/of_irq.h> 65 66 #include <linux/of_net.h> 66 67 ··· 1753 1752 #endif 1754 1753 }; 1755 1754 1755 + static const struct of_device_id davinci_emac_of_match[]; 1756 + 1756 1757 static struct emac_platform_data * 1757 1758 davinci_emac_of_get_pdata(struct platform_device *pdev, struct emac_priv *priv) 1758 1759 { 1759 1760 struct device_node *np; 1761 + const struct of_device_id *match; 1762 + const struct emac_platform_data *auxdata; 1760 1763 struct emac_platform_data *pdata = NULL; 1761 1764 const u8 *mac_addr; 1762 1765 ··· 1798 1793 1799 1794 priv->phy_node = of_parse_phandle(np, "phy-handle", 0); 1800 1795 if (!priv->phy_node) 1801 - pdata->phy_id = ""; 1796 + pdata->phy_id = NULL; 1797 + 1798 + auxdata = pdev->dev.platform_data; 1799 + if (auxdata) { 1800 + pdata->interrupt_enable = auxdata->interrupt_enable; 1801 + pdata->interrupt_disable = auxdata->interrupt_disable; 1802 + } 1803 + 1804 + match = of_match_device(davinci_emac_of_match, &pdev->dev); 1805 + if (match && match->data) { 1806 + auxdata = match->data; 1807 + pdata->version = auxdata->version; 1808 + pdata->hw_ram_addr = auxdata->hw_ram_addr; 1809 + } 1802 1810 1803 1811 pdev->dev.platform_data = pdata; 1804 1812 ··· 2038 2020 }; 2039 2021 2040 2022 #if IS_ENABLED(CONFIG_OF) 2023 + static const struct emac_platform_data am3517_emac_data = { 2024 + .version = EMAC_VERSION_2, 2025 + .hw_ram_addr = 0x01e20000, 2026 + }; 2027 + 2041 2028 static const struct of_device_id davinci_emac_of_match[] = { 2042 2029 {.compatible = "ti,davinci-dm6467-emac", }, 2030 + {.compatible = "ti,am3517-emac", .data = &am3517_emac_data, }, 2043 2031 {}, 2044 2032 }; 2045 2033 MODULE_DEVICE_TABLE(of, davinci_emac_of_match);
+1 -1
drivers/net/ethernet/xilinx/ll_temac_main.c
··· 1017 1017 platform_set_drvdata(op, ndev); 1018 1018 SET_NETDEV_DEV(ndev, &op->dev); 1019 1019 ndev->flags &= ~IFF_MULTICAST; /* clear multicast */ 1020 - ndev->features = NETIF_F_SG | NETIF_F_FRAGLIST; 1020 + ndev->features = NETIF_F_SG; 1021 1021 ndev->netdev_ops = &temac_netdev_ops; 1022 1022 ndev->ethtool_ops = &temac_ethtool_ops; 1023 1023 #if 0
+1 -1
drivers/net/ethernet/xilinx/xilinx_axienet_main.c
··· 1486 1486 1487 1487 SET_NETDEV_DEV(ndev, &op->dev); 1488 1488 ndev->flags &= ~IFF_MULTICAST; /* clear multicast */ 1489 - ndev->features = NETIF_F_SG | NETIF_F_FRAGLIST; 1489 + ndev->features = NETIF_F_SG; 1490 1490 ndev->netdev_ops = &axienet_netdev_ops; 1491 1491 ndev->ethtool_ops = &axienet_ethtool_ops; 1492 1492
+13 -38
drivers/net/ethernet/xilinx/xilinx_emaclite.c
··· 163 163 __raw_writel(reg_data | XEL_TSR_XMIT_IE_MASK, 164 164 drvdata->base_addr + XEL_TSR_OFFSET); 165 165 166 - /* Enable the Tx interrupts for the second Buffer if 167 - * configured in HW */ 168 - if (drvdata->tx_ping_pong != 0) { 169 - reg_data = __raw_readl(drvdata->base_addr + 170 - XEL_BUFFER_OFFSET + XEL_TSR_OFFSET); 171 - __raw_writel(reg_data | XEL_TSR_XMIT_IE_MASK, 172 - drvdata->base_addr + XEL_BUFFER_OFFSET + 173 - XEL_TSR_OFFSET); 174 - } 175 - 176 166 /* Enable the Rx interrupts for the first buffer */ 177 167 __raw_writel(XEL_RSR_RECV_IE_MASK, drvdata->base_addr + XEL_RSR_OFFSET); 178 - 179 - /* Enable the Rx interrupts for the second Buffer if 180 - * configured in HW */ 181 - if (drvdata->rx_ping_pong != 0) { 182 - __raw_writel(XEL_RSR_RECV_IE_MASK, drvdata->base_addr + 183 - XEL_BUFFER_OFFSET + XEL_RSR_OFFSET); 184 - } 185 168 186 169 /* Enable the Global Interrupt Enable */ 187 170 __raw_writel(XEL_GIER_GIE_MASK, drvdata->base_addr + XEL_GIER_OFFSET); ··· 189 206 __raw_writel(reg_data & (~XEL_TSR_XMIT_IE_MASK), 190 207 drvdata->base_addr + XEL_TSR_OFFSET); 191 208 192 - /* Disable the Tx interrupts for the second Buffer 193 - * if configured in HW */ 194 - if (drvdata->tx_ping_pong != 0) { 195 - reg_data = __raw_readl(drvdata->base_addr + XEL_BUFFER_OFFSET + 196 - XEL_TSR_OFFSET); 197 - __raw_writel(reg_data & (~XEL_TSR_XMIT_IE_MASK), 198 - drvdata->base_addr + XEL_BUFFER_OFFSET + 199 - XEL_TSR_OFFSET); 200 - } 201 - 202 209 /* Disable the Rx interrupts for the first buffer */ 203 210 reg_data = __raw_readl(drvdata->base_addr + XEL_RSR_OFFSET); 204 211 __raw_writel(reg_data & (~XEL_RSR_RECV_IE_MASK), 205 212 drvdata->base_addr + XEL_RSR_OFFSET); 206 - 207 - /* Disable the Rx interrupts for the second buffer 208 - * if configured in HW */ 209 - if (drvdata->rx_ping_pong != 0) { 210 - 211 - reg_data = __raw_readl(drvdata->base_addr + XEL_BUFFER_OFFSET + 212 - XEL_RSR_OFFSET); 213 - __raw_writel(reg_data & (~XEL_RSR_RECV_IE_MASK), 214 - drvdata->base_addr + XEL_BUFFER_OFFSET + 215 - XEL_RSR_OFFSET); 216 - } 217 213 } 218 214 219 215 /** ··· 220 258 *to_u16_ptr++ = *from_u16_ptr++; 221 259 *to_u16_ptr++ = *from_u16_ptr++; 222 260 261 + /* This barrier resolves occasional issues seen around 262 + * cases where the data is not properly flushed out 263 + * from the processor store buffers to the destination 264 + * memory locations. 265 + */ 266 + wmb(); 267 + 223 268 /* Output a word */ 224 269 *to_u32_ptr++ = align_buffer; 225 270 } ··· 242 273 for (; length > 0; length--) 243 274 *to_u8_ptr++ = *from_u8_ptr++; 244 275 276 + /* This barrier resolves occasional issues seen around 277 + * cases where the data is not properly flushed out 278 + * from the processor store buffers to the destination 279 + * memory locations. 280 + */ 281 + wmb(); 245 282 *to_u32_ptr = align_buffer; 246 283 } 247 284 }
+8 -5
drivers/net/macvtap.c
··· 770 770 int ret; 771 771 int vnet_hdr_len = 0; 772 772 int vlan_offset = 0; 773 - int copied; 773 + int copied, total; 774 774 775 775 if (q->flags & IFF_VNET_HDR) { 776 776 struct virtio_net_hdr vnet_hdr; ··· 785 785 if (memcpy_toiovecend(iv, (void *)&vnet_hdr, 0, sizeof(vnet_hdr))) 786 786 return -EFAULT; 787 787 } 788 - copied = vnet_hdr_len; 788 + total = copied = vnet_hdr_len; 789 + total += skb->len; 789 790 790 791 if (!vlan_tx_tag_present(skb)) 791 792 len = min_t(int, skb->len, len); ··· 801 800 802 801 vlan_offset = offsetof(struct vlan_ethhdr, h_vlan_proto); 803 802 len = min_t(int, skb->len + VLAN_HLEN, len); 803 + total += VLAN_HLEN; 804 804 805 805 copy = min_t(int, vlan_offset, len); 806 806 ret = skb_copy_datagram_const_iovec(skb, 0, iv, copied, copy); ··· 819 817 } 820 818 821 819 ret = skb_copy_datagram_const_iovec(skb, vlan_offset, iv, copied, len); 822 - copied += len; 823 820 824 821 done: 825 - return ret ? ret : copied; 822 + return ret ? ret : total; 826 823 } 827 824 828 825 static ssize_t macvtap_do_read(struct macvtap_queue *q, struct kiocb *iocb, ··· 876 875 } 877 876 878 877 ret = macvtap_do_read(q, iocb, iv, len, file->f_flags & O_NONBLOCK); 879 - ret = min_t(ssize_t, ret, len); /* XXX copied from tun.c. Why? */ 878 + ret = min_t(ssize_t, ret, len); 879 + if (ret > 0) 880 + iocb->ki_pos = ret; 880 881 out: 881 882 return ret; 882 883 }
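The macvtap hunk above separates the iovec offset ("copied") from the length reported to the reader ("total"), so a read into a short buffer still returns the full on-wire size and user space can detect the truncation; it also records the result in iocb->ki_pos. A userspace sketch of the length-accounting part only (names are illustrative, in the spirit of recvmsg()'s MSG_TRUNC):

#include <stdio.h>
#include <string.h>

/* Copy at most buf_len bytes of the packet, but report how large the
 * packet really was, so the caller can tell that it was truncated. */
static size_t read_packet(const char *pkt, size_t pkt_len,
                          char *buf, size_t buf_len)
{
        size_t copy = pkt_len < buf_len ? pkt_len : buf_len;

        memcpy(buf, pkt, copy);         /* what is actually delivered */
        return pkt_len;                 /* what is signalled to the caller */
}

int main(void)
{
        char pkt[1500];
        char buf[256];
        size_t ret;

        memset(pkt, 0xab, sizeof(pkt));
        ret = read_packet(pkt, sizeof(pkt), buf, sizeof(buf));
        if (ret > sizeof(buf))
                printf("packet truncated: %zu of %zu bytes delivered\n",
                       sizeof(buf), ret);
        return 0;
}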
+15
drivers/net/phy/micrel.c
··· 336 336 .resume = genphy_resume, 337 337 .driver = { .owner = THIS_MODULE,}, 338 338 }, { 339 + .phy_id = PHY_ID_KSZ8041RNLI, 340 + .phy_id_mask = 0x00fffff0, 341 + .name = "Micrel KSZ8041RNLI", 342 + .features = PHY_BASIC_FEATURES | 343 + SUPPORTED_Pause | SUPPORTED_Asym_Pause, 344 + .flags = PHY_HAS_MAGICANEG | PHY_HAS_INTERRUPT, 345 + .config_init = kszphy_config_init, 346 + .config_aneg = genphy_config_aneg, 347 + .read_status = genphy_read_status, 348 + .ack_interrupt = kszphy_ack_interrupt, 349 + .config_intr = kszphy_config_intr, 350 + .suspend = genphy_suspend, 351 + .resume = genphy_resume, 352 + .driver = { .owner = THIS_MODULE,}, 353 + }, { 339 354 .phy_id = PHY_ID_KSZ8051, 340 355 .phy_id_mask = 0x00fffff0, 341 356 .name = "Micrel KSZ8051",
+11 -7
drivers/net/tun.c
··· 1184 1184 { 1185 1185 struct tun_pi pi = { 0, skb->protocol }; 1186 1186 ssize_t total = 0; 1187 - int vlan_offset = 0; 1187 + int vlan_offset = 0, copied; 1188 1188 1189 1189 if (!(tun->flags & TUN_NO_PI)) { 1190 1190 if ((len -= sizeof(pi)) < 0) ··· 1248 1248 total += tun->vnet_hdr_sz; 1249 1249 } 1250 1250 1251 + copied = total; 1252 + total += skb->len; 1251 1253 if (!vlan_tx_tag_present(skb)) { 1252 1254 len = min_t(int, skb->len, len); 1253 1255 } else { ··· 1264 1262 1265 1263 vlan_offset = offsetof(struct vlan_ethhdr, h_vlan_proto); 1266 1264 len = min_t(int, skb->len + VLAN_HLEN, len); 1265 + total += VLAN_HLEN; 1267 1266 1268 1267 copy = min_t(int, vlan_offset, len); 1269 - ret = skb_copy_datagram_const_iovec(skb, 0, iv, total, copy); 1268 + ret = skb_copy_datagram_const_iovec(skb, 0, iv, copied, copy); 1270 1269 len -= copy; 1271 - total += copy; 1270 + copied += copy; 1272 1271 if (ret || !len) 1273 1272 goto done; 1274 1273 1275 1274 copy = min_t(int, sizeof(veth), len); 1276 - ret = memcpy_toiovecend(iv, (void *)&veth, total, copy); 1275 + ret = memcpy_toiovecend(iv, (void *)&veth, copied, copy); 1277 1276 len -= copy; 1278 - total += copy; 1277 + copied += copy; 1279 1278 if (ret || !len) 1280 1279 goto done; 1281 1280 } 1282 1281 1283 - skb_copy_datagram_const_iovec(skb, vlan_offset, iv, total, len); 1284 - total += len; 1282 + skb_copy_datagram_const_iovec(skb, vlan_offset, iv, copied, len); 1285 1283 1286 1284 done: 1287 1285 tun->dev->stats.tx_packets++; ··· 1358 1356 ret = tun_do_read(tun, tfile, iocb, iv, len, 1359 1357 file->f_flags & O_NONBLOCK); 1360 1358 ret = min_t(ssize_t, ret, len); 1359 + if (ret > 0) 1360 + iocb->ki_pos = ret; 1361 1361 out: 1362 1362 tun_put(tun); 1363 1363 return ret;
+11 -6
drivers/net/virtio_net.c
··· 426 426 if (unlikely(len < sizeof(struct virtio_net_hdr) + ETH_HLEN)) { 427 427 pr_debug("%s: short packet %i\n", dev->name, len); 428 428 dev->stats.rx_length_errors++; 429 - if (vi->big_packets) 430 - give_pages(rq, buf); 431 - else if (vi->mergeable_rx_bufs) 429 + if (vi->mergeable_rx_bufs) 432 430 put_page(virt_to_head_page(buf)); 431 + else if (vi->big_packets) 432 + give_pages(rq, buf); 433 433 else 434 434 dev_kfree_skb(buf); 435 435 return; ··· 1367 1367 1368 1368 static void virtnet_free_queues(struct virtnet_info *vi) 1369 1369 { 1370 + int i; 1371 + 1372 + for (i = 0; i < vi->max_queue_pairs; i++) 1373 + netif_napi_del(&vi->rq[i].napi); 1374 + 1370 1375 kfree(vi->rq); 1371 1376 kfree(vi->sq); 1372 1377 } ··· 1401 1396 struct virtqueue *vq = vi->rq[i].vq; 1402 1397 1403 1398 while ((buf = virtqueue_detach_unused_buf(vq)) != NULL) { 1404 - if (vi->big_packets) 1405 - give_pages(&vi->rq[i], buf); 1406 - else if (vi->mergeable_rx_bufs) 1399 + if (vi->mergeable_rx_bufs) 1407 1400 put_page(virt_to_head_page(buf)); 1401 + else if (vi->big_packets) 1402 + give_pages(&vi->rq[i], buf); 1408 1403 else 1409 1404 dev_kfree_skb(buf); 1410 1405 --vi->rq[i].num;
+1 -1
drivers/net/vxlan.c
··· 1668 1668 netdev_dbg(dev, "circular route to %pI4\n", 1669 1669 &dst->sin.sin_addr.s_addr); 1670 1670 dev->stats.collisions++; 1671 - goto tx_error; 1671 + goto rt_tx_error; 1672 1672 } 1673 1673 1674 1674 /* Bypass encapsulation if the destination is local */
+12 -10
drivers/net/wireless/ath/ath9k/ar9003_eeprom.c
··· 3984 3984 int quick_drop; 3985 3985 s32 t[3], f[3] = {5180, 5500, 5785}; 3986 3986 3987 - if (!(pBase->miscConfiguration & BIT(1))) 3987 + if (!(pBase->miscConfiguration & BIT(4))) 3988 3988 return; 3989 3989 3990 - if (freq < 4000) 3991 - quick_drop = eep->modalHeader2G.quick_drop; 3992 - else { 3993 - t[0] = eep->base_ext1.quick_drop_low; 3994 - t[1] = eep->modalHeader5G.quick_drop; 3995 - t[2] = eep->base_ext1.quick_drop_high; 3996 - quick_drop = ar9003_hw_power_interpolate(freq, f, t, 3); 3990 + if (AR_SREV_9300(ah) || AR_SREV_9580(ah) || AR_SREV_9340(ah)) { 3991 + if (freq < 4000) { 3992 + quick_drop = eep->modalHeader2G.quick_drop; 3993 + } else { 3994 + t[0] = eep->base_ext1.quick_drop_low; 3995 + t[1] = eep->modalHeader5G.quick_drop; 3996 + t[2] = eep->base_ext1.quick_drop_high; 3997 + quick_drop = ar9003_hw_power_interpolate(freq, f, t, 3); 3998 + } 3999 + REG_RMW_FIELD(ah, AR_PHY_AGC, AR_PHY_AGC_QUICK_DROP, quick_drop); 3997 4000 } 3998 - REG_RMW_FIELD(ah, AR_PHY_AGC, AR_PHY_AGC_QUICK_DROP, quick_drop); 3999 4001 } 4000 4002 4001 4003 static void ar9003_hw_txend_to_xpa_off_apply(struct ath_hw *ah, bool is2ghz) ··· 4037 4035 struct ar9300_eeprom *eep = &ah->eeprom.ar9300_eep; 4038 4036 u8 bias; 4039 4037 4040 - if (!(eep->baseEepHeader.featureEnable & 0x40)) 4038 + if (!(eep->baseEepHeader.miscConfiguration & 0x40)) 4041 4039 return; 4042 4040 4043 4041 if (!AR_SREV_9300(ah))
+3 -4
drivers/net/wireless/ath/ath9k/hw.c
··· 146 146 else 147 147 clockrate = ATH9K_CLOCK_RATE_5GHZ_OFDM; 148 148 149 - if (IS_CHAN_HT40(chan)) 150 - clockrate *= 2; 151 - 152 - if (ah->curchan) { 149 + if (chan) { 150 + if (IS_CHAN_HT40(chan)) 151 + clockrate *= 2; 153 152 if (IS_CHAN_HALF_RATE(chan)) 154 153 clockrate /= 2; 155 154 if (IS_CHAN_QUARTER_RATE(chan))
+4
drivers/net/wireless/ath/ath9k/xmit.c
··· 1276 1276 if (!rts_thresh || (len > rts_thresh)) 1277 1277 rts = true; 1278 1278 } 1279 + 1280 + if (!aggr) 1281 + len = fi->framelen; 1282 + 1279 1283 ath_buf_set_rate(sc, bf, &info, len, rts); 1280 1284 } 1281 1285
+13 -6
drivers/net/wireless/ath/wcn36xx/smd.c
··· 2041 2041 case WCN36XX_HAL_DELETE_STA_CONTEXT_IND: 2042 2042 mutex_lock(&wcn->hal_ind_mutex); 2043 2043 msg_ind = kmalloc(sizeof(*msg_ind), GFP_KERNEL); 2044 - msg_ind->msg_len = len; 2045 - msg_ind->msg = kmalloc(len, GFP_KERNEL); 2046 - memcpy(msg_ind->msg, buf, len); 2047 - list_add_tail(&msg_ind->list, &wcn->hal_ind_queue); 2048 - queue_work(wcn->hal_ind_wq, &wcn->hal_ind_work); 2049 - wcn36xx_dbg(WCN36XX_DBG_HAL, "indication arrived\n"); 2044 + if (msg_ind) { 2045 + msg_ind->msg_len = len; 2046 + msg_ind->msg = kmalloc(len, GFP_KERNEL); 2047 + memcpy(msg_ind->msg, buf, len); 2048 + list_add_tail(&msg_ind->list, &wcn->hal_ind_queue); 2049 + queue_work(wcn->hal_ind_wq, &wcn->hal_ind_work); 2050 + wcn36xx_dbg(WCN36XX_DBG_HAL, "indication arrived\n"); 2051 + } 2050 2052 mutex_unlock(&wcn->hal_ind_mutex); 2053 + if (msg_ind) 2054 + break; 2055 + /* FIXME: Do something smarter then just printing an error. */ 2056 + wcn36xx_err("Run out of memory while handling SMD_EVENT (%d)\n", 2057 + msg_header->msg_type); 2051 2058 break; 2052 2059 default: 2053 2060 wcn36xx_err("SMD_EVENT (%d) not supported\n",
+2
drivers/net/wireless/brcm80211/Kconfig
··· 5 5 tristate "Broadcom IEEE802.11n PCIe SoftMAC WLAN driver" 6 6 depends on MAC80211 7 7 depends on BCMA 8 + select NEW_LEDS if BCMA_DRIVER_GPIO 9 + select LEDS_CLASS if BCMA_DRIVER_GPIO 8 10 select BRCMUTIL 9 11 select FW_LOADER 10 12 select CRC_CCITT
+2
drivers/net/wireless/brcm80211/brcmfmac/bcmsdh_sdmmc.c
··· 109 109 brcmf_err("Disable F2 failed:%d\n", 110 110 err_ret); 111 111 } 112 + } else { 113 + err_ret = -ENOENT; 112 114 } 113 115 } else if ((regaddr == SDIO_CCCR_ABORT) || 114 116 (regaddr == SDIO_CCCR_IENx)) {
+27 -2
drivers/net/wireless/iwlwifi/iwl-7000.c
··· 67 67 #include "iwl-agn-hw.h" 68 68 69 69 /* Highest firmware API version supported */ 70 - #define IWL7260_UCODE_API_MAX 7 71 - #define IWL3160_UCODE_API_MAX 7 70 + #define IWL7260_UCODE_API_MAX 8 71 + #define IWL3160_UCODE_API_MAX 8 72 72 73 73 /* Oldest version we won't warn about */ 74 74 #define IWL7260_UCODE_API_OK 7 ··· 130 130 .ht_params = &iwl7000_ht_params, 131 131 .nvm_ver = IWL7260_NVM_VERSION, 132 132 .nvm_calib_ver = IWL7260_TX_POWER_VERSION, 133 + .host_interrupt_operation_mode = true, 133 134 }; 134 135 135 136 const struct iwl_cfg iwl7260_2ac_cfg_high_temp = { ··· 141 140 .nvm_ver = IWL7260_NVM_VERSION, 142 141 .nvm_calib_ver = IWL7260_TX_POWER_VERSION, 143 142 .high_temp = true, 143 + .host_interrupt_operation_mode = true, 144 144 }; 145 145 146 146 const struct iwl_cfg iwl7260_2n_cfg = { ··· 151 149 .ht_params = &iwl7000_ht_params, 152 150 .nvm_ver = IWL7260_NVM_VERSION, 153 151 .nvm_calib_ver = IWL7260_TX_POWER_VERSION, 152 + .host_interrupt_operation_mode = true, 154 153 }; 155 154 156 155 const struct iwl_cfg iwl7260_n_cfg = { ··· 161 158 .ht_params = &iwl7000_ht_params, 162 159 .nvm_ver = IWL7260_NVM_VERSION, 163 160 .nvm_calib_ver = IWL7260_TX_POWER_VERSION, 161 + .host_interrupt_operation_mode = true, 164 162 }; 165 163 166 164 const struct iwl_cfg iwl3160_2ac_cfg = { ··· 171 167 .ht_params = &iwl7000_ht_params, 172 168 .nvm_ver = IWL3160_NVM_VERSION, 173 169 .nvm_calib_ver = IWL3160_TX_POWER_VERSION, 170 + .host_interrupt_operation_mode = true, 174 171 }; 175 172 176 173 const struct iwl_cfg iwl3160_2n_cfg = { ··· 181 176 .ht_params = &iwl7000_ht_params, 182 177 .nvm_ver = IWL3160_NVM_VERSION, 183 178 .nvm_calib_ver = IWL3160_TX_POWER_VERSION, 179 + .host_interrupt_operation_mode = true, 184 180 }; 185 181 186 182 const struct iwl_cfg iwl3160_n_cfg = { ··· 191 185 .ht_params = &iwl7000_ht_params, 192 186 .nvm_ver = IWL3160_NVM_VERSION, 193 187 .nvm_calib_ver = IWL3160_TX_POWER_VERSION, 188 + .host_interrupt_operation_mode = true, 194 189 }; 195 190 196 191 const struct iwl_cfg iwl7265_2ac_cfg = { 197 192 .name = "Intel(R) Dual Band Wireless AC 7265", 193 + .fw_name_pre = IWL7265_FW_PRE, 194 + IWL_DEVICE_7000, 195 + .ht_params = &iwl7000_ht_params, 196 + .nvm_ver = IWL7265_NVM_VERSION, 197 + .nvm_calib_ver = IWL7265_TX_POWER_VERSION, 198 + }; 199 + 200 + const struct iwl_cfg iwl7265_2n_cfg = { 201 + .name = "Intel(R) Dual Band Wireless N 7265", 202 + .fw_name_pre = IWL7265_FW_PRE, 203 + IWL_DEVICE_7000, 204 + .ht_params = &iwl7000_ht_params, 205 + .nvm_ver = IWL7265_NVM_VERSION, 206 + .nvm_calib_ver = IWL7265_TX_POWER_VERSION, 207 + }; 208 + 209 + const struct iwl_cfg iwl7265_n_cfg = { 210 + .name = "Intel(R) Wireless N 7265", 198 211 .fw_name_pre = IWL7265_FW_PRE, 199 212 IWL_DEVICE_7000, 200 213 .ht_params = &iwl7000_ht_params,
+5
drivers/net/wireless/iwlwifi/iwl-config.h
··· 207 207 * @rx_with_siso_diversity: 1x1 device with rx antenna diversity 208 208 * @internal_wimax_coex: internal wifi/wimax combo device 209 209 * @high_temp: Is this NIC is designated to be in high temperature. 210 + * @host_interrupt_operation_mode: device needs host interrupt operation 211 + * mode set 210 212 * 211 213 * We enable the driver to be backward compatible wrt. hardware features. 212 214 * API differences in uCode shouldn't be handled here but through TLVs ··· 237 235 enum iwl_led_mode led_mode; 238 236 const bool rx_with_siso_diversity; 239 237 const bool internal_wimax_coex; 238 + const bool host_interrupt_operation_mode; 240 239 bool high_temp; 241 240 }; 242 241 ··· 297 294 extern const struct iwl_cfg iwl3160_2n_cfg; 298 295 extern const struct iwl_cfg iwl3160_n_cfg; 299 296 extern const struct iwl_cfg iwl7265_2ac_cfg; 297 + extern const struct iwl_cfg iwl7265_2n_cfg; 298 + extern const struct iwl_cfg iwl7265_n_cfg; 300 299 #endif /* CONFIG_IWLMVM */ 301 300 302 301 #endif /* __IWL_CONFIG_H__ */
+1 -4
drivers/net/wireless/iwlwifi/iwl-csr.h
··· 495 495 * the CSR_INT_COALESCING is an 8 bit register in 32-usec unit 496 496 * 497 497 * default interrupt coalescing timer is 64 x 32 = 2048 usecs 498 - * default interrupt coalescing calibration timer is 16 x 32 = 512 usecs 499 498 */ 500 499 #define IWL_HOST_INT_TIMEOUT_MAX (0xFF) 501 500 #define IWL_HOST_INT_TIMEOUT_DEF (0x40) 502 501 #define IWL_HOST_INT_TIMEOUT_MIN (0x0) 503 - #define IWL_HOST_INT_CALIB_TIMEOUT_MAX (0xFF) 504 - #define IWL_HOST_INT_CALIB_TIMEOUT_DEF (0x10) 505 - #define IWL_HOST_INT_CALIB_TIMEOUT_MIN (0x0) 502 + #define IWL_HOST_INT_OPER_MODE BIT(31) 506 503 507 504 /***************************************************************************** 508 505 * 7000/3000 series SHR DTS addresses *
+5 -1
drivers/net/wireless/iwlwifi/mvm/bt-coex.c
··· 391 391 BT_VALID_LUT | 392 392 BT_VALID_WIFI_RX_SW_PRIO_BOOST | 393 393 BT_VALID_WIFI_TX_SW_PRIO_BOOST | 394 - BT_VALID_MULTI_PRIO_LUT | 395 394 BT_VALID_CORUN_LUT_20 | 396 395 BT_VALID_CORUN_LUT_40 | 397 396 BT_VALID_ANT_ISOLATION | ··· 841 842 842 843 sta = rcu_dereference_protected(mvm->fw_id_to_mac_id[mvmvif->ap_sta_id], 843 844 lockdep_is_held(&mvm->mutex)); 845 + 846 + /* This can happen if the station has been removed right now */ 847 + if (IS_ERR_OR_NULL(sta)) 848 + return; 849 + 844 850 mvmsta = (void *)sta->drv_priv; 845 851 846 852 data->num_bss_ifaces++;
+3 -2
drivers/net/wireless/iwlwifi/mvm/d3.c
··· 895 895 /* new API returns next, not last-used seqno */ 896 896 if (mvm->fw->ucode_capa.flags & 897 897 IWL_UCODE_TLV_FLAGS_D3_CONTINUITY_API) 898 - err -= 0x10; 898 + err = (u16) (err - 0x10); 899 899 } 900 900 901 901 iwl_free_resp(&cmd); ··· 1549 1549 if (gtkdata.unhandled_cipher) 1550 1550 return false; 1551 1551 if (!gtkdata.num_keys) 1552 - return true; 1552 + goto out; 1553 1553 if (!gtkdata.last_gtk) 1554 1554 return false; 1555 1555 ··· 1600 1600 (void *)&replay_ctr, GFP_KERNEL); 1601 1601 } 1602 1602 1603 + out: 1603 1604 mvmvif->seqno_valid = true; 1604 1605 /* +0x10 because the set API expects next-to-use, not last-used */ 1605 1606 mvmvif->seqno = le16_to_cpu(status->non_qos_seq_ctr) + 0x10;
+4
drivers/net/wireless/iwlwifi/mvm/debugfs.c
··· 119 119 120 120 if (sscanf(buf, "%d %d", &sta_id, &drain) != 2) 121 121 return -EINVAL; 122 + if (sta_id < 0 || sta_id >= IWL_MVM_STATION_COUNT) 123 + return -EINVAL; 124 + if (drain < 0 || drain > 1) 125 + return -EINVAL; 122 126 123 127 mutex_lock(&mvm->mutex); 124 128
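The debugfs hunk above range-checks the two integers parsed from the user-supplied string before they are used to index the station table. A minimal sketch of the parse-then-validate step (the limit is illustrative):

#include <stdio.h>

#define STATION_COUNT 16

static int parse_drain_cmd(const char *buf, int *sta_id, int *drain)
{
        if (sscanf(buf, "%d %d", sta_id, drain) != 2)
                return -1;
        if (*sta_id < 0 || *sta_id >= STATION_COUNT)
                return -1;              /* would index past the station table */
        if (*drain < 0 || *drain > 1)
                return -1;              /* only a boolean makes sense here */
        return 0;
}

int main(void)
{
        int sta_id, drain;

        printf("%d\n", parse_drain_cmd("3 1", &sta_id, &drain));        /* 0 */
        printf("%d\n", parse_drain_cmd("99 1", &sta_id, &drain));       /* -1 */
        return 0;
}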
+5 -2
drivers/net/wireless/iwlwifi/mvm/time-event.c
··· 176 176 * P2P Device discoveribility, while there are other higher priority 177 177 * events in the system). 178 178 */ 179 - if (WARN_ONCE(!le32_to_cpu(notif->status), 180 - "Failed to schedule time event\n")) { 179 + if (!le32_to_cpu(notif->status)) { 180 + bool start = le32_to_cpu(notif->action) & 181 + TE_V2_NOTIF_HOST_EVENT_START; 182 + IWL_WARN(mvm, "Time Event %s notification failure\n", 183 + start ? "start" : "end"); 181 184 if (iwl_mvm_te_check_disconnect(mvm, te_data->vif, NULL)) { 182 185 iwl_mvm_te_clear_data(mvm, te_data); 183 186 return;
+21
drivers/net/wireless/iwlwifi/pcie/drv.c
··· 353 353 354 354 /* 7265 Series */ 355 355 {IWL_PCI_DEVICE(0x095A, 0x5010, iwl7265_2ac_cfg)}, 356 + {IWL_PCI_DEVICE(0x095A, 0x5110, iwl7265_2ac_cfg)}, 357 + {IWL_PCI_DEVICE(0x095B, 0x5310, iwl7265_2ac_cfg)}, 358 + {IWL_PCI_DEVICE(0x095B, 0x5302, iwl7265_2ac_cfg)}, 359 + {IWL_PCI_DEVICE(0x095B, 0x5210, iwl7265_2ac_cfg)}, 360 + {IWL_PCI_DEVICE(0x095B, 0x5012, iwl7265_2ac_cfg)}, 361 + {IWL_PCI_DEVICE(0x095B, 0x500A, iwl7265_2ac_cfg)}, 362 + {IWL_PCI_DEVICE(0x095A, 0x5410, iwl7265_2ac_cfg)}, 363 + {IWL_PCI_DEVICE(0x095A, 0x1010, iwl7265_2ac_cfg)}, 364 + {IWL_PCI_DEVICE(0x095A, 0x5000, iwl7265_2n_cfg)}, 365 + {IWL_PCI_DEVICE(0x095B, 0x5200, iwl7265_2n_cfg)}, 366 + {IWL_PCI_DEVICE(0x095A, 0x5002, iwl7265_n_cfg)}, 367 + {IWL_PCI_DEVICE(0x095B, 0x5202, iwl7265_n_cfg)}, 368 + {IWL_PCI_DEVICE(0x095A, 0x9010, iwl7265_2ac_cfg)}, 369 + {IWL_PCI_DEVICE(0x095A, 0x9210, iwl7265_2ac_cfg)}, 370 + {IWL_PCI_DEVICE(0x095A, 0x9410, iwl7265_2ac_cfg)}, 371 + {IWL_PCI_DEVICE(0x095A, 0x5020, iwl7265_2n_cfg)}, 372 + {IWL_PCI_DEVICE(0x095A, 0x502A, iwl7265_2n_cfg)}, 373 + {IWL_PCI_DEVICE(0x095A, 0x5420, iwl7265_2n_cfg)}, 374 + {IWL_PCI_DEVICE(0x095A, 0x5090, iwl7265_2ac_cfg)}, 375 + {IWL_PCI_DEVICE(0x095B, 0x5290, iwl7265_2ac_cfg)}, 376 + {IWL_PCI_DEVICE(0x095A, 0x5490, iwl7265_2ac_cfg)}, 356 377 #endif /* CONFIG_IWLMVM */ 357 378 358 379 {0}
+8
drivers/net/wireless/iwlwifi/pcie/internal.h
··· 477 477 CSR_GP_CNTRL_REG_FLAG_HW_RF_KILL_SW); 478 478 } 479 479 480 + static inline void iwl_nic_error(struct iwl_trans *trans) 481 + { 482 + struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); 483 + 484 + set_bit(STATUS_FW_ERROR, &trans_pcie->status); 485 + iwl_op_mode_nic_error(trans->op_mode); 486 + } 487 + 480 488 #endif /* __iwl_trans_int_pcie_h__ */
+6 -1
drivers/net/wireless/iwlwifi/pcie/rx.c
··· 489 489 490 490 /* Set interrupt coalescing timer to default (2048 usecs) */ 491 491 iwl_write8(trans, CSR_INT_COALESCING, IWL_HOST_INT_TIMEOUT_DEF); 492 + 493 + /* W/A for interrupt coalescing bug in 7260 and 3160 */ 494 + if (trans->cfg->host_interrupt_operation_mode) 495 + iwl_set_bit(trans, CSR_INT_COALESCING, IWL_HOST_INT_OPER_MODE); 492 496 } 493 497 494 498 static void iwl_pcie_rx_init_rxb_lists(struct iwl_rxq *rxq) ··· 800 796 iwl_pcie_dump_csr(trans); 801 797 iwl_dump_fh(trans, NULL); 802 798 799 + /* set the ERROR bit before we wake up the caller */ 803 800 set_bit(STATUS_FW_ERROR, &trans_pcie->status); 804 801 clear_bit(STATUS_HCMD_ACTIVE, &trans_pcie->status); 805 802 wake_up(&trans_pcie->wait_command_queue); 806 803 807 804 local_bh_disable(); 808 - iwl_op_mode_nic_error(trans->op_mode); 805 + iwl_nic_error(trans); 809 806 local_bh_enable(); 810 807 } 811 808
-3
drivers/net/wireless/iwlwifi/pcie/trans.c
··· 279 279 spin_lock_irqsave(&trans_pcie->irq_lock, flags); 280 280 iwl_pcie_apm_init(trans); 281 281 282 - /* Set interrupt coalescing calibration timer to default (512 usecs) */ 283 - iwl_write8(trans, CSR_INT_COALESCING, IWL_HOST_INT_CALIB_TIMEOUT_DEF); 284 - 285 282 spin_unlock_irqrestore(&trans_pcie->irq_lock, flags); 286 283 287 284 iwl_pcie_set_pwr(trans, false);
+3 -3
drivers/net/wireless/iwlwifi/pcie/tx.c
··· 207 207 IWL_ERR(trans, "scratch %d = 0x%08x\n", i, 208 208 le32_to_cpu(txq->scratchbufs[i].scratch)); 209 209 210 - iwl_op_mode_nic_error(trans->op_mode); 210 + iwl_nic_error(trans); 211 211 } 212 212 213 213 /* ··· 1023 1023 if (nfreed++ > 0) { 1024 1024 IWL_ERR(trans, "HCMD skipped: index (%d) %d %d\n", 1025 1025 idx, q->write_ptr, q->read_ptr); 1026 - iwl_op_mode_nic_error(trans->op_mode); 1026 + iwl_nic_error(trans); 1027 1027 } 1028 1028 } 1029 1029 ··· 1562 1562 get_cmd_string(trans_pcie, cmd->id)); 1563 1563 ret = -ETIMEDOUT; 1564 1564 1565 - iwl_op_mode_nic_error(trans->op_mode); 1565 + iwl_nic_error(trans); 1566 1566 1567 1567 goto cancel; 1568 1568 }
+12 -4
drivers/net/wireless/mac80211_hwsim.c
··· 383 383 __le16 rt_chbitmask; 384 384 } __packed; 385 385 386 + struct hwsim_radiotap_ack_hdr { 387 + struct ieee80211_radiotap_header hdr; 388 + u8 rt_flags; 389 + u8 pad; 390 + __le16 rt_channel; 391 + __le16 rt_chbitmask; 392 + } __packed; 393 + 386 394 /* MAC80211_HWSIM netlinf family */ 387 395 static struct genl_family hwsim_genl_family = { 388 396 .id = GENL_ID_GENERATE, ··· 508 500 const u8 *addr) 509 501 { 510 502 struct sk_buff *skb; 511 - struct hwsim_radiotap_hdr *hdr; 503 + struct hwsim_radiotap_ack_hdr *hdr; 512 504 u16 flags; 513 505 struct ieee80211_hdr *hdr11; 514 506 ··· 519 511 if (skb == NULL) 520 512 return; 521 513 522 - hdr = (struct hwsim_radiotap_hdr *) skb_put(skb, sizeof(*hdr)); 514 + hdr = (struct hwsim_radiotap_ack_hdr *) skb_put(skb, sizeof(*hdr)); 523 515 hdr->hdr.it_version = PKTHDR_RADIOTAP_VERSION; 524 516 hdr->hdr.it_pad = 0; 525 517 hdr->hdr.it_len = cpu_to_le16(sizeof(*hdr)); 526 518 hdr->hdr.it_present = cpu_to_le32((1 << IEEE80211_RADIOTAP_FLAGS) | 527 519 (1 << IEEE80211_RADIOTAP_CHANNEL)); 528 520 hdr->rt_flags = 0; 529 - hdr->rt_rate = 0; 521 + hdr->pad = 0; 530 522 hdr->rt_channel = cpu_to_le16(chan->center_freq); 531 523 flags = IEEE80211_CHAN_2GHZ; 532 524 hdr->rt_chbitmask = cpu_to_le16(flags); ··· 1238 1230 HRTIMER_MODE_REL); 1239 1231 } else if (!info->enable_beacon) { 1240 1232 unsigned int count = 0; 1241 - ieee80211_iterate_active_interfaces( 1233 + ieee80211_iterate_active_interfaces_atomic( 1242 1234 data->hw, IEEE80211_IFACE_ITER_NORMAL, 1243 1235 mac80211_hwsim_bcn_en_iter, &count); 1244 1236 wiphy_debug(hw->wiphy, " beaconing vifs remaining: %u",
+2 -2
drivers/net/wireless/mwifiex/sta_ioctl.c
··· 319 319 if (bss_desc && bss_desc->ssid.ssid_len && 320 320 (!mwifiex_ssid_cmp(&priv->curr_bss_params.bss_descriptor. 321 321 ssid, &bss_desc->ssid))) { 322 - kfree(bss_desc); 323 - return 0; 322 + ret = 0; 323 + goto done; 324 324 } 325 325 326 326 /* Exit Adhoc mode first */
+12 -8
drivers/net/xen-netback/interface.c
··· 368 368 unsigned long rx_ring_ref, unsigned int tx_evtchn, 369 369 unsigned int rx_evtchn) 370 370 { 371 + struct task_struct *task; 371 372 int err = -ENOMEM; 372 373 373 - /* Already connected through? */ 374 - if (vif->tx_irq) 375 - return 0; 374 + BUG_ON(vif->tx_irq); 375 + BUG_ON(vif->task); 376 376 377 377 err = xenvif_map_frontend_rings(vif, tx_ring_ref, rx_ring_ref); 378 378 if (err < 0) ··· 411 411 } 412 412 413 413 init_waitqueue_head(&vif->wq); 414 - vif->task = kthread_create(xenvif_kthread, 415 - (void *)vif, "%s", vif->dev->name); 416 - if (IS_ERR(vif->task)) { 414 + task = kthread_create(xenvif_kthread, 415 + (void *)vif, "%s", vif->dev->name); 416 + if (IS_ERR(task)) { 417 417 pr_warn("Could not allocate kthread for %s\n", vif->dev->name); 418 - err = PTR_ERR(vif->task); 418 + err = PTR_ERR(task); 419 419 goto err_rx_unbind; 420 420 } 421 + 422 + vif->task = task; 421 423 422 424 rtnl_lock(); 423 425 if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN) ··· 463 461 if (netif_carrier_ok(vif->dev)) 464 462 xenvif_carrier_off(vif); 465 463 466 - if (vif->task) 464 + if (vif->task) { 467 465 kthread_stop(vif->task); 466 + vif->task = NULL; 467 + } 468 468 469 469 if (vif->tx_irq) { 470 470 if (vif->tx_irq == vif->rx_irq)
+150 -116
drivers/net/xen-netback/netback.c
··· 452 452 } 453 453 454 454 /* Set up a GSO prefix descriptor, if necessary */ 455 - if ((1 << skb_shinfo(skb)->gso_type) & vif->gso_prefix_mask) { 455 + if ((1 << gso_type) & vif->gso_prefix_mask) { 456 456 req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++); 457 457 meta = npo->meta + npo->meta_prod++; 458 458 meta->gso_type = gso_type; ··· 1149 1149 return 0; 1150 1150 } 1151 1151 1152 - static inline void maybe_pull_tail(struct sk_buff *skb, unsigned int len) 1152 + static inline int maybe_pull_tail(struct sk_buff *skb, unsigned int len, 1153 + unsigned int max) 1153 1154 { 1154 - if (skb_is_nonlinear(skb) && skb_headlen(skb) < len) { 1155 - /* If we need to pullup then pullup to the max, so we 1156 - * won't need to do it again. 1157 - */ 1158 - int target = min_t(int, skb->len, MAX_TCP_HEADER); 1159 - __pskb_pull_tail(skb, target - skb_headlen(skb)); 1160 - } 1155 + if (skb_headlen(skb) >= len) 1156 + return 0; 1157 + 1158 + /* If we need to pullup then pullup to the max, so we 1159 + * won't need to do it again. 1160 + */ 1161 + if (max > skb->len) 1162 + max = skb->len; 1163 + 1164 + if (__pskb_pull_tail(skb, max - skb_headlen(skb)) == NULL) 1165 + return -ENOMEM; 1166 + 1167 + if (skb_headlen(skb) < len) 1168 + return -EPROTO; 1169 + 1170 + return 0; 1161 1171 } 1172 + 1173 + /* This value should be large enough to cover a tagged ethernet header plus 1174 + * maximally sized IP and TCP or UDP headers. 1175 + */ 1176 + #define MAX_IP_HDR_LEN 128 1162 1177 1163 1178 static int checksum_setup_ip(struct xenvif *vif, struct sk_buff *skb, 1164 1179 int recalculate_partial_csum) 1165 1180 { 1166 - struct iphdr *iph = (void *)skb->data; 1167 - unsigned int header_size; 1168 1181 unsigned int off; 1169 - int err = -EPROTO; 1182 + bool fragment; 1183 + int err; 1170 1184 1171 - off = sizeof(struct iphdr); 1185 + fragment = false; 1172 1186 1173 - header_size = skb->network_header + off + MAX_IPOPTLEN; 1174 - maybe_pull_tail(skb, header_size); 1187 + err = maybe_pull_tail(skb, 1188 + sizeof(struct iphdr), 1189 + MAX_IP_HDR_LEN); 1190 + if (err < 0) 1191 + goto out; 1175 1192 1176 - off = iph->ihl * 4; 1193 + if (ip_hdr(skb)->frag_off & htons(IP_OFFSET | IP_MF)) 1194 + fragment = true; 1177 1195 1178 - switch (iph->protocol) { 1196 + off = ip_hdrlen(skb); 1197 + 1198 + err = -EPROTO; 1199 + 1200 + switch (ip_hdr(skb)->protocol) { 1179 1201 case IPPROTO_TCP: 1202 + err = maybe_pull_tail(skb, 1203 + off + sizeof(struct tcphdr), 1204 + MAX_IP_HDR_LEN); 1205 + if (err < 0) 1206 + goto out; 1207 + 1180 1208 if (!skb_partial_csum_set(skb, off, 1181 1209 offsetof(struct tcphdr, check))) 1182 1210 goto out; 1183 1211 1184 - if (recalculate_partial_csum) { 1185 - struct tcphdr *tcph = tcp_hdr(skb); 1186 - 1187 - header_size = skb->network_header + 1188 - off + 1189 - sizeof(struct tcphdr); 1190 - maybe_pull_tail(skb, header_size); 1191 - 1192 - tcph->check = ~csum_tcpudp_magic(iph->saddr, iph->daddr, 1193 - skb->len - off, 1194 - IPPROTO_TCP, 0); 1195 - } 1212 + if (recalculate_partial_csum) 1213 + tcp_hdr(skb)->check = 1214 + ~csum_tcpudp_magic(ip_hdr(skb)->saddr, 1215 + ip_hdr(skb)->daddr, 1216 + skb->len - off, 1217 + IPPROTO_TCP, 0); 1196 1218 break; 1197 1219 case IPPROTO_UDP: 1220 + err = maybe_pull_tail(skb, 1221 + off + sizeof(struct udphdr), 1222 + MAX_IP_HDR_LEN); 1223 + if (err < 0) 1224 + goto out; 1225 + 1198 1226 if (!skb_partial_csum_set(skb, off, 1199 1227 offsetof(struct udphdr, check))) 1200 1228 goto out; 1201 1229 1202 - if (recalculate_partial_csum) { 1203 - struct udphdr *udph 
= udp_hdr(skb); 1204 - 1205 - header_size = skb->network_header + 1206 - off + 1207 - sizeof(struct udphdr); 1208 - maybe_pull_tail(skb, header_size); 1209 - 1210 - udph->check = ~csum_tcpudp_magic(iph->saddr, iph->daddr, 1211 - skb->len - off, 1212 - IPPROTO_UDP, 0); 1213 - } 1230 + if (recalculate_partial_csum) 1231 + udp_hdr(skb)->check = 1232 + ~csum_tcpudp_magic(ip_hdr(skb)->saddr, 1233 + ip_hdr(skb)->daddr, 1234 + skb->len - off, 1235 + IPPROTO_UDP, 0); 1214 1236 break; 1215 1237 default: 1216 - if (net_ratelimit()) 1217 - netdev_err(vif->dev, 1218 - "Attempting to checksum a non-TCP/UDP packet, " 1219 - "dropping a protocol %d packet\n", 1220 - iph->protocol); 1221 1238 goto out; 1222 1239 } 1223 1240 ··· 1244 1227 return err; 1245 1228 } 1246 1229 1230 + /* This value should be large enough to cover a tagged ethernet header plus 1231 + * an IPv6 header, all options, and a maximal TCP or UDP header. 1232 + */ 1233 + #define MAX_IPV6_HDR_LEN 256 1234 + 1235 + #define OPT_HDR(type, skb, off) \ 1236 + (type *)(skb_network_header(skb) + (off)) 1237 + 1247 1238 static int checksum_setup_ipv6(struct xenvif *vif, struct sk_buff *skb, 1248 1239 int recalculate_partial_csum) 1249 1240 { 1250 - int err = -EPROTO; 1251 - struct ipv6hdr *ipv6h = (void *)skb->data; 1241 + int err; 1252 1242 u8 nexthdr; 1253 - unsigned int header_size; 1254 1243 unsigned int off; 1244 + unsigned int len; 1255 1245 bool fragment; 1256 1246 bool done; 1257 1247 1248 + fragment = false; 1258 1249 done = false; 1259 1250 1260 1251 off = sizeof(struct ipv6hdr); 1261 1252 1262 - header_size = skb->network_header + off; 1263 - maybe_pull_tail(skb, header_size); 1253 + err = maybe_pull_tail(skb, off, MAX_IPV6_HDR_LEN); 1254 + if (err < 0) 1255 + goto out; 1264 1256 1265 - nexthdr = ipv6h->nexthdr; 1257 + nexthdr = ipv6_hdr(skb)->nexthdr; 1266 1258 1267 - while ((off <= sizeof(struct ipv6hdr) + ntohs(ipv6h->payload_len)) && 1268 - !done) { 1259 + len = sizeof(struct ipv6hdr) + ntohs(ipv6_hdr(skb)->payload_len); 1260 + while (off <= len && !done) { 1269 1261 switch (nexthdr) { 1270 1262 case IPPROTO_DSTOPTS: 1271 1263 case IPPROTO_HOPOPTS: 1272 1264 case IPPROTO_ROUTING: { 1273 - struct ipv6_opt_hdr *hp = (void *)(skb->data + off); 1265 + struct ipv6_opt_hdr *hp; 1274 1266 1275 - header_size = skb->network_header + 1276 - off + 1277 - sizeof(struct ipv6_opt_hdr); 1278 - maybe_pull_tail(skb, header_size); 1267 + err = maybe_pull_tail(skb, 1268 + off + 1269 + sizeof(struct ipv6_opt_hdr), 1270 + MAX_IPV6_HDR_LEN); 1271 + if (err < 0) 1272 + goto out; 1279 1273 1274 + hp = OPT_HDR(struct ipv6_opt_hdr, skb, off); 1280 1275 nexthdr = hp->nexthdr; 1281 1276 off += ipv6_optlen(hp); 1282 1277 break; 1283 1278 } 1284 1279 case IPPROTO_AH: { 1285 - struct ip_auth_hdr *hp = (void *)(skb->data + off); 1280 + struct ip_auth_hdr *hp; 1286 1281 1287 - header_size = skb->network_header + 1288 - off + 1289 - sizeof(struct ip_auth_hdr); 1290 - maybe_pull_tail(skb, header_size); 1282 + err = maybe_pull_tail(skb, 1283 + off + 1284 + sizeof(struct ip_auth_hdr), 1285 + MAX_IPV6_HDR_LEN); 1286 + if (err < 0) 1287 + goto out; 1291 1288 1289 + hp = OPT_HDR(struct ip_auth_hdr, skb, off); 1292 1290 nexthdr = hp->nexthdr; 1293 - off += (hp->hdrlen+2)<<2; 1291 + off += ipv6_authlen(hp); 1294 1292 break; 1295 1293 } 1296 - case IPPROTO_FRAGMENT: 1297 - fragment = true; 1298 - /* fall through */ 1294 + case IPPROTO_FRAGMENT: { 1295 + struct frag_hdr *hp; 1296 + 1297 + err = maybe_pull_tail(skb, 1298 + off + 1299 + sizeof(struct frag_hdr), 1300 + 
MAX_IPV6_HDR_LEN); 1301 + if (err < 0) 1302 + goto out; 1303 + 1304 + hp = OPT_HDR(struct frag_hdr, skb, off); 1305 + 1306 + if (hp->frag_off & htons(IP6_OFFSET | IP6_MF)) 1307 + fragment = true; 1308 + 1309 + nexthdr = hp->nexthdr; 1310 + off += sizeof(struct frag_hdr); 1311 + break; 1312 + } 1299 1313 default: 1300 1314 done = true; 1301 1315 break; 1302 1316 } 1303 1317 } 1304 1318 1305 - if (!done) { 1306 - if (net_ratelimit()) 1307 - netdev_err(vif->dev, "Failed to parse packet header\n"); 1308 - goto out; 1309 - } 1319 + err = -EPROTO; 1310 1320 1311 - if (fragment) { 1312 - if (net_ratelimit()) 1313 - netdev_err(vif->dev, "Packet is a fragment!\n"); 1321 + if (!done || fragment) 1314 1322 goto out; 1315 - } 1316 1323 1317 1324 switch (nexthdr) { 1318 1325 case IPPROTO_TCP: 1326 + err = maybe_pull_tail(skb, 1327 + off + sizeof(struct tcphdr), 1328 + MAX_IPV6_HDR_LEN); 1329 + if (err < 0) 1330 + goto out; 1331 + 1319 1332 if (!skb_partial_csum_set(skb, off, 1320 1333 offsetof(struct tcphdr, check))) 1321 1334 goto out; 1322 1335 1323 - if (recalculate_partial_csum) { 1324 - struct tcphdr *tcph = tcp_hdr(skb); 1325 - 1326 - header_size = skb->network_header + 1327 - off + 1328 - sizeof(struct tcphdr); 1329 - maybe_pull_tail(skb, header_size); 1330 - 1331 - tcph->check = ~csum_ipv6_magic(&ipv6h->saddr, 1332 - &ipv6h->daddr, 1333 - skb->len - off, 1334 - IPPROTO_TCP, 0); 1335 - } 1336 + if (recalculate_partial_csum) 1337 + tcp_hdr(skb)->check = 1338 + ~csum_ipv6_magic(&ipv6_hdr(skb)->saddr, 1339 + &ipv6_hdr(skb)->daddr, 1340 + skb->len - off, 1341 + IPPROTO_TCP, 0); 1336 1342 break; 1337 1343 case IPPROTO_UDP: 1344 + err = maybe_pull_tail(skb, 1345 + off + sizeof(struct udphdr), 1346 + MAX_IPV6_HDR_LEN); 1347 + if (err < 0) 1348 + goto out; 1349 + 1338 1350 if (!skb_partial_csum_set(skb, off, 1339 1351 offsetof(struct udphdr, check))) 1340 1352 goto out; 1341 1353 1342 - if (recalculate_partial_csum) { 1343 - struct udphdr *udph = udp_hdr(skb); 1344 - 1345 - header_size = skb->network_header + 1346 - off + 1347 - sizeof(struct udphdr); 1348 - maybe_pull_tail(skb, header_size); 1349 - 1350 - udph->check = ~csum_ipv6_magic(&ipv6h->saddr, 1351 - &ipv6h->daddr, 1352 - skb->len - off, 1353 - IPPROTO_UDP, 0); 1354 - } 1354 + if (recalculate_partial_csum) 1355 + udp_hdr(skb)->check = 1356 + ~csum_ipv6_magic(&ipv6_hdr(skb)->saddr, 1357 + &ipv6_hdr(skb)->daddr, 1358 + skb->len - off, 1359 + IPPROTO_UDP, 0); 1355 1360 break; 1356 1361 default: 1357 - if (net_ratelimit()) 1358 - netdev_err(vif->dev, 1359 - "Attempting to checksum a non-TCP/UDP packet, " 1360 - "dropping a protocol %d packet\n", 1361 - nexthdr); 1362 1362 goto out; 1363 1363 } 1364 1364 ··· 1445 1411 return false; 1446 1412 } 1447 1413 1448 - static unsigned xenvif_tx_build_gops(struct xenvif *vif) 1414 + static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget) 1449 1415 { 1450 1416 struct gnttab_copy *gop = vif->tx_copy_ops, *request_gop; 1451 1417 struct sk_buff *skb; 1452 1418 int ret; 1453 1419 1454 1420 while ((nr_pending_reqs(vif) + XEN_NETBK_LEGACY_SLOTS_MAX 1455 - < MAX_PENDING_REQS)) { 1421 + < MAX_PENDING_REQS) && 1422 + (skb_queue_len(&vif->tx_queue) < budget)) { 1456 1423 struct xen_netif_tx_request txreq; 1457 1424 struct xen_netif_tx_request txfrags[XEN_NETBK_LEGACY_SLOTS_MAX]; 1458 1425 struct page *page; ··· 1475 1440 continue; 1476 1441 } 1477 1442 1478 - RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, work_to_do); 1443 + work_to_do = RING_HAS_UNCONSUMED_REQUESTS(&vif->tx); 1479 1444 if (!work_to_do) 1480 1445 
break; 1481 1446 ··· 1615 1580 } 1616 1581 1617 1582 1618 - static int xenvif_tx_submit(struct xenvif *vif, int budget) 1583 + static int xenvif_tx_submit(struct xenvif *vif) 1619 1584 { 1620 1585 struct gnttab_copy *gop = vif->tx_copy_ops; 1621 1586 struct sk_buff *skb; 1622 1587 int work_done = 0; 1623 1588 1624 - while (work_done < budget && 1625 - (skb = __skb_dequeue(&vif->tx_queue)) != NULL) { 1589 + while ((skb = __skb_dequeue(&vif->tx_queue)) != NULL) { 1626 1590 struct xen_netif_tx_request *txp; 1627 1591 u16 pending_idx; 1628 1592 unsigned data_len; ··· 1696 1662 if (unlikely(!tx_work_todo(vif))) 1697 1663 return 0; 1698 1664 1699 - nr_gops = xenvif_tx_build_gops(vif); 1665 + nr_gops = xenvif_tx_build_gops(vif, budget); 1700 1666 1701 1667 if (nr_gops == 0) 1702 1668 return 0; 1703 1669 1704 1670 gnttab_batch_copy(vif->tx_copy_ops, nr_gops); 1705 1671 1706 - work_done = xenvif_tx_submit(vif, nr_gops); 1672 + work_done = xenvif_tx_submit(vif); 1707 1673 1708 1674 return work_done; 1709 1675 }
+8
drivers/pci/pci.c
··· 4165 4165 return 0; 4166 4166 } 4167 4167 4168 + bool pci_device_is_present(struct pci_dev *pdev) 4169 + { 4170 + u32 v; 4171 + 4172 + return pci_bus_read_dev_vendor_id(pdev->bus, pdev->devfn, &v, 0); 4173 + } 4174 + EXPORT_SYMBOL_GPL(pci_device_is_present); 4175 + 4168 4176 #define RESOURCE_ALIGNMENT_PARAM_SIZE COMMAND_LINE_SIZE 4169 4177 static char resource_alignment_param[RESOURCE_ALIGNMENT_PARAM_SIZE] = {0}; 4170 4178 static DEFINE_SPINLOCK(resource_alignment_lock);
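A minimal usage sketch of the new helper (function name and calling context are hypothetical): pci_device_is_present() reads the vendor ID and returns false once the device stops answering config reads, for example after a surprise removal, so callers can bail out before touching stale hardware.

	static int example_resume(struct pci_dev *pdev)
	{
		/* vendor ID reads back as all-ones once the device is gone */
		if (!pci_device_is_present(pdev))
			return -ENODEV;
		/* ... safe to restore device state here ... */
		return 0;
	}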
+1
include/linux/ipv6.h
··· 4 4 #include <uapi/linux/ipv6.h> 5 5 6 6 #define ipv6_optlen(p) (((p)->hdrlen+1) << 3) 7 + #define ipv6_authlen(p) (((p)->hdrlen+2) << 2) 7 8 /* 8 9 * This structure contains configuration options per IPv6 link. 9 10 */
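A brief worked example of the two length macros (values chosen for illustration): an authentication header encodes its length in 32-bit words minus two, while the other extension headers use 8-octet units minus one, which is why AH needs its own helper.

	/* AH with hdrlen == 4:          ipv6_authlen() = (4 + 2) << 2 = 24 bytes */
	/* hop-by-hop with hdrlen == 1:  ipv6_optlen()  = (1 + 1) << 3 = 16 bytes */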
+2
include/linux/micrel_phy.h
··· 22 22 #define PHY_ID_KSZ8021 0x00221555 23 23 #define PHY_ID_KSZ8031 0x00221556 24 24 #define PHY_ID_KSZ8041 0x00221510 25 + /* undocumented */ 26 + #define PHY_ID_KSZ8041RNLI 0x00221537 25 27 #define PHY_ID_KSZ8051 0x00221550 26 28 /* same id: ks8001 Rev. A/B, and ks8721 Rev 3. */ 27 29 #define PHY_ID_KSZ8001 0x0022161A
+1 -1
include/linux/net.h
··· 181 181 int offset, size_t size, int flags); 182 182 ssize_t (*splice_read)(struct socket *sock, loff_t *ppos, 183 183 struct pipe_inode_info *pipe, size_t len, unsigned int flags); 184 - void (*set_peek_off)(struct sock *sk, int val); 184 + int (*set_peek_off)(struct sock *sk, int val); 185 185 }; 186 186 187 187 #define DECLARE_SOCKADDR(type, dst, src) \
+1 -1
include/linux/netdevice.h
··· 1255 1255 unsigned char perm_addr[MAX_ADDR_LEN]; /* permanent hw address */ 1256 1256 unsigned char addr_assign_type; /* hw address assignment type */ 1257 1257 unsigned char addr_len; /* hardware address length */ 1258 - unsigned char neigh_priv_len; 1258 + unsigned short neigh_priv_len; 1259 1259 unsigned short dev_id; /* Used to differentiate devices 1260 1260 * that share the same link 1261 1261 * layer address
+1
include/linux/pci.h
··· 960 960 int __must_check pci_assign_resource(struct pci_dev *dev, int i); 961 961 int __must_check pci_reassign_resource(struct pci_dev *dev, int i, resource_size_t add_size, resource_size_t align); 962 962 int pci_select_bars(struct pci_dev *dev, unsigned long flags); 963 + bool pci_device_is_present(struct pci_dev *pdev); 963 964 964 965 /* ROM control related routines */ 965 966 int pci_enable_rom(struct pci_dev *pdev);
+18 -21
include/linux/skbuff.h
··· 2263 2263 2264 2264 unsigned char *skb_pull_rcsum(struct sk_buff *skb, unsigned int len); 2265 2265 2266 + /** 2267 + * pskb_trim_rcsum - trim received skb and update checksum 2268 + * @skb: buffer to trim 2269 + * @len: new length 2270 + * 2271 + * This is exactly the same as pskb_trim except that it ensures the 2272 + * checksum of received packets are still valid after the operation. 2273 + */ 2274 + 2275 + static inline int pskb_trim_rcsum(struct sk_buff *skb, unsigned int len) 2276 + { 2277 + if (likely(len >= skb->len)) 2278 + return 0; 2279 + if (skb->ip_summed == CHECKSUM_COMPLETE) 2280 + skb->ip_summed = CHECKSUM_NONE; 2281 + return __pskb_trim(skb, len); 2282 + } 2283 + 2266 2284 #define skb_queue_walk(queue, skb) \ 2267 2285 for (skb = (queue)->next; \ 2268 2286 skb != (struct sk_buff *)(queue); \ ··· 2377 2359 __wsum csum, const struct skb_checksum_ops *ops); 2378 2360 __wsum skb_checksum(const struct sk_buff *skb, int offset, int len, 2379 2361 __wsum csum); 2380 - 2381 - /** 2382 - * pskb_trim_rcsum - trim received skb and update checksum 2383 - * @skb: buffer to trim 2384 - * @len: new length 2385 - * 2386 - * This is exactly the same as pskb_trim except that it ensures the 2387 - * checksum of received packets are still valid after the operation. 2388 - */ 2389 - 2390 - static inline int pskb_trim_rcsum(struct sk_buff *skb, unsigned int len) 2391 - { 2392 - if (likely(len >= skb->len)) 2393 - return 0; 2394 - if (skb->ip_summed == CHECKSUM_COMPLETE) { 2395 - __wsum adj = skb_checksum(skb, len, skb->len - len, 0); 2396 - 2397 - skb->csum = csum_sub(skb->csum, adj); 2398 - } 2399 - return __pskb_trim(skb, len); 2400 - } 2401 2362 2402 2363 static inline void *skb_header_pointer(const struct sk_buff *skb, int offset, 2403 2364 int len, void *buffer)
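A small usage sketch (helper name hypothetical) of the restored semantics shown above: trimming an skb whose ip_summed is CHECKSUM_COMPLETE simply invalidates the hardware checksum, so later verification falls back to software instead of relying on an adjusted csum value.

	static int rx_trim_padding(struct sk_buff *skb, unsigned int payload_len)
	{
		/* drops trailing padding; may downgrade CHECKSUM_COMPLETE to CHECKSUM_NONE */
		return pskb_trim_rcsum(skb, payload_len);
	}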
+2 -1
include/net/ipv6.h
··· 110 110 __be32 identification; 111 111 }; 112 112 113 - #define IP6_MF 0x0001 113 + #define IP6_MF 0x0001 114 + #define IP6_OFFSET 0xFFF8 114 115 115 116 #include <net/sock.h> 116 117
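A sketch of how the two masks are typically combined, mirroring the xen-netback check earlier in this diff: frag_off is big-endian, the top 13 bits hold the offset in 8-octet units and bit 0 is the more-fragments flag, so the expression below is zero only for an atomic fragment (offset 0, MF clear).

	static bool is_real_fragment(const struct frag_hdr *fh)
	{
		return fh->frag_off & htons(IP6_OFFSET | IP6_MF);
	}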
-6
include/net/sctp/structs.h
··· 1726 1726 /* How many duplicated TSNs have we seen? */ 1727 1727 int numduptsns; 1728 1728 1729 - /* Number of seconds of idle time before an association is closed. 1730 - * In the association context, this is really used as a boolean 1731 - * since the real timeout is stored in the timeouts array 1732 - */ 1733 - __u32 autoclose; 1734 - 1735 1729 /* These are to support 1736 1730 * "SCTP Extensions for Dynamic Reconfiguration of IP Addresses 1737 1731 * and Enforcement of Flow and Message Limits"
+2 -4
include/net/sock.h
··· 1035 1035 }; 1036 1036 1037 1037 struct cg_proto { 1038 - void (*enter_memory_pressure)(struct sock *sk); 1039 1038 struct res_counter memory_allocated; /* Current allocated memory. */ 1040 1039 struct percpu_counter sockets_allocated; /* Current number of sockets. */ 1041 1040 int memory_pressure; ··· 1154 1155 struct proto *prot = sk->sk_prot; 1155 1156 1156 1157 for (; cg_proto; cg_proto = parent_cg_proto(prot, cg_proto)) 1157 - if (cg_proto->memory_pressure) 1158 - cg_proto->memory_pressure = 0; 1158 + cg_proto->memory_pressure = 0; 1159 1159 } 1160 1160 1161 1161 } ··· 1169 1171 struct proto *prot = sk->sk_prot; 1170 1172 1171 1173 for (; cg_proto; cg_proto = parent_cg_proto(prot, cg_proto)) 1172 - cg_proto->enter_memory_pressure(sk); 1174 + cg_proto->memory_pressure = 1; 1173 1175 } 1174 1176 1175 1177 sk->sk_prot->enter_memory_pressure(sk);
+10
net/bridge/br_private.h
··· 426 426 int br_handle_frame_finish(struct sk_buff *skb); 427 427 rx_handler_result_t br_handle_frame(struct sk_buff **pskb); 428 428 429 + static inline bool br_rx_handler_check_rcu(const struct net_device *dev) 430 + { 431 + return rcu_dereference(dev->rx_handler) == br_handle_frame; 432 + } 433 + 434 + static inline struct net_bridge_port *br_port_get_check_rcu(const struct net_device *dev) 435 + { 436 + return br_rx_handler_check_rcu(dev) ? br_port_get_rcu(dev) : NULL; 437 + } 438 + 429 439 /* br_ioctl.c */ 430 440 int br_dev_ioctl(struct net_device *dev, struct ifreq *rq, int cmd); 431 441 int br_ioctl_deviceless_stub(struct net *net, unsigned int cmd,
+1 -1
net/bridge/br_stp_bpdu.c
··· 153 153 if (buf[0] != 0 || buf[1] != 0 || buf[2] != 0) 154 154 goto err; 155 155 156 - p = br_port_get_rcu(dev); 156 + p = br_port_get_check_rcu(dev); 157 157 if (!p) 158 158 goto err; 159 159
-1
net/core/drop_monitor.c
··· 64 64 .hdrsize = 0, 65 65 .name = "NET_DM", 66 66 .version = 2, 67 - .maxattr = NET_DM_CMD_MAX, 68 67 }; 69 68 70 69 static DEFINE_PER_CPU(struct per_cpu_dm_data, dm_cpu_data);
+1
net/core/skbuff.c
··· 3584 3584 skb->tstamp.tv64 = 0; 3585 3585 skb->pkt_type = PACKET_HOST; 3586 3586 skb->skb_iif = 0; 3587 + skb->local_df = 0; 3587 3588 skb_dst_drop(skb); 3588 3589 skb->mark = 0; 3589 3590 secpath_reset(skb);
+1 -1
net/core/sock.c
··· 882 882 883 883 case SO_PEEK_OFF: 884 884 if (sock->ops->set_peek_off) 885 - sock->ops->set_peek_off(sk, val); 885 + ret = sock->ops->set_peek_off(sk, val); 886 886 else 887 887 ret = -EOPNOTSUPP; 888 888 break;
-1
net/dccp/ipv6.c
··· 851 851 flowlabel = fl6_sock_lookup(sk, fl6.flowlabel); 852 852 if (flowlabel == NULL) 853 853 return -EINVAL; 854 - usin->sin6_addr = flowlabel->dst; 855 854 fl6_sock_release(flowlabel); 856 855 } 857 856 }
+4 -1
net/ipv4/fib_rules.c
··· 104 104 static bool fib4_rule_suppress(struct fib_rule *rule, struct fib_lookup_arg *arg) 105 105 { 106 106 struct fib_result *result = (struct fib_result *) arg->result; 107 - struct net_device *dev = result->fi->fib_dev; 107 + struct net_device *dev = NULL; 108 + 109 + if (result->fi) 110 + dev = result->fi->fib_dev; 108 111 109 112 /* do not accept result if the route does 110 113 * not meet the required prefix length
-7
net/ipv4/tcp_memcontrol.c
··· 6 6 #include <linux/memcontrol.h> 7 7 #include <linux/module.h> 8 8 9 - static void memcg_tcp_enter_memory_pressure(struct sock *sk) 10 - { 11 - if (sk->sk_cgrp->memory_pressure) 12 - sk->sk_cgrp->memory_pressure = 1; 13 - } 14 - EXPORT_SYMBOL(memcg_tcp_enter_memory_pressure); 15 - 16 9 int tcp_init_cgroup(struct mem_cgroup *memcg, struct cgroup_subsys *ss) 17 10 { 18 11 /*
+28 -19
net/ipv4/udp.c
··· 560 560 __be16 sport, __be16 dport, 561 561 struct udp_table *udptable) 562 562 { 563 - struct sock *sk; 564 563 const struct iphdr *iph = ip_hdr(skb); 565 564 566 - if (unlikely(sk = skb_steal_sock(skb))) 567 - return sk; 568 - else 569 - return __udp4_lib_lookup(dev_net(skb_dst(skb)->dev), iph->saddr, sport, 570 - iph->daddr, dport, inet_iif(skb), 571 - udptable); 565 + return __udp4_lib_lookup(dev_net(skb_dst(skb)->dev), iph->saddr, sport, 566 + iph->daddr, dport, inet_iif(skb), 567 + udptable); 572 568 } 573 569 574 570 struct sock *udp4_lib_lookup(struct net *net, __be32 saddr, __be16 sport, ··· 1599 1603 kfree_skb(skb1); 1600 1604 } 1601 1605 1602 - static void udp_sk_rx_dst_set(struct sock *sk, const struct sk_buff *skb) 1606 + /* For TCP sockets, sk_rx_dst is protected by socket lock 1607 + * For UDP, we use sk_dst_lock to guard against concurrent changes. 1608 + */ 1609 + static void udp_sk_rx_dst_set(struct sock *sk, struct dst_entry *dst) 1603 1610 { 1604 - struct dst_entry *dst = skb_dst(skb); 1611 + struct dst_entry *old; 1605 1612 1606 - dst_hold(dst); 1607 - sk->sk_rx_dst = dst; 1613 + spin_lock(&sk->sk_dst_lock); 1614 + old = sk->sk_rx_dst; 1615 + if (likely(old != dst)) { 1616 + dst_hold(dst); 1617 + sk->sk_rx_dst = dst; 1618 + dst_release(old); 1619 + } 1620 + spin_unlock(&sk->sk_dst_lock); 1608 1621 } 1609 1622 1610 1623 /* ··· 1744 1739 if (udp4_csum_init(skb, uh, proto)) 1745 1740 goto csum_error; 1746 1741 1747 - if (skb->sk) { 1742 + sk = skb_steal_sock(skb); 1743 + if (sk) { 1744 + struct dst_entry *dst = skb_dst(skb); 1748 1745 int ret; 1749 - sk = skb->sk; 1750 1746 1751 - if (unlikely(sk->sk_rx_dst == NULL)) 1752 - udp_sk_rx_dst_set(sk, skb); 1747 + if (unlikely(sk->sk_rx_dst != dst)) 1748 + udp_sk_rx_dst_set(sk, dst); 1753 1749 1754 1750 ret = udp_queue_rcv_skb(sk, skb); 1755 - 1751 + sock_put(sk); 1756 1752 /* a return value > 0 means to resubmit the input, but 1757 1753 * it wants the return to be -protocol, or 0 1758 1754 */ ··· 1919 1913 1920 1914 void udp_v4_early_demux(struct sk_buff *skb) 1921 1915 { 1922 - const struct iphdr *iph = ip_hdr(skb); 1923 - const struct udphdr *uh = udp_hdr(skb); 1916 + struct net *net = dev_net(skb->dev); 1917 + const struct iphdr *iph; 1918 + const struct udphdr *uh; 1924 1919 struct sock *sk; 1925 1920 struct dst_entry *dst; 1926 - struct net *net = dev_net(skb->dev); 1927 1921 int dif = skb->dev->ifindex; 1928 1922 1929 1923 /* validate the packet */ 1930 1924 if (!pskb_may_pull(skb, skb_transport_offset(skb) + sizeof(struct udphdr))) 1931 1925 return; 1926 + 1927 + iph = ip_hdr(skb); 1928 + uh = udp_hdr(skb); 1932 1929 1933 1930 if (skb->pkt_type == PACKET_BROADCAST || 1934 1931 skb->pkt_type == PACKET_MULTICAST)
+1 -1
net/ipv6/addrconf.c
··· 2613 2613 if (sp_ifa->rt) 2614 2614 continue; 2615 2615 2616 - sp_rt = addrconf_dst_alloc(idev, &sp_ifa->addr, 0); 2616 + sp_rt = addrconf_dst_alloc(idev, &sp_ifa->addr, false); 2617 2617 2618 2618 /* Failure cases are ignored */ 2619 2619 if (!IS_ERR(sp_rt)) {
-1
net/ipv6/datagram.c
··· 73 73 flowlabel = fl6_sock_lookup(sk, fl6.flowlabel); 74 74 if (flowlabel == NULL) 75 75 return -EINVAL; 76 - usin->sin6_addr = flowlabel->dst; 77 76 } 78 77 } 79 78
+5 -1
net/ipv6/fib6_rules.c
··· 122 122 static bool fib6_rule_suppress(struct fib_rule *rule, struct fib_lookup_arg *arg) 123 123 { 124 124 struct rt6_info *rt = (struct rt6_info *) arg->result; 125 - struct net_device *dev = rt->rt6i_idev->dev; 125 + struct net_device *dev = NULL; 126 + 127 + if (rt->rt6i_idev) 128 + dev = rt->rt6i_idev->dev; 129 + 126 130 /* do not accept result if the route does 127 131 * not meet the required prefix length 128 132 */
+3
net/ipv6/ndisc.c
··· 1277 1277 ri->prefix_len == 0) 1278 1278 continue; 1279 1279 #endif 1280 + if (ri->prefix_len == 0 && 1281 + !in6_dev->cnf.accept_ra_defrtr) 1282 + continue; 1280 1283 if (ri->prefix_len > in6_dev->cnf.accept_ra_rt_info_max_plen) 1281 1284 continue; 1282 1285 rt6_route_rcv(skb->dev, (u8*)p, (p->nd_opt_len) << 3,
-1
net/ipv6/raw.c
··· 792 792 flowlabel = fl6_sock_lookup(sk, fl6.flowlabel); 793 793 if (flowlabel == NULL) 794 794 return -EINVAL; 795 - daddr = &flowlabel->dst; 796 795 } 797 796 } 798 797
+13 -17
net/ipv6/route.c
··· 84 84 85 85 static int ip6_pkt_discard(struct sk_buff *skb); 86 86 static int ip6_pkt_discard_out(struct sk_buff *skb); 87 + static int ip6_pkt_prohibit(struct sk_buff *skb); 88 + static int ip6_pkt_prohibit_out(struct sk_buff *skb); 87 89 static void ip6_link_failure(struct sk_buff *skb); 88 90 static void ip6_rt_update_pmtu(struct dst_entry *dst, struct sock *sk, 89 91 struct sk_buff *skb, u32 mtu); ··· 235 233 }; 236 234 237 235 #ifdef CONFIG_IPV6_MULTIPLE_TABLES 238 - 239 - static int ip6_pkt_prohibit(struct sk_buff *skb); 240 - static int ip6_pkt_prohibit_out(struct sk_buff *skb); 241 236 242 237 static const struct rt6_info ip6_prohibit_entry_template = { 243 238 .dst = { ··· 1564 1565 goto out; 1565 1566 } 1566 1567 } 1567 - rt->dst.output = ip6_pkt_discard_out; 1568 - rt->dst.input = ip6_pkt_discard; 1569 1568 rt->rt6i_flags = RTF_REJECT|RTF_NONEXTHOP; 1570 1569 switch (cfg->fc_type) { 1571 1570 case RTN_BLACKHOLE: 1572 1571 rt->dst.error = -EINVAL; 1572 + rt->dst.output = dst_discard; 1573 + rt->dst.input = dst_discard; 1573 1574 break; 1574 1575 case RTN_PROHIBIT: 1575 1576 rt->dst.error = -EACCES; 1577 + rt->dst.output = ip6_pkt_prohibit_out; 1578 + rt->dst.input = ip6_pkt_prohibit; 1576 1579 break; 1577 1580 case RTN_THROW: 1578 - rt->dst.error = -EAGAIN; 1579 - break; 1580 1581 default: 1581 - rt->dst.error = -ENETUNREACH; 1582 + rt->dst.error = (cfg->fc_type == RTN_THROW) ? -EAGAIN 1583 + : -ENETUNREACH; 1584 + rt->dst.output = ip6_pkt_discard_out; 1585 + rt->dst.input = ip6_pkt_discard; 1582 1586 break; 1583 1587 } 1584 1588 goto install_route; ··· 2146 2144 return ip6_pkt_drop(skb, ICMPV6_NOROUTE, IPSTATS_MIB_OUTNOROUTES); 2147 2145 } 2148 2146 2149 - #ifdef CONFIG_IPV6_MULTIPLE_TABLES 2150 - 2151 2147 static int ip6_pkt_prohibit(struct sk_buff *skb) 2152 2148 { 2153 2149 return ip6_pkt_drop(skb, ICMPV6_ADM_PROHIBITED, IPSTATS_MIB_INNOROUTES); ··· 2157 2157 return ip6_pkt_drop(skb, ICMPV6_ADM_PROHIBITED, IPSTATS_MIB_OUTNOROUTES); 2158 2158 } 2159 2159 2160 - #endif 2161 - 2162 2160 /* 2163 2161 * Allocate a dst for local (unicast / anycast) address. 2164 2162 */ ··· 2166 2168 bool anycast) 2167 2169 { 2168 2170 struct net *net = dev_net(idev->dev); 2169 - struct rt6_info *rt = ip6_dst_alloc(net, net->loopback_dev, 0, NULL); 2170 - 2171 - if (!rt) { 2172 - net_warn_ratelimited("Maximum number of routes reached, consider increasing route/max_size\n"); 2171 + struct rt6_info *rt = ip6_dst_alloc(net, net->loopback_dev, 2172 + DST_NOCOUNT, NULL); 2173 + if (!rt) 2173 2174 return ERR_PTR(-ENOMEM); 2174 - } 2175 2175 2176 2176 in6_dev_hold(idev); 2177 2177
-1
net/ipv6/tcp_ipv6.c
··· 156 156 flowlabel = fl6_sock_lookup(sk, fl6.flowlabel); 157 157 if (flowlabel == NULL) 158 158 return -EINVAL; 159 - usin->sin6_addr = flowlabel->dst; 160 159 fl6_sock_release(flowlabel); 161 160 } 162 161 }
-1
net/ipv6/udp.c
··· 1140 1140 flowlabel = fl6_sock_lookup(sk, fl6.flowlabel); 1141 1141 if (flowlabel == NULL) 1142 1142 return -EINVAL; 1143 - daddr = &flowlabel->dst; 1144 1143 } 1145 1144 } 1146 1145
-1
net/l2tp/l2tp_ip6.c
··· 528 528 flowlabel = fl6_sock_lookup(sk, fl6.flowlabel); 529 529 if (flowlabel == NULL) 530 530 return -EINVAL; 531 - daddr = &flowlabel->dst; 532 531 } 533 532 } 534 533
+11 -4
net/mac80211/cfg.c
··· 1368 1368 changed |= 1369 1369 ieee80211_mps_set_sta_local_pm(sta, 1370 1370 params->local_pm); 1371 - ieee80211_bss_info_change_notify(sdata, changed); 1371 + ieee80211_mbss_info_change_notify(sdata, changed); 1372 1372 #endif 1373 1373 } 1374 1374 ··· 2488 2488 struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev); 2489 2489 struct ieee80211_local *local = wdev_priv(dev->ieee80211_ptr); 2490 2490 2491 - if (sdata->vif.type != NL80211_IFTYPE_STATION && 2492 - sdata->vif.type != NL80211_IFTYPE_MESH_POINT) 2491 + if (sdata->vif.type != NL80211_IFTYPE_STATION) 2493 2492 return -EOPNOTSUPP; 2494 2493 2495 2494 if (!(local->hw.flags & IEEE80211_HW_SUPPORTS_PS)) ··· 3119 3120 params->chandef.chan->band) 3120 3121 return -EINVAL; 3121 3122 3123 + ifmsh->chsw_init = true; 3124 + if (!ifmsh->pre_value) 3125 + ifmsh->pre_value = 1; 3126 + else 3127 + ifmsh->pre_value++; 3128 + 3122 3129 err = ieee80211_mesh_csa_beacon(sdata, params, true); 3123 - if (err < 0) 3130 + if (err < 0) { 3131 + ifmsh->chsw_init = false; 3124 3132 return err; 3133 + } 3125 3134 break; 3126 3135 #endif 3127 3136 default:
+4
net/mac80211/ibss.c
··· 823 823 if (err) 824 824 return false; 825 825 826 + /* channel switch is not supported, disconnect */ 827 + if (!(sdata->local->hw.wiphy->flags & WIPHY_FLAG_HAS_CHANNEL_SWITCH)) 828 + goto disconnect; 829 + 826 830 params.count = csa_ie.count; 827 831 params.chandef = csa_ie.chandef; 828 832
+1
net/mac80211/ieee80211_i.h
··· 1228 1228 u8 mode; 1229 1229 u8 count; 1230 1230 u8 ttl; 1231 + u16 pre_value; 1231 1232 }; 1232 1233 1233 1234 /* Parsed Information Elements */
-1
net/mac80211/iface.c
··· 1325 1325 sdata->vif.bss_conf.bssid = NULL; 1326 1326 break; 1327 1327 case NL80211_IFTYPE_AP_VLAN: 1328 - break; 1329 1328 case NL80211_IFTYPE_P2P_DEVICE: 1330 1329 sdata->vif.bss_conf.bssid = sdata->vif.addr; 1331 1330 break;
+3
net/mac80211/main.c
··· 940 940 wiphy_debug(local->hw.wiphy, "Failed to initialize wep: %d\n", 941 941 result); 942 942 943 + local->hw.conf.flags = IEEE80211_CONF_IDLE; 944 + 943 945 ieee80211_led_init(local); 944 946 945 947 rtnl_lock(); ··· 1049 1047 1050 1048 cancel_work_sync(&local->restart_work); 1051 1049 cancel_work_sync(&local->reconfig_filter); 1050 + flush_work(&local->sched_scan_stopped_work); 1052 1051 1053 1052 ieee80211_clear_tx_pending(local); 1054 1053 rate_control_deinitialize(local);
+12 -8
net/mac80211/mesh.c
··· 943 943 params.chandef.chan->center_freq); 944 944 945 945 params.block_tx = csa_ie.mode & WLAN_EID_CHAN_SWITCH_PARAM_TX_RESTRICT; 946 - if (beacon) 946 + if (beacon) { 947 947 ifmsh->chsw_ttl = csa_ie.ttl - 1; 948 - else 949 - ifmsh->chsw_ttl = 0; 948 + if (ifmsh->pre_value >= csa_ie.pre_value) 949 + return false; 950 + ifmsh->pre_value = csa_ie.pre_value; 951 + } 950 952 951 - if (ifmsh->chsw_ttl > 0) 953 + if (ifmsh->chsw_ttl < ifmsh->mshcfg.dot11MeshTTL) { 952 954 if (ieee80211_mesh_csa_beacon(sdata, &params, false) < 0) 953 955 return false; 956 + } else { 957 + return false; 958 + } 954 959 955 960 sdata->csa_radar_required = params.radar_required; 956 961 ··· 1168 1163 offset_ttl = (len < 42) ? 7 : 10; 1169 1164 *(pos + offset_ttl) -= 1; 1170 1165 *(pos + offset_ttl + 1) &= ~WLAN_EID_CHAN_SWITCH_PARAM_INITIATOR; 1171 - sdata->u.mesh.chsw_ttl = *(pos + offset_ttl); 1172 1166 1173 1167 memcpy(mgmt_fwd, mgmt, len); 1174 1168 eth_broadcast_addr(mgmt_fwd->da); ··· 1186 1182 u16 pre_value; 1187 1183 bool fwd_csa = true; 1188 1184 size_t baselen; 1189 - u8 *pos, ttl; 1185 + u8 *pos; 1190 1186 1191 1187 if (mgmt->u.action.u.measurement.action_code != 1192 1188 WLAN_ACTION_SPCT_CHL_SWITCH) ··· 1197 1193 u.action.u.chan_switch.variable); 1198 1194 ieee802_11_parse_elems(pos, len - baselen, false, &elems); 1199 1195 1200 - ttl = elems.mesh_chansw_params_ie->mesh_ttl; 1201 - if (!--ttl) 1196 + ifmsh->chsw_ttl = elems.mesh_chansw_params_ie->mesh_ttl; 1197 + if (!--ifmsh->chsw_ttl) 1202 1198 fwd_csa = false; 1203 1199 1204 1200 pre_value = le16_to_cpu(elems.mesh_chansw_params_ie->mesh_pre_value);
+2
net/mac80211/mlme.c
··· 1910 1910 if (ifmgd->flags & IEEE80211_STA_CONNECTION_POLL) 1911 1911 already = true; 1912 1912 1913 + ifmgd->flags |= IEEE80211_STA_CONNECTION_POLL; 1914 + 1913 1915 mutex_unlock(&sdata->local->mtx); 1914 1916 1915 1917 if (already)
+4 -3
net/mac80211/rc80211_minstrel_ht.c
··· 226 226 nsecs = 1000 * mi->overhead / MINSTREL_TRUNC(mi->avg_ampdu_len); 227 227 228 228 nsecs += minstrel_mcs_groups[group].duration[rate]; 229 - tp = 1000000 * ((mr->probability * 1000) / nsecs); 229 + tp = 1000000 * ((prob * 1000) / nsecs); 230 230 231 231 mr->cur_tp = MINSTREL_TRUNC(tp); 232 232 } ··· 277 277 if (!(mg->supported & BIT(i))) 278 278 continue; 279 279 280 + index = MCS_GROUP_RATES * group + i; 281 + 280 282 /* initialize rates selections starting indexes */ 281 283 if (!mg_rates_valid) { 282 284 mg->max_tp_rate = mg->max_tp_rate2 = 283 285 mg->max_prob_rate = i; 284 286 if (!mi_rates_valid) { 285 287 mi->max_tp_rate = mi->max_tp_rate2 = 286 - mi->max_prob_rate = i; 288 + mi->max_prob_rate = index; 287 289 mi_rates_valid = true; 288 290 } 289 291 mg_rates_valid = true; ··· 293 291 294 292 mr = &mg->rates[i]; 295 293 mr->retry_updated = false; 296 - index = MCS_GROUP_RATES * group + i; 297 294 minstrel_calc_rate_ewma(mr); 298 295 minstrel_ht_calc_tp(mi, group, i); 299 296
+2 -1
net/mac80211/rx.c
··· 911 911 u16 sc; 912 912 u8 tid, ack_policy; 913 913 914 - if (!ieee80211_is_data_qos(hdr->frame_control)) 914 + if (!ieee80211_is_data_qos(hdr->frame_control) || 915 + is_multicast_ether_addr(hdr->addr1)) 915 916 goto dont_reorder; 916 917 917 918 /*
+1 -1
net/mac80211/scan.c
··· 1088 1088 1089 1089 trace_api_sched_scan_stopped(local); 1090 1090 1091 - ieee80211_queue_work(&local->hw, &local->sched_scan_stopped_work); 1091 + schedule_work(&local->sched_scan_stopped_work); 1092 1092 } 1093 1093 EXPORT_SYMBOL(ieee80211_sched_scan_stopped);
+2
net/mac80211/spectmgmt.c
··· 78 78 if (elems->mesh_chansw_params_ie) { 79 79 csa_ie->ttl = elems->mesh_chansw_params_ie->mesh_ttl; 80 80 csa_ie->mode = elems->mesh_chansw_params_ie->mesh_flags; 81 + csa_ie->pre_value = le16_to_cpu( 82 + elems->mesh_chansw_params_ie->mesh_pre_value); 81 83 } 82 84 83 85 new_freq = ieee80211_channel_to_frequency(new_chan_no, new_band);
+2 -9
net/mac80211/util.c
··· 2278 2278 { 2279 2279 struct ieee80211_local *local = 2280 2280 container_of(work, struct ieee80211_local, radar_detected_work); 2281 - struct cfg80211_chan_def chandef; 2281 + struct cfg80211_chan_def chandef = local->hw.conf.chandef; 2282 2282 2283 2283 ieee80211_dfs_cac_cancel(local); 2284 2284 2285 2285 if (local->use_chanctx) 2286 2286 /* currently not handled */ 2287 2287 WARN_ON(1); 2288 - else { 2289 - chandef = local->hw.conf.chandef; 2288 + else 2290 2289 cfg80211_radar_event(local->hw.wiphy, &chandef, GFP_KERNEL); 2291 - } 2292 2290 } 2293 2291 2294 2292 void ieee80211_radar_detected(struct ieee80211_hw *hw) ··· 2457 2459 WLAN_EID_CHAN_SWITCH_PARAM_TX_RESTRICT : 0x00; 2458 2460 put_unaligned_le16(WLAN_REASON_MESH_CHAN, pos); /* Reason Cd */ 2459 2461 pos += 2; 2460 - if (!ifmsh->pre_value) 2461 - ifmsh->pre_value = 1; 2462 - else 2463 - ifmsh->pre_value++; 2464 2462 pre_value = cpu_to_le16(ifmsh->pre_value); 2465 2463 memcpy(pos, &pre_value, 2); /* Precedence Value */ 2466 2464 pos += 2; 2467 - ifmsh->chsw_init = true; 2468 2465 } 2469 2466 2470 2467 ieee80211_tx_skb(sdata, skb);
+1 -1
net/netfilter/ipset/ip_set_hash_netnet.c
··· 59 59 u32 *multi) 60 60 { 61 61 return ip1->ipcmp == ip2->ipcmp && 62 - ip2->ccmp == ip2->ccmp; 62 + ip1->ccmp == ip2->ccmp; 63 63 } 64 64 65 65 static inline int
+33 -13
net/netfilter/nf_tables_api.c
··· 1717 1717 return -ENOENT; 1718 1718 } 1719 1719 1720 + static int nf_table_delrule_by_chain(struct nft_ctx *ctx) 1721 + { 1722 + struct nft_rule *rule; 1723 + int err; 1724 + 1725 + list_for_each_entry(rule, &ctx->chain->rules, list) { 1726 + err = nf_tables_delrule_one(ctx, rule); 1727 + if (err < 0) 1728 + return err; 1729 + } 1730 + return 0; 1731 + } 1732 + 1720 1733 static int nf_tables_delrule(struct sock *nlsk, struct sk_buff *skb, 1721 1734 const struct nlmsghdr *nlh, 1722 1735 const struct nlattr * const nla[]) ··· 1738 1725 const struct nft_af_info *afi; 1739 1726 struct net *net = sock_net(skb->sk); 1740 1727 const struct nft_table *table; 1741 - struct nft_chain *chain; 1742 - struct nft_rule *rule, *tmp; 1728 + struct nft_chain *chain = NULL; 1729 + struct nft_rule *rule; 1743 1730 int family = nfmsg->nfgen_family, err = 0; 1744 1731 struct nft_ctx ctx; 1745 1732 ··· 1751 1738 if (IS_ERR(table)) 1752 1739 return PTR_ERR(table); 1753 1740 1754 - chain = nf_tables_chain_lookup(table, nla[NFTA_RULE_CHAIN]); 1755 - if (IS_ERR(chain)) 1756 - return PTR_ERR(chain); 1741 + if (nla[NFTA_RULE_CHAIN]) { 1742 + chain = nf_tables_chain_lookup(table, nla[NFTA_RULE_CHAIN]); 1743 + if (IS_ERR(chain)) 1744 + return PTR_ERR(chain); 1745 + } 1757 1746 1758 1747 nft_ctx_init(&ctx, skb, nlh, afi, table, chain, nla); 1759 1748 1760 - if (nla[NFTA_RULE_HANDLE]) { 1761 - rule = nf_tables_rule_lookup(chain, nla[NFTA_RULE_HANDLE]); 1762 - if (IS_ERR(rule)) 1763 - return PTR_ERR(rule); 1749 + if (chain) { 1750 + if (nla[NFTA_RULE_HANDLE]) { 1751 + rule = nf_tables_rule_lookup(chain, 1752 + nla[NFTA_RULE_HANDLE]); 1753 + if (IS_ERR(rule)) 1754 + return PTR_ERR(rule); 1764 1755 1765 - err = nf_tables_delrule_one(&ctx, rule); 1766 - } else { 1767 - /* Remove all rules in this chain */ 1768 - list_for_each_entry_safe(rule, tmp, &chain->rules, list) { 1769 1756 err = nf_tables_delrule_one(&ctx, rule); 1757 + } else { 1758 + err = nf_table_delrule_by_chain(&ctx); 1759 + } 1760 + } else { 1761 + list_for_each_entry(chain, &table->chains, list) { 1762 + ctx.chain = chain; 1763 + err = nf_table_delrule_by_chain(&ctx); 1770 1764 if (err < 0) 1771 1765 break; 1772 1766 }
+11 -14
net/netfilter/xt_hashlimit.c
··· 325 325 add_timer(&ht->timer); 326 326 } 327 327 328 - static void htable_destroy(struct xt_hashlimit_htable *hinfo) 328 + static void htable_remove_proc_entry(struct xt_hashlimit_htable *hinfo) 329 329 { 330 330 struct hashlimit_net *hashlimit_net = hashlimit_pernet(hinfo->net); 331 331 struct proc_dir_entry *parent; 332 - 333 - del_timer_sync(&hinfo->timer); 334 332 335 333 if (hinfo->family == NFPROTO_IPV4) 336 334 parent = hashlimit_net->ipt_hashlimit; 337 335 else 338 336 parent = hashlimit_net->ip6t_hashlimit; 339 337 340 - if(parent != NULL) 338 + if (parent != NULL) 341 339 remove_proc_entry(hinfo->name, parent); 340 + } 342 341 342 + static void htable_destroy(struct xt_hashlimit_htable *hinfo) 343 + { 344 + del_timer_sync(&hinfo->timer); 345 + htable_remove_proc_entry(hinfo); 343 346 htable_selective_cleanup(hinfo, select_all); 344 347 kfree(hinfo->name); 345 348 vfree(hinfo); ··· 886 883 static void __net_exit hashlimit_proc_net_exit(struct net *net) 887 884 { 888 885 struct xt_hashlimit_htable *hinfo; 889 - struct proc_dir_entry *pde; 890 886 struct hashlimit_net *hashlimit_net = hashlimit_pernet(net); 891 887 892 - /* recent_net_exit() is called before recent_mt_destroy(). Make sure 893 - * that the parent xt_recent proc entry is is empty before trying to 894 - * remove it. 888 + /* hashlimit_net_exit() is called before hashlimit_mt_destroy(). 889 + * Make sure that the parent ipt_hashlimit and ip6t_hashlimit proc 890 + * entries is empty before trying to remove it. 895 891 */ 896 892 mutex_lock(&hashlimit_mutex); 897 - pde = hashlimit_net->ipt_hashlimit; 898 - if (pde == NULL) 899 - pde = hashlimit_net->ip6t_hashlimit; 900 - 901 893 hlist_for_each_entry(hinfo, &hashlimit_net->htables, node) 902 - remove_proc_entry(hinfo->name, pde); 903 - 894 + htable_remove_proc_entry(hinfo); 904 895 hashlimit_net->ipt_hashlimit = NULL; 905 896 hashlimit_net->ip6t_hashlimit = NULL; 906 897 mutex_unlock(&hashlimit_mutex);
+40 -25
net/packet/af_packet.c
··· 237 237 static void __fanout_unlink(struct sock *sk, struct packet_sock *po); 238 238 static void __fanout_link(struct sock *sk, struct packet_sock *po); 239 239 240 + static struct net_device *packet_cached_dev_get(struct packet_sock *po) 241 + { 242 + struct net_device *dev; 243 + 244 + rcu_read_lock(); 245 + dev = rcu_dereference(po->cached_dev); 246 + if (likely(dev)) 247 + dev_hold(dev); 248 + rcu_read_unlock(); 249 + 250 + return dev; 251 + } 252 + 253 + static void packet_cached_dev_assign(struct packet_sock *po, 254 + struct net_device *dev) 255 + { 256 + rcu_assign_pointer(po->cached_dev, dev); 257 + } 258 + 259 + static void packet_cached_dev_reset(struct packet_sock *po) 260 + { 261 + RCU_INIT_POINTER(po->cached_dev, NULL); 262 + } 263 + 240 264 /* register_prot_hook must be invoked with the po->bind_lock held, 241 265 * or from a context in which asynchronous accesses to the packet 242 266 * socket is not possible (packet_create()). ··· 270 246 struct packet_sock *po = pkt_sk(sk); 271 247 272 248 if (!po->running) { 273 - if (po->fanout) { 249 + if (po->fanout) 274 250 __fanout_link(sk, po); 275 - } else { 251 + else 276 252 dev_add_pack(&po->prot_hook); 277 - rcu_assign_pointer(po->cached_dev, po->prot_hook.dev); 278 - } 279 253 280 254 sock_hold(sk); 281 255 po->running = 1; ··· 292 270 struct packet_sock *po = pkt_sk(sk); 293 271 294 272 po->running = 0; 295 - if (po->fanout) { 273 + 274 + if (po->fanout) 296 275 __fanout_unlink(sk, po); 297 - } else { 276 + else 298 277 __dev_remove_pack(&po->prot_hook); 299 - RCU_INIT_POINTER(po->cached_dev, NULL); 300 - } 301 278 302 279 __sock_put(sk); 303 280 ··· 2080 2059 return tp_len; 2081 2060 } 2082 2061 2083 - static struct net_device *packet_cached_dev_get(struct packet_sock *po) 2084 - { 2085 - struct net_device *dev; 2086 - 2087 - rcu_read_lock(); 2088 - dev = rcu_dereference(po->cached_dev); 2089 - if (dev) 2090 - dev_hold(dev); 2091 - rcu_read_unlock(); 2092 - 2093 - return dev; 2094 - } 2095 - 2096 2062 static int tpacket_snd(struct packet_sock *po, struct msghdr *msg) 2097 2063 { 2098 2064 struct sk_buff *skb; ··· 2096 2088 2097 2089 mutex_lock(&po->pg_vec_lock); 2098 2090 2099 - if (saddr == NULL) { 2091 + if (likely(saddr == NULL)) { 2100 2092 dev = packet_cached_dev_get(po); 2101 2093 proto = po->num; 2102 2094 addr = NULL; ··· 2250 2242 * Get and verify the address. 2251 2243 */ 2252 2244 2253 - if (saddr == NULL) { 2245 + if (likely(saddr == NULL)) { 2254 2246 dev = packet_cached_dev_get(po); 2255 2247 proto = po->num; 2256 2248 addr = NULL; ··· 2459 2451 2460 2452 spin_lock(&po->bind_lock); 2461 2453 unregister_prot_hook(sk, false); 2454 + packet_cached_dev_reset(po); 2455 + 2462 2456 if (po->prot_hook.dev) { 2463 2457 dev_put(po->prot_hook.dev); 2464 2458 po->prot_hook.dev = NULL; ··· 2516 2506 2517 2507 spin_lock(&po->bind_lock); 2518 2508 unregister_prot_hook(sk, true); 2509 + 2519 2510 po->num = protocol; 2520 2511 po->prot_hook.type = protocol; 2521 2512 if (po->prot_hook.dev) 2522 2513 dev_put(po->prot_hook.dev); 2523 - po->prot_hook.dev = dev; 2524 2514 2515 + po->prot_hook.dev = dev; 2525 2516 po->ifindex = dev ? 
dev->ifindex : 0; 2517 + 2518 + packet_cached_dev_assign(po, dev); 2526 2519 2527 2520 if (protocol == 0) 2528 2521 goto out_unlock; ··· 2639 2626 po = pkt_sk(sk); 2640 2627 sk->sk_family = PF_PACKET; 2641 2628 po->num = proto; 2642 - RCU_INIT_POINTER(po->cached_dev, NULL); 2629 + 2630 + packet_cached_dev_reset(po); 2643 2631 2644 2632 sk->sk_destruct = packet_sock_destruct; 2645 2633 sk_refcnt_debug_inc(sk); ··· 3351 3337 sk->sk_error_report(sk); 3352 3338 } 3353 3339 if (msg == NETDEV_UNREGISTER) { 3340 + packet_cached_dev_reset(po); 3354 3341 po->ifindex = -1; 3355 3342 if (po->prot_hook.dev) 3356 3343 dev_put(po->prot_hook.dev);
+2 -3
net/rds/ib_send.c
··· 552 552 && rm->m_inc.i_hdr.h_flags & RDS_FLAG_CONG_BITMAP) { 553 553 rds_cong_map_updated(conn->c_fcong, ~(u64) 0); 554 554 scat = &rm->data.op_sg[sg]; 555 - ret = sizeof(struct rds_header) + RDS_CONG_MAP_BYTES; 556 - ret = min_t(int, ret, scat->length - conn->c_xmit_data_off); 557 - return ret; 555 + ret = max_t(int, RDS_CONG_MAP_BYTES, scat->length); 556 + return sizeof(struct rds_header) + ret; 558 557 } 559 558 560 559 /* FIXME we may overallocate here */
+14 -12
net/sched/act_api.c
··· 270 270 { 271 271 struct tc_action_ops *a, **ap; 272 272 273 + /* Must supply act, dump, cleanup and init */ 274 + if (!act->act || !act->dump || !act->cleanup || !act->init) 275 + return -EINVAL; 276 + 277 + /* Supply defaults */ 278 + if (!act->lookup) 279 + act->lookup = tcf_hash_search; 280 + if (!act->walk) 281 + act->walk = tcf_generic_walker; 282 + 273 283 write_lock(&act_mod_lock); 274 284 for (ap = &act_base; (a = *ap) != NULL; ap = &a->next) { 275 285 if (act->type == a->type || (strcmp(act->kind, a->kind) == 0)) { ··· 391 381 } 392 382 while ((a = act) != NULL) { 393 383 repeat: 394 - if (a->ops && a->ops->act) { 384 + if (a->ops) { 395 385 ret = a->ops->act(skb, a, res); 396 386 if (TC_MUNGED & skb->tc_verd) { 397 387 /* copied already, allow trampling */ ··· 415 405 struct tc_action *a; 416 406 417 407 for (a = act; a; a = act) { 418 - if (a->ops && a->ops->cleanup) { 408 + if (a->ops) { 419 409 if (a->ops->cleanup(a, bind) == ACT_P_DELETED) 420 410 module_put(a->ops->owner); 421 411 act = act->next; ··· 434 424 { 435 425 int err = -EINVAL; 436 426 437 - if (a->ops == NULL || a->ops->dump == NULL) 427 + if (a->ops == NULL) 438 428 return err; 439 429 return a->ops->dump(skb, a, bind, ref); 440 430 } ··· 446 436 unsigned char *b = skb_tail_pointer(skb); 447 437 struct nlattr *nest; 448 438 449 - if (a->ops == NULL || a->ops->dump == NULL) 439 + if (a->ops == NULL) 450 440 return err; 451 441 452 442 if (nla_put_string(skb, TCA_KIND, a->ops->kind)) ··· 733 723 a->ops = tc_lookup_action(tb[TCA_ACT_KIND]); 734 724 if (a->ops == NULL) 735 725 goto err_free; 736 - if (a->ops->lookup == NULL) 737 - goto err_mod; 738 726 err = -ENOENT; 739 727 if (a->ops->lookup(a, index) == 0) 740 728 goto err_mod; ··· 1091 1083 1092 1084 memset(&a, 0, sizeof(struct tc_action)); 1093 1085 a.ops = a_o; 1094 - 1095 - if (a_o->walk == NULL) { 1096 - WARN(1, "tc_dump_action: %s !capable of dumping table\n", 1097 - a_o->kind); 1098 - goto out_module_put; 1099 - } 1100 1086 1101 1087 nlh = nlmsg_put(skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq, 1102 1088 cb->nlh->nlmsg_type, sizeof(*t), 0);
-2
net/sched/act_csum.c
··· 585 585 .act = tcf_csum, 586 586 .dump = tcf_csum_dump, 587 587 .cleanup = tcf_csum_cleanup, 588 - .lookup = tcf_hash_search, 589 588 .init = tcf_csum_init, 590 - .walk = tcf_generic_walker 591 589 }; 592 590 593 591 MODULE_DESCRIPTION("Checksum updating actions");
-2
net/sched/act_gact.c
··· 206 206 .act = tcf_gact, 207 207 .dump = tcf_gact_dump, 208 208 .cleanup = tcf_gact_cleanup, 209 - .lookup = tcf_hash_search, 210 209 .init = tcf_gact_init, 211 - .walk = tcf_generic_walker 212 210 }; 213 211 214 212 MODULE_AUTHOR("Jamal Hadi Salim(2002-4)");
-4
net/sched/act_ipt.c
··· 298 298 .act = tcf_ipt, 299 299 .dump = tcf_ipt_dump, 300 300 .cleanup = tcf_ipt_cleanup, 301 - .lookup = tcf_hash_search, 302 301 .init = tcf_ipt_init, 303 - .walk = tcf_generic_walker 304 302 }; 305 303 306 304 static struct tc_action_ops act_xt_ops = { ··· 310 312 .act = tcf_ipt, 311 313 .dump = tcf_ipt_dump, 312 314 .cleanup = tcf_ipt_cleanup, 313 - .lookup = tcf_hash_search, 314 315 .init = tcf_ipt_init, 315 - .walk = tcf_generic_walker 316 316 }; 317 317 318 318 MODULE_AUTHOR("Jamal Hadi Salim(2002-13)");
-2
net/sched/act_mirred.c
··· 271 271 .act = tcf_mirred, 272 272 .dump = tcf_mirred_dump, 273 273 .cleanup = tcf_mirred_cleanup, 274 - .lookup = tcf_hash_search, 275 274 .init = tcf_mirred_init, 276 - .walk = tcf_generic_walker 277 275 }; 278 276 279 277 MODULE_AUTHOR("Jamal Hadi Salim(2002)");
-2
net/sched/act_nat.c
··· 308 308 .act = tcf_nat, 309 309 .dump = tcf_nat_dump, 310 310 .cleanup = tcf_nat_cleanup, 311 - .lookup = tcf_hash_search, 312 311 .init = tcf_nat_init, 313 - .walk = tcf_generic_walker 314 312 }; 315 313 316 314 MODULE_DESCRIPTION("Stateless NAT actions");
-2
net/sched/act_pedit.c
··· 243 243 .act = tcf_pedit, 244 244 .dump = tcf_pedit_dump, 245 245 .cleanup = tcf_pedit_cleanup, 246 - .lookup = tcf_hash_search, 247 246 .init = tcf_pedit_init, 248 - .walk = tcf_generic_walker 249 247 }; 250 248 251 249 MODULE_AUTHOR("Jamal Hadi Salim(2002-4)");
-1
net/sched/act_police.c
··· 407 407 .act = tcf_act_police, 408 408 .dump = tcf_act_police_dump, 409 409 .cleanup = tcf_act_police_cleanup, 410 - .lookup = tcf_hash_search, 411 410 .init = tcf_act_police_locate, 412 411 .walk = tcf_act_police_walker 413 412 };
-1
net/sched/act_simple.c
··· 201 201 .dump = tcf_simp_dump, 202 202 .cleanup = tcf_simp_cleanup, 203 203 .init = tcf_simp_init, 204 - .walk = tcf_generic_walker, 205 204 }; 206 205 207 206 MODULE_AUTHOR("Jamal Hadi Salim(2005)");
-1
net/sched/act_skbedit.c
··· 203 203 .dump = tcf_skbedit_dump, 204 204 .cleanup = tcf_skbedit_cleanup, 205 205 .init = tcf_skbedit_init, 206 - .walk = tcf_generic_walker, 207 206 }; 208 207 209 208 MODULE_AUTHOR("Alexander Duyck, <alexander.h.duyck@intel.com>");
+12 -8
net/sched/sch_htb.c
··· 1477 1477 sch_tree_lock(sch); 1478 1478 } 1479 1479 1480 + rate64 = tb[TCA_HTB_RATE64] ? nla_get_u64(tb[TCA_HTB_RATE64]) : 0; 1481 + 1482 + ceil64 = tb[TCA_HTB_CEIL64] ? nla_get_u64(tb[TCA_HTB_CEIL64]) : 0; 1483 + 1484 + psched_ratecfg_precompute(&cl->rate, &hopt->rate, rate64); 1485 + psched_ratecfg_precompute(&cl->ceil, &hopt->ceil, ceil64); 1486 + 1480 1487 /* it used to be a nasty bug here, we have to check that node 1481 1488 * is really leaf before changing cl->un.leaf ! 1482 1489 */ 1483 1490 if (!cl->level) { 1484 - cl->quantum = hopt->rate.rate / q->rate2quantum; 1491 + u64 quantum = cl->rate.rate_bytes_ps; 1492 + 1493 + do_div(quantum, q->rate2quantum); 1494 + cl->quantum = min_t(u64, quantum, INT_MAX); 1495 + 1485 1496 if (!hopt->quantum && cl->quantum < 1000) { 1486 1497 pr_warning( 1487 1498 "HTB: quantum of class %X is small. Consider r2q change.\n", ··· 1510 1499 if ((cl->prio = hopt->prio) >= TC_HTB_NUMPRIO) 1511 1500 cl->prio = TC_HTB_NUMPRIO - 1; 1512 1501 } 1513 - 1514 - rate64 = tb[TCA_HTB_RATE64] ? nla_get_u64(tb[TCA_HTB_RATE64]) : 0; 1515 - 1516 - ceil64 = tb[TCA_HTB_CEIL64] ? nla_get_u64(tb[TCA_HTB_CEIL64]) : 0; 1517 - 1518 - psched_ratecfg_precompute(&cl->rate, &hopt->rate, rate64); 1519 - psched_ratecfg_precompute(&cl->ceil, &hopt->ceil, ceil64); 1520 1502 1521 1503 cl->buffer = PSCHED_TICKS2NS(hopt->buffer); 1522 1504 cl->cbuffer = PSCHED_TICKS2NS(hopt->cbuffer);
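A rough worked example of the new quantum computation (numbers chosen for illustration): with rate_bytes_ps = 1,250,000,000 (10 Gbit/s) and the default r2q of 10, do_div() gives a quantum of 125,000,000 bytes, which min_t() then caps at INT_MAX; dividing the 64-bit precomputed rate rather than the 32-bit hopt->rate.rate keeps the result meaningful for rates that only fit in TCA_HTB_RATE64.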
+72 -45
net/sched/sch_tbf.c
··· 118 118 }; 119 119 120 120 121 + /* Time to Length, convert time in ns to length in bytes 122 + * to determinate how many bytes can be sent in given time. 123 + */ 124 + static u64 psched_ns_t2l(const struct psched_ratecfg *r, 125 + u64 time_in_ns) 126 + { 127 + /* The formula is : 128 + * len = (time_in_ns * r->rate_bytes_ps) / NSEC_PER_SEC 129 + */ 130 + u64 len = time_in_ns * r->rate_bytes_ps; 131 + 132 + do_div(len, NSEC_PER_SEC); 133 + 134 + if (unlikely(r->linklayer == TC_LINKLAYER_ATM)) { 135 + do_div(len, 53); 136 + len = len * 48; 137 + } 138 + 139 + if (len > r->overhead) 140 + len -= r->overhead; 141 + else 142 + len = 0; 143 + 144 + return len; 145 + } 146 + 121 147 /* 122 148 * Return length of individual segments of a gso packet, 123 149 * including all headers (MAC, IP, TCP/UDP) ··· 315 289 struct tbf_sched_data *q = qdisc_priv(sch); 316 290 struct nlattr *tb[TCA_TBF_MAX + 1]; 317 291 struct tc_tbf_qopt *qopt; 318 - struct qdisc_rate_table *rtab = NULL; 319 - struct qdisc_rate_table *ptab = NULL; 320 292 struct Qdisc *child = NULL; 321 - int max_size, n; 293 + struct psched_ratecfg rate; 294 + struct psched_ratecfg peak; 295 + u64 max_size; 296 + s64 buffer, mtu; 322 297 u64 rate64 = 0, prate64 = 0; 323 298 324 299 err = nla_parse_nested(tb, TCA_TBF_MAX, opt, tbf_policy); ··· 331 304 goto done; 332 305 333 306 qopt = nla_data(tb[TCA_TBF_PARMS]); 334 - rtab = qdisc_get_rtab(&qopt->rate, tb[TCA_TBF_RTAB]); 335 - if (rtab == NULL) 336 - goto done; 307 + if (qopt->rate.linklayer == TC_LINKLAYER_UNAWARE) 308 + qdisc_put_rtab(qdisc_get_rtab(&qopt->rate, 309 + tb[TCA_TBF_RTAB])); 337 310 338 - if (qopt->peakrate.rate) { 339 - if (qopt->peakrate.rate > qopt->rate.rate) 340 - ptab = qdisc_get_rtab(&qopt->peakrate, tb[TCA_TBF_PTAB]); 341 - if (ptab == NULL) 342 - goto done; 343 - } 344 - 345 - for (n = 0; n < 256; n++) 346 - if (rtab->data[n] > qopt->buffer) 347 - break; 348 - max_size = (n << qopt->rate.cell_log) - 1; 349 - if (ptab) { 350 - int size; 351 - 352 - for (n = 0; n < 256; n++) 353 - if (ptab->data[n] > qopt->mtu) 354 - break; 355 - size = (n << qopt->peakrate.cell_log) - 1; 356 - if (size < max_size) 357 - max_size = size; 358 - } 359 - if (max_size < 0) 360 - goto done; 361 - 362 - if (max_size < psched_mtu(qdisc_dev(sch))) 363 - pr_warn_ratelimited("sch_tbf: burst %u is lower than device %s mtu (%u) !\n", 364 - max_size, qdisc_dev(sch)->name, 365 - psched_mtu(qdisc_dev(sch))); 311 + if (qopt->peakrate.linklayer == TC_LINKLAYER_UNAWARE) 312 + qdisc_put_rtab(qdisc_get_rtab(&qopt->peakrate, 313 + tb[TCA_TBF_PTAB])); 366 314 367 315 if (q->qdisc != &noop_qdisc) { 368 316 err = fifo_set_limit(q->qdisc, qopt->limit); ··· 349 347 err = PTR_ERR(child); 350 348 goto done; 351 349 } 350 + } 351 + 352 + buffer = min_t(u64, PSCHED_TICKS2NS(qopt->buffer), ~0U); 353 + mtu = min_t(u64, PSCHED_TICKS2NS(qopt->mtu), ~0U); 354 + 355 + if (tb[TCA_TBF_RATE64]) 356 + rate64 = nla_get_u64(tb[TCA_TBF_RATE64]); 357 + psched_ratecfg_precompute(&rate, &qopt->rate, rate64); 358 + 359 + max_size = min_t(u64, psched_ns_t2l(&rate, buffer), ~0U); 360 + 361 + if (qopt->peakrate.rate) { 362 + if (tb[TCA_TBF_PRATE64]) 363 + prate64 = nla_get_u64(tb[TCA_TBF_PRATE64]); 364 + psched_ratecfg_precompute(&peak, &qopt->peakrate, prate64); 365 + if (peak.rate_bytes_ps <= rate.rate_bytes_ps) { 366 + pr_warn_ratelimited("sch_tbf: peakrate %llu is lower than or equals to rate %llu !\n", 367 + peak.rate_bytes_ps, rate.rate_bytes_ps); 368 + err = -EINVAL; 369 + goto done; 370 + } 371 + 372 + max_size = min_t(u64, 
max_size, psched_ns_t2l(&peak, mtu)); 373 + } 374 + 375 + if (max_size < psched_mtu(qdisc_dev(sch))) 376 + pr_warn_ratelimited("sch_tbf: burst %llu is lower than device %s mtu (%u) !\n", 377 + max_size, qdisc_dev(sch)->name, 378 + psched_mtu(qdisc_dev(sch))); 379 + 380 + if (!max_size) { 381 + err = -EINVAL; 382 + goto done; 352 383 } 353 384 354 385 sch_tree_lock(sch); ··· 397 362 q->tokens = q->buffer; 398 363 q->ptokens = q->mtu; 399 364 400 - if (tb[TCA_TBF_RATE64]) 401 - rate64 = nla_get_u64(tb[TCA_TBF_RATE64]); 402 - psched_ratecfg_precompute(&q->rate, &rtab->rate, rate64); 403 - if (ptab) { 404 - if (tb[TCA_TBF_PRATE64]) 405 - prate64 = nla_get_u64(tb[TCA_TBF_PRATE64]); 406 - psched_ratecfg_precompute(&q->peak, &ptab->rate, prate64); 365 + memcpy(&q->rate, &rate, sizeof(struct psched_ratecfg)); 366 + if (qopt->peakrate.rate) { 367 + memcpy(&q->peak, &peak, sizeof(struct psched_ratecfg)); 407 368 q->peak_present = true; 408 369 } else { 409 370 q->peak_present = false; ··· 408 377 sch_tree_unlock(sch); 409 378 err = 0; 410 379 done: 411 - if (rtab) 412 - qdisc_put_rtab(rtab); 413 - if (ptab) 414 - qdisc_put_rtab(ptab); 415 380 return err; 416 381 } 417 382
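A rough worked example of psched_ns_t2l() above (figures chosen for illustration): at rate_bytes_ps = 125,000,000 (1 Gbit/s), a buffer time of 1,000,000 ns (1 ms) converts to 125,000,000 * 1,000,000 / NSEC_PER_SEC = 125,000 bytes; on a TC_LINKLAYER_ATM link the integer 48/53 scaling reduces this to 113,184 bytes before the configured overhead is subtracted.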
+1 -4
net/sctp/associola.c
··· 154 154 155 155 asoc->timeouts[SCTP_EVENT_TIMEOUT_HEARTBEAT] = 0; 156 156 asoc->timeouts[SCTP_EVENT_TIMEOUT_SACK] = asoc->sackdelay; 157 - asoc->timeouts[SCTP_EVENT_TIMEOUT_AUTOCLOSE] = 158 - min_t(unsigned long, sp->autoclose, net->sctp.max_autoclose) * HZ; 157 + asoc->timeouts[SCTP_EVENT_TIMEOUT_AUTOCLOSE] = sp->autoclose * HZ; 159 158 160 159 /* Initializes the timers */ 161 160 for (i = SCTP_EVENT_TIMEOUT_NONE; i < SCTP_NUM_TIMEOUT_TYPES; ++i) ··· 289 290 if (asoc->base.sk->sk_family == PF_INET6) 290 291 asoc->peer.ipv6_address = 1; 291 292 INIT_LIST_HEAD(&asoc->asocs); 292 - 293 - asoc->autoclose = sp->autoclose; 294 293 295 294 asoc->default_stream = sp->default_stream; 296 295 asoc->default_ppid = sp->default_ppid;
+2 -1
net/sctp/output.c
··· 581 581 unsigned long timeout; 582 582 583 583 /* Restart the AUTOCLOSE timer when sending data. */ 584 - if (sctp_state(asoc, ESTABLISHED) && asoc->autoclose) { 584 + if (sctp_state(asoc, ESTABLISHED) && 585 + asoc->timeouts[SCTP_EVENT_TIMEOUT_AUTOCLOSE]) { 585 586 timer = &asoc->timers[SCTP_EVENT_TIMEOUT_AUTOCLOSE]; 586 587 timeout = asoc->timeouts[SCTP_EVENT_TIMEOUT_AUTOCLOSE]; 587 588
+6 -6
net/sctp/sm_statefuns.c
··· 820 820 SCTP_INC_STATS(net, SCTP_MIB_PASSIVEESTABS); 821 821 sctp_add_cmd_sf(commands, SCTP_CMD_HB_TIMERS_START, SCTP_NULL()); 822 822 823 - if (new_asoc->autoclose) 823 + if (new_asoc->timeouts[SCTP_EVENT_TIMEOUT_AUTOCLOSE]) 824 824 sctp_add_cmd_sf(commands, SCTP_CMD_TIMER_START, 825 825 SCTP_TO(SCTP_EVENT_TIMEOUT_AUTOCLOSE)); 826 826 ··· 908 908 SCTP_INC_STATS(net, SCTP_MIB_CURRESTAB); 909 909 SCTP_INC_STATS(net, SCTP_MIB_ACTIVEESTABS); 910 910 sctp_add_cmd_sf(commands, SCTP_CMD_HB_TIMERS_START, SCTP_NULL()); 911 - if (asoc->autoclose) 911 + if (asoc->timeouts[SCTP_EVENT_TIMEOUT_AUTOCLOSE]) 912 912 sctp_add_cmd_sf(commands, SCTP_CMD_TIMER_START, 913 913 SCTP_TO(SCTP_EVENT_TIMEOUT_AUTOCLOSE)); 914 914 ··· 2970 2970 if (chunk->chunk_hdr->flags & SCTP_DATA_SACK_IMM) 2971 2971 force = SCTP_FORCE(); 2972 2972 2973 - if (asoc->autoclose) { 2973 + if (asoc->timeouts[SCTP_EVENT_TIMEOUT_AUTOCLOSE]) { 2974 2974 sctp_add_cmd_sf(commands, SCTP_CMD_TIMER_RESTART, 2975 2975 SCTP_TO(SCTP_EVENT_TIMEOUT_AUTOCLOSE)); 2976 2976 } ··· 3878 3878 SCTP_CHUNK(chunk)); 3879 3879 3880 3880 /* Count this as receiving DATA. */ 3881 - if (asoc->autoclose) { 3881 + if (asoc->timeouts[SCTP_EVENT_TIMEOUT_AUTOCLOSE]) { 3882 3882 sctp_add_cmd_sf(commands, SCTP_CMD_TIMER_RESTART, 3883 3883 SCTP_TO(SCTP_EVENT_TIMEOUT_AUTOCLOSE)); 3884 3884 } ··· 5267 5267 sctp_add_cmd_sf(commands, SCTP_CMD_TIMER_RESTART, 5268 5268 SCTP_TO(SCTP_EVENT_TIMEOUT_T5_SHUTDOWN_GUARD)); 5269 5269 5270 - if (asoc->autoclose) 5270 + if (asoc->timeouts[SCTP_EVENT_TIMEOUT_AUTOCLOSE]) 5271 5271 sctp_add_cmd_sf(commands, SCTP_CMD_TIMER_STOP, 5272 5272 SCTP_TO(SCTP_EVENT_TIMEOUT_AUTOCLOSE)); 5273 5273 ··· 5346 5346 sctp_add_cmd_sf(commands, SCTP_CMD_TIMER_RESTART, 5347 5347 SCTP_TO(SCTP_EVENT_TIMEOUT_T2_SHUTDOWN)); 5348 5348 5349 - if (asoc->autoclose) 5349 + if (asoc->timeouts[SCTP_EVENT_TIMEOUT_AUTOCLOSE]) 5350 5350 sctp_add_cmd_sf(commands, SCTP_CMD_TIMER_STOP, 5351 5351 SCTP_TO(SCTP_EVENT_TIMEOUT_AUTOCLOSE)); 5352 5352
+26 -10
net/sctp/socket.c
··· 2196 2196 unsigned int optlen) 2197 2197 { 2198 2198 struct sctp_sock *sp = sctp_sk(sk); 2199 + struct net *net = sock_net(sk); 2199 2200 2200 2201 /* Applicable to UDP-style socket only */ 2201 2202 if (sctp_style(sk, TCP)) ··· 2205 2204 return -EINVAL; 2206 2205 if (copy_from_user(&sp->autoclose, optval, optlen)) 2207 2206 return -EFAULT; 2207 + 2208 + if (sp->autoclose > net->sctp.max_autoclose) 2209 + sp->autoclose = net->sctp.max_autoclose; 2208 2210 2209 2211 return 0; 2210 2212 } ··· 2815 2811 { 2816 2812 struct sctp_rtoinfo rtoinfo; 2817 2813 struct sctp_association *asoc; 2814 + unsigned long rto_min, rto_max; 2815 + struct sctp_sock *sp = sctp_sk(sk); 2818 2816 2819 2817 if (optlen != sizeof (struct sctp_rtoinfo)) 2820 2818 return -EINVAL; ··· 2830 2824 if (!asoc && rtoinfo.srto_assoc_id && sctp_style(sk, UDP)) 2831 2825 return -EINVAL; 2832 2826 2827 + rto_max = rtoinfo.srto_max; 2828 + rto_min = rtoinfo.srto_min; 2829 + 2830 + if (rto_max) 2831 + rto_max = asoc ? msecs_to_jiffies(rto_max) : rto_max; 2832 + else 2833 + rto_max = asoc ? asoc->rto_max : sp->rtoinfo.srto_max; 2834 + 2835 + if (rto_min) 2836 + rto_min = asoc ? msecs_to_jiffies(rto_min) : rto_min; 2837 + else 2838 + rto_min = asoc ? asoc->rto_min : sp->rtoinfo.srto_min; 2839 + 2840 + if (rto_min > rto_max) 2841 + return -EINVAL; 2842 + 2833 2843 if (asoc) { 2834 2844 if (rtoinfo.srto_initial != 0) 2835 2845 asoc->rto_initial = 2836 2846 msecs_to_jiffies(rtoinfo.srto_initial); 2837 - if (rtoinfo.srto_max != 0) 2838 - asoc->rto_max = msecs_to_jiffies(rtoinfo.srto_max); 2839 - if (rtoinfo.srto_min != 0) 2840 - asoc->rto_min = msecs_to_jiffies(rtoinfo.srto_min); 2847 + asoc->rto_max = rto_max; 2848 + asoc->rto_min = rto_min; 2841 2849 } else { 2842 2850 /* If there is no association or the association-id = 0 2843 2851 * set the values to the endpoint. 2844 2852 */ 2845 - struct sctp_sock *sp = sctp_sk(sk); 2846 - 2847 2853 if (rtoinfo.srto_initial != 0) 2848 2854 sp->rtoinfo.srto_initial = rtoinfo.srto_initial; 2849 - if (rtoinfo.srto_max != 0) 2850 - sp->rtoinfo.srto_max = rtoinfo.srto_max; 2851 - if (rtoinfo.srto_min != 0) 2852 - sp->rtoinfo.srto_min = rtoinfo.srto_min; 2855 + sp->rtoinfo.srto_max = rto_max; 2856 + sp->rtoinfo.srto_min = rto_min; 2853 2857 } 2854 2858 2855 2859 return 0;
+67 -9
net/sctp/sysctl.c
··· 56 56 extern int sysctl_sctp_rmem[3]; 57 57 extern int sysctl_sctp_wmem[3]; 58 58 59 - static int proc_sctp_do_hmac_alg(struct ctl_table *ctl, 60 - int write, 59 + static int proc_sctp_do_hmac_alg(struct ctl_table *ctl, int write, 61 60 void __user *buffer, size_t *lenp, 62 - 63 61 loff_t *ppos); 62 + static int proc_sctp_do_rto_min(struct ctl_table *ctl, int write, 63 + void __user *buffer, size_t *lenp, 64 + loff_t *ppos); 65 + static int proc_sctp_do_rto_max(struct ctl_table *ctl, int write, 66 + void __user *buffer, size_t *lenp, 67 + loff_t *ppos); 68 + 64 69 static struct ctl_table sctp_table[] = { 65 70 { 66 71 .procname = "sctp_mem", ··· 107 102 .data = &init_net.sctp.rto_min, 108 103 .maxlen = sizeof(unsigned int), 109 104 .mode = 0644, 110 - .proc_handler = proc_dointvec_minmax, 105 + .proc_handler = proc_sctp_do_rto_min, 111 106 .extra1 = &one, 112 - .extra2 = &timer_max 107 + .extra2 = &init_net.sctp.rto_max 113 108 }, 114 109 { 115 110 .procname = "rto_max", 116 111 .data = &init_net.sctp.rto_max, 117 112 .maxlen = sizeof(unsigned int), 118 113 .mode = 0644, 119 - .proc_handler = proc_dointvec_minmax, 120 - .extra1 = &one, 114 + .proc_handler = proc_sctp_do_rto_max, 115 + .extra1 = &init_net.sctp.rto_min, 121 116 .extra2 = &timer_max 122 117 }, 123 118 { ··· 299 294 { /* sentinel */ } 300 295 }; 301 296 302 - static int proc_sctp_do_hmac_alg(struct ctl_table *ctl, 303 - int write, 297 + static int proc_sctp_do_hmac_alg(struct ctl_table *ctl, int write, 304 298 void __user *buffer, size_t *lenp, 305 299 loff_t *ppos) 306 300 { ··· 343 339 ret = -EINVAL; 344 340 } 345 341 342 + return ret; 343 + } 344 + 345 + static int proc_sctp_do_rto_min(struct ctl_table *ctl, int write, 346 + void __user *buffer, size_t *lenp, 347 + loff_t *ppos) 348 + { 349 + struct net *net = current->nsproxy->net_ns; 350 + int new_value; 351 + struct ctl_table tbl; 352 + unsigned int min = *(unsigned int *) ctl->extra1; 353 + unsigned int max = *(unsigned int *) ctl->extra2; 354 + int ret; 355 + 356 + memset(&tbl, 0, sizeof(struct ctl_table)); 357 + tbl.maxlen = sizeof(unsigned int); 358 + 359 + if (write) 360 + tbl.data = &new_value; 361 + else 362 + tbl.data = &net->sctp.rto_min; 363 + ret = proc_dointvec(&tbl, write, buffer, lenp, ppos); 364 + if (write) { 365 + if (ret || new_value > max || new_value < min) 366 + return -EINVAL; 367 + net->sctp.rto_min = new_value; 368 + } 369 + return ret; 370 + } 371 + 372 + static int proc_sctp_do_rto_max(struct ctl_table *ctl, int write, 373 + void __user *buffer, size_t *lenp, 374 + loff_t *ppos) 375 + { 376 + struct net *net = current->nsproxy->net_ns; 377 + int new_value; 378 + struct ctl_table tbl; 379 + unsigned int min = *(unsigned int *) ctl->extra1; 380 + unsigned int max = *(unsigned int *) ctl->extra2; 381 + int ret; 382 + 383 + memset(&tbl, 0, sizeof(struct ctl_table)); 384 + tbl.maxlen = sizeof(unsigned int); 385 + 386 + if (write) 387 + tbl.data = &new_value; 388 + else 389 + tbl.data = &net->sctp.rto_max; 390 + ret = proc_dointvec(&tbl, write, buffer, lenp, ppos); 391 + if (write) { 392 + if (ret || new_value > max || new_value < min) 393 + return -EINVAL; 394 + net->sctp.rto_max = new_value; 395 + } 346 396 return ret; 347 397 } 348 398
+1 -1
net/sctp/transport.c
··· 573 573 u32 old_cwnd = t->cwnd; 574 574 u32 max_burst_bytes; 575 575 576 - if (t->burst_limited) 576 + if (t->burst_limited || asoc->max_burst == 0) 577 577 return; 578 578 579 579 max_burst_bytes = t->flight_size + (asoc->max_burst * asoc->pathmtu);
+4 -3
net/tipc/core.c
··· 113 113 static void tipc_core_stop(void) 114 114 { 115 115 tipc_netlink_stop(); 116 - tipc_handler_stop(); 117 116 tipc_cfg_stop(); 118 117 tipc_subscr_stop(); 119 118 tipc_nametbl_stop(); ··· 145 146 res = tipc_subscr_start(); 146 147 if (!res) 147 148 res = tipc_cfg_init(); 148 - if (res) 149 + if (res) { 150 + tipc_handler_stop(); 149 151 tipc_core_stop(); 150 - 152 + } 151 153 return res; 152 154 } 153 155 ··· 178 178 179 179 static void __exit tipc_exit(void) 180 180 { 181 + tipc_handler_stop(); 181 182 tipc_core_stop_net(); 182 183 tipc_core_stop(); 183 184 pr_info("Deactivated\n");
+8 -3
net/tipc/handler.c
··· 56 56 { 57 57 struct queue_item *item; 58 58 59 + spin_lock_bh(&qitem_lock); 59 60 if (!handler_enabled) { 60 61 pr_err("Signal request ignored by handler\n"); 62 + spin_unlock_bh(&qitem_lock); 61 63 return -ENOPROTOOPT; 62 64 } 63 65 64 - spin_lock_bh(&qitem_lock); 65 66 item = kmem_cache_alloc(tipc_queue_item_cache, GFP_ATOMIC); 66 67 if (!item) { 67 68 pr_err("Signal queue out of memory\n"); ··· 113 112 struct list_head *l, *n; 114 113 struct queue_item *item; 115 114 116 - if (!handler_enabled) 115 + spin_lock_bh(&qitem_lock); 116 + if (!handler_enabled) { 117 + spin_unlock_bh(&qitem_lock); 117 118 return; 118 - 119 + } 119 120 handler_enabled = 0; 121 + spin_unlock_bh(&qitem_lock); 122 + 120 123 tasklet_kill(&tipc_tasklet); 121 124 122 125 spin_lock_bh(&qitem_lock);
+6 -2
net/unix/af_unix.c
··· 530 530 static int unix_seqpacket_recvmsg(struct kiocb *, struct socket *, 531 531 struct msghdr *, size_t, int); 532 532 533 - static void unix_set_peek_off(struct sock *sk, int val) 533 + static int unix_set_peek_off(struct sock *sk, int val) 534 534 { 535 535 struct unix_sock *u = unix_sk(sk); 536 536 537 - mutex_lock(&u->readlock); 537 + if (mutex_lock_interruptible(&u->readlock)) 538 + return -EINTR; 539 + 538 540 sk->sk_peek_off = val; 539 541 mutex_unlock(&u->readlock); 542 + 543 + return 0; 540 544 } 541 545 542 546
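A small userspace sketch (socket setup omitted, fd assumed to be an AF_UNIX socket): since unix_set_peek_off() can now fail, for example with -EINTR when taking the readlock mutex is interrupted, SO_PEEK_OFF callers should check the setsockopt() return value.

	int off = 0;
	if (setsockopt(fd, SOL_SOCKET, SO_PEEK_OFF, &off, sizeof(off)) < 0)
		perror("setsockopt(SO_PEEK_OFF)");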
+9
net/wireless/core.c
··· 451 451 int i; 452 452 u16 ifmodes = wiphy->interface_modes; 453 453 454 + /* support for 5/10 MHz is broken due to nl80211 API mess - disable */ 455 + wiphy->flags &= ~WIPHY_FLAG_SUPPORTS_5_10_MHZ; 456 + 457 + /* 458 + * There are major locking problems in nl80211/mac80211 for CSA, 459 + * disable for all drivers until this has been reworked. 460 + */ 461 + wiphy->flags &= ~WIPHY_FLAG_HAS_CHANNEL_SWITCH; 462 + 454 463 #ifdef CONFIG_PM 455 464 if (WARN_ON(wiphy->wowlan && 456 465 (wiphy->wowlan->flags & WIPHY_WOWLAN_GTK_REKEY_FAILURE) &&
+9 -9
net/wireless/ibss.c
··· 262 262 263 263 /* try to find an IBSS channel if none requested ... */ 264 264 if (!wdev->wext.ibss.chandef.chan) { 265 - wdev->wext.ibss.chandef.width = NL80211_CHAN_WIDTH_20_NOHT; 265 + struct ieee80211_channel *new_chan = NULL; 266 266 267 267 for (band = 0; band < IEEE80211_NUM_BANDS; band++) { 268 268 struct ieee80211_supported_band *sband; ··· 278 278 continue; 279 279 if (chan->flags & IEEE80211_CHAN_DISABLED) 280 280 continue; 281 - wdev->wext.ibss.chandef.chan = chan; 282 - wdev->wext.ibss.chandef.center_freq1 = 283 - chan->center_freq; 281 + new_chan = chan; 284 282 break; 285 283 } 286 284 287 - if (wdev->wext.ibss.chandef.chan) 285 + if (new_chan) 288 286 break; 289 287 } 290 288 291 - if (!wdev->wext.ibss.chandef.chan) 289 + if (!new_chan) 292 290 return -EINVAL; 291 + 292 + cfg80211_chandef_create(&wdev->wext.ibss.chandef, new_chan, 293 + NL80211_CHAN_NO_HT); 293 294 } 294 295 295 296 /* don't join -- SSID is not there */ ··· 364 363 return err; 365 364 366 365 if (chan) { 367 - wdev->wext.ibss.chandef.chan = chan; 368 - wdev->wext.ibss.chandef.width = NL80211_CHAN_WIDTH_20_NOHT; 369 - wdev->wext.ibss.chandef.center_freq1 = freq; 366 + cfg80211_chandef_create(&wdev->wext.ibss.chandef, chan, 367 + NL80211_CHAN_NO_HT); 370 368 wdev->wext.ibss.channel_fixed = true; 371 369 } else { 372 370 /* cfg80211_ibss_wext_join will pick one if needed */
+37 -23
net/wireless/nl80211.c
···
 	hdr = nl80211hdr_put(msg, info->snd_portid, info->snd_seq, 0,
 			     NL80211_CMD_NEW_KEY);
 	if (!hdr)
-		return -ENOBUFS;
+		goto nla_put_failure;
 
 	cookie.msg = msg;
 	cookie.idx = key_idx;
···
 			err = -EINVAL;
 			goto out_free;
 		}
+
+		if (!wiphy->bands[band])
+			continue;
+
 		err = ieee80211_get_ratemask(wiphy->bands[band],
 					     nla_data(attr),
 					     nla_len(attr),
···
 	    nla_put(msg, NL80211_ATTR_IE, req->ie_len, req->ie))
 		goto nla_put_failure;
 
-	if (req->flags)
-		nla_put_u32(msg, NL80211_ATTR_SCAN_FLAGS, req->flags);
+	if (req->flags &&
+	    nla_put_u32(msg, NL80211_ATTR_SCAN_FLAGS, req->flags))
+		goto nla_put_failure;
 
 	return 0;
  nla_put_failure:
···
 		struct nlattr *reasons;
 
 		reasons = nla_nest_start(msg, NL80211_ATTR_WOWLAN_TRIGGERS);
+		if (!reasons)
+			goto free_msg;
 
 		if (wakeup->disconnect &&
 		    nla_put_flag(msg, NL80211_WOWLAN_TRIG_DISCONNECT))
···
 				wakeup->pattern_idx))
 			goto free_msg;
 
-		if (wakeup->tcp_match)
-			nla_put_flag(msg, NL80211_WOWLAN_TRIG_WAKEUP_TCP_MATCH);
+		if (wakeup->tcp_match &&
+		    nla_put_flag(msg, NL80211_WOWLAN_TRIG_WAKEUP_TCP_MATCH))
+			goto free_msg;
 
-		if (wakeup->tcp_connlost)
-			nla_put_flag(msg,
-				     NL80211_WOWLAN_TRIG_WAKEUP_TCP_CONNLOST);
+		if (wakeup->tcp_connlost &&
+		    nla_put_flag(msg, NL80211_WOWLAN_TRIG_WAKEUP_TCP_CONNLOST))
+			goto free_msg;
 
-		if (wakeup->tcp_nomoretokens)
-			nla_put_flag(msg,
-				     NL80211_WOWLAN_TRIG_WAKEUP_TCP_NOMORETOKENS);
+		if (wakeup->tcp_nomoretokens &&
+		    nla_put_flag(msg,
+				 NL80211_WOWLAN_TRIG_WAKEUP_TCP_NOMORETOKENS))
+			goto free_msg;
 
 		if (wakeup->packet) {
 			u32 pkt_attr = NL80211_WOWLAN_TRIG_WAKEUP_PKT_80211;
···
 		return;
 
 	hdr = nl80211hdr_put(msg, 0, 0, 0, NL80211_CMD_FT_EVENT);
-	if (!hdr) {
-		nlmsg_free(msg);
-		return;
-	}
+	if (!hdr)
+		goto out;
 
-	nla_put_u32(msg, NL80211_ATTR_WIPHY, rdev->wiphy_idx);
-	nla_put_u32(msg, NL80211_ATTR_IFINDEX, netdev->ifindex);
-	nla_put(msg, NL80211_ATTR_MAC, ETH_ALEN, ft_event->target_ap);
-	if (ft_event->ies)
-		nla_put(msg, NL80211_ATTR_IE, ft_event->ies_len, ft_event->ies);
-	if (ft_event->ric_ies)
-		nla_put(msg, NL80211_ATTR_IE_RIC, ft_event->ric_ies_len,
-			ft_event->ric_ies);
+	if (nla_put_u32(msg, NL80211_ATTR_WIPHY, rdev->wiphy_idx) ||
+	    nla_put_u32(msg, NL80211_ATTR_IFINDEX, netdev->ifindex) ||
+	    nla_put(msg, NL80211_ATTR_MAC, ETH_ALEN, ft_event->target_ap))
+		goto out;
+
+	if (ft_event->ies &&
+	    nla_put(msg, NL80211_ATTR_IE, ft_event->ies_len, ft_event->ies))
+		goto out;
+	if (ft_event->ric_ies &&
+	    nla_put(msg, NL80211_ATTR_IE_RIC, ft_event->ric_ies_len,
+		    ft_event->ric_ies))
+		goto out;
 
 	genlmsg_end(msg, hdr);
 
 	genlmsg_multicast_netns(&nl80211_fam, wiphy_net(&rdev->wiphy), msg, 0,
 				NL80211_MCGRP_MLME, GFP_KERNEL);
+	return;
+ out:
+	nlmsg_free(msg);
 }
 EXPORT_SYMBOL(cfg80211_ft_event);
 
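The common thread through these nl80211.c hunks is that every nla_put*() and nest-start return value is now checked, and any failure jumps to a single cleanup label that frees the half-built message rather than silently sending a truncated attribute set. Below is a standalone C sketch of that single-exit error-handling shape; the tiny msg API is hypothetical and only mimics the structure of checked puts followed by "goto free_msg", it is not libnl or nl80211.

/* Sketch: append attributes one by one, check every append, and bail to one
 * cleanup label that releases the partially built message.
 * Hypothetical msg API; not nl80211 code.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct msg { char *buf; size_t len, cap; };

static struct msg *msg_new(size_t cap)
{
	struct msg *m = calloc(1, sizeof(*m));

	if (!m || !(m->buf = malloc(cap))) {
		free(m);
		return NULL;
	}
	m->cap = cap;
	return m;
}

static void msg_free(struct msg *m)
{
	if (m)
		free(m->buf);
	free(m);
}

/* returns non-zero on failure, like nla_put*() when the buffer runs out */
static int msg_put(struct msg *m, const char *attr)
{
	size_t n = strlen(attr);

	if (m->len + n > m->cap)
		return -1;
	memcpy(m->buf + m->len, attr, n);
	m->len += n;
	return 0;
}

static int build_event(void)
{
	struct msg *m = msg_new(16);

	if (!m)
		return -1;

	if (msg_put(m, "wiphy=0;") ||
	    msg_put(m, "ifindex=3;") ||		/* overflows the 16-byte buffer */
	    msg_put(m, "mac=00:11:22:33:44:55;"))
		goto free_msg;

	printf("sent: %.*s\n", (int)m->len, m->buf);
	msg_free(m);
	return 0;

 free_msg:
	msg_free(m);		/* never leak the half-built message */
	return -1;
}

int main(void)
{
	return build_event() ? 1 : 0;
}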