Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Pull networking fixes from David Miller:

1) Revert iwlwifi reclaimed packet tracking, it causes problems for a
bunch of folks. From Emmanuel Grumbach.

2) Work limiting code in brcmsmac wifi driver can clear tx status
without processing the event. From Arend van Spriel.

3) rtlwifi USB driver processes wrong SKB, fix from Larry Finger.

4) l2tp tunnel delete can race with close, fix from Tom Parkin.

5) pktgen_add_device() failures are not checked at all, fix from Cong
Wang.

6) Fix unintentional removal of carrier off from tun_detach(),
otherwise we confuse userspace, from Michael S. Tsirkin.

7) Don't leak socket reference counts and ubufs in vhost-net driver,
from Jason Wang.

8) vmxnet3 driver gets its initial carrier state wrong, fix from Neil
Horman.

9) Protect against USB networking devices which spam the host with 0
length frames, from Bjørn Mork.

10) Prevent neighbour overflows in ipv6 for locally destined routes,
from Marcelo Ricardo. This is the best short-term fix for this; a
longer-term fix has been implemented in net-next.

11) L2TP uses ipv4 datagram routines in its ipv6 code, whoops. This
mistake is largely because the ipv6 functions don't even have some
kind of prefix in their names to suggest they are ipv6 specific.
From Tom Parkin.

12) Check SYN packet drops properly in tcp_rcv_fastopen_synack(), from
Yuchung Cheng.

13) Fix races and TX skb freeing bugs in via-rhine's NAPI support, from
Francois Romieu and yours truly.

14) Fix infinite loops and divides by zero in TCP congestion window
handling, from Eric Dumazet, Neal Cardwell, and Ilpo Järvinen.

15) AF_PACKET tx ring handling can leak kernel memory to userspace, fix
from Phil Sutter.

16) Fix error handling in ipv6 GRE tunnel transmit, from Tommi Rantala.

17) Protect XEN netback driver against hostile frontend putting garbage
into the rings, don't leak pages in TX GOP checking, and add proper
resource releasing in error path of xen_netbk_get_requests(). From
Ian Campbell.

18) SCTP authentication keys should be cleared out and released with
kzfree(), from Daniel Borkmann (see the sketch after this list).

19) L2TP is a bit too clever trying to maintain skb->truesize, and ends
up corrupting socket memory accounting to the point where packet
sending is halted indefinitely. Just remove the adjustments
entirely, they aren't really needed. From Eric Dumazet.

20) ATM Iphase driver uses a data type with the same name as the S390
headers, rename to fix the build. From Heiko Carstens.

21) Fix a typo in copying the inner network header offset from one SKB
to another, from Pravin B Shelar.
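
Of the fixes above, item 18 is worth dwelling on because the pattern generalizes beyond SCTP: key material must be scrubbed before the memory is returned to the allocator. A minimal userspace sketch of the kzfree() idea — kzfree() itself is the kernel API; this standalone analogue uses glibc's explicit_bzero(), since a plain memset() right before free() may be optimized away:

#include <stdlib.h>
#include <string.h>

/* Sketch only: zero out secret data before freeing it, so stale
 * secrets cannot leak through reused heap memory. */
static void kzfree_like(void *p, size_t n)
{
    if (!p)
        return;
    explicit_bzero(p, n);   /* unlike memset(), not elided by the compiler */
    free(p);
}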

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (56 commits)
net: sctp: sctp_endpoint_free: zero out secret key data
net: sctp: sctp_setsockopt_auth_key: use kzfree instead of kfree
atm/iphase: rename fregt_t -> ffreg_t
net: usb: fix regression from FLAG_NOARP code
l2tp: dont play with skb->truesize
net: sctp: sctp_auth_key_put: use kzfree instead of kfree
netback: correct netbk_tx_err to handle wrap around.
xen/netback: free already allocated memory on failure in xen_netbk_get_requests
xen/netback: don't leak pages on failure in xen_netbk_tx_check_gop.
xen/netback: shutdown the ring if it contains garbage.
net: qmi_wwan: add more Huawei devices, including E320
net: cdc_ncm: add another Huawei vendor specific device
ipv6/ip6_gre: fix error case handling in ip6gre_tunnel_xmit()
tcp: fix for zero packets_in_flight was too broad
brcmsmac: rework of mac80211 .flush() callback operation
ssb: unregister gpios before unloading ssb
bcma: unregister gpios before unloading bcma
rtlwifi: Fix scheduling while atomic bug
net: usbnet: fix tx_dropped statistics
tcp: ipv6: Update MIB counters for drops
...

+648 -347
+73 -73
drivers/atm/iphase.h
···
 #define SEG_BASE IPHASE5575_FRAG_CONTROL_REG_BASE
 #define REASS_BASE IPHASE5575_REASS_CONTROL_REG_BASE

-typedef volatile u_int freg_t;
+typedef volatile u_int ffreg_t;
 typedef u_int rreg_t;

 typedef struct _ffredn_t {
-    freg_t idlehead_high;   /* Idle cell header (high) */
-    freg_t idlehead_low;    /* Idle cell header (low) */
-    freg_t maxrate;         /* Maximum rate */
-    freg_t stparms;         /* Traffic Management Parameters */
-    freg_t abrubr_abr;      /* ABRUBR Priority Byte 1, TCR Byte 0 */
-    freg_t rm_type;         /* */
-    u_int filler5[0x17 - 0x06];
-    freg_t cmd_reg;         /* Command register */
-    u_int filler18[0x20 - 0x18];
-    freg_t cbr_base;        /* CBR Pointer Base */
-    freg_t vbr_base;        /* VBR Pointer Base */
-    freg_t abr_base;        /* ABR Pointer Base */
-    freg_t ubr_base;        /* UBR Pointer Base */
-    u_int filler24;
-    freg_t vbrwq_base;      /* VBR Wait Queue Base */
-    freg_t abrwq_base;      /* ABR Wait Queue Base */
-    freg_t ubrwq_base;      /* UBR Wait Queue Base */
-    freg_t vct_base;        /* Main VC Table Base */
-    freg_t vcte_base;       /* Extended Main VC Table Base */
-    u_int filler2a[0x2C - 0x2A];
-    freg_t cbr_tab_beg;     /* CBR Table Begin */
-    freg_t cbr_tab_end;     /* CBR Table End */
-    freg_t cbr_pointer;     /* CBR Pointer */
-    u_int filler2f[0x30 - 0x2F];
-    freg_t prq_st_adr;      /* Packet Ready Queue Start Address */
-    freg_t prq_ed_adr;      /* Packet Ready Queue End Address */
-    freg_t prq_rd_ptr;      /* Packet Ready Queue read pointer */
-    freg_t prq_wr_ptr;      /* Packet Ready Queue write pointer */
-    freg_t tcq_st_adr;      /* Transmit Complete Queue Start Address*/
-    freg_t tcq_ed_adr;      /* Transmit Complete Queue End Address */
-    freg_t tcq_rd_ptr;      /* Transmit Complete Queue read pointer */
-    freg_t tcq_wr_ptr;      /* Transmit Complete Queue write pointer*/
-    u_int filler38[0x40 - 0x38];
-    freg_t queue_base;      /* Base address for PRQ and TCQ */
-    freg_t desc_base;       /* Base address of descriptor table */
-    u_int filler42[0x45 - 0x42];
-    freg_t mode_reg_0;      /* Mode register 0 */
-    freg_t mode_reg_1;      /* Mode register 1 */
-    freg_t intr_status_reg; /* Interrupt Status register */
-    freg_t mask_reg;        /* Mask Register */
-    freg_t cell_ctr_high1;  /* Total cell transfer count (high) */
-    freg_t cell_ctr_lo1;    /* Total cell transfer count (low) */
-    freg_t state_reg;       /* Status register */
-    u_int filler4c[0x58 - 0x4c];
-    freg_t curr_desc_num;   /* Contains the current descriptor num */
-    freg_t next_desc;       /* Next descriptor */
-    freg_t next_vc;         /* Next VC */
-    u_int filler5b[0x5d - 0x5b];
-    freg_t present_slot_cnt;/* Present slot count */
-    u_int filler5e[0x6a - 0x5e];
-    freg_t new_desc_num;    /* New descriptor number */
-    freg_t new_vc;          /* New VC */
-    freg_t sched_tbl_ptr;   /* Schedule table pointer */
-    freg_t vbrwq_wptr;      /* VBR wait queue write pointer */
-    freg_t vbrwq_rptr;      /* VBR wait queue read pointer */
-    freg_t abrwq_wptr;      /* ABR wait queue write pointer */
-    freg_t abrwq_rptr;      /* ABR wait queue read pointer */
-    freg_t ubrwq_wptr;      /* UBR wait queue write pointer */
-    freg_t ubrwq_rptr;      /* UBR wait queue read pointer */
-    freg_t cbr_vc;          /* CBR VC */
-    freg_t vbr_sb_vc;       /* VBR SB VC */
-    freg_t abr_sb_vc;       /* ABR SB VC */
-    freg_t ubr_sb_vc;       /* UBR SB VC */
-    freg_t vbr_next_link;   /* VBR next link */
-    freg_t abr_next_link;   /* ABR next link */
-    freg_t ubr_next_link;   /* UBR next link */
-    u_int filler7a[0x7c-0x7a];
-    freg_t out_rate_head;   /* Out of rate head */
-    u_int filler7d[0xca-0x7d];  /* pad out to full address space */
-    freg_t cell_ctr_high1_nc;/* Total cell transfer count (high) */
-    freg_t cell_ctr_lo1_nc; /* Total cell transfer count (low) */
-    u_int fillercc[0x100-0xcc];  /* pad out to full address space */
+    ffreg_t idlehead_high;  /* Idle cell header (high) */
+    ffreg_t idlehead_low;   /* Idle cell header (low) */
+    ffreg_t maxrate;        /* Maximum rate */
+    ffreg_t stparms;        /* Traffic Management Parameters */
+    ffreg_t abrubr_abr;     /* ABRUBR Priority Byte 1, TCR Byte 0 */
+    ffreg_t rm_type;        /* */
+    u_int filler5[0x17 - 0x06];
+    ffreg_t cmd_reg;        /* Command register */
+    u_int filler18[0x20 - 0x18];
+    ffreg_t cbr_base;       /* CBR Pointer Base */
+    ffreg_t vbr_base;       /* VBR Pointer Base */
+    ffreg_t abr_base;       /* ABR Pointer Base */
+    ffreg_t ubr_base;       /* UBR Pointer Base */
+    u_int filler24;
+    ffreg_t vbrwq_base;     /* VBR Wait Queue Base */
+    ffreg_t abrwq_base;     /* ABR Wait Queue Base */
+    ffreg_t ubrwq_base;     /* UBR Wait Queue Base */
+    ffreg_t vct_base;       /* Main VC Table Base */
+    ffreg_t vcte_base;      /* Extended Main VC Table Base */
+    u_int filler2a[0x2C - 0x2A];
+    ffreg_t cbr_tab_beg;    /* CBR Table Begin */
+    ffreg_t cbr_tab_end;    /* CBR Table End */
+    ffreg_t cbr_pointer;    /* CBR Pointer */
+    u_int filler2f[0x30 - 0x2F];
+    ffreg_t prq_st_adr;     /* Packet Ready Queue Start Address */
+    ffreg_t prq_ed_adr;     /* Packet Ready Queue End Address */
+    ffreg_t prq_rd_ptr;     /* Packet Ready Queue read pointer */
+    ffreg_t prq_wr_ptr;     /* Packet Ready Queue write pointer */
+    ffreg_t tcq_st_adr;     /* Transmit Complete Queue Start Address*/
+    ffreg_t tcq_ed_adr;     /* Transmit Complete Queue End Address */
+    ffreg_t tcq_rd_ptr;     /* Transmit Complete Queue read pointer */
+    ffreg_t tcq_wr_ptr;     /* Transmit Complete Queue write pointer*/
+    u_int filler38[0x40 - 0x38];
+    ffreg_t queue_base;     /* Base address for PRQ and TCQ */
+    ffreg_t desc_base;      /* Base address of descriptor table */
+    u_int filler42[0x45 - 0x42];
+    ffreg_t mode_reg_0;     /* Mode register 0 */
+    ffreg_t mode_reg_1;     /* Mode register 1 */
+    ffreg_t intr_status_reg;/* Interrupt Status register */
+    ffreg_t mask_reg;       /* Mask Register */
+    ffreg_t cell_ctr_high1; /* Total cell transfer count (high) */
+    ffreg_t cell_ctr_lo1;   /* Total cell transfer count (low) */
+    ffreg_t state_reg;      /* Status register */
+    u_int filler4c[0x58 - 0x4c];
+    ffreg_t curr_desc_num;  /* Contains the current descriptor num */
+    ffreg_t next_desc;      /* Next descriptor */
+    ffreg_t next_vc;        /* Next VC */
+    u_int filler5b[0x5d - 0x5b];
+    ffreg_t present_slot_cnt;/* Present slot count */
+    u_int filler5e[0x6a - 0x5e];
+    ffreg_t new_desc_num;   /* New descriptor number */
+    ffreg_t new_vc;         /* New VC */
+    ffreg_t sched_tbl_ptr;  /* Schedule table pointer */
+    ffreg_t vbrwq_wptr;     /* VBR wait queue write pointer */
+    ffreg_t vbrwq_rptr;     /* VBR wait queue read pointer */
+    ffreg_t abrwq_wptr;     /* ABR wait queue write pointer */
+    ffreg_t abrwq_rptr;     /* ABR wait queue read pointer */
+    ffreg_t ubrwq_wptr;     /* UBR wait queue write pointer */
+    ffreg_t ubrwq_rptr;     /* UBR wait queue read pointer */
+    ffreg_t cbr_vc;         /* CBR VC */
+    ffreg_t vbr_sb_vc;      /* VBR SB VC */
+    ffreg_t abr_sb_vc;      /* ABR SB VC */
+    ffreg_t ubr_sb_vc;      /* UBR SB VC */
+    ffreg_t vbr_next_link;  /* VBR next link */
+    ffreg_t abr_next_link;  /* ABR next link */
+    ffreg_t ubr_next_link;  /* UBR next link */
+    u_int filler7a[0x7c-0x7a];
+    ffreg_t out_rate_head;  /* Out of rate head */
+    u_int filler7d[0xca-0x7d];  /* pad out to full address space */
+    ffreg_t cell_ctr_high1_nc;/* Total cell transfer count (high) */
+    ffreg_t cell_ctr_lo1_nc;/* Total cell transfer count (low) */
+    u_int fillercc[0x100-0xcc];  /* pad out to full address space */
 } ffredn_t;

 typedef struct _rfredn_t {
+5
drivers/bcma/bcma_private.h
···
 #ifdef CONFIG_BCMA_DRIVER_GPIO
 /* driver_gpio.c */
 int bcma_gpio_init(struct bcma_drv_cc *cc);
+int bcma_gpio_unregister(struct bcma_drv_cc *cc);
 #else
 static inline int bcma_gpio_init(struct bcma_drv_cc *cc)
 {
     return -ENOTSUPP;
+}
+static inline int bcma_gpio_unregister(struct bcma_drv_cc *cc)
+{
+    return 0;
 }
 #endif /* CONFIG_BCMA_DRIVER_GPIO */
+1 -1
drivers/bcma/driver_chipcommon_nflash.c
···
     struct bcma_bus *bus = cc->core->bus;

     if (bus->chipinfo.id != BCMA_CHIP_ID_BCM4706 &&
-        cc->core->id.rev != 0x38) {
+        cc->core->id.rev != 38) {
         bcma_err(bus, "NAND flash on unsupported board!\n");
         return -ENOTSUPP;
     }
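
The one-character change above is easy to misread: the revision check was written with a hex literal, and 0x38 is 56, not 38. A compilable reminder:

#include <assert.h>

int main(void)
{
    assert(0x38 == 56);   /* the old check compared against 56 */
    assert(0x38 != 38);   /* the intended core revision was decimal 38 */
    return 0;
}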
+5
drivers/bcma/driver_gpio.c
···

     return gpiochip_add(chip);
 }
+
+int bcma_gpio_unregister(struct bcma_drv_cc *cc)
+{
+    return gpiochip_remove(&cc->gpio);
+}
+7
drivers/bcma/main.c
···
 void bcma_bus_unregister(struct bcma_bus *bus)
 {
     struct bcma_device *cores[3];
+    int err;
+
+    err = bcma_gpio_unregister(&bus->drv_cc);
+    if (err == -EBUSY)
+        bcma_err(bus, "Some GPIOs are still in use.\n");
+    else if (err)
+        bcma_err(bus, "Can not unregister GPIO driver: %i\n", err);

     cores[0] = bcma_find_core(bus, BCMA_CORE_MIPS_74K);
     cores[1] = bcma_find_core(bus, BCMA_CORE_PCIE);
+1
drivers/net/bonding/bond_sysfs.c
···
         pr_info("%s: Setting primary slave to None.\n",
             bond->dev->name);
         bond->primary_slave = NULL;
+        memset(bond->params.primary, 0, sizeof(bond->params.primary));
         bond_select_active_slave(bond);
         goto out;
     }
+5 -1
drivers/net/can/c_can/c_can.c
···

     priv->write_reg(priv, C_CAN_IFACE(MASK1_REG, iface),
             IFX_WRITE_LOW_16BIT(mask));
+
+    /* According to C_CAN documentation, the reserved bit
+     * in IFx_MASK2 register is fixed 1
+     */
     priv->write_reg(priv, C_CAN_IFACE(MASK2_REG, iface),
-            IFX_WRITE_HIGH_16BIT(mask));
+            IFX_WRITE_HIGH_16BIT(mask) | BIT(13));

     priv->write_reg(priv, C_CAN_IFACE(ARB1_REG, iface),
             IFX_WRITE_LOW_16BIT(id));
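
For readers unfamiliar with the kernel's BIT() helper used above: it expands to (1UL << (n)), so the fix ORs bit 13 into the value written to IFx_MASK2, keeping the documented always-one reserved bit set. A standalone sketch, assuming IFX_WRITE_HIGH_16BIT() extracts the high 16 bits of the mask:

#define BIT(n) (1UL << (n))

/* sketch of the value now written to IFx_MASK2 */
static unsigned long ifx_mask2(unsigned long mask)
{
    unsigned long high16 = (mask >> 16) & 0xffff; /* assumed IFX_WRITE_HIGH_16BIT() */

    return high16 | BIT(13);  /* reserved bit is fixed to 1 */
}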
+4 -4
drivers/net/ethernet/emulex/benet/be.h
···

 #define DRV_VER            "4.4.161.0u"
 #define DRV_NAME           "be2net"
-#define BE_NAME            "ServerEngines BladeEngine2 10Gbps NIC"
-#define BE3_NAME           "ServerEngines BladeEngine3 10Gbps NIC"
-#define OC_NAME            "Emulex OneConnect 10Gbps NIC"
+#define BE_NAME            "Emulex BladeEngine2"
+#define BE3_NAME           "Emulex BladeEngine3"
+#define OC_NAME            "Emulex OneConnect"
 #define OC_NAME_BE         OC_NAME "(be3)"
 #define OC_NAME_LANCER     OC_NAME "(Lancer)"
 #define OC_NAME_SH         OC_NAME "(Skyhawk)"
-#define DRV_DESC           "ServerEngines BladeEngine 10Gbps NIC Driver"
+#define DRV_DESC           "Emulex OneConnect 10Gbps NIC Driver"

 #define BE_VENDOR_ID       0x19a2
 #define EMULEX_VENDOR_ID   0x10df
+1 -1
drivers/net/ethernet/emulex/benet/be_main.c
···
 MODULE_VERSION(DRV_VER);
 MODULE_DEVICE_TABLE(pci, be_dev_ids);
 MODULE_DESCRIPTION(DRV_DESC " " DRV_VER);
-MODULE_AUTHOR("ServerEngines Corporation");
+MODULE_AUTHOR("Emulex Corporation");
 MODULE_LICENSE("GPL");

 static unsigned int num_vfs;
+9
drivers/net/ethernet/intel/e1000e/defines.h
···
 #define E1000_CTRL_FRCDPX  0x00001000  /* Force Duplex */
 #define E1000_CTRL_LANPHYPC_OVERRIDE  0x00010000  /* SW control of LANPHYPC */
 #define E1000_CTRL_LANPHYPC_VALUE     0x00020000  /* SW value of LANPHYPC */
+#define E1000_CTRL_MEHE    0x00080000  /* Memory Error Handling Enable */
 #define E1000_CTRL_SWDPIN0 0x00040000  /* SWDPIN 0 value */
 #define E1000_CTRL_SWDPIN1 0x00080000  /* SWDPIN 1 value */
 #define E1000_CTRL_SWDPIO0 0x00400000  /* SWDPIN 0 Input or output */
···

 #define E1000_PBS_16K E1000_PBA_16K

+/* Uncorrectable/correctable ECC Error counts and enable bits */
+#define E1000_PBECCSTS_CORR_ERR_CNT_MASK    0x000000FF
+#define E1000_PBECCSTS_UNCORR_ERR_CNT_MASK  0x0000FF00
+#define E1000_PBECCSTS_UNCORR_ERR_CNT_SHIFT 8
+#define E1000_PBECCSTS_ECC_ENABLE           0x00010000
+
 #define IFS_MAX 80
 #define IFS_MIN 40
 #define IFS_RATIO 4
···
 #define E1000_ICR_RXSEQ  0x00000008 /* Rx sequence error */
 #define E1000_ICR_RXDMT0 0x00000010 /* Rx desc min. threshold (0) */
 #define E1000_ICR_RXT0   0x00000080 /* Rx timer intr (ring 0) */
+#define E1000_ICR_ECCER  0x00400000 /* Uncorrectable ECC Error */
 #define E1000_ICR_INT_ASSERTED 0x80000000 /* If this bit asserted, the driver should claim the interrupt */
 #define E1000_ICR_RXQ0   0x00100000 /* Rx Queue 0 Interrupt */
 #define E1000_ICR_RXQ1   0x00200000 /* Rx Queue 1 Interrupt */
···
 #define E1000_IMS_RXSEQ  E1000_ICR_RXSEQ  /* Rx sequence error */
 #define E1000_IMS_RXDMT0 E1000_ICR_RXDMT0 /* Rx desc min. threshold */
 #define E1000_IMS_RXT0   E1000_ICR_RXT0   /* Rx timer intr */
+#define E1000_IMS_ECCER  E1000_ICR_ECCER  /* Uncorrectable ECC Error */
 #define E1000_IMS_RXQ0   E1000_ICR_RXQ0   /* Rx Queue 0 Interrupt */
 #define E1000_IMS_RXQ1   E1000_ICR_RXQ1   /* Rx Queue 1 Interrupt */
 #define E1000_IMS_TXQ0   E1000_ICR_TXQ0   /* Tx Queue 0 Interrupt */
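
The PBECCSTS definitions above pack two 8-bit counters into one register. A sketch of the decode the driver performs later in netdev.c, using the same masks:

/* correctable count lives in bits 7:0, uncorrectable in bits 15:8 */
static unsigned int pbeccsts_corr(unsigned int pbeccsts)
{
    return pbeccsts & 0x000000FF;          /* CORR_ERR_CNT_MASK */
}

static unsigned int pbeccsts_uncorr(unsigned int pbeccsts)
{
    return (pbeccsts & 0x0000FF00) >> 8;   /* UNCORR_ERR_CNT_MASK, _SHIFT */
}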
+2
drivers/net/ethernet/intel/e1000e/e1000.h
···

     struct napi_struct napi;

+    unsigned int uncorr_errors;  /* uncorrectable ECC errors */
+    unsigned int corr_errors;    /* correctable ECC errors */
     unsigned int restart_queue;
     u32 txd_cmd;
+2
drivers/net/ethernet/intel/e1000e/ethtool.c
···
     E1000_STAT("dropped_smbus", stats.mgpdc),
     E1000_STAT("rx_dma_failed", rx_dma_failed),
     E1000_STAT("tx_dma_failed", tx_dma_failed),
+    E1000_STAT("uncorr_ecc_errors", uncorr_errors),
+    E1000_STAT("corr_ecc_errors", corr_errors),
 };

 #define E1000_GLOBAL_STATS_LEN ARRAY_SIZE(e1000_gstrings_stats)
+1
drivers/net/ethernet/intel/e1000e/hw.h
···
 #define E1000_POEMB    E1000_PHY_CTRL  /* PHY OEM Bits */
     E1000_PBA      = 0x01000, /* Packet Buffer Allocation - RW */
     E1000_PBS      = 0x01008, /* Packet Buffer Size */
+    E1000_PBECCSTS = 0x0100C, /* Packet Buffer ECC Status - RW */
     E1000_EEMNGCTL = 0x01010, /* MNG EEprom Control */
     E1000_EEWR     = 0x0102C, /* EEPROM Write Register - RW */
     E1000_FLOP     = 0x0103C, /* FLASH Opcode Register */
+11
drivers/net/ethernet/intel/e1000e/ich8lan.c
···
     if (hw->mac.type == e1000_ich8lan)
         reg |= (E1000_RFCTL_IPV6_EX_DIS | E1000_RFCTL_NEW_IPV6_EXT_DIS);
     ew32(RFCTL, reg);
+
+    /* Enable ECC on Lynxpoint */
+    if (hw->mac.type == e1000_pch_lpt) {
+        reg = er32(PBECCSTS);
+        reg |= E1000_PBECCSTS_ECC_ENABLE;
+        ew32(PBECCSTS, reg);
+
+        reg = er32(CTRL);
+        reg |= E1000_CTRL_MEHE;
+        ew32(CTRL, reg);
+    }
 }

 /**
+46
drivers/net/ethernet/intel/e1000e/netdev.c
···
         mod_timer(&adapter->watchdog_timer, jiffies + 1);
     }

+    /* Reset on uncorrectable ECC error */
+    if ((icr & E1000_ICR_ECCER) && (hw->mac.type == e1000_pch_lpt)) {
+        u32 pbeccsts = er32(PBECCSTS);
+
+        adapter->corr_errors +=
+            pbeccsts & E1000_PBECCSTS_CORR_ERR_CNT_MASK;
+        adapter->uncorr_errors +=
+            (pbeccsts & E1000_PBECCSTS_UNCORR_ERR_CNT_MASK) >>
+            E1000_PBECCSTS_UNCORR_ERR_CNT_SHIFT;
+
+        /* Do the reset outside of interrupt context */
+        schedule_work(&adapter->reset_task);
+
+        /* return immediately since reset is imminent */
+        return IRQ_HANDLED;
+    }
+
     if (napi_schedule_prep(&adapter->napi)) {
         adapter->total_tx_bytes = 0;
         adapter->total_tx_packets = 0;
···
     /* guard against interrupt when we're going down */
     if (!test_bit(__E1000_DOWN, &adapter->state))
         mod_timer(&adapter->watchdog_timer, jiffies + 1);
+    }
+
+    /* Reset on uncorrectable ECC error */
+    if ((icr & E1000_ICR_ECCER) && (hw->mac.type == e1000_pch_lpt)) {
+        u32 pbeccsts = er32(PBECCSTS);
+
+        adapter->corr_errors +=
+            pbeccsts & E1000_PBECCSTS_CORR_ERR_CNT_MASK;
+        adapter->uncorr_errors +=
+            (pbeccsts & E1000_PBECCSTS_UNCORR_ERR_CNT_MASK) >>
+            E1000_PBECCSTS_UNCORR_ERR_CNT_SHIFT;
+
+        /* Do the reset outside of interrupt context */
+        schedule_work(&adapter->reset_task);
+
+        /* return immediately since reset is imminent */
+        return IRQ_HANDLED;
     }

     if (napi_schedule_prep(&adapter->napi)) {
···
     if (adapter->msix_entries) {
         ew32(EIAC_82574, adapter->eiac_mask & E1000_EIAC_MASK_82574);
         ew32(IMS, adapter->eiac_mask | E1000_IMS_OTHER | E1000_IMS_LSC);
+    } else if (hw->mac.type == e1000_pch_lpt) {
+        ew32(IMS, IMS_ENABLE_MASK | E1000_IMS_ECCER);
     } else {
         ew32(IMS, IMS_ENABLE_MASK);
     }
···
     adapter->stats.mgptc += er32(MGTPTC);
     adapter->stats.mgprc += er32(MGTPRC);
     adapter->stats.mgpdc += er32(MGTPDC);
+
+    /* Correctable ECC Errors */
+    if (hw->mac.type == e1000_pch_lpt) {
+        u32 pbeccsts = er32(PBECCSTS);
+        adapter->corr_errors +=
+            pbeccsts & E1000_PBECCSTS_CORR_ERR_CNT_MASK;
+        adapter->uncorr_errors +=
+            (pbeccsts & E1000_PBECCSTS_UNCORR_ERR_CNT_MASK) >>
+            E1000_PBECCSTS_UNCORR_ERR_CNT_SHIFT;
+    }
 }

 /**
+2 -6
drivers/net/ethernet/via/via-rhine.c
···
                  rp->tx_skbuff[entry]->len,
                  PCI_DMA_TODEVICE);
         }
-        dev_kfree_skb_irq(rp->tx_skbuff[entry]);
+        dev_kfree_skb(rp->tx_skbuff[entry]);
         rp->tx_skbuff[entry] = NULL;
         entry = (++rp->dirty_tx) % TX_RING_SIZE;
     }
···
     if (intr_status & IntrPCIErr)
         netif_warn(rp, hw, dev, "PCI error\n");

-    napi_disable(&rp->napi);
-    rhine_irq_disable(rp);
-    /* Slow and safe. Consider __napi_schedule as a replacement ? */
-    napi_enable(&rp->napi);
-    napi_schedule(&rp->napi);
+    iowrite16(RHINE_EVENT & 0xffff, rp->base + IntrEnable);

 out_unlock:
     mutex_unlock(&rp->task_lock);
+24 -14
drivers/net/tun.c
···
 }

 static void tun_flow_update(struct tun_struct *tun, u32 rxhash,
-                u16 queue_index)
+                struct tun_file *tfile)
 {
     struct hlist_head *head;
     struct tun_flow_entry *e;
     unsigned long delay = tun->ageing_time;
+    u16 queue_index = tfile->queue_index;

     if (!rxhash)
         return;
···
     rcu_read_lock();

-    if (tun->numqueues == 1)
+    /* We may get a very small possibility of OOO during switching, not
+     * worth to optimize.*/
+    if (tun->numqueues == 1 || tfile->detached)
         goto unlock;

     e = tun_flow_find(head, rxhash);
···

     tun = rtnl_dereference(tfile->tun);

-    if (tun) {
+    if (tun && !tfile->detached) {
         u16 index = tfile->queue_index;
         BUG_ON(index >= tun->numqueues);
         dev = tun->dev;

         rcu_assign_pointer(tun->tfiles[index],
                    tun->tfiles[tun->numqueues - 1]);
-        rcu_assign_pointer(tfile->tun, NULL);
         ntfile = rtnl_dereference(tun->tfiles[index]);
         ntfile->queue_index = index;

         --tun->numqueues;
-        if (clean)
+        if (clean) {
+            rcu_assign_pointer(tfile->tun, NULL);
             sock_put(&tfile->sk);
-        else
+        } else
             tun_disable_queue(tun, tfile);

         synchronize_net();
···
     }

     if (clean) {
-        if (tun && tun->numqueues == 0 && tun->numdisabled == 0 &&
-            !(tun->flags & TUN_PERSIST))
-            if (tun->dev->reg_state == NETREG_REGISTERED)
+        if (tun && tun->numqueues == 0 && tun->numdisabled == 0) {
+            netif_carrier_off(tun->dev);
+
+            if (!(tun->flags & TUN_PERSIST) &&
+                tun->dev->reg_state == NETREG_REGISTERED)
                 unregister_netdevice(tun->dev);
+        }

         BUG_ON(!test_bit(SOCK_EXTERNALLY_ALLOCATED,
                  &tfile->socket.flags));
···
         wake_up_all(&tfile->wq.wait);
         rcu_assign_pointer(tfile->tun, NULL);
         --tun->numqueues;
+    }
+    list_for_each_entry(tfile, &tun->disabled, next) {
+        wake_up_all(&tfile->wq.wait);
+        rcu_assign_pointer(tfile->tun, NULL);
     }
     BUG_ON(tun->numqueues != 0);
···
         goto out;

     err = -EINVAL;
-    if (rtnl_dereference(tfile->tun))
+    if (rtnl_dereference(tfile->tun) && !tfile->detached)
         goto out;

     err = -EBUSY;
···
     tun->dev->stats.rx_packets++;
     tun->dev->stats.rx_bytes += len;

-    tun_flow_update(tun, rxhash, tfile->queue_index);
+    tun_flow_update(tun, rxhash, tfile);
     return total_len;
 }
···
             device_create_file(&tun->dev->dev, &dev_attr_owner) ||
             device_create_file(&tun->dev->dev, &dev_attr_group))
             pr_err("Failed to create tun sysfs files\n");
-
-        netif_carrier_on(tun->dev);
     }
+
+    netif_carrier_on(tun->dev);

     tun_debug(KERN_INFO, tun, "tun_set_iff\n");
···
         ret = tun_attach(tun, file);
     } else if (ifr->ifr_flags & IFF_DETACH_QUEUE) {
         tun = rtnl_dereference(tfile->tun);
-        if (!tun || !(tun->flags & TUN_TAP_MQ))
+        if (!tun || !(tun->flags & TUN_TAP_MQ) || tfile->detached)
             ret = -EINVAL;
         else
             __tun_detach(tfile, false);
+3
drivers/net/usb/cdc_ncm.c
···
     { USB_VENDOR_AND_INTERFACE_INFO(0x12d1, 0xff, 0x02, 0x46),
       .driver_info = (unsigned long)&wwan_info,
     },
+    { USB_VENDOR_AND_INTERFACE_INFO(0x12d1, 0xff, 0x02, 0x76),
+      .driver_info = (unsigned long)&wwan_info,
+    },

     /* Infineon(now Intel) HSPA Modem platform */
     { USB_DEVICE_AND_INTERFACE_INFO(0x1519, 0x0443,
+13
drivers/net/usb/qmi_wwan.c
···
       USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, USB_CLASS_VENDOR_SPEC, 1, 57),
       .driver_info = (unsigned long)&qmi_wwan_info,
     },
+    { /* HUAWEI_INTERFACE_NDIS_CONTROL_QUALCOMM */
+      USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, USB_CLASS_VENDOR_SPEC, 0x01, 0x69),
+      .driver_info = (unsigned long)&qmi_wwan_info,
+    },

     /* 2. Combined interface devices matching on class+protocol */
     { /* Huawei E367 and possibly others in "Windows mode" */
···
     },
     { /* Huawei E392, E398 and possibly others in "Windows mode" */
       USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, USB_CLASS_VENDOR_SPEC, 1, 17),
+      .driver_info = (unsigned long)&qmi_wwan_info,
+    },
+    { /* HUAWEI_NDIS_SINGLE_INTERFACE_VDF */
+      USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, USB_CLASS_VENDOR_SPEC, 0x01, 0x37),
+      .driver_info = (unsigned long)&qmi_wwan_info,
+    },
+    { /* HUAWEI_INTERFACE_NDIS_HW_QUALCOMM */
+      USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, USB_CLASS_VENDOR_SPEC, 0x01, 0x67),
       .driver_info = (unsigned long)&qmi_wwan_info,
     },
     { /* Pantech UML290, P4200 and more */
···
     {QMI_FIXED_INTF(0x1199, 0x901c, 8)},    /* Sierra Wireless EM7700 */
     {QMI_FIXED_INTF(0x1bbb, 0x011e, 4)},    /* Telekom Speedstick LTE II (Alcatel One Touch L100V LTE) */
     {QMI_FIXED_INTF(0x2357, 0x0201, 4)},    /* TP-LINK HSUPA Modem MA180 */
+    {QMI_FIXED_INTF(0x1bc7, 0x1200, 5)},    /* Telit LE920 */

     /* 4. Gobi 1000 devices */
     {QMI_GOBI1K_DEVICE(0x05c6, 0x9212)},    /* Acer Gobi Modem Device */
+29 -6
drivers/net/usb/usbnet.c
···
     unsigned long lockflags;
     size_t size = dev->rx_urb_size;

+    /* prevent rx skb allocation when error ratio is high */
+    if (test_bit(EVENT_RX_KILL, &dev->flags)) {
+        usb_free_urb(urb);
+        return -ENOLINK;
+    }
+
     skb = __netdev_alloc_skb_ip_align(dev->net, size, flags);
     if (!skb) {
         netif_dbg(dev, rx_err, dev->net, "no rx skb\n");
···
         dev->net->stats.rx_errors++;
         netif_dbg(dev, rx_err, dev->net, "rx status %d\n", urb_status);
         break;
+    }
+
+    /* stop rx if packet error rate is high */
+    if (++dev->pkt_cnt > 30) {
+        dev->pkt_cnt = 0;
+        dev->pkt_err = 0;
+    } else {
+        if (state == rx_cleanup)
+            dev->pkt_err++;
+        if (dev->pkt_err > 20)
+            set_bit(EVENT_RX_KILL, &dev->flags);
     }

     state = defer_bh(dev, skb, &dev->rxq, state);
···
            (dev->driver_info->flags & FLAG_FRAMING_RN) ? "RNDIS" :
            (dev->driver_info->flags & FLAG_FRAMING_AX) ? "ASIX" :
            "simple");
+
+    /* reset rx error state */
+    dev->pkt_cnt = 0;
+    dev->pkt_err = 0;
+    clear_bit(EVENT_RX_KILL, &dev->flags);

     // delay posting reads until we're fully open
     tasklet_schedule (&dev->bh);
···
     if (info->tx_fixup) {
         skb = info->tx_fixup (dev, skb, GFP_ATOMIC);
         if (!skb) {
-            if (netif_msg_tx_err(dev)) {
-                netif_dbg(dev, tx_err, dev->net, "can't tx_fixup skb\n");
-                goto drop;
-            } else {
-                /* cdc_ncm collected packet; waits for more */
+            /* packet collected; minidriver waiting for more */
+            if (info->flags & FLAG_MULTI_PACKET)
                 goto not_drop;
-            }
+            netif_dbg(dev, tx_err, dev->net, "can't tx_fixup skb\n");
+            goto drop;
         }
     }
     length = skb->len;
···
             netdev_dbg(dev->net, "bogus skb state %d\n", entry->state);
         }
     }
+
+    /* restart RX again after disabling due to high error rate */
+    clear_bit(EVENT_RX_KILL, &dev->flags);

     // waiting for all pending urbs to complete?
     if (dev->wait) {
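
The usbnet change above is a simple windowed circuit breaker: count packets in windows of 30, and if more than 20 of a window's packets ended in the rx_cleanup (error) state, stop posting rx URBs until the device is reopened or the tx path clears the flag. A self-contained restatement of the heuristic:

#include <stdbool.h>

struct rx_stats {
    unsigned int pkt_cnt;  /* packets seen in the current window */
    unsigned int pkt_err;  /* errors seen in the current window */
    bool kill;             /* mirrors EVENT_RX_KILL */
};

static void rx_account(struct rx_stats *s, bool error)
{
    if (++s->pkt_cnt > 30) {        /* window over: start counting afresh */
        s->pkt_cnt = 0;
        s->pkt_err = 0;
    } else {
        if (error)
            s->pkt_err++;
        if (s->pkt_err > 20)        /* >20 errors within a 30-packet window */
            s->kill = true;
    }
}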
+3 -4
drivers/net/vmxnet3/vmxnet3_drv.c
···
     if (ret & 1) { /* Link is up. */
         printk(KERN_INFO "%s: NIC Link is Up %d Mbps\n",
                adapter->netdev->name, adapter->link_speed);
-        if (!netif_carrier_ok(adapter->netdev))
-            netif_carrier_on(adapter->netdev);
+        netif_carrier_on(adapter->netdev);

         if (affectTxQueue) {
             for (i = 0; i < adapter->num_tx_queues; i++)
···
     } else {
         printk(KERN_INFO "%s: NIC Link is Down\n",
                adapter->netdev->name);
-        if (netif_carrier_ok(adapter->netdev))
-            netif_carrier_off(adapter->netdev);
+        netif_carrier_off(adapter->netdev);

         if (affectTxQueue) {
             for (i = 0; i < adapter->num_tx_queues; i++)
···
     netif_set_real_num_tx_queues(adapter->netdev, adapter->num_tx_queues);
     netif_set_real_num_rx_queues(adapter->netdev, adapter->num_rx_queues);

+    netif_carrier_off(netdev);
     err = register_netdev(netdev);

     if (err) {
+21 -14
drivers/net/wireless/brcm80211/brcmsmac/mac80211_if.c
···
 #include "debug.h"

 #define N_TX_QUEUES    4 /* #tx queues on mac80211<->driver interface */
+#define BRCMS_FLUSH_TIMEOUT    500 /* msec */

 /* Flags we support */
 #define MAC_FILTERS (FIF_PROMISC_IN_BSS | \
···
     wiphy_rfkill_set_hw_state(wl->pub->ieee_hw->wiphy, blocked);
 }

+static bool brcms_tx_flush_completed(struct brcms_info *wl)
+{
+    bool result;
+
+    spin_lock_bh(&wl->lock);
+    result = brcms_c_tx_flush_completed(wl->wlc);
+    spin_unlock_bh(&wl->lock);
+    return result;
+}
+
 static void brcms_ops_flush(struct ieee80211_hw *hw, bool drop)
 {
     struct brcms_info *wl = hw->priv;
+    int ret;

     no_printk("%s: drop = %s\n", __func__, drop ? "true" : "false");

-    /* wait for packet queue and dma fifos to run empty */
-    spin_lock_bh(&wl->lock);
-    brcms_c_wait_for_tx_completion(wl->wlc, drop);
-    spin_unlock_bh(&wl->lock);
+    ret = wait_event_timeout(wl->tx_flush_wq,
+                 brcms_tx_flush_completed(wl),
+                 msecs_to_jiffies(BRCMS_FLUSH_TIMEOUT));
+
+    brcms_dbg_mac80211(wl->wlc->hw->d11core,
+               "ret=%d\n", jiffies_to_msecs(ret));
 }

 static const struct ieee80211_ops brcms_ops = {
···

 done:
     spin_unlock_bh(&wl->lock);
+    wake_up(&wl->tx_flush_wq);
 }

 /*
···
     wl->wiphy = hw->wiphy;

     atomic_set(&wl->callbacks, 0);
+
+    init_waitqueue_head(&wl->tx_flush_wq);

     /* setup the bottom half handler */
     tasklet_init(&wl->tasklet, brcms_dpc, (unsigned long) wl);
···
     wiphy_rfkill_start_polling(wl->pub->ieee_hw->wiphy);
     spin_lock_bh(&wl->lock);
     return blocked;
 }
-
-/*
- * precondition: perimeter lock has been acquired
- */
-void brcms_msleep(struct brcms_info *wl, uint ms)
-{
-    spin_unlock_bh(&wl->lock);
-    msleep(ms);
-    spin_lock_bh(&wl->lock);
-}
+2 -1
drivers/net/wireless/brcm80211/brcmsmac/mac80211_if.h
···
     spinlock_t lock;     /* per-device perimeter lock */
     spinlock_t isr_lock; /* per-device ISR synchronization lock */

+    /* tx flush */
+    wait_queue_head_t tx_flush_wq;

     /* timer related fields */
     atomic_t callbacks;  /* # outstanding callback functions */
···
 extern void brcms_free_timer(struct brcms_timer *timer);
 extern void brcms_add_timer(struct brcms_timer *timer, uint ms, int periodic);
 extern bool brcms_del_timer(struct brcms_timer *timer);
-extern void brcms_msleep(struct brcms_info *wl, uint ms);
 extern void brcms_dpc(unsigned long data);
 extern void brcms_timer(struct brcms_timer *t);
 extern void brcms_fatal_error(struct brcms_info *wl);
+12 -28
drivers/net/wireless/brcm80211/brcmsmac/main.c
···
 static bool
 brcms_b_txstatus(struct brcms_hardware *wlc_hw, bool bound, bool *fatal)
 {
-    bool morepending = false;
     struct bcma_device *core;
     struct tx_status txstatus, *txs;
     u32 s1, s2;
···
     txs = &txstatus;
     core = wlc_hw->d11core;
     *fatal = false;
-    s1 = bcma_read32(core, D11REGOFFS(frmtxstatus));
-    while (!(*fatal)
-           && (s1 & TXS_V)) {
-        /* !give others some time to run! */
-        if (n >= max_tx_num) {
-            morepending = true;
-            break;
-        }

+    while (n < max_tx_num) {
+        s1 = bcma_read32(core, D11REGOFFS(frmtxstatus));
         if (s1 == 0xffffffff) {
             brcms_err(core, "wl%d: %s: dead chip\n", wlc_hw->unit,
                   __func__);
             *fatal = true;
             return false;
         }
-        s2 = bcma_read32(core, D11REGOFFS(frmtxstatus2));
+        /* only process when valid */
+        if (!(s1 & TXS_V))
+            break;

+        s2 = bcma_read32(core, D11REGOFFS(frmtxstatus2));
         txs->status = s1 & TXS_STATUS_MASK;
         txs->frameid = (s1 & TXS_FID_MASK) >> TXS_FID_SHIFT;
         txs->sequence = s2 & TXS_SEQ_MASK;
···
         txs->lasttxtime = 0;

         *fatal = brcms_c_dotxstatus(wlc_hw->wlc, txs);
-
-        s1 = bcma_read32(core, D11REGOFFS(frmtxstatus));
+        if (*fatal == true)
+            return false;
         n++;
     }

-    if (*fatal)
-        return false;
-
-    return morepending;
+    return n >= max_tx_num;
 }

 static void brcms_c_tbtt(struct brcms_c_info *wlc)
···
     return wlc->band->bandunit;
 }

-void brcms_c_wait_for_tx_completion(struct brcms_c_info *wlc, bool drop)
+bool brcms_c_tx_flush_completed(struct brcms_c_info *wlc)
 {
-    int timeout = 20;
     int i;

     /* Kick DMA to send any pending AMPDU */
     for (i = 0; i < ARRAY_SIZE(wlc->hw->di); i++)
         if (wlc->hw->di[i])
-            dma_txflush(wlc->hw->di[i]);
+            dma_kick_tx(wlc->hw->di[i]);

-    /* wait for queue and DMA fifos to run dry */
-    while (brcms_txpktpendtot(wlc) > 0) {
-        brcms_msleep(wlc->wl, 1);
-
-        if (--timeout == 0)
-            break;
-    }
-
-    WARN_ON_ONCE(timeout == 0);
+    return !brcms_txpktpendtot(wlc);
 }

 void brcms_c_set_beacon_listen_interval(struct brcms_c_info *wlc, u8 interval)
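
The brcmsmac rework above (fix 2 and the .flush() commit) replaces a lock-dropping msleep() poll with a wait queue: the flush path sleeps on wl->tx_flush_wq with a 500 ms timeout, and the tx-completion path wakes it. A userspace analogue of that shape using pthreads — names and the single tx_pending flag are illustrative only:

#include <pthread.h>
#include <stdbool.h>
#include <time.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t flush_done = PTHREAD_COND_INITIALIZER;
static bool tx_pending;

/* waiter: plays the role of wait_event_timeout(wl->tx_flush_wq, ...) */
static int wait_for_flush(unsigned int timeout_ms)
{
    struct timespec ts;
    int err = 0;

    clock_gettime(CLOCK_REALTIME, &ts);
    ts.tv_sec += timeout_ms / 1000;
    ts.tv_nsec += (timeout_ms % 1000) * 1000000L;
    if (ts.tv_nsec >= 1000000000L) {
        ts.tv_sec++;
        ts.tv_nsec -= 1000000000L;
    }

    pthread_mutex_lock(&lock);
    while (tx_pending && err == 0)
        err = pthread_cond_timedwait(&flush_done, &lock, &ts);
    pthread_mutex_unlock(&lock);
    return err;    /* 0 when flushed, ETIMEDOUT on timeout */
}

/* completion path: plays the role of wake_up(&wl->tx_flush_wq) */
static void tx_complete(void)
{
    pthread_mutex_lock(&lock);
    tx_pending = false;
    pthread_cond_signal(&flush_done);
    pthread_mutex_unlock(&lock);
}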
+1 -2
drivers/net/wireless/brcm80211/brcmsmac/pub.h
···
 extern void brcms_c_scan_start(struct brcms_c_info *wlc);
 extern void brcms_c_scan_stop(struct brcms_c_info *wlc);
 extern int brcms_c_get_curband(struct brcms_c_info *wlc);
-extern void brcms_c_wait_for_tx_completion(struct brcms_c_info *wlc,
-                       bool drop);
 extern int brcms_c_set_channel(struct brcms_c_info *wlc, u16 channel);
 extern int brcms_c_set_rate_limit(struct brcms_c_info *wlc, u16 srl, u16 lrl);
 extern void brcms_c_get_current_rateset(struct brcms_c_info *wlc,
···
 extern int brcms_c_get_tx_power(struct brcms_c_info *wlc);
 extern bool brcms_c_check_radio_disabled(struct brcms_c_info *wlc);
 extern void brcms_c_mute(struct brcms_c_info *wlc, bool on);
+extern bool brcms_c_tx_flush_completed(struct brcms_c_info *wlc);

 #endif /* _BRCM_PUB_H_ */
+7 -17
drivers/net/wireless/iwlwifi/dvm/tx.c
···
         next_reclaimed = ssn;
     }

+    if (tid != IWL_TID_NON_QOS) {
+        priv->tid_data[sta_id][tid].next_reclaimed =
+            next_reclaimed;
+        IWL_DEBUG_TX_REPLY(priv, "Next reclaimed packet:%d\n",
+                   next_reclaimed);
+    }
+
     iwl_trans_reclaim(priv->trans, txq_id, ssn, &skbs);

     iwlagn_check_ratid_empty(priv, sta_id, tid);
···
         if (!is_agg)
             iwlagn_non_agg_tx_status(priv, ctx, hdr->addr1);

-        /*
-         * W/A for FW bug - the seq_ctl isn't updated when the
-         * queues are flushed. Fetch it from the packet itself
-         */
-        if (!is_agg && status == TX_STATUS_FAIL_FIFO_FLUSHED) {
-            next_reclaimed = le16_to_cpu(hdr->seq_ctrl);
-            next_reclaimed =
-                SEQ_TO_SN(next_reclaimed + 0x10);
-        }
-
         is_offchannel_skb =
             (info->flags & IEEE80211_TX_CTL_TX_OFFCHAN);
         freed++;
-    }
-
-    if (tid != IWL_TID_NON_QOS) {
-        priv->tid_data[sta_id][tid].next_reclaimed =
-            next_reclaimed;
-        IWL_DEBUG_TX_REPLY(priv, "Next reclaimed packet:%d\n",
-                   next_reclaimed);
     }

     WARN_ON(!is_agg && freed != 1);
+5 -4
drivers/net/wireless/mwifiex/scan.c
···
         dev_err(adapter->dev, "SCAN_RESP: too many AP returned (%d)\n",
             scan_rsp->number_of_sets);
         ret = -1;
-        goto done;
+        goto check_next_scan;
     }

     bytes_left = le16_to_cpu(scan_rsp->bss_descript_size);
···
         if (!beacon_size || beacon_size > bytes_left) {
             bss_info += bytes_left;
             bytes_left = 0;
-            return -1;
+            ret = -1;
+            goto check_next_scan;
         }

         /* Initialize the current working beacon pointer for this BSS
···
             dev_err(priv->adapter->dev,
                 "%s: bytes left < IE length\n",
                 __func__);
-            goto done;
+            goto check_next_scan;
         }
         if (element_id == WLAN_EID_DS_PARAMS) {
             channel = *(current_ptr + sizeof(struct ieee_types_header));
···
         }
     }

+check_next_scan:
     spin_lock_irqsave(&adapter->scan_pending_q_lock, flags);
     if (list_empty(&adapter->scan_pending_q)) {
         spin_unlock_irqrestore(&adapter->scan_pending_q_lock, flags);
···
         }
     }

-done:
     return ret;
 }
+4 -3
drivers/net/wireless/rtlwifi/base.c
···
              is_tx ? "Tx" : "Rx");

         if (is_tx) {
-            rtl_lps_leave(hw);
+            schedule_work(&rtlpriv->
+                      works.lps_leave_work);
             ppsc->last_delaylps_stamp_jiffies =
                 jiffies;
         }
···
         }
     } else if (ETH_P_ARP == ether_type) {
         if (is_tx) {
-            rtl_lps_leave(hw);
+            schedule_work(&rtlpriv->works.lps_leave_work);
             ppsc->last_delaylps_stamp_jiffies = jiffies;
         }
···
             "802.1X %s EAPOL pkt!!\n", is_tx ? "Tx" : "Rx");

         if (is_tx) {
-            rtl_lps_leave(hw);
+            schedule_work(&rtlpriv->works.lps_leave_work);
             ppsc->last_delaylps_stamp_jiffies = jiffies;
         }
+2 -2
drivers/net/wireless/rtlwifi/usb.c
···
     WARN_ON(skb_queue_empty(&rx_queue));
     while (!skb_queue_empty(&rx_queue)) {
         _skb = skb_dequeue(&rx_queue);
-        _rtl_usb_rx_process_agg(hw, skb);
-        ieee80211_rx_irqsafe(hw, skb);
+        _rtl_usb_rx_process_agg(hw, _skb);
+        ieee80211_rx_irqsafe(hw, _skb);
     }
 }
+3
drivers/net/xen-netback/common.h
···
 /* Notify xenvif that ring now has space to send an skb to the frontend */
 void xenvif_notify_tx_completion(struct xenvif *vif);

+/* Prevent the device from generating any further traffic. */
+void xenvif_carrier_off(struct xenvif *vif);
+
 /* Returns number of ring slots required to send an skb to the frontend */
 unsigned int xen_netbk_count_skb_slots(struct xenvif *vif, struct sk_buff *skb);
+14 -9
drivers/net/xen-netback/interface.c
···
     return err;
 }

-void xenvif_disconnect(struct xenvif *vif)
+void xenvif_carrier_off(struct xenvif *vif)
 {
     struct net_device *dev = vif->dev;
-    if (netif_carrier_ok(dev)) {
-        rtnl_lock();
-        netif_carrier_off(dev); /* discard queued packets */
-        if (netif_running(dev))
-            xenvif_down(vif);
-        rtnl_unlock();
-        xenvif_put(vif);
-    }
+
+    rtnl_lock();
+    netif_carrier_off(dev); /* discard queued packets */
+    if (netif_running(dev))
+        xenvif_down(vif);
+    rtnl_unlock();
+    xenvif_put(vif);
+}
+
+void xenvif_disconnect(struct xenvif *vif)
+{
+    if (netif_carrier_ok(vif->dev))
+        xenvif_carrier_off(vif);

     atomic_dec(&vif->refcnt);
     wait_event(vif->waiting_to_free, atomic_read(&vif->refcnt) == 0);
+71 -44
drivers/net/xen-netback/netback.c
···
     atomic_dec(&netbk->netfront_count);
 }

-static void xen_netbk_idx_release(struct xen_netbk *netbk, u16 pending_idx);
+static void xen_netbk_idx_release(struct xen_netbk *netbk, u16 pending_idx,
+                  u8 status);
 static void make_tx_response(struct xenvif *vif,
                  struct xen_netif_tx_request *txp,
                  s8       st);
···
     do {
         make_tx_response(vif, txp, XEN_NETIF_RSP_ERROR);
-        if (cons >= end)
+        if (cons == end)
             break;
         txp = RING_GET_REQUEST(&vif->tx, cons++);
     } while (1);
     vif->tx.req_cons = cons;
     xen_netbk_check_rx_xenvif(vif);
+    xenvif_put(vif);
+}
+
+static void netbk_fatal_tx_err(struct xenvif *vif)
+{
+    netdev_err(vif->dev, "fatal error; disabling device\n");
+    xenvif_carrier_off(vif);
     xenvif_put(vif);
 }
···
     do {
         if (frags >= work_to_do) {
-            netdev_dbg(vif->dev, "Need more frags\n");
+            netdev_err(vif->dev, "Need more frags\n");
+            netbk_fatal_tx_err(vif);
             return -frags;
         }

         if (unlikely(frags >= MAX_SKB_FRAGS)) {
-            netdev_dbg(vif->dev, "Too many frags\n");
+            netdev_err(vif->dev, "Too many frags\n");
+            netbk_fatal_tx_err(vif);
             return -frags;
         }

         memcpy(txp, RING_GET_REQUEST(&vif->tx, cons + frags),
                sizeof(*txp));
         if (txp->size > first->size) {
-            netdev_dbg(vif->dev, "Frags galore\n");
+            netdev_err(vif->dev, "Frag is bigger than frame.\n");
+            netbk_fatal_tx_err(vif);
             return -frags;
         }
···
         frags++;

         if (unlikely((txp->offset + txp->size) > PAGE_SIZE)) {
-            netdev_dbg(vif->dev, "txp->offset: %x, size: %u\n",
+            netdev_err(vif->dev, "txp->offset: %x, size: %u\n",
                    txp->offset, txp->size);
+            netbk_fatal_tx_err(vif);
             return -frags;
         }
     } while ((txp++)->flags & XEN_NETTXF_more_data);
···
         pending_idx = netbk->pending_ring[index];
         page = xen_netbk_alloc_page(netbk, skb, pending_idx);
         if (!page)
-            return NULL;
+            goto err;

         gop->source.u.ref = txp->gref;
         gop->source.domid = vif->domid;
···
     }

     return gop;
+err:
+    /* Unwind, freeing all pages and sending error responses. */
+    while (i-- > start) {
+        xen_netbk_idx_release(netbk, frag_get_pending_idx(&frags[i]),
+                      XEN_NETIF_RSP_ERROR);
+    }
+    /* The head too, if necessary. */
+    if (start)
+        xen_netbk_idx_release(netbk, pending_idx, XEN_NETIF_RSP_ERROR);
+
+    return NULL;
 }

 static int xen_netbk_tx_check_gop(struct xen_netbk *netbk,
···
 {
     struct gnttab_copy *gop = *gopp;
     u16 pending_idx = *((u16 *)skb->data);
-    struct pending_tx_info *pending_tx_info = netbk->pending_tx_info;
-    struct xenvif *vif = pending_tx_info[pending_idx].vif;
-    struct xen_netif_tx_request *txp;
     struct skb_shared_info *shinfo = skb_shinfo(skb);
     int nr_frags = shinfo->nr_frags;
     int i, err, start;

     /* Check status of header. */
     err = gop->status;
-    if (unlikely(err)) {
-        pending_ring_idx_t index;
-        index = pending_index(netbk->pending_prod++);
-        txp = &pending_tx_info[pending_idx].req;
-        make_tx_response(vif, txp, XEN_NETIF_RSP_ERROR);
-        netbk->pending_ring[index] = pending_idx;
-        xenvif_put(vif);
-    }
+    if (unlikely(err))
+        xen_netbk_idx_release(netbk, pending_idx, XEN_NETIF_RSP_ERROR);

     /* Skip first skb fragment if it is on same page as header fragment. */
     start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);

     for (i = start; i < nr_frags; i++) {
         int j, newerr;
-        pending_ring_idx_t index;

         pending_idx = frag_get_pending_idx(&shinfo->frags[i]);
···
         if (likely(!newerr)) {
             /* Had a previous error? Invalidate this fragment. */
             if (unlikely(err))
-                xen_netbk_idx_release(netbk, pending_idx);
+                xen_netbk_idx_release(netbk, pending_idx, XEN_NETIF_RSP_OKAY);
             continue;
         }

         /* Error on this fragment: respond to client with an error. */
-        txp = &netbk->pending_tx_info[pending_idx].req;
-        make_tx_response(vif, txp, XEN_NETIF_RSP_ERROR);
-        index = pending_index(netbk->pending_prod++);
-        netbk->pending_ring[index] = pending_idx;
-        xenvif_put(vif);
+        xen_netbk_idx_release(netbk, pending_idx, XEN_NETIF_RSP_ERROR);

         /* Not the first error? Preceding frags already invalidated. */
         if (err)
···

         /* First error: invalidate header and preceding fragments. */
         pending_idx = *((u16 *)skb->data);
-        xen_netbk_idx_release(netbk, pending_idx);
+        xen_netbk_idx_release(netbk, pending_idx, XEN_NETIF_RSP_OKAY);
         for (j = start; j < i; j++) {
             pending_idx = frag_get_pending_idx(&shinfo->frags[j]);
-            xen_netbk_idx_release(netbk, pending_idx);
+            xen_netbk_idx_release(netbk, pending_idx, XEN_NETIF_RSP_OKAY);
         }

         /* Remember the error: invalidate all subsequent fragments. */
···

         /* Take an extra reference to offset xen_netbk_idx_release */
         get_page(netbk->mmap_pages[pending_idx]);
-        xen_netbk_idx_release(netbk, pending_idx);
+        xen_netbk_idx_release(netbk, pending_idx, XEN_NETIF_RSP_OKAY);
     }
 }
···
     do {
         if (unlikely(work_to_do-- <= 0)) {
-            netdev_dbg(vif->dev, "Missing extra info\n");
+            netdev_err(vif->dev, "Missing extra info\n");
+            netbk_fatal_tx_err(vif);
             return -EBADR;
         }
···
         if (unlikely(!extra.type ||
                  extra.type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
             vif->tx.req_cons = ++cons;
-            netdev_dbg(vif->dev,
+            netdev_err(vif->dev,
                    "Invalid extra type: %d\n", extra.type);
+            netbk_fatal_tx_err(vif);
             return -EINVAL;
         }
···
                  struct xen_netif_extra_info *gso)
 {
     if (!gso->u.gso.size) {
-        netdev_dbg(vif->dev, "GSO size must not be zero.\n");
+        netdev_err(vif->dev, "GSO size must not be zero.\n");
+        netbk_fatal_tx_err(vif);
         return -EINVAL;
     }

     /* Currently only TCPv4 S.O. is supported. */
     if (gso->u.gso.type != XEN_NETIF_GSO_TYPE_TCPV4) {
-        netdev_dbg(vif->dev, "Bad GSO type %d.\n", gso->u.gso.type);
+        netdev_err(vif->dev, "Bad GSO type %d.\n", gso->u.gso.type);
+        netbk_fatal_tx_err(vif);
         return -EINVAL;
     }
···

         /* Get a netif from the list with work to do. */
         vif = poll_net_schedule_list(netbk);
+        /* This can sometimes happen because the test of
+         * list_empty(net_schedule_list) at the top of the
+         * loop is unlocked. Just go back and have another
+         * look.
+         */
         if (!vif)
             continue;
+
+        if (vif->tx.sring->req_prod - vif->tx.req_cons >
+            XEN_NETIF_TX_RING_SIZE) {
+            netdev_err(vif->dev,
+                   "Impossible number of requests. "
+                   "req_prod %d, req_cons %d, size %ld\n",
+                   vif->tx.sring->req_prod, vif->tx.req_cons,
+                   XEN_NETIF_TX_RING_SIZE);
+            netbk_fatal_tx_err(vif);
+            continue;
+        }

         RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, work_to_do);
         if (!work_to_do) {
···
             work_to_do = xen_netbk_get_extras(vif, extras,
                               work_to_do);
             idx = vif->tx.req_cons;
-            if (unlikely(work_to_do < 0)) {
-                netbk_tx_err(vif, &txreq, idx);
+            if (unlikely(work_to_do < 0))
                 continue;
-            }
         }

         ret = netbk_count_requests(vif, &txreq, txfrags, work_to_do);
-        if (unlikely(ret < 0)) {
-            netbk_tx_err(vif, &txreq, idx - ret);
+        if (unlikely(ret < 0))
             continue;
-        }
+
         idx += ret;

         if (unlikely(txreq.size < ETH_HLEN)) {
···

         /* No crossing a page as the payload mustn't fragment. */
         if (unlikely((txreq.offset + txreq.size) > PAGE_SIZE)) {
-            netdev_dbg(vif->dev,
+            netdev_err(vif->dev,
                    "txreq.offset: %x, size: %u, end: %lu\n",
                    txreq.offset, txreq.size,
                    (txreq.offset&~PAGE_MASK) + txreq.size);
-            netbk_tx_err(vif, &txreq, idx);
+            netbk_fatal_tx_err(vif);
             continue;
         }
···
             gso = &extras[XEN_NETIF_EXTRA_TYPE_GSO - 1];

             if (netbk_set_skb_gso(vif, skb, gso)) {
+                /* Failure in netbk_set_skb_gso is fatal. */
                 kfree_skb(skb);
-                netbk_tx_err(vif, &txreq, idx);
                 continue;
             }
         }
···
             txp->size -= data_len;
         } else {
             /* Schedule a response immediately. */
-            xen_netbk_idx_release(netbk, pending_idx);
+            xen_netbk_idx_release(netbk, pending_idx, XEN_NETIF_RSP_OKAY);
         }

         if (txp->flags & XEN_NETTXF_csum_blank)
···
     xen_netbk_tx_submit(netbk);
 }

-static void xen_netbk_idx_release(struct xen_netbk *netbk, u16 pending_idx)
+static void xen_netbk_idx_release(struct xen_netbk *netbk, u16 pending_idx,
+                  u8 status)
 {
     struct xenvif *vif;
     struct pending_tx_info *pending_tx_info;
···

     vif = pending_tx_info->vif;

-    make_tx_response(vif, &pending_tx_info->req, XEN_NETIF_RSP_OKAY);
+    make_tx_response(vif, &pending_tx_info->req, status);

     index = pending_index(netbk->pending_prod++);
     netbk->pending_ring[index] = pending_idx;
+12
drivers/ssb/driver_gpio.c
···

     return -1;
 }
+
+int ssb_gpio_unregister(struct ssb_bus *bus)
+{
+    if (ssb_chipco_available(&bus->chipco) ||
+        ssb_extif_available(&bus->extif)) {
+        return gpiochip_remove(&bus->gpio);
+    } else {
+        SSB_WARN_ON(1);
+    }
+
+    return -1;
+}
+9
drivers/ssb/main.c
···

 void ssb_bus_unregister(struct ssb_bus *bus)
 {
+    int err;
+
+    err = ssb_gpio_unregister(bus);
+    if (err == -EBUSY)
+        ssb_dprintk(KERN_ERR PFX "Some GPIOs are still in use.\n");
+    else if (err)
+        ssb_dprintk(KERN_ERR PFX
+                "Can not unregister GPIO driver: %i\n", err);
+
     ssb_buses_lock();
     ssb_devices_unregister(bus);
     list_del(&bus->list);
+5
drivers/ssb/ssb_private.h
···

 #ifdef CONFIG_SSB_DRIVER_GPIO
 extern int ssb_gpio_init(struct ssb_bus *bus);
+extern int ssb_gpio_unregister(struct ssb_bus *bus);
 #else /* CONFIG_SSB_DRIVER_GPIO */
 static inline int ssb_gpio_init(struct ssb_bus *bus)
 {
     return -ENOTSUPP;
+}
+static inline int ssb_gpio_unregister(struct ssb_bus *bus)
+{
+    return 0;
 }
 #endif /* CONFIG_SSB_DRIVER_GPIO */
+28 -13
drivers/vhost/net.c
···
 }

 /* Caller must have TX VQ lock */
-static void tx_poll_start(struct vhost_net *net, struct socket *sock)
+static int tx_poll_start(struct vhost_net *net, struct socket *sock)
 {
+    int ret;
+
     if (unlikely(net->tx_poll_state != VHOST_NET_POLL_STOPPED))
-        return;
-    vhost_poll_start(net->poll + VHOST_NET_VQ_TX, sock->file);
-    net->tx_poll_state = VHOST_NET_POLL_STARTED;
+        return 0;
+    ret = vhost_poll_start(net->poll + VHOST_NET_VQ_TX, sock->file);
+    if (!ret)
+        net->tx_poll_state = VHOST_NET_POLL_STARTED;
+    return ret;
 }

 /* In case of DMA done not in order in lower device driver for some reason.
···
         vhost_poll_stop(n->poll + VHOST_NET_VQ_RX);
 }

-static void vhost_net_enable_vq(struct vhost_net *n,
+static int vhost_net_enable_vq(struct vhost_net *n,
                 struct vhost_virtqueue *vq)
 {
     struct socket *sock;
+    int ret;

     sock = rcu_dereference_protected(vq->private_data,
                      lockdep_is_held(&vq->mutex));
     if (!sock)
-        return;
+        return 0;
     if (vq == n->vqs + VHOST_NET_VQ_TX) {
         n->tx_poll_state = VHOST_NET_POLL_STOPPED;
-        tx_poll_start(n, sock);
+        ret = tx_poll_start(n, sock);
     } else
-        vhost_poll_start(n->poll + VHOST_NET_VQ_RX, sock->file);
+        ret = vhost_poll_start(n->poll + VHOST_NET_VQ_RX, sock->file);
+
+    return ret;
 }

 static struct socket *vhost_net_stop_vq(struct vhost_net *n,
···
         r = PTR_ERR(ubufs);
         goto err_ubufs;
     }
-    oldubufs = vq->ubufs;
-    vq->ubufs = ubufs;
+
     vhost_net_disable_vq(n, vq);
     rcu_assign_pointer(vq->private_data, sock);
-    vhost_net_enable_vq(n, vq);
-
     r = vhost_init_used(vq);
     if (r)
-        goto err_vq;
+        goto err_used;
+    r = vhost_net_enable_vq(n, vq);
+    if (r)
+        goto err_used;
+
+    oldubufs = vq->ubufs;
+    vq->ubufs = ubufs;

     n->tx_packets = 0;
     n->tx_zcopy_err = 0;
···
     mutex_unlock(&n->dev.mutex);
     return 0;

+err_used:
+    rcu_assign_pointer(vq->private_data, oldsock);
+    vhost_net_enable_vq(n, vq);
+    if (ubufs)
+        vhost_ubuf_put_and_wait(ubufs);
 err_ubufs:
     fput(sock->file);
 err_vq:
+15 -3
drivers/vhost/vhost.c
···
     init_poll_funcptr(&poll->table, vhost_poll_func);
     poll->mask = mask;
     poll->dev = dev;
+    poll->wqh = NULL;

     vhost_work_init(&poll->work, fn);
 }

 /* Start polling a file. We add ourselves to file's wait queue. The caller must
  * keep a reference to a file until after vhost_poll_stop is called. */
-void vhost_poll_start(struct vhost_poll *poll, struct file *file)
+int vhost_poll_start(struct vhost_poll *poll, struct file *file)
 {
     unsigned long mask;
+    int ret = 0;

     mask = file->f_op->poll(file, &poll->table);
     if (mask)
         vhost_poll_wakeup(&poll->wait, 0, 0, (void *)mask);
+    if (mask & POLLERR) {
+        if (poll->wqh)
+            remove_wait_queue(poll->wqh, &poll->wait);
+        ret = -EINVAL;
+    }
+
+    return ret;
 }

 /* Stop polling a file. After this function returns, it becomes safe to drop the
  * file reference. You must also flush afterwards. */
 void vhost_poll_stop(struct vhost_poll *poll)
 {
-    remove_wait_queue(poll->wqh, &poll->wait);
+    if (poll->wqh) {
+        remove_wait_queue(poll->wqh, &poll->wait);
+        poll->wqh = NULL;
+    }
 }

 static bool vhost_work_seq_done(struct vhost_dev *dev, struct vhost_work *work,
···
         fput(filep);

     if (pollstart && vq->handle_kick)
-        vhost_poll_start(&vq->poll, vq->kick);
+        r = vhost_poll_start(&vq->poll, vq->kick);

     mutex_unlock(&vq->mutex);
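
The vhost change above hardens start/stop pairing: the wait-queue head is recorded only when registration succeeds, and teardown NULL-checks it, so vhost_poll_stop() is safe even after vhost_poll_start() failed with POLLERR. The shape of that guard in a generic, compilable form:

#include <stddef.h>
#include <stdio.h>

/* userspace analogue: a handle that is NULL until start succeeds makes
 * stop a no-op -- and idempotent -- when start never ran or failed */
struct poller { void *wqh; };

static void poller_stop(struct poller *p)
{
    if (p->wqh) {
        printf("unregistering %p\n", p->wqh);
        p->wqh = NULL;
    }
}

int main(void)
{
    struct poller p = { .wqh = NULL };

    poller_stop(&p);   /* safe even though start never succeeded */
    poller_stop(&p);   /* and safe to call twice */
    return 0;
}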
+1 -1
drivers/vhost/vhost.h
···
42 42
43 43 void vhost_poll_init(struct vhost_poll *poll, vhost_work_fn_t fn,
44 44                      unsigned long mask, struct vhost_dev *dev);
45 - void vhost_poll_start(struct vhost_poll *poll, struct file *file);
45 + int vhost_poll_start(struct vhost_poll *poll, struct file *file);
46 46 void vhost_poll_stop(struct vhost_poll *poll);
47 47 void vhost_poll_flush(struct vhost_poll *poll);
48 48 void vhost_poll_queue(struct vhost_poll *poll);
+3 -1
include/linux/usb/usbnet.h
···
33 33     wait_queue_head_t *wait;
34 34     struct mutex phy_mutex;
35 35     unsigned char suspend_count;
36 +      unsigned char pkt_cnt, pkt_err;
36 37
37 38     /* i/o info: pipes etc */
38 39     unsigned in, out;
···
71 70 #   define EVENT_DEV_OPEN            7
72 71 #   define EVENT_DEVICE_REPORT_IDLE  8
73 72 #   define EVENT_NO_RUNTIME_PM       9
73 + #   define EVENT_RX_KILL             10
74 74 };
75 75
76 76 static inline struct usb_driver *driver_of(struct usb_interface *intf)
···
102 100 #define FLAG_LINK_INTR    0x0800  /* updates link (carrier) status */
103 101
104 102 #define FLAG_POINTTOPOINT 0x1000  /* possibly use "usb%d" names */
105 -     #define FLAG_NOARP        0x2000  /* device can't do ARP */
106 103
107 104 /*
108 105  * Indicates to usbnet, that USB driver accumulates multiple IP packets.
···
109 108  */
110 109 #define FLAG_MULTI_PACKET 0x2000
111 110 #define FLAG_RX_ASSEMBLE  0x4000  /* rx packets may span >1 frames */
111 +     #define FLAG_NOARP        0x8000  /* device can't do ARP */
112 112
113 113 /* init device ... can sleep, or cause probe() failure */
114 114 int (*bind)(struct usbnet *, struct usb_interface *);
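The usbnet.h hunks are the heart of the "regression from FLAG_NOARP code" fix: FLAG_NOARP had been defined as 0x2000, the value already taken by FLAG_MULTI_PACKET, so every multi-packet device (e.g. a CDC NCM modem) was also treated as unable to do ARP. Moving it to the free bit 0x8000 untangles the two. A compilable demonstration of the collision (flag values taken from the header, the device flags invented):

#include <stdio.h>

#define FLAG_MULTI_PACKET 0x2000
#define FLAG_NOARP_OLD    0x2000    /* colliding value: the bug */
#define FLAG_NOARP_NEW    0x8000    /* fixed: a bit of its own */

int main(void)
{
    unsigned flags = FLAG_MULTI_PACKET;    /* a multi-packet device */

    /* With the old value, the NOARP test also matches MULTI_PACKET. */
    printf("old test: NOARP? %s\n",
           (flags & FLAG_NOARP_OLD) ? "yes (wrong)" : "no");
    printf("new test: NOARP? %s\n",
           (flags & FLAG_NOARP_NEW) ? "yes" : "no (correct)");
    return 0;
}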
+10 -10
include/net/transp_v6.h
···
34 34                             struct sockaddr *uaddr,
35 35                             int addr_len);
36 36
37 - extern int datagram_recv_ctl(struct sock *sk,
38 -                              struct msghdr *msg,
39 -                              struct sk_buff *skb);
37 + extern int ip6_datagram_recv_ctl(struct sock *sk,
38 +                                  struct msghdr *msg,
39 +                                  struct sk_buff *skb);
40 40
41 - extern int datagram_send_ctl(struct net *net,
42 -                              struct sock *sk,
43 -                              struct msghdr *msg,
44 -                              struct flowi6 *fl6,
45 -                              struct ipv6_txoptions *opt,
46 -                              int *hlimit, int *tclass,
47 -                              int *dontfrag);
41 + extern int ip6_datagram_send_ctl(struct net *net,
42 +                                  struct sock *sk,
43 +                                  struct msghdr *msg,
44 +                                  struct flowi6 *fl6,
45 +                                  struct ipv6_txoptions *opt,
46 +                                  int *hlimit, int *tclass,
47 +                                  int *dontfrag);
48 48
49 49 #define LOOPBACK4_IPV6 cpu_to_be32(0x7f000006)
50 50
+3 -3
net/bluetooth/hci_conn.c
···
249 249     __u8 reason = hci_proto_disconn_ind(conn);
250 250
251 251     switch (conn->type) {
252 -     case ACL_LINK:
253 -         hci_acl_disconn(conn, reason);
254 -         break;
255 252     case AMP_LINK:
256 253         hci_amp_disconn(conn, reason);
254 +         break;
255 +     default:
256 +         hci_acl_disconn(conn, reason);
257 257         break;
258 258     }
259 259 }
+13
net/bluetooth/smp.c
···
859 859
860 860     skb_pull(skb, sizeof(code));
861 861
862 +     /*
863 +      * The SMP context must be initialized for all other PDUs except
864 +      * pairing and security requests. If we get any other PDU when
865 +      * not initialized simply disconnect (done if this function
866 +      * returns an error).
867 +      */
868 +     if (code != SMP_CMD_PAIRING_REQ && code != SMP_CMD_SECURITY_REQ &&
869 +         !conn->smp_chan) {
870 +         BT_ERR("Unexpected SMP command 0x%02x. Disconnecting.", code);
871 +         kfree_skb(skb);
872 +         return -ENOTSUPP;
873 +     }
874 +
862 875     switch (code) {
863 876     case SMP_CMD_PAIRING_REQ:
864 877         reason = smp_cmd_pairing_req(conn, skb);
+6 -3
net/core/pktgen.c
···
1781 1781         return -EFAULT;
1782 1782     i += len;
1783 1783     mutex_lock(&pktgen_thread_lock);
1784 -         pktgen_add_device(t, f);
1784 +         ret = pktgen_add_device(t, f);
1785 1785     mutex_unlock(&pktgen_thread_lock);
1786 -         ret = count;
1787 -         sprintf(pg_result, "OK: add_device=%s", f);
1786 +         if (!ret) {
1787 +             ret = count;
1788 +             sprintf(pg_result, "OK: add_device=%s", f);
1789 +         } else
1790 +             sprintf(pg_result, "ERROR: can not add device %s", f);
1788 1791     goto out;
1789 1792 }
1790 1793
+1 -1
net/core/skbuff.c
···
683 683     new->network_header = old->network_header;
684 684     new->mac_header = old->mac_header;
685 685     new->inner_transport_header = old->inner_transport_header;
686 -     new->inner_network_header = old->inner_transport_header;
686 +     new->inner_network_header = old->inner_network_header;
687 687     skb_dst_copy(new, old);
688 688     new->rxhash = old->rxhash;
689 689     new->ooo_okay = old->ooo_okay;
+10 -4
net/ipv4/tcp_cong.c
···
310 310 {
311 311     int cnt;                    /* increase in packets */
312 312     unsigned int delta = 0;
313 +     u32 snd_cwnd = tp->snd_cwnd;
314 +
315 +     if (unlikely(!snd_cwnd)) {
316 +         pr_err_once("snd_cwnd is nul, please report this bug.\n");
317 +         snd_cwnd = 1U;
318 +     }
313 319
314 320     /* RFC3465: ABC Slow start
315 321      * Increase only after a full MSS of bytes is acked
···
330 324     if (sysctl_tcp_max_ssthresh > 0 && tp->snd_cwnd > sysctl_tcp_max_ssthresh)
331 325         cnt = sysctl_tcp_max_ssthresh >> 1; /* limited slow start */
332 326     else
333 -         cnt = tp->snd_cwnd;             /* exponential increase */
327 +         cnt = snd_cwnd;                 /* exponential increase */
334 328
335 329     /* RFC3465: ABC
336 330      * We MAY increase by 2 if discovered delayed ack
···
340 334         tp->bytes_acked = 0;
341 335
342 336     tp->snd_cwnd_cnt += cnt;
343 -     while (tp->snd_cwnd_cnt >= tp->snd_cwnd) {
344 -         tp->snd_cwnd_cnt -= tp->snd_cwnd;
337 +     while (tp->snd_cwnd_cnt >= snd_cwnd) {
338 +         tp->snd_cwnd_cnt -= snd_cwnd;
345 339         delta++;
346 340     }
347 -     tp->snd_cwnd = min(tp->snd_cwnd + delta, tp->snd_cwnd_clamp);
341 +     tp->snd_cwnd = min(snd_cwnd + delta, tp->snd_cwnd_clamp);
348 342 }
349 343 EXPORT_SYMBOL_GPL(tcp_slow_start);
350 344
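The tcp_cong.c change guards tcp_slow_start() against a zero congestion window: with snd_cwnd == 0 the while loop subtracts zero forever and never exits, which is one of the infinite loops called out in the merge description. A standalone reproduction of the loop logic with the added clamp (the counter values are invented):

#include <stdio.h>

int main(void)
{
    unsigned int snd_cwnd = 0, snd_cwnd_cnt = 3, delta = 0;

    if (!snd_cwnd)        /* the added guard; without it, the loop below
                           * never makes progress when snd_cwnd == 0 */
        snd_cwnd = 1U;

    while (snd_cwnd_cnt >= snd_cwnd) {
        snd_cwnd_cnt -= snd_cwnd;
        delta++;
    }
    printf("delta = %u\n", delta);    /* prints 3 */
    return 0;
}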
+6 -2
net/ipv4/tcp_input.c
···
3504 3504         }
3505 3505     } else {
3506 3506         if (!(flag & FLAG_DATA_ACKED) && (tp->frto_counter == 1)) {
3507 +             if (!tcp_packets_in_flight(tp)) {
3508 +                 tcp_enter_frto_loss(sk, 2, flag);
3509 +                 return true;
3510 +             }
3511 +
3507 3512             /* Prevent sending of new data. */
3508 3513             tp->snd_cwnd = min(tp->snd_cwnd,
3509 3514                                tcp_packets_in_flight(tp));
···
5654 5649      * the remote receives only the retransmitted (regular) SYNs: either
5655 5650      * the original SYN-data or the corresponding SYN-ACK is lost.
5656 5651      */
5657 -     syn_drop = (cookie->len <= 0 && data &&
5658 -                 inet_csk(sk)->icsk_retransmits);
5652 +     syn_drop = (cookie->len <= 0 && data && tp->total_retrans);
5659 5653
5660 5654     tcp_fastopen_cache_set(sk, mss, cookie, syn_drop);
5661 5655
+5 -1
net/ipv4/tcp_ipv4.c
···
496 496      * errors returned from accept().
497 497      */
498 498     inet_csk_reqsk_queue_drop(sk, req, prev);
499 +      NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_LISTENDROPS);
499 500     goto out;
500 501
501 502 case TCP_SYN_SENT:
···
1501 1500      * clogging syn queue with openreqs with exponentially increasing
1502 1501      * timeout.
1503 1502      */
1504 -     if (sk_acceptq_is_full(sk) && inet_csk_reqsk_queue_young(sk) > 1)
1503 +     if (sk_acceptq_is_full(sk) && inet_csk_reqsk_queue_young(sk) > 1) {
1504 +         NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_LISTENOVERFLOWS);
1505 1505         goto drop;
1506 +     }
1506 1507
1507 1508     req = inet_reqsk_alloc(&tcp_request_sock_ops);
1508 1509     if (!req)
···
1669 1666 drop_and_free:
1670 1667     reqsk_free(req);
1671 1668 drop:
1669 +     NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_LISTENDROPS);
1672 1670     return 0;
1673 1671 }
1674 1672 EXPORT_SYMBOL(tcp_v4_conn_request);
+1
net/ipv6/addrconf.c
···
1660 1660     if (dev->addr_len != IEEE802154_ADDR_LEN)
1661 1661         return -1;
1662 1662     memcpy(eui, dev->dev_addr, 8);
1663 +      eui[0] ^= 2;
1663 1664     return 0;
1664 1665 }
1665 1666
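The one-line addrconf.c fix completes the EUI-64 derivation for IEEE 802.15.4 devices: per RFC 4291, an interface identifier built from an EUI-64 must have its universal/local bit (0x02 in the first byte) inverted, which the bare memcpy did not do. A small sketch of the resulting transformation; the hardware address below is invented:

#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned char eui[8];
    const unsigned char dev_addr[8] = {
        0x00, 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77
    };

    memcpy(eui, dev_addr, 8);
    eui[0] ^= 2;    /* the added line: flip the universal/local bit */
    printf("interface id byte 0: 0x%02x (was 0x%02x)\n",
           eui[0], dev_addr[0]);
    return 0;
}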
+9 -7
net/ipv6/datagram.c
···
380 380     if (skb->protocol == htons(ETH_P_IPV6)) {
381 381         sin->sin6_addr = ipv6_hdr(skb)->saddr;
382 382         if (np->rxopt.all)
383 -             datagram_recv_ctl(sk, msg, skb);
383 +             ip6_datagram_recv_ctl(sk, msg, skb);
384 384         if (ipv6_addr_type(&sin->sin6_addr) & IPV6_ADDR_LINKLOCAL)
385 385             sin->sin6_scope_id = IP6CB(skb)->iif;
386 386     } else {
···
468 468 }
469 469
470 470
471 - int datagram_recv_ctl(struct sock *sk, struct msghdr *msg, struct sk_buff *skb)
471 + int ip6_datagram_recv_ctl(struct sock *sk, struct msghdr *msg,
472 +                           struct sk_buff *skb)
472 473 {
473 474     struct ipv6_pinfo *np = inet6_sk(sk);
474 475     struct inet6_skb_parm *opt = IP6CB(skb);
···
598 597     }
599 598     return 0;
600 599 }
600 + EXPORT_SYMBOL_GPL(ip6_datagram_recv_ctl);
601 601
602 - int datagram_send_ctl(struct net *net, struct sock *sk,
603 -                       struct msghdr *msg, struct flowi6 *fl6,
604 -                       struct ipv6_txoptions *opt,
605 -                       int *hlimit, int *tclass, int *dontfrag)
602 + int ip6_datagram_send_ctl(struct net *net, struct sock *sk,
603 +                           struct msghdr *msg, struct flowi6 *fl6,
604 +                           struct ipv6_txoptions *opt,
605 +                           int *hlimit, int *tclass, int *dontfrag)
606 606 {
607 607     struct in6_pktinfo *src_info;
608 608     struct cmsghdr *cmsg;
···
873 871 exit_f:
874 872     return err;
875 873 }
876 - EXPORT_SYMBOL_GPL(datagram_send_ctl);
874 + EXPORT_SYMBOL_GPL(ip6_datagram_send_ctl);
+2 -2
net/ipv6/ip6_flowlabel.c
···
365 365     msg.msg_control = (void*)(fl->opt+1);
366 366     memset(&flowi6, 0, sizeof(flowi6));
367 367
368 -     err = datagram_send_ctl(net, sk, &msg, &flowi6, fl->opt, &junk,
369 -                             &junk, &junk);
368 +     err = ip6_datagram_send_ctl(net, sk, &msg, &flowi6, fl->opt,
369 +                                 &junk, &junk, &junk);
370 370     if (err)
371 371         goto done;
372 372     err = -EINVAL;
+1 -1
net/ipv6/ip6_gre.c
···
960 960     int ret;
961 961
962 962     if (!ip6_tnl_xmit_ctl(t))
963 -         return -1;
963 +         goto tx_err;
964 964
965 965     switch (skb->protocol) {
966 966     case htons(ETH_P_IP):
+3 -3
net/ipv6/ipv6_sockglue.c
···
476 476     msg.msg_controllen = optlen;
477 477     msg.msg_control = (void*)(opt+1);
478 478
479 -     retv = datagram_send_ctl(net, sk, &msg, &fl6, opt, &junk, &junk,
480 -                              &junk);
479 +     retv = ip6_datagram_send_ctl(net, sk, &msg, &fl6, opt, &junk,
480 +                                  &junk, &junk);
481 481     if (retv)
482 482         goto done;
483 483 update:
···
1002 1002     release_sock(sk);
1003 1003
1004 1004     if (skb) {
1005 -         int err = datagram_recv_ctl(sk, &msg, skb);
1005 +         int err = ip6_datagram_recv_ctl(sk, &msg, skb);
1006 1006         kfree_skb(skb);
1007 1007         if (err)
1008 1008             return err;
+3 -3
net/ipv6/raw.c
···
507 507     sock_recv_ts_and_drops(msg, sk, skb);
508 508
509 509     if (np->rxopt.all)
510 -         datagram_recv_ctl(sk, msg, skb);
510 +         ip6_datagram_recv_ctl(sk, msg, skb);
511 511
512 512     err = copied;
513 513     if (flags & MSG_TRUNC)
···
822 822     memset(opt, 0, sizeof(struct ipv6_txoptions));
823 823     opt->tot_len = sizeof(struct ipv6_txoptions);
824 824
825 -     err = datagram_send_ctl(sock_net(sk), sk, msg, &fl6, opt,
826 -                             &hlimit, &tclass, &dontfrag);
825 +     err = ip6_datagram_send_ctl(sock_net(sk), sk, msg, &fl6, opt,
826 +                                 &hlimit, &tclass, &dontfrag);
827 827     if (err < 0) {
828 828         fl6_sock_release(flowlabel);
829 829         return err;
+1 -1
net/ipv6/route.c
···
928 928     dst_hold(&rt->dst);
929 929     read_unlock_bh(&table->tb6_lock);
930 930
931 -     if (!rt->n && !(rt->rt6i_flags & RTF_NONEXTHOP))
931 +     if (!rt->n && !(rt->rt6i_flags & (RTF_NONEXTHOP | RTF_LOCAL)))
932 932         nrt = rt6_alloc_cow(rt, &fl6->daddr, &fl6->saddr);
933 933     else if (!(rt->dst.flags & DST_HOST))
934 934         nrt = rt6_alloc_clone(rt, &fl6->daddr);
+5 -1
net/ipv6/tcp_ipv6.c
···
423 423     }
424 424
425 425     inet_csk_reqsk_queue_drop(sk, req, prev);
426 +      NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_LISTENDROPS);
426 427     goto out;
427 428
428 429 case TCP_SYN_SENT:
···
959 958         goto drop;
960 959     }
961 960
962 -     if (sk_acceptq_is_full(sk) && inet_csk_reqsk_queue_young(sk) > 1)
961 +     if (sk_acceptq_is_full(sk) && inet_csk_reqsk_queue_young(sk) > 1) {
962 +         NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_LISTENOVERFLOWS);
963 963         goto drop;
964 +     }
964 965
965 966     req = inet6_reqsk_alloc(&tcp6_request_sock_ops);
966 967     if (req == NULL)
···
1111 1108 drop_and_free:
1112 1109     reqsk_free(req);
1113 1110 drop:
1111 +     NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_LISTENDROPS);
1114 1112     return 0; /* don't send reset */
1115 1113 }
1116 1114
+3 -3
net/ipv6/udp.c
···
443 443         ip_cmsg_recv(msg, skb);
444 444     } else {
445 445         if (np->rxopt.all)
446 -             datagram_recv_ctl(sk, msg, skb);
446 +             ip6_datagram_recv_ctl(sk, msg, skb);
447 447     }
448 448
449 449     err = copied;
···
1153 1153     memset(opt, 0, sizeof(struct ipv6_txoptions));
1154 1154     opt->tot_len = sizeof(*opt);
1155 1155
1156 -     err = datagram_send_ctl(sock_net(sk), sk, msg, &fl6, opt,
1157 -                             &hlimit, &tclass, &dontfrag);
1156 +     err = ip6_datagram_send_ctl(sock_net(sk), sk, msg, &fl6, opt,
1157 +                                 &hlimit, &tclass, &dontfrag);
1158 1158     if (err < 0) {
1159 1159         fl6_sock_release(flowlabel);
1160 1160         return err;
+65 -17
net/l2tp/l2tp_core.c
···
168 168
169 169 }
170 170
171 + /* Lookup the tunnel socket, possibly involving the fs code if the socket is
172 +  * owned by userspace. A struct sock returned from this function must be
173 +  * released using l2tp_tunnel_sock_put once you're done with it.
174 +  */
175 + struct sock *l2tp_tunnel_sock_lookup(struct l2tp_tunnel *tunnel)
176 + {
177 +     int err = 0;
178 +     struct socket *sock = NULL;
179 +     struct sock *sk = NULL;
180 +
181 +     if (!tunnel)
182 +         goto out;
183 +
184 +     if (tunnel->fd >= 0) {
185 +         /* Socket is owned by userspace, who might be in the process
186 +          * of closing it. Look the socket up using the fd to ensure
187 +          * consistency.
188 +          */
189 +         sock = sockfd_lookup(tunnel->fd, &err);
190 +         if (sock)
191 +             sk = sock->sk;
192 +     } else {
193 +         /* Socket is owned by kernelspace */
194 +         sk = tunnel->sock;
195 +     }
196 +
197 + out:
198 +     return sk;
199 + }
200 + EXPORT_SYMBOL_GPL(l2tp_tunnel_sock_lookup);
201 +
202 + /* Drop a reference to a tunnel socket obtained via. l2tp_tunnel_sock_put */
203 + void l2tp_tunnel_sock_put(struct sock *sk)
204 + {
205 +     struct l2tp_tunnel *tunnel = l2tp_sock_to_tunnel(sk);
206 +     if (tunnel) {
207 +         if (tunnel->fd >= 0) {
208 +             /* Socket is owned by userspace */
209 +             sockfd_put(sk->sk_socket);
210 +         }
211 +         sock_put(sk);
212 +     }
213 + }
214 + EXPORT_SYMBOL_GPL(l2tp_tunnel_sock_put);
215 +
171 216 /* Lookup a session by id in the global session list
172 217  */
173 218 static struct l2tp_session *l2tp_session_find_2(struct net *net, u32 session_id)
···
1168 1123     struct udphdr *uh;
1169 1124     struct inet_sock *inet;
1170 1125     __wsum csum;
1171 -      int old_headroom;
1172 -      int new_headroom;
1173 1126     int headroom;
1174 1127     int uhlen = (tunnel->encap == L2TP_ENCAPTYPE_UDP) ? sizeof(struct udphdr) : 0;
1175 1128     int udp_len;
···
1179 1136      */
1180 1137     headroom = NET_SKB_PAD + sizeof(struct iphdr) +
1181 1138                uhlen + hdr_len;
1182 -      old_headroom = skb_headroom(skb);
1183 1139     if (skb_cow_head(skb, headroom)) {
1184 1140         kfree_skb(skb);
1185 1141         return NET_XMIT_DROP;
1186 1142     }
1187 1143
1188 -      new_headroom = skb_headroom(skb);
1189 1144     skb_orphan(skb);
1190 -      skb->truesize += new_headroom - old_headroom;
1191 -
1192 1145     /* Setup L2TP header */
1193 1146     session->build_header(session, __skb_push(skb, hdr_len));
···
1646 1607     tunnel->old_sk_destruct = sk->sk_destruct;
1647 1608     sk->sk_destruct = &l2tp_tunnel_destruct;
1648 1609     tunnel->sock = sk;
1610 +      tunnel->fd = fd;
1649 1611     lockdep_set_class_and_name(&sk->sk_lock.slock, &l2tp_socket_class, "l2tp_sock");
1650 1612
1651 1613     sk->sk_allocation = GFP_ATOMIC;
···
1682 1642  */
1683 1643 int l2tp_tunnel_delete(struct l2tp_tunnel *tunnel)
1684 1644 {
1685 -     int err = 0;
1686 -     struct socket *sock = tunnel->sock ? tunnel->sock->sk_socket : NULL;
1645 +     int err = -EBADF;
1646 +     struct socket *sock = NULL;
1647 +     struct sock *sk = NULL;
1648 +
1649 +     sk = l2tp_tunnel_sock_lookup(tunnel);
1650 +     if (!sk)
1651 +         goto out;
1652 +
1653 +     sock = sk->sk_socket;
1654 +     BUG_ON(!sock);
1687 1655
1688 1656     /* Force the tunnel socket to close. This will eventually
1689 1657      * cause the tunnel to be deleted via the normal socket close
1690 1658      * mechanisms when userspace closes the tunnel socket.
1691 1659      */
1692 -     if (sock != NULL) {
1693 -         err = inet_shutdown(sock, 2);
1660 +     err = inet_shutdown(sock, 2);
1694 1661
1695 -         /* If the tunnel's socket was created by the kernel,
1696 -          * close the socket here since the socket was not
1697 -          * created by userspace.
1698 -          */
1699 -         if (sock->file == NULL)
1700 -             err = inet_release(sock);
1701 -     }
1662 +     /* If the tunnel's socket was created by the kernel,
1663 +      * close the socket here since the socket was not
1664 +      * created by userspace.
1665 +      */
1666 +     if (sock->file == NULL)
1667 +         err = inet_release(sock);
1702 1668
1669 +     l2tp_tunnel_sock_put(sk);
1670 + out:
1703 1671     return err;
1704 1672 }
1705 1673 EXPORT_SYMBOL_GPL(l2tp_tunnel_delete);
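The l2tp_core.c rework above closes the tunnel delete/close race: instead of dereferencing tunnel->sock directly, the delete path resolves the userspace-owned socket via its fd with sockfd_lookup(), which takes a reference, and drops it afterwards with l2tp_tunnel_sock_put(). A minimal userspace sketch of that lookup/put reference discipline, with an invented refcounted object standing in for the socket:

#include <stdio.h>
#include <stdlib.h>

struct obj { int refs; };

/* Like sockfd_lookup(): a successful lookup hands back a reference
 * the caller owns, so the object cannot be freed out from under it. */
static struct obj *obj_lookup(struct obj *o)
{
    if (!o)
        return NULL;
    o->refs++;
    return o;
}

static void obj_put(struct obj *o)
{
    if (o && --o->refs == 0)
        free(o);    /* last reference frees the object */
}

int main(void)
{
    struct obj *o = calloc(1, sizeof(*o));
    struct obj *ref;

    if (!o)
        return 1;
    o->refs = 1;                /* creator's reference */
    ref = obj_lookup(o);        /* take a private reference for the work */
    printf("refs while in use: %d\n", ref->refs);
    obj_put(ref);               /* done with it */
    obj_put(o);                 /* creator drops theirs; object is freed */
    return 0;
}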
+4 -1
net/l2tp/l2tp_core.h
···
188 188     int (*recv_payload_hook)(struct sk_buff *skb);
189 189     void (*old_sk_destruct)(struct sock *);
190 190     struct sock *sock;      /* Parent socket */
191 -     int fd;
191 +     int fd;                 /* Parent fd, if tunnel socket
192 +                              * was created by userspace */
192 193
193 194     uint8_t priv[0];        /* private data */
194 195 };
···
229 228     return tunnel;
230 229 }
231 230
231 + extern struct sock *l2tp_tunnel_sock_lookup(struct l2tp_tunnel *tunnel);
232 + extern void l2tp_tunnel_sock_put(struct sock *sk);
232 233 extern struct l2tp_session *l2tp_session_find(struct net *net, struct l2tp_tunnel *tunnel, u32 session_id);
233 234 extern struct l2tp_session *l2tp_session_find_nth(struct l2tp_tunnel *tunnel, int nth);
234 235 extern struct l2tp_session *l2tp_session_find_by_ifname(struct net *net, char *ifname);
+5 -5
net/l2tp/l2tp_ip6.c
···
554 554     memset(opt, 0, sizeof(struct ipv6_txoptions));
555 555     opt->tot_len = sizeof(struct ipv6_txoptions);
556 556
557 -     err = datagram_send_ctl(sock_net(sk), sk, msg, &fl6, opt,
558 -                             &hlimit, &tclass, &dontfrag);
557 +     err = ip6_datagram_send_ctl(sock_net(sk), sk, msg, &fl6, opt,
558 +                                 &hlimit, &tclass, &dontfrag);
559 559     if (err < 0) {
560 560         fl6_sock_release(flowlabel);
561 561         return err;
···
646 646                 struct msghdr *msg, size_t len, int noblock,
647 647                 int flags, int *addr_len)
648 648 {
649 -     struct inet_sock *inet = inet_sk(sk);
649 +     struct ipv6_pinfo *np = inet6_sk(sk);
650 650     struct sockaddr_l2tpip6 *lsa = (struct sockaddr_l2tpip6 *)msg->msg_name;
651 651     size_t copied = 0;
652 652     int err = -EOPNOTSUPP;
···
688 688         lsa->l2tp_scope_id = IP6CB(skb)->iif;
689 689     }
690 690
691 -     if (inet->cmsg_flags)
692 -         ip_cmsg_recv(msg, skb);
691 +     if (np->rxopt.all)
692 +         ip6_datagram_recv_ctl(sk, msg, skb);
693 693
694 694     if (flags & MSG_TRUNC)
695 695         copied = skb->len;
-6
net/l2tp/l2tp_ppp.c
···
388 388     struct l2tp_session *session;
389 389     struct l2tp_tunnel *tunnel;
390 390     struct pppol2tp_session *ps;
391 -     int old_headroom;
392 -     int new_headroom;
393 391     int uhlen, headroom;
394 392
395 393     if (sock_flag(sk, SOCK_DEAD) || !(sk->sk_state & PPPOX_CONNECTED))
···
406 408     if (tunnel == NULL)
407 409         goto abort_put_sess;
408 410
409 -     old_headroom = skb_headroom(skb);
410 411     uhlen = (tunnel->encap == L2TP_ENCAPTYPE_UDP) ? sizeof(struct udphdr) : 0;
411 412     headroom = NET_SKB_PAD +
412 413                sizeof(struct iphdr) +  /* IP header */
···
414 417                sizeof(ppph);           /* PPP header */
415 418     if (skb_cow_head(skb, headroom))
416 419         goto abort_put_sess_tun;
417 -
418 -     new_headroom = skb_headroom(skb);
419 -     skb->truesize += new_headroom - old_headroom;
420 420
421 421     /* Setup PPP header */
422 422     __skb_push(skb, sizeof(ppph));
+9 -7
net/openvswitch/vport-netdev.c
···
35 35 /* Must be called with rcu_read_lock. */
36 36 static void netdev_port_receive(struct vport *vport, struct sk_buff *skb)
37 37 {
38 -     if (unlikely(!vport)) {
39 -         kfree_skb(skb);
40 -         return;
41 -     }
38 +     if (unlikely(!vport))
39 +         goto error;
40 +
41 +     if (unlikely(skb_warn_if_lro(skb)))
42 +         goto error;
42 43
43 44     /* Make our own copy of the packet. Otherwise we will mangle the
44 45      * packet for anyone who came before us (e.g. tcpdump via AF_PACKET).
···
51 50
52 51     skb_push(skb, ETH_HLEN);
53 52     ovs_vport_receive(vport, skb);
53 +     return;
54 +
55 + error:
56 +     kfree_skb(skb);
54 57 }
55 58
56 59 /* Called with rcu_read_lock and bottom-halves disabled. */
···
173 168                 packet_length(skb), mtu);
174 169         goto error;
175 170     }
176 -
177 -     if (unlikely(skb_warn_if_lro(skb)))
178 -         goto error;
179 171
180 172     skb->dev = netdev_vport->dev;
181 173     len = skb->len;
+6 -4
net/packet/af_packet.c
···
2361 2361
2362 2362     packet_flush_mclist(sk);
2363 2363
2364 -     memset(&req_u, 0, sizeof(req_u));
2365 -
2366 -     if (po->rx_ring.pg_vec)
2364 +     if (po->rx_ring.pg_vec) {
2365 +         memset(&req_u, 0, sizeof(req_u));
2367 2366         packet_set_ring(sk, &req_u, 1, 0);
2367 +     }
2368 2368
2369 -     if (po->tx_ring.pg_vec)
2369 +     if (po->tx_ring.pg_vec) {
2370 +         memset(&req_u, 0, sizeof(req_u));
2370 2371         packet_set_ring(sk, &req_u, 1, 1);
2372 +     }
2371 2373
2372 2374     fanout_release(sk);
2373 2375
+6 -6
net/sched/sch_netem.c
···
438 438     if (q->rate) {
439 439         struct sk_buff_head *list = &sch->q;
440 440
441 -         delay += packet_len_2_sched_time(skb->len, q);
442 -
443 441         if (!skb_queue_empty(list)) {
444 442             /*
445 -              * Last packet in queue is reference point (now).
446 -              * First packet in queue is already in flight,
447 -              * calculate this time bonus and substract
443 +              * Last packet in queue is reference point (now),
444 +              * calculate this time bonus and subtract
448 445              * from delay.
449 446              */
450 -             delay -= now - netem_skb_cb(skb_peek(list))->time_to_send;
447 +             delay -= netem_skb_cb(skb_peek_tail(list))->time_to_send - now;
448 +             delay = max_t(psched_tdiff_t, 0, delay);
451 449             now = netem_skb_cb(skb_peek_tail(list))->time_to_send;
452 450         }
451 +
452 +         delay += packet_len_2_sched_time(skb->len, q);
453 453     }
454 454
455 455     cb->time_to_send = now + delay;
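The sch_netem change fixes the rate-extension "time bonus": the reference point is the departure time of the last (tail) packet already queued, but the old code measured from the first packet and with the difference taken the wrong way round, and never clamped the result, so a negative delay could schedule a packet ahead of the queue. A worked example of the corrected arithmetic; the tick values below are invented:

#include <stdio.h>

int main(void)
{
    long now = 100;                 /* current time */
    long tail_time_to_send = 150;   /* tail packet leaves at t=150 */
    long delay = 30;                /* configured latency for this packet */

    delay -= tail_time_to_send - now;   /* bonus: time spent waiting behind
                                         * packets already queued (50) */
    if (delay < 0)
        delay = 0;                      /* the added clamp: never reorder */
    delay += 20;                        /* + this packet's transmission time */

    /* time_to_send is measured from the tail's departure, as in the fix */
    printf("time_to_send = %ld\n", tail_time_to_send + delay);  /* 170 */
    return 0;
}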
+1 -1
net/sctp/auth.c
···
71 71         return;
72 72
73 73     if (atomic_dec_and_test(&key->refcnt)) {
74 -         kfree(key);
74 +         kzfree(key);
75 75         SCTP_DBG_OBJCNT_DEC(keys);
76 76     }
77 77 }
+5
net/sctp/endpointola.c
···
249 249 /* Final destructor for endpoint. */
250 250 static void sctp_endpoint_destroy(struct sctp_endpoint *ep)
251 251 {
252 +     int i;
253 +
252 254     SCTP_ASSERT(ep->base.dead, "Endpoint is not dead", return);
253 255
254 256     /* Free up the HMAC transform. */
···
272 270     /* Cleanup. */
273 271     sctp_inq_free(&ep->base.inqueue);
274 272     sctp_bind_addr_free(&ep->base.bind_addr);
273 +
274 +     for (i = 0; i < SCTP_HOW_MANY_SECRETS; ++i)
275 +         memset(&ep->secret_key[i], 0, SCTP_SECRET_SIZE);
275 276
276 277     /* Remove and free the port */
277 278     if (sctp_sk(ep->base.sk)->bind_hash)
+1 -1
net/sctp/socket.c
···
3390 3390
3391 3391     ret = sctp_auth_set_key(sctp_sk(sk)->ep, asoc, authkey);
3392 3392 out:
3393 -     kfree(authkey);
3393 +     kzfree(authkey);
3394 3394     return ret;
3395 3395 }
3396 3396
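The three SCTP hunks above (auth.c, endpointola.c, socket.c) share one theme: buffers that held authentication secrets must be wiped before the memory is returned, so kfree() becomes kzfree() and the endpoint destructor explicitly zeroes secret_key[]. A userspace analogue of kzfree(); kzfree_like() is an invented helper, and note that in userspace a plain memset before free may be optimized away, which is why explicit_bzero() exists:

#include <stdlib.h>
#include <string.h>

/* Scrub the secret, then free it, so key material does not linger
 * in the allocator's free lists. kzfree() derives len via ksize(). */
static void kzfree_like(void *p, size_t len)
{
    if (!p)
        return;
    memset(p, 0, len);
    free(p);
}

int main(void)
{
    char *key = malloc(32);

    if (!key)
        return 1;
    memset(key, 'K', 32);   /* pretend this is key material */
    kzfree_like(key, 32);
    return 0;
}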
+1 -1
net/sunrpc/svcsock.c
···
465 465 }
466 466
467 467 /*
468 -  * See net/ipv6/datagram.c : datagram_recv_ctl
468 +  * See net/ipv6/datagram.c : ip6_datagram_recv_ctl
469 469  */
470 470 static int svc_udp_get_dest_address6(struct svc_rqst *rqstp,
471 471                                      struct cmsghdr *cmh)
+1 -1
net/wireless/scan.c
···
1358 1358                               &iwe, IW_EV_UINT_LEN);
1359 1359     }
1360 1360
1361 -     buf = kmalloc(30, GFP_ATOMIC);
1361 +     buf = kmalloc(31, GFP_ATOMIC);
1362 1362     if (buf) {
1363 1363         memset(&iwe, 0, sizeof(iwe));
1364 1364         iwe.cmd = IWEVCUSTOM;