Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Pull networking fixes from David Miller:
"Bug fixes galore, mostly in drivers as is often the case:

1) USB gadget and cdc_eem drivers need adjustments to their frame size
lengths in order to handle VLANs correctly. From Ian Coolidge.
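For reference, the arithmetic behind these two fixes: an 802.1Q tag inserts 4 bytes between the Ethernet header and payload, so a buffer sized for ETH_FRAME_LEN (1514 bytes) rejects or truncates tagged frames. A minimal userspace sketch of the constants involved (values mirror the kernel's if_ether.h and if_vlan.h; the two `fits_*` helpers are illustrative, not kernel functions):

```c
#include <assert.h>

/* Constants as defined by the kernel headers */
#define ETH_HLEN	14	/* dst MAC + src MAC + ethertype */
#define ETH_DATA_LEN	1500	/* max payload (MTU) */
#define ETH_FCS_LEN	4	/* trailing frame checksum */
#define VLAN_HLEN	4	/* 802.1Q tag: TPID + TCI */

#define ETH_FRAME_LEN		(ETH_HLEN + ETH_DATA_LEN)	/* 1514 */
#define VLAN_ETH_FRAME_LEN	(ETH_FRAME_LEN + VLAN_HLEN)	/* 1518 */

/* Would a max-size tagged frame fit a buffer sized without the tag? */
static int fits_untagged_budget(int frame_len)
{
	return frame_len <= ETH_FRAME_LEN;
}

static int fits_vlan_budget(int frame_len)
{
	return frame_len <= VLAN_ETH_FRAME_LEN;
}
```

This is why the fixes below add VLAN_HLEN to the frame-size budget (cdc_eem) and compare against VLAN_ETH_FRAME_LEN instead of ETH_FRAME_LEN (g_ether).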

2) TIPC and several network drivers erroneously call tasklet_disable
before tasklet_kill, fix from Xiaotian Feng.

3) r8169 driver needs to apply the WOL suspend quirk to more chipsets,
fix from Cyril Brulebois.

4) Fix multicast filters on RTL_GIGA_MAC_VER_35 r8169 chips, from
Nathan Walp.

5) FDB netlink dumps should use RTM_NEWNEIGH as the message type, not
zero. From John Fastabend.

6) Fix smsc95xx tx checksum offload on big-endian, from Steve
Glendinning.
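The underlying issue: copying a host-order u32 straight into a wire buffer only happens to produce the little-endian layout the device expects on little-endian CPUs. A hedged userspace sketch of the correct serialization (`put_le32` is a hypothetical helper standing in for the kernel's cpu_to_le32s() followed by memcpy):

```c
#include <assert.h>
#include <stdint.h>

/* Serialize a host-order u32 into little-endian byte order, which is
 * what the device expects on the wire. A plain memcpy() of the u32
 * lands the bytes reversed on big-endian CPUs -- essentially the
 * smsc95xx checksum-preamble bug fixed below. */
static void put_le32(uint8_t *buf, uint32_t val)
{
	buf[0] = val & 0xff;
	buf[1] = (val >> 8) & 0xff;
	buf[2] = (val >> 16) & 0xff;
	buf[3] = (val >> 24) & 0xff;
}
```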

7) __inet_diag_dump() needs to respect and report the error value
returned from inet_diag_lock_handler() rather than ignore it.
Otherwise if an inet diag handler is not available for a particular
protocol, we essentially report success instead of giving an error
indication. Fix from Cyrill Gorcunov.
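The fix follows a common kernel pattern: record the error from the failed handler lookup and propagate it instead of unconditionally returning the dump length. A hedged userspace sketch of that control flow (`diag_dump` and its parameters are illustrative stand-ins, not the kernel's actual signature):

```c
#include <assert.h>

#define ENOENT 2	/* mirrors the errno value */

/* Illustrative stand-in for __inet_diag_dump(): if no handler exists
 * for the requested protocol, return the negative error instead of
 * the byte count, so userspace sees a failure rather than an empty
 * "success". */
static int diag_dump(int handler_exists, int bytes_dumped)
{
	int err = 0;

	if (!handler_exists)
		err = -ENOENT;	/* what PTR_ERR(handler) would yield */

	return err ? err : bytes_dumped;
}
```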

8) When the QFQ packet scheduler sees TSO/GSO packets it does not
handle things properly, and in fact ends up corrupting its
data structures as well as mis-scheduling packets. Fix from Paolo
Valente.
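The slot-index bound that the patch documents can be checked numerically. With the constants the patch sets (QFQ_MTU_SHIFT raised to 16 for TSO/GSO, QFQ_MIN_LMAX = 256, QFQ_MAX_WEIGHT = 1<<12, QFQ_MAX_WSUM = 16*QFQ_MAX_WEIGHT), the worst case works out to 2 + 256 * (1/16) = 18. A quick sketch of the arithmetic, evaluated left to right to avoid integer truncation:

```c
#include <assert.h>

/* Constants as set by the TSO/GSO patch to sch_qfq.c */
#define QFQ_MTU_SHIFT	16			/* to support TSO/GSO */
#define QFQ_MIN_LMAX	256			/* min lmax for a class */
#define QFQ_MAX_WSHIFT	12
#define QFQ_MAX_WEIGHT	(1 << QFQ_MAX_WSHIFT)
#define QFQ_MAX_WSUM	(16 * QFQ_MAX_WEIGHT)

/* Worst-case bucket-list slot index when weights/lmax are stable:
 * 2 + ((1<<QFQ_MTU_SHIFT)/QFQ_MIN_LMAX) * (QFQ_MAX_WEIGHT/QFQ_MAX_WSUM) */
static int qfq_max_slot_index(void)
{
	return 2 + ((1 << QFQ_MTU_SHIFT) / QFQ_MIN_LMAX)
		     * QFQ_MAX_WEIGHT / QFQ_MAX_WSUM;
}
```

The result, 18, stays safely below the 32-entry bucket list, which is why the patch can cap pathological cases at QFQ_MAX_SLOTS-2 rather than delay class activation.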

9) Fix oops in skb_loop_sk(), from Eric Leblond.

10) CXGB4 passes partially uninitialized datastructures in to FW
commands, fix from Vipul Pandya.

11) When we send unsolicited ipv6 neighbour advertisements, we should
send them to the link-local allnodes multicast address, as per
RFC4861. Fix from Hannes Frederic Sowa.
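Per RFC 4861 the link-local all-nodes group is ff02::1. A small userspace sketch constructing that address and checking its bytes (using inet_pton rather than the kernel's IN6ADDR_LINKLOCAL_ALLNODES_INIT initializer that the fix below employs):

```c
#include <arpa/inet.h>
#include <assert.h>

/* Build the all-nodes link-local multicast address ff02::1, the
 * destination RFC 4861 prescribes for unsolicited neighbour
 * advertisements. */
static struct in6_addr allnodes_mcast(void)
{
	struct in6_addr addr;

	assert(inet_pton(AF_INET6, "ff02::1", &addr) == 1);
	return addr;
}
```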

12) There is some kind of bug in usbnet's kevent deferral mechanism;
more immediately, when it triggers, an uncontrolled stream of
kernel messages spams the log. Rate limit the error message
triggered when this problem occurs, as sending thousands of error
messages into the kernel log doesn't help matters at all, and in
fact makes further diagnosis more difficult.

From Steve Glendinning.

13) Fix gianfar restore from hibernation, from Wang Dongsheng.

14) The netlink message attribute sizes are wrong in the ipv6 GRE
driver, it was using the size of ipv4 addresses instead of ipv6
ones :-) Fix from Nicolas Dichtel."

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net:
gre6: fix rtnl dump messages
gianfar: ethernet vanishes after restoring from hibernation
usbnet: ratelimit kevent may have been dropped warnings
ipv6: send unsolicited neighbour advertisements to all-nodes
net: usb: cdc_eem: Fix rx skb allocation for 802.1Q VLANs
usb: gadget: g_ether: fix frame size check for 802.1Q
cxgb4: Fix initialization of SGE_CONTROL register
isdn: Make CONFIG_ISDN depend on CONFIG_NETDEVICES
cxgb4: Initialize data structures before using.
af-packet: fix oops when socket is not present
pkt_sched: enable QFQ to support TSO/GSO
net: inet_diag -- Return error code if protocol handler is missed
net: bnx2x: Fix typo in bnx2x driver
smsc95xx: fix tx checksum offload for big endian
rtnetlink: Use nlmsg type RTM_NEWNEIGH from dflt fdb dump
ptp: update adjfreq callback description
r8169: allow multicast packets on sub-8168f chipset.
r8169: Fix WoL on RTL8168d/8111d.
drivers/net: use tasklet_kill in device remove/close process
tipc: do not use tasklet_disable before tasklet_kill

Total: +131 -66
drivers/isdn/Kconfig | +1 -1
@@
 
 menuconfig ISDN
 	bool "ISDN support"
-	depends on NET
+	depends on NET && NETDEVICES
 	depends on !S390 && !UML
 	---help---
 	  ISDN ("Integrated Services Digital Network", called RNIS in France)

drivers/isdn/i4l/Kconfig | +1 -1
@@
 
 config ISDN_PPP
 	bool "Support synchronous PPP"
-	depends on INET && NETDEVICES
+	depends on INET
 	select SLHC
 	help
 	  Over digital connections such as ISDN, there is no need to

drivers/isdn/i4l/isdn_common.c | -4
@@
 		} else
 			return -EINVAL;
 		break;
-#ifdef CONFIG_NETDEVICES
 	case IIOCNETGPN:
 		/* Get peer phone number of a connected
 		 * isdn network interface */
@@
 			return isdn_net_getpeer(&phone, argp);
 		} else
 			return -EINVAL;
-#endif
 	default:
 		return -EINVAL;
 	}
@@
 	case IIOCNETLCR:
 		printk(KERN_INFO "INFO: ISDN_ABC_LCR_SUPPORT not enabled\n");
 		return -ENODEV;
-#ifdef CONFIG_NETDEVICES
 	case IIOCNETAIF:
 		/* Add a network-interface */
 		if (arg) {
@@
 			return -EFAULT;
 		return isdn_net_force_hangup(name);
 		break;
-#endif /* CONFIG_NETDEVICES */
 	case IIOCSETVER:
 		dev->net_verbose = arg;
 		printk(KERN_INFO "isdn: Verbose-Level is %d\n", dev->net_verbose);

drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c | +1 -1
@@
 					 SHMEM_EEE_ADV_STATUS_SHIFT);
 	if ((advertised != (eee_cfg & SHMEM_EEE_ADV_STATUS_MASK))) {
 		DP(BNX2X_MSG_ETHTOOL,
-		   "Direct manipulation of EEE advertisment is not supported\n");
+		   "Direct manipulation of EEE advertisement is not supported\n");
 		return -EINVAL;
 	}
 

drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.c | +2 -2
@@
 	else
 		rc = bnx2x_8483x_disable_eee(phy, params, vars);
 	if (rc) {
-		DP(NETIF_MSG_LINK, "Failed to set EEE advertisment\n");
+		DP(NETIF_MSG_LINK, "Failed to set EEE advertisement\n");
 		return rc;
 	}
 } else {
@@
 		DP(NETIF_MSG_LINK, "Analyze TX Fault\n");
 		break;
 	default:
-		DP(NETIF_MSG_LINK, "Analyze UNKOWN\n");
+		DP(NETIF_MSG_LINK, "Analyze UNKNOWN\n");
 	}
 	DP(NETIF_MSG_LINK, "Link changed:[%x %x]->%x\n", vars->link_up,
 	   old_status, status);

drivers/net/ethernet/chelsio/cxgb4/t4_hw.c | +5 -1
@@
 {
 	struct fw_bye_cmd c;
 
+	memset(&c, 0, sizeof(c));
 	INIT_CMD(c, BYE, WRITE);
 	return t4_wr_mbox(adap, mbox, &c, sizeof(c), NULL);
 }
@@
 {
 	struct fw_initialize_cmd c;
 
+	memset(&c, 0, sizeof(c));
 	INIT_CMD(c, INITIALIZE, WRITE);
 	return t4_wr_mbox(adap, mbox, &c, sizeof(c), NULL);
 }
@@
 {
 	struct fw_reset_cmd c;
 
+	memset(&c, 0, sizeof(c));
 	INIT_CMD(c, RESET, WRITE);
 	c.val = htonl(reset);
 	return t4_wr_mbox(adap, mbox, &c, sizeof(c), NULL);
@@
 		     HOSTPAGESIZEPF7(sge_hps));
 
 	t4_set_reg_field(adap, SGE_CONTROL,
-			 INGPADBOUNDARY(INGPADBOUNDARY_MASK) |
+			 INGPADBOUNDARY_MASK |
 			 EGRSTATUSPAGESIZE_MASK,
 			 INGPADBOUNDARY(fl_align_log - 5) |
 			 EGRSTATUSPAGESIZE(stat_len != 64));
@@
 {
 	struct fw_vi_enable_cmd c;
 
+	memset(&c, 0, sizeof(c));
 	c.op_to_viid = htonl(FW_CMD_OP(FW_VI_ENABLE_CMD) | FW_CMD_REQUEST |
 			     FW_CMD_EXEC | FW_VI_ENABLE_CMD_VIID(viid));
 	c.ien_to_len16 = htonl(FW_VI_ENABLE_CMD_LED | FW_LEN16(c));

drivers/net/ethernet/freescale/gianfar.c | +4 -1
@@
 	struct gfar_private *priv = dev_get_drvdata(dev);
 	struct net_device *ndev = priv->ndev;
 
-	if (!netif_running(ndev))
+	if (!netif_running(ndev)) {
+		netif_device_attach(ndev);
+
 		return 0;
+	}
 
 	gfar_init_bds(ndev);
 	init_registers(ndev);

drivers/net/ethernet/jme.c | +4 -4
@@
 
 	JME_NAPI_DISABLE(jme);
 
-	tasklet_disable(&jme->linkch_task);
-	tasklet_disable(&jme->txclean_task);
-	tasklet_disable(&jme->rxclean_task);
-	tasklet_disable(&jme->rxempty_task);
+	tasklet_kill(&jme->linkch_task);
+	tasklet_kill(&jme->txclean_task);
+	tasklet_kill(&jme->rxclean_task);
+	tasklet_kill(&jme->rxempty_task);
 
 	jme_disable_rx_engine(jme);
 	jme_disable_tx_engine(jme);

drivers/net/ethernet/marvell/skge.c | +1 -1
@@
 	dev0 = hw->dev[0];
 	unregister_netdev(dev0);
 
-	tasklet_disable(&hw->phy_task);
+	tasklet_kill(&hw->phy_task);
 
 	spin_lock_irq(&hw->hw_lock);
 	hw->intr_mask = 0;

drivers/net/ethernet/micrel/ksz884x.c | +2 -2
@@
 	/* Delay for receive task to stop scheduling itself. */
 	msleep(2000 / HZ);
 
-	tasklet_disable(&hw_priv->rx_tasklet);
-	tasklet_disable(&hw_priv->tx_tasklet);
+	tasklet_kill(&hw_priv->rx_tasklet);
+	tasklet_kill(&hw_priv->tx_tasklet);
 	free_irq(dev->irq, hw_priv->dev);
 
 	transmit_cleanup(hw_priv, 0);

drivers/net/ethernet/realtek/r8169.c | +5
@@
 	void __iomem *ioaddr = tp->mmio_addr;
 
 	switch (tp->mac_version) {
+	case RTL_GIGA_MAC_VER_25:
+	case RTL_GIGA_MAC_VER_26:
 	case RTL_GIGA_MAC_VER_29:
 	case RTL_GIGA_MAC_VER_30:
 	case RTL_GIGA_MAC_VER_32:
@@
 		mc_filter[0] = swab32(mc_filter[1]);
 		mc_filter[1] = swab32(data);
 	}
+
+	if (tp->mac_version == RTL_GIGA_MAC_VER_35)
+		mc_filter[1] = mc_filter[0] = 0xffffffff;
 
 	RTL_W32(MAR0 + 4, mc_filter[1]);
 	RTL_W32(MAR0 + 0, mc_filter[0]);

drivers/net/ethernet/xilinx/xilinx_axienet_main.c | +1 -1
@@
 	axienet_setoptions(ndev, lp->options &
 			   ~(XAE_OPTION_TXEN | XAE_OPTION_RXEN));
 
-	tasklet_disable(&lp->dma_err_tasklet);
+	tasklet_kill(&lp->dma_err_tasklet);
 
 	free_irq(lp->tx_irq, ndev);
 	free_irq(lp->rx_irq, ndev);

drivers/net/usb/cdc_eem.c | +2 -1
@@
 #include <linux/usb/cdc.h>
 #include <linux/usb/usbnet.h>
 #include <linux/gfp.h>
+#include <linux/if_vlan.h>
 
 
 /*
@@
 
 	/* no jumbogram (16K) support for now */
 
-	dev->net->hard_header_len += EEM_HEAD + ETH_FCS_LEN;
+	dev->net->hard_header_len += EEM_HEAD + ETH_FCS_LEN + VLAN_HLEN;
 	dev->hard_mtu = dev->net->mtu + dev->net->hard_header_len;
 
 	return 0;

drivers/net/usb/smsc95xx.c | +1
@@
 	} else {
 		u32 csum_preamble = smsc95xx_calc_csum_preamble(skb);
 		skb_push(skb, 4);
+		cpu_to_le32s(&csum_preamble);
 		memcpy(skb->data, &csum_preamble, 4);
 	}
 }

drivers/net/usb/usbnet.c | +5 -3
@@
 void usbnet_defer_kevent (struct usbnet *dev, int work)
 {
 	set_bit (work, &dev->flags);
-	if (!schedule_work (&dev->kevent))
-		netdev_err(dev->net, "kevent %d may have been dropped\n", work);
-	else
+	if (!schedule_work (&dev->kevent)) {
+		if (net_ratelimit())
+			netdev_err(dev->net, "kevent %d may have been dropped\n", work);
+	} else {
 		netdev_dbg(dev->net, "kevent %d scheduled\n", work);
+	}
 }
 EXPORT_SYMBOL_GPL(usbnet_defer_kevent);
 

drivers/net/wireless/b43legacy/pio.c | +1 -1
@@
 {
 	struct b43legacy_pio_txpacket *packet, *tmp_packet;
 
-	tasklet_disable(&queue->txtask);
+	tasklet_kill(&queue->txtask);
 
 	list_for_each_entry_safe(packet, tmp_packet, &queue->txrunning, list)
 		free_txpacket(packet, 0);

drivers/usb/gadget/u_ether.c | +2 -1
@@
 #include <linux/ctype.h>
 #include <linux/etherdevice.h>
 #include <linux/ethtool.h>
+#include <linux/if_vlan.h>
 
 #include "u_ether.h"
 
@@
 	while (skb2) {
 		if (status < 0
 				|| ETH_HLEN > skb2->len
-				|| skb2->len > ETH_FRAME_LEN) {
+				|| skb2->len > VLAN_ETH_FRAME_LEN) {
 			dev->net->stats.rx_errors++;
 			dev->net->stats.rx_length_errors++;
 			DBG(dev, "rx length %d\n", skb2->len);

include/linux/ptp_clock_kernel.h | +2 -1
@@
  * clock operations
  *
  * @adjfreq:  Adjusts the frequency of the hardware clock.
- *            parameter delta: Desired period change in parts per billion.
+ *            parameter delta: Desired frequency offset from nominal frequency
+ *            in parts per billion
  *
  * @adjtime:  Shifts the time of the hardware clock.
  *            parameter delta: Desired change in nanoseconds.

net/core/dev.c | +1 -1
@@
 
 static inline bool skb_loop_sk(struct packet_type *ptype, struct sk_buff *skb)
 {
-	if (ptype->af_packet_priv == NULL)
+	if (!ptype->af_packet_priv || !skb->sk)
 		return false;
 
 	if (ptype->id_match)

net/core/rtnetlink.c | +2 -1
@@
 			goto skip;
 
 		err = nlmsg_populate_fdb_fill(skb, dev, ha->addr,
-					      portid, seq, 0, NTF_SELF);
+					      portid, seq,
+					      RTM_NEWNEIGH, NTF_SELF);
 		if (err < 0)
 			return err;
 skip:

net/ipv4/inet_diag.c | +4 -1
@@
 		struct inet_diag_req_v2 *r, struct nlattr *bc)
 {
 	const struct inet_diag_handler *handler;
+	int err = 0;
 
 	handler = inet_diag_lock_handler(r->sdiag_protocol);
 	if (!IS_ERR(handler))
 		handler->dump(skb, cb, r, bc);
+	else
+		err = PTR_ERR(handler);
 	inet_diag_unlock_handler(handler);
 
-	return skb->len;
+	return err ? : skb->len;
 }
 
 static int inet_diag_dump(struct sk_buff *skb, struct netlink_callback *cb)

net/ipv6/ip6_gre.c | +4 -4
@@
 		/* IFLA_GRE_OKEY */
 		nla_total_size(4) +
 		/* IFLA_GRE_LOCAL */
-		nla_total_size(4) +
+		nla_total_size(sizeof(struct in6_addr)) +
 		/* IFLA_GRE_REMOTE */
-		nla_total_size(4) +
+		nla_total_size(sizeof(struct in6_addr)) +
 		/* IFLA_GRE_TTL */
 		nla_total_size(1) +
 		/* IFLA_GRE_TOS */
@@
 	    nla_put_be16(skb, IFLA_GRE_OFLAGS, p->o_flags) ||
 	    nla_put_be32(skb, IFLA_GRE_IKEY, p->i_key) ||
 	    nla_put_be32(skb, IFLA_GRE_OKEY, p->o_key) ||
-	    nla_put(skb, IFLA_GRE_LOCAL, sizeof(struct in6_addr), &p->raddr) ||
-	    nla_put(skb, IFLA_GRE_REMOTE, sizeof(struct in6_addr), &p->laddr) ||
+	    nla_put(skb, IFLA_GRE_LOCAL, sizeof(struct in6_addr), &p->laddr) ||
+	    nla_put(skb, IFLA_GRE_REMOTE, sizeof(struct in6_addr), &p->raddr) ||
 	    nla_put_u8(skb, IFLA_GRE_TTL, p->hop_limit) ||
 	    /*nla_put_u8(skb, IFLA_GRE_TOS, t->priority) ||*/
 	    nla_put_u8(skb, IFLA_GRE_ENCAP_LIMIT, p->encap_limit) ||

net/ipv6/ndisc.c | +1 -2
@@
 {
 	struct inet6_dev *idev;
 	struct inet6_ifaddr *ifa;
-	struct in6_addr mcaddr;
+	struct in6_addr mcaddr = IN6ADDR_LINKLOCAL_ALLNODES_INIT;
 
 	idev = in6_dev_get(dev);
 	if (!idev)
@@
 
 	read_lock_bh(&idev->lock);
 	list_for_each_entry(ifa, &idev->addr_list, if_list) {
-		addrconf_addr_solict_mult(&ifa->addr, &mcaddr);
 		ndisc_send_na(dev, NULL, &mcaddr, &ifa->addr,
 			      /*router=*/ !!idev->cnf.forwarding,
 			      /*solicited=*/ false, /*override=*/ true,

net/sched/sch_qfq.c | +79 -30
@@
  * grp->index is the index of the group; and grp->slot_shift
  * is the shift for the corresponding (scaled) sigma_i.
  */
-#define QFQ_MAX_INDEX		19
-#define QFQ_MAX_WSHIFT		16
+#define QFQ_MAX_INDEX		24
+#define QFQ_MAX_WSHIFT		12
 
 #define QFQ_MAX_WEIGHT		(1<<QFQ_MAX_WSHIFT)
-#define QFQ_MAX_WSUM		(2*QFQ_MAX_WEIGHT)
+#define QFQ_MAX_WSUM		(16*QFQ_MAX_WEIGHT)
 
 #define FRAC_BITS		30	/* fixed point arithmetic */
 #define ONE_FP			(1UL << FRAC_BITS)
 #define IWSUM			(ONE_FP/QFQ_MAX_WSUM)
 
-#define QFQ_MTU_SHIFT		11
+#define QFQ_MTU_SHIFT		16	/* to support TSO/GSO */
 #define QFQ_MIN_SLOT_SHIFT	(FRAC_BITS + QFQ_MTU_SHIFT - QFQ_MAX_INDEX)
+#define QFQ_MIN_LMAX		256	/* min possible lmax for a class */
 
 /*
  * Possible group states. These values are used as indexes for the bitmaps
@@
 	q->wsum += delta_w;
 }
 
+static void qfq_update_reactivate_class(struct qfq_sched *q,
+					struct qfq_class *cl,
+					u32 inv_w, u32 lmax, int delta_w)
+{
+	bool need_reactivation = false;
+	int i = qfq_calc_index(inv_w, lmax);
+
+	if (&q->groups[i] != cl->grp && cl->qdisc->q.qlen > 0) {
+		/*
+		 * shift cl->F back, to not charge the
+		 * class for the not-yet-served head
+		 * packet
+		 */
+		cl->F = cl->S;
+		/* remove class from its slot in the old group */
+		qfq_deactivate_class(q, cl);
+		need_reactivation = true;
+	}
+
+	qfq_update_class_params(q, cl, lmax, inv_w, delta_w);
+
+	if (need_reactivation) /* activate in new group */
+		qfq_activate_class(q, cl, qdisc_peek_len(cl->qdisc));
+}
+
+
 static int qfq_change_class(struct Qdisc *sch, u32 classid, u32 parentid,
 			    struct nlattr **tca, unsigned long *arg)
 {
@@
 	struct qfq_class *cl = (struct qfq_class *)*arg;
 	struct nlattr *tb[TCA_QFQ_MAX + 1];
 	u32 weight, lmax, inv_w;
-	int i, err;
+	int err;
 	int delta_w;
 
 	if (tca[TCA_OPTIONS] == NULL) {
@@
 
 	if (tb[TCA_QFQ_LMAX]) {
 		lmax = nla_get_u32(tb[TCA_QFQ_LMAX]);
-		if (!lmax || lmax > (1UL << QFQ_MTU_SHIFT)) {
+		if (lmax < QFQ_MIN_LMAX || lmax > (1UL << QFQ_MTU_SHIFT)) {
 			pr_notice("qfq: invalid max length %u\n", lmax);
 			return -EINVAL;
 		}
 	} else
-		lmax = 1UL << QFQ_MTU_SHIFT;
+		lmax = psched_mtu(qdisc_dev(sch));
 
 	if (cl != NULL) {
-		bool need_reactivation = false;
-
 		if (tca[TCA_RATE]) {
 			err = gen_replace_estimator(&cl->bstats, &cl->rate_est,
 						    qdisc_root_sleeping_lock(sch),
@@
 		if (lmax == cl->lmax && inv_w == cl->inv_w)
 			return 0; /* nothing to update */
 
-		i = qfq_calc_index(inv_w, lmax);
 		sch_tree_lock(sch);
-		if (&q->groups[i] != cl->grp && cl->qdisc->q.qlen > 0) {
-			/*
-			 * shift cl->F back, to not charge the
-			 * class for the not-yet-served head
-			 * packet
-			 */
-			cl->F = cl->S;
-			/* remove class from its slot in the old group */
-			qfq_deactivate_class(q, cl);
-			need_reactivation = true;
-		}
-
-		qfq_update_class_params(q, cl, lmax, inv_w, delta_w);
-
-		if (need_reactivation) /* activate in new group */
-			qfq_activate_class(q, cl, qdisc_peek_len(cl->qdisc));
+		qfq_update_reactivate_class(q, cl, inv_w, lmax, delta_w);
 		sch_tree_unlock(sch);
 
 		return 0;
@@
 
 
 /*
- * XXX we should make sure that slot becomes less than 32.
- * This is guaranteed by the input values.
- * roundedS is always cl->S rounded on grp->slot_shift bits.
+ * If the weight and lmax (max_pkt_size) of the classes do not change,
+ * then QFQ guarantees that the slot index is never higher than
+ * 2 + ((1<<QFQ_MTU_SHIFT)/QFQ_MIN_LMAX) * (QFQ_MAX_WEIGHT/QFQ_MAX_WSUM).
+ *
+ * With the current values of the above constants, the index is
+ * then guaranteed to never be higher than 2 + 256 * (1 / 16) = 18.
+ *
+ * When the weight of a class is increased or the lmax of the class is
+ * decreased, a new class with smaller slot size may happen to be
+ * activated. The activation of this class should be properly delayed
+ * to when the service of the class has finished in the ideal system
+ * tracked by QFQ. If the activation of the class is not delayed to
+ * this reference time instant, then this class may be unjustly served
+ * before other classes waiting for service. This may cause
+ * (unfrequently) the above bound to the slot index to be violated for
+ * some of these unlucky classes.
+ *
+ * Instead of delaying the activation of the new class, which is quite
+ * complex, the following inaccurate but simple solution is used: if
+ * the slot index is higher than QFQ_MAX_SLOTS-2, then the timestamps
+ * of the class are shifted backward so as to let the slot index
+ * become equal to QFQ_MAX_SLOTS-2. This threshold is used because, if
+ * the slot index is above it, then the data structure implementing
+ * the bucket list either gets immediately corrupted or may get
+ * corrupted on a possible next packet arrival that causes the start
+ * time of the group to be shifted backward.
  */
 static void qfq_slot_insert(struct qfq_group *grp, struct qfq_class *cl,
 			    u64 roundedS)
 {
 	u64 slot = (roundedS - grp->S) >> grp->slot_shift;
-	unsigned int i = (grp->front + slot) % QFQ_MAX_SLOTS;
+	unsigned int i; /* slot index in the bucket list */
+
+	if (unlikely(slot > QFQ_MAX_SLOTS - 2)) {
+		u64 deltaS = roundedS - grp->S -
+			((u64)(QFQ_MAX_SLOTS - 2)<<grp->slot_shift);
+		cl->S -= deltaS;
+		cl->F -= deltaS;
+		slot = QFQ_MAX_SLOTS - 2;
+	}
+
+	i = (grp->front + slot) % QFQ_MAX_SLOTS;
 
 	hlist_add_head(&cl->next, &grp->slots[i]);
 	__set_bit(slot, &grp->full_slots);
@@
 		return err;
 	}
 	pr_debug("qfq_enqueue: cl = %x\n", cl->common.classid);
+
+	if (unlikely(cl->lmax < qdisc_pkt_len(skb))) {
+		pr_debug("qfq: increasing maxpkt from %u to %u for class %u",
+			 cl->lmax, qdisc_pkt_len(skb), cl->common.classid);
+		qfq_update_reactivate_class(q, cl, cl->inv_w,
+					    qdisc_pkt_len(skb), 0);
+	}
 
 	err = qdisc_enqueue(skb, cl->qdisc);
 	if (unlikely(err != NET_XMIT_SUCCESS)) {

net/tipc/handler.c | -1
@@
 		return;
 
 	handler_enabled = 0;
-	tasklet_disable(&tipc_tasklet);
 	tasklet_kill(&tipc_tasklet);
 
 	spin_lock_bh(&qitem_lock);