Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'net-6.16-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
"Including fixes from bluetooth and wireless.

Current release - regressions:

- af_unix: allow passing cred for embryo without SO_PASSCRED/SO_PASSPIDFD

Current release - new code bugs:

- eth: airoha: correct enable mask for RX queues 16-31

- veth: prevent NULL pointer dereference in veth_xdp_rcv when peer
disappears under traffic

- ipv6: move fib6_config_validate() to ip6_route_add(), prevent
invalid routes

Previous releases - regressions:

- phy: phy_caps: don't skip better duplex match on non-exact match

- dsa: b53: fix untagged traffic sent via cpu tagged with VID 0

- Revert "wifi: mwifiex: Fix HT40 bandwidth issue.", which caused
  transient packet loss; the exact reason is not yet fully understood

Previous releases - always broken:

- net: clear the dst when BPF is changing skb protocol (IPv4 <> IPv6)

- sched: sfq: fix a potential crash on gso_skb handling

- Bluetooth: intel: improve rx buffer posting to avoid causing issues
in the firmware

- eth: intel: i40e: make reset handling robust against multiple
requests

- eth: mlx5: ensure FW pages are always allocated on the local NUMA
  node, even when the device is configured to 'serve' another node

- wifi: ath12k: fix GCC_GCC_PCIE_HOT_RST definition for WCN7850,
prevent kernel crashes

- wifi: ath11k: avoid burning CPU in ath11k_debugfs_fw_stats_request()
for 3 sec if fw_stats_done is not set"

* tag 'net-6.16-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (70 commits)
selftests: drv-net: rss_ctx: Add test for ntuple rules targeting default RSS context
net: ethtool: Don't check if RSS context exists in case of context 0
af_unix: Allow passing cred for embryo without SO_PASSCRED/SO_PASSPIDFD.
ipv6: Move fib6_config_validate() to ip6_route_add().
net: drv: netdevsim: don't napi_complete() from netpoll
net/mlx5: HWS, Add error checking to hws_bwc_rule_complex_hash_node_get()
veth: prevent NULL pointer dereference in veth_xdp_rcv
net_sched: remove qdisc_tree_flush_backlog()
net_sched: ets: fix a race in ets_qdisc_change()
net_sched: tbf: fix a race in tbf_change()
net_sched: red: fix a race in __red_change()
net_sched: prio: fix a race in prio_tune()
net_sched: sch_sfq: reject invalid perturb period
net: phy: phy_caps: Don't skip better duplex macth on non-exact match
MAINTAINERS: Update Kuniyuki Iwashima's email address.
selftests: net: add test case for NAT46 looping back dst
net: clear the dst when changing skb protocol
net/mlx5e: Fix number of lanes to UNKNOWN when using data_rate_oper
net/mlx5e: Fix leak of Geneve TLV option object
net/mlx5: HWS, make sure the uplink is the last destination
...

+818 -555
.mailmap (+3)

```diff
  Krzysztof Wilczyński <kwilczynski@kernel.org> <kw@linux.com>
  Kshitiz Godara <quic_kgodara@quicinc.com> <kgodara@codeaurora.org>
  Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
+ Kuniyuki Iwashima <kuniyu@google.com> <kuniyu@amazon.com>
+ Kuniyuki Iwashima <kuniyu@google.com> <kuniyu@amazon.co.jp>
+ Kuniyuki Iwashima <kuniyu@google.com> <kuni1840@gmail.com>
  Kuogee Hsieh <quic_khsieh@quicinc.com> <khsieh@codeaurora.org>
  Lee Jones <lee@kernel.org> <joneslee@google.com>
  Lee Jones <lee@kernel.org> <lee.jones@canonical.com>
```
MAINTAINERS (+3 -3)

```diff
  NETWORKING [TCP]
  M:  Eric Dumazet <edumazet@google.com>
  M:  Neal Cardwell <ncardwell@google.com>
- R:  Kuniyuki Iwashima <kuniyu@amazon.com>
+ R:  Kuniyuki Iwashima <kuniyu@google.com>
  L:  netdev@vger.kernel.org
  S:  Maintained
  F:  Documentation/networking/net_cachelines/tcp_sock.rst
···
  NETWORKING [SOCKETS]
  M:  Eric Dumazet <edumazet@google.com>
- M:  Kuniyuki Iwashima <kuniyu@amazon.com>
+ M:  Kuniyuki Iwashima <kuniyu@google.com>
  M:  Paolo Abeni <pabeni@redhat.com>
  M:  Willem de Bruijn <willemb@google.com>
  S:  Maintained
···
  F:  net/socket.c

  NETWORKING [UNIX SOCKETS]
- M:  Kuniyuki Iwashima <kuniyu@amazon.com>
+ M:  Kuniyuki Iwashima <kuniyu@google.com>
  S:  Maintained
  F:  include/net/af_unix.h
  F:  include/net/netns/unix.h
```
drivers/bluetooth/btintel_pcie.c (+18 -13)

```diff
  static int btintel_pcie_start_rx(struct btintel_pcie_data *data)
  {
      int i, ret;
+     struct rxq *rxq = &data->rxq;

-     for (i = 0; i < BTINTEL_PCIE_RX_MAX_QUEUE; i++) {
+     /* Post (BTINTEL_PCIE_RX_DESCS_COUNT - 3) buffers to overcome the
+      * hardware issues leading to race condition at the firmware.
+      */
+
+     for (i = 0; i < rxq->count - 3; i++) {
          ret = btintel_pcie_submit_rx(data);
          if (ret)
              return ret;
···
       * + size of index * Number of queues(2) * type of index array(4)
       * + size of context information
       */
-     total = (sizeof(struct tfd) + sizeof(struct urbd0) + sizeof(struct frbd)
-         + sizeof(struct urbd1)) * BTINTEL_DESCS_COUNT;
+     total = (sizeof(struct tfd) + sizeof(struct urbd0)) * BTINTEL_PCIE_TX_DESCS_COUNT;
+     total += (sizeof(struct frbd) + sizeof(struct urbd1)) * BTINTEL_PCIE_RX_DESCS_COUNT;

      /* Add the sum of size of index array and size of ci struct */
      total += (sizeof(u16) * BTINTEL_PCIE_NUM_QUEUES * 4) + sizeof(struct ctx_info);
···
      data->dma_v_addr = v_addr;

      /* Setup descriptor count */
-     data->txq.count = BTINTEL_DESCS_COUNT;
-     data->rxq.count = BTINTEL_DESCS_COUNT;
+     data->txq.count = BTINTEL_PCIE_TX_DESCS_COUNT;
+     data->rxq.count = BTINTEL_PCIE_RX_DESCS_COUNT;

      /* Setup tfds */
      data->txq.tfds_p_addr = p_addr;
      data->txq.tfds = v_addr;

-     p_addr += (sizeof(struct tfd) * BTINTEL_DESCS_COUNT);
-     v_addr += (sizeof(struct tfd) * BTINTEL_DESCS_COUNT);
+     p_addr += (sizeof(struct tfd) * BTINTEL_PCIE_TX_DESCS_COUNT);
+     v_addr += (sizeof(struct tfd) * BTINTEL_PCIE_TX_DESCS_COUNT);

      /* Setup urbd0 */
      data->txq.urbd0s_p_addr = p_addr;
      data->txq.urbd0s = v_addr;

-     p_addr += (sizeof(struct urbd0) * BTINTEL_DESCS_COUNT);
-     v_addr += (sizeof(struct urbd0) * BTINTEL_DESCS_COUNT);
+     p_addr += (sizeof(struct urbd0) * BTINTEL_PCIE_TX_DESCS_COUNT);
+     v_addr += (sizeof(struct urbd0) * BTINTEL_PCIE_TX_DESCS_COUNT);

      /* Setup FRBD*/
      data->rxq.frbds_p_addr = p_addr;
      data->rxq.frbds = v_addr;

-     p_addr += (sizeof(struct frbd) * BTINTEL_DESCS_COUNT);
-     v_addr += (sizeof(struct frbd) * BTINTEL_DESCS_COUNT);
+     p_addr += (sizeof(struct frbd) * BTINTEL_PCIE_RX_DESCS_COUNT);
+     v_addr += (sizeof(struct frbd) * BTINTEL_PCIE_RX_DESCS_COUNT);

      /* Setup urbd1 */
      data->rxq.urbd1s_p_addr = p_addr;
      data->rxq.urbd1s = v_addr;

-     p_addr += (sizeof(struct urbd1) * BTINTEL_DESCS_COUNT);
-     v_addr += (sizeof(struct urbd1) * BTINTEL_DESCS_COUNT);
+     p_addr += (sizeof(struct urbd1) * BTINTEL_PCIE_RX_DESCS_COUNT);
+     v_addr += (sizeof(struct urbd1) * BTINTEL_PCIE_RX_DESCS_COUNT);

      /* Setup data buffers for txq */
      err = btintel_pcie_setup_txq_bufs(data, &data->txq);
```
drivers/bluetooth/btintel_pcie.h (+5 -5)

```diff
  /* Default interrupt timeout in msec */
  #define BTINTEL_DEFAULT_INTR_TIMEOUT_MS    3000

- /* The number of descriptors in TX/RX queues */
- #define BTINTEL_DESCS_COUNT    16
+ /* The number of descriptors in TX queues */
+ #define BTINTEL_PCIE_TX_DESCS_COUNT    32
+
+ /* The number of descriptors in RX queues */
+ #define BTINTEL_PCIE_RX_DESCS_COUNT    64

  /* Number of Queue for TX and RX
   * It indicates the index of the IA(Index Array)
···
  /* Doorbell vector for TFD */
  #define BTINTEL_PCIE_TX_DB_VEC    0
-
- /* Number of pending RX requests for downlink */
- #define BTINTEL_PCIE_RX_MAX_QUEUE    6

  /* Doorbell vector for FRBD */
  #define BTINTEL_PCIE_RX_DB_VEC    513
```
drivers/net/dsa/b53/b53_common.c (+1 -5)

```diff
      b53_get_vlan_entry(dev, pvid, vl);
      vl->members &= ~BIT(port);
-     if (vl->members == BIT(cpu_port))
-         vl->members &= ~BIT(cpu_port);
-     vl->untag = vl->members;
      b53_set_vlan_entry(dev, pvid, vl);
  }
···
      }

      b53_get_vlan_entry(dev, pvid, vl);
-     vl->members |= BIT(port) | BIT(cpu_port);
-     vl->untag |= BIT(port) | BIT(cpu_port);
+     vl->members |= BIT(port);
      b53_set_vlan_entry(dev, pvid, vl);
  }
```
drivers/net/ethernet/airoha/airoha_regs.h (+2 -1)

```diff
   RX19_DONE_INT_MASK | RX18_DONE_INT_MASK | \
   RX17_DONE_INT_MASK | RX16_DONE_INT_MASK)

- #define RX_DONE_INT_MASK    (RX_DONE_HIGH_INT_MASK | RX_DONE_LOW_INT_MASK)
  #define RX_DONE_HIGH_OFFSET    fls(RX_DONE_HIGH_INT_MASK)
+ #define RX_DONE_INT_MASK \
+     ((RX_DONE_HIGH_INT_MASK << RX_DONE_HIGH_OFFSET) | RX_DONE_LOW_INT_MASK)

  #define INT_RX2_MASK(_n) \
      ((RX_NO_CPU_DSCP_HIGH_INT_MASK & (_n)) | \
```
drivers/net/ethernet/freescale/enetc/Kconfig (+5 -1)

```diff
  # SPDX-License-Identifier: GPL-2.0
  config FSL_ENETC_CORE
      tristate
+     select NXP_NETC_LIB if NXP_NTMP
      help
        This module supports common functionality between the PF and VF
        drivers for the NXP ENETC controller.
···
        This module provides common functionalities for both ENETC and NETC
        Switch, such as NETC Table Management Protocol (NTMP) 2.0, common tc
        flower and debugfs interfaces and so on.
+
+ config NXP_NTMP
+     bool

  config FSL_ENETC
      tristate "ENETC PF driver"
···
      select FSL_ENETC_CORE
      select FSL_ENETC_MDIO
      select NXP_ENETC_PF_COMMON
-     select NXP_NETC_LIB
+     select NXP_NTMP
      select PHYLINK
      select DIMLIB
      help
```
drivers/net/ethernet/intel/e1000/e1000_main.c (+4 -4)

```diff
      cancel_delayed_work_sync(&adapter->phy_info_task);
      cancel_delayed_work_sync(&adapter->fifo_stall_task);
-
-     /* Only kill reset task if adapter is not resetting */
-     if (!test_bit(__E1000_RESETTING, &adapter->flags))
-         cancel_work_sync(&adapter->reset_task);
  }

  void e1000_down(struct e1000_adapter *adapter)
···
      e1000_release_manageability(adapter);

      unregister_netdev(netdev);
+
+     /* Only kill reset task if adapter is not resetting */
+     if (!test_bit(__E1000_RESETTING, &adapter->flags))
+         cancel_work_sync(&adapter->reset_task);

      e1000_phy_hw_reset(hw);
```
drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c (+7 -4)

```diff
   * @vf: pointer to the VF structure
   * @flr: VFLR was issued or not
   *
-  * Returns true if the VF is in reset, resets successfully, or resets
-  * are disabled and false otherwise.
+  * Return: True if reset was performed successfully or if resets are disabled.
+  * False if reset is already in progress.
   **/
  bool i40e_reset_vf(struct i40e_vf *vf, bool flr)
  {
···
      /* If VF is being reset already we don't need to continue. */
      if (test_and_set_bit(I40E_VF_STATE_RESETTING, &vf->vf_states))
-         return true;
+         return false;

      i40e_trigger_vf_reset(vf, flr);
···
          reg = rd32(hw, I40E_GLGEN_VFLRSTAT(reg_idx));
          if (reg & BIT(bit_idx))
              /* i40e_reset_vf will clear the bit in GLGEN_VFLRSTAT */
-             i40e_reset_vf(vf, true);
+             if (!i40e_reset_vf(vf, true)) {
+                 /* At least one VF did not finish resetting, retry next time */
+                 set_bit(__I40E_VFLR_EVENT_PENDING, pf->state);
+             }
      }

      return 0;
```
drivers/net/ethernet/intel/iavf/iavf_main.c (+11)

```diff
  }

  continue_reset:
+     /* If we are still early in the state machine, just restart. */
+     if (adapter->state <= __IAVF_INIT_FAILED) {
+         iavf_shutdown_adminq(hw);
+         iavf_change_state(adapter, __IAVF_STARTUP);
+         iavf_startup(adapter);
+         queue_delayed_work(adapter->wq, &adapter->watchdog_task,
+                            msecs_to_jiffies(30));
+         netdev_unlock(netdev);
+         return;
+     }
+
      /* We don't use netif_running() because it may be true prior to
       * ndo_open() returning, so we can't assume it means all our open
       * tasks have finished, since we're not holding the rtnl_lock here.
```
drivers/net/ethernet/intel/iavf/iavf_virtchnl.c (+17)

```diff
          return iavf_status_to_errno(status);
      received_op =
          (enum virtchnl_ops)le32_to_cpu(event->desc.cookie_high);
+
+     if (received_op == VIRTCHNL_OP_EVENT) {
+         struct iavf_adapter *adapter = hw->back;
+         struct virtchnl_pf_event *vpe =
+             (struct virtchnl_pf_event *)event->msg_buf;
+
+         if (vpe->event != VIRTCHNL_EVENT_RESET_IMPENDING)
+             continue;
+
+         dev_info(&adapter->pdev->dev, "Reset indication received from the PF\n");
+         if (!(adapter->flags & IAVF_FLAG_RESET_PENDING))
+             iavf_schedule_reset(adapter,
+                                 IAVF_FLAG_RESET_PENDING);
+
+         return -EIO;
+     }
+
      if (op_to_poll == received_op)
          break;
  }
```
drivers/net/ethernet/intel/ice/ice_ptp.c (+1)

```diff
      ts = ((u64)ts_hi << 32) | ts_lo;
      system->cycles = ts;
      system->cs_id = CSID_X86_ART;
+     system->use_nsecs = true;

      /* Read Device source clock time */
      ts_lo = rd32(hw, cfg->dev_time_l[tmr_idx]);
```
drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c (+1 -4)

```diff
  #include "en/fs_ethtool.h"

  #define LANES_UNKNOWN 0
- #define MAX_LANES 8

  void mlx5e_ethtool_get_drvinfo(struct mlx5e_priv *priv,
                                 struct ethtool_drvinfo *drvinfo)
···
          speed = info->speed;
          lanes = info->lanes;
          duplex = DUPLEX_FULL;
-     } else if (data_rate_oper) {
+     } else if (data_rate_oper)
          speed = 100 * data_rate_oper;
-         lanes = MAX_LANES;
-     }

  out:
      link_ksettings->base.duplex = duplex;
```
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c (+6 -6)

```diff
      return err;
  }

- static bool mlx5_flow_has_geneve_opt(struct mlx5e_tc_flow *flow)
+ static bool mlx5_flow_has_geneve_opt(struct mlx5_flow_spec *spec)
  {
-     struct mlx5_flow_spec *spec = &flow->attr->parse_attr->spec;
      void *headers_v = MLX5_ADDR_OF(fte_match_param,
                                     spec->match_value,
                                     misc_parameters_3);
···
      }
      complete_all(&flow->del_hw_done);

-     if (mlx5_flow_has_geneve_opt(flow))
+     if (mlx5_flow_has_geneve_opt(&attr->parse_attr->spec))
          mlx5_geneve_tlv_option_del(priv->mdev->geneve);

      if (flow->decap_route)
···
      err = mlx5e_tc_tun_parse(filter_dev, priv, tmp_spec, f, match_level);
      if (err) {
-         kvfree(tmp_spec);
          NL_SET_ERR_MSG_MOD(extack, "Failed to parse tunnel attributes");
          netdev_warn(priv->netdev, "Failed to parse tunnel attributes");
-         return err;
+     } else {
+         err = mlx5e_tc_set_attr_rx_tun(flow, tmp_spec);
      }
-     err = mlx5e_tc_set_attr_rx_tun(flow, tmp_spec);
+     if (mlx5_flow_has_geneve_opt(tmp_spec))
+         mlx5_geneve_tlv_option_del(priv->mdev->geneve);
      kvfree(tmp_spec);
      if (err)
          return err;
```
drivers/net/ethernet/mellanox/mlx5/core/eswitch.c (+13 -8)

```diff
          ret = mlx5_eswitch_load_pf_vf_vport(esw, MLX5_VPORT_ECPF, enabled_events);
          if (ret)
              goto ecpf_err;
-         if (mlx5_core_ec_sriov_enabled(esw->dev)) {
-             ret = mlx5_eswitch_load_ec_vf_vports(esw, esw->esw_funcs.num_ec_vfs,
-                                                  enabled_events);
-             if (ret)
-                 goto ec_vf_err;
-         }
+     }
+
+     /* Enable ECVF vports */
+     if (mlx5_core_ec_sriov_enabled(esw->dev)) {
+         ret = mlx5_eswitch_load_ec_vf_vports(esw,
+                                              esw->esw_funcs.num_ec_vfs,
+                                              enabled_events);
+         if (ret)
+             goto ec_vf_err;
      }

      /* Enable VF vports */
···
  {
      mlx5_eswitch_unload_vf_vports(esw, esw->esw_funcs.num_vfs);

+     if (mlx5_core_ec_sriov_enabled(esw->dev))
+         mlx5_eswitch_unload_ec_vf_vports(esw,
+                                          esw->esw_funcs.num_ec_vfs);
+
      if (mlx5_ecpf_vport_exists(esw->dev)) {
-         if (mlx5_core_ec_sriov_enabled(esw->dev))
-             mlx5_eswitch_unload_ec_vf_vports(esw, esw->esw_funcs.num_vfs);
          mlx5_eswitch_unload_pf_vf_vport(esw, MLX5_VPORT_ECPF);
      }
```
drivers/net/ethernet/mellanox/mlx5/core/fs_core.c (+4 -1)

```diff
      struct mlx5_flow_handle *rule;
      struct match_list *iter;
      bool take_write = false;
+     bool try_again = false;
      struct fs_fte *fte;
      u64  version = 0;
      int err;
···
          nested_down_write_ref_node(&g->node, FS_LOCK_PARENT);

          if (!g->node.active) {
+             try_again = true;
              up_write_ref_node(&g->node, false);
              continue;
          }
···
          tree_put_node(&fte->node, false);
          return rule;
      }
-     rule = ERR_PTR(-ENOENT);
+     err = try_again ? -EAGAIN : -ENOENT;
+     rule = ERR_PTR(err);
  out:
      kmem_cache_free(steering->ftes_cache, fte);
      return rule;
```
drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c (+1 -1)

```diff
  static int alloc_system_page(struct mlx5_core_dev *dev, u32 function)
  {
      struct device *device = mlx5_core_dma_dev(dev);
-     int nid = dev_to_node(device);
+     int nid = dev->priv.numa_node;
      struct page *page;
      u64 zero_addr = 1;
      u64 addr;
```
drivers/net/ethernet/mellanox/mlx5/core/steering/hws/action.c (+7 -7)

```diff
      struct mlx5hws_cmd_set_fte_attr fte_attr = {0};
      struct mlx5hws_cmd_forward_tbl *fw_island;
      struct mlx5hws_action *action;
-     u32 i /*, packet_reformat_id*/;
-     int ret;
+     int ret, last_dest_idx = -1;
+     u32 i;

      if (num_dest <= 1) {
          mlx5hws_err(ctx, "Action must have multiple dests\n");
···
          dest_list[i].destination_id = dests[i].dest->dest_obj.obj_id;
          fte_attr.action_flags |= MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
          fte_attr.ignore_flow_level = ignore_flow_level;
-         /* ToDo: In SW steering we have a handling of 'go to WIRE'
-          * destination here by upper layer setting 'is_wire_ft' flag
-          * if the destination is wire.
-          * This is because uplink should be last dest in the list.
-          */
+         if (dests[i].is_wire_ft)
+             last_dest_idx = i;
          break;
      case MLX5HWS_ACTION_TYP_VPORT:
          dest_list[i].destination_type = MLX5_FLOW_DESTINATION_TYPE_VPORT;
···
              goto free_dest_list;
          }
      }
+
+     if (last_dest_idx != -1)
+         swap(dest_list[last_dest_idx], dest_list[num_dest - 1]);

      fte_attr.dests_num = num_dest;
      fte_attr.dests = dest_list;
```
drivers/net/ethernet/mellanox/mlx5/core/steering/hws/bwc_complex.c (+17 -2)

```diff
      struct mlx5hws_bwc_matcher *bwc_matcher = bwc_rule->bwc_matcher;
      struct mlx5hws_bwc_complex_rule_hash_node *node, *old_node;
      struct rhashtable *refcount_hash;
-     int i;
+     int ret, i;

      bwc_rule->complex_hash_node = NULL;
···
      if (unlikely(!node))
          return -ENOMEM;

-     node->tag = ida_alloc(&bwc_matcher->complex->metadata_ida, GFP_KERNEL);
+     ret = ida_alloc(&bwc_matcher->complex->metadata_ida, GFP_KERNEL);
+     if (ret < 0)
+         goto err_free_node;
+     node->tag = ret;
+
      refcount_set(&node->refcount, 1);

      /* Clear match buffer - turn off all the unrelated fields
···
      old_node = rhashtable_lookup_get_insert_fast(refcount_hash,
                                                   &node->hash_node,
                                                   hws_refcount_hash);
+     if (IS_ERR(old_node)) {
+         ret = PTR_ERR(old_node);
+         goto err_free_ida;
+     }
+
      if (old_node) {
          /* Rule with the same tag already exists - update refcount */
          refcount_inc(&old_node->refcount);
···
      bwc_rule->complex_hash_node = node;
      return 0;
+
+ err_free_ida:
+     ida_free(&bwc_matcher->complex->metadata_ida, node->tag);
+ err_free_node:
+     kfree(node);
+     return ret;
  }

  static void
```
drivers/net/ethernet/mellanox/mlx5/core/steering/hws/definer.c (+3)

```diff
      HWS_SET_HDR(fc, match_param, IP_PROTOCOL_O,
                  outer_headers.ip_protocol,
                  eth_l3_outer.protocol_next_header);
+     HWS_SET_HDR(fc, match_param, IP_VERSION_O,
+                 outer_headers.ip_version,
+                 eth_l3_outer.ip_version);
      HWS_SET_HDR(fc, match_param, IP_TTL_O,
                  outer_headers.ttl_hoplimit,
                  eth_l3_outer.time_to_live_hop_limit);
```
drivers/net/ethernet/mellanox/mlx5/core/steering/hws/fs_hws.c (+4 -1)

```diff
      switch (attr->type) {
      case MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE:
          dest_action = mlx5_fs_get_dest_action_ft(fs_ctx, dst);
+         if (dst->dest_attr.ft->flags &
+             MLX5_FLOW_TABLE_UPLINK_VPORT)
+             dest_actions[num_dest_actions].is_wire_ft = true;
          break;
      case MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE_NUM:
          dest_action = mlx5_fs_get_dest_action_table_num(fs_ctx,
···
          pkt_reformat->fs_hws_action.pr_data = pr_data;
      }

+     mutex_init(&pkt_reformat->fs_hws_action.lock);
      pkt_reformat->owner = MLX5_FLOW_RESOURCE_OWNER_HWS;
      pkt_reformat->fs_hws_action.hws_action = hws_action;
      return 0;
···
          err = -ENOMEM;
          goto release_mh;
      }
-     mutex_init(&modify_hdr->fs_hws_action.lock);
      modify_hdr->fs_hws_action.mh_data = mh_data;
      modify_hdr->fs_hws_action.fs_pool = pool;
      modify_hdr->owner = MLX5_FLOW_RESOURCE_OWNER_SW;
```
drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws.h (+1)

```diff
      struct mlx5hws_action *dest;
      /* Optional reformat action */
      struct mlx5hws_action *reformat;
+     bool is_wire_ft;
  };

  /**
```
drivers/net/macsec.c (+34 -6)

```diff
      return sci;
  }

- static sci_t macsec_frame_sci(struct macsec_eth_header *hdr, bool sci_present)
+ static sci_t macsec_active_sci(struct macsec_secy *secy)
  {
-     sci_t sci;
+     struct macsec_rx_sc *rx_sc = rcu_dereference_bh(secy->rx_sc);

-     if (sci_present)
+     /* Case single RX SC */
+     if (rx_sc && !rcu_dereference_bh(rx_sc->next))
+         return (rx_sc->active) ? rx_sc->sci : 0;
+     /* Case no RX SC or multiple */
+     else
+         return 0;
+ }
+
+ static sci_t macsec_frame_sci(struct macsec_eth_header *hdr, bool sci_present,
+                               struct macsec_rxh_data *rxd)
+ {
+     struct macsec_dev *macsec;
+     sci_t sci = 0;
+
+     /* SC = 1 */
+     if (sci_present) {
          memcpy(&sci, hdr->secure_channel_id,
                 sizeof(hdr->secure_channel_id));
-     else
+     /* SC = 0; ES = 0 */
+     } else if ((!(hdr->tci_an & (MACSEC_TCI_ES | MACSEC_TCI_SC))) &&
+                (list_is_singular(&rxd->secys))) {
+         /* Only one SECY should exist on this scenario */
+         macsec = list_first_or_null_rcu(&rxd->secys, struct macsec_dev,
+                                         secys);
+         if (macsec)
+             return macsec_active_sci(&macsec->secy);
+     } else {
          sci = make_sci(hdr->eth.h_source, MACSEC_PORT_ES);
+     }

      return sci;
  }
···
      struct macsec_rxh_data *rxd;
      struct macsec_dev *macsec;
      unsigned int len;
-     sci_t sci;
+     sci_t sci = 0;
      u32 hdr_pn;
      bool cbit;
      struct pcpu_rx_sc_stats *rxsc_stats;
···
      macsec_skb_cb(skb)->has_sci = !!(hdr->tci_an & MACSEC_TCI_SC);
      macsec_skb_cb(skb)->assoc_num = hdr->tci_an & MACSEC_AN_MASK;
-     sci = macsec_frame_sci(hdr, macsec_skb_cb(skb)->has_sci);

      rcu_read_lock();
      rxd = macsec_data_rcu(skb->dev);
+
+     sci = macsec_frame_sci(hdr, macsec_skb_cb(skb)->has_sci, rxd);
+     if (!sci)
+         goto drop_nosc;

      list_for_each_entry_rcu(macsec, &rxd->secys, secys) {
          struct macsec_rx_sc *sc = find_rx_sc(&macsec->secy, sci);
···
      macsec_rxsa_put(rx_sa);
  drop_nosa:
      macsec_rxsc_put(rx_sc);
+ drop_nosc:
      rcu_read_unlock();
  drop_direct:
      kfree_skb(skb);
```
drivers/net/netconsole.c (+1 -2)

```diff
   */
  static int prepare_extradata(struct netconsole_target *nt)
  {
-     u32 fields = SYSDATA_CPU_NR | SYSDATA_TASKNAME;
      int extradata_len;

      /* userdata was appended when configfs write helper was called
···
       */
      extradata_len = nt->userdata_length;

-     if (!(nt->sysdata_fields & fields))
+     if (!nt->sysdata_fields)
          goto out;

      if (nt->sysdata_fields & SYSDATA_CPU_NR)
```
drivers/net/netdevsim/netdev.c (+2 -1)

```diff
      int done;

      done = nsim_rcv(rq, budget);
-     napi_complete(napi);
+     if (done < budget)
+         napi_complete_done(napi, done);

      return done;
  }
```
drivers/net/phy/mdio_bus.c (+12)

```diff
      lockdep_assert_held_once(&bus->mdio_lock);

+     if (addr >= PHY_MAX_ADDR)
+         return -ENXIO;
+
      if (bus->read)
          retval = bus->read(bus, addr, regnum);
      else
···
      int err;

      lockdep_assert_held_once(&bus->mdio_lock);
+
+     if (addr >= PHY_MAX_ADDR)
+         return -ENXIO;

      if (bus->write)
          err = bus->write(bus, addr, regnum, val);
···
      lockdep_assert_held_once(&bus->mdio_lock);

+     if (addr >= PHY_MAX_ADDR)
+         return -ENXIO;
+
      if (bus->read_c45)
          retval = bus->read_c45(bus, addr, devad, regnum);
      else
···
      int err;

      lockdep_assert_held_once(&bus->mdio_lock);
+
+     if (addr >= PHY_MAX_ADDR)
+         return -ENXIO;

      if (bus->write_c45)
          err = bus->write_c45(bus, addr, devad, regnum, val);
```
drivers/net/phy/phy_caps.c (+12 -6)

```diff
   * When @exact is not set, we return either an exact match, or matching capabilities
   * at lower speed, or the lowest matching speed, or NULL.
   *
+  * Non-exact matches will try to return an exact speed and duplex match, but may
+  * return matching capabilities with same speed but a different duplex.
+  *
   * Returns: a matched link_capabilities according to the above process, NULL
   * otherwise.
   */
···
  phy_caps_lookup(int speed, unsigned int duplex, const unsigned long *supported,
                  bool exact)
  {
-     const struct link_capabilities *lcap, *last = NULL;
+     const struct link_capabilities *lcap, *match = NULL, *last = NULL;

      for_each_link_caps_desc_speed(lcap) {
          if (linkmode_intersects(lcap->linkmodes, supported)) {
···
              if (lcap->speed == speed && lcap->duplex == duplex) {
                  return lcap;
              } else if (!exact) {
-                 if (lcap->speed <= speed)
-                     return lcap;
+                 if (!match && lcap->speed <= speed)
+                     match = lcap;
+
+                 if (lcap->speed < speed)
+                     break;
              }
          }
      }

-     if (!exact)
-         return last;
+     if (!match && !exact)
+         match = last;

-     return NULL;
+     return match;
  }
  EXPORT_SYMBOL_GPL(phy_caps_lookup);
```
drivers/net/usb/r8152.c (+1)

```diff
      { USB_DEVICE(VENDOR_ID_LINKSYS, 0x0041) },
      { USB_DEVICE(VENDOR_ID_NVIDIA,  0x09ff) },
      { USB_DEVICE(VENDOR_ID_TPLINK,  0x0601) },
+     { USB_DEVICE(VENDOR_ID_TPLINK,  0x0602) },
      { USB_DEVICE(VENDOR_ID_DLINK,   0xb301) },
      { USB_DEVICE(VENDOR_ID_DELL,    0xb097) },
      { USB_DEVICE(VENDOR_ID_ASUS,    0x1976) },
```
drivers/net/veth.c (+2 -2)

```diff
      /* NAPI functions as RCU section */
      peer_dev = rcu_dereference_check(priv->peer, rcu_read_lock_bh_held());
-     peer_txq = netdev_get_tx_queue(peer_dev, queue_idx);
+     peer_txq = peer_dev ? netdev_get_tx_queue(peer_dev, queue_idx) : NULL;

      for (i = 0; i < budget; i++) {
          void *ptr = __ptr_ring_consume(&rq->xdp_ring);
···
      rq->stats.vs.xdp_packets += done;
      u64_stats_update_end(&rq->stats.syncp);

-     if (unlikely(netif_tx_queue_stopped(peer_txq)))
+     if (peer_txq && unlikely(netif_tx_queue_stopped(peer_txq)))
          netif_tx_wake_queue(peer_txq);

      return done;
```
drivers/net/wireless/ath/ath10k/mac.c (+25 -8)

```diff
   * Copyright (c) 2011-2017 Qualcomm Atheros, Inc.
   * Copyright (c) 2018-2019, The Linux Foundation. All rights reserved.
   * Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
+  * Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries.
   */

  #include "mac.h"
···
          return -ETIMEDOUT;

      return ar->last_wmi_vdev_start_status;
+ }
+
+ static inline int ath10k_vdev_delete_sync(struct ath10k *ar)
+ {
+     unsigned long time_left;
+
+     lockdep_assert_held(&ar->conf_mutex);
+
+     if (!test_bit(WMI_SERVICE_SYNC_DELETE_CMDS, ar->wmi.svc_map))
+         return 0;
+
+     if (test_bit(ATH10K_FLAG_CRASH_FLUSH, &ar->dev_flags))
+         return -ESHUTDOWN;
+
+     time_left = wait_for_completion_timeout(&ar->vdev_delete_done,
+                                             ATH10K_VDEV_DELETE_TIMEOUT_HZ);
+     if (time_left == 0)
+         return -ETIMEDOUT;
+
+     return 0;
  }

  static int ath10k_monitor_vdev_start(struct ath10k *ar, int vdev_id)
···
      struct ath10k *ar = hw->priv;
      struct ath10k_vif *arvif = (void *)vif->drv_priv;
      struct ath10k_peer *peer;
-     unsigned long time_left;
      int ret;
      int i;
···
          ath10k_warn(ar, "failed to delete WMI vdev %i: %d\n",
                      arvif->vdev_id, ret);

-     if (test_bit(WMI_SERVICE_SYNC_DELETE_CMDS, ar->wmi.svc_map)) {
-         time_left = wait_for_completion_timeout(&ar->vdev_delete_done,
-                                                 ATH10K_VDEV_DELETE_TIMEOUT_HZ);
-         if (time_left == 0) {
-             ath10k_warn(ar, "Timeout in receiving vdev delete response\n");
-             goto out;
-         }
+     ret = ath10k_vdev_delete_sync(ar);
+     if (ret) {
+         ath10k_warn(ar, "Error in receiving vdev delete response: %d\n", ret);
+         goto out;
      }

      /* Some firmware revisions don't notify host about self-peer removal
```
drivers/net/wireless/ath/ath10k/snoc.c (+3 -1)

```diff
      dev_set_threaded(ar->napi_dev, true);
      ath10k_core_napi_enable(ar);
-     ath10k_snoc_irq_enable(ar);
+     /* IRQs are left enabled when we restart due to a firmware crash */
+     if (!test_bit(ATH10K_SNOC_FLAG_RECOVERY, &ar_snoc->flags))
+         ath10k_snoc_irq_enable(ar);
      ath10k_snoc_rx_post(ar);

      clear_bit(ATH10K_SNOC_FLAG_RECOVERY, &ar_snoc->flags);
```
drivers/net/wireless/ath/ath11k/core.c (+15 -14)

```diff
      INIT_LIST_HEAD(&ar->fw_stats.bcn);

      init_completion(&ar->fw_stats_complete);
+     init_completion(&ar->fw_stats_done);
  }

  void ath11k_fw_stats_free(struct ath11k_fw_stats *stats)
···
  {
      int ret;

+     switch (ath11k_crypto_mode) {
+     case ATH11K_CRYPT_MODE_SW:
+         set_bit(ATH11K_FLAG_HW_CRYPTO_DISABLED, &ab->dev_flags);
+         set_bit(ATH11K_FLAG_RAW_MODE, &ab->dev_flags);
+         break;
+     case ATH11K_CRYPT_MODE_HW:
+         clear_bit(ATH11K_FLAG_HW_CRYPTO_DISABLED, &ab->dev_flags);
+         clear_bit(ATH11K_FLAG_RAW_MODE, &ab->dev_flags);
+         break;
+     default:
+         ath11k_info(ab, "invalid crypto_mode: %d\n", ath11k_crypto_mode);
+         return -EINVAL;
+     }
+
      ret = ath11k_core_start_firmware(ab, ab->fw_mode);
      if (ret) {
          ath11k_err(ab, "failed to start firmware: %d\n", ret);
···
      if (ret) {
          ath11k_err(ab, "failed to init DP: %d\n", ret);
          goto err_firmware_stop;
-     }
-
-     switch (ath11k_crypto_mode) {
-     case ATH11K_CRYPT_MODE_SW:
-         set_bit(ATH11K_FLAG_HW_CRYPTO_DISABLED, &ab->dev_flags);
-         set_bit(ATH11K_FLAG_RAW_MODE, &ab->dev_flags);
-         break;
-     case ATH11K_CRYPT_MODE_HW:
-         clear_bit(ATH11K_FLAG_HW_CRYPTO_DISABLED, &ab->dev_flags);
-         clear_bit(ATH11K_FLAG_RAW_MODE, &ab->dev_flags);
-         break;
-     default:
-         ath11k_info(ab, "invalid crypto_mode: %d\n", ath11k_crypto_mode);
-         return -EINVAL;
      }

      if (ath11k_frame_mode == ATH11K_HW_TXRX_RAW)
```
drivers/net/wireless/ath/ath11k/core.h (+3 -1)

@@ -600 +600 @@
 	struct list_head pdevs;
 	struct list_head vdevs;
 	struct list_head bcn;
+	u32 num_vdev_recvd;
+	u32 num_bcn_recvd;
 };
 
 struct ath11k_dbg_htt_stats {
@@ -786 +784 @@
 	u8 alpha2[REG_ALPHA2_LEN + 1];
 	struct ath11k_fw_stats fw_stats;
 	struct completion fw_stats_complete;
-	bool fw_stats_done;
+	struct completion fw_stats_done;
 
 	/* protected by conf_mutex */
 	bool ps_state_enable;
drivers/net/wireless/ath/ath11k/debugfs.c (+13 -135)

@@ -1 +1 @@
 // SPDX-License-Identifier: BSD-3-Clause-Clear
 /*
  * Copyright (c) 2018-2020 The Linux Foundation. All rights reserved.
- * Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
+ * Copyright (c) 2021-2025 Qualcomm Innovation Center, Inc. All rights reserved.
  */
 
 #include <linux/vmalloc.h>
@@ -93 +93 @@
 	spin_unlock_bh(&dbr_data->lock);
 }
 
-static void ath11k_debugfs_fw_stats_reset(struct ath11k *ar)
-{
-	spin_lock_bh(&ar->data_lock);
-	ar->fw_stats_done = false;
-	ath11k_fw_stats_pdevs_free(&ar->fw_stats.pdevs);
-	ath11k_fw_stats_vdevs_free(&ar->fw_stats.vdevs);
-	spin_unlock_bh(&ar->data_lock);
-}
-
 void ath11k_debugfs_fw_stats_process(struct ath11k *ar, struct ath11k_fw_stats *stats)
 {
 	struct ath11k_base *ab = ar->ab;
-	struct ath11k_pdev *pdev;
-	bool is_end;
-	static unsigned int num_vdev, num_bcn;
-	size_t total_vdevs_started = 0;
-	int i;
+	bool is_end = true;
 
-	/* WMI_REQUEST_PDEV_STAT request has been already processed */
-
-	if (stats->stats_id == WMI_REQUEST_RSSI_PER_CHAIN_STAT) {
-		ar->fw_stats_done = true;
-		return;
-	}
-
-	if (stats->stats_id == WMI_REQUEST_VDEV_STAT) {
-		if (list_empty(&stats->vdevs)) {
-			ath11k_warn(ab, "empty vdev stats");
-			return;
-		}
-		/* FW sends all the active VDEV stats irrespective of PDEV,
-		 * hence limit until the count of all VDEVs started
-		 */
-		for (i = 0; i < ab->num_radios; i++) {
-			pdev = rcu_dereference(ab->pdevs_active[i]);
-			if (pdev && pdev->ar)
-				total_vdevs_started += ar->num_started_vdevs;
-		}
-
-		is_end = ((++num_vdev) == total_vdevs_started);
-
-		list_splice_tail_init(&stats->vdevs,
-				      &ar->fw_stats.vdevs);
-
-		if (is_end) {
-			ar->fw_stats_done = true;
-			num_vdev = 0;
-		}
-		return;
-	}
-
+	/* WMI_REQUEST_PDEV_STAT, WMI_REQUEST_RSSI_PER_CHAIN_STAT and
+	 * WMI_REQUEST_VDEV_STAT requests have been already processed.
+	 */
 	if (stats->stats_id == WMI_REQUEST_BCN_STAT) {
 		if (list_empty(&stats->bcn)) {
 			ath11k_warn(ab, "empty bcn stats");
@@
 		/* Mark end until we reached the count of all started VDEVs
 		 * within the PDEV
 		 */
-		is_end = ((++num_bcn) == ar->num_started_vdevs);
+		if (ar->num_started_vdevs)
+			is_end = ((++ar->fw_stats.num_bcn_recvd) ==
+				  ar->num_started_vdevs);
 
 		list_splice_tail_init(&stats->bcn,
 				      &ar->fw_stats.bcn);
 
-		if (is_end) {
-			ar->fw_stats_done = true;
-			num_bcn = 0;
-		}
+		if (is_end)
+			complete(&ar->fw_stats_done);
 	}
-}
-
-static int ath11k_debugfs_fw_stats_request(struct ath11k *ar,
-					   struct stats_request_params *req_param)
-{
-	struct ath11k_base *ab = ar->ab;
-	unsigned long timeout, time_left;
-	int ret;
-
-	lockdep_assert_held(&ar->conf_mutex);
-
-	/* FW stats can get split when exceeding the stats data buffer limit.
-	 * In that case, since there is no end marking for the back-to-back
-	 * received 'update stats' event, we keep a 3 seconds timeout in case,
-	 * fw_stats_done is not marked yet
-	 */
-	timeout = jiffies + secs_to_jiffies(3);
-
-	ath11k_debugfs_fw_stats_reset(ar);
-
-	reinit_completion(&ar->fw_stats_complete);
-
-	ret = ath11k_wmi_send_stats_request_cmd(ar, req_param);
-
-	if (ret) {
-		ath11k_warn(ab, "could not request fw stats (%d)\n",
-			    ret);
-		return ret;
-	}
-
-	time_left = wait_for_completion_timeout(&ar->fw_stats_complete, 1 * HZ);
-
-	if (!time_left)
-		return -ETIMEDOUT;
-
-	for (;;) {
-		if (time_after(jiffies, timeout))
-			break;
-
-		spin_lock_bh(&ar->data_lock);
-		if (ar->fw_stats_done) {
-			spin_unlock_bh(&ar->data_lock);
-			break;
-		}
-		spin_unlock_bh(&ar->data_lock);
-	}
-	return 0;
-}
-
-int ath11k_debugfs_get_fw_stats(struct ath11k *ar, u32 pdev_id,
-				u32 vdev_id, u32 stats_id)
-{
-	struct ath11k_base *ab = ar->ab;
-	struct stats_request_params req_param;
-	int ret;
-
-	mutex_lock(&ar->conf_mutex);
-
-	if (ar->state != ATH11K_STATE_ON) {
-		ret = -ENETDOWN;
-		goto err_unlock;
-	}
-
-	req_param.pdev_id = pdev_id;
-	req_param.vdev_id = vdev_id;
-	req_param.stats_id = stats_id;
-
-	ret = ath11k_debugfs_fw_stats_request(ar, &req_param);
-	if (ret)
-		ath11k_warn(ab, "failed to request fw stats: %d\n", ret);
-
-	ath11k_dbg(ab, ATH11K_DBG_WMI,
-		   "debug get fw stat pdev id %d vdev id %d stats id 0x%x\n",
-		   pdev_id, vdev_id, stats_id);
-
-err_unlock:
-	mutex_unlock(&ar->conf_mutex);
-
-	return ret;
 }
 
 static int ath11k_open_pdev_stats(struct inode *inode, struct file *file)
@@
 	req_param.vdev_id = 0;
 	req_param.stats_id = WMI_REQUEST_PDEV_STAT;
 
-	ret = ath11k_debugfs_fw_stats_request(ar, &req_param);
+	ret = ath11k_mac_fw_stats_request(ar, &req_param);
 	if (ret) {
 		ath11k_warn(ab, "failed to request fw pdev stats: %d\n", ret);
 		goto err_free;
@@
 	req_param.vdev_id = 0;
 	req_param.stats_id = WMI_REQUEST_VDEV_STAT;
 
-	ret = ath11k_debugfs_fw_stats_request(ar, &req_param);
+	ret = ath11k_mac_fw_stats_request(ar, &req_param);
 	if (ret) {
 		ath11k_warn(ar->ab, "failed to request fw vdev stats: %d\n", ret);
 		goto err_free;
@@
 			continue;
 
 		req_param.vdev_id = arvif->vdev_id;
-		ret = ath11k_debugfs_fw_stats_request(ar, &req_param);
+		ret = ath11k_mac_fw_stats_request(ar, &req_param);
 		if (ret) {
 			ath11k_warn(ar->ab, "failed to request fw bcn stats: %d\n", ret);
 			goto err_free;
drivers/net/wireless/ath/ath11k/debugfs.h (+1 -9)

@@ -1 +1 @@
 /* SPDX-License-Identifier: BSD-3-Clause-Clear */
 /*
  * Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
- * Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
+ * Copyright (c) 2021-2022, 2025 Qualcomm Innovation Center, Inc. All rights reserved.
  */
 
 #ifndef _ATH11K_DEBUGFS_H_
@@ -273 +273 @@
 void ath11k_debugfs_fw_stats_process(struct ath11k *ar, struct ath11k_fw_stats *stats);
 
 void ath11k_debugfs_fw_stats_init(struct ath11k *ar);
-int ath11k_debugfs_get_fw_stats(struct ath11k *ar, u32 pdev_id,
-				u32 vdev_id, u32 stats_id);
 
 static inline bool ath11k_debugfs_is_pktlog_lite_mode_enabled(struct ath11k *ar)
 {
@@ -375 +377 @@
 }
 
 static inline int ath11k_debugfs_rx_filter(struct ath11k *ar)
-{
-	return 0;
-}
-
-static inline int ath11k_debugfs_get_fw_stats(struct ath11k *ar,
-					      u32 pdev_id, u32 vdev_id, u32 stats_id)
 {
 	return 0;
 }
drivers/net/wireless/ath/ath11k/mac.c (+83 -44)

@@ -8997 +8997 @@
 	}
 }
 
+static void ath11k_mac_fw_stats_reset(struct ath11k *ar)
+{
+	spin_lock_bh(&ar->data_lock);
+	ath11k_fw_stats_pdevs_free(&ar->fw_stats.pdevs);
+	ath11k_fw_stats_vdevs_free(&ar->fw_stats.vdevs);
+	ar->fw_stats.num_vdev_recvd = 0;
+	ar->fw_stats.num_bcn_recvd = 0;
+	spin_unlock_bh(&ar->data_lock);
+}
+
+int ath11k_mac_fw_stats_request(struct ath11k *ar,
+				struct stats_request_params *req_param)
+{
+	struct ath11k_base *ab = ar->ab;
+	unsigned long time_left;
+	int ret;
+
+	lockdep_assert_held(&ar->conf_mutex);
+
+	ath11k_mac_fw_stats_reset(ar);
+
+	reinit_completion(&ar->fw_stats_complete);
+	reinit_completion(&ar->fw_stats_done);
+
+	ret = ath11k_wmi_send_stats_request_cmd(ar, req_param);
+
+	if (ret) {
+		ath11k_warn(ab, "could not request fw stats (%d)\n",
+			    ret);
+		return ret;
+	}
+
+	time_left = wait_for_completion_timeout(&ar->fw_stats_complete, 1 * HZ);
+	if (!time_left)
+		return -ETIMEDOUT;
+
+	/* FW stats can get split when exceeding the stats data buffer limit.
+	 * In that case, since there is no end marking for the back-to-back
+	 * received 'update stats' event, we keep a 3 seconds timeout in case,
+	 * fw_stats_done is not marked yet
+	 */
+	time_left = wait_for_completion_timeout(&ar->fw_stats_done, 3 * HZ);
+	if (!time_left)
+		return -ETIMEDOUT;
+
+	return 0;
+}
+
+static int ath11k_mac_get_fw_stats(struct ath11k *ar, u32 pdev_id,
+				   u32 vdev_id, u32 stats_id)
+{
+	struct ath11k_base *ab = ar->ab;
+	struct stats_request_params req_param;
+	int ret;
+
+	lockdep_assert_held(&ar->conf_mutex);
+
+	if (ar->state != ATH11K_STATE_ON)
+		return -ENETDOWN;
+
+	req_param.pdev_id = pdev_id;
+	req_param.vdev_id = vdev_id;
+	req_param.stats_id = stats_id;
+
+	ret = ath11k_mac_fw_stats_request(ar, &req_param);
+	if (ret)
+		ath11k_warn(ab, "failed to request fw stats: %d\n", ret);
+
+	ath11k_dbg(ab, ATH11K_DBG_WMI,
+		   "debug get fw stat pdev id %d vdev id %d stats id 0x%x\n",
+		   pdev_id, vdev_id, stats_id);
+
+	return ret;
+}
+
 static void ath11k_mac_op_sta_statistics(struct ieee80211_hw *hw,
 					 struct ieee80211_vif *vif,
 					 struct ieee80211_sta *sta,
@@ -9106 +9031 @@
 
 	ath11k_mac_put_chain_rssi(sinfo, arsta, "ppdu", false);
 
+	mutex_lock(&ar->conf_mutex);
 	if (!(sinfo->filled & BIT_ULL(NL80211_STA_INFO_CHAIN_SIGNAL)) &&
 	    arsta->arvif->vdev_type == WMI_VDEV_TYPE_STA &&
 	    ar->ab->hw_params.supports_rssi_stats &&
-	    !ath11k_debugfs_get_fw_stats(ar, ar->pdev->pdev_id, 0,
-					 WMI_REQUEST_RSSI_PER_CHAIN_STAT)) {
+	    !ath11k_mac_get_fw_stats(ar, ar->pdev->pdev_id, 0,
+				     WMI_REQUEST_RSSI_PER_CHAIN_STAT)) {
 		ath11k_mac_put_chain_rssi(sinfo, arsta, "fw stats", true);
 	}
 
@@
 	if (!signal &&
 	    arsta->arvif->vdev_type == WMI_VDEV_TYPE_STA &&
 	    ar->ab->hw_params.supports_rssi_stats &&
-	    !(ath11k_debugfs_get_fw_stats(ar, ar->pdev->pdev_id, 0,
-					  WMI_REQUEST_VDEV_STAT)))
+	    !(ath11k_mac_get_fw_stats(ar, ar->pdev->pdev_id, 0,
+				      WMI_REQUEST_VDEV_STAT)))
 		signal = arsta->rssi_beacon;
+	mutex_unlock(&ar->conf_mutex);
 
 	ath11k_dbg(ar->ab, ATH11K_DBG_MAC,
 		   "sta statistics db2dbm %u rssi comb %d rssi beacon %d\n",
@@ -9457 +9380 @@
 	return ret;
 }
 
-static int ath11k_fw_stats_request(struct ath11k *ar,
-				   struct stats_request_params *req_param)
-{
-	struct ath11k_base *ab = ar->ab;
-	unsigned long time_left;
-	int ret;
-
-	lockdep_assert_held(&ar->conf_mutex);
-
-	spin_lock_bh(&ar->data_lock);
-	ar->fw_stats_done = false;
-	ath11k_fw_stats_pdevs_free(&ar->fw_stats.pdevs);
-	spin_unlock_bh(&ar->data_lock);
-
-	reinit_completion(&ar->fw_stats_complete);
-
-	ret = ath11k_wmi_send_stats_request_cmd(ar, req_param);
-	if (ret) {
-		ath11k_warn(ab, "could not request fw stats (%d)\n",
-			    ret);
-		return ret;
-	}
-
-	time_left = wait_for_completion_timeout(&ar->fw_stats_complete,
-						1 * HZ);
-
-	if (!time_left)
-		return -ETIMEDOUT;
-
-	return 0;
-}
-
 static int ath11k_mac_op_get_txpower(struct ieee80211_hw *hw,
 				     struct ieee80211_vif *vif,
 				     unsigned int link_id,
@@ -9464 +9419 @@
 {
 	struct ath11k *ar = hw->priv;
 	struct ath11k_base *ab = ar->ab;
-	struct stats_request_params req_param = {0};
 	struct ath11k_fw_stats_pdev *pdev;
 	int ret;
@@ -9475 +9431 @@
 	 */
 	mutex_lock(&ar->conf_mutex);
 
-	if (ar->state != ATH11K_STATE_ON)
-		goto err_fallback;
-
 	/* Firmware doesn't provide Tx power during CAC hence no need to fetch
 	 * the stats.
 	 */
@@ -9483 +9442 @@
 		return -EAGAIN;
 	}
 
-	req_param.pdev_id = ar->pdev->pdev_id;
-	req_param.stats_id = WMI_REQUEST_PDEV_STAT;
-
-	ret = ath11k_fw_stats_request(ar, &req_param);
+	ret = ath11k_mac_get_fw_stats(ar, ar->pdev->pdev_id, 0,
+				      WMI_REQUEST_PDEV_STAT);
 	if (ret) {
 		ath11k_warn(ab, "failed to request fw pdev stats: %d\n", ret);
 		goto err_fallback;
drivers/net/wireless/ath/ath11k/mac.h (+3 -1)

@@ -1 +1 @@
 /* SPDX-License-Identifier: BSD-3-Clause-Clear */
 /*
  * Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
- * Copyright (c) 2021-2023 Qualcomm Innovation Center, Inc. All rights reserved.
+ * Copyright (c) 2021-2023, 2025 Qualcomm Innovation Center, Inc. All rights reserved.
 */
 
 #ifndef ATH11K_MAC_H
@@ -179 +179 @@
 void ath11k_mac_fill_reg_tpc_info(struct ath11k *ar,
 				  struct ieee80211_vif *vif,
 				  struct ieee80211_chanctx_conf *ctx);
+int ath11k_mac_fw_stats_request(struct ath11k *ar,
+				struct stats_request_params *req_param);
 #endif
drivers/net/wireless/ath/ath11k/wmi.c (+43 -6)

@@ -8158 +8158 @@
 static void ath11k_update_stats_event(struct ath11k_base *ab, struct sk_buff *skb)
 {
 	struct ath11k_fw_stats stats = {};
+	size_t total_vdevs_started = 0;
+	struct ath11k_pdev *pdev;
+	bool is_end = true;
+	int i;
+
 	struct ath11k *ar;
 	int ret;
@@ -8189 +8184 @@
 
 	spin_lock_bh(&ar->data_lock);
 
-	/* WMI_REQUEST_PDEV_STAT can be requested via .get_txpower mac ops or via
+	/* WMI_REQUEST_PDEV_STAT, WMI_REQUEST_VDEV_STAT and
+	 * WMI_REQUEST_RSSI_PER_CHAIN_STAT can be requested via mac ops or via
 	 * debugfs fw stats. Therefore, processing it separately.
 	 */
 	if (stats.stats_id == WMI_REQUEST_PDEV_STAT) {
 		list_splice_tail_init(&stats.pdevs, &ar->fw_stats.pdevs);
-		ar->fw_stats_done = true;
+		complete(&ar->fw_stats_done);
 		goto complete;
 	}
 
-	/* WMI_REQUEST_VDEV_STAT, WMI_REQUEST_BCN_STAT and WMI_REQUEST_RSSI_PER_CHAIN_STAT
-	 * are currently requested only via debugfs fw stats. Hence, processing these
-	 * in debugfs context
+	if (stats.stats_id == WMI_REQUEST_RSSI_PER_CHAIN_STAT) {
+		complete(&ar->fw_stats_done);
+		goto complete;
+	}
+
+	if (stats.stats_id == WMI_REQUEST_VDEV_STAT) {
+		if (list_empty(&stats.vdevs)) {
+			ath11k_warn(ab, "empty vdev stats");
+			goto complete;
+		}
+		/* FW sends all the active VDEV stats irrespective of PDEV,
+		 * hence limit until the count of all VDEVs started
+		 */
+		for (i = 0; i < ab->num_radios; i++) {
+			pdev = rcu_dereference(ab->pdevs_active[i]);
+			if (pdev && pdev->ar)
+				total_vdevs_started += ar->num_started_vdevs;
+		}
+
+		if (total_vdevs_started)
+			is_end = ((++ar->fw_stats.num_vdev_recvd) ==
+				  total_vdevs_started);
+
+		list_splice_tail_init(&stats.vdevs,
+				      &ar->fw_stats.vdevs);
+
+		if (is_end)
+			complete(&ar->fw_stats_done);
+
+		goto complete;
+	}
+
+	/* WMI_REQUEST_BCN_STAT is currently requested only via debugfs fw stats.
+	 * Hence, processing it in debugfs context
 	 */
 	ath11k_debugfs_fw_stats_process(ar, &stats);
 
 complete:
 	complete(&ar->fw_stats_complete);
-	rcu_read_unlock();
 	spin_unlock_bh(&ar->data_lock);
+	rcu_read_unlock();
 
 	/* Since the stats's pdev, vdev and beacon list are spliced and reinitialised
 	 * at this point, no need to free the individual list.
drivers/net/wireless/ath/ath12k/core.c (+7 -3)

@@ -2129 +2129 @@
 	if (!ag) {
 		mutex_unlock(&ath12k_hw_group_mutex);
 		ath12k_warn(ab, "unable to get hw group\n");
-		return -ENODEV;
+		ret = -ENODEV;
+		goto err_unregister_notifier;
 	}
 
 	mutex_unlock(&ath12k_hw_group_mutex);
@@ -2145 +2144 @@
 		if (ret) {
 			mutex_unlock(&ag->mutex);
 			ath12k_warn(ab, "unable to create hw group\n");
-			goto err;
+			goto err_destroy_hw_group;
 		}
 	}
@@ -2153 +2152 @@
 
 	return 0;
 
-err:
+err_destroy_hw_group:
 	ath12k_core_hw_group_destroy(ab->ag);
 	ath12k_core_hw_group_unassign(ab);
+err_unregister_notifier:
+	ath12k_core_panic_notifier_unregister(ab);
+
 	return ret;
 }
drivers/net/wireless/ath/ath12k/hal.h (+2 -1)

@@ -585 +585 @@
  *	or cache was blocked
  * @HAL_REO_CMD_FAILED: Command execution failed, could be due to
  *	invalid queue desc
- * @HAL_REO_CMD_RESOURCE_BLOCKED:
+ * @HAL_REO_CMD_RESOURCE_BLOCKED: Command could not be executed because
+ *	one or more descriptors were blocked
  * @HAL_REO_CMD_DRAIN:
  */
 enum hal_reo_cmd_status {
drivers/net/wireless/ath/ath12k/hw.c (+6)

@@ -951 +951 @@
 	.hal_umac_ce0_dest_reg_base = 0x01b81000,
 	.hal_umac_ce1_src_reg_base = 0x01b82000,
 	.hal_umac_ce1_dest_reg_base = 0x01b83000,
+
+	.gcc_gcc_pcie_hot_rst = 0x1e38338,
 };
 
 static const struct ath12k_hw_regs qcn9274_v2_regs = {
@@ -1044 +1042 @@
 	.hal_umac_ce0_dest_reg_base = 0x01b81000,
 	.hal_umac_ce1_src_reg_base = 0x01b82000,
 	.hal_umac_ce1_dest_reg_base = 0x01b83000,
+
+	.gcc_gcc_pcie_hot_rst = 0x1e38338,
 };
 
 static const struct ath12k_hw_regs ipq5332_regs = {
@@ -1219 +1215 @@
 	.hal_umac_ce0_dest_reg_base = 0x01b81000,
 	.hal_umac_ce1_src_reg_base = 0x01b82000,
 	.hal_umac_ce1_dest_reg_base = 0x01b83000,
+
+	.gcc_gcc_pcie_hot_rst = 0x1e40304,
 };
 
 static const struct ath12k_hw_hal_params ath12k_hw_hal_params_qcn9274 = {
drivers/net/wireless/ath/ath12k/hw.h (+2)

@@ -375 +375 @@
 	u32 hal_reo_cmd_ring_base;
 
 	u32 hal_reo_status_ring_base;
+
+	u32 gcc_gcc_pcie_hot_rst;
 };
 
 static inline const char *ath12k_bd_ie_type_str(enum ath12k_bd_ie_type type)
drivers/net/wireless/ath/ath12k/pci.c (+3 -3)

@@ -292 +292 @@
 
 	ath12k_dbg(ab, ATH12K_DBG_PCI, "pci ltssm 0x%x\n", val);
 
-	val = ath12k_pci_read32(ab, GCC_GCC_PCIE_HOT_RST);
+	val = ath12k_pci_read32(ab, GCC_GCC_PCIE_HOT_RST(ab));
 	val |= GCC_GCC_PCIE_HOT_RST_VAL;
-	ath12k_pci_write32(ab, GCC_GCC_PCIE_HOT_RST, val);
-	val = ath12k_pci_read32(ab, GCC_GCC_PCIE_HOT_RST);
+	ath12k_pci_write32(ab, GCC_GCC_PCIE_HOT_RST(ab), val);
+	val = ath12k_pci_read32(ab, GCC_GCC_PCIE_HOT_RST(ab));
 
 	ath12k_dbg(ab, ATH12K_DBG_PCI, "pci pcie_hot_rst 0x%x\n", val);
drivers/net/wireless/ath/ath12k/pci.h (+3 -1)

@@ -28 +28 @@
 #define PCIE_PCIE_PARF_LTSSM 0x1e081b0
 #define PARM_LTSSM_VALUE 0x111
 
-#define GCC_GCC_PCIE_HOT_RST 0x1e38338
+#define GCC_GCC_PCIE_HOT_RST(ab) \
+	((ab)->hw_params->regs->gcc_gcc_pcie_hot_rst)
+
 #define GCC_GCC_PCIE_HOT_RST_VAL 0x10
 
 #define PCIE_PCIE_INT_ALL_CLEAR 0x1e08228
drivers/net/wireless/ath/wil6210/interrupt.c (+16 -10)

@@ -179 +179 @@
 	wil_dbg_irq(wil, "mask_irq\n");
 
 	wil6210_mask_irq_tx(wil);
-	wil6210_mask_irq_tx_edma(wil);
+	if (wil->use_enhanced_dma_hw)
+		wil6210_mask_irq_tx_edma(wil);
 	wil6210_mask_irq_rx(wil);
-	wil6210_mask_irq_rx_edma(wil);
+	if (wil->use_enhanced_dma_hw)
+		wil6210_mask_irq_rx_edma(wil);
 	wil6210_mask_irq_misc(wil, true);
 	wil6210_mask_irq_pseudo(wil);
 }
@@ -192 +190 @@
 {
 	wil_dbg_irq(wil, "unmask_irq\n");
 
-	wil_w(wil, RGF_DMA_EP_RX_ICR + offsetof(struct RGF_ICR, ICC),
-	      WIL_ICR_ICC_VALUE);
-	wil_w(wil, RGF_DMA_EP_TX_ICR + offsetof(struct RGF_ICR, ICC),
-	      WIL_ICR_ICC_VALUE);
+	if (wil->use_enhanced_dma_hw) {
+		wil_w(wil, RGF_DMA_EP_RX_ICR + offsetof(struct RGF_ICR, ICC),
+		      WIL_ICR_ICC_VALUE);
+		wil_w(wil, RGF_DMA_EP_TX_ICR + offsetof(struct RGF_ICR, ICC),
+		      WIL_ICR_ICC_VALUE);
+	}
 	wil_w(wil, RGF_DMA_EP_MISC_ICR + offsetof(struct RGF_ICR, ICC),
 	      WIL_ICR_ICC_MISC_VALUE);
 	wil_w(wil, RGF_INT_GEN_TX_ICR + offsetof(struct RGF_ICR, ICC),
@@ -849 +845 @@
 		    offsetof(struct RGF_ICR, ICR));
 	wil_clear32(wil->csr + HOSTADDR(RGF_DMA_EP_TX_ICR) +
 		    offsetof(struct RGF_ICR, ICR));
-	wil_clear32(wil->csr + HOSTADDR(RGF_INT_GEN_RX_ICR) +
-		    offsetof(struct RGF_ICR, ICR));
-	wil_clear32(wil->csr + HOSTADDR(RGF_INT_GEN_TX_ICR) +
-		    offsetof(struct RGF_ICR, ICR));
+	if (wil->use_enhanced_dma_hw) {
+		wil_clear32(wil->csr + HOSTADDR(RGF_INT_GEN_RX_ICR) +
+			    offsetof(struct RGF_ICR, ICR));
+		wil_clear32(wil->csr + HOSTADDR(RGF_INT_GEN_TX_ICR) +
+			    offsetof(struct RGF_ICR, ICR));
+	}
 	wil_clear32(wil->csr + HOSTADDR(RGF_DMA_EP_MISC_ICR) +
 		    offsetof(struct RGF_ICR, ICR));
 	wmb(); /* make sure write completed */
drivers/net/wireless/intel/iwlwifi/pcie/drv.c (+20 -4)

@@ -1501 +1501 @@
 	 * Scratch value was altered, this means the device was powered off, we
 	 * need to reset it completely.
 	 * Note: MAC (bits 0:7) will be cleared upon suspend even with wowlan,
-	 * so assume that any bits there mean that the device is usable.
+	 * but not bits [15:8]. So if we have bits set in lower word, assume
+	 * the device is alive.
+	 * For older devices, just try silently to grab the NIC.
 	 */
-	if (trans->mac_cfg->device_family >= IWL_DEVICE_FAMILY_BZ &&
-	    !iwl_read32(trans, CSR_FUNC_SCRATCH))
-		device_was_powered_off = true;
+	if (trans->mac_cfg->device_family >= IWL_DEVICE_FAMILY_BZ) {
+		if (!(iwl_read32(trans, CSR_FUNC_SCRATCH) &
+		      CSR_FUNC_SCRATCH_POWER_OFF_MASK))
+			device_was_powered_off = true;
+	} else {
+		/*
+		 * bh are re-enabled by iwl_trans_pcie_release_nic_access,
+		 * so re-enable them if _iwl_trans_pcie_grab_nic_access fails.
+		 */
+		local_bh_disable();
+		if (_iwl_trans_pcie_grab_nic_access(trans, true)) {
+			iwl_trans_pcie_release_nic_access(trans);
+		} else {
+			device_was_powered_off = true;
+			local_bh_enable();
+		}
+	}
 
 	if (restore || device_was_powered_off) {
 		trans->state = IWL_TRANS_NO_FW;
drivers/net/wireless/marvell/mwifiex/11n.c (+2 -4)

@@ -403 +403 @@
 
 	if (sband->ht_cap.cap & IEEE80211_HT_CAP_SUP_WIDTH_20_40 &&
 	    bss_desc->bcn_ht_oper->ht_param &
-	    IEEE80211_HT_PARAM_CHAN_WIDTH_ANY) {
-		chan_list->chan_scan_param[0].radio_type |=
-			CHAN_BW_40MHZ << 2;
+	    IEEE80211_HT_PARAM_CHAN_WIDTH_ANY)
 		SET_SECONDARYCHAN(chan_list->chan_scan_param[0].
 				  radio_type,
 				  (bss_desc->bcn_ht_oper->ht_param &
 				   IEEE80211_HT_PARAM_CHA_SEC_OFFSET));
-	}
+
 	*buffer += struct_size(chan_list, chan_scan_param, 1);
 	ret_len += struct_size(chan_list, chan_scan_param, 1);
 }
drivers/ptp/ptp_private.h (+1 -11)

@@ -98 +98 @@
 /* Check if ptp virtual clock is in use */
 static inline bool ptp_vclock_in_use(struct ptp_clock *ptp)
 {
-	bool in_use = false;
-
-	if (mutex_lock_interruptible(&ptp->n_vclocks_mux))
-		return true;
-
-	if (!ptp->is_virtual_clock && ptp->n_vclocks)
-		in_use = true;
-
-	mutex_unlock(&ptp->n_vclocks_mux);
-
-	return in_use;
+	return !ptp->is_virtual_clock;
 }
 
 /* Check if ptp clock shall be free running */
include/net/bluetooth/hci_core.h (+7 -4)

@@ -242 +242 @@
 	__u8	mesh;
 	__u8	instance;
 	__u8	handle;
+	__u8	sid;
 	__u32	flags;
 	__u16	timeout;
 	__u16	remaining_time;
@@ -547 +546 @@
 	struct hci_conn_hash	conn_hash;
 
 	struct list_head	mesh_pending;
+	struct mutex		mgmt_pending_lock;
 	struct list_head	mgmt_pending;
 	struct list_head	reject_list;
 	struct list_head	accept_list;
@@ -1552 +1550 @@
 			     u16 timeout);
 struct hci_conn *hci_bind_cis(struct hci_dev *hdev, bdaddr_t *dst,
 			      __u8 dst_type, struct bt_iso_qos *qos);
-struct hci_conn *hci_bind_bis(struct hci_dev *hdev, bdaddr_t *dst,
+struct hci_conn *hci_bind_bis(struct hci_dev *hdev, bdaddr_t *dst, __u8 sid,
 			      struct bt_iso_qos *qos,
 			      __u8 base_len, __u8 *base);
 struct hci_conn *hci_connect_cis(struct hci_dev *hdev, bdaddr_t *dst,
 				 __u8 dst_type, struct bt_iso_qos *qos);
 struct hci_conn *hci_connect_bis(struct hci_dev *hdev, bdaddr_t *dst,
-				 __u8 dst_type, struct bt_iso_qos *qos,
+				 __u8 dst_type, __u8 sid,
+				 struct bt_iso_qos *qos,
 				 __u8 data_len, __u8 *data);
 struct hci_conn *hci_pa_create_sync(struct hci_dev *hdev, bdaddr_t *dst,
 				    __u8 dst_type, __u8 sid, struct bt_iso_qos *qos);
@@ -1834 +1831 @@
 
 void hci_adv_instances_clear(struct hci_dev *hdev);
 struct adv_info *hci_find_adv_instance(struct hci_dev *hdev, u8 instance);
+struct adv_info *hci_find_adv_sid(struct hci_dev *hdev, u8 sid);
 struct adv_info *hci_get_next_instance(struct hci_dev *hdev, u8 instance);
 struct adv_info *hci_add_adv_instance(struct hci_dev *hdev, u8 instance,
 				      u32 flags, u16 adv_data_len, u8 *adv_data,
@@ -1842 +1838 @@
 				      u16 timeout, u16 duration, s8 tx_power,
 				      u32 min_interval, u32 max_interval,
 				      u8 mesh_handle);
-struct adv_info *hci_add_per_instance(struct hci_dev *hdev, u8 instance,
+struct adv_info *hci_add_per_instance(struct hci_dev *hdev, u8 instance, u8 sid,
 				      u32 flags, u8 data_len, u8 *data,
 				      u32 min_interval, u32 max_interval);
 int hci_set_adv_instance_data(struct hci_dev *hdev, u8 instance,
@@ -2404 +2400 @@
 				   u8 instance);
 void mgmt_advertising_removed(struct sock *sk, struct hci_dev *hdev,
 			      u8 instance);
-void mgmt_adv_monitor_removed(struct hci_dev *hdev, u16 handle);
 int mgmt_phy_configuration_changed(struct hci_dev *hdev, struct sock *skip);
 void mgmt_adv_monitor_device_lost(struct hci_dev *hdev, u16 handle,
 				  bdaddr_t *bdaddr, u8 addr_type);
include/net/bluetooth/hci_sync.h (+2 -2)

@@ -115 +115 @@
 int hci_enable_advertising_sync(struct hci_dev *hdev);
 int hci_enable_advertising(struct hci_dev *hdev);
 
-int hci_start_per_adv_sync(struct hci_dev *hdev, u8 instance, u8 data_len,
-			   u8 *data, u32 flags, u16 min_interval,
+int hci_start_per_adv_sync(struct hci_dev *hdev, u8 instance, u8 sid,
+			   u8 data_len, u8 *data, u32 flags, u16 min_interval,
 			   u16 max_interval, u16 sync_interval);
 
 int hci_disable_per_advertising_sync(struct hci_dev *hdev, u8 instance);
include/net/sch_generic.h (-8)

@@ -973 +973 @@
 	*backlog = qstats.backlog;
 }
 
-static inline void qdisc_tree_flush_backlog(struct Qdisc *sch)
-{
-	__u32 qlen, backlog;
-
-	qdisc_qstats_qlen_backlog(sch, &qlen, &backlog);
-	qdisc_tree_reduce_backlog(sch, qlen, backlog);
-}
-
 static inline void qdisc_purge_queue(struct Qdisc *sch)
 {
 	__u32 qlen, backlog;
include/net/sock.h (+5 -2)

@@ -3010 +3010 @@
 int sk_ioctl(struct sock *sk, unsigned int cmd, void __user *arg);
 static inline bool sk_is_readable(struct sock *sk)
 {
-	if (sk->sk_prot->sock_is_readable)
-		return sk->sk_prot->sock_is_readable(sk);
+	const struct proto *prot = READ_ONCE(sk->sk_prot);
+
+	if (prot->sock_is_readable)
+		return prot->sock_is_readable(sk);
+
 	return false;
 }
 #endif	/* _SOCK_H */
net/bluetooth/eir.c (+10 -7)

@@ -242 +242 @@
 	return ad_len;
 }
 
-u8 eir_create_adv_data(struct hci_dev *hdev, u8 instance, u8 *ptr)
+u8 eir_create_adv_data(struct hci_dev *hdev, u8 instance, u8 *ptr, u8 size)
 {
 	struct adv_info *adv = NULL;
 	u8 ad_len = 0, flags = 0;
@@ -286 +286 @@
 	/* If flags would still be empty, then there is no need to
 	 * include the "Flags" AD field".
 	 */
-	if (flags) {
+	if (flags && (ad_len + eir_precalc_len(1) <= size)) {
 		ptr[0] = 0x02;
 		ptr[1] = EIR_FLAGS;
 		ptr[2] = flags;
@@ -316 +316 @@
 	}
 
 	/* Provide Tx Power only if we can provide a valid value for it */
-	if (adv_tx_power != HCI_TX_POWER_INVALID) {
+	if (adv_tx_power != HCI_TX_POWER_INVALID &&
+	    (ad_len + eir_precalc_len(1) <= size)) {
 		ptr[0] = 0x02;
 		ptr[1] = EIR_TX_POWER;
 		ptr[2] = (u8)adv_tx_power;
@@ -367 +366 @@
 
 void *eir_get_service_data(u8 *eir, size_t eir_len, u16 uuid, size_t *len)
 {
-	while ((eir = eir_get_data(eir, eir_len, EIR_SERVICE_DATA, len))) {
+	size_t dlen;
+
+	while ((eir = eir_get_data(eir, eir_len, EIR_SERVICE_DATA, &dlen))) {
 		u16 value = get_unaligned_le16(eir);
 
 		if (uuid == value) {
 			if (len)
-				*len -= 2;
+				*len = dlen - 2;
 			return &eir[2];
 		}
 
-		eir += *len;
-		eir_len -= *len;
+		eir += dlen;
+		eir_len -= dlen;
 	}
 
 	return NULL;
net/bluetooth/eir.h (+1 -1)

@@ -9 +9 @@
 
 void eir_create(struct hci_dev *hdev, u8 *data);
 
-u8 eir_create_adv_data(struct hci_dev *hdev, u8 instance, u8 *ptr);
+u8 eir_create_adv_data(struct hci_dev *hdev, u8 instance, u8 *ptr, u8 size);
 u8 eir_create_scan_rsp(struct hci_dev *hdev, u8 instance, u8 *ptr);
 u8 eir_create_per_adv_data(struct hci_dev *hdev, u8 instance, u8 *ptr);
net/bluetooth/hci_conn.c (+24 -7)

@@ -1501 +1501 @@
 
 /* This function requires the caller holds hdev->lock */
 static struct hci_conn *hci_add_bis(struct hci_dev *hdev, bdaddr_t *dst,
-				    struct bt_iso_qos *qos, __u8 base_len,
-				    __u8 *base)
+				    __u8 sid, struct bt_iso_qos *qos,
+				    __u8 base_len, __u8 *base)
 {
 	struct hci_conn *conn;
 	int err;
@@ -1543 +1543 @@
 		return conn;
 
 	conn->state = BT_CONNECT;
+	conn->sid = sid;
 
 	hci_conn_hold(conn);
 	return conn;
@@ -2063 +2062 @@
 	if (qos->bcast.bis)
 		sync_interval = interval * 4;
 
-	err = hci_start_per_adv_sync(hdev, qos->bcast.bis, conn->le_per_adv_data_len,
+	err = hci_start_per_adv_sync(hdev, qos->bcast.bis, conn->sid,
+				     conn->le_per_adv_data_len,
 				     conn->le_per_adv_data, flags, interval,
 				     interval, sync_interval);
 	if (err)
@@ -2136 +2134 @@
 	}
 }
 
-struct hci_conn *hci_bind_bis(struct hci_dev *hdev, bdaddr_t *dst,
+struct hci_conn *hci_bind_bis(struct hci_dev *hdev, bdaddr_t *dst, __u8 sid,
 			      struct bt_iso_qos *qos,
 			      __u8 base_len, __u8 *base)
 {
@@ -2158 +2156 @@
 			  base, base_len);
 
 	/* We need hci_conn object using the BDADDR_ANY as dst */
-	conn = hci_add_bis(hdev, dst, qos, base_len, eir);
+	conn = hci_add_bis(hdev, dst, sid, qos, base_len, eir);
 	if (IS_ERR(conn))
 		return conn;
@@ -2209 +2207 @@
 }
 
 struct hci_conn *hci_connect_bis(struct hci_dev *hdev, bdaddr_t *dst,
-				 __u8 dst_type, struct bt_iso_qos *qos,
+				 __u8 dst_type, __u8 sid,
+				 struct bt_iso_qos *qos,
 				 __u8 base_len, __u8 *base)
 {
 	struct hci_conn *conn;
 	int err;
 	struct iso_list_data data;
 
-	conn = hci_bind_bis(hdev, dst, qos, base_len, base);
+	conn = hci_bind_bis(hdev, dst, sid, qos, base_len, base);
 	if (IS_ERR(conn))
 		return conn;
 
 	if (conn->state == BT_CONNECTED)
 		return conn;
+
+	/* Check if SID needs to be allocated then search for the first
+	 * available.
+	 */
+	if (conn->sid == HCI_SID_INVALID) {
+		u8 sid;
+
+		for (sid = 0; sid <= 0x0f; sid++) {
+			if (!hci_find_adv_sid(hdev, sid)) {
+				conn->sid = sid;
+				break;
+			}
+		}
+	}
 
 	data.big = qos->bcast.big;
 	data.bis = qos->bcast.bis;
net/bluetooth/hci_core.c (+20 -12)
```diff
 }
 
 /* This function requires the caller holds hdev->lock */
+struct adv_info *hci_find_adv_sid(struct hci_dev *hdev, u8 sid)
+{
+        struct adv_info *adv;
+
+        list_for_each_entry(adv, &hdev->adv_instances, list) {
+                if (adv->sid == sid)
+                        return adv;
+        }
+
+        return NULL;
+}
+
+/* This function requires the caller holds hdev->lock */
 struct adv_info *hci_get_next_instance(struct hci_dev *hdev, u8 instance)
 {
         struct adv_info *cur_instance;
···
 }
 
 /* This function requires the caller holds hdev->lock */
-struct adv_info *hci_add_per_instance(struct hci_dev *hdev, u8 instance,
+struct adv_info *hci_add_per_instance(struct hci_dev *hdev, u8 instance, u8 sid,
                                       u32 flags, u8 data_len, u8 *data,
                                       u32 min_interval, u32 max_interval)
 {
···
         if (IS_ERR(adv))
                 return adv;
 
+        adv->sid = sid;
         adv->periodic = true;
         adv->per_adv_data_len = data_len;
···
         if (monitor->handle)
                 idr_remove(&hdev->adv_monitors_idr, monitor->handle);
 
-        if (monitor->state != ADV_MONITOR_STATE_NOT_REGISTERED) {
+        if (monitor->state != ADV_MONITOR_STATE_NOT_REGISTERED)
                 hdev->adv_monitors_cnt--;
-                mgmt_adv_monitor_removed(hdev, monitor->handle);
-        }
 
         kfree(monitor);
 }
···
 
         mutex_init(&hdev->lock);
         mutex_init(&hdev->req_lock);
+        mutex_init(&hdev->mgmt_pending_lock);
 
         ida_init(&hdev->unset_handle_ida);
···
 
         bt_dev_err(hdev, "link tx timeout");
 
-        rcu_read_lock();
+        hci_dev_lock(hdev);
 
         /* Kill stalled connections */
-        list_for_each_entry_rcu(c, &h->list, list) {
+        list_for_each_entry(c, &h->list, list) {
                 if (c->type == type && c->sent) {
                         bt_dev_err(hdev, "killing stalled connection %pMR",
                                    &c->dst);
-                        /* hci_disconnect might sleep, so, we have to release
-                         * the RCU read lock before calling it.
-                         */
-                        rcu_read_unlock();
                         hci_disconnect(c, HCI_ERROR_REMOTE_USER_TERM);
-                        rcu_read_lock();
                 }
         }
 
-        rcu_read_unlock();
+        hci_dev_unlock(hdev);
 }
 
 static struct hci_chan *hci_chan_sent(struct hci_dev *hdev, __u8 type,
```
net/bluetooth/hci_sync.c (+35 -10)
```diff
                 hci_cpu_to_le24(adv->min_interval, cp.min_interval);
                 hci_cpu_to_le24(adv->max_interval, cp.max_interval);
                 cp.tx_power = adv->tx_power;
+                cp.sid = adv->sid;
         } else {
                 hci_cpu_to_le24(hdev->le_adv_min_interval, cp.min_interval);
                 hci_cpu_to_le24(hdev->le_adv_max_interval, cp.max_interval);
                 cp.tx_power = HCI_ADV_TX_POWER_NO_PREFERENCE;
+                cp.sid = 0x00;
         }
 
         secondary_adv = (flags & MGMT_ADV_FLAG_SEC_MASK);
···
 static int hci_adv_bcast_annoucement(struct hci_dev *hdev, struct adv_info *adv)
 {
         u8 bid[3];
-        u8 ad[4 + 3];
+        u8 ad[HCI_MAX_EXT_AD_LENGTH];
+        u8 len;
 
         /* Skip if NULL adv as instance 0x00 is used for general purpose
          * advertising so it cannot used for the likes of Broadcast Announcement
···
 
         /* Generate Broadcast ID */
         get_random_bytes(bid, sizeof(bid));
-        eir_append_service_data(ad, 0, 0x1852, bid, sizeof(bid));
-        hci_set_adv_instance_data(hdev, adv->instance, sizeof(ad), ad, 0, NULL);
+        len = eir_append_service_data(ad, 0, 0x1852, bid, sizeof(bid));
+        memcpy(ad + len, adv->adv_data, adv->adv_data_len);
+        hci_set_adv_instance_data(hdev, adv->instance, len + adv->adv_data_len,
+                                  ad, 0, NULL);
 
         return hci_update_adv_data_sync(hdev, adv->instance);
 }
 
-int hci_start_per_adv_sync(struct hci_dev *hdev, u8 instance, u8 data_len,
-                           u8 *data, u32 flags, u16 min_interval,
+int hci_start_per_adv_sync(struct hci_dev *hdev, u8 instance, u8 sid,
+                           u8 data_len, u8 *data, u32 flags, u16 min_interval,
                            u16 max_interval, u16 sync_interval)
 {
         struct adv_info *adv = NULL;
···
 
         if (instance) {
                 adv = hci_find_adv_instance(hdev, instance);
-                /* Create an instance if that could not be found */
-                if (!adv) {
-                        adv = hci_add_per_instance(hdev, instance, flags,
+                if (adv) {
+                        if (sid != HCI_SID_INVALID && adv->sid != sid) {
+                                /* If the SID don't match attempt to find by
+                                 * SID.
+                                 */
+                                adv = hci_find_adv_sid(hdev, sid);
+                                if (!adv) {
+                                        bt_dev_err(hdev,
+                                                   "Unable to find adv_info");
+                                        return -EINVAL;
+                                }
+                        }
+
+                        /* Turn it into periodic advertising */
+                        adv->periodic = true;
+                        adv->per_adv_data_len = data_len;
+                        if (data)
+                                memcpy(adv->per_adv_data, data, data_len);
+                        adv->flags = flags;
+                } else if (!adv) {
+                        /* Create an instance if that could not be found */
+                        adv = hci_add_per_instance(hdev, instance, sid, flags,
                                                    data_len, data,
                                                    sync_interval,
                                                    sync_interval);
···
                 return 0;
         }
 
-        len = eir_create_adv_data(hdev, instance, pdu->data);
+        len = eir_create_adv_data(hdev, instance, pdu->data,
+                                  HCI_MAX_EXT_AD_LENGTH);
 
         pdu->length = len;
         pdu->handle = adv ? adv->handle : instance;
···
 
         memset(&cp, 0, sizeof(cp));
 
-        len = eir_create_adv_data(hdev, instance, cp.data);
+        len = eir_create_adv_data(hdev, instance, cp.data, sizeof(cp.data));
 
         /* There's nothing to do if the data hasn't changed */
         if (hdev->adv_data_len == len &&
```
net/bluetooth/iso.c (+12 -5)
```diff
         struct hci_dev *hdev;
         int err;
 
-        BT_DBG("%pMR", &iso_pi(sk)->src);
+        BT_DBG("%pMR (SID 0x%2.2x)", &iso_pi(sk)->src, iso_pi(sk)->bc_sid);
 
         hdev = hci_get_route(&iso_pi(sk)->dst, &iso_pi(sk)->src,
                              iso_pi(sk)->src_type);
···
 
         /* Just bind if DEFER_SETUP has been set */
         if (test_bit(BT_SK_DEFER_SETUP, &bt_sk(sk)->flags)) {
-                hcon = hci_bind_bis(hdev, &iso_pi(sk)->dst,
+                hcon = hci_bind_bis(hdev, &iso_pi(sk)->dst, iso_pi(sk)->bc_sid,
                                     &iso_pi(sk)->qos, iso_pi(sk)->base_len,
                                     iso_pi(sk)->base);
                 if (IS_ERR(hcon)) {
···
         } else {
                 hcon = hci_connect_bis(hdev, &iso_pi(sk)->dst,
                                        le_addr_type(iso_pi(sk)->dst_type),
-                                       &iso_pi(sk)->qos, iso_pi(sk)->base_len,
-                                       iso_pi(sk)->base);
+                                       iso_pi(sk)->bc_sid, &iso_pi(sk)->qos,
+                                       iso_pi(sk)->base_len, iso_pi(sk)->base);
                 if (IS_ERR(hcon)) {
                         err = PTR_ERR(hcon);
                         goto unlock;
                 }
+
+                /* Update SID if it was not set */
+                if (iso_pi(sk)->bc_sid == HCI_SID_INVALID)
+                        iso_pi(sk)->bc_sid = hcon->sid;
         }
 
         conn = iso_conn_add(hcon);
···
         addr->sa_family = AF_BLUETOOTH;
 
         if (peer) {
+                struct hci_conn *hcon = iso_pi(sk)->conn ?
+                                        iso_pi(sk)->conn->hcon : NULL;
+
                 bacpy(&sa->iso_bdaddr, &iso_pi(sk)->dst);
                 sa->iso_bdaddr_type = iso_pi(sk)->dst_type;
 
-                if (test_bit(BT_SK_PA_SYNC, &iso_pi(sk)->flags)) {
+                if (hcon && hcon->type == BIS_LINK) {
                         sa->iso_bc->bc_sid = iso_pi(sk)->bc_sid;
                         sa->iso_bc->bc_num_bis = iso_pi(sk)->bc_num_bis;
                         memcpy(sa->iso_bc->bc_bis, iso_pi(sk)->bc_bis,
```
net/bluetooth/mgmt.c (+61 -79)
```diff
 
         send_settings_rsp(cmd->sk, cmd->opcode, match->hdev);
 
-        list_del(&cmd->list);
-
         if (match->sk == NULL) {
                 match->sk = cmd->sk;
                 sock_hold(match->sk);
         }
-
-        mgmt_pending_free(cmd);
 }
 
 static void cmd_status_rsp(struct mgmt_pending_cmd *cmd, void *data)
 {
         u8 *status = data;
 
-        mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode, *status);
-        mgmt_pending_remove(cmd);
+        mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode, *status);
 }
 
 static void cmd_complete_rsp(struct mgmt_pending_cmd *cmd, void *data)
···
 
         if (cmd->cmd_complete) {
                 cmd->cmd_complete(cmd, match->mgmt_status);
-                mgmt_pending_remove(cmd);
-
                 return;
         }
···
 
 static int generic_cmd_complete(struct mgmt_pending_cmd *cmd, u8 status)
 {
-        return mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode, status,
+        return mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode, status,
                                  cmd->param, cmd->param_len);
 }
 
 static int addr_cmd_complete(struct mgmt_pending_cmd *cmd, u8 status)
 {
-        return mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode, status,
+        return mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode, status,
                                  cmd->param, sizeof(struct mgmt_addr_info));
 }
···
 
         if (err) {
                 u8 mgmt_err = mgmt_status(err);
-                mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode, mgmt_err);
+                mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode, mgmt_err);
                 hci_dev_clear_flag(hdev, HCI_LIMITED_DISCOVERABLE);
                 goto done;
         }
···
 
         if (err) {
                 u8 mgmt_err = mgmt_status(err);
-                mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode, mgmt_err);
+                mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode, mgmt_err);
                 goto done;
         }
···
                 new_settings(hdev, NULL);
         }
 
-        mgmt_pending_foreach(MGMT_OP_SET_SSP, hdev, cmd_status_rsp,
-                             &mgmt_err);
+        mgmt_pending_foreach(MGMT_OP_SET_SSP, hdev, true,
+                             cmd_status_rsp, &mgmt_err);
         return;
 }
···
         changed = hci_dev_test_and_clear_flag(hdev, HCI_SSP_ENABLED);
         }
 
-        mgmt_pending_foreach(MGMT_OP_SET_SSP, hdev, settings_rsp, &match);
+        mgmt_pending_foreach(MGMT_OP_SET_SSP, hdev, true, settings_rsp, &match);
 
         if (changed)
                 new_settings(hdev, match.sk);
···
         bt_dev_dbg(hdev, "err %d", err);
 
         if (status) {
-                mgmt_pending_foreach(MGMT_OP_SET_LE, hdev, cmd_status_rsp,
-                                     &status);
+                mgmt_pending_foreach(MGMT_OP_SET_LE, hdev, true, cmd_status_rsp,
+                                     &status);
                 return;
         }
 
-        mgmt_pending_foreach(MGMT_OP_SET_LE, hdev, settings_rsp, &match);
+        mgmt_pending_foreach(MGMT_OP_SET_LE, hdev, true, settings_rsp, &match);
 
         new_settings(hdev, match.sk);
···
         struct sock *sk = cmd->sk;
 
         if (status) {
-                mgmt_pending_foreach(MGMT_OP_SET_MESH_RECEIVER, hdev,
+                mgmt_pending_foreach(MGMT_OP_SET_MESH_RECEIVER, hdev, true,
                                      cmd_status_rsp, &status);
                 return;
         }
···
 
         bt_dev_dbg(hdev, "err %d", err);
 
-        mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode,
+        mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode,
                           mgmt_status(err), hdev->dev_class, 3);
 
         mgmt_pending_free(cmd);
···
         bacpy(&rp.addr.bdaddr, &conn->dst);
         rp.addr.type = link_to_bdaddr(conn->type, conn->dst_type);
 
-        err = mgmt_cmd_complete(cmd->sk, cmd->index, MGMT_OP_PAIR_DEVICE,
+        err = mgmt_cmd_complete(cmd->sk, cmd->hdev->id, MGMT_OP_PAIR_DEVICE,
                                 status, &rp, sizeof(rp));
 
         /* So we don't get further callbacks for this connection */
···
         mgmt_event(MGMT_EV_ADV_MONITOR_ADDED, hdev, &ev, sizeof(ev), sk);
 }
 
-void mgmt_adv_monitor_removed(struct hci_dev *hdev, u16 handle)
+static void mgmt_adv_monitor_removed(struct sock *sk, struct hci_dev *hdev,
+                                     __le16 handle)
 {
         struct mgmt_ev_adv_monitor_removed ev;
-        struct mgmt_pending_cmd *cmd;
-        struct sock *sk_skip = NULL;
-        struct mgmt_cp_remove_adv_monitor *cp;
 
-        cmd = pending_find(MGMT_OP_REMOVE_ADV_MONITOR, hdev);
-        if (cmd) {
-                cp = cmd->param;
+        ev.monitor_handle = handle;
 
-                if (cp->monitor_handle)
-                        sk_skip = cmd->sk;
-        }
-
-        ev.monitor_handle = cpu_to_le16(handle);
-
-        mgmt_event(MGMT_EV_ADV_MONITOR_REMOVED, hdev, &ev, sizeof(ev), sk_skip);
+        mgmt_event(MGMT_EV_ADV_MONITOR_REMOVED, hdev, &ev, sizeof(ev), sk);
 }
 
 static int read_adv_mon_features(struct sock *sk, struct hci_dev *hdev,
···
                 hci_update_passive_scan(hdev);
         }
 
-        mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode,
+        mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode,
                           mgmt_status(status), &rp, sizeof(rp));
         mgmt_pending_remove(cmd);
···
 
         if (pending_find(MGMT_OP_SET_LE, hdev) ||
             pending_find(MGMT_OP_ADD_ADV_PATTERNS_MONITOR, hdev) ||
-            pending_find(MGMT_OP_ADD_ADV_PATTERNS_MONITOR_RSSI, hdev) ||
-            pending_find(MGMT_OP_REMOVE_ADV_MONITOR, hdev)) {
+            pending_find(MGMT_OP_ADD_ADV_PATTERNS_MONITOR_RSSI, hdev)) {
                 status = MGMT_STATUS_BUSY;
                 goto unlock;
         }
···
         struct mgmt_pending_cmd *cmd = data;
         struct mgmt_cp_remove_adv_monitor *cp;
 
-        if (status == -ECANCELED ||
-            cmd != pending_find(MGMT_OP_REMOVE_ADV_MONITOR, hdev))
+        if (status == -ECANCELED)
                 return;
 
         hci_dev_lock(hdev);
···
 
         rp.monitor_handle = cp->monitor_handle;
 
-        if (!status)
+        if (!status) {
+                mgmt_adv_monitor_removed(cmd->sk, hdev, cp->monitor_handle);
                 hci_update_passive_scan(hdev);
+        }
 
-        mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode,
+        mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode,
                           mgmt_status(status), &rp, sizeof(rp));
-        mgmt_pending_remove(cmd);
+        mgmt_pending_free(cmd);
 
         hci_dev_unlock(hdev);
         bt_dev_dbg(hdev, "remove monitor %d complete, status %d",
···
 static int mgmt_remove_adv_monitor_sync(struct hci_dev *hdev, void *data)
 {
         struct mgmt_pending_cmd *cmd = data;
-
-        if (cmd != pending_find(MGMT_OP_REMOVE_ADV_MONITOR, hdev))
-                return -ECANCELED;
-
         struct mgmt_cp_remove_adv_monitor *cp = cmd->param;
         u16 handle = __le16_to_cpu(cp->monitor_handle);
···
         hci_dev_lock(hdev);
 
         if (pending_find(MGMT_OP_SET_LE, hdev) ||
-            pending_find(MGMT_OP_REMOVE_ADV_MONITOR, hdev) ||
             pending_find(MGMT_OP_ADD_ADV_PATTERNS_MONITOR, hdev) ||
             pending_find(MGMT_OP_ADD_ADV_PATTERNS_MONITOR_RSSI, hdev)) {
                 status = MGMT_STATUS_BUSY;
                 goto unlock;
         }
 
-        cmd = mgmt_pending_add(sk, MGMT_OP_REMOVE_ADV_MONITOR, hdev, data, len);
+        cmd = mgmt_pending_new(sk, MGMT_OP_REMOVE_ADV_MONITOR, hdev, data, len);
         if (!cmd) {
                 status = MGMT_STATUS_NO_RESOURCES;
                 goto unlock;
         }
···
                           mgmt_remove_adv_monitor_complete);
 
         if (err) {
-                mgmt_pending_remove(cmd);
+                mgmt_pending_free(cmd);
 
                 if (err == -ENOMEM)
                         status = MGMT_STATUS_NO_RESOURCES;
···
             cmd != pending_find(MGMT_OP_START_SERVICE_DISCOVERY, hdev))
                 return;
 
-        mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode, mgmt_status(err),
+        mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode, mgmt_status(err),
                           cmd->param, 1);
         mgmt_pending_remove(cmd);
···
 
         bt_dev_dbg(hdev, "err %d", err);
 
-        mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode, mgmt_status(err),
+        mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode, mgmt_status(err),
                           cmd->param, 1);
         mgmt_pending_remove(cmd);
···
         u8 status = mgmt_status(err);
 
         if (status) {
-                mgmt_pending_foreach(MGMT_OP_SET_ADVERTISING, hdev,
+                mgmt_pending_foreach(MGMT_OP_SET_ADVERTISING, hdev, true,
                                      cmd_status_rsp, &status);
                 return;
         }
···
         else
                 hci_dev_clear_flag(hdev, HCI_ADVERTISING);
 
-        mgmt_pending_foreach(MGMT_OP_SET_ADVERTISING, hdev, settings_rsp,
+        mgmt_pending_foreach(MGMT_OP_SET_ADVERTISING, hdev, true, settings_rsp,
                              &match);
 
         new_settings(hdev, match.sk);
···
                  */
                 hci_dev_clear_flag(hdev, HCI_BREDR_ENABLED);
 
-                mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode, mgmt_err);
+                mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode, mgmt_err);
         } else {
                 send_settings_rsp(cmd->sk, MGMT_OP_SET_BREDR, hdev);
                 new_settings(hdev, cmd->sk);
···
         if (err) {
                 u8 mgmt_err = mgmt_status(err);
 
-                mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode, mgmt_err);
+                mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode, mgmt_err);
                 goto done;
         }
···
                 rp.max_tx_power = HCI_TX_POWER_INVALID;
         }
 
-        mgmt_cmd_complete(cmd->sk, cmd->index, MGMT_OP_GET_CONN_INFO, status,
+        mgmt_cmd_complete(cmd->sk, cmd->hdev->id, MGMT_OP_GET_CONN_INFO, status,
                           &rp, sizeof(rp));
 
         mgmt_pending_free(cmd);
···
         }
 
 complete:
-        mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode, status, &rp,
+        mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode, status, &rp,
                           sizeof(rp));
 
         mgmt_pending_free(cmd);
···
         rp.instance = cp->instance;
 
         if (err)
-                mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode,
+                mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode,
                                 mgmt_status(err));
         else
-                mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode,
+                mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode,
                                   mgmt_status(err), &rp, sizeof(rp));
 
         add_adv_complete(hdev, cmd->sk, cp->instance, err);
···
 
                 hci_remove_adv_instance(hdev, cp->instance);
 
-                mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode,
+                mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode,
                                 mgmt_status(err));
         } else {
-                mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode,
+                mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode,
                                   mgmt_status(err), &rp, sizeof(rp));
         }
···
         rp.instance = cp->instance;
 
         if (err)
-                mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode,
+                mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode,
                                 mgmt_status(err));
         else
-                mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode,
+                mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode,
                                   mgmt_status(err), &rp, sizeof(rp));
 
         mgmt_pending_free(cmd);
···
         rp.instance = cp->instance;
 
         if (err)
-                mgmt_cmd_status(cmd->sk, cmd->index, cmd->opcode,
+                mgmt_cmd_status(cmd->sk, cmd->hdev->id, cmd->opcode,
                                 mgmt_status(err));
         else
-                mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode,
+                mgmt_cmd_complete(cmd->sk, cmd->hdev->id, cmd->opcode,
                                   MGMT_STATUS_SUCCESS, &rp, sizeof(rp));
 
         mgmt_pending_free(cmd);
···
         if (test_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks))
                 return;
 
-        mgmt_pending_foreach(0, hdev, cmd_complete_rsp, &match);
+        mgmt_pending_foreach(0, hdev, true, cmd_complete_rsp, &match);
 
         if (hci_dev_test_flag(hdev, HCI_UNCONFIGURED)) {
                 mgmt_index_event(MGMT_EV_UNCONF_INDEX_REMOVED, hdev, NULL, 0,
···
                 hci_update_passive_scan(hdev);
         }
 
-        mgmt_pending_foreach(MGMT_OP_SET_POWERED, hdev, settings_rsp, &match);
+        mgmt_pending_foreach(MGMT_OP_SET_POWERED, hdev, true, settings_rsp,
+                             &match);
 
         new_settings(hdev, match.sk);
···
         struct cmd_lookup match = { NULL, hdev };
         u8 zero_cod[] = { 0, 0, 0 };
 
-        mgmt_pending_foreach(MGMT_OP_SET_POWERED, hdev, settings_rsp, &match);
+        mgmt_pending_foreach(MGMT_OP_SET_POWERED, hdev, true, settings_rsp,
+                             &match);
 
         /* If the power off is because of hdev unregistration let
          * use the appropriate INVALID_INDEX status. Otherwise use
···
         else
                 match.mgmt_status = MGMT_STATUS_NOT_POWERED;
 
-        mgmt_pending_foreach(0, hdev, cmd_complete_rsp, &match);
+        mgmt_pending_foreach(0, hdev, true, cmd_complete_rsp, &match);
 
         if (memcmp(hdev->dev_class, zero_cod, sizeof(zero_cod)) != 0) {
                 mgmt_limited_event(MGMT_EV_CLASS_OF_DEV_CHANGED, hdev,
···
                 device_unpaired(hdev, &cp->addr.bdaddr, cp->addr.type, cmd->sk);
 
         cmd->cmd_complete(cmd, 0);
-        mgmt_pending_remove(cmd);
 }
 
 bool mgmt_powering_down(struct hci_dev *hdev)
···
         struct mgmt_cp_disconnect *cp;
         struct mgmt_pending_cmd *cmd;
 
-        mgmt_pending_foreach(MGMT_OP_UNPAIR_DEVICE, hdev, unpair_device_rsp,
-                             hdev);
+        mgmt_pending_foreach(MGMT_OP_UNPAIR_DEVICE, hdev, true,
+                             unpair_device_rsp, hdev);
 
         cmd = pending_find(MGMT_OP_DISCONNECT, hdev);
         if (!cmd)
···
 
         if (status) {
                 u8 mgmt_err = mgmt_status(status);
-                mgmt_pending_foreach(MGMT_OP_SET_LINK_SECURITY, hdev,
+                mgmt_pending_foreach(MGMT_OP_SET_LINK_SECURITY, hdev, true,
                                      cmd_status_rsp, &mgmt_err);
                 return;
         }
···
         else
                 changed = hci_dev_test_and_clear_flag(hdev, HCI_LINK_SECURITY);
 
-        mgmt_pending_foreach(MGMT_OP_SET_LINK_SECURITY, hdev, settings_rsp,
-                             &match);
+        mgmt_pending_foreach(MGMT_OP_SET_LINK_SECURITY, hdev, true,
+                             settings_rsp, &match);
 
         if (changed)
                 new_settings(hdev, match.sk);
···
 {
         struct cmd_lookup match = { NULL, hdev, mgmt_status(status) };
 
-        mgmt_pending_foreach(MGMT_OP_SET_DEV_CLASS, hdev, sk_lookup, &match);
-        mgmt_pending_foreach(MGMT_OP_ADD_UUID, hdev, sk_lookup, &match);
-        mgmt_pending_foreach(MGMT_OP_REMOVE_UUID, hdev, sk_lookup, &match);
+        mgmt_pending_foreach(MGMT_OP_SET_DEV_CLASS, hdev, false, sk_lookup,
+                             &match);
+        mgmt_pending_foreach(MGMT_OP_ADD_UUID, hdev, false, sk_lookup,
+                             &match);
+        mgmt_pending_foreach(MGMT_OP_REMOVE_UUID, hdev, false, sk_lookup,
+                             &match);
 
         if (!status) {
                 mgmt_limited_event(MGMT_EV_CLASS_OF_DEV_CHANGED, hdev, dev_class,
```
net/bluetooth/mgmt_util.c (+27 -5)
```diff
 struct mgmt_pending_cmd *mgmt_pending_find(unsigned short channel, u16 opcode,
                                            struct hci_dev *hdev)
 {
-        struct mgmt_pending_cmd *cmd;
+        struct mgmt_pending_cmd *cmd, *tmp;
 
-        list_for_each_entry(cmd, &hdev->mgmt_pending, list) {
+        mutex_lock(&hdev->mgmt_pending_lock);
+
+        list_for_each_entry_safe(cmd, tmp, &hdev->mgmt_pending, list) {
                 if (hci_sock_get_channel(cmd->sk) != channel)
                         continue;
-                if (cmd->opcode == opcode)
+
+                if (cmd->opcode == opcode) {
+                        mutex_unlock(&hdev->mgmt_pending_lock);
                         return cmd;
+                }
         }
+
+        mutex_unlock(&hdev->mgmt_pending_lock);
 
         return NULL;
 }
 
-void mgmt_pending_foreach(u16 opcode, struct hci_dev *hdev,
+void mgmt_pending_foreach(u16 opcode, struct hci_dev *hdev, bool remove,
                           void (*cb)(struct mgmt_pending_cmd *cmd, void *data),
                           void *data)
 {
         struct mgmt_pending_cmd *cmd, *tmp;
 
+        mutex_lock(&hdev->mgmt_pending_lock);
+
         list_for_each_entry_safe(cmd, tmp, &hdev->mgmt_pending, list) {
                 if (opcode > 0 && cmd->opcode != opcode)
                         continue;
 
+                if (remove)
+                        list_del(&cmd->list);
+
                 cb(cmd, data);
+
+                if (remove)
+                        mgmt_pending_free(cmd);
         }
+
+        mutex_unlock(&hdev->mgmt_pending_lock);
 }
 
 struct mgmt_pending_cmd *mgmt_pending_new(struct sock *sk, u16 opcode,
···
                 return NULL;
 
         cmd->opcode = opcode;
-        cmd->index = hdev->id;
+        cmd->hdev = hdev;
 
         cmd->param = kmemdup(data, len, GFP_KERNEL);
         if (!cmd->param) {
···
         if (!cmd)
                 return NULL;
 
+        mutex_lock(&hdev->mgmt_pending_lock);
         list_add_tail(&cmd->list, &hdev->mgmt_pending);
+        mutex_unlock(&hdev->mgmt_pending_lock);
 
         return cmd;
 }
···
 
 void mgmt_pending_remove(struct mgmt_pending_cmd *cmd)
 {
+        mutex_lock(&cmd->hdev->mgmt_pending_lock);
         list_del(&cmd->list);
+        mutex_unlock(&cmd->hdev->mgmt_pending_lock);
+
         mgmt_pending_free(cmd);
 }
```
net/bluetooth/mgmt_util.h (+2 -2)
```diff
 struct mgmt_pending_cmd {
         struct list_head list;
         u16 opcode;
-        int index;
+        struct hci_dev *hdev;
         void *param;
         size_t param_len;
         struct sock *sk;
···
 
 struct mgmt_pending_cmd *mgmt_pending_find(unsigned short channel, u16 opcode,
                                            struct hci_dev *hdev);
-void mgmt_pending_foreach(u16 opcode, struct hci_dev *hdev,
+void mgmt_pending_foreach(u16 opcode, struct hci_dev *hdev, bool remove,
                           void (*cb)(struct mgmt_pending_cmd *cmd, void *data),
                           void *data);
 struct mgmt_pending_cmd *mgmt_pending_add(struct sock *sk, u16 opcode,
```
net/core/filter.c (+13 -6)
```diff
         .arg1_type      = ARG_PTR_TO_CTX,
 };
 
+static void bpf_skb_change_protocol(struct sk_buff *skb, u16 proto)
+{
+        skb->protocol = htons(proto);
+        if (skb_valid_dst(skb))
+                skb_dst_drop(skb);
+}
+
 static int bpf_skb_generic_push(struct sk_buff *skb, u32 off, u32 len)
 {
         /* Caller already did skb_cow() with len as headroom,
···
                 }
         }
 
-        skb->protocol = htons(ETH_P_IPV6);
+        bpf_skb_change_protocol(skb, ETH_P_IPV6);
         skb_clear_hash(skb);
 
         return 0;
···
                 }
         }
 
-        skb->protocol = htons(ETH_P_IP);
+        bpf_skb_change_protocol(skb, ETH_P_IP);
         skb_clear_hash(skb);
 
         return 0;
···
                 /* Match skb->protocol to new outer l3 protocol */
                 if (skb->protocol == htons(ETH_P_IP) &&
                     flags & BPF_F_ADJ_ROOM_ENCAP_L3_IPV6)
-                        skb->protocol = htons(ETH_P_IPV6);
+                        bpf_skb_change_protocol(skb, ETH_P_IPV6);
                 else if (skb->protocol == htons(ETH_P_IPV6) &&
                          flags & BPF_F_ADJ_ROOM_ENCAP_L3_IPV4)
-                        skb->protocol = htons(ETH_P_IP);
+                        bpf_skb_change_protocol(skb, ETH_P_IP);
         }
 
         if (skb_is_gso(skb)) {
···
         /* Match skb->protocol to new outer l3 protocol */
         if (skb->protocol == htons(ETH_P_IP) &&
             flags & BPF_F_ADJ_ROOM_DECAP_L3_IPV6)
-                skb->protocol = htons(ETH_P_IPV6);
+                bpf_skb_change_protocol(skb, ETH_P_IPV6);
         else if (skb->protocol == htons(ETH_P_IPV6) &&
                  flags & BPF_F_ADJ_ROOM_DECAP_L3_IPV4)
-                skb->protocol = htons(ETH_P_IP);
+                bpf_skb_change_protocol(skb, ETH_P_IP);
 
         if (skb_is_gso(skb)) {
                 struct skb_shared_info *shinfo = skb_shinfo(skb);
```
net/ethtool/ioctl.c (+2 -1)
```diff
             ethtool_get_flow_spec_ring(info.fs.ring_cookie))
                 return -EINVAL;
 
-        if (!xa_load(&dev->ethtool->rss_ctx, info.rss_context))
+        if (info.rss_context &&
+            !xa_load(&dev->ethtool->rss_ctx, info.rss_context))
                 return -EINVAL;
 }
```
net/ipv6/route.c (+55 -55)
···
 	}
 }

+static int fib6_config_validate(struct fib6_config *cfg,
+				struct netlink_ext_ack *extack)
+{
+	/* RTF_PCPU is an internal flag; can not be set by userspace */
+	if (cfg->fc_flags & RTF_PCPU) {
+		NL_SET_ERR_MSG(extack, "Userspace can not set RTF_PCPU");
+		goto errout;
+	}
+
+	/* RTF_CACHE is an internal flag; can not be set by userspace */
+	if (cfg->fc_flags & RTF_CACHE) {
+		NL_SET_ERR_MSG(extack, "Userspace can not set RTF_CACHE");
+		goto errout;
+	}
+
+	if (cfg->fc_type > RTN_MAX) {
+		NL_SET_ERR_MSG(extack, "Invalid route type");
+		goto errout;
+	}
+
+	if (cfg->fc_dst_len > 128) {
+		NL_SET_ERR_MSG(extack, "Invalid prefix length");
+		goto errout;
+	}
+
+#ifdef CONFIG_IPV6_SUBTREES
+	if (cfg->fc_src_len > 128) {
+		NL_SET_ERR_MSG(extack, "Invalid source address length");
+		goto errout;
+	}
+
+	if (cfg->fc_nh_id && cfg->fc_src_len) {
+		NL_SET_ERR_MSG(extack, "Nexthops can not be used with source routing");
+		goto errout;
+	}
+#else
+	if (cfg->fc_src_len) {
+		NL_SET_ERR_MSG(extack,
+			       "Specifying source address requires IPV6_SUBTREES to be enabled");
+		goto errout;
+	}
+#endif
+	return 0;
+errout:
+	return -EINVAL;
+}
+
 static struct fib6_info *ip6_route_info_create(struct fib6_config *cfg,
 					       gfp_t gfp_flags,
 					       struct netlink_ext_ack *extack)
···
 {
 	struct fib6_info *rt;
 	int err;
+
+	err = fib6_config_validate(cfg, extack);
+	if (err)
+		return err;

 	rt = ip6_route_info_create(cfg, gfp_flags, extack);
 	if (IS_ERR(rt))
···
 	rcu_read_unlock();
 }

-static int fib6_config_validate(struct fib6_config *cfg,
-				struct netlink_ext_ack *extack)
-{
-	/* RTF_PCPU is an internal flag; can not be set by userspace */
-	if (cfg->fc_flags & RTF_PCPU) {
-		NL_SET_ERR_MSG(extack, "Userspace can not set RTF_PCPU");
-		goto errout;
-	}
-
-	/* RTF_CACHE is an internal flag; can not be set by userspace */
-	if (cfg->fc_flags & RTF_CACHE) {
-		NL_SET_ERR_MSG(extack, "Userspace can not set RTF_CACHE");
-		goto errout;
-	}
-
-	if (cfg->fc_type > RTN_MAX) {
-		NL_SET_ERR_MSG(extack, "Invalid route type");
-		goto errout;
-	}
-
-	if (cfg->fc_dst_len > 128) {
-		NL_SET_ERR_MSG(extack, "Invalid prefix length");
-		goto errout;
-	}
-
-#ifdef CONFIG_IPV6_SUBTREES
-	if (cfg->fc_src_len > 128) {
-		NL_SET_ERR_MSG(extack, "Invalid source address length");
-		goto errout;
-	}
-
-	if (cfg->fc_nh_id && cfg->fc_src_len) {
-		NL_SET_ERR_MSG(extack, "Nexthops can not be used with source routing");
-		goto errout;
-	}
-#else
-	if (cfg->fc_src_len) {
-		NL_SET_ERR_MSG(extack,
-			       "Specifying source address requires IPV6_SUBTREES to be enabled");
-		goto errout;
-	}
-#endif
-	return 0;
-errout:
-	return -EINVAL;
-}
-
 static void rtmsg_to_fib6_config(struct net *net,
 				 struct in6_rtmsg *rtmsg,
 				 struct fib6_config *cfg)
···
 	switch (cmd) {
 	case SIOCADDRT:
-		err = fib6_config_validate(&cfg, NULL);
-		if (err)
-			break;
-
 		/* Only do the default setting of fc_metric in route adding */
 		if (cfg.fc_metric == 0)
 			cfg.fc_metric = IP6_RT_PRIO_USER;
···
 	int nhn = 0;
 	int err;

+	err = fib6_config_validate(cfg, extack);
+	if (err)
+		return err;
+
 	replace = (cfg->fc_nlinfo.nlh &&
 		   (cfg->fc_nlinfo.nlh->nlmsg_flags & NLM_F_REPLACE));
···
 	err = rtm_to_fib6_config(skb, nlh, &cfg, extack);
 	if (err < 0)
-		return err;
-
-	err = fib6_config_validate(&cfg, extack);
-	if (err)
 		return err;

 	if (cfg.fc_metric == 0)
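The route.c change above moves validation in front of route creation for every entry path, following a common pattern: reject internal-only flags and bounds-check lengths before anything is allocated. A minimal Python model of that check order (the flag bit values and field names here are illustrative, not the kernel's actual `RTF_*` constants):

```python
# Toy model of the validate-before-create pattern from fib6_config_validate().
# Flag bits are hypothetical; the kernel's RTF_PCPU/RTF_CACHE values differ.
RTF_PCPU = 0x40
RTF_CACHE = 0x80
INTERNAL_FLAGS = RTF_PCPU | RTF_CACHE

def validate_route_config(flags: int, dst_len: int, src_len: int,
                          subtrees_enabled: bool = True) -> tuple[bool, str]:
    """Return (ok, reason); mirrors the order of checks in the hunk above."""
    if flags & INTERNAL_FLAGS:
        return False, "internal-only flag set by userspace"
    if not 0 <= dst_len <= 128:          # IPv6 prefix length bound
        return False, "invalid prefix length"
    if not subtrees_enabled and src_len:  # source routing needs IPV6_SUBTREES
        return False, "source routing requires IPV6_SUBTREES"
    if not 0 <= src_len <= 128:
        return False, "invalid source address length"
    return True, "ok"
```

Running the validator once, up front, on every entry path (netlink, ioctl, multipath) is what closes the hole: previously `ip6_route_multipath_add()` could create routes that were never validated.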
+1 -1
net/sched/sch_ets.c
···
 	for (i = q->nbands; i < oldbands; i++) {
 		if (i >= q->nstrict && q->classes[i].qdisc->q.qlen)
 			list_del_init(&q->classes[i].alist);
-		qdisc_tree_flush_backlog(q->classes[i].qdisc);
+		qdisc_purge_queue(q->classes[i].qdisc);
 	}
 	WRITE_ONCE(q->nstrict, nstrict);
 	memcpy(q->prio2band, priomap, sizeof(priomap));
+1 -1
net/sched/sch_prio.c
···
 	memcpy(q->prio2band, qopt->priomap, TC_PRIO_MAX+1);

 	for (i = q->bands; i < oldbands; i++)
-		qdisc_tree_flush_backlog(q->queues[i]);
+		qdisc_purge_queue(q->queues[i]);

 	for (i = oldbands; i < q->bands; i++) {
 		q->queues[i] = queues[i];
+1 -1
net/sched/sch_red.c
···
 	q->userbits = userbits;
 	q->limit = ctl->limit;
 	if (child) {
-		qdisc_tree_flush_backlog(q->qdisc);
+		qdisc_purge_queue(q->qdisc);
 		old_child = q->qdisc;
 		q->qdisc = child;
 	}
+12 -3
net/sched/sch_sfq.c
···
 		/* It is difficult to believe, but ALL THE SLOTS HAVE LENGTH 1. */
 		x = q->tail->next;
 		slot = &q->slots[x];
-		q->tail->next = slot->next;
+		if (slot->next == x)
+			q->tail = NULL; /* no more active slots */
+		else
+			q->tail->next = slot->next;
 		q->ht[slot->hash] = SFQ_EMPTY_SLOT;
 		goto drop;
 	}
···
 		NL_SET_ERR_MSG_MOD(extack, "invalid quantum");
 		return -EINVAL;
 	}
+
+	if (ctl->perturb_period < 0 ||
+	    ctl->perturb_period > INT_MAX / HZ) {
+		NL_SET_ERR_MSG_MOD(extack, "invalid perturb period");
+		return -EINVAL;
+	}
+	perturb_period = ctl->perturb_period * HZ;
+
 	if (ctl_v1 && !red_check_params(ctl_v1->qth_min, ctl_v1->qth_max,
 					ctl_v1->Wlog, ctl_v1->Scell_log, NULL))
 		return -EINVAL;
···
 	headdrop = q->headdrop;
 	maxdepth = q->maxdepth;
 	maxflows = q->maxflows;
-	perturb_period = q->perturb_period;
 	quantum = q->quantum;
 	flags = q->flags;

 	/* update and validate configuration */
 	if (ctl->quantum)
 		quantum = ctl->quantum;
-	perturb_period = ctl->perturb_period * HZ;
 	if (ctl->flows)
 		maxflows = min_t(u32, ctl->flows, SFQ_MAX_FLOWS);
 	if (ctl->divisor) {
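The sfq hunks fix two independent problems: dropping the last active slot of a circular singly-linked list must also clear the tail pointer (otherwise `q->tail` dangles and later dereferences crash), and the user-supplied `perturb_period` must be range-checked before the multiply by `HZ` so the product cannot overflow a signed int. A standalone Python model of both, assuming an illustrative tick rate (the kernel's `HZ` is config-dependent):

```python
INT_MAX = 2**31 - 1
HZ = 1000  # illustrative; kernel HZ is typically 100/250/1000

def drop_head_slot(nxt: dict, tail):
    """Remove the slot after `tail` from a circular singly-linked list.

    `nxt` maps slot -> next slot. Returns the new tail, or None when the
    list becomes empty -- mirroring the fixed q->tail handling above,
    where the old code left tail pointing at the freed slot."""
    victim = nxt[tail]             # head of the circular list
    if nxt[victim] == victim:      # victim was the only active slot
        return None                #   -> list is now empty, clear tail
    nxt[tail] = nxt[victim]        # unlink victim; tail itself survives
    return tail

def scale_perturb_period(period: int) -> int:
    """Validate the raw value, then scale -- as the new check does
    before the `* HZ` multiply."""
    if period < 0 or period > INT_MAX // HZ:
        raise ValueError("invalid perturb period")
    return period * HZ
```

Note the validation happens on the raw netlink value, not on the already-scaled product, which is the only place the overflow can still be detected.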
+1 -1
net/sched/sch_tbf.c
···
 	sch_tree_lock(sch);
 	if (child) {
-		qdisc_tree_flush_backlog(q->qdisc);
+		qdisc_purge_queue(q->qdisc);
 		old = q->qdisc;
 		q->qdisc = child;
 	}
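The one-line swaps in ets, prio, red, and tbf all follow the same pattern: when a child qdisc is detached during reconfiguration, its queued packets should actually be purged, with the drop reflected in the queue-length and backlog counters of every ancestor. A rough Python model of that bookkeeping (heavily simplified; the real `qdisc_purge_queue()` also resets qdisc state and runs under the tree lock):

```python
class Qdisc:
    """Toy queue node that charges enqueued bytes up the hierarchy."""
    def __init__(self, parent=None):
        self.parent = parent
        self.pkts = []       # queued packet sizes, in bytes
        self.qlen = 0
        self.backlog = 0

    def enqueue(self, size: int):
        self.pkts.append(size)
        node = self
        while node is not None:      # counters are hierarchical
            node.qlen += 1
            node.backlog += size
            node = node.parent

def purge_queue(q: Qdisc):
    """Drop everything in `q` and propagate the reduction upward,
    loosely modeling the counter side of qdisc_purge_queue()."""
    dropped_pkts, dropped_bytes = q.qlen, q.backlog
    q.pkts.clear()
    q.qlen = q.backlog = 0
    node = q.parent
    while node is not None:
        node.qlen -= dropped_pkts
        node.backlog -= dropped_bytes
        node = node.parent
```

The point of the fix is the propagation step: detaching a child without it leaves ancestors believing packets are still queued.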
+2 -1
net/unix/af_unix.c
···
 	if (UNIXCB(skb).pid)
 		return;

-	if (unix_may_passcred(sk) || unix_may_passcred(other)) {
+	if (unix_may_passcred(sk) || unix_may_passcred(other) ||
+	    !other->sk_socket) {
 		UNIXCB(skb).pid = get_pid(task_tgid(current));
 		current_uid_gid(&UNIXCB(skb).uid, &UNIXCB(skb).gid);
 	}
+1 -1
net/wireless/nl80211.c
···
 	return result;
 error:
-	kfree(result);
+	kfree_sensitive(result);
 	return ERR_PTR(err);
 }
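The nl80211 change swaps `kfree()` for `kfree_sensitive()`, which zeroizes the buffer before freeing so key material does not linger in freed memory. The closest Python analogue is overwriting a mutable buffer in place before dropping the reference (a sketch of the idea only; it does not model CPython's allocator):

```python
def wipe(buf: bytearray) -> None:
    """Overwrite a secret in place before releasing it, in the spirit
    of kfree_sensitive()/memzero_explicit(). Works only on mutable
    buffers -- immutable bytes/str cannot be scrubbed this way."""
    for i in range(len(buf)):
        buf[i] = 0

key = bytearray(b"supersecretkey")  # hypothetical key material
wipe(key)
del key
```

In C the same idea needs `memzero_explicit()` (or `kfree_sensitive()` itself) rather than plain `memset()`, because the compiler may elide a memset on memory that is about to be freed.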
+58 -1
tools/testing/selftests/drivers/net/hw/rss_ctx.py
···
 			      'noise' : (0,) })


+def test_rss_default_context_rule(cfg):
+    """
+    Allocate a port, direct this port to context 0, then create a new RSS
+    context and steer all TCP traffic to it (context 1). Verify that:
+      * Traffic to the specific port continues to use queues of the main
+        context (0/1).
+      * Traffic to any other TCP port is redirected to the new context
+        (queues 2/3).
+    """
+
+    require_ntuple(cfg)
+
+    queue_cnt = len(_get_rx_cnts(cfg))
+    if queue_cnt < 4:
+        try:
+            ksft_pr(f"Increasing queue count {queue_cnt} -> 4")
+            ethtool(f"-L {cfg.ifname} combined 4")
+            defer(ethtool, f"-L {cfg.ifname} combined {queue_cnt}")
+        except Exception as exc:
+            raise KsftSkipEx("Not enough queues for the test") from exc
+
+    # Use queues 0 and 1 for the main context
+    ethtool(f"-X {cfg.ifname} equal 2")
+    defer(ethtool, f"-X {cfg.ifname} default")
+
+    # Create a new RSS context that uses queues 2 and 3
+    ctx_id = ethtool_create(cfg, "-X", "context new start 2 equal 2")
+    defer(ethtool, f"-X {cfg.ifname} context {ctx_id} delete")
+
+    # Generic low-priority rule: redirect all TCP traffic to the new context.
+    # Give it an explicit higher location number (lower priority).
+    flow_generic = f"flow-type tcp{cfg.addr_ipver} dst-ip {cfg.addr} context {ctx_id} loc 1"
+    ethtool(f"-N {cfg.ifname} {flow_generic}")
+    defer(ethtool, f"-N {cfg.ifname} delete 1")
+
+    # Specific high-priority rule for a random port that should stay on
+    # context 0. Assign loc 0 so it is evaluated before the generic rule.
+    port_main = rand_port()
+    flow_main = f"flow-type tcp{cfg.addr_ipver} dst-ip {cfg.addr} dst-port {port_main} context 0 loc 0"
+    ethtool(f"-N {cfg.ifname} {flow_main}")
+    defer(ethtool, f"-N {cfg.ifname} delete 0")
+
+    _ntuple_rule_check(cfg, 1, ctx_id)
+
+    # Verify that traffic matching the specific rule still goes to queues 0/1
+    _send_traffic_check(cfg, port_main, "context 0",
+                        { 'target': (0, 1),
+                          'empty' : (2, 3) })
+
+    # And that traffic for any other port is steered to the new context
+    port_other = rand_port()
+    _send_traffic_check(cfg, port_other, f"context {ctx_id}",
+                        { 'target': (2, 3),
+                          'noise' : (0, 1) })
+
+
 def main() -> None:
     with NetDrvEpEnv(__file__, nsim_test=False) as cfg:
         cfg.context_cnt = None
···
 		  test_rss_context_overlap, test_rss_context_overlap2,
 		  test_rss_context_out_of_order, test_rss_context4_create_with_cfg,
 		  test_flow_add_context_missing,
-		  test_delete_rss_context_busy, test_rss_ntuple_addition],
+		  test_delete_rss_context_busy, test_rss_ntuple_addition,
+		  test_rss_default_context_rule],
 		 args=(cfg, ))
     ksft_exit()
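The new selftest leans on ethtool ntuple rule ordering: rules are matched in ascending `loc` order, so the port-specific rule at loc 0 wins over the catch-all at loc 1 even though both match the same destination IP. A small Python model of that first-match semantics (rule fields are simplified to just a port match and a target context; the context id 1 is assumed, as in the test's docstring):

```python
def classify(rules, dst_port):
    """Return the RSS context of the first matching rule, scanning in
    ascending location order (lower loc = higher priority)."""
    for loc, match_port, ctx in sorted(rules, key=lambda r: r[0]):
        if match_port is None or match_port == dst_port:
            return ctx
    return 0  # no rule hit: fall back to the default RSS context

# Mirrors the test setup: loc 0 pins one port to context 0,
# loc 1 sends all other TCP traffic to the new context (assumed id 1).
rules = [(1, None, 1),   # catch-all  -> new context
         (0, 8080, 0)]   # one port   -> main context
```

This is why the specific rule must be installed with an explicit `loc 0`: if the driver auto-assigned it a higher location, the catch-all would shadow it.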
+1
tools/testing/selftests/net/Makefile
···
 TEST_PROGS += unicast_extensions.sh
 TEST_PROGS += udpgro_fwd.sh
 TEST_PROGS += udpgro_frglist.sh
+TEST_PROGS += nat6to4.sh
 TEST_PROGS += veth.sh
 TEST_PROGS += ioam6.sh
 TEST_PROGS += gro.sh
+15
tools/testing/selftests/net/nat6to4.sh
···
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+
+NS="ns-peer-$(mktemp -u XXXXXX)"
+
+ip netns add "${NS}"
+ip -netns "${NS}" link set lo up
+ip -netns "${NS}" route add default via 127.0.0.2 dev lo
+
+tc -n "${NS}" qdisc add dev lo ingress
+tc -n "${NS}" filter add dev lo ingress prio 4 protocol ip \
+	bpf object-file nat6to4.bpf.o section schedcls/egress4/snat4 direct-action
+
+ip netns exec "${NS}" \
+	bash -c 'echo 012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789abc | socat - UDP4-DATAGRAM:224.1.0.1:6666,ip-multicast-loop=1'