Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
kernel os linux
1
fork

Configure Feed

Select the types of activity you want to include in your feed.

net: reformat kdoc return statements

kernel-doc -Wall warns about missing Return: statement for non-void
functions. We have a number of kdocs in our headers which are missing
the colon, IOW they use
* Return some value
or
* Returns some value

Having the colon makes some sense, it should help kdoc parser avoid
false positives. So add them. This is mostly done with a sed script,
and removing the unnecessary cases (mostly the comments which aren't
kdoc).

Acked-by: Johannes Berg <johannes@sipsolutions.net>
Acked-by: Richard Cochran <richardcochran@gmail.com>
Acked-by: Sergey Ryazanov <ryazanov.s.a@gmail.com>
Reviewed-by: Edward Cree <ecree.xilinx@gmail.com>
Acked-by: Alexandra Winter <wintera@linux.ibm.com>
Acked-by: Pablo Neira Ayuso <pablo@netfilter.org>
Link: https://patch.msgid.link/20241205165914.1071102-1-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+101 -101
+9 -9
include/linux/etherdevice.h
··· 81 81 * is_link_local_ether_addr - Determine if given Ethernet address is link-local 82 82 * @addr: Pointer to a six-byte array containing the Ethernet address 83 83 * 84 - * Return true if address is link local reserved addr (01:80:c2:00:00:0X) per 84 + * Return: true if address is link local reserved addr (01:80:c2:00:00:0X) per 85 85 * IEEE 802.1Q 8.6.3 Frame filtering. 86 86 * 87 87 * Please note: addr must be aligned to u16. ··· 104 104 * is_zero_ether_addr - Determine if give Ethernet address is all zeros. 105 105 * @addr: Pointer to a six-byte array containing the Ethernet address 106 106 * 107 - * Return true if the address is all zeroes. 107 + * Return: true if the address is all zeroes. 108 108 * 109 109 * Please note: addr must be aligned to u16. 110 110 */ ··· 123 123 * is_multicast_ether_addr - Determine if the Ethernet address is a multicast. 124 124 * @addr: Pointer to a six-byte array containing the Ethernet address 125 125 * 126 - * Return true if the address is a multicast address. 126 + * Return: true if the address is a multicast address. 127 127 * By definition the broadcast address is also a multicast address. 128 128 */ 129 129 static inline bool is_multicast_ether_addr(const u8 *addr) ··· 157 157 * is_local_ether_addr - Determine if the Ethernet address is locally-assigned one (IEEE 802). 158 158 * @addr: Pointer to a six-byte array containing the Ethernet address 159 159 * 160 - * Return true if the address is a local address. 160 + * Return: true if the address is a local address. 161 161 */ 162 162 static inline bool is_local_ether_addr(const u8 *addr) 163 163 { ··· 168 168 * is_broadcast_ether_addr - Determine if the Ethernet address is broadcast 169 169 * @addr: Pointer to a six-byte array containing the Ethernet address 170 170 * 171 - * Return true if the address is the broadcast address. 171 + * Return: true if the address is the broadcast address. 172 172 * 173 173 * Please note: addr must be aligned to u16. 
174 174 */ ··· 183 183 * is_unicast_ether_addr - Determine if the Ethernet address is unicast 184 184 * @addr: Pointer to a six-byte array containing the Ethernet address 185 185 * 186 - * Return true if the address is a unicast address. 186 + * Return: true if the address is a unicast address. 187 187 */ 188 188 static inline bool is_unicast_ether_addr(const u8 *addr) 189 189 { ··· 197 197 * Check that the Ethernet address (MAC) is not 00:00:00:00:00:00, is not 198 198 * a multicast address, and is not FF:FF:FF:FF:FF:FF. 199 199 * 200 - * Return true if the address is valid. 200 + * Return: true if the address is valid. 201 201 * 202 202 * Please note: addr must be aligned to u16. 203 203 */ ··· 214 214 * 215 215 * Check that the value from the Ethertype/length field is a valid Ethertype. 216 216 * 217 - * Return true if the valid is an 802.3 supported Ethertype. 217 + * Return: true if the valid is an 802.3 supported Ethertype. 218 218 */ 219 219 static inline bool eth_proto_is_802_3(__be16 proto) 220 220 { ··· 458 458 * ether_addr_to_u64 - Convert an Ethernet address into a u64 value. 459 459 * @addr: Pointer to a six-byte array containing the Ethernet address 460 460 * 461 - * Return a u64 value of the address 461 + * Return: a u64 value of the address 462 462 */ 463 463 static inline u64 ether_addr_to_u64(const u8 *addr) 464 464 {
+3 -3
include/linux/ethtool.h
··· 257 257 * @mode : one of the ETHTOOL_LINK_MODE_*_BIT 258 258 * (not atomic, no bound checking) 259 259 * 260 - * Returns true/false. 260 + * Returns: true/false. 261 261 */ 262 262 #define ethtool_link_ksettings_test_link_mode(ptr, name, mode) \ 263 263 test_bit(ETHTOOL_LINK_MODE_ ## mode ## _BIT, (ptr)->link_modes.name) ··· 1199 1199 * @dev: pointer to net_device structure 1200 1200 * @vclock_index: pointer to pointer of vclock index 1201 1201 * 1202 - * Return number of phc vclocks 1202 + * Return: number of phc vclocks 1203 1203 */ 1204 1204 int ethtool_get_phc_vclocks(struct net_device *dev, int **vclock_index); 1205 1205 ··· 1253 1253 * ethtool_get_ts_info_by_layer - Obtains time stamping capabilities from the MAC or PHY layer. 1254 1254 * @dev: pointer to net_device structure 1255 1255 * @info: buffer to hold the result 1256 - * Returns zero on success, non-zero otherwise. 1256 + * Returns: zero on success, non-zero otherwise. 1257 1257 */ 1258 1258 int ethtool_get_ts_info_by_layer(struct net_device *dev, 1259 1259 struct kernel_ethtool_ts_info *info);
+14 -14
include/linux/if_vlan.h
··· 310 310 * eth_type_vlan - check for valid vlan ether type. 311 311 * @ethertype: ether type to check 312 312 * 313 - * Returns true if the ether type is a vlan ether type. 313 + * Returns: true if the ether type is a vlan ether type. 314 314 */ 315 315 static inline bool eth_type_vlan(__be16 ethertype) 316 316 { ··· 341 341 * @mac_len: MAC header length including outer vlan headers 342 342 * 343 343 * Inserts the VLAN tag into @skb as part of the payload at offset mac_len 344 - * Returns error if skb_cow_head fails. 345 - * 346 344 * Does not change skb->protocol so this function can be used during receive. 345 + * 346 + * Returns: error if skb_cow_head fails. 347 347 */ 348 348 static inline int __vlan_insert_inner_tag(struct sk_buff *skb, 349 349 __be16 vlan_proto, u16 vlan_tci, ··· 390 390 * @vlan_tci: VLAN TCI to insert 391 391 * 392 392 * Inserts the VLAN tag into @skb as part of the payload 393 - * Returns error if skb_cow_head fails. 394 - * 395 393 * Does not change skb->protocol so this function can be used during receive. 394 + * 395 + * Returns: error if skb_cow_head fails. 
396 396 */ 397 397 static inline int __vlan_insert_tag(struct sk_buff *skb, 398 398 __be16 vlan_proto, u16 vlan_tci) ··· 533 533 * @skb: skbuff to query 534 534 * @vlan_tci: buffer to store value 535 535 * 536 - * Returns error if the skb is not of VLAN type 536 + * Returns: error if the skb is not of VLAN type 537 537 */ 538 538 static inline int __vlan_get_tag(const struct sk_buff *skb, u16 *vlan_tci) 539 539 { ··· 551 551 * @skb: skbuff to query 552 552 * @vlan_tci: buffer to store value 553 553 * 554 - * Returns error if @skb->vlan_tci is not set correctly 554 + * Returns: error if @skb->vlan_tci is not set correctly 555 555 */ 556 556 static inline int __vlan_hwaccel_get_tag(const struct sk_buff *skb, 557 557 u16 *vlan_tci) ··· 570 570 * @skb: skbuff to query 571 571 * @vlan_tci: buffer to store value 572 572 * 573 - * Returns error if the skb is not VLAN tagged 573 + * Returns: error if the skb is not VLAN tagged 574 574 */ 575 575 static inline int vlan_get_tag(const struct sk_buff *skb, u16 *vlan_tci) 576 576 { ··· 587 587 * @type: first vlan protocol 588 588 * @depth: buffer to store length of eth and vlan tags in bytes 589 589 * 590 - * Returns the EtherType of the packet, regardless of whether it is 590 + * Returns: the EtherType of the packet, regardless of whether it is 591 591 * vlan encapsulated (normal or hardware accelerated) or not. 592 592 */ 593 593 static inline __be16 __vlan_get_protocol(const struct sk_buff *skb, __be16 type, ··· 629 629 * vlan_get_protocol - get protocol EtherType. 630 630 * @skb: skbuff to query 631 631 * 632 - * Returns the EtherType of the packet, regardless of whether it is 632 + * Returns: the EtherType of the packet, regardless of whether it is 633 633 * vlan encapsulated (normal or hardware accelerated) or not. 
634 634 */ 635 635 static inline __be16 vlan_get_protocol(const struct sk_buff *skb) ··· 710 710 * Expects the skb to contain a VLAN tag in the payload, and to have skb->data 711 711 * pointing at the MAC header. 712 712 * 713 - * Returns a new pointer to skb->data, or NULL on failure to pull. 713 + * Returns: a new pointer to skb->data, or NULL on failure to pull. 714 714 */ 715 715 static inline void *vlan_remove_tag(struct sk_buff *skb, u16 *vlan_tci) 716 716 { ··· 727 727 * skb_vlan_tagged - check if skb is vlan tagged. 728 728 * @skb: skbuff to query 729 729 * 730 - * Returns true if the skb is tagged, regardless of whether it is hardware 730 + * Returns: true if the skb is tagged, regardless of whether it is hardware 731 731 * accelerated or not. 732 732 */ 733 733 static inline bool skb_vlan_tagged(const struct sk_buff *skb) ··· 743 743 * skb_vlan_tagged_multi - check if skb is vlan tagged with multiple headers. 744 744 * @skb: skbuff to query 745 745 * 746 - * Returns true if the skb is tagged with multiple vlan headers, regardless 746 + * Returns: true if the skb is tagged with multiple vlan headers, regardless 747 747 * of whether it is hardware accelerated or not. 748 748 */ 749 749 static inline bool skb_vlan_tagged_multi(struct sk_buff *skb) ··· 774 774 * @skb: skbuff to query 775 775 * @features: features to be checked 776 776 * 777 - * Returns features without unsafe ones if the skb has multiple tags. 777 + * Returns: features without unsafe ones if the skb has multiple tags. 778 778 */ 779 779 static inline netdev_features_t vlan_features_check(struct sk_buff *skb, 780 780 netdev_features_t features)
+8 -6
include/linux/netdevice.h
··· 509 509 * is scheduled for example in the context of delayed timer 510 510 * that can be skipped if a NAPI is already scheduled. 511 511 * 512 - * Return True if NAPI is scheduled, False otherwise. 512 + * Return: True if NAPI is scheduled, False otherwise. 513 513 */ 514 514 static inline bool napi_is_scheduled(struct napi_struct *n) 515 515 { ··· 524 524 * 525 525 * Schedule NAPI poll routine to be called if it is not already 526 526 * running. 527 - * Return true if we schedule a NAPI or false if not. 527 + * Return: true if we schedule a NAPI or false if not. 528 528 * Refer to napi_schedule_prep() for additional reason on why 529 529 * a NAPI might not be scheduled. 530 530 */ ··· 558 558 * Mark NAPI processing as complete. Should only be called if poll budget 559 559 * has not been completely consumed. 560 560 * Prefer over napi_complete(). 561 - * Return false if device should avoid rearming interrupts. 561 + * Return: false if device should avoid rearming interrupts. 562 562 */ 563 563 bool napi_complete_done(struct napi_struct *n, int work_done); 564 564 ··· 3851 3851 * @online_mask: bitmask for CPUs/Rx queues that are online 3852 3852 * @nr_bits: number of bits in the bitmask 3853 3853 * 3854 - * Returns true if a CPU/Rx queue is online. 3854 + * Returns: true if a CPU/Rx queue is online. 3855 3855 */ 3856 3856 static inline bool netif_attr_test_online(unsigned long j, 3857 3857 const unsigned long *online_mask, ··· 3871 3871 * @srcp: the cpumask/Rx queue mask pointer 3872 3872 * @nr_bits: number of bits in the bitmask 3873 3873 * 3874 - * Returns >= nr_bits if no further CPUs/Rx queues set. 3874 + * Returns: next (after n) CPU/Rx queue index in the mask; 3875 + * >= nr_bits if no further CPUs/Rx queues set. 
3875 3876 */ 3876 3877 static inline unsigned int netif_attrmask_next(int n, const unsigned long *srcp, 3877 3878 unsigned int nr_bits) ··· 3894 3893 * @src2p: the second CPUs/Rx queues mask pointer 3895 3894 * @nr_bits: number of bits in the bitmask 3896 3895 * 3897 - * Returns >= nr_bits if no further CPUs/Rx queues set in both. 3896 + * Returns: next (after n) CPU/Rx queue index set in both masks; 3897 + * >= nr_bits if no further CPUs/Rx queues set in both. 3898 3898 */ 3899 3899 static inline int netif_attrmask_next_and(int n, const unsigned long *src1p, 3900 3900 const unsigned long *src2p,
+1 -1
include/linux/netfilter/x_tables.h
··· 357 357 * Begin packet processing : all readers must wait the end 358 358 * 1) Must be called with preemption disabled 359 359 * 2) softirqs must be disabled too (or we should use this_cpu_add()) 360 - * Returns : 360 + * Returns: 361 361 * 1 if no recursion on this cpu 362 362 * 0 if recursion detected 363 363 */
+2 -1
include/linux/netfilter_netdev.h
··· 66 66 * @rc: result code which shall be returned by __dev_queue_xmit() on failure 67 67 * @dev: netdev whose egress hooks shall be applied to @skb 68 68 * 69 - * Returns @skb on success or %NULL if the packet was consumed or filtered. 70 69 * Caller must hold rcu_read_lock. 71 70 * 72 71 * On ingress, packets are classified first by tc, then by netfilter. ··· 80 81 * called recursively by tunnel drivers such as vxlan, the flag is reverted to 81 82 * false after sch_handle_egress(). This ensures that netfilter is applied 82 83 * both on the overlay and underlying network. 84 + * 85 + * Returns: @skb on success or %NULL if the packet was consumed or filtered. 83 86 */ 84 87 static inline struct sk_buff *nf_hook_egress(struct sk_buff *skb, int *rc, 85 88 struct net_device *dev)
+2 -2
include/linux/ptp_clock_kernel.h
··· 307 307 * @info: Structure describing the new clock. 308 308 * @parent: Pointer to the parent device of the new clock. 309 309 * 310 - * Returns a valid pointer on success or PTR_ERR on failure. If PHC 310 + * Returns: a valid pointer on success or PTR_ERR on failure. If PHC 311 311 * support is missing at the configuration level, this function 312 312 * returns NULL, and drivers are expected to gracefully handle that 313 313 * case separately. ··· 445 445 * @hwtstamp: timestamp 446 446 * @vclock_index: phc index of ptp vclock. 447 447 * 448 - * Returns converted timestamp, or 0 on error. 448 + * Returns: converted timestamp, or 0 on error. 449 449 */ 450 450 ktime_t ptp_convert_timestamp(const ktime_t *hwtstamp, int vclock_index); 451 451 #else
+1 -1
include/linux/rfkill.h
··· 241 241 * rfkill_find_type - Helper for finding rfkill type by name 242 242 * @name: the name of the type 243 243 * 244 - * Returns enum rfkill_type that corresponds to the name. 244 + * Returns: enum rfkill_type that corresponds to the name. 245 245 */ 246 246 enum rfkill_type rfkill_find_type(const char *name); 247 247
+1 -1
include/linux/rtnetlink.h
··· 78 78 * rtnl_dereference - fetch RCU pointer when updates are prevented by RTNL 79 79 * @p: The pointer to read, prior to dereferencing 80 80 * 81 - * Return the value of the specified RCU-protected pointer, but omit 81 + * Return: the value of the specified RCU-protected pointer, but omit 82 82 * the READ_ONCE(), because caller holds RTNL. 83 83 */ 84 84 #define rtnl_dereference(p) \
+8 -8
include/linux/skbuff.h
··· 1134 1134 * skb_dst - returns skb dst_entry 1135 1135 * @skb: buffer 1136 1136 * 1137 - * Returns skb dst_entry, regardless of reference taken or not. 1137 + * Returns: skb dst_entry, regardless of reference taken or not. 1138 1138 */ 1139 1139 static inline struct dst_entry *skb_dst(const struct sk_buff *skb) 1140 1140 { ··· 1222 1222 * skb_unref - decrement the skb's reference count 1223 1223 * @skb: buffer 1224 1224 * 1225 - * Returns true if we can free the skb. 1225 + * Returns: true if we can free the skb. 1226 1226 */ 1227 1227 static inline bool skb_unref(struct sk_buff *skb) 1228 1228 { ··· 1344 1344 * @sk: socket 1345 1345 * @skb: buffer 1346 1346 * 1347 - * Returns true if skb is a fast clone, and its clone is not freed. 1347 + * Returns: true if skb is a fast clone, and its clone is not freed. 1348 1348 * Some drivers call skb_orphan() in their ndo_start_xmit(), 1349 1349 * so we also check that didn't happen. 1350 1350 */ ··· 3516 3516 * A page shouldn't be considered for reusing/recycling if it was allocated 3517 3517 * under memory pressure or at a distant memory node. 3518 3518 * 3519 - * Returns false if this page should be returned to page allocator, true 3519 + * Returns: false if this page should be returned to page allocator, true 3520 3520 * otherwise. 3521 3521 */ 3522 3522 static inline bool dev_page_is_reusable(const struct page *page) ··· 3633 3633 * skb_frag_address - gets the address of the data contained in a paged fragment 3634 3634 * @frag: the paged fragment buffer 3635 3635 * 3636 - * Returns the address of the data within @frag. The page must already 3636 + * Returns: the address of the data within @frag. The page must already 3637 3637 * be mapped. 
3638 3638 */ 3639 3639 static inline void *skb_frag_address(const skb_frag_t *frag) ··· 3648 3648 * skb_frag_address_safe - gets the address of the data contained in a paged fragment 3649 3649 * @frag: the paged fragment buffer 3650 3650 * 3651 - * Returns the address of the data within @frag. Checks that the page 3651 + * Returns: the address of the data within @frag. Checks that the page 3652 3652 * is mapped and returns %NULL otherwise. 3653 3653 */ 3654 3654 static inline void *skb_frag_address_safe(const skb_frag_t *frag) ··· 3890 3890 * skb_has_shared_frag - can any frag be overwritten 3891 3891 * @skb: buffer to test 3892 3892 * 3893 - * Return true if the skb has at least one frag that might be modified 3893 + * Return: true if the skb has at least one frag that might be modified 3894 3894 * by an external entity (as in vmsplice()/sendfile()) 3895 3895 */ 3896 3896 static inline bool skb_has_shared_frag(const struct sk_buff *skb) ··· 4612 4612 4613 4613 /* Check if we need to perform checksum complete validation. 4614 4614 * 4615 - * Returns true if checksum complete is needed, false otherwise 4615 + * Returns: true if checksum complete is needed, false otherwise 4616 4616 * (either checksum is unnecessary or zero checksum is allowed). 4617 4617 */ 4618 4618 static inline bool __skb_checksum_validate_needed(struct sk_buff *skb,
+1 -1
include/linux/wwan.h
··· 97 97 * 98 98 * This function must be balanced with a call to wwan_remove_port(). 99 99 * 100 - * Returns a valid pointer to wwan_port on success or PTR_ERR on failure 100 + * Returns: a valid pointer to wwan_port on success or PTR_ERR on failure 101 101 */ 102 102 struct wwan_port *wwan_create_port(struct device *parent, 103 103 enum wwan_port_type type,
+1 -1
include/net/cfg80211.h
··· 5957 5957 * @wiphy: the wiphy to check the locking on 5958 5958 * @p: The pointer to read, prior to dereferencing 5959 5959 * 5960 - * Return the value of the specified RCU-protected pointer, but omit the 5960 + * Return: the value of the specified RCU-protected pointer, but omit the 5961 5961 * READ_ONCE(), because caller holds the wiphy mutex used for updates. 5962 5962 */ 5963 5963 #define wiphy_dereference(wiphy, p) \
+1 -1
include/net/dst.h
··· 307 307 * @skb: buffer 308 308 * 309 309 * If dst is not yet refcounted and not destroyed, grab a ref on it. 310 - * Returns true if dst is refcounted. 310 + * Returns: true if dst is refcounted. 311 311 */ 312 312 static inline bool skb_dst_force(struct sk_buff *skb) 313 313 {
+3 -3
include/net/genetlink.h
··· 354 354 * such requests) or a struct initialized by genl_info_init_ntf() 355 355 * when constructing notifications. 356 356 * 357 - * Returns pointer to new genetlink header. 357 + * Returns: pointer to new genetlink header. 358 358 */ 359 359 static inline void * 360 360 genlmsg_iput(struct sk_buff *skb, const struct genl_info *info) ··· 366 366 * genlmsg_nlhdr - Obtain netlink header from user specified header 367 367 * @user_hdr: user header as returned from genlmsg_put() 368 368 * 369 - * Returns pointer to netlink header. 369 + * Returns: pointer to netlink header. 370 370 */ 371 371 static inline struct nlmsghdr *genlmsg_nlhdr(void *user_hdr) 372 372 { ··· 435 435 * @flags: netlink message flags 436 436 * @cmd: generic netlink command 437 437 * 438 - * Returns pointer to user specific header 438 + * Returns: pointer to user specific header 439 439 */ 440 440 static inline void *genlmsg_put_reply(struct sk_buff *skb, 441 441 struct genl_info *info,
+1 -1
include/net/ipv6.h
··· 471 471 /* This helper is specialized for BIG TCP needs. 472 472 * It assumes the hop_jumbo_hdr will immediately follow the IPV6 header. 473 473 * It assumes headers are already in skb->head. 474 - * Returns 0, or IPPROTO_TCP if a BIG TCP packet is there. 474 + * Returns: 0, or IPPROTO_TCP if a BIG TCP packet is there. 475 475 */ 476 476 static inline int ipv6_has_hopopt_jumbo(const struct sk_buff *skb) 477 477 {
+15 -15
include/net/iucv/iucv.h
··· 202 202 * 203 203 * Registers a driver with IUCV. 204 204 * 205 - * Returns 0 on success, -ENOMEM if the memory allocation for the pathid 205 + * Returns: 0 on success, -ENOMEM if the memory allocation for the pathid 206 206 * table failed, or -EIO if IUCV_DECLARE_BUFFER failed on all cpus. 207 207 */ 208 208 int iucv_register(struct iucv_handler *handler, int smp); ··· 224 224 * 225 225 * Allocate a new path structure for use with iucv_connect. 226 226 * 227 - * Returns NULL if the memory allocation failed or a pointer to the 227 + * Returns: NULL if the memory allocation failed or a pointer to the 228 228 * path structure. 229 229 */ 230 230 static inline struct iucv_path *iucv_path_alloc(u16 msglim, u8 flags, gfp_t gfp) ··· 260 260 * This function is issued after the user received a connection pending 261 261 * external interrupt and now wishes to complete the IUCV communication path. 262 262 * 263 - * Returns the result of the CP IUCV call. 263 + * Returns: the result of the CP IUCV call. 264 264 */ 265 265 int iucv_path_accept(struct iucv_path *path, struct iucv_handler *handler, 266 266 u8 *userdata, void *private); ··· 278 278 * successfully, you are not able to use the path until you receive an IUCV 279 279 * Connection Complete external interrupt. 280 280 * 281 - * Returns the result of the CP IUCV call. 281 + * Returns: the result of the CP IUCV call. 282 282 */ 283 283 int iucv_path_connect(struct iucv_path *path, struct iucv_handler *handler, 284 284 u8 *userid, u8 *system, u8 *userdata, ··· 292 292 * This function temporarily suspends incoming messages on an IUCV path. 293 293 * You can later reactivate the path by invoking the iucv_resume function. 294 294 * 295 - * Returns the result from the CP IUCV call. 295 + * Returns: the result from the CP IUCV call. 
296 296 */ 297 297 int iucv_path_quiesce(struct iucv_path *path, u8 *userdata); 298 298 ··· 304 304 * This function resumes incoming messages on an IUCV path that has 305 305 * been stopped with iucv_path_quiesce. 306 306 * 307 - * Returns the result from the CP IUCV call. 307 + * Returns: the result from the CP IUCV call. 308 308 */ 309 309 int iucv_path_resume(struct iucv_path *path, u8 *userdata); 310 310 ··· 315 315 * 316 316 * This function terminates an IUCV path. 317 317 * 318 - * Returns the result from the CP IUCV call. 318 + * Returns: the result from the CP IUCV call. 319 319 */ 320 320 int iucv_path_sever(struct iucv_path *path, u8 *userdata); 321 321 ··· 327 327 * 328 328 * Cancels a message you have sent. 329 329 * 330 - * Returns the result from the CP IUCV call. 330 + * Returns: the result from the CP IUCV call. 331 331 */ 332 332 int iucv_message_purge(struct iucv_path *path, struct iucv_message *msg, 333 333 u32 srccls); ··· 347 347 * 348 348 * Locking: local_bh_enable/local_bh_disable 349 349 * 350 - * Returns the result from the CP IUCV call. 350 + * Returns: the result from the CP IUCV call. 351 351 */ 352 352 int iucv_message_receive(struct iucv_path *path, struct iucv_message *msg, 353 353 u8 flags, void *buffer, size_t size, size_t *residual); ··· 367 367 * 368 368 * Locking: no locking. 369 369 * 370 - * Returns the result from the CP IUCV call. 370 + * Returns: the result from the CP IUCV call. 371 371 */ 372 372 int __iucv_message_receive(struct iucv_path *path, struct iucv_message *msg, 373 373 u8 flags, void *buffer, size_t size, ··· 382 382 * are notified of a message and the time that you complete the message, 383 383 * the message may be rejected. 384 384 * 385 - * Returns the result from the CP IUCV call. 385 + * Returns: the result from the CP IUCV call. 386 386 */ 387 387 int iucv_message_reject(struct iucv_path *path, struct iucv_message *msg); 388 388 ··· 399 399 * pathid, msgid, and trgcls. 
Prmmsg signifies the data is moved into 400 400 * the parameter list. 401 401 * 402 - * Returns the result from the CP IUCV call. 402 + * Returns: the result from the CP IUCV call. 403 403 */ 404 404 int iucv_message_reply(struct iucv_path *path, struct iucv_message *msg, 405 405 u8 flags, void *reply, size_t size); ··· 419 419 * 420 420 * Locking: local_bh_enable/local_bh_disable 421 421 * 422 - * Returns the result from the CP IUCV call. 422 + * Returns: the result from the CP IUCV call. 423 423 */ 424 424 int iucv_message_send(struct iucv_path *path, struct iucv_message *msg, 425 425 u8 flags, u32 srccls, void *buffer, size_t size); ··· 439 439 * 440 440 * Locking: no locking. 441 441 * 442 - * Returns the result from the CP IUCV call. 442 + * Returns: the result from the CP IUCV call. 443 443 */ 444 444 int __iucv_message_send(struct iucv_path *path, struct iucv_message *msg, 445 445 u8 flags, u32 srccls, void *buffer, size_t size); ··· 461 461 * reply to the message and a buffer is provided into which IUCV moves 462 462 * the reply to this message. 463 463 * 464 - * Returns the result from the CP IUCV call. 464 + * Returns: the result from the CP IUCV call. 465 465 */ 466 466 int iucv_message_send2way(struct iucv_path *path, struct iucv_message *msg, 467 467 u8 flags, u32 srccls, void *buffer, size_t size,
+2 -2
include/net/netfilter/nf_tproxy.h
··· 49 49 * 50 50 * nf_tproxy_handle_time_wait4() consumes the socket reference passed in. 51 51 * 52 - * Returns the listener socket if there's one, the TIME_WAIT socket if 52 + * Returns: the listener socket if there's one, the TIME_WAIT socket if 53 53 * no such listener is found, or NULL if the TCP header is incomplete. 54 54 */ 55 55 struct sock * ··· 108 108 * 109 109 * nf_tproxy_handle_time_wait6() consumes the socket reference passed in. 110 110 * 111 - * Returns the listener socket if there's one, the TIME_WAIT socket if 111 + * Returns: the listener socket if there's one, the TIME_WAIT socket if 112 112 * no such listener is found, or NULL if the TCP header is incomplete. 113 113 */ 114 114 struct sock *
+22 -22
include/net/netlink.h
··· 649 649 * @nlh: netlink message header 650 650 * @remaining: number of bytes remaining in message stream 651 651 * 652 - * Returns the next netlink message in the message stream and 652 + * Returns: the next netlink message in the message stream and 653 653 * decrements remaining by the size of the current message. 654 654 */ 655 655 static inline struct nlmsghdr * ··· 676 676 * exceeding maxtype will be rejected, policy must be specified, attributes 677 677 * will be validated in the strictest way possible. 678 678 * 679 - * Returns 0 on success or a negative error code. 679 + * Returns: 0 on success or a negative error code. 680 680 */ 681 681 static inline int nla_parse(struct nlattr **tb, int maxtype, 682 682 const struct nlattr *head, int len, ··· 701 701 * exceeding maxtype will be ignored and attributes from the policy are not 702 702 * always strictly validated (only for new attributes). 703 703 * 704 - * Returns 0 on success or a negative error code. 704 + * Returns: 0 on success or a negative error code. 705 705 */ 706 706 static inline int nla_parse_deprecated(struct nlattr **tb, int maxtype, 707 707 const struct nlattr *head, int len, ··· 726 726 * exceeding maxtype will be rejected as well as trailing data, but the 727 727 * policy is not completely strictly validated (only for new attributes). 728 728 * 729 - * Returns 0 on success or a negative error code. 729 + * Returns: 0 on success or a negative error code. 730 730 */ 731 731 static inline int nla_parse_deprecated_strict(struct nlattr **tb, int maxtype, 732 732 const struct nlattr *head, ··· 833 833 * @hdrlen: length of family specific header 834 834 * @attrtype: type of attribute to look for 835 835 * 836 - * Returns the first attribute which matches the specified type. 836 + * Returns: the first attribute which matches the specified type. 
837 837 */ 838 838 static inline struct nlattr *nlmsg_find_attr(const struct nlmsghdr *nlh, 839 839 int hdrlen, int attrtype) ··· 854 854 * specified policy. Validation is done in liberal mode. 855 855 * See documentation of struct nla_policy for more details. 856 856 * 857 - * Returns 0 on success or a negative error code. 857 + * Returns: 0 on success or a negative error code. 858 858 */ 859 859 static inline int nla_validate_deprecated(const struct nlattr *head, int len, 860 860 int maxtype, ··· 877 877 * specified policy. Validation is done in strict mode. 878 878 * See documentation of struct nla_policy for more details. 879 879 * 880 - * Returns 0 on success or a negative error code. 880 + * Returns: 0 on success or a negative error code. 881 881 */ 882 882 static inline int nla_validate(const struct nlattr *head, int len, int maxtype, 883 883 const struct nla_policy *policy, ··· 914 914 * nlmsg_report - need to report back to application? 915 915 * @nlh: netlink message header 916 916 * 917 - * Returns 1 if a report back to the application is requested. 917 + * Returns: 1 if a report back to the application is requested. 918 918 */ 919 919 static inline int nlmsg_report(const struct nlmsghdr *nlh) 920 920 { ··· 925 925 * nlmsg_seq - return the seq number of netlink message 926 926 * @nlh: netlink message header 927 927 * 928 - * Returns 0 if netlink message is NULL 928 + * Returns: 0 if netlink message is NULL 929 929 */ 930 930 static inline u32 nlmsg_seq(const struct nlmsghdr *nlh) 931 931 { ··· 952 952 * @payload: length of message payload 953 953 * @flags: message flags 954 954 * 955 - * Returns NULL if the tailroom of the skb is insufficient to store 955 + * Returns: NULL if the tailroom of the skb is insufficient to store 956 956 * the message header and payload. 
957 957 */ 958 958 static inline struct nlmsghdr *nlmsg_put(struct sk_buff *skb, u32 portid, u32 seq, ··· 971 971 * 972 972 * Append data to an existing nlmsg, used when constructing a message 973 973 * with multiple fixed-format headers (which is rare). 974 - * Returns NULL if the tailroom of the skb is insufficient to store 974 + * Returns: NULL if the tailroom of the skb is insufficient to store 975 975 * the extra payload. 976 976 */ 977 977 static inline void *nlmsg_append(struct sk_buff *skb, u32 size) ··· 993 993 * @payload: length of message payload 994 994 * @flags: message flags 995 995 * 996 - * Returns NULL if the tailroom of the skb is insufficient to store 996 + * Returns: NULL if the tailroom of the skb is insufficient to store 997 997 * the message header and payload. 998 998 */ 999 999 static inline struct nlmsghdr *nlmsg_put_answer(struct sk_buff *skb, ··· 1050 1050 * nlmsg_get_pos - return current position in netlink message 1051 1051 * @skb: socket buffer the message is stored in 1052 1052 * 1053 - * Returns a pointer to the current tail of the message. 1053 + * Returns: a pointer to the current tail of the message. 1054 1054 */ 1055 1055 static inline void *nlmsg_get_pos(struct sk_buff *skb) 1056 1056 { ··· 1276 1276 * @nla: netlink attribute 1277 1277 * @remaining: number of bytes remaining in attribute stream 1278 1278 * 1279 - * Returns the next netlink attribute in the attribute stream and 1279 + * Returns: the next netlink attribute in the attribute stream and 1280 1280 * decrements remaining by the size of the current attribute. 1281 1281 */ 1282 1282 static inline struct nlattr *nla_next(const struct nlattr *nla, int *remaining) ··· 1292 1292 * @nla: attribute containing the nested attributes 1293 1293 * @attrtype: type of attribute to look for 1294 1294 * 1295 - * Returns the first attribute which matches the specified type. 1295 + * Returns: the first attribute which matches the specified type. 
1296 1296 */ 1297 1297 static inline struct nlattr * 1298 1298 nla_find_nested(const struct nlattr *nla, int attrtype) ··· 2091 2091 * nla_get_msecs - return payload of msecs attribute 2092 2092 * @nla: msecs netlink attribute 2093 2093 * 2094 - * Returns the number of milliseconds in jiffies. 2094 + * Returns: the number of milliseconds in jiffies. 2095 2095 */ 2096 2096 static inline unsigned long nla_get_msecs(const struct nlattr *nla) 2097 2097 { ··· 2183 2183 * marked their nest attributes with NLA_F_NESTED flag. New APIs should use 2184 2184 * nla_nest_start() which sets the flag. 2185 2185 * 2186 - * Returns the container attribute or NULL on error 2186 + * Returns: the container attribute or NULL on error 2187 2187 */ 2188 2188 static inline struct nlattr *nla_nest_start_noflag(struct sk_buff *skb, 2189 2189 int attrtype) ··· 2204 2204 * Unlike nla_nest_start_noflag(), mark the nest attribute with NLA_F_NESTED 2205 2205 * flag. This is the preferred function to use in new code. 2206 2206 * 2207 - * Returns the container attribute or NULL on error 2207 + * Returns: the container attribute or NULL on error 2208 2208 */ 2209 2209 static inline struct nlattr *nla_nest_start(struct sk_buff *skb, int attrtype) 2210 2210 { ··· 2219 2219 * Corrects the container attribute header to include the all 2220 2220 * appended attributes. 2221 2221 * 2222 - * Returns the total data length of the skb. 2222 + * Returns: the total data length of the skb. 2223 2223 */ 2224 2224 static inline int nla_nest_end(struct sk_buff *skb, struct nlattr *start) 2225 2225 { ··· 2252 2252 * specified policy. Attributes with a type exceeding maxtype will be 2253 2253 * ignored. See documentation of struct nla_policy for more details. 2254 2254 * 2255 - * Returns 0 on success or a negative error code. 2255 + * Returns: 0 on success or a negative error code. 
2256 2256 */ 2257 2257 static inline int __nla_validate_nested(const struct nlattr *start, int maxtype, 2258 2258 const struct nla_policy *policy, ··· 2285 2285 * nla_need_padding_for_64bit - test 64-bit alignment of the next attribute 2286 2286 * @skb: socket buffer the message is stored in 2287 2287 * 2288 - * Return true if padding is needed to align the next attribute (nla_data()) to 2288 + * Return: true if padding is needed to align the next attribute (nla_data()) to 2289 2289 * a 64-bit aligned area. 2290 2290 */ 2291 2291 static inline bool nla_need_padding_for_64bit(struct sk_buff *skb) ··· 2312 2312 * This will only be done in architectures which do not have 2313 2313 * CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS defined. 2314 2314 * 2315 - * Returns zero on success or a negative error code. 2315 + * Returns: zero on success or a negative error code. 2316 2316 */ 2317 2317 static inline int nla_align_64bit(struct sk_buff *skb, int padattr) 2318 2318 {
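The hunks above all apply one mechanical pattern: insert a colon after `Return`/`Returns` at the start of a kdoc comment line. The commit message says this was mostly done with a sed script but does not include it; the sketch below is a hypothetical reconstruction of that substitution, not the author's actual command.

```shell
#!/bin/sh
# Hypothetical reconstruction of the mass edit: add the colon that
# kernel-doc expects after "Return"/"Returns" on kdoc comment lines.
# Not the author's actual script.
fix_kdoc_return() {
	sed -E 's/^([[:space:]]*\*[[:space:]]*Returns?) /\1: /'
}

# Demonstrate on a sample kdoc line from the diff above.
printf ' * Returns 0 on success or a negative error code.\n' | fix_kdoc_return
# →  * Returns: 0 on success or a negative error code.
```

Note that a blanket substitution like this also hits comments that merely start a sentence with "Return", which is why the commit message mentions manually removing the unnecessary cases (mostly comments that aren't kdoc).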
+3 -6
include/net/page_pool/helpers.h
··· 104 104 * 105 105 * Get a page fragment from the page allocator or page_pool caches. 106 106 * 107 - * Return: 108 - * Return allocated page fragment, otherwise return NULL. 107 + * Return: allocated page fragment, otherwise return NULL. 109 108 */ 110 109 static inline struct page *page_pool_dev_alloc_frag(struct page_pool *pool, 111 110 unsigned int *offset, ··· 154 155 * depending on the requested size in order to allocate memory with least memory 155 156 * utilization and performance penalty. 156 157 * 157 - * Return: 158 - * Return allocated page or page fragment, otherwise return NULL. 158 + * Return: allocated page or page fragment, otherwise return NULL. 159 159 */ 160 160 static inline struct page *page_pool_dev_alloc(struct page_pool *pool, 161 161 unsigned int *offset, ··· 188 190 * This is just a thin wrapper around the page_pool_alloc() API, and 189 191 * it returns va of the allocated page or page fragment. 190 192 * 191 - * Return: 192 - * Return the va for the allocated page or page fragment, otherwise return NULL. 193 + * Return: the va for the allocated page or page fragment, otherwise return NULL. 193 194 */ 194 195 static inline void *page_pool_dev_alloc_va(struct page_pool *pool, 195 196 unsigned int *size)
+2 -2
include/net/pkt_cls.h
··· 319 319 * tcf_exts_has_actions - check if at least one action is present 320 320 * @exts: tc filter extensions handle 321 321 * 322 - * Returns true if at least one action is present. 322 + * Returns: true if at least one action is present. 323 323 */ 324 324 static inline bool tcf_exts_has_actions(struct tcf_exts *exts) 325 325 { ··· 501 501 * through all ematches respecting their logic relations returning 502 502 * as soon as the result is obvious. 503 503 * 504 - * Returns 1 if the ematch tree as-one matches, no ematches are configured 504 + * Returns: 1 if the ematch tree as-one matches, no ematches are configured 505 505 * or ematch is not enabled in the kernel, otherwise 0 is returned. 506 506 */ 507 507 static inline int tcf_em_tree_match(struct sk_buff *skb,
+1 -1
include/net/tcp.h
··· 1817 1817 * @id: tcp_sigpool that was previously allocated by tcp_sigpool_alloc_ahash() 1818 1818 * @c: returned tcp_sigpool for usage (uninitialized on failure) 1819 1819 * 1820 - * Returns 0 on success, error otherwise. 1820 + * Returns: 0 on success, error otherwise. 1821 1821 */ 1822 1822 int tcp_sigpool_start(unsigned int id, struct tcp_sigpool *c); 1823 1823 /**