Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'char-misc-7.0-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc

Pull char/misc/iio driver fixes from Greg KH:
"Here are a relatively large number of small char/misc/iio and other
driver fixes for 7.0-rc7. There's a bunch, but overall they are all
small fixes for issues that people have been having that I finally
caught up with getting merged due to delays on my end.

The "largest" change overall is just some documentation updates to the
security-bugs.rst file to hopefully tell the AI tools (and any users
that actually read the documentation), how to send us better security
bug reports as the quantity of reports these past few weeks has
increased dramatically due to tools getting better at "finding"
things.

Included in here are:
- lots of small IIO driver fixes for issues reported in 7.0-rc
- gpib driver fixes
- comedi driver fixes
- interconnect driver fix
- nvmem driver fixes
- mei driver fix
- counter driver fix
- binder rust driver fixes
- some other small misc driver fixes

All of these have been in linux-next this week with no reported issues"

* tag 'char-misc-7.0-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (63 commits)
Documentation: fix two typos in latest update to the security report howto
Documentation: clarify the mandatory and desirable info for security reports
Documentation: explain how to find maintainers addresses for security reports
Documentation: minor updates to the security contacts
.get_maintainer.ignore: add myself
nvmem: zynqmp_nvmem: Fix buffer size in DMA and memcpy
nvmem: imx: assign nvmem_cell_info::raw_len
misc: fastrpc: check qcom_scm_assign_mem() return in rpmsg_probe
misc: fastrpc: possible double-free of cctx->remote_heap
comedi: dt2815: add hardware detection to prevent crash
comedi: runflags cannot determine whether to reclaim chanlist
comedi: Reinit dev->spinlock between attachments to low-level drivers
comedi: me_daq: Fix potential overrun of firmware buffer
comedi: me4000: Fix potential overrun of firmware buffer
comedi: ni_atmio16d: Fix invalid clean-up after failed attach
gpib: fix use-after-free in IO ioctl handlers
gpib: lpvo_usb: fix memory leak on disconnect
gpib: Fix fluke driver s390 compile issue
lis3lv02d: Omit IRQF_ONESHOT if no threaded handler is provided
lis3lv02d: fix kernel-doc warnings
...

+545 -260
+1
.get_maintainer.ignore
···
  Alan Cox <alan@lxorguk.ukuu.org.uk>
  Alan Cox <root@hraefn.swansea.linux.org.uk>
  Alyssa Rosenzweig <alyssa@rosenzweig.io>
+ Askar Safin <safinaskar@gmail.com>
  Christoph Hellwig <hch@lst.de>
  Jeff Kirsher <jeffrey.t.kirsher@intel.com>
  Marc Gonzalez <marc.w.gonzalez@free.fr>
+139 -15
Documentation/process/security-bugs.rst
···
 
  Linux kernel developers take security very seriously. As such, we'd
  like to know when a security bug is found so that it can be fixed and
- disclosed as quickly as possible. Please report security bugs to the
- Linux kernel security team.
+ disclosed as quickly as possible.
+
+ Preparing your report
+ ---------------------
+
+ Like with any bug report, a security bug report requires a lot of analysis work
+ from the developers, so the more information you can share about the issue, the
+ better. Please review the procedure outlined in
+ Documentation/admin-guide/reporting-issues.rst if you are unclear about what
+ information is helpful. The following information are absolutely necessary in
+ **any** security bug report:
+
+ * **affected kernel version range**: with no version indication, your report
+   will not be processed. A significant part of reports are for bugs that
+   have already been fixed, so it is extremely important that vulnerabilities
+   are verified on recent versions (development tree or latest stable
+   version), at least by verifying that the code has not changed since the
+   version where it was detected.
+
+ * **description of the problem**: a detailed description of the problem, with
+   traces showing its manifestation, and why you consider that the observed
+   behavior as a problem in the kernel, is necessary.
+
+ * **reproducer**: developers will need to be able to reproduce the problem to
+   consider a fix as effective. This includes both a way to trigger the issue
+   and a way to confirm it happens. A reproducer with low complexity
+   dependencies will be needed (source code, shell script, sequence of
+   instructions, file-system image etc). Binary-only executables are not
+   accepted. Working exploits are extremely helpful and will not be released
+   without consent from the reporter, unless they are already public. By
+   definition if an issue cannot be reproduced, it is not exploitable, thus it
+   is not a security bug.
+
+ * **conditions**: if the bug depends on certain configuration options,
+   sysctls, permissions, timing, code modifications etc, these should be
+   indicated.
+
+ In addition, the following information are highly desirable:
+
+ * **suspected location of the bug**: the file names and functions where the
+   bug is suspected to be present are very important, at least to help forward
+   the report to the appropriate maintainers. When not possible (for example,
+   "system freezes each time I run this command"), the security team will help
+   identify the source of the bug.
+
+ * **a proposed fix**: bug reporters who have analyzed the cause of a bug in
+   the source code almost always have an accurate idea on how to fix it,
+   because they spent a long time studying it and its implications. Proposing
+   a tested fix will save maintainers a lot of time, even if the fix ends up
+   not being the right one, because it helps understand the bug. When
+   proposing a tested fix, please always format it in a way that can be
+   immediately merged (see Documentation/process/submitting-patches.rst).
+   This will save some back-and-forth exchanges if it is accepted, and you
+   will be credited for finding and fixing this issue. Note that in this case
+   only a ``Signed-off-by:`` tag is needed, without ``Reported-by:`` when the
+   reporter and author are the same.
+
+ * **mitigations**: very often during a bug analysis, some ways of mitigating
+   the issue appear. It is useful to share them, as they can be helpful to
+   keep end users protected during the time it takes them to apply the fix.
+
+ Identifying contacts
+ --------------------
+
+ The most effective way to report a security bug is to send it directly to the
+ affected subsystem's maintainers and Cc: the Linux kernel security team. Do
+ not send it to a public list at this stage, unless you have good reasons to
+ consider the issue as being public or trivial to discover (e.g. result of a
+ widely available automated vulnerability scanning tool that can be repeated by
+ anyone).
+
+ If you're sending a report for issues affecting multiple parts in the kernel,
+ even if they're fairly similar issues, please send individual messages (think
+ that maintainers will not all work on the issues at the same time). The only
+ exception is when an issue concerns closely related parts maintained by the
+ exact same subset of maintainers, and these parts are expected to be fixed all
+ at once by the same commit, then it may be acceptable to report them at once.
+
+ One difficulty for most first-time reporters is to figure the right list of
+ recipients to send a report to. In the Linux kernel, all official maintainers
+ are trusted, so the consequences of accidentally including the wrong maintainer
+ are essentially a bit more noise for that person, i.e. nothing dramatic. As
+ such, a suitable method to figure the list of maintainers (which kernel
+ security officers use) is to rely on the get_maintainer.pl script, tuned to
+ only report maintainers. This script, when passed a file name, will look for
+ its path in the MAINTAINERS file to figure a hierarchical list of relevant
+ maintainers. Calling it a first time with the finest level of filtering will
+ most of the time return a short list of this specific file's maintainers::
+
+   $ ./scripts/get_maintainer.pl --no-l --no-r --pattern-depth 1 \
+         drivers/example.c
+   Developer One <dev1@example.com> (maintainer:example driver)
+   Developer Two <dev2@example.org> (maintainer:example driver)
+
+ These two maintainers should then receive the message. If the command does not
+ return anything, it means the affected file is part of a wider subsystem, so we
+ should be less specific::
+
+   $ ./scripts/get_maintainer.pl --no-l --no-r drivers/example.c
+   Developer One <dev1@example.com> (maintainer:example subsystem)
+   Developer Two <dev2@example.org> (maintainer:example subsystem)
+   Developer Three <dev3@example.com> (maintainer:example subsystem [GENERAL])
+   Developer Four <dev4@example.org> (maintainer:example subsystem [GENERAL])
+
+ Here, picking the first, most specific ones, is sufficient. When the list is
+ long, it is possible to produce a comma-delimited e-mail address list on a
+ single line suitable for use in the To: field of a mailer like this::
+
+   $ ./scripts/get_maintainer.pl --no-tree --no-l --no-r --no-n --m \
+         --no-git-fallback --no-substatus --no-rolestats --no-multiline \
+         --pattern-depth 1 drivers/example.c
+   dev1@example.com, dev2@example.org
+
+ or this for the wider list::
+
+   $ ./scripts/get_maintainer.pl --no-tree --no-l --no-r --no-n --m \
+         --no-git-fallback --no-substatus --no-rolestats --no-multiline \
+         drivers/example.c
+   dev1@example.com, dev2@example.org, dev3@example.com, dev4@example.org
+
+ If at this point you're still facing difficulties spotting the right
+ maintainers, **and only in this case**, it's possible to send your report to
+ the Linux kernel security team only. Your message will be triaged, and you
+ will receive instructions about whom to contact, if needed. Your message may
+ equally be forwarded as-is to the relevant maintainers.
+
+ Sending the report
+ ------------------
+
+ Reports are to be sent over e-mail exclusively. Please use a working e-mail
+ address, preferably the same that you want to appear in ``Reported-by`` tags
+ if any. If unsure, send your report to yourself first.
 
  The security team and maintainers almost always require additional
  information beyond what was initially provided in a report and rely on
···
  or cannot effectively discuss their findings may be abandoned if the
  communication does not quickly improve.
 
- As it is with any bug, the more information provided the easier it
- will be to diagnose and fix. Please review the procedure outlined in
- 'Documentation/admin-guide/reporting-issues.rst' if you are unclear about what
- information is helpful. Any exploit code is very helpful and will not
- be released without consent from the reporter unless it has already been
- made public.
-
+ The report must be sent to maintainers, with the security team in ``Cc:``.
  The Linux kernel security team can be contacted by email at
  <security@kernel.org>. This is a private list of security officers
- who will help verify the bug report and develop and release a fix.
- If you already have a fix, please include it with your report, as
- that can speed up the process considerably. It is possible that the
- security team will bring in extra help from area maintainers to
- understand and fix the security vulnerability.
+ who will help verify the bug report and assist developers working on a fix.
+ It is possible that the security team will bring in extra help from area
+ maintainers to understand and fix the security vulnerability.
 
  Please send **plain text** emails without attachments where possible.
  It is much harder to have a context-quoted discussion about a complex
···
  Markdown, HTML and RST formatted reports are particularly frowned upon since
  they're quite hard to read for humans and encourage to use dedicated viewers,
  sometimes online, which by definition is not acceptable for a confidential
- security report.
+ security report. Note that some mailers tend to mangle formatting of plain
+ text by default, please consult Documentation/process/email-clients.rst for
+ more info.
 
  Disclosure and embargoed information
  ------------------------------------
+5 -3
drivers/android/binder/page_range.rs
···
  //
  // The shrinker will use trylock methods because it locks them in a different order.
 
+ use crate::AssertSync;
+
  use core::{
      marker::PhantomPinned,
      mem::{size_of, size_of_val, MaybeUninit},
···
  }
 
  // We do not define any ops. For now, used only to check identity of vmas.
- static BINDER_VM_OPS: bindings::vm_operations_struct = pin_init::zeroed();
+ static BINDER_VM_OPS: AssertSync<bindings::vm_operations_struct> = AssertSync(pin_init::zeroed());
 
  // To ensure that we do not accidentally install pages into or zap pages from the wrong vma, we
  // check its vm_ops and private data before using it.
  fn check_vma(vma: &virt::VmaRef, owner: *const ShrinkablePageRange) -> Option<&virt::VmaMixedMap> {
      // SAFETY: Just reading the vm_ops pointer of any active vma is safe.
      let vm_ops = unsafe { (*vma.as_ptr()).vm_ops };
-     if !ptr::eq(vm_ops, &BINDER_VM_OPS) {
+     if !ptr::eq(vm_ops, &BINDER_VM_OPS.0) {
          return None;
      }
···
 
      // SAFETY: We own the vma, and we don't use any methods on VmaNew that rely on
      // `vm_ops`.
-     unsafe { (*vma.as_ptr()).vm_ops = &BINDER_VM_OPS };
+     unsafe { (*vma.as_ptr()).vm_ops = &BINDER_VM_OPS.0 };
 
      Ok(num_pages)
  }
+1 -1
drivers/android/binder/rust_binder_main.rs
···
  /// Makes the inner type Sync.
  #[repr(transparent)]
  pub struct AssertSync<T>(T);
- // SAFETY: Used only to insert `file_operations` into a global, which is safe.
+ // SAFETY: Used only to insert C bindings types into globals, which is safe.
  unsafe impl<T> Sync for AssertSync<T> {}
 
  /// File operations that rust_binderfs.c can use.
+5 -3
drivers/comedi/comedi_fops.c
···
      __comedi_clear_subdevice_runflags(s, COMEDI_SRF_RUNNING |
                                           COMEDI_SRF_BUSY);
      spin_unlock_irqrestore(&s->spin_lock, flags);
-     if (comedi_is_runflags_busy(runflags)) {
+     if (async) {
          /*
           * "Run active" counter was set to 1 when setting up the
           * command. Decrement it and wait for it to become 0.
           */
-         comedi_put_is_subdevice_running(s);
-         wait_for_completion(&async->run_complete);
+         if (comedi_is_runflags_busy(runflags)) {
+             comedi_put_is_subdevice_running(s);
+             wait_for_completion(&async->run_complete);
+         }
          comedi_buf_reset(s);
          async->inttrig = NULL;
          kfree(async->cmd.chanlist);
+8
drivers/comedi/drivers.c
···
          ret = -EIO;
          goto out;
      }
+     if (IS_ENABLED(CONFIG_LOCKDEP)) {
+         /*
+          * dev->spinlock is for private use by the attached low-level
+          * driver. Reinitialize it to stop lock-dependency tracking
+          * between attachments to different low-level drivers.
+          */
+         spin_lock_init(&dev->spinlock);
+     }
      dev->driver = driv;
      dev->board_name = dev->board_ptr ? *(const char **)dev->board_ptr
                                       : dev->driver->driver_name;
+12
drivers/comedi/drivers/dt2815.c
···
              ? current_range_type : voltage_range_type;
      }
 
+     /*
+      * Check if hardware is present before attempting any I/O operations.
+      * Reading 0xff from status register typically indicates no hardware
+      * on the bus (floating bus reads as all 1s).
+      */
+     if (inb(dev->iobase + DT2815_STATUS) == 0xff) {
+         dev_err(dev->class_dev,
+                 "No hardware detected at I/O base 0x%lx\n",
+                 dev->iobase);
+         return -ENODEV;
+     }
+
      /* Init the 2815 */
      outb(0x00, dev->iobase + DT2815_STATUS);
      for (i = 0; i < 100; i++) {
+12 -4
drivers/comedi/drivers/me4000.c
···
      unsigned int val;
      unsigned int i;
 
+     /* Get data stream length from header. */
+     if (size >= 4) {
+         file_length = (((unsigned int)data[0] & 0xff) << 24) +
+                       (((unsigned int)data[1] & 0xff) << 16) +
+                       (((unsigned int)data[2] & 0xff) << 8) +
+                       ((unsigned int)data[3] & 0xff);
+     }
+     if (size < 16 || file_length > size - 16) {
+         dev_err(dev->class_dev, "Firmware length inconsistency\n");
+         return -EINVAL;
+     }
+
      if (!xilinx_iobase)
          return -ENODEV;
···
      outl(val, devpriv->plx_regbase + PLX9052_CNTRL);
 
      /* Download Xilinx firmware */
-     file_length = (((unsigned int)data[0] & 0xff) << 24) +
-                   (((unsigned int)data[1] & 0xff) << 16) +
-                   (((unsigned int)data[2] & 0xff) << 8) +
-                   ((unsigned int)data[3] & 0xff);
      usleep_range(10, 1000);
 
      for (i = 0; i < file_length; i++) {
+19 -16
drivers/comedi/drivers/me_daq.c
···
      unsigned int file_length;
      unsigned int i;
 
+     /*
+      * Format of the firmware
+      * Build longs from the byte-wise coded header
+      * Byte 1-3: length of the array
+      * Byte 4-7: version
+      * Byte 8-11: date
+      * Byte 12-15: reserved
+      */
+     if (size >= 4) {
+         file_length = (((unsigned int)data[0] & 0xff) << 24) +
+                       (((unsigned int)data[1] & 0xff) << 16) +
+                       (((unsigned int)data[2] & 0xff) << 8) +
+                       ((unsigned int)data[3] & 0xff);
+     }
+     if (size < 16 || file_length > size - 16) {
+         dev_err(dev->class_dev, "Firmware length inconsistency\n");
+         return -EINVAL;
+     }
+
      /* disable irq's on PLX */
      writel(0x00, devpriv->plx_regbase + PLX9052_INTCSR);
···
      /* Write a dummy value to Xilinx */
      writeb(0x00, dev->mmio + 0x0);
      sleep(1);
-
-     /*
-      * Format of the firmware
-      * Build longs from the byte-wise coded header
-      * Byte 1-3: length of the array
-      * Byte 4-7: version
-      * Byte 8-11: date
-      * Byte 12-15: reserved
-      */
-     if (size < 16)
-         return -EINVAL;
-
-     file_length = (((unsigned int)data[0] & 0xff) << 24) +
-                   (((unsigned int)data[1] & 0xff) << 16) +
-                   (((unsigned int)data[2] & 0xff) << 8) +
-                   ((unsigned int)data[3] & 0xff);
 
      /*
       * Loop for writing firmware byte by byte to xilinx
+2 -1
drivers/comedi/drivers/ni_atmio16d.c
···
 
  static void atmio16d_detach(struct comedi_device *dev)
  {
-     reset_atmio16d(dev);
+     if (dev->private)
+         reset_atmio16d(dev);
      comedi_legacy_detach(dev);
  }
 
+35 -32
drivers/counter/rz-mtu3-cnt.c
···
      struct rz_mtu3_cnt *const priv = counter_priv(counter);
      unsigned long tmdr;
 
-     pm_runtime_get_sync(priv->ch->dev);
+     pm_runtime_get_sync(counter->parent);
      tmdr = rz_mtu3_shared_reg_read(priv->ch, RZ_MTU3_TMDR3);
-     pm_runtime_put(priv->ch->dev);
+     pm_runtime_put(counter->parent);
 
      if (id == RZ_MTU3_32_BIT_CH && test_bit(RZ_MTU3_TMDR3_LWA, &tmdr))
          return false;
···
      if (ret)
          return ret;
 
-     pm_runtime_get_sync(ch->dev);
+     pm_runtime_get_sync(counter->parent);
      if (count->id == RZ_MTU3_32_BIT_CH)
          *val = rz_mtu3_32bit_ch_read(ch, RZ_MTU3_TCNTLW);
      else
          *val = rz_mtu3_16bit_ch_read(ch, RZ_MTU3_TCNT);
-     pm_runtime_put(ch->dev);
+     pm_runtime_put(counter->parent);
      mutex_unlock(&priv->lock);
 
      return 0;
···
      if (ret)
          return ret;
 
-     pm_runtime_get_sync(ch->dev);
+     pm_runtime_get_sync(counter->parent);
      if (count->id == RZ_MTU3_32_BIT_CH)
          rz_mtu3_32bit_ch_write(ch, RZ_MTU3_TCNTLW, val);
      else
          rz_mtu3_16bit_ch_write(ch, RZ_MTU3_TCNT, val);
-     pm_runtime_put(ch->dev);
+     pm_runtime_put(counter->parent);
      mutex_unlock(&priv->lock);
 
      return 0;
  }
 
  static int rz_mtu3_count_function_read_helper(struct rz_mtu3_channel *const ch,
-                                               struct rz_mtu3_cnt *const priv,
+                                               struct counter_device *const counter,
                                                enum counter_function *function)
  {
      u8 timer_mode;
 
-     pm_runtime_get_sync(ch->dev);
+     pm_runtime_get_sync(counter->parent);
      timer_mode = rz_mtu3_8bit_ch_read(ch, RZ_MTU3_TMDR1);
-     pm_runtime_put(ch->dev);
+     pm_runtime_put(counter->parent);
 
      switch (timer_mode & RZ_MTU3_TMDR1_PH_CNT_MODE_MASK) {
      case RZ_MTU3_TMDR1_PH_CNT_MODE_1:
···
      if (ret)
          return ret;
 
-     ret = rz_mtu3_count_function_read_helper(ch, priv, function);
+     ret = rz_mtu3_count_function_read_helper(ch, counter, function);
      mutex_unlock(&priv->lock);
 
      return ret;
···
          return -EINVAL;
      }
 
-     pm_runtime_get_sync(ch->dev);
+     pm_runtime_get_sync(counter->parent);
      rz_mtu3_8bit_ch_write(ch, RZ_MTU3_TMDR1, timer_mode);
-     pm_runtime_put(ch->dev);
+     pm_runtime_put(counter->parent);
      mutex_unlock(&priv->lock);
 
      return 0;
···
      if (ret)
          return ret;
 
-     pm_runtime_get_sync(ch->dev);
+     pm_runtime_get_sync(counter->parent);
      tsr = rz_mtu3_8bit_ch_read(ch, RZ_MTU3_TSR);
-     pm_runtime_put(ch->dev);
+     pm_runtime_put(counter->parent);
 
      *direction = (tsr & RZ_MTU3_TSR_TCFD) ?
          COUNTER_COUNT_DIRECTION_FORWARD : COUNTER_COUNT_DIRECTION_BACKWARD;
···
          return -EINVAL;
      }
 
-     pm_runtime_get_sync(ch->dev);
+     pm_runtime_get_sync(counter->parent);
      if (count->id == RZ_MTU3_32_BIT_CH)
          rz_mtu3_32bit_ch_write(ch, RZ_MTU3_TGRALW, ceiling);
      else
          rz_mtu3_16bit_ch_write(ch, RZ_MTU3_TGRA, ceiling);
 
      rz_mtu3_8bit_ch_write(ch, RZ_MTU3_TCR, RZ_MTU3_TCR_CCLR_TGRA);
-     pm_runtime_put(ch->dev);
+     pm_runtime_put(counter->parent);
      mutex_unlock(&priv->lock);
 
      return 0;
···
  static int rz_mtu3_count_enable_write(struct counter_device *counter,
                                        struct counter_count *count, u8 enable)
  {
-     struct rz_mtu3_channel *const ch = rz_mtu3_get_ch(counter, count->id);
      struct rz_mtu3_cnt *const priv = counter_priv(counter);
      int ret = 0;
 
+     mutex_lock(&priv->lock);
+
+     if (priv->count_is_enabled[count->id] == enable)
+         goto exit;
+
      if (enable) {
-         mutex_lock(&priv->lock);
-         pm_runtime_get_sync(ch->dev);
+         pm_runtime_get_sync(counter->parent);
          ret = rz_mtu3_initialize_counter(counter, count->id);
          if (ret == 0)
              priv->count_is_enabled[count->id] = true;
-         mutex_unlock(&priv->lock);
      } else {
-         mutex_lock(&priv->lock);
          rz_mtu3_terminate_counter(counter, count->id);
          priv->count_is_enabled[count->id] = false;
-         pm_runtime_put(ch->dev);
-         mutex_unlock(&priv->lock);
+         pm_runtime_put(counter->parent);
      }
+
+ exit:
+     mutex_unlock(&priv->lock);
 
      return ret;
  }
···
      if (ret)
          return ret;
 
-     pm_runtime_get_sync(priv->ch->dev);
+     pm_runtime_get_sync(counter->parent);
      tmdr = rz_mtu3_shared_reg_read(priv->ch, RZ_MTU3_TMDR3);
-     pm_runtime_put(priv->ch->dev);
+     pm_runtime_put(counter->parent);
      *cascade_enable = test_bit(RZ_MTU3_TMDR3_LWA, &tmdr);
      mutex_unlock(&priv->lock);
 
···
      if (ret)
          return ret;
 
-     pm_runtime_get_sync(priv->ch->dev);
+     pm_runtime_get_sync(counter->parent);
      rz_mtu3_shared_reg_update_bit(priv->ch, RZ_MTU3_TMDR3,
                                    RZ_MTU3_TMDR3_LWA, cascade_enable);
-     pm_runtime_put(priv->ch->dev);
+     pm_runtime_put(counter->parent);
      mutex_unlock(&priv->lock);
 
      return 0;
···
      if (ret)
          return ret;
 
-     pm_runtime_get_sync(priv->ch->dev);
+     pm_runtime_get_sync(counter->parent);
      tmdr = rz_mtu3_shared_reg_read(priv->ch, RZ_MTU3_TMDR3);
-     pm_runtime_put(priv->ch->dev);
+     pm_runtime_put(counter->parent);
      *ext_input_phase_clock_select = test_bit(RZ_MTU3_TMDR3_PHCKSEL, &tmdr);
      mutex_unlock(&priv->lock);
 
···
      if (ret)
          return ret;
 
-     pm_runtime_get_sync(priv->ch->dev);
+     pm_runtime_get_sync(counter->parent);
      rz_mtu3_shared_reg_update_bit(priv->ch, RZ_MTU3_TMDR3,
                                    RZ_MTU3_TMDR3_PHCKSEL,
                                    ext_input_phase_clock_select);
-     pm_runtime_put(priv->ch->dev);
+     pm_runtime_put(counter->parent);
      mutex_unlock(&priv->lock);
 
      return 0;
···
      if (ret)
          return ret;
 
-     ret = rz_mtu3_count_function_read_helper(ch, priv, &function);
+     ret = rz_mtu3_count_function_read_helper(ch, counter, &function);
      if (ret) {
          mutex_unlock(&priv->lock);
          return ret;
+1
drivers/gpib/Kconfig
···
      depends on OF
      select GPIB_COMMON
      select GPIB_NEC7210
+     depends on HAS_IOMEM
      help
        GPIB driver for Fluke based cda devices.
 
+73 -23
drivers/gpib/common/gpib_os.c
···
      if (read_cmd.completed_transfer_count > read_cmd.requested_transfer_count)
          return -EINVAL;
 
-     desc = handle_to_descriptor(file_priv, read_cmd.handle);
-     if (!desc)
-         return -EINVAL;
-
      if (WARN_ON_ONCE(sizeof(userbuf) > sizeof(read_cmd.buffer_ptr)))
          return -EFAULT;
···
      /* Check write access to buffer */
      if (!access_ok(userbuf, remain))
          return -EFAULT;
+
+     /* Lock descriptors to prevent concurrent close from freeing descriptor */
+     if (mutex_lock_interruptible(&file_priv->descriptors_mutex))
+         return -ERESTARTSYS;
+     desc = handle_to_descriptor(file_priv, read_cmd.handle);
+     if (!desc) {
+         mutex_unlock(&file_priv->descriptors_mutex);
+         return -EINVAL;
+     }
+     atomic_inc(&desc->descriptor_busy);
+     mutex_unlock(&file_priv->descriptors_mutex);
 
      atomic_set(&desc->io_in_progress, 1);
···
      retval = copy_to_user((void __user *)arg, &read_cmd, sizeof(read_cmd));
 
      atomic_set(&desc->io_in_progress, 0);
+     atomic_dec(&desc->descriptor_busy);
 
      wake_up_interruptible(&board->wait);
      if (retval)
···
      if (cmd.completed_transfer_count > cmd.requested_transfer_count)
          return -EINVAL;
 
-     desc = handle_to_descriptor(file_priv, cmd.handle);
-     if (!desc)
-         return -EINVAL;
-
      userbuf = (u8 __user *)(unsigned long)cmd.buffer_ptr;
      userbuf += cmd.completed_transfer_count;
···
      /* Check read access to buffer */
      if (!access_ok(userbuf, remain))
          return -EFAULT;
+
+     /* Lock descriptors to prevent concurrent close from freeing descriptor */
+     if (mutex_lock_interruptible(&file_priv->descriptors_mutex))
+         return -ERESTARTSYS;
+     desc = handle_to_descriptor(file_priv, cmd.handle);
+     if (!desc) {
+         mutex_unlock(&file_priv->descriptors_mutex);
+         return -EINVAL;
+     }
+     atomic_inc(&desc->descriptor_busy);
+     mutex_unlock(&file_priv->descriptors_mutex);
 
      /*
       * Write buffer loads till we empty the user supplied buffer.
···
          userbuf += bytes_written;
          if (retval < 0) {
              atomic_set(&desc->io_in_progress, 0);
+             atomic_dec(&desc->descriptor_busy);
 
              wake_up_interruptible(&board->wait);
              break;
···
       */
      if (!no_clear_io_in_prog || fault)
          atomic_set(&desc->io_in_progress, 0);
+     atomic_dec(&desc->descriptor_busy);
 
      wake_up_interruptible(&board->wait);
      if (fault)
···
      if (write_cmd.completed_transfer_count > write_cmd.requested_transfer_count)
          return -EINVAL;
 
-     desc = handle_to_descriptor(file_priv, write_cmd.handle);
-     if (!desc)
-         return -EINVAL;
-
      userbuf = (u8 __user *)(unsigned long)write_cmd.buffer_ptr;
      userbuf += write_cmd.completed_transfer_count;
···
      /* Check read access to buffer */
      if (!access_ok(userbuf, remain))
          return -EFAULT;
+
+     /* Lock descriptors to prevent concurrent close from freeing descriptor */
+     if (mutex_lock_interruptible(&file_priv->descriptors_mutex))
+         return -ERESTARTSYS;
+     desc = handle_to_descriptor(file_priv, write_cmd.handle);
+     if (!desc) {
+         mutex_unlock(&file_priv->descriptors_mutex);
+         return -EINVAL;
+     }
+     atomic_inc(&desc->descriptor_busy);
+     mutex_unlock(&file_priv->descriptors_mutex);
 
      atomic_set(&desc->io_in_progress, 1);
···
      fault = copy_to_user((void __user *)arg, &write_cmd, sizeof(write_cmd));
 
      atomic_set(&desc->io_in_progress, 0);
+     atomic_dec(&desc->descriptor_busy);
 
      wake_up_interruptible(&board->wait);
      if (fault)
···
  {
      struct gpib_close_dev_ioctl cmd;
      struct gpib_file_private *file_priv = filep->private_data;
+     struct gpib_descriptor *desc;
+     unsigned int pad;
+     int sad;
      int retval;
 
      retval = copy_from_user(&cmd, (void __user *)arg, sizeof(cmd));
···
 
      if (cmd.handle >= GPIB_MAX_NUM_DESCRIPTORS)
          return -EINVAL;
-     if (!file_priv->descriptors[cmd.handle])
+
+     mutex_lock(&file_priv->descriptors_mutex);
+     desc = file_priv->descriptors[cmd.handle];
+     if (!desc) {
+         mutex_unlock(&file_priv->descriptors_mutex);
          return -EINVAL;
-
-     retval = decrement_open_device_count(board, &board->device_list,
-                                          file_priv->descriptors[cmd.handle]->pad,
-                                          file_priv->descriptors[cmd.handle]->sad);
-     if (retval < 0)
-         return retval;
-
-     kfree(file_priv->descriptors[cmd.handle]);
+     }
+     if (atomic_read(&desc->descriptor_busy)) {
+         mutex_unlock(&file_priv->descriptors_mutex);
+         return -EBUSY;
+     }
+     /* Remove from table while holding lock to prevent new IO from starting */
      file_priv->descriptors[cmd.handle] = NULL;
+     pad = desc->pad;
+     sad = desc->sad;
+     mutex_unlock(&file_priv->descriptors_mutex);
 
-     return 0;
+     retval = decrement_open_device_count(board, &board->device_list, pad, sad);
+
+     kfree(desc);
+     return retval;
  }
 
  static int serial_poll_ioctl(struct gpib_board *board, unsigned long arg)
···
      if (retval)
          return -EFAULT;
 
+     /*
+      * Lock descriptors to prevent concurrent close from freeing
+      * descriptor. ibwait() releases big_gpib_mutex when wait_mask
+      * is non-zero, so desc must be pinned with descriptor_busy.
+      */
+     mutex_lock(&file_priv->descriptors_mutex);
      desc = handle_to_descriptor(file_priv, wait_cmd.handle);
-     if (!desc)
+     if (!desc) {
+         mutex_unlock(&file_priv->descriptors_mutex);
          return -EINVAL;
+     }
+     atomic_inc(&desc->descriptor_busy);
+     mutex_unlock(&file_priv->descriptors_mutex);
 
      retval = ibwait(board, wait_cmd.wait_mask, wait_cmd.clear_mask,
                      wait_cmd.set_mask, &wait_cmd.ibsta, wait_cmd.usec_timeout, desc);
+
+     atomic_dec(&desc->descriptor_busy);
+
      if (retval < 0)
          return retval;
···
      desc->is_board = 0;
      desc->autopoll_enabled = 0;
      atomic_set(&desc->io_in_progress, 0);
+     atomic_set(&desc->descriptor_busy, 0);
  }
 
  int gpib_register_driver(struct gpib_interface *interface, struct module *provider_module)
+8
drivers/gpib/include/gpib_types.h
···
 	unsigned int pad;	/* primary gpib address */
 	int sad;	/* secondary gpib address (negative means disabled) */
 	atomic_t io_in_progress;
+	/*
+	 * Kernel-only reference count to prevent descriptor from being
+	 * freed while IO handlers hold a pointer to it. Incremented
+	 * before each IO operation, decremented when done. Unlike
+	 * io_in_progress, this cannot be modified from userspace via
+	 * general_ibstatus().
+	 */
+	atomic_t descriptor_busy;
 	unsigned is_board : 1;
 	unsigned autopoll_enabled : 1;
 };
+2 -2
drivers/gpib/lpvo_usb_gpib/lpvo_usb_gpib.c
···
 	for (j = 0 ; j < MAX_DEV ; j++) {
 		if ((assigned_usb_minors & 1 << j) == 0)
 			continue;
-		udev = usb_get_dev(interface_to_usbdev(lpvo_usb_interfaces[j]));
+		udev = interface_to_usbdev(lpvo_usb_interfaces[j]);
 		device_path = kobject_get_path(&udev->dev.kobj, GFP_KERNEL);
 		match = gpib_match_device_path(&lpvo_usb_interfaces[j]->dev,
 					       config->device_path);
···
 	for (j = 0 ; j < MAX_DEV ; j++) {
 		if ((assigned_usb_minors & 1 << j) == 0)
 			continue;
-		udev = usb_get_dev(interface_to_usbdev(lpvo_usb_interfaces[j]));
+		udev = interface_to_usbdev(lpvo_usb_interfaces[j]);
 		DIA_LOG(1, "dev. %d: bus %d -> %d dev: %d -> %d\n", j,
 			udev->bus->busnum, config->pci_bus, udev->devnum, config->pci_slot);
 		if (config->pci_bus == udev->bus->busnum &&
+2
drivers/iio/accel/adxl313_core.c
···
 
 	ret = regmap_write(data->regmap, ADXL313_REG_FIFO_CTL,
 			   FIELD_PREP(ADXL313_REG_FIFO_CTL_MODE_MSK, ADXL313_FIFO_BYPASS));
+	if (ret)
+		return ret;
 
 	ret = regmap_write(data->regmap, ADXL313_REG_INT_ENABLE, 0);
 	if (ret)
+1 -1
drivers/iio/accel/adxl355_core.c
···
 			      BIT(IIO_CHAN_INFO_OFFSET),
 	.scan_index = 3,
 	.scan_type = {
-		.sign = 's',
+		.sign = 'u',
 		.realbits = 12,
 		.storagebits = 16,
 		.endianness = IIO_BE,
+1 -1
drivers/iio/accel/adxl380.c
···
 	ret = regmap_update_bits(st->regmap, ADXL380_FIFO_CONFIG_0_REG,
 				 ADXL380_FIFO_SAMPLES_8_MSK,
 				 FIELD_PREP(ADXL380_FIFO_SAMPLES_8_MSK,
-					    (fifo_samples & BIT(8))));
+					    !!(fifo_samples & BIT(8))));
 	if (ret)
 		return ret;
 
+3 -5
drivers/iio/adc/ad4062.c
···
 	}
 	st->gpo_irq[1] = true;
 
-	return devm_request_threaded_irq(dev, ret,
-					 ad4062_irq_handler_drdy,
-					 NULL, IRQF_ONESHOT, indio_dev->name,
-					 indio_dev);
+	return devm_request_irq(dev, ret, ad4062_irq_handler_drdy,
+				IRQF_NO_THREAD, indio_dev->name, indio_dev);
 }
 
 static const struct iio_trigger_ops ad4062_trigger_ops = {
···
 	default:
 		return -EINVAL;
 	}
-};
+}
 
 static int ad4062_write_raw(struct iio_dev *indio_dev,
 			    struct iio_chan_spec const *chan, int val,
+6 -6
drivers/iio/adc/ade9000.c
···
 			   ADE9000_MIDDLE_PAGE_BIT);
 	if (ret) {
 		dev_err_ratelimited(dev, "IRQ0 WFB write fail");
-		return IRQ_HANDLED;
+		return ret;
 	}
 
 	ade9000_configure_scan(indio_dev, ADE9000_REG_WF_BUFF);
···
 	tmp &= ~ADE9000_PHASE_C_POS_BIT;
 
 	switch (tmp) {
-	case ADE9000_REG_AWATTOS:
+	case ADE9000_REG_AWATT:
 		return regmap_write(st->regmap,
 				    ADE9000_ADDR_ADJUST(ADE9000_REG_AWATTOS,
 							chan->channel), val);
···
 
 	init_completion(&st->reset_completion);
 
+	ret = devm_mutex_init(dev, &st->lock);
+	if (ret)
+		return ret;
+
 	ret = ade9000_request_irq(dev, "irq0", ade9000_irq0_thread, indio_dev);
 	if (ret)
 		return ret;
···
 		return ret;
 
 	ret = ade9000_request_irq(dev, "dready", ade9000_dready_thread, indio_dev);
-	if (ret)
-		return ret;
-
-	ret = devm_mutex_init(dev, &st->lock);
 	if (ret)
 		return ret;
 
+1
drivers/iio/adc/aspeed_adc.c
···
 	}
 	adc_engine_control_reg_val =
 		readl(data->base + ASPEED_REG_ENGINE_CONTROL);
+	adc_engine_control_reg_val &= ~ASPEED_ADC_REF_VOLTAGE;
 
 	ret = devm_regulator_get_enable_read_voltage(data->dev, "vref");
 	if (ret < 0 && ret != -ENODEV)
+5 -4
drivers/iio/adc/nxp-sar-adc.c
···
 	struct nxp_sar_adc *info = iio_priv(indio_dev);
 	int ret;
 
+	info->dma_chan = dma_request_chan(indio_dev->dev.parent, "rx");
+	if (IS_ERR(info->dma_chan))
+		return PTR_ERR(info->dma_chan);
+
 	nxp_sar_adc_dma_channels_enable(info, *indio_dev->active_scan_mask);
 
 	nxp_sar_adc_dma_cfg(info, true);
···
 out_dma_channels_disable:
 	nxp_sar_adc_dma_cfg(info, false);
 	nxp_sar_adc_dma_channels_disable(info, *indio_dev->active_scan_mask);
+	dma_release_channel(info->dma_chan);
 
 	return ret;
 }
···
 	int current_mode = iio_device_get_current_mode(indio_dev);
 	unsigned long channel;
 	int ret;
-
-	info->dma_chan = dma_request_chan(indio_dev->dev.parent, "rx");
-	if (IS_ERR(info->dma_chan))
-		return PTR_ERR(info->dma_chan);
 
 	info->channels_used = 0;
 
+20 -21
drivers/iio/adc/ti-adc161s626.c
···
 #include <linux/init.h>
 #include <linux/err.h>
 #include <linux/spi/spi.h>
+#include <linux/unaligned.h>
 #include <linux/iio/iio.h>
 #include <linux/iio/trigger.h>
 #include <linux/iio/buffer.h>
···
 
 	u8 read_size;
 	u8 shift;
-
-	u8 buffer[16] __aligned(IIO_DMA_MINALIGN);
+	u8 buf[3] __aligned(IIO_DMA_MINALIGN);
 };
 
 static int ti_adc_read_measurement(struct ti_adc_data *data,
···
 	int ret;
 
 	switch (data->read_size) {
-	case 2: {
-		__be16 buf;
-
-		ret = spi_read(data->spi, (void *) &buf, 2);
+	case 2:
+		ret = spi_read(data->spi, data->buf, 2);
 		if (ret)
 			return ret;
 
-		*val = be16_to_cpu(buf);
+		*val = get_unaligned_be16(data->buf);
 		break;
-	}
-	case 3: {
-		__be32 buf;
-
-		ret = spi_read(data->spi, (void *) &buf, 3);
+	case 3:
+		ret = spi_read(data->spi, data->buf, 3);
 		if (ret)
 			return ret;
 
-		*val = be32_to_cpu(buf) >> 8;
+		*val = get_unaligned_be24(data->buf);
 		break;
-	}
 	default:
 		return -EINVAL;
 	}
···
 	struct iio_poll_func *pf = private;
 	struct iio_dev *indio_dev = pf->indio_dev;
 	struct ti_adc_data *data = iio_priv(indio_dev);
-	int ret;
+	struct {
+		s16 data;
+		aligned_s64 timestamp;
+	} scan = { };
+	int ret, val;
 
-	ret = ti_adc_read_measurement(data, &indio_dev->channels[0],
-				      (int *) &data->buffer);
-	if (!ret)
-		iio_push_to_buffers_with_timestamp(indio_dev,
-						   data->buffer,
-						   iio_get_time_ns(indio_dev));
+	ret = ti_adc_read_measurement(data, &indio_dev->channels[0], &val);
+	if (ret)
+		goto exit_notify_done;
 
+	scan.data = val;
+	iio_push_to_buffers_with_timestamp(indio_dev, &scan, iio_get_time_ns(indio_dev));
+
+exit_notify_done:
 	iio_trigger_notify_done(indio_dev->trig);
 
 	return IRQ_HANDLED;
+1 -1
drivers/iio/adc/ti-ads1018.c
···
 				  struct iio_chan_spec const *chan, u16 *cnv)
 {
 	u8 max_drate_mode = ads1018->chip_info->num_data_rate_mode_to_hz - 1;
-	u8 drate = ads1018->chip_info->data_rate_mode_to_hz[max_drate_mode];
+	u32 drate = ads1018->chip_info->data_rate_mode_to_hz[max_drate_mode];
 	u8 pga_mode = ads1018->chan_data[chan->scan_index].pga_mode;
 	struct spi_transfer xfer[2] = {
 		{
+6 -5
drivers/iio/adc/ti-ads1119.c
···
 
 	ret = pm_runtime_resume_and_get(dev);
 	if (ret)
-		goto pdown;
+		return ret;
 
 	ret = ads1119_configure_channel(st, mux, gain, datarate);
 	if (ret)
 		goto pdown;
+
+	if (st->client->irq)
+		reinit_completion(&st->completion);
 
 	ret = i2c_smbus_write_byte(st->client, ADS1119_CMD_START_SYNC);
 	if (ret)
···
 		return dev_err_probe(dev, ret, "Failed to setup IIO buffer\n");
 
 	if (client->irq > 0) {
-		ret = devm_request_threaded_irq(dev, client->irq,
-						ads1119_irq_handler,
-						NULL, IRQF_ONESHOT,
-						"ads1119", indio_dev);
+		ret = devm_request_irq(dev, client->irq, ads1119_irq_handler,
+				       IRQF_NO_THREAD, "ads1119", indio_dev);
 		if (ret)
 			return dev_err_probe(dev, ret,
 					     "Failed to allocate irq\n");
+5 -3
drivers/iio/adc/ti-ads7950.c
···
 static int ti_ads7950_get(struct gpio_chip *chip, unsigned int offset)
 {
 	struct ti_ads7950_state *st = gpiochip_get_data(chip);
+	bool state;
 	int ret;
 
 	mutex_lock(&st->slock);
 
 	/* If set as output, return the output */
 	if (st->gpio_cmd_settings_bitmask & BIT(offset)) {
-		ret = st->cmd_settings_bitmask & BIT(offset);
+		state = st->cmd_settings_bitmask & BIT(offset);
+		ret = 0;
 		goto out;
 	}
 
···
 	if (ret)
 		goto out;
 
-	ret = ((st->single_rx >> 12) & BIT(offset)) ? 1 : 0;
+	state = (st->single_rx >> 12) & BIT(offset);
 
 	/* Revert back to original settings */
 	st->cmd_settings_bitmask &= ~TI_ADS7950_CR_GPIO_DATA;
···
 out:
 	mutex_unlock(&st->slock);
 
-	return ret;
+	return ret ?: state;
 }
 
 static int ti_ads7950_get_direction(struct gpio_chip *chip,
+30 -18
drivers/iio/common/hid-sensors/hid-sensor-trigger.c
···
 #include <linux/iio/triggered_buffer.h>
 #include <linux/iio/trigger_consumer.h>
 #include <linux/iio/sysfs.h>
+#include <linux/iio/kfifo_buf.h>
 #include "hid-sensor-trigger.h"
 
 static ssize_t _hid_sensor_set_report_latency(struct device *dev,
···
 		_hid_sensor_power_state(attrb, true);
 }
 
-static int hid_sensor_data_rdy_trigger_set_state(struct iio_trigger *trig,
-						 bool state)
+static int buffer_postenable(struct iio_dev *indio_dev)
 {
-	return hid_sensor_power_state(iio_trigger_get_drvdata(trig), state);
+	return hid_sensor_power_state(iio_device_get_drvdata(indio_dev), 1);
 }
+
+static int buffer_predisable(struct iio_dev *indio_dev)
+{
+	return hid_sensor_power_state(iio_device_get_drvdata(indio_dev), 0);
+}
+
+static const struct iio_buffer_setup_ops hid_sensor_buffer_ops = {
+	.postenable = buffer_postenable,
+	.predisable = buffer_predisable,
+};
 
 void hid_sensor_remove_trigger(struct iio_dev *indio_dev,
 			       struct hid_sensor_common *attrb)
···
 	cancel_work_sync(&attrb->work);
 	iio_trigger_unregister(attrb->trigger);
 	iio_trigger_free(attrb->trigger);
-	iio_triggered_buffer_cleanup(indio_dev);
 }
 EXPORT_SYMBOL_NS(hid_sensor_remove_trigger, "IIO_HID");
-
-static const struct iio_trigger_ops hid_sensor_trigger_ops = {
-	.set_trigger_state = &hid_sensor_data_rdy_trigger_set_state,
-};
 
 int hid_sensor_setup_trigger(struct iio_dev *indio_dev, const char *name,
 			     struct hid_sensor_common *attrb)
···
 	else
 		fifo_attrs = NULL;
 
-	ret = iio_triggered_buffer_setup_ext(indio_dev,
-					     &iio_pollfunc_store_time, NULL,
-					     IIO_BUFFER_DIRECTION_IN,
-					     NULL, fifo_attrs);
+	indio_dev->modes = INDIO_DIRECT_MODE | INDIO_HARDWARE_TRIGGERED;
+
+	ret = devm_iio_kfifo_buffer_setup_ext(&indio_dev->dev, indio_dev,
+					      &hid_sensor_buffer_ops,
+					      fifo_attrs);
 	if (ret) {
-		dev_err(&indio_dev->dev, "Triggered Buffer Setup Failed\n");
+		dev_err(&indio_dev->dev, "Kfifo Buffer Setup Failed\n");
 		return ret;
 	}
+
+	/*
+	 * The current user space in distro "iio-sensor-proxy" is not working in
+	 * trigerless mode and it expects
+	 * /sys/bus/iio/devices/iio:device0/trigger/current_trigger.
+	 * The change replacing iio_triggered_buffer_setup_ext() with
+	 * devm_iio_kfifo_buffer_setup_ext() will not create attribute without
+	 * registering a trigger with INDIO_HARDWARE_TRIGGERED.
+	 * So the below code fragment is still required.
+	 */
 
 	trig = iio_trigger_alloc(indio_dev->dev.parent,
 				 "%s-dev%d", name, iio_device_id(indio_dev));
 	if (trig == NULL) {
 		dev_err(&indio_dev->dev, "Trigger Allocate Failed\n");
-		ret = -ENOMEM;
-		goto error_triggered_buffer_cleanup;
+		return -ENOMEM;
 	}
 
 	iio_trigger_set_drvdata(trig, attrb);
-	trig->ops = &hid_sensor_trigger_ops;
 	ret = iio_trigger_register(trig);
 
 	if (ret) {
···
 	iio_trigger_unregister(trig);
 error_free_trig:
 	iio_trigger_free(trig);
-error_triggered_buffer_cleanup:
-	iio_triggered_buffer_cleanup(indio_dev);
 	return ret;
 }
 EXPORT_SYMBOL_NS(hid_sensor_setup_trigger, "IIO_HID");
+1 -1
drivers/iio/dac/ad5770r.c
···
 				chan->address,
 				st->transf_buf, 2);
 	if (ret)
-		return 0;
+		return ret;
 
 	buf16 = get_unaligned_le16(st->transf_buf);
 	*val = buf16 >> 2;
+22 -29
drivers/iio/dac/mcp47feb02.c
···
 #define MCP47FEB02_MAX_SCALES_CH	3
 #define MCP47FEB02_DAC_WIPER_UNLOCKED	0
 #define MCP47FEB02_NORMAL_OPERATION	0
-#define MCP47FEB02_INTERNAL_BAND_GAP_mV	2440
+#define MCP47FEB02_INTERNAL_BAND_GAP_uV	2440000
 #define NV_DAC_ADDR_OFFSET	0x10
 
 enum mcp47feb02_vref_mode {
···
 };
 
 static void mcp47feb02_init_scale(struct mcp47feb02_data *data, enum mcp47feb02_scale scale,
-				  int vref_mV, int scale_avail[])
+				  int vref_uV, int scale_avail[])
 {
 	u32 value_micro, value_int;
 	u64 tmp;
 
-	/* vref_mV should not be negative */
-	tmp = (u64)vref_mV * MICRO >> data->chip_features->resolution;
+	/* vref_uV should not be negative */
+	tmp = (u64)vref_uV * MILLI >> data->chip_features->resolution;
 	value_int = div_u64_rem(tmp, MICRO, &value_micro);
 	scale_avail[scale * 2] = value_int;
 	scale_avail[scale * 2 + 1] = value_micro;
 }
 
-static int mcp47feb02_init_scales_avail(struct mcp47feb02_data *data, int vdd_mV,
-					int vref_mV, int vref1_mV)
+static int mcp47feb02_init_scales_avail(struct mcp47feb02_data *data, int vdd_uV,
+					int vref_uV, int vref1_uV)
 {
-	struct device *dev = regmap_get_device(data->regmap);
 	int tmp_vref;
 
-	mcp47feb02_init_scale(data, MCP47FEB02_SCALE_VDD, vdd_mV, data->scale);
+	mcp47feb02_init_scale(data, MCP47FEB02_SCALE_VDD, vdd_uV, data->scale);
 
 	if (data->use_vref)
-		tmp_vref = vref_mV;
+		tmp_vref = vref_uV;
 	else
-		tmp_vref = MCP47FEB02_INTERNAL_BAND_GAP_mV;
+		tmp_vref = MCP47FEB02_INTERNAL_BAND_GAP_uV;
 
 	mcp47feb02_init_scale(data, MCP47FEB02_SCALE_GAIN_X1, tmp_vref, data->scale);
 	mcp47feb02_init_scale(data, MCP47FEB02_SCALE_GAIN_X2, tmp_vref * 2, data->scale);
 
 	if (data->phys_channels >= 4) {
-		mcp47feb02_init_scale(data, MCP47FEB02_SCALE_VDD, vdd_mV, data->scale_1);
-
-		if (data->use_vref1 && vref1_mV <= 0)
-			return dev_err_probe(dev, vref1_mV, "Invalid voltage for Vref1\n");
+		mcp47feb02_init_scale(data, MCP47FEB02_SCALE_VDD, vdd_uV, data->scale_1);
 
 		if (data->use_vref1)
-			tmp_vref = vref1_mV;
+			tmp_vref = vref1_uV;
 		else
-			tmp_vref = MCP47FEB02_INTERNAL_BAND_GAP_mV;
+			tmp_vref = MCP47FEB02_INTERNAL_BAND_GAP_uV;
 
 		mcp47feb02_init_scale(data, MCP47FEB02_SCALE_GAIN_X1,
 				      tmp_vref, data->scale_1);
···
 	u32 num_channels;
 	u8 chan_idx = 0;
 
-	guard(mutex)(&data->lock);
-
 	num_channels = device_get_child_node_count(dev);
 	if (num_channels > chip_features->phys_channels)
 		return dev_err_probe(dev, -EINVAL, "More channels than the chip supports\n");
···
 	return 0;
 }
 
-static int mcp47feb02_init_ch_scales(struct mcp47feb02_data *data, int vdd_mV,
-				     int vref_mV, int vref1_mV)
+static int mcp47feb02_init_ch_scales(struct mcp47feb02_data *data, int vdd_uV,
+				     int vref_uV, int vref1_uV)
 {
 	unsigned int i;
 
···
 		struct device *dev = regmap_get_device(data->regmap);
 		int ret;
 
-		ret = mcp47feb02_init_scales_avail(data, vdd_mV, vref_mV, vref1_mV);
+		ret = mcp47feb02_init_scales_avail(data, vdd_uV, vref_uV, vref1_uV);
 		if (ret)
 			return dev_err_probe(dev, ret, "failed to init scales for ch %u\n", i);
 	}
···
 	struct device *dev = &client->dev;
 	struct mcp47feb02_data *data;
 	struct iio_dev *indio_dev;
-	int vref1_mV = 0;
-	int vref_mV = 0;
-	int vdd_mV;
-	int ret;
+	int vref1_uV, vref_uV, vdd_uV, ret;
 
 	indio_dev = devm_iio_device_alloc(dev, sizeof(*data));
 	if (!indio_dev)
···
 	if (ret < 0)
 		return ret;
 
-	vdd_mV = ret / MILLI;
+	vdd_uV = ret;
 
 	ret = devm_regulator_get_enable_read_voltage(dev, "vref");
 	if (ret > 0) {
-		vref_mV = ret / MILLI;
+		vref_uV = ret;
 		data->use_vref = true;
 	} else {
+		vref_uV = 0;
 		dev_dbg(dev, "using internal band gap as voltage reference.\n");
 		dev_dbg(dev, "Vref is unavailable.\n");
 	}
···
 	if (chip_features->have_ext_vref1) {
 		ret = devm_regulator_get_enable_read_voltage(dev, "vref1");
 		if (ret > 0) {
-			vref1_mV = ret / MILLI;
+			vref1_uV = ret;
 			data->use_vref1 = true;
 		} else {
+			vref1_uV = 0;
 			dev_dbg(dev, "using internal band gap as voltage reference 1.\n");
 			dev_dbg(dev, "Vref1 is unavailable.\n");
 		}
···
 	if (ret)
 		return dev_err_probe(dev, ret, "Error initialising vref register\n");
 
-	ret = mcp47feb02_init_ch_scales(data, vdd_mV, vref_mV, vref1_mV);
+	ret = mcp47feb02_init_ch_scales(data, vdd_uV, vref_uV, vref1_uV);
 	if (ret)
 		return ret;
 
+21 -11
drivers/iio/gyro/mpu3050-core.c
···
 
 	ret = iio_trigger_register(mpu3050->trig);
 	if (ret)
-		return ret;
+		goto err_iio_trigger;
 
 	indio_dev->trig = iio_trigger_get(mpu3050->trig);
 
 	return 0;
+
+err_iio_trigger:
+	free_irq(mpu3050->irq, mpu3050->trig);
+
+	return ret;
 }
 
 int mpu3050_common_probe(struct device *dev,
···
 		goto err_power_down;
 	}
 
-	ret = iio_device_register(indio_dev);
-	if (ret) {
-		dev_err(dev, "device register failed\n");
-		goto err_cleanup_buffer;
-	}
-
 	dev_set_drvdata(dev, indio_dev);
 
 	/* Check if we have an assigned IRQ to use as trigger */
···
 	pm_runtime_use_autosuspend(dev);
 	pm_runtime_put(dev);
 
+	ret = iio_device_register(indio_dev);
+	if (ret) {
+		dev_err(dev, "device register failed\n");
+		goto err_iio_device_register;
+	}
+
 	return 0;
 
-err_cleanup_buffer:
+err_iio_device_register:
+	pm_runtime_get_sync(dev);
+	pm_runtime_put_noidle(dev);
+	pm_runtime_disable(dev);
+	if (irq)
+		free_irq(mpu3050->irq, mpu3050->trig);
 	iio_triggered_buffer_cleanup(indio_dev);
 err_power_down:
 	mpu3050_power_down(mpu3050);
···
 	struct iio_dev *indio_dev = dev_get_drvdata(dev);
 	struct mpu3050 *mpu3050 = iio_priv(indio_dev);
 
+	iio_device_unregister(indio_dev);
 	pm_runtime_get_sync(dev);
 	pm_runtime_put_noidle(dev);
 	pm_runtime_disable(dev);
-	iio_triggered_buffer_cleanup(indio_dev);
 	if (mpu3050->irq)
-		free_irq(mpu3050->irq, mpu3050);
-	iio_device_unregister(indio_dev);
+		free_irq(mpu3050->irq, mpu3050->trig);
+	iio_triggered_buffer_cleanup(indio_dev);
 	mpu3050_power_down(mpu3050);
 }
 
+4 -4
drivers/iio/imu/adis16550.c
···
 	case IIO_CHAN_INFO_LOW_PASS_FILTER_3DB_FREQUENCY:
 		switch (chan->type) {
 		case IIO_ANGL_VEL:
-			ret = adis16550_get_accl_filter_freq(st, val);
+			ret = adis16550_get_gyro_filter_freq(st, val);
 			if (ret)
 				return ret;
 			return IIO_VAL_INT;
 		case IIO_ACCEL:
-			ret = adis16550_get_gyro_filter_freq(st, val);
+			ret = adis16550_get_accl_filter_freq(st, val);
 			if (ret)
 				return ret;
 			return IIO_VAL_INT;
···
 	case IIO_CHAN_INFO_LOW_PASS_FILTER_3DB_FREQUENCY:
 		switch (chan->type) {
 		case IIO_ANGL_VEL:
-			return adis16550_set_accl_filter_freq(st, val);
-		case IIO_ACCEL:
 			return adis16550_set_gyro_filter_freq(st, val);
+		case IIO_ACCEL:
+			return adis16550_set_accl_filter_freq(st, val);
 		default:
 			return -EINVAL;
 		}
+5 -10
drivers/iio/imu/bmi160/bmi160_core.c
···
 		int_out_ctrl_shift = BMI160_INT1_OUT_CTRL_SHIFT;
 		int_latch_mask = BMI160_INT1_LATCH_MASK;
 		int_map_mask = BMI160_INT1_MAP_DRDY_EN;
+		pin_name = "INT1";
 		break;
 	case BMI160_PIN_INT2:
 		int_out_ctrl_shift = BMI160_INT2_OUT_CTRL_SHIFT;
 		int_latch_mask = BMI160_INT2_LATCH_MASK;
 		int_map_mask = BMI160_INT2_MAP_DRDY_EN;
+		pin_name = "INT2";
 		break;
+	default:
+		return -EINVAL;
 	}
 	int_out_ctrl_mask = BMI160_INT_OUT_CTRL_MASK << int_out_ctrl_shift;
 
···
 	ret = bmi160_write_conf_reg(regmap, BMI160_REG_INT_MAP,
 				    int_map_mask, int_map_mask,
 				    write_usleep);
-	if (ret) {
-		switch (pin) {
-		case BMI160_PIN_INT1:
-			pin_name = "INT1";
-			break;
-		case BMI160_PIN_INT2:
-			pin_name = "INT2";
-			break;
-		}
+	if (ret)
 		dev_err(dev, "Failed to configure %s IRQ pin", pin_name);
-	}
 
 	return ret;
 }
+1 -1
drivers/iio/imu/bno055/bno055.c
···
 #define BNO055_GRAVITY_DATA_X_LSB_REG		0x2E
 #define BNO055_GRAVITY_DATA_Y_LSB_REG		0x30
 #define BNO055_GRAVITY_DATA_Z_LSB_REG		0x32
-#define BNO055_SCAN_CH_COUNT ((BNO055_GRAVITY_DATA_Z_LSB_REG - BNO055_ACC_DATA_X_LSB_REG) / 2)
+#define BNO055_SCAN_CH_COUNT ((BNO055_GRAVITY_DATA_Z_LSB_REG - BNO055_ACC_DATA_X_LSB_REG) / 2 + 1)
 #define BNO055_TEMP_REG				0x34
 #define BNO055_CALIB_STAT_REG			0x35
 #define BNO055_CALIB_STAT_MAGN_SHIFT		0
+14 -1
drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c
···
 	const struct st_lsm6dsx_reg *batch_reg;
 	u8 data;
 
+	/* Only internal sensors have a FIFO ODR configuration register. */
+	if (sensor->id >= ARRAY_SIZE(hw->settings->batch))
+		return 0;
+
 	batch_reg = &hw->settings->batch[sensor->id];
 	if (batch_reg->addr) {
 		int val;
···
 	int i, ret;
 
 	for (i = 0; i < ST_LSM6DSX_ID_MAX; i++) {
+		const struct iio_dev_attr **attrs;
+
 		if (!hw->iio_devs[i])
 			continue;
 
+		/*
+		 * For the accelerometer, allow setting FIFO sampling frequency
+		 * values different from the sensor sampling frequency, which
+		 * may be needed to keep FIFO data rate low while sampling
+		 * acceleration data at high rates for accurate event detection.
+		 */
+		attrs = i == ST_LSM6DSX_ID_ACC ? st_lsm6dsx_buffer_attrs : NULL;
 		ret = devm_iio_kfifo_buffer_setup_ext(hw->dev, hw->iio_devs[i],
 						      &st_lsm6dsx_buffer_ops,
-						      st_lsm6dsx_buffer_attrs);
+						      attrs);
 		if (ret)
 			return ret;
 	}
+12 -6
drivers/iio/light/vcnl4035.c
···
 	struct iio_dev *indio_dev = pf->indio_dev;
 	struct vcnl4035_data *data = iio_priv(indio_dev);
 	/* Ensure naturally aligned timestamp */
-	u8 buffer[ALIGN(sizeof(u16), sizeof(s64)) + sizeof(s64)] __aligned(8) = { };
+	struct {
+		u16 als_data;
+		aligned_s64 timestamp;
+	} buffer = { };
+	unsigned int val;
 	int ret;
 
-	ret = regmap_read(data->regmap, VCNL4035_ALS_DATA, (int *)buffer);
+	ret = regmap_read(data->regmap, VCNL4035_ALS_DATA, &val);
 	if (ret < 0) {
 		dev_err(&data->client->dev,
 			"Trigger consumer can't read from sensor.\n");
 		goto fail_read;
 	}
-	iio_push_to_buffers_with_timestamp(indio_dev, buffer,
-					   iio_get_time_ns(indio_dev));
+
+	buffer.als_data = val;
+	iio_push_to_buffers_with_timestamp(indio_dev, &buffer,
+					   iio_get_time_ns(indio_dev));
 
 fail_read:
 	iio_trigger_notify_done(indio_dev->trig);
···
 			.sign = 'u',
 			.realbits = 16,
 			.storagebits = 16,
-			.endianness = IIO_LE,
+			.endianness = IIO_CPU,
 		},
 	},
 	{
···
 			.sign = 'u',
 			.realbits = 16,
 			.storagebits = 16,
-			.endianness = IIO_LE,
+			.endianness = IIO_CPU,
 		},
 	},
 };
+1 -3
drivers/iio/light/veml6070.c
···
 	if (ret < 0)
 		return ret;
 
-	ret = (msb << 8) | lsb;
-
-	return 0;
+	return (msb << 8) | lsb;
 }
 
 static const struct iio_chan_spec veml6070_channels[] = {
+20 -4
drivers/iio/orientation/hid-sensor-rotation.c
···
 	struct hid_sensor_common common_attributes;
 	struct hid_sensor_hub_attribute_info quaternion;
 	struct {
-		s32 sampled_vals[4];
-		aligned_s64 timestamp;
+		IIO_DECLARE_QUATERNION(s32, sampled_vals);
+		/*
+		 * ABI regression avoidance: There are two copies of the same
+		 * timestamp in case of userspace depending on broken alignment
+		 * from older kernels.
+		 */
+		aligned_s64 timestamp[2];
 	} scan;
 	int scale_pre_decml;
 	int scale_post_decml;
···
 	if (!rot_state->timestamp)
 		rot_state->timestamp = iio_get_time_ns(indio_dev);
 
-	iio_push_to_buffers_with_timestamp(indio_dev, &rot_state->scan,
-					   rot_state->timestamp);
+	/*
+	 * ABI regression avoidance: IIO previously had an incorrect
+	 * implementation of iio_push_to_buffers_with_timestamp() that
+	 * put the timestamp in the last 8 bytes of the buffer, which
+	 * was incorrect according to the IIO ABI. To avoid breaking
+	 * userspace that may be depending on this broken behavior, we
+	 * put the timestamp in both the correct place [0] and the old
+	 * incorrect place [1].
+	 */
+	rot_state->scan.timestamp[0] = rot_state->timestamp;
+	rot_state->scan.timestamp[1] = rot_state->timestamp;
+
+	iio_push_to_buffers(indio_dev, &rot_state->scan);
 
 	rot_state->timestamp = 0;
 }
+1 -1
drivers/iio/pressure/abp2030pa.c
···
 	data->p_offset = div_s64(odelta * data->pmin, pdelta) - data->outmin;
 
 	if (data->irq > 0) {
-		ret = devm_request_irq(dev, irq, abp2_eoc_handler, IRQF_ONESHOT,
+		ret = devm_request_irq(dev, irq, abp2_eoc_handler, 0,
 				       dev_name(dev), data);
 		if (ret)
 			return ret;
+4 -3
drivers/iio/proximity/rfd77402.c
···
 	struct i2c_client *client = data->client;
 	int val, ret;
 
-	if (data->irq_en) {
-		reinit_completion(&data->completion);
+	if (data->irq_en)
 		return rfd77402_wait_for_irq(data);
-	}
 
 	/*
 	 * As per RFD77402 datasheet section '3.1.1 Single Measure', the
···
 				  RFD77402_STATUS_MCPU_ON);
 	if (ret < 0)
 		return ret;
+
+	if (data->irq_en)
+		reinit_completion(&data->completion);
 
 	ret = i2c_smbus_write_byte_data(client, RFD77402_CMD_R,
 					RFD77402_CMD_SINGLE |
+2 -2
drivers/interconnect/qcom/sm8450.c
···
 	.channels = 1,
 	.buswidth = 4,
 	.num_links = 1,
-	.link_nodes = { MASTER_CDSP_NOC_CFG },
+	.link_nodes = { &qhm_nsp_noc_config },
 };
 
 static struct qcom_icc_node qhs_cpr_cx = {
···
 	.channels = 1,
 	.buswidth = 4,
 	.num_links = 1,
-	.link_nodes = { MASTER_CNOC_LPASS_AG_NOC },
+	.link_nodes = { &qhm_config_noc },
 };
 
 static struct qcom_icc_node qhs_mss_cfg = {
+4 -1
drivers/misc/fastrpc.c
···
 	}
 err_map:
 	fastrpc_buf_free(fl->cctx->remote_heap);
+	fl->cctx->remote_heap = NULL;
 err_name:
 	kfree(name);
 err:
···
 	if (!err) {
 		src_perms = BIT(QCOM_SCM_VMID_HLOS);
 
-		qcom_scm_assign_mem(res.start, resource_size(&res), &src_perms,
-				    data->vmperms, data->vmcount);
+		err = qcom_scm_assign_mem(res.start, resource_size(&res), &src_perms,
+					  data->vmperms, data->vmcount);
+		if (err)
+			goto err_free_data;
 	}
 }
+4 -2
drivers/misc/lis3lv02d/lis3lv02d.c
···
 	else
 		thread_fn = NULL;
 
+	if (thread_fn)
+		irq_flags |= IRQF_ONESHOT;
+
 	err = request_threaded_irq(lis3->irq, lis302dl_interrupt,
 				   thread_fn,
-				   IRQF_TRIGGER_RISING | IRQF_ONESHOT |
-				   irq_flags,
+				   irq_flags | IRQF_TRIGGER_RISING,
 				   DRIVER_NAME, lis3);
 
 	if (err < 0) {
+1
drivers/misc/mei/Kconfig
···
 config INTEL_MEI
 	tristate "Intel Management Engine Interface"
 	depends on PCI
+	depends on X86 || DRM_XE!=n || COMPILE_TEST
 	default X86_64 || MATOM
 	help
 	  The Intel Management Engine (Intel ME) provides Manageability,
+4 -10
drivers/misc/mei/hw-me.c
···
 	/* check if we need to start the dev */
 	if (!mei_host_is_ready(dev)) {
 		if (mei_hw_is_ready(dev)) {
-			/* synchronized by dev mutex */
-			if (waitqueue_active(&dev->wait_hw_ready)) {
-				dev_dbg(&dev->dev, "we need to start the dev.\n");
-				dev->recvd_hw_ready = true;
-				wake_up(&dev->wait_hw_ready);
-			} else if (dev->dev_state != MEI_DEV_UNINITIALIZED &&
-				   dev->dev_state != MEI_DEV_POWERING_DOWN &&
-				   dev->dev_state != MEI_DEV_POWER_DOWN) {
+			if (dev->dev_state == MEI_DEV_ENABLED) {
 				dev_dbg(&dev->dev, "Force link reset.\n");
 				schedule_work(&dev->reset_work);
 			} else {
-				dev_dbg(&dev->dev, "Ignore this interrupt in state = %d\n",
-					dev->dev_state);
+				dev_dbg(&dev->dev, "we need to start the dev.\n");
+				dev->recvd_hw_ready = true;
+				wake_up(&dev->wait_hw_ready);
 			}
 		} else {
 			dev_dbg(&dev->dev, "Spurious Interrupt\n");
+1
drivers/nvmem/imx-ocotp-ele.c
···
 static void imx_ocotp_fixup_dt_cell_info(struct nvmem_device *nvmem,
 					 struct nvmem_cell_info *cell)
 {
+	cell->raw_len = round_up(cell->bytes, 4);
 	cell->read_post_process = imx_ocotp_cell_pp;
 }
 
+1
drivers/nvmem/imx-ocotp.c
···
 static void imx_ocotp_fixup_dt_cell_info(struct nvmem_device *nvmem,
 					 struct nvmem_cell_info *cell)
 {
+	cell->raw_len = round_up(cell->bytes, 4);
 	cell->read_post_process = imx_ocotp_cell_pp;
 }
 
+4 -4
drivers/nvmem/zynqmp_nvmem.c
···
 	dma_addr_t dma_buf;
 	size_t words = bytes / WORD_INBYTES;
 	int ret;
-	int value;
+	unsigned int value;
 	char *data;
 
 	if (bytes % WORD_INBYTES != 0) {
···
 	}
 
 	if (pufflag == 1 && flag == EFUSE_WRITE) {
-		memcpy(&value, val, bytes);
+		memcpy(&value, val, sizeof(value));
 		if ((offset == EFUSE_PUF_START_OFFSET ||
 		     offset == EFUSE_PUF_MID_OFFSET) &&
 		    value & P_USER_0_64_UPPER_MASK) {
···
 	if (!efuse)
 		return -ENOMEM;
 
-	data = dma_alloc_coherent(dev, sizeof(bytes),
+	data = dma_alloc_coherent(dev, bytes,
 				  &dma_buf, GFP_KERNEL);
 	if (!data) {
 		ret = -ENOMEM;
···
 	if (flag == EFUSE_READ)
 		memcpy(val, data, bytes);
 efuse_access_err:
-	dma_free_coherent(dev, sizeof(bytes),
+	dma_free_coherent(dev, bytes,
 			  data, dma_buf);
 efuse_data_fail:
 	dma_free_coherent(dev, sizeof(struct xilinx_efuse),
+12
include/linux/iio/iio.h
···
 #define IIO_DECLARE_DMA_BUFFER_WITH_TS(type, name, count) \
 	__IIO_DECLARE_BUFFER_WITH_TS(type, name, count) __aligned(IIO_DMA_MINALIGN)
 
+/**
+ * IIO_DECLARE_QUATERNION() - Declare a quaternion element
+ * @type: element type of the individual vectors
+ * @name: identifier name
+ *
+ * Quaternions are a vector composed of 4 elements (W, X, Y, Z). Use this macro
+ * to declare a quaternion element in a struct to ensure proper alignment in
+ * an IIO buffer.
+ */
+#define IIO_DECLARE_QUATERNION(type, name) \
+	type name[4] __aligned(sizeof(type) * 4)
+
 struct iio_dev *iio_device_alloc(struct device *parent, int sizeof_priv);
 
 /* The information at the returned address is guaranteed to be cacheline aligned */
+2 -2
include/linux/lis3lv02d.h
···
  * @default_rate:      Default sampling rate. 0 means reset default
  * @setup_resources:   Interrupt line setup call back function
  * @release_resources: Interrupt line release call back function
- * @st_min_limits[3]:  Selftest acceptance minimum values
- * @st_max_limits[3]:  Selftest acceptance maximum values
+ * @st_min_limits:     Selftest acceptance minimum values (x, y, z)
+ * @st_max_limits:     Selftest acceptance maximum values (x, y, z)
  * @irq2:              Irq line 2 number
  *
  * Platform data is used to setup the sensor chip. Meaning of the different