Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'akpm' (patches from Andrew)

Merge misc fixes from Andrew Morton:
"19 patches.

Subsystems affected by this patch series: MAINTAINERS, ipc, fork,
checkpatch, lib, and mm (memcg, slub, pagemap, madvise, migration,
hugetlb)"

* emailed patches from Andrew Morton <akpm@linux-foundation.org>:
include/linux/log2.h: add missing () around n in roundup_pow_of_two()
mm/khugepaged.c: fix khugepaged's request size in collapse_file
mm/hugetlb: fix a race between hugetlb sysctl handlers
mm/hugetlb: try preferred node first when alloc gigantic page from cma
mm/migrate: preserve soft dirty in remove_migration_pte()
mm/migrate: remove unnecessary is_zone_device_page() check
mm/rmap: fixup copying of soft dirty and uffd ptes
mm/migrate: fixup setting UFFD_WP flag
mm: madvise: fix vma use-after-free
checkpatch: fix the usage of capture group ( ... )
fork: adjust sysctl_max_threads definition to match prototype
ipc: adjust proc_ipc_sem_dointvec definition to match prototype
mm: track page table modifications in __apply_to_page_range()
MAINTAINERS: IA64: mark Status as Odd Fixes only
MAINTAINERS: add LLVM maintainers
MAINTAINERS: update Cavium/Marvell entries
mm: slub: fix conversion of freelist_corrupted()
mm: memcg: fix memcg reclaim soft lockup
memcg: fix use-after-free in uncharge_batch

+129 -67
+16 -16
MAINTAINERS
···
 
 ARM/CAVIUM THUNDER NETWORK DRIVER
 M: Sunil Goutham <sgoutham@marvell.com>
-M: Robert Richter <rrichter@marvell.com>
 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S: Supported
 F: drivers/net/ethernet/cavium/thunder/
···
 F: drivers/net/wireless/ath/carl9170/
 
 CAVIUM I2C DRIVER
-M: Robert Richter <rrichter@marvell.com>
-S: Supported
+M: Robert Richter <rric@kernel.org>
+S: Odd Fixes
 W: http://www.marvell.com
 F: drivers/i2c/busses/i2c-octeon*
 F: drivers/i2c/busses/i2c-thunderx*
···
 F: drivers/net/ethernet/cavium/liquidio/
 
 CAVIUM MMC DRIVER
-M: Robert Richter <rrichter@marvell.com>
-S: Supported
+M: Robert Richter <rric@kernel.org>
+S: Odd Fixes
 W: http://www.marvell.com
 F: drivers/mmc/host/cavium*
···
 F: drivers/crypto/cavium/cpt/
 
 CAVIUM THUNDERX2 ARM64 SOC
-M: Robert Richter <rrichter@marvell.com>
+M: Robert Richter <rric@kernel.org>
 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
-S: Maintained
+S: Odd Fixes
 F: Documentation/devicetree/bindings/arm/cavium-thunder2.txt
 F: arch/arm64/boot/dts/cavium/thunder2-99xx*
···
 F: .clang-format
 
 CLANG/LLVM BUILD SUPPORT
+M: Nathan Chancellor <natechancellor@gmail.com>
+M: Nick Desaulniers <ndesaulniers@google.com>
 L: clang-built-linux@googlegroups.com
 S: Supported
 W: https://clangbuiltlinux.github.io/
···
 
 EDAC-CAVIUM OCTEON
 M: Ralf Baechle <ralf@linux-mips.org>
-M: Robert Richter <rrichter@marvell.com>
 L: linux-edac@vger.kernel.org
 L: linux-mips@vger.kernel.org
 S: Supported
 F: drivers/edac/octeon_edac*
 
 EDAC-CAVIUM THUNDERX
-M: Robert Richter <rrichter@marvell.com>
+M: Robert Richter <rric@kernel.org>
 L: linux-edac@vger.kernel.org
-S: Supported
+S: Odd Fixes
 F: drivers/edac/thunderx_edac*
 
 EDAC-CORE
···
 M: Mauro Carvalho Chehab <mchehab@kernel.org>
 M: Tony Luck <tony.luck@intel.com>
 R: James Morse <james.morse@arm.com>
-R: Robert Richter <rrichter@marvell.com>
+R: Robert Richter <rric@kernel.org>
 L: linux-edac@vger.kernel.org
 S: Supported
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/ras/ras.git edac-for-next
···
 M: Tony Luck <tony.luck@intel.com>
 M: Fenghua Yu <fenghua.yu@intel.com>
 L: linux-ia64@vger.kernel.org
-S: Maintained
+S: Odd Fixes
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux.git
 F: Documentation/ia64/
 F: arch/ia64/
···
 F: drivers/pci/controller/dwc/*artpec*
 
 PCIE DRIVER FOR CAVIUM THUNDERX
-M: Robert Richter <rrichter@marvell.com>
+M: Robert Richter <rric@kernel.org>
 L: linux-pci@vger.kernel.org
 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
-S: Supported
+S: Odd Fixes
 F: drivers/pci/controller/pci-thunder-*
 
 PCIE DRIVER FOR HISILICON
···
 F: drivers/net/thunderbolt.c
 
 THUNDERX GPIO DRIVER
-M: Robert Richter <rrichter@marvell.com>
-S: Maintained
+M: Robert Richter <rric@kernel.org>
+S: Odd Fixes
 F: drivers/gpio/gpio-thunderx.c
 
 TI AM437X VPFE DRIVER
+1 -1
include/linux/log2.h
···
 #define roundup_pow_of_two(n)			\
 (						\
 	__builtin_constant_p(n) ? (		\
-		(n == 1) ? 1 :			\
+		((n) == 1) ? 1 :		\
 		(1UL << (ilog2((n) - 1) + 1))	\
 	) :					\
 	__roundup_pow_of_two(n)			\
+1 -1
ipc/ipc_sysctl.c
···
 }
 
 static int proc_ipc_sem_dointvec(struct ctl_table *table, int write,
-	void __user *buffer, size_t *lenp, loff_t *ppos)
+	void *buffer, size_t *lenp, loff_t *ppos)
 {
 	int ret, semmni;
 	struct ipc_namespace *ns = current->nsproxy->ipc_ns;
+1 -1
kernel/fork.c
···
 }
 
 int sysctl_max_threads(struct ctl_table *table, int write,
-		       void __user *buffer, size_t *lenp, loff_t *ppos)
+		       void *buffer, size_t *lenp, loff_t *ppos)
 {
 	struct ctl_table t;
 	int ret;
+37 -12
mm/hugetlb.c
···
 		int nid, nodemask_t *nodemask)
 {
 	unsigned long nr_pages = 1UL << huge_page_order(h);
+	if (nid == NUMA_NO_NODE)
+		nid = numa_mem_id();
 
 #ifdef CONFIG_CMA
 	{
 		struct page *page;
 		int node;
 
-		for_each_node_mask(node, *nodemask) {
-			if (!hugetlb_cma[node])
-				continue;
-
-			page = cma_alloc(hugetlb_cma[node], nr_pages,
-					 huge_page_order(h), true);
+		if (hugetlb_cma[nid]) {
+			page = cma_alloc(hugetlb_cma[nid], nr_pages,
+					huge_page_order(h), true);
 			if (page)
 				return page;
+		}
+
+		if (!(gfp_mask & __GFP_THISNODE)) {
+			for_each_node_mask(node, *nodemask) {
+				if (node == nid || !hugetlb_cma[node])
+					continue;
+
+				page = cma_alloc(hugetlb_cma[node], nr_pages,
+						huge_page_order(h), true);
+				if (page)
+					return page;
+			}
 		}
 	}
 #endif
···
 }
 
 #ifdef CONFIG_SYSCTL
+static int proc_hugetlb_doulongvec_minmax(struct ctl_table *table, int write,
+					  void *buffer, size_t *length,
+					  loff_t *ppos, unsigned long *out)
+{
+	struct ctl_table dup_table;
+
+	/*
+	 * In order to avoid races with __do_proc_doulongvec_minmax(), we
+	 * can duplicate the @table and alter the duplicate of it.
+	 */
+	dup_table = *table;
+	dup_table.data = out;
+
+	return proc_doulongvec_minmax(&dup_table, write, buffer, length, ppos);
+}
+
 static int hugetlb_sysctl_handler_common(bool obey_mempolicy,
 					 struct ctl_table *table, int write,
 					 void *buffer, size_t *length, loff_t *ppos)
···
 	if (!hugepages_supported())
 		return -EOPNOTSUPP;
 
-	table->data = &tmp;
-	table->maxlen = sizeof(unsigned long);
-	ret = proc_doulongvec_minmax(table, write, buffer, length, ppos);
+	ret = proc_hugetlb_doulongvec_minmax(table, write, buffer, length, ppos,
+					     &tmp);
 	if (ret)
 		goto out;
···
 	if (write && hstate_is_gigantic(h))
 		return -EINVAL;
 
-	table->data = &tmp;
-	table->maxlen = sizeof(unsigned long);
-	ret = proc_doulongvec_minmax(table, write, buffer, length, ppos);
+	ret = proc_hugetlb_doulongvec_minmax(table, write, buffer, length, ppos,
+					     &tmp);
 	if (ret)
 		goto out;
+1 -1
mm/khugepaged.c
···
 		xas_unlock_irq(&xas);
 		page_cache_sync_readahead(mapping, &file->f_ra,
 					  file, index,
-					  PAGE_SIZE);
+					  end - index);
 		/* drain pagevecs to help isolate_lru_page() */
 		lru_add_drain();
 		page = find_lock_page(mapping, index);
+1 -1
mm/madvise.c
···
 	 */
 	*prev = NULL;	/* tell sys_madvise we drop mmap_lock */
 	get_file(file);
-	mmap_read_unlock(current->mm);
 	offset = (loff_t)(start - vma->vm_start)
 			+ ((loff_t)vma->vm_pgoff << PAGE_SHIFT);
+	mmap_read_unlock(current->mm);
 	vfs_fadvise(file, offset, end - start, POSIX_FADV_WILLNEED);
 	fput(file);
 	mmap_read_lock(current->mm);
+6
mm/memcontrol.c
···
 	__this_cpu_add(ug->memcg->vmstats_percpu->nr_page_events, ug->nr_pages);
 	memcg_check_events(ug->memcg, ug->dummy_page);
 	local_irq_restore(flags);
+
+	/* drop reference from uncharge_page */
+	css_put(&ug->memcg->css);
 }
 
 static void uncharge_page(struct page *page, struct uncharge_gather *ug)
···
 			uncharge_gather_clear(ug);
 		}
 		ug->memcg = page->mem_cgroup;
+
+		/* pairs with css_put in uncharge_batch */
+		css_get(&ug->memcg->css);
 	}
 
 	nr_pages = compound_nr(page);
+24 -13
mm/memory.c
···
 #include <linux/numa.h>
 #include <linux/perf_event.h>
 #include <linux/ptrace.h>
+#include <linux/vmalloc.h>
 
 #include <trace/events/kmem.h>
 
···
 #include <asm/tlb.h>
 #include <asm/tlbflush.h>
 
+#include "pgalloc-track.h"
 #include "internal.h"
 
 #if defined(LAST_CPUPID_NOT_IN_PAGE_FLAGS) && !defined(CONFIG_COMPILE_TEST)
···
 
 static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
 			      unsigned long addr, unsigned long end,
-			      pte_fn_t fn, void *data, bool create)
+			      pte_fn_t fn, void *data, bool create,
+			      pgtbl_mod_mask *mask)
 {
 	pte_t *pte;
 	int err = 0;
···
 
 	if (create) {
 		pte = (mm == &init_mm) ?
-			pte_alloc_kernel(pmd, addr) :
+			pte_alloc_kernel_track(pmd, addr, mask) :
 			pte_alloc_map_lock(mm, pmd, addr, &ptl);
 		if (!pte)
 			return -ENOMEM;
···
 				break;
 		}
 	} while (addr += PAGE_SIZE, addr != end);
+	*mask |= PGTBL_PTE_MODIFIED;
 
 	arch_leave_lazy_mmu_mode();
 
···
 
 static int apply_to_pmd_range(struct mm_struct *mm, pud_t *pud,
 			      unsigned long addr, unsigned long end,
-			      pte_fn_t fn, void *data, bool create)
+			      pte_fn_t fn, void *data, bool create,
+			      pgtbl_mod_mask *mask)
 {
 	pmd_t *pmd;
 	unsigned long next;
···
 	BUG_ON(pud_huge(*pud));
 
 	if (create) {
-		pmd = pmd_alloc(mm, pud, addr);
+		pmd = pmd_alloc_track(mm, pud, addr, mask);
 		if (!pmd)
 			return -ENOMEM;
 	} else {
···
 		next = pmd_addr_end(addr, end);
 		if (create || !pmd_none_or_clear_bad(pmd)) {
 			err = apply_to_pte_range(mm, pmd, addr, next, fn, data,
-						 create);
+						 create, mask);
 			if (err)
 				break;
 		}
···
 
 static int apply_to_pud_range(struct mm_struct *mm, p4d_t *p4d,
 			      unsigned long addr, unsigned long end,
-			      pte_fn_t fn, void *data, bool create)
+			      pte_fn_t fn, void *data, bool create,
+			      pgtbl_mod_mask *mask)
 {
 	pud_t *pud;
 	unsigned long next;
 	int err = 0;
 
 	if (create) {
-		pud = pud_alloc(mm, p4d, addr);
+		pud = pud_alloc_track(mm, p4d, addr, mask);
 		if (!pud)
 			return -ENOMEM;
 	} else {
···
 		next = pud_addr_end(addr, end);
 		if (create || !pud_none_or_clear_bad(pud)) {
 			err = apply_to_pmd_range(mm, pud, addr, next, fn, data,
-						 create);
+						 create, mask);
 			if (err)
 				break;
 		}
···
 
 static int apply_to_p4d_range(struct mm_struct *mm, pgd_t *pgd,
 			      unsigned long addr, unsigned long end,
-			      pte_fn_t fn, void *data, bool create)
+			      pte_fn_t fn, void *data, bool create,
+			      pgtbl_mod_mask *mask)
 {
 	p4d_t *p4d;
 	unsigned long next;
 	int err = 0;
 
 	if (create) {
-		p4d = p4d_alloc(mm, pgd, addr);
+		p4d = p4d_alloc_track(mm, pgd, addr, mask);
 		if (!p4d)
 			return -ENOMEM;
 	} else {
···
 		next = p4d_addr_end(addr, end);
 		if (create || !p4d_none_or_clear_bad(p4d)) {
 			err = apply_to_pud_range(mm, p4d, addr, next, fn, data,
-						 create);
+						 create, mask);
 			if (err)
 				break;
 		}
···
 			 void *data, bool create)
 {
 	pgd_t *pgd;
-	unsigned long next;
+	unsigned long start = addr, next;
 	unsigned long end = addr + size;
+	pgtbl_mod_mask mask = 0;
 	int err = 0;
 
 	if (WARN_ON(addr >= end))
···
 		next = pgd_addr_end(addr, end);
 		if (!create && pgd_none_or_clear_bad(pgd))
 			continue;
-		err = apply_to_p4d_range(mm, pgd, addr, next, fn, data, create);
+		err = apply_to_p4d_range(mm, pgd, addr, next, fn, data, create, &mask);
 		if (err)
 			break;
 	} while (pgd++, addr = next, addr != end);
+
+	if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
+		arch_sync_kernel_mappings(start, start + size);
 
 	return err;
 }
+18 -11
mm/migrate.c
···
 	else if (pte_swp_uffd_wp(*pvmw.pte))
 		pte = pte_mkuffd_wp(pte);
 
-	if (unlikely(is_zone_device_page(new))) {
-		if (is_device_private_page(new)) {
-			entry = make_device_private_entry(new, pte_write(pte));
-			pte = swp_entry_to_pte(entry);
-			if (pte_swp_uffd_wp(*pvmw.pte))
-				pte = pte_mkuffd_wp(pte);
-		}
+	if (unlikely(is_device_private_page(new))) {
+		entry = make_device_private_entry(new, pte_write(pte));
+		pte = swp_entry_to_pte(entry);
+		if (pte_swp_soft_dirty(*pvmw.pte))
+			pte = pte_swp_mksoft_dirty(pte);
+		if (pte_swp_uffd_wp(*pvmw.pte))
+			pte = pte_swp_mkuffd_wp(pte);
 	}
 
 #ifdef CONFIG_HUGETLB_PAGE
···
 			entry = make_migration_entry(page, mpfn &
 						     MIGRATE_PFN_WRITE);
 			swp_pte = swp_entry_to_pte(entry);
-			if (pte_soft_dirty(pte))
-				swp_pte = pte_swp_mksoft_dirty(swp_pte);
-			if (pte_uffd_wp(pte))
-				swp_pte = pte_swp_mkuffd_wp(swp_pte);
+			if (pte_present(pte)) {
+				if (pte_soft_dirty(pte))
+					swp_pte = pte_swp_mksoft_dirty(swp_pte);
+				if (pte_uffd_wp(pte))
+					swp_pte = pte_swp_mkuffd_wp(swp_pte);
+			} else {
+				if (pte_swp_soft_dirty(pte))
+					swp_pte = pte_swp_mksoft_dirty(swp_pte);
+				if (pte_swp_uffd_wp(pte))
+					swp_pte = pte_swp_mkuffd_wp(swp_pte);
+			}
 			set_pte_at(mm, addr, ptep, swp_pte);
 
 			/*
+7 -2
mm/rmap.c
···
 			 */
 			entry = make_migration_entry(page, 0);
 			swp_pte = swp_entry_to_pte(entry);
-			if (pte_soft_dirty(pteval))
+
+			/*
+			 * pteval maps a zone device page and is therefore
+			 * a swap pte.
+			 */
+			if (pte_swp_soft_dirty(pteval))
 				swp_pte = pte_swp_mksoft_dirty(swp_pte);
-			if (pte_uffd_wp(pteval))
+			if (pte_swp_uffd_wp(pteval))
 				swp_pte = pte_swp_mkuffd_wp(swp_pte);
 			set_pte_at(mm, pvmw.address, pvmw.pte, swp_pte);
 			/*
+6 -6
mm/slub.c
···
 }
 
 static bool freelist_corrupted(struct kmem_cache *s, struct page *page,
-			       void *freelist, void *nextfree)
+			       void **freelist, void *nextfree)
 {
 	if ((s->flags & SLAB_CONSISTENCY_CHECKS) &&
-	    !check_valid_pointer(s, page, nextfree)) {
-		object_err(s, page, freelist, "Freechain corrupt");
-		freelist = NULL;
+	    !check_valid_pointer(s, page, nextfree) && freelist) {
+		object_err(s, page, *freelist, "Freechain corrupt");
+		*freelist = NULL;
 		slab_fix(s, "Isolate corrupted freechain");
 		return true;
 	}
···
 					int objects) {}
 
 static bool freelist_corrupted(struct kmem_cache *s, struct page *page,
-			       void *freelist, void *nextfree)
+			       void **freelist, void *nextfree)
 {
 	return false;
 }
···
 		 * 'freelist' is already corrupted.  So isolate all objects
 		 * starting at 'freelist'.
 		 */
-		if (freelist_corrupted(s, page, freelist, nextfree))
+		if (freelist_corrupted(s, page, &freelist, nextfree))
 			break;
 
 		do {
+8
mm/vmscan.c
···
 		unsigned long reclaimed;
 		unsigned long scanned;
 
+		/*
+		 * This loop can become CPU-bound when target memcgs
+		 * aren't eligible for reclaim - either because they
+		 * don't have any reclaimable pages, or because their
+		 * memory is explicitly protected.  Avoid soft lockups.
+		 */
+		cond_resched();
+
 		mem_cgroup_calculate_protection(target_memcg, memcg);
 
 		if (mem_cgroup_below_min(memcg)) {
+2 -2
scripts/checkpatch.pl
···
 
 # Check if the commit log has what seems like a diff which can confuse patch
 		if ($in_commit_log && !$commit_log_has_diff &&
-		    (($line =~ m@^\s+diff\b.*a/[\w/]+@ &&
-		      $line =~ m@^\s+diff\b.*a/([\w/]+)\s+b/$1\b@) ||
+		    (($line =~ m@^\s+diff\b.*a/([\w/]+)@ &&
+		      $line =~ m@^\s+diff\b.*a/[\w/]+\s+b/$1\b@) ||
 		     $line =~ m@^\s*(?:\-\-\-\s+a/|\+\+\+\s+b/)@ ||
 		     $line =~ m/^\s*\@\@ \-\d+,\d+ \+\d+,\d+ \@\@/)) {
			ERROR("DIFF_IN_COMMIT_MSG",