Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

fix mapping_writably_mapped()

Lee Schermerhorn noticed yesterday that I broke the mapping_writably_mapped
test in 2.6.7! Bad bad bug, good good find.

The i_mmap_writable count must be incremented for VM_SHARED (just as
i_writecount is for VM_DENYWRITE, but while holding the i_mmap_lock)
when dup_mmap() copies the vma for fork: it has its own more optimal
version of __vma_link_file(), and I missed this out. So the count
was later going down to 0 (dangerous) when one end unmapped, then
wrapping negative (inefficient) when the other end unmapped.

The only impact on x86 would have been that setting a mandatory lock on
a file which has at some time been opened O_RDWR and mapped MAP_SHARED
(but not necessarily PROT_WRITE) across a fork, might fail with -EAGAIN
when it should succeed, or succeed when it should fail.

But those architectures which rely on flush_dcache_page() to flush
userspace modifications back into the page before the kernel reads it,
may in some cases have skipped the flush after such a fork - though any
repetitive test will soon wrap the count negative, in which case it will
flush_dcache_page() unnecessarily.

The fix itself would be a two-liner, but a mapping variable is added and a comment moved along the way.

Reported-by: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Authored by Hugh Dickins, committed by Linus Torvalds
b88ed205 f4fd2c5b

+9 -6
kernel/fork.c

@@ -315,17 +315,20 @@
 		file = tmp->vm_file;
 		if (file) {
 			struct inode *inode = file->f_path.dentry->d_inode;
+			struct address_space *mapping = file->f_mapping;
+
 			get_file(file);
 			if (tmp->vm_flags & VM_DENYWRITE)
 				atomic_dec(&inode->i_writecount);
-
-			/* insert tmp into the share list, just after mpnt */
-			spin_lock(&file->f_mapping->i_mmap_lock);
+			spin_lock(&mapping->i_mmap_lock);
+			if (tmp->vm_flags & VM_SHARED)
+				mapping->i_mmap_writable++;
 			tmp->vm_truncate_count = mpnt->vm_truncate_count;
-			flush_dcache_mmap_lock(file->f_mapping);
+			flush_dcache_mmap_lock(mapping);
+			/* insert tmp into the share list, just after mpnt */
 			vma_prio_tree_add(tmp, mpnt);
-			flush_dcache_mmap_unlock(file->f_mapping);
-			spin_unlock(&file->f_mapping->i_mmap_lock);
+			flush_dcache_mmap_unlock(mapping);
+			spin_unlock(&mapping->i_mmap_lock);
 		}
 
 		/*