Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git


aio: fix the "too late munmap()" race

Current code has put_ioctx() called asynchronously from aio_fput_routine();
that's done *after* we have killed the request that used to pin ioctx,
so there's nothing to stop io_destroy() waiting in wait_for_all_aios()
from progressing. As a result, we can end up with an async call of
put_ioctx() being the last one and possibly happening during exit_mmap()
or elf_core_dump(), neither of which expects stray munmap() being done
to them...

We do need to prevent _freeing_ ioctx until aio_fput_routine() is done
with that, but that's all we care about - neither io_destroy() nor
exit_aio() will progress past wait_for_all_aios() until aio_fput_routine()
does really_put_req(), so the ioctx teardown won't be done until then
and we don't care about the contents of ioctx past that point.

Since actual freeing of these suckers is RCU-delayed, we don't need to
bump ioctx refcount when request goes into list for async removal.
All we need is rcu_read_lock held just over the ->ctx_lock-protected
area in aio_fput_routine().

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
Acked-by: Benjamin LaHaise <bcrl@kvack.org>
Cc: stable@vger.kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Authored by Al Viro, committed by Linus Torvalds
c7b28555 86b62a2c

+6 -8
fs/aio.c

@@ -228,12 +228,6 @@
 	call_rcu(&ctx->rcu_head, ctx_rcu_free);
 }
 
-static inline void get_ioctx(struct kioctx *kioctx)
-{
-	BUG_ON(atomic_read(&kioctx->users) <= 0);
-	atomic_inc(&kioctx->users);
-}
-
 static inline int try_get_ioctx(struct kioctx *kioctx)
 {
 	return atomic_inc_not_zero(&kioctx->users);
@@ -603,11 +609,16 @@
 		fput(req->ki_filp);
 
 		/* Link the iocb into the context's free list */
+		rcu_read_lock();
 		spin_lock_irq(&ctx->ctx_lock);
 		really_put_req(ctx, req);
+		/*
+		 * at that point ctx might've been killed, but actual
+		 * freeing is RCU'd
+		 */
 		spin_unlock_irq(&ctx->ctx_lock);
+		rcu_read_unlock();
 
-		put_ioctx(ctx);
 		spin_lock_irq(&fput_lock);
 	}
 	spin_unlock_irq(&fput_lock);
@@ -643,7 +644,6 @@
 	 * this function will be executed w/out any aio kthread wakeup.
 	 */
 	if (unlikely(!fput_atomic(req->ki_filp))) {
-		get_ioctx(ctx);
 		spin_lock(&fput_lock);
 		list_add(&req->ki_list, &fput_head);
 		spin_unlock(&fput_lock);